| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am trying to write a loop in a PL/pgSQL function in PostgreSQL 9.3 that returns a table. I used `RETURN NEXT;` with no parameters after each query in the loop, following examples I found, like:
* [plpgsql error "RETURN NEXT cannot have a parameter in function with OUT parameters" in table-returning function](https://stackoverflow.com/questions/14039720/)
However, I am still getting an error:
```
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
```
A minimal code example to reproduce the problem is below. Can anyone please help explain how to fix the test code to return a table?
Minimal example:
```
CREATE OR REPLACE FUNCTION test0()
RETURNS TABLE(y integer, result text)
LANGUAGE plpgsql AS
$func$
DECLARE
yr RECORD;
BEGIN
FOR yr IN SELECT * FROM generate_series(1,10,1) AS y_(y)
LOOP
RAISE NOTICE 'Computing %', yr.y;
SELECT yr.y, 'hi';
RETURN NEXT;
END LOOP;
RETURN;
END
$func$;
```
|
The example given may be wholly replaced with `RETURN QUERY`:
```
BEGIN
   RETURN QUERY SELECT y_.y, 'hi'::text FROM generate_series(1,10,1) AS y_(y);
END;
```
which will be a *lot* faster.
In general you should avoid iteration wherever possible, and instead favour set-oriented operations.
Where `return next` over a loop is unavoidable (which is very rare, and mostly confined to when you need exception handling) you must set `OUT` parameter values or table parameters, then `return next` without arguments.
In this case your problem is the line `SELECT yr.y, 'hi';` which does nothing. You're assuming that the implicit destination of a `SELECT` is the out parameters, but that's not the case. You'd have to use the out parameters as loop variables like @peterm did, use assignments or use `SELECT INTO`:
```
FOR yr IN SELECT * FROM generate_series(1,10,1) AS y_(y)
LOOP
RAISE NOTICE 'Computing %', yr.y;
y := yr.y;
result := 'hi';
RETURN NEXT;
END LOOP;
RETURN;
```
|
What [@Craig already explained](https://stackoverflow.com/a/27665012/939860).
**Plus**, if you really need a loop, you can have this simpler / cheaper. You don't need to declare an additional record variable and assign repeatedly. Assignments are comparatively expensive in plpgsql. Assign to the `OUT` variables declared in `RETURNS TABLE` directly. Those are visible everywhere in the code and the `FOR` loop can also assign to a list of variables. [The manual:](https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING)
> The ***target*** is a record variable, row variable, or comma-separated list of scalar variables.
```
CREATE OR REPLACE FUNCTION test0()
RETURNS TABLE(y integer, result text)
LANGUAGE plpgsql AS
$func$
DECLARE
yr RECORD; -- now unneeded
BEGIN
FOR y, result IN
SELECT g, 'text_'::text || g
FROM generate_series(1,10) g
LOOP
RAISE NOTICE 'Computing %', y;
RETURN NEXT;
END LOOP;
END
$func$;
```
### Additional points
* Do **not** use the identifier `y` twice (as `OUT` param and column alias) while you can easily avoid it. That's a loaded footgun. If this can't be avoided, table-qualify columns.
* A final `RETURN` without params is good form, but totally optional. When control reaches the final `END`, the complete result is returned automatically.
* `g` in `FROM generate_series(1,10) g` is both *table alias* and *column alias* automatically, unless an explicit column alias is given. It is effectively the same as `FROM generate_series(1,10) g(g)`.
|
How to use RETURN NEXT in a PL/pgSQL function?
|
[
"",
"sql",
"database",
"postgresql",
"plpgsql",
"return-next",
""
] |
I have an array, "templates".
```
puts templates
```
gives me the following output:
```
{"id"=>4, "subject"=>"invoice", "body"=>"dear sirs", "description"=>"banking", "groups"=>"123", 0=>4, 1=>"invoice", 2=>"dear sirs", 3=>"banking", 4=>"123"}
```
I would like to "put" a certain element e.g. "dear sirs". I have tried:
```
puts templates[2]
```
but this just returns nil. What is the correct way to do this?
|
You access `"dear sirs"` using the key that's associated with it, `"body"`:
```
puts templates["body"]
```
|
If you want to get a hash value by numeric index then you can do `templates.values[index]`, e.g.:
```
templates.values[0] # => 4
templates.values[1] # => "invoice"
templates.values[2] # => "dear sirs"
```
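Since the printed output is a Hash (one that apparently mixes string keys with positional integer keys, as some DB drivers produce), and the `nil` from `templates[2]` suggests `templates` may actually be an Array wrapping that hash, both access styles can be sketched on the bare hash — hypothetical data mirroring the question:

```ruby
# Hypothetical hash mirroring the question's output: string keys plus
# positional integer keys, as some DB drivers return rows.
templates = {
  "id" => 4, "subject" => "invoice", "body" => "dear sirs",
  "description" => "banking", "groups" => "123",
  0 => 4, 1 => "invoice", 2 => "dear sirs", 3 => "banking", 4 => "123"
}

# Access by key:
by_key = templates["body"]        # => "dear sirs"

# Access by position, which works even for hashes without integer keys:
by_position = templates.values[2] # => "dear sirs"
```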
|
"putting" a certain object to the screen from inside an array
|
[
"",
"sql",
"ruby",
"hashtable",
""
] |
I get an error near `WHERE` clause on executing this query.
```
update AssetData set EmployeeName = ISNULL(EmployeeName,'') +@reasEmp
where ([AssetNumber] like'%" + WA_number.Text + "%')
and ID IN (SELECT ID FROM AssetData ORDER BY ID DESC
where ([AssetNumber] like'%" + WA_number.Text + "%') LIMIT 1)
```
Someone please help me to figure out what is wrong with this?
|
```
and ID IN (SELECT ID FROM AssetData ORDER BY ID DESC
where ([AssetNumber] like'%" + WA_number.Text + "%') LIMIT 1)
```
The `where` should come before `ORDER BY`. Although legal, `in ( ... limit 1)` doesn't make sense because `in` should be used with a list. I recommend using `= max(ID)` instead:
```
and ID = (SELECT max(ID) FROM AssetData where [AssetNumber] like'%" + WA_number.Text + "%')
```
You could leave out the first part of your `where` clause, since the ID already matches the same criteria:
```
update AssetData set EmployeeName = ISNULL(EmployeeName,'') +@reasEmp
where ID = (SELECT max(ID) FROM AssetData where [AssetNumber] like'%" + WA_number.Text + "%');
```
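As a runnable sanity check of the `= (SELECT max(ID) …)` pattern, here is a sketch using Python's sqlite3. SQLite spells `ISNULL` as `IFNULL` and concatenates with `||`; the table data, the `' (reassigned)'` suffix standing in for `@reasEmp`, and the bound search pattern are all invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AssetData (ID INTEGER PRIMARY KEY, AssetNumber TEXT, EmployeeName TEXT)")
conn.executemany("INSERT INTO AssetData (AssetNumber, EmployeeName) VALUES (?, ?)",
                 [("WA-100", "Alice"), ("WA-100", None), ("XB-200", "Bob")])

# Update only the newest (max ID) row whose AssetNumber matches the pattern.
# IFNULL is SQLite's spelling of SQL Server's ISNULL; the '?' binding replaces
# the string concatenation of WA_number.Text in the original code.
conn.execute("""
    UPDATE AssetData
    SET EmployeeName = IFNULL(EmployeeName, '') || ' (reassigned)'
    WHERE ID = (SELECT max(ID) FROM AssetData WHERE AssetNumber LIKE ?)
""", ("%WA-100%",))

rows = conn.execute("SELECT ID, EmployeeName FROM AssetData ORDER BY ID").fetchall()
# → only row 2 (the max matching ID) gets the suffix; rows 1 and 3 are untouched
```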
|
Try this. I think you are using SQL Server:
```
update AssetData set EmployeeName = ISNULL(EmployeeName,'') +@reasEmp
where [AssetNumber] like'%" + WA_number.Text + "%'
and ID IN (SELECT TOP 1 ID FROM AssetData
where [AssetNumber] like'%" + WA_number.Text + "%'
ORDER BY ID DESC)
```
|
Nested SQL query having select subquery in update statement
|
[
"",
"sql",
"asp.net",
"database",
""
] |
I have a database with 2 tables: one for storing customer IDs and customer information. The second table is organized as key/value pairs because I need to store arbitrary values.
**Table structure**
*table customers:*
```
=================
id | status
=================
```
*table customers\_info*
```
=======================================
id | id_customer | key | value
=======================================
```
**Content example:**
*table customers:*
```
=================
id | status
1 | 1
2 | 1
==================
```
*table customers\_info*
```
=======================================
id | id_customer | key | value
1 | 1 | name| Doe
2 | 1 | age | 25
3 | 1 | city| NY
4 | 2 | name| Smith
=======================================
```
How can I query the tables to display all customers with their names?
**Example:**
```
=================
1 | Doe
2 | Smith
=================
```
If I simply do an inner join, I only get the first key of the table:
```
SELECT * FROM customers inner join customers_info on customers.id = customers_info.id_customer
```
|
Try this:
```
SELECT ci.id_customer AS CustomerId, ci.value AS CustomerName
FROM customers_info ci
WHERE ci.key = 'name'
ORDER BY ci.id_customer;
```
Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/eef5ce/2)
**OUTPUT**
```
| CUSTOMERID | CUSTOMERNAME |
|------------|--------------|
| 1 | Doe |
| 2 | Smith |
```
**::EDIT::**
To get all fields for customers
```
SELECT ci.id_customer AS CustomerId,
MAX(CASE WHEN ci.key = 'name' THEN ci.value ELSE '' END) AS CustomerName,
MAX(CASE WHEN ci.key = 'age' THEN ci.value ELSE '' END) AS CustomerAge,
MAX(CASE WHEN ci.key = 'city' THEN ci.value ELSE '' END) AS CustomerCity
FROM customers_info ci
GROUP BY ci.id_customer
ORDER BY ci.id_customer;
```
Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/eef5ce/4)
**OUTPUT**
```
| CUSTOMERID | CUSTOMERNAME | CUSTOMERAGE | CUSTOMERCITY |
|------------|--------------|-------------|--------------|
| 1 | Doe | 25 | NY |
| 2 | Smith | | |
```
**::EDIT::**
To Fetch the GENDER from gender table:
```
SELECT a.CustomerId, a.CustomerName, a.CustomerAge, a.CustomerCity, g.name AS CustomerGender
FROM (SELECT ci.id_customer AS CustomerId,
MAX(CASE WHEN ci.key = 'name' THEN ci.value ELSE '' END) AS CustomerName,
MAX(CASE WHEN ci.key = 'age' THEN ci.value ELSE '' END) AS CustomerAge,
MAX(CASE WHEN ci.key = 'city' THEN ci.value ELSE '' END) AS CustomerCity
FROM customers_info ci
GROUP BY ci.id_customer
) AS a
INNER JOIN customers c ON a.CustomerId = c.id
LEFT OUTER JOIN gender g ON g.id = c.genderId
ORDER BY a.CustomerId;
```
|
```
DROP TABLE IF EXISTS eav_hell;
CREATE TABLE eav_hell
(id_customer INT NOT NULL
,attribute VARCHAR(12) NOT NULL
,value VARCHAR(12) NOT NULL
,PRIMARY KEY(id_customer,attribute)
);
INSERT INTO eav_hell VALUES
(1,'name','Doe'),
(1,'age',25),
(1,'city','NY'),
(2,'name','Smith');
SELECT id_customer
, MAX(CASE WHEN attribute = 'name' THEN value END) name
FROM eav_hell
GROUP
BY id_customer;
+-------------+-------+
| id_customer | name |
+-------------+-------+
| 1 | Doe |
| 2 | Smith |
+-------------+-------+
```
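Both answers pivot the same way; as a quick runnable check, the `MAX(CASE …)` trick behaves identically in SQLite via Python's sqlite3, using the question's sample data (`key` is quoted because it is a keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE customers_info (id_customer INT, "key" TEXT, value TEXT)')
conn.executemany("INSERT INTO customers_info VALUES (?, ?, ?)", [
    (1, "name", "Doe"), (1, "age", "25"), (1, "city", "NY"),
    (2, "name", "Smith"),
])

# One output row per customer: each CASE picks out a single attribute's
# value (NULL on other rows), and MAX() collapses the group to it.
rows = conn.execute("""
    SELECT id_customer,
           MAX(CASE WHEN "key" = 'name' THEN value END) AS name,
           MAX(CASE WHEN "key" = 'age'  THEN value END) AS age
    FROM customers_info
    GROUP BY id_customer
    ORDER BY id_customer
""").fetchall()
# → [(1, 'Doe', '25'), (2, 'Smith', None)]
```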
|
MySQL join multiple lines
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"pivot",
""
] |
I am working on a database that has been made in a terrible way. I have to find an ID in a comma'd column.
The column values could be:
* 6154
* 5145,6154,4562
* 161545
My query is:
```
SELECT resource_id,filename,id FROM image WHERE other_vendors = '".$vendor_id."'
```
`$vendor_id` will be 6154
Now how do I match with a LIKE or regular expression in a SQL query so that I get all the rows containing 6154, either on its own or inside a comma-separated string?
|
Just use the built-in function [find\_in\_set](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set), which searches for a string in a comma-separated list:
```
SELECT resource_id,filename,id FROM image WHERE find_in_set('6154',other_vendors)
```
|
I think this is the easiest method (note that MySQL concatenates strings with `CONCAT()`, not `+`):
```
SELECT resource_id,filename,id FROM image
WHERE CONCAT(',', other_vendors, ',') LIKE '%,$vendor_id,%'
```
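The comma-wrapping trick itself is sound; in SQLite (Python's sqlite3, where concatenation is `||`) it can be sketched and checked against the question's three sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE image (id INTEGER PRIMARY KEY, other_vendors TEXT)")
conn.executemany("INSERT INTO image (other_vendors) VALUES (?)",
                 [("6154",), ("5145,6154,4562",), ("161545",)])

# Wrap both the column and the search term in commas so '6154' cannot
# match inside '161545'.
rows = conn.execute("""
    SELECT id FROM image
    WHERE ',' || other_vendors || ',' LIKE '%,' || ? || ',%'
""", ("6154",)).fetchall()
# → [(1,), (2,)] — the row containing only 161545 is excluded
```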
|
SQL LIKE REGEXP Matching Single Value and Comma'd String
|
[
"",
"mysql",
"sql",
"regex",
""
] |
I have a value in my field which has 5 segments, for example `100-200-300-400-500`.
How do I query to retrieve only the first 3 segments? That means the query result should display as `100-200-300`.
|
The old `SUBSTR` and `INSTR` will be faster and less CPU intensive as compared to `REGEXP`.
```
SQL> WITH DATA AS(
2 SELECT '100-200-300-400-500' str FROM dual
3 )
4 SELECT substr(str, 1, instr(str, '-', 1, 3)-1) str
5 FROM DATA
6 /
STR
-----------
100-200-300
SQL>
```
The above `SUBSTR` and `INSTR` query uses the logic to find the `3rd` occurrence of the `hyphen` "-" and then take the substring from `position 1` till the third occurrence of `'-'`.
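For comparison, the same "everything before the 3rd hyphen" logic as a small Python function (note: unlike the SQL, this returns the whole string when there are fewer than `n` separators rather than an empty result):

```python
def first_segments(value: str, n: int = 3, sep: str = "-") -> str:
    # Equivalent of SUBSTR(str, 1, INSTR(str, '-', 1, 3) - 1):
    # keep everything before the n-th separator.
    return sep.join(value.split(sep)[:n])

result = first_segments("100-200-300-400-500")
# → '100-200-300'
```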
|
```
((\d)+-(\d)+-(\d)+)
```
If the position of this sequence is arbitrary, you might go for **REG**ular**EXP**ressions:
```
select regexp_substr(
'Test-Me 100-200-300-400-500 AGain-Home',
'((\d)+-(\d)+-(\d)+)'
) As Result
from dual
RESULT
-----------
100-200-300
```
Otherwise a simple `SUBSTR` will do.
|
Retrieve segment from value
|
[
"",
"sql",
"oracle",
"substr",
""
] |
I am trying to figure out the right code for the below logic. If there is a certain value in the first column, display the value from the second column for that specific record.
Can someone please help? Thanks.
```
CASE WHEN TableA.Column1 = 'a' THEN 'select TableA.Column2 '
WHEN TableA.Column4 = 'b' THEN 'select TableA.Column5'
ELSE TableA.Column6
END AS [Test]
```
|
You were almost there; just remove the `select` from the `CASE` expression. Since all the values come from the same table, there is no need for a `select` inside the `CASE`: just use the column name and it will fetch the corresponding column's value.
```
SELECT CASE
WHEN Column1 = 'a' THEN Column2
WHEN Column4 = 'b' THEN Column5
ELSE Column6
END AS [Test]
FROM tableA
```
|
Try this:
```
SELECT (CASE WHEN A.Column1 = 'a' THEN A.Column2
WHEN A.Column4 = 'b' THEN A.Column5
ELSE A.Column6
END) AS [Test]
FROM TableA A;
```
|
Select statement inside CASE - SQL
|
[
"",
"sql",
"sql-server",
"select",
"sql-server-2012",
"case",
""
] |
I have a query like this:
```
SELECT i.*
FROM items i
WHERE i.id
IN (
SELECT c.item_id
FROM cart c
WHERE c.sessID=MY_SESSION_ID
)
```
It's working beautifully, but I need to sort items from the cart by date of purchase (cart.id) DESC.
I don't want to sort in PHP. How can I sort by cart.id?
I tried:
```
SELECT i.*
FROM items i
WHERE i.id
IN (
SELECT c.item_id
FROM cart c
WHERE c.sessID=MY_SESSION_ID
)
ORDER BY c.id
```
But it did not sort correctly.
|
Try this query:
```
SELECT i.* FROM items i LEFT OUTER JOIN cart c
ON i.id = c.item_id WHERE c.sessID=MY_SESSION_ID AND
c.item_id is not null ORDER BY c.id
```
|
Change your **subquery** to an `INNER JOIN`. A subquery will not let you refer to its columns outside of the subquery, so change it to a join:
```
SELECT i.*
FROM items i
JOIN (SELECT item_id,
id
FROM cart) C
ON i.id = c.item_id
AND c.sessID = MY_SESSION_ID
ORDER BY c.id Desc
```
or use this.
```
SELECT i.*
FROM items i
JOIN cart C
ON i.id = c.item_id
AND c.sessID = MY_SESSION_ID
ORDER BY c.id Desc
```
|
ORDER BY id IN Subquery
|
[
"",
"sql",
"select",
"join",
"subquery",
"sql-order-by",
""
] |
I have data in a column like below:
```
ferrari motor
motor
tata motor
motor corp
hundai motor
```
So when I use this query:
```
SELECT * FROM `table` WHERE MATCH(`name`) AGAINST ('%motor' IN BOOLEAN MODE) LIMIT 10
```
I get results in random order like below
```
tata motor
motor corp
ferrari motor
motor
hundai motor
```
But I want to sort it so that if **motor** is the starting word, that row is given priority. So my expected result should be like below:
```
motor
motor corp
tata motor
ferrari motor
hundai motor
```
P.S.: If it is not possible with `full text`, an answer using `LIKE` is also welcome, which should also support multiple words.
|
A very simple solution would be to sort by the position of the searched string:
```
SELECT *
FROM `table`
WHERE MATCH(`name`) AGAINST ('%motor' IN BOOLEAN MODE)
ORDER BY INSTR(UPPER(`name`), UPPER('motor'))
LIMIT 10
```
This would give you
```
motor
motor corp
tata motor
hundai Motor
ferrari motor
```
or
```
motor corp
motor
tata motor
hundai Motor
ferrari motor
```
If you need 'motor' to come before 'motor corp' then add another criterion like
```
ORDER BY INSTR(UPPER(`name`), UPPER('motor')), LENGTH(`name`)
```
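The `INSTR` + `LENGTH` ordering can be checked in SQLite (whose `instr()` returns the position of the first match) through Python's sqlite3, using the question's sample names; a final `name` tiebreaker is added here only to make the result deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("ferrari motor",), ("motor",), ("tata motor",),
                  ("motor corp",), ("hundai motor",)])

# An earlier match position sorts first; LENGTH breaks the tie so the
# bare 'motor' comes before 'motor corp'.
rows = conn.execute("""
    SELECT name FROM t
    WHERE name LIKE '%motor%'
    ORDER BY instr(lower(name), 'motor'), length(name), name
""").fetchall()
# → motor, motor corp, then the '... motor' names by match position
```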
|
Use an `ORDER BY CASE` to give specific matches higher priority.
```
SELECT * FROM view_sample
WHERE ...
ORDER BY
CASE
 WHEN name LIKE 'motor%' THEN 0
 WHEN name LIKE '%motor%' THEN 1
 ELSE 2 END,
name
```
1. everything that starts with motor
2. everything else that includes motor
3. everything else
each group sorted by name.
In your case, the ordering would be:
```
motor
motor corp
ferrari motor
hundai motor
tata motor
```
|
Mysql searching Column with beginning of words
|
[
"",
"mysql",
"sql",
""
] |
Say I have a table, called `tablex`, as follows:
```
name|year
---------
Bob | 2010
Mary| 2011
Sam | 2012
Mary| 2012
Bob | 2013
```
Names appear at most twice. I want to remove from the table only those names that are repeated and have a difference of one year (in which case I want to keep the newer year).
```
name|year
---------
Bob | 2010
Sam | 2012
Mary| 2012
Bob | 2013
```
I have tried:
```
SELECT a.Name, a.Year, b.Year
FROM tablex AS a
LEFT JOIN tablex AS b
ON a.Name=b.Name AND (a.Year=b.Year OR b.Year-a.Year=1)
ORDER BY a.Name, a.Year
```
results in:
```
Name YearA YearB
1 Bob 2010 2010
2 Bob 2013 2013
3 Mary 2011 2011
4 Mary 2011 2012
5 Mary 2012 2012
6 Sam 2012 2012
```
Bob's and Sam's entries are correct, how can I restrict it further to only include `Mary 2012 2012`?
|
From the question it is not clear whether you want to SELECT (suppressing the duplicates) or actually DELETE the "duplicates". The SELECT case:
```
SELECT a.Name, a.Year
FROM tablex AS a
WHERE NOT EXISTS (
SELECT * FROM tablex AS b
WHERE b.Name = a.Name
AND b.Year = a.Year +1
);
```
And the delete case (SQLite does not accept an alias on the DELETE target, so the subquery references `tablex` by name):
```
DELETE
FROM tablex
WHERE EXISTS (
    SELECT * FROM tablex AS b
    WHERE b.Name = tablex.Name
    AND b.Year = tablex.Year +1
```
|
This operation deletes rows from the table where the same name also exists with a newer year (note: any newer year, not only one year newer, so Bob's 2010 row would also be deleted):
```
delete from tablex t1 where year < (select max(year) from tablex where name = t1.name)
```
|
SQL(ite) Remove (and keep some) duplicates in table
|
[
"",
"sql",
"sqlite",
"join",
""
] |
I hope I am able to explain what I am looking for. I have a table as follows:
```
ID | Column A| Column B
1 | 1234 | 9876
2 | 1234 | 8765
3 | 9876 | 1234
4 | 2345 | 3456
5 | 3456 | 2345
```
The rule is that For Every value of Column A = Value A and Column B = Value B, I need to have a row where Column A = Value B and Column B = Value A.
Here, I have ID = 1, 3, 4 and 5 satisfying this condition.
I need to pull ID = 2 as this was not satisfying the rule.
Will this query work for the above condition:
```
select * from TABLE1 T1 where T1.ID not in (select ID from TABLE1
where T1.Column A = Column B and T1.Column B = Column A)
```
Is there a better way to write this query?
|
The following should work:
```
SELECT * FROM TABLE1 T1
WHERE NOT EXISTS (SELECT 1 FROM TABLE1 T2
WHERE T1.ColumnA = T2.ColumnB AND
T1.ColumnB = T2.ColumnA)
```
|
Use an outer self-join filtering out matches:
```
select a.*
from table1 a
left join table1 b on a.columna = b.columnb
and a.columnb = b.columna
where b.columnb is null
```
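Both forms can be verified against the question's sample rows; a sketch with Python's sqlite3 using the anti-join version (the `NOT EXISTS` form returns the same row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INT, columna INT, columnb INT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    (1, 1234, 9876), (2, 1234, 8765), (3, 9876, 1234),
    (4, 2345, 3456), (5, 3456, 2345),
])

# A row survives the IS NULL filter only when no mirrored (b, a) row exists.
rows = conn.execute("""
    SELECT a.id
    FROM table1 a
    LEFT JOIN table1 b ON a.columna = b.columnb AND a.columnb = b.columna
    WHERE b.columnb IS NULL
""").fetchall()
# → [(2,)] — only ID 2 has no reversed partner
```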
|
Need to retrieve rows from table where the following condition was not satisfied: column A = column B and column B = column A
|
[
"",
"sql",
"join",
"notin",
""
] |
I'm trying to find any tables with columns containing the word `date` somewhere in the column name.
All of my queries are either all or nothing: they return all tables in the DB, or no results at all.
When I run a query without the variable, it works:
```
select *
from MyDB.INFORMATION_SCHEMA.COLUMNS
where column_name like '%date%'
```
However, I can't get it to work by using a variable.
```
declare @temp varchar = 'date'
select*
from MyDB.INFORMATION_SCHEMA.COLUMNS
where column_name like '%' + @temp + '%'
```
The reason I'd like to do this is because I need to run this on more than one DB (such as below), and I have to perform this several times (for more than just `date`), and I'd like the process to go more smoothly.
```
select *
from MyDB1.INFORMATION_SCHEMA.COLUMNS
where column_name like '%date%'
union all
select *
from MyDB2.INFORMATION_SCHEMA.COLUMNS
where column_name like '%date%'
union all
select *
from MyDB3.INFORMATION_SCHEMA.COLUMNS
where column_name like '%date%'
```
|
One query with a cursor to loop through all the databases on your server, or you can explicitly loop through certain databases:
## Query
```
DECLARE @DB_Name SYSNAME;
DECLARE @Sql NVARCHAR(MAX)= '';
DECLARE @cur CURSOR;
SET @Cur = CURSOR FOR
SELECT name
FROM sys.sysdatabases
--WHERE name IN ('DBName1', 'DBName2', 'DBName3'); --<-- uncomment this line and
-- specify the database names
OPEN @cur
FETCH NEXT FROM @Cur INTO @DB_Name
WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @Sql = N'
SELECT t.name
,c.name
FROM '+ QUOTENAME(@DB_Name) + '.sys.tables t
INNER JOIN ' + QUOTENAME(@DB_Name) + '.sys.columns c ON c.object_id = t.object_id
                WHERE c.name LIKE ''%date%'''
EXEC(@Sql)
FETCH NEXT FROM @Cur INTO @DB_Name
END
CLOSE @cur
DEALLOCATE @cur
```
|
You can use whatever source of column info you prefer, but the issue you are having is because of your variable declaration.
`declare @temp varchar = 'date'` is equivalent to `declare @temp varchar(1) = 'date'`, so `select @temp` returns `'d'` and you are matching any column containing 'd'.
When you declare your variable, make sure it has a length sufficient to store the longest of the strings you will be searching for. Jason's answer will work too, but because of the variable declaration, not the source of the data.
|
Find table information where part of column matches char variable
|
[
"",
"sql",
"sql-server",
""
] |
My end goal...`SELECT` all fields from `qry1` on rows where either the home phone, cell phone, OR work phone match `tbl2`
This is my current SQL syntax that's "failing." By failing I mean it hasn't finished executing after 10-15 minutes while the individual joins each (independently) run in a minute or two a piece.
```
SELECT qry1.*
FROM qry1 INNER JOIN tbl2
ON ((qry1.CellPhone = tbl2.CellPhone)
OR (qry1.HomePhone = tbl2.HomePhone)
OR (qry1.WorkPhone = tbl2.WorkPhone));
```
Questions:
Are there errors in my SQL syntax?
Is there a better way to complete my task?
Would it make more sense for me to simply run the 3 (Home, work, cell) JOIN queries independently, UNION them, and then dedupe if necessary?
|
If the individual joins work fast, the composite OR condition is probably slow because it cannot use a single index whereas the individual conditions can each use a single index for the three joining conditions. Because it can't use one index, it is probably doing non-indexed sequential table scans. (You should study the query plan so you understand what the optimizer is actually doing.)
Given that the individual queries work reasonably fast, therefore, you should get dramatically better performance using UNION (unless the optimizer in your DBMS has a blind spot):
```
SELECT qry1.* FROM qry1 INNER JOIN tbl2 ON qry1.CellPhone = tbl2.CellPhone
UNION
SELECT qry1.* FROM qry1 INNER JOIN tbl2 ON qry1.HomePhone = tbl2.HomePhone
UNION
SELECT qry1.* FROM qry1 INNER JOIN tbl2 ON qry1.WorkPhone = tbl2.WorkPhone
```
That should give you a result approximately as quickly as the 3 individual queries. It won't be quite as fast because UNION does duplicate elimination (which the individual queries do not, of course). You could use UNION ALL, but if there are many rows in the two tables where 2 or 3 of the pairs of fields match, that could lead to a lot of repetition in the results.
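A minimal runnable illustration of the UNION approach with Python's sqlite3 (column names shortened and data invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE qry1 (id INT, cell TEXT, home TEXT, work TEXT)")
conn.execute("CREATE TABLE tbl2 (cell TEXT, home TEXT, work TEXT)")
conn.executemany("INSERT INTO qry1 VALUES (?, ?, ?, ?)", [
    (1, "555-1", "555-2", "555-3"),   # matches tbl2 on all three numbers
    (2, "555-9", "555-9", "555-9"),   # matches nothing
    (3, "555-8", "555-8", "555-3"),   # matches on work only
])
conn.execute("INSERT INTO tbl2 VALUES ('555-1', '555-2', '555-3')")

# UNION (not UNION ALL) removes the duplicates produced when row 1
# matches through more than one phone column.
rows = conn.execute("""
    SELECT qry1.* FROM qry1 JOIN tbl2 ON qry1.cell = tbl2.cell
    UNION
    SELECT qry1.* FROM qry1 JOIN tbl2 ON qry1.home = tbl2.home
    UNION
    SELECT qry1.* FROM qry1 JOIN tbl2 ON qry1.work = tbl2.work
    ORDER BY id
""").fetchall()
# → one row each for id 1 and id 3
```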
|
`or` can be rather difficult for SQL optimizers. I would suggest 3 indexes and the following query:
```
SELECT qry1.*
FROM qry1
WHERE EXISTS (SELECT 1 FROM tbl2 WHERE qry1.CellPhone = tbl2.CellPhone) OR
EXISTS (SELECT 1 FROM tbl2 WHERE qry1.HomePhone = tbl2.HomePhone) OR
EXISTS (SELECT 1 FROM tbl2 WHERE qry1.WorkPhone = tbl2.WorkPhone);
```
The three indexes are `tbl2(CellPhone)`, `tbl2(HomePhone)`, and `tbl2(WorkPhone)`.
|
How do I perform a JOIN if one of multiple conditions is true?
|
[
"",
"sql",
"ms-access",
"ms-access-2010",
""
] |
I have a table-valued function as follows, but it is giving an error: "Msg 156, Level 15, State 1, Procedure GetOpeningByRepAcc, Line 36
Incorrect syntax near the keyword 'begin'."
I wasn't able to fix this. Any ideas?
```
create FUNCTION [dbo].[GetOpeningByRepAcc]
(
@Date datetime,@AccId int,@CenterId datetime,@ItemId int
)
RETURNS @ReturnData TABLE
(
Opening decimal(10,3),
Topup decimal(10,3),
Returnsv decimal(10,3)
)
as
begin
declare @opening decimal(10,3)
declare @topup decimal(10,3)
declare @returns decimal(10,3)
set @opening = (SELECT sum(OpeningQty)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId AND
SalesRepAccId = @AccId AND ItemMasterId = @ItemId
and convert(varchar(10),StockDate,111) = convert(varchar(10),@Date,111))
set @topup = (SELECT sum(Qty)
FROM DailyItemStockTopUp
WHERE
convert(varchar(10),TopupDate,111) = CONVERT(varchar(10), getdate(), 111)
AND DistributionCenterId = @CenterId AND SalesRepAccId = @AccId
AND ItemMasterId = @ItemId)
set @returns = (SELECT sum(TotReturns)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId AND
SalesRepAccId = @AccId AND ItemMasterId = @ItemId
and convert(varchar(10),StockDate,111) = convert(varchar(10),@Date,111))
end
begin
insert @ReturnData
select @opening,@topup,@returns
return
end
```
|
You have two lots of `BEGIN... END` in your function; this is causing the error. Looking at your function, though, it can be simplified hugely: you are performing pretty much the same query 3 times, just summing different columns. Two of them are:
```
SET @Topup = ( SELECT SUM(Qty)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId
AND SalesRepAccId = @AccId
AND ItemMasterId = @ItemId
AND CONVERT(VARCHAR(10),TopupDate,111) = CONVERT(VARCHAR(10), GETDATE(), 111)
);
SET @returns = (SELECT SUM(TotReturns)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId
AND SalesRepAccId = @AccId
AND ItemMasterId = @ItemId
AND CONVERT(VARCHAR(10),TopupDate,111) = CONVERT(VARCHAR(10), GETDATE(), 111)
);
```
You can do this in a single statement:
```
SELECT @Topup = SUM(Qty),
@returns = SUM(TotReturns)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId
AND SalesRepAccId = @AccId
AND ItemMasterId = @ItemId
AND CAST(StockDate AS DATE) = CAST(GETDATE() AS DATE);
```
*n.b. I have changed your predicate converting dates to varchars to compare them (I assume to remove the time) as this is awful practice, it performs terribly and can't use any indexes on the date columns*
With the above in mind, I would be inclined to make this an inline TVF, it will perform much better:
```
CREATE FUNCTION [dbo].[GetOpeningByRepAcc]
(
@Date DATETIME,
@AccId INT,
@CenterId DATETIME,
@ItemId INT
)
RETURNS TABLE
AS
RETURN
( SELECT Opening = SUM(OpeningQty),
Topup = SUM(Qty),
Returnsv = SUM(TotReturns)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId
AND SalesRepAccId = @AccId
AND ItemMasterId = @ItemId
AND CAST(StockDate AS DATE) = CAST(GETDATE() AS DATE)
);
```
The benefit of inline Table valued functions is that they behave more like views, in that their definition can be expanded out into the outer query and subsequently optimised, and are not executed [RBAR](https://www.simple-talk.com/sql/t-sql-programming/rbar--row-by-agonizing-row/) like functions that use `BEGIN...END`
|
I think that you have one extra `end ... begin`. Please try the following version of you function:
```
create FUNCTION [dbo].[GetOpeningByRepAcc]
(
@Date datetime,@AccId int,@CenterId datetime,@ItemId int
)
RETURNS @ReturnData TABLE
(
Opening decimal(10,3),
Topup decimal(10,3),
Returnsv decimal(10,3)
)
as
begin
declare @opening decimal(10,3)
declare @topup decimal(10,3)
declare @returns decimal(10,3)
set @opening = (SELECT sum(OpeningQty)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId AND
SalesRepAccId = @AccId AND ItemMasterId = @ItemId
and convert(varchar(10),StockDate,111) = convert(varchar(10),@Date,111))
set @topup = (SELECT sum(Qty)
FROM DailyItemStockTopUp
WHERE
convert(varchar(10),TopupDate,111) = CONVERT(varchar(10), getdate(), 111)
AND DistributionCenterId = @CenterId AND SalesRepAccId = @AccId
AND ItemMasterId = @ItemId)
set @returns = (SELECT sum(TotReturns)
FROM DailyItemStock
WHERE DistributionCenterId = @CenterId AND
SalesRepAccId = @AccId AND ItemMasterId = @ItemId
and convert(varchar(10),StockDate,111) = convert(varchar(10),@Date,111))
insert into @ReturnData
select @opening,@topup,@returns
return
end
```
|
SQL Table valued function
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-function",
""
] |
I have a query that gets a count of all the tickets assigned to a team:
```
SELECT 'Application Developers' AS team, COUNT(Assignee) AS tickets
from mytable WHERE status = 'Open' AND Assignee like '%Application__bDevelopers%'
UNION ALL
SELECT 'Desktop Support' AS team, COUNT(Assignee) AS tickets
from mytable WHERE status = 'Open' AND Assignee like '%Desktop__bSupport%'
UNION ALL
SELECT 'Network Management' AS team, COUNT(Assignee) AS tickets
FROM mytable WHERE status = 'Open' AND Assignee LIKE '%Network__bManagement%'
UNION ALL
SELECT 'Security' AS team, COUNT(Assignee) AS tickets
from mytable WHERE status = 'Open' AND Assignee = '%Security%'
UNION ALL
SELECT 'Telecom' AS team, COUNT(Assignee) AS tickets
from mytable WHERE status = 'Open' AND Assignee = '%Telecom%'
```
The result is:
```
team tickets
Application Developers 6
Desktop Support 374
Network Management 0
Security 7
Telecom 0
```
How can I exclude the results that come back with "0" tickets?
|
You can use `HAVING` for each query:
```
SELECT 'Application Developers' AS team, COUNT(Assignee) AS tickets
from mytable WHERE
status = 'Open'
AND Assignee like '%Application__bDevelopers%'
HAVING COUNT(Assignee) <> 0
```
I'd suggest @NoDisplayName's answer because your query as it stands is quite inefficient.
|
Instead of using `UNION ALL` between a bunch of queries, change your query like this.
Use a `HAVING` clause to keep only the groups where `count <> 0`:
```
SELECT CASE
WHEN Assignee LIKE '%Application__bDevelopers%' THEN 'Application Developers'
WHEN Assignee LIKE '%Desktop__bSupport%' THEN 'Desktop Support'
WHEN Assignee LIKE '%Network__bManagement%' THEN 'Network Management'
WHEN Assignee LIKE '%Security%' THEN 'Security'
WHEN Assignee LIKE '%Telecom%' THEN 'Telecom'
END AS team,
Count(Assignee) AS tickets
FROM mytable
WHERE status = 'Open'
AND ( Assignee LIKE '%Application__bDevelopers%'
OR Assignee LIKE '%Desktop__bSupport%'
OR Assignee LIKE '%Network__bManagement%'
OR Assignee = '%Security%'
OR Assignee = '%Telecom%' )
GROUP BY CASE
WHEN Assignee LIKE '%Application__bDevelopers%' THEN 'Application Developers'
WHEN Assignee LIKE '%Desktop__bSupport%' THEN 'Desktop Support'
WHEN Assignee LIKE '%Network__bManagement%' THEN 'Network Management'
WHEN Assignee LIKE '%Security%' THEN 'Security'
WHEN Assignee LIKE '%Telecom%' THEN 'Telecom'
END
HAVING Count(Assignee) <> 0
```
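The single-pass idea can be sketched in SQLite via Python (simplified team names and invented data). Note that with one `GROUP BY` query, a team with no matching rows simply never forms a group, so no zero-count rows appear at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (status TEXT, assignee TEXT)")
conn.executemany("INSERT INTO mytable VALUES ('Open', ?)", [
    ("Application Developers",), ("Desktop Support",),
    ("Desktop Support",), ("Security",),
])

# Bucket with CASE in both SELECT and GROUP BY (SQLite lets GROUP BY
# reuse the result-column alias). Teams with no open tickets produce
# no group, hence no zero rows.
rows = conn.execute("""
    SELECT CASE
             WHEN assignee LIKE '%Developers%' THEN 'Application Developers'
             WHEN assignee LIKE '%Support%'    THEN 'Desktop Support'
             WHEN assignee LIKE '%Security%'   THEN 'Security'
           END AS team,
           COUNT(*) AS tickets
    FROM mytable
    WHERE status = 'Open'
    GROUP BY team
    ORDER BY team
""").fetchall()
# → [('Application Developers', 1), ('Desktop Support', 2), ('Security', 1)]
```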
|
How can I exclude COUNT results which are 0?
|
[
"",
"sql",
"sql-server",
""
] |
I want to get the month number when I only know the month name.
(If the month name is January, it should return 1.)
I tried this.
```
SELECT DATEPART(MM,'january 01 2011')
```
But here I have to pass a whole date value (january 01 2011), when I only want to pass the month name.
(Without using 'Case')
Is there any way to do this?
|
You can pass month name and append `01 2011` (or whatever other day + year you wish) to it, like this:
```
DECLARE @monthName varchar(20);
SET @monthName = 'january';
SELECT DATEPART(MM, @monthName + ' 01 2011')
```
or
```
SELECT MONTH(@monthName + ' 01 2011')
```
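The append-a-fixed-day-and-year trick is not T-SQL specific; a Python sketch of the same idea (note `%B` is locale-dependent, so this assumes English month names):

```python
from datetime import datetime

def month_number(name: str) -> int:
    # Same trick as the T-SQL answer: turn the bare name into a full
    # date, then take the month part. %B expects the full month name.
    return datetime.strptime(name.capitalize(), "%B").month

n = month_number("january")
# → 1
```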
|
you can also use `Case`
```
DECLARE @month VARCHAR(15)='mar'
SELECT CASE @month
WHEN 'Jan' THEN 1
WHEN 'feb' THEN 2
WHEN 'mar' THEN 3
WHEN 'apr' THEN 4
WHEN 'may' THEN 5
WHEN 'jun' THEN 6
WHEN 'jul' THEN 7
WHEN 'aug' THEN 8
WHEN 'sep' THEN 9
WHEN 'oct' THEN 10
WHEN 'nov' THEN 11
WHEN 'dec' THEN 12
END
```
|
Get month number passing only month name
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Suppose I have a query stored in a variable like this (it's actually dynamically populated and more complex, but this is for demonstration purposes):
```
DECLARE @Query VARCHAR(1000) = 'SELECT * FROM dbo.MyTable'
```
Is there a way to check if the query would return any results? Something like this, but this doesn't work:
```
IF EXISTS (@Query)
BEGIN
-- do something
END
```
The only way that I can think of to do this is to put the results in a temp table and then query from that, but that is not ideal because the columns in the dynamic query can vary and I really don't need the temp table at all for any reason other than checking whether some rows would be returned. Is there a better way?
|
Try executing the dynamic query and use `@@ROWCOUNT` to check whether any rows exist:
```
DECLARE @Query NVARCHAR(1000) = 'SELECT * FROM [dbo].[Mytable]',
@rowcnt INT
EXEC Sp_executesql @query
SELECT @rowcnt = @@ROWCOUNT
IF @rowcnt > 0
BEGIN
PRINT 'row present'
END
```
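For comparison, the same existence check outside T-SQL: run the dynamic text and see whether a single row comes back, no temp table needed (Python's sqlite3; table and query are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (id INT)")
conn.execute("INSERT INTO MyTable VALUES (1)")

query = "SELECT * FROM MyTable"   # dynamically built elsewhere

# fetchone() pulls at most one row: enough to prove existence without
# materialising the full result or caring about its column layout.
has_rows = conn.execute(query).fetchone() is not None
no_rows = conn.execute("SELECT * FROM MyTable WHERE id = 999").fetchone() is None
# → has_rows is True, no_rows is True
```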
|
Try this:
```
DECLARE @Query NVARCHAR(1000) = 'SELECT @C = COUNT(*) FROM dbo.MyTable'
DECLARE @Count AS INT
EXEC sp_executesql @Query, N'@C INT OUTPUT', @C=@Count OUTPUT
IF (@Count > 0)
BEGIN
    PRINT 'rows exist'
END
```
|
SQL "if exists..." dynamic query
|
[
"",
"sql",
"sql-server",
"exists",
""
] |
I need to do an aggregation of data as follows. This pertains to imports of various grades of ore on various dates. I have the following sample data and want to get the cumulative amount to date, per grade.
```
Grade Dat Amount
----------------
A 12/1/2014 100
A 12/4/2014 40
A 12/30/2014 25
A 1/6/2015 100
B 12/24/2014 20
B 12/28/2014 1
B 1/1/2015 30
B 1/2/2015 50
C 12/12/2014 20
C 12/31/2014 15
```
I was looking to get the following
```
Grade Dat Amount
----------------
A 12/1/2014 100
A 12/4/2014 140
A 12/30/2014 165
A 1/6/2015 265
B 12/24/2014 20
B 12/28/2014 21
B 1/1/2015 51
B 1/2/2015 101
C 12/12/2014 20
C 12/31/2014 35
```
I tried this
```
select a.grade, a.dat,sum(a.amount)
from table1 a, table1 b
where (a.grade=b.grade) and (a.dat>=b.dat)
group by a.grade, a.dat
```
This doesn't work as intended: it presents the first row in each grade correctly, but then doubles the second value, triples the third, etc., instead of producing a cumulative aggregation.
I get this
```
project dat Expr1002
A 12/1/2014 100
A 12/4/2014 80
A 12/30/2014 75
A 1/6/2015 400
B 12/24/2014 20
B 12/28/2014 2
B 1/1/2015 90
B 1/8/2015 200
C 12/12/2014 20
C 12/31/2014 30
```
I suspect I am missing something simple
|
I guess the following is what you want:
```
select a.grade, a.dat, (select sum(amount) from table1 b
where (a.grade=b.grade) and (a.dat >= b.dat))
from table1 a
```
(Assuming there are no duplicate grade/dat rows.)
|
I will guess that you want to see a cumulative total over time, thus:
```
select grade
,dat
,amount
,sum(amount)
over (partition by grade order by dat range between UNBOUNDED PRECEDING and current row) cumulative_amount
from table1;
```
That returns, in the cumulative\_amount column, the running sum of amounts for each grade up to each row's date.
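The windowed running total can be checked with a small script. This is a sketch using Python's `sqlite3` (window functions need SQLite 3.25+); dates are stored as ISO `yyyy-mm-dd` strings so that `ORDER BY dat` sorts chronologically, unlike the `12/1/2014` format in the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (grade TEXT, dat TEXT, amount INTEGER)")
rows = [("A", "2014-12-01", 100), ("A", "2014-12-04", 40),
        ("A", "2014-12-30", 25), ("A", "2015-01-06", 100)]
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)", rows)

# Running total per grade, ordered by date (SQLite 3.25+ window functions)
cumulative = conn.execute("""
    SELECT grade, dat, amount,
           SUM(amount) OVER (PARTITION BY grade ORDER BY dat) AS cumulative_amount
    FROM table1
    ORDER BY grade, dat
""").fetchall()
print(cumulative)
```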
|
aggregate query in SQL
|
[
"",
"sql",
"ms-access",
""
] |
I have an `UPDATE` pass through query saved in Access 2007. When I double-click on the pass through query it runs successfully. How can I get this query to run from VBA? I'd like it to run when my "splash screen" loads.
I'm currently using the following code:
`CurrentDb.Execute "Q_UPDATE_PASSTHROUGH", dbSQLPassThrough`
But I get the following message:

The pass-through query contains all the connection information and I've confirmed the SQL syntax is correct by running it multiple times, so not sure what I'm missing in my VBA call.
|
Use the QueryDef's `Execute` method:
```
CurrentDb.QueryDefs("Q_UPDATE_PASSTHROUGH").Execute
```
I don't think you should need to explicitly include the *dbSQLPassThrough* option here, but you can try like this if you want it:
```
CurrentDb.QueryDefs("Q_UPDATE_PASSTHROUGH").Execute dbSQLPassThrough
```
|
I recently ran into the same problem. While the above-mentioned `Execute` method works in most cases, some people (me included) experience a `Run-time error '3001': Invalid Argument` when using the parameter *dbSQLPassThrough*. This was also addressed in the answer above me and happens even with the simplest SQL statements.
For those who are having the same problem, I recommend using the `OpenQuery` method as alternative.
A valid substitution for the following code
```
CurrentDb.QueryDefs("Q_UPDATE_PASSTHROUGH").Execute
```
would be
```
DoCmd.OpenQuery "Q_UPDATE_PASSTHROUGH"
```
I know this thread is 4 years old, however, searching for a solution for the not working `Execute` method on Google brings you directly to this thread which is why I thought it would be useful to add an alternative solution which solved this problem for me.
|
Execute SQL Server Pass-Through Query From Access VBA
|
[
"",
"sql",
"ms-access",
"vba",
"pass-through",
""
] |
I am trying to get the max value of a column in my query but I can't find a solution. My query is
```
SELECT pz.id,
sz.Price + cr.AddedCost AS price
FROM piz pz, crust cr, size sz
WHERE
pz.SizeID = sz.id AND
    pz.CrustID = cr.id
```
Then I receive some records like the following
```
ID price
-----------
1 95000.0
6 160000.0
10 150000.0
3 137500.0
5 122500.0
4 195000.0
2 195000.0
7 130000.0
8 205000.0
9 130000.0
11 205000.0
```
I want to get the `record` with the `max value` of price, which are:
```
ID price
------------
8 205000.0
11 205000.0
```
I am using Navicat with MySQL server
Update:
I did try using:
```
SELECT pz.id, MAX(sz.Price + cr.AddedCost) AS price ....
```
But it didn't work, as the returned id is wrong
|
What happens when you do this?
Change all your old-style comma-separated joins to INNER JOIN for better readability.
```
SELECT pz.id,
sz.Price + cr.AddedCost AS price
FROM piz pz
INNER JOIN crust cr
ON pz.CrustID = cr.id
INNER JOIN size sz
ON pz.SizeID = sz.id
WHERE sz.Price + cr.AddedCost = (SELECT sz1.Price + cr1.AddedCost AS price
FROM piz pz1
INNER JOIN crust cr1
ON pz1.CrustID = cr1.id
INNER JOIN size sz1
ON pz1.SizeID = sz1.id
ORDER BY price DESC limit 1 )
```
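The core of the approach above — compare each row's computed price against the single maximum, so that ties are all returned — can be demonstrated on a plain table. A minimal sketch with Python's `sqlite3`, using a pre-computed `price` column instead of the three-table join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (id INTEGER, price REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?)",
                 [(1, 95000), (8, 205000), (9, 130000), (11, 205000)])

# All rows whose price equals the maximum; ties are kept
top = conn.execute("""
    SELECT id, price
    FROM prices
    WHERE price = (SELECT MAX(price) FROM prices)
    ORDER BY id
""").fetchall()
print(top)
```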
|
Use the `MAX` function: `select MAX(column) from ...`
|
Get record with Max value from SELECT query
|
[
"",
"mysql",
"sql",
"navicat",
""
] |
I'm trying to create a simple query that turns a grading/evaluation column that currently contains the values 0,1,2,3,4,5 into Terrible, Poor, Bad, Mediocre, and Good. But as you can see there are only 5 label values but 6 number values.
So the number 5 needs to display as 5, while all the other numbers will turn into their respective labels (Terrible, Poor, etc).
My current query is as follow:
```
SELECT course_id,
student_id,
CASE eval
WHEN 0 THEN 'Terrible'
WHEN 1 THEN 'Bad'
WHEN 2 THEN 'Mediocre'
WHEN 3 THEN 'Fair'
WHEN 4 THEN 'Good'
END
FROM booking;
```
In the following query result, I know for a fact that the `COURSE_ID` labeled `10121` has an Evaluation value of `5`
All other ones are null so it's fine that they are blank. But `10121` needs to display as `5`.
If I try to apply an `ELSE` statement to return the original value (by doing an `inner select` statement) it says `ERROR at line 9: ORA-00932: inconsistent datatypes: expected CHAR got NUMBER`
```
COURSE_ID STUDENT_ID CASEEVAL
---------- ---------- --------
10111 10235000 Good
10111 10285858 Fair
10112 10288497
10112 10092464 Fair
10113 10235000
10114 10128018 Terrible
10114 10288497
10115 10092464 Terrible
10115 10128018 Bad
10116 10128018
10118 10288497 Bad
10118 10836444 Fair
10119 10285858
10120 10412445 Mediocre
10121 10836444
10122 10285858 Good
10123 10412445 Mediocre
10125 10095146
```
|
You need an `ELSE` option to handle the cases that aren't matched. Use `TO_CHAR` to get all the values to the same datatype.
```
SELECT event_id,
trainee_id,
CASE eval
WHEN 0 THEN 'Terrible'
WHEN 1 THEN 'Bad'
WHEN 2 THEN 'Mediocre'
WHEN 3 THEN 'Fair'
WHEN 4 THEN 'Good'
ELSE TO_CHAR(eval)
END
FROM booking;
```
|
Just use `'5'`:
```
SELECT event_id,
trainee_id,
(CASE eval
WHEN 0 THEN 'Terrible'
WHEN 1 THEN 'Bad'
WHEN 2 THEN 'Mediocre'
WHEN 3 THEN 'Fair'
WHEN 4 THEN 'Good'
WHEN 5 THEN '5'
END
)
FROM booking;
```
Alternatively, you can use an explicit `cast()` with a default value using `else`:
```
SELECT event_id,
trainee_id,
(CASE eval
WHEN 0 THEN 'Terrible'
WHEN 1 THEN 'Bad'
WHEN 2 THEN 'Mediocre'
WHEN 3 THEN 'Fair'
WHEN 4 THEN 'Good'
ELSE cast(eval as varchar2(255))
END
)
FROM booking;
```
I would encourage you to use explicit casts. SQL can be prone to hard-to-debug errors when doing implicit casts.
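The `ELSE` with an explicit cast can be sketched on any SQL engine. A minimal illustration with Python's `sqlite3` (`CAST(... AS TEXT)` replaces Oracle's `varchar2` cast; table data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE booking (course_id INTEGER, eval INTEGER)")
conn.executemany("INSERT INTO booking VALUES (?, ?)",
                 [(10111, 4), (10121, 5)])

# Matched values get a label; the ELSE branch casts the number to text
rows = conn.execute("""
    SELECT course_id,
           CASE eval
               WHEN 0 THEN 'Terrible'
               WHEN 4 THEN 'Good'
               ELSE CAST(eval AS TEXT)
           END
    FROM booking
    ORDER BY course_id
""").fetchall()
print(rows)
```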
|
CASE statement turning a NUMBER into a CHAR, but the ELSE to keep original NUMBER in Oracle SQL
|
[
"",
"sql",
"oracle",
"case",
""
] |
I'm still actively learning MS Access/sql (needs to be Office 2003 compliant). I've simplified things down as best as I can.
I have the following table (Main) comprising 2 fields (Ax and Ay) and which extends to several thousand records.
This is the primary database and requires to be searchable:
table: Main
```
Ax Ay
1 6
5 9
3 3
7
5 5
7 2
2
4 4
3
6 5
7 6
```
etc....
Blank entries above are simply null values. Ax and Ay values can appear in either field.
There is a second table called Afull which comprises 2 fields called Avalid and Astr:
table: Afull
```
Avalid Astr
1
2
3
4
5
6
7
8
9
```
Field Astr is initialised to Null at the start of each run.
The first use of this table is to store all valid values for Ax and Ay in field Avalid.
The second use is to allow for the selection, by the user, of search critera. To do this, table Afull is added as a subform in the user search form. The user then selects an Avalid value to search for by inputting any value >0 into Astr - next to the value to be searched.
An sql query string is then built up whose purpose is to return all records carrying any 'permutation' of user-selected Avalid values:
```
SELECT Main.Ax,Main.Ay
FROM Main
WHERE (Main.Ax In(uservalues) OR Main.Ax Is Null) AND (Main.Ay In(uservalues) OR Main.Ay Is Null)
```
"uservalues" is converted into the list of Avalid values to be searched.
This is fine and works as expected (double Null records don't exist).
**Question:**
I would like to include the Astr value, itself, in the results - one field for Ax Astr values and one for Ay Astr values. I've tried a few things including adding the following to the SELECT statement:
```
strSQL = strSQL & ",IIF((Main.Ax In(uservalues)),Afull.Astr AS Axstr"
strSQL = strSQL & ",IIF((Main.Ay In(uservalues)),Afull.Astr AS Aystr"
strSQL = strSQL & "FROM Main,Afull"
```
...but this doesn't work. Is there any relatively simple method to achieve this?
Ultimately, I will also be using the Astr values to sort Ascending. Think of Astr as the 'strength' of the selected Avalid value.
|
To paraphrase a bit: users can select values for Ax and Ay, and you only want to return records where both Ax and Ay are in the list of selected values, or either of them is null, but not both.
Now you want to add Astr for both Ax and Ay. You could use a correlated sub-query as below, or you could join to Afull twice, once on Ax = Avalid and once on Ay = Avalid.
If you are going to build a SQL string like in your example, check your brackets too.
```
SELECT
Main.Ax,
Main.Ay,
(select Astr from Afull where Avalid = Main.Ax) as AxStr,
(select Astr from Afull where Avalid = Main.Ay) as AyStr
FROM Main
WHERE (Main.Ax In(uservalues) OR Main.Ax Is Null)
AND (Main.Ay In(uservalues) OR Main.Ay Is Null)
```
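The two correlated sub-queries — one per side, each pulling the matching `Astr` — can be demonstrated in miniature. A sketch with Python's `sqlite3`, using a small made-up subset of the data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Main (Ax INTEGER, Ay INTEGER)")
conn.execute("CREATE TABLE Afull (Avalid INTEGER, Astr INTEGER)")
conn.executemany("INSERT INTO Main VALUES (?, ?)", [(1, 6), (5, 9)])
conn.executemany("INSERT INTO Afull VALUES (?, ?)",
                 [(1, 3), (5, 7), (6, 2), (9, 1)])

# One correlated sub-query per side looks up the matching Astr value
rows = conn.execute("""
    SELECT Ax, Ay,
           (SELECT Astr FROM Afull WHERE Avalid = Main.Ax) AS AxStr,
           (SELECT Astr FROM Afull WHERE Avalid = Main.Ay) AS AyStr
    FROM Main
""").fetchall()
print(rows)
```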
|
Whatever works at the end of the day, but here is an alternative, basically it selects the value for the first expression that evaluates true. if A?Str is null or less than zero it uses the zero value so that you can just add the two results together.
```
Switch(AxStr>=0, AxStr,AxStr<0,0,isnull(AxStr),0) + Switch(AyStr>=0, AyStr,AyStr<0,0,isnull(AyStr),0)
```
|
Access sql display values from second table
|
[
"",
"sql",
"ms-access",
""
] |
How can I get the result of the current year using SQL?
I have a table that has a column date with the format `yyyy-mm-dd`.
Now, I want to do a select query that only returns the current year's results.
The pseudo code should be like:
```
select * from table where date is (current year dates)
```
The result should be as following:
```
id date
2 2015-01-01
3 2015-02-01
9 2015-01-01
6 2015-02-01
```
How can I do this?
|
Use [`YEAR()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_year) to get only the year of the dates you want to work with:
```
select * from table where YEAR(date) = YEAR(CURDATE())
```
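The same year-comparison idea can be verified with a tiny script. This sketch uses Python's `sqlite3`, where the equivalent of `YEAR(date) = YEAR(CURDATE())` is spelled with `strftime` (SQLite has no `YEAR()`/`CURDATE()`):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, d TEXT)")
this_year = date.today().year
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "2001-02-01"), (2, f"{this_year}-01-01")])

# SQLite spelling of YEAR(date) = YEAR(CURDATE())
current = conn.execute("""
    SELECT id FROM t
    WHERE strftime('%Y', d) = strftime('%Y', 'now')
""").fetchall()
print(current)
```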
|
When I tried these answers on SQL server, I got an error saying curdate() was not a recognized function.
If you get the same error, using getdate() instead of curdate() should work!
|
SQL - Get result of current year only
|
[
"",
"mysql",
"sql",
"date",
""
] |
I am working with **MySQL 5.6**. I created a table with 366 partitions to store data day-wise: a year has at most 366 days, so I created 366 partitions on that table. The hash partitions are keyed by an integer column which stores 1 to 366 for each record.
**Report\_Summary** Table:
```
CREATE TABLE `Report_Summary` (
`PartitionsID` int(4) unsigned NOT NULL,
`ReportTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`Amount` int(10) NOT NULL,
UNIQUE KEY `UNIQUE` (`PartitionsID`,`ReportTime`),
KEY `PartitionsID` (`PartitionsID`),
KEY `ReportTime` (`ReportTime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED
/*!50100 PARTITION BY HASH (PartitionsID)
PARTITIONS 366 */
```
**My current query:**
```
SELECT DATE(RS.ReportTime) AS ReportDate, SUM(RS.Amount) AS Total
FROM Report_Summary RS
WHERE RS.ReportTime >= '2014-12-26 00:00:00' AND RS.ReportTime <= '2014-12-30 23:59:59' AND
RS.PartitionsID BETWEEN DAYOFYEAR('2014-12-26 00:00:00') AND DAYOFYEAR('2014-12-30 23:59:59')
GROUP BY ReportDate;
```
The above query works perfectly and uses partitions **p360** to **p364** to fetch the data. The problem is that when I pass **fromDate '2014-12-26' and toDate '2015-01-01'**, the above query fails, because the day of year for '2015-01-01' is **1**, so my BETWEEN condition no longer matches.
I then tried passing the values with the **IN** operator, and that works perfectly; check the query below:
```
SELECT DATE(RS.ReportTime) AS ReportDate, SUM(RS.Amount) AS Total
FROM Report_Summary RS
WHERE RS.ReportTime >= '2014-12-26 00:00:00' AND RS.ReportTime <= '2015-01-01 23:59:59' AND
RS.PartitionsID IN (360,361,362,363,364,365,1)
GROUP BY ReportDate;
```
To generate this list I created a function that takes two dates and returns a comma-separated string of the IDs:
```
SELECT GenerateRange('2014-12-26 00:00:00', '2015-01-01 23:59:59');
```
which returns:
```
'360,361,362,363,364,365,366,1'
```
I then tried to use that function in my query, changing it as below:
```
SELECT DATE(RS.ReportTime) AS ReportDate, SUM(RS.Amount) AS Total
FROM Report_Summary RS
WHERE RS.ReportTime >= '2014-12-26 00:00:00' AND RS.ReportTime <= '2015-01-01 23:59:59' AND
FIND_IN_SET(RS.PartitionsID, GenerateRange('2014-12-26 00:00:00', '2015-01-01 00:00:00'))
GROUP BY ReportDate;
```
I then checked the execution plan of the above query using **EXPLAIN PARTITIONS SELECT...** and found that my condition doesn't help: the query scans all partitions to fetch the data. I want it to use only the specific partitions for those dates.
It should only check the partitions **360,361,362,363,364,365,366,1**, i.e. **p360** to **p366** and **p1**.
Why is my query not working? And if this is not the right way to implement this, what is the right solution? How can I achieve this?
I know I could implement this in application code, but I need to do it in a single query.
Thanks...
|
I found the solution: I changed the logic for the **PartitionsId** column in my table. Initially I was storing **DayOfYear(reportTime)** in the **PartitionsId** column. Now I store **TO\_DAYS(reportTime)** in it instead.
So my table structure is as below:
```
CREATE TABLE `Report_Summary` (
`PartitionsID` int(10) unsigned NOT NULL,
`ReportTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`Amount` int(10) NOT NULL,
UNIQUE KEY `UNIQUE` (`PartitionsID`,`ReportTime`),
KEY `PartitionsID` (`PartitionsID`),
KEY `ReportTime` (`ReportTime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED
/*!50100 PARTITION BY HASH (PartitionsID)
PARTITIONS 366 */
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735928','2014-12-26 11:46:12','100');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735929','2014-12-27 11:46:23','50');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735930','2014-12-28 11:46:37','44');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735931','2014-12-29 11:46:49','15');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735932','2014-12-30 11:46:59','56');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735933','2014-12-31 11:47:22','68');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735934','2015-01-01 11:47:35','76');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735935','2015-01-02 11:47:43','88');
INSERT INTO `Report_Summary` (`PartitionsID`, `ReportTime`, `Amount`) VALUES('735936','2015-01-03 11:47:59','77');
```
Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/c2ca7f/3):
My query is:
```
EXPLAIN PARTITIONS
SELECT DATE(RS.ReportTime) AS ReportDate, SUM(RS.Amount) AS Total
FROM Report_Summary RS
WHERE RS.ReportTime >= '2014-12-26 00:00:00' AND RS.ReportTime <= '2015-01-01 23:59:59' AND
RS.PartitionsID BETWEEN TO_DAYS('2014-12-26 00:00:00') AND TO_DAYS('2015-01-01 23:59:59')
GROUP BY ReportDate;
```
The above query scans only the specific partitions I need and also uses the proper index, so changing the logic of the **PartitionsId** column gave me the correct solution.
Thanks for all the replies, and many thanks for everyone's time...
|
There are a few options that I can think of.
1. Create `case` statements that cover multi-year search criteria.
2. Create a `CalendarDays` table and use it to get the distinct list of `DayOfYear` for your `in` clause.
3. Variation of option 1 but using a `union` to search each range separately
**Option 1:** Using `case` statements. It is not pretty, but seems to work. There is a scenario where this option could search one extra partition, 366, if the query spans years in a non-leap year. Also I'm not certain that the optimizer will like the `OR` in the `RS.PartitionsID` filter, but you can try it out.
```
SELECT DATE(RS.ReportTime) AS ReportDate, SUM(RS.Amount) AS Total
FROM Report_Summary RS
WHERE RS.ReportTime >= @startDate AND RS.ReportTime <= @endDate
AND
(
RS.PartitionsID BETWEEN
CASE
WHEN
--more than one year, search all days
year(@endDate) - year(@startDate) > 1
--one full year difference
OR year(@endDate) - year(@startDate) = 1
AND DAYOFYEAR(@startDate) <= DAYOFYEAR(@endDate)
THEN 1
ELSE DAYOFYEAR(@startDate)
END
and
CASE
WHEN
--query spans the end of a year
year(@endDate) - year(@startDate) >= 1
THEN 366
ELSE DAYOFYEAR(@endDate)
END
--Additional query to search less than portion of next year
OR RS.PartitionsID <=
CASE
WHEN year(@endDate) - year(@startDate) > 1
OR DAYOFYEAR(@startDate) > DAYOFYEAR(@endDate)
THEN DAYOFYEAR(@endDate)
ELSE NULL
END
)
GROUP BY ReportDate;
```
**Option 2:** Using `CalendarDays` table. This option is much cleaner. The downside is you will need to create a new `CalendarDays` table if you do not have one.
```
SELECT DATE(RS.ReportTime) AS ReportDate, SUM(RS.Amount) AS Total
FROM Report_Summary RS
WHERE RS.ReportTime >= @startDate AND RS.ReportTime <= @endDate
AND RS.PartitionsID IN
(
SELECT DISTINCT DAYOFYEAR(c.calDate)
FROM dbo.calendarDays c
WHERE c.calDate >= @startDate and c.calDate <= @endDate
)
```
**EDIT: Option 3:** variation of option 1, but using `Union All` to search each range separately. The idea here is that since there is not an `OR` in the statement, that the optimizer will be able to apply the partition pruning. Note: I do not normally work in `MySQL`, so my syntax may be a little off, but the general idea is there.
```
DECLARE @startDate datetime, @endDate datetime;
DECLARE @rangeOneStart datetime, @rangeOneEnd datetime, @rangeTwoStart datetime, @rangeTwoEnd datetime;
SELECT @rangeOneStart :=
CASE
WHEN
--more than one year, search all days
year(@endDate) - year(@startDate) > 1
--one full year difference
OR year(@endDate) - year(@startDate) = 1
AND DAYOFYEAR(@startDate) <= DAYOFYEAR(@endDate)
THEN 1
ELSE DAYOFYEAR(@startDate)
END
, @rangeOneEnd :=
CASE
WHEN
--query spans the end of a year
year(@endDate) - year(@startDate) >= 1
THEN 366
ELSE DAYOFYEAR(@endDate)
END
, @rangeTwoStart := 1
, @rangeTwoEnd :=
CASE
WHEN year(@endDate) - year(@startDate) > 1
OR DAYOFYEAR(@startDate) > DAYOFYEAR(@endDate)
THEN DAYOFYEAR(@endDate)
ELSE NULL
END
;
SELECT t.ReportDate, sum(t.Amount) as Total
FROM
(
SELECT DATE(RS.ReportTime) AS ReportDate, RS.Amount
FROM Report_Summary RS
WHERE RS.PartitionsID BETWEEN @rangeOneStart AND @rangeOneEnd
AND RS.ReportTime >= @startDate AND RS.ReportTime <= @endDate
UNION ALL
SELECT DATE(RS.ReportTime) AS ReportDate, RS.Amount
FROM Report_Summary RS
WHERE RS.PartitionsID BETWEEN @rangeTwoStart AND @rangeTwoEnd
AND @rangeTwoEnd IS NOT NULL
AND RS.ReportTime >= @startDate AND RS.ReportTime <= @endDate
) t
GROUP BY ReportDate;
```
|
MySQL: Unable to select the records from specific partitions?
|
[
"",
"mysql",
"sql",
"select",
"partitioning",
"find-in-set",
""
] |
I have two tables, e.g.:
```
table1
---------------------
|id | name| isActive|
|---|-----|---------|
| 1 | dd | 1 |
| 1 | cc | 1 |
| 1 | zz | 1 |
| 1 | yy | 1 |
---------------------
table2
---------------------
|id | name| isActive|
|---|----|----------|
| 1 | ab | 1 |
| 1 | cd | 1 |
| 1 | ef | 1 |
| 1 | gh | 1 |
---------------------
```
The requirement is to get the count of both tables and print them into two separate columns in a BIRT report. I have tried this:
```
SELECT
COUNT(table2.`name`) Table1Count,
table1.`isActive` Table1IsActive ,
COUNT(table2.`name`) Table2Count,
table2.`isActive` Table2IsActive
FROM
`table1`,
`table2`
```
Its output is in two separate columns, but with a cross join:
```
Table1Count isActive Table2Count Table2IsActive
----------- -------- ----------- ----------------
43316 0 3536 1
```
I cannot use any join because there is no relationship between these two tables, although the output above is close to the desired one: I want the two tables' counts in separate columns.
By trying this I get two separate rows:
```
SELECT
*
FROM
(SELECT
COUNT(*) Table1Count,
table1.`isActive` Table1IsActive
FROM
`table1`
UNION
SELECT
COUNT(*) Table2Count,
table2.`isActive` Table2IsActive
FROM
`table2 `) AS a
```
Its output is in two separate rows, while my requirement is separate columns:
```
Table1Count Table1IsActive
------------ --------------
442 0
98 1
```
|
In your UNION, add extra columns for the columns from the other table. Then combine them using `MAX()`
```
SELECT MAX(Table1Count) AS Table1Count, MAX(Table1IsActive) AS Table1IsActive,
MAX(Table2Count) AS Table2Count, MAX(Table2IsActive) AS Table2IsActive
FROM (SELECT COUNT(*) Table1Count, table1.`isActive` Table1IsActive, 0 Table2Count, 0 Table2IsActive
FROM `table1`
UNION
SELECT 0 Table1Count, 0 Table1IsActive, COUNT(*) Table2Count, table2.`isActive` Table2IsActive
FROM `table2`) AS a
```
|
Try this:
```
SELECT (SELECT COUNT(*) FROM `table1`) AS Table1Count,
(SELECT COUNT(*) FROM `table2`) AS Table2Count;
```
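The scalar-subquery version above is easy to verify: each subquery contributes one column, and no join between the unrelated tables is needed. A minimal sketch with Python's `sqlite3` and made-up row counts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")
conn.execute("CREATE TABLE table2 (id INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in range(4)])
conn.executemany("INSERT INTO table2 VALUES (?)", [(i,) for i in range(7)])

# Each scalar subquery produces one column; no join between the tables
counts = conn.execute("""
    SELECT (SELECT COUNT(*) FROM table1) AS Table1Count,
           (SELECT COUNT(*) FROM table2) AS Table2Count
""").fetchone()
print(counts)
```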
|
Two tables count in to two seprate columns
|
[
"",
"mysql",
"sql",
"select",
"count",
"subquery",
""
] |
I'm using the following query, but it gives a `NULL` row along with the rest of the results:
```
SELECT CASE
WHEN csuseragent LIKE '%MSIE+8.0%' or csuseragent like '%Trident%' THEN 'IE'
WHEN csuseragent LIKE '%Firefox%' THEN 'Firefox'
WHEN csuseragent LIKE '%chrome%' THEN 'chrome'
END AS Browser ,count(*) as num
FROM logs.log
group by Browser
order by num desc
+---------+--------+
| Browser | num |
+---------+--------+
| chrome | 319607 |
| NULL | 125974 |
| Firefox | 124627 |
| IE | 56913 |
+---------+--------+
```
So how do I remove `NULL` and `125974`?
|
You can use a `having` clause to filter out groups:
```
SELECT CASE
WHEN csuseragent LIKE '%MSIE+8.0%' or csuseragent like '%Trident%' THEN 'IE'
WHEN csuseragent LIKE '%Firefox%' THEN 'Firefox'
WHEN csuseragent LIKE '%chrome%' THEN 'chrome'
END AS Browser ,count(*) as num
FROM logs.log
GROUP BY Browser
HAVING Browser IS NOT NULL
ORDER BY num DESC
```
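Because `HAVING` runs after grouping, it can filter on the computed `Browser` alias. A minimal sketch with Python's `sqlite3` and a few made-up user-agent strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (csuseragent TEXT)")
conn.executemany("INSERT INTO log VALUES (?)",
                 [("Mozilla Firefox",), ("Mozilla chrome",),
                  ("Mozilla chrome",), ("something-else",)])

# HAVING runs after GROUP BY, so it can drop the NULL group
rows = conn.execute("""
    SELECT CASE
             WHEN csuseragent LIKE '%Firefox%' THEN 'Firefox'
             WHEN csuseragent LIKE '%chrome%' THEN 'chrome'
           END AS Browser, COUNT(*) AS num
    FROM log
    GROUP BY Browser
    HAVING Browser IS NOT NULL
    ORDER BY num DESC
""").fetchall()
print(rows)
```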
|
Add a `WHERE Browser IS NOT NULL` clause
|
Mysql SELECT CASE LIKE give a NULL row
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"case",
""
] |
I have a table (table1) :
```
| Country1 | Country2 |
Canada USA
Canada Mexico
USA Mexico
USA Canada
.
.
.
etc
```
Then I have another table (table2):
```
| Country | Date | Amount |
Canada 01-01 1.00
Canada 01-02 0.23
USA 01-01 2.67
USA 01-02 5.65
USA 01-03 8.00
.
.
.
etc
```
I need a query which will combine the two tables into something like this:
```
| Country1 | Average_amount_when_combined_with_country2| Country2 | Average_amount_when_combined_with_country1 |
Canada 0.615 USA 4.16
USA 4.16 Canada 0.615
```
When country 1 occurs with country 2 in the first table, I would like to get the average amount for country 1 when combined with country 2, and vice versa: the average for country 2 when combined with country 1. I tried different join techniques but can't quite get it to work; I now think I can't use traditional joins alone and will need combinations of subqueries. I am completely stuck on how to get the average only when the two countries occur on the same date. The query below is as close as I can get, but it just gets the average for each country as a whole, not for the dates when the combination of the two countries occurs.
```
select country1, (select avg(amount) from table2 where country = country1) ,country2,(select avg(amount) from table2 where country = country2)
from table1
```
|
The following SELECT should solve your question, or at least give you an idea of how it works.
```
SELECT t1.Country1,
t1.Country2,
(
SELECT AVG(Amount)
FROM table2 t2
WHERE t2.Country = t1.Country1 AND
EXISTS
(
SELECT 1
FROM table2 t3
WHERE t3.Country = t1.Country2 AND
t2.Date = t3.Date
)
) Avg1,
(
SELECT AVG(Amount)
FROM table2 t2
WHERE t2.Country = t1.Country2 AND
EXISTS
(
SELECT 1
FROM table2 t3
WHERE t3.Country = t1.Country1 AND
t2.Date = t3.Date
)
) Avg2
FROM table1 t1
```
See the result [here](http://sqlfiddle.com/#!2/34968/3).
|
This brings back the same information but in a more logical fashion, in my opinion (your expected output shows the same thing on both rows, just reversed).
```
select t2a.country as country1,
t2b.country as country2,
avg(t2b.amount) as avg_amount
from table2 t2a
join table2 t2b
on t2a.date = t2b.date
join table1 t1
on t2a.country = t1.country1
and t2b.country = t1.country2
group by t2a.country, t2b.country
```
**Fiddle:** <http://sqlfiddle.com/#!2/34968/2/0>
Output:
```
| COUNTRY1 | COUNTRY2 | AVG_AMOUNT |
|----------|----------|------------|
| Canada | USA | 4.16 |
| USA | Canada | 0.615 |
```
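The self-join-on-date approach reproduces the question's expected averages exactly. A sketch with Python's `sqlite3` on the sample data (the join on `Date` keeps only days where both countries of a pair have a row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (Country1 TEXT, Country2 TEXT)")
conn.execute("CREATE TABLE table2 (Country TEXT, Date TEXT, Amount REAL)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [("Canada", "USA"), ("USA", "Canada")])
conn.executemany("INSERT INTO table2 VALUES (?, ?, ?)",
                 [("Canada", "01-01", 1.00), ("Canada", "01-02", 0.23),
                  ("USA", "01-01", 2.67), ("USA", "01-02", 5.65),
                  ("USA", "01-03", 8.00)])

# Self-join on Date keeps only days where both countries have a row
rows = conn.execute("""
    SELECT t2a.Country, t2b.Country, AVG(t2b.Amount)
    FROM table2 t2a
    JOIN table2 t2b ON t2a.Date = t2b.Date
    JOIN table1 t1 ON t2a.Country = t1.Country1 AND t2b.Country = t1.Country2
    GROUP BY t2a.Country, t2b.Country
    ORDER BY t2a.Country
""").fetchall()
print(rows)
```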
|
finding the average amount for each combination occurence
|
[
"",
"mysql",
"sql",
""
] |
In my database, I have the table: `Job`. Each job can contain tasks (one to many) - and the same task can be re-used on multiple jobs. Therefore there is a `Task` table and a `JobTask` (junction table for the many-to-many relationship). There is also the `Payment` table, that records payments received (with a `JobId` column to track which job the payment is related to). Potentially there could be more than one payment attributed to a job.
Using SQL Server 2012, I have a query that returns a brief summary of jobs (total value of the job, total amount received):
```
select j.JobId,
sum(t.Rate) as [TotalOwedP],
sum(p.Amount) as [TotalReceivedP]
from Job j
left outer join Payment p on j.JobId=p.JobId
left outer join JobTask jt on j.JobId=jt.JobId
left outer join Task t on jt.TaskId=t.TaskId
group by j.JobId
```
The problem with this query is that it's returning a much higher amount for "total received" than it should be. There must be something I'm missing here to cause this.
In my test database, I have one job. This job has six tasks assigned to it. It also has one payment assigned to it (£100 - stored as `10000`).
Using the above query on this data, the `TotalReceivedP` column comes to `60000`, not `10000`. It seems to be multiplying the payment amount for each task assigned to the job. Lo and behold, if I add another task to this job (so the number of tasks is now 7), the `TotalReceivedP` column shows `70000`.
There is a definite problem with my query, but I just can't work out what it is. Any keen eyes able to spot it? Seems to be something wrong with the join.
|
For each separate `JobId`, `SUM(p.Amount)` sums up the same Amount value for every `Task` record related to a `Job` record with this `JobId`. If 6 records are related to a specific `Job`, then `SUM(p.Amount)` will calculate the amount multiplied by 6; if 7 records are related, then the amount is multiplied by 7, and so on.
Since for each *Job* there is only one *Payment* amount, there is no need to perform a sum on *`p.Amount`*. Something like this will give you the desired result:
```
select j.JobId,
sum(t.Rate) as [TotalOwedP],
max(p.Amount) as [TotalReceivedP]
from #Job j
left outer join #Payment p on j.JobId=p.JobId
left outer join #JobTask jt on j.JobId=jt.JobId
left outer join #Task t on jt.TaskId=t.TaskId
group by j.JobId
```
**EDIT:**
Since the platform used is SQL Server a very neat way (IMHO) to get sum aggregates is `CTEs`:
```
;WITH TaskGroup AS (
SELECT JobId, SUM(Rate) AS [TotalOwedP]
FROM #Task t
INNER JOIN #JobTask jt ON t.TaskId = jt.TaskId
GROUP BY JobId
), PaymentGroup AS (
SELECT JobId, SUM(Amount) [TotalReceivedP]
FROM #Payment
GROUP BY JobId
)
SELECT tg.JobId, tg.TotalOwedP, pg.TotalReceivedP
FROM TaskGroup tg
LEFT JOIN PaymentGroup pg ON tg.JobId = pg.JobId
```
I am only guessing about the tables schema, but the above should give you what you want. The first CTE calculates `Rate` sums per `JobId`, the second CTE `Amount` sums per `JobId`, the final query uses both CTEs to put the results together in a single table.
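The fan-out fix can be demonstrated end to end: aggregating each side in its own CTE keeps the payment from being multiplied by the number of tasks. A sketch with Python's `sqlite3` on a guessed schema (one job, three tasks, one £100 payment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Payment (JobId INTEGER, Amount INTEGER);
    CREATE TABLE JobTask (JobId INTEGER, TaskId INTEGER);
    CREATE TABLE Task (TaskId INTEGER, Rate INTEGER);
    INSERT INTO Payment VALUES (1, 10000);
    INSERT INTO JobTask VALUES (1, 1), (1, 2), (1, 3);
    INSERT INTO Task VALUES (1, 50), (2, 60), (3, 70);
""")

# Aggregating each side in its own CTE avoids the row-multiplication fan-out
row = conn.execute("""
    WITH TaskGroup AS (
        SELECT jt.JobId, SUM(t.Rate) AS TotalOwedP
        FROM Task t JOIN JobTask jt ON t.TaskId = jt.TaskId
        GROUP BY jt.JobId
    ), PaymentGroup AS (
        SELECT JobId, SUM(Amount) AS TotalReceivedP
        FROM Payment GROUP BY JobId
    )
    SELECT tg.JobId, tg.TotalOwedP, pg.TotalReceivedP
    FROM TaskGroup tg LEFT JOIN PaymentGroup pg ON tg.JobId = pg.JobId
""").fetchone()
print(row)  # payment stays 10000 instead of being tripled
```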
|
Use a Sub-Select to calculate the SUM:
```
select j.JobId,
sum(t.Rate) as [TotalOwedP],
(SELECT SUM(p.Amount) FROM Payment p WHERE j.JobId=p.JobId) as [TotalReceivedP]
from Job j
left outer join JobTask jt on j.JobId=jt.JobId
left outer join Task t on jt.TaskId=t.TaskId
group by j.JobId
```
|
Query SUM of value from joined table - returns multiplied result
|
[
"",
"sql",
"sql-server",
""
] |
I have date in column of `nchar(255)` datatype in this format
```
mm/dd/yyyy
```
Now I want to change it into
```
dd/mm/yyyy
```
I don't think that `CONVERT` help me in this as I have tried different queries like below
```
SELECT CONVERT(NCHAR(20), col1, 103) from table1
SELECT CONVERT(VARCHAR(20), col1, 103) from table1
select Convert(varchar(10),CONVERT(date,col1,101),103) from table1
```
In my view 103 which is in UK format converts only from `yyyy-mm-dd`
So I tried to `CAST` my col1 but I get an error.
Am I missing something? Is there any easy way to do this?
|
I am answering my own question (just in case anyone wants to know the solution).
There was actually no problem with the query I was using, i.e.
```
select Convert(varchar(10),CONVERT(date,col1,101),103) from table1
```
The problem was with my nchar field.
It had a special character (in my case a space) in every entry, which caused an "out-of-range" error when I tried to convert or cast.
Removing the special character (the space) solved my problem.
|
Do this in two explicit steps. First, convert the string to a date using the 101 format (which is mm/dd/yyyy). Then explicitly convert the date back to a string using 103 (which is dd/mm/yyyy):
```
select convert(varchar(255), convert(date, datecol, 101), 103)
```
Two points. First, I don't think the result needs to be `nvarchar()` (the string only consists of numbers and a slash). Second, always include a length when using `varchar()` in SQL Server.
[Here](http://www.sqlfiddle.com/#!3/44c71/1) is a little SQL Fiddle.
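The two-step conversion (parse mm/dd/yyyy, re-emit as dd/mm/yyyy) is easy to mirror outside SQL Server. A sketch in plain Python; the `strip()` also covers the asker's own finding that a stray space breaks the conversion:

```python
from datetime import datetime

def us_to_uk(s: str) -> str:
    """Parse mm/dd/yyyy and re-emit as dd/mm/yyyy, mirroring
    CONVERT(date, col, 101) followed by CONVERT(varchar, ..., 103)."""
    return datetime.strptime(s.strip(), "%m/%d/%Y").strftime("%d/%m/%Y")

print(us_to_uk("12/01/2014"))  # -> 01/12/2014
```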
|
Selecting date in format dd/mm/yyyy from nchar column
|
[
"",
"sql",
"sql-server",
"t-sql",
"date",
""
] |
How can I write a constraint that checks the first two characters and the second two characters against different ranges? I have to store a time in the database but do not need a date, which is why I don't want to use the DATETIME type. I need only a TIME, but as far as I know there is no TIME datatype in Oracle.
My column looks like:
```
visit_hours VARCHAR(4)
```
so the first 2 characters will be 00 to 23, and the second 2 characters will be 00 to 59, so I can store a time from 00:00 to 23:59.
How can I do that? I already found something like regular expressions, but I don't know how to implement it in this example.
```
CONSTRAINT check_time CHECK (regexp_like(visit_hours, '?????') );
```
Any help appreciated.
|
I'd probably do something like
```
CONSTRAINT check_time CHECK( to_number( substr( visit_hours, 1, 2 ) ) BETWEEN 0 AND 23 AND
to_number( substr( visit_hours, 3, 2 ) ) BETWEEN 0 AND 59 );
```
`substr( visit_hours, 1, 2 )` gives you the first two characters in the string, `substr( visit_hours, 3, 2 )` gives you the third and fourth character. Convert both to numbers with the `to_number` function and then verify the range.
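The same check can be sketched on any engine with `substr` and a numeric cast. Below is a minimal demo using Python's `sqlite3`, with `CAST(... AS INTEGER)` standing in for Oracle's `to_number`. One caveat: unlike `to_number`, SQLite's `CAST` silently maps non-numeric text to 0, so combining this with the regexp idea from the question would tighten it further.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE visits (
        visit_hours TEXT,
        CONSTRAINT check_time CHECK (
            CAST(substr(visit_hours, 1, 2) AS INTEGER) BETWEEN 0 AND 23
            AND CAST(substr(visit_hours, 3, 2) AS INTEGER) BETWEEN 0 AND 59
        )
    )
""")

conn.execute("INSERT INTO visits VALUES ('2359')")  # 23:59 -> accepted
try:
    conn.execute("INSERT INTO visits VALUES ('2460')")  # 24:60 -> rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```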
|
> I have to put time in database, but i do not need a data, that's why I don't want to use DATETIME type
You don't have to worry about the `date` and `time` portions separately. A `DATE` data type holds both portions, and all you need is `to_char` with the proper `format model`.
So, you can have the column as -
`visit_hours DATE`
**Edit** As Jeffrey and Ben said, the constraint is not needed when data type is `DATE`. Just follow what I explained above.
|
Oracle SQL CHECK CONSTRAINT VARCHAR
|
[
"",
"sql",
"oracle",
"constraints",
"varchar",
""
] |
This is probably simple, but I'm looking for the raw SQL to perform an `INNER JOIN` that returns only one of the matching rows from the second table, based on some criteria.
Given two tables:
```
**TableOne**
ID Name
1 abc
2 def
**TableTwo**
ID Date
1 12/1/2014
1 12/2/2014
2 12/3/2014
2 12/4/2014
2 12/5/2014
```
I want to join but only return the latest date from the second table:
```
Expected Result:
1 abc 12/2/2014
2 def 12/5/2014
```
I can easily accomplish this in LINQ like so:
```
TableOne.Select(x=> new { x.ID, x.Name, Date = x.TableTwo.Max(y=>y.Date) });
```
So in other words, what does the above LINQ statement translate into in raw SQL?
|
You could join the first table with an aggregate query:
```
SELECT t1.id, d
FROM TableOne t1
JOIN (SELECT id, MAX([date]) AS d
FROM TableTwo
GROUP BY id) t2 ON t1.id = t2.id
```
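Here is a runnable sketch of this aggregate-join pattern, using SQLite through Python with the question's sample data. Dates are stored as ISO strings so `MAX` orders them correctly, and `[date]` becomes the quoted `"Date"` in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableOne (ID INTEGER, Name TEXT);
CREATE TABLE TableTwo (ID INTEGER, "Date" TEXT);
INSERT INTO TableOne VALUES (1,'abc'),(2,'def');
INSERT INTO TableTwo VALUES
 (1,'2014-12-01'),(1,'2014-12-02'),
 (2,'2014-12-03'),(2,'2014-12-04'),(2,'2014-12-05');
""")
rows = con.execute("""
SELECT t1.ID, t1.Name, t2.d
FROM TableOne t1
JOIN (SELECT ID, MAX("Date") AS d FROM TableTwo GROUP BY ID) t2
  ON t1.ID = t2.ID
ORDER BY t1.ID
""").fetchall()
print(rows)  # [(1, 'abc', '2014-12-02'), (2, 'def', '2014-12-05')]
```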
|
There are two ways to do this:
1. Using `GROUP BY` and `MAX()`:
```
SELECT one.ID,
one.Name,
MAX(two.Date)
FROM TableOne one
INNER JOIN TableTwo two on one.ID = two.ID
GROUP BY one.ID, one.Name
```
2. Using [`ROW_NUMBER()`](http://msdn.microsoft.com/en-GB/library/ms186734.aspx) with a CTE:
```
; WITH cte AS (
SELECT one.ID,
one.Name,
two.Date,
ROW_NUMBER() OVER (PARTITION BY one.ID ORDER BY two.Date DESC) as rn
FROM TableOne one
INNER JOIN TableTwo two ON one.ID = two.ID
)
SELECT ID, Name, Date FROM cte WHERE rn = 1
```
|
SQL: Inner Join return one row based on criteria
|
[
"",
"sql",
"sql-server",
"linq",
"select",
"inner-join",
""
] |
Problem statement:
There are two possible methods: 1) use one table and waste a column, or 2) use two tables.

Method 1: one table for all the information (table1), but its *role* column has a value in only one row; all other rows are empty.
Method 2: two tables, one for the mostly unique information (table2), plus a column whose only entry is the name of the table holding the rest of the values.
Summary: waste most of one column, or create a new table; which one is more efficient?
|
You are talking about **`Normalization`**; of course it's much better to remove ***redundant*** and ***null*** data.
you have this existing table:
```
table1
----------------------------
role | poster_id | name
varchar(n)| int | varchar(n)
```
you can break the above table into 2 **or even more** tables:
```
table1 table2
------------------ ---------------------------
rol_id | rol_name rol_id | poster_id | name
tinyint| varchar(n) tinyint| int | varchar(n)
```
Since **varchar(n)** takes much more space than **tinyint**, and your main table is **table2**, the second scenario is the more efficient one.
**EDIT:**
If you have **only** one role and all users have the admin role (not typical, but possible), then of course it's better for table2 (i.e. table\_for\_admin) to have 2 columns (poster, name), and you no longer need **table1**. Maybe you meant to have one table per role; in that case you will not have redundant data and you also don't need **table1**. You could just name your tables e.g. ***table\_for\_admins***, ***table\_for\_normalusers***, ***table\_for\_powerusers***, but this is not good for ***data consistency***.
|
The correct way to handle this is with a [junction table](http://www.utteraccess.com/wiki/index.php/Junction_Tables_%28Many-to-Many_Relationships%29). So the answer to your question is that neither of your proposals are the most efficient way to structure your database, you need *3* tables.

This will really help you out in the long run when you need to add new tables/columns because the roles will be in an independent table.
|
Is it more efficient to have a table with one of its columns mostly empty or make a new table and refer to it?
|
[
"",
"mysql",
"sql",
"database",
"optimization",
""
] |
The database has 3 tables, "items", "orders" and "orderitems", which represent an N:N relation. I need to find the orders that don't contain certain items. For example, I want to get all the orders that don't include apples, oranges, or bananas.
I've tried:
```
SELECT * FROM orders o
LEFT JOIN orderitems oi ON o.order_id=oi.order_id
LEFT JOIN items i ON oi.item_id=i.item_id
WHERE
i.name NOT IN ('banana', 'apple', 'orange')
```
It doesn't work. It returns all orders as long as they include at least one item different from 'banana', 'apple' and 'orange'.
I thought about doing a "NOT IN" a subselect of all orders containing those items, but it would be very slow because the subselect would be getting almost all orders (several millions of results).
|
**UPDATED**
Please try:
```
SELECT * FROM orders o
WHERE o.order_id
NOT IN
(
SELECT oi.order_id
FROM orderitems oi
INNER JOIN items i ON oi.item_id=i.item_id
WHERE
i.name IN ('banana', 'apple', 'orange')
)
```
|
Try using `group by` and `having`:
```
SELECT oi.order_id
FROM orderitems oi JOIN
items i
ON oi.item_id=i.item_id
GROUP BY oi.order_id
HAVING SUM(i.name IN ('banana', 'apple', 'orange')) = 0;
```
If you want the full order information, you can join the `orders` table back in.
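Here is a runnable sketch of the conditional-aggregation approach, using SQLite through Python with invented sample data. SQLite, like MySQL, evaluates the boolean `IN` expression to 0/1, so `SUM(...) = 0` works the same way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orderitems (order_id INTEGER, item_id INTEGER);
CREATE TABLE items (item_id INTEGER, name TEXT);
INSERT INTO items VALUES (1,'banana'),(2,'apple'),(3,'pear');
-- order 10 contains a banana; order 20 contains only a pear
INSERT INTO orderitems VALUES (10,1),(10,3),(20,3);
""")
rows = con.execute("""
SELECT oi.order_id
FROM orderitems oi JOIN items i ON oi.item_id = i.item_id
GROUP BY oi.order_id
HAVING SUM(i.name IN ('banana','apple','orange')) = 0
""").fetchall()
print(rows)  # [(20,)]
```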
|
sql select orders not containing certain items
|
[
"",
"mysql",
"sql",
""
] |
I have the two tables shown below.
news (table name)
```
NewsId NewsDate
1 25-03-2014
2 29-03-2014
```
newsImageGal (table name)
```
newsSr newsId newsImages
1 1 images/i.jpg
2 1 images/j.jpg
3 1 images/k.jpg
4 2 images/l.jpg
5 2 images/m.jpg
6 2 images/n.jpg
```
I want a result like:
```
NewsId NewsDate newsId newsImages
1 25-03-2014 1 images/i.jpg
2 29-03-2014 2 images/l.jpg
```
I have tried using a join query and GROUP BY, but it shows duplicate records. How can I fix this query?
|
Try this.
```
SELECT a.NewsId,
a.NewsDate,
b.newsImages
FROM news a
JOIN (SELECT Row_number()OVER(partition BY newsid ORDER BY newssr) rn,
*
FROM newsImageGal) b
ON a.NewsId = b.newsId
WHERE rn = 1
```
The key here is to find the first image that was added to the `newsImageGal` table per `newsid`,
so use a `window function` to assign a row number per `newsid` in order of `newsSr`:
```
SELECT Row_number()OVER(partition BY newsid ORDER BY newssr) rn,
*
FROM newsImageGal
```
From the above query you can see that `rn=1` is the first image added to `newsImageGal`. Join the result with the `news` table and filter with `rn=1`.
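A runnable sketch of this approach using SQLite through Python (window functions require SQLite 3.25+, which ships with recent Python builds); the table names and data follow the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE news (NewsId INTEGER, NewsDate TEXT);
CREATE TABLE newsImageGal (newsSr INTEGER, newsId INTEGER, newsImages TEXT);
INSERT INTO news VALUES (1,'2014-03-25'),(2,'2014-03-29');
INSERT INTO newsImageGal VALUES
 (1,1,'images/i.jpg'),(2,1,'images/j.jpg'),(3,1,'images/k.jpg'),
 (4,2,'images/l.jpg'),(5,2,'images/m.jpg'),(6,2,'images/n.jpg');
""")
rows = con.execute("""
SELECT a.NewsId, a.NewsDate, b.newsImages
FROM news a
JOIN (SELECT ROW_NUMBER() OVER (PARTITION BY newsId ORDER BY newsSr) AS rn, *
      FROM newsImageGal) b ON a.NewsId = b.newsId
WHERE b.rn = 1
ORDER BY a.NewsId
""").fetchall()
print(rows)  # the first image per news item
```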
|
You can write as
```
;WITH CTE AS
( SELECT N.NewsId
,N.NewsDate
,NIG.newsImages
,ROW_NUMBER() OVER (PARTITION BY N.NewsId ORDER BY NIG.newsSr ASC)
AS rownum
FROM news N
JOIN newsImageGal NIG ON N.NewsId = NIG.newsId
)
SELECT NewsId,
NewsDate,
newsImages
FROM CTE
WHERE rownum = 1
```
|
How to get distinct record from two tables in sql?
|
[
"",
"sql",
"sql-server",
""
] |
I have to assign the result of a dynamic SQL query to a variable. My query is:
```
DECLARE @ResultString NVARCHAR(MAX)
DECLARE @Qry NVARCHAR(MAX)
SET @Qry='SELECT Test FROM MTest22Dec WHERE ID = 1'
EXECUTE sp_executesql @Qry, N'@Result NVARCHAR(MAX) OUTPUT', @Result=@ResultString OUTPUT
PRINT @ResultString
```
But `@ResultString` prints an empty string, although there is a record in the database table.
What is wrong in this query?
thanks
|
You have to set the output variable somewhere, eg.
```
SET @Qry='SELECT @Result = Test FROM MTest22Dec WHERE ID = 1'
```
|
You need to assign the result of select to `variable` inside `Dynamic statement`.
Change you query like this.
```
DECLARE @Result NVARCHAR(MAX)
DECLARE @Qry NVARCHAR(MAX)
SET @Qry='SELECT @Result = Test FROM MTest22Dec WHERE ID = 1'
EXECUTE Sp_executesql @Qry, N'@Result NVARCHAR(MAX) OUTPUT', @Result OUTPUT
PRINT @Result
```
|
Set EXECUTE sp_executesql result into a variable in sql
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
""
] |
I'm re-asking this as I'm still confused, so I've also added some edits.
With a many-to-many relationship, an intersection table is used to resolve it into two one-to-many relationships. My problem: I currently have two tables, "Playlist" and "song". Each song can be present in many playlists, and each playlist can contain many songs. To resolve this I add a third table, "songlog", to resolve the many-to-many.
I understand this much; however, when it comes to adding values into the database, I can't get my head around populating the tables so the three relate together. For example, how would I add some (e.g. 4) songs into the playlist, using the intersection table? Also, how would I then record which songs are in which playlist? As I currently see it, I would use the primary keys of both "Playlist" and "song" in the intersection table, but I'm unsure as to where, how, and why.
I'm still confused, so a clear and basic explanation of this area of design may help! Many thanks.
|
The following is what your DDL (data definition language) would look like, with some example inserts:
```
create table playlist
(
playlist_id numeric(10) not null,
playlist_name varchar(25),
constraint p_id_pk primary key (playlist_id)
);
create table songs
(
song_id numeric(10) not null,
song_name varchar(25),
constraint s_id_pk primary key (song_id)
);
create table playlist_songs
(
playlist_id numeric(10) not null,
song_id numeric(10) not null,
song_order int,
constraint p_fk foreign key (playlist_id) references playlist(playlist_id),
constraint s_fk foreign key (song_id) references songs(song_id)
);
insert into playlist values (1, 'A cool playlist');
insert into playlist values (2, 'An okay playlist');
insert into songs values (1, 'ABC');
insert into songs values (2, 'DEF');
insert into songs values (3, 'GHI');
insert into songs values (4, 'JKL');
insert into playlist_songs values (1, 1, 1);
insert into playlist_songs values (1, 3, 2);
insert into playlist_songs values (2, 2, 1);
insert into playlist_songs values (2, 3, 2);
```
**Fiddle:** <http://sqlfiddle.com/#!4/0420f/5/0>
And an example of a SELECT query you might run:
```
select ps.playlist_id,
p.playlist_name,
s.song_id,
s.song_name,
ps.song_order
from playlist_songs ps
join songs s
on ps.song_id = s.song_id
join playlist p
on ps.playlist_id = p.playlist_id
order by ps.playlist_id, ps.song_order
```
Output:
```
| PLAYLIST_ID | PLAYLIST_NAME | SONG_ID | SONG_NAME | SONG_ORDER |
|-------------|------------------|---------|-----------|------------|
| 1 | A cool playlist | 1 | ABC | 1 |
| 1 | A cool playlist | 3 | GHI | 2 |
| 2 | An okay playlist | 2 | DEF | 1 |
| 2 | An okay playlist | 3 | GHI | 2 |
```
The purpose of the intermediary table is data normalization. It reduces the amount of data you store and allows greater control over the data.
In the system, you have a library of songs. Users can create playlists and assign songs to those playlists. You have songs, playlists, and assignments of songs to playlists.
That information could be combined in a denormalized fashion (such as in the query result above) but denormalized data is primarily meant for reporting. When you are storing data, you don't want to store the same data in multiple locations. You should not see the name of a playlist stored in two places. Or a song's name stored in two places. You would be storing redundant data. Also when something changes, say the size of a song (not currently included in your schema, but could be), it would obviously be ideal to be able to update just 1 row of the songs table, as opposed to potentially thousands of rows on any sort of denormalized table.
Denormalized tables do have their place in reporting environments, because they involve fewer table joins when selecting data for reporting and analysis. The tables are wider and contain information users frequently look for in their queries. Likewise, partitioning can also yield significant performance gains when running certain queries.
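The whole flow above can be exercised end-to-end. Here is a runnable sketch using SQLite through Python, with the DDL translated to SQLite-friendly types:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE playlist (playlist_id INTEGER PRIMARY KEY, playlist_name TEXT);
CREATE TABLE songs (song_id INTEGER PRIMARY KEY, song_name TEXT);
CREATE TABLE playlist_songs (
  playlist_id INTEGER REFERENCES playlist(playlist_id),
  song_id INTEGER REFERENCES songs(song_id),
  song_order INTEGER
);
INSERT INTO playlist VALUES (1,'A cool playlist'),(2,'An okay playlist');
INSERT INTO songs VALUES (1,'ABC'),(2,'DEF'),(3,'GHI'),(4,'JKL');
INSERT INTO playlist_songs VALUES (1,1,1),(1,3,2),(2,2,1),(2,3,2);
""")
rows = con.execute("""
SELECT p.playlist_name, s.song_name, ps.song_order
FROM playlist_songs ps
JOIN songs s ON ps.song_id = s.song_id
JOIN playlist p ON ps.playlist_id = p.playlist_id
ORDER BY ps.playlist_id, ps.song_order
""").fetchall()
print(rows)
```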
|
You must distinguish between inserting songs and playlists and linking them together. First, always insert the missing songs and playlists:
```
INSERT INTO song (SongID, Name) VALUES (7, 'New song 1');
INSERT INTO song (SongID, Name) VALUES (8, 'New song 2');
INSERT INTO playlist (PlaylistID, Name) VALUES (15, 'The new playlist');
```
Once your songs and playlists are complete, you can link them together:
```
INSERT INTO songlog (SongID, PlaylistID) VALUES (7, 15);
INSERT INTO songlog (SongID, PlaylistID) VALUES (8, 15);
```
This adds song 7 and song 8 to playlist 15. You can add a song to another playlist as well (assuming that playlist #33 already exists in the playlists table):
```
INSERT INTO songlog (SongID, PlaylistID) VALUES (7, 33);
```
You don't have to insert the song into the songs table again. Each song and each playlist exists exactly once.
---
This assumes the following table structures
```
Table song
----------
PK SongID
Name
Table playlist
--------------
PK PlaylistID
Name
Table songlog
-------------
PK FK SongID
PK FK PlaylistID
```
`PK` means *Primary Key* and `FK` means *Foreign Key*.
|
resolution of many to many in SQL
|
[
"",
"sql",
"database",
"oracle",
"oracle11g",
""
] |
I am working with a program named Zoho Reports and trying to rename values in the file I uploaded; Zoho itself cannot do the renaming. I want to use SQL, preferably Postgres.
`phonenumber`:
```
1. 5203123333
2. 8323125547
3. 6025456564
4. 43654xxx3
```
I want them to display as:
```
1. TUCSON,AZ
2. Houston,TX
3. Chandler,AZ
4. WHATEVER
```
I have used this:
```
SELECT "TO", replace("TO",
'4802xxx82', 'Chandler'), replace("TO",
'832xxxx84', 'Houston, TX') FROM "Craiglist-December Calls"
```
This works, but it returns extra columns; I need just one column.
Then I tried an `IF` condition, but I get an error when I try more than three renames.
```
if("To"='83xxxx284','Houston TX',if("To"='818xxxx6','Santa Clarita CA','0'))
```
Trying to replace about 8 numbers with names, so I can graph them by name not number.
Is there a way to rename all these number in the same column with Postgres?
|
As I understand it you want *"the same column"*, meaning the same column *name*. So use a [column alias](http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-FROM). And use a ["simple" `CASE` statement](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-CASE):
```
SELECT CASE "TO" -- "simple" CASE
WHEN '4802xxx82' THEN 'Chandler'
WHEN '832xxxx84' THEN 'Houston, TX'
WHEN ... -- 6 more ...
ELSE "TO"
END AS "TO" -- column alias (output column name)
FROM "Craiglist-December Calls";
```
Your column names are not chosen wisely. Never use keywords as identifiers. If you double-quote identifiers to allow illegal names, you need to double-quote the (now case-sensitive!) name for the rest of its life. `"To"` is *not* a valid reference to `"TO"`.
Start with the [manual about identifiers](http://www.postgresql.org/docs/current/interactive/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS).
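A runnable sketch of the simple `CASE` mapping using SQLite through Python, reusing the question's redacted phone values plus one invented unmatched value to show the `ELSE` branch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE calls ("TO" TEXT);
INSERT INTO calls VALUES ('4802xxx82'),('832xxxx84'),('5551234');
""")
rows = con.execute("""
SELECT CASE "TO"
         WHEN '4802xxx82' THEN 'Chandler'
         WHEN '832xxxx84' THEN 'Houston, TX'
         ELSE "TO"          -- unmatched values pass through unchanged
       END AS "TO"
FROM calls
ORDER BY rowid
""").fetchall()
print(rows)  # [('Chandler',), ('Houston, TX',), ('5551234',)]
```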
|
I think it is better to have a second table with "translations": a first column (pattern) like `83____284` and a second column (display) with the new name, `Houston TX`. Then select like this:
```
SELECT t2.display FROM orig_table t1, translate_table t2 WHERE t1.phonenumber LIKE t2.pattern
```
PS: PostgreSQL is an abbreviation of "Postgres SQL" (with the S).
|
Rename items in PostgreSQL or any SQL
|
[
"",
"mysql",
"sql",
"postgresql",
"zoho",
""
] |
I have a table **Questions**. How can I get a count of all questions asked in a week?
More generically, how can I bucket records by the week they were created in?
```
Questions
id created_at title
----------------------------------------------------
1 2014-12-31 09:43:42 "Add things"
2 2013-11-23 02:48:55 "How do I ruby?"
3 2015-01-15 15:11:19 "How do I python?"
...
```
I'm using SQLite, but PG answers are fine too.
Or if you have the answer using Rails ActiveRecord, that is amazing, but not required.
I've been trying to use DATEPART() but haven't come up with anything successful yet: <http://msdn.microsoft.com/en-us/library/ms174420.aspx>
|
In postgreSQL it's as easy as follows:
```
SELECT id, created_at, title, date_trunc('week', created_at) created_week
FROM Questions
```
If you wanted to get the # of questions per week, simply do the following:
```
SELECT date_trunc('week', created_at) created_week, COUNT(*) weekly_cnt
FROM Questions
GROUP BY date_trunc('week', created_at)
```
Hope this helps. Note that `date_trunc()` will return a date and not a number (i.e., it won't return the ordinal number of the week in the year).
**Update:**
Also, if you wanted to accomplish both in a single query you could do so as follows:
```
SELECT id, created_at, title, date_trunc('week', created_at) created_week
, COUNT(*) OVER ( PARTITION BY date_trunc('week', created_at) ) weekly_cnt
FROM Questions
```
In the above query I'm using `COUNT(*)` as a window function and partitioning by the week in which the question was created.
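Since the question mentions SQLite, which has no `date_trunc`, the closest equivalent there is `strftime` with a year-week format (`%W` counts weeks starting on Monday). A runnable sketch with invented sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE questions (id INTEGER, created_at TEXT);
INSERT INTO questions VALUES
 (1,'2014-12-31 09:43:42'),
 (2,'2013-11-23 02:48:55'),
 (3,'2015-01-15 15:11:19'),
 (4,'2015-01-16 10:00:00');  -- same week as row 3
""")
rows = con.execute("""
SELECT strftime('%Y-%W', created_at) AS created_week, COUNT(*)
FROM questions
GROUP BY created_week
ORDER BY created_week
""").fetchall()
print(rows)
```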
|
If the `created_at` field is already indexed, I would simply look for all rows with a `created_at` value between X and Y. That way the index can be used.
For instance, to get rows with a `created_at` value in the 3rd week of 2015, you would run:
```
select *
from questions
where created_at between '2015-01-11' and '2015-01-17'
```
This would allow the index to be used.
If you want to be able to specify a week in the where clause, you could use the `date_part` or `extract` functions to add a column to this table storing the year and week #, and then index that column so that queries can take advantage of it.
If you don't want to add the column, you could of course use either function in the where clause and query against the table, but you won't be able to take advantage of any indexes.
Because you mentioned not wanting to add a column to the table, I would recommend adding a function based index.
For example, if your ddl were:
```
create table questions
(
id int,
created_at timestamp,
title varchar(20)
);
insert into questions values
(1, '2014-12-31 09:43:42','"Add things"'),
(2, '2013-11-23 02:48:55','"How do I ruby?"'),
(3, '2015-01-15 15:11:19','"How do I python?"');
create or replace function to_week(ts timestamp)
returns text
as 'select concat(extract(year from ts),extract(week from ts))'
language sql
immutable
returns null on null input;
create index week_idx on questions (to_week(created_at));
```
You could run:
```
select q.*, to_week(created_at) as week
from questions q
where to_week(created_at) = '20153';
```
And get:
```
| ID | CREATED_AT | TITLE | WEEK |
|----|--------------------------------|--------------------|-------|
| 3 | January, 15 2015 15:11:19+0000 | "How do I python?" | 20153 |
```
(reflecting the third week of 2015, ie. `'20153'`)
**Fiddle:** <http://sqlfiddle.com/#!15/c77cd/3/0>
You could similarly run:
```
select q.*,
concat(extract(year from created_at), extract(week from created_at)) as week
from questions q
where concat(extract(year from created_at), extract(week from created_at)) =
'20153';
```
**Fiddle:** <http://sqlfiddle.com/#!15/18c1e/3/0>
But it would not take advantage of the function based index, because there is none. In addition, it would not use any index you might have on the `created_at` field because, while that field might be indexed, you really aren't searching on that field. You are searching on the result of a function applied against that field. So the index on the column itself cannot be used.
If the table is large you will either want a function based index or a column holding that week that is itself indexed.
|
Get a count of records created each week in SQL
|
[
"",
"sql",
"ruby-on-rails",
"sqlite",
"postgresql",
"rails-activerecord",
""
] |
I have three tables Company, User , Address .
**company** has five fields
* comp\_id(PK)
* comp\_name
* user\_id(FK)
* to\_address\_id(FK)
* from\_address\_id(FK)
**Address** has fields:
* address\_id(PK)
* city
* state
User has fields
user\_id(PK),user\_name
to\_address\_id and from\_address\_id are both foreign keys that reference the same table, Address, and map to address\_id.
Now the question is how to retrieve the from\_address; I am able to retrieve the to\_address with the query below.
Here I can get both to\_address\_id and from\_address\_id from the company table, but how do I get both addresses in the same query based on the address\_id:
> ```
> SELECT s.comp_name,u.user_name,a.city
> FROM
> company s
> JOIN
> User u
> JOIN
> Address a
> ON
> s.user_id = u.user_id
> AND s.to_address_id = a.address_id
> WHERE
> s.user_id = 1001;
> ```
|
Solution: you need to JOIN twice, once for each FK you're interested in:
```
SELECT s.comp_name, u.user_name, a_from.city AS city_from, a_to.city AS city_to
FROM company s
LEFT JOIN User u ON s.user_id = u.user_id
LEFT JOIN Address a_from ON s.from_address_id = a_from.address_id
LEFT JOIN Address a_to ON s.to_address_id = a_to.address_id
WHERE s.user_id = 1001;
```
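A runnable sketch of the double join using SQLite through Python, with invented sample rows; note the two independent aliases over the same `Address` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Address (address_id INTEGER PRIMARY KEY, city TEXT, state TEXT);
CREATE TABLE User (user_id INTEGER PRIMARY KEY, user_name TEXT);
CREATE TABLE company (comp_id INTEGER, comp_name TEXT, user_id INTEGER,
                      to_address_id INTEGER, from_address_id INTEGER);
INSERT INTO Address VALUES (1,'Austin','TX'),(2,'Denver','CO');
INSERT INTO User VALUES (1001,'alice');
INSERT INTO company VALUES (1,'Acme',1001,1,2);
""")
rows = con.execute("""
SELECT s.comp_name, u.user_name, a_from.city AS city_from, a_to.city AS city_to
FROM company s
LEFT JOIN User u ON s.user_id = u.user_id
LEFT JOIN Address a_from ON s.from_address_id = a_from.address_id
LEFT JOIN Address a_to ON s.to_address_id = a_to.address_id
WHERE s.user_id = 1001
""").fetchall()
print(rows)  # [('Acme', 'alice', 'Denver', 'Austin')]
```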
|
Try the following:
```
SELECT s.comp_name
,u.user_name
,a1.city
,a2.city
FROM company s
JOIN User u ON s.user_id = u.user_id
JOIN Address a1 ON s.to_address_id = a1.address_id
JOIN Address a2 ON s.from_address_id = a2.address_id
WHERE s.user_id = 1001;
```
|
How to access data from foreign key table
|
[
"",
"mysql",
"sql",
"database",
"select",
"join",
""
] |
I have the following two tables in mysql:
users:
```
+--------+-----------+
| userId | userName |
+--------+-----------+
| 1 | magnus |
| 2 | fabiano |
| 3 | alexander |
| 4 | veselin |
+--------+-----------+
```
games:
```
+--------+---------+---------+
| gameId | userId1 | userId2 |
+--------+---------+---------+
| 1 | 1 | 2 |
| 2 | 1 | 3 |
| 3 | 2 | 3 |
| 4 | 2 | 4 |
+--------+---------+---------+
```
How can I construct a single query that produces the output below, i.e. fabiano's opponents:
output:
```
+--------+-----------+
| gameId | userName |
+--------+-----------+
| 1 | magnus |
| 3 | alexander |
| 4 | veselin |
+--------+-----------+
```
Edit1:
This was what I was trying and I wasn't able to get them into a single query:
* select fabiano's opponents [select \* from games where 2 in (userId1, userId2);]
* read each of the rows, and check which of them is fabiano(2), and select the other userId
* from the userIds of these opponents, get their name from users table
Edit2:
Inspired by the answers below, I wrote this (they work):
```
-- NO JOIN
select x.gameId, users.userName from
(
select gameId, userId2 as id from games where userId1=2
UNION
select gameId, userId1 as id from games where userId2=2
) as x, users
where users.userId = id;
-- NO JOIN, NO UNION
select x.gameId, users.userName from (
SELECT g.gameId,
CASE WHEN userId1 = 2
THEN userId2
WHEN userId2 =2
THEN userId1
END AS id
FROM games g) as x, users
where users.userId = id;
```
|
You can union the two sets of data together, viz all games where Fabiano is User 1, with all games that he is in the role of User 2:
```
SELECT x.Opponent
FROM
(
SELECT u.Name AS Opponent
FROM games g
INNER JOIN users u
ON g.userId2 = u.UserId
WHERE g.UserId1 = 2 -- Fabiano
UNION
SELECT u.Name
FROM games g
INNER JOIN users u
ON g.userId1 = u.UserId
WHERE g.UserId2 = 2 -- Fabiano
) AS x;
```
At this point I assume that Fabiano can't simultaneously be both `User1` and `User2`, as otherwise we would need to consider [UNION ALL vs UNION DISTINCT](http://dev.mysql.com/doc/refman/5.0/en/union.html) :)
This could also be tidied up a bit into:
```
SELECT x.Opponent
FROM
(
SELECT u.Name AS Opponent, g.UserId1 AS PlayerId
FROM games g
INNER JOIN users u
ON g.userId2 = u.UserId
UNION
SELECT u.Name, g.UserId2 AS PlayerId
FROM games g
INNER JOIN users u
ON g.userId1 = u.UserId
) AS x
WHERE x.PlayerId = 2; -- Fabiano
```
|
Try something like:
```
SELECT `gamess`.gameId, `users`.userName
FROM users INNER JOIN
(SELECT gameId, userId2 as userId
FROM games
WHERE userId1 = 2
UNION
SELECT gameId, userId1 as userId
FROM games
WHERE userId2 = 2) AS gamess
ON `gamess`.userId = `users`.userId
```
|
mysql select statement with multiple where/conditions
|
[
"",
"mysql",
"sql",
"select",
""
] |
Example code--SQL within SAS:
```
proc sql;
create table add_losses as
select *,
sum(bb.gross_loss) as gl format = comma15.2,
count(bb.gross_loss) as n_losses
from add_startend as aa
left join LED as bb
on (aa.process_name = bb.process_name and
aa.group_id = bb.group_code and
aa.start_date le bb.first_loss_posting_date le aa.end_date)
group by aa.process_name, aa.group_id, aa.start_date, aa.end_date
order by aa.process_name, aa.group_id, aa.start_date, aa.end_date;
quit;
```
Example data and desired output below:
Table AA
```
variable 1 variable 2 start date end date
AAAA BBB 1/1/2010 6/1/2010
```
Table BB
```
variable 1 variable 2 Date losses
AAAA BBB 1/5/2010 100
AAAA BBB 2/1/2010 100
AAAA BBB 3/5/2010 100
AAAA BBB 4/23/2010 100
AAAA BBB 5/11/2010 100
AAAA BBB 5/25/2010 100
```
Table YY (current output)
```
variable 1 variable 2 Date gross_loss gl n_losses
AAAA BBB 1/5/2010 100 600 6
AAAA BBB 2/1/2010 100 600 6
AAAA BBB 3/5/2010 100 600 6
AAAA BBB 4/23/2010 100 600 6
AAAA BBB 5/11/2010 100 600 6
AAAA BBB 5/25/2010 100 600 6
```
Table XX (desired output)
```
variable 1 variable 2 start date end date gl n_losses
AAAA BBB 1/1/2010 6/1/2010 600 6
```
The problem is the current code creates additional observations. How do I keep the same number of rows and all variables in table AA while adding on the columns `gl` and `n_losses`?
|
You can use a `BETWEEN` clause for this type of condition:
```
proc sql;
create table add_losses as
select aa.*, bb.gl, bb.n_losses
from
add_startend as aa
left join
(
select aa.process_name, aa.group_id, aa.start_date, aa.end_date,
sum(bb.gross_loss) as gl format = comma15.2,
count(bb.gross_loss) as n_losses
from add_startend as aa
left join LED as bb
on (aa.process_name = bb.process_name and
aa.group_id = bb.group_code and
bb.first_loss_posting_date between aa.start_date and aa.end_date)
group by aa.process_name, aa.group_id, aa.start_date, aa.end_date
) bb
on aa.process_name = bb.process_name
and aa.group_id = bb.group_id
and aa.start_date = bb.start_date
and aa.end_date = bb.end_date
order by aa.process_name, aa.group_id, aa.start_date, aa.end_date;
quit;
```
|
Google for an SQL tutorial--one that covers `group by` and the aggregate functions `count()` and `sum()`.
I suggest the following changes:
* When using a group by statement, explicitly list the variables you intend to group by (instead of just using a `*`) in the SELECT statement.
* Use the between operator for date comparisons as it is easier to understand
* Make sure that every column in your select statement is either part of an aggregate function or in your group by clause.
Sample Data:
```
data a;
informat start_date end_date mmddyy10.;
format start_date end_date yymmdd10.;
input variable_1 $
variable_2 $
start_date
end_date
;
datalines;
AAAA BBB 1/1/2010 6/1/2010
run;
data b;
informat date mmddyy10.;
format date yymmdd10.;
input variable_1 $
variable_2 $
date
losses
;
datalines;
AAAA BBB 1/5/2010 100
AAAA BBB 2/1/2010 100
AAAA BBB 3/5/2010 100
AAAA BBB 4/23/2010 100
AAAA BBB 5/11/2010 100
AAAA BBB 5/25/2010 100
run;
```
Final Query:
```
proc sql;
create table add_losses as
select a.variable_1,
a.variable_2,
a.start_date,
a.end_date,
count(b.variable_1) as n_losses,
sum(b.losses) as gl format=comma15.2
from a
left join b on a.variable_1 eq b.variable_1
and a.variable_2 eq b.variable_2
and b.date between a.start_date and a.end_date
group by 1,2,3,4
order by 1,2,3,4
;
quit;
```
I've used shorthand aliases in the group by statement as it's easier to write, maintain and understand. You could also explicitly list the columns using them:
```
group by a.variable_1, a.variable_2, a.start_date, a.end_date
```
|
Sum and count variables while executing a left join
|
[
"",
"sql",
"sas",
"left-join",
""
] |
```
SELECT jour
FROM reservations
WHERE id_user = 57 AND SUBSTR(jour,10) = '2014_08_14' ORDER BY jour
```
returns nothing...
I have a record in the database that matches the codition:

|
Replace `SUBSTR(jour,10)` with `SUBSTR(jour,1,10)`.
Assumption: `jour` is a `varchar` type column. If you are storing dates as `varchar`, **do not do that!**
Explanation: My solution uses the `(str,pos,len)` overload of the `SUBSTR` function. Arguments 2 and 3 are respectively, the position to start from and the length of the substring.
[Documentation](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substr)
|
[`substr`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substr)'s first argument is the **position to start from**. Hence, `SUBSTR(jour,10)` returns `_AM`, and you get an empty result set. Instead, you should use the three-argument variant:
```
SELECT jour
FROM reservations
WHERE id_user = 57 AND
SUBSTR(jour, 1, 10) = '2014_08_14' -- Here
ORDER BY jour
```
To make your query more readable and ensure such mistakes aren't repeated, you could use the following variant:
```
SELECT jour
FROM reservations
WHERE id_user = 57 AND
SUBSTR(jour FROM 1 FOR 10) = '2014_08_14' -- Here
ORDER BY jour
```
|
SUBSTR returns no record (mysql)
|
[
"",
"mysql",
"sql",
"select",
"substr",
""
] |
I have a LawfirmUser table with `FirstName`, `LastName` and `userId` columns.
Below is some sample data from LawfirmUser:
```
FirstName LastName userId
---------- ----------- ----------
Demo test1 1
Demo test1 2
Demo test1 3
Demo test2 4
Demo test2 5
Demo test3 6
Demo test4 7
```
I am trying to write a script to find all the records having the same first and last names but different userIds.
the query should return below data
```
FirstName LastName userId
---------- ----------- ----------
Demo test1 1
Demo test1 2
Demo test1 3
Demo test2 4
Demo test2 5
```
I'm fairly new to SQL; could someone please help me with the query?
|
Here is how it works
**CTE1, CTE2, etc. are called Common Table Expressions in SQL Server. They act as temporary result sets**
**that are not stored permanently and are discarded after the query executes.**
**1.** **CTE1** - Selects the distinct rows from the table. For example, if the table has rows with the same `FirstName`, `LastName` *and* `UserId`, we should skip those duplicates, since you need records with the same `FirstName` and `LastName` but different `UserId`s.
**2.** **CTE2** - Selects from `CTE1` and counts the rows that share the same `FirstName` and `LastName` across different `UserId`s. The following image explains the use of `COUNT(*) OVER(PARTITION BY FirstName,LastName)`:

**3.** Finally, we take the rows where `CNT>1`.
Here is the working query.
```
;WITH CTE1 AS
(
SELECT DISTINCT * FROM YOURTABLE
)
,CTE2 AS
(
SELECT *, COUNT(*) OVER(PARTITION BY FirstName,LastName) CNT
FROM CTE1
)
SELECT FirstName,LastName,userId
FROM CTE2
WHERE CNT>1
```
* [SQL FIDDLE](http://sqlfiddle.com/#!3/b48a49/1)

|
Use `Window Function`
```
;WITH cte
AS (SELECT *,
Row_number()OVER(PARTITION BY FirstName, LastName ORDER BY userid) rn
FROM YOURTABLE)
SELECT *
FROM cte a
WHERE EXISTS (SELECT 1
FROM cte b
WHERE a.FirstName = b.FirstName
AND a.LastName = b.LastName
AND rn > 1)
```
|
Sql Script to find similar user records
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a SQL query that pulls the customer's full name from the customers table based on the customer ID in the sales table, and binds the result to an ASP.NET ListView. I now want to search the records by the customer's full name. I used the query below, but it keeps telling me that the column "Fullname" is an invalid column name. How do I modify this query to work?
```
SELECT
tbl_Sales.SaleID, tbl_Sales.CustomerID,
(SELECT Firstname + ' ' + Lastname
FROM tbl_Customers
WHERE CustomerID = tbl_Sales.CustomerID) AS CustomerName,
tbl_Sales.Price
FROM
tbl_Sales
WHERE
CustomerName LIKE '%John%'
```
Thanks in advance.
|
You cannot use the alias name in the `WHERE` clause of the same `SELECT` statement, because `WHERE` is processed before `SELECT`.
Make the original query a subselect and use the alias in the outer `WHERE` clause. Try this:
```
SELECT *
FROM (SELECT tbl_Sales.SaleID,
tbl_Sales.CustomerID,
(SELECT Firstname + ' ' + Lastname
FROM tbl_Customers
WHERE CustomerID = tbl_Sales.CustomerID) AS CustomerName,
tbl_Sales.Price
FROM tbl_Sales) A
WHERE CustomerName LIKE '%John%'
```
or use `Cross apply`
```
SELECT tbl_Sales.SaleID,
tbl_Sales.CustomerID,
t.CustomerName,
tbl_Sales.Price
FROM tbl_Sales
CROSS apply (SELECT Firstname + ' ' + Lastname AS CustomerName
FROM tbl_Customers
WHERE CustomerID = tbl_Sales.CustomerID) t
WHERE t.CustomerName LIKE '%John%'
```
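For reference, the derived-table approach can be sketched with Python's built-in SQLite (sample data invented; SQLite concatenates strings with `||` instead of `+`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl_Customers (CustomerID INTEGER, Firstname TEXT, Lastname TEXT);
CREATE TABLE tbl_Sales (SaleID INTEGER, CustomerID INTEGER, Price REAL);
INSERT INTO tbl_Customers VALUES (1,'John','Doe'), (2,'Jane','Roe');
INSERT INTO tbl_Sales VALUES (10,1,99.0), (11,2,45.0);
""")

# The alias CustomerName only becomes visible to WHERE once the
# original query is wrapped as a derived table.
rows = con.execute("""
SELECT *
FROM (SELECT s.SaleID,
             s.CustomerID,
             (SELECT Firstname || ' ' || Lastname
              FROM tbl_Customers c
              WHERE c.CustomerID = s.CustomerID) AS CustomerName,
             s.Price
      FROM tbl_Sales s) A
WHERE CustomerName LIKE '%John%'
""").fetchall()
print(rows)
```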
|
You are not allowed to reference an **alias** in the **WHERE** clause; use a **JOIN** instead.
Try this:
```
SELECT S.SaleID, S.CustomerID, (C.Firstname + ' ' + C.Lastname) AS CustomerName, S.Price
FROM tbl_Sales S
INNER JOIN tbl_Customers C ON S.CustomerID = C.CustomerID
WHERE (C.Firstname + ' ' + C.Lastname) LIKE '%John%'
```
|
Search 'AS' alias in SQL Server
|
[
"",
"sql",
"sql-server",
"select",
"join",
""
] |
This is using Oracle SQL.
**Edited my code, I had pasted it wrong.**
So I have two tables: class and event. Each class has a class\_id that is also present in the event table. Some classes can exist as multiple events.
* First I need to order the classes by price descending, uses only the
"class" table.
* Then I need to select only the top 4 most expensive classes from the
previous result.
* Then I need to display all of the information from the event table
for only the top 4 most expensive classes. This does not mean there will only be 4 rows, because as I said, some classes have multiple events.
I managed to get the first TWO bullet points to work, but as soon as I nest the first two SELECT statements into the final SELECT, it displays the events for ALL of the classes, even the ones that are not in the top 4.
```
SELECT x.*,
c.price,
c.class_name
FROM event x, class c
WHERE EXISTS (SELECT *
FROM (SELECT price, class_name
FROM class
ORDER BY price DESC)
WHERE ROWNUM <=4)
AND c.class_id=x.class_id;
```
So everything after the WHERE EXISTS works, but it's like it's ignoring the WHERE EXISTS.
|
perhaps you meant this:
```
SELECT x.*,
c.price,
c.class_name
FROM event x, class c
WHERE (c.price,c.class_name) IN (
SELECT price, class_name
FROM (SELECT price, class_name
FROM class
ORDER BY price DESC)
WHERE ROWNUM <=4)
AND c.class_id=x.class_id;
```
|
That is because it *is* ignoring the `exists`. `Exists` returns true or false (or NULL). Your clause is returning, normally, four rows. These exist. So, everything gets returned. Perhaps you want this:
```
SELECT x.*, c.price, c.class_name
FROM event x JOIN
(SELECT *
FROM (SELECT class_id, price, class_name
FROM class
ORDER BY price DESC
) c
WHERE ROWNUM <= 4
) c
ON c.class_id = x.class_id;
```
|
Selecting all details from Table A but only for the values that exist in Table B's top N rows
|
[
"",
"sql",
"oracle",
""
] |
I don't know how to title this problem. Correct me if you have better words.
I have two tables, Users and Posts.
**Users:**
```
id | username | password | ...
```
**Posts:**
```
id | author_id | title | content | ...
```
Now I want to list the "most active" users - the users who have written the most posts. And specifically, I want the `top 10` result.
```
SELECT u.username, COUNT(p.id) AS count
FROM Posts p, Users u
WHERE u.id=p.author_id
GROUP BY p.author_id
ORDER BY count DESC
LIMIT 10;
```
I can get the expected result. However, the ranking may not be "*fair*" if some users have same number of posts.
E.g., I may get results like:
```
User 1 | 14
User 2 | 13
...
User 9 | 4
User 10 | 4
```
Here, there are actually several more users who have `4` posts.
So, the `top 10` could be not exactly `10` results. How can I get a more "*fair*" result that contains extra rows of users who have `4` posts?
|
This should do it: the subquery finds how many posts the user in 10th place has, and the outer query then returns every user with at least that post count.
```
SELECT u.username, COUNT(p.id) AS count
FROM Posts p
JOIN Users u ON u.id = p.author_id
GROUP BY p.author_id
HAVING COUNT(p.id) >=
(
SELECT COUNT(p.id) AS count
FROM Posts p
JOIN Users u ON u.id = p.author_id
GROUP BY p.author_id
ORDER BY count DESC
LIMIT 9, 1
)
ORDER BY count DESC
```
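A small runnable sketch of the same "top N with ties" idea, using Python's built-in SQLite (data invented; SQLite spells MySQL's `LIMIT 9,1` as `LIMIT 1 OFFSET 9` — here a top-2 with `LIMIT 1 OFFSET 1` to keep the data small):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Users (id INTEGER PRIMARY KEY, username TEXT);
CREATE TABLE Posts (id INTEGER PRIMARY KEY, author_id INTEGER);
INSERT INTO Users VALUES (1,'u1'),(2,'u2'),(3,'u3'),(4,'u4'),(5,'u5');
""")
post_counts = {1: 3, 2: 2, 3: 2, 4: 2, 5: 1}   # u2..u4 tie at 2 posts
pid = 0
for uid, n in post_counts.items():
    for _ in range(n):
        pid += 1
        con.execute("INSERT INTO Posts VALUES (?,?)", (pid, uid))

# "Top 2 with ties": keep everyone with at least the 2nd-place count.
rows = con.execute("""
SELECT u.username, COUNT(p.id) AS cnt
FROM Posts p JOIN Users u ON u.id = p.author_id
GROUP BY p.author_id
HAVING COUNT(p.id) >= (
    SELECT COUNT(p2.id) AS c2
    FROM Posts p2
    GROUP BY p2.author_id
    ORDER BY c2 DESC
    LIMIT 1 OFFSET 1
)
ORDER BY cnt DESC, u.username
""").fetchall()
print(rows)  # the tie expands "top 2" to four rows
```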
|
Maybe not the best solution. Note that MySQL does not allow `LIMIT` directly inside an `IN` subquery, so the top-10 counts have to be wrapped in a derived table first:
```
select u.username, COUNT(p.id) AS count
FROM Posts p
join Users u on u.id = p.author_id
GROUP BY p.author_id
having COUNT(p.id) in
(
  select t.cnt from
  (
    SELECT COUNT(p2.id) AS cnt
    FROM Posts p2
    GROUP BY p2.author_id
    ORDER BY cnt DESC
    LIMIT 10
  ) t
)
ORDER BY count DESC
```
|
MySQL select top rows with same condition values
|
[
"",
"mysql",
"sql",
"select",
"sql-order-by",
"limit",
""
] |
I have a database that has numerous tables. I have joined two of the tables together and that works fine, but I now want to return only the data that falls on half-hour intervals. I have read numerous articles on this and I can't seem to get it to work. Below is an example of my database.

Below is the SQL that I have written to join the two tables
```
SELECT
Table2.SourceID,
Table2.Value,
Table1.Name,
Table2.TimestampUTC
FROM Table2 INNER JOIN
     Table1 ON Table2.SourceID = Table1.ID
Where Table2.Value is not NULL
ORDER BY Table2.TimestampUTC DESC
```
I have tried numerous examples so any guidance would be appreciated!
|
You can use the [`DATEPART`](http://msdn.microsoft.com/en-gb/library/ms174420.aspx) function to extract the minutes from a `datetime` value:
```
SELECT
Table2.SourceID,
Table2.Value,
Table1.Name,
Table2.TimestampUTC
FROM Table2 INNER JOIN
     Table1 ON Table2.SourceID = Table1.ID
Where Table2.Value is not NULL
and DATEPART(minute,Table2.TimestampUTC) in (0,30)
ORDER BY Table2.TimestampUTC DESC
```
(Usually, you'd be advised not to apply functions to columns so that indexes might be useful but, given the nature of this query, it's unlikely that a way can be found to write it that would be able to benefit from indexes anyway)
---
If you only want rows from today, then an additional filter can be applied:
```
SELECT
Table2.SourceID,
Table2.Value,
Table1.Name,
Table2.TimestampUTC
FROM Table2 INNER JOIN
     Table1 ON Table2.SourceID = Table1.ID
Where Table2.Value is not NULL
and DATEPART(minute,Table2.TimestampUTC) in (0,30)
and Table2.TimestampUTC >= DATEADD(day,DATEDIFF(day,0,CURRENT_TIMESTAMP),0)
and Table2.TimestampUTC < DATEADD(day,DATEDIFF(day,0,CURRENT_TIMESTAMP),1)
ORDER BY Table2.TimestampUTC DESC
```
Where `DATEADD(day,DATEDIFF(day,0,CURRENT_TIMESTAMP),0)` computes "today at the starting midnight" and `DATEADD(day,DATEDIFF(day,0,CURRENT_TIMESTAMP),1)` computes "tomorrow at the starting midnight" and these comparisons to `TimestampUTC` might be able to benefit from an index on that column now.
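The minute-of-hour filter is easy to try out with Python's built-in SQLite (sample timestamps invented; SQLite's `strftime('%M', …)` plays the role of SQL Server's `DATEPART(minute, …)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (TimestampUTC TEXT, Value REAL)")
con.executemany("INSERT INTO readings VALUES (?,?)", [
    ("2015-01-05 10:00:00", 1.0),
    ("2015-01-05 10:15:00", 2.0),
    ("2015-01-05 10:30:00", 3.0),
    ("2015-01-05 10:45:00", 4.0),
])

# keep only rows landing exactly on the hour or half hour
rows = con.execute("""
SELECT TimestampUTC, Value
FROM readings
WHERE Value IS NOT NULL
  AND CAST(strftime('%M', TimestampUTC) AS INTEGER) IN (0, 30)
ORDER BY TimestampUTC
""").fetchall()
print(rows)
```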
|
This is very easy to do with use of the [`DatePart Function`](http://msdn.microsoft.com/en-us/library/ms174420.aspx)
Add to your where clause:
```
AND DATEPART(minute, TIMESTAMP) in (0, 30)
```
To filter for a particular date add:
```
-- Converting to varchar with style 101 strips off the time values, then we convert it back to a datetime for comparison
AND CONVERT(datetime, CONVERT(VARCHAR, TIMESTAMP, 101)) = '1/1/2015'
```
To filter for Today add:
```
-- Converting to varchar with style 101 strips off the time values, then we convert it back to a datetime for comparison
AND CONVERT(datetime, CONVERT(VARCHAR, TIMESTAMP, 101)) = CONVERT(datetime, CONVERT(VARCHAR, GETDATE(), 101))
```
|
Return hourly and half hourly data
|
[
"",
"sql",
"sql-server-2008",
""
] |
```
deliverable_id deliverable_title deliverable_desc adviser_id passed_user deliverable_passed flag
1 SRS kogtong 1 0 0 0
2 WEBSITE Lorem ipsum... 1 0 0 0
3 system asdsadasdasdasd 1 0 0 0
4 software sdgdThyiliul234 1 0 0 0
5 NULL NULL 1 4 1 1
6 NULL NULL 1 4 2 1
```
I want to make a query that checks whether any `deliverable_passed` value matches a `deliverable_id`. If it does, that `deliverable_id` should not be printed.
My current query is:
```
SELECT * FROM deliverable
WHERE deliverable_id!=deliverable_passed
```
The current output is (based on `deliverable_id`) : 1 2 3 4
My expected output after comparing would be (based on `deliverable_id` ) : 3 4
Since `deliverable_passed` values 1 & 2 already exist.
|
You want to select all deliverables (that have a title) where no record *exists* with their deliverable\_passed ID. So use NOT EXISTS:
```
SELECT *
FROM deliverable
WHERE deliverable_title is not null
AND NOT EXISTS
(
SELECT * FROM deliverable passed
WHERE passed.deliverable_passed = deliverable.deliverable_id
);
```
Or even simpler with NOT IN:
```
SELECT *
FROM deliverable
WHERE deliverable_title is not null
AND deliverable_id NOT IN (SELECT deliverable_passed FROM deliverable);
```
(But Aleks G is right. This is a horrible database design and you should change it.)
|
Does this do what you want?
```
select d.*
from deliverable d
where exists (select 1 from deliverable d2 where d2.id = d.deliverable_passed) or
exists (select 1 from deliverable d2 where d2.deliverable_passed = d.id);
```
|
Comparing column to another
|
[
"",
"mysql",
"sql",
"select",
"join",
""
] |
I am trying to run a count to get the total number of results for pagination, but the query is too slow: **2.12 s**.
```
+-------+
| size |
+-------+
| 50000 |
+-------+
1 row in set (2.12 sec)
```
my count query
```
select count(appeloffre0_.ID_APPEL_OFFRE) as size
from ao.appel_offre appeloffre0_
inner join ao.acheteur acheteur1_
on appeloffre0_.ID_ACHETEUR=acheteur1_.ID_ACHETEUR
where
(exists (select 1 from ao.lot lot2_ where lot2_.ID_APPEL_OFFRE=appeloffre0_.ID_APPEL_OFFRE and lot2_.ESTIMATION_COUT>=1))
and (exists (select 1 from ao.lieu_execution lieuexecut3_ where lieuexecut3_.appel_offre=appeloffre0_.ID_APPEL_OFFRE and lieuexecut3_.region=1))
and (exists (select 1 from ao.ao_activite aoactivite4_ where aoactivite4_.ID_APPEL_OFFRE=appeloffre0_.ID_APPEL_OFFRE and (aoactivite4_.ID_ACTIVITE=1)))
and appeloffre0_.DATE_OUVERTURE_PLIS>'2015-01-01'
and (appeloffre0_.CATEGORIE='fournitures' or appeloffre0_.CATEGORIE='travaux' or appeloffre0_.CATEGORIE='services')
and acheteur1_.ID_ENTITE_MERE=2
```
The `EXPLAIN` output:
```
+----+--------------------+--------------+------+---------------------------------------------+--------------------+---------+--------------------------------+-------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+--------------+------+---------------------------------------------+--------------------+---------+--------------------------------+-------+--------------------------+
| 1 | PRIMARY | acheteur1_ | ref | PRIMARY,acheteur_ibfk_1 | acheteur_ibfk_1 | 5 | const | 3 | Using where; Using index |
| 1 | PRIMARY | appeloffre0_ | ref | appel_offre_ibfk_2 | appel_offre_ibfk_2 | 4 | ao.acheteur1_.ID_ACHETEUR | 31061 | Using where |
| 4 | DEPENDENT SUBQUERY | aoactivite4_ | ref | ao_activites_activite_fk,ao_activites_ao_fk | ao_activites_ao_fk | 4 | ao.appeloffre0_.ID_APPEL_OFFRE | 3 | Using where |
| 3 | DEPENDENT SUBQUERY | lieuexecut3_ | ref | fk_ao_lieuex,fk_region_lieuex | fk_ao_lieuex | 4 | ao.appeloffre0_.ID_APPEL_OFFRE | 1 | Using where |
| 2 | DEPENDENT SUBQUERY | lot2_ | ref | FK_LOT_AO | FK_LOT_AO | 4 | ao.appeloffre0_.ID_APPEL_OFFRE | 5 | Using where |
+----+--------------------+--------------+------+---------------------------------------------+--------------------+---------+--------------------------------+-------+--------------------------+
```
The index `acheteur_ibfk_1` is a FK referencing table `ENTITE_MERE`, because I have `acheteur1_.ID_ENTITE_MERE=2` in the where clause.
|
Like @FuzzyTree said in his comment, `exists` is faster than an inner join if it's not a 1:1 relationship, because it stops as soon as it finds one match, whereas the join fetches every matching row.
In this case, though, the fix was to use `in` rather than `exists`:
```
where ( appeloffre0_.ID_APPEL_OFFRE IN (select lot2_.ID_APPEL_OFFRE from ao.lot lot2_
where lot2_.ESTIMATION_COUT>=1)
)
```
With this change the query runs much faster than with `exists` or joins.
```
select count(appeloffre0_.ID_APPEL_OFFRE) as size
from ao.appel_offre appeloffre0_
inner join ao.acheteur acheteur1_
on appeloffre0_.ID_ACHETEUR=acheteur1_.ID_ACHETEUR
where
( appeloffre0_.ID_APPEL_OFFRE IN (select lot2_.ID_APPEL_OFFRE from ao.lot lot2_ where lot2_.ESTIMATION_COUT>=1))
and (appeloffre0_.ID_APPEL_OFFRE IN (select lieuexecut3_.appel_offre from ao.lieu_execution lieuexecut3_ where lieuexecut3_.region=1))
and (appeloffre0_.ID_APPEL_OFFRE IN (select aoactivite4_.ID_APPEL_OFFRE from ao.ao_activite aoactivite4_ where aoactivite4_.ID_ACTIVITE=1 ))
and appeloffre0_.DATE_OUVERTURE_PLIS>'2015-01-01'
and (appeloffre0_.CATEGORIE='fournitures' or appeloffre0_.CATEGORIE='travaux' or appeloffre0_.CATEGORIE='services')
and acheteur1_.ID_ENTITE_MERE=2
```
|
You can try:
```
select count(aa.ID_APPEL_OFFRE) as size
from (
select ID_APPEL_OFFRE, ID_ACHETEUR from ao.appel_offre appeloffre0_
inner join ao.acheteur acheteur1_
on appeloffre0_.ID_ACHETEUR=acheteur1_.ID_ACHETEUR
where appeloffre0_.DATE_OUVERTURE_PLIS>'2015-01-01'
and (appeloffre0_.CATEGORIE in ('fournitures','travaux','services'))
and (acheteur1_.ID_ENTITE_MERE=2)) aa
inner join ao.lot lot2_ on lot2_.ID_APPEL_OFFRE=aa.ID_APPEL_OFFRE
inner join ao.lieu_execution lieuexecut3_ on lieuexecut3_.appel_offre=aa.ID_APPEL_OFFRE
inner join ao.ao_activite aoactivite4_ on aoactivite4_.ID_APPEL_OFFRE=aa.ID_APPEL_OFFRE
where
aoactivite4_.ID_ACTIVITE=1
and lot2_.ESTIMATION_COUT>=1
and lieuexecut3_.region=1;
```
But I haven't seen your tables so I am not 100% sure that you won't get duplicates because of joins.
A couple of low-hanging fruits might also be found by ensuring that your appeloffre0\_.CATEGORIE and appeloffre0\_.DATE\_OUVERTURE\_PLIS have indexes on them.
Other fields which should have indexes on them are ao.lot.ID\_APPEL\_OFFRE, ao.lieu\_execution.ID\_APPEL\_OFFRE and ao.ao\_activite.ID\_APPEL\_OFFRE, and ao.appel\_offre.ID\_ACHETEUR (all the joined fields).
|
Slow count query with where clause
|
[
"",
"mysql",
"sql",
""
] |
On posts index page I list all posts this way:
**posts\_controller.rb**
```
def index
@posts = Post.includes(:comments).paginate(:page => params[:page]).order("created_at DESC")
end
```
**index.html.erb**
```
<%= render @posts %>
```
**\_post.html.erb**
```
<%= gravatar_for post.user, size:20 %>
<%= link_to "#{post.title}", post_path(post) %>
<%= time_ago_in_words(post.created_at) %>
<%= post.comments.count %>
<%= post.category.name if post.category %>
```
35 posts per page
When I first load the page in dev env,
rack-mini-profiler shows this time: **1441.1 ms**
after a few reloads: **~700 ms**
**Can I somehow decrease this time and number of sql requests?**
Here're rmp images if it helps:


|
You could decrease the number of sql queries by:
* including `user` as well as `comments`, since you seem to be using that when displaying the gravatar
* changing `post.comments.count` to `post.comments.size`
While size, count, length are synonymous for arrays, for active record relations or associations they are not the same:
* `length` loads the association (unless it is already loaded) and returns the length of the array
* `count` does a `select count(*)` query whether the association is loaded or not
* `size` uses `length` if the association is loaded and `count` if not.
In your case the comments association is loaded, but because you are using `count`, it's not actually used
|
Further, you don't actually seem to be using the `comments` collection for anything other than printing the number of records. If that's indeed the case, use [counter\_cache](http://guides.rubyonrails.org/association_basics.html#belongs-to-association-reference) (`4.1.2.3`) instead of querying for the comments (the number of comments will be available in the parent record `Post`).
Also consider a [client side alternative](http://rails-bestpractices.com/posts/105-not-use-time_ago_in_words) to `time_ago_in_words`. It will also help if you later decide to cache the entire section/page.
And finally retrieve only the fields you're going to use. In this case, I can imagine the `Post` contains a large amount of text for the content and it's not used anywhere (but still needs to be transmitted from the DB).
|
optimize sql query rails
|
[
"",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
I order the column from the most repeated value to the least, but I only want the top value - the value that repeats the most.
This is code:
```
SELECT foreign_key
FROM all_person
GROUP BY foreign_key
ORDER BY COUNT(*) DESC;
```
And this is what I get:
```
foreign_key
--------------
10
9
2
4
8
3
7
6
1
5
```
I only want to get value 10 (in this example).
I saw a lot of examples where people use LIMIT, but I get an error if I use LIMIT like this:
```
ORDER BY COUNT(*) DESC LIMIT 1;
```
And this is error:
```
SQL Error: ORA-00933: SQL command not properly ended
00933. 00000 - "SQL command not properly ended"
```
|
You can do this using `ROWNUM`:
```
SELECT foreign_key FROM (
SELECT foreign_key
FROM all_person
GROUP BY foreign_key
ORDER BY COUNT(*) DESC
) WHERE rownum = 1;
```
That is the fastest way but might not work for more complex queries.
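For comparison, here is the same "most frequent value" query sketched with Python's built-in SQLite (invented data); in SQLite/MySQL, `LIMIT 1` after the `ORDER BY` takes the place of Oracle's wrap-and-filter on `ROWNUM = 1`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE all_person (foreign_key INTEGER)")
con.executemany("INSERT INTO all_person VALUES (?)",
                [(10,)] * 5 + [(9,)] * 3 + [(2,)])

# group, order by frequency, keep only the first row
top = con.execute("""
SELECT foreign_key
FROM all_person
GROUP BY foreign_key
ORDER BY COUNT(*) DESC
LIMIT 1
""").fetchone()[0]
print(top)
```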
|
You can use ROWNUM in a where clause:
```
SELECT foreign_key
FROM
(
SELECT foreign_key
FROM all_person
GROUP BY foreign_key
ORDER BY COUNT(*) DESC
)
WHERE ROWNUM = 1;
```
|
Oracle SQL Developer how to get top value
|
[
"",
"sql",
"oracle",
""
] |
I have a question regarding a SQL script. I have a custom view; below is its data:
```
================================================================================
ql_siteid | ql_rfqnum | ql_vendor | ql_itemnum | totalcost_option | description
================================================================================
SGCT | 1002 | VND001 | ITEM002 | 12500 |
SGCT | 1002 | VND001 | ITEM001 | 1350 |
SGCT | 1002 | VND002 | ITEM002 | 11700 |
SGCT | 1002 | VND002 | ITEM001 | 1470 | Nikon
SGCT | 1002 | VND002 | ITEM001 | 1370 | Asus
================================================================================
```
And i want the result like below table:
```
VND001 = 13850
VND002 = Asus 13070, Nikon 13170
```
where 13850 comes from 12500+1350, 13070 comes from 11700+1370, and 13170 comes from 11700+1470. Each cost is calculated from totalcost\_option and grouped by vendor.
So please give me some advice.
|
Assuming you have a version of Oracle 11g or later, using [ListAgg](http://docs.oracle.com/database/121/SQLRF/functions100.htm#SQLRF30030) will do the combination of the comma separated tuples for you. The rest of the string is generated by simply concatenating the components together from an intermediate table - I've used a derived table (X) here, but you could also use a CTE.
**Edit**
As pointed out in the comments, there's a whole bunch more logic missing around the Null description items I missed in my original answer.
The following rather messy query does project the required result, but I believe this may be indicative that a table design rethink is necessary. The `FULL OUTER JOIN` should ensure that rows are returned even if there are no base / descriptionless cost items for the vendor.
```
WITH NullDescriptions AS
(
SELECT "ql_vendor", SUM("totalcost_option") AS "totalcost_option"
FROM MyTable
WHERE "description" IS NULL
GROUP BY "ql_vendor"
),
NonNulls AS
(
SELECT COALESCE(nd."ql_vendor", mt."ql_vendor") AS "ql_vendor",
NVL(mt."description", '') || ' '
|| CAST(NVL(mt."totalcost_option", 0)
+ nd."totalcost_option" AS VARCHAR2(30)) AS Combined
FROM NullDescriptions nd
FULL OUTER JOIN MyTable mt
ON mt."ql_vendor" = nd."ql_vendor"
AND mt."description" IS NOT NULL
)
SELECT x."ql_vendor" || ' = ' || ListAgg(x.Combined, ', ')
WITHIN GROUP (ORDER BY x.Combined)
FROM NonNulls x
WHERE x.Combined <> ' '
GROUP BY x."ql_vendor";
```
[Updated SqlFiddle here](http://sqlfiddle.com/#!4/2bc48/36)
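The same base-cost-plus-options aggregation can be sketched with Python's built-in SQLite, where `group_concat` stands in for Oracle's `ListAgg` (note `group_concat` does not guarantee element order, so only membership is checked here; table name shortened to `quotes` for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE quotes (
    ql_vendor TEXT, totalcost_option INTEGER, description TEXT)""")
con.executemany("INSERT INTO quotes VALUES (?,?,?)", [
    ('VND001', 12500, None), ('VND001', 1350, None),
    ('VND002', 11700, None),
    ('VND002', 1470, 'Nikon'), ('VND002', 1370, 'Asus'),
])

rows = con.execute("""
WITH base AS (          -- per-vendor sum of the description-less rows
    SELECT ql_vendor, SUM(totalcost_option) AS base_cost
    FROM quotes WHERE description IS NULL GROUP BY ql_vendor),
opts AS (               -- the individually described options
    SELECT ql_vendor, description, totalcost_option
    FROM quotes WHERE description IS NOT NULL)
SELECT b.ql_vendor || ' = ' ||
       COALESCE(group_concat(o.description || ' ' ||
                             (b.base_cost + o.totalcost_option), ', '),
                b.base_cost) AS line
FROM base b LEFT JOIN opts o USING (ql_vendor)
GROUP BY b.ql_vendor
ORDER BY b.ql_vendor
""").fetchall()
for (line,) in rows:
    print(line)
```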
|
To get the exact output you require, use the following statement (where `test_table` is your table name):
```
SELECT ql_vendor || ' = ' ||
LISTAGG( LTRIM(description||' ')||totalcost, ', ')
WITHIN GROUP (ORDER BY description)
FROM (
WITH base_cost AS (
SELECT ql_vendor, SUM(totalcost_option) sumcost
FROM test_table WHERE description IS NULL
GROUP BY ql_vendor
),
individual_cost AS (
SELECT ql_vendor, totalcost_option icost, description
FROM test_table WHERE description IS NOT NULL
)
SELECT ql_vendor, sumcost + NVL(icost,0) totalcost, description
FROM base_cost LEFT OUTER JOIN individual_cost USING (ql_vendor)
)
GROUP BY ql_vendor;
```
Details:
The outer select takes the individual rows and combines them into the string representation. Remove it and you will get a single row for each vendor/description combination.
The inner select joins two sub-selects. The first gets the base cost for each vendor by summing all rows without a description. The second gets the individual cost of each row with a description.
The join combines them, and the left outer join keeps the base cost even for vendors that don't have a matching row with a description.
|
Query to Calculate totalcost based on description
|
[
"",
"sql",
"oracle",
""
] |
DISCLAIMER : **I Know this has been asked numerous times, but all I want is an alternative.**
The table is as below :
```
create table
Account
(Name varchar(20),
TType varchar(5),
Amount int);
insert into Account Values
('abc' ,'c', 500),
('abc', 'c', 700),
('abc', 'd', 100),
('abc', 'd', 200),
('ab' ,'c', 300),
('ab', 'c', 700),
('ab', 'd', 200),
('ab', 'd', 200);
```
Expected result is simple:
```
Name Balance
------ -----------
ab 600
abc 900
```
The query that worked is :
```
select Name, sum(case TType when 'c' then Amount
when 'd' then Amount * -1 end) as balance
from Account a1
group by Name;
```
All I want is, is there any query sans the 'case' statement (like subquery or self join ) for the same result?
|
Sure. You can use a second query with a `where` clause and a `union all`:
```
select name
, sum(Amount) balance
from Account a1
where TType = 'c'
group
by Name
union
all
select name
, sum(Amount * -1) balance
from Account a1
where TType = 'd'
group
by Name
```
Or this, using a `join` with an inline view:
```
select name
, sum(Amount * o.mult) balance
from Account a1
join ( select 'c' cd
, 1 mult
from dual
union all
select 'd'
, -1
from dual
) o
on o.cd = a1.TType
group
by Name
```
To be honest, I would suggest to use `case`...
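The inline-view-with-multipliers idea can be verified with Python's built-in SQLite (no `dual` needed there, since SQLite allows a `SELECT` without a `FROM`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Account (Name TEXT, TType TEXT, Amount INTEGER);
INSERT INTO Account VALUES
  ('abc','c',500),('abc','c',700),('abc','d',100),('abc','d',200),
  ('ab','c',300),('ab','c',700),('ab','d',200),('ab','d',200);
""")

# the inline sign table maps 'c' -> +1 and 'd' -> -1
rows = con.execute("""
SELECT a.Name, SUM(a.Amount * o.mult) AS balance
FROM Account a
JOIN (SELECT 'c' AS cd, 1 AS mult
      UNION ALL
      SELECT 'd', -1) o ON o.cd = a.TType
GROUP BY a.Name
ORDER BY a.Name
""").fetchall()
print(rows)
```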
|
Use the ASCII code of the char and try to go from there. It is 100 for 'd' and 99 for 'c'. Untested example:
```
select Name, sum((ASCII(TType) - 100) * Amount * (-1)) + sum((ASCII(TType) - 99) * Amount * (-1)))) as balance from Account a1 group by Name.
```
I would not recommend using this method but it is a way of achieving what you want.
|
SQL query - credit , debit , balance
|
[
"",
"mysql",
"sql",
""
] |
Basically what I want is a working-version of the following code:
```
ALTER TABLE table_name
AUTO_INCREMENT =
(
SELECT
`AUTO_INCREMENT`
FROM
INFORMATION_SCHEMA.TABLES
WHERE
TABLE_SCHEMA = 'database_name'
AND TABLE_NAME = 'another_table_name'
);
```
The error:
```
1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AUTO_INCREMENT =
```
The reason:
According to [MySQL Doc](http://dev.mysql.com/doc/refman/5.1/en/innodb-auto-increment-handling.html):
> InnoDB uses the in-memory auto-increment counter as long as the server
> runs. When the server is stopped and restarted, InnoDB reinitializes
> the counter for each table for the first INSERT to the table, as
> described earlier.
This means that whenever I restart the server, my auto\_increment values are set to the minimum possible.
I have a table called `ticket` and another one called `ticket_backup`. Both of them have a column `id` that is shared. Records inside the `ticket` table are available and can be claimed by customers. When they claim the `ticket` I insert the record inside `ticket_backup` and then I erase them from `ticket` table. As of today, I have `56 thousand` tickets already claimed (inside `ticket_backup`) and 0 tickets available. If I restart the server now and don't perform the `ALTER TABLE`, the first ticket I make available will have `id 1` which is an ID already taken by `ticket_backup`, thus causing me duplicate key error if I don't fix the auto-increment value. The reason for me to want this in a single query is to be able to easily perform the query on server startup.
|
Try this:
```
SELECT `AUTO_INCREMENT` INTO @AutoInc
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'database_name' AND TABLE_NAME = 'another_table_name';
SET @s:=CONCAT('ALTER TABLE `database_name`.`table_name` AUTO_INCREMENT=', @AutoInc);
PREPARE stmt FROM @s;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```
|
At a quick glance, the error says it all: the syntax is incorrect.
The syntax should adhere to:
```
ALTER TABLE table_name AUTO_INCREMENT=<INTEGER_VALUE>;
```
So make sure your query adheres to that syntax:
```
ALTER TABLE table_name AUTO_INCREMENT =
(
SELECT
`AUTO_INCREMENT`
FROM
INFORMATION_SCHEMA.TABLES
WHERE
TABLE_SCHEMA = 'database_name'
AND TABLE_NAME = 'another_table_name'
);
```
|
Alter AUTO_INCREMENT value by select result
|
[
"",
"mysql",
"sql",
"select",
"auto-increment",
"alter-table",
""
] |
I have a table like that
IVA
```
id - taxa - name - date
1 - 0.18 - normal - 1/1/2014
2 - 0 - none - 1/1/2014
3 - 0.23 - max - 1/1/2013
4 - 0.16 - normal - 1/1/2015
...
```
I have an `IVA.id` from a product and I want to know in current date what is the taxa.
So I have to query this table twice:
```
Select name
from IVA
where id = 1; (return me normal)
Select taxa
from IVA
where name = 'normal'
and date <= '12/12/2015';
```
Can I join these two queries into only one?
Thanks
|
Please try:
```
Select taxa
from IVA
where name in (Select a.name from IVA a where a.id = 1)
and date<='12/12/2015';
```
|
Depends on the database engine you're using, but you may want to look up the UNION keyword for your database.
|
SQL join 2 queries in one from same table
|
[
"",
"sql",
""
] |
`PARSENAME` works when I pass a string directly, or when the function returns a static value. But when the function builds the dotted value from a `SELECT`, it stops working, and I can't figure out why.
```
CREATE TABLE [dbo].[TestTimeLine](
[UserID] [int] NOT NULL,
[Field1] [nchar](250) NOT NULL,
[Value] [smalldatetime] NULL,
[Timestamp] [datetime] NULL,
)
INSERT INTO [TestTimeLine] VALUES(10,'TestDate',GETDATE(),GETDATE())
CREATE FUNCTION TestFunction()
RETURNS NVARCHAR(MAX)
AS
BEGIN
DECLARE @R NVARCHAR(MAX)
SELECT TOP 1 @R=CAST(DATENAME(month,value) AS VARCHAR) + ' ' + CAST(YEAR(value) AS VARCHAR) + '.' +CONVERT(varchar,value,101) + '. ' + LTRIM(Field1)
FROM [TestTimeLine]
WHERE UserId = 10
RETURN @R
END
select dbo.TestFunction()
select PARSENAME(dbo.TestFunction(),2)
```
**Working**
```
select PARSENAME('January 2015.01/05/2015. TestDate',2)
```
**NotWorking** -- Returns NULL
```
select PARSENAME(dbo.TestFunction(),2)
```
|
I found the issue: it is the `nchar` datatype.
`nchar(250)` pads the value with trailing spaces, so the last dot-separated part grows well beyond the 128 characters `PARSENAME` allows per part, and the function returns NULL.
After changing the type to `varchar`, it works fine.
|
Try changing the parameter 2 to 1.
```
select PARSENAME(dbo.TestFunction(),1)
```
For more details refer to:
[More Details](http://msdn.microsoft.com/en-us/library/ms188006.aspx)
|
parsename is not working when using string from function
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have to get the horsepower field from the database as an integer, but it contains values like the ones below:
```
330hp/@6,000
30hp/@3,000
1000hp/@6,000
```
Can anyone suggest how I can cast this to an integer?
I only need
```
330
30
1000
```
--
Cheers
|
```
Select LEFT(ColumnName,CHARINDEX('h',ColumnName) - 1)
```
Read more [LEFT (Transact-SQL)](http://msdn.microsoft.com/en-GB/library/ms177601.aspx)
|
This will work. If there is no 'h' in the text, it returns blank. If you want to cast the result, go ahead, but make sure your data is validated first.
Starting the substring at 0 is a nice trick: it avoids subtracting 1 from `CHARINDEX`, which would raise an error when no 'h' exists.
```
DECLARE @t table(col varchar(20))
insert @t values
('330hp/@6,000'),
('30hp/@3,000'),
('1000hp/@6,000'),
('123')
SELECT
SUBSTRING(col, 0, CHARINDEX('h', col))
FROM @t
```
Result:
```
330
30
1000
blank
```
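The same prefix extraction is easy to check with Python's built-in SQLite, whose `instr()` and `substr()` are the counterparts of `CHARINDEX` and `LEFT` (sample rows from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE specs (horsepower TEXT)")
con.executemany("INSERT INTO specs VALUES (?)",
                [("330hp/@6,000",), ("30hp/@3,000",), ("1000hp/@6,000",)])

# take everything before the first 'h' and cast it to an integer
vals = [r[0] for r in con.execute("""
SELECT CAST(substr(horsepower, 1, instr(horsepower, 'h') - 1) AS INTEGER)
FROM specs
""")]
print(vals)
```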
|
SQL cast a string value
|
[
"",
"sql",
"sql-server",
"database",
""
] |
There's a poorly designed table in my database which contains some information I need to extract. Suppose I have the following query:
```
SELECT
(SELECT I FROM X WHERE X.A = FOO AND X.B = KEY),
(SELECT J FROM X WHERE X.A = BAR AND X.B = KEY),
(SELECT K FROM X WHERE X.A = BAZ AND X.B = KEY)
```
I need to expand the query to select some other fields. My first idea was this:
```
SELECT
(SELECT I FROM X WHERE X.A = FOO AND X.B = KEY),
(SELECT J FROM X WHERE X.A = BAR AND X.B = KEY),
    (SELECT K FROM X WHERE X.A = BAZ AND X.B = KEY),
    (SELECT L, M, N FROM X WHERE X.A = QUX AND X.B = KEY)
```
However, I get an error stating that a subquery that fetches columns can only fetch one column, unless it is introduced with an EXISTS clause. I could write the last subquery as such:
```
(SELECT L, M, N FROM X AS OUTER
WHERE EXISTS
    (SELECT ID FROM X WHERE X.A = QUX AND X.B = KEY AND X.ID = OUTER.ID))
```
Won't this solution be needlessly heavy? Is there another solution to this?
EDIT:
I need to extract the information into a single row. I also should mention that there is no guarantee that every X.A is present (There might be keys that only have FOO, others that have BAR and QUX, etc), so joining is not possible.
EDIT: @Saharsh Shah requested some sample data:
```
|X.B |X.A|X.I |X.J |X.K |X.L |X.M |X.N |
|KEY1|FOO|ICONTENT| | | | | |
|KEY1|BAR| |JCONTENT| | | | |
|KEY1|BAZ| | |KCONTENT| | | |
|KEY1|QUX| | | |LCONTENT|MCONTENT|NCONTENT|
|KEY2|BAR| |JCONTENT| | | | |
|KEY3|FOO|ICONTENT| | | | | |
|KEY3|QUX| | | |LCONTENT|MCONTENT|NCONTENT|
```
And the expected result:
For key 1:
```
X.I |X.J |X.K |X.L |X.M |X.N |
ICONTENT|JCONTENT|KCONTENT|LCONTENT|MCONTENT|NCONTENT|
```
For key 2:
```
X.I |X.J |X.K |X.L |X.M |X.N |
|JCONTENT| | | | |
```
For key 3:
```
X.I |X.J |X.K |X.L |X.M |X.N |
ICONTENT| | |LCONTENT|MCONTENT|NCONTENT|
```
|
Use **CASE** statement to validate conditions
Try this:
```
SELECT X.B,
MAX(CASE WHEN X.A = FOO THEN I ELSE '' END) AS I,
MAX(CASE WHEN X.A = BAR THEN J ELSE '' END) AS J,
MAX(CASE WHEN X.A = BAZ THEN K ELSE '' END) AS K,
MAX(CASE WHEN X.A = QUX THEN L ELSE '' END) AS L,
MAX(CASE WHEN X.A = QUX THEN M ELSE '' END) AS M,
MAX(CASE WHEN X.A = QUX THEN N ELSE '' END) AS N
FROM X
WHERE X.B = KEY
GROUP BY X.B;
```
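The conditional-aggregation pivot is easy to demonstrate with Python's built-in SQLite on a reduced version of the sample data (columns K/M/N omitted to keep it short; without an `ELSE`, the `CASE` branches yield NULL, which `MAX` ignores):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE X (B TEXT, A TEXT, I TEXT, J TEXT, L TEXT)")
con.executemany("INSERT INTO X VALUES (?,?,?,?,?)", [
    ('KEY1', 'FOO', 'ICONTENT', None, None),
    ('KEY1', 'BAR', None, 'JCONTENT', None),
    ('KEY1', 'QUX', None, None, 'LCONTENT'),
    ('KEY2', 'BAR', None, 'JCONTENT', None),   # KEY2 has only BAR
])

# MAX() picks the single non-NULL value each CASE branch lets through
rows = con.execute("""
SELECT B,
       MAX(CASE WHEN A = 'FOO' THEN I END) AS I,
       MAX(CASE WHEN A = 'BAR' THEN J END) AS J,
       MAX(CASE WHEN A = 'QUX' THEN L END) AS L
FROM X
GROUP BY B
ORDER BY B
""").fetchall()
print(rows)
```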
|
This can be done in two ways:
```
select i, j , k , l , m, n
from x
where i in (select i from x where x.a = 'FOO' or x.B = 'KEY') and
j in (select j from x where x.b = 'BAR' or x.B = 'KEY') and
k in (select k from x where x.c = 'BAZ' or x.B = 'KEY')
```
or the other way is :
```
select (select count(i) from x where x.a = 'foo') as i , (select count(j) from x where x.b in ('KEY', 'BAR')) as j , (select count(k) from x where x.c = 'BAZ' and x.B = 'KEY') as k , l, m , n
from x
```
|
Subquery with multiple columns
|
[
"",
"sql",
"sql-server",
"select",
"group-by",
"where-clause",
""
] |
I have some data in the following format.
The score range is from 1 to 100.
Below is example data from the table "ProductScore"; the attached picture shows the data and the required result format.

I am a bit new to SQL, but I assume this is a simple query to write.
|
You can do this with `CASE`:
```
;with cte AS (SELECT CASE WHEN Score <= 10 THEN '01: 1-10'
WHEN Score <= 20 THEN '02: 11-20'
....
ELSE '10: 91-100'
END AS Ranges
,ProductCode
FROM ProductScore
)
SELECT Ranges,COUNT(ProductCode)
FROM cte
GROUP BY Ranges
ORDER BY Ranges
```
Note: I include a prefix to the ranges so that it can be ordered by the ranges, you can adjust to suit.
I put this in a cte so that I don't have to list the `CASE` again in the `GROUP BY`, as AlexPoole suggest, you could add another `CASE` for ordering and eliminate the prefix:
```
;with cte AS (SELECT CASE WHEN Score <= 10 THEN '1-10'
WHEN Score <= 20 THEN '11-20'
....
ELSE '91-100'
               END AS Ranges,
CASE WHEN Score <= 10 THEN 1
WHEN Score <= 20 THEN 2
....
ELSE 10
END AS RangeOrder
,ProductCode
FROM ProductScore
)
SELECT Ranges,COUNT(ProductCode)
FROM cte
GROUP BY Ranges
ORDER BY RangeOrder
```
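The same `CASE` bucketing pattern can be checked quickly outside SQL Server; below is a minimal sketch using SQLite via Python (the table contents and bucket labels are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ProductScore (ProductCode TEXT, Score INTEGER)")
conn.executemany("INSERT INTO ProductScore VALUES (?, ?)",
                 [("P1", 5), ("P2", 7), ("P3", 15), ("P4", 95)])

# Bucket scores with CASE, then count per bucket -- same shape as the CTE above
rows = conn.execute("""
    WITH cte AS (
        SELECT CASE WHEN Score <= 10 THEN '01: 1-10'
                    WHEN Score <= 20 THEN '02: 11-20'
                    ELSE '10: 91-100'
               END AS Ranges,
               ProductCode
        FROM ProductScore
    )
    SELECT Ranges, COUNT(ProductCode)
    FROM cte
    GROUP BY Ranges
    ORDER BY Ranges
""").fetchall()
```

The numeric prefix on the labels is what makes the lexicographic `ORDER BY Ranges` come out in score order.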
|
As your interval are fixed width, you can use the Oracle's nice function [`WIDTH_BUCKET`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions214.htm) to achieve the desired result:
```
SELECT (wb-1)*10+1 || '-' || wb*10 "DataRange" , COUNT(*) FROM (
SELECT WIDTH_BUCKET("Score",1,100,10) wb
-- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-- divide the [1..100] range in 10 buckets of the
-- same size
FROM ProductScore
) V
GROUP BY wb
ORDER BY wb
```
See <http://sqlfiddle.com/#!4/91e85/6> for a live example
---
In addition, if you need to display even the buckets without any data, you might want to add an outer join and change the `COUNT` function:
```
SELECT (wb-1)*10+1 || '-' || wb*10 "DataRange" , COUNT("Score") FROM (
-- ^^^^^^^^^^^^^^
-- Count only row having a score
-- Can't use COUNT(*) here because
-- of the full outer join below
SELECT WIDTH_BUCKET("Score",1,100,10) wb, "Score"
FROM ProductScore
) V1
FULL OUTER JOIN (SELECT LEVEL wb FROM DUAL CONNECT BY LEVEL <= 10) V2
USING(wb)
-- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-- always 10 buckets (from 1-10 to 91-100)
GROUP BY wb
ORDER BY wb;
```
Take a look at <http://sqlfiddle.com/#!4/91e85/9> to see the difference.
|
how to count number of rows in a data range?
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I want to select the items that have the minimum twu value for each tid separately... my table name is reorganised
```
tid items times twu
1 A 7 942
1 D 5 633
1 H 2 1020
2 S 4 551
3 F 4 871
3 B 2 641
4 M 1 665
4 N 7 950
4 Z 10 986
5 B 6 641
5 C 8 370
5 D 1 633
6 J 6 842
6 K 2 397
6 L 5 305
```
i tried this code
```
select tid,items,twu from REORGANISED where (TID,twu) in
(select TID,min(twu)as TWU from REORGANISED group by tid);
```
It showed an error in the where condition.
|
You can get your desired result by joining your existing subquery back to the original table. This solution works in all RDBMSs.
```
SELECT a.*
FROM Reorganised a
INNER JOIN
(
select TID,min(twu)as TWU
from REORGANISED
group by tid
) b ON a.TID = b.TID
AND a.TWU = b.TWU
```
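A quick way to convince yourself the join works is to run it against a few of the sample rows; this is a sketch in SQLite via Python, not the asker's actual environment:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reorganised (tid INT, items TEXT, times INT, twu INT)")
conn.executemany("INSERT INTO reorganised VALUES (?,?,?,?)", [
    (1, "A", 7, 942), (1, "D", 5, 633), (1, "H", 2, 1020),
    (2, "S", 4, 551),
    (3, "F", 4, 871), (3, "B", 2, 641),
])

# Join each row against the per-tid minimum; only rows hitting that minimum survive
rows = conn.execute("""
    SELECT a.tid, a.items, a.twu
    FROM reorganised a
    JOIN (SELECT tid, MIN(twu) AS twu FROM reorganised GROUP BY tid) b
      ON a.tid = b.tid AND a.twu = b.twu
    ORDER BY a.tid
""").fetchall()
```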
|
```
WITH CTE AS (
SELECT tid,items,times,twu,
       ROW_NUMBER() OVER (PARTITION BY tid ORDER BY twu ASC) rn
FROM REORGANISED
)
SELECT * FROM CTE WHERE rn=1
```
|
How to select varchar column based on int min() function in SQL
|
[
"",
"sql",
"sql-server",
"visual-studio",
""
] |
I produce a CSV file but cannot figure out the proper date format.
I am aware of the MSDN site codes for datetime conversions:
<http://msdn.microsoft.com/en-us/library/ms187928.aspx>
It seems there is no code to convert my datetime into this format:
MM/DD/YYYY HH:MMAM
e.g.:
12/28/2014 4:33AM
How do you achieve such format?
Platform:
Microsoft SQL server 2008
|
```
SELECT CONVERT(VARCHAR(10), GETDATE(), 101) +
RIGHT(CONVERT(VARCHAR, GETDATE(), 100), 7)
```
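For reference, the target string the question asks for can be reproduced in plain Python, which also makes the 12-hour/AM-PM rules explicit (a sketch; the function name is mine, and it pads differently from style 100, which space-pads the hour):

```python
from datetime import datetime

def sql_style_format(dt):
    # MM/DD/YYYY h:MMAM -- mirrors CONVERT(., 101) + RIGHT(CONVERT(., 100), 7)
    hour12 = dt.hour % 12 or 12          # 0 -> 12, 13 -> 1, etc.
    ampm = "AM" if dt.hour < 12 else "PM"
    return f"{dt.month:02d}/{dt.day:02d}/{dt.year} {hour12}:{dt.minute:02d}{ampm}"

formatted = sql_style_format(datetime(2014, 12, 28, 4, 33))
```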
|
This is what you can use and is probably the most straightforward:
```
SELECT
    RIGHT('0' + cast(month(dateColumn) AS NVARCHAR(2)), 2) + '/' -- generate the month
    + RIGHT('0' + cast(day(dateColumn) AS NVARCHAR(2)), 2) + '/' -- generate the day
    + cast(year(dateColumn) AS NVARCHAR(4)) + ' '                -- generate the year
    + convert(VARCHAR, cast(dateColumn AS TIME), 100)            -- generate the time
FROM TABLE
```
|
SQL server 2008 date conversion formats
|
[
"",
"sql",
"sql-server-2008",
"date",
""
] |
I have two tables:
First table: **f** *(a,b,c,d are the columns)*
```
a | b | c | d
1 | 2 | 3 | 4
5 | 6 | 7 | 8
9 | 10 | 11 | 12
13 | 14 | 15 | 16
```
Second table: **s** *(a,b,c,d are the columns)*
```
a | b | c | d
5 | 6 | c | d
9 | 10 | c | d
20 | 5 | c | d
```
---
I need to delete the **not equal** rows (between **f** to **s**) from the **f** table only
so the new **f** table will look like that:
```
a | b | c | d
5 | 6 | 7 | 8
9 | 10 | 11 | 12
```
How do I know if two rows are **equal**? (I need to delete what is **not equal**.)
If `f.a=s.a AND f.b=s.b`, the rows are equal, and I need that row to stay.
How can I build this with SQL only? Everything I tried did not work for me.
---
Here is an example of what I tried:
```
DELETE FROM f WHERE NOT EXISTS (
SELECT * FROM f inner join s on (f.a=s.a and f.b=s.b)
)
```
|
You could use a `DELETE`/`LEFT JOIN`, which in MySQL would look like;
```
DELETE f
FROM f
LEFT JOIN s
ON f.a = s.a AND f.b = s.b
WHERE s.a IS NULL
```
If there is no row that matches between f and s, s.a will be NULL in the result and the corresponding row in f will therefore be deleted.
[An SQLfiddle to test with.](http://sqlfiddle.com/#!2/569177/1)
And, as always, always back up your data before running potentially destructive SQL from random people on the Internet on it.
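To see the same effect end to end: SQLite has no multi-table `DELETE`, so the sketch below uses the equivalent `NOT EXISTS` form against the sample data from the question (run via Python as a stand-in for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f (a INT, b INT, c INT, d INT)")
conn.execute("CREATE TABLE s (a INT, b INT, c TEXT, d TEXT)")
conn.executemany("INSERT INTO f VALUES (?,?,?,?)",
                 [(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12), (13, 14, 15, 16)])
conn.executemany("INSERT INTO s VALUES (?,?,?,?)",
                 [(5, 6, "c", "d"), (9, 10, "c", "d"), (20, 5, "c", "d")])

# Delete every row of f that has no (a, b) match in s
conn.execute("""
    DELETE FROM f
    WHERE NOT EXISTS (SELECT 1 FROM s WHERE s.a = f.a AND s.b = f.b)
""")
remaining = conn.execute("SELECT a, b, c, d FROM f ORDER BY a").fetchall()
```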
|
You can do what you want without the join on the subquery:
```
DELETE FROM f
WHERE NOT EXISTS (SELECT 1 FROM s WHERE f.a = s.a and f.b = s.b) ;
```
|
DELETE if NOT EXIST
|
[
"",
"mysql",
"sql",
""
] |
I'm somewhat new to SQL so I hope this doesn't come off as a dumb question. I am trying to create a scheduling program where the table is as such:
```
event_id worker_id date shift_type
1 1 2014-01-01 Labour Day Shift
2 1 2014-01-02 Labour Night Shift
3 2 2014-01-01 Engineer Day Shift
4 2 2014-01-02 Engineer Night Shift
5 3 2014-01-01 Electrician Day Shift
6 3 2014-01-02 Electrician Day Shift
```
As you can see, I treat each "shift" as an event. When I insert into the database, I would like a way to check if 'worker\_id' and 'date' already exist in the table in the same row, and if so use UPDATE rather than INSERT. Please note that neither 'worker\_id' nor 'date' is unique on its own; however, the combination of the two in a single row would be unique.
|
You want `on duplicate key` update. But first, you need a unique index on `worker_id` and `date`:
```
create unique index idx_table_worker_date on table(worker_id, date);
insert into table(worker_id, date, shift_type)
values (@worker_id, @date, @shift_type)
on duplicate key update shift_type = values(shift_type);
```
The `table` is a placeholder for your table name. The variables starting with `@` are placeholders for the values you want to insert.
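The same unique-index-plus-upsert idea can be exercised in SQLite, whose `ON CONFLICT ... DO UPDATE` clause (SQLite 3.24+) plays the role of MySQL's `ON DUPLICATE KEY UPDATE`; a sketch with an invented table name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE shifts (
    worker_id INT, date TEXT, shift_type TEXT,
    UNIQUE (worker_id, date))""")

upsert = """
    INSERT INTO shifts (worker_id, date, shift_type) VALUES (?, ?, ?)
    ON CONFLICT (worker_id, date) DO UPDATE SET shift_type = excluded.shift_type
"""
conn.execute(upsert, (1, "2014-01-01", "Labour Day Shift"))
conn.execute(upsert, (1, "2014-01-01", "Engineer Day Shift"))  # same key: updates
rows = conn.execute("SELECT worker_id, date, shift_type FROM shifts").fetchall()
```

The second insert hits the unique `(worker_id, date)` key, so the row is updated in place instead of duplicated.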
|
Since the worker\_id and date in combination are unique, you can rely on the INSERT to throw a duplicate error:
```
INSERT INTO table (event_id, worker_id, date, shift_type)
VALUES (1, 1, @date, @shift)
ON DUPLICATE KEY UPDATE event_id=1, shift_type=values(shift_type);
```
See [INSERT ... ON DUPLICATE KEY](http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html) documentation.
To clarify, you can declare multiple columns to be unique in combination:
```
UNIQUE KEY 'key_name' (column_one, column_two, ... etc.)
```
See [ALTER TABLE](http://dev.mysql.com/doc/refman/5.7/en/alter-table.html) and [CREATE TABLE](http://dev.mysql.com/doc/refman/5.7/en/create-table.html) for further clarification.
|
SQL - UPDATE or INSERT by checking two columns
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have two tables, People and Vehicles. Vehicles belong to people. I'm trying to check if a person does not have a vehicle. I was attempting to do this by joining People and Vehicles and displaying the person's ID that is NOT IN Vehicles.person\_id.
This is returning nothing, and has me wondering if there is something I did wrong, or if there is a more efficient way of doing this.
Query is below
```
Select People.id
From People
INNER JOIN Vehicles
on People.id=Vehicles.person_id
where People.id NOT IN Vehicles.person_id;
```
|
Use left join to figure out the persons with no vehicles
```
Select distinct People.id
From People
LEFT JOIN Vehicles on People.id=Vehicles.person_id
where Vehicles.person_id is NULL
```
|
`NOT IN` can have [issues with `NULL` values](http://www.postgresql.org/docs/9.3/static/functions-subquery.html#FUNCTIONS-SUBQUERY-NOTIN), and should probably be avoided for [performance reasons](http://explainextended.com/2009/09/16/not-in-vs-not-exists-vs-left-join-is-null-postgresql/) if the subquery is very large.
Try `NOT EXISTS`:
```
SELECT p.id
FROM People p
WHERE NOT EXISTS (
SELECT 1
FROM Vehicles v
WHERE v.person_id = p.id)
```
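The `NULL` pitfall is easy to demonstrate. In the sketch below (SQLite via Python, invented sample data), a single `NULL` `person_id` makes `NOT IN` return nothing at all, while `NOT EXISTS` still finds the vehicle-less people:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INT)")
conn.execute("CREATE TABLE vehicles (person_id INT)")
conn.executemany("INSERT INTO people VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO vehicles VALUES (?)", [(1,), (None,)])

# NOT IN against a set containing NULL matches nothing: id <> NULL is unknown
not_in = conn.execute(
    "SELECT id FROM people WHERE id NOT IN (SELECT person_id FROM vehicles)"
).fetchall()

# NOT EXISTS is unaffected by the NULL row
not_exists = conn.execute("""
    SELECT id FROM people p
    WHERE NOT EXISTS (SELECT 1 FROM vehicles v WHERE v.person_id = p.id)
    ORDER BY id
""").fetchall()
```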
|
Correct way to use "NOT IN" Postgres
|
[
"",
"sql",
"postgresql",
""
] |
I want to check if a specific View exists before querying it. I use dynamic SQL to create the query:
```
DECLARE @sqlCommand varchar(1000)
DECLARE @viewName varchar(1000)
DECLARE @otherDB varchar(1000)
SET @sqlCommand = 'IF EXISTS(SELECT 1 FROM ' + @otherDB + '.sys.views WHERE name=''' + @viewName + ''')
BEGIN
SELECT * FROM ' + @viewName + '
END'
EXEC (@sqlCommand)
```
So everything works fine as long as @viewName actually exists. However, if @viewName is a View that does not exist in sys.views, I get an error from the compiler:
> The OLE DB provider "SQLNCLI11" for linked server "*server*" does not contain the table @viewName. The table either does not exist or the current user does not have permissions on that table.
I would have thought that since an IF statement is used, it would just skip the querying of the View. However, it seems the View **has** to exist, otherwise I get the above error.
I've tried alternate solutions, such as using strings for the View names, but no luck. I've also tried the solution in: [How to check the existence of a view](https://stackoverflow.com/questions/23143483/how-to-check-the-existence-of-a-view), but at some point I have to reference the View name in my query, and it would complain
Any info would be greatly appreciated!
|
Check for the existence of the view *outside* the dynamic SQL. You are trying to prevent the compile-time error of the view not existing in the `select`. There is no issue with the `if`:
```
IF EXISTS(SELECT 1 FROM sys.views WHERE name = @viewName)
BEGIN
SET @sqlCommand = 'SELECT * FROM ' + @viewName
EXEC(@sqlCommand)
END;
```
Although it doesn't make a difference in this case, if you are using dynamic SQL, learn about `sp_executesql` -- it is more powerful than `exec()` because you can pass variables in and out.
EDIT:
In that case, you essentially have to do dynamic SQL inside dynamic SQL. The following is not tested, so there could be a syntax error:
```
DECLARE @viewName varchar(1000);
DECLARE @otherDB varchar(1000);
declare @sql nvarchar(max) = '
IF EXISTS (SELECT 1 FROM @otherDB.sys.views WHERE name = ''@viewName'')
BEGIN
DECLARE @sqlCommand nvarchar(max);
SET @sqlCommand = ''SELECT * FROM @viewName'';
EXEC(@sqlCommand);
END;';
SET @sql = replace(replace(@sql, '@otherDB', @otherDB), '@viewName', @viewName);
EXEC(@sql);
```
|
What version of SQL Server are you using? I only have SQL Server 2014 available to test with, but the T-SQL below works for both missing and not missing views. I wonder whether the fact that you are checking for existence of the view in otherdb.sys.views but are not qualifying otherdb when selecting from the view is to blame?
```
declare @viewName varchar(50) = 'MissingView';
declare @sqlCommand nvarchar(1000);
declare @otherdb varchar(20) = 'MyTestDatabase';
set @sqlCommand = N'if exists
(
select 1
from ' + @otherdb + '.sys.views as v
where v.name = ''' + @viewName + '''
)
begin
select * from ' + @otherdb + '.dbo.' + @viewName + ';
end
else
begin
select ''Nah mate - missing view'';
end';
print @sqlCommand;
execute sp_executesql @sqlCommand;
```
|
Check if View exists before querying it
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
\*None of the other available answers solved my problem.
I have a table t like this
```
id,cc,count
'1','HN','22'
'1','US','18'
'1','VN','1'
'2','DK','2'
'2','US','256'
'3','SK','1'
'3','US','66310'
'4','UA','2'
'4','US','263'
'6','FR','7'
'6','US','84'
'9','BR','3'
```
I want to get the rows for ids with maximum count, like below:
```
id,cc,count
'1','HN','22'
'2','US','256'
'3','US','66310'
'4','US','263'
'6','US','84'
'9','BR','3'
```
My current code is like this but I am not getting the expected results:
```
SELECT t.* FROM t
JOIN (
SELECT
t.id,t.cc
,max(t.count) as max_slash24_count
FROM t
group by t.id,t.cc
) highest
ON t.count = highest.max_slash24_count
and t.cc = highest.cc
```
Can anybody help me out?
|
Remove `CC` column from `group by`. Try this.
```
SELECT t.* FROM t
JOIN (
SELECT
t.id
,max(t.count) as max_slash24_count
FROM t
group by t.id
) highest
ON t.count = highest.max_slash24_count
and t.id= highest.id
```
|
Try this:
```
create table t (id varchar(10), cc varchar(10), count varchar(10))
insert into t (id,cc,count) values ('1','HN','22');
insert into t (id,cc,count) values ('1','US','18');
insert into t (id,cc,count) values ('1','VN','1');
insert into t (id,cc,count) values ('2','DK','2');
insert into t (id,cc,count) values ('2','US','256');
insert into t (id,cc,count) values ('3','SK','1');
insert into t (id,cc,count) values ('3','US','66310');
insert into t (id,cc,count) values ('4','UA','2');
insert into t (id,cc,count) values ('4','US','263');
insert into t (id,cc,count) values ('6','FR','7');
insert into t (id,cc,count) values ('6','US','84');
insert into t (id,cc,count) values ('9','BR','3');
select *
from t
where exists (
select *
from t as t1
group by t1.id
having t1.id = t.id and max(t1.count) = t.count
)
```
Result
```
ID CC COUNT
-------------
1 HN 22
2 US 256
3 US 66310
4 US 263
6 US 84
9 BR 3
```
Check [SQLFiddle](http://sqlfiddle.com/#!2/3ede69/2)
|
How to select a row with maximum value for a column in MySQL?
|
[
"",
"mysql",
"sql",
"select",
"max",
""
] |
I don't even know how to start addressing this issue; any assistance will be appreciated:
My goal is to generate a table (dynamically) that return dates that are working shift.
Settings:
* I have a pattern as follow 7 days on duty 7 days off duty and it goes on...
* I can determine the start date that the shift start
* I can determine the pattern days (7 days - meaning 7 days on 7 days off)
* I can determine the end date for calculating the pattern
* I want to calculate and create the following table
For example:
Pattern days: 7
Start date: 01/01/2015
Pattern end date: 12/31/2015
```
ID StartshiftDate EndShiftDate OnDuty
-------------------------------------------------------------
1 01/01/2015 01/07/2015 On Duty
2 01/08/2015 01/14/2015 Off Duty
3 01/15/2015 01/21/2015 On Duty
```
I know that I need to create a CTE that starts from the start date, and I need to add 7 days for each date.
But I don't know how to determine if a range of dates is on duty or off duty.
And how do I create the loop for generating rows up to the pattern end date?
Any help will be appreciated
|
As you mentioned you need to use `Recursive CTE`. Try this.
```
DECLARE @Startdate DATE= '01/01/2015',
@enddate DATE = '01/31/2015',
@Patterndays INT=7;
WITH cte
AS (SELECT CONVERT(DATE, @Startdate) [dates],
1 AS id,
CONVERT(VARCHAR(20), 'On Duty') AS duty
UNION ALL
SELECT Dateadd(dd, @Patterndays, [dates]),
ID + 1,
CASE
WHEN ( id + 1 ) % 2 = 1 THEN CONVERT(VARCHAR(20), 'On Duty')
ELSE CONVERT(VARCHAR(20), 'Off Duty')
END
FROM cte
WHERE dates < Dateadd(dd, -@Patterndays, CONVERT(DATE, @enddate)))
SELECT id,
dates AS StartshiftDate,
       Dateadd(dd, @Patterndays - 1, dates) AS EndShiftDate,
duty
FROM cte
Option (maxrecursion 0)
```
**Result :**
```
id StartshiftDate EndShiftDate duty
-- -------------- ------------ -------
1 2015-01-01 2015-01-07 On Duty
2 2015-01-08 2015-01-14 Off Duty
3 2015-01-15 2015-01-21 On Duty
4 2015-01-22 2015-01-28 Off Duty
5 2015-01-29 2015-02-04 On Duty
```
|
I would use a table of numbers.
<http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html>
<http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1>
[This article](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1) compares different ways to generate it, including recursive CTE, which is much slower than the rest.
It doesn't really matter how you generate the table of numbers, because normally it is done once. For this example I'll populate a table variable with 10000 numbers using one of the methods from the article above.
```
declare @TNumbers table (Number int);
INSERT INTO @TNumbers (Number)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY s1.[object_id])
FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2
OPTION (MAXDOP 1);
```
Now we have a table of numbers and generating your dates is a matter of simple formulas:
```
DECLARE @VarStartDate date = '2015-01-01';
DECLARE @VarEndDate date = '2015-12-31';
DECLARE @VarShiftLength int = 7;
SELECT
N.Number AS ID
, DATEADD(day, (N.Number - 1) * @VarShiftLength, @VarStartDate) AS StartShiftDay
, DATEADD(day, N.Number * @VarShiftLength - 1, @VarStartDate) AS EndShiftDay
, CASE WHEN N.Number % 2 = 0 THEN 'Off Duty' ELSE 'On Duty' END AS OnDuty
FROM
@TNumbers AS N
WHERE
DATEADD(day, N.Number * @VarShiftLength - 1, @VarStartDate) <= @VarEndDate
ORDER BY ID;
```
Result set:
```
ID StartShiftDay EndShiftDay OnDuty
1 2015-01-01 2015-01-07 On Duty
2 2015-01-08 2015-01-14 Off Duty
3 2015-01-15 2015-01-21 On Duty
4 2015-01-22 2015-01-28 Off Duty
5 2015-01-29 2015-02-04 On Duty
6 2015-02-05 2015-02-11 Off Duty
7 2015-02-12 2015-02-18 On Duty
8 2015-02-19 2015-02-25 Off Duty
9 2015-02-26 2015-03-04 On Duty
10 2015-03-05 2015-03-11 Off Duty
11 2015-03-12 2015-03-18 On Duty
12 2015-03-19 2015-03-25 Off Duty
13 2015-03-26 2015-04-01 On Duty
14 2015-04-02 2015-04-08 Off Duty
15 2015-04-09 2015-04-15 On Duty
16 2015-04-16 2015-04-22 Off Duty
17 2015-04-23 2015-04-29 On Duty
18 2015-04-30 2015-05-06 Off Duty
19 2015-05-07 2015-05-13 On Duty
20 2015-05-14 2015-05-20 Off Duty
21 2015-05-21 2015-05-27 On Duty
22 2015-05-28 2015-06-03 Off Duty
23 2015-06-04 2015-06-10 On Duty
24 2015-06-11 2015-06-17 Off Duty
25 2015-06-18 2015-06-24 On Duty
26 2015-06-25 2015-07-01 Off Duty
27 2015-07-02 2015-07-08 On Duty
28 2015-07-09 2015-07-15 Off Duty
29 2015-07-16 2015-07-22 On Duty
30 2015-07-23 2015-07-29 Off Duty
31 2015-07-30 2015-08-05 On Duty
32 2015-08-06 2015-08-12 Off Duty
33 2015-08-13 2015-08-19 On Duty
34 2015-08-20 2015-08-26 Off Duty
35 2015-08-27 2015-09-02 On Duty
36 2015-09-03 2015-09-09 Off Duty
37 2015-09-10 2015-09-16 On Duty
38 2015-09-17 2015-09-23 Off Duty
39 2015-09-24 2015-09-30 On Duty
40 2015-10-01 2015-10-07 Off Duty
41 2015-10-08 2015-10-14 On Duty
42 2015-10-15 2015-10-21 Off Duty
43 2015-10-22 2015-10-28 On Duty
44 2015-10-29 2015-11-04 Off Duty
45 2015-11-05 2015-11-11 On Duty
46 2015-11-12 2015-11-18 Off Duty
47 2015-11-19 2015-11-25 On Duty
48 2015-11-26 2015-12-02 Off Duty
49 2015-12-03 2015-12-09 On Duty
50 2015-12-10 2015-12-16 Off Duty
51 2015-12-17 2015-12-23 On Duty
52 2015-12-24 2015-12-30 Off Duty
```
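The arithmetic in the numbers-table query translates directly out of SQL; here is a small Python sketch of the same formulas (start + (n-1)*len, start + n*len - 1, parity for duty), using the question's parameters:

```python
from datetime import date, timedelta

def shift_pattern(start, end, shift_len):
    """Yield (id, shift_start, shift_end, duty) tuples, alternating on/off duty."""
    n = 1
    while True:
        shift_start = start + timedelta(days=(n - 1) * shift_len)
        shift_end = start + timedelta(days=n * shift_len - 1)
        if shift_end > end:          # same cut-off as the WHERE clause above
            break
        duty = "Off Duty" if n % 2 == 0 else "On Duty"
        yield n, shift_start, shift_end, duty
        n += 1

shifts = list(shift_pattern(date(2015, 1, 1), date(2015, 12, 31), 7))
```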
|
Generating shift pattern in SQL
|
[
"",
"sql",
"sql-server",
"with-statement",
""
] |
I have a table which links customer IDs to a sale ID. Multiple customers can be linked to the same sale ID; however, the first customer should be the Main customer with Type 'M'. All other customers should be type Other ('O').
```
Cust_ID Sale_ID Cust_Type
1 123 'M'
2 123 'O'
3 124 'M'
4 125 'M'
5 125 'O'
6 125 'O'
```
Sometimes multiple customers linked to the same Sale ID will be the Main ('M') customer - which is not correct:
```
Cust_ID Sale_ID Cust_Type
1 123 'M'
2 123 'M'
3 123 'O'
```
What I wish to be able to do is return a list of Customer ID's, Sale IDs and Customer Types where more than one of the customers in a sale ID are a main customer. I.e. Main ('M') occurs more than once across rows that have the same sale ID.
Any help is greatly appreciated!
|
So, the problem is that a `sales_id` can have more than one `M` value and you want to detect this. I would approach this by using a window function to count those values:
```
select t.*
from (select t.*,
sum(case when cust_type = 'M' then 1 else 0 end) over (partition by sales_id) as NumMs
from table t
) t
where NumMs > 1;
```
Actually, I would use the condition `NumMs <> 1`, because missing the main customer might also be important.
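The window-`SUM` approach runs as-is in SQLite 3.25+; a sketch against the question's sample rows (via Python as a stand-in for SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (cust_id INT, sale_id INT, cust_type TEXT)")
conn.executemany("INSERT INTO sales VALUES (?,?,?)",
                 [(1, 123, "M"), (2, 123, "M"), (3, 123, "O"),
                  (4, 125, "M"), (5, 125, "O")])

# Count 'M' rows per sale with a window SUM, then keep sales with more than one
rows = conn.execute("""
    SELECT cust_id, sale_id, cust_type
    FROM (SELECT s.*,
                 SUM(CASE WHEN cust_type = 'M' THEN 1 ELSE 0 END)
                     OVER (PARTITION BY sale_id) AS num_m
          FROM sales s)
    WHERE num_m > 1
    ORDER BY cust_id
""").fetchall()
```

Sale 123 has two 'M' rows, so all of its rows come back; sale 125 is correct and is filtered out.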
|
Is this what you mean? This can be achieved using a window function.
```
CREATE TABLE temp(
Cust_ID INT,
Sale_ID INT,
Cust_Type VARCHAR(1)
)
INSERT INTO temp VALUES
(1, 123, 'M'),
(2, 123, 'M'),
(3, 124, 'M'),
(4, 125, 'M'),
(5, 125, 'O'),
(6, 125, 'O');
WITH CTE AS(
SELECT *, cc = COUNT(*) OVER(PARTITION BY Sale_ID)
FROM temp
WHERE Cust_Type = 'M'
)
SELECT
Cust_ID,
Sale_ID,
Cust_Type
FROM CTE
WHERE cc > 1
DROP TABLE temp
```
|
Find records that belong to the same identifier, check for multiple occurences of value in column
|
[
"",
"sql",
"sql-server",
""
] |
I have a PostgreSQL database that stores users in a `users` table and conversations they take part in a `conversation` table. Since each user can take part in multiple conversations and each conversation can involve multiple users, I have a `conversation_user` linking table to track which users are participating in each conversation:
```
# conversation_user
id | conversation_id | user_id
----+------------------+--------
1 | 1 | 32
2 | 1 | 3
3 | 2 | 32
4 | 2 | 3
5 | 2 | 4
```
In the above table, user 32 is having one conversation with just user 3 and another with both 3 and user 4. How would I write a query that would show that there is a conversation between just user 32 and user 3?
I've tried the following:
```
SELECT conversation_id AS cid,
user_id
FROM conversation_user
GROUP BY cid HAVING count(*) = 2
AND (user_id = 32
OR user_id = 3);
SELECT conversation_id AS cid,
user_id
FROM conversation_user
GROUP BY (cid HAVING count(*) = 2
AND (user_id = 32
OR user_id = 3));
SELECT conversation_id AS cid,
user_id
FROM conversation_user
WHERE (user_id = 32)
OR (user_id = 3)
GROUP BY cid HAVING count(*) = 2;
```
These queries throw an error that says that user\_id must appear in the `GROUP BY` clause or be used in an aggregate function. Putting them in an aggregate function (e.g. `MIN` or `MAX`) doesn't sound appropriate. I thought that my first two attempts were putting them in the `GROUP BY` clause.
What am I doing wrong?
|
This is a case of **relational division**. We have assembled an arsenal of techniques under this related question:
* [How to filter SQL results in a has-many-through relation](https://stackoverflow.com/questions/7364969/how-to-filter-sql-results-in-a-has-many-through-relation/7774879)
The special difficulty is to exclude additional users. There are basically 4 techniques.
* [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
I suggest `LEFT JOIN` / `IS NULL`:
```
SELECT cu1.conversation_id
FROM conversation_user cu1
JOIN conversation_user cu2 USING (conversation_id)
LEFT JOIN conversation_user cu3 ON cu3.conversation_id = cu1.conversation_id
AND cu3.user_id NOT IN (3,32)
WHERE cu1.user_id = 32
AND cu2.user_id = 3
AND cu3.conversation_id IS NULL;
```
Or `NOT EXISTS`:
```
SELECT cu1.conversation_id
FROM conversation_user cu1
JOIN conversation_user cu2 USING (conversation_id)
WHERE cu1.user_id = 32
AND cu2.user_id = 3
AND NOT EXISTS (
SELECT 1
FROM conversation_user cu3
WHERE cu3.conversation_id = cu1.conversation_id
AND cu3.user_id NOT IN (3,32)
);
```
Both queries do *not* depend on a `UNIQUE` constraint for `(conversation_id, user_id)`, which may or may not be in place. Meaning, the query even works if `user_id` 32 (or 3) is listed more than once for the same conversation. You *would* get duplicate rows in the result, though, and need to apply `DISTINCT` or `GROUP BY`.
The only condition is the one you formulated:
> ... a query that would show that there is a conversation between just user 32 and user 3?
### Audited query
The [query you linked in the comment](http://pastebin.com/tNVifwmu) wouldn't work. You forgot to exclude other participants. Should be something like:
```
SELECT * -- or whatever you want to return
FROM conversation_user cu1
WHERE cu1.user_id = 32
AND EXISTS (
SELECT 1
FROM conversation_user cu2
WHERE cu2.conversation_id = cu1.conversation_id
AND cu2.user_id = 3
)
AND NOT EXISTS (
SELECT 1
FROM conversation_user cu3
WHERE cu3.conversation_id = cu1.conversation_id
AND cu3.user_id NOT IN (3,32)
);
```
Which is similar to the other two queries, except that it will not return multiple rows if `user_id = 3` is linked multiple times.
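A compact way to check the "exactly these two users" condition on the sample data is `GROUP BY`/`HAVING`; the sketch below runs in SQLite via Python (SQLite has no `bool_and`, so a `SUM` over the boolean stands in, and it assumes no duplicate `(conversation_id, user_id)` rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversation_user (id INT, conversation_id INT, user_id INT)")
conn.executemany("INSERT INTO conversation_user VALUES (?,?,?)",
                 [(1, 1, 32), (2, 1, 3), (3, 2, 32), (4, 2, 3), (5, 2, 4)])

# A conversation qualifies when every participant is in {3, 32}
# and there are exactly two participant rows
rows = conn.execute("""
    SELECT conversation_id
    FROM conversation_user
    GROUP BY conversation_id
    HAVING SUM(user_id IN (3, 32)) = COUNT(*) AND COUNT(*) = 2
""").fetchall()
```

Conversation 2 is rejected because user 4 makes the membership sum fall short of its row count.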
|
If you just want confirmation:
```
select conversation_id
from conversation_users
group by conversation_id
having bool_and ( user_id in (3,32))
and count(*) = 2;
```
If you want full details, you can use a window function and a CTE like this:
```
with a as (
select *
,not bool_and( user_id in (3,32) )
over ( partition by conversation_id)
and 2 = count(user_id)
over ( partition by conversation_id)
as conv_candidates
from conversation_users
)
select * from a where conv_candidates;
```
|
Find rows that have same value in one column and other values in another column?
|
[
"",
"sql",
"postgresql",
"relational-division",
""
] |
I'm trying to set up pagination using SQL. I want 3 results per page and here is what I have done:
```
SELECT mot_cle.* FROM mot_cle
ORDER BY hits DESC LIMIT 3 OFFSET 0; --Page 1
SELECT mot_cle.* FROM mot_cle
ORDER BY hits DESC LIMIT 3 OFFSET 3; --Page 2
SELECT mot_cle.* FROM mot_cle
ORDER BY hits DESC LIMIT 3 OFFSET 6; --Page 3
SELECT mot_cle.*
FROM mot_cle
ORDER BY hits DESC LIMIT 3 OFFSET 9; --Page 4
```
I checked many times and this is not very complicated but my results are not really what I expected :
Page 1 :
```
+-----+--------+------+
| id | mot | hits |
+-----+--------+------+
| 2 | test | 46 |
| 1 | blabla | 5 |
| 475 | intro | 3 |
+-----+--------+------+
```
Page 2 :
```
+-----+-------+------+
| id | mot | hits |
+-----+-------+------+
| 478 | vrai | 1 |
| 26 | ouest | 1 |
| 27 | serie | 1 |
+-----+-------+------+
```
Page 3 :
```
+-----+-------+------+
| id | mot | hits |
+-----+-------+------+
| 27 | serie | 1 |
| 26 | ouest | 1 |
| 478 | vrai | 1 |
+-----+-------+------+
```
Page 4 :
```
+-----+-------+------+
| id | mot | hits |
+-----+-------+------+
| 27 | serie | 1 |
| 26 | ouest | 1 |
| 478 | vrai | 1 |
+-----+-------+------+
```
As you can see, pages 2, 3 and 4 have the same results...
When I fetch the 4 pages in one :
```
SELECT mot_cle.* FROM mot_cle
ORDER BY hits DESC LIMIT 20 OFFSET 0;
```
Result :
```
+-----+-------------+------+
| id | mot | hits |
+-----+-------------+------+
| 2 | test | 46 |
| 1 | blabla | 5 |
| 475 | intro | 3 |
| 35 | acteurs | 1 |
| 36 | milieu | 1 |
| 37 | industriel | 1 |
| 38 | plaire | 1 |
| 39 | grandes | 1 |
| 40 | ingenieries | 1 |
| 41 | francaises | 1 |
| 34 | partenaire | 1 |
| 33 | rthgyjhkj | 1 |
| 32 | cool | 1 |
| 31 | super | 1 |
| 30 | vieux | 1 |
| 29 | moteur | 1 |
| 28 | yahoo | 1 |
| 27 | serie | 1 |
| 26 | ouest | 1 |
| 478 | vrai | 1 |
+-----+-------------+------+
```
Maybe I'm missing something or sorting results and using limit/offset are not compatible, I don't know what's wrong.
|
The problem here is that the rows all have a hit count of `1`, therefore their position when using `ORDER BY hits` is non-deterministic. And since you execute a new query each time you access a page, the rows will be "scrambled" anew.
To keep your pages consistent, you could also order by their id:
```
SELECT mot_cle.* FROM mot_cle ORDER BY hits DESC, id ASC LIMIT 3 OFFSET 0; --Page 1
```
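A quick check that the `id` tie-breaker makes pages both stable and disjoint; this sketch uses SQLite via Python with invented rows, most of them tied at one hit like the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mot_cle (id INTEGER PRIMARY KEY, mot TEXT, hits INT)")
# Many ties on hits = 1 -- this is what made the original pages unstable
conn.executemany("INSERT INTO mot_cle VALUES (?,?,?)",
                 [(i, f"mot{i}", 1) for i in range(1, 10)] + [(10, "test", 46)])

def page(n, size=3):
    # Deterministic order: hits DESC, then id ASC as the tie-breaker
    return conn.execute(
        "SELECT id FROM mot_cle ORDER BY hits DESC, id ASC LIMIT ? OFFSET ?",
        (size, n * size)).fetchall()

pages = [page(n) for n in range(4)]
all_ids = [r[0] for p in pages for r in p]
```

Every row appears exactly once across the four pages, and re-running any page returns the same rows.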
|
`LIMIT` basically "aborts" the query once enough matching rows have been found. E.g. if you have `LIMIT 2`, then as soon as two rows which match the `join`/`where` clauses are found, the rest of the query goes away and you get those two rows.
But if you have an `ORDER BY` in there, then the ENTIRE matching result set is ordered, then the LIMIT is applied to that sorted set.
e.g. if you have some records that'd be returned as `5,10,203,3,92`, then
```
SELECT id ... LIMIT 2 -> (5,10),203,3,92
^^^^^^---actual returned-to-client results
SELECT id ... ORDER BY id LIMIT 2 -> 3,5,10,92,203 -> (3,5),10,92,203
^^---internal-only results
^^^^--actual returned-to-client results.
```
|
Strange results using order by and limit
|
[
"",
"mysql",
"sql",
"select",
"sql-order-by",
"sql-limit",
""
] |
I'm using an SSIS package to update my contents on a daily basis. There are thousands of contents which have different Moderation IDs, and I want to calculate the top ten categories FOR EACH Moderation ID. Before I realized that I should calculate it for each ModerationId, I used this query to get the Contents to be updated:
```
SELECT TOP 10 ModerationId, Category, COUNT(ContentSeqNum) AS Total FROM Content
WHERE Category IS NOT NULL
GROUP BY ModerationId, Category ORDER BY ModerationId, Total DESC
```
And that was a faulty approach because this query calculates top ten Categories for all the data, which should be different top ten categories for different ModerationId's.
How can I change this query to calculate Top 10 Categories for each ModerationId?
|
Use a window function to calculate the top ten `categories` for each `Moderation ID`. Try this.
```
SELECT moderationid,
category,
total
FROM (SELECT Row_number() OVER (partition BY moderationid
                 ORDER BY Count(contentseqnum) DESC) Rn,
moderationid,
category,
Count(contentseqnum) AS Total
FROM content
WHERE category IS NOT NULL
GROUP BY moderationid,
category) A
WHERE rn <= 10
```
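The same top-N-per-group pattern runs in SQLite 3.25+; a sketch with made-up content rows, keeping the top two categories per moderation ID to keep the output short:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE content (moderation_id INT, category TEXT)")
conn.executemany("INSERT INTO content VALUES (?,?)",
                 [(1, "news")] * 3 + [(1, "sport")] * 2 + [(1, "tech")] +
                 [(2, "tech")] * 4 + [(2, "news")])

# Aggregate first, then rank categories per moderation_id by count descending
rows = conn.execute("""
    SELECT moderation_id, category, total FROM (
        SELECT moderation_id, category, total,
               ROW_NUMBER() OVER (PARTITION BY moderation_id
                                  ORDER BY total DESC) AS rn
        FROM (SELECT moderation_id, category, COUNT(*) AS total
              FROM content
              GROUP BY moderation_id, category))
    WHERE rn <= 2
    ORDER BY moderation_id, total DESC
""").fetchall()
```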
|
Try this:
```
SELECT TOP(10) ModerationId, Category, COUNT(ContentSeqNum) OVER(PARTITION BY ModerationId ORDER BY ModerationId) AS Total
FROM Content
WHERE Category IS NOT NULL
ORDER BY Total DESC
```
|
Calculating Top Ten Categories in SSIS
|
[
"",
"sql",
"sql-server",
"ssis",
""
] |
I'm using MySQL Workbench and I am trying to create a table with a foreign key using an SQL query. I am having a problem with the foreign key part.
```
create table employee_position
(
ePID int primary key,
ePName varchar(45) not null,
eID int foreign key references employee_profile(eID)
)
```
|
Your syntax is wrong. Try:
```
create table employee_position
(
ePID int primary key,
ePName varchar(45) not null,
eID int,
  foreign key (eID) references employee_profile(eID)
)
```
For more information see the [mysql documentation](http://dev.mysql.com/doc/refman/5.6/en/create-table-foreign-keys.html)
|
```
create table employee_position
(
ePID int primary key,
ePName varchar(45) not null,
eID int,
foreign key (eID) references employee_profile(eID)
)
```
### [SQLFiddle demo](http://sqlfiddle.com/#!2/ab184/1)
|
Foreign Key Unexpected
|
[
"",
"mysql",
"sql",
"foreign-keys",
"create-table",
""
] |
I have a big table which has 700 millions of rows and dozens of columns. I need it to carry out a left join operation as the left table.
However, since it is so large, the time consumption is beyond affordable. So I'd like to split it into a few smaller tables and carry out the task in a multiprocessing manner.
I know an SSIS package is available for this, but I am constrained not to use it.
Also, I know an easy way would be to add a row id to each of the rows, but unfortunately, I cannot make changes to the table.
So, may I know how to achieve my goal?
Many thanks.
|
Using `CTE` and `row_number()`
```
;WITH cte_tbl AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY [First Name]) AS RowNumber
FROM your_table
)
SELECT * FROM cte_tbl
WHERE RowNumber < 100
```
You can fetch records for any range.
|
Use [fetch-offset](http://technet.microsoft.com/en-us/library/gg699618%28v=sql.110%29.aspx) if you need certain number of rows using sql-script.
like this:
```
SELECT [First Name] + ' ' + [Last Name] FROM big_table ORDER BY [First Name] OFFSET 15 ROWS;
SELECT [First Name] + ' ' + [Last Name] FROM big_table ORDER BY [First Name] OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
```
|
Get certain number of rows from a table
|
[
"",
"sql",
"sql-server",
""
] |
Let Suppose we have two variables in sql
```
DECLARE @str1 nvarchar(max) = 'FirstName,MiddleName,LastName'
DECLARE @str2 nvarchar(max) = 'John,,Adams'
```
Now you can see that the value after the first comma is empty in the second variable. If this case occurs, I want to remove the corresponding entry from the first string.
```
-- Expected result would be
Result of str1 = 'FirstName,LastName'
Result of str2 = 'John,Adams'
```
**NOTE**
It is certain that if str1 has 3 items then str2 will also have 3, regardless of empty strings.
**Update**
The above is just an example: `str1` could have 100 commas, and str2 will always have the same number of commas as str1 does. Suppose we get an empty entry at any position of str2 (position 10 or 20 or 32, anywhere); the item at the same position in str1 should be removed.
Please tell me if anything is still unclear.
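To pin down the intended rule, pair the items of `str1` and `str2` by position and drop both items wherever `str2`'s item is empty. A hypothetical Python sketch of the expected behavior (not a T-SQL solution):

```python
def remove_empty_pairs(str1: str, str2: str) -> tuple:
    """Drop items from both lists at positions where str2's item is empty."""
    pairs = [(a, b) for a, b in zip(str1.split(","), str2.split(",")) if b != ""]
    return (",".join(a for a, _ in pairs), ",".join(b for _, b in pairs))

print(remove_empty_pairs("FirstName,MiddleName,LastName", "John,,Adams"))
# -> ('FirstName,LastName', 'John,Adams')
```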
|
Use `Replace`. Try this.
```
DECLARE @str1 NVARCHAR(max) = 'FirstName,MiddleName,LastName'
DECLARE @str2 NVARCHAR(max) = 'John,,Adams'
SELECT Parsename(Replace(Replace(@str2, ',,', ',abc,'), ',', '.'), 3)
+ ','
+ Parsename(Replace(Replace(@str2, ',,', ',abc,'), ',', '.'), 1)
SELECT Parsename(Replace(Replace(@str1, ',,', ',abc,'), ',', '.'), 3)
+ ','
+ Parsename(Replace(Replace(@str1, ',,', ',abc,'), ',', '.'), 1)
```
**Update:** for more than four items
```
DECLARE @str1 NVARCHAR(max) = 'FirstName,MiddleName,LastName,Address'
DECLARE @str2 NVARCHAR(max) = 'Jhon,,,Berlin'
SELECT LEFT(@str1, Charindex(',', @str1)-1) + ','
+ Reverse(LEFT(Reverse(@str1), Charindex(',', Reverse(@str1))-1))
SELECT LEFT(@str2, Charindex(',', @str2)-1) + ','
+ Reverse(LEFT(Reverse(@str2), Charindex(',', Reverse(@str2))-1))
```
|
OK, this works regardless of the number of commas or the number of missing values. It uses a numbers (tally) table, which I've included for the simplicity of this example:
```
;WITH Tally (Number) AS
(
-- 1000 rows
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) a(n)
CROSS JOIN (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) b(n)
CROSS JOIN (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) c(n)
)
SELECT *
Into #Numbers
FROM Tally;
DECLARE @str1 NVARCHAR(max) = 'FirstName,MiddleName,LastName,Initial,County,State,Address'
DECLARE @str2 NVARCHAR(max) = 'Jhon,,Isiah,,Black Forest,,Berlin'
DECLARE @result1 VARCHAR(MAX)
DECLARE @result2 VARCHAR(MAX)
;WITH CTE
AS
(
SELECT Item = SUBSTRING(@str1, Number,
CHARINDEX(',', @str1 + ',', Number) - Number),
ROW_NUMBER() OVER (ORDER BY Number) AS RN
FROM #Numbers
WHERE Number <= CONVERT(INT, LEN(@str1))
AND SUBSTRING(',' + @str1, Number, 1) = ','
),
CTE2
As
(
SELECT Item = SUBSTRING(@str2, Number,
CHARINDEX(',', @str2 + ',', Number) - Number),
ROW_NUMBER() OVER (ORDER BY Number) AS RN
FROM #Numbers
WHERE Number <= CONVERT(INT, LEN(@str2))
AND SUBSTRING(',' + @str2, Number, 1) = ','
)
SELECT @result1 = COALESCE(@result1+', ' ,'') + CTE.Item, @result2 = COALESCE(@result2+', ' ,'') + CTE2.Item
FROM CTE
INNER JOIN CTE2
ON CTE.RN = CTE2.RN
WHERE CTE2.Item <> ''
SELECT @result1
SELECT @result2
```
This gives the following results:
```
FirstName, LastName, County, Address
Jhon, Isiah, Black Forest, Berlin
```
Of course you would want to create a permanent numbers table.
|
Remove comma if value after comma is empty in sql
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have three fields: Category, Date, and ID. I need to retrieve data that does not belong under certain IDs. Here is an example of my query:
```
SELECT Category, Date, ID
FROM table
WHERE ID NOT IN('1','2','3')
AND Date = '01/06/2015'
```
After running this query I should only get records that do not have any ID, meaning NULL values, because for yesterday's records only IDs 1, 2, 3 exist and the rest do not have any value (NULL). For some reason when I run the query it removes the NULL values as well, so I end up with 0 rows. This is very strange to me and I do not understand the cause. All I know is that the ID values are strings. Any suggestions?
|
Try this. NULL values cannot be equated to anything else.
```
SELECT Category, Date, ID
FROM table
WHERE (ID NOT IN('1','2','3') OR ID IS NULL)
AND Date = '01/06/2015'
```
|
Others have already shown how to fix this, so let me try to explain *why* this happens.
```
WHERE ID NOT IN('1','2','3')
```
is equivalent to
```
WHERE ID <> '1' AND ID <> '2' AND ID <> '3'
```
Since `NULL <> anything` yields UNKNOWN, your expression yields UNKNOWN and the record in question is not returned.
See the following Wikipedia article for details on this *ternary logic*:
* [Null (SQL): Comparisons with NULL and the three-valued logic (3VL)](http://en.wikipedia.org/wiki/Null_(SQL)#Comparisons_with_NULL_and_the_three-valued_logic_.283VL.29)
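This three-valued behavior is easy to reproduce; here is a quick sanity check using SQLite from Python (any engine behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("1",), ("2",), ("3",), (None,)])

# NOT IN silently drops the NULL row: NULL <> '1' evaluates to UNKNOWN
missing = conn.execute("SELECT id FROM t WHERE id NOT IN ('1','2','3')").fetchall()
print(missing)  # -> []

# An explicit IS NULL test is needed to keep the NULL row
fixed = conn.execute(
    "SELECT id FROM t WHERE id NOT IN ('1','2','3') OR id IS NULL"
).fetchall()
print(fixed)  # -> [(None,)]
```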
|
Unable to retrieve NULL data
|
[
"",
"sql",
"sql-server",
""
] |
How do I select all data from the first table and, where row1 in the second table equals row1 in the first table, include the matching rows from the second table in the result with a LEFT JOIN?
first\_table:
```
id row1 some_data
1 2 test
2 3 test2
```
second\_table:
```
id row1 some_data
1 4 test
2 2 test2
```
Needed result is:
```
id row1 some_data id row1 some_data
1 2 test 2 2 test2
2 3 test2 NULL NULL NULL
```
Query:
```
SELECT *
FROM `first_table` AS c
LEFT JOIN `second_table` AS s ON `c`.`row1` = `s`.`row1`
```
I am sorry for my bad English and thanks in advance.
|
The following LEFT OUTER JOIN should give the desired result. It is important to use table aliases (t1, t2) to distinguish the two sets of columns.
```
SELECT t1.id, t1.row1, t1.some_data, t2.id, t2.row1, t2.some_data
FROM first_table t1
LEFT JOIN second_table t2
ON t1.row1 = t2.row1
ORDER BY t1.id;
```
|
This is how we can write the left join:
```
SELECT *
FROM first_table As c
LEFT JOIN second_table AS o ON c.row1 = o.row1
```
you can learn more about left join [here](http://www.mysqltutorial.org/mysql-left-join.aspx).
|
Select data from two tables with LEFT JOIN
|
[
"",
"mysql",
"sql",
""
] |
I want to return the rows in the Base table that have an inventory over a certain count.
Something like this, but I'm having difficulties...
```
SELECT b
FROM Base b, Inventory i
WHERE COUNT(b.brand = i.brand) > 2
```
This should be returned:
```
brand | model |
----------------
ford | T |
chevy | X |
```
.
```
Base
----------------
brand | model |
----------------
toyota | R |
ford | T |
chevy | X |
```
.
```
Inventory
---------
brand |
---------
toyota |
ford |
ford |
ford |
toyota |
chevy |
chevy |
chevy |
```
**Edit History**
* *I've updated the Base table to include further rows required*
|
From your query, it looks like you're trying to do a join, and then a count.
Something like:
```
SELECT b.brand, b.model
FROM Base b
INNER JOIN Inventory i
    ON (b.brand = i.brand)
GROUP BY b.brand, b.model
HAVING COUNT(i.brand) > 2
```
---
An alternative (that I could think of), would be to use a nested select:
```
SELECT *
FROM Base
WHERE brand IN (
SELECT brand
FROM Inventory
GROUP BY brand
HAVING COUNT(*) > 2
)
```
|
Use the `group by` and `having` clauses to filter the `brand`s having `count > 2`. Try this.
```
select * from base b where exists(SELECT 1
FROM Inventory i
where b.brand=i.brand
group by brand
having COUNT(1) > 2 )
```
|
SQL count across two tables
|
[
"",
"sql",
""
] |
Here is a (simplified) example of the DB I have (sorry for the ugly format, I don't know how to write tables):
```
Name | Num
John | 1
John | 3
John | 4
Dany | 2
Andy | 5
Andy | 5
```
I want to count how many people have at least two different Numbers.
For instance, here, only John, because he has `1`, `3` and `4`.
Not Andy, because he has 5 twice and no other number.
And obviously not Dany, because he has only one entry.
Thank you very much.
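What the query has to express, shown procedurally with the sample data (a hypothetical Python sketch, just to pin down the expected answer):

```python
from collections import defaultdict

rows = [
    ("John", 1), ("John", 3), ("John", 4),
    ("Dany", 2),
    ("Andy", 5), ("Andy", 5),
]

nums_per_name = defaultdict(set)   # distinct Nums per Name
for name, num in rows:
    nums_per_name[name].add(num)

people_with_two = sum(1 for nums in nums_per_name.values() if len(nums) >= 2)
print(people_with_two)  # -> 1 (only John)
```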
|
Try this.
```
select count(name) from table group by name having count(distinct num)>1
```
|
Try this:
```
SELECT A.Name, COUNT(DISTINCT A.Num) cnt
FROM tableA A
GROUP BY A.Name
HAVING COUNT(DISTINCT A.Num) >= 2;
```
|
Count entries that have different values in other column
|
[
"",
"sql",
"select",
"count",
"group-by",
"having",
""
] |
I have a table that contains a list of datetimes.
I'd like to find the longest amount of time between consecutive datetimes ie., find the largest distance between any two datetime entries that fall next to each other on a timeline. Think about it like a "longest streak" - the longest time from one reset to the next one chronologically.
For example:
```
mysql> select * from resets order by datetime asc;
+----+---------------------+-------------+---------------------+---------------------+
| id | datetime | activity_id | created_at | updated_at |
+----+---------------------+-------------+---------------------+---------------------+
| 7 | 2014-12-30 20:38:22 | 1 | 2015-01-06 20:38:22 | 2015-01-06 20:38:22 |
| 3 | 2014-12-31 20:38:22 | 1 | 2015-01-06 20:38:22 | 2015-01-06 20:38:22 |
| 5 | 2015-01-01 20:38:22 | 1 | 2015-01-06 20:38:22 | 2015-01-06 20:38:22 |
| 4 | 2015-01-02 20:38:22 | 1 | 2015-01-06 20:38:22 | 2015-01-06 20:38:22 |
| 6 | 2015-01-03 20:38:22 | 1 | 2015-01-06 20:38:22 | 2015-01-06 20:38:22 |
| 1 | 2015-01-04 20:38:22 | 1 | 2015-01-06 20:38:22 | 2015-01-06 20:38:22 |
| 2 | 2015-01-05 20:38:22 | 1 | 2015-01-06 20:38:22 | 2015-01-06 20:38:22 |
etc...
```
From the sample dataset above, I want to know which of the following time differences is greater:
* between rows 7 and 3
* between rows 3 and 5
* between rows 5 and 4
* etc...
(Obviously, they are all exactly 24 hours apart. I'm looking for a general solution.)
This is easily done with a normal programming language by iterating over the ordered array, storing the differences between consecutive values, and selecting the greatest one.
Is there a neat way to do this using **only** SQL?
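For comparison, the iterative approach mentioned above (sort, diff consecutive entries, take the max) is only a few lines in a general-purpose language; a hypothetical Python sketch:

```python
from datetime import datetime, timedelta

def longest_gap(datetimes):
    """Largest difference between consecutive entries on the timeline."""
    ordered = sorted(datetimes)
    return max(
        (later - earlier for earlier, later in zip(ordered, ordered[1:])),
        default=timedelta(0),
    )

resets = [
    datetime(2014, 12, 30, 20, 38, 22),
    datetime(2015, 1, 1, 20, 38, 22),   # 2-day gap before this one
    datetime(2015, 1, 2, 20, 38, 22),
]
print(longest_gap(resets))  # -> 2 days, 0:00:00
```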
**Update:**
The query that worked for me was
```
SELECT MAX(DATEDIFF(r.next_datetime, r.datetime))
FROM (
# finds each datetime and the next consecutive one in the table
SELECT r1.datetime as datetime, (
# finds the next consecutive datetime
SELECT datetime
FROM resets r2
WHERE r2.datetime > r1.datetime
ORDER BY datetime ASC
LIMIT 1
) as next_datetime
FROM resets as r1
ORDER BY r1.datetime
) as r;
```
The innermost query `SELECT datetime FROM resets r2...` is responsible for finding the next datetime in the list that's greater than the current one. Notice that this query is ordered and has a limit of 1. This was the toughest part for me.
The rest is pretty straightforward. For each row in the table, we select the datetime value and the next consecutive datetime in the table. The outermost query finds the greatest difference between any of the datetime pairs we just created - the "winning streak".
I chose the answer that @OllieJones gave because it was the neatest and best explained, even if I prefer the "pure SQL" solution.
|
This query will compute time differences between successive rows and display the largest one -- the length of the winning streak. If you need the whole row, you'll need Gordon's query.
```
SELECT MAX(diff)
FROM (
SELECT TIMEDIFF(datetime,@prev) AS diff,
(@prev:=datetime) AS datetime
FROM resets,
(SELECT @prev:=(SELECT MIN(datetime) FROM resets)) AS init
ORDER BY datetime
) AS diffs
```
How does this work?
First of all, it's a cross-join between a one-row query and your table. The one-row query is this:
```
(SELECT @prev:=(SELECT MIN(datetime) FROM resets))
```
It sets the user-defined value `@prev` to the lowest/earliest `datetime` in the table. This is a MySQL trick to initialize a user-defined variable at the beginning of the query.
Then, the SELECT clause has two columns in it:
```
SELECT TIMEDIFF(datetime,@prev) AS diff,
(@prev:=datetime) AS datetime
```
The first one takes the time difference between the present row's `datetime` and the value of `@prev`. The second one updates the value of `@prev` to the present row's `datetime`.
So the inner query spits out a list of the timestamps and the difference to the preceding timestamp in `ORDER BY datetime`.
The outer query `SELECT MAX(diff)` grabs the largest value of diff -- the longest winning streak -- from the inner query.
Let's be clear: This is MySQL-specific monkey business. Pure SQL is supposed to be declarative, not procedural. But this trick with the user-defined `@prev` variable lets us mix declarative and procedural code in a useful way, even if it's somewhat obscure.
|
You can calculate the next datetime using a correlated subquery and then find the biggest by sorting:
```
select r.*
from (select r.*,
(select datetime
from resets r2
where r2.datetime > r.datetime
order by datetime
limit 1
) as next_datetime
from resets r
) r
order by timestampdiff(second, datetime, next_datetime) desc
limit 1;
```
|
MySQL - Find the largest time difference between consecutive datetimes
|
[
"",
"mysql",
"sql",
"datetime",
""
] |
I'm trying to execute next SQL statement (SQL Server 2008 Profile)
```
DECLARE @fun int;
SET @fun = 40;
select cast(@fun as varchar(10)) + 'hello'
```
and SQLFiddle gives me the error: `Must declare the scalar variable @fun`
Where am I going wrong?

<http://sqlfiddle.com/#!3/d41d8/42156>
|
I think the semicolons are introducing the problem here.
As described here: <http://www.sql-server-helper.com/error-messages/msg-137.aspx> and here: [SQL Server - Variable declared but still says "Must declare the scalar variable"](https://stackoverflow.com/questions/13750548/sql-server-variable-declared-but-still-says-must-declare-the-scalar-variable), the problem arises when the statements are executed individually instead of as a “unit”, and the semicolons seem to trigger that.
When I remove the semicolons from your statement, it works: <http://sqlfiddle.com/#!3/d41d8/42159>
|
You need to select [Go] from the last dropdown.
[](https://i.stack.imgur.com/pcMBN.png)
|
SQLFiddle: must declare the scalar variable error
|
[
"",
"sql",
"sqlfiddle",
""
] |
Not sure what functions to call, but transpose is the closest thing I can think of.
I have a table in BigQuery that is configured like this:

but I want to query a table that is configured like this:

What does the SQL code look like for creating this table?
Thanks!
|
**2020 update**: `fhoffa.x.unpivot()`
See:
* <https://medium.com/@hoffa/how-to-unpivot-multiple-columns-into-tidy-pairs-with-sql-and-bigquery-d9d0e74ce675>
I created a public persistent UDF. If you have a table `a`, you can give the whole row to the UDF for it to be unpivotted:
```
SELECT geo_type, region, transportation_type, unpivotted
FROM `fh-bigquery.public_dump.applemobilitytrends_20200414` a
, UNNEST(fhoffa.x.unpivot(a, '_2020')) unpivotted
```
It transforms a table like this:
[](https://i.stack.imgur.com/w5PBq.png)
Into this
[](https://i.stack.imgur.com/e8jdP.png)
---
As a comment mentions, my solution above doesn't solve the question's problem.
So here's a variation, while I look at how to integrate everything into one:
```
CREATE TEMP FUNCTION unpivot(x ANY TYPE) AS (
(
SELECT
ARRAY_AGG(STRUCT(
REGEXP_EXTRACT(y, '[^"]+') AS key
, REGEXP_EXTRACT(y, ':([0-9]+)') AS value
))
FROM UNNEST((
SELECT REGEXP_EXTRACT_ALL(json,'"[smlx][meaxl]'||r'[^:]+:\"?[^"]+?') arr
FROM (SELECT TO_JSON_STRING(x) json))) y
)
);
SELECT location, unpivotted.*
FROM `robotic-charmer-726.bl_test_data.reconfiguring_a_table` x
, UNNEST(unpivot(x)) unpivotted
```
---
*Previous answer:*
Use the UNION of tables (with ',' in BigQuery), plus some column aliasing:
```
SELECT Location, Size, Quantity
FROM (
SELECT Location, 'Small' as Size, Small as Quantity FROM [table]
), (
SELECT Location, 'Medium' as Size, Medium as Quantity FROM [table]
), (
SELECT Location, 'Large' as Size, Large as Quantity FROM [table]
)
```
|
**Update 2021:**
A new [UNPIVOT](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#unpivot_operator) operator has been introduced into BigQuery.
Before UNPIVOT rotates Q1, Q2, Q3, Q4 into sales and quarter columns:
| product | Q1 | Q2 | Q3 | Q4 |
| --- | --- | --- | --- | --- |
| Kale | 51 | 23 | 45 | 3 |
| Apple | 77 | 0 | 25 | 2 |
After UNPIVOT rotates Q1, Q2, Q3, Q4 into sales and quarter columns:
| product | sales | quarter |
| --- | --- | --- |
| Kale | 51 | Q1 |
| Kale | 23 | Q2 |
| Kale | 45 | Q3 |
| Kale | 3 | Q4 |
| Apple | 77 | Q1 |
| Apple | 0 | Q2 |
| Apple | 25 | Q3 |
| Apple | 2 | Q4 |
Query:
```
WITH Produce AS (
SELECT 'Kale' as product, 51 as Q1, 23 as Q2, 45 as Q3, 3 as Q4 UNION ALL
SELECT 'Apple', 77, 0, 25, 2
)
SELECT * FROM Produce
UNPIVOT(sales FOR quarter IN (Q1, Q2, Q3, Q4))
```
|
How to unpivot in BigQuery?
|
[
"",
"sql",
"google-bigquery",
"transpose",
"unpivot",
""
] |
I want to group by 2 columns. My first table is:
**Users**
```
ID Name Surname
1 joe New
2 Ala Bla
3 Cat Kra
4 Zoo Fles
5 Kat Glo
```
**Work:**
```
ID BOSSID1 BOSSID2
1 1 2
2 2 3
3 2 1
4 2 5
5 3 5
```
And I want to see all Bosses from the table "Work".
For example:
```
1 joe New
2 Ala Bla
3 Cat Kra
5 Kat Glo
```
|
Try this:
```
SELECT DISTINCT u.* FROM USERS u INNER JOIN WORK w
ON (u.id = w.bossid1 OR u.id = w.bossid2)
```
|
Try this:
```
select id, concat(name, ' ', surname) from
(
select id, name, surname from users
where id in
(
select distinct bossid1 from work
) OR id in
(
select distinct bossid2 from work
)
) t
```
|
Oracle Db Group By Two Columns
|
[
"",
"mysql",
"sql",
"database",
"oracle",
""
] |
I have a query that will result in a customer bill being created on our SSRS 2008 R2 server. The SQL Server instance is also 2008 R2. The query is large and I don't want to post the entire thing for security reasons, etc.
What I need to do with the example data below is to remove the two rows with 73.19 and -73.19 from the result set. So, if two rows have the same absolute value in the LineBalance column and their sum is 0, AND they have the same value in the REF1 column, they should be removed from the result set. The line with REF1 = 14598 and a line balance of 281.47 should still be returned in the result set, and the other two rows below with REF1 = 14598 should not be returned.
The point of this is to "hide" accounting errors and their corrections from the customer. By "hide" I mean not showing them on the bill the customer gets in the mail. What happened here is the customer was mistakenly billed 73.19 when they should have been billed 281.47. So, our AR dept. returned 73.19 to their account and charged them the correct amount of 281.47. As you can see, they all have the same REF1 value.

|
Most AR/billing systems treat "credit memos" (the negative amount) similarly to cash, in which case the -73.19 would get applied to the 73.19 `LineBalance` the same as if the customer had paid that amount, resulting in a $0 balance.
**OPTION 1:**
Do you handle cash receipts and applications in this system? If so, you may be able to pull data from those cash application tables to show the tie between SysInvNum 3344296 and 3341758.
**OPTION 2:**
I'm assuming that the `PayAdjust` column is used to reduce the balance after a customer has paid, and that LineBalance is a calculated column that is `Charges + PayAdjust`.
Most of the time when this occurs, the AR department would be responsible for applying the credit memo to an open invoice, so that the `PayAdjust` column would net $0 between the 2 rows, and this would cause the `LineBalance` to also be $0 on each of the 2 rows. It may just be a training issue for the system that is being used.
This would cause the 3 rows in question to look like this, so you don't have an issue, you would just exclude the rows by adding `where LineBalance <> 0` to your query since the AR department (which applied the credit to begin with and so knows the answer to this question) explicitly stated which `LineBalance` the credit applies to:
**Option 2 Preferred data structure:**
```
SysInvNum REF1 Charges PayAdjust LineBalance
----------- ----------- --------------------- --------------------- ---------------------
3344298 14598 281.47 0.00 281.47
3344296 14598 -73.19 73.19 0.00
3341758 14598 73.19 -73.19 0.00
```
**OPTION 3:**
Without having the data from Option 1 or 2, you have to make many assumptions and run the risk of inadvertently hiding the wrong rows.
That being said, here is a query that attempts to do what you are asking, but I would highly recommend checking with the AR dept to see if they can update "PayAdjust" for these records instead.
I added several test cases of scenarios that could cause issues, but this may not cover all the bases.
This query will only hide rows where one distinct matching negative value is found for a positive value, for the same REF1 and the same DueDate. It also makes sure that the original Charge invoice ID is prior to the credit, since it can be assumed that a credit would not occur before the actual charge (test case 6 shows both rows still because the credit has a SysInvNum that occurred before the charge). If more than one match is found per `REF1, DueDate, and LineBalance`, then it will not hide the corresponding charge and credit lines (test cases 2 & 4). Test case 3 sums in total to 0, but it still shows all 3 rows because the LineBalance values do not match exactly. These are all assumptions that I made to handle edge cases, so they can be adjusted as needed.
```
CREATE TABLE #SysInvTable (SysInvNum int not null primary key, REF1 int, Charges money, PayAdjust money, LineBalance as Charges + PayAdjust, DueDate date, REF2 int, Remark varchar(50), REM varchar(50));
INSERT INTO #SysInvTable(SysInvNum, REF1, Charges, PayAdjust, DueDate, Remark)
VALUES
--.....................................
--Your test case
(3344298, 14598, 281.47, 0, '2014-12-08','Your original test case. This one should stay.')
, (3344296, 14598, -73.19, 0, '2014-12-08',null)
, (3341758, 14598, 73.19, 0, '2014-12-08',null)
--.....................................
--Test case 2: How do you match these up?
, (2001, 2, 73.19, 0, '2015-01-06','Charge 2.1')
, (2002, 2, 73.19, 0, '2015-01-06','Charge 2.2')
, (2003, 2, 73.19, 0, '2015-01-06','Charge 2.3')
, (2004, 2, -73.19, 0, '2015-01-06','Credit for charge 2.3')
, (2005, 2, -73.19, 0, '2015-01-06','Credit for charge 2.1')
--.....................................
--Test case 3
, (3001, 3, 73.19, 0, '2015-01-06','Charge 3.1')
, (3002, 3, 73.19, 0, '2015-01-06','Charge 3.2')
, (3003, 3, -146.38, 0, '2015-01-06','Credit for charges 3.1 and 3.2')
--.....................................
--Test case 4: Do you hide 4001 or 4002?
, (4001, 4, 73.19, 0, '2015-01-06','Cable')
, (4002, 4, 73.19, 0, '2015-01-06','Internet')
, (4003, 4, -73.19, 0, '2015-01-06','Misc Credit')
--.....................................
--Test case 5: remove all lines except the first
, (5000, 5, 9.99, 0, '2015-01-06','Charge 5.0 (Should stay)')
, (5001, 5, 11.11, 0, '2015-01-06','Charge 5.1')
, (5002, 5, 22.22, 0, '2015-01-06','Charge 5.2')
, (5003, 5, 33.33, 0, '2015-01-06','Charge 5.3')
, (5004, 5, -11.11, 0, '2015-01-06','Credit for charge 5.1')
, (5005, 5, -33.33, 0, '2015-01-06','Credit for charge 5.3')
, (5006, 5, -22.22, 0, '2015-01-06','Credit for charge 5.2')
--.....................................
--Test case 6: credit occurs before charge, so keep both
, (6000, 6, -73.19, 0, '2015-01-06','Credit occurs before charge')
, (6001, 6, 73.19, 0, '2015-01-06','Charge 6.1')
;
SELECT i.*
FROM #SysInvTable i
WHERE i.SysInvNum not in
(
SELECT IngoreInvNum = case when c.N = 1 then max(t.SysInvNum) else min(t2.SysInvNum) end
FROM #SysInvTable t
INNER JOIN #SysInvTable t2
ON t.ref1 = t2.ref1
AND t.DueDate = t2.DueDate
CROSS APPLY (SELECT 1 AS N UNION ALL SELECT 2 AS N) AS c --used to get both the T and T2 SysInvNum's to exclude
WHERE 1=1
AND t.LineBalance > 0 AND t2.LineBalance < 0
AND t.SysInvNum < t2.SysInvNum --make sure the credit came in after the positive SysInvNum
AND t.LineBalance = t2.LineBalance * -1
GROUP BY t.REF1, t.DueDate, abs(t.LineBalance), c.n
HAVING Count(*) = 1
)
;
DROP TABLE #SysInvTable;
```
|
I would add a field containing an explicit flag telling you that a certain charge was a mistake/reversal of a mistake; then it is trivial to filter out such rows. Doing it on the fly could make your reports rather slow.
But, to solve the given problem as is, we can do it like this. The solution assumes that `SysInvNum` is unique.
## Create a table with sample data
```
DECLARE @T TABLE (SysInvNum int, REF1 int, LineBalance money);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (3344299, 14602, 558.83);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (3344298, 14598, 281.47);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (3344297, 14602, -95.98);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (3344296, 14598, -73.19);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (3341758, 14598, 73.19);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (11, 100, 50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (12, 100, -50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (13, 100, 50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (21, 200, -50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (22, 200, -50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (23, 200, 50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (31, 300, -50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (32, 300, 50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (33, 300, -50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (34, 300, 50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (41, 400, 50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (42, 400, -50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (43, 400, 50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (44, 400, -50.00);
INSERT INTO @T (SysInvNum, REF1, LineBalance) VALUES (45, 400, 50.00);
```
I've added a few more cases that have multiple mistakes.
## Number and count rows
```
SELECT
SysInvNum
, REF1
, LineBalance
, ROW_NUMBER() OVER(PARTITION BY REF1, LineBalance ORDER BY SysInvNum) AS rn
, COUNT(*) OVER(PARTITION BY REF1, ABS(LineBalance)) AS cc1
FROM @T AS TT
```
This is the result set:
```
SysInvNum REF1 LineBalance rn cc1
11 100 50.00 1 3
12 100 -50.00 1 3
13 100 50.00 2 3
21 200 -50.00 1 3
23 200 50.00 1 3
22 200 -50.00 2 3
31 300 -50.00 1 4
32 300 50.00 1 4
33 300 -50.00 2 4
34 300 50.00 2 4
41 400 50.00 1 5
42 400 -50.00 1 5
43 400 50.00 2 5
44 400 -50.00 2 5
45 400 50.00 3 5
3341758 14598 73.19 1 2
3344296 14598 -73.19 1 2
3344298 14598 281.47 1 1
3344297 14602 -95.98 1 1
3344299 14602 558.83 1 1
```
You can see that the rows that are mistakes have count > 1. Also, pairs of mistakes have the same row numbers. So, we need to remove/hide those rows that have count > 1 and that share the same row number.
## Determine rows to remove
```
WITH
CTE_rn
AS
(
SELECT
SysInvNum
, REF1
, LineBalance
, ROW_NUMBER() OVER(PARTITION BY REF1, LineBalance ORDER BY SysInvNum) AS rn
, COUNT(*) OVER(PARTITION BY REF1, ABS(LineBalance)) AS cc1
FROM @T AS TT
)
, CTE_ToRemove
AS
(
SELECT
SysInvNum
, REF1
, LineBalance
, COUNT(*) OVER(PARTITION BY REF1, rn) AS cc2
FROM CTE_rn
WHERE CTE_rn.cc1 > 1
)
SELECT *
FROM CTE_ToRemove
WHERE CTE_ToRemove.cc2 = 2
```
This is another intermediate result:
```
SysInvNum REF1 LineBalance cc2
12 100 -50.00 2
11 100 50.00 2
21 200 -50.00 2
23 200 50.00 2
32 300 50.00 2
31 300 -50.00 2
33 300 -50.00 2
34 300 50.00 2
42 400 -50.00 2
41 400 50.00 2
43 400 50.00 2
44 400 -50.00 2
3344296 14598 -73.19 2
3341758 14598 73.19 2
```
Now, we just put all this together.
## Final query
```
WITH
CTE_rn
AS
(
SELECT
SysInvNum
, REF1
, LineBalance
, ROW_NUMBER() OVER(PARTITION BY REF1, LineBalance ORDER BY SysInvNum) AS rn
, COUNT(*) OVER(PARTITION BY REF1, ABS(LineBalance)) AS cc1
FROM @T AS TT
)
, CTE_ToRemove
AS
(
SELECT
SysInvNum
, REF1
, LineBalance
, COUNT(*) OVER(PARTITION BY REF1, rn) AS cc2
FROM CTE_rn
WHERE CTE_rn.cc1 > 1
)
SELECT *
FROM @T AS TT
WHERE
TT.SysInvNum NOT IN
(
SELECT CTE_ToRemove.SysInvNum
FROM CTE_ToRemove
WHERE CTE_ToRemove.cc2 = 2
)
ORDER BY SysInvNum;
```
Result:
```
SysInvNum REF1 LineBalance
13 100 50.00
22 200 -50.00
45 400 50.00
3344297 14602 -95.98
3344298 14598 281.47
3344299 14602 558.83
```
Note that the final result doesn't have any rows with REF1 = 300, because there were two corrected mistakes that balanced each other completely.
|
Remove Rows That Sum Zero For A Given Key
|
[
"",
"sql",
"sql-server",
""
] |
I have a table named Scripts which contains data about modified procedures, functions and tables.
```
CREATE TABLE #Scripts
(
ID NUMERIC (18) IDENTITY NOT NULL,
[Date] DATETIME NULL,
DatabaseName VARCHAR (50) NULL,
Name VARCHAR (100) NULL,
Type VARCHAR (20) NULL,
Action VARCHAR (50) NULL,
Description VARCHAR (500) NULL,
ModifiedBy VARCHAR (50) NULL,
AddedTimestamp DATETIME NULL,
UpdateTimestamp DATETIME NULL,
)
GO
```
I added records into the table as shown below. These are just sample records.
```
INSERT INTO #Scripts ([Date], DatabaseName, Name, Type, Action, Description, ModifiedBy, AddedTimestamp, UpdateTimestamp)
VALUES ('2015-01-07 11:16:41.4', 'Test', 'sp_GetData', 'Stored Procedure', 'Created', 'To Get ActivitySubscriptions for Mobile from tblSubscriptions', 'dinesh.alla', '2015-01-07 11:39:39.703', '2015-01-07 11:39:39.703')
GO
INSERT INTO #Scripts ([Date], DatabaseName, Name, Type, Action, Description, ModifiedBy, AddedTimestamp, UpdateTimestamp)
VALUES ('2015-01-07 11:16:41.4', 'Test', 'sp_GetData', 'Stored Procedure', 'Updated', 'To Get ActivitySubscriptions for Mobile from tblSubscriptions', 'dinesh.alla', '2015-01-07 11:39:39.703', '2015-01-07 11:39:39.703')
GO
INSERT INTO #Scripts ([Date], DatabaseName, Name, Type, Action, Description, ModifiedBy, AddedTimestamp, UpdateTimestamp)
VALUES ('2015-01-07 11:16:41.4', 'Test', 'sp_GetData', 'Stored Procedure', 'Deleted', 'To Get ActivitySubscriptions for Mobile from tblSubscriptions', 'dinesh.alla', '2015-01-07 11:39:39.703', '2015-01-07 11:39:39.703')
GO
INSERT INTO #Scripts ([Date], DatabaseName, Name, Type, Action, Description, ModifiedBy, AddedTimestamp, UpdateTimestamp)
VALUES ('2015-01-07 11:16:41.4', 'Test', 'sp_UpdateData', 'Stored Procedure', 'Created', 'To Get ActivitySubscriptions for Mobile from tblSubscriptions', 'dinesh.alla', '2015-01-07 11:39:39.703', '2015-01-07 11:39:39.703')
GO
INSERT INTO #Scripts ([Date], DatabaseName, Name, Type, Action, Description, ModifiedBy, AddedTimestamp, UpdateTimestamp)
VALUES ('2015-01-07 11:16:41.4', 'Test', 'sp_UpdateData', 'Stored Procedure', 'Updated', 'To Get ActivitySubscriptions for Mobile from tblSubscriptions', 'dinesh.alla', '2015-01-07 11:39:39.703', '2015-01-07 11:39:39.703')
GO
INSERT INTO #Scripts ([Date], DatabaseName, Name, Type, Action, Description, ModifiedBy, AddedTimestamp, UpdateTimestamp)
VALUES ('2015-01-07 11:16:41.4', 'Test', 'sp_AddData', 'Stored Procedure', 'Created', 'To Get ActivitySubscriptions for Mobile from tblSubscriptions', 'dinesh.alla', '2015-01-07 11:39:39.703', '2015-01-07 11:39:39.703')
GO
```
I tried to get the results with the query shown below:
```
SELECT MAX(ID) AS ID,MAX(Action) AS Action
FROM #Scripts GROUP BY Name ORDER BY ID ASC
```
**Output:**
```
ID Action
3 Updated
5 Updated
6 Created
```
**Expected Output:**
```
ID Action
3 Deleted
5 Updated
6 Created
```
|
Try this:
```
select distinct sc.action,sc.ID from (
SELECT MAX(ID) OVER(PARTITION BY NAME ) rn,* FROM #Scripts
)d join #Scripts sc
on d.rn=sc.ID
order by sc.ID
```
**OUTPUT**
```
action ID
Deleted 3
Updated 5
Created 6
```
|
```
;WITH LatestChanges AS
(
SELECT S.[ID], S.[Date], S.[DatabaseName], S.[Name], S.[Type], S.[Action], S.[Description],
S.[ModifiedBy], S.[AddedTimestamp], S.[UpdateTimestamp],
[RowNum] = ROW_NUMBER() OVER (PARTITION BY S.[Name] ORDER BY S.[ID] DESC)
FROM [#Scripts] S
)
SELECT *
FROM LatestChanges
WHERE [RowNum] = 1
ORDER BY ID
```
This returns the last [ID] for each [Name].
|
Got Error in Retrieving results using group by and order by in SQL server 2008
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"select",
""
] |
I would like to cap a few VARCHAR columns at 255 so that I can add an index.
```
alter table striker modify contacts varchar(255)
```
When I try to run the above command I get
```
Error Code: 1265. Data truncated for column 'contacts' at row 331 38.969 sec
```
Is there a way to force truncation of the column so that this command successfully shortens the VARCHAR? I don't care about shortening the data in the column.
|
You can manually truncate the column at 255 characters:
```
UPDATE striker
SET contacts = SUBSTR (contacts, 1, 255);
```
Now, that you know that no value is longer than 255 characters, you perform the `alter table` statement you have in the OP safely.
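The truncate-then-check idea can be sketched end-to-end. Here is a minimal runnable version using SQLite from Python as a stand-in for MySQL (the table name and data are hypothetical; MySQL's `SUBSTR` behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE striker (contacts TEXT)")
# one short value and one value longer than 255 characters
conn.execute("INSERT INTO striker VALUES ('short'), (?)", ("x" * 400,))

# manually truncate every value to 255 characters first, so a later
# length-limited column (or prefix index) can be created safely
conn.execute("UPDATE striker SET contacts = SUBSTR(contacts, 1, 255)")

max_len = conn.execute("SELECT MAX(LENGTH(contacts)) FROM striker").fetchone()[0]
print(max_len)  # no value is longer than 255 now
```

Once the `UPDATE` has run, the `ALTER TABLE ... MODIFY` from the question can no longer hit the truncation error.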
|
Why bother truncating the column? Just create the index with a length specifier for the column. Here is an example:
```
create index idx_striker_contacts on striker(contacts(255))
```
You can do this for multiple columns.
|
Force Truncation on VARCHAR modification?
|
[
"",
"mysql",
"sql",
"ddl",
"alter-table",
""
] |
I am having the following mysql table
```
id status
------------
1 available
2 hold
3 available
4 so-hold
5 so-hold
6 hold
7 hold
8 available
```
When I use `GROUP BY` on the `STATUS` column I get the following:
```
count(id) status
---------------------
3 available
3 hold
2 so-hold
```
Under the `status` column, **'hold'** and **'so-hold'** belong to the same category, so I need the count in the following manner:
```
count(id) status
---------------------
3 available
5 hold
```
Following is the MySQL query that I am using currently :
```
SELECT count(id), status FROM `inventory_master` GROUP BY status
```
|
You can use `CASE` statement
```
select
count(id),
case when status = 'so-hold'
then 'hold'
else status
end as status_col
from inventory_master
group by status_col
```
[`DEMO`](http://www.sqlfiddle.com/#!2/94df2/4)
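A quick runnable check of the CASE-in-GROUP-BY approach, here using SQLite from Python with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory_master (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO inventory_master VALUES (?, ?)",
                 [(1, 'available'), (2, 'hold'), (3, 'available'),
                  (4, 'so-hold'), (5, 'so-hold'), (6, 'hold'),
                  (7, 'hold'), (8, 'available')])

# fold 'so-hold' into 'hold' before grouping
rows = conn.execute("""
    SELECT COUNT(id),
           CASE WHEN status = 'so-hold' THEN 'hold' ELSE status END AS status_col
    FROM inventory_master
    GROUP BY status_col
    ORDER BY status_col
""").fetchall()
print(rows)  # [(3, 'available'), (5, 'hold')]
```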
|
Try replacing "so-" with empty string and then try your query as below:
```
SELECT newStatus, COUNT(*)
FROM
(SELECT REPLACE(status, 'so-', '') as newStatus
FROM MYTABLE) AS tab
GROUP BY newStatus
```
|
count of id on status column with sum of particular rows count using GROUP BY in MySQL
|
[
"",
"mysql",
"sql",
"group-by",
"aggregate-functions",
""
] |
In a MySQL table i have a field, containing this value for a given record : "1908,2315,2316"
Here is my sql Query :
```
SELECT * FROM mytable WHERE 2316 IN (myfield)
```
I got 0 results!
I tried this :
```
SELECT * FROM mytable WHERE 2315 IN (myfield)
```
Still 0 results
And then i tried this :
```
SELECT * FROM mytable WHERE 1908 IN (myfield)
```
Surprisingly, I obtained the record when searching with 1908! What should I do to also obtain the record when searching with 2315 and 2316? What am I missing?
Thanks
|
You appear to be storing comma delimited values in a field. This is bad, bad, bad. You should be using a junction table, with one row per value.
But, sometimes you are stuck with data in a particular structure. If so, MySQL provides the `find_in_set()` function.
```
SELECT *
FROM mytable
WHERE find_in_set(2316, myfield) > 0;
```
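On engines without `find_in_set()` (SQLite, for example), the usual workaround is to pad both the list and the search value with delimiters so that each element is matched whole. A small runnable sketch from Python, with the data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (myfield TEXT)")
conn.execute("INSERT INTO mytable VALUES ('1908,2315,2316')")

# pad with commas: '%,2315,%' matches the element 2315 but not e.g. '12315'
def in_set(value):
    return conn.execute(
        "SELECT COUNT(*) FROM mytable "
        "WHERE ',' || myfield || ',' LIKE '%,' || ? || ',%'",
        (str(value),)).fetchone()[0]

print(in_set(2315), in_set(2316), in_set(9999))
```

This is only a workaround; as the answer says, the real fix is a junction table with one row per value.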
|
You can't use `IN()` over comma separated list of no.s its better to normalize your structure first for now you can use `find_in_set` to find results matching with comma separated string
```
SELECT * FROM mytable WHERE find_in_set('1908',myfield) > 0
```
|
MySQL In clause not giving the right result
|
[
"",
"mysql",
"sql",
""
] |
I have a Firebird table like this:
```
CREATE TABLE events (
event VARCHAR(6) NOT NULL
CHECK (event IN ('deploy', 'revert', 'fail')),
change_id CHAR(40) NOT NULL,
change VARCHAR(512) NOT NULL
);
```
Now I need to add another value to the `IN()` list in the CHECK constraint. How do I do that?
Things I've tried so far:
* Updating the value in `RDB$TRIGGERS.RDB$TRIGGER_SOURCE`:
```
UPDATE RDB$TRIGGERS
SET RDB$TRIGGER_SOURCE = 'CHECK (event IN (''deploy'', ''revert'', ''fail'', ''merge''))'
WHERE RDB$TRIGGER_SOURCE = 'CHECK (event IN (''deploy'', ''revert'', ''fail''))';
```
Does not seem to work, as the trigger is compiled in `RDB$TRIGGERS.RDB$TRIGGER_BLR`.
* Creating a new table with a new check, copying the data over, dropping the old table and renaming the new table. However, it seems that one [cannot rename a Firebird table](https://stackoverflow.com/a/12293244/79202), so I can't make the new table have the same name as the old one.
I suspect updating `RDB$TRIGGERS` is the way to go (idk!), if only I could get Firebird to recompile the code. But maybe there's a better way?
|
You need to drop and the re-create the check constraint.
As you didn't specify a name for your constraint, Firebird created one, so you first need to find that name:
```
select trim(cc.rdb$constraint_name), trg.rdb$trigger_source
from rdb$relation_constraints rc
join rdb$check_constraints cc on rc.rdb$constraint_name = cc.rdb$constraint_name
join rdb$triggers trg on cc.rdb$trigger_name = trg.rdb$trigger_name
where rc.rdb$relation_name = 'EVENTS'
and rc.rdb$constraint_type = 'CHECK'
and trg.rdb$trigger_type = 1;
```
I just added the trigger source for informational reasons.
Once you have the name, you can drop it, e.g.
```
alter table events drop constraint integ_27;
```
and then add the new constraint:
```
alter table events
add constraint check_event_type
CHECK (event IN ('deploy', 'revert', 'fail', 'merge'));
```
In the future you won't need to look for the constraint name because you already know it.
|
Here's how to do it dynamically:
```
SET AUTOddl OFF;
SET TERM ^;
EXECUTE BLOCK AS
DECLARE trig VARCHAR(64);
BEGIN
SELECT TRIM(cc.rdb$constraint_name)
FROM rdb$relation_constraints rc
JOIN rdb$check_constraints cc ON rc.rdb$constraint_name = cc.rdb$constraint_name
JOIN rdb$triggers trg ON cc.rdb$trigger_name = trg.rdb$trigger_name
WHERE rc.rdb$relation_name = 'EVENTS'
AND rc.rdb$constraint_type = 'CHECK'
AND trg.rdb$trigger_type = 1
INTO trig;
EXECUTE STATEMENT 'ALTER TABLE EVENTS DROP CONSTRAINT ' || trig;
END^
SET TERM ;^
COMMIT;
ALTER TABLE events ADD CONSTRAINT check_event_type CHECK (
event IN ('deploy', 'revert', 'fail', 'merge')
);
COMMIT;
```
I had to disable `AUTOddl` and put in explicit commits or else I got a deadlock on the `ALTER TABLE ADD CONSTRAINT` statement.
|
How do I alter a CHECK constraint on Firebird?
|
[
"",
"sql",
"triggers",
"constraints",
"firebird",
""
] |
I have query :
```
UPDATE SDM_KARYAWAN
SET (ID_DIVISI,ID_UNIT_KERJA,ID_JABATAN) =
(
SELECT ID_DIVISI,ID_UNIT_KERJA,ID_JABATAN FROM
(
SELECT TO_CHAR(TGL_SK,'YYYYMMDD') URUT,ID_DIVISI,ID_UNIT_KERJA,ID_JABATAN
FROM SDM_KARYAWAN_JABATAN
WHERE ID_KARYAWAN = '0081005'
ORDER BY URUT DESC
)DETAIL
WHERE ROWNUM = 1
)X
WHERE ID_KARYAWAN = '0081005'
```
But show error like this : [Err] ORA-00933: SQL command not properly ended
Actually I can use this code :
```
UPDATE TABLE1 SET COL1 = 'A',COL2='B',COL3='C'...
```
but What i want is updating multiple columns like this :
```
UPDATE TABLE1 SET (COL1,COL2,COL3) (SELECT COL1,COL2,COL3 FROM TABLE2)
```
::EDIT::
I tried this :
```
SELECT A.ID_DIVISI,A.ID_UNIT_KERJA,A.ID_JABATAN,
DETAIL.ID_DIVISI X,DETAIL.ID_UNIT_KERJA Y,DETAIL.ID_JABATAN Z
FROM SDM_KARYAWAN A
INNER JOIN
(
SELECT ID_KARYAWAN, TO_CHAR(TGL_SK,'YYYYMMDD') URUT,ID_DIVISI,ID_UNIT_KERJA,ID_JABATAN
FROM SDM_KARYAWAN_JABATAN
WHERE ID_KARYAWAN = '0081005'
ORDER BY URUT DESC
)DETAIL
ON DETAIL.ID_KARYAWAN = A.ID_KARYAWAN
WHERE A.ID_KARYAWAN = DETAIL.ID_KARYAWAN
AND ROWNUM = 1
```
The results :
```
ID_DIVISI ID_UNIT_KERJA ID_JABATAN X Y Z
-----------------------------------------------------------
D1 D5000 D51000 D2 D200 D2100
```
Now, I want to update the columns in table SDM\_KARYAWAN with the X, Y, Z values. What's missing from my query?
|
This will work. Please check.
```
UPDATE SDM_KARYAWAN
SET (ID_DIVISI,
ID_UNIT_KERJA,
ID_JABATAN) =
(SELECT ID_DIVISI, ID_UNIT_KERJA, ID_JABATAN
FROM ( SELECT TO_CHAR (TGL_SK,'YYYYMMDD')
URUT,
ID_DIVISI,
ID_UNIT_KERJA,
ID_JABATAN
FROM SDM_KARYAWAN_JABATAN
WHERE ID_KARYAWAN = '0081005'
ORDER BY 1 DESC) DETAIL
WHERE ROWNUM = 1)
WHERE ID_KARYAWAN = '0081005'
```
|
try this:
```
UPDATE
(SELECT
Table1.Col1 as OLD1, Table2.Col1 as NEW1
Table1.Col2 as OLD2, Table2.Col2 as NEW2
FROM Table1
INNER JOIN Table2
-- ON Some Join Conditions
) t
SET t.OLD1 = t.NEW1,
SET t.OLD2 = t.NEW2
```
|
update multiple columns in oracle
|
[
"",
"sql",
"oracle",
""
] |
I use the simple query below to output a list of partIDs based on a modelID that we get from a printout on a sheet of paper.
We've always just used one modelId at a time like this:
```
SELECT gm.partId, 343432346 as modelId
FROM dbo.systemMemberTable sm
WHERE gm.partID NOT IN (SELECT partID FROM systemsPartTable)
```
Is there a way to create a query that uses 10 modelIds at a time?
I can't think of a way because the modelIds are not in any table, just a printout that is handed to us.
Thanks!
|
```
SELECT gm.partId, T.number as modelId
FROM ( values (4),(9),(16),(25)
) as T(number)
CROSS JOIN dbo.systemMemberTable sm
WHERE gm.partID NOT IN (SELECT partID FROM systemsPartTable)
```
The OP said they were getting an error, but this is the test that runs for me:
```
SELECT T.number as [modelID]
,[main].[name]
FROM ( values (4),(9),(16),(25)
) as T(number)
cross join [Gabe2a_ENRONb].[dbo].[docFieldDef] as [main]
where [main].[ID] not in (1)
```
|
`insert` the model ids into a table variable and then do the `join` with this table variable.
Also use `not exists` instead of `not in`, as `not in` doesn't work if there are `null` values in the parts table.
```
declare @modelIds table
(
model_id int
)
insert into @modelIds values (343432346) , (123456)
```
Your select would be as follows. As you want the same model id repeated for all parts, you can just use `cross join`:
```
select gm.partId, m.model_id
from dbo.systemMemberTable sm
cross join @modelIds m
where not exists ( select 1 from systemPartsTable SPT where SPT.partId = gm.PartID )
```
|
Query for a group of IDs
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm having some trouble building a query that will group my items into monthly ranges according to whether they exist in a month or not. I'm using PostgreSQL.
For example I have a table with data as this:
```
Name Period(text)
Ana 2010/09
Ana 2010/10
Ana 2010/11
Ana 2010/12
Ana 2011/01
Ana 2011/02
Peter 2009/05
Peter 2009/06
Peter 2009/07
Peter 2009/08
Peter 2009/12
Peter 2010/01
Peter 2010/02
Peter 2010/03
John 2009/05
John 2009/06
John 2009/09
John 2009/11
John 2009/12
```
and I want the result query to be this:
```
Name Start End
Ana 2010/09 2011/02
Peter 2009/05 2009/08
Peter 2009/12 2010/03
John 2009/05 2009/06
John 2009/09 2009/09
John 2009/11 2009/12
```
Is there any way to achieve this?
|
This is an aggregation problem, but with a twist -- you need to define the groups of adjacent months for each name.
Assuming that the month never appears more than once for a given name, you can do this by assigning a "month" number to each period and subtracting a sequential number. The values will be a constant for months that are in a row.
```
select name, min(period), max(period)
from (select t.*,
(cast(left(period, 4) as int) * 12 + cast(right(period, 2) as int) -
row_number() over (partition by name order by period)
) as grp
from names t
) t
group by grp, name;
```
[Here](http://sqlfiddle.com/#!15/8c0aa/3) is a SQL Fiddle illustrating this.
Note: duplicates are not really a problem either. You would just use `dense_rank()` instead of `row_number()`.
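The same grouping trick runs on SQLite (which has window functions since 3.25, but no `LEFT()`/`RIGHT()`, hence `SUBSTR`). A runnable sketch with part of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (name TEXT, period TEXT)")
data = [('Ana', '2010/09'), ('Ana', '2010/10'), ('Ana', '2010/11'),
        ('Ana', '2010/12'), ('Ana', '2011/01'), ('Ana', '2011/02'),
        ('John', '2009/05'), ('John', '2009/06'), ('John', '2009/09'),
        ('John', '2009/11'), ('John', '2009/12')]
conn.executemany("INSERT INTO names VALUES (?, ?)", data)

# month-number minus row_number is constant within a consecutive run
rows = conn.execute("""
    SELECT name, MIN(period), MAX(period)
    FROM (SELECT name, period,
                 CAST(SUBSTR(period, 1, 4) AS INTEGER) * 12
                 + CAST(SUBSTR(period, 6, 2) AS INTEGER)
                 - ROW_NUMBER() OVER (PARTITION BY name ORDER BY period) AS grp
          FROM names)
    GROUP BY name, grp
    ORDER BY name, MIN(period)
""").fetchall()
print(rows)
```

This reproduces the expected ranges: one island for Ana and three for John.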
|
I don't know if there is an easier way (there probably is) but I can't think of one right now:
```
with parts as (
select name,
to_date(replace(period,'/',''), 'yyyymm') as period
from names
), flagged as (
select name,
period,
case
when lag(period,1, (period - interval '1' month)::date) over (partition by name order by period) = (period - interval '1' month)::date then null
else 1
end as group_flag
from parts
), grouped as (
select flagged.*,
coalesce(sum(group_flag) over (partition by name order by period),0) as group_nr
from flagged
)
select name, min(period), max(period)
from grouped
group by name, group_nr
order by name, min(period);
```
The first [common table expression](http://www.postgresql.org/docs/current/static/queries-with.html) (`parts`) simple changes the period into a date so that it can be used in an arithmetic expression.
The second CTE (`flagged`) assigns a flag each time the gap (in months) between the current row and the previous is not one.
The third CTE then accumulates those flags to define a unique group number for each consecutive number of rows.
The final select then simply gets the start and end period for each group. I didn't bother to convert the period back to the original format though.
SQLFiddle example that also shows the intermediate result of the `flagged` CTE:
<http://sqlfiddle.com/#!15/8c0aa/2>
|
SQL query for grouping monthly period ranges
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables and I want to find different counts of rows after join them. My tables are:


I want to select the user's name and (I) the number of books that a user has (ignoring status) and (II) the number of books that a user has with status 1. And all of these will be selected from, for example, the users who have "Doe" in their name. Is it possible? How can I get these results?
<http://sqlfiddle.com/#!2/102ce8/3>
## Sample result:
(Name, Total Count, Specific Count)
John Doe, 3, 1
Jane Doe, 2, 1
|
```
SELECT tu.name,
Count(ts.id),
Sum(CASE
WHEN status = 1 THEN 1
ELSE 0
END)
FROM tblUsers tu
INNER JOIN tblStatus ts
ON ts.id = tu.id
WHERE tu.name LIKE '%doe%'
GROUP BY tu.id
```
|
You mean like this?
```
SELECT nom, COUNT(b.id_user), SUM(b.status)
FROM user u, booking b
WHERE u.id = b.id_user
GROUP BY u.id
```
`GROUP BY` is your friend ;)
|
Select multiple count in MySQL query with different where statements
|
[
"",
"mysql",
"sql",
"join",
"count",
""
] |
I currently have 2 tables, vaccination and vaccination\_type which can be joined using vaccine\_id attribute.
vaccination is basically the joining many-to-many table between vaccination\_type and patient. It has 3 attributes: vaccine\_id, registration\_no and date. Vaccination\_type has vaccine\_id, vaccine\_name.
I want to join these 2 tables and get these entries:
```
vaccine_id vaccine_name registration_no date
1 influenza 1111 2015-01-15
2 hepatitis B null null
3 polio 1111 2015-01-15
4 hepatitis A 1112 2015-01-15
```
This means that even though the patient has not had the hepatitis B vaccination, I still want the entry with nulls. But I only want the rows for one specific registration\_no, in this case 1111; in other words I also want hepatitis A written as null, as it was not done by 1111. So my expected result when dealing with 1111 will be:
```
vaccine_id vaccine_name registration_no date
1 influenza 1111 2015-01-15
2 hepatitis B null null
3 polio 1111 2015-01-15
4 hepatitis A null null
```
I have tried all of the join types, but cannot find one that works perfectly. Any suggestion will be appreciated, thanks!
|
If you want the information for one registration, then that should be your base table. Then you want to get all vaccination types for that registration. Finally join onto the vaccination table to get the date it was administered, or NULL if that person never got it:
```
SELECT
registrations.registration_no,
vaccination_type.vaccine_id,
vaccination_type.name,
vaccination.date
FROM
registrations
CROSS JOIN vaccination_type
LEFT JOIN vaccination ON registrations.registration_no = vaccination.registration_no AND vaccination_type.vaccine_id = vaccination.vaccine_id
WHERE
registrations.registration_no = 1111
```
Fiddle: <http://sqlfiddle.com/#!2/e8b2d/5>
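The accepted answer's shape can be reproduced end-to-end in SQLite from Python (the `registrations` table is assumed, exactly as in the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE registrations (registration_no INTEGER);
    CREATE TABLE vaccination_type (vaccine_id INTEGER, vaccine_name TEXT);
    CREATE TABLE vaccination (vaccine_id INTEGER, registration_no INTEGER, date TEXT);
    INSERT INTO registrations VALUES (1111), (1112);
    INSERT INTO vaccination_type VALUES
        (1, 'influenza'), (2, 'hepatitis B'), (3, 'polio'), (4, 'hepatitis A');
    INSERT INTO vaccination VALUES
        (1, 1111, '2015-01-15'), (3, 1111, '2015-01-15'), (4, 1112, '2015-01-15');
""")

# CROSS JOIN generates every (registration, vaccine) pair; the LEFT JOIN
# fills in the date, or NULL where that registration never got the vaccine
rows = conn.execute("""
    SELECT vt.vaccine_id, vt.vaccine_name, v.registration_no, v.date
    FROM registrations r
    CROSS JOIN vaccination_type vt
    LEFT JOIN vaccination v
           ON v.registration_no = r.registration_no
          AND v.vaccine_id = vt.vaccine_id
    WHERE r.registration_no = 1111
    ORDER BY vt.vaccine_id
""").fetchall()
print(rows)
```

Hepatitis B and hepatitis A both come back as NULL for 1111, matching the expected result in the question.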
|
Use a **LEFT JOIN** to join both tables.
Try this:
```
SELECT VT.vaccine_id, VT.vaccine_name, V.registration_no
FROM Vaccination_type VT
LEFT JOIN vaccination V ON VT.vaccine_id = V.vaccine_id;
```
|
How to select all joined rows from 2 tables including null
|
[
"",
"mysql",
"sql",
"select",
"join",
"left-join",
""
] |
I'm building a Node.js app that needs to query a Redshift database (based on postgres 8.0.2) using CTEs. Unfortunately, the SQL query builders I've looked at thus far ([node-sql](https://github.com/brianc/node-sql), [knex.js](http://knexjs.org) and sequelize) don't seem to support common table expressions (CTEs).
I had great success forming common table expressions in Ruby using Jeremy Evans' Sequel gem, which has a `with` method that takes two arguments to define the tables' aliased name and the dataset reference. I'd like something similar in Node.
Have I missed any obvious contenders for Node.js SQL query builders? From what I can tell these are the four most obvious:
* node-sql
* nodesql (no postgres support?)
* knex.js
* sequelize
|
I was able to use common table expressions (CTEs) with [knex.js](http://knexjs.org/) and it was pretty easy.
Assuming you're using socket.io along with knex.js,
**knex-example.js**:
```
function knexExample (io, knex) {
io.on('connection', function (socket) {
var this_cte = knex('this_table').select('this_column');
var that_cte = knex('that_table').select('that_column');
knex.raw('with t1 as (' + this_cte +
'), t2 as (' + that_cte + ')' +
knex.select(['this', 'that'])
.from(['t1', 't2'])
)
.then(function (rows) {
socket.emit('this_that:update', rows);
});
})
}
module.exports = knexExample;
```
|
[knex.js](http://knexjs.org/) now supports WITH clauses:
```
knex.with('with_alias', (qb) => {
qb.select('*').from('books').where('author', 'Test')
}).select('*').from('with_alias')
```
Outputs:
```
with "with_alias" as (select * from "books" where "author" = 'Test') select * from "with_alias"
```
|
Node-based SQL builder with Common Table Expression (WITH clause) support
|
[
"",
"sql",
"node.js",
"postgresql",
"amazon-redshift",
""
] |
I have the following schema in a mysql database
```
Field        | Type     | Null | Key | Default | Extra
-------------+----------+------+-----+---------+-------
answerId     | int(11)  | NO   | PRI | NULL    |
answerDate   | datetime | YES  |     | NULL    |
creationDate | datetime | YES  |     | NULL    |
questionId   | int(11)  | NO   | PRI | 0       |
url          | text     | YES  |     | NULL    |
description  | text     | YES  |     | NULL    |
```
This table stores a relationship of questions and answers, wherein one question is related to one or more answers. What I'd like to do is to select the questions that have more than 5 answers. Is it possible to retrieve that?
|
```
Select questionID, count(*) as totalAnswers from qanda
group by questionID
having totalAnswers > 5
```
[Example SQL Fiddle](http://sqlfiddle.com/#!2/230ab/6)
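A quick runnable version of the `GROUP BY ... HAVING` pattern, using SQLite from Python with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE qanda (answerId INTEGER, questionId INTEGER)")
# question 1 has 6 answers, question 2 has only 2
rows_in = [(i, 1) for i in range(6)] + [(10, 2), (11, 2)]
conn.executemany("INSERT INTO qanda VALUES (?, ?)", rows_in)

# HAVING filters groups after aggregation, so only questions
# with more than 5 answers survive
rows = conn.execute("""
    SELECT questionId, COUNT(*) AS totalAnswers
    FROM qanda
    GROUP BY questionId
    HAVING totalAnswers > 5
""").fetchall()
print(rows)  # only question 1 qualifies
```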
|
Will it work for you?
```
SELECT questionId
FROM question_answer
GROUP BY questionId
HAVING COUNT(*) >5
```
Side note: some other attributes, `url`, `creationDate`, and `description`, don't seem to describe a relationship between question and answer...
|
Use number of distinct values from a column as select criteria
|
[
"",
"mysql",
"sql",
"select",
"relational-database",
""
] |
I have a table ProdByCat
```
CatID ProductID
---------------------------
Food Beans
Food Corn
Food Peas
Drink Juice
Drink Wine
```
And another table Purchases
```
Region ProductID Cost
-----------------------------
North Beans 5
South Beans 5
West Beans 5
North Corn 5
North Peas 5
West Wine 10
West Juice 10
```
What I'd like is to have a table that returns
```
Region CatID TotalCost
-----------------------------
North Food 15
South Food 5
West Food 5
West Drink 20
```
I am certain that I am over complicating it. This is the direction that I am traveling:
```
select P.Region, Y.CatID, SUM(P.Cost) As 'TotalCost'
from Purchases As P,
( select distinct(A.CatID),
Includes=( stuff (
select ''''+ ProductID + ''','
from ProdByCat B
where B.CatID = A.CatID
order by ProductID
for xml path ('')
),1,1,'')
from ProdByCat A
) Y
where ProductID in (Y.Includes)
group by P.Region, Y.CatID
```
It's fubar'd. Syntactically, it works but returns an empty set.
My thought was that if I used the xml path function, I could create an include list that if the ProductID existed in, would allow me to create a sum.
|
Let's create the test data:
```
DECLARE @ProdByCat TABLE
(
CatID VARCHAR(10),
ProductID VARCHAR(10)
)
INSERT INTO @ProdByCat
( CatID, ProductID )
VALUES
('Food' ,'Beans'),
('Food' ,'Corn'),
('Food' ,'Peas'),
('Drink' ,'Juice'),
('Drink' ,'Wine');
DECLARE @Purchases TABLE
(
Region VARCHAR(10),
ProductID VARCHAR(10),
Cost int
)
INSERT INTO @Purchases
( Region, ProductID, Cost )
VALUES
('North', 'Beans', 5),
('South', 'Beans', 5),
('West', 'Beans', 5),
('North', 'Corn', 5),
('North', 'Peas', 5),
('West', 'Wine', 10),
('West', 'Juice', 10);
```
Now we'll do a join and a group by to get the cost of each category:
```
SELECT p.Region, pc.CatId, SUM(COST) AS Cost
FROM @Purchases p
INNER JOIN @ProdByCat pc
ON p.ProductID = pc.ProductID
GROUP BY p.Region, pc.CatID
ORDER BY p.Region, pc.CatID DESC
```
Output:
```
Region CatId Cost
North Food 15
South Food 5
West Food 5
West Drink 20
```
|
You're right. You are over complicating it. It can be much simpler:
```
SELECT p.Region, pbc.CatID, SUM(p.Cost) AS TotalCost
FROM Purchases p
INNER JOIN ProdByCat pbc
ON p.ProductID = pbc.ProductID
GROUP BY p.Region, pbc.CatID;
```
|
SUM first table on second CATEGORY table
|
[
"",
"sql",
"sql-server",
"t-sql",
"sum",
""
] |
I have 8 tables:
received1, received2, received3, received4,
recovery1, recovery2, recovery3, recovery4
Each of these tables has a field named "Item".
What I want to do is to find all the records that have a match between received and recovery; however, if an item is in received3 but not in recovery3, I don't want to show it.
Here's an example: An item is received, it goes into received1, then there's a recovery, it goes in recovery1. If it stops there, I want to select it.
Another example: An item is received, it goes into received1, then there's a recovery, it goes in recovery1; it is received again, goes into received2, there's a recovery again, recovery2; then it is received again, received3, but there is no recovery. I don't want to select this one because the item doesn't have a recovery.
EDIT: To be clearer, I want to get the items whose last received/recovery matches. If an item has been received and recovered 2 times, I want to select it. However, if an item has been received 3 times and recovered 2 times, I don't want to select it (in this example, there will be an item in received1-2-3 and a recovery in 1-2 but not 3).
|
From your example it seems an 'item' always follows the receive-recover-receive-recover pattern. If that's the case, an item can only be considered if it appears a multiple of two times. Also, since you have not given any further details on the table structure, I am assuming an 'item' does not repeat within a table. In that case, the query below will give you what you need.
```
SELECT Item FROM
  (SELECT Item, Count(*) Cnt FROM
    (SELECT Item FROM Received1
     UNION ALL
     SELECT Item FROM Received2
     UNION ALL
     SELECT Item FROM Received3
     UNION ALL
     SELECT Item FROM Recovered1
     UNION ALL
     SELECT Item FROM Recovered2
     UNION ALL
     SELECT Item FROM Recovered3) A
   GROUP BY Item) B
WHERE Cnt % 2 = 0 -- checks for multiples of 2
```
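A runnable check of the count-parity idea from this answer, using SQLite from Python (the item names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("received1", "received2", "received3",
          "recovery1", "recovery2", "recovery3"):
    conn.execute(f"CREATE TABLE {t} (item TEXT)")

# itemA: received and recovered once -> 2 rows total, selected
# itemB: received three times, recovered twice -> 5 rows, excluded
conn.execute("INSERT INTO received1 VALUES ('itemA'), ('itemB')")
conn.execute("INSERT INTO recovery1 VALUES ('itemA'), ('itemB')")
conn.execute("INSERT INTO received2 VALUES ('itemB')")
conn.execute("INSERT INTO recovery2 VALUES ('itemB')")
conn.execute("INSERT INTO received3 VALUES ('itemB')")

rows = conn.execute("""
    SELECT item FROM (
        SELECT item, COUNT(*) AS cnt FROM (
            SELECT item FROM received1 UNION ALL
            SELECT item FROM received2 UNION ALL
            SELECT item FROM received3 UNION ALL
            SELECT item FROM recovery1 UNION ALL
            SELECT item FROM recovery2 UNION ALL
            SELECT item FROM recovery3
        ) GROUP BY item
    ) WHERE cnt % 2 = 0
""").fetchall()
print(rows)  # only the fully-recovered item
```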
|
What you can do is left outer join the tables and add a condition in your WHERE clause to exclude the records you don't want, e.g.
```
SELECT ...
FROM received1 r1
LEFT OUTER JOIN recovery1 rc1 ON (r1.item = rc1.item)
-- left outer join received2, recovery2, etc...
WHERE rc1.item IS NOT NULL ...
```
Something like that
|
SQL exclude certain records in subquery
|
[
"",
"sql",
"sql-server",
"join",
""
] |
Here is the table schema;
```
create table campaign (
id INT NOT NULL,
custom_param1 VARCHAR(15),
custom_value1 VARCHAR(15),
custom_param2 VARCHAR(15),
custom_value2 VARCHAR(15),
custom_param3 VARCHAR(15),
custom_value3 VARCHAR(15),
PRIMARY KEY (id)
)
```
and this is the mock data I have;
```
id custom_param1 custom_value1 custom_param2 custom_value2 custom_param3 custom_value3
-- ------------- ------------- ------------- ------------- ------------- -------------
1 param1 value1 param2 value2 param3 value3
2 param1 value1 param2 value2 param3 value3
3 param1 value1 param2 value2 param3 value3
4 param1 value1 param2 value2 NULL NULL
5 param1 value1 NULL NULL NULL NULL
```
Now I am trying to concatenate the param and value columns into a format like
```
param1=value1; param2=value2, param3=value3
```
if a row has only two params and values it should be like;
```
param1=value1; param2=value2
```
Well, I created a query to concatenate the row, but if one of the values is NULL then the whole row becomes NULL. As far as I know, if the `CONCAT` function tries to concatenate a NULL value, the string becomes NULL entirely. This is the query I use:
```
select CONCAT(c.custom_param1, '=', c.custom_value1, '; ',
c.custom_param2, '=', c.custom_value2, '; ',
c.custom_param3, '=', c.custom_value3) as 'Custom Parameters'
from campaign as c;
```
I will appreciate any kind of help. Thanks anyway.
|
Depending on what you want to do if the value is null, you can try
```
SELECT CONCAT(
c.custom_param1, '=', IFNULL(c.custom_value1, ''), '; ',
c.custom_param2, '=', IFNULL(c.custom_value2, ''), '; ',
c.custom_param3, '=', IFNULL(c.custom_value3, ''), '; ') as 'Custom Parameters'
FROM campaign as c;
```
Will return
```
param1=value1; param2=value2; param3=;
```
Or you can exclude the whole value pair like this....
```
SELECT CONCAT(
IFNULL(CONCAT(c.custom_param1, '=', c.custom_value1, '; '), ''),
IFNULL(CONCAT(c.custom_param2, '=', c.custom_value2, '; '), ''),
IFNULL(CONCAT(c.custom_param3, '=', c.custom_value3, '; '), '')) AS 'Custom Parameters'
FROM campaign as c;
```
which will return
```
param1=value1; param2=value2;
```
Hope that helps
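The pair-exclusion variant translates directly to SQLite, where the `||` operator has the same NULL-propagating behaviour as `CONCAT`. A runnable sketch from Python with shortened, hypothetical column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE campaign (
    id INTEGER, p1 TEXT, v1 TEXT, p2 TEXT, v2 TEXT, p3 TEXT, v3 TEXT)""")
conn.executemany("INSERT INTO campaign VALUES (?,?,?,?,?,?,?)",
    [(1, 'param1', 'value1', 'param2', 'value2', 'param3', 'value3'),
     (5, 'param1', 'value1', None, None, None, None)])

# '||' yields NULL as soon as any operand is NULL, so wrapping each whole
# pair in IFNULL drops a missing pair instead of nulling the row
rows = conn.execute("""
    SELECT IFNULL(p1 || '=' || v1 || '; ', '')
        || IFNULL(p2 || '=' || v2 || '; ', '')
        || IFNULL(p3 || '=' || v3 || '; ', '')
    FROM campaign ORDER BY id
""").fetchall()
print(rows)
```

Row 5 comes out as just `param1=value1; `, with the two missing pairs excluded.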
|
Use ifnull <http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull>
```
select CONCAT(IFNULL(c.custom_param1, ''), '=', IFNULL(c.custom_value1, ''), '; ',
              IFNULL(c.custom_param2, ''), '=', IFNULL(c.custom_value2, ''), '; ',
              IFNULL(c.custom_param3, ''), '=', IFNULL(c.custom_value3, '')) as 'Custom Parameters'
from campaign as c
```
You could change `IFNULL(c.custom_param1, '')` to `IFNULL(c.custom_param1, 'NULL')` or whatever you need.
Update: that can produce `param1=value1; =; =`, which should be just `param1=value1`. EDIT:
```
select
CONCAT(
IF(c.custom_param1 IS NOT NULL AND c.custom_value1 IS NOT NULL, CONCAT(c.custom_param1, '=', c.custom_value1, '; '), ''),
IF(c.custom_param2 IS NOT NULL AND c.custom_value2 IS NOT NULL, CONCAT(c.custom_param2, '=', c.custom_value2, '; '), ''),
IF(c.custom_param3 IS NOT NULL AND c.custom_value3 IS NOT NULL, CONCAT(c.custom_param3, '=', c.custom_value3), '')
) as 'Custom Parameters'
from campaign as c
```
example: <http://sqlfiddle.com/#!2/ef859f/2/0>
|
mysql concatenate columns if column is not null
|
[
"",
"mysql",
"sql",
"concatenation",
""
] |
I'm using the following code:
```
create TABLE #temp1(
[date] [varchar](500) NULL,
[time] [varchar](500) NULL,
[s-ip] [varchar](500) NULL,
[cs-method] [varchar](500) NULL,
[cs-uri-stem] [varchar](max) NULL,
[cs-uri-query] [varchar](max) NULL,
[s-port] [varchar](500) NULL,
[cs-username] [varchar](500) NULL,
[c-ip] [varchar](500) NULL,
[cs(User-Agent)] [varchar](2048) NULL,
[cs(Referer)] [varchar](max) NULL,
[sc-status] [varchar](max) NULL,
[sc-substatus] [varchar](500) NULL,
[sc-win32-status] [varchar](500) NULL,
[time-taken] [varchar](500) NULL
)
BULK INSERT #temp1 FROM 'C:\iislogs\u_ex150108.log'
WITH (
FIRSTROW = 5,
FIELDTERMINATOR = ' ',
ROWTERMINATOR = '\n',
MAXERRORS = 10000
)
```
and the file that I'm using has 1521000 lines, but I am getting only 1305226 rows in #temp1.
Please help
Thanks in advance
|
Thanks all for your valuable answers.
But again, after a long search on Google, I found one solution: this is a Microsoft SQL Server bug.
The bug can only be solved by installing the latest SQL Server service pack on the system.
And when I installed the service pack on my system, bulk insert worked well.
For any further help, please see the link below:
<https://support.microsoft.com/kb/837401/EN-US>
Thanks Again
|
Set all the varchar columns to max and run the bulk insert again. Also, can you paste your log file into Excel and check whether the total row count is really 1521000?
The problem is usually caused by an invalid FIELDTERMINATOR or ROWTERMINATOR.
|
Bulk insert does not insert all the rows from the log file without giving any error
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to get two counts of separate columns for data in one table.
I have a database that tracks issues, and one table, Issue, has the 2 relevant columns that each contain a date. Very similar to the following.
```
DateOpened DateClosed
2015-01-08 2015-01-08
2015-01-08 2015-01-08
2015-01-06 2015-01-08
2015-01-06 2015-01-08
2015-01-04 2015-01-07
2015-01-02 2015-01-07
2015-01-02 2015-01-07
```
My goal is to be able to count the number of entries opened and closed on each date. An example of the expected output from above would be.
```
Date CountDateOpened CountDateClosed
2015-01-08 2 4
2015-01-07 0 3
2015-01-06 2 0
2015-01-05 0 0
2015-01-04 1 0
2015-01-03 0 0
2015-01-02 2 0
```
I know I need to group by date. There should be days where more issues are closed than opened, but my COUNT(DateClosed) never seems to exceed my COUNT(DateOpened). I am doing on-the-fly date conversions in the query, but I do not believe them to be relevant since I always round to the nearest day. Here is the query I'm running so far, skinned down for simplicity.
```
SELECT
CREATEDATE AS [Date],
COUNT(CREATEDATE) AS [Number Opened],
COUNT(CLOSEDATE) AS [Number Closed]
FROM
ISSUE
GROUP BY
CREATEDATE
ORDER BY
[Date] DESC
```
|
One way of doing this is to use `union all` to create a single column for both dates and then group according to its type:
```
SELECT `Date`,
       COUNT(`open`) AS `CountDateOpened`,
       COUNT(`closed`) AS `CountDateClosed`
FROM (SELECT `DateOpened` AS `Date`, 1 AS `open`, NULL AS `closed`
FROM `issue`
UNION ALL
SELECT `DateClosed` AS `Date`, NULL AS `open`, 1 AS `closed`
FROM `issue`
) t
GROUP BY `Date`
ORDER BY `Date` DESC
```
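The UNION ALL approach can be run end-to-end in SQLite from Python with the sample data. Note that, like the answer's query, it only produces rows for dates that appear somewhere; dates with no activity (2015-01-05, 2015-01-03 in the question's expected output) simply don't appear:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issue (opened TEXT, closed TEXT)")
conn.executemany("INSERT INTO issue VALUES (?, ?)",
    [('2015-01-08', '2015-01-08'), ('2015-01-08', '2015-01-08'),
     ('2015-01-06', '2015-01-08'), ('2015-01-06', '2015-01-08'),
     ('2015-01-04', '2015-01-07'), ('2015-01-02', '2015-01-07'),
     ('2015-01-02', '2015-01-07')])

# stack both date columns into one, flagging which side each row came from;
# COUNT ignores NULLs, so each flag column counts only its own side
rows = conn.execute("""
    SELECT d, COUNT(o), COUNT(c)
    FROM (SELECT opened AS d, 1 AS o, NULL AS c FROM issue
          UNION ALL
          SELECT closed AS d, NULL AS o, 1 AS c FROM issue)
    GROUP BY d ORDER BY d DESC
""").fetchall()
print(rows)
```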
|
Try this
```
select
d.dt,(select COUNT(DateOpened) ct from ISSUE where
CAST(DateOpened as DATE)=CAST(d.dt as DATE) )
,(select COUNT(DateClosed) ct from ISSUE where
CAST(DateClosed as DATE)=CAST(d.dt as DATE) )
from (
select number,DATEADD(D,number-7,GETDATE()) dt
from master.dbo.spt_values sp
where type='P' and DATEADD(D,number-7,GETDATE())<'2015-01-09'
)
d
ORDER BY d.dt desc
```
**OUTPUT**
```
Date DateOpened DateClosed
2015-01-08 2 4
2015-01-07 0 3
2015-01-06 2 0
2015-01-05 0 0
2015-01-04 1 0
2015-01-03 0 0
2015-01-02 2 0
```
|
Count 2 columns by the same grouping
|
[
"",
"mysql",
"sql",
"select",
"count",
"group-by",
""
] |