| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a table in a SQL Server db which has an 'nvarchar' column containing datetime data.
I want to add two more columns to the table: one holding the whole datetime value in 'datetime' datatype, and the other holding just the date part.
I have around 5 million rows in the table.
The nvarchar data looks like this: 2013-03-20 00:00:50
I would sincerely appreciate it if someone could help me with a SQL command that would do this.
Thanks
|
What you need to do is to first alter the table, then update it.
```
ALTER TABLE YOUR_TABLE ADD DATETIME_COL DATETIME
ALTER TABLE YOUR_TABLE ADD DATE_COL DATE
GO -- new batch, so the UPDATE can see the columns added above
UPDATE YOUR_TABLE
SET
    DATE_COL = CAST(NVCHAR_DATE AS DATE),
    DATETIME_COL = CAST(NVCHAR_DATE AS DATETIME)
```
However, storing the date as a separate column seems a bit redundant. Maybe a computed column would be a better choice:
```
ALTER TABLE YOUR_TABLE ADD COMPUTED_DATE_COL AS CAST(DATETIME_COL AS DATE)
```
See this [SQL Fiddle](http://www.sqlfiddle.com/#!3/ad013/1)
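A small, hypothetical SQLite analogue of the add-then-backfill approach (via Python's `sqlite3`); SQLite stores dates as text, so its `datetime()`/`date()` functions stand in for the `CAST`s above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE your_table (nvchar_date TEXT);
INSERT INTO your_table VALUES ('2013-03-20 00:00:50');
-- same two-step approach: add the columns, then backfill them
ALTER TABLE your_table ADD COLUMN datetime_col TEXT;
ALTER TABLE your_table ADD COLUMN date_col TEXT;
UPDATE your_table
SET datetime_col = datetime(nvchar_date),
    date_col     = date(nvchar_date);
""")
row = conn.execute("SELECT datetime_col, date_col FROM your_table").fetchone()
print(row)  # ('2013-03-20 00:00:50', '2013-03-20')
```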
|
Try this:
```
update MyTable
set dateColumnWithTime = cast(MyVarcharDate as datetime),
    datecolumnWithoutTime = DATEADD(Day, DATEDIFF(Day, 0, cast(MyVarcharDate as datetime)), 0)
```
Output:
```
dateColumnWithTime datecolumnWithoutTime
----------------------- -----------------------
2013-03-20 00:00:50.000 2013-03-20 00:00:00.000
```
|
Adding columns in sql server with transformed data from same table
|
[
"",
"sql",
"sql-server",
""
] |
I am building a search engine; therefore, as in Google, I am displaying only 4 results, but I also need the total number of matched results.
Can I do this in a single query in Oracle?
|
Use window functions:
```
select *
from (
select col1,
col2,
row_number() over (order by some_column) as rn,
count(*) over () as total_count
from the_table
)
where rn <= 4;
```
But if that table is really big, it is not going to be very fast.
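For readers who want to experiment with this pattern outside Oracle, here is a minimal sketch in SQLite (via Python's `sqlite3`), which supports the same window functions; table and column contents are invented for the illustration, and an outer `ORDER BY rn` is added so the page order is guaranteed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE the_table (col1 TEXT, col2 TEXT, some_column INTEGER);
INSERT INTO the_table VALUES
  ('a','x',1),('b','x',2),('c','x',3),('d','x',4),('e','x',5),('f','x',6);
""")
# One page of 4 rows plus the total match count, in a single pass over the table
rows = conn.execute("""
    SELECT col1, rn, total_count
    FROM (
        SELECT col1,
               ROW_NUMBER() OVER (ORDER BY some_column) AS rn,
               COUNT(*) OVER () AS total_count
        FROM the_table
    )
    WHERE rn <= 4
    ORDER BY rn
""").fetchall()
print(rows)  # 4 rows, each carrying total_count = 6
```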
|
You can do something like the following, ready-to-run query:
```
SELECT tbl2.*
FROM (SELECT tbl1.*, ROWNUM rownumber
FROM (SELECT 1, 2, count(*) FROM DUAL) tbl1) tbl2
WHERE tbl2.rownumber BETWEEN 0 AND 4;
```
And the result is:
```
column1|column2|COUNT(*)|ROWNUMBER
1 2 1 1
```
This gets the rows between 0 and 4; if you want others, you can modify the query to take those bounds as inputs instead.
|
Get total count of rows in ORACLE and then get only 4 results
|
[
"",
"sql",
"oracle",
""
] |
Apart from the `modify` statement, are there any other ways of modifying the contents of SAS datasets (i.e. altering values, or adding or removing rows or columns) that **don't** involve creating a temporary file in work and then replacing the entire original file?
Related question: if I have a single `proc sql` with one `create table` statement and several `insert` statements, all targeting the same table, will SAS end up overwriting the output table several times during execution, or is it clever enough to do all the writes in one pass? Let's assume that I'm not connecting to any other DBMS.
Since 2 people have posted this already, the following is not a valid answer:
```
data lib.dsn;
set lib.dsn;
/*Insert logic here*/
run;
```
If you do this, SAS creates a temporary file and replaces the original lib.dsn once the data step is complete. If you interrupt this kind of data step, there will be an error in the log, but the original dataset will remain unchanged.
|
Update rows with `PROC SQL; UPDATE`
delete with `PROC SQL; DELETE`
add with `PROC APPEND` or `PROC SQL; INSERT`
|
I found one - but are there other similar methods for overwriting rows, or adding/removing variables? From the help page for the `append` statement:
> The APPEND statement bypasses the processing of data in the original
> data set and adds new observations directly to the end of the original
> data set.
Found another - it seems that the `remove` statement can delete rows in the way I want to, but only if I'm using a `modify` statement, which I already knew about.
|
Can I modify an existing SAS dataset without creating a temporary file?
|
[
"",
"sql",
"sas",
"database-performance",
""
] |
## **Query**
```
select * from TableA a join TableB b
on a.col1=b.col1
where b.col2 = 'SomeValue'
```
I expect the server to first filter col2 from TableB and then do the join, which would be more efficient.
1. Does SQL Server evaluate the where clause first and then the join?
2. Any link to know in which order SQL Server will process a query?
Thanks in advance
|
Already answered ... read both answers ...
<https://dba.stackexchange.com/questions/5038/sql-server-join-where-processing-order>
To summarise: it depends on the server implementation and its execution plan ... so you will need to read up on your server in order to optimise your queries.
But I'm sure that simple joins get optimised by each server as best as it can.
If you are not sure, measure execution time on a large dataset.
|
We will use this code:
```
IF OBJECT_ID(N'tempdb..#TableA',N'U') IS NOT NULL DROP TABLE #TableA;
IF OBJECT_ID(N'tempdb..#TableB',N'U') IS NOT NULL DROP TABLE #TableB;
CREATE TABLE #TableA (col1 INT NOT NULL,Col2 NVARCHAR(255) NOT NULL)
CREATE TABLE #TableB (col1 INT NOT NULL,Col2 NVARCHAR(255) NOT NULL)
INSERT INTO #TableA VALUES (1,'SomeValue'),(2,'SomeValue2'),(3,'SomeValue3')
INSERT INTO #TableB VALUES (1,'SomeValue'),(2,'SomeValue2'),(3,'SomeValue3')
select * from #TableA a join #TableB b
on a.col1=b.col1
where b.col2 = 'SomeValue'
```
Let's analyze the query plan in SQL Server Management Studio. Select the full SELECT statement and right-click --> Display Estimated Execution Plan. As you can see in the picture below,

**first it does Table Scan for the WHERE clause, then JOIN.**
*1. Does SQL Server evaluate the where clause first and then the join?*
**First the where clause, then the JOIN**
*2. Any link to know in which order SQL Server will process a query?*
**I think you will find useful information here:**
1. [Execution Plan Basics](https://www.simple-talk.com/sql/performance/execution-plan-basics/)
2. [Graphical Execution Plans for Simple SQL Queries](https://www.simple-talk.com/sql/performance/graphical-execution-plans-for-simple-sql-queries/)
|
Where or Join Which one is evaluated first in sql server?
|
[
"",
"sql",
"sql-server",
"sql-execution-plan",
""
] |
Sorry for the terrible wording in the question, struggling to explain this properly.
I have a table like this:
```
Id Name Version
1 Chrome 38.0
2 Chrome 36.0
3 Chrome 37.0
4 Firefox 31.0
5 IE 11.0
6 IE 8.0
7 IE 7.0
```
I need a query that returns the "Version" for "IE" rows only if the "Version" is >= 8.0.
Otherwise don't return a version. I would expect the result to be...
```
Id Name Version
1 Chrome null
2 Chrome null
3 Chrome null
4 Firefox null
5 IE 11.0
6 IE 8.0
```
If it helps, here is my stored procedure so far; it returns all versions, which isn't what I want.
```
ALTER PROCEDURE [dbo].[GetCommonBrowserCount]
@StartDate datetime = NULL,
@EndDate datetime = NULL,
@Domain varchar(255) = NULL
AS
BEGIN
SET NOCOUNT ON;
SELECT
[Name],
COUNT(Name) AS [Count],
[Version]
FROM
dbo.GetCommonBrowserAccessEvents
WHERE
(
@StartDate IS NULL OR [Date] > @StartDate
)
AND
(
@EndDate IS NULL OR [Date] < @EndDate
)
AND
(
@Domain IS NULL OR DomainName = @Domain
)
GROUP BY Name,
[Version]
END
```
Is this possible?
|
Use a `case`
```
select id,
name,
case when name = 'IE'
then version
else null
end as version
from GetCommonBrowserAccessEvents
where not
(
name = 'IE' and cast(substring(version, 1, charindex('.', version) - 1) as int) < 8
)
```
|
Limiting the result set is just a matter of adding another `AND` to your `WHERE` clause:
```
AND
(
[Name] <> 'IE' OR [Version] >= 8
)
```
In terms of returning `NULL` for `[version]` when it's not IE (why?), you can use `CASE` for that:
```
SELECT
-- ...
CASE WHEN [Name] = 'IE' THEN [Version] ELSE NULL END AS Version
```
(If you were on SQL Server 2012 or higher, you could use [`IIF`](http://msdn.microsoft.com/en-gb/library/hh213574.aspx) instead to be a bit more concise, but not on 2008.)
|
Return "Name"s with value "IE" only if the "Version" is >= 8.0
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have written an ajax function to check if a value exists inside the database.
For example consider two strings "Book" and "book". In my current situation "Book" is there inside the Database and if I search using the query below
```
Select * from Cat where name='book'
OR
Select * from Cat where name like 'book'
```
It returns an empty result set since the 'b' is in lowercase. My collation is utf8\_bin.
What query will compare the values case-insensitively, so that it matches whether the stored value is upper or lower case?
|
Use `upper()` function to make both strings to upper case:
```
Select * from Cat where upper(name)=upper('book')
```
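As a quick illustration of the idea (SQLite's default `BINARY` collation is case-sensitive, much like `utf8_bin`), here is a hypothetical sketch via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Cat (name TEXT)")
conn.execute("INSERT INTO Cat VALUES ('Book'), ('Pen')")

# The case-sensitive comparison misses the row...
miss = conn.execute("SELECT * FROM Cat WHERE name = 'book'").fetchall()
# ...while normalizing both sides with upper() finds it
hit = conn.execute("SELECT * FROM Cat WHERE upper(name) = upper('book')").fetchall()
print(miss, hit)  # [] [('Book',)]
```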
|
If I understand correctly, you can use the `upper` or `lower` function in the comparison.
|
Comparison of two strings in sql
|
[
"",
"mysql",
"sql",
""
] |
1. For example, `test` value 7 covers two rows; if the `number` column contains values 5 AND 6, AND the value **X** is NOT in the `chr` column, I would like to select the rows with 7 in the `test` column.
2. For example, `test` value 10 covers three rows; if the `number` column contains values 5 AND 6, AND the value **X** exists in the `chr` column, I would like to exclude the rows with 10 in the `test` column.
The [Demo](http://www.sqlfiddle.com/#!2/f209fe/5) of the below Schema and broken SQL query is available on SQL fiddle.
Schema:
```
CREATE TABLE TEST_DATA (ID INT, TEST INT, CHR VARCHAR(1), NUMBER INT);
INSERT INTO TEST_DATA VALUES
( 1 , 7 , 'C' , 5),
( 2 , 7 , 'T' , 6),
( 3 , 8 , 'C' , 4),
( 4 , 8 , 'T' , 5),
( 5 , 9 , 'A' , 4),
( 6 , 9 , 'G' , 5),
( 7 , 10 , 'T' , 4),
( 8 , 10 , 'A' , 5),
( 9 , 10 , 'X' , 6),
(10 , 14 , 'T' , 4),
(11 , 14 , 'G' , 5);
```
SQL:
```
SELECT *
FROM test_data t1, test_data t2
WHERE t1.number=5 is not t1.chr=X AND
t2.number=6 is not t2.chr=X;
```
How would it be possible to keep a `test` group if the `number` column contains 5 and 6 and the `chr` column does not contain **X**?
**UPDATE** The result should only contain `test` 7, because `test` 7 has 5 and 6 in the `number` column and no X.
**UPDATE 2** Result example:
```
ID | TEST | CHR | NUMBER
1 | 7 | C | 5
2 | 7 | T | 6
```
|
If I understand the requirement correctly...
```
SELECT a.test
FROM test_data a
LEFT
JOIN test_data b
ON b.test = a.test
AND b.chr = 'x'
WHERE a.number IN (5,6)
AND b.id IS NULL
GROUP
BY a.test
HAVING COUNT(*) = 2;
```
<http://www.sqlfiddle.com/#!2/1939f/4>
You can join this result back on to `test_data` to get all results with a `test` equal to 7
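A runnable translation of this approach to SQLite (via Python's `sqlite3`), using the schema from the question; note the literal is written as `'X'` here because SQLite string comparison is case-sensitive, unlike MySQL's default collation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_data (id INT, test INT, chr VARCHAR(1), number INT);
INSERT INTO test_data VALUES
 ( 1, 7,'C',5),( 2, 7,'T',6),( 3, 8,'C',4),( 4, 8,'T',5),( 5, 9,'A',4),
 ( 6, 9,'G',5),( 7,10,'T',4),( 8,10,'A',5),( 9,10,'X',6),(10,14,'T',4),(11,14,'G',5);
""")
tests = conn.execute("""
    SELECT a.test
    FROM test_data a
    LEFT JOIN test_data b ON b.test = a.test AND b.chr = 'X'
    WHERE a.number IN (5,6)
      AND b.id IS NULL
    GROUP BY a.test
    HAVING COUNT(*) = 2
""").fetchall()
print(tests)  # only test 7 has both 5 and 6 and no 'X' row
```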
|
Try this.
## Query
```
SELECT *
FROM test_data
WHERE number IN (5,6)
AND test NOT IN (10)
AND chr NOT IN ('X');
```
## [Fiddle Demo](http://www.sqlfiddle.com/#!2/f209fe/20)
|
Selecting data from multiple rows
|
[
"",
"mysql",
"sql",
""
] |
I have been doing a lot of research attempting to find an answer to my question.
I am trying to work out which syntax is needed to round when the figure is less than one.
For example:
SELECT 17/26
When running this in SQL, it returns zero; however, I am attempting to get it to return an answer of 0.65.
I have tried using ROUND, and CAST AS numeric, decimal, and also money.
So far... no luck.
Any help would be appreciated.
|
try this
```
SELECT round(convert(float,17)/26,2)
```
|
For whatever it's worth, when I'm doing something like this with an actual hard-coded value I just add a decimal place to one of the elements. A `CAST()` is better for a database field, but if you're typing something in just use a decimal ...
```
SELECT 17/26, 17/26.0, 17.0/26
```
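The same behaviour is easy to reproduce in SQLite (via Python's `sqlite3`), shown here as an illustrative sketch: integer/integer division truncates, while making either operand a decimal gives the expected result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
(int_div,) = conn.execute("SELECT 17/26").fetchone()        # both operands integer
(dec_div,) = conn.execute("SELECT ROUND(17.0/26, 2)").fetchone()
print(int_div, dec_div)  # 0 0.65
```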
|
SQL Rounding Query
|
[
"",
"sql",
"sql-server",
"syntax",
"rounding",
""
] |
Hello all. I have a table that looks like
```
NAME1 NAME2 Result
Jone Jim win
Kate Lucy loss
Jone Lucy win
Jim Jone loss
```
I want to select the names from NAME1 with a win count >= 3. My code is
```
SELECT NAME1,Count(Result='win') as WIN_CASE
From TABLE
Group by NAME1
Having Count(Result='win')>=3;
```
However, the result is not correct: the output just returns the total number of rows for each name in NAME1. What should I do to fix it, please?
UPDATE: Thanks for all the replies. The answers from Kritner and jbarker work fine. I had just forgotten to add the "where" clause.
|
# Query:
```
SELECT NAME1, COUNT(Result) AS WIN_CASE
FROM A
WHERE Result='win'
GROUP BY NAME1
HAVING COUNT(Result)>=3
```
|
Try this
```
select *
from (select NAME1, Result, count(*) as res from test group by Result, NAME1) as t
where t.res>=3 and t.Result ='win'
```
|
Access sql combine SELECT and COUNT function
|
[
"",
"sql",
"ms-access",
""
] |
I have 4 tables, and I want to fetch records from all 4 and aggregate the values.
I have these tables:

I am expecting this output:

but am getting this output, a Cartesian product:

It is multiplying the expenses and allocations.
Here is my query
```
select
a.NAME, b.P_NAME,
sum(a.DURATION) DURATION,
sum(b.[EXP]) EXPEN
from
(select
e.ID, a.P_ID, e.NAME, a.DURATION DURATION
from
EMPLOYEE e
inner join
ALLOCATION a ON e.ID = a.E_ID) a
inner join
(select
p.P_ID, e.E_ID, p.P_NAME, e.amt [EXP]
from
PROJECT p
inner join
EXPENSES e ON p.P_ID = e.P_ID) b ON a.ID = b.E_ID
and a.P_ID = b.P_ID
group by
a.NAME, b.P_NAME
```
Can anyone suggest something about this.
|
Hi, I got the answer I wanted with some modification to the query.
The above query also works like a charm; I made some modifications to my original query and got the answer.
You just have to group by in the inner queries and then join them; it will then not produce a Cartesian product.
Here is the updated query:
```
select a.NAME, b.P_NAME, sum(a.DURATION) DURATION, sum(b.[EXP]) EXPEN
from
    (select e.ID, a.P_ID, e.NAME, sum(a.DURATION) DURATION
     from EMPLOYEE e
     inner join ALLOCATION a on e.ID = a.E_ID
     group by e.ID, e.NAME, a.P_ID) a
inner join
    (select p.P_ID, e.E_ID, p.P_NAME, sum(e.amt) [EXP]
     from PROJECT p
     inner join EXPENSES e on p.P_ID = e.P_ID
     group by p.P_ID, p.P_NAME, e.E_ID) b
    on a.ID = b.E_ID and a.P_ID = b.P_ID
group by a.NAME, b.P_NAME
```
Showing the correct output
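A small, hypothetical SQLite reproduction (via Python's `sqlite3`) of why the pre-aggregation matters: joining the raw rows fans out one side's rows across the other's, inflating the sums, while aggregating each side before the join does not. Table and column names are simplified stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE allocation (e_id INTEGER, p_id INTEGER, duration INTEGER);
CREATE TABLE expenses   (e_id INTEGER, p_id INTEGER, amt INTEGER);
INSERT INTO allocation VALUES (1,1,5), (1,1,5);  -- two allocation rows
INSERT INTO expenses   VALUES (1,1,100);         -- one expense row
""")
# Joining raw rows: the single expense row pairs with each allocation row,
# so SUM(amt) is doubled.
(dur, exp) = conn.execute("""
    SELECT SUM(a.duration), SUM(e.amt)
    FROM allocation a JOIN expenses e ON a.e_id = e.e_id AND a.p_id = e.p_id
""").fetchone()
# Pre-aggregating each side before the join avoids the multiplication.
(dur2, exp2) = conn.execute("""
    SELECT a.duration, e.amt
    FROM (SELECT e_id, p_id, SUM(duration) AS duration
          FROM allocation GROUP BY e_id, p_id) a
    JOIN (SELECT e_id, p_id, SUM(amt) AS amt
          FROM expenses GROUP BY e_id, p_id) e
      ON a.e_id = e.e_id AND a.p_id = e.p_id
""").fetchone()
print((dur, exp), (dur2, exp2))  # (10, 200) (10, 100)
```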
|
The following should work:
```
SELECT e.Name,p.Name,COALESCE(d.Duration,0),COALESCE(exp.Expen,0)
FROM
Employee e
CROSS JOIN
Project p
LEFT JOIN
(SELECT E_ID,P_ID,SUM(Duration) as Duration FROM Allocation
GROUP BY E_ID,P_ID) d
ON
e.E_ID = d.E_ID and
p.P_ID = d.P_ID
LEFT JOIN
(SELECT E_ID,P_ID,SUM(AMT) as Expen FROM Expenses
GROUP BY E_ID,P_ID) exp
ON
e.E_ID = exp.E_ID and
p.P_ID = exp.P_ID
WHERE
d.E_ID is not null or
exp.E_ID is not null
```
I've tried to write a query that will produce results where e.g. there are rows in `Expenses` but no rows in `Allocations` (or vice versa) for some particular `E_ID`,`P_ID` combination.
|
SQL Server Circular Query
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to divide 2 counts in order to return a percentage.
The following query is returning `0`:
```
select (
(select COUNT(*) from saxref..AuthCycle
where endOfUse is null and addDate >= '1/1/2014') /
(select COUNT(*) from saxref..AuthCycle
where addDate >= '1/1/2014')
) as Percentage
```
Should I be applying a cast?
|
I would do it differently, using two `sum`s:
```
select sum
( case
when endOfUse is null and addDate >= '1/1/2014'
then 1
else 0
end
)
* 100.0 -- if you want the usual 0..100 range for percentages
/
sum
( case
when addDate >= '1/1/2014'
then 1
else 0
end
)
percentage
from saxref..AuthCycle
```
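A runnable sketch of the two-`SUM` technique in SQLite (via Python's `sqlite3`); ISO date strings stand in for the `datetime` column so plain string comparison sorts chronologically, and the table contents are invented for the illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE auth_cycle (end_of_use TEXT, add_date TEXT);
INSERT INTO auth_cycle VALUES
  (NULL,         '2014-03-01'),
  ('2014-05-01', '2014-02-01'),
  (NULL,         '2014-04-01'),
  (NULL,         '2013-12-01');  -- before the cutoff, ignored by both sums
""")
(pct,) = conn.execute("""
    SELECT SUM(CASE WHEN end_of_use IS NULL AND add_date >= '2014-01-01' THEN 1 ELSE 0 END)
           * 100.0
           / SUM(CASE WHEN add_date >= '2014-01-01' THEN 1 ELSE 0 END)
    FROM auth_cycle
""").fetchone()
print(round(pct, 1))  # 2 of the 3 qualifying rows are open: 66.7
```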
|
The issue is caused because you are dividing 2 `int` values, which by default will output an `int`, since the data types used in the calculation determine the data type of the output. So effectively if you do this:
```
select 50/100 as result
```
the mathematical result `0.5` is output as `0`, because integer division truncates the decimal part.
If you however specify decimals:
```
select 50.0/100.0 as result
```
You would get `0.5` as a decimal, which you could multiply by 100 to get 50%.
So updating your syntax to multiply by `1.0` and making the counts into decimals would give you the correct result:
```
select (
(select COUNT(*) from saxref..AuthCycle where endOfUse is null and addDate >= '1/1/2014')*1.0 /
(select COUNT(*) from saxref..AuthCycle where addDate >= '1/1/2014')*1.0
) as Percentage
```
|
Dividing 2 numbers returns 0
|
[
"",
"sql",
"sql-server",
"sum",
""
] |
Let's say, for example, I have the following query:
```
SELECT City, Country FROM Customers
WHERE Country='Germany'
UNION
SELECT City, Country FROM Suppliers
WHERE Country='Germany'
ORDER BY City;
```
As you can see the `WHERE Country='Germany'` is repeated in both the targets of the union - is there any way to reduce this to a query without repetitions? I don't like my queries being too long.
I'm currently working on Oracle.
|
Why not include the `WHERE` only once, and sort in the outer query, like
```
SELECT * FROM
(
SELECT City, Country FROM Customers
UNION ALL
SELECT City, Country FROM Suppliers
) tab
WHERE Country='Germany'
ORDER BY City
```
(OR) do a `JOIN` like
```
SELECT c.City as CustomerCity, c.Country as customerCountry,
s.City as suppliercity, s.Country as suppliercountry
FROM Customers c
LEFT JOIN Suppliers s ON c.Country = s.Country
AND c.Country='Germany'
ORDER BY c.City;
```
|
```
select distinct city, country
from
(
SELECT City, Country FROM Customers
WHERE Country='Germany'
UNION ALL
SELECT City, Country FROM Suppliers
WHERE Country='Germany'
) x
order by city
```
You can't really get around the need for a `UNION` if you really want both sets of rows: I've added a `UNION ALL` inside the main SQL and a `DISTINCT` outside to remove duplicates but with no extra sort operations (assuming you want to do that).
|
SQL reduce duplicates in union clause
|
[
"",
"sql",
"oracle",
""
] |
Ok, so I have two tables in MySQL, `photos` and `views`. Each time a photo is viewed, a new row is created in `views`.
I want the SQL to return a list of photos, with a total number of views for each photo.
I've been trying this query, but it's only giving me 1 photo as a result.
```
select photos.id, photos.loc, count(views.id) as views
from photos
left outer join views on views.id=photos.id
```
Can someone explain to me what I am doing wrong?
Thanks.
|
You need to `count` the views and `group by` the photo:
```
SELECT photos.id, photos.loc, COUNT(views.id) AS total_views
FROM photos
LEFT OUTER JOIN views ON views.id = photos.id
GROUP BY photos.id, photos.loc
```
Counting `views.id` rather than `*` means photos with no matching views report 0 instead of 1.
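As an illustrative check (SQLite via Python's `sqlite3`, invented sample data): counting `views.id` rather than `*` is what makes a photo with no views report `0`, since `COUNT` skips the `NULL`s produced by the outer join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE photos (id INTEGER, loc TEXT);
CREATE TABLE views  (id INTEGER);              -- the viewed photo's id
INSERT INTO photos VALUES (1,'a.jpg'), (2,'b.jpg');
INSERT INTO views  VALUES (1), (1), (1);       -- photo 1 viewed 3 times, photo 2 never
""")
rows = conn.execute("""
    SELECT photos.id, photos.loc, COUNT(views.id) AS total_views
    FROM photos
    LEFT OUTER JOIN views ON views.id = photos.id
    GROUP BY photos.id, photos.loc
    ORDER BY photos.id
""").fetchall()
print(rows)  # [(1, 'a.jpg', 3), (2, 'b.jpg', 0)]
```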
|
Should be something like this
```
SELECT
photos.id,
photos.loc,
count(views.id) as viewCount
FROM
    photos
    LEFT JOIN views ON views.id = photos.id -- not sure if it should be views.id or views.pid or something
GROUP BY
photos.id
```
|
SQL: Finding totals from a joined table
|
[
"",
"mysql",
"sql",
""
] |
I have a table with 5 columns: ID, ERROR1, ERROR2, ERROR3, ERROR4.
A small sample would look like:
```
ID | Error 1 | Error 2 | Error 3 | Error 4 |
12 | YES | (null) | (null) | YES |
15 | (null) | YES | (null) | YES |
```
So, I need to understand how to break up a single row of data where multiple columns contain "Yes", turning it into multiple instances of the same ID with only a single column reading Yes in each instance: two records for 12 and two records for 15, each having only one error and the rest Null in any individual row.
Thank you
|
Maybe this helps, but I'm not sure if I understand your expected result correctly:
```
SELECT ID, Error1, NULL AS Error2, NULL AS Error3, NULL AS Error4
FROM table
WHERE Error1 = 'YES'
UNION
ALL
SELECT ID, NULL AS Error1, Error2, NULL AS Error3, NULL AS Error4
FROM table
WHERE Error2 = 'YES'
UNION
ALL
SELECT ID, NULL AS Error1, NULL AS Error2, Error3, NULL AS Error4
FROM table
WHERE Error3 = 'YES'
UNION
ALL
SELECT ID, NULL AS Error1, NULL AS Error2, NULL AS Error3, Error4
FROM table
WHERE Error4 = 'YES'
```
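A runnable sanity check of this `UNION ALL` unpivot in SQLite (via Python's `sqlite3`), using the two sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE errors (id INTEGER, error1 TEXT, error2 TEXT, error3 TEXT, error4 TEXT);
INSERT INTO errors VALUES (12,'YES',NULL,NULL,'YES'), (15,NULL,'YES',NULL,'YES');
""")
rows = conn.execute("""
    SELECT id, error1, NULL AS error2, NULL AS error3, NULL AS error4
    FROM errors WHERE error1 = 'YES'
    UNION ALL
    SELECT id, NULL, error2, NULL, NULL FROM errors WHERE error2 = 'YES'
    UNION ALL
    SELECT id, NULL, NULL, error3, NULL FROM errors WHERE error3 = 'YES'
    UNION ALL
    SELECT id, NULL, NULL, NULL, error4 FROM errors WHERE error4 = 'YES'
    ORDER BY id
""").fetchall()
# Each original row fans out into one row per 'YES' column
print(rows)
```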
|
As an alternative solution, you could join your table to a "[diagonal matrix](http://en.wikipedia.org/wiki/Diagonal_matrix)", taking advantage, in the join clause, of the fact that `NULL` is not equal to `NULL`:
```
SELECT T.ID, O.*
FROM T JOIN (
--
-- build a diagonal matrix
--
SELECT 'YES' as "Error 1", NULL as "Error 2", NULL as "Error 3", NULL as "Error 4"
FROM DUAL
UNION ALL SELECT NULL, 'YES', NULL, NULL
FROM DUAL
UNION ALL SELECT NULL, NULL, 'YES', NULL
FROM DUAL
UNION ALL SELECT NULL, NULL, NULL, 'YES'
FROM DUAL
) O
ON T."Error 1" = O."Error 1"
OR T."Error 2" = O."Error 2"
OR T."Error 3" = O."Error 3"
OR T."Error 4" = O."Error 4";
```
See <http://sqlfiddle.com/#!4/bc0a7f/12>
|
Split row into multiple rows Oracle
|
[
"",
"sql",
"oracle",
""
] |
I'm using Oracle SQL and I have a basic question regarding the `join` command.
I have 5 tables. Each one of them has the same column as its primary key: `ID (int)`. Let's look at the following queries:
```
select count(*) from table_a - 100 records
select count(*) from table_b - 200 records
select count(*) from table_c - 150 records
select count(*) from table_d - 100 records
select count(*) from table_e - 120 records
```
---
```
select * -- 88 records
from table_a a
inner join table_b b
on a.id = b.id
inner join table_c c
on a.id = c.id
inner join table_d d
on a.id = d.id
inner join table_e e
on a.id = e.id
```
In this case, many records fail to be included in the output if one of the tables does not contain a certain ID (even if the rest of them do). How can I know what these "bad" records are? It's actually the complement of the intersection, I think.
I want to know the problematic records and tables of each case. For example: ID 123 is a "bad" record since it's not included in table\_c but is included in the rest of the tables. ID 321 is a problematic record since it's included in all of the tables except table\_d.
|
You are probably looking for the [symmetric difference](http://en.wikipedia.org/wiki/Symmetric_difference) between all your tables.
To solve that kind of problem without being too clever, you need a [`FULL OUTER JOIN ... USING`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries006.htm#sthref3174):
```
SELECT id
FROM table_a
FULL OUTER JOIN table_b USING(id)
FULL OUTER JOIN table_c USING(id)
FULL OUTER JOIN table_d USING(id)
FULL OUTER JOIN table_e USING(id)
WHERE table_a.ROWID IS NULL
OR table_b.ROWID IS NULL
OR table_c.ROWID IS NULL
OR table_d.ROWID IS NULL
OR table_e.ROWID IS NULL;
```
The `FULL OUTER JOIN` will return all rows that satisfy the join condition (like an ordinary `JOIN`) as well as all rows without corresponding rows. The `USING` clause embeds an implicit `COALESCE` on the equijoin column.
---
Another option would be to use an [anti-join](http://en.wikipedia.org/wiki/Relational_algebra#Antijoin_.28.E2.96.B7.29):
```
SELECT id
FROM table_a
FULL OUTER JOIN table_b USING(id)
FULL OUTER JOIN table_c USING(id)
FULL OUTER JOIN table_d USING(id)
FULL OUTER JOIN table_e USING(id)
WHERE id NOT IN (
SELECT id
FROM table_a
INNER JOIN table_b USING(id)
INNER JOIN table_c USING(id)
INNER JOIN table_d USING(id)
INNER JOIN table_e USING(id)
)
```
Basically, this will build the union of all sets minus the intersection of all sets.
Graphically, you can compare the `INNER JOIN` and the `OUTER JOIN` (on 3 tables only for ease of representation):

---
Given that test case:
> ```
> ID TABLE_A TABLE_B TABLE_C TABLE_D TABLE_E
> 1 * - - - -
> 2 - * * * *
> 3 * - - * -
> 4 * * * * *
> ```
>
> `*` value in the table `-` missing entry
Both queries will produce:
```
ID
1
3
2
```
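The set arithmetic behind both queries can be sketched in plain Python; the sets below mirror the test case above (this is an illustration, not Oracle code):

```python
tables = {
    "table_a": {1, 3, 4},
    "table_b": {2, 4},
    "table_c": {2, 4},
    "table_d": {2, 3, 4},
    "table_e": {2, 4},
}
anywhere   = set.union(*tables.values())         # ids present in at least one table
everywhere = set.intersection(*tables.values())  # ids present in all five tables
bad_ids = anywhere - everywhere                  # union minus intersection
missing = {i: sorted(t for t, ids in tables.items() if i not in ids) for i in bad_ids}
print(sorted(bad_ids), missing)  # ids 1, 2 and 3 are each absent from some table
```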
---
If you want tabular result, you might adapt one of these query by adding a bunch of `CASE` expressions. Something like that:
```
SELECT ID,
CASE when table_a.rowid is not null then 1 else 0 END table_a,
CASE when table_b.rowid is not null then 1 else 0 END table_b,
CASE when table_c.rowid is not null then 1 else 0 END table_c,
CASE when table_d.rowid is not null then 1 else 0 END table_d,
CASE when table_e.rowid is not null then 1 else 0 END table_e
FROM table_a
FULL OUTER JOIN table_b USING(id)
FULL OUTER JOIN table_c USING(id)
FULL OUTER JOIN table_d USING(id)
FULL OUTER JOIN table_e USING(id)
WHERE table_a.ROWID IS NULL
OR table_b.ROWID IS NULL
OR table_c.ROWID IS NULL
OR table_d.ROWID IS NULL
OR table_e.ROWID IS NULL;
```
Producing:
> ```
> ID TABLE_A TABLE_B TABLE_C TABLE_D TABLE_E
> 1 1 0 0 0 0
> 3 1 0 0 1 0
> 2 0 1 1 1 1
> ```
>
> `1` value in the table `0` missing entry
|
You can try the following query
```
SELECT id, COUNT(id) AS id_num FROM (
    SELECT id FROM table_a
    UNION ALL  -- UNION ALL, not UNION: plain UNION would collapse each id to one row
    SELECT id FROM table_b
    UNION ALL
    SELECT id FROM table_c
    UNION ALL
    SELECT id FROM table_d
    UNION ALL
    SELECT id FROM table_e
)
GROUP BY id HAVING COUNT(id) < 5  -- Oracle does not allow the alias id_num here
```
|
complement of the intersection in SQL
|
[
"",
"sql",
"oracle",
"join",
"intersection",
""
] |
I have read tons of articles regarding getting the last n records in Oracle SQL by using the rownum functionality, but in my case it does not give me the correct rows.
I have 3 columns in my table: 1) message (varchar2), mes\_date (date) and mes\_time (varchar2).
Let's say there are 3 records inside:
```
Hello world | 20-OCT-14 | 23:50
World Hello | 21-OCT-14 | 02:32
Hello Hello | 20-OCT-14 | 23:52
```
I want to get the last 2 records ordered by its date and time (first row the oldest, and second the newest date/time)
i am using this query:
```
SELECT *
FROM (SELECT message
FROM messages
ORDER
BY MES_DATE, MES_TIME DESC
)
WHERE ROWNUM <= 2 ORDER BY ROWNUM DESC;
```
Instead of getting row #3 first and row #2 second, I get row #1 and then row #3.
What should I do to get the older dates/times on top followed by the newest?
|
Maybe that helps:
```
SELECT *
FROM (SELECT message,
mes_date,
mes_time,
ROW_NUMBER() OVER (ORDER BY TO_DATE(TO_CHAR(mes_date, 'YYYY-MM-DD') || mes_time, 'YYYY-MM-DD HH24:MI') DESC) rank
FROM messages
)
WHERE rank <= 2
ORDER
BY rank
```
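A quick SQLite check of the `ROW_NUMBER` approach (via Python's `sqlite3`); the date/time values are rewritten as ISO strings here, so plain string concatenation sorts chronologically, which is an assumption of this sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (message TEXT, mes_date TEXT, mes_time TEXT);
INSERT INTO messages VALUES
  ('Hello world', '2014-10-20', '23:50'),
  ('World Hello', '2014-10-21', '02:32'),
  ('Hello Hello', '2014-10-20', '23:52');
""")
rows = conn.execute("""
    SELECT message FROM (
        SELECT message,
               ROW_NUMBER() OVER (ORDER BY mes_date || ' ' || mes_time DESC) AS rn
        FROM messages
    )
    WHERE rn <= 2
    ORDER BY rn DESC  -- oldest of the two first, newest last
""").fetchall()
print(rows)  # [('Hello Hello',), ('World Hello',)]
```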
|
I am really sorry to disappoint, but in Oracle there's no such thing as "the last two records".
The table structure does not store data "at the end" and does not keep a visible property of insertion time (the only time being held is for the sole purpose of "flashback queries": supplying results as of a point in time, such as the time the query started...).
The last inserted record is not something you can query using the database.
What can you do? You can create a trigger that orders the inserted records using a sequence and select based on it (so `SELECT * from (SELECT * FROM table ORDER BY seq DESC) where rownum < 3`); that will assure order only if the sequence CACHE value is 1.
Notice that if the column that contains the message date does not record many events per second, you can use that column, as the other solution suggested; e.g. if more than 2 events arrive within one second, the query above will give you two arbitrary records from that second, and not the actual last two.
AGAIN: Oracle cannot be queried for the last two rows inserted, since its data structures do not manage insert order, and the ordering you see when running "SELECT \*" is independent of the actual insert order in some specific cases.
|
Oracle SQL last n records
|
[
"",
"sql",
"oracle",
""
] |
My table has rows that list object types and the shelves that some can be found on, like this:
```
1, wrench, shelf1
2, wrench, shelf2
3, hammer, shelf2
4, hammer, shelf3
5, pliers, shelf1
6, nails, shelf3
7, nails, shelf4
```
I am trying to decide how to create a query that will return any objects that can be found on shelf1 but not on shelf2.
* In this example I would like to return 'pliers' since it is on
shelf1 but not on shelf2.
* An object may be on many shelves.
* An object on shelves 3 and 4 should not be returned (nails).
I know everybody loves code on SO, but I won't be at work for a few days and I don't have the failed attempts I have already made. I did discover that Access doesn't have EXCEPT and doesn't support aliases for queries, but I can do a query and run another query against the results. I cannot write temp tables into the existing database with INSERT INTO.
Any advice on how to get started would be much appreciated!
|
Here is a simple query for that problem. You check whether an object on shelf1 is absent from the set of all objects on shelf2:
```
SELECT *
FROM Table
WHERE shelf = 'shelf1' AND object NOT IN
(SELECT object FROM Table WHERE shelf='shelf2')
```
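Since no attempt code was posted, here is a hypothetical, runnable version of that query in SQLite (via Python's `sqlite3`), with made-up table and column names matching the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock (id INTEGER, object TEXT, shelf TEXT);
INSERT INTO stock VALUES
 (1,'wrench','shelf1'),(2,'wrench','shelf2'),(3,'hammer','shelf2'),
 (4,'hammer','shelf3'),(5,'pliers','shelf1'),(6,'nails','shelf3'),(7,'nails','shelf4');
""")
rows = conn.execute("""
    SELECT object FROM stock
    WHERE shelf = 'shelf1'
      AND object NOT IN (SELECT object FROM stock WHERE shelf = 'shelf2')
""").fetchall()
print(rows)  # [('pliers',)]
```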
|
You can make a subquery for both Shelf 1 and Shelf 2. Then LEFT JOIN the Shelf1 subquery to the Shelf2 subquery on the Product name. Then only take records where Shelf2 is null.
```
SELECT
Shelf1.*
FROM
(SELECT [ID], [Product], [Location] FROM <table> WHERE [location]="Shelf1") as Shelf1
LEFT OUTER JOIN
(SELECT [Product] FROM <table> WHERE [location]="Shelf2") as Shelf2 ON
Shelf1.Product = Shelf2.Product
WHERE
Shelf2.Product IS NULL
```
|
Select a value in multiple rows based on values in a specific field
|
[
"",
"sql",
"ms-access",
"rows",
""
] |
I have a SQL DB table that has some data duplication. I need to find records based on the fact that none of the "duplicate" records has a Null value in one of the fields, i.e.
```
ID Name StartDate
1 Fred 1/1/1945
2 Jack 2/2/1985
3 Mary 3/3/1999
4 Fred null
5 Jack 5/5/1977
6 Jack 4/4/1985
7 Fred 10/10/2001
```
In the example above I need to find Jack and Mary but not Fred. I assume some sort of Self Join or Union but have run into a mental block on what exactly would give me my desired results.
|
First create the query to find duplicates, then add a condition that the name must not have any record with a NULL `StartDate`; note the `WHERE` clause has to come before `GROUP BY`/`HAVING`:
```
SELECT Name
FROM myTable
WHERE Name NOT IN (SELECT Name FROM myTable WHERE StartDate IS NULL)
GROUP BY Name
HAVING COUNT(*) > 1
```
|
Ok, went back and re-read the question. It sounds like you need a sub-select instead of a join, although a join would work too.
```
WHERE Name NOT IN ( SELECT DISTINCT Name FROM table WHERE StartDate IS NULL )
```
should give the desired results, eliminating ALL Fred records based on the fact that Fred qualified with a single NULL date.
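A runnable illustration of the sub-select filter in SQLite (via Python's `sqlite3`), using the sample rows from the question; it keeps Jack and Mary and eliminates all Fred records:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (id INTEGER, name TEXT, start_date TEXT);
INSERT INTO people VALUES
 (1,'Fred','1945-01-01'),(2,'Jack','1985-02-02'),(3,'Mary','1999-03-03'),
 (4,'Fred',NULL),(5,'Jack','1977-05-05'),(6,'Jack','1985-04-04'),(7,'Fred','2001-10-10');
""")
names = conn.execute("""
    SELECT DISTINCT name FROM people
    WHERE name NOT IN (SELECT name FROM people WHERE start_date IS NULL)
    ORDER BY name
""").fetchall()
print(names)  # [('Jack',), ('Mary',)]
```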
|
Finding specific duplicates when one field is different
|
[
"",
"sql",
""
] |
I've been looking at this for hours and can't quite seem to get it right.
I have a table with 3 columns.
```
AsOfDate database_id mirroring_state_desc
2014-10-14 09:46:25.083 7 SUSPENDED
2014-10-14 09:47:09.340 7 SUSPENDED
2014-10-14 09:47:10.767 7 SUSPENDED
2014-10-14 09:47:11.987 7 SUSPENDED
2014-10-14 12:34:23.917 7 SUSPENDED
2014-10-14 12:40:11.337 7 SUSPENDED
```
Basically I'm putting together a stored procedure, and in this sp an email will be sent if certain conditions are met. The condition in this instance is: 3 or more of the above rows for a distinct database\_id that are less than an hour old. If this criterion is not met, nothing should be returned.
This is what I've tried.
```
IF EXISTS (select distinct top (@MirroringStatusViolationCountForAlert) AsOfDate
from dbo.U_MirroringStatus
WHERE [AsOfDate] >= dateadd(minute, -60, getdate()))
IF EXISTS (select distinct top 3 AsofDate
from dbo.U_MirroringStatus
WHERE [AsOfDate] >= dateadd(minute, -60, getdate()) IN
(select AsofDate from dbo.U_MirroringStatus
GROUP BY AsOfDate HAVING COUNT(*)>=3))
```
Any help would be really appreciated as the longer I look at this the more confused I am getting.
Thanks in advance.
|
Another option ..
```
declare @t table (AsOfDate datetime, database_id int, mirroring_state_desc varchar(20))
insert into @t(AsOfDate, database_id, mirroring_state_desc)
select
'2014-10-14 08:46:25.083', 7, 'SUSPENDED'
union all select
'2014-10-14 10:47:09.340', 7, 'SUSPENDED'
union all select
'2014-10-14 10:47:10.767', 7, 'SUSPENDED'
union all select
'2014-10-14 10:47:11.987', 7, 'SUSPENDED'
union all select
'2014-10-13 12:34:23.917', 7, 'SUSPENDED'
union all select
'2014-10-13 12:40:11.337', 7, 'SUSPENDED'
IF EXISTS (SELECT 1
FROM @t
WHERE AsOfDate >= dateadd(minute, -60, getdate())
GROUP BY database_id
HAVING COUNT(*) > 2
)
print 'Has 3 or more'
```
|
```
declare @count int = 3
if exists (
    Select '*'
    from dbo.U_MirroringStatus a
    where (Select count(AsOfDate)
           from dbo.U_MirroringStatus b
           where b.AsOfDate >= dateadd(minute, -60, getdate())
             and a.database_id = b.database_id
          ) >= @count
)
```
|
Return only if a number of results with certain conditions
|
[
"",
"sql",
"t-sql",
"nested",
"subquery",
""
] |
MS SQL Server 2008
I have two `SELECT` statements
```
SELECT COUNT(*) as Number_of_SEP11_clients
FROM
...
WHERE
...
and dbo.SEM_AGENT.AGENT_VERSION like '%11.%'
SELECT COUNT(*) as Number_of_SEP12_clients
FROM
...
WHERE
...
and dbo.SEM_AGENT.AGENT_VERSION like '%12.%'
```
Output is
```
Number_of_SEP11_clients
109111
```
and
```
Number_of_SEP12_clients
44812
```
How to make the output
```
Number_of_SEP11_clients Number_of_SEP12_clients
109111 44812
```
|
Although you can do a `cross join` to get what you want, I am going to guess that the `FROM` and rest of the `WHERE` clauses are the same. In that case, just do conditional aggregation:
```
SELECT SUM(CASE WHEN dbo.SEM_AGENT.AGENT_VERSION like '%11.%' THEN 1 ELSE 0 END) as Number_of_SEP11_clients,
SUM(CASE WHEN dbo.SEM_AGENT.AGENT_VERSION like '%12.%' THEN 1 ELSE 0 END) as Number_of_SEP12_clients
FROM
...
WHERE
...
```
If you like, you can add:
```
WHERE . . . AND
(dbo.SEM_AGENT.AGENT_VERSION like '%11.%' or dbo.SEM_AGENT.AGENT_VERSION like '%12.%')
```
|
This could depend on your table structure, but you can do this with a couple of subqueries:
```
SELECT (SELECT COUNT(*)
FROM
...
WHERE
...
and dbo.SEM_AGENT.AGENT_VERSION like '%11.%') AS Number_of_SEP11,
(SELECT COUNT(*)
FROM
...
WHERE
...
and dbo.SEM_AGENT.AGENT_VERSION like '%12.%') AS Number_of_SEP12;
```
|
Combine two columns from two statements into single result-set
|
[
"",
"sql",
"sql-server-2008",
""
] |
A SQL and logic problem. In the `where` clause, is
```
not (p and q)
```
equal to
```
(not p) or (not q)
```
|
Yes. [De Morgan's laws](https://en.wikipedia.org/wiki/De_Morgan%27s_laws) are language-independent.
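Since the identity is language-independent, it can be verified exhaustively over the whole truth table in a few lines of Python:

```python
# De Morgan's laws checked over every (p, q) combination.
for p in (True, False):
    for q in (True, False):
        assert (not (p and q)) == ((not p) or (not q))
        assert (not (p or q)) == ((not p) and (not q))
demorgan_holds = True
```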
|
Refer the [working fiddle](http://sqlfiddle.com/#!2/3c040/5):
**Query 1: not (p and q)**
```
select * from table1
where
!(p = 1 and q=1);
```
**Query 2 : (not p) or (not q)**
```
select * from table1
where p!=1 or q!=1;
```
There is no difference in the output, which confirms the boolean identity `!(p and q) = (!p) or (!q)`.
|
About sql and logic. In the sql where clause, is "not (p and q)" equal to "(not p) or (not q)"
|
[
"",
"sql",
"logic",
""
] |
I am having trouble with an Oracle SQL operation.
So first of all, I have two tables,
```
TEST_TABLE_A
Insert into TEST_TABLE_A (NAME, VAL1, VAL2, STATUS) Values ('HEAD1', 100, 200, 'ACTIVE');
Insert into TEST_TABLE_A (NAME, VAL1, VAL2, STATUS) Values ('HEAD2', 300, 400, 'ACTIVE');
Insert into TEST_TABLE_A (NAME, VAL1, VAL2, STATUS) Values ('HEAD3', 500, 600, 'ACTIVE');
Insert into TEST_TABLE_A (NAME, VAL1, VAL2, STATUS) Values ('HEAD4', 700, 800, 'ACTIVE');
TEST_TABLE_B
Insert into TEST_TABLE_B (NAME, VAL1, VAL2) Values ('HEAD1', 1, 2);
Insert into TEST_TABLE_B (NAME, VAL1, VAL2) Values ('HEAD4', 700, 800);
Insert into TEST_TABLE_B (NAME, VAL1, VAL2) Values ('HEAD5', 900, 1000);
```
My assignment is listed as follows,
1. Sync data from table b to table a
2. when the same name found with the same values then ignore
3. when the same name found with different values then insert the data into table A with the same name and the value grabbed from B and update the old row with 'INACTIVE' status
4. when no data found in table A based on table B then insert the data into table A
my query is like this,
```
MERGE INTO TEST_TABLE_A TGT
USING TEST_TABLE_B SRC
ON (TGT.NAME = SRC.NAME)
WHEN MATCHED THEN
UPDATE SET TGT.VAL1 = SRC.VAL1,
TGT.VAL2 = SRC.VAL2
WHEN NOT MATCHED THEN
INSERT (TGT.NAME, TGT.VAL1, TGT.VAL2, TGT.STATUS)
VALUES (SRC.NAME, SRC.VAL1, SRC.VAL2, 'ACTIVE');
```
So I am facing the problem that I can't make #3 work. I have tried many combinations of SQL, and it just doesn't work.
And how do I make #2 work, ignoring rows with the same name and the same values in both tables?
I am hoping that the result would be like this
```
TEST_TABLE_A
NAME VAL1 VAL2 STATUS
-----------------------------
HEAD1 100 200 INACTIVE
HEAD1 1 2 ACTIVE
HEAD2 300 400 INACTIVE
HEAD3 500 600 INACTIVE
HEAD4 700 800 ACTIVE
HEAD5 900 1000 ACTIVE
```
I really appreciate you guys help.
regards
|
> I am hoping that the result would be like this
```
TEST_TABLE_A
NAME VAL1 VAL2 STATUS
-----------------------------
HEAD1 100 200 INACTIVE
HEAD1 1 2 ACTIVE
HEAD2 300 400 INACTIVE
HEAD3 500 600 INACTIVE
HEAD4 700 800 ACTIVE
HEAD5 900 1000 ACTIVE
```
Ok, let's see how it could be done to satisfy each of the rules of your SQL.
1.The first thing is just a simple `UNION` of both tables.
2.Then, need to get rid of rows which are duplicate on `NAME`, `VAL1` and `VAL2` columns. So, use `ROW_NUMBER` analytic.
3.Finally, select the rows with `RANK` as 1.
```
SQL> SELECT name,
2 val1,
3 val2,
4 status
5 FROM
6 (SELECT a.*,
7 row_number() over(partition BY a.val1, a.val2 order by a.name, a.val1, a.val2) rn
8 FROM
9 ( SELECT name, val1, val2,'INACTIVE' status FROM TEST_TABLE_A
10 UNION
11 SELECT b.*, 'ACTIVE' status FROM TEST_TABLE_B b ORDER BY 1
12 ) A
13 )
14 WHERE rn = 1
15 /
NAME VAL1 VAL2 STATUS
-------------------- ---------- ---------- --------
HEAD1 1 2 ACTIVE
HEAD1 100 200 INACTIVE
HEAD2 300 400 INACTIVE
HEAD3 500 600 INACTIVE
HEAD4 700 800 ACTIVE
HEAD5 900 1000 ACTIVE
6 rows selected.
SQL>
```
So, that gives exactly the output you want.
**Update:** Adding a test case on OP's request
```
SQL> SELECT * FROM test_table_a;
NAME VAL1 VAL2 STATUS
-------------------- ---------- ---------- --------------------
HEAD1 100 200 ACTIVE
HEAD2 300 400 ACTIVE
HEAD3 500 600 ACTIVE
HEAD4 700 800 ACTIVE
SQL>
SQL> CREATE TABLE test_table_a_new AS
2 SELECT name,
3 val1,
4 val2,
5 status
6 FROM
7 (SELECT a.*,
8 row_number() over(partition BY a.val1, a.val2 order by a.name, a.val1, a.val2) rn
9 FROM
10 ( SELECT name, val1, val2,'INACTIVE' status FROM TEST_TABLE_A
11 UNION
12 SELECT b.*, 'ACTIVE' status FROM TEST_TABLE_B b ORDER BY 1
13 ) A
14 )
15 WHERE rn = 1
16 /
Table created.
SQL>
SQL> DROP TABLE test_table_a PURGE
2 /
Table dropped.
SQL>
SQL> alter table test_table_a_new rename to test_table_a
2 /
Table altered.
SQL> select * from test_table_a
2 /
NAME VAL1 VAL2 STATUS
-------------------- ---------- ---------- --------
HEAD1 1 2 ACTIVE
HEAD1 100 200 INACTIVE
HEAD2 300 400 INACTIVE
HEAD3 500 600 INACTIVE
HEAD4 700 800 ACTIVE
HEAD5 900 1000 ACTIVE
6 rows selected.
SQL>
```
|
You cannot use only one merge for #3 because you need to update and insert on the same ON condition.
```
update test_table_a a set a.status = 'INACTIVE'
where exists (select 1 from test_table_b b
where b.name = a.name and (b.val1 != a.val1 or b.val2 != a.val2));
merge into test_table_a a using test_table_b b on (b.val1 = a.val1 and b.val2 = a.val2)
when not matched then insert values (b.name, b.val1, b.val2, 'ACTIVE');
```
But I don't understand why in your output HEAD2 and HEAD3 are in 'INACTIVE' status. Maybe you also need to mark as 'INACTIVE' the rows in TEST\_TABLE\_A which don't exist in TEST\_TABLE\_B (in this case you may change the first update by adding this condition: "OR not exists (select 1 from test\_table\_b b where b.name = a.name)")
|
SYNC and UPDATE at the same time between two tables in Oracle SQL
|
[
"",
"sql",
"oracle",
""
] |
I can't get the right SQL, and I'm not sure if it is all that possible:
We have a field with an EventID, an IndividualID and a RoleID.
I need to check if an Individual has attended Events with other Roles. So I need to count every IndividualID somehow and check if there is more than one value for it.
Is there a possibility to do this in SQL? I think I'm missing a special expression to make this work. If I use Count etc. it counts all Individuals, but not each one by its ID.
Thanks in advance!
Example:
An Individual attended to the same Event, once as Type xx and once as Type xx2.
So this would mean:
EventID is twice the same, IndividualID is the same, but the Type and the ID of this Table is different.
Edit2: Got it, sorry guys,
```
SELECT IndividualId, EventId, COUNT(RoleId) AS cnt
FROM Tablet
WHERE EventId IS NOT NULL
GROUP BY IndividualId, EventId
ORDER BY cnt DESC
```
I don't get it at all, I really need to learn more :)
|
If I understand you question correctly, you just want to do:
```
SELECT IndividualId, EventId, COUNT(RoleId) as RoleCount
FROM [YOUR_TABLE]
-- JOIN OTHER TABLES IF REQUIRED
GROUP BY IndividualId, EventId
ORDER BY IndividualId, EventId
```
## [SQL Fiddle Demo](http://sqlfiddle.com/#!6/5af21/6)
**Schema Setup**:
```
CREATE TABLE Your_Table
([IndividualId] int, [RoleId] int, [EventId] int)
;
INSERT INTO Your_Table
([IndividualId], [RoleId], [EventId])
VALUES
(1, 1, 1),
(1, 2, 1),
(1, 3, 1),
(2, 1, 1),
(2, 2, 1),
(2, 1, 2),
(3, 2, 2),
(4, 1, 2),
(5, 1, 1),
(5, 2, 2)
;
```
**Query**:
```
SELECT IndividualId, EventId, COUNT(RoleId) as RoleCount
FROM [YOUR_TABLE]
-- JOIN OTHER TABLES IF REQUIRED
GROUP BY IndividualId, EventId
ORDER BY IndividualId, EventId
```
**[Results](http://sqlfiddle.com/#!6/5af21/6/0)**:
```
| INDIVIDUALID | EVENTID | ROLECOUNT |
|--------------|---------|-----------|
| 1 | 1 | 3 |
| 2 | 1 | 2 |
| 2 | 2 | 1 |
| 3 | 2 | 1 |
| 4 | 2 | 1 |
| 5 | 1 | 1 |
| 5 | 2 | 1 |
```
|
This should do it:

`select count(*) from tblName group by IndividualID;`
|
SQL: Getting all values, where a 1:N Field has same values
|
[
"",
"sql",
"sql-server",
""
] |
I have query resulting me 1 column of strings, result example:
```
NAME:
-----
SOF
OTP
OTP
OTP
SOF
VIL
OTP
SOF
GGG
```
I want to be able to get SOF, OTP, VIL - the first 3 unique values from the top.
I tried using DISTINCT and GROUP BY, but it is not working; the sorting is lost.
The query building this result is :
```
SELECT DISTINCT d.adst
FROM (SELECT a.date adate,
b.date bdate,
a.price + b.price total,
( b.date - a.date ) days,
a.dst adst
FROM flights a
JOIN flights b
ON a.dst = b.dst
ORDER BY total) d
```
I have "flights" table with details, and I need to get the 3 (=n) cheapest destinations.
Thanks
|
Found a way to do it.
I am selecting the DST and the PRICE, grouping by DST with the MIN function on Price, and limiting to 3.
Is there a better way to do it?
```
SELECT d.adst , min(d.total) mttl
FROM (SELECT a.date adate,
b.date bdate,
a.price + b.price total,
( b.date - a.date ) days,
a.dst adst
FROM flights a
JOIN flights b
ON a.dst = b.dst
ORDER BY total) d
group by adst order by mttl;
```
|
This can easily be done using window functions:
```
select *
from (
SELECT a.date as adate,
b.date as bdate,
a.price + b.price as total,
dense_rank() over (order by a.price + b.price) as rnk,
b.date - a.date as days,
a.dst as adst
FROM flights a
JOIN flights b ON a.dst = b.dst
) t
where rnk <= 3
order by rnk;
```
More details on window functions can be found in the manual:
<http://www.postgresql.org/docs/current/static/tutorial-window.html>
|
sql to select first n unique lines on sorted result
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
I want to know how I can use MySQL string functions on my DB.
I have a MySQL db with data like the following
```
+---+-----------------------+
|id | name |
+---+-----------------------+
| 1 | /sun/steave/xyz |
| 2 | /mon/alan/asdsas |
| 3 | /sun/mark/we |
| 4 | /wed/john/rtd |
| 5 | /thu/mich/dfgsd |
+---+------------------- -+
```
where name is of `type varchar(255)`.
I want to select only the **name**, i.e. (/sun/**steave**/xyz).
I tried
```
select substr(name,4) from my_table;
```
(I can't use a length in `substring`, like `(name,4,6)`, because the name length is dynamic)
which returns me
```
steave/xyz
alan/asdsas
mark/we
john/rtd
mich/dfgsd
```
How can I select only the names from my table?
Is that possible through MySQL string functions?
|
You can use a couple [`substring_index`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index) calls to cut your string between the `/`s:
```
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(name, '/', 3), '/', -1)
FROM my_table
```
EDIT:
As requested in the comments, some more details. To quote the documentation on `substring_index`:
> **SUBSTRING\_INDEX(str,delim,count)** Returns the substring from string `str` before `count` occurrences of the delimiter `delim`. If `count` is positive, everything to the left of the final delimiter (counting from the left) is returned. If `count` is negative, everything to the right of the final delimiter (counting from the right) is returned.
Let's take the string `'/sun/steave/xyz'` as an example. The inner `substring_idex` call returns the substring before the 3rd `/`, so for our case, it returns `'/sun/steave'`. The outer `substring_index` returns the substring after the last `'/'`, so given `'/sun/steave'` it will return just `'steave'`.
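The two-step cut described above can be mimicked outside MySQL with a small Python helper that follows the same positive/negative `count` semantics (the helper name and sample string are illustrative):

```python
# Python sketch of MySQL's SUBSTRING_INDEX(str, delim, count):
# positive count keeps everything left of the count-th delimiter,
# negative count keeps everything right of the count-th delimiter from the end.
def substring_index(s, delim, count):
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

name = "/sun/steave/xyz"
# Inner call -> '/sun/steave'; outer call -> 'steave'.
middle = substring_index(substring_index(name, "/", 3), "/", -1)
```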
|
This can be easily done in XML:
```
SELECT
MyXML.id
,MyXML.name
,x.value('/NAME[1]/PART[2]','VARCHAR(255)') AS 'PART2'
,x.value('/NAME[1]/PART[3]','VARCHAR(255)') AS 'PART3'
,x.value('/NAME[1]/PART[4]','VARCHAR(255)') AS 'PART4'
FROM (
SELECT Id, Name
,CONVERT(XML,'<NAME><PART>' + REPLACE(Name,'/', '</PART><PART>') + '</PART></NAME>') AS X
FROM my_table
) MyXML
```
Anyway, you should rethink your table structure.
|
How to use MySQL string operation?
|
[
"",
"mysql",
"sql",
"substring",
""
] |
I have a query which gives me the below result
```
FILE EVENT AMOUNT
File1 AP 26.96
File1 AP 26.96
File1 AP 26.96
```
Any idea on how to group result by 2 so that I can have
```
FILE EVENT AMOUNT
File1 AP 26.96
File1 AP 26.96
```
If my original query returns 4 results,
```
FILE EVENT AMOUNT
File1 AP 26.96
File1 AP 26.96
File1 AP 26.96
File1 AP 26.96
```
I would like to have
```
FILE EVENT AMOUNT
File1 AP 26.96
File1 AP 26.96
```
Any SQL keyword that does the above?
Thanks
|
If you want to return 2 rows, then I'd look at using a windowing function like [`row_number()`](http://msdn.microsoft.com/en-us/library/ms186734.aspx). You can partition the data over the 3 columns and then filter it to only return 2 rows:
```
select [file], [event], [amount]
from
(
select [file], [event], [amount],
rn = row_number() over(partition by [file], [event], [amount]
order by [file])
from dbo.yourtable
) d
where rn <= 2;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/de955/2)
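The partition-and-keep-two logic that `row_number()` implements can also be sketched in plain Python, numbering rows within each identical (file, event, amount) group and keeping the first two (data mirrors the question):

```python
from collections import defaultdict

# Emulate ROW_NUMBER() OVER (PARTITION BY file, event, amount):
# count occurrences per group and keep only rows numbered 1 or 2.
rows = [("File1", "AP", 26.96)] * 4
seen = defaultdict(int)
kept = []
for row in rows:
    seen[row] += 1
    if seen[row] <= 2:
        kept.append(row)
```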
|
Try this...
```
;WITH cteFiles
AS ( SELECT [FILE]
,[EVENT]
,[AMOUNT]
,ROW_NUMBER() OVER ( PARTITION BY [FILE], [EVENT], [AMOUNT]
ORDER BY [FILE] ) AS rownum
FROM files
)
SELECT [FILE]
,[EVENT]
,[AMOUNT]
FROM cteFiles
WHERE rownum <= 2;
```
see [fiddle](http://sqlfiddle.com/#!3/9ac419/3)
|
SQL result group by 2
|
[
"",
"sql",
"sql-server",
""
] |
Trying to import data into Azure.
Created a text file in Management Studio 2005.
I have tried both a comma and tab delimited text file.
BCP IN -c -t, -r\n -U -S -P
I get the error {SQL Server Native Client 11.0]Unexpected EOF encountered in BCP data file
Here is the script I used to create the file:
```
SELECT top 10 [Id]
,[RecordId]
,[PracticeId]
,[MonthEndId]
,ISNULL(CAST(InvoiceItemId AS VARCHAR(50)),'') AS InvoiceItemId
,[Date]
,[Number]
,[RecordTypeId]
,[LedgerTypeId]
,[TargetLedgerTypeId]
,ISNULL(CAST(Tax1Id as varchar(50)),'')AS Tax1Id
,[Tax1Exempt]
,[Tax1Total]
,[Tax1Exemption]
,ISNULL(CAST([Tax2Id] AS VARCHAR(50)),'') AS Tax2Id
,[Tax2Exempt]
,[Tax2Total]
,[Tax2Exemption]
,[TotalTaxable]
,[TotalTax]
,[TotalWithTax]
,[Unassigned]
,ISNULL(CAST([ReversingTypeId] AS VARCHAR(50)),'') AS ReversingTypeId
,[IncludeAccrualDoctor]
,12 AS InstanceId
FROM <table>
```
Here is the table it is inserted into
```
CREATE TABLE [WS].[ARFinancialRecord](
[Id] [uniqueidentifier] NOT NULL,
[RecordId] [uniqueidentifier] NOT NULL,
[PracticeId] [uniqueidentifier] NOT NULL,
[MonthEndId] [uniqueidentifier] NOT NULL,
[InvoiceItemId] [uniqueidentifier] NULL,
[Date] [smalldatetime] NOT NULL,
[Number] [varchar](17) NOT NULL,
[RecordTypeId] [tinyint] NOT NULL,
[LedgerTypeId] [tinyint] NOT NULL,
[TargetLedgerTypeId] [tinyint] NOT NULL,
[Tax1Id] [uniqueidentifier] NULL,
[Tax1Exempt] [bit] NOT NULL,
[Tax1Total] [decimal](30, 8) NOT NULL,
[Tax1Exemption] [decimal](30, 8) NOT NULL,
[Tax2Id] [uniqueidentifier] NULL,
[Tax2Exempt] [bit] NOT NULL,
[Tax2Total] [decimal](30, 8) NOT NULL,
[Tax2Exemption] [decimal](30, 8) NOT NULL,
[TotalTaxable] [decimal](30, 8) NOT NULL,
[TotalTax] [decimal](30, 8) NOT NULL,
[TotalWithTax] [decimal](30, 8) NOT NULL,
[Unassigned] [decimal](30, 8) NOT NULL,
[ReversingTypeId] [tinyint] NULL,
[IncludeAccrualDoctor] [bit] NOT NULL,
[InstanceId] [tinyint] NOT NULL,
CONSTRAINT [PK_ARFinancialRecord] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
```
There are actually several hundred thousand actual records and I have done this from a different server, the only difference being the version of management studio.
|
If the file is tab-delimited, then the command-line flag for the column separator should be `-t\t` instead of `-t,`.
|
Just an FYI that I encountered this same exact error and it turned out that my destination table contained one extra column than the DAT file!
|
Unexpected EOF encountered in BCP
|
[
"",
"sql",
"azure",
"bcp",
""
] |
I have 2 tables
Maintable
and
Secondtable
In the Second Table is a Column Maintable\_ID and Price.
I have bulided up a view in Workbench like this:
```
CREATE VIEW test AS
SELECT Maintable.ID as ID,
Maintable.Name as Name,
Maintable.Date as Date,
Secondtable.Price as Price,
From maintable
LEFT JOIN secondtable
ON Maintable.ID=Secondtable.Maintable_ID
```
I have thought about to make something like:
`SUM(Price) as GPrice FROM Secondtable WHERE Maintable.ID=Secondtable.Maintable_ID`
How Can I find out the total price for all items in the Secondtable where Secondtable.Maintable\_ID= Maintable.ID
Thank you for your help.
|
Try this..
```
CREATE VIEW test AS
SELECT Maintable.ID as ID,
Maintable.Name as Name,
Maintable.Date as Date,
sum(Secondtable.Price) as GPrice
From maintable
LEFT JOIN secondtable
ON Maintable.ID=Secondtable.Maintable_ID
Group by
Maintable.ID,
Maintable.Name,
Maintable.Date
```
|
You're on the right track, you're just missing a `group by` clause:
```
CREATE VIEW test AS
SELECT Maintable.ID as ID,
Maintable.Name as Name,
Maintable.Date as Date,
s.Sum_Price as Price
FROM maintable m
LEFT JOIN (SELECT Maintable_ID, SUM(Price) AS Sum_Price
FROM secondtable
GROUP BY Maintable_ID) s
ON m.ID=s.Maintable_ID
```
|
How to SUM from another table in SQL in one view?
|
[
"",
"mysql",
"sql",
"sql-server",
"mysql-workbench",
""
] |
I Know that `OUTPUT Clause` can be used in `INSERT, UPDATE, DELETE, or MERGE statement`. The results of an `OUTPUT` clause in a `INSERT, UPDATE, DELETE, or MERGE statements` can be stored into a `target table`.
But, when i run this query
```
select * from <Tablename> output
```
I didn't get any error. The query executed just like `select * from tablename`, without any error and with the same number of rows.
So what is the exact use of the OUTPUT clause in a SELECT statement? If any, how can it be used?
I searched but couldn't find an answer.
|
The query in your question is in the same category of errors as the following (that I have also seen on this site)
```
SELECT *
FROM T1 NOLOCK
SELECT *
FROM T1
LOOP JOIN T2
ON X = Y
```
The first one just ends up aliasing T1 AS NOLOCK. The correct syntax for the hint would be `(NOLOCK)` or ideally `WITH(NOLOCK)`.
The second one aliases T1 AS LOOP. To request a nested loops join the syntax would need to be `INNER LOOP JOIN`
Similarly in your question it just ends up applying the table alias of `OUTPUT` to your table.
None of OUTPUT, LOOP, NOLOCK are actually [reserved keywords](http://msdn.microsoft.com/en-us/library/ms189822.aspx) in TSQL, so it is valid to use them as a table alias without needing to quote them, e.g. in square brackets.
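The same trap reproduces in SQLite, where `output` is not a reserved word either, so it silently becomes a table alias instead of raising a syntax error (table and value here are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INT)")
conn.execute("INSERT INTO t VALUES (42)")
# 'output' is accepted as an alias for t and can even qualify columns:
(val,) = conn.execute("SELECT output.x FROM t output").fetchone()
```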
|
[OUTPUT](http://technet.microsoft.com/en-us/library/ms177564(v=sql.110).aspx) clause return information about the rows affected by a statement. `OUTPUT` Clause is used along with `INSERT`, `UPDATE`, `DELETE`, or `MERGE` statements as you mentioned. The reason it is used is because these statements themselves just return the number of rows effected not the rows effected. Thus the usage of `OUTPUT` with `INSERT`, `UPDATE`, `DELETE`, or `MERGE` statements helps the user by returning actual rows effected.
`SELECT` statement itself returns the rows and `SELECT` doesn't effect any rows. Thus the usage of `OUTPUT` clause with `SELECT` is not required or supported. If you want to store the results of a `SELECT` statement into a target table use [SELECT INTO](http://technet.microsoft.com/en-us/library/ms190750(v=sql.105).aspx) or the standard [INSERT](http://technet.microsoft.com/en-us/library/dd776381(v=sql.105).aspx) along with the `SELECT` statement.
**EDIT**
I guess I misunderstood your question. AS @Martin Smith mentioned its is acting an alias in the SELECT statement you mentioned.
|
OUTPUT Clause in Sql Server (Transact-SQL)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I've seen many examples of rolling averages in oracle but done do quite what I desire.
This is my raw data
```
DATE SCORE AREA
----------------------------
01-JUL-14 60 A
01-AUG-14 45 A
01-SEP-14 45 A
02-SEP-14 50 A
01-OCT-14 30 A
02-OCT-14 45 A
03-OCT-14 50 A
01-JUL-14 60 B
01-AUG-14 45 B
01-SEP-14 45 B
02-SEP-14 50 B
01-OCT-14 30 B
02-OCT-14 45 B
03-OCT-14 50 B
```
This is the desired result for my rolling average
```
MMYY AVG AREA
-------------------------
JUL-14 60 A
AUG-14 52.5 A
SEP-14 50 A
OCT-14 44 A
JUL-14 60 B
AUG-14 52.5 B
SEP-14 50 B
OCT-14 44 B
```
The way I need it to work is that for each MMYY, I need to look back 3 months, and AVG the scores per dept. So for example,
For Area A in OCT, in the last 3 months from Oct, there were 6 studies: (45+45+50+30+45+50)/6 ≈ 44.2
Normally I would write the query like so
```
SELECT
AREA,
TO_CHAR(T.DT,'MMYY') MMYY,
ROUND(AVG(SCORE)
OVER (PARTITION BY AREA ORDER BY TO_CHAR(T.DT,'MMYY') ROWS BETWEEN 2 PRECEDING AND CURRENT ROW),1)
AS AVG
FROM T
```
This will look over the last 3 entries, not the last 3 months.
|
One way to do this is to mix aggregation functions with analytic functions. The key idea for average is to avoid using `avg()` and instead do a `sum()` divided by a `count(*)`.
```
SELECT AREA, TO_CHAR(T.DT, 'MMYY') AS MMYY,
SUM(SCORE) / COUNT(*) as AvgScore,
SUM(SUM(SCORE)) OVER (PARTITION BY AREA ORDER BY MAX(T.DT) ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) / SUM(COUNT(*)) OVER (PARTITION BY AREA ORDER BY MAX(T.DT) ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
FROM t
GROUP BY AREA, TO_CHAR(T.DT, 'MMYY') ;
```
Note the `order by` clause. If your data spans years, then using the MMYY format poses problems. It is better to use a format such as YYYY-MM for months, because the alphabetical ordering is the same as the natural ordering.
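The ordering pitfall is easy to demonstrate: alphabetical order of `MMYY` strings diverges from chronological order as soon as the data crosses a year boundary, while `YYYY-MM` stays consistent:

```python
# Dec 2014 vs Jan 2015 in the two formats.
mmyy = ["1214", "0115"]          # MMYY: Dec 2014, Jan 2015
yyyymm = ["2014-12", "2015-01"]  # YYYY-MM: same two months

mmyy_sorted = sorted(mmyy)       # '0115' sorts first -> Jan 2015 before Dec 2014 (wrong)
yyyymm_sorted = sorted(yyyymm)   # alphabetical == chronological (right)
```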
|
You can specify also ranges, not only rows.
```
SELECT
AREA,
TO_CHAR(T.DT,'MMYY') MMYY,
ROUND(AVG(SCORE)
OVER (PARTITION BY AREA
ORDER BY DT RANGE BETWEEN INTERVAL '3' MONTH PRECEDING AND CURRENT ROW))
AS AVG
FROM T
```
Since `CURRENT ROW` is the default, just `ORDER BY DT RANGE INTERVAL '3' MONTH PRECEDING` should work as well. Perhaps you have to do some fine-tuning, I did not test the behaviour regarding the 28/29/30/31 days per month issue.
Check the Oracle [Windowing Clause](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions004.htm#i97640) for further details.
|
Oracle: Need to calculate rolling average for past 3 months where we have more than one submission per month
|
[
"",
"sql",
"oracle",
""
] |
I have this SQL query :
```
SELECT
conversations_messages.conversation_id,
MAX(conversations_messages.message_date) AS 'conversation_last_reply',
MAX(conversations_messages.message_date) > conversations_users.last_view AS 'conversation_unread'
FROM
conversations_messages
LEFT JOIN conversations ON conversations_messages.message_id=conversations.id
INNER JOIN conversations_users ON conversations_messages.conversation_id=conversations_users.conversation_id
WHERE
conversations_users.user_id = $user_id AND
conversations_users.deleted=0
GROUP BY
conversations_messages.message_id
ORDER BY
'conversation_last_reply' DESC
```
And the query in running well as I want, but only the last line, of the ORDER BY is not working, its not sorting as I requested.
**And the only thing that is not working - is the last line - ORDER BY...**
I tried to change it to ASC and to DESC again but its not responding...
\* by the way - the field that I'm trying to sort by - is an integer.
Someone knows what is the problem?
Thanks.
|
Try
```
ORDER BY conversation_last_reply
```
instead of
```
ORDER BY 'conversation_last_reply'
```
Your current version sorts by a constant string, so it does not sort at all.
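The effect is easy to reproduce in SQLite (single-quoted strings are constants there too; table and values are illustrative): ordering by the quoted name leaves the rows effectively unsorted, while ordering by the bare column name sorts them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msgs (last_reply INT)")
conn.executemany("INSERT INTO msgs VALUES (?)", [(3,), (1,), (2,)])

# ORDER BY a string literal = ORDER BY a constant: no guaranteed order.
by_literal = [r[0] for r in conn.execute(
    "SELECT last_reply FROM msgs ORDER BY 'last_reply' DESC")]
# ORDER BY the column actually sorts.
by_column = [r[0] for r in conn.execute(
    "SELECT last_reply FROM msgs ORDER BY last_reply DESC")]
```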
|
you have quotes around your order by
```
ORDER BY
'conversation_last_reply' DESC
----^-----------------------^------ = bad
```
you should change it to
```
ORDER BY
conversation_last_reply DESC
```
or use backticks
```
ORDER BY
`conversation_last_reply` DESC
```
when you order by a string there is no ordering because the value of the string is always the same for every row :)
|
ORDER BY in SQL is not working
|
[
"",
"mysql",
"sql",
""
] |
this is the table:
table\_test
```
ID------INT
DETAIL--TEXT
VALUE---INT
STATUS--INT DEFAULT 1
```
as here the default value will be 1 for column `STATUS` and default value will be 1 for column `VALUE`.
Here in any case of **insert** or **update**,
```
if column VALUE < 1 , STATUS = 2 else STATUS = 1
```
How can I keep the upper type scenario in MySQL table?
|
You can do this with a couple of triggers:
```
delimiter //
CREATE TRIGGER table_test_update_tr BEFORE UPDATE ON table_test
FOR EACH ROW
BEGIN
IF NEW.value < 1 THEN
SET NEW.status = 2;
ELSE
SET NEW.status = 1;
END IF;
END;//
delimiter ;
delimiter //
CREATE TRIGGER table_test_insert_tr BEFORE INSERT ON table_test
FOR EACH ROW
BEGIN
IF NEW.value < 1 THEN
SET NEW.status = 2;
ELSE
SET NEW.status = 1;
END IF;
END;//
delimiter ;
```
EDIT:
Having said that, if `status` should always be computed according to `value`, perhaps it shouldn't be a column in the table - instead, you can create a view to display it.
```
CREATE TABLE table_test (
id INT,
detail TEXT,
value INT DEFAULT 1
);
CREATE VIEW view_test AS
SELECT id,
detail,
value,
CASE WHEN value < 1 THEN 2 ELSE 1 END AS status
FROM table_test;
```
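The view approach carries over directly to SQLite, which makes it easy to check the question's rule (status 2 when value < 1, else 1) end to end:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_test (id INT, detail TEXT, value INT DEFAULT 1)")
conn.execute("""CREATE VIEW view_test AS
                SELECT id, detail, value,
                       CASE WHEN value < 1 THEN 2 ELSE 1 END AS status
                FROM table_test""")
conn.executemany("INSERT INTO table_test VALUES (?, ?, ?)",
                 [(1, "a", 0), (2, "b", 5)])
# status is derived on read: value 0 -> 2, value 5 -> 1.
statuses = dict(conn.execute("SELECT id, status FROM view_test"))
```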
|
Use [`case`](http://dev.mysql.com/doc/refman/5.0/en/case.html) for this :
```
UPDATE table_test
SET STATUS =
CASE
WHEN VALUE < 1 THEN 2
ELSE 1
END
```
|
Mysql, one column value changes bases on other column in same table
|
[
"",
"mysql",
"sql",
""
] |
In my table I have some colms like this, (beside another cols)
```
col1 | col2
s1 | 5
s1 | 5
s2 | 3
s2 | 3
s2 | 3
s3 | 5
s3 | 5
s4 | 7
```
I want to have average of **ALL** col2 over Distinct col1.
(5+3+5+7)/4=5
|
Try this:
```
SELECT AVG(T.col2)
FROM
(SELECT DISTINCT col1, col2
FROM yourtable) as T
```
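Running the same subquery pattern against the question's sample data in SQLite shows the DISTINCT step collapsing the duplicates before the average, giving (5+3+5+7)/4 = 5:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (col1 TEXT, col2 INT)")
conn.executemany("INSERT INTO yourtable VALUES (?, ?)",
                 [("s1", 5), ("s1", 5), ("s2", 3), ("s2", 3),
                  ("s2", 3), ("s3", 5), ("s3", 5), ("s4", 7)])
# DISTINCT (col1, col2) first, then AVG over the surviving 4 rows.
(avg,) = conn.execute(
    "SELECT AVG(col2) FROM (SELECT DISTINCT col1, col2 FROM yourtable)"
).fetchone()
```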
|
You are going to need a subquery. Here is one way:
```
select avg(col2)
from (select distinct col1, col2
from my_table
) t
```
|
sql select average from distinct column of table
|
[
"",
"mysql",
"sql",
"sqlcommand",
""
] |
Say I have the following column in a teradata table:
```
Red ball
Purple ball
Orange ball
```
I want my output to be
```
Word Count
Red 1
Ball 3
Purple 1
Orange 1
```
Thanks.
|
In TD14 there's a STRTOK\_SPLIT\_TO\_TABLE function:
```
SELECT token, COUNT(*)
FROM TABLE (STRTOK_SPLIT_TO_TABLE(1 -- this is just a dummy, usually the PK column when you need to join
,table.stringcolumn
,' ') -- simply add other separating characters
RETURNS (outkey INTEGER,
tokennum INTEGER,
token VARCHAR(100) CHARACTER SET UNICODE
)
) AS d
GROUP BY 1
```
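Outside Teradata, the tokenize-then-count idea is a few lines with `collections.Counter` over the whitespace-split tokens of every row (sample rows taken from the question):

```python
from collections import Counter

rows = ["Red ball", "Purple ball", "Orange ball"]
# Split each row into words, then count every word across all rows.
counts = Counter(word for row in rows for word in row.split())
```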
|
Here's how I would handle something like this:
```
WITH RECURSIVE CTE (POS, NEW_STRING, REAL_STRING) AS
(
SELECT
0, CAST('' AS VARCHAR(100)),TRIM(word)
FROM wordcount
UNION ALL
SELECT
CASE WHEN POSITION(' ' IN REAL_STRING) > 0
THEN POSITION(' ' IN REAL_STRING)
ELSE CHARACTER_LENGTH(REAL_STRING)
END DPOS,
TRIM(BOTH ' ' FROM SUBSTR(REAL_STRING, 0, DPOS+1)),
TRIM(SUBSTR(REAL_STRING, DPOS+1))
FROM CTE
WHERE DPOS > 0
)
SELECT TRIM(NEW_STRING) as word,
count(*)
FROM CTE
WHERE pos > 0
GROUP BY word;
```
Which will return:
```
word Count(*)
orange 1
purple 1
red 1
ball 3
```
There may be an easier way with regex in 14, but I haven't messed with it yet.
EDIT: Removed some unneeded columns from the query.
|
Teradata - word frequency in a column
|
[
"",
"sql",
"teradata",
"word-frequency",
""
] |
I have a column `categories` in my `company` table. In that `categories` there can be so many categories separated by `,`. Something like `1,2,3,4,5` and I know one of that category `id`.
Let's say `1` for now.
So how I can query `company` table?
|
You have to deal with four cases: the `categories` in question is first in a list, internal to a list, last in a list, and the only categories: `SELECT * FROM company WHERE categories LIKE '1,%' OR categories LIKE '%,1,%' OR categories LIKE '%,1' OR categories='1'`.
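All four LIKE patterns together are just implementing "is '1' an element of the comma-separated list". Outside SQL the same membership test is a single split, which can make the intent clearer (the row data below is hypothetical):

```python
# Membership test equivalent to the four LIKE cases: first, internal,
# last, and only element of the comma-separated list.
def has_category(categories, wanted):
    return wanted in categories.split(",")

rows = {"acme": "1,2,3,4,5", "abc": "2,3,4", "solo": "1", "xyz": "3,5"}
matches = sorted(name for name, cats in rows.items()
                 if has_category(cats, "1"))
```

Note that splitting avoids the false match a naive `LIKE '%1%'` would give on category `"11"`.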
|
PostgreSQL has [arrays](http://www.postgresql.org/docs/9.4/static/arrays.html) and [array functions](http://www.postgresql.org/docs/9.4/static/functions-array.html) which allow you to solve this problem neatly.
Assume the following schema and sample data:
```
CREATE TABLE company
("name" varchar(13), "categories" varchar(9));
INSERT INTO company
("name", "categories")
VALUES
('acme', '1,2,3,4,5'),
('abc', '2,3,4'),
('xyz', '3,5'),
('stackoverflow', '4');
```
Then you can use the ANY operator to find an element in an array like so:
```
SELECT
name
FROM (
SELECT NAME, string_to_array(categories, ',') AS category_array FROM company
) n
WHERE
'2' = ANY (category_array);
```
Which should return `acme` and `abc`, according to [this SQLFiddle](http://sqlfiddle.com/#!15/b47874/1/0).
|
SQL query against comma-separated column
|
[
"",
"sql",
""
] |
I have a bunch of decimal values in a column of a query result that are all in this format:
```
_._ _ _ _ _ _
```
(1 integer before the decimal point, 6 integers after the decimal point)
It is possible for a value in the column to be NULL.
Here are some examples of these values:
```
4.010000
3.800000
1.260000
0.650000
0.010000
0.000000
NULL
```
When I change the select statement in my query to cast the values in this column to decimal(6,4), I get this error:
```
Arithmetic overflow error converting numeric to data type numeric.
```
Why am I getting this error?
Thank you.
|
The posted values in the question (including the `NULL`) *do* convert to `DECIMAL(6, 4)`. That error is coming from a value that is >= 100.
For example, the following all succeed:
```
SELECT CONVERT(DECIMAL(6, 4), 4.010000)
SELECT CONVERT(DECIMAL(6, 4), 3.800000)
SELECT CONVERT(DECIMAL(6, 4), 1.260000)
SELECT CONVERT(DECIMAL(6, 4), 0.650000)
SELECT CONVERT(DECIMAL(6, 4), 0.010000)
SELECT CONVERT(DECIMAL(6, 4), 0.000000)
SELECT CONVERT(DECIMAL(6, 4), NULL)
```
Now try:
```
SELECT CONVERT(DECIMAL(6, 4), 100)
```
And you will get:
```
Msg 8115, Level 16, State 8, Line 1
Arithmetic overflow error converting int to data type numeric.
```
`DECIMAL(6, 4)` means: 6 total digits, 4 of which are to the right of the decimal.
Hence: **XX.YYYY**
Max Value: **99.9999**
So either try:
* DECIMAL(7, 4) to get another digit to the left of the decimal while still keeping 4 to the right of it
* DECIMAL(6, 3) to maintain 6 total digits but losing one place to the right of the decimal in order to get an extra one to the left of it.
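The fit rule can be checked mechanically: round to `s` decimal places, then the absolute value must be below `10 ** (p - s)`. A small Python sketch with the stdlib `decimal` module (the helper name is made up for illustration):

```python
from decimal import Decimal, ROUND_HALF_UP

# Does `value` fit in SQL DECIMAL(precision, scale)?
def fits(value, precision, scale):
    q = Decimal(str(value)).quantize(Decimal(1).scaleb(-scale),
                                     rounding=ROUND_HALF_UP)
    return abs(q) < Decimal(10) ** (precision - scale)

ok_6_4 = fits(4.01, 6, 4)   # XX.YYYY -> max 99.9999, so 4.01 fits
bad_6_4 = fits(100, 6, 4)   # 100 overflows DECIMAL(6,4), as in the error above
ok_7_4 = fits(100, 7, 4)    # XXX.YYYY accommodates it
```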
|
You have a value greater than `99.9999` which covers 6 digits total including decimal places
Get the maximum value from the table and check the total decimal places and then try to CONVERT it to fit that maximum value.
```
SELECT MAX(data) FROM Table1
```
|
Why won't my decimal values convert to 6,4?
|
[
"",
"sql",
"t-sql",
""
] |
I've come across a small issue that probably pretty common, but that I've no idea how to search for. For example, say we have a database with the following tables:
Table of Exams - ExamID, Name
Table of Exam Questions - ExamID, QuestionID, Name
Should I make QuestionIDs be unique? I could make them unique for every ExamID, or I could just make QuestionIDs never repeat. Are there any advantages/disadvantages to doing either? Also, what should the primary keys be in both scenarios?
|
Kind of depends.
There are a *lot* of possibilities with no real "right" and "wrong".
My thoughts would be to probably separate it out into another table, so that the question could be reused across exams
```
Exam
----
ExamId int primary key,
Name varchar(500)
Questions
----
QuestionId int primary key,
Text varchar(500)
ExamQuestions
----
Id int primary key, -- this is optional, i just like "simple" primary keys rather than composite.
ExamId int FK, -- potentially create a unique constraint on examId/questionId
QuestionId int FK, -- potentially create a unique constraint on examId/questionId
questionOrder int -- this would allow a "ordering" of exam questions on a per exam basis.
```
|
The important question isn't whether or not question ID should be unique. The important question is, "What are the semantics of the relationship between Exam and Question?" The answer to that tells us what the primary key of the question table should be.
Your Question table has a foreign key relationship with the Exam table that identifies the [one] exam to which a question belongs. It is the *nature* of that relationship that matters.
* A **non-identifying relationship** is one in which the dependent entity (Question, in this case) can exist independently of the related "parent" entity (Exam, in this case). One indication of this is whether a Question can be moved from one Exam to another, or if a Question need not belong to an Exam at all. If so, the primary key of the Question table should be simply the question id itself, which should therefore be unique and non-nullable.
Whether the Exam ID is nullable or not is dependent on the cardinality of the relationship
from Question to Exam: is it one-to-one (non-nullable) or zero-to-one (nullable).
* An **identifying relationship** is one in which the dependent entity (Question) exists only within the context of the related "parent" entity (Exam). If so, the primary key of the Question table should be compound, incorporating the Exam ID (foreign key to the Exam table) as well as the Question ID, with both Exam ID and Question ID non-nullable.
This constrains the Question ID to be unique within the scope of an Exam ID: whether it is
unique across the entire table or not doesn't really matter much at this point, since you need
the Exam ID *and* the Question ID to identify a particular question.
|
Database Design Correct Practice
|
[
"",
"sql",
"database",
""
] |
I am new to oracle and I am using toad.
I want to delete all those records where one particular column (U\_MSG) contains `", CALL HELPDESK"`
The column U\_MSG is having 'bla bla foo abcd xyz lmn ,CALL HELPDESK '
I Tried like this :
```
Delete * from mytable where U_MSG = '% ', CALL HELPDESK'
```
But I am not getting the results though. Any ideas?
Also 2) How do I delete that comma? I want to put a space in its place.
I tried this one
```
UPDATE Mytable t
SET column = REPLACE(t.U_MSG, ''%, CALL HELPDESK'', '% CALL HELPDESK')
```
|
You should not use \* in the delete statement. It will give you an invalid table name error. And as D Stanley said, the like operator is what you want in order to use % as a catchall.
```
delete from mytable
where u_msg like '%, CALL HELPDESK';
```
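If you want to sanity-check the pattern without an Oracle instance, here is a small sketch using Python's built-in sqlite3 (table and column names copied from the question; note sqlite's LIKE is case-insensitive for ASCII, unlike Oracle's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (u_msg TEXT)")
con.executemany("INSERT INTO mytable VALUES (?)",
                [("bla bla foo ,CALL HELPDESK",), ("keep me",)])

# a leading % makes LIKE match any row that ends with the phrase
con.execute("DELETE FROM mytable WHERE u_msg LIKE '%,CALL HELPDESK'")
rows = [r[0] for r in con.execute("SELECT u_msg FROM mytable")]
print(rows)  # ['keep me']
```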
|
You can use the `LIKE` operator:
```
Delete * from mytable where U_MSG LIKE '%, CALL HELPDESK'
```
|
Oracle Query to delete all those records for which one column contains a specific string
|
[
"",
"sql",
"oracle",
""
] |
Example:
```
DECLARE @String VARCHAR(100)
SET @String = 'TEST STRING'
-- add to string
SET @String = 'TEST STRING 2'
SELECT @String
--return
'TEST STRING''TEST STRING 2'
```
How could i do this?
|
You can append to a string like this.
```
DECLARE @String VARCHAR(100)
SET @String = 'TEST STRING '
-- add to string
--tsql 2005
SET @String = @String + 'TEST STRING 2'
--tsql 2008+
--SET @String += 'TEST STRING 2'
SELECT @String
```
|
Try this:
```
SET @String = @String + 'TEST STRING 2'
```
In this way you add new value to the old using a concatenation
|
write new values to variable but keeping the old value
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I know basic SQL, but not complex.
I have two tables [Table 1 & Table 2] which both have username and memberid columns. There are records in Table 1 that don't exist in Table 2.
For example, if I needed to pull all the records from Table 1 which don't have a matching record in Table 2, using the username as a filter, I would write something like this:
```
select * from Table1 a
where not exists (select * from Table2 b
where b.MemberID = a.MemberID) and UserName like 'b%'
```
That was easy. But how do I update all these results with a different username?
I'm trying to find all the records in Table1 for the same username that don't have a matching record in Table 2, and then update those records with my username ('r%').
I know some will ask, what have you tried... I have tried to use the 2 queries as a single one, but I am getting lots of errors... for example:
```
update Table1
set UserName = (select * from Table1 a
where not exists (select * from Table2 b
where b.MemberID = a.MemberID) and UserName = 'r@gmail.com')
```
Any help is appreciated.
|
Try this
```
UPDATE TAB1
SET USERNAME = (SELECT TOP 1 USERNAME
FROM TABLE1 A
WHERE USERNAME LIKE 'r%')
FROM TABLE1 TAB1
WHERE NOT EXISTS (SELECT 1
FROM TABLE2 B
WHERE B.MEMBERID = TAB1.MEMBERID)
AND TAB1.UserName LIKE 'b%'
```
|
This should update the rows in Table1 that match your first query.
```
update Table1
set UserName = 'r@gmail.com'
from (select * from Table1 a
where not exists (select * from Table2 b
where b.MemberID = a.MemberID) and UserName like 'b%')
```
|
update table using select from one table records that don't exist in another
|
[
"",
"sql",
"sql-server-2008",
"subquery",
""
] |
Hello, I have to migrate data from one table to another and I want to avoid using a cursor.
Using a cursor this would be very easy, since I'd have to do something like this:
```
DECLARE db_cursor CURSOR FOR
select Id, dataToMigrate
from OriginTable
where bar <> 'foo'
OPEN db_cursor
FETCH NEXT FROM db_cursor into @Id, @DataToMigrate
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE DestinationTable
SET Value = @DataToMigrate
Where Id = @Id
FETCH NEXT FROM db_cursor into @Id, @DataToMigrate
END
CLOSE db_cursor
DEALLOCATE db_cursor
```
However, this feels wrong. I'm sure there must be an easier and more clever way of doing this without a cursor.
Anyone knows a better way?
|
Yes a cursor is completely the wrong way to do this. Kudos for looking for a better way. You can do this with a simple update statement.
```
update d
set Value = o.DataToMigrate
from DestinationTable d
join OriginTable o on o.SomeColumn = d.SomeColumn
where o.bar <> 'foo'
```
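As a portable sketch of the same set-based idea: SQLite (before 3.33) has no `UPDATE ... FROM`, so the join is expressed as a correlated subquery instead. Table and column names are invented to mirror the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE origin (id INTEGER, data TEXT, bar TEXT)")
con.execute("CREATE TABLE dest (id INTEGER, value TEXT)")
con.executemany("INSERT INTO origin VALUES (?,?,?)",
                [(1, "new1", "x"), (2, "new2", "foo")])
con.executemany("INSERT INTO dest VALUES (?,?)", [(1, "old1"), (2, "old2")])

# rows whose origin record has bar = 'foo' keep their old value
con.execute("""
    UPDATE dest
    SET value = (SELECT o.data FROM origin o
                 WHERE o.id = dest.id AND o.bar <> 'foo')
    WHERE EXISTS (SELECT 1 FROM origin o
                  WHERE o.id = dest.id AND o.bar <> 'foo')""")
result = dict(con.execute("SELECT id, value FROM dest"))
print(result)  # {1: 'new1', 2: 'old2'}
```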
|
A set-based solution is the best choice in your case. You can use an `UPDATE` with a `JOIN` for this. Note that this merely updates existing rows; it doesn't insert any data.
```
UPDATE D
SET D.Value = O.dataToMigrate
FROM DestinationTable D
INNER JOIN OriginTable O
ON D.Id = O.Id
WHERE O.bar <> 'foo'
```
|
SQL Server: Copying data from a table to another without using a cursor
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to figure out how to return possible multiple values in the where clause.
Here is my query
```
SELECT *,
ISNULL(nStatus, '0') Status,
ISNULL(dtUsed, 0) dtUsed,
ISNULL(sStaffID, '') StaffID,
ISNULL((SELECT sName FROM GCS_tbBranch WHERE nSeq=GCS_tbGC.nBranchNo), '') nBranchName,
ISNULL((SELECT dtValidEnd FROM GCS_tbGCGroup WHERE nGroupNo = GCS_tbGC.nGroupSeq), 0) dtValidUntil,
ISNULL((SELECT dtCreated FROM GCS_tbGCGroup WHERE nGroupNo=GCS_tbGC.nGroupSeq), 0) dtCreated,
ISNULL((SELECT nPrice FROM GCS_tbGCGroup WHERE nGroupNo=GCS_tbGC.nGroupSeq), 0) nPrice,
ISNULL(sRemarks, ' ') sRemarks
FROM GCS_tbGC WHERE sGCNo LIKE @sGCNo+'%' AND nGroupSeq IN (SELECT nGroupNo FROM GCS_tbGCGroup WHERE
CASE
WHEN @sClass = 'QNS' OR @sClass = 'IFL' THEN (sClass = 'QNS' OR sClass = 'IFL')
ELSE
@sClass
END
)
```
Wherein, if the variable @sClass is holding either `'QNS' or 'IFL'` it would return records that have `sClass = 'QNS' OR sClass = 'IFL'`, otherwise, just return the records matching the value of the @sClass variable.
EDIT: Whenever I run this query, it would return this exception
```
Incorrect syntax near ' = ' is invalid.
```
|
I don't think a CASE expression is the right tool for this job. Try query below.
```
SELECT *,
ISNULL(nStatus, '0') Status,
ISNULL(dtUsed, 0) dtUsed,
ISNULL(sStaffID, '') StaffID,
ISNULL((SELECT sName FROM GCS_tbBranch WHERE nSeq=GCS_tbGC.nBranchNo), '') nBranchName,
ISNULL((SELECT dtValidEnd FROM GCS_tbGCGroup WHERE nGroupNo = GCS_tbGC.nGroupSeq), 0) dtValidUntil,
ISNULL((SELECT dtCreated FROM GCS_tbGCGroup WHERE nGroupNo=GCS_tbGC.nGroupSeq), 0) dtCreated,
ISNULL((SELECT nPrice FROM GCS_tbGCGroup WHERE nGroupNo=GCS_tbGC.nGroupSeq), 0) nPrice,
ISNULL(sRemarks, ' ') sRemarks
FROM GCS_tbGC
WHERE sGCNo LIKE @sGCNo+'%'
AND nGroupSeq IN (
SELECT nGroupNo
FROM GCS_tbGCGroup
WHERE
@sClass IN('QNS', 'IFL') AND sClass IN('QNS', 'IFL')
UNION ALL
SELECT @sClass
WHERE
@sClass NOT IN('QNS', 'IFL')
);
```
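The same logic can also be written as two OR-ed conditions instead of a UNION. A runnable sqlite3 sketch with an invented group table (`:c` plays the role of @sClass):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE grp (ngroupno INTEGER, sclass TEXT)")
con.executemany("INSERT INTO grp VALUES (?,?)",
                [(1, "QNS"), (2, "IFL"), (3, "OTH")])

q = ("SELECT ngroupno FROM grp WHERE "
     "(:c IN ('QNS','IFL') AND sclass IN ('QNS','IFL')) "
     "OR (:c NOT IN ('QNS','IFL') AND sclass = :c)")

# 'QNS' (or 'IFL') matches both special classes; anything else matches itself
both = [r[0] for r in con.execute(q, {"c": "QNS"})]
other = [r[0] for r in con.execute(q, {"c": "OTH"})]
print(both, other)  # [1, 2] [3]
```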
|
`CASE` returns a scalar value and should be used as one -- you can't combine conditions the way you want.
Try simpler expressions:
```
and (
nGroupSeq = @sClass
or @sClass = 'QNS' and nGroupSeq = 'IFL'
or @sClass = 'IFL' and nGroupSeq = 'QNS'
)
```
or
```
and (
nGroupSeq = @sClass
or @sClass in ('QNS', 'IFL') and nGroupSeq in ('QNS', 'IFL')
)
```
|
Return multiple results in the where clause using case statement
|
[
"",
"sql",
"asp.net",
"sql-server",
""
] |
Suppose I have 500 records in `tbl_students`. I would like to select the records from 50 to 250. I know I can use the following way in MySql.
```
SELECT *
FROM tbl_students
LIMIT 50,100;
```
I would like to know how to do this in SQL Server. After browsing I have found a solution like this.
```
select top 100 *
from tablename
WHERE column_name BETWEEN 50 AND 250
```
The issue I am facing is I do not know the values of any column in the table, so I can't write the query like the above. In MySql we can just put the `limit` for the range without knowing the values inside the column.
Thanks in advance
|
If you're using **SQL Server 2005/08**, then you could use a [CTE](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx)...
```
;WITH cteRange as
(
select *, rownumber=ROW_NUMBER() OVER (Order by field)
from tablename
)
SELECT * FROM cteRange WHERE rownumber BETWEEN 50 AND 250
```
if you have **SQL Server 2012**, then...
```
SELECT
*
FROM tablename
ORDER BY field
OFFSET 50 ROWS
FETCH NEXT 200 ROWS ONLY
```
It would be useful to know the *RDBMS* and also which columns you want to extract and order by, instead of using `SELECT *...`
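For a quick feel of the 2012 syntax, sqlite's `LIMIT ... OFFSET` is the analogue of `OFFSET ... FETCH NEXT`; a sketch with a dummy 500-row table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_students (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO tbl_students VALUES (?)",
                [(i,) for i in range(1, 501)])

# rows 51..250 in id order -- same slice as OFFSET 50 FETCH NEXT 200
rows = [r[0] for r in con.execute(
    "SELECT id FROM tbl_students ORDER BY id LIMIT 200 OFFSET 50")]
print(rows[0], rows[-1], len(rows))  # 51 250 200
```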
|
```
SELECT * FROM (
SELECT *, ROW_NUMBER() OVER (ORDER BY column_name ) as tbl_row FROM tbl_students
) tbl WHERE tbl_row >= 50 and tbl_row <= 250
```
I think this is one way of doing this in SQL Server versions 2005 and above.
|
Select data by range in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I current have this WORKING query:
```
SELECT G.ID,
G.EmployeeName,
Stuff((SELECT DISTINCT Cast(',' AS VARCHAR(max))
+ CONVERT(VARCHAR(30), Count(U.UserID))
FROM ActivityLog U
WHERE U.UserID = G.ID
AND StartDateTime >= '2014-06-01 00:00:00.000'
AND EndDateTime <= '2014-07-01 23:59:59.000'
GROUP BY Year(StartDateTime),
Datepart(wk, StartDateTime)
FOR xml path('')), 1, 1, '') AS ActivityCount
FROM OperatorDetail G,
ActivityLog AL
GROUP BY G.ID,
G.EmployeeName
ORDER BY ActivityCount DESC
```
Which returns this:
```
ID EmployeeName ActivityCount
1 John Spoon 30, 50, 33, 90
2 Dave Jones 51, 88
3 Andy Carr 10, 22, 77, 44, 50
```
How can I change my query for it to show the average date collected for each value that's separated by a comma using StartDateTime and EndDateTime which is in `ActivityLog`?
My desired outcome would be something like:
```
ID EmployeeName ActivityCount Dates
1 John Spoon 30, 50, 33, 90 2014-10-01 00:00:00, 2014-10-01 22:00:00, 2014-10-04 07:00:00, 2014-10-10 09:00:00
2 Dave Jones 51, 88 2014-10-03 08:00:00, 2014-10-06 17:00:00
etc...
```
I have tried this below but I get the error `Type U.StartDateTime is not a defined system type.`
```
SELECT G.ID,
G.EmployeeName,
Stuff((SELECT DISTINCT Cast(',' AS VARCHAR(max))
+ CONVERT(VARCHAR(30), Count(U.UserID))
FROM ActivityLog U
WHERE U.UserID = G.ID
AND StartDateTime >= '2014-06-01 00:00:00.000'
AND EndDateTime <= '2014-07-01 23:59:59.000'
GROUP BY Year(StartDateTime),
Datepart(wk, StartDateTime)
FOR xml path('')), 1, 1, '') AS ActivityCount,
Stuff((SELECT DISTINCT Cast(',' AS VARCHAR(max))
+ CONVERT(VARCHAR(30), CONVERT(U.STARTDATETIME, Getdate()))
FROM ActivityLog U
WHERE U.UserID = G.ID
AND StartDateTime >= '2014-06-01 00:00:00.000'
AND EndDateTime <= '2014-07-01 23:59:59.000'
GROUP BY Year(StartDateTime),
Datepart(wk, StartDateTime)
FOR xml path('')), 1, 1, '') AS Dates
FROM OperatorDetail G,
ActivityLog AL
GROUP BY G.ID,
G.EmployeeName
ORDER BY ActivityCount DESC
```
`StartDateTime` and `EndDateTime` values are in this format: '0000-00-00 00:00:00.000', (`datetime, null`)
|
You have a cross join (Cartesian product) between the 2 tables; I can only assume that is deliberate. I firmly believe that you should make this obvious by using the ANSI standard syntax `CROSS JOIN`. If it is NOT deliberate then you need to specify how the 2 tables join.
You are also using a strange date combination for the date range: the 1st of June through to and including the 23rd hour, 59th minute and 59th second of the 1st of July. That is very nearly 1 month and 1 day.
It seems to me you should be using >= '20140601' and < '20140701' (all of June 2014, one month exactly).
For the concatenation it is just a small tweak to the syntax: you calculate the min() or max() date and convert it to a string using style number 120, very much like what you are already doing to the count.
```
SELECT
G.ID
, G.EmployeeName
, STUFF((
SELECT
CAST(',' AS varchar(max)) + CONVERT(varchar(30), COUNT(U.UserID))
FROM ActivityLog U
WHERE U.UserID = G.ID
AND StartDateTime >= '20140601'
AND EndDateTime < '20140701'
GROUP BY
YEAR(StartDateTime)
, DATEPART(wk, StartDateTime)
FOR xml PATH ('')
)
, 1, 1, '') AS ActivityCount
, STUFF((
SELECT -- changed: can't average datetimes directly, using MAX() instead
CAST(',' AS varchar(max)) + CONVERT(varchar(30), MAX(U.StartDateTime), 120)
FROM ActivityLog U
WHERE U.UserID = G.ID
AND StartDateTime >= '20140601'
AND EndDateTime < '20140701'
GROUP BY
YEAR(StartDateTime)
, DATEPART(wk, StartDateTime)
FOR xml PATH ('')
)
, 1, 1, '') AS Dates
FROM OperatorDetail G
CROSS JOIN ActivityLog AL
GROUP BY
G.ID
, G.EmployeeName
ORDER BY
ActivityCount DESC
```
Oh, and you don't need `SELECT DISTINCT` if you are doing a `GROUP BY`
|
try this
```
SELECT G.ID,
G.EmployeeName,
Stuff((SELECT ',' + CONVERT(VARCHAR(30), Count(U.UserID))
FROM ActivityLog U
WHERE U.UserID = G.ID
AND StartDateTime >= '2014-06-01 00:00:00.000'
AND EndDateTime <= '2014-07-01 23:59:59.000'
FOR xml path('')), 1, 1, '') AS ActivityCount,
Stuff((SELECT DISTINCT ',' + CONVERT(VARCHAR(30), StartDateTime)
FROM ActivityLog U
WHERE U.UserID = G.ID
AND StartDateTime >= '2014-06-01 00:00:00.000'
AND EndDateTime <= '2014-07-01 23:59:59.000'
FOR xml path('')), 1, 1, '') AS Dates
FROM OperatorDetail G,
ActivityLog AL
GROUP BY G.ID,
G.EmployeeName
ORDER BY ActivityCount DESC
```
|
MSSQL - Select DATETIME from stuff(cast(convert()) for each seperated value
|
[
"",
"sql",
"sql-server",
"date",
""
] |
Here is my table called Employee. I want the names of employees whose Active is 1 and who know Java.
I am working on SQL Server 2000, and the Split function does not work in SQL 2000. What is the alternative? How do I handle this?
```
id name lname Active Skill
1 Sonia Mohite 1 C,C#,JAVA,.NET
2 Dilip Kale 1 JAVA
3 Kalpesh More 0 C,C#,JAVA
4 Monika Naik 1 C
```
Thanks all of you for your help. But I have another table called Student. I want the names of students whose
IsActive is 1 and who have subject code 1. LIKE does not work here: it also returns students who have subject code 11 or 12.
```
id name Isactive subjectcode
1 Shilpa 1 1,2,3
2 Manasi 1 11
3 kamal 1 12,3
4 Hans 0 1,2
5 Rupali 1 1,11,12
```
|
Use `LIKE` .
## Query
```
SELECT * FROM Employee
WHERE active=1
AND skill LIKE '%Java%';
```
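The second table in the question (numeric codes) is exactly where a bare `LIKE '%1%'` fails, because 1 also matches inside 11 and 12. Wrapping both the column and the search term in delimiters avoids that; a sqlite3 sketch (`||` is concatenation, data taken from the Student example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (name TEXT, isactive INTEGER, subjectcode TEXT)")
con.executemany("INSERT INTO student VALUES (?,?,?)",
                [("Shilpa", 1, "1,2,3"),
                 ("Manasi", 1, "11"),
                 ("Rupali", 1, "1,11,12")])

# ',1,2,3,' contains ',1,' but ',11,' does not -- no false positives
rows = [r[0] for r in con.execute(
    "SELECT name FROM student WHERE isactive = 1 "
    "AND ',' || subjectcode || ',' LIKE '%,1,%'")]
print(sorted(rows))  # ['Rupali', 'Shilpa']
```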
|
Use below query:
```
SELECT * FROM EMPLOYEE
WHERE ACTIVE = 1
AND ',' || SKILL || ',' LIKE '%,JAVA,%';
```
Note: Above is for oracle which supports || for concatenation. For other DBs, you may have to use CONCAT like below:
```
SELECT * FROM EMPLOYEE
WHERE ACTIVE = 1
WHERE CONCAT (",", (CONCAT (SKILL, ",") ) LIKE '%,JAVA,%'
```
|
How to fetch the value from comma separated attribute in sql?
|
[
"",
"sql",
"asp.net",
""
] |
I have this table where NULL is the `NULL` value, not the string NULL:
```
MYCOL
--------
NULL
example
```
Why does this query *not* return the `NULL` row?
```
select * from example_so where nvl(mycol, '') = '';
```
|
`''` is again `NULL` in Oracle, because Oracle does not support empty strings the way other high-level languages and DBMSs do.
You need to test for NULL/empty string using `IS NULL` or `IS NOT NULL`.
No other relational operator works against `NULL`, though it is syntactically valid. [SQLFiddle Demo](http://sqlfiddle.com/#!4/d41d8/36233)
It has to be,
```
select * from example_so where mycol IS NULL
```
**EDIT:** As per [Docs](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements005.htm)
> Oracle Database currently treats a character value with a length
> of zero as null. However, this may not continue to be true in future
> releases, and Oracle recommends that you do not treat empty strings
> the same as nulls.
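Note that treating `''` as NULL is Oracle-specific (SQLite, for example, keeps them distinct), but the three-valued logic of NULL comparisons is universal and easy to demonstrate:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NULL = NULL evaluates to NULL (unknown), which is not TRUE
eq = con.execute("SELECT NULL = NULL").fetchone()[0]
isnull = con.execute("SELECT NULL IS NULL").fetchone()[0]
print(eq, isnull)  # None 1
```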
|
Because `NULL = NULL` is simply unknown. A third state perhaps? It is neither `TRUE` nor `FALSE`.
**Oracle considers an EMPTY STRING as NULL.**
`nvl(mycol, '')` makes no real sense, as you are making the NULL back to NULL and comparing it again with NULL.
```
SQL> WITH DATA AS(
2 SELECT 1 val, 'NULL' str FROM dual UNION ALL
3 SELECT 2, NULL str FROM dual UNION ALL
4 SELECT 3, '' str FROM dual UNION ALL
5 SELECT 4, 'some value' str FROM dual)
6 SELECT val, NVL(str, 'THIS IS NULL') FROM data WHERE str IS NULL
7 /
VAL NVL(STR,'THI
---------- ------------
2 THIS IS NULL
3 THIS IS NULL
SQL>
```
|
Oracle NVL with empty string
|
[
"",
"sql",
"oracle",
"null",
"nvl",
""
] |
I'm trying to run the following statement but am receiving the error messages just below. I have researched answers to no end and none have worked for me. I'm running Office 365 (64bit). I have loaded the Microsoft Access Database Engine (64bit). This is in Visual Studio 2013 with SSDT as well as SQL Server 2012. I do not have access to changing environment or startup parameters to SQL Server. Any help is appreciated.
```
SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.15.0',
'Excel 12.0;Database=C:\Users\UserName\Folder\SomeFile.xlsx;;HDR=NO;IMEX=1', [Table 1$])
```
* Msg 7399, Level 16, State 1, Line 1 The OLE DB provider
"Microsoft.ACE.OLEDB.15.0" for linked server "(null)" reported an
error. The provider did not give any information about the error.
* Msg 7303, Level 16, State 1, Line 1 Cannot initialize the data source
object of OLE DB provider "Microsoft.ACE.OLEDB.15.0" for linked
server "(null)".
Here's what I have tried:
First, I ran...
```
sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO
```
Followed by...with no love.
```
EXEC sys.sp_addsrvrolemember @loginame = N'<<Domain\User>>', @rolename = N'sysadmin';
GO
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.15.0', N'AllowInProcess', 1
GO
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.15.0', N'DynamicParameters', 1
GO
```
I have changed the code to read **Microsoft.ACE.OLEDB.12.0** as I have seen that as well, still no love.
I have also checked permissions of C:\Users\MSSQLSERVER\AppData\Local\Temp and C:Windows\ServiceProfiles\NetworkService\AppData\Local which have granted Full Control for the following: System, MSSQLSERVER, and Administrators, Network Service (on the latter).
Still no love.
Lastly, I have tried changing to the 32bit version of the Microsoft Access Database Engine which has persisted in not working.
Help, anyone?
|
<http://www.aspsnippets.com/Articles/The-OLE-DB-provider-Microsoft.Ace.OLEDB.12.0-for-linked-server-null.aspx>
This solves the issue.
For some reason SQL Server does not like the default MSSQLSERVER account. Switching it to a local user account resolves the issue.
|
This is for my reference, as I encountered a variety of SQL error messages while trying to connect with this provider. Other answers prescribe "try this, then this, then this". I appreciate the other answers, but I like to pair *specific solutions with specific problems*.
---
**Error**
*...provider did not give information...Cannot initialize data source object...*
**Error Numbers**
7399, 7303
**Error Detail**
```
Msg 7399, Level 16, State 1, Line 2 The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" reported an error.
The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 2 Cannot initialize the data source object
of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".
```
**Solution**
File was open. Close it.
**Credit**
* <https://stackoverflow.com/a/29369868/1175496>
---
**Error**
*Access denied...Cannot get the column information...*
**Error Numbers**
7399, 73**50**
**Error Detail**
```
Msg 7399, Level 16, State 1, Line 2 The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" reported an error.
Access denied.
Msg 7350, Level 16, State 2, Line 2 Cannot get the column information
from OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".
```
**Solution**
*Give access*
**Credit**
* <https://stackoverflow.com/a/27509955/1175496>
---
**Error**
*No value given for one or more required parameters....Cannot execute the query ...*
**Error Numbers**
**???**, 73**20**
**Error Detail**
```
OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" returned message "No value given for one or more required parameters.".
Msg 7320, Level 16, State 2, Line 2
Cannot execute the query "select [Col A], [Col A] FROM $Sheet" against OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".
```
**Solution**
Column names might be wrong. Do `[Col A]` and `[Col B]` actually exist in your spreadsheet?
---
**Error**
*"Unspecified error"...Cannot initialize data source object...*
**Error Numbers**
**???**, 7303
**Error Detail**
```
OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" returned message "Unspecified error".
Msg 7303, Level 16, State 1, Line 2 Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".
```
**Solution**
Run SSMS as admin. See [this question.](https://stackoverflow.com/questions/20572563/sql-server-cannot-initialize-the-data-source-object-of-ole-db-provider-microso)
---
**Other References**
Other answers which suggest modifying properties. Not sure how modifying these two properties (checking them or unchecking them) would help.
* <https://stackoverflow.com/a/31605038/1175496>
* <http://www.aspsnippets.com/Articles/The-OLE-DB-provider-Microsoft.Ace.OLEDB.12.0-for-linked-server-null.aspx>
* <https://social.technet.microsoft.com/Forums/lync/en-US/bb2dc720-f8f9-4b93-b5d1-cfb4f8a8b1cb/the-ole-db-provider-microsoftaceoledb120-for-linked-server-null-reported-an-error-access?forum=sqldataaccess#3fcc14f4-420e-4544-be74-eea1e0e78462>
|
The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)"
|
[
"",
"sql",
"visual-studio-2013",
"openrowset",
""
] |
I am planning an SQL statement right now and would need someone to look over my thoughts.
This is my Table:
```
id stat period
--- ------- --------
1 10 1/1/2008
2 25 2/1/2008
3 5 3/1/2008
4 15 4/1/2008
5 30 5/1/2008
6 9 6/1/2008
7 22 7/1/2008
8 29 8/1/2008
```
Create Table
```
CREATE TABLE tbstats
(
id INT IDENTITY(1, 1) PRIMARY KEY,
stat INT NOT NULL,
period DATETIME NOT NULL
)
go
INSERT INTO tbstats
(stat,period)
SELECT 10,CONVERT(DATETIME, '20080101')
UNION ALL
SELECT 25,CONVERT(DATETIME, '20080102')
UNION ALL
SELECT 5,CONVERT(DATETIME, '20080103')
UNION ALL
SELECT 15,CONVERT(DATETIME, '20080104')
UNION ALL
SELECT 30,CONVERT(DATETIME, '20080105')
UNION ALL
SELECT 9,CONVERT(DATETIME, '20080106')
UNION ALL
SELECT 22,CONVERT(DATETIME, '20080107')
UNION ALL
SELECT 29,CONVERT(DATETIME, '20080108')
go
```
I want to **calculate** the difference between each statistic and the next, and then calculate the mean value of the 'gaps.'
Thoughts:
I need to join each record with it's subsequent row. I can do that using the ever flexible joining syntax, thanks to the fact that I know the id field is an integer sequence with no gaps.
By aliasing the table I could incorporate it into the SQL query twice, then join them together in a staggered fashion by adding 1 to the id of the first aliased table. The first record in the table has an id of 1. 1 + 1 = 2 so it should join on the row with id of 2 in the second aliased table. And so on.
Now I would simply subtract one from the other.
Then I would use the ABS function to ensure that I always get positive integers as a result of the subtraction regardless of which side of the expression is the higher figure.
**Is there an easier way to achieve what I want?**
|
The average value of the gaps can be done by calculating the difference between the first value and the last value and dividing by one less than the number of elements:
```
select sum(case when seqnum = num then stat else - stat end) / (max(num) - 1)
from (select period, row_number() over (order by period) as seqnum,
count(*) over () as num
from tbstats
) t
where seqnum = num or seqnum = 1;
```
Of course, you can also do the calculation using `lead()`, but this will also work in SQL Server 2005 and 2008.
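The division by one less than the number of elements works because consecutive differences telescope; a quick check in Python with the question's stat values (note this is the mean of the *signed* gaps, which is what the query above computes):

```python
stats = [10, 25, 5, 15, 30, 9, 22, 29]

# consecutive differences sum to (last - first), so their mean
# is (last - first) / (n - 1)
gaps = [b - a for a, b in zip(stats, stats[1:])]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap == (stats[-1] - stats[0]) / (len(stats) - 1))  # True
```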
|
The [`lead`](http://msdn.microsoft.com/en-us/library/hh213125.aspx) analytic function should do the trick:
```
SELECT period, stat, stat - LEAD(stat) OVER (ORDER BY period) AS gap
FROM tbstats
```
|
Joining next Sequential Row
|
[
"",
"sql",
"sql-server",
"join",
"union",
""
] |
```
CREATE FUNCTION [dbo].[GetSalesByParameters]
(
@firstName varchar(70),
@lastName varchar(70),
@dateFrom datetime,
@dateTo datetime,
@selectValid varchar(70),
@selectCallCenters varchar(70)
)
RETURNS TABLE
AS
RETURN
(
select * from Sales
where valid = @selectValid
and firstName LIKE '%'+@firstName+'%'
)
```
I'm trying to make a function that gives me all sales stored in the Sales table, but sometimes the parameter "selectValid" can be null. If that value comes in as "", how can I remove it from the WHERE clause?
|
If `@selectValid` should be ignored when its value is an empty string, then just change this line from:
```
where valid = @selectValid
```
to
```
where (@selectValid = '' OR valid = @selectValid)
```
The question is a little ambiguous as to if `@selectValid` will be an empty string or null. If you want to handle both cases, then try:
```
where (@selectValid IS NULL OR @selectValid = '' OR valid = @selectValid)
```
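A minimal sqlite3 sketch of this optional-filter pattern (names shortened from the question; `:v` plays the role of @selectValid):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (firstname TEXT, valid TEXT)")
con.executemany("INSERT INTO sales VALUES (?,?)", [("Ann", "Y"), ("Bob", "N")])

q = ("SELECT firstname FROM sales "
     "WHERE (:v IS NULL OR :v = '' OR valid = :v)")
everyone = [r[0] for r in con.execute(q, {"v": ""})]   # filter ignored
only_y = [r[0] for r in con.execute(q, {"v": "Y"})]    # filter applied
print(everyone, only_y)
```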
Typically I use this approach, but if you are dealing with large tables or having performance issues with this approach, then you might want to read up a bit here:
* [Proper way to handle 'optional' where clause filters in SQL?](https://stackoverflow.com/questions/1705634/proper-way-to-handle-optional-where-clause-filters-in-sql)
* [Query running longer by adding unused WHERE conditions](https://stackoverflow.com/questions/1709348/query-running-longer-by-adding-unused-where-conditions/1709698#1709698)
Hope that helps.
|
```
CREATE FUNCTION [dbo].[GetSalesByParameters]
(
@firstName varchar(70),
@lastName varchar(70),
@dateFrom datetime,
@dateTo datetime,
@selectValid varchar(70),
@selectCallCenters varchar(70)
)
RETURNS TABLE
AS
RETURN
(
select * from Sales
where (@selectValid IS NULL OR valid = @selectValid)
and firstName LIKE '%'+@firstName+'%'
)
```
|
How to remove statements from where clause in SQL
|
[
"",
"sql",
"sql-function",
""
] |
Let's say I have two Tables, called Person, and Couple, where each Couple record stores a pair of Person id's (also assume that each person is bound to *at most* another different person).
I am planning to support a lot of queries where I will ask for Person records that are not married yet. Do you guys think it's worthwhile to add a 'partnerId' field to Person? (It would be set to null if that person is not married yet)
I am hesitant to do this because the partnerId field is something that is computable - just go through the Couple table to find out. The performance cost of creating a new couple will also increase because I have to do this extra bookkeeping.
I hope that it doesn't sound like I am asking two different questions here, but I felt that this is relevant. Is it a good/common idea to include extra fields that are redundant (computable/inferable by joining with other tables), but will make your query a lot easier to write and faster?
Thanks!
|
A better option is to keep the data normalized, and utilize a view (indexed, if supported by your rdbms). This gets you the convenience of dealing with all the relevant fields in one place, without denormalizing your data.
Note: Even if a database doesn't support indexed views, you'll likely still be better off with a view as the indexes on the underlying tables can be utilized.
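As a sketch of the view approach (illustrative names, plain sqlite3 rather than an indexed view): the partner id is derived on the fly, so nothing redundant is stored, yet the "unmarried" query stays simple:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE couple (p1 INTEGER, p2 INTEGER)")
con.executemany("INSERT INTO person VALUES (?,?)",
                [(1, "Ann"), (2, "Bob"), (3, "Cid")])
con.execute("INSERT INTO couple VALUES (1, 2)")

# the view computes partner_id instead of storing it on person
con.execute("""CREATE VIEW person_v AS
    SELECT p.id, p.name,
           (SELECT CASE WHEN c.p1 = p.id THEN c.p2 ELSE c.p1 END
            FROM couple c WHERE p.id IN (c.p1, c.p2)) AS partner_id
    FROM person p""")

singles = [r[0] for r in con.execute(
    "SELECT name FROM person_v WHERE partner_id IS NULL")]
print(singles)  # ['Cid']
```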
|
Is there always a zero to one relationship between Person and Couples? i.e. a person can have zero or one partner? If so then your `Couple` table is actually redundant, and your new field is a better approach.
The only reason to split `Couple` off to another table is if one Person can have many partners.
When someone gets a partner you either write one record to the `Couple` table or update one record in the `Person` table. I argue that your `Couple` table is redundant here. You haven't indicated that there is any extra info on the `Couple` record besides the link, and it appears that there is only ever zero or one Couple record for every `Person` record.
|
Redundant field in SQL for Performance
|
[
"",
"sql",
"database",
""
] |
```
--Dummy table
create table table1 (
column_order varchar (100)
)
insert into table1 values ('column1')
insert into table1 values ('column2')
insert into table1 values ('column3')
insert into table1 values ('column4')
insert into table1 values ('column5')
insert into table1 values ('column6')
--Start of select
declare @rowsCount INT
declare @i INT = 1
declare @column varchar(1000) = ''
set @rowsCount = (select COUNT(*) from table1)
while @i <= @rowsCount
begin
set @column = @column + (select column_order from table1 where rowid(table1) = @i) + ', '
set @i = @i + 1
end
select @column
```
This code uses the ROWID function, which is a Sybase IQ function, and I'm not sure which other DBMSs support it. Above you have an example of what I want my select to look like.
My problem is that you can't use the ROWID function with sys.columns or any other system tables. Does anyone have an idea how to get the same select as mine without using the ROWID function?
If you are using IQ, I constructed the code so you can just press F5 and see the select statement; after that, just drop the dummy table.
|
Use list(). It works in both the ASA system and IQ catalogs.
```
drop table if exists table1
go
create local temporary table table1 (
column_order varchar (100)
) in system --create table in system
insert into table1 values ('column1')
insert into table1 values ('column2')
insert into table1 values ('column3')
insert into table1 values ('column4')
insert into table1 values ('column5')
insert into table1 values ('column6')
declare @columns varchar(100)
select @columns = list(column_order) from table1
select @columns
go
```
|
I may not understand your need, because I can't see why you need ROWID.
Usually, in TSQL, I do as follow:
```
declare @someVar as nvarchar(max)
set @someVar = (select
'[' + c.name + '],' as 'data()'
from
sys.columns c
join sys.tables t on c.object_id = t.object_id
where
t.name = 'SomeTableName'
for xml path(''))
print Left(@someVar, Len(@someVar) - 1)
```
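For readers on other engines, here is a minimal runnable sketch of the same comma-list aggregation using Python's built-in `sqlite3`: SQLite's `group_concat()` plays the role of IQ's `LIST()` or the `FOR XML PATH` trick above. Table and column names mirror the question's dummy table.

```python
import sqlite3

# SQLite's group_concat() aggregates a column into one comma-separated
# string, like IQ's LIST() or SQL Server's FOR XML PATH trick.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (column_order TEXT);
INSERT INTO table1 VALUES ('column1'), ('column2'), ('column3');
""")
cols = conn.execute(
    "SELECT group_concat(column_order, ', ') FROM table1"
).fetchone()[0]
# Aggregation order is not guaranteed, so check the set of parts.
assert sorted(cols.split(', ')) == ['column1', 'column2', 'column3']
```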
|
Determine all columns from a table and write them separated by commas in a variable
|
[
"",
"sql",
"t-sql",
"sap-iq",
""
] |
I was wondering if in the following statement:
```
UPDATE u
SET u.isactive =
(
CASE WHEN e.LVStatus<>'B' AND u.IsActive=1 AND u.AutoUpdate=1 THEN
0
WHEN e.LVStatus='B' AND u.IsActive=0 AND u.AutoUpdate=1 THEN
1 END
),
u.UpdatedBy = 0
FROM tbl_e e
INNER JOIN tbl_Users u ON e.id=u.id
```
If the conditions in the case statement are not met, for example u.IsActive=1 and e.LVStatus='B', will the query still set u.UpdatedBy=0? I was hoping that if the conditions in the case statement are not met then nothing happens; perhaps if I want that behavior I just need two separate update statements with different where clauses. Thanks!
|
Since your query sets `u.UpdatedBy` to zero unconditionally, the answer is yes, the value of `u.UpdatedBy` will be set regardless of the outcome of the `CASE` expression through which `u.isactive` is set.
To overcome this, you could add a separate `CASE` expression for the `u.UpdatedBy` field, and use the current value when the conditions of the other `CASE` statement are not met:
```
UPDATE u
SET u.isactive = CASE
WHEN e.LVStatus<>'B' AND u.IsActive=1 AND u.AutoUpdate=1 THEN 0
WHEN e.LVStatus='B' AND u.IsActive=0 AND u.AutoUpdate=1 THEN 1
ELSE u.isactive
END
, u.UpdatedBy= CASE
WHEN (e.LVStatus<>'B' AND u.IsActive=1 AND u.AutoUpdate=1) OR
(e.LVStatus='B' AND u.IsActive=0 AND u.AutoUpdate=1)
THEN 0
ELSE u.UpdatedBy
END
FROM tbl_e e
INNER JOIN tbl_Users u ON e.id=u.id
```
Alternatively, you could move that condition into the `WHERE` clause, like this:
```
UPDATE u
SET u.isactive = CASE
WHEN e.LVStatus<>'B' AND u.IsActive=1 AND u.AutoUpdate=1 THEN 0
WHEN e.LVStatus='B' AND u.IsActive=0 AND u.AutoUpdate=1 THEN 1
-- No ELSE is needed, because WHERE filters out all other cases
END
, u.UpdatedBy=0
FROM tbl_e e
INNER JOIN tbl_Users u ON e.id=u.id
WHERE (e.LVStatus<>'B' AND u.IsActive=1 AND u.AutoUpdate=1) OR
(e.LVStatus='B' AND u.IsActive=0 AND u.AutoUpdate=1)
```
|
Both set conditions will always work. A `CASE` expression without an `ELSE` clause has an implicit `ELSE NULL`. See [here](http://msdn.microsoft.com/en-us/library/ms181765.aspx).
If `u.isactive` has a `NOT NULL` constraint, you'll get a constraint violation and nothing will be updated by the statement.
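A quick runnable illustration of the implicit `ELSE NULL`, sketched with Python's `sqlite3` (SQLite follows the same standard `CASE` semantics as SQL Server here):

```python
import sqlite3

# A CASE expression with no matching branch and no ELSE yields NULL,
# which Python's sqlite3 driver surfaces as None.
conn = sqlite3.connect(":memory:")
result = conn.execute(
    "SELECT CASE WHEN 1 = 2 THEN 'matched' END"
).fetchone()[0]
assert result is None
```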
|
SQL Case Statement in set; Does it always fire?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
"sql-server-2012",
""
] |
Database of test:
```
SET NAMES utf8;
SET foreign_key_checks = 0;
SET time_zone = '+02:00';
SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
CREATE TABLE `account` (
`idAccount` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(128) NOT NULL,
PRIMARY KEY (`idAccount`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
CREATE TABLE `users` (
`idUser` int(11) NOT NULL AUTO_INCREMENT,
`idAccount` int(11) NOT NULL,
`firstName` varchar(128) NOT NULL,
PRIMARY KEY (`idUser`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
DROP TABLE IF EXISTS `transactions`;
CREATE TABLE `transactions` (
`idTransactions` int(11) NOT NULL AUTO_INCREMENT,
`idUser` int(11) NOT NULL,
`dateTransaction` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`idTransactions`),
KEY `index_dateTransaction` (`dateTransaction`) USING BTREE
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
INSERT INTO `transactions` (`idTransactions`, `idUser`, `dateTransaction`) VALUES
(1, 1, '2012-12-16 15:52:32'),
(2, 1, '2012-12-20 15:52:37'),
(3, 1, '2013-02-01 15:52:37'),
(4, 2, '2013-03-16 15:52:37'),
(5, 2, '2013-03-18 15:52:37'),
(6, 3, '2014-04-19 15:52:37'),
(7, 3, '2014-05-20 15:52:37'),
(8, 4, '2014-06-21 15:58:46');
INSERT INTO `account` (`idAccount`, `name`) VALUES
(1, 'Burger & Burger');
INSERT INTO `users` (`idUser`, `idAccount`, `firstName` ) VALUES
(1, 1, 'Roberto'),
(2, 1, 'Alessandro');
```
Depending on the date passed, sometimes MySQL doesn't use the index.
I know that I need to add/edit an index. Please, could you help me make this query perform well?
This query **doesn't** use the INDEX:
```
SELECT
users.firstName,
ts1.*,
COUNT(transactions.dateTransaction) AS num_transactions
FROM users
INNER JOIN transactions ON transactions.idUser = users.idUser
INNER JOIN (
SELECT
users.idUser,
MIN(transactions.dateTransaction) AS first_transaction,
MAX(transactions.dateTransaction) AS last_transaction
FROM transactions
INNER JOIN users ON transactions.idUser = users.idUser
WHERE (users.idAccount = 1)
GROUP BY users.idUser
) AS ts1 ON users.idUser = ts1.idUser
WHERE
transactions.dateTransaction BETWEEN ('2012-01-01') AND ('2013-12-31')
AND users.idAccount = 1
GROUP BY users.idUser
```
EXPLAIN link: <http://sqlfiddle.com/#!2/059d8/7/0>
This query **use** it:
```
SELECT
users.firstName,
ts1.*,
COUNT(transactions.dateTransaction) AS num_transactions
FROM users
INNER JOIN transactions ON transactions.idUser = users.idUser
INNER JOIN (
SELECT
users.idUser,
MIN(transactions.dateTransaction) AS first_transaction,
MAX(transactions.dateTransaction) AS last_transaction
FROM transactions
INNER JOIN users ON transactions.idUser = users.idUser
WHERE users.idAccount = 1
GROUP BY users.idUser
) AS ts1 ON users.idUser = ts1.idUser
WHERE
transactions.dateTransaction BETWEEN ('2012-01-01') AND ('2012-12-31')
AND users.idAccount = 1
GROUP BY users.idUser
```
Only the year changes between the two queries.
But the biggest problem is that in the production environment, with ~65,000 rows of transactions, the query hangs for over 60 seconds (!)
I created a sqlfiddle, this is the link: <http://sqlfiddle.com/#!2/059d8/1/0>
Thank you very much!
|
Add the following two indexes:
```
ALTER TABLE `users` ADD KEY `bk1_account_user` (idAccount, idUser);
ALTER TABLE `transactions` ADD KEY `bk2_user_datetrans` (idUser, dateTransaction);
```
This allows all the tables to be accessed via covering indexes, and eliminates some of the ALL-type table accesses. See the SQLFiddle for details: <http://sqlfiddle.com/#!2/b11bb/4>
Also, consider upgrading to 5.6, to get rid of the "using join buffer".
|
This is interesting. I played with the dates, and if the filter is obviously off (using year 2001 for example) mysql uses its CONST tables to compute the query:
`Impossible WHERE noticed after reading const tables`
I suspect there's a strong optimization on the date columns which I guess is interfering with the index calculations. But I'm not sure about this...
Nonetheless, your query can be improved.
Take a look at this one:
```
SELECT
users.firstName,
ts1.*
FROM users
JOIN (
SELECT
users.idUser,
MIN(transactions.dateTransaction) AS first_transaction,
MAX(transactions.dateTransaction) AS last_transaction,
COUNT(transactions.dateTransaction) AS num_transactions
FROM transactions
JOIN users ON transactions.idUser = users.idUser AND users.idAccount = 1
WHERE
transactions.dateTransaction BETWEEN ('2011-01-01') AND ('2011-07-31')
GROUP BY users.idUser
) AS ts1 ON users.idUser = ts1.idUser
WHERE
users.idAccount = 1
GROUP BY users.idUser;
```
I moved in the subquery the `COUNT` and the `WHERE` clause, so you only have to use the transactions table once. But it means that the meaning of the query changed, you have to check if it is what you want. Now, the count will count only the transactions between these 2 dates while before, it was counting them in general for the given user, regardless of the date. If you don't think it fits your needs, just ignore my change.
From the DDL perspective, I think you can improve it like this:
1. IF and only IF, you have many different user accounts (cardinality of idAccount >20-30), spread more or less equally:
> KEY `index_idAccount` (`idAccount`) on the `user` table.
2. Change your existing index `index_dateTransaction` to use the idUser too:
> KEY `index_dateTransaction` (`idUser`, `dateTransaction`)
Final result would be as follows:

|
MYSQL: query doesn't use the index some time
|
[
"",
"mysql",
"sql",
""
] |
I'm looking for a way to let SQL display data from the year we're in.
On my page the data is displayed like this:
```
SELECT * FROM test WHERE date > '01.01.2014' (European date style)
```
But when we move over to 2015 I have to change the code to display that years data.
So I was thinking that perhaps SQL have some magic function that lets me display data like this:
```
SELECT * FROM test WHERE date = 'this.year'
```
Tried searching the web/Google for answers, but I couldn't find a good way to write the question in a single and understandable sentence.
|
You can use the [`year`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_year) function:
```
SELECT * FROM test WHERE YEAR(`date`) = YEAR(NOW())
```
|
A simple way is to do:
```
SELECT *
FROM test
WHERE year(date) = year(now());
```
A more advanced way is to avoid the function on `date`. This allows the use of an index:
```
SELECT *
FROM test
WHERE date >= makedate(year(now()), 1) and
date < makedate(year(now()) + 1, 1);
```
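To see that the two forms agree, here is a small sketch using Python's `sqlite3`. SQLite has no `makedate`, so the range boundaries are written as literal dates, and the year is fixed to 2014 for the demo; the table name and data are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (d TEXT)")
conn.executemany("INSERT INTO test VALUES (?)",
                 [("2013-12-31",), ("2014-03-15",), ("2015-01-01",)])

# Non-sargable: the function wraps the column, so an index on d is unusable.
slow = conn.execute(
    "SELECT d FROM test WHERE strftime('%Y', d) = '2014'"
).fetchall()

# Sargable: compare the bare column against range boundaries, mirroring
# the makedate() version above (year hard-coded for the demo).
fast = conn.execute(
    "SELECT d FROM test WHERE d >= '2014-01-01' AND d < '2015-01-01'"
).fetchall()

assert slow == fast == [("2014-03-15",)]
```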
|
How to capture THIS year in a SQL SELECT statement
|
[
"",
"mysql",
"sql",
"date",
""
] |
First, I just know SQL and am not a pro at it, so please bear with me.
I have a situation where the default condition is (for example) that one person can only have one schedule and one schedule only has one person, so I just need to create a `person_id` as an `FK` in my `schedule table`. (**1-1 relation**)
However, now a schedule can have more than one person, and one person can have many schedules (**many-to-many relation**). Is it possible to do this? How should I redesign my database?
|
A direct many-to-many reference between two tables is never a good idea. It is best to put a separate table in between to model the relationship.

A sample query would then be along the lines of:
```
select * from Person
join PersonSchedule on PersonSchedule.PersonId= Person.Id
join Schedules on Schedules.Id = PersonSchedule.ScheduleId
```
|
What you describe is a 1-to-N relation.
In general, 1 parent can have N children.
However, each child can have only 1 parent (well.. not in real life, but I suppose that's what you meant in your example).
The way you model this is:
```
parent(ID, name)
child(ID, name, parent_ID)
```
`child.ID` is a PK (it identifies a child uniquely)
`parent.ID` is a PK (it identifies a parent uniquely)
`child.parent_ID` is a FK that points to `parent.ID`. A FK always points to a PK.
**EDIT:**
From your comment, it seems you wanted an N-to-M relation. Let's again use parents and children, but like in real life: children do have more than 1 parent.
N-to-M relations are modeled using one table in between that links the two.
Example:
```
parent(ID, name) -- ID is PK
parent(1, 'parent1')
parent(2, 'parent2')
parent(3, 'parent3')
child(ID, name) -- ID is PK
child(10, 'child1')
child(11, 'child2')
child(12, 'child3')
-- who is parent of whom?
parent_child(parentID, childID) -- both parentID and childID are FKs
parent_child(1, 10) -- parent1 is parent of child1
parent_child(2, 10) -- parent2 is parent of child1
parent_child(2, 11) -- parent2 is parent of child2
FK: parent_child.parentID -> parent.ID
FK: parent_child.childID -> child.ID
```
One more note. From your question it is not clear whether you realize the following:
While setting up the PK and FK constraints for these relations is a very good idea, it is not mandatory and does not affect the modelling of your problem.
One thing is to model your many-to-many scenario with the 3 tables as described above. This will perfectly work.
Another thing is to enforce the PK and FK constraints. This will ensure that you cannot insert inconsistent data by accident (e.g. twice the same person, or a child-parent relationship to a non-existing parent), plus help the query engine provide better efficiency.
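A minimal runnable sketch of the junction-table design, using Python's `sqlite3` and hypothetical person/schedule names matching the question (the `person_schedule` table carries the many-to-many link):

```python
import sqlite3

# Hypothetical person/schedule schema: the person_schedule junction table
# carries the many-to-many relationship described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE schedule (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE person_schedule (
    person_id   INTEGER REFERENCES person(id),
    schedule_id INTEGER REFERENCES schedule(id),
    PRIMARY KEY (person_id, schedule_id)
);
INSERT INTO person   VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO schedule VALUES (10, 'Morning'), (11, 'Evening');
-- Alice has two schedules; the Morning schedule has two people.
INSERT INTO person_schedule VALUES (1, 10), (1, 11), (2, 10);
""")
rows = conn.execute("""
    SELECT p.name, s.title
    FROM person p
    JOIN person_schedule ps ON ps.person_id = p.id
    JOIN schedule s ON s.id = ps.schedule_id
    ORDER BY p.name, s.title
""").fetchall()
assert rows == [('Alice', 'Evening'), ('Alice', 'Morning'), ('Bob', 'Morning')]
```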
|
is it possible to have multiple foreign key from the same table?
|
[
"",
"mysql",
"sql",
""
] |
I am relatively new to PostgreSQL and I know how to pad a number with zeros to the left in SQL Server but I'm struggling to figure this out in PostgreSQL.
I have a number column where the maximum number of digits is 3 and the min is 1: if it's one digit it has two zeros to the left, and if it's 2 digits it has 1, e.g. 001, 058, 123.
In SQL Server I can use the following:
```
RIGHT('000' + cast([Column1] as varchar(3)), 3) as [Column2]
```
This does not exist in PostgreSQL. Any help would be appreciated.
|
You can use the `rpad` and `lpad` functions to pad numbers to the right or to the left, respectively. Note that this does not work directly on numbers, so you'll have to use `::char` or `::text` to cast them:
```
SELECT RPAD(numcol::text, 3, '0'), -- Zero-pads to the right up to the length of 3
LPAD(numcol::text, 3, '0') -- Zero-pads to the left up to the length of 3
FROM my_table
```
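For comparison, SQLite has neither `lpad` nor `to_char`, but its `printf` achieves the same zero-padding; a quick check with Python's `sqlite3`, using the question's sample numbers:

```python
import sqlite3

# SQLite's printf('%03d', n) zero-pads to three digits, like lpad/to_char.
conn = sqlite3.connect(":memory:")
rows = [conn.execute("SELECT printf('%03d', ?)", (n,)).fetchone()[0]
        for n in (1, 58, 123)]
assert rows == ['001', '058', '123']
```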
|
The `to_char()` function is there to format numbers:
```
select to_char(column_1, 'fm000') as column_2
from some_table;
```
The `fm` prefix ("fill mode") avoids leading spaces in the resulting varchar. The `000` simply defines the number of digits you want to have.
```
psql (9.3.5)
Type "help" for help.
postgres=> with sample_numbers (nr) as (
postgres(> values (1),(11),(100)
postgres(> )
postgres-> select to_char(nr, 'fm000')
postgres-> from sample_numbers;
to_char
---------
001
011
100
(3 rows)
postgres=>
```
For more details on the format picture, please see the manual:
<http://www.postgresql.org/docs/current/static/functions-formatting.html>
|
Padding zeros to the left in postgreSQL
|
[
"",
"sql",
"postgresql",
""
] |
I'd like to inspect an existing Firebird (2.5.1) database without having to
install a server.
Are there any tools out there that allow an inspection of the database file?
If not: Are there any tools I can run on the system where the database server
is actually running to take a look at it?
|
Though it's been a while since I posted this question I'd like to give an answer:
I'm now using "Database .NET" from this website:
<http://fishcodelib.com/Database.htm>
It works reliably and rock solid (especially when used with large databases).
|
IBExpert's Database Inside allows you to analyse a Firebird database file directly, without a server. Full description here: <http://ibexpert.net/ibe/index.php?n=Doc.DatabaseInside>
|
Is it possible to open Firebird database file without installing db server?
|
[
"",
"sql",
"database",
"firebird",
""
] |
I am trying to run a query joining two large tables of data while filtering on the minimum date at the same time; the date filtering seems to slow it down a lot. But it is a must-have. Is there any way I can speed it up? As the query stands, it just keeps loading and loading.
Here is what I'm getting in the EXPLAIN

The query is -
```
SELECT T1.id_no,
T1.condition_code,
Count(T1.condition_code) AS COUNT,
T1.doe,
T2.id_no,
T2.trans_time,
T2.from_routing_pos
FROM attrcoll_month T1
JOIN live_trans T2
ON T1.id_no = T2.id_no
WHERE T2.trans_time = (SELECT Min(trans_time)
FROM live_trans T2_MIN
WHERE T2_MIN.id_no = T2.id_no)
AND T1.doe BETWEEN '2014-09-01 00:00:01' AND '2014-09-02 23:59:59'
AND T1.unique_code = 'XXY'
GROUP BY T2.from_routing_pos,
T1.condition_code
```
Snippets of each table's data:
**ATTRCOLL\_MONTH T1**
```
ID_NO DOE CONDITION_CODE UNIQUE_CODE
8442 25/09/2014 22:49 NEND XXY
8442 25/09/2014 22:49 SEND XXY
8442 25/09/2014 22:49 BS XXY
8442 25/09/2014 22:49 BS XXY
8442 25/09/2014 22:49 BS XXY
8442 25/09/2014 22:49 TD XXY
8511 25/09/2014 22:49 NEND XXY
8511 25/09/2014 22:49 SEND XXY
8511 25/09/2014 22:49 BS XXY
8511 25/09/2014 22:49 BS XXY
8511 25/09/2014 22:49 BS XXY
8511 25/09/2014 22:49 TD XXY
8511 24/09/2014 12:49 OF XXY
8511 24/09/2014 12:49 OF XXY
8675 24/09/2014 12:49 NEND XXY
8675 24/09/2014 12:49 SEND XXY
9081 24/09/2014 12:49 NEND XXY
```
**LIVE\_TRANS T2**
```
ID_NO TRANS_TIME UNIQUE_CODE FROM_ROUTING_POS
8442 2.12276E+17 XXY OD1
8442 2.12276E+17 XXY OD2
8445 2.12276E+17 XXY OD3
8214 2.12276E+17 XXY OD2
8325 2.12276E+17 XXY OD1
842 2.12276E+17 XXY OD3
2444 2.12276E+17 XXY OD3
```
Sorry about the table data formatting!
Hope this is explained well, please let me know if you need more info
|
1. First, fetch records from T1 only into a temp table.
2. Then apply the join of the temp table, T2 and T2\_MIN and get all the min times and ids.
3. Then merge #1, #2 and T2 in a join and apply the group by.
This will give some boost in performance.
The basic idea is to limit the records that take part in the join and to remove the subquery.
This is sample:-
```
--Fetch records from Table one based on all filtering conditions
-- this will reduce the logical read when we apply join
SELECT
T1.id_no,
T1.condition_code,
T1.doe
INTO
#Temp
FROM
attrcoll_month T1
WHERE
T1.doe >= '01/09/2014'
AND T1.doe < '03/01/2014'
AND T1.unique_code = 'XXY';
-- Get all the min times for only the required ids. This avoids the sub query
-- and also reduces reads, since the records in #Temp are limited
SELECT
MIN(trans_time) MinTime,
T.id_no
INTO
#tempMinTime
FROM
#Temp T
JOIN live_trans T2_MIN ON T.id_no = T2_MIN.id_no;
--Merging #1 and #2
SELECT
T1.id_no,
T1.condition_code,
COUNT(T1.condition_code) AS count,
T1.doe,
T2.id_no,
T2.trans_time,
T2.from_routing_pos
FROM
#Temp T1
JOIN #tempMinTime T ON T1.id_no = T.id_no
JOIN live_trans T2 ON T.id_no = T2.id_no
WHERE
T2.trans_time = T.MinTime
GROUP BY
T2.from_routing_pos,
T1.condition_code;
```
|
You are doing a correlated subquery which means that for every record in the main table (t1), it is running the query within t2. You may want to swap it around by having a sub-query just get all IDs and the minimum date first, THEN join back to the t1 table for the rest of the details.
```
select
FT1.id_no,
FT1.condition_code,
Count(*) AS ConditionCount,
FT1.doe,
FT2.id_no,
FT2.trans_time,
FT2.from_routing_pos
from
( select
t1.id_no,
min( t2.trans_time ) as MinTime
from
attrcoll_month t1
JOIN live_trans T2
on t1.id_no = t2.id_no
where
T1.doe BETWEEN '2014-09-01 00:00:01' AND '2014-09-02 23:59:59'
AND T1.unique_code = 'XXY'
group by
t1.id_no ) as PreQuery
JOIN attrcoll_month FT1
on PreQuery.ID_No = FT1.ID_No
JOIN live_trans FT2
ON PreQuery.id_no = FT2.id_no
AND PreQuery.MinTime = FT2.trans_time
group by
FT2.from_routing_pos,
FT1.condition_code
```
To help the query, I would add the following indexes on the tables:
```
attrcoll_month index = (unique_code, doe, id_no )
attrcoll_month additional index for secondary join = ( id_no, condition_code )
live_trans index = ( id_no, trans_time )
```
This way, the "PreQuery" is only getting the IDs that qualify the date/time and get the min date ONCE. Then, since you have the IDs, just re-join to get the rest of the details.
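The "PreQuery" idea in miniature, runnable with Python's `sqlite3` on invented toy data: compute the per-id minimum once in a derived table, then join back for the details.

```python
import sqlite3

# Derived-table pattern: MIN(trans_time) per id is computed once, then the
# detail row matching that minimum is fetched with a join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE live_trans (id_no INTEGER, trans_time INTEGER, pos TEXT);
INSERT INTO live_trans VALUES
  (8442, 5, 'OD1'), (8442, 3, 'OD2'), (8445, 7, 'OD3');
""")
rows = conn.execute("""
    SELECT t.id_no, t.trans_time, t.pos
    FROM (SELECT id_no, MIN(trans_time) AS min_time
          FROM live_trans GROUP BY id_no) pre
    JOIN live_trans t
      ON t.id_no = pre.id_no AND t.trans_time = pre.min_time
    ORDER BY t.id_no
""").fetchall()
assert rows == [(8442, 3, 'OD2'), (8445, 7, 'OD3')]
```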
|
How to make query perform better using a dependent sub query in SQL?
|
[
"",
"mysql",
"sql",
""
] |
I have this query:
```
select t.cod_provincia as prov, COUNT(t.cod_provincia) as conto
from clienti t
where t.data_ins between to_date(&da_data,'ddmmrr') and to_date(&a_data,'ddmmrr')
and t.cod_iso='ITA'
group by t.cod_provincia
```
it returns

Now I want a new column showing the CONTO value as a percentage for each row. How can I do it?
For each row I want the value `[CONTO*100]/SUM(CONTO)`
|
```
select prov, conto*100/sum(conto) over()
from (
select t.cod_provincia as prov, COUNT(t.cod_provincia) as conto
from clienti t
where t.data_ins between to_date(&da_data,'ddmmrr') and to_date(&a_data,'ddmmrr')
and t.cod_iso='ITA'
group by t.cod_provincia
)
```
|
Oracle has a built-in analytic function RATIO\_TO\_REPORT that can be used as an alternative to Rusty's solution:
```
select t.cod_provincia as prov,
count(t.cod_provincia) as conto,
100*RATIO_TO_REPORT(count(t.cod_provincia)) OVER () as percentage
from clienti t
where t.data_ins between to_date(&da_data,'ddmmrr') and to_date(&a_data,'ddmmrr')
and t.cod_iso='ITA'
group by t.cod_provincia
```
It gives the same result but avoids repeating an expression both as numerator and within the sum in the denominator. And you do not need to worry about potential division by zero errors.
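Outside Oracle there is no `RATIO_TO_REPORT`, but the same percentage falls out of a plain window `SUM`; a sketch with Python's `sqlite3` (window functions need SQLite 3.25+) on invented province counts:

```python
import sqlite3

# conto * 100.0 / SUM(conto) OVER () gives each row's share of the total,
# the same value RATIO_TO_REPORT would produce (times 100).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE conti (prov TEXT, conto INTEGER);
INSERT INTO conti VALUES ('MI', 30), ('RM', 50), ('TO', 20);
""")
rows = conn.execute(
    "SELECT prov, conto * 100.0 / SUM(conto) OVER () FROM conti ORDER BY prov"
).fetchall()
assert rows == [('MI', 30.0), ('RM', 50.0), ('TO', 20.0)]
```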
|
How insert new row with the percentage
|
[
"",
"sql",
"oracle",
""
] |
I have a messaging system.
```
threads
+----+-------+
| id | title |
+----+-------+
| PK | TEXT |
+----+-------+
messages
+----+--------------+----------------+-----------+-------------+---------+
| id | from_id | thread_id | sent | parent | message |
+----+--------------+----------------+-----------+-------------+---------+
| PK | FK(users.id) | FK(threads.id) | TIMESTAMP | messages.id | TEXT |
+----+--------------+----------------+-----------+-------------+---------+
recipients
+----+-----------------+--------------+--------+
| id | msg_id | to_id | status |
+----+-----------------+--------------+--------+
| PK | FK(messages.id) | FK(users.id) | ENUM |
+----+-----------------+--------------+--------+
users
+----+---------+
| id | name |
+----+---------+
| PK | VARCHAR |
+----+---------+
```
Essentially, it's a messaging system in which:
* a message thread can have multiple recipients (recipients table)
* each message thread has a title (threads.title)
* each user has his own status (read, hidden, unread) for each message (recipients.status)
* each message can be replied to (messages.parent points back to another messages.id)
So hopefully my schema is correct.
I wish to get a list of all threads, which shows the most recent message inside the thread, and the author of that message:
```
+----------+------------+-----------+--------------+------------------+---------------+-------------------+
| users.id | users.name | thread.id | thread.title | messages.message | messages.sent | recipients.status |
+----------+------------+-----------+--------------+------------------+---------------+-------------------+
```
The issue is getting the most recent message as part of the query. Given that recipients.status = 1 means unread.. Ignoring users for now (that'd be a relatively simple join onto the rest of the tables...), also assuming we want user 1's threads:
```
SELECT threads.id, title, message, sent, recipients.status
FROM recipients
JOIN messages
ON messages.id=recipients.msg_id
JOIN threads ON threads.id=messages.thread_id
WHERE recipients.to_id=1
AND recipients.status=1
```
This gets me all messages in all threads that the user is participating in. However, I only need the most recent, and that is where I am stuck.
One solution which I am not very fond of (are there any reasons NOT to do this?):
```
SELECT *
FROM (
SELECT threads.id, title, message, sent, recipients.status
FROM recipients
JOIN messages
ON messages.id=recipients.msg_id
JOIN threads ON threads.id=messages.thread_id
WHERE recipients.to_id=1
AND recipients.status=1
ORDER BY sent DESC
) a
GROUP BY id
```
|
I was able to accomplish it with this:
```
SELECT threads.id AS thread_id, threads.title, users.id AS user_id, users.name, m1.message, m1.sent
FROM messages m1
LEFT JOIN messages m2
ON m1.thread_id = m2.thread_id AND m1.sent < m2.sent
JOIN recipients
ON recipients.status=1
AND recipients.msg_id=m1.id
AND recipients.to_id=1
JOIN threads
ON threads.id=m1.thread_id
JOIN users
ON m1.from_id=users.id
WHERE m2.sent IS NULL
```
The relevant parts to the problem would be:
```
SELECT ...
FROM messages m1
LEFT JOIN messages m2
ON m1.thread_id = m2.thread_id AND m1.sent < m2.sent
WHERE m2.sent IS NULL
```
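The anti-join pattern in isolation, runnable with Python's `sqlite3` on toy data: a message is the latest in its thread exactly when no other message in the same thread was sent later.

```python
import sqlite3

# Greatest-n-per-group via a self anti-join: keep m1 only when no m2 in the
# same thread has a later sent value (the LEFT JOIN then yields NULL).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (id INTEGER PRIMARY KEY, thread_id INTEGER, sent TEXT);
INSERT INTO messages VALUES
  (1, 100, '2014-01-01'), (2, 100, '2014-01-05'),
  (3, 200, '2014-02-01');
""")
rows = conn.execute("""
    SELECT m1.id, m1.thread_id, m1.sent
    FROM messages m1
    LEFT JOIN messages m2
      ON m1.thread_id = m2.thread_id AND m1.sent < m2.sent
    WHERE m2.sent IS NULL
    ORDER BY m1.thread_id
""").fetchall()
assert rows == [(2, 100, '2014-01-05'), (3, 200, '2014-02-01')]
```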
|
> One solution which I am not very fond of (are there any reasons NOT to
> do this?)
Your query will not necessarily select the row with the latest `sent` value for each thread. Even though your inner query orders by `sent DESC`, mysql is free to choose any value from each group:
<https://dev.mysql.com/doc/refman/5.0/en/group-by-extensions.html>
> MySQL extends the use of GROUP BY so that the select list can refer to
> nonaggregated columns not named in the GROUP BY clause. This means
> that the preceding query is legal in MySQL. You can use this feature
> to get better performance by avoiding unnecessary column sorting and
> grouping. However, this is useful primarily when all values in each
> nonaggregated column not named in the GROUP BY are the same for each
> group. The server is free to choose any value from each group, so
> unless they are the same, the values chosen are indeterminate.
> Furthermore, the selection of values from each group cannot be
> influenced by adding an ORDER BY clause. Sorting of the result set
> occurs after values have been chosen, and ORDER BY does not affect
> which values within each group the server chooses.
I recommend using variables to emulate `row_number()` to number messages within a thread in order of when they were sent (i.e. most recent sent message within a thread will be #1, 2nd most recent #2, etc) and then to only keep #1 messages.
```
SELECT * FROM (
SELECT threads.id, title, message, sent, recipients.status,
@rowNumber := IF(@prevId = threads.id,@rowNumber+1,1) rowNumber,
@prevId := threads.id
FROM recipients
JOIN messages
ON messages.id=recipients.msg_id
JOIN threads ON threads.id=messages.thread_id
WHERE recipients.to_id=1
AND recipients.status=1
ORDER BY threads.id, sent DESC
) t1 WHERE rowNumber = 1
```
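On MySQL 8+ (or any engine with window functions) the user-variable trick can be replaced by a real `ROW_NUMBER()`; a sketch using Python's `sqlite3` (3.25+) with toy data:

```python
import sqlite3

# ROW_NUMBER() numbers messages within each thread, newest first;
# rn = 1 keeps exactly the latest message per thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (thread_id INTEGER, sent TEXT, body TEXT);
INSERT INTO messages VALUES
  (100, '2014-01-01', 'old'), (100, '2014-01-05', 'new'),
  (200, '2014-02-01', 'only');
""")
rows = conn.execute("""
    SELECT thread_id, body FROM (
        SELECT thread_id, body,
               ROW_NUMBER() OVER (PARTITION BY thread_id
                                  ORDER BY sent DESC) AS rn
        FROM messages
    ) WHERE rn = 1 ORDER BY thread_id
""").fetchall()
assert rows == [(100, 'new'), (200, 'only')]
```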
**Edit**
Another way using `not exists` to only select messages where a more recent message in the same thread does not exist.
```
SELECT threads.id, title, message, sent, recipients.status
FROM recipients
JOIN messages
ON messages.id=recipients.msg_id
JOIN threads ON threads.id=messages.thread_id
WHERE recipients.to_id=1
AND recipients.status=1
AND NOT EXISTS (
    SELECT 1 FROM messages m2
    WHERE m2.thread_id = messages.thread_id
    AND m2.sent > messages.sent
)
```
|
How to get the most recent row in SQL?
|
[
"",
"mysql",
"sql",
""
] |
I am trying to use the DATEDIFF/DATEADD functions in the WHERE clause of my SQL stored procedure. Apart from the rest of the main query, my query contains 2 date columns and 1 main column:
```
WHERE DATE1 BETWEEN @MonthStart AND @MonthEnd
AND "MAINCOLUMN" IS NOT NULL
OR "DATE2" <= DATEADD(MM, -36,GETDATE()) AND "MAINCOLUMN" IS NOT NULL
```
I want to return those rows where `Date1` lies between `@MonthStart` and `@MonthEnd`
Or
where `Date2` lies within the last 36 months (from today's date back to 3 years ago).
I have even tried:
```
WHERE "DATE1" BETWEEN @MonthStart AND @MonthEnd
AND "MAINCOLUMN" IS NOT NULL
OR "DATE2" BETWEEN DATEDIFF(MM, -36, GETDATE()) AND GETDATE()
AND "MAINCOLUMN" IS NOT NULL
```
And:
```
WHERE "DATE1" BETWEEN @MonthStart AND @MonthEnd
AND "MAINCOLUMN" IS NOT NULL
OR DATEDIFF(MM, DATE2, GETDATE()) <= 36 AND "MAINCOLUMN" IS NOT NULL
```
The problem is that everything runs, but it takes ages. Before adding the `OR` condition to the `WHERE` clause, my report took barely 4-5 minutes to run even though it returned nearly 200,000 rows, but after adding that `OR` condition it just goes on and on.
|
Parentheses! And in both cases, `MainColumn` must be non-null. You can simplify your `WHERE` clause like this:
```
WHERE MainColumn IS NOT NULL
AND (DATE1 BETWEEN @MonthStart AND @MonthEnd OR DATE2 <= DATEADD(MM, -36,GETDATE()))
```
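The underlying precedence rule (AND binds tighter than OR, so `a AND b OR c` parses as `(a AND b) OR c`) can be checked directly; a sketch with Python's `sqlite3`:

```python
import sqlite3

# AND binds tighter than OR in SQL, just as in most languages.
conn = sqlite3.connect(":memory:")
no_parens = conn.execute("SELECT 0 AND 0 OR 1").fetchone()[0]  # (0 AND 0) OR 1
parens = conn.execute("SELECT 0 AND (0 OR 1)").fetchone()[0]   # 0 AND (0 OR 1)
assert no_parens == 1
assert parens == 0
```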
|
This might help you out...
```
CREATE TABLE data
(
id uniqueidentifier primary key,
MainColumn varchar(20),
date1 datetime,
date2 datetime
);
declare @MonthStart int;
Declare @MonthEnd int;
set @MonthStart = 4;
set @MonthEnd = 8;
Select * from data WHERE DATE1 BETWEEN @MonthStart AND @MonthEnd
AND MAINCOLUMN IS NOT NULL
OR DATE2 <= DATEADD(MM, -36,GETDATE()) AND data.MainColumn IS NOT NULL
```
|
Datediff in where clause
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"sql-server-2012",
""
] |
I have 10 percentages: 5 have a completion of 50% out of 100% and the remaining 5 have 0% out of 100%.
My aim is to have the 0%'s treated as 100%, then sum the 10 percentages and divide by 10.
for example:
```
(Original Data) ((0.5 + 0.5 + 0.5 + 0.5 + 0.5 + 0 + 0 + 0 + 0 + 0)/10) = 0.35 (35%)
(What I want) ((0.5 + 0.5 + 0.5 + 0.5 + 0.5 + 1 + 1 + 1 + 1 + 1)/10) = 0.75 (75%)
```
How is this achieved in SQL Server? I have tried `NULLIF` and `ISNULL` but I know these are incorrect; I'm not sure how to use a `REPLACE`-like substitution within a `SUM`.
PS: I don't want to actually replace the data in the table; it's for the purpose of a view. Thank you. Also, these are referenced in 10 different columns within 1 row.
example:
```
select FinanceID, SUM(Percentage1 + Percentage2 + Percentage3 + Percentage4
+ Percentage5 + Percentage6 + Percentage7 + Percentage8 + Percentage9
+ Percentage10 )/10 from FinanceTable
where FinanceID = 1
group by FinanceID
```
This query selects the 10 percentages and divides by their count. Now I need to treat the 0 values as 100%. This could be any scenario: all columns could have a value greater than 0, or, for example, Percentage7 could be the only one above 0. To clarify, the 0% is stored in the table as 0, not NULL.
|
This will work
```
create table #temp (financeId varchar(10),percentage1 decimal(6,2),percentage2 decimal(6,2),percentage3 decimal (6,2))
go
insert into #temp values ('ROW1',0.5,0,.75)
insert into #temp values ('ROW2',0.25,0.75,0)
insert into #temp values ('ROW3',0.4,0,.85)
go
select
sum(
case isnull(percentage1,0) when 0 then 1 else percentage1 end +
case isnull(percentage2,0) when 0 then 1 else percentage2 end +
case isnull(percentage3,0) when 0 then 1 else percentage3 end
) / 3
from #temp
group by financeID
```
|
Since all these values are in individual columns, `SUM` cannot be used here: `SUM` aggregates values across rows, not across columns.
Instead you need to do something like this:
```
SELECT
CASE WHEN Percentage1 = 0 THEN 1 ELSE Percentage1 END
+ CASE WHEN Percentage2 = 0 THEN 1 ELSE Percentage2 END
+ CASE WHEN Percentage3 = 0 THEN 1 ELSE Percentage3 END
+ CASE WHEN Percentage4 = 0 THEN 1 ELSE Percentage4 END
+ ...
```
You could also transform your table into a long list of values, and then use `SUM`:
```
SELECT SUM(P) FROM (
SELECT Percentage1 AS P ...
UNION ALL
SELECT Percentage2 ...
UNION ALL
SELECT Percentage3 ...
...
)
```
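A runnable miniature of the unpivot-then-average idea, using Python's `sqlite3` with three hypothetical columns instead of ten:

```python
import sqlite3

# Unpivot the columns with UNION ALL, treat 0 as 1 (100%) with CASE,
# then average: (0.5 + 1.0 + 0.5) / 3.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE finance (FinanceID INTEGER, p1 REAL, p2 REAL, p3 REAL);
INSERT INTO finance VALUES (1, 0.5, 0.0, 0.5);
""")
avg = conn.execute("""
    SELECT AVG(CASE WHEN p = 0 THEN 1.0 ELSE p END)
    FROM (SELECT p1 AS p FROM finance WHERE FinanceID = 1
          UNION ALL SELECT p2 FROM finance WHERE FinanceID = 1
          UNION ALL SELECT p3 FROM finance WHERE FinanceID = 1)
""").fetchone()[0]
assert abs(avg - (0.5 + 1.0 + 0.5) / 3) < 1e-9
```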
|
SQL Server (Using REPLACE and SUM) within the same query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a dateColumn with value "2014-10-10" and startTimeColumn with value "10:10:44". How can I select everything from the table where dateColumn is between 'startDate' and 'endDate', and where the startTimeColumn is greater than or equal to 'startTime' on the 'startDate'?
This is what I mean (I know this will not work, but it's just to show what I mean):
```
SELECT * FROM table1 WHERE dateColumn BETWEEN 'startDate' AND 'endDate' AND IF TRUE(
    IF(dateColumn == 'startDate')
THEN IF(startTimeColumn >= 'startTime')
RETURN TRUE
ELSE
RETURN FALSE
ELSE
RETURN TRUE
);
```
I am using MySQL and I can't use an SQL function.
|
Combine the date and time columns
```
SELECT * FROM table1
WHERE timestamp(dateColumn, startTimeColumn) BETWEEN 'startDateAndTime'
AND 'endDateAndTime'
```
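SQLite has no two-argument `timestamp()`, but concatenating the two columns into one datetime string behaves the same way for comparisons; a sketch with Python's `sqlite3` using the question's sample values:

```python
import sqlite3

# date || ' ' || time yields a single comparable datetime string,
# playing the role of MySQL's timestamp(dateColumn, startTimeColumn).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (d TEXT, tm TEXT);
INSERT INTO t VALUES ('2014-10-10', '10:10:44'), ('2014-10-12', '09:00:00');
""")
rows = conn.execute("""
    SELECT d FROM t
    WHERE datetime(d || ' ' || tm)
          BETWEEN '2014-10-10 10:00:00' AND '2014-10-11 23:59:59'
""").fetchall()
assert rows == [('2014-10-10',)]
```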
|
```
SELECT *
FROM Table
WHERE CAST(DateColumn AS DATETIME) + CAST(TimeColumn AS DATETIME)
BETWEEN 'StartDateTime' AND 'EndDatetime'
```
|
Select from table where date and time values are in separate columns
|
[
"",
"mysql",
"sql",
"date",
"select",
"time",
""
] |
I'm using SQL Server and I need to get all rows where a numeric part of a string falls within a specific number range.
As an example, I've got the following values in the column uniqueStringID:
* BE09 Mytest
* BE10 Mytest
* CE101 Mytest
* CE300 Mytest
and I want to get all rows where the number within uniqueStringID (the number located before the space) is between 10 and 101 (inclusive).
So result would be:
* BE10 Mytest
* CE101 Mytest
My question: is this possible with plain SQL, or would I need a stored procedure for that? And if the former, what would the SQL look like?
|
```
drop table #t
create table #t(id varchar(100))
insert into #t values('BE09 Mytest'),
('BE10 Mytest'),
('CE101 Mytest'),
('CE300 Mytest'),
('CE450595 Mytest')
select id,
       cast(substring(id, patindex('%[0-9]%', id), patindex('%[a-z0-9] [a-z]%', id) - 1) as int)
from #t
where cast(substring(id, patindex('%[0-9]%', id), patindex('%[a-z0-9] [a-z]%', id) - 1) as int) between 10 and 101
```
`FIDDLE DEMO`
`FIDDLE DEMO WITH FEW MORE TEST CASES`
|
Hi, first create this function in SQL Server:
```
CREATE FUNCTION dbo.udf_GetNumeric (@strAlphaNumeric VARCHAR(256))
RETURNS VARCHAR(256)
AS
BEGIN
DECLARE @intAlpha INT
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
BEGIN
WHILE @intAlpha > 0
BEGIN
SET @strAlphaNumeric = STUFF(@strAlphaNumeric, @intAlpha, 1, '' )
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric )
END
END
RETURN ISNULL(@strAlphaNumeric,0)
END
```
and then use the query below to get the result:
```
SELECT * FROM
(
SELECT uniqueStringID,dbo.udf_GetNumeric(uniqueStringID) AS NumuniqueStringID
FROM YourTableName
) a
WHERE NumuniqueStringID >=10 and NumuniqueStringID<=101
```
|
Numberpart of a string in specific range
|
[
"",
"sql",
"sql-server",
""
] |
```
select *
from date
where DATE_FORMAT(sdate,'%m-%d') between
DATE_FORMAT(CURDATE(), '%m-%d') and
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 7 DAY),'%m-%d')
```
This code runs perfectly, but when the current date is 29th December and the interval is 7 days, it shows no output, even though I also want birthdays between 29th December and 4th January.
In that case this code does not meet my requirement.
|
Try adding the year, like this:
```
SELECT *
FROM dates
WHERE DATE_FORMAT(sDATE,'%Y-%m-%d') BETWEEN
DATE_FORMAT(CURDATE(), '%Y-%m-%d') AND
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 7 DAY),'%Y-%m-%d')
```
If `sdate` is the user subscription date and you want to wish him his birthday, add 1 year to the `sdate` :
```
SELECT *
FROM dates
WHERE DATE_FORMAT(ADDDATE(sdate,INTERVAL 1 YEAR),'%Y-%m-%d') BETWEEN
DATE_FORMAT(CURDATE(), '%Y-%m-%d') AND
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 7 DAY),'%Y-%m-%d');
```
If you want to get all anniversary dates, you can do this (sorry for the dirtiness, I just want to know if it's the result you expect):
```
SELECT * FROM dates WHERE DATE_FORMAT(sdate,'%m-%d') IN (
DATE_FORMAT(CURDATE(),'%m-%d'),
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 1 DAY),'%m-%d'),
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 2 DAY),'%m-%d'),
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 3 DAY),'%m-%d'),
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 4 DAY),'%m-%d'),
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 5 DAY),'%m-%d'),
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 6 DAY),'%m-%d'),
DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL 7 DAY),'%m-%d'))
```
Here is some PHP code to generate the query with the interval you want:
```
$interval = 10;
$query = "SELECT * FROM dates WHERE DATE_FORMAT(sdate,'%m-%d') IN (DATE_FORMAT(CURDATE(),'%m-%d')";
for($i = 1; $i <= $interval; $i++) {
$query .= ", DATE_FORMAT(ADDDATE(CURDATE(),INTERVAL $i DAY),'%m-%d')";
}
$query .= ");";
```
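The same windowed month-day membership idea can be sketched client-side. `upcoming_birthdays` is a hypothetical helper (not from the question): it builds the set of month-day strings for the next N days, which wraps over the year boundary automatically.

```python
from datetime import date, timedelta

def upcoming_birthdays(birthdays, today, days=7):
    """Keep birthdays whose month-day falls within the next `days` days,
    wrapping correctly over the year boundary (e.g. Dec 29 -> Jan 5)."""
    window = {(today + timedelta(d)).strftime("%m-%d") for d in range(days + 1)}
    return [b for b in birthdays if b.strftime("%m-%d") in window]

people = [date(1990, 12, 30), date(1985, 1, 3), date(1970, 6, 15)]
# Dec 30 and Jan 3 both fall within 7 days of Dec 29, despite the year wrap.
print(upcoming_birthdays(people, date(2014, 12, 29)))
```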
|
I think the easiest way to search for birthdays is to subtract the day of the year from the dates. This eliminates the problems at the end of the year:
```
select *
from date
where DATE_FORMAT(sdate - interval dayofyear(sdate) day, '%m-%d')
between DATE_FORMAT(CURDATE() - interval dayofyear(sdate) day, '%m-%d') and
DATE_FORMAT(CURDATE() - interval dayofyear(sdate) day + INTERVAL 7 DAY, '%m-%d');
```
|
How can I find upcoming birthday in ending of december month?
|
[
"",
"mysql",
"sql",
"date",
""
] |
I need to perform a query on a large table that has a `datetime` column that is indexed.
We need to query the data for a range from a month (at a minimum) to multiple months.
This query would be executed from Cognos TM1 and the input would be a period like `YYYYMM`. My question is: how to convert the `YYYYMM` input to a format that can be used to query that table (with the index being used).
Let's say if the input is
* From Date: '201312'
* To Date: '201312'
then we need to convert it to 'between 01-12-2013 and 31-12-2013' in the query.
Since we need this to be hooked up in Cognos TM1, we would not be able to write a procedure or declare variables (TM1 somehow does not like them).
Thanks in advance for your reply.
|
I would do something like this:
```
create procedure dbo.getDataForMonth
@yyyymm char(6) = null
as
--
-- use the current year/month if the parameter was omitted
--
set @yyyymm = case coalesce(@yyyymm,'')
when '' then convert(char(6),current_timestamp,112)
else @yyyymm
end
--
-- this should throw an exception if the date is invalid
--
declare @dtFrom date = convert(date,@yyyymm+'01') -- 1st of specified month
declare @dtThru date = dateadd(month,1,@dtFrom) -- 1st of next month
--
-- your Big Ugly Query Here
--
select *
from dbo.some_table t
where t.date_of_record >= @dtFrom
and t.date_of_record < @dtThru
--
-- That's about all there is to it.
--
return 0
go
```
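The half-open month-boundary computation above (`@dtFrom` inclusive, `@dtThru` exclusive) can be sketched client-side as well. `month_range` is a hypothetical helper, not part of the question's setup; the half-open range sidesteps end-of-month and leap-year edge cases entirely.

```python
from datetime import date

def month_range(yyyymm: str):
    """Turn 'YYYYMM' into a half-open [first_of_month, first_of_next_month) pair."""
    year, month = int(yyyymm[:4]), int(yyyymm[4:])
    start = date(year, month, 1)
    # December rolls over into January of the next year.
    end = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
    return start, end

print(month_range("201312"))  # (datetime.date(2013, 12, 1), datetime.date(2014, 1, 1))
```

Filtering with `col >= start AND col < end` keeps the predicate sargable, exactly as in the stored procedure above.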
|
Suppose you are getting this `YYYYMM` value in a varchar variable `@DateFrom`.
You can do something like
```
DECLARE @DateFrom VARCHAR(6) = '201201';
-- Append '01' to any passed string and it will get all
-- records starting from that month in that year
DECLARE @Date VARCHAR(8) = @DateFrom + '01'
-- in your query do something like
SELECT * FROM TableName WHERE DateTimeColumn >= @Date
```
Passing Datetime in a ansi-standard format i.e `YYYYMMDD` is a sargable expression and allows sql server to take advantage of indexes defined on that datetime column.
here is an article written by *Rob Farley* about [`SARGable functions in SQL Server`](http://blogs.lobsterpot.com.au/2010/01/22/sargable-functions-in-sql-server/).
|
Converting YYYYMM format to YYYY-MM-DD in SQL Server
|
[
"",
"sql",
"sql-server",
"date",
"cognos-tm1",
""
] |
I have this SQL statement:
```
select DISTINCT id_etudiant,g1.id_cours,note
from etudiant NATURAL JOIN inscription NATURAL JOIN groupe g1,groupe g2
where g1.id_cours = g2.id_cours
group by id_etudiant,g1.id_cours,note
having count(g1.id_cours) > 1
order by id_etudiant asc
;
```
Which gives this result:
```
ID_ETUDIANT ID_COURS NOTE
----------- ---------- ----
1 8 E
2 1 A
2 2 A
2 3 B
3 1 B
3 1 E
3 2
3 3 B
8 8 E
8 8
```
so my question is, how do I obtain this result instead?
```
ID_ETUDIANT ID_COURS NOTE
----------- -------- ----
3 1 B
3 1 E
8 8 E
8 8
```
I'm not good at asking questions; I'm sorry about this.
|
So from your responses in the comments, I believe you want the rows where an etudiant has the same cours more than once.
I think the problem with the query you have is that it includes note. Try this simplified version:
```
select DISTINCT id_etudiant,g1.id_cours
from etudiant NATURAL JOIN inscription NATURAL JOIN groupe g1,groupe g2
where g1.id_cours = g2.id_cours
group by id_etudiant,g1.id_cours
having count(g1.id_cours) > 1
order by id_etudiant asc
;
```
If it gives the correct rows, then add note back in at the end:
```
select table1.id_etudiant, table1.id_cours, table2.note
from (
select DISTINCT id_etudiant,g1.id_cours
from etudiant NATURAL JOIN inscription NATURAL JOIN groupe g1,groupe g2
where g1.id_cours = g2.id_cours
group by id_etudiant,g1.id_cours
having count(g1.id_cours) > 1
) as table1
left outer join inscription as table2
on table1.id_etudiant = table2.id_etudiant and table1.id_cours = table2.id_cours
order by id_etudiant, id_cours
;
```
This would be easier if there were sample table structures and data to work from.
|
Starting from your original query (I skipped the DISTINCT because it is not necessary) and under the assumption that you want only the rows with the same etudiant and cours but a different note, this query may help you:
```
SELECT id_etudiant, id_cours, note
FROM(
SELECT id_etudiant, g1.id_cours, note,
COUNT(*) OVER (PARTITION BY id_etudiant, g1.id_cours) cnt
FROM etudiant
NATURAL
JOIN inscription
NATURAL
JOIN groupe g1, groupe g2
WHERE g1.id_cours = g2.id_cours
GROUP
BY id_etudiant, g1.id_cours, note
HAVING COUNT(g1.id_cours) > 1
)
WHERE cnt > 1
ORDER
BY id_etudiant ASC;
```
|
associate two column together
|
[
"",
"sql",
"oracle",
""
] |
I have 2 tables, tableA and tableB
```
tableA - id int
name varchar(50)
tableB - id int
fkid int
name varchar(50)
```
Both tables are joined between id and fkid.
Below are sample rows from tableA

Below is output from tableB

I want to join both tables and get only top row of joined table. So output will be like below
```
Id Name fkid
1 P1 1
2 P2 4
3 P3 null
```
Here is [Sql fiddle](http://sqlfiddle.com/#!6/f09aa/2)
How can I achieve this with a single query? I know that I can loop through in my .NET code and retrieve the top rows, but I want it in a single query.
|
```
select ta.id, ta.name, min(tb.id) from tableA ta
left join tableB tb on tb.fkid=ta.id
group by ta.id, ta.name
```
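The answer can be checked against the question's sample shape with SQLite (same `LEFT JOIN` + `MIN` semantics); the child rows below are invented for illustration, chosen so the minimum ids come out as 1 and 4, matching the desired output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tableA (id INTEGER, name TEXT);
    CREATE TABLE tableB (id INTEGER, fkid INTEGER, name TEXT);
    INSERT INTO tableA VALUES (1,'P1'),(2,'P2'),(3,'P3');
    INSERT INTO tableB VALUES (1,1,'c1'),(2,1,'c2'),(4,2,'c3'),(5,2,'c4');
""")
rows = conn.execute("""
    SELECT ta.id, ta.name, MIN(tb.id)
    FROM tableA ta
    LEFT JOIN tableB tb ON tb.fkid = ta.id
    GROUP BY ta.id, ta.name
    ORDER BY ta.id
""").fetchall()
print(rows)  # [(1, 'P1', 1), (2, 'P2', 4), (3, 'P3', None)]
```

The `LEFT JOIN` is what keeps `P3` in the result with a NULL fkid, as the question requires.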
|
```
select a.id,a.name,b.fid from tableA a left join
(
select min(id) fid ,fkid from tableB group by fkid
)b
on a.id = b.fkid
```
|
getting top row of joined table
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Having following query:
```
select table_name
from user_tables
where table_name in ('A','B','C','D','E','F');
```
Assuming only user\_tables records B, C, and F exist, I want to retrieve the non-existing values A, D and E. This is a simple example; in the real world the list can be huge.
|
A good way to generate fake rows is with a standard collection such as `sys.odcivarchar2list`:
```
select
tables_to_check.table_name,
  case when user_tables.table_name is null then 'No' else 'Yes' end table_exists
from
(
select column_value table_name
from table(sys.odcivarchar2list('does not exist', 'TEST1'))
) tables_to_check
left join user_tables
on tables_to_check.table_name = user_tables.table_name
order by tables_to_check.table_name;
TABLE_NAME TABLE_EXISTS
---------- ------------
TEST1 Yes
does not exist No
```
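The same "materialize the candidate list, then find the misses" idea can be sketched in SQLite, which has no `sys.odcivarchar2list`; a temp table stands in for the collection, and `NOT EXISTS` plays the role of the anti-join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_tables (table_name TEXT);
    INSERT INTO user_tables VALUES ('B'),('C'),('F');
""")
candidates = ["A", "B", "C", "D", "E", "F"]
conn.execute("CREATE TEMP TABLE to_check (table_name TEXT)")
conn.executemany("INSERT INTO to_check VALUES (?)", [(c,) for c in candidates])
# Anti-join: keep candidates with no matching catalog row.
missing = [r[0] for r in conn.execute("""
    SELECT c.table_name FROM to_check c
    WHERE NOT EXISTS (SELECT 1 FROM user_tables u
                      WHERE u.table_name = c.table_name)
    ORDER BY c.table_name
""")]
print(missing)  # ['A', 'D', 'E']
```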
|
If you have the list of all those tables to be checked in Table1, then you can use a NOT EXISTS clause:
```
select name
from Table1 T1
where not exists ( select 1 from
user_tables U
where T1.name = U.table_name)
```
|
Oracle SQL : Retrieving non-existing values from IN clause
|
[
"",
"sql",
"oracle",
""
] |
```
invoice_payments
invoice_id amount type
1 10.00 Cash
1 5.00 Cash
1 5.00 Cash
2 70.00 Store
2 30.00 Cash
```
I want to get rows with the total amount paid per invoice, along with the total amount paid with type = 'Store'.
I am using this query:
```
SELECT invoice_id, SUM( amount ) AS total_paid, (
SELECT SUM( amount ) FROM invoice_payments WHERE TYPE = 'store' ) AS total_store
FROM invoice_payments where type!='Store'
GROUP BY invoice_id
```
Result:
```
invoice_id total_paid total_store
1 20.00 70.00
2 100.00 70.00
```
Desired Result:
```
invoice_id total_paid total_store
1 20.00 0.00
2 100.00 70.00
```
As you can see, the total\_store field is not valued correctly. Please suggest the best solution.
I would really appreciate any reply.
Thanks
|
```
select `invoice_id`,
       (select sum(`amount`) from `payments` p
         where p.invoice_id = a.invoice_id) 'total_paid',
       (select sum(`amount`) from `payments` c
         where c.invoice_id = a.invoice_id and type = 'store') 'total_store'
from `payments` a
group by `invoice_id`
```
|
You are better off using a case statement here instead of a subquery:
```
SELECT
invoice_id,
Sum(amount) as total_paid,
Sum(CASE WHEN type='store' THEN amount ELSE 0 END) as Store_Total
FROM
invoice_payments
GROUP BY invoice_id
```
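The conditional-aggregation pattern can be verified in SQLite against the question's sample rows. Note that SQLite string comparison is case-sensitive (unlike MySQL's default collation), so `'Store'` is matched with its exact casing here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoice_payments (invoice_id INT, amount REAL, type TEXT);
    INSERT INTO invoice_payments VALUES
        (1,10.0,'Cash'),(1,5.0,'Cash'),(1,5.0,'Cash'),
        (2,70.0,'Store'),(2,30.0,'Cash');
""")
# SUM over a CASE expression counts only the 'Store' amounts, giving a
# per-invoice store total instead of one global subquery total.
rows = conn.execute("""
    SELECT invoice_id,
           SUM(amount) AS total_paid,
           SUM(CASE WHEN type = 'Store' THEN amount ELSE 0 END) AS total_store
    FROM invoice_payments
    GROUP BY invoice_id
    ORDER BY invoice_id
""").fetchall()
print(rows)  # invoice 1 -> store total 0, invoice 2 -> store total 70
```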
|
SQL: select Sub query opposite to its parent query's where condition
|
[
"",
"mysql",
"sql",
""
] |
I'm altering an existing table to add an Identity column. That I can do, no problem.
But I'm wanting to be sure that people who look at it in the future will see that it has the identity column added, so I really want to make it column 1. I know this is totally inconsequential to the system's operation; it's strictly for human reading.
Does anyone know of a way to do this? I've looked at the TSQL syntax for Alter Table and for column\_definition, and don't see anything; but I'm hoping someone knows of a way to make this happen.
FWIW, this is a one-time operation (but on many servers, so it needs to be automated), so I'm not worried whether any "trick" might go away in the future -- as long as it works now. We're using recent versions of SQL Server Express.
Thanks for any suggestions.
|
According to Microsoft you can do this only using SQL Server Management Studio.
Check [this](http://msdn.microsoft.com/en-us/library/aa337556.aspx)
|
Solve this by following these steps:
```
-- First, add identity column
alter table
mytable
add
id int identity(1, 1) not null
-- Second, create new table from existing one with correct column order
select
id,
col1,
col2
into
newtable
from
mytable
```
Now you've got `newtable` with reordered columns. If you need to you can drop your `mytable` and rename `newtable` to `mytable`:
```
drop table
mytable
exec sp_rename
'newtable', 'mytable'
```
|
SQL Server -- Any way to add a column and make it first column in table?
|
[
"",
"sql",
"sql-server",
"alter-table",
"identity-column",
""
] |
I have two tables tbl\_Properties and tbl\_Locations
tbl\_Properties has list of properties (including FK Location\_Id)
tbl\_Location has a list of locations
I am using following stored procedure to search properties in a location:
```
spPropGetSearch
(
@Location_Id int
)
AS
SELECT
P.Prop_Id,
P.Prop_Title,
P.Prop_Bedrooms,
P.Prop_Price,
L.Location_Title
FROM
tbl_Properties P
INNER JOIN tbl_Locations L ON L.Location_Id = P.Location_Id
WHERE
(P.Location_Id = @Location_Id OR @Location_Id = '0')
ORDER BY P.Prop_DateAdded DESC
```
I pass location\_id (such as '1002') and it works just fine and returns properties located within that location / area.
Now, I want to pass multiple location\_Ids such as '1002', '1005', '1010' to search properties located in all of those areas.
How would I do that?
I would appreciate a detailed reply as I am not a database expert.
**I found the following example and it's working fine. Please can you look into it and check if there's any vulnerability.**
```
USE AdventureWorks2012
GO
CREATE PROCEDURE usp_Employeelist
@Cities NVARCHAR(30)
AS
DECLARE @CitiesXML AS XML
SET @CitiesXML = cast(('<a>'+replace(@Cities,',' ,'</a><a>')+'</a>') AS XML)
SELECT
BusinessEntityID
, FirstName
, JobTitle
, City
FROM HumanResources.vEmployee
WHERE City IN
(
SELECT
A.value('.', 'varchar(max)')
FROM @CitiesXML.nodes('A') AS FN(A)
)
ORDER BY BusinessEntityID
GO
--Execute the stored procedure using multiple values
--through one parameter in this stored procedure
USE AdventureWorks2012
GO
EXEC usp_Employeelist
'Cambridge,Newport Hills,Berlin,Bordeaux'
GO
```
|
Use [table-valued parameters](http://msdn.microsoft.com/en-us/library/bb675163%28v=vs.110%29.aspx). The first step is to create your type:
```
CREATE TYPE dbo.ListOfInt AS TABLE (Value INT);
```
I tend to use generic names for these to allow for reuse without any confusion as to names (e.g. if you named it `LocationIDs` it would then become confusing to store a list of properties in the type).
Then you can reference this type in your stored procedure:
```
CREATE PROCEDURE dbo.spPropGetSearch @LocationIDs dbo.ListOfInt READONLY
AS
BEGIN
SELECT P.Prop_Id,
P.Prop_Title,
P.Prop_Bedrooms,
P.Prop_Price,
L.Location_Title
FROM tbl_Properties P
INNER JOIN tbl_Locations L
ON L.Location_Id = P.Location_Id
WHERE P.Location_Id IN (SELECT Value FROM @LocationIDs)
OR @Location_Id = 0
ORDER BY P.Prop_DateAdded DESC;
END
```
You can then call this using something like:
```
DECLARE @LocationIDs dbo.ListOfInt;
INSERT @LocationIDs (Value)
VALUES (1002), (1005), (1010);
EXECUTE dbo.spPropGetSearch @LocationIDs;
```
**EDIT**
Found the error, it was here:
```
OR @Location_Id = 0
```
Which leads me on to a new point, it looks like you want to have an option to return everything if `0` is passed. I would do this using `IF/ELSE`:
```
CREATE PROCEDURE dbo.spPropGetSearch @LocationIDs dbo.ListOfInt READONLY
AS
BEGIN
IF EXISTS (SELECT 1 FROM @LocationIDs)
BEGIN
SELECT P.Prop_Id,
P.Prop_Title,
P.Prop_Bedrooms,
P.Prop_Price,
L.Location_Title
FROM tbl_Properties P
INNER JOIN tbl_Locations L
ON L.Location_Id = P.Location_Id
WHERE P.Location_Id IN (SELECT Value FROM @LocationIDs)
ORDER BY P.Prop_DateAdded DESC;
END
ELSE
BEGIN
SELECT P.Prop_Id,
P.Prop_Title,
P.Prop_Bedrooms,
P.Prop_Price,
L.Location_Title
FROM tbl_Properties P
INNER JOIN tbl_Locations L
ON L.Location_Id = P.Location_Id
ORDER BY P.Prop_DateAdded DESC;
END
END
GO
```
So if the table valued parameter passed is empty, it will return all records, if it contains records it will only contain the location\_ids supplied. Putting `OR` in queries like this makes it almost impossible for SQL Server to use an appropriate index.
---
**ADDENDUM**
---
To answer the comment
> instead of using: IF EXISTS (SELECT 1 FROM @LocationIDs) how can we use OR condition in WHERE clause
My answer would be: don't. I suggested using `IF/ELSE` for a reason, not to over-complicate things, but to improve performance. I had hoped to deter you from this approach when I said **"Putting OR in queries like this makes it almost impossible for SQL Server to use an appropriate index."**
You could rewrite the query as follows:
```
SELECT P.Prop_Id,
P.Prop_Title,
P.Prop_Bedrooms,
P.Prop_Price,
L.Location_Title
FROM tbl_Properties P
INNER JOIN tbl_Locations L
ON L.Location_Id = P.Location_Id
WHERE P.Location_Id IN (SELECT Value FROM @LocationIDs)
OR NOT EXISTS (SELECT 1 FROM @LocationIDs)
ORDER BY P.Prop_DateAdded DESC;
```
The problem with this approach is that you really have two options in the same query, and these options are likely to need two different execution plans. If you have an index on `p.Location_ID`, and you have records in `@LocationIDs`, then the best query plan is to use an index seek on `tbl_Properties.Location_ID`. If `@LocationIDs` is empty, then the index seek is pointless and the best plan is a clustered index scan (table scan) on `tbl_Properties`. Since SQL Server uses cached plans, it can only cache one or the other, which means that if it stores the index seek option, every time you pass an empty table you have a sub-optimal plan; alternatively, if it caches the table scan plan, every time you pass values for location ID you are not taking advantage of the index that is there.
One workaround is `OPTION (RECOMPILE)`:
```
SELECT P.Prop_Id,
P.Prop_Title,
P.Prop_Bedrooms,
P.Prop_Price,
L.Location_Title
FROM tbl_Properties P
INNER JOIN tbl_Locations L
ON L.Location_Id = P.Location_Id
WHERE P.Location_Id IN (SELECT Value FROM @LocationIDs)
OR NOT EXISTS (SELECT 1 FROM @LocationIDs)
ORDER BY P.Prop_DateAdded DESC
OPTION (RECOMPILE);
```
This forces the query to be recompiled for every execution to ensure you have the optimal plan for the current execution. But since you only really have two options, this is a lot of unnecessary recompilation. So the best option is to have two queries, each with its own cached execution plan, and to use the `IF/ELSE` flow operator to branch to the appropriate query based on what has been passed as `@LocationIDs`.
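When table-valued parameters aren't available (or when calling from client code against another database), a common portable alternative is expanding one bind placeholder per value, so the ids stay parameterized with no string concatenation of user data. This is a generic sketch with made-up table contents, not the poster's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl_Properties (Prop_Id INT, Location_Id INT);
    INSERT INTO tbl_Properties VALUES (1,1002),(2,1003),(3,1005),(4,1010);
""")
location_ids = [1002, 1005, 1010]
# One "?" per value: the list length drives the SQL text, the values
# themselves travel as bound parameters (no injection risk).
placeholders = ",".join("?" for _ in location_ids)
sql = (f"SELECT Prop_Id FROM tbl_Properties "
       f"WHERE Location_Id IN ({placeholders}) ORDER BY Prop_Id")
props = [r[0] for r in conn.execute(sql, location_ids)]
print(props)  # [1, 3, 4]
```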
|
```
spPropGetSearch
(
@Location_Id varchar(500)
)
AS
DECLARE @strQuery varchar(MAX)
SET @strQuery='
SELECT
P.Prop_Id,
P.Prop_Title,
P.Prop_Bedrooms,
P.Prop_Price,
L.Location_Title
FROM
tbl_Properties P
INNER JOIN tbl_Locations L ON L.Location_Id = P.Location_Id
WHERE
P.Location_Id in ('+ @Location_Id+') OR @Location_Id = ''0''
ORDER BY P.Prop_DateAdded DESC'
EXEC (@strQuery)
```
pass parameter in stored Procedure like as
```
EXEC spPropGetSearch '1002,1005,1010'
```
|
SQL Server 2008 Passing Multi-value Parameters or Parameter Array to a Stored Procedure
|
[
"",
"sql",
"arrays",
"sql-server-2008",
"stored-procedures",
"parameters",
""
] |
I am using oracle, toad.
I want to replace , with backspace.
Column consists of: `bla bla foo ,CALL HELPDESK`
It has to replace: `bla bla foo CALL HELPDESK`
Basically , should be removed
I tried like this:
```
UPDATE Mytable t
SET column = REPLACE(t.U_MSG, ''%, CALL HELPDESK'', '% CALL HELPDESK')
```
|
`REPLACE` doesn't use wildcards, it simply replaces all instances of the first string with the second string. This should work:
```
UPDATE Mytable t
SET column = REPLACE(t.U_MSG, ', CALL HELPDESK', ' CALL HELPDESK')
```
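Since `REPLACE` is a literal substring substitution with no wildcards, the behavior is easy to check in SQLite on the question's sample string (the search pattern `,CALL` is chosen here to fit that exact input, which has a space before the comma):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Every literal occurrence of the search string is replaced; '%' would be
# treated as a plain character, not a wildcard.
fixed = conn.execute(
    "SELECT REPLACE('bla bla foo ,CALL HELPDESK', ',CALL', 'CALL')"
).fetchone()[0]
print(fixed)  # bla bla foo CALL HELPDESK
```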
|
As the replaced string is *fixed* why not simply use that:
```
UPDATE Mytable t
SET column = SUBSTR(t.U_MSG, 1, LENGTH(t.U_MSG)-15)
-- ^^
-- length of the replaced string
-- hard coded in this example, should
-- probably use `LENGTH(...)` for ease
-- of maintenance in production code
```
This is probably less clever than other solutions, but this will work even if, by unexpected twist of fate, the replaced string is present several times in some of your strings:
```
WITH t AS (
SELECT 'PLEASE, CALL HELPDESK' U_MSG FROM DUAL
UNION ALL SELECT 'CALL HELPDESK, CALL HELPDESK! THEY SAID, CALL HELPDESK' FROM DUAL
)
SELECT SUBSTR(t.U_MSG, 1, LENGTH(t.U_MSG)-15) || ' CALL HELP DESK' MSG FROM t;
```
Producing:
```
MSG
------------------------------------------------------
PLEASE CALL HELP DESK
CALL HELPDESK, CALL HELPDESK! THEY SAID CALL HELP DESK
```
|
How to replace part of string in a column, in oracle
|
[
"",
"sql",
"oracle",
""
] |
I have the following tables:
```
copy(movie_id,copy_id)
rented(copy_id,outdate,returndate)
```
If a movie is rented out, the returndate is set to null in the database.
There will be multiple copies of the same movie. for a single movie\_id, we can have multiple copy\_id.
I need to retrieve the movies that have been rented out completely, i.e. all the copies of the movie have been rented out, or put another way, all the copies of a movie are present in the rented table with the returndate set as null.
I have tried inner joins, but am not being able to relate all the tuples in the copy table to the rented table.
Each copy has a globally unique copy\_id. So copies of 2 different movies cannot have the same copy\_id.
If the copy has never been rented, it will not show up in the list, however it means that the movie is still in stock, as it has never been rented. This should not show up.
The same movie and copy will definitely appear in rented multiple times, if it has been rented more than once.
|
This turned out a little bit more difficult than I thought. I believe this is the correct answer.
"All movies, for which for all of its copies there exists a rented where returndate is null"
In mathematical notation (A=for All, E=there exists):
{ m : M | ( A c : C | c.movie\_id = m.movie\_id @ ( E r : R | r.copy\_id = c.copy\_id @ r.returndate = null ) ) @ m.movie\_id }
Which can be rephrased to:
"All movies, for which there doesn't exist a copy, for which there doesn't exist a rented where returndate is null"
Which translates to the following SQL.
```
SELECT DISTINCT m.movie_id
FROM Copy m
WHERE NOT EXISTS
(SELECT 1 FROM Copy c
WHERE c.movie_id = m.movie_id
AND NOT EXISTS
(SELECT 1 FROM Rented r
WHERE r.copy_id = c.copy_id
     AND returndate IS NULL))
```
|
You can do what you want by using a `left join` and aggregation with a `having` clause. Then, count the number of records with no return date and compare that to the number of copies:
```
SELECT c.movie_id
FROM copy c LEFT JOIN
rented r
ON c.copy_id = r.copy_id
GROUP BY c.movie_id
HAVING SUM(r.returndate IS NULL) = COUNT(DISTINCT c.copy_id)
```
Note the use of `SUM()` for the comparison. This counts the number of rows where the value is "true".
The above query assumes that a single copy cannot be rented more than once at a time. A reasonable assumption, but always worth checking. An alternative `having` clause takes this into account:
```
HAVING count(distinct case when r.returndate is null then c.copy_id end) = count(distinct c.copy_id)
```
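The count-comparison approach can be sketched in SQLite with invented sample data: movie 1 has all copies out, movie 2 has one returned, movie 3 was never rented. One assumption is made explicit here: an extra `r.copy_id IS NOT NULL` guard is added so left-join misses (never-rented copies, where `r.returndate` is also NULL) don't count as "out", matching the question's rule that never-rented means in stock.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE copy (movie_id INT, copy_id INT);
    CREATE TABLE rented (copy_id INT, outdate TEXT, returndate TEXT);
    INSERT INTO copy VALUES (1,10),(1,11),(2,20),(2,21),(3,30);
    INSERT INTO rented VALUES
        (10,'2014-01-01',NULL),(11,'2014-01-02',NULL),       -- movie 1 fully out
        (20,'2014-01-03',NULL),(21,'2014-01-04','2014-01-05'); -- movie 2 partly out
""")
# A movie qualifies when every distinct copy has an outstanding rental.
rows = conn.execute("""
    SELECT c.movie_id
    FROM copy c LEFT JOIN rented r ON c.copy_id = r.copy_id
    GROUP BY c.movie_id
    HAVING SUM(r.returndate IS NULL AND r.copy_id IS NOT NULL)
           = COUNT(DISTINCT c.copy_id)
    ORDER BY c.movie_id
""").fetchall()
print(rows)  # only movie 1
```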
|
mysql inner join to relate all tuples
|
[
"",
"mysql",
"sql",
"join",
"data-retrieval",
""
] |
When I try to open SQL developer then I am getting this error:
```
An error was encountered performing the requested operation:
IO Error: Socket read timed out
The Network Adapter could not establish the connection Vendor code 17002
```
What is the problem?
|
Typical causes for that error:
1. Your database listener is not running
2. It is running, but maybe there is a firewall intervening
3. It is running, no firewall issue, but you have provided incorrect connection details.
Source: <https://community.oracle.com/thread/3519434>
|
**Make sure your installation is on a LOCAL drive!**
If your SQL Developer is launched from a network location (specifically, a different location than your database network location), that *could* be the issue.
I was having the same problem and **I moved my installation to my local drive** and it connected right away :)
|
What does the "IO Error: Socket read timed out" means (SQL developer)
|
[
"",
"sql",
"oracle-sqldeveloper",
""
] |
I have a table with many columns. I count specific names in one column, and if a name is not found, I want to display it with a count of zero rather than leave it out of the list.
```
SELECT Names, COUNT (*)
FROM NAMESTABLE
WHERE names IN
('Jenny',
'Penny',
'George',
'James',
'Jessica',
'Monica',
'Erica')
AND
adeddate BETWEEN '2014/10/15' AND '2014/10/16'
GROUP BY names
```
---
```
NAMES || COUNT(*)
Jenny || 33
Penny || 4
George || 25
James || 87
```
So I want Jessica, Monica and Erica shown as ZERO even though these names are not in the column.
```
Names || Count(*)
Jenny || 33
Penny || 4
George || 25
James || 87
Jessica || 0
Monica || 0
Erica || 0
```
`nvl(count(*),0)` does not work
|
You could group your results by name and then `left join` the result to a table of your names to fill in the blanks:
```
SELECT rn.name, NVL(cnt, 0)
FROM (SELECT 'Jenny' AS name FROM dual
UNION ALL
SELECT 'Penny' FROM dual
UNION ALL
SELECT 'George' FROM dual
UNION ALL
SELECT 'James' FROM dual
UNION ALL
SELECT 'Jessica' FROM dual
UNION ALL
SELECT 'Monica' FROM dual
UNION ALL
SELECT 'Erica' FROM dual) rn
LEFT JOIN (SELECT name, COUNT(*) AS cnt
FROM namestable
WHERE adeddate BETWEEN '2014/10/15' AND '2014/10/16'
GROUP BY name) n ON n.name = rn.name
```
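The same "left join from the name list" pattern works in SQLite, with a temp table standing in for Oracle's `dual` UNION chain. The key detail is `COUNT(n.names)`: counting the joined column (rather than `*`) counts only actual matches, so missing names come out as 0.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE namestable (names TEXT);
    INSERT INTO namestable VALUES ('Jenny'),('Jenny'),('Penny'),('George');
""")
wanted = ["Jenny", "Penny", "George", "Jessica"]
conn.execute("CREATE TEMP TABLE wanted (name TEXT)")
conn.executemany("INSERT INTO wanted VALUES (?)", [(n,) for n in wanted])
rows = conn.execute("""
    SELECT w.name, COUNT(n.names)
    FROM wanted w LEFT JOIN namestable n ON n.names = w.name
    GROUP BY w.name
    ORDER BY w.name
""").fetchall()
print(rows)  # Jessica shows up with a count of 0
```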
|
Then you must use different syntax:
```
SELECT t.Names, COUNT(n.Names)
FROM (
SELECT 'Jenny' AS names FROM DUAL UNION ALL
SELECT 'Penny' FROM DUAL UNION ALL
SELECT 'George' FROM DUAL UNION ALL
SELECT 'James' FROM DUAL UNION ALL
SELECT 'Jessica' FROM DUAL UNION ALL
SELECT 'Monica' FROM DUAL UNION ALL
SELECT 'Erica' FROM DUAL
) t
LEFT OUTER JOIN NAMESTABLE n
ON n.names = t.names AND n.adeddate BETWEEN '2014/10/15' AND '2014/10/16'
GROUP BY t.names
```
|
ORACLE display count to zero, when row is null in column
|
[
"",
"sql",
"oracle",
"select",
"count",
"zero",
""
] |
I have a table with many range partitions. I need to get the names of all partitions when I give a date.
For example, if I input the date 20/09/2014, it should list all partitions before that given date.
```
create or replace function get_part_name(p_date in date)
return varchar2 is
d date;
retp varchar2(30);
mind date:=to_date('4444-01-01','yyyy-mm-dd');
str varchar2(32000);
cursor c is
select high_value, partition_name p
from user_tab_partitions
where table_name='TEST';
begin
for r in c loop
str := r.high_value;
execute immediate 'select '||str||' from dual' into d;
if p_date<d and d<mind then
retp:=r.p;
mind:=d;
end if;
end loop;
return retp;
end;
```
This is returning a single partition name. I need all of them; is it possible?
|
```
WITH DATA AS (
select table_name,
partition_name,
to_date (
trim (
'''' from regexp_substr (
extractvalue (
dbms_xmlgen.getxmltype (
'select high_value from all_tab_partitions where table_name='''
|| table_name
|| ''' and table_owner = '''
|| table_owner
|| ''' and partition_name = '''
|| partition_name
|| ''''),
'//text()'),
'''.*?''')),
'syyyy-mm-dd hh24:mi:ss')
high_value_in_date_format
FROM all_tab_partitions
WHERE table_name = 'SALES' AND table_owner = 'SH'
)
SELECT * FROM DATA
WHERE high_value_in_date_format < SYSDATE
/
TABLE_NAME PARTITION_NAME HIGH_VALU
-------------------- -------------------- ---------
SALES SALES_Q4_2003 01-JAN-04
SALES SALES_Q4_2002 01-JAN-03
SALES SALES_Q4_2001 01-JAN-02
SALES SALES_Q4_2000 01-JAN-01
SALES SALES_Q4_1999 01-JAN-00
SALES SALES_Q4_1998 01-JAN-99
SALES SALES_Q3_2003 01-OCT-03
SALES SALES_Q3_2002 01-OCT-02
SALES SALES_Q3_2001 01-OCT-01
SALES SALES_Q3_2000 01-OCT-00
SALES SALES_Q3_1999 01-OCT-99
SALES SALES_Q3_1998 01-OCT-98
SALES SALES_Q2_2003 01-JUL-03
SALES SALES_Q2_2002 01-JUL-02
SALES SALES_Q2_2001 01-JUL-01
SALES SALES_Q2_2000 01-JUL-00
SALES SALES_Q2_1999 01-JUL-99
SALES SALES_Q2_1998 01-JUL-98
SALES SALES_Q1_2003 01-APR-03
SALES SALES_Q1_2002 01-APR-02
SALES SALES_Q1_2001 01-APR-01
SALES SALES_Q1_2000 01-APR-00
SALES SALES_Q1_1999 01-APR-99
SALES SALES_Q1_1998 01-APR-98
SALES SALES_H2_1997 01-JAN-98
SALES SALES_H1_1997 01-JUL-97
SALES SALES_1996 01-JAN-97
SALES SALES_1995 01-JAN-96
28 rows selected.
SQL>
```
Use your desired date in place of `SYSDATE` in the above query, or you can pass it as input through the `FUNCTION` and `RETURN` the result set.
|
**Find Partition name using date IF you have a meaningful date column in the table in Oracle DB**
```
WITH table_sample AS (select COLUMN_WITH_DATE from table SAMPLE (5))
SELECT uo.SUBOBJECT_NAME AS "PARTITION_NAME_1"
  FROM table_sample sw,
       SYS.USER_OBJECTS uo
 WHERE sw.COLUMN_WITH_DATE = TRUNC(SYSDATE) -- ENTER DATE HERE AS 'DD-MM-YYYY 00:00:00'
   AND OBJECT_ID = dbms_rowid.rowid_object(sw.rowid)
   AND ROWNUM < 2;
```
|
how to get names of partition in oracle while i input a date
|
[
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"partition",
""
] |
I need to filter duplicate records from a particular selection of records.
The SQL table is **site\_metatag** and I am using **MySQL client version 5.5.30**.
```
metatag_id | store_id | name
1 0 copyright
2 0 author
3 0 robots
4 0 googlebot
5 0 revisit-after
6 0 google-site-verification
9 1 google-site-verification
8 1 revisit-after
10 1 googlebot
11 1 robots
12 2 googlebot
13 2 robots
14 2 google-site-verification
```
I need those records with `store_id` in (`1`, `0`) but with `name` unique, like below.
The distinct() function gives all unique records, but I need the records that are not used more than one time:
```
metatag_id | store_id | name
1 0 copyright
2 0 author
3 0 robots
```
I tried:
```
SELECT * FROM site_metatag v
where metatag_id NOT IN
(
select metatag_id from site_metatag p where v.name=p.name
)
AND v.store_id in (0,1)
```
but it is not working.
|
I think you want this:
```
SELECT
metatag_id, store_id, `name`
FROM
site_metatag v
group by `name`
```
Or this, if you want to filter by number of occurrences (in this case 1, i.e. cnt = 1).
```
SELECT
metatag_id, store_id, v.`name`, a.cnt
FROM
site_metatag v
inner join
(select
`name`, count(*) as cnt
from
site_metatag
group by `name`
having cnt = 1) as a ON (v.`name` = a.`name`)
group by v.`name`
```
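The `HAVING cnt = 1` idea can be sketched in SQLite against a subset of the question's rows; here, names that appear more than once (robots, googlebot) drop out, leaving only the ones used a single time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE site_metatag (metatag_id INT, store_id INT, name TEXT);
    INSERT INTO site_metatag VALUES
        (1,0,'copyright'),(2,0,'author'),(3,0,'robots'),
        (11,1,'robots'),(4,0,'googlebot'),(10,1,'googlebot');
""")
# Group by name and keep only groups of size 1.
rows = conn.execute("""
    SELECT MIN(metatag_id), name
    FROM site_metatag
    WHERE store_id IN (0,1)
    GROUP BY name
    HAVING COUNT(*) = 1
    ORDER BY 1
""").fetchall()
print(rows)  # copyright and author occur exactly once
```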
|
Use the distinct() function; see [Distinct MYSQL](http://www.mysqltutorial.org/mysql-distinct.aspx).
|
need to filter duplicate record from a particular selected records
|
[
"",
"mysql",
"sql",
""
] |
My schema roughly translates to the following:
```
reservation: reservation_id, inventory_id
inventory: inventory_id, car_id
car: car_id
feature: feature_id, name
car_feature: car_id, feature_id
```
The design is relatively simple. A few unnecessary columns are omitted, but the basic idea is that reservations are made for inventory items. Inventory items can be cars, which have car\_ids. A car can have many features (and features can apply to several cars).
What I want to do is find the number of reservations made for each type of feature. Example: a table with one column being the feature "name", and another column being the number of reservations made for cars with that feature. For pedantic reasons, I do *not* want to use any nested queries.
Here is the query that I attempted:
```
select f.name, count(*) as reservation_count
from reservation r, inventory i, car c, car_feature cf, feature f
where r.inventory_id = i.inventory_id and i.car_id = cf.car_id and cf.feature_id = f.feature_id
group by f.name;
```
The query correctly returns each named feature in the following result tableset:
```
-name- -reservation_count-
spokes 945000
wheels 931000
sunroof 104000
... ...
```
However, it takes an abnormally large amount of time to compute (11 seconds for a dataset with a few thousand rows), and the reservation counts are way too high: there are only 16k rentals in the database.
What am I doing wrong?
|
Try taking "car c," out of your query. This will take out some redundant rows because you do not join to it. Having it included the way you do will just make n-car copies of each feature.
And for speed, try indexing the columns you join on. (see the DESCRIBE comment)
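The inflation can be reproduced minimally in SQLite with made-up tables: an extra table in the `FROM` list with no join condition forms a cross join, multiplying every count by that table's row count.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE r (fid INT);       -- stand-in for joined reservation rows
    CREATE TABLE car (car_id INT);  -- unjoined extra table
    INSERT INTO r VALUES (1),(1),(2);
    INSERT INTO car VALUES (10),(20),(30),(40);
""")
# No predicate ties "car" to "r", so every r-row pairs with every car-row.
inflated = conn.execute("SELECT COUNT(*) FROM r, car WHERE r.fid = 1").fetchone()[0]
correct = conn.execute("SELECT COUNT(*) FROM r WHERE r.fid = 1").fetchone()[0]
print(inflated, correct)  # 8 2  -- the unjoined table multiplies by its row count
```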
|
I think the following query will return the result you need
```
Select feature.name, COUNT(reservation.reservation_id) as reservation_count
from feature
inner join car_feature on feature.feature_id=car_feature.feature_id
inner join car on car_feature.car_id=car.car_id
inner join inventory on car.car_id=inventory.car_id
inner join reservation on inventory.inventory_id = reservation.inventory_id
GROUP BY feature.name
```
|
SQL query returns abnormally large count numbers
|
[
"",
"mysql",
"sql",
""
] |
I know how to do this, but I am not sure if it is wise, so I ask: I have one table that stores any issues with software that we use at work. If the problem becomes resolved, should I move that row to a resolved issue table, or should I only insert the issue's table pk, and whenever I query open issues use an outer join? Just looking for industry standard on this.
|
Moving rows around is almost always a bad idea. If you add additional information regarding resolved issues (e.g., who resolved it, when was it resolved, etc.), having an additional "resolutions" table with a foreign key to the "issues" table might be a good idea. Otherwise, I'd just add a `boolean` field `is_resolved` to the "issues" table and set it to `true` when the issue is resolved.
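A minimal sketch of the flag approach (assuming the table is named `issues` with primary key `issue_id`; adjust the names to your schema):

```sql
-- Add the flag; existing rows default to "not resolved"
ALTER TABLE issues ADD COLUMN is_resolved BOOLEAN NOT NULL DEFAULT FALSE;

-- Mark a given issue as resolved
UPDATE issues SET is_resolved = TRUE WHERE issue_id = 42;

-- Open issues, no join needed
SELECT * FROM issues WHERE is_resolved = FALSE;
```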
|
I think you should add a `status` column and update it as needed, and use a trigger to maintain the table's history.
|
sql move row from one table into another
|
[
"",
"mysql",
"sql",
""
] |
I want to write a function which inserts a random UUID into a table. The function should return the UUID after successfully inserting the UUID. In case of a primary key collision I want the function to try another UUID until it succeeds.
What I have till now:
```
create or replace
function new_object_id return raw is
v_oid RAW(16);
begin
<<next_uuid>>
v_oid := random_uuid();
insert into object (object_id) values (v_oid);
commit;
exception
when ?primary_key_constrain_failure? then goto next_uuid
end;
return v_oid;
end new_object_id;
```
But I cannot figure out the right name for the exception, which occurs when the primary key constraint is violated. Does anybody know?
**Update**
I tried `dup_val_on_index` but I have still a problem with the loop:
```
create or replace
function new_object_id return raw is
v_oid RAW(16);
begin
<<next_uuid>>
v_oid := random_uuid();
insert into object (object_id) values (v_oid);
commit;
return (v_oid);
exception
when DUP_VAL_ON_INDEX then goto next_uuid;
end new_object_id;
```
When I compile this I get the error:
```
Error(11,30): PLS-00375: illegal GOTO statement; this GOTO cannot branch to label 'NEXT_UUID'
```
|
According to [this](http://www.techonthenet.com/oracle/exceptions/named_system.php) it is `DUP_VAL_ON_INDEX`.
Full working test:
```
create table x
( y number(15,0)
, constraint x_pk primary key (y)
)
;
begin
insert into x (y) values (1);
exception
when dup_val_on_index
then
dbms_output.put_line('ARGH!!!');
end;
```
For part 2, use an encapsulating `begin ... end` block:
```
begin
<<test>>
begin
insert into x values (1);
exception
when dup_val_on_index then goto test; -- I know, a loop, but it is for the demo
end;
end;
```
|
To accomplish this using a LOOP try:
```
create or replace function new_object_id return raw is
v_oid RAW(16);
begin
LOOP
begin
v_oid := random_uuid();
insert into object (object_id) values (v_oid);
EXIT;
exception
when dup_val_on_index then
NULL; -- do nothing, roll around to top of LOOP again
end;
END LOOP;
commit;
return (v_oid);
end new_object_id;
```
Share and enjoy.
|
What is the name of the exception indicating a primary key constraint violation?
|
[
"",
"sql",
"oracle",
"plsql",
"exception",
""
] |
So I came across something the other day at work, trying to count how many null values after running an import.
So, I did:
```
select COUNT(columnname) from table
WHERE ColumnName is null
```
Which didn't count the nulls...
Then I did,
```
select COUNT(*) from table
WHERE ColumnName is null
```
Which gave me the count.
So, something that bugged me is why exactly this doesn't count the null values.
I have looked at this question (along with a good search around Google...):
[In SQL, what's the difference between count(column) and count(\*)?](https://stackoverflow.com/questions/59294/in-sql-whats-the-difference-between-countcolumn-and-count?lq=1),
and whilst it tells me that `COUNT(columnname)` doesn't count nulls, I would like to know exactly **why** nulls aren't counted using this method?
Many Thanks,
James.
|
`COUNT` counts values, since null is not a value it does not get counted.
If you want to count all null values you could do something like this:
```
SELECT COUNT(ID) as NotNull, SUM(CASE WHEN ID IS NULL then 1 else 0 end) as NullCount FROM yourtable
```
|
Why aren't nulls counted in `COUNT(columnname)`?
```
COUNT(*)
```
will count all rows
```
COUNT(columnname)
```
will count all rows, except those rows where columnname IS NULL.
And what's the reason?
It's just that the `COUNT()` function is designed to work this way: NULL values are treated differently from other values, because NULL can be considered as a placeholder for "unknown" values, so it is very common that you just want to count rows that have a real value and skip rows that don't have.
Counting the rows that don't have a value is less common, and SQL doesn't provide a function for it. But you can calculate it easily:
```
SELECT
COUNT(*) As rows,
COUNT(columnname) AS non_null_count,
COUNT(*) - COUNT(columnname) AS null_count
FROM
yourtable
```
|
Why aren't nulls counted in COUNT(columnname)
|
[
"",
"sql",
"sql-server",
"count",
"null",
""
] |
So I have a string in SQL like this
```
Declare @Denumire varchar(2000) = '''Name''name(109)'',''name (1921)'',''name name name name name (2312)'',''name name name name name (2358)'''
```
What I want is to retrieve every code for each name.
For this example it would be
```
109,1921,2312,2358
```
I managed to get the last Code 2358 using the query.
```
Declare @Cod varchar(20)
Declare @test varchar(3000)
declare @CoduriFinale varchar(2000)
Declare @rDenumire varchar(3000) = REVERSE(@Denumire)
SELECT @Cod = LEFT(@rDenumire,CHARINDEX ( ' ' ,@rDenumire , 0))
SELECT @Cod = REPLACE (@Cod , '(' , '' )
SELECT @Cod = REPLACE (@Cod , ')' , '' )
set @CoduriFinale = REVERSE(RTRIM(LTRIM(SUBSTRING(@Cod,2,6))))+','
set @test = SUBSTRING(@rDenumire,CHARINDEX (',',@rDenumire, 0),LEN(@rDenumire))
set @rDenumire = @test
print @CoduriFinale
print @test
```
So what I did here is reverse the whole string (this ensures the first word, up to a `' '`, is my code), but I can't get it to work in a while loop to do this for the whole string.
PS: my string is in fact several strings separated by commas.
|
```
CREATE Function [fnRem](@strText VARCHAR(1000))
RETURNS VARCHAR(1000)
AS
BEGIN
WHILE PATINDEX('%[^0-9,]%', @strText) > 0
BEGIN
SET @strText = STUFF(@strText, PATINDEX('%[^0-9,]%', @strText), 1, '')
END
RETURN @strText
END
Declare @test varchar(3000)
Declare @Denumire varchar(2000) = '''FABRICOM SYSTEMES D''ASSEMBLAGE (109)'',''VALEO VISION SAS (1921)'',''INERGY AUTOMOTIVE SYSTEMS GERMANY GMBH (2312)'',''TRW AUTOMOTIVE SAFETY SYSTEMS S.R.L. (2358)'''
set @test =dbo.fnRem(@Denumire)
select @test
```
**Output**
`109,1921,2312,2358`
|

I was unable to add my code. so added as image
```
OUTPUT
109,1921,2312,2358
```
|
Extract Code from string
|
[
"",
"sql",
"sql-server",
"t-sql",
"substring",
""
] |
I wasn't sure what the title would be, so apologies beforehand.
Let's say I have a simple TSQL statement that looks like `SELECT City from Cities`. This returns one result with 26 rows:
```
City
New York
Los Angeles
Chicago
Houston
Philadelphia
Phoenix
San Antonio
San Diego
Dallas
San Jose
Austin
Indianapolis
Jacksonville
San Francisco
Columbus
Charlotte
Fort Worth
El Paso
Memphis
Seattle
Denver
Washington
Boston
Nashville
Baltimore
Oklahoma City
```
Now, let's say that I want to divide this result into 3 groups of 10 rows each. In other words, when I run the query, the result will display three "different" rows sets, the first two with 10 rows and the last one with the remainder of 6. It's as if I had run these 3 queries at once: a *top(10)*, a *middle(10)*, and a *What's Left*. If I were to run this query from a .Net app, the `dataset` would have 3 `datatables`.
The issue here is that there's nothing I can use to group the data. Even if I did, I don't want to. I want to specify how many datatables this query will return and how many rows each datatable will have.
|
I think you want to use `row-number()` and some arithmetic:
```
select ((seqnum - 1) / 10) as grp, city
from (select city, row_number() over (order by (select NULL)) as seqnum
from cities
) c;
```
Note that the ordering is not guaranteed. You really need a specific column to specify the order, because SQL tables represent unordered sets.
EDIT:
It is unclear exactly what you want to do. This divides the rows into groups of 10, which seems to be what you want to do. Of course, this returns only one result set -- any SQL query only returns one result set.
|
If you are using SQL Server 2012 or above you can use the OFFSET/FETCH keywords:
So this would get your first 10 cities:
```
SELECT City
FROM Cities
ORDER BY City
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY
```
And this would get the next 10:
```
SELECT City
FROM Cities
ORDER BY City
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY
```
Also you can substitute these numbers with variables:
```
DECLARE @PageSize INT = 10
DECLARE @PageNumber INT = 5
SELECT City
FROM Cities
ORDER BY City
OFFSET @PageNumber * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY
```
|
How can SELECT statement return different groups by row number?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Let say we have this table SchoolClass
```
id_class class class_Name id_class_req
-------- -------- --------- --------
1 SQL9999 SQL
2 PHP1111 PHP 1
3 JAV2222 Java 2
```
So a class can have another class as a requirement before you can attempt it. I want to be able to list every class that has a requirement, together with that requirement, like this:
```
class id_class_req
------- -------------
PHP1111 SQL9999
JAV222 PHP1111
```
This is my query. I'm only able to get the id_class_req with it. Is there a way to do this without subqueries or nested queries?
```
SELECT SchoolClass.class, SchoolClass,id_class_req
FROM SchoolClass
WHERE SchoolClass.id_cours_prerequis Is Not Null;
```
Thank you very much.
|
Do this with a self join
```
SELECT s.class, req.class AS id_class_req
FROM SchoolClass s
INNER JOIN SchoolClass req ON s.id_class_req = req.id_class;
```
|
Oracle has a nice feature called *[hierarchical query](http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm)* that helps with that kind of table (i.e.: [adjacency list](http://en.wikipedia.org/wiki/Adjacency_list)). With that, you can write something like this:
```
SELECT CONNECT_BY_ROOT "class" as "Root class", "class" AS "Dep class"
FROM T
WHERE LEVEL = 2
CONNECT BY PRIOR "id_class_req" = "id_class";
```
See <http://sqlfiddle.com/#!4/bbbc0/1>
In such a query, Oracle will build a graph such as for each node, *node.parent.id\_class\_req = node.id\_class* (`CONNECT BY PRIOR "id_class_req" = "id_class"`). After that, we keep only sub-graph of depth 2 (`WHERE LEVEL = 2`). Which is in fact what you are looking for.
---
As a picture worth 1000 words, given the above query and the sample data:
```
id_class class class_Name id_class_req
-------- -------- --------- --------
1 SQL9999 SQL
2 PHP1111 PHP 1
3 JAV2222 Java 2
10 LIN101 Linux
11 C101 C 10
12 SYSADMIN SysAdmin 10
14 ELEC101 Electronics
```
Oracle will build that graph:

And will only retain path of length 2 (i.e.: pair of nodes containing only a child and its direct parent):
```
ROOT CLASS DEP CLASS
PHP1111 SQL9999
JAV2222 PHP1111
C101 LIN101
SYSADMIN LIN101
```
|
Querying a hierarchical table
|
[
"",
"sql",
"oracle",
"select",
""
] |
This SQL Server query works
```
SELECT
dbo.sem_computer.COMPUTER_ID, COUNT(dbo.sem_computer.COMPUTER_ID) as Duplicate_Hardware_IDs
FROM
dbo.sem_computer, [dbo].[V_SEM_COMPUTER], dbo.SEM_CLIENT, dbo.SEM_AGENT, dbo.IDENTITY_MAP
WHERE
sem_computer.COMPUTER_ID = [dbo].[V_SEM_COMPUTER].COMPUTER_ID
and sem_computer.COMPUTER_ID = dbo.SEM_CLIENT.COMPUTER_ID
and sem_computer.COMPUTER_ID = dbo.SEM_AGENT.COMPUTER_ID
and dbo.SEM_CLIENT.GROUP_ID = IDENTITY_MAP.ID
and dbo.SEM_AGENT.TIME_STAMP > DATEDIFF(second, '19700101', DATEADD(day, -1, GETDATE())) * CAST(1000 as bigint)
GROUP BY dbo.sem_computer.COMPUTER_ID
HAVING COUNT(dbo.sem_computer.COMPUTER_ID) > 1
ORDER BY Duplicate_Hardware_IDs DESC;
```
But I want to `SELECT` additional columns (to show which computers have the duplicate `COMPUTER_ID`)
```
SELECT
dbo.sem_computer.COMPUTER_NAME
, [IP_ADDR1_TEXT]
, dbo.SEM_AGENT.AGENT_VERSION
, dbo.sem_computer.COMPUTER_ID, COUNT(dbo.sem_computer.COMPUTER_ID) as Duplicate_Hardware_IDs
FROM
dbo.sem_computer, [dbo].[V_SEM_COMPUTER], dbo.SEM_CLIENT, dbo.SEM_AGENT, dbo.IDENTITY_MAP
WHERE
sem_computer.COMPUTER_ID = [dbo].[V_SEM_COMPUTER].COMPUTER_ID
and sem_computer.COMPUTER_ID = dbo.SEM_CLIENT.COMPUTER_ID
and sem_computer.COMPUTER_ID = dbo.SEM_AGENT.COMPUTER_ID
and dbo.SEM_CLIENT.GROUP_ID = IDENTITY_MAP.ID
and dbo.SEM_AGENT.TIME_STAMP > DATEDIFF(second, '19700101', DATEADD(day, -1, GETDATE())) * CAST(1000 as bigint)
GROUP BY dbo.sem_computer.COMPUTER_ID
HAVING COUNT(dbo.sem_computer.COMPUTER_ID) > 1
ORDER BY Duplicate_Hardware_IDs DESC;
```
I get error
> Column 'dbo.sem\_computer.COMPUTER\_NAME' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
How to fix?
**UPDATE**: when I execute
```
SELECT
dbo.sem_computer.COMPUTER_NAME
, [IP_ADDR1_TEXT]
, dbo.SEM_AGENT.AGENT_VERSION
, dbo.sem_computer.COMPUTER_ID, COUNT(dbo.sem_computer.COMPUTER_ID) as Duplicate_Hardware_IDs
FROM
dbo.sem_computer, [dbo].[V_SEM_COMPUTER], dbo.SEM_CLIENT, dbo.SEM_AGENT, dbo.IDENTITY_MAP
WHERE
sem_computer.COMPUTER_ID = [dbo].[V_SEM_COMPUTER].COMPUTER_ID
and sem_computer.COMPUTER_ID = dbo.SEM_CLIENT.COMPUTER_ID
and sem_computer.COMPUTER_ID = dbo.SEM_AGENT.COMPUTER_ID
and dbo.SEM_CLIENT.GROUP_ID = IDENTITY_MAP.ID
and dbo.SEM_AGENT.TIME_STAMP > DATEDIFF(second, '19700101', DATEADD(day, -1, GETDATE())) * CAST(1000 as bigint)
GROUP BY dbo.sem_computer.COMPUTER_NAME,[IP_ADDR1_TEXT], dbo.SEM_AGENT.AGENT_VERSION, dbo.sem_computer.COMPUTER_ID
HAVING COUNT(dbo.sem_computer.COMPUTER_ID) > 1
ORDER BY Duplicate_Hardware_IDs DESC;
```
it results in
```
COMPUTER_NAME IP_ADDR1_TEXT AGENT_VERSION COMPUTER_ID Duplicate_Hardware_IDs
ABC 10.10.10.10 12.1 56604FEE0AF 3
```
But I am looking for
```
COMPUTER_NAME IP_ADDR1_TEXT AGENT_VERSION COMPUTER_ID Duplicate_Hardware_IDs
ABC 10.10.10.10 12.1 56604FEE0AF 3
123 10.10.10.15 12.2 56604FEE0AF 3
XYZ 10.10.10.25 12.2 56604FEE0AF 3
```
**UPDATE 2**:
I included `FROM` and `WHERE`
|
```
;WITH CTE AS
(
SELECT dbo.sem_computer.COMPUTER_NAME,
[IP_ADDR1_TEXT],
dbo.SEM_AGENT.AGENT_VERSION,
dbo.sem_computer.COMPUTER_ID,
N = COUNT(*) OVER(PARTITION BY dbo.sem_computer.COMPUTER_ID)
FROM
...
WHERE
...
)
SELECT *
FROM CTE
WHERE N > 1
ORDER BY N DESC
```
|
Just include the columns from your `SELECT` list in your `GROUP BY` list:
```
SELECT
dbo.sem_computer.COMPUTER_NAME
, [IP_ADDR1_TEXT]
, dbo.SEM_AGENT.AGENT_VERSION
, dbo.sem_computer.COMPUTER_ID, COUNT(dbo.sem_computer.COMPUTER_ID) as Duplicate_Hardware_IDs
FROM
...
WHERE
...
GROUP BY
dbo.sem_computer.COMPUTER_ID,
dbo.sem_computer.COMPUTER_NAME,
...
HAVING COUNT(dbo.sem_computer.COMPUTER_ID) > 1
ORDER BY Duplicate_Hardware_IDs DESC;
```
|
Additional columns in SELECT Count()
|
[
"",
"sql",
"sql-server",
""
] |
I used the following query with MySQL 5.5 (or previous versions) for years without any problems:
```
SELECT t2.Code from (select Country.Code from Country order by Country.Code desc ) AS t2;
```
The order of the result was always descending as I needed.
Last week, I just migrated to a new MySQL Version (In fact, I migrated to MariaDB 10.0.14) and now the same query with the same database is not sorted descending anymore. It is sorted ascending (or sorted using the natural order, not sure in fact).
So, can somebody tell me whether this is a bug or a change of behaviour in recent versions of MySQL/MariaDB?
|
After a bit of digging, I can confirm both your scenarios:
MySQL 5.1 does apply the `ORDER BY` inside the subquery.
MariaDB 5.5.39 on Linux does *not* apply the `ORDER BY` inside the subquery when no `LIMIT` is supplied. It *does* however correctly apply the order when a corresponding `LIMIT` is given:
```
SELECT t2.Code
FROM (
SELECT Country.Code FROM Country ORDER BY Country.Code DESC LIMIT 2
) AS t2;
```
Without that `LIMIT`, there isn't a good reason to apply the sort inside the subquery. It can be equivalently applied to the outer query.
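In other words, the reliable version of the original query simply sorts at the top level:

```sql
SELECT t2.Code
FROM (SELECT Country.Code FROM Country) AS t2
ORDER BY t2.Code DESC;
```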
### Documented behavior:
As it turns out, [MariaDB has documented this behavior](https://mariadb.com/kb/en/mariadb/faq/general-faq/why-is-order-by-in-a-from-subquery-ignored/) and it is not regarded as a bug:
> A "table" (and subquery in the `FROM` clause too) is - according to the SQL standard - an unordered set of rows. Rows in a table (or in a subquery in the `FROM` clause) do not come in any specific order. That's why the optimizer can ignore the `ORDER BY` clause that you have specified. In fact, SQL standard does not even allow the `ORDER BY` clause to appear in this subquery (we allow it, because `ORDER BY ... LIMIT` ... changes the result, the set of rows, not only their order).
>
> You need to treat the subquery in the `FROM` clause, as a set of rows in some unspecified and undefined order, and put the `ORDER BY` on the top-level `SELECT`.
So MariaDB also recommends applying the `ORDER BY` in the outermost query, or a `LIMIT` if necessary.
Note: I don't currently have access to a proper MySQL 5.5 or 5.6 to confirm if the behavior is the same there (and SQLFiddle.com is malfunctioning). [Comments on the original bug report](https://mariadb.atlassian.net/browse/MDEV-3926) (closed as not-a-bug) suggest that MySQL 5.6 probably behaves the same way as MariaDB.
|
In newer versions of MySQL and MariaDB you can force the `ORDER BY` in a subquery to be applied by adding a `LIMIT`. If you don't want to limit the rows, use the maximum `BIGINT UNSIGNED` value (18446744073709551615) as the `LIMIT`.
This can come in handy when the subquery needs to be generated in a desired order, for example when applying line numbers.
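For example, using the schema from the question (a sketch):

```sql
SELECT t2.Code
FROM (
    SELECT Country.Code
    FROM Country
    ORDER BY Country.Code DESC
    LIMIT 18446744073709551615  -- max BIGINT UNSIGNED, i.e. effectively no limit
) AS t2;
```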
|
MySQL/MariaDB - order by inside subquery
|
[
"",
"mysql",
"sql",
"subquery",
"sql-order-by",
"mariadb",
""
] |
I'm trying to sum amounts from the previous month typed (SET)
Here is the structure:
```
CREATE TABLE tester (
date_amount date,
amount int);
INSERT INTO tester VALUES
("2014-08-01", 1000),
("2014-08-02", 1001),
("2014-08-03", 1002),
("2014-08-31", 2000),
("2014-08-31", 2000),
("2014-08-31", 2000),
("2014-09-01", 1006);
```
For example, if I type year = 2014 and month = 9, the query should automatically sum all amounts on the last day of the previous month, in this case "2014-08-31".
Here is the [demo](http://sqlfiddle.com/#!2/6b7e19/1):
```
SET @month := 09;
SET @year := 2014;
select sum(amount) from tester
where month(date_amount)= @month-1
```
The query must sum all amounts of the last day of previous month typed
I must have as result this:
```
SUM(AMOUNT)
6000
```
I tried this, but it is not the correct way because here I have to know the last day of the month:
```
SET @month := 09;
SET @year := 2014;
select sum(amount) from tester
where month(date_amount)= @month-1 and day(date_amount)= 31
```
I also tried this query but got NULL
```
SET @month := 09;
SET @year := 2014;
SELECT sum(amount)FROM tester
WHERE month(date_amount)= @month-1 AND day(date_amount) =last_day(day(date_amount))
```
Can somebody please help me with this?
|
Instead of passing `LAST_DAY` just a day number, give it an actual date; then you can get the results you wanted:
```
SET @month := 09;
SET @year := 2014;
SELECT sum(amount)FROM tester
WHERE month(date_amount)= @month-1 AND day(date_amount) = day(last_day(date_amount))
```
Alternatively, you could use `STR_TO_DATE` to convert the given month and year into an actual date first and compare against that.
|
Something like this:
```
SELECT SUM(AMOUNT) FROM Tester
WHERE date_amount = LAST_DAY(STR_TO_DATE(CONCAT(@Year, '-', @Month, '-01'), '%Y-%m-%d') - INTERVAL 1 MONTH)
```
If you use a function on part of the row in the table, you force sql to do a table scan. If you convert the data to a complete field, it can use indexes in the search.
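To illustrate with the question's table (`@last_day_prev_month` is a hypothetical variable holding the precomputed date):

```sql
-- Not sargable: the function wraps the column, so every row must be evaluated
SELECT SUM(amount) FROM tester WHERE MONTH(date_amount) = @month - 1;

-- Sargable: the column is compared to a constant, so an index on date_amount can be used
SELECT SUM(amount) FROM tester WHERE date_amount = @last_day_prev_month;
```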
|
How can I sum values from the previous month?
|
[
"",
"mysql",
"sql",
""
] |
After searching the forums I have come up with the following but its not working :/
I have a table with the following;
```
ID | Strings
123| abc fgh dwd
243| dfs dfd dfg
353| dfs
424| dfd dfw
523|
.
.
.
```
Please note that there are around 20,000 rows; my other option is to write a stored procedure to do this. Basically I need to split the strings up so there is a row for each one, like this:
```
ID | Strings
123| abc
123| fgh
123| dwd
243| dfs
```
and so on...
this is what I have.
```
create table Temp AS
SELECT ID, strings
From mytable;
SELECT DISTINCT ID, trim(regexp_substr(str, '[^ ]+', 1, level)) str
FROM (SELECT ID, strings str FROM temp) t
CONNECT BY instr(str, ' ', 1, level -1) >0
ORDER BY ID;
```
Any help is appreciated
|
This should do the trick:
```
SELECT DISTINCT ID, regexp_substr("Strings", '[^ ]+', 1, LEVEL)
FROM T
CONNECT BY regexp_substr("Strings", '[^ ]+', 1, LEVEL) IS NOT NULL
ORDER BY ID;
```
Notice how I used `regexp_substr` in the connect by clause too. This is to deal with the case of multiple spaces.
---
If you have a predictable upper bound on the number of items per line, it might worth comparing the performances of the recursive query above with a simple `CROSS JOIN`:
```
WITH N as (SELECT LEVEL POS FROM DUAL CONNECT BY LEVEL < 10)
-- ^^
-- up to 10 substrings
SELECT ID, regexp_substr("Strings", '[^ ]+', 1, POS)
FROM T CROSS JOIN N
WHERE regexp_substr("Strings", '[^ ]+', 1, POS) IS NOT NULL
ORDER BY ID;
```
See <http://sqlfiddle.com/#!4/444e3/1> for a live demo
|
A more flexible and better solution which:
* doesn't depend upon the predictability of the number of items per line.
* doesn't depend on the ID column, the solution gives correct result irrespective of the number of column.
* doesn't even depend upon the **DISTINCT** keyword.
There are other examples using **XMLTABLE** and **MODEL clause**, please read [**Split comma delimited strings in a table**](http://lalitkumarb.wordpress.com/2015/03/04/split-comma-delimited-strings-in-a-table-using-oracle-sql/).
For example,
**Without ID column:**
```
SQL> WITH T AS
2 (SELECT 'abc fgh dwd' AS text FROM dual
3 UNION
4 SELECT 'dfs dfd dfg' AS text FROM dual
5 UNION
6 SELECT 'dfs' AS text FROM Dual
7 UNION
8 SELECT 'dfd dfw' AS text FROM dual
9 )
10 SELECT trim(regexp_substr(t.text, '[^ ]+', 1, lines.column_value)) text
11 FROM t,
12 TABLE (CAST (MULTISET
13 (SELECT LEVEL FROM dual CONNECT BY instr(t.text, ' ', 1, LEVEL - 1) > 0
14 ) AS sys.odciNumberList )) lines
15 /
TEXT
-----------
abc
fgh
dwd
dfd
dfw
dfs
dfs
dfd
dfg
9 rows selected.
```
**With ID column:**
```
SQL> WITH T AS
2 (SELECT 123 AS id, 'abc fgh dwd' AS text FROM dual
3 UNION
4 SELECT 243 AS id, 'dfs dfd dfg' AS text FROM dual
5 UNION
6 SELECT 353 AS Id, 'dfs' AS text FROM Dual
7 UNION
8 SELECT 424 AS id, 'dfd dfw' AS text FROM dual
9 )
10 SELECT id, trim(regexp_substr(t.text, '[^ ]+', 1, lines.column_value)) text
11 FROM t,
12 TABLE (CAST (MULTISET
13 (SELECT LEVEL FROM dual CONNECT BY instr(t.text, ' ', 1, LEVEL - 1) > 0
14 ) AS sys.odciNumberList )) lines
15 ORDER BY id
16 /
ID TEXT
---------- -----------
123 abc
123 fgh
123 dwd
243 dfs
243 dfd
243 dfg
353 dfs
424 dfd
424 dfw
9 rows selected.
SQL>
```
|
Split String into rows Oracle SQL
|
[
"",
"sql",
"regex",
"string",
"oracle",
""
] |
I want to combine 2 columns into a new column based on conditions. Please check the query below; it is what I tried.
```
SELECT
`ContactPers`.`contact_persID`,
`ContactPers`.`persID`,
`ContactPers`.`contactID`
FROM `bdrplus`.`contact_pers` AS `ContactPers`
WHERE ((`ContactPers`.`persID` = 55) OR (`ContactPers`.`contactID` = 55))
AND `ContactPers`.`etat` = 1
```
Output:
```
contact_persID persID contactID
18 90 55
19 55 56
20 55 61
23 55 67
```
Desired output is something like this (I don't want 55 in newColumn, since that is the value used in the WHERE clause; I want to combine persID and contactID based on that condition):
```
contact_persID persID contactID newColumn
18 90 55 90
19 55 56 56
20 55 61 61
23 55 67 67
```
-Thanks
|
I'm not sure about the MySql side of this. Here's how I would do it in SQL:
```
SELECT
`ContactPers`.`contact_persID`,
`ContactPers`.`persID`,
`ContactPers`.`contactID`,
CASE `ContactPers`.`persID`
WHEN 55 THEN `ContactPers`.`contactID`
ELSE `ContactPers`.`persID`
END AS newColumn
FROM `bdrplus`.`contact_pers` AS `ContactPers`
WHERE 55 IN (`ContactPers`.`persID`, `ContactPers`.`contactID`)
AND `ContactPers`.`etat` = 1
```
|
The [`CASE`](http://dev.mysql.com/doc/refman/5.0/en/case.html) statement will let you make decisions on a row-by-row basis. There's also an [`IF`](http://dev.mysql.com/doc/refman/5.0/en/if.html) statement that's pretty much the same, but I recommend `CASE` because it has wider support (for example Oracle, SQL Server, Teradata, and others):
```
SELECT
contact_persID,
persID,
contactID,
CASE WHEN contactID = 55 THEN persID ELSE contactID END AS newColumn
FROM dbrplus.contact_pers
WHERE (PersID = 55 OR contactID = 55)
AND etat = 1
```
|
Sql query to combine 2 diff. column output based on conditions
|
[
"",
"mysql",
"sql",
""
] |
I made a database with two tables. The first is PERSON and the second is REFEREES.
This is the code:
```
CREATE TABLE Person(
PersonID serial,
CONSTRAINT PK_Person PRIMARY KEY(PersonID),
FirstName VARCHAR(20),
LastName VARCHAR(30));
CREATE TABLE Referees(
RefereePairID SERIAL,
CONSTRAINT PK_Referee PRIMARY KEY(RefereePairID),
Referee1ID int,
Referee2ID int);
ALTER TABLE Referees
ADD CONSTRAINT FK_Referee1 FOREIGN KEY(Referee1ID) references Person(PersonID);
ALTER TABLE Referees
ADD CONSTRAINT FK_Referee2 FOREIGN KEY(Referee1ID) references Person(PersonID);
INSERT INTO Person VALUES (DEFAULT, 'AAAAAA', 'AAAAAA');
INSERT INTO Person VALUES (DEFAULT, 'BBBBBB', 'BBBBBB');
INSERT INTO Person VALUES (DEFAULT, 'CCCCCC', 'CCCCCC');
INSERT INTO Person VALUES (DEFAULT, 'DDDDDD', 'DDDDDD');
INSERT INTO Referees VALUES (DEFAULT, 1, 2);
INSERT INTO Referees VALUES (DEFAULT, 3, 4);
```
The output I expected from the next query was:
```
SELECT * FROM Person, Referees WHERE Referees.RefereePairID = 1;
PersonID |FirstName|LastName|RefereePairID|Referee1ID|Referee2ID
---------+---------+--------+-------------+----------+----------
1 | AAAAAA | AAAAAA | 1 | 1 | 2
2 | BBBBBB | BBBBBB | 1 | 1 | 2
```
But, the problem is that the output is this:
```
PersonID |FirstName|LastName|RefereePairID|Referee1ID|Referee2ID
---------+---------+--------+-------------+----------+----------
1 | AAAAAA | AAAAAA | 1 | 1 | 2
2 | BBBBBB | BBBBBB | 1 | 1 | 2
3 | CCCCCC | CCCCCC | 1 | 1 | 2
4 | DDDDDD | DDDDDD | 1 | 1 | 2
```
Also, when I run the next query I get this result:
```
SELECT * FROM Referees WHERE Referees.RefereePairID = 1;
RefereePairID | Referee1ID | Referee2ID
---------------+------------+------------
1 | 1 | 2
```
Does anybody know where the problem is? Because I don't get it :(
|
When you do
```
SELECT *
FROM Person, Referees
```
what happens is a **cartesian product** between the tables. This is the same as
```
SELECT *
FROM Person CROSS JOIN Referees;
```
And you get:
```
+----------+-----------+----------+---------------+------------+------------+
| PersonID | FirstName | LastName | RefereePairID | Referee1ID | Referee2ID |
+----------+-----------+----------+---------------+------------+------------+
| 1 | AAAAAA | AAAAAA | 1 | 1 | 2 |
| 1 | AAAAAA | AAAAAA | 2 | 3 | 4 |
| 2 | BBBBBB | BBBBBB | 1 | 1 | 2 |
| 2 | BBBBBB | BBBBBB | 2 | 3 | 4 |
| 3 | CCCCCC | CCCCCC | 1 | 1 | 2 |
| 3 | CCCCCC | CCCCCC | 2 | 3 | 4 |
| 4 | DDDDDD | DDDDDD | 1 | 1 | 2 |
| 4 | DDDDDD | DDDDDD | 2 | 3 | 4 |
+----------+-----------+----------+---------------+------------+------------+
```
You need to add an extra condition like this
```
SELECT *
FROM Person, Referees
WHERE Referees.RefereePairID = 1
AND (Referees.Referee1ID = Person.PersonId OR Referees.Referee2ID = Person.PersonId);
```
to get the expected result
```
+----------+-----------+----------+---------------+------------+------------+
| PersonID | FirstName | LastName | RefereePairID | Referee1ID | Referee2ID |
+----------+-----------+----------+---------------+------------+------------+
| 1 | AAAAAA | AAAAAA | 1 | 1 | 2 |
| 2 | BBBBBB | BBBBBB | 1 | 1 | 2 |
+----------+-----------+----------+---------------+------------+------------+
```
But like some people mentioned here, there's a proper way to do queries like these. You should be using a JOIN clause:
```
SELECT *
FROM Person
JOIN Referees
ON (Referees.Referee1ID = Person.PersonId OR Referees.Referee2ID = Person.PersonId)
WHERE Referees.RefereePairID = 1
```
Maybe Wikipedia can help with some concepts:
<http://en.wikipedia.org/wiki/Join_(SQL)>
|
You have an error in your FK definition, and also in the SQL.
**Here's a working SQLFiddle** that fixes both these issues and returns the dataset you are expecting: <http://sqlfiddle.com/#!15/b7141/1>
Your second foreign key definition also refers to `Referee1ID` instead of `Referee2ID`.
So modify that definition like so
```
ALTER TABLE Referees
ADD CONSTRAINT FK_Referee2 FOREIGN KEY(Referee2ID) references Person(PersonID);
```
Also, your SQL is incorrect. Here's the right SQL.
```
SELECT * FROM Person
JOIN Referees
ON Person.PersonID = Referees.Referee1ID
OR Person.PersonID = Referees.Referee2ID
WHERE Referees.RefereePairID = 1
```
|
SQL Wrong output with select?
|
[
"",
"sql",
"select",
"output",
""
] |
Working with tables created by an idiot. The table in question has no record IDs, just part numbers and quantities. Some part numbers will be repeated dozens of times, most rows with no quantity, some with one. If a part number in ANY record has a quantity, I want that part number JUST once, so that the sub is not feeding the main more than one value at a time.
```
SELECT d046d, e024a
FROM 20121
WHERE d046d=(select sc.d046d from 20121 as sc where e024a >0)
```
This will fail because it will find more than one record in the subquery. I have tried DISTINCT and FIRST and ORDER BY's with TOP 1.
All fail when part of main query, work as needed by themselves.
|
Use
```
SELECT DISTINCT
d046d,
e024a
FROM
20121
WHERE
e024a >0
```
|
```
SELECT TOP 1 b.d046d, a.e024a
FROM 20121 a
inner join
(select sc.d046d as d046d from 20121 as sc where e024a >0) b
on a.d046d = b.d046d
```
This is not a suitable solution for the WHERE clause. I think you need to join to the sub query, as a self-join. From there you can just select the top 1 of the result set that comes up. You can play with the aliases as you like.
|
SubQuery Only First Match
|
[
"",
"sql",
"ms-access",
"ms-access-2010",
""
] |
I have two tables `table1` and `table2`. I want to delete from `table1` based on a condition in `table2`.
I have the following mysql query:
```
DELETE FROM table1
INNER JOIN table2 ON table2.col1 = table1.col1
WHERE table2.col2 = '1'
```
This returns a syntax error. Is there something wrong with the above syntax?
|
You need to specify the table you are deleting from:
```
DELETE table1
FROM table1 INNER JOIN
table2
USING (col1)
WHERE table2.col2 = '1';
```
|
Try this:
```
DELETE FROM table1
WHERE EXISTS(
SELECT 'C'
FROM table2
WHERE table2.col1 = table1.col1
AND table2.col2 = '1'
)
```
|
Inner join and delete not working in mysql
|
[
"",
"mysql",
"sql",
""
] |
What is the best way of writing a SQLite query that will count the occurrences of colC after selecting distinct colA values?
```
SELECT colA, colB, colC FROM myTable WHERE colA IN ('121', '122','123','124','125','126','127','128','129');
```
Notice ColA needs to be distinct.
Although close, these results are incorrect.
It should return:
```
123  a  cat    1
124  b  dog    1
125  e  snake  2
126  f  fish   1
127  g  snake  2
```
|
```
WITH t AS (
SELECT colA, min(colB) AS colB, max(colC) AS colC
FROM myTable
WHERE colA IN ('121', '122','123','124','125','126','127','128','129')
GROUP BY colA
)
SELECT t.*, c.colC_count
FROM t
JOIN (
SELECT colC, count(*) AS colC_count
FROM t
GROUP BY colC
) c ON c.colC = t.colC
```
*Explanation:*
The first subquery (inside `WITH`) gets the desired result, but without the count column. The second subquery (inside `JOIN`) counts the repetitions of each `colC` value in the desired result, and that count is returned in the final result.
The `WITH` clause is very helpful here, as the result of the first subquery is used in two places. More info: <https://www.sqlite.org/lang_with.html>
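A runnable check of the `WITH` query against the question's data, using Python's `sqlite3` (any SQLite from 3.8.3 on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (colA TEXT, colB TEXT, colC TEXT)")
conn.executemany("INSERT INTO myTable VALUES (?, ?, ?)", [
    ('123', 'a', 'cat'), ('124', 'b', 'dog'), ('125', 'e', 'snake'),
    ('126', 'f', 'fish'), ('127', 'g', 'snake'),
])

rows = conn.execute("""
    WITH t AS (
        SELECT colA, MIN(colB) AS colB, MAX(colC) AS colC
        FROM myTable
        WHERE colA IN ('121','122','123','124','125','126','127','128','129')
        GROUP BY colA
    )
    SELECT t.colA, t.colB, t.colC, c.colC_count
    FROM t
    JOIN (SELECT colC, COUNT(*) AS colC_count FROM t GROUP BY colC) c
      ON c.colC = t.colC
    ORDER BY t.colA
""").fetchall()
print(rows)
# [('123', 'a', 'cat', 1), ('124', 'b', 'dog', 1), ('125', 'e', 'snake', 2),
#  ('126', 'f', 'fish', 1), ('127', 'g', 'snake', 2)]
```

This matches the expected output in the question, including the repeated count of 2 for both `snake` rows.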
---
Query for SQLite before version 3.8.3:
```
SELECT t.*, c.colC_count
FROM (
SELECT colA, min(colB) AS colB, max(colC) AS colC
FROM myTable
WHERE colA IN ('121', '122','123','124','125','126','127','128','129')
GROUP BY colA
) t
JOIN (
SELECT colC, count(*) AS colC_count
FROM (
SELECT max(colC) AS colC
FROM myTable
WHERE colA IN ('121', '122','123','124','125','126','127','128','129')
GROUP BY colA
) c
GROUP BY colC
) c ON c.colC = t.colC
```
|
You can aggregate by `colA` to get most of what you want:
```
select colA, count(*)
from myTable
where colA in ('121', '122','123','124','125','126','127','128','129')
group by colA;
```
It is unclear how you are getting `colB` and `colC`. The following works for your example data:
```
select colA, min(colB), max(colC), count(*)
from myTable
where colA in ('121', '122','123','124','125','126','127','128','129')
group by colA;
```
|
sqlite count of distinct occurrences
|
[
"",
"sql",
"sqlite",
""
] |
I have a simple query:
```
create table #Test
(
ID INT,
name VARCHAR(100),
num INT
)
INSERT INTO #Test VALUES(1,'bob', 98)
INSERT INTO #Test VALUES(2,'bob', 44)
INSERT INTO #Test VALUES(3,'sam', 60)
INSERT INTO #Test VALUES(4,'deacon', 14)
INSERT INTO #Test VALUES(5,'toby', 99)
INSERT INTO #Test VALUES(6,'toby', 12)
SELECT * FROM #Test ORDER BY num DESC, name DESC
DROP TABLE #Test
```
This outputs:
```
ID name num
5 toby 99
1 bob 98
3 sam 60
2 bob 44
4 deacon 14
6 toby 12
```
What I am trying to do is sort the data first by num, BUT if there are duplicate names, I would like those names to follow each other, no matter what the num is.
The output I am looking for would be:
```
ID name num
5 toby 99
6 toby 12
1 bob 98
2 bob 44
3 sam 60
4 deacon 14
```
I am having issues with my ORDER BY; any help would be appreciated.
|
Try this:
```
SELECT
t.ID,
t.name,
t.num
FROM
#Test t
ORDER BY
(
SELECT
MAX(t2.num)
FROM
#test t2
WHERE
t2.NAME = t.name
GROUP BY
t2.NAME
) DESC
```
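A runnable check with the question's data in Python's `sqlite3` (the `#` temp-table prefix is SQL Server syntax, so the table is just called `Test` here; a secondary sort key on `num` is added so the within-group order is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Test (ID INTEGER, name TEXT, num INTEGER)")
conn.executemany("INSERT INTO Test VALUES (?, ?, ?)", [
    (1, 'bob', 98), (2, 'bob', 44), (3, 'sam', 60),
    (4, 'deacon', 14), (5, 'toby', 99), (6, 'toby', 12),
])

# Order groups by each name's maximum num, then by num within the group.
rows = conn.execute("""
    SELECT t.ID, t.name, t.num
    FROM Test t
    ORDER BY (SELECT MAX(t2.num) FROM Test t2 WHERE t2.name = t.name) DESC,
             t.num DESC
""").fetchall()
print(rows)
# [(5, 'toby', 99), (6, 'toby', 12), (1, 'bob', 98),
#  (2, 'bob', 44), (3, 'sam', 60), (4, 'deacon', 14)]
```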
|
You are not going to get your expected output because it's not alphabetical, but I think this is what you are looking for (swap **name** and **num** in the ORDER BY):
```
create table #Test
(
ID INT,
name VARCHAR(100),
num INT
)
INSERT INTO #Test VALUES(1,'bob', 98)
INSERT INTO #Test VALUES(2,'bob', 44)
INSERT INTO #Test VALUES(3,'sam', 60)
INSERT INTO #Test VALUES(4,'deacon', 14)
INSERT INTO #Test VALUES(5,'toby', 99)
INSERT INTO #Test VALUES(6,'toby', 12)
SELECT * FROM #Test ORDER BY name DESC, num DESC
DROP TABLE #Test
```
|
ORDER BY two columns if duplicates
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I'm helping a friend design a database but I'm curious if there is a general rule of thumb for the following:
**TABLE\_ORDER**
* OrderNumber
* OrderType
The column OrderType has the possibility of coming from a preset list of Order Types. Should I allow VARCHAR values to be used in the OrderType column (ex. Production Order, Sales Order, etc...) Or should I separate it out into another table and have it referenced as a foreign key instead from the TABLE\_ORDER as the following?:
**TABLE\_ORDER**
* OrderNumber
* OrderTypeID
**TABLE\_ORDER\_TYPE**
* ID
* OrderType
|
If the order-type list is fixed and will not change, you could opt not to make a separate table. But in that case, do not make it a `VARCHAR`; make it an `ENUM`.
You can index this better, and you will end up with arguably the same type of database as when you use an ID with a lookup table.
But if there is any chance at all that you will need to add types, just go for the second option. You can add an interface later, and you can easily build "get all types" kinds of pages, etc.
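SQLite has no `ENUM`, but a `CHECK` constraint gives the same kind of validation for a fixed list. A minimal sketch with Python's `sqlite3` (the order-type values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# CHECK plays the role MySQL's ENUM would: only listed values are accepted.
conn.execute("""
    CREATE TABLE TABLE_ORDER (
        OrderNumber INTEGER PRIMARY KEY,
        OrderType   TEXT NOT NULL
            CHECK (OrderType IN ('Production Order', 'Sales Order'))
    )
""")
conn.execute("INSERT INTO TABLE_ORDER VALUES (1, 'Sales Order')")  # accepted

rejected = False
try:
    conn.execute("INSERT INTO TABLE_ORDER VALUES (2, 'Bogus Order')")
except sqlite3.IntegrityError:
    rejected = True  # the CHECK constraint blocks unknown types
print(rejected)  # True
```

The trade-off is the same as with `ENUM`: adding a new type means altering the table, whereas a lookup table only needs a new row.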
|
I would say use another table, say "ReferenceCodes", for example:
Type, Name, Description, Code
Then you can just use the code throughout the database and need not worry about the name associated with that code. If you used a name (for example, order type in your case), it would be really difficult to change the name later on. This is what we actually do in our system.
|
When to replace a database column with an ID instead
|
[
"",
"mysql",
"sql",
"database-design",
""
] |
I have a date column and a time column that are integers
I converted the date portion like this
```
select convert(int, convert(varchar(10), getdate(), 112))
```
I thought I could do the same with this query that gives the time in HH:mm:ss
```
SELECT CONVERT(VARCHAR(8), GETDATE(), 108)
```
How do I convert just the time into an integer?
|
Assuming you are looking for the "time" analogy to the "date" portion of your code which takes `YYYYMMDD` and turns it into an `INT`, you can:
1. start with the `HH:mm:ss` format given by the style number 108
2. remove the colons to get that string into `HHmmss`
3. then convert that to `INT`
For example:
```
SELECT REPLACE(
CONVERT(VARCHAR(8), GETDATE(), 108),
':',
''
) AS [StringVersion],
CONVERT(INT, REPLACE(
CONVERT(VARCHAR(8), GETDATE(), 108),
':',
''
)
) AS [IntVersion];
```
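The same idea in SQLite terms, where `strftime` can emit `HHmmss` directly without the `REPLACE` step (a fixed timestamp is used instead of `GETDATE()` so the result is predictable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# '%H%M%S' yields '070509' for 07:05:09; the CAST drops the leading zero.
(time_int,) = conn.execute(
    "SELECT CAST(strftime('%H%M%S', '2013-03-20 07:05:09') AS INTEGER)"
).fetchone()
print(time_int)  # 70509
```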
|
This should convert your time into an integer representing seconds from midnight.
```
SELECT (DATEPART(hour, Col1) * 3600) + (DATEPART(minute, Col1) * 60) + DATEPART(second, Col1) as SecondsFromMidnight FROM T1;
```
|
How do I convert time into an integer in SQL Server
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I have a table which contains ski resorts around the world. The table also identifies which country and "sector" (USA/Europe/Asia etc.) they are located in. I'm trying to determine the number of resorts in each sector.
Sample Data :
```
resort_id, resort_name, country_id, sector_id
3376 Chréa 204 204
3377 Tikjda 204 204
3384 Beidahu 208 205
3481 Canyon Ski Area 225 206
3482 Castle Mountain 225 206
3483 Drumheller 225 206
```
I need to be able to determine the number of resorts in each sector i.e. :
```
Sector Resorts
--------------------
204 2
205 1
206 3
```
Any help would be much appreciated thanks.
|
```
select sector_id,
count(*)
from resortTableName
group by sector_id
```
and to address your edited question:
```
select sector_id,
count(distinct resort_name) as resortCount,
count(distinct country_id) as countryCount
from resortTableName
group by sector_id
```
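A runnable version against the sample data, using Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE resorts (
    resort_id INTEGER, resort_name TEXT, country_id INTEGER, sector_id INTEGER
)""")
conn.executemany("INSERT INTO resorts VALUES (?, ?, ?, ?)", [
    (3376, 'Chréa', 204, 204), (3377, 'Tikjda', 204, 204),
    (3384, 'Beidahu', 208, 205), (3481, 'Canyon Ski Area', 225, 206),
    (3482, 'Castle Mountain', 225, 206), (3483, 'Drumheller', 225, 206),
])

# Resort and country counts per sector.
rows = conn.execute("""
    SELECT sector_id,
           COUNT(DISTINCT resort_name) AS resortCount,
           COUNT(DISTINCT country_id)  AS countryCount
    FROM resorts
    GROUP BY sector_id
    ORDER BY sector_id
""").fetchall()
print(rows)  # [(204, 2, 1), (205, 1, 1), (206, 3, 1)]
```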
|
You just need to `count` and group the results by the sector:
```
SELECT sector_id, COUNT(*)
FROM resorts
GROUP BY sector_id
```
|
SQL return multiple count statements
|
[
"",
"sql",
"count",
""
] |
I am writing an SQL query and it works, but I get only half the results:
```
(q1.VariableName LIKE
CASE WHEN @Variablename IS NULL THEN ('e%')
ELSE @Variablename
END)
```
The thing I want is :
If the @Variablename is NOT NULL then q1.VariableName = @Variablename.
```
If it is NULL,
then q1.VariableName like 'e%'
or q1.VariableName like 'b%' and q1.VARIABLENAME not LIKE '%MAIL%'
```
But I am unable to get it. Can anyone tell me where I am going wrong?
|
If you're doing this is the `where` clause, you would be better off avoiding using `CASE` altogether:
```
((@Variablename IS NULL
and (q1.VariableName LIKE 'e%'
or q1.VariableName like 'b%'
and q1.VARIABLENAME not LIKE '%MAIL%'))
or q1.VariableName = @Variablename)
```
Since `null` is never equal to anything, `q1.VariableName = @Variablename` will always evaluate to false when `@Variablename` is `null`.
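A runnable check of that predicate with Python's `sqlite3` (`:v` stands in for `@Variablename`; the data is invented; note SQLite's `LIKE` is case-insensitive for ASCII, which matches the `%MAIL%` intent here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE q1 (VariableName TEXT)")
conn.executemany("INSERT INTO q1 VALUES (?)",
                 [("energy",), ("budget",), ("bMAILbox",), ("alpha",)])

sql = """
    SELECT VariableName FROM q1
    WHERE (:v IS NULL
           AND (VariableName LIKE 'e%'
                OR VariableName LIKE 'b%'
                   AND VariableName NOT LIKE '%MAIL%'))
       OR VariableName = :v
    ORDER BY VariableName
"""
# NULL parameter: 'e%' rows, plus 'b%' rows that don't contain MAIL.
null_case = [r[0] for r in conn.execute(sql, {"v": None})]
print(null_case)   # ['budget', 'energy']
# Non-NULL parameter: exact equality only.
exact_case = [r[0] for r in conn.execute(sql, {"v": "alpha"})]
print(exact_case)  # ['alpha']
```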
|
You can use the isnull() function to check for NULL. If @VariableName is NULL the function will replace it with 'e%'.
```
WHERE q1.VariableName LIKE isnull(@VariableName, 'e%')
```
If it's NOT NULL it would have to be equal to q1.VariableName, provided @VariableName doesn't contain a wildcard.
|
SQL Case statement not working
|
[
"",
"sql",
"sql-server",
""
] |
I'm working on an e-learning project with a table named `chapter`, which has a column named `question_table`; it holds the name of the table where that specific chapter's questions are stored.
Now the problem is that I want to display all the questions from all the chapters. For this I used the following SQL query:
```
SELECT * FROM (SELECT `question_table` FROM `chapter`)
```
but it doesn't work and gives the error:
> "Every derived table must have its own alias".
Note: I want to do it using SQL not PHP.
|
Firstly, I think you would be better off redesigning your database. Multiple tables with the same structure holding the same kind of data are generally not a good idea.
However what you require is possible using a MySQL procedure to build up some dynamic SQL and then execute it, returning the resulting data.
A procedure as follows could be used to do this:-
```
DROP PROCEDURE IF EXISTS dynamic;
delimiter //
CREATE PROCEDURE dynamic()
BEGIN
DECLARE question_table_value VARCHAR(25);
DECLARE b INT DEFAULT 0;
DECLARE c TEXT DEFAULT '';
DECLARE cur1 CURSOR FOR SELECT `question_table` FROM `chapter`;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET b = 1;
OPEN cur1;
SET b = 0;
WHILE b = 0 DO
FETCH cur1 INTO question_table_value;
IF b = 0 THEN
IF c = '' THEN
SET c = CONCAT('SELECT * FROM `',question_table_value, '`');
ELSE
SET c = CONCAT(c, ' UNION SELECT * FROM `',question_table_value, '`');
END IF;
END IF;
END WHILE;
CLOSE cur1;
SET @stmt1 := c;
PREPARE stmt FROM @stmt1;
EXECUTE stmt;
END
```
This creates a procedure called `dynamic`, which takes no parameters. It sets up a cursor to read the `question_table` column values from the `chapter` table. It loops over the results from that, building up a string containing the SQL: a SELECT from each table, with the results UNIONed together. This string is then PREPAREd and executed. The procedure returns the result set from the executed SQL by default.
You can call this to return the results using:-
```
CALL dynamic()
```
The downside is that this isn't going to give nice results if there are no rows to return, and such procedures are not easy to maintain or debug with the tools most developers have. Added to which, very few people have real stored-procedure skills to maintain it in the future.
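The same dynamic-UNION idea can also be sketched in application code instead of a stored procedure (SQLite has none); the table and column names below are invented, and every chapter table is assumed to have the same layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chapter (question_table TEXT)")
# Create two per-chapter question tables and register them in `chapter`.
for name, question in [("chapter1_q", "Q1?"), ("chapter2_q", "Q2?")]:
    conn.execute("INSERT INTO chapter VALUES (?)", (name,))
    conn.execute(f'CREATE TABLE "{name}" (question TEXT)')
    conn.execute(f'INSERT INTO "{name}" VALUES (?)', (question,))

# Build 'SELECT * FROM t1 UNION SELECT * FROM t2 ...' just like the procedure.
tables = [r[0] for r in conn.execute("SELECT question_table FROM chapter")]
union_sql = " UNION ".join(f'SELECT * FROM "{t}"' for t in tables)
questions = sorted(r[0] for r in conn.execute(union_sql))
print(questions)  # ['Q1?', 'Q2?']
```

Only do this with table names that come from your own trusted metadata table, never from user input, since they are interpolated into the SQL string.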
|
The derived table here is the result of the `(SELECT ...)`. You need to give it an alias, like so:
```
SELECT * FROM (SELECT question_table FROM chapter) X;
```
**Edit**, re dynamic tables
If you know all the tables in advance, you can union them, i.e.:
```
SELECT * FROM
(
SELECT Col1, Col2, ...
FROM Chapter1
UNION
SELECT Col1, Col2, ...
FROM Chapter2
UNION
...
) X;
```
[SqlFiddle here](http://sqlfiddle.com/#!9/a8e13/3)
To do this solution generically, you'll need to use [dynamic sql](https://stackoverflow.com/questions/13705045/dynamic-table-names-in-stored-procedure-function) to achieve your goal.
In general however, this is indicative of a smell in your table design - your chapter data should really be in one table, and e.g. classified by the chapter id.
If you do need to shard data for scale or performance reasons, the typical mechanism for doing this is to span multiple databases, not tables in the same database. MySql can handle large numbers of rows per table, and performance won't be an issue if the table is indexed appropriately.
|
Select all record from all the tables, every derived table must have its own alias
|
[
"",
"sql",
""
] |