| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a database that has a table `Parameters_XML` with the following columns:
```
id, application, parameter_nr, flag, value
```
The `parameter_nr` is for example 1 and the value for that parameter is the following:
```
<Root>
<Row>
<Item id="341" flags="1">
<Str>2</Str>
</Item>
<Item id="342" flags="1">
<Str>10</Str>
</Item>
<Item id="2196" flags="1">
<Str>7REPJ1</Str>
</Item>
</Row>
</Root>
```
I need to retrieve the values for all the applications where the item id is 341, 342, or 2196.
E.g. for application 1 the value for item 341 is 2, and so on.
I have written the following query:
```
SELECT cast (value as XML).value('data(/Root/Row/Item[@id="431"],')
FROM Parameters_Xml x
WHERE parameter_nr = 1
```
I get the following error:
> Msg 174, Level 15, State 1, Line 1
> The value function requires 2 argument(s).
Why is my query not valid?
|
Try something like this:
```
SELECT
CAST(x.Value AS XML).value('(/Root/Row/Item[@id="341"]/Str)[1]', 'nvarchar(100)')
FROM dbo.Parameters_Xml x
WHERE parameter_nr = 1
```
You're telling SQL Server to find the `<Item>` node (under `<Root>/<Row>`) with `id=341` (that's what I'm assuming; the `id="431"` in your question doesn't exist in the XML), then get the first `<Str>` node under that `<Item>` and return its value.
Also: why do you need `CAST(x.Value AS XML)` at all? If that column contains only XML, why isn't it **defined** with datatype `XML` to begin with? If it were, you wouldn't need any `CAST` ...
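The XPath predicate the answer relies on can be checked outside SQL Server too; here is a minimal sketch with Python's `xml.etree` against the XML from the question (the predicate syntax `[@id="341"]` is the same idea as in the T-SQL `.value()` call):

```python
import xml.etree.ElementTree as ET

xml_value = """<Root>
  <Row>
    <Item id="341" flags="1"><Str>2</Str></Item>
    <Item id="342" flags="1"><Str>10</Str></Item>
    <Item id="2196" flags="1"><Str>7REPJ1</Str></Item>
  </Row>
</Root>"""

root = ET.fromstring(xml_value)
# Same predicate as the T-SQL XPath: the <Item> whose id attribute is "341"
item = root.find('.//Item[@id="341"]')
value = item.find('Str').text
print(value)  # 2
```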
|
```
DECLARE @str XML;
SET @str = '<Root>
<Row>
<Item id="341" flags="1">
<Str>2</Str>
</Item>
<Item id="342" flags="1">
<Str>10</Str>
</Item>
<Item id="2196" flags="1">
<Str>7REPJ1</Str>
</Item>
</Row>
</Root>'
-- if you want specific values then
SELECT
xmlData.Col.value('@id','varchar(max)') Item
,xmlData.Col.value('(Str/text())[1]','varchar(max)') Value
FROM @str.nodes('//Root/Row/Item') xmlData(Col)
where xmlData.Col.value('@id','varchar(max)') = 342
--if you want all values then
SELECT
xmlData.Col.value('@id','varchar(max)') Item
,xmlData.Col.value('(Str/text())[1]','varchar(max)') Value
FROM @str.nodes('//Root/Row/Item') xmlData(Col)
--where xmlData.Col.value('@id','varchar(max)') = 342
```
**Edit after comment:** *If I query my db with `select * from parameters_xml where parameter_nr = 1` I will receive over 10000 rows, each row like the following: `Id app param value` / `1 1 1 11`. I need, for all of the 10000 apps, to retrieve the item id and the value from the XML value, like you did in your example.*
```
-- declare temp table
declare @temp table
(val xml)
insert into @temp values ('<Root>
<Row>
<Item id="341" flags="1">
<Str>2</Str>
</Item>
<Item id="342" flags="1">
<Str>10</Str>
</Item>
<Item id="2196" flags="1">
<Str>7REPJ1</Str>
</Item>
</Row>
</Root>')
insert into @temp values ('<Root>
<Row>
<Item id="3411" flags="1">
<Str>21</Str>
</Item>
<Item id="3421" flags="1">
<Str>101</Str>
</Item>
<Item id="21961" flags="1">
<Str>7REPJ11</Str>
</Item>
</Row>
</Root>')
-- QUERY
SELECT
xmlData.Col.value('@id','varchar(max)') Item
,xmlData.Col.value('(Str/text())[1]','varchar(max)') Value
FROM @temp AS T
outer apply T.val.nodes('/Root/Row/Item') as xmlData(Col)
```
|
Querying XML data in SQL Server 2012
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Hi all, consider the following as my table structure:
```
Col1 Col2 Col3
A 1 Aa
A 2 Bb
A 1 Aa
A 4 Bb
B 2 Bb
C 1 Aa
C 5 Bb
D 3 Aa
```
As you can see, Col3 contains the distinct values Aa and Bb.
I am trying to write a query which returns only the rows whose Col1 value has both Aa and Bb, or Aa alone, in Col3.
The point is to remove the rows whose distinct Col1 value has only Bb associated with it.
Example: the distinct value A in Col1 has both Aa and Bb in the corresponding Col3, so its rows stay. This requirement is violated by the value B in Col1, hence the result set should not contain the rows associated with B.
Expected output -
```
Col1 Col2 Col3
A 1 Aa
A 2 Bb
A 1 Aa
A 4 Bb
C 1 Aa
C 5 Bb
D 3 Aa
```
|
```
SELECT *
FROM TableName T
WHERE EXISTS ( SELECT 1
FROM TableName
WHERE T.Col1 = Col1
AND Col3 = 'Aa')
```
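A quick way to convince yourself this works: the same `EXISTS` query run in SQLite (standing in for SQL Server) against the question's sample data drops exactly the `B` row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (Col1 TEXT, Col2 INTEGER, Col3 TEXT)")
rows = [('A', 1, 'Aa'), ('A', 2, 'Bb'), ('A', 1, 'Aa'), ('A', 4, 'Bb'),
        ('B', 2, 'Bb'), ('C', 1, 'Aa'), ('C', 5, 'Bb'), ('D', 3, 'Aa')]
conn.executemany("INSERT INTO TableName VALUES (?,?,?)", rows)

# Keep only rows whose Col1 group contains at least one 'Aa'
result = conn.execute("""
    SELECT * FROM TableName T
    WHERE EXISTS (SELECT 1 FROM TableName
                  WHERE T.Col1 = Col1 AND Col3 = 'Aa')
""").fetchall()
print(result)  # every sample row except ('B', 2, 'Bb')
```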
|
One other approach is to use `intersect` and `union`.
[Fiddle with sample data](http://www.sqlfiddle.com/#!3/878b2/3)
```
select * from t where col1 in (
select col1 from t where col3 = 'Aa'
intersect
select col1 from t where col3 = 'Bb'
union
select col1 from t where col3 = 'Aa')
```
|
How to select rows with column containing provided values
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I am looking for help with the following scenario:
I have an SQL Server DB with a table that stores historical data. For example, let's use the following as a sample set.
```
CAR, SERVICE DATE, FINDINGS
1234, 21/01/2001, Fuel Filter
1234, 23/09/2009, Oil Change
1234, 30/09/2015, Tyres
3456, 30/09/2015, Clutch
```
I would like, from the sample above, to bring back the service record of any car that was brought in on a given date, e.g. 30/09/2015, but only if it had an oil change in the past.
The query would only bring back:
```
1234, 30/09/2015, Tyres
```
since it is the only car serviced on that date that previously had an oil change.
Any help would be greatly appreciated.
|
Use an [EXISTS](https://msdn.microsoft.com/en-us/library/ms188336.aspx) clause:
```
SELECT cur.car,
cur.[service date],
cur.findings
FROM tablename cur
WHERE cur.[service date] = @mydate
AND EXISTS (
SELECT 1
FROM tablename past
WHERE past.car = cur.car
AND past.[service date] < cur.[service date]
AND past.findings = 'oil change'
)
```
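The correlated `EXISTS` pattern above can be sketched end-to-end in SQLite with the question's sample data (dates stored in ISO format here, an assumption, so that string comparison with `<` orders them correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE services (car TEXT, service_date TEXT, findings TEXT)")
conn.executemany("INSERT INTO services VALUES (?,?,?)", [
    ('1234', '2001-01-21', 'Fuel Filter'),
    ('1234', '2009-09-23', 'Oil Change'),
    ('1234', '2015-09-30', 'Tyres'),
    ('3456', '2015-09-30', 'Clutch'),
])
# Cars serviced on the given date that had an earlier oil change
result = conn.execute("""
    SELECT cur.car, cur.service_date, cur.findings
    FROM services cur
    WHERE cur.service_date = '2015-09-30'
      AND EXISTS (SELECT 1 FROM services past
                  WHERE past.car = cur.car
                    AND past.service_date < cur.service_date
                    AND past.findings = 'Oil Change')
""").fetchall()
print(result)  # [('1234', '2015-09-30', 'Tyres')]
```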
|
This question had several conditions:
> * One of the services should happen on the specified date.
> * The service immediately previous to that date should be 'Oil Change'. If the previous service is not 'Oil Change', don't include the car in the result,
> even if it meets condition 1. Apart from that, whether the car has an
> 'Oil Change' record further in the past doesn't matter.
> * If the car has an 'Oil Change' record but no service on that date, it should not be considered.
You can try this solution; here is a working [SQLFiddle](http://sqlfiddle.com/#!6/5931b/1) for you:
```
select * from #t where car in (
select car from (
select car, [service date], findings, ROW_NUMBER() over (Partition by car order by [Service Date] desc) as [row]
from (select * from #t where car in (select car from #t where [service date] = '2015-09-30')) A) T
where T.row =2 and Findings = 'Oil Change'
) and [service date] = '2015-09-30'
```
|
SQL Query - Query on Current Date but condition from the past
|
[
"",
"sql",
"sql-server",
""
] |
I'd like to check for dupes in MySQL, comparing two columns:
The linked duplicate doesn't match, as it's about finding the duplicated values in specific columns. This question is about finding the rows which have said duplicate values.
Example:
```
id Column1 Column2
-------------------------
3 1 <-This row is a dupe
3 2
3 3
3 1 <-This row is a dupe
```
I'd like to have a list like this:
```
id Column1 Column2
-------------------------
3 1
3 1
```
What should this query look like?
My thinking:
```
SELECT * FROM table
WHERE Column1 && Column2 is a dupe ;)
```
|
You can use this to find the duplicates:
```
SELECT column1, column2
FROM yourtable
GROUP BY column1, column2
HAVING count(*) > 1;
```
And to show the actual duplicate rows you can `JOIN` against results of the above query:
```
SELECT * FROM yourtable yt1
JOIN (SELECT column1, column2
FROM yourtable
GROUP BY column1, column2
HAVING count(*) > 1) yt2 ON (yt2.column1=yt1.column1 AND yt2.column2=yt1.column2);
```
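The `GROUP BY ... HAVING` plus self-`JOIN` approach runs unchanged in SQLite; a minimal sketch with two-column sample data (values assumed for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (column1 INTEGER, column2 INTEGER)")
conn.executemany("INSERT INTO yourtable VALUES (?,?)",
                 [(3, 1), (3, 2), (3, 3), (3, 1)])

# Join back against the (column1, column2) pairs that occur more than once
dupes = conn.execute("""
    SELECT yt1.* FROM yourtable yt1
    JOIN (SELECT column1, column2 FROM yourtable
          GROUP BY column1, column2
          HAVING count(*) > 1) yt2
      ON yt2.column1 = yt1.column1 AND yt2.column2 = yt1.column2
""").fetchall()
print(dupes)  # [(3, 1), (3, 1)]
```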
|
```
select * , count(column1)
from table
group by column1, column2
having (count(column1) >1)
```
That is one solution: through the `HAVING` clause you get every combination which exists more than once, and the `GROUP BY` collects them together. The count could also go on both columns or on column2; that is not so important.
|
Check for dupes in db with TWO rows
|
[
"",
"mysql",
"sql",
""
] |
I have two tables which are interlinked:
Table #1: CheckList
```
ID_CHECKLIST(PK) | NAME_CHECKLIST | VERSION(PK)
1 XYZ 1.0.0
1 XYZ 1.1.0
1 XYZ 1.2.0
2 PQR 1.0.0
3 ABC 1.1.0
```
Table #2: Machine\_CHECKLIST
```
ID_MACHINE | ID_CHECKLIST(Foreign Key) | VERSION(Foreign)
1 1 1.2.0
1 3 1.1.0
2 1 1.1.0
```
Now I want to get all of the remaining checklists that are not included for `ID_MACHINE` = 1.
So my query is like this:
```
SELECT
ID_CHECKLIST, NAME_CHECKLIST, VERSION
FROM
CHECKLIST
WHERE
(ID_CHECKLIST NOT IN (SELECT MC.ID_CHECKLIST
FROM MACHINE_CHECKLIST MC
WHERE MC.ID_MACHINE = '1'))
OR (VERSION NOT IN (SELECT MC.VERSION
FROM MACHINE_CHECKLIST MC
WHERE MC.ID_MACHINE = '1'))
```
This query does not return the proper results.
I pass `ID_MACHINE` dynamically; I cannot pass `VERSION`, and that's the problem.
What am I doing wrong in the query?
|
You'd want to LEFT JOIN the tables on the composite key, and look for a NULL value on the right table in the join. This will tell you where you are missing data.
```
SELECT
CheckList.ID_CHECKLIST, CheckList.NAME_CHECKLIST, CheckList.VERSION
FROM
Machine_CHECKLIST
LEFT JOIN CheckList ON Machine_Checklist.ID_CheckList = Checklist.ID_CheckList
AND Machine_Checklist.Version = CheckList.Version
WHERE
ID_Machine = 1 AND CheckList.ID_Checklist IS NULL
```
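The LEFT-JOIN-then-filter-on-NULL pattern can be sketched in SQLite against the question's data. Note this sketch is a variant that puts `CheckList` on the left-hand side, so the checklist/version pairs with no match for machine 1 are the ones that survive; the join is on *both* columns of the composite key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CheckList (ID_CHECKLIST INT, NAME_CHECKLIST TEXT, VERSION TEXT);
INSERT INTO CheckList VALUES (1,'XYZ','1.0.0'),(1,'XYZ','1.1.0'),
                             (1,'XYZ','1.2.0'),(2,'PQR','1.0.0'),(3,'ABC','1.1.0');
CREATE TABLE Machine_CHECKLIST (ID_MACHINE INT, ID_CHECKLIST INT, VERSION TEXT);
INSERT INTO Machine_CHECKLIST VALUES (1,1,'1.2.0'),(1,3,'1.1.0'),(2,1,'1.1.0');
""")
# Checklist/version pairs with no match for machine 1
remaining = conn.execute("""
    SELECT c.ID_CHECKLIST, c.NAME_CHECKLIST, c.VERSION
    FROM CheckList c
    LEFT JOIN Machine_CHECKLIST m
      ON m.ID_CHECKLIST = c.ID_CHECKLIST
     AND m.VERSION = c.VERSION
     AND m.ID_MACHINE = 1
    WHERE m.ID_CHECKLIST IS NULL
""").fetchall()
print(remaining)
# [(1, 'XYZ', '1.0.0'), (1, 'XYZ', '1.1.0'), (2, 'PQR', '1.0.0')]
```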
|
Try an Inner Join. You don't have to return columns from both tables, but you can use both tables for filtering.
```
SELECT
CHECKLIST.ID_CHECKLIST, CHECKLIST.NAME_CHECKLIST, CHECKLIST.VERSION
FROM
CHECKLIST INNER JOIN Machine_CHECKLIST ON CHECKLIST.ID_CHECKLIST = Machine_CHECKLIST.ID_CHECKLIST AND CHECKLIST.VERSION = Machine_CHECKLIST.VERSION
WHERE
Machine_CHECKLIST.ID_MACHINE <> 1
```
|
How to filter records when foreign table has combined primary key in SQL
|
[
"",
"sql",
"sql-server",
"foreign-keys",
""
] |
I have the following query:
```
SELECT pics.e_firedate FROM et_pics
```
Result:
```
NULL
2014-12-01 00:00:00.000
2015-04-03 00:00:00.000
NULL
NULL
```
I want to replace `NULL` values to ''.
```
CASE
WHEN pics.e_firedate IS NULL THEN ''
ELSE pics.e_firedate
END
```
makes `NULL` transforming to `1900-01-01 00:00:00.000`, which I've tried to cast and replace with no success also.
How can I achieve my goal?
|
That's because the return values of a `CASE` expression must all have the same data type. If they have different data types, all values are converted to the type with the [**higher data type precedence**](https://msdn.microsoft.com/en-us/library/ms190309.aspx).
And since `DATETIME` has higher precedence than `VARCHAR`, `''` gets converted to `DATETIME`:
```
SELECT CAST('' AS DATETIME)
```
The above will return `1900-01-01 00:00:00.000`.
To achieve your desired result, you should convert the result to `VARCHAR`:
```
SELECT
CASE
WHEN e_firedate IS NULL THEN ''
ELSE CONVERT(VARCHAR(23), e_firedate, 121)
END
FROM et_pics
```
---
For date formats, read [this](https://msdn.microsoft.com/en-us/library/ms187928.aspx).
|
Try:
```
select isNUll(CONVERT(VARCHAR, pics.e_firedate, 120), '') e_firedate
FROM et_pics
```
|
TSQL Null data to '' replace
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
I have a bit of a tricky question. E.g. I have a start_date of 15/01/2015 and an end_date of 17/03/2015 for a given record, and I would like to generalize that to know whether the record with those exact start and end dates belongs to January (by my definition, even if a date is 31/01/2015, it still belongs to January).
I did the following:
```
sum(case when to_date('2015/01/01','yyyy/mm/dd') between ROUND(dtime_method_start,'MONTH') and ROUND(dtime_method_end,'MONTH') then 1 else 0 end) as flag_jan
```
But the problem with the `ROUND` function is that it rounds days 16-31 up to the next month, which is no good for me. How can I fix it, or rewrite it to comply with my definition?
I want to know whether a record with a certain dtime_method_start and dtime_method_end belongs to January. I have many records with many different start and end dates and want to know how many of them belong to January.
|
Just use `TRUNC` instead of `ROUND`. `TRUNC` with parameter `'MONTH'` truncates the date to the first day of its month, so testing with `BETWEEN` against the first day of the month works as you intend.
|
```
SELECT expected,
CASE
WHEN to_date('01/01/2015','DD/MM/YYYY') = ALL (trunc(start_date,'MONTH'), trunc(end_date,'MONTH'))
THEN 1
ELSE 0
END flag_jan
FROM
(SELECT 'notmatch 0' expected
, to_date('15/01/2015','DD/MM/YYYY') start_date
, to_date('17/03/2015','DD/MM/YYYY') end_date
FROM dual
UNION ALL
SELECT 'match 1'
, to_date('12/01/2015','DD/MM/YYYY')
, to_date('23/01/2015','DD/MM/YYYY')
FROM dual
) dates;
```
This query compares the truncated start_date and end_date against the first day of the month.
To check another month's flag, just change the date in the first CASE expression.
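The month-truncation idea is easy to mirror outside Oracle; a minimal Python sketch (where `date.replace(day=1)` plays the role of `TRUNC(d, 'MONTH')`, and both endpoints must truncate to January 1st, as in the `= ALL` comparison above):

```python
from datetime import date

def trunc_month(d: date) -> date:
    """Analogue of Oracle's TRUNC(d, 'MONTH'): first day of d's month."""
    return d.replace(day=1)

jan = date(2015, 1, 1)

# Record spanning 15/01/2015 - 17/03/2015: does NOT belong to January alone
flag_a = 1 if trunc_month(date(2015, 1, 15)) == jan == trunc_month(date(2015, 3, 17)) else 0

# Record spanning 12/01/2015 - 23/01/2015: belongs to January
flag_b = 1 if trunc_month(date(2015, 1, 12)) == jan == trunc_month(date(2015, 1, 23)) else 0

print(flag_a, flag_b)  # 0 1
```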
|
how to compare date parts in SQL Oracle
|
[
"",
"sql",
"oracle",
"date",
""
] |
I need to create an Oracle DB function that takes a string as parameter. The string contains letters and numbers. I need to extract all the numbers from this string. For example, if I have a string like **RO1234**, I need to be able to use a function, say `extract_number('RO1234')`, and the result would be **1234**.
To be even more precise, this is the kind of SQL query which this function would be used in.
```
SELECT DISTINCT column_name, extract_number(column_name)
FROM table_name
WHERE extract_number(column_name) = 1234;
```
QUESTION: How do I add a function like that to my Oracle database, in order to be able to use it like in the example above, using any of Oracle SQL Developer or SQLTools client applications?
|
You'd use `REGEXP_REPLACE` in order to remove all non-digit characters from a string:
```
select regexp_replace(column_name, '[^0-9]', '')
from mytable;
```
or
```
select regexp_replace(column_name, '[^[:digit:]]', '')
from mytable;
```
Of course you can write a function `extract_number`. It seems a bit like overkill, though, to write a function that consists of only one function call itself.
```
create function extract_number(in_number varchar2) return varchar2 is
begin
return regexp_replace(in_number, '[^[:digit:]]', '');
end;
```
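The same strip-every-non-digit idea, expressed with Python's `re.sub` (an analogue of `REGEXP_REPLACE(s, '[^0-9]', '')`, for checking the behaviour outside the database):

```python
import re

def extract_number(s: str) -> str:
    """Strip every non-digit character, like REGEXP_REPLACE(s, '[^0-9]', '')."""
    return re.sub(r'[^0-9]', '', s)

print(extract_number('RO1234'))                 # 1234
print(extract_number('stack12345overflow569'))  # 12345569
```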
|
You can use **regular expressions** to extract the number from the string. Let's check it. Suppose this string mixes text and numbers: 'stack12345overflow569'. This one should work:
```
select regexp_replace('stack12345overflow569', '[[:alpha:]]|_') as numbers from dual;
```
which will return "12345569".
You can also use this one:
```
select regexp_replace('stack12345overflow569', '[^0-9]', '') as numbers,
regexp_replace('Stack12345OverFlow569', '[^a-z and ^A-Z]', '') as characters
from dual
```
which will return "12345569" for numbers and "StackOverFlow" for characters.
|
Extract number from string with Oracle function
|
[
"",
"sql",
"oracle",
"oracle11g",
"oracle-sqldeveloper",
"sqltools",
""
] |
I would like to have a query that uses the greater of two values/columns if a certain other value for the record is true.
I'm trying to get a report account holdings. Unfortunately the DB usually stores the value of Cash in a column called `HoldingQty`, while for every other type of holding (stocks, bonds, mutual funds) it stores it in a column called `Qty`.
The problem is that sometimes the value of the cash is stored in `Qty` only, and sometimes it is in both `Qty` and `HoldingQty`. Obviously sometimes it is stored only in `HoldingQty` as mentioned above.
Basically I want my select statement to say "if the security is cash, look at both qty and holding qty and give me the value of whatever is greater. Otherwise, if the security isn't cash just give me qty".
How would I write that in T-SQL? Here is my effort:
```
SELECT
h.account_name, h.security_name, h.security_type, h.price,
(CASE:
WHEN security_type = 'cash'
THEN (WHEN h.qty > h.holdingqty
THEN h.qty
ELSE h.holdingqty)
ELSE qty) as quantity,
h.total_value
FROM
holdings h
WHERE
...........
```
|
Your query is almost correct but needs a few syntax adjustments; try the code below:
```
SELECT h.account_name, h.security_name, h.security_type, h.price,
       CASE WHEN h.security_type = 'cash'
            THEN CASE WHEN h.qty > h.holdingqty THEN h.qty
                      ELSE h.holdingqty
                 END
            ELSE h.qty
       END AS quantity,
       h.total_value
FROM holdings h
WHERE ...........
```
|
Almost there!
```
SELECT h.account_name ,
h.security_name ,
h.security_type ,
h.price ,
CASE WHEN security_type = 'cash'
THEN CASE WHEN h.qty > h.holdingqty THEN h.qty
ELSE h.holdingqty
END
ELSE qty
END AS quantity ,
h.total_value
FROM holdings h
WHERE ...........
```
|
Select case comparing two columns
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
"case-when",
""
] |
I get a row, which contains non-zero values in two columns: `FixHours` and `AddHours`:
```
FixHours | AddHours | CalculatedValue
---------+----------+----------------
0,5 | 1,5 | 16.5
```
How can I split it into two rows such as:
```
FixHours | AddHours | CalculatedValue
---------+----------+----------------
0,5 | 0 | 16.5
0 | 1,5 | 16.5
```
I had an idea: find such a row and artificially build the second row using the `union` operator. But I have many rows with values in both columns at the same time. Any ideas?
|
Assuming I understand your question correctly, I think something like this is what you are looking for:
```
SELECT FixHours, AddHours, CalculatedValue
FROM tbl
WHERE FixHours = 0
OR AddHours = 0
UNION
SELECT FixHours, 0, CalculatedValue
FROM tbl
WHERE FixHours <> 0
AND AddHours <> 0
UNION
SELECT 0, AddHours, CalculatedValue
FROM tbl
WHERE FixHours <> 0
AND AddHours <> 0
```
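The three-way `UNION` above can be verified in SQLite with the single sample row (table name `tbl` assumed, as in the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (FixHours REAL, AddHours REAL, CalculatedValue REAL)")
conn.execute("INSERT INTO tbl VALUES (0.5, 1.5, 16.5)")

# Rows with a zero pass through; rows with both columns set are split in two
result = conn.execute("""
    SELECT FixHours, AddHours, CalculatedValue FROM tbl
    WHERE FixHours = 0 OR AddHours = 0
    UNION
    SELECT FixHours, 0, CalculatedValue FROM tbl
    WHERE FixHours <> 0 AND AddHours <> 0
    UNION
    SELECT 0, AddHours, CalculatedValue FROM tbl
    WHERE FixHours <> 0 AND AddHours <> 0
""").fetchall()
print(result)  # two rows: (0.5, 0, 16.5) and (0, 1.5, 16.5)
```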
|
Something like this should do it:
```
select
case when x=1 THEN 0 ELSE fixHours END as fixHours,
case when x=2 THEN 0 ELSE addHours END as addHours,
calculatedValue
from input
cross join (select 1 as x UNION select 2 as x) n -- the derived table needs an alias in SQL Server
```
|
How to split row into two rows in MS SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table which stores 0-6 as a weekday value, and I want to display the weekday name. E.g. if the value is `0`, it will display `Sunday`; if the value is `1`, it will display `Monday`; and so on.
Is there any inbuilt MySQL function for getting the day name from the index?
Thanks in advance!!
|
`DAYNAME( date_value )` is available in MySQL; however, it takes a date, not an int. So if you have the date available, you can just use `DAYNAME(CURDATE())`, `DAYNAME('2014-01-07')`, etc.
If all you have available is the int representing the weekday, then a `CASE` expression might be your best bet.
`CASE weekday_value
WHEN 0 THEN 'Sunday'
WHEN 1 THEN 'Monday'
.....
ELSE ''
END`
|
As @Aliminator mentioned, you could use [`DAYNAME`](https://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_dayname) with a `DATE`.
However, if you don't want to change your schema, here is a nifty hack for you:
```
SELECT DAYNAME(CONCAT("1970-09-2", dayIndex)) FROM your_table;
```
This is based on the fact that 1970-09-20 was a Sunday, the 21st was a Monday, and so on.
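The 1970-09-20 trick is easy to sanity-check; this Python sketch does the equivalent of the `CONCAT`-into-`DAYNAME` hack (0 = Sunday through 6 = Saturday):

```python
import datetime

def weekday_name(index: int) -> str:
    """0 = Sunday ... 6 = Saturday, exploiting that 1970-09-20 was a Sunday."""
    return datetime.date(1970, 9, 20 + index).strftime('%A')

print(weekday_name(0))  # Sunday
print(weekday_name(1))  # Monday
print(weekday_name(6))  # Saturday
```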
|
Get Weekday name from index number in MYSQL
|
[
"",
"mysql",
"sql",
"date",
""
] |
I have a varchar2 field in my db with, for example, the following format:
```
2015-08-19 00:00:01.0
2014-01-11 00:00:01.0
etc.
```
I am trying to convert this to a date of format DD-MON-YYYY. For instance, 2015-08-19 00:00:01.0 should become 19-AUG-2015. I've tried
```
select to_date(upgrade_date, 'YYYY-MM-DD HH24:MI:SS') from connection_report_update
```
but even at this point I'm getting "ORA-01830: date format picture ends before converting entire input string". Any ideas?
|
Your values have detail down to fractional seconds, for which you have to use `TO_TIMESTAMP()` with the format element '`FF`':
```
select to_timestamp('2015-08-19 00:00:01.0' ,'YYYY-MM-DD HH24:MI:SS.FF') as result from dual;
RESULT
---------------------------------------------------------------------------
19-AUG-15 12.00.01.000000000 AM
```
And a `DATE` doesn't have a format itself; only the date *output* can be formatted. So when you want it printed in a different format, you need to apply `TO_CHAR()` to the converted timestamp:
```
select to_char(to_timestamp('2015-08-19 00:00:01.0' ,'YYYY-MM-DD HH24:MI:SS.FF'),'DD-MON-YYYY') as result from dual;
RESULT
-----------
19-AUG-2015
```
|
Why do you store datetimes in a string???
Anyhow. To get from '2015-08-19 00:00:01.0' to a datetime with milliseconds (which is a `TIMESTAMP` in Oracle) use `to_timestamp`:
```
to_timestamp('2015-08-19 00:00:01.0', 'yyyy-mm-dd hh24:mi:ss.ff')
```
Then to get the desired output format, use `to_char`:
```
to_char(thedate, 'DD-MON-YYYY')
```
Together:
```
to_char(to_timestamp('2015-08-19 00:00:01.0', 'yyyy-mm-dd hh24:mi:ss.ff'), 'DD-MON-YYYY')
```
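The same parse-then-reformat two-step can be mirrored in Python, where `%f` plays the role of Oracle's `FF` fractional-seconds element (the `.upper()` is only there to match Oracle's default upper-case month abbreviation):

```python
from datetime import datetime

s = '2015-08-19 00:00:01.0'
# Parse the string into a timestamp, fractional seconds included
ts = datetime.strptime(s, '%Y-%m-%d %H:%M:%S.%f')
# Then format it the desired way
out = ts.strftime('%d-%b-%Y').upper()
print(out)  # 19-AUG-2015
```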
|
convert string literal to a date
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
Consider a database that maintains a list of persons and their contact information, including addresses and such.
Sometimes, the contact information changes. Instead of simply updating the single person record to the new values, I like to keep a history of the changes.
I like to keep the history in a way that when I look at a person's record, I can quickly determine that there are older recordings of that person's data as well. However, I also like to avoid having to build very complicated SQL queries for retrieving only the latest version of each person's records (while this may be easy with a single table, it quickly gets difficult once the table is connected to other tables).
I've come up with a few ways, which I'll add below as answers, but I wonder if there are better ways (While I'm a seasoned code writer, I'm rather new to DB design, so I lack the experience and already ran into a few dead ends).
**Which DB?** I am currently using sqlite but plan to move to a server based DB engine eventually, probably Postgres. However, I meant this question asked in a more general form, not specific to any particular engine, though suggestions how to solve this in certain engines are appreciated, too, in the general interest.
|
Must you keep ***structured*** history information?
* Quite often, the history of changes does not have to be structured, because the history is needed for auditing purposes only, and there is no actual need to be able to perform queries against the historical data.
+ So, what quite often suffices is to ***simply log each modification*** that is made to the database, for which you only need a log table with a date-time field and some variable length text field into which you can format human-readable messages as to who changed what, and what the old value was, and what the new value is.
+ Nothing needs to be added to the actual data tables, and no additional complexity needs to be added to the queries.
If you ***must*** keep structured history information:
* If you need to able to execute queries against historical data, then you must keep the historical data in the database. Some people recommend separate historical tables; I consider this misguided. Instead, ***I recommend using views.***
+ Rename each table from "NAME" to "NAME\_HISTORY" and then create a view called "NAME" which presents to you only the latest records.
+ Views are a feature which exists in most RDBMSes. A view looks like a table, so you can query it as if it was a table, but it is read-only, and it can be created by simply defining a query on existing tables (and views.)
+ So, with a query which orders the rows by history-date, groups by all fields except history-date, selects all fields except history-date, and picks only the first row, you can create a view that looks exactly like the original table before historicity was added.
+ Any existing code which just performs queries and does not need to be aware of history will continue working as before.
+ Code that performs queries against historical data, and code that modifies tables, will now need to start using "NAME\_HISTORY" instead of "NAME".
+ It is okay if code which modifies the table is burdened by having to refer to the table as "NAME\_HISTORY" instead of "NAME", because that code will also have to take into account the fact that it is not just updating the table, it is appending new historical records to it.
+ As a matter of fact, since views are read-only, the use of views will prevent you from accidentally modifying a table without taking care of historicity, and that's a good thing.
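The rename-table-and-add-a-view approach above can be sketched in SQLite (table, column, and view names are invented for illustration; note the latest-row-per-key query here relies on SQLite's documented guarantee that bare columns alongside `MAX()` come from the matching row, so other engines would need a window function or correlated subquery instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person_history (
    person_id  INTEGER,
    name       TEXT,
    address    TEXT,
    valid_from TEXT    -- the history timestamp
);
-- The view presents only the latest record per person
CREATE VIEW person AS
SELECT person_id, name, address, MAX(valid_from) AS valid_from
FROM person_history
GROUP BY person_id;

INSERT INTO person_history VALUES
 (1, 'Alice', 'Old Street 1', '2014-01-01'),
 (1, 'Alice', 'New Street 9', '2015-06-01'),
 (2, 'Bob',   'Main Road 3',  '2015-02-01');
""")
# Existing read-only code keeps querying "person" as if history never existed
latest = conn.execute(
    "SELECT person_id, address FROM person ORDER BY person_id").fetchall()
print(latest)  # [(1, 'New Street 9'), (2, 'Main Road 3')]
```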
|
This is generally referred to as [Slowly Changing Dimension](https://en.wikipedia.org/wiki/Slowly_changing_dimension) and linked Wikipedia page offers several approaches to make this thing work.
Martin Fowler has a list of [Temporal Patterns](http://martinfowler.com/eaaDev/timeNarrative.html) that are not exactly DB-specific, but offer a good starting point.
And finally, Microsoft SQL Server offers [Change Data Capture and Change Tracking](https://msdn.microsoft.com/en-us/library/bb933994.aspx).
|
SQL schema pattern for keeping history of changes
|
[
"",
"sql",
"database-schema",
""
] |
I got this [challenge on HackerRank](https://www.hackerrank.com/challenges/weather-observation-station-6):
> Write a query to print the list of CITY that start with vowels (a, e,
> i, o, u) in lexicographical order. Do not print duplicates.
**My solution:**
```
Select DISTINCT(City)
From Station
Where City like 'A%'
or City like 'E%'
or City like 'I%'
or City like 'O%'
or City like 'U%'
Order by City;
```
**Other solution:**
```
select distinct(city) from station
where upper(substr(city, 1,1)) in ('A','E','I','O','U');
```
This is very clever, but I want to know: are there any other ways to solve this?
Any kind of DB is OK.
|
Regular expressions in MySQL / MariaDB:
```
select distinct city from station where city regexp '^[aeiouAEIOU]' order by city
```
|
In SQL Server you can use a regex-like syntax with `LIKE`:
```
select distinct city from station where city like '[aeuio]%' Order by City
```
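One more variant: SQLite's `LIKE` has no character classes, but its `GLOB` operator does, so the same one-predicate style works there too (sample city names invented for illustration; `GLOB` is case-sensitive, hence both cases in the class):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE station (city TEXT)")
conn.executemany("INSERT INTO station VALUES (?)",
                 [('Oslo',), ('Athens',), ('Paris',), ('athens',), ('Utrecht',)])

# GLOB supports [] character classes; LIKE in SQLite does not
cities = conn.execute("""
    SELECT DISTINCT city FROM station
    WHERE city GLOB '[aeiouAEIOU]*' ORDER BY city
""").fetchall()
print(cities)  # [('Athens',), ('Oslo',), ('Utrecht',), ('athens',)]
```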
|
Wildcard with multiple values?
|
[
"",
"sql",
""
] |
I have this example table:
```
sort_order product color productid price
---------- ------- ------ --------- -----
1 bicycle red 2573257 50
2 bicycle red 0983989 40
3 bicycle red 2093802 45
4 bicycle blue 9283409 55
5 bicycle blue 3982734 60
1 teddy bear brown 9847598 20
2 teddy bear black 3975897 25
3 teddy bear white 2983428 30
4 teddy bear brown 3984939 35
5 teddy bear brown 0923842 30
1 tricycle pink 2356235 25
2 tricycle blue 2394823 30
3 tricycle blue 9338832 35
4 tricycle pink 2383939 30
5 tricycle blue 3982982 35
```
I would like a query that returns the product, the average price and the most frequent color.
So my query in this example would be expected to return:
```
product most_frequent_color average_price
------- ------------------- -------------
bicycle red 50
teddy bear brown 28
tricycle blue 31
```
The average part seems easy, just grouping by product and using avg(price), but how can I solve the most frequent color part?
This is the query I have figured out so far, but I don't know how to get the most_frequent_color for each group:
```
SELECT product, avg(price) AS average_price from products
WHERE sort_order <= 5
GROUP BY product
```
In my real-world table there are usually far more rows in each group than I'm interested in, so I only take a limited number of them using the sort_order field.
For the rare groups that either have NULL in all rows for "color", or that have more than one most frequent color, I would like the returned most_frequent_color column to be NULL.
Thank you for any help on this!
|
You can use an additional query in the `SELECT` clause to effectively perform an aggregate query on the same data:
```
SELECT t.product,
Avg ( t.price ) AS average_price,
(
SELECT IF ( Count(*) = t4.count, NULL, t2.color ) 'color'
FROM products t2
JOIN
(
SELECT t3.product,
t3.color,
count(*) 'count'
FROM products t3
GROUP BY t3.product ,
t3.color
ORDER BY count(*) DESC
) t4
ON t2.product = t4.product
AND t2.color <> t4.color
WHERE t2.product = t.product
GROUP BY t2.color
ORDER BY count(*) DESC limit 1
) AS most_frequent_color
FROM products t
WHERE t.sort_order <= 5
GROUP BY t.product
```
So we link the 2nd copy of `products` using the `product` column, select the count of each color (for that product) with most frequent at the top of the list, then take the 1st row only - hence the most frequent value of color for that product.
This is not the same as an inline view (which is placed in the `FROM` clause of the query).
**NOTE:**
This will work with MySQL, but it is not database agnostic.
**UPDATE:**
Now checks for more than 1 color with the same frequency and returns null.
|
```
SELECT m.product
, AVG(m.price) avg_price
, n.color most_frequent
FROM my_table m
JOIN
( SELECT x.product
, x.color
FROM
( SELECT product
, color
, COUNT(color) total
FROM my_table
GROUP
BY product
, color
) x
JOIN
( SELECT product
, MAX(total) max_total
FROM
( SELECT product
, color
, COUNT(color) total
FROM my_table
GROUP
BY product
, color
) a
GROUP
BY product
) y
ON y.product = x.product
AND y.max_total = x.total
) n
ON n.product = m.product
GROUP
BY m.product;
```
|
how to return most frequent value for a certain column for each group in GROUP BY query?
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I need to find all employees whose supervisor's supervisor has a SSN of '888665555'
I can't seem to figure out what I'm not doing correctly.
Here is a copy of the table being used.
```
Fname Lname Ssn Super_ssn
john smith 123456789 333445555
franklin wong 333445555 888665555
alicia zelaya 999887777 987654321
jennifer wallace 987654321 888665555
ramesh narayan 666884444 333446666
joyce english 453453453 333445555
ahmad jabbar 987987987 987654321
james borg 888665555 NULL
```
The SQL code I've been trying is below.
```
SELECT EMPLOYEE.Fname, EMPLOYEE.Lname
FROM EMPLOYEE
WHERE EMPLOYEE.Super_ssn =
(SELECT EMPLOYEE.Ssn
FROM EMPLOYEE
WHERE EMPLOYEE.Super_ssn = '888665555');
```
The result should look something like this:
```
Fname Lname
john smith
alicia zelaya
ramesh narayan
joyce english
ahmad jabbar
```
|
I found the answer thanks to NSNoob.
```
SELECT EMPLOYEE.Fname, EMPLOYEE.Lname
FROM EMPLOYEE
WHERE EMPLOYEE.Super_ssn IN
(SELECT EMPLOYEE.Ssn
FROM EMPLOYEE
WHERE EMPLOYEE.Super_ssn = '888665555');
```
This takes each of the results instead of only accepting one.
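The `IN` fix can be replayed in SQLite with the question's data. One caveat worth noting: with the `Super_ssn` values exactly as printed (ramesh narayan's is 333446666, which matches no supervisor), the query returns four rows rather than the five in the question's expected output, which suggests that value was meant to be 333445555:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (Fname TEXT, Lname TEXT, Ssn TEXT, Super_ssn TEXT)")
conn.executemany("INSERT INTO EMPLOYEE VALUES (?,?,?,?)", [
    ('john', 'smith', '123456789', '333445555'),
    ('franklin', 'wong', '333445555', '888665555'),
    ('alicia', 'zelaya', '999887777', '987654321'),
    ('jennifer', 'wallace', '987654321', '888665555'),
    ('ramesh', 'narayan', '666884444', '333446666'),
    ('joyce', 'english', '453453453', '333445555'),
    ('ahmad', 'jabbar', '987987987', '987654321'),
    ('james', 'borg', '888665555', None),
])
# Employees whose supervisor is themselves supervised by 888665555
result = conn.execute("""
    SELECT Fname, Lname FROM EMPLOYEE
    WHERE Super_ssn IN (SELECT Ssn FROM EMPLOYEE
                        WHERE Super_ssn = '888665555')
""").fetchall()
print(result)
```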
|
Your subquery below is returning more than one result:
```
(SELECT EMPLOYEE.Ssn
FROM EMPLOYEE
WHERE EMPLOYEE.Super_ssn = '888665555');
```
Modify it like
```
(SELECT top 1 EMPLOYEE.Ssn
FROM EMPLOYEE
WHERE EMPLOYEE.Super_ssn = '888665555');
```
Or use something similar that makes the subquery return only one result.
|
Find employees whose supervisor's supervisor is a specific person
|
[
"",
"sql",
"ms-access",
""
] |
I have a `users` table and an `emails` table. A user can have many emails.
I want to grab only those users that have more than one email. Here is what I have so far:
```
SELECT Users.name, emails.email
FROM Users
INNER JOIN emails
On Users.id=Emails.user_id
/*specify condition to only grab duplicates here */
```
|
```
SELECT u.id
FROM Users u
INNER JOIN emails e On u.id = e.user_id
group by u.id
having count(distinct e.email) > 1
```
Use `group by` and `having`
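The `GROUP BY` / `HAVING` approach in a runnable SQLite sketch (sample users and emails invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (id INTEGER, name TEXT);
CREATE TABLE emails (user_id INTEGER, email TEXT);
INSERT INTO Users VALUES (1, 'Ann'), (2, 'Ben');
INSERT INTO emails VALUES (1, 'a@x.com'), (1, 'a@y.com'), (2, 'b@x.com');
""")
# Users owning more than one distinct email address
result = conn.execute("""
    SELECT u.id, u.name
    FROM Users u JOIN emails e ON u.id = e.user_id
    GROUP BY u.id, u.name
    HAVING count(DISTINCT e.email) > 1
""").fetchall()
print(result)  # [(1, 'Ann')]
```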
|
You can also use a CTE:
```
;WITH CTE AS (
SELECT Users.name, emails.email, ROW_NUMBER() OVER (PARTITION BY emails.email ORDER BY emails.email) AS 'Rank'
FROM Users
INNER JOIN emails
On Users.id=Emails.user_id)
SELECT * FROM CTE WHERE Rank > 1
```
|
Return Only Duplicate Records After Inner Join
|
[
"",
"mysql",
"sql",
""
] |
I've got 2 tables - **dishes** and **ingredients**:
in **Dishes,** I've got a list of pizza dishes, ordered as such:
[](https://i.stack.imgur.com/1O23n.png)
In **Ingredients,** I've got a list of all the different ingredients for all the dishes, ordered as such:[](https://i.stack.imgur.com/f6k0W.png)
I want to be able to **list all the names of all the ingredients of each dish alongside each dish's name**.
I've written this query, but it does not replace the ingredient ids with names as it should; instead it returns an empty set. Please explain what it is that I'm doing wrong:
```
SELECT dishes.name, ingredients.name, ingredients.id
FROM dishes
INNER JOIN ingredients
ON dishes.ingredient_1=ingredients.id,dishes.ingredient_2=ingredients.id,dishes.ingredient_3=ingredients.id,dishes.ingredient_4=ingredients.id,dishes.ingredient_5=ingredients.id,dishes.ingredient_6=ingredients.id, dishes.ingredient_7=ingredients.id,dishes.ingredient_8=ingredients.id;
```
It would be great if you could refer to:
1. *The logic of the DB structuring* - am I doing it correctly?
2. *The logic behind the SQL query* - if the DB is built in the right fashion, then why upon executing the query I get the empty set?
3. If you've encountered such a problem before - one that *requires a single-to-many relationship* - how did you solve it in a way different from this, using PHP & MySQL?
***Disregard The Text In Hebrew - Treat It As Your Own Language.***
|
**1. The logic of the DB structuring - am I doing it correctly?**
* This is denormalized data. To normalize it, you would restructure your database into three tables:
+ `Pizza`
+ `PizzaIngredients`
+ `Ingredients`
`Pizza` would have `ID`, `name`, and `type` where `ID` is the primary key.
`PizzaIngredients` would have `PizzaId` and `IngredientId` (this is a many-many table where the primary key is a composite key of `PizzaId` and `IngredientID`)
`Ingredients` has `ID` and `name` where `ID` is the primary key.
**2. List all the names of all the ingredients of each dish alongside each dish's name. Something like this in `MySQL` (untested):**
```
SELECT p.ID, p.name, GROUP_CONCAT(i.name) AS ingredients
FROM pizza p
INNER JOIN pizzaingredients pi ON p.ID = pi.PizzaID
INNER JOIN ingredients i ON pi.IngredientID = i.ID
GROUP BY p.id
```
**3. If you've encountered such a problem before - one that requires a single-to-many relationship - how did you solve it in a way different from this, using PHP & MySQL?**
* Using a many-many relationship, since that what your example truly is. You have many pizzas which can have many ingredients. And many ingredients belong to many different pizzas.
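The normalized three-table design described above can be checked end to end with a small runnable sketch. This uses SQLite through Python's `sqlite3` (SQLite's `GROUP_CONCAT` plays the same role as MySQL's here); the pizza and ingredient names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Pizza (ID INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Ingredients (ID INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE PizzaIngredients (
        PizzaId INTEGER, IngredientId INTEGER,
        PRIMARY KEY (PizzaId, IngredientId)  -- composite key of the many-many table
    );
    INSERT INTO Pizza VALUES (1, 'Margherita');
    INSERT INTO Ingredients VALUES (1, 'tomato'), (2, 'mozzarella');
    INSERT INTO PizzaIngredients VALUES (1, 1), (1, 2);
""")

# One row per pizza, with all ingredient names concatenated.
rows = cur.execute("""
    SELECT p.name, GROUP_CONCAT(i.name) AS ingredients
    FROM Pizza p
    JOIN PizzaIngredients pi ON p.ID = pi.PizzaId
    JOIN Ingredients i ON pi.IngredientId = i.ID
    GROUP BY p.ID
""").fetchall()
print(rows)
```

Note that `GROUP_CONCAT` does not guarantee element order unless you ask for one, so the ingredient list should be treated as a set.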
|
It seems to me that a better Database Structure would have a **Dishes\_Ingredients\_Rel** table, rather than having a bunch of columns for Ingredients.
```
DISHES_INGREDIENTS_REL
DishesID
IngredientID
```
Then, you could just do a much simpler JOIN.
```
SELECT Ingredients.Name
FROM Dishes_Ingredients_Rel
INNER JOIN Ingredients
ON Dishes_Ingredients_Rel.IngredientID = Ingredients.IngredientID
WHERE Dishes_Ingredients_Rel.DishesID = @DishesID
```
|
SQL Inner Join With Multiple Columns
|
[
"",
"mysql",
"sql",
"join",
"inner-join",
""
] |
I am trying to create a numerator(num) and denominator(den) column that I will later use to create a metric value. In my numerator column, I need to have a criteria that my denominator column does not have. When I add the where clause to my sub query, I am getting the error below. I do not want to add INRInRange to my Group By clause.
> Column 'dbo.PersonDetailB.INRInRange' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause."
```
SELECT
dbo.PersonDetailSpecialty.PracticeAbbrevName,
(SELECT COUNT(DISTINCT dbo.Problem.PID) WHERE PersonDetailB.INRInRange='True') AS num,
COUNT(DISTINCT dbo.Problem.PID) AS den
FROM
dbo.PersonDetailB
RIGHT OUTER JOIN
dbo.PersonDetailSpecialty ON dbo.PersonDetailB.PID = dbo.PersonDetailSpecialty.PID
LEFT OUTER JOIN
dbo.Problem ON dbo.PersonDetailSpecialty.PID = dbo.Problem.PID
GROUP BY
practiceabbrevname
```
|
Create a sub-query that counts `PersonDetailB.INRInRange` and LEFT OUTER JOIN it with the original query.
```
SELECT Main.PracticeAbbrevName, InRange.Num AS num, Main.den
FROM
(SELECT
dbo.PersonDetailSpecialty.PracticeAbbrevName,
COUNT(DISTINCT dbo.Problem.PID) AS den
FROM
dbo.PersonDetailB
RIGHT OUTER JOIN
dbo.PersonDetailSpecialty ON dbo.PersonDetailB.PID = dbo.PersonDetailSpecialty.PID
LEFT OUTER JOIN
dbo.Problem ON dbo.PersonDetailSpecialty.PID = dbo.Problem.PID
GROUP BY
practiceabbrevname) Main
LEFT OUTER JOIN
  (SELECT practiceabbrevname, COUNT(DISTINCT dbo.Problem.PID) AS Num
   FROM dbo.PersonDetailB
   RIGHT OUTER JOIN dbo.PersonDetailSpecialty ON dbo.PersonDetailB.PID = dbo.PersonDetailSpecialty.PID
   LEFT OUTER JOIN dbo.Problem ON dbo.PersonDetailSpecialty.PID = dbo.Problem.PID
   WHERE dbo.PersonDetailB.INRInRange = 'True'
   GROUP BY practiceabbrevname) InRange ON Main.practiceabbrevname = InRange.practiceabbrevname
```
|
The problem with this statement:
```
SELECT dbo.PersonDetailSpecialty.PracticeAbbrevName,
(SELECT COUNT(DISTINCT dbo.Problem.PID) WHERE PersonDetailB.INRInRange = 'True') AS num,
COUNT(DISTINCT dbo.Problem.PID) AS den
```
is that `PersonDetailB.INRInRange` doesn't have a unique value in each group. It is possible that it does. One method is to add it to the `GROUP BY`:
```
GROUP BY practiceabbrevname, PersonDetailB.INRInRange
```
Another method would use an aggregation function in the subquery:
```
SELECT dbo.PersonDetailSpecialty.PracticeAbbrevName,
(SELECT COUNT(DISTINCT dbo.Problem.PID) WHERE MAX(PersonDetailB.INRInRange) = 'True') AS num,
COUNT(DISTINCT dbo.Problem.PID) AS den
```
|
Subquery Where clause invalid in select list
|
[
"",
"sql",
"sql-server",
"join",
"group-by",
"subquery",
""
] |
I want to add condition to below select sql code.
```
select rtrim(ltrim(SUBSTRING(line,1,CHARINDEX('|',line) -1))) as drivename
,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('|',line)+1,
(CHARINDEX('%',line) -1)-CHARINDEX('|',line)) )) as Float)/1024,0) as 'capacity(GB)'
,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('%',line)+1,
(CHARINDEX('*',line) -1)-CHARINDEX('%',line)) )) as Float) /1024 ,0)as 'freespace(GB)'
from #output
where line like '[A-Z][:]%'
order by drivename
```
The result is ;
```
drivename capacity(GB) freespace(GB)
C:\ 120 36
D:\ 100 7
```
I want to add like this : 'freespace(GB) > 10'
How can I add this condition?
|
There are multiple ways to do this. A `temp table` and a `CTE` may seem the same, but try to understand the difference between them from [here.](https://dba.stackexchange.com/a/13117/72998)
> **By using Temporary table**
```
select * into ##t from
(select rtrim(ltrim(SUBSTRING(line,1,CHARINDEX('|',line) -1))) as drivename,
round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('|',line)+1,
(CHARINDEX('%',line) -1)-CHARINDEX('|',line)) )) as Float)/1024,0) as 'capacity(GB)',
round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('%',line)+1,
(CHARINDEX('*',line) -1)-CHARINDEX('%',line)) )) as Float) /1024 ,0)as 'freespace(GB)'
from #output
where line like '[A-Z][:]%') as T
Select * from ##t where [freespace(GB)] > 10 order by drivename
```
> **By Using CTE**
```
;WITH cte as
(
select rtrim(ltrim(SUBSTRING(line,1,CHARINDEX('|',line) -1))) as drivename
,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('|',line)+1,
(CHARINDEX('%',line) -1)-CHARINDEX('|',line)) )) as Float)/1024,0) as 'capacity(GB)'
,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('%',line)+1,
(CHARINDEX('*',line) -1)-CHARINDEX('%',line)) )) as Float) /1024 ,0)as 'freespace(GB)'
from #output
where line like '[A-Z][:]%'
)
SELECT * FROM cte WHERE [freespace(GB)] > 10 order by drivename;
```
> **By directly using condition in where clause**
```
select rtrim(ltrim(SUBSTRING(line,1,CHARINDEX('|',line) -1))) as drivename,
round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('|',line)+1,
(CHARINDEX('%',line) -1)-CHARINDEX('|',line)) )) as Float)/1024,0) as 'capacity(GB)',
round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('%',line)+1,
(CHARINDEX('*',line) -1)-CHARINDEX('%',line)) )) as Float) /1024 ,0)as 'freespace(GB)'
from #output
where line like '[A-Z][:]%'
And (round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('%',line)+1,
(CHARINDEX('*',line) -1)-CHARINDEX('%',line)) )) as Float) /1024 ,0)) > 10
order by drivename
```
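The derived-table / CTE pattern above can be illustrated with a small runnable example in SQLite via Python's `sqlite3`. The drive table and megabyte values are invented stand-ins for the `#output` table; the point is that the computed column is produced in the inner query and can then be filtered by name in the outer one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE drives (drivename TEXT, free_mb REAL);
    INSERT INTO drives VALUES ('C:', 36864.0), ('D:', 7168.0);
""")

# Compute free_gb once in the inner query, then filter on it outside.
rows = cur.execute("""
    SELECT drivename, free_gb
    FROM (
        SELECT drivename, ROUND(free_mb / 1024, 0) AS free_gb
        FROM drives
    ) AS t
    WHERE free_gb > 10
    ORDER BY drivename
""").fetchall()
print(rows)  # [('C:', 36.0)]
```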
|
Use CTE or subquery:
```
;WITH cte as
(
select rtrim(ltrim(SUBSTRING(line,1,CHARINDEX('|',line) -1))) as drivename
,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('|',line)+1,
(CHARINDEX('%',line) -1)-CHARINDEX('|',line)) )) as Float)/1024,0) as 'capacity(GB)'
,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('%',line)+1,
(CHARINDEX('*',line) -1)-CHARINDEX('%',line)) )) as Float) /1024 ,0)as 'freespace(GB)'
from #output
where line like '[A-Z][:]%'
)
SELECT *
FROM cte
WHERE [freespace(GB)] > 10
order by drivename;
```
Second, this is a classic example of not understanding **[Logical Query Processing](https://stackoverflow.com/a/32668470/5070879)**.
|
SQL Server - Where condition usage
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am tryng to complete a new column in a table.
I need that the value of this new column be the creation date of the row.
My best effort for the previous values of the table is to use ORA\_ROWSCN.
I can select this value in a query, but the update fails:
```
UPDATE mytable AA
SET AA.reg_date = SCN_TO_TIMESTAMP(AA .ORA_ROWSCN);
```
other option that fails too:
```
UPDATE mytable AA
SET AA.reg_date = SCN_TO_TIMESTAMP
(
SELECT bb.ORA_ROWSCN
FROM mytable bb
WHERE bb.ID= AA.ID
);
```
Any option? thanks in advance
|
I'm afraid that the only reliable way of retrieving the creation date would have been by making sure you persisted that information when you created the rows.
Using `scn_to_timestamp` with `ora_rowscn` to retrieve the creation timestamp can't work for the following reasons:
* If it works, it will only give you an approximate timestamp based on the `SCN`. Not an exact time.
* The `ora_rowscn` changes when the row is updated. So in some instances, you would be getting the update timestamp, not the creation timestamp.
* The `ora_rowscn` is known to change when other data in the same data block is updated as well. So the value may be completely wrong and unrelated to your row.
* And, to make matters worse, if the `scn` is too old, the `scn_to_timestamp` simply won't work. This is documented [here](http://docs.oracle.com/database/121/SQLRF/functions175.htm#SQLRF06325).
> The association between an SCN and a timestamp when the SCN is generated is remembered by the database ***for a limited period of time***.
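A sketch of the only reliable approach going forward, i.e. persisting the creation time at insert: SQLite syntax is shown via Python's `sqlite3`; in Oracle the equivalent would be a `TIMESTAMP` column with `DEFAULT SYSTIMESTAMP`. The table and column names mirror the question but the payload is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE mytable (
        id INTEGER PRIMARY KEY,
        payload TEXT,
        -- creation time recorded automatically at insert
        reg_date TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
    );
    INSERT INTO mytable (payload) VALUES ('first row');
""")

rows = cur.execute("SELECT id, reg_date FROM mytable").fetchall()
print(rows)  # reg_date is filled in by the DEFAULT
```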
|
ORA\_ROWSCN is by default the last update SCN of the whole block, so all rows within the block will have the same value (the one assigned during the last DML). The table would need to have been created with ROWDEPENDENCIES to store the SCN for each row (this requires additional space).
Also, the SCN\_TO\_TIMESTAMP function is limited in the range of SCNs it can translate to timestamps. The mapping is stored only for a certain amount of time; this is most likely the cause of the error you experience.
|
use ORA_ROWSCN value for update a column in oracle 10g
|
[
"",
"sql",
"oracle",
"oracle10g",
""
] |
I have a MySQL table with data as:-
```
country | city
---------------
italy | milan
italy | rome
italy | rome
ireland | cork
uk | london
ireland | cork
```
I want query this and group by the country and city and have counts of both the city and the country, like this:-
```
country | city | city_count | country_count
---------------------------------------------
ireland | cork | 2 | 2
italy | milan | 1 | 3
italy | rome | 2 | 3
uk | london | 1 | 1
```
I can do:-
```
SELECT country, city, count(city) as city_count
FROM jobs
GROUP BY country, city
```
Which gives me:-
```
country | city | city_count
-----------------------------
ireland | cork | 2
italy | milan | 1
italy | rome | 2
uk | london | 1
```
Any pointers on getting the country\_count too?
|
You can use a correlated sub-query:
```
SELECT country, city, count(city) as city_count,
(SELECT count(*)
FROM jobs AS j2
WHERE j1.country = j2.country) AS country_count
FROM jobs AS j1
GROUP BY country, city
```
[**Demo here**](http://sqlfiddle.com/#!9/f99e86/1)
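The correlated sub-query can be verified against the question's own sample data using SQLite through Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE jobs (country TEXT, city TEXT);
    INSERT INTO jobs VALUES
        ('italy','milan'), ('italy','rome'), ('italy','rome'),
        ('ireland','cork'), ('uk','london'), ('ireland','cork');
""")

# city_count comes from the GROUP BY; country_count from the
# correlated sub-query that re-scans jobs per country.
rows = cur.execute("""
    SELECT country, city, COUNT(city) AS city_count,
           (SELECT COUNT(*) FROM jobs j2
             WHERE j2.country = j1.country) AS country_count
    FROM jobs j1
    GROUP BY country, city
    ORDER BY country, city
""").fetchall()
print(rows)
```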
|
You can just do it in subquery on results.
```
SELECT jm.country, jm.city,
count(city) as city_count,
(select count(*) from jobs j where j.country = jm.country) as country_count
FROM jobs jm
GROUP BY jm.country, jm.city
```
[SQL Fidlle example](http://sqlfiddle.com/#!9/c7def/6/0)
|
SQL Group By and Count on two columns
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I am using healthcare data and need to identify a patient's number of stays and admit date and leave date in location 'X'. However, the problem is that patients move between different locations and I need to account for the time in between. Sometimes a patient will start in location 'X', move to location 'Y', and then move back to location 'X'. For this scenario, if the stay at location 'Y' is less than or equal to 48 hours, then I need the length of stay to be calculated using the in date of the first stay at location 'X' and the out date of the last stay at location 'X'.
Example data:
```
PatientID Location InDateTime OutDateTime
1 x 7-9-2003 10:00am 7-9-2003 1:00pm
1 y 7-9-2003 1:00pm 7-10-2003 2:00pm
1 y 7-10-2013 2:00pm 7-10-2003 4:00pm
1 x 7-10-2003 4:00pm 7-13-2003 8:00pm
2 y 7-20-2003 1:00pm 7-21-2003 9:00am
2 x 7-21-2003 9:00am 7-24-2003 8:00am
2 y 7-24-2003 8:00am 7-30-2003 10:00am
2 x 8-4-2003 3:00pm 8-7-2003 11:00am
```
Desired output:
```
PatientID InDateTime OutDateTime
1 7-9-2003 10:00am 7-13-2003 8:00pm
2 7-21-2003 9:00am 7-24-2003 8:00am
2 8-4-2003 3:00pm 8-7-2003 11:00am
```
I have tried using case statements, min/max, lag/lead, etc. In the case above, min/max doesn't work because I need to keep two separate visits to location X for PatientID and therefore cannot group on PatientID. Here's one example of a combination of a case/when clause and lag:
```
When datediff(hh,lag(indatetime) over (partition by patientID
order by indatetime),indatetime)>48 then indatetime
```
The above basically states that when there is a difference of greater than 48 hours from the previous admit location to the current location, then use the date time for the current location. However, this doesn't account for the possibility of 2 or more stays at other locations in between location 'X' (like patientID 1 in the example above).
I know I can't use a for loop in SQL, but I think I need to use something similar. Any thoughts?
**Update:**
Thanks, shawnt00. Say I've taken out all areas where location='Y'. Now I have:
```
PatientID Location InDateTime OutDateTime
1 x 7-9-2003 10:00am 7-9-2003 1:00pm
1 x 7-10-2003 4:00pm 7-13-2003 8:00pm
2 x 7-21-2003 9:00am 7-24-2003 8:00am
2 x 8-4-2003 3:00pm 8-7-2003 11:00am
```
I still need to be able to look at Patient 2 and identify those stays in 'X' as separate stays since there is greater than 48 hours in between the two 'X' stays.
|
Not sure if this is what you tried to do, but this first combines adjacent stays in the same location into one, then filters out the ones that are in y and over 48 hours, then combines all the stays that are adjacent.
Sorry for the bad table aliases; CTEs might have been clearer.
```
select patientid, min(indatetime) as indatetime, max(outdatetime) as outdatetime
from
(
select patientid, indatetime, outdatetime,
sum(IsStart) over (partition by patientid order by indatetime) as GRP
from
(
select patientid, indatetime, outdatetime,
case when isnull(lag(outdatetime) over (partition by patientid order by indatetime),'21001231') != indatetime then 1 else 0 end as IsStart
from
(
select * from (
select patientid,location,min(indatetime) as indatetime,max(outdatetime) as outdatetime from (
select patientid,location, indatetime, outdatetime,
sum(IsStart) over (partition by patientid order by indatetime) as Grp
from (
select *,
case when isnull(lag(Location) over (partition by patientid order by indatetime),'dummy') != location then 1 else 0 end as IsStart
from table1 a
where (location = 'x' or exists (select 1 from table1 b where a.patientid = b.patientid and b.location = 'x' and b.indatetime < a.indatetime))
) z
) c group by patientid, location, grp
) a where not (location = 'y' and datediff(minute, indatetime, outdatetime) > 48*60)
) X
) Y
) Z
group by patientid, GRP
order by patientid, indatetime
```
Example in [SQL Fiddle](http://sqlfiddle.com/#!6/585aa/1)
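The core of this answer is the gaps-and-islands trick: flag a row as the start of a new stay when the gap since the previous `OutDateTime` exceeds 48 hours, then group by the running sum of those flags. A compact runnable sketch of just that step, using SQLite (3.25+ for window functions) via Python's `sqlite3`, on the already-filtered location-'x' rows from the question's update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE stays (PatientID INT, InDateTime TEXT, OutDateTime TEXT);
    INSERT INTO stays VALUES
        (1, '2003-07-09 10:00', '2003-07-09 13:00'),
        (1, '2003-07-10 16:00', '2003-07-13 20:00'),
        (2, '2003-07-21 09:00', '2003-07-24 08:00'),
        (2, '2003-08-04 15:00', '2003-08-07 11:00');
""")

rows = cur.execute("""
    WITH flagged AS (
        SELECT *,
               -- new stay unless the gap since the previous OutDateTime
               -- is 48 hours (2 days) or less; the first row per patient
               -- has a NULL LAG and so also starts a stay
               CASE WHEN julianday(InDateTime) - julianday(
                        LAG(OutDateTime) OVER
                            (PARTITION BY PatientID ORDER BY InDateTime)
                    ) <= 2.0
                    THEN 0 ELSE 1 END AS is_start
        FROM stays
    ),
    grouped AS (
        SELECT *, SUM(is_start) OVER
                   (PARTITION BY PatientID ORDER BY InDateTime) AS grp
        FROM flagged
    )
    SELECT PatientID,
           MIN(InDateTime)  AS stay_in,
           MAX(OutDateTime) AS stay_out
    FROM grouped
    GROUP BY PatientID, grp
    ORDER BY PatientID, stay_in
""").fetchall()
print(rows)
```

Patient 1's two rows merge (27-hour gap), while patient 2's rows stay separate (the gap is about 11 days), matching the desired output.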
|
Neat! This got me digging into the documentation for recursive CTEs.
```
declare @data table (patientID bigint ,Location char,InDateTime datetime ,OutDateTime datetime )
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 1, 'x', '7-9-2003 10:00am', '7-9-2003 1:00pm')
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 1, 'y', '7-9-2003 1:00pm ', '7-10-2003 2:00pm')
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 1, 'y', '7-10-2003 3:00pm', '7-10-2003 4:00pm')
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 1, 'x', '7-10-2003 4:00pm', '7-13-2003 8:00pm')
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 2, 'y', '7-20-2003 1:00pm', '7-21-2003 9:00am')
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 2, 'x', '7-21-2003 9:00am', '7-24-2003 8:00am')
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 2, 'y', '7-24-2003 8:00am', '7-30-2003 10:00am')
INSERT INTO @DATA (PatientID,Location,InDateTime,OutDateTime) VALUES( 2, 'x', '8-4-2003 3:00pm', '8-7-2003 11:00am')
;
with patLocData as
(
/* Get core data */
select * from @data
),
EntriesInNext48 as
(
/* Recursive CTE */
/* Start by getting a basic list of all the entries in our core CTE/table */
select patLocData.* from patLocData
UNION ALL
/* Recurse on those core entries - each time around, find a record for that patient and that location
with a start date within 48 hours. This results in each original inDateTime being linked to all the later
OutDateTimes it can reach without jumping more than 48 hours ahead. */
select core.patientID, core.Location, core.InDateTime, nex.OutDateTime from patLocData core
inner join EntriesInNext48 nex
on nex.InDateTime > core.OutDateTime
and nex.InDateTime <= DATEADD(hour,48,core.outdatetime)
and nex.patientID = core.patientID
and nex.Location = core.Location
),
getTopOuts as
(
/* Clean up our output to only use the maximum outdate rather than any of the inbetweens */
select patientID, Location, InDateTime, MAX(OutDateTime) as OutDateTime
from EntriesInNext48
group by patientID, Location, InDateTime
)
select *
/* filter our results to remove the cases where a service's start-to-end occurs entirely within another service */
from getTopOuts gto
where 1=1
and Location = 'x'
and 1=(
select COUNT(*)
from gettopouts internals
where
internals.InDateTime <= gto.InDateTime
and internals.OutDateTime >= gto.OutDateTime
and internals.patientID = gto.patientID
and internals.Location = gto.Location
)
order by gto.InDateTime
```
|
Calculate length of stay in certain location (include in-between stays at other locations if less than 48 hrs)
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to convert XML node values to comma-separated values, but I am getting the following error message:
> Incorrect syntax near the keyword 'SELECT'.
```
declare @dataCodes XML = '<Root>
<List Value="120" />
<List Value="110" />
</Root>';
DECLARE @ConcatString VARCHAR(MAX)
SELECT @ConcatString = COALESCE(@ConcatString + ', ', '') + Code FROM (SELECT T.Item.value('@Value[1]','VARCHAR(MAX)') as Code FROM @dataCodes.nodes('/Root/List') AS T(Item))
SELECT @ConcatString AS Result
GO
```
I tried to follow an [article](http://www.databasejournal.com/features/mssql/converting-comma-separated-value-to-rows-and-vice-versa-in-sql-server.html) but I'm not sure how to proceed further. Any suggestion is appreciated.
**Expectation:**
Comma separated values ('120,110') stored in a variable.
|
Try this;
```
DECLARE @dataCodes XML = '<Root>
<List Value="120" />
<List Value="110" />
</Root>';
DECLARE @ConcatString VARCHAR(MAX)
SELECT @ConcatString = COALESCE(@ConcatString + ', ', '') + Code
FROM (
SELECT T.Item.value('@Value[1]', 'VARCHAR(MAX)') AS Code
FROM @dataCodes.nodes('/Root/List') AS T(Item)
) as TBL
SELECT @ConcatString AS Result
GO
```
You just need to add an alias to the derived table (the inner subquery).
|
For future readers, XML data can be extracted into arrays, lists, vectors, and variables for output in comma separated values more fluidly using general purpose languages. Below are open-source solutions using OP's needs taking advantage of `XPath`.
**Python**
```
import lxml.etree as ET
xml = '<Root>\
<List Value="120" />\
<List Value="110" />\
</Root>'
dom = ET.fromstring(xml)
nodes = dom.xpath('//List/@Value')
data = [] # LIST
for elem in nodes:
data.append(elem)
print((", ").join(data))
120, 110
```
**PHP**
```
$xml = '<Root>
<List Value="120" />
<List Value="110" />
</Root>';
$dom = simplexml_load_string($xml);
$node = $dom->xpath('//List/@Value');
$data = []; # Array
foreach ($node as $n){
$data[] = $n;
}
echo implode(", ", $data);
120, 110
```
**R**
```
library(XML)
xml = '<Root>
<List Value="120" />
<List Value="110" />
</Root>'
doc<-xmlInternalTreeParse(xml)
data <- xpathSApply(doc, "//List", xmlGetAttr, 'Value') # LIST
print(paste(data, collapse = ', '))
120, 110
```
|
Converting XML node values to comma separated values in SQL
|
[
"",
"sql",
"sql-server",
""
] |
What's the problem with the syntax? I tried to `INNER JOIN` 3 tables, but I am getting the error message below
> **Additional information: Incorrect syntax near the keyword 'on'.**
```
SELECT train.id,
train.class_id,
train.type_id,
train.m_year,
train_type.type,
train_type.avarage_speed,
train_class.class,
train_class.capacity
FROM train
INNER JOIN
ON train.class_id = train_class.id
AND train.type_id = train_type.id
```
|
You have a syntax error in your [join.](http://www.w3schools.com/sql/sql_join_inner.asp)
The syntax for join is as follows:
```
SELECT column_name(s)
FROM table1
JOIN table2
ON table1.column_name=table2.column_name;
```
Hence your query will be like the following:
```
" SELECT train.id, train.class_id, train.type_id, train.m_year, train_type.type," & _
" train_type.avarage_speed, train_class.class, train_class.capacity FROM train" & _
" INNER JOIN train_class on train.class_id = train_class.id " & _
" INNER JOIN train_type on train.type_id = train_type.id"
```
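The corrected join chain can be demonstrated with a tiny runnable example, here using SQLite via Python's `sqlite3` (the sample rows are invented; the schema mirrors the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE train (id INT, class_id INT, type_id INT, m_year INT);
    CREATE TABLE train_class (id INT, class TEXT, capacity INT);
    CREATE TABLE train_type (id INT, type TEXT, avarage_speed INT);
    INSERT INTO train VALUES (1, 10, 20, 1999);
    INSERT INTO train_class VALUES (10, 'first', 300);
    INSERT INTO train_type VALUES (20, 'express', 160);
""")

# Each INNER JOIN names its table and carries its own ON condition.
rows = cur.execute("""
    SELECT train.id, train_class.class, train_type.type
    FROM train
    INNER JOIN train_class ON train.class_id = train_class.id
    INNER JOIN train_type  ON train.type_id  = train_type.id
""").fetchall()
print(rows)  # [(1, 'first', 'express')]
```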
|
You have missed the 2nd and 3rd tables while using INNER JOIN.
```
INNER JOIN <Table 2> on train.class_id = train_class.id
INNER JOIN <Table 3> on train.type_id = train_type.id
```
|
vb.net INNER Join 3 tables: Additional information: Incorrect syntax near the keyword 'on'
|
[
"",
"sql",
"vb.net",
"inner-join",
""
] |
I have the following query:
```
SELECT DISTINCT
T0.DocNum, T0.Status, T1.ItemCode, T2.ItemName,
T1.PlannedQty, T0.PlannedQty AS 'Net Quantity'
FROM
OWOR T0
INNER JOIN
WOR1 T1 ON T0.DocEntry = T1.DocEntry
INNER JOIN
OITM T2 ON T0.ItemCode = T2.ItemCode
WHERE
T0.Status = 'L' AND
T1.ItemCode IN ('BYP/RM/001', 'BYP/RM/002', 'BYP/RM/003', 'BYP/RM/004','BILLET') AND
T2.ItmsGrpCod = 111 AND
(T0.PostDate BETWEEN (SELECT Dateadd(month, Datediff(month, 0, {?EndDate}), 0)) AND {?EndDate})
```
that returns data like:
[](https://i.stack.imgur.com/Mvize.png)
Explanation:
To make 10 MM steel bars, billets are used as raw materials. Any ItemCode 'BYP%' is part of solid wastage. The net quantity for every DocNum is the amount of 'MM' steel produced by weight. For example, for DocNum 348, the following are used as inputs:
[](https://i.stack.imgur.com/LdGSa.png)
However for that DocNum, 147.359 of 10 MM steel was produced, meaning the missing 3.52 (150.879 - 147.359) is burned loss (not solid).
How do I modify the query such that for each DocNum, the query returns:
[](https://i.stack.imgur.com/gn2YA.png)
|
See the SQL code below.
```
declare @input_tbl table ( doc_num int, item_code varchar(15), item_name varchar(10), planned_qty decimal(15,9), net_qty decimal(15,9) )
declare @ouput_tbl table ( doc_num int, item varchar(15), name varchar(10), quantity decimal(15,9) )
-- Inserting sample data shown by you. You will have to replace this with your 1st query.
insert into @input_tbl values (348, 'BILLET' , '10MM', 154.629000, 147.359000)
insert into @input_tbl values (348, 'BYP/RM/001' , '10MM', -1.008000, 147.359000)
insert into @input_tbl values (348, 'BYP/RM/003' , '10MM', -1.569000, 147.359000)
insert into @input_tbl values (348, 'BYP/RM/004' , '10MM', -1.173000, 147.359000)
-- This stores unique doc numbers from input data
declare @doc_tbl table ( id int identity(1,1), doc_num int )
insert into @doc_tbl select distinct doc_num from @input_tbl
-- Loop through each unique doc number in the input data
declare @doc_ctr int = 1
declare @max_doc_id int = (select max(id) from @doc_tbl)
while @doc_ctr <= @max_doc_id
begin
declare @doc_num int
declare @planned_qty_total decimal(15,9)
declare @net_qty decimal(15,9)
declare @burned_loss decimal(15,9)
declare @item_name varchar(15)
select @doc_num = doc_num from @doc_tbl where id = @doc_ctr
select @planned_qty_total = sum(planned_qty) from @input_tbl where doc_num = @doc_num
select distinct @item_name = item_name, @net_qty = net_qty from @input_tbl where doc_num = @doc_num
select @burned_loss = @planned_qty_total - @net_qty
-- 'Union' is also fine but that won't sort the records as desired
insert into @ouput_tbl select doc_num, item_code, item_name, planned_qty from @input_tbl
insert into @ouput_tbl select @doc_num, 'BurnLoss', @item_name, @burned_loss * -1
insert into @ouput_tbl select @doc_num, 'Net', @item_name, @net_qty
set @doc_ctr = @doc_ctr + 1
end
select * from @ouput_tbl
```
Output:
```
docnum item name quantity
348 BILLET 10MM 154.629000000
348 BYP/RM/001 10MM -1.008000000
348 BYP/RM/003 10MM -1.569000000
348 BYP/RM/004 10MM -1.173000000
348 BurnLoss 10MM -3.520000000
348 Net 10MM 147.359000000
```
|
You should use a UNION. I guess
```
(/*your original query*/)
UNION
SELECT DISTINCT
T0.DocNum, T0.Status, 'BurnLoss' AS ItemCode, T2.ItemName,
SUM(T1.PlannedQuantity)-T0.PlannedQuantity AS PlannedQuantity, T0.PlannedQuantity AS 'Net Quantity'
FROM
OWOR T0
INNER JOIN
WOR1 T1 ON T0.DocEntry = T1.DocEntry
INNER JOIN
OITM T2 ON T0.ItemCode = T2.ItemCode
WHERE
T0.Status = 'L' AND
T1.ItemCode IN ('BYP/RM/001', 'BYP/RM/002', 'BYP/RM/003', 'BYP/RM/004','BILLET') AND
T2.ItmsGrpCod = 111 AND
(T0.PostDate BETWEEN (SELECT Dateadd(month, Datediff(month, 0, {?EndDate}), 0)) AND {?EndDate})
GROUP BY T0.DocNum
UNION
SELECT DISTINCT
T0.DocNum, T0.Status, 'Net' AS ItemCode, T2.ItemName,
T0.PlannedQuantity, T0.PlannedQuantity AS 'Net Quantity'
FROM
OWOR T0
INNER JOIN
WOR1 T1 ON T0.DocEntry = T1.DocEntry
INNER JOIN
OITM T2 ON T0.ItemCode = T2.ItemCode
WHERE
T0.Status = 'L' AND
T1.ItemCode IN ('BYP/RM/001', 'BYP/RM/002', 'BYP/RM/003', 'BYP/RM/004','BILLET') AND
T2.ItmsGrpCod = 111 AND
(T0.PostDate BETWEEN (SELECT Dateadd(month, Datediff(month, 0, {?EndDate}), 0)) AND {?EndDate})
GROUP BY T0.DocNum
```
should work, although it might even be better to do some of this in a language like PHP, because now every bit of information is fetched twice (and the query becomes two times as long).
|
Transforming a column into a row
|
[
"",
"sql",
"sql-server-2012",
""
] |
I am struggling to get this answer for some reason.
I have two tables, table1 and table2 which look like this:
Table1:
```
ID Location Warehouse
1 London Narnia
2 Cyprus Metro
3 Norway Neck
4 Paris Triumph
```
Table2:
```
ID Area Code
1 London Narnia
2 Cyprus Metro
3 Norway Triumph
4 Paris Neck
```
I need to first select everything from table1 where `table1.Location` is in `table2.Area` **AND** `table1.Warehouse` is in `table2.Code` **GIVEN THAT** `table1.Location` is in `table2.Area`. I.e. I want:
```
ID Location Warehouse
1 London Narnia
2 Cyprus Metro
```
I have got to:
```
select
1.location
, 1.warehouse
from table1 1
where 1.location in (select area from table2)
and 1.warehouse in (select code from table2)
```
But this won't work because I need the second where clause to be executed based on the first where clause holding true.
I have also tried similar queries with joins to no avail.
Is there a simple way to do this?
|
Use `exists`:
```
select t.location, t.warehouse
from table1 t
where exists (select 1
from table2 t2
where t.location = t2.area and t.warehouse = t2.code
);
```
I should point out that some databases support row constructors with `in`. That allows you to do:
```
select t.location, t.warehouse
from table1 t
where (t.location, t.warehouse) in (select t2.area, t2.code from table2 t2);
```
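Both variants can be run against the question's sample data with SQLite through Python's `sqlite3` (row-value `IN` needs SQLite 3.15+; `EXISTS` works everywhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE table1 (ID INT, Location TEXT, Warehouse TEXT);
    CREATE TABLE table2 (ID INT, Area TEXT, Code TEXT);
    INSERT INTO table1 VALUES (1,'London','Narnia'), (2,'Cyprus','Metro'),
                              (3,'Norway','Neck'),   (4,'Paris','Triumph');
    INSERT INTO table2 VALUES (1,'London','Narnia'), (2,'Cyprus','Metro'),
                              (3,'Norway','Triumph'),(4,'Paris','Neck');
""")

# Variant 1: correlated EXISTS on both columns at once.
exists_rows = cur.execute("""
    SELECT t.Location, t.Warehouse FROM table1 t
    WHERE EXISTS (SELECT 1 FROM table2 t2
                  WHERE t.Location = t2.Area AND t.Warehouse = t2.Code)
""").fetchall()

# Variant 2: row constructor with IN.
in_rows = cur.execute("""
    SELECT t.Location, t.Warehouse FROM table1 t
    WHERE (t.Location, t.Warehouse) IN (SELECT Area, Code FROM table2)
""").fetchall()
print(exists_rows, in_rows)
```

Both return only London/Narnia and Cyprus/Metro, as in the question's expected result.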
|
Maybe I'm missing something, but a simple join on the two conditions would give you the result in your example:
```
select t1.*
from table1 t1
join table2 t2 on t1.Location = t2.Area
and t1.Warehouse = t2.Code;
```
Result:
```
| ID | Location | Warehouse |
|----|----------|-----------|
| 1 | London | Narnia |
| 2 | Cyprus | Metro |
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/1f264/3)
|
Where clause to check against two columns in another table
|
[
"",
"sql",
"join",
"where-clause",
"where-in",
""
] |
We have a table Phone whose values are
```
(Name, Number)
--------------
(John, 123)
(John, 456)
(Bravo, 789)
(Ken, 741)
(Ken, 589)
```
If the question is to Find the guy who uses only one number, the answer is Bravo.
I solved this using an `aggregate` function, but I don't know how to solve it without using one.
|
Here is my solution:
```
SELECT *
FROM test t
WHERE NOT EXISTS (
SELECT 1
FROM test
WHERE NAME = t.NAME
AND number <> t.number);
```
And a sample [SQLFiddle](http://sqlfiddle.com/#!3/faae5/3).
I'm not sure about this representation in relational algebra (and it's most likely not correct or complete but it might give you a starting point):
`RESULT = {(name, number) ∈ TEST | (name, number_2) ¬∃ TEST, number <> number_2}`
(this is the main idea, you could probably try and [have a look here](https://en.wikipedia.org/wiki/Relational_algebra) to try and rewrite this correctly, since I haven't written anything in relational algebra for more than 10 years).
Or maybe you're looking for a different type of representation, [like this one here](http://www.cs.cornell.edu/projects/btr/bioinformaticsschool/slides/gehrke.pdf)?
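The `NOT EXISTS` formulation can be checked against the question's data using SQLite through Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Phone (Name TEXT, Number INT);
    INSERT INTO Phone VALUES
        ('John',123), ('John',456), ('Bravo',789), ('Ken',741), ('Ken',589);
""")

# Keep rows for which no other row with the same name has a different number.
rows = cur.execute("""
    SELECT Name, Number FROM Phone t
    WHERE NOT EXISTS (SELECT 1 FROM Phone
                      WHERE Name = t.Name AND Number <> t.Number)
""").fetchall()
print(rows)  # [('Bravo', 789)]
```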
|
You can use a `LEFT JOIN` against the same table, something like this:
```
SELECT a.NAME, a.NUMBER FROM test a
LEFT JOIN test b ON a.name = b.name AND a.number <> b.number
WHERE b.name IS NULL;
```
Hope this helps. :)
|
Relational algebra without aggregate function
|
[
"",
"sql",
"aggregate-functions",
"relational-algebra",
""
] |
I'm stuck on a pretty simple task.
An image is better than words, so here's a sample of my table:
I'd like to retrieve every distinct product\_id that is in both groups 27 and 16, for example.
So I made this request :
```
SELECT DISTINCT product_id FROM my_table WHERE group_id = 27 AND group_id = 16
```
It's not working, and I understand why, but I don't know how to do it differently...
I know it's a very noobish question but I don't know what to use in this case, INNER JOIN, LEFT JOIN ...
|
You may do it like this:
```
SELECT product_id FROM my_table WHERE group_id in(27,16)
group by product_id
having count(DISTINCT group_id) >= 2
```
|
You can use `EXISTS`:
```
SELECT DISTINCT m1.product_id
FROM my_table m1
WHERE m1.group_id = 27
AND EXISTS (SELECT 1
FROM my_table m2
WHERE m1.product_id = m2.product_id
AND m2.group_id = 16);
```
|
MySQL : Check condition on two rows
|
[
"",
"mysql",
"sql",
""
] |
I want to query db rows using two criteria: A first, B second.
That is: order by A; if the A values are the same, order by B as the second criterion.
How do I write the SQL?
Example:
query table:
```
id | A | B
_ _ _ _ _ _
1 | 1 | 1
_ _ _ _ _ _
2 | 2 | 2
_ _ _ _ _ _
3 | 2 | 1
_ _ _ _ _ _
4 | 3 | 1
```
query result:
```
id
1
3
2
4
```
|
Order by is used to sort the results from a table in ASC | DESC order based on one or more column names. It sorts in ASC order by default.
Example:
`Select * from Table1 order by A, B`
In this example the results from Table1 are sorted in ascending order by A as well as B. If A has the same value in multiple rows, those rows will then be sorted by B in ascending order.
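A runnable check of the two-column sort against the question's data, using SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE t1 (id INT, A INT, B INT);
    INSERT INTO t1 VALUES (1,1,1), (2,2,2), (3,2,1), (4,3,1);
""")

# B is only consulted where A ties (rows 2 and 3).
rows = cur.execute("SELECT id FROM t1 ORDER BY A, B").fetchall()
ids = [r[0] for r in rows]
print(ids)  # [1, 3, 2, 4]
```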
|
You can simply have multiple order-by's: `ORDER BY A DESC,B` for example.
|
Query rows using Order by A, if A values are the same, Order by B as the second standard
|
[
"",
"mysql",
"sql",
"sql-order-by",
""
] |
Imagine a database like this:
[](https://i.stack.imgur.com/zihAk.png)
And looking for all the students - teachers sets that don't share any courses. Result should be a `(Student.Name - Teacher.Name)` tuple.
This idea is near the solution but not exactly what is wanted.
```
--- The students that go to some courses
SELECT S.FIRSTNAME, S.LASTNAME
FROM STUDENTS S
JOIN STU_COU SC ON S.STUDENTID = SC.STUDENTS_STUDENTID
UNION ALL
--- teachers not in attended courses
SELECT T.FIRSTNAME, T.LASTNAME
FROM TEA_COU TC
JOIN TEACHERS T ON T.TEACHERID = TC.TEACHERS_TEACHERID
WHERE TC.COURSES_COURSEID NOT IN (
SELECT C.COURSEID
FROM STUDENTS S
JOIN STU_COU SC ON S.STUDENTID = SC.STUDENTS_STUDENTID
JOIN COURSES C ON C.COURSEID = SC.COURSES_COURSEID
);
```
Test data like:
* TEACHER A teaches CS and MATH courses;
* STUDENT A goes to CS course;
* STUDENT B goes to LITERATURE and SPORTS courses;
Result would be
```
STUDENT B - TEACHER A
```
Looking for universal solution that's why no particular database stated.
|
You could start with a cross join and then remove any pair that has a relationship:
```
SELECT s.firstname, s.lastname, t.firstname, t.lastname
FROM students s
CROSS JOIN teachers t
WHERE NOT EXISTS (SELECT *
FROM stu_cou sc
JOIN tea_cou tc ON sc.courses_courseid =
tc.courses_courseid
WHERE sc.students_studentid = s.studentid AND
tc.teachers_teacherid = t.teacherid)
```
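Running this against the question's test data in an in-memory SQLite database (course ids are made up for the demo) reproduces the expected `STUDENT B - TEACHER A` pair:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students (studentid INTEGER, firstname TEXT);
CREATE TABLE teachers (teacherid INTEGER, firstname TEXT);
CREATE TABLE stu_cou (students_studentid INTEGER, courses_courseid INTEGER);
CREATE TABLE tea_cou (teachers_teacherid INTEGER, courses_courseid INTEGER);
-- courses: 1 = CS, 2 = MATH, 3 = LITERATURE, 4 = SPORTS (ids assumed)
INSERT INTO students VALUES (1, 'Student A'), (2, 'Student B');
INSERT INTO teachers VALUES (1, 'Teacher A');
INSERT INTO tea_cou VALUES (1, 1), (1, 2);
INSERT INTO stu_cou VALUES (1, 1), (2, 3), (2, 4);
""")
rows = con.execute("""
SELECT s.firstname, t.firstname
FROM students s
CROSS JOIN teachers t
WHERE NOT EXISTS (SELECT *
                  FROM stu_cou sc
                  JOIN tea_cou tc ON sc.courses_courseid = tc.courses_courseid
                  WHERE sc.students_studentid = s.studentid
                    AND tc.teachers_teacherid = t.teacherid)
""").fetchall()
print(rows)  # [('Student B', 'Teacher A')]
```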
|
In Oracle you can use `minus`, or SQL Server or PostgreSQL's `except` set operator: (functionally equivalent)
```
select s.firstname as stud_fname,
s.lastname as stud_lname,
t.firstname as teac_fname,
t.lastname as teac_lname
from students s
cross join teachers t
minus
select s.firstname,
s.lastname,
t.firstname,
t.lastname
from students s
join stu_cou sc
on s.studentid = sc.students_studentid
join courses c
on sc.courses_courseid
join tea_cou tc
on c.courseid = tc.courses_courseid
join teachers t
on tc.teachers_teacherid = t.teacherid
```
|
SQL for retrieving tuples of not related records
|
[
"",
"sql",
"select",
"cross-join",
""
] |
I have a table called "MYGROUP" in the database. I display this table's data in tree format in the GUI as below:
```
Vishal Group
|
|-------Vishal Group1
| |-------Vishal Group1.1
| |-------Vishal Group1.1.1
|
|-------Vishal Group2
| |-------Vishal Group2.1
| |-------Vishal Group2.1.1
|
|-------Vishal Group3
|
|-------Vishal Group4
| |-------Vishal Group4.1
```
The requirement is: I need to visit the lowest leaf of every group, and if that group is not used in certain other tables, delete that record from the respective table.
I need to get all the details only for the main group called "Vishal Group". Please refer to both snaps: one contains the entire table data and the other (the one with the tree-format details) shows the expected data, i.e. the records the SQL should return.
I tried a self join (as is commonly done for an MGR/Employee column relationship), but had no success getting the records which fall under "Vishal Group", the base of all the records.
I have added a table DDL and Insert SQL for reference as below. And also attached a snap of how data looks in the table.
```
CREATE TABLE MYGROUP
(
PK_GROUP GUID DEFAULT 'newid()' NOT NULL,
DESCRIPTION Varchar(255),
LINKED_TO_GROUP GUID,
PRIMARY KEY (PK_GROUP)
);
COMMIT;
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{11111111-111-1111-1111-111111111111} ', 'My Items', NULL);
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{CD1E33D1-1666-49B9-83BE-067687E4DDD6}', 'Vishal Group', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{4B42E7A5-B14C-451B-ACF5-83DD8A983A58}', 'Vishal Group1', '{CD1E33D1-1666-49B9-83BE-067687E4DDD6}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{A87E921D-0468-497D-92C5-19AB63751EE8}', 'Vishal Group1.1', '{4B42E7A5-B14C-451B-ACF5-83DD8A983A58}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{0FDC729A-8FCC-4D23-8619-436A459835DD}', 'Vishal Group1.1.1', '{A87E921D-0468-497D-92C5-19AB63751EE8}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{2E15A2A9-7E40-422E-A5D6-C3F6C63F8591}', 'Vishal Group2', '{CD1E33D1-1666-49B9-83BE-067687E4DDD6}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{5EAC9866-F406-4BBD-B7B3-5CEEC3877C9B}', 'Vishal Group2.1', '{2E15A2A9-7E40-422E-A5D6-C3F6C63F8591}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{A326E6E3-030E-493B-AA0E-DC5D90DB080F}', 'Vishal Group2.1.1', '{5EAC9866-F406-4BBD-B7B3-5CEEC3877C9B}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{3CF1FE37-EEC0-4E79-A3C5-DB78F6A9BC05}', 'Vishal Group3', '{CD1E33D1-1666-49B9-83BE-067687E4DDD6}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{1EC302C8-0AB3-4F67-B47A-CC43401DF4ED}', 'Vishal Group4', '{CD1E33D1-1666-49B9-83BE-067687E4DDD6}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{2EB81764-04FA-4DDA-9AAB-A607BDC2756D}', 'Vishal Group4.1', '{1EC302C8-0AB3-4F67-B47A-CC43401DF4ED}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{7D939081-13F0-404C-9F2F-5222C628FDCC}', 'Sample BOMs', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{C77D2255-AC47-461D-BEE5-7F3154C23AF1}', 'Test1', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{D054539A-BBBA-4E3F-9746-1522FF8A1E89}', 'Test2', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{71B4751C-7096-4FB9-8D71-6BB19A3D9ED9}', 'Trailer Assy', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{F702BABB-73B0-4A49-B442-1C7C8A126335}', 'WIP', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{FC74D59A-94E3-4209-BCEA-1B7606EA62F1}', 'mmmmmm', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{6E4354F9-B298-4737-9C18-51B4ACAC0734}', 'test1', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{42A48EE0-D4EE-4828-BC11-D7F0D1FE5BEC}', 'test1', '{11111111-111-1111-1111-111111111111}');
INSERT INTO MYGROUP (PK_GROUP, DESCRIPTION, LINKED_TO_GROUP) VALUES ('{28AFE8E1-1221-4F94-BAE3-37EA6B360494}', 'test_2', '{11111111-111-1111-1111-111111111111}');
COMMIT;
```
Any idea how to get records which falls under "Vishal Group" ?
[](https://i.stack.imgur.com/5XhCr.png)
[](https://i.stack.imgur.com/fMUSu.png)
|
You can use recursive stored procedure like this :
```
SET TERM ^ ;
create or alter procedure MYGROUP_PROC (
IPARENT varchar(64))
returns (
PK_GROUP varchar(64),
DESCRIPTION varchar(255),
LINKED_TO_GROUP varchar(64))
as
declare variable I integer;
BEGIN
FOR
select
mygroup.pk_group,
mygroup.description,
mygroup.linked_to_group
from mygroup
where
(upper(mygroup.linked_to_group) = upper(:iparent))
INTO :PK_GROUP,
:DESCRIPTION,
:LINKED_TO_GROUP
DO
BEGIN
suspend;
i = 0;
/* Stored procedures can be nested up to 1,000 levels deep. This limitation helps to prevent infinite loops that can occur when a recursive procedure provides no absolute terminating condition.*/
while (i < 1000) do
begin
execute procedure mygroup_proc(:pk_group) returning_values (:pk_group,:description,:linked_to_group);
if (:pk_group is null) then break;
suspend;
i = i+1;
end
END
END^
SET TERM ; ^
```
When the input parameter is `{CD1E33D1-1666-49B9-83BE-067687E4DDD6}` (the PK of `Vishal Group`), the result is:
[](https://i.stack.imgur.com/HQ2l2.png)
**Update**
> Is it possible to get "Vishal Group" record also in the output ?
Yes. First read the "Vishal Group" row itself, then call the procedure above from another procedure like:
```
SET TERM ^ ;
create or alter procedure MYGROUP_PROC_1 (
IPARENT varchar(100))
returns (
PK_GROUP varchar(64),
DESCRIPTION varchar(255),
LINKED_TO_GROUP varchar(64))
as
BEGIN
FOR
select
mygroup.pk_group,
mygroup.description,
mygroup.linked_to_group
from mygroup
where
(upper(mygroup.description) = upper(:iparent))
/*
or (upper(mygroup.pk_group) = upper(:iparent)) instead
if you want to use pk_group as input parameter
*/
INTO :PK_GROUP,
:DESCRIPTION,
:LINKED_TO_GROUP
DO
BEGIN
suspend;
for
select pk_group, description,linked_to_group from mygroup_proc(:pk_group)
into
:PK_GROUP,
:DESCRIPTION,
:LINKED_TO_GROUP
do
begin
suspend;
end
end
END^
SET TERM ; ^
```
Then use
```
SELECT * FROM MYGROUP_PROC_1('Vishal Group')
```
or
```
SELECT * FROM MYGROUP_PROC_1('{CD1E33D1-1666-49B9-83BE-067687E4DDD6}')
```
if you use pk\_group as parameter
|
As a cleaner alternative to a recursive stored procedure, here is a stored procedure using CTE(Common Table Expressions):
```
SET TERM ^ ;
CREATE OR ALTER procedure RECURSIVE_MYGROUP (
PK_GROUP_IN GUID)
returns (
PK_GROUP GUID,
DESCRIPTION varchar(255),
LINKED_TO_GROUP GUID)
as
begin
for with recursive RECUR_MYGROUP
as (select M.PK_GROUP, M.DESCRIPTION, M.LINKED_TO_GROUP
from MYGROUP M
where M.PK_GROUP = :PK_GROUP_IN
union all
select M.PK_GROUP, M.DESCRIPTION, M.LINKED_TO_GROUP
from RECUR_MYGROUP RM
inner join MYGROUP M on M.LINKED_TO_GROUP = RM.PK_GROUP)
select *
from RECUR_MYGROUP
into :PK_GROUP, :DESCRIPTION, :LINKED_TO_GROUP
do
begin
suspend;
end
end^
SET TERM ; ^
```
Then you can use:
```
select * from recursive_mygroup('{CD1E33D1-1666-49B9-83BE-067687E4DDD6}');
```
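For reference, the recursive CTE core of this answer also runs unchanged on other engines; here is a sketch in SQLite with shortened keys standing in for the GUIDs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mygroup (pk_group TEXT PRIMARY KEY,
                      description TEXT, linked_to_group TEXT);
INSERT INTO mygroup VALUES
  ('root', 'My Items',        NULL),
  ('vg',   'Vishal Group',    'root'),
  ('vg1',  'Vishal Group1',   'vg'),
  ('vg11', 'Vishal Group1.1', 'vg1'),
  ('t1',   'Test1',           'root');
""")
rows = con.execute("""
WITH RECURSIVE recur_mygroup AS (
  SELECT pk_group, description FROM mygroup WHERE pk_group = 'vg'
  UNION ALL
  SELECT m.pk_group, m.description
  FROM recur_mygroup rm
  JOIN mygroup m ON m.linked_to_group = rm.pk_group
)
SELECT description FROM recur_mygroup
""").fetchall()
print(sorted(r[0] for r in rows))
# ['Vishal Group', 'Vishal Group1', 'Vishal Group1.1']
```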
|
How To Get Entire Linked Group Details using SQL?
|
[
"",
"sql",
"firebird",
"common-table-expression",
"recursive-query",
"firebird2.5",
""
] |
I have an excel sheet with several values which I imported into SQL (book1$) and I want to transfer the values into ProcessList. Several rows have the same primary keys which is the ProcessID because the rows contain original and modified values, both of which I want to keep. How do I make SQL ignore the duplicate primary keys?
I tried IGNORE\_DUP\_KEY = ON, but for rows with a duplicated primary key, only the latest row shows up.
```
CREATE TABLE dbo.ProcessList
(
Edited varchar(1),
ProcessId int NOT NULL PRIMARY KEY WITH (IGNORE_DUP_KEY = ON),
Name varchar(30) NOT NULL,
Amount smallmoney NOT NULL,
CreationDate datetime NOT NULL,
ModificationDate datetime
)
INSERT INTO ProcessList SELECT Edited, ProcessId, Name, Amount, CreationDate, ModificationDate FROM Book1$
SELECT * FROM ProcessList
```
Also, if I have a row and I update the values of that row, is there any way to keep the original values of the row and insert a clone of that row below, with the updated values and creation/modification date updated automatically?
|
> How do I make SQL ignore the duplicate primary keys?
Under no circumstances can a transaction be committed that results in a table containing two distinct rows with the same primary key. That is fundamental to the nature of a primary key. SQL Server's `IGNORE_DUP_KEY` option does not change that -- it merely affects how SQL Server *handles* the problem. (With the option turned on it silently refuses to insert rows having the same primary key as any existing row; otherwise, such an insertion attempt causes an error.)
You can address the situation either by dropping the primary key constraint or by adding one or more columns to the primary key to yield a composite key whose *collective* value is not duplicated. I don't see any good candidate columns for an expanded PK among those you described, though. If you drop the PK then it might make sense to add a synthetic, autogenerated PK column.
> Also, if I have a row and I update the values of that row, is there any way to keep the original values of the row and insert a clone of that row below, with the updated values and creation/modification date updated automatically?
If you want to ensure that this happens automatically, however a row happens to be updated, then look into triggers. If you want a way to automate it, but you're willing to make the user ask for the behavior, then consider a stored procedure.
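As a sketch of the trigger idea (the names and the separate history table are assumptions, not the asker's schema; cloning into the same table would violate the primary key), here in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE process_list (process_id INTEGER PRIMARY KEY,
                           name TEXT, amount REAL);
CREATE TABLE process_history (process_id INTEGER, name TEXT, amount REAL,
                              archived_at TEXT DEFAULT CURRENT_TIMESTAMP);
-- copy the pre-update values into the history table on every UPDATE
CREATE TRIGGER keep_original AFTER UPDATE ON process_list
BEGIN
  INSERT INTO process_history (process_id, name, amount)
  VALUES (OLD.process_id, OLD.name, OLD.amount);
END;
""")
con.execute("INSERT INTO process_list VALUES (1, 'build', 10.0)")
con.execute("UPDATE process_list SET amount = 12.5 WHERE process_id = 1")
old = con.execute("SELECT name, amount FROM process_history").fetchall()
print(old)  # [('build', 10.0)] -- the pre-update values survive
```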
|
try this
```
INSERT IGNORE INTO ProcessList SELECT Edited, ProcessId, Name, Amount, CreationDate, ModificationDate FROM Book1$
SELECT * FROM ProcessList
```
|
How to ignore duplicate Primary Key in SQL?
|
[
"",
"sql",
"duplicates",
"primary-key",
""
] |
is there a way to reverse an entire column?
Example:
```
ID ColX ColY ColZ
0 001 010 100
1 002 020 200
2 003 030 300
```
shall be:
```
ID ColX ColY ColZ
0 003 030 300
1 002 020 200
2 001 010 100
```
So the column ID shall be reversed: the record with the last ID shall be the first, the second last shall be the second first, and so on.
The newest value has ID = 0 and the oldest ID = n, and this must be exactly reversed, else I cannot insert new records.
|
You can do it using variables:
```
SELECT t2.ID, ColX, ColY, ColZ
FROM (SELECT ID, ColX, ColY, ColZ,
@row_number := @row_number + 1 AS rn
FROM mytable
CROSS JOIN (SELECT @row_number := 0) AS var
ORDER BY ID) AS t1
INNER JOIN (
SELECT ID, @rn := @rn + 1 AS rn
FROM mytable
CROSS JOIN (SELECT @rn := 0) AS var
ORDER BY ID DESC) AS t2
ON t1.rn = t2.rn
ORDER BY t2.ID
```
[**Demo here**](http://sqlfiddle.com/#!9/64c67/3)
If you want to `UPDATE` then you can use the above query in an `UPDATE` statement like this:
```
UPDATE mytable AS t
INNER JOIN(
SELECT ID, ColX, ColY, ColZ,
@row_number := @row_number + 1 AS rn
FROM mytable
CROSS JOIN (SELECT @row_number := 0) AS var
ORDER BY ID) AS t1 ON t.ID = t1.ID
INNER JOIN (
SELECT ID, @rn := @rn + 1 AS rn
FROM mytable
CROSS JOIN (SELECT @rn := 0) AS var
ORDER BY ID DESC) AS t2 ON t1.rn = t2.rn
SET t.ID = t2.ID
```
[**Demo here**](http://sqlfiddle.com/#!9/d542f/2)
The above will work irrespective of the values of `ID` column.
|
If you don't have gaps in your ID, then you can use [this select query](http://sqlfiddle.com/#!9/63227/1):
```
SELECT
max_id-ID AS ID,
ColX,
ColY,
ColZ
FROM
mytable CROSS JOIN (SELECT MAX(ID) AS max_id FROM mytable) m
ORDER BY
ID
```
or this [update query](http://sqlfiddle.com/#!9/8aa69d/1) (but it will work only if ID is not a primary key):
```
UPDATE
mytable m1 CROSS JOIN (SELECT MAX(ID) as max_id FROM mytable) m
SET
m1.ID = m.max_id - m1.ID
```
if it's a primary key you could use this:
```
UPDATE
mytable m1 CROSS JOIN (SELECT MAX(ID) as max_id FROM mytable) m
INNER JOIN mytable m2 ON m1.ID = m.max_id - m2.ID
SET
m1.ColX = m2.ColX,
m1.ColY = m2.ColY,
m1.ColZ = m2.ColZ
```
(please see it [here](http://sqlfiddle.com/#!9/6fdc8/1))
but if your ID column has gaps (e.g. 0, 1, 2, 5, 6) you need a different approach.
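When there are no gaps, the first (select) query above is easy to sanity-check in an in-memory SQLite database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (ID INTEGER, ColX TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(0, '001'), (1, '002'), (2, '003')])
rows = con.execute("""
    SELECT max_id - ID AS ID, ColX
    FROM mytable CROSS JOIN (SELECT MAX(ID) AS max_id FROM mytable) m
    ORDER BY 1
""").fetchall()
print(rows)  # [(0, '003'), (1, '002'), (2, '001')]
```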
|
SQL Reverse entire column
|
[
"",
"mysql",
"sql",
"reverse",
""
] |
How can I check how many products were added by a, b, c, and d respectively, using a query?
table1
```
admin_id admin_name
3 a
4 b
5 c
6 d
```
table2
```
admin_id products
3 pDeal
3 pSeal
4 pAeal
5 pZeal
6 pXeal
3 pHeal
6 pPeal
```
|
You need a simple JOIN and a COUNT query:
```
SELECT table1.admin_name, COUNT(*) as cnt
FROM
table1 INNER JOIN table2
ON table1.admin_id = table2.admin_id
GROUP BY
table1.admin_name
```
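Checked against the question's data in an in-memory SQLite database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (admin_id INTEGER, admin_name TEXT);
CREATE TABLE table2 (admin_id INTEGER, products TEXT);
INSERT INTO table1 VALUES (3, 'a'), (4, 'b'), (5, 'c'), (6, 'd');
INSERT INTO table2 VALUES (3, 'pDeal'), (3, 'pSeal'), (4, 'pAeal'),
                          (5, 'pZeal'), (6, 'pXeal'), (3, 'pHeal'),
                          (6, 'pPeal');
""")
rows = con.execute("""
SELECT table1.admin_name, COUNT(*) AS cnt
FROM table1 INNER JOIN table2 ON table1.admin_id = table2.admin_id
GROUP BY table1.admin_name
ORDER BY table1.admin_name
""").fetchall()
print(rows)  # [('a', 3), ('b', 1), ('c', 1), ('d', 2)]
```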
|
Try this...
```
SELECT a.admin_name, COUNT(b.products) as 'CountOfProducts'
FROM table1 a INNER JOIN table2 b ON a.admin_id = b.admin_id
GROUP BY a.admin_name
```
|
Select count with join in SQL query
|
[
"",
"mysql",
"sql",
""
] |
I have the following table, which represent Customers for each day:
```
+----------+-----------+
| Date | Customers |
+----------+-----------+
| 1/1/2014 | 4 |
| 1/2/2014 | 7 |
| 1/3/2014 | 5 |
| 1/4/2014 | 5 |
| 1/5/2014 | 10 |
| 2/1/2014 | 7 |
| 2/2/2014 | 4 |
| 2/3/2014 | 1 |
| 2/4/2014 | 5 |
+----------+-----------+
```
I would like to add 2 additional columns:
1. Summary of the customers for the current month
2. Summary of the customers for the preceding month
**here's the desired outcome:**
```
+----------+-----------+----------------------+------------------------+
| Date | Customers | Sum_of_Current_month | Sum_of_Preceding_month |
+----------+-----------+----------------------+------------------------+
| 1/1/2014 | 4 | 31 | 0 |
| 1/2/2014 | 7 | 31 | 0 |
| 1/3/2014 | 5 | 31 | 0 |
| 1/4/2014 | 5 | 31 | 0 |
| 1/5/2014 | 10 | 31 | 0 |
| 2/1/2014 | 7 | 17 | 31 |
| 2/2/2014 | 4 | 17 | 31 |
| 2/3/2014 | 1 | 17 | 31 |
| 2/4/2014 | 5 | 17 | 31 |
+----------+-----------+----------------------+------------------------+
```
I have managed to calculate the 3rd column by a simple sum over partition function:
```
Select
Date,
Customers,
  Sum(Customers) over (Partition by Month(Date)||Year(Date) Order by 1) as Sum_of_Current_month
From table
```
However, I can't find a way to calculate the Sum\_of\_preceding\_month column.
Appreciate your support.
Asaf
|
I think this might be easier by using `lag()` and an aggregation sub-query. The ANSI Standard syntax is:
```
Select t.*, tt.sumCustomers, tt.prev_sumCustomers
From table t join
(select extract(year from date) as yyyy, extract(month from date) as mm,
sum(Customers) as sumCustomers,
lag(sum(Customers)) over (order by extract(year from date), extract(month from date)
) as prev_sumCustomers
from table t
group by extract(year from date), extract(month from date)
) tt
     on extract(year from t.date) = tt.yyyy and extract(month from t.date) = tt.mm;
```
In Teradata, this would be written as:
```
Select t.*, tt.sumCustomers, tt.prev_sumCustomers
From table t join
(select extract(year from date) as yyyy, extract(month from date) as mm,
sum(Customers) as sumCustomers,
min(sum(Customers)) over (order by extract(year from date), extract(month from date)
rows between 1 preceding and 1 preceding
) as prev_sumCustomers
from table t
group by extract(year from date), extract(month from date)
) tt
     on extract(year from t.date) = tt.yyyy and extract(month from t.date) = tt.mm;
```
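Outside Teradata, the same two columns can also be produced with plain correlated subqueries. A portable sketch in SQLite (the date column is assumed to be ISO-formatted text named `dt`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (dt TEXT, customers INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    ('2014-01-01', 4), ('2014-01-02', 7), ('2014-01-03', 5),
    ('2014-01-04', 5), ('2014-01-05', 10),
    ('2014-02-01', 7), ('2014-02-02', 4),
    ('2014-02-03', 1), ('2014-02-04', 5)])
rows = con.execute("""
SELECT dt, customers,
       (SELECT SUM(customers) FROM t t2
        WHERE strftime('%Y-%m', t2.dt) = strftime('%Y-%m', t.dt)) AS cur_month,
       COALESCE((SELECT SUM(customers) FROM t t3
                 WHERE strftime('%Y-%m', t3.dt) =
                       strftime('%Y-%m', date(t.dt, 'start of month',
                                              '-1 month'))),
                0) AS prev_month
FROM t ORDER BY dt
""").fetchall()
print(rows[0])   # ('2014-01-01', 4, 31, 0)
print(rows[-1])  # ('2014-02-04', 5, 17, 31)
```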
|
The previous month is a bit tricky. Which Teradata release are you on? TD14.10 supports `LAST_VALUE`:
```
SELECT
dt,
customers,
Sum_of_Current_month,
-- return the previous sum
COALESCE(LAST_VALUE(x ignore NULLS)
OVER (ORDER BY dt
ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
,0) AS Sum_of_Preceding_month
FROM
(
SELECT
dt,
Customers,
SUM(Customers) OVER (PARTITION BY TRUNC(dt,'mon')) AS Sum_of_Current_month,
CASE -- keep the number only for the last day in month
WHEN ROW_NUMBER()
OVER (PARTITION BY TRUNC(dt,'mon')
ORDER BY dt)
= COUNT(*)
OVER (PARTITION BY TRUNC(dt,'mon'))
THEN Sum_of_Current_month
END AS x
FROM tab
) AS dt
```
|
SQL sum over partition for preceding period
|
[
"",
"sql",
"teradata",
""
] |
Need some help with this query. I just want to know if what I am doing is fine or whether I need a JOIN to do it better. Sorry if this is a silly question, but I am a little worried since I query the same table three times. Thanks in advance.
```
Select *
from TableA
where (A_id in (1, 2, 3, 4)
and flag = 'Y') or
(A_id in
(select A_id from TableB
where A_id in
(Select A_id from TableA
where (A_id in (1, 2, 3, 4)
and flag = 'N')
group by A_id
having sum(qty) > 0)
)
```
Relation between TableA and TableB is one-to-many
Condition or Logic:
* if the flag is true, the data can be selected without further checks
* if the flag is false, we have to refer TableB to see if sum of the qty column is greater than 0
|
Your approach is indeed way too complicated. Select from A where flag = Y or the sum of related B > 0. Do the latter in a subquery.
```
select *
from a
where a_id in (1,2,3,4)
and
(
flag = 'Y'
or
(select sum(qty) from b where b.a_id = a.a_id) > 0
)
```
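A quick demonstration with toy data in SQLite (note that a row with no TableB entries yields a NULL sum, which correctly fails the `> 0` test):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (a_id INTEGER, flag TEXT);
CREATE TABLE b (a_id INTEGER, qty INTEGER);
INSERT INTO a VALUES (1, 'Y'), (2, 'N'), (3, 'N'), (4, 'N');
INSERT INTO b VALUES (2, 5), (3, 0), (3, -1);
""")
rows = con.execute("""
    SELECT a_id FROM a
    WHERE a_id IN (1, 2, 3, 4)
      AND (flag = 'Y'
           OR (SELECT SUM(qty) FROM b WHERE b.a_id = a.a_id) > 0)
""").fetchall()
ids = sorted(r[0] for r in rows)
print(ids)  # [1, 2] -- 1 by flag, 2 by positive qty; 3 and 4 are filtered out
```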
|
There's nothing badly wrong with the query you've presented, but there are improvements that can be made. If you move the test for Flag='N' into your first select from TableA and correlate your select from TableB with your first select from TableA, then you can dispense with the second select from TableA:
```
Select *
from TableA A
where A_id in (1, 2, 3, 4)
and (flag = 'Y'
or (flag = 'N'
and A_id in (select A_id
from TableB B
where b.A_id = a.A_id
group by A_id
having sum(qty) > 0))
);
```
This will eliminate an extra lookup on TableA for information that is already known. Second, since TableA.A\_Id is now correlated with TableB.A\_Id, the `A_Id in (...)` can be changed to an `exists` clause:
```
Select *
from TableA A
where A_id in (1, 2, 3, 4)
and (flag = 'Y'
or (flag = 'N'
and exists (select A_id
from TableB B
where b.A_id = a.A_id
group by A_id
having sum(qty) > 0))
);
```
This may (depending on the database type) inform the database's query optimizer that it can stop retrieving rows from TableB after the first row is found.
In an Oracle database on a small unindexed sample dataset these two changes shaved 25% off of the cost of the query, so the performance increases could be significant.
|
Need help in SQL Select Query
|
[
"",
"sql",
"select",
"join",
""
] |
I need to identify the first character in my data as numeric or character in SQL Server. I am relatively new to this and I don't know where to begin on this one. But here is what I have done to this point. I had data that looked like this:
```
TypeDep
Transfer From 4Z2
Transfer From BZZ
Transfer From 123
Transfer From abc
```
I used the `right` function to remove the 'transfer from' and isolate the data I need to check.
```
UPDATE #decode
SET firstPartType = Right(z.TypeDep,17)
FROM #decode z
where z.TypeDep like 'TRANSFER FROM%'
```
which leaves the isolated values:
```
firstPartType
4Z2
BZZ
123
abc
```
Now I need to add a column identifying the first character in the string. Producing the results below.
```
firstPartType SecondPartType
4Z2 Numeric
BZZ Alpha
123 Numeric
abc Alpha
```
|
Using `LEFT` and `ISNUMERIC()`, however be aware that `ISNUMERIC` thinks some additional characters such as `.` are numeric
```
UPDATE #decode
SET SecondPartType =
CASE WHEN ISNUMERIC(LEFT(firstPartType, 1)) = 1 THEN'Numeric'
ELSE 'Alpha'
END
FROM #decode;
```
|
A more robust approach is to use the limited regex-like pattern matching of SQL Server's `LIKE`. ISNUMERIC will return false positives for single characters like `.`, `,` and `$`, to name a few.
```
SELECT
CASE WHEN left(firstPartType, 1) like '[0-9]' THEN 'Numeric'
ELSE 'Alpha'
END AS SecondPartType
```
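The same character-class idea carries over to other engines. In SQLite, for instance, `GLOB '[0-9]'` plays the role of T-SQL's `LIKE '[0-9]'`; a sketch using the question's sample values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE d (firstPartType TEXT)")
con.executemany("INSERT INTO d VALUES (?)",
                [('4Z2',), ('BZZ',), ('123',), ('abc',)])
rows = con.execute("""
    SELECT firstPartType,
           CASE WHEN substr(firstPartType, 1, 1) GLOB '[0-9]'
                THEN 'Numeric' ELSE 'Alpha' END AS SecondPartType
    FROM d
""").fetchall()
print(dict(rows))
# {'4Z2': 'Numeric', 'BZZ': 'Alpha', '123': 'Numeric', 'abc': 'Alpha'}
```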
|
How do identify the first character of a string as numeric or character in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have a table where I insert the year, month number, week number of the year, and day number of the year with each record.
I'm using those fields to filter records instead of full date because I have millions of records, and I want to increase the performance by comparing integers instead of dates.
I was able to select records between dates based the day number and year. The query works well if the years are the same, but once I change the year the query doesn't work because I'm using AND.
Table looks like this:
* ID
* Day
* Week
* Month
* Year
* Full Date
Here is the working query
```
SELECT COUNT(id) AS records_count
FROM table1
WHERE table1.day >= 176
AND table1.day <= 275
AND table1.year <= 2015
AND table1.year >= 2015
```
And this is the query that I need to adjust
```
SELECT COUNT(id) AS records_count
FROM table1
WHERE table1.day >= 275
AND table1.day <= 176
AND table1.year <= 2014
AND table1.year >= 2015
```
|
Your method of storing the dates is exactly wrong for what you want to do. The general form would be:
```
where (year = 2014 and day >= 275 or year > 2014) and
(year = 2015 and day <= 176 or year < 2015)
```
This will work for any pair of years, not just those that are one year apart.
Now, if you stored the dates normally, then you would simply do:
```
where date >= makedate(2014, 275) and date <= makedate(2015, 176)
```
What is really, really nice about this structure is that MySQL can use an index on `date`. That isn't possible with your query.
In general, a micro-optimization such as using integers instead of some other data type is not worth the effort in relational databases. Usually, the cost of reading the data is much more expensive than processing within a row. And, in fact, this example is a great example of it. Why try to increase the speed of comparisons when you can remove the need for them entirely using an index?
|
```
SELECT COUNT(id) AS records_count
FROM table1
WHERE (year = 2014 and day >= 275)
OR (year = 2015 and day <= 176)
```
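Verifying the per-year OR condition on a few toy rows (SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER, day INTEGER, year INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    (1, 280, 2014),   # in range (late 2014)
    (2, 100, 2014),   # before the window
    (3, 100, 2015),   # in range (early 2015)
    (4, 200, 2015)])  # after the window
n = con.execute("""
    SELECT COUNT(id) FROM table1
    WHERE (year = 2014 AND day >= 275) OR (year = 2015 AND day <= 176)
""").fetchone()[0]
print(n)  # 2
```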
And as baao commented - an indexed date column is super fast and easier to query.
|
MySQL select records between day of the year and year
|
[
"",
"mysql",
"sql",
"date",
"select",
"date-range",
""
] |
I cannot figure out how to do this. I have looked through my book on oracle SQL and it has nothing of sort about this function. I am not sure if my instructor did not mean to ask this question or if there is a way to do it.
|
[`INSTR`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions068.htm)
> The INSTR functions search string for substring. The function returns
> an integer indicating the position of the character in string that is
> the first character of this occurrence.
>
> Note: If substring is not found in string, then the INSTR function will return 0.
```
SELECT *
FROM your_table
WHERE INSTR(name, '''') > 0;
```
**[SQLFiddleDemo](http://sqlfiddle.com/#!4/5ed8ff/1/0)**
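The same doubled-apostrophe escaping works with SQLite's `instr()` too, which makes it easy to try out (sample names assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE your_table (name TEXT)")
con.executemany("INSERT INTO your_table VALUES (?)",
                [("O'Brien",), ("Smith",), ("D'Angelo",)])
# '''' is an SQL string literal containing a single apostrophe
rows = con.execute(
    "SELECT name FROM your_table WHERE instr(name, '''') > 0").fetchall()
names = sorted(r[0] for r in rows)
print(names)  # ["D'Angelo", "O'Brien"]
```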
|
You can also perform
```
SELECT *
FROM your_table
WHERE INSTR(name, q'!'!') > 0;
```
|
How to search for all names with an apostrophe by using the INSTR function in ORACLE?
|
[
"",
"sql",
"oracle",
""
] |
I searched this in stackoverflow and found [Concatenate multiple columns](https://stackoverflow.com/questions/32137399/concatenate-values-from-multiple-columns-in-oracle) and also [Concatenate multiple rows](https://stackoverflow.com/questions/4686543/sql-query-to-concatenate-column-values-from-multiple-rows-in-oracle). But what I need is combine them both.
I have table named komponen :
```
id urut tipe c_string c_number op
---------------------------------------------
A1 1 S GP NULL *
A1 2 N NULL 5 /
A1 3 N NULL 100 + //Ignore the last op for each groups.
A2 1 S GP NULL -
A2 2 N NULL 1000 / //Ignore the last op for each groups.
```
Desired result :
```
id concat_result
------------------------
A1 GP * 5 / 100
A2 GP - 1000
```
This might need the `LISTAGG` and `GROUP BY` method. But I don't have any clue how to use it to achieve the desired result. Please help.
|
This is a modification of the approach from [here](https://stackoverflow.com/questions/4686543/sql-query-to-concatenate-column-values-from-multiple-rows-in-oracle). I added `||` (concatenation) inside `LISTAGG`:
```
SELECT ID,
LISTAGG (
CASE
          WHEN TIPE = 'S' THEN
C_STRING
ELSE
TO_CHAR (C_NUMBER)
END || OP,
' '
) WITHIN GROUP (ORDER BY URUT) AS CONCAT_RESULT
FROM KOMPONEN
GROUP BY ID;
```
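A pure-Python sketch of the same group-and-concatenate logic, handy for checking the expected output against the question's rows:

```python
from itertools import groupby

# (id, urut, c_string, c_number, op) -- rows from the question
rows = [('A1', 1, 'GP', None, '*'), ('A1', 2, None, 5, '/'),
        ('A1', 3, None, 100, '+'),
        ('A2', 1, 'GP', None, '-'), ('A2', 2, None, 1000, '/')]
result = {}
for key, grp in groupby(sorted(rows, key=lambda r: (r[0], r[1])),
                        key=lambda r: r[0]):
    parts = ['{} {}'.format(r[2] if r[2] is not None else r[3], r[4])
             for r in grp]
    # drop the trailing " op" of each group, like the SUBSTR/LENGTH trick
    result[key] = ' '.join(parts)[:-2]
print(result)  # {'A1': 'GP * 5 / 100', 'A2': 'GP - 1000'}
```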
|
Your hunch is correct. In this solution, I assume that c\_string and c\_number are mutually exclusive, i.e. exactly one will be not null.
```
with t as (
select id, listagg(nvl(c_string, c_number)||' '||op, ' ') within group (order by urut) result
from komponen
group by id
)
select id, substr(result, 1, length(result)-2)
from t;
```
The combination of `with` and `substr()` is to remove the last operator.
|
Concatenate multiple columns from multiple rows in Oracle
|
[
"",
"sql",
"oracle",
""
] |
I have an SQL database called roottesting which I manage at the IP 192.168.1.121. I also have a simple Java application that connects to it (or rather, is supposed to connect to it) over the Internet with a user named 'user' that has a host of '%', which I know makes it usable from any computer. Here is the method in which my program connects to the database:
```
public void addTestEntry(TestEntry testEntry)
throws ClassNotFoundException, SQLException {
tests.add(testEntry);
Class.forName("com.mysql.jdbc.Driver");
String url = "jdbc:mysql://192.168.1.121/roottesting";
con = DriverManager.getConnection(url, "user", "S@cajaw3a");
PreparedStatement statement = con
.prepareStatement("insert into tests values(?, ?, ?)");
statement.setString(1, testEntry.getName());
statement.setInt(2, testEntry.getCorrect());
statement.setInt(3, testEntry.getIncorrect());
statement.executeUpdate();
con.close();
}
```
This works every time I download the program on another computer on my home network, Beigecedar, but when I try it on another network, like my school's network, it does not work and gives the standard message that the database could not be found. Can anyone suggest how to connect to my database over the Internet, no matter what network the computer running the program is on?
|
The IP Address you are connecting to is what is called a [Reserved IP Address](https://en.wikipedia.org/wiki/Reserved_IP_addresses)
It is local to your network only (192.168.0.0/16). With that aside, let's address two things.
If you are allowed to connect to your own computer from the internet (IE: Firewall and ports open) then you can connect using your [public IP Address](https://www.google.com/#q=what+is+my+ip)
You also have to consider what ports your school allow to be open. They may very well block the ports required for you to make this connection.
|
192.168.1.121 is a local IP; you need a public IP. If you have internet access you can use something like www.whatismyip.com to find your public IP. The problem with that IP is that it may change over time, so your app may also break in the future.
Alternatively, configure your router/firewall to forward the public IP to your PC in case you are behind a router.
|
Why Can't I Connect to My Database Over the Internet
|
[
"",
"mysql",
"sql",
""
] |
I have two tables. The first is full of books each with a `book_id`. The second table is a `book_id` to `keyword_id` relationship table.
```
SELECT b.* FROM books_table b
INNER JOIN keywords_table k
ON b.book_id = k.book_id AND k.keyword_id NOT IN(1,2,3)
WHERE b.is_hardcover = 1
GROUP BY b.book_id
```
## *Desired Outcome*
No books with the keyword\_id 1, 2, or 3 attached to any of the books.
## *Actual Outcome*
Books can have the keywords 1, 2, or 3 so long as they have additional keyword\_ids attached to them that are *not* in the exclusion list.
## *What I've tried*
The above query is the closest I have come to achieving it, but it fails in this one regard.
**How can I achieve the desired outcome and in the most optimized way?**
|
You can do so
```
SELECT b.*
FROM books_table b
INNER JOIN keywords_table k
ON b.book_id = k.book_id
WHERE b.is_hardcover = 1
GROUP BY b.book_id
HAVING SUM(k.keyword_id = 1) =0
AND SUM(k.keyword_id = 2) =0
AND SUM(k.keyword_id = 3) =0
```
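This works because `k.keyword_id = 1` evaluates to 1 or 0 per row, so each SUM counts matches for one excluded keyword. A small SQLite check (sample data assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE books_table (book_id INTEGER, is_hardcover INTEGER);
CREATE TABLE keywords_table (book_id INTEGER, keyword_id INTEGER);
INSERT INTO books_table VALUES (1, 1), (2, 1);
-- book 1 carries excluded keyword 1; book 2 only has 8 and 9
INSERT INTO keywords_table VALUES (1, 1), (1, 9), (2, 8), (2, 9);
""")
rows = con.execute("""
SELECT b.book_id
FROM books_table b
INNER JOIN keywords_table k ON b.book_id = k.book_id
WHERE b.is_hardcover = 1
GROUP BY b.book_id
HAVING SUM(k.keyword_id = 1) = 0
   AND SUM(k.keyword_id = 2) = 0
   AND SUM(k.keyword_id = 3) = 0
""").fetchall()
print(rows)  # [(2,)] -- book 1 is dropped because it carries keyword 1
```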
|
As you noted, this query will produce any book that has at least one keyword that isn't 1, 2 or 3, which isn't what you want. Instead, you'd want to explicitly exclude books with these keywords. A `join` isn't really the right tool for the job here. Instead, you could use the `exists` operator:
```
SELECT b.*
FROM books_table b
WHERE b.is_hardcover = 1 AND
NOT EXISTS (SELECT *
FROM keywords_table k
WHERE b.book_id = k.book_id AND
k.keyword_id IN (1,2,3))
```
|
Excluding MYSQL query results with an INNER JOIN
|
[
"",
"mysql",
"sql",
"select",
"subquery",
""
] |
I have the following table:
```
| id | name |
|-------|--------|
| 00001 | Item A |
| 00001 | Item A |
| 00002 | Item B |
| 00003 | Item C |
| 00004 | Item C |
```
What MySQL query should I use if I want to check whether the same name appears for 2 or more different ids? I have tried using `LEFT JOIN` and `UNION` in my queries, but I can't get it to work. So in this case, the query should return:
```
| id | name |
|-------|--------|
| 00003 | Item C |
| 00004 | Item C |
```
Thanks in advance.
|
If you want to return the original rows, then you need to `join` something back to the original data:
```
select t.*
from t join
(select name, count(distinct id) as cnt
from t
group by name
) tt
on t.name = tt.name
where cnt >= 2;
```
If you just want the names, then the subquery is sufficient (with a `having` clause).
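Against the question's sample data (SQLite, short table name `t` assumed), this returns exactly the desired two rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id TEXT, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    ('00001', 'Item A'), ('00001', 'Item A'), ('00002', 'Item B'),
    ('00003', 'Item C'), ('00004', 'Item C')])
rows = con.execute("""
SELECT t.*
FROM t
JOIN (SELECT name, COUNT(DISTINCT id) AS cnt
      FROM t GROUP BY name) tt ON t.name = tt.name
WHERE cnt >= 2
ORDER BY t.id
""").fetchall()
print(rows)  # [('00003', 'Item C'), ('00004', 'Item C')]
```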
|
Join the table with itself `ON` same `name` and different `id`.
**Query**
```
select t1.id,t1.name
from tbl1 t1
join tbl1 t2
on t1.id <> t2.id
and t1.name = t2.name;
```
[**SQL Fiddle**](http://www.sqlfiddle.com/#!9/69d72/1)
|
MySQL selecting duplicate name for 2 or more ids
|
[
"",
"mysql",
"sql",
""
] |
Sorry if my title isn't very specific. I just have no idea what this is called, and since I tend to let Rails abstract DB queries for me, more complex ones escape me. I have this query that rails generated:
```
SELECT "stocks".* FROM "stocks" INNER JOIN "parts" ON "parts"."id" = "stocks"."part_id"
INNER JOIN "features" ON "features"."part_id" = "parts"."id" INNER JOIN "feature_titles" ON "feature_titles"."id" = "features"."feature_title_id"
WHERE "parts"."category_id" = 20 AND "stocks"."manufacturer_id" = 4
AND ((feature_titles.title = 'Title One' AND (features.value = '0.1' OR features.value = '1')) AND (feature_titles.title = 'Title Two' AND (features.value = '200')))
```
The important bits:
`parts(id, number, category_id)`
`features(id, value, feature_title_id, part_id)`
`feature_titles(id, title)`
A quick explanation:
---
Every part can have multiple features. Each feature belongs to a feature\_title. Now I need my query to get all parts that have those features associated with their respective feature\_titles. Using the example above:
```
parts
---------
id number
1 some part number
2 some other part
```
---
```
features (values aren't strictly integers)
--------------
id values feature_title_id part_id
1 0.1 1 1
2 1 2 1
3 7 MAX 2 2
4 0.1 1 2
```
---
```
feature_titles
-------------------
id title
1 Title One
2 Title Two
```
---
Doing the query below works.
```
SELECT "stocks".* FROM "stocks" INNER JOIN "parts" ON "parts"."id" = "stocks"."part_id"
INNER JOIN "features" ON "features"."part_id" = "parts"."id" INNER JOIN "feature_titles" ON "feature_titles"."id" = "features"."feature_title_id"
WHERE "parts"."category_id" = 20 AND "stocks"."manufacturer_id" = 4
AND ((feature_titles.title = 'Title One' AND (features.value = '0.1' OR features.value = '1')))
```
It returns the part with `id` 1. But if I want to further filter by adding another feature\_title and its features (see the first problem query), it returns 0 rows. I expect it to return the same single row (in this example). I don't really understand, as I thought that's how the AND and OR operators worked. Unless I am missing something.
**EDIT:**
Sorry guys, I'm adding another example to be clearer. Omitting the join since the where seems to be the key:
```
feature_titles.title = 'Title One' AND
(features.value = '0.1' OR features.value = '1')
```
Should return parts with id's 1 AND 2. But the query
```
(feature_titles.title = 'Title One' AND (features.value = '0.1'))
AND (feature_titles = 'Title Two' AND (features.value = '7 MAX'))
```
Should only return the part with id 2.
|
The inner query finds the features that match your criteria, grouped per part id. Because the matched features sit in different rows, verifying that a part matches on multiple criteria means counting the rows in each group.
```
SELECT stocks.*
FROM stocks
WHERE stocks.manufacturer_id = 4
  AND part_id IN (
    SELECT parts.id
    FROM parts
    INNER JOIN features ON features.part_id = parts.id
    INNER JOIN feature_titles ON feature_titles.id = features.feature_title_id
    WHERE parts.category_id = 20
      AND (
        feature_titles.title = 'Title One' AND features.value = '0.1'
        OR feature_titles.title = 'Title Two' AND features.value = '7 MAX'
      )
    GROUP BY parts.id
    HAVING COUNT(features.id) = 2
  )
```
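A runnable sketch of the group-and-count idea, using SQLite and the question's three tables; the `stocks`/`category_id` filters are omitted to keep the sample data small:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parts (id INTEGER, number TEXT);
CREATE TABLE features (id INTEGER, value TEXT, feature_title_id INTEGER, part_id INTEGER);
CREATE TABLE feature_titles (id INTEGER, title TEXT);
INSERT INTO parts VALUES (1, 'some part number'), (2, 'some other part');
INSERT INTO features VALUES
    (1, '0.1', 1, 1), (2, '1', 2, 1),
    (3, '7 MAX', 2, 2), (4, '0.1', 1, 2);
INSERT INTO feature_titles VALUES (1, 'Title One'), (2, 'Title Two');
""")

# A part qualifies only if it matches BOTH criteria, i.e. two matching rows.
rows = conn.execute("""
    SELECT parts.id
    FROM parts
    JOIN features ON features.part_id = parts.id
    JOIN feature_titles ON feature_titles.id = features.feature_title_id
    WHERE feature_titles.title = 'Title One' AND features.value = '0.1'
       OR feature_titles.title = 'Title Two' AND features.value = '7 MAX'
    GROUP BY parts.id
    HAVING COUNT(features.id) = 2
""").fetchall()
result = [r[0] for r in rows]
```

Part 1 matches only one criterion (count 1) while part 2 matches both (count 2), so only part 2 is returned.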
|
I think you need an aggregation:
```
SELECT s.*
FROM "stocks" s INNER JOIN
"parts" p
ON p."id" = s."part_id" INNER JOIN
"features" f
ON f."part_id" = p."id" INNER JOIN
"feature_titles" ft
ON ft."id" = f."feature_title_id"
WHERE p."category_id" = 20 AND s."manufacturer_id" = 4 AND
ft.title = 'Title One' AND
(f.value = '0.1' OR f.value = '1')
GROUP BY s.stock_id -- may need to include all columns here
HAVING COUNT(DISTINCT f.value) = 2; -- makes sure you get both of them
```
|
SQL query that matches multiple rows through several AND clauses in a three table INNER JOIN
|
[
"",
"sql",
"postgresql",
""
] |
I have two different result sets:
```
Result 1:
+--------------+--------------+
| YEAR_MONTH | UNIQUE_USERS |
+--------------+--------------+
| 2013-08 | 1111 |
+--------------+--------------+
| 2013-09 | 2222 |
+--------------+--------------+
Result 2:
+--------------+----------------+
| YEAR_MONTH | UNIQUE_ACTIONS |
+--------------+----------------+
| 2013-08 | 111111111 |
+--------------+----------------+
| 2013-09 | 222222222 |
+--------------+----------------+
```
**The code for Result 1**:
```
SELECT TO_CHAR(ACCESS_DATE, 'yyyy-mm') YEAR_MONTH, COUNT(DISTINCT EMPLOYEE_ID) UNIQUE_USERS
FROM CORE.DATE_TEST
GROUP BY TO_CHAR(ACCESS_DATE, 'yyyy-mm')
ORDER BY YEAR_MONTH ASC
```
**The code for Result 2**:
```
SELECT TO_CHAR(ACCESS_DATE, 'yyyy-mm') YEAR_MONTH, COUNT(DISTINCT EMPLOYEE_ACTION) UNIQUE_ACTIONS
FROM CORE.ACTION_TEST
GROUP BY TO_CHAR(ACCESS_DATE, 'yyyy-mm')
ORDER BY YEAR_MONTH ASC
```
However, I've tried to group them by simply doing this:
```
SELECT TO_CHAR(ACCESS_DATE, 'yyyy-mm') YEAR_MONTH, COUNT(DISTINCT EMPLOYEE_ID) UNIQUE_USERS, COUNT(DISTINCT EMPLOYEE_ACTION) UNIQUE_ACTIONS
FROM CORE.DATE_TEST, CORE.ACTION_TEST
GROUP BY TO_CHAR(ACCESS_DATE, 'yyyy-mm')
ORDER BY YEAR_MONTH ASC
```
And that doesn't work. I've also tried an `INNER JOIN` on the second result set (`result set 1` had `t1` as a variable name, and `result set 2` had `t2`), and got the error, `Invalid Identifier`, on t2.
This is my desired output:
```
+--------------+--------------+----------------+
| YEAR_MONTH | UNIQUE_USERS | UNIQUE_ACTIONS |
+--------------+--------------+----------------+
| 2013-08 | 1111 | 111111111 |
+--------------+--------------+----------------+
| 2013-09 | 2222 | 222222222 |
+--------------+--------------+----------------+
```
How do I do that correctly? It doesn't necessarily need to be a three-column group by; it just needs to work.
|
Try:
```
select a.YEAR_MONTH, a.UNIQUE_USERS, b.UNIQUE_ACTIONS
from (
SELECT TO_CHAR(ACCESS_DATE, 'yyyy-mm') YEAR_MONTH,
COUNT(DISTINCT EMPLOYEE_ID) UNIQUE_USERS
FROM CORE.DATE_TEST
GROUP BY TO_CHAR(ACCESS_DATE, 'yyyy-mm')
) a
join (
SELECT TO_CHAR(ACCESS_DATE, 'yyyy-mm') YEAR_MONTH,
COUNT(DISTINCT EMPLOYEE_ACTION) UNIQUE_ACTIONS
FROM CORE.ACTION_TEST
GROUP BY TO_CHAR(ACCESS_DATE, 'yyyy-mm')
) b
on a.YEAR_MONTH = b.YEAR_MONTH
order by a.YEAR_MONTH ASC
```
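The join-two-aggregates pattern is portable; here is a small SQLite sketch using `substr` in place of Oracle's `TO_CHAR`, with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE date_test (access_date TEXT, employee_id INTEGER);
CREATE TABLE action_test (access_date TEXT, employee_action TEXT);
INSERT INTO date_test VALUES
    ('2013-08-05', 10), ('2013-08-20', 11), ('2013-09-01', 10);
INSERT INTO action_test VALUES
    ('2013-08-05', 'login'), ('2013-08-06', 'edit'), ('2013-09-02', 'login');
""")

# Aggregate each table separately, then join the two result sets on the month.
rows = conn.execute("""
    SELECT a.ym, a.unique_users, b.unique_actions
    FROM (SELECT substr(access_date, 1, 7) AS ym,
                 COUNT(DISTINCT employee_id) AS unique_users
          FROM date_test
          GROUP BY substr(access_date, 1, 7)) a
    JOIN (SELECT substr(access_date, 1, 7) AS ym,
                 COUNT(DISTINCT employee_action) AS unique_actions
          FROM action_test
          GROUP BY substr(access_date, 1, 7)) b
      ON a.ym = b.ym
    ORDER BY a.ym
""").fetchall()
result = rows
```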
|
If both tables have many records, a Cartesian join is a poor solution and may not actually provide the answer you want. I'd solve this problem something like this:
```
SELECT TO_CHAR (COALESCE (t1.year_month, t2.year_month), 'yyyy-mm')
AS year_month,
t1.unique_users,
t2.unique_actions
FROM (SELECT TRUNC (access_date, 'mm') AS year_month,
COUNT (DISTINCT employee_id) AS unique_users
FROM core.date_test
GROUP BY TRUNC (access_date, 'mm')) t1
FULL OUTER JOIN
(SELECT TRUNC (access_date, 'mm') AS year_month,
COUNT (DISTINCT employee_action) AS unique_actions
FROM core.action_test
GROUP BY TRUNC (access_date, 'mm')) t2
ON t1.year_month = t2.year_month
ORDER BY COALESCE (t1.year_month, t2.year_month) ASC
```
The reason a Cartesian join performs poorly is that every row in the first table must be matched with every row in the second table before the `group by` is applied. If each table has only 1000 rows, that's 1,000,000 values that the database has to construct.
|
Three-Column Group-By in Oracle?
|
[
"",
"sql",
"oracle",
""
] |
I have a table with product prices. There are two price types: A = offer price, B = standard price. Example:
```
product_id | price | price_type
-------------------------------------
001 | 0.99 | A
001 | 1.49 | B
```
I'm looking for a SELECT-statement that returns
* the standard price (type B), if there's no offer price (type A) for this product,
* the offer price (type A) whenever the product has a price of type A (disregarded if a price of type B is maintained or not).
For my example data above the statement has to return the offer price = **0.99** = type A.
|
This can be done with a left join. I am assuming that you will at least always have a standard price row for a product.
```
select p1.product_id,
coalesce(p2.price, p1.price) as price
from product_prices p1
left join product_prices p2
on p2.product_id = p1.product_id
and p2.price_type = 'A'
where p1.price_type = 'B'
```
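A quick check of the `LEFT JOIN` + `COALESCE` fallback in SQLite, with an extra invented product `002` that has no offer price:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product_prices (product_id TEXT, price REAL, price_type TEXT);
INSERT INTO product_prices VALUES
    ('001', 0.99, 'A'), ('001', 1.49, 'B'),
    ('002', 2.50, 'B');            -- no offer price for 002
""")

# Offer price (type A) wins when present; otherwise fall back to type B.
rows = conn.execute("""
    SELECT p1.product_id,
           COALESCE(p2.price, p1.price) AS price
    FROM product_prices p1
    LEFT JOIN product_prices p2
      ON p2.product_id = p1.product_id
     AND p2.price_type = 'A'
    WHERE p1.price_type = 'B'
    ORDER BY p1.product_id
""").fetchall()
result = rows
```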
|
You could left join two queries on the table and use `coalesce` to determine if there's an offer or not:
```
SELECT b.product_id, COALESCE(a.price, b.price)
FROM (SELECT product_id, price
FROM products
WHERE price_type = 'B') b
LEFT JOIN (SELECT product_id, price
FROM products
WHERE price_type = 'A') a ON a.product_id = b.product_id
```
|
Exclusive Disjunction
|
[
"",
"sql",
"select",
"oracle11g",
""
] |
I am trying to get the date difference as `day - hours - mins - sec` from date values stored as strings. The time portion includes `AM`/`PM` markers.
I have tried using substring and converting to datetime, but the `AM`/`PM` part is making it difficult.
I have: `06-OCT-15 09.30.20.000000 AM` and `06-OCT-15 09.45.20.000000 PM`
I am trying to get: `0 day(s) 12 hour(s) 15 min(s) 0 sec(s)`.
|
The SQL Server query below takes the total difference in seconds and splits it into day/hour/minute/second components, which matches the requested `0 day(s) 12 hour(s) 15 min(s) 0 sec(s)` format:
```
WITH CTE AS (
    SELECT DATEDIFF(
               ss,
               '2015-10-06 08:00:00.00 AM',
               '2015-10-06 09:00:00.00 PM'
           ) AS TotalSeconds
)
SELECT CAST(TotalSeconds / 86400 AS VARCHAR(10)) + ' day(s) '
     + CAST(TotalSeconds % 86400 / 3600 AS VARCHAR(10)) + ' hour(s) '
     + CAST(TotalSeconds % 3600 / 60 AS VARCHAR(10)) + ' min(s) '
     + CAST(TotalSeconds % 60 AS VARCHAR(10)) + ' sec(s)' AS FullDifference
FROM CTE
```
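Outside the database, the same breakdown is easy to verify in Python; the `strptime` format below is an assumption matched to the question's sample strings (day, abbreviated month, two-digit year, 12-hour time with microseconds and an AM/PM marker):

```python
from datetime import datetime

fmt = "%d-%b-%y %I.%M.%S.%f %p"  # matches e.g. '06-OCT-15 09.30.20.000000 AM'
start = datetime.strptime("06-OCT-15 09.30.20.000000 AM", fmt)
end = datetime.strptime("06-OCT-15 09.45.20.000000 PM", fmt)

delta = end - start
# timedelta stores whole days separately; split the leftover seconds.
hours, rem = divmod(delta.seconds, 3600)
minutes, seconds = divmod(rem, 60)
result = f"{delta.days} day(s) {hours} hour(s) {minutes} min(s) {seconds} sec(s)"
```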
|
This may help:
<http://www.w3schools.com/sql/func_datediff.asp>
Example to get hours (you have to convert your date string first), like:
```
select datediff(hour, CONVERT(datetime, '06-OCT-15 09:00:00.000 AM'), CONVERT(datetime, '06-OCT-15 08:00:00.000 PM')) AS DiffDate
```
Result: 11
|
Get date difference in days, hours, mins and secs (AM/PM date)
|
[
"",
"mysql",
"sql",
"date-difference",
""
] |
Running a SQL statement to CREATE a table if this table does not exist in 'sysobjects'. The table creates successfully when it does not exist, however, I get an error when running it once the table then does exist. This table must always remain and not be dropped, this is just for first run instance.
```
IF NOT EXISTS (SELECT * FROM sysobjects WHERE name = '[tablename]')
CREATE TABLE [tablename]
(
jobID int,
fileName varchar (255),
pathName varchar (255),
);
```
So, once this table is created the script is run again and it will error out on this SQL statement due to the table already existing.
Any help is much appreciated.
|
If you only want to avoid errors..
If the table is existing then do not create the table, here is the code for easy checking to avoid errors.
```
IF OBJECT_ID('[tablename]') IS NULL
BEGIN
CREATE TABLE [tablename]
(
    jobID int,
    fileName varchar (255),
    pathName varchar (255)
);
END
```
If that is newly created table, and you want to check if already existing and you want to DROP and CREATE **(Not Advisable)**
```
IF OBJECT_ID('[tablename]') IS NOT NULL
DROP TABLE [tablename]
CREATE TABLE [tablename]
(
    jobID int,
    fileName varchar (255),
    pathName varchar (255)
);
```
Hope this helps.
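The same guard-before-create idea can be sketched in SQLite, which catalogues tables in `sqlite_master` (the table name is the question's placeholder):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def ensure_table(conn):
    # Only create the table if it is not already catalogued.
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        ("tablename",),
    ).fetchone()
    if exists is None:
        conn.execute(
            "CREATE TABLE tablename "
            "(jobID int, fileName varchar(255), pathName varchar(255))"
        )

ensure_table(conn)
ensure_table(conn)  # second run is a no-op: no duplicate-object error

result = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
```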
|
You don't need the brackets inside the string literal. The `name` column in `sysobjects` stores the bare table name, so `WHERE name = '[tablename]'` never matches and the `CREATE TABLE` runs every time:
```
IF NOT EXISTS (SELECT * FROM sysobjects WHERE name = 'tablename')
CREATE TABLE [tablename]
(
    jobID int,
    fileName varchar (255),
    pathName varchar (255)
);
```
You are creating a table named `tablename` but looking for a table named `[tablename]` (brackets included), which never exists.
|
There is already an object named ' ' in Database
|
[
"",
"sql",
"sql-server",
""
] |
I have three columns -
* `TheDate` - Obviously a date.
* `TheID` - A strictly increasing ID.
* `TheType` - A Record type.
I want these to sort by `TheID` in almost all cases except for just one record type. Records of the special record type must appear at the end of all records with the same date.
Example:
[](https://i.stack.imgur.com/nN49G.png)
I want the record type `101` to appear **after** all other records that have the same date. In all other cases `TheID` controls the order.
My attempt goes like:
```
ORDER BY
TheDate,
CASE WHEN TheType = 101 THEN 1 ELSE 0 END,
TheID
```
which *nearly* does what I want but is doing far more - i.e. it will reorder by `TheDate`, which is not what I want.
If the same date occurs later in the data I don't care - it's just when a sequence of records containing a type 101 (when sorted by `TheID`) all have the same date I want type `101` to be last.
|
This is complicated. First you must find consecutive date records, so with
```
thedate theid thetype
2014-07-12 5001 59
2014-07-12 5002 101
2014-07-12 5003 88
2014-07-13 5004 10
2014-07-12 5005 60
```
you would identify 2014-07-12 as one occurrence for the first three records and another for the last record. The second record would have to get position #3 in your results, not #5.
You achieve this by giving consecutive records a group key by using first `LAG` to look into the previous record, thus creating a flag on group change, and then cumulating these flags.
```
select thedate, theid, thetype
from
(
select
thedate, theid, thetype,
sum(new_group) over (order by theid) as group_key
from
(
select
thedate, theid, thetype,
        case when lag(thedate) over (order by theid) = thedate then 0 else 1 end as new_group
from mytable
) marked
) grouped
order by
group_key,
case when thetype = 101 then 1 else 0 end,
theid;
```
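The gaps-and-islands technique above (flag group changes with `LAG`, then take a running `SUM` of the flags) can be verified in SQLite, which supports window functions since version 3.25:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (thedate TEXT, theid INTEGER, thetype INTEGER);
INSERT INTO mytable VALUES
    ('2014-07-12', 5001, 59),
    ('2014-07-12', 5002, 101),
    ('2014-07-12', 5003, 88),
    ('2014-07-13', 5004, 10),
    ('2014-07-12', 5005, 60);
""")

rows = conn.execute("""
    SELECT theid
    FROM (
        SELECT thedate, theid, thetype,
               SUM(new_group) OVER (ORDER BY theid) AS group_key
        FROM (
            SELECT thedate, theid, thetype,
                   CASE WHEN LAG(thedate) OVER (ORDER BY theid) = thedate
                        THEN 0 ELSE 1 END AS new_group
            FROM mytable
        ) marked
    ) grouped
    ORDER BY group_key,
             CASE WHEN thetype = 101 THEN 1 ELSE 0 END,
             theid
""").fetchall()
result = [r[0] for r in rows]
```

With the sample rows, the ids come out as 5001, 5003, 5002 (type 101 pushed to the end of its date run), then 5004, then 5005.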
|
Taking the liberty granted in comments to assume that `TheDate` is non-decreasing with ascending `TheId`, if `r1.TheId < r2.TheId` then it must be the case that `r1.TheDate <= r2.TheDate` (that's the definition of non-decreasing). In that case, ordering first by `TheDate` and then by `TheId` produces the same order as ordering by `TheId` alone. Looking at it from the other direction, ordering by `TheId` automatically produces results clustered by `TheDate` and in order by date.
But what you're already doing differs from ordering by (`TheDate`, `TheId`) (which we already established is the same as ordering by just `TheId`) only by moving the special records to the end of each date cluster, which is exactly what you say you want. Thus, you must be getting your results in the desired order; if you in fact have any problem then it must be that you are dissatisfied with the means by which you are doing so. For instance, perhaps you are concerned with query performance.
If your existing ordering indeed produces the correct results, however, then I'm having trouble seeing an alternative that I would expect to deliver substantially better performance. No doubt the ordering can be produced by means that do not rely on `TheDate` to be non-decreasing, but I would expect all of those to be comparatively expensive to compute.
|
Sliding certain records to the end of a run of the same date
|
[
"",
"sql",
"oracle",
"sorting",
""
] |
**I am designing a SQL database that needs to be a five-table database to meet Air-Crewe’s requirements. I have the following so far:**
**I have this for UNF:**
CrewID, Crew Type, Title, Forename, Surname, Gender, CAALicenceNum, FlightID, FlightNum, IATADep, IARAArr, Date, SchDep,SchArr, Comments, A/CType, A/CReg, A/CManuf
**I have this for 1NF:**
TBLCrew(CrewID[PrimaryKey], CrewType, CrewTitle, Forename, Surname, gender, CAALicenceNum, FlightID\*)
TBLFlight(FlightID[PrimaryKey], FlightNumber, IATADep, IATAArr, Date, SchArr, comments, A/CType, A/CReg, A/CManuf)
**I have this for 2NF:**
TBLCrew(CrewID[PrimaryKey], CrewType, CreweTitle, Forename, Surname, gender, CAALicenceNum)
TBLFlight(FlightID[PrimaryKey], FlightNum, IATADep, IATAArr, Date, SchArr, comments, A/CType, A/CReg, A/CManuf)
TBLCrewFlight(CreweID[composite/compoundKey], FlightID[composite/compoundKey])
**The 3NF needs to be separated into five tables but I don't know how to achieve this - can anyone please help me out? Or correct me if I have made a mistake in the normalisation above (I am new to normalisation, as you can probably tell).**
[](https://i.stack.imgur.com/LmLtN.png)
|
The accepted answer allows multiple aircraft for a flight; I do not think that is correct.
```
TBLCrew(CrewID[PrimaryKey], CrewType, CreweTitle, Forename, Surname, gender, CAALicenceNum)
TBLFlight(FlightNum[PrimaryKey], IATADep, IATAArr, Date, SchArr, A/CReg[composite/compoundKey])
TBLCrewFlight(CreweID[composite/compoundKey], FlightNum[composite/compoundKey], Comments)
Aircraft(A/CReg[PrimaryKey], A/CType[composite/compoundKey])
TBL_A/CType(A/CType[PrimaryKey], A/C Manuf)
```
|
First of all - I am unsure even of the 1st form. `Comments` implies multiple instances of a comment, therefore it probably isn't atomic and I would make a table for them too. It would have three attributes - comment\_ID, comment, FlightID.
In the 3rd form every non-prime attribute of the table is non-transitively dependent on every superkey of the table. So in layman's terms if you logically identify attributes which are dependent on another non-key attribute, you need to transform them into another table.
If gender is dependent on the forename is arguable. Other decompositions are somewhat difficult, since I don't have the descriptions of the columns (not entirely sure what they represent).
However here I present some of my speculations:
* CrewTitle probably depends on the gender - bam new table
* Departures and arrivals depend on the Flight number - bam new table
* A/C types and manufacturers probably depend on the A/C Reg - bam new table
However, you should have better understanding of the individual columns, and therefore you should make these decisions by yourself. These examples should help you to understand the 3rd form concept.
|
Normalisation - SQL - 3NF
|
[
"",
"sql",
"normalization",
""
] |
I am generating a temp table with duplicate entries for each "account ID". And I want all records with the same account ID to have an identifier. For example, all records with account ID: 1234 should have an identifier of 1. All records with account ID: 4321 should have an identifier of 2.
I have added a column to my temp table for this field, but I do not know how to populate it based off the account IDs.
|
You don't need an additional identifier, just group by your account number and select the max(date) for each, like:
```
Select accountID, max(lastUpdated)
from t
group by accountID
```
|
You can use [DENSE\_RANK()](https://msdn.microsoft.com/en-GB/library/ms173825.aspx) function with `ORDER BY AccountId`;
```
SELECT *, DENSE_RANK() OVER (ORDER BY AccountId) Identifier
FROM YourTable
```
**[Fiddle demo](http://sqlfiddle.com/#!3/42696/1)**
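`DENSE_RANK` behaves the same way in SQLite (3.25+); a small sketch with invented account data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (account_id INTEGER, last_updated TEXT);
INSERT INTO accounts VALUES
    (1234, '2015-01-01'), (1234, '2015-02-01'),
    (4321, '2015-01-15'), (4321, '2015-03-01');
""")

# Every row with the same account_id gets the same dense rank: 1, 1, 2, 2.
rows = conn.execute("""
    SELECT account_id,
           DENSE_RANK() OVER (ORDER BY account_id) AS identifier
    FROM accounts
    ORDER BY account_id
""").fetchall()
result = rows
```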
|
How can I add a unique identifier for each "account ID" in my database table?
|
[
"",
"sql",
"sql-server",
""
] |
This questions is a simplified version of a problem I have at work. I'm using a custom database that supports all standard SQL queries. I suppose this question applies to any RDMS that supports SQL though.
Suppose I have the table `events`, including three columns: `event_type` (string), `details` (string), and `timestamp` (integer).
One of the events of interest is power outage. The system logs the power status every once in a while. If there's a power outage, "outage" will be logged. Otherwise, "good" will be logged.
Another event of interest is access. Whenever somebody accesses a room, the room that was entered will be logged as well.
I want to select all rooms entered during a power outages.
For example, I have the following rows.
```
event_type | details | timestamp
---------------------------------------
power | good | 14
access | room 6 | 13
power | good | 12
access | room 5 | 11
access | room 4 | 10
power | outage | 9
power | outage | 8
access | room 3 | 7
power | outage | 6
access | room 2 | 5
power | good | 4
access | room 1 | 3
power | outage | 2
access | room 0 | 1
power | good | 0
```
I want to get the following rows.
```
event_type | details | timestamp
---------------------------------------
power | good | 12
access | room 5 | 11
access | room 4 | 10
power | outage | 9
power | outage | 8
access | room 3 | 7
power | outage | 6
power | good | 4
access | room 1 | 3
power | outage | 2
```
The current way I'm doing this feels like I'm not using SQL the right way.
First I get the timestamp for a power outage. Next, I get the minimum timestamp that the power came back after that outage. Finally, I select all events between the two timestamps. After that, I find the next outage timestamp after the power came back. Repeat.
Another way I did it was to just download all the rows to a flat file, then use a Python script to filter out the rows I want in a procedural way.
Is there a better way to get the rows I need?
|
Here is a working query for you ([SQL Fiddle](http://sqlfiddle.com/#!6/8fdea/1)):
```
SELECT
E.*
FROM
dbo.Events E
WHERE
E.event_type IN ('access', 'power')
AND (
SELECT TOP 1 details
FROM dbo.Events E2
WHERE
E.timestampid >= E2.timestampid
AND E2.event_type = 'power'
ORDER BY E2.timestampid DESC
) = 'outage'
;
```
I used SQL Server syntax but it's just as easy to pop a `LIMIT 1` in there for MySQL. Regarding windowing functions, I don't have the time or energy to come up with the answer at the moment, but if you'll refer to [this question and my answer there](https://stackoverflow.com/questions/13614431/how-can-i-detect-and-bound-changes-between-row-values-in-a-sql-table/13618019) you will see one of the techniques I've used for a similar problem.
Also, note that how to handle the data at the beginning of the set is a question. If the first events were `{ access, room 0 }, { power, outage }`, would you want the `room 0` row to be in the result set or not? Even `{ access, room 0 }, { power, good }` might also be a candidate for inclusion at the start of the set, since it *could* be the case that the power was out at the start--we don't have that information in this data. If you want to include rows in either of these cases, please let me know and I can update my query.
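The correlated-subquery idea (look up the most recent `power` event before each row) can be checked in SQLite; this sketch pulls just the rooms accessed during an outage, which is the core of the question, with the `timestamp` column renamed `ts` to avoid any keyword confusion:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (event_type TEXT, details TEXT, ts INTEGER);
INSERT INTO events VALUES
    ('power', 'good', 14), ('access', 'room 6', 13), ('power', 'good', 12),
    ('access', 'room 5', 11), ('access', 'room 4', 10), ('power', 'outage', 9),
    ('power', 'outage', 8), ('access', 'room 3', 7), ('power', 'outage', 6),
    ('access', 'room 2', 5), ('power', 'good', 4), ('access', 'room 1', 3),
    ('power', 'outage', 2), ('access', 'room 0', 1), ('power', 'good', 0);
""")

# An access happened during an outage if the latest earlier power event
# was 'outage'.
rows = conn.execute("""
    SELECT e.details
    FROM events e
    WHERE e.event_type = 'access'
      AND (SELECT e2.details
           FROM events e2
           WHERE e2.event_type = 'power' AND e2.ts < e.ts
           ORDER BY e2.ts DESC
           LIMIT 1) = 'outage'
    ORDER BY e.ts
""").fetchall()
result = [r[0] for r in rows]
```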
|
You can identify all rows where the previous "power" row was for "outage" and not "good".
I think that gets you the information you want.
```
select e.*
from events e
where 'outage' = (select e2.details
from events e2
where e2.event_type = 'power' and
e2.timestamp < e.timestamp
order by e2.timestamp desc
limit 1
) or
e.event_type = 'power';
```
|
How do I select rows between rows that match a condition?
|
[
"",
"sql",
""
] |
I have a big query (MYSQL) to join several tables:
```
SELECT * FROM
`AuthLogTable`,
`AppTable`,
`Company`,
`LicenseUserTable`,
`LicenseTable`,
`LicenseUserPool`,
`PoolTable`
WHERE
`LicenseUserPool`.`UserID`=`LicenseUserTable`.`UserID` and
`LicenseUserTable`.`License`=`LicenseTable`.`License` and
LEFT(RIGHT(`AuthLogTable`.`User`, 17), 16)=`LicenseUserPool`.`UserID` and
`LicenseUserPool`.`PoolID`=`PoolTable`.`id` and
`Company`.`id`=`LicenseTable`.`CompanyID` and
`AuthLogTable`.`License` = `LicenseTable`.`License` and
`AppTable`.`AppID` = `AuthLogTable`.`AppID` AND
`PoolTable`.`id` IN (-1,1,2,4,15,16,17,5,18,19,43,20,3,6,8,10,29,30,7,11,12,24,25,26,27,28,21,23,22,31,32,33,34,35,36,37,38,39,40,41,42,-1)
ORDER BY
`AuthLogTable`.`AuthDate` DESC,
`AuthLogTable`.`AuthTime` DESC
LIMIT 0,20
```
I use **explain** and it gives the following:
[](https://i.stack.imgur.com/WlnjJ.jpg)
How to make this faster? It takes several seconds in a big table.
"Showing rows 0 - 19 ( 20 total, Query took **3.5825** sec)"
as far as i know, the fields used in the query are indexed in each table.
Indices are set for **AuthLogTable**
[](https://i.stack.imgur.com/5Qo63.png)
|
You can try running this query without the 'order by' clause on your data and see if it makes a difference (also run 'explain'). If it does, you can consider adding an index/indices on the fields you sort by. `Using temporary; using filesort` means that a temp table is created and then sorted; without an index, that takes time.
As far as I know, join style doesn't make any difference because query is parsed into another form anyway. But you still may want to use ANSI join syntax (see also this question [ANSI joins versus "where clause" joins](https://stackoverflow.com/questions/1613304/ansi-joins-versus-where-clause-joins)).
|
First of all consider modifying your query to use JOINS properly. Also, make sure that you have indexed the columns used in `JOIN ON` clause ,`WHERE` condition and `ORDER BY` clause.
```
select * from `AuthLogTable`
join `AppTable` on `AppTable`.`AppID` = `AuthLogTable`.`AppID`
join `LicenseTable` on `AuthLogTable`.`License` = `LicenseTable`.`License`
join `Company` on `Company`.`id`=`LicenseTable`.`CompanyID`
join `LicenseUserTable` on `LicenseUserTable`.`License`=`LicenseTable`.`License`
join `LicenseUserPool` on `LicenseUserPool`.`UserID`=`LicenseUserTable`.`UserID`
join `PoolTable` on `LicenseUserPool`.`PoolID`=`PoolTable`.`id`
where LEFT(RIGHT(`AuthLogTable`.`User`, 17), 16)=`LicenseUserPool`.`UserID`
and `PoolTable`.`id` IN (-1,1,2,4,15,16,17,5,18,19,43,20,3,6,8,10,29,30,7,11,12,24,25,26,27,28,21,23,22,31,32,33,34,35,36,37,38,39,40,41,42,-1)
order by `AuthLogTable`.`AuthDate` desc, `AuthLogTable`.`AuthTime` desc
limit 0,20;
```
|
How to Optimize this MYSQL Query - Join several tables
|
[
"",
"mysql",
"sql",
""
] |
I have two tables, `Teachers` and `Departments`.
`Teachers` works at a department, so teacher table has a `department_id` foreign key.
Each department has one teacher as its chairman. Therefore, `Department` has a
foreign key `chairman` referring to the `Teachers` table.
However, teachers table can contain teachers from many departments. But a chairman must be a teacher of that same department. So, it seems foreign
key should refer to the teachers subset instead of whole table.
I have two questions.
1. Is it a bad design since invalid teacher can be put into `Department` table as head?
2. If not, how do I enforce this in the database?
|
Your design makes a number of assumptions.
1. A teacher can only be in one department.
2. Each department can have only one chairman.
3. Chairman is the only position you'll want to store.
4. You don't want to know anything more about the chairman.
5. Only teachers can be chairmen.
If these assumptions hold true in reality, you can solve your conundrum by taking advantage of the fact that a teacher can only be in one department. Move the "chairman" flag into the teacher table. That guarantees the chairman is a member of the department.
However, I think those assumptions won't survive an encounter with reality. If they don't it would involve significant restructuring of your schema and queries. I would restructure the schema to store the more generic *staff* and turn *teacher* into a department role. Staff and departments are in a many-to-many relationship. This relationship can be used to store the type of relationship as well as any further information.
```
CREATE TABLE staff (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
CREATE TABLE departments (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
CREATE TABLE department_roles (
staff_id INTEGER REFERENCES staff(id),
department_id INTEGER REFERENCES departments(id),
role ENUM('chairman', 'teacher')
);
```
That layout guarantees a chairman is in their own department. Constraints on department\_roles can enforce business logic like having one chairman per department, or only teachers can be chairmen, and they can be removed or changed later without affecting queries. It allows adding roles in the future. It allows adding more information to those roles. It allows non-teachers to hold department roles.
|
It is not a bad database design, because from a relational point of view, the relationships are fine.
Your requirement is an application concern, so it is okay to handle it at the application layer, meaning, to take pains in the application which makes use of this database to ensure that the chairman will always belong to the department that they are chairing.
But the question of whether relational databases are meant to enforce only rules about the natural relationships of entities, or whether they should also enforce business rules, is a philosophical one. You can try also enforcing business rules too, but you should know that not all business rules can be enforced by a database, or are practical to enforce by a database.
If you insist on going that path, then rlb's comment points to a decent solution. (Look for another comment in that post which points to another post which shows how to generate an error from within a trigger.)
|
Is this a bad database design?
|
[
"",
"sql",
"database-design",
""
] |
I have a string which I want to trim. What I am looking for is way to get the value from string up to the last `Y`.
Ex:
1. `AB Y bc Y dc Y1` -> `AB Y bc Y dc Y`
2. `ABYBCY` -> `ABYBCY`
3. `ABY BCY DC` -> `ABY BCY`
I am trying the below query but I am not getting what I need.
```
DECLARE @name varchar(200)='RAM, Y SHAM Y AB'
SELECT @name
,right(@name, CHARINDEX('Y', @name) - 1) AS [Surname]
,REPLACE(SUBSTRING(@name, CHARINDEX(',', @name), LEN(@name)), ',', '') AS [FirstName]
```
|
You can do this with `reverse()` and `left()`.
Something like this
```
SELECT LEFT(
    @name,
    LEN(@name) - CHARINDEX('Y', REVERSE(@name)) + 1
) string
```
**Note:** This assumes that there is at least one "Y" in the given string.
[Here is a SQL Fiddle.](http://www.sqlfiddle.com/#!6/9eecb7/3494)
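The "keep everything up to the last Y" rule is a one-liner outside SQL, which makes the expected behaviour easy to pin down (`rfind` plays the role of the `REVERSE`/`CHARINDEX` trick):

```python
def up_to_last_y(s: str) -> str:
    """Return the prefix of s ending at the last 'Y' (whole string if no 'Y')."""
    pos = s.rfind('Y')
    return s if pos == -1 else s[:pos + 1]

# The three examples from the question.
examples = ['AB Y bc Y dc Y1', 'ABYBCY', 'ABY BCY DC']
result = [up_to_last_y(s) for s in examples]
```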
|
You can also try something like:
```
Select substring(@name,1, len(@name) - (charindex('Y', reverse(@name)))+1)
```
|
SQL query to get char from string based on criteria
|
[
"",
"sql",
"sql-server",
""
] |
Let's say we have the following Fruits table:
```
Fruit | Qty
-------------
Apple | 2
Apple | 5
Apple | 2
Orange | 3
Orange | 4
Orange | 0
Banana | 2
Banana | 5
Pear | 2
Mango | 1
Mango | 0
```
If I want to get the top 3 fruit I would create a query like this:
```
SELECT TOP 3 Fruit, SUM(Qty) AS Total
FROM Fruits
GROUP BY Fruit
ORDER BY Total DESC
```
will return:
```
Apple | 9
Orange | 7
Banana | 7
```
However, if I want to include the sum of the Fruit that ain't in the top 3, I would have to write something like this to exclude the Top 3 fruits using a `NOT IN`:
```
SELECT 'OTHER' AS Fruit, SUM(Qty) AS Total
FROM Fruits
WHERE Fruit NOT IN (SELECT Fruit FROM (SELECT TOP 3 Fruit, SUM(Qty) AS Total FROM Fruits GROUP BY Fruit ORDER BY Total DESC) AS Test)
```
If I want to execute this in a single query, I guess I could combine the 2 queries using a `UNION`. I already have my plan B but I'm wondering if SQL has any built-in way to aggregate rows that ain't part of the top values that could be used to simplified the query?
|
You need to RANK the sums and add another SUM for the final result:
```
with cte as
(
SELECT Fruit, SUM(Qty) AS Qty,
ROW_NUMBER() OVER (ORDER BY SUM(Qty) DESC) AS rn
FROM Fruits
GROUP BY Fruit
)
select
case when rn > 3 then 'Other' else fruit end as fruit,
sum(Qty)
from cte
group by
case when rn > 3 then 'Other' else fruit end
order by MIN(rn)
```
See [Fiddle](http://sqlfiddle.com/#!6/fc2b2/4)
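A runnable sketch of the rank-then-regroup approach in SQLite (3.25+), using the question's fruit data; the sums are computed in a first CTE so the window function only orders plain columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruits (fruit TEXT, qty INTEGER);
INSERT INTO fruits VALUES
    ('Apple', 2), ('Apple', 5), ('Apple', 2),
    ('Orange', 3), ('Orange', 4), ('Orange', 0),
    ('Banana', 2), ('Banana', 5),
    ('Pear', 2), ('Mango', 1), ('Mango', 0);
""")

# Rank fruits by total, relabel everything past rank 3 as 'Other',
# then aggregate again.
rows = conn.execute("""
    WITH sums AS (
        SELECT fruit, SUM(qty) AS qty
        FROM fruits
        GROUP BY fruit
    ),
    cte AS (
        SELECT fruit, qty,
               ROW_NUMBER() OVER (ORDER BY qty DESC) AS rn
        FROM sums
    )
    SELECT CASE WHEN rn > 3 THEN 'Other' ELSE fruit END AS fruit,
           SUM(qty) AS total
    FROM cte
    GROUP BY CASE WHEN rn > 3 THEN 'Other' ELSE fruit END
    ORDER BY MIN(rn)
""").fetchall()
result = dict(rows)
```

Pear (2) and Mango (1) fall outside the top 3 and collapse into a single `Other` row totalling 3.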
|
This is one way of doing it by ranking rows based on `sum(qty)`.
```
with totals as (SELECT Fruit, SUM(Qty) AS Total FROM Fruits GROUP BY Fruit)
, rownums as (select *, dense_rank() over(order by total desc) as rank from totals)
select
case when rank > 3 then 'Other' else fruit end as fruit, total
from rownums
```
If the `other` rows need aggregation, include one more `sum` on the `total` column and `group by` the `case` statement.
|
Aggregate rows outside the top
|
[
"",
"sql",
"sql-server",
"aggregate",
"sql-server-2014",
""
] |
I have three tables (examples here). Two with data and one that is a junction table to handle many:many relationships.
**Users**:
```
ID | UserName
====================
1 | Jeremy Coulson
2 | Someone Else
```
**Repositories**:
```
ID | RepositoryURI
====================
1 | http://something
2 | http://another
```
**RepositoriesUsers**:
```
ID | UserID | RepositoryID
==========================
1 | 1 | 1
2 | 2 | 2
```
So, in this example, user 1 is associated with repository 1. User 2 is associated with repository 2. I need to now search by `Repositories.RepositoryURI` and return `Users.UserName`.
I have this query:
```
select UserName
from RepositoriesUsers
join Users on Users.ID = RepositoriesUsers.UserID
join Repositories on Repositories.RepositoryURI = 'http://another';
```
But that returns every row in the `RepositoriesUsers` table.
How can I match data between the junction table with IDs and the other tables with human-friendly text?
|
You aren't actually giving a correct join condition for your second `INNER JOIN`. It should be:
```
SELECT U.UserName
FROM RepositoriesUsers RU
INNER JOIN Users U
ON U.ID = RU.UserID
INNER JOIN Repositories R
ON RU.RepositoryID = R.ID
WHERE R.RepositoryURI = 'http://another';
```
Also, you should try to use table aliases in your queries for clarity.
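Here is a quick reproduction of the corrected join against the question's sample data, run on SQLite via Python's `sqlite3` (purely illustrative; the SQL itself is the same in SQL Server):

```python
import sqlite3

# Minimal reproduction of the corrected junction-table join,
# using the sample rows from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (ID INTEGER PRIMARY KEY, UserName TEXT);
CREATE TABLE Repositories (ID INTEGER PRIMARY KEY, RepositoryURI TEXT);
CREATE TABLE RepositoriesUsers (ID INTEGER PRIMARY KEY, UserID INTEGER, RepositoryID INTEGER);
INSERT INTO Users VALUES (1, 'Jeremy Coulson'), (2, 'Someone Else');
INSERT INTO Repositories VALUES (1, 'http://something'), (2, 'http://another');
INSERT INTO RepositoriesUsers VALUES (1, 1, 1), (2, 2, 2);
""")
result = conn.execute("""
SELECT U.UserName
FROM RepositoriesUsers RU
INNER JOIN Users U ON U.ID = RU.UserID
INNER JOIN Repositories R ON RU.RepositoryID = R.ID
WHERE R.RepositoryURI = 'http://another'
""").fetchall()
print(result)  # [('Someone Else',)]
```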
|
I think a simple correction is needed to your join:
```
select UserName
from RepositoriesUsers
join Users on Users.ID = RepositoriesUsers.UserID
join Repositories on Repositories.ID = RepositoriesUsers.RepositoryID
where Repositories.RepositoryURI = 'http://another';
```
|
SQL Server: How can I get data from junction table that only has IDs?
|
[
"",
"sql",
"sql-server",
"join",
"junction-table",
""
] |
I need to find patients having multiple genders recorded. (I wish I knew how to create a table.)
```
[Hospital Number] [Sex Code]
0000001 M
0000002 F
0000003 M
0000003 F
```
Here we can see patient 1 and 2 are okay, but patient 3 has multiple genders. (There are 4 available genders in the system).
The columns are `[Hospital Number]` and `[Sex Code]` and the table is called `Table1`.
|
Alternative solution, do a self join:
```
select distinct p1.[hospital number]
FROM patients AS p1
INNER JOIN patients AS p2
ON p1.[hospital number] = p2.[hospital number]
AND p1.[Sex code] <> p2.[Sex code]
```
This returns `[hospital number]` whenever the same `[hospital number]` appears with two different `[Sex code]` values.
Now edited according to HarveyFrench's suggestion!
|
You do not need a subquery in MS Access. You can just compare the minimum and maximum values:
```
SELECT [hospital number]
FROM patients
GROUP BY [hospital number]
HAVING MIN([Sex Code]) <> MAX([Sex Code]);
```
This should also have better performance than the count-distinct approach. Note that it does not return rows where `[Sex Code]` is `NULL` *and* also takes on one other value (neither would `COUNT(DISTINCT)`); that is, `NULL` is ignored. That is easy enough to fix, but it is unclear how you want to handle `NULL`.
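As an illustrative sanity check, the same `HAVING MIN(...) <> MAX(...)` clause run on the question's sample data in SQLite (via Python's `sqlite3`; the trick is engine-agnostic):

```python
import sqlite3

# The MIN/MAX trick checked on the question's sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients ("Hospital Number" TEXT, "Sex Code" TEXT);
INSERT INTO patients VALUES
  ('0000001', 'M'), ('0000002', 'F'),
  ('0000003', 'M'), ('0000003', 'F');
""")
result = conn.execute("""
SELECT "Hospital Number"
FROM patients
GROUP BY "Hospital Number"
HAVING MIN("Sex Code") <> MAX("Sex Code")
""").fetchall()
print(result)  # [('0000003',)] -- only the patient with two different codes
```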
|
Find patients with multiple gender
|
[
"",
"sql",
"ms-access",
""
] |
So, I'm trying to select rows between two dates.
In the db, the dates also include a time.
Therefore I need to use LIKE.
**SQL**
```
$query = "SELECT * FROM table WHERE date >= LIKE :selectedDateFrom AND <= LIKE :selectedDateTo";
$query_params = array(':selectedDateFrom' => $selectedDateFrom.="%", ':selectedDateTo' => $selectedDateTo.="%");
```
This one returns an error!
What should it look like?
|
> In db, the dates also have time.
> Therefor i need to use LIKE.
No, you don't.
To select all date/times where the date component is between (from) and (to), inclusive, you can write it as
```
SELECT *
FROM table
WHERE date >= :selectedDateFrom
AND date < :selectedDateToPlusOne
```
(Note the `<` instead of `<=`, and set the second parameter to one day *after* the last day you want to include in your results.) This works even when the column includes times.
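To illustrate why the half-open interval works (and why `LIKE` is unnecessary), here is a small sketch on SQLite via Python's `sqlite3` — ISO-8601 date strings compare correctly as plain text; the table and data are invented:

```python
import sqlite3

# Demonstration of the half-open date range: >= first day, < day after last.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (d TEXT);
INSERT INTO t VALUES
  ('2015-09-30 23:59:59'),   -- before the range
  ('2015-10-01 00:00:00'),   -- first instant of the range
  ('2015-10-31 23:59:59'),   -- would be missed by d <= '2015-10-31'
  ('2015-11-01 00:00:00');   -- first instant AFTER the range: excluded
""")
# selectedDateFrom = '2015-10-01', selectedDateToPlusOne = '2015-11-01'
count = conn.execute(
    "SELECT COUNT(*) FROM t WHERE d >= '2015-10-01' AND d < '2015-11-01'"
).fetchone()[0]
print(count)  # 2 -- both October rows, including the 23:59:59 one
```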
|
You can't use LIKE with dates in SQL.
Use this instead:
```
$query = "SELECT * FROM table WHERE date >= :selectedDateFrom AND date <= :selectedDateTo";
```
|
SQL: How to use 'LIKE' with a date 'between'?
|
[
"",
"sql",
"mariadb",
""
] |
I have a scenario where I need to auto generate the value of a column if it is null.
Ex: `employeeDetails`:
```
empName empId empExtension
A 101 null
B 102 987
C 103 986
D 104 null
E 105 null
```
`employeeDepartment`:
```
deptName empId
HR 101
ADMIN 102
IT 103
IT 104
IT 105
```
Query
```
SELECT
empdt.empId, empdprt.deptName, empdt.empExtension
FROM
employeeDetails empdt
LEFT JOIN
employeeDepartment empdprt ON empdt.empId = empdprt.empId
```
Output:
```
empId deptName empExtension
101 HR null
102 ADMIN 987
103 IT 986
104 IT null
105 IT null
```
Now my question: I want to insert a dummy value that replaces each null, auto-incrementing starting from a 5-digit INT number.
Expected output:
```
empId deptName empExtension
101 HR 12345
102 ADMIN 987
103 IT 986
104 IT 12346
105 IT 12347
```
Constraints : I cannot change existing tables structure or any column's datatypes.
|
You should be able to do that with a CTE to grab [ROW\_NUMBER](https://msdn.microsoft.com/en-us/library/ms186734.aspx), and then [COALESCE](https://msdn.microsoft.com/en-us/library/ms190349.aspx) to only use that number where the value is NULL:
```
WITH cte AS(
SELECT empId, empExtension, ROW_NUMBER() OVER(ORDER BY empExtension, empId) rn
FROM employeeDetails
)
SELECT cte.empId, deptName, COALESCE(empExtension, rn + 12344) empExtension
FROM cte LEFT JOIN employeeDepartment
ON cte.empID = employeeDepartment.empID
ORDER BY cte.empId
```
Here's an [SQLFiddle](http://sqlfiddle.com/#!3/142d1/5).
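The same CTE can be sanity-checked on the question's data with Python's `sqlite3` (illustrative only; SQLite ≥ 3.25 for `ROW_NUMBER`, and `NULL` sorts first under ascending `ORDER BY` here just as in SQL Server, so the null extensions get row numbers 1–3):

```python
import sqlite3

# ROW_NUMBER + COALESCE idea, run against the question's data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employeeDetails (empName TEXT, empId INTEGER, empExtension INTEGER);
INSERT INTO employeeDetails VALUES
  ('A', 101, NULL), ('B', 102, 987), ('C', 103, 986),
  ('D', 104, NULL), ('E', 105, NULL);
CREATE TABLE employeeDepartment (deptName TEXT, empId INTEGER);
INSERT INTO employeeDepartment VALUES
  ('HR', 101), ('ADMIN', 102), ('IT', 103), ('IT', 104), ('IT', 105);
""")
result = conn.execute("""
WITH cte AS (
  SELECT empId, empExtension,
         ROW_NUMBER() OVER (ORDER BY empExtension, empId) AS rn
  FROM employeeDetails
)
SELECT cte.empId, deptName, COALESCE(empExtension, rn + 12344) AS empExtension
FROM cte LEFT JOIN employeeDepartment ON cte.empId = employeeDepartment.empId
ORDER BY cte.empId
""").fetchall()
print(result)
```

The output matches the expected table in the question: 12345, 12346, 12347 fill the three nulls while 987 and 986 are kept.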
|
If you just want to create a unique random 5 digit number for the rows where the `empExtension` column is null, then:
**Query**
```
;with cte as
(
select rn = row_number() over
(
order by empId
),*
from employeeDetails
)
select t1.empId,t2.deptName,
case when t1.empExtension is null
then t1.rn + (convert(numeric(5,0),rand() * 20000) + 10000)
else t1.empExtension end as empExtension
from cte t1
left join employeeDepartment t2
on t1.empId = t2.empId;
```
[**SQL Fiddle**](http://www.sqlfiddle.com/#!3/29acb/1)
|
Populate auto-increment INT 5 digit value in case of null in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm trying to count the occurrences where a user performs something in consecutive months. Ex.
```
| Person | Datetime |
| person_01 | '2015-01-02 10:40:15' |
| person_01 | '2015-02-02 10:40:15' |
| person_01 | '2015-07-05 10:40:15' |
| person_02 | '2015-01-02 10:40:15' |
| person_02 | '2015-04-03 10:40:15' |
| person_02 | '2015-07-09 10:40:15' |
```
Result:
```
| Created | Consecutive |
| person_01 | 1 |
| person_02 | 0 |
```
(I was trying to start with `GROUP BY year(datetime), month(datetime)`, but would like to see some example solutions for this if possible.)
Any recommendations or examples to help me out?
|
You can do this with a combination of row-number generation and `TIMESTAMPDIFF`:
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!9/87c20/9)
```
SET @row_number1:=0;
SET @row_number2:=0;
SELECT T1.person
       ,SUM(CASE WHEN TIMESTAMPDIFF(MONTH,T1.ddatetime,T2.ddatetime)=1 THEN 1 ELSE 0 END) AS CNT
FROM (SELECT @row_number1 := @row_number1 + 1 AS row_number,
person,
ddatetime
FROM datedata
ORDER BY person,
ddatetime) T1
INNER JOIN
(SELECT @row_number2 := @row_number2 + 1 AS row_number,
person,
ddatetime
FROM datedata
ORDER BY person,
ddatetime) T2
ON T1.row_number + 1 = T2.row_number
AND T1.person = T2.person
GROUP BY T1.person
```
|
You may be able to do something like this:
```
select base.person, count(subq.dt) consequtive
-- get distinct person
from (select distinct person from test) base
-- join with a subquery
left join
(
select a.*
from
-- convert date to first day of the month
(select person,
cast(concat(left(dt,7), '-01') as date) as dt
from test) a
join
-- convert date to first day of the month
-- and subtract a month
(select person,
date_sub(cast(concat(left(dt,7), '-01') as date), interval 1 month) as dt
from test) b
-- join the above 2 by person and date
on a.person = b.person and a.dt = b.dt
) subq
on subq.person = base.person
group by base.person;
Result:
| person | consequtive |
|--------|-------------|
| 01 | 1 |
| 02 | 0 |
```
SQLFiddle: <http://sqlfiddle.com/#!9/f1fed/6>
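For what it's worth, on engines with window functions (MySQL 8+, or SQLite as used in this illustrative Python check) the same count falls out of `LAG` over a "month index" (year×12 + month). The `events` table name is invented; the data is from the question:

```python
import sqlite3

# Window-function rewrite of the answers above, checked on SQLite >= 3.25.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (Person TEXT, dt TEXT);
INSERT INTO events VALUES
  ('person_01', '2015-01-02 10:40:15'),
  ('person_01', '2015-02-02 10:40:15'),
  ('person_01', '2015-07-05 10:40:15'),
  ('person_02', '2015-01-02 10:40:15'),
  ('person_02', '2015-04-03 10:40:15'),
  ('person_02', '2015-07-09 10:40:15');
""")
result = conn.execute("""
WITH m AS (
  SELECT Person,
         CAST(strftime('%Y', dt) AS INTEGER) * 12
         + CAST(strftime('%m', dt) AS INTEGER) AS midx
  FROM events
), d AS (
  SELECT Person,
         midx - LAG(midx) OVER (PARTITION BY Person ORDER BY midx) AS diff
  FROM m
)
SELECT Person, SUM(CASE WHEN diff = 1 THEN 1 ELSE 0 END) AS consecutive
FROM d
GROUP BY Person
ORDER BY Person
""").fetchall()
print(result)  # [('person_01', 1), ('person_02', 0)]
```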
|
MySQL: Count continuous months
|
[
"",
"mysql",
"sql",
"date",
""
] |
I have a table called `users` another table called `user_flags`, and another table called `flags`
The `users` table contains a `name` and `id`
The `flags` table contains a `name` and `id`
The `user_flags` contains a `user_id` and `flag_id`
If I want to get all the users which have any of the provided flag names, what would be the query? That is, if the query provides 'red' and 'blue' but a user is just associated with blue, we would return that User.
That is, if I have the following schema: <http://sqlfiddle.com/#!9/ac105> - And the flag names that I passed were `red` and `blue`, it would return the row for the User John, because John has a `red` associated flag.
|
You need to join the tables on their id values and use a where clause for the flag color name. Something like this:
```
select * from Users U
join UsersFlags UF on UF.user_id = U.id
join Flags F on F.id = UF.flag_id
where F.name = 'red'
```
|
Try this query:
```
SELECT u.name
FROM users u INNER JOIN user_flags uf
ON u.id = uf.user_id
INNER JOIN flags f
ON uf.flag_id = f.id
WHERE F.name = 'red'
```
[**SQL Fiddle**](http://sqlfiddle.com/#!9/ac105/3)
Note that in the fiddle the table and column names are slightly different from what you used in your original question.
|
Select records from table A that have at least X records on table B
|
[
"",
"sql",
"postgresql",
""
] |
I want to update `OFFICE_ID,OFFICE_TYPE` of `FA_SUBLEDGER_MST` table, by using `OFFICE_ID,OFFICE_TYPE` from `EMPLOYEE_MST` table based on following conditions:
```
EMPLOYEE_MST.SL_CODE = FA_SUBLEDGER_MST.SL_CODE
EMPLOYEE_MST.OFFICE_ID <> SL.OFFICE_ID
OR EMPLOYEE_MST.OFFICE_TYPE <> SL.OFFICE_TYPE
AND EMPLOYEE_MST.OFFICE_TYPE != 'DHB'
```
I tried this query:
```
UPDATE FA_SUBLEDGER_MST sl
SET
(
sl.OFFICE_ID,sl.OFFICE_TYPE
)
=
(SELECT emp.OFFICE_ID,
emp.OFFICE_TYPE
FROM EMPLOYEE_MST emp
WHERE emp.OFFICE_ID<>sl.OFFICE_ID
OR emp.OFFICE_TYPE<>sl.OFFICE_TYPE
AND sl.SL_CODE = emp.SL_CODE
AND emp.OFFICE_TYPE!='DHB'
)
WHERE sl.STATUS = 'A'
AND EXISTS
(SELECT 1
FROM EMPLOYEE_MST emp
WHERE emp.OFFICE_ID<>sl.OFFICE_ID
OR emp.OFFICE_TYPE<>sl.OFFICE_TYPE
AND emp.SL_CODE=sl.SL_CODE
AND emp.OFFICE_TYPE!='DHB'
);
```
And this:
```
UPDATE FA_SUBLEDGER_MST
SET
(
OFFICE_ID,
OFFICE_TYPE
)
=
(SELECT OFFICE_ID,
OFFICE_TYPE
FROM EMPLOYEE_MST
WHERE FA_SUBLEDGER_MST.OFFICE_ID != EMPLOYEE_MST.OFFICE_ID
OR FA_SUBLEDGER_MST.OFFICE_TYPE! =EMPLOYEE_MST.OFFICE_TYPE
AND FA_SUBLEDGER_MST.SL_CODE = EMPLOYEE_MST.SL_CODE
AND EMPLOYEE_MST.OFFICE_TYPE! ='DHB'
) ;
```
But both caused this error:
> SQL Error: ORA-01427: single-row subquery returns more than one row
> 01427. 00000 - "single-row subquery returns more than one row
I also tried:
```
UPDATE
(SELECT em.OFFICE_ID emoffid,
SL.OFFICE_ID sloffid,
em.OFFICE_TYPE emofftype,
SL.OFFICE_TYPE slemofftype,
SL.STATUS
FROM EMPLOYEE_MST em
JOIN FA_SUBLEDGER_MST SL
ON em.SL_CODE =SL.SL_CODE
WHERE em.OFFICE_ID<>SL.OFFICE_ID
OR em.OFFICE_TYPE <>SL.OFFICE_TYPE
AND em.OFFICE_TYPE!='DHB'
AND SL.STATUS ='A'
) t
SET t.sloffid =t.emoffid
WHERE t.emoffid<>t.sloffid
OR t.emofftype <>t.slemofftype
AND t.emofftype!='DHB'
AND t.STATUS ='A';
```
This caused an error:
> SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table
> 01779. 00000 - "cannot modify a column which maps to a non key-preserved table"
> \*Cause: An attempt was made to insert or update columns of a join view which map to a non-key-preserved table.
> \*Action: Modify the underlying base tables directly.
Both the tables contain more than 100,000 records and `OFFICE_ID` can be `1,2,3,4,5` and `OFFICE_TYPE` can be `'DE','DI','HO'`, hence they repeat frequently.
I need an Oracle query to do so. Can't use a procedure.
Would be thankful for your help.
|
**Finally found a solution:**
```
merge into FA_SUBLEDGER_MST s1
using (
select SL_CODE, OFFICE_ID, OFFICE_TYPE
from EMPLOYEE_MST
where OFFICE_TYPE != 'DHB'
) emp
on (s1.SL_CODE = emp.SL_CODE)
when matched then update set
s1.OFFICE_ID = emp.OFFICE_ID,
s1.OFFICE_TYPE = emp.OFFICE_TYPE
where s1.STATUS = 'A'
and (s1.OFFICE_ID <> emp.OFFICE_ID
or s1.OFFICE_TYPE <> emp.OFFICE_TYPE)
;
```
Original Thread: [Updating row of one table using rows of another table based on specific conditions *on https://dba.stackexchange.com/*](https://dba.stackexchange.com/questions/117245/updating-row-of-one-table-using-rows-of-another-table-based-on-specific-conditio)
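For engines without `MERGE`, the same effect can be had with a classic correlated-subquery `UPDATE` guarded by `EXISTS` — essentially the OP's first attempt, which works provided `EMPLOYEE_MST` has at most one qualifying row per `SL_CODE` (otherwise the scalar subqueries fail with exactly the ORA-01427 the question hit). An illustrative toy check on SQLite via Python's `sqlite3`, with invented data:

```python
import sqlite3

# MERGE-equivalent via correlated UPDATE + EXISTS. Assumes at most one
# EMPLOYEE_MST row per SL_CODE.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE FA_SUBLEDGER_MST (SL_CODE INTEGER, OFFICE_ID INTEGER, OFFICE_TYPE TEXT, STATUS TEXT);
CREATE TABLE EMPLOYEE_MST (SL_CODE INTEGER, OFFICE_ID INTEGER, OFFICE_TYPE TEXT);
INSERT INTO FA_SUBLEDGER_MST VALUES (1, 1, 'DE', 'A'), (2, 2, 'DI', 'A'), (3, 3, 'HO', 'X');
INSERT INTO EMPLOYEE_MST VALUES (1, 9, 'HO'), (2, 2, 'DI'), (3, 7, 'DE');
""")
conn.execute("""
UPDATE FA_SUBLEDGER_MST
SET OFFICE_ID = (SELECT e.OFFICE_ID FROM EMPLOYEE_MST e
                 WHERE e.SL_CODE = FA_SUBLEDGER_MST.SL_CODE AND e.OFFICE_TYPE <> 'DHB'),
    OFFICE_TYPE = (SELECT e.OFFICE_TYPE FROM EMPLOYEE_MST e
                   WHERE e.SL_CODE = FA_SUBLEDGER_MST.SL_CODE AND e.OFFICE_TYPE <> 'DHB')
WHERE STATUS = 'A'
  AND EXISTS (SELECT 1 FROM EMPLOYEE_MST e
              WHERE e.SL_CODE = FA_SUBLEDGER_MST.SL_CODE
                AND e.OFFICE_TYPE <> 'DHB'
                AND (e.OFFICE_ID <> FA_SUBLEDGER_MST.OFFICE_ID
                     OR e.OFFICE_TYPE <> FA_SUBLEDGER_MST.OFFICE_TYPE))
""")
result = conn.execute(
    "SELECT SL_CODE, OFFICE_ID, OFFICE_TYPE FROM FA_SUBLEDGER_MST ORDER BY SL_CODE"
).fetchall()
print(result)  # row 1 updated; row 2 already matched; row 3 is not STATUS 'A'
```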
|
TRY THIS:
```
WHERE (FA_SUBLEDGER_MST.OFFICE_ID != EMPLOYEE_MST.OFFICE_ID AND
FA_SUBLEDGER_MST.OFFICE_TYPE !=EMPLOYEE_MST.OFFICE_TYPE) --AND will be converted to OR by SQL Engine
AND FA_SUBLEDGER_MST.SL_CODE = EMPLOYEE_MST.SL_CODE
AND EMPLOYEE_MST.OFFICE_TYPE! ='DHB'
AND sl.STATUS = 'A'
```
|
Updating row of one table using rows of another table based on specific conditions
|
[
"",
"sql",
"database",
"oracle",
"oracle11g",
""
] |
I have a PL/SQL procedure which takes an input variable. This variable (my_plsql_var) decides the WHERE clause of my SELECT statement. If the variable is A, I need one specific WHERE clause; if it is B, another; and the same goes for C. I have tried some queries but they do not work. The closest I have gotten is this, but there seem to be syntax errors and I am not even sure the query will produce what I need.
```
SELECT ID,
CASE(CAST WHEN my_col1 in ('A') and my_col2 = 'A' then 'A GROUP'
WHEN my_col1 in ('B') and my_col2 = 'B' then 'B GROUP'
WHEN my_col1 in ('C') and my_col2 = 'C' then 'C GROUP'
else null
end as varachar2)) as my_awesome_col
FROM
my_table
WHERE
id= 100 and
name = 'Smith' and
CASE (WHEN my_plsql_var = 'A' then my_col1 in ('A') and my_col2 = 'A'
WHEN my_plsql_var = 'B' then my_col1 in ('B') and my_col2 = 'B' and my_special_col = 'B'
WHEN my_plsql_var = 'C' then my_col1 in ('C') and my_col2 = 'C'
end as varachar2)
```
|
Can this not just be simplified to this?
```
SELECT ID,
       my_plsql_var || ' GROUP' AS group_name
FROM my_table
WHERE ID = 100
AND NAME = 'Smith'
AND (
(my_plsql_var = 'A' AND my_col1 IN ('A') AND my_col2 = 'A')
OR (my_plsql_var = 'B' AND my_col1 IN ('B') AND my_col2 = 'B' AND my_special_col = 'B')
OR (my_plsql_var = 'C' AND my_col1 IN ('C') AND my_col2 = 'C')
);
```
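Here is an illustrative check of the OR-of-ANDs pattern, parameterized from Python and run on SQLite via `sqlite3` (sample rows are invented; note how the B branch's extra predicate filters one of the two B rows):

```python
import sqlite3

# Conditional WHERE via OR-of-ANDs, with the variable bound as a parameter.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (ID INTEGER, NAME TEXT, my_col1 TEXT, my_col2 TEXT, my_special_col TEXT);
INSERT INTO my_table VALUES
  (100, 'Smith', 'A', 'A', 'X'),
  (100, 'Smith', 'B', 'B', 'B'),
  (100, 'Smith', 'B', 'B', 'X');  -- fails the extra my_special_col = 'B' test
""")

def rows_for(var):
    return conn.execute("""
        SELECT my_col1, my_special_col FROM my_table
        WHERE ID = 100 AND NAME = 'Smith'
          AND ((? = 'A' AND my_col1 = 'A' AND my_col2 = 'A')
            OR (? = 'B' AND my_col1 = 'B' AND my_col2 = 'B' AND my_special_col = 'B')
            OR (? = 'C' AND my_col1 = 'C' AND my_col2 = 'C'))
    """, (var, var, var)).fetchall()

result_a = rows_for('A')   # [('A', 'X')]
result_b = rows_for('B')   # only the row that also has my_special_col = 'B'
print(result_a, result_b)
```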
|
```
Hello you need to build the SELECT Clause dynamically based upon your input. Below is the example for this.
CREATE OR REPLACE PROCEDURE TEST1_DYN(
p_in IN VARCHAR2,
p_ref OUT sys_refcursor )
AS
lv_select LONG;
BEGIN
lv_select:='SELECT ID,
(CASE WHEN my_col1 in ('''||p_in||''')'|| 'and my_col2 = '''||p_in||''''||' then '||''''||p_in||' GROUP'''||
' else null
end) my_awesome_col
FROM
my_table
WHERE
id= 100 and
name = ''Smith'' and
my_plsql_var = '||''''||p_in||''''||' then my_col1 in ('||''''||p_in||''''||') and my_col2 = '||''''||p_in||'''
';
dbms_output.put_line(lv_select);
OPEN p_ref for lv_select;
END;
```
|
Conditional Where Clause Oracle SQL based on procedure input varaible
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a table which looks like this (`SubCodeReport3`) with a column called `Rank`. For each row I need to look at the `Rank` value and, based on it, unpivot that many SubCode columns (SubCode1, SubCode2, SubCode3, etc.) into rows.
[](https://i.stack.imgur.com/BuzlV.png)
As seen above for a
Rank 2 Subcode1 & SubCode2 have been unpivoted
Rank 1 SubCode1 has been unpivoted
Rank 3 Subcode1, Subcode2 & SubCode3 have been unpivoted.
There will never be a case where the Rank is higher than the number of available SubCode columns. Any ideas?
Cursor through the rows?
Here is some SQL to create this sample table
```
USE TESTDB
GO
/****** Object: Table [dbo].[SubCodeReport3] Script Date: 10/6/2015 2:27:49 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[SubCodeReport3](
[ S-ID] [varchar](50) NULL,
[Rank] [smallint] NULL,
[AGE] [varchar](50) NULL,
[SchoolCode] [varchar](50) NULL,
[SubCode1] [varchar](50) NULL,
[SubCode2] [varchar](50) NULL,
[SubCode3] [varchar](50) NULL,
[SubCode4] [varchar](50) NULL,
[SubCode5] [varchar](50) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT [dbo].[SubCodeReport3] ([ S-ID], [Rank], [AGE], [SchoolCode], [SubCode1], [SubCode2], [SubCode3], [SubCode4], [SubCode5]) VALUES (N'25', 1, N'23', N'KEN-009', N'ENG', N'MAT', N'ZOO', N'', N'')
INSERT [dbo].[SubCodeReport3] ([ S-ID], [Rank], [AGE], [SchoolCode], [SubCode1], [SubCode2], [SubCode3], [SubCode4], [SubCode5]) VALUES (N'26', 1, N'21', N'DLK-009', N'ENG', N'', N'', N'', N'')
INSERT [dbo].[SubCodeReport3] ([ S-ID], [Rank], [AGE], [SchoolCode], [SubCode1], [SubCode2], [SubCode3], [SubCode4], [SubCode5]) VALUES (N'27', 2, N'25', N'DLK-006', N'MAT', N'ENG', N'STAT', N'', N'')
INSERT [dbo].[SubCodeReport3] ([ S-ID], [Rank], [AGE], [SchoolCode], [SubCode1], [SubCode2], [SubCode3], [SubCode4], [SubCode5]) VALUES (N'28', 1, N'21', N'HLI-005', N'ENG', N'', N'', N'', N'')
INSERT [dbo].[SubCodeReport3] ([ S-ID], [Rank], [AGE], [SchoolCode], [SubCode1], [SubCode2], [SubCode3], [SubCode4], [SubCode5]) VALUES (N'30', 3, N'22', N'INN-009', N'ENG', N'MAT', N'ZOO', N'GEO', N'')
```
|
Start with a CTE or derived table that does a full unpivot of the table, and adds a partitioned row\_number so that for each row in the original table, SubCode1 will be on row\_number 1, SubCode2 on row\_number 2, etc.
Then select from that CTE where row\_number is less than or equal to [Rank].
|
This is what worked for me.
```
WITH CodesReportCTE
(
[Number]
,[ S-ID]
,[Rank]
,[AGE]
,[SchoolCode]
,[Code]
)
AS
(
SELECT
ROW_NUMBER() over (PARTITION BY [ S-ID],[SchoolCode] ORDER BY [ S-ID],[SchoolCode]) AS Number
,[ S-ID]
,[Rank]
,[AGE]
,[SchoolCode]
,up.Code [Code]
FROM [dbo].[SubCodeReport3]
UNPIVOT
(
Code
for x in (SubCode1,SubCode2,SubCode3,SubCode4,SubCode5) ) up
WHERE up.Code <> ' '
)
SELECT
--[Number]
--,
[ S-ID]
,[Rank]
,[AGE]
,[SchoolCode]
,[Code]
FROM CodesReportCTE
WHERE Number <= [Rank]
```
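On engines without `UNPIVOT` (e.g. SQLite, MySQL), the same reshape can be expressed with `UNION ALL`, carrying the column position explicitly so it can be compared against the rank. An illustrative Python/SQLite check with a reduced three-column version of the table (`rnk` stands in for `[Rank]`; names are invented):

```python
import sqlite3

# UNION ALL unpivot with an explicit position column n, filtered by rank.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SubCodeReport3 (SID TEXT, rnk INTEGER, SubCode1 TEXT, SubCode2 TEXT, SubCode3 TEXT);
INSERT INTO SubCodeReport3 VALUES
  ('25', 1, 'ENG', 'MAT', 'ZOO'),
  ('27', 2, 'MAT', 'ENG', 'STAT');
""")
result = conn.execute("""
WITH unpiv AS (
  SELECT SID, rnk, 1 AS n, SubCode1 AS Code FROM SubCodeReport3
  UNION ALL SELECT SID, rnk, 2, SubCode2 FROM SubCodeReport3
  UNION ALL SELECT SID, rnk, 3, SubCode3 FROM SubCodeReport3
)
SELECT SID, Code FROM unpiv WHERE n <= rnk ORDER BY SID, n
""").fetchall()
print(result)  # [('25', 'ENG'), ('27', 'MAT'), ('27', 'ENG')]
```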
|
Conditional Unpivot SQL table
|
[
"",
"sql",
"sql-server",
"pivot",
"unpivot",
""
] |
If you have a composite clustered index say: `(ClientId, Date, OrderId, ProductId)`. (All fields are non-null)
Given the query:
`SELECT * FROM products WHERE ClientId = 33 AND OrderId = 4 AND ProductId = 2 ORDER BY Date`
Will this query take advantage of the covering index fully by having the `ORDER BY Date` or does it require that the `Date` field be in the WHERE clause?
I added tags of the main database engines, in case there is discrepancy between each.
|
For MySQL, your index can't be used. MySQL's indexes apply left->right:
```
(ClientId, Date, OrderId, ProductId)
```
since your query's `WHERE` involves `ClientId`, `OrderId` and `ProductId` but not `Date`, the index can't be used beyond the leading `ClientId` part - you'd have to include `Date` in the query as well. Note that the specific ordering of the fields in your query is irrelevant - it's whether they're all being used that counts:
So if your query's `where` has:
```
clientid -> usable
clientid, date -> usable
date, clientid -> usable, order of usage irrelevant
clientid, orderid -> not usable, missing date
clientid, orderid, productid -> also not usable, missing date
clientid, productid -> not usable, missing orderid, missing date
date -> not usable missing clientid
```
Note that this is for mysql only. Some other DB systems do not have this restriction.
|
It really depends on many things, like:
* how many rows are in the table?
* how much time it takes to get values from the index leaves?
* what is inside the `*` that you are selecting?
etc.
That's why they invented the "explain plan".
Use it when you want to know.
Covering Index with ORDER BY
|
[
"",
"mysql",
"sql",
"sql-server",
"database",
"oracle",
""
] |
I have a table very similar to the one below, except there are 20 of both the v and p values.
```
|--------------Table 1-------------|
| part_id | v1 | v2 | p1 | p2 |
| 1 | 250 | 8 | 1 | 2 |
| 2 | 1348 | 9 | 28 | 88 |
| 4094 | 580 | 230 | 207 | 726 |
| 7111 | 12 | 14 | 223 | 195 |
```
I need to join this table with two others, which contain dimension information.
```
|----Values----| |-------Parameters------|
| v_id | value | | p_id | description |
| 8 | 1 | | 1 | 'Weight (lbs)' |
| etc... | | etc... |
```
Current program:
```
proc sql;
create table table2 as
select t1.part_id
,t1.v1
,val.value
,t1.v2
,val1.value
,t1.p1
,par.description
,t1.p2
,par1.description
from table_1 t1
inner join values val
on val.v_id = t1.v1
inner join values val1
on val1.v_id = t1.v2
inner join parameters par
on par.p_id = t1.p1
inner join parameters par1
on par1.p_id = t1.p2;
quit;
```
Is there a way to join these tables together without using 40 inner joins?
|
Create a format from the values/parameter data set and then use an array within a data set to loop through and create the descriptions you need. Here's a sample for the parameter table that should get you started. I haven't tested this code though :).
```
data param_fmt;
set parameters;
fmtname='param_fmt';
start=p_id;
label=description;
run;
proc format cntlin=param_fmt;
run;
data want;
set have;
array p(*) p1-p3;
array p_desc(*) p_desc1-p_desc3;
do i=1 to dim(p);
p_desc(i) = put(p(i), param_fmt.);
end;
run;
```
|
## Formats
If your dimension tables are simple and small then convert them into formats.
```
data cntlin ;
set values ;
by v_id;
retain fmtname 'VALUES';
rename v_id = start value = label;
run;
proc format cntlin=cntlin;
run;
```
Then you do not even need to modify your input table. You can use it as is and just attach the formats to have the values or descriptions printed instead of the ids.
```
proc print data=table_1 ;
format v1-v2 values. p1-p2 parameters. ;
run;
```
If your dimension tables have more than two columns then turn each extra column into another format.
## Lookup
If the dimensions are too large for a format, then try using the KEY= option on set statements to lookup the dimension values.
```
data want ;
set table_1 ;
array _v v1-v2 ;
array _p p1-p2 ;
array values(2) ;
array descriptions (2) $50 ;
do _n_=1 to dim(_v);
v_id=_v(_n_) ;
if not (missing(v_id)) then set values key=v_id;
values(_n_) = value ;
p_id=_p(_n_) ;
if not (missing(p_id)) then set parameters key=p_id;
    descriptions(_n_) = parameter ;
output;
call missing(of value parameter);
end;
```
## Transform
You could also transform your TABLE\_1 to a tall instead of a wide format. Then you do not need so many joins.
```
data tall ;
set table_1 ;
array _v v1-v2 ;
array _p p1-p2 ;
do col=1 to dim(_v);
v_id=_v(col) ;
p_id=_p(col) ;
output;
end;
drop v1-v2 p1-p2 ;
run;
proc sql ;
  create table table_2 as
select a.*
, v.value
, p.description
from tall a
left join values v on a.v_id = v.v_id
  left join parameters p on a.p_id = p.p_id
;
quit;
```
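The tall-then-join idea carries over to plain SQL as well; here is an illustrative Python/SQLite sketch with a cut-down two-column version of the tables (names invented) — one lookup join instead of one join per wide column:

```python
import sqlite3

# Wide -> tall reshape via UNION ALL, then a single lookup join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_1 (part_id INTEGER, v1 INTEGER, v2 INTEGER);
CREATE TABLE vals (v_id INTEGER, value INTEGER);
INSERT INTO table_1 VALUES (1, 8, 9);
INSERT INTO vals VALUES (8, 100), (9, 200);
""")
result = conn.execute("""
WITH tall AS (
  SELECT part_id, 1 AS col, v1 AS v_id FROM table_1
  UNION ALL SELECT part_id, 2, v2 FROM table_1
)
SELECT t.part_id, t.col, v.value
FROM tall t LEFT JOIN vals v ON t.v_id = v.v_id
ORDER BY t.part_id, t.col
""").fetchall()
print(result)  # [(1, 1, 100), (1, 2, 200)]
```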
|
Can I reduce the amount of joins used in this query?
|
[
"",
"sql",
"oracle",
"sas",
""
] |
I have this table (500,000 rows):
```
CREATE TABLE IF NOT EXISTS `listings` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`type` tinyint(1) NOT NULL DEFAULT '1',
`hash` char(32) NOT NULL,
`source_id` int(10) unsigned NOT NULL,
`link` varchar(255) NOT NULL,
`short_link` varchar(255) NOT NULL,
`cat_id` mediumint(5) NOT NULL,
`title` mediumtext NOT NULL,
`description` mediumtext,
`content` mediumtext,
`images` mediumtext,
`videos` mediumtext,
`views` int(10) unsigned NOT NULL,
`comments` int(11) DEFAULT '0',
`comments_update` int(11) NOT NULL DEFAULT '0',
`editor_id` int(11) NOT NULL DEFAULT '0',
`auther_name` varchar(255) DEFAULT NULL,
`createdby_id` int(10) NOT NULL,
`createdon` int(20) NOT NULL,
`editedby_id` int(10) NOT NULL,
`editedon` int(20) NOT NULL,
`deleted` tinyint(1) NOT NULL,
`deletedon` int(20) NOT NULL,
`deletedby_id` int(10) NOT NULL,
`deletedfor` varchar(255) NOT NULL,
`published` tinyint(1) NOT NULL DEFAULT '1',
`publishedon` int(11) unsigned NOT NULL,
`publishedby_id` int(10) NOT NULL,
PRIMARY KEY (`id`),
KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```
I'm thinking of making each query filter by `publishedon between x and y` (showing only one month of records across the site).
At the same time, I want to add `published`, `cat_id` and `source_id` to the WHERE clause along with `publishedon`,
something like this:
```
SELECT * FROM listings
WHERE (publishedon BETWEEN 1441105258 AND 1443614458)
AND (published = 1)
AND (cat_id in(1,2,3,4,5))
AND (source_id in(1,2,3,4,5))
```
That query is OK and fast so far without indexing, but when I tried to use `order by publishedon` it became too slow, so I used this index:
```
CREATE INDEX `listings_pcs` ON listings(
`publishedon` DESC,
`published` ,
`cat_id` ,
`source_id`
)
```
It worked and the `order by publishedon` became fast. Now I want to `order by views` like this:
```
SELECT * FROM listings
WHERE (publishedon BETWEEN 1441105258 AND 1443614458)
AND (published = 1)
AND (cat_id in(1,2,3,4,5))
AND (source_id in(1,2,3,4,5))
ORDER BY views DESC
```
this is the explanation
[](https://i.stack.imgur.com/MMmat.png)
This query is too slow because of `ORDER BY views DESC`,
so then I tried dropping the old index and adding this:
```
CREATE INDEX `listings_pcs` ON listings(
`publishedon` DESC,
`published` ,
`cat_id` ,
`source_id`,
`views` DESC
)
```
It's too slow as well.
What if I use just a single index on `publishedon`?
What about a single index on `cat_id, source_id, views, publishedon`?
I can change the query's dependencies (e.g. `publishedon` within one month) if I find another indexing method that depends on other columns.
What about making an index on (`cat_id`, `source_id`, `publishedon`, `published`)? But in some cases I will use `source_id` only.
What is the best indexing scheme for this table?
|
One important general note as to why your query isn't getting any faster despite your attempts is that `DESC` on indexes is not currently supported on MySQL. See this [SO thread](https://stackoverflow.com/a/10130260/1935338), and [the source](https://wikivs.com/wiki/MySQL_vs_PostgreSQL) from which it comes.
In this case, your largest problem is in the sheer size of your record. If the engine decides it wouldn't really be faster to use an index, then it won't.
You have a few options, and all are actually pretty decent and can probably help you see significant improvement.
### A note on SQL
First, I want to make a quick note about indexing in SQL. While I don't think it's the solution for your woes, it was your main question, and can help.
It usually helps me to think about indexing in three different buckets. The ***absolutely***, the **maybe**, and the *never*. You certainly don't have anything in your indexing that's in the *never* column, but there are some I would consider "**maybe**" indexes.
***absolutely***: This is your primary key and any foreign keys. It is also any key you will reference on a very regular basis to pull a small set of data from the massive data you have.
**maybe**: These are columns which, while you may reference them regularly, are not really referenced by themselves. In fact, through analysis and using `EXPLAIN` as [@Machavity](https://stackoverflow.com/a/32923120/1935338) recommends in his answer, you may find that by the time these columns are used to strip out fields, there aren't that many fields anyway. An example of a column that would solidly be in this pile for me would be the `published` column. Keep in mind that every `INDEX` adds to the work your queries need to do.
*Also:* Composite keys are a good choice when you're regularly searching for data based on two different columns. More on that later.
### Options, options, options...
There are a number of options to consider, and each one has some drawbacks. Ultimately I would consider each of these on a case-by-case basis as I don't see any of these to be a silver bullet. Ideally, you'd test a few different solutions against your current setting and see which one runs the fastest using a nice scientific test.
1. **Split your SQL table into two or more separate tables.**
This is one of the few times where, despite the number of columns in your table, I wouldn't rush to try to split your table into smaller chunks. If you decided to split it into smaller chunks, however, I'd argue that your `[action]edon`, `[action]edby_id`, and `[action]ed` could easily be put into another table, `actions`:
```
+-----------+-------------+------+-----+-------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+-------------------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| action_id | int(11) | NO | | NULL | |
| action | varchar(45) | NO | | NULL | |
| date | datetime | NO | | CURRENT_TIMESTAMP | |
| user_id | int(11) | NO | | NULL | |
+-----------+-------------+------+-----+-------------------+----------------+
```
The downside to this is that it does not allow you to ensure there is only one creation date without a `TRIGGER`. The upside is that when you don't have to sort as many columns with as many indexes when you're sorting by date. Also, it allows you to sort not only be `created`, but also by all of your other actions.
*Edit: As requested, here is a sample sorting query*
```
SELECT * FROM listings
INNER JOIN actions ON actions.listing_id = listings.id
WHERE (actions.action = 'published')
AND (listings.published = 1)
AND (listings.cat_id in(1,2,3,4,5))
AND (listings.source_id in(1,2,3,4,5))
AND (actions.actiondate between 1441105258 AND 1443614458)
ORDER BY listings.views DESC
```
*Theoretically, it should cut down on the number of rows you're sorting against because it's only pulling relevant data. I don't have a dataset like yours so I can't test it right now!*
If you put a composite key on `actiondate` and `listings.id`, this should help to increase speed.
As I said, I don't think this is the best solution for you right now because I'm not convinced it's going to give you the maximum optimization. This leads me to my next suggestion:
2. **Create a month field**
I used [this nifty tool](http://www.timestampconvert.com/?go2=true&offset=4×tamp=1443614458&Submit=%20%20%20%20%20%20Convert%20to%20Date%20%20%20%20%20%20) to confirm what I thought I understood of your question: You are sorting by month here. Your example is specifically looking between September 1st and September 30th, inclusive.
So another option is for you to split your integer function into a `month`, `day`, and `year` field. You can still have your timestamp, but timestamps aren't all that great for searching. Run an `EXPLAIN` on even a simple query and you'll see for yourself.
That way, you can just index the month and year fields and do a query like this:
```
SELECT * FROM listings
WHERE (publishedmonth = 9)
AND (publishedyear = 2015)
AND (published = 1)
AND (cat_id in(1,2,3,4,5))
AND (source_id in(1,2,3,4,5))
ORDER BY views DESC
```
Slap an `EXPLAIN` in front and you should see massive improvements.
Because you're planning on referring to a month and a day, you may want to add a composite key against month and year, rather than a key on both separately, for added gains.
***Note**: I want to be clear, this is not the "correct" way to do things. It is convenient, but denormalized. If you want the correct way to do things, you'd adapt something like [this link](http://explainextended.com/2009/07/01/overlapping-ranges-mysql/) but I think that would require you to seriously reconsider your table, and I haven't tried anything like this, having lacked the need, and, frankly, will, to brush up on my geometry. I think it's a little overkill for what you're trying to do.*
3. **Do your heavy sorting elsewhere**
This was hard for me to come to terms with because I like to do things the "SQL" way wherever possible, but that is not always the best solution. Heavy computing, for example, is best done using your programming language, leaving SQL to handle relationships.
The former CTO of Digg sorted using PHP instead of MySQL and received a [4,000% performance increase](http://highscalability.com/blog/2010/3/23/digg-4000-performance-increase-by-sorting-in-php-rather-than.html). You're probably not scaling out to this level, of course, so the performance trade-offs won't be clearcut unless you test it out yourself. Still, the concept is sound: the database is the bottleneck, and computer memory is dirt cheap by comparison.
There are doubtless a lot more tweaks that can be done. Each of these has a drawback and requires some investment. The best answer is to test two or more of these and see which one helps you get the most improvement.
|
This query:
```
SELECT *
FROM listings
WHERE (publishedon BETWEEN 1441105258 AND 1443614458) AND
(published = 1) AND
(cat_id in (1,2,3,4,5)) AND
(source_id in (1,2,3,4,5));
```
Is hard to optimize with only indexes. The best index is one that starts with `published` and then has the other columns -- it is not clear what their order should be. The reason is because all but `published` are not using `=`.
Because your performance problem is on a sort, that suggests that lots of rows are being returned. Typically, an index is used to satisfy the `WHERE` clause before the index can be used for the `ORDER BY`. That makes this hard to optimize.
Suggestions . . . None are that great:
* If you are going to access the data by month, then you might consider partitioning the data by month. That will make the query without the `ORDER BY` faster, but won't help the `ORDER BY`.
* Try various orders of columns after `published` in the index. You might find the most selective column(s). But, once again, this speeds the query before the sorting.
* Think about ways that you can structure the query to have more equality conditions in the `WHERE` clause or to return a smaller set of data.
* (Not really recommended) Put an index on `published` and the ordering column. Then use a subquery to fetch the data. Put the inequality conditions (`IN` and so on) in the outer query. The subquery will use the index for sorting and then filter the results.
The reason the last is not recommended is because SQL (and MySQL) do not guarantee the ordering of results from a subquery. However, because MySQL materializes subqueries, the results really are in order. I don't like using undocumented side effects, which can change from version to version.
|
MySQL indexes - what are the best practices according to this table and queries
|
[
"",
"mysql",
"sql",
"indexing",
"query-optimization",
""
] |
In my company's front-end system the date shows up "properly", but when I query the database from Embarcadero Rapid SQL I see the date as below (database column).
It's a problem for me, because in our front-end view we are not able to group results etc., whereas I can do it in Embarcadero.
But I need to understand the logic behind this date "transformation".
Any thoughts?
Of course I could map it, since I figured out each month's code, but without understanding it I will have to do this for every new month.
```
front end database
01/04/2015 00:000 1427842800000
01/04/2015 00:000 1427842800000
01/05/2015 00:000 1430434800000
01/05/2015 00:000 1430434800000
01/05/2015 00:000 1430434800000
01/06/2015 00:000 1433113200000
01/06/2015 00:000 1433113200000
01/07/2015 00:000 1435705200000
01/07/2015 00:000 1435705200000
01/08/2015 00:000 1438383600000
01/08/2015 00:000 1438383600000
```
**EDIT:**
Thank you for your response.
You are close to the answer.
Is there any function or method in SQL/Excel to convert this timestamp to a date as we usually see it?
I also checked `04/01/2015 @ 12:00am (UTC) == 1427846400.` and got the same results.
Not sure why this is happening, as my database shows 1427842800000.
|
```
DECLARE @TS bigint
Set @TS ='1427842800000'
SELECT DATEADD(MINUTE,60,DATEADD(MILLISECOND, @TS % 1000, DATEADD(SECOND, @TS / 1000, '19700101')))
```
**Edit**: to Account for Timezone
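As a sanity check outside the database, the same millisecond-epoch conversion can be sketched in Python (the 60-minute shift mirrors the timezone adjustment in the T-SQL above):

```python
from datetime import datetime, timedelta

ts_ms = 1427842800000  # value seen in the database

# Milliseconds since the Unix epoch -> naive UTC datetime...
utc = datetime(1970, 1, 1) + timedelta(milliseconds=ts_ms)

# ...then shifted by 60 minutes, mirroring DATEADD(MINUTE, 60, ...).
local = utc + timedelta(minutes=60)

print(utc)    # 2015-03-31 23:00:00
print(local)  # 2015-04-01 00:00:00  (the front end's 01/04/2015)
```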
|
I'm not sure I understand your question.
In the database you have a date stored as a timestamp, i.e. the number of seconds since 1/1/1970. In your frontend, the date is displayed as we usually see a date.
See: <http://www.unixtimestamp.com/>; as I'm writing this answer the timestamp is `1444122272`.
**EDIT** : by the way, your frontend date must be in a specific timezone because `04/01/2015 @ 12:00am (UTC) == 1427846400`.
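A quick arithmetic check of that timezone observation (values taken from the question):

```python
# The database stores 1427842800000 ms, while 04/01/2015 @ 12:00am UTC
# is 1427846400 s; the difference tells us the stored offset.
stored_s = 1427842800000 // 1000
utc_midnight_s = 1427846400

offset_s = utc_midnight_s - stored_s
print(offset_s / 3600)  # 1.0 -> the stored value is midnight in a UTC+1 timezone
```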
|
Date coding issue
|
[
"",
"mysql",
"sql",
"sql-server",
"date",
"datetime",
""
] |
I'm having issues with what I assumed would be a simple problem, but googling isn't helping a great deal; possibly I'm just bad at searching for the right terms.
```
SELECT ORDER_NUMB, CUSTOMER_NUMB, ORDER_DATE
FROM ORDERS
WHERE FORMAT(ORDER_DATE, 'DD-MMM-YYYY') = '07-JUN-2000';
```
It tells me I am using an invalid identifier. I have tried using `MON` instead of `MMM`, but that doesn't help either.
Unsure if it makes any difference but I am using Oracle SQL Developer.
|
There are multiple issues related to your **DATE** usage:
> WHERE FORMAT(ORDER\_DATE, 'DD-MMM-YYYY') = '07-JUN-2000';
1. **FORMAT** is not an **Oracle** supported built-in function.
2. Never compare a **STRING** with a **DATE**. You might just get lucky, but otherwise you force Oracle to do an **implicit data type conversion** based on your **locale-specific NLS settings**. You should avoid that and always use **TO\_DATE** to explicitly convert the string to a date.
```
WHERE ORDER_DATE = TO_DATE('07-JUN-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE=ENGLISH');
```
3. When you are dealing **only with a date, without the time portion**, it is better to use an **ANSI DATE literal**.
```
WHERE ORDER_DATE = DATE '2000-06-07';
```
Read more about DateTime literals in [documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements003.htm#BABGIGCJ).
---
**Update**
I think it would be helpful to add some more information about **DATE**.
> Oracle does not store dates in the format you see. It stores it
> internally in a proprietary format in 7 bytes with each byte storing
> different components of the datetime value.
```
BYTE Meaning
---- -------
1 Century -- stored in excess-100 notation
2 Year -- " "
3 Month -- stored in 0 base notation
4 Day -- " "
5 Hour -- stored in excess-1 notation
6 Minute -- " "
7 Second -- " "
```
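A small Python sketch of that encoding (assuming the excess-100/excess-1 scheme described above; note that in practice `DUMP()` shows month and day as their natural values):

```python
# Hedged sketch of Oracle's internal 7-byte DATE encoding:
# century/year stored in excess-100, hour/minute/second in excess-1.
def oracle_date_bytes(year, month, day, hour=0, minute=0, second=0):
    return [
        year // 100 + 100,  # century byte, excess-100
        year % 100 + 100,   # year-within-century byte, excess-100
        month,              # natural value
        day,                # natural value
        hour + 1,           # excess-1
        minute + 1,         # excess-1
        second + 1,         # excess-1
    ]

# Matches what DUMP() reports for DATE '2000-06-07' (midnight).
print(oracle_date_bytes(2000, 6, 7))  # [120, 100, 6, 7, 1, 1, 1]
```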
Remember,
```
To display : Use TO_CHAR
Any date arithmetic/comparison : Use TO_DATE
```
**Performance Bottleneck:**
Let's say you have a regular **B-Tree index** on a date column. now, the following **filter predicate** will never use the index due to TO\_CHAR function:
```
WHERE TO_CHAR(ORDER_DATE, 'DD-MM-YYYY') = '07-06-2000';
```
So, the use of TO\_CHAR in the above query is counterproductive: it does not compare dates, nor does it deliver good performance.
**Correct method:**
The correct way to do the date comparison is:
```
WHERE ORDER_DATE = TO_DATE('07-JUN-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE=ENGLISH');
```
It will use the index on the ORDER\_DATE column, so it will perform much better. Also, it is comparing dates and not strings.
As I already said, when you do not have the *time* element in your date, then you could use **ANSI date literal** which is NLS independent and also less to code.
```
WHERE ORDER_DATE = DATE '2000-06-07';
```
It uses a fixed format **'YYYY-MM-DD'**.
|
try this:
```
SELECT ORDER_NUMB, CUSTOMER_NUMB, ORDER_DATE
FROM ORDERS
WHERE trunc(to_date(ORDER_DATE, 'DD-MMM-YYYY')) = trunc(to_date('07-JUN-2000'));
```
|
How to compare date in Oracle?
|
[
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
I have a sql query which is working perfectly to return the number of hours a worker is working in any particular job
```
SELECT worker.WorkerID, worker.fullname, job.JobCode, job.jobName, Count(slottime.FK_Hour) AS NUMBER_OF_HOURS_WORKED
FROM (
(
(job INNER JOIN slot
ON job.JobCode = slot.fk_JobCode
)
INNER JOIN (worker INNER JOIN slotworker ON worker.WorkerID = slotworker.FK_worker)
ON slot.SlotNo = slotworker.FK_SlotNo)
INNER JOIN slottime ON slot.SlotNo = slottime.FK_SlotNo
)
LEFT JOIN manager ON (job.JobCode = manager.FK_JobCode) AND (worker.WorkerID = manager.FK_WorkerID) GROUP BY worker.WorkerID, worker.fullname, job.JobCode, job.jobName, manager.is_the_boss
```
However, not all workers are equal. Workers who are allocated as managers are paid an additional 2 hours per job.
the manager table is structured as **JobCode**, **WorkerID**, **is\_the\_boss**
**is\_the\_boss** simply records '1' in the event that that worker is allocated the manager position of that particular job.
How would I affect the count of slottime.FK\_Hour such that an additional 2 hours are added for each of the jobs where (CASE WHEN is\_the\_boss = 1) is true?
|
Try this below
```
SELECT worker.WorkerID, worker.fullname, job.JobCode, job.jobName, Case When manager.is_the_boss = 1 Then Count(slottime.FK_Hour) + 2 Else Count(slottime.FK_Hour) End AS NUMBER_OF_HOURS_WORKED
FROM (
(
(job INNER JOIN slot
ON job.JobCode = slot.fk_JobCode
)
INNER JOIN (worker INNER JOIN slotworker ON worker.WorkerID = slotworker.FK_worker)
ON slot.SlotNo = slotworker.FK_SlotNo)
INNER JOIN slottime ON slot.SlotNo = slottime.FK_SlotNo
)
LEFT JOIN manager ON (job.JobCode = manager.FK_JobCode) AND (worker.WorkerID = manager.FK_WorkerID) GROUP BY worker.WorkerID, worker.fullname, job.JobCode, job.jobName, manager.is_the_boss
```
|
Your query is rather hard to follow. Why have all those parentheses in the `from` clause (unless you are using MS Access and you have no choice)? It is simpler to just list the tables with the `join` criteria.
Solving your problem requires counting the number of job codes where the worker is the boss. You can do this with a conditional `count(distinct)`:
```
SELECT w.WorkerID, w.fullname, j.JobCode, j.jobName,
Count(st.FK_Hour) AS NUMBER_OF_HOURS_WORKED,
count(distinct case when m.is_the_boss then j.jobCode end) * 2 as ManagerHoursWorks
FROM job j INNER JOIN
slot s
ON j.JobCode = s.fk_JobCode INNER JOIN
slotworker sw
ON slot.SlotNo = sw.FK_SlotNo INNER JOIN
worker w
ON w.WorkerID = sw.FK_worker INNER JOIN
slottime st
ON s.SlotNo = st.FK_SlotNo LEFT JOIN
manager m
ON j.JobCode = m.FK_JobCode AND w.WorkerID = m.FK_WorkerID
GROUP BY w.WorkerID, w.fullname, j.JobCode, j.jobName;
```
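The conditional `count(distinct)` idea can be sanity-checked in plain Python (made-up sample rows; each distinct job where the worker is the boss adds 2 hours):

```python
from collections import defaultdict

# (worker, jobCode, is_the_boss) tuples as they might look after the joins;
# sample data is made up for illustration.
rows = [
    ("w1", "J1", True),
    ("w1", "J1", True),   # duplicate row caused by the slottime fan-out
    ("w1", "J2", False),
    ("w2", "J1", False),
]

boss_jobs = defaultdict(set)
for worker, job, is_boss in rows:
    if is_boss:
        boss_jobs[worker].add(job)  # like count(distinct case ...) per worker

# Two extra hours per distinct job where the worker is the boss.
manager_hours = {w: len(jobs) * 2 for w, jobs in boss_jobs.items()}
print(manager_hours)  # {'w1': 2}
```

The `set` is what makes the duplicate fan-out rows harmless, which is exactly the role of `distinct` in the SQL.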
|
MySQL: Add to Count with case... then
|
[
"",
"mysql",
"sql",
"count",
"case",
""
] |
In an oracle procedure, I need to insert data in EmployeeHeader table and then later insert the PK id of this table as the FK id of EmployeeDetails table. How can we achieve this?
```
INSERT INTO EmployeeHeader(
HEADER_PK_ID,
empNo
)
VALUES(
HEADER_PK_ID_SEQ.NEXTVAL,
'SOMETHING'
);
INSERT INTO EmployeeDetails (
DTLHEADER_PK_ID,
HEADER_FK_ID,
empname,
age
)
VALUES(
DTLHEADER_PK_ID_SEQ.NEXTVAL,
HEADER_PK_ID, -- (THIS NEEDS TO BE FETCHED FROM EmployeeHeader)
'SOMENAME',
21
);
```
|
Use the `RETURNING` clause of the INSERT statement:
```
DECLARE
nHeader_pk_id NUMBER;
BEGIN
INSERT INTO EmployeeHeader
(HEADER_PK_ID, EMPNO)
VALUES
(HEADER_PK_ID_SEQ.NEXTVAL, 'SOMETHING')
RETURNING HEADER_PK_ID INTO nHeader_pk_id;
INSERT INTO EmployeeDetails
(DTLHEADER_PK_ID, HEADER_FK_ID, EMPNAME, AGE)
VALUES
(DTLHEADER_PK_ID_SEQ.NEXTVAL, nHeader_pk_id, 'Somebody', 21);
END;
```
My personal preference is to use `ON INSERT` triggers to handle the population of primary key fields, in the following manner:
```
CREATE OR REPLACE TRIGGER EMPLOYEEHEADER_BI
BEFORE INSERT ON EMPLOYEEHEADER
FOR EACH ROW
BEGIN
IF :NEW.HEADER_PK_ID IS NULL THEN
:NEW.HEADER_PK_ID := HEADER_PK_ID_SEQ.NEXTVAL;
END IF;
END EMPLOYEEHEADER_BI;
CREATE OR REPLACE TRIGGER EMPLOYEEDETAILS_BI
BEFORE INSERT ON EMPLOYEEDETAILS
FOR EACH ROW
BEGIN
IF :NEW.DTLHEADER_PK_ID IS NULL THEN
:NEW.DTLHEADER_PK_ID := DTLHEADER_PK_ID_SEQ.NEXTVAL;
END IF;
END EMPLOYEEDETAILS_BI;
```
and the INSERT statements become:
```
DECLARE
nHeader_pk_id NUMBER;
nDtlheader_pk_id NUMBER;
BEGIN
INSERT INTO EmployeeHeader
(EMPNO) -- Note: PK field not mentioned - will be populated by trigger
VALUES
('SOMETHING')
RETURNING HEADER_PK_ID INTO nHeader_pk_id;
INSERT INTO EmployeeDetails
(HEADER_FK_ID, EMPNAME, AGE) -- Note: PK field not mentioned - will be populated by trigger
VALUES
(nHeader_pk_id, 'Somebody', 21)
RETURNING DTLHEADER_PK_ID INTO nDtlheader_pk_id;
END;
```
(I use the `IF pk_field IS NULL THEN` construct because I often need to copy data from production to development databases and wish to preserve any key values pulled from production to simplify debugging. If you don't have this requirement you can eliminated the `IS NULL` check and just assign the sequence's NEXTVAL directly to the column).
Done in this manner the application code doesn't need to know or care about which sequence is used to generate the primary key value for a particular table, and the primary key field is always going to end up populated.
Best of luck.
|
You can use `currval` in most cases:
```
select HEADER_PK_ID_SEQ.CURRVAL
from dual;
```
You might need to wrap the two inserts in a single transaction, if you want the values to be safe for concurrent inserts.
|
In oracle procedure how to fetch the PKId just added to be added as FK ID for next table
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm trying to figure out how to retrieve all rows with a ID which have a specific value in the other column.
The database looks like this:
[](https://i.stack.imgur.com/kZX56.png)
I want to retrieve those rows which have a VAL of 2 and the corresponding ID rows. So in this case this would give me all the ID 1 rows and the ID 3 row:
[](https://i.stack.imgur.com/lh5Tl.png)
|
One method is exists:
```
select t.*
from test t
where exists (select 1 from test t2 where t2.id = t.id and t2.val = 2);
```
|
You'll need a subquery (or a join, CTE, or derived table). A subquery is easiest to visualise:
```
Select * from Test where ID IN
(SELECT ID from Test where VAL = 2)
```
|
Compare rows with a column value
|
[
"",
"sql",
"database",
"sqlite",
""
] |
I need to create a sales summary based on data with an optional "Elected" flag, which is user controlled. The flag determines whether or not to group items belonging to the same class of item.
If none of the items within a class have the Elected flag set to 1, the items appear as individual rows in the summary. If one or more items in a class has the flag set to 1, the items within that class get grouped based on the first item that has the flag set (lowest item\_id).
For example:
```
+--------+------+-------+-------+-----+---------+
| ItemId | Item | Class | Price | Qty | Elected |
+--------+------+-------+-------+-----+---------+
| 1 | A1 | 1 | 25.00 | 2 | 1 |
| 2 | A2 | 1 | 20.00 | 3 | 0 |
| 3 | A3 | 1 | 25.00 | 4 | 0 |
| 4 | B1 | 2 | 44.00 | 6 | 0 |
| 5 | B2 | 2 | 41.00 | 7 | 0 |
| 6 | C1 | 3 | 44.00 | 5 | 1 |
| 7 | D1 | 4 | 32.00 | 9 | 0 |
| 8 | D2 | 4 | 30.00 | 8 | 1 |
| 9 | D3 | 4 | 30.00 | 2 | 1 |
+--------+------+-------+-------+-----+---------+
```
Should give the following summary:
```
+-------+------+--------------+-------------+-----+
| Class | Item | Total Price* | Unit Price^ | Qty |
+-------+------+--------------+-------------+-----+
| 1 | A1 | 70.00 | 35.00 | 2 |
| 2 | B1 | 44.00 | 7.33 | 6 |
| 2 | B2 | 41.00 | 5.85 | 7 |
| 3 | C1 | 44.00 | 8.80 | 5 |
| 4 | D2 | 92.00 | 11.50 | 8 |
+-------+------+--------------+-------------+-----+
* Total Price: SUM() within class
^ Unit Price: Total Price / Qty of elected item
```
In the above table, Class 1 is summarised via A1, Class 3 gets summarised via C1, and Class 4 gets summarised via D2. Class 2 does not get summarised at all and instead appears fully listed.
The query below is a hack via multiple self-joins, but it works. Is there a more efficient way to tackle the problem?
```
SELECT Sales.Class,
coalesce(SalesWithFlagName.Item, SalesWithoutFlag._Item) 'Item',
SUM(Sales.Price) 'Total Price',
SUM(Sales.Price) / coalesce(SUM(Sales.Qty), SUM(SalesWithoutFlag._Qty)) 'Unit Price', /* Ignore possible Div0 error */
COALESCE(SUM(Sales.Qty), SUM(SalesWithoutFlag._Qty)) 'Qty'
FROM Sales
/* Class with at least one flag set: Get first item with flag */
LEFT JOIN (
SELECT Class _Class,
Elected _Elected,
MIN(ItemId) _ItemId
FROM Sales
GROUP BY Class, Elected
) AS SalesWithFlag ON (
Sales.ItemId = SalesWithFlag._ItemId
AND Sales.Elected = 1
)
/* Get item name from first Left Join */
LEFT JOIN Sales SalesWithFlagName ON (
SalesWithFlag._ItemId IS NOT NULL
AND SalesWithFlag._ItemId = SalesWithFlagName.ItemId
)
/* Class with at least flag not set: Get all items without flag */
LEFT JOIN (
SELECT Class _Class,
ItemId _ItemId,
Item _Item,
Qty _Qty
FROM Sales s2
WHERE NOT EXISTS (SELECT * FROM Sales s3 WHERE s3.Elected = 1 AND s2.Class = s3.Class)
GROUP BY Class, ItemId, Item, Qty
) AS SalesWithoutFlag ON (
Sales.ItemId = SalesWithoutFlag._ItemId
AND Sales.Elected = 0
)
WHERE SalesWithFlag._ItemId IS NOT NULL
OR SalesWithoutFlag._ItemId IS NOT NULL
GROUP BY Sales.Class,
COALESCE(SalesWithFlagName.Item, SalesWithoutFlag._Item)
```
|
You need both individual rows and aggregates, so this is a task for a `Group Sum` and some additional logic:
```
WITH cte AS
(
SELECT *,
-- either the individual Price or the Group Sum
CASE
WHEN Elected = 1
THEN SUM(Price) OVER (PARTITION BY Class)
ELSE Price
END AS TotalPrice,
-- is there any Elected = 1 within the Class?
MAX(Elected) OVER (PARTITION BY Class) AS maxElected,
-- Find the first row with Elected = 1
ROW_NUMBER() OVER (PARTITION BY Class ORDER BY Elected DESC, ItemId) AS rn
FROM Sales
)
SELECT
class,
Item,
TotalPrice,
TotalPrice / Qty AS UnitPrice,
Qty
FROM cte
WHERE maxElected = 0 -- no Elected = 1
OR rn = 1 -- first row with Elected = 1
ORDER BY 1,2
;
```
See [Fiddle](http://sqlfiddle.com/#!3/ff640/1)
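To double-check the logic, here is the same group-sum/first-elected rule replayed in Python on the question's sample data:

```python
# Hedged sketch: replaying the CTE's logic on the question's rows.
rows = [  # (ItemId, Item, Class, Price, Qty, Elected)
    (1, "A1", 1, 25.0, 2, 1), (2, "A2", 1, 20.0, 3, 0), (3, "A3", 1, 25.0, 4, 0),
    (4, "B1", 2, 44.0, 6, 0), (5, "B2", 2, 41.0, 7, 0),
    (6, "C1", 3, 44.0, 5, 1),
    (7, "D1", 4, 32.0, 9, 0), (8, "D2", 4, 30.0, 8, 1), (9, "D3", 4, 30.0, 2, 1),
]

summary = []
for c in sorted({r[2] for r in rows}):
    grp = [r for r in rows if r[2] == c]
    if any(r[5] == 1 for r in grp):  # MAX(Elected) OVER (PARTITION BY Class) = 1
        # first Elected row by lowest ItemId, i.e. the rn = 1 row
        first = min((r for r in grp if r[5] == 1), key=lambda r: r[0])
        total = sum(r[3] for r in grp)  # SUM(Price) OVER (PARTITION BY Class)
        summary.append((c, first[1], total, first[4]))
    else:  # no Elected = 1: list every row individually
        summary.extend((c, r[1], r[3], r[4]) for r in grp)

print(summary)
# [(1, 'A1', 70.0, 2), (2, 'B1', 44.0, 6), (2, 'B2', 41.0, 7),
#  (3, 'C1', 44.0, 5), (4, 'D2', 92.0, 8)]
```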
|
You can select for each class min itemid which is elected and then it is simple grouping:
```
;with cte as(select *, (select isnull(min(itemid), t1.itemid)
from t t2
where t2.class = t1.class and t2.elected = 1) as parentid
from t t1)
select p.itemid,
p.item,
p.class,
p.qty,
sum(c.price) as price,
sum(c.price) / p.qty as unitprice
from cte c
join cte p on c.parentid = p.itemid
group by p.itemid, p.item, p.class, p.qty
```
Fiddle here <http://sqlfiddle.com/#!3/7aac9/6>
|
SQL - Group By based on optional flag
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a list of users (Field = UserName) that I got from a table (Users). Some are duplicates.
Where there is a duplicate record, I need to delete the most current record (CreatedOn) created.
Also, if it trips up on the ability to delete because there are 'relationships established to this user ID' in the database, I need it to skip to the next record and continue deleting.
How do I accomplish this??
|
Adding on from Diego's answer with a check that the UserName is not the oldest instance in the table.
```
DELETE U
FROM
    Users U
WHERE
    U.UserName IN
        (SELECT UserName FROM Users GROUP BY UserName HAVING COUNT(UserName) > 1)
AND
    U.CreatedOn !=
        (SELECT MIN(CreatedOn) FROM Users T1 WHERE T1.UserName = U.UserName);
```
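The intended behaviour (for each duplicated UserName, delete everything except its oldest row) can be sketched in Python with made-up data:

```python
# Hedged sketch: which rows the dedup should delete.
users = [  # (row_id, UserName, CreatedOn as a simple ordinal)
    (1, "alice", 10), (2, "alice", 20), (3, "alice", 30),
    (4, "bob", 15),
]

# Oldest CreatedOn per UserName (the row to keep).
oldest = {}
for _, name, created in users:
    oldest[name] = min(oldest.get(name, created), created)

names = [u[1] for u in users]
to_delete = [rid for rid, name, created in users
             if names.count(name) > 1 and created != oldest[name]]

print(to_delete)  # [2, 3] -> alice's two newer rows; bob is untouched
```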
|
You can try something like this:
```
Delete from Users
where nameuser in
(select nameuser from Users
Group by nameuser
Having count(nameuser) > 1)
```
|
Deleting Records Based on Criteria
|
[
"",
"sql",
"sql-delete",
""
] |
```
orders (
o_id INT AUTO_INCREMENT,
o_status TINYINT,
o_description VARCHAR(50)
)
orders_products (
op_id INT AUTO_INCREMENT,
op_order_id INT(11),
op_product_id INT(11),
op_price DECIMAL(19, 2)
)
```
How to select all orders that have ONLY products with id = 1 and id = 2.
Thank you and sorry from my English...
|
There are different ways to get the desired result, this utilizes conditional aggregation:
```
select *
from orders
where o_id in
(
select op_order_id
from orders_products
having count(case when op_product_id = 1 then 1 end) > 0 -- at least one row with 1
and count(case when op_product_id = 2 then 1 end) > 0 -- at least one row with 2
and count(case when op_product_id not in (1,2) then 1 end) = 0 -- no other value
)
```
Depending on indexes/selectivity `EXISTS`/`NOT EXISTS` might be faster:
```
select o_id
from orders as o
where exists (select *
from orders_products as op
where op.op_order_id = o.o_id
and op.op_product_id = 1) -- at least one row with 1
and exists (select *
from orders_products as op
where op.op_order_id = o.o_id
and op.op_product_id = 2) -- at least one row with 2
and not exists (select *
from orders_products as op
where op.op_order_id = o.o_id
and op.op_product_id not in (1,2)) -- no other value
```
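The rule both queries implement is simply "the order's product set equals {1, 2}"; here is a small Python sketch with made-up rows:

```python
# Hedged sketch: an order qualifies iff its product set is exactly {1, 2}.
orders_products = [  # (op_order_id, op_product_id)
    (1, 1), (1, 2),          # order 1: exactly {1, 2}      -> qualifies
    (2, 2),                  # order 2: product 1 missing    -> no
    (3, 1), (3, 2), (3, 5),  # order 3: has another product  -> no
]

products_by_order = {}
for order_id, product_id in orders_products:
    products_by_order.setdefault(order_id, set()).add(product_id)

qualifying = [o for o, prods in sorted(products_by_order.items()) if prods == {1, 2}]
print(qualifying)  # [1]
```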
|
You could first find all the distinct order and product combinations for product 1 or 2, and then look for orders that have both.
```
create table orders (o_id INT);
create table orders_products (op_order_id INT(11), op_product_id INT(11));
insert into orders values (1), (2);
insert into orders_products values (1, 1), (1, 2), (2, 2);
select o_id from (
select distinct o_id, op_product_id
from orders o
inner join orders_products op on op.op_order_id = o.o_id
where op.op_product_id in (1,2)
) main
group by o_id
having count(*) = 2
Result:
1
```
Another way to write the query could be like this:
```
select o_id
from orders o
where exists (select 1 from orders_products where op_order_id = o.o_id and op_product_id = 1)
and exists (select 1 from orders_products where op_order_id = o.o_id and op_product_id = 2)
```
|
sql select all if column has only two particular values
|
[
"",
"mysql",
"sql",
"select",
"foreign-keys",
"inner-join",
""
] |
I need to concatenate a real datetime reference day with a varchar value representing hours/mns informations.
I have read [documentation](https://technet.microsoft.com/en-us/library/ms186724%28v=sql.105%29.aspx) about DateTime operations but haven't find how to do this.
Here is my sample :
```
DECLARE @ref_day [datetime]
DECLARE @hours [varchar](15)
DECLARE @result [datetime]
SET @ref_day = '2015-10-06'
SET @hours = '07:30'
SET @result = ???
```
My expected result is a DateTime object looking like that :
```
@result = 2015/10/06 07:30:00.000
```
|
Try this:
```
DECLARE @ref_day [datetime]
DECLARE @hours [varchar](15)
DECLARE @result [datetime]
SET @ref_day = '2015-10-06'
SET @hours = '07:30'
SET @result = DATEADD(day, DATEDIFF(day, 0, @ref_day), @hours)
```
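Outside T-SQL, the same date-plus-time stitching is straightforward to sketch in Python:

```python
from datetime import datetime

ref_day = datetime(2015, 10, 6)   # @ref_day
hours = "07:30"                   # @hours

# Parse the time-of-day string and stitch it onto the reference day,
# mirroring DATEADD(day, DATEDIFF(day, 0, @ref_day), @hours).
t = datetime.strptime(hours, "%H:%M").time()
result = datetime.combine(ref_day.date(), t)

print(result)  # 2015-10-06 07:30:00
```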
|
Using [`DATETIMEFROMPARTS`](https://msdn.microsoft.com/en-us/library/hh213233.aspx) SQL Server 2012+:
**[LiveDemo](https://data.stackexchange.com/stackoverflow/query/372397)**
```
DECLARE @ref_day DATETIME;
DECLARE @hours VARCHAR(100);
DECLARE @result DATETIME;
SET @ref_day = '2015-10-06'
SET @hours = '07:30';
SELECT @result = DATETIMEFROMPARTS(YEAR(@ref_day),
MONTH(@ref_day),
DAY(@ref_day),
DATEPART(hh,@hours),
DATEPART(mi, @hours),
0,
0);
SELECT @result;
```
|
DateTime creation in T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"datetime",
""
] |
I have found the below query in one of our existing stored procedures. This query is used to fetch records between two datetime values.
```
SELECT office
FROM officebudget
WHERE officeid = @officeid
AND (
(CONVERT(DATE, DateFrom) BETWEEN @wkstdate AND @wkenddate)
OR (CONVERT(DATE, DateTo) BETWEEN @wkstdate AND @wkenddate));
```
I have re-written it as below,
```
SELECT office
FROM officebudget
WHERE officeid = @officeid
AND (
(
bkto.DateFrom >= @wkstdate
AND bkto.DateFrom <= @wkenddate
)
OR (bkto.DateTo >= @wkstdate
AND bkto.DateTo <= @wkenddate)
);
```
I got the same result in both cases, but I need to know: are there any scenarios where the two queries above will produce
different results?
(P.S: DateFrom, DateTo, @Wkstdate, @Wkenddate are `datetime` fields)
|
I suspect that you are refactoring your query to make it sargable and to use possible indexes on columns `DateFrom` and `DateTo`.
These will not produce the same results, because your rewritten query omits rows where the date part of the `DateFrom` or `DateTo` column value equals `wkenddate`. For example, say `wkenddate = '20151005'` and your column `DateFrom = '20151005 15:30'`. The first query will include this row, since both date parts are equal, while your second query will omit it, since `'20151005 15:30' > '20151005'`.
Consider these example:
```
DECLARE @t TABLE(d DATETIME)
INSERT INTO @t VALUES
('20151001 10:30'),
('20151004 10:30'),
('20151005 10:30')
DECLARE @wkstdate DATE = '20151001', @wkenddate DATE = '20151005'
SELECT * FROM @t WHERE CAST(d AS DATE) BETWEEN @wkstdate AND @wkenddate
SELECT * FROM @t WHERE d >= @wkstdate AND d <= @wkenddate
SELECT * FROM @t WHERE d >= @wkstdate AND d < DATEADD(dd, 1, @wkenddate)
```
Outputs:
```
2015-10-01 10:30:00.000
2015-10-04 10:30:00.000
2015-10-05 10:30:00.000
2015-10-01 10:30:00.000
2015-10-04 10:30:00.000
2015-10-01 10:30:00.000
2015-10-04 10:30:00.000
2015-10-05 10:30:00.000
```
You should rewrite as:
```
SELECT office
FROM officebudget
WHERE officeid = @officeid
AND (
(
bkto.DateFrom >= @wkstdate
AND bkto.DateFrom < dateadd(dd, 1 , @wkenddate)
)
OR (bkto.DateTo >= @wkstdate
AND bkto.DateTo < dateadd(dd, 1, @wkenddate))
);
```
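The boundary behaviour is easy to verify with a tiny Python check (same dates as the example above):

```python
from datetime import datetime, timedelta

wkenddate = datetime(2015, 10, 5)    # date-only upper boundary
row = datetime(2015, 10, 5, 15, 30)  # a DateFrom value with a time part

# The inclusive upper bound drops same-day rows with a time component...
print(row <= wkenddate)                      # False -> row excluded
# ...while the half-open form `< end + 1 day` keeps them.
print(row < wkenddate + timedelta(days=1))   # True  -> row included
```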
|
The difference is that in the first query you use a function (CONVERT) on the filtered columns (DateFrom and DateTo), so the engine cannot use an index on them if one exists, and the first query would therefore tend to be slower.
In the second query, you entered the date manually, like this: `2015-09-30`.
It is better to enter it like this: `20150930`, because this last format does not depend on the server's regional settings, whereas the first one does.
|
Get records between two datetimes in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm working on a query in SQL Server 2005 that looks at a table of recorded phone calls, groups them by the hour of the day, and computes the average wait time for each hour in the day.
I have a query that I think works, but I'm having trouble convincing myself it's right.
```
SELECT
DATEPART(HOUR, CallTime) AS Hour,
(AVG(calls.WaitDuration) / 60) AS WaitingTimesInMinutes
FROM (
SELECT
CallTime,
WaitDuration
FROM Calls
WHERE DATEADD(day, DATEDIFF(Day, 0, CallTime), 0) = DATEADD(day, DATEDIFF(Day, 0, GETDATE()), 0)
AND DATEPART(HOUR, CallTime) BETWEEN 6 AND 18
) AS calls
GROUP BY DATEPART(HOUR, CallTime)
ORDER BY DATEPART(HOUR, CallTime);
```
To clarify what I think is happening: this query looks at all calls made on the same day as today, where the hour of the call is between 6 and 18 -- the times are recorded and SELECTed in 24-hour time, so this hour filter gets calls between 6am and 6pm.
Then, the outer query computes the average of the WaitDuration column (and converts seconds to minutes) and then groups each average by the hour.
What I'm uncertain of is this: Are the reported BY HOUR averages only for the calls made in that hour's timeframe? Or does it compute each reported average using all the calls made on the day and between the hours? I know the AVG function has a optional OVER/PARTITION clause, and it's been a while since I used the AVG group function. What I would like is that each result grouped by an hour shows ONLY the average wait time for that specific hour of the day.
Thanks for your time in this.
|
The grouping happens on the values that get spit out of `datepart(hour, ...)`. You're already filtering on that value so you know they're going to range between 6 and 18. That's all that the grouping is going to see.
Now of course the `datepart()` function does what you're looking for in that it looks at the clock and gives the hour component of the time. If you want your group to coincide with HH:00:00 to HH:59:59.997 then you're in luck.
I've already noted in comments that you probably meant to filter your range from 6 to 17 and that your query will probably perform better if you change that and compare your raw `CallTime` value against a static range instead. Your reasoning looks correct to me. And because your reasoning is correct, you don't need the inner query (derived table) at all.
Also, if `WaitDuration` is an integer then you're going to be doing integer division in your output. You'd need to cast to decimal in that case, or change the divisor to a decimal value like 60.00.
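A small Python sketch of the same per-hour averaging (made-up calls), including the decimal-divisor point:

```python
from collections import defaultdict

# (hour of CallTime, WaitDuration in seconds) for a few made-up calls.
calls = [(6, 120), (6, 240), (7, 300)]

by_hour = defaultdict(list)
for hour, wait in calls:
    by_hour[hour].append(wait)

# Each hour's average uses only that hour's calls, exactly like AVG + GROUP BY.
# Dividing by 60.0 (not 60) avoids integer-division truncation.
avg_minutes = {h: (sum(w) / len(w)) / 60.0 for h, w in by_hour.items()}
print(avg_minutes)  # {6: 3.0, 7: 5.0}
```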
|
Yes if you use the AVG function with a GROUP BY only the items in that group are averaged. Just like if you use the COUNT function with a GROUP BY only the items in that group are counted.
You can use windowing functions (OVER/PARTITION) to conceptually perform GROUP BYs on different criteria for a single function.
eg
```
AVG(zed) OVER (PARTITION BY DATEPART(YEAR, CallTime)) as YEAR_AVG
```
|
Calculating the AVG value per GROUP in the GROUP BY Clause
|
[
"",
"sql",
"sql-server-2005",
"group-by",
"average",
""
] |
So I been struggling with this one for a while. The question is as follows:
Find all the Employees to whom no other employee reports to. Print the Id, and the first and last name of such employees.
The Relationship Sets can be found here:
<https://chinookdatabase.codeplex.com/wikipage?title=Chinook_Schema&referringTitle=Documentation>
And the Data in the Table is as follows (Note that the rest of the data that is cut off is not needed)
[](https://i.stack.imgur.com/MItkz.png)
Now, the easiest solution that I immediately saw was that the Sale Support had no one report to them:
```
Select DISTINCT E.EmployeeId, E.FirstName, E.LastName
From Employee E
Where E.Title = 'Sales Support Agent'
```
But that seems like a cheap work around, is there a way to get the answer using EmployeeID and ReportsTo only?
|
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/e585b/2)**
```
SELECT E.*
FROM Employee E
LEFT JOIN Employee R -- R for report to
ON E.EmployeeID = R.ReportTo
WHERE R.EmployeeID IS NULL
```
**OUTPUT**
```
| EmployeeID | LastName | ReportTo |
|------------|----------|----------|
| 3 | Peacock | 2 |
| 4 | Park | 2 |
| 5 | Johnson | 2 |
| 7 | King | 6 |
| 8 | Callahan | 6 |
```
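Conceptually this anti-join is a set difference; here is a Python sketch using data consistent with the output above:

```python
# Hedged sketch: employees nobody reports to = all ids minus the set of ids
# that appear in someone's ReportsTo column (hierarchy made up to match the output).
employees = {1: None, 2: 1, 3: 2, 4: 2, 5: 2, 6: 1, 7: 6, 8: 6}  # id -> ReportsTo

managers = {boss for boss in employees.values() if boss is not None}
leaves = sorted(set(employees) - managers)

print(leaves)  # [3, 4, 5, 7, 8]
```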
|
This should give you what you are wanting:
```
Select
E.EmployeeId
,E.FirstName
,E.LastName
From Employee E
Where E.EmployeeID NOT IN (SELECT DISTINCT ReportsTo
FROM Employee
WHERE ReportsTo is not null)
```
Selects every employee that has no one who reports to them.
|
SQL ReportsTo Query
|
[
"",
"sql",
""
] |
I have a SQL table with data like the following.
```
key entry_no type_1 type_2 text
1 1 A Y text 1
1 2 B Y text 2
1 3 B Y text 3
1 4 A Y text 4
1 5 B text 5
1 6 C Y text 6
```
What I need to do is for a specified key (e.g. 1) extract the rows with the highest entry\_no for each type\_1 group (e.g. A, B or C) where type\_2 equals "Y".
E.g. From the above I want to return the following:
```
key entry_no type_1 type_2 text
1 3 B Y text 3
1 4 A Y text 4
1 6 C Y text 6
```
I've tried a few different ways to do this but can't get it right (I'm no SQL expert, though). Is this possible?
I assume I need to do some type of sub-query?
|
Usually this type of query is called `top-N-per-group`. In SQL Server there are at least two methods to get the result. One uses `ROW_NUMBER`, another `CROSS APPLY`. If you have a lot of groups and few rows per group, then it is more efficient to use `ROW_NUMBER`. If you have few groups, but each group has a lot of rows, and you have an appropriate index, it is more efficient to use `CROSS APPLY`.
<https://dba.stackexchange.com/questions/86415/retrieving-n-rows-per-group>
<http://sqlmag.com/sql-server/seek-and-you-shall-scan-part-i-when-optimizer-doesnt-optimize>
**Sample data**
```
DECLARE @T TABLE
([key] int,
[entry_no] int,
[type_1] varchar(1),
[type_2] varchar(1),
[text] varchar(50));
INSERT INTO @T
([key], [entry_no], [type_1], [type_2], [text])
VALUES
(1, 1, 'A', 'Y', 'text 1'),
(1, 2, 'B', 'Y', 'text 2'),
(1, 3, 'B', 'Y', 'text 3'),
(1, 4, 'A', 'Y', 'text 4'),
(1, 5, 'B', '', 'text 5'),
(1, 6, 'C', 'Y', 'text 6');
```
**Variant with ROW\_NUMBER**
```
DECLARE @VarKey int = 1;
WITH
CTE
AS
(
SELECT
[key]
,[entry_no]
,[type_1]
,[type_2]
,[text]
,ROW_NUMBER()
OVER (PARTITION BY [type_1] ORDER BY [entry_no] DESC) AS rn
FROM @T
WHERE
[key] = @VarKey
AND [type_2] = 'Y'
)
SELECT
[key]
,[entry_no]
,[type_1]
,[type_2]
,[text]
FROM CTE
WHERE rn = 1
ORDER BY [type_1];
```
**Result**
```
| key | entry_no | type_1 | type_2 | text |
|-----|----------|--------|--------|--------|
| 1 | 4 | A | Y | text 4 |
| 1 | 3 | B | Y | text 3 |
| 1 | 6 | C | Y | text 6 |
```
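The same top-1-per-group pick, replayed in Python on the question's sample rows as a sanity check:

```python
# Hedged sketch: highest entry_no per type_1 group where type_2 == 'Y'.
rows = [  # (entry_no, type_1, type_2)
    (1, "A", "Y"), (2, "B", "Y"), (3, "B", "Y"),
    (4, "A", "Y"), (5, "B", ""), (6, "C", "Y"),
]

best = {}
for entry_no, t1, t2 in rows:
    # Only consider type_2 = 'Y' rows, keeping the max entry_no per group.
    if t2 == "Y" and (t1 not in best or entry_no > best[t1]):
        best[t1] = entry_no

print(sorted(best.items()))  # [('A', 4), ('B', 3), ('C', 6)]
```

Note entry 5 is skipped even though it is B's highest `entry_no`, because its `type_2` is blank — matching the `WHERE [type_2] = 'Y'` filter inside the CTE.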
|
```
select max(entry_no)
from table
group by type_1
where key = 1 and type_2 = 'Y'
```
|
Select the most recent entry for a key with relevant sub-values
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Newbie coder here, so excuse the ignorance:
Table 1 (title table) lists different titles like mr, mrs, dr, prof, etc. (column name = 'title_name'), and the primary key is named 'id', which runs from 1 to 4 (linking it to the title, obviously).
Table 2 (member table) has a column named 'title' that shows 1, 2, 3, 4, etc. as the title of the member.
How do I write a SQL statement that will populate the actual title and not the id?
I tried the following:
```
select title
from member, title
where member.title = title.title_name
```
I get the following error however:
Conversion failed when converting the nvarchar value 'Dr' to data type int
Thanks
|
Join on the correct column:
```
select title_name from member, title where member.title=title.ID
```
|
You can link data by using a JOIN operation.
For example with the query I'm giving here you get all the members with the related record from title:
```
select * from member as m left join title t on t.id = m.title;
```
If you need specific fields from the query you can just replace "\*" with the names of the fields.
|
How to Populate Title Field in SQL
|
[
"",
"mysql",
"sql",
""
] |
My query is quite big, but for understanding purposes I paste it here:
```
SELECT DISTINCT
ISNULL(CD.InteractionID,'') AS InteractionID
, ISNULL(CD.CaseID,'') AS CaseID
, ISNULL(CD.AnalysisMonth,'') + '-' + CAST( AnalysisYear AS VARCHAR(10)) AS MonYr
, ISNULL(ServiceType,'') AS ServiceType
, ISNULL(ServiceSubType,'') AS ServiceSubType
, ISNULL(SM.SourceName,'') AS SourceName
, ISNULL(UserComment,'') AS UserComment
, ISNULL(Final,'') AS Final
, ISNULL(SYSM.SystemName,'') AS SystemName
, ISNULL(SSM.SubSystem,'') AS SubSystem
, ISNULL(CM.CategoryDesc,'') AS CategoryDesc
, ISNULL(ITCM.ITCommentDesc,'') AS ITCommentDesc
, ISNULL(Casedetails,'') AS Casedetails
, ISNULL(TempRCA,'') AS TempRCA
, ISNULL(FinalRCA,'') AS FinalRCA
, ISNULL(SysOwnerComments ,'') AS SysOwnerComments
FROM
[IT_COMPLAINTS].[ITC_Casedetails] CD WITH (NOLOCK)
INNER JOIN [IT_COMPLAINTS].ITC_SourceMaster SM WITH (NOLOCK) ON CD.SourceID =SM.SourceID
LEFT JOIN [IT_COMPLAINTS].[ITC_SystemMaster] SYSM WITH (NOLOCK) ON CD.SystemID =SYSM.SystemID
LEFT JOIN [IT_COMPLAINTS].[ITC_SubSystemMaster] SSM WITH (NOLOCK) ON CD.SubSystemID=SSM.SubSystemID AND CD.SystemID=SSM.SystemID
LEFT JOIN [IT_COMPLAINTS].[ITC_CategoryMaster] CM WITH (NOLOCK) ON CD.CategoryID =CM.CategoryID
LEFT JOIN [IT_COMPLAINTS].[ITC_ITCommentMaster] ITCM WITH (NOLOCK) ON CD.ITCommentID=ITCM.ITCommentID
INNER JOIN [IT_COMPLAINTS].[ITC_SystemUserMapping] MAP WITH (NOLOCK) ON SSM.SubSystemID = MAP.SubSystemID
WHERE
(IsNull(@InteractionNo,'')='' OR ISNULL(CD.InteractionID,'')=@InteractionNo)
AND (ISNULL(@Mon,'' )='' OR ISNULL(CD.AnalysisMonth,'')=@Mon)
AND (IsNull(@Year,0)=0 OR ISNULL(CD.AnalysisYear,'')=@Year)
--
AND CD.SystemID IN
( CASE WHEN @SystemID = 0 THEN
(SELECT SystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE UserID = @UserID AND IsActive = 1)
ELSE @SystemID END)
AND CD.SubSystemID IN
(CASE WHEN @SystemID = 0 THEN
(SELECT SubSystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE UserID = @UserID AND IsActive = 1)
WHEN @SystemID > 0 AND @SubSystemID = 0 THEN
(SELECT SubSystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE UserID = @UserID AND
IsActive = 1 AND
SystemID = @SystemID)
ELSE @SubSystemID END)
--
AND (ISNULL(@CategoryID,'')='' OR ISNULL(CD.CategoryID,'')=@CategoryID)
AND (ISNULL(@ITCommentID,0)=0 OR ISNULL(CD.ITCommentID,'')=@ITCommentID)
```
However when I run this , It gives me an error
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
Problem is in following part
```
AND CD.SystemID IN
( CASE WHEN @SystemID = 0 THEN
(SELECT SystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE UserID = @UserID AND
IsActive = 1) ELSE @SystemID END)
AND CD.SubSystemID IN
(CASE WHEN @SystemID = 0 THEN
(SELECT SubSystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE UserID = @UserID AND
IsActive = 1)
WHEN @SystemID > 0 AND @SubSystemID = 0 THEN
(SELECT SubSystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE UserID = @UserID AND
IsActive = 1 AND
SystemID = @SystemID)
ELSE @SubSystemID END)
```
When I comment this out, my query works. But what's the problem with the `IN` subquery and `CASE` here?
|
Like @zedfoxus commented, the problem is this part:
`... THEN (SELECT SubSystemID FROM IT_COMPLAINTS.ITC_SystemUserMapping ...)`
If that subquery returns more than one value, the `CASE` expression has no single value to yield. You should replace it with something like this:
```
CD.SubSystemID IN (
SELECT
SubSystemID
FROM
IT_COMPLAINTS.ITC_SystemUserMapping
WHERE
UserID = @UserID AND
IsActive = 1 AND
(
@SystemID = 0 or
(SystemID = @SystemID and @SubSystemID = 0)
)
union all
    select @SubSystemID where @SystemID > 0 and @SubSystemID > 0
)
```
This might require small adjustments, but you should get the idea.
Instead of this, you might want to look into using "if exists" kind of structure. It's usually a lot simpler to write and performs better too.
|
instead of this:
```
FROM
[IT_COMPLAINTS].[ITC_Casedetails] CD WITH (NOLOCK)
...
AND CD.SystemID IN
( CASE WHEN @SystemID = 0 THEN
(SELECT SystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE ...) ELSE @SystemID END)
AND CD.SubSystemID IN
(CASE WHEN @SystemID = 0 THEN
(SELECT SubSystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE ...)
WHEN @SystemID > 0 AND @SubSystemID = 0 THEN
(SELECT SubSystemID
FROM IT_COMPLAINTS.ITC_SystemUserMapping
WHERE ...)
ELSE @SubSystemID END)
```
consider `left outer join`ing table `IT_COMPLAINTS.ITC_SystemUserMapping` on `CD.SystemID` and `CD.SubSystemID`
like:
```
FROM [IT_COMPLAINTS].[ITC_Casedetails] CD WITH (NOLOCK) left outer join
IT_COMPLAINTS.ITC_SystemUserMapping i on
CD.SystemID = @SystemID or
(@SystemID = 0 and
CD.SystemID = SystemID and UserID = @UserID AND IsActive = 1 ) and
CD.SubSystemID = @SubSystemID or
(CD.SubSystemID = SubSystemID and
UserID = @UserID AND IsActive = 1 and
(@SystemID = 0 or
@SystemID > 0 AND SystemID = @SystemID))
```
|
Subquery returned more than 1 value even with IN operator
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with the following structure and Example data:
[](https://i.stack.imgur.com/waQnc.png)
Now I want to query the records that have `value` equals to `#` and `@`.
For example according to the above image, It should returns `1` and `2`
```
id
-----
1
2
```
Also if the parameters were `@`, `#` and `$` It should give us `1`. Because only the records with id `1` have all the given values.
```
id
-----
1
```
|
You can use `GROUP BY` and `HAVING` to get the Ids that contain the full distinct count of the items you're looking for:
```
SELECT Id
FROM Table
WHERE Value IN ('#','$')
GROUP BY Id
HAVING COUNT(DISTINCT Value) = 2
SELECT Id
FROM Table
WHERE Value IN ('#','$','@')
GROUP BY Id
HAVING COUNT(DISTINCT Value) = 3
```
[**SQL Fiddle**](http://sqlfiddle.com/#!3/d7102/2) you can use this link to test
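As a quick sanity check of the GROUP BY / HAVING technique, here is a sketch with Python's sqlite3 using hypothetical rows consistent with the expected results in the question (id 1 has '#', '@' and '$'; id 2 has '#' and '@'; id 3 has only '#'):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, value TEXT)")
# Hypothetical rows matching the expected results in the question
con.executemany("INSERT INTO t VALUES (?,?)",
                [(1, '#'), (1, '@'), (1, '$'), (2, '#'), (2, '@'), (3, '#')])

def ids_with_all(values):
    """Ids that have every value in `values` (relational division)."""
    marks = ",".join("?" * len(values))
    sql = (f"SELECT id FROM t WHERE value IN ({marks}) "
           f"GROUP BY id HAVING COUNT(DISTINCT value) = ? ORDER BY id")
    return [r[0] for r in con.execute(sql, (*values, len(values)))]

print(ids_with_all(['#', '@']))       # ids having both
print(ids_with_all(['#', '@', '$']))  # ids having all three
```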
|
There's several ways to do this.
The subquery method:
```
SELECT DISTINCT Id
FROM Table
WHERE Id IN (SELECT Id FROM Table WHERE Value = '@')
AND Id IN (SELECT Id FROM Table WHERE Value = '#');
```
The correlated subquery method:
```
SELECT DISTINCT t.Id
FROM Table t
WHERE EXISTS (SELECT 1 FROM Table a WHERE a.Id = t.Id and a.Value = '@')
AND EXISTS (SELECT 1 FROM Table b WHERE b.Id = t.Id and b.Value = '#');
```
And the INTERSECT method:
```
SELECT Id FROM Table WHERE Value = '#'
INTERSECT
SELECT Id FROM Table WHERE Value = '@';
```
Best performance will depend on RDBMS vendor, size of table, and indexes. Not all RDBMS vendors support all methods.
|
Query a table that have 2 cols with multiple criteria
|
[
"",
"sql",
"performance",
"sqlite",
"android-sqlite",
""
] |
I have a field in a db2 database which is on hexadecimal format i.e 0x0a
which is number 10 in decimal format. The hex field's datatype is `char(1) for bit data`.
```
hex(myfield) gives me the hexadecimal 0A
```
How can i convert 0x0a to 10 in a query on db2?
I have tried: `cast(hex(myfield),integer)`
and `int(hex(myfield))`
with no luck.
Is it possible?
|
AFAIK, there is no such single function built into DB2 that would perform that conversion, but there is a [blog post showing how to define such a function](https://web.archive.org/web/20200101052310/https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/bit_arithmetic19?lang=en). The following function is taken from that article:
```
--#SET TERMINATOR @
CREATE OR REPLACE FUNCTION HEX2INT(str VARCHAR(8))
RETURNS INTEGER
SPECIFIC HEX2INT
DETERMINISTIC NO EXTERNAL ACTION CONTAINS SQL
BEGIN ATOMIC
DECLARE res INTEGER DEFAULT 0;
DECLARE pos INTEGER DEFAULT 1;
DECLARE nibble CHAR(1);
WHILE pos <= LENGTH(str) DO
SET nibble = SUBSTR(str, pos, 1);
SET res = BITOR(CASE WHEN BITAND(res, 134217728) != 0
THEN BITOR(16 * BITANDNOT(res, 134217728),
-2147483648)
ELSE 16 * res END,
CASE nibble
WHEN '0' THEN 0
WHEN '1' THEN 1
WHEN '2' THEN 2
WHEN '3' THEN 3
WHEN '4' THEN 4
WHEN '5' THEN 5
WHEN '6' THEN 6
WHEN '7' THEN 7
WHEN '8' THEN 8
WHEN '9' THEN 9
WHEN 'A' THEN 10
WHEN 'a' THEN 10
WHEN 'B' THEN 11
WHEN 'b' THEN 11
WHEN 'C' THEN 12
WHEN 'c' THEN 12
WHEN 'D' THEN 13
WHEN 'd' THEN 13
WHEN 'E' THEN 14
WHEN 'e' THEN 14
WHEN 'F' THEN 15
WHEN 'f' THEN 15
ELSE RAISE_ERROR('78000', 'Not a hex string')
END),
pos = pos + 1;
END WHILE;
RETURN res;
END
@
--#SET TERMINATOR ;
```
There are more functions for various conversion operations described.
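The heart of that UDF is a nibble-by-nibble accumulator; the DB2 `BITOR`/`BITAND` gymnastics are only there to emulate 32-bit overflow, which Python doesn't need. A plain-Python sketch of the same logic:

```python
def hex2int(s: str) -> int:
    """Accumulate a hex string nibble by nibble, like the DB2 UDF above."""
    res = 0
    for ch in s:
        digit = "0123456789abcdef".find(ch.lower())
        if digit < 0:
            raise ValueError("Not a hex string")
        res = res * 16 + digit
    return res

print(hex2int("0A"))  # the value from the question
```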
|
I'm sure you could simplify the following
```
WITH fred (x) AS (VALUES 'f1'),
nurk (a) as (SELECT UPPER(substr(x,1)) from fred
union all
select UPPER(substr(a,2)) from nurk
where substr(a,1,1) <> ' '),
bare (b, c) as (select substr(a,1,1), (length(a) - 1)
from nurk),
trap (d) as ((SELECT (ASCII(B) - ASCII('7')) *
power(16,c)
FROM BARE
WHERE (B BETWEEN 'A' AND 'F')
and
c <> -1)
union
(SELECT (ASCII(B) - ASCII('0')) *
power(16,c)
FROM BARE
WHERE (B not BETWEEN 'A' AND 'F')
and
c <> -1))
select sum(d) from trap
```
Run as `db2 -f "filename of above"`, this gave a result of 241. Try testing with values other than 'f1'.
John Hennesy
|
How to convert hexadecimal to decimal on DB2
|
[
"",
"sql",
"database",
"db2",
"hex",
""
] |
I have two SQL selects on the same table. The 1st select shows the desired user data. The 2nd select finds how many times "initials" are used; that value tends to be used multiple times, though it shouldn't be.
I don't want fewer rows than in query 1; I want to add the initial count to every row.
-- First Query showing all desired data
```
SELECT
UserID, Username, Initials
FROM
dbo.Users
```
This query is able to count Initials:
```
SELECT
Initials, count(*) as InitialCount
FROM
dbo.Users
GROUP BY
Initials
```
Output:
```
UserID | Username | Initals | Initialcount
----------------------------------------------
1 | Peter Pan | PP | 2
2 | Paul Pax | PP | 2
3 | John Doe | JD | 1
```
|
You can use `COUNT()` with `OVER()` to get the count per group (defined by `PARTITION BY`) without a `GROUP BY`:
```
SELECT
UserID
, Username
, Initials
, COUNT(*) OVER(PARTITION BY Initials) AS InitialCount
FROM
dbo.Users
```
Commonly referred to as analytic or window functions, `OVER()` can be used with aggregate functions like `MIN()`, `MAX()`, `SUM()`, etc.
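A minimal sketch of the same query using Python's built-in sqlite3 (SQLite supports window functions since 3.25), seeded with the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (UserID INT, Username TEXT, Initials TEXT)")
con.executemany("INSERT INTO users VALUES (?,?,?)",
                [(1, 'Peter Pan', 'PP'), (2, 'Paul Pax', 'PP'),
                 (3, 'John Doe', 'JD')])

# One count per Initials group, repeated on every row of that group
rows = con.execute("""
    SELECT UserID, Username, Initials,
           COUNT(*) OVER (PARTITION BY Initials) AS InitialCount
    FROM users ORDER BY UserID""").fetchall()
print(rows)
```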
|
```
SELECT UserID, Username, Initials,
   (SELECT COUNT(*) FROM dbo.Users U2 WHERE U2.Initials = U1.Initials) InitialCount
FROM dbo.Users U1
```
|
Merge two SQL selects into one result
|
[
"",
"sql",
"sql-server",
"report",
""
] |
I've been at this for about an hour now and am making little to no progress - thought I'd come here for some help/advice.
So, given a sample of my table:
```
+-----------+-----------------------------+--------------+
| MachineID | DateTime | AlertType |
+-----------+-----------------------------+--------------+
| 56 | 2015-10-05 00:00:23.0000000 | 2000 |
| 42 | 2015-10-05 00:01:26.0000000 | 1006 |
| 50 | 2015-10-05 00:08:33.0000000 | 1018 |
| 56 | 2015-10-05 00:08:48.0000000 | 2003 |
| 56 | 2015-10-05 00:10:15.0000000 | 2000 |
| 67 | 2015-10-05 00:11:59.0000000 | 3001 |
| 60 | 2015-10-05 00:13:02.0000000 | 1006 |
| 67 | 2015-10-05 00:13:08.0000000 | 3000 |
| 56 | 2015-10-05 00:13:09.0000000 | 2003 |
| 67 | 2015-10-05 00:14:50.0000000 | 1018 |
| 67 | 2015-10-05 00:15:00.0000000 | 1018 |
| 47 | 2015-10-05 00:16:55.0000000 | 1006 |
+-----------+-----------------------------+--------------+
```
How would I get the first occurrence of `MachineID` w/ an `AlertType` of 2000
and the last occurrence of the same `MachineID` w/ and `AlertType` of 2003.
Here is what I have tried - but it is not outputting what I expect.
```
SELECT *
FROM [Alerts] a
where
DateTime >= '2015-10-05 00:00:00'
AND DateTime <= '2015-10-06 00:00:00'
and not exists(
select b.MachineID
from [Alerts] b
where b.AlertType=a.AlertType and
b.MachineID<a.MachineID
)
order by a.DateTime ASC
```
**EDIT:** *The above code doesn't get me what I want because I am not specifically telling it to search for `AlertType = 2000` or `AlertType = 2003`, but even when I try that, I am still unable to gather my desired results.*
Here is what I would like my output to display:
```
+-----------+-----------------------------+--------------+
| MachineID | DateTime | AlertType |
+-----------+-----------------------------+--------------+
| 56 | 2015-10-05 00:00:23.0000000 | 2000 |
| 56 | 2015-10-05 00:13:09.0000000 | 2003 |
+-----------+-----------------------------+--------------+
```
Any help with this would be greatly appreciated!
|
Not sure, but:
```
select * from [Table]
WHERE [DateTime] IN (
SELECT MIN([DateTime]) as [DateTime]
FROM [Table]
WHERE AlertType = 2000
GROUP BY MachineId
UNION ALL
SELECT MAX([DateTime]) as [DateTime]
FROM [Table]
WHERE AlertType = 2003
GROUP BY MachineId)
ORDER BY MachineId, AlertType
```
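A sketch of this MIN/MAX plus UNION ALL approach using Python's sqlite3, seeded with just machine 56's rows from the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE alerts (MachineID INT, dt TEXT, AlertType INT)")
con.executemany("INSERT INTO alerts VALUES (?,?,?)", [
    (56, '2015-10-05 00:00:23', 2000),
    (56, '2015-10-05 00:08:48', 2003),
    (56, '2015-10-05 00:10:15', 2000),
    (56, '2015-10-05 00:13:09', 2003)])

# First 2000 and last 2003 per machine, via MIN/MAX + UNION ALL
rows = con.execute("""
    SELECT * FROM alerts
    WHERE dt IN (
        SELECT MIN(dt) FROM alerts WHERE AlertType = 2000 GROUP BY MachineID
        UNION ALL
        SELECT MAX(dt) FROM alerts WHERE AlertType = 2003 GROUP BY MachineID)
    ORDER BY MachineID, AlertType""").fetchall()
print(rows)
```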
|
It looks like your outer section takes all records between 2015-10-05 to 2015-10-06, which includes all the records sorted by date. The inner portion only happens when no records fit the outer date range.
Looks like GSazheniuk has it right, but I am not sure if you just want the 2 records or everything that matches the MachineID and the two alerts?
|
SQL Find First Occurrence
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
I have a table `artist` with several columns like `name`, `type` ...
I want the column `name` to be returned and ordered alphabetically, but the column name should be adjusted. If the name of the artist starts with *'the '* (case insensitive) then it should be removed and placed behind the name in uppercase, for example *'the Beatles'* -> *'Beatles, THE'*.
This is the code I have (it doesn't work):
```
SELECT name,
CASE
WHEN UPPER(name) LIKE 'THE %' THEN CONCAT(RIGHT(name, length(name)-4),', THE')
END AS name
FROM artist
ORDER BY name
```
Problem 1: `name` is not replaced, it creates a new column.
Problem 2: new column has same name => column `name` is ambiguous in `order by name`.
How can I easily solve those problems?
|
Use `ELSE` clause in `CASE`:
```
SELECT
CASE
WHEN UPPER(name) LIKE 'THE %' THEN CONCAT(RIGHT(name, length(name)-4),', THE')
ELSE name
END AS name
FROM artist
ORDER BY name
```
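The same CASE/ELSE shape works on other engines; here is a sketch with Python's sqlite3, using `substr(name, 5)` and `||` since `RIGHT()`/`CONCAT()` are not available there:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE artist (name TEXT)")
con.executemany("INSERT INTO artist VALUES (?)",
                [('The Beatles',), ('Abba',), ('the Rolling Stones',)])

# Move a leading 'the ' (case-insensitive) to the end as ', THE'
rows = con.execute("""
    SELECT CASE WHEN UPPER(name) LIKE 'THE %'
                THEN substr(name, 5) || ', THE'
                ELSE name END AS name
    FROM artist ORDER BY 1""").fetchall()
names = [r[0] for r in rows]
print(names)
```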
|
### Query
```
SELECT CASE WHEN name ILIKE 'THE %' -- simpler
THEN right(name, -4) || ', THE' -- simpler, faster
ELSE name END AS name -- but better use a distinct alias
, *
FROM artist
ORDER BY 1; -- positional reference to 1st output column
```
### Rationale
* The simplest and fastest expression to get a string *minus* n leading characters is: [`right(string, -n)`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER).
* `ILIKE` is simpler than `lower()` / `upper() LIKE ...`.
* You don't need [`concat()`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER), the plain concatenation operator [`||`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-SQL) does the same faster since the expression `right(name, -4)` is *guaranteed* to be `NOT NULL` in this context.
* When confused with SQL visibility rules for column names, or when there are **duplicate output column names** (which is totally legit in Postgres) you can also use **positional references** in `GROUP BY`, `ORDER BY` or `DISTINCT ON (...)` clauses:
+ [When can we use an identifier number instead of its name in PostgreSQL?](https://stackoverflow.com/questions/24596290/when-can-we-use-an-identifier-number-instead-of-its-name-in-postgresql/24596349#24596349)
+ [How to re-use result for SELECT, WHERE and ORDER BY clauses?](https://stackoverflow.com/questions/14074546/how-to-re-use-result-for-select-where-and-order-by-clauses/14075977#14075977)
However, it's hardly wise or useful to *have* duplicate output column names in the first place. Rather use distinct names.
* If you run this query a lot I suggest a [functional index](http://www.postgresql.org/docs/current/interactive/indexes-expressional.html) on the same expression for much faster results:
```
CREATE INDEX artist_name_sort_idx ON artist
((CASE WHEN name ILIKE 'THE %' THEN right(name, -4) || ', THE' ELSE name END));
```
The expression in the query must be the same to employ this index.
### Proper test case
```
WITH artist(artist_id, name) AS (
VALUES
(1, 'The Beatles')
, (2, 'tHe bEatles')
, (3, 'The The')
, (4, 'Then')
, (5, 'The X')
, (6, 'Theodor')
, (7, 'Abba')
, (8, 'ZZ TOP')
, (9, 'The ') -- unlikely corner case, I don't think it would pay to test for it
, (10, '') -- empty string
, (11, NULL) -- NULL
)
SELECT CASE WHEN name ILIKE 'THE %' THEN right(name, -4) || ', THE' ELSE name END AS name
, *
FROM artist
ORDER BY 1;
```
Result:
```
name | artist_id | name
--------------+-----------+-------------
| 10 |
, THE | 9 | The -- remaining corner case
Abba | 7 | Abba
bEatles, THE | 2 | tHe bEatles
Beatles, THE | 1 | The Beatles
The, THE | 3 | The The
Then | 4 | Then
Theodor | 6 | Theodor
X, THE | 5 | The X
ZZ TOP | 8 | ZZ TOP
<NULL> | 11 | <NULL>
```
|
ORDER BY changed column with same name
|
[
"",
"sql",
"postgresql",
"replace",
"pattern-matching",
"sql-order-by",
""
] |
We have an SQL job that runs every 30s where we logged every step (SQL Server 2012)
Sometimes, without any reason, and without any "job failed" message, the job stops and even hangs (behavior that we would like to analyse).
What we would like to achieve is to query out log table and detect when a record is missing (doesn't happens every 30s).
For instance:
```
CREATION_DATE MESSAGE
2015-09-17 07:49:38.053 11 : **Fin**
2015-09-17 07:49:02.377 11 : **Fin**
2015-09-17 07:48:32.100 11 : **Fin**
2015-09-17 07:48:01.940 11 : **Fin**
2015-09-17 07:47:32.100 11 : **Fin**
2015-09-17 07:47:01.967 11 : **Fin**
2015-09-17 07:46:31.663 11 : **Fin**
2015-09-17 07:46:01.803 11 : **Fin**
2015-09-17 07:45:31.663 11 : **Fin**
2015-09-17 07:45:02.060 11 : **Fin**
2015-09-17 07:44:31.843 11 : **Fin**
2015-09-17 07:44:01.970 11 : **Fin**
2015-09-17 07:43:22.397 11 : **Fin** <= MUST BE DETECTED (23 minutes between events)
2015-09-17 07:20:01.767 11 : **Fin** <= MUST BE DETECTED (3 minutes between events)
2015-09-17 07:17:01.743 11 : **Fin**
2015-09-17 07:16:31.777 11 : **Fin**
2015-09-17 07:16:01.690 11 : **Fin**
2015-09-17 07:15:31.733 11 : **Fin**
2015-09-17 07:15:01.807 11 : **Fin**
2015-09-17 07:14:31.683 11 : **Fin**
2015-09-17 07:14:01.793 11 : **Fin**
2015-09-17 07:13:31.853 11 : **Fin**
2015-09-17 07:13:01.840 11 : **Fin**
```
I hope my question is clear.
|
This shows all periods longer than 30 seconds
```
select A.CREATION_DATE as Start, min(N.CREATION_DATE) as Finish
from logTable A
left join logTable N on A.CREATION_DATE < N.CREATION_DATE
group by A.CREATION_DATE
having datediff(second, A.CREATION_DATE, min(N.CREATION_DATE)) > 30
```
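The same idea, joining each row to its immediate successor and keeping pairs more than 30 seconds apart, can be sketched with Python's sqlite3, using `strftime('%s', ...)` in place of `DATEDIFF`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (ts TEXT)")
con.executemany("INSERT INTO log VALUES (?)", [
    ('2015-09-17 07:16:31',), ('2015-09-17 07:17:01',),
    ('2015-09-17 07:20:01',),   # 3-minute gap before this row
    ('2015-09-17 07:20:31',)])

# Each row paired with its earliest later row; keep gaps > 30 seconds
gaps = con.execute("""
    SELECT a.ts AS start, MIN(n.ts) AS finish
    FROM log a LEFT JOIN log n ON a.ts < n.ts
    GROUP BY a.ts
    HAVING strftime('%s', MIN(n.ts)) - strftime('%s', a.ts) > 30""").fetchall()
print(gaps)
```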
|
This is my "dirty" solution while trying to make the solution by Luis work (his looks better than mine):
```
SELECT TOP 1000 [PROCEDURE_NAME]
,[CREATION_DATE]
,[MESSAGE]
,
DATEDIFF(SECOND,
(SELECT top 1 [CREATION_DATE]
FROM [mactac].[mactac].[ERROR_LOG] f
WHERE f.[CREATION_DATE]<c.[CREATION_DATE]
AND [PROCEDURE_NAME] like '%downl%'
AND [MESSAGE] LIKE '11 : **Fin**'
order by [CREATION_DATE] desc)
,c.[CREATION_DATE]) as DIFF
FROM [mactac].[mactac].[ERROR_LOG] as c
WHERE [PROCEDURE_NAME] like '%downl%'
AND [MESSAGE] LIKE '11 : **Fin**'
order by [CREATION_DATE] desc
```
This solutions displays all the records, and elapsed time :-D
But I have not figured out yet how to add:
> AND DIFF > 35
```
Invalid column name 'DIFF'.
```
in order to filter on the longer times.
|
SQL: detect when a recurring log (every 30s) is missing
|
[
"",
"sql",
"select",
"sql-server-2012",
"periodicity",
""
] |
As my question states, I would like to know what we call types of queries with that type of condition in the `WHERE` clause, i.e.:
```
SELECT * FROM mytable
WHERE (col1, col2) < (1, 2);
```
In other words:
***Give me all records where `col1` is less than '1' or if it equals '1' then `col2` must be less than '2' - and none of the values are NULL.***
I really like this type of syntax, but don't know what the naming convention is on how to refer to this type of condition. It looks like a tuple conditional but that name is not giving me anything from my searches.
My question stems from needing to know what this syntax is called in order to research how to write this using [Criteria API](http://docs.oracle.com/javaee/6/tutorial/doc/gjitv.html) with Hibernate and JPA2 and Postgres.
### EDIT
I was able to write this using Criteria API using [CriteriaBuilder's function()](https://docs.oracle.com/javaee/6/api/javax/persistence/criteria/CriteriaBuilder.html#function(java.lang.String,%20java.lang.Class,%20javax.persistence.criteria.Expression...)) call:
```
//Our left expression (date, id)
Expression leftVal = criteriaBuilder.function("ROW", Tuple.class,
from.get("date").as(java.util.Date.class),
from.get("id").as(Long.class));
//Our right expression ex: ('2015-09-15', 32450)
ParameterExpression<Date> dateParam = criteriaBuilder.parameter(Date.class);
ParameterExpression<Long> idParam = criteriaBuilder.parameter(Long.class);
Expression rightVal = criteriaBuilder.function("ROW", Tuple.class, dateParam, idParam);
//build the first predicate using ROW expressions
Predicate predicate = criteriaBuilder.greaterThan(leftVal, rightVal);
//more query building happens
...
//construct final query and add parameters to our param expressions
TypedQuery<MyEntity> typedQuery = em.createQuery(criteriaQuery);
typedQuery.setParameter(dateParam, current.getDate());
typedQuery.setParameter(idParam, current.getId());
```
`current` in this case is the record I retrieve as the row we want to get records BEFORE or AFTER. In this example I do after as noted by the `greaterThan` function call.
|
Common terms for the operation you demonstrate are **"Row values"**, **"Row value comparison"**, [**"Row constructor comparison"**](https://www.postgresql.org/docs/current/functions-comparisons.html#ROW-WISE-COMPARISON) or **"Row-wise comparison"**.
Commonly used in **"keyset pagination"**.
That feature has been in the SQL standard since [SQL-92](https://en.wikipedia.org/wiki/SQL-92) (!). Postgres is currently the only major RDBMS that supports it in all aspects - in particular also with full index support.
The expression `(col1, col2) < (1, 2)` is short syntax for `ROW(col1, col2) < ROW(1, 2)` in Postgres.
The expression `ROW(col1, col2)` is a ["row constructor"](https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS), like `ARRAY[col1, col2]` is an ["array constructor"](https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-ARRAY-CONSTRUCTORS).
Row-wise comparison is conveniently short for the more verbose, equivalent expression:
```
col1 < 1 OR (col1 = 1 AND col2 < 2)
```
Postgres can use a [**multicolumn index**](https://www.postgresql.org/docs/current/indexes-multicolumn.html) on `(col1, col2)` or `(col1 DESC, col2 DESC)` for this. But not an index with mixed sort direction like `(col1 ASC, col2 DESC)`!
The expression is notably distinct from: (!)
```
col1 < 1 AND col2 < 2
```
Consider the example row value `(1,1)`: it satisfies `(col1, col2) < (1, 2)` (because `col1 = 1` and `col2 < 2`), but fails `col1 < 1 AND col2 < 2` (because `col1 < 1` is false).
Here is a presentation by Markus Winand that discusses the feature for pagination in detail:
["Pagination done the PostgreSQL way" on use-the-index-luke.com](https://use-the-index-luke.com/blog/2013-07/pagination-done-the-postgresql-way).
Row value comparison starts on page 20. The support matrix I have been referring to is on page 45.
I am in no way affiliated to Markus Winand.
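SQLite (3.15+) also supports row-value comparison, so the equivalence and the `(1,1)` edge case are easy to check from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 INT, col2 INT)")
con.executemany("INSERT INTO t VALUES (?,?)",
                [(0, 1), (0, 9), (1, 1), (1, 2), (2, 0)])

row_wise = con.execute(
    "SELECT col1, col2 FROM t WHERE (col1, col2) < (1, 2) ORDER BY 1, 2"
).fetchall()
expanded = con.execute(
    "SELECT col1, col2 FROM t WHERE col1 < 1 OR (col1 = 1 AND col2 < 2) "
    "ORDER BY 1, 2").fetchall()
both_lt = con.execute(
    "SELECT col1, col2 FROM t WHERE col1 < 1 AND col2 < 2 ORDER BY 1, 2"
).fetchall()

print(row_wise)  # (1, 1) is included
print(both_lt)   # (1, 1) is not
```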
|
```
WHERE (col1, col2) < (val1, val2)
```
Above syntax is called **Row Value Constructor**/**tuple syntax**/**Row Subquery**.
From [doc](https://docs.jboss.org/hibernate/core/3.3/reference/en/html/queryhql.html#queryhql-tuple)
> ANSI SQL **row value constructor syntax**, sometimes referred to AS **tuple
> syntax**, even though the underlying database may not support that
> notion. Here, we are generally referring to multi-valued comparisons,
> typically associated with components
Alternatively it can be called [Row Subqueries](https://dev.mysql.com/doc/refman/5.0/en/row-subqueries.html)
|
SQL syntax term for 'WHERE (col1, col2) < (val1, val2)'
|
[
"",
"sql",
"hibernate",
"postgresql",
"pagination",
"criteria",
""
] |
I have a table like this and there are only two feature for all user in this table
```
+-------+---------+-----------+----------+
| User | Feature | StartDate | EndDate |
+-------+---------+-----------+----------+
| Peter | F1 | 2015/1/1 | 2015/2/1 |
| Peter | F2 | 2015/3/1 | 2015/4/1 |
| John | F1 | 2015/5/1 | 2015/6/1 |
| John | F2 | 2015/7/1 | 2015/8/1 |
+-------+---------+-----------+----------+
```
I want to transform to
```
+-------+--------------+------------+--------------+------------+
| User | F1_StartDate | F1_EndDate | F2_StartDate | F2_EndDate |
+-------+--------------+------------+--------------+------------+
| Peter | 2015/1/1 | 2015/2/1 | 2015/3/1 | 2015/4/1 |
| John | 2015/5/1 | 2015/6/1 | 2015/7/1 | 2015/8/1 |
+-------+--------------+------------+--------------+------------+
```
|
If you are using SQL Server 2005 or up by any chance, [`PIVOT`](https://technet.microsoft.com/library/ms177410.aspx) is what you are looking for.
|
Use [UNPIVOT & PIVOT](https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx) like this:
Test data:
```
DECLARE @t table
(User1 varchar(20),Feature char(2),StartDate date,EndDate date)
INSERT @t values
('Pete','F1','2015/1/1 ','2015/2/1'),
('Pete','F2','2015/3/1 ','2015/4/1'),
('John','F1','2015/5/1 ','2015/6/1'),
('John','F2','2015/7/1 ','2015/8/1')
```
Query:
```
;WITH CTE AS
(
SELECT User1, date1, Feature + '_' + Seq cat
FROM @t as p
UNPIVOT
(date1 FOR Seq IN
([StartDate], [EndDate]) ) AS unpvt
)
SELECT * FROM CTE
PIVOT
(MIN(date1)
FOR cat
IN ([F1_StartDate],[F1_EndDate],[F2_StartDate],[F2_EndDate])
) as p
```
Result:
```
User1 F1_StartDate F1_EndDate F2_StartDate F2_EndDate
John 2015-05-01 2015-06-01 2015-07-01 2015-08-01
Pete 2015-01-01 2015-02-01 2015-03-01 2015-04-01
```
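`PIVOT`/`UNPIVOT` are T-SQL specific; on engines without them, conditional aggregation produces the same reshaping. A sketch with Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE t
    (User1 TEXT, Feature TEXT, StartDate TEXT, EndDate TEXT)""")
con.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    ('Pete', 'F1', '2015-01-01', '2015-02-01'),
    ('Pete', 'F2', '2015-03-01', '2015-04-01'),
    ('John', 'F1', '2015-05-01', '2015-06-01'),
    ('John', 'F2', '2015-07-01', '2015-08-01')])

# One output row per user; each MIN(CASE ...) picks one feature's date
rows = con.execute("""
    SELECT User1,
           MIN(CASE WHEN Feature = 'F1' THEN StartDate END) AS F1_StartDate,
           MIN(CASE WHEN Feature = 'F1' THEN EndDate   END) AS F1_EndDate,
           MIN(CASE WHEN Feature = 'F2' THEN StartDate END) AS F2_StartDate,
           MIN(CASE WHEN Feature = 'F2' THEN EndDate   END) AS F2_EndDate
    FROM t GROUP BY User1 ORDER BY User1""").fetchall()
print(rows)
```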
|
How to transform rows into column?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like this:
```
+--------+--------+
| userid | events |
+--------+--------+
| 1 | blog |
| 2 | blog |
| 1 | site |
| 1 | move |
| 2 | move |
| 3 | blog |
| 2 | blog |
+--------+--------+
```
I want to display all users who participated in all the events that I passed as parameters.
Example: All users who participated in "blog" AND "move"
Note that a user can participate in an event multiple times.
Thanks for your help
|
```
SELECT DISTINCT userid
FROM table_name
WHERE events = 'blog'
AND userid IN
(
SELECT DISTINCT userid
FROM table_name
WHERE events = 'move'
)
```
Here is the [SQLFiddle](http://sqlfiddle.com/#!4/b4504c/2)
|
```
select userid
from your_table
where events in ('blog','move')
group by userid
having count(distinct events) = 2
```
If you have 3 events to check then use
```
having count(distinct events) = 3
```
|
SQL - Table sort by events
|
[
"",
"mysql",
"sql",
""
] |
I designed the below query for my delete operation. I am new to SQL and just wanted to check with experienced people here whether it is fine or there is a better way to do this. I am using a DB2 database.
```
DELETE FROM TableD
WHERE B_id IN
(
SELECT B.B_id
FROM TableB tB
INNER JOIN TableA tA
ON tB.A_id = tA.A_id
WHERE A_id = 123
) AND
C_id IN (1,2,3)
```
This has two IN clauses, which I am a little worried about, and I am not sure whether I could use an EXISTS clause anywhere.
Database Structure as below:
* Table A has ONE TO MANY relation with Table B
* Table B has ONE TO MANY relation with Table C
* Table B has ONE TO MANY relation with Table D
* Table D has composite primary key ( B\_id, C\_id )
Table D data somewhat similar to below
```
B_id|C_id
----------
1 | 1
1 | 2
1 | 3
2 | 4
2 | 5
3 | 5
```
Here I have to delete rows which have C\_id in an array of values. But since the index is a composite of B\_id and C\_id, I am retrieving the B\_id values related to the particular entity of Table A by the equality A\_id = 123.
|
There isn't necessarily anything wrong with your method. However, a useful alternative technique to know is [merge](https://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.sql.ref.doc/doc/r0010873.html):
```
merge into TableD
using (
select distinct
B.B_id
from TableB tB
inner join TableA tA on
tB.A_id = tA.A_id and
A_id = 123
) AB
on
TableD.B_id = AB.B_id and
C_id in (1,2,3)
when matched then delete;
```
Note that I had to use `distinct` on the inner query to prevent duplicate matches.
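`MERGE` is not available everywhere; as a sanity check of what it deletes, here is the logically equivalent `DELETE ... WHERE EXISTS` run against Python's sqlite3 with the sample rows from the question (the join to TableA collapses to `tB.A_id = 123`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TableA (A_id INT);
    CREATE TABLE TableB (B_id INT, A_id INT);
    CREATE TABLE TableD (B_id INT, C_id INT, PRIMARY KEY (B_id, C_id));
    INSERT INTO TableA VALUES (123), (456);
    INSERT INTO TableB VALUES (1, 123), (2, 123), (3, 456);
    INSERT INTO TableD VALUES (1,1),(1,2),(1,3),(2,4),(2,5),(3,5);
""")

# Same effect as the MERGE ... WHEN MATCHED THEN DELETE above
con.execute("""
    DELETE FROM TableD
    WHERE C_id IN (1, 2, 3)
      AND EXISTS (SELECT 1 FROM TableB tB
                  WHERE tB.B_id = TableD.B_id AND tB.A_id = 123)""")

remaining = con.execute(
    "SELECT B_id, C_id FROM TableD ORDER BY B_id, C_id").fetchall()
print(remaining)
```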
|
You can use merge like this too :
```
merge into TableD
using TableB tB
on B.B_id = TableD.B_id
and tB.A_id in (select A_id from TableA tA where A_id = 123)
and C_id in (1,2,3)
when matched then delete;
```
|
Is there any better way to write this query
|
[
"",
"sql",
"db2",
"sql-delete",
""
] |
Are there any expressions to get `QUARTER`, `WEEK` and `DAYOFWEEK` of a DateTime field in Hive (v0.14.1) which do the same thing as these built-in functions in MySQL?
Here is the specification of what I want (from the MySQL docs):
[QUARTER](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_quarter)
[WEEK](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_week)
[DAYOFWEEK](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_dayofweek)
NOTE:
1. the function quarter() was introduced in Hive 1.3, but I need an expression that supports lower versions.
2. the function weekofyear() is supported instead of week(), but there is a little difference; that's okay, so just ignore this one.
|
Suppose the DateTime field is `order_time`
* DAYOFWEEK: `PMOD(DATEDIFF(order_time, '2012-01-01'), 7)`
* WEEK: `WEEKOFYEAR(order_time)`
* QUARTER (hive <1.3): `(INT((MONTH(order_time)-1)/3)+1)`
* QUARTER (hive >=1.3): `QUARTER(order_time)`
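The arithmetic behind those expressions is easy to verify in plain Python. The quarter is `(month - 1) // 3 + 1`, and the `PMOD` trick works because 2012-01-01 was a Sunday, so the mod-7 day difference yields 0 for Sunday through 6 for Saturday (i.e. MySQL's `DAYOFWEEK` minus 1):

```python
from datetime import date

def quarter(d: date) -> int:
    # (INT((MONTH(order_time)-1)/3)+1) in the Hive expression
    return (d.month - 1) // 3 + 1

def dayofweek(d: date) -> int:
    # PMOD(DATEDIFF(order_time, '2012-01-01'), 7): 0=Sunday .. 6=Saturday
    return (d - date(2012, 1, 1)).days % 7

print(quarter(date(2015, 9, 17)))    # September -> Q3
print(dayofweek(date(2012, 1, 1)))   # a Sunday -> 0
```

Python's `%` already behaves like `PMOD` for negative inputs, so dates before the anchor work too.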
|
It always helps to know how to turn a month into a quarter using just math:
```
floor((cast(substr(visit_date, 6,2) as int) - 1) / 3) + 1
```
|
Hive DateTime Truncators (QUARTER, WEEK, DAYOFWEEK)?
|
[
"",
"sql",
"datetime",
"hive",
"hiveql",
""
] |
I am currently having trouble filtering my SQL records. I need something like the following concept:
**I want to get some records based on the value of another field in the same table. Say, like the following:**
```
SELECT [Field1], [field2]
FROM [table]
WHERE [Field1] IN ('value1', 'value2')
--from 'value2' i need to filter or remove some records
WHERE [field2] NOT IN ('val1','val2') When [Field1] is 'value2'
```
I've been spending hours on this and I still can't get it right.
I'd appreciate any help.
|
Starting with the top answer (@Siyual's), you can optimize it to lessen maintenance :
```
SELECT [Field1], [field2]
FROM [table]
WHERE ([Field1] = 'value2' AND [field2] NOT IN ('val1','val2'))
OR ([Field1] = 'value1')
```
Using `[Field1] = 'value1'` or `[Field1] in ('value1', 'value3', 'value...', 'valueN')` means that whenever a new value is put into [Field1], your query breaks. You would be much better off replacing it with `[Field1] != 'value2'`, giving you:
```
SELECT [Field1], [field2]
FROM [table]
WHERE ([Field1] = 'value2' AND [field2] NOT IN ('val1','val2'))
OR ([Field1] != 'value2')
```
From there, you can also apply logic operators to realize that you're saying `WHERE (A && B) || !A`; in this case there is no difference between a XOR and an OR, which means it's equivalent to `WHERE (!A) || B`, giving you:
```
SELECT [Field1], [field2]
FROM [table]
WHERE [Field1] != 'value2' OR [field2] NOT IN ('val1','val2')
```
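The Boolean step above, `(A && B) || !A` being equivalent to `!A || B`, can be confirmed with an exhaustive truth table; a quick Python check:

```python
from itertools import product

# (A && B) || !A  versus  !A || B, over all four input combinations
results = [((a and b) or (not a)) == ((not a) or b)
           for a, b in product([False, True], repeat=2)]
print(all(results))
```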
|
Small bit of logic for two fields.
```
SELECT [Field1], [field2]
FROM [table]
WHERE [Field1] IN ('value 1' , ... ) -- Other non value 2 values
OR (Field1 = 'Value 2' AND [field2] NOT IN ('val1','val2'))
```
|
MS SQL: SELECT field1, field2 from [table] where field2 NOT IN ('val1', val2') when field1 = 'value2'
|
[
"",
"sql",
"sql-server",
"database",
"subquery",
""
] |
I am using MS Access 2010, split into front end / back end, on a network drive (WAN), with 16+ tables. One table of **users (1.3 million)** is mostly used for user information and is not insert heavy; a few other tables will receive **up to 2000+ inserts daily**.
I have been able to optimize most of the read/select queries, although one chunk of my code looks as below. This can run for up to 2000 iterations daily.
```
Do Until rec.EOF
Dim vSomeId As Integer
vSomeId = rec!SomeId
'StrSQL = StrSQL & "INSERT INTO TransportationDetails ( TransportationId, SomeId)" & _
'"VALUES(" & vTransportationId & ", " & vSomeId & ");"
StrSQL = "INSERT INTO TransportationDetails ( TransportationId, SomeId)" & _
"VALUES(" & vTransportationId & ", " & vSomeId & ");"
DoCmd.SetWarnings False
DoCmd.RunSQL (StrSQL)
DoCmd.SetWarnings True
rec.Edit
rec!SomeBoolean = rec!SomeOtherBoolean
rec.Update
rec.MoveNext
Loop
```
My objective here is to **reduce the number of calls to the db** needed to insert all the values, but **MS Access does NOT support having more than one query in a statement**, as I tried in the commented-out part of the code. I also think the recordset update method is quite time-consuming; can anyone suggest a better way to update the recordset?
**Is** there any way I can trick Access into inserting & updating in **fewer hits to the db** through SQL queries, or any other Access feature, or optimize it in any way? It can sometimes take up to 30 minutes; decreasing it to 2 - 5 minutes would be appropriate.
P.S.
I cannot switch to SQL Server; it is **JUST NOT POSSIBLE**. I am aware it could be done in a far more optimal way with SQL Server, and that Access shouldn't be used over a WAN, but I don't have that option.
Solution:
I went with Andre's and Jorge's solution. The time decreased by a factor of 17. Albert's answer is correct too, though: I found my main issue was the SQL statements in a loop, and changing the recordset edits to SQL didn't have much impact on the time.
|
If you have now
```
S = "SELECT SomeId, SomeBoolean, SomeOtherBoolean " & _
"FROM recTable WHERE someCriteria"
Set rec = DB.OpenRecordset(S)
```
change your statements into
```
"INSERT INTO TransportationDetails (TransportationId, SomeId) " & _
"SELECT " & vTransportationId & ", SomeId " & _
"FROM recTable WHERE someCriteria"
```
and
```
"UPDATE recTable SET SomeBoolean = SomeOtherBoolean WHERE someCriteria"
```
For performance, avoid looping over Recordsets where possible. Use SQL statements that operate on whole sets instead.
|
I should point out that in the case of inserting rows, you will find FAR better performance by using a recordset. A SQL "action" query will ONLY perform better if you are operating on a set of data. The instant you are inserting rows one at a time, you don't have a "set" insert, and using a DAO recordset will result in MUCH better performance (a factor of 10 to 100 times better).
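A minimal sketch of what that recordset approach could look like, reusing the question's `rec`, `vTransportationId` and `TransportationDetails` (untested VBA; the variable names are taken from the question):

```
' Open the target table once and reuse the same recordset for every insert,
' instead of running one INSERT statement per row.
Dim db As DAO.Database
Dim rsIns As DAO.Recordset

Set db = CurrentDb()
' dbAppendOnly: we only add rows, so Access need not fetch existing ones
Set rsIns = db.OpenRecordset("TransportationDetails", dbOpenDynaset, dbAppendOnly)

Do Until rec.EOF
    rsIns.AddNew
    rsIns!TransportationId = vTransportationId
    rsIns!SomeId = rec!SomeId
    rsIns.Update

    rec.Edit
    rec!SomeBoolean = rec!SomeOtherBoolean
    rec.Update
    rec.MoveNext
Loop

rsIns.Close
```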
|
Increase Ms Access Insert Performance
|
[
"sql",
"ms-access",
"insert",
"ms-access-2010",
"query-performance"
] |
So I have this problem where I need to find the rows where a column has a specific value. The table I use is created like this:
```
select table.CID, table2.PID
from table
inner join table2 on table2.OID = table.OID
```
The table looks like this:
```
+------------ table --------------+
|____table.CID___|___table2.PID___|
| AA | 47 |
| AA | 25 |
| AA | 13 |
| AA | 18 |
| AB | 22 |
| AB | 89 |
| AC | 47 |
| AC | 15 |
| AC | 8 |
+---------------------------------+
```
Now I want to get all rows for every table.CID group that has at least one table2.PID equal to 47. That results in:
```
+------------ table --------------+
|____table.CID___|___table2.PID___|
| AA | 47 |
| AA | 25 |
| AA | 13 |
| AA | 18 |
| AC | 47 |
| AC | 15 |
| AC | 8 |
+---------------------------------+
```
How would one do this? I've tried with `where exists` but with no luck.
|
This query finds the rows with `47`:
```
SELECT ...
FROM MyTable JOIN Table2 USING (OID)
WHERE Table2.PID = 47;
```
The `CID` values of these rows can then be used to filter the original query:
```
SELECT MyTable.CID, Table2.PID
FROM MyTable JOIN Table2 USING (OID)
WHERE MyTable.CID IN (SELECT MyTable.CID
FROM MyTable JOIN Table2 USING (OID)
WHERE Table2.PID = 47);
```
|
Use `exists`:
```
select t.*
from table t
where exists (select 1 from table t2 where t2.cid = t.cid and t2.pid = 47);
```
|
Find rows based on two coloumns in two tables
|
[
"mysql",
"sql",
"database",
"sqlite"
] |
Would anybody help me with this problem?
(Please see attached image below)
`Sample Table`
[](https://i.stack.imgur.com/PRjBx.jpg)
`what should be the result`
[](https://i.stack.imgur.com/qQX3g.jpg)
|
This looks like a case for [RANK()](https://msdn.microsoft.com/en-us/library/ms176102.aspx)
See this example:
```
SELECT EmployeeCode
, Timestamp
, Value
FROM (
SELECT RANK() OVER (PARTITION BY EmployeeCode ORDER BY Timestamp) AS RN
, EmployeeCode
, Timestamp
, Value
FROM dbo.SampleTable
) AS ST
WHERE ST.RN = 1;
```
It will generate a number (1, 2, 3, 4, ... n) for each row.
Please note that I'm doing a `PARTITION BY`, which resets the numbering for each different EmployeeCode. So this query generates a number per employee, ordered by the timestamp value: the lowest timestamp will get 1.
**Official documentation says**
> If two or more rows tie for a rank, each tied row receives the same
> rank. For example, if the two top salespeople have the same SalesYTD
> value, they are both ranked one. The salesperson with the next highest
> SalesYTD is ranked number three, because there are two rows that are
> ranked higher. Therefore, the RANK function does not always return
> consecutive integers.
Since you provided a screenshot only, I've added a quick numbers next to rows. So you can imagine how it's looking.
[](https://i.stack.imgur.com/80aNZ.png)
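A small illustration of that tie behavior, using a made-up `Sales` table (the names and `SalesYTD` values are assumptions, not from the question):

```
-- Assume SalesYTD: Ann = 100, Bob = 100, Carl = 90
SELECT Name,
       RANK()       OVER (ORDER BY SalesYTD DESC) AS Rnk,   -- 1, 1, 3
       ROW_NUMBER() OVER (ORDER BY SalesYTD DESC) AS RowNum -- 1, 2, 3
FROM Sales;
```

`ROW_NUMBER()` would break the tie arbitrarily, which is why either function works for picking the single minimum-timestamp row per employee, but they differ when duplicates exist.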
|
You can also use `ROW_NUMBER()` to get the minimum-timestamp rows:
```
SELECT EMPLOYEECODE,
       TIMESTAMP,
       VALUE
FROM   (SELECT ROW_NUMBER()
                 OVER (
                   PARTITION BY EMPLOYEECODE
                   ORDER BY TIMESTAMP) AS RN,
               EMPLOYEECODE,
               TIMESTAMP,
               VALUE
        FROM   TABLE) A
WHERE  RN = 1
```
|
How to get the rows that have min timestamp using SQL
|
[
"sql",
"sql-server",
"sql-server-2008",
"sqlite"
] |
I have got a table `names` in MySQL with following columns `ID, type, row, value`
The composite primary key is `ID, type, row`
The purpose of this table is to save all names and professions of a specified person in multiple rows - one data per row.
For example: Commonly in Spain people have two first names and two last names, like `José Anastacio Rojas Laguna`.
In Germany, many persons have one first name but two last names. There are even persons with a broad profession, like teaching at a university and working as a doctor in a hospital at the same time. In this case, in Germany people would have a leading `Prof. Dr.` in their names. For example: `Prof. Dr. José Anastacio Rojas Laguna`
In this case, I would store all these information in the table like this:
```
ID | type | row | value
1 | 0 | 1 | Prof.
1 | 0 | 2 | Dr.
1 | 1 | 1 | José
1 | 1 | 2 | Anastacio
1 | 2 | 1 | Rojas
1 | 2 | 2 | Laguna
```
An `ID` is given to one single person: every person in the table has a unique `ID`, and a person is identified by his `ID`. `type` defines, as it says, the type of the name: `0` means profession, `1` means first name and `2` means last name. `row` defines the position in the name: `1` means 1st first name, `2` means 2nd first name, `3` means 3rd first name, etc... The same goes for profession and last name.
Now I would like to find out how I can SELECT the `ID` of a specified person by passing just some of that person's names. How can I determine the ID by giving only a few of the values, which all occur under the same ID?
|
This will return users that have the name José Laguna with the same ID:
```
select t1.id, t1.value, t2.value
from yourTable t1
join (select * from yourTable
      where value = 'Laguna') t2
  on t1.id = t2.id
where t1.value = 'José'
```
|
I use `José` here; you could use a variable `@searchText` instead:
```
SELECT *
FROM YourTable
WHERE ID IN (SELECT DISTINCT ID
FROM YourTable
WHERE value = 'José')
```
Or maybe use an `IN` if multiple parameters
```
WHERE value IN ('José', 'Laguna')
```
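Note that `IN` returns rows matching *any* of the listed values, so the result may include persons who carry only one of the names. If the requirement is that the person carries *all* of the passed names, a grouped count is one way to express it (a sketch against the same `YourTable`; the `2` must match the number of names passed in):

```
SELECT ID
FROM YourTable
WHERE value IN ('José', 'Laguna')
GROUP BY ID
HAVING COUNT(DISTINCT value) = 2 -- one per name passed in
```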
|
How to SELECT same ID of values occuring in the same column?
|
[
"mysql",
"sql"
] |