| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a MySQL database structured as follows: (the numbers on the left are row numbers)
```
+------------+----------+--------------+-------------+
| start_dsgn | end_dsgn | min_cost_1 | min_cost_2 |
+------------+----------+--------------+-------------+
1| 1 | 2 | 3 | 100 |
2| 1 | 3 | 5 | 153 |
3| 1 | 4 | 10 | 230 |
4| 2 | 1 | 4 | 68 |
5| 2 | 3 | 5 | 134 |
6| 3 | 1 | 7 | 78 |
7| 3 | 2 | 8 | 120 |
+------------+----------+--------------+-------------+
```
I would like to query the database so that, for each start design, it returns the count of end designs for which at least one of the two minimum costs is covered by the user's input. So, for example, a user inputs a value for cost\_1 and cost\_2. The query should return the count of rows, grouped by start design, in which cost\_1 >= min\_cost\_1 OR cost\_2 >= min\_cost\_2.
So for the database above lets assume the user inputs
```
cost_1 = 5
cost_2 = 100
```
This should return something like:
```
{1:2, 2:2, 3:1}
```
where the structure is:
```
{start_design_1: count(row 1 + row 2), start_design_2:count(row 4 + row 5) , start_design_3:count(row 6)}
```
I was just wondering if anyone had any tips or solutions on how to do this query as I am relatively new to SQL. Thank you very much.
Also note that my database is not sorted as in the example, I did so to make the example easier to follow.
|
You didn't specify which SQL dialect you are using, so I am writing this example in SQL Server syntax, where `@cost_1` and `@cost_2` are variables. You can adapt it to whatever SQL variant you are using. The following will give you the counts you want as a SQL result set:
```
SELECT start_dsgn, COUNT(CASE WHEN min_cost_1 <= @cost_1 OR min_cost_2 <= @cost_2 THEN end_dsgn END) as Cnt
FROM your_table
GROUP BY start_dsgn
```
Once you have that, it's up to you to convert that into whatever format you want in whichever language you are using to query the database. You also didn't specify what language you are using, but it looks like you want a dictionary where start\_dsgn is the key.
Most languages would return a list or array of either lists, tuples or dictionaries depending on what you use. From there it's pretty easy to convert them into any other form, at least in Python. Strongly typed languages like C#, Java and Swift are a little trickier.
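If you want to sanity-check the conditional-count logic before porting it to your own database, here is a small sketch using SQLite from Python. The table name and parameter style are assumptions standing in for the question's MySQL setup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE designs (start_dsgn INT, end_dsgn INT, min_cost_1 INT, min_cost_2 INT)")
rows = [(1, 2, 3, 100), (1, 3, 5, 153), (1, 4, 10, 230),
        (2, 1, 4, 68), (2, 3, 5, 134), (3, 1, 7, 78), (3, 2, 8, 120)]
conn.executemany("INSERT INTO designs VALUES (?, ?, ?, ?)", rows)

cost_1, cost_2 = 5, 100  # the user's inputs
# COUNT(expr) only counts non-NULL values, so rows failing the condition
# (where the CASE falls through to NULL) are excluded from each group.
result = dict(conn.execute(
    """SELECT start_dsgn,
              COUNT(CASE WHEN min_cost_1 <= ? OR min_cost_2 <= ? THEN end_dsgn END)
       FROM designs
       GROUP BY start_dsgn""", (cost_1, cost_2)))
print(result)  # {1: 2, 2: 2, 3: 1}
```

This reproduces the `{1:2, 2:2, 3:1}` dictionary from the question.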
|
You need to use a `GROUP BY` to combine the rows for each value of `start_dsgn` thus:
```
select start_dsgn, count(*)
from Table1
group by start_dsgn
```
Add the criteria for cost in a `WHERE` clause. How to get them into the statement depends upon the language you are using - you will have to do some research. In Java it could be:
```
select start_dsgn, count(*)
from Table1
where min_cost_1 <= ? or min_cost_2 <= ?
group by start_dsgn
```
This will return it as a rowset - from your question you may have to convert this into a JSON structure. That isn't strictly an SQL thing so again you will have to research.
Fiddle is here: <http://sqlfiddle.com/#!9/087c5/2>
|
SQLQuery - Counting Values For A Range Across 2 Values Grouped-By 3rd Value in A Single Table
|
[
"",
"mysql",
"sql",
"count",
"group-by",
""
] |
I need to be sure that I have at least 1Gb of free disk space before start doing some work in my database. I'm looking for something like this:
```
select pg_get_free_disk_space();
```
Is it possible? (I found nothing about it in docs).
PG: 9.3 & OS: Linux/Windows
|
PostgreSQL does not currently have features to directly expose disk space.
For one thing, which disk? A production PostgreSQL instance often looks like this:
* `/pg/pg94/`: a RAID6 of fast reliable storage on a BBU RAID controller in WB mode, for the catalogs and most important data
* `/pg/pg94/pg_xlog`: a fast reliable RAID1, for the transaction logs
* `/pg/tablespace-lowredundancy`: A RAID10 of fast cheap storage for things like indexes and `UNLOGGED` tables that you don't care about losing so you can use lower-redundancy storage
* `/pg/tablespace-bulkdata`: A RAID6 or similar of slow near-line magnetic storage used for old audit logs, historical data, write-mostly data, and other things that can be slower to access.
* The PostgreSQL logs are usually somewhere else again, but if that location fills up, the system may still stop. Where they go depends on a number of configuration settings, some of which (like syslog options) you can't see from PostgreSQL at all.
Then there's the fact that "free" space doesn't necessarily mean PostgreSQL can use it (think: disk quotas, system-reserved disk space), and the fact that free *blocks*/*bytes* isn't the only constraint, as many file systems also have limits on number of files (inodes).
How would a `SELECT pg_get_free_disk_space()` report all this?
Knowing the free disk space could be a security concern. If supported, it's something that'd only be exposed to the superuser, at least.
What you *can* do is use an untrusted procedural language like `plpythonu` to make operating system calls to interrogate the host OS for disk space information, using queries against `pg_catalog.pg_tablespace` and using the `data_directory` setting from `pg_settings` to discover where PostgreSQL is keeping stuff on the host OS. You also have to check for mount points (unix/Mac) / junction points (Windows) to discover if `pg_xlog`, etc, are on separate storage. This still won't really help you with space for logs, though.
I'd quite like to have a `SELECT * FROM pg_get_free_diskspace` that reported the main datadir space, and any mount points or junction points within it like for `pg_xlog` or `pg_clog`, and also reported each tablespace and any mount points within it. It'd be a set-returning function. Someone who cares enough would have to bother to implement it *for all target platforms* though, and right now, nobody wants it enough to do the work.
---
In the mean time, if you're willing to simplify your needs to:
* One file system
* Target OS is UNIX/POSIX-compatible like Linux
* There's no quota system enabled
* There's no root-reserved block percentage
* inode exhaustion is not a concern
then you can `CREATE LANGUAGE plpython3u;` and `CREATE FUNCTION` a `LANGUAGE plpython3u` function that does something like:
```
import os
st = os.statvfs(datadir_path)
return st.f_bavail * st.f_frsize
```
in a function that `returns bigint` and either takes `datadir_path` as an argument, or discovers it by doing an SPI query like `SELECT setting FROM pg_settings WHERE name = 'data_directory'` from within PL/Python.
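For reference, the same computation works as a standalone Python sketch on POSIX systems; the path argument is an assumption standing in for the `data_directory` setting:

```python
import os

def free_disk_bytes(path: str) -> int:
    """Free bytes available to a non-root user on the filesystem holding path."""
    st = os.statvfs(path)
    # f_bavail: blocks available to unprivileged users (excludes the
    # root-reserved blocks); f_frsize: fragment size in bytes
    return st.f_bavail * st.f_frsize

# e.g. require at least 1 GB free before starting heavy work
enough_space = free_disk_bytes("/") >= 1 * 1024**3
```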
If you want to support Windows too, see [Cross-platform space remaining on volume using python](https://stackoverflow.com/q/51658/398670) . I'd use Windows Management Interface (WMI) queries rather than using ctypes to call the Windows API though.
Or you could [use this function someone wrote in PL/Perlu](https://wiki.postgresql.org/wiki/Free_disk_space) to do it using `df` and `mount` command output parsing, which will probably only work on Linux, but hey, it's prewritten.
|
Here is a simple way to get free disk space without any extension language; just define a function using PL/pgSQL.
```
CREATE OR REPLACE FUNCTION sys_df() RETURNS SETOF text[]
LANGUAGE plpgsql AS $$
BEGIN
CREATE TEMP TABLE IF NOT EXISTS tmp_sys_df (content text) ON COMMIT DROP;
COPY tmp_sys_df FROM PROGRAM 'df | tail -n +2';
RETURN QUERY SELECT regexp_split_to_array(content, '\s+') FROM tmp_sys_df;
END;
$$;
```
Function usage:
```
select * from sys_df();
sys_df
-------------------------------------------------------------------
{overlay,15148428,6660248,7695656,46%,/}
{overlay,15148428,6660248,7695656,46%,/}
{tmpfs,65536,0,65536,0%,/dev}
{tmpfs,768284,0,768284,0%,/sys/fs/cgroup}
{/dev/sda2,15148428,6660248,7695656,46%,/etc/resolv.conf}
{/dev/sda2,15148428,6660248,7695656,46%,/etc/hostname}
{/dev/sda2,15148428,6660248,7695656,46%,/etc/hosts}
{shm,65536,8,65528,0%,/dev/shm}
{/dev/sda2,15148428,6660248,7695656,46%,/var/lib/postgresql/data}
{tmpfs,65536,0,65536,0%,/proc/kcore}
{tmpfs,65536,0,65536,0%,/proc/timer_list}
{tmpfs,65536,0,65536,0%,/proc/sched_debug}
{tmpfs,768284,0,768284,0%,/sys/firmware}
(13 rows)
```
Use `df $PGDATA | tail -n +2` instead of `df | tail -n +2` if you keep all your data under the same path on disk. In that case, the function returns a single row with the disk usage for the $PGDATA path.
**SECURITY NOTE**
*PROGRAM* can run any command through the shell, which makes it a double-edged sword. It is best to use a fixed command string, or at the very least to avoid passing any user input into it. [See details in the documentation](https://www.postgresql.org/docs/current/sql-copy.html).
|
How to get the current free disk space in Postgres?
|
[
"",
"sql",
"postgresql",
"postgresql-9.3",
"diskspace",
""
] |
While creating a table how to add table level constraint so as to check that a column with datatype datetime is not empty?
|
You would use something like this with `NOT NULL`:
```
CREATE TABLE [dbo].[MyTable](
[ID] [int] NOT NULL,
[MyField] [DATETIME] NOT NULL)
```
|
The NOT NULL constraint enforces a column to NOT accept NULL values.
The NOT NULL constraint enforces a field to always contain a value. This means that you cannot insert a new record, or update a record without adding a value to this field.
The following SQL enforces the "P\_Id" column and the "LastName" column to not accept NULL values:
```
CREATE TABLE PersonsNotNull
(
P_Id int NOT NULL,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Address varchar(255),
City varchar(255)
)
```
|
How to define table constraint to check non empty datetime?
|
[
"",
"sql",
"sql-server",
""
] |
I have a query which filters records in a specific datetime range. For testing purposes I created a query with variables, and it is not returning the expected result.
Here is the query:
```
Declare @vTimeFrom datetime = '2015-04-22 20:00:00.000'
Declare @vTimeTo datetime = '2015-04-23 08:00:00.000'
Declare @pTime datetime = '2015-04-22 21:00:00.000'
Select @pTime
where Convert(Varchar(5),@pTime,108)
BETWEEN Convert(Varchar(5),@vTimeFrom,108) and Convert(Varchar(5),@vTimeTo,108)
```
It outputs:
> No record found
The above query returns nothing.
But consider this query :
```
Declare @vTimeFrom datetime = '2015-04-22 20:00:00.000'
Declare @vTimeTo datetime = '2015-04-23 08:00:00.000'
Declare @pTime datetime = '2015-04-22 21:00:00.000'
Select @pTime
where Convert(Varchar(5),'2015-04-22 21:00:00.000',108)
between Convert(Varchar(5),'2015-04-22 20:00:00.000',108)
and Convert(Varchar(5),'2015-04-23 08:00:00.000',108)
```
It outputs:
> April, 22 2015 21:00:00
|
`Convert(Varchar(5),'2015-04-22 21:00:00.000',108)` is actually just `left('2015-04-22 21:00:00.000', 5)`, because the input is already a string, so the datetime style is ignored. So in the first case you're comparing time strings (`'21:00'` is not between `'20:00'` and `'08:00'`, since `'21:00' > '08:00'`), and in the second case you're comparing identical date prefixes (`'2015-'`), which trivially match:
```
Declare @vTimeFrom datetime = '2015-04-22 20:00:00.000'
Declare @vTimeTo datetime = '2015-04-23 08:00:00.000'
Declare @pTime datetime = '2015-04-22 21:00:00.000'
select
convert(Varchar(5),@pTime,108),
Convert(Varchar(5),@vTimeFrom,108),
Convert(Varchar(5),@vTimeTo,108),
Convert(Varchar(5),'2015-04-22 21:00:00.000',108),
Convert(Varchar(5),'2015-04-22 20:00:00.000',108),
Convert(Varchar(5),'2015-04-23 08:00:00.000',108)
------------------------------------------------------
21:00 20:00 08:00 2015- 2015- 2015-
```
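The effect is easy to reproduce outside SQL, since both comparisons are plain lexicographic string comparisons:

```python
# What the variable version compares: time-of-day strings.
# '21:00' is not between '20:00' and '08:00' lexicographically,
# because '21:00' > '08:00' as a string.
assert not ('20:00' <= '21:00' <= '08:00')

# What the literal version compares: the first five characters of
# each date string, which are all identical.
lo = '2015-04-22 20:00:00.000'[:5]
val = '2015-04-22 21:00:00.000'[:5]
hi = '2015-04-23 08:00:00.000'[:5]
print(lo, val, hi)  # 2015- 2015- 2015-
assert lo <= val <= hi
```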
|
```
Select Convert(Varchar(5),'2015-04-22 21:00:00.000',108), Convert(Varchar(5),@pTime,108) , @pTime
```
gives you the answer:
> 2015- | 21:00 | 2015-04-22 21:00:00
The first convert sees a plain string, so it assumes a varchar conversion and ignores the style attribute, while the second convert sees a datetime and formats it as a time.
To get the example without variables working you can use
```
Convert(Varchar(5), (cast ('2015-04-22 21:00:00.000' as datetime)),108)
```
to make sure convert is converting from datetime.
|
different results when using query with variables and without variables
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need help figuring out the cleanest way of doing this in SQL Server.
I am basically writing a query that checks whether a customer number appears in a subquery; if it does, it should return the service name for that customer number. This is my attempt, and it is not working.
Do you guys have any suggestions?
```
CASE WHEN aa.cust_no in (SELECT Cust_no FROM #Tabl1) THEN (SELECT ServiceName FROM #Tabl1) END AS Target
```
|
I get what you're trying to do, but your syntax needs to be changed. You can try a `LEFT JOIN`.
This query will give you an idea of what your statement should look like
```
Select tabl1.ServiceName Target
From SomeTable aa
Left Join #Tabl1 tabl1
On aa.cust_no = tabl1.Cust_no
```
If you want to return something else when a match is not found in `#Tabl1`, then you will need to use a `CASE WHEN`, or a `COALESCE`.
`Case When tabl1.ServiceName Is NOT NULL Then tabl1.ServiceName Else 'Unknown Target' End`
OR
`Coalesce (tabl1.ServiceName, 'Unknown Target') Target`
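A small runnable sketch of the LEFT JOIN plus COALESCE approach, using SQLite from Python (the table contents are invented, and the temp-table `#` prefix is dropped since SQLite doesn't use it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (cust_no INT);
CREATE TABLE tabl1 (cust_no INT, ServiceName TEXT);
INSERT INTO customers VALUES (1), (2), (3);
INSERT INTO tabl1 VALUES (1, 'Gold'), (3, 'Silver');
""")
# COALESCE supplies the fallback when the LEFT JOIN finds no match.
rows = conn.execute("""
    SELECT c.cust_no, COALESCE(t.ServiceName, 'Unknown Target') AS Target
    FROM customers c
    LEFT JOIN tabl1 t ON c.cust_no = t.cust_no
    ORDER BY c.cust_no
""").fetchall()
print(rows)  # [(1, 'Gold'), (2, 'Unknown Target'), (3, 'Silver')]
```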
|
instead of `case` expression try select `ServiceName` in this way:
```
SELECT
(SELECT TOP 1 ServiceName FROM #Tabl1 where Cust_no = aa.cust_no) AS Target
FROM ...
```
|
SQL Server WHEN and subqueries
|
[
"",
"sql",
"sql-server",
""
] |
I've a table `cSc_Role` with a column `RoleSettings`.
`RoleSettings` is datatype image.
The Content is something like this: 0x504B030414000000080000000000E637CA2A4
Now I need to update this column for one row.
Like this:
```
UPDATE cSc_Role
SET RoleSettings = '0x343240000000000E637CA2A430500'
WHERE Naming = 'User X'
```
But with binary data it seems like this is not possible to do it with a string.
The other option is, I can provide the image in a temporary .bak file.
And then do an `INSERT INTO`.
But with this solution I've read it is only possible to insert a complete row and not only a column. Or can I insert only a column with insert?
How can I update or insert one image-column in a table?
Thanks in advance.
|
Try converting to varbinary with style 1, which tells SQL Server to parse the string as a hex literal (without the style argument, the character bytes of the string itself would be stored instead):
```
UPDATE cSc_Role
SET RoleSettings = convert(VARBINARY(MAX), '0x343240000000000E637CA2A430500', 1)
WHERE Naming = 'User X'
```
|
If the above solution does not work, try the update below, removing the quotes `''` so the value becomes a binary literal:
```
UPDATE cSc_Role
SET RoleSettings = 0x343240000000000E637CA2A430500
WHERE Naming = 'User X'
```
Also, avoid using these data types (**ntext, text, and image**) in new development work, and plan to modify applications that currently use them. Use ***nvarchar(max), varchar(max), and varbinary(max)*** instead.
|
UPDATE datatype image in SQL-Table
|
[
"",
"sql",
"sql-server",
"insert",
"set",
""
] |
I have a table with following definition and data.
**Definition:**
```
CREATE TABLE [dbo].[TestTB]
(
[CREATEDATE] [nvarchar](50) NULL
) ON [PRIMARY]
```
**Data:**
```
10/9/2014
1/26/2015
2/16/2015
```
When I run the query:
```
Select
CAST(CREATEDATE AS DATETIME) as CREATEDATE
FROM
[dbo].[TestTB]
```
It is throwing error:
> Msg 241, Level 16, State 1, Line 1
> Conversion failed when converting date and/or time from character string.
The above does not work even after running
```
SET DATEFORMAT dmy
```
However the following query works fine
```
DECLARE @data nvarchar(50)
SET @data = ' 10/9/2014 '
Select CAST(@data as DateTime)
```
Above query returns: 2014-10-09 00:00:00.000
How do I convert a date string (of mm/dd/yyyy format) stored in a column to datetime?
|
First, if your table column is of type `DATETIME`, it will store the data in the format "2014-10-09 00:00:00.000" whether you convert it or not. If that's not the case, and you have SQL Server 2008 or above, you can use this:
```
DECLARE @data nvarchar(50)
SET @data = '10/9/2014'
IF(ISDATE(@data)>0)
BEGIN
SELECT CONVERT(DATE, @data)
END
```
Otherwise
```
DECLARE @data nvarchar(50)
SET @data = '10/9/2014'
IF(ISDATE(@data)>0)
BEGIN
SELECT CONVERT(DATETIME, @data)
END
```
## To Insert into table
```
INSERT INTO dbo.YourTable
SELECT CREATEDATE FROM
(
SELECT
(CASE WHEN (ISDATE(CREATEDATE) > 0) THEN CONVERT(DATE, CREATEDATE)
ELSE CONVERT(DATE, '01/01/1900') END) as CREATEDATE
FROM
[dbo].[TestTB]
) AS Temp
WHERE
CREATEDATE <> CONVERT(DATE, '01/01/1900')
```
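The same validate-then-convert idea can be sketched outside the database in Python; the sentinel-or-`None` handling stands in for the answer's '01/01/1900' sentinel:

```python
from datetime import datetime

def to_date(value: str):
    """Parse an m/d/yyyy string; return None for unparseable values
    (the SQL version uses a 1900-01-01 sentinel instead)."""
    try:
        return datetime.strptime(value.strip(), "%m/%d/%Y").date()
    except ValueError:
        return None

data = ["10/9/2014", "1/26/2015", "2/16/2015", "not a date"]
parsed = [to_date(v) for v in data]
print(parsed)  # last entry is None
```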
|
Try this out:
```
SELECT IIF(ISDATE(CREATEDATE) = 1, CONVERT(DATETIME, CREATEDATE, 110), CREATEDATE) as DT_CreateDate
FROM [dbo].[TestTB]
```
[SQL Fiddle Solution](http://sqlfiddle.com/#!6/5b56e/2/0)
|
Convert varchar column to datetime in sql server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"datetime",
""
] |
I'm doing homework for my class and I can't figure out how to properly answer this question:
"Determine which books generate less than a 55% profit and how many copies of these books have been sold. Summarize your findings for management, and include a copy of the query used to retrieve data from the database tables."
I tried taking a shot at it but I can't seem to get it to come out the way I want it to. It always has data that doesn't seem to go together. Below is my code:
```
SELECT isbn, b.title, b.cost, b.retail, o.quantity "# of times Ordered",
ROUND(((retail-cost)/retail)*100,1)||'%' "Percent Profit",
o.quantity "# of times Ordered"
FROM books o JOIN orderitems o USING(isbn);
```
It works in the sense that I get the data I need, but the output contains the same ISBN repeated across multiple rows (screenshot omitted).
I have a theory that because the table "Order Items" has multiple orders with the same isbn and different quantities it's selecting all of them. Is there a way to combine them? If not could anyone help me get rid of the redundant data caused by the JOIN?
Thank you!
|
I've had to do similar things in SQL Server / MySQL. You need to group by the columns in which you see repeated data that you do not care about, and you need to SUM the field whose values are important to you, probably something like this...
```
SELECT isbn, b.title, b.cost, b.retail,
       ROUND(((b.retail - b.cost) / b.retail) * 100, 1) || '%' "Percent Profit",
       SUM(o.quantity) "# of Copies Ordered"
FROM books b JOIN orderitems o USING (isbn)
GROUP BY isbn, b.title, b.cost, b.retail;
```
If you need more information, go here and search for SUM: <https://docs.oracle.com/javadb/10.6.1.0/ref/rrefsqlj32654.html>
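The grouping fix can be checked with a toy dataset; here is a sketch using SQLite from Python (the book data is invented, and SQLite happens to share Oracle's `||` concatenation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (isbn TEXT, title TEXT, cost REAL, retail REAL);
CREATE TABLE orderitems (isbn TEXT, quantity INT);
INSERT INTO books VALUES ('111', 'Book A', 10.0, 20.0), ('222', 'Book B', 30.0, 40.0);
INSERT INTO orderitems VALUES ('111', 2), ('111', 3), ('222', 1);
""")
# Grouping collapses the repeated order rows per ISBN; SUM totals the copies.
rows = conn.execute("""
    SELECT b.isbn, b.title,
           ROUND((b.retail - b.cost) / b.retail * 100, 1) || '%' AS percent_profit,
           SUM(o.quantity) AS copies_sold
    FROM books b JOIN orderitems o ON b.isbn = o.isbn
    GROUP BY b.isbn, b.title, b.cost, b.retail
    ORDER BY b.isbn
""").fetchall()
print(rows)  # [('111', 'Book A', '50.0%', 5), ('222', 'Book B', '25.0%', 1)]
```

Note how ISBN 111 appears once with its two order rows summed, instead of repeating.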
|
You are focusing on books, so you need to group this query on `isbn`, `b.title`, `b.cost`, `b.retail` and `ROUND(((retail-cost)/retail)*100,1)`.
You should `SUM(o.quantity)` to get a total number of copies sold for each book.
|
SQL: Repeating data in JOIN
|
[
"",
"sql",
"oracle",
""
] |
I am new to `WITH RECURSIVE` in PostgreSQL. I have a reasonably standard recursive query that is following an adjacency list. If I have, for example:
```
1 -> 2
2 -> 3
3 -> 4
3 -> 5
5 -> 6
```
it produces:
```
1
1,2
1,2,3
1,2,3,4
1,2,3,5
1,2,3,5,6
```
What I would like is to have just:
```
1,2,3,4
1,2,3,5,6
```
But I can't see how to do this in Postgres. This would seem to be "choose the longest paths" or "choose the paths that are not contained in another path". I can probably see how to do this with a join on itself, but that seems quite inefficient.
An example query is:
```
WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
SELECT g.id, g.link, g.data, 1, ARRAY[g.id], false
FROM graph g
UNION ALL
SELECT g.id, g.link, g.data, sg.depth + 1, path || g.id, g.id = ANY(path)
FROM graph g, search_graph sg
WHERE g.id = sg.link AND NOT cycle
)
SELECT * FROM search_graph;
```
|
You already have a solution at your fingertips with `cycle`; just add a predicate at the end. But adjust your break condition by one level: currently you are appending one node too many:
```
WITH RECURSIVE search AS (
SELECT id, link, data, ARRAY[g.id] AS path, (link = id) AS cycle
FROM graph g
WHERE NOT EXISTS (SELECT FROM graph WHERE link = g.id)
UNION ALL
SELECT g.id, g.link, g.data, s.path || g.id, g.link = ANY(s.path)
FROM search s
JOIN graph g ON g.id = s.link
WHERE NOT s.cycle
)
SELECT *
FROM search
WHERE cycle;
-- WHERE cycle IS NOT FALSE; -- alternative if link can be NULL
```
* Also including a start condition like [mentioned by @wildplasser](https://stackoverflow.com/a/29805241/939860).
* Init condition for `cycle` is `(link = id)` to catch shortcut cycles. Not necessary if you have a `CHECK` constraint to disallow that in your table.
* This is assuming all graphs are terminated with a cycle or `link IS NULL` and there is a FK constraint from `link` to `id` in the same table.
The exact implementation depends on missing details. If `link` is not actually a link (no referential integrity), you need to adapt ...
|
Just add the extra clause to the final query, like in:
```
WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
SELECT g.id, g.link, g.data, 1, ARRAY[g.id], false
FROM graph g
-- BTW: you should add a START-CONDITION here, like:
-- WHERE g.id = 1
-- or even (to find ALL linked lists):
-- WHERE NOT EXISTS ( SELECT 13
-- FROM graph nx
-- WHERE nx.link = g.id
-- )
UNION ALL
SELECT g.id, g.link, g.data, sg.depth + 1, path || g.id, g.id = ANY(path)
FROM graph g, search_graph sg
WHERE g.id = sg.link AND NOT cycle
)
SELECT * FROM search_graph sg
WHERE NOT EXISTS ( -- <<-- extra condition
SELECT 42 FROM graph nx
WHERE nx.id = sg.link
);
```
---
Do note that:
* the `not exists(...)` -clause tries to join *exactly* the same record as the second *leg* of the recursive union.
* So: they are mutually exclusive.
* if it *would* exist, it should have been *appended* to the "list" by the recursive query.
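SQLite also supports `WITH RECURSIVE`, so the "keep only rows whose link leads nowhere" idea can be tried out from Python. This sketch models leaf nodes as rows with a NULL link (an assumption about the schema) and omits the answers' cycle handling, since the example graph is acyclic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE graph (id INT, link INT);
INSERT INTO graph VALUES (1,2), (2,3), (3,4), (3,5), (5,6), (4,NULL), (6,NULL);
""")
paths = [r[0] for r in conn.execute("""
    WITH RECURSIVE search(id, link, path) AS (
        -- start only from roots: ids that never appear as a link
        SELECT id, link, CAST(id AS TEXT)
        FROM graph g
        WHERE NOT EXISTS (SELECT 1 FROM graph WHERE link = g.id)
        UNION ALL
        SELECT g.id, g.link, s.path || ',' || g.id
        FROM search s JOIN graph g ON g.id = s.link
    )
    -- keep only finished paths: the link continues nowhere
    SELECT path FROM search WHERE link IS NULL
    ORDER BY path
""")]
print(paths)  # ['1,2,3,4', '1,2,3,5,6']
```

Only the two maximal paths survive; every prefix is filtered out because its `link` still has a continuation.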
|
WITH RECURSIVE query to choose the longest paths
|
[
"",
"sql",
"postgresql",
"common-table-expression",
"recursive-query",
"directed-graph",
""
] |
I'm new to sql and I've been encountering the error
"argument of WHERE must be type boolean, not type numeric" from the following code
```
select subject_no, subject_name, class_size from Subject
where (select AVG(mark) from Grades where mark > 75)
group by subject_no
order by subject_no asc
```
To assist in understanding the question: what I am attempting to do is list subjects with an average mark less than 75.
By my understanding, though, the WHERE argument would be Boolean, as the average mark for a class would be either above or below 75 and therefore true or false. Any assistance in correcting my understanding is greatly appreciated.
|
**Edited!**
Use a correlated sub-query to find subjects with `AVG(mark) < 75`. There is no need for `GROUP BY` since the outer query has no aggregate functions; use `DISTINCT` instead to remove duplicates:
```
select distinct subject_no, subject_name, class_size
from Subject s
where (select AVG(mark) from grades g
where g.subject_no = s.subject_no) < 75
order by subject_no asc
```
Note, I assumed there's `subject_no` column in the `Grades` table too.
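To see the correlated subquery in action, here is a sketch using SQLite from Python with invented marks:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Subject (subject_no INT, subject_name TEXT, class_size INT);
CREATE TABLE Grades (subject_no INT, mark REAL);
INSERT INTO Subject VALUES (1, 'Maths', 30), (2, 'History', 25);
INSERT INTO Grades VALUES (1, 80), (1, 90), (2, 60), (2, 70);
""")
# The subquery is re-evaluated per Subject row; its scalar result (that
# subject's average mark) is what the < 75 comparison applies to.
rows = conn.execute("""
    SELECT subject_no, subject_name
    FROM Subject s
    WHERE (SELECT AVG(mark) FROM Grades g
           WHERE g.subject_no = s.subject_no) < 75
    ORDER BY subject_no
""").fetchall()
print(rows)  # [(2, 'History')]
```

Maths averages 85 and is excluded; History averages 65 and is returned.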
|
First of all, the return value of (`select AVG(mark) from Grades where mark > 75`) is not Boolean as you assumed; it is exactly `AVG(mark)` itself. It works just like:
```
select 1+1 from dual           -- returns 2
select 'hello world' from dual -- returns the string 'hello world'
```
So, if you want to list subjects with an average mark less than 75, the WHERE clause should look more like:
```
mark < (select AVG(mark) from Grades where mark > 75)
```
This is an expression that returns a Boolean value.
However, the statement explaining your question is hard to grasp :P
I guess a programmer needs a little more time to get used to SQL at first. Good luck! If you could explain your question more accurately, it would be much easier to give the answer you are looking for.
|
SQL Where argument returning as numeric and not boolean
|
[
"",
"sql",
"boolean",
"arguments",
"where-clause",
""
] |
I need to get the output of a select query depending on certain conditions.
Meaning, if `(id = uid)`,
then I need the below query
```
select * from table1 where id=5;
```
else
I need the below one
```
select * from table1 where id=10
```
I know I can use an IF condition for this. But my query is a very long one, so with IF/ELSE it would look like:
```
if(@id=@uid)
begin
select * from table1 where id=5;// query 1
end
else
select * from table1 where id=10;//query 2
```
but here I have to repeat the entire query just for a single check. I am hoping I can do something like this instead:
```
declare @id int=4;
declare @uid=10;
select * from table1 where
case
when @id=@uid
then
id=5
else
id=10;
end
```
**Update**
I need one more condition too. In this case `id=5` and `uid=10`:
if `(id = uid)` then
```
select * from table1 where id=5
```
and if `(id != uid)` then
```
select * from table1
```
Something like this.
|
You can use the `case` expression to return the value `id` should be equal to:
```
SELECT *
FROM table1
WHERE id = CASE WHEN @id = @uid THEN 5 ELSE 10 END;
```
EDIT:
The updated requirement in the question is to return all rows when `@id != @uid`. This can be done by comparing `id` to `id`:
```
SELECT *
FROM table1
WHERE id = CASE WHEN @id = @uid THEN 5 ELSE id END;
```
Alternatively, with this updated requirement, a simple `or` expression might be simpler to use:
```
SELECT *
FROM table1
WHERE @id = @uid OR id = 5;
```
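Both behaviors of the CASE-in-WHERE form can be verified quickly with SQLite from Python (the table data is invented, and a bound 1/0 parameter stands in for the `@id = @uid` test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INT)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(5,), (10,), (15,)])

def pick(uid_matches: bool):
    # CASE picks the comparison value: 5 when the variables match,
    # otherwise the row's own id, which makes the predicate true
    # for every row (so all rows come back).
    return [r[0] for r in conn.execute(
        "SELECT id FROM table1 WHERE id = CASE WHEN ? THEN 5 ELSE id END ORDER BY id",
        (1 if uid_matches else 0,))]

print(pick(True))   # [5]
print(pick(False))  # [5, 10, 15]
```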
|
```
SELECT
*
FROM
table1
WHERE
(
@id = @uid
AND
id =5
)
OR
(
not @id = @uid
AND
id=10
)
```
|
How to use case statement inside an SQL select Query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
"case",
""
] |
I need help doing division calculations with one decimal place. I am trying to sum two values and divide the result. My Y and X values are integers.
```
SELECT a.YearMonth,
(CONVERT(DECIMAL(9,1),Y_Service)+(CONVERT(DECIMAL(9,1),x_Service)/2)) AS Average_Of_Period
FROM table_IB a
INNER JOIN table_UB b ON a.YearMonth=b.YearMonth
```
This is the result I get:
```
YearMonth| Average_Of_Period
2015-03 276318.500000
```
The correct answer is: `185532,5`
My Y and X values differ from 4 digits to 6 digits
|
Looks like your operation is `y + (x / 2)`.
Shouldn't that be `(y + x) / 2`?
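Plugging in values consistent with the question's two results shows the precedence difference. Note the x and y below are inferred so that both reported numbers come out, not taken from the original post:

```python
# Values chosen so the two expressions reproduce the question's
# observed result (276318.5) and the expected one (185532.5).
y_service = 181572
x_service = 189493

wrong = y_service + x_service / 2    # what the original query computes
right = (y_service + x_service) / 2  # the intended average
print(wrong, right)  # 276318.5 185532.5
```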
|
Readability matters. In case you have to do some ugly casts before a calculation, you can always do it with a common table expression or [outer apply](https://stackoverflow.com/a/18126961/1744834):
```
select
a.YearMonth,
(calc.Y_Service + calc.x_Service) / 2 as Average_Of_Period
from table_IB as a
inner join table_UB as b on a.YearMonth = b.YearMonth
outer apply (select
CONVERT(DECIMAL(9,1), Y_Service) as Y_Service,
CONVERT(DECIMAL(9,1), x_Service) as x_Service
) as calc
```
And then you wouldn't miss that you're computing `Y_Service + x_Service / 2` instead of `(Y_Service + x_Service) / 2`.
|
Addition and Division
|
[
"",
"sql",
"sql-server",
""
] |
I have a problem with inserting one XMLType into another XMLType at a specified place in PL/SQL.
The first variable v\_xml has the form:
```
<ord>
<head>
<ord_code>123</ord_code>
<ord_date>01-01-2015</ord_date>
</head>
</ord>
```
And the second v\_xml2:
```
<pos>
<pos_code>456</pos_code>
<pos_desc>description</pos_desc>
</pos>
```
My goal is to get something like this:
```
<ord>
<head>
<ord_code>123</ord_code>
<ord_date>01-01-2015</ord_date>
</head>
<!-- put the second variable in this place - after closing <head> tag -->
<pos>
<pos_code>456</pos_code>
<pos_desc>description</pos_desc>
</pos>
</ord>
```
What should I do in my code?
```
declare
v_xml xmltype;
v_xml2 xmltype;
begin
-- some code
-- some code
-- v_xml and v_xml2 has the form as I define above
end;
```
Is anyone able to help me with this problem? As far as I know, there are functions like insertchildxml and appendchildxml for this sort of thing.
I found a few solutions in pure SQL, but I don't know how to carry them over to PL/SQL.
Thanks!
|
You can use mentioned [appendChildXML](http://docs.oracle.com/cd/B19306_01/appdev.102/b14259/xdb04cre.htm#CHDFGJHH), like here:
```
declare
v_xml xmltype := xmltype('<ord>
<head>
<ord_code>123</ord_code>
<ord_date>01-01-2015</ord_date>
</head>
</ord>');
v_xml2 xmltype:= xmltype('<pos>
<pos_code>456</pos_code>
<pos_desc>description</pos_desc>
</pos>');
v_output xmltype;
begin
select appendChildXML(v_xml, 'ord', v_xml2)
into v_output from dual;
-- output result
dbms_output.put_line( substr( v_output.getclobval(), 1, 1000 ) );
end;
```
Output:
```
<ord>
<head>
<ord_code>123</ord_code>
<ord_date>01-01-2015</ord_date>
</head>
<pos>
<pos_code>456</pos_code>
<pos_desc>description</pos_desc>
</pos>
</ord>
```
|
[appendChildXML](http://docs.oracle.com/database/121/SQLRF/functions012.htm#SQLRF06201) is deprecated as of 12.1.
So here is a solution using [XMLQuery](http://docs.oracle.com/database/121/SQLRF/functions264.htm#SQLRF06209)
```
DECLARE
l_head_xml XMLTYPE := XMLTYPE.CREATEXML('<ord>
<head>
<ord_code>123</ord_code>
<ord_date>01-01-2015</ord_date>
</head>
</ord>');
l_pos_xml XMLTYPE := XMLTYPE.CREATEXML('<pos>
<pos_code>456</pos_code>
<pos_desc>description</pos_desc>
</pos>');
l_complete_xml XMLTYPE;
BEGIN
SELECT XMLQUERY('for $i in $h/ord/head
return <ord>
{$i}
{for $j in $p/pos
return $j}
</ord>'
PASSING l_head_xml AS "h",
l_pos_xml AS "p"
RETURNING CONTENT)
INTO l_complete_xml
FROM dual;
dbms_output.put_line(l_complete_xml.getstringval());
END;
```
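Outside Oracle, the same "append a sibling after `</head>`" transformation can be illustrated with Python's ElementTree; this is only a sketch of the intended result, not a PL/SQL substitute:

```python
import xml.etree.ElementTree as ET

ord_xml = ET.fromstring(
    "<ord><head><ord_code>123</ord_code><ord_date>01-01-2015</ord_date></head></ord>")
pos_xml = ET.fromstring(
    "<pos><pos_code>456</pos_code><pos_desc>description</pos_desc></pos>")

# Appending to the root places <pos> after the existing <head> child.
ord_xml.append(pos_xml)
result = ET.tostring(ord_xml, encoding="unicode")
print(result)
```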
|
Insert xmltype into xmltype in specified place [PL/SQL]
|
[
"",
"sql",
"oracle",
"plsql",
"oracle11g",
""
] |
I have been looking at the below sql script for ages now and I can't see the problem! It's getting error 1064 - which could be anything...
```
CREATE TABLE order (order_no INTEGER NOT NULL AUTO_INCREMENT,
vat_id INTEGER NOT NULL,
order_status VARCHAR(30) NOT NULL,
order_pick_date DATE,
order_ship_from INTEGER NOT NULL,
employee_id INTEGER NOT NULL,
payment_id INTEGER,
PRIMARY KEY (order_no))
ENGINE = MYISAM;
```
|
`Order` is a reserved word in SQL, pick a different name for your table.
|
As others have already pointed out, the reserved word `order` is the issue.
However, if you need to, you can still use it by enclosing it in backticks / backquotes:
```
`order`
```
The corrected SQL statement (which worked for me in MySQL 5.5.24) is:
```
CREATE TABLE
`order`
(
order_no INTEGER NOT NULL AUTO_INCREMENT
,vat_id INTEGER NOT NULL
,order_status VARCHAR(30) NOT NULL
,order_pick_date DATE
,order_ship_from INTEGER NOT NULL
,employee_id INTEGER NOT NULL
,payment_id INTEGER
,PRIMARY KEY (order_no)
)
ENGINE = MYISAM;
```
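The same trap exists in other engines. SQLite, which also reserves ORDER and accepts MySQL-style backticks for identifiers, makes it easy to demonstrate from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unquoted, the reserved word is a syntax error.
try:
    conn.execute("CREATE TABLE order (order_no INTEGER PRIMARY KEY)")
    failed = False
except sqlite3.OperationalError:
    failed = True
print(failed)  # True

# Quoted (backticks here; standard SQL would use double quotes), it works.
conn.execute("CREATE TABLE `order` (order_no INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO `order` DEFAULT VALUES")
count = conn.execute("SELECT COUNT(*) FROM `order`").fetchone()[0]
print(count)  # 1
```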
|
MySQL - CREATE TABLE - Syntax Error - Error 1064
|
[
"",
"mysql",
"sql",
""
] |
I have a table with data like.
```
ItemCode
1000
1002
1003
1020
1060
```
I'm trying to write a SQL statement to get the minimum number (ItemCode) that is NOT in this table and it should be able to get the next lowest number once the previous minimum order ID has been inserted in the table but also skip the numbers that are already in the DB. I only want to get 1 result each time the query is run.
So, it should get `1001` as the first result based on the table above. Once the `ItemCode = 1001` has been inserted into the table, the next result it should get should be `1004` because `1000` to `1003` already exist in the table.
Based on everything I have seen online, I think I have to use a WHILE loop to do this. Here is my code, which I'm still working on:
```
DECLARE @Count int
SET @Count= 0
WHILE Exists (Select ItemCode
from OITM
where itemCode like '10%'
AND convert(int,ItemCode) >= '1000'
and convert(int,ItemCode) <= '1060')
Begin
SET @COUNT = @COUNT + 1
select MIN(ItemCode) + @Count
from OITM
where itemCode like '10%'
AND convert(int,ItemCode) >= '1000'
and convert(int,ItemCode) <= '1060'
END
```
I feel like there has to be an easier way to accomplish this. Is there a way for me to say...
**select the minimum number between 1000 and 1060 that doesn't exist in table X**
EDIT: Creating a new table isn't an option in my case
Final Edit: Thanks guys! I got it. Here is my final query that returns exactly what I want. I knew I was making it too complicated for no reason!
```
With T0 as ( select convert(int,ItemCode) + row_number() over (order by convert(int,ItemCode)) as ItemCode
from OITM
where itemCode like '10%'
AND convert(int,ItemCode) >= '1000'
And convert(int,ItemCode) <= '1060')
Select MIN(convert(varchar,ItemCode)) as ItemCode
from T0
where convert(int,ItemCode) Not in (Select convert(int,ItemCode)
from OITM
where itemCode like '10%'
AND convert(int,ItemCode) >= '1000'
and convert(int,ItemCode) <= '1060');
```
|
This should do the trick. Here you are generating a sequential number for the rows, then comparing each row with the next row (done by the join condition), filtering to those rows where the difference is not 1, ordering by sequence, and finally picking the top-most.
```
;with c as(select ItemCode as id, row_number() over(order by ItemCode) rn from OITM)
select top 1 c1.id + 1 as NewID
from c as c1
join c as c2 on c1.rn + 1 = c2.rn
where c2.id - c1.id <> 1
order by c1.rn
```
|
You could use `row_number()` to produce sequential values for each row, and then look for the first row where the `row_number()` doesn't match the value stored in the table. My SQL Server installation isn't working at the moment and SQL Fiddle seems to be down too, so I wrote this without being able to test it, but something like this should work:
```
declare @lowerBound int = 1000;
declare @upperBound int = 1060;
declare @x table ([id] int);
insert @x values (1000), (1002), (1003), (1020), (1060);
with [SequenceCTE] as
(
select
[id],
[seq] = (@lowerBound - 1) + row_number() over (order by [id])
from
@x
)
select top 1
[seq]
from
[SequenceCTE]
where
[seq] != [id] and
[seq] <= @upperBound;
```
**EDIT:** [Here](https://dbfiddle.uk/uq4DVU20) is a SQL Fiddle that demonstrates this approach. I don't know why the site wasn't working for me before. It doesn't seem to like my `declare` statements for some reason so I hard-coded the bounds instead, but hopefully it still gets the idea across.
|
Select minimum number in a range
|
[
"",
"sql",
"sql-server",
"rdbms",
""
] |
I have a rough understanding of why `= null` in SQL and `is null` are not the same, from questions like [this one](https://stackoverflow.com/questions/1843451/why-does-null-null-evaluate-to-false-in-sql-server).
But then, why is
```
update table
set column = null
```
a valid SQL statement (at least in Oracle)?
From that [answer](https://stackoverflow.com/a/1844511/2986852), I know that `null` can be seen as somewhat "UNKNOWN", and therefore an SQL statement with `where column = null` "should" return all rows, because the value of `column` is no longer an unknown value. I set it to `null` explicitly ;)
Where am I wrong/ do not understand?
So, if my question is maybe unclear:
Why is `= null` valid in the `set` clause, but not in the `where` clause of an SQL statement?
|
SQL doesn't have different graphical signs for assignment and equality operators like languages such as [c](/questions/tagged/c "show questions tagged 'c'") or [java](/questions/tagged/java "show questions tagged 'java'") have. In such languages, `=` is the **assignment operator**, while `==` is the **equality operator**. In SQL, `=` is used for both cases, and interpreted contextually.
In the `where` clause, `=` acts as the equality operator (similar to `==` in C). I.e., it checks if both operands are equal, and returns `true` if they are. As you mentioned, `null` is not a value - it's the lack of a value. Therefore, it cannot be equal to any other value.
In the `set` clause, `=` acts as the assignment operator (similar to `=` in C). I.e., it sets the left operand (a column name) with the value of the right operand. This is a perfectly legal statement - you are declaring that you do not know the value of a certain column.
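The asymmetry is easy to demonstrate end to end. A minimal sketch using Python's `sqlite3` (any SQL engine behaves the same way): the assignment succeeds, `= NULL` matches nothing, and `IS NULL` finds the row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])

# set clause: '=' assigns NULL -- perfectly legal
conn.execute("UPDATE t SET val = NULL WHERE id = 1")

# where clause: '= NULL' evaluates to UNKNOWN for every row, so nothing matches
eq_count = conn.execute("SELECT COUNT(*) FROM t WHERE val = NULL").fetchone()[0]

# 'IS NULL' is the correct predicate and does find the updated row
is_count = conn.execute("SELECT COUNT(*) FROM t WHERE val IS NULL").fetchone()[0]
print(eq_count, is_count)  # 0 1
```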
|
They are completely different operators, even if you write them the same way.
* In a `where` clause, `=` is a **comparison operator**
* In a `set` clause, `=` is an **assignment operator**
The assignment operator allows you to "clear" the data in the column and set it to the "null value".
|
is null vs. equals null
|
[
"",
"sql",
"oracle",
"null",
"sql-update",
""
] |
I have a table `q_data` like so:
```
cuid timestamp_from timestamp_to
A1 2014-12-01 22:04:00 2014-12-01 22:04:21
A1 2014-12-04 22:05:00 2014-12-04 22:05:25
A2 2014-12-06 20:04:00 2014-12-06 20:04:21
A2 2014-12-07 19:04:00 2014-12-07 19:04:21
```
and a table `patients_` like so:
```
cuid last_visit
A1 2014-12-03
A2 2014-12-05
```
I wish to count the number of rows per cuid in q\_data such that the timestamp\_to is earlier than the last\_visit date in patients\_. So for the data shown above, I expect my query to give the following result.
```
cuid day_count
A1 1
A2 1
```
I am trying to do this using the following query:
```
create table tmp
select
q.cuid as cuid,
count(distinct(q.query_to)) as day_count
from q_data q
inner join patients_ p on q.cuid = p.cuid
group by q.cuid
having datediff(p.last_visit, date(q.query_to)) >= 0
;
```
but I get an error:
```
ERROR 1054 (42S22): Unknown column 'p.last_visit' in 'having clause'
```
I shall appreciate if someone could help me understand the mistake in my query. Thanks.
|
Your select query
```
select
q.cuid as cuid,
count(distinct(q.query_to)) as day_count
from q_data q
inner join patients_ p on q.cuid = p.cuid
group by q.cuid
having datediff(p.last_visit, date(q.query_to)) >= 0;
```
If you look closely at your having clause, you have used two columns (`p.last_visit` and `q.query_to`) that are neither aggregated nor part of the `group by`; that is invalid and causes the error you are getting.
One thing you can do is move that condition into the `where` clause:
```
select q.cuid as cuid, count(distinct(q.query_to)) as day_count
from q_data q
inner join patients_ p on q.cuid = p.cuid
where datediff(p.last_visit, date(q.query_to)) >= 0
group by q.cuid
```
|
Give this a shot:
```
SELECT
q.cuid, COUNT(*) as day_count
FROM
patients_ as p
LEFT JOIN
q_data as q
ON
(p.cuid = q.cuid)
WHERE
(DATE(q.query_to) < p.last_visit)
GROUP BY
q.cuid
```
|
Doing a count on a field in a mysql table based on a condition evaluated from a join
|
[
"",
"mysql",
"sql",
"database",
"date",
""
] |
```
;WITH CTE AS
(
Select * From
(
SELECT ROW_NUMBER() OVER(ORDER BY StatusDate) AS SrNo, FirstName
From Tab1
UNION ALL
SELECT ROW_NUMBER() OVER(ORDER BY StatusDate) AS SrNo, FirstName
From Tab2
) v
)
Select * From CTE
```
Is there anyway I could continue after the `Row_Number()` generated in Tab1. So if 1st result in `UNION` ended at 10000 the 2nd table in UNION should start from 10001
**Note:** I want the `Row_Number()` as it is inside the `Select * From`
Thus I can't change it to:
```
;WITH CTE AS
(
Select *, ROW_NUMBER() OVER(ORDER BY StatusDate) AS SrNo From
(
SELECT FirstName
From Tab1
UNION ALL
SELECT FirstName
From Tab2
) v
)
Select * From CTE
```
Any help is greatly appreciated.
|
Since you're not partitioning the data, the maximum row_number should be equal to the count of rows, so you could simply add the count of rows in tab1 to the row number in tab2:
```
;WITH CTE AS
(
SELECT * FROM
(
SELECT
ROW_NUMBER() OVER(ORDER BY StatusDate) AS SrNo,
FirstName
FROM Tab1
UNION ALL
SELECT
ROW_NUMBER() OVER(ORDER BY StatusDate) + (SELECT COUNT(*) FROM tab1) AS SrNo,
FirstName
FROM Tab2
) v
)
SELECT * FROM CTE
```
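The offset trick is quick to check. A sketch using Python's `sqlite3` (which supports `ROW_NUMBER`) with two tiny assumed tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab1 (firstname TEXT, statusdate TEXT);
    CREATE TABLE tab2 (firstname TEXT, statusdate TEXT);
    INSERT INTO tab1 VALUES ('Ann', '2015-01-01'), ('Bob', '2015-01-02');
    INSERT INTO tab2 VALUES ('Cid', '2015-01-03'), ('Dee', '2015-01-04');
""")

# The second branch continues numbering where the first branch left off
rows = conn.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY statusdate) AS srno, firstname
    FROM tab1
    UNION ALL
    SELECT ROW_NUMBER() OVER (ORDER BY statusdate)
               + (SELECT COUNT(*) FROM tab1) AS srno, firstname
    FROM tab2
""").fetchall()
print(sorted(rows))  # [(1, 'Ann'), (2, 'Bob'), (3, 'Cid'), (4, 'Dee')]
```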
|
Your requirements are not completely clear, but from what I can interpret, this is what I would do.
```
;WITH UnionedData AS
(
SELECT FirstName, StatusDate, 10 AS SortOrder
FROM Tab1
UNION
SELECT FirstName, StatusDate, 20 AS SortOrder
FROM Tab2
)
SELECT FirstName, ROW_NUMBER() OVER(ORDER BY SortOrder, StatusDate) AS SrNo
FROM UnionedData
```
|
Row_Number() continue in Union query
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I want to create an alphanumeric sequence in oracle. Table name is `rel_details` it consists of four columns.
```
rel_id
rel_name
rel_modified_date
rel_desc
```
In `rel_id` i want to generate ID like `REL230420151001`
`REL` is a string ,
`23042015` is today's date,
`1001` is a starting number.
How to create this type sequence.
|
If you are on **12c**, then here is one way using **[IDENTITY column](http://lalitkumarb.wordpress.com/2015/01/20/identity-column-autoincrement-functionality-in-oracle-12c/)** and **VIRTUAL column**.
*Identity column was introduced in version 12c, and virtual column was introduced in version 11g.*
```
SQL> CREATE TABLE t
2 (
3 ID NUMBER GENERATED ALWAYS AS IDENTITY
4 START WITH 1000 INCREMENT BY 1,
5 text VARCHAR2(50),
6 dt DATE DEFAULT SYSDATE,
7 my_text varchar2(1000) GENERATED ALWAYS AS (text||to_char(dt, 'DDMMYYYY')||ID) VIRTUAL
8 );
Table created.
SQL>
SQL> INSERT INTO t(text) VALUES ('REL');
1 row created.
SQL>
SQL> SELECT text, my_text FROM t;
TEXT MY_TEXT
----- ------------------------------
REL REL230420151000
SQL>
```
**I created identity column to start with 1000**, you could customize the way you want.
There is one small trick about the VIRTUAL column. You will have to **explicitly cast it as varchar2** with fixed size, else the implicit conversion will make it up to **maximum size**. See this for more details [Concatenating numbers in virtual column expression throws ORA-12899: value too large for column](https://stackoverflow.com/questions/28557301/concatenating-numbers-in-virtual-column-expression-throws-ora-12899-value-too-l)
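The same `REL` + date + number concatenation pattern can be sketched in Python's `sqlite3`, using a fixed date so the output is reproducible (Oracle's `to_char(sysdate,'DDMMYYYY')` corresponds to `strftime('%d%m%Y', ...)` here, and `INTEGER PRIMARY KEY` stands in for the identity column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rel_details (rel_id INTEGER PRIMARY KEY, rel_name TEXT)")
conn.execute("INSERT INTO rel_details (rel_id, rel_name) VALUES (1000, 'first release')")

# Build the alphanumeric code from the literal prefix, formatted date, and id
rel_code = conn.execute("""
    SELECT 'REL' || strftime('%d%m%Y', '2015-04-23') || rel_id
    FROM rel_details
""").fetchone()[0]
print(rel_code)  # REL230420151000
```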
|
Check this: you may not be able to create the sequence exactly like that, but you can use a select as below.
```
create sequence mysec
minvalue 0
start with 10001
increment by 1
nocache;

select 'REL'||to_char(sysdate,'DDMMYYYY')||mysec.nextval from dual;
```
|
How to create alphanumeric sequence using date and sequence number
|
[
"",
"sql",
"oracle",
"sequence",
"identity-column",
"virtual-column",
""
] |
I'm struggling to come up with a way to solve this. I want to start at a specific value and keep increasing it by 1 for each new row.
For example, if I have a table like so.
```
90
93
110
87
130
Etc..
```
I want to select the number 87 and then keep incrementing up from there but also read if the incremented number is there and skip it.
I am just struggling with trying to put the right logic together in my head. I know I need a while loop to keep reading through the table but I can't think of the proper way to go about it. Just looking for some suggestions to push me in the right direction.
Edit: I am using T-SQL for MSFT SQL Server 2012.
Here is an example of what the output should look like
```
90
93
110
87
130
88
89
91
92
94
```
It would skip over adding 90 and 93 because they already exist in the table.
I hope that makes sense to you guys.
|
I do it all in one recursive CTE, and I make it so you can use ORDER BY and guarantee your results are returned in the correct order.
For the recursion, you can either choose a start and end number or @DesiredNumberOfNewValues (keep in mind it doesn't account for repeats). Let me know if you have any questions or need anything else.
```
DECLARE @yourTable TABLE (nums INT);
INSERT INTO @yourTable
VALUES (90),(93),(110),(87),(130);
DECLARE @Specific_Number INT = 87;
DECLARE @Last_Number INT = 94;
DECLARE @DesiredNumberOfNewValues INT = 7;
WITH CTE_Numbers
AS
(
SELECT 1 AS order_id,nums, 1 AS cnt
FROM @yourTable
UNION ALL
SELECT 2,
CASE
WHEN @Specific_Number + cnt NOT IN (SELECT * FROM @yourTable) --if it's not already in the table, return it
THEN @Specific_Number + cnt
ELSE NULL -- if it is in the table, return NULL
END,
cnt + 1
FROM CTE_Numbers
WHERE nums = @Specific_Number
--OR (cnt > 1 AND @Specific_Number + cnt < @Last_Number) --beginning and end(option 1)
OR (cnt > 1 AND cnt <= @DesiredNumberOfNewValues) --number of new values(option 2)
)
SELECT order_id,nums
FROM CTE_Numbers
WHERE nums IS NOT NULL
ORDER BY order_id,nums
```
Results:
```
order_id nums
----------- -----------
1 87
1 90
1 93
1 110
1 130
2 88
2 89
2 91
2 92
2 94
```
|
There is a system table you can use for numbers called `master..spt_values`.
**DECLARE sample TABLE**
```
DECLARE @tbl TABLE(Id INT)
```
**INSERT statement**
```
INSERT INTO @tbl VALUES (90)
,(93)
,(110)
,(87)
,(130)
```
You can easily make a `UNION` to remove the duplicate values you have in your table and `ORDER` it within a `CTE`:
```
DECLARE @number INT = 87
;WITH C AS(
SELECT Id, Row_Id, ROW_NUMBER() OVER(PARTITION BY Id ORDER BY Id) AS Rn FROM (
SELECT 1 AS Row_Id, Id FROM @tbl
UNION
SELECT 2 AS Row_Id, number
FROM master..spt_values
WHERE [type] = 'P'
AND number >= @number
) t
)
SELECT Id FROM C
WHERE Rn = 1
ORDER BY Row_Id, Id
```
**Output**
```
Id
----
87
90
93
110
130
88
89
91
92
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
111
112
113
114
.
.
.
```
[SQLFiddle](http://sqlfiddle.com/#!6/9f73c/3)
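The "generate a range, then skip existing values" idea both answers rely on can also be expressed with a recursive CTE. A sketch using Python's `sqlite3` with the question's sample data (94 as an assumed upper bound):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nums (n INTEGER)")
conn.executemany("INSERT INTO nums VALUES (?)",
                 [(90,), (93,), (110,), (87,), (130,)])

# Generate 87..94, then keep only the values not already present
missing = [r[0] for r in conn.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 87
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 94
    )
    SELECT n FROM seq
    WHERE n NOT IN (SELECT n FROM nums)
    ORDER BY n
""")]
print(missing)  # [88, 89, 91, 92, 94]
```

87, 90, and 93 are skipped because they already exist in the table, matching the expected output in the question.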
|
SQL Loop to increment numbers
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a sql table with some values and a lot of filters
```
ID | Name | Filter1 | Filter2 | Filter3 | Filter4 ... and so on...
```
As now the filters have been set as int and I am running a query as follows to get the data required
```
select Name
from tblABC
where Filter1=1 and Filter2 = 7 and Filter3 = 33 ... and so on...'
```
My issue is that I want a filter column to hold multiple numbers. eg:- row no 3 will have numbers 8 and 13 in Filter1 cell, so that when I run a query for 8 or 13 I get the same result.
ie I want both the below queries to return the same result.
```
select... where Filter1=8
select... where Filter1=13
```
How can this be done? I tried converting the Filter columns to nvarchar and entering the data as `.8.13.`, where '.' is used as a separator. After this, running a query like `select... where Filter1 LIKE '%.8.%'` works for me. But there are 12 Filter columns, and when such a string search is run on large volumes, wouldn't it make the query slow? What would be a more efficient way of doing this?
I am using Microsoft SQL 2014
|
A more efficient way, hmm. Separating *tblABC* from the filters would be my suggested way to go; even if it's not the absolute most efficient option, it will make up for it in maintainability (and it is certainly more efficient than using `LIKE` with wildcards).
```
tblABC
ID  Name
1   Somename
2   Othername

tblABCFilter
ID  AbcID  Filter
1   1      8
2   1      13
3   1      33
4   2      5
```
How you query this data depends on your required output of course. One way is to just use the following:
```
SELECT tblABC.Name FROM tblABC
INNER JOIN tblABCFilter ON tblABC.ID = tblABCFilter.AbcID
WHERE tblABCFilter.Filter = 33
```
This will return all *Name* with a *Filter* of *33*.
If you want to query for several Filters:
```
SELECT tblABC.Name FROM tblABC
INNER JOIN tblABCFilter ON tblABC.ID = tblABCFilter.AbcID
WHERE tblABCFilter.Filter IN (33,7)
```
This will return all *Name* with *Filter* in either *33* or *7*.
I have created a small [example fiddle.](http://sqlfiddle.com/#!6/bda05/1)
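The normalized design is easy to try out. A sketch using Python's `sqlite3` with the sample rows above (column names lower-cased for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblABC (id INTEGER, name TEXT);
    CREATE TABLE tblABCFilter (id INTEGER, abcid INTEGER, filter INTEGER);
    INSERT INTO tblABC VALUES (1, 'Somename'), (2, 'Othername');
    INSERT INTO tblABCFilter VALUES (1, 1, 8), (2, 1, 13), (3, 1, 33), (4, 2, 5);
""")

# One indexable row per filter value replaces the '.8.13.' string hack
rows = conn.execute("""
    SELECT DISTINCT a.name
    FROM tblABC a
    JOIN tblABCFilter f ON a.id = f.abcid
    WHERE f.filter IN (8, 13)
""").fetchall()
print(rows)  # [('Somename',)]
```

`DISTINCT` matters here: a name matching several filter values would otherwise appear once per match.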
|
I'm going to post a solution I use: a split function (there are a lot of SQL Server split functions all over the internet). You can take this one as an example:
```
CREATE FUNCTION [dbo].[SplitString]
(
@List NVARCHAR(MAX),
@Delim VARCHAR(255)
)
RETURNS TABLE
AS
RETURN ( SELECT [Value] FROM
(
SELECT
[Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number],
CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number])))
FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name)
FROM sys.all_objects) AS x
WHERE Number <= LEN(@List)
AND SUBSTRING(@Delim + @List, [Number], LEN(@Delim)) = @Delim
) AS y
);
```
and run your query like this
```
select Name
from tblABC
where Filter1 IN (SELECT Value FROM SplitString(@DatatoFilter, ','))
  and Filter2 IN (SELECT Value FROM SplitString(@DatatoFilter, ','))
  -- ...and so on.
```
If you have hundreds of thousands of records it may not perform very well, but it should work.
My personal approach would be a stored procedure and temp tables. Create a temp table with all the values you want to use as a filter:
```
SELECT *
INTO #Filter1
FROM SplitString(@DatatoFilter,',')
SELECT *
INTO #Filter2
FROM SplitString(@DatatoFilter,',')
```
then the final select
```
SELECT * FROM yourtable
WHERE Filter1 IN (SELECT DISTINCT Value FROM #Filter1) and
Filter2 IN (SELECT DISTINCT Value FROM #Filter2)
```
I don't think it makes any big difference from the first query, but it is easier to read.
|
Multiple Filter in the same column
|
[
"",
"sql",
"sql-server",
""
] |
After deleting the duplicate records from the table,
I want to update Identity column of a table with consecutive numbering starting with 1. Here is my table details
```
id(identity(1,1)),
EmployeeID(int),
Punch_Time(datetime),
Deviceid(int)
```

I need to perform this action through a stored procedure.
When i tried following statement in stored procedure
```
DECLARE @myVar int
SET @myVar = 0
set identity_insert TempTrans_Raw# ON
UPDATE TempTrans_Raw# SET @myvar = Id = @myVar + 1
set identity_insert TempTrans_Raw# off
```
gave error like...Cannot update identity column 'Id'
Anyone please suggest how to update Identity column of that table with consecutive numbering starting with 1.
|
The IDENTITY keyword is used to generate a key which can be used in combination with the PRIMARY KEY constraint to get a technical key. Such keys are *technical*: they are used to link table records. They should have no other meaning (such as a sort order). SQL Server does not guarantee the generated IDs to be consecutive. It does guarantee, however, that you get them in order. (So you might get 1, 2, 4, ..., but never 1, 4, 2, ...)
Here is the documentation for IDENTITY: <https://msdn.microsoft.com/de-de/library/ms186775.aspx>.
Personally I don't like it to be guaranteed that the generated IDs are in order. A technical ID is supposed to have no meaning other then offering a reference to a record. You *can* rely on the order, but if order is information you are interested in, you should *store* that information in my opinion (in form of a timestamp for example).
If you want a number telling you that a record is the fifth or sixteenth or whatever record in order, you can always get that number on the fly using the ROW_NUMBER function. So there is no need to generate and store such a consecutive value (which could also be quite troublesome when it comes to concurrent transactions on the table). Here is how to get that number:
```
select
row_number() over(order by id),
employeeid,
punch_time,
deviceid
from mytable;
```
Having said all this; it should never be necessary to change an ID. It is a sign for inappropriate table design, if you feel that need.
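The on-the-fly numbering is straightforward to demonstrate. A sketch using Python's `sqlite3` with assumed data (ids with gaps, as after deleting duplicates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE punches (id INTEGER, employeeid INTEGER)")
conn.executemany("INSERT INTO punches VALUES (?, ?)",
                 [(3, 101), (7, 102), (12, 103)])

# Consecutive numbering derived at query time; no need to rewrite the identity column
rows = conn.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY id) AS seq, employeeid
    FROM punches
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 101), (2, 102), (3, 103)]
```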
|
```
--before running this make sure Foreign key constraints have been removed that reference the ID.
--insert everything into a temp table
SELECT (ColumnList) --except identity column
INTO #tmpYourTable
FROM yourTable
--clear your table
DELETE FROM yourTable
-- reseed identity
DBCC CHECKIDENT('table', RESEED, new reseed value)
--insert back all the values
INSERT INTO yourTable (ColumnList)
SELECT OtherCols FROM #tmpYourTable
--drop the temp table
DROP TABLE #tmpYourTable
GO
```
|
Updating Identity Column of a table with consecutive numbers through SQL Stored Procedure
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
""
] |
I have the following sql data:
```
ID Company Name Customer Address 1 City State Zip Date
0108500 AAA Test Mish~Sara Newa Claims Chtiana CO 123 06FE0046
0108500 AAA.Test Mish~Sara Newa Claims Chtiana CO 123 06FE0046
1802600 AAA Test Company Ban, Adj.~Gorge PO Box 83 MouLaurel CA 153 09JS0025
1210600 AAA Test Company Biwel~Brce 97kehst ve Jacn CA 153 04JS0190
```
AAA Test, AAA.Test and AAA Test Company are considered as one company.
Since their data is messy I'm thinking either to do this:
1. Is there a way to search all the records in the DB for company names that are almost the same, and then rename them to the longest name?
In this case, the AAA Test and AAA.Test will be AAA Test Company.
2. OR Is there a way to filter only record with company name that are almost the same then they can have option to change it?
If there's no way to do it via an SQL query, what are your suggestions for cleaning up the records? There are almost 1 million records in the database and it's hard to clean them up manually.
Thank you in advance.
|
You could use String matching algorithm like [Jaro-Winkler](http://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance). I've written an SQL version that is used daily to deduplicate People's names that have been typed in differently. It can take awhile but it does work well for the fuzzy match you're looking for.
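If a database-side Jaro-Winkler isn't available, the same fuzzy-match idea can be prototyped in plain Python with `difflib`'s similarity ratio. The normalization rule and the 0.8 threshold below are assumptions to tune, not part of any standard:

```python
from difflib import SequenceMatcher

def normalize(s):
    # Drop case, punctuation, and whitespace so 'AAA.Test' ~ 'AAA Test'
    return "".join(ch.lower() for ch in s if ch.isalnum())

def similarity(a, b):
    # 0.0 (nothing in common) .. 1.0 (identical after normalization)
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

ratio_close = similarity("AAA Test", "AAA.Test")
ratio_far = similarity("AAA Test", "Totally Other")
print(ratio_close, ratio_far)  # 1.0, and something well below 0.8
```

With ~1 million rows, comparing every pair is quadratic; in practice you would block on a cheap key first (e.g. the first few normalized characters) and only fuzzy-compare within each block.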
|
Something like a self join? `||` is ANSI SQL concat, some products have a `concat` function instead.
```
select *
from tablename t1
join tablename t2 on t1.companyname like '%' || t2.companyname || '%'
```
Depending on datatype you may have to remove blanks from the t2.companyname, use `TRIM(t2.companyname)` in that case.
And, as Miguel suggests, use `REPLACE` to remove commas and dots etc.
Use case-insensitive collation. `SOUNDEX` can be used etc etc.
|
Ways to Clean-up messy records in sql
|
[
"",
"sql",
""
] |
Our Google Analytics 'User Count' is not matching our Big Query 'User Count.'
Am I calculating it correctly?
Typically, GA and BQ align very closely…albeit, not *exactly*.
Recently, User Counts in GA vs. BQ are incongruous.
* Our number of ‘Sessions per User' typically has a very normal
distribution.
* In the last 4 weeks, 'Sessions per User' (in GA) has been
several deviations from the norm.
* I cannot replicate this deviation when cross-checking data from the same time period in BQ
The difference lies in the User Counts.
What I'm hoping someone can answer is:
**Am I at least using the correct SQL syntax to get to the answer in BQ?**
This is the query I’m running in BQ:
```
SELECT
WEEK(Week) AS Week,
Week AS Date_Week,
Total_Sessions,
Total_Users,
Total_Pageviews,
( Total_Time_on_Site / Total_Sessions ) AS Avg_Session_Duration,
( Total_Sessions / Total_Users ) AS Sessions_Per_User,
( Total_Pageviews / Total_Sessions ) AS Pageviews_Per_Session
FROM
(
SELECT
FORMAT_UTC_USEC(UTC_USEC_TO_WEEK (date,1)) AS Week,
COUNT(DISTINCT CONCAT(STRING(fullVisitorId), STRING(VisitID)), 1000000) AS Total_Sessions,
COUNT (DISTINCT(fullVisitorId), 1000000) AS Total_Users,
SUM(totals.pageviews) As Total_Pageviews,
SUM(totals.timeOnSite) AS Total_Time_on_Site,
FROM
(
TABLE_DATE_RANGE([zzzzzzzzz.ga_sessions_],
TIMESTAMP('2015-02-09'),
TIMESTAMP('2015-04-12'))
)
GROUP BY Week
)
GROUP BY Week, Date_Week, Total_Sessions, Total_Users, Total_Pageviews, Avg_Session_Duration, Sessions_Per_User, Pageviews_Per_Session
ORDER BY Week ASC
```
We have well under 1,000,000 users/sessions/etc a week.
Throwing that 1,000,000 into the Count Distinct clause should be preventing any sampling on BQ’s part.
Am I doing this correctly?
If so, any suggestion on how/why GA would be reporting differently is welcome.
Cheers.


\*(Statistically) significant discrepancies begin in Week 11
|
Update:
We have Premium Analytics, as @Pentium10 suggested. So, I reached out to their paid support.
Now when I pull the exact same data from GA, I get this:

Looks to me like GA has now fixed the issue.
Without actually admitting there ever was one.
::shrug::
|
I had this problem before. The way I fixed it was by using COUNT(DISTINCT fullVisitorId) for total\_users.
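The user/session distinction the query relies on can be sketched in Python's `sqlite3`: users are distinct `fullVisitorId`s, while sessions are distinct (`fullVisitorId`, `visitId`) pairs (toy data, not real GA export rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ga_sessions (fullvisitorid TEXT, visitid TEXT)")
conn.executemany("INSERT INTO ga_sessions VALUES (?, ?)",
                 [("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v1")])

# Users: distinct visitors. Sessions: distinct visitor+visit combinations.
users, sessions = conn.execute("""
    SELECT COUNT(DISTINCT fullvisitorid),
           COUNT(DISTINCT fullvisitorid || ':' || visitid)
    FROM ga_sessions
""").fetchone()
print(users, sessions)  # 2 3
```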
|
Google Analytics 'User Count' not Matching Big Query 'User Count'
|
[
"",
"sql",
"google-analytics",
"google-bigquery",
""
] |
I would like to return a set of unique records from a table based on two columns along with the most recent posting time and a total count of the number of times the combination of those two columns has appeared before (in time) the record of their output.
So what I'm trying to get is something along these lines:
```
select col1, col2, max_posted, count from T
join (
select col1, col2, max(posted) as posted from T where groupid = "XXX"
group by col1, col2) h
on ( T.col1 = h.col1 and
T.col2 = h.col2 and
T.max_posted = h.tposted)
where T.groupid = 'XXX'
```
Count needs to be the number of times EACH combination of col1 and col2 occurred BEFORE the max\_posted of each record in the output. (I hope I explained that correctly :)
Edit: In trying the below suggestion as:
```
select dx.*,
count(*) over (partition by dx.cicd9, dx.cdesc order by dx.tposted) as cnt
from dx
join (
select cicd9, cdesc, max(tposted) as tposted from dx where groupid ="XXX"
group by cicd9, cdesc) h
on ( dx.cicd9 = h.cicd9 and
dx.cdesc = h.cdesc and
dx.tposted = h.tposted)
where groupid = 'XXX';
```
The count always returns '1'. Additionally, how would you count only the records that occurred before `tposted`?
This also fails, but I hope you can get where I'm headed:
```
WITH H AS (
SELECT cicd9, cdesc, max(tposted) as tposted from dx where groupid = 'XXX'
group by cicd9, cdesc),
J AS (
SELECT count(*) as cnt
FROM dx, h
WHERE dx.cicd9 = h.cicd9
and dx.cdesc = h.cdesc
and dx.tposted <= h.tposted
and dx.groupid = 'XXX'
)
SELECT H.*,J.cnt
FROM H,J
```
Help anyone?
|
How about this:
```
SELECT DISTINCT ON (cicd9, cdesc) cicd9, cdesc,
max(posted) OVER w AS last_post,
count(*) OVER w AS num_posts
FROM dx
WHERE groupid = 'XXX'
WINDOW w AS (
PARTITION BY cicd9, cdesc
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
);
```
Given the lack of PG version, table definition, data and desired output this is just shooting from the hip, but the principle should work: Make a partition on the two columns where `groupid = 'XXX'`, then find the maximum value of the `posted` column and the total number of rows in the **window frame** (hence the `RANGE...` clause in the window definition).
|
Do you just want a cumulative count?
```
select t.*,
count(*) over (partition by col1, col2 order by posted) as cnt
from table t
where groupid = 'xxx';
```
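The cumulative count is easy to see on toy data. A sketch using Python's `sqlite3`: with an `ORDER BY` inside the window, `COUNT(*) OVER` defaults to a running frame up to the current row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, posted INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("a", "x", 1), ("a", "x", 2), ("a", "x", 3), ("b", "y", 1)])

rows = conn.execute("""
    SELECT col1, col2, posted,
           COUNT(*) OVER (PARTITION BY col1, col2 ORDER BY posted) AS cnt
    FROM t
    ORDER BY col1, posted
""").fetchall()
print(rows)  # [('a', 'x', 1, 1), ('a', 'x', 2, 2), ('a', 'x', 3, 3), ('b', 'y', 1, 1)]
```

One caveat: the default frame uses RANGE semantics, so rows tied on `posted` within a partition all receive the same count.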
|
Count rows for each unique combination of columns in SQL
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"greatest-n-per-group",
""
] |
I have to cleanup orphaned associations in a Rails app which uses OmniAuth. For the sake of simplicity, here's a stripped down scenario.
Given two tables:
```
users:
password_id: INTEGER
<more columns>
passwords:
id: INTEGER NOT NULL
password_digest: VARCHAR
```
In other words: There's a facultative "user belongs\_to password" relation. (There are good reasons why the relation is not the other way around.)
Normally, every user relates to one password. But sometimes a user is deleted and the corresponding password gets orphaned.
Is there an efficient way to find all orphaned passwords (in other words: all passwords which are not related to by any user) with just one SQL query on Postgres?
Thanks for your hints!
|
This type of query is called an anti-join. The simplest method is:
```
SELECT p.*
FROM passwords p
LEFT JOIN users u
ON u.password_id = p.id
WHERE u.<primary key field> IS NULL;
```
Another alternative is the `NOT EXISTS` method @Politank-Z gave. They should have basically identical query plans.
|
```
SELECT p.id FROM PASSWORDS p
WHERE NOT EXISTS ( SELECT 1 FROM users u WHERE p.id = u.password_id );
```
...is a straightforward enough solution. You could build it around a `LEFT JOIN` or a `MINUS` if you prefer. You could also prevent the scenario entirely by adding a foreign key from `users` to `passwords`.
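Both anti-join spellings can be checked side by side. A sketch using Python's `sqlite3` with a deliberately orphaned password row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE passwords (id INTEGER PRIMARY KEY, password_digest TEXT);
    CREATE TABLE users (id INTEGER PRIMARY KEY, password_id INTEGER);
    INSERT INTO passwords VALUES (1, 'd1'), (2, 'd2'), (3, 'd3');
    INSERT INTO users VALUES (10, 1), (11, 3);  -- nobody points at password 2
""")

# Anti-join, spelling 1: LEFT JOIN + IS NULL on the inner table's key
left_join = conn.execute("""
    SELECT p.id FROM passwords p
    LEFT JOIN users u ON u.password_id = p.id
    WHERE u.id IS NULL
""").fetchall()

# Anti-join, spelling 2: NOT EXISTS
not_exists = conn.execute("""
    SELECT p.id FROM passwords p
    WHERE NOT EXISTS (SELECT 1 FROM users u WHERE u.password_id = p.id)
""").fetchall()
print(left_join, not_exists)  # [(2,)] [(2,)]
```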
|
Query for orphaned relations
|
[
"",
"sql",
"postgresql",
""
] |
I figured this would already have been asked, but I couldn't find it on stack overflow.
I have a SQL Server table called `DataTable` with two columns `name` and `message`. I want to select all the rows where "message" contains the same value as in the `name` column.
```
INSERT INTO DataTable values ('frank','this is frank''s message');
INSERT INTO DataTable values ('jill','this is not frank''s message');
```
I want to return only the first row, because the value in the [name] column ("frank") is in the column [message]
```
SELECT [name],[message]
FROM DataTable
WHERE CONTAINS([message],[name])
```
This throws the error:
> Incorrect syntax near 'name'.
How do I write this correctly?
|
You can try with `LIKE`, in case `[name]` can be contained as a part of `[message]` :
```
SELECT [name],[message]
FROM DataTable
WHERE [message] LIKE '%' + [name] + '%'
```
or you can use `=` operator in case `[name]` should be equal to `[message]`.
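The column-based pattern can be verified directly. A sketch using Python's `sqlite3`, which concatenates with `||` where T-SQL uses `+`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datatable (name TEXT, message TEXT)")
conn.executemany("INSERT INTO datatable VALUES (?, ?)",
                 [("frank", "this is frank's message"),
                  ("jill", "this is not frank's message")])

# The pattern is built from the name column, not a literal string
rows = conn.execute("""
    SELECT name FROM datatable
    WHERE message LIKE '%' || name || '%'
""").fetchall()
print(rows)  # [('frank',)]
```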
|
Use LIKE
```
SELECT name, message
FROM DataTable
WHERE message LIKE '%' + name + '%'
```
|
How do I select rows where CONTAINS looks for value of a column, not a string
|
[
"",
"sql",
"sql-server",
""
] |
I'm going to try to explain this the best I can.
The code below does the following:
* Finds a service address from the ServiceLocation table.
* Finds a service type (electric or water).
* Finds how many days in the past to pull data.
Once it has this, it calculates the "daily usage" by subtracting the max meter read for a day from the minimum meter read for a day.
```
(MAX(mr.Reading) - MIN(mr.Reading)) AS 'DaytimeUsage'
```
However, what I'm missing is the max reading from the day prior and the minimum reading from the current day. Mathematically, this should look something like this:
* MAX(PriorDayReading) - MIN(ReadDateReading)
Essentially, if it goes back 5 days it should kick out a table that reads as follows:
> ## Service Location | Read Date | Usage |
>
> 123 Main St | 4/20/15 | 12 |
> 123 Main St | 4/19/15 | 8 |
> 123 Main St | 4/18/15 | 6 |
> 123 Main St | 4/17/15 | 10 |
> 123 Main St | 4/16/15 | 11 |
Where "Usage" is the 'DaytimeUsage' + usage that I'm missing (and the question above). For example, 4/18/15 would be the 'DaytimeUsage' in the query below PLUS the the difference between the MAX read from 4/17/15 and the MIN read from 4/18/15.
I'm not sure how to accomplish this or if it is possible.
```
SELECT
A.ServiceAddress AS 'Service Address',
convert(VARCHAR(10),A.ReadDate,101) AS 'Date',
SUM(A.[DaytimeUsage]) AS 'Usage'
FROM
(
SELECT
sl.location_addr AS 'ServiceAddress',
convert(VARCHAR(10),mr.read_date,101) AS 'ReadDate',
(MAX(mr.Reading) - MIN(mr.Reading)) AS 'DaytimeUsage'
FROM
DimServiceLocation AS sl
INNER JOIN FactBill AS fb ON fb.ServiceLocationKey = sl.ServiceLocationKey
INNER JOIN FactMeterRead as mr ON mr.ServiceLocationKey = sl.ServiceLocationKey
INNER JOIN DimCustomer AS c ON c.CustomerKey = fb.CustomerKey
WHERE
c.class_name = 'Tenant'
AND sl.ServiceLocationKey = @ServiceLocation
AND mr.meter_type = @ServiceType
GROUP BY
sl.location_addr,
convert(VARCHAR(10),
mr.read_date,101)
) A
WHERE A.ReadDate >= GETDATE()-@Days
GROUP BY A.ServiceAddress, convert(VARCHAR(10),A.ReadDate,101)
ORDER BY convert(VARCHAR(10),A.ReadDate,101) DESC
```
|
You can use the APPLY operator if you are on SQL Server 2005 or above. Here is a link to the documentation: <https://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx>. The APPLY operation comes in two forms, OUTER APPLY and CROSS APPLY: OUTER works like a left join and CROSS works like an inner join. They let you run a query once for each row returned. I set up my own sample of what you were trying to do; here it is and I hope it helps.
<http://sqlfiddle.com/#!6/fdb3f/1>
```
CREATE TABLE SequencedValues (
Location varchar(50) NOT NULL,
CalendarDate datetime NOT NULL,
Reading int
)
INSERT INTO SequencedValues (
Location,
CalendarDate,
Reading
)
SELECT
'Address1',
'4/20/2015',
10
UNION SELECT
'Address1',
'4/19/2015',
9
UNION SELECT
'Address1',
'4/19/2015',
20
UNION SELECT
'Address1',
'4/19/2015',
25
UNION SELECT
'Address1',
'4/18/2015',
8
UNION SELECT
'Address1',
'4/17/2015',
7
UNION SELECT
'Address2',
'4/20/2015',
100
UNION SELECT
'Address2',
'4/20/2015',
111
UNION SELECT
'Address2',
'4/19/2015',
50
UNION SELECT
'Address2',
'4/19/2015',
65
SELECT DISTINCT
sv.Location,
sv.CalendarDate,
sv_dayof.MINDayOfReading,
sv_daybefore.MAXDayBeforeReading
FROM SequencedValues sv
OUTER APPLY (
SELECT MIN(sv_dayof_inside.Reading) AS MINDayOfReading
FROM SequencedValues sv_dayof_inside
WHERE sv.Location = sv_dayof_inside.Location
AND sv.CalendarDate = sv_dayof_inside.CalendarDate
) sv_dayof
OUTER APPLY (
SELECT MAX(sv_daybefore_max.Reading) AS MAXDayBeforeReading
FROM SequencedValues sv_daybefore_max
WHERE sv.Location = sv_daybefore_max.Location
AND sv_daybefore_max.CalendarDate IN (
SELECT TOP 1 sv_daybefore_inside.CalendarDate
FROM SequencedValues sv_daybefore_inside
WHERE sv.Location = sv_daybefore_inside.Location
AND sv.CalendarDate > sv_daybefore_inside.CalendarDate
ORDER BY sv_daybefore_inside.CalendarDate DESC
)
) sv_daybefore
ORDER BY
sv.Location,
sv.CalendarDate DESC
```
|
It seems like you could solve this by just calculating the difference between the MAX of yesterday and today; however, this is how I would approach it. Join to the same table again for the previous day relative to any given day, and select the Max/Min for that too within your inner query. Also, if you place the date condition in the inner query's where clause, the data set you return will be smaller and quicker.
```
SELECT
A.ServiceAddress AS 'Service Address',
convert(VARCHAR(10),A.ReadDate,101) AS 'Date',
SUM(A.[TodayMax]) - SUM(A.[TodayMin]) AS 'Usage',
SUM(A.[TodayMax]) - SUM(A.[YesterdayMax]) AS 'Usage with extra bit you want'
FROM
(
    SELECT
        sl.location_addr AS 'ServiceAddress',
        convert(VARCHAR(10), mrT.read_date, 101) AS 'ReadDate',
        MAX(mrT.Reading) AS 'TodayMax',
        MIN(mrT.Reading) AS 'TodayMin',
        MAX(mrY.Reading) AS 'YesterdayMax',
        MIN(mrY.Reading) AS 'YesterdayMin'
    FROM
        DimServiceLocation AS sl
        INNER JOIN FactBill AS fb ON fb.ServiceLocationKey = sl.ServiceLocationKey
        INNER JOIN FactMeterRead AS mrT ON mrT.ServiceLocationKey = sl.ServiceLocationKey
        INNER JOIN FactMeterRead AS mrY ON mrY.ServiceLocationKey = sl.ServiceLocationKey
            AND mrY.read_date = mrT.read_date - 1
        INNER JOIN DimCustomer AS c ON c.CustomerKey = fb.CustomerKey
    WHERE
        c.class_name = 'Tenant'
        AND sl.ServiceLocationKey = @ServiceLocation
        AND mrT.meter_type = @ServiceType
        AND convert(VARCHAR(10), mrT.read_date, 101) >= GETDATE() - @Days
    GROUP BY
        sl.location_addr,
        convert(VARCHAR(10), mrT.read_date, 101)
) A
GROUP BY A.ServiceAddress, convert(VARCHAR(10),A.ReadDate,101)
ORDER BY convert(VARCHAR(10),A.ReadDate,101) DESC
```
|
Difference Max and Min from Different Dates
|
[
"",
"sql",
"sql-server",
"datetime",
"max",
"min",
""
] |
Is it possible to sort by a materialized path tree's `path` text field in order to find the right-most node of the tree? For example, consider this python function that uses django-treebeard's `MP_Node`:
```
def get_rightmost_node():
"""Returns the rightmost node in the current tree.
:rtype: MyNode
"""
# MyNode is a subclass of django-treebeard's MP_Node.
return MyNode.objects.order_by('-path').first()
```
From all my testing, it seems to return what I expect, but I don't know how to come up with the math to prove it. And I haven't found any info on performing this operation on a materialized path tree.
Treebeard's implementation doesn't have separators in paths, so the paths
look like this: `0001`, `00010001`, `000100010012`, etc.
|
Short answer: No.
[Here is a SQLFiddle](http://sqlfiddle.com/#!6/dbe30/4) demonstrating the problem I described in my comment.
For this simple setup:
```
id, path
1, '1'
2, '1\2'
3, '1\3'
4, '1\4'
5, '1\5'
6, '1\6'
7, '1\7'
8, '1\8'
9, '1\9'
10, '1\10'
```
attempting to get the rightmost leaf (`id = 10`) with a simple sort will fail:
```
SELECT TOP 1
id,
path
FROM hierarchy
ORDER BY path DESC
```
returns:
```
id, path
9, 1\9
```
Because `path` is a text-based column, `1\10` will come **after** `1\9` in the descending sort (See the results of the second query in the fiddle).
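A quick way to see this outside the database (a hypothetical Python sketch, not part of the original fiddle) is to compare a plain text sort of the paths with a numeric-aware one:

```python
# Minimal sketch: why a plain text sort on materialized paths misorders
# numeric segments, and how a numeric-aware key fixes it.
paths = [r'1\2', r'1\9', r'1\10']

# Descending lexicographic sort, as ORDER BY path DESC would do on text
text_sorted = sorted(paths, reverse=True)
# '1\9' outranks '1\10' because '9' > '1' at the character level
assert text_sorted[0] == r'1\9'

def path_key(p):
    # split on the separator and compare each segment as an integer
    return [int(seg) for seg in p.split('\\')]

numeric_sorted = sorted(paths, key=path_key, reverse=True)
assert numeric_sorted[0] == r'1\10'
```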
Even if you began tracking depth and path length, which are generally cheap and easy to keep up with, it would be entirely possible to get paths like this:
```
path depth length
12\3\11\2 4 9
5\17\10\1 4 9
```
which would still not sort properly.
Even if you are using letters instead of numbers, this only pushes the problem horizon out to the 26th child instead of the 10th:
[SQLFiddle using letters](http://sqlfiddle.com/#!6/0462d/1)
I am not as familiar with materialized path operations as I am with nested set and adjacency lists, and have no experience with django, so I'll defer to others if there are methods of which I am unaware, but you will almost certainly have to perform some sort of parsing on the `path` column to consistently get the correct leaf.
EDIT - Having addressed the question of whether sorting is a valid solution, here are some additional notes on other potential solutions after a bit of discussion and thinking on the problem a bit:
-"Rightmost" is a vague term when nodes can have more than two children (i.e., the tree is not a binary tree). If a node has 10 children, which are to the left of the parent, and which are to the right? You must define this condition before you can define a solution to the problem.
-Once "rightmost" is properly defined for your problem space, understand that the rightmost node will not necessarily be on the lowest level of the tree:
```
1
/ \
1\1 1\2 <= This is the rightmost node
/
1\1\1 <= This is the lowest node
```
-Once "rightmost" is defined, a simple loop can be used to programmatically find the rightmost node:
```
//in pseudocode
function GetRightmostNode(Node startNode)
{
Node currentNode = startNode;
while(currentNode.RightChildren != null)
{
currentNode = maximum of currentNode.RightChildren;
}
return currentNode;
}
```
This loop will look for children of the current node to the right of the current node. If they exist, it selects the rightmost of the right children and repeats. Once it reaches a node with no children to its right, it returns the current node, as it has found the rightmost node of the tree (or subtree) with `startNode` as its root.
|
> Is it possible to sort by a materialized path tree's path text field in order to find the right-most node of the tree?
No. If node paths are stored like `'/1/3/6/2'` for instance, consider:
```
/1
/1/3
/1/3/6/2
/1/3/6/5
/1/3/6/21
/1/40
```
Refer to Paul's answer for the reasons sorting the above wouldn't work.
All hope is not lost though. If you're searching for "the right-most node", by which I assume you mean the deepest nodes in a tree, you could simply count the separators. For instance:
```
select length(regexp_replace('/1/3/6/2', '[^/]+', '', 'g')) as depth;
```
If you're looking for the max of that, use something like:
```
order by length(regexp_replace(path, '[^/]+', '', 'g')) desc
```
... or the equivalent python code. Index options include indexing the same expression, or storing the result in a separate depth field and indexing that.
If you're still interested in the actual value of the ID, the numbers in the above usually correspond to the ID, so order further using that column. If they're different, extract the right-most figure using a different regular expression, and cast it to integer so as to sort them naturally (1, 2, 11) instead of lexicographically (1, 11, 2):
```
select regexp_replace('/1/3/6/2', '^.+/', '')::int as value;
```
|
Searching for the right-most node of a materialized path tree
|
[
"",
"sql",
"postgresql",
"tree",
"materialized-path-pattern",
"django-treebeard",
""
] |
I got two tables: `table1` and `table2`
```
table1:
HW_SN (varchar2) ID (number)
123ERZ123 (empty)
124NIJD11
4125ERT22
....
```
-
```
table2:
ID (number) Name (varchar)
85442 123ERZ123
85471 124NIJD11
12478 4125ERT22
... ...
```
`table1` has another column called `ID` which is empty.
I need to check whether `table1.HW_SN` is the same as `table2.Name` - if yes, it should copy `table2.ID` into `table1.ID`.
|
```
UPDATE table1
SET ID = (SELECT table2.ID
FROM table2
WHERE table1.HW_SN = table2.Name)
WHERE EXISTS (SELECT table2.ID
FROM table2
WHERE table1.HW_SN = table2.Name);
```
|
This is an `update` operation. Unfortunately, Oracle does not support `join` with `update`, so you have two choices. You can use `merge`. Or, in this case, subqueries accomplish what you want:
```
update table1
set id = (select id from table2 t2 where t2.name = table1.hw_sn and rownum = 1)
where exists (select 1 from table2 t2 where t2.name = table1.hw_sn);
```
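If you want to try the correlated-subquery pattern without an Oracle instance, here is a rough equivalent in Python's `sqlite3` (SQLite accepts the same shape of statement; Oracle-specific bits like `rownum` are dropped):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE table1 (HW_SN TEXT, ID INTEGER);
    CREATE TABLE table2 (ID INTEGER, Name TEXT);
    INSERT INTO table1 (HW_SN) VALUES ('123ERZ123'), ('124NIJD11'), ('4125ERT22');
    INSERT INTO table2 VALUES (85442, '123ERZ123'), (85471, '124NIJD11'), (12478, '4125ERT22');
""")
# correlated subquery fills ID only where a matching Name exists
con.execute("""
    UPDATE table1
       SET ID = (SELECT t2.ID FROM table2 t2 WHERE t2.Name = table1.HW_SN)
     WHERE EXISTS (SELECT 1 FROM table2 t2 WHERE t2.Name = table1.HW_SN)
""")
rows = con.execute("SELECT HW_SN, ID FROM table1 ORDER BY HW_SN").fetchall()
assert ('123ERZ123', 85442) in rows
```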
|
sql oracle update table data
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I am trying to find the user who last modified particular table in SQL Server.
`sys.dm_db_index_usage_stats` provides information about last modified date for the table but I am looking for last modified by. Is there a way to figure out which login last modified specific table?
|
Thank you, everyone, for looking into my question and providing suggestions. I used the script below in my trigger to get the last\_modified\_by value.
```
DECLARE @login_name NVARCHAR(200)
SELECT @login_name = loginame
FROM sys.sysprocesses
WHERE spid = @@SPID
```
|
Try this out and see if it helps:
```
DECLARE @filename VARCHAR(255)
SELECT @FileName = SUBSTRING(path, 0, LEN(path)-CHARINDEX('\', REVERSE(path))+1) + '\Log.trc'
FROM sys.traces
WHERE is_default = 1;
SELECT gt.HostName,
gt.ApplicationName,
gt.NTUserName,
gt.NTDomainName,
gt.LoginName,
gt.SPID,
gt.EventClass,
te.Name AS EventName,
gt.EventSubClass,
gt.TEXTData,
gt.StartTime,
gt.EndTime,
gt.ObjectName,
gt.DatabaseName,
gt.FileName,
gt.IsSystem
FROM [fn_trace_gettable](@filename, DEFAULT) gt
JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE EventClass in (164)
ORDER BY StartTime DESC;
```
|
How to find who last modified the table in SQL server?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table viz. expenses with three columns as under
```
ExpenseId int NOT NULL,
ExpenseName varchar(50) NOT NULL,
Invalid bit NOT NULL
```
To add a new column (`OldCode char(4) not null`), I used design feature for tables in Microsoft SQL Server Management Studio. But I get following error
> 'Expenses' table
> - Unable to modify table. Cannot insert the value NULL into column 'OldCode', table 'TransportSystemMaster.dbo.Tmp\_Expenses'; column does not allow nulls. INSERT fails. The statement has been terminated.
Incidentally, I have been able to add the same column with the same specifications to other tables in the same database.
Any help?
|
Your table consists of existing records, and you are adding a new column of type `NOT NULL`, so for the older records the column has to contain something.
Try something like this:
```
ALTER TABLE MY_TABLE ADD Column_name INT NULL
GO
UPDATE MY_TABLE <set valid not null values for your column>
GO
ALTER TABLE MY_TABLE ALTER COLUMN Column_name INT NOT NULL
GO
```
|
Since `OldCode` is `NOT NULL`, you should specify a default value for it.
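As a quick sanity check of the default-value advice (a sketch against SQLite rather than SQL Server - the behaviour is analogous), adding a NOT NULL column to a populated table succeeds once a DEFAULT is supplied:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE Expenses (
        ExpenseId INTEGER NOT NULL,
        ExpenseName TEXT NOT NULL,
        Invalid INTEGER NOT NULL
    );
    INSERT INTO Expenses VALUES (1, 'Fuel', 0);
""")
# Without DEFAULT this fails on existing rows; with it, old rows get 'NONE'
con.execute("ALTER TABLE Expenses ADD COLUMN OldCode TEXT NOT NULL DEFAULT 'NONE'")
row = con.execute("SELECT OldCode FROM Expenses WHERE ExpenseId = 1").fetchone()
assert row == ('NONE',)
```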
|
Can not add a column to existing table
|
[
"",
"sql",
"sql-server",
""
] |
I have a table in my application which stores data with a pid (primary key) and an instid; it keeps history data, and the rows with the latest instid hold the latest data.
Sample data is below:
Ex: tablename is TESTTABLE ( pid number,instid number,datacol1 varchar2(100),datacol2 date);
```
pid instid datacol1 datacol2
---------------------------------------
1 18 sample1 2/05/2012
1 17 sample2 2/04/2013
1 16 sample2 2/04/2013
2 15 sample3 1/04/2012
2 14 sample3 2/04/2012
```
In the results above, the latest record for pid 1 is instid 18, and for pid 2 it is instid 15.
Now the table has grown to billions of records and its size is huge (around 1 TB). I am planning to clear the history data, keeping the latest (pid, instid) combination and deleting the rest.
Can anyone suggest a good approach for deleting the rows, considering the huge size of the table?
|
My recommendation would be a multi-stage operation:
1. Create an extraction script/proc that builds a new table holding just the needed rows. This raises the question of how many recent revisions are needed, which is a must-answer question. Design it to place the output into a new table.
2. Once that is written, tested, and approved by the users, pick a time to suspend new activity and run the SQL program to create the new table.
3. Archive the ENTIRE old table to a medium that allows you to restore selectively.
4. Truncate the original table and reload it using the contents of the step 1 extract.
TEST, TEST, TEST and get user buy-off every step of the way.
|
Since you're talking about "billions of records" and "planning to clear history data", I would suggest you go for table partitioning - perhaps partitioning by week or month - then you can easily drop the older partitions.
Yes, this needs some design change, but it's worth doing: it is good for the application and gives benefits in the long run.
|
Deleting records from a huge table
|
[
"",
"sql",
"oracle",
"oracle11g",
"database-administration",
""
] |
I have the following two relations:
```
Game(id, name, year)
Devs(pid, gid, role)
```
Where Game.id is a primary key, and where Devs.gid is a foreign key to Game.id.
In a [previous post I made here](https://stackoverflow.com/questions/29787954/select-only-largest-values-in-a-sql-relation), another user was kind enough to help me out in creating a query that finds all games made with the most developers making that game. His answer used a WITH statement, and I am not very familiar with these as I'm only a few weeks into learning SQL. Here's the working query:
```
WITH GamesDevs (GameName, DevsCount)
AS
(
SELECT Game.name AS GameName, count(DISTINCT Devs.pid) AS DevsCount
FROM Game, Devs
WHERE Devs.gid=Game.id
GROUP BY Devs.gid, Game.name
)
SELECT * FROM GamesDevs WHERE GamesDevs.DevsCount = (SELECT MAX(DevsCount) FROM GamesDevs)
```
For the sole purpose of getting more familiar with SQL, I am trying to rewrite this query using a subquery instead of a WITH statement. I've been using [this Oracle documentation](http://oracle-base.com/articles/misc/with-clause.php) to help me figure it out. I tried rewriting the query like this:
```
SELECT *
FROM (SELECT Game.name AS GameName, count(DISTINCT Devs.pid) AS DevsCount
FROM Game, Devs
WHERE Devs.gid=Game.id
GROUP BY Devs.gid, Game.name) GamesDevs
WHERE GamesDevs.DevsCount = (SELECT MAX(DevsCount) FROM GamesDevs)
```
As far as I can tell, these two queries should be identical. However, when I try running the second query, I get the error
> Msg 207, Level 16, State 1, Line 6 Invalid column name 'DevsCount'.
Does anyone know why I might be getting this error, or why these two queries wouldn't be identical?
|
You will need to duplicate that subquery in the final WHERE clause, like this:
```
SELECT *
FROM (SELECT Game.name AS GameName, count(DISTINCT Devs.pid) AS DevsCount
FROM Game, Devs
WHERE Devs.gid=Game.id
GROUP BY Devs.gid, Game.name) GamesDevs
WHERE GamesDevs.DevsCount = (SELECT MAX(DevsCount)
                             FROM (SELECT Game.name AS GameName, count(DISTINCT Devs.pid) AS DevsCount
                                   FROM Game
                                   INNER JOIN Devs ON Devs.gid=Game.id
                                   GROUP BY Devs.gid, Game.name) gd)
```
But it's better to do it like this:
```
SELECT TOP 1 WITH TIES Game.name AS GameName, count(DISTINCT Devs.pid) AS DevsCount
FROM Game
INNER JOIN Devs ON Devs.gid=Game.id
GROUP BY Devs.gid, Game.name
ORDER BY DevsCount DESC
```
|
The problem is this line:
```
WHERE GamesDevs.DevsCount = (SELECT MAX(DevsCount) FROM GamesDevs)
```
You can select from a CTE twice, but you cannot do that with a subquery.
The first part,
```
WHERE GamesDevs.DevsCount
```
is correct. But
```
(SELECT MAX(DevsCount) FROM GamesDevs)
```
is not, because you cannot reuse the subquery's alias there. Selecting from the CTE, which acts like a view, works - that is why the CTE version can both take the MAX and compare against the count.
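The scoping difference is easy to verify with any engine that supports CTEs; here is a small check using Python's `sqlite3` with made-up sample data - the CTE name `GamesDevs` is referenced twice in one statement without error:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE Game (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Devs (pid INTEGER, gid INTEGER);
    INSERT INTO Game VALUES (1, 'Alpha'), (2, 'Beta');
    INSERT INTO Devs VALUES (10,1),(11,1),(12,1),(10,2);
""")
# the CTE is referenced both in the outer FROM and in the scalar subquery
rows = con.execute("""
    WITH GamesDevs AS (
        SELECT g.name AS GameName, COUNT(DISTINCT d.pid) AS DevsCount
        FROM Game g JOIN Devs d ON d.gid = g.id
        GROUP BY g.id, g.name
    )
    SELECT * FROM GamesDevs
    WHERE DevsCount = (SELECT MAX(DevsCount) FROM GamesDevs)
""").fetchall()
assert rows == [('Alpha', 3)]
```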
|
Rewriting WITH statements into subquery statements in SQL?
|
[
"",
"sql",
"azure-sql-database",
"with-statement",
""
] |
I'm trying to get 3 tag names as the result of my query instead of the tag\_id. This is the situation:
```
create table names (
  `attr_id` int(11) unsigned not null auto_increment,
  `attr_name` varchar(30),
  primary key(`attr_id`)
) engine=MyISAM;

create table posts (
  `post_id` int(5) unsigned not null auto_increment,
  `attr_1` int(11) unsigned,
  `attr_2` int(11) unsigned,
  `attr_3` int(11) unsigned,
  `referenced` int(5),
  primary key (`post_id`)
) engine=MyISAM;

insert into names (`attr_name`) values
(''),('name1'),('name2'),('name3'),('name4'),('name5'),('name6');

insert into posts (`attr_1`, `attr_2`, `attr_3`, `referenced`) values
(1, 2, 3, null), (1, 4, 2, 1), (1, 3, 1, 2), (1, 2, 5, 1), (4, 2, 5, 1);
```
The result I'm trying to obtain is the set of rows whose `referenced` value equals `post_id = 1`, and for all these rows, to replace `attr_1`, `attr_2` and `attr_3` with the corresponding `attr_name` from the `names` table.
That's what I've tried before asking the question. Here I could get the post with `referenced` value equals 1:
```
SELECT post_id, attr_1, attr_2, attr_3, referenced
FROM posts a
WHERE a.referenced = 1;
```
But here, trying to show the `name` instead of `attr_1`, `attr_2` or `attr_3`, I couldn't get what I wanted:
```
SELECT a.post_id, b.attr_name, b.attr_name, b.attr_name, a.referenced
FROM posts a, names b
WHERE a.referenced = 1 AND
a.attr_1 = b.attr_id AND
a.attr_2 = b.attr_id AND
a.attr_3 = b.attr_id
GROUP BY a.post_id
```
Could someone help me with this?
Thanks in advance.
|
A very efficient solution is to join the names table once for each attr\_ column, like this:
```
SELECT p.post_id, n1.attr_name, n2.attr_name, n3.attr_name, p.referenced
FROM posts p
LEFT JOIN names n1 ON p.attr_1 = n1.attr_id
LEFT JOIN names n2 ON p.attr_2 = n2.attr_id
LEFT JOIN names n3 ON p.attr_3 = n3.attr_id
WHERE p.referenced = 1
```
Given your sample data the output would be:
```
post_id  attr_name  attr_name  attr_name  referenced
2                   name3      name1      1
4                   name1      name4      1
5        name3      name1      name4      1
```
|
You need left joins to get the name of each value:
```
SELECT post_id,
att1.attr_name,
att2.attr_name,
att3.attr_name,
referenced
FROM posts a
LEFT JOIN names att1
ON a.attr_1 = att1.attr_id
LEFT JOIN names att2
ON a.attr_2 = att2.attr_id
LEFT JOIN names att3
ON a.attr_3 = att3.attr_id
WHERE a.referenced = 1;
```
left join will prevent omitting unmatched rows (if present).
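For a runnable version of this pattern, here is a sketch using Python's `sqlite3` with the question's sample data (plain identifiers instead of backticks):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE names (attr_id INTEGER PRIMARY KEY, attr_name TEXT);
    CREATE TABLE posts (post_id INTEGER PRIMARY KEY,
                        attr_1 INTEGER, attr_2 INTEGER, attr_3 INTEGER,
                        referenced INTEGER);
    INSERT INTO names (attr_name) VALUES
        (''),('name1'),('name2'),('name3'),('name4'),('name5'),('name6');
    INSERT INTO posts (attr_1, attr_2, attr_3, referenced) VALUES
        (1,2,3,NULL),(1,4,2,1),(1,3,1,2),(1,2,5,1),(4,2,5,1);
""")
# one join against the lookup table per attribute column
rows = con.execute("""
    SELECT p.post_id, n1.attr_name, n2.attr_name, n3.attr_name
    FROM posts p
    LEFT JOIN names n1 ON p.attr_1 = n1.attr_id
    LEFT JOIN names n2 ON p.attr_2 = n2.attr_id
    LEFT JOIN names n3 ON p.attr_3 = n3.attr_id
    WHERE p.referenced = 1
    ORDER BY p.post_id
""").fetchall()
assert rows[-1] == (5, 'name3', 'name1', 'name4')
```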
|
SQL sentence to name 3 id tags
|
[
"",
"mysql",
"sql",
""
] |
When I execute the following query
```
floor(Convert(money, @rawMoney) / cc.USDConversionRate)
```
I always get this error, I'm using SQL Server 2000
but if I use
```
floor( Convert(money,substring(23,10)) / cc.USDConversionRate)
```
then it won't give this error
|
There is only a limited set of non-digit symbols that you can pass to a `money` type variable, such as $. The whole list may be found [on MSDN](https://msdn.microsoft.com/en-sg/library/ms179882.aspx).
Check if your raw variable value meets this criterion - if not, that is the error.
e.g. The below does not work if you pass the raw variable to `convert` directly:
```
declare @rawMoney varchar(10) = 'USD 123.45'
select Convert(money,substring(@rawMoney,5,10))
```
|
Try this:
```
select (cast(@rawMoney as money)) from #YourTable
```
|
Cannot convert char to money in SQL Server 2000
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
Can anyone tell me why the LikeString variable is always `%` ? Here's the code:
```
DECLARE @LikeString NVARCHAR = CAST('%4075%' AS nvarchar)
SELECT @LikeString
```
I've tried this in SQL Server 2008 R2 and SQL Server 2012, but `@LikeString` always contains `%` instead of `%4075%` as I expected.
|
for `char`, `varchar`, `nchar`, `nvarchar`
> When size is not specified in variable declaration statement, the default length is 1
```
DECLARE @LikeString NVARCHAR(6) = CAST('%4075%' AS nvarchar(6))
SELECT @LikeString
```
or simpler:
```
DECLARE @LikeString NVARCHAR(6) = N'%4075%'
SELECT @LikeString
```
|
From the SQL Server 2008 R2 Transact-SQL documentation...
"When n is not specified in a data definition or variable declaration statement, the default length is 1. When n is not specified with the CAST function, the default length is 30."
<https://msdn.microsoft.com/en-us/library/ms186939(v=sql.105).aspx>
You are using a variable declaration statement, therefore all but the first character is being truncated from the string when you attempt to initialize the variable with "%4075%".
Therefore, as others have stated, the solution is to specify the length of your nvarchar data type variable.
|
Why does string variable store only first symbol?
|
[
"",
"sql",
"sql-server",
""
] |
I feel like I should be able to get this and I'm just having a brain fart. I've simplified the problem to the following example:
```
DECLARE @A TABLE (ID int);
DECLARE @B TABLE (GroupID char(1), ID int);
INSERT @A VALUES (1);
INSERT @A VALUES (2);
INSERT @A VALUES (3);
INSERT @B VALUES ('X', 1);
INSERT @B VALUES ('X', 2);
INSERT @B VALUES ('X', 3);
INSERT @B VALUES ('Y', 1);
INSERT @B VALUES ('Y', 2);
INSERT @B VALUES ('Z', 1);
INSERT @B VALUES ('Z', 2);
INSERT @B VALUES ('Z', 3);
INSERT @B VALUES ('Z', 4);
```
So table A contains a set of some records. Table B contains multiple copies of the set contained in A with Group IDs. But some of those groups may be missing one or more records of the set. I want to find the groups that are missing records. So in the above example, my results should be:
```
GroupID
-------
Y
```
But for some reason I just can't wrap my head around this, today. Any help would be appreciated.
|
Awesome use-case for [relational division](http://en.wikipedia.org/wiki/Relational_algebra#Division_.28.C3.B7.29)! ([Here's a must-read blog post about it](https://www.simple-talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/))
```
SELECT DISTINCT b1.GroupID
FROM @B b1
WHERE EXISTS (
SELECT 1
FROM @A a
WHERE NOT EXISTS (
SELECT 1
FROM @B b2
WHERE b1.GroupID = b2.GroupID
AND b2.ID = a.ID
)
);
```
How to read this?
> I want all distinct `GroupIDs` in `@B` for which there is a record in `@A` for which there isn't a record in `@B` with the same `@A.ID`
In fact, this is the "remainder" of the relational division.
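Here is the same relational-division query reproduced against SQLite via Python's `sqlite3`, using the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE A (ID INTEGER);
    CREATE TABLE B (GroupID TEXT, ID INTEGER);
    INSERT INTO A VALUES (1),(2),(3);
    INSERT INTO B VALUES ('X',1),('X',2),('X',3),('Y',1),('Y',2),
                         ('Z',1),('Z',2),('Z',3),('Z',4);
""")
# groups for which some record of A has no counterpart in the group
rows = con.execute("""
    SELECT DISTINCT b1.GroupID
    FROM B b1
    WHERE EXISTS (
        SELECT 1 FROM A a
        WHERE NOT EXISTS (
            SELECT 1 FROM B b2
            WHERE b1.GroupID = b2.GroupID AND b2.ID = a.ID))
""").fetchall()
assert rows == [('Y',)]   # only Y is missing a record (ID 3)
```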
|
Try this:
```
SELECT b.GroupID, COUNT(b.GroupID)
FROM @A a
INNER JOIN @B b ON a.ID = b.ID
GROUP BY b.GroupID
HAVING COUNT(b.GroupID) < (SELECT COUNT(*) FROM @A)
```
|
Find Groups that don't contain all records
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have here a table that corresponds to the orders of the customers. I use `AUTO_INCREMENT` to determine the ID of the order. I have this SQL code to the `orders` table:
```
CREATE TABLE IF NOT EXISTS `orders` (
`order_id` int(11) NOT NULL AUTO_INCREMENT,
`customer_id` int(11) NOT NULL,
`customer_name` varchar(500) NOT NULL,
`order_total_price` decimal(20, 2) NOT NULL,
`order_date` varchar(100) NOT NULL,
PRIMARY KEY (`order_id`)
) ENGINE=InnoDB
```
What I need is to insert each of the products of that order in another table with a Foreign Key `order_id` to specify what order that products belongs to. The SQL code for the `purchased_products` table is:
```
CREATE TABLE IF NOT EXISTS `purchased_products` (
`order_id` int (11) NOT NULL,
FOREIGN KEY (`order_id`) REFERENCES orders(`order_id`),
`product_name` varchar(500) NOT NULL,
`product_price` decimal(20, 2) NOT NULL,
`product_quantity` int(11) NOT NULL,
PRIMARY KEY (`order_id`)
)
```
When the user buy something, I use this to insert the data in the `orders` table:
```
INSERT INTO orders (customer_id, customer_name, order_total_price, order_date)
VALUES ('{$customer_id}', '{$customer['customer_name']}', '{$order_total_price}', '{$order_date}')";
```
And here is my problem. I need to insert in the `purchased_products` table the products with the Order ID generated:
```
INSERT INTO purchased_products (order_id, product_name, product_price, product_quantity)
VALUES ('*/The ID of the order need to goes here*/', '{$product['product_name']}', '{$product['product_price']}', '{$product['quantity']}')";
```
This is giving me a headache; I don't really know how to do it. Should this be done a different way? How do I associate the order ID with the products belonging to it?
|
I did it by using PDO `lastInsertId()` to get the ID of the last inserted order:
```
$sql = "INSERT INTO orders (customer_id, customer_name, order_total_price, order_date)
VALUES ('{$customer_id}', '{$customer['customer_name']}', '{$order_total_price}', '{$order_date}')";
$query = $connection->prepare($sql);
$query->execute();
$respective_order_id = $connection->lastInsertId();
```
And then:
```
INSERT INTO purchased_products (order_id, product_name, product_price, product_quantity)
VALUES ('{$respective_order_id}', '{$product['product_name']}', '{$product['product_price']}', '{$product['quantity']}')";
```
Thanks for all who tried to help! They put me in the right way!
|
use function [`last_insert_id()`](https://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_last-insert-id). it will give you value that was auto-incremented as last one before call to it.
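The same idea in Python's `sqlite3`, where `cursor.lastrowid` plays the role of PDO's `lastInsertId()` / MySQL's `LAST_INSERT_ID()` (a simplified sketch with trimmed-down columns):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY AUTOINCREMENT,
                         customer_name TEXT);
    CREATE TABLE purchased_products (order_id INTEGER, product_name TEXT);
""")
cur = con.execute("INSERT INTO orders (customer_name) VALUES (?)", ('Alice',))
order_id = cur.lastrowid            # id generated by the INSERT above
con.execute("INSERT INTO purchased_products VALUES (?, ?)", (order_id, 'Book'))
row = con.execute("""
    SELECT o.customer_name, p.product_name
    FROM orders o JOIN purchased_products p ON p.order_id = o.order_id
""").fetchone()
assert row == ('Alice', 'Book')
```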
|
How I can get an auto incremented value
|
[
"",
"sql",
"pdo",
""
] |
* editing my question \*
I have a set of tables. When I filter on the second table, t2, I'd still like to get all rows of t1.
SQL script is below. I feel like I'm getting close while tinkering, but I just can't make it happen.
In short, I need t2's rows when applicable, but all of t1's rows with nulls in the other columns.
Thanks.
```
create table t1 ( id int identity(1,1), parentName varchar(20) null )
create table t2 ( id int identity(1,1), t1id int not null, childName varchar(20) null )
create table t3 ( id int identity(1,1), t2id int not null, gChildName varchar(20) null )
insert into t1 ( parentName ) values ( 'bob' )
insert into t1 ( parentName ) values ( 'john' )
insert into t2 ( childName, t1id ) values ( 'irving', 1 )
insert into t2 ( childName, t1id ) values ( 'parna', 1 )
insert into t2 ( childName, t1id ) values ( 'mike', 1 )
select
t1.id,
t1.parentName,
t2.id,
t2.childName
from t1 left outer join t2
on t2.t1id = t1.id
where t2.childName = 'mike'
-- what i'd LIKE is:
-- 1, bob, 3, mike
-- 2, john, null, null
drop table t3
drop table t2
drop table t1
```
|
As others have mentioned, you can move the t3 filter out of the overall `WHERE` clause and put it into the `JOIN`, this prevents it from effectively turning your outer join into a pseudo inner join (which happens because none of the `NULL` values can ever match a `WHERE` criteria except for `IS NULL`)
It's a very straightforward change to your sample code - just change `WHERE` to `AND`.
```
create table t1 ( id int identity(1,1), parentName varchar(20) null )
create table t2 ( id int identity(1,1), t1id int not null, childName varchar(20) null )
create table t3 ( id int identity(1,1), t2id int not null, gChildName varchar(20) null )
insert into t1 ( parentName ) values ( 'bob' )
insert into t1 ( parentName ) values ( 'john' )
insert into t2 ( childName, t1id ) values ( 'irving', 1 )
insert into t2 ( childName, t1id ) values ( 'parna', 1 )
insert into t2 ( childName, t1id ) values ( 'mike', 1 )
select
t1.id,
t1.parentName,
t2.id,
t2.childName
from t1
left outer join t2 on t2.t1id = t1.id and t2.childName = 'mike'
drop table t3
drop table t2
drop table t1
```
|
It sounds like you may be using a left join, but then dropping rows based on your where clause. For example:
```
Select * from Table1 a
left join Table2 b
on a.ID = b.ID
where b.name like 'A%'
```
will drop all rows from Table 1 where there is no match in Table 2, even though you left joined (because the where condition is not met when b.name is null).
To avoid this, put your conditions in the join instead, like so:
```
Select * from Table1 a
left join Table2 b
on a.ID = b.ID and b.name like 'A%'
```
or add an IsNull to your where clause, like so:
```
Select * from Table1 a
left join Table2 b
on a.ID = b.ID
where ISNULL(b.name, 'A') like 'A%'
```
Edit: Now that you have posted your query, here is a specific answer: just change "where" to "and," and it will return the results you have specified.
```
select
t1.id,
t1.parentName,
t2.id,
t2.childName
from #t1 t1 left outer join #t2 t2
on t2.t1id = t1.id
and t2.childName = 'mike'
```
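Both behaviours are easy to reproduce with the question's data in Python's `sqlite3` - the only difference between the two queries is whether the filter lives in `WHERE` or in `ON`:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE t1 (id INTEGER PRIMARY KEY, parentName TEXT);
    CREATE TABLE t2 (id INTEGER PRIMARY KEY, t1id INTEGER, childName TEXT);
    INSERT INTO t1 (parentName) VALUES ('bob'), ('john');
    INSERT INTO t2 (childName, t1id) VALUES ('irving',1),('parna',1),('mike',1);
""")
# Filter in WHERE: 'john' disappears because his NULL row fails the test
where_rows = con.execute("""
    SELECT t1.id FROM t1 LEFT JOIN t2 ON t2.t1id = t1.id
    WHERE t2.childName = 'mike'""").fetchall()
assert where_rows == [(1,)]
# Filter in ON: 'john' survives, with NULLs on the t2 side
on_rows = con.execute("""
    SELECT t1.id, t2.childName FROM t1
    LEFT JOIN t2 ON t2.t1id = t1.id AND t2.childName = 'mike'
    ORDER BY t1.id""").fetchall()
assert on_rows == [(1, 'mike'), (2, None)]
```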
|
SQL to return parent rows even when no child rows
|
[
"",
"sql",
"sql-server",
"select",
"ansi-nulls",
""
] |
I have a table A that references a table B. Table B needs to be populated with updated data from an external source and for efficiency I use TRUNCATE followed by a COPY. This is done even when the application is live.
To further improve efficiency, [as suggested in the documentation](http://www.postgresql.org/docs/9.1/static/populate.html), I want to drop and then recreate the foreign keys.
However I have some doubts.
If I drop the FKs, COPY and then recreate the FKs **inside the same transaction**, can I be sure that the constraint is preserved even on data inserted into table A during the transaction? I ask this because in theory a transaction is atomic, but the docs say, about the temporary removal of FKs:
> there is a trade-off between data load speed and loss of error checking while the constraint is missing.
If there's a chance that a wrong reference is inserted in the meantime, what happens when you try to recreate the FK constraints?
|
`TRUNCATE` is not allowed on any table referenced by a foreign key, unless you use `TRUNCATE CASCADE`, which will also truncate the referencing tables. The `DEFERRABLE` status of the constraint does not affect this. I don't think there is any way around this; you will need to drop the constraint.
However, there is no risk of an integrity violation in doing so. `ALTER TABLE ... ADD CONSTRAINT` locks the table in question (as does `TRUNCATE`), so your import process is guaranteed to have exclusive access to the table for the duration of its transaction. Any attempts at concurrent inserts will simply hang until the import has committed, and by the time they are allowed to proceeed, the constraint will be back in place.
|
You can make the foreign key constraint [deferrable](http://www.postgresql.org/docs/current/static/sql-set-constraints.html) (initially deferred). That way it will be checked just once at the end of the transaction.
```
ALTER TABLE
xxx
ADD CONSTRAINT
xxx_yyy_id_fk FOREIGN KEY (yyy_id)
REFERENCES
yyy
DEFERRABLE INITIALLY DEFERRED;
```
In all cases, transactions are fully atomic in PostgreSQL (not only in theory), including DDL statements (such as CREATE/DROP constraint). So even if you drop a foreign key, then insert data, then recreate the foreign key, all in one transaction, you are safe - if the recreation of the foreign key constraint fails, the inserted data will be rolled back as well.
Still, it is better to switch to deferred foreign keys, rather than dropping and then creating them.
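SQLite also implements `DEFERRABLE INITIALLY DEFERRED`, so the deferred check can be sketched locally with Python's `sqlite3` (illustrative only - the question is about PostgreSQL, where the semantics are analogous):

```python
import sqlite3

# autocommit mode so we control the transaction boundaries ourselves
con = sqlite3.connect(':memory:', isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE yyy (id INTEGER PRIMARY KEY);
    CREATE TABLE xxx (id INTEGER PRIMARY KEY,
                      yyy_id INTEGER REFERENCES yyy
                          DEFERRABLE INITIALLY DEFERRED);
""")
con.execute("BEGIN")
con.execute("INSERT INTO xxx VALUES (1, 42)")  # child first: allowed, check deferred
con.execute("INSERT INTO yyy VALUES (42)")     # parent arrives before COMMIT
con.execute("COMMIT")                          # constraint verified here
row = con.execute("SELECT yyy_id FROM xxx").fetchone()
assert row == (42,)
```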
|
Is it safe to drop and then create the foreign key constraints inside a transaction?
|
[
"",
"sql",
"postgresql",
"transactions",
"foreign-keys",
"populate",
""
] |
I've an EMPLOYEE table which has built-in hierarchy structure as well (using manager column)

I've another REGION table which represents manager-region relation

I'm trying to create a SQL which will show which employees fall under which region by following up the hierarchy chain.
**Constraints / Rules:**
* It's possible that employee's immediate manager may not have a region - so I need to continue up the chain.
* I have guarantee that 4 level up the chain someone will have a region.
* If region is found before the 4th level then use lower manager's region
This is the naive SQL I came up with (but results have duplicates - 3rd Rule failed)
```
select distinct e.name, r.region
from employee e
left outer join employee mgr1 on mgr1.id = e.manager
left outer join employee mgr2 on mgr2.id = mgr1.manager
left outer join employee mgr3 on mgr3.id = mgr2.manager
left outer join employee mgr4 on mgr4.id = mgr3.manager
left outer join REGION r on
( r.id = mgr1.id
or r.id = mgr2.id
or r.id = mgr3.id
or r.id = mgr4.id )
where e.IS_MANAGER = 'N'; -- only interested in users for now; assume a flag
```
And this is the ResultSet:

How do I conditionally stop the left outer join if I already found a region?
|
Try this:
```
select distinct e.name, COALESCE(r1.region, r2.region, r3.region, r4.region, 'No Region') region
from employee e left outer join
region r1 on e.manager = r1.id
left outer join employee mgr1 on mgr1.id = e.manager left outer join
region r2 on mgr1.manager = r2.id
left outer join employee mgr2 on mgr2.id = mgr1.manager left outer join
region r3 on mgr2.manager = r3.id
left outer join employee mgr3 on mgr3.id = mgr2.manager left outer join
region r4 on mgr3.manager = r4.id
where e.IS_MANAGER = 'N'; -- only interested in users for now; assume a flag
```
I'm not sure the COALESCE function is supported on all mysql versions but you could find an equivalent (it returns the first non null argument).
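COALESCE returns its first non-NULL argument and is available in most engines (SQLite included); a minimal check via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(':memory:')
# COALESCE walks its arguments left to right and stops at the first non-NULL
rows = con.execute("""
    SELECT COALESCE(NULL, NULL, 'West', 'East'),
           COALESCE(NULL, NULL, NULL, NULL, 'No Region')
""").fetchone()
assert rows == ('West', 'No Region')
```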
|
I had to modify your script a bit, but this works:
```
select distinct e.Name,
CASE
WHEN r1.RegionName IS NOT NULL THEN r1.RegionName
WHEN r2.RegionName IS NOT NULL THEN r2.RegionName
WHEN r3.RegionName IS NOT NULL THEN r3.RegionName
WHEN r4.RegionName IS NOT NULL THEN r4.RegionName
ELSE 'NA'
END AS 'RegionName'
from employee e
left outer join employee mgr1 on mgr1.id = e.Manager
left outer join employee mgr2 on mgr2.id = mgr1.Manager
left outer join employee mgr3 on mgr3.id = mgr2.Manager
left outer join employee mgr4 on mgr4.id = mgr3.Manager
left outer join Region r1 on r1.id = mgr1.RegionID
left outer join Region r2 on r2.id = mgr2.RegionID
left outer join Region r3 on r3.id = mgr3.RegionID
left outer join Region r4 on r4.id = mgr4.RegionID
where e.IS_MANAGER = 'N';
```
Here's the SQL Fiddle: <http://sqlfiddle.com/#!9/93b45/5>
|
How to remove duplicates in a complicated JOIN
|
[
"",
"sql",
"database",
"join",
"db2",
""
] |
I have the following table in MySQL database
```
account | status
1 | 1
1 | 0
1 | 0
2 | 0
2 | 0
2 | 0
3 | 1
3 | 1
3 | 1
```
How do I SELECT from this table all information where some status for the account is equal to zero? In this example, the result should be
```
account | status
1 | 1
1 | 0
1 | 0
2 | 0
2 | 0
2 | 0
```
since for account.name=3 all statuses are 1.
Thanks,
Vladimir
|
Try this way:
```
select account, status
from tab
where account in (select account
from tab
where status = 0)
```
[**Sql fiddle demo**](http://sqlfiddle.com/#!9/9d097/2/0)
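The same pattern can be verified quickly in Python with SQLite as a stand-in (the table and data mirror the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab(account INTEGER, status INTEGER);
INSERT INTO tab VALUES (1,1),(1,0),(1,0),(2,0),(2,0),(2,0),(3,1),(3,1),(3,1);
""")
rows = conn.execute("""
    SELECT account, status FROM tab
    WHERE account IN (SELECT account FROM tab WHERE status = 0)
""").fetchall()
accounts = sorted({a for a, _ in rows})
print(len(rows), accounts)  # 6 rows, accounts 1 and 2; account 3 is excluded
```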
|
```
SELECT * FROM Table1 WHERE `account` IN
(SELECT `account` FROM Table1 WHERE `status` = 0)
```
---
[**FIDDLE**](http://sqlfiddle.com/#!9/fa2f7f/1)
---
|
Select all information from a table that has SOME status=0 in SQL
|
[
"",
"mysql",
"sql",
""
] |
I have a problem that I'm not sure how to solve so far...
I have two tables related to each other by a 1:n relation. I will describe the most important fields below:
Table One - company: id PK, companyname varchar;
Table Two - training: course varchar, companyid bigint FK, id PK;
The problem is: I would like to update the course field of the training table because there are many courses with the same name. My idea is to use something like
```
for s in 1..n loop
update training set course = course || s;
end loop;
```
|
I solved my problem by creating the function below:
```
CREATE OR REPLACE FUNCTION changeName()
RETURNS VOID AS
$$
DECLARE
    table1id table1.id%TYPE;
    counter1 RECORD;
    counter2 RECORD;
BEGIN
    FOR counter1 IN SELECT repeated_column, foreign_keyid, COUNT(repeated_column) AS contagem
                    FROM table1
                    GROUP BY repeated_column, foreign_keyid
                    HAVING COUNT(repeated_column) > 1 LOOP
        FOR counter2 IN 1..counter1.contagem LOOP
            SELECT id INTO table1id
            FROM table1
            WHERE repeated_column IN (counter1.repeated_column)
              AND foreign_keyid = counter1.foreign_keyid;
            UPDATE table1
            SET repeated_column = repeated_column || ' (' || counter2 || ')'
            WHERE id = table1id;
        END LOOP;
    END LOOP;
END;
$$
LANGUAGE plpgsql;
```
|
```
UPDATE training
SET course = course || num
FROM (
SELECT generate_series(1, (
SELECT count(course)
FROM training
)) num
) t
```
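For anyone on an engine with window functions, a set-based alternative to the loop is to number the duplicates with `ROW_NUMBER()` and append the counter only where a name actually repeats. A minimal sketch in Python with SQLite 3.25+ as a stand-in (the table and sample data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE training(id INTEGER PRIMARY KEY, course TEXT, companyid INTEGER);
INSERT INTO training(course, companyid) VALUES
    ('Safety', 1), ('Safety', 1), ('Excel', 1), ('Safety', 2);
""")
# Number duplicates per (course, companyid) with window functions,
# then rewrite only the names that repeat.
rows = conn.execute("""
    SELECT id, course,
           ROW_NUMBER() OVER (PARTITION BY course, companyid ORDER BY id) AS n,
           COUNT(*)    OVER (PARTITION BY course, companyid)              AS c
    FROM training
""").fetchall()
for rid, course, n, c in rows:
    if c > 1:
        conn.execute("UPDATE training SET course = ? WHERE id = ?",
                     (f"{course} ({n})", rid))
result = [r[0] for r in conn.execute("SELECT course FROM training ORDER BY id")]
print(result)  # ['Safety (1)', 'Safety (2)', 'Excel', 'Safety']
```

Note that 'Safety' for company 2 is left alone, because the duplication is counted per company.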
|
How can I update a table based on a plpgsql instruction loop on PostgreSQL 8.3.16?
|
[
"",
"sql",
"postgresql",
""
] |
I have an already complicated SQL statement that creates a table of users who have the rights `CONNECT`, `APPUSER`, or both:
```
(SELECT b.grantee AS "Username", A.granted_role AS "Connect", b.granted_role AS "APPUSER" FROM
(SELECT grantee, granted_role FROM dba_role_privs WHERE granted_role = 'CONNECT') A
RIGHT OUTER JOIN
(SELECT grantee, granted_role FROM dba_role_privs WHERE granted_role = 'APPUSER') b
ON A.grantee=b.grantee)
UNION
(SELECT A.grantee, A.granted_role, b.granted_role FROM
(SELECT grantee, granted_role FROM dba_role_privs WHERE granted_role = 'CONNECT') A
LEFT OUTER JOIN
(SELECT grantee, granted_role FROM dba_role_privs WHERE granted_role = 'APPUSER') b
ON A.grantee=b.grantee)
```
This produces something like:
```
Username Connect APPUSER
--------- --------- ---------
Sue CONNECT APPUSER
Bob (null) APPUSER
Joe CONNECT (null)
```
**I wish to use the `all_users` table, to also show users who have neither rights.** The `all_users` table shows every user in the database.
I have tried adding several types of joins to the end of my SQL statement to achieve this. The closest I got was adding:
```
UNION
(SELECT username, NULL, NULL FROM all_users)
```
This would produce a list where each user is shown twice, but shows users without either right:
```
Username Connect APPUSER
--------- --------- ---------
Amy (null) (null)
Sue CONNECT APPUSER
Sue (null) (null)
Bob (null) APPUSER
Bob (null) (null)
Joe CONNECT (null)
Joe (null) (null)
```
I tried adding a `where username = a.grantee`, but that won't work with Unions. If I try replacing `UNION` with any `JOIN`s, like adding:
```
FULL OUTER JOIN
SELECT username, NULL, NULL FROM ALL_USERS
on username = a.grantee;
```
I get the error:
> "SQL command not properly ended"
|
Your query seems way more complicated than necessary. Here is one approach:
```
select grantee,
max(case when granted_role = 'CONNECT' then granted_role end) as "connect",
max(case when granted_role = 'APPUSER' then granted_role end) as "appuser"
from dba_role_privs
group by grantee;
```
If there are users that have no roles at all, then you will need the `all_users` table.
EDIT:
Just use a `left join`:
```
select au.userName,
max(case when granted_role = 'CONNECT' then granted_role end) as "connect",
max(case when granted_role = 'APPUSER' then granted_role end) as "appuser"
from all_users au left join
dba_role_privs rp
on au.userName = rp.grantee
group by au.userName;
```
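The conditional-aggregation idea is portable; here is a minimal sketch in Python with SQLite standing in for Oracle (`role_privs` is a made-up stand-in for `dba_role_privs`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE all_users(username TEXT);
CREATE TABLE role_privs(grantee TEXT, granted_role TEXT);
INSERT INTO all_users VALUES ('Amy'), ('Sue'), ('Bob'), ('Joe');
INSERT INTO role_privs VALUES
    ('Sue','CONNECT'), ('Sue','APPUSER'), ('Bob','APPUSER'), ('Joe','CONNECT');
""")
rows = conn.execute("""
    SELECT au.username,
           MAX(CASE WHEN rp.granted_role = 'CONNECT' THEN rp.granted_role END),
           MAX(CASE WHEN rp.granted_role = 'APPUSER' THEN rp.granted_role END)
    FROM all_users au
    LEFT JOIN role_privs rp ON au.username = rp.grantee
    GROUP BY au.username
    ORDER BY au.username
""").fetchall()
print(rows)
```

The LEFT JOIN is what keeps Amy, who has neither role, in the output with two NULLs.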
|
Use the query below; let me know if this works. If you want only the users with neither role, add `drp1.granted_role is null and drp2.granted_role is null` in a where clause.
```
select au.username,
dp1.granted_role,
dp2.granted_role
from all_users au
left join dba_role_privs drp1 on drp1.grantee=au.username and drp1.granted_role='CONNECT'
left join dba_role_privs drp2 on drp2.grantee=au.username and drp2.granted_role='APPUSER'
```
|
Joining tables with different number of rows without duplicates
|
[
"",
"sql",
"join",
"oracle11g",
"union",
""
] |
I am using Oracle 11g.
So, I have this query, which gives me the `Last Business Day of a Month` (**any suggestions for a better query are always welcome**):
```
select DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY'))
into LAST_BD_OF_MONTH_P
from dual;
```
Which as of today gives me this result
```
30-APR-2015
```
Now, when I compare this with `sysdate + 9` to check whether it's the end of the month, it always gives me a `No Match` - note that I am converting both to dates using `to_date`.
```
select to_char(sysdate + 9,'DD-MON-YYYY')
, DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY')) as EOMBD
, case when to_date(sysdate + 9,'DD-MON-YYYY') =
to_date(DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY')), 'DD-MON-YYYY')
then 'Match'
else 'No Match'
end as Match
from dual;
```
But, if I change this query to convert `sysdate + 9` as `char` using `to_char`, it works and gives me a `Match`.
```
select to_char(sysdate + 9,'DD-MON-YYYY')
, DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY')) as EOMBD
, case when /*convert using to_char*/to_char(sysdate + 9,'DD-MON-YYYY') =
DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY'))
then 'Match'
else 'No Match'
end as Match
from dual;
```
I understand that in the second query it is matching `strings` and thus giving me a `Match`. Is there any way this comparison can give me a `Match` without converting to `char`?
It's true that this gives me the desired output, but I would prefer not to use the `to_char` function here.
PS : `LAST_BD_OF_MONTH_P` is declared as `DATE`
**INCLUDED ANSWER**
```
select to_char(sysdate + 9,'DD-MON-YYYY'), DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY')) as EOMBD
, case when trunc(sysdate + 9) =
to_date(DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY')), 'DD-MON-YYYY')
then 'Match'
else 'No Match'
end as Match
from dual;
```
|
This is doing an implicit conversion of sysdate + 9 to a string, using your NLS\_DATE\_FORMAT; and then converting back to a date:
```
case when to_date(sysdate + 9,'DD-MON-YYYY') =
```
At best this will work, if your format is also DD-MON-YYYY, but as it is you're losing the century because you're really doing:
```
case when to_date(to_char(sysdate + 9, 'DD-MON-RR'),'DD-MON-YYYY') =
```
The mismatch of RR and YYYY means you end up with year 0015, not 2015.
You can simplify that to:
```
case when trunc(sysdate) + 9 =
```
So just with that change:
```
select to_char(sysdate + 9,'DD-MON-YYYY')
, DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY')) as EOMBD
, case when trunc(sysdate + 9) =
to_date(DECODE(to_char(last_day(to_date(sysdate)), 'D'),'7',
to_char((last_day(sysdate) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(sysdate) - 2), 'DD-MON-YYYY'),
to_char(last_day(sysdate), 'DD-MON-YYYY')), 'DD-MON-YYYY')
then 'Match'
else 'No Match'
end as Match
from dual;
TO_CHAR(SYSDATE+9,'DD-MON-YYYY') EOMBD MATCH
-------------------------------- -------------------- --------
30-APR-2015 30-APR-2015 Match
```
I'm not sure why you're converting between dates and strings so much in other places though; you're making things complicated and letting NLS settings become a problem. You should never rely on your NLS settings, and it's easy to do so by accident with so many explicit and implicit conversions going on.
|
Unfortunately, your logic isn't entirely accurate.
Take Mar, 2013 for instance:
```
with w_date as ( select to_date('15-mar-2013','dd-mon-yyyy') d from dual )
select DECODE(to_char(last_day(d), 'D'),'7',
to_char((last_day(d) - 1), 'DD-MON-YYYY'),'1',
to_char((last_day(d) - 2), 'DD-MON-YYYY'),
to_char(last_day(d), 'DD-MON-YYYY'))
from w_date;
DECODE(TO_C
-----------
29-MAR-2013
```
Which happens to be Good Friday .. so "not a business Day" ... it should spit out "Mar 28, 2013"
Don't do so much to\_date / to\_char conversions .. it'll cause you grief.
In order to do this kind of thing, you really need a table of holidays (or a table of business days - either way)
Once you have that, the solution becomes trivial:
If you have a table of holidays:
```
with w_date as ( select to_date('15-mar-2013','dd-mon-yyyy') d from dual ),
w_holidays as (
select to_date('29-mar-2013','dd-mon-yyyy') holiday from dual
),
w_sub as (
select last_day(d) - level + 1 dd
from w_date
connect by level <= 10
)
select max(dd)
from w_sub s
where to_char(dd,'d') not in ( 1,7)
and not exists ( select * from w_holidays h
where h.holiday = s.dd )
/
```
results:
```
MAX(DD)
---------
28-MAR-13
1 row selected.
```
If you have a table of business days:
```
with w_business_days as (
select to_date('25-mar-2013','dd-mon-yyyy') busday from dual union all
select to_date('26-mar-2013','dd-mon-yyyy') busday from dual union all
select to_date('27-mar-2013','dd-mon-yyyy') busday from dual union all
select to_date('28-mar-2013','dd-mon-yyyy') busday from dual union all
select to_date('01-apr-2013','dd-mon-yyyy') busday from dual
)
select max(busday)
from w_business_days
where busday <= last_day(to_date('15-mar-2013','dd-mon-yyyy') )
/
```
results:
```
MAX(BUSDA
---------
28-MAR-13
1 row selected.
```
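For reference, the weekend-only rule in the original DECODE can be expressed directly in a procedural language. A minimal Python sketch (it deliberately ignores holidays, which is exactly the limitation described above):

```python
import calendar
import datetime

def last_weekday_of_month(year, month):
    """Last day of the month that is not a Saturday or Sunday.

    Mirrors the weekend-only DECODE logic; it does NOT account for
    holidays such as Good Friday.
    """
    day = datetime.date(year, month, calendar.monthrange(year, month)[1])
    while day.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        day -= datetime.timedelta(days=1)
    return day

print(last_weekday_of_month(2015, 4))  # 2015-04-30 (a Thursday)
print(last_weekday_of_month(2015, 5))  # 2015-05-29 (May 31, 2015 is a Sunday)
```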
|
Date Comparison in Oracle to find whether today is Last Business Day of Month
|
[
"",
"sql",
"oracle",
"oracle11g",
"date-arithmetic",
""
] |
I am trying to run a query that fetches the time slots in which a worker is free. The SQL schema for the '**works**' table is as follows:
```
+------------+----------+------------+----------+
| workID | workerID | workerType | workTime |
+------------+----------+------------+----------+
| 0000000001 | 1 | agents | 1 |
| 0000000002 | 1 | agents | 2 |
| 0000000003 | 1 | agents | 4 |
| 0000000004 | 1 | agents | 4 |
+------------+----------+------------+----------+
```
Another table '**worker**' is as:
```
+----------+------------+
| workerID | workerName |
+----------+------------+
| 1 | John Doe |
+----------+------------+
```
I need the output as follows:
```
+------------+-----------+-----------+-----------+-----------+
| workerName | timeslot1 | timeslot2 | timeslot3 | timeslot4 |
+------------+-----------+-----------+-----------+-----------+
| John Doe | Occupied | Occupied | Free | Occupied |
+------------+-----------+-----------+-----------+-----------+
```
But with my query, if I `GROUP BY wo.workerID` then I get
```
+------------+-----------+-----------+-----------+-----------+
| workerName | timeslot1 | timeslot2 | timeslot3 | timeslot4 |
+------------+-----------+-----------+-----------+-----------+
| John Doe | Free | Free | Free | Occupied |
+------------+-----------+-----------+-----------+-----------+
```
**MY TRY:**
[SQLFIDDLE](http://www.sqlfiddle.com/#!9/33230/4/0)
```
SELECT wo.workerName,
CASE
WHEN w.workTime = 1 THEN 'Occupied'
ELSE 'Free'
END AS timeSlot1,
CASE
WHEN w.workTime = 2 THEN 'Occupied'
ELSE 'Free'
END AS timeSlot2,
CASE
WHEN w.workTime = 3 THEN 'Occupied'
ELSE 'Free'
END AS timeSlot3,
CASE
WHEN w.workTime = 4 THEN 'Occupied'
ELSE 'Free'
END AS timeSlot4
FROM works AS w
INNER JOIN workers AS wo ON wo.workerID = w.workerID
WHERE w.workerType = 'agents' AND
w.workerID = 1
GROUP BY wo.workerID;
```
|
Just add the `MAX()` function for pivoting:
```
SELECT wo.workerName,
max( CASE
WHEN w.workTime = 1 THEN 'Occupied'
ELSE 'Free'
END) AS timeSlot1,
max(CASE
WHEN w.workTime = 2 THEN 'Occupied'
ELSE 'Free'
END) AS timeSlot2,
max(CASE
WHEN w.workTime = 3 THEN 'Occupied'
ELSE 'Free'
END) AS timeSlot3,
max(CASE
WHEN w.workTime = 4 THEN 'Occupied'
ELSE 'Free'
END) AS timeSlot4
FROM works AS w
INNER JOIN workers AS wo ON wo.workerID = w.workerID
WHERE w.workerType = 'agents' AND
w.workerID = 1
```
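Why `MAX()` works here: 'Occupied' sorts after 'Free', so MAX picks 'Occupied' whenever any row in the group hits that slot. A minimal check in Python with SQLite as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE workers(workerID INTEGER, workerName TEXT);
CREATE TABLE works(workerID INTEGER, workTime INTEGER);
INSERT INTO workers VALUES (1, 'John Doe');
INSERT INTO works VALUES (1,1), (1,2), (1,4), (1,4);
""")
row = conn.execute("""
    SELECT wo.workerName,
           MAX(CASE WHEN w.workTime = 1 THEN 'Occupied' ELSE 'Free' END),
           MAX(CASE WHEN w.workTime = 2 THEN 'Occupied' ELSE 'Free' END),
           MAX(CASE WHEN w.workTime = 3 THEN 'Occupied' ELSE 'Free' END),
           MAX(CASE WHEN w.workTime = 4 THEN 'Occupied' ELSE 'Free' END)
    FROM works w JOIN workers wo ON wo.workerID = w.workerID
    GROUP BY wo.workerID
""").fetchone()
print(row)  # ('John Doe', 'Occupied', 'Occupied', 'Free', 'Occupied')
```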
|
<http://www.sqlfiddle.com/#!9/33230/11>
```
SELECT wo.workerName,
IF(SUM(IF(w.workTime = 1,1,0)),'Occupied','Free') AS timeSlot1,
IF(SUM(IF(w.workTime = 2,1,0)),'Occupied','Free') AS timeSlot2,
IF(SUM(IF(w.workTime = 3,1,0)),'Occupied','Free') AS timeSlot3,
IF(SUM(IF(w.workTime = 4,1,0)),'Occupied','Free') AS timeSlot4
FROM works AS w
INNER JOIN workers AS wo ON wo.workerID = w.workerID
WHERE w.workerType = 'agents' AND
w.workerID = 1
GROUP BY w.workerID
```
|
Combine multiple rows into one column
|
[
"",
"mysql",
"sql",
""
] |
I have a table `T` like the following:
# T Table
```
EMPLID CODE DT
101 PPP 01-JAN-15
101 PPP 02-JAN-15
101 PPP 03-JAN-15
101 OOO 04-JAN-15
101 OOO 05-JAN-15
101 PPP 06-JAN-15
101 PPP 07-JAN-15
101 PPP 08-JAN-15
101 PPP 09-JAN-15
```
what I want is a result like the following:
```
EMPLID CODE RNK DT
101 PPP 1 01-JAN-15
101 PPP 1 02-JAN-15
101 PPP 1 03-JAN-15
101 OOO 2 04-JAN-15
101 OOO 2 05-JAN-15
101 PPP 3 06-JAN-15
101 PPP 3 07-JAN-15
101 PPP 3 08-JAN-15
101 PPP 3 09-JAN-15
```
but I only seem to be getting results like the following:
```
SELECT EMPLID, CODE, DENSE_RANK() OVER(ORDER BY CODE) AS RNK, DT
FROM T;
EMPLID CODE RNK DT
101 OOO 1 05-JAN-15
101 OOO 1 04-JAN-15
101 PPP 2 03-JAN-15
101 PPP 2 08-JAN-15
101 PPP 2 09-JAN-15
101 PPP 2 06-JAN-15
101 PPP 2 07-JAN-15
101 PPP 2 02-JAN-15
101 PPP 2 01-JAN-15
```
Since the `PPP` values repeat after the `OOO` values but at later dates, I'm not sure how to group/order it so that I get the `RNK` field to consider the second set of `PPP` values as a third group.
Here is the code for the table to make this a little easier:
```
WITH
T AS(
SELECT '101' AS EMPLID, 'PPP' AS CODE, '01-JAN-15' AS DT FROM DUAL
UNION ALL
SELECT '101', 'PPP', '02-JAN-15' FROM DUAL
UNION ALL
SELECT '101', 'PPP', '03-JAN-15' FROM DUAL
UNION ALL
SELECT '101', 'OOO', '04-JAN-15' FROM DUAL
UNION ALL
SELECT '101', 'OOO', '05-JAN-15' FROM DUAL
UNION ALL
SELECT '101', 'PPP', '06-JAN-15' FROM DUAL
UNION ALL
SELECT '101', 'PPP', '07-JAN-15' FROM DUAL
UNION ALL
SELECT '101', 'PPP', '08-JAN-15' FROM DUAL
UNION ALL
SELECT '101', 'PPP', '09-JAN-15' FROM DUAL
)
SELECT *
FROM T;
```
|
I think you need two passes: First find all rows where the code changes, then build a group on this.
```
select emplid, code, sum(change) over (order by dt) as rnk, dt
from
(
select
emplid,
code,
case when lag(code) over (order by dt) = code then 0 else 1 end as change,
dt
from t
order by dt
);
```
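This LAG-plus-running-SUM ("gaps and islands") pattern can be checked in Python with SQLite 3.25+ as a stand-in (ISO date strings are used so that text ordering matches date ordering):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t(emplid TEXT, code TEXT, dt TEXT);
INSERT INTO t VALUES
    ('101','PPP','2015-01-01'), ('101','PPP','2015-01-02'),
    ('101','PPP','2015-01-03'), ('101','OOO','2015-01-04'),
    ('101','OOO','2015-01-05'), ('101','PPP','2015-01-06'),
    ('101','PPP','2015-01-07');
""")
rows = conn.execute("""
    SELECT emplid, code, SUM(change) OVER (ORDER BY dt) AS rnk, dt
    FROM (SELECT emplid, code, dt,
                 CASE WHEN LAG(code) OVER (ORDER BY dt) = code
                      THEN 0 ELSE 1 END AS change
          FROM t)
""").fetchall()
ranks = [r[2] for r in rows]
print(ranks)  # [1, 1, 1, 2, 2, 3, 3] -- the second PPP run gets its own rank
```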
|
Recursive solution is also possible:
```
with s (u, s, a, n) as (
select emplid, code, 1, dt from t where dt = date '2015-01-01'
union all select emplid, code, a+decode(code, s, 0, 1), dt from t join s on dt=n+1 )
select * from s
```
[SQLFiddle](http://sqlfiddle.com/#!4/c159f/1)
|
How to Dense_Rank() Sets of Values that Repeat
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I'm quite new to database programming and I am wondering what the negative effects of indexes are. As far as I understand, indexes speed up operations that have to search the database for a specific value (for example a SELECT).
Consider this example:
For the table `Example`, with an index on column `user_name`, the operation:
```
SELECT TestField FROM Example WHERE user_name=XXXX
```
will be faster as a result of the index.
My question is: what are the cons of using indexes? If indexes only gave us pros (performance gains), why aren't they enabled by default?
|
Well, you could probably fill books about indices, but in short here are a few things to think about when creating an index:
While it (mostly) speeds up a SELECT, it slows down inserts, updates and deletes, because the database engine has to write not only the data but also the index.
An index needs space on disk and (much more importantly) in RAM. An index that cannot be held in RAM is pretty useless.
An index on a column with only a few distinct values doesn't speed up selects, because it cannot filter out many rows (for example a column "gender", which usually has only two distinct values: male, female).
If you use MySQL, you can check whether the engine uses an index by adding `EXPLAIN` before the select - for your example above: `EXPLAIN SELECT TestField FROM Example WHERE user_name=XXXX`
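SQLite exposes the same information via `EXPLAIN QUERY PLAN`, which is easy to try from Python: before the index the planner scans the table, afterwards it searches the index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example(user_name TEXT, testfield TEXT)")
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT testfield FROM example WHERE user_name = 'x'"
).fetchall()
conn.execute("CREATE INDEX idx_user ON example(user_name)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT testfield FROM example WHERE user_name = 'x'"
).fetchall()
print(plan_before[0][-1])  # full table scan
print(plan_after[0][-1])   # search using idx_user
```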
|
**What are indexes for, what are they in database?**
Without an index on column `user_name`, the system would have to scan the entire `Example` table row by row to find all matching entries. If only a few rows match, this is clearly an inefficient way of obtaining them.
With an index, however, you redirect the search to a different structure, a tree, which has fast lookups and very small depth.
Keep in mind that indexes are pure redundancy. A database index is just like a telephone book or the index of any book you might read (probably only part of, to quickly find what you're looking for).
If you are interested in a chapter of a book, the index lets you find it relatively quickly so that you don't have to skim through many pages to get there.
**Why aren't indexes created by default?**
An index is a data structure that is created alongside a table and maintains itself whenever the table changes. Its existence implies extra storage use.
If you indexed every column of a large table, the storage needed to keep the indexes would far exceed the size of the table itself.
Self-maintenance of an index structure also means that whenever an `UPDATE, INSERT, DELETE` occurs, the index has to be updated (this is done automatically and does not require your action), but that **costs time**, which means these operations are performed more slowly.
There are also situations when you need to retrieve most of the table (e.g. 90% of rows will be in the output) or the entire table; in that case a sequential scan of the whole table (the behaviour without an index) is more efficient than traversing the index tree and following the leaf-node chain.
|
Why and where to use INDEXes - pros and cons
|
[
"",
"mysql",
"sql",
"select",
"indexing",
""
] |
Is there a way to convert `UNION ALL` to `JOIN` and still get similar output.
Here is an example query to illustrate:
```
DECLARE @customerIdentifierId BIGINT
SET @customerIdentifierId = 2
SELECT 1 AS Tag, NULL AS Parent, cust.CustomerId AS CustomerId,
NULL AS CustomerIdentifierId, NULL AS OrderDetailId
FROM Customer.CustomerIdentifier custIdent
JOIN Customer.Customer cust
ON cust.CurrentCustomerIdentifierId = custIdent.CustomerIdentifierId
JOIN detail.OrderDetail detail
ON detail.CustomerIdentifierId = custIdent.CustomerIdentifierId
WHERE custIdent.CustomerIdentifierId = @customerIdentifierId
UNION ALL
SELECT 2, 1, NULL, custIdent.CustomerIdentifierId, null
FROM Customer.CustomerIdentifier custIdent
JOIN Customer.Customer cust
ON cust.CurrentCustomerIdentifierId = custIdent.CustomerIdentifierId
JOIN detail.OrderDetail detail
ON detail.CustomerIdentifierId = custIdent.CustomerIdentifierId
WHERE custIdent.CustomerIdentifierId = @customerIdentifierId
UNION ALL
SELECT 3, 1, NULL, null, detail.OrderDetailId
FROM Customer.CustomerIdentifier custIdent
JOIN Customer.Customer cust
ON cust.CurrentCustomerIdentifierId = custIdent.CustomerIdentifierId
JOIN detail.OrderDetail detail
ON detail.CustomerIdentifierId = custIdent.CustomerIdentifierId
WHERE custIdent.CustomerIdentifierId = @customerIdentifierId
```
It is not so important that the nulls have null in them, but I need the separate rows that the union gives.
I tried doing a `CROSS JOIN`, and it did not work out. I am hoping there is some other `SQL` trick that can do it (`CROSS APPLY`?)
In case it matters, my end goal is to get this to work in an indexed (materialized) view in `SQL Server`.
This is the output I am looking for:
```
Tag Parent CustomerId CustomerIdentifierId OrderDetailId
----------- ----------- -------------------- -------------------- --------------------
1 NULL 4 NULL NULL
1 NULL 4 NULL NULL
1 NULL 4 NULL NULL
1 NULL 4 NULL NULL
1 NULL 4 NULL NULL
2 1 NULL 2 NULL
2 1 NULL 2 NULL
2 1 NULL 2 NULL
2 1 NULL 2 NULL
2 1 NULL 2 NULL
3 2 NULL NULL 2
3 2 NULL NULL 14
3 2 NULL NULL 26
3 2 NULL NULL 38
3 2 NULL NULL 50
```
The Tables are a parent to many children relationship:
1 Customer to Many CustomerIdentifiers
1 CustomerIdentifier to Many OrderDetails
(It makes a tree)
Here is a [link](https://gist.githubusercontent.com/Vaccano/3d66a6f97e470ab104ca/raw/d843952d5635fb184d7c6c64599a5b26d22afdff/gistfile1.txt) to the sql needed to create the tables to make my above query work:
|
I've never done an indexed view, but you could rewrite the query:
```
INSERT INTO @xmlDataTable(Tag, Parent, [Customer!1!CustomerId],
      [CustomerIdentifier!2!CustomerIdentifierId], [OrderDetail!3!OrderDetailId])
SELECT rows.*
FROM Customer.CustomerIdentifier custIdent
JOIN Customer.Customer cust
    ON cust.CurrentCustomerIdentifierId = custIdent.CustomerIdentifierId
JOIN [order].OrderDetail detail
    ON detail.CustomerIdentifierId = custIdent.CustomerIdentifierId
OUTER APPLY (
    SELECT 1, NULL, cust.CustomerId, NULL, NULL
    UNION ALL
    SELECT 2, 1, NULL, custIdent.CustomerIdentifierId, NULL
    UNION ALL
    SELECT 3, 1, NULL, NULL, detail.OrderDetailId
) rows
WHERE detail.CustomerIdentifierId = @customerIdentifierId
```
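The OUTER APPLY above effectively cross joins each source row with a three-row tag list. The same idea in portable SQL, sketched in Python with SQLite and a made-up two-row `detail` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE detail(customer_id INTEGER, ident_id INTEGER, detail_id INTEGER);
INSERT INTO detail VALUES (4, 2, 2), (4, 2, 14);
""")
rows = conn.execute("""
    SELECT tag.n AS tag,
           CASE tag.n WHEN 1 THEN d.customer_id END,
           CASE tag.n WHEN 2 THEN d.ident_id    END,
           CASE tag.n WHEN 3 THEN d.detail_id   END
    FROM detail d
    CROSS JOIN (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3) tag
    ORDER BY tag.n, d.detail_id
""").fetchall()
print(rows)  # three tagged rows per source row, NULLs outside each tag's column
```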
|
Here's some pseudo code of a hacky way you could do it:
```
SELECT COALESCE(t1.Col1, t2.Col1, tN.Col1) AS Col1
, COALESCE(t1.Col2, t2.Col2, tN.Col2) AS Col2
, COALESCE(t1.ColN, t2.ColN, tN.ColN) AS ColN
FROM t1
FULL OUTER JOIN t2 ON 1=0
FULL OUTER JOIN tN ON 1=0
```
I haven't explored whether this would violate the requirements of an indexed view, but it is a way you can duplicate the results of a UNION ALL using JOINs.
|
Convert UNION ALLs to JOINs
|
[
"",
"sql",
"sql-server",
"join",
"sql-server-2012",
"union",
""
] |
I need to select, among other fields, the age of a customer at the time he/she bought a product of a specific brand, where the customer was, for example, between 30 and 50 years old. I wrote this query (getAge just uses DATEDIFF to return the age in years):
```
SELECT DISTINCT customers.FirstName, customers.LastName,
products.ProductName,
dbo.getAge(customers.BirthDate,sales.Datekey)
AS Age_when_buying
FROM sales
INNER JOIN dates ON sales.Datekey=dates.Datekey
INNER JOIN customers ON sales.CustomerKey=customers.CustomerKey
INNER JOIN products ON sales.ProductKey=products.ProductKey
INNER JOIN stores ON sales.StoreKey=stores.StoreKey
WHERE stores.StoreName = 'DribleCom Europe Online Store' AND
products.BrandName = 'Proseware' AND
dbo.getAge(customers.BirthDate, sales.Datekey) >= 30 AND
dbo.getAge(customers.BirthDate, sales.Datekey) <=50
```
It works, but I calculate the age three times. I tried to assign Age_when_buying to a variable, but it didn't work. My next thought was to use a cursor, but I feel there is a simpler way I am missing. The question is: what is the appropriate way to solve this, and what are my options?
|
You should use Cross Apply.
```
SELECT DISTINCT customers.FirstName, customers.LastName,
products.ProductName,
age.age AS Age_when_buying
FROM sales
INNER JOIN dates ON sales.Datekey=dates.Datekey
INNER JOIN customers ON sales.CustomerKey=customers.CustomerKey
INNER JOIN products ON sales.ProductKey=products.ProductKey
INNER JOIN stores ON sales.StoreKey=stores.StoreKey
CROSS APPLY
(select dbo.getAge(customers.BirthDate, sales.Datekey) as age) age
WHERE stores.StoreName = 'DribleCom Europe Online Store' AND
products.BrandName = 'Proseware' AND
age.age >= 30 AND
age.age <=50
```
|
Assuming that you only have a limited number of filters you'd like to apply, you could use a [Common Table Expression](https://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396) to restructure your query.
I personally find it easier to see all the joins and such in one place, while the filters are similarly grouped together at the bottom...
```
WITH CTE AS(
select customers.FirstName
, customers.LastName
, dbo.getAge(customers.BirthDate,sales.Datekey) AS Age_when_buying
, stores.StoreName
, products.BrandName
, products.ProductName
from sales
INNER JOIN customers on sales.CustomerKey=customers.CustomerKey
INNER JOIN products ON sales.ProductKey = products.ProductKey
INNER JOIN stores ON sales.StoreKey = stores.StoreKey
)
SELECT DISTINCT FirstName, LastName, ProductName, Age_when_buying
FROM CTE
WHERE StoreName = 'DribleCom Europe Online Store'
AND BrandName = 'Proseware'
AND Age_when_buying BETWEEN 30 AND 50
```
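The CTE trick - compute the value once, filter on its alias outside - can be sketched in Python with SQLite (the schema is collapsed to one table and the age is approximated as a plain year difference, since `getAge` isn't available here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales(customer TEXT, birth TEXT, datekey TEXT);
INSERT INTO sales VALUES ('Ann','1980-06-01','2015-04-01'),
                         ('Bob','1990-06-01','2015-04-01'),
                         ('Cal','1950-06-01','2015-04-01');
""")
rows = conn.execute("""
    WITH cte AS (
        SELECT customer,
               (CAST(strftime('%Y', datekey) AS INTEGER)
                - CAST(strftime('%Y', birth) AS INTEGER)) AS age_when_buying
        FROM sales)
    SELECT customer, age_when_buying
    FROM cte
    WHERE age_when_buying BETWEEN 30 AND 50
""").fetchall()
print(rows)  # only Ann (35) falls in the 30-50 range
```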
|
select based on calculated value, optimization
|
[
"",
"sql",
"sql-server",
"database",
"performance",
""
] |
I have data in the form of the table below. I have applied the SUM aggregate function to the last column, Quantity(MG).
```
ITEM STORE Quantity(MG)
Rice Bags ABC 150
Sugar Bags ABC 200
Rice Bags NEW 50
Sugar Bags New 20
Rice Bags Alpha 25
```
My Select SQL looks like this.
```
Select ITEM, STORE, SUM([Quantity(MG)])
From....
.........
.........
Group by ITEM, STORE
Having SUM([Quantity(MG)]) > 50
```
The problem I am facing is with the HAVING clause: I want the SQL to compare the grand sum of all quantity values for a given item (say Rice Bags, which is 150 + 50 + 25 = 225). With the above query it's not working as expected: `HAVING SUM([Quantity(MG)]) > 50` compares 50 with each (ITEM, STORE) group and skips the groups where the Rice Bags quantity is less than 50 (in this case row #5). Ideally this row should not be skipped, because the aggregate of the Rice Bags quantity is 225, so no Rice Bags row should be skipped.
What's the solution for applying such a filter with this GROUP BY setting?
|
You need to apply a Group Sum on `Quantity(MG)`:
```
select ITEM, STORE, sumQuantity
from
(
Select ITEM, STORE, SUM(Quantity) as sumQuantity
,SUM(SUM(quantity)) OVER (PARTITION BY ITEM) as groupSum
From....
.........
.........
Group by ITEM, STORE
) as dt
where groupSum > 50
```
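The same group-sum-over-partition idea, sketched in Python with SQLite as a stand-in: the inner query computes per-store subtotals, the window adds the per-item grand total, and the outer filter keeps or drops whole items. Note that the 25-quantity Rice Bags row survives, which is exactly what the plain HAVING could not do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock(item TEXT, store TEXT, qty INTEGER);
INSERT INTO stock VALUES ('Rice Bags','ABC',150), ('Rice Bags','NEW',50),
                         ('Rice Bags','Alpha',25), ('Sugar Bags','ABC',20),
                         ('Sugar Bags','NEW',20);
""")
rows = conn.execute("""
    SELECT item, store, sum_qty FROM (
        SELECT item, store, sum_qty,
               SUM(sum_qty) OVER (PARTITION BY item) AS group_sum
        FROM (SELECT item, store, SUM(qty) AS sum_qty
              FROM stock GROUP BY item, store))
    WHERE group_sum > 50
    ORDER BY item, store
""").fetchall()
print(rows)  # all Rice Bags rows kept (total 225); Sugar Bags dropped (total 40)
```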
|
sum() over(partition by) will do the job for you:
```
Select ITEM, STORE, SUM([Quantity(MG)]) over(partition by item,store) as sm
From table
where [Quantity(MG)] < 50
```
**Edited:**
```
select ITEM, STORE, [Quantity(MG)], grp_sum from
(Select ITEM, STORE, [Quantity(MG)], SUM([Quantity(MG)]) over(partition by item,store) as grp_sum
From table)temp
where grp_sum < 50
```
|
Compare Grand total of "Group by subtotals" in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
"sql-server-2012",
""
] |
My MySQL table has three columns and ten rows:
```
ID NAME ParentID
1 Parent #1 NULL
2 Child #1 1
3 Child #1 1
4 Child #1 1
5 Child #1 2
6 Child #1 2
7 Child #1 3
8 Child #1 3
9 Child #1 3
10 Child #1 3
```
After adding a new element on the front-end, I only get the ID of the element this new element belongs to.
Let's see an example. I'd like to add the new element to the `Child #1` that has `ID` equal to `3` and `ParentID` equal to `1`.
From the front-end, I only get the `ID` value (e.g. `3` in the example above). When inserting the new element into the table, I need to give it the right `ParentID`. What query should I use to achieve this? Please take a look at my query below:
```
INSERT INTO `tree` (`ID`, `Name`, `ParentID`)
VALUES (11, "The new element", "and here I'd like to give this new element the ParentID of the element with ID equals to 3");
```
|
You can do this with `insert select` statement:
```
insert into `tree`(`ID`, `Name`, `ParentID`)
select 11, "Inserted Element", `ParentID`
from `tree` where `ID` = 3
```
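A quick check of the INSERT ... SELECT pattern in Python with SQLite (column names lowercased for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tree(id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER);
INSERT INTO tree VALUES (1, 'Parent #1', NULL), (2, 'Child', 1), (3, 'Child', 1);
""")
conn.execute("""
    INSERT INTO tree(id, name, parent_id)
    SELECT 11, 'The new element', parent_id FROM tree WHERE id = 3
""")
row = conn.execute("SELECT parent_id FROM tree WHERE id = 11").fetchone()
print(row)  # (1,) -- the ParentID was copied from the row with ID 3
```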
|
Usually this needs to be done at the application level; however, in the same query you can use `insert into .. select from`:
```
insert into `tree`(`ID`, `Name`, `ParentID`)
select
11,
"Inserted Element",
ParentID
from(
select ParentID from tree where ID = 3
)x
```
|
Insert into MySQL table with select statement
|
[
"",
"mysql",
"sql",
""
] |
I am a Computer Science student. Recently I have been trying to learn databases. I found a [link](http://www.orafaq.com/faq/difference_between_truncate_delete_and_drop_commands) describing the difference between the DELETE and DROP commands. I understand most of the differences between them, the most important one being that DELETE removes only data while DROP removes the table/object too.
But I cannot understand one difference: a DELETE command fires a DML trigger while DROP doesn't fire any DML trigger. The link only states this fact without any explanation. Can someone explain it a bit, so that I can understand it better?
|
> "DELETE command fire a DML trigger where DROP doesn't fire any DML trigger"'
Oracle and some other databases provide triggers. Triggers are nothing but subroutines that get executed on DML actions like INSERT, UPDATE, DELETE.
Say, on updating some row in a table, a trigger can be invoked.
Similarly, when you execute a DELETE statement, a trigger can be executed. To gain more understanding, read through the documentation on triggers.
|
DROP removes the table along with its data and other structures, such as constraints, indexes, grants and also triggers. So the trigger doesn't fire because, in the process, it is also removed.
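This is easy to observe in any database with triggers; here is a minimal sketch in Python with SQLite. The DELETE fires the DML trigger, while DROP TABLE silently removes the trigger along with the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t(x INTEGER);
CREATE TABLE audit(msg TEXT);
CREATE TRIGGER trg AFTER DELETE ON t
BEGIN
    INSERT INTO audit VALUES ('row deleted');
END;
INSERT INTO t VALUES (1), (2);
DELETE FROM t;  -- the DML trigger fires once per deleted row
""")
fired = conn.execute("SELECT COUNT(*) FROM audit").fetchone()[0]
conn.execute("DROP TABLE t")  # DDL: removes the table AND its trigger together
triggers_left = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type = 'trigger'").fetchone()[0]
print(fired, triggers_left)  # 2 audit rows from DELETE; 0 triggers after DROP
```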
|
Understanding DELETE and DROP command
|
[
"",
"sql",
"database",
"oracle",
"sql-delete",
"sql-drop",
""
] |
I need to compare two tables in a database that store similar info.
In some cases the `ID` and `Segment` (`SUBSTRING (data, 1, 1)`) match, but the Service Number and Service Date are different for each `ID`.
In my example data, two rows should be selected: the rows whose `ID` and `Segment` match but whose Service Number and Service Date differ.
I have tried something like this:
```
SELECT
T.ID,
SUBSTRING(data, 1, 1) AS Seg,
SUBSTRING(data, 2, 4) AS sn,
CONVERT(DATE, SUBSTRING(data, 11, 8)),
ServiceNumber, ServiceDate
FROM
P
JOIN
T ON p.ID = t.ID
WHERE
SUBSTRING(data, 1, 1) != Segment
AND SUBSTRING(data, 2, 4) != ServiceNumber
AND CONVERT(DATE, SUBSTRING(data, 11, 8)) != ServiceDate;
```
The SQL Fiddle link is here:
<http://sqlfiddle.com/#!6/d6aee/10>
Any ideas would be more then welcome!
|
Assuming that there is an error in your pasted example, viz that the second row shouldn't be highlighted as the ID, Segments and the Date are the same, you can simplify the problem by breaking out the encoded column into its components with a CTE or derived table, and then ease the comparison:
```
WITH FixedTableP AS
(
SELECT P.ID,
SUBSTRING ( [data] ,1 , 1 ) AS PSegment,
SUBSTRING( [data] ,2 , 4 ) AS PServiceNumber,
CONVERT(date,SUBSTRING ( data ,11 , 8 )) AS PServiceDate
FROM P
)
SELECT T.ID, P.ID, T.ServiceNumber, P.PServiceNumber, T.ServiceDate, P.PServiceDate
FROM T
JOIN FixedTableP P ON p.ID=t.ID AND p.PSegment = t.Segment
WHERE P.PServiceDate <> T.ServiceDate
AND P.PServiceNumber <> T.ServiceNumber;
```
[SqlFiddle here](http://sqlfiddle.com/#!6/f1ace/3)
|
Do you want to select only the difference?
```
WITH cte AS (
SELECT ID,
CAST(SUBSTRING ( data ,1 , 1 ) AS INT) AS Seg,
CAST(LTRIM(SUBSTRING ( data ,2 , 4 )) AS VARCHAR(25)) AS sn,
CONVERT(date,SUBSTRING ( data ,11 , 8 )) AS ServiceDate
from P
)
SELECT *
FROM t
WHERE NOT EXISTS (
SELECT 1 FROM cte c
WHERE c.ID = t.ID
AND c.Seg = t.Segment
AND c.sn = t.ServiceNumber
AND c.ServiceDate = t.ServiceDate)
```
<http://sqlfiddle.com/#!6/d6aee/30>
|
Match and compare two tables in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have 3 tables where I am trying to put a join query like below.
The below is the Section table containing 3 sections.
```
Section
*****************************
* section_id * section_name *
*****************************
* 1 * A *
* 2 * B *
* 3 * C *
*****************************
```
The below is the section\_subject table. 1st section contains 2 subjects, 2nd section contains 2 subjects and 3rd section contains 3 subjects.
```
Section_Subject
***********************************
* ss_id * section_id * subject_id *
***********************************
* 1 * 1 * 8 *
* 2 * 1 * 9 *
* 3 * 2 * 6 *
* 4 * 2 * 5 *
* 5 * 3 * 2 *
* 6 * 3 * 3 *
* 7 * 3 * 4 *
***********************************
```
The below is the section\_batch table. 3rd section alone contains 2 batches
```
Section_Batch
*********************************
* sb_id * section_id * batch_id *
*********************************
* 1 * 3 * 6 *
* 2 * 3 * 7 *
*********************************
```
I want a query to yield the below result
```
**************************************************************
* section_id * section_name * count_subjects * count_batches *
**************************************************************
* 1 * A * 2 * 0 *
* 2 * B * 2 * 0 *
* 3 * C * 3 * 2 *
**************************************************************
```
I know that we can do some kind of sub query and achieve the above result. But how to get the result using left join and group query?
|
I believe using `count(distinct)` will get you what you need. You have to use `distinct` because the joins have a multiplier effect where a section has more than one subject and more than one batch.
```
select
s.section_id,
  min(s.section_name) as section_name,
  count(distinct ss.subject_id) as subject_count,
  count(distinct sb.batch_id) as batch_count
from
Section as s
left join Section_Subject as ss on ss.section_id = s.section_id
left join Section_Batch as sb on sb.section_id = s.section_id
group by
s.section_id
```
By the way, the `Section_Subject` join could probably be an inner join, but the `Section_Batch` join must remain a left join, otherwise sections without batches would be dropped.
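Both the multiplier effect and the `count(distinct)` fix can be checked with a quick sketch using Python's built-in `sqlite3` module (the data mirrors the question; `sqlite3` is just an illustration vehicle):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Section (section_id INT, section_name TEXT);
    CREATE TABLE Section_Subject (ss_id INT, section_id INT, subject_id INT);
    CREATE TABLE Section_Batch (sb_id INT, section_id INT, batch_id INT);
    INSERT INTO Section VALUES (1,'A'),(2,'B'),(3,'C');
    INSERT INTO Section_Subject VALUES
        (1,1,8),(2,1,9),(3,2,6),(4,2,5),(5,3,2),(6,3,3),(7,3,4);
    INSERT INTO Section_Batch VALUES (1,3,6),(2,3,7);
""")

# Section 3 produces 3 x 2 = 6 joined rows; DISTINCT undoes the multiplication.
rows = con.execute("""
    SELECT s.section_id,
           MIN(s.section_name),
           COUNT(DISTINCT ss.subject_id),
           COUNT(DISTINCT sb.batch_id)
    FROM Section s
    LEFT JOIN Section_Subject ss ON ss.section_id = s.section_id
    LEFT JOIN Section_Batch sb ON sb.section_id = s.section_id
    GROUP BY s.section_id
    ORDER BY s.section_id
""").fetchall()
print(rows)  # [(1, 'A', 2, 0), (2, 'B', 2, 0), (3, 'C', 3, 2)]
```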
|
You need to `left join` and `group by` each table separately to get the counts, and then join the results.
SQL Fiddle: <http://www.sqlfiddle.com/#!9/ea4ee/12>
```
select T1.section_id,
T1.section_name,
T1.subjects_count,
T2.batch_count
FROM (
select S.section_id,
S.section_name,
COUNT(SS.subject_id) as subjects_count
from Section S
LEFT JOIN Section_Subject SS
on S.section_id = SS.section_id
group by S.section_id, S.section_name )T1
LEFT JOIN (
select S.section_id,
S.section_name,
COUNT(SB.batch_id ) as batch_count
from Section S
LEFT JOIN Section_Batch SB
on S.section_id = SB.section_id
group by S.section_id, S.section_name
) T2
on T1.section_id = T2.section_id
```
|
MySQL query with left join, count and group with 3 tables
|
[
"",
"mysql",
"sql",
"group-by",
"left-join",
""
] |
I have a mysql table as follow
```
id| reagent1 | reagent 2
1 | coffee | milk
2 | sugar | coffee
3 | coffee | salt
```
If I would like to get the reagents of coffee, the result should be
```
reagentofcoffee
milk
sugar
salt
```
How can I make a query to get the expected result? I am new to MySQL and I have tried many times.
|
You can use `case` :
```
select case when reagent1 = 'coffee' then reagent2
when reagent2 = 'coffee' then reagent1 end as reagentofcoffee
from tbl
```
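The `CASE` approach runs unchanged on SQLite too; a sketch via Python's `sqlite3` module (an illustration, not part of the original answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INT, reagent1 TEXT, reagent2 TEXT)")
con.executemany("INSERT INTO tbl VALUES (?,?,?)",
                [(1, "coffee", "milk"),
                 (2, "sugar", "coffee"),
                 (3, "coffee", "salt")])

# Whichever column holds 'coffee', return the other one.
rows = con.execute("""
    SELECT CASE WHEN reagent1 = 'coffee' THEN reagent2
                WHEN reagent2 = 'coffee' THEN reagent1 END AS reagentofcoffee
    FROM tbl
""").fetchall()
reagents = [r[0] for r in rows]
print(reagents)  # ['milk', 'sugar', 'salt']
```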
|
```
SELECT IF(reagent1 = 'coffee', reagent2, reagent1) as reagent FROM table WHERE reagent1 = 'coffee' OR reagent2 = 'coffee'
```
|
How to query from two column of a same table where equal to one value
|
[
"",
"mysql",
"sql",
""
] |
I need to write SQL query that will get the employees with the highest pay from each department separately and return name and pay of the employee and the department name they are in.
```
employees
+----+-------+------+---------------+
| id | name | pay | department_id |
+----+-------+------+---------------+
| 1 | Bob | 1200 | 1 |
| 2 | Rob | 600 | 2 |
| 3 | Tom | 800 | 2 |
| 4 | Pam | 900 | 1 |
| 5 | Dave | 1200 | 1 |
+----+-------+------+---------------+
departments
+----+-----------+
| id | name |
+----+-----------+
| 1 | IT |
| 2 | Marketing |
+----+-----------+
```
This query returns just the first employee from each department with the highest pay, but I would like to get all the employees that have the highest pay: in this case both Bob and Dave from IT, and Tom from Marketing.
```
SELECT d.name,e.name,e.pay FROM employees e JOIN departments d
ON e.department_id = d.id GROUP BY d.id HAVING MAX(e.pay)
```
The correct result should be:
```
+-----------+-------+------+
| IT | Bob | 1200 |
| IT | Dave | 1200 |
| Marketing | Tom | 800 |
+-----------+-------+------+
```
|
Redefinition of "having the maximum salary" is: "there should be nobody with a higher salary" (within the same department)
```
SELECT d.name, e.name,e.pay
FROM employees e
JOIN departments d ON e.department_id = d.id
WHERE NOT EXISTS (
SELECT 42 FROM employees x
WHERE x.department_id = e.department_id -- same dept
AND x.pay > e.pay -- higher pay
);
```
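This `NOT EXISTS` formulation is portable; here is a sketch running it against the question's data with Python's built-in `sqlite3` module (for illustration only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE departments (id INT, name TEXT);
    CREATE TABLE employees (id INT, name TEXT, pay INT, department_id INT);
    INSERT INTO departments VALUES (1,'IT'),(2,'Marketing');
    INSERT INTO employees VALUES
        (1,'Bob',1200,1),(2,'Rob',600,2),(3,'Tom',800,2),
        (4,'Pam',900,1),(5,'Dave',1200,1);
""")

# Keep an employee only if nobody in the same department earns more;
# ties (Bob and Dave) are both kept.
rows = con.execute("""
    SELECT d.name, e.name, e.pay
    FROM employees e
    JOIN departments d ON e.department_id = d.id
    WHERE NOT EXISTS (
        SELECT 1 FROM employees x
        WHERE x.department_id = e.department_id AND x.pay > e.pay)
    ORDER BY d.name, e.name
""").fetchall()
print(rows)
# [('IT', 'Bob', 1200), ('IT', 'Dave', 1200), ('Marketing', 'Tom', 800)]
```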
|
This requires a subquery. It works with most DBs, but I don't know about MySQL.
```
SELECT
d.name,
e.name,
e.pay
FROM
employees e JOIN departments d
ON e.department_id = d.id
WHERE
  (d.id, e.pay) IN
(SELECT
d.id,
MAX(e.pay)
FROM
employees e JOIN departments d
ON e.department_id = d.id
GROUP BY
d.id)
```
|
Multiple max values from joined tables
|
[
"",
"sql",
"postgresql",
""
] |
Given these two tables/sets with different groups of items, ~~how can I find which groups in `set1` span across more than a single group in `set2`?~~ how can I find the groups in `set1` which cannot be covered by a single group in `set2`?
e.g. for tables below, `A (1,2,5)` is the only group that spans across `s1`(**1**,**2**,3) and `s2`(2,3,4,**5**). `B` and `C` are not the answers because both are covered in a single group `s2`.
I would prefer to use `SQL` (`Sql Server 2008 R2` available).
Thanks.
```
set1 set2
+---------+----------+ +---------+----------+
| group | item | | group | item |
+---------+----------+ +---------+----------+
| A | 1 | | s1 | 1 |
| A | 2 | | s1 | 2 |
| A | 5 | | s1 | 3 |
| B | 4 | | s2 | 2 |
| B | 5 | | s2 | 3 |
| C | 3 | | s2 | 4 |
| C | 5 | | s2 | 5 |
+---------+----------+ +---------+----------+
```
Use this sqlfiddle to try: <http://sqlfiddle.com/#!6/fac8a/3>
Or use the script below to generate temp tables to try out the answers:
```
create table #set1 (grp varchar(5),item int)
create table #set2 (grp varchar(5),item int)
insert into #set1 select 'a',1 union select 'a',2 union select 'a',5 union select 'b',4 union select 'b',5 union select 'c',3 union select 'c',5
insert into #set2 select 's1',1 union select 's1',2 union select 's1',3 union select 's2',2 union select 's2',3 union select 's2',4 union select 's2',5
select * from #set1
select * from #set2
--drop table #set1
--drop table #set2
```
|
Select groups from `set1` for which there are no groups in `set2` for which all items in `set1` exists in `set2`:
```
select s1.grp from set1 s1
where not exists(
select * from set2 s2 where not exists(
select item from set1 s11
where s11.grp = s1.grp
except
select item from set2 s22
where s22.grp = s2.grp))
group by s1.grp
```
|
Ok. This is ugly, but it should work. I tried it in fiddle. I think it can be done through windowing, but I have to think about it.
Here is the ugly one for now.
```
WITH d1 AS (
SELECT set1.grp
, COUNT(*) cnt
FROM set1
GROUP BY set1.grp
), d2 AS (
SELECT set1.grp grp1
, set2.grp grp2
, COUNT(set1.item) cnt
FROM set1
INNER JOIN set2
ON set1.item = set2.item
GROUP BY set1.grp
, set2.grp
)
SELECT grp
FROM d1
EXCEPT
SELECT d1.grp
FROM d1
INNER JOIN d2
ON d2.grp1 = d1.grp
AND d2.cnt = d1.cnt
```
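The count-based approach above can be checked end to end against the sample data; here is a sketch using Python's built-in `sqlite3` module (for illustration; the CTEs are the same as in the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE set1 (grp TEXT, item INT);
    CREATE TABLE set2 (grp TEXT, item INT);
    INSERT INTO set1 VALUES ('a',1),('a',2),('a',5),('b',4),('b',5),('c',3),('c',5);
    INSERT INTO set2 VALUES ('s1',1),('s1',2),('s1',3),
                            ('s2',2),('s2',3),('s2',4),('s2',5);
""")

# A set1 group is "covered" when some set2 group matches every one of its
# items, i.e. the per-pair match count equals the group's item count.
rows = con.execute("""
    WITH d1 AS (
        SELECT grp, COUNT(*) AS cnt FROM set1 GROUP BY grp
    ), d2 AS (
        SELECT set1.grp AS grp1, set2.grp AS grp2, COUNT(set1.item) AS cnt
        FROM set1 JOIN set2 ON set1.item = set2.item
        GROUP BY set1.grp, set2.grp
    )
    SELECT grp FROM d1
    EXCEPT
    SELECT d1.grp FROM d1 JOIN d2 ON d2.grp1 = d1.grp AND d2.cnt = d1.cnt
""").fetchall()
print(rows)  # [('a',)]  -- only group 'a' is not covered by a single set2 group
```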
|
SQL intersect with group by
|
[
"",
"sql",
"sql-server",
"t-sql",
"group-by",
""
] |
I need to fix data in table "tag" using table "tag2",
by matching "tag.id" and "tag2.id"; where they match, replace "tag.name" with "tag2.name" in the "tag" table.
Table structures:
tag:
```
id name
1 test
2 test
3 test
4 Tom hancks
5 test
6 amazon
7 car
8 BMW
9 search
```
tag2:
```
id name
1 Google
2 yahoo
3 Microsoft
4 Tom hancks
5 facebook
```
to return "tag" table like this:
tag:
```
id name
1 Google
2 yahoo
3 Microsoft
4 Tom hancks
5 facebook
6 amazon
7 car
8 BMW
9 search
```
|
You can do it by using inner join.
```
update tag inner join tag2
on tag.id = tag2.id
set tag.name = tag2.name
```
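`UPDATE ... INNER JOIN` is MySQL syntax; on engines without it, the same fix-up can be written with a correlated subquery guarded by `EXISTS` so unmatched rows keep their original name. A sketch with Python's `sqlite3` module (an illustration, not MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tag (id INT, name TEXT);
    CREATE TABLE tag2 (id INT, name TEXT);
    INSERT INTO tag VALUES (1,'test'),(2,'test'),(6,'amazon');
    INSERT INTO tag2 VALUES (1,'Google'),(2,'yahoo');
""")

# Without the EXISTS guard, rows with no match would be set to NULL.
con.execute("""
    UPDATE tag
    SET name = (SELECT name FROM tag2 WHERE tag2.id = tag.id)
    WHERE EXISTS (SELECT 1 FROM tag2 WHERE tag2.id = tag.id)
""")
rows = con.execute("SELECT id, name FROM tag ORDER BY id").fetchall()
print(rows)  # [(1, 'Google'), (2, 'yahoo'), (6, 'amazon')]
```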
|
Try this:
```
update tag t1
inner join tag2 t2 on t1.id = t2.id set t1.name = t2.name
```
|
MySQL: fixing data using other table
|
[
"",
"mysql",
"sql",
""
] |
This should be simple. I don't know why I feel stumped.
I have two similar tables. Let's say they each have just 2 columns: PartNumber, and Order.
In a single statement, I want to select the PartNumbers from WorkItemPartsFiltered where Order = ABC, but if none exist, I want to select them from WorkItemParts where Order = ABC
```
--WorkItemParts
PartNumber | Order
123 | ABC
456 | ABC
789 | ABC
012 | ABC
123 | DEF
456 | DEF
389 | GHI
--WorkItemPartsFiltered
PartNumber | Order
123 | ABC
456 | ABC
789 | ABC
456 | DEF
389 | GHI
```
|
You can use a CASE statement with a JOIN like this. Note that it is the left-joined table whose `PartNumber` can be NULL, so that is the column to test (and `Order` is a reserved word, so bracket it):
```
SELECT WorkItemParts.[Order],
       CASE WHEN WorkItemPartsFiltered.PartNumber IS NULL
            THEN WorkItemParts.PartNumber
            ELSE WorkItemPartsFiltered.PartNumber END
FROM WorkItemParts
LEFT JOIN WorkItemPartsFiltered
ON (WorkItemPartsFiltered.[Order] = WorkItemParts.[Order])
```
|
```
SELECT WorkItemParts.[Order]
    , COALESCE( WorkItemPartsFiltered.PartNumber
               ,WorkItemParts.PartNumber) AS PartNumber
FROM WorkItemParts
LEFT JOIN WorkItemPartsFiltered
    ON WorkItemPartsFiltered.[Order] = WorkItemParts.[Order]
```
|
How to SELECT from a table if no data exists in another
|
[
"",
"sql",
"sql-server",
"sql-scripts",
""
] |
I've got data in a column of my table just like this:
```
cpc > cpc > organic
cpc > organic >cpc
mail > cpc > organic
```
How can I split that data to different columns using '>' as a delimiter?
|
This is a famous problem. You can use regex through a CLR-imported function, but here is a very useful [script](https://stackoverflow.com/questions/10581772/how-to-split-a-comma-separated-value-to-columns). After splitting, you can use [PIVOT/UNPIVOT](https://technet.microsoft.com/en-US/library/ms177410(v=sql.105).aspx) if you need the data laid out column-by-column.
|
```
SELECT
PARSENAME(REPLACE(String,'>','.'),1) AS 'col1' ,
PARSENAME(REPLACE(String,'>','.'),2) AS 'col2' ,
PARSENAME(REPLACE(String,'>','.'),3) AS 'col3'
FROM tbl
```
|
Split into different columns SQL Server
|
[
"",
"sql",
"sql-server",
"parsing",
"split",
""
] |
I have 200k records in one table and I want to insert these records into another table. I read about the bulk insert but the query I found on msdn website is just not making any sense.
This is the query
```
BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
FROM 'f:\orders\lineitem.tbl'
WITH
(
FIELDTERMINATOR =' |',
ROWTERMINATOR =' |\n'
);
```
What is `f:\orders\lineitem.tbl`? The whole thing is just not making sense to me.
I have a table with four columns: id, frm, to1 and country.
Same thing in the destination table.
Any easy syntax will be helpful.
I am using SQL Server 2008/12
|
BULK INSERT imports from an external data file. If you already have the data in a SQL Server table, then you should do something like:
```
INSERT INTO NewTable (field1, field2, field3)
SELECT field1, field2, field3 FROM OldTable
```
DO NOT point BULK INSERT at your SQL Server database file. The .tbl file referenced in your example code is to a text file with delimited fields.
|
BULK INSERT is for importing external data from a file into a SQL table, e.g.:
`BULK INSERT tableName FROM 'File Path'`
If you want to copy data from one table to another within SQL Server, use `SELECT ... INTO` instead of `INSERT INTO`:
`SELECT * INTO table1 FROM table2`
|
Bulk insert in SQL Server database from one table to another
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I am trying to create a piece of code in `sql server 2008` that will grab specific values from each distinct string within my `dbo` table. The ultimate goal is to make a drop down box within `Visual Studio` so that one can choose all lines from the database that contain a specific product code (see definition of product code below). Example strings:
```
in_0314_95pf_500_w_0315
in_0314_500_95pf_0315_w
```
The part of these strings I am wishing to identify is the 3 digit numeric code (in this case let us call it product code) that appears once within each string. There are roughly 300 different product codes.
The problem is that these product code values do not appear in the same position within each unique string. Hence, I am having a hard time determining the product code because I can't use substring, charindex, like, etc.
Any ideas? Any help is MUCH appreciated.
|
This can be done with `PATINDEX`:
```
DECLARE @s NVARCHAR(100) = 'in_0314_95pf_500_w_0315'
SELECT SUBSTRING(@s, PATINDEX('%[_][0-9][0-9][0-9][_]%', @s) + 1, 3)
```
Output:
```
500
```
If there are no underscores then:
```
SELECT SUBSTRING(@s, PATINDEX('%[^0-9][0-9][0-9][0-9][^0-9]%', @s) + 1, 3)
```
This means 3 digits between any symbols that are not digits.
**EDIT:**
Apply to table like:
```
SELECT SUBSTRING(ColumnName, PATINDEX('%[^0-9][0-9][0-9][0-9][^0-9]%', ColumnName) + 1, 3)
FROM TableName
```
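Outside T-SQL, the same "exactly 3 digits bounded by non-digits" idea maps directly onto a regular expression; a small Python sketch (the helper name `product_code` is made up for illustration):

```python
import re

def product_code(s):
    # Three digits with no digit immediately before or after, mirroring the
    # PATINDEX pattern '%[^0-9][0-9][0-9][0-9][^0-9]%' from the answer.
    m = re.search(r"(?<!\d)(\d{3})(?!\d)", s)
    return m.group(1) if m else None

print(product_code("in_0314_95pf_500_w_0315"))  # 500
print(product_code("in_0314_500_95pf_0315_w"))  # 500
```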
|
One approach is to use a String splitting table [function like this one](https://stackoverflow.com/a/10914602/314291) which breaks the string up into its components. You can then filter the components based on your criteria:
```
SELECT Name
FROM dbo.splitstring('in_0314_95pf_500_w_0315', '_')
WHERE ISNUMERIC(Name) = 1 AND LEN(Name) = 3;
```
I've amended the function slightly to accept the delimiter as a parameter.
```
CREATE FUNCTION dbo.splitstring ( @stringToSplit VARCHAR(MAX), @delimiter VARCHAR(50))
RETURNS
@returnList TABLE ([Name] [nvarchar] (500))
AS
BEGIN
DECLARE @name NVARCHAR(255)
DECLARE @pos INT
WHILE CHARINDEX(@delimiter, @stringToSplit) > 0
BEGIN
SELECT @pos = CHARINDEX(@delimiter, @stringToSplit)
SELECT @name = SUBSTRING(@stringToSplit, len(@delimiter), @pos-len(@delimiter))
INSERT INTO @returnList
SELECT @name
SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+LEN(@delimiter),
LEN(@stringToSplit)-@pos)
END
INSERT INTO @returnList
SELECT @stringToSplit
RETURN
END
```
To apply this to your table, use `CROSS APPLY` (Single Delimiter):
```
SELECT mt.Name, x.Name AS ProductCode
FROM MyTable mt
CROSS APPLY dbo.splitstring(mt.Name, '_') x
WHERE ISNUMERIC(x.Name) = 1 AND LEN(x.Name) = 3
```
**Update, Multiple Delimiters**
I guess the real underlying problem is that ultimately the product codes need to be normalized out of the composite key (e.g. add a distinct `ProductId` or `ProductCode` column to the same table), derived using a query like this, and then stored back in the table via an `update`. Reverse engineering the product codes out of the string appears to be a trial and error process.
Nonetheless, you can continue to keep passing the split strings through further splitting functions (one per each type of delimiter), before applying your final discriminating filter:
```
SELECT *
FROM MyTable mt
CROSS APPLY dbo.splitstring(mt.Name, 'test') y -- First alias
CROSS APPLY dbo.splitstring(y.Name, '_') x -- Reference the preceding alias
WHERE ISNUMERIC(x.Name) = 1 AND LEN(x.Name) = 3; -- Must reference the last alias (x)
```
Note that the stringsplit function has again been changed to accommodate multicharacter delimiters.
|
Extracting specific column values embedded within composite Strings of codes
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to select all the tables which don't contain the character `_` from a database, for this I used the request :
```
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema="DB_NAME"
AND TABLE_NAME NOT LIKE '%_%';
```
But it doesn't seem to work.
|
The underscore (`_`) is a single-character wildcard in SQL's `LIKE` operator. If you want to treat it as a regular character, you'll have to explicitly escape it:
```
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema='DB_NAME' and TABLE_NAME NOT LIKE '%\_%' ESCAPE '\\';
```
|
## Short answer
```
TABLE_NAME NOT LIKE '%\_%'
```
## Long answer
The characters `%` and `_` are both special in `LIKE` expressions.
[Documentation](https://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html)
* `%` Match zero or more characters.
* `_` Match exactly one character.
If you want to use one of them literally in your query, you have to use an escape character. The default is `\`.
* `\%` Match the '%' character.
* `\_` Match the '_' character.
In your situation it would be:
```
TABLE_NAME NOT LIKE '%\_%'
```
It is possible to define different escape character, for example `|`.
```
TABLE_NAME NOT LIKE '%|_%' ESCAPE '|'
```
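The difference between the wildcard and the escaped literal is easy to demonstrate; a sketch with Python's built-in `sqlite3` module (SQLite's `LIKE ... ESCAPE` behaves the same way for this purpose):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("orders",), ("order_items",), ("order_history",)])

# Unescaped, '_' matches any single character, so every name matches LIKE
# and NOT LIKE returns nothing; escaped, '_' is taken literally.
naive = con.execute("SELECT name FROM t WHERE name NOT LIKE '%_%'").fetchall()
escaped = con.execute(
    "SELECT name FROM t WHERE name NOT LIKE '%\\_%' ESCAPE '\\'").fetchall()
print(naive, escaped)  # [] [('orders',)]
```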
|
Select ___where __LIKE ___
|
[
"",
"mysql",
"sql",
"database",
"select",
"sql-like",
""
] |
SQL isn't my forte, so some help from you experts would be amazing :)
Dataset:
```
Name of Product | Part Number | Size 1 | Size 2 | Size 3
------------------|----------------|-----------|---------|----------
Item1 | 12345 | 4 | 4 | 6
Item2 | 54321 | 4 | 5 | 4
Item3 | 54123 | 6 | 2 | 2
```
I need to return the highest value in the 3 sizes and aggregate them into a single column.
```
Item1 | 6
Item2 | 5
Item3 | 6
```
Googling has only led me to the `MAX()` function, but that returns the highest value in the whole dataset, which is not what I'm after.
|
You can use an `UNPIVOT` to split out the columns, and then the Maximum becomes a simple `MAX()` with `GROUP BY` against the relevant `[Name of Product]`
```
SELECT [Name of Product], Max(TheSize)
FROM Table1
UNPIVOT
(
[TheSize]
FOR Size in ([Size 1], [Size 2], [Size 3])
) x
GROUP BY [Name of Product];
```
[SqlFiddle is back online - example here](http://sqlfiddle.com/#!6/a23cd/1)
|
Depending on your version of SQL Server (2008 or above), you could use a variation on the following:
```
SELECT
[Name Of Product],
(SELECT MAX(Val) FROM (VALUES ([Size 1]), ([Size 2]), ([Size 3])) AS value(val)) AS MaxSize
FROM MyTable
```
EDIT: Found the [original](https://stackoverflow.com/a/6871572/1225845) source of this code, will mark your question as a potential duplicate based on that.
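As an aside, SQLite has a variadic scalar `max()` that plays the same role as the `VALUES`-list trick; a sketch via Python's built-in `sqlite3` module (for illustration only, this is not SQL Server syntax):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (name TEXT, s1 INT, s2 INT, s3 INT)")
con.executemany("INSERT INTO products VALUES (?,?,?,?)",
                [("Item1", 4, 4, 6), ("Item2", 4, 5, 4), ("Item3", 6, 2, 2)])

# Multi-argument max() in SQLite is a row-wise greatest, not an aggregate.
rows = con.execute("SELECT name, MAX(s1, s2, s3) FROM products").fetchall()
print(rows)  # [('Item1', 6), ('Item2', 5), ('Item3', 6)]
```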
|
Best Practice for returning the highest value in a set of columns?
|
[
"",
"sql",
"sql-server",
""
] |
I have these product SKUs in column `scott` that are alphanumeric (should always contain a number but not always a letter). The `id` column is auto-incremented. All of the data was imported line-by-line from a CSV file, so I know the order of the data is correct.
As you can see from this picture, some `scott` fields are just letters, which should have the previous number in front of them. For example, on `id` 5149, I'd like the `scott` to be 3780a.

**So the question is this:** Using SQL, how can I parse through the rows in order, prepending the `scott` number from the most recent field that contained a number, onto fields following it that don't contain a number?
|
Here's one option using a user-defined variable to determine the most recent previous numeric value.
```
update yourtable y
join (
select id, scott, @prevNumericId:=
if(scott REGEXP '^[0-9]+$', scott, @prevNumericId) prevNumericId
from yourtable, (select @prevNumericId:=0) t
order by id
) t on y.id = t.id and y.scott = t.scott
set y.scott = concat(t.prevNumericId,y.scott)
where y.scott not REGEXP '^[0-9]+$'
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!9/46d47/1)
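The carry-forward logic the user-defined variable implements is easy to prototype outside SQL; a pure-Python sketch (the helper name `fix_skus` is made up for illustration):

```python
import re

def fix_skus(skus):
    # Walk the rows in order; remember the last all-numeric SKU and prepend
    # it to any letter-only entries that follow, as in the UPDATE above.
    out, prev = [], ""
    for s in skus:
        if re.fullmatch(r"[0-9]+", s):
            prev = s
            out.append(s)
        else:
            out.append(prev + s)
    return out

print(fix_skus(["3780", "a", "b", "3781"]))
# ['3780', '3780a', '3780b', '3781']
```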
|
That good old first-(or in this case, last)-child problem, but this time on the same table..
This should do the job quite elegantly for you:
```
UPDATE mytable dest
JOIN (
SELECT mt.id, CONCAT(lnk.scott, mt.scott) AS newValue
FROM mytable mt
LEFT OUTER JOIN mytable lnk ON lnk.id < mt.id AND CONCAT('',SUBSTRING(lnk.scott, 1, 1) * 1) != 0
LEFT OUTER JOIN mytable lnk2 ON lnk2.id < mt.id AND CONCAT('',SUBSTRING(lnk2.scott, 1, 1) * 1) != 0
AND lnk.id < lnk2.id
WHERE lnk2.id IS NULL
AND CONCAT('',SUBSTRING(mt.scott, 1, 1) * 1) = 0
) AS source on source.id = dest.id
SET dest.scott = source.newValue
```
===
I don't have enough rep to comment but @sgeddes's solution is basically the same but using the user defined variable. That should be fine so long as the variable doesn't cross over between concurrent executions :)
This version will happily run concurrently, regardless.
|
Using SQL (for MySQL), how can I UPDATE data on a row based on data from a previous row?
|
[
"",
"mysql",
"sql",
""
] |
I want to change the default port number of the MySQL server; presently it is 3306 and I want to change it to 3360.
I have tried:
```
-- port=3360
```
But things are not working for me. Please provide a query to change the port, not a configuration change. I am using Windows 8 64-bit.
|
You need to edit your `my.cnf` file and make sure you have the port set as in the following line:
```
port = 3360
```
Then restart your MySQL service and you should be good to go. There is no query you can run to make this change because the `port` is not a dynamic variable (q.v. [here for MySQL documentation](http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html) showing a table of all system variables).
|
If you're on Windows, you may find the config file `my.ini` in this directory:
```
C:\ProgramData\MySQL\MySQL Server 5.7\
```
You open this file in a text editor and look for this section:
```
# The TCP/IP Port the MySQL Server will listen on
port=3306
```
Then you change the number of the port, save the file.
Find the service MYSQL57 under Task Manager > Services and restart it.
|
How to change the default port of mysql from 3306 to 3360
|
[
"",
"mysql",
"sql",
"database",
"port",
""
] |
I have a situation where :
* 24.9999 should be 25
* 24.5000 should be 25
* 24.4999 should be 24
* 24.1111 should be 24
I tried `Ceiling`, but the result will be 25, where `Floor` will be 24 for all of them.
How to accomplish this?
Thanks a lot for your time.
Note: it might be helpful to let you know that I want this functionality to be inside a `computed column`.
|
Use:
```
Round(YourNumber, 0)
```
The 0 indicates the precision (i.e. number of decimal places); if you wanted to round 42.51 to 42.5, you'd replace the 0 with 1, for example.
Make sure not to use `float`s - they can sometimes be approximated, which causes values to be rounded incorrectly on occasion.
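If you ever reproduce this rounding outside SQL, mind the half-way case: Python's built-in `round()` uses banker's rounding (`round(24.5) == 24`), while SQL Server's `ROUND(x, 0)` rounds halves away from zero. A sketch using `Decimal` with `ROUND_HALF_UP` as the closer analogue (the helper name `sql_round` is made up):

```python
from decimal import Decimal, ROUND_HALF_UP

def sql_round(text):
    # Quantize to whole units, rounding .5 upward like SQL Server's ROUND.
    return int(Decimal(text).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

results = [sql_round(v) for v in ("24.9999", "24.5000", "24.4999", "24.1111")]
print(results)  # [25, 25, 24, 24]
```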
|
You should use [`round()`](https://msdn.microsoft.com/en-us//library/ms175003.aspx) with `0` as `[length]` parameter :
```
round(field, 0)
```
[**SQLFiddle**](http://sqlfiddle.com/#!6/3a01f/2)
|
SQL Server custom rounding
|
[
"",
"sql",
"sql-server",
""
] |
I'm hoping to find a solution for this to automate a report I have. Basically what I'm trying to accomplish here is grabbing a date (first day of previous month, two years ago through last day of previous month current year).
So the date span if running this month would look like this: between 4/1/2013 and 3/31/2015
I have found code to get the date two years ago but I'm not able to also incorporate the month functions... Any help is very much appreciated!
For year I'm using this:
```
SELECT CONVERT(VARCHAR(25),DATEADD(year,-2,GETDATE()),101)
```
|
Last day of the previous month (current year):
```
SELECT CONVERT(DATE,dateadd(day, -1, dateadd(day, 1 - day(GETDATE()), GETDATE())))
```
First day of the current month, two years ago:
```
SELECT CONVERT(DATE,DATEADD(month, DATEDIFF(month, 0, DATEADD(year,-2,GETDATE())), 0))
```
Then just do whatever logic you need with them
|
Your `where` clause can look something like this:
```
where date >= cast(dateadd(year, -2,
dateadd(month, -1, getdate() - day(getdate()) + 1)
) as date) and
date < cast(getdate() - day(getdate()) + 1 as date)
```
This makes use of the handy convenience that subtracting/adding a number to a `datetime` is the same as adding a date. The start date says: get the first day of the month, then subtract one month, then subtract two years. This could have been done as `dateadd(month, -25, . . .)`, but I think separating the logic is clearer.
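For reference, the same window is a two-liner in plain Python's `datetime`; this sketch (the helper name `report_window` is made up) reproduces the question's 4/1/2013 through 3/31/2015 example for an April 2015 run:

```python
from datetime import date, timedelta

def report_window(today):
    # First day of the current month, shifted back two years, through the
    # last day of the previous month.
    first_of_month = today.replace(day=1)
    start = first_of_month.replace(year=first_of_month.year - 2)
    end = first_of_month - timedelta(days=1)
    return start, end

start, end = report_window(date(2015, 4, 15))
print(start, end)  # 2013-04-01 2015-03-31
```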
|
Function to go back 2 years, first day of last month
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"date",
""
] |
I have 4 tables (say emp1, emp2, emp3, emp4) with nearly identical columns. I want to fetch details
```
(select empid, empname from emp1 where empid = '1' )
UNION
(select empid, empname from emp2 where empid = '1')
UNION
(select empid, empname from emp3 where empid = '1')
UNION
(select empid, empname from emp4 where empid = '1')
```
The thing is if I got result from first query (emp1) it should ignore queries below (emp2, emp3, emp4). If I get a result from emp2, it should ignore (emp3, emp4) and so on.
Remember in emp1, emp2, emp3, emp4 there an be different `empname` associated with same `empid`. That's why union giving all the results. In that case I have to prefer result from uppermost table i.e emp1 > emp2 > emp3. I also tried using 'case' but things are not working for me.
Sample data
emp1
```
1 deepak
```
emp2
```
1 nitin
```
emp3
```
1 sateesh
```
emp4
```
1 chandra
```
and expected result is
```
1 deepak
```
I hope I am clear. Please help me, thank you.
|
You can add an arbitrary column to specify the priority.
[**SQL Fiddle**](http://sqlfiddle.com/#!6/02e9e/1/0)
```
;WITH Cte AS(
SELECT *, N = 1 FROM emp1 WHERE empId = 1 UNION ALL
SELECT *, N = 2 FROM emp2 WHERE empId = 1 UNION ALL
SELECT *, N = 3 FROM emp3 WHERE empId = 1 UNION ALL
SELECT *, N = 4 FROM emp4 WHERE empId = 1
)
,CteRN AS(
SELECT *, RN = ROW_NUMBER() OVER(ORDER BY N) FROM Cte
)
SELECT
empId, empName
FROM CteRN
WHERE RN = 1
```
---
Basically, you want to prioritize results from `emp1`, then `emp2` and so on. This is where the arbitrary column `N` comes in. You want to *rank* them in order of priority. The result of the first CTE is:
```
empId empName N
----------- ---------- -----------
1 deepak 1
1 nitin 2
1 sateesh 3
1 chandra 4
```
Then you use [**`ROW_NUMBER`**](https://msdn.microsoft.com/en-us/library/ms186734.aspx) to add a sequential number to each rows. The second CTE, `CteRN` will give you:
```
empId empName N RN
----------- ---------- ----------- --------
1 deepak 1 1
1 nitin 2 2
1 sateesh 3 3
1 chandra 4 4
```
Lastly, you only want the row with the smallest `RN`, so you add a `WHERE RN = 1` clause. The final result would be:
```
empId empName
----------- ----------
1 deepak
```
*Additionally, you can add a `PARTITION BY empId` on `RN = ROW_NUMBER() OVER(ORDER BY N)`*
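The priority-column idea can be checked with Python's built-in `sqlite3` module; in this sketch (illustration only) `ORDER BY n LIMIT 1` stands in for `ROW_NUMBER ... WHERE RN = 1`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp1 (empid INT, empname TEXT);
    CREATE TABLE emp2 (empid INT, empname TEXT);
    CREATE TABLE emp3 (empid INT, empname TEXT);
    CREATE TABLE emp4 (empid INT, empname TEXT);
    INSERT INTO emp1 VALUES (1,'deepak');
    INSERT INTO emp2 VALUES (1,'nitin');
    INSERT INTO emp3 VALUES (1,'sateesh');
    INSERT INTO emp4 VALUES (1,'chandra');
""")

# Tag each table with a priority n, then keep the highest-priority hit.
row = con.execute("""
    SELECT empid, empname FROM (
        SELECT 1 AS n, * FROM emp1 WHERE empid = 1 UNION ALL
        SELECT 2, * FROM emp2 WHERE empid = 1 UNION ALL
        SELECT 3, * FROM emp3 WHERE empid = 1 UNION ALL
        SELECT 4, * FROM emp4 WHERE empid = 1)
    ORDER BY n LIMIT 1
""").fetchone()
print(row)  # (1, 'deepak')
```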
|
Based on your updated question, since empid should not be repeated from other tables, you can try something like this.
```
select empid,empname
from emp1
UNION ALL
select empid,empname
from emp2 e
WHERE NOT EXISTS( SELECT empid FROM emp1 ee WHERE ee.empid = e.empid)
UNION ALL
select empid,empname
from emp3 e
WHERE NOT EXISTS( SELECT empid FROM emp1 ee WHERE ee.empid = e.empid)
AND NOT EXISTS( SELECT empid FROM emp2 ee WHERE ee.empid = e.empid)
UNION ALL
select empid,empname from emp4 e
WHERE NOT EXISTS( SELECT empid FROM emp1 ee WHERE ee.empid = e.empid)
AND NOT EXISTS( SELECT empid FROM emp2 ee WHERE ee.empid = e.empid)
AND NOT EXISTS( SELECT empid FROM emp3 ee WHERE ee.empid = e.empid)
```
|
How to write SQL Server query which can give result from any of 4 tables
|
[
"",
"sql",
"sql-server",
""
] |
```
Name Value AnotherColumn
-----------
Pump 1 8000 Something1
Pump 1 1000 Something2
Pump 1 3000 Something3
Pump 2 3043 Something4
Pump 2 4594 Something5
Pump 2 6165 Something6
```
In this table I want to group by the Name column and produce output as name, value_exist.
value_exist will be 1 if 1000 is present in the Value column for any row of that name group. So the output will be:
```
Name value_exist
-----------
Pump 1 1
Pump 2 0
```
|
Maybe something like this:
```
select name, MAX(CASE WHEN value=1000 THEN 1 ELSE 0 END) as value_exist
from your_table
group by name
```
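The `MAX(CASE ...)` trick is portable SQL; here it is run against the sample data with Python's built-in `sqlite3` module (for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pumps (name TEXT, value INT)")
con.executemany("INSERT INTO pumps VALUES (?,?)",
                [("Pump 1", 8000), ("Pump 1", 1000), ("Pump 1", 3000),
                 ("Pump 2", 3043), ("Pump 2", 4594), ("Pump 2", 6165)])

# MAX over a 0/1 flag acts as "does any row in the group match?".
rows = con.execute("""
    SELECT name, MAX(CASE WHEN value = 1000 THEN 1 ELSE 0 END) AS value_exist
    FROM pumps GROUP BY name ORDER BY name
""").fetchall()
print(rows)  # [('Pump 1', 1), ('Pump 2', 0)]
```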
|
One more :
```
select name , max(if(value=1000,1,0)) as value_exist
from table
group by name;
```
|
mysql - if statement inside a select statement
|
[
"",
"mysql",
"sql",
""
] |
I have two data frames `d1` and `d2`. `d2` has a column which contains data that I would prefer to be added to `d1`.
Each of the data frames have equal number of rows and columns.
```
> d1
t1 t2 numVehicles avgByRunRep
1 0.2 0.3 10 225.5000
2 0.2 0.4 10 219.6667
3 0.2 0.5 10 205.1667
4 0.2 0.6 10 220.6667
5 0.2 0.7 10 205.1667
> d2
t1 t2 numVehicles avgLostPerRep
1 0.2 0.3 10 14.333333
2 0.2 0.4 10 9.000000
3 0.2 0.5 10 8.000000
4 0.2 0.6 10 8.000000
5 0.2 0.7 10 6.833333
```
So I would like the values in `d2`'s `avgLostPerRep` column to be "transferred" to `d1` by matching `t1`, `t2`, `numVehicles`.
So in the end `d1` would look something like:
```
> d1
t1 t2 numVehicles avgByRunRep avgLostPerRep
1 0.2 0.3 10 225.5000 14.333333
2 0.2 0.4 10 219.6667 9.000000
3 0.2 0.5 10 205.1667 8.000000
4 0.2 0.6 10 220.6667 8.000000
5 0.2 0.7 10 205.1667 6.833333
```
It is also possible to save the final resulting data frame in another variable `d3`, if that makes any difference at all.
I would like to know how this can be solved with `sqldf` but pure `R` is OK as well.
I tried `merge` from R, but got a big data-frame with a lot of `NA`. I also tried `UPDATE` and `INSERT INTO` for `sqldf` to no avail.
|
**1)** This performs a left join along the indicated columns:
```
library(sqldf)
sqldf("select * from d1 left join d2 using(t1, t2, numVehicles)")
```
We could alternately use a left natural join which causes the join to occur along commonly named columns:
```
sqldf("select * from d1 left natural join d2")
```
For the data shown in the question we could alternately use an inner join by simply omitting the word `left` in either of the above; however, if the actual data does not have a value in `d2` for every row of `d1` then the inner join would omit those rows of `d1` whereas the left join would include them and add `NA` for the joined `d2` column.
**2)** The corresponding native R code would be this for the first sqldf statement
```
merge(d1, d2, all.x = TRUE, by = 1:3)
```
and this for the second:
```
merge(d1, d2, all.x = TRUE)
```
Inner joins are obtained by omitting the `all.x = TRUE` in either case.
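Since `sqldf` just runs SQLite under the hood, the same left join can be reproduced directly with Python's built-in `sqlite3` module (a sketch with a deliberately unmatched row, so the NULL behaviour is visible):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE d1 (t1 REAL, t2 REAL, numVehicles INT, avgByRunRep REAL);
    CREATE TABLE d2 (t1 REAL, t2 REAL, numVehicles INT, avgLostPerRep REAL);
    INSERT INTO d1 VALUES (0.2,0.3,10,225.5),(0.2,0.4,10,219.6667);
    INSERT INTO d2 VALUES (0.2,0.3,10,14.3333);
""")

# The unmatched d1 row is kept and gets NULL (None) for avgLostPerRep;
# an inner join would have dropped it instead.
rows = con.execute(
    "SELECT * FROM d1 LEFT JOIN d2 USING (t1, t2, numVehicles)").fetchall()
print(rows)
# [(0.2, 0.3, 10, 225.5, 14.3333), (0.2, 0.4, 10, 219.6667, None)]
```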
|
You may want to try the data.table package; your question is pretty straightforward with its syntax and keys, and its merge will be much faster than base R.
Recreating initial data sets:
```
library(data.table)
d1<- fread("t1,t2,numVehicles,avgByRunRep
0.2,0.3,10,225.5000
0.2,0.4,10,219.6667
0.2,0.5,10,205.1667
0.2,0.6,10,220.6667
0.2,0.7,10,205.1667")
# setting desired columns as keys is important in your case
# and setkey(d1) would be enough to use all columns in d1
setkey(d1, t1, t2, numVehicles)
d2<- fread("t1,t2,numVehicles,avgLostPerRep
0.2,0.3,10,14.333333
0.2,0.4,10,9.000000
0.2,0.5,10,8.000000
0.2,0.6,10,8.000000
0.2,0.7,10,6.833333")
```
Solution:
```
merge(d1, d2)
# t1 t2 numVehicles avgByRunRep avgLostPerRep
#1: 0.2 0.3 10 225.5000 14.333333
#2: 0.2 0.4 10 219.6667 9.000000
#3: 0.2 0.5 10 205.1667 8.000000
#4: 0.2 0.6 10 220.6667 8.000000
#5: 0.2 0.7 10 205.1667 6.833333
```
|
Inserting values of a column from one dataframe to another while respecting a given condition
|
[
"",
"sql",
"r",
"sqldf",
""
] |
I am currently trying to do a query like this:
(Pseudocode)
```
SELECT
NAME, SUM(VALUE), MONTH
FROM TABLE
WHERE MONTH BETWEEN 12 MONTHS AGO AND NOW
GROUP BY MONTH, NAME
```
The problem I am getting is that a name exists in a few of the months, but not all of them, so if I filter this down to return the values for only one name, I sometimes get only 3 or 4 rows rather than the 12 I expect to see.
My question is: is there a way to return rows where it will still include the name and month within the range, with the value set to zero when the row is missing from the previous result?
My first thought was to just union another select onto it, but I can't seem to get the logic to adhere to the group by, as well as the where clauses limiting the names.
|
If you have data for all months, you can take the following approach. Generate all the rows (uses a `cross join`), then bring in the data you want:
```
select m.month, n.name, sum(t.value)
from (select distinct month from table) m cross join
(select distinct name from table) n left join
table t
on t.month = m.month and t.name = n.name
group by m.month, n.name;
```
This will return the missing sums as `NULL` values. If you want zero, then use `coalesce(sum(t.value), 0)`.
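A runnable sketch of this cross-join-then-left-join pattern, using Python's `sqlite3` with made-up table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (name TEXT, month INT, value INT)")
# "bob" has no row for month 3.
con.executemany("INSERT INTO sales VALUES (?,?,?)",
                [("alice", 1, 10), ("alice", 3, 5), ("bob", 1, 7)])

# Generate every (month, name) pair first, then left join the data in,
# so combinations with no rows still appear, with a zero sum.
rows = con.execute("""
    SELECT m.month, n.name, COALESCE(SUM(t.value), 0) AS total
    FROM (SELECT DISTINCT month FROM sales) m
    CROSS JOIN (SELECT DISTINCT name FROM sales) n
    LEFT JOIN sales t ON t.month = m.month AND t.name = n.name
    GROUP BY m.month, n.name
    ORDER BY m.month, n.name
""").fetchall()
print(rows)
```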
|
You can use something like the following query to generate all of the past 12 months as separate rows:
```
SELECT add_months(trunc(add_months(sysdate, -12), 'MONTH'), LEVEL - 1) AS month_in_range
FROM dual
CONNECT BY LEVEL <= 1 + months_between(trunc(sysdate, 'MONTH'), trunc(add_months(sysdate, -12), 'MONTH'));
```
and then do an outer join between your table and this.
|
How to include missing rows in sql return
|
[
"",
"sql",
"oracle",
""
] |
I am looking to merge timestamps from two different rows based on employee and punch card, but MAX or LIMIT does not work in the join condition. If I only use `>` then I get every subsequent row for every day; I want the next higher value on a self join. I should also mention that I have to use SQL Server 2008, so LAG and LEAD do not work!
Please help me.
```
SELECT , Det.name
,Det.[time]
,Det2.[time]
,Det.[type]
,det2.type
,Det.[detail]
FROM [detail] Det
join [detail] Det2 on
Det2.name = Det.name
and
Det2.time > Det.time Max 1
where det.type <>3
Table detail
NAME | Time | Type | detail
john | 10:30| 1 | On
steve| 10:32| 1 | On
john | 10:34| 2 | break
paul | 10:35| 1 | On
steve| 10:45| 3 | Off
john | 10:49| 2 | on
paul | 10:55| 3 | Off
john | 11:12| 3 | Off
```
Wanted result
```
John | 10:30 | 10:34 | 1 | 2 | On
John | 10:34 | 10:49 | 2 | 1 | Break
John | 10:49 | 11:12 | 1 | 3 | on
Steve| 10:32 | 10:45 | 1 | 3 | on
Paul | 10:35 | 10:55 | 1 | 3 | On
```
Thank you in advance!
|
You can do it with cross apply:
```
SELECT Det.name
,Det.[time]
,ca.[time]
,Det.[type]
,ca.type
,Det.[detail]
FROM [detail] Det
Cross Apply(Select Top 1 * From detail det2
            Where det.Name = det2.Name And det2.Time > Det.Time
            Order By det2.Time) ca
Where det.Type <> 3
```
|
As you said, the `LAG` and `LEAD` functions won't work for you, but you could use `ROW_NUMBER() OVER (PARTITION BY name ORDER BY time)` on both tables and then do a `JOIN` on `RN1 = RN2 + 1`.
This is just an idea, but I don't see why it shouldn't work.
**Query:**
```
;WITH Data (NAME, TIME, type, detail)
AS (
SELECT 'john', CAST('10:30' AS DATETIME2), 1, 'On'
UNION ALL
SELECT 'steve', '10:32', 1, 'On'
UNION ALL
SELECT 'john', '10:34', 2, 'break'
UNION ALL
SELECT 'paul', '10:35', 1, 'On'
UNION ALL
SELECT 'steve', '10:45', 3, 'Off'
UNION ALL
SELECT 'john', '10:49', 2, 'on'
UNION ALL
SELECT 'paul', '10:55', 3, 'Off'
UNION ALL
SELECT 'john', '11:12', 3, 'Off'
)
SELECT t.NAME, LTRIM(RIGHT(CONVERT(VARCHAR(25), t.TIME, 100), 7)) AS time, LTRIM(RIGHT(CONVERT(VARCHAR(25), t2.TIME, 100), 7)) AS time, t.type, t2.type, t.detail
FROM (
SELECT ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY TIME) rn, *
FROM Data
) AS t
INNER JOIN (
SELECT ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY TIME) rn, *
FROM Data
) AS t2
ON t2.NAME = t.NAME
AND t2.rn = t.rn + 1;
```
**Result:**
```
NAME time time type type detail
----------------------------------------------
john 10:30AM 10:34AM 1 2 On
john 10:34AM 10:49AM 2 2 break
john 10:49AM 11:12AM 2 3 on
paul 10:35AM 10:55AM 1 3 On
steve 10:32AM 10:45AM 1 3 On
```
Any comments, concerns - let me know. :)
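For engines with window functions, the `ROW_NUMBER` pairing trick can be sanity-checked outside SQL Server; here is a minimal sketch with Python's `sqlite3` (SQLite 3.25+ is assumed for window-function support):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE detail (name TEXT, time TEXT, type INT, detail TEXT)")
con.executemany("INSERT INTO detail VALUES (?,?,?,?)", [
    ("john", "10:30", 1, "On"), ("steve", "10:32", 1, "On"),
    ("john", "10:34", 2, "break"), ("paul", "10:35", 1, "On"),
    ("steve", "10:45", 3, "Off"), ("john", "10:49", 2, "on"),
    ("paul", "10:55", 3, "Off"), ("john", "11:12", 3, "Off"),
])

# Number each employee's punches chronologically, then join row n to row n+1.
rows = con.execute("""
    WITH numbered AS (
        SELECT name, time, type, detail,
               ROW_NUMBER() OVER (PARTITION BY name ORDER BY time) AS rn
        FROM detail
    )
    SELECT a.name, a.time, b.time, a.type, b.type, a.detail
    FROM numbered a
    JOIN numbered b ON b.name = a.name AND b.rn = a.rn + 1
    ORDER BY a.name, a.time
""").fetchall()
for r in rows:
    print(r)
```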
|
Self join next timestamp
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
How do I select all nodes from an XML in sql?
XML:
```
<value>
<TradeId>5555</TradeId>
<Account>SomeAccount</Account>
<Book>1</Book>
<LocalCcy>XXXX</LocalCcy>
<ValuationCcy>XXXX</ValuationCcy>
<PVLocal>44444444</PVLocal>
<PresentValue>44444444</PresentValue>
<InstrumentCode>XXXXXXXXX</InstrumentCode>
</value>
```
My SQL (that doesn't work):
```
SELECT tradeRef.value('.','varchar(50)') AS strValue
FROM @xmlData.nodes('data/value') AS trades(tradeRef)
CROSS APPLY tradeRef.nodes('.') AS legColumns(nameRef)
```
This gives me all values, but in one row... I would like one row per value.
Can you help me?
EDIT:
Requested output (just two first nodes here...):

|
You can try this approach:
```
declare @xmlData xml
select @xmlData = '<data><value>
<TradeId>5555</TradeId>
<Account>SomeAccount</Account>
<Book>1</Book>
<LocalCcy>XXXX</LocalCcy>
<ValuationCcy>XXXX</ValuationCcy>
<PVLocal>44444444</PVLocal>
<PresentValue>44444444</PresentValue>
<InstrumentCode>XXXXXXXXX</InstrumentCode>
</value></data>'
select
t.c.value('./text()[1]', 'nvarchar(100)') as strValue
from @xmlData.nodes('//value/child::node()') as t(c)
```
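If the shredding can happen outside the database, the same "one row per child element" result is easy to sketch with Python's standard-library `ElementTree` (shown here with a shortened version of the XML):

```python
import xml.etree.ElementTree as ET

# Shortened version of the question's XML; the approach is the same
# for the full document.
xml_data = """<value>
  <TradeId>5555</TradeId>
  <Account>SomeAccount</Account>
  <Book>1</Book>
</value>"""

# Iterating the root element yields one child element per row,
# mirroring //value/child::node() in the SQL Server query.
root = ET.fromstring(xml_data)
rows = [(child.tag, child.text) for child in root]
for tag, text in rows:
    print(tag, text)
```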
|
Possible way to get that output using `CROSS APPLY` :
```
declare @xmlData XML
set @xmlData = '<value>
<TradeId>5555</TradeId>
<Account>SomeAccount</Account>
<Book>1</Book>
<LocalCcy>XXXX</LocalCcy>
<ValuationCcy>XXXX</ValuationCcy>
<PVLocal>44444444</PVLocal>
<PresentValue>44444444</PresentValue>
<InstrumentCode>XXXXXXXXX</InstrumentCode>
</value>'
select
P.X.value('.', 'int') as TradeId
,T.X.value('local-name(.)', 'nvarchar(100)') as NodeName
,T.X.value('.', 'nvarchar(100)') as NodeValue
from @xmlData.nodes('//value/*[not(local-name()="TradeId")]') as T(X)
cross apply T.X.nodes('../TradeId') as P(X)
```
**[SQL Fiddle](http://sqlfiddle.com/#!6/9eecb/4811)**
|
select all from XML in sql
|
[
"",
"sql",
"sql-server",
"xml",
"xpath",
"sqlxml",
""
] |
I have a list of user ids, e.g. (2,3,5), and need to select 5 users from the "users" table, giving priority to users in the list and then filling the rest of the limit with random users.
users table structure:
```
id name
0 John
1 Adam
2 MM
3 Saly
4 Sara
5 Alx
6 Jak
7 Sarah
8 lynda
9 Riyan
```
I need to select 5 users with priority of specific ids:
example 1, if list of users ids = (2,3,5)
result:
```
id name
2 MM
3 Saly
5 Alx
0 John
6 Jak
```
example 2, if list of users ids = (5,6,0,1,2)
result:
```
id name
5 Alx
6 Jak
0 John
1 Adam
2 MM
```
thanks,
|
To get the users with matching ids first, but a maximum of 5 users in total:
```
SELECT id, name FROM(
SELECT id, name, 0 p FROM users WHERE id IN (...)
UNION
SELECT id, name, 1 p FROM users WHERE id NOT IN(...)
) tab
ORDER BY p
LIMIT 5;
```
The first select searches for the users with the given ids, the second for all other users.
The column `p` is used to put the users with matching ids before the other users.
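A quick way to sanity-check the priority-column idea is Python's `sqlite3`; the table below mirrors the question's `users` data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INT, name TEXT)")
names = ["John", "Adam", "MM", "Saly", "Sara", "Alx", "Jak", "Sarah", "lynda", "Riyan"]
con.executemany("INSERT INTO users VALUES (?,?)", list(enumerate(names)))

# Priority column p = 0 for the requested ids, 1 for everyone else;
# ordering by p puts the requested users first, then LIMIT caps at 5.
rows = con.execute("""
    SELECT id, name FROM (
        SELECT id, name, 0 AS p FROM users WHERE id IN (2, 3, 5)
        UNION
        SELECT id, name, 1 AS p FROM users WHERE id NOT IN (2, 3, 5)
    ) ORDER BY p LIMIT 5
""").fetchall()
print(rows)
```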
|
Try something like this :
```
select * from
(
select id , name
from users where id in (2,3,5)
union all
(select temp.* from (select * from users where id not in (2,3,5)
order by rand() limit 2) temp )
) tab
limit 5
;
```
|
MySQL: select specific users ids and complete limit randomly
|
[
"",
"mysql",
"sql",
""
] |
I have a table:

I want to keep a maximum of two entries per **live\_login**, with records sorted by **last\_login\_ts**.
As a result, you should get:

**!** If it is possible, then using only SQL, without PHP and other tools.
**P.S.** Sorry for my english :)
**ANSWER**:
```
delete gr.*
from my_table gr
inner join (
    Select x.*
    from my_table x
    join my_table y
      on y.live_login = x.live_login
     and y.last_login_ts <= x.last_login_ts
    group by x.live_login, x.last_login_ts
    having count(*) > 2
) gh
  on gh.live_login = gr.live_login
 and gh.dead_login = gr.dead_login;
```
Thank you, **Strawberry**!
|
```
Select x.*
from my_table x
join my_table y
on y.live_login = x.live_login
and y.last_login_ts <= x.last_login_ts
group by x.live_login
, x.last_login_ts
having count(*) <= 2
```
Or something like that
|
**Edit:** Apparently this ANSI/ISO SQL compliant answer doesn't work with MySQL. It can be used with most other DBMS products anyway, so I won't delete the answer.
Delete a row if there are two (or more) newer rows with same live\_login:
```
delete from tablename t1
where 2 <= (select count(*)
from tablename t2
where t2.live_login = t1.live_login
and t2.last_login_ts > t1.last_login_ts)
```
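For engines that do accept a correlated `DELETE` (SQLite among them), the approach can be verified quickly with Python's `sqlite3` and some made-up login data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logins (live_login TEXT, last_login_ts INT)")
con.executemany("INSERT INTO logins VALUES (?,?)",
                [("a", 1), ("a", 2), ("a", 3), ("a", 4), ("b", 1), ("b", 2)])

# Delete a row when two or more rows of the same live_login are newer,
# which leaves at most the two most recent entries per login.
con.execute("""
    DELETE FROM logins
    WHERE 2 <= (SELECT COUNT(*) FROM logins AS t2
                WHERE t2.live_login = logins.live_login
                  AND t2.last_login_ts > logins.last_login_ts)
""")
rows = con.execute(
    "SELECT * FROM logins ORDER BY live_login, last_login_ts"
).fetchall()
print(rows)
```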
|
How to delete rows from grouped data in MySQL
|
[
"",
"mysql",
"sql",
""
] |
I'm having trouble crafting a simple SQL query that retrieves the number of orders placed on 7/15/1998 and includes the date in the result. The order-count column should be labeled "OrderCount".
This is what I have got so far without success.
```
SELECT SUM(OrderT.CID) as OrderCount
FROM OrderT
WHERE OrderT.CID= #7/15/1998#;
```
Here is a screenshot of the database:

|
```
SELECT OrderDate, COUNT(OrderID) AS OrderCount
FROM OrderT
WHERE OrderDate = #1998-07-15#
GROUP BY OrderDate;
```
|
This is how:
```
SELECT Count(*) AS OrderCount
FROM OrderT
WHERE OrderDate = #7/15/1998#;
```
To include order date:
```
SELECT OrderDate, Count(*) AS OrderCount
FROM OrderT
GROUP BY OrderDate
HAVING OrderDate = #7/15/1998#;
```
|
Simple SQL statement to retrieve orders on a specific date
|
[
"",
"sql",
"ms-access",
""
] |
I have two tables as follows:
```
==================
StudentsClasses
----------------
ID (Registration ID of the class)
StudentID (ID of student taking class)
ClassID (ID of certain class)
----------------
==================
Students
---------------
ID (ID of student)
Name (Name of student)
GradeLevelID (Grade of student)
---------------
==================
```
And they are joined by StudentsClasses.StudentID and Students.ID.
I am trying to write a query to return the students with the least classes registered. My query is:
```
SELECT Students.Name, COUNT(StudentsClasses.StudentID) AS Expr1
FROM StudentsClasses INNER JOIN
Students ON StudentsClasses.StudentID = Students.ID
GROUP BY StudentsClasses.StudentID, Students.Name
ORDER BY Expr1
```
However, that only returns all the students with at least 1 class in ASC order.
I know the correct answer is 7 students with 0 classes.
How can I modify my query to return only those 7 students with 0 classes?
|
To list those students that have no classes, you should use a `LEFT JOIN` instead of an `INNER JOIN`, so that all rows from the `students` table are listed even when there are no matching rows in `studentsclasses` for a particular student.
```
SELECT
s.name, count(sc.id) AS classes
FROM
students s
LEFT JOIN studentsclasses sc ON s.id = sc.studentid
GROUP BY s.name
HAVING count(sc.id) = 0 -- added after comment
ORDER BY count(sc.id);
```
OR another method (for retrieving only students that have 0 classes):
```
SELECT
s.name
FROM
    students s
LEFT JOIN studentsclasses sc ON s.id = sc.studentid
WHERE
sc.id IS NULL
```
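The `LEFT JOIN ... WHERE ... IS NULL` anti-join pattern can be sketched quickly with Python's `sqlite3` (table and sample names are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (id INT, name TEXT)")
con.execute("CREATE TABLE studentsclasses (id INT, studentid INT, classid INT)")
con.executemany("INSERT INTO students VALUES (?,?)",
                [(1, "Ann"), (2, "Bob"), (3, "Cid")])
# Only Ann is registered for any classes.
con.executemany("INSERT INTO studentsclasses VALUES (?,?,?)",
                [(1, 1, 10), (2, 1, 11)])

# Anti-join: keep students whose LEFT JOIN produced no class row.
rows = con.execute("""
    SELECT s.name
    FROM students s
    LEFT JOIN studentsclasses sc ON s.id = sc.studentid
    WHERE sc.id IS NULL
    ORDER BY s.name
""").fetchall()
print(rows)
```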
|
This should limit your results to those students having whatever the minimum number of registered classes is (so if the minimum is currently zero, then zero, if it becomes true that the least number of registrations is 3, then it will use 3, etc.)
```
select s.name,
v.classes
from students s
join (
select s.name,
count(sc.id) as classes
from students s
left join studentsclasses sc
on s.id = sc.studentid
group by s.name
order by count(sc.id)
limit 1
) v
on s.name = v.name
```
|
How to write a query with count
|
[
"",
"mysql",
"sql",
"database",
"visual-studio-2012",
""
] |
So I want to take this "returned" set of results and turn it into the one below it, based on the created date being the newest, to indicate the most recent addition.
```
PhoneNum sourcetable FullName reference Task CreatedDate
0 This is not important Mr john smith 39161 This is different every time, but has been renamed 16/03/2015 15:01:05
0 This is not important Mr john smith 39161 This is different every time, but has been renamed 12/03/2015 16:58:22
0 This is not important Mr john smith 39161 This is different every time, but has been renamed 12/03/2015 16:58:25
0 This is not important Mr john smith 39161 This is different every time, but has been renamed 10/03/2015 12:29:50
0 This is not important Mr john smith 39161 This is different every time, but has been renamed 12/03/2015 14:18:47
0 This is not important Mr john smith 39161 This is different every time, but has been renamed 10/03/2015 12:40:21
1 This is not important Mr wilson smith 39158 This is different every time, but has been renamed 10/03/2015 12:07:14
1 This is not important Mr wilson smith 39158 This is different every time, but has been renamed 10/03/2015 12:07:14
1 This is not important Mr wilson smith 39158 This is different every time, but has been renamed 10/03/2015 12:07:13
```
What I need to return:
```
PhoneNum sourcetable FullName reference Task CreatedDate
0 This is not important Mr john smith 39161 This is different every time, but has been renamed 12/03/2015 16:58:25
1 This is not important Mr wilson smith 39158 This is different every time, but has been renamed 10/03/2015 12:07:14
```
What I have so far
```
select distinct d.PhoneNum,d.sourcetable,N.FullName,C.fk_applicationid as ref,t.Subject,t.CreatedDate
from Dial d
join Database.dbo.DM_PhoneNumbers p on p.PhoneNum1 = d.PhoneNum collate latin1_general_CI_AS
join Database.dbo.DM_PhoneNumbers on p.PhoneNum2 = d.PhoneNum collate latin1_general_CI_AS
join Database.dbo.DM_ClientApplicants C on C.FK_ClientID = P.FK_ApplicationID
join Database.dbo.DM_Names N on c.FK_ClientID = N.FK_ApplicationID
join Database.dbo.Tasks T on T.FK_ApplicationID = c.FK_ApplicationID
where c.FK_ClientID in (39157,39160)
```
Any help would be appreciated.
|
Use the RANK function to find the newest record per name. This is not tested, but I hope it helps:
```
SELECT * FROM (
select distinct d.PhoneNum,d.sourcetable,N.FullName,C.fk_applicationid as ref,t.Subject,t.CreatedDate
, RANK() OVER ( PARTITION BY N.FullName ORDER BY t.CreatedDate DESC ) AS iRank
from Dial d
join Database.dbo.DM_PhoneNumbers p on p.PhoneNum1 = d.PhoneNum collate latin1_general_CI_AS
join Database.dbo.DM_PhoneNumbers on p.PhoneNum2 = d.PhoneNum collate latin1_general_CI_AS
join Database.dbo.DM_ClientApplicants C on C.FK_ClientID = P.FK_ApplicationID
join Database.dbo.DM_Names N on c.FK_ClientID = N.FK_ApplicationID
join Database.dbo.Tasks T on T.FK_ApplicationID = c.FK_ApplicationID
where c.FK_ClientID in (39157,39160)
) AS t
WHERE t.iRank = 1
```
|
You can use `ROW_NUMBER()`:
```
;WITH CTE AS (
SELECT *, ROW_NUMBER() OVER(PARTITION BY REFERENCE ORDER BY CREATEDDATE DESC) AS RN
FROM [TABLE])
SELECT *
FROM CTE
WHERE RN = 1
```
Obviously you can just change your select statement to get the desired columns.
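The same keep-the-newest-row-per-group pattern, sketched with Python's `sqlite3` (SQLite 3.25+ for window functions; column names are simplified from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tasks (reference INT, created TEXT)")
con.executemany("INSERT INTO tasks VALUES (?,?)", [
    (39161, "2015-03-16 15:01:05"), (39161, "2015-03-12 16:58:25"),
    (39158, "2015-03-10 12:07:14"), (39158, "2015-03-10 12:07:13"),
])

# One row per reference: rank rows newest-first inside each reference
# and keep only rank 1.
rows = con.execute("""
    SELECT reference, created FROM (
        SELECT reference, created,
               ROW_NUMBER() OVER (PARTITION BY reference
                                  ORDER BY created DESC) AS rn
        FROM tasks
    ) WHERE rn = 1 ORDER BY reference
""").fetchall()
print(rows)
```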
|
SQL: How do I select only the newest record when selecting multiple records
|
[
"",
"sql",
"sql-server",
""
] |
A table looks like:
```
id | location | datetime
------| ---------| --------
CD123 | loc001 | 2010-10-21 13:30:15
ZY123 | loc001 | 2010-10-21 13:40:15
YU333 | loc001 | 2010-10-21 13:41:00
AB456 | loc002 | 2011-1-21 14:30:30
FG121 | loc002 | 2011-1-21 14:31:00
BN010 | loc002 | 2011-1-21 14:32:00
```
Assume the table has been sorted by ascending datetime. I am trying to find the elapse (in seconds) between two consecutive rows **within a location**.
The result table is supposed to be:
```
| location | elapse
| loc001 | 600
| loc001 | 45
| loc002 | 30
| loc002 | 60
```
Since the `id` is randomly generated, it is difficult to write something like `a.id = b.id + 1` in a query. And only rows within the same location are consecutively subtracted, not across different locations.
How should I write a query in MS SQL Server to accomplish it?
|
In SQL Server 2012 and later you can use `LEAD` or `LAG`:
```
SELECT
    location,
    DATEDIFF(SECOND, [datetime],
             LEAD([datetime], 1) OVER (PARTITION BY location
                                       ORDER BY [datetime])) AS elapse
FROM tableName
```
The last row of each location has no following row, so its `elapse` is `NULL`; wrap the query in an outer select and filter `WHERE elapse IS NOT NULL` to get exactly one row per consecutive pair, as in the expected output.
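The per-pair behaviour of `LEAD` can be checked with Python's `sqlite3`, which also supports `LEAD` (3.25+); `julianday` differences stand in for SQL Server's `DATEDIFF(SECOND, ...)` here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (id TEXT, location TEXT, dt TEXT)")
con.executemany("INSERT INTO visits VALUES (?,?,?)", [
    ("CD123", "loc001", "2010-10-21 13:30:15"),
    ("ZY123", "loc001", "2010-10-21 13:40:15"),
    ("YU333", "loc001", "2010-10-21 13:41:00"),
    ("AB456", "loc002", "2011-01-21 14:30:30"),
    ("FG121", "loc002", "2011-01-21 14:31:00"),
])

# LEAD pairs each row with the next one at the same location;
# the julianday difference (in days) * 86400 gives seconds.
rows = con.execute("""
    SELECT location,
           CAST(ROUND((julianday(nxt) - julianday(dt)) * 86400) AS INT) AS elapse
    FROM (SELECT location, dt,
                 LEAD(dt) OVER (PARTITION BY location ORDER BY dt) AS nxt
          FROM visits)
    WHERE nxt IS NOT NULL
    ORDER BY location, dt
""").fetchall()
print(rows)
```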
|
```
with Result as
(Select *, ROW_NUMBER() Over (order by location,datetime) RowID from table_name )
Select R1.location,DATEDIFF(SECOND,R2.datetime,R1.datetime) from Result R1 Inner join Result R2 on (R1.RowID=R2.RowID+1 and R1.location=r2.location)
```
|
How to subtract two consecutive rows in MS SQL Server?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
In my SQL statement I have to extract a substring from a string at the character '\_'. Strings can be for example 'A\_XXX' 'AB\_XXX' 'ABC\_XXXX', so the extracted substrings should be like 'A' 'AB' 'ABC'.
In Oracle this is easy with the substr() and instr() functions:
```
select substr('AB_XXX', 1, instr('AB_XXX', '_')-1) as substring
from dual;
```
The result would be:
```
SUBSTRING
------------------------
AB
```
I need this query to check if a specific substring is in an array of strings.
The whole query would look like:
```
select 'AB_XXX' from dual
where (instr('ABC_AB_A', substr('AB_XXX', 1, instr('AB_XXX', '_')-1))>0);
```
Is there a way to write it in a SQL standard-compliant way?
If PostgreSQL provides an alternative function, it also helps. The rest could be solved with e.g. IN.
The really important part is to get the substring.
|
Your second example is a bit confusing because you are mixing `'ABC_AB_A'` and `'AB_XXX'`; I'm not sure if that is a typo.
But if you just want all characters before the first `_` then the following works in Postgres:
```
left(col, strpos(col, '_') - 1)
```
or using a regular expression:
```
substring(col from '([A-Z]+)(_{1})')
```
You can use a regular expression in Oracle as well:
```
regexp_substr(col, '([A-Z]+)(_{1})', 1, 1, 'i', 1)
```
Postgres' `substring` function always returns the first capturing group of the regex whereas in Oracle you can specify the group you want: that is the last parameter to the `regexp_substr()` function.
SQLFiddle for Oracle: <http://sqlfiddle.com/#!4/b138c/1>
SQLFiddle for Postgres: <http://sqlfiddle.com/#!15/4b2bb/1>
|
## tl;dr
Use [`split_part`](http://www.postgresqltutorial.com/postgresql-split_part/) which was purposely built for this:
```
split_part(string, '_', 1)
```
## Explanation
Quoting this [PostgreSQL API docs](http://www.postgresqltutorial.com/postgresql-split_part/):
> `SPLIT_PART()` function splits a string on a specified delimiter and returns the nth substring.
The 3 parameters are the string to be split, the delimiter, and the part/substring number (starting from 1) to be returned.
So if you have a field named `string` that contains stuff like `AB_XXX` and you would like to get everything before `_`, then you split by that and get the first part/substring: `split_part(string, '_', 1)`.
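As a quick illustration, the "everything before the first delimiter" operation in plain Python, plus the `substr`/`instr` fallback for SQLite, which has no `split_part`:

```python
import sqlite3

# Plain Python: split once on the delimiter and take the first piece.
assert "AB_XXX".split("_", 1)[0] == "AB"

# SQLite lacks split_part, but substr/instr behave like Oracle's substr/instr.
con = sqlite3.connect(":memory:")
prefix = con.execute(
    "SELECT substr(?1, 1, instr(?1, '_') - 1)", ("ABC_XXXX",)
).fetchone()[0]
print(prefix)
```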
|
Split a string at a specific character in SQL
|
[
"",
"string",
"postgresql",
"split",
"sql",
""
] |
Probably a silly mistake, but I couldn't figure this out myself when I run this query in Oracle 11g.
If this question is already answered on SO, please let me know the link.
```
with LAST_BUSINESS_DAY as (select DECODE(to_char(last_day(to_date('29-mar-2013')), 'D')
, '7', to_char((last_day('29-mar-2013') - 1), 'DD-MON-YYYY')
, '1', to_char((last_day('29-mar-2013') - 2), 'DD-MON-YYYY')
, to_char(last_day('29-apr-2013'), 'DD-MON-YYYY')) as LAST_BD from dual),
HOLIDAYS as (select distinct rpt_day
from rpt_days rpt left join
calendars cal on rpt.calendar_id = cal.calendar_id
where rpt.type = 2
and cal.group = 4)
select case when to_char(to_date(LAST_BD, 'DD-MON-YYYY'), 'D') is null
then LAST_BD
else DECODE(to_char(to_date(LAST_BD, 'DD-MON-YYYY') , 'D')
, '6', LAST_BD
, '2', LAST_BD
, LAST_BD)
end as LAST_BD_OF_MONTH
from LAST_BUSINESS_DAY LBD
inner join HOLIDAYS H on LBD.LAST_BD = H.rpt_day
```
I get the result as
```
LAST_BD_OF_MONTH
===================
29-MAR-2013
```
Now, when I try to add a day to the `LAST_BD` date, it throws an error.
```
with LAST_BUSINESS_DAY as (select DECODE(to_char(last_day(to_date('29-mar-2013')), 'D')
, '7', to_char((last_day('29-mar-2013') - 1), 'DD-MON-YYYY')
, '1', to_char((last_day('29-mar-2013') - 2), 'DD-MON-YYYY')
, to_char(last_day('29-apr-2013'), 'DD-MON-YYYY')) as LAST_BD from dual),
HOLIDAYS as (select distinct rpt_day
from rpt_days rpt left join
calendars cal on rpt.calendar_id = cal.calendar_id
where rpt.type = 2
and cal.group = 4)
select case when to_char(to_date(LAST_BD, 'DD-MON-YYYY'), 'D') is null
then LAST_BD
else DECODE(to_char(to_date(LAST_BD, 'DD-MON-YYYY') , 'D') -- line 35
, '6', LAST_BD - 1 -- CHANGED THIS
, '2', LAST_BD + 1 -- CHANGED THIS
, LAST_BD)
end as LAST_BD_OF_MONTH
from LAST_BUSINESS_DAY LBD
inner join HOLIDAYS H on LBD.LAST_BD = H.rpt_day
```
**Error Message**
> ORA-00932: inconsistent datatypes: expected CHAR got NUMBER
> 00932. 00000 - "inconsistent datatypes: expected %s got %s"
> \*Cause:
> \*Action: Error at Line: 35 Column: 20
As I said, this might be a simple overlook from my side. I tried converting the `LAST_BD` to a `date`, but didn't work.
I tried changing the `DECODE` as below
```
case when to_char(to_date(LAST_BD, 'DD-MON-YYYY'), 'D') is null
then LAST_BD
else DECODE(to_char(to_date(LAST_BD, 'DD-MON-YYYY') , 'D')
, '6', to_date(LAST_BD, 'DD-MON-YYYY') - 1
, '2', LAST_BD + 1 -- line 37
, LAST_BD)
end as LAST_BD_OF_MONTH
```
and got this error :
> ORA-00932: inconsistent datatypes: expected DATE got NUMBER
> 00932. 00000 - "inconsistent datatypes: expected %s got %s"
> \*Cause:
> \*Action: Error at Line: 37 Column: 42
So, I changed the `line 37` to this,
```
case when to_char(to_date(LAST_BD, 'DD-MON-YYYY'), 'D') is null
then LAST_BD
else DECODE(to_char(to_date(LAST_BD, 'DD-MON-YYYY') , 'D')
, '6', to_date(LAST_BD, 'DD-MON-YYYY') - 1
, '2', to_date(LAST_BD, 'DD-MON-YYYY') + 1
, LAST_BD)
end as LAST_BD_OF_MONTH
```
and this time its a different message.
> ORA-00932: inconsistent datatypes: expected CHAR got DATE
> 00932. 00000 - "inconsistent datatypes: expected %s got %s"
> \*Cause:
> \*Action: Error at Line: 35 Column: 20
Any help to get this corrected is greatly appreciated.
**ANSWER** :
```
with LAST_BUSINESS_DAY as (select DECODE(to_char(last_day(to_date('29-mar-2013')), 'D')
, '7', to_char((last_day('29-mar-2013') - 1), 'DD-MON-YYYY')
, '1', to_char((last_day('29-mar-2013') - 2), 'DD-MON-YYYY')
, to_char(last_day('29-apr-2013'), 'DD-MON-YYYY')) as LAST_BD from dual),
HOLIDAYS as (select distinct rpt_day
from rpt_days rpt left join
calendars cal on rpt.calendar_id = cal.calendar_id
where rpt.type = 2
and cal.group = 4)
select case when to_char(to_date(LAST_BD, 'DD-MON-YYYY'), 'D') is null
then LAST_BD
else DECODE(to_char(to_date(LAST_BD, 'DD-MON-YYYY') , 'D')
, '6', to_char (to_date(LAST_BD, 'DD-MON-YYYY') - 1, 'DD-MON-YYYY')
, '2', to_char (to_date(LAST_BD, 'DD-MON-YYYY') + 1, 'DD-MON-YYYY')
, LAST_BD)
end as LAST_BD_OF_MONTH
from LAST_BUSINESS_DAY LBD
inner join HOLIDAYS H on LBD.LAST_BD = H.rpt_day
```
|
So do I see it right that you converted `LAST_BD` from `VARCHAR2` to `DATE` (due to):
```
to_date(LAST_BD, 'DD-MON-YYYY')
```
In the second query you try to subtract `1` from this `VARCHAR2`:
```
LAST_BD - 1
```
This won't work. As a consequence you get the error:
```
ORA-00932: inconsistent datatypes: expected CHAR got NUMBER
```
What would probably work is, if you convert it to `DATE`, add `1` and convert it back to `VARCHAR2`
```
with LAST_BUSINESS_DAY as (select DECODE(to_char(last_day(to_date('29-mar-2013')), 'D')
, '7', to_char((last_day('29-mar-2013') - 1), 'DD-MON-YYYY')
, '1', to_char((last_day('29-mar-2013') - 2), 'DD-MON-YYYY')
, to_char(last_day('29-apr-2013'), 'DD-MON-YYYY')) as LAST_BD from dual),
HOLIDAYS as (select distinct rpt_day
from rpt_days rpt left join
calendars cal on rpt.calendar_id = cal.calendar_id
where rpt.type = 2
and cal.group = 4)
select case when to_char(to_date(LAST_BD, 'DD-MON-YYYY'), 'D') is null
then LAST_BD
else DECODE(to_char(to_date(LAST_BD, 'DD-MON-YYYY') , 'D') -- line 35
, '6', to_char (to_date(LAST_BD, 'DD-MON-YYYY') - 1, 'DD-MON-YYYY')
, '2', to_char (to_date(LAST_BD, 'DD-MON-YYYY') + 1, 'DD-MON-YYYY')
, LAST_BD)
end as LAST_BD_OF_MONTH
from LAST_BUSINESS_DAY LBD
inner join HOLIDAYS H on LBD.LAST_BD = H.rpt_day
```
Note that the conversion back to `VARCHAR2` is required, because `DECODE` allows only values of one type.
|
The issue is that DECODE expects a CHAR argument of some kind and `LAST_BD + 1` and even the `TO_DATE(LAST_BD...` return a NUMBER and a DATE respectively.
The following SQL Fiddle demonstrate how to fix this.
<http://sqlfiddle.com/#!4/8ac4a3/9>
Here is the query:
```
with LAST_BUSINESS_DAY as (select DECODE(to_char(last_day(to_date('29-mar-2013')), 'D')
, '7', to_char((last_day('29-mar-2013') - 1), 'DD-MON-YYYY')
, '1', to_char((last_day('29-mar-2013') - 2), 'DD-MON-YYYY')
, to_char(last_day('29-apr-2013'), 'DD-MON-YYYY')) as LAST_BD from dual),
HOLIDAYS as (select distinct reporting_day
from tbm_reporting_days trdy left join
tbm_calendars tcal on trdy.calendar_id = tcal.calendar_id
where trdy.type = 2
and tcal.site_id = 4)
select case when to_char(to_date(LAST_BD, 'DD-MON-YYYY'), 'D') is null
then LAST_BD
else DECODE(to_char(to_date(LAST_BD, 'DD-MON-YYYY') , 'D')
, '6', TO_CHAR(
(TO_DATE(LAST_BD, 'DD-MON-YYYY') - 1), 'D')
, '2', TO_CHAR(
(TO_DATE(LAST_BD, 'DD-MON-YYYY') - 1), 'D')
, LAST_BD)
end as LAST_BD_OF_MONTH
from LAST_BUSINESS_DAY LBD
inner join HOLIDAYS H on LBD.LAST_BD = H.reporting_day
```
You have to convert the number or date back to a CHAR with TO\_CHAR.
|
ORA-00932: inconsistent datatypes: expected CHAR got NUMBER while adding 1 to a date
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have three tables: Products, Purchase, Invoice
**Product table**:
```
Product_no   Name
1            A
2            B
3            C
```
**Purchase table**:
```
Purchase_no   Product_no   Qty
001           1            81
002           2            150
003           3            10
```
**Invoice table**:
```
Invoice_no   Product_no   Qty
001          1            20
002          2            10
003          3            10
```
I want to get each product's purchase quantity and invoice quantity. I used the following query:
```
SELECT PR.product_no, sum(P.qty),sum(I.qty)
FROM products PR
LEFT JOIN invoice I ON I.product_no=PR.product_no
LEFT JOIN purchase P ON P.product_no=PR.product_no
group by PR.product_no
```
```
product_no   sum(P.qty)   sum(I.qty)
001          162          160
002          150          50
003          10           10
```
**EDIT: Expected results**
```
product_no   sum(P.qty)   sum(I.qty)
001          81           20
002          150          10
003          10           10
```
My query is giving me wrong response (sum of quantities are wrong), please help me to correct my query to get the results properly. thanks
|
I don't think your sample data is really what you have, based on the information provided. My best guess here is that your query is doing a fan-out on one or both of those joins, which is messing up your sums. You need to sum them separately, or else additional rows in either one of those joins will fan out the other join, duplicating values in the sum. This is evident in your result, since 001 looks to be doubled (even though your sample data doesn't show it).
Something like this would ensure sums independent of each other:
```
SELECT PR.product_no,
( SELECT sum(I.qty)
FROM invoice I
WHERE I.product_no=PR.product_no ) invoice_qty,
( SELECT sum(P.qty)
FROM purchase P
WHERE P.product_no=PR.product_no ) purchase_qty
FROM products PR
```
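To see both the fan-out and the fix, here is a small sketch with Python's `sqlite3`: product 1 has two purchase rows, so a naive double join would double its invoice sum, while the scalar subqueries stay correct:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (product_no INT);
    CREATE TABLE purchase (product_no INT, qty INT);
    CREATE TABLE invoice (product_no INT, qty INT);
    INSERT INTO products VALUES (1), (2);
    INSERT INTO purchase VALUES (1, 40), (1, 41), (2, 150);
    INSERT INTO invoice VALUES (1, 20), (2, 10);
""")

# Joining both tables at once duplicates invoice rows per purchase row
# (and vice versa); scalar subqueries sum each table independently.
rows = con.execute("""
    SELECT pr.product_no,
           (SELECT SUM(p.qty) FROM purchase p
             WHERE p.product_no = pr.product_no) AS purchase_qty,
           (SELECT SUM(i.qty) FROM invoice i
             WHERE i.product_no = pr.product_no) AS invoice_qty
    FROM products pr ORDER BY pr.product_no
""").fetchall()
print(rows)
```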
|
I think you have a problem with GROUP BY there. I would do something like this in this case
```
SELECT P.Product_no, Inv.InvProdSum, Pur.PurProdSum
FROM Product P
LEFT JOIN (SELECT Product_no, SUM(Qty) AS InvProdSum
FROM Invoice
GROUP BY Product_no) AS Inv
ON P.Product_no = Inv.Product_no
LEFT JOIN (SELECT Product_no, SUM(Qty) AS PurProdSum
FROM Purchase
GROUP BY Product_no) AS Pur
ON P.Product_no = Pur.Product_no
```
Here is SQL FIddle for that <http://sqlfiddle.com/#!9/731a5/1>
**NOTE**: I added some extra values here to test how it works...
GL!
|
How to join three tables to get Sum
|
[
"",
"mysql",
"sql",
""
] |
I got a query that searches for a movie by title, actors, description etc and orders the results by movies that have the search term in the title.
```
select movie.id, movie.title
from movie
where
title like '%searchTerm%' or
cast like '%searchTerm%' or
director like '%searchTerm%' or
FullPlot like '%searchTerm%'
order by (title like '%searchTerm%') desc ## this gives priority to results thatr have the search term in the title
```
Now I have a new table called "AlsoKnownAs". This table contains data about what movies are called in other languages. example Cidade De Deus (2003) AKA City Of God (2003)
I need to search this new table as well while i am searching for the movie and sort by it as secondary sort. so movies that contain the searchTerm in the AKA table will come after movies that contain searchTerm in the title.
I am not sure how to do this. It seems like I need to store the found term in a variable so that I can order by it.
any help welcome
```
AlsoKownAs table
'id', 'int(11)', 'NO', 'PRI', NULL, 'auto_increment'
'movieId', 'varchar(10)', 'NO', '', NULL, ''
'aka', 'varchar(305)', 'NO', '', NULL, ''
'country', 'varchar(100)', 'YES', '', NULL, ''
```
|
If you only want to return unique movies, then in addition to a left join you need to group by the id and title so you don't get duplicates for movies with multiple aka's.
You can use conditional aggregation to determine if any of the aka's matched and use that for your order
```
select id, title from (
select t1.id, t1.title,
sum(aka like '%searchTerm%') aka_count
from movie t1
left join AlsoKnownAs t2 on t1.id = t2.movieId
where title like '%searchTerm%'
or cast like '%searchTerm%'
or director like '%searchTerm%'
or FullPlot like '%searchTerm%'
or aka like '%searchTerm%'
group by t1.id, t1.title
) t order by (title like '%searchTerm%') desc, (aka_count > 0) desc
```
**Edit**
It's probably faster to do a single table scan of the aka table and left join those results via a derived table:
```
select movie.id, movie.title
from movie
left join (
select distinct movieId from AlsoKnownAs where aka like '%searchTerm%'
) t1 on t1.movieId = movie.id
where
title like '%searchTerm%' or
cast like '%searchTerm%' or
director like '%searchTerm%' or
FullPlot like '%searchTerm%' or
t1.movieId IS NOT NULL
order by (title like '%searchTerm%') desc, (t1.movieId IS NOT NULL) desc
```
|
As I understand it, you just need to join them. You can add `aka` as a field in *order by* (it will be secondary to original title though):
```
SELECT m.id, m.title, a.aka
FROM movie m
LEFT JOIN AlsoKnownAs a ON m.id = a.movieId
WHERE
m.title like '%searchTerm%' or
m.cast like '%searchTerm%' or
m.director like '%searchTerm%' or
m.FullPlot like '%searchTerm%' or
a.aka like '%searchTerm%'
ORDER BY (m.title like '%searchTerm%') DESC, (a.aka like '%searchTerm%') DESC
```
*Disclaimer: My dbms is mssql, but I'm quite sure the syntax is the same in this query*
|
How can I search two tables
|
[
"",
"mysql",
"sql",
""
] |
I'm currently working with Excel VBA and SQL queries. I'm trying to take what I've put into my recordset and dump it into a two-dimensional array, so I can use the information in a later part of the function. The issue is that I only know two methods of extracting information from recordsets: CopyFromRecordset, and rs.Fields.
Here is the code I am attempting.
```
Dim ID_Array(150, 2) As String
Set rs = New ADODB.Recordset
Set oConn = New ADODB.Connection
strSql = "select id, name from groups"
rs.Open strSql, oConn
Do While Not rs.EOF
With ActiveSheet
For Index = 0 To 171
ID_Array(Index, 0) = CStr(rs.Fields(0).Value)
'Safety check to make sure the value isn't null (was having problems before)
If rs.Fields(1).Value <> Null Then
ID_Array(Index, 1) = CStr(rs.Fields(1).Value)
End If
rs.MoveNext
Next
End With
Loop
rs.Close
```
I'm positive I'm not assigning these values properly, since when I go to pull them from recordset, many are either wrong or not appearing (the name portion, particularly, will not even appear as a string on a MsgBox command, so I'm assuming it's not being assigned correctly).
Anyone have any experience with this? How do I assign the id portion of rs to ID\_Array's first dimension, and the name portion of rs to ID\_Array's second dimension?
|
If you use `ADODB.Recordset` you don't need the inner `For...Next` loop.
Try this code; it should work:
```
Dim ID_Array() As String
Set rs = New ADODB.Recordset
Set oConn = New ADODB.Connection
strSql = "select id, name from groups"
rs.Open strSql, oConn
Index = 0
Do While Not rs.EOF
'With ActiveSheet
'For Index = 0 To 171 you dont need for..next, Do While Not rs.EOF show you record one by one
ReDim Preserve ID_Array(1, Index)
ID_Array(0, Index) = CStr(rs.Fields(0).Value)
'Safety check to make sure the value isn't null (was having problems before)
        If Not IsNull(rs.Fields(1).Value) Then
ID_Array(1, Index) = CStr(rs.Fields(1).Value)
End If
Index = Index + 1
rs.MoveNext
'Next
'End With
Loop
rs.Close
```
|
I see this is from a long while ago but I modified this to make it work better for me, so I hope it's useful to someone else.
```
Function RecordSetArray(comTxt As String) As Variant
Dim ID_Array() As Variant
Dim objMyConn As ADODB.Connection, objMyCmd As ADODB.Command, rs As ADODB.Recordset
Set objMyConn = New ADODB.Connection: Set objMyCmd = New ADODB.Command: Set rs = New ADODB.Recordset
'Open Connection'
objMyConn.ConnectionString = ConnectionString: objMyConn.Open
'Set and Excecute SQL Command'
Set objMyCmd.ActiveConnection = objMyConn
With objMyCmd
.CommandText = "SET NOCOUNT ON " & comTxt
.CommandType = adCmdText: .Execute
End With
Set rs.source = objMyCmd: rs.CursorLocation = adUseClient: rs.Open
If rs.EOF Then
FindRecordCount = 0
Else
FindRecordCount = rs.RecordCount
End If
ReDim Preserve ID_Array(FindRecordCount - 1, rs.fields.count - 1)
Index = 0
Do While Not rs.EOF
For i = 0 To rs.fields.count - 1
ID_Array(Index, i) = rs.fields(i).Value
Next
Index = Index + 1
rs.MoveNext
Loop
rs.Close: Set objMyConn = Nothing: Set objMyCmd = Nothing: Set rs = Nothing
RecordSetArray = ID_Array
End Function
```
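For comparison, the same "walk the result set once into a 2D structure, guarding against NULLs" idea is a one-liner in Python's `sqlite3` (table and data are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE groups(id INTEGER, name TEXT)")
con.executemany("INSERT INTO groups VALUES(?, ?)", [(1, "alpha"), (2, None)])

# One pass over the rows, with an explicit NULL guard on the name column,
# mirroring the Do While Not rs.EOF loop above
id_array = [[str(row[0]), "" if row[1] is None else str(row[1])]
            for row in con.execute("SELECT id, name FROM groups ORDER BY id")]
print(id_array)
```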
|
How to assign recordset values to a two-dimensional array
|
[
"",
"sql",
"arrays",
"excel",
"vba",
""
] |
I have a table in which more than one sequence number is stored as a comma-separated string in a single column. Now I want each sequence number on its own row, but I'm not sure how to write the SQL to produce that result.
For example:
```
TABLE STRUCTURE
SR_NO IS INT ,
SEQ_NO IS VARCHAR(MAX)
```
```
SR_NO SEQ_NO
---------------------------------
1 1839073,
2 1850097,1850098,
3 1850099,1850100,1850110
```
I need to get this result:
```
SEQ_NO
--------------
1839073
1850097
1850098
1850099
1850100
1850110
```
Thanks!
|
```
declare @t table(Id int,seq varchar(100))
insert into @t (Id,seq) values (1,'1839073,'),(2,'1850097,1850098,'),(3,'1850099,1850100,1850110')
;With Cte as (
SELECT A.Id,
Split.a.value('.', 'VARCHAR(100)') AS Seq
FROM
(
SELECT Id,
CAST ('<M>' + REPLACE(seq, ',', '</M><M>') + '</M>' AS XML) AS Data
FROM @t
) AS A CROSS APPLY Data.nodes ('/M') AS Split(a) )
Select ID,Seq from Cte Where Seq > ''
```
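The same split-to-rows result can be sketched with a recursive CTE in Python's `sqlite3` (SQLite has no XML type, so this peels one comma-separated item off per recursion step; the table name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(sr_no INTEGER, seq_no TEXT)")
con.executemany("INSERT INTO t VALUES(?, ?)",
                [(1, "1839073,"), (2, "1850097,1850098,"),
                 (3, "1850099,1850100,1850110")])

# Each recursion step takes the text before the first comma as `item`
# and the remainder as `rest`; empty items (trailing commas) are filtered
rows = con.execute("""
    WITH RECURSIVE split(sr_no, item, rest) AS (
        SELECT sr_no, '', seq_no || ',' FROM t
        UNION ALL
        SELECT sr_no,
               substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT sr_no, item FROM split WHERE item <> ''
    ORDER BY sr_no, item
""").fetchall()
print(rows)
```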
|
Try splitting it with XML
```
SELECT SR_NO, t.c.value('.', 'VARCHAR(2000)') COL1
FROM (
SELECT SR_NO, x = CAST('<t>' +
REPLACE(SEQ_NO, ',', '</t><t>') + '</t>' AS XML)
FROM
(values(1,'1839073'),(2, '1850097,1850098'),
(3, '1850099,1850100,1850110')) y(SR_NO, SEQ_NO)
) a
CROSS APPLY x.nodes('/t') t(c)
```
Result:
```
SR_NO COL1
1 1839073
2 1850097
2 1850098
3 1850099
3 1850100
3 1850110
```
You can replace this with your table:
```
(values (1,'1839073'),(2, '1850097,1850098'),
(3, '1850099,1850100,1850110')) y(SR_NO, SEQ_NO)
```
|
how to write SQL query for this result?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
What is the best solution to create SQL table which has a composite primary key.
* First column of the primary key is `ID` {`int`}.
* Second column of the primary key is `YearMonth` {`string` like yyyyMM ex. 201409 } which is current month.
that can reseed an ID to 1 every month
And I actually need the running ID like
```
"201409-00001"
"201409-00002"
"201409-00003"
.
.
"201410-00001"
"201410-00002"
.
.
"201411-00001"
```
|
In the end I used the concept from @ughai's answer,
and used the query below to derive another column that represents the per-month ID.
```
select LEFT(CONVERT(VARCHAR(10),YearMonth,112),6) + '-' + RIGHT(REPLICATE('0',4) + CONVERT(VARCHAR(5),ROW_NUMBER() OVER(PARTITION BY LEFT(CONVERT(VARCHAR(10),YearMonth,112),6) ORDER BY ID)),5) from TestTable
```
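The reseed-per-month effect of that `ROW_NUMBER() OVER (PARTITION BY ...)` can be sketched in Python's `sqlite3` (requires SQLite 3.25+ for window functions; table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, yearmonth TEXT)")
con.executemany("INSERT INTO orders(yearmonth) VALUES(?)",
                [("201409",), ("201409",), ("201409",),
                 ("201410",), ("201410",)])

# ROW_NUMBER restarts within each yearmonth partition,
# so the display ID reseeds to 00001 every month
ids = [r[0] for r in con.execute("""
    SELECT yearmonth || '-' ||
           printf('%05d', ROW_NUMBER() OVER (PARTITION BY yearmonth
                                             ORDER BY id))
    FROM orders ORDER BY id
""")]
print(ids)
```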
|
Based on the question and your comments, it seems that you need to display a row identifier in the format `201411-00001`. This does not need to be your actual primary key. What I would recommend is that you do something like this.
create your table with `ID` which is Identity. Add another column `YearMonth CHAR(6)` column which stores `YYYYMM` or if you already have a column which stores the date as `date/datetime`, use that. In your `SELECTs` you would do something like this
```
LEFT(CONVERT(VARCHAR(10),YearMonth,112),6) + '-' + RIGHT(REPLICATE('0',5) + CONVERT(VARCHAR(5),ROW_NUMBER() OVER(PARTITION BY DateCol ORDER BY ID)),5)
```
Alternatively, you could have a trigger which updates a new `INT` column `monthlyID` based on the `MAX(monthlyID) + 1` for the month.
|
Auto increment primary key and reseed every month
|
[
"",
"sql",
"sql-server",
"identity",
""
] |
I am stuck on how to get a student's minimum score from the `marks` table along with the student info from the `student` table.
**EDIT: sorry for not mentioning it, but I need the result for user 1, st1**
student
```
id name
1 st1
2 st2
```
marks
```
student_id course_name marks
1 C++ 55
1 OOP 65
1 AI 45 //need this lowest result for st1
2 C++ 82
2 STATS 74
2 OS 20 // lowest marks are these for st2 or overall
```
**Edit forgot to enter name st1:**
Desired Output:
```
id name course_name MinMarks
1 st1 AI 45
```
I tried this query, but it returns the wrong `course_name` (C++, the first row's course name):
```
SELECT s.id, s.name, m.course_name, MIN(m.marks) FROM student s
JOIN marks m
ON s.id = m.student_id
Where s.id = 1
```
Output:
```
id name course_name MinMarks
1 st1 C++ 45
```
|
You can use a subquery:
```
select *
from marks
where marks = (select min(marks) from marks)
```
or `left join`:
```
select m.*
from marks m
left join (select min(marks) as marks from marks) m_min on m.marks = m_min.marks
where m_min.marks is not null
```
It guarantees that if there is more than one student with the minimum `mark`, all of them are displayed.
`left join` can possibly improve performance, you can check your execution plan to be sure.
*P.S.: If you also need to retrieve `name` from the `student` table (it is not stated in your desired output), you can do the `join` operation you did in your query:*
using subquery:
```
select m.student_id
, s.name
, m.course_name
, m.marks
from student s
join marks m on s.id = m.student_id
where m.marks = (select min(marks) from marks)
```
using `left join`:
```
select m.student_id
, s.name
, m.course_name
, m.marks
from student s
join marks m on s.id = m.student_id
left join (select min(marks) as marks from marks) m_min on m.marks = m_min.marks
where m_min.marks is not null
```
**Edit**
As it turns out, *OP* needs a minimum `marks` per `student`'s `id` (for the `id = 1`), so :
```
select m.student_id
, s.name
, m.course_name
, m.marks
from student s
join marks m on s.id = m.student_id
left join (select student_id, min(marks) as marks from marks group by student_id) p on s.id = p.student_id and p.marks = m.marks
where s.id = 1 and p.student_id is not null
```
|
try this:
```
Select s.id, s.name, m.course_name, m.marks From student s
right join marks m
on s.id = m.user_id where m.marks = MIN(m.marks);
```
|
Get min value based on foreign key
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to pull ALL fields and records if two of the fields (category and measure) create 3 or more dups.
```
SELECT category
,measure
,date
FROM my_table
```
for example:
```
category measure date
EVENTS COL 04/15/2014
EVENTS COL 05/21/2014
EVENTS COL 07/16/2014
```
So the above meets the criteria of 3 or more so we would pull all three.
```
category measure rec_count
EVENTS COL 3
```
|
I think you want something like the following. It gives you all rows whose (category, measure) combination occurs in 3 or more records:
```
SELECT
category,
measure,
date
FROM
my_table t1
inner join (
SELECT
category,
measure
FROM
my_table
group by
category,
measure
having
count(*) >= 3
) t2 on
t2.category = t1.category and
t2.measure = t1.measure
```
To get only the count you would run
```
SELECT
category,
    measure,
count(*)
FROM
my_table
group by
category,
measure
having
count(*) >= 3
```
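The join-back-to-a-grouped-subquery pattern above can be sketched in Python's `sqlite3` (the `date` column is renamed `obs_date` here to avoid the reserved word; data is made up around the question's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table(category TEXT, measure TEXT, obs_date TEXT)")
con.executemany("INSERT INTO my_table VALUES(?, ?, ?)",
                [("EVENTS", "COL", "04/15/2014"),
                 ("EVENTS", "COL", "05/21/2014"),
                 ("EVENTS", "COL", "07/16/2014"),
                 ("OTHER", "X", "01/01/2014")])

# Keep only rows whose (category, measure) pair occurs 3 or more times
rows = con.execute("""
    SELECT t1.category, t1.measure, t1.obs_date
    FROM my_table t1
    JOIN (SELECT category, measure FROM my_table
          GROUP BY category, measure
          HAVING COUNT(*) >= 3) t2
      ON t2.category = t1.category AND t2.measure = t1.measure
""").fetchall()
print(len(rows))  # the three EVENTS/COL rows survive, OTHER/X is dropped
```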
|
Sounds like you're looking for `GROUP BY`:
```
SELECT category
,measure
,count(*) as rec_count
FROM my_table
GROUP BY category, measure
HAVING count(*) >= 3
```
|
Pull all records only if 3 dups or more exist
|
[
"",
"sql",
"duplicates",
""
] |
Currently, my code is as follows:
```
SELECT Volunteer.VolunteerID, Volunteer.LastName, Volunteer.FirstName, Observations.VolunteerID, Roost.RoostID, Roost.UTME, Roost.UTMN, Roost.DecimalDegrees, Observations.Date, Location.LocationName, Location.Address, Town.Town
FROM (((Volunteer INNER JOIN Observations ON Volunteer.[VolunteerID]= Observations.[VolunteeriD])
LEFT JOIN Roost ON Roost.[RoostID]= Observations.[RoostID])
LEFT JOIN Location ON Location.[LocationID]= Roost.[LocationID])
LEFT JOIN Town ON Location.[TownID]= Town.[TownID]
ORDER BY Volunteer.LastName, Volunteer.FirstName;
```
Currently a part of my table looks like this:
```
Observations.VolunteerID | Observations.LocationID | Date
Bob | Main St | 2015-01-01
Bob | Main St | 2015-02-02
Sally | Fox St | 2015-02-02
Dave | Long St | 2015-02-02
Dave | Taylor St | 2015-02-05
Lindsay | New St | 2015-02-01
Lindsay | New St | 2015-02-08
Lindsay | Ray St | 2015-02-10
Lindsay | Main St | 2015-02-25
Lindsay | Taylor St | 2015-02-31
```
However, I want to add a `GROUP BY` statement (I think) that is grouped by *VolunteerID* and *LocationID* so that I get the following output. I want to know each location the volunteers have been at, and if they visited the same location multiple times, I would like to see the most recent date. How would I incorporate that into my original code?
```
Observations.VolunteerID | Observations.LocationID | Date
Bob | Main St | 2015-02-02
Sally | Fox St | 2015-02-02
Dave | Long St | 2015-02-02
Dave | Taylor St | 2015-02-05
Lindsay | New St | 2015-02-08
Lindsay | Ray St | 2015-02-10
Lindsay | Main St | 2015-02-25
Lindsay | Taylor St | 2015-02-31
```
---
Updated code, but still getting a syntax error (missing operator in query expression 'Volunteer.FirstName GROUP BY Observations.VolunteerID').
```
SELECT Volunteer.VolunteerID, Volunteer.LastName, Volunteer.FirstName, Observations.VolunteerID, Roost.RoostID, Roost.UTME, Roost.UTMN, Roost.DecimalDegrees, Max([Observations.Date]), Location.LocationName, Location.Address, Town.Town
FROM (((Volunteer INNER JOIN Observations ON Volunteer.[VolunteerID]= Observations.[VolunteeriD])
LEFT JOIN Roost ON Roost.[RoostID]= Observations.[RoostID])
LEFT JOIN Location ON Location.[LocationID]= Roost.[LocationID])
LEFT JOIN Town ON Location.[TownID]= Town.[TownID]
ORDER BY Volunteer.LastName, Volunteer.FirstName
GROUP BY Observations.VolunteerID, Observations.LocationID;
```
|
Put the `ORDER BY` clause after the `GROUP BY`.
Change `Max([Observations.Date])` to `Max(Observations.Date)`
Add any field expression from the `SELECT` list which is not an aggregate expression to your `GROUP BY`.
```
SELECT
Volunteer.VolunteerID,
Volunteer.LastName,
Volunteer.FirstName,
Observations.VolunteerID,
Roost.RoostID,
Roost.UTME,
Roost.UTMN,
Roost.DecimalDegrees,
Max(Observations.Date) AS latest_observation,
Location.LocationName,
Location.Address,
Town.Town
FROM (((Volunteer INNER JOIN Observations ON Volunteer.[VolunteerID]= Observations.[VolunteeriD])
LEFT JOIN Roost ON Roost.[RoostID]= Observations.[RoostID])
LEFT JOIN Location ON Location.[LocationID]= Roost.[LocationID])
LEFT JOIN Town ON Location.[TownID]= Town.[TownID]
GROUP BY
Volunteer.VolunteerID,
Volunteer.LastName,
Volunteer.FirstName,
Observations.VolunteerID,
Roost.RoostID,
Roost.UTME,
Roost.UTMN,
Roost.DecimalDegrees,
Location.LocationName,
Location.Address,
Town.Town
ORDER BY Volunteer.LastName, Volunteer.FirstName;
```
However I am uncertain whether that grouping will be suitable. In the question, you wanted to group on *Observations.VolunteerID* and *Observations.LocationID*. This larger `GROUP BY` may not give you the same result.
If that is the case, you can use an aggregate function for a `SELECT` field and then remove that field from the `GROUP BY`. For example ...
```
SELECT Max(Volunteer.LastName)
```
Assuming *LastName* is always the same for a given *VolunteerID*, `Max(Volunteer.LastName)` will give you that same value.
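The core of this — one row per (volunteer, location) carrying the most recent date — can be sketched in Python's `sqlite3` (column names shortened, data taken from the question's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE obs(volunteer TEXT, location TEXT, obs_date TEXT)")
con.executemany("INSERT INTO obs VALUES(?, ?, ?)",
                [("Bob", "Main St", "2015-01-01"),
                 ("Bob", "Main St", "2015-02-02"),
                 ("Lindsay", "New St", "2015-02-01"),
                 ("Lindsay", "New St", "2015-02-08"),
                 ("Lindsay", "Ray St", "2015-02-10")])

# GROUP BY collapses repeat visits; MAX keeps the latest date per group
rows = con.execute("""
    SELECT volunteer, location, MAX(obs_date)
    FROM obs
    GROUP BY volunteer, location
    ORDER BY volunteer, location
""").fetchall()
print(rows)
```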
|
`max` with the `group by` clause seems to be enough:
```
select Observations.VolunteerID, Observations.LocationID, max([Date])
from <...>
group by Observations.VolunteerID, Observations.LocationID
```
|
Group by to show unique fields
|
[
"",
"sql",
"ms-access",
"group-by",
""
] |
I want to fire a Query "`SELECT * FROM TABLE`" but select only from row `N+1`. Any idea on how to do this?
|
Query: in [sql-server](/questions/tagged/sql-server "show questions tagged 'sql-server'")
```
DECLARE @N INT = 5 --Any random number
SELECT * FROM (
SELECT ROW_NUMBER() OVER(ORDER BY ID) AS RoNum
, ID --Add any fields needed here (or replace ID by *)
FROM TABLE_NAME
) AS tbl
WHERE @N < RoNum
ORDER BY tbl.ID
```
This will give rows of Table, where rownumber is starting from `@N + 1`.
|
For SQL Server 2012 and above, use this:
```
SELECT *
FROM Sales.SalesOrderHeader
ORDER BY OrderDate
OFFSET (@Skip) ROWS FETCH NEXT (@Take) ROWS ONLY
```
<https://stackoverflow.com/a/19669165/1883345>
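For reference, the same skip-N idea in SQLite (via Python's `sqlite3`) uses `LIMIT -1 OFFSET n`, where `-1` means "no upper bound"; the table here is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES(?)", [(i,) for i in range(1, 11)])

n = 5  # number of leading rows to skip
rows = [r[0] for r in con.execute(
    "SELECT id FROM t ORDER BY id LIMIT -1 OFFSET ?", (n,))]
print(rows)  # rows 6..10
```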
|
How to skip the first n rows in sql query
|
[
"",
"sql",
""
] |
I have a bunch of XMLs that I need to parse with SQL.
The XML can take on multiple forms:
```
<Grandparent>
<parent>
<child1>something</child1>
<child2>something</child2>
</parent>
</Grandparent>
```
or
```
<Grandparent>
<child1>something</child1>
<child2>something</child2>
</Grandparent>
```
Additionally, the number of "child" nodes is variable and there is no way of knowing before hand how many children there are.
What i have done so far is:
```
@xml.nodes('/Grandparent')
```
which returns either the `<parent>` node and children or simply the `child` nodes depending on the format of the xml.
The version of SQL and the fact that I'm writing it as an SQL function seem to mean that trying to get `value` as shown in [this answer](https://stackoverflow.com/questions/15680259/parse-xml-in-sql-server) does not work.
Therefore, I decided to parse the string. Essentially, I look for `<` and take the substring from there until `>` for the node name. Then I take anything between `>` and `</` for the value. I do so in a while loop until the xml string is finished. It works perfectly unless the xml has that `parent` node.
I don't know how to determine whether that `parent` node is there and how to ignore it if it is. This is where I am stuck.
What I want to get in either case is:
```
Node | Value
child1 | something
child2 | something
```
etc for as many child nodes that there is.
|
You can use descendant axis `//` to get child nodes at any level depth within a parent node.
Another useful xpath syntax for this task is `local-name()` which return current context node/attribute's name without namespace :
```
select c.value('local-name(.)', 'varchar(max)') as 'node'
, c.value('.', 'varchar(max)') as 'value'
from @xml.nodes('/Grandparent//*[not(*)]') as T(c)
```
This xpath bit `//*[not(*)]` means select descendant nodes that doesn't have child node, in other words select the inner most descendant.
**[SQL Fiddle](http://sqlfiddle.com/#!6/9eecb/5023)**
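The same "innermost descendants regardless of nesting" idea can be sketched outside of T-SQL with Python's `xml.etree.ElementTree`, selecting every element with no children (sample documents follow the question's two shapes):

```python
import xml.etree.ElementTree as ET

docs = [
    "<Grandparent><parent><child1>something</child1>"
    "<child2>else</child2></parent></Grandparent>",
    "<Grandparent><child1>something</child1>"
    "<child2>else</child2></Grandparent>",
]

# Collect (name, value) for every leaf node, however deeply nested,
# mirroring the //*[not(*)] xpath in the answer above
results = []
for doc in docs:
    root = ET.fromstring(doc)
    results.append([(el.tag, el.text) for el in root.iter() if len(el) == 0])
print(results)  # both document shapes yield the same child list
```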
|
Going out on a limb here with two assumptions; your question isn't clear about the following:
1. I'm assuming that your child nodes have the same name (e.g., child, not child1 and child2), and
2. You want a SQL statement that returns 1 child per row.
If either of those assumptions is incorrect, this answer won't help :)
```
DECLARE @xml XML = '<Grandparent>
<parent>
<child>something</child>
<child>something</child>
</parent>
</Grandparent>'
SELECT x.value('.[1]', 'varchar(100)')
FROM @xml.nodes('/Grandparent//child') t(x)
SET @xml= '<Grandparent>
<child>something</child>
<child>something</child>
</Grandparent>'
SELECT x.value('.[1]', 'varchar(100)')
FROM @xml.nodes('/Grandparent//child') t(x)
```
|
SQL Parse an xml string
|
[
"",
"sql",
"sql-server",
"xml",
"parsing",
""
] |
Before you reprimand me about how 'this has been asked here:', I'd like to point out I did indeed google. I even went to page 3 in some cases. *shudders*
So here's the deal:
I'm trying to audit a database we have, setting triggers for `UPDATE`, `INSERT`, `DELETE` statements for several tables. The triggers are created, and linked, successfully. Each trigger executes a stored procedure that inserts the required data into our **tick\_audit** table.
This information is:
* user\_account; to store *who* changed something
* client\_id; to store which client had their data changed
* date\_time; the date of the edit
* table\_name; the name of the table that was changed
* table\_record\_id; the *id* of the record that was changed
* descr; a description of *why* something was changed
* remote\_ip\_address; so we can keep tabs where something was changed (internal or external)
The table also has a PRIMARY\_KEY, AUTO\_INCREMENT *id* field.
When I try to create the stored-procedure
```
create procedure update_tick_user
@UserId varchar(32),
@ClientId varchar(32),
@Table varchar(64),
@TableRecord varchar(512),
@Descr varchar(128),
@RemoteIP varchar(16)
as
begin
insert into tick_audit ('user_account', 'client_id', 'date_time', 'table_name', 'table_record_id', 'descr', 'remote_ip_address')
values
(@UserId, @ClientId, getdate(), @Table, @TableRecord, @Descr, @RemoteIP)
end;
```
I get the following error(s):
> Msg 207, Level 16, State 1, Procedure update\_tick\_user, Line 10
> Invalid column name 'user\_account'.
This repeats for each column. When I run
```
exec sp_columns tick_audit
```
I get all the columns from **tick\_audit**, and even copying their names into the column-fields for the insert, I get the above-mentioned errors. I even get the errors when I simply run
```
insert into tick_audit
('user_account', 'client_id', 'date_time', 'table_name', 'table_record_id', 'descr', 'remote_ip_address')
values
('', '', getdate(), '', '', '', '')
```
Whenever I try an insert, update or delete on a different table, I get no errors. Is there anything I could try to find out if there's a fault in my table, or some super-secret hocus-pocus, ritual-esque method?
Here's what I've tried so far:
* Drop the table (On my feet, many many times), re-create it.
* Google for three hours
* Asked my co-workers
* Put on my thinking cap
* Check if columns actually exists (through exec, select)
Crossing my fingers and hoping someone can help me.
|
Remove the single quotes `'` from the column names, making the `insert` as below:
```
insert into tick_audit (user_account, client_id, date_time, table_name, table_record_id, descr, remote_ip_address)
values (@UserId, @ClientId, getdate(), @Table, @TableRecord, @Descr, @RemoteIP)
```
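The underlying rule — single quotes make a string literal, not a column reference — can be demonstrated with Python's `sqlite3` (standard SQL uses double quotes for identifiers; T-SQL uses `[]`; the sample row is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tick_audit(user_account TEXT)")
con.execute("INSERT INTO tick_audit(user_account) VALUES('alice')")

# 'user_account' is a string literal; "user_account" is the column
literal = con.execute("SELECT 'user_account' FROM tick_audit").fetchone()[0]
column = con.execute('SELECT "user_account" FROM tick_audit').fetchone()[0]
print(literal, column)
```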
|
Do not use single quotes around column names.
```
create procedure update_tick_user
@UserId varchar(32),
@ClientId varchar(32),
@Table varchar(64),
@TableRecord varchar(512),
@Descr varchar(128),
@RemoteIP varchar(16)
as
begin
insert into tick_audit (user_account, client_id, date_time, table_name, table_record_id, descr, remote_ip_address)
values
(@UserId, @ClientId, getdate(), @Table, @TableRecord, @Descr, @RemoteIP)
end;
```
|
SSMS 'Invalid column name'
|
[
"",
"sql",
"stored-procedures",
"triggers",
"ssms",
""
] |
Following up with [SELECTing the contents of the Table, which is a result of another value](https://stackoverflow.com/questions/29771264/selecting-the-contents-of-the-table-which-is-a-result-of-another-value), I wanna keep a condition here on the generated field. If I execute this query:
```
SELECT *, (
SELECT `TableName` FROM `TableNames` WHERE `TableID`=`IndexType`
) AS `IndexTypeName`,
CASE WHEN `IndexType`=1 THEN (
SELECT `Username` FROM `Users` WHERE `IndexRowID`=`UserID`
) WHEN `IndexType`=2 THEN (
SELECT `MessageContent` FROM `Messages` WHERE `IndexRowID`=`MessageID`
) WHEN `IndexType`=3 THEN (
SELECT `CommentContent` FROM `Comments` WHERE `IndexRowID`=`CommentID`
)
END `TableValue`
FROM `Index`
ORDER BY `IndexTime` DESC;
```
I would obviously get:
```
+---------+-----------+------------+---------------------+------------+---------------------+
| IndexID | IndexType | IndexRowID | IndexTime | IndexTable | TableValue |
+---------+-----------+------------+---------------------+------------+---------------------+
| 5 | 2 | 2 | 2015-04-10 11:45:00 | Messages | Hello, Mr. Prave... |
+---------+-----------+------------+---------------------+------------+---------------------+
| 4 | 3 | 1 | 2015-04-10 11:30:00 | Comments | @You, you aren't... |
+---------+-----------+------------+---------------------+------------+---------------------+
| 3 | 2 | 1 | 2015-04-10 11:25:00 | Messages | Thanks for your ... |
+---------+-----------+------------+---------------------+------------+---------------------+
| 2 | 1 | 2 | 2015-04-10 10:55:00 | Users | Jeff Atwood |
+---------+-----------+------------+---------------------+------------+---------------------+
| 1 | 1 | 1 | 2015-04-10 10:50:00 | Users | Praveen Kumar |
+---------+-----------+------------+---------------------+------------+---------------------+
```
But, if I would like to filter on the values of `TableValue`, so if I give a query like:
```
SELECT *, (
SELECT `TableName` FROM `TableNames` WHERE `TableID`=`IndexType`
) AS `IndexTypeName`,
CASE WHEN `IndexType`=1 THEN (
SELECT `Username` FROM `Users` WHERE `IndexRowID`=`UserID`
) WHEN `IndexType`=2 THEN (
SELECT `MessageContent` FROM `Messages` WHERE `IndexRowID`=`MessageID`
) WHEN `IndexType`=3 THEN (
SELECT `CommentContent` FROM `Comments` WHERE `IndexRowID`=`CommentID`
)
END `TableValue`
FROM `Index`
WHERE `TableValue` LIKE '%prav%'
ORDER BY `IndexTime` DESC;
```
I get the following error:
> Error in query (1054): Unknown column 'TableValue' in 'where clause'.
I don't want to repeat the filter for every case in the sub-`SELECT` query. Is there any other way to do this by **not** doing the following:
* Using `WHERE` in each Sub-`SELECT` statements.
* Creating a View for the whole thing.
Thanks in advance.
|
You can't use the alias name in the `WHERE` clause.
You need to use either a subquery or the `HAVING` clause:
```
SELECT *, (
SELECT `TableName` FROM `TableNames` WHERE `TableID`=`IndexType`
) AS `IndexTypeName`,
CASE WHEN `IndexType`=1 THEN (
SELECT `Username` FROM `Users` WHERE `IndexRowID`=`UserID`
) WHEN `IndexType`=2 THEN (
SELECT `MessageContent` FROM `Messages` WHERE `IndexRowID`=`MessageID`
) WHEN `IndexType`=3 THEN (
SELECT `CommentContent` FROM `Comments` WHERE `IndexRowID`=`CommentID`
)
END `TableValue`
FROM `Index`
having `TableValue` LIKE '%prav%'
ORDER BY `IndexTime` DESC;
```
or
```
select * from(
SELECT *, (
SELECT `TableName` FROM `TableNames` WHERE `TableID`=`IndexType`
) AS `IndexTypeName`,
CASE WHEN `IndexType`=1 THEN (
SELECT `Username` FROM `Users` WHERE `IndexRowID`=`UserID`
) WHEN `IndexType`=2 THEN (
SELECT `MessageContent` FROM `Messages` WHERE `IndexRowID`=`MessageID`
) WHEN `IndexType`=3 THEN (
SELECT `CommentContent` FROM `Comments` WHERE `IndexRowID`=`CommentID`
)
END `TableValue`
FROM `Index`
)x
where `TableValue` LIKE '%prav%'
ORDER BY `IndexTime` DESC;
```
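The derived-table trick — wrap the computed column so the outer `WHERE` can reference its alias — can be sketched in Python's `sqlite3` (simplified schema, made-up data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE idx(IndexID INTEGER, val TEXT)")
con.executemany("INSERT INTO idx VALUES(?, ?)",
                [(1, "Praveen Kumar"), (2, "Jeff Atwood")])

# TableValue exists only inside the derived table x, but the outer
# query can filter on it by name
rows = con.execute("""
    SELECT * FROM (
        SELECT IndexID, lower(val) AS TableValue FROM idx
    ) x
    WHERE TableValue LIKE '%prav%'
""").fetchall()
print(rows)
```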
|
You could use a `HAVING` clause to use the alias name.
Try this
```
SELECT *, (
SELECT `TableName` FROM `TableNames` WHERE `TableID`=`IndexType`
) AS `IndexTypeName`,
CASE WHEN `IndexType`=1 THEN (
SELECT `Username` FROM `Users` WHERE `IndexRowID`=`UserID`
) WHEN `IndexType`=2 THEN (
SELECT `MessageContent` FROM `Messages` WHERE `IndexRowID`=`MessageID`
) WHEN `IndexType`=3 THEN (
SELECT `CommentContent` FROM `Comments` WHERE `IndexRowID`=`CommentID`
)
END `TableValue`
FROM `Index`
HAVING `TableValue` LIKE '%prav%'
ORDER BY `IndexTime` DESC;
```
|
Error in query (1054): Unknown column 'TableValue' in 'where clause'
|
[
"",
"mysql",
"sql",
"join",
"subquery",
"where-clause",
""
] |
I have an SQL table (PostgreSQL 9.3) with essentially an ID column and a value column (e.g. varchar). I need to be able to reproduce the order in which the rows were entered. The most naive approach would be to either rely on the AUTOINCREMENT feature of the ID column or have a specific column for ordering, let's say
```
CREATE TABLE "foo" (
"foo_id" SERIAL,
"order_index" int NOT NULL,
"value" VARCHAR(128)
);
```
The data might then look like
```
347 1 'foo'
368 2 'bar'
511 3 'baz'
```
and I could retrieve them with an ORDER BY "order\_index" clause.
The problem is that there has to be the possibility to insert another row at a specific position later on, e.g. when a row has been left out accidentally. In the above solution I would have to reassign all order\_index values that come after the inserted position. That does not scale very well and is kind of the opposite of elegant.
One solution would be to leave gaps in the initial assignment, so the above example would look like
```
347 1000 'foo'
368 2000 'bar'
511 3000 'baz'
```
Rows could be inserted until a gap is filled. Only then a reassignment has to take place. This is the best idea I have had so far. I have also thought about a solution based on the idea of a linked list where each row has a column "successor\_id". However, in that case I could not retrieve the data in an ordered way by using a simple ORDER BY clause.
I've been wondering whether there is a scalable and elegant solution possibly involving two columns that does not require reassignment of order indices. Or is there some kind of best practice dealing with this problem?
|
The users enter data (of a sheet). When they look up their data, they may want to insert another data row between others.
So they are always working on a small part of the whole table. Then it is not so big a task to renumber the records. Let's say you have a sheet number indicating the sheet they are working on and an order index per sheet number. Then it would be no big task for the dbms to handle this:
```
update sheets set order_index = order_index + 1
where sheet_no = 1234 and order_index >= 23;
insert into sheets (order_index, ...) values (23, new data);
commit;
```
I would even go so far as deleting the rows and inserting them anew.
```
delete from sheets where sheet_no = 1234;
insert into sheets ...
...
commit;
```
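The renumber-then-insert sequence above can be sketched end-to-end in Python's `sqlite3` (table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sheets(sheet_no INTEGER, order_index INTEGER, value TEXT)")
con.executemany("INSERT INTO sheets VALUES(?, ?, ?)",
                [(1234, 1, "foo"), (1234, 2, "bar"), (1234, 3, "baz")])

# Shift everything at or after position 2 down by one, then
# insert the missing row into the freed slot
con.execute("UPDATE sheets SET order_index = order_index + 1 "
            "WHERE sheet_no = 1234 AND order_index >= 2")
con.execute("INSERT INTO sheets VALUES(1234, 2, 'new')")

rows = [r[0] for r in con.execute(
    "SELECT value FROM sheets WHERE sheet_no = 1234 ORDER BY order_index")]
print(rows)  # 'new' now sits between 'foo' and 'bar'
```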
|
A variation on your "gaps" idea: Use a secondary ordering column.
Increment your order column on your own (don't use AUTOINCREMENT), but treat it like an autoincrement column, incrementing just by +1 (not by gaps as you suggest). Then, just add a 2nd column that defaults to 0. Order on both of these columns.
When you come to the case of inserting a missing row, insert it with the main ID column matching the row that preceeds it, and a "1" (or 2, or, 3) in the secondary ordering column.
It ruins the uniqueness of your ID column, though.
|
SQL preserve order
|
[
"",
"sql",
"postgresql",
"sql-order-by",
""
] |
I am passing a string to a stored procedure: `1:20,2:30,4:50`
It contains an id and the corresponding value for it.
How can I insert the values into the database table as shown below?
```
ID Value
1 20
2 30
4 50
```
I already have a "stringSplit" function which works perfectly and outputs row values, something like this:
```
1:20
2:30
4:50
```
Can anyone please help me insert this data into the table?
I already tried this solution:
```
insert <table> (colname)
select y.item
from dbo.SplitString(@testString, ':') x
cross apply
dbo.SplitString(x.item, ',') y
```
but this returns duplicate values, one per id value.
My stored procedure is:
```
CREATE PROCEDURE [dbo].[temp_result_insert]
@dataString varchar(max)
AS
insert into tempTable(id,marks)
select x.Item,y.Item
from dbo.SplitStringVarcahr(@dataString, ':') x
cross apply
dbo.SplitStringVarcahr(x.Item,',') y
RETURN 0
```
|
As you have already split the string into rows and want to insert it into a table by splitting each row into two columns, maybe this works:
```
CREATE TABLE #Test(ID INT,Val INT)
declare @t table (val varchar(50))
insert into @t (val)values ('1:20,2:30,4:50')
declare @str varchar(max)
;with cte as (
SELECT
Split.a.value('.', 'VARCHAR(100)') AS String
FROM (SELECT
CAST ('<M>' + REPLACE([val], ',', '</M><M>') + '</M>' AS XML) AS String
FROM @t) AS A CROSS APPLY String.nodes ('/M') AS Split(a))
INSERT INTO #Test
select SUBSTRING(String,0,CHARINDEX(':',String)),REVERSE(SUBSTRING(reverse(String),0,CHARINDEX(':',reverse(String)))) from cte
select * from #test
```
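Outside of T-SQL, the parse-and-insert can be sketched in a few lines with Python's `sqlite3`, splitting the `id:value` pairs client-side before a bulk insert (table name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE temp_table(id INTEGER, marks INTEGER)")

data = "1:20,2:30,4:50"
# Split on ',' for rows, then on ':' for the (id, marks) columns
pairs = [tuple(map(int, item.split(":"))) for item in data.split(",")]
con.executemany("INSERT INTO temp_table(id, marks) VALUES(?, ?)", pairs)

rows = con.execute("SELECT id, marks FROM temp_table ORDER BY id").fetchall()
print(rows)
```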
|
You can also try `XML.nodes()` and string functions to split the data. Something like this:
```
DECLARE @var VARCHAR(100) = '1:20,2:30,4:50'
DECLARE @xml xml = CONVERT(xml, '<r>' + REPLACE(@var,',','</r><r>') + '</r>')
SELECT LEFT(val,cindex - 1) c1,RIGHT(val,LEN(val) - cindex) c2
FROM
(
SELECT CHARINDEX(':',c.value('text()[1]','VARCHAR(100)')) cindex,c.value('text()[1]','VARCHAR(100)') val
FROM @xml.nodes('r') as t(c))c
```
|
Inserting multiple value in table with String input
|
[
"",
"sql",
"sql-server",
"database",
"stored-procedures",
""
] |
I've a table of SalesDetails, looking like this:
```
InvoiceID, LineID, Product
1,1,Apple
1,2,Banana
2,1,Apple
2,2,Mango
3,1,Apple
3,2,Banana
3,3,Mango
```
My requirement is to return rows where an Invoice contained sales of both: Apple AND Banana, but if there are other products on such an invoice, I don't want those.
So the result should be:
```
1,1,Apple
1,2,Banana
3,1,Apple
3,2,Banana
```
I tried the following:
```
Select * from SalesDetails where Product = 'Apple'
Intersect
Select * from SalesDetails where Product = 'Banana'
```
Didn't work, because it seems Intersect needs to match all the columns.
What I'm hoping to do is:
```
Select * from SalesDetails where Product = 'Apple'
Intersect ----On InvoiceID-----
Select * from SalesDetails where Product = 'Banana'
```
Is there a way to do this?
Or do I have to first Intersect on InvoiceIDs only using my criteria, then select the rows of those InvoiceIDs where the criteria is matched again, I.e.:
```
Select * From SalesDetails
Where Product In ('Apple', 'Banana') And InvoiceID In
(
Select InvoiceID from SalesDetails where Product = 'Apple'
Intersect
Select InvoiceID from SalesDetails where Product = 'Banana'
)
```
Which seems somewhat wasteful as it's examining the criteria twice.
|
Okay, this time I've managed to reuse the Apple/Banana info by using a CTE:
```
with sd as (
Select * from SalesDetails
where (Product in ('Apple', 'Banana'))
)
Select * from sd where invoiceid in (Select invoiceid from
sd group by invoiceid having Count(distinct product) = 2)
```
[SQL Fiddle](http://sqlfiddle.com/#!6/0192b/6)
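The CTE approach runs unchanged in SQLite too; a runnable sketch via Python's `sqlite3`, using the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SalesDetails(InvoiceID INTEGER, LineID INTEGER, Product TEXT)")
con.executemany("INSERT INTO SalesDetails VALUES(?, ?, ?)",
                [(1, 1, "Apple"), (1, 2, "Banana"), (2, 1, "Apple"),
                 (2, 2, "Mango"), (3, 1, "Apple"), (3, 2, "Banana"),
                 (3, 3, "Mango")])

# The CTE filters to the two products once; the HAVING clause keeps
# only invoices where both distinct products appear
rows = con.execute("""
    WITH sd AS (SELECT * FROM SalesDetails
                WHERE Product IN ('Apple', 'Banana'))
    SELECT * FROM sd WHERE InvoiceID IN (
        SELECT InvoiceID FROM sd GROUP BY InvoiceID
        HAVING COUNT(DISTINCT Product) = 2)
    ORDER BY InvoiceID, LineID
""").fetchall()
print(rows)
```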
|
A self-join will solve the problem.
```
SELECT T1.*
FROM SalesDetails T1
INNER JOIN SalesDetails T2 ON T1.InvoiceId = T2.InvoiceId
AND (T1.Product = 'Apple' AND T2.Product = 'Banana'
OR T1.Product = 'Banana' AND t2.Product = 'Apple')
```
|
Intersect Select Statements on Specific Columns
|
[
"",
"sql",
"sql-server",
""
] |
Suppose I have the following table:
```
Product A Product B
Water Soda
Water Soda
Water Eggs
Water Apples
```
I'm trying to count the product combinations and the number of each product so that my results are:
```
Product A Product B Combination Product A Count Product B Count
Water Soda 2 4 2
Water Eggs 1 4 1
Water Apples 1 4 1
```
Unfortunately, once I group the products, I can't get the individual counts. Any help would be appreciated.
```
SELECT [ProductA], [ProductB], Count(*) as ComboCount
FROM [ProductCombinations]
GROUP BY [ProductA], [ProductB]
Order by ComboCount Desc
```
|
This works very well and is clean I think:
```
SELECT DISTINCT [Product A], [Product B],
COUNT(*) OVER (PARTITION BY [Product A], [Product B]) ComboCount,
COUNT(*) OVER (PARTITION BY [Product A]) [Product A Count],
COUNT(*) OVER (PARTITION BY [Product B]) [Product B Count]
FROM [ProductCombinations]
ORDER BY ComboCount DESC
```
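The window-function version can be checked with Python's `sqlite3` (requires SQLite 3.25+ for window functions; column names shortened, data taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pc(a TEXT, b TEXT)")
con.executemany("INSERT INTO pc VALUES(?, ?)",
                [("Water", "Soda"), ("Water", "Soda"),
                 ("Water", "Eggs"), ("Water", "Apples")])

# Each COUNT(*) OVER sees a different partition, so one DISTINCT pass
# yields the combo count plus the per-product counts
rows = con.execute("""
    SELECT DISTINCT a, b,
           COUNT(*) OVER (PARTITION BY a, b) AS combo,
           COUNT(*) OVER (PARTITION BY a) AS a_count,
           COUNT(*) OVER (PARTITION BY b) AS b_count
    FROM pc ORDER BY combo DESC, b
""").fetchall()
print(rows)
```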
|
You could count each product independently, and join those to your current query:
```
SELECT p.[ProductA], p.[ProductB], COUNT(*) AS ComboCount,
       a.ProductA_Count, b.ProductB_Count
FROM [ProductCombinations] p
JOIN (SELECT [ProductA], COUNT(*) AS ProductA_Count
      FROM [ProductCombinations]
      GROUP BY [ProductA]) a ON p.[ProductA] = a.[ProductA]
JOIN (SELECT [ProductB], COUNT(*) AS ProductB_Count
      FROM [ProductCombinations]
      GROUP BY [ProductB]) b ON p.[ProductB] = b.[ProductB]
GROUP BY p.[ProductA], p.[ProductB], a.ProductA_Count, b.ProductB_Count
ORDER BY ComboCount DESC
```
|
SQL count items outside of group by filter
|
[
"",
"sql",
"sql-server",
"select",
"count",
"group-by",
""
] |