So, I'm practicing for an exam *(high school level)*, and although we have never been taught SQL, it is necessary to know a little when handling **MS Access**.
The task is to select the IDs of areas whose names do not correspond with the names of the towns they belong to.
In the solution was the following example:
```
SELECT name
FROM area
WHERE id not in (SELECT areaid
FROM area, town, conn
WHERE town.id = conn.townid
AND area.id = conn.areaid AND
area.name like "*"+town.name+"*");
```
*It would be the same with INNER JOINS, just stating that, because Access makes the connection between tables that way.*
It works perfectly (well, it was in the solution), but what I don't get is why we need the "not in" part, and why we can't just use "not like" instead of "like" and do the query in one step.
I rewrote it that way (without the "not in" part) and it gave a totally different result. If I changed "like" to "not like", the result wasn't the opposite, but just a bunch of mixed data. Why? How does that work? Could someone please explain?
**Edit (after best answer):** It was more of a theoretical question on how SQL queries work, and did not need a concrete solution, but an explanation of the process. (Because of this I feel the sql tag does belong here.)
|
One thing that would create a difference is to consider this example
```
areaid areaname townname
1 AA AA
1 AA BB
```
So your first query would exclude both records from the outcome, because the inner query identifies `areaid = 1` as among those to be excluded. Therefore, neither record shows up in the output.
Using `not like`, however, would exclude the first record and return the second one, because the first record fails the `not like` condition while the second satisfies it.
In other words, the first query excludes any area (and its corresponding records) that has at **least one** townname that is like its areaname. The second approach excludes only the individual rows where areaname is like townname, but doesn't necessarily exclude all records for that area.
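To make this concrete, here is a small SQLite sketch of the example data above (a sketch, not Access: the `"*"` wildcard becomes `'%'`, and the implicit joins are written as explicit ones):

```python
import sqlite3

# Miniature version of the question's schema: one area (id 1, 'AA')
# connected to two towns ('AA' and 'BB').
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE area(id INTEGER, name TEXT);
    CREATE TABLE town(id INTEGER, name TEXT);
    CREATE TABLE conn(areaid INTEGER, townid INTEGER);
    INSERT INTO area VALUES (1, 'AA');
    INSERT INTO town VALUES (1, 'AA'), (2, 'BB');
    INSERT INTO conn VALUES (1, 1), (1, 2);
""")

# NOT IN: area 1 matches town 'AA' in the subquery, so the whole area
# is excluded -- no rows come back.
not_in = con.execute("""
    SELECT name FROM area
    WHERE id NOT IN (SELECT conn.areaid
                     FROM area JOIN conn ON area.id = conn.areaid
                               JOIN town ON town.id = conn.townid
                     WHERE area.name LIKE '%' || town.name || '%')
""").fetchall()

# NOT LIKE: the (area 'AA', town 'BB') pairing still qualifies, so the
# area comes back anyway -- one non-matching town is enough.
not_like = con.execute("""
    SELECT DISTINCT area.name
    FROM area JOIN conn ON area.id = conn.areaid
              JOIN town ON town.id = conn.townid
    WHERE area.name NOT LIKE '%' || town.name || '%'
""").fetchall()

print(not_in)    # []
print(not_like)  # [('AA',)]
```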
|
The reason is that there can be more than one town in an area, right?
So if there is a town in an area that has a similar name, then that area will be found in the LIKE subquery.
If there is another town in the SAME AREA that does not have a similar name, then that area will ALSO be found in the NOT LIKE subquery.
So the same area can be returned whether you use LIKE or NOT LIKE, because of the one-to-many relationship to towns.
Make sense?
|
SQL: Not Like produces different results than what would be Like's opposite
|
[
"",
"sql",
"database",
"ms-access",
""
] |
I'm collecting data between two dates, 01/12/2014 and 31/12/2014, but my SQL data type is nvarchar.
Is my query right?
```
SELECT * from customer where date >= convert(datetime, '01/12/2014', 105)
AND date <= convert(datetime, '31/12/2014', 105)
```
Result
```
Msg 242, Level 16, State 3, Line 1
The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value.
```
Can anyone solve this problem?
|
As far as I know, you must separate the parts of a date with "-", not "/", in format 105. Here is an example:
```
SELECT convert(datetime, '23-10-2016', 105) -- dd-mm-yyyy
```
so you must rewrite your code as:
```
SELECT * from customer where date >= convert(datetime, '01-12-2014', 105)
AND date <= convert(datetime, '31-12-2014', 105)
```
|
The format your strings are in, `'dd/mm/yyyy'`, is 103, not 105 (which is `'dd-mm-yyyy'`). So, simply use the correct format:
```
SELECT *
FROM customer
WHERE [date] >= CONVERT(datetime, '01/12/2014', 103)
AND [date] <= CONVERT(datetime, '31/12/2014', 103)
```
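As a quick sanity check of the day-first interpretation, the same strings parse cleanly when treated as dd/mm/yyyy (Python here, since the point is independent of SQL Server):

```python
from datetime import datetime

# '31/12/2014' only makes sense day-first: there is no 31st month,
# which is exactly why the wrong style produced an out-of-range error.
start = datetime.strptime("01/12/2014", "%d/%m/%Y")
end = datetime.strptime("31/12/2014", "%d/%m/%Y")

print(start.date(), end.date())  # 2014-12-01 2014-12-31
```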
|
Convert nvarchar to date and Get data between two date
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have 2 tables:
```
DECLARE @MASTER TABLE (MAST_ID INT,
MAST_NAME NVARCHAR(10),
IS_ACTIVE CHAR(1)
)
INSERT INTO @MASTER
VALUES (1, 'MAST1', 'A'), (2, 'MAST2', 'I'),
(3, 'MAST3', 'A'), (4, 'MAST4', 'A')
SELECT * FROM @MASTER
DECLARE @CHILD TABLE (CHD_ID INT,
MAST_ID INT,
CHD_NAME NVARCHAR(10),
IS_ACTIVE CHAR(1)
)
INSERT INTO @CHILD
VALUES (1, 1, 'CHD1', 'I'), (2, 2, 'CHD2', 'A'),
(3, 4, 'CHD3', 'A'), (4, 4, 'CHD4', 'I')
SELECT * FROM @CHILD
```
1. I need all master table data that are active and have active child data
2. I need all active children for the above master data
Output should be like below

Thanks for help.
|
```
select * from @MASTER A where is_active = 'A' and
exists (select 1 from @child where is_active = 'A' and mast_id = A.mast_id);
```
and
```
select * from @child A where is_active = 'A' and
exists (select 1 from @master where mast_id = A.mast_id and is_active = 'A');
```
Fiddle - <http://sqlfiddle.com/#!6/7a59c/4>
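A quick sanity check of the two `EXISTS` queries, sketched with SQLite (plain tables instead of T-SQL's `@table` variables, names lowercased):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE master(mast_id INT, mast_name TEXT, is_active CHAR(1));
    CREATE TABLE child(chd_id INT, mast_id INT, chd_name TEXT, is_active CHAR(1));
    INSERT INTO master VALUES (1,'MAST1','A'),(2,'MAST2','I'),
                              (3,'MAST3','A'),(4,'MAST4','A');
    INSERT INTO child VALUES (1,1,'CHD1','I'),(2,2,'CHD2','A'),
                             (3,4,'CHD3','A'),(4,4,'CHD4','I');
""")

# Active masters that have at least one active child
masters = con.execute("""
    SELECT mast_id, mast_name FROM master m
    WHERE is_active = 'A'
      AND EXISTS (SELECT 1 FROM child c
                  WHERE c.is_active = 'A' AND c.mast_id = m.mast_id)
""").fetchall()

# Active children belonging to an active master
children = con.execute("""
    SELECT chd_id, chd_name FROM child c
    WHERE is_active = 'A'
      AND EXISTS (SELECT 1 FROM master m
                  WHERE m.is_active = 'A' AND m.mast_id = c.mast_id)
""").fetchall()

print(masters)   # [(4, 'MAST4')]
print(children)  # [(3, 'CHD3')]
```

Only master 4 survives: master 1 is active but its only child is inactive, master 3 has no children at all.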
|
You need a subquery to obtain the `MAST_ID` values from the `@CHILD` table where `IS_ACTIVE = 'A'`, and then filter `@MASTER` for `IS_ACTIVE = 'A'` and those `MAST_ID` values:
```
SELECT M.*
FROM @MASTER M
WHERE IS_ACTIVE = 'A'
AND MAST_ID IN (SELECT MAST_ID FROM @CHILD C WHERE IS_ACTIVE = 'A')
```
|
Looking for master/child table database on condition?
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to generate Fibonacci series using below query (recursive WITH clause).
```
WITH X(Pnbr,Cnbr) AS
(SELECT 0, 1 FROM dual
UNION ALL
SELECT X.Cnbr, X.Pnbr+X.Cnbr FROM X
WHERE X.Cnbr<50
)
SELECT * FROM X
```
But I am getting this error
```
ORA-32044: cycle detected while executing recursive WITH query
```
Why?
|
Your data at the first iteration would be
```
PNBR CNBR
0 , 1
1 , 1 + 0
```
So CNBR is 1 in the first two rows: a cycle is detected!
The `CONNECTING` condition has to be **unique**!
> So you probably need to maintain an iterator.
`ROWNUM` is what I used here for it.
```
WITH X(iter,Pnbr,Cnbr) AS
(SELECT 1,0, 1 FROM dual
UNION ALL
SELECT iter + rownum, X.Cnbr, X.Pnbr+X.Cnbr FROM X
WHERE iter < 50
)
SELECT iter,Pnbr FROM X;
```
[DEMO](http://sqlfiddle.com/#!4/9eecb7db59d16c/512)
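For comparison, the same iterator idea ports to any recursive CTE. Here is a SQLite sketch (a sketch, not Oracle: SQLite does not perform value-cycle detection, but the iterator still shows how each row becomes unique):

```python
import sqlite3

con = sqlite3.connect(":memory:")
rows = con.execute("""
    WITH RECURSIVE fib(iter, pnbr, cnbr) AS (
        SELECT 1, 0, 1
        UNION ALL
        -- the iterator column makes every row distinct even though
        -- the Fibonacci values repeat early on (0, 1, 1, ...)
        SELECT iter + 1, cnbr, pnbr + cnbr FROM fib WHERE iter < 10
    )
    SELECT iter, pnbr FROM fib
""").fetchall()

print([p for _, p in rows])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```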
|
I agree with the diagnosis in M. Ravisankar's Answer (from 2015), but not with the remedy.
To handle precisely the situation presented in the original post, recursive CTE offers the `CYCLE` clause. In this problem, while there will be repeated values in the `Pnbr` column as well as in the `Cnbr` column, when considered separately, there are no repeated values (duplicates) in the **composite** `(Pnbr, Cnbr)`.
So, the query can be written like this:
```
WITH X(Pnbr,Cnbr) AS
(SELECT 0, 1 FROM dual
UNION ALL
SELECT X.Cnbr, X.Pnbr+X.Cnbr FROM X
WHERE X.Cnbr<50
)
cycle Pnbr, Cnbr set cycle to 'Y' default 'N'
SELECT Pnbr, Cnbr FROM X
```
Notice the `cycle` clause (second to last line), and also `SELECT Pnbr, Cnbr` as opposed to `SELECT *` (if we selected \* here, we would also get the `cycle` column, which we don't need).
|
Trying to generate Fibonacci series using recursive WITH clause, getting error: 'cycle detected while executing'
|
[
"",
"sql",
"oracle",
"recursion",
""
] |
Let's say I have two tables:
```
user (user_name varchar(50), project_name varchar(50))
project (project_name varchar(50), project_cost(integer))
```
I have a query that returns results which are "de facto desired":
```
select u.user_name, p.project_name
from user u, project p
where u.project_name = p.project_name
```
Postgres says that the order of rows is not predictable when ORDER BY is not given. Yet in my local tests, Postgres returns the rows in the same order across repeated runs.
Could you please help me understand what Postgres really does when `order by` is not provided in the query?
I don't have access to all the possible places where the real tables and schema live, so I really need to know what actually happens in order to keep the existing ordering intact.
|
If no `order by` clause is given, postgres (and any other reasonable database, for that sake), should return the rows in the order it was able to produce them (be it from an internal cache, an index, or directly from the table).
Since the same algorithm is used on the same data, it isn't surprising you're getting the same rows in the same order. However, this does not mean you should rely on this ordering. If you do something to change the data's layout on the disk (e.g., back it up and restore it, or even rebuild the tables' indexes), you're likely to get a different ordering.
|
To know what the DBMS really does, one should look at the PLAN; the output order will depend on it. There are two things to remember, however: first, if the plan includes a 'full (heap) table scan', then the order is undefined (the DBMS may freely reorder heap data); second, the plan may change significantly if you change your SQL statement or update DB stats. This is why you should not rely on the output order's stability in the long run.
|
How does postgres order the results when ORDER BY is not provided
|
[
"",
"sql",
"postgresql",
"select",
"sql-order-by",
""
] |
I have a table with columns D and E.
I want to get D, the distinct E values within each D, and the count of the total number of entries for each D. How do I write SQL for this?
Data:
```
D | E
-----
1 | K
1 | K
1 | A
2 | S
2 | S
2 | S
2 | S
```
Desired o/p:
```
D | E | Total_E_in_D
----------------------
1 | K | 3
1 | A | 3
2 | S | 4
```
---
```
SELECT D,E,Count(E in each D)
FROM table
GROUP BY D,E.
```
The last column should give me the total number of entries for each D.
|
The specific answer to the question is:
```
select dept, count(*) as numemployees, count(distinct emp) as numDistinctEmployees
from d1
group by dept;
```
This just seems quite unusual, because it assumes that an employee can be in the same department more than once.
EDIT:
Strange data format, but just use aggregation with analytic functions:
```
select dept, emp, sum(count(*)) over (partition by dept) as numEmployees
from d1
group by dept, emp;
```
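The analytic-function idea can be sanity-checked in SQLite. This sketch splits the aggregation into a subquery (equivalent to `sum(count(*)) over (...)`, but portable), using the question's column names D and E:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(D INTEGER, E TEXT);
    INSERT INTO t VALUES (1,'K'),(1,'K'),(1,'A'),
                         (2,'S'),(2,'S'),(2,'S'),(2,'S');
""")

# Inner query: one row per (D, E) with its count; the window SUM then
# adds those counts back up per D without collapsing the rows.
rows = con.execute("""
    SELECT D, E, SUM(cnt) OVER (PARTITION BY D) AS total_e_in_d
    FROM (SELECT D, E, COUNT(*) AS cnt FROM t GROUP BY D, E)
    ORDER BY D, E DESC
""").fetchall()

print(rows)  # [(1, 'K', 3), (1, 'A', 3), (2, 'S', 4)]
```

This matches the desired output in the question exactly.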
|
You can group on the department and the employee, and join in a query where you group on the department to count the employees:
```
select
e.Dept,
  e.Emp,
  d.EmpCount
from
table e
inner join (
select
Dept,
count(distinct Emp) as EmpCount
from
table
group by
Dept
) d on d.Dept = e.Dept
group by
  e.Dept, e.Emp, d.EmpCount
```
You could also use a subquery to count the employees:
```
select
e.Dept,
e.Emp,
(
select
count(distinct Emp)
from
table d
where
d.Dept = e.Dept
) as EmpCount
from
table e
group by
e.Dept, e.Emp
```
|
SQL - GROUP BY and COUNT
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
I have a query on tables that use the InnoDB storage engine.
I want to optimize it; it takes too much time to execute. I have 5 million rows in my database, and the query currently takes 250 seconds.
```
INSERT INTO dynamicgroups (adressid)
SELECT SQL_NO_CACHE DISTINCT(addressid) FROM (
SELECT cluster_0.addressid FROM (
SELECT DISTINCT addressid FROM (
SELECT group_all.addressid FROM (
SELECT g.addressid FROM table2.635_emadresmgroups g
INNER JOIN table2.emaildata f_0
ON f_0.addressid = g.addressid
WHERE (f_0.birthday > date(DATE_SUB(NOW(),INTERVAL 18 MONTH))
AND f_0.birthday < CURDATE() )
) group_all
) AS groups
) AS cluster_0
INNER JOIN(
SELECT DISTINCT addressid FROM (
SELECT group_all.addressid FROM (
SELECT g.addressid FROM table2.635_emadresmgroups g
INNER JOIN table2.emaildata f_0
ON f_0.addressid = g.addressid
WHERE (marriage_date = ''
OR marriage_date = '1900-01-01'
OR marriage_date = '0000-00-00' )
) group_all
) AS groups
) AS cluster_1 ON cluster_1.addressid = cluster_0.addressid
INNER JOIN(
SELECT DISTINCT addressid FROM (
SELECT group_all.addressid FROM (
SELECT g.addressid FROM table2.635_emadresmgroups g
INNER JOIN table2.emaildata f_0
ON f_0.addressid = g.addressid
WHERE (f_0.city = '34' )
) group_all
) AS groups
) AS cluster_2 ON cluster_2.addressid = cluster_1.addressid
) AS t
```
|
Your queries all seem to be variations of this query:
```
SELECT g.addressid
FROM table2.635_emadresmgroups g INNER JOIN
table2.emaildata f_0
ON f_0.addressid = g.addressid
WHERE (f_0.birthday > date(DATE_SUB(NOW(),INTERVAL 18 MONTH)) AND f_0.birthday < CURDATE() )
```
I would suggest approaching this using `group by` and `having`:
```
SELECT g.addressid
FROM table2.635_emadresmgroups g INNER JOIN
table2.emaildata f_0
ON f_0.addressid = g.addressid
GROUP BY g.addressid
HAVING SUM(f_0.birthday > date(DATE_SUB(NOW(), INTERVAL 18 MONTH)) AND f_0.birthday < CURDATE() ) > 0 AND
SUM(marriage_date = '' OR marriage_date = '1900-01-01' OR marriage_date = '0000-00-00' ) > 0 AND
SUM(f_0.city = '34' ) > 0;
```
Depending on the volume of data, filtering before the `group by` can also help:
```
SELECT g.addressid
FROM table2.635_emadresmgroups g INNER JOIN
table2.emaildata f_0
ON f_0.addressid = g.addressid
WHERE (f_0.birthday > date(DATE_SUB(NOW(), INTERVAL 18 MONTH)) AND f_0.birthday < CURDATE() ) OR
(marriage_date = '' OR marriage_date = '1900-01-01' OR marriage_date = '0000-00-00' ) OR
(f_0.city = '34' )
GROUP BY g.addressid
HAVING SUM(f_0.birthday > date(DATE_SUB(NOW(), INTERVAL 18 MONTH)) AND f_0.birthday < CURDATE() ) > 0 AND
SUM(marriage_date = '' OR marriage_date = '1900-01-01' OR marriage_date = '0000-00-00' ) > 0 AND
SUM(f_0.city = '34' ) > 0;
```
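The `HAVING SUM(condition) > 0` trick relies on boolean expressions evaluating to 1/0. A reduced two-condition sketch in SQLite (which shares that behaviour with MySQL), using made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emaildata(addressid INT, city TEXT, marriage_date TEXT);
    INSERT INTO emaildata VALUES
        (1, '34', ''),            -- matches both conditions
        (1, '35', '2001-05-01'),
        (2, '34', '2001-05-01');  -- matches only the city condition
""")

# SUM(condition) > 0 keeps an addressid when at least one of its rows
# satisfies each condition.
rows = con.execute("""
    SELECT addressid
    FROM emaildata
    GROUP BY addressid
    HAVING SUM(city = '34') > 0
       AND SUM(marriage_date = '') > 0
""").fetchall()

print(rows)  # [(1,)]
```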
|
Even though MySQL's EXPLAIN isn't as fully featured as in some other databases, I'd suggest you use it on your query.
After that you can analyse the result EXPLAIN gives and decide which columns should be indexed.
For more information I'd suggest viewing these sources:
[MySQL syntax: EXPLAIN](http://dev.mysql.com/doc/refman/5.0/en/explain.html)
[MySQL using: EXPLAIN](http://dev.mysql.com/doc/refman/5.0/en/using-explain.html)
Furthermore, the last two selects appear very similar; maybe you can make a temporary table or a view out of them, so that you don't have to run the entire select twice?
|
How to optimize Mysql Query that has 2 Inner Joins with Distinct? ( InnoDB )
|
[
"",
"mysql",
"sql",
"performance",
""
] |
I have a table `Subscription`. When I sort records with ORDER BY, I get the wrong result: it does not sort by my column.
my query is :
```
Select Subscriptioncode from Subscription order by Subscriptioncode desc
```
My result looks like:
```
90
80
8
700
73
634
100
```
Why am I getting this result?
> Some rows in SubscriptionCode are NULL.
|
Your column `SubscriptionCode` is currently not a numeric type, and it is likely a text type instead. However, you can `CAST` this column to an `INT` type and then the ordering should work with no problem:
```
Select Subscriptioncode from Subscription
order by CAST(Subscriptioncode AS INT) desc
```
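A SQLite sketch of the difference between text ordering and the `CAST` fix, using the question's values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE subscription(code TEXT);
    INSERT INTO subscription VALUES
        ('90'),('80'),('8'),('700'),('73'),('634'),('100');
""")

# Text sort compares character by character, so '8' beats '700'.
text_order = [r[0] for r in con.execute(
    "SELECT code FROM subscription ORDER BY code DESC")]

# Casting to INT restores numeric ordering.
num_order = [r[0] for r in con.execute(
    "SELECT code FROM subscription ORDER BY CAST(code AS INT) DESC")]

print(text_order)  # ['90', '80', '8', '73', '700', '634', '100']
print(num_order)   # ['700', '634', '100', '90', '80', '73', '8']
```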
|
You should declare the **Subscriptioncode** column's **datatype as INT**.
Example
```
create table Subscription(
Subscriptioncode int)
insert into Subscription values('90')
insert into Subscription values('80')
insert into Subscription values('8')
insert into Subscription values('700')
insert into Subscription values('73')
insert into Subscription values('634')
insert into Subscription values('100')
Select Subscriptioncode from Subscription order by Subscriptioncode desc
```
OUTPUT:
```
Subscriptioncode
700
634
100
90
80
73
8
```
**Note:**
If you declare the datatype as anything other than **INT** (e.g. varchar), the values are sorted as text, in alphabetical order, which is why you are getting the wrong results. So choose the correct datatype.
|
order by sort wrong records
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm new to SQL and I need to create a table with a specified field format. How do I add a CHECK condition that ensures the input is formatted as, e.g.,
[LLLDD]
where L is a letter and D is a digit?
|
Try this: <http://sqlfiddle.com/#!6/3974b>
```
create table test (
field1 char(5),
check (field1 like '[a-z][a-z][a-z][0-9][0-9]')
);
insert into test values ('ttt09'); --this will succeed
```
If you were to change the insert to:
```
insert into test values ('testi'); -- this will fail
insert into test values ('12345'); -- this will fail
```
|
Try this if you are adding the constraint on a new table
```
CONSTRAINT ck_data_checker CHECK ([columnName] LIKE ('[A-Z][A-Z][A-Z][0-9][0-9]'))
```
Try this if you are adding the constraint on existing table
```
ALTER TABLE tableName
ADD CONSTRAINT ck_data_checker CHECK ([columnName] LIKE ('[A-Z][A-Z][A-Z][0-9][0-9]'))
```
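Note that the `[a-z]` classes inside `LIKE` are a T-SQL extension. In SQLite, for instance, the same check can be sketched with `GLOB`, which does support character classes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE test (
        field1 TEXT,
        -- three letters followed by two digits (GLOB is case-sensitive,
        -- hence the explicit A-Za-z range)
        CHECK (field1 GLOB '[A-Za-z][A-Za-z][A-Za-z][0-9][0-9]')
    )
""")

con.execute("INSERT INTO test VALUES ('ttt09')")  # succeeds

try:
    con.execute("INSERT INTO test VALUES ('12345')")  # violates the CHECK
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```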
|
How to specify input format in SQL create table?
|
[
"",
"sql",
"sql-server",
"create-table",
""
] |
Here is the table-
```
time
______
2:00 am
1:05 PM
12:00 Pm
11:10pM
```
Here is what the result should look like-
```
time
______
2:00 am
1:05 PM
12:00 Pm
11:10 pM
```
How can I do it?
If you would like to try-
```
create table tmp (
time varchar2(50)
);
insert into tmp values (' 2:00 am');
insert into tmp values ('1:05 PM');
insert into tmp values (' 12:00 Pm');
insert into tmp values (' 11:10pM');
```
|
You could use **REGEXP\_REPLACE** to modify the data to your desired pattern.
For example,
```
SQL> SELECT TIME,
2 regexp_replace(trim(' ' FROM TIME),
3 '^([[:digit:]]{1,2})(:)([[:digit:]]{1,2})[[:space:]]*([[:alpha:]]{2})$',
4 '\1\2\3 \4') str
5 FROM tmp;
TIME STR
--------------- ----------
2:00 am 2:00 am
1:05 PM 1:05 PM
12:00 Pm 12:00 Pm
11:10pM 11:10 pM
```
The above **REGEXP\_REPLACE** query matches the following pattern:
1. `([[:digit:]]{1,2})`
2. `(:)`
3. `([[:digit:]]{1,2})`
4. `([[:alpha:]]{2})`
Each set of parentheses is a capture group, and the match is replaced with `'\1\2\3 \4'`: groups 1, 2, and 3 together, followed by a space and then group 4.
|
You can use `regexp_replace` for formatting complex expressions:
```
select regexp_replace(time,
'^[[:space:]]*([[:digit:]]{1,2}:[[:digit:]]{1,2})[[:space:]]*([aApP][mM]).*$',
'\1 \2') from tmp;
```
SQL fiddle: <http://sqlfiddle.com/#!4/ba037f/2>.
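The same transformation, sketched with Python's `re.sub` (the POSIX classes replaced by `\d`, `\s` and an explicit `[aApP][mM]`):

```python
import re

# Trim leading spaces, capture the h:mm part and the am/pm part
# (any capitalization), and rejoin them with exactly one space.
def fmt(t):
    return re.sub(r'^\s*(\d{1,2}:\d{1,2})\s*([aApP][mM]).*$', r'\1 \2', t)

rows = [' 2:00 am', '1:05 PM', ' 12:00 Pm', ' 11:10pM']
print([fmt(t) for t in rows])
# ['2:00 am', '1:05 PM', '12:00 Pm', '11:10 pM']
```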
|
How can I format result of the SQL in my case?
|
[
"",
"sql",
"oracle",
""
] |
I have been forced into using **PostgreSQL**. I have read something about it but still it's new and I have no idea why I'm getting this error:
`SQLSTATE[42703]: Undefined column: 7 ERROR: column t0.id does not exist LINE 1: SELECT t0.id AS id1, t0.user AS user2, t0.email AS ma...`
I checked that id column exists for thousand times (almost literally).
I asked my friend and he told me that there is no auto increment in PostgreSQL and I have to use sequences. I found that **Doctrine** generates sequences automatically when I set `@GeneratedValue` to auto (which is default). And yes, those sequences are there.
Here is my entity:
```
<?php
/**
* @ORM\Entity
*/
class User
{
/**
* @ORM\Id
* @ORM\Column(name="id", type="integer")
* @ORM\GeneratedValue(strategy="AUTO")
*/
protected $id;
/** @ORM\Column(type="string", nullable=true) */
protected $name;
// some more properties similar to $name
```
In other question ([PostgreSQL column 'foo' does not exist](https://stackoverflow.com/questions/10200769/postgresql-column-foo-does-not-exist)) they wanted to see output of `\d table`. Here is it:
```
northys=# \d user
Table "public.user"
Column | Type | Modifiers
----------+------------------------+---------------------------------
id | integer | not null
name | character varying(255) | default NULL::character varying
email | character varying(255) | not null
password | character varying(255) | not null
Indexes:
"user_pkey" PRIMARY KEY, btree (id)
"uniq_8d93d649e7927c74" UNIQUE, btree (email)
Referenced by:
TABLE "child" CONSTRAINT "fk_22b3542941807e1d" FOREIGN KEY (teacher_id) REFERENCES "user"(id)
TABLE "event" CONSTRAINT "fk_3bae0aa7a76ed395" FOREIGN KEY (user_id) REFERENCES "user"(id)
```
I'm on **PostgreSQL** 9.4.1 and I'm not using any database-specific plugins for Doctrine. Do you have any ideas why this doesn't work? I'm stuck and have been trying to find a solution for days.
|
`user` [is a reserved word](http://www.postgresql.org/docs/current/static/sql-keywords-appendix.html). It's an alias for `current_user`.
```
regress=> SELECT * FROM user;
current_user
--------------
myusername
(1 row)
```
If you want to use `user` as a table name, since it's a reserved word you must *quote the identifier*, e.g.:
```
SELECT id FROM "user";
```
Your ORM should be quoting all identifiers, or at least reserved words. Failure to do so is a bug in your ORM. You can work around the ORM bug by using a non-reserved word as a table name.
I think it's a bit of a wart in `psql` that it automatically quotes identifiers you pass to backslash commands. So `\d user` will work but `select * from user` won't. You should have to write `\d "user"`. The same issue arises with case sensitivity where `\d MyTable` works but `SELECT * FROM MyTable` won't work, you have to write `SELECT * FROM "MyTable"`.
---
It'd be nice to give a `HINT` message about this in the error. Unfortunately the parser and planner doesn't really have enough information at the time the "column does not exist" error gets generated to know that you originally wrote a keyword, all it sees is a function scan at that point.
|
Quote the table name, because PostgreSQL treats some names, such as `user`, as system identifiers. Instead of "user", write "`user`":
* @ORM\Table(name="user")
becomes
* @ORM\Table(name="`user`")
|
Doctrine column id does not exist on PostgreSQL
|
[
"",
"sql",
"database",
"postgresql",
"doctrine",
""
] |
I have a table `invoices` with a field `invoice_number`. This is what happens when I execute `select invoice_number from invoices`:
```
invoice_number
--------------
1
2
3
5
6
10
11
```
I want a SQL that gives me the following result:
```
gap_start | gap_end
4 | 4
7 | 9
```
How can I write SQL to perform such a query?
I am using PostgreSQL.
|
The name of this problem is the "Gaps and Islands problem" which can be done with any [modern SQL](http://use-the-index-luke.com/blog/2015-02/modern-sql), using [window functions](http://www.postgresql.org/docs/current/static/tutorial-window.html):
```
select invoice_number + 1 as gap_start,
next_nr - 1 as gap_end
from (
select invoice_number,
lead(invoice_number) over (order by invoice_number) as next_nr
from invoices
) nr
where invoice_number + 1 <> next_nr;
```
SQLFiddle: <http://sqlfiddle.com/#!15/1e807/1>
Walkthrough example here using row\_number over partition and interval: [Postgres Consecutive Days, gaps and islands, Tabibitosan](https://stackoverflow.com/questions/67529168/postgres-consecutive-days-gaps-and-islands-tabibitosan)
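The `lead()` approach is easy to verify; here is a SQLite sketch with the question's invoice numbers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE invoices(invoice_number INT);
    INSERT INTO invoices VALUES (1),(2),(3),(5),(6),(10),(11);
""")

# For each number, look at the next one; a gap exists wherever the
# successor isn't exactly current + 1.
gaps = con.execute("""
    SELECT invoice_number + 1 AS gap_start,
           next_nr - 1 AS gap_end
    FROM (SELECT invoice_number,
                 LEAD(invoice_number) OVER (ORDER BY invoice_number) AS next_nr
          FROM invoices) nr
    WHERE invoice_number + 1 <> next_nr
""").fetchall()

print(sorted(gaps))  # [(4, 4), (7, 9)]
```

Note the last row (11, NULL) drops out automatically, since the comparison against NULL is not true.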
|
We can use a simpler technique to get all missing values first, by joining on a generated sequence column like so:
```
select series
from generate_series(1, 11, 1) series
left join invoices on series = invoices.invoice_number
where invoice_number is null;
```
This gets us the series of missing numbers, which can be useful on its own in some cases.
To get the gap start/end range, we can instead join the source table with itself.
```
select invoices.invoice_number + 1 as start,
min(fr.invoice_number) - 1 as stop
from invoices
left join invoices r on invoices.invoice_number = r.invoice_number - 1
left join invoices fr on invoices.invoice_number < fr.invoice_number
where r.invoice_number is null
and fr.invoice_number is not null
group by invoices.invoice_number,
r.invoice_number;
```
dbfiddle: <https://dbfiddle.uk/?rdbms=postgres_14&fiddle=32c5f3c021b0f1a876305a2bd3afafc9>
This is probably less optimised than the solution above, but it could be useful in SQL engines that don't support the `lead()` function.
---
Full credit goes to this excellent page in SILOTA docs:
<http://www.silota.com/docs/recipes/sql-gap-analysis-missing-values-sequence.html>
I highly recommend reading it, as it explains the solution step by step.
|
Find gaps of a sequence in SQL without creating additional tables
|
[
"",
"sql",
"postgresql",
"gaps-and-islands",
""
] |
I am trying to do a JOIN on two tables. Each table contains a UPC varchar2(20) column from a different data source.
What makes this JOIN a bit difficult is that the UPC values vary in size and are sometimes padded with leading zeros, sometimes not. All contain a trailing check digit.
For example:
```
Table 1 Table 2
UPC "00000123456789" "123456789"
"234567890" "234567890"
"00000003456789" "00000003456789"
"3456799" "00000003456799"
```
My thoughts are to convert each to a long and then compare, or to pad with leading zeros, or to do a contains-style match.
What is the best way to do the join in SQL?
|
You can try this:
```
select * from
table1 inner join table2
on (CAST(CAST(table1.UPC AS BIGINT) AS VARCHAR))
=(CAST(CAST(table2.UPC AS BIGINT) AS VARCHAR))
```
**[SQL FIDDLE DEMO](http://sqlfiddle.com/#!6/addd1/5)**
or
```
select * from
table1 inner join table2
on (RIGHT(table1.UPC,(LEN(table1.UPC) - PATINDEX('%[^0]%',table1.UPC)) + 1))
=(RIGHT(table2.UPC,(LEN(table2.UPC) - PATINDEX('%[^0]%',table2.UPC)) + 1))
```
**[SQL FIDDLE DEMO](http://sqlfiddle.com/#!6/addd1/4)**
|
This is not the highest-performance option, but it is the simplest:
```
SELECT
T1.UPC,
T2.Column1
FROM
myTable T1
INNER JOIN myTable T2 ON
RIGHT(REPLICATE('0', 20) + T2.UPC, 20) = RIGHT(REPLICATE('0', 20) + T1.UPC, 20)
```
Alternatively, you can create computed columns for these padded UPCs, and place indexes upon them. However, this comes with a slew of restrictions. I have not been able to use this in the real world very many times.
[Indexes on Computed Columns (MSFT)](https://msdn.microsoft.com/en-us/library/ms189292.aspx)
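A SQLite sketch of the "compare as integers" idea (the CAST drops the leading zeros on both sides before the join), using the question's sample values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1(upc TEXT);
    CREATE TABLE t2(upc TEXT);
    INSERT INTO t1 VALUES ('00000123456789'),('234567890'),
                          ('00000003456789'),('3456799');
    INSERT INTO t2 VALUES ('123456789'),('234567890'),
                          ('00000003456789'),('00000003456799');
""")

rows = con.execute("""
    SELECT t1.upc, t2.upc
    FROM t1 JOIN t2
      ON CAST(t1.upc AS INTEGER) = CAST(t2.upc AS INTEGER)
    ORDER BY CAST(t1.upc AS INTEGER)
""").fetchall()

print(rows)
# [('00000003456789', '00000003456789'), ('3456799', '00000003456799'),
#  ('00000123456789', '123456789'), ('234567890', '234567890')]
```

All four pairs match despite the differing zero padding. (This assumes the values are purely numeric and fit in a 64-bit integer; a UPC with non-digit characters would need the string-padding approach instead.)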
|
Best Way to do a JOIN on Product UPC varchar(20)?
|
[
"",
"sql",
"sql-server",
""
] |
I have this table
```
create table customers(id int, cust text, email id, cust_grp int);
```
and I use the following select query to get 3 results in 3 different conditions
```
select count(*) as cust_with_email
from customers where email<>'' and cust_grp=101;
result
--------
199
select count(*) as cust_without_email
from customers where email='' and cust_grp=101;
result
--------
3370
select count(*) as cust_total
from customers where cust_grp=101;
result
--------
3569
```
But now I need to combine these three queries into a single select and the expected output is:
```
custemaildet
---------------------
3569|199|3370
```
|
You can use [`case when`](http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-CONDITIONALS) to filter on the `email` *column* and concatenate the results using the `|` symbol:
```
SELECT count(*) || '|' || count(CASE
WHEN email <> ''
THEN email
END) || '|' || count(CASE
WHEN email = ''
THEN email
END) custemaildet
FROM customers
WHERE cust_grp= 101
```
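A SQLite sketch of the concatenation approach, with made-up rows (three customers in group 101, one of them with an email):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers(id INT, cust TEXT, email TEXT, cust_grp INT);
    INSERT INTO customers VALUES
        (1,'a','a@x.com',101),(2,'b','',101),(3,'c','',101),(4,'d','',102);
""")

# COUNT(CASE ...) counts only the rows where the CASE yields non-NULL,
# giving total | with-email | without-email in one pass.
row = con.execute("""
    SELECT COUNT(*) || '|' ||
           COUNT(CASE WHEN email <> '' THEN 1 END) || '|' ||
           COUNT(CASE WHEN email = '' THEN 1 END) AS custemaildet
    FROM customers
    WHERE cust_grp = 101
""").fetchone()

print(row[0])  # 3|1|2
```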
|
Try this:
```
SELECT count(*), count(email = '' OR NULL), count(email <> '' OR NULL)
FROM customers
WHERE cust_grp = 101;
```
Or, In PG 9.4+:
```
SELECT
count(*),
count(*) FILTER (WHERE email = ''),
count(*) FILTER (WHERE email <> '')
FROM customers
WHERE cust_grp = 101;
```
|
Complex select on a single column
|
[
"",
"sql",
"postgresql",
""
] |
Welcome!
I am currently working in C# and I encountered a rather silly issue. I need to fill a ListBox with some data from a database. The problem is varchar filtering: I need to filter codes and display only the right ones.
Example codes: `RRTT, RRTR, RT12, RQ00, R100, R200, R355, TY44, GG77 etc. Four-character codes.`
I managed to filter only the R-codes with a simple `select * from table1 where code_field like 'R%'`, but I get both R000 and RQ10, and I need the R codes followed by numbers only.
So from example:
```
RRTT, RRTR, RT12, RQ00, R100, R200, R355
```
Only:
```
R100, R200, R355
```
Any ideas what should I add to the `like 'R%'`?
|
In SQL Server, you can make use of wildcards. Here is one approach:
```
where rcode like 'R___' and      -- make sure the code has four characters
      rcode not like 'R%[^0-9]%' -- and no character after the R is a non-digit
```
Or:
```
where rcode like 'R[0-9][0-9][0-9]'
```
In other databases, you would normally do this using regular expressions rather than extensions to `like`.
|
Here is a solution using `like`:
```
SELECT *
FROM (
VALUES ( 'R123'),
( 'R12T' ),
( 'RT12' )
) AS t(test)
where test like 'R[0-9][0-9][0-9]'
```
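For what it's worth, the bracket classes are a T-SQL extension to `LIKE`. In SQLite, for instance, the same filter can be sketched with `GLOB`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(code TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [('RRTT',), ('RRTR',), ('RT12',), ('RQ00',),
                 ('R100',), ('R200',), ('R355',)])

# GLOB supports [0-9] classes; LIKE in SQLite does not.
rows = con.execute(
    "SELECT code FROM t WHERE code GLOB 'R[0-9][0-9][0-9]'").fetchall()

print([r[0] for r in rows])  # ['R100', 'R200', 'R355']
```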
|
SQL Like filtering
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-like",
"varchar",
""
] |
I have a table in MS Access named stockitems; the table structure is:
```
stdate stitems
01-04-2015 Red
02-04-2015 Blue
08-04-2015 Green
01-05-2015 Grey
02-05-2015 Violet
09-05-2015 Purple
04-06-2015 Sky Blue
```
I am using the code below to select records from that table:
```
Dim report As New CrystalDecisions.CrystalReports.Engine.ReportDocument
Dim xs, xs2, cmp As String
xs = date1.Text
xs2 = date2.Text
cmp = salsqkrpt.salcom.Text
Dim cmd1 As OleDbCommand = New OleDbCommand("SELECT * from stockitems where stdate between '" & date1.Text & "'and '" & date2.Text & "'",con)
myDA = New OleDbDataAdapter(cmd1)
myDataSet = New DataSet()
myDA.Fill(myDataSet, "stockitems ")
report.Load(Application.StartupPath & "\stockreport.rpt")
report.SetDataSource(myDataSet.Tables("stockitems "))
stckreport.CrystalReportViewer1.ReportSource = report
```
Here `date1` and `date2` are masked textboxes; the mask is `00-00-0000 (DD-MM-YYYY)`.
When executing this code, the comparison only honours the first two digits (the `dd` part). When I assign `date1=01-04-2015` and `date2=03-05-2015`, the result comes out like:
```
stdate stitems
01-04-2015 Red
02-04-2015 Blue
01-05-2015 Grey
02-05-2015 Violet
```
but my expected result is:
```
stdate stitems
01-04-2015 Red
02-04-2015 Blue
08-04-2015 Green
01-05-2015 Grey
02-05-2015 Violet
```
It doesn't work as a date. Please help me out of this problem; I am a beginner in VB.net. Thanks in advance.
|
MS Access interprets dates as mm/dd/yy, so you are selecting data from January 4th to March 5th. Reformat the dates in your SQL as mm-dd-yyyy. In fact, MS Access generally requires date literals in the format #mm/dd/yyyy# (i.e. with the #'s).
```
Dim dtDate1 as DateTime = DateTime.Parse(date1.text)
Dim dtDate2 as DateTime = DateTime.Parse(date2.text)
Dim cmd1 As OleDbCommand = New OleDbCommand("SELECT * from stockitems where stdate Between #" & _
dtDate1.ToString("MM/dd/yyyy") & "# And #" & _
dtDate2.ToString("MM/dd/yyyy") & "#",con)
```
Your code leaves you open to SQL injection. Validate the dates first, and then either pass them as parameters or format your SQL using DateTime variables (above).
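The mm/dd versus dd/mm ambiguity described above is easy to reproduce outside Access; a small Python illustration:

```python
from datetime import datetime

s = "01-04-2015"  # meant as 1 April 2015 (dd-mm-yyyy)

day_first = datetime.strptime(s, "%d-%m-%Y")    # user's intent
month_first = datetime.strptime(s, "%m-%d-%Y")  # Access's assumption

print(day_first.date())    # 2015-04-01
print(month_first.date())  # 2015-01-04
```

Same string, two valid but very different dates, which is exactly why the BETWEEN range collapses to January-March.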
|
You need properly formatted date expressions in your SQL:
```
Dim xs1 As string
Dim xs2 As string
Dim sql as string
xs1 = Date.Parse(date1.Text).ToString("yyyy'/'MM'/'dd")
xs2 = Date.Parse(date2.Text).ToString("yyyy'/'MM'/'dd")
sql = "select * from stockitems where stdate between #" & xs1 & "# and #" & xs2 & "#"
Dim cmd1 As OleDbCommand = New OleDbCommand(sql, con)
```
|
Query records between two dates using VB.net/MS Access
|
[
"",
"sql",
"vb.net",
"ms-access",
""
] |
I'm using PostgreSQL.
I need to select the **max of each group**. The situation is that the table represents the products sold on each day, and I want to know the top-selling product of each day.
```
SELECT sum(detalle_orden.cantidad) as suma,detalle_orden.producto_id as producto
,to_char(date_trunc('day',orden.fecha AT TIME ZONE 'MST'),'DY') as dia
FROM detalle_orden
LEFT JOIN orden ON orden.id = detalle_orden.order_id
GROUP BY orden.fecha,detalle_orden.producto_id
ORDER BY dia,suma desc
```
It returns:
```
suma producto dia
4 1 FRI
1 2 FRI
5 3 TUE
2 2 TUE
```
**I want to get:**
```
suma producto dia
4 1 FRI
5 3 TUE
```
I want only the top product of each day (the `max(suma)` of each group).
I tried different approaches, like subqueries, but the aggregate function makes things a bit difficult.
|
You can still use `DISTINCT ON` to get this done in a single query level without subquery, because `DISTINCT` is applied after `GROUP BY` and aggregate functions (and after window functions):
```
SELECT DISTINCT ON (3)
sum(d.cantidad) AS suma
, d.producto_id AS producto
, to_char(o.fecha AT TIME ZONE 'MST', 'DY') AS dia
FROM detalle_orden d
LEFT JOIN orden o ON o.id = d.order_id
GROUP BY o.fecha, d.producto_id
ORDER BY 3, 1 DESC NULLS LAST, d.producto_id;
```
### Notes
* This solution returns ***exactly one*** row per `dia` (if available). If multiple products tie for top sales, my arbitrary (but deterministic and reproducible) pick is the one with the smaller `producto_id`.
If you need all peers tying for one day use `rank()` as suggested by @Houari.
* The sequence of events in an SQL `SELECT` query is explained in this related answer:
+ [Best way to get result count before LIMIT was applied](https://stackoverflow.com/questions/156114/best-way-to-get-result-count-before-limit-was-applied-in-php-postgresql/8242764#8242764)
* `date_trunc()` was just noise in the calculation of `dia`. I removed it.
* I added `NULLS LAST` to the descending sort order since it is unclear whether there might be rows with NULL for `suma` in the result:
+ [PostgreSQL sort by datetime asc, null first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492)
* The numbers in `DISTINCT ON` and `GROUP BY` are just a syntactical shorthand notation for convenience. Similar:
+ [PostgreSQL equivalent for MySQL GROUP BY](https://stackoverflow.com/questions/10398528/postgresql-equivalent-for-mysql-group-by/10398558#10398558)
As are the added table aliases (syntactical shorthand notation).
* Basics for `DISTINCT ON`
+ [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
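For readers without Postgres at hand, here is a minimal, hypothetical sketch of the same greatest-n-per-group result using a portable `NOT EXISTS` anti-join instead of `DISTINCT ON` (run against SQLite from Python; the table and data are made up to mirror the question):

```python
import sqlite3

# Toy stand-in for the aggregated result (table/column names are made up)
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ventas (dia TEXT, producto INTEGER, suma INTEGER);
INSERT INTO ventas VALUES
  ('FRI', 1, 4), ('FRI', 2, 1),
  ('TUE', 3, 5), ('TUE', 2, 2);
""")

# One row per day: highest suma, smaller producto wins ties (like DISTINCT ON)
top_per_day = con.execute("""
    SELECT v.suma, v.producto, v.dia
    FROM ventas v
    WHERE NOT EXISTS (
        SELECT 1 FROM ventas x
        WHERE x.dia = v.dia
          AND (x.suma > v.suma
               OR (x.suma = v.suma AND x.producto < v.producto))
    )
    ORDER BY v.dia
""").fetchall()
```

The anti-join keeps a row only when no other row for the same day beats it, which is exactly the `DISTINCT ON ... ORDER BY` pick.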
|
You can (ab)use `SELECT DISTINCT ON` with the appropriate ordering clause. Assuming you made your previous query into a view:
```
SELECT DISTINCT ON (dia) * FROM some_view ORDER BY dia, suma DESC;
```
The `DISTINCT ON (dia)` ensures you retain only one row per day, and the `ORDER BY ... suma DESC` ensures it is the row with the highest total.
|
Select max record of each group on a group by
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"greatest-n-per-group",
""
] |
I have a table like:
```
CREATE TABLE meta.fk_payment1
(
id serial NOT NULL,
settlement_ref_no character varying,
order_type character varying,
fulfilment_type character varying,
seller_sku character varying,
wsn character varying,
order_id character varying,
order_item_id bigint,
....
);
```
I am inserting data from a CSV file where all columns are the same except the id column.
When the CSV file is uploaded more than once, the data is duplicated, but the id (which is the primary key) is not.
So I want to remove all duplicate rows without using the primary key.
I have to do this on a single table.
|
Copy the distinct data to a work table `fk_payment1_copy`. The simplest way to do that is to use `into`:
```
SELECT max(id),settlement_ref_no ...
INTO fk_payment1_copy
from fk_payment1
GROUP BY settlement_ref_no ...
```
delete all rows from `fk_payment1`
```
delete from fk_payment1
```
and copy data from `fk_payment1_copy` table to `fk_payment1`
```
insert into fk_payment1
select id,settlement_ref_no ...
from fk_payment1_copy
```
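A minimal sketch of the same copy/delete/re-insert round trip, using SQLite from Python (SQLite has no `SELECT ... INTO`, so `CREATE TABLE ... AS` stands in for it; the columns are trimmed to a toy subset):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fk_payment1 (id INTEGER PRIMARY KEY, order_id TEXT, wsn TEXT);
INSERT INTO fk_payment1 (order_id, wsn) VALUES
  ('A1', 'w1'), ('A1', 'w1'),  -- same payload, different ids
  ('B2', 'w2');

-- keep one id per distinct payload in a work table
CREATE TABLE fk_payment1_copy AS
  SELECT MAX(id) AS id, order_id, wsn
  FROM fk_payment1
  GROUP BY order_id, wsn;

DELETE FROM fk_payment1;
INSERT INTO fk_payment1 SELECT id, order_id, wsn FROM fk_payment1_copy;
""")
rows = con.execute(
    "SELECT order_id, wsn FROM fk_payment1 ORDER BY order_id").fetchall()
```

After the round trip only one copy of each duplicated payload survives.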
|
You can do it like this, using the system column `ctid`, **e.g.**:
```
DELETE FROM table_name
WHERE ctid NOT IN
(SELECT MAX(dt.ctid)
FROM table_name As dt
GROUP BY dt.*);
```
For your table, run this query:
```
DELETE FROM meta.fk_payment1
WHERE ctid NOT IN
(SELECT MAX(dt.ctid)
FROM meta.fk_payment1 As dt
GROUP BY dt.*);
```
|
Delete duplicate records from a Postgresql table without a primary key?
|
[
"",
"sql",
"postgresql",
""
] |
I have a simple query
```
SELECT Group, Value, NULL AS MONTH from tbl_A
```
which returns
```
Group Value Month
A 100 NULL
B 110 NULL
```
I'm seeking results that return
```
Group Value Month
A 100 1
A 100 2
A 100 3
...
B 110 1
B 110 2
B 110 3
...
```
In other words, I need to be able to define a list of values and repeat each result row for each value in the defined list of "months". They are actually dates, but I just used integers here for clarity.
|
You can use `VALUES` clause to define a [Table Value Constructor (TVC)](https://www.simple-talk.com/sql/sql-training/table-value-constructors-in-sql-server-2008/) . Then `CROSS APPLY` in order to get required result set:
```
SELECT [Group], Value, x.y AS MONTH
from tbl_A
CROSS APPLY (VALUES (1), (2), (3)) x(y)
```
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/f9348/1)
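Where `CROSS APPLY (VALUES ...)` is T-SQL-specific, the row-multiplying effect itself is portable. A small sketch with a plain `CROSS JOIN` against an inline list, runnable in SQLite from Python (names are assumptions mirroring the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_a (grp TEXT, value INTEGER)")
con.executemany("INSERT INTO tbl_a VALUES (?, ?)", [("A", 100), ("B", 110)])

# Every row of tbl_a is repeated once per month in the inline list
rows = con.execute("""
    SELECT grp, value, m.month
    FROM tbl_a
    CROSS JOIN (SELECT 1 AS month UNION ALL SELECT 2 UNION ALL SELECT 3) m
    ORDER BY grp, m.month
""").fetchall()
```

Each of the two source rows comes back three times, once per month value.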
|
Try a `CROSS JOIN` as below (note that `Group` is a reserved word and must be bracketed):
```
select [Group], Value, [MONTH]
from tbl_A
cross join (select 1 as MONTH
union all
select 2
union all
select 3
union all
select 4
union all
select 5
union all
select 6
union all
select 7
union all
select 8
union all
select 9
union all
select 10
union all
select 11
union all
select 12) B
order by [Group], Value, [MONTH]
```
|
Force repeated rows with different derived value
|
[
"",
"sql",
"t-sql",
""
] |
I recently responded [to this question](https://stackoverflow.com/questions/30163920/formatting-datetime-in-ssrs-expression) in the SSRS-2008 tag that required changing the day number in a date to the ordinal number (i.e. "1st", "2nd" instead of "1", "2"). The solution involved a VB.Net function. I'm curious how one would go about performing this task in SQL (t-sql and SQL Server in particular), or if there is some built in support.
So here is a scenario: say you have organized a footrace for 1000 runners and have the results in a table with the columns Name and Place (in normal numbers). You want to create a query that will display a user's name and their place in ordinal numbers.
|
Here's a scalable solution that should work for any number: the 11/12/13 exception is checked with `% 100`, so numbers like 111 or 212 still get the right suffix.
```
WITH CTE_Numbers
AS
(
SELECT 1 num
UNION ALL
SELECT num + 1
FROM CTE_Numbers
WHERE num < 1000
)
SELECT CAST(num AS VARCHAR(10))
+
CASE
WHEN num % 100 IN (11,12,13) THEN 'th' --first checks for exception
WHEN num % 10 = 1 THEN 'st'
WHEN num % 10 = 2 THEN 'nd'
WHEN num % 10 = 3 THEN 'rd'
ELSE 'th' --works for num % 10 IN (4,5,6,7,8,9,0)
END
FROM CTE_Numbers
OPTION (MAXRECURSION 0)
```
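The same suffix logic is easy to mirror outside SQL. A small Python sketch of the identical `% 100` / `% 10` rules:

```python
def ordinal(n):
    """Same rules as the SQL CASE: 11/12/13 (mod 100) are checked first."""
    if n % 100 in (11, 12, 13):
        return f"{n}th"
    return str(n) + {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
```

Checking the exception against `n % 100` (not the raw value) is what makes 111 come out as "111th" rather than "111st".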
|
You can do that just as easily in SQL as in the app layer:
```
DECLARE @myDate DATETIME = '2015-05-21';
DECLARE @day INT;
SELECT @day = DAY(@myDate);
SELECT CASE WHEN @day IN ( 11, 12, 13 ) THEN CAST(@day AS VARCHAR(10)) + 'th'
WHEN @day % 10 = 1 THEN CAST(@day AS VARCHAR(10)) + 'st'
WHEN @day % 10 = 2 THEN CAST(@day AS VARCHAR(10)) + 'nd'
WHEN @day % 10 = 3 THEN CAST(@day AS VARCHAR(10)) + 'rd'
ELSE CAST(@day AS VARCHAR(10)) + 'th'
END
```
You could also put this in a scalar function if necessary.
**EDIT**
For your example, it would be:
```
SELECT Name ,
CASE WHEN Place IN ( 11, 12, 13 )
THEN CAST(Place AS VARCHAR(10)) + 'th'
WHEN Place % 10 = 1 THEN CAST(Place AS VARCHAR(10)) + 'st'
WHEN Place % 10 = 2 THEN CAST(Place AS VARCHAR(10)) + 'nd'
WHEN Place % 10 = 3 THEN CAST(Place AS VARCHAR(10)) + 'rd'
ELSE CAST(Place AS VARCHAR(10)) + 'th'
END AS Place
FROM FootRaceResults;
```
|
How to create ordinal numbers (i.e. "1st" "2nd", etc.) in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a week number and a year, and I need to display "Total for mm/dd/yy to mm/dd/yy" in a row of my SSRS report. My week starts on Monday. For example, if my week number is '2' and the year is '2010', then I have to display "Total for 01/04/2010 to 01/10/2010" in my SSRS column. How do I do this?
|
Try this
```
declare @year char(4) = '2010'
declare @week int = 2
declare @fromdate datetime
declare @todate datetime
set @fromdate = DATEADD(wk, DATEDIFF(wk, 6, '1/1/' + @year) + (@week-1), 7);
set @todate = DATEADD(wk, DATEDIFF(wk, 5, '1/1/' + @year) + (@week-1), 6) ;
;WITH dates AS
(
SELECT CONVERT(datetime,@fromDate) as Date
UNION ALL
SELECT DATEADD(d,1,[Date])
FROM dates
WHERE DATE < @toDate
)
select * from dates
```
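As a cross-check outside SQL, here is a small Python sketch computing Monday-start week bounds. Note it uses ISO week numbering, which can differ by one from the question's numbering for the partial week around January 1st (by ISO rules, 01/04/2010 to 01/10/2010 is week 1 of 2010, while the question calls it week 2):

```python
from datetime import date, timedelta

def week_bounds(year, week):
    """Monday-start week -> (first day, last day), ISO week numbering."""
    monday = date.fromisocalendar(year, week, 1)
    return monday, monday + timedelta(days=6)

# ISO week 1 of 2010 runs 04 Jan - 10 Jan
start, end = week_bounds(2010, 1)
```

`date.fromisocalendar` requires Python 3.8+; for the question's numbering you would offset the week argument by one for years where January 1st falls after Thursday.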
|
SQL Server has a [DATEPART](https://msdn.microsoft.com/en-us/library/ms174420.aspx) function which calculates the ordinal week number of a year. However, you have to call [DATEFIRST](https://msdn.microsoft.com/en-AU/library/ms181598.aspx) before this to define which day of the week represents the start of the week. In your case, you have stated that the start of your week is Monday (i.e. 1).
```
SET DATEFIRST 1;
SELECT SUM([your data column])
FROM [your table]
WHERE DATEPART(WEEKNUM, [your date column])=[your week parameter]
AND DATEPART(YEAR, [your date column])=[your year parameter]
```
|
Finding week start and week end date
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"reporting-services",
"ssrs-2008-r2",
""
] |
I've a product table with three columns:
1. product\_id
2. product\_name
3. product\_keywords
And it contains values something like that:
```
product_id | product_name | product_keywords
1          | Computer     | pc, computer, laptop, personal computer, desktop computer
2          | Mobile       | Smart Phone, Phone, Nokia, Sony
```
Now I need an SQL query such that if any customer searches for "pc" or "personal computer", it returns product\_name = Computer with product\_id = 1, and so on.
|
Although my first instinct would be to re-design the schema to have a `keywords` table with unique values and a mapping table between the products and the keywords, MySQL actually offers an elegant way to query comma-delimited strings with the [`find_in_set`](https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_find-in-set) function:
```
SELECT product_id, product_name
FROM products
WHERE FIND_IN_SET ('pc', keywords) > 0 OR
FIND_IN_SET ('personal computer', keywords) > 0
```
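To see why `FIND_IN_SET` beats a plain substring `LIKE`, here is a rough Python analogue (an assumption-laden sketch: unlike the real MySQL function, it trims whitespace around items):

```python
def find_in_set(needle, csv):
    """Rough analogue of MySQL FIND_IN_SET: 1-based position, 0 if absent.
    Assumption: items are trimmed here; the real function does not trim."""
    items = [s.strip() for s in csv.split(",")]
    return items.index(needle) + 1 if needle in items else 0

keywords = "pc, computer, laptop, personal computer, desktop computer"
# 'top' is a substring of 'laptop', so LIKE '%top%' would match it,
# but it is not a list item, so a set lookup correctly rejects it.
```

The set lookup matches whole items only, which is the property the `LIKE '%...%'` approach lacks.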
|
You can use [find\_in\_set](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set).
```
select * from product where find_in_set('pc', product_keywords)
```
But storing values as csv is very bad database design.
|
mysql select rows on comma seperated value in column
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have a large expression in a select statement that I also need to reference in a where clause. I think recomputing it is eating up my efficiency, so I do not want to evaluate it more than I need to.
```
SELECT 'HeaderDetail' AS rowType, ... , [large CASE/WHEN statement] AS Tracking
...
WHERE NOT(@DNC IN([large CASE/WHEN statement]))
AND [large CASE/WHEN statement] IS NOT NULL
```
I feel like each time I reference the expression it has to recalculate the values, and the query already takes a few seconds to run. Is there a way to reference the expression without having it computed multiple times?
Thanks in advance for the help!
|
You can filter the rows in an outer query or a `CTE`:
```
With CTE as
(
SELECT 'HeaderDetail' AS rowType, ... , [large CASE/WHEN statement] AS Tracking
...
)
select * from CTE
WHERE NOT(@DNC IN(Tracking))
AND Tracking IS NOT NULL
```
or use `Sub-Select`
```
select * from
(
SELECT 'HeaderDetail' AS rowType, ... , [large CASE/WHEN statement] AS Tracking
...
) AS A
WHERE NOT(@DNC IN(Tracking))
AND Tracking IS NOT NULL
```
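A minimal runnable illustration of the pattern, using SQLite from Python (hypothetical table, with a trivial CASE expression standing in for the large one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,), (None,)])

# The expression is written once, named in the inner query, then the outer
# query filters on the alias.
rows = con.execute("""
    SELECT * FROM (
        SELECT a,
               CASE WHEN a >= 2 THEN 'big' ELSE 'small' END AS tracking
        FROM t
    ) sub
    WHERE tracking IS NOT NULL AND tracking <> 'small'
""").fetchall()
```

The CASE expression appears exactly once; the outer WHERE references only its alias.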
|
use a subquery
```
select *
from (
your query with the large case statement goes here
) temp
where theAliasForThatCaseStatment is not null
```
|
SQL Reference Large Equation in Select Statement
|
[
"",
"mysql",
"sql",
"sql-server",
"performance",
"subquery",
""
] |
I know it is possible to filter a query from a list as in `Select * from table where ID in (1,2,3,4)`, but is it possible to select from a list?
For example, I'd like to select date and offset date from a list of offset hours:
```
select sysdate, sysdate - offset / 24 as offsetSysdate
from dual, (1,2,3,4) as offset
```
I know I can create a table and populate it with values to do this, but I was looking for a way to do it on the fly directly in the query.
|
You can use a database "table of number" type - either create one or use one that handily exists like SYS.KU$\_OBJNUMSET:
```
select sysdate, sysdate - offset.column_value / 24 as offsetSysdate
from TABLE (KU$_OBJNUMSET(1,2,3,4)) offset
```
Note that above `offset` is now an alias for the TABLE, and there is a pseudo-column called `column_value` for table types like this. Also no need for DUAL table.
If you prefer to create your own type:
```
create type num_tab as table of number;
/
select sysdate, sysdate - offset.column_value / 24 as offsetSysdate
from TABLE(num_tab(1,2,3,4)) offset;
```
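Outside Oracle, a recursive CTE is a common portable stand-in for an inline number list. A small sketch, runnable in SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Inline list of numbers via a recursive CTE (portable stand-in for
# Oracle's TABLE(num_tab(1,2,3,4)))
offsets = [r[0] for r in con.execute("""
    WITH RECURSIVE offset_list(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM offset_list WHERE n < 4
    )
    SELECT n FROM offset_list
""")]
```

The CTE plays the role of the collection type: a derived table you can join or cross join against.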
|
Sure, use a subselect for that
```
select sysdate, sysdate - offset / 24 as offsetSysdate
from
(
select 1 as offset from dual
union
select 2 from dual
union
select 3 from dual
union
select 4 from dual
)
```
For a more sophisticated way to generate the number sequence, see e.g. [here](https://stackoverflow.com/questions/2847226/sql-to-generate-a-list-of-numbers-from-1-to-100).
---
**EDIT:** Brino figured out the suggested code improvement, here it is in readable format and slightly improved:
```
select sysdate, sysdate - offset.value / 24 as offsetSysdate, offset.value
from (select r as value
from (select level r from dual connect by level <= 4)
) offset;
```
|
Oracle: Can I select from a list of values?
|
[
"",
"sql",
"oracle",
""
] |
I'm stuck on the following problem: I'm writing a view where I join several tables to a person table. I'm now trying to join the partners table, but I only need the last historically valid partner row.
partners table:
```
id,
name,
married_at,
divorced_at,
died_at,
someone_id
```
As you can see it's about partners you are/were married with. Someone can have only one partner at a time, but several partners in history. So the last partner of someone (someone\_id) may be:
* alive and still married
* alive but divorced
* dead "but still married" (so someone is the widower)
I need to find ONLY the last partner row for someone.
What I got so far:
```
select *
from someone_table s
left join partners p on (p.someone_id = s.id and (p.divorced_at is null and p.died_at is null) )
```
But this - obvious as it is - gives me only partners who are still alive and still married. Sure, these partners are the last partners of someone, but all the other "someones" whose last partner is divorced or dead won't be in the result of the statement. How do I get the other ones, and only one row for each someone?
I also tried a select statement as a table, using rownum:
```
select *
from someone s,
(select * from partners p where p.someone_id = s.id and ROWNUM = 1 order by p.married_at)
```
But this statement always fails with an "invalid identifier s.id" error.
Note: The table structure is fixed and can't be changed. The DBMS is Oracle.
Thanks in advance
edit:
sample data
partners\_table
```
╔════╦═════════╦════════════╦═════════════╦════════════╦════════════╗
║ id ║ name ║ married_at ║ divorced_at ║ died_at ║ someone_id ║
╠════╬═════════╬════════════╬═════════════╬════════════╬════════════╣
║ 1 ║ partner ║ 01.01.2000 ║ ║ ║ 12 ║
║ 2 ║ honey1 ║ 15.01.2000 ║ 15.01.2001 ║ ║ 15 ║
║ 3 ║ honey2 ║ 16.02.2001 ║ ║ ║ 15 ║
║ 4 ║ beauty ║ 23.03.2005 ║ ║ 25.03.2005 ║ 16 ║
║ 5 ║ lady1 ║ 11.11.2000 ║ 11.12.2000 ║ ║ 20 ║
║ 6 ║ lady2 ║ 12.12.2000 ║ 01.01.2001 ║ ║ 20 ║
║ 7 ║ lady3 ║ 02.02.2001 ║ ║ 04.02.2004 ║ 20 ║
║ 8 ║ lady4 ║ 05.05.2005 ║ ║ ║ 20 ║
║ 9 ║ mate ║ 23.06.2003 ║ 12.12.2009 ║ ║ 25 ║
╚════╩═════════╩════════════╩═════════════╩════════════╩════════════╝
```
last historical rows would be:
```
╔════╦═════════╦════════════╦═════════════╦════════════╦════════════╗
║ id ║ name ║ married_at ║ divorced_at ║ died_at ║ someone_id ║
╠════╬═════════╬════════════╬═════════════╬════════════╬════════════╣
║ 1 ║ partner ║ 01.01.2000 ║ ║ ║ 12 ║
║ 3 ║ honey2 ║ 16.02.2001 ║ ║ ║ 15 ║
║ 4 ║ beauty ║ 23.03.2005 ║ ║ 25.03.2005 ║ 16 ║
║ 8 ║ lady4 ║ 05.05.2005 ║ ║ ║ 20 ║
║ 9 ║ mate ║ 23.06.2003 ║ 12.12.2009 ║ ║ 25 ║
╚════╩═════════╩════════════╩═════════════╩════════════╩════════════╝
```
|
This ought to do what you want:
```
with partners (id, name, married_at, divorced_at, died_at, someone_id) as (select 1, 'partner', to_date('01/01/2000', 'dd/mm/yyyy'), null, null, 12 from dual union all
select 2, 'honey1', to_date('15/01/2000', 'dd/mm/yyyy'), to_date('15/01/2001', 'dd/mm/yyyy'), null, 15 from dual union all
select 3, 'honey2', to_date('16/02/2001', 'dd/mm/yyyy'), null, null, 15 from dual union all
select 4, 'beauty', to_date('23/03/2005', 'dd/mm/yyyy'), null, to_date('25/03/2005', 'dd/mm/yyyy'), 16 from dual union all
select 5, 'lady1', to_date('11/11/2000', 'dd/mm/yyyy'), to_date('11/12/2000', 'dd/mm/yyyy'), null, 20 from dual union all
select 6, 'lady2', to_date('12/12/2000', 'dd/mm/yyyy'), to_date('01/01/2001', 'dd/mm/yyyy'), null, 20 from dual union all
select 7, 'lady3', to_date('02/02/2001', 'dd/mm/yyyy'), null, to_date('04/02/2004', 'dd/mm/yyyy'), 20 from dual union all
select 8, 'lady4', to_date('05/05/2005', 'dd/mm/yyyy'), null, null, 20 from dual union all
select 9, 'mate', to_date('23/06/2003', 'dd/mm/yyyy'), to_date('12/12/2009', 'dd/mm/yyyy'), null, 25 from dual)
select id,
name,
married_at,
divorced_at,
died_at,
someone_id
from (select id,
name,
married_at,
divorced_at,
died_at,
someone_id,
row_number() over (partition by someone_id order by married_at desc) rn
from partners)
where rn = 1;
ID NAME MARRIED_AT DIVORCED_AT DIED_AT SOMEONE_ID
---------- ------- ---------- ----------- ---------- ----------
1 partner 01/01/2000 12
3 honey2 16/02/2001 15
4 beauty 23/03/2005 25/03/2005 16
8 lady4 05/05/2005 20
9 mate 23/06/2003 12/12/2009 25
```
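The `row_number() over (partition by someone_id order by married_at desc)` pick can also be mirrored procedurally. A small Python sketch over a trimmed subset of the sample data:

```python
from datetime import date

# (id, name, married_at, someone_id) -- trimmed from the question's sample
partners = [
    (1, "partner", date(2000, 1, 1), 12),
    (5, "lady1", date(2000, 11, 11), 20),
    (6, "lady2", date(2000, 12, 12), 20),
    (8, "lady4", date(2005, 5, 5), 20),
]

# Keep the row with the latest married_at per someone_id
# (the rn = 1 pick of the window-function query)
latest = {}
for row in partners:
    someone_id = row[3]
    if someone_id not in latest or row[2] > latest[someone_id][2]:
        latest[someone_id] = row
```

Each `someone_id` keeps exactly one row, regardless of whether the partner is married, divorced, or deceased.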
|
If I understand your question (and I believe I do), you should try something like this:
```
SELECT *
FROM someone_table s
left join (
SELECT *
FROM (
SELECT *
FROM partners p
WHERE p.someone_id = s.id
ORDER BY GREATEST(died_at, divorced_at, married_at)
) x
WHERE ROWNUM = 1
) y
```
**Note:** I'm not an oracle guy, most of my work is with sql server, but [according to this post](https://stackoverflow.com/questions/2928102/oracle-select-maximum-value-from-different-columns-of-the-same-row) `greatest` should work on oracle database.
|
SQL Join only the last historical row
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
I have 2 similar tables: one with all the data, and the other containing a subset of the first. Every 2-3 days I need to insert the missing values into the second table, and I use this code:
```
INSERT INTO [SRVDB2].[dbt].[curve].[curve_value]
SELECT *
FROM [SRVDB1].[dbt].[curve].[curve_value] as DB01
WHERE TargetDate >= '20150505'
and NOT EXISTS (SELECT *
FROM [SRVDB2].[dbt].[curve].[curve_value] as DB02
WHERE DB02.TargetDate = DB01.TargetDate
and DB02.[Hour] = DB01.[Hour]
and DB02.[id_Mkt] = DB01.[id_Mkt]
and DB02.[Price] = DB01.[Price]
and DB02.VoSe = DB01.VoSe
and DB02.VoBu = DB01.VoBu
)
```
It always worked, but now I have some rows with `NULL` in column `VoSe` or `VoBu`, and those rows are not inserted correctly (even though executing only the `SELECT` statement seems to return all the differences). How can I handle these?
|
Add explicit check for `NULL` for both of these columns:
```
INSERT INTO [SRVDB2].[dbt].[curve].[curve_value]
SELECT *
FROM [SRVDB1].[dbt].[curve].[curve_value] as DB01
WHERE TargetDate >= '20150505'
and NOT EXISTS (SELECT *
FROM [SRVDB2].[dbt].[curve].[curve_value] as DB02
WHERE DB02.TargetDate = DB01.TargetDate
and DB02.[Hour] = DB01.[Hour]
and DB02.[id_Mkt] = DB01.[id_Mkt]
and DB02.[Price] = DB01.[Price]
and ((DB02.VoSe IS NULL AND DB01.VoSe IS NULL) OR DB02.VoSe = DB01.VoSe)
and ((DB02.VoBu IS NULL AND DB01.VoBu IS NULL) OR DB02.VoBu = DB01.VoBu)
)
```
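The underlying gotcha is that `NULL = NULL` does not evaluate to true, so an equality-only `NOT EXISTS` silently treats NULL rows as "different". A two-line demonstration via SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NULL = NULL yields NULL (not true), so equality comparisons drop NULL rows
eq = con.execute("SELECT NULL = NULL").fetchone()[0]
# The explicit IS NULL check is what makes the pair compare as "equal"
both_null = con.execute("SELECT NULL IS NULL AND NULL IS NULL").fetchone()[0]
```

That is why the answer adds the `(a IS NULL AND b IS NULL) OR a = b` branches for the two nullable columns.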
|
@dotnetom's answer (+1) should work for your problem. However, making some assumptions on the problem you describe, I suspect the following would work just as well:
```
INSERT INTO [SRVDB2].[dbt].[curve].[curve_value]
SELECT *
FROM [SRVDB1].[dbt].[curve].[curve_value]
WHERE TargetDate >= '20150505'
EXCEPT SELECT *
FROM [SRVDB2].[dbt].[curve].[curve_value]
```
|
SQL server update table with missing values
|
[
"",
"sql",
"sql-server",
"difference",
""
] |
I have a database table in which multiple customers can be assigned to multiple types. I am having trouble formulating a query that will exclude all customer records that match a certain type. For example:
```
ID CustomerName Type
=========================
111 John Smith TFS-A
111 John Smith PRO
111 John Smith RWAY
222 Jane Doe PRO
222 Jane Doe TFS-A
333 Richard Smalls PRO
444 Bob Rhoads PRO
555 Jacob Jones TFS-B
555 Jacob Jones TFS-A
```
What I want is to pull only those people who are marked PRO but not marked TFS. If they are PRO and TFS, exclude them.
Any help is greatly appreciated.
|
You can get all `'PRO'` customers and use `NOT EXISTS` clause to exclude the ones that are also `'TFS'`:
```
SELECT DISTINCT ID, CustomerName
FROM mytable AS t1
WHERE [Type] = 'PRO' AND NOT EXISTS (SELECT 1
FROM mytable AS t2
WHERE t1.ID = t2.ID AND [Type] LIKE 'TFS%')
```
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/244f1/2)
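A runnable miniature of this query against the question's sample data, using SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT, type TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (111, "John Smith", "TFS-A"), (111, "John Smith", "PRO"),
    (222, "Jane Doe", "PRO"), (222, "Jane Doe", "TFS-A"),
    (333, "Richard Smalls", "PRO"), (444, "Bob Rhoads", "PRO"),
])

# PRO customers with no TFS* row at all
rows = con.execute("""
    SELECT DISTINCT id, name
    FROM customers c1
    WHERE type = 'PRO'
      AND NOT EXISTS (SELECT 1 FROM customers c2
                      WHERE c2.id = c1.id AND c2.type LIKE 'TFS%')
    ORDER BY id
""").fetchall()
```

John Smith and Jane Doe are excluded because they each have a TFS row; only the PRO-only customers survive.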
|
```
Select DISTINCT(Customername),ID
FROM tablename
WHERE NOT (ID IN (SELECT ID FROM tablename WHERE type='PRO')
AND ID IN (SELECT ID FROM tablename WHERE type LIKE 'TFS%'))
```
EDIT: now with a working TFS clause.
This gets all customers that do not have both type PRO and TFS, for example.
SQLFIDDLE: <http://sqlfiddle.com/#!9/da4f9/2>
|
Query to exclude two or more records if they match a single value
|
[
"",
"sql",
"sql-server",
""
] |
I have a column with all kinds of numbers, and I am specifically trying to extract numbers that are either
```
555
```
or
```
555.xx
```
or
```
555.x
```
The output should look like this
```
555
555.1
555.5
555.9
555.58
555.22
.
.
```
I.e., I need an SQL query that will return the rows that have the number 555 with any decimal fraction from my column of arbitrary numbers.
|
You can try a `LIKE` condition:
```
WHERE Col LIKE '555.%'
OR Col = '555'
```
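A quick way to sanity-check the pattern is to run it against toy data. A sketch using SQLite from Python (the table and column names come from the comments; `5551` is included to show that the `.` in the pattern matters):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE diagnosis (code TEXT)")
con.executemany("INSERT INTO diagnosis VALUES (?)",
                [("555",), ("555.1",), ("555.58",), ("5551",), ("556.2",)])

# '5551' is excluded: it is neither exactly '555' nor '555.' plus digits
codes = [r[0] for r in con.execute(
    "SELECT code FROM diagnosis "
    "WHERE code = '555' OR code LIKE '555.%' ORDER BY code")]
```

Only the bare `555` and its decimal variations come back.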
|
As a fast approach I would do
```
CAST((ValueOfTable * 100.00) AS DECIMAL(18, 2))
```
> my table name is Diagnosis and the column name is code, where should I
> add the table name and column name in this code ?
In your situation:
```
SELECT
CAST((code * 100.00) AS DECIMAL(18, 2))
FROM Diagnosis ;
```
I'm expecting `code` to be an integer. You can find out by executing:
```
\d Diagnosis ;
```
one of the output lines should look similar to
```
(...)
code | integer |
(...)
```
|
SQL query to extract a number and its decimal variations
|
[
"",
"sql",
"netezza",
""
] |
I have this query below. I'd like to insert only if the row is not already present in TABLE1. How can I do that?
```
INSERT INTO TABLE1 (VEH_YEAR, VEH_MAKE, ACV_VOLUME)
SELECT VEH_YEAR, VEH_MAKE,
(SELECT COUNT(*)
FROM ACV_VEHICLE_DETAILS
        WHERE YEAR = table2.veh_year
        AND MAKE = table2.veh_make
) AS ACV_VOLUME
FROM TABLE2 table2 WHERE VEH_YEAR IS NOT NULL AND VEH_MAKE IS NOT NULL;
```
|
Use `not exists`:
```
INSERT INTO TABLE1 (VEH_YEAR, VEH_MAKE, ACV_VOLUME)
SELECT VEH_YEAR, VEH_MAKE,
(SELECT COUNT(*)
FROM ACV_VEHICLE_DETAILS vd
WHERE vd.YEAR = t2.veh_year AND vd.MAKE = t2.veh_make
) AS ACV_VOLUME
FROM TABLE2 t2
WHERE VEH_YEAR IS NOT NULL AND VEH_MAKE IS NOT NULL AND
NOT EXISTS (SELECT 1
FROM table1 t1
WHERE t1.veh_year = t2.veh_year and t1.veh_make = t2.veh_make
);
```
|
You can use `MERGE` with `WHEN NOT MATCHED THEN INSERT`:
```
merge into Table1 t
using (
SELECT VEH_YEAR, VEH_MAKE,
(SELECT COUNT(*)
FROM ACV_VEHICLE_DETAILS
           WHERE YEAR = table2.veh_year
           AND MAKE = table2.veh_make
) AS ACV_VOLUME
FROM TABLE2 table2 WHERE VEH_YEAR IS NOT NULL AND VEH_MAKE IS NOT NULL
) d
on (t.veh_year = d.veh_year and t.veh_make = d.veh_make)
when not matched then insert (veh_year, veh_make, acv_volume)
values (d.veh_year, d.veh_make, d.acv_volume);
```
Don't forget to commit :)
|
INSERT INTO SELECT if NOT EXISTS in oracle
|
[
"",
"sql",
"oracle",
"not-exists",
""
] |
I want to condense the following three queries into one query and output the totals in 3 columns. Also, how do I make it so I don't have to declare the date? I want it to "know" the current date, month, and year.
```
DECLARE @myDate as datetime
SET @myDate = '2015-01-1'
select SUM(Amount) as 'Day Total'
from [Accounting].[dbo].[HandPay]
where AccountingDate>=@myDate and AccountingDate<dateadd(day,1,@myDate)
select SUM(Amount) as 'Month Total'
from [Accounting].[dbo].[HandPay]
where AccountingDate>=@myDate and AccountingDate<dateadd(MONTH,1,@myDate)
select SUM(Amount) as 'Year Total'
from [Accounting].[dbo].[HandPay]
where AccountingDate>=@myDate and AccountingDate<dateadd(year,1,@myDate)
```
What is the best way to do this?
Thanks!
Thanks for all the super fast responses! This is now solved.
|
If I'm understanding the problem correctly, then something like this ought to work:
```
declare @today date = convert(date, getdate());
select
[Day Total] = sum(case when [AccountingDate] >= @today and [AccountingDate] < dateadd(day, 1, @today) then [Amount] else 0 end),
[Month Total] = sum(case when [AccountingDate] >= @today and [AccountingDate] < dateadd(month, 1, @today) then [Amount] else 0 end),
[Year Total] = sum(case when [AccountingDate] >= @today and [AccountingDate] < dateadd(year, 1, @today) then [Amount] else 0 end)
from
[Accounting].[dbo].[HandPay];
```
Note that `[Month Total]` and `[Year Total]` don't give the sums of the entries that occur within the current month/year, but rather the sum of the entries that occur within a month/a year of today's date. I'm not sure if that's what you want, but it seems consistent with the original queries.
**UPDATE:** As suggested by D Stanley below, you can simplify this a bit since you know that the date ranges that compose the `[Day Total]` and `[Month Total]` sums are enclosed entirely within the date range that composes the `[Year Total]` sum. Here's what this might look like:
```
declare @today date = convert(date, getdate());
select
[Day Total] = sum(case when [AccountingDate] < dateadd(day, 1, @today) then [Amount] else 0 end),
[Month Total] = sum(case when [AccountingDate] < dateadd(month, 1, @today) then [Amount] else 0 end),
[Year Total] = sum([Amount])
from
[Accounting].[dbo].[HandPay]
where
[AccountingDate] >= @today and [AccountingDate] < dateadd(year, 1, @today);
```
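The conditional-aggregation idea can be checked with toy data. A sketch using SQLite from Python, with dates stored as ISO strings and fixed literal cutoffs standing in for the `dateadd` expressions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE handpay (accountingdate TEXT, amount INTEGER)")
con.executemany("INSERT INTO handpay VALUES (?, ?)", [
    ("2015-01-01", 10),  # today
    ("2015-01-15", 20),  # within a month of today
    ("2015-06-01", 40),  # within a year of today
])

day_total, month_total, year_total = con.execute("""
    SELECT
      SUM(CASE WHEN accountingdate < '2015-01-02' THEN amount ELSE 0 END),
      SUM(CASE WHEN accountingdate < '2015-02-01' THEN amount ELSE 0 END),
      SUM(amount)
    FROM handpay
    WHERE accountingdate >= '2015-01-01' AND accountingdate < '2016-01-01'
""").fetchone()
```

One pass over the table yields all three totals, with the year range enforced once in the WHERE clause as in the simplified query above.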
|
Make them all part of a 'super' select:
```
DECLARE @myDate as datetime
SET @myDate = '2015-01-1'
SELECT
(select SUM(Amount)
from [Accounting].[dbo].[HandPay]
where AccountingDate>=@myDate and AccountingDate<dateadd(day,1,@myDate) ) as 'Day Total',
(select SUM(Amount)
from [Accounting].[dbo].[HandPay]
where AccountingDate>=@myDate and AccountingDate<dateadd(MONTH,1,@myDate) ) as 'Month Total',
(select SUM(Amount)
from [Accounting].[dbo].[HandPay]
where AccountingDate>=@myDate and AccountingDate<dateadd(year,1,@myDate) ) as 'Day Total'
```
|
SQL Server How to combine 3 queries into one?
|
[
"",
"sql",
"sql-server",
""
] |
In Microsoft SQL Server, to test something like this in the query window:
```
select * from Users where LastName = @lastname
```
I can add this before the command:
```
declare @lastname varchar(16)
set @lastname = 'Troy'
```
But in PostgreSQL, I cannot find a similar way to do so. It seems the only thing I can do is to replace the parameter name directly with its value. It gets hard when the ad-hoc query gets complicated and the same parameter gets used several times. Is there a way?
|
I configured Postgres to log all commands, then copied a command from the log file (where all the parameters are already replaced with their values) and tested it in the query window.
It may not be the best approach, but it is easy and works for me.
|
Various options.
Provide parameters in a [CTE](https://www.postgresql.org/docs/current/queries-with.html) to have "variables" in **pure SQL**:
```
WITH var(lastname) AS (SELECT 'Troy'::varchar(16))
SELECT *
FROM users, var v
WHERE lastname = v.lastname;
```
This works for *any* query.
Since the CTE `var` holds a ***single row*** it is safe to append it with a `CROSS JOIN` at the end of the FROM clause - actually the short form with appending it after a comma may be best because explicit join syntax binds before commas. The additional table alias `v` is optional to further shorten the syntax.
**OR** cheaper without CTE. BTW, why `varchar(16)`? Just use `text`:
```
SELECT *
FROM users
JOIN (SELECT 'Troy'::text) var(lastname) USING (lastname)
WHERE lastname = var.lastname;
```
**Or** use a **temporary table** to play a similar role for *all* queries within the same session. Temp tables die with the end of the session.
```
CREATE TEMP TABLE var AS
SELECT text 'Troy' AS lastname;
ANALYZE var; -- temp tables are not covered by autovacuum
SELECT * FROM users JOIN var USING (lastname);
```
* [About temporary tables and `autovacuum`](https://dba.stackexchange.com/questions/18664/are-regular-vacuum-analyze-still-recommended-under-9-1/18694#18694)
**Or** you can use **`DO`** statements like @Houari supplied or like demonstrated here:
* [PostgreSQL loops outside functions. Is that possible?](https://stackoverflow.com/questions/18340929/postgresql-loops-outside-functions-is-that-possible/18341502#18341502)
Note that you cannot return values from `DO` statements. (You can use `RAISE ...` though.) And you cannot use `SELECT` without target in plpgsql - the default procedural language in a `DO` statement. Replace `SELECT` with [**`PERFORM`**](https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-NORESULT) to throw away results.
**Or** you can use [**customized options**](https://www.postgresql.org/docs/current/runtime-config-custom.html), which you can set in `postgresql.conf` to be visible ***globally***.
**Or** set in your session to be visible for the duration of the session and only **in the same session**:
```
SET my.lastname = 'Troy';
```
The variable name *must* include a dot. You are limited to `text` as data type this way, but any data type can be represented as `text` ...
You can use `current_setting('my.lastname')` as value expression. Cast if you need. For example: `current_setting('my.json_var')::json` ...
**Or** use `SET LOCAL` for the effect to only last for the current ***transaction***. See:
* [Passing user id to PostgreSQL triggers](https://stackoverflow.com/questions/13172524/passing-user-id-to-postgresql-triggers)
**Or** you can use tiny **`IMMUTABLE` functions** as **global** persisted variables that only privileged users can manipulate. See:
* [Is there a way to define a named constant in a PostgreSQL query?](https://stackoverflow.com/questions/13316773/is-there-a-way-to-define-a-named-constant-in-a-postgresql-query/13317628#13317628)
**Or** when working with [psql](https://www.postgresql.org/docs/current/app-psql.html) as client, use the `\set` or `\gset` meta-commands and [variable substitution](https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES).
|
How to test my ad-hoc SQL with parameters in Postgres query window
|
[
"",
"sql",
"postgresql",
"parameterized-query",
""
] |
I have a SQL table that has an ID and start and end dates. Example:
```
ID StartDt EndDt
123 1/1/2010 12/31/2014
456 7/16/2013 11/20/2014
```
Based on an Oct-Sept FY calendar, I can get the FY from the dates (e.g., 2010 and 2015 for ID 123). However, I would like to generate a row for the initial and last FY and each in-between FY. Below is what I would like to get from the above rows of data:
```
ID   FY
123  2010
123  2011
123  2012
123  2013
123  2014
123  2015
456  2013
456  2014
```
|
The query below uses a recursive CTE to count years from the start date's year up to the end date's year (adjust the `YEAR()` calls if you need fiscal-year boundaries).
```
;WITH CTE AS (
SELECT
ID,
YEAR(StartDt) AS FY,
YEAR(EndDt) AS EY
FROM [Source]
UNION ALL
SELECT
ID,
FY + 1,
EY
FROM CTE
WHERE FY < EY
)
SELECT ID, FY FROM CTE ORDER BY ID, FY
```
|
You can use a recursive `cte` to get a list of all possible years, then `JOIN` to that:
```
;with cte AS (SELECT 2010 AS Yr
UNION ALL
SELECT Yr + 1
FROM cte
WHERE Yr < 2015)
SELECT a.ID, b.Yr
FROM YourTable a
JOIN cte b
ON b.Yr BETWEEN YEAR(a.StartDt) AND YEAR(a.EndDt)
```
|
How to duplicate Rows with new entries
|
[
"",
"sql",
"t-sql",
""
] |
I'm running a series of SQL queries to find data that needs cleaning up. One of them I want to do is look for:
* 2 or more uppercase letters in a row
* starting with a lowercase letter
* space then a lowercase letter
For example my name should be "John Doe". I would want it to find "JOhn Doe" or "JOHN DOE" or "John doe", but I would not want it to find "John Doe" since that is formatted correctly.
I am using SQL Server 2008.
|
The key is to use a case-sensitive collation, i.e. `Latin1_General_BIN`*\**. You can then use a query with a [`LIKE` expression](https://msdn.microsoft.com/en-us/library/ms179859.aspx) like the following ([SQL Fiddle demo](http://sqlfiddle.com/#!6/2b7ce/7)):
```
select *
from foo
where name like '%[A-Z][A-Z]%' collate Latin1_General_BIN --two uppercase in a row
or name like '% [a-z]%' collate Latin1_General_BIN --space then lowercase
```
*\**As per [How do I perform a case-sensitive search using LIKE?](https://stackoverflow.com/questions/15079210/how-do-i-perform-a-case-sensitive-search-using-like), apparently there is a "bug" in the `Latin1_General_CS_AS` collation where ranges like `[A-Z]` fail to be case sensitive. The solution is to use `Latin1_General_BIN`.
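As a rough cross-check of the pattern logic (not SQL Server itself): SQLite's `GLOB` operator is case-sensitive by default, so the same two patterns can be exercised with Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (name TEXT)")
conn.executemany("INSERT INTO foo VALUES (?)",
                 [("JOhn Doe",), ("JOHN DOE",), ("John doe",), ("John Doe",)])

# GLOB is case-sensitive in SQLite, so its [A-Z] ranges behave like
# the binary-collation trick in the answer above.
bad = sorted(r[0] for r in conn.execute("""
    SELECT name FROM foo
    WHERE name GLOB '*[A-Z][A-Z]*'   -- two uppercase in a row
       OR name GLOB '* [a-z]*'       -- space then lowercase
"""))
print(bad)
```

The correctly formatted "John Doe" is the only row not flagged.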
|
You can use a regular expression. I'm not a SQL Server whiz, but SQL Server has no built-in regex matching; you can use a CLR user-defined function such as [RegexMatch](https://msdn.microsoft.com/en-us/magazine/cc163473.aspx). Something like this:
```
select columnName
from tableName
where dbo.RegexMatch( columnName,
N'[A-Z]\W[A-Z]' ) = 1
```
|
How to find more than 1 uppercase character
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"case-sensitive",
""
] |
I would like to store people's information in a database table. Every person can have a different set of properties, and I would like to store all of them. But creating a fixed number of columns does not make my application scalable.
So, another approach is to store these values in key-value tables, which leads to tables with a few columns but a huge number of rows.
So I am wondering if there is another way of storing this information that is also easy and fast to query.
|
What database are you using?
One solution (if your DB supports it) could be storing the person's information in an XML format.
|
If you are stuck using SQL Server for this task, you can leverage its XML Support. Notably, you can use XPath.
[Using XPath Queries in SQLXML 4.0](https://msdn.microsoft.com/en-us/library/ms171802(v=sql.105).aspx)
|
Storing entities with dynamic set of properties in a table (and using fixed columns or key-value tables)
|
[
"",
"sql",
"sql-server",
"database",
"database-design",
"key-value",
""
] |
I have a column in our database that holds 4 fields as a "\" delimited string.
I have split the fields as I need them separately in my report.
I also need to use these fields individually as where criteria against another table.
**Things I've tried:**
Temporary table:
```
CREATE GLOBAL TEMPORARY TABLE pfcc
ON COMMIT PRESERVE ROWS
AS select REGEXP_SUBSTR(s, '[^\]+', 1, 1) colA,
REGEXP_SUBSTR(s, '[^\]+', 1, 2) colB,
REGEXP_SUBSTR(s, '[^\]+', 1, 3) colC,
REGEXP_SUBSTR(s, '[^\]+', 1, 4) colD
from (select delimited s from products
where productid = 1)
```
And then Joining this against the other table.
```
select * from pfcc tmp
inner join lineitems gch
on gch.Line = tmp.colA
AND gch.productB = tmp.colB
AND gch.productC = tmp.colC
```
I also tried to join right off without storing values in a table:
```
select REGEXP_SUBSTR(s, '[^\]+', 1, 1) colA,
REGEXP_SUBSTR(s, '[^\]+', 1, 2) colB,
REGEXP_SUBSTR(s, '[^\]+', 1, 3) colC,
REGEXP_SUBSTR(s, '[^\]+', 1, 4) colD
from (select delimited s from products
where productid = 1) tmp
inner join lineitems gch
on gch.Line = tmp.colA
AND gch.productB = tmp.colB
AND gch.productC = tmp.colC
```
I would like to **AVOID** using the temporary table, and have it done similar to the second way. If this is unavoidable then so be it.
Does anyone have a solution for this?
Thanks,
JFIT
|
I think this is the query you want:
```
select gch.Line, gch.productB, gch.productC,
REGEXP_SUBSTR(p.delimited, '[^\]+', 1, 4)
from products p inner join
lineitems gch
on gch.Line = REGEXP_SUBSTR(p.delimited, '[^\]+', 1, 1) and
gch.productB = REGEXP_SUBSTR(p.delimited, '[^\]+', 1, 2) and
gch.productC = REGEXP_SUBSTR(p.delimited, '[^\]+', 1, 3)
where p.productid = 1;
```
You need neither a subquery nor a temporary table.
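The field-splitting that `REGEXP_SUBSTR(s, '[^\]+', 1, n)` performs can be mimicked outside Oracle; here is a small Python sketch with a hypothetical sample value:

```python
# Oracle's REGEXP_SUBSTR(s, '[^\]+', 1, n) extracts the n-th
# backslash-delimited field; a plain split does the same job here.
delimited = r"lineA\prodB\prodC\prodD"  # hypothetical sample value
col_a, col_b, col_c, col_d = delimited.split("\\")
print(col_a, col_b, col_c, col_d)
```

Each field can then be compared against the corresponding join column, exactly as the `ON` clause above does inline.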
|
You can use CTE as follows.
```
with pfcc as
(select REGEXP_SUBSTR(delimited , '[^\]+', 1, 1) colA,
REGEXP_SUBSTR(delimited , '[^\]+', 1, 2) colB,
REGEXP_SUBSTR(delimited , '[^\]+', 1, 3) colC,
REGEXP_SUBSTR(delimited , '[^\]+', 1, 4) colD
from products
where productid = 1)
select * from pfcc tmp
inner join lineitems gch
on gch.Line = tmp.colA
AND gch.productB = tmp.colB
AND gch.productC = tmp.colC;
```
|
Join to splitted string columns in Oracle
|
[
"",
"sql",
"oracle",
"join",
"plsql",
""
] |
I need to get some values from a database and count all rows.
I wrote this code:
```
SELECT author, alias, (select COUNT(*) from registry WHERE status='OK' AND type='H1') AS count
FROM registry
WHERE status='OK' AND type='H1'
```
It works, but how can I simplify this code? Both `WHERE` conditions are the same.
|
If the query is returning the resultset you need, with the "total" count of rows (independent of author and alias), with the same exact value for "count" repeated on each row, we could rewrite the query like this:
```
SELECT t.author
, t.alias
, s.count
FROM registry t
CROSS
JOIN ( SELECT COUNT(*) AS `count`
FROM registry c
WHERE c.status='OK'
AND c.type='H1'
) s
WHERE t.status='OK'
AND t.type='H1'
```
I don't know if that's any simpler, but to someone reading the statement, I think it makes it more clear what resultset is being returned.
(I also tend to favor avoiding any subquery in the SELECT list, unless there is a specific reason to add one.)
The resultset from this query is a bit odd. But absent any example data, expected output or any specification other than the original query, we're just guessing. The query in my answer replicates the results from the original query, in a way that's more clear.
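A minimal sketch of the `CROSS JOIN` approach, using Python's built-in `sqlite3` with invented sample rows (SQLite accepts the same shape of query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registry (author TEXT, alias TEXT, status TEXT, type TEXT)")
conn.executemany("INSERT INTO registry VALUES (?,?,?,?)", [
    ("alice", "a1", "OK", "H1"),
    ("bob",   "b1", "OK", "H1"),
    ("carol", "c1", "KO", "H1"),   # filtered out by status
])

# One derived table computes the total once; CROSS JOIN attaches that
# same total to every matching row.
rows = conn.execute("""
    SELECT t.author, t.alias, s.cnt
    FROM registry t
    CROSS JOIN (SELECT COUNT(*) AS cnt
                FROM registry
                WHERE status = 'OK' AND type = 'H1') s
    WHERE t.status = 'OK' AND t.type = 'H1'
""").fetchall()
print(rows)
```

Every returned row carries the identical total count, as in the original query.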
|
try this:
```
SELECT author, alias, COUNT(1) AS cnt
FROM registry
WHERE status='OK' AND type='H1'
group by author, alias
```
|
Simplify select with count
|
[
"",
"mysql",
"sql",
"select",
"count",
""
] |
The [problem statement](http://sqlzoo.net/wiki/SELECT_from_WORLD_Tutorial) is:
> Put the continents right...
>
> * Oceania becomes Australasia
> * Countries in Eurasia and Turkey go to Europe/Asia
> * Caribbean islands starting with 'B' go to North America, other Caribbean islands go to South America
>
> Show the name, the original continent and the new continent of all
> countries.
My solution:
```
SELECT name, continent,
CASE WHEN continent='Oceania' THEN 'Australasia'
WHEN continent IN ('Europe', 'Asia') THEN 'Europe/Asia'
WHEN name='Turkey' THEN 'Europe/Asia'
WHEN continent='Caribbean' AND name LIKE 'B%' THEN 'North America'
WHEN continent='Caribbean' AND name NOT LIKE 'B%' THEN 'South America'
ELSE continent END
FROM world
```
The result I get from sqlzoo is "Wrong answer. Some of the data is incorrect.".
|
This works for me. Don't ask me why I have to use the ORDER BY (didn't work without it).
```
SELECT name, continent,
CASE WHEN continent='Oceania' THEN 'Australasia'
WHEN continent = 'Eurasia' THEN 'Europe/Asia'
WHEN name='Turkey' THEN 'Europe/Asia'
WHEN continent='Caribbean' AND name LIKE 'B%' THEN 'North America'
WHEN continent='Caribbean' AND name NOT LIKE 'B%' THEN 'South America'
ELSE continent END
FROM world ORDER BY name
```
|
Looks to be a bug in their system unless I'm reading the question wrong:
```
SELECT name, continent,
CASE WHEN continent='Oceania' THEN 'Australasia'
WHEN continent IN ('Eurasia') THEN 'Europe/Asia'
WHEN name='Turkey' THEN 'Europe/Asia'
WHEN continent='Caribbean' AND name LIKE 'B%' THEN 'North America'
WHEN continent='Caribbean' AND name NOT LIKE 'B%' THEN 'South America'
ELSE continent END
FROM world
order by name
```
If you add in "order by name" it gives a correct answer with the above query. However, if you do not include the order by, it marks it as incorrect. As to why, I am not sure.
|
What is the solution to 13th part of 'select from world' tutorial on sqlzoo?
|
[
"",
"sql",
"case",
"sql-like",
"case-when",
""
] |
Four simple SELECT statements:
```
SELECT 33883.50 * -1;
SELECT 33883.50 / -1.05;
SELECT 33883.50 * -1 / 1.05;
SELECT (33883.50 * -1) / 1.05;
```
But the results are not as I would expect:
```
-33883.50
-32270.000000
-32269.96773000
-32270.000000
```
That third result is the one that seems questionable. I can see what is happening, first SQL Server evaluates this:
```
SELECT -1 / 1.05;
```
Getting an answer of:
```
-0.952380
```
Then it takes that answer and uses it to perform this calculation:
```
SELECT 33883.50 * -0.952380;
```
To get the (wrong) answer of:
```
-32269.96773000
```
But why is it doing this?
|
In your example
```
33883.50 * -1 / 1.05
```
is evaluated as
```
33883.50 * (-1 / 1.05)
```
instead of
```
(33883.50 * -1) / 1.05
```
which results in a loss in precision.
I played a bit with it. I used SQL Sentry Plan Explorer to see the details of how SQL Server evaluates expressions. For example,
```
2 * 3 * -4 * 5 * 6
```
is evaluated as
```
((2)*(3)) * ( -((4)*(5))*(6))
```
I'd explain it like this. In T-SQL unary minus is made to be the [same priority as subtraction](https://msdn.microsoft.com/en-us/library/ms190276.aspx), which is lower than multiplication. Yes,
> When two operators in an expression have the same operator precedence
> level, they are evaluated left to right based on their position in the
> expression.
, but here we have an expression that mixes operators with different priorities and parser follows these priorities to the letter. Multiplication has to go first, so it evaluates `4 * 5 * 6` at first and then applies unary minus to the result.
Normally ([say in C++](http://en.cppreference.com/w/cpp/language/operator_precedence)) unary minus has higher priority (like bitwise NOT) and such expressions are parsed and evaluated as expected. They should have made unary minus/plus same highest priority as bitwise NOT in T-SQL, but they didn't and this is the result. So, it is not a bug, but a bad design decision. It is even documented, though quite obscurely.
When you refer to Oracle - that the same example works differently in Oracle than in SQL Server:
* Oracle may have different [rules for operator precedence](https://msdn.microsoft.com/en-us/library/ms190276.aspx) than SQL Server. All it takes is to make unary minus highest priority as it should.
* Oracle may have different [rules for determining result precision and scale](https://msdn.microsoft.com/en-us/library/ms190476.aspx) when evaluating expressions with `decimal` type.
* Oracle may have different rules for rounding intermediate results. [SQL Server](https://msdn.microsoft.com/en-us/library/ms187746.aspx) "uses rounding when converting a number to a decimal or numeric value with a lower precision and scale".
* Oracle may be using completely different types for these kind of expressions, not `decimal`. In [SQL Server](https://msdn.microsoft.com/en-us/library/ms187746.aspx) "a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3."
* Even definition of `decimal` may be different in Oracle. Even in [SQL Server](https://msdn.microsoft.com/en-us/library/ms190476.aspx) "the default maximum precision of numeric and decimal data types is 38. In earlier versions of SQL Server, the default maximum is 28."
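To reproduce the question's numbers, here is a hedged sketch with Python's `decimal` module. The truncation to scale 6 is an assumption made only to match the intermediate value -0.952380 shown in the question, not a claim about SQL Server's exact rounding mode:

```python
from decimal import Decimal, ROUND_DOWN

a = Decimal("33883.50")

# SQL Server evaluates -1 / 1.05 first and keeps the quotient at a
# reduced scale; truncating to 6 decimal places reproduces the
# intermediate -0.952380 displayed in the question (assumption).
quotient = (Decimal("-1") / Decimal("1.05")).quantize(
    Decimal("0.000001"), rounding=ROUND_DOWN)

print(quotient)                               # the rounded intermediate
print(a * quotient)                           # the "wrong" third result
print((a * Decimal("-1")) / Decimal("1.05"))  # the expected grouping
```

The loss of precision comes entirely from the intermediate quotient being stored at a fixed scale before the multiplication.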
|
Do you know **[*BODMAS*](http://www.math-only-math.com/bodmas-rule.html)** rule. The answer is correct its not because of Sql Server, Its a basic mathematics.
First comes `Division` then comes the `Subtraction`, So always Division will happen before Subtraction
If you want to get correct answer then use proper parenthesis
```
SELECT (33883.50 * -1) / 1.05;
```
|
Why is SQL Server changing operation order and boxing the way it does?
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
There is a table named PRODUCT\_PRICE:
```
CREATE TABLE [TEST].[PRODUCT_PRICE]
(
[PRICE_ID] [bigint] NOT NULL,
[PRODUCT_ID] [bigint] NOT NULL,
[PRICE_DATE] [date] NOT NULL,
[IS_SALE_PRICE] [bit] NOT NULL,
[UNIT_PRICE] [decimal](18, 2) NOT NULL
)
```
It has the following records:
```
PRICE_ID PRODUCT_ID PRICE_DATE IS_SALE_PRICE UNIT_PRICE
-------- ---------- ---------- ------------- ----------
1 15 2015-05-12 False 0,05
2 15 2015-05-12 True 0,04
3 25 2015-05-12 False 1,45
4 35 2015-05-12 True 2,65
```
Edit: There can only be two prices - a purchase price and a sale price. There can't be 3 or more rows with same `PRODUCT_ID` and `PRICE_DATE`.
I want to write a `SELECT` statement that results in the following:
```
PRICE_ID PRODUCT_ID PRICE_DATE IS_SALE_PRICE UNIT_PRICE PRICE_ID_2 IS_SALE_PRICE_2 UNIT_PRICE_2
-------- ---------- ---------- ------------- ---------- ---------- --------------- ------------
1 15 2015-05-12 False 0,05 2 True 0,04
3 25 2015-05-12 False 1,45 NULL NULL NULL
4 35 2015-05-12 True 2,65 NULL NULL NULL
```
I tried `FULL OUTER JOIN` but it results in 4 rows rather than 3 which is correct but not what I am looking for:
```
SELECT
PR1.*,
PR2.PRICE_ID AS PRICE_ID_2,
PR2.IS_SALE_PRICE AS IS_SALE_PRICE_2,
PR2.UNIT_PRICE AS UNIT_PRICE_2
FROM PRODUCT_PRICE AS PR1
FULL OUTER JOIN PRODUCT_PRICE AS PR2
ON PR1.PRODUCT_ID = PR2.PRODUCT_ID
AND PR1.PRICE_DATE = PR2.PRICE_DATE
AND PR1.PRICE_ID <> PR2.PRICE_ID
AND PR1.IS_SALE_PRICE <> PR2.IS_SALE_PRICE
WHERE
PR1.PRICE_DATE = '20150512'
ORDER BY PR1.PRICE_ID
```
Result of the above query:
```
| PRICE_ID | PRODUCT_ID | PRICE_DATE | IS_SALE_PRICE | UNIT_PRICE | PRICE_ID_2 | IS_SALE_PRICE_2 | UNIT_PRICE_2 |
|----------|------------|------------|---------------|------------|------------|-----------------|--------------|
| 1 | 15 | 2015-05-12 | false | 0.05 | 2 | true | 0.04 |
| 2 | 15 | 2015-05-12 | true | 0.04 | 1 | false | 0.05 |
| 3 | 25 | 2015-05-12 | false | 1.45 | (null) | (null) | (null) |
| 4 | 35 | 2015-05-12 | true | 2.65 | (null) | (null) | (null) |
```
Basically I want to `JOIN` a table with itself and remove duplicates.
Note: `PRICE_ID` is an `identity` field (primary key). But natural key is the `PRODUCT_ID`, `PRICE_DATE` pair. I want a row for each unique `PRODUCT_ID` and `PRICE_DATE`.
[**SQL Fiddle**](http://sqlfiddle.com/#!6/5d84d/3/0)
|
If you're sure that there will only be a maximum of 2 rows for each `PRODUCT_ID` - `PRICE_DATE` combination, you can use conditional aggregation instead of `JOIN`:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/5d84d/1/0)
```
SELECT
PRICE_ID = MAX(CASE WHEN RN = 1 THEN PRICE_ID END),
PRODUCT_ID,
PRICE_DATE,
IS_SALE_PRICE = MAX(CASE WHEN RN = 1 THEN CAST(IS_SALE_PRICE AS INT) END),
UNIT_PRICE = MAX(CASE WHEN RN = 1 THEN UNIT_PRICE END),
PRICE_ID2 = MAX(CASE WHEN RN = 2 THEN PRICE_ID END),
IS_SALE_PRICE2 = MAX(CASE WHEN RN = 2 THEN CAST(IS_SALE_PRICE AS INT) END),
UNIT_PRICE2 = MAX(CASE WHEN RN = 2 THEN UNIT_PRICE END)
FROM (
SELECT *,
RN = ROW_NUMBER() OVER(PARTITION BY PRODUCT_ID, PRICE_DATE ORDER BY IS_SALE_PRICE)
FROM PRODUCT_PRICE
)t
GROUP BY PRODUCT_ID, PRICE_DATE
ORDER BY PRODUCT_ID, PRICE_DATE
```
**Result**
```
| PRICE_ID | PRODUCT_ID | PRICE_DATE | IS_SALE_PRICE | UNIT_PRICE | PRICE_ID2 | IS_SALE_PRICE2 | UNIT_PRICE2 |
|----------|------------|------------|---------------|------------|-----------|----------------|-------------|
| 1 | 15 | 2015-05-12 | 0 | 0.05 | 2 | 1 | 0.04 |
| 3 | 25 | 2015-05-12 | 0 | 1.45 | (null) | (null) | (null) |
| 4 | 35 | 2015-05-12 | 1 | 2.65 | (null) | (null) | (null) |
```
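The conditional-aggregation idea can be sketched with Python's built-in `sqlite3` (requires an SQLite build with window-function support, 3.25+; data as in the question, columns trimmed for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE product_price
                (price_id INT, product_id INT, price_date TEXT,
                 is_sale_price INT, unit_price REAL)""")
conn.executemany("INSERT INTO product_price VALUES (?,?,?,?,?)", [
    (1, 15, "2015-05-12", 0, 0.05),
    (2, 15, "2015-05-12", 1, 0.04),
    (3, 25, "2015-05-12", 0, 1.45),
    (4, 35, "2015-05-12", 1, 2.65),
])

# Number the (at most two) rows per product/date, then pivot the
# second row into extra columns with MAX(CASE ...).
rows = conn.execute("""
    SELECT MAX(CASE WHEN rn = 1 THEN price_id END)   AS price_id,
           product_id, price_date,
           MAX(CASE WHEN rn = 2 THEN price_id END)   AS price_id_2,
           MAX(CASE WHEN rn = 2 THEN unit_price END) AS unit_price_2
    FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY product_id, price_date
                                       ORDER BY is_sale_price) AS rn
          FROM product_price) t
    GROUP BY product_id, price_date
    ORDER BY product_id
""").fetchall()
print(rows)
```

Products with only one price row get `NULL` (Python `None`) in the second-row columns, matching the desired output.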
---
If you insist on using `JOIN`, you can use `FULL JOIN`:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/5d84d/2/0)
```
SELECT
PRICE_ID = CASE WHEN PP.PRICE_ID IS NOT NULL THEN PP.PRICE_ID ELSE SP.PRICE_ID END,
PRODUCT_ID = CASE WHEN PP.PRICE_ID IS NOT NULL THEN PP.PRODUCT_ID ELSE SP.PRODUCT_ID END,
PRICE_DATE = CASE WHEN PP.PRICE_ID IS NOT NULL THEN PP.PRICE_DATE ELSE SP.PRICE_DATE END,
IS_SALE_PRICE = CASE WHEN PP.PRICE_ID IS NOT NULL THEN PP.IS_SALE_PRICE ELSE SP.IS_SALE_PRICE END,
UNIT_PRICE = CASE WHEN PP.PRICE_ID IS NOT NULL THEN PP.UNIT_PRICE ELSE SP.UNIT_PRICE END,
PRICE_ID2 = CASE WHEN PP.PRICE_ID IS NOT NULL THEN SP.PRICE_ID END,
IS_SALE_PRICE2 = CASE WHEN PP.PRICE_ID IS NOT NULL THEN SP.IS_SALE_PRICE END,
UNIT_PRICE2 = CASE WHEN PP.PRICE_ID IS NOT NULL THEN SP.UNIT_PRICE END
FROM (
SELECT *
FROM PRODUCT_PRICE
WHERE IS_SALE_PRICE = 0
)AS PP
FULL JOIN(
SELECT *
FROM PRODUCT_PRICE
WHERE IS_SALE_PRICE = 1
)AS SP
ON PP.PRODUCT_ID = SP.PRODUCT_ID
AND PP.PRICE_DATE = SP.PRICE_DATE
ORDER BY PRODUCT_ID, PRICE_DATE
```
|
You are getting `PRICE_ID=1` vs `PRICE_ID =2` and `PRICE_ID=2` vs `PRICE_ID=1`
So you have a repeated row.
In the `ON` clause, you should force the `join` to happen only when `PRICE_ID1 < PRICE_ID2`.
Add this to the ON CLAUSE:
```
AND PR1.PRICE_ID < PR2.PRICE_ID
```
And use LEFT JOIN
With those changes you will get 4 rows; you also need to avoid row 2, because it is already "inside" row 1. So you only have to filter those records out in the where clause:
```
AND PR1.PRICE_ID in (select min(PRICE_ID) from PRODUCT_PRICE group by PRODUCT_ID)
```
|
How to JOIN a table with itself and display it as a single row
|
[
"",
"sql",
"sql-server",
""
] |
I have hardware groups, each containing many devices.
Example:
```
+ Room 1
|-- Computer
|-- Camera
+ Room 2
|-- Computer
|-- Switch
```
All devices are monitored using ping. When a device stops working, the program adds a row to a table recording the start of the break. When the device comes back, the program updates this row with the end of the break.
Getting the total break seconds for each device is easy.
What I need is the real total downtime of each group. Example:
```
Group Device Start End
Room 1 Computer 2015-05-12 01:40:00 2015-05-12 01:40:20
Room 1 Camera 2015-05-12 01:40:01 2015-05-12 01:40:27
Room 2 Computer 2015-05-12 03:43:03 2015-05-12 03:46:14
Room 2 Switch 2015-05-12 03:43:00 2015-05-12 03:46:12
Room 1 Camera 2015-05-12 07:12:10 2015-05-12 07:12:22
```
The **real** down time of **group** "Room 1" is 39 seconds (NOT 58).
```
01:40:00 - 01:40:20 = 20 seconds
01:40:01 - 01:40:27 = 26 seconds
07:12:10 - 07:12:22 = 12 seconds
```
Regarding the first two lines, here is why it is 27 seconds and not 46 seconds:
```
| 00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 |
| 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27 |
```
Well... I have many groups and many devices per group.
How can I do it using SQL?
To help with testing...
```
DECLARE @tblGroup TABLE (
id int,
name varchar(20)
)
INSERT INTO @tblGroup (id, name)
VALUES (1, 'Room 1'), (2, 'Room 2'), (3, 'Room 3'), (4, 'Room 4')
DECLARE @tblDevice TABLE (
id int,
name varchar(20),
group_id int
)
INSERT INTO @tblDevice (id, name, group_id)
VALUES (1, 'Computer', 1), (2, 'Camera', 1), (3, 'Computer', 2), (4, 'Switch', 2)
DECLARE @tblStatus TABLE (
id int,
device_id int,
dtStart datetime,
dtEnd datetime
)
INSERT INTO @tblStatus (id, device_id, dtStart, dtEnd)
VALUES (1, 1, '2015-05-12 01:40:00.0', '2015-05-12 01:40:20.0'),
(2, 2, '2015-05-12 01:40:01.0', '2015-05-12 01:40:27.0'),
(3, 3, '2015-05-12 03:43:03.0', '2015-05-12 03:46:14.0'),
(4, 4, '2015-05-12 03:43:00.0', '2015-05-12 03:46:12.0'),
(5, 2, '2015-05-12 07:12:10.0', '2015-05-12 07:12:22.0')
SELECT
s.id,
s.device_id,
g.name AS groupName,
d.name AS deviceName,
s.dtStart,
s.dtEnd
FROM @tblStatus s
INNER JOIN @tblDevice d
ON d.id = s.device_id
INNER JOIN @tblGroup g
ON g.id = d.group_id
```
|
You are looking to combine the overlapping ranges into "islands" and to measure the extent of each island. That is why this type of problem is sometimes called gaps and islands.
Let me assume that you are using SQL Server 2012+. That slightly simplifies the calculations. The idea is to determine the starts and ends for overlapping groups. The following determines if a group has overlaps:
```
select t.*,
(case when exists (select 1
from @tblstatus t2
where t2.group_id = t.group_id and
t2.dtend > t.dtstart and t2.dtstart <= t.dtstart and
t2.id < t.id
)
then 0 else 1 end) as NoOverlapBefore
from @tblstatus t
```
With this, you can assign to each row in the table the number of "NoOverlapBefore" records that occur before it and use the result for aggregation:
```
with t as (
select t.*,
(case when exists (select 1
from @tblstatus t2
where t2.group_id = t.group_id and
t2.dtend > t.dtstart and t2.dtstart <= t.dtstart and
t2.id < t.id
)
then 0 else 1 end) as NoOverlapBefore
from @tblstatus t
)
select group_id,
datediff(second, min(dtstart), max(dtend)) as total_seconds
from (select t.*,
sum(NoOverlapBefore) over (partition by group_id order by dtstart, id) as grp
from @tblstatus t
) t
group by group_id;
```
EDIT:
I misunderstood a few things about your data structure. The SQL Fiddle is a big help. [Here](http://sqlfiddle.com/#!6/b9fde2/13) is one that actually works.
The query is:
```
WITH t AS (
SELECT t.*, d.group_id,
(CASE WHEN EXISTS (SELECT 1
FROM tblstatus t2 JOIN
tbldevice d2
ON d2.id = t2.device_id
WHERE d2.group_id = d.group_id AND
t2.dtend > t.dtstart AND
t2.dtstart <= t.dtstart AND
t2.id <> t.id
)
THEN 0 ELSE 1
END ) AS NoOverlapBefore
FROM tblstatus t JOIN
tblDevice d
ON t.device_id = d.id
)
SELECT group_id, SUM(total_seconds) as total_seconds
FROM (SELECT group_id, grp,
DATEDIFF(SECOND, MIN(dtstart), MAX(dtend)) AS total_seconds
FROM (SELECT t.*,
sum(t.NoOverlapBefore) over (partition BY group_id
ORDER BY t.dtstart, t.id) AS grp
FROM t
) t
GROUP BY grp, group_id
) t
GROUP BY group_id;
```
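The underlying interval-merging ("islands") idea, independent of any SQL dialect, can be sketched procedurally in Python using the question's sample data:

```python
from datetime import datetime
from itertools import groupby

def parse(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

outages = [  # (group, start, end) from the question's sample
    ("Room 1", "2015-05-12 01:40:00", "2015-05-12 01:40:20"),
    ("Room 1", "2015-05-12 01:40:01", "2015-05-12 01:40:27"),
    ("Room 2", "2015-05-12 03:43:03", "2015-05-12 03:46:14"),
    ("Room 2", "2015-05-12 03:43:00", "2015-05-12 03:46:12"),
    ("Room 1", "2015-05-12 07:12:10", "2015-05-12 07:12:22"),
]

def merged_seconds(intervals):
    """Merge overlapping intervals ("islands") and sum their lengths."""
    total, cur_start, cur_end = 0, None, None
    for start, end in sorted(intervals):
        if cur_end is None or start > cur_end:   # gap: start a new island
            if cur_end is not None:
                total += (cur_end - cur_start).total_seconds()
            cur_start, cur_end = start, end
        else:                                    # overlap: extend the island
            cur_end = max(cur_end, end)
    if cur_end is not None:
        total += (cur_end - cur_start).total_seconds()
    return int(total)

downtime = {}
for grp, items in groupby(sorted(outages), key=lambda r: r[0]):
    downtime[grp] = merged_seconds([(parse(s), parse(e)) for _, s, e in items])
print(downtime)
```

Room 1 comes out at 39 seconds, exactly as the question works out by hand; the SQL queries above compute the same merge set-wise.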
|
A bit convoluted, but I have a working solution.
The trick was to alter the data presentation.
EDIT : This solution works as long as there are never two events that take place on the same device at the same time.
I left a SQL Fiddle here : <http://sqlfiddle.com/#!6/59e80/8/0>
```
declare @tblGroup table (id int, name varchar(20))
insert into @tblGroup (id, name) values (1, 'Room 1'), (2, 'Room 2'), (3, 'Room 3'), (4, 'Room 4')
declare @tblDevice table (id int, name varchar(20), group_id int)
insert into @tblDevice (id, name, group_id) values (1, 'Computer', 1), (2, 'Camera', 1), (3, 'Computer', 2), (4, 'Switch', 2)
declare @tblStatus table (id int, device_id int, dtStart datetime, dtEnd datetime)
insert into @tblStatus (id, device_id, dtStart, dtEnd) values
(1, 1, '2015-05-12 01:40:00.0', '2015-05-12 01:40:20.0'),
(2, 2, '2015-05-12 01:40:01.0', '2015-05-12 01:40:27.0'),
(3, 3, '2015-05-12 03:43:03.0', '2015-05-12 03:46:14.0'),
(4, 4, '2015-05-12 03:43:00.0', '2015-05-12 03:46:12.0'),
(5, 2, '2015-05-12 07:12:10.0', '2015-05-12 07:12:22.0');
WITH eventlist as
(select
s.id,
s.device_id,
g.Id AS groupId,
g.name as groupName,
d.name as deviceName,
s.dtStart AS dt,
'GO_DOWN' AS eventtype,
1 AS eventcount
from
@tblStatus s
inner join
@tblDevice d on d.id = s.device_id
inner join
@tblGroup g on g.id = d.group_id
UNION
select
s.id,
s.device_id,
g.Id AS groupId,
g.name as groupName,
d.name as deviceName,
s.dtEND AS dt,
'BACK_UP' AS eventtype,
-1 AS eventcount
from
@tblStatus s
inner join
@tblDevice d on d.id = s.device_id
inner join
@tblGroup g on g.id = d.group_id
),
breakdown AS(
SELECT
principal.groupId
,principal.groupName
,principal.dt
,principal.deviceName
,principal.eventtype
,was_broken = ISNULL(SUM(before.eventcount),0)
,is_broken = ISNULL(SUM(before.eventcount),0) + principal.eventcount
FROM
eventlist principal
LEFT JOIN eventlist before ON before.groupId = principal.groupId
AND 1 = CASE WHEN before.dt < principal.dt THEN 1
WHEN before.dt = principal.dt AND before.device_id < principal.device_id THEN 1
ELSE 0 END
GROUP BY
principal.eventcount
,principal.deviceName
,principal.eventtype
,principal.groupId
,principal.groupName
,principal.dt
)
,breakdownstart AS
( SELECT groupId,dt, r = RANK() OVER (PARTITION BY groupId ORDER BY dt) FROM breakdown WHERE was_broken = 0 AND is_broken =1 )
,breakdownend AS
( SELECT groupId,dt, r = RANK() OVER (PARTITION BY groupId ORDER BY dt) FROM breakdown WHERE was_broken = 1 AND is_broken = 0 )
,breakgroup as
(SELECT s.groupId
,s.r
, break_start = s.dt
, break_end = e.dt FROM breakdownstart s INNER JOIN breakdownend e ON e.r = s.r AND e.groupId = s.groupId)
SELECT groupId,SUM(DATEDIFF(SECOND,break_start,break_end)) AS break_length FROM breakgroup GROUP BY breakgroup.groupId
```
|
Unioning overlapping time ranges, then summing the total time
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with 32 Million rows and 31 columns in PostgreSQL 9.2.10. I am altering the table by adding columns with updated values.
For example, if the initial table is:
```
id initial_color
-- -------------
1 blue
2 red
3 yellow
```
I am modifying the table so that the result is:
```
id initial_color modified_color
-- ------------- --------------
1 blue blue_green
2 red red_orange
3 yellow yellow_brown
```
I have code that will read the initial\_color column and update the value.
Given that my table has 32 million rows and that I have to apply this procedure on five of the 31 columns, what is the most efficient way to do this? My present choices are:
1. Copy the column and update the rows in the new column
2. Create an empty column and insert new values
I could do either option with one column at a time or with all five at once. The columns types are either `character varying` or `character`.
|
> The columns types are either character varying or character.
Don't use `character`, that's a misunderstanding. `varchar` is ok, but I would suggest just `text` for arbitrary character data.
* [Any downsides of using data type "text" for storing strings?](https://stackoverflow.com/questions/20326892/any-downsides-of-using-data-type-text-for-storing-strings/20334221#20334221)
> Given that my table has 32 million rows and that I have to apply this
> procedure on five of the 31 columns, what is the most efficient way to do this?
If you don't have objects (views, foreign keys, functions) depending on the existing table, the most efficient way is to create a new table. Something like this (details depend on your installation):
```
BEGIN;
LOCK TABLE tbl_org IN SHARE MODE; -- to prevent concurrent writes
CREATE TABLE tbl_new (LIKE tbl_org INCLUDING STORAGE INCLUDING COMMENTS);
ALTER TABLE tbl_new ADD COLUMN modified_color text
, ADD COLUMN modified_something text;
-- , etc
INSERT INTO tbl_new (<all columns in order here>)
SELECT <all columns in order here>
, myfunction(initial_color) AS modified_color -- etc
FROM tbl_org;
-- ORDER BY tbl_id; -- optionally order rows while being at it.
-- Add constraints and indexes like in the original table here
DROP TABLE tbl_org;
ALTER TABLE tbl_new RENAME TO tbl_org;
COMMIT;
```
If you have depending objects, you need to do more.
Either way, be sure to add all five at once. If you update each in a separate query, you write another row version each time due to the MVCC model of Postgres.
Related cases with more details, links and explanation:
* [Updating database rows without locking the table in PostgreSQL 9.2](https://stackoverflow.com/questions/15770734/updating-database-rows-without-locking-the-table-in-postgresql-9-2/15771103#15771103)
* [Best way to populate a new column in a large table?](https://dba.stackexchange.com/a/52531/3684)
* [Optimizing bulk update performance in PostgreSQL](https://dba.stackexchange.com/a/41111/3684)
While creating a new table you might also order columns in an optimized fashion:
* [Calculating and saving space in PostgreSQL](https://stackoverflow.com/questions/2966524/calculating-and-saving-space-in-postgresql/7431468#7431468)
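The rewrite-then-rename pattern can be sketched with Python's built-in `sqlite3` (the `_mod` suffix stands in for whatever your update function computes; SQLite syntax differs slightly from Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_org (id INTEGER PRIMARY KEY, initial_color TEXT)")
conn.executemany("INSERT INTO tbl_org VALUES (?, ?)",
                 [(1, "blue"), (2, "red"), (3, "yellow")])

# Rewrite the table once, deriving the new column in the same pass,
# then swap names -- instead of ALTER + UPDATE, which in Postgres
# would write a second row version for every row.
conn.executescript("""
    BEGIN;
    CREATE TABLE tbl_new AS
        SELECT id, initial_color,
               initial_color || '_mod' AS modified_color
        FROM tbl_org;
    DROP TABLE tbl_org;
    ALTER TABLE tbl_new RENAME TO tbl_org;
    COMMIT;
""")
rows = conn.execute("SELECT * FROM tbl_org ORDER BY id").fetchall()
print(rows)
```

After the swap, queries keep using the original table name, now with the extra column populated.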
|
Maybe I'm misreading the question, but as far as I know, you have 2 possibilities for creating a table with the extra columns:
1. `CREATE TABLE`
This would create a new table and filling could be done using
* `CREATE TABLE .. AS SELECT..` for filling with creation or
* using a separate `INSERT...SELECT...` later on
Both variants are not what you seem to want to do, as you stated *solution without listing all the fields*.
Also this would require all data (plus the new fields) to be copied.
2. `ALTER TABLE...ADD ...`
This creates the new columns. As I'm not aware of any possibility to reference existing column values, you will need an additional `UPDATE ..SET...` for filling in values.
So, I am not seeing any way to realize a procedure that follows your choice 1.
Nevertheless, copying the (column) data just to overwrite it in a second step would be suboptimal in any case. Altering a table to add new columns does minimal I/O. So even if there were a way to execute your choice 1, choice 2 promises performance that is better by factors.
Thus, use 2 statements: one `ALTER TABLE` adding all your new columns in one go, and then an `UPDATE` providing the new values for these columns. That will achieve what you want.
|
Column Copy and Update vs. Column Create and Insert
|
[
"",
"sql",
"postgresql",
"database-administration",
"postgresql-performance",
"bulkupdate",
""
] |
In SQL Server, if my `SELECT` statement in a subquery returns no row, is the result of the subquery then equal to `NULL`? I did some research, but I am not sure about it.
Example:
```
IF (SELECT TOP 1 CLMN1 FROM SOMETABLE) IS NOT NULL THEN
....
```
I am asking to understand the behaviour of the if-statement above.
|
NULL means no value, for example that the "box" for a certain column in a certain row is empty. NO ROW means that there are no rows.
|
Looks like the answer is yes:
```
DECLARE @Test TABLE (Id INT)
INSERT INTO @Test VALUES (1)
SELECT * FROM @Test WHERE Id = 2
SELECT CASE WHEN (SELECT * FROM @Test WHERE Id = 2) IS NULL THEN 1 ELSE 0 END
```
EDIT: after you updated your question, I think I should add that instead of checking for rows with IS NULL, you should use the following, which can be better optimised by the server:
```
IF EXISTS(SELECT * FROM @Test WHERE Id = 2)
BEGIN
-- Whatever
END
```
|
Is a subquery, which is returning no row, equal to NULL?
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I have the following database table:
```
LNr OrderNr Ident Amount
1 128 3 123.00
2 128 14 200.00
3 1290 3 300.00
4 13400 3 637.00
```
I want to calculate the sum of the `Amount` fields where `Ident` equals 3, but only for orders that do not also have an `Ident` of 14. So I want the database server to return only rows 3 and 4 (where `OrderNr` = 1290 and 13400).
I tried:
```
SELECT SUM(Amount) FROM table WHERE Ident = '3'
```
But that does not work of course, because this also sums row 1 (which I do not want, because the same order also contains a row with `Ident` = 14). I tried some other queries, but to no avail.
|
You can add another `not exists` condition:
```
SELECT SUM(Amount)
FROM mytable x
WHERE Ident = 3 AND
NOT EXISTS (SELECT *
FROM mytable y
WHERE x.OrderNr = y.OrderNr AND y.Ident = 14)
```
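A hedged sketch of the `NOT EXISTS` query using Python's built-in `sqlite3` and the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (LNr INT, OrderNr INT, Ident INT, Amount REAL)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1, 128,   3,  123.0),
    (2, 128,   14, 200.0),
    (3, 1290,  3,  300.0),
    (4, 13400, 3,  637.0),
])

# Sum Ident=3 rows, but only for orders with no Ident=14 row.
total = conn.execute("""
    SELECT SUM(Amount) FROM t x
    WHERE x.Ident = 3
      AND NOT EXISTS (SELECT 1 FROM t y
                      WHERE y.OrderNr = x.OrderNr AND y.Ident = 14)
""").fetchone()[0]
print(total)
```

Order 128 is excluded entirely because of its `Ident = 14` row, leaving 300.00 + 637.00.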
|
The only way to do that is to further qualify your WHERE clause, by either excluding row 1, or including just rows 3 and 4. You need additional filtering. There must be other tables joined to that table that should allow you to filter gracefully based on whatever business rules you have defined.
|
How to return only one result from sql table?
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I have a table in sql server 2012, where one column is nvarchar. It contains Romanian characters. We've noticed that only some of the letters 'Ș' do not show in reports at all, so I found that it depends of the keyboard settings.
There are two different keyboard settings for Romanian - Standard and Legacy. The letter 'Ș' inserted from the Romanian (Standard) keyboard has ASCII code 63; from Legacy it's 170.
The letter 'Ş' with CHAR(170) shows in reports, but CHAR(63) doesn't - even though it is (supposed to be) the same letter.
It would be simple if I could replace char(63) with char(170), but I cannot detect rows with character 63. The next select doesn't return rows:
```
select * from table1 where columnname like '%'+CHAR(63)+'%'
```
even though if I do `select ASCII(SUBSTRING(columnname , 1, 1))` it returns me '63'.
even `select charindex(char(63), columnname)` - returns me 0
I also tried to do collation:
```
select * from table1 where columnname COLLATE Latin1_general_CI_AI like N'%s%'
```
it doesn't help - it returns only rows with 's' and char(170).
Please help me find these rows with wrong 'Ş'
|
So firstly from my comments, `CHAR(63)` is misleading as it represents a character that sql server is unable to display:
[Unable to replace Char(63) by SQL query](https://stackoverflow.com/questions/6441208/unable-to-replace-char63-by-sql-query)
The issue is possibly down to your selected collation, as if I run this sample I get the 2 rows containing the special characters:
```
CREATE TABLE #temp ( val NVARCHAR(50) )
INSERT INTO #temp
( val )
VALUES ( N'Șome val 1' ),
( N'some val 2' ),
( N'șome other val 3' )
SELECT *
FROM #temp
WHERE val COLLATE Latin1_General_BIN LIKE N'%ș%'
OR val COLLATE Latin1_General_BIN LIKE N'%Ș%'
DROP TABLE #temp
```
**Output**
```
val
=================
Șome val 1
șome other val 3
```
The specified collation is: Latin1\_General\_BIN, as found in this post:
[replace only matches the beginning of the string](https://stackoverflow.com/questions/27546811/replace-only-matches-the-beginning-of-the-string)
|
```
WHERE columnname LIKE N'%'+NCHAR(536)+'%'
```
This should help you find the character even if it was inserted as an unknown character as in the first insert below.
```
DECLARE @Table TABLE (text nvarchar(50))
INSERT INTO @Table(text)
SELECT 'Ș'
UNION ALL
SELECT N'Ș'
SELECT UNICODE(text) UNICODE
FROM @Table
```
Results:
```
UNICODE
63
536
```
'Ș' is NCHAR(536) and 'ș' is NCHAR(537).
If you then do:
```
SELECT * FROM @Table WHERE text LIKE N'%'+NCHAR(536)+'%'
```
Results:
```
text
?
Ș
```
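The NCHAR arguments above are just Unicode code points, which you can double-check from any language; a quick sketch in Python (the variable names are mine):

```python
# Unicode code points for the three letters discussed above.
s_comma_upper = "Ș"   # LATIN CAPITAL LETTER S WITH COMMA BELOW (Romanian Standard)
s_comma_lower = "ș"
s_cedilla = "Ş"       # LATIN CAPITAL LETTER S WITH CEDILLA (the "legacy" letter)

print(ord(s_comma_upper))  # 536
print(ord(s_comma_lower))  # 537
print(ord(s_cedilla))      # 350
```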
|
Cannot find letter 'ș' or 'Ș' inserted from Romanian (Standard) keyboard
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
How do I count the average amount of times a given number appears in a database?
```
id | ...
----------
1 | ...
5 | ...
2 | ...
3 | ...
3 | ...
1 | ...
6 | ...
4 | ...
3 | ...
...| ...
```
`id` corresponds to the id of the user. Perhaps the table is for customer orders or donations made by a user. For the above table:
> id 1 = 2 entries
> id 2 = 1 entry
> id 3 = 3 entries
> id 4 = 1 entry
> id 5 = 1 entry
> id 6 = 1 entry
> Average = (2+1+3+1+1+1)/6 = 1.5 entries per user
The average number of orders/donations made per user is 1.5 to give an example.
I could do something like the below:
```
$getTotalEntries = $db->prepare("
SELECT *
FROM table
");
$getTotalEntries->execute();
$totalEntries = $getTotalEntries->rowCount();
$getGroupedEntries = $db->prepare("
SELECT *
FROM table
GROUP BY id
");
$getGroupedEntries->execute();
$groupedEntries = $getGroupedEntries->rowCount();
$average = $totalEntries/$groupedEntries;
```
I'm hoping for a single SQL request, however. Incidentally, the below gives me the number of occurrences of a given id, but I cannot `AVG()` them.
```
$getAverageEntries = $db->prepare("
SELECT id, COUNT(*)
FROM table
GROUP BY id
"); // works, returns the 2,1,3,1,... from before
$getAverageEntries = $db->prepare("
SELECT AVG(COUNT(*))
FROM table
GROUP BY id
"); // won't find aggregate count
```
|
How about this?
```
select count(id) / count(distinct id) as avgEntriesPerUser
from table t;
```
The only issue with this would be a `NULL` value for `id`. If this occurred (and I find it highly unlikely for a column named `id`), then the above ignores those rows entirely. It can be modified to take this situation into account.
|
You mean?
```
select avg(countPerID) from (
select id, count(*) as countPerID from table group by id) x
```
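Both approaches can be checked against the question's sample ids using SQLite from Python (the table name `entries` is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER)")
conn.executemany("INSERT INTO entries VALUES (?)",
                 [(1,), (5,), (2,), (3,), (3,), (1,), (6,), (4,), (3,)])

# Derived-table form: average of the per-id counts.
avg = conn.execute("""
    SELECT AVG(countPerID) FROM (
        SELECT id, COUNT(*) AS countPerID FROM entries GROUP BY id) x
""").fetchone()[0]
print(avg)  # 1.5
```

The per-id counts are (2, 1, 3, 1, 1, 1), so the average comes out to 9/6 = 1.5, matching the worked example in the question.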
|
Find average amount of times a value appears in table
|
[
"",
"sql",
"pdo",
""
] |
I'm stuck and I could use a little input. How can I insert the SUM(amount) of all values of "accountKonto" of table "Buchung" into one row of the table "Erfolg"?
"Buchung": id accountKonto amount
"Erfolg": id totalAmountAccountKonto1 totalAmountAccountKonto2 …
For each possible "accountKonto" in "Buchung", there is one column in "Erfolg", into which I need to insert the sum. At the end, I need to have one new row in "Erfolg" that should have all sums of "amount" for each "accountKonto" that exists in "Buchung". Makes sense?
It should begin like this:
```
SELECT SUM(amount) FROM Buchung …
```
But how do I tell it to put each sum into the corresponding field of table Erfolg?
Thanks a lot for your help!
Gary
|
You should combine `INSERT .. SELECT` with [`PIVOT`](https://technet.microsoft.com/en-US/library/ms177410%28v=sql.105%29.aspx).
### Using PIVOT (available in SQL Server and Oracle, only):
```
SELECT *
FROM (
SELECT accountKonto, amount
FROM Buchung
) t
PIVOT (
SUM(amount) FOR accountKonto IN ([1], [2], [3])
) AS p
```
The above query produces something like:
```
1 2 3
---------------------
28.00 17.00 15.35
```
### If you're not using SQL Server:
... then you cannot use `PIVOT`, but you can emulate it easily:
```
SELECT
SUM(CASE accountKonto WHEN 1 THEN amount END) totalAmountAccountKonto1,
SUM(CASE accountKonto WHEN 2 THEN amount END) totalAmountAccountKonto2,
SUM(CASE accountKonto WHEN 3 THEN amount END) totalAmountAccountKonto3
FROM Buchung
```
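A quick way to verify the conditional-aggregation form is SQLite via Python; the amounts below are made up so the sums match the sample output above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Buchung (accountKonto INTEGER, amount REAL)")
conn.executemany("INSERT INTO Buchung VALUES (?, ?)",
                 [(1, 10.0), (1, 18.0), (2, 17.0), (3, 15.35)])

# Emulated pivot: one SUM(CASE ...) per accountKonto value.
row = conn.execute("""
    SELECT
      SUM(CASE accountKonto WHEN 1 THEN amount END),
      SUM(CASE accountKonto WHEN 2 THEN amount END),
      SUM(CASE accountKonto WHEN 3 THEN amount END)
    FROM Buchung
""").fetchone()
print(row)  # (28.0, 17.0, 15.35)
```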
### Inserting that into your other table:
Just use `INSERT .. SELECT` as follows:
```
INSERT INTO Erfolg (
totalAmountAccountKonto1,
totalAmountAccountKonto2,
totalAmountAccountKonto3
)
SELECT p.[1], p.[2], p.[3]
FROM (
SELECT accountKonto, amount
FROM Buchung
) t
PIVOT (
SUM(amount) FOR accountKonto IN ([1], [2], [3])
) AS p;
```
... or if `PIVOT` is not available:
```
INSERT INTO Erfolg (
totalAmountAccountKonto1,
totalAmountAccountKonto2,
totalAmountAccountKonto3
)
SELECT
SUM(CASE accountKonto WHEN 1 THEN amount END) AS totalAmountAccountKonto1,
SUM(CASE accountKonto WHEN 2 THEN amount END) AS totalAmountAccountKonto2,
SUM(CASE accountKonto WHEN 3 THEN amount END) AS totalAmountAccountKonto3
FROM Buchung
```
|
The design you are using is not very good - you run out of columns pretty fast (for MySQL - approx. 4k columns, depends a bit).
I'd use design for aggregate table, similar to:
```
iteration | accountNr | sum
```
Now, to fill the table, you just do (replace 1 with some timestamp, iteration ID, link to some aggregations table, your choice):
```
INSERT INTO aggregate (iteration, accountNr, sum) SELECT 1, accountNr, SUM(amount) FROM data GROUP BY accountNr
```
Now that you have data for each iteration, you can work on it in interfaces or some pivot tables just as you wish.
|
How can I insert values from SELECT SUM into other table?
|
[
"",
"mysql",
"sql",
"sql-server",
"filemaker",
""
] |
I'm using SQLite and I would like to know how to generalize the following SQL statement so that inequality conditions can be imposed on an arbitrary number of columns, not just column `x`:
```
SELECT * FROM t WHERE x BETWEEN xMin AND xMAX
```
I anticipated that this could be achieved in the following manner, but to no avail:
```
SELECT * FROM t
WHERE (x BETWEEN xMin AND xMAX)
AND (y BETWEEN yMin AND yMax)
AND (z BETWEEN zMin AND zMax) ...
```
Any suggestions would be most welcome.
|
Your statement should work, e.g. with Oracle:
```
SELECT *
FROM (SELECT 10 x, 40 y, 50 z
FROM dual)
WHERE (x BETWEEN 8 AND 12)
AND (y BETWEEN 30 AND 42)
AND (z BETWEEN 0 AND 100);
```
See [Fiddle demo](http://sqlfiddle.com/#!4/9eecb7db59d16c/736). With MySQL:
```
SELECT *
FROM (SELECT 10 AS x, 40 AS y, 50 AS z) AS t
WHERE (x BETWEEN 8 AND 12)
AND (y BETWEEN 30 AND 42)
AND (z BETWEEN 0 AND 100);
```
Another [MySQL Fiddle](http://sqlfiddle.com/#!9/9eecb7d/617), [SQLite Fiddle](http://sqlfiddle.com/#!5/9eecb7/45), [PostgreSQL Fiddle](http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/9) :)
***p.s.*** (update as per comment): take care, neither
```
SELECT *
FROM (SELECT "10" AS x, "40" AS y, "50" AS z) AS t
WHERE (x BETWEEN 8 AND 12)
AND (y BETWEEN 30 AND 42)
AND (z BETWEEN 0 AND 100);
```
nor
```
SELECT *
FROM (SELECT 10 AS x, 40 AS y, 50 AS z) AS t
WHERE (x BETWEEN "8" AND "12")
AND (y BETWEEN 30 AND 42)
AND (z BETWEEN 0 AND 100);
```
works. Both failed execution in *DB Browser for SQLite* ([sqlitebrowser.org](http://sqlitebrowser.org)).
|
Works fine in SQLite 3.
```
sqlite> CREATE TABLE test (x INT, y INT , z INT);
sqlite> INSERT INTO test (x,y,z) VALUES (1,2,3), (2,3,4), (4,5,6);
sqlite> SELECT * FROM test WHERE (x BETWEEN 0 AND 5) AND (y BETWEEN 0 AND 5) AND (z BETWEEN 0 AND 5)
...> ;
1|2|3
2|3|4
sqlite>
```
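The same check, scripted with Python's built-in sqlite3 module (an `ORDER BY` is added so the row order is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (x INT, y INT, z INT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)",
                 [(1, 2, 3), (2, 3, 4), (4, 5, 6)])

# Multiple BETWEEN conditions combined with AND, as in the question.
rows = conn.execute("""
    SELECT * FROM test
    WHERE (x BETWEEN 0 AND 5)
      AND (y BETWEEN 0 AND 5)
      AND (z BETWEEN 0 AND 5)
    ORDER BY x
""").fetchall()
print(rows)  # [(1, 2, 3), (2, 3, 4)]
```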
|
How do you write an SQL query which uses multiple BETWEEN optimizations?
|
[
"",
"sql",
"sqlite",
""
] |
I've a table `ErrorLog` with following fields
```
Id - Primary key
Message- varchar(200)
CustomerId - int
CreatedDate - DateTime
```
Everytime a Logged in user gets an error, it is logged in `ErrorLog` table.
Now I want to fetch *distinct Message* along with *all other fields* where CreatedDate is equal to `getdate()-1`.
**For Example:**
If i have these values in my table

then i should get output as

after query.
I'm not able to get distinct Message values by CreatedDate. Any ideas how to achieve this?
|
Something like this should work:
```
with cte as(
select *, row_number() over(partition by Message order by CreatedDate desc, CustomerID) as rn from Table
where CreatedDate < cast(getdate() as date) and CreatedDate >= dateadd(dd, -1, cast(getdate() as date))
)
select * from cte where rn = 1
```
|
You need a ROW\_NUMBER to find the latest row per message:
```
select Id, Message, CustomerId, CreatedDate
from
(
select Id, Message, CustomerId, CreatedDate,
row_number ()
over (partition by Message
order by CreatedDate desc) as rn
from ErrorLog
where your-condition-here
) as dt
where rn = 1
```
|
Get distinct value of column with date check
|
[
"",
"sql",
"sql-server-2008",
""
] |
```
+------+------+
| id | no |
+------+------+
| 1 | 1 |
| 11 | 1 |
| 21 | 1 |
+------+------+
```
I want to update the 2nd row's `no` to 2.
The query I can use is:
```
update test set no = 2 where id = 11;
```
How can I achieve the same without a WHERE clause?
|
To update the "second" row in the table, the row that has the second smallest id value...
```
UPDATE test t
JOIN ( SELECT r.id
FROM test r
ORDER BY r.id
LIMIT 1,1
) s
ON s.id = t.id
SET t.no = 2
```
**EDIT**
As a followup to clarify the results of the query above...
In the case where `id` is not unique in the table, the query could potentially update *more than one* row. The inline view query (`s`) gets the id value from the "second" row, after the rows are ordered by id value. Then *all* rows that have that same id value would be updated.
This is an issue only if `id` is *not* unique; if `id` is unique, the statement would update (at most) one row.
For example, if the contents of the table was:
```
+-----+-----+
| id | no |
+-----+-----+
| 1 | 1 |
| 11 | 3 | <-- "second" row, ordered by id ascending
| 11 | 4 | <-- id from third row matches id from second row
| 21 | 1 |
+-----+-----+
```
The result of the query above would be to update the *two* rows that have `id` value of `11`.
```
+-----+-----+
| id | no |
+-----+-----+
| 1 | 1 |
| 11 | 2 | <-- updated
| 11 | 2 | <-- updated
| 21 | 1 |
+-----+-----+
```
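To see the same idea outside MySQL, here is a sketch with SQLite from Python; SQLite spells `LIMIT 1,1` as `LIMIT 1 OFFSET 1`, and `IN` keeps the same "all rows with that id" semantics described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, no INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)", [(1, 1), (11, 1), (21, 1)])

# Update the row(s) whose id is the second-smallest id in the table.
conn.execute("""
    UPDATE test SET no = 2
    WHERE id IN (SELECT id FROM test ORDER BY id LIMIT 1 OFFSET 1)
""")
rows = conn.execute("SELECT id, no FROM test ORDER BY id").fetchall()
print(rows)  # [(1, 1), (11, 2), (21, 1)]
```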
|
I am not sure **why** you would want to but...
```
UPDATE `test` SET `no` = IF(`id`=11, 1, `no`);
```
For the record, I would be surprised if this didn't perform horribly as it would go through every row in the table.
|
Update without where clause
|
[
"",
"mysql",
"sql",
""
] |
I am very new to Hive and SQL and I have a question about how I would go about the following:
I have table A:
```
Name id
Amy 1
Bob 4
Josh 9
Sam 6
```
And I want to filter it using values from another table (table B):
```
Value id
.2 4
.7 6
```
To get a new table that looks like table A but only contains rows with values in the id column that also appeared in the id column of table B:
```
Name id
Bob 4
Sam 6
```
So I'm assuming I would write something that started like...
```
CREATE TABLE Table C AS
SELECT * FROM Table A
WHERE id....
```
|
The correct syntax for the result I wanted was:
```
CREATE TABLE tableC AS
SELECT tableA.*
FROM tableA LEFT SEMI JOIN tableB on (tableA.id = tableB.id);
```
|
Just join it:
```
hive> CREATE TABLE TableC AS
> SELECT A.* FROM TableA as A,
> TableB as B
> WHERE A.id = B.id;
hive> SELECT * FROM TableC;
OK
Bob 4
Sam 6
```
or try this,
```
hive> CREATE TABLE TableD AS
> SELECT A.* FROM TableA as A join
> TableB as B
> on A.id = B.id;
hive> SELECT * FROM TableD;
OK
Bob 4
Sam 6
```
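The semi-join semantics are easy to check with SQLite from Python; `LEFT SEMI JOIN` is Hive-specific, but `IN` (or `EXISTS`) expresses the same filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableA (Name TEXT, id INTEGER)")
conn.execute("CREATE TABLE tableB (Value REAL, id INTEGER)")
conn.executemany("INSERT INTO tableA VALUES (?, ?)",
                 [("Amy", 1), ("Bob", 4), ("Josh", 9), ("Sam", 6)])
conn.executemany("INSERT INTO tableB VALUES (?, ?)", [(0.2, 4), (0.7, 6)])

# Keep only tableA rows whose id also appears in tableB.
rows = conn.execute("""
    SELECT Name, id FROM tableA
    WHERE id IN (SELECT id FROM tableB)
    ORDER BY id
""").fetchall()
print(rows)  # [('Bob', 4), ('Sam', 6)]
```

A semi-join also guarantees at most one output row per `tableA` row even if `tableB` contains duplicate ids, which is why the accepted answer prefers it over a plain join.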
|
Hive: filter a table using another table
|
[
"",
"sql",
"filter",
"hive",
""
] |
We are using the following query to get data out of a large MySQL table.
```
SELECT fullPath, Permissiontype, DinstinguishedName
from cdm.test
where fullPath in
(SELECT distinct fullPath
FROM cdm.test
WHERE (Permissiontype = 'EXPLICIT' and not DinstinguishedName ='')
OR(Permissiontype = 'INHERITED'
AND (length(fullPath) - length(replace(fullPath,'/','')) < 4))
OR(Permissiontype = 'EXPLICIT'
AND NOT DinstinguishedName=''
AND LEFT(fullPath,length(fullPath)-Length(RIGHT(fullPath,INSTR(reverse(fullPath),'/'))))
AND(length(fullPath) - length(replace(fullPath,'/','')) > 2) ))
```
When I limit the results that need to be shown to 270, it runs really fast, but with, for example, 500 rows it just doesn't complete. In one case I have 77 million rows in the table (it needs to be in one table), and then it runs for over 8 hours and still doesn't finish. Is there a way to optimize this?
wkr.
|
For each record in the test table you're querying the entire table again in the subquery. Instead of using a subquery in the where clause, try an inner join on the same table. This will dramatically improve your performance.
I haven't tried it yet but it could look like:
```
SELECT fullPath, Permissiontype, DinstinguishedName from cdm.test
INNER JOIN (
SELECT distinct fullPath from cdm.test
where (Permissiontype = 'EXPLICIT' and not DinstinguishedName ='')
or (Permissiontype = 'INHERITED' AND (length(fullPath) - length(replace(fullPath,'/','')) < 4)) OR(Permissiontype = 'EXPLICIT'
AND NOT DinstinguishedName='' AND LEFT(fullPath,length(fullPath)-length(RIGHT(fullPath,INSTR(reverse(fullPath),'/'))))
and(length(fullPath) - length(replace(fullPath,'/','')) > 2) )
) AS SQ1
ON SQ1.fullpath = cdm.test.fullpath
```
|
For the combination of an IN statement and subqueries, MySQL has a useful optimization using EXISTS() (not specifically meant for this, but it can be used to optimize queries that use the IN statement in combination with a subquery).
According to the reference on <https://dev.mysql.com/doc/refman/5.0/en/subquery-optimization-with-exists.html>
```
outer_expr IN (SELECT inner_expr FROM ... WHERE subquery_where)
```
Would be the same as
```
EXISTS (SELECT 1 FROM ... WHERE subquery_where AND outer_expr=inner_expr)
```
But faster
Your query with exist:
```
SELECT fullPath, Permissiontype, DinstinguishedName
FROM cdm.test cdm1
WHERE EXISTS(SELECT 0 FROM cdm.test cdm2 @wherecondition AND cdm2.fullPath = cdm1.fullPath)
```
@wherecondition =
```
WHERE (Permissiontype = 'EXPLICIT' and not DinstinguishedName ='')
OR(Permissiontype = 'INHERITED'
AND (length(fullPath) - length(replace(fullPath,'/','')) < 4))
OR(Permissiontype = 'EXPLICIT'
AND NOT DinstinguishedName=''
AND LEFT(fullPath,length(fullPath)-Length(RIGHT(fullPath,INSTR(reverse(fullPath),'/'))))
AND(length(fullPath) - length(replace(fullPath,'/','')) > 2))
```
|
Mysql nested queries take a long time
|
[
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
I have this SQL...
```
UPDATE table1 t1
SET (t1.wert) =
(select t2.bezeichnung from table2 t2
where t1.id = t2.cpbezeichnung)
where t1.id = t2.cpbezeichnung
```
... which I can't run because it tells me that it doesn't know `t2.cpbezeichnung` on line 5.
How can I fix it?
|
The table with alias t2 is not defined for the UPDATE query, so it's clearly not known at line 5. Table t2 is defined only inside the subquery on lines 3 and 4.
What exactly are you trying to achieve with condition on line 5?
If you want to prevent setting NULL into t1.wert for rows where there is no matching record in table t2, then you need to replace the condition on line 5:
```
UPDATE table1 t1
SET (t1.wert) =
(select t2.bezeichnung from table2 t2 where t1.id = t2.cpbezeichnung)
where t1.id IN (SELECT t2.cpbezeichnung from table2)
```
This will set values in t1.wert only for records where t1.id exists in t2.cpbezeichnung.
|
The `t2` alias (along with `table2`) is only visible in the subquery. My guess is that you want
```
UPDATE table1 t1
SET t1.wert = (select t2.bezeichnung
from table2 t2
where t1.id = t2.cpbezeichnung)
where exists (select 1
from table2 t2
where t1.id = t2.cpbezeichnung)
```
which updates every row where there is a match between the two tables. If that's not what you want, posting a test case would be helpful.
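Here is a small check of that shape with SQLite from Python (SQLite accepts the same correlated-subquery form; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER, wert TEXT)")
conn.execute("CREATE TABLE table2 (cpbezeichnung INTEGER, bezeichnung TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)", [(1, None), (2, "old")])
conn.execute("INSERT INTO table2 VALUES (1, 'new')")

# Only rows with a match in table2 are touched; row 2 keeps its old value
# instead of being overwritten with NULL.
conn.execute("""
    UPDATE table1
    SET wert = (SELECT t2.bezeichnung FROM table2 t2
                WHERE table1.id = t2.cpbezeichnung)
    WHERE EXISTS (SELECT 1 FROM table2 t2
                  WHERE table1.id = t2.cpbezeichnung)
""")
rows = conn.execute("SELECT id, wert FROM table1 ORDER BY id").fetchall()
print(rows)  # [(1, 'new'), (2, 'old')]
```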
|
sql oracle update
|
[
"",
"sql",
"oracle",
""
] |
Example related to my requirement.
I have a table t\_policies\_details
columns pol\_ref , inrtyp
```
inrtyp has values = ORD,PPC,AVCO,DSS,RVP, SP,TV
```
I want to fetch the polref values for which inrtyp is exactly SP and TV, both of them and nothing else.
```
SELECT DISTINCT p.polref, p.inrtyp FROM t_policies_details p WHERE p.inrtyp NOT IN ('ORD')
INTERSECT
SELECT DISTINCT p.polref, p.inrtyp FROM t_policies_details p WHERE p.inrtyp NOT IN ('PPC')
INTERSECT
SELECT DISTINCT p.polref, p.inrtyp FROM t_policies_details p WHERE p.inrtyp NOT IN ('AVCO')
INTERSECT
SELECT DISTINCT p.polref, p.inrtyp FROM t_policies_details p WHERE p.inrtyp NOT IN ('DSS')
INTERSECT
SELECT DISTINCT p.polref, p.inrtyp FROM t_policies_details p WHERE p.inrtyp NOT IN ('RVP')
```
I have tried the above query, excluding the things which are not required,
but it gives me polref values which have either SP or TV, or both. I then have to search again for those which have both.
I found few by searching.
But when I included the condition:
```
INTERSECT
SELECT DISTINCT p.polref, p.inrtyp FROM t_policies_details p WHERE p.inrtyp IN ('SP')
INTERSECT
SELECT DISTINCT p.polref, p.inrtyp FROM t_policies_details p WHERE p.inrtyp IN ('TV')
```
I don't get any polref.
Please help me to get this right.
|
If I understand correctly, you want all instances where a `polref` has `inrtyp` for both 'TV', 'SP' but no other values at all. You can count the number of wanted and unwanted instances in a group by, and then apply logic in an outer select:
```
WITH cte AS
(
SELECT p.polref,
SUM(CASE WHEN p.inrtyp IN ('TV', 'SP') THEN 1 ELSE 0 END) AS Good,
SUM(CASE WHEN p.inrtyp NOT IN ('TV', 'SP') THEN 1 ELSE 0 END) AS Bad
FROM t_policies_details p
GROUP BY p.polref
)
SELECT cte.polref
FROM cte
WHERE Good = 2 AND Bad = 0;
```
[SqlFiddle here](http://sqlfiddle.com/#!4/c89c1/7)
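The same Good/Bad counting can be verified with SQLite from Python (a CTE isn't needed; a plain derived table works too, and the data below is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_policies_details (polref TEXT, inrtyp TEXT)")
conn.executemany("INSERT INTO t_policies_details VALUES (?, ?)",
                 [("A", "SP"), ("A", "TV"),                  # only SP and TV -> wanted
                  ("B", "SP"), ("B", "TV"), ("B", "ORD"),    # extra type -> excluded
                  ("C", "SP")])                              # SP alone -> excluded

rows = conn.execute("""
    SELECT polref FROM (
        SELECT polref,
               SUM(CASE WHEN inrtyp IN ('TV','SP') THEN 1 ELSE 0 END) AS Good,
               SUM(CASE WHEN inrtyp NOT IN ('TV','SP') THEN 1 ELSE 0 END) AS Bad
        FROM t_policies_details
        GROUP BY polref)
    WHERE Good = 2 AND Bad = 0
""").fetchall()
print(rows)  # [('A',)]
```

Note that `Good = 2` assumes each polref has at most one row per inrtyp; if duplicates are possible, count distinct values instead.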
|
Please try the below query:
```
SELECT DISTINCT p.polref, p.inrtyp
FROM t_policies_details p
WHERE p.inrtyp IN ('SP','TV');
```
|
how to fetch data for particular values from a column among different values
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have the following formula:
```
"Value1" * 100 * "Total day in a month" * "Value2"
```
I have the following table:
```
ID Date Time Value1 Value2
1 2014-01-01 00:00 10 20
2 2014-01-01 01:00 20 5
```
I want to select data for a year using only one parameter, which is `Year`.
How can I apply the formula into a query?
The result should be:
```
ID Date Time Value1 Value2 TotalDayinMonth Result
1 2014-01-01 00:00 10 20 31 620000
2 2014-01-01 01:00 20 5 31 310000
ff.
```
|
You can get the number of days of a given date like this:
```
DECLARE @date DATETIME = '2014-01-01'
SELECT DATEDIFF(DAY, @date, DATEADD(MONTH, 1, @date))
```
And the query:
```
SELECT ID
,[Date]
,[Time]
,Value1
,Value2
,DATEDIFF(DAY, [Date], DATEADD(MONTH, 1, [Date])) AS TotalDayinMonth
,Value1 * 100 * DATEDIFF(DAY, [Date], DATEADD(MONTH, 1, [Date])) * Value2 AS Result
FROM yourTable
```
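The expected `Result` column is easy to verify with plain Python and the standard `calendar` module:

```python
import calendar

# Check the expected Result column: Value1 * 100 * days-in-month * Value2.
days = calendar.monthrange(2014, 1)[1]   # number of days in January 2014
print(days)                  # 31
print(10 * 100 * days * 20)  # 620000
print(20 * 100 * days * 5)   # 310000
```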
|
This expression will give you the number of days in the month that `date` is in no matter what day it is:
```
datediff(day,
dateadd(month,datediff(month, 0, date),0),
dateadd(month,datediff(month, 0, date)+1,0))
```
|
SQL: Total days in a month
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have 3 tables in my database as shown below:

I want to insert data into the `RESULT` table from the `Infos` table, and only those rows should be inserted that are listed in the `Picker` table. But since the `InfoID` column of the `Result` table is a primary key, it will prevent an already existing `InfoID` from being inserted again. And the `Picker` table's `InfoIDs` column can have repeated values. The `Picker` table is actually a temporary table created in a stored procedure.
I am not very experienced with join queries, so I took help from some SO questions and developed the following query:
```
INSERT INTO @Result([INFOID],[VALUE],[PROCESSED]) (SELECT T1.Id,
CASE
WHEN T1.Value1 = '-1' THEN T1.Value2
ELSE T1.Value1
END, 0
FROM [Infos] T1 LEFT JOIN (SELECT [InfoIDs] FROM Picker AS A1 WHERE (NOT EXISTS (SELECT [INFOID] FROM @Result AS A2 WHERE A1.[InfoIDs] != A2.[INFOID]))) T2 ON T1.Id = T2.InfoIDs)
```
But the problem is that I am getting only one row in the output of the `Result` table. Please tell me what's wrong with my query?
**UPDATE**
Expected Result:

|
Finally, I figured out what was actually going wrong:
```
INSERT INTO @Result([INFOID],[VALUE],[PROCESSED]) (SELECT T1.Id,
CASE
WHEN T1.Value1 = '-1' THEN T1.Value2
ELSE T1.Value1
END, 0
FROM [Infos] T1 INNER JOIN (SELECT [InfoIDs] FROM Picker AS A1 WHERE (NOT EXISTS (SELECT [INFOID] FROM @Result AS A2 WHERE A1.[InfoIDs] = A2.[INFOID]))) T2 ON T1.Id = T2.InfoIDs)
```
I used `INNER JOIN`, and in the `A1.[InfoIDs] != A2.[INFOID]` condition I used `=` instead of `!=`.
I combined all the solutions and hints suggested by the responders of this question and came up with this solution. Thank you everyone. :)
|
I think you need this query:
```
INSERT INTO @Result
(SELECT ID,
CASE
WHEN i.Value1 = '-1' THEN i.Value2
ELSE i.Value1
END
Value1, 0
FROM Infos i
WHERE i.ID In (SELECT InfoIDs FROM Picker)
AND NOT i.ID IN (SELECT InfoID FROM @Result))
```
|
Inserting rows in a table from another table using a third table
|
[
"",
"sql",
"sql-server",
"database",
"stored-procedures",
"join",
""
] |
Say I have 3 values (Bill, Steve, Jack) and I want to randomly update a table with those values, e.g.
```
Update contacts
set firstname = ('Bill','Steve','Jack')
where city = 'NY'
```
how do I randomize these values?
Thanks
|
You can do this with the following trick:
```
update c set name=ca.name
from contacts c
outer apply(select top 1 name
from (values('bill'),('steve'),('jack')) n(name)
where c.id = c.id order by newid())ca;
```
`c.id = c.id` is just a dummy predicate that forces the SQL engine to evaluate the subquery for each outer row. Here is the fiddle <http://sqlfiddle.com/#!6/8ecca/22>
|
Here's some love using `choose`
```
with cte as (
select *, (ABS(CHECKSUM(NewId())) % 3) + 1 as n
from contacts
where city = 'NY'
)
update cte
set firstname = choose(n, 'Bill','Steve','Jack')
```
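The row-by-row random pick is the same idea as Python's `random.choice`; a tiny illustration (the list and dict shapes are invented for the sketch):

```python
import random

# Pick one of three names independently for each matching row,
# mirroring the per-row NEWID()/CHECKSUM trick above.
names = ["Bill", "Steve", "Jack"]
contacts = [{"firstname": None, "city": "NY"} for _ in range(5)]
for c in contacts:
    if c["city"] == "NY":
        c["firstname"] = random.choice(names)
print(all(c["firstname"] in names for c in contacts))  # True
```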
|
SQL Server 2012 Random string from a list
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am trying to query my database for all models with primary keys in a list. This is my query (idsList is an ArrayList containing integers):
```
new Select().from(PostModel.class)
.where(Condition.column(PostModel$Table.ID).in(0, idsList))
.async().queryList(listener);
```
But Android Studio highlights the where condition, stating
```
"Cannot resolve method 'where(com.raizlabs.android.dbflow.sql.builder.Condition.In)"
```
So is Condition.In not considered a condition? How can I query all Models with primaryKey in an ArrayList?
I am using DBFlow 2.0. I can also use a regular SQL query String as a substitute, but I am not that versed in SQL, so if you could provide an SQL query String for my problem, that would be a possible workaround.
|
**DBFlow `v3.x` now allows you to pass a collection to `Condition.in()`**
```
List<String> ids = new ArrayList<>();
Condition.In in = Condition.column(Tree_Table.ID.getNameAlias()).in(ids);
long count = new Select().count().from(Tree.class)
.where(in)
.count();
```
|
Create a `In` Condition:
```
List<String> ids = new ArrayList<String>();
Condition.In in = Condition.column(Tree$Table.ID).in(ids.get(0));
for (int i = 1; i < ids.size(); i++){
in.and(ids.get(i));
}
long count = new Select().count().from(Tree.class)
.where(in)
.count();
```
|
DBFlow select where COLUMN in List?
|
[
"",
"android",
"sql",
"database",
"dbflow",
""
] |
My requirement is to display data from multiple rows in a single cell. For example, I have a teacher who is specialized in multiple subjects.
```
staffid Subjects
-------------------
13 Hindi
13 asd
```
I want the result in the following format:
```
Hindi, asd
```
for staffid 13.
To do this task I used the following code:
```
declare @output varchar(max)
select @output = COALESCE(@output + ', ', '') + sr.title
from streamsubjects sr
join StaffSubjectAssociation ir on ir.StreamSubjectID=sr.StreamSubjectID
where StaffId = 13
select @output
```
To get the desired output I created a user-defined scalar function, which is given below:
```
ALTER FUNCTION [dbo].[getSubjectsForStaff]
(
@StaffId int
)
RETURNS varchar
AS
BEGIN
declare @output varchar(max)
select @output = COALESCE(@output + ', ', '') + sr.title
from streamsubjects sr
join StaffSubjectAssociation ir on ir.StreamSubjectID=sr.StreamSubjectID
where StaffId = @StaffId
RETURN @output
END
```
But I am not getting the desired result; I am only getting the first character of the subject. Can anyone tell me why I am not getting the desired result using the same code in a scalar function?
What would be the correct solution to achieve the result?
Please help me, I am new to this technology.
|
Also try this method :
```
DECLARE @table TABLE(staffid INT, subject VARCHAR(30))
INSERT INTO @table
VALUES
(13,'Hindi'),
(13,'English'),
(14,'Japanese'),
(14,'English')
SELECT staffid,
STUFF(grp, 1, 1, '')
FROM @table a
CROSS APPLY (SELECT ',' + subject
FROM @table b
WHERE a.staffid = b.staffid
FOR XML PATH('')) group_concat(grp)
GROUP BY staffid,grp
```
|
Same as @Deepak Pawar's variant, but without `cross apply`:
```
DECLARE @table TABLE
(
staffid INT ,
[subject] VARCHAR(30)
)
INSERT INTO @table
VALUES ( 13, 'Hindi' ),
( 13, 'English' ),
( 14, 'Japanese' ),
( 14, 'English' )
SELECT DISTINCT
a.staffid ,
SUBSTRING(( SELECT ', ' + b.[subject]
FROM @table b
WHERE a.staffid = b.staffid
FOR
XML PATH('')
), 3, 999) grp
FROM @table a
```
output result

|
How to display multiple row data into a single cell with comma separated
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
We have a legacy database schema that has some interesting design decisions. Until recently, we have only supported Oracle and SQL Server, but we are trying to add support for PostgreSQL, which has brought up an interesting problem. I have searched Stack Overflow and the rest of the internet and I don't believe this particular situation is a duplicate.
Oracle and SQL Server both behave the same when it comes to nullable columns in a unique constraint, which is to essentially ignore the columns that are NULL when performing the unique check.
Let's say I have the following table and constraint:
```
CREATE TABLE EXAMPLE
(
ID TEXT NOT NULL PRIMARY KEY,
FIELD1 TEXT NULL,
FIELD2 TEXT NULL,
FIELD3 TEXT NULL,
FIELD4 TEXT NULL,
FIELD5 TEXT NULL,
...
);
CREATE UNIQUE INDEX EXAMPLE_INDEX ON EXAMPLE
(
FIELD1 ASC,
FIELD2 ASC,
FIELD3 ASC,
FIELD4 ASC,
FIELD5 ASC
);
```
On both Oracle and SQL Server, leaving any of the nullable columns `NULL` will result in only performing a uniqueness check on the non-null columns. So the following inserts can only be done once:
```
INSERT INTO EXAMPLE VALUES ('1','FIELD1_DATA', NULL, NULL, NULL, NULL );
INSERT INTO EXAMPLE VALUES ('2','FIELD1_DATA','FIELD2_DATA', NULL, NULL,'FIELD5_DATA');
-- These will succeed when they should violate the unique constraint:
INSERT INTO EXAMPLE VALUES ('3','FIELD1_DATA', NULL, NULL, NULL, NULL );
INSERT INTO EXAMPLE VALUES ('4','FIELD1_DATA','FIELD2_DATA', NULL, NULL,'FIELD5_DATA');
```
However, because PostgreSQL (correctly) adheres to the SQL standard, those insertions (and any other combination of values, as long as one of them is NULL) will not raise an error and will be inserted without a problem. Unfortunately, because of our legacy schema and the supporting code, we need PostgreSQL to behave the same as SQL Server and Oracle.
I am aware of the following Stack Overflow question and its answers: [Create unique constraint with null columns](https://stackoverflow.com/questions/8289100). From my understanding, there are two strategies to solve this problem:
1. Create partial indexes that describe the index in cases where the nullable columns are both `NULL` and `NOT NULL` (which results in exponential growth of the number of partial indexes)
2. Use `COALESCE` with a sentinel value on the nullable columns in the index.
The problem with (1) is that the number of partial indexes we'd need to create grows exponentially with each additional nullable column we'd like to add to the constraint (2^N if I am not mistaken). The problems with (2) are that sentinel values reduce the number of available values for that column, plus all of the potential performance problems.
My question: are these the only two solutions to this problem? If so, what are the tradeoffs between them for this particular use case? A good answer would discuss the performance of each solution, the maintainability, how PostgreSQL would utilize these indexes in simple `SELECT` statements, and any other "gotchas" or things to be aware of. Keep in mind that 5 nullable columns was only for an example; we have some tables in our schema with up to 10 (yes, I cry every time I see it, but it is what it is).
|
## Postgres 15 adds the clause `NULLS NOT DISTINCT`
See:
* [Create unique constraint with null columns](https://stackoverflow.com/questions/8289100/create-unique-constraint-with-null-columns/8289253#8289253)
The solution is very simple now:
```
ALTER TABLE example ADD CONSTRAINT foo
UNIQUE NULLS NOT DISTINCT (field1, field2, field3, field4, field5);
```
## For Postgres 14 or older
You are striving for **compatibility** with your existing **Oracle** and **SQL Server** implementations.
Since Oracle does not implement `NULL` values at all in row storage, it can't tell the difference between an empty string and `NULL` anyway. So wouldn't it be prudent to use empty strings (`''`) instead of `NULL` values in Postgres as well - for ***this*** particular use case?
Define columns included in the unique constraint as `NOT NULL DEFAULT ''`, problem solved:
```
CREATE TABLE example (
example_id serial PRIMARY KEY
, field1 text NOT NULL DEFAULT ''
, field2 text NOT NULL DEFAULT ''
, field3 text NOT NULL DEFAULT ''
, field4 text NOT NULL DEFAULT ''
, field5 text NOT NULL DEFAULT ''
, CONSTRAINT foo UNIQUE (field1, field2, field3, field4, field5)
);
```
### Notes
What you demonstrate in the question is a **unique *index***:
```
CREATE UNIQUE INDEX ...
```
Not the **unique *constraint*** you keep talking about. There are subtle, important differences!
* [How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?](https://stackoverflow.com/questions/9066972/how-does-postgresql-enforce-the-unique-constraint-what-type-of-index-does-it-u/9067108#9067108)
I changed that to an actual constraint like in the title of the question.
The keyword `ASC` is just noise, since that is the default sort order. I dropped it.
Using a [`serial`](https://stackoverflow.com/a/9875517/939860) PK column for simplicity which is totally optional but typically preferable to numbers stored as `text`.
### Working with it
Just omit empty / null fields from the `INSERT`:
```
INSERT INTO example(field1) VALUES ('F1_DATA');
INSERT INTO example(field1, field2, field5) VALUES ('F1_DATA', 'F2_DATA', 'F5_DATA');
```
Repeating any of these inserts would violate the unique constraint.
**Or**, if you insist on omitting target columns (which is a bit of an anti-pattern in persisted `INSERT` statements),
**or** for bulk inserts where all columns need to be listed:
```
INSERT INTO example VALUES
('1', 'F1_DATA', DEFAULT, DEFAULT, DEFAULT, DEFAULT)
, ('2', 'F1_DATA','F2_DATA', DEFAULT, DEFAULT,'F5_DATA')
;
```
**Or** simply:
```
INSERT INTO example VALUES
('1', 'F1_DATA', '', '', '', '')
, ('2', 'F1_DATA','F2_DATA', '', '','F5_DATA')
;
```
Or you can write a trigger `BEFORE INSERT OR UPDATE` that converts `NULL` to `''`.
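SQLite's unique indexes treat NULLs as distinct, just like Postgres 14 and older, so the empty-string workaround can be demonstrated with it from Python (a trimmed-down version of the table above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE example (
        example_id INTEGER PRIMARY KEY,
        field1 TEXT NOT NULL DEFAULT '',
        field2 TEXT NOT NULL DEFAULT '',
        field5 TEXT NOT NULL DEFAULT '',
        UNIQUE (field1, field2, field5)
    )
""")
conn.execute("INSERT INTO example(field1) VALUES ('F1_DATA')")
try:
    # Same logical row again: the omitted columns default to '' instead of
    # NULL, so the unique constraint now catches the duplicate.
    conn.execute("INSERT INTO example(field1) VALUES ('F1_DATA')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # False
```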
### Alternative solutions
If you need to use actual NULL values I would suggest the unique *index* with **`COALESCE`** like you mentioned as option (2) and [@wildplasser provided as his last example.](https://stackoverflow.com/a/30157493/939860)
The index on an **array** like [@Rudolfo presented](https://stackoverflow.com/a/30381172/939860) is simple, but considerably more expensive. Array handling isn't very cheap in Postgres and there is an array overhead similar to that of a row (24 bytes):
* [Calculating and saving space in PostgreSQL](https://stackoverflow.com/questions/2966524/calculating-and-saving-space-in-postgresql/7431468#7431468)
Arrays are limited to columns of the same data type. You could cast all columns to `text` if some are not, but it will typically further increase storage requirements. Or you could use a well-known row type for heterogeneous data types ...
A corner case: array (or row) types with all NULL values are considered equal (!), so there can only be 1 row with all involved columns NULL. May or may not be as desired. If you want to disallow all columns NULL:
* [NOT NULL constraint over a set of columns](https://stackoverflow.com/questions/21021102/not-null-constraint-over-a-set-of-columns/21026085#21026085)
|
Third method: use `IS NOT DISTINCT FROM` instead of `=` for comparing the key columns. (This could make use of an existing index on the candidate *natural* key.) Example (look at the last column):
```
SELECT *
, EXISTS (SELECT * FROM example x
WHERE x.FIELD1 IS NOT DISTINCT FROM e.FIELD1
AND x.FIELD2 IS NOT DISTINCT FROM e.FIELD2
AND x.FIELD3 IS NOT DISTINCT FROM e.FIELD3
AND x.FIELD4 IS NOT DISTINCT FROM e.FIELD4
AND x.FIELD5 IS NOT DISTINCT FROM e.FIELD5
AND x.ID <> e.ID
) other_exists
FROM example e
;
```
Next step would be to put that into a trigger function, and put a trigger on it. (don't have the time now, maybe later)
---
And here is the trigger-function (which is not perfect yet, but appears to work):
---
```
CREATE FUNCTION example_check() RETURNS trigger AS $func$
BEGIN
-- Check for an existing row with the same (possibly NULL) key values
IF EXISTS (
SELECT 666 FROM example x
WHERE x.FIELD1 IS NOT DISTINCT FROM NEW.FIELD1
AND x.FIELD2 IS NOT DISTINCT FROM NEW.FIELD2
AND x.FIELD3 IS NOT DISTINCT FROM NEW.FIELD3
AND x.FIELD4 IS NOT DISTINCT FROM NEW.FIELD4
AND x.FIELD5 IS NOT DISTINCT FROM NEW.FIELD5
AND x.ID <> NEW.ID
) THEN
RAISE EXCEPTION 'MultiLul BV';
END IF;
RETURN NEW;
END;
$func$ LANGUAGE plpgsql;
CREATE TRIGGER example_check BEFORE INSERT OR UPDATE ON example
FOR EACH ROW EXECUTE PROCEDURE example_check();
```
---
UPDATE: a unique index can *sometimes* be wrapped into a
constraint (see [postgres-9.4 docs, final example](http://www.postgresql.org/docs/9.4/static/sql-altertable.html)). You do need to invent a sentinel value; I used the empty string `''` here.
---
```
CREATE UNIQUE INDEX ex_12345 ON example
(coalesce(FIELD1, '')
, coalesce(FIELD2, '')
, coalesce(FIELD3, '')
, coalesce(FIELD4, '')
, coalesce(FIELD5, '')
)
;
ALTER TABLE example
ADD CONSTRAINT con_ex_12345
USING INDEX ex_12345;
```
---
But the "functional" index on `coalesce()` is not allowed in this construct.
The unique index (OP's option 2) still works, though:
---
```
ERROR: index "ex_12345" contains expressions
LINE 2: ADD CONSTRAINT con_ex_12345
^
DETAIL: Cannot create a primary key or unique constraint using such an index.
INSERT 0 1
INSERT 0 1
ERROR: duplicate key value violates unique constraint "ex_12345"
```
|
PostgreSQL multiple nullable columns in unique constraint
|
[
"",
"sql",
"postgresql",
"database-design",
"null",
"unique-constraint",
""
] |
A simplified example of what I need:
Table 'Transport' has 2 columns: 'Vehicle' and 'Colour'. The user can filter records by either a vehicle, a colour, or both.
```
variable vehicle_choice = user_selected_vehicle (selected from a dropdown for example)
variable colour_choice = user_selected_colour
If user_selected_vehicle = nothing selected Then
vehicle_choice = *
End if
If user_selected_colour = nothing selected Then
colour_choice = *
End if
Select query = ("Select * From Transport Where Vehicle = vehicle_choice And Colour = colour_choice")
```
So if the user wanted a 'red' 'bus' the query would look like:
```
("Select * From Transport Where Vehicle = 'bus' And Colour = 'red'")
```
Which would be fine and find the record if it existed.
But, if the user wanted all vehicles that were yellow the query would look like:
```
("Select * From Transport Where Vehicle = * and Colour = 'yellow'")
```
Clearly this is all in a made up syntax but this is what I want, is it possible in SQL? (Using MS SQL Server 2008)
|
Here are two typical ways to solve this. Assume that the user input is in the variables `@vehicle` and `@colour`:
```
where (vehicle = @vehicle or @vehicle is null) and
(colour = @colour or @colour is null)
```
The problem with this approach is the use of indexes. Indexing strategies have a hard time with `or`. So, if you are constructing the query dynamically, then it is better to only add the clauses you want:
```
@where = '1 = 1' +
(case when @vehicle is not null then ' and vehicle = @vehicle' else '' end) +
(case when @colour is not null then ' and colour = @colour' else '' end);
```
```
|
You can try something like
```
("Select * From Transport Where Vehicle like '%' and Colour = 'yellow'")
```
|
select from table where column = anything
|
[
"",
"sql",
"sql-server",
""
] |
I need to retrieve the distinct values of both the column and row. The oracle sql query in the current setup is given below:
```
select distinct ym.wh_id,
ym.trlr_num,
ym.arrdte,
ri.invnum,
ri.supnum
from rcvinv ri, yms_ymr ym
where ym.trlr_cod='RCV'
and ri.trknum = ym.trlr_num
and ym.wh_id <=50
and ym.trlr_stat in ('C','CI','R','OR')
and ym.arrdte is not null
order by ym.arrdte desc;
```
The above returns the output as follows:
```
> Trailer Number Arrived PO Vendor
> Trailer4 5/12/2015 010025790692 00101
> Trailer5-V6661 5/12/2015 010025754823 00110
> Trailer2-V6651 5/12/2015 010025781421 55395
> TRAILERS1-V6641 5/12/2015 010025790388 00915
> DEV110501-V6631 5/11/2015 010025790692 00101
> Rj-V6621 5/11/2015 010025790692 00101
> 12345-V6601 5/8/2015 010025751682 00128
> 12345-V6601 5/8/2015 010025754823 00110
```
I require the output as follows:
```
> Trailer Number Arrived PO Vendor
> Trailer4 5/12/2015 010025790692 00101
> Trailer5-V6661 5/12/2015 010025754823 00110
> Trailer2-V6651 5/12/2015 010025781421 55395
> TRAILERS1-V6641 5/12/2015 010025790388 00915
> 12345-V6601 5/8/2015 010025751682 00128
```
As you can see, the repeated outputs for PO (010025790692 and 010025754823) and trailer number(12345-V6601) have been removed.
So in short, I want to modify the query such that I get the distinct of both the row and the column as in the below output.
Please help. Thanks.
|
You could use the Analytic **ROW\_NUMBER()**. See the [SQL Fiddle](http://sqlfiddle.com/#!4/03ea9/1).
For example,
```
SQL> SELECT trailer_number,
2 po,
3 vendor
4 FROM
5 (SELECT t.*,
6 row_number() OVER(PARTITION BY po, vendor ORDER BY po, vendor) rn
7 FROM t
8 )
9 WHERE rn = 1;
TRAILER_NUMBER PO VENDOR
--------------- -------------------- --------------------
12345-V6601 10025751682 128
Trailer5-V6661 10025754823 110
Trailer2-V6651 10025781421 55395
TRAILERS1-V6641 10025790388 915
Trailer4 10025790692 101
SQL>
```
**Update** OP wants to know how to apply the analytic function on his original query:
Your modified query would look like:
```
WITH t AS
(SELECT DISTINCT ym.trlr_num trlr_num,
ym.arrdte arrdte,
ri.invnum invnum,
ri.supnum supnum
FROM rcvinv ri,
yms_ymr ym
WHERE ym.trlr_cod ='RCV'
AND ri.trknum = ym.trlr_num
AND ym.wh_id <=50
AND ym.trlr_stat IN ('C','CI','R','OR')
AND ym.arrdte IS NOT NULL
),
t1 AS (
SELECT t.trlr_num,
t.arrdte,
t.invnum,
t.supnum,
row_number() OVER (PARTITION BY t.trlr_num, t.invnum ORDER BY t.trlr_num, t.invnum DESC) rn
FROM t
)
SELECT trlr_num, arrdte, invnum, supnum
FROM t1
WHERE rn = 1;
```
The **WITH clause** would be resolved as a temporary table, so you need not create any static table.
|
Your request can be written as: Get me the latest record per invnum. You get this by numbering (i.e. using `ROW_NUMBER`) the rows per invnum (i.e. `PARTITON BY invnum`) in the order desired, such that the latest record gets #1 (`ORDER BY ym.arrdte DESC`). Once the numbering is done, you remove all undesired records, i.e. those with a number other then 1.
BTW: Don't use implicit comma-separated joins any longer. They were replaced by explicit joins more than twenty years ago for good reasons.
```
select wh_id, trlr_num, arrdte, invnum, supnum
from
(
select
ym.wh_id, ym.trlr_num, ym.arrdte, ri.invnum, ri.supnum,
row_number() over (partition by ri.invnum order by ym.arrdte desc) as rn
from rcvinv ri
join yms_ymr ym on ri.trknum = ym.trlr_num
where ym.trlr_cod = 'RCV'
and ym.wh_id <= 50
and ym.trlr_stat in ('C','CI','R','OR')
and ym.arrdte is not null
)
where rn = 1
order by arrdte desc, trlr_num;
```
|
Oracle sql distinct query
|
[
"",
"sql",
"oracle",
"distinct",
""
] |
I need help writing a query to get some data from a table. I am trying to write a query that will select all of the book titles that have “bill” in their name and will display the title of the book, the length of the title, and the part of the title that follows “bill”. I know you are supposed to use the substring and instring functions, but I keep running into syntax errors and/or incorrect output.
The book table is as follows
```
CREATE TABLE Book(
ISBN CHAR(13),
Title VARCHAR(70) NOT NULL,
Description VARCHAR(100),
Category INT,
Edition CHAR(30),
PublisherID INT NOT NULL,
constraint book_ISBN_pk PRIMARY KEY (ISBN),
constraint book_category_fk FOREIGN KEY (Category) REFERENCES Category(CatID),
constraint book_publisherID_fk FOREIGN KEY (PublisherID) REFERENCES Publisher(PublisherID)
);
```
|
This is a Standard SQL version, afaik mysql should support those functions, too:
```
select
Title
,char_length(Title)
,substring(Title from position('bill' in Title) + 4)
from book
where Title like '%bill%'
```
|
Use regexp for the match, as follows:
```
select title from books where title regexp 'book'
```
As for what you're **supposed** to use, in my book, you're only supposed to use what gets the desired result. Optimisation comes later, if ever.
|
MYSQL query help- Substring and Instring
|
[
"",
"mysql",
"sql",
""
] |
I have three related tables and need to select rows that show data from two tables based on a value (serial number) from the third. I am only interested in the max value of the serial number. I have tried multiple solutions suggested here on stackoverflow and I still cannot get my head around this.
Sample code for my tables with a straightforward SELECT for all values is available here: <http://sqlfiddle.com/#!6/6b8f7/4/0>
My end goal is to obtain a table like this:
```
reference groupname serialnum
C:123 Group2 3
C:125 Group1 4
C:126 Group1 1
```
Ordering with LIMIT does not seem to work.
Any ideas how this might be addressed?
**DDL + DML for Sample data:**
```
CREATE TABLE pm_process
([pm_guid] int, [Descr] varchar(4), [usr_newref] varchar(5))
;
INSERT INTO pm_process
([pm_guid], [Descr], [usr_newref])
VALUES
(11111, 'aaaa', 'C:123'),
(22222, 'bbbb', 'C:125'),
(33333, 'cccc', 'C:126')
;
CREATE TABLE tps_group
([tps_title] varchar(6), [tps_guid] int)
;
INSERT INTO tps_group
([tps_title], [tps_guid])
VALUES
('Group1', 99999),
('Group2', 88888)
;
CREATE TABLE pm_process_assignment
([pm_group_guid] int, [pm_process_guid] int, [pm_serial_number] int)
;
INSERT INTO pm_process_assignment
([pm_group_guid], [pm_process_guid], [pm_serial_number])
VALUES
(99999, 11111, 1),
(99999, 11111, 2),
(88888, 11111, 3),
(88888, 22222, 1),
(99999, 22222, 2),
(88888, 22222, 3),
(99999, 22222, 4),
(99999, 33333, 1)
;
```
|
In SQL Server, probably the easiest way to do this is using [`APPLY`](https://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx):
```
SELECT p.usr_newref as reference,
pag.tps_title as groupname,
pag.pm_serial_number as serialnum
FROM pm_process p OUTER APPLY
(SELECT TOP 1 pa.pm_serial_number, g.tps_title
FROM pm_process_assignment pa JOIN
tps_group g
ON g.tps_guid = pa.pm_group_guid
WHERE pa.pm_process_guid = p.pm_guid
ORDER BY pm_serial_number DESC
) pag
```
[Here](http://sqlfiddle.com/#!6/6b8f7/23) is the SQL Fiddle.
|
You can use `ROW_NUMBER()` to locate records having the maximum `serialnum` within each `reference` partition. Then, in an outer query, select only these records:
```
SELECT reference, groupname, serialnum
FROM (
SELECT
pm_process.usr_newref as reference,
pm_assignment_group.tps_title as groupname,
process_assignments.pm_serial_number as serialnum,
ROW_NUMBER() OVER (PARTITION BY pm_process.usr_newref
ORDER BY process_assignments.pm_serial_number DESC) AS rn
FROM
tps_group pm_assignment_group
RIGHT OUTER JOIN pm_process_assignment process_assignments
ON (pm_assignment_group.tps_guid=process_assignments.pm_group_guid)
RIGHT OUTER JOIN pm_process
ON (process_assignments.pm_process_guid=pm_process.pm_guid)
) t
WHERE t.rn = 1
```
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/6b8f7/12)
|
Selecting a record with max value on one of the joins
|
[
"",
"sql",
"sql-server",
"greatest-n-per-group",
""
] |
How do I read each line to determine if the customer has fully paid or partially paid or never paid?
I would like the code to read every line of each customer's bill until I find `TOTAL_PAYMENT_AMT = TOTAL_DUE`. If found, then mark it as paid; if not found, read the next line until I find `TOTAL_PAYMENT_AMT <> 0` and `TOTAL_PAYMENT_AMT < TOTAL_DUE`, then mark it as partially paid; or if `TOTAL_PAYMENT_AMT > TOTAL_DUE`, then mark it as paid.
For customer `111`, the bill is fully paid `-13129.54` from reading the first line. But for customer `222`, the bill is not paid until the 2nd month, for the amount of `-18768.9`; for customer `333`, the bill is not paid until the 3rd month, and only partially; and for customer `444`, the bill has never been paid. (A negative # means amount paid, a positive # means amount charged.)
```
CUSTOMER_ID BILL_DATE TOTAL_DUE TOTAL_PAYMENT_AMT
111 3/19/2015 13129.54 -13129.54
111 4/20/2015 0 0
222 3/25/2015 26334.12 0
222 4/24/2015 -27000.00
333 2/25/2015 12720.21 0
333 3/25/2015 -1000.00
333 4/24/2015 -1071.15
444 2/26/2015 12266.6 0
444 3/26/2015 0
```
|
This seems to be a task for a Windowed Aggregate Function, a running total?
```
SELECT CUSTOMER_ID, BILL_DATE, TOTAL_DUE, TOTAL_PAYMENT_AMT,
SUM(TOTAL_DUE)
OVER (PARTITION BY CUSTOMER_ID
ORDER BY BILL_DATE
ROWS UNBOUNDED PRECEDING) AS due,
SUM(TOTAL_PAYMENT_AMT)
OVER (PARTITION BY CUSTOMER_ID
ORDER BY BILL_DATE
ROWS UNBOUNDED PRECEDING) AS pay,
CASE
WHEN due + pay <= 0 THEN 'paid'
WHEN pay = 0 THEN 'not paid'
ELSE 'partially paid'
END
FROM tab
```
|
This will sum the `TOTAL_DUE` and `TOTAL_PAYMENT_AMT` for each `CUSTOMER_ID`.
```
SELECT CUSTOMER_ID,
MIN(BILL_DATE) AS [FIRST_BILL],
MAX(BILL_DATE) AS [LAST_BILL],
SUM(TOTAL_DUE),
SUM(TOTAL_PAYMENT_AMT)
FROM TABLE_NAME
GROUP BY CUSTOMER_ID
```
Change `TABLE_NAME` for the actual name of the table. You'll be able to determine if the customer paid the total amount by comparing the last two columns.
To sum the last two columns on the table you could do:
```
SELECT CUSTOMER_ID,
MIN(BILL_DATE) AS [FIRST_BILL],
MAX(BILL_DATE) AS [LAST_BILL],
SUM(TOTAL_DUE + TOTAL_PAYMENT_AMT) AS [AMOUNT_DUE]
FROM TABLE_NAME
GROUP BY CUSTOMER_ID
```
Then you would get the amount yet to be paid in a single row.
If you just wish to sum a single line, forget the `group by` and simply do:
```
SELECT CUSTOMER_ID,
BILL_DATE,
(TOTAL_DUE + TOTAL_PAYMENT_AMT) AS [AMOUNT_DUE]
FROM TABLE_NAME
```
|
How to read every row in SQL
|
[
"",
"sql",
"teradata",
""
] |
In my table, I have a column which stores a date. I am passing a month number as an argument to a stored procedure.
I would like to delete all entries of that month from the table.
Is it possible?
|
I think this is your answer:
```
delete from yourtable where month(YourDatetimeColumn) = 5 -- for example 'may'
```
OR
```
delete from yourtable where datepart(mm,YourDatetimeColumn) = 5 -- for example 'may'
```
Note : replace `5` with your input parameter. This will not consider the `year` part of your date so if it is may-2014 or may-2015, all will be deleted.
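If you ever do need to restrict the delete to one month of a specific year, a sketch (hard-coding May 2015 for illustration; substitute your own parameters) would be:

```
delete from yourtable
where month(YourDatetimeColumn) = 5      -- month parameter
  and year(YourDatetimeColumn) = 2015    -- optional year restriction
```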
|
I am not familiar with SQL Server versions, but you're tagged as 2005.
If there are issues with `month()` then you can also use:
```
delete from your_table where datepart(month, table_datefield) = @procedure_argument_month
```
But as in the other answer, this will delete all rows whose month matches the one you provided, regardless of the year.
|
Delete Data From Table of Specific Month
|
[
"",
"sql",
"sql-server-2005",
""
] |
I have two select statements like below
```
SELECT * FROM TABLE1
```
gives me
```
COLUMNA, COLUMNB, COLUMNC
```
and
```
SELECT * FROM TABLE2
```
gives me
```
COLUMNA, COLUMNB, COLUMND
```
COLUMNA and COLUMNB are identical (same number of rows and cell values) in those two SELECTs.
How can I merge these two SELECTs so I can have four columns with one query and no extra rows?
```
COLUMNA, COLUMNB, COLUMNC, COLUMND
```
Updating my question based on comments. Let's say I have two tables like below
```
TABLE1 TABLE2
COLUMNA COLUMNB COLUMNC COLUMNA COLUMNB COLUMND
value1 value2 value3 value1 value2 value9
value4 null value5 value4 null value10
null value6 value7 null value6 value11
null null value8 null null value12
```
result should be
```
COLUMNA COLUMNB COLUMNC COLUMND
value1 value2 value3 value9
value4 null value5 value10
null value6 value7 value11
null null value8 value12
```
|
I guess you're looking for something like this.
```
SELECT COLUMNA, COLUMNB, COLUMNC, NULL as COLUMND FROM TABLE1
UNION
SELECT COLUMNA, COLUMNB, NULL as COLUMNC, COLUMND FROM TABLE2
```
or maybe:
```
SELECT COLUMNA, COLUMNB, COLUMNC FROM TABLE1
UNION
SELECT COLUMNA, COLUMNB, COLUMND as COLUMNC FROM TABLE2
```
|
This will give you 3 columns, the third being the different valued one:
```
SELECT
COLUMNA, COLUMNB, COLUMNC AS [Column3]
FROM TABLE1
UNION ALL
SELECT
COLUMNA, COLUMNB, COLUMND AS [Column3]
FROM TABLE2
```
|
Merge two SELECT statements which have one or more additional columns
|
[
"",
"sql",
"select",
"merge",
""
] |
I am currently working in sql 2012 visual management studio. I have two tables. Table1 has three columns (ItemNumber as varchar, Quantity as int, and TimeOrdered as datetime). Table2 has 2 columns (ItemNumber as varchar, and Price as float). Please note these item numbers are not the same, the part numbers on table 1 have a letter after the number while the table 2 item number does not. For example on table 1 the item number will look something like this 999999999-E and the other table will just be 999999999-. Therefore I must use a select Left for 10 digits to get the part number.
I need to pull a list of item numbers from table 1 based on the time ordered, then cross-compare that list to table 2 and multiply the price by the quantity for a grand total. Here is my code so far:
```
SELECT sum(tbl.quantity * table2.price) as grandtotal,
tbl.PartNumber,
tbl.quanity,
table2.price
FROM
(SELECT left(itemnumber, 10) as itemnumber, quantity
FROM table1
WHERE TimeOrdered between
('2014-05-05 00:00:00.000')
AND
('2015-05-05 00:00:00.000')) as tbl
Left table2 on
tbl.partnumber =tbl2.itemnumber
```
I am receiving an error here for aggregate columns but I am not sure this is the correct way to go about this to begin with.
-------------update---------------
I got it working. Sorry for taking so long to get back to you guys, I was stuck in a meeting all day.
|
How about this? The `case` is just to avoid divide-by-zero errors.
```
SELECT sum( Isnull(tbl.quantity,0) * Isnull(tbl2.price,0) ) as grandtotal,
tbl.itemnumber,
Sum(tbl.quantity),
case when Isnull(Sum(tbl.quantity),0) = 0 then null else
sum(Isnull(tbl.quantity,0) * Isnull(tbl2.price,0) ) / Sum(tbl.quantity) end
as Price
FROM
(SELECT left(itemnumber, 10) as itemnumber, quantity FROM table1 WHERE TimeOrdered between
('2014-05-05 00:00:00.000')
AND ('2015-05-05 00:00:00.000')) as tbl
Left outer join table2 tbl2 on
tbl.itemnumber = tbl2.itemnumber
group by tbl.itemnumber
```
|
**[SQL Fiddle Example](http://sqlfiddle.com/#!6/99171/1)**
```
SELECT SUM(t1.quantity * t2.price) AS 'GrandTotal'
,SUM(t1.quantity) AS 'Quantity'
,t1.itemnumber
,t2.price
FROM Table1 t1
JOIN Table2 t2 ON LEFT(t1.itemnumber, 10) = t2.itemnumber
WHERE t1.Timeordered BETWEEN '2014-05-05 00:00:00.000' AND '2015-05-05 00:00:00.000'
GROUP BY t1.itemnumber, t2.price
```
|
Select left 10 numbers, left join for a price from second table, and then sum, SQL
|
[
"",
"sql",
"select",
"sql-server-2012",
"sum",
"multiplication",
""
] |
I have this SQL statement. It works, and I need to add one more condition.
I need to filter it by date. **occurence** is my date column.
```
SELECT dd.caption, COUNT(t.occurence)
FROM transaction t
INNER JOIN dict_departments dd
ON dd.id = t.terminal_id
GROUP BY dd.caption
```
How to add this condition:
```
WHERE t.occurence BETWEEN (CURRENT_DATE() - INTERVAL 1 MONTH)
```
to my query.
|
`BETWEEN` requires two arguments, a start point and an end point. If your end point is the current time, you have two options:
1. Using `BETWEEN`:
`WHERE t.occurence BETWEEN (CURRENT_DATE() - INTERVAL 1 MONTH) AND NOW()`
2. Using simple comparison operator:
`WHERE t.occurence >= (CURRENT_DATE() - INTERVAL 1 MONTH)`
|
Try this:
```
WHERE t.occurence BETWEEN current_date() AND dateadd(month,1,current_date())
```
The function *dateadd* is a SQL SERVER function, but the rest of the clause is standard SQL.
|
How to add date condition to my query?
|
[
"",
"mysql",
"sql",
""
] |
I have three tables: the **Patients** table, which contains the names of my patients; the **Controls** table, which represents the UI controls that could be drawn for each illness; and the **ControlsValues** table, which contains the values of the controls submitted for each patient.
Let's have some data. **Patients** table:
```
|ID | Name |
|-----------|
| 1 | Ara |
| 2 | Sada |
```
**Controls** table
```
|ID | Text | Type |
|-----------|----------|
| 1 | age | textbox |
| 2 |alergy| checkbox |
```
Then the ControlsValues table, which is the one I want to query:
```
|ID | contrlId | value | patientId |
|---------------|----------|-----------|
| 1 | 1 | 23 | 1 |
| 2 | 2 | true | 1 |
| 3 | 1 | 26 | 2 |
| 4 | 2 | false | 2 |
```
Here my problem occurs: I want to return the patient from the ControlsValues table that has `(controlId=1 AND value=23) and (controlId=2 AND value=true)`. In this case the condition is on two rows, not two columns, which is not possible directly. So I decided to change the rows into columns depending on controlId, but I don't know how. I have been searching for 2 days and have seen a lot of samples, but none of them helped me solve my problem.
|
A way to solve the problem is with subqueries
```
select patientId from controlValues
where controlId=1 AND value=23
and patientId in (
select patientId
from controlValues
where controlId=2 and value=true
)
```
|
try this one
```
select * from controlvalues;
+----+----------+-------+-----------+
| id | contrlId | value | patientId |
+----+----------+-------+-----------+
| 1 | 1 | 23 | 1 |
| 2 | 2 | true | 1 |
| 3 | 1 | 26 | 2 |
| 4 | 2 | false | 2 |
| 5 | 1 | 23 | 3 |
| 6 | 2 | true | 3 |
+----+----------+-------+-----------+
6 rows in set (0.00 sec)
```
```
mysql> SELECT cv1.patientId p1, cv1.contrlId ctrl1, cv1.value val1, cv2.patientId p2, cv2.contrlId ctrl2, cv2.value val2
       FROM controlvalues cv1, controlvalues cv2
       WHERE cv1.patientId = cv2.patientId and cv1.contrlId = 2 and cv1.value = 'true' and cv2.contrlId = 1 and cv2.value = '23';
```
```
+------+-------+------+------+-------+------+
| p1 | ctrl1 | val1 | p2 | ctrl2 | val2 |
+------+-------+------+------+-------+------+
| 1 | 2 | true | 1 | 1 | 23 |
| 3 | 2 | true | 3 | 1 | 23 |
+------+-------+------+------+-------+------+
2 rows in set (0.00 sec)
```
|
How to convert rows to columns and query on them in Mysql
|
[
"",
"mysql",
"sql",
""
] |
I have a query that does a basic moving average using the `FOLLOWING` / `PRECEDING` syntax of PostgreSQL 9.0. To my horror I discovered our pg server runs on 8.4 and there is no scope to get an upgrade in the near future.
I am therefore looking for the simplest way to make a backwards compatible query of the following:
```
SELECT time_series,
avg_price AS daily_price,
CASE WHEN row_number() OVER (ORDER BY time_series) > 7
THEN avg(avg_price) OVER (ORDER BY time_series DESC ROWS BETWEEN 0 FOLLOWING
AND 6 FOLLOWING)
ELSE NULL
END AS avg_price
FROM (
SELECT to_char(closing_date, 'YYYY/MM/DD') AS time_series,
SUM(price) / COUNT(itemname) AS avg_price
FROM auction_prices
WHERE itemname = 'iphone6_16gb' AND price < 1000
GROUP BY time_series
) sub
```
It is a basic 7-day moving average for a table containing price and timestamp columns:
```
closing_date timestamp
price numeric
itemname text
```
The requirement for basic is due to my basic knowledge of SQL.
|
[Postgres 8.4 already has CTEs.](http://www.postgresql.org/docs/8.4/interactive/queries-with.html)
I suggest to use that, calculate the daily average in a CTE and then self-join to all days (existing or not) in the past week. Finally, aggregate once more for the weekly average:
```
WITH cte AS (
SELECT closing_date::date AS closing_day
, sum(price) AS day_sum
, count(price) AS day_ct
FROM auction_prices
WHERE itemname = 'iphone6_16gb'
AND price <= 1000 -- including upper border
GROUP BY 1
)
SELECT d.closing_day
, CASE WHEN d.day_ct > 1
THEN d.day_sum / d.day_ct
ELSE d.day_sum
END AS avg_day -- also avoids division-by-zero
, CASE WHEN sum(w.day_ct) > 1
THEN sum(w.day_sum) / sum(w.day_ct)
ELSE sum(w.day_sum)
END AS week_avg_proper -- also avoids division-by-zero
FROM cte d
JOIN cte w ON w.closing_day BETWEEN d.closing_day - 6 AND d.closing_day
GROUP BY d.closing_day, d.day_sum, d.day_ct
ORDER BY 1;
```
[SQL Fiddle.](http://sqlfiddle.com/#!15/6b283/5) (Running on Postgres 9.3, but should work in 8.4, too.)
### Notes
* I used a **different (correct) algorithm** to calculate the weekly average. See considerations in my [comment to the question](https://stackoverflow.com/questions/30107756/sql-workaround-to-substitute-following-preceeding-in-postgresql-8-4#comment48358066_30107756).
* This calculates averages for *every* day in the base table, including corner cases. But no row for days without any rows.
* One can subtract `integer` from `date`: `d.closing_day - 6`. (But not from `varchar` or `timestamp`!)
* It's rather confusing that you call a `timestamp` column `closing_date` - it's not a `date`, it's a `timestamp`.
And `time_series` for the resulting column with a `date` value? I use `closing_day` instead ...
* Note how I count prices `count(price)`, *not* items `COUNT(itemname)` - which would be an entry point for a sneaky error if either of the columns can be NULL. If *neither* can be NULL `count(*)` would be superior.
* The `CASE` construct avoids division-by-zero errors, which can occur as long as the column you are counting *can* be NULL. I could use `COALESCE` for the purpose, but while being at it I simplified the case for exactly 1 price as well.
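For reference, a related idiom uses `NULLIF` to the same end, reusing `cte` from the query above (a sketch; it returns NULL instead of raising a division-by-zero error when a count is 0):

```
SELECT d.closing_day
     , sum(w.day_sum) / NULLIF(sum(w.day_ct), 0) AS week_avg  -- NULL when count is 0
FROM   cte d
JOIN   cte w ON w.closing_day BETWEEN d.closing_day - 6 AND d.closing_day
GROUP  BY d.closing_day
ORDER  BY 1;
```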
|
```
-- make a subset and rank it on date
WITH xxx AS (
SELECT
rank() OVER(ORDER BY closing_date) AS rnk
, closing_date
, price
FROM auction_prices
WHERE itemname = 'iphone6_16gb' AND price < 1000
)
-- select subset, + aggregate on self-join
SELECT this.*
, (SELECT AVG(price) AS mean
FROM xxx that
WHERE that.rnk > this.rnk + 0 -- <<-- adjust window
AND that.rnk < this.rnk + 7 -- <<-- here
)
FROM xxx this
ORDER BY this.rnk
;
```
* Note: the CTE is for convenience (Postgres 8.4 does have CTEs), but the CTE could be replaced by a subquery or, more elegantly, by a view.
* The code assumes that the time series has no gaps (one observation for every {product\*day}). When that is not the case: join with a calendar table (which could also contain the rank).
* (Also note that I did not cover the corner cases.)
|
SQL workaround to substitute FOLLOWING / PRECEEDING in PostgreSQL 8.4
|
[
"",
"sql",
"postgresql",
"window-functions",
"postgresql-8.4",
"moving-average",
""
] |
I have developed a few SSIS packages on my local system and deployed them to a remote SQL Server. Now my local hard disk has crashed. Is there a way to get back the packages that are deployed to SQL Server?
Thanks in advance
|
If you have access to the server itself, you or somebody with the appropriate permissions can login to the server that is running SQL Server Integration Services, from there you can open SSMS, connect to Integration Services, and export any package that has been deployed to that server.
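For packages deployed with the legacy (msdb) deployment model, the package XML can also be read straight from the server tables. A sketch, assuming you have the appropriate permissions on `msdb` (project-deployment packages in SSIS 2012 live in the `SSISDB` catalog instead):

```
-- list packages stored in msdb and extract their XML definitions
SELECT name,
       CAST(CAST(packagedata AS varbinary(max)) AS xml) AS package_xml
FROM   msdb.dbo.sysssispackages;
```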
|
Do something like this:
1) Start integration services project.
2) Right click, Add existing package.
3) Select SSIS Package store.
4) Type the name of the remote server.
5) You can import any package locally from the remote server.
|
Is it possible to recover the SSIS packages that are deployed in SQL Server?
|
[
"",
"sql",
"sql-server",
"ssis",
"ssis-2012",
""
] |
I have a table that looks like this...
```
id city date
1 chicago 5/1
1 chicago 5/2
1 new york 5/1
2 new york 5/3
2 seattle .
3 chicago .
4 seattle .
4 seattle .
```
And I want to create a third column that takes the value of 'city' where the specific city makes up the majority (>51%) of the number of entries a single ID has. So for example, id #1 would have favorite\_city = 'chicago'. I'm not sure where to even start...
Help is much appreciated. Thanks!
|
```
WITH
summary As
(
SELECT
your_table.*,
COUNT(*) OVER (PARTITION BY id) AS id_count,
COUNT(*) OVER (PARTITION BY id, city) AS id_city_count
FROM
your_table
)
SELECT
summary.*,
MAX(
CASE WHEN id_city_count * 2 > id_count THEN city ELSE NULL END
)
OVER (PARTITION BY id)
FROM
summary
```
|
This works, but it returns all tied cities (not a unique one) for ids that have an equal count of cities:
```
with a as( select * from (
select id, city, nb,
rank() OVER (PARTITION BY id ORDER BY nb DESC) as rnk
from(
select id, city, count(city) nb
from test
group by id, city)as t group by id, city,nb) as tt where rnk =1)
select test.id as id, test.city as city, a.city as favcity from
test, a where test.id= a.id
```
Live demo and output [HERE](http://sqlfiddle.com/#!15/247b4/19)
|
How do you create a flag that takes the value of column value depending whether the values make up the majority of the count?
|
[
"",
"sql",
"postgresql",
""
] |
I have started to learn MySQL.
Here is the table `world`:
```
+-------------+-----------+---------+
| name | continent | area |
+-------------+-----------+---------+
| Afghanistan | Asia | 652230 |
| Albania | Europe | 2831741 |
| Algeria | Africa | 28748 |
| ... | ... | ... |
+-------------+-----------+---------+
```
I need:
> List each continent and the name of the country that comes first alphabetically
The result of SELECT must be:
```
+---------------+---------------------+
| continent | name |
+---------------+---------------------+
| Africa | Algeria |
| Asia | Afghanistan |
| Caribbean | Antigua and Barbuda |
| Eurasia | Armenia |
| Europe | Albania |
| North America | Belize |
| Oceania | Australia |
| South America | Argentina |
+---------------+---------------------+
```
|
This is a simple aggregation:
```
SELECT continent, MIN(name) AS name
FROM world
GROUP BY continent
ORDER by continent
```
|
If it's an exercise from [SQLZoo](http://sqlzoo.net/wiki/SELECT_within_SELECT_Tutorial), then IMO it should look something like this:
```
select continent, name from world x
where name = (select name
from world y
where x.continent = y.continent
order by name asc
limit 1)
```
P.S. I study SQL there now and this post helped me. Thanks to @Parado!
Update: I've found this [site](http://weeblike.weebly.com/sum-and-count.html) with answers. Useful if stuck.
|
MySQL - SELECT the name that comes first alphabetically
|
[
"",
"mysql",
"sql",
""
] |
I have this query:
```
SELECT * FROM user_state_logs WHERE user_state_logs.user_id = 1
AND created_at BETWEEN '2015-05-03 11:06:05' AND '2015-05-13 11:06:05'
```
That returns this
```
user_id state duration
1 call 10
1 call 20
1 wait 30
1 call 10
1 wait 20
```
I would like to return this
```
user_id state duration
1 call 40
1 wait 50
```
I've tried adding a `GROUP BY user_state_logs.state`
But I get this error
> ERROR: column "user\_state\_logs.user\_id" must appear in the GROUP BY
> clause or be used in an aggregate function
So, two questions:
Why am I getting this error?
How could I get the query to return the data in the second format?
|
Try to do it that way:
```
SELECT user_state_logs.state, user_state_logs.user_id, sum(user_state_logs.duration) as duration
FROM user_state_logs
WHERE user_state_logs.user_id = 1
AND created_at BETWEEN '2015-05-03 11:06:05' AND '2015-05-13 11:06:05'
GROUP BY user_state_logs.state, user_state_logs.user_id
```
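A minimal sketch of the grouped `SUM` against in-memory SQLite (PostgreSQL enforces the same `GROUP BY` rule; the data mirrors the question):

```python
import sqlite3

# Sample rows from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_state_logs (user_id INTEGER, state TEXT, duration INTEGER)")
conn.executemany("INSERT INTO user_state_logs VALUES (?, ?, ?)", [
    (1, "call", 10), (1, "call", 20), (1, "wait", 30), (1, "call", 10), (1, "wait", 20),
])
# Every non-aggregated selected column appears in GROUP BY.
rows = conn.execute(
    "SELECT user_id, state, SUM(duration) FROM user_state_logs "
    "WHERE user_id = 1 GROUP BY user_id, state ORDER BY state"
).fetchall()
print(rows)  # [(1, 'call', 40), (1, 'wait', 50)]
```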
|
You should group by `user_id` and `state`:
```
SELECT user_id, state, sum(duration)
FROM user_state_logs
WHERE user_state_logs.user_id = 1
AND created_at BETWEEN '2015-05-03 11:06:05' AND '2015-05-13 11:06:05'
GROUP BY user_id, state
```
|
Group by one column and sum the other
|
[
"",
"sql",
"postgresql",
""
] |
What's the difference or benefit of writing `NOT NULL` in a MySQL field definition?
For example if I'm creating a table like this...
```
CREATE TABLE IF NOT EXISTS game(
id INT(11) NOT NULL AUTO_INCREMENT,
name VARCHAR(128) NOT NULL,
description VARCHAR(200) NOT NULL,
PRIMARY KEY(id)
)
```
Here, id and name always have a value, so their NOT NULL is just fine. I get that.
But description is an optional field, so it can be blank.
So, in this situation, should I put NOT NULL or not?
|
NULL and a blank field are [not the same thing](http://en.wikipedia.org/wiki/Null_%28SQL%29) (unless, under some circumstances, you're a brain-dead DBMS *cough*Oracle*cough*).
NULL means unknown or not applicable whereas a blank field means, well, known and blank.
It depends entirely on how you want to handle the field itself. For example, let's consider the middle initial of your name. If you do not wish to distinguish between 'unknown' and 'does not have one', set the column as `NOT NULL` and just use a blank value to represent both - that will ease your queries somewhat.
However, if you want to mail all your clients without middle names (for whatever bizarre reason) without bothering those where you don't *know* if they have one, you need to be able to distinguish between the two classes.
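The middle-initial distinction can be sketched like this (SQLite here, but MySQL treats NULL and `''` the same way; the table and names are hypothetical):

```python
import sqlite3

# Hypothetical table: '' marks "known: has no middle initial", NULL marks "unknown".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, middle_initial TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)", [
    ("Alice", "J"),   # known middle initial
    ("Bob", ""),      # known to have no middle initial
    ("Carol", None),  # unknown
])
no_middle = conn.execute(
    "SELECT name FROM people WHERE middle_initial = ''").fetchall()
unknown = conn.execute(
    "SELECT name FROM people WHERE middle_initial IS NULL").fetchall()
print(no_middle, unknown)  # [('Bob',)] [('Carol',)]
```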
|
First of all, in your situation you really don't have to put NOT NULL. If a field is declared NOT NULL, it's mandatory to supply a value for it on every insert. However, as paxdiablo mentioned earlier, NULL doesn't mean a blank field: a blank field can contain one space or many. In short, if it isn't mandatory for a field to have a value, you really don't have to declare it NOT NULL. (I really don't think it has anything to do with a brain-dead DBMS or not, @paxdiablo.)
|
When to write NOT NULL when creating a MySQL table column
|
[
"",
"mysql",
"sql",
"database",
"null",
"notnull",
""
] |
I have a `VARCHAR2` that I want to `Select`, but it's too long. So, I want to `Split` it into multiples lines like this.
```
SELECT 'Veuillez adresser toute la correspondance à : Aktiva2 Service Comptabilité : 30 B rue de la Petite Hollande 59700 Marcq-en-Baroeul Tel : 03 20 20 34 83@RET@IMPORTANT: pour faciliter nos échanges comptables, lors de votre paiement, par chèque et en particulier par virement, merci de rappeler votre n° de client et n° de facture acquittée.@RET@@RET@Paiement comptant sans escompte même en cas de paiement anticipé. Au-delà de 10 jours, une pénalité de 3 fois le taux d''intérêt légal (Loi 2008-776 du 04 août 2008) sera appliquée, calculée au prorata temporis sur la valeur de facture. @RET@@RET@Conforment à la loi du 29 février 2012 le montant de l''indemnité forfaitaire pour frais de recouvrement est de 40 euros dans le cas où les sommes dues sont réglées après la date de règlement figurant sur la facture.'
FROM dual
```
Now I want to replace **@RET@** with something that starts a new line. The result must be like this:
```
Veuillez adresser toute la correspondance à : Aktiva2 Service Comptabilité : 30 B rue de la Petite Hollande 59700 Marcq-en-Baroeul Tel : 03 20 20 34 83
IMPORTANT: pour faciliter nos échanges comptables, lors de votre paiement, par chèque et en particulier par virement, merci de rappeler votre n° de client et n° de facture acquittée.
Paiement comptant sans escompte même en cas de paiement anticipé. Au-delà de 10 jours, une pénalité de 3 fois le taux d''intérêt légal (Loi 2008-776 du 04 août 2008) sera appliquée, calculée au prorata temporis sur la valeur de facture.
<p>Conforment à la loi du 29 février 2012 le montant de l''indemnité forfaitaire pour frais de recouvrement est de 40 euros dans le cas où les sommes dues sont réglées après la date de règlement figurant sur la facture.
```
|
you can try something like this
```
declare text varchar2(1000);
begin
SELECT 'Veuillez adresser toute la correspondance à : Aktiva2 Service Comptabilité : 30 B rue de la Petite Hollande 59700 Marcq-en-Baroeul Tel : 03 20 20 34 83@RET@IMPORTANT: pour faciliter nos échanges comptables, lors de votre paiement, par chèque et en particulier par virement, merci de rappeler votre n° de client et n° de facture acquittée.@RET@@RET@Paiement comptant sans escompte même en cas de paiement anticipé. Au-delà de 10 jours, une pénalité de 3 fois le taux d''intérêt légal (Loi 2008-776 du 04 août 2008) sera appliquée, calculée au prorata temporis sur la valeur de facture. @RET@@RET@Conforment à la loi du 29 février 2012 le montant de l''indemnité forfaitaire pour frais de recouvrement est de 40 euros dans le cas où les sommes dues sont réglées après la date de règlement figurant sur la facture.'
into text from dual;
select REPLACE(text,'@RET@',chr(10)) into text from dual;
insert into testtable (col_text) values(text);
end;
/
```
This SELECT replaces each @RET@ marker with a line break (chr(10)):
```
select REPLACE(text,'@RET@',chr(10)) into text from dual;
```
|
CHR(10) is a newline character,
so `REPLACE(text, '@RET@', chr(10))` does the job.
Cheers.
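A quick sketch of the idea (SQLite in-memory; SQLite's newline function is `char(10)`, Oracle's is `chr(10)`):

```python
import sqlite3

# Replace the @RET@ marker with a newline character.
conn = sqlite3.connect(":memory:")
text, = conn.execute(
    "SELECT REPLACE('line one@RET@line two', '@RET@', char(10))"
).fetchone()
print(text)
# line one
# line two
```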
|
How to Replace a specific char in a Varchar2 on Select
|
[
"",
"sql",
"oracle",
"replace",
"newline",
""
] |
Context:
I have two tables: markettypewagerlimitgroups (mtwlg) and stakedistributionindicators (sdi). When a mtwlg is created, 2 rows are created in the sdi table which are linked to the mtwlg - each row with the same values bar 2, the id and another field (let's call it column X) which must contain a 0 for one row and 1 for the other.
There was a bug present in our codebase which prevented this happening automatically, so any mtwlg's created during the time that bug was present do not have the related sdi's, causing NPE's in various places.
To fix this, a patch needs to be written to loop through the mtwlg table and for each ID, search the sdi table for the 2 related rows. If the rows are present, do nothing; if there is only 1 row, check if X is a 0 or a 1, and insert a row with the other value; if neither row is present, insert them both. This needs to be done for every mtwlg, and a unique ID needs to be inserted too.
Pseudocode:
```
For each market type wager limit group ID
Check if there are 2 rows with that id in the stake distributions table, 1 where column X = 0 and one where column X = 1
if none
create 2 rows in the stake distributions table with unique id's; 1 for each X value
if one
create the missing row in the stake distributions table with a unique id
if 2
do nothing
```
If it helps at all - the patch will be applied using liquibase.
Anyone with any advice or thoughts as to if and how this will be possible to write in SQL/a liquibase patch?
Thanks in advance, let me know of any other information you need.
EDIT:
I've actually just been advised to do this using PL/SQL, do you have any thoughts/suggestions in regards to this?
Thanks again.
|
Oooooh, an excellent job for [`MERGE`](https://docs.oracle.com/database/121/SQLRF/statements_9016.htm).
Here's your pseudo code again:
```
For each market type wager limit group ID
Check if there are 2 rows with that id in the stake distributions table,
1 where column X = 0 and one where column X = 1
if none
create 2 rows in the stake distributions table with unique id's;
1 for each X value
if one
create the missing row in the stake distributions table with a unique id
if 2
do nothing
```
Here's the `MERGE` variant (still pseudo-code'ish as I don't know how your data really looks):
```
MERGE INTO stake_distributions d
USING (
SELECT limit_group_id, 0 AS x
FROM market_type_wagers
UNION ALL
SELECT limit_group_id, 1 AS x
FROM market_type_wagers
) t
ON (
d.limit_group_id = t.limit_group_id AND d.x = t.x
)
WHEN NOT MATCHED THEN INSERT (d.limit_group_id, d.x)
VALUES (t.limit_group_id, t.x);
```
No loops, no PL/SQL, no conditional statements, just plain beautiful SQL.
Nice alternative suggested by [Boneist](https://stackoverflow.com/a/30233998/521799) in the comments uses a `CROSS JOIN` rather than `UNION ALL` in the `USING` clause, which is *likely* to perform better (unverified):
```
MERGE INTO stake_distributions d
USING (
SELECT w.limit_group_id, x.x
FROM market_type_wagers w
CROSS JOIN (
SELECT 0 AS x FROM DUAL
UNION ALL
SELECT 1 AS x FROM DUAL
) x
) t
ON (
d.limit_group_id = t.limit_group_id AND d.x = t.x
)
WHEN NOT MATCHED THEN INSERT (d.limit_group_id, d.x)
VALUES (t.limit_group_id, t.x);
```
|
Answer: you don't. There is absolutely no need to loop through anything - you can do it in a single insert. All you need to do is identify the rows that are missing, and then you just need to add them in.
Here is an example:
```
drop table t1;
drop table t2;
drop sequence t2_seq;
create table t1 (cola number,
colb number,
colc number);
create table t2 (id number,
cola number,
colb number,
colc number,
colx number);
create sequence t2_seq
START WITH 1
INCREMENT BY 1
MAXVALUE 99999999
MINVALUE 1
NOCYCLE
CACHE 20
NOORDER;
insert into t1 values (1, 10, 100);
insert into t2 values (t2_seq.nextval, 1, 10, 100, 0);
insert into t2 values (t2_seq.nextval, 1, 10, 100, 1);
insert into t1 values (2, 20, 200);
insert into t2 values (t2_seq.nextval, 2, 20, 200, 0);
insert into t1 values (3, 30, 300);
insert into t2 values (t2_seq.nextval, 3, 30, 300, 1);
insert into t1 values (4, 40, 400);
commit;
insert into t2 (id, cola, colb, colc, colx)
with dummy as (select 1 id from dual union all
select 0 id from dual)
select t2_seq.nextval,
t1.cola,
t1.colb,
t1.colc,
d.id
from t1
cross join dummy d
left outer join t2 on (t2.cola = t1.cola and d.id = t2.colx)
where t2.id is null;
commit;
select * from t2
order by t2.cola;
ID COLA COLB COLC COLX
---------- ---------- ---------- ---------- ----------
1 1 10 100 0
2 1 10 100 1
3 2 20 200 0
5 2 20 200 1
7 3 30 300 0
4 3 30 300 1
6 4 40 400 0
8 4 40 400 1
```
|
Oracle SQL - How can I write an insert statement that is conditional and looped?
|
[
"",
"sql",
"oracle",
"sql-update",
"conditional-statements",
"liquibase",
""
] |
I'm pretty sure this is an easy question, but I'm having trouble wording it.
I need to count the total number of values in one column based on distinct criteria in another column.
Example:
```
A CD
B ABC
C AD
D A
```
Would yield:
```
A 3
B 1
C 2
D 2
```
|
First, you shouldn't be storing lists of things in a string.
But, sometimes one is stuck with this format. In your example, you seem to have a table with all possible values. If so you can use a `join`:
```
select e.col1, count(e2.col2)
from example e left join
example e2
on charindex(e.col1, e2.col2) > 0
group by e.col1;
```
Note: this counts rows containing the value, rather than individual occurrences. If multiple values appear in a single row, the query is a bit more complicated.
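Here is the same join sketched against SQLite, using `instr()` in place of SQL Server's `CHARINDEX` (note the swapped argument order); the data mirrors the question:

```python
import sqlite3

# Self-join: count the rows whose col2 string contains each col1 value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO example VALUES (?, ?)", [
    ("A", "CD"), ("B", "ABC"), ("C", "AD"), ("D", "A"),
])
rows = conn.execute(
    "SELECT e.col1, COUNT(e2.col2) FROM example e "
    "LEFT JOIN example e2 ON instr(e2.col2, e.col1) > 0 "
    "GROUP BY e.col1 ORDER BY e.col1"
).fetchall()
print(rows)  # [('A', 3), ('B', 1), ('C', 2), ('D', 2)]
```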
|
Here is how you can do it:
```
DECLARE @t TABLE ( c1 CHAR(1), c2 VARCHAR(5) )
INSERT INTO @t
VALUES ( 'A', 'CD' ),
( 'B', 'ABC' ),
( 'C', 'AD' ),
( 'D', 'A' )
SELECT  t.c1 ,
        SUM(ca.cnt) AS cnt
FROM    @t t
        CROSS APPLY ( SELECT LEN(c2) - LEN(REPLACE(c2, t.c1, '')) AS cnt
                      FROM @t
                      WHERE c2 LIKE '%' + t.c1 + '%'
                    ) ca
GROUP BY t.c1
```
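The `LEN`/`REPLACE` occurrence-counting trick at the core of this answer can be sketched on its own (SQLite uses `LENGTH` instead of `LEN`):

```python
import sqlite3

# Length before minus length after removing 'A' = number of 'A' occurrences.
conn = sqlite3.connect(":memory:")
count, = conn.execute(
    "SELECT LENGTH('ABACA') - LENGTH(REPLACE('ABACA', 'A', ''))"
).fetchone()
print(count)  # 3
```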
|
Count distinct records in one column with multiple values in another column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table sample table as follows:
```
ID | City
--------------
1 | New York
2 | San Francisco
3 | New York
4 | Los Angeles
5 | Atlanta
```
I would like to select the distinct City AND the TOP ID for each. E.g., conceptually I would like to do the following
```
SELECT TOP 1 ID, DISTINCT City
FROM Cities
```
Should give me:
```
ID | City
--------------
1 | New York
2 | San Francisco
4 | Los Angeles
5 | Atlanta
```
Because New York appears twice, it's taken the first ID `1` in this instance.
But I get the error:
> Column 'Cities.ID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
|
Try this way:
```
SELECT min(ID), City
FROM Cities
Group by City
```
`MIN` function is used for choose one of the `ID` from two `New York` cities.
|
You need to have your city in a `GROUP BY`
```
SELECT MIN(ID), City
FROM Cities
GROUP BY City
```
|
SELECT Top 1 ID, DISTINCT Field
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a 'Case' conditional code snippet wherein I am returning two objects.
The pseudocode is -
```
case name
when 'a'
object1 = - SQL Logic -
when 'b'
object1 = - SQL Logic -
when 'c'
object2 = - SQL Logic -
when 'd'
object2 = - SQL Logic -
end
return object1, object2
```
As evident, I am returning two objects. However, in my controller I need one object at a time. The objects are returned as an array like `['value', nil]`; one of them is always nil.
In my controller I am passing one of these objects as -
```
Model.find_by_sql ["select * from #{object}"] #either object1 or object2
```
Is there any way I can avoid the array and return only the object that is required, as a string?
Thanks.
|
`return [object1, object2].compact`
you can use `compact` method to remove `nil` value of array.
|
While you can use `compact` to eliminate the `nil` values from your array, I'm not sure why you need this in the first place.
Doing
```
case name
when 'a'
return "SQL statement"
when 'b'
return "SQL statement"
when 'c'
return "SQL statement"
when 'd'
return "SQL statement"
end
```
is way more intuitive.
|
Getting an Array from Return statement, deduce a string from that
|
[
"",
"sql",
"ruby-on-rails",
"arrays",
"ruby",
"return",
""
] |
I have the below SQL query in MS Access 2010 which works fine in the query design window. Now I want to run it through VBA.
ACTUAL QUERY:
```
INSERT INTO MAINRESULT(EXNAME, MASTER_TICKER,MASTER_CUSIP,TL_TICKER,TL_CUSIP,FC_CUSIP,FC_TICKER)
SELECT EXC AS MY_EXC,
SUM(IIF(MASTER_TICKER <> "NULL", 1, 0)) AS MY_MASTER_TICKER,
SUM(IIF(MASTER_CUSIP <> "NULL", 1, 0)) AS MY_MASTER_CUSIP,
SUM(IIF(TL_TICKER <> "NULL", 1, 0)) AS MY_TL_TICKER,
SUM(IIF(TL_CUSIP <> "NULL", 1, 0)) AS MY_TL_CUSIP,
SUM(IIF(FC_CUSIP <> "NULL", 1, 0)) AS MY_FC_CUSIP,
SUM(IIF(FC_TICKER <> "NULL", 1, 0)) AS MY_FC_TICKER
FROM TESTDATA
GROUP BY EXC;
```
MY VBA code snippet:
```
Dim strSQL As String
Dim db As Database
Set db = CurrentDb
strSQL = "INSERT INTO MAINRESULT(EXNAME, MASTER_TICKER,MASTER_CUSIP,TL_TICKER,TL_CUSIP,FC_CUSIP,FC_TICKER) " & _
"SELECT EXC AS MY_EXC, " & _
"SUM(IIF(MASTER_TICKER <> "NULL", 1, 0)) AS MY_MASTER_TICKER, " & _
"SUM(IIF(MASTER_CUSIP <> "NULL", 1, 0)) AS MY_MASTER_CUSIP, " & _
"SUM(IIF(TL_TICKER <> "NULL", 1, 0)) AS MY_TL_TICKER, " & _
"SUM(IIF(TL_CUSIP <> "NULL", 1, 0)) AS MY_TL_CUSIP, " & _
"SUM(IIF(FC_CUSIP <> "NULL", 1, 0)) AS MY_FC_CUSIP, " & _
"SUM(IIf(FC_TICKER <> "NULL", 1, 0)) As MY_FC_TICKER " & _
"FROM TESTDATA GROUP BY EXC;"
DoCmd.RunSQL strSQL
```
Unfortunately it gives an error because the double quotes around "NULL" terminate the VBA string, so the editor always shows the line in red. Is there an alternative way? The column names will also come dynamically from a select list box, so I have to build strSQL in a parameterized way.
|
Finally it works beautifully using the Chr$(39) approach:
```
Dim MYSTR As String
MYSTR = "NULL"
'teststrSQL = "SELECT EXC,FC_CUSIP FROM TESTDATA " & _
"WHERE FC_CUSIP = " & Chr$(39) & MYSTR & Chr$(39)
finalstrSQL = "SELECT EXC AS MY_EXC, " & _
"SUM(IIF(MASTER_CUSIP <> " & Chr$(39) & MYSTR & Chr$(39) & " , 1, 0)) AS MY_MASTER_CUSIP, " & _
"SUM(IIF(MASTER_TICKER <> " & Chr$(39) & MYSTR & Chr$(39) & " , 1, 0)) AS MY_MASTER_TICKER, " & _
"SUM(IIF(TL_TICKER <> " & Chr$(39) & MYSTR & Chr$(39) & " , 1, 0)) AS MY_TL_TICKER, " & _
"SUM(IIF(TL_CUSIP <> " & Chr$(39) & MYSTR & Chr$(39) & " , 1, 0)) AS MY_TL_CUSIP, " & _
"SUM(IIF(FC_CUSIP <> " & Chr$(39) & MYSTR & Chr$(39) & " , 1, 0)) AS MY_FC_CUSIP, " & _
"SUM(IIF(FC_TICKER <> " & Chr$(39) & MYSTR & Chr$(39) & " , 1, 0)) AS MY_FC_TICKER " & _
"FROM TESTDATA GROUP BY EXC;"
```
|
You seem to be confusing NULL with a value; NULL is something that cannot be compared against with `=` or `<>`. You can use two variations. One is,
```
Dim strSQL As String
Dim db As Database
Set db = CurrentDb
strSQL = "INSERT INTO MAINRESULT(EXNAME, MASTER_TICKER,MASTER_CUSIP,TL_TICKER,TL_CUSIP,FC_CUSIP,FC_TICKER) " & _
"SELECT EXC AS MY_EXC, " & _
"SUM(IIF(MASTER_TICKER Is Not Null, 1, 0)) AS MY_MASTER_TICKER, " & _
"SUM(IIF(MASTER_CUSIP Is Not Null, 1, 0)) AS MY_MASTER_CUSIP, " & _
"SUM(IIF(TL_TICKER Is Not Null, 1, 0)) AS MY_TL_TICKER, " & _
"SUM(IIF(TL_CUSIP Is Not Null, 1, 0)) AS MY_TL_CUSIP, " & _
"SUM(IIF(FC_CUSIP Is Not Null, 1, 0)) AS MY_FC_CUSIP, " & _
"SUM(IIf(FC_TICKER Is Not Null, 1, 0)) As MY_FC_TICKER " & _
"FROM TESTDATA GROUP BY EXC;"
DoCmd.RunSQL strSQL
```
Or you can use the IsNull function.
If the value is actually the string "NULL", then you should compare against the string literal: `<> 'NULL'`.
**EDIT**
Try the following,
```
Dim strSQL As String
Dim db As Database
Set db = CurrentDb
strSQL = "INSERT INTO MAINRESULT(EXNAME, MASTER_TICKER,MASTER_CUSIP,TL_TICKER,TL_CUSIP,FC_CUSIP,FC_TICKER) " & _
"SELECT EXC AS MY_EXC, " & _
"SUM(IIF(MASTER_TICKER <> 'NULL', 1, 0)) AS MY_MASTER_TICKER, " & _
"SUM(IIF(MASTER_CUSIP <> 'NULL', 1, 0)) AS MY_MASTER_CUSIP, " & _
"SUM(IIF(TL_TICKER <> 'NULL', 1, 0)) AS MY_TL_TICKER, " & _
"SUM(IIF(TL_CUSIP <> 'NULL', 1, 0)) AS MY_TL_CUSIP, " & _
"SUM(IIF(FC_CUSIP <> 'NULL', 1, 0)) AS MY_FC_CUSIP, " & _
"SUM(IIf(FC_TICKER <> 'NULL', 1, 0)) As MY_FC_TICKER " & _
"FROM TESTDATA GROUP BY EXC;"
DoCmd.RunSQL strSQL
```
|
ms-access VBA long sql query string line split ( with double quote inline) in editor
|
[
"",
"sql",
"ms-access",
"vba",
""
] |
I need a query that returns n spaces (" ") that I need to
append to a fixed-length file.
So far I have been able to create
```
select ' ' from dual connect by level < 254
```
This returns multiple records; I need only one record with 253 spaces.
|
Check the [LPAD](http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_2017.htm#OLADM593) function:
```
select LPAD(' ', 254, ' ') from dual connect by level < 254;
```
***p.s.***: if only one row is required use
```
select LPAD(' ', 254, ' ') from dual;
```
This pre-pends as many spaces `' '` to the string `' '` as needed, so that the returned value has a length of 254.
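SQLite has no `LPAD`/`RPAD`, but `printf` with a field width produces the same fixed-width padding, which makes the idea easy to verify:

```python
import sqlite3

# A single row containing 253 spaces, via printf's field-width padding.
conn = sqlite3.connect(":memory:")
spaces, = conn.execute("SELECT printf('%253s', '')").fetchone()
print(len(spaces))  # 253
```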
|
I think this will work:
```
select rpad(' ', 254)
from dual;
```
|
SQL Query that returns n spaces
|
[
"",
"sql",
"oracle",
""
] |
I have a `ParkingLot` model. Parking Lots have a number of available `lots`. Users can then book a parking lot for one or more days. Hence I have a `Booking` model.
```
class ParkingLot
has_many :bookings
end
class Booking
belongs_to :parking_lot
end
```
# Simplified Usecase
**ParkingLot**
Given a parking lot with 5 available lots:
**Bookings**
* Bob books a place from Monday to Sunday
* Sue makes one booking each on Monday, Wednesday and Friday
* Henry books only on Friday.
* Since the weekend is busy, 4 other people book from Saturday to Sunday.
---
**Edit**
The bookings have a `start_date` & an `end_date`, so Bob's bookings only has *one* entry. `Mon-Sun`.
Sue on the other hand really has three bookings, all starting and ending on the same day. `Mon-Mon`, `Wed-Wed`, `Fri-Fri`.
This gives us following booking data:
For simplicity, instead of the user\_id (`1`) & the date (`2015-5-15`), I will use the initial (`B`) and the week days (`Mon`).
```
––––––––––––––––––––––––––––––––––––––––––
| id | user_id | start_date| end_date| ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 1 | B | Mon | Sun | ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 2 | S | Mon | Mon | ... |
| 3 | S | Wed | Wed | ... |
| 4 | S | Fri | Fri | ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 5 | H | Fri | Fri | ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 6 | W | Sat | Sun | ... |
| 7 | X | Sat | Sun | ... |
| 8 | Y | Sat | Sun | ... |
| 9 | Z | Sat | Sun | ... |
––––––––––––––––––––––––––––––––––––––––––
```
---
This gives us the following week:
```
–––––––––––––––––––––––––––––––––––––––––
| Mon | Tue | Wed | Thu | Fri | Sat | Sun |
|–––––––––––––––––––––––––––––––––––––––––|
| B | B | B | B | B | B | B |
|–––––––––––––––––––––––––––––––––––––––––|
| S | - | S | - | S | - | - |
|–––––––––––––––––––––––––––––––––––––––––|
| - | - | - | - | H | - | - |
|–––––––––––––––––––––––––––––––––––––––––|
| - | - | - | - | - | W | W |
| - | - | - | - | - | X | X |
| - | - | - | - | - | Y | Y |
| - | - | - | - | - | Z | Z |
|=========================================|
| 2 | 1 | 2 | 1 | 3 | 5 | 5 | # Bookings Count
|=========================================|
| 3 | 4 | 3 | 4 | 2 | 0 | 0 | # Available lots
–––––––––––––––––––––––––––––––––––––––––
```
These bookings are already in the database, so when a **new** user wants to book from Monday to Friday, there is space to do so. But when he wants to book from Monday to Saturday, this will not be possible.
My goal is to query for the max number of bookings in a given time range. Ultimately leading to the available lots
```
# Mon - Thursday => max bookings: 2 => 3 available lots
# Mon - Friday => max bookings: 3 => 2 available lots
# Mon - Sunday => max bookings: 5 => 0 available lots
```
**A simple, but wrong approach** of mine was to get all bookings that fall in the given time range:
```
scope :in_range, ->(range) { where("end_date >= ?", range.first).where("start_date <= ?", range.last) }
```
But this is by no means correct. Querying from Monday to Friday returns 5 bookings, one from Bob, one from Henry and three from Sue. This would falsely assume the Parking Lot is full.
---
How would I create such a query to get the max count of bookings in a given time range?
This can also be pure `SQL`, I'll be happy to translate it into `AR` lateron.
|
There is a simple way using a calendar table. If you don't have one already you should create it, it has multiple usages.
```
select
c.calendar_date
,count(b.start_date) -- number of occupied lots
from calendar as c
left join bookings as b -- need left join to get dates where no lot is already booked
on c.calendar_date between b.start_date and b.end_date
-- restrict to the searched range of dates
where calendar_date between date '2015-05-10' and date '2015-05-18'
group by c.calendar_date
order by c.calendar_date
```
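The calendar-table join can be sketched in SQLite with the bookings from the question (the four weekend bookings are omitted; the dates are a hypothetical week starting Monday 2015-05-11):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
# One row per day for the searched week.
conn.execute("CREATE TABLE calendar (calendar_date TEXT PRIMARY KEY)")
start = date(2015, 5, 11)  # a Monday
conn.executemany("INSERT INTO calendar VALUES (?)",
                 [((start + timedelta(days=i)).isoformat(),) for i in range(7)])
conn.execute("CREATE TABLE bookings (start_date TEXT, end_date TEXT)")
conn.executemany("INSERT INTO bookings VALUES (?, ?)", [
    ("2015-05-11", "2015-05-17"),  # Bob, Mon-Sun
    ("2015-05-11", "2015-05-11"),  # Sue, Mon
    ("2015-05-13", "2015-05-13"),  # Sue, Wed
    ("2015-05-15", "2015-05-15"),  # Sue, Fri
    ("2015-05-15", "2015-05-15"),  # Henry, Fri
])
# LEFT JOIN keeps days with zero bookings; COUNT skips the NULLs.
rows = conn.execute(
    "SELECT c.calendar_date, COUNT(b.start_date) FROM calendar c "
    "LEFT JOIN bookings b ON c.calendar_date BETWEEN b.start_date AND b.end_date "
    "GROUP BY c.calendar_date ORDER BY c.calendar_date"
).fetchall()
print(rows)  # Mon=2, Tue=1, Wed=2, Thu=1, Fri=3, Sat=1, Sun=1
```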
**Edit:**
[Vladimir Baranov](https://stackoverflow.com/users/4116017/vladimir-baranov) suggested to add a link on how to create and use a calendar table. Of course the actual implementation is always user and DBMS specific (e.g. [MS SQL Server](http://web.archive.org/web/20070611150639/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html)), so searching for *"calendar table" + yourDBMS* will probably reveal some source code for your system.
In fact the easiest way to create a calendar table is to do the calculation for the range of years you need in a spreadsheet (Excel etc. have all the functions you need, like Easter calculation) and then push it to the database. It's a one-time operation :-)
---
# Rails use case¹
First, create the `CalendarDay` model. I've added more columns than just the `day`, which may come in handy for future scenarios.
**db/migrate/201505XXXXXX\_create\_calendar\_days.rb**
```
class CreateCalendarDays < ActiveRecord::Migration
def change
create_table :calendar_days, id: false do |t|
t.date :day, null: false
t.integer :year, null: false
t.integer :month, null: false
t.integer :day_of_month, null: false
t.integer :day_of_week, null: false
t.integer :quarter, null: false
t.boolean :week_day, null: false
end
execute "ALTER TABLE calendar_days ADD PRIMARY KEY (day)"
end
end
```
Then, after running `rake db:migrate` add a rake task to populate your model:
**lib/tasks/calendar\_days.rake**
```
namespace :calendar_days do
task populate: :environment do
(Date.new(2010,1,1)...Date.new(2049,12,31)).each do |d|
CalendarDay.create(
day: d,
year: d.year,
month: d.month,
day_of_month: d.day,
day_of_week: d.wday,
quarter: (d.month / 4) + 1,
week_day: ![0,6].include?(d.wday)
)
end
end
end
```
And run `rake calendar_days:populate`.
Lastly, you can use ActiveRecord to perform **complex queries** such as the one above:
```
CalendarDay.select("calendar_days.day, count(b.departure_time)")
.joins("LEFT JOIN bookings as b on calendar_days.day BETWEEN b.departure_time and b.arrival_time")
.where(:day => start_date..end_date)
.group(:day)
.order(:day)
# => SELECT "calendar_days"."day", count(b.departure_time)
# FROM "calendar_days"
# LEFT JOIN bookings as b on calendar_days.day BETWEEN b.departure_time and b.arrival_time
# WHERE ("calendar_days"."day" BETWEEN '2015-05-04 13:41:44.877338' AND '2015-05-11 13:42:00.076805')
# GROUP BY day
# ORDER BY "calendar_days"."day" ASC
```
---
1 - Use case added by [TheChamp](https://stackoverflow.com/users/2235594/thechamp)
|
You need to GROUP BY day, since your bookings are day-based. Checking the total bookings on a given day against your total lots gives you the available space for that day.
Let's create a table bookings with the following entries:
```
Book_Date Slot_Id Customer_Id
2015-05-14 1 100
2015-05-14 2 200
2015-05-14 3 400
2015-05-15 1 100
2015-05-16 1 100
2015-05-17 1 100
```
Do this query:
```
SELECT book_date , count(*) AS booked, 5- count(*) AS lot_available
FROM bookings
WHERE book_date >= '2015-05-14' AND book_date < '2015-05-21'
GROUP BY book_date
```
will give you something:
```
book_date booked lot_available
2015-05-14 3 2
2015-05-15 1 4
2015-05-16 1 4
2015-05-17 1 4
```
Now you know how many lots are available for each day.
There is one issue left to solve: if there are no bookings on a specific day, it will not be listed in the above result. You need to add a calendar table or build a small temp table to solve it.
Use this to generate a table for next 7 days:
```
SELECT DATE_ADD(CURDATE(), INTERVAL 1 DAY) nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 2 DAY) nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 3 DAY) nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 4 DAY) nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 5 DAY) nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 6 DAY) nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 7 DAY) nday
```
and change your query to:
```
SELECT nday AS book_date , count(lot_id) AS booked, 5- count(lot_id) AS lot_available
FROM (SELECT DATE_ADD(CURDATE(), INTERVAL 1 DAY) nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 2 DAY) AS nday
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 3 DAY)
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 4 DAY)
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 5 DAY)
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 6 DAY)
UNION
SELECT DATE_ADD(CURDATE(), INTERVAL 7 DAY) ) days
LEFT JOIN bookings
ON days.nday = bookings.book_date
GROUP by nday
```
It will give you something like :
```
+------------+--------+---------------+
| book_date | booked | lot_available |
+------------+--------+---------------+
| 2015-05-15 | 2 | 3 |
| 2015-05-16 | 0 | 5 |
| 2015-05-17 | 0 | 5 |
| 2015-05-18 | 0 | 5 |
| 2015-05-19 | 0 | 5 |
| 2015-05-20 | 0 | 5 |
| 2015-05-21 | 0 | 5 |
```
(The last results is generated using different sample data)
This is a modified version :
```
SELECT SUM(CASE WHEN lot IS NULL THEN 0 ELSE 1 END) AS booked, nday
FROM (
SELECT lot, c.nday FROM (
SELECT DATE_ADD(CURDATE(), INTERVAL 1 DAY) AS nday
UNION SELECT DATE_ADD(CURDATE(), INTERVAL 2 DAY)
UNION SELECT DATE_ADD(CURDATE(), INTERVAL 3 DAY)
UNION SELECT DATE_ADD(CURDATE(), INTERVAL 4 DAY)
UNION SELECT DATE_ADD(CURDATE(), INTERVAL 5 DAY)
UNION SELECT DATE_ADD(CURDATE(), INTERVAL 6 DAY)
UNION SELECT DATE_ADD(CURDATE(), INTERVAL 7 DAY)
) c
LEFT JOIN bookings l
ON l.book_date<=c.nday AND l.end_date >=c.nday
) e
GROUP BY NDAY
```
gives you results like:
```
+--------+------------+
| booked | nday |
+--------+------------+
| 5 | 2015-05-16 |
| 5 | 2015-05-17 |
| 2 | 2015-05-18 |
| 0 | 2015-05-19 |
| 0 | 2015-05-20 |
| 0 | 2015-05-21 |
| 0 | 2015-05-22 |
+--------+------------+
```
|
Get max bookings count in range
|
[
"",
"sql",
"ruby-on-rails",
"activerecord",
"date-range",
""
] |
I can't find what is causing this error when I UNION headers with the actual data constructed using ROLLUP, in order to write them to a .csv file.
```
CREATE TABLE `products` (
`id` int(11) default NULL,
`item` varchar(100) default NULL,
`value` int(11) default NULL
) ENGINE=MyISAM ;
INSERT INTO `products` VALUES (1,'Mobiles', '1000'),(5,'Mobiles', '2000'),(8,'Mobiles', 4000),(18,'Books',100),(28,'Books', 200),(28,'Books',400);
```
When I try the below query,
```
SELECT * FROM (
(SELECT 'ITEM', 'SUM')
UNION
(select item, sum(value) from products group by item with rollup)
) data;
```
I get this error
```
ERROR 1221 (HY000): Incorrect usage of CUBE/ROLLUP and ORDER BY
```
Thanks in advance.
|
```
select 'ITEM', 'SUM'
union
select item, sum(value) from products group by item with rollup
;
```
Result:
```
+---------+------+
| ITEM | SUM |
+---------+------+
| ITEM | SUM |
| Books | 700 |
| Mobiles | 7000 |
| NULL | 7700 |
+---------+------+
```
|
You cannot use it that way.
You need to do either
```
select item, sum(value) from products group by item with rollup;
```
or
```
select item, sum(value) as tot from products group by item
union all
select 'ITEM',sum(value) from products
```
The result of first query will be as
```
+---------+------------+
| item | sum(value) |
+---------+------------+
| Books | 700 |
| Mobiles | 7000 |
| NULL | 7700 |
+---------+------------+
```
and the 2nd
```
+---------+------+
| item | tot |
+---------+------+
| Books | 700 |
| Mobiles | 7000 |
| ITEM | 7700 |
+---------+------+
```
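SQLite has no `WITH ROLLUP`, but the `UNION ALL` rewrite above is easy to check there; the label for the total row (`'TOTAL'` here) is just a placeholder:

```python
import sqlite3

# Sample rows from the question's INSERT statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, item TEXT, value INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", [
    (1, "Mobiles", 1000), (5, "Mobiles", 2000), (8, "Mobiles", 4000),
    (18, "Books", 100), (28, "Books", 200), (28, "Books", 400),
])
# Per-item totals plus a grand-total row appended via UNION ALL.
rows = conn.execute(
    "SELECT item, SUM(value) AS tot FROM products GROUP BY item "
    "UNION ALL SELECT 'TOTAL', SUM(value) FROM products"
).fetchall()
print(rows)
```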
|
union and rollup in mysql
|
[
"",
"mysql",
"sql",
"database",
"union",
"rollup",
""
] |
I have a table containing this:
table done\_by
```
Var_ID| Var_name| Q1_by |Q2_by|Q3_by|Q4_by
1 | abc | me | me |me |you
2 | cba | me | me |you |you
3 | abd | me | you |you |me
```
The result I want is the total of all the
'me' and 'you' values:
me = 7
you = 5
I have done a count, but what I can't get is the count of all the 'me' values across every column.
|
You can do it with conditional aggregation like this:
```
SELECT SUM(CASE WHEN Q1_by = 'me' THEN 1 ELSE 0 END +
CASE WHEN Q2_by = 'me' THEN 1 ELSE 0 END +
CASE WHEN Q3_by = 'me' THEN 1 ELSE 0 END +
CASE WHEN Q4_by = 'me' THEN 1 ELSE 0 END) AS me ,
SUM(CASE WHEN Q1_by = 'you' THEN 1 ELSE 0 END +
CASE WHEN Q2_by = 'you' THEN 1 ELSE 0 END +
CASE WHEN Q3_by = 'you' THEN 1 ELSE 0 END +
CASE WHEN Q4_by = 'you' THEN 1 ELSE 0 END) AS you
FROM TableName
```
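Sketching the conditional aggregation in SQLite, where a comparison already yields 0 or 1 and can be summed directly (equivalent to the `CASE WHEN` form above):

```python
import sqlite3

# The done_by rows from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE done_by (var_id INTEGER, var_name TEXT, "
             "q1_by TEXT, q2_by TEXT, q3_by TEXT, q4_by TEXT)")
conn.executemany("INSERT INTO done_by VALUES (?, ?, ?, ?, ?, ?)", [
    (1, "abc", "me", "me", "me", "you"),
    (2, "cba", "me", "me", "you", "you"),
    (3, "abd", "me", "you", "you", "me"),
])
# Each comparison is 0 or 1; summing them counts matches across all columns.
me, you = conn.execute(
    "SELECT "
    "SUM((q1_by='me') + (q2_by='me') + (q3_by='me') + (q4_by='me')), "
    "SUM((q1_by='you') + (q2_by='you') + (q3_by='you') + (q4_by='you')) "
    "FROM done_by"
).fetchone()
print(me, you)  # 7 5
```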
|
Conditional aggregates using "SUM" instead of "COUNT" are the way to go; you can extend this with dynamic SQL if you don't know the distinct values in advance -
```
--Dynamic SQL Extension to count sum of all distinct values
--Extract out distinct values in temporary table
SELECT DISTINCT by_val=val INTO #by_tbl FROM (SELECT val=Q1_by
FROM #TableName
UNION ALL
SELECT val=Q2_by
FROM #TableName
UNION ALL
SELECT val=Q3_by
FROM #TableName
UNION ALL
SELECT val=Q4_by
FROM #TableName) A
--Create a SQL String
DECLARE @sql NVARCHAR(max)
SELECT @sql = ISNULL(@sql+',', 'SELECT ') + '['+by_val+']=SUM(CASE WHEN Q1_by='''+by_val+''' THEN 1 ELSE 0 END
+ CASE WHEN Q2_by='''+by_val+''' THEN 1 ELSE 0 END
+ CASE WHEN Q3_by='''+by_val+''' THEN 1 ELSE 0 END
+ CASE WHEN Q4_by='''+by_val+''' THEN 1 ELSE 0 END) '
FROM #by_tbl
SET @sql = @sql + ' FROM #TableName'
EXEC(@SQL)
DROP TABLE #TableName
DROP TABLE #by_tbl
```
|
SQL COUNT And SUM
|
[
"",
"sql",
"count",
"sum",
"radix",
"libreoffice",
""
] |
For starters, some background information. I am building a simple recipe book database and I'm trying to build a query that will show me what I can make based on the items in my pantry. The table schemas are as follows:
```
RECIPE(*RecipeID*, RecipeName)
RECIPE_INGREDIENT(***RecipeID***, **IngredientID**)
INGREDIENT(*IngredientID*, IngredientName)
PANTRY_ITEM(*ItemID*, ItemName)
```
The fields in the RECIPE\_INGREDIENT table make up a composite key and are both foreign keys into the RECIPE(RecipeID) and INGREDIENT(IngredientID) table. I'm using the following test data:
```
RECIPE table
RecipeID,RecipeName
1,'food 1'
2,'food 2'
INGREDIENT Table
IngredientID, IngredientName
1,'ing 1'
2,'ing 2'
3,'ing 3'
RECIPE_INGREDIENT table
RecipeID,IngredientID
1,1
1,2
2,2
2,3
PANTRY_ITEM table
ItemID,ItemName
1,'ing 2'
2,'ing 3'
```
So essentially I'm trying to query a list of RecipeNames based on the items I have in my pantry. By this, I mean that I must have all the ingredients for the recipe in my pantry for it to be added to the list. Therefore the ideal query based on this test data would result in only 'food 2'. The problem I'm running into is enforcing the 'all ingredients' for a recipe part.
I've tried a few different queries and they all result in every recipe being returned.
```
SELECT RecipeName FROM RECIPE WHERE RecipeID IN (SELECT RecipeID FROM RECIPE_INGREDIENT WHERE IngredientID IN (SELECT IngredientID FROM INGREDIENT WHERE IngredientName IN (SELECT ItemName FROM PANTRY_ITEM)))
```
Does anyone have any ideas as to how I can accomplish this? Is there a possible query for this, or would I have to restructure my database?
|
First off, if you can put **only** ingredients into the pantry, your `pantry` schema should rather look like this:
```
CREATE TABLE pantry
(
IngredientID int,
FOREIGN KEY (IngredientID) REFERENCES ingredient (IngredientID)
);
```
Now, you can leverage `HAVING` clause to get the desired result
```
SELECT recipename
FROM
(
SELECT recipeid
FROM recipe_ingredient ri LEFT JOIN pantry p
ON ri.ingredientid = p.ingredientid
GROUP BY recipeid
HAVING COUNT(*) = COUNT(p.ingredientid)
) q JOIN recipe r
ON q.recipeid = r.recipeid
```
Output:
```
| RecipeName |
|------------|
| food 2 |
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!9/bd84c/1)** demo
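A minimal runnable sketch of the `HAVING COUNT(*) = COUNT(p.ingredientid)` trick, using an in-memory SQLite database (an assumption for illustration; the question uses MySQL, where the query is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The question's test data, with the simplified pantry(IngredientID) schema.
conn.executescript("""
CREATE TABLE recipe (RecipeID INT, RecipeName TEXT);
CREATE TABLE recipe_ingredient (RecipeID INT, IngredientID INT);
CREATE TABLE pantry (IngredientID INT);
INSERT INTO recipe VALUES (1, 'food 1'), (2, 'food 2');
INSERT INTO recipe_ingredient VALUES (1,1),(1,2),(2,2),(2,3);
INSERT INTO pantry VALUES (2), (3);  -- 'ing 2' and 'ing 3'
""")
# A recipe qualifies only when every ingredient row finds a pantry match,
# i.e. the LEFT JOIN produces no NULL pantry ids for it.
rows = conn.execute("""
    SELECT r.RecipeName
    FROM (
        SELECT ri.RecipeID
        FROM recipe_ingredient ri LEFT JOIN pantry p
          ON ri.IngredientID = p.IngredientID
        GROUP BY ri.RecipeID
        HAVING COUNT(*) = COUNT(p.IngredientID)
    ) q JOIN recipe r ON q.RecipeID = r.RecipeID
""").fetchall()
print(rows)  # [('food 2',)]
```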
|
This WILL NOT perform well, but it's possible without changing your schema. I'd recommend though, perhaps you should consider having your PANTRY table id's match the ingredient id's...
Essentially, we have to find any time a NULL value is presented when comparing a recipe ingredient to its pantry item equivalent. Any instance of NULL in the sub query means there is a missing item... So we just generate an exclusion list and compare the results to that.
```
SELECT RecipeID AS _id, RecipeName FROM RECIPE WHERE RecipeID NOT IN (
SELECT d.RecipeId FROM (
SELECT p.ItemID, r.RecipeID FROM PANTRY_ITEM p
JOIN INGREDIENT i ON (p.ItemName = i.IngredientName)
RIGHT JOIN RECIPE_INGREDIENT ri ON (ri.IngredientId = i.IngredientId)
RIGHT JOIN RECIPE r ON (r.RecipeId = ri.RecipeId)
) d WHERE d.ItemID IS NULL
);
```
|
SQL multi-table composite key unique query
|
[
"",
"mysql",
"sql",
"database",
"composite",
""
] |
In a project, we have successfully implemented the paging mechanism for SQL Server, but as per our requirement, we are trying to implement the paging mechanism in a generic way for Oracle, SQL Server, DB2 & DB400 databases. Is it possible to implement paging in such a way? What would be the simplest generic way to implement such a query?
|
As far as I know there is no single generic way to implement the paging mechanism for all of those databases.
The pagination syntax varies with the database, so it is hard to provide one generic implementation that works across all of them.
You can refer [There are a method to paging using ANSI Sql only?](https://stackoverflow.com/questions/463859/there-are-a-method-to-paging-using-ansi-sql-only) where the accepted answer refers to a [link](http://troels.arvin.dk/db/rdbms/#select-limit-offset) which says to use it like
```
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY key ASC) AS rownum,
columns
FROM tablename
) AS foo
WHERE rownum > skip AND rownum <= (n+skip)
```
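To illustrate the skip/take arithmetic concretely, here is the same page selection expressed with `LIMIT ... OFFSET ...` in an in-memory SQLite database (SQLite is an assumption for the demo; the `ROW_NUMBER()` form above remains the more portable option across the engines in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (id INT, payload TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 26)])  # 25 rows
skip, n = 10, 5  # skip 10 rows, take the next 5 (rows 11..15)
page = conn.execute(
    "SELECT id FROM tablename ORDER BY id LIMIT ? OFFSET ?",
    (n, skip)).fetchall()
print(page)  # [(11,), (12,), (13,), (14,), (15,)]
```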
|
From what I understand, try this:
```
WITH MyCte AS
(
SELECT row_number() OVER (ORDER BY FieldName) ID, * FROM TableName
)
SELECT *
FROM MyCte
WHERE ID BETWEEN 1 AND 10
```
|
How to implement paging mechanism in generic way for SQL standards?
|
[
"",
"sql",
"sql-server",
"oracle",
"db2",
""
] |
I am trying to update the column Mgr_Stat to 3 and would like to mass-enter the information. As it is, I have to use "=" and enter each AppID individually, but I would rather enter several at once. The query below shows my attempt using "in", which didn't work either. I get "Incorrect syntax near the keyword 'in'".
Any ideas? Thanks everyone!
```
declare @appid as int
declare @mgrstat as int
set @appid in ('10995',
'11201',
'9523',
'9558',
'9666',
'10069',
'10547',
'10548',
'9702',
'10698',
'9754',
'10161',
'10162',
'11240',
'11241',
'9553',
'10848',
'10667',
'9383',
'10709',
'9696',
'10053',
'10702')
set @mgrstat = 3
update [Compensation].[dbo].[dev_RPT_Approval]
set Mgr_Stat = @mgrstat
FROM [Compensation].[dbo].[dev_RPT_Approval]
where @appid = App_Id
select *
from [Compensation].[dbo].[dev_RPT_Approval]
where @appid = App_Id
```
|
This is the SQL you need:
```
update dev_RPT_Approval set Mgr_Stat=3
where App_Id
in ('10995',
'11201',
'9523',
'9558',
'9666',
'10069',
'10547',
'10548',
'9702',
'10698',
'9754',
'10161',
'10162',
'11240',
'11241',
'9553',
'10848',
'10667',
'9383',
'10709',
'9696',
'10053',
'10702')
```
|
If i'm understanding correctly, and you want all `mgr_stats` to be 3 where the app\_id is in the list provided in your question, then you could do this a few ways:
```
update [Compensation].[dbo].[dev_RPT_Approval]
set Mgr_Stat = 3
where app_id in (
'10995',
'11201',
'9523',
'9558',
'9666',
'10069',
'10547',
'10548',
'9702',
'10698',
'9754',
'10161',
'10162',
'11240',
'11241',
'9553',
'10848',
'10667',
'9383',
'10709',
'9696',
'10053',
'10702'
)
```
or (sql server using table variable)
```
declare @ids table (id varchar(50))
insert into @ids (id)
select '10995'
union all select '11201'
union all select '9523'
union all select '9558'
union all select '9666'
union all select '10069'
union all select '10547'
union all select '10548'
union all select '9702'
union all select '10698'
union all select '9754'
union all select '10161'
union all select '10162'
union all select '11240'
union all select '11241'
union all select '9553'
union all select '10848'
union all select '10667'
union all select '9383'
union all select '10709'
union all select '9696'
union all select '10053'
union all select '10702'
update [Compensation].[dbo].[dev_RPT_Approval]
set Mgr_Stat = 3
from [Compensation].[dbo].[dev_RPT_Approval] t
inner join @ids i on t.app_id = i.id
```
A few things to note about the code you had posted:
```
declare @appid as int
set @appId in ...
```
A few things with this - @appId is declared as an integer, meaning it is a scalar value (cannot be a set) - for sets of values, you can use a table variable as I did in my second example of how to accomplish your question.
Additionally, because your variable is declared as an int, I'm assuming your ID column is of type int, in which case the quotes are not needed.
Instead of:
```
where app_id in (
'10995',
....
)
```
you can do:
```
where app_id in (
10995,
....
)
```
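As a quick check that the `UPDATE ... WHERE app_id IN (...)` form behaves as intended, here is a small in-memory SQLite run (SQLite and parameter binding are demo assumptions; the SQL shape matches the first example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dev_RPT_Approval (App_Id INT, Mgr_Stat INT)")
conn.executemany("INSERT INTO dev_RPT_Approval VALUES (?, ?)",
                 [(10995, 1), (11201, 1), (9999, 1)])
ids = [10995, 11201]
# Build one placeholder per id so the list can be any length.
placeholders = ",".join("?" * len(ids))
conn.execute(f"UPDATE dev_RPT_Approval SET Mgr_Stat = 3 "
             f"WHERE App_Id IN ({placeholders})", ids)
rows = conn.execute("SELECT App_Id, Mgr_Stat FROM dev_RPT_Approval "
                    "ORDER BY App_Id").fetchall()
print(rows)  # [(9999, 1), (10995, 3), (11201, 3)]
```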
|
Update column information
|
[
"",
"sql",
"sql-update",
""
] |
I have an oracle table with a unique `id` column. When I insert data into the table, I increment the last `id`.
```
insert into my_table values((select max(id) + 1 from my_table), 'etc', 'etc');
```
There can be multiple processes who writes into this table simultaneously.
Auto commit is `on`.
This table does not have any other unique column except `id`.
When I insert a record to the table, is there a way to get the `id` value the record got, after I insert the record?
As I can see, if I use `select max(id) from my_table` after insert, I may not get the id used in the insert statement since someone else could have inserted another record before I issue `select`.
|
Oracle 12 finally has built-in identity columns. So, if you are using the most recent version of Oracle, you can just use that. Read about it [here](http://oracle-base.com/articles/12c/identity-columns-in-oracle-12cr1.php).
In earlier versions, it is best to use a sequence. This guarantees uniqueness, even in a multi-threaded environment. I always implement triggers to update the id column. [Here](https://stackoverflow.com/questions/11296361/how-to-create-id-with-auto-increment-on-oracle) is an answer to a similar question, that explains how to do this.
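The general idea of "get the generated key back from the insert itself, rather than re-querying" can be sketched with SQLite's autoincrement column and the driver's `lastrowid` (an analogy only, not Oracle syntax; Oracle would use a sequence with `RETURNING ... INTO`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY AUTOINCREMENT plays the role of an identity column.
conn.execute("CREATE TABLE my_table "
             "(id INTEGER PRIMARY KEY AUTOINCREMENT, etc TEXT)")
cur = conn.execute("INSERT INTO my_table (etc) VALUES ('first')")
print(cur.lastrowid)  # 1 -- the id this statement generated
cur = conn.execute("INSERT INTO my_table (etc) VALUES ('second')")
print(cur.lastrowid)  # 2 -- safe even if other sessions insert meanwhile
```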
|
You can try this:
```
CREATE TABLE myTable( ID RAW(16) DEFAULT SYS_GUID())
```
Also it is preferred to use [Sequence](http://www.techonthenet.com/oracle/sequences.php) to get the auto incremented id.
If you want to get the sequence id after the insert you can try like this:
```
declare
x number;
id number;
begin
x := your_sequence.nextval;
insert into mytable (column1, column2, column3)
  values (x, value2, value3) returning column1 into id;
dbms_output.put_line(to_char(id));
end;
```
|
Get the unique id the record got when inserting to a database table
|
[
"",
"sql",
"oracle",
""
] |
I have 3.5 million rows in the table `acs_objects` and I need to retrieve the column `creation_date` with a year-only format and distinct.
**My first attempt** : `180~200 Sec (15 Rows Fetched)`
```
SELECT DISTINCT to_char(creation_date,'YYYY') FROM acs_objects
```
**My second attempt** : `35~40 Sec (15 Rows Fetched)`
```
SELECT DISTINCT to_char(creation_date,'YYYY')
FROM (SELECT DISTINCT creation_date FROM acs_objects) AS distinct_date
```
Is there any way to make it faster? *-"I need to use this in ADP Website"*
|
In your second attempt you get distinct dates from the sub-query, which you then all convert to a string representation before selecting the distinct ones. That is rather inefficient. It is better to first extract the distinct years from the `creation_date` in a sub-query and simply cast those to text in the main query:
```
SELECT year::text
FROM (
SELECT DISTINCT extract(year FROM creation_date) AS year FROM acs_objects
) AS distinct_years;
```
If you create an `INDEX` on the table, the query should run much faster still:
```
CREATE INDEX really_fast ON acs_objects((extract(year FROM creation_date)));
```
However, this may impact other uses of your table, in particular if you have many modifying statements (insert, update, delete). And this will only work if `creation_date` has a data type of `date` or `timestamp` (specifically not `timestamp with timezone`).
The below option looked promising because it does not use a sub-query, but it is in fact much slower (see comments below), probably because the `DISTINCT` clause is applied on a string:
```
SELECT DISTINCT extract(year FROM creation_date)::text
FROM acs_objects;
```
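The "distinct years first, cast later" shape can be demonstrated with SQLite in memory (`strftime('%Y', ...)` stands in for Postgres's `extract(year FROM ...)`; the engine swap is an assumption for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acs_objects (creation_date TEXT)")
conn.executemany("INSERT INTO acs_objects VALUES (?)",
                 [("2013-01-05",), ("2013-09-17",),
                  ("2014-03-02",), ("2015-06-30",)])
# Deduplicate on the extracted year in the sub-query, not on a full
# string conversion of every row.
years = conn.execute("""
    SELECT year FROM (
        SELECT DISTINCT strftime('%Y', creation_date) AS year
        FROM acs_objects
    ) ORDER BY year
""").fetchall()
print(years)  # [('2013',), ('2014',), ('2015',)]
```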
|
I think you shouldn't select `distinct` from this huge table. Instead, try to generate a short [years sequence](http://www.postgresql.org/docs/9.1/static/functions-srf.html) (say from 1900 to 2100) and select from this sequence only the years that exist in the `acs_objects` table. The result set will be the same but I think it will be faster. The EXISTS subquery should run fast on an indexed `creation_date` field.
```
SELECT y
FROM
(
select generate_series(1900,2100) as y
) as t
WHERE EXISTS (SELECT 1 FROM acs_objects
WHERE creation_date >= DATE (''||t.y||'-01-01')
AND creation_date < DATE (''||t.y + 1||'-01-01'))
```
`SQLFiddle demo`
|
Fastest way to PostgreSQL Distinct and Format
|
[
"",
"sql",
"postgresql",
"distinct",
"aggregate",
"postgresql-performance",
""
] |
I have one table like
tbl
```
---------------------
id users name
---------------------
1 2,3 acc1
2 4 acc2
3 2,4,1 acc3
4 4,1 acc4
```
From this table I want to get the id and name by user, i.e. which ids and names user [2] belongs to.
For example, if I pass user [2], then I should get ids 1 and 3 with names acc1 and acc3.
|
Use the `LIKE` operator:
```
SELECT id, name
FROM yourtable
WHERE (users = '2' OR users LIKE '2,%' OR users LIKE '%,2' OR users LIKE '%,2,%')
```
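A slightly tidier variant of the same idea pads both the column and the search value with commas, so a single `LIKE` covers all four positions. Sketched here in SQLite (an assumption; SQL Server would use `+` for concatenation instead of `||`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INT, users TEXT, name TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?,?,?)", [
    (1, '2,3', 'acc1'), (2, '4', 'acc2'),
    (3, '2,4,1', 'acc3'), (4, '4,1', 'acc4'),
])
# ',2,3,' LIKE '%,2,%' matches; ',4,1,' does not.
rows = conn.execute("""
    SELECT id, name FROM tbl
    WHERE ',' || users || ',' LIKE '%,' || ? || ',%'
    ORDER BY id
""", ('2',)).fetchall()
print(rows)  # [(1, 'acc1'), (3, 'acc3')]
```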
|
You can split those comma separated values using XML functions and then search in the result :
```
DECLARE @table TABLE(id INT, users VARCHAR(30), name VARCHAR(30))
INSERT INTO @table VALUES
(1,'2,3','acc1'),
(2,'4','acc2'),
(3,'2,4,1','acc3'),
(4,'4,1','acc4')
SELECT t.id,
t.name,
( c1.value('.', 'varchar(100)') )
FROM (SELECT id,
name,
CAST('<N>' + REPLACE(users, ',', '</N><N>') + '</N>' AS XML)
FROM @table) t(id, name, c)
CROSS APPLY c.nodes('/N') AS t1(c1)
WHERE ( c1.value('.', 'varchar(100)') ) = '2'
```
|
Get multiple id on multiple value in string with sql?
|
[
"",
"sql",
"sql-server",
""
] |
I need to get the data in two parent > child table sets merged/combined into a third parent > child table.
The tables look like this:

The only difference in the three sets of tables is that TableC has a `TableType` column to help discern the difference between a TableA record and a TableB record.
My first thought was to use a cursor.. Here's code to create the table structure, insert some records, and then merge the data together. It works very well, sooooo....
```
--Create the tables
CREATE TABLE TableA
(
ID int not null identity primary key,
Name VARCHAR(30)
);
CREATE TABLE TableAChild
(
ID int not null identity primary key,
Parent int not null,
Name VARCHAR(30),
CONSTRAINT FK_A FOREIGN KEY (Parent) REFERENCES TableA(ID)
);
CREATE TABLE TableB
(
ID int not null identity primary key,
Name VARCHAR(30)
);
CREATE TABLE TableBChild
(
ID int not null identity primary key,
Parent int not null,
Name VARCHAR(30),
CONSTRAINT FK_B FOREIGN KEY (Parent) REFERENCES TableB(ID)
);
CREATE TABLE TableC
(
ID int not null identity primary key,
TableType VARCHAR(1),
Name VARCHAR(30)
);
CREATE TABLE TableCChild
(
ID int not null identity primary key,
Parent int not null,
Name VARCHAR(30),
CONSTRAINT FK_C FOREIGN KEY (Parent) REFERENCES TableC(ID)
);
-- Insert some test records..
INSERT INTO TableA (Name) Values ('A1')
INSERT INTO TableAChild (Name, Parent) VALUES ('A1Child', SCOPE_IDENTITY())
INSERT INTO TableB (Name) Values ('B1')
INSERT INTO TableBChild (Name, Parent) VALUES ('B1Child', SCOPE_IDENTITY())
-- Needed throughout..
DECLARE @ID INT
-- Merge TableA and TableAChild into TableC and TableCChild
DECLARE TableACursor CURSOR
-- Get the primary key from TableA
FOR SELECT ID FROM TableA
OPEN TableACursor
FETCH NEXT FROM TableACursor INTO @ID
WHILE @@FETCH_STATUS = 0
BEGIN
-- INSERT INTO SELECT the parent record into TableC, being sure to specify a TableType
INSERT INTO TableC (Name, TableType) SELECT Name, 'A' FROM TableA WHERE ID = @ID
-- INSERT INTO SELECT the child record into TableCChild using the parent ID of the last row inserted (SCOPE_IDENTITY())
-- and the current record from the cursor (@ID).
INSERT INTO TableCChild(Name, Parent) SELECT Name, SCOPE_IDENTITY() FROM TableAChild WHERE Parent = @ID
FETCH NEXT FROM TableACursor INTO @ID
END;
CLOSE TableACursor
DEALLOCATE TableACursor
-- Repeat for TableB
DECLARE TableBCursor CURSOR
FOR SELECT ID FROM TableB
OPEN TableBCursor
FETCH NEXT FROM TableBCursor INTO @ID
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO TableC (Name, TableType) SELECT Name, 'B' FROM TableB WHERE ID = @ID
INSERT INTO TableCChild(Name, Parent) SELECT Name, SCOPE_IDENTITY() FROM TableBChild WHERE Parent = @ID
FETCH NEXT FROM TableBCursor INTO @ID
END;
CLOSE TableBCursor
DEALLOCATE TableBCursor
```
Now, my question(s):
* I've always been told that cursors are bad. But I couldn't find another way of doing it. I'm wondering if there's some way to do that with a CTE?
* If the cursor is appropriate in this situation, how did I do? Is there a better way of doing what I did? It doesn't look very DRY to me, but I'm no SQL expert.
Lastly, if you want to re-run the query above, here's a small script to delete the tables that were created.
```
DROP TABLE TableAChild
DROP TABLE TableBChild
DROP TABLE TableCChild
DROP TABLE TableA
DROP TABLE TableB
DROP TABLE TableC
```
The correct result should look like:

|
Here is one way to do this without a cursor or other RBAR type stuff.
```
ALTER TABLE TableC ADD LegacyID INT
GO
INSERT INTO TableC (TableType, Name, LegacyID)
SELECT 'A', Name, ID
FROM TableA
INSERT TableCChild
SELECT C.ID, AC.Name
FROM TableAChild AC
JOIN TableA A ON A.ID = AC.Parent
JOIN TableC C ON C.LegacyID = A.ID AND C.TableType = 'A'
INSERT INTO TableC (TableType, Name, LegacyID)
SELECT 'B', Name, ID
FROM TableB
INSERT TableCChild
SELECT C.ID, AC.Name
FROM TableBChild AC
JOIN TableB A ON A.ID = AC.Parent
JOIN TableC C ON C.LegacyID = A.ID AND C.TableType = 'B'
ALTER TABLE TableC DROP COLUMN LegacyID
GO
```
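A compact run of the same LegacyID mapping idea (TableA side only), using an in-memory SQLite database as a stand-in for SQL Server; note the child table joins to its parent on the `Parent` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE TableAChild (ID INTEGER PRIMARY KEY, Parent INT, Name TEXT);
CREATE TABLE TableC (ID INTEGER PRIMARY KEY, TableType TEXT,
                     Name TEXT, LegacyID INT);
CREATE TABLE TableCChild (ID INTEGER PRIMARY KEY, Parent INT, Name TEXT);
INSERT INTO TableA (Name) VALUES ('A1');
INSERT INTO TableAChild (Parent, Name) VALUES (1, 'A1Child');
-- Copy parents, remembering each old key in LegacyID.
INSERT INTO TableC (TableType, Name, LegacyID)
SELECT 'A', Name, ID FROM TableA;
-- Re-point children at the new parent ids via the LegacyID mapping.
INSERT INTO TableCChild (Parent, Name)
SELECT C.ID, AC.Name
FROM TableAChild AC
JOIN TableA A ON A.ID = AC.Parent
JOIN TableC C ON C.LegacyID = A.ID AND C.TableType = 'A';
""")
rows = conn.execute("SELECT Parent, Name FROM TableCChild").fetchall()
print(rows)  # [(1, 'A1Child')]
```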
|
You can use [merge](https://msdn.microsoft.com/en-us/library/bb510625.aspx) as described by Adam Machanic in [Dr. OUTPUT or: How I Learned to Stop Worrying and Love the MERGE](http://sqlblog.com/blogs/adam_machanic/archive/2009/08/24/dr-output-or-how-i-learned-to-stop-worrying-and-love-the-merge.aspx) and in [this](https://stackoverflow.com/questions/5365629/using-merge-output-to-get-mapping-between-source-id-and-target-id) question to get a mapping between the new identity value and the old primary key value in a table variable and the use that when you insert to your child tables.
```
declare @T table(ID int, IDC int);
merge dbo.TableC as C
using dbo.TableA as A
on 0 = 1
when not matched by target then
insert (TableType, Name) values('A', A.Name)
output A.ID, inserted.ID into @T(ID, IDC);
insert into dbo.TableCChild(Parent, Name)
select T.IDC, AC.Name
from dbo.TableAChild as AC
inner join @T as T
on AC.Parent = T.ID;
delete from @T;
merge dbo.TableC as C
using dbo.TableB as B
on 0 = 1
when not matched by target then
insert (TableType, Name) values('B', B.Name)
output B.ID, inserted.ID into @T(ID, IDC);
insert into dbo.TableCChild(Parent, Name)
select T.IDC, BC.Name
from dbo.TableBChild as BC
inner join @T as T
on BC.Parent = T.ID;
```
[SQL Fiddle](http://sqlfiddle.com/#!6/7988f/2)
|
Merging two parent > child table sets
|
[
"",
"sql",
"sql-server",
"common-table-expression",
"database-cursor",
""
] |
I am a bit lost about how to retrieve an Id that I have stored inside a string like
`Hi i am interested in sharing apartment or to rent all rooms please text me {PropertyId:43499}`
The part I want from this is `43499`
How can I achieve this in my Sql-Server.
The solution that worked for me (I edited the answer I got from Kavin Chakaravarthi):
```
DECLARE @string nvarchar(max)
SET @string = 'Hi i am interested in sharing apartment or to rent all rooms please text me {PropertyId:43499}'
select SUBSTRING(SUBSTRING(@string, CHARINDEX(':',@string) +1, DATALENGTH(@string)), 0,6)
```
|
Using a SQL query you can extract the id:
```
declare @id varchar(max)='Hi i am interested in sharing apartment or to rent all rooms please text me {PropertyId:43499}'
select @id=STUFF(@id,LEN(@id),1,'')
select @id=SUBSTRING(@id,CHARINDEX(':',@id)+1,LEN(@id))
select @id
```
Output:
```
id
43499
```
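The same `CHARINDEX`/`SUBSTRING` idea in plain Python, for comparison (a sketch only; it assumes the id is everything after the colon, with the closing brace trimmed off):

```python
s = ('Hi i am interested in sharing apartment or to rent all rooms '
     'please text me {PropertyId:43499}')
# Take everything after the ':' and strip the trailing '}'.
prop_id = s[s.index(':') + 1:].rstrip('}')
print(prop_id)  # 43499
```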
|
```
CREATE FUNCTION dbo.propertyidextract
(@strAlphaNumeric VARCHAR(256))
RETURNS VARCHAR(256)
AS
BEGIN
DECLARE @intAlpha INT
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
BEGIN
WHILE @intAlpha > 0
BEGIN
SET @strAlphaNumeric = STUFF(@strAlphaNumeric, @intAlpha, 1, '' )
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric )
END
END
RETURN ISNULL(@strAlphaNumeric,0)
END
GO
SELECT dbo.propertyidextract(phrase) AS 'propertyid'
FROM yourtable;
GO
```
This outputs your ID
```
propertyid
43499
```
SQL FIDDLE: <http://sqlfiddle.com/#!6/4c22f/12/0>
|
SQL : Retrieve ID stored in a string
|
[
"",
"sql",
"sql-server",
""
] |
I have the following SQL table:
**AR\_Customer\_ShipTo**
```
+--------------+------------+-------------------+------------+
| ARDivisionNo | CustomerNo | CustomerName | ShipToCode |
+--------------+------------+-------------------+------------+
| 00 | 1234567 | Test Customer | 1 |
| 00 | 1234567 | Test Customer | 2 |
| 00 | 1234567 | Test Customer | 3 |
| 00 | ARACODE | ARACODE Customer | 1 |
| 00 | ARACODE | ARACODE Customer | 2 |
| 01 | CBE1EX | Normal Customer | 1 |
| 02 | ZOCDOC | Normal Customer-2 | 1 |
+--------------+------------+-------------------+------------+
```
`(ARDivisionNo, CustomerNo,ShipToCode)` form a primary key for this table.
If you notice first 3 rows belong to same customer (Test Customer), who has different ShipToCodes: 1, 2 and 3. Similar is the case with second customer (ARACODE Customer). Each of Normal Customer and Normal Customer-2 has only 1 record with a single `ShipToCode`.
Now, I would like to get result querying on this table, where I will have only 1 record per customer. So, for any customer, where there are more than 1 records, I would like to keep the record with highest value for `ShipToCode`.
I tried various things:
(1) I can easily get the list of customers with only one record in the table.
(2) With following query, I am able to get the list of all the customers, who have more than one record in the table.
**[Query-1]**
```
SELECT ARDivisionNo, CustomerNo
FROM AR_Customer_ShipTo
GROUP BY ARDivisionNo, CustomerNo
HAVING COUNT(*) > 1;
```
(3) Now, in order to select proper `ShipToCode` for each record returned by above query, I am not able to figure out, how to iterate through all the records returned by above query.
If I do something like:
**[Query-2]**
```
SELECT TOP 1 ARDivisionNo, CustomerNo, CustomerName, ShipToCode
FROM AR_Customer_ShipTo
WHERE ARDivisionNo = '00' and CustomerNo = '1234567'
ORDER BY ShipToCode DESC
```
Then I can get the appropriate record for (00-1234567-Test Customer). Hence, if I can use all the results from query-1 in the above query (query-2), then I can get the desired single records for customers with more than one record. This can be combined with results from point (1) to achieve the desired end result.
Again, there may be an easier way than the approach I am following. Please let me know how I can do this.
[Note: I have to do this using SQL queries only. I cannot use stored procedures, as I am going to execute this thing finally using 'Scribe Insight', which only allows me to write queries.]
|
`Sample SQL FIDDLE`
1) Use a CTE to pick, for each customer, the record with the maximum ShipToCode value per (ARDivisionNo, CustomerNo):
```
WITH cte AS (
SELECT *,
row_number() OVER(PARTITION BY ARDivisionNo, CustomerNo ORDER BY ShipToCode desc) AS [rn]
FROM t
)
Select * from cte WHERE [rn] = 1
```
2) To delete the duplicate records, use a DELETE query instead of SELECT and change the WHERE clause to rn > 1. `Sample SQL FIDDLE`
```
WITH cte AS (
SELECT*,
row_number() OVER(PARTITION BY ARDivisionNo, CustomerNo ORDER BY ShipToCode desc) AS [rn]
FROM t
)
Delete from cte WHERE [rn] > 1;
select * from t;
```
|
`ROW_NUMBER()` is great for this:
```
;WITH cte AS (SELECT *,ROW_NUMBER() OVER(PARTITION BY ARDivisionNo,CustomerNo ORDER BY ShipToCode DESC) AS RN
FROM AR_Customer_ShipTo
)
SELECT *
FROM cte
WHERE RN = 1
```
You mention removing the duplicates, if you want to `DELETE` you can simply:
```
;WITH cte AS (SELECT *,ROW_NUMBER() OVER(PARTITION BY ARDivisionNo,CustomerNo ORDER BY ShipToCode DESC) AS RN
FROM AR_Customer_ShipTo
)
DELETE cte
WHERE RN > 1
```
The `ROW_NUMBER()` function assigns a number to each row. `PARTITION BY` is optional, but used to start the numbering over for each value in a given field or group of fields, ie: if you `PARTITION BY Some_Date` then for each unique date value the numbering would start over at 1. `ORDER BY` of course is used to define how the counting should go, and is required in the `ROW_NUMBER()` function.
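For the "keep the highest ShipToCode per customer" result specifically, an equivalent without window functions is a `GROUP BY`/`MAX` query, sketched here against the question's data in SQLite (the engine is an assumption; `CustomerName` is omitted to keep the grouping minimal):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AR_Customer_ShipTo (ARDivisionNo TEXT, "
             "CustomerNo TEXT, CustomerName TEXT, ShipToCode INT)")
conn.executemany("INSERT INTO AR_Customer_ShipTo VALUES (?,?,?,?)", [
    ('00', '1234567', 'Test Customer', 1),
    ('00', '1234567', 'Test Customer', 2),
    ('00', '1234567', 'Test Customer', 3),
    ('00', 'ARACODE', 'ARACODE Customer', 1),
    ('00', 'ARACODE', 'ARACODE Customer', 2),
    ('01', 'CBE1EX', 'Normal Customer', 1),
    ('02', 'ZOCDOC', 'Normal Customer-2', 1),
])
# One row per customer, keeping the highest ShipToCode.
rows = conn.execute("""
    SELECT ARDivisionNo, CustomerNo, MAX(ShipToCode)
    FROM AR_Customer_ShipTo
    GROUP BY ARDivisionNo, CustomerNo
    ORDER BY ARDivisionNo, CustomerNo
""").fetchall()
print(rows)
# [('00', '1234567', 3), ('00', 'ARACODE', 2),
#  ('01', 'CBE1EX', 1), ('02', 'ZOCDOC', 1)]
```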
|
Removing duplicate rows (based on values from multiple columns) from SQL table
|
[
"",
"sql",
"sql-server",
"t-sql",
"join",
"duplicates",
""
] |
In my stored procedure I'm provided with a varchar parameter that looks like the following:
```
'201503'
```
Which obviously indicates the 3rd month of 2015. However, I need to select the previous 12 months in the same format from the given parameter, including the parameter itself.
For example, if given `'201503'`
I need to get the following:
```
'201503'
'201502'
'201501'
'201412'
'201411'
'201410'
'201409'
'201408'
'201407'
'201406'
'201405'
'201404'
```
Some help would really be appreciated! :)
|
Try this:
```
DECLARE @m VARCHAR(10) = '201503'
SELECT LEFT(CONVERT(VARCHAR(8), DATEADD(m, -id, @m + '01'), 112), 6) AS result
FROM ( VALUES ( 0), ( 1), ( 2), ( 3), ( 4), ( 5), ( 6), ( 7), ( 8), ( 9),
( 10), ( 11) ) m ( id ) ORDER BY result DESC
```
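The month arithmetic behind that query can be checked in plain Python (a sketch: step back one month at a time from the `YYYYMM` parameter, twelve values in total, wrapping the year at January):

```python
def previous_12_months(yyyymm: str) -> list:
    """Return yyyymm and the 11 months before it, newest first."""
    year, month = int(yyyymm[:4]), int(yyyymm[4:])
    out = []
    for _ in range(12):
        out.append(f"{year:04d}{month:02d}")
        month -= 1
        if month == 0:           # wrap from January back to December
            year, month = year - 1, 12
    return out

months = previous_12_months("201503")
print(months)  # ['201503', '201502', '201501', '201412', ..., '201404']
```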
|
This could work too:
```
DECLARE @Date VARCHAR(8) = '201503';
SELECT TOP (12) LEFT(CONVERT(varchar, DATEADD(MM, (ROW_NUMBER() OVER(ORDER BY (SELECT 1)) - 1) * -1, CAST(@Date + '01' AS DATETIME2)),112),6)
FROM sys.columns
```
|
Get previous 12 months dates if given a datetime
|
[
"",
"sql",
"sql-server",
""
] |
I'm inserting data from one table into another table. I take the last value from the first table as the ID and need to increment from there, so I have taken `MAX(ID) + 1`, but I'm getting the same value for every row.
```
declare @t table (ID INT,VID INT,Sname Varchar(10),Rname Varchar(10))
INSERT INTO @t (ID,VID,Sname,Rname)VALUES
(4601,1,'Car','maruti'),
(4601,1,'Car','benz'),
(4601,1,'Car','honda')
declare @tt table (ID INT,VID INT,Sname Varchar(10),Rname Varchar(10))
INSERT INTO @tt (ID,VID,Sname,Rname)VALUES
(1,1,'Bike','Dio'),
(1,1,'Bike','Pulsar'),
(1,1,'Bike','Duke') ,
(2,1,'Cloth','jeans'),
(2,1,'Cloth','shirts'),
(2,1,'Cloth','short')
INSERT INTO @t (ID,VID,Sname,Rname)
Select (select MAX(ID)+1 FROM @t)Id,ROW_NUMBER()OVER(ORDER BY VID)VID,Sname,Rname
from @tt
```
I don't have an identity property on either table.
How can I get output like this:
```
ID VID Sname Rname
4601 1 Car maruti
4601 2 Car benz
4601 3 Car honda
4602 1 Bike Dio
4602 2 Bike Pulsar
4602 3 Bike Duke
4603 1 Cloth jeans
4603 2 Cloth shirts
4603 3 Cloth short
```
|
I think you are looking for something like this.
Use `MAX(ID)` of `@t` + `id` for the incremented `ID` values and `ROW_NUMBER()` with `PARTITION BY` to get per-group `VID` values:
```
INSERT INTO @t (ID,VID,Sname,Rname)
Select (select MAX(ID) FROM @t) + id as Id,ROW_NUMBER()OVER(partition by id ORDER BY VID)VID,Sname,Rname from @tt
```
Inserted Values
```
4602 1 Bike Dio
4602 2 Bike Pulsar
4602 3 Bike Duke
4603 1 Cloth jeans
4603 2 Cloth shirts
4603 3 Cloth short
```
|
You should add an additional table, with data that looks like this:
```
ID SName
4601 Car
4602    Bike
4603    Cloth
```
Then you can have that first column just be an autonumber/identity, and let Sql Server handle this. Doing otherwise puts you at risk of race conditions.
Your existing table should then look more like this:
```
ID SID RName
1 4601 Maruti
2 4601 Benz
3 4601 Honda
4 4602 Dio
5 4602 Duke
6 4602 Pulsar
7 4603 Jeans
8 4603 Shirts
9 4603 Short
```
Again, this ID column can be an autoincrememt/identity column, such that Sql Server handles making sure you don't have conflicts. Later on, if you really need a sequence number per item type, you can use the `Row_Number()` function in combination with a `PARTITION BY` clause to get a result set that looks like you want, and if you want this to be more intrinsic to the data you can build that into a view.
The point is that you want to more cleanly separate the general category of items from the specific entries in that category.
```
SELECT i.SID, ROW_NUMBER() OVER (PARTITION BY i.SID ORDER BY i.ID) as VID
, t.SName, i.RName
FROM ItemTypes t
INNER JOIN Items i on i.SID = t.ID
ORDER BY i.SID, VID
```
|
How to do autoincrement based on last value from another table?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table which looks something as below
```
state_history
+---------------------+-----------+----------------+ +
| state_added_time | entity_id | state_id | .... |
+---------------------+-----------+----------------+ |
| 2015-05-15 13:24:22 | 1 | 1 | |
| 2015-05-15 13:29:44 | 3 | 2 | |
| 2015-05-15 13:34:26 | 2 | 2 | |
| 2015-05-15 14:24:28 | 1 | 3 | |
| 2015-05-15 14:24:30 | 2 | 3 | |
| 2015-05-15 14:26:32 | 3 | 5 | |
| 2015-05-15 14:26:34 | 3 | 3 | |
.......
```
My intention is to know the states of all entities at any given time. For example, if the timestamp received from the application is `2015-05-15 14:25:00` then the expected output should be:
```
state_history
+---------------------+-----------+----------------+ +
| state_added_time | entity_id | state_id | .... |
+---------------------+-----------+----------------+ |
| 2015-05-15 14:24:28 | 1 | 3 | |
| 2015-05-15 14:24:30 | 2 | 3 | |
| 2015-05-15 13:29:44 | 3 | 2 | |
.......
```
That is, to know the last state change which took place for each entity before or at the given time. The interval between state change is not fixed. Thus I cannot have 2 time boundaries and find rows between them.
I have tried using `TIMEDIFF` but failed to get the desired output. Could anyone please guide me on the path I should take?
EDIT: Thanks everyone for the quick responses. I tried the answers and noticed that the queries take quite a lot of time to fetch the rows when executed on the actual database. Probably because the fields `entity_id` and `state_id` are foreign keys to two other tables.
Now that this is known is there any way to improve the performance of the query?
|
You can also do it using variables:
```
SELECT entity_id, state_added_time, state_id
FROM (
SELECT state_added_time, state_id,
@row_number:= CASE WHEN @entity = entity_id THEN @row_number+1
ELSE 1
END AS row_number,
@entity:=entity_id AS entity_id
FROM state_history
WHERE state_added_time <= '2015-05-15 14:25:00'
ORDER BY entity_id, state_added_time DESC ) t
WHERE t.row_number = 1
```
`@row_number` is being reset each time a new `entity_id` is encountered. Within each `entity_id`, a value of `@row_number = 1` points to the most recent record.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!9/f2d8f/2)
|
Are you looking for this?
```
SELECT h.*
FROM
(
SELECT entity_id, MAX(state_added_time) state_added_time
FROM state_history
WHERE state_added_time <= '2015-05-15 14:25:00'
GROUP BY entity_id
) q JOIN state_history h
ON q.entity_id = h.entity_id
AND q.state_added_time = h.state_added_time
```
Output:
```
| state_added_time | entity_id | state_id |
|-----------------------|-----------|----------|
| May, 15 2015 13:29:44 | 3 | 2 |
| May, 15 2015 14:24:28 | 1 | 3 |
| May, 15 2015 14:24:30 | 2 | 3 |
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!9/ba01f/4)** demo
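The groupwise-maximum join above can be reproduced end to end in an in-memory SQLite database (an assumption for the demo; the MySQL query is identical, and the ISO timestamp strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE state_history "
             "(state_added_time TEXT, entity_id INT, state_id INT)")
conn.executemany("INSERT INTO state_history VALUES (?,?,?)", [
    ('2015-05-15 13:24:22', 1, 1),
    ('2015-05-15 13:29:44', 3, 2),
    ('2015-05-15 13:34:26', 2, 2),
    ('2015-05-15 14:24:28', 1, 3),
    ('2015-05-15 14:24:30', 2, 3),
    ('2015-05-15 14:26:32', 3, 5),
    ('2015-05-15 14:26:34', 3, 3),
])
# Latest state change per entity at or before the given timestamp.
rows = conn.execute("""
    SELECT h.state_added_time, h.entity_id, h.state_id
    FROM (
        SELECT entity_id, MAX(state_added_time) AS state_added_time
        FROM state_history
        WHERE state_added_time <= '2015-05-15 14:25:00'
        GROUP BY entity_id
    ) q JOIN state_history h
      ON q.entity_id = h.entity_id
     AND q.state_added_time = h.state_added_time
    ORDER BY h.entity_id
""").fetchall()
print(rows)
# [('2015-05-15 14:24:28', 1, 3), ('2015-05-15 14:24:30', 2, 3),
#  ('2015-05-15 13:29:44', 3, 2)]
```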
|
MySQL select rows with timestamp closest to but not exceeding the given timestamp
|
[
"",
"mysql",
"sql",
""
] |
I have a field that is a `longtext` in MySQL. I'm looking for any instances of 'media' that could be in it, +/- ~10 characters of context. There are usually multiple instances in a single rows' field, so I need to see the context. How can I write a query to do this? I can't even think of where to start.
So what I'm looking at is this:
```
SELECT field_data_body FROM table WHERE field_data_body LIKE '%media%';
```
```
+----------------------------------+
| field_data_body |
+----------------------------------+
| ... ode__media_or ... e immediat |
+----------------------------------+
```
The field is actually a long string, and I just parsed the actual test value to show the substrings that would match the WHERE clause.
What I actually want to see is *all* instances of the string `media`, which in the example above is two, but in other fields could be more. `SUBSTR` only shows the first instance of `media`.
|
In MySQL you can create a user-defined function for this, similar to a word-count function. This UDF may help:
[mysql count word in sql syntax](https://stackoverflow.com/questions/12156970/mysql-count-word-in-sql-syntax)
|
Write a [`CREATE FUNCTION`](https://dev.mysql.com/doc/refman/5.0/en/create-procedure.html) of your own. Inside the function you can use the [`WHILE`](https://dev.mysql.com/doc/refman/5.0/en/while.html) statement and general string functions such as [`LOCATE`](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_locate) and [`SUBSTRING`](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring).
Here is an example to get you started:
```
DELIMITER $$
CREATE FUNCTION substring_list(
haystack TEXT,
needle VARCHAR(100)
)
RETURNS TEXT
DETERMINISTIC
BEGIN
DECLARE needle_len INT DEFAULT CHAR_LENGTH(needle);
DECLARE output_str TEXT DEFAULT '';
DECLARE needle_pos INT DEFAULT LOCATE(needle, haystack);
WHILE needle_pos > 0 DO
SET output_str = CONCAT(output_str, SUBSTRING(haystack, GREATEST(needle_pos - 10, 1), LEAST(needle_pos - 1, 10) + needle_len + 10), '\n');
SET needle_pos = LOCATE(needle, haystack, needle_pos + needle_len);
END WHILE;
RETURN output_str;
END$$
DELIMITER ;
```
Here are some tests. For each match, the term ("media") and up to 10 characters on either side are returned, all concatenated in a single string:
```
SELECT substring_list('1234567890media12345678immediate34567890media1234567890', 'media');
```
```
+---------------------------+
| 1234567890media12345678im |
| 12345678immediate34567890 |
| te34567890media1234567890 |
+---------------------------+
```
```
SELECT substring_list('0media12345678immediate34567890media1', 'media');
```
```
+---------------------------+
| 0media12345678im |
| 12345678immediate34567890 |
| te34567890media1 |
+---------------------------+
```
|
Selecting multiple substrings from a field in MySQL
|
[
"",
"mysql",
"sql",
"string",
""
] |
I have two tables (`User` and `Salary`). I want to do a `left join` from `User` to `Salary`. For each user I want their name and salary. In case they have no salary that field can be left empty. So far a left join is all we need. But I only want one row per user. Due to some defects there can be several salaries for one user (see table salary). I only want one row per user which can be selected randomly (or top 1). How do I do that? The expected output is presented in the bottom.
User Table:
```
User Name
1 Adam
2 Al
3 Fred
```
Salary Table
```
User Salary
1 1000
2 2000
2 2000
```
Expected table:
```
User Name Salary
1 Adam 1000
2 Al 2000
3 Fred null
```
|
Changed `User` to `Userid` as `User` is a reserved word in SQL
```
SELECT u.Userid, u.Name, MAX(S.Salary)
FROM Usertable u
LEFT JOIN Salarytable s ON u.Userid = s.userid
GROUP BY u.userid, u.name
```
SQL Fiddle: <http://sqlfiddle.com/#!6/ce4a8/1/0>
|
Try this:
```
select U.User, U.Name, min(S.Salary)
from UserTable U
left join SalaryTable S on S.User = U.User
group by U.User, U.Name
```
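Both answers rely on the same idea: grouping by the user collapses duplicate salary rows, and an aggregate (`MAX` or `MIN`) picks a single value per group, while the left join keeps users with no salary. A minimal check of that behavior, using an in-memory SQLite database rather than SQL Server/Teradata:

```python
import sqlite3

# In-memory sketch: verify that GROUP BY + an aggregate collapses
# the duplicate salary rows while the LEFT JOIN keeps Fred (no salary).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE UserTable   (UserId INTEGER, Name TEXT);
    CREATE TABLE SalaryTable (UserId INTEGER, Salary INTEGER);
    INSERT INTO UserTable   VALUES (1,'Adam'),(2,'Al'),(3,'Fred');
    INSERT INTO SalaryTable VALUES (1,1000),(2,2000),(2,2000);
""")
rows = con.execute("""
    SELECT u.UserId, u.Name, MAX(s.Salary)
    FROM UserTable u
    LEFT JOIN SalaryTable s ON s.UserId = u.UserId
    GROUP BY u.UserId, u.Name
    ORDER BY u.UserId
""").fetchall()
print(rows)  # [(1, 'Adam', 1000), (2, 'Al', 2000), (3, 'Fred', None)]
```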
|
Left join without multiple rows from right table
|
[
"",
"sql",
"sql-server",
"teradata",
""
] |
I'm a newbie in MySQL programming, and I would like to create a MySQL function ***myFunction*** with a string parameter; this function queries ***myTable*** and returns a string built from the query result, as in this example:
```
myTable
---------------
|id | value |
---------------
| id1 |value1 |
| id2 |value2 |
| id3 |value3 |
| id4 |value4 |
---------------
```
Calling this function is like this
```
myFunction('value2#value1#value4')
```
and must return
```
'id2#id1#id4'
```
Thank you very much
|
This is a demo of how you can do it; of course you could put it all in one function, but I find it cleaner to split the functions.
Check the [SQL Fiddle](http://sqlfiddle.com/#!9/dda93/1)
**My table**
```
-- the table def
create table myTable (id char(3), value char(6));
insert into myTable values( 'id1', 'value1');
insert into myTable values( 'id2', 'value2');
insert into myTable values( 'id3', 'value3');
insert into myTable values( 'id4', 'value4');
```
**Get a specific id**
```
-- get Id by Value
CREATE function getIdByValue( theValue TEXT )
RETURNS TEXT READS SQL DATA
BEGIN
DECLARE theId TEXT;
DECLARE ok INT DEFAULT FALSE;
DECLARE crs CURSOR FOR
SELECT id FROM myTable where value = theValue;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET ok = TRUE;
SET theId = '';
OPEN crs;
read_loop: LOOP
FETCH crs INTO theId;
IF ok THEN
LEAVE read_loop;
END IF;
END LOOP;
CLOSE crs;
RETURN theId;
END//
```
**MyFunction as you describe**
```
-- the myFunction, usage: myFunction('value2#value1#value4')
CREATE function myFunction( v TEXT )
RETURNS TEXT READS SQL DATA
BEGIN
DECLARE theId TEXT;
DECLARE theIds TEXT;
DECLARE theValue TEXT;
DECLARE vInstr INT;
SET theId = '';
SET theIds = '';
v_loop: LOOP
SET vInstr = INSTR(v,'#');
IF vInstr = 0 THEN
SET theValue = v;
SET theId = getIdByValue(theValue);
ELSE
SET theValue = SUBSTRING(v, 1, vInstr-1);
SET v = SUBSTRING(v, vInstr+1);
SET theId = concat( getIdByValue(theValue), '#');
END IF;
SET theIds = CONCAT(theIds, theId);
IF vInstr = 0 THEN
LEAVE v_loop;
END IF;
END LOOP;
RETURN theIds;
END//
```
**The call**
```
SELECT myFunction( 'value2' );
SELECT myFunction( 'value2#value4' );
SELECT myFunction( 'value2#value4#value1' );
SELECT myFunction( 'value2#value4#value1#value3' );
```
**The results**
```
id2
id2#id4
id2#id4#id1
id2#id4#id1#id3
```
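Stripped of the cursor machinery, `myFunction` just splits the argument on `#`, looks each value up, and re-joins the ids with `#`. A plain-Python sketch of that logic (with the table hard-coded as a dict for the demo):

```python
# Hypothetical stand-in for myTable: value -> id
my_table = {'value1': 'id1', 'value2': 'id2', 'value3': 'id3', 'value4': 'id4'}

def my_function(v):
    # Split on '#', look each value up, re-join the ids with '#'.
    # Unknown values map to '' like the UDF's not-found branch.
    return '#'.join(my_table.get(value, '') for value in v.split('#'))

print(my_function('value2#value1#value4'))  # id2#id1#id4
```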
|
Any reason you couldn't just run this kind of query?
```
SELECT GROUP_CONCAT(id ORDER BY FIELD(`value`, 'value2', 'value1', 'value4'))
FROM myTable
WHERE `value` IN ('value2','value1','value4')
;
```
Edited: Added `value` as first argument to FIELD() function to make it return the ids in the proper order.
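A quick way to sanity-check this set-based approach outside MySQL: `FIELD()` is MySQL-specific, so in this SQLite sketch the requested ordering is applied in Python after fetching the value-to-id pairs:

```python
import sqlite3

# SQLite sketch of the set-based answer; FIELD() doesn't exist in
# SQLite, so the caller's order is reapplied in Python instead.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE myTable (id TEXT, value TEXT);
    INSERT INTO myTable VALUES ('id1','value1'),('id2','value2'),
                               ('id3','value3'),('id4','value4');
""")
wanted = ['value2', 'value1', 'value4']
mapping = dict(con.execute(
    "SELECT value, id FROM myTable WHERE value IN (?,?,?)", wanted))
result = '#'.join(mapping[v] for v in wanted)
print(result)  # id2#id1#id4
```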
|
Is it possible to create this mysql Function?
|
[
"",
"mysql",
"sql",
"function",
""
] |