| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I've always had this problem that went like this:
* Have a slow SQL query that needs to be faster
* Be confused why it's slow
* Look at execution plan
* Realize why it is slow
* Know exactly what changes to the execution plan would likely result in a faster query
* Attempt to formulate SQL query to get desired execution plan
* Repeat previous step ~20 times
* Learn to live with a slow query
This is not asking how to formulate the query from a desired plan, since that depends on the situation. The question is: **Why bother?**
Sure, for simple queries I don't really care about the execution plan or want to think about it. But for complex queries, it feels like I'm programming in brainf\*ck. I don't mean to brag, but I do believe I am much better at formulating execution plans than the optimizer is. Not only that, it's an extra step that slows everything down further. The optimizer can't know many things that I know. It always feels like I'm fighting it rather than working with it.
I've looked online best I could, and there seems to be no way to write an execution plan directly, although I could have missed something.
### Why do we write SQL queries instead of writing execution plans directly?
Side note: In SQLite, it's always baffled me that despite running *in the same process* as the program querying it, SQLite still asks for a textual, character-array query and then has to parse it. In the case of dynamic query generation, this means the query is generated and then immediately parsed by the same process.
|
Because SQL is a fifth generation programming language (the only successful one I know of): its main feature is that it writes the code for you. It inspects the contents of the database and determines the best way to do things.
That said, there are ways to manually change the execution plan on the fly via recompilation. However, I suggest you stick to using hints rather than trying to do anything overly fancy. Generally, the planner does a better job than you possibly could.
One common way to deal with consistently slow execution plans is to add [WITH RECOMPILE](https://msdn.microsoft.com/en-us/library/ms190439.aspx) to a stored procedure (or `OPTION (RECOMPILE)` to an individual query). It causes the execution plan to be recompiled on each execution - there is a compilation cost every time, but it is worth testing to see if it helps a highly active (many reads/writes) database.
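A minimal sketch of the statement-level form (SQL Server syntax; the table and predicate are illustrative, not from the question):

```sql
SELECT account_id, account_name
FROM Accounts
WHERE account_name LIKE 'A%'
OPTION (RECOMPILE);  -- compile a fresh plan for this statement on every execution
```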
|
SQL is a declarative language, and its sole purpose is to help you write declarative code. You simply state what you want to achieve; from then on, it is the language's responsibility to figure out the best way to achieve it. Because the language still isn't that advanced, users often have to "know exactly what changes to the execution plan would likely result in a faster query". The ideal declarative language would perfectly decipher the user's intentions.
|
Why can't I just write an execution plan directly instead of SQL?
|
[
"",
"sql",
"performance",
""
] |
I am new to Hive. I want to create a table in Hive with the same columns as an existing table, plus some additional columns. I know we can use something like this:
```
CREATE TABLE new_table_name
AS
SELECT *
FROM old_table_name
```
This will create a table with the same columns as old\_table\_name.
But how do I specify the additional columns in new\_table\_name?
|
Here is how you can achieve it:
> Old table:
```
hive> describe departments;
OK
department_id int from deserializer
department_name string from deserializer
```
> Create table:
```
create table ctas as
select department_id, department_name,
cast(null as int) as col_null
from departments;
```
> Displaying Structure of new table:
```
hive> describe ctas;
OK
department_id int
department_name string
col_null int
Time taken: 0.106 seconds, Fetched: 3 row(s)
```
> Results from new table:
```
hive> select * from ctas;
OK
2 Fitness NULL
3 Footwear NULL
4 Apparel NULL
5 Golf NULL
6 Outdoors NULL
7 Fan Shop NULL
8 TESTING NULL
8000 TESTING NULL
9000 testing export NULL
```
|
A simple way is to issue an `ALTER TABLE` command to add the additional columns after the `CREATE` statement above.
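A minimal sketch of that approach in Hive (the added column names and types are illustrative):

```sql
CREATE TABLE new_table_name AS SELECT * FROM old_table_name;

-- Hive syntax for appending columns to an existing table
ALTER TABLE new_table_name ADD COLUMNS (extra_col STRING, extra_num INT);
```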
|
create table in hive with additional columns
|
[
"",
"sql",
"hadoop",
"hive",
"bigdata",
""
] |
I've come across a table design that immediately struck me as odd, but now that I've thought through it I can't seem to come up with a design that I'm really happy about.
The existing design (simplified) is:
```
CREATE TABLE Accounts (
account_id INT NOT NULL,
account_name VARCHAR(50) NOT NULL,
CONSTRAINT PK_Accounts PRIMARY KEY CLUSTERED (account_id)
)
CREATE TABLE Groups (
group_id INT NOT NULL,
group_name VARCHAR(50) NOT NULL,
CONSTRAINT PK_Groups PRIMARY KEY CLUSTERED (group_id)
)
CREATE TABLE Group_Accounts (
group_id INT NOT NULL,
account_id INT NOT NULL,
is_primary BIT NOT NULL,
CONSTRAINT PK_Group_Accounts PRIMARY KEY CLUSTERED (group_id, account_id)
)
```
While it looks like a standard many:many relationship, an account never actually belongs to more than one group. I immediately thought, "Ok, we could put the `group_id` into the `Accounts` table and that should work." But then what would I do with the `is_primary` attribute?
I could put an `account_id` into the `Groups` table as `primary_account_id` and then I believe that I could enforce RI with a foreign key on the `primary_account_id, group_id to account_id, group_id`.
Alternatively, I could move the "is\_primary" flag into the `Accounts` table. Maybe that's the best solution?
Any thoughts on pros/cons for each approach? Am I missing any potential issues? Is there some other alternative that I've missed?
Is there any way to enforce a single primary account within a group in any of these situations outside of triggers (so primarily declarative RI)?
Thanks!
|
## Relationship Cardinality
Judging by your description, you need 1:N relationship, which means you do *not* need the junction table `Group_Accounts`. Just a simple FK from `Accounts` to `Groups` should do.
## Special Row
The next question is how you pick one row at the N side (`Accounts`) to be "special". You can either:
1. use the `Accounts.is_primary` flag and enforce its uniqueness (per group) through a filtered unique index (if your DBMS supports it),
2. or you could have a FK in `Groups` pointing to the primary account. In the latter case, though, you have to be careful to pick a primary account which *actually belongs* to the group.
The second approach can be modeled similar to this:
[](https://i.stack.imgur.com/RzNkp.png)
`Groups.FK1` denotes:
```
FOREIGN KEY (group_id, primary_account_no) REFERENCES Accounts (group_id, account_no)
```
The presence of `group_id` in the FK above is what enforces primary account to belong to the group it is the primary account of.
Just be careful how you generate `account_no` when creating new accounts. You'll need to do [something like this](https://stackoverflow.com/a/34631858/533120) to avoid race conditions in a concurrent environment (the actual code will vary by DBMS, of course).
---
Pick the first approach if your DBMS supports filtered indexes and there is no specific reason to pick the second approach.
Pick the second if:
* your DBMS doesn't support filtered indexes,
* or your DBMS supports deferred constraints and you need to enforce the presence of primary account at all times (just make `primary_account_no` NOT NULL),
* or you don't actually need `account_id`, so you can have potentially one index less (depending on how strictly your DBMS requires indexes on FKs, and your actual workload, you may be able to avoid index on `primary_account_no`, as opposed to index that must be present on `is_primary`).
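The filtered unique index from the first approach can be sketched like this in SQL Server (the index name is illustrative; column names follow the question's schema):

```sql
CREATE UNIQUE INDEX UQ_Accounts_one_primary_per_group
ON Accounts (group_id)
WHERE is_primary = 1;  -- allows at most one primary account per group
```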
|
It is definitely possible to get rid of Group\_Accounts.
From your description, it seems each group has many accounts, but each account only has one group. So you would put the group\_id into the Accounts table as you suggest, and then put primary\_account\_id as a field in Groups.
|
Modeling a 1:Many relationship with an attribute
|
[
"",
"sql",
"database-design",
""
] |
T1 and T2 contain multiple rows over multiple dates; I wrote the SQL below to get the data sets shown.
Table T1 is the result of this SQL:
```
select SYS_ID, SYS_NM
FROM T1
WHERE dt = '30-NOV-2015'
group by SYS_ID, SYS_NM
order by 1,2;
T1
SYS_ID SYS_NM
4 MPC
4 MHL
6 FR
8 BECD
8 BCD
8 CL
8 FHLB
8 JRD
```
---
Table T2 is the result of this SQL:
```
Select SYS_ID, SYS_NM FROM T2
WHERE dt = '30-NOV-2015'
and SYS_CD IN ('R103')
group by SYS_ID, SYS_NM
order by 1,2;
T2
SYS_ID SYS_NM
8 BECD
8 BCD
8 FHLB
```
Now I need to get the data from `T1` that is not present in `T2`.
I tried doing this in two ways, but I'm not getting the expected results.
```
Method 1:
select A.SYS_ID, A.SYS_NM
FROM T1 A
WHERE not exists
(
select B.SYS_ID, B.SYS_NM
FROM T2 B
WHERE A.SYS_ID = B.SYS_ID
and A.SYS_NM = B.SYS_NM
group by 1, 2
)
group by 1,2
order by 1,2;
```
Method 2:
```
Select A.SYS_ID, A.SYS_NM FROM T1 A
LEFT JOIN T2 B
ON A.SYS_ID = B.SYS_ID
group by A.SYS_ID, A.SYS_NM
order by 1,2;
```
|
You are close. Try this:
```
SELECT * FROM T1
WHERE NOT EXISTS
(SELECT * FROM T2
WHERE T2.SYS_ID = T1.SYS_ID
AND T2.SYS_NM = T1.SYS_NM)
order by 1, 2;
```
UPDATE:
I am guessing that you are seeing duplicate rows in the result. If so, adding "DISTINCT" will correct that.
I also noticed the answer that I previously recommended doesn't have both columns in the JOIN clause. Here is the complete solution:
```
SELECT DISTINCT A.SYS_ID, A.SYS_NM
FROM T1 A
LEFT JOIN T2 B ON A.SYS_ID = B.SYS_ID AND A.SYS_NM=B.SYS_NM
WHERE B.SYS_ID IS NULL
GROUP BY A.SYS_ID, A.SYS_NM;
```
|
You could go for the usual equality comparison between all columns of both tables, inverted with the `NOT` operator.
*What I mean is: write a `SELECT` for the case where you want to retrieve the rows that are the same, then invert it with `NOT`.*
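The same idea can also be written as a set difference; a sketch using the question's columns (`EXCEPT` works in SQL Server, PostgreSQL, and MySQL 8.0.31+; Oracle uses `MINUS`):

```sql
SELECT SYS_ID, SYS_NM FROM T1
EXCEPT                         -- MINUS in Oracle
SELECT SYS_ID, SYS_NM FROM T2;
```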
|
How to get the difference (data) in data between two independent tables - using sql
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have the following table:
```
tableA
+-----------+--------+
| tableA_id | code |
+-----------+--------+
| 1 | code A |
| 2 | code B |
| 3 | code A |
| 3 | code C |
| 3 | code B |
| 4 | code A |
| 4 | code C |
| 4 | code B |
| 5 | code A |
| 5 | code C |
| 5 | code B |
+-----------+--------+
```
I want to use a query to display code A, code B, code C as the column headers and then the values would display whether or not the tableA\_id entry contains that code in the code field. So something like this:
```
+-----------+------------------------------+
| tableA_id | code A | code B | code C |
+-----------+------------------------------+
| 1 | yes | | |
| 2 | | yes | yes |
| 3 | yes | yes | yes |
etc...
```
Can you do this in SQL?
|
Using conditional aggregation (portable between different dialects):
```
SELECT tableA_id,
MAX(CASE WHEN code ='code A' THEN 'yes' END) AS "code A",
MAX(CASE WHEN code ='code B' THEN 'yes' END) AS "code B",
MAX(CASE WHEN code ='code C' THEN 'yes' END) AS "code C"
FROM tableA
GROUP BY tableA_id;
```
`SqlFiddleDemo_MySQL`
`SqlFiddleDemo_Postgresql`
Output:
```
+-----------+--------+--------+--------+
| tableA_id | code A | code B | code C |
+-----------+--------+--------+--------+
| 1         | yes    | (null) | (null) |
| 2         | (null) | yes    | (null) |
| 3         | yes    | yes    | yes    |
| 4         | yes    | yes    | yes    |
| 5         | yes    | yes    | yes    |
+-----------+--------+--------+--------+
```
There are many possibilities(search for):
```
PIVOT -> SQL Server/Oracle
CROSSTAB -> Postgresql
SELF OUTER JOIN -> All
CONDITIONAL AGG -> All
...
```
|
You need something like this in SQL Server `PIVOT` syntax (a sketch: pivot a constant marker value and take its `MAX`):
```
SELECT tableA_id, [code A], [code B], [code C]
FROM (SELECT tableA_id, code, 'yes' AS val FROM tableA) AS src
PIVOT (MAX(val) FOR code IN ([code A], [code B], [code C])) AS PivotTable;
```
|
SQL convert row values to column headers
|
[
"",
"sql",
"pivot",
"crosstab",
""
] |
Using `SQLite`, I'd like to split a string in the following way.
Input string:
```
C:\Users\fidel\Desktop\Temp
```
and have the query return these **rows**:
```
C:\
C:\Users\
C:\Users\fidel\
C:\Users\fidel\Desktop\
C:\Users\fidel\Desktop\Temp
```
In other words, I'd like to split a file path into its constituent paths. Is there a way to do this in pure SQLite?
|
This is possible with a recursive [common table expression](http://www.sqlite.org/lang_with.html):
```
WITH RECURSIVE split(s, last, rest) AS (
VALUES('', '', 'C:\Users\fidel\Desktop\Temp')
UNION ALL
SELECT s || substr(rest, 1, 1),
substr(rest, 1, 1),
substr(rest, 2)
FROM split
WHERE rest <> ''
)
SELECT s
FROM split
WHERE rest = ''
OR last = '\';
```
(You did not ask for a *reasonable* way.)
|
Recursive CTE:
```
WITH RECURSIVE cte(org, part, rest, pos) AS (
VALUES('C:\Users\fidel\Desktop\Temp', '','C:\Users\fidel\Desktop\Temp'|| '\', 0)
UNION ALL
SELECT org,
SUBSTR(org,1, pos + INSTR(rest, '\')),
SUBSTR(rest, INSTR(rest, '\')+1),
pos + INSTR(rest, '\')
FROM cte
WHERE INSTR(rest, '\') > 0
)
SELECT *
FROM cte
WHERE pos <> 0
ORDER BY pos;
```
`SqlFiddleDemo`
Output:
```
+-----------------------------+
| part                        |
+-----------------------------+
| C:\                         |
| C:\Users\                   |
| C:\Users\fidel\             |
| C:\Users\fidel\Desktop\     |
| C:\Users\fidel\Desktop\Temp |
+-----------------------------+
```
How it works:
```
org - original string does not change
part - simply `LEFT` equivalent of original string taking pos number of chars
rest - simply `RIGHT` equivalent, rest of org string
pos - position of first `\` in the rest
```
Trace:
```
+-----------------------------+-----------------------------+---------------------------+-----+
| org                         | part                        | rest                      | pos |
+-----------------------------+-----------------------------+---------------------------+-----+
| C:\Users\fidel\Desktop\Temp | C:\                         | Users\fidel\Desktop\Temp\ | 3   |
| C:\Users\fidel\Desktop\Temp | C:\Users\                   | fidel\Desktop\Temp\       | 9   |
| C:\Users\fidel\Desktop\Temp | C:\Users\fidel\             | Desktop\Temp\             | 15  |
| C:\Users\fidel\Desktop\Temp | C:\Users\fidel\Desktop\     | Temp\                     | 23  |
| C:\Users\fidel\Desktop\Temp | C:\Users\fidel\Desktop\Temp |                           | 28  |
+-----------------------------+-----------------------------+---------------------------+-----+
```
|
Split a string into rows using pure SQLite
|
[
"",
"sql",
"string",
"sqlite",
"split",
""
] |
I've only just begun to run EXPLAIN on my queries, and I see that the type is ALL and the query is using filesort.
I'm not sure how to optimise even the simplest of queries. Could anyone provide guidance on the following query, which just retrieves users ordered primarily by first name and secondarily by surname:
```
SELECT UserID, TRIM(FName) AS FName, TRIM(SName) as SName, pic
FROM users WHERE Blocked <> 1
ORDER BY FName, SName
LIMIT ?, 10
```
Table is created as follows:
```
CREATE TABLE IF NOT EXISTS `users` (
`UserID` int(11) NOT NULL,
`FName` varchar(25) NOT NULL,
`SName` varchar(25) NOT NULL,
`Pword` varchar(50) NOT NULL,
`Longitude` double NOT NULL,
`Latitude` double NOT NULL,
`DateJoined` bigint(20) NOT NULL,
`Email` varchar(254) NOT NULL,
`NotificationID` varchar(256) NOT NULL,
`Pic` varchar(500) DEFAULT NULL,
`Radius` int(11) NOT NULL,
`ads` tinyint(1) NOT NULL,
`Type` varchar(5) NOT NULL,
`Blocked` tinyint(4) NOT NULL
) ENGINE=MyISAM AUTO_INCREMENT=1469 DEFAULT CHARSET=latin1;
```
Explain gives the following:
```
id : 1
select_type : SIMPLE
table : users
type : ALL
possible_keys : NULL
key : NULL
key_len : NULL
ref : NULL
rows : 1141
Extra : Using where; Using filesort
```
|
Add index (Blocked, FName, SName)
And change where to `Blocked = 0`, if it is possible
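A sketch of that index and the adjusted predicate (MySQL syntax; the index name is illustrative):

```sql
ALTER TABLE users ADD INDEX idx_blocked_name (Blocked, FName, SName);

SELECT UserID, TRIM(FName) AS FName, TRIM(SName) AS SName, Pic
FROM users
WHERE Blocked = 0          -- equality lets the index order satisfy the ORDER BY
ORDER BY FName, SName
LIMIT 0, 10;
```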
|
If you want to optimize this query, you could create an index on the field in the WHERE condition:
```
CREATE INDEX id_users_blocked ON users (Blocked) ;
```
How much this helps depends on the number of users with `Blocked <> 1`. If only a few rows are excluded, don't expect a particular improvement, but in EXPLAIN you should no longer see ALL.
You could also add `FName` and `SName` to the index, but the use of `TRIM()` on one hand and the need for the `Pic` field on the other keep this index from covering the query: in the first case, columns wrapped in a function like `TRIM()` are normally not served from the index, and in the second, a wide column like `Pic` does not belong in an index, so the lookup of the table row remains mandatory.
|
optimise simple mysql query with order by on two columns
|
[
"",
"mysql",
"sql",
"select",
"sql-order-by",
"query-optimization",
""
] |
I have a table like this..
```
Priority | Amount | Case
P1 | 100 | 1
P1 | 200 | 2
P1 | 300 | 1
P3 | 400 | 3
```
I want to first order this by `Priority` and `Amount` (Descending) and then `Case`, to look like this.
```
Priority | Amount | Case
P1 | 300 | 1
P1 | 100 | 1
P1 | 200 | 2
P3 | 400 | 3
```
If I use `ORDER BY Priority, Amount DESC, Case` then it returns this. Where `Case` is not grouped together based on the highest `Amount` value.
```
Priority | Amount | Case
P1 | 300 | 1
P1 | 200 | 2
P1 | 100 | 1
P3 | 400 | 3
```
EDIT: Adding one more record for clarity:
```
Priority | Amount | Case
P1 | 100 | 1
P1 | 200 | 2
P1 | 300 | 1
P1 | 200 | 0 << New record
P3 | 400 | 3
```
This should return as:
```
Priority | Amount | Case
P1 | 300 | 1
P1 | 100 | 1
P1 | 200 | 0
P1 | 200 | 2
P3 | 400 | 3
```
where the rows are first grouped by `Priority`, then sorted within each group by highest `Amount`, and then grouped by `Case` within each `Amount`.
|
You need to use a windowed aggregate to find the *highest* amount within each `Case`, and then use *that* for sorting:
```
declare @t table ([Priority] varchar(19) not null,Amount int not null, [Case] int not null)
insert into @t ([Priority],Amount,[Case]) values
('P1',100,1),
('P1',200,2),
('P1',300,1),
('P1',200,0),
('P3',400,3)
select
*
from
(
select *,MAX(Amount) OVER (PARTITION BY [Case]) as mAmount
from @t
) t
order by [Priority],mAmount desc,[Case],Amount desc
```
Result:
```
Priority Amount Case mAmount
------------------- ----------- ----------- -----------
P1 300 1 300
P1 100 1 300
P1 200 0 200
P1 200 2 200
P3 400 3 400
```
Also, please consider renaming some of your columns - I've had to wrap two in `[]` brackets because they're reserved words. It's usually better to just avoid reserved words completely.
|
```
CREATE TABLE SHANKATABLE (PRIORITY VARCHAR(2), AMOUNT INT, [CASE] INT)
INSERT INTO SHANKATABLE VALUES('P1', 100 , 1)
INSERT INTO SHANKATABLE VALUES ('P1', 200 , 2)
INSERT INTO SHANKATABLE VALUES ('P1', 300 , 1)
INSERT INTO SHANKATABLE VALUES ('P3', 400 , 3)
```
To achieve first result simply use below query,
Based on [**Damien\_The\_Unbeliever**](https://stackoverflow.com/users/15498/damien-the-unbeliever) answer I am just updating my answer.
```
SELECT PRIORITY,[CASE],AMOUNT
FROM (
SELECT PRIORITY,[CASE],AMOUNT,MAX(Amount) OVER(PARTITION BY [Case]) AS mAmount
FROM SHANKATABLE
) Temp
ORDER BY
[Priority],mAmount DESC,[Case],Amount DESC
```
[**updated sql fiddle demo**](http://sqlfiddle.com/#!3/6f266/5)
|
ORDER BY or GROUP BY based on three columns in SQL Server
|
[
"",
"sql",
"sql-server",
"group-by",
"sql-order-by",
""
] |
I have a table COBA with 3 fields: NOMER, NAMA, KETERANGAN. The table contains data like this:
[](https://i.stack.imgur.com/Ki3uS.png)
I want to insert more values into that table, but how can I prevent inserting duplicate data in Oracle?
I have tried using WHERE NOT EXISTS; this is my query:
```
INSERT INTO COBA(NOMER, NAMA, KETERANGAN)
(SELECT '2','FIAN', 'JEKULO'
FROM DUAL
WHERE NOT EXISTS (SELECT * FROM COBA WHERE NOMER='1' AND NAMA='AMIN' AND KETERANGAN='JEPARA'))
```
That query didn't work. Any suggestions? Thanks.
|
If you don't want to use a unique constraint, you can use a `left join` while inserting to check whether the `nomer` already exists in the target, as below. With this method you will not get an error even if the record already exists in your table; it will simply be skipped.
```
insert into coba
(select s.nomer,s.nama,s.ket from
(select 1 as nomer,'AA' as nama,'bb' as ket from dual) s
left join
coba t
on s.nomer=t.nomer
where t.nomer is null
);
```
I created a fiddle in MySQL (as the Oracle fiddle was not working), but the functionality is the same. As you can see in the example below, `nomer` = `1` is not inserted again.
**See fiddle demo here**
<http://sqlfiddle.com/#!2/3add2/1>
|
Use a unique constraint:
```
ALTER TABLE COBA ADD CONSTRAINT uni_c UNIQUE (NOMER, NAMA, KETERANGAN)
```
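An alternative sketch using Oracle's `MERGE`, which expresses insert-if-absent directly (column values are the question's sample row):

```sql
MERGE INTO COBA t
USING (SELECT '2' AS NOMER, 'FIAN' AS NAMA, 'JEKULO' AS KETERANGAN FROM DUAL) s
ON (t.NOMER = s.NOMER AND t.NAMA = s.NAMA AND t.KETERANGAN = s.KETERANGAN)
WHEN NOT MATCHED THEN
  INSERT (NOMER, NAMA, KETERANGAN)
  VALUES (s.NOMER, s.NAMA, s.KETERANGAN);
```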
|
Prevent Duplicate Insert Data in Oracle
|
[
"",
"sql",
"oracle11g",
""
] |
I am trying to write a query to get all nodes with their ancestors. The database stores a tree (nodes and their children/parents). I know that connect by can give all ancestors, and when coupled with the start with clause you can get all ancestors of a single node.
Here is a quick example to illustrate what I am going for.
Node edge table:
```
+---------+-----------+
|child_id |parent_id |
+---------+-----------+
|2 |1 |
|3 |2 |
|4 |2 |
|5 |4 |
+---------+-----------+
```
The query I wrote is:
```
select parent_id, child_id
from edges
start with child_id = 5
connect by child_id = prior parent_id
```
gives:
```
+---------+-----------+
|child_id |parent_id |
+---------+-----------+
|2 |1 |
|4 |2 |
|5 |4 |
+---------+-----------+
```
what I am looking for is something like this:
```
+---------+-----------+
|child_id |parent_id |
+---------+-----------+
|2 |1 |
|3 |2 |
|3 |1 |
|4 |2 |
|4 |1 |
|5 |4 |
|5 |2 |
|5 |1 |
+---------+-----------+
```
So each node has a record for each of its ancestors, all the way to the root. I'm having a bit of trouble building a query to get this result. Any suggestions?
Thanks,
mcangus
|
I don't think "start with" does what you intend it to do. It restricts your root rows, which is why you get a limited result set. You also want CONNECT\_BY\_ROOT to get not just the child\_id, but the child\_id of the root rows. Disclaimer: I am not an Oracle guru, I just had some time.
The SYS\_CONNECT\_BY\_PATH is just extra info.
```
select CONNECT_BY_ROOT child_id "CHILD_ID", parent_id, SYS_CONNECT_BY_PATH(child_id, '/') "PATH"
from edges
connect by child_id = prior parent_id;
```
results:
```
CHILD_ID PARENT_ID PATH
2 1 /2
3 2 /3
3 1 /3/2
4 2 /4
4 1 /4/2
5 4 /5
5 2 /5/4
5 1 /5/4/2
```
Check out the [docs](http://docs.oracle.com/cd/B19306_01/server.102/b14200/operators004.htm#i1035022).
|
Use `CONNECT_BY_ROOT` operator:
```
WITH edges (child_id, parent_id) AS (
SELECT 2, 1 FROM DUAL UNION ALL
SELECT 3, 2 FROM DUAL UNION ALL
SELECT 4, 2 FROM DUAL UNION ALL
SELECT 5, 4 FROM DUAL
)
SELECT
child_id, CONNECT_BY_ROOT parent_id parent_id
FROM
edges
CONNECT BY
PRIOR child_id = parent_id
ORDER BY
child_id, parent_id DESC;
```
|
Oracle connect by to find child, ancestor pairs
|
[
"",
"sql",
"oracle",
"connect-by",
""
] |
I have this query in SQL Server:
```
select column
from table_53;
```
Now, I want to get this 53 from another table, so what I want to do is something like this:
```
select column
from table_(select id from table2);
```
Is there any way to do this in SQL Server?
|
**You need a dynamic SQL query here.**
```
DECLARE @id INT, @sqlQuery NVARCHAR(MAX);

SELECT @id = id FROM table2;

SET @sqlQuery = N'SELECT column FROM table_'
              + CAST(@id AS NVARCHAR(10)) + N';';

EXEC (@sqlQuery);
```
Note: one of the cons of using dynamic SQL is SQL injection. I would suggest a better table structure, or a parameterized query where possible.
|
This is definitely not the way SQL thinks and works. Your suggested approach could perhaps be mimicked by writing stored procedures that build SQL statements and then evaluate them; however, this will not be very efficient.
A better approach would be to store the values of all your individual separate tables in one master table and mark each row in a separate column `tblid` with its table's number (e.g. `53`). Then you can always filter them from this master table by looking for this `tblid`.
|
SQL Server - select into from statement?
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables: player, which stores player data, and a similar table, team.
I want to select from these two tables the name and surname of each player and the name of the team to which he belongs. I used this query:
```
SELECT player.name, player.surname,
team.name AS teamname
FROM player, team
WHERE player.id_player=team.id_team
```
But as a result I get only one player from each team, and of course I want a full list of players with their team names.
Does anyone have an idea?
|
I think you need to join on the team id. I am guessing something similar to this will work:
```
SELECT p.name, p.surname, t.name AS teamname
FROM player p JOIN
team t
ON p.team_id_team = t.id_team ;
----------^ Note the change here
```
Also:
* Learn proper `JOIN` syntax. Simple rule: *Never* use commas in the `FROM` clause. *Always* use explicit `JOIN` syntax.
* Table aliases make a query easier to write and to read.
* If you have players without a valid team, then use a `LEFT JOIN` instead of `JOIN`.
|
You must use a JOIN to "connect" the tables; with WHERE alone you can only filter rows:
```
SELECT
p.name,
p.surname,
t.name AS teamname
FROM player p
LEFT JOIN team t ON p.id_player = t.id_team ;
```
|
Selecting rows from two tables MySql
|
[
"",
"mysql",
"sql",
""
] |
Table A is an inventory of items imported from a CSV file. The spreadsheet included a column for location of inventory items (i.e., toothbrush, bathroom).
Table B is a list of all the distinct locations from Table A.
I now want to update table A with the ID from Table B, in a new column in Table A set up as a foreign key. I need each record in Table A to have a `location_id` where the text of `location` matches `as_entered` in Table B.
I have tried:
```
UPDATE imported_data_copy
SET imported_data_copy.location_id=locations.id
WHERE locations.as_entered LIKE imported_data_copy.location;
```
The error I get is:
```
#1054 - Unknown column 'locations.as_entered' in 'where clause'
```
There is, in fact, an `as_entered` column in the locations table.
What am I doing wrong?
|
When you are doing a `SELECT`, `INSERT`, `DELETE`, or `UPDATE` operation, every table you reference in the `WHERE` clause must also be introduced in the statement's table list. In your statement, `locations` never appears after `UPDATE`, so `locations.as_entered` is an unknown column. List both tables and compare `imported_data_copy.location` against `locations.as_entered`:
```
UPDATE imported_data_copy, locations
SET imported_data_copy.location_id=locations.id
WHERE imported_data_copy.location LIKE locations.as_entered;
```
On another note about your design: referencing a table by matching text rather than an ID is not best practice. Integer ID columns are much better than text unless you have a good (very good) reason.
|
The problem turned out to be that I did not include the `locations` table in the UPDATE clause. The correct syntax turned out to be:
```
UPDATE imported_data_copy,locations
SET imported_data_copy.location_id=locations.id
WHERE imported_data_copy.location LIKE locations.as_entered;
```
Glad I asked here, though, because I would still have had trouble with `WHERE` clause.
|
How To Populate Foreign Key by Comparing Columns in Two Tables
|
[
"",
"mysql",
"sql",
"database",
""
] |
I want to group values and sum them up by a category value - or none.
For example, I have the following table:
```
+-------+----------+
| value | category |
+-------+----------+
| 99 | A |
| 42 | A |
| 76 | [NULL] |
| 66 | B |
| 10 | C |
| 13 | [NULL] |
| 27 | C |
+-------+----------+
```
My desired result should look like this:
```
+-------+----------+
| sum | category |
+-------+----------+
| 230 | A |
| 155 | B |
| 126 | C |
| 89 | [NULL] |
+-------+----------+
```
I tried a `group by category` but obviously this doesn't bring up the right numbers.
Any ideas?
I'm using SQL Server 2012.
EDIT:
Ok, as requested, I can explain my intents and give my query so far, although that is not very helpful I think.
I need to sum all value for the given categories AND add the sum of all values without a category [=> NULL]
So in my example, I would sum
```
99 + 42 + 76 + 13 = 230 for category A
66 + 76 + 13 = 155 for category B
10 + 27 + 76 + 13 = 126 for category C
76 + 13 = 89 for no category
```
I hope that gives you an idea of my goal.
Query so far:
```
SELECT SUM([value]), [category]
FROM [mytable]
GROUP BY [category]
```
|
First calculate the sum of nulls then add it to each group:
```
DECLARE @t TABLE
(
value INT ,
category CHAR(1)
)
INSERT INTO @t
VALUES ( 99, 'A' ),
( 42, 'A' ),
( 76, NULL ),
( 66, 'B' ),
( 10, 'C' ),
( 13, NULL ),
( 27, 'C' )
;with cte as(select sum(value) as s from @t where category is null)
select category, sum(value) + s
from @t
cross join cte
where category is not null
group by category, s
```
Another version:
```
;WITH cte AS(SELECT category, SUM(value) OVER(PARTITION BY category) +
SUM(CASE WHEN category IS NULL THEN value ELSE 0 END) OVER() AS value
FROM @t)
SELECT DISTINCT * FROM cte WHERE category IS NOT NULL
```
|
If you want to add the `NULL` values to all the groups, then do something like:
```
with cte as (
select category, sum(value) as sumv
from t
group by category
)
select cte.category,
       (cte.sumv +
        (case when cte.category is not null then coalesce(ctenull.sumv, 0) else 0 end)
       ) as sumWithNulls
from cte left join
     cte ctenull
     on ctenull.category is null -- or should that be `= '[NULL]'`?
```
This seems like a strange operation.
EDIT:
You can almost do this with window functions:
```
select category,
(sum(value) +
sum(case when category is null then sum(value) else 0 end) over ()
) as sumWithNulls
from t
group by category;
```
The problem is that `NULL`s get over counted for that category. So:
```
select category,
       (sum(value) +
        (case when category is not null
              then sum(case when category is null then sum(value) else 0 end) over ()
              else 0
         end)
       ) as sumWithNulls
from t
group by category;
```
|
GROUP BY with summing null values to every group
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I am using a SQL Server 2016 database and in my stored procedure I want to compare two dates.
I get one date parameter as string in 'dd-MM-yyyy' format.
```
@stringdate = '28-12-1990' // I get date in this format
```
I want to convert this to date format and compare with a database column which is of type `datetime`.
|
You need to use the **CAST()** function to convert to the date format.
**Try this code:**
```
CAST(dbdatetimecolumnname AS date) = (CAST(RIGHT(@stringdate,4) + '-' + LEFT(RIGHT(@stringdate,7),2) + '-' + LEFT(@stringdate,2) AS DATE))
```
The **RIGHT()** and **LEFT()** functions are used to rearrange your string into the database date format (`yyyy-MM-dd`).
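An alternative sketch using `CONVERT` with style code 105, which matches the `dd-MM-yyyy` format directly and avoids the string surgery:

```sql
DECLARE @stringdate VARCHAR(10) = '28-12-1990';
SELECT CONVERT(date, @stringdate, 105);  -- style 105 = dd-mm-yyyy
```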
|
Since you're on SQL Server **2016**, you could use the `PARSE` function and define the culture needed to successfully parse this string.
Something like
```
DECLARE @stringdate VARCHAR(20) = '28-12-1990';
DECLARE @parseddate DATETIME2(3);
SELECT @parseddate = PARSE(@stringdate AS DATE USING 'de');
SELECT @parseddate
```
I used the `de` culture for Germany - since the format of the date you have is a **European** format (`dd-MM-yyyy`).
|
SQL Server Date Comparison from string to datetime
|
[
"",
"sql",
"sql-server",
"sql-server-2016",
""
] |
I am having an issue. I am new to SQL query work, but I have a query that runs and displays employees and all their address history. I have found that staff have been missing checking off the indicator for whether the employee has a mailing address. The addresses are stored in a table that references the employee id. How would I display results for a specific employee when no mailing-address row (type "2") is found? The address table also contains previous-address and billing-address flags, "1" and "3".
In the addelement table
```
type_add_id|type_add_desc
1 |Billing
2 |Mailing
3 |Previous
```
Query
```
SELECT
addelement.type_add_desc
,address.street
,employee.name
FROM
address
INNER JOIN addelement
ON address.type_add_id = addelement.type_add_id
INNER JOIN employee
ON address.refid = employee.refid
order by employee.name
```
|
This will get you a list of employees that do not have a mailing address. Note that we start with all employees, outer join to the addresses, but constrain to not only match the employee, but also to be of the desired type of address. The `WHERE` clause then removes records from the resulting recordset where there is a value.
```
SELECT
employee.name
FROM
employee
LEFT OUTER JOIN address ON address.refid = employee.refid AND address.type_add_id = 2
WHERE
address.type_add_id IS NULL
```
Change the hardcoded integer in the `JOIN` to search out each of the desired types of addresses.
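The anti-join pattern above can be sanity-checked with an in-memory database. A minimal sketch using Python's sqlite3: table and column names follow the question, but the rows are made up for illustration.

```python
import sqlite3

# Minimal in-memory reproduction of the anti-join pattern.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (refid INTEGER, name TEXT);
CREATE TABLE address (refid INTEGER, type_add_id INTEGER, street TEXT);
INSERT INTO employee VALUES (1, 'Ann'), (2, 'Bob');
-- Ann has billing (1) and mailing (2) addresses; Bob has billing only.
INSERT INTO address VALUES (1, 1, 'Main St'), (1, 2, 'PO Box 9'), (2, 1, 'Oak Ave');
""")

rows = con.execute("""
SELECT employee.name
FROM employee
LEFT OUTER JOIN address
  ON address.refid = employee.refid AND address.type_add_id = 2
WHERE address.type_add_id IS NULL
""").fetchall()
print(rows)  # [('Bob',)]
```

Only Bob comes back, because the outer join found no mailing-address row for him and the `IS NULL` check keeps exactly those failed matches.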
|
This should return your employee #100 with all the addresses stored, even if the employee has no address (of any type).
```
select e.name, ad.type_add_desc, a.street
from employee e
left outer join address a on
e.refid = a.refid
left outer join addelement ad on
a.type_add_id = ad.type_add_id
where e.refid = 100
order by e.name;
```
To return rows that have no matching row in another table, you should use an OUTER JOIN.
|
SQL Query Display records only when value is missing
|
[
"",
"sql",
""
] |
I have a table with millions of records and I've been running this query once a day for over 500 days and just recently it stopped working.
The query looks as follows:
```
SELECT id FROM table
ORDER BY column1 desc, column2 desc, column3 desc
LIMIT 522, 1
```
This returns an empty result set. However, if I change the limit to anything less than 500 it works fine.
If I remove the limit and do a COUNT(id) with the order by, it returns a count of the multi-million row table.
|
Not sure exactly how it occurred, but after optimizing the table it began working again...
|
I can only imagine some sort of time-out or error occurring. The `order by` and `limit` should be returning rows, if they are there.
First, check to see if you are getting an error. If you are calling from an application, this requires checking the error status.
For a time-out, you might consider adding an index on `table(column1 desc, column2 desc, column3 desc, id)`. This should radically improve the performance of the query.
|
Unexpected empty result set using limit with order by
|
[
"",
"mysql",
"sql",
""
] |
I have a requirement to filter a specific row while doing summation. Table structure and data as below.
[](https://i.stack.imgur.com/14uyR.png)
I want to SUM of `[Gain/Lost]` column value for all rows except the row `WHERE [Reason] = 'Order Placed' AND [No Of Month Old] = 1`.
For the case below, the value will be -560.
|
You described your conditions pretty well in English - now you just need to write the same thing in SQL:
```
SELECT SUM([Gain/Lost])
FROM mytable
WHERE NOT ([Reason] = 'Order Placed' AND [No Of Month Old] = 1)
```
|
```
select userid, sum([Gain/Lost])
from tablename
where [Reason] <> 'Order Placed'
or [No Of Month Old] <> 1
group by userid
```
|
Conditional WHERE condition SQL script
|
[
"",
"sql",
"sql-server",
"select",
"sum",
"where-clause",
""
] |
In my application, I have a table that identifies resources (i.e. pictures) by their id. Said resources have also been "tagged" (field1). i.e. Picture 3 in the table below is tagged with both 'A' and 'B'. Whereas Picture 1 is tagged with only 'A' and Picture 2 is tagged with only 'B'.
Here is my "tagging" table:
```
+--------------+
| id | field1 |
+--------------+
| 1 | A |
| 2 | B |
| 3 | A |
| 3 | B |
+--------------+
```
Note: The id's are neither unique nor auto-incrementing.
Problem: I want to return all pictures tagged as 'B', but I do not want to return any that are tagged as 'A'.
SELECT id from pictures
WHERE field1 = 'B';
Returns:
```
+-----+
| id |
+-----+
| 2 |
| 3 |
+-----+
```
This is not what I want, because it includes Picture 3, which is also tagged with 'A' (in the row immediately preceding [3, B] in the original table).
I want:
```
+-----+
| id |
+-----+
| 2 |
+-----+
```
|
Here are two methods:
`NOT EXISTS` subquery:
```
SELECT id
from pictures as pictures1
WHERE field1 = 'B'
and not exists (
select *
from pictures as pictures2
where pictures2.id = pictures1.id
and pictures2.field1 = 'A');
```
left join:
```
Select pictures1.id
from pictures as pictures1
left join pictures as pictures2 on
pictures2.id = pictures1.id
and pictures2.field1 = 'A'
where pictures1.field1 = 'B'
and pictures2.ID is null -- this line eliminates records where the join fails; note that if you have this line, you must not put any other pictures2 references in this where clause
```
|
You started well with your request. Just exclude the rows whose id also appears with field1 = 'A':
```
SELECT id from pictures WHERE field1 = 'B' AND id NOT IN(
SELECT id from pictures WHERE field1 = 'A'
);
```
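This `NOT IN` approach can be checked quickly against the sample data from the question. A small sketch using Python's sqlite3 as a stand-in for MySQL:

```python
import sqlite3

# Quick check of the NOT IN exclusion using the question's sample data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pictures (id INTEGER, field1 TEXT);
INSERT INTO pictures VALUES (1, 'A'), (2, 'B'), (3, 'A'), (3, 'B');
""")

rows = con.execute("""
SELECT id FROM pictures WHERE field1 = 'B' AND id NOT IN (
    SELECT id FROM pictures WHERE field1 = 'A'
)
""").fetchall()
print(rows)  # [(2,)]
```

Picture 3 is filtered out because its id appears in the 'A' subquery, leaving only Picture 2.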
|
MySQL: Return all rows with same ID but filter by a different field
|
[
"",
"mysql",
"sql",
""
] |
With this query
```
SELECT
product, customer, sum(sales) as topSales
FROM
invoices
WHERE
product in (//lots of product id's here)
GROUP BY
product, customer
ORDER BY
topSales DESC
```
I get a result set containing all buyers for a product, in order of sales descending.
```
PRODUCT CUSTOMER topSales
=============================
banana Chris 50.35
banana Eric 34.87
cookie Emmy 54.54
apple John 78.67
banana Derek 33.87
banana Sally 21.76
apple Henry 65.78
cookie Yoyo 24.67
milk Chris 30.43
milk Henry 22.43
```
I only want the top buyer for each product, it should look like this
```
PRODUCT CUSTOMER topSales
=============================
banana Chris 50.35
cookie Emmy 54.54
apple John 78.67
milk Chris 30.43
```
How can I get the result like this? I need to get distinct products, but only for the top buyer.
|
You can do this using window functions:
```
SELECT *
FROM (SELECT product, customer, sum(sales) as topSales,
ROW_NUMBER() OVER (PARTITION BY product ORDER BY SUM(sales) DESC) as seqnum
FROM invoices
WHERE product in(//lots of product id's here)
GROUP BY product, customer
) pc
WHERE seqnum = 1
ORDER BY topSales DESC;
```
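The greatest-n-per-group shape of this query can be demonstrated end to end with an in-memory database. A sketch in Python's sqlite3 (window functions need SQLite >= 3.25), using a subset of the question's rows:

```python
import sqlite3

# Greatest-n-per-group with ROW_NUMBER(); sample rows are a subset of
# the question's data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE invoices (product TEXT, customer TEXT, sales REAL);
INSERT INTO invoices VALUES
  ('banana', 'Chris', 50.35), ('banana', 'Eric', 34.87),
  ('apple',  'John',  78.67), ('apple',  'Henry', 65.78);
""")

rows = con.execute("""
SELECT product, customer, topSales
FROM (SELECT product, customer, SUM(sales) AS topSales,
             ROW_NUMBER() OVER (PARTITION BY product
                                ORDER BY SUM(sales) DESC) AS seqnum
      FROM invoices
      GROUP BY product, customer) pc
WHERE seqnum = 1
ORDER BY product
""").fetchall()
print(rows)  # [('apple', 'John', 78.67), ('banana', 'Chris', 50.35)]
```

Each partition (product) is numbered by descending sales, so `seqnum = 1` keeps exactly the top buyer per product.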
|
From: <https://dba.stackexchange.com/questions/1002/how-to-get-the-max-row>
(see link for more examples of how you can get your top row)
```
--ranking + derived table
SELECT
C.*
FROM
    (SELECT ROW_NUMBER() OVER (PARTITION BY product ORDER BY topSales DESC) AS topBuyer,
product, customer, sum(sales) as topSales FROM invoices
WHERE product in(//lots of product id's here)
GROUP BY product,customer ORDER BY topSales DESC) C
WHERE
C.topBuyer = 1
ORDER BY
C.product
```
|
Select distinct on one column, returning multiple rows and columns
|
[
"",
"sql",
"sql-server",
""
] |
I have a table that I need to query to obtain the most recent record in which the description contains certain data. The table columns contain (in part) the following:
```
+-----------+------------+-------------------+
| AccountID | Date | Description |
+-----------+------------+-------------------+
| 125060 | 2006-02-11 | Red Apple |
| 125060 | 2007-03-23 | Yellow Banana |
| 125060 | 2009-04-03 | Yellow Apple |
| 125687 | 2006-03-10 | Red Apple |
| 139554 | 2007-06-29 | Orange Orange |
| 139554 | 2009-07-24 | Green Apple |
| 145227 | 2008-11-22 | Green Pear |
| 145227 | 2012-04-16 | Yellow Grapefruit |
| 154679 | 2014-05-22 | Purple Grapes |
| 163751 | 2012-02-11 | Green Apple |
| ... | ... | ... |
+-----------+------------+-------------------+
```
(There are a few more columns, and hundreds of thousands of records, but this is all I am interested in at the moment)
For this example, I want the most recent record for a subset of AccountIDs that contained "Apple." The results I am looking for are:
```
+-----------+------------+--------------+
| AccountID | Date | Description |
+-----------+------------+--------------+
| 125060 | 2009-04-03 | Yellow Apple |
| 125687 | 2006-03-10 | Red Apple |
| 139554 | 2009-07-24 | Green Apple |
+-----------+------------+--------------+
```
The current query I am using is:
```
SELECT AccountID, max(Date), Description
FROM products
WHERE Description like "%Apple%" and (AccountID=125060 or AccountID=125687 or AccountID=139554)
GROUP BY AccountID;
```
Unfortunately, the results are yielding something to the tune of:
```
+-----------+------------+-------------------+
| AccountID | Date | Description |
+-----------+------------+-------------------+
| 125060 | 2009-04-03 | Red Apple |
| 125687 | 2006-03-10 | Red Apple |
| 139554 | 2009-07-24 | Green Apple |
+-----------+------------+-------------------+
```
where the AccountID is grouping properly, and the appropriate (most recent) date is being selected, but the Description is still returning the FIRST description that matches the WHERE/like clause... and NOT the description that correlates to the record with the selected date.
I've never seen anything like this before. Is this something I am doing wrong? I don't have a ton of experience with advanced MySQL queries, but would this be better suited for a left join or inner join on a subquery?
I thought about first using a subquery to pull all records that contain the desired text within the description, and then querying that subquery to select/group by most recent, but didn't know if that would be necessary or not.
Thanks so much in advance for the help!
**UPDATE**
This server host is running an old version of mySQL (4.0.17). Apparently this version is too old to support subqueries. Thanks to Shadow and shawnt00 it looks like the same thing is possible with a left join. Here is the query I am currently using:
```
SELECT p1.*
FROM products p1
LEFT JOIN products p2
on p1.AccountID=p2.AccountID and p1.Date<p2.Date and p2.Description like "%Apple%"
where p1.Description like "%Apple%" and p2.Date is null and (p1.AccountID=125060 or p1.AccountID=142580 or p1.AccountID=145135 or p1.AccountID=139254);
```
If any problems present with this query, I will post back. Thanks everyone!
|
In your query nothing guarantees that MySQL will select the description values that belong to the max(date) row. In fact, your query is invalid under the SQL standard and works in MySQL only under certain configuration settings (with ONLY_FULL_GROUP_BY disabled).
The solution is to get the max dates by account ids where the description matches the criteria in a subquery and join this back to the table itself using account id and max date:
```
SELECT p.AccountID, p.Date, p.Description
FROM products p
INNER JOIN (SELECT AccountID, max(Date) as maxdate
FROM products
WHERE Description like "%Apple%" and (AccountID=125060 or AccountID=125687 or AccountID=139554)
GROUP BY AccountID) t
ON p.AccountID=t.AccountID and p.Date=t.maxdate
WHERE Description like "%Apple%";
```
**UPDATE**
Mysql v4.0 does not support subqueries, therefore the above method is not applicable. You can still use a left join approach, where you self join the products table and use the is null expression to find those dates for which larger dates do not belong to:
```
select p1.*
from products p1
left join products p2
on p1.accountid=p2.accountid and p1.date<p2.date
where Description like "%Apple%" and p2.date is null;
```
|
Perhaps your old MySQL can handle this version. It combines the `AccountID` and `Date` values into a single result that works with `in`.
```
select
p.Account, p.Date, p.Description
from
products p
where
p.AccountID in (125060, 125687, 139554)
and p.Description like '%Apples%'
and concat(cast(p.AccountID as varchar(8)), date_format(p.Date, '%Y%m%d')) in
(
select concat(cast(p2.AccountID as varchar(8)), date_format(max(p2.Date), '%Y%m%d'))
from products p2
where p2.Description like '%Apple%'
group by p2.AccountID
)
```
Many platforms could handle this kind of subquery before they could work with "derived tables" and "inline views" in the `from` clause. I'm not sure about MySQL though.
|
MySQL Query: Select latest record where column contains certain criteria
|
[
"",
"mysql",
"sql",
"group-by",
"subquery",
""
] |
Suppose I have the following tables:
```
CREATE TABLE parents (
id int primary key
);
CREATE TABLE children (
parent_id int, --id from parents
day int,
status bool,
);
INSERT INTO parents (id) VALUES (1);
INSERT INTO children (parent_id, day, status) VALUES (1, 1, TRUE);
INSERT INTO children (parent_id, day, status) VALUES (1, 2, TRUE);
INSERT INTO parents (id) VALUES (2);
INSERT INTO children (parent_id, day, status) VALUES (2, 1, TRUE);
INSERT INTO children (parent_id, day, status) VALUES (2, 2, FALSE);
INSERT INTO parents (id) VALUES (3);
INSERT INTO children (parent_id, day, status) VALUES (3, 1, TRUE);
INSERT INTO parents (id) VALUES (4);
INSERT INTO children (parent_id, day, status) VALUES (4, 1, FALSE);
INSERT INTO parents (id) VALUES (5);
```
I need a query that will return:
```
Parents
+------------+
| id |
+------------+
| 1 |
| 3 |
+------------+
```
where `id` is the parent's id. The resulting table contains only the parents whose status is `true` on every day. Note that parents without children should be excluded.
My attempt:
```
SELECT id
FROM parents p
INNER JOIN children c ON c.parent_id=p.id
WHERE c.status = TRUE
GROUP BY id
```
But it will also give parent with `id=2`.
Another attempt:
```
SELECT id
FROM parents p
LEFT OUTER JOIN children c ON c.parent_id=p.id AND c.status=FALSE
WHERE c.status IS NULL
GROUP BY id
```
But this approach will also include parent with `id=5`, which must be excluded.
|
You don't need to join to parents.
```
SELECT parent_id
FROM children
GROUP BY parent_id
HAVING MIN(Status) = 'TRUE'
AND MAX(Status) = 'TRUE'
```
No other Status besides TRUE.
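The MIN/MAX trick can be reproduced with a few lines of Python and sqlite3; since SQLite stores booleans as 0/1, the comparison uses `1` instead of the string `'TRUE'`, but the idea is the same. Data mirrors the question's inserts.

```python
import sqlite3

# Parents qualify only when every child row has status = 1 (TRUE).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE children (parent_id INTEGER, day INTEGER, status INTEGER);
INSERT INTO children VALUES
  (1, 1, 1), (1, 2, 1),   -- all TRUE  -> keep
  (2, 1, 1), (2, 2, 0),   -- mixed     -> drop
  (3, 1, 1),              -- all TRUE  -> keep
  (4, 1, 0);              -- all FALSE -> drop
""")

rows = con.execute("""
SELECT parent_id
FROM children
GROUP BY parent_id
HAVING MIN(status) = 1 AND MAX(status) = 1
ORDER BY parent_id
""").fetchall()
print(rows)  # [(1,), (3,)]
```

Parent 5 never appears because it has no `children` rows at all, which is exactly the "exclude childless parents" requirement.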
|
```
SELECT id FROM parent P
WHERE (P.id) IN
(SELECT c.parent_id FROM children c WHERE c.status = TRUE)
```
This will give you the desired result.
|
Select rows where field in joining table is same in every row
|
[
"",
"mysql",
"sql",
""
] |
First, I know this has been addressed before - I've done my research and as you'll see below, I have tried multiple versions to get this to work.
I am trying to set up a query in which a variable can be set and the query uses that variable. Here's my code:
```
-- Set to the category you want to check
DECLARE @validationCategory varchar = 'Dept'
DECLARE @validationTable varchar = (SELECT ValidationTable FROM MasterFiles..Categories WHERE Category = @validationCategory AND TableToValidate = 'xref')
DECLARE @validationField varchar = (SELECT ValidationField FROM MasterFiles..Categories WHERE Category = @validationCategory AND TableToValidate = 'xref')
EXEC('
SELECT DISTINCT Category
FROM MasterFiles.dbo.xref
WHERE Category = ''' + @validationCategory + '''
AND (new_value NOT IN (SELECT ''' + @validationField + '''
FROM ' + @validationTable + ')
AND old_value NOT LIKE ''%New%''
OR (new_value IS NULL OR new_value = ''''))
)'
)
```
When I run it I am getting this error:
> Msg 102, Level 15, State 1, Line 6
> Incorrect syntax near ')'.
The most I've been able to glean from the error is that Level 15 means the error is correctable by the user. Great! Now if I could just find out exactly what the syntax problem is...
I'm suspecting it may be due to the fact that `@validationTable` is a *table*, not a string, but I'm not sure how to address that.
After some more research I then tried to put the query in a variable and then run EXEC on the variable:
```
--...
DECLARE @Query nvarchar
SET @Query = '
SELECT DISTINCT Category
FROM MasterFiles.dbo.xref
WHERE Category = ''' + @validationCategory + '''
AND (new_value NOT IN (SELECT ''' + @validationField + '''
FROM ' + @validationTable + ')
AND old_value NOT LIKE ''%New%''
OR (new_value IS NULL OR new_value = ''''))
)'
EXEC(@query)
```
That ran without errors, but I didn't get any output - just that it ran successfully.
After some more research about getting the output of a query as an EXECuted variable, I came up with this (from an example I found here on SO):
```
--...
DECLARE @vQuery nvarchar
SET @vQuery = '
SELECT DISTINCT Category
FROM MasterFiles.dbo.xref
WHERE Category = ''' + @validationCategory + '''
AND (new_value NOT IN (SELECT ''' + @validationField + '''
FROM ' + @validationTable + ')
AND old_value NOT LIKE ''%New%''
OR (new_value IS NULL OR new_value = ''''))
)'
DECLARE @vi int
--DECLARE @vQuery varchar(1000)
EXEC SP_EXECUTESQL
@query = @vQuery
, @params = '@vi int output'
, @vi = @vi output
SELECT @vi
```
That resulted in this error:
> Msg 214, Level 16, State 3, Procedure sp\_executesql, Line 1
> Procedure expects parameter '@parameters' of type 'ntext/nchar/nvarchar'.
So what's the correct way to go about this? In my research on this I've seen examples that use a stored procedure. I'd like to be able to do this without a stored procedure, because this query will be used by copy/pasting it into a query window in a client's SSMS, setting the table variable, and running the query, then moving on to the next client to do the same thing.
I need to be able to run this on servers installed with either SQL Server 2008 or 2012.
|
This is what your first attempt is generating:
```
SELECT DISTINCT Category
FROM MasterFiles.dbo.xref
WHERE Category = 'D'
AND (new_value NOT IN (SELECT 'M' FROM M)
AND old_value NOT LIKE '%New%'
OR (new_value IS NULL OR new_value = '')
)
) -- extra
```
This has an extra `)` and doesn't really make sense
On top of all of this, you're declaring your variables as VARCHAR with no length, which defaults to a length of 1 character:
```
DECLARE @validationCategory varchar = 'Dept'
DECLARE @validationTable varchar = (SELECT ValidationTable FROM MasterFiles..Categories WHERE Category = @validationCategory AND TableToValidate = 'xref')
DECLARE @validationField varchar = (SELECT ValidationField FROM MasterFiles..Categories WHERE Category = @validationCategory AND TableToValidate = 'xref')
```
`@validationCategory` would equal 'D' in this case. Make sure you add appropriate lengths to your varchars.
|
First, print the query variable (`@vQuery`), copy and paste it into a new query window, and execute it. That will let you identify exactly where the error is.
I think you need to do this, just like you do with your @validationTable:
```
DECLARE @vQuery nvarchar
SET @vQuery = '
SELECT DISTINCT Category
FROM MasterFiles.dbo.xref
WHERE Category = ''' + @validationCategory + '''
AND (new_value NOT IN (SELECT ' + @validationField + '
FROM ' + @validationTable + ')
AND old_value NOT LIKE ''%New%''
OR (new_value IS NULL OR new_value = ''''))
)'
```
|
SQL Server - Table name as variable in query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"variables",
"sql-server-2012",
""
] |
I am currently learning PL/SQL, so I am still a newbie. Assume that you have a production database, which you connect to using Oracle SQL Developer. You have ONLY READ privileges to that database, therefore you cannot create or edit any tables.
My question is: if I have a big list of IDs which I have to join with a table in that database, how can I do that?
Obviously, I could load the IDs into a temporary table and then do a join, but that would be really tedious as I have only READ privileges. Hardcoding the IDs is not an option either, because the list is too big.
Also note that I know the concept of TEMPORARY tables, but unfortunately I don't have privileges to create those either.
Is there any solution in SQL Developer where I can load the list of IDs to match with the table in the database?
|
Use a collection
```
VARIABLE cursor REFCURSOR;
DECLARE
your_collection SYS.ODCIVARCHAR2LIST := SYS.ODCIVARCHAR2LIST();
BEGIN
your_collection.EXTEND( 10000 );
FOR i IN 1 .. 10000 LOOP
-- Populate the collection.
your_collection(i) := DBMS_RANDOM.STRING( 'x', 20 );
END LOOP;
OPEN :cursor FOR
SELECT t.*
FROM your_table t
INNER JOIN
TABLE( your_collection ) c
ON t.id = c.COLUMN_VALUE;
END;
/
PRINT cursor;
```
Or doing the same thing via java:
```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import oracle.jdbc.OraclePreparedStatement;
import oracle.sql.ARRAY;
import oracle.sql.ArrayDescriptor;
public class TestDatabase2 {
public static void main(String args[]){
try{
Class.forName("oracle.jdbc.OracleDriver");
Connection con = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:XE","username","password");
String[] ids = { "1", "2", "3" };
ArrayDescriptor des = ArrayDescriptor.createDescriptor("SYS.ODCIVARCHAR2LIST", con);
PreparedStatement st = con.prepareStatement("SELECT t.* FROM your_table t INNER JOIN TABLE( :your_collection ) c ON t.id = c.COLUMN_VALUE");
// Passing an array to the procedure -
((OraclePreparedStatement) st).setARRAYAtName( "your_collection", new ARRAY( des, con, ids ) );
ResultSet cursor = st.executeQuery();
while ( cursor.next() )
{
int id = cursor.getInt(1);
double column1 = cursor.getDouble(2);
double column2 = cursor.getDouble(3);
System.out.println( String.format( "Id: %5d", id ) );
System.out.println( String.format( " Column1: %s", column1 ) );
System.out.println( String.format( " Column2: %s", column2 ) );
}
} catch(ClassNotFoundException | SQLException e) {
System.out.println(e);
}
}
}
```
|
Your friendly dba can map a directory for you to use, that will let you plop your file in there, and treat it as a table. Then basically you join with the file-as-table. Ask your DBA about EXTERNAL\_TABLES.
|
How to load a large number of strings to match with oracle database?
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I have tables A, B, C. Table A is linked to B, and table A is linked to C. I want to join the 3 tables and find the sum of B.cost and the sum of C.clicks. However, it is not giving me the expected value, and when I select everything without the group by, it is showing duplicate rows. I am expecting the row values from B to roll up into a single sum, and the row values from C to roll up into a single sum.
My query looks like
```
select A.*, sum(B.cost), sum(C.clicks) from A
join B
left join C
group by A.id
having sum(cost) > 10
```
I tried to group by `B.a_id` and `C.another_field_in_a` also, but that didn't work.
Here is a DB fiddle with all of the data and the full query:
> <http://sqlfiddle.com/#!9/768745/13>
Notice how the sum fields are greater than the sum of the individual tables? I'm expecting the sums to be equal, containing only the rows of the table B and C once. I also tried adding `distinct` but that didn't help.
I'm using Postgres. (The fiddle is set to MySQL though.) Ultimately I will want to use a `having` clause to select the rows according to their sums. This query will be for millions of rows.
|
I found this ([MySQL joining tables group by sum issue](https://stackoverflow.com/questions/12026182/mysql-joining-tables-group-by-sum-issue)) and created a query like this
```
select *
from A
join (select B.a_id, sum(B.cost) as cost
from B
group by B.a_id) B on A.id = B.a_id
left join (select C.keyword_id, sum(C.clicks) as clicks
from C
group by C.keyword_id) C on A.keyword_id = C.keyword_id
group by A.id
having sum(cost) > 10
```
I don't know if it's efficient though. I don't know if it's more or less efficient than Gordon's. I ran both queries and this one seemed faster, 27s vs. 2m35s. Here is a fiddle: <http://sqlfiddle.com/#!15/c61c74/10>
|
If I understand the logic correctly, the problem is the Cartesian product caused by the two joins. Your query is a bit hard to follow, but I think the intent is better handled with correlated subqueries:
```
select k.*,
(select sum(cost)
from ad_group_keyword_network n
where n.event_date >= '2015-12-27' and
n.ad_group_keyword_id = 1210802 and
k.id = n.ad_group_keyword_id
) as cost,
(select sum(clicks)
from keyword_click c
where (c.date is null or c.date >= '2015-12-27') and
k.keyword_id = c.keyword_id
) as clicks
from ad_group_keyword k
where k.status = 2 ;
```
[Here](http://sqlfiddle.com/#!9/768745/12) is the corresponding SQL Fiddle.
EDIT:
The subselect should be faster than the `group by` on the unaggregated data. However, you need the right indexes: `ad_group_keyword_network(ad_group_keyword_id, ad_group_keyword_id, event_date, cost)` and `keyword_click(keyword_id, date, clicks)`.
|
How can I join 3 tables and calculate the correct sum of fields from 2 tables, without duplicate rows?
|
[
"",
"sql",
"postgresql",
""
] |
According to the [`UPDATE` documentation](https://msdn.microsoft.com/en-us/library/ms177523.aspx#Anchor_8), an `UPDATE` always acquires an exclusive lock on the whole table. However, I am wondering if the exclusive lock is acquired before the rows to be updated are determined or only just before the actual update.
My concrete problem is that I have a nested `SELECT` in my `UPDATE` like this:
```
UPDATE Tasks
SET Status = 'Active'
WHERE Id = (SELECT TOP 1 Id
FROM Tasks
WHERE Type = 1
AND (SELECT COUNT(*)
FROM Tasks
WHERE Status = 'Active') = 0
ORDER BY Id)
```
Now I am wondering whether it is really guaranteed that there is exactly one
task with `Status = 'Active'` afterwards if in parallel the same statement may be executed with another Type:
```
UPDATE Tasks
SET Status = 'Active'
WHERE Id = (SELECT TOP 1 Id
FROM Tasks
WHERE Type = 2 -- <== The only difference
AND (SELECT COUNT(*)
FROM Tasks
WHERE Status = 'Active') = 0
ORDER BY Id)
```
If for both statements the rows to change would be determined before the lock is acquired, I could end up with two active tasks which I must prevent.
If this is the case, how can I prevent it? Can I prevent it without setting the transaction level to `SERIALIZABLE` or messing with lock hints?
From the answer to [Is a single SQL Server statement atomic and consistent?](https://stackoverflow.com/questions/21468742/is-a-single-sql-server-statement-atomic-and-consistent) I learned that the problem arises when the nested `SELECT` accesses another table. However, I'm not sure if I have to care about this issue if only the updated table is concerned.
|
No. At the very least, the nested select statement can be processed before the update is started and locks are acquired. To make sure that no other query interferes with this update, it is required to set the transaction isolation level to `SERIALIZABLE`.
This article (and the series it is part of) explains very well the subtleties of concurrency in SQL server:
<http://sqlperformance.com/2014/02/t-sql-queries/confusion-caused-by-trusting-acid>
|
If you want exactly one task with status = active, then set up the table to ensure this is true. Use a filtered unique index:
```
create unique index unq_tasks_status_filter_active on tasks(status)
where status = 'Active';
```
A second concurrent `update` might fail, but you will be ensured of uniqueness. Your application code can process such failed updates, and re-try.
Relying on the actual execution plans of the updates might be dangerous. That is why it is safer to have the database do such validations. Underlying implementation details could vary, depending on the environment and version of SQL Server. For instance, what works in a single threaded, single processor environment may not work in a parallel environment. What works with one isolation level may not work with another.
EDIT:
And, I cannot resist. For efficiency purposes, consider writing the query as:
```
UPDATE Tasks
SET Status = 'Active'
WHERE NOT EXISTS (SELECT 1
FROM Tasks
WHERE Status = 'Active'
) AND
Id = (SELECT TOP 1 Id
FROM Tasks
WHERE Type = 2 -- <== The only difference
ORDER BY Id
);
```
Then place indexes on `Tasks(Status)` and `Tasks(Type, Id)`. In fact, with the right query, you might find that the query is so fast (despite the update on the index) that your worry about current updates is greatly mitigated. This would not solve a race condition, but it might at least make it rare.
And if you are capturing errors, then with the unique filtered index, you could just do:
```
UPDATE Tasks
SET Status = 'Active'
WHERE Id = (SELECT TOP 1 Id
FROM Tasks
WHERE Type = 2 -- <== The only difference
ORDER BY Id
);
```
This will return an error if a row already is active.
Note: all these queries and concepts can be applied to "one active per group". This answer is addressing the question that you asked. If you have a "one active per group" problem, then consider asking another question.
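The filtered-index safety net is not specific to SQL Server. A minimal sketch of the same idea in Python's sqlite3, which calls these partial indexes; the table layout here is invented, only the mechanism matters:

```python
import sqlite3

# A partial unique index allows at most one row with status = 'Active'.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tasks (id INTEGER, type INTEGER, status TEXT);
CREATE UNIQUE INDEX unq_tasks_status_filter_active
    ON tasks(status) WHERE status = 'Active';
INSERT INTO tasks VALUES (1, 1, 'Pending'), (2, 2, 'Pending');
""")

con.execute("UPDATE tasks SET status = 'Active' WHERE id = 1")
try:
    con.execute("UPDATE tasks SET status = 'Active' WHERE id = 2")
    outcome = "two active tasks"          # would mean the index failed
except sqlite3.IntegrityError:
    outcome = "second activation rejected"
print(outcome)  # second activation rejected
```

The second concurrent activation fails at the constraint rather than silently producing two active tasks, which is the behavior the answer recommends handling with a retry in application code.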
|
Is a transaction that only updates a single table always isolated?
|
[
"",
"sql",
"sql-server",
"transaction-isolation",
""
] |
I would like to return boolean value if the record exists.
The below query works fine for single table.
```
SELECT CASE WHEN MAX(componentid) IS NULL THEN 'NO' ELSE 'YES' END table3
FROM table3
WHERE componentid = 'GetAccountBalance';
```
I have tried the same for 3 tables using JOIN, but I couldn't achieve the expected result. The query below returns "NO" for all values if any of the tables (say table3) doesn't have the record while the other two tables do.
```
select CASE WHEN MAX(a.componentid)IS NULL THEN 'NO' ELSE 'YES' END table1,
CASE WHEN MAX(b.componentid)IS NULL THEN 'NO' ELSE 'YES' END table2,
CASE WHEN MAX(c.componentid)IS NULL THEN 'NO' ELSE 'YES' END table3
from table1 a
join table2 b on a.componentid=b.componentid
join table3 c on a.componentid=c.componentid
and a.componentid ='GetAccountBalance';
```
Output
```
table1 table2 table3
NO NO NO
```
expected
```
table1 table2 table3
YES YES NO
```
Also would it be possible to search for multiple values using in? like
```
a.componentid in ('GetAccountBalance','GetCreditBalance')
```
|
You should phrase this as `exists`:
```
select (case when exists (select 1 from table1 where componentid = 'GetAccountBalance')
then 'YES' else 'NO'
end) as flagTable1,
(case when exists (select 1 from table2 where componentid = 'GetAccountBalance')
then 'YES' else 'NO'
end) as flagTable2,
(case when exists (select 1 from table3 where componentid = 'GetAccountBalance')
then 'YES' else 'NO'
end) as flagTable3
from dual;
```
The overhead of doing joins is simply unnecessary. The above should also make optimal use of indexes on the tables.
EDIT:
For multiple components, you can use correlated subqueries:
```
select (case when exists (select 1 from table1 t1 where t1.componentid = c.componentid)
then 'YES' else 'NO'
end) as flagTable1,
(case when exists (select 1 from table2 t2 where t2.componentid = c.componentid)
then 'YES' else 'NO'
end) as flagTable2,
       (case when exists (select 1 from table3 t3 where t3.componentid = c.componentid)
then 'YES' else 'NO'
end) as flagTable3
from (select 'GetAccountBalance' as componentid from dual union all
select 'GetCreditBalance' from dual
) c
```
|
You want an outer join on all three tables. You also need to include a condition that checks the presence of the value in all three tables:
```
select coalesce(a.componentid, b.componentid, c.componentid) as componentid,
case when a.componentid is null then 'no' else 'yes' end as in_table1,
case when b.componentid is null then 'no' else 'yes' end as in_table2,
case when c.componentid is null then 'no' else 'yes' end as in_table3
from table1 a
full outer join table2 b on a.componentid = b.componentid
full outer join table3 c on b.componentid = c.componentid
where a.componentid in ('GetAccountBalance','GetCreditBalance')
or b.componentid in ('GetAccountBalance','GetCreditBalance')
or c.componentid in ('GetAccountBalance','GetCreditBalance');
```
If you only use `where a.componentid in ('GetAccountBalance','GetCreditBalance')` the result would not have *any* rows if the values didn't exist at all in `table1`.
This will not return rows for values that are in none of the tables at all!
If `componentid` is **not** unique in each table, you might get multiple rows for each table.
SQLFiddle example: <http://sqlfiddle.com/#!4/c70b3e/1>
(The example uses numbers for componentid because I was too lazy to type the strings)
|
How to return boolean if record exists for multiple tables
|
[
"",
"sql",
"oracle",
""
] |
I have imported a price list from a CSV into my SQL Server database. That worked fine, but now some weird stuff happens. The table is named `PRICE`; it includes (among others) a column `Endprice` and has a total of 761 rows. All datatypes are `varchar(50)`.
```
SELECT MAX(Endprice)
FROM PRICE
```
When I want this simple SQL statement to show the highest price in the column, I get a wrong result. I don't know why.
I get 98,39 as a result, but that's definitely wrong; it must be 100,73.
Here you can see a part of the data:
[](https://i.stack.imgur.com/VYAR3.png)
And now the wrong `MAX()` result:
[](https://i.stack.imgur.com/Jea56.png)
BUT when I'm using the `MIN` function I get the highest one!? The min is somewhere at ~50 (not shown in the screenshot part).
[](https://i.stack.imgur.com/0vttD.png)
The resultset of `SELECT Endprice FROM PRICE` is correct. I am at my wit's end.
|
This is because your column is a varchar, so it is determining the min or max based on characters. The column should be a decimal or money type, so it sorts by the value of your number (instead of an alphabetic sort like you are getting now).
Alphabetic sort: 9 is more than 1, thus 98.39 is the max.
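To see the effect concretely, here is a small sketch using SQLite from Python (the table and values are made up for illustration); the same alphabetic-vs-numeric behavior applies in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE price (endprice VARCHAR(50))")
conn.executemany("INSERT INTO price VALUES (?)",
                 [("98,39",), ("100,73",), ("54,10",)])

# MAX on a varchar compares character by character: '9' > '1', so '98,39' wins
text_max = conn.execute("SELECT MAX(endprice) FROM price").fetchone()[0]

# Casting to a number (after fixing the decimal comma) gives the real maximum
num_max = conn.execute(
    "SELECT MAX(CAST(REPLACE(endprice, ',', '.') AS REAL)) FROM price"
).fetchone()[0]

print(text_max)  # '98,39' -- alphabetic maximum
print(num_max)   # 100.73  -- numeric maximum
```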
|
The reason is that the price is a `varchar()`.
Here are two solutions:
```
order by len(price), price
```
This works assuming that all the price values have the same structure.
Or:
```
order by cast(price as float)
```
If you could have non-numeric values (always a danger when storing numbers in the wrong data type):
```
order by (case when isnumeric(price) = 1 then cast(price as float) end)
```
Or better yet:
```
alter table PRICE alter column Endprice money
```
Then you don't have to worry about having the wrong type for the column.
|
SQL Server : aggregate function doesn't work?
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"aggregate-functions",
""
] |
I need to figure out which datatype to use for **states**.
Should it be `SET` or `VARCHAR` or anything else?
```
CREATE TABLE actors(
    state SET('USA','Germany', ...)
)
```
alternatively
```
CREATE TABLE actors(
state VARCHAR(30)
)
```
|
Assuming there are going to be tens or even hundreds of countries, it's best to use a separate table.
```
CREATE TABLE states(
state_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(30)
);
```
*It's also recommended to put a foreign key on `state_id`, so that deleting a state from your database won't break other data that depends on it.*
If each actor is going to be assigned only to one state (1:1), you can use column in the actors table.
```
CREATE TABLE actors(
actor_id INT ...,
state_id INT,
)
```
Or if each actor can be assigned to multiple states (M:N), use another table for these relations:
```
CREATE TABLE actors(
actor_id INT ...,
)
CREATE TABLE actors_to_states(
actor_id INT,
state_id INT
)
```
|
SET is a compound datatype containing values from a predefined set of possible values. If a table contains such data then, according to relational database theory, it is not in 1NF, so there are only a few special cases where this approach is reasonable. In most cases I suggest using a separate table for countries, as in the example below:
```
CREATE TABLE countries (id SMALLINT, name VARCHAR(100))
```
|
MYSQL|Datatype for states
|
[
"",
"mysql",
"sql",
"database",
"sqldatatypes",
"states",
""
] |
I need to join two tables in SQL. There are no common fields, but one table has a field with the value `krin1001` and I need it to be joined with the row in the other table where the value is `1001`.
The idea behind the join: I have multiple customers. In one table the customer id is 'krin1001', 'krin1002' and so on, and this table records how much they have sold. In the other table the customer id is '1001', '1002' and so on, and this table holds their name, address and so on. So it will always be the first 4 characters I need to strip from the field before matching and joining. It might not always be 'krin'; it needs to work with 'khjo1001' as well, and it still needs to join on the '1001' value from the other table.
Is that possible?
Hope you can help me.
|
You need to use substring:
```
ON SUBSTRING(TableA.Field, 5, 4) = TableB.Field
```
Or Right:
```
ON RIGHT(TableA.Field, 4) = TableB.Field
```
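If the numeric part can vary in length, `SUBSTRING(..., 5, 4)` would truncate it; taking everything after the first 4 characters is more robust. A sketch of that variant in SQLite from Python (table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, amount INTEGER)")
conn.execute("CREATE TABLE customers (id TEXT, name TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("krin1001", 500), ("khjo1002", 300)])
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("1001", "Alice"), ("1002", "Bob")])

# SUBSTR(customer, 5) takes everything from the 5th character onward,
# so it works even if the numeric part is not exactly 4 characters long
rows = conn.execute("""
    SELECT c.name, s.amount
    FROM sales s
    JOIN customers c ON SUBSTR(s.customer, 5) = c.id
    ORDER BY c.name
""").fetchall()

print(rows)  # [('Alice', 500), ('Bob', 300)]
```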
|
You can also try the `CHARINDEX` function for the join condition. If the value from `table1` contains the value from `table2`, the row will be included in the result set.
```
;WITH table1 AS(
SELECT 'krin1001' AS val
UNION ALL
SELECT 'xxx'
UNION ALL
SELECT 'xyz123'
),
table2 AS(
SELECT '1001' AS val
UNION ALL
SELECT '12345'
UNION ALL
SELECT '123'
)
SELECT * FROM table1 AS t
JOIN table2 AS T2 ON CHARINDEX(T2.val, T.val) > 0
```
|
SQL Join 2 tables with almost same field
|
[
"",
"mysql",
"sql",
""
] |
I have this query in postgresql:
```
select *
from A s
join B m on (s.id=m.id)
where m.key=4 and s.ran=some_input_from_user
```
This gives me all the rows that I need to update.
I want to set `A.value` to be `90` for all these rows.
It doesn't look like a standard update query;
if I do...
```
Update A set value=90 where.....
```
then I can't do the join.
any ideas how to do it?
|
This is the basic update syntax for PostgreSQL where you are updating based on a join to another table:
```
update A s
set
value = 90
from B m
where
s.id = m.id and
m.key = 4 and
s.ran = some_input_from_user
```
The trick is you never use the alias in the lvalue for the `set` commands. In other words, `value = 90` is not `s.value = 90`. It seems minor, but I'm pretty sure it will prevent your query from working. The rationale is if you are updating table A (alias s) then any fields you are updating are, de-facto, from table A -- no need to alias them, and to allow aliases would almost imply you could update something other than A with this statement, which you cannot.
You can definitely use them in the rvalues, so this would certainly be okay (if it were your desire to update A based on B):
```
update A s
set
value = m.salary * s.commission
from B m
where
s.id = m.id and
(s.value is null or
s.value != m.salary * s.commission)
```
|
Here is the query:
```
update a set value = 90
where exists (
select 1 from b
where a.id = b.id and b.key=4
and a.ran=some_input_from_user);
```
The above query will eliminate the requirement of reading table a twice.
Also you can use this query:
```
update a set value = 90
where a.id in
(select b.id from b
where a.id = b.id and b.key = 4
and a.ran=some_input_from_user);
```
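The `EXISTS` form is also easy to test on other engines; a minimal sketch with SQLite from Python (table names shortened, the user input hard-coded to 7 for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER, ran INTEGER, value INTEGER)")
conn.execute('CREATE TABLE b (id INTEGER, "key" INTEGER)')
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
                 [(1, 7, 0), (2, 7, 0), (3, 9, 0)])
conn.executemany("INSERT INTO b VALUES (?, ?)", [(1, 4), (2, 5)])

# Update only rows of a that have a matching row in b with key = 4
# and whose ran matches the user input (7 here)
conn.execute("""
    UPDATE a SET value = 90
    WHERE a.ran = 7
      AND EXISTS (SELECT 1 FROM b WHERE b.id = a.id AND b."key" = 4)
""")

print(conn.execute("SELECT id, value FROM a ORDER BY id").fetchall())
# [(1, 90), (2, 0), (3, 0)]
```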
|
update multiple rows with joins
|
[
"",
"sql",
"postgresql",
""
] |
This question has been asked previously and been well answered for MySQL specifically, however I'd very much appreciate your guys' help in translating this for SQL Server. The prior posts are closed and I didn't want to reopen them as they're quite old.
Below is the dataset to query, and the required output of the query solution. Below that is the excellent MySql solution posted here ([SQL best way to subtract a value of the previous row in the query?](https://stackoverflow.com/questions/13734976/sql-best-way-to-subtract-a-value-of-the-previous-row-in-the-query)). My question is how to accomplish the same result with variables in SQL Server. I apologize if this is a novice question, but I'm new to variables in SQL Server and have failed in my attempts to translate to SQL Server based on MSDN documentation on variables in SQL Server.
Thank you for your help!
```
select
EL.SN,
EL.Date,
    EL.Value,
if( @lastSN = EL.SN, EL.Value - @lastValue, 0000.00 ) as Consumption,
@lastSN := EL.SN,
@lastValue := EL.Value
from
EnergyLog EL,
( select @lastSN := 0,
@lastValue := 0 ) SQLVars
order by
EL.SN,
EL.Date
```
I need the consumption value to be based on the previous one, per SN number.
This is my data:
TABLE EnergyLog
```
SN Date Value
2380 2012-10-30 00:15:51 21.01
2380 2012-10-31 00:31:03 22.04
2380 2012-11-01 00:16:02 22.65
2380 2012-11-02 00:15:32 23.11
20100 2012-10-30 00:15:38 35.21
20100 2012-10-31 00:15:48 37.07
20100 2012-11-01 00:15:49 38.17
20100 2012-11-02 00:15:19 38.97
20103 2012-10-30 10:27:34 57.98
20103 2012-10-31 12:24:42 60.83
```
This is the result I need:
```
SN Date Value consumption
2380 2012-10-30 00:15:51 21.01 0
2380 2012-10-31 00:31:03 22.04 1.03
2380 2012-11-01 00:16:02 22.65 0.61
2380 2012-11-02 00:15:32 23.11 0.46
20100 2012-10-30 00:15:38 35.21 0
20100 2012-10-31 00:15:48 37.07 1.86
20100 2012-11-01 00:15:49 38.17 1.1
20100 2012-11-02 00:15:19 38.97 0.8
20103 2012-10-30 10:27:34 57.98 0
20103 2012-10-31 12:24:42 60.83 2.85
```
|
By using a CTE you would be able to get the data ordered, and then select the previous record.
```
WITH PREVCTE AS (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY SN ORDER BY MyDate) [RowNum]
FROM @myTable
)
SELECT A.SN,
A.MyDate,
A.[Value],
A.[Value] - COALESCE(B.[Value], A.[Value]) [Consumption]
FROM PREVCTE A LEFT OUTER JOIN PREVCTE B
ON A.RowNum - 1 = B.RowNum AND
A.SN = B.SN
```
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/dd32c2/1)**
|
My previous solution had calculated running totals. After shawnt00's comment and looking back at the example that you've provided I see that's not the case. This should make a solution much easier. I believe that the following will work for SQL 2008, although I don't have a 2008 box handy at the moment to test it.
```
;WITH CTE_Ordered AS
(
SELECT
sn,
[date],
value,
ROW_NUMBER() OVER(PARTITION BY sn ORDER BY [date]) AS row_num
FROM
dbo.EnergyLog
)
SELECT
T1.sn,
T1.[date],
T1.value,
    T1.value - COALESCE(T2.value, T1.value) AS consumption
FROM
CTE_Ordered T1
LEFT OUTER JOIN CTE_Ordered T2 ON
T2.sn = T1.sn AND
T2.row_num = T1.row_num - 1
ORDER BY
T1.sn,
T1.row_num
```
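On SQL Server 2012 and later (and most modern engines), the `LAG()` window function replaces the self-join entirely. A sketch of the same calculation, run here against SQLite (3.25+) from Python purely for illustration, with a trimmed-down copy of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE energylog (sn INTEGER, date TEXT, value REAL)")
conn.executemany("INSERT INTO energylog VALUES (?, ?, ?)", [
    (2380, "2012-10-30 00:15:51", 21.01),
    (2380, "2012-10-31 00:31:03", 22.04),
    (20100, "2012-10-30 00:15:38", 35.21),
    (20100, "2012-10-31 00:15:48", 37.07),
])

# LAG(value) fetches the previous row's value within each SN partition;
# COALESCE(..., 0) makes the first row of each SN show consumption 0
rows = conn.execute("""
    SELECT sn, date, value,
           ROUND(COALESCE(
               value - LAG(value) OVER (PARTITION BY sn ORDER BY date),
               0), 2) AS consumption
    FROM energylog
    ORDER BY sn, date
""").fetchall()

for r in rows:
    print(r)
```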
|
Subtracting Previous Value by Prior Group - SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have this query which does get the results I require but is incredibly slow and surely there must be a better way of doing this as I would need to run this regularly.
Each where clause has two variables which will always be the same number but I need to use this with 50+ BigIDs, the example shows 3 but I would do it for BigID 1 to 50+.
I am unsure how to make this shorter because of the two variables (one of which being in a subquery) and group by which is required.
Any help or pointing in the right direction would be appreciated. Thanks.
```
SELECT BigID,count(LittleID)
FROM Table1
where ( (BigID=1 and LittleID not in (SELECT LittleID FROM Table2 where BigID=1)) or
(BigID=2 and LittleID not in (SELECT LittleID FROM Table2 where BigID=2)) or
(BigID=3 and LittleID not in (SELECT LittleID FROM Table2 where BigID=3)) )
group by BigID
```
|
One method is a correlated subquery:
```
SELECT t1.BigID, count(t1.LittleID)
FROM Table1 t1
WHERE t1.BigID IN (1, 2, 3) and
t1.LittleID not in (SELECT t2.LittleID
FROM Table2 t2
WHERE t2.BigID = t1.BigId
)
GROUP BY t1.BigID
```
|
```
SELECT t1.BigID, COUNT(t1.LittleID)
FROM Table1 t1
LEFT JOIN Table2 t2 ON t1.LittleID = t2.LittleID AND t1.BigID = t2.BigID
WHERE t1.BigID IN (1, 2, 3)
AND t2.LittleID IS NULL
GROUP BY t1.BigID
```
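The `LEFT JOIN ... IS NULL` anti-join pattern is easy to verify; a minimal sketch in SQLite from Python with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (bigid INTEGER, littleid INTEGER)")
conn.execute("CREATE TABLE table2 (bigid INTEGER, littleid INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(1, 10), (1, 11), (2, 10), (3, 12)])
conn.executemany("INSERT INTO table2 VALUES (?, ?)", [(1, 10), (2, 10)])

# Count rows of table1 that have no matching (bigid, littleid) pair in table2
rows = conn.execute("""
    SELECT t1.bigid, COUNT(t1.littleid)
    FROM table1 t1
    LEFT JOIN table2 t2
      ON t1.littleid = t2.littleid AND t1.bigid = t2.bigid
    WHERE t1.bigid IN (1, 2, 3) AND t2.littleid IS NULL
    GROUP BY t1.bigid
""").fetchall()

print(rows)  # [(1, 1), (3, 1)]
```

Note that BigID 2 disappears from the output entirely when all of its rows are matched; that is the same behavior as the original `NOT IN` query.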
|
Making SQL query with multiple where conditions and variables more efficient
|
[
"",
"sql",
"sql-server",
""
] |
I have a query that looks up a list of documents depending on their department and their status.
```
DECLARE @StatusIds NVARCHAR(MAX) = '1,2,3,4,5';
DECLARE @DepartmentId NVARCHAR(2) = 'IT';
SELECT ILDPST.name,
COUNT(*) AS TodayCount
FROM dbo.TableA ILDP
LEFT JOIN dbo.TableB ILDPS ON ILDPS.IntranetLoanDealPreStateId = ILDP.IntranetLoanDealPreStateId
LEFT JOIN dbo.TableC ILDPST ON ILDPST.IntranetLoanDealPreStateTypeId = ILDPS.CurrentStateTypeId
WHERE (ILDP.CreatedByDepartmentId = @DepartmentId OR @DepartmentId IS NULL)
AND ILDPS.CurrentStateTypeId IN (
SELECT value
FROM dbo.StringAsIntTable(@StatusIds)
)
GROUP BY ILDPST.name;
```
This returns the results:
[](https://i.stack.imgur.com/62YWT.png)
However, I'd also like to be able to return statuses where the `TodayCount` is equal to 0 (i.e. any status with an id included in `@StatusIds` should be returned, regardless of `TodayCount`).
I've tried messing with some unions / joins / ctes but I couldn't quite get it to work. I'm not much of an SQL person so not sure what else to provide that could be useful.
Thanks!
|
Try this instead:
```
DECLARE @StatusIds NVARCHAR(MAX) = '1,2,3,4,5';
DECLARE @DepartmentId NVARCHAR(2) = 'IT';
SELECT ILDPST.name,
COUNT(ILDP.IntranetLoanDealPreStateId) AS TodayCount
FROM
dbo.TableC ILDPST
LEFT JOIN
dbo.TableB ILDPS ON ILDPST.IntranetLoanDealPreStateTypeId = ILDPS.CurrentStateTypeId
LEFT JOIN
dbo.TableA ILDP ON ILDPS.IntranetLoanDealPreStateId = ILDP.IntranetLoanDealPreStateId
AND (ILDP.CreatedByDepartmentId = @DepartmentId OR @DepartmentId IS NULL)
WHERE
ILDPST.IntranetLoanDealPreStateTypeId
IN (
SELECT value
FROM dbo.StringAsIntTable(@StatusIds)
)
GROUP BY ILDPST.name;
```
|
If you want to have all the records from TableC you need to left join all other tables to it, not left join it to the other tables. Also it's best to `INNER JOIN` the filtering table you create from `@StatusIds` rather than apply it through an `IN` clause. Try this:
```
DECLARE @StatusIds NVARCHAR(MAX) = '1,2,3,4,5';
DECLARE @DepartmentId NVARCHAR(2) = 'IT';
SELECT ILDPST.Name, COUNT(ILDP.IntranetLoanDealPreStateId) AS TodayCount
FROM (SELECT DISTINCT value FROM dbo.StringAsIntTable(@StatusIds)) StatusIds
INNER JOIN dbo.TableC ILDPST
ON ILDPST.IntranetLoanDealPreStateTypeId = StatusIds.value
LEFT JOIN dbo.TableB ILDPS
ON ILDPS.CurrentStateTypeId = ILDPST.IntranetLoanDealPreStateTypeId
LEFT JOIN dbo.TableA ILDP
ON ILDP.IntranetLoanDealPreStateId = ILDPS.IntranetLoanDealPreStateId
AND (ILDP.CreatedByDepartmentId = @DepartmentId OR @DepartmentId IS NULL)
GROUP BY ILDPST.Name;
```
|
Returning column with count of 0
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
In the table JAN07, I have a column TEMPO\_INTERCORSO (number(10,0)) and I want to create another column ANTE\_POST (number(1)) that is '0' when TEMPO\_INTERCORSO > 0 and '1' otherwise.
```
IF (TEMPO_INTERCORSO > 0) THEN
UPDATE JAN07 SET ANTE_POST = 1
ELSE
UPDATE JAN07 SET ANTE_POST = 0
END IF;
```
I've tried a lot of different ways that you can find on the web (for example the ";" before the ELSE: [sometimes](http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/controlstructures.htm#BABGBDCB) there is, [sometimes](https://docs.oracle.com/cd/B10501_01/appdev.920/a96624/04_struc.htm#484) not), but it is still not working.
```
IF (TEMPO_INTERCORSO = 0) THEN
Report error -
Comando sconosciuto
```
Some ideas?
ps= I've tried with JAN07.TEMPO\_INTERCORSO too!
|
The following `UPDATE` query uses `CASE...WHEN` to achieve what you want:
```
UPDATE JAN07
SET ANTE_POST = CASE WHEN TEMPO_INTERCORSO > 0 THEN 1 ELSE 0 END
```
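A quick way to convince yourself that the single statement covers both branches; sketched here in SQLite from Python (the same `CASE` works unchanged in Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jan07 (tempo_intercorso INTEGER, ante_post INTEGER)")
conn.executemany("INSERT INTO jan07 (tempo_intercorso) VALUES (?)",
                 [(0,), (5,), (-3,)])

# One UPDATE handles both the > 0 and the <= 0 case
conn.execute("""
    UPDATE jan07
    SET ante_post = CASE WHEN tempo_intercorso > 0 THEN 1 ELSE 0 END
""")

rows = conn.execute(
    "SELECT tempo_intercorso, ante_post FROM jan07 ORDER BY tempo_intercorso"
).fetchall()
print(rows)  # [(-3, 0), (0, 0), (5, 1)]
```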
|
I would rather suggest **[Virtual Columns](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_7002.htm#BABCHBHE)** introduced in `Oracle Database 11g Release 1`. A simple **CASE** expression would do the rest.
For example,
```
SQL> CREATE TABLE t
2 (
3 TEMPO_INTERCORSO NUMBER,
4 ANTE_POST NUMBER GENERATED ALWAYS AS (
5 CASE
6 WHEN TEMPO_INTERCORSO > 0
7 THEN 1
8 ELSE 0
9 END) VIRTUAL
10 );
Table created.
```
Now, you need not worry about manually updating the virtual column, it will be automatically generated at run time.
Let's **insert the values only in static column** and see:
```
SQL> INSERT INTO t(TEMPO_INTERCORSO) VALUES(0);
1 row created.
SQL> INSERT INTO t(TEMPO_INTERCORSO) VALUES(1);
1 row created.
SQL> INSERT INTO t(TEMPO_INTERCORSO) VALUES(10);
1 row created.
SQL> commit;
Commit complete.
SQL> SELECT * FROM t;
TEMPO_INTERCORSO ANTE_POST
---------------- ----------
0 0
1 1
10 1
```
So, you have your column `ANTE_POST` with desired values automatically.
|
IF-ELSE statement: create a column depending on another one
|
[
"",
"sql",
"oracle",
"if-statement",
"plsql",
"sql-update",
""
] |
This is my sql table: [SQLFiddle](http://sqlfiddle.com/#!9/3ff1c/3)
I'm trying to get each `id_user` once, but with two or more conditions in the `WHERE` clause.
If I use this code:
```
SELECT DISTINCT id_user FROM base_group_details WHERE id_category=3
```
I'm getting a good result, but I wanted to get results with two or more parameters in the WHERE clause, for example:
```
SELECT DISTINCT id_user FROM base_group_details WHERE id_category=3 AND id_category=4
```
I should get 2 results, but it's not working.
|
I guess this is what you want
```
SELECT DISTINCT id_user
FROM base_group_details
WHERE id_category=3 and id_user IN (
SELECT DISTINCT id_user
FROM base_group_details
WHERE id_category = 4 )
```
The Output is id\_user 1 and 3
|
No single row can have a category of both 3 and 4, so no rows are returned. One way to solve these types of questions is to group the result and count the number of relevant distinct categories each user has:
```
SELECT id_user
FROM base_group_details
WHERE id_category IN (3, 4)
GROUP BY id_user
HAVING COUNT(DISTINCT id_category) = 2
```
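This group-and-count approach (with a `GROUP BY id_user` before the `HAVING`) can be checked quickly; a sketch in SQLite from Python with made-up rows standing in for the fiddle data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE base_group_details (id_user INTEGER, id_category INTEGER)")
conn.executemany("INSERT INTO base_group_details VALUES (?, ?)",
                 [(1, 3), (1, 4), (2, 3), (3, 3), (3, 4)])

# Users that have BOTH category 3 and category 4
rows = conn.execute("""
    SELECT id_user
    FROM base_group_details
    WHERE id_category IN (3, 4)
    GROUP BY id_user
    HAVING COUNT(DISTINCT id_category) = 2
    ORDER BY id_user
""").fetchall()

print([r[0] for r in rows])  # [1, 3]
```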
|
mysql select once (DISTINCT not working)
|
[
"",
"mysql",
"sql",
"select",
""
] |
I've got a table of purchasing decisions that looks like this:
```
org_id item_id spend
--------------------------
123 AAB 2
123 AAC 4
124 AAB 10
124 AAD 5
```
I want to find all the items that were only bought by three or fewer organisations, then I want to order them by summed spend, along with the IDs of the organisations.
This is my query for getting the items in that list:
```
SELECT
item_id,
EXACT_COUNT_DISTINCT(org) AS org_count,
SUM(spend) AS total_spend
FROM
[mytable]
GROUP BY
item_id
HAVING
org_count < 4
ORDER BY
total_spend DESC
```
It gives me results that look like this:
```
item_id total_spend
--------------------------
AAB 12
AAC 4
AAD 5
```
But I need to extend this query to also return the IDs of those organisations.
Is this possible in a single query, or do I need to do it in multiple queries?
The query for getting the IDs of the organisations on their own would look like:
```
SELECT
org
FROM
mytable
WHERE item_id IN (SELECT item_id ... etc, query as above)
```
But I'm not sure how to glue the two together.
UPDATE: Ideally I'd end up with something a lot like the original table, but only containing those items bought by three or fewer organisations:
```
org_id item_id spend
--------------------------
123 AAB 2
123 AAC 4
124 AAB 10
124 AAD 5
```
|
Try a query like this. In the result set you will see all items that were bought by three or fewer organisations, together with the total spend.
```
SELECT T2.org_id,
T.item_id
FROM table AS T2
JOIN
(SELECT item_id,
SUM(spend) AS total_spend
FROM table AS T
GROUP BY T.item_id
HAVING COUNT(DISTINCT org_id) < 4) AS T ON T.item_id = T2.item_id
ORDER BY T.total_spend DESC
```
|
In BigQuery - **JOIN**s sometimes is quite a headache (depends on multiple factors), so it is always good to have some non-join solutions in your arsenal.
Below are few examples of such based on [Window functions](https://cloud.google.com/bigquery/query-reference#windowfunctions) :
I think they can be interesting both from a practical and a learning perspective.
Option #1 - with group\_concat/regexp trick
```
SELECT org_id, item_id, spend
FROM (
SELECT org_id, item_id, spend,
GROUP_CONCAT(STRING(org_id)) OVER(PARTITION BY item_id) AS orgs
FROM table
)
WHERE 1 + LENGTH(REGEXP_REPLACE(orgs, r'[^,]', '')) < 4
ORDER BY item_id, org_id
```
Option #2 - assuming the average number of organizations per item is not too big (so the count distinct is more or less exact):
```
SELECT org_id, item_id, spend
FROM (
SELECT org_id, item_id, spend,
COUNT(DISTINCT org_id) OVER(PARTITION BY item_id) AS orgs
FROM table
)
WHERE orgs < 4
ORDER BY item_id, org_id
```
|
SQL: how to join multiple queries together?
|
[
"",
"sql",
"google-bigquery",
""
] |
I have products with different details in different attributes and I need to develop an algorithm to find the most similar to the one I'm trying to find.
For example, if a product has:
* Weight: 100lb
* Color: Black, Brown, White
* Height: 10in
* Conditions: new
Others can have different colors, weight, etc. Then I need to do a search where the most similar return first. For example, if everything matches but the color is only Black and White but not Brown, it's a better match than another product that is only Black but not White or Brown.
I'm open to suggestions as the project is just starting.
One approach, for example, would be to restrict each attribute (weight, color, size) to a limited set of options, so I can build a binary representation. Then I would have something like this for each product:
```
Colors Weight Height Condition
00011011000 10110110 10001100 01
```
Then if I do an XOR between the product's binary representation and my search, I can calculate the number of set bits to see how similar they are (all zeros would mean exact match).
The problem with this approach is that I cannot index that on a database, so I would need to read all the products to make the comparison.
Any suggestions on how I can approach this? Ideally I would like to have something I can index on a database so it's fast to query.
Further question: also if I could use different weights for each attribute, it would be awesome.
|
What you want to do is a perfect use case for [elasticsearch](https://www.elastic.co/products/elasticsearch) and other similar search oriented databases. I don't think you need to hack with bitmasks/etc.
You would typically maintain your primary data in your existing database (sql/cassandra/mongo/etc..anything works), and copy things that need searching to elasticsearch.
|
You basically need to come up with a distance metric to define the distance between two objects. Calculate the distance from the object in question to each other object, then you can either sort by minimum distance or just select the best.
Without some highly specialized algorithm based on the full data set, the best you can do is a linear time distance comparison with every other item.
You can **estimate** the nearest by keeping sorted lists of certain fields such as Height and Weight and cap the distance at a threshold (like in broad phase collision detection), then limit full distance comparisons to only those items that meet the thresholds.
|
Similarity matching algorithm
|
[
"",
"sql",
"algorithm",
"performance",
"categories",
""
] |
The question is,
Determine names of all the ships in Ships table that meet a combination of at least four criteria from the following list: numGuns = 8 bore = 15 displacement = 32000 type = bb launched = 1915 class = Kongo country = USA.
I found the answer for this exercise.
The answer is,
```
SELECT s.name from ship s,classes c
WHERE s.class=c.class AND
((numGuns = 8 AND bore = 15 AND displacement = 32000 AND type = 'bb')
OR (numGuns = 8 AND bore = 15 AND displacement = 32000 AND launched = 1915)
OR (numGuns = 8 AND bore = 15 AND displacement = 32000 AND c.class = 'Kongo')
OR (numGuns = 8 AND bore = 15 AND displacement = 32000 AND country = 'USA')
OR (numGuns = 8 AND bore = 15 AND type = 'bb' AND launched = 1915)
OR (numGuns = 8 AND bore = 15 AND type = 'bb' AND c.class = 'kongo')
OR (numGuns = 8 AND bore = 15 AND type = 'bb' AND country = 'USA')
OR (numGuns = 8 AND bore = 15 AND launched = 1915 AND c.class = 'Kongo')
OR (numGuns = 8 AND bore = 15 AND launched = 1915 AND country = 'USA')
OR (numGuns = 8 AND bore = 15 AND c.class = 'Kongo' AND country = 'USA')
OR (numGuns = 8 AND displacement = 32000 AND type = 'bb' AND launched = 1915)
OR (numGuns = 8 AND displacement = 32000 AND type = 'bb' AND c.class = 'kongo')
OR (numGuns = 8 AND displacement = 32000 AND type = 'bb' AND country = 'USA')
OR (numGuns = 8 AND displacement = 32000 AND launched = 1915 AND c.class = 'Kongo')
OR (numGuns = 8 AND displacement = 32000 AND launched = 1915 AND country = 'USA') O
```
My question is,
Is there any simpler way to check the conditions?
|
The cast from `boolean` to `integer` yields 0 or 1:
```
select s.name
from
ship s
inner join
classes c using (class)
where
(numguns = 8)::int +
(bore = 15)::int +
(displacement = 32000)::int +
(type = 'bb')::int +
(launched = 1915)::int +
(class = 'Kongo')::int +
(country = 'USA')::int
>= 4
```
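In SQLite a comparison already evaluates to 0 or 1, so the same trick works there without the explicit cast; a sketch from Python with abbreviated columns and made-up ship data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ships
    (name TEXT, numguns INTEGER, bore INTEGER, launched INTEGER, country TEXT)""")
conn.executemany("INSERT INTO ships VALUES (?, ?, ?, ?, ?)", [
    ("Kirishima", 8, 15, 1915, "Japan"),
    ("Iowa", 9, 16, 1942, "USA"),
])

# Each comparison yields 0 or 1, so summing them counts matched criteria
rows = conn.execute("""
    SELECT name
    FROM ships
    WHERE (numguns = 8) + (bore = 15) + (launched = 1915) + (country = 'USA') >= 3
""").fetchall()

print(rows)  # [('Kirishima',)]
```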
|
You can use `CASE WHEN` expressions to count the number of matching columns for each record, and then wrap this query to get only records with 4 or more matching columns.
```
SELECT t.name
FROM
(
SELECT s.name,
CASE WHEN s.numGuns = 8 THEN 1 ELSE 0 END AS c1,
           CASE WHEN s.bore = 15 THEN 1 ELSE 0 END AS c2,
CASE WHEN s.displacement = 32000 THEN 1 ELSE 0 END AS c3,
CASE WHEN s.type = 'bb' THEN 1 ELSE 0 END AS c4,
CASE WHEN s.launched = 1915 THEN 1 ELSE 0 END AS c5,
CASE WHEN c.class = 'Kongo' THEN 1 ELSE 0 END AS c6,
CASE WHEN s.country = 'USA' THEN 1 ELSE 0 END AS c7
FROM ship s INNER JOIN classes c ON s.class = c.class
) t
WHERE (t.c1 + t.c2 + t.c3 + t.c4 + t.c5 + t.c6 + t.c7) >= 4
```
|
SQL : 4 conditions combined with OR
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables - client and person. They are linked by `personID` in both tables.
I am trying to create a view.
I tried:
```
SELECT * FROM client INNER JOIN person ON client.personID = person.personID
```
It didn't like it, saying `Duplicate column name 'personID'`.
I thought if you used `table.column` that was fine as an identifier. I tried with and without single quotes.
I could do an alias if I wanted specific columns EG `client.personID AS perID` but I cannot work out how to make an alias for the columns in this position IE when they are part of the JOIN definition.
I have been looking and trying for over an hour and just cannot find it.
**EDIT:** This is a question SPECIFICALLY about aliasing columns in a SELECT \* join. Everyone seems to be saying put in the explicit columns. I know how to do that. I deliberately want to use a \* JOIN for development work. This will be a tiny DB so it does even not matter if it stays in. Oh and **this is only a problem storing as a VIEW not as an SQL query or in a bit of code**.
|
It looks like you have to alias your column names.
```
SELECT client.column1 as col1, client.column2 as col2, person.column1 as colp1 FROM client INNER JOIN person ON client.personID = person.personID
```
Of course, replace the column names with your real column names and use more meaningful aliases.
Let us know if it helps
## UPDATE #1
I tried creating 2 tables with sqlfiddle in mySQL 5.5 and 5.6
see link : <http://sqlfiddle.com/#!9/e70ab/1>
It works as expected.
Maybe you could share your table schema.
Here's the example code :
```
CREATE TABLE Person
(
personID int,
name varchar(255)
);
CREATE TABLE Client
(
ID int,
name varchar(255),
personID int
);
insert into Person values(1, 'person1');
insert into Person values(2, 'person2');
insert into Person values(3, 'person3');
insert into Client values(1, 'client1', 1);
insert into Client values(2, 'client2', 1);
insert into Client values(3, 'client1', 1);
SELECT * FROM client
INNER JOIN person
ON client.personID = person.personID;
```
|
```
SELECT
c.*,
p.`all`,
p.need,
p.`fields`,
p.`of`,
p.person,
p.`table`,
p.without,
p.personID_field
FROM client c
INNER JOIN person p
ON p.personID = c.personID
```
|
Alias for column name on a SELECT * join
|
[
"",
"mysql",
"sql",
"join",
"alias",
""
] |
The tables I am working on contain details about patients' treatment and details about their appointments, named vwTreatmentPlans and vwAppointmentDetails respectively.
My aim is to return one row per patient code only. I want it to display two columns: the patient code from the vwTreatmentPlans table and the appointmentDateTimevalue from the vwAppointmentDetails table. MOST IMPORTANTLY, wherever there is more than one appointment row, I want only the latest appointment details to be displayed, hence:
```
vA.appointmentDateTimevalue Desc
```
Using the AND clauses, only one row per PatientCode is returned which is what I want. However, there is problem of a many to one relationship between patient codes from the two tables.
```
SELECT
vT.PatientCode, MAX(vA.appointmentDateTimevalue)
FROM vwTreatmentPlans vT
INNER JOIN vwAppointmentDetails vA ON vT.PatientCode = vA.patientcode
WHERE
vT.PatientCode IN ( 123)
AND
vT.[Current] = 1
AND
vT.Accepted = 1
GROUP BY vT.PatientCode, vA.appointmentDateTimevalue
ORDER by vT.PatientCode, vA.appointmentDateTimevalue Desc
```
For example, one patient code returns the following output:
```
PatientCode appointmentDateTimevalue
123 2016-02-01 09:10:00.000
123 2016-01-07 09:15:00.000
123 2015-12-31 10:40:00.000
```
So for the above example, I want this output:
```
PatientCode appointmentDateTimevalue
123 2016-02-01 09:10:00.000
```
If there were more than one patient code being selected, I would want:
```
PatientCode appointmentDateTimevalue
123 2016-02-01 09:10:00.000
456 2016-04-11 15:45:00.000
```
I've tried messing around with nested selects, having clauses etc. and frankly haven't a clue. I would really appreciate some help with something that must be disappointingly simple!
Thanks.
|
Why are you grouping by `vA.appointmentDateTimevalue`? You needn't do that.
You can get your result set with the following query:
```
SELECT
vT.PatientCode,
MAX(vA.appointmentDateTimevalue) as max_date
FROM vwTreatmentPlans vT
INNER JOIN vwAppointmentDetails vA ON vT.PatientCode = vA.patientcode
WHERE vT.[Current] = 1
AND vT.Accepted = 1
GROUP BY vT.PatientCode
ORDER BY vt.patientCode
```
|
All the data you show (`PatientCode` and `appointmentDateTimevalue`) is available in table `vwAppointmentDetails`. So select from that table.
The criteria which records to select are in table `vwTreatmentPlans`, so have it in the where clause.
```
select patientcode, max(appointmentdatetimevalue)
from vwappointmentdetails
where patientcode in
(
select patientcode
from vwtreatmentplans
where patientcode in (123, ...)
and current = 1
and accepted = 1
)
group by patientcode;
```
No need to join here. This makes the query both very readable and maintainable. Disappointingly simple? :-)
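The `MAX()`-per-group pattern above is easy to check; a sketch in SQLite from Python, with simplified column names (`is_current` stands in for `Current`, which is a keyword in some engines) and made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appointments (patientcode INTEGER, appt_time TEXT)")
conn.execute(
    "CREATE TABLE plans (patientcode INTEGER, is_current INTEGER, accepted INTEGER)")
conn.executemany("INSERT INTO appointments VALUES (?, ?)", [
    (123, "2015-12-31 10:40:00"),
    (123, "2016-02-01 09:10:00"),
    (456, "2016-04-11 15:45:00"),
])
conn.executemany("INSERT INTO plans VALUES (?, ?, ?)",
                 [(123, 1, 1), (456, 1, 1), (789, 0, 1)])

# MAX() per patient, restricted to patients with a current, accepted plan
rows = conn.execute("""
    SELECT patientcode, MAX(appt_time)
    FROM appointments
    WHERE patientcode IN (
        SELECT patientcode FROM plans
        WHERE patientcode IN (123, 456)
          AND is_current = 1 AND accepted = 1
    )
    GROUP BY patientcode
    ORDER BY patientcode
""").fetchall()

print(rows)
# [(123, '2016-02-01 09:10:00'), (456, '2016-04-11 15:45:00')]
```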
|
SQL - Join two tables and return one row only contain the latest time from the second table
|
[
"",
"sql",
"join",
"greatest-n-per-group",
""
] |
I wonder why one should ever care about explicitly specifying an index in a query. I frequently see queries like this:
```
select blabla from Table with (nolock, index(index_name))
```
The index is already defined, and the query optimizer chooses the best index by itself. Isn't it more logical and efficient to write the query like this:
```
select blabla from Table with(nolock)
```
What are the advantages of explicitly writing an index in query?
**EDIT**
Thank you all for your opinions. The best choice is not to force indexes explicitly.
The only case where it helps is for database vendors, and it is not about performance: they specify indexes explicitly to force the execution plan not to change, so that execution times stay predictable.
|
This question is somewhat rhetorical: one must measure efficiency, not just reason about it.
In ssms you can do it with "Include Actual Execution Plan" (Ctrl-M).
Try executing the following in ssms and look at the "Execution plan":
```
SELECT * FROM msdb.dbo.sysjobs with (index = nc1)
SELECT * FROM msdb.dbo.sysjobs with (index = clust)
SELECT * FROM msdb.dbo.sysjobs
```
You will find that: a) the first index hint only slows things down, b) the hint on the clustered index and no hint produce equivalent plans.
|
Specifying Index in the SQL SERVER causes Force Seeking of that Index.
> When any query is run, the SQL Server engine determines which index has to
> be used. SQL Server uses the index which has the lowest cost based on
> performance; the index which is best for performance is automatically
> used. There are some instances when the database developer is the best judge
> of the index to use. The DBA can direct SQL Server which index to be used to
> execute the query.
>
> If the index that you have created is not picked up by SQL Server,
> then try telling the SQL Server by using the HINT.
You can refer to the links below:
<http://blog.sqlauthority.com/2009/02/07/sql-server-introduction-to-force-index-query-hints-index-hint/>
<https://www.simple-talk.com/sql/performance/index-selection-and-the-query-optimizer/>
|
Advantages of explicitly writing an index
|
[
"",
"sql",
"sql-server",
"t-sql",
"indexing",
"sql-server-2008-r2",
""
] |
I have a table with columns as follows..
1. id
2. person1
3. person2
let's say there are 2 records having values...
```
id1 - 2222 - 4444
id2 - 3333 - 6666
```
I want to modify the value of that cell whose value is '4444' and id is 'id1'
|
You could use a `case` expression to only update the appropriate columns:
```
UPDATE mytable
SET person1 = CASE person1 WHEN '4444' THEN 'new value' ELSE person1 END,
person2 = CASE person2 WHEN '4444' THEN 'new value' ELSE person2 END
WHERE id = 'id1' AND
'4444' IN (person1, person2)
```
|
@Mureinik has given the answer. But since you asked how to build this query for the other 49 columns, you can prepare the query dynamically and then execute it. Below is the same query mentioned by @Mureinik, but created dynamically:
```
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(UN.column_name, ' = CASE ', UN.column_name,' WHEN ''4444'' THEN ''1111'' ELSE ', UN.column_name,' END'))
INTO @sql
FROM (
SELECT table_name, column_name
FROM INFORMATION_SCHEMA.COLUMNS
where table_name = 'MyTable'
AND column_name like 'person%') UN;
SET @sql = CONCAT('UPDATE mytable SET ', @sql, ' WHERE id = ''id1''');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```
[SQL Fiddle Link](http://sqlfiddle.com/#!2/3c6e75/7)
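`INFORMATION_SCHEMA` plus `PREPARE`/`EXECUTE` is MySQL-specific; the same build-the-SQL-from-the-catalog idea can be sketched in Python against SQLite, where `PRAGMA table_info` stands in for the column catalog (table and column names are the question's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id TEXT, person1 TEXT, person2 TEXT)")
conn.execute("INSERT INTO mytable VALUES ('id1', '2222', '4444')")

# Enumerate the person% columns from the catalog (row[1] is the column name).
cols = [r[1] for r in conn.execute("PRAGMA table_info(mytable)")
        if r[1].startswith("person")]

# Build one CASE assignment per matching column, then run the UPDATE.
sets = ", ".join(
    f"{c} = CASE {c} WHEN '4444' THEN '1111' ELSE {c} END" for c in cols)
conn.execute(f"UPDATE mytable SET {sets} WHERE id = 'id1'")

row = conn.execute("SELECT person1, person2 FROM mytable").fetchone()
```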
|
How to find a specific column name from a table whose column value is matched?
|
[
"",
"mysql",
"sql",
"sql-update",
""
] |
I have a PostgreSQL table with the following relevant fields:
```
url
title
created_at
```
There can be many rows that contain identical URLs but different titles. Here are some sample rows:
```
www.nytimes.com | The New York Times | 2016-01-01 00:00:00
www.wsj.com | The Wall Street Journal | 2016-01-03 15:32:13
www.nytimes.com | The New York Times Online | 2016-01-06 07:19:08
```
I'm trying to obtain an output that lists the following fields:
1) `url`
2) `title` that corresponds to the highest value of `created_at`
3) count of all `title` for that unique `url`
So, output rows for the above sample would look something like this:
```
www.nytimes.com | The New York Times Online | 2
www.wsj.com | The Wall Street Journal | 1
```
Based on the numerous SO posts I've read on similar questions, it looks like my best option for obtaining the first two fields (`url` and latest `title`) would be to use `DISTINCT ON`:
```
select distinct on (url) url, title from headlines order by url, created_at desc
```
Likewise, to obtain the first and third fields (`url` and count of all `title`), I could simply use `GROUP BY`:
```
select url, count(title) from headlines group by url
```
What I can't figure out is how to combine the above methodologies and obtain the above-mentioned three values I'm trying to get.
(Edited to provide more clarity.)
|
Try:
```
select t1.url, t2.title, t1.cnt
from (
select url, count(title) cnt
from headlines
group by url
) t1
join (
select distinct on (url) url, title
from headlines
order by url, created_at desc
) t2 on t1.url = t2.url
order by t1.url
```
`join` both queries on `url`
[**`sql fiddle demo`**](http://sqlfiddle.com/#!15/f7665f/3)
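`DISTINCT ON` is Postgres-only; as a portable sketch of the same result, `ROW_NUMBER()` plus a windowed `COUNT(*)` works on most engines, here demonstrated on SQLite via Python with the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE headlines (url TEXT, title TEXT, created_at TEXT)")
conn.executemany("INSERT INTO headlines VALUES (?, ?, ?)", [
    ("www.nytimes.com", "The New York Times", "2016-01-01 00:00:00"),
    ("www.wsj.com", "The Wall Street Journal", "2016-01-03 15:32:13"),
    ("www.nytimes.com", "The New York Times Online", "2016-01-06 07:19:08"),
])

# rn = 1 picks the latest title per url; ct counts all titles per url.
rows = conn.execute("""
    SELECT url, title, ct
    FROM (SELECT url, title,
                 ROW_NUMBER() OVER (PARTITION BY url
                                    ORDER BY created_at DESC) AS rn,
                 COUNT(*) OVER (PARTITION BY url) AS ct
          FROM headlines) sub
    WHERE rn = 1
    ORDER BY url
""").fetchall()
```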
|
This can be done in a single `SELECT` with a single scan over the table - by combining a [window function](http://www.postgresql.org/docs/current/interactive/functions-window.html) with `DISTINCT ON`:
```
SELECT DISTINCT ON (url)
url, title, count(*) OVER (PARTITION BY url) AS ct
FROM headlines
ORDER BY url, created_at DESC NULLS LAST;
```
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/1fbbea/2)
Related (with detailed explanation):
* [Best way to get result count before LIMIT was applied](https://stackoverflow.com/questions/156114/best-way-to-get-result-count-before-limit-was-applied-in-php-postgresql/8242764#8242764)
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
* [PostgreSQL: running count of rows for a query 'by minute'](https://stackoverflow.com/questions/8193688/postgresql-running-count-of-rows-for-a-query-by-minute/8194088#8194088)
|
Run Postgres query that groups by one field and sorts by another
|
[
"",
"sql",
"postgresql",
"aggregate",
"greatest-n-per-group",
"window-functions",
""
] |
I have the following UPSERT in PostgreSQL 9.5:
```
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
($2, $1, NULL)
ON CONFLICT("user", "contact") DO NOTHING
RETURNING id;
```
If there are no conflicts it returns something like this:
```
----------
| id |
----------
1 | 50 |
----------
2 | 51 |
----------
```
But if there are conflicts it doesn't return any rows:
```
----------
| id |
----------
```
I want to return the new `id` values if there are no conflicts, or the existing `id` values of the conflicting rows.
**Can this be done?** If so, **how?**
|
I had exactly the same problem, and I solved it using 'do update' instead of 'do nothing', even though I had nothing to update. In your case it would be something like this:
```
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
($2, $1, NULL)
ON CONFLICT("user", "contact")
DO UPDATE SET
name=EXCLUDED.name
RETURNING id;
```
This query will return all the rows, regardless of whether they have just been inserted or already existed.
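A minimal sketch of the `DO NOTHING` vs `DO UPDATE` difference, using SQLite's UPSERT syntax (3.24+) from Python; `"user"` is renamed `usr` here to avoid the reserved word, and `RETURNING` is omitted since older SQLite builds lack it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE chats (
    id INTEGER PRIMARY KEY, usr TEXT, contact TEXT, name TEXT,
    UNIQUE (usr, contact))""")
conn.execute("INSERT INTO chats (usr, contact, name) VALUES ('a', 'b', 'old')")

# DO NOTHING: the conflicting row is silently skipped (so RETURNING,
# where available, would produce no row for it).
conn.execute("""INSERT INTO chats (usr, contact, name) VALUES ('a', 'b', 'x')
                ON CONFLICT (usr, contact) DO NOTHING""")

# DO UPDATE: the conflicting row is rewritten, so RETURNING would see it.
conn.execute("""INSERT INTO chats (usr, contact, name) VALUES ('a', 'b', 'x')
                ON CONFLICT (usr, contact) DO UPDATE SET name = excluded.name""")

count, name = conn.execute("SELECT COUNT(*), MAX(name) FROM chats").fetchone()
```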
|
The [currently accepted answer](https://stackoverflow.com/a/37543015/939860) seems ok for a single conflict target, few conflicts, small tuples and no triggers. It avoids ***concurrency issue 1*** (see below) with brute force. The simple solution has its appeal, the side effects may be less important.
For all other cases, though, do ***not*** update identical rows without need. Even if you see no difference on the surface, there are **various side effects**:
* It might fire triggers that should not be fired.
* It write-locks "innocent" rows, possibly incurring costs for concurrent transactions.
* It might make the row seem new, though it's old (transaction timestamp).
* **Most importantly**, with [PostgreSQL's MVCC model](https://www.postgresql.org/docs/current/mvcc-intro.html) `UPDATE` writes a new row version for every target row, no matter whether the row data changed. This incurs a performance penalty for the UPSERT itself, table bloat, index bloat, performance penalty for subsequent operations on the table, `VACUUM` cost. A minor effect for few duplicates, but *massive* for mostly dupes.
**Plus**, sometimes it is not practical or even possible to use `ON CONFLICT DO UPDATE`. [The manual:](https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT)
> For `ON CONFLICT DO UPDATE`, a *`conflict_target`* must be provided.
A **single** "conflict target" is not possible if multiple indexes / constraints are involved.
Related solution for multiple, mutually exclusive partial indexes:
* [UPSERT based on UNIQUE constraint with NULL values](https://stackoverflow.com/questions/69746862/upsert-based-on-unique-constraint-with-null-values/69747756#69747756)
Or a way to deal with multiple unique constraints:
* [Use multiple conflict\_target in ON CONFLICT clause](https://stackoverflow.com/questions/35888012/use-multiple-conflict-target-in-on-conflict-clause/77544432#77544432)
Back on the topic, you can achieve (almost) the same without empty updates and side effects. Some of the following solutions also work with `ON CONFLICT DO NOTHING` (no "conflict target"), to catch *all* possible conflicts that might arise - which may or may not be desirable.
## Without concurrent write load
```
WITH input_rows(usr, contact, name) AS (
VALUES
(text 'foo1', text 'bar1', text 'bob1') -- type casts in first row
, ('foo2', 'bar2', 'bob2')
-- more?
)
, ins AS (
INSERT INTO chats (usr, contact, name)
SELECT * FROM input_rows
ON CONFLICT (usr, contact) DO NOTHING
RETURNING id --, usr, contact -- return more columns?
)
SELECT 'i' AS source -- 'i' for 'inserted'
, id --, usr, contact -- return more columns?
FROM ins
UNION ALL
SELECT 's' AS source -- 's' for 'selected'
, c.id --, usr, contact -- return more columns?
FROM input_rows
JOIN chats c USING (usr, contact); -- columns of unique index
```
The `source` column is an optional addition to demonstrate how this works. You may actually need it to tell the difference between both cases (another advantage over empty writes).
The final `JOIN chats` works because newly inserted rows from an attached [data-modifying CTE](https://www.postgresql.org/docs/current/queries-with.html) are not yet visible in the underlying table. (All parts of the same SQL statement see the same snapshots of underlying tables.)
Since the `VALUES` expression is free-standing (not directly attached to an `INSERT`) Postgres cannot derive data types from the target columns and you may have to add explicit type casts. [The manual:](https://www.postgresql.org/docs/current/sql-values.html)
> When `VALUES` is used in `INSERT`, the values are all automatically
> coerced to the data type of the corresponding destination column. When
> it's used in other contexts, it might be necessary to specify the
> correct data type. If the entries are all quoted literal constants,
> coercing the first is sufficient to determine the assumed type for all.
The query itself (not counting the side effects) may be a bit more expensive for *few* dupes, due to the overhead of the CTE and the additional `SELECT` (which should be cheap since the perfect index is there by definition - a unique constraint is implemented with an index).
May be (much) faster for *many* duplicates. The effective cost of additional writes depends on many factors.
But there are **fewer side effects and hidden costs** in any case. It's most probably cheaper overall.
Attached sequences are still advanced, since default values are filled in *before* testing for conflicts.
About CTEs:
* [Are SELECT type queries the only type that can be nested?](https://stackoverflow.com/questions/22749253/are-select-type-queries-the-only-type-that-can-be-nested/22750328#22750328)
* [Deduplicate SELECT statements in relational division](https://dba.stackexchange.com/a/111362/3684)
## With concurrent write load
Assuming default [`READ COMMITTED` transaction isolation](https://www.postgresql.org/docs/current/transaction-iso.html#XACT-READ-COMMITTED). Related:
* [Concurrent transactions result in race condition with unique constraint on insert](https://dba.stackexchange.com/q/212580/3684)
The best strategy to defend against race conditions depends on exact requirements, the number and size of rows in the table and in the UPSERTs, the number of concurrent transactions, the likelihood of conflicts, available resources and other factors ...
### Concurrency issue 1
If a concurrent transaction has written to a row which your transaction now tries to UPSERT, your transaction has to wait for the other one to finish.
If the other transaction ends with `ROLLBACK` (or any error, i.e. automatic `ROLLBACK`), your transaction can proceed normally. Minor possible side effect: gaps in sequential numbers. But no missing rows.
If the other transaction ends normally (implicit or explicit `COMMIT`), your `INSERT` will detect a conflict (the `UNIQUE` index / constraint is absolute) and `DO NOTHING`, hence also not return the row. (Also cannot lock the row as demonstrated in *concurrency issue 2* below, since it's *not visible*.) The `SELECT` sees the same snapshot from the start of the query and also cannot return the yet invisible row.
***Any such rows are missing from the result set (even though they exist in the underlying table)!***
This **may be ok as is**. Especially if you are not returning rows like in the example and are satisfied knowing the row is there. If that's not good enough, there are various ways around it.
You can check the row count of the output and repeat the statement if it does not match the row count of the input. May be good enough for the rare case. The point is to start a new query (can be in the same transaction), which will then see the newly committed rows.
**Or** check for missing result rows *within* the same query and *overwrite* those with the brute force trick demonstrated in [Alextoni's answer](https://stackoverflow.com/a/37543015/939860).
```
WITH input_rows(usr, contact, name) AS ( ... ) -- see above
, ins AS (
INSERT INTO chats AS c (usr, contact, name)
SELECT * FROM input_rows
ON CONFLICT (usr, contact) DO NOTHING
RETURNING id, usr, contact -- we need unique columns for later join
)
, sel AS (
SELECT 'i'::"char" AS source -- 'i' for 'inserted'
, id, usr, contact
FROM ins
UNION ALL
SELECT 's'::"char" AS source -- 's' for 'selected'
, c.id, usr, contact
FROM input_rows
JOIN chats c USING (usr, contact)
)
, ups AS ( -- RARE corner case
INSERT INTO chats AS c (usr, contact, name) -- another UPSERT, not just UPDATE
SELECT i.*
FROM input_rows i
LEFT JOIN sel s USING (usr, contact) -- columns of unique index
WHERE s.usr IS NULL -- missing!
ON CONFLICT (usr, contact) DO UPDATE -- we've asked nicely the 1st time ...
SET name = c.name -- ... this time we overwrite with old value
-- SET name = EXCLUDED.name -- alternatively overwrite with *new* value
RETURNING 'u'::"char" AS source -- 'u' for updated
, id --, usr, contact -- return more columns?
)
SELECT source, id FROM sel
UNION ALL
TABLE ups;
```
It's like the query above, but we add one more step with the CTE `ups`, before we return the ***complete*** result set. That last CTE will do nothing most of the time. Only if rows go missing from the returned result, we use brute force.
More overhead, yet. The more conflicts with pre-existing rows, the more likely this will outperform the simple approach.
One side effect: the 2nd UPSERT writes rows out of order, so it re-introduces the possibility of deadlocks (see below) if *three or more* transactions writing to the same rows overlap. If that's a problem, you need a different solution - like repeating the whole statement as mentioned above.
### Concurrency issue 2
If concurrent transactions can write to involved columns of affected rows, and you have to make sure the rows you found are still there at a later stage in the same transaction, you can **lock existing rows** cheaply in the CTE `ins` (which would otherwise go unlocked) with:
```
...
ON CONFLICT (usr, contact) DO UPDATE
SET name = name WHERE FALSE -- never executed, but still locks the row
...
```
And add a [locking clause to the `SELECT` as well, like `FOR UPDATE`](https://www.postgresql.org/docs/current/sql-select.html#SQL-FOR-UPDATE-SHARE).
This makes competing write operations wait till the end of the transaction, when all locks are released. So be brief.
More details and explanation:
* [How to include excluded rows in RETURNING from INSERT ... ON CONFLICT](https://stackoverflow.com/questions/35949877/how-to-include-excluded-rows-in-returning-from-insert-on-conflict/35953488#35953488)
* [Is SELECT or INSERT in a function prone to race conditions?](https://stackoverflow.com/questions/15939902/is-select-or-insert-in-a-function-prone-to-race-conditions/15950324#15950324)
### Deadlocks?
Defend against **deadlocks** by inserting rows in **consistent order**. See:
* [Deadlock with multi-row INSERTs despite ON CONFLICT DO NOTHING](https://dba.stackexchange.com/a/195220/3684)
## Data types and casts
### Existing table as template for data types ...
Explicit type casts for the first row of data in the free-standing `VALUES` expression may be inconvenient. There are ways around it. You can use any existing relation (table, view, ...) as row template. The target table is the obvious choice for the use case. Input data is coerced to appropriate types automatically, like in the `VALUES` clause of an `INSERT`:
```
WITH input_rows AS (
(SELECT usr, contact, name FROM chats LIMIT 0) -- only copies column names and types
UNION ALL
VALUES
('foo1', 'bar1', 'bob1') -- no type casts here
, ('foo2', 'bar2', 'bob2')
)
...
```
This does not work for some data types. See:
* [Casting NULL type when updating multiple rows](https://stackoverflow.com/questions/12426363/casting-null-type-when-updating-multiple-rows/12427434#12427434)
### ... and names
This also works for *all* data types.
While inserting into all (leading) columns of the table, you can omit column names. Assuming table `chats` in the example only consists of the 3 columns used in the UPSERT:
```
WITH input_rows AS (
SELECT * FROM (
VALUES
((NULL::chats).*) -- copies whole row definition
    , ('foo1', 'bar1', 'bob1') -- no type casts needed
, ('foo2', 'bar2', 'bob2')
) sub
OFFSET 1
)
...
```
---
Aside: don't use [reserved words](https://www.postgresql.org/docs/current/sql-keywords-appendix.html) like `"user"` as identifier. That's a loaded footgun. Use legal, lower-case, unquoted identifiers. I replaced it with `usr`.
|
How to use RETURNING with ON CONFLICT in PostgreSQL?
|
[
"",
"sql",
"postgresql",
"upsert",
"sql-returning",
""
] |
I have a table called test.
In test I have An ID, a value and a date.
The dates are ordered for each ID.
I want to select the rows for each ID immediately before and after a change of value. Take the following example table:
```
RowNum | ID  | Value | Date
1      | 001 |   1   | 01/01/2015
2      | 001 |   1   | 02/01/2015
3      | 001 |   1   | 04/01/2015
4      | 001 |   1   | 05/01/2015
5      | 001 |   1   | 06/01/2015
6      | 001 |   1   | 08/01/2015
7      | 001 |   0   | 09/01/2015
8      | 001 |   0   | 10/01/2015
9      | 001 |   0   | 11/01/2015
10     | 001 |   1   | 12/01/2015
11     | 001 |   1   | 14/01/2015
12     | 002 |   1   | 01/01/2015
13     | 002 |   1   | 04/01/2015
14     | 002 |   0   | 05/01/2015
15     | 002 |   0   | 07/01/2015
```
The result would return rows 6, 7, 9, 10, 13, 14
|
You can use the analytic functions `LAG()` and `LEAD()` to access the value in the preceding and following rows, then check whether it matches the value in the current row.
```
SELECT *
FROM (
SELECT RowNum,
ID,
Value,
Date,
             LAG(VALUE, 1, VALUE) OVER(PARTITION BY ID ORDER BY RowNum) PrevValue,
             LEAD(VALUE, 1, VALUE) OVER(PARTITION BY ID ORDER BY RowNum) NextValue
FROM test)
WHERE PrevValue <> Value
OR NextValue <> Value
```
The parameters passed to these functions are:
1. some scalar expression (column name in this case);
2. offset (1 row before or after);
3. default value (`LAG()` will return `NULL` for first row and `LEAD()` will return `NULL` for last row, but they don't seem special in your question, so I used column value as default).
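The same `LAG()`/`LEAD()` approach can be checked on SQLite (3.25+) from Python, with `PARTITION BY ID` in the window so one ID's values don't bleed into the next (using a subset of the question's rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (rownum INT, id TEXT, value INT, dt TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?, ?)", [
    (5, "001", 1, "2015-01-06"), (6, "001", 1, "2015-01-08"),
    (7, "001", 0, "2015-01-09"), (8, "001", 0, "2015-01-10"),
    (9, "001", 0, "2015-01-11"), (10, "001", 1, "2015-01-12"),
])

# A row qualifies when its value differs from the previous or next row's
# value; the third LAG/LEAD argument defaults the boundary rows to their
# own value so they never qualify spuriously.
rows = conn.execute("""
    SELECT rownum
    FROM (SELECT rownum, value,
                 LAG(value, 1, value)  OVER (PARTITION BY id ORDER BY rownum) prev,
                 LEAD(value, 1, value) OVER (PARTITION BY id ORDER BY rownum) nxt
          FROM test) sub
    WHERE prev <> value OR nxt <> value
    ORDER BY rownum
""").fetchall()
```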
|
Refer the below one for without using LEAD and LAG:
```
DECLARE @i INT = 1,
@cnt INT,
@dstvalue INT,
@srcvalue INT
CREATE TABLE #result
(
id INT,
mydate DATE
)
CREATE TABLE #temp1
  (
     rn     INT IDENTITY(1, 1),
     id     INT,
     value  INT,
     mydate DATE
  )
INSERT INTO #temp1
            (id,
             value,
             mydate)
SELECT id,
       value,
       mydate
FROM table
ORDER BY id,
         mydate
SELECT @cnt = Count(*)
FROM #temp1
SELECT @srcvalue = value
FROM #temp1
WHERE rn = @i
WHILE ( @i <= @cnt )
BEGIN
SELECT @dstvalue = value
FROM #temp1
WHERE rn = @i
IF( @srcvalue = @dstvalue )
BEGIN
SET @i = @i + 1
CONTINUE;
END
ELSE
BEGIN
SET @srcvalue = @dstvalue
INSERT INTO #result
(id,
mydate)
SELECT id,
mydate
FROM #temp1
WHERE rn = @i - 1
UNION ALL
SELECT id,
mydate
FROM #temp1
WHERE rn = @i
END
SET @i = @i + 1
END
SELECT *
FROM #result
```
|
SQL - Selecting rows with dates before and after column value change
|
[
"",
"sql",
"sql-server",
""
] |
I'm using a T-SQL script to remove all constraints and all tables in my database, to give me a clean slate upon execution. It seems to work fine for constraints, but when I adapt the segment to clear my tables out, it gives me an error message. The code is as follows:
```
declare @tableCommand varchar(255)
declare tableCursor cursor for
select 'drop table '+ t.name
from sys.tables t
where t.type = 'U'
open tableCursor
fetch next from tableCursor into @tableCommand
while @@fetch_status = 0 begin
exec @tableCommand
fetch next from tableCursor into @tableCommand
end
close tableCursor
deallocate tableCursor
```
As I say, a similar segment of code works fine to drop my constraints, but with this code I just get the error:
> Msg 2812, Level 16, State 62, Line 15
> Could not find stored procedure 'drop table calendar'.
Any ideas?
|
I think you're missing some parentheses - without them, `EXEC` treats the string as a stored procedure name instead of executing it:
```
while @@fetch_status = 0 begin
exec (@tableCommand);
fetch next from tableCursor into @tableCommand
end
```
|
I won't use `CURSOR` to do this.
```
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += 'DROP TABLE ' + QUOTENAME(s.NAME) + '.' + QUOTENAME(t.NAME) + '; '
FROM sys.tables t
JOIN sys.schemas s
  ON t.[schema_id] = s.[schema_id]
WHERE t.type = 'U'

EXEC sp_executesql @sql
```
**Note:** If you have any `foreign keys` defined between tables, then you have to disable them before dropping the tables. Also, if there are any schema-bound objects, then again you cannot drop the tables.
Run the below query to **disable** all `foreign keys` present in your DB.
```
EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
```
for more info [***check here***](http://blog.sqlauthority.com/2013/04/29/sql-server-disable-all-the-foreign-key-constraint-in-database-enable-all-the-foreign-key-constraint-in-database/)
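As a rough analogue of the generate-and-execute pattern, here is the same idea in Python against SQLite, where `sqlite_master` stands in for `sys.tables` and each generated `DROP TABLE` string is executed individually:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calendar (d TEXT)")
conn.execute("CREATE TABLE events (d TEXT)")

# Materialize the statement list first, so we aren't iterating the
# catalog while mutating it.
drops = [f'DROP TABLE "{name}"' for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for stmt in drops:
    conn.execute(stmt)

remaining = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type = 'table'").fetchone()[0]
```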
|
T-SQL dropping multiple tables
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
If I have following query
```
if not exists(select * from DeliveryTemplate where TemplateId=2)
begin
select usersCode, 1,2,'User {UsersCode}',' hello {Username},', null
from User
end
```
how can I extend this query in order to select one more column `DeliveryCode` from `DeliveryTemplate` table?
|
So you need to use a `JOIN`, something like this:
```
if not exists(select * from DeliveryTemplate where TemplateId=2)
begin
select u.usersCode, 1,2,'User {UsersCode}',' hello {Username},', null, dt.DeliveryCode
from User u
left join DeliveryTemplate dt on u.Id = dt.UserId
end
```
|
```
declare @count int
declare @DeliveryCode nvarchar(100)
select @count = count(*),
       @DeliveryCode = max(DeliveryCode)
from DeliveryTemplate
where TemplateId=2
if @count = 0
begin
select usersCode,
1,
2,
'User {UsersCode}',
'hello {Username},',
null,
@DeliveryCode AS 'DeliveryCode'
from User
end
```
|
select data from two tables using simple sql query
|
[
"",
"sql",
".net",
"sql-server",
"t-sql",
""
] |
Suppose I have table A with 100 columns of the same data type and 100 rows, and table B with 2 columns and 5000 rows of that same data type.
Which table takes more disk space to store, and which is more efficient?
|
The real answer here is... it depends.
Oracle stores its data in "data blocks", which are stored in "Extents" which are stored in "Segments" which make up the "Tablespace". [See here.](https://docs.oracle.com/cd/B19306_01/server.102/b14220/logical.htm)
A data block is much like a block used to store data for the operating system. In fact, an Oracle data block should be specified in multiples of the operating system's blocks so there isn't unnecessary I/O overhead.
A data block is split into 5 chunks:
1. **The Header** - Which has information about the block
2. **The Table Directory** - Tells oracle that this block contains info about whatever table it is storing data for
3. **Row Directory** - The portion of the block that stores information about the rows in the block like addresses.
4. **Row data** - The meat and potatoes of the block where the row data is stored. Keeping in mind that Rows can span blocks.
5. **Free space** - This is the middle of the bingo board and you don't have to actually put your chip here.
So the two important parts of Oracle data storage, for this question, in its data blocks are the Row Data and the Row Directory (and to some extent, the Free Space).
In your first table you have very large rows, but fewer of them. This would suggest a smaller row directory (unless it spans multiple blocks because of the size of the rows, in which case it would be Rows\*Blocks-Necessary-To-Store-Them). In your second table you have more rows, which would suggest a larger row directory than the first table.
I believe that a row directory entry is two bytes. It describes the offset in bytes from the start of the block where the row data can be found. If your data types for your two columns in second table are `TINYINT()` then your rows would be 2 bytes as well. In effect, you have more rows, so your directory here is as big as your data. It's datasize\*2, which will cause you to store more data for this table.
The other gotcha here is that data stored in the row directory of a block is not deleted when the row is deleted. The header that contains the row directory in the block is only reused when a new insert comes along that needs the space.
Also, every block has its free space that it keeps for storing more rows and header information, as well as holding transaction entries (see the link above for that).
At any rate, it's unlikely that your row directory in a given block would be larger than your row data, and even then Oracle may be holding onto free space in the block that trumps both, depending on the size of the table, how often it's accessed, and whether Oracle is automatically managing free space for you or you're managing it manually (does anyone do that?).
Also, if you toss an index on either of these tables, you'll change the stats all around anyway. Indexes are stored like tables, and they have their own Segments, extents, and blocks.
In the end, your best bet is to not worry too much about the blocks and whatnot (after all, storage is cheap) instead:
1. Define appropriate field types for your data. Don't store boolean values in a CHAR(100), for instance.
2. Define your indexes wisely. Don't add an index just to be sure. Make good decisions when your tuning.
3. Design your schema for your end user's needs. Is this a reporting database? In that case, shoot for denormalized pre-aggregated data to keep reads fast. Try to keep down the number of joins a user needs to get at their result set.
4. Focus on cutting CPU and I/O requirements based on the queries that will come through for the schema you have created. Storage is cheap, CPU and I/O aren't, and your end user isn't going to give a rats ass about how many hard drives (or ram if it's in-memory) you needed to cram into your box. They are going to care about how quick the application reads and writes.
p.s. Forgive me if I misrepresented anything here. Logical database storage is complicated stuff and I don't deal much with Oracle so I may be missing a piece to the puzzle, but the overall gist is the same. There is the actual data you store, and then there is the metadata for that data. It's unlikely that the metadata will trump, in size, the data itself, but given the right circumstances, it's possible (especially with indexing figured in). And, in the end, don't worry to much about it anyway. Focus on the needs of the end user/application in designing your schema. The end user will balk a hell of a lot more than your box.
|
Either a table has 2 columns or 100. You would not convert one into the other or you would do something very wrong.
A product table may have 100 columns (item number, description, supplier number, material, list price, actual price ...). How would you make this a two column table? A key-value table? A very bad idea.
A country table may have 2 columns (iso code and name). How would you make this a 100-columns table? By having columns usa\_name, usa\_code, germany\_name, germany\_code, ...? An even worse idea.
So: the question doesn't really arise :-) There is nothing to decide between.
|
Which is more space efficient, multiple columns or multiple rows?
|
[
"",
"sql",
"oracle",
"database-design",
""
] |
I am confused about how to query this from the table below. I have to get only those categories which are assigned to a single group.
```
Id   Category     PracticeGroup
1    Category-1   Practice Group-1
2    Category-1   Practice Group-1
3    Category-2   Practice Group-2
4    Category-1   Practice Group-1
5    Category-2   Practice Group-1
```
As in the above scenario, Category-1 would be the result, since it has only one assigned practice group, "Practice Group-1".
|
You can just do this:
```
select [category] from (select distinct [category] ,[Practice Group] from tbl) as temp
group by [category]
having count([category]) = 1
```
|
You can use the same approach that @potasin mentioned, adding the min of Practice Group:
```
select [Category],min([Practice Group])
from tbl
group by [Category]
having count(distinct [Practice Group]) = 1
```
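The `HAVING COUNT(DISTINCT ...) = 1` idea can be verified against the question's sample data, here on SQLite via Python (column names lower-cased for convenience):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INT, category TEXT, practice_group TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)", [
    (1, "Category-1", "Practice Group-1"),
    (2, "Category-1", "Practice Group-1"),
    (3, "Category-2", "Practice Group-2"),
    (4, "Category-1", "Practice Group-1"),
    (5, "Category-2", "Practice Group-1"),
])

# Only categories mapped to exactly one distinct group survive the HAVING.
rows = conn.execute("""
    SELECT category, MIN(practice_group)
    FROM tbl
    GROUP BY category
    HAVING COUNT(DISTINCT practice_group) = 1
""").fetchall()
```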
|
Select all rows having specific value assigned only once
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2014",
"sql-server-2014-express",
""
] |
I have two tables in my database which I need to join to obtain a list of User IDs. The first table stores the `user_id` and `email`:
```
user_id | email
-------------------------
1 | test@test.com
2 | test2@test.com
```
Table 2 contains searches of the user and the browser which they used:
```
id | user_id | browser
--------------------------------
1 | 1 | Internet Explorer
2 | 1 | Internet Explorer
3 | 1 | Firefox
4 | 2 | Chrome
```
I want to join the two tables together based on the `user_id` and filter out users who only ever used Internet Explorer.
My current query looks like:
```
select distinct (u.id,u.email)
from users u join searches k on u.id = k.user_id
where k.browser = 'Internet Explorer'
```
which is wrong, as this retrieves users who have used Internet Explorer at least once - not those who only ever used Internet Explorer.
|
This will list all users that have at least one search from a browser that is not Internet Explorer:
```
SELECT u.id, u.email
FROM users u
WHERE EXISTS (
SELECT 1
FROM searches s
WHERE s.user_id = u.id
AND s.browser <> 'Internet Explorer'
)
```
|
Under the assumption that every user used at least one browser sometime, this can be done more easily by filtering the users where another browser exists:
```
SELECT DISTINCT u.userid, u.email
FROM users u
WHERE EXISTS (SELECT *
FROM searches k
WHERE u.userid = k.userid AND k.browser != 'Internet Explorer')
```
If this assumption is incorrect, and you want to also allow users that have never used any browser, you'd need another condition:
```
SELECT DISTINCT u.userid, u.email
FROM users u
WHERE EXISTS (SELECT *
FROM searches k
WHERE u.userid = k.userid AND k.browser != 'Internet Explorer')
OR NOT EXISTS (SELECT *
FROM searches k
WHERE u.userid = k.userid)
```
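The `EXISTS` filter can be checked on the question's sample data, here on SQLite via Python; a third, hypothetical IE-only user (made-up email) is added to show the exclusion actually happening:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INT, email TEXT)")
conn.execute("CREATE TABLE searches (id INT, user_id INT, browser TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [
    (1, "test@test.com"), (2, "test2@test.com"), (3, "ie.only@test.com")])
conn.executemany("INSERT INTO searches VALUES (?, ?, ?)", [
    (1, 1, "Internet Explorer"), (2, 1, "Internet Explorer"),
    (3, 1, "Firefox"), (4, 2, "Chrome"), (5, 3, "Internet Explorer"),
])

# Keep only users with at least one non-IE search; user 3 drops out.
rows = conn.execute("""
    SELECT u.id, u.email
    FROM users u
    WHERE EXISTS (SELECT 1 FROM searches s
                  WHERE s.user_id = u.id
                    AND s.browser <> 'Internet Explorer')
    ORDER BY u.id
""").fetchall()
```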
|
SQL query groupBy and filter
|
[
"",
"mysql",
"sql",
"select",
"pgadmin",
""
] |
I can't see leading/trailing whitespace in the following following SQL statement executed with `psql`:
```
select name from my_table;
```
Is there a pragmatic way to see leading/trailing whitespace?
|
If you don't mind substituting all whitespace characters whether or not they are leading/trailing, something like the following will do it:
```
SELECT REPLACE(REPLACE(REPLACE(REPLACE(txt, ' ', '_'),
E'\t', '\t'),
E'\r', '\r'),
E'\n', '\n') AS txt
FROM test;
```
This is using an underscore to mark the spaces but of course you are free to choose your own. See [SQL fiddle demo](http://sqlfiddle.com/#!15/2944e/2).
If you strictly only want to show up the leading/trailing ones it will get more complex - but if this is really desired, something may be possible using [`regexp_replace`](https://www.postgresql.org/docs/9.5/static/functions-matching.html#FUNCTIONS-POSIX-REGEXP).
|
One option is to use the `format()` function.
For the given query: `select format( '"%s"', name ) from my_table;`
PoC:
```
SELECT format( '"%s"', name )
FROM ( VALUES ( ' a ' ), ( ' b ' ) ) v(name);
format
--------
" a "
" b "
(2 rows)
```
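Both ideas (quoting the value, and making spaces visible by substitution) can be sketched together on SQLite via Python, using `printf()` since older SQLite lacks `format()` (3.38 added it as an alias):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# ' a ' stands in for a column value with leading/trailing spaces.
rows = conn.execute("""
    SELECT printf('"%s"', name) AS quoted,
           replace(name, ' ', '_') AS marked
    FROM (SELECT ' a ' AS name)
""").fetchall()
```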
|
How to show leading/trailing whitespace in a PostgreSQL column?
|
[
"",
"sql",
"postgresql",
"whitespace",
"psql",
""
] |
Let's say I have the following database table:
```
id | from | to
1 | 01-JAN-2015 | 03-MAR-2015
1 | 04-MAR-2015 | 31-AUG-2015
1 | 01-SEP-2015 | 31-DEC-2015
2 | 01-JAN-2015 | 30-JUN-2015
2 | 01-NOV-2015 | 31-DEC-2015
```
And I want to summarise the records with the same id that are continuous in time into one single row covering the full time frame, as follows:
```
id | from | to
1 | 01-JAN-2015 | 31-DEC-2015
2 | 01-JAN-2015 | 30-JUN-2015
2 | 01-NOV-2015 | 31-DEC-2015
```
So, because the time frames are sequential and have no gaps between them, the 3 rows for id 1 could be converted into 1 single row with the minimum from date and the maximum to date. The 2 rows for id 2 would remain the same as the time frames are not continuous.
I'm thinking of doing this with a loop through a cursor, but I might be overcomplicating things.
Any better ideas, perhaps with SQL queries only?
|
You can do it using hierarchical queries, something like this:
```
select id, min(root_dt_from) dt_from, dt_to
from (select id, dt_from, dt_to, level, connect_by_isleaf, connect_by_root(dt_from) root_dt_from
from t
where connect_by_isleaf = 1
connect by prior id = id and prior (dt_to + 1) = dt_from
)
group by id, dt_to;
```
Sample execution:
```
SQL> with t as (
2 select 1 id, to_date('01-JAN-2015', 'DD-MON-YYYY') dt_from, to_date('03-MAR-2015', 'DD-MON-YYYY') dt_to from dual union all
3 select 1 id, to_date('04-MAR-2015', 'DD-MON-YYYY') dt_from, to_date('31-AUG-2015', 'DD-MON-YYYY') dt_to from dual union all
4 select 1 id, to_date('01-SEP-2015', 'DD-MON-YYYY') dt_from, to_date('31-DEC-2015', 'DD-MON-YYYY') dt_to from dual union all
5 select 2 id, to_date('01-JAN-2015', 'DD-MON-YYYY') dt_from, to_date('30-JUN-2015', 'DD-MON-YYYY') dt_to from dual union all
6 select 2 id, to_date('01-NOV-2015', 'DD-MON-YYYY') dt_from, to_date('31-DEC-2015', 'DD-MON-YYYY') dt_to from dual
7 ) -- end of sample data
8 select id, min(root_dt_from) dt_from, dt_to
9 from (select id, dt_from, dt_to, level, connect_by_isleaf, connect_by_root(dt_from) root_dt_from
10 from t
11 where connect_by_isleaf = 1
12 connect by prior id = id and prior (dt_to + 1) = dt_from
13 )
14 group by id, dt_to;
ID DT_FROM DT_TO
---------- ----------- -----------
1 01-JAN-2015 31-DEC-2015
2 01-NOV-2015 31-DEC-2015
2 01-JAN-2015 30-JUN-2015
```
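The `CONNECT BY` approach above is Oracle-specific. For readers on other engines, the same gaps-and-islands merge can be sketched with standard window functions (`LAG` to flag gaps, a running `SUM` to number the islands). This is a minimal illustration run through Python's built-in `sqlite3` module, assuming SQLite 3.25+ for window functions and ISO date strings; the table name `t` and the data mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, dt_from TEXT, dt_to TEXT);
INSERT INTO t VALUES
  (1, '2015-01-01', '2015-03-03'),
  (1, '2015-03-04', '2015-08-31'),
  (1, '2015-09-01', '2015-12-31'),
  (2, '2015-01-01', '2015-06-30'),
  (2, '2015-11-01', '2015-12-31');
""")

rows = conn.execute("""
WITH flagged AS (
  -- a row starts a new "island" unless it begins exactly one day
  -- after the previous row's end date
  SELECT id, dt_from, dt_to,
         CASE WHEN date(LAG(dt_to) OVER w, '+1 day') = dt_from
              THEN 0 ELSE 1 END AS new_island
  FROM t
  WINDOW w AS (PARTITION BY id ORDER BY dt_from)
),
islands AS (
  SELECT *, SUM(new_island) OVER (PARTITION BY id ORDER BY dt_from) AS grp
  FROM flagged
)
SELECT id, MIN(dt_from) AS dt_from, MAX(dt_to) AS dt_to
FROM islands
GROUP BY id, grp
ORDER BY id, dt_from
""").fetchall()
for r in rows:
    print(r)
```

The three contiguous rows for id 1 collapse into one, while the two non-contiguous rows for id 2 stay separate, matching the desired output in the question.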
|
> You can try some analytical functions here, which can really simplify
> the scenario. Hope the snippet below helps. Let me know of any
> issues.
```
SELECT B.ID,
MIN(B.FRM_DT) FRM_DT,
MAX(B.TO_DT) TO_DT
FROM
(SELECT A.ID,
A.FRM_DT,
A.TO_DT,
NVL(LAG(A.TO_DT+1) OVER(PARTITION BY A.ID ORDER BY A.TO_DT),A.FRM_DT) nxt_dt,
CASE
WHEN NULLIF(A.FRM_DT,NVL(LAG(A.TO_DT+1) OVER(PARTITION BY A.ID ORDER BY A.TO_DT),A.FRM_DT)) IS NULL
THEN 'True'
ELSE 'False'
END COND
FROM
(SELECT 1 AS ID,
TO_DATE('01/01/2015') FRM_DT,
TO_DATE('03/03/2015') TO_DT
FROM DUAL
UNION
SELECT 1 AS ID,
TO_DATE('03/04/2015') FRM_DT,
TO_DATE('07/31/2015') TO_DT
FROM DUAL
UNION
SELECT 1 AS ID,
TO_DATE('08/01/2015') FRM_DT,
TO_DATE('12/31/2015') TO_DT
FROM DUAL
UNION
SELECT 2 AS ID,
TO_DATE('01/01/2015') FRM_DT,
TO_DATE('06/30/2015') TO_DT
FROM DUAL
UNION
SELECT 2 AS ID,
TO_DATE('11/01/2015') FRM_DT,
TO_DATE('12/31/2015') TO_DT
FROM DUAL
UNION
SELECT 3 AS ID,
TO_DATE('01/01/2015') FRM_DT,
TO_DATE('03/14/2015') TO_DT
FROM DUAL
UNION
SELECT 3 AS ID,
TO_DATE('03/15/2015') FRM_DT,
TO_DATE('11/30/2015') TO_DT
FROM DUAL
UNION
SELECT 3 AS ID,
TO_DATE('12/01/2015') FRM_DT,
TO_DATE('12/31/2015') TO_DT
FROM DUAL
UNION
SELECT 4 AS ID,
TO_DATE('02/01/2015') FRM_DT,
TO_DATE('05/30/2015') TO_DT
FROM DUAL
UNION
SELECT 4 AS ID,
TO_DATE('06/01/2015') FRM_DT,
TO_DATE('12/31/2015') TO_DT
FROM DUAL
)A
)B
GROUP BY B.ID,
B.COND;
-----------------------------------OUTPUT------------------------------------------
ID FRM_DT TO_DT
4 02/01/2015 05/30/2015
4 06/01/2015 12/31/2015
1 01/01/2015 12/31/2015
2 01/01/2015 06/30/2015
2 11/01/2015 12/31/2015
3 01/01/2015 12/31/2015
-----------------------------------OUTPUT------------------------------------------
```
|
PL/SQL: How to convert multiple rows with continuous time frames into one single row convering the whole time frame?
|
[
"",
"sql",
"oracle",
"plsql",
"gaps-and-islands",
""
] |
I have my query displaying output as displayed below
```
UserName Status Count
A        Pass   32
A        fail    2
A        Hold    4
B        Fail   12
C        Pass   40
C        Fail    4
C        Hold    3
D        Pass    2
```
I want the output displayed with username C and its details first, then A and its details. I'm trying to get the top 5 records with the highest pass counts, along with all related records for those usernames. I do not want to drop the 6th record if that username exists in the top 5. Kindly help.
Sorry I'm unable to format in a tabular format.
|
To achieve the desired listing order is relatively simple:
```
select top 6 * from #tbl INNER JOIN (
SELECT UserName u, cnt pc FROM #tbl WHERE Status='Pass'
) ord ON u=UserName ORDER BY pc desc,status desc
```
However, limiting it to 5 rows, in case the first result sets are not complete, is not that easy. I am still trying ...
**Edit**:
Well, after a few more tries I figured out that when you limit the output lines to a certain maximum number, depending on the data, you might end up with up to two fewer records if you insist that all lines of a user are always listed completely. Therefore, if we limit the output to 6 lines, the list may end at line 4 if the next user would supply 3 more lines. Having accepted that, you could use the following:
```
;WITH cnts AS ( SELECT
UserName un,
SUM(CASE Status
WHEN 'Pass' THEN 10000 -- weighing system
WHEN 'Hold' THEN 100 -- to sort the results ...
ELSE 1 END * cnt) scr, -- --> score for sorting
COUNT(*) ncnt -- row count per UserName
FROM #tbl GROUP BY UserName
)
SELECT UserName,Status,cnt,scr,acnt FROM #tbl INNER JOIN (
SELECT un,scr,(SELECT SUM(ncnt) FROM cnts WHERE scr>=c.scr) acnt
FROM cnts c WHERE (SELECT SUM(ncnt) FROM cnts WHERE scr>=c.scr) <=6
) srt ON UserName=un
ORDER BY scr DESC, UserName, Status desc
```
See [here](https://data.stackexchange.com/stackoverflow/query/421953/up-to-six-top-results).
The subquery `(SELECT SUM(ncnt) FROM cnts WHERE scr>=c.scr)` represents an accumulated line count which I use in the `where` clause of the `srt`-subquery to limit the output.
Contrary to my first approach I now sort internally according to `scr`, an internal score which I calculate by summing up all pass/hold/fail-counts with a different weighing factor. Without doing this, a user without a single 'pass' would never get listed, even if no other user was doing any better.
|
```
;WITH CTE_UserRanking AS (
SELECT
UserID,
DENSE_RANK() OVER (ORDER BY COUNT(*) DESC) AS user_rank
FROM
dbo.User_Scores S
WHERE
S.Status = 'Pass'
GROUP BY
UserID
)
SELECT
U.UserName,
S.Status,
COUNT(*)
FROM
CTE_UserRanking R
INNER JOIN dbo.Users U ON U.UserID = R.UserID
INNER JOIN dbo.Scores S ON S.UserID = R.UserID
WHERE
R.user_rank <= 5
GROUP BY
U.UserID,
U.UserName,
S.Status
ORDER BY
R.user_rank
```
|
SQL Query to display records categorically
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"tabular",
""
] |
```
FOR this_loop
IN (SELECT field_A, field_B
FROM TABLE_NAME
WHERE num = i_num)
LOOP
IF this_loop.field_B BETWEEN 1 AND 3
THEN
v_A := v_A || ' ' || this_loop.field_A;
ELSIF this_loop.field_B BETWEEN 4 AND 8
THEN
v_field_A := v_field_A || ' ' || this_loop.field_A; -- Error is at this line
ELSIF this_loop.field_B BETWEEN 9 AND 15
THEN
v_B := v_B || ' ' || this_loop.field_A;
END IF;
END LOOP;
```
Variable declared as
```
v_field_A VARCHAR2 (100);
```
What I know -
1. Variable v\_field\_A cannot hold value more than 100 characters
2. The output I get from the `SELECT` Query doesn't have more than 10 characters.
My question - How is it even possible to hit this buffer-size issue when the characters are within the limit of the VARCHAR2?
I faced this issue a few years back, but last time the cause was the output of the `select` query: it had more than 100 characters, hence the size issue. This time it is not more than 10 characters. I'm confused. Any help is appreciated.
|
> Variable v\_field\_A cannot hold value more than 100 characters
Why not? It is very much possible since you are **concatenating** the variable for each row in the **CURSOR FOR LOOP**.
For example,
```
SQL> DECLARE
2 v_name VARCHAR2(50);
3 BEGIN
4 FOR i IN
5 (SELECT ename FROM emp
6 )
7 LOOP
8 v_name := v_name || i.ename;
9 END LOOP;
10 END;
11 /
DECLARE
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 8
```
Use **DBMS\_OUTPUT** to see the current size of the variable and the new value being appended.
**Let's debug**
```
SQL> DECLARE
2 v_name VARCHAR2(50);
3 BEGIN
4 FOR i IN
5 (SELECT ename FROM emp
6 )
7 LOOP
8 dbms_output.put_line('Length of new value = '||LENGTH(i.ename));
9 v_name := v_name || i.ename;
10 dbms_output.put_line('Length of variable = '||LENGTH(v_name));
11 END LOOP;
12 END;
13 /
Length of new value = 5
Length of variable = 5
Length of new value = 5
Length of variable = 10
Length of new value = 4
Length of variable = 14
Length of new value = 5
Length of variable = 19
Length of new value = 6
Length of variable = 25
Length of new value = 5
Length of variable = 30
Length of new value = 5
Length of variable = 35
Length of new value = 5
Length of variable = 40
Length of new value = 4
Length of variable = 44
Length of new value = 6
Length of variable = 50
Length of new value = 5
```
**Error**
```
DECLARE
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 9
```
It is pretty clear: we wanted to concatenate a string of length `5` to a variable declared with max size `50` that was already holding a value of size `50`. Hence, it throws the error `ORA-06502: PL/SQL: numeric or value error: character string buffer too small`.
|
You can do the string concatenation in a SQL query:
```
SELECT field_A,
LISTAGG(CASE WHEN field_B BETWEEN 1 AND 3 THEN field_A END, ' ') WITHIN GROUP (ORDER BY field_A) as val1,
LISTAGG(CASE WHEN field_B BETWEEN 4 AND 8 THEN field_A END, ' ') WITHIN GROUP (ORDER BY field_A) as val2,
LISTAGG(CASE WHEN field_B BETWEEN 9 AND 15 THEN field_A END, ' ') WITHIN GROUP (ORDER BY field_A) as val3
FROM TABLE_NAME
WHERE num = i_num;
```
This is intended to simplify your code. You are still limited by Oracle string lengths. You can get around the Oracle limit in PL/SQL using a CLOB, but that is unnecessary when the final string is only a few hundred characters.
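`LISTAGG` is Oracle-specific; the analogous aggregate in SQLite is `group_concat`, which likewise skips the NULLs produced by non-matching `CASE` branches. A rough sketch via Python's `sqlite3` (the table and values are invented; note that `group_concat` does not guarantee element order, unlike `LISTAGG ... WITHIN GROUP`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (field_A TEXT, field_B INTEGER);
INSERT INTO t VALUES ('a', 1), ('b', 2), ('c', 5), ('d', 7), ('e', 10);
""")

# NULLs from non-matching CASE branches are skipped by group_concat
val1, val2, val3 = conn.execute("""
SELECT group_concat(CASE WHEN field_B BETWEEN 1 AND 3  THEN field_A END, ' '),
       group_concat(CASE WHEN field_B BETWEEN 4 AND 8  THEN field_A END, ' '),
       group_concat(CASE WHEN field_B BETWEEN 9 AND 15 THEN field_A END, ' ')
FROM t
""").fetchone()
print(val1, val2, val3)
```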
|
ORA-06502: character string buffer too small. Even though the string size is under declared size limit
|
[
"",
"sql",
"oracle",
"plsql",
"oracle11g",
""
] |
I have a table of students and their grades.
How can I count the number of failed grades (*<50*)
and find the general average (I mean the average for every student) using **SQLite**?
this is the table (studentTable):
```
sID LessonID grade
1 1 45
1 2 50
1 3 65
2 1 44
2 2 22
2 3 91
```
I expect the results like this:
```
sID noOfFails Average
1 1 53
2 2 5
```
|
Try
```
SELECT
sID,
SUM(CASE WHEN grade < 50 THEN 1 ELSE 0 END) AS noOfFails,
AVG(grade) AS Average
FROM studentTable
GROUP BY sID
```
Demo at <http://sqlfiddle.com/#!5/fcd13/1>
|
You could use a `case` expression inside the `count` function:
```
SELECT sID, COUNT(CASE WHEN grade < 50 THEN 1 END) AS noOfFails, AVG(grade)
FROM mytable
```
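Both answers can be verified directly against the question's sample data with a quick SQLite session (a sketch using Python's built-in `sqlite3`; `ORDER BY sID` is added for deterministic output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE studentTable (sID INTEGER, LessonID INTEGER, grade INTEGER);
INSERT INTO studentTable VALUES
  (1, 1, 45), (1, 2, 50), (1, 3, 65),
  (2, 1, 44), (2, 2, 22), (2, 3, 91);
""")

rows = conn.execute("""
SELECT sID,
       SUM(CASE WHEN grade < 50 THEN 1 ELSE 0 END) AS noOfFails,
       AVG(grade) AS Average
FROM studentTable
GROUP BY sID
ORDER BY sID
""").fetchall()
for r in rows:
    print(r)
```

This yields 1 fail for student 1 and 2 fails for student 2, with averages of 160/3 and 157/3 respectively.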
|
How to find number of failed grades and the number of general average?
|
[
"",
"sql",
"sqlite",
"select",
"average",
""
] |
I have a query that takes too long to execute (more than 300 sec). Can it be optimized with indexes or by modifying the query?
What is the best key?
```
SELECT
`streams_channel`.`id`,
`streams_channel`.`uid`,
`streams_channel`.`provider_id`
FROM
`streams_channel`
WHERE
`streams_channel`.`provider_id` = 1
AND
`streams_channel`.`followers` < 50
ORDER BY `streams_channel`.`id` ASC
LIMIT 15000 OFFSET 1440000;
```
|
The best index for this query is probably `streams_channel(provider_id, followers, id)`. You could also add `uid` so the index covers the query (that is, all the columns being used are in the index).
However, this index will not prevent the final sort for the `ORDER BY`, which may be the performance issue. How many rows would the query return without the `LIMIT`?
It is possible that an index on `streams_channel(provider_id, id, followers, uid)` would prevent the sort. You would need to test on your data.
|
You need indexes on the columns in the WHERE clause.
You can see the effect of changes by running EXPLAIN before and after. You should see TABLE SCAN removed by adding indexes.
Order of WHERE clauses matters. Make the first clause eliminate the majority of the rows and make it easier for subsequent clauses.
|
Best key for a MySQL query on a large table (> 1M)
|
[
"",
"mysql",
"sql",
"optimization",
"indexing",
""
] |
I have a Client data table, selected columns of which are shown below:
```
Row_ID Client_ID Status_ID From_date To_date
1 123456 4 20/12/2007 18:02 20/12/2007 18:07
2 789087 4 20/12/2007 18:02 20/12/2007 18:07
3 789087 4 20/12/2007 18:07 20/12/2007 18:50
4 789087 4 20/12/2007 18:50 21/12/2007 10:38
5 123456 4 20/12/2007 18:07 20/12/2007 18:50
6 123456 4 20/12/2007 18:50 21/12/2007 10:38
7 123456 4 21/12/2007 10:38 21/12/2007 16:39
8 789087 4 21/12/2007 10:38 21/12/2007 17:54
9 789087 4 21/12/2007 17:54 21/12/2007 18:32
10 789087 4 21/12/2007 18:32 22/12/2007 06:48
11 123456 5 21/12/2007 16:39
12 789087 5 22/12/2007 06:48 22/12/2007 10:53
13 789087 4 22/12/2007 10:53 22/12/2007 11:51
14 789087 5 22/12/2007 11:51
```
After putting the data into ascending order by Client\_ID and then by From\_date, my objective is to add a calculated Rank\_ID every time there is a change in the status for that client when comparing the status to the previous line. The desired values I want for the Rank\_ID are shown below:
```
Row_ID Client_ID Status_ID From_date To_date Rank_ID
1 123456 4 20/12/2007 18:02 20/12/2007 18:07 1
5 123456 4 20/12/2007 18:07 20/12/2007 18:50 1
6 123456 4 20/12/2007 18:50 21/12/2007 10:38 1
7 123456 4 21/12/2007 10:38 21/12/2007 16:39 1
11 123456 5 21/12/2007 16:39 2
2 789087 4 20/12/2007 18:02 20/12/2007 18:07 3
3 789087 4 20/12/2007 18:07 20/12/2007 18:50 3
4 789087 4 20/12/2007 18:50 21/12/2007 10:38 3
8 789087 4 21/12/2007 10:38 21/12/2007 17:54 3
9 789087 4 21/12/2007 17:54 21/12/2007 18:32 3
10 789087 4 21/12/2007 18:32 22/12/2007 06:48 3
12 789087 5 22/12/2007 06:48 22/12/2007 10:53 4
13 789087 4 22/12/2007 10:53 22/12/2007 11:51 5
14 789087 5 22/12/2007 11:51 6
```
I am trying to use DENSE\_RANK as an analytical function, my "incorrect" SQL code being below
```
SELECT t1.*, DENSE_RANK () OVER (ORDER BY t1.client_id, t1.status_id) rank_id
FROM (SELECT c.client_ID, c.status_id, c.from_date, c.to_date
FROM client c
ORDER BY c.client_id, c.from_date) t1
ORDER BY t1.client_id, t1.from_date
```
However, the problem I am encountering is that, given the SQL code as written, it calculates the Rank\_ID as follows:
```
Row_ID Client_ID Status_ID From_date To_date Rank_ID
1 123456 4 20/12/2007 18:02 20/12/2007 18:07 1
5 123456 4 20/12/2007 18:07 20/12/2007 18:50 1
6 123456 4 20/12/2007 18:50 21/12/2007 10:38 1
7 123456 4 21/12/2007 10:38 21/12/2007 16:39 1
11 123456 5 21/12/2007 16:39 2
2 789087 4 20/12/2007 18:02 20/12/2007 18:07 3
3 789087 4 20/12/2007 18:07 20/12/2007 18:50 3
4 789087 4 20/12/2007 18:50 21/12/2007 10:38 3
8 789087 4 21/12/2007 10:38 21/12/2007 17:54 3
9 789087 4 21/12/2007 17:54 21/12/2007 18:32 3
10 789087 4 21/12/2007 18:32 22/12/2007 06:48 3
12 789087 5 22/12/2007 06:48 22/12/2007 10:53 4
13 789087 4 22/12/2007 10:53 22/12/2007 11:51 3
14 789087 5 22/12/2007 11:51 4
```
For record 13, the returned Rank\_ID is 3 (whereas I want 5, as it is a change in status for that client compared to the status on the previous record for that client), and for record 14 the returned Rank\_ID is 4, whereas I want 6, as it is again a change in status for that client compared to the previous line.
I guess the problem is that my SQL orders the data by Client\_ID and then by Status\_ID, and so I can see why it produces the answers it gives. The problem is no matter what changes I make to the DENSE\_RANK line, I can't obtain the answer I want.
Any help would be appreciated.
|
As I understood, this is what you need:
```
select client_ID, status_id, from_date, to_date,
sum(start_of_group) over (order by client_ID, from_date) + 1 rank
from (SELECT c.client_ID, c.status_id, c.from_date, c.to_date,
case when lag(c.client_ID, 1, c.client_ID) over (order by c.client_ID, c.from_date) = c.client_ID
and lag(c.status_id, 1, c.status_id) over (order by c.client_ID, c.from_date) = c.status_id
then 0 else 1 end start_of_group
FROM client c)
order by client_ID, from_date
```
[SQLFiddle](http://sqlfiddle.com/#!4/4e13a/1)
|
The thing is that you need to partition your ranking over the CHANGE of the status, not the VALUE of the status. I'm leaving some extra columns in the output so that you can see how it is all derived:
```
WITH dat as (
SELECT 1 row_id, 123456 client_id, 4 status, to_date('20/12/2007 18:02','dd/mm/yyyy hh24:mi') frdate, to_date('20/12/2007 18:07','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 2 row_id, 789087 client_id, 4 status, to_date('20/12/2007 18:02','dd/mm/yyyy hh24:mi') frdate, to_date('20/12/2007 18:07','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 3 row_id, 789087 client_id, 4 status, to_date('20/12/2007 18:07','dd/mm/yyyy hh24:mi') frdate, to_date('20/12/2007 18:50','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 4 row_id, 789087 client_id, 4 status, to_date('20/12/2007 18:50','dd/mm/yyyy hh24:mi') frdate, to_date('21/12/2007 10:38','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 5 row_id, 123456 client_id, 4 status, to_date('20/12/2007 18:07','dd/mm/yyyy hh24:mi') frdate, to_date('20/12/2007 18:50','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 6 row_id, 123456 client_id, 4 status, to_date('20/12/2007 18:50','dd/mm/yyyy hh24:mi') frdate, to_date('21/12/2007 10:38','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 7 row_id, 123456 client_id, 4 status, to_date('21/12/2007 10:38','dd/mm/yyyy hh24:mi') frdate, to_date('21/12/2007 16:39','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 8 row_id, 789087 client_id, 4 status, to_date('21/12/2007 10:38','dd/mm/yyyy hh24:mi') frdate, to_date('21/12/2007 17:54','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 9 row_id, 789087 client_id, 4 status, to_date('21/12/2007 17:54','dd/mm/yyyy hh24:mi') frdate, to_date('21/12/2007 18:32','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 10 row_id, 789087 client_id, 4 status, to_date('21/12/2007 18:32','dd/mm/yyyy hh24:mi') frdate, to_date('22/12/2007 06:48','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 11 row_id, 123456 client_id, 5 status, to_date('21/12/2007 16:39','dd/mm/yyyy hh24:mi') frdate, null from dual union all
SELECT 12 row_id, 789087 client_id, 5 status, to_date('22/12/2007 06:48','dd/mm/yyyy hh24:mi') frdate, to_date('22/12/2007 10:53','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 13 row_id, 789087 client_id, 4 status, to_date('22/12/2007 10:53','dd/mm/yyyy hh24:mi') frdate, to_date('22/12/2007 11:51','dd/mm/yyyy hh24:mi') todate from dual union all
SELECT 14 row_id, 789087 client_id, 5 status, to_date('22/12/2007 11:51','dd/mm/yyyy hh24:mi') frdate, null from dual)
SELECT t1.*, DENSE_RANK () OVER (ORDER BY t1.client_id, t1.chg_status) rank_id
FROM (select client_id, status, prev_status, sum(case when nvl(prev_status,-1) != status then 1 else 0 end) over (partition by client_id order by frdate) chg_status, frdate, todate
from (
SELECT c.client_ID
, c.status
, lag(status) over (partition by client_id order by frdate) as prev_status
, c.frdate
, c.todate
FROM dat c
ORDER BY c.client_id, c.frdate)) t1
ORDER BY t1.client_id, t1.frdate
```
Returns:
```
CLIENT_ID, STATUS, PREV_STATUS, CHG_STATUS, FRDATE, TODATE, RANK_ID
123456, 4, , 1, 20/12/2007 6:02:00 PM, 20/12/2007 6:07:00 PM, 1
123456, 4, 4, 1, 20/12/2007 6:07:00 PM, 20/12/2007 6:50:00 PM, 1
123456, 4, 4, 1, 20/12/2007 6:50:00 PM, 21/12/2007 10:38:00 AM, 1
123456, 4, 4, 1, 21/12/2007 10:38:00 AM, 21/12/2007 4:39:00 PM, 1
123456, 5, 4, 2, 21/12/2007 4:39:00 PM,, 2
789087, 4, , 1, 20/12/2007 6:02:00 PM,20/12/2007 6:07:00 PM, 3
789087, 4, 4, 1, 20/12/2007 6:07:00 PM,20/12/2007 6:50:00 PM, 3
789087, 4, 4, 1, 20/12/2007 6:50:00 PM, 21/12/2007 10:38:00 AM, 3
789087, 4, 4, 1, 21/12/2007 10:38:00 AM, 21/12/2007 5:54:00 PM, 3
789087, 4, 4, 1, 21/12/2007 5:54:00 PM, 21/12/2007 6:32:00 PM, 3
789087, 4, 4, 1, 21/12/2007 6:32:00 PM,22/12/2007 6:48:00 AM, 3
789087, 5, 4, 2, 22/12/2007 6:48:00 AM, 22/12/2007 10:53:00 AM, 4
789087, 4, 5, 3, 22/12/2007 10:53:00 AM, 22/12/2007 11:51:00 AM, 5
789087, 5, 4, 4, 22/12/2007 11:51:00 AM,, 6
```
|
Oracle SQL - DENSE_RANK
|
[
"",
"sql",
"oracle",
"dense-rank",
""
] |
I have a Postgres 9.3 table with two columns. The first column has times and the second has routes. A route may have multiple times. I want to list every route with its minimum time. My table:
```
Times Routes
07:15:00 Route a
09:15:00 Route a
08:15:00 Route b
11:30:00 Route b
09:15:00 Route c
12:00:00 Route c
```
The output I want:
```
Times Routes
07:15:00 Route a
08:15:00 Route b
09:15:00 Route c
```
Any help would be appreciated; thanks in advance.
|
This can be done using the [`MIN` aggregate function](http://www.postgresql.org/docs/8.0/static/functions-aggregate.html), and then grouping by the `Routes` column:
```
SELECT Routes, MIN(Times) FROM Table GROUP BY Routes
```
The `GROUP BY` clause is used to group rows together into a single row which have the same value for the field(s) specified in the `GROUP BY` clause. You can then use aggregate functions such as `MIN`, `MAX`, `SUM`, `AVG`, ... to compute values from the rows which have been grouped together.
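As a minimal check of this pattern (run here through Python's `sqlite3`, since the query is plain standard SQL; the table name `timetable` is invented because `Table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE timetable (Times TEXT, Routes TEXT);
INSERT INTO timetable VALUES
  ('07:15:00', 'Route a'), ('09:15:00', 'Route a'),
  ('08:15:00', 'Route b'), ('11:30:00', 'Route b'),
  ('09:15:00', 'Route c'), ('12:00:00', 'Route c');
""")

# MIN works on 'HH:MM:SS' strings because they compare lexicographically
rows = conn.execute(
    "SELECT Routes, MIN(Times) FROM timetable GROUP BY Routes ORDER BY Routes"
).fetchall()
print(rows)
```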
|
```
select distinct on(routes) routes
,times
from p
order by routes,times asc
```
---
[`DISTINCT ON`](http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-DISTINCT)
Will return the "*first*" row of each set of rows where the expression is equal.
As per the docs:
> DISTINCT ON ( expression [, ...] ) keeps only the first row of each
> set of rows where the given expressions evaluate to equal. [...] Note
> that the "first row" of each set is unpredictable unless ORDER BY is
> used to ensure that the desired row appears first. [...] The DISTINCT
> ON expression(s) must match the leftmost ORDER BY expression(s).
|
POSTGRES min/least value
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"postgresql-9.3",
"distinct-on",
""
] |
I am looking for a way to find the maximum number of repetitions of any single character in a string.
For instance :
```
String NMCR
-----------------------
akhsdjjjaajjj 6
AABBDDDDDDD 7
```
|
An odd requirement, but here is a way:
```
create or replace
function max_repetetive_letter_count (string varchar2) return integer
is
letter_col SYS.KU$_VCNT := SYS.KU$_VCNT(); -- A handy collection type
l_max_count integer;
begin
letter_col.extend(length(string));
for i in 1..length(string) loop
letter_col(i) := substr(string,i,1);
end loop;
select max(letter_count)
into l_max_count
from
( select column_value, count(*) letter_count
from table(letter_col)
group by column_value
);
return l_max_count;
end;
/
```
Example usage:
```
SQL> select string, max_repetetive_letter_count(string)
2 from
3 ( select 'ajkhsdjjjaajjj' as string from dual
4 union all
5 select 'AABBDDDDDDD' as string from dual
6 );
STRING MAX_REPETETIVE_LETTER_COUNT(STRING)
-------------- -----------------------------------
ajkhsdjjjaajjj 7
AABBDDDDDDD 7
```
(NB The 6 in your example was incorrect!)
|
My try, with steps highlighted by CTEs:
```
with data as (select 'akhsdjjjaajjj' txt from dual
union all
select 'AABBDDDDDDD' txt from dual
),
chars as(select txt,substr(txt,lvl, 1) c, lvl
from data join (select level lvl from dual connect by level < 1000)
on length(data.txt) >= lvl
),
counts as (select txt, c, count(*) cnt
from chars
group by txt, c
)
select txt, max(cnt)
from counts
group by txt;
```
Result:
> TXT MAX(CNT)
>
> akhsdjjjaajjj 6
>
> AABBDDDDDDD 7
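Both SQL answers count total occurrences per character (not consecutive runs) and take the maximum. For reference, the same definition is a one-liner outside the database, sketched here in Python:

```python
from collections import Counter

def max_char_repetition(s: str) -> int:
    """Highest number of occurrences of any single character in s
    (total occurrences, not consecutive runs, matching the SQL answers)."""
    return max(Counter(s).values()) if s else 0

print(max_char_repetition('akhsdjjjaajjj'))  # 'j' occurs 6 times
print(max_char_repetition('AABBDDDDDDD'))    # 'D' occurs 7 times
```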
|
Calculating the number of maximum repetitive character in Oracle
|
[
"",
"sql",
"string",
"oracle",
"count",
""
] |
Is it possible to group the result of my SQL SELECT statements, summing and grouping in one statement, without creating a temp table? I have a table saving the total debit and credit amounts against the credit account No. and debit account No.,
putting zero in the opposite field. I made a first grouping with the union SELECT statement below, and I need to sum the output based on the repeated account names.
Can I get this easily, or do I have to load it into a temp table? My select statement is below:
```
select journal_drname AS ACCNAME, sum(amount) AS DEBT,0 as CREDIT
FROM JOURNAL GROUP BY journal_drname
UNION select journal_crname AS ACCNAME, 0 as DEBT, sum(amount) AS CREDIT
FROM JOURNAL GROUP BY journal_crname
```
returning data like:
```
ACCNAME DEBIT CREDIT
-------- ------- ------
CASH 0 1,000
CASH 900 0
CASH 300 0
BANK 200 0
BANK 400 0
```
Can I summarize the above to be grouped as follows:
```
ACCNAME DEBIT CREDIT
-------- ------- ------
CASH 200 0
BANK 600 0
```
And I want to join the results so they net out as above.
I'm trying to avoid using a temp table, but if that's the only practical way to do this, I'd like to know more about using temp tables in this fashion.
I also tried some kind of join and union, but it didn't satisfy the condition and I couldn't figure out how to do that either.
|
Since you say your query is working, let's treat your union query as a table:
```
( ...... ) myTable
```
then just create a subquery to get the final step
```
SELECT ACCNAME,
CASE WHEN SUM(CREDIT - DEBT) < 0 THEN - SUM(CREDIT - DEBT)
ELSE 0
END DEBT,
CASE WHEN SUM(CREDIT - DEBT) > 0 THEN SUM(CREDIT - DEBT)
ELSE 0
END CREDIT
FROM ( ...... ) myTable
GROUP BY ACCNAME
```
|
If you want only one row per subquery, then remove the `group by`:
```
select 'CASH' as ACCNAME, sum(amount) AS DEBT, 0 as CREDIT
FROM JOURNAL
UNION ALL
select 'BANK' as ACCNAME, 0 as DEBT, sum(amount) AS CREDIT
FROM JOURNAL;
```
(Note: the `ACCNAME` values might be reversed.)
Important: You should be using `UNION ALL` for this type of query rather than `UNION`. There is no reason to incur the overhead of removing duplicates, unless you intend to remove duplicates.
Also, your original query should have worked, unless there are unusual characters in the accname fields.
|
nested union select statement
|
[
"",
"mysql",
"sql",
"derby",
"javadb",
""
] |
Here's a simplified example of my data:
```
Acct AddType Address
==== ======= ============
1001 Home 1239 Maple
1001 Billing 456 Broadway
1002 Billing 1234 Main
1003 Home 1278 Walnut
```
I'm trying to create a SQL query that will give me one address for each account. It should be the billing address for each account, but if there isn't a billing address, then I want home address.
something like:
```
SELECT *
FROM Table1
WHERE
(COUNT(Acct) > 1 AND AddType = 'Billing') OR
(COUNT(Acct) = 1 AND AddType = 'Home')
```
which should return:
```
Acct AddType Address
==== ======= ============
1001 Billing 456 Broadway
1002 Billing 1234 Main
1003 Home 1278 Walnut
```
|
A simple (and efficient) way to do this is to use `union all` and some additional logic:
```
select t.*
from table1 t
where addtype = 'Billing'
union all
select t.*
from table1 t
where addtype = 'Home' and
not exists (select 1 from table1 t2 where t2.acct = t.acct and t2.addtype = 'Billing');
```
There are other methods. This is typically the simplest for only two values. A more general form is to use `row_number()`:
```
select t.*
from (select t.*,
row_number() over (partition by acct
order by (case when addtype = 'Billing' then 1
when addtype = 'Home' then 2
else 3
end)
) as seqnum
from table1 t
) t
where seqnum = 1;
```
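The `row_number()` approach is portable to any engine with window functions. Here is a minimal check against the question's sample data, run through Python's `sqlite3` (assumes SQLite 3.25+ for window-function support):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Acct INTEGER, AddType TEXT, Address TEXT);
INSERT INTO Table1 VALUES
  (1001, 'Home',    '1239 Maple'),
  (1001, 'Billing', '456 Broadway'),
  (1002, 'Billing', '1234 Main'),
  (1003, 'Home',    '1278 Walnut');
""")

rows = conn.execute("""
SELECT Acct, AddType, Address
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY Acct
                                ORDER BY CASE WHEN AddType = 'Billing' THEN 1
                                              WHEN AddType = 'Home'    THEN 2
                                              ELSE 3 END) AS seqnum
      FROM Table1 t)
WHERE seqnum = 1
ORDER BY Acct
""").fetchall()
print(rows)
```

Account 1001 gets its billing address; 1003, which has no billing row, falls back to home.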
|
I've used self joins for things like this:
```
SELECT
Acct,
case
when c.Address IS NOT NULL then c.Address
else b.Address
end as Address,
*
FROM
Table1 a LEFT OUTER JOIN Table1 b ON a.Acct = b.Acct
AND b.AddType = 'Home'
LEFT OUTER JOIN Table1 c ON a.Acct = c.Acct
AND c.AddType = 'Billing'
```
|
SQL query that will give me billing address for each account, but home address if there is no billing
|
[
"",
"sql",
"sql-server",
""
] |
I have a table which contains historical records for the votes given to entrants in a competition. A simplified example of the structure is:
```
id | entrantId | roundId | judgeId
```
Some example data:
```
401 | 32 | 3 | 4
402 | 32 | 3 | 5
403 | 35 | 3 | 4
404 | 32 | 4 | 4
405 | 36 | 3 | 4
406 | 36 | 3 | 10
```
I need to get all records of users who have `judgeId = 4` and `roundId = 3`, but where the same `entrantId` doesn't have a record of any kind where `roundId = 4`.
So using the data above, I would be looking to retrieve records `403` and `405`, since these entrants (35 & 36) have records for judge 4 in round 3 but have no records in round 4 (i.e. roundId = 4).
I'm assuming I'd need to have a SELECT statement that uses an 'IN' based clause, but I don't know enough about this to be able to build a suitable query.
|
You can use correlated subquery:
```
SELECT *
FROM tab t1
WHERE judgeId = 4
AND roundId = 3
AND NOT EXISTS (SELECT 1
FROM tab t2
WHERE t1.entrantId = t2.entrantId
AND t2.roundId = 4);
```
`LiveDemo`
Output:
```
βββββββ¦ββββββββββββ¦ββββββββββ¦ββββββββββ
β id β entrantId β roundId β judgeId β
β ββββββ¬ββββββββββββ¬ββββββββββ¬ββββββββββ£
β 403 β 35 β 3 β 4 β
β 405 β 36 β 3 β 4 β
βββββββ©ββββββββββββ©ββββββββββ©ββββββββββ
```
---
Alternatively using windowed functions:
```
WITH cte AS(
SELECT *,
c = COUNT(CASE WHEN roundId = 4 THEN 1 END) OVER (PARTITION BY entrantId)
FROM #tab
)
SELECT id,entrantId,roundId,judgeId
FROM cte
WHERE c = 0
AND judgeId = 4
AND roundId = 3;
```
`LiveDemo2`
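The correlated `NOT EXISTS` query can be checked against the question's sample data with a quick SQLite session (a sketch using Python's built-in `sqlite3`; the table name `votes` is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE votes (id INTEGER, entrantId INTEGER, roundId INTEGER, judgeId INTEGER);
INSERT INTO votes VALUES
  (401, 32, 3, 4), (402, 32, 3, 5), (403, 35, 3, 4),
  (404, 32, 4, 4), (405, 36, 3, 4), (406, 36, 3, 10);
""")

rows = conn.execute("""
SELECT id
FROM votes t1
WHERE judgeId = 4
  AND roundId = 3
  AND NOT EXISTS (SELECT 1
                  FROM votes t2
                  WHERE t2.entrantId = t1.entrantId
                    AND t2.roundId = 4)
ORDER BY id
""").fetchall()
print(rows)  # entrant 32 is excluded by its round-4 record (id 404)
```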
|
You might use `EXCEPT`, but you won't get the ID.
```
select entrantId
from table
where roundId = 3 and judgeId = 4
except
select entrantId
from table
where roundId = 4;
```
|
Select users who have records equal to y but not x
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have three tables:
```
Users { id | name | email }
User_Group { User_id | group_id }
Subscriptions {user_id, sub_level }
```
* A user can be assigned to more than one user group.
* A user can be subscribed to more than one subscription plan.
Let's suppose we have:
Table Users:
```
[1, John, john@email.com]
[2, Lara, lara@email.com]
```
Table User Group:
```
[1,6] // User 1 is assigned to Group 6
[1,3] // User 1 also assigned to Group 3
[2,3] // User 2 in ONLY assigned to Group 3
```
Table Subscriptions :
```
[1, 8] // User 1 have subscription level 8
[2, 8] // User 2 have subscription level 8
[2, 9] // Also User 2 have subscription level 9
```
What I want to get is:
All UNIQUE users who are assigned to ONLY group 3 and who have at least one subscription.
What I have tried:
```
SELECT U.Username, U.email, G.group_id, S.sub_level FROM `Users` AS U
INNER JOIN `User_Group` AS G
ON U.id = G.user_id
INNER JOIN `Subscriptions` AS S
ON U.id = S.user_id
WHERE G.group_id = 3
Limit 0,10
```
The problem is that it will display users who are also assigned to other groups, like user 1 who is in groups 6 and 3. It will also display duplicate lines, because one user can have many subscription levels.
|
One method is to use aggregation:
```
SELECT U.Username, U.email, MAX(G.group_id)
FROM `Users` U INNER JOIN
`User_Group` G
ON U.id = G.user_id INNER JOIN
`Subscriptions` S
ON U.id = S.user_id
GROUP BY U.Username, U.email
HAVING MAX(G.group_id) = 3 AND MIN(G.group_id) = 3 ; -- condition on groups
```
The condition on subscriptions is handled just by the `join` condition.
This is probably more efficient as:
```
select u.*
from users u
where exists (select 1
from user_groups ug
where ug.user_id = u.id and ug.group_id = 3
) and
not exists (select 1
from user_groups ug
where ug.user_id = u.id and ug.group_id <> 3
) and
exists (select 1
from subscriptions s
where s.user_id = u.id
);
```
For this query, you want indexes on `user_groups(user_id, group_id)` and `subscriptions(user_id)`. Actually, these indexes are a good idea for both ways of formulating the query.
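The `HAVING MIN = MAX` trick in the first query can be checked against the question's sample data (a sketch via Python's `sqlite3`, selecting just the name for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (id INTEGER, name TEXT, email TEXT);
CREATE TABLE User_Group (user_id INTEGER, group_id INTEGER);
CREATE TABLE Subscriptions (user_id INTEGER, sub_level INTEGER);
INSERT INTO Users VALUES (1, 'John', 'john@email.com'), (2, 'Lara', 'lara@email.com');
INSERT INTO User_Group VALUES (1, 6), (1, 3), (2, 3);
INSERT INTO Subscriptions VALUES (1, 8), (2, 8), (2, 9);
""")

rows = conn.execute("""
SELECT U.name
FROM Users U
JOIN User_Group G    ON U.id = G.user_id
JOIN Subscriptions S ON U.id = S.user_id
GROUP BY U.id, U.name
HAVING MIN(G.group_id) = 3 AND MAX(G.group_id) = 3
""").fetchall()
print(rows)  # John is excluded because he is also in group 6
```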
|
```
SELECT
U.username, U.email,
G.group_id,
S.sub_level
FROM
Users U
INNER JOIN User_Group G ON
G.user_id = u.id AND
G.group_id = 3
INNER JOIN Subscriptions S ON S.user_id = U.id
WHERE
NOT EXISTS
(
SELECT *
FROM User_Group G2
WHERE G2.user_id = U.user_id AND G2.group_id <> 3
)
```
|
Advanced Inner Join between User, User_Group and Subscriptions Tables
|
[
"",
"mysql",
"sql",
"database",
"select",
"inner-join",
""
] |
I have the following situation: every row has a timestamp recording when it was written to the table. Now I want to evaluate, per day, how many rows were inserted before 5 am and how many after. How can that be done?
|
You can use the HH24 format to get the hour in 24-hour time:
```
select trunc(created_Date) as the_day
,sum(case when to_number(to_char(created_Date,'HH24')) < 5 then 1 else 0 end) as before_five
,sum(case when to_number(to_char(created_Date,'HH24')) >= 5 then 1 else 0 end) as after_five
from yourtable
group by trunc(created_Date)
```
Per USER's comment on 5:10, to show timestamps just before and after 5:
```
select trunc(created_Date) as the_day
,sum(case when to_number(to_char(created_Date,'HH24')) < 5 then 1 else 0 end) as before_five
,sum(case when to_number(to_char(created_Date,'HH24')) >= 5 then 1 else 0 end) as after_five
from (
-- one row January 1 just after 5:00 a.m.
select to_Date('01/01/2015 05:10:12','dd/mm/yyyy hh24:mi:ss') as created_date from dual
union all
-- one row January 2 just before 5:00 a.m.
select to_Date('02/01/2015 04:59:12','dd/mm/yyyy hh24:mi:ss') as created_date from dual
)
group by trunc(created_Date);
THE_DAY, BEFORE_FIVE, AFTER_FIVE
02/01/2015, 1, 0
01/01/2015, 0, 1
```
|
Assuming your timestamp is a DATE column:
```
select trunc(date_written) as day
, count (case when (date_written-trunc(date_written))*24 < 5 then 1 end) before_5_count
, count (case when (date_written-trunc(date_written))*24 >= 5 then 1 end) after_5_count
from mytable
group by trunc(date_written)
```
|
count rows before time
|
[
"",
"sql",
"oracle",
"evaluate",
""
] |
I am using SQL and want to show a value only the first time it appears, when it is in the column more than once.
For example, if my table were to look like this, the first column shows 3 instances of the value '2'.
```
Increment | Agenda
-------------------------------------
2 | Minutes Previous Meeting
2 | Minutes Previous Meeting
2 | Minutes Previous Meeting
11 | First Aid
6 | PAT Testing
```
In this case, I would only want to show the first '2', so it would look like this:
```
Increment | Agenda
-------------------------------------
2 | Minutes Previous Meeting
| Minutes Previous Meeting
| Minutes Previous Meeting
11 | First Aid
6 | PAT Testing
```
**Revision:**
Here is the code I am using:
```
SELECT A.Increment,
A.Agenda,
MS.Summary,
MS.Attendies,
MS.Appologies,
MS.Department,
MS.Date,
MS.WhoFor,
Ms.WhenFor,
MS.Name + ': ' + MS.Date as CreatedBy
FROM HSAgenda A
LEFT OUTER JOIN
Meeting_summary MS ON A.Increment = MS.Id
WHERE (A.Increment IN (SELECT SplitValue FROM dbo.Split ((SELECT Agenda FROM Agenda
WHERE AgendaID=@AgendaID ), ',') AS Split_1))
AND (A.Increment <> '1')
AND (MS.Department = @Department) AND (MS.AgendaId=@AgendaID)
ORDER BY A.Id, A.Parent
```
|
```
WITH cte as (
SELECT Age,
Cat,
ROW_NUMBER() OVER (PARTITION BY Age ORDER BY Cat) as rn
FROM YourTable
)
SELECT CASE WHEN rn = 1 THEN Age
ELSE NULL
END as Age,
Cat
FROM CTE
```
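A quick way to check the CTE approach is to run the equivalent query in SQLite through Python (window functions need SQLite 3.25 or later), with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
CREATE TABLE t (increment INTEGER, agenda TEXT);
INSERT INTO t VALUES
 (2,'Minutes Previous Meeting'),(2,'Minutes Previous Meeting'),
 (2,'Minutes Previous Meeting'),(11,'First Aid'),(6,'PAT Testing');
""")
rows = conn.execute("""
WITH cte AS (
  SELECT increment, agenda,
         ROW_NUMBER() OVER (PARTITION BY increment ORDER BY agenda) AS rn
  FROM t
)
SELECT CASE WHEN rn = 1 THEN increment END AS increment, agenda
FROM cte
""").fetchall()
for r in rows:
    print(r)
```

Each `increment` value survives exactly once; the repeats come back as `NULL`.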
|
Odd request, but try using a window function.
```
SELECT (CASE
WHEN RowNum <> 1 THEN NULL
ELSE Age
END
) Age
,Cat
FROM
(
SELECT ROW_NUMBER()OVER(PARTITION BY Age ORDER BY Cat) RowNum
,Age
,Cat
FROM dbo.YourTable
) src
```
|
SQL Only show First value of (where value is in the column more than once)
|
[
"",
"sql",
"sql-server",
""
] |
I received a workbook which contains two tables in Power Pivot (one around one million rows, the other 20 million rows). I would like to rip this out (as anything really, but let's say a CSV) so that I can use it in R + PostgreSQL.
I can't export to an Excel table as there are more than 1 million rows; and copy-pasting the data only works when I select around 200,000 rows.
I tried converting the xlsx into a zip and opening the "item.data" file in notepad++, however it was encrypted.
I put together some VBA which works for around 0.5 million rows:
```
Public Sub CreatePowerPivotDmvInventory()
Dim conn As ADODB.Connection
Dim sheet As Excel.Worksheet
Dim wbTarget As Workbook
On Error GoTo FailureOutput
Set wbTarget = ActiveWorkbook
wbTarget.Model.Initialize
Set conn = wbTarget.Model.DataModelConnection.ModelConnection.ADOConnection
' Call function by passing the DMV name
' E.g. Partners
WriteDmvContent "Partners", conn
MsgBox "Finished"
Exit Sub
FailureOutput:
MsgBox Err.Description
End Sub
Private Sub WriteDmvContent(ByVal dmvName As String, ByRef conn As ADODB.Connection)
Dim rs As ADODB.Recordset
Dim mdx As String
Dim i As Integer
mdx = "EVALUATE " & dmvName
Set rs = New ADODB.Recordset
rs.ActiveConnection = conn
rs.Open mdx, conn, adOpenForwardOnly, adLockOptimistic
' Setup CSV file (improve this code)
Dim myFile As String
myFile = "H:\output_table_" & dmvName & ".csv"
Open myFile For Output As #1
' Output column names
For i = 0 To rs.Fields.count - 1
If i = rs.Fields.count - 1 Then
Write #1, rs.Fields(i).Name
Else
Write #1, rs.Fields(i).Name,
End If
Next i
' Output of the query results
Do Until rs.EOF
For i = 0 To rs.Fields.count - 1
If i = rs.Fields.count - 1 Then
Write #1, rs.Fields(i)
Else
Write #1, rs.Fields(i),
End If
Next i
rs.MoveNext
Loop
Close #1
rs.Close
Set rs = Nothing
Exit Sub
FailureOutput:
MsgBox Err.Description
End Sub
```
|
[DAX Studio](https://daxstudio.codeplex.com/) will allow you to query the data model in an Excel workbook and output to various formats, including flat files.
The query you'll need is just:
```
EVALUATE
<table name>
```
|
I have found a working (VBA) solution [though greggy's also works for me]. My table was too big to export in one chunk, so I loop over it and filter by 'month'. This seems to work and produces a 1.2 GB CSV after I append everything together:
```
Function YYYYMM(aDate As Date)
YYYYMM = year(aDate) * 100 + month(aDate)
End Function
Function NextYYYYMM(YYYYMM As Long)
If YYYYMM Mod 100 = 12 Then
NextYYYYMM = YYYYMM + 100 - 11
Else
NextYYYYMM = YYYYMM + 1
End If
End Function
Public Sub CreatePowerPivotDmvInventory()
Dim conn As ADODB.Connection
Dim tblname As String
Dim wbTarget As Workbook
On Error GoTo FailureOutput
Set wbTarget = ActiveWorkbook
wbTarget.Model.Initialize
Set conn = wbTarget.Model.DataModelConnection.ModelConnection.ADOConnection
' Call function by passing the DMV name
tblname = "table1"
WriteDmvContent tblname, conn
MsgBox "Finished"
Exit Sub
FailureOutput:
MsgBox Err.Description
End Sub
Private Sub WriteDmvContent(ByVal dmvName As String, ByRef conn As ADODB.Connection)
Dim rs As ADODB.Recordset
Dim mdx As String
Dim i As Integer
'If table small enough:
'mdx = "EVALUATE " & dmvName
'Other-wise filter:
Dim eval_field As String
Dim eval_val As Variant
'Loop through year_month
Dim CurrYM As Long, LimYM As Long
Dim String_Date As String
CurrYM = YYYYMM(#12/1/2000#)
LimYM = YYYYMM(#12/1/2015#)
Do While CurrYM <= LimYM
String_Date = CStr(Left(CurrYM, 4)) + "-" + CStr(Right(CurrYM, 2))
Debug.Print String_Date
eval_field = "yearmonth"
eval_val = String_Date
mdx = "EVALUATE(CALCULATETABLE(" & dmvName & ", " & dmvName & "[" & eval_field & "] = """ & eval_val & """))"
Debug.Print (mdx)
Set rs = New ADODB.Recordset
rs.ActiveConnection = conn
rs.Open mdx, conn, adOpenForwardOnly, adLockOptimistic
' Setup CSV file (improve this code)
Dim myFile As String
myFile = "H:\vba_tbl_" & dmvName & "_" & eval_val & ".csv"
Debug.Print (myFile)
Open myFile For Output As #1
' Output column names
For i = 0 To rs.Fields.count - 1
If i = rs.Fields.count - 1 Then
Write #1, """" & rs.Fields(i).Name & """"
Else
Write #1, """" & rs.Fields(i).Name & """",
End If
Next i
' Output of the query results
Do Until rs.EOF
For i = 0 To rs.Fields.count - 1
If i = rs.Fields.count - 1 Then
Write #1, """" & rs.Fields(i) & """"
Else
Write #1, """" & rs.Fields(i) & """",
End If
Next i
rs.MoveNext
Loop
CurrYM = NextYYYYMM(CurrYM)
i = i + 1
Close #1
rs.Close
Set rs = Nothing
Loop
Exit Sub
FailureOutput:
MsgBox Err.Description
End Sub
```
|
Rip 20 million rows from Power Pivot ("Item.data")
|
[
"",
"sql",
"vba",
"excel",
"powerpivot",
""
] |
I have a string in multiple URLs starting with two characters followed by between 1-6 numbers, e.g. 'SO123456'. This string is rarely in the same position within the URL. Following the string is either '.html' or whitespace.
```
SELECT SUBSTRING(URL,PATINDEX('%SO[0-9]%',URL),8)
FROM Table
WHERE URL LIKE '%SO[0-9]%'
```
This code returns 'SO12.htm' if the string is shorter than 8 characters.
Not all of the URLs have this string, and if that's the case then I still need the query to produce 'Null'.
I'm trying to return the exact length of the string. Can anyone help me with a way to solve this, please? Is there a way to find the length of the matched wildcard string to use within the substring, so that only the exact string length is returned?
Many Thanks.
|
Not quite elaborated, but as a hint to get you started:
```
patindex = PATINDEX('%SO[0-9]%',URL) -> Index of the start of the pattern
charindex = CHARINDEX('.html', URL, patindex ) -> Index of the first '.html' after the start of the pattern.
patternLen = charindex - patindex
```
So something like the following may work:
```
SELECT
CHARINDEX('.html', URL,
PATINDEX('%SO[0-9]%',URL)
) -
PATINDEX('%SO[0-9]%',URL)
FROM ...
```
---
> Not all of the URLs have this string, and if that's the case then I
> still need the query to produce 'Null'.
-> Outer (self) join:
```
SELECT
allUrls.URL,
CHARINDEX('.html', u.URL, PATINDEX('%SO[0-9]%',u.URL) ) - PATINDEX('%SO[0-9]%', u.URL) -- Same as above
FROM MyTable allUrls
LEFT OUTER JOIN MyTable u
ON allUrls.URL = u.URL
AND u.URL LIKE '%SO[0-9]%'
```
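Outside T-SQL, the same "find the start, measure to the terminator" idea is easy to sanity-check. This Python sketch uses a regular expression in place of the `PATINDEX`/`CHARINDEX` arithmetic (the 1-6 digit bound comes from the question; `extract_code` is a made-up helper name), and it naturally returns `None` when the string is absent:

```python
import re

def extract_code(url):
    """Return the SO-number token from a URL, or None when absent.

    A regex replaces the PATINDEX/CHARINDEX arithmetic: find the start
    of the pattern, then match up to 6 digits (per the question), so the
    result has exactly the matched length.
    """
    m = re.search(r'SO[0-9]{1,6}', url)
    return m.group(0) if m else None

print(extract_code('http://example.com/page/SO12.html'))  # SO12
print(extract_code('http://example.com/SO123456 rest'))   # SO123456
print(extract_code('http://example.com/no-code-here'))    # None
```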
|
I think you should check the length of the column first dude.
Here's the code
```
SELECT CASE -- Check the length of the URL column
            WHEN LEN(URL) > 8 THEN SUBSTRING(URL,PATINDEX('%SO[0-9]%',URL),8)
            ELSE NULL -- do your stuff
       END as 'example'
FROM Table
```
|
Parsing a URL to find a varying length string
|
[
"",
"sql",
"sql-server",
"string",
"parsing",
"url",
""
] |
I'm trying to join 2 tables to get an output report. The tables involved are the stock and dailysales table.
Stock and Dailysales tables:
[](https://i.stack.imgur.com/EW3gz.jpg)
Desired output format:
[](https://i.stack.imgur.com/TYK0x.jpg)
I am trying to join 2 tables by using the below query
```
Select item,article,sold,stockonhand
from stock S
left join dailysales as D on S.item=D.item
group by item
```
I want the output to include all rows from the stock table, so that items which are in stock but not sold are also included in the report. Currently my report does not show items that were stocked but not sold.
Hope you can understand my context or point me to a tutorial which I could read up. I tried to search for few days and couldn't find an exact question to mine.
|
You can sum the sold quantity in a separate subquery; it is clearer than grouping across two tables:
```
SELECT
s.item,
s.article,
ISNULL(ds.sold, 0) AS qtysold,
s.stockonhand
FROM
stock s OUTER APPLY
(SELECT SUM(sold) AS sold FROM dailysales WHERE item = s.item GROUP BY item) ds
```
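SQLite has no `OUTER APPLY`, but a correlated scalar subquery with `COALESCE` expresses the same idea. A runnable sketch with invented stock rows (item 'B' is stocked but never sold, so it still appears with a 0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock (item TEXT, article TEXT, stockonhand INTEGER);
CREATE TABLE dailysales (item TEXT, sold INTEGER);
INSERT INTO stock VALUES ('A','pens',10), ('B','pads',5);  -- B never sold
INSERT INTO dailysales VALUES ('A',2), ('A',3);
""")
rows = conn.execute("""
SELECT s.item, s.article,
       COALESCE((SELECT SUM(d.sold)
                 FROM dailysales d
                 WHERE d.item = s.item), 0) AS qtysold,
       s.stockonhand
FROM stock s
ORDER BY s.item
""").fetchall()
print(rows)  # [('A', 'pens', 5, 10), ('B', 'pads', 0, 5)]
```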
|
Not tested -
```
select item,article,sum(sold),sum(stockonhand)-sum(sold) from (
select a.item,a.article,a.stockonhand,case when b.sold is null then 0 else b.sold end as sold
from stock a left join dailysales b on (a.item = b.item)) t
group by item,article;
```
It basically does the left join and puts 0 in the null column (so it can be summed),
and then sums up the results, grouping by all the non-aggregated columns in the select (which is what was wrong with your query).
|
Joining 2 tables error
|
[
"",
"sql",
"join",
"left-join",
"inner-join",
""
] |
I have a table with the following column
```
Name
----------------------
test10/20000020
test1/test2 / 20000001
test3/test4 / 20000002
test5/20000017
test5/test6 / 20000004
test5/20000007
```
I need to select the last value
I.e
```
20000020
20000001
20000002
20000017
20000004
20000007
```
I have tried with
```
(SUBSTRING(Name,0,(CHARINDEX('/',Name,0))))
```
But I am getting the first value,
i.e. test1, test2, test3
|
```
SELECT SUBSTRING( Name, LEN(Name) - CHARINDEX('/',REVERSE(Name)) + 2 , LEN(Name) ) FROM SAMPLE
```
This worked for me.
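The same "everything after the last delimiter" logic, sketched in Python for comparison (`rsplit` works from the right, like the `REVERSE`/`CHARINDEX` trick; the `strip` handles the spaces around the slash in rows like `test5/test6 / 20000004`):

```python
def last_segment(name):
    # Everything after the final '/', with surrounding spaces trimmed --
    # the same effect as the REVERSE/CHARINDEX arithmetic.
    return name.rsplit('/', 1)[-1].strip()

print(last_segment('test10/20000020'))         # 20000020
print(last_segment('test1/test2 / 20000001'))  # 20000001
```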
|
Using XML:
```
declare @xml xml;
declare @str nvarchar(max) = '';
with data as
(
select 'test1/test2/test3/344495' as r
union ALL
select 'test1/344556' as r
)
select @str = @str + '<r><v>' + replace(r,'/','</v><v>') + '</v></r>'
from data;
-- obtain xml
set @xml = cast(@str as xml);
-- select last value from each row
select v.value('(v/text())[last()]', 'nvarchar(50)')
from @xml.nodes('/r') as r(v)
```
Same idea but without variables:
```
;with data as
(
select 'test1/test2/test3/344495' as r
union ALL
select 'test1/344556' as r
),
xmlRows AS
(
select cast('<r><v>' + replace(r,'/','</v><v>') + '</v></r>' as xml) as r
from data
)
select v.value('(v/text())[last()]', 'nvarchar(50)') as lastValue
from xmlRows xr
cross APPLY
r.nodes('/r') as r(v)
```
|
Get Last values using Select Substring in SQL Query
|
[
"",
"sql",
"sql-server-2008",
""
] |
I'm kinda new to SQL Server and I have the following question: is there any possibility of renumbering the rows in a column?
For ex:
```
id date name
1 2016-01-02 John
2 2016-01-02 Jack
3 2016-01-02 John
4 2016-01-02 John
5 2016-01-03 Jack
6 2016-01-03 Jack
7 2016-01-04 John
8 2016-01-03 Jack
9 2016-01-02 John
10 2016-01-04 Jack
```
I would like that all "Johns" to start with id 1 and go on (2, 3, 4 etc) and all "Jacks" have the following number when "John" is done (5, 6, 7 etc). Thanks!
|
I hope this helps..
```
declare @t table (id int ,[date] date,name varchar(20))
insert into @t
( id, date, name )
values (1,'2016-01-02','John')
,(2,'2016-01-02','Jack')
,(3,'2016-01-02','John')
,(4,'2016-01-02','John')
,(5,'2016-01-03','Jack')
,(6,'2016-01-03','Jack')
,(7,'2016-01-04','John')
,(8,'2016-01-03','Jack')
,(9,'2016-01-02','John')
,(10,'2016-01-04','Jack')
select
row_number() over(order by name,[date]) as ID,
date ,
name
from
@t
order by name
```
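Combining the two answers' ideas, here is a runnable SQLite sketch (via Python; window functions need SQLite 3.25+) that puts the Johns first, via a `CASE` in the window's `ORDER BY`, and then numbers everyone sequentially:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
CREATE TABLE t (id INTEGER, d TEXT, name TEXT);
INSERT INTO t VALUES
 (1,'2016-01-02','John'),(2,'2016-01-02','Jack'),(3,'2016-01-02','John'),
 (4,'2016-01-02','John'),(5,'2016-01-03','Jack'),(6,'2016-01-03','Jack'),
 (7,'2016-01-04','John'),(8,'2016-01-03','Jack'),(9,'2016-01-02','John'),
 (10,'2016-01-04','Jack');
""")
rows = conn.execute("""
SELECT ROW_NUMBER() OVER (
         ORDER BY CASE name WHEN 'John' THEN 1 ELSE 2 END, d) AS rn,
       d, name
FROM t
ORDER BY rn
""").fetchall()
for rn, d, name in rows:
    print(rn, d, name)
```

The five Johns get numbers 1-5 and the five Jacks get 6-10, as the question asked.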
|
The `id` should just be an internal identifier you use for joins etc - I wouldn't change it. But you could query such a numbering using a window function:
```
SELECT ROW_NUMBER() OVER (ORDER BY CASE name WHEN 'John' THEN 1 ELSE 2 END) AS rn,
date,
name
FROM mytable
```
|
Renumbering rows in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
""
] |
I am using this SQL Server [database](https://gist.githubusercontent.com/pamelafox/ad4f6abbaac0a48aa781/raw/c72be44a1bdefcbaf8af63361f98595552131b9c/countries_by_population.sql). I am defining the following remarks for countries
```
CASE
WHEN density_per_sq_km > 1000 THEN 'Overpopulated'
WHEN density_per_sq_km > 500 THEN 'above average'
WHEN density_per_sq_km > 250 THEN 'average'
WHEN density_per_sq_km > 50 THEN 'below average'
ELSE 'Underpopulated'
END as remarks
```
Now I want to count how many countries there are in each remark. How can I do that? I am using the following query, but it fails:
```
SELECT
COUNT(country) as no_of_countries,
CASE
WHEN density_per_sq_km > 1000 THEN 'Overpopulated'
WHEN density_per_sq_km > 500 THEN 'above average'
WHEN density_per_sq_km > 250 THEN 'average'
WHEN density_per_sq_km > 50 THEN 'below average'
ELSE 'Underpopulated'
END as remarks
FROM
countries_by_population
GROUP BY
remarks;
```
|
In the `GROUP BY` clause you cannot use that column alias; use the case expression instead:
```
SELECT
COUNT(country) AS no_of_countries
, CASE
WHEN density_per_sq_km > 1000 THEN 'Overpopulated'
WHEN density_per_sq_km > 500 THEN 'above average'
WHEN density_per_sq_km > 250 THEN 'average'
WHEN density_per_sq_km > 50 THEN 'below average'
ELSE 'Underpopulated'
END AS remarks
FROM countries_by_population
GROUP BY
CASE
WHEN density_per_sq_km > 1000 THEN 'Overpopulated'
WHEN density_per_sq_km > 500 THEN 'above average'
WHEN density_per_sq_km > 250 THEN 'average'
WHEN density_per_sq_km > 50 THEN 'below average'
ELSE 'Underpopulated'
END
;
```
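For a quick check of the grouping logic, here is the same query run in SQLite through Python, with invented country rows covering every bucket (note: unlike SQL Server, SQLite does accept the alias in `GROUP BY`, which is used here for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE countries_by_population (country TEXT, density_per_sq_km REAL);
INSERT INTO countries_by_population VALUES
  ('Monaco', 18679), ('Singapore', 7697), ('Malta', 1346),
  ('Mauritius', 618), ('Netherlands', 497), ('Belgium', 368),
  ('Germany', 229), ('France', 118), ('Canada', 4);
""")
rows = conn.execute("""
SELECT CASE WHEN density_per_sq_km > 1000 THEN 'Overpopulated'
            WHEN density_per_sq_km > 500  THEN 'above average'
            WHEN density_per_sq_km > 250  THEN 'average'
            WHEN density_per_sq_km > 50   THEN 'below average'
            ELSE 'Underpopulated' END AS remarks,
       COUNT(country) AS no_of_countries
FROM countries_by_population
GROUP BY remarks   -- SQLite accepts the alias here; SQL Server does not
""").fetchall()
print(dict(rows))
```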
|
Wrapping the query with the computed column into a subquery lets you use that column:
```
SELECT remarks, COUNT(country) as no_of_countries
FROM (
SELECT
CASE
WHEN density_per_sq_km > 1000 THEN 'Overpopulated'
WHEN density_per_sq_km > 500 THEN 'above average'
WHEN density_per_sq_km > 250 THEN 'average'
WHEN density_per_sq_km > 50 THEN 'below average'
ELSE 'Underpopulated'
END as remarks,
country
FROM countries_by_population
) DT
GROUP BY remarks;
```
|
SQL Server : how to use count
|
[
"",
"sql",
"sql-server",
"count",
"case",
""
] |
I do not understand why, but somehow this query doesn't work.
I want to take the system date minus 1 day, i.e. a date that is smaller by 1 day than the current date.
```
WHERE
a.SEND_Date >= dateadd(DD,-1,(CAST(getdate() as date) as datetime))
```
|
The CAST depends on what kind of date type you need.
If you need only to compare dates you can use only:
```
dateadd(DD, -1, cast(getdate() as date))
```
If you need to compare with date time you can use:
```
dateadd(DD,-1,getdate())
```
That will give you a datetime like this: `2016-01-11 10:43:57.443`
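The difference between the two forms is easier to see outside SQL. A small Python sketch of "truncate then subtract" versus "subtract and keep the time" (the timestamp is an invented stand-in for `getdate()`):

```python
from datetime import date, datetime, timedelta

now = datetime(2016, 1, 11, 10, 43, 57)  # stand-in for getdate()

# dateadd(DD, -1, cast(getdate() as date)): truncate first, date only.
yesterday_date = now.date() - timedelta(days=1)

# dateadd(DD, -1, getdate()): the time component is kept.
yesterday_datetime = now - timedelta(days=1)

print(yesterday_date)      # 2016-01-10
print(yesterday_datetime)  # 2016-01-10 10:43:57
```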
|
In T-SQL (sqlserver) you can simply do :
```
getDate()-1
```
The function subtracts days by default.
|
Getdate(), -1 day
|
[
"",
"sql",
"sybase",
"sqlanywhere",
""
] |
I have a series of numbers stored in an Oracle table in the following format.
```
Emp_ID
1
2
3
4
5
6
7
8
9
10
14
15
16
17
18
31
32
33
34
35
36
41
42
```
I want to group this list by a fixed number like 7 and get the output in the following format:
```
Range Total
1-7 7
8-10,14-17 7
18-18,31-36 7
41-42 2
```
|
If you need the range representation as shown in your example:
```
-- test data
with data(empid) as
( -- complete list 1..42
select level
from dual
connect by level <= 42
minus (
-- minus gaps
select level
from dual
where level between 11 and 13
connect by level <= 42
union
select level
from dual
where level between 19 and 30
connect by level <= 42
union
select level
from dual
where level between 37 and 40
connect by level <= 42))
-- select:
select listagg(case
when minempid = maxempid then
minempid || ' '
else
(minempid || '-' || maxempid)
end,
', ') within group(order by minempid),
sum(cnt)
from (select grp,
seq,
min(empid) as minempid,
max(empid) as maxempid,
count(*) cnt
from (select empid, rn, empid - rn as seq, ceil(rn / 7) as grp
from (select empid, row_number() over(order by empid) rn
from data))
group by grp, seq)
group by grp;
```
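The same chunk-into-sevens-then-collapse-runs logic can be sketched procedurally; this Python version (with a made-up `ranges` helper) reproduces the exact output from the question:

```python
def ranges(ids, size=7):
    """Chunk sorted ids into groups of `size`, then render each chunk's
    consecutive runs as 'lo-hi' ranges, together with the chunk total."""
    out = []
    for i in range(0, len(ids), size):
        chunk = ids[i:i + size]
        runs, start, prev = [], chunk[0], chunk[0]
        for v in chunk[1:]:
            if v != prev + 1:          # gap: close the current run
                runs.append((start, prev))
                start = v
            prev = v
        runs.append((start, prev))
        out.append((",".join(f"{a}-{b}" for a, b in runs), len(chunk)))
    return out

emp_ids = list(range(1, 11)) + list(range(14, 19)) + list(range(31, 37)) + [41, 42]
for label, total in ranges(emp_ids):
    print(label, total)
```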
|
In Oracle 11g you could use `listagg()` and analytic functions:
[SQLFiddle demo](http://sqlfiddle.com/#!4/8522e/1)
```
with t1 as (
select emp_id id,row_number() over (order by emp_id) rn from test),
t2 as (
select id, rn, floor((rn-.1)/7) grp,
min(id) over (partition by floor((rn-.1)/7), id-rn)||'-'||
max(id) over (partition by floor((rn-.1)/7), id-rn) rng
from t1),
t3 as (select grp, rng, min(rn) rn, count(1) cnt from t2 group by grp, rng)
select listagg(rng, ', ') within group (order by rn) range, sum(cnt) total
from t3 group by grp
```
|
How to group a series of numbers in oracle sql, with a specific number provided
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
First of all, I would like to apologize for the vague title of my question. It's very difficult for me to explain the situation, since I'm not an expert in SQL.
I need help in writing a query.
I have a table, called **HELLO-MESSAGES**, which contains several columns: *ID*, *From*, *To*, *Body*, *Sent* (with the date the message was sent).
I have these rows:
1. **1, 345xxxxxxx, 339xxxxxxx, "Hey, how are you?", 12/1/2016 17:24**
2. **2, 339xxxxxxx, 345xxxxxxx, "Fine thanks, and you?", 12/1/2016 17:25**
3. **3, 345xxxxxxx, 340xxxxxxx, "The quick...", 12/1/2016 18:24**
What I would like to do is to display in a table, all different chats. If my telephone number is *345xxxxxxx*, I would like to display a table with the following layout:
```
<table>
<tr>
<td><b>339xxxxxxx</b><br>Fine thanks, and you?</td>
</tr>
<tr>
<td><b>340xxxxxxx</b><br>The quick...</td>
</tr>
</table>
```
For example, just like WhatsApp or any other IMs do. I want to display the number of the person I'm chatting with, and the last message sent in that chat. I have very little experience writing queries: this is what I've produced so far:
```
SELECT * FROM 'HELLO-MESSAGES' WHERE 'From'='345xxxxxxx' OR 'To'='345xxxxxxx'
```
But it returns EVERY message, as expected!
Please help a beginner that does not like SQL so much :)
**EDIT**
```
SELECT *
FROM (
SELECT MAX(d.id) AS max_id
FROM (
SELECT df.id
FROM ( SELECT xf.from
, xf.to
, MAX(xf.sent) AS max_date
FROM `HELLO-MESSAGES` xf
WHERE xf.from = '" . $_SESSION['ID'] . "'
GROUP BY xf.from, xf.to
) mf
JOIN `HELLO-MESSAGES` df
ON df.from = mf.from
AND df.to = mf.to
AND df.sent = mf.max_date
UNION ALL
SELECT dt.id
FROM ( SELECT xt.from
, xt.to
, MAX(xt.sent) AS max_date
FROM `HELLO-MESSAGES` xt
WHERE xt.to = '" . $_SESSION['ID'] . "'
GROUP BY xt.from, xt.to
) mt
JOIN `HELLO-MESSAGES` dt
ON dt.from = mt.from
AND dt.to = mt.to
AND dt.sent = mt.max_date
) q1
JOIN `HELLO-MESSAGES` d
ON d.id = q1.id
GROUP BY IF(d.from<d.to,d.from,d.to),IF(d.from<d.to,d.to,d.from)
) q2
JOIN `HELLO-MESSAGES` r
ON r.id = q2.max_id
ORDER BY r.sent DESC, r.id DESC
```
Is that right, @spencer7593? Thanks!
|
As I indicated in my comments, for a table with a large number of rows, to improve query performance I would consider maintaining an additional "latest message" table alongside the `HELLO-MESSAGES` table. But I won't delve into that, since it doesn't address the specific question you asked. (I suspect, though, that you'll eventually be asking how to improve the performance of the query that gets the "latest message", beyond ensuring appropriate indexes are available, once that table grows to a significant size.)
---
To get the latest message in each "conversation" sent *from* a specific a telephone number, you can use a query something like this:
```
SELECT df.id
, df.from
, df.to
, df.sent
, df.body
FROM ( SELECT xf.from
, xf.to
, MAX(xf.sent) AS max_date
FROM `HELLO-MESSAGES` xf
WHERE xf.from = '345xxxxxxx'
GROUP BY xf.from, xf.to
) mf
JOIN `HELLO-MESSAGES` df
ON df.from = mf.from
AND df.to = mf.to
AND df.sent = mf.max_date
```
If there's more than one message with the same maximum value of "sent" for a given conversation, this query will return all of those rows.
A corresponding query can be used to get the latest message in each "conversation" that is received by a specific telephone number.
```
SELECT dt.id
, dt.from
, dt.to
, dt.sent
, dt.body
FROM ( SELECT xt.from
, xt.to
, MAX(xt.sent) AS max_date
FROM `HELLO-MESSAGES` xt
WHERE xt.to = '345xxxxxxx'
GROUP BY xt.to, xt.from
) mt
JOIN `HELLO-MESSAGES` dt
ON dt.from = mt.from
AND dt.to = mt.to
AND dt.sent = mt.max_date
```
We can combine the results from those two queries, and return just the unique `id` value for those messages. We can combine the sets with the `UNION ALL` operator. (The MySQL optimizer generally doesn't do too well with 'OR' predicates; that's the reason for breaking the query into the two parts.)
```
SELECT df.id
FROM ( SELECT xf.from
, xf.to
, MAX(xf.sent) AS max_date
FROM `HELLO-MESSAGES` xf
WHERE xf.from = '345xxxxxxx'
GROUP BY xf.from, xf.to
) mf
JOIN `HELLO-MESSAGES` df
ON df.from = mf.from
AND df.to = mf.to
AND df.sent = mf.max_date
UNION ALL
SELECT dt.id
FROM ( SELECT xt.from
, xt.to
, MAX(xt.sent) AS max_date
FROM `HELLO-MESSAGES` xt
WHERE xt.to = '345xxxxxxx'
GROUP BY xt.to, xt.from
) mt
JOIN `HELLO-MESSAGES` dt
ON dt.from = mt.from
AND dt.to = mt.to
AND dt.sent = mt.max_date
```
The set returned by this query will include the `id` values of the latest "sent" and "received" message in each conversation.
If we are guaranteed that the `id` value of a row for a "later" message will be greater than the `id` value of a row with an "earlier" message, we can take advantage of that, to deal with the issue of more than one "latest" message in the conversation; two or more rows with the same value of `Sent`.
We can use that to whittle down the results from the previous query to just the latest message in each conversation. We'll represent the result of the previous query as `q1` in the next query:
```
SELECT MAX(d.id) AS max_id
FROM (
subquery
) q1
JOIN `HELLO-MESSAGES` d
ON d.id = q1.id
GROUP BY IF(d.from<d.to,d.from,d.to), IF(d.from<d.to,d.to,d.from)
```
That gets the `id` values of the messages we want to return. We'll represent that result as `q2` in the next query.
```
SELECT r.id
, r.from
, r.to
, r.sent
, r.body
  FROM (
         previous query
       ) q2
JOIN `HELLO-MESSAGES` r
ON r.id = q2.max_id
ORDER BY r.sent DESC, r.id DESC
```
---
Putting that all together, we get a pretty ugly query:
```
SELECT r.id
, r.from
, r.to
, r.sent
, r.body
FROM (
SELECT MAX(d.id) AS max_id
FROM (
SELECT df.id
FROM ( SELECT xf.from
, xf.to
, MAX(xf.sent) AS max_date
FROM `HELLO-MESSAGES` xf
WHERE xf.from = '345xxxxxxx'
GROUP BY xf.from, xf.to
) mf
JOIN `HELLO-MESSAGES` df
ON df.from = mf.from
AND df.to = mf.to
AND df.sent = mf.max_date
UNION ALL
SELECT dt.id
FROM ( SELECT xt.from
, xt.to
, MAX(xt.sent) AS max_date
FROM `HELLO-MESSAGES` xt
WHERE xt.to = '345xxxxxxx'
GROUP BY xt.to, xt.from
) mt
JOIN `HELLO-MESSAGES` dt
ON dt.from = mt.from
AND dt.to = mt.to
AND dt.sent = mt.max_date
) q1
JOIN `HELLO-MESSAGES` d
ON d.id = q1.id
GROUP BY IF(d.from<d.to,d.from,d.to),IF(d.from<d.to,d.to,d.from)
) q2
JOIN `HELLO-MESSAGES` r
ON r.id = q2.max_id
ORDER BY r.sent DESC, r.id DESC
```
---
EDIT: missing qualifier for ambiguous column reference
EDIT: fixed expressions in `GROUP BY` for q2. (The rows need to be collapsed/grouped by the distinct *combined* values of `from` and `to`. One approach to fixing that is to add an expression to the `GROUP BY`. That fix is applied to the queries above.)
This answer illustrates one approach to returning the specified result. This is not necessarily the best approach. The usage of inline views (*derived tables* in the MySQL vernacular) can impose a (sometimes significant) performance penalty.
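A compact runnable sketch of the overall idea, in SQLite via Python: normalize each conversation to an ordered (low, high) pair of numbers, take `MAX(id)` per pair, then join back for the message row. The column names are changed to `sender`/`recipient` to dodge the reserved words, and a single `OR` predicate is used for brevity, even though the answer above deliberately avoids `OR` for MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE msgs (id INTEGER PRIMARY KEY, sender TEXT, recipient TEXT,
                   body TEXT, sent TEXT);
INSERT INTO msgs VALUES
 (1,'345','339','Hey, how are you?',     '2016-01-12 17:24'),
 (2,'339','345','Fine thanks, and you?', '2016-01-12 17:25'),
 (3,'345','340','The quick...',          '2016-01-12 18:24');
""")
rows = conn.execute("""
SELECT m.id, m.sender, m.recipient, m.body
FROM msgs m
JOIN (SELECT MAX(id) AS max_id
      FROM msgs
      WHERE sender = '345' OR recipient = '345'
      GROUP BY MIN(sender, recipient), MAX(sender, recipient)) q
  ON m.id = q.max_id
ORDER BY m.sent DESC
""").fetchall()
for r in rows:
    print(r)
```

This returns one latest message per conversation partner, newest conversation first.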
|
What you are looking for is, I think, not that simple, and you need a good knowledge of SQL. I think the answer is in the following link:
[Get Last conversation row from MySQL database table](https://stackoverflow.com/questions/8427198/get-last-conversation-row-from-mysql-database-table)
Hope this helps you.
Best regards !
|
How to make a particular SELECT query
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to get a count of duplicate data, but my query doesn't work correctly.
Every user should have one dev\_id, but when another user has the same dev\_id I want to know about it.
Table for example:
```
dev_id user_id
------------------
111 1
111 1
222 2
111 2
333 3
```
Should result:
```
user_id qu
------------------
1 1
2 1
3 0
```
This is my query
```
SELECT t1.user_id,
(SELECT Count(DISTINCT t2.dev_id)
FROM reports t2
WHERE t2.user_id != t1.user_id
AND t2.dev_id = t1.dev_id
) AS qu
FROM reports t1
GROUP BY t1.user_id
```
|
Okay, let's start simple.
First you need to get the unique user\_id/dev\_id combinations:
```
select distinct dev_id,user_id from reports
```
Result will be
```
dev_id user_id
------------------
111 1
222 2
111 2
333 3
```
After that you should get the number of distinct user\_ids per dev\_id:
```
select dev_id,c from (
SELECT
dev_id,
count(*)-1 AS c
FROM
(select distinct user_id,dev_id from reports) as fixed_reports
GROUP BY dev_id
) as counts
```
Result of such query will be
```
dev_id c
-----------------
111 1
222 0
333 0
```
Now you should show the users which have such a dev\_id. For that, join the dev\_id list with the table from step 1 (which shows which user\_id/dev\_id pairs exist):
```
select distinct fixed_reports2.user_id,counts.c from (
SELECT
dev_id,
count(*)-1 AS c
FROM
(select distinct user_id,dev_id from reports) as fixed_reports
GROUP BY dev_id
) as counts
join
(select distinct user_id,dev_id from reports) as fixed_reports2
on fixed_reports2.dev_id=counts.dev_id
where counts.c>0 and counts.c is not null
```
"Distinct" here is needed to skip duplicate rows.
The result of the internal query should be:
```
dev_id c
-----------------
111 1
```
For the whole query:
```
user_id c
------------------
1 1
2 1
```
If you are sure you also need the rows with c=0, then you need to do a LEFT JOIN of fixed\_reports2 against the large query; that way you will get all rows, and rows with c=null become rows with 0 (which can be handled by a CASE/WHEN statement).
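Here is one way to get the expected result, including the `0` row for user 3, sketched in SQLite via Python: left-join the de-duplicated pairs to themselves and count only the matched devices:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reports (dev_id INTEGER, user_id INTEGER);
INSERT INTO reports VALUES (111,1),(111,1),(222,2),(111,2),(333,3);
""")
rows = conn.execute("""
SELECT r.user_id,
       COUNT(DISTINCT CASE WHEN o.user_id IS NOT NULL THEN r.dev_id END) AS qu
FROM (SELECT DISTINCT user_id, dev_id FROM reports) r
LEFT JOIN (SELECT DISTINCT user_id, dev_id FROM reports) o
       ON o.dev_id = r.dev_id AND o.user_id <> r.user_id
GROUP BY r.user_id
ORDER BY r.user_id
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 0)]
```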
|
You can get results by doing:
```
select r.user_id, count(*) - 1
from reports r
group by r.user_id;
```
Is this the calculation that you want?
|
MYSQL Count of duplicate records with condition
|
[
"",
"mysql",
"sql",
""
] |
I just spotted curious behaviour of oracle [TO\_DATE](https://docs.oracle.com/cd/B28359_01/server.111/b28286/functions191.htm) function when used with [format\_mask](https://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements004.htm#i34924) parameter.
Basically, what I see is that in one case it ignores given format mask, and parses input with its own mask, and in other cases it throws an exception.
**Example I** expected behaviour - error thrown:
```
SELECT TO_DATE('18-02-2016', 'DD/MON/YYYY') FROM dual
```
> ORA-01843: not a valid month
**Example II** unexpected behaviour - date parsed:
```
SELECT TO_DATE('18-feb-2016', 'DD/MM/YYYY') FROM dual
```
> February, 18 2016 00:00:00
I cannot see any remark about this in the docs, so I am wondering if this inconsistency is by design, or a bug, or maybe I am not understanding something correctly?
**Edit:**
Looking at answers I can agree that it is most probably by design. But what is done here looks dangerously "automagical" to me.
What if the format is interpreted (guessed by Oracle) incorrectly? Is there any documentation on what exactly is happening here, so I can be sure that it is safe?
My question would then be: can I turn it off? Is my only option validating the format on my own?
|
See the table here: <https://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements004.htm#g195479>
It is part of the String-To-Date Conversion Rules section of the Datetime format model. In the case of `MM` if there is no match, it attempts for `MON` and `MONTH`. Similarly if you specify `MON` and it does not find that, it attempts `MONTH`. If you specify `MONTH` and it cannot find that, it attempts `MON`, but it will never attempt `MM` on anything except `MM`.
In response to the question: `Can I turn it off?` The answer is, Yes.
You can do that by specifying `FX` as part of your formatting.
```
SELECT TO_DATE('18/february/2016', 'FXDD/MM/YYYY') FROM dual;
```
Now returns:
> [Error] Execution (4: 16): ORA-01858: a non-numeric character was
> found where a numeric was expected
Whereas the following:
```
SELECT TO_DATE('18/02/2016', 'FXDD/MM/YYYY') FROM dual;
```
Returns the expected:
> 2/18/2016
Note that when specifying `FX` you **MUST** use the proper separators otherwise it will error.
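As an aside, strict format matching like `FX` is the default in many other environments; Python's `strptime`, for instance, rejects a month name where digits are expected:

```python
from datetime import datetime

# A matching input parses fine.
ok = datetime.strptime('18/02/2016', '%d/%m/%Y')

# A month name where %m expects digits raises, much like FX does.
try:
    datetime.strptime('18/feb/2016', '%d/%m/%Y')
    strict = False
except ValueError:
    strict = True

print(ok, strict)
```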
|
This is by design. Oracle tries to find a deterministic date representation in the string even if it does not comply with the defined mask. It throws the error only if it doesn't find a deterministic date, or some required component of the date is missing or cannot be resolved.
|
Oracle TO_DATE NOT throwing error
|
[
"",
"sql",
"oracle",
"oracle12c",
""
] |
Project configuration:
* database - MySQL 5.7
* orm - Hibernate 4.3.11.Final / JPA 1.3.1.RELEASE
* Liquibase 3.4.2
My problem does not occur when I run the script from MySQL Workbench, only from Liquibase.
```
<changeSet author="newbie" id="function_rad2deg" dbms="mysql,h2">
<sqlFile encoding="utf8" path="sql/function_rad2deg.sql" relativeToChangelogFile="true" splitStatements="false" stripComments="false"/>
</changeSet>
```
My SQL script looks like this:
```
DROP FUNCTION IF EXISTS rad2deg;
DELIMITER //
CREATE FUNCTION rad2deg(rad DOUBLE)
RETURNS DOUBLE
BEGIN
RETURN (rad * 180 / PI());
END
//
DELIMITER ;
```
OK, and the log:
```
liquibase.exception.DatabaseException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER //
CREATE FUNCTION rad2deg(rad DOUBLE)
RETURNS DOUBLE
BEGIN
' at line 3 [Failed SQL: DROP FUNCTION IF EXISTS rad2deg;
DELIMITER //
CREATE FUNCTION rad2deg(rad DOUBLE)
RETURNS DOUBLE
BEGIN
RETURN (rad * 180 / PI());
END
//
DELIMITER ;]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:301)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:107)
at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1251)
at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1234)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:554)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:51)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:73)
at liquibase.Liquibase.update(Liquibase.java:212)
at liquibase.Liquibase.update(Liquibase.java:192)
at liquibase.Liquibase.update(Liquibase.java:327)
at org.liquibase.maven.plugins.LiquibaseUpdate.doUpdate(LiquibaseUpdate.java:33)
at org.liquibase.maven.plugins.AbstractLiquibaseUpdateMojo.performLiquibaseTask(AbstractLiquibaseUpdateMojo.java:30)
at org.liquibase.maven.plugins.AbstractLiquibaseMojo.execute(AbstractLiquibaseMojo.java:394)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
at org.codehaus.classworlds.Launcher.main(Launcher.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER //
CREATE FUNCTION rad2deg(rad DOUBLE)
RETURNS DOUBLE
BEGIN
' at line 3
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.Util.getInstance(Util.java:387)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:939)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3878)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2478)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2625)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2547)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2505)
at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:840)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:740)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:299)
... 41 more
```
|
If you are using yaml file then here is the configuration
```
changeSet:
id: sqlFile-function
author: sandeep
logicalFilePath: baseFunctionScript
changes:
- sqlFile:
dbms: mysql
encoding: utf8
endDelimiter: \nGO
path: my/path/baseScripts.sql
splitStatements: true
stripComments: false
```
Make sure `stripComments` is false, otherwise it will throw an error. I spent 4 hours because of it.
|
I think you need to add the `endDelimiter` clause in the Liquibase tag, based on the documentation found [here](http://www.liquibase.org/documentation/changes/sql_file.html)
Below is an example
```
<changeSet author="newbie" id="function_rad2deg" dbms="mysql,h2">
<sqlFile encoding="utf8"
path="sql/function_rad2deg.sql"
relativeToChangelogFile="true"
splitStatements="false"
stripComments="false"
endDelimiter="\nGO"
/>
</changeSet>
```
Your SQL file with the above delimiter would then look like
```
DROP FUNCTION IF EXISTS rad2deg;
GO
CREATE FUNCTION rad2deg(rad DOUBLE)
RETURNS DOUBLE
BEGIN
RETURN (rad * 180 / PI());
END
GO
```
Hope this helps
|
Create function from SQL script by Liquibase
|
[
"",
"sql",
"function",
"liquibase",
""
] |
I am just a beginner in all this PHP stuff; I know just the basics. When setting up my new table, I ran into error #1075. I created an almost identical table before and I don't see the difference. Can you tell me where the problem is and explain what is happening?
```
CREATE TABLE `try`.`testing` ( `id` INT NOT NULL AUTO_INCREMENT , `date` DATE NOT NULL , `text_1` TEXT CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL , `text_2` TEXT CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ) ENGINE = MyISAM;
```
Here is the code from my SQL preview. I use phpMyAdmin, obviously.
Please help me. Thank you!
|
Try this
```
CREATE TABLE `testing` (
`id` INT NOT NULL AUTO_INCREMENT,
`date` DATE NOT NULL,
`text_1` TEXT CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
`text_2` TEXT CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE = MYISAM ;
```
|
You have to declare your `AUTO_INCREMENT` field as a `primary key` or a `key`. So you have to add `PRIMARY KEY (id)` or `KEY (id)` to your `CREATE TABLE` statement (use one of the two commented lines below, not both):
```
CREATE TABLE `try`.`testing` (
`id` INT NOT NULL AUTO_INCREMENT,
`date` DATE NOT NULL ,
`text_1` TEXT CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,
`text_2` TEXT CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
PRIMARY KEY (`id`) -- as primary key
KEY (`id`) -- or as key
) ENGINE = MyISAM;
```
Please also check:
<https://stackoverflow.com/a/8114994/3647441>
<https://stackoverflow.com/a/14087703/3647441>
|
#1075 MySQL Error
|
[
"",
"mysql",
"sql",
"mysql-error-1075",
""
] |
Say, I have a table like
```
Name Pets
-------------------------
Anna Cats,Dogs,Hamsters
John Cats
Jake Dogs,Cats
Jill Parrots
```
And I want to count, how many people have different types of pets. The output would be something like
```
Pets Owners
---------------
Cats 3
Dogs 2
Hamsters 1
Parrots 1
```
**Limitations:**
* Reworking the DB schema is impractical. If I could do it, I wouldn't be here.
* All logic must be done in one SQL query.
* I can't take result table and deduce owner count later in code.
* Using built-in Oracle functions is OK, but writing custom functions is discouraged.
* Oracle version: 11 and up.
|
It's a terrible design - as you mentioned - so I don't envy you having to work with it!
It's possible to do what you're after, although I wouldn't like to say that it would be performant for larger datasets!
Assuming the `name` column is the primary key (or at least unique):
```
with t1 as (select 'Anna' name, 'Cats,Dogs,Hamsters' pets from dual union all
select 'John' name, 'Cats' pets from dual union all
select 'Jake' name, 'Dogs,Cats' pets from dual union all
select 'Jill' name, 'Parrots' pets from dual)
select pet pets,
count(*) owners
from (select name,
regexp_substr(pets, '(.*?)(,|$)', 1, level, null, 1) pet
from t1
connect by prior name = name
and prior sys_guid() is not null
and level <= regexp_count(pets, ',') + 1)
group by pet
order by owners desc, pet;
PETS OWNERS
---------- ----------
Cats 3
Dogs 2
Hamsters 1
Parrots 1
```
|
It is a bad design to store **comma-separated** values in a **single column**. You should consider **normalizing the data**. Having such a design will always push you to have an overhead of manipulating delimited-strings.
Anyway, as a workaround, you could use **REGEXP\_SUBSTR** and **CONNECT BY** to split the comma-delimited string into multiple rows and then count the pets.
There are other ways of doing it too, like **XMLTABLE**, **MODEL** clause. Have a look at **[split the comma-delimited string into multiple rows](http://lalitkumarb.com/2015/03/04/split-comma-delimited-strings-in-a-table-in-oracle/)**.
```
SQL> WITH sample_data AS(
2 SELECT 'Anna' NAME, 'Cats,Dogs,Hamsters' pets FROM dual UNION ALL
3 SELECT 'John' NAME, 'Cats' pets FROM dual UNION ALL
4 SELECT 'Jake' NAME, 'Dogs,Cats' pets FROM dual UNION ALL
5 SELECT 'Jill' NAME, 'Parrots' pets FROM dual
6 )
7 -- end of sample_data mimicking a real table
8 SELECT pets,
9 COUNT(*) cnt
10 FROM
11 (SELECT trim(regexp_substr(t.pets, '[^,]+', 1, lines.COLUMN_VALUE)) pets
12 FROM sample_data t,
13 TABLE (CAST (MULTISET
14 (SELECT LEVEL FROM dual CONNECT BY LEVEL <= regexp_count(t.pets, ',')+1
15 ) AS sys.odciNumberList ) ) lines
16 ORDER BY NAME,
17 lines.COLUMN_VALUE
18 )
19 GROUP BY pets
20 ORDER BY cnt DESC;
PETS CNT
------------------ ----------
Cats 3
Dogs 2
Hamsters 1
Parrots 1
SQL>
```
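For reference, the expected counts are easy to verify outside Oracle. A minimal plain-Python sketch (the sample rows below are just the table from the question):

```python
from collections import Counter

# Sample rows mirroring the question's table
rows = [("Anna", "Cats,Dogs,Hamsters"),
        ("John", "Cats"),
        ("Jake", "Dogs,Cats"),
        ("Jill", "Parrots")]

# Split each comma-delimited list and count one owner per pet type
owners = Counter(pet.strip()
                 for _, pets in rows
                 for pet in pets.split(","))
```

This is only a sanity check for the SQL above, not a replacement for doing the split in the database.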
|
In oracle, how to 'group by' properties that are in comma separated values?
|
[
"",
"sql",
"oracle",
"csv",
"count",
"group-by",
""
] |
I am working on a sql oracle database. I have a table with an id, begindate and enddate.
For example:
```
employee | begindate | enddate
john | 18/02/2015 | 18/02/2015
john | 19/02/2015 | 21/02/2015
```
I want to do a select statement of that table, but when the begindate is not equal to the enddate, it have to add some rows. In the example above the first line will stay like that, but the second line has to be expanded in three rows. the result of the select statement has to be:
```
john | 18/02/2015 | 18/02/2015
john | 19/02/2015 | 19/02/2015
john | 20/02/2015 | 20/02/2015
john | 21/02/2015 | 21/02/2015
```
So my select statement would have in this example 4 rows in total.
Has someone an idea how I can do that?
|
**Oracle Setup**:
```
CREATE TABLE employees ( employee, begindate, enddate ) AS
SELECT 'john', DATE '2015-02-18', DATE '2015-02-18' FROM DUAL UNION ALL
SELECT 'john', DATE '2015-02-19', DATE '2015-02-21' FROM DUAL;
```
**Query**:
```
SELECT e.employee,
t.COLUMN_VALUE AS begindate,
t.COLUMN_VALUE AS enddate
FROM employees e,
TABLE(
CAST(
MULTISET(
SELECT e.begindate + LEVEL - 1
FROM DUAL
CONNECT BY LEVEL <= e.enddate - e.begindate + 1
)
AS SYS.ODCIDATELIST
)
) t;
```
**Results**:
```
EMPLOYEE BEGINDATE ENDDATE
-------- --------- ---------
john 18-FEB-15 18-FEB-15
john 19-FEB-15 19-FEB-15
john 20-FEB-15 20-FEB-15
john 21-FEB-15 21-FEB-15
```
|
Here's an alternative answer using connect by directly on the table:
```
with test as (select 'john' employee, to_date('18/02/2015', 'dd/mm/yyyy') begindate, to_date('18/02/2015', 'dd/mm/yyyy') enddate from dual union all
select 'john' employee, to_date('19/02/2015', 'dd/mm/yyyy') begindate, to_date('21/02/2015', 'dd/mm/yyyy') enddate from dual)
select employee,
begindate + level - 1 begindate,
begindate + level - 1 enddate
from test
connect by prior employee = employee
and prior begindate = begindate
and prior sys_guid() is not null
and begindate + level - 1 <= enddate;
EMPLOYEE BEGINDATE ENDDATE
-------- ---------- ----------
john 18/02/2015 18/02/2015
john 19/02/2015 19/02/2015
john 20/02/2015 20/02/2015
john 21/02/2015 21/02/2015
```
I would recommend you test the answers which work and see which one has the best performance for your data.
|
expand a row in different rows
|
[
"",
"sql",
"oracle",
""
] |
I have a data set as below in `SQL Server`:
```
ROW_NUM EMP_ID DATE_KEY TP_DAYS
1 U12345 20131003 1
2 U12345 20131004 0
3 U12345 20131005 0
4 U12345 20131006 0
5 U12345 20150627 1
6 U12345 20150628 0
1 U54321 20131003 1
2 U54321 20131004 0
3 U54321 20131005 0
4 U54321 20131006 0
```
I need to update all the zeros in column `TP_DAYS` so that each one is the previous row's value incremented by 1.
The required result set will be as follows:
```
ROW_NUM EMP_ID DATE_KEY TP_DAYS
1 U12345 20131003 1
2 U12345 20131004 2
3 U12345 20131005 3
4 U12345 20131006 4
5 U12345 20150627 1
6 U12345 20150628 2
1 U54321 20131003 1
2 U54321 20131004 2
3 U54321 20131005 3
4 U54321 20131006 4
```
I tried using the `LAG` and `LEAD` functions in SQL, but couldn't achieve the expected result.
Can someone help me achieve it?
|
Using windowed functions (`SUM/ROW_NUMBER` so it will work with `SQL Server 2008`):
```
WITH cte AS
(
SELECT *, s = SUM(TP_DAYS) OVER(PARTITION BY EMP_ID ORDER BY ROW_NUM)
FROM #tab
), cte2 AS
(
SELECT *,
tp_days_recalculated = ROW_NUMBER() OVER (PARTITION BY EMP_ID, s ORDER BY ROW_NUM)
FROM cte
)
UPDATE cte2
SET TP_DAYS = tp_days_recalculated;
SELECT *
FROM #tab;
```
`LiveDemo`
Output:
```
βββββββββββ¦βββββββββ¦βββββββββββ¦ββββββββββ
β ROW_NUM β EMP_ID β DATE_KEY β TP_DAYS β
β ββββββββββ¬βββββββββ¬βββββββββββ¬ββββββββββ£
β 1 β U12345 β 20131003 β 1 β
β 2 β U12345 β 20131004 β 2 β
β 3 β U12345 β 20131005 β 3 β
β 4 β U12345 β 20131006 β 4 β
β 5 β U12345 β 20150627 β 1 β
β 6 β U12345 β 20150628 β 2 β
β 1 β U54321 β 20131003 β 1 β
β 2 β U54321 β 20131004 β 2 β
β 3 β U54321 β 20131005 β 3 β
β 4 β U54321 β 20131006 β 4 β
βββββββββββ©βββββββββ©βββββββββββ©ββββββββββ
```
#Addendum
The original question and sample data are very clear that the `tp_days` indicators are `0` and `1`, not any other values.
Especially for [Atheer Mostafa](https://stackoverflow.com/users/1930195/atheer-mostafa):
> check this example as a proof: <https://data.stackexchange.com/stackoverflow/query/edit/423186>
This should be a new question, but I will handle that case:
```
;WITH cte AS
(
SELECT *
,rn = s + ROW_NUMBER() OVER(PARTITION BY EMP_ID, s ORDER BY ROW_NUM) -1
,rnk = DENSE_RANK() OVER(PARTITION BY EMP_ID ORDER BY s)
FROM (SELECT *, s = SUM(tp_days) OVER(PARTITION BY EMP_ID ORDER BY ROW_NUM)
FROM #tab) AS sub
), cte2 AS
(
SELECT c1.*,
tp_days_recalculated = c1.rn - (SELECT COALESCE(MAX(c2.s),0)
FROM cte c2
WHERE c1.emp_id = c2.emp_id
AND c2.rnk = c1.rnk-1)
FROM cte c1
)
UPDATE cte2
SET tp_days = tp_days_recalculated;
```
`LiveDemo2`
Output:
```
βββββββββββ¦βββββββββ¦βββββββββββ¦ββββββββββ
β row_num β emp_id β date_key β tp_days β
β ββββββββββ¬βββββββββ¬βββββββββββ¬ββββββββββ£
β 1 β U12345 β 20131003 β 2 β
β 2 β U12345 β 20131004 β 3 β
β 3 β U12345 β 20131005 β 4 β
β 4 β U12345 β 20131006 β 3 β
β 5 β U12345 β 20150627 β 4 β
β 6 β U12345 β 20150628 β 5 β
β 1 β U54321 β 20131003 β 2 β
β 2 β U54321 β 20131004 β 3 β
β 3 β U54321 β 20131005 β 1 β
β 4 β U54321 β 20131006 β 2 β
βββββββββββ©βββββββββ©βββββββββββ©ββββββββββ
```
> it shouldn't change the values 3,4,2 to 1 .... this is the case. **I don't need your solution when I have another generic answer**, you don't tell me what to do ... thank you
The [solution mentioned in the comment](https://stackoverflow.com/a/34769642/5070879) is nothing more than a `quirky update`. Yes, it will work, but it may easily fail:
* First of all, there is no such thing as an ordered table per se
* The query optimizer may read data in any way (especially when the dataset is big and parallel execution is involved). Without `ORDER BY` you cannot guarantee a stable result
* The behavior is not documented; it might work today but could break in the future
Related articles:
1. [Robyn Page's SQL Server Cursor Workbench](https://www.simple-talk.com/sql/learn-sql-server/robyn-pages-sql-server-cursor-workbench/)
2. [Calculate running total / running balance](https://stackoverflow.com/a/11313533/26167)
3. [No Seatbelt - Expecting Order without ORDER BY](http://blogs.msdn.com/b/conor_cunningham_msft/archive/2008/08/27/no-seatbelt-expecting-order-without-order-by.aspx)
|
Let me assume SQL Server 2012+. You need to identify groups that are delimited by 1. A simple way to calculate the group is by doing a cumulative sum of 1s. Then `row_number()` can be used to calculate the new value. You can do this work using an updatable CTE:
```
with toupdate as (
select t.*,
row_number() over (partition by empid, grp order by row_num) as new_tp_days
from (select t.*,
sum(tp_days) over (partition by emp_id order by row_num) as grp
from t
) t
)
update toupdate
set tp_days = new_tp_days;
```
In earlier versions of SQL Server, you can accomplish the same thing (less efficiently). One method uses `outer apply`.
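The island-renumbering logic itself is simple to illustrate outside SQL. A minimal Python sketch (function name is my own) of what the cumulative-sum-plus-`row_number` combination computes per partition:

```python
def renumber(tp_days):
    """Restart the counter at every 1-marker; each 1 starts a new island."""
    out = []
    counter = 0
    for v in tp_days:
        # a 1 opens a new island; a 0 continues the current one
        counter = 1 if v == 1 else counter + 1
        out.append(counter)
    return out
```

Running it on one employee's `TP_DAYS` column from the question (`[1,0,0,0,1,0]`) reproduces the desired `[1,2,3,4,1,2]`.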
|
SQL - Update rows between two values in a column
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-update",
"gaps-and-islands",
""
] |
I am working on dates and have the following requirement:
If the recruit date is between the 1st and 7th day of the month (e.g. July 3, 2014), then the recruit date should become July 1, 2014.
If the recruit date is after the 7th day of the month (e.g. July 8, 2014), the recruit date should become the 1st of the following month, Aug 1, 2014.
So I prepared a SQL query using ADD_MONTHS and ROUND, but it is not giving me the desired results. Could you please help me modify the query?
```
SELECT AST.X_REC_DT,
CASE
WHEN (to_char(AST.X_REC_DT, 'DD') <= 7)
THEN ROUND(to_date(add_months(AST.X_REC_DT, -1)))
when (to_char(AST.X_REC_DT, 'DD') > 7)
then ROUND(to_date(add_months(AST.X_REC_DT, 1)))
END AS RTD
FROM EMPLOYEE AST WHERE AST.ROW_ID = '1-EWR'
```
Thanks.
|
There's no need for any complicated case statements, just use this code:
```
select trunc(add_months(ast.x_rec_dt,1)-7,'mm') RTD
from employee ast
where ast.row_id = '1-EWR';
```
It first adds one month to the date in question, then subtracts 7 days from the new date, then truncates the whole thing to the beginning of the month.
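If it helps to see the rule outside the database, here is a hedged Python sketch of the same rounding (function name is my own):

```python
from datetime import date

def recruit_month(d):
    """Days 1-7 round down to the 1st of the same month;
    later days roll forward to the 1st of the next month."""
    if d.day <= 7:
        return d.replace(day=1)
    # d.month % 12 + 1 maps December to January of the next year
    return date(d.year + (d.month == 12), d.month % 12 + 1, 1)
```

For example, `recruit_month(date(2014, 7, 3))` gives July 1, 2014, and `recruit_month(date(2014, 7, 8))` gives Aug 1, 2014, matching the requirement.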
|
Use a `CASE` statement to see if the date is on or before the 7th day of the current month. If it is not, add a month. This logic will show you how that works using all the dates in 2016 as test data.
```
WITH dates AS
(SELECT TO_DATE ('01-JAN-2016') + ROWNUM - 1 dte
FROM DUAL
CONNECT BY ROWNUM <= 365)
SELECT dte,
-- This is the logic you want right here:
CASE WHEN dte <= TRUNC (dte, 'MM') + 7 - 1 THEN TRUNC (dte, 'MM') ELSE TRUNC (ADD_MONTHS (dte, 1), 'MM') END
modified_dte
FROM dates;
```
|
Rounding off dates in oracle
|
[
"",
"sql",
"oracle",
"date",
""
] |
```
select * from table where
COALESCE(HOURS, MINUTES) is not null
and (if hours = null, then minutes >0)
and (if minutes = null then hours > 0)
and (minutes != 0 AND hours != 0)
```
How can I express the last 3 conditions as a valid SQL condition for Oracle?
Sample data:
```
Hours Minutes
----- -------
1 22
null 33
0 13
0 0
null null
10 0
10 null
```
In this scenario it should return
```
Hours Minutes
----- -------
1 22
null 33
0 13
10 0
10 null
```
|
Thanks guys, based on your suggestions I am able to fetch the proper data. I am using conditions like:
```
select * from table where condition1
and (COALESCE(HOURS, MINUTES) is not null)
and (HOURS > 0 OR MINUTES > 0)
```
|
I suspect that you want:
```
where coalesce(hours, 0) + coalesce(minutes, 0) > 0
```
This basically treats `NULL` as 0 and makes sure that the sum is greater than zero.
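You can check this predicate against the question's sample data with any engine; here is a quick sketch using Python's bundled `sqlite3` (table name is my own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (hours INTEGER, minutes INTEGER);
-- the seven sample rows from the question
INSERT INTO t VALUES (1,22),(NULL,33),(0,13),(0,0),(NULL,NULL),(10,0),(10,NULL);
""")
rows = con.execute(
    "SELECT hours, minutes FROM t "
    "WHERE COALESCE(hours, 0) + COALESCE(minutes, 0) > 0"
).fetchall()
```

The filter keeps exactly the five rows from the desired output: `(0,0)` and `(NULL,NULL)` are excluded.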
|
SQL to get data with condition as combination of two columns in oracle
|
[
"",
"sql",
"oracle",
""
] |
I have a table `Test` and its data is:
```
A B
1 One
2 One
2 Two
3 One
2 One
5 Two
2 Three
2 Two
2 Three
2 Two
2 One
```
Now i want the output like following :
```
A B
10 One
10 One
11 Two
10 One
10 One
11 Two
4 Three
11 Two
4 Three
11 Two
10 One
```
I tried `SELECT B, SUM(A) FROM Test GROUP BY B`, but I got:
```
One 10
Three 4
Two 11
```
Can anybody tell me the shortest way to achieve this? Thanks in advance.
|
```
SELECT SUM(A) OVER (PARTITION BY B), B FROM Test
```
|
```
select t1.B, t2.sum_a
FROM Test t1
join (select b, sum(a) as sum_a from test group by b) t2 on t1.b = t2.b
```
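The derived-table approach works on any engine; a quick sketch using Python's bundled `sqlite3` to confirm it returns one row per original row with the group total attached:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE test (a INTEGER, b TEXT);
-- the eleven rows from the question
INSERT INTO test VALUES (1,'One'),(2,'One'),(2,'Two'),(3,'One'),(2,'One'),
                        (5,'Two'),(2,'Three'),(2,'Two'),(2,'Three'),(2,'Two'),(2,'One');
""")
rows = con.execute("""
SELECT t2.sum_a, t1.b
FROM test t1
JOIN (SELECT b, SUM(a) AS sum_a FROM test GROUP BY b) t2 ON t1.b = t2.b
""").fetchall()
```

Every 'One' row carries 10, every 'Two' row 11, and every 'Three' row 4, as in the desired output.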
|
Getting duplicates values and summing up of 2nd column in SQL
|
[
"",
"sql",
"sql-server",
""
] |
Background info: *We have a community room, which can be divided in half via a curtain. In the past when a group needed the full room we put 2 entries in, one for each half... However we've modified the software (MRBS) so that there are now 3 rooms (Full{1},Closet Side{2}, and Kitchen Side{3}) and the software checks that you can't reserve a partial room when the full is already booked and vice versa. However we have plenty of old "full room" reservations made by reserving both sides. So when 2 & 3 are identical I need to move one of the bookings to 1 and delete the other.*
So I have a table such as:
```
id room_id start_time name
1 2 13:00 Meeting
2 2 15:00 Meeting
3 3 15:00 Meeting
4 3 13:00 Storytime
```
I want to go through the table, and when room 2 & 3 both have entries at the same time and with the same name I want to change room 2's room\_id to 1 and delete the entry for room 3. So in the above example entry 2 would be modified and entry 3 would be deleted.
I'm fairly certain this needs to be two separate queries; e.g. first, where there is a match, change all of the room_ids from 2 to 1; then, as a separate query, compare rooms 1 & 3 and delete the entries for 3.
I *think* this is close for changing room 2 to 1:
```
UPDATE `mrbs_entry`
JOIN `mrbs_entry` AS `other_side` ON `other_side.room_id` = '3'
AND `other_side.name` = `mrbs_entry.name`
AND `other_side.start_time` = `mrbs_entry.start_time`
AND `other_side.id` != `mrbs_entry.id`
SET `mrbs_entry.room_id` = '1'
WHERE (`mrbs_entry.room_id` = '2' AND `mrbs_entery.id` IN(92437,92438,92442,92443,92470,92471,92477,92478,92489,89462,92496,90873))
```
however I get an `#1054 - Unknown column 'mrbs_entry.room_id' in 'field list'` error
Note: the `IN(...)` list is there to limit the update to a few test entries, to make sure it's actually working as expected.
|
**Method 1 - Stored Procedure with temporary table**
This seems the simplest method if you're prepared to use a Stored Procedure and temporary table:
```
CREATE PROCEDURE sp_sanitize_mrbs()
BEGIN
DROP TEMPORARY TABLE IF EXISTS mrbs_to_sanitize;
CREATE TEMPORARY TABLE mrbs_to_sanitize (
id int auto_increment primary key,
room2_id int,
room3_id int);
-- "I want to go through the table, and when room 2 & 3 both have
-- entries at the same time and with the same name I want to..."
INSERT INTO mrbs_to_sanitize (room2_id, room3_id)
SELECT m1.id, m2.id
FROM mrbs_entry m1
CROSS JOIN mrbs_entry m2
WHERE m1.start_time = m2.start_time
AND m1.name = m2.name
AND m1.room_id = 2
AND m2.room_id = 3;
-- ...change room 2's room_id to 1
UPDATE mrbs_entry me
JOIN mrbs_to_sanitize mts
ON me.id = mts.room2_id
SET me.room_id = 1;
-- "...and delete the entry for room 3."
DELETE me
FROM mrbs_entry me
JOIN mrbs_to_sanitize mts
ON me.id = mts.room3_id;
END//
-- ...
-- The Stored Procedure can now be called any time you like:
CALL sp_sanitize_mrbs();
```
See [SQL Fiddle Demo - using a Stored Procedure](http://sqlfiddle.com/#!2/4f17b/1)
**Method 2 - *without* Stored Procedure**
The following "trick" is slightly more complex but should do it without using stored procedures, temporary tables or variables:
```
-- "I want to go through the table, and when room 2 & 3 both have
-- entries at the same time and with the same name I want to..."
-- "...change room 2's room_id to 1"
UPDATE mrbs_entry m1
CROSS JOIN mrbs_entry m2
-- temporarily mark this row as having been updated
SET m1.room_id = 1, m1.name = CONCAT(m1.name, ' UPDATED')
WHERE m1.start_time = m2.start_time
AND m1.name = m2.name
AND m1.room_id = 2
AND m2.room_id = 3;
-- "...and delete the entry for room 3."
DELETE m2 FROM mrbs_entry m1
CROSS JOIN mrbs_entry m2
WHERE m1.start_time = m2.start_time
AND m1.name = CONCAT(m2.name, ' UPDATED')
AND m1.room_id = 1
AND m2.room_id = 3;
-- now remove the temporary marker to restore previous value
UPDATE mrbs_entry
SET name = LEFT(name, CHAR_LENGTH(name) - CHAR_LENGTH(' UPDATED'))
WHERE name LIKE '% UPDATED';
```
**Explanation of Method 2**
The first query updates the room number. However, as you mentioned, we need to perform the delete in a separate query. Since I'm not making any assumptions about your data, a safe way of requerying to get the same results once they have been modified is to introduce a "marker" to temporarily indicate which row was changed by the update. *In the example above, this marker is `'UPDATED '` but you may wish to choose something more likely to never be used for any other purpose e.g. a random sequence of characters. It could also be moved onto a different field if required.* The delete can then be performed and finally the marker needs to be removed to restore the original data.
See [SQL Fiddle demo - *without* Stored Procedure](http://sqlfiddle.com/#!9/9656f/1).
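The same sanitize pass can be sketched in plain Python over the sample rows, which is handy for verifying which rows should survive before running either method (the dict keys are just my naming of the table columns):

```python
bookings = [
    {"id": 1, "room_id": 2, "start_time": "13:00", "name": "Meeting"},
    {"id": 2, "room_id": 2, "start_time": "15:00", "name": "Meeting"},
    {"id": 3, "room_id": 3, "start_time": "15:00", "name": "Meeting"},
    {"id": 4, "room_id": 3, "start_time": "13:00", "name": "Storytime"},
]

# keys of all room-3 bookings, matched on (start_time, name)
room3 = {(b["start_time"], b["name"]) for b in bookings if b["room_id"] == 3}

matched = set()
for b in bookings:
    key = (b["start_time"], b["name"])
    if b["room_id"] == 2 and key in room3:
        b["room_id"] = 1          # promote the room-2 half to the full room
        matched.add(key)

# drop the now-redundant room-3 halves
bookings = [b for b in bookings
            if not (b["room_id"] == 3 and (b["start_time"], b["name"]) in matched)]
```

On the sample data, entry 2 is promoted to room 1 and entry 3 is deleted, exactly as described in the question.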
|
The `#1054` error comes from the quoting: backticks around the whole qualified name make MySQL look for a single column literally called `mrbs_entry.room_id`. Quote each part separately (`` `mrbs_entry`.`room_id` ``) or drop the backticks entirely.
You could probably get away with changing that line to just
```
SET `room_id` = '1'
```
But this is probably safer from the standpoint of ensuring the query works like you want it to:
```
UPDATE `mrbs_entry`
SET `room_id` = '1'
WHERE `id` IN
(
    SELECT `id` FROM
    (
        SELECT `mrbs_entry`.`id`
        FROM `mrbs_entry`
        JOIN `mrbs_entry` AS `other_side`
          ON `other_side`.`room_id` = '3'
         AND `other_side`.`name` = `mrbs_entry`.`name`
         AND `other_side`.`start_time` = `mrbs_entry`.`start_time`
         AND `other_side`.`id` != `mrbs_entry`.`id`
        WHERE `mrbs_entry`.`room_id` = '2'
          AND `mrbs_entry`.`id` IN (92437,92438,92442,92443,92470,92471,92477,92478,92489,89462,92496,90873)
    ) AS t
)
```
Run the inner query until it's pulling the right group of `id`s, then run the whole thing to change the `room_id`s
|
UPDATE and DELETE a set of rows when the operations affect the set
|
[
"",
"mysql",
"sql",
""
] |
How can I join two tables together to get all rows from each, returning NULL where a row is missing in the other one?
For example:
```
declare @t1 table (x int)
declare @t2 table (x int)
insert into @t1 select 2
insert into @t1 select 3
insert into @t1 select 4
insert into @t1 select 5
insert into @t2 select 1
insert into @t2 select 2
insert into @t2 select 5
select *
from @t1 t1
left join @t2 t2 on t2.x = t1.x
```
The result should look like this:
```
t1.x t2.x
NULL 1
2 2
3 NULL
4 NULL
5 5
```
|
```
select *
from @t1 t1
full outer join @t2 t2 on t2.x = t1.x
```
This is like a left join, but it takes all the records from both tables even when there is no match, entering NULL on whichever side has no matching row.
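If you ever need the same result on an engine without `FULL OUTER JOIN` (older SQLite, MySQL), it can be emulated with two `LEFT JOIN`s and a `UNION`. A sketch using Python's bundled `sqlite3` on the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (x INTEGER);
CREATE TABLE t2 (x INTEGER);
INSERT INTO t1 VALUES (2),(3),(4),(5);
INSERT INTO t2 VALUES (1),(2),(5);
""")
# left join in both directions; UNION removes the duplicated matches
rows = con.execute("""
SELECT t1.x, t2.x FROM t1 LEFT JOIN t2 ON t2.x = t1.x
UNION
SELECT t1.x, t2.x FROM t2 LEFT JOIN t1 ON t1.x = t2.x
""").fetchall()
rows.sort(key=lambda r: r[0] if r[0] is not None else r[1])
```

This reproduces the five-row result from the question, including the `(NULL, 1)` row that a plain left join misses.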
|
```
select *
from @t1 t1
FULL OUTER join @t2 t2 on t2.x = t1.x
```
|
Left/right join to receive all rows
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I have a table that looks like
`Items`:
```
id index
1 45
1 50
2 25
2 45
```
I am writing a query that selects all rows from `items`, and I need to replace the index with its description.
Id 1 refers to table\_a; id 2 refers to table\_b.
Table\_A
```
index description
45 'ddd'
50 'fff'
```
Table\_B
```
index description
25 'AAA'
45 'BBB'
```
meaning I need to join on index, but which table to join depends on id.
Something like:
```
Select id,index,description
from items
join table_A,table_B using (index)
```
What I want to get is:
```
id index description
1 45 'ddd'
1 50 'fff'
2 25 'AAA'
2 45 'BBB'
```
How can I do that with 1 Join?
|
You have to UNION Table\_A & Table\_B first, then join with Items as follows
```
Select id,index,T.description from items
join (select 1 as id, index, description from Table_A
UNION select 2 as id, index, description from Table_B) as T
ON items.id=T.id and items.index=T.index
```
|
Assuming that UNION doesn't count as a join, then:
```
SELECT i.id, i.index, u.description
FROM Items AS i
JOIN (SELECT 1 AS id, index, description FROM Table_A
UNION
SELECT 2 AS id, index, description FROM Table_B
) AS u
ON u.id = i.id AND i.index = u.index;
```
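A quick check of this UNION approach with Python's bundled `sqlite3` (note: I rename the column `index` to `idx`, since `INDEX` is a reserved word in most engines):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (id INTEGER, idx INTEGER);
CREATE TABLE table_a (idx INTEGER, description TEXT);
CREATE TABLE table_b (idx INTEGER, description TEXT);
INSERT INTO items VALUES (1,45),(1,50),(2,25),(2,45);
INSERT INTO table_a VALUES (45,'ddd'),(50,'fff');
INSERT INTO table_b VALUES (25,'AAA'),(45,'BBB');
""")
# tag each lookup table with the id it serves, then join once
rows = con.execute("""
SELECT i.id, i.idx, u.description
FROM items i
JOIN (SELECT 1 AS id, idx, description FROM table_a
      UNION ALL
      SELECT 2 AS id, idx, description FROM table_b) u
  ON u.id = i.id AND u.idx = i.idx
ORDER BY i.id, i.idx
""").fetchall()
```

This returns exactly the four rows the question asks for, including the id-dependent lookup of index 45.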
|
Conditional join from different tables
|
[
"",
"sql",
"postgresql",
""
] |
Note: This is for a personal project to help with self-learning; it does not involve work or school.
I'm new to creating databases and I'm looking for insight into how I could design the most logically sound table relationships.
To give a bit of background information: I want to design a database that will hold different alcoholic drinks such as cocktails, beers, wines, etc. The database will contain fields such as the drink ID, name, description, and finally, the ingredients.
The issue I'm having is in trying to wrap my head around the logic of having a variable number of ingredients for a drink. For example, one cocktail might only have 2 ingredients while another might have 4,5,6, etc.
So far I have created this design (it does not appear logically sound from my point of view, and I have gotten stuck trying to figure out the best way to store the information):
```
**Drinks table**
Drink ID
Drink name
Drink description
Drink category // Beer, Cocktails,
Drink type // Pale ale, Lager, etc.
**Cocktails Table**
Drink ID
Drink instructions
**Beers Table**
Drink ID
**Ingredients Table**
Ingredient id
Ingredient name
```
---
I would be very grateful for any insight provided, thank you.
|
I would go for:
```
**Drinks table**
ID
name
description
category_id //foreign key to a category table
type_id //foreign key to a type table
**Category** //Beer, Cocktails, etc
ID
Name
**Type** //Pale ale, Lager, etc.
ID
Name
**Ingredients** //Hold all ingredients
Id
description
**Category_Ingredients** //That category has ingredients
category_id
Ingredient_id
amount //if you want be perfectionist
```
This will allow you to have well-defined data. You could also specialize the drinks; say a client likes some specific ingredient, you can add a table linking each drink to its ingredients:
```
**Drink_ingredient**
id_drink
id_ingredient
amount
```
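A minimal sketch of this schema in SQLite via Python, with the junction table resolving the "variable number of ingredients" problem from the question (all names and sample data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE drinks (id INTEGER PRIMARY KEY, name TEXT,
                     category_id INTEGER REFERENCES category(id));
CREATE TABLE ingredients (id INTEGER PRIMARY KEY, description TEXT);
-- junction table: one row per (drink, ingredient) pair
CREATE TABLE drink_ingredient (
    id_drink INTEGER REFERENCES drinks(id),
    id_ingredient INTEGER REFERENCES ingredients(id),
    amount TEXT,
    PRIMARY KEY (id_drink, id_ingredient));

INSERT INTO category VALUES (1, 'Cocktails');
INSERT INTO drinks VALUES (1, 'Margarita', 1);
INSERT INTO ingredients VALUES (1,'Tequila'),(2,'Lime juice'),(3,'Triple sec');
INSERT INTO drink_ingredient VALUES (1,1,'2 oz'),(1,2,'1 oz'),(1,3,'1 oz');
""")
rows = con.execute("""
SELECT i.description, di.amount
FROM drinks d
JOIN drink_ingredient di ON di.id_drink = d.id
JOIN ingredients i ON i.id = di.id_ingredient
WHERE d.name = 'Margarita'
ORDER BY i.id
""").fetchall()
```

A drink with two ingredients simply has two rows in `drink_ingredient`; one with six has six, so no table ever needs a variable number of columns.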
|
Remove the Cocktails and Beers tables. Those are just different drinks, and you can add an "Instructions" field to the Drinks table for the cocktails. The Ingredients table is the right way to handle ingredients. I'd just add a numeric field for quantity and a text field for the measurement (e.g. 2 oz).
|
Database with variable number of fields in a table
|
[
"",
"mysql",
"sql",
"database",
"relationship",
"normalization",
""
] |
Is there a way to get DBeaver to use SQL Server's alternate "GO" delimiter? Squirrel SQL allows this via a SQL Server preference (statement separator).
|
This should work.
* Open connection editor
* click on "Edit Driver".
* Switch to the "Adv. parameters" tab.
* Set "Script delimiter" value to "go".
[GO batch support](https://github.com/serge-rider/dbeaver/issues/192)
|
@Jonathan's screen no longer seems to exist in the current version. This works for me:
* Right click on your connection and select "Edit Connection".
* Under SQL Editor -> SQL Processing, click the checkbox for "Datasource settings" and then change the "Statements delimiter" to "GO".
|
DBeaver SQL Server GO invalid
|
[
"",
"sql",
"sql-server",
"dbeaver",
""
] |
Hi, I am using the following SQL statement to calculate salary:
```
DECLARE @startDate DATETIME, @endDate DATETIME, @currentDate DATETIME, @currentDay INT, @PerDaycount INT, @Monthcount INT
DECLARE @currentMonth INT, @lastDayOfStartMonth INT
CREATE TABLE #VacationDays ([Month] VARCHAR(10), [DaysSpent] INT,[MonthDays] VARCHAR(10),[PerdayAmt] decimal(8,2),[TotalAmt] decimal(8,2))
DECLARE @Salary decimal(8,0)
SET @Salary = 8000
SET @startDate = '01/01/2015'
SET @endDate = '12/07/2015'
SET @currentMonth = DATEPART(mm, @startDate)
SET @currentDay = DATEPART(dd, @startDate)
SET @currentDate = @startDate
WHILE @currentMonth < DATEPART(mm, @endDate)
BEGIN
SELECT @lastDayOfStartMonth =
DATEPART(dd, DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@currentDate)+1,0)))
PRINT @lastDayOfStartMonth
INSERT INTO #VacationDays
SELECT DATENAME(month, @currentDate) AS [Month],
@lastDayOfStartMonth - @currentDay + 1 AS [DaysSpent],@lastDayOfStartMonth as a,@Salary/@lastDayOfStartMonth As dayammt,(@Salary/@lastDayOfStartMonth ) * @lastDayOfStartMonth - @currentDay + 1 AS totamt
SET @currentDate = DATEADD(mm, 1, @currentDate)
SET @currentMonth = @currentMonth + 1
SET @currentDay = 1
END
IF DATEPART(mm, @startDate) = DATEPART(mm, @endDate)
BEGIN
INSERT INTO #VacationDays
SELECT DATENAME(month, @endDate) AS [Month],
DATEPART(dd, @endDate) - DATEPART(dd, @startDate) + 1 AS [DaysSpent],@lastDayOfStartMonth as a,@Salary/@lastDayOfStartMonth As dayammt,(@Salary/@lastDayOfStartMonth ) * DATEPART(dd, @endDate) - DATEPART(dd, @startDate) + 1 AS totamt
END
ELSE
BEGIN
INSERT INTO #VacationDays
SELECT DATENAME(month, @endDate) AS [Month],
DATEPART(dd, @endDate) AS [DaysSpent],@lastDayOfStartMonth as a,@Salary/@lastDayOfStartMonth As dayammt,(@Salary/@lastDayOfStartMonth ) * DATEPART(dd, @endDate) AS totamt
END
SELECT * FROM #VacationDays
DROP TABLE #VacationDays
```
here is the Result :
```
January 31 31 258.06 8000.00
February 28 28 285.71 8000.00
March 31 31 258.06 8000.00
April 30 30 266.67 8000.00
May 31 31 258.06 8000.00
June 30 30 266.67 8000.00
July 31 31 258.06 8000.00
August 31 31 258.06 8000.00
September 30 30 266.67 8000.00
October 31 31 258.06 8000.00
November 30 30 266.67 8000.00
December 7 30 266.67 1866.67
```
The problem is that sometimes the month-day count comes out wrong: in the example, December has 31 days but the query returns 30.
How do I solve this?
The start date and end date can change as per requirement.
|
Hi, you can use this query.
```
DECLARE @startDate DATETIME, @endDate DATETIME, @currentDate DATETIME, @currentDay INT, @PerDaycount INT, @Monthcount INT
DECLARE @currentMonth INT, @lastDayOfStartMonth INT
CREATE TABLE #VacationDays ([Month] VARCHAR(10), [DaysSpent] INT,[MonthDays] VARCHAR(10),[PerdayAmt] decimal(8,2),[TotalAmt] decimal(8,2))
DECLARE @Salary decimal(8,0)
SET @Salary = 8000
SET @startDate = '01/01/2015'
SET @endDate = '12/07/2015'
SET @currentMonth = DATEPART(mm, @startDate)
SET @currentDay = DATEPART(dd, @startDate)
SET @currentDate = @startDate
WHILE @currentMonth <= DATEPART(mm, @endDate)
BEGIN
SELECT @lastDayOfStartMonth =
DATEPART(dd, DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@currentDate)+1,0)))
PRINT @lastDayOfStartMonth
IF(@currentMonth != DATEPART(mm, @endDate))
BEGIN
INSERT INTO #VacationDays
SELECT DATENAME(month, @currentDate) AS [Month],
@lastDayOfStartMonth - @currentDay + 1 AS [DaysSpent],@lastDayOfStartMonth as a,@Salary/@lastDayOfStartMonth As dayammt,(@Salary/@lastDayOfStartMonth ) * @lastDayOfStartMonth - @currentDay + 1 AS totamt
END
ELSE
BEGIN
INSERT INTO #VacationDays
SELECT DATENAME(month, @endDate) AS [Month],
DATEPART(dd, @endDate) AS [DaysSpent],@lastDayOfStartMonth as a,@Salary/@lastDayOfStartMonth As dayammt,(@Salary/@lastDayOfStartMonth ) * DATEPART(dd, @endDate) AS totamt
END
SET @currentDate = DATEADD(mm, 1, @currentDate)
SET @currentMonth = @currentMonth + 1
SET @currentDay = 1
END
SELECT * FROM #VacationDays
DROP TABLE #VacationDays
```
It's a bit modified. Here are the changes, as per the comments below.
I changed the condition of the WHILE loop:
```
WHILE @currentMonth <= DATEPART(mm, @endDate)
```
Because `@lastDayOfStartMonth` is calculated inside the WHILE loop, it was never recalculated for December: the loop exited before reaching it, so December picked up November's value, which is 30. So I changed the loop accordingly so the proper value is computed inside the loop itself.
|
Please recalculate the month length after the WHILE loop executes, because your loop does not include the last month. Add this line after the loop:
```
SELECT @lastDayOfStartMonth =
DATEPART(dd, DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@currentDate)+1,0)))
```
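The underlying issue is a stale last-day-of-month value carried over from November. Outside of T-SQL, the expected per-month day counts are easy to cross-check; here is a small Python sketch (purely illustrative, not part of the fix) using only the standard library:

```python
import calendar

def last_day_of_month(year: int, month: int) -> int:
    # monthrange returns (weekday of the 1st, number of days in the month);
    # recomputing this per month gives December 31 days, not a stale 30.
    return calendar.monthrange(year, month)[1]
```
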
|
Salary calculation in sql server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I know this is a very easy question, but I need someone to explain the thought process behind this query.
I have a big table of data related to countries. I want to see which country has the minimum population, so I use the following query:
```
SELECT country, MIN(population) AS min_pop FROM countries_by_population;
```
It returns an error that the country should be in an aggregate or in the GROUP BY. I'm confused; why? Shouldn't this query just find the MIN(population) and return it with the country name?
After this I add the country to a GROUP BY like this:
```
SELECT country, MIN(population) AS min_pop FROM countries_by_population GROUP BY country;
```
and I get a table of all the countries with their populations in alphabetical order. How did this happen?
After explaining this, can someone talk through the thought process of the correct query?
Thanks.
|
SQL is about querying, i.e., filtering and reading/writing.
1. When you do:
`SELECT country, MIN(population) AS min_pop FROM countries_by_population;`
you intended to select the country with the minimum population, and that logic reads right: "select countries" and "with the minimum population". However, "select country" does no filtering on its own; it just reads the country value from whatever tuples survive the (absent) `WHERE` clause. Meanwhile, `MIN()` collapses all rows into a single minimum value. There is a mismatch: you want to pair a country with the minimum population, but SQL does not know how to combine a per-row column with a table-wide aggregate.
2. `GROUP BY` groups tuples that share the same value. In this case you group by country, and since country names are unique, no group contains more than one row. Therefore, **visually**, nothing happens.
3. To do your task there are several ways. One of them is to first query the minimum population with
`select MIN(population) from countries_by_population`,
then filter on that value with
`select country from countries_by_population where population = min_pop` (where `min_pop` stands for the value found above), either as a nested subquery or as two separate queries.
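As a quick sanity check, the nested-subquery form can be run through Python's `sqlite3` (the table and column names come from the question; the sample data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countries_by_population (country TEXT, population INTEGER)")
conn.executemany("INSERT INTO countries_by_population VALUES (?, ?)",
                 [("Aland", 100), ("Borduria", 50), ("Cascadia", 200)])

# The subquery computes the table-wide minimum; the outer WHERE pairs it
# back with the row(s) that actually have that population.
row = conn.execute(
    "SELECT country, population FROM countries_by_population "
    "WHERE population = (SELECT MIN(population) FROM countries_by_population)"
).fetchone()
```

Note that if several countries tie for the minimum, the subquery form returns all of them, which is usually what you want.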
|
The [GROUP BY](http://www.tutorialspoint.com/sql/sql-group-by.htm) clause requires every non-aggregated column in the `SELECT` list to appear in it.
Example: consider a **CUSTOMERS** table.
```
SELECT NAME, SUM(SALARY) FROM CUSTOMERS
GROUP BY NAME;
```
This will return a result something like this
```
+----------+-------------+
| NAME | SUM(SALARY) |
+----------+-------------+
| A | 15000.00 |
| B | 25000.00 |
| C | 35000.00 |
+----------+-------------+
```
If you don't specify the column in `GROUP BY` clause like
```
SELECT NAME, SUM(SALARY) FROM CUSTOMERS
```
you'll face the below error message.
```
'Column Name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause
```
|
How to select the country with the min population
|
[
"",
"sql",
"min",
""
] |
There are two tables, `x_ass_table` and `i_ass_table`.
I am inserting 13000 records from `x_ass_table` into `i_ass_table`.
There is a unique constraint on `i_ass_table`:
```
CREATE UNIQUE INDEX i_ass_table_pk ON i_ass_table(ASSIGNMENT_NUMBER,EFFECTIVE_START_DATE,EFFECTIVE_END_DATE,EFFECTIVE_LATEST_CHANGE)
insert into i_ass_table select * from x_ass_table;
```
I am getting a unique constraint error
```
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
```
This has to be an error in the data. But how do I check which of the 13000 records are corrupt or duplicated?
|
You could count the source rows per combination of the fields that must be unique, using the analytic version of the `count()` function, and then query for combinations with more than one row. E.g.:
```
SELECT *
FROM (SELECT *,
COUNT() OVER (PARTITION BY assignment_number,
effective_start_date,
effective_end_date,
effective_latest_change) AS c
FROM x_ass_table) t
WHERE c > 1
```
|
As you asked, you will have to check both tables for the unique constraint conflict.
1. First, check whether `x_ass_table` has any duplicate records on the unique key; you can do that with the query provided by Mureinik in the answer above.
2. Second, check whether the same record exists in both tables on the unique key, which can also cause a unique constraint conflict. You can check that with the query below.
```
select *
from x_ass_table x,
i_ass_table i
where i.assignment_number = x.assignment_number
and i.effective_start_date = x.effective_start_date
and i.effective_end_date = x.effective_end_date
and i.effective_latest_change = x.effective_latest_change;
```
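Both checks boil down to grouping on the unique-key columns and looking for counts above one. Here is a portable sketch of the duplicate check using Python's `sqlite3` (column names come from the question's index definition; the rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x_ass_table (assignment_number INTEGER, "
             "effective_start_date TEXT, effective_end_date TEXT, "
             "effective_latest_change TEXT)")
rows = [(1, "2015-01-01", "2015-12-31", "Y"),
        (1, "2015-01-01", "2015-12-31", "Y"),   # duplicate of the key above
        (2, "2015-01-01", "2015-12-31", "Y")]
conn.executemany("INSERT INTO x_ass_table VALUES (?, ?, ?, ?)", rows)

# GROUP BY ... HAVING COUNT(*) > 1 lists each duplicated key once,
# together with how many times it occurs.
dups = conn.execute(
    "SELECT assignment_number, effective_start_date, effective_end_date, "
    "effective_latest_change, COUNT(*) AS c "
    "FROM x_ass_table "
    "GROUP BY assignment_number, effective_start_date, effective_end_date, "
    "effective_latest_change HAVING COUNT(*) > 1"
).fetchall()
```
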
|
Unique key constraint error while inserting in SQL
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
"unique-key",
""
] |
In my application I need to assign multiple groups to my users. There are 1000+ users and 10-15 groups.
Which database design is better?
One-to-many:
```
USER_ID | GROUP_1 | GROUP_2 | ... | GROUP_15
--------------------------------------------
1 | true | false | ... | true
2 | false | true | ... | true
3 | true | true | ... | true
. | . | . | ... | .
. | . | . | ... | .
. | . | . | ... | .
```
or many-to-many:
```
USER_ID | GROUP_ID
------------------
1 | 1
1 | 15
2 | 2
2 | 15
3 | 1
3 | 2
3 | 15
. | .
. | .
. | .
```
?
|
The many-to-many is the better design without a doubt.
The first design makes writing queries difficult. Consider the following routine queries.
1. Is a specified user in a specified group? To do this you have to use a different query for each group. This is undesirable. Also if you are using column names for groups, then the list of groups is part of the database schema rather than being part of the data, where the users are data.
2. What groups is a specified user in? You could simply return the single row, though many applications would probably prefer (and are versed in) iterating through a result set. Iterating through a subset of columns is doable but unnatural.
3. What users does a specified group contain? Now you are back to a different query for each group.
I'll leave the demonstration of these things as an exercise to the reader.
The relational model, which SQL databases approximate, was intended to deal with relations and keys (tables and primary/foreign keys). Information should exist in one (and ONLY ONE) place AS DATA (not metadata). The multi-column approach lacks normalization and will be a maintenance headache into the future.
Note: I edited this response to correct a misreading on my part of the original code. The thrust of the comments is the same however. The second (many-to-many) is the way to go.
|
If you want to follow the rules of an entity relationship model:
```
Many-to-many: users can belong to different groups & groups can have multiple users.
One-to-many: a user belongs to one group & groups can have multiple users.
```
Your second example is a many-to-many, your first isn't a one-to-many. A one-to-many would be:
```
USER_ID | GROUP_ID
------------------
1 | 1
2 | 15
3 | 2
4 | 15
5 | 1
6 | 2
7 | 15
```
Where user\_id must be unique.
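For what it's worth, the many-to-many design can be sketched end-to-end with Python's `sqlite3` (table and column names here are my own illustration, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (user_id  INTEGER PRIMARY KEY);
CREATE TABLE groups (group_id INTEGER PRIMARY KEY);
CREATE TABLE user_groups (
    user_id  INTEGER REFERENCES users(user_id),
    group_id INTEGER REFERENCES groups(group_id),
    PRIMARY KEY (user_id, group_id)   -- one row per membership
);
""")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO groups VALUES (?)", [(1,), (2,), (15,)])
conn.executemany("INSERT INTO user_groups VALUES (?, ?)",
                 [(1, 1), (1, 15), (2, 2), (2, 15), (3, 1), (3, 2), (3, 15)])

# "Is user 1 in group 15?" -- one parameterized query works for ANY group,
# unlike the one-column-per-group design.
in_group = conn.execute(
    "SELECT COUNT(*) FROM user_groups WHERE user_id = ? AND group_id = ?",
    (1, 15)).fetchone()[0]

# "Which users are in group 15?"
members = [r[0] for r in conn.execute(
    "SELECT user_id FROM user_groups WHERE group_id = ? ORDER BY user_id", (15,))]
```
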
|
Database design. One-to-many or Many-to-many?
|
[
"",
"mysql",
"sql",
"database",
"sqlite",
"relational-database",
""
] |
I have searched a bit and can't see any existing questions that answer mine, so here it goes.
I am trying to get a date from the database (the source is in datetime format, i.e. "2015-11-30 00:00:00.000") and select it as a date, but in another format. After some googling I am using the Convert function for this. First I set the target format as varchar:
```
Select
convert(varchar(10),ac_payout_book_dt,104) as 'Dato'
```
Result is date in correct format, i.e "30.11.2015"
Then I needed to sort it when automating the script, so to be able to use
```
ORDER BY Dato DESC
```
... I changed my query to ...
```
Select
convert(date,ac_payout_book_dt,104) as 'Dato'
```
Now the result is in the wrong format, i.e "2015-11-30"
After a few rounds on the internet I also tried this without any luck
```
CAST(convert(date,ac_payout_book_dt,104) AS DATE) as 'Dato'
```
Can anyone help me figure out where I am going wrong?
BR
Andreas
|
Well, if you use
```
Select
convert(varchar(10),ac_payout_book_dt,104) as 'Dato'
```
then you convert your `DateTime` (which internally in SQL Server has no format - it's an 8-byte binary value) to a string - you can control how the date is formatted into the string.
In order to sort your output, you need a date (without the time, I presume) - no problem, but when you do
```
Select
convert(date, ac_payout_book_dt, 104) as 'Dato'
```
you're converting your `DateTime` to a `Date` (which in SQL Server *again* has **no format** - it's just a 3-byte binary value) - so the style "104" is useless here - you're converting one binary value to another binary value - and the date will be displayed in the default SQL Server date formatting.
Since this really is a binary-to-binary conversion, without **string formatting** involved, I prefer to use
```
SELECT CAST(ac_payout_book_dt AS DATE) AS 'Dato'
```
since there's really no point in supplying a style which cannot be used anyway.
So really, what you need to be aware of is this:
* are you using a `DATE` or `DATETIME` for **sorting** or date arithmetic? In that case, you **must use** the native data types - don't do date math on a string representation of a date!
* do you want to **output** your `DATE` or `DATETIME` to the screen? Then you need to convert it to a **string** using `CONVERT` or `FORMAT` and supplying the proper styles / date formatting string
So overall, without knowing your entire query - you probably want to do something like this:
```
SELECT
CONVERT(VARCHAR(10), ac_payout_book_dt, 104) as 'Dato',
..... (other columns) .....
FROM
dbo.YourTable
WHERE
.... (some condition) ....
ORDER BY
CAST(ac_payout_book_dt AS DATE) DESC
```
Or maybe you want to check out [`FORMAT`](https://msdn.microsoft.com/en-us/library/hh213505.aspx) which is available from SQL Server **2012** on, that allows for more flexibility in formatting your dates:
```
SELECT
FORMAT(ac_payout_book_at, 'dd.MM.yyyy', 'de-de')
```
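The rule "sort on the native value, format only for output" is not SQL Server specific. Here is the same idea in plain Python (illustrative only; `%d.%m.%Y` plays the role of style 104):

```python
from datetime import date

rows = [date(2015, 11, 30), date(2016, 1, 2), date(2015, 12, 7)]

# Sort on the native date value (correct chronological order)...
rows.sort(reverse=True)

# ...and turn it into a string only at display time.
display = [d.strftime("%d.%m.%Y") for d in rows]
```

Sorting the formatted strings instead would order "02.01.2016" before "30.11.2015", which is exactly the trap the answer above warns against.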
|
I suggest you try this: order by the value `CAST` to `DATE`. This way you perform the sort you need and still get your desired date format:
```
SELECT
    CONVERT(VARCHAR(10), ac_payout_book_dt, 104) as 'Dato'
FROM
    dbo.YourTable
ORDER BY
    CAST(ac_payout_book_dt AS DATE) DESC
```
|
sql - date not in format specified
|
[
"",
"sql",
"sql-server",
"date",
"select",
""
] |
I am using Microsoft SQL Server 2014.
The following is my query
```
SELECT type, SUM(calories) AS total_calories
FROM exercise_logs
GROUP BY type
HAVING total_calories > 150;
```
and I get the error
> Msg 207, Level 16, State 1, Line 2
> Invalid column name 'total\_calories'.
Can somebody point out what I am doing wrong? Thanks.
|
The aggregate expression is required, as the `HAVING` clause has no access to the alias `total_calories`:
```
SELECT type,SUM(calories) AS total_calories
FROM exercise_logs
GROUP BY type
HAVING SUM(calories) > 150;
```
|
You can also wrap the `GROUP BY` query up in a derived table:
```
select type, total_calories
FROM (
SELECT type, SUM(calories) AS total_calories
FROM exercise_logs
GROUP BY type
) AS dt
WHERE total_calories > 150
```
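Either form is easy to try against a small dataset; here is a sketch of the aggregate-in-`HAVING` form using Python's `sqlite3` (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exercise_logs (type TEXT, calories INTEGER)")
conn.executemany("INSERT INTO exercise_logs VALUES (?, ?)",
                 [("biking", 100), ("biking", 80),
                  ("hiking", 90), ("tree climbing", 70)])

# Repeating the aggregate inside HAVING works in any engine, because HAVING
# is evaluated before the SELECT list's aliases exist.
rows = conn.execute(
    "SELECT type, SUM(calories) AS total_calories "
    "FROM exercise_logs GROUP BY type HAVING SUM(calories) > 150"
).fetchall()
```
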
|
'invalid column name' while using the HAVING
|
[
"",
"sql",
"sql-server",
"t-sql",
"having",
""
] |
I found this MySQL function for [Base58](https://en.wikipedia.org/wiki/Base58) Encoder in a [Github Gist](https://gist.github.com/micahwalter/d6c8f8bc677978cf01a5).
```
DELIMITER $$
CREATE FUNCTION base58_encode (num int) RETURNS varchar(255)
DETERMINISTIC
BEGIN
DECLARE alphabet varchar(255);
DECLARE base_count int DEFAULT 0;
DECLARE encoded varchar(255);
DECLARE divisor DECIMAL(10,4);
DECLARE mode int DEFAULT 0;
SET alphabet = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ';
SET base_count = CHAR_LENGTH(alphabet);
SET encoded = "";
WHILE num >= base_count DO
SET divisor = num / base_count;
SET mode = (num - (base_count* TRUNCATE(divisor,0)));
SET encoded = CONCAT(SUBSTRING(alphabet FROM mode+1 FOR 1), encoded);
SET num = TRUNCATE(divisor,0);
END WHILE;
SET encoded = CONCAT(SUBSTRING(alphabet FROM num+1 FOR 1), encoded);
RETURN (encoded);
END
```
I am new to `PostgreSQL` and am having difficulty converting the above function to a PostgreSQL function.
How would be the equivalent `PostgreSQL` function of above SQL snippet for Base58 Encoder?
|
Equivalent function in PostgreSQL that I came up with is as follows.
```
CREATE FUNCTION base58_encode(num INT)
RETURNS VARCHAR(255) AS $encoded$
DECLARE
alphabet VARCHAR(255);
base_count INT DEFAULT 0;
encoded VARCHAR(255);
divisor DECIMAL(10, 4);
mod INT DEFAULT 0;
BEGIN
alphabet := '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ';
base_count := char_length(alphabet);
encoded := '';
WHILE num >= base_count LOOP
divisor := num / base_count;
mod := (num - (base_count * trunc(divisor, 0)));
encoded := concat(substring(alphabet FROM mod + 1 FOR 1), encoded);
num := trunc(divisor, 0);
END LOOP;
encoded := concat(substring(alphabet FROM num + 1 FOR 1), encoded);
RETURN (encoded);
END; $encoded$
LANGUAGE PLPGSQL;
```
|
For completeness here's a quick and dirty swipe at the inverse `base58_decode()` function:
```
CREATE OR REPLACE FUNCTION base58_decode(str VARCHAR(255))
RETURNS BIGINT AS $$
DECLARE
alphabet VARCHAR(255);
c CHAR(1);
p INT;
v BIGINT;
BEGIN
alphabet := '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ';
v := 0;
FOR i IN 1..char_length(str) LOOP
c := substring(str FROM i FOR 1);
-- This is probably wildly inefficient, but we're just using this function for diagnostics...
p := position(c IN alphabet);
IF p = 0 THEN
RAISE 'Illegal base58 character ''%'' in ''%''', c, str;
END IF;
v := (v * 58) + (p - 1);
END LOOP;
RETURN v;
END;$$
LANGUAGE PLPGSQL;
```
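Since the SQL and PL/pgSQL versions are easy to get subtly wrong, it can help to cross-check against a reference implementation. Here is a minimal Python sketch of the same Flickr-style alphabet (illustrative, not part of either database function):

```python
ALPHABET = "123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ"

def base58_encode(num: int) -> str:
    # Mirrors the SQL loop: peel off the least significant base-58 digit.
    encoded = ""
    while num >= 58:
        num, mod = divmod(num, 58)
        encoded = ALPHABET[mod] + encoded
    return ALPHABET[num] + encoded

def base58_decode(s: str) -> int:
    v = 0
    for c in s:
        p = ALPHABET.index(c)  # raises ValueError on an illegal character
        v = v * 58 + p
    return v
```

A quick round-trip (`base58_decode(base58_encode(n)) == n`) over a few values is enough to validate the database functions against this sketch.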
|
Base58 Encoder function in PostgreSQL
|
[
"",
"mysql",
"sql",
"postgresql",
"base58",
""
] |
I have data as `Decimal(15,4)` and values can be like `14.0100`, or `14.0000`, or `14.9999`
For integration with other system we have to store this kind of data in `NVarChar(MAX)` attributes table. When I run `CAST(Field AS NVarChar(MAX))` I get string values like `0.0000`
What I want is to trim the trailing zeros (and the period, if needed) from those strings, because the data is later used in online transmission and it's much better to send `14` instead of `14.0000`.
How do I do that?
|
`SQL Server 2012+` you could use `FORMAT`, with `SQL Server 2008` you could use string manipulation:
```
CREATE TABLE #tab(col DECIMAL(15,4));
INSERT INTO #tab(col)
VALUES (14.0100), (14.0000),
(14.9999), (10), (0),
(-1), (-10), (-12.01), (-12.10);
SELECT
col
,result_2012 = FORMAT(col, '########.####')
,result_2008 = CASE
WHEN col = 0 THEN '0'
ELSE LEFT(col,LEN(col) -
CASE WHEN PATINDEX('%[1-9]%', REVERSE(col)) < PATINDEX('%[.]%', REVERSE(col))
THEN PATINDEX('%[1-9]%', REVERSE(col)) - 1
ELSE PATINDEX('%[.]%', REVERSE(col))
END)
END
FROM #tab;
```
|
```
-- With a lot of fantasy
-- Tip: to optimize performance, first select the values into a @-table, then do this transformation.
SELECT
    LEFT(CAST(Field AS VARCHAR(MAX)), CHARINDEX('.', CAST(Field AS VARCHAR(MAX))) - 1) +
    CASE WHEN REVERSE(CAST(CAST(REVERSE(SUBSTRING(CAST(Field AS VARCHAR), CHARINDEX('.', CAST(Field AS VARCHAR)) + 1, 4)) AS INT) AS NVARCHAR)) = 0
         THEN ''
         ELSE '.' + REVERSE(CAST(CAST(REVERSE(SUBSTRING(CAST(Field AS VARCHAR), CHARINDEX('.', CAST(Field AS VARCHAR)) + 1, 4)) AS INT) AS NVARCHAR))
    END
```
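If the trimming ever moves out of SQL (the question mentions the values are transmitted to another system), the same rule is a one-liner in most languages. A Python sketch of the string manipulation (illustrative only):

```python
def trim_zeros(s: str) -> str:
    # Only touch strings that actually contain a decimal point, so plain
    # integers like "1400" keep their zeros.
    if "." in s:
        s = s.rstrip("0").rstrip(".")
    return s

examples = [trim_zeros(v) for v in ("14.0100", "14.0000", "14.9999", "0.0000")]
```
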
|
Remove trailing 0 when CAST as NVarChar
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
Hi I have the below query in Teradata. I have a row number partition and from that I want rows with rn=1. Teradata doesn't let me use the row number as a filter in the same query. I know that I can put the below into a subquery with a `where rn=1` and it gives me what I need. But the below snippet needs to go into a larger query and I want to simplify it if possible.
Is there a different way of doing this so I get a table with 2 columns - one row per customer with the corresponding fc\_id for the latest eff\_to\_dt?
```
select cust_grp_id, fc_id, row_number() over (partition by cust_grp_id order by eff_to_dt desc) as rn
from table1
```
|
Have you considered using the QUALIFY clause in your query?
```
SELECT cust_grp_id
, fc_id
FROM table1
QUALIFY ROW_NUMBER()
OVER (PARTITION BY cust_grp_id
ORDER BY eff_to_dt desc)
= 1;
```
|
Calculate the `MAX` `eff_to_dt` for each `cust_grp_id` and then join the result back to the main table.
```
SELECT T1.cust_grp_id,
T1.fc_id,
T1.eff_to_dt
FROM Table1 AS T1
JOIN
(SELECT cust_grp_id,
MAX(eff_to_dt) AS max_eff_to_dt
FROM Table1
GROUP BY cust_grp_id) AS T2 ON T2.cust_grp_id = T1.cust_grp_id
AND T2.max_eff_to_dt = T1.eff_to_dt
```
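Both the `QUALIFY` and the join approach implement "greatest row per group". The join form is portable enough to sketch with Python's `sqlite3` (table and column names from the question; rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (cust_grp_id INTEGER, fc_id INTEGER, eff_to_dt TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                 [(1, 10, "2015-01-01"), (1, 11, "2015-06-01"),
                  (2, 20, "2015-03-01"), (2, 21, "2015-02-01")])

# MAX-per-group joined back: one row per customer, carrying the fc_id
# that belongs to the latest eff_to_dt.
rows = conn.execute("""
    SELECT t1.cust_grp_id, t1.fc_id
    FROM table1 t1
    JOIN (SELECT cust_grp_id, MAX(eff_to_dt) AS max_dt
          FROM table1 GROUP BY cust_grp_id) t2
      ON t2.cust_grp_id = t1.cust_grp_id AND t2.max_dt = t1.eff_to_dt
    ORDER BY t1.cust_grp_id
""").fetchall()
```

Note this form returns multiple rows per customer if two rows tie on the latest `eff_to_dt`, whereas `ROW_NUMBER()`/`QUALIFY` picks exactly one.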
|
Different way of writing this SQL query with partition
|
[
"",
"sql",
"teradata",
""
] |
I am currently trying to export some data from SQL Server 2014 to a XML document. I've gotten help from here earlier on this project and I am very thankful for that.
At the moment the data is structured correctly and is as it should be, but unfortunately the server (Totalview server) that is picking up the XML document is very picky about it. SQL Server is adding a stamp on top of the document which looks like this:
```
XML_F52E2B61-18A1-11d1-B105-00805F49916B
```
Because of this stamp in the XML document, the Totalview server cannot load the file. I have looked around a lot on Google and on Microsoft's help pages but can't find anything about this; maybe I'm using the wrong words or looking in the wrong places, which is why I am asking you great guys here.
What I want is for this stamp to be replaced by this stamp:
```
<?xml version="1.0"?>
```
It doesn't matter how it's done, and I have thought of making some kind of script that will change this after SQL Server outputs the XML file, but it would be nice to get SQL Server to output it correctly in the first place so there are fewer steps that could fail. Is that the only way?
Kind regards and thanks in advance, I am very sorry for any mistakes made in this question, I am still quite new to this site.
***EDIT***
SQL query as following:
```
SELECT
[ctID],
[forecastData/date/day] = Day(dDate),
[forecastData/date/month] = month(dDate),
[forecastData/date/year] = year(dDate),
cast(([co_forecast]) as decimal(20,2)) AS [forecastData/dailyData/contactsReceived],
cast(([AHT_Forecast]) as int) AS [forecastData/dailyData/averageAHT]
FROM
[ProductionForecast].[dbo].[vwForecastXMLDaily]
WHERE
dDate BETWEEN cast(getdate() as date) AND cast(getdate()+31 as date)
GROUP BY
[CTID], [dDate], [co_forecast], [AHT_Forecast]
FOR XML PATH ('ctForecast'), ROOT ('forecastImport')
```
Table structure is as following:
```
CTID dDate CO_Forecast AHT_Forecast
2 2016-01-15 167.75515 419.042510
2 2016-01-16 0.00000 0.000000
2 2016-01-17 0.00000 0.000000
2 2016-01-18 246.15381 382.407540
2 2016-01-19 238.35609 379.404340
2 2016-01-20 227.36992 444.473690
2 2016-01-21 232.43770 424.452350
2 2016-01-22 203.65597 403.429950
2 2016-01-23 0.00000 0.000000
2 2016-01-24 0.00000 0.000000
```
|
The answer to this question is to disable the column header when exporting to a file. In my case I am using Microsoft SQL Server 2014, so I went into Tools->Options->Query Results->Results to Text, then removed the tick from 'Include column headers in the result set'.
Even though the Totalview server still won't accept the output file, this did remove the stamp/column header, so my question here is solved.
Thank you all for your help.
|
You could try something like this. Keeping in mind that your exported XML file needs a root element and you need permissions to execute xp\_cmdshell. Thanks to [How To Save XML Query Results to a File](https://stackoverflow.com/questions/18775005/how-to-save-xml-query-results-to-a-file)
```
DECLARE @xml AS XML = (SELECT * FROM Table FOR XML PATH, ROOT('MyRoot')
)
-- Cast the XML variable into a VARCHAR
DECLARE @xmlChar AS VARCHAR(max) = CAST(@xml AS VARCHAR(max))
-- Escape the < and > characters
SET @xmlChar = REPLACE(REPLACE(@xmlChar, '>', '^>'), '<', '^<')
-- Create command text to echo to file
DECLARE @command VARCHAR(8000) = 'echo ' + @xmlChar + ' > c:\test.xml'
-- Execute the command
EXEC xp_cmdshell @command
```
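If a post-processing script ends up being the way to go, note that most XML libraries will write the `<?xml ...?>` prolog for you. A Python sketch (illustrative; the element names follow the question's query):

```python
import xml.etree.ElementTree as ET

# Build the same shape the FOR XML PATH query produces.
root = ET.Element("forecastImport")
ct = ET.SubElement(root, "ctForecast")
ET.SubElement(ct, "ctID").text = "2"

# tostring(..., xml_declaration=True) emits the <?xml ...?> prolog itself,
# so the hand-written stamp is never needed.
doc = ET.tostring(root, encoding="unicode", xml_declaration=True)
```
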
|
SQL Server stamp on exporting FOR XML
|
[
"",
"sql",
"sql-server",
"xml",
"stamp",
""
] |