| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Is there a way to show multiple metrics in one SQL PIVOT operator? Basically, I have Table1 and want the results in the Table2 format.
```
Table1
ACCOUNTS YEAR REVENUE MARGIN
ACCOUNT1 2012 100 50
ACCOUNT1 2013 104 52
ACCOUNT1 2014 108 54
ACCOUNT2 2012 112 56
ACCOUNT2 2013 116 58
ACCOUNT2 2014 120 60
ACCOUNT3 2012 124 62
ACCOUNT3 2013 128 64
ACCOUNT3 2014 132 66
Table2
ACCOUNTS REVENUE_2012 REVENUE_2013 REVENUE_2014 MARGIN_2012 MARGIN_2013 MARGIN_2014
ACCOUNT1 100 104 108 50 52 54
ACCOUNT2 112 116 120 56 58 60
ACCOUNT3 124 128 132 62 64 66
```
Please Help
|
```
DECLARE @t TABLE
(
ACCOUNTS NVARCHAR(MAX) ,
YEAR INT ,
REVENUE INT ,
MARGIN INT
)
INSERT INTO @t
VALUES ('ACCOUNT1', 2012, 100, 50 ),('ACCOUNT1', 2013, 104, 52 ),
('ACCOUNT1', 2014, 108, 54 ),('ACCOUNT2', 2012, 112, 56 ),
('ACCOUNT2', 2013, 116, 58 ),('ACCOUNT2', 2014, 120, 60 ),
('ACCOUNT3', 2012, 124, 62 ),('ACCOUNT3', 2013, 128, 64 ),
('ACCOUNT3', 2014, 132, 66 )
;WITH CTE AS
(
SELECT ACCOUNTS, value, name + '_' + cast(YEAR as char(4)) header
FROM @t as p
UNPIVOT
(value FOR name IN
([REVENUE], [MARGIN]) ) AS unpvt
)
SELECT ACCOUNTS, [REVENUE_2012],[REVENUE_2013],[REVENUE_2014]
,[MARGIN_2012],[MARGIN_2013],[MARGIN_2014]
FROM CTE
PIVOT
(SUM([value])
FOR header
in([REVENUE_2012],[REVENUE_2013],[REVENUE_2014], [MARGIN_2012]
,[MARGIN_2013],[MARGIN_2014])
)AS p ORDER BY 2,3,4
```
Result:
```
ACCOUNTS REVENUE_2012 REVENUE_2013 REVENUE_2014 MARGIN_2012 MARGIN_2013 MARGIN_2014
ACCOUNT1 100 104 108 50 52 54
ACCOUNT2 112 116 120 56 58 60
ACCOUNT3 124 128 132 62 64 66
```
|
You could do something like this:
```
SELECT
Table1.ACCOUNTS,
SUM(CASE WHEN Table1.[YEAR]=2012 THEN Table1.REVENUE ELSE 0 END) AS REVENUE_2012,
SUM(CASE WHEN Table1.[YEAR]=2013 THEN Table1.REVENUE ELSE 0 END) AS REVENUE_2013,
SUM(CASE WHEN Table1.[YEAR]=2014 THEN Table1.REVENUE ELSE 0 END) AS REVENUE_2014,
SUM(CASE WHEN Table1.[YEAR]=2012 THEN Table1.MARGIN ELSE 0 END) AS MARGIN_2012,
SUM(CASE WHEN Table1.[YEAR]=2013 THEN Table1.MARGIN ELSE 0 END) AS MARGIN_2013,
SUM(CASE WHEN Table1.[YEAR]=2014 THEN Table1.MARGIN ELSE 0 END) AS MARGIN_2014
FROM
Table1
GROUP BY
Table1.ACCOUNTS
```
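The conditional-aggregation form above is portable across databases. As a quick sketch, running the same shape against an in-memory SQLite database (sample data copied from the question) produces the Table2 layout:

```python
import sqlite3

# Build the sample Table1 from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (ACCOUNTS TEXT, YEAR INT, REVENUE INT, MARGIN INT)")
rows = [
    ("ACCOUNT1", 2012, 100, 50), ("ACCOUNT1", 2013, 104, 52), ("ACCOUNT1", 2014, 108, 54),
    ("ACCOUNT2", 2012, 112, 56), ("ACCOUNT2", 2013, 116, 58), ("ACCOUNT2", 2014, 120, 60),
    ("ACCOUNT3", 2012, 124, 62), ("ACCOUNT3", 2013, 128, 64), ("ACCOUNT3", 2014, 132, 66),
]
conn.executemany("INSERT INTO Table1 VALUES (?,?,?,?)", rows)

# Conditional aggregation: one SUM(CASE ...) column per metric/year pair.
result = conn.execute("""
    SELECT ACCOUNTS,
           SUM(CASE WHEN YEAR=2012 THEN REVENUE ELSE 0 END) AS REVENUE_2012,
           SUM(CASE WHEN YEAR=2013 THEN REVENUE ELSE 0 END) AS REVENUE_2013,
           SUM(CASE WHEN YEAR=2014 THEN REVENUE ELSE 0 END) AS REVENUE_2014,
           SUM(CASE WHEN YEAR=2012 THEN MARGIN  ELSE 0 END) AS MARGIN_2012,
           SUM(CASE WHEN YEAR=2013 THEN MARGIN  ELSE 0 END) AS MARGIN_2013,
           SUM(CASE WHEN YEAR=2014 THEN MARGIN  ELSE 0 END) AS MARGIN_2014
    FROM Table1
    GROUP BY ACCOUNTS
    ORDER BY ACCOUNTS
""").fetchall()
print(result[0])  # ('ACCOUNT1', 100, 104, 108, 50, 52, 54)
```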
|
SQL Pivot for Multiple Metrics by Year
|
[
"sql",
"sql-server",
"pivot"
] |
I need to find the sum of values for each month and then find the max value for the months. I am a bit stumped and not sure what to do.
```
My customer wants it formatted a particular way:
Activity | JUN | JUL | AUG | MIN | MAX | AVG
jogging | 232 | 32 | 343 | 32 | 343 | 202
Here is my table:
activity + status + date
____________________________
swimming + 1 + 13-DEC-02
swimming + 1 + 12-FEB-01
jogging + 0 + 14-AUG-03
```
Here is what I have so far:
```
SELECT ACTIVITY,
SUM(
CASE
WHEN DECODE(TO_CHAR((TRUNC(date)), 'MON'),'JUL','JUL') IN 'JUL'
THEN 1
ELSE 0
END ) JUL,
SUM(
CASE
WHEN DECODE(TO_CHAR((TRUNC(date)), 'MON'),'AUG','AUG') IN 'AUG'
THEN 1
ELSE 0
END ) AUG
FROM daily_log
WHERE ACTIVITY_DESC IN ('Swimming','Jogging')
AND TRUNC(date) BETWEEN '01-JUL-2014' AND '30-JUN-2015'
AND STATUS = 1
group by ACTIVITY
```
Help!
|
Your query is a bit too complicated. This, for example:
```
CASE
WHEN DECODE(TO_CHAR((TRUNC(date)), 'MON'),'AUG','AUG') IN 'AUG'
THEN 1
ELSE 0
END
```
could be rewritten as:
```
CASE WHEN TO_CHAR(date, 'MON') = 'AUG' THEN 1 ELSE 0 END
```
or even:
```
DECODE(TO_CHAR(date, 'MON'), 'AUG', 1, 0)
```
In other words, you need either `CASE` or `DECODE()` but not both. With that in mind, we can rewrite your query a bit:
```
SELECT activity
, SUM(DECODE(TO_CHAR(date, 'MON'), 'JUL', 1, 0)) AS jul
, SUM(DECODE(TO_CHAR(date, 'MON'), 'AUG', 1, 0)) AS aug
FROM daily_log
WHERE activity_desc IN ('Swimming','Jogging')
AND date >= DATE'2014-07-01'
AND date < DATE'2015-07-01'
AND status = 1
GROUP BY activity;
```
Now, notice how I changed your filter on the `date` column (which, by the way, is an awful name for a column, since `DATE` is an Oracle keyword used for a data type and for ANSI date literals). You want to avoid using `TRUNC()` on a `DATE` column, especially if it is indexed (and if it isn't indexed, you might want to consider indexing it).

Since you want the minimum and maximum values across all months, you'll want to use the `LEAST()` and `GREATEST()` functions:
```
SELECT activity, jul, aug
, LEAST(jul, aug) AS min
, GREATEST(jul, aug) AS max
, (jul+aug)/2 AS avg
FROM (
SELECT activity
, SUM(DECODE(TO_CHAR(date, 'MON'), 'JUL', 1, 0)) AS jul
, SUM(DECODE(TO_CHAR(date, 'MON'), 'AUG', 1, 0)) AS aug
FROM daily_log
WHERE activity_desc IN ('Swimming','Jogging')
AND date >= DATE'2014-07-01'
AND date < DATE'2015-07-01'
AND status = 1
GROUP BY activity
);
```
Unfortunately there is nothing like `LEAST()` and `GREATEST()` that will compute an average value, so we have to do that by hand. You'll want to increase the denominator when adding results for additional months.
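`LEAST()`/`GREATEST()` are Oracle functions, but the outer wrapper is easy to sketch elsewhere; in SQLite (used here only because it ships with Python) the scalar two-argument `min()`/`max()` play the same role. The table name and values below are made up for the demo:

```python
import sqlite3

# SQLite's scalar min()/max() with two arguments behave like Oracle's
# LEAST()/GREATEST(), so the outer wrapper can be sketched here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pivoted (activity TEXT, jul INT, aug INT)")
conn.execute("INSERT INTO pivoted VALUES ('jogging', 32, 343)")

row = conn.execute("""
    SELECT activity, jul, aug,
           min(jul, aug)     AS min_val,
           max(jul, aug)     AS max_val,
           (jul + aug) / 2.0 AS avg_val   -- average done by hand, as above
    FROM pivoted
""").fetchone()
print(row)  # ('jogging', 32, 343, 32, 343, 187.5)
```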
|
For MIN/MAX you could use `GREATEST` and `LEAST` if you just need to select values from the result. Or you can use the `MIN`, `MAX` and `AVG` functions to aggregate the values directly from the source data.
<http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions001.htm>
Also, you might want to look into subqueries, especially the `WITH` clause, to select intermediate results that you can then combine.
<http://oracle-base.com/articles/misc/with-clause.php>
Regarding the `SUM(CASE ...)` bit, you might have access to `PIVOT` as an option for transforming rows to columns.
<http://oracle-base.com/articles/11g/pivot-and-unpivot-operators-11gr1.php>
Edit:
Something like this should do it
```
SELECT Activity,
COALESCE(AUG,0) AS AUG,
COALESCE(JUN,0) AS JUN,
COALESCE(JUL,0) AS JUL,
MIN,MAX,AVG
FROM (
SELECT
Mon,
Activity,
Count,
MIN(Count) OVER (PARTITION BY Activity) AS Min,
MAX(Count) OVER (PARTITION BY Activity) AS Max,
AVG(Count) OVER (PARTITION BY Activity) AS Avg
FROM (
SELECT TO_CHAR("date",'MON') AS Mon, activity, COUNT(*) Count
FROM daily_log
GROUP BY TO_CHAR("date",'MON'), activity
)
) PIVOT ( SUM(Count) FOR Mon IN ('AUG' AS AUG, 'JUN' AS JUN, 'JUL' AS JUL))
```
|
find sum of values by month then find max sum of months
|
[
"sql",
"database",
"oracle",
"max"
] |
I have a problem in parsing my last name and first name in a table. My code:
```
SELECT value,
substr(value,instr(value,' ',1),instr(value,' ',2)-2) last_name,
FROM (SELECT 'Matt Ryan, QB' value
FROM dual);
```
If I write `'Andrew Luck, QB'` it parses properly the last name. But when I write `'Matt Ryan, QB'` it parses just two letters from the last name. I identify it by white space. I don't understand what can be wrong here.
```
'Matt Ryan, QB'
'Andrew Luck, QB'
```
|
Please consider using a regular expression. I think this should solve your problem.
```
SELECT value,
REGEXP_SUBSTR(value,
'\w+\s+(\w+\s+\w+|\w+),',1,1,NULL,1) last_name
FROM (
SELECT 'Matt Ryan, QB' value FROM dual
);
```
Check this [fiddle](http://sqlfiddle.com/#!4/9eecb/1632/0)
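If it helps to verify the pattern outside the database, the same regular expression can be exercised with Python's `re` module (capture group 1, as in the `REGEXP_SUBSTR` call):

```python
import re

# Same pattern as the Oracle REGEXP_SUBSTR call: first word, whitespace,
# then a one- or two-word last name captured up to the comma.
pattern = re.compile(r'\w+\s+(\w+\s+\w+|\w+),')

def last_name(value):
    m = pattern.search(value)
    return m.group(1) if m else None

print(last_name('Matt Ryan, QB'))    # Ryan
print(last_name('Andrew Luck, QB'))  # Luck
```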
|
You're misusing the index parameter that you're passing to `instr`, plus you should remove the comma that comes after "last\_name". Try:
```
SELECT value,
substr(value, instr(value,' ')+1, instr(value,' ',instr(value,' '))-1) last_name
FROM (
SELECT 'Matt Ryan, QB' value FROM dual
);
```
**[Link to fiddle](http://sqlfiddle.com/#!4/9eecb/1586)**
|
substr SQL function
|
[
"sql",
"oracle",
"substring"
] |
I've got an SQLite database with close to 500,000 rows worth of access log information in it. I'm using it for aggregate information like "number of times each ip has hit the site", or "percentage of hits that were POST", etc.
I wrote a SQL query that gathers how many times each IP address has hit the site, where the number of occurrences is greater than 1% of the count of the IP addresses.
```
select ip_address, count(ip_address)
from records
group by ip_address
having count(ip_address) > (select count(ip_address) from records) * .01
```
This returns about 7 significant IP addresses. How would I go about unioning an "All Others" row to the result set?
I tried UNIONing with the logical opposite
```
select "All Others", count(ip_address)
from records
group by ip_address
having count(ip_address) < (select count(ip_address) from records) * .01
```
but this returns multiple "All Other" rows, with a count that is sequential.
|
Use `union all`, of course... but that doesn't answer "the problem".
The issue is that the second query "returns multiple" rows (just like the first query) because the `group by` is by IP, of which there are many. That is, there is one resulting tuple *per group*, independent of any operation in the select output clause.
The desired goal is probably to sum-the-counts with an outer select.
```
-- union all
select "All Others", sum(t.ct)
from (
select count(ip_address) as ct
from records
group by ip_address
-- note: <=, and not <, is inverse of >
having count(ip_address) <= (select count(ip_address) from records) * .01
) t
```
Of course if the 'total' and 'found' are known then the 'others' is 'total' - 'found'.
The count being sequential, while an interesting observation, is irrelevant. Remember that SQL can return rows in whatever order it feels like when there is no `order by` applied to the materialized result-set (`order by` in sub-selects are not strictly guaranteed).
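A runnable sketch of the combined query (SQLite via Python's built-in `sqlite3`; the toy data and hit counts are made up so the 1% threshold works out to one hit):

```python
import sqlite3

# Significant IPs plus a single summed "All Others" row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (ip_address TEXT)")
# 'a' hits 50 times, 'b' 45 times, five one-hit IPs -> 100 rows total.
rows = [("a",)] * 50 + [("b",)] * 45 + [(ip,) for ip in "cdefg"]
conn.executemany("INSERT INTO records VALUES (?)", rows)

result = conn.execute("""
    SELECT ip_address, COUNT(*) AS hits
    FROM records
    GROUP BY ip_address
    HAVING COUNT(*) > (SELECT COUNT(*) FROM records) * .01
    UNION ALL
    SELECT 'All Others', SUM(ct) FROM (
        SELECT COUNT(*) AS ct
        FROM records
        GROUP BY ip_address
        HAVING COUNT(*) <= (SELECT COUNT(*) FROM records) * .01
    )
""").fetchall()
print(dict(result))  # the one-hit IPs c..g collapse into 'All Others': 5
```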
|
Can you use a variable to hold this information?
```
DECLARE @num INT
SET @num = (select count(*)
from records
group by ip_address
having count(*) > (select count(ip_address) from records) * .01)
```
Then do your regular query
```
select ip_address, count(ip_address)
from records
group by ip_address
having count(ip_address) > (select count(ip_address) from records) * .01
UNION
select "All Others", count(ip_address)-@num
from records
```
|
SQL Union "All Other" Row
|
[
"sql",
"sqlite",
"union"
] |
```
CREATE TABLE Permission (
permissionID INTEGER PRIMARY KEY UNIQUE,
user INTEGER,
location INTEGER
);
```
I don't want user or location to be `UNIQUE` on its own, because I can have multiple rows with the same user, or multiple rows with the same location. I just want to prevent a row in which user and location hold some pair of values from repeating any number of times.
Ex: this is okay
```
permissionID user location
--------------------------
1 1 2
2 2 2
3 2 1
```
but this is not okay:
```
permissionID user location
--------------------------
1 1 2
2 1 2
```
because a row already exists in which user = 1 and location = 2.
How can I avoid duplicates?
|
Declare a unique constraint on (user, location).
```
CREATE TABLE Permission (
permissionID integer primary key,
user integer not null,
location integer not null,
unique (user, location)
);
```
```
sqlite> insert into Permission (user, location) values (1, 2);
sqlite> insert into Permission (user, location) values (1, 2);
Error: UNIQUE constraint failed: Permission.user, Permission.location
```
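The same behaviour can be exercised from Python's built-in `sqlite3` module, where the rejected insert surfaces as an `IntegrityError`:

```python
import sqlite3

# The same constraint, exercised from Python: the second identical
# (user, location) insert raises IntegrityError.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Permission (
        permissionID INTEGER PRIMARY KEY,
        user INTEGER NOT NULL,
        location INTEGER NOT NULL,
        UNIQUE (user, location)
    )
""")
conn.execute("INSERT INTO Permission (user, location) VALUES (1, 2)")
try:
    conn.execute("INSERT INTO Permission (user, location) VALUES (1, 2)")
    duplicate_rejected = False
except sqlite3.IntegrityError as e:
    duplicate_rejected = True
    print(e)  # UNIQUE constraint failed: Permission.user, Permission.location
```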
|
What you essentially want to do is make the **combination of user and location** unique across your table.
There are several options to achieve that and I will describe these options in the order you should consider them, as the former options are more natural than the latter.
## Option 1: Have a unique constraint in your table
You can put the constraint you want directly in your table:
```
CREATE TABLE Permission (
permissionID INTEGER PRIMARY KEY UNIQUE,
user INTEGER,
location INTEGER,
unique (user, location)
);
```
This is the most natural option for expressing your requirement. The caveat is that it is not so easy to add and remove this constraint on an existing table. See Annex 2 of this post for how to add it to an existing table.
If you now try to insert a duplicate entry into the table you get the following result:
```
sqlite> insert into Permission (user, location) values (1, 2);
Error: UNIQUE constraint failed: Permission.user, Permission.location
```
## Option 2: Create a unique index
It is also possible to create a unique index:
```
CREATE TABLE Permission (
permissionID INTEGER PRIMARY KEY UNIQUE,
user INTEGER,
location INTEGER
);
CREATE UNIQUE INDEX user_location ON Permission (user,location);
```
If you try to insert a duplicate entry with this option you get the exact same error message as in the first option:
```
sqlite> insert into Permission (user, location) values (1, 2);
Error: UNIQUE constraint failed: Permission.user, Permission.location
```
You might ask about the difference between this option and the first one, [and so have many others](https://dba.stackexchange.com/questions/144/when-should-i-use-a-unique-constraint-instead-of-a-unique-index). As the [sqlite documentation](https://www.sqlite.org/lang_createtable.html) explains, internally it is probably implemented in exactly the same way. It really boils down to the fact that it is much easier to add and drop an index on a table than to add and remove a unique constraint.
## Option 3: Use a trigger
For the sake of completeness, it is also possible to use a trigger to prevent duplicates from being inserted, albeit I can hardly imagine a reason why you would prefer this option. It is the most general way to react to an `INSERT`, and it could look like this for your example:
```
CREATE TRIGGER avoid_duplicate_user_locations
BEFORE INSERT
ON Permission
when exists (select * from Permission where user = new.user and location = new.location)
BEGIN
SELECT
RAISE (ABORT,'duplicate entry');
END;
```
If you try to insert a duplicate entry with this option, you will run into the error message you specified in the trigger:
```
sqlite> insert into Permission (user, location) values (1, 2);
Error: duplicate entry
```
## Annex 1: Removing existing Duplicates
If you already have duplicates in your table, the following code will help you remove them. You will have to do this before you can apply the first or second option.
```
DELETE FROM Permission
WHERE permissionID NOT IN
(SELECT MIN(permissionID) FROM Permission GROUP BY user,location );
```
## Annex 2: Adding a unique constraint to an existing table
If you have created the schema for the table without the `UNIQUE` constraint, here is a recipe for adding it.
```
CREATE TABLE Permission2 (
permissionID INTEGER PRIMARY KEY UNIQUE,
user INTEGER,
location INTEGER,
unique (user, location)
);
INSERT INTO Permission2
SELECT *
FROM Permission;
DROP Table Permission;
ALTER TABLE Permission2
RENAME TO Permission;
```
|
SQLite: Preventing Duplicate Rows
|
[
"sql",
"database",
"sqlite"
] |
I have 3 tables listing below:
Table\_A:
```
order_number | header_id
123 | 80001
```
Table\_B
```
header_id | line_id | quantity
80001 | 10001 | 1
80001 | 10002 | 3
80001 | 10003 | 5
```
Table\_C
```
header_id | line_id | hold_price_id | released_flag
80001 | 10001 | 2001 | Y
80001 | 10002 | 2002 | Y
80001 | 10003 | 2003 | N
```
I wrote a query as shown below:
```
SELECT A.order_number, A.header_id, B.line_id, B.quantity, C.hold_price_id, C.released_flag
FROM Table_A a,
Table_B b,
Table_C c
WHERE a.header_id = b.header_id
AND c.line_id = b.line_id
AND a.order_number = '123';
```
My desire output is as shown below:
```
order_number | header_id | line_id | quantity | hold_price_id | released_flag
123 | 80001 | 10001 | 1 | 2001 | Y
123 | 80001 | 10002 | 3 | 2002 | Y
123 | 80001 | 10003 | 5 | 2003 | N
```
However the query show me the below result:
```
order_number | header_id | line_id | quantity | hold_price_id | released_flag
123 | 80001 | 10001 | 1 | 2001 | Y
123 | 80001 | 10001 | 3 | 2002 | Y
123 | 80001 | 10001 | 5 | 2003 | N
123 | 80001 | 10002 | 1 | 2001 | Y
123 | 80001 | 10002 | 3 | 2002 | Y
123 | 80001 | 10002 | 5 | 2003 | N
123 | 80001 | 10003 | 1 | 2001 | Y
123 | 80001 | 10003 | 3 | 2002 | Y
123 | 80001 | 10003 | 5 | 2003 | N
```
Is there something wrong with my query? Please advise.
Thank you!
|
You need to learn to use proper explicit `join` syntax. A simple rule: never use commas in the `from` clause. *Always* use explicit `join`s:
```
SELECT A.order_number, A.header_id, B.line_id, B.quantity,
C.hold_price_id, C.released_flag
FROM Table_A a JOIN
Table_B b
ON a.header_id = b.header_id JOIN
Table_C c
ON c.header_id = b.header_id AND c.line_id = b.line_id
WHERE a.order_number = '123';
```
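A quick check of this query against toy copies of the three tables (SQLite via Python's `sqlite3`) confirms it returns the three desired rows rather than nine:

```python
import sqlite3

# The explicit-join version against toy copies of the three tables;
# note both header_id and line_id appear in the join to Table_C.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table_A (order_number TEXT, header_id INT);
    CREATE TABLE Table_B (header_id INT, line_id INT, quantity INT);
    CREATE TABLE Table_C (header_id INT, line_id INT, hold_price_id INT, released_flag TEXT);
    INSERT INTO Table_A VALUES ('123', 80001);
    INSERT INTO Table_B VALUES (80001,10001,1),(80001,10002,3),(80001,10003,5);
    INSERT INTO Table_C VALUES (80001,10001,2001,'Y'),(80001,10002,2002,'Y'),(80001,10003,2003,'N');
""")
result = conn.execute("""
    SELECT a.order_number, a.header_id, b.line_id, b.quantity,
           c.hold_price_id, c.released_flag
    FROM Table_A a
    JOIN Table_B b ON a.header_id = b.header_id
    JOIN Table_C c ON c.header_id = b.header_id AND c.line_id = b.line_id
    WHERE a.order_number = '123'
    ORDER BY b.line_id
""").fetchall()
print(len(result))  # 3 rows, not 9
```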
|
You haven't joined on all of the common keys, so you are getting Cartesian results. You need to join a to c on header\_id, like so:
```
SELECT A.order_number, A.header_id, B.line_id, B.quantity, C.hold_price_id, C.released_flag
FROM Table_A a,
Table_B b,
Table_C c
WHERE a.header_id = b.header_id
AND c.line_id = b.line_id
AND a.header_id = c.header_id
AND a.order_number = '123';
```
|
Joining 3 tables Oracle SQL
|
[
"sql",
"oracle",
"join"
] |
I want to get the total current sales for the month by category. I also want to display every `category_name` (including cars) even when no `item_id` of that `category_id` is present in my `sales_tbl`, showing $0.00 sales in that case, like..
```
category_name | subTotalSales_ofMonth
---------------------------------------
appliances | 33 000
cars | 0
IT | 150 000
```
Running this
```
SELECT
c.category_name,
SUM(amount) AS subTotalSales_ofMonth
FROM sales_tbl sa
INNER JOIN item_out_tbl io
ON io.item_id=sa.item_id
INNER JOIN category_tbl c
ON io.category_id=c.category_id
WHERE MONTH(sa.date_sold) >= $month
GROUP BY io.category_id
```
gives me
```
category_name | subTotalSales_ofMonth
---------------------------------------
appliances | 33 000
IT | 150 000
```
|
You need to use an outer join. Since all your categories are stored in the `category_tbl` table, that table should be on the outer side of the join. One way to make it so is to change the order of your joins and use `LEFT JOIN` in both cases. You will also need to move the `WHERE` condition to one of the `ON` clauses:
```
SELECT
c.category_name,
SUM(amount) AS subTotalSales_ofMonth
FROM
category_tbl AS c
LEFT JOIN item_out_tbl AS io ON io.category_id = c.category_id
LEFT JOIN sales_tbl AS sa ON io.item_id = sa.item_id
AND MONTH(sa.date_sold) >= $month
GROUP BY
c.category_id
;
```
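Here is a runnable sketch of the idea (SQLite via Python's `sqlite3`; SQLite has no `MONTH()`, so `strftime('%m', ...)` stands in, and `COALESCE` is added so the empty category shows 0 rather than NULL — the sample rows are made up):

```python
import sqlite3

# With the month filter in the ON clause, categories without sales survive.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category_tbl (category_id INT, category_name TEXT);
    CREATE TABLE item_out_tbl (item_id INT, category_id INT);
    CREATE TABLE sales_tbl (item_id INT, amount INT, date_sold TEXT);
    INSERT INTO category_tbl VALUES (1,'appliances'),(2,'cars'),(3,'IT');
    INSERT INTO item_out_tbl VALUES (10,1),(30,3);
    INSERT INTO sales_tbl VALUES (10,33000,'2015-03-05'),(30,150000,'2015-03-10');
""")
result = conn.execute("""
    SELECT c.category_name, COALESCE(SUM(sa.amount), 0) AS subTotalSales_ofMonth
    FROM category_tbl c
    LEFT JOIN item_out_tbl io ON io.category_id = c.category_id
    LEFT JOIN sales_tbl sa ON io.item_id = sa.item_id
        AND CAST(strftime('%m', sa.date_sold) AS INT) >= 3
    GROUP BY c.category_id
""").fetchall()
print(result)  # 'cars' appears with 0 instead of vanishing
```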
|
Full outer join will give you your desired result (although it will also grab NULLs from category\_name):
```
SELECT c.category_name, SUM(amount) AS subTotalSales_ofMonth
FROM sales_tbl sa
FULL OUTER JOIN item_out_tbl io ON io.item_id=sa.item_id
FULL OUTER JOIN category_tbl c ON io.category_id=c.category_id
WHERE MONTH(sa.date_sold) >= $month
GROUP BY io.category_id
```
Take a look at [this](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins) for a guide on all the joins.
|
SELECT value non-existing on the first JOIN
|
[
"mysql",
"sql"
] |
What I'm trying to do is create a table that will keep track of users who report a comment on a website. Right now, I have a table that looks something like this:
```
id | num_reports | users
-----------------------------------
12345 1
12489 4
```
For this table, I'd like id to be unique and num\_reports to keep incrementing starting at 1. But for users, I'm getting confused, because I'd like to keep a record of the user\_ids who created a report and I'm unsure how to store multiple user\_ids.
I thought of doing something like
```
id | user_id
---------------
123 567
123 689
```
and in this case, you would just count the number of rows with id being duplicated and user\_id being unique, but this just seemed inefficient.
I've been looking around, and it looks like the correct way would be creating another table, but how does that allow me to store multiple user\_ids?
|
That's the right way to do it. Here is what you should have:
```
USERS COMMENTS
+---------+------+ +------------+---------+------------+---------------------+
| id_user | name | | id_comment | id_user | id_article | date |
+---------+------+ +------------+---------+------------+---------------------+
| 171 | Joe | | 245 | 245 | 24 | 2015-03-22 10:12:00 |
| 180 | Jack | | 1245 | 180 | 68 | 2015-03-23 23:01:19 |
| ... | ... | | ... | ... | ... | ... |
+---------+------+ +------------+---------+------------+---------------------+
COMMENT_REPORTS
+-----------+------------+---------+---------------------+
| id_report | id_comment | id_user | date |
+-----------+------------+---------+---------------------+
| 1 | 245 | 171 | 2015-03-24 16:11:15 |
| 2 | 654 | 180 | 2015-03-24 18:13:42 |
| 3 | 1245 | 180 | 2015-03-24 18:34:01 |
| 4 | 1245 | 456 | 2015-03-25 09:58:10 |
| ... | ... | ... | ... |
+-----------+------------+---------+---------------------+
```
You then will be able to get:
```
# Every report made by a user
SELECT *
FROM comment_reports
WHERE user_id = 180
# Every report related to a comment
SELECT *
FROM comment_reports
WHERE comment_id = 1245
# Every report made today
SELECT *
FROM comment_reports
WHERE date >= CURDATE()
# The number of reports related to a user's comments
SELECT c.id_user AS User, COUNT(cr.id_report) AS Reported
FROM comment_reports cr
JOIN comments c ON (cr.id_comment = c.id_comment)
WHERE c.id_user = 180
GROUP BY c.id_user
```
|
Are you building a data warehouse? Normally the quantity of reports for a website is not saved; it is calculated on the fly by taking COUNT(\*) by website\_id from the table where the reports are saved. There you can save the user who made each report, and then you can take the total of reports, the total of reports by user, etc.
However, if you have a solution like that, then you have no option other than to create a separate link table for storing report<-->user links.
|
MySQL- List inside of a column
|
[
"mysql",
"sql",
"database"
] |
Suppose I have a table containing a month's transaction data with `transaction_time` stored in a `DATETIME` field.
I want to get all the transactions that occurred between `12:00:00` and `13:00:00`, irrespective of the day. `WHERE transaction_time BETWEEN x AND y` would have to be date-specific, but I need that same time period of all dates.
How can I filter for such a range in a MySQL query?
|
You can filter on the result of applying MySQL's [`HOUR()`](http://dev.mysql.com/doc/en/date-and-time-functions.html#function_hour) function to your `DATETIME` value:
```
WHERE HOUR(transaction_time) = 12
```
If you need to filter across more exact time ranges, you could convert the times to seconds as follows:
```
WHERE TIME_TO_SEC(TIME(transaction_time)) BETWEEN TIME_TO_SEC('12:00:00')
AND TIME_TO_SEC('13:00:00')
```
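A sketch of the `HOUR()` approach in SQLite (via Python's `sqlite3`), where `strftime('%H', ...)` stands in for `HOUR()`; the sample rows are made up:

```python
import sqlite3

# Extract the hour from a DATETIME-style string and filter on it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (transaction_time TEXT)")
conn.executemany("INSERT INTO transactions VALUES (?)", [
    ("2015-03-01 12:15:00",),
    ("2015-03-14 12:59:59",),
    ("2015-03-20 13:00:01",),  # outside the noon hour
])
result = conn.execute("""
    SELECT transaction_time
    FROM transactions
    WHERE CAST(strftime('%H', transaction_time) AS INT) = 12
""").fetchall()
print(len(result))  # 2
```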
|
You have two time constraints. The first is to restrict the dates to a particular month. Assume we get the first day of the month as a date parameter:
```
where transaction_time >= :searchMonth
and transaction_time < Date_Add( :searchMonth, interval 1 month )
```
Now we're only looking at rows from that month. Now limit it to the hour specified. Assume we get the hour as an integer parameter:
```
and extract( hour from transaction_time ) between :hr and (:hr + 1)
```
Now that final part is based on your own code. Let me say that when the requirements read "during a particular hour of the day" and the request reads "during the noon hour" then I am wont to write it like this:
```
and extract( hour from transaction_time ) >= :hr
and extract( hour from transaction_time ) < (:hr + 1)
```
because hour 13 (1PM) is the first click of the next hour. So if there is a transaction time at exactly 13:00:00 and you use `between`, then it will show up when looking at the noon hour and also when looking at the 1PM hour. That is NOT generally a desired result. You may want to verify that with your analyst.
So the complete filter, the way I would write it, is this:
```
where transaction_time >= :searchMonth
and transaction_time < Date_Add( :searchMonth, interval 1 month )
and extract( hour from transaction_time ) >= :hr
and extract( hour from transaction_time ) < (:hr + 1)
```
|
Getting data of everyday's same time period
|
[
"mysql",
"sql"
] |
I have some data that has to be measured which is not in any table. I cannot insert it into a table, nor can I create any table to hold it. So I used `dual`, as in the following, to get such a table, which I then join with other tables.
```
with movie_genre as
(
select '10' as "id", 'action' as "genre" from dual
union select '20' as "id", 'horror' as "genre" from dual
union select '30' as "id", 'comedy' as "genre" from dual
union select '40' as "id", 'adventure' as "genre" from dual
union select '50' as "id", 'drama' as "genre" from dual
union select '60' as "id", 'mystery' as "genre" from dual
union select '70' as "id", 'musical' as "genre" from dual
)
select * from movie_genre
;
```
So that I get the result -
```
id genre
10 action
20 horror
30 comedy
40 adventure
50 drama
60 mystery
70 musical
```
My question is, is there any better way to do this? Any suggestion will be a life saver.
An example -
Lets say we have a table -
```
create table movies (
id number,
name varchar2(50),
genre_id number
);
insert into movies values (1, 'the hulk', 10);
insert into movies values (2, 'dumb and dumber', 30);
insert into movies values (3, 'frozen', 70);
```
And we need a result like this -
```
name genre is_in_genre
the hulk action yes
the hulk horror no
the hulk comedy no
the hulk adventure no
the hulk drama no
the hulk mystery no
the hulk musical no
dumb and dumber action no
dumb and dumber horror no
dumb and dumber comedy yes
dumb and dumber adventure no
dumb and dumber drama no
dumb and dumber mystery no
dumb and dumber musical no
frozen action no
frozen horror no
frozen comedy no
frozen adventure no
frozen drama no
frozen mystery no
frozen musical yes
```
Here, we DO NOT have any movie\_genre table.
|
You can pass the genres as a single string in the order you want and use a regular expression to generate the movie\_genre table. The [sql fiddle is here](http://sqlfiddle.com/#!4/9eecb/1736):
```
with movie_genre as
(
select level * 10 as id, regexp_substr(genre,'[^,]+',1,level) as genre
from
(
select ('action,horror,comedy,adventure,drama,mystery,musical')
as genre from dual
)
connect by level <=REGEXP_COUNT(genre,'[^,]+')
)
select * from movie_genre;
```
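The `REGEXP_SUBSTR`/`CONNECT BY LEVEL` trick is Oracle-specific, but the row generation it performs is easy to sketch in plain Python (split on commas, derive the id as position × 10):

```python
# The Oracle query splits a comma-separated string into rows and numbers
# them with level * 10; the same derivation in plain Python:
genres = 'action,horror,comedy,adventure,drama,mystery,musical'
movie_genre = [((i + 1) * 10, g) for i, g in enumerate(genres.split(','))]
print(movie_genre[0])   # (10, 'action')
print(movie_genre[-1])  # (70, 'musical')
```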
|
If you need to get the full list, you want a `cross join`, with some additional logic:
```
with movie_genre as (
select '10' as "id", 'action' as "genre" from dual union all
select '20' as "id", 'horror' as "genre" from dual union all
select '30' as "id", 'comedy' as "genre" from dual union all
select '40' as "id", 'adventure' as "genre" from dual union all
select '50' as "id", 'drama' as "genre" from dual union all
select '60' as "id", 'mystery' as "genre" from dual union all
select '70' as "id", 'musical' as "genre" from dual
)
select m.name, mg.genre,
(case when m.genre_id = mg.id then 'yes' else 'no' end) as IsInGenre
from movies m cross join
     movie_genre mg;
```
|
How to create a temporary / dynamic / virtual table when a SQL runs in Oracle?
|
[
"sql",
"database",
"oracle"
] |
So the sql code looks like this:
```
UPDATE
tbl1
SET
LastModifiedDate = GETDATE(),
col1 = 'Closed',
LastModifiedBy = 'usr'
WHERE
col2 = 'xxx'
AND
col1 NOT LIKE '%Closed%'
AND
DATEDIFF(day, DateTimeOfInsert, GETDATE()) > 30
AND
col3 NOT IN
(SELECT
col3
FROM
tbl1
WHERE
col2 = 'xxx'
AND
DATEDIFF(day, DateTimeOfInsert, GETDATE()) < 30)
```
Basically I want to close all records where col2 is 'xxx' which are not yet closed, are older than 30 days, and have no similar record within the last 30 days.
This is what I get:
> Subquery returned more than 1 value. This is not permitted when the
> subquery follows =, !=, <, <= , >, >= or when the subquery is used as
> an expression. The statement has been terminated.
I truly believe this trigger is causing the issue:
```
CREATE TRIGGER trg_tbl1_closeAction
ON tbl1
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON
If (SELECT col1 FROM INSERTED) NOT LIKE '%Close%'
Begin
Return
End
INSERT INTO tbl2
(tbl_1_ID, col4, usr)
select i.ID, '10', tu.id
from inserted i
LEFT OUTER JOIN tbl3 tu ON tu.usr=i.LastModifiedBy;
END
```
|
The trigger you've shown is indeed broken. Something like this:
```
CREATE TRIGGER trg_tbl1_closeAction
ON tbl1
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON
INSERT INTO tbl2
(tbl_1_ID, col4, usr)
select i.ID, '10', tu.id
from inserted i
LEFT OUTER JOIN tbl3 tu ON tu.usr=i.LastModifiedBy
WHERE i.col1 LIKE '%Close%';
END
```
Is probably what was intended (since `inserted` can contain multiple rows, *some* of which may be like `Close` and others not - so we move the filter down into a `WHERE` clause).
|
Datediff is a subquery. In your case, it is probably returning more than one value (more than one record). Moreover, it is probably returning one value for each day or DateTimeOfInsert you have (perhaps both, I am unfamiliar with your schema, of course).
As the error states, you cannot use '<' on a subquery that returned more than one value.
|
I can't update multiple records in SQL
|
[
"sql",
"sql-server"
] |
I have a table with the following schema :
```
+---------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| system_one_id | int(11) | NO | MUL | NULL | |
| system_two_id | int(11) | NO | MUL | NULL | |
| type | smallint(6) | NO | | NULL | |
+---------------+-------------+------+-----+---------+----------------+
```
I want to delete duplicates, where "duplicate" is defined as either:
1. matching values for both `system_one_id` and `system_two_id` between two rows, or
2. "cross matched" values, ie `row1.system_one_id = row2.system_two_id` and `row1.system_two_id = row2.system_one_id`
Is there a way to delete both kinds of duplicates in one query?
|
MySQL supports multi-table deletes, so a straightforward join can be used:
```
delete t1
from mytable t1
join mytable t2 on t1.id > t2.id
and ((t1.system_one_id = t2.system_one_id
and t1.system_two_id = t2.system_two_id)
or (t1.system_one_id = t2.system_two_id
and t1.system_two_id = t2.system_one_id))
```
The join condition `t1.id > t2.id` prevents rows joining to themselves *and* selects the *later* added row of a duplicate pair to be the one deleted.
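SQLite has no multi-table `DELETE`, but the same rule can be sketched there with a correlated `EXISTS` (table name from the answer, sample rows made up):

```python
import sqlite3

# Delete the higher-id member of each exact-match or cross-matched pair.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, system_one_id INT, system_two_id INT)")
conn.executemany("INSERT INTO mytable VALUES (?,?,?)", [
    (1, 5, 7),
    (2, 5, 7),   # exact duplicate of id 1
    (3, 7, 5),   # cross-matched duplicate of id 1
    (4, 8, 9),   # unique
])
conn.execute("""
    DELETE FROM mytable
    WHERE EXISTS (
        SELECT 1 FROM mytable t2
        WHERE mytable.id > t2.id
          AND ((mytable.system_one_id = t2.system_one_id AND mytable.system_two_id = t2.system_two_id)
            OR (mytable.system_one_id = t2.system_two_id AND mytable.system_two_id = t2.system_one_id))
    )
""")
remaining = [r[0] for r in conn.execute("SELECT id FROM mytable ORDER BY id")]
print(remaining)  # [1, 4]
```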
---
FYI, in postgres, similar functionality exists, but with different syntax:
```
delete mytable t1
using mytable t2
where t1.id > t2.id
and ((t1.system_one_id = t2.system_one_id
and t1.system_two_id = t2.system_two_id)
or (t1.system_one_id = t2.system_two_id
and t1.system_two_id = t2.system_one_id))
```
|
Here is a statement (hopefully) selecting all ids of duplicate records; you only need to wrap it with a delete command (that's your part). ;-)
```
select A.ID from MYTABLE A
left join MYTABLE B on
(
(A.SYSTEM_ONE_ID = B.SYSTEM_ONE_ID and A.SYSTEM_TWO_ID = B.SYSTEM_TWO_ID)
or
(A.SYSTEM_ONE_ID = B.SYSTEM_TWO_ID AND A.SYSTEM_TWO_ID = B.SYSTEM_ONE_ID)
)
where B.ID is not null and A.ID <> B.ID;
```
|
Delete duplicates from two columns
|
[
"mysql",
"sql"
] |
I have a table like so
```
Id | Type | Value
--------------------
0 | Big | 2
1 | Big | 3
2 | Small | 3
3 | Small | 3
```
I would like to get a table like this
```
Type | Last Value
--------------------
Small | 3
Big | 3
```
How can I do this? I understand there is a SQL Server function called **LAST\_VALUE(...) OVER (...)** but I can't get it to work with `GROUP BY`.
I've also tried using `SELECT MAX(ID)` & `SELECT TOP 1..` but this seems a bit inefficient since there would be a subquery for each value. The queries take too long when the table has a few million rows in it.
Is there a way to quickly get the last value for these, perhaps using `LAST_VALUE`?
|
The most efficient method might be `not exists`, which uses an anti-join for the underlying operator:
```
select type, value
from likeso l
where not exists (select 1 from likeso l2 where l2.type = l.type and l2.id > l.id)
```
For performance, you want an index on `likeso(type, id)`.
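A runnable sketch of the anti-join (SQLite via Python's `sqlite3`, with the sample rows from the question; the table name `likeso` is from the answer):

```python
import sqlite3

# The NOT EXISTS anti-join, run against the sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE likeso (id INT, type TEXT, value INT)")
conn.executemany("INSERT INTO likeso VALUES (?,?,?)", [
    (0, 'Big', 2), (1, 'Big', 3), (2, 'Small', 3), (3, 'Small', 3),
])
result = conn.execute("""
    SELECT type, value
    FROM likeso l
    WHERE NOT EXISTS (SELECT 1 FROM likeso l2 WHERE l2.type = l.type AND l2.id > l.id)
    ORDER BY type
""").fetchall()
print(result)  # [('Big', 3), ('Small', 3)]
```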
|
You can do it using `ROW_NUMBER()`:
```
select
    type,
    value
from
(
    select
        type,
        value,
        row_number() over (partition by type order by id desc) as RN
    from likeso
) TMP
where RN = 1
```
Can't test this now since SQL Fiddle doesn't seem to work, but hopefully that's ok.
|
How to efficiently get a value from the last row in bulk on SQL Server
|
[
"",
"sql",
"sql-server",
"aggregate-functions",
""
] |
Can someone suggest a good solution for this problem?
I have 2 binary strings (1010101...) of 1024 bits in length.
I want to AND the two together and check whether the result is greater than 0 or not.
Currently I am converting the string to hex(256) and varbinary(128).
Once I have the two varbinary values, I convert them 8 bytes at a time to BIGINT and AND the two BIGINTs.
Can anyone suggest a better approach in SQL Server 2012?
Thanks,
Bala
|
After a long discussion I have finally figured out what input data you have. You have a varbinary(128) that is constructed from a binary string (like '1000010010101... ') that is 1024 characters long. SQL Server does not provide a ready function that does such a conversion, so I have built one to allow testing. The following function does the conversion:
```
CREATE FUNCTION dbo.binStringToBinary(@inputString VARCHAR(1024)) RETURNS VARBINARY(128) AS
BEGIN
DECLARE @inputBinary VARBINARY(128) = convert(varbinary, '', 2)
DECLARE @octet int = 1
DECLARE @len int
SET @len = Len(@inputString)
while @octet < @len
BEGIN
DECLARE @i int = 0
DECLARE @Output int = 0
WHILE(@i < 7) BEGIN
SET @Output = @Output + POWER(CAST(SUBSTRING(@inputString, @octet + @i, 1) AS int) * 2, 7 - @i)
SET @i = @i + 1
END
SET @Output = @Output + CAST(SUBSTRING(@inputString, @octet + @i, 1) AS int)
select @inputBinary = @inputBinary + convert(varbinary(1), @Output)
-- PRINT substring(@inputString, @octet, 8) + ' ' + STR(@Output, 3, 0) + ' ' + convert(varchar(1024), @inputBinary, 2)
SET @octet = @octet + 8
END
RETURN @inputBinary
END
```
I then have written a function that checks for a bit using the varbinary(128) as an input:
```
CREATE FUNCTION dbo.[DoBitsMatchFromBinary](@bitToCheck INT,@inputBinary VARBINARY(1024))
RETURNS BIT
AS
BEGIN
IF @bitToCheck < 1 OR @bitToCheck > 1024
RETURN 0
DECLARE @byte int = (@bitToCheck - 1) / 8
DECLARE @bit int = @bitToCheck - @byte * 8
DECLARE @bytemask int = POWER(2, 8-@bit)
SET @byte = @byte + 1
RETURN CASE WHEN CONVERT(int, CONVERT(binary(1), SUBSTRING(@inputBinary, @byte, 1), 2)) & @bytemask = @bytemask THEN 1 ELSE 0 END
END
```
As a bonus, I have also included here a function that does the bit check from a input binary string(1024):
```
CREATE FUNCTION dbo.[DoBitsMatchFromBinString](@bitToCheck INT,@inputString VARCHAR(1024))
RETURNS BIT
AS
BEGIN
IF @bitToCheck < 1 OR @bitToCheck > 1024
RETURN 0
RETURN CASE WHEN SUBSTRING(@inputString, @bitToCheck, 1) = '1' THEN 1 ELSE 0 END
END
```
Check the [SQL fiddle](http://sqlfiddle.com/#!6/be81a/4) that demonstrates their usage.
```
DECLARE @inputBinary VARBINARY(128)
select @inputBinary = dbo.binStringToBinary('1010001000101111010111010100001101000100010111101011101010000101101000100010111101011101010000110100010001011110101110101000010110100010001011110101110101000011010001000101111010111010100001011010001000101111010111010100001101000100010111101011101010000101101000100010111101011101010000110100010001011110101110101000010110100010001011110101110101000011010001000101111010111010100001011010001000101111010111010100001101000100010111101011101010000101101000100010111101011101010000110100010001011110101110101000010110100010001011110101110101000011010001000101111010111010100001011010001000101111010111010100001101000100010111101011101010000101101000100010111101011101010000110100010001011110101110101000010110100010001011110101110101000011010001000101111010111010100001011010001000101111010111010100001101000100010111101011101010000101101000100010111101011101010000110100010001011110101110101000010110100010001011110101110101000011010001000101111010111010100001011010001000101111010111010100001101000100010111101011101010000101')
select dbo.[DoBitsMatchFromBinary](1, @inputBinary) bit1,
dbo.[DoBitsMatchFromBinary](2, @inputBinary) bit2,
dbo.[DoBitsMatchFromBinary](3, @inputBinary) bit3,
dbo.[DoBitsMatchFromBinary](4, @inputBinary) bit4,
dbo.[DoBitsMatchFromBinary](5, @inputBinary) bit5,
dbo.[DoBitsMatchFromBinary](6, @inputBinary) bit6,
dbo.[DoBitsMatchFromBinary](7, @inputBinary) bit7,
dbo.[DoBitsMatchFromBinary](8, @inputBinary) bit8,
dbo.[DoBitsMatchFromBinary](1017, @inputBinary) bit1017,
dbo.[DoBitsMatchFromBinary](1018, @inputBinary) bit1018,
dbo.[DoBitsMatchFromBinary](1019, @inputBinary) bit1019,
dbo.[DoBitsMatchFromBinary](1020, @inputBinary) bit1020,
dbo.[DoBitsMatchFromBinary](1021, @inputBinary) bit1021,
dbo.[DoBitsMatchFromBinary](1022, @inputBinary) bit1022,
dbo.[DoBitsMatchFromBinary](1023, @inputBinary) bit1023,
dbo.[DoBitsMatchFromBinary](1024, @inputBinary) bit1024
| bit1 | bit2 | bit3 | bit4 | bit5 | bit6 | bit7 | bit8 | bit1017 | bit1018 | bit1019 | bit1020 | bit1021 | bit1022 | bit1023 | bit1024 |
|------|-------|------|-------|-------|-------|------|-------|---------|---------|---------|---------|---------|---------|---------|---------|
| true | false | true | false | false | false | true | false | true | false | false | false | false | true | false | true |
```
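For readers who want to verify the index arithmetic in `DoBitsMatchFromBinary`, here is the same byte/bit math sketched in Python (a hypothetical helper, not part of the answer's T-SQL):

```python
# Same index math as DoBitsMatchFromBinary: locate the byte, build an
# MSB-first mask, and test the bit.
def bit_is_set(data: bytes, bit_to_check: int) -> bool:
    """bit_to_check is 1-based, counting from the most significant bit."""
    if not 1 <= bit_to_check <= len(data) * 8:
        return False
    byte_index = (bit_to_check - 1) // 8          # which byte holds the bit
    bit_in_byte = bit_to_check - byte_index * 8   # position 1..8 inside that byte
    mask = 1 << (8 - bit_in_byte)                 # MSB-first mask
    return data[byte_index] & mask != 0

data = bytes([0b10100010])  # first octet of the answer's sample string
print([bit_is_set(data, i) for i in range(1, 9)])
# [True, False, True, False, False, False, True, False]
```

The printed pattern matches bit1..bit8 in the answer's result table above.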
|
> If your binary numbers can be converted to numeric types, this answer can help you.
**& (Bitwise AND) (Transact-SQL)**
Performs a bitwise logical AND operation between two integer values.
```
expression & expression
```
**expression**
Is any valid expression of any of the data types of the integer data type category, or the bit, or the binary or varbinary data types. expression is treated as a binary number for the bitwise operation.
> **Note:**
> In a bitwise operation, only one expression can be of either binary or varbinary data type.
...
You can cast one of the expressions to a type like `bigint` if possible.
```
declare @b1 varbinary(max) = 0x2, @b2 varbinary(max) = 0x3
print Cast(@b1 & cast(@b2 as bigint) as varbinary(max))
```
Result is:
```
0x0000000000000002
```
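The same idea can be sanity-checked outside SQL. This Python sketch ANDs two binary blobs as arbitrary-precision integers, which also shows why the 8-byte `bigint` limit forces chunking in T-SQL but not here (illustrative only):

```python
# AND two binary blobs by converting each to a big-endian integer.
a = bytes.fromhex("02")
b = bytes.fromhex("03")
result = int.from_bytes(a, "big") & int.from_bytes(b, "big")
print(hex(result), result > 0)  # 0x2 True

# Python ints have no 64-bit limit, so a full 1024-bit AND needs no chunking.
big_a = bytes(128)              # 1024 bits, all zero
big_b = bytes([0xFF] * 128)     # 1024 bits, all one
print(int.from_bytes(big_a, "big") & int.from_bytes(big_b, "big") > 0)  # False
```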
|
How can I do AND BIT OPERATOR between two varbinary fields in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have two tables, `Users` and `Company`.
I want to transfer values from the `Active` column in the `Users` table to the `Active` column in the `Company` table, where the `CompanyID` in `Users` matches the `ID` in `Company`.
This is an example table. It has many thousands of rows, and there is a 1-to-1 relationship between `Company` and `Users`:
```
Users:
CompanyID Active
458 1
685 1
58 0
Company:
ID Active
5 Null
3 Null
58 Null
685 Null
```
The final `Company` table should look something like this where the `Null` has been replaced with the value from the `Users` table.
```
Company:
ID Active
5 Null
3 Null
58 0
685 1
```
|
You can simply perform an `UPDATE` that uses a `JOIN` between the two tables like so:
```
UPDATE c
SET Active = u.Active
FROM Company c
INNER JOIN Users u ON u.CompanyId = c.ID
```
**Full working sample code:**
```
CREATE TABLE #Users
(
CompanyId INT ,
Active BIT
)
INSERT INTO #Users
( CompanyId, Active )
VALUES ( 458, 1 ),
( 685, 1 ),
( 58, 0 )
CREATE TABLE #Company
(
ID INT ,
Active BIT
)
INSERT INTO #Company
( ID, Active )
VALUES ( 5, NULL ),
( 3, NULL ),
( 58, NULL ),
( 685, NULL )
UPDATE c
SET Active = u.Active
FROM #Company c
INNER JOIN #Users u ON u.CompanyId = c.ID
SELECT * FROM #Company
DROP TABLE #Users
DROP TABLE #Company
```
You'll notice that the `UPDATE` statement in the sample code uses aliases `c` and `u` to reference the two tables.
**Caveat:**
As stated in the comments, this assumes that you only ever have a 1 to 1 relationship between `Company` and `Users`. If there is more than one user assigned to the same company, you will need to filter the `Users` to pick the one you want to use, otherwise you may get unexpected results.
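If your database does not support `UPDATE ... FROM`, a correlated subquery is a portable alternative. Here is a sketch of that variant run against sqlite3 with the question's sample data (illustrative, not the answer's T-SQL):

```python
import sqlite3

# Recreate the question's Users and Company tables in sqlite.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Users (CompanyID INTEGER, Active INTEGER)")
con.execute("CREATE TABLE Company (ID INTEGER, Active INTEGER)")
con.executemany("INSERT INTO Users VALUES (?, ?)", [(458, 1), (685, 1), (58, 0)])
con.executemany("INSERT INTO Company VALUES (?, ?)",
                [(5, None), (3, None), (58, None), (685, None)])

# Correlated-subquery update: only rows with a matching user are touched.
con.execute("""
    UPDATE Company
    SET Active = (SELECT u.Active FROM Users u WHERE u.CompanyID = Company.ID)
    WHERE EXISTS (SELECT 1 FROM Users u WHERE u.CompanyID = Company.ID)
""")
result = con.execute("SELECT ID, Active FROM Company ORDER BY ID").fetchall()
print(result)  # [(3, None), (5, None), (58, 0), (685, 1)]
```

The `WHERE EXISTS` guard matters: without it, companies with no matching user would have `Active` overwritten with NULL.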
|
That should do the trick for you.
```
DECLARE @Users TABLE (CompanyID INT, Active BIT);
DECLARE @Companies TABLE (CompanyID INT, Active BIT);
INSERT INTO @Users (CompanyID, Active)
VALUES (458, 1), (685, 1), (58, 0)
INSERT INTO @Companies (CompanyID)
VALUES (5),(3),(58),(685)
SELECT C.CompanyID, U.Active
FROM @Companies AS C
OUTER APPLY (
SELECT TOP (1) U.Active
FROM @Users AS U
WHERE U.CompanyID = C.CompanyID
ORDER BY U.Active DESC
) AS U(Active)
```
**Result:**
```
CompanyID Active
------------------
5 NULL
3 NULL
58 0
685 1
```
|
Update a table with values from a related table
|
[
"",
"sql",
"sql-server",
"conditional-statements",
"transfer",
""
] |
I have a SQL Server database with a table containing a very large number of records. It used to work fine, but now my SQL statement takes a long time to execute.
Sometimes it causes the database to use too much CPU.
This is the Query for the table.
```
CREATE TABLE [dbo].[tblPAnswer1](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[AttrID] [int] NULL,
[Kidato] [int] NULL,
[Wav] [int] NULL,
[Was] [int] NULL,
[ShuleID] [int] NULL,
[Mwaka] [int] NULL,
[Swali] [float] NULL,
[Wilaya] [int] NULL,
CONSTRAINT [PK_tblPAnswer1] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```
And the following down is the sql stored procedure for the statement.
```
ALTER PROC [dbo].[uspGetPAnswer1](@ShuleID int, @Mwaka int, @Swali float, @Wilaya int)
as
SELECT ID,
AttrID,
Kidato,
Wav,
Was,
ShuleID,
Mwaka,
Swali,
Wilaya
FROM dbo.tblPAnswer1
WHERE [ShuleID] = @ShuleID
AND [Mwaka] = @Mwaka
AND [Swali] = @Swali
AND Wilaya = @Wilaya
```
What is wrong with my SQL statement? I need help.
|
Add an Index on the `ID` column and include `ShuleID`, `Mwaka`, `Swali` and `Wilaya` columns. That should help improve the speed of the query.
```
CREATE NONCLUSTERED INDEX IX_ID_ShuleID_Mwaka_Swali_Wilaya
ON tblPAnswer1 (ID)
INCLUDE (ShuleID, Mwaka, Swali, Wilaya);
```
|
Just add an index on ShuleID, Mwaka, Swali and Wilaya columns. The order of columns in the index should depend on distribution of data (the columns with most diverse values in it should be the first in the index, and so on).
And if you need it super-fast, also include all the remaining columns used in the query, to have a covering index for this particular query.
EDIT: Probably should move the float col (Swali) from indexed to included columns.
|
SQL Statement take long time to execute
|
[
"",
"sql",
"sql-server",
""
] |
My sql string is off just a little bit and I need help to get it exactly right. I have tried a number of strings with no luck. [I've tried most of the examples in this thread, with no luck.](https://stackoverflow.com/questions/16550703/sql-get-the-last-date-time-record "Stackoverflow thread.") Concerning the sql, here is a screenshot of what it looks like with just the results:

I want it to only display the value for the most recent DateTime for PRecipeRunning and ERecipeRunning.
[I have setup a SQLFiddle here, with the schema and the grab everything code.](http://sqlfiddle.com/#!6/d6332/1/0)
[Edit, split my two parter into two separate questions. Here is the sql question and I will do another for the visual studio part of it.]
|
Here is a query to find the most recent date-time values by TagName
```
SELECT A.[DateTime]
,A.[TagName]
,A.[Value]
FROM [v_StringHistory] A
INNER JOIN
(SELECT MAX(B.[DateTime]) [MaxDate]
,B.[TagName]
FROM [v_StringHistory] B
GROUP BY B.[TagName]) MD
ON MD.[TagName] = A.[TagName]
AND A.[DateTime] = MD.MaxDate
WHERE A.[TagName] = 'ERecipeRunning'
OR A.[TagName] = 'PRecipeRunning'
```
|
For this particular front end, Wonderware, there are apparently some rules about how you query its tables. Luckily they have a GUI that allows you to pick and click what you want to see and then it spits out the SQL code. In this case, the code ended up being:
```
SET NOCOUNT ON
DECLARE @StartDate DateTime
DECLARE @EndDate DateTime
SET @StartDate = DateAdd(mi,-5,GetDate())
SET @EndDate = GetDate()
SET NOCOUNT OFF
SELECT * FROM (
SELECT History.TagName, DateTime = convert(nvarchar, DateTime, 21), Value, vValue, StartDateTime
FROM History
WHERE History.TagName IN ('ERecipeRunning', 'PRecipeRunning')
AND wwRetrievalMode = 'Cyclic'
AND wwCycleCount = 2
AND wwVersion = 'Latest'
AND DateTime >= @StartDate
AND DateTime <= @EndDate) temp WHERE temp.StartDateTime >= @StartDate
ORDER BY DateTime DESC
```
I had forgotten about the Query application and the SQL code that it provides. In my hunting and guessing, I hadn't even chosen the same table / view. But in the end, this is working and I am good. Thanks to everyone for their suggestions.
|
SQL Show Only the Most Recent DateTime Values
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a table `test` which looks like this:
```
+-------+-------+
| u1_id | u2_id |
+-------+-------+
| 1 | 2 |
| 3 | 1 |
| 2 | 1 |
| 2 | 3 |
+-------+-------+
```
And, `u1_id` and `u2_id` are both 'foreign keys' to another table `user`:
```
+----+-------+
| id | name |
+----+-------+
| 1 | n_foo |
| 2 | n_bar |
| 3 | n_baz |
+----+-------+
```
Not sure how to explain this, but:
* In input, I have a single user `id` which can be referenced in `u1_id` or in `u2_id`.
* I'd like to get the associated user to it as defined in table `test` using a join on table `user`.
For user `id = 1`, I should get:
```
n_bar
n_baz
n_bar
```
For user `id = 2`, I should get:
```
n_foo
n_foo
n_baz
```
This may be a common issue but I didn't find exactly how to join these two tables using:
* `u1_id` if my input user `id` is in `u2_id` column
* `u2_id` otherwise
I tried something like this but it doesn't seem to work:
```
SELECT name
FROM test
JOIN user
ON user.id = test.u1_id
WHERE test.u1_id = @guid OR
test.u2_id = @guid AND
CASE
WHEN test.u2_id = @guid
THEN test.u2_id = test.u1_id
END;
```
Any ideas how to achieve this? Or may be there is a better way to design these tables, I'm completely open to any suggestions.
|
If I correctly understood your question, I believe you need the following query:
```
SELECT t2.`name`
FROM `t2`
INNER JOIN (
SELECT IF(@uid = 1, t1.u1_id, t1.u2_id) as `id`
FROM `t1`
) as `t1`
WHERE t2.id = t1.id and t1.id != @uid;
```
I tried the following.
First, the database setup; I don't know what column types you are using, so these are just for the demo:
```
create table t1 (
u1_id int,
u2_id int
);
insert into t1 values
(1, 2),
(3, 1),
(2, 1),
(2, 3);
create table t2 (
id int,
name varchar(10)
);
insert into t2 values
( 1 , 'n_foo' ),
( 2 , 'n_bar' ),
( 3 , 'n_baz' );
```
Then Queries:
```
mysql> SELECT * FROM t1;
+-------+-------+
| u1_id | u2_id |
+-------+-------+
| 1 | 2 |
| 3 | 1 |
| 2 | 1 |
| 2 | 3 |
+-------+-------+
4 rows in set (0.00 sec)
mysql> SELECT * FROM t2;
+------+-------+
| id | name |
+------+-------+
| 1 | n_foo |
| 2 | n_bar |
| 3 | n_baz |
+------+-------+
3 rows in set (0.00 sec)
mysql> SET @uid = 1;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT @uid;
+------+
| @uid |
+------+
| 1 |
+------+
1 row in set (0.00 sec)
mysql> SELECT t2.`name`
-> FROM `t2`
-> INNER JOIN (
-> SELECT IF(@uid = 1, t1.u1_id, t1.u2_id) as `id`
-> FROM `t1`
-> ) as `t1`
-> WHERE t2.id = t1.id and t1.id != @uid;
+-------+
| name |
+-------+
| n_baz |
| n_bar |
| n_bar |
+-------+
3 rows in set (0.03 sec)
mysql> SET @uid = 2;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT @uid;
+------+
| @uid |
+------+
| 2 |
+------+
1 row in set (0.00 sec)
mysql> SELECT t2.`name`
-> FROM `t2`
-> INNER JOIN (
-> SELECT IF(@uid = 1, t1.u1_id, t1.u2_id) as `id`
-> FROM `t1`
-> ) as `t1`
-> WHERE t2.id = t1.id and t1.id != @uid;
+-------+
| name |
+-------+
| n_foo |
| n_foo |
| n_baz |
+-------+
3 rows in set (0.00 sec)
```
Btw, you can change the join conditions if this is not quite what you wanted, but as shown it gives the correct results.
Give it a try!
|
Using @GrijeshChauhan's schema...
```
SELECT * FROM
(SELECT u1_id,u2_id FROM t1
UNION ALL
SELECT u2_id,u1_id FROM t1
) x
JOIN t2
ON t2.id = x.u1_id
WHERE x.u2_id = 2;
+-------+-------+------+-------+
| u1_id | u2_id | id | name |
+-------+-------+------+-------+
| 1 | 2 | 1 | n_foo |
| 1 | 2 | 1 | n_foo |
| 3 | 2 | 3 | n_baz |
+-------+-------+------+-------+
```
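Here is the same UNION ALL trick verified with sqlite3 in Python, using the question's data and an input id of 1 (just an illustration of the swap-the-pair idea):

```python
import sqlite3

# Recreate the question's test and user tables.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (u1_id INTEGER, u2_id INTEGER)")
con.execute("CREATE TABLE user (id INTEGER, name TEXT)")
con.executemany("INSERT INTO test VALUES (?, ?)", [(1, 2), (3, 1), (2, 1), (2, 3)])
con.executemany("INSERT INTO user VALUES (?, ?)",
                [(1, "n_foo"), (2, "n_bar"), (3, "n_baz")])

# Union the pairs with their swapped copies, then filter on the input id.
rows = con.execute("""
    SELECT u.name
    FROM (SELECT u1_id, u2_id FROM test
          UNION ALL
          SELECT u2_id, u1_id FROM test) x
    JOIN user u ON u.id = x.u1_id
    WHERE x.u2_id = ?
""", (1,)).fetchall()
print(sorted(r[0] for r in rows))  # ['n_bar', 'n_bar', 'n_baz']
```

This matches the expected output for user `id = 1` in the question (n_bar, n_baz, n_bar).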
|
List all associate's names from a referred table?
|
[
"",
"mysql",
"sql",
""
] |
Hey guys, I'm working on SQL and I got an error.
First I'll show you my code:
```
Insert into HistoriqueEquipement (DateMHS,CodeEquipement,nomenclature) VALUES (2015-03-28, (SELECT CodeEquipement,nomenclature FROM equipement WHERE Numero_Serie =241));
```
I got the error #1241 - Operand should contain 1 column(s).
If I use
```
Insert into HistoriqueEquipement (DateMHS,CodeEquipement) VALUES (2015-03-28, (SELECT CodeEquipement FROM equipement WHERE Numero_Serie =241));
```
it works perfectly. So where's the problem? I want to insert multiple columns, not only one.
|
You are getting this error because the subquery returns two columns (`CodeEquipement, nomenclature`) where `VALUES` expects a single value per slot. You can use `INSERT INTO ... SELECT` like this:
```
Insert into HistoriqueEquipement (DateMHS,CodeEquipement,nomenclature)
SELECT CURDATE(), CodeEquipement, nomenclature
FROM equipement
WHERE Numero_Serie =241;
```
|
You should use `insert . . . select`, and you can use it with a constant:
```
Insert into HistoriqueEquipement (DateMHS,CodeEquipement)
SELECT '2015-03-28', CodeEquipement
FROM equipement
WHERE Numero_Serie = 241;
```
Note:
* If you want only one row, then add `limit 1`.
* Even when using values, you need single quotes around the date. `2015-03-28` evaluates to `1984` (think "subtraction"). This then gets treated as a date.
|
#1241 Operand should contain 1 column(s) in Insert into Select
|
[
"",
"mysql",
"sql",
""
] |
Is there any way to optimize the following MySQL query?
```
SELECT article.*
FROM article
WHERE channelid = 1
AND ((SELECT count(*) FROM article_cCountry WHERE ID1 = article.id AND ID2 = 1) = 1)
AND date < now()
AND released = "TRUE"
AND ( (uId = 0) OR ((SELECT count(*) FROM user WHERE ID = article.uId and released = true) = 1) )
ORDER BY cDate DESC
LIMIT 20
```
I have already tried to use INNER JOIN for the article\_cCountry check but the performance was even worse.
**EDIT**
*Structure for article\_cCountry*
```
| ID | ID1 | ID2 | (ID1 => article.id, ID2 => country)
```
*Structure for table `user`*
About 20 columns with user data
|
Try to explain your query with:
```
EXPLAIN SELECT ...
```
If you are not using any index try to put an index on the fields used in the where:
```
article.date
article.channelid
article.released
user.released
article_cCountry.id1
article_cCountry.id2
```
|
I'm not sure you need the correlated subqueries for the counts; try rewriting with joins and `HAVING`:
```
SELECT article.*, COUNT(ac.id) ac_count, COUNT(u.id) u_count
FROM article
INNER JOIN article_cCountry ac ON ac.ID1 = article.id AND ac.ID2 = 1
INNER JOIN user u ON u.ID = article.uId and u.released
WHERE
channelid = 1
AND date < now()
AND released = "TRUE"
GROUP BY article.id
HAVING ac_count = 1 AND (uId = 0 OR u_count = 1)
ORDER BY cDate DESC
LIMIT 20
```
|
Slow MySQL Query / Large DB / JOIN
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have a table with object show/click statistics.
Table has such columns: id, objectId, pageUrl, event('show' or 'click').
For example:
```
id, objectId, pageUrl, event
1, 1, /cars/, show
2, 1, /cars/, show
3, 1, /items/, show
4, 2, /cars/, show
5, 2, /items/, show
```
Can I, in a simple way, get the count for each object having a unique pageUrl?
The result for dataset must be:
```
objectId, counter
1, 2
2, 2
```
For object with id 1 there are 3 records, but pageUrl /cars/ appears two times, so counter must be only for unique urls.
|
Try this query:
```
SELECT objectId, COUNT(DISTINCT(pageUrl)) AS 'counter'
FROM Table
GROUP BY objectId
```
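A quick verification of this query against the question's data, using sqlite3 in Python (the table name `stats` is made up for the demo):

```python
import sqlite3

# Recreate the question's show/click statistics table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stats (id INTEGER, objectId INTEGER, pageUrl TEXT, event TEXT)")
con.executemany("INSERT INTO stats VALUES (?, ?, ?, ?)", [
    (1, 1, "/cars/", "show"), (2, 1, "/cars/", "show"), (3, 1, "/items/", "show"),
    (4, 2, "/cars/", "show"), (5, 2, "/items/", "show")])

# COUNT(DISTINCT pageUrl) collapses the duplicate /cars/ rows for object 1.
rows = con.execute("""
    SELECT objectId, COUNT(DISTINCT pageUrl) AS counter
    FROM stats GROUP BY objectId
""").fetchall()
print(rows)  # [(1, 2), (2, 2)]
```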
|
```
SELECT objectId, COUNT(*) AS count FROM ( SELECT objectId,pageUrl FROM table GROUP BY objectId,pageUrl ) z GROUP BY objectId
```
|
MySQL - Count only if unique
|
[
"",
"mysql",
"sql",
""
] |
Is there any function to check for continuous dates? I'm having a problem working with the issue below:
My table has a `datetime` column with the following data:
```
----------
2015-03-11
2015-03-12
2015-03-13
2015-03-16
```
Given start date as `2015-3-11` and end date as `2015-3-17`. I want the result as:
```
----------
2015-03-11
2015-03-12
2015-03-13
```
Can anyone suggest anything ?
|
I'm thinking this is somewhat a variation of **[Grouping Islands of Contiguous Dates](http://www.sqlservercentral.com/articles/T-SQL/71550/)** problem. This can be done using `ROW_NUMBER()`:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/c67a0/6/0)
```
CREATE TABLE Test(
tDate DATETIME
)
INSERT INTO Test VALUES
('20150311'), ('20150312'), ('20150313'), ('20150316');
DECLARE @startDate DATE = '20150311'
DECLARE @endDate DATE = '20150317'
;WITH Cte AS(
SELECT
*,
RN = DATEADD(DD, - (ROW_NUMBER() OVER(ORDER BY tDATE) - 1), tDate)
FROM Test
WHERE
tDate >= @startDate
AND tDate < DATEADD(DAY, 1, @endDate)
)
SELECT CAST(tDate AS DATE)
FROM CTE
WHERE RN = @startDate
```
**RESULT**
```
|------------|
| 2015-03-11 |
| 2015-03-12 |
| 2015-03-13 |
```
---
Here is the SQL Server 2005 version:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/17eaa/2/0)
```
DECLARE @startDate DATETIME
DECLARE @endDate DATETIME
SET @startDate = '20150311'
SET @endDate = '20150317'
;WITH Cte AS(
SELECT
*,
RN = DATEADD(DD, -(ROW_NUMBER() OVER(ORDER BY tDATE)-1), tDate)
FROM Test
WHERE
tDate >= @startDate
AND tDate < DATEADD(DAY, 1, @endDate)
)
SELECT CONVERT(VARCHAR(10), tDate, 121)
FROM CTE
WHERE RN = @startDate
```
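The same "date minus ROW_NUMBER" grouping trick can be sketched with sqlite3 in Python (needs SQLite >= 3.25 for window functions); dates here are ISO strings and the offset uses SQLite's date modifiers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("2015-03-11",), ("2015-03-12",), ("2015-03-13",), ("2015-03-16",)])

start, end = "2015-03-11", "2015-03-17"
# Subtracting (row_number - 1) days collapses each contiguous run onto one
# anchor date; rows in the run starting at @start keep grp == start.
rows = con.execute("""
    WITH cte AS (
        SELECT d,
               date(d, '-' || (ROW_NUMBER() OVER (ORDER BY d) - 1) || ' days') AS grp
        FROM t WHERE d BETWEEN ? AND ?
    )
    SELECT d FROM cte WHERE grp = ?
""", (start, end, start)).fetchall()
print([r[0] for r in rows])  # ['2015-03-11', '2015-03-12', '2015-03-13']
```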
|
For `MSSQL 2012`. This will return MAX continuous groups:
```
DECLARE @t TABLE(d DATE)
INSERT INTO @t VALUES
('20150311'),
('20150312'),
('20150313'),
('20150316')
;WITH
c1 AS(SELECT d, IIF(DATEDIFF(dd,LAG(d, 1, DATEADD(dd, -1, d)) OVER(ORDER BY d), d) = 1, 0, 1) AS n FROM @t),
c2 AS(SELECT d, SUM(n) OVER(ORDER BY d) AS n FROM c1)
SELECT TOP 1 WITH TIES MIN(d) AS StartDate, MAX(d) AS EndDate, COUNT(*) AS DayCount
FROM c2
GROUP BY n
ORDER BY DayCount desc
```
Output:
```
StartDate EndDate DayCount
2015-03-11 2015-03-13 3
```
For
```
('20150311'),
('20150312'),
('20150313'),
('20150316'),
('20150317'),
('20150318'),
('20150319'),
('20150320')
```
Output:
```
StartDate EndDate DayCount
2015-03-16 2015-03-20 5
```
Apply filtering in `c1 CTE`:
```
c1 AS(SELECT d, IIF(DATEDIFF(dd,LAG(d, 1, DATEADD(dd, -1, d)) OVER(ORDER BY d), d) = 1, 0, 1) AS n FROM @t WHERE d BETWEEN '20150311' AND '20150320'),
```
For `MSSQL 2008`:
```
;WITH
c1 AS(SELECT d, (SELECT MAX(d) FROM @t it WHERE it.d < ot.d) AS pd FROM @t ot),
c2 AS(SELECT d, CASE WHEN DATEDIFF(dd,ISNULL(pd, DATEADD(dd, -1, d)), d) = 1 THEN 0 ELSE 1 END AS n FROM c1),
c3 AS(SELECT d, (SELECT SUM(n) FROM c2 ci WHERE ci.d <= co.d) AS n FROM c2 co)
SELECT TOP 1 WITH TIES MIN(d) AS StartDate, MAX(d) AS EndDate, COUNT(*) AS DayCount
FROM c3
GROUP BY n
ORDER BY DayCount desc
```
|
How to Select continuous date in sql
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like this:
```
IST | FILEDATE | DATE | ...
1 | 2013-2014 | 27.03.2015 10:20:47 | ...
2 | 2013-2014 | 27.03.2015 10:20:47 | ...
3 | 2013-2014 | 27.03.2015 10:20:47 | ...
1 | 2013-2014 | 28.03.2015 11:20:47 | ...
2 | 2013-2014 | 28.03.2015 11:20:47 | ...
3 | 2013-2014 | 28.03.2015 11:20:47 | ...
1 | 2014-2015 | 29.03.2015 12:20:47 | ...
2 | 2014-2015 | 29.03.2015 12:20:47 | ...
3 | 2014-2015 | 29.03.2015 12:20:47 | ...
...
```
I need to select the newest entry (by DATE value) for each IST, like this:
```
IST | FILEDATE | DATE | ...
1   | 2014-2015 | 29.03.2015 12:20:47 | ...
2   | 2014-2015 | 29.03.2015 12:20:47 | ...
3   | 2014-2015 | 29.03.2015 12:20:47 | ...
```
I tried `order by` and `rownum=1`, but that works only for a single IST.
How can I do that? Thank you.
|
That's a typical scenario where analytical functions (aka windowing functions) are really helpful:
```
with v_data(ist, filedate, entry_date) as (
select 1, '2013-2014', to_date('27.03.2015 10:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 2, '2013-2014', to_date('27.03.2015 10:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 3, '2013-2014', to_date('27.03.2015 10:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 1, '2013-2014', to_date('28.03.2015 11:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 2, '2013-2014', to_date('28.03.2015 11:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 3, '2013-2014', to_date('28.03.2015 11:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 1, '2014-2015', to_date('29.03.2015 12:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 2, '2014-2015', to_date('29.03.2015 12:20:47','DD.MM.YYYY hh24:mi:ss') from dual union all
select 3, '2014-2015', to_date('29.03.2015 12:20:47','DD.MM.YYYY hh24:mi:ss') from dual)
select * from (
select
v1.*,
row_number() over (partition by ist order by entry_date desc) as rn
from v_data v1
)
where rn=1
```
This solution
* computes an ordering per group using the `ROW_NUMBER` analytical function
* removes everything but the newest entry per group with `WHERE rn = 1`
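Here is the same pattern verified with sqlite3 in Python (a reduced version of the sample data; apart from the DUAL-based test rows, the SQL is essentially the same as the Oracle query above):

```python
import sqlite3

# A reduced version of the question's data: ist, filedate, entry timestamp.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE v (ist INTEGER, filedate TEXT, entry_date TEXT)")
con.executemany("INSERT INTO v VALUES (?, ?, ?)", [
    (1, "2013-2014", "2015-03-27 10:20:47"), (1, "2013-2014", "2015-03-28 11:20:47"),
    (1, "2014-2015", "2015-03-29 12:20:47"),
    (2, "2013-2014", "2015-03-27 10:20:47"), (2, "2014-2015", "2015-03-29 12:20:47")])

# ROW_NUMBER() per ist, newest first; keep only rn = 1.
rows = con.execute("""
    SELECT ist, filedate FROM (
        SELECT v.*, ROW_NUMBER() OVER (PARTITION BY ist ORDER BY entry_date DESC) AS rn
        FROM v)
    WHERE rn = 1 ORDER BY ist
""").fetchall()
print(rows)  # [(1, '2014-2015'), (2, '2014-2015')]
```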
|
You can first group the result:
```
select ist, max(date) date
from table
group
by ist
```
Then you can combine that result with a `select` to get all matching lines:
```
select master.*
from table master
join
( select ist, max(date) date
from table
group
by ist
) filter
on master.ist = filter.ist
and master.date = filter.date
```
|
How can I select only newest entries?
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
How to truncate any table using its synonym in oracle?
```
-- in Server_A
Create Table table_a ( col int);
-- in server_B
CREATE SYNONYM syn_table_a FOR table_a@dblink_server_a;
--insert into
INSERT INTO syn_table_a values (1);
--Truncate
-- How do I truncate the table using only the synonym?
```
|
**A truncate statement cannot be used on a synonym.**
> Synonyms cannot be used in a drop table, drop view or truncate
> table/cluster statements. If this is tried, it results in a ORA-00942:
> table or view does not exist
For example,
```
SQL> CREATE TABLE t(col NUMBER);
Table created.
SQL>
SQL> CREATE SYNONYM t_syn FOR t;
Synonym created.
SQL>
SQL> TRUNCATE TABLE t_syn;
TRUNCATE TABLE t_syn
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL>
```
|
You could use dynamic SQL to do it, e.g.:
```
declare
d varchar2(1000);
begin
select 'TRUNCATE TABLE "' || table_owner || '"."' || table_name || '"'
into d
from all_synonyms
where synonym_name = 'MYSYNONYM';
execute immediate d;
end;
```
If the table is accessed via a database link, this will not work. In that case, you could create a procedure on the remote instance that does the truncate, then call that procedure across the database link, e.g.
```
begin
truncate_my_table@dblinkname;
end;
```
|
How to truncate any table using its synonym in oracle?
|
[
"",
"sql",
"oracle",
"truncate",
"synonym",
""
] |
Early last year I was on a project using Oracle DB and was introduced to a new querying format where you could query the results of the previous query. We were only helping on the project for a few weeks, so I don't recall exactly how things were written, but it was something like the outline below. Note that I believe the whole thing was written in a single stored procedure. Forgive the rough formatting; I just recall finding it awesome to be able to query the results of a query without nesting all the selects in one statement.
e.g. SP: X
```
select firstName from users where active = true;
select authors from books where authorFirstName in (previous select);
```
Any guidance on what this style of querying is called would be greatly appreciated; it would help me research it, learn more, and follow the format.
|
The form you mentioned is a subquery,
which may be rewritten with `join`s (depending on the query and subqueries):
```
select firstName from users where active = true;
select authors from books where authorFirstName in (previous select);
```
is equal to:
```
select books.authors
from books
join users on books.authorFirstName =users.firstName
where users.active = true;
```
or equal to another subquery:
```
select authors
from books
where exists (select firstName
from users
where
books.authorFirstName =users.firstName
and active = true);
```
you can also use `with` statement:
```
with cte as (
select firstName from users where active = true)
select authors from books where authorFirstName in (select firstName from cte);
```
and other forms ....
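To see that the subquery form and the `with` form really agree, here is a small sqlite3 check with hypothetical `users`/`books` data:

```python
import sqlite3

# Hypothetical users/books tables matching the column names in the answer.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (firstName TEXT, active INTEGER)")
con.execute("CREATE TABLE books (authors TEXT, authorFirstName TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)", [("Ann", 1), ("Bob", 0)])
con.executemany("INSERT INTO books VALUES (?, ?)",
                [("Ann Smith", "Ann"), ("Bob Jones", "Bob")])

# Plain IN-subquery form.
subquery = con.execute("""
    SELECT authors FROM books
    WHERE authorFirstName IN (SELECT firstName FROM users WHERE active = 1)
""").fetchall()

# Equivalent WITH (CTE) form.
cte = con.execute("""
    WITH cte AS (SELECT firstName FROM users WHERE active = 1)
    SELECT authors FROM books WHERE authorFirstName IN (SELECT firstName FROM cte)
""").fetchall()
print(subquery, subquery == cte)  # [('Ann Smith',)] True
```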
|
You can use the SQL with clause to give a sub-query a name and then use that name. Example here:
[SQL WITH clause example](https://stackoverflow.com/questions/12552288/sql-with-clause-example)
|
What is the proper term or style called for a query of query?
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to join three tables to pull back a list of distinct blog posts with associated assets (images etc) but I keep coming a cropper. The three tables are `tblBlog`, `tblAssetLink` and `tblAssets`. The Blog table holds the blog posts, the Assets table holds the assets and the AssetLink table links the two together.
`tblBlog.BID` is the PK in blog, `tblAssets.AID` is the PK in Assets.
This query works but pulls back multiple rows for the same post. I've tried `SELECT DISTINCT`, `GROUP BY` and even `UNION`, but as my SQL knowledge is pretty poor, they all error.
I'd like to also discount any assets that are marked as deleted (tblAssets.Deleted = true) but not hide the associated Blog post (if that's not marked as deleted). If anyone can help - it would be much appreciated! Thanks.
Here's my query so far....
```
SELECT dbo.tblBlog.BID,
dbo.tblBlog.DateAdded,
dbo.tblBlog.PMonthName,
dbo.tblBlog.PDay,
dbo.tblBlog.Header,
dbo.tblBlog.AddedBy,
dbo.tblBlog.PContent,
dbo.tblBlog.Category,
dbo.tblBlog.Deleted,
dbo.tblBlog.Intro,
dbo.tblBlog.Tags,
dbo.tblAssets.Name,
dbo.tblAssets.Description,
dbo.tblAssets.Location,
dbo.tblAssets.Deleted AS Expr1,
dbo.tblAssetLink.Priority
FROM dbo.tblBlog
LEFT OUTER JOIN dbo.tblAssetLink
ON dbo.tblBlog.BID = dbo.tblAssetLink.BID
LEFT OUTER JOIN dbo.tblAssets
ON dbo.tblAssetLink.AID = dbo.tblAssets.AID
WHERE ( dbo.tblBlog.Deleted = 'False' )
ORDER BY dbo.tblAssetLink.Priority, tblBlog.DateAdded DESC
```
*EDIT*
Changed the `Where` and the `order by`....
Expected output:
```
tblBlog.BID = 123
tblBlog.DateAdded = 12/04/2015
tblBlog.Header = This is a header
tblBlog.AddedBy = Persons name
tblBlog.PContent = *text*
tblBlog.Category = Category name
tblBlog.Deleted = False
tblBlog.Intro = *text*
tblBlog.Tags = Tag, Tag, Tag
tblAssets.Name = some.jpg
tblAssets.Description = Asset desc
tblAssets.Location = Location name
tblAssets.Priority = True
```
|
Use `OUTER APPLY`:
```
DECLARE @b TABLE ( BID INT )
DECLARE @a TABLE ( AID INT )
DECLARE @ba TABLE
(
BID INT ,
AID INT ,
Priority INT
)
INSERT INTO @b
VALUES ( 1 ),
( 2 )
INSERT INTO @a
VALUES ( 1 ),
( 2 ),
( 3 ),
( 4 )
INSERT INTO @ba
VALUES ( 1, 1, 1 ),
( 1, 2, 2 ),
( 2, 1, 1 ),
( 2, 2, 2 )
SELECT *
FROM @b b
OUTER APPLY ( SELECT TOP 1
a.*
FROM @ba ba
JOIN @a a ON a.AID = ba.AID
WHERE ba.BID = b.BID
ORDER BY Priority
) o
```
Output:
```
BID AID
1 1
2 1
```
Something like:
```
SELECT b.BID ,
b.DateAdded ,
b.PMonthName ,
b.PDay ,
b.Header ,
b.AddedBy ,
b.PContent ,
b.Category ,
b.Deleted ,
b.Intro ,
b.Tags ,
o.Name ,
o.Description ,
o.Location ,
o.Deleted AS Expr1 ,
o.Priority
FROM dbo.tblBlog b
OUTER APPLY ( SELECT TOP 1
a.* ,
al.Priority
FROM dbo.tblAssetLink al
JOIN dbo.tblAssets a ON al.AID = a.AID
WHERE b.BID = al.BID
ORDER BY al.Priority
) o
WHERE b.Deleted = 'False'
```
|
Based on your comments
> i would like to get is just one asset per blog post (top one ordered
> by Priority)
You can change your query as follows. I suggest changing the join with `dbo.tblAssetLink` to a filtered one, which contains only one (highest-priority) link for every blog.
```
SELECT dbo.tblBlog.BID,
dbo.tblBlog.DateAdded,
dbo.tblBlog.PMonthName,
dbo.tblBlog.PDay,
dbo.tblBlog.Header,
dbo.tblBlog.AddedBy,
dbo.tblBlog.PContent,
dbo.tblBlog.Category,
dbo.tblBlog.Deleted,
dbo.tblBlog.Intro,
dbo.tblBlog.Tags,
dbo.tblAssets.Name,
dbo.tblAssets.Description,
dbo.tblAssets.Location,
dbo.tblAssets.Deleted AS Expr1,
dbo.tblAssetLink.Priority
FROM dbo.tblBlog
LEFT OUTER JOIN
(SELECT BID, AID,
ROW_NUMBER() OVER (PARTITION BY BID ORDER BY [Priority] DESC) as N
FROM dbo.tblAssetLink) AS filteredAssetLink
ON dbo.tblBlog.BID = filteredAssetLink.BID
LEFT OUTER JOIN dbo.tblAssets
ON filteredAssetLink.AID = dbo.tblAssets.AID
WHERE dbo.tblBlog.Deleted = 'False' AND filteredAssetLink.N = 1
ORDER BY tblBlog.DateAdded DESC
```
|
How to join three tables with distinct
|
[
"",
"sql",
"sql-server",
""
] |
I have a query like this
```
select * from tablename where ID in(select max(ID) from tablename )
```
I want to remove the subquery to improve performance. Please suggest how to do it.
Thank you in advance
|
You don't mention your RDBMS, but in MS-SQL:
```
SELECT TOP 1 *
FROM tablename
ORDER BY ID DESC
```
|
For Oracle:
```
SELECT *
FROM
(
    SELECT * FROM tablename ORDER BY ID DESC
)
WHERE rownum <= 1;
```
For MS-SQL :
```
SELECT TOP 1 * FROM tablename ORDER BY ID DESC
```
For MySQL:
```
select * from tablename order by ID desc limit 1
```
|
how to remove subquery from sql involving max()
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I'm trying to do a self join and subtract values of the same column.
```
--sum of all pageviews of circleid 0 of each device category - (sum of all
--pageviews of circleid <> 0 of each device category)
```
**Existing table - gadata:**
```
PageName DeviceCategory FKCircleId FKGAFilterKeyID PageViews
login desktop 0 5 10
login desktop 0 5 20
login mobile 0 5 5
login tablet 0 5 15
login desktop 1 4 2
login desktop 1 4 2
login mobile 1 4 3
login tablet 1 4 4
```
**Desired o/p:**
```
PageName DeviceCategory PageViews
login desktop 26 --(30-4)
login mobile 2 --(5-3)
login tablet 11 --(15-4)
```
But this query gives me null values:
```
PageName DeviceCategory Circle Total
Login desktop NULL NULL
Login mobile NULL NULL
Login tablet NULL NULL
```
Setup script and query:
```
CREATE TABLE gadata(PageName varchar(10),DeviceCategory
varchar(10),FKCircleId int, FKGAFilterKeyID int,PageViews int)
insert into gadata values('login','desktop',0,5,10)
insert into gadata values('login','desktop',0,5,20)
insert into gadata values('login','mobile',0,5,5)
insert into gadata values('login','tablet',0,5,15)
insert into gadata values('login','desktop',1,4,2)
insert into gadata values('login','desktop',1,4,2)
insert into gadata values('login','mobile',1,4,3)
insert into gadata values('login','tablet',1,4,4)
;WITH TBL3 AS(
SELECT gd.PageName,gd.DeviceCategory,'All' AS Circle,
SUM(gd.PageViews) AS Total1,gd.FKGAFilterKeyID
FROM gadata(NOLOCK) gd
WHERE gd.FKGAFilterKeyID =5
GROUP BY gd.PageName,gd.DeviceCategory,gd.FKGAFilterKeyID
),
TBL4 AS(
SELECT gd.PageName,gd.DeviceCategory,'Other' AS t4Circle,
ISNULL(SUM(gd.PageViews),0) AS Total2,gd.FKGAFilterKeyID
FROM gadata(NOLOCK) gd
WHERE gd.FKGAFilterKeyID <> 5
GROUP BY gd.PageName,gd.DeviceCategory,gd.FKGAFilterKeyID
)
SELECT t3.PageName,t3.DeviceCategory,t4.t4Circle AS Circle,
(t3.Total1-t4.Total2) AS Total
FROM TBL3 t3
LEFT JOIN TBL4 t4 ON t3.FKGAFilterKeyID = t4.FKGAFilterKeyID
AND t3.DeviceCategory= t4.DeviceCategory
```
When I run the subqueries in TBL3 & TBL4 individually, I get the data.
Where am I going wrong?
SQL Fiddle isn't working, otherwise I would have given a fiddle.
|
You are joining on incorrect columns. Also use `ISNULL` function in order not to get `NULL`s when no matching rows will be found in `TBL4`:
```
SELECT t3.PageName,t4.t4Circle AS Circle,
(t3.Total1-ISNULL(t4.Total2, 0)) AS Total
FROM TBL3 t3
LEFT JOIN TBL4 t4 ON t3.PageName = t4.PageName AND t4.DeviceCategory = t3.DeviceCategory
```
Output:
```
PageName Circle Total
login Other 26
login Other 2
login Other 11
```
I have deleted `AccessType` column, because it was not in create script.
EDIT:
I also don't know your logic but if you can have in `TBL4` values that are not in `TBL3`(for example `login, windows phone, 1, 10` ) then you can use `FULL OUTER JOIN`:
```
insert into gadata values('login','win. phone',1,10,6)
SELECT ISNULL(t3.PageName, t4.PageName) AS PageName,t4.t4Circle AS Circle,
(ISNULL(t3.Total1, 0)-ISNULL(t4.Total2, 0)) AS Total
FROM TBL3 t3
FULL JOIN TBL4 t4 ON t3.PageName = t4.PageName AND t4.DeviceCategory = t3.DeviceCategory
```
Output:
```
PageName Circle Total
login Other 52
login Other 4
login Other 22
login Other -6
```
|
Instead of the long query in the question, just use a `self join`:
```
select tbl1.pagename, tbl1.devicecategory, tbl1.pageviews - tbl2.pageviews
from (select t2.pagename, t2.devicecategory, sum(t2.pageviews) as pageviews
from gadata t2
where t2.FKCircleId=0
group by t2.pagename, t2.devicecategory) tbl1
join (select t3.pagename, t3.devicecategory, sum(t3.pageviews) as pageviews
from gadata t3
where t3.FKCircleId <> 0
group by t3.pagename, t3.devicecategory) tbl2
on tbl1.pagename=tbl2.pagename and tbl1.devicecategory=tbl2.devicecategory
```
OUTPUT:
```
login desktop 26
login mobile 2
login tablet 11
```
[SQLFIDDLE DEMO](http://sqlfiddle.com/#!6/3d374/5)
|
How to do left join with nulls
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I'm working in rails. My model is like this:
```
class Topic < ActiveRecord::Base
has_many :topics, dependent: :delete_all
belongs_to :parent, foreign_key: 'topic_id', class_name: 'Topic'
has_many :contents
validates :name, uniqueness: true, presence: true
end
```
So I have a topic that can have many "sub-topics". **Every sub-topic can have many sub-topics, indefinitely**. I'm trying to make a method that returns me all "leaf" topics. A leaf topic is a topic with no sub-topics.
```
def self.leafs
where(???)
end
```
I can't formulate this in active record logic, so actually I use this query:
```
Topic.find_by_sql("SELECT * FROM topics WHERE id NOT IN (SELECT t.topic_id FROM topics t WHERE topic_id IS NOT NULL)")
```
How can I write this in an active record way?
|
Try this:
```
child_ids_with_topic_id = where.not(topic_id: nil).pluck(:topic_id)
where.not(id: child_ids_with_topic_id)
```
|
```
def self.leafs
topics.where("topic_id IS NOT NULL")
end
```
ActiveRecord 4.0 and above adds where.not so you can do this:
```
scope :leafs, -> { topics.where.not(topic_id: nil) }
scope :without_topics, -> { includes(:topics).where(topics: { id: nil }) }
```
|
How to convert this query to ActiveRecord logic?
|
[
"",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
**PLEASE DO NOT MARK THE QUESTION AS DUPLICATE WITHOUT READING IT. I DID POST A SIMILAR QUESTION BUT THE STACKOVERFLOW COMMUNITY MEMBERS ASKED ME TO REPOST THE MODIFIED QUESTION SEPARATELY AS THE SOLUTION IS MUCH MORE COMPLICATED GIVEN A SMALL, SUBTLE MODIFICATION.**
Suppose you have the following schema:
```
CREATE TABLE Data
(
ID INT,
CXL INT
)
INSERT INTO Data (ID, CXL)
SELECT 1, NULL
UNION
SELECT 2, 1
UNION
SELECT 3, 2
UNION
SELECT 5, 3
UNION
SELECT 6, NULL
UNION
SELECT 7, NULL
UNION
SELECT 8, 7
```
The column CXL is the ID that cancels a particular ID. So, for example, the first row in the table with ID:1 was good until it was cancelled by ID:2 (CXL column). ID:2 was good until it was cancelled by ID:3. ID:3 was good until it was cancelled by ID:5 so in this sequence the last "GOOD" ID was ID:5.
**I would like to find the "GOOD" ID as well as the original ID that started EACH chain.** So in this example it would be:
```
Original ID | Latest GOOD ID
1 5
6 6
7 8
```
Here's a fiddle if you want to play with this:
<http://sqlfiddle.com/#!6/68ac48/1>
|
Took me a few minutes to dredge up the right CTE for this:
```
WITH ids AS (
SELECT
ID,
ID AS orig FROM Data d1 WHERE CXL IS NULL
UNION ALL
SELECT
d2.ID,
orig
FROM ids i
INNER JOIN Data d2 ON d2.CXL = i.ID
)
SELECT
orig AS [Original Id],
MAX(ID) AS [Latest Good Id]
FROM ids
GROUP BY orig
```
[Here's your SQLFiddle](http://sqlfiddle.com/#!6/68ac48/36)
**This assumes that a cancelling ID is always higher than the ID it cancels, of course...**
Basically, every time it recurses, it reselects the original id again for the row. Once the recursion is done, it's just a matter of a simple `GROUP BY` to get the original id and the most current.
|
I think you can do it using a recursive CTE:
```
;WITH CTE AS (
SELECT ID AS Parent, ID, CXL, 0 AS level
FROM Data
WHERE CXL IS NULL
UNION ALL
SELECT c.Parent AS Parent, d.ID, d.CXL, level = level + 1
FROM CTE AS c
INNER JOIN Data AS d ON c.ID = d.CXL
)
SELECT Parent AS OriginalID, ID AS GoodID
FROM CTE AS c
WHERE level = (SELECT MAX(level) FROM CTE WHERE Parent = c.Parent)
```
The anchor query of the `CTE` selects all the original IDs that start EACH chain, i.e. the ones having `CXL NULL`. Then we recursively build up each chain, keeping `Parent` field and using `level` so as to be able to identify the end of the chain using `MAX` in the final query.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/d63d6/1)
|
How would you find the 'GOOD' ID when cancellation is involved as well as the original ID that started it all?
|
[
"",
"sql",
"sql-server",
""
] |
I have created a view VQ1 using the following clause.
```
CREATE VIEW VQ1 as
SELECT productid, productname, TO_CHAR(unitprice,'$9,999.99') AS "unitprice"
FROM products
WHERE unitprice > avg(unitprice)
WITH READ ONLY;
```
I am getting an error that I can not use the aggregate function `AVG()` to find the average.
So how can I find `AVG()` for a view?
|
You can try this mate:
```
CREATE VIEW VQ1 AS
SELECT
    productid,
    productname,
    TO_CHAR(unitprice, '$9,999.99') AS "unitprice"
FROM
    products
WHERE
    unitprice > (SELECT AVG(unitprice) FROM products)
WITH READ ONLY;
```
|
Because `avg` is an aggregate function, it cannot be used in the select unless the other fields are specified in the `group by` clause.
```
create view VQ1 as
select productid, productname,
TO_CHAR(unitprice,'$9,999.99') as "unitprice"
from products,
(select avg(unitprice) as avgprice from products)
where unitprice > avgprice
with read only;
```
|
Not able use the average aggregate in view table
|
[
"",
"sql",
"database",
"oracle",
""
] |
Using Microsoft's SQL Server, I want to show a list of each customer and their spend by year and quarter. So, I have the following code:
```
SELECT CompanyName AS 'Customer', YEAR(OrderDate) AS 'Year' ,
Quarter1 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 1 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter2 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 2 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter3 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 3 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter4 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 4 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END
FROM Customers LEFT JOIN Orders ON Orders.CustomerID = Customers.CustomerID
LEFT JOIN [Order Details] ON [Order Details].OrderID = Orders.OrderID
GROUP BY CompanyName, YEAR(OrderDate), DATEPART(q, OrderDate)
```
But it shows a SINGLE ROW per quarter, for example:
```
Customer Year Quarter1 Quarter2 Quarter3 Quarter4
-------- ---- -------- -------- -------- --------
John Smith 1997 127 0 0 0
John Smith 1997 0 254 0 0
John Smith 1997 0 0 547 0
John Smith 1997 0 0 0 155
```
What I want is a single row per customer, in this case:
```
Customer Year Quarter1 Quarter2 Quarter3 Quarter4
-------- ---- -------- -------- -------- --------
John Smith 1997 127 254 547 155
```
Any advice?
Thx.
|
wrap your query and group it by customer and year. something like this...
```
SELECT Customer,Year,SUM(Quarter1) AS Quarter1,SUM(Quarter2) AS Quarter2,SUM(Quarter3) AS Quarter3,SUM(Quarter4) AS Quarter4
FROM
(
SELECT CompanyName AS 'Customer', YEAR(OrderDate) AS 'Year' ,
Quarter1 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 1 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter2 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 2 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter3 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 3 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter4 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 4 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END
FROM Customers LEFT JOIN Orders ON Orders.CustomerID = Customers.CustomerID
LEFT JOIN [Order Details] ON [Order Details].OrderID = Orders.OrderID
GROUP BY CompanyName, YEAR(OrderDate), DATEPART(q, OrderDate)
)C
GROUP BY Customer, Year
```
|
use Common Table Expression [CTE](https://msdn.microsoft.com/en-IN/library/ms175972.aspx)
```
;with cte1 as
(
SELECT CompanyName AS 'Customer', YEAR(OrderDate) AS 'Year' ,
Quarter1 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 1 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter2 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 2 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter3 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 3 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END,
Quarter4 = CASE(DATEPART(q, Orders.OrderDate))
WHEN 4 THEN SUM((UnitPrice*Quantity)*(1-Discount))
ELSE 0
END
FROM Customers LEFT JOIN Orders ON Orders.CustomerID = Customers.CustomerID
LEFT JOIN [Order Details] ON [Order Details].OrderID = Orders.OrderID
GROUP BY CompanyName, YEAR(OrderDate), DATEPART(q, OrderDate)
)
select Customer, Year, max(Quarter1), max(Quarter2), max(Quarter3), max(Quarter4) from cte1 group by Customer, Year
```
|
SQL GROUP BY quarters
|
[
"",
"sql",
"sql-server",
"group-by",
""
] |
I have been working with two tables in SQL Developer to determine which customer has placed the most orders. I have gotten as far as figuring out the total number of orders placed by each customer, but cannot figure out how to show only the one with the MAX number of orders...
For example,
this will give me a list of all customers who placed orders and the number of orders each of them has placed:
```
SELECT
customer.cust_num, customer.cust_bizname,
COUNT(invoice.inv_num) AS "TOTAL ORDERS"
FROM customer INNER JOIN invoice ON customer.cust_num = invoice.cust_num
GROUP BY customer.cust_num, customer.cust_bizname;
```
If I attempt to use max and count together...
```
SELECT
customer.cust_num, customer.cust_bizname,
MAX(COUNT(invoice.inv_num)) AS "TOTAL ORDERS"
FROM customer INNER JOIN invoice ON customer.cust_num = invoice.cust_num
GROUP BY customer.cust_num, customer.cust_bizname;
```
I get an error stating "not a single-group group function".
How can I effectively count the number of orders and display only the customer that has placed the highest number of orders?
|
You can't use `max` like that. Instead, order by count in descending order and get the first record, like so:
```
SELECT * FROM
(select customer.cust_num, customer.cust_bizname,
COUNT(invoice.inv_num) AS "TOTAL ORDERS"
FROM customer INNER JOIN invoice ON customer.cust_num = invoice.cust_num
GROUP BY customer.cust_num, customer.cust_bizname
ORDER BY "TOTAL ORDERS" desc)
WHERE rownum = 1
```
|
Use `ORDER BY` with `ROWNUM`
```
select *
from
(SELECT customer.cust_num,
customer.cust_bizname,
COUNT(invoice.inv_num) AS "TOTAL ORDERS"
FROM customer
INNER JOIN invoice
ON customer.cust_num = invoice.cust_num
GROUP BY customer.cust_num, customer.cust_bizname
Order by "TOTAL ORDERS" DESC
)
Where ROWNUM =1
```
or Use `Row_Number()` Analytic Function
```
select customer.cust_num,
customer.cust_bizname,
"TOTAL ORDERS"
FROM
(
SELECT Row_number() over(order by COUNT(invoice.inv_num) DESC) As RN,
       customer.cust_num,
customer.cust_bizname,
COUNT(invoice.inv_num) AS "TOTAL ORDERS"
FROM customer
INNER JOIN invoice
ON customer.cust_num = invoice.cust_num
GROUP BY customer.cust_num, customer.cust_bizname
)
Where RN=1
```
|
Determine customer with most orders placed
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I am running this script in Oracle SQL Developer:
```
DEFINE TABLESPACE1 = "&TABLESPACE1";
CREATE TABLE "DBUSER"
(
"USER_ID" VARCHAR2(100 BYTE),
"USERNAME" VARCHAR2(100 BYTE),
"CREATED_BY" VARCHAR2(100 BYTE),
"CREATED_DATE" DATE
) TABLESPACE "&TABLESPACE1" ;
```
I am getting this error:
```
SQL Error: ORA-00959: tablespace '&TABLESPACE1' does not exist
00959. 00000 - "tablespace '%s' does not exist"
```
|
**NOTE** Not all `SQL*Plus` commands are supported in **SQL Developer**. Also, it depends on the SQL Developer version; the latest versions support a lot of SQL\*Plus commands.
I tested in **SQL Developer version 3.2.20.10**
`&` is used for **substitution variable** in `SQL*Plus` .
For example,
In `SQL*Plus`
```
SQL> DEFINE TABLESPACE1 = &TABLESPACE1
Enter value for tablespace1: new_tablespace
SQL> SELECT '&TABLESPACE1' from dual;
old 1: SELECT '&TABLESPACE1' from dual
new 1: SELECT 'new_tablespace' from dual
'NEW_TABLESPAC
--------------
new_tablespace
SQL>
```
In **SQL Developer**
```
DEFINE TABLESPACE1 = &TABLESPACE1
<Enter the value when prompted> -- I entered "t"
old:DEFINE TABLESPACE1 = &TABLESPACE1
new:DEFINE TABLESPACE1 = t
```
> SQL Error: ORA-00959: tablespace '&TABLESPACE1' does not exist
>
> 00959. 00000 - "tablespace '%s' does not exist"
Are you sure you are executing it as a **script** in **SQL Developer**? You can **press F5** to execute it as a script. Did you substitute the value for the variable?
For example,
```
SQL> DEFINE TABLESPACE1 = "&TABLESPACE1"
Enter value for tablespace1: new_tablespace
SQL> CREATE TABLE "DBUSER"
2 (
3 "USER_ID" VARCHAR2(100 BYTE),
4 "USERNAME" VARCHAR2(100 BYTE),
5 "CREATED_BY" VARCHAR2(100 BYTE),
6 "CREATED_DATE" DATE
7 ) TABLESPACE "&TABLESPACE1" ;
old 7: ) TABLESPACE "&TABLESPACE1"
new 7: ) TABLESPACE "new_tablespace"
) TABLESPACE "new_tablespace"
*
ERROR at line 7:
ORA-00959: tablespace 'new_tablespace' does not exist
SQL>
```
In your case, if the value is properly substituted, and if tablespace exists, you should be able to create the table.
|
It looks like you entered the value '&TABLESPACE1' into the TABLESPACE1 substitution variable. Maybe you have substitution scanning turned off. Try
```
SET SCAN ON
```
before your statement
|
SQL Error: Tablespace does not exist
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I have a query for example `Query1 = Select Name from table where some Criteria`.
Now this query returns a result set, of course. What I want is to query the result set of this query; for example, I only want the unique Names from the above query: `select Distinct(Name) from Query1`. I should mention that I know I can just use `distinct` in Query1, but this is just an example; my real scenario is somewhat different. What I want to know is whether it's possible to query the result set of a previous query.
I am using SQL Server 2012.
|
You can use the `WITH` clause
```
WITH SomeClients AS (
SELECT
c.ID
FROM Clients c
WHERE c.Name LIKE '%hello%'
)
SELECT DISTINCT
sc.ID
FROM SomeClients sc
```
|
There are several ways to solve this:
1: create a view from the first query and run the second query on the view.
2: nest both queries, like this:
```
SELECT DISTINCT [Name]
FROM (
SELECT [Name]
FROM table
WHERE some Criteria
) As InnerQuery
```
3: use a temporary table to store the resultset of the first query as suggested by wewesthemenace in the [comments](https://stackoverflow.com/questions/29362693/querying-the-result-set-of-a-previous-query/29362848#comment46909015_29362693).
4: use a CTE as suggested by thebreiflabb in the other [answer](https://stackoverflow.com/a/29362756/3094533) to this post.
Personally, I would probably go with the first or second option, depending on whether you need to use the first query standalone as well.
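As a sketch of option 1, with placeholder object names (`Query1View`, `MyTable`, and the criteria are illustrative, not from the question):

```
-- Option 1: persist the first query as a view, then query the view
CREATE VIEW Query1View AS
    SELECT [Name]
    FROM MyTable            -- placeholder table name
    WHERE SomeColumn = 1;   -- placeholder criteria

SELECT DISTINCT [Name] FROM Query1View;
```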
|
Querying the Result set of a Previous Query
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"subquery",
""
] |
```
1 L R
1 1 1
1 1 2
1 1 3
1 2 1
1 2 2
1 2 3
1 3 1
1 3 2
1 3 3
```
I am using this query, but I am not able to generate the L column:
```
Select 1,level R
from DUAL
Connect by level <=3
```
|
You could do a **Cartesian join** on the **row generator** query you have, which generates `3 rows`. The **Cartesian product** then yields `9 rows` in total.
For example,
```
SQL> WITH DATA AS
2 ( SELECT LEVEL rn FROM dual CONNECT BY LEVEL <=3
3 )
4 SELECT 1, A.rn L, b.rn R FROM DATA A, DATA b
5 /
1 L R
---------- ---------- ----------
1 1 1
1 1 2
1 1 3
1 2 1
1 2 2
1 2 3
1 3 1
1 3 2
1 3 3
9 rows selected.
SQL>
```
|
```
select 1, L, R
from (Select level R
from DUAL
Connect by level <=3),
(Select level L
from DUAL
Connect by level <=3)
```
|
How to get this below output from DUAL in oracle?
|
[
"",
"sql",
"oracle",
""
] |
I'm new to SQL and I'm writing a SQL script to select data; the results will be displayed in .csv format. For one of the fields, I need to select only what is before the @ in the email address for everyone in the listing. I don't want to update the records in the table.
Ex: john.doe@yahoo.com
I'm only needing to select the john.doe
I need assistance please in doing this. I'm using sqlplus in a Linux environment.
Here's what I have so far. I still need assistance with getting the desired output.
(select nvl(c.email_email_address, ' ')
```
from email c, person a
where c.email_pidm = a.person_pidm and
PERSON.ID = a.person_id and
c.email_emal_code = 'EMPL' and
c.email_status_ind = 'A' and
c.rowid = (select max(b.rowid)
from email b
where b.email_pidm = a.person_pidm and
b.email_emal_code = 'EMP'
and b.email_status_ind = 'A')
) "Employee_Email_Address",
SELECT SUBSTR(email_email_address, 0, INSTR(email_email_address, '@')-1)
--(select nvl(c.email_email_address, ' ')
from email c, person a
where c.email_pidm = a.person_pidm and
PERSON.ID = a.person_id and
c.email_emal_code = 'EMP' and
c.email_status_ind = 'A' and
c.rowid = (select max(b.rowid)
from email b
where b.email_pidm = a.person_pidm and
b.email_emal_code = 'EMPL'
and b.email_status_ind = 'A')
) "Username"
```
|
`SELECT SUBSTR('test@example.com', 1, INSTR('test@example.com', '@')-1) FROM DUAL;`
Output:
> test
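Applied to a column rather than a literal (assuming the `email` table and `email_email_address` column from the question), the same pattern would be:

```
SELECT SUBSTR(email_email_address, 1, INSTR(email_email_address, '@') - 1) AS username
FROM email;
```

Note that the question's own attempt used a start position of 0; Oracle's `SUBSTR` treats 0 the same as 1, but 1 is the conventional starting position.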
|
You could do:
```
select SUBSTR('test@testdomain.com', 1, INSTR('test@testdomain.com', '@')-1)
from dual
```
|
How to exclude everything after an @ in SQL?
|
[
"",
"sql",
"oracle",
""
] |
I am working with Oracle SQL, and I have a table with an attribute POST (VARCHAR2(10 BYTE)). The data that I am given would be in the format 123.34, and I need one VARCHAR2 (123) and one NUMBER (0.34) to store this in a different table. I was trying to think of a way to do this in a select statement, but could not figure it out.
|
Can you be assured that the value will always be a `NUMBER`? If so, then just use `TO_NUMBER()`:
```
WITH t1 AS (
SELECT '123.34' AS post FROM dual
)
SELECT post_num, TRUNC(post_num), post_num - TRUNC(post_num) FROM (
SELECT TO_NUMBER(post) AS post_num FROM t1
);
```
---
If you are not sure that it will always be a `NUMBER`, there are "safe" ways of converting a character to a value, e.g., using `REGEXP_SUBSTR()`:
```
TO_NUMBER(REGEXP_SUBSTR(post, '^\d*(\.\d+)?'))
```
|
```
select '123.45' num,
       substr('123.45',1,instr('123.45','.')-1) wholenum,
       substr('123.45',instr('123.45','.')+1) decimalpart
from dual
```
Output:
```
NUM     WHOLENUM  DECIMALPART
123.45  123       45
```
|
Oracle SQL, separating a Number string to get whole number and decimal separately
|
[
"",
"sql",
"oracle",
""
] |
I have an accounts and a contacts table, and I am trying to find any accounts where NONE of the contacts associated with that account have a value for a certain field.
Scenario 1: Account WaffleHouse has 3 contacts, 1 of the contacts has a value in field "Field1". This account IS NOT returned in result set.
Scenario 2: Account PancakeHouse has 5 contacts, NONE of the contacts has a value in the field "Field1" set to TRUE. This account IS returned in result set.
I tried this code, and it is returning ANY account that has a contact where field is blank or null.
```
select distinct a.accountid
from account as a
inner join contact as c
on a.accountid = c.accountid
where (c.Field1 is null or c.Field1 = '')
```
|
You are very close - just use some aggregation (`GROUP BY`) and the `MAX` function to get what you want, like so:
```
select a.accountid
from account as a
inner join contact as c
on a.accountid = c.accountid
group by a.accountid
having MAX(isnull(c.Field1, '')) = ''
```
|
Try this:
```
SELECT *
FROM Accounts a
WHERE NOT EXISTS ( SELECT *
FROM contact c
WHERE a.accountid = c.accountid
AND c.Field1 = 'TRUE' )
```
Or:
```
SELECT *
FROM Accounts a
WHERE NOT EXISTS ( SELECT *
FROM contact c
WHERE a.accountid = c.accountid
AND c.Field1 <> '' )
```
|
Please help form this T-SQL Query to return correct Account results
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Hi, I have an MS Access table in the following format:
```
request_date | Total_Uploaded
24/03/2015 07:42:47 | 36
24/03/2015 07:56:19 | 36
24/03/2015 08:17:28 | 4
24/03/2015 08:33:04 | 4
24/03/2015 08:39:07 | 36
24/03/2015 08:53:56 | 10
24/03/2015 09:04:26 | 16
24/03/2015 09:14:03 | 6
24/03/2015 09:14:05 | 16
24/03/2015 09:18:32 | 407
24/03/2015 09:18:34 | 16
24/03/2015 09:19:00 | 13
24/03/2015 09:19:05 | 62
24/03/2015 09:25:59 | 138
24/03/2015 09:27:08 | 138
24/03/2015 09:28:02 | 16
24/03/2015 09:31:09 | 16
```
I want to be able to get counts per hour of records between a set of ranges. My ranges are:
* 0 - 10
* 11 - 50
* 51 - 100
* > 101
So I would like to end up with a table that shows:
```
DateTime | 0-10 Count| 22-50 Count| 51-100 Count | > 100 Count
24/03/2015 07 0 | 2 | 0 | 0
24/03/2015 08 2 | 1 | 0 | 0
```
I have been able to group by date by using `datepart("h", request_date)` and get any one of the range counts, but I would like my query to be able to do all of them in one hit. I have tried a subquery, but it ends up very messy and mainly wrong. Any input gratefully received.
Thanks
|
As this is Access, here is how:
```
SELECT
Format([request_date],"dd\/mm\/yyyy hh") AS DateHour,
Count(TableTotals.total_uploaded) AS Count_all,
Count(IIf([total_uploaded]<=10,1,Null)) AS Count_0_to_10,
Count(IIf([total_uploaded] Between 11 And 50,1,Null)) AS Count_11_to_50,
Count(IIf([total_uploaded] Between 51 And 100,1,Null)) AS Count_51_to_100,
Count(IIf([total_uploaded]>100,1,Null)) AS Count_101_up
FROM
TableTotals
GROUP BY
DateValue([request_date]),
Hour([request_date]),
Format([request_date],"dd\/mm\/yyyy hh");
```
Output:
```
DateHour Count_all Count_0_to_10 Count_11_to_50 Count_51_to_100 Count_101_up
24/03/2015 07 2 0 2 0 0
24/03/2015 08 4 3 1 0 0
24/03/2015 09 11 1 6 1 3
```
|
You can use [case](http://www.techonthenet.com/access/functions/advanced/case.php) inside a sum for each column. In the conditional, you transform a value in the particular range to 1 and other values to 0.
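Access SQL itself has no `CASE` expression, so in practice the conditional inside the sum is written with `IIf`; a minimal sketch of the pattern (assuming the table is named `TableTotals`, as in the other answer):

```
SELECT
    Format([request_date], "dd\/mm\/yyyy hh") AS DateHour,
    Sum(IIf([total_uploaded] <= 10, 1, 0)) AS [0-10 Count],
    Sum(IIf([total_uploaded] Between 11 And 50, 1, 0)) AS [11-50 Count],
    Sum(IIf([total_uploaded] Between 51 And 100, 1, 0)) AS [51-100 Count],
    Sum(IIf([total_uploaded] > 100, 1, 0)) AS [> 100 Count]
FROM TableTotals
GROUP BY Format([request_date], "dd\/mm\/yyyy hh");
```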
|
Select Multiple counts from the same table based on range
|
[
"",
"sql",
"ms-access",
""
] |
I'm having trouble selecting a column from a subselect that has a shared name between two joined tables using the `ON` join expression syntax.
I have two tables, `event` and `geography` each of which has a `geography_id` column, which is the same data type, and `event.geography_id` is a foreign key into `geography` (geography provides information about an event):

The problem I'm having is that I can't reference the shared column between these two tables when joining them using the `ON` syntax, but it works when using the `USING` syntax.
I realize that `USING` works because it [suppresses redundant columns](http://www.postgresql.org/docs/9.0/static/queries-table-expressions.html), but since the statement uses many different joined tables with schemas that change more often than not, I would rather be as explicit as possible.
The specific SQL I'm having trouble with is:
```
select
x.event_id
from (
select * from event e
left join geography g on (e.geography_id = g.geography_id)
) x
where
x.geography_id in (1,2,3)
```
Which gives the error:
> ERROR: column reference "geography\_id" is ambiguous
>
> LINE 8: x.geography\_id in (1,2,3)
I'm using PostgreSQL 9.0.14.
|
It would be a very helpful feature in SQL to be able to select all columns *except* one or more that you explicitly want to exclude. If it existed, you could have used such a feature to solve your problem by excluding `g.geography_id`. Unfortunately, such a feature does not appear to exist anywhere, in any DBMS. See <https://dba.stackexchange.com/questions/1957/sql-select-all-columns-except-some>.
One solution, as @a\_horse\_with\_no\_name commented, is to list every single column you want to select, and just omit the ones you don't.
There is actually another, probably preferable, solution, which is to select `*` and also `e.geography_id`, but alias the latter to another name that will then be unambiguous in the subquery result-set. Something like this:
```
select
x.event_id
from (
select *, e.geography_id geography_id1 from event e
left join geography g on (e.geography_id = g.geography_id)
) x
where
x.geography_id1 in (1,2,3)
```
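The first solution, listing columns explicitly, would look like the sketch below (only the two columns the outer query actually uses are listed; in practice you would add whatever other `event` columns you need):

```
select
  x.event_id
from (
  select e.event_id, e.geography_id   -- explicit list: no ambiguous duplicate from g
  from event e
  left join geography g on (e.geography_id = g.geography_id)
) x
where
  x.geography_id in (1,2,3)
```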
|
Pull the predicate down into a subquery *before* you join:
```
SELECT e.event_id
FROM (SELECT * FROM event WHERE geography_id IN (1,2,3)) e
LEFT JOIN geography g ON (g.geography_id = e.geography_id);
```
The result is 100 % equivalent to your original query:
```
SELECT e.event_id
FROM event e
LEFT JOIN geography g USING (geography_id)
WHERE geography_id in (1,2,3);
```
Just that the alternative should be much faster (excludes irrelevant rows early). Quite an acceptable side effect for a workaround.
|
Resolving an ambiguous column in a subselect
|
[
"",
"sql",
"postgresql",
"join",
"subquery",
"left-join",
""
] |
I record Start and End times in SQL whenever something happens to a record: basically, a user opens a screen containing the information on a record, and I want to see how long the screen was open.
I record the start time, and the end together with the stage the case is currently in for that case in a linked table similar to this:
```
| Id | Reference | Stage | StartTime | EndTime |
| 1 | 123456789 | NEW | 2015-03-30 16:04:39.8100000 | NULL |
| 2 | 123456789 | NEW | NULL | 2015-03-30 16:06:44.3830000 |
| 3 | 123456789 | VAL | 2015-03-30 16:09:00.3800000 | NULL |
| 4 | 123456789 | VAL | NULL | 2015-03-30 16:11:04.9870000 |
| 5 | 123456789 | CAP | 2015-03-30 16:20:25.7900000 | NULL |
| 6 | 123456789 | CAP | NULL | 2015-03-30 16:21:12.1130000 |
| 7 | 123456789 | CAP | 2015-03-30 16:22:40.4930000 | NULL |
| 8 | 123456789 | CAP | NULL | 2015-03-30 16:29:02.5030000 |
| 9 | 123456789 | CAP | 2015-03-30 16:29:17.8970000 | NULL |
| 10 | 123456789 | CAP | NULL | 2015-03-30 16:55:13.8870000 |
| 11 | 123456789 | CAP | 2015-03-30 16:56:20.2230000 | NULL |
| 12 | 123456789 | CAP | NULL | 2015-03-30 16:56:40.3830000 |
```
I've been playing around but was unable to get the desired result:
```
| Id | Reference | Stage | StartTime | EndTime |
| 1 | 123456789 | NEW | 2015-03-30 16:04:39.8100000 | 2015-03-30 16:06:44.3830000 |
| 3 | 123456789 | VAL | 2015-03-30 16:09:00.3800000 | 2015-03-30 16:11:04.9870000 |
| 5 | 123456789 | CAP | 2015-03-30 16:20:25.7900000 | 2015-03-30 16:21:12.1130000 |
| 7 | 123456789 | CAP | 2015-03-30 16:22:40.4930000 | 2015-03-30 16:29:02.5030000 |
| 9 | 123456789 | CAP | 2015-03-30 16:29:17.8970000 | 2015-03-30 16:55:13.8870000 |
| 11 | 123456789 | CAP | 2015-03-30 16:56:20.2230000 | 2015-03-30 16:56:40.3830000 |
```
How can I combine the data (Start and End time) in this case?
|
You can select the StartTime rows and fill in EndTime with an inner select:
```
SELECT Id, Reference, Stage, StartTime,
(SELECT t1i.EndTime FROM Table1 t1i WHERE t1.Stage = t1i.Stage And t1i.StartTime IS NULL
    And t1i.Id = (SELECT min(t1ii.Id) FROM Table1 t1ii WHERE t1ii.StartTime IS NULL And t1ii.Stage = t1.Stage And t1ii.Id > t1.Id) ) AS EndTime
FROM Table1 t1
WHERE t1.EndTime IS NULL
```
|
```
SELECT * FROM (
SELECT [Id], [Reference], [Stage], [StartTime],lead(EndTime) OVER (ORDER BY ID) AS NewEndDate FROM yourTablename
) tbl
WHERE id % 2 = 1
```
**EDIT**: This query will work in SQL Server 2012 and above versions
You can use below query if you are using older versions of SQL Server:
```
SELECT * FROM (
SELECT t.[Id], t.[Reference], t.[Stage], t.[StartTime], tnext.[EndTime]
FROM dates t INNER JOIN
dates tnext
ON t.id = tnext.id - 1
) tbl
WHERE id % 2 = 1
```
|
Combining rows in SQL
|
[
"",
"sql",
"sql-server-2008",
""
] |
I am trying to create a view in Oracle to show all rooms available in the hotel that are not booked out.
If they are booked out there will be a record in RoomBasket. So I am trying to select all rooms not in RoomBasket between the checkin and checkout dates (RoomBasket.datein, RoomBasket.dateout)
But the query kept returning 0 results when I used inner joins and outer (left/right) joins, I think it's because it is not joined on to a table of dates so it will have 0 results. So now I am trying to do a Union to the Room table which has a record of all rooms, as I think that would select all Rooms and then negate the ones which are booked out?
I can't get my syntax correct and I played around with loads of forms of this query:
```
CREATE VIEW availability AS
(SELECT * FROM RoomBasket rb
WHERE TO_DATE(SYSDATE, 'yyyymmdd')
NOT BETWEEN TO_DATE(rb.datein, 'yyyymmdd') AND TO_DATE(rb.dateout, 'yyyymmdd')
UNION (SELECT r.id room, rt.type type, rt.price price FROM Room r, RoomType rt)
);
```
But if it works I get 0 results and if it doesn't work I get syntax errors. At the moment the error is:
> query block has incorrect number of result columns
|
You have no predicate on your join of `Room` to `RoomType`, so you get a cross join. That is unlikely to be what you want.
Furthermore, your `union` looks like it's trying to add unrelated data to the room data. Your description of the problem suggests that you want to use the `RoomBasket` data to *filter* the other data instead -- that requires a join or a subquery.
Something more along these lines would do what you want:
```
CREATE VIEW availability AS (
SELECT r.id room, rt.type type, rt.price price
FROM
Room r
INNER JOIN RoomType rt
ON rt.id = r.type
LEFT JOIN RoomBasket rb
ON rb.room = r.id
AND TO_DATE(SYSDATE, 'yyyymmdd') BETWEEN TO_DATE(rb.datein, 'yyyymmdd')
AND TO_DATE(rb.dateout, 'yyyymmdd')
WHERE rb.room IS NULL
);
```
The `WHERE` predicate has the effect of selecting those rows of the left table (`Room JOIN RoomType`) which are not matched by any row of the right table (`RoomBasket`).
|
Your first query is returning a different number of columns than your second query, thus they cannot be unioned. Specify the columns for the first query explicitly instead of using `SELECT *`.
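For illustration, here is a sketch of the first branch with an explicit column list; `room`, `type`, and `price` are hypothetical `RoomBasket` column names, so substitute your real ones. Each `UNION` branch must return the same number of type-compatible columns:

```sql
-- Sketch only: rb.room, rb.type, rb.price are assumed column names
SELECT rb.room, rb.type, rb.price
FROM RoomBasket rb
WHERE TO_DATE(SYSDATE, 'yyyymmdd')
      NOT BETWEEN TO_DATE(rb.datein, 'yyyymmdd') AND TO_DATE(rb.dateout, 'yyyymmdd')
UNION
SELECT r.id, rt.type, rt.price
FROM Room r, RoomType rt;
```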
|
Oracle union giving error
|
[
"",
"sql",
"database",
"oracle",
"join",
"union",
""
] |
I want to save the hassle of doing many querys for the following:
I have a table like this:
```
name, age
{
Mike, 7
Peter, 2
Mario, 1
Tony, 4
Mary, 2
Tom, 7
Jerry, 3
Nick, 2
Albert, 22
Steven, 7
}
```
And I want the following result:
```
Results(custom_text, num)
{
1 Year, 1
2 Year, 3
3 Year, 1
4 Year, 1
5 Year, 0
6 Year, 0
7 Year, 3
8 Year, 0
9 Year, 0
10 Year, 0
More than 10 Year, 1
}
```
I know how to do this but in 11 queries :( But how to simplify it?
**EDIT:**
Doing the following, I can obtain the non zero values, but I need the zeroes in the right places.
```
SELECT COUNT(*) AS AgeCount
FROM mytable
GROUP BY Age
```
How can I achieve this?
Thanks for reading.
|
You can use the query below, but it will not show the gaps. If you want the gaps, use [Linoff's answer](https://stackoverflow.com/a/29336556/3208640):
```
select t.txt, count(t.age) from
(select
case
when age<11 then concat(age ,' year')
else 'more than 10'
end txt, age
from your_table)t
group by t.txt
order by 1
```
[SQL FIDDLE DEMO](http://www.sqlfiddle.com/#!9/e3a26/41)
|
You can use `left join` and a subquery to get what you want:
```
select coalesce(concat(ages.n, ' year'), 'More than 10 year') as custom_text,
count(*)
from (select 1 as n union all select 2 union all select 3 union all select 4 union all
select 5 union all select 6 union all select 7 union all select 8 union all
select 9 union all select 10 union all select null
) ages left join
tabla t
on (t.age = ages.n or ages.n is null and t.age > 10)
group by ages.n;
```
EDIT:
I think the following is a better way to do this query:
```
select (case when least(age, 11) = 11 then 'More than 10 year'
else concat(age, ' year')
end) as agegroup, count(name)
from (select 1 as age, NULL as name union all
select 2, NULL union all
select 3, NULL union all
select 4, NULL union all
select 5, NULL union all
select 6, NULL union all
select 7, NULL union all
select 8, NULL union all
select 9, NULL union all
select 10, NULL union all
select 11, NULL
union all
select age, name
from tabla t
) t
group by least(age, 11);
```
Basically, the query needs a `full outer join` and MySQL does not provide one. However, we can get the same result by adding in extra values for each age, so we know something is there. Then because `name` is `NULL`, the `count(name)` will return `0` for those rows.
|
How to reduce SQL queries in only one in this case
|
[
"",
"mysql",
"sql",
"join",
"group-by",
"having",
""
] |
I need to get all table names in SQL Server. To do this I use this code:
```
select TABLE_NAME
from INFORMATION_SCHEMA.TABLES
```
I need use dynamic database name. To do this I tried to use this code:
```
Declare @dbName varchar(50)='Learn'
use @dbname
go
select TABLE_NAME
from INFORMATION_SCHEMA.TABLES
```
But it does not work. How can I do this?
|
```
DECLARE @dbName varchar(50)='Learn'
EXEC ('SELECT TABLE_NAME FROM ' +@dbName+'.INFORMATION_SCHEMA.TABLES');
```
OR SIMPLY:
```
SELECT TABLE_NAME FROM Learn.INFORMATION_SCHEMA.TABLES
```
|
Create this stored procedure in the master db and call it:
```
CREATE PROCEDURE custom_query_executor
@dbName VARCHAR(50)
AS
BEGIN
DECLARE @query_string nvarchar(4000);
SET @query_string = 'select TABLE_NAME from ' + CAST(@dbName AS NVARCHAR) +'.INFORMATION_SCHEMA.TABLES';
EXEC sys.sp_executesql @query_string;
END
```
OR you can try this
```
DECLARE @dbName VARCHAR(50);
DECLARE @query_string NVARCHAR(4000);
SET @dbName = 'Learn';
SET @query_string = 'select TABLE_NAME from ' + CAST(@dbName AS NVARCHAR) +'.INFORMATION_SCHEMA.TABLES';
EXEC sys.sp_executesql @query_string;
```
|
How to use database name dynamically in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have got table that is looking like this:
```
+------+-------+-------------+
ID_Loc Type Data
+------+-------+-------------+
ABC RMKS Hello
ABC NAM Joe Smith
ABD NAM Peter Hill
ABD RMKS Bye Bye
ABD NAM Freddy Tall
ABE NAM Loran Bennett
ABE RMKS Bye Bye
ABF NAM Liv Claris
ABF RMKS Bye Bye
+------+-------+-------------+
```
And I need to select all ID\_Loc WHERE DATA NOT LIKE 'Hello'. When I tried:
```
SELECT distinct ID_loc FROM data_full WHERE DATA NOT LIKE '% Hello'
```
This also selects ID\_Loc: 'ABC', which contains 'Hello' in Data.
Also, as this is going to affect quite a lot of our rows, it would be nice if I could point the query to only look at the rows where Type RMKS is used.
I am using MS SQL Server 2008
SQL fiddle address is:
<http://sqlfiddle.com/#!6/38130/6>
Any help would be really appreciated.
|
If you need to select the `ID_Loc` values for which there is no record matching the `'%Hello'` pattern, here's the query to do it:
```
SELECT ID_loc
FROM data_full
group by ID_Loc
having max(case
when DATA LIKE '%Hello' then 1
else 0
end) = 0;
```
This is the result: <http://sqlfiddle.com/#!6/38130/33>
If you also need to apply the `Type = 'RMKS'` filter, you can do so in a `WHERE` clause ([sqlfiddle](http://sqlfiddle.com/#!6/38130/35)):
```
SELECT ID_loc
FROM data_full
where type = 'RMKS'
group by ID_Loc
having max(case
when DATA LIKE '%Hello' then 1
else 0
end) = 0;
```
|
You need to remove the space; I've added the RMKS filter as well.
```
SELECT distinct ID_loc
FROM data_full
WHERE Type= 'RMKS'
and DATA NOT LIKE '%Hello' --'%Hello%' if it can be between other text
```
EDIT:
If `ID_Loc,Type` isn't unique:
```
SELECT distinct ID_loc
FROM data_full
WHERE ID_loc NOT IN (SELECT ID_loc FROM data_full WHERE Type= 'RMKS'
and DATA LIKE '%Hello')
```
<http://sqlfiddle.com/#!6/59e9a/8>
|
SQL Select WHERE Column NOT LIKE across multiple lines
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I have created a sql query that updates certain tables taking a CSV file as the input.
I want my co-workers to be able to execute this query as easily as possible. At first, I thought a batch file using sqlcmd was the best solution.
The end product works on my computer, because I have SSMS installed, but no other computer is able to properly launch the batch file.
What is the best way for my end-users to run an sql query? I have thought/researched these solutions:
-Install SSMS or the required tools(don't want each user to have to do this.)
-Install Psexec tools to allow for remote batch launching (also don't like this.)
Is there a better way?
|
Why don't you create a C# or VB.Net program that executes the proc and distribute the program to your users?
|
Check SQLS\*Plus from www.memfix.com - works the best.
|
Alternatives to sqlcmd/best practice
|
[
"",
"sql",
"sql-server",
""
] |
Having a table with two columns, `PART_NUMBER` and `PART_TYPE`. A type can have many parts. The requirement is to find the unique combinations of the part numbers using a sql procedure. We are using Oracle as our DB. Sample data is as below:
```
PART_NO PART_TYPE
Part1 10
Part2 10
Part3 10
Part4 20
Part5 30
Part6 30
Part7 40
```
I want the output to be as follows.
```
1: Part1 Part4 Part5 Part7
2: Part1 Part4 Part6 Part7
3: Part2 Part4 Part5 Part7
4: Part2 Part4 Part6 Part7
5: Part3 Part4 Part5 Part7
6: Part3 Part4 Part6 Part7
```
i.e., each part from each type forms a single combination.
Please throw some light on this either in the form of algorithm or sql procedure.
|
Use a hierarchical query:
[SQL Fiddle](http://sqlfiddle.com/#!4/77585/2)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE parts AS
SELECT 'Part1' AS PART_NO, 10 AS PART_TYPE FROM DUAL
UNION ALL SELECT 'Part2' AS PART_NO, 10 AS PART_TYPE FROM DUAL
UNION ALL SELECT 'Part3' AS PART_NO, 10 AS PART_TYPE FROM DUAL
UNION ALL SELECT 'Part4' AS PART_NO, 20 AS PART_TYPE FROM DUAL
UNION ALL SELECT 'Part5' AS PART_NO, 30 AS PART_TYPE FROM DUAL
UNION ALL SELECT 'Part6' AS PART_NO, 30 AS PART_TYPE FROM DUAL
UNION ALL SELECT 'Part7' AS PART_NO, 40 AS PART_TYPE FROM DUAL;
```
**Query 1**:
```
WITH combinations AS (
SELECT SYS_CONNECT_BY_PATH( PART_NO, ' ' ) AS parts,
CONNECT_BY_ISLEAF AS leaf
FROM parts
START WITH PART_TYPE = 10
CONNECT BY PRIOR PART_TYPE + 10 = PART_TYPE
)
SELECT ROWNUM || ':' || parts AS output
FROM combinations
WHERE leaf = 1
```
**[Results](http://sqlfiddle.com/#!4/77585/2/0)**:
```
| OUTPUT |
|----------------------------|
| 1: Part1 Part4 Part5 Part7 |
| 2: Part1 Part4 Part6 Part7 |
| 3: Part2 Part4 Part5 Part7 |
| 4: Part2 Part4 Part6 Part7 |
| 5: Part3 Part4 Part5 Part7 |
| 6: Part3 Part4 Part6 Part7 |
```
**Edit Rob van Wijk**:
Since connect\_by\_isleaf is evaluated after the connect by, a slightly easier query is:
```
SQL> select rownum || ':' || sys_connect_by_path(part_no, ' ') as parts
2 from parts
3 where connect_by_isleaf = 1
4 connect by prior part_type + 10 = part_type
5 start with part_type = 10
6 /
PARTS
---------------------------------------------------------------------------------------
1: Part1 Part4 Part5 Part7
2: Part1 Part4 Part6 Part7
3: Part2 Part4 Part5 Part7
4: Part2 Part4 Part6 Part7
5: Part3 Part4 Part5 Part7
6: Part3 Part4 Part6 Part7
6 rows selected.
```
**Edit - Non-incremental `PART_TYPE`s**
[SQL Fiddle](http://sqlfiddle.com/#!4/a4dfd/2)
**Query 3**:
```
WITH part_types AS (
SELECT DISTINCT PART_TYPE
FROM parts
),
ordered_part_types AS (
SELECT PART_TYPE,
LEAD( PART_TYPE ) OVER ( ORDER BY PART_TYPE ) AS NEXT_PART_TYPE
FROM part_types
)
SELECT ROWNUM || ':' || SYS_CONNECT_BY_PATH( PART_NO, ' ' ) AS parts
FROM parts p
INNER JOIN
ordered_part_types t
ON ( p.PART_TYPE = t.PART_TYPE )
WHERE CONNECT_BY_ISLEAF = 1
START WITH p.PART_TYPE = ( SELECT MIN( PART_TYPE ) FROM parts )
CONNECT BY PRIOR NEXT_PART_TYPE = p.PART_TYPE
```
**[Results](http://sqlfiddle.com/#!4/a4dfd/2/0)**:
```
| PARTS |
|----------------------------|
| 1: Part3 Part4 Part6 Part7 |
| 2: Part3 Part4 Part5 Part7 |
| 3: Part2 Part4 Part6 Part7 |
| 4: Part2 Part4 Part5 Part7 |
| 5: Part1 Part4 Part6 Part7 |
| 6: Part1 Part4 Part5 Part7 |
```
|
You don't need PL/SQL for this. Just SQL.
```
with TAB as (
select 'Part1' as PART_NO, 10 as PART_TYPE from dual union all
select 'Part2', 10 from dual union all
select 'Part3', 10 from dual union all
select 'Part4', 20 from dual union all
select 'Part5', 30 from dual union all
select 'Part6', 30 from dual union all
select 'Part7', 40 from dual
),
CONSTANTS as (
select /*+ MATERIALIZE */
min( PART_TYPE ) as PART_TYPE,
count( distinct PART_TYPE ) as CNT
from TAB
)
select rownum || ':' || ANSWER as ANSWER
from ( select sys_connect_by_path( PART_NO, ' ' ) as ANSWER,
connect_by_isleaf as IS_LEAF,
level as L
from TAB
start with PART_TYPE = ( select PART_TYPE
from CONSTANTS )
connect by PART_TYPE > prior PART_TYPE )
where L = ( select CNT
from CONSTANTS )
and IS_LEAF = 1
```
|
SQL to find combinations within a column
|
[
"",
"sql",
"oracle",
""
] |
I have this table:
```
create table myTable (keyword text, category text, result text
, primary key (keyword,category));
insert into myTable values
('foo', 'A', '10'),
('bar', 'A', '200'),
('baz', 'A', '10'),
('Superman', 'B', '200'),
('Yoda', 'B', '10'),
('foo', 'C', '10');
```
I want to retrieve results according to tuples `(keyword,category)`. So basically, with one easy tuple I have the following query:
```
SELECT result FROM myTable WHERE keyword LIKE '%a%' AND category = 'A';
-- returns 10,200 as expected
```
But I can have as many tuples as I want. Extending this query for several tuples returns bad results:
```
SELECT result FROM myTable
WHERE ( keyword LIKE '%a%' AND category = 'A')
AND ( keyword LIKE '%Superman%' AND category = 'B');
-- expected 200; but returned no rows...
SELECT distinct result FROM myTable
WHERE ( keyword LIKE '%a%' AND category = 'A')
OR ( NOT(keyword LIKE '%Superman%') AND category = 'B');
-- expected 10; but returned 10,200...
```
That's pretty logical because PostgreSQL does not follow the operator order and parenthesis.
Only `OR` clauses are working. If I had only `OR` clauses, I'd use something like this:
```
SELECT result FROM myTable
INNER JOIN (VALUES
('foo','C'),
('Superman', 'B')
) t(keyword,category) USING (keyword,category); -- 10,200 as expected
```
But it works only for `OR` and for strict equality. In my case I want to use a `LIKE` equality and I want to use `AND`, `OR`, `AND NOT` and `OR NOT` between the different tuples.
More precisely, when I write:
```
SELECT result FROM myTable
WHERE ( keyword LIKE '%a%' AND category = 'A')
AND ( keyword LIKE '%Superman%' AND category = 'B');
-- expected 200; but returned no row
```
I mean I want the INTERSECTION of results obtained by the two clauses.
The first tuple return 10,200 and the second one 200. I want to return only 200 in this case.
Using an OR as suggesting in comments like this:
```
SELECT distinct result FROM myTable
WHERE ( keyword LIKE '%a%' AND category = 'A')
OR ( keyword LIKE '%Superman%' AND category = 'B');
```
returns 10,200, but that's not that I want ...
|
What you seem to be looking for is called **relational division**. The task could be phrased as:
*Find results that have at least one row matching these conditions:*
`keyword LIKE '%a%' AND category = 'A'`
***and*** at least one row matching these other conditions:
`keyword LIKE '%Superman%' AND category = 'B'`
A fast solution for conditions returning `DISTINCT` results:
```
SELECT DISTINCT result
FROM tbl t1
JOIN tbl t2 USING (result)
WHERE t1.keyword LIKE '%a%' AND t1.category = 'A'
AND t2.keyword LIKE '%Superman%' AND t2.category = 'B';
```
But since your filters can return multiple rows for each result, one of these will be **faster**:
```
SELECT result
FROM (
SELECT DISTINCT result
FROM tbl
WHERE keyword LIKE '%a%' AND category = 'A'
) t1
JOIN (
SELECT DISTINCT result
FROM tbl
WHERE keyword LIKE '%Superman%' AND category = 'B'
) t2 USING (result);
```
Or:
```
SELECT result
FROM (
SELECT DISTINCT result
FROM tbl
WHERE keyword LIKE '%a%' AND category = 'A'
) t
WHERE EXISTS (
SELECT 1
FROM tbl
WHERE result = t.result
AND keyword LIKE '%Superman%' AND category = 'B'
);
```
**SQL Fiddle.**
We have assembled an arsenal of query techniques under this related question:
* [How to filter SQL results in a has-many-through relation](https://stackoverflow.com/questions/7364969/how-to-filter-sql-results-in-a-has-many-through-relation/7774879)
|
I think you can also take a look at the documentation [SIMILAR TO](http://www.postgresql.org/docs/8.4/static/functions-matching.html)
You can do something like this
```
SELECT * from myTable where keyword SIMILAR TO '%(oo|ba)%' and category SIMILAR TO '(A)';
```
|
PostgreSQL multiple tuples selection in WHERE clause
|
[
"",
"sql",
"postgresql",
"where-clause",
"multiple-columns",
"relational-division",
""
] |
I am having a problem figuring out how to grab rows after the last marked value in a table.
```
id | f_id | pi | typeId
1 | 1 | 10 | 1
2 | 2 | 24 | 2
3 | 1 | 34 | 3
4 | 1 | 56 | 2
5 | 1 | 12 | 1
6 | 2 | 34 | 1
7 | 1 | 65 | 1
8 | 1 | 19 | 2
9 | 1 | 38 | 1
10 | 2 | 27 | 3
11 | 1 | 21 | 3
```
I need a MySQL query for f\_id=1 that returns the rows after the last typeId=2 (including the typeId=2 row), like below:
```
id | f_id | pi | typeId
1 | 1 | 19 | 2
2 | 1 | 38 | 1
3 | 1 | 21 | 3
```
|
Consider the following
```
mysql> create table test (f_id int, pi int, typeid int,timestamp datetime);
Query OK, 0 rows affected (0.13 sec)
mysql> insert into test values
-> (1,10,1, date_add(now(),interval 1 minute)),
-> (2,24,2, date_add(now(),interval 2 minute)),
-> (1,34,3,date_add(now(),interval 3 minute)),
-> (1,56,2,date_add(now(),interval 4 minute)),
-> (1,12,1,date_add(now(),interval 5 minute)),
-> (2,34,1,date_add(now(),interval 6 minute)),
-> (1,65,1,date_add(now(),interval 7 minute)),
-> (1,19,2,date_add(now(),interval 8 minute)),
-> (1,38,1,date_add(now(),interval 9 minute)),
-> (2,27,3,date_add(now(),interval 10 minute)),
-> (1,21,3,date_add(now(),interval 11 minute));
Query OK, 11 rows affected (0.08 sec)
Records: 11 Duplicates: 0 Warnings: 0
mysql> select * from test ;
+------+------+--------+---------------------+
| f_id | pi | typeid | timestamp |
+------+------+--------+---------------------+
| 1 | 10 | 1 | 2015-04-01 16:53:01 |
| 2 | 24 | 2 | 2015-04-01 16:54:01 |
| 1 | 34 | 3 | 2015-04-01 16:55:01 |
| 1 | 56 | 2 | 2015-04-01 16:56:01 |
| 1 | 12 | 1 | 2015-04-01 16:57:01 |
| 2 | 34 | 1 | 2015-04-01 16:58:01 |
| 1 | 65 | 1 | 2015-04-01 16:59:01 |
| 1 | 19 | 2 | 2015-04-01 17:00:01 |
| 1 | 38 | 1 | 2015-04-01 17:01:01 |
| 2 | 27 | 3 | 2015-04-01 17:02:01 |
| 1 | 21 | 3 | 2015-04-01 17:03:01 |
+------+------+--------+---------------------+
11 rows in set (0.00 sec)
```
The query first gets the last matching row by ordering on the timestamp column, then uses UNION ALL to get the remaining rows after it:
```
(
select * from test where f_id = 1 and typeid = 2 order by timestamp desc limit 1
)
union all
(
select * from test t1 where t1.f_id = 1 and t1.timestamp > ( select max(timestamp) from test t2 where t2.f_id = 1 and t2.typeid = 2 )
) ;
```
The result will be
```
+------+------+--------+---------------------+
| f_id | pi | typeid | timestamp |
+------+------+--------+---------------------+
| 1 | 19 | 2 | 2015-04-01 17:00:01 |
| 1 | 38 | 1 | 2015-04-01 17:01:01 |
| 1 | 21 | 3 | 2015-04-01 17:03:01 |
+------+------+--------+---------------------+
```
|
Try this. Maybe it will help you:
```
select id, max(f_id), max(pi), max(typeId)
from TABLE
where f_id=1 and typeID=1
group by id
order by max(f_id)
```
|
mysql query for getting rows after last specific mark on the list
|
[
"",
"mysql",
"sql",
""
] |
I currently have a SQL query set up, but I want it to ignore 0's in the `min_on_hand` column, and I can't figure out why this doesn't work:
```
SELECT
sku_master.sku,
sku_master.description,
sku_master.min_on_hand,
sku_master.max_on_hand,
x.total_qty_on_hand
FROM
[FCI].dbo.[sku_master]
LEFT JOIN
(SELECT
sku_master.sku,
sum(location_inventory.qty_on_hand) as total_qty_on_hand
FROM
[FCI].[dbo].[location_inventory]
JOIN
[FCI].dbo.[sku_master] ON location_inventory.sku = sku_master.sku
WHERE
sku_master.min_on_hand > 0
GROUP BY
sku_master.sku) x ON sku_master.sku = x.sku;
```
|
As others have mentioned in the comments, filtering on `min_on_hand` in the subquery has no effect - you'll still be returned the values in `sku_master`, but they just won't include any of the data from `x`.
If you move the check to the main query then you will not see any records where `min_on_hand` = 0
```
SELECT
sku_master.sku,
sku_master.description,
sku_master.min_on_hand,
sku_master.max_on_hand,
x.total_qty_on_hand
FROM
[FCI].dbo.[sku_master]
LEFT JOIN
(SELECT
sku_master.sku,
sum(location_inventory.qty_on_hand) as total_qty_on_hand
FROM
[FCI].[dbo].[location_inventory]
JOIN
[FCI].dbo.[sku_master] ON location_inventory.sku = sku_master.sku
GROUP BY
sku_master.sku) x ON sku_master.sku = x.sku
WHERE
sku_master.min_on_hand > 0
```
|
I used this query. It returns an inventory summary for all records where the minimum on-hand quantity exceeds 0.
```
/* Left join sku_master */
SELECT sku_master.sku, sku_master.description, sku_master.min_on_hand,
sku_master.max_on_hand,
location_inventory.qty_on_hand AS total_qty_on_hand
FROM sku_master LEFT OUTER JOIN location_inventory
ON sku_master.sku = location_inventory.sku
GROUP BY sku_master.sku, sku_master.description,
sku_master.min_on_hand, sku_master.max_on_hand,
location_inventory.qty_on_hand
HAVING (sku_master.min_on_hand > 0)
```
|
SQL with Left Join showing values that are zero
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm stuck on this:
I need to get the **guid** based on the minimum **ID** and remove all other duplicates (guid, ID).
ID is only unique field here.
```
+-----------------------------+------+-------------+
| guid | ID | post_parent |
+-----------------------------+------+-------------+
| 5.jpg | 7626 | 2418 |
| 3.jpg | 7625 | 2418 |
| 2.jpg | 5972 | 2418 |
| 2.jpg | 3000 | 2420 |
| 0.jpg | 3205 | 2420 |
| 9.jpg | 9205 | 2419 |
+-----------------------------+------+-------------+
```
So what I want:
```
+-----------------------------+------+-------------+
| guid | ID | post_parent |
+-----------------------------+------+-------------+
| 2.jpg | 5972 | 2418 |
| 2.jpg | 3000 | 2420 |
| 9.jpg | 9205 | 2419 |
+-----------------------------+------+-------------+
```
Sure, I may use something like this:
```
select guid,
ID
from wp_posts where ID = (
select MIN(ID) from wp_posts
where post_parent="2418");
```
But I need to get all the data from the table, not single rows.
|
```
select *
from wp_posts p,
(select min(ID) as id, post_parent
from wp_posts
group by post_parent) mins
where p.ID = mins.id and mins.post_parent = p.post_parent
```
|
Find the row with the minimum id
```
SELECT *
FROM wp_posts p
WHERE NOT EXISTS ( SELECT 'a'
FROM wp_posts p2
WHERE p2.post_parent = p.post_parent
AND p2.id < p.id
)
```
|
Return unique rows based on minimum id in mysql
|
[
"",
"mysql",
"sql",
""
] |
I have two tables in MySQL: one holds users and the other holds their subscriptions. There are users who sign up but do not subscribe. I use the following query to get the count of users that have no subscription:
```
SELECT COUNT(social_users.id) FROM social_users LEFT JOIN socials
ON social_users.id = socials.social_user_id
WHERE socials.social_user_id IS NULL
```
The above query works fine and returns the count of users that have no subscription.
Now, I want to get their percentage relative to the total number of users, so I tried to make two count SQL queries and divide them as follows:
```
(SELECT COUNT(social_users.id) FROM social_users LEFT JOIN socials ON
social_users.id = socials.social_user_id WHERE socials.social_user_id
IS NULL) / select count(social_users.id) FROM social_users
```
However, this query does not work. I need another way that allows me to embed the two selects, divide them, and then multiply the result by 100.
|
Try something like this:
```
Select ( (SELECT COUNT(social_users.id) FROM social_users LEFT JOIN socials ON
social_users.id = socials.social_user_id WHERE socials.social_user_id
IS NULL) / (select count(social_users.id) FROM social_users))*100 as Percentage
```
|
There should be no reason to query the same tables twice. If you take the user table, outer join it with the social table, and group by the user id, you can take a count of the subscriptions per user. Those with no subscription will have a count of zero.
Then just feed that into a query which essentially just counts the number of zeros, divides by the total number of users, *et voilà*:
```
SELECT Sum( case TotalSubs when 0 then 1 else 0 end ) NonSubscribers,
Count( user_id ) TotalUsers,
100 * Sum( case TotalSubs when 0 then 1 else 0 end ) /
Count( user_id ) Ratio
FROM(
select u.ID user_id, Count( s.ID ) TotalSubs
from social_users u
left join socials s
ON s.social_user_id = u.id
group by u.id ) x;
```
You also want the `100.0 *` to be the first value in the ratio equation. Consider:
```
select 1 / 3 * 100.0 PostAnswer, 100.0 * 1 / 3 PreAnswer;
```
returns
```
PostAnswer PreAnswer
0.0 33.33333
```
Either that or explicitly cast the two terms to be floating point values before dividing. Always be mindful when mixing integers and reals in equations.
|
SQL select percentage
|
[
"",
"mysql",
"sql",
""
] |
Let's say I have a search query like this:
```
SELECT COUNT(id), date(created_at)
FROM entries
WHERE date(created_at) >= date(current_date - interval '1 week')
GROUP BY date(created_at)
```
For example, I get a result back like this:
```
count | date
2 | 15.01.2014
1 | 13.01.2014
9 | 09.01.2014
```
But I do **not** get the days of the week where no entries were created.
How can I get a search result that looks like this, **including** the days where no entries were created?
```
count | date
2 | 15.01.2014
0 | 14.01.2014
1 | 13.01.2014
0 | 12.01.2014
0 | 11.01.2014
0 | 10.01.2014
9 | 09.01.2014
```
|
```
SELECT day, COALESCE(ct, 0) AS ct
FROM (SELECT now()::date - d AS day FROM generate_series (0, 6) d) d -- 6, not 7
LEFT JOIN (
SELECT created_at::date AS day, count(*) AS ct
FROM entries
WHERE created_at >= date_trunc('day', now()) - interval '6d'
GROUP BY 1
) e USING (day);
```
Use a [sargable](https://en.wikipedia.org/wiki/Sargable) expression for your `WHERE` condition, so Postgres can use a plain index on `created_at`. Far more important for performance than all the rest.
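For illustration, here is the difference between the two predicate styles on `created_at` from the question (sketch only):

```sql
-- Not sargable: the column is wrapped in a function,
-- so a plain index on created_at cannot be used
WHERE date(created_at) >= current_date - 6

-- Sargable: the bare column is compared against a computed constant,
-- so an index on created_at can be used
WHERE created_at >= date_trunc('day', now()) - interval '6d'
```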
To cover a week (including today), subtract 6 days from the start of "today", not 7.
Alternatively, shift the week by 1 to end "yesterday", as "today" is obviously incomplete, yet.
Assuming that `id` is defined `NOT NULL`, `count(*)` is identical to `count(id)`, but slightly faster. See:
* [Why is count(x.*) slower than count(*)?](https://dba.stackexchange.com/a/309924/3684)
A [CTE](https://www.postgresql.org/docs/current/queries-with.html) is not needed for the simple case. Would be slower and more verbose.
Aggregate first, join later. That's faster.
`now()` is Postgres' short syntax for the standard SQL [`CURRENT_TIMESTAMP`](https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT) (which you can use as well). See:
* [Difference between now() and current\_timestamp](https://dba.stackexchange.com/a/63549/3684)
This should be the shortest and fastest query. Test with `EXPLAIN ANALYZE`.
Related:
* [Selecting sum and running balance for last 18 months with generate\_series](https://stackoverflow.com/questions/27670068/selecting-sum-and-running-balance-for-last-18-months-with-generate-series/27670540#27670540)
* [PostgreSQL: running count of rows for a query 'by minute'](https://stackoverflow.com/questions/8193688/postgresql-running-count-of-rows-for-a-query-by-minute/8194088#8194088)
|
Try this query:
```
with a as (select current_date - n as dt from generate_series(0, 6) as t(n)),
b as (select count(id) cnt, date(created_at) created_at
from entries
where date(created_at) >= date(current_date - interval '1 week')
group by date(created_at))
select coalesce(b.cnt,0), a.dt
from a
left join b on (a.dt = b.created_at)
order by a.dt;
```
`count` function will not generate 0 for non-existing rows. So you have to fill the rows for missing dates. With `generate_series` and simple date arithmetic, you can generate rows for dates of some period (in this case, 1 week). Then you can outer join to generate the final result. `coalesce` will substitute `null` to `0`.
|
Get count of created entries for each day
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"generate-series",
""
] |
I have a table formatted like the following; we'll call it "Payments":
```
P_ID | P_PayNum | P_AmtPerc | P_Type
1 | 1 | 100 | FP1
2 | 1 | 50 | 2P1
3 | 2 | 50 | 2P1
4 | 1 | 25 | 4P1
5 | 2 | 0 | 4P1
6 | 3 | 0 | 4P1
7 | 4 | 0 | 4P1
```
With the table I am working with, I am trying to come up with the best way to **create an UPDATE script using @TempTables and inserts** that finds all rows where AmtPerc = 0, takes the current non-zero value (in this case the 25 for the 4P1 type), and computes the remaining percentage for the 3 rows that are 0 (which would all be 25; 25\*4 = 100).
There are a couple of entries in this table with payment plans of quarterly, semi-quarterly, semi-annually, bi-monthly, etc. Those same few also suffer from an issue where only the first partial-percent is present (the 1st payment) and the rest are 0. So I am trying to find the best way to dynamically find all the 0's, find the 1st payment that represent that set and have it update those to the correct percentages that would total to 100.
*I am honestly unsure of how to word this any better and hopeful someone understands what I mean.
If there is a better way to phrase or parse this question, then feel free to modify it to make more sense and maybe the code discovered can help someone else with figuring out remaining partial percents totaling to 100.*
|
It sounds like you want to not just compute the missing payment percentages, but also to update the base table with them. In SQL Server, you can accomplish that this way:
```
UPDATE p
SET p.P_AmtPerc = r.remaining_perc / r.num_zero
FROM
payments p
INNER JOIN (
SELECT
P_Type,
100 - SUM(P_AmtPerc) AS remaining_perc,
SUM(case P_AmtPerc when 0 then 1 else 0 end) as num_zero
FROM payments
GROUP BY P_Type
HAVING SUM(P_AmtPerc) < 100
AND SUM(case P_AmtPerc when 0 then 1 else 0 end) > 0
) r
ON p.P_Type = r.P_Type
WHERE p.P_AmtPerc = 0
```
You will recognize the inline view as similar to the queries presented in the two other answers posted so far. It computes for each payment type the percentage remaining to be allocated and the number of payment rows among which to split it, filtering out any payment types for which (at least) 100% payment is already allocated, or for which there are no rows with zero payment specified.
The rest of the query is proprietary SQL Server syntax for, basically, updating a table via a view. It updates only those rows that have `P_AmtPerc = 0` **and** have a corresponding row in the inline view. In particular, if there is a payment type whose recorded payments add up to at least 100, but that also has some zero-percent payments, then no rows for that payment type are updated. It ignores any non-zero payment percentages, splitting the balance among the zero-percent payments instead of making them all match the first payment.
|
Took me a moment to wrap my noodle around this one, you have different payment codes (at least for this example), but when all payments for a given code are made the percentage should equal out to 100%.
What you want is to find unpaid payments, where at least one payment has been made, and figure out how much is left to pay.
```
Select AccountNumber --I imagine this will be replaced by an account PK or item PK
, 100 - sum(P_amtPerc) as RemainingPercent
, sum(case when P_amtPerc = 0 then 1 else 0 end) as RemainingPayments
From Payments
Group By AccountNumber --Once again this is a stand in for your real PK
having sum(P_amtPerc) < 100
```
What this does is we use a group statement to allow us to aggregate the rest of the table, in this case we want to see how many payments are yet to come, and what percent is left. We use the having clause to filter out accounts that are fully paid (eg: 100%), so we are sure we are looking only at accounts with payments outstanding, or partial payments. Hopefully that's on the right track for you.
|
How to get remaining percentages for column?
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I work in an Oracle instance that has hundreds of schemas and multiple developers. We have a development instance where developers can integrate their work before test or production.
We want to have source control for all the DDL run in this integrated development database. Currently this is done through a product Red Gate which we run manually after we make a change to the database. Redgate finds the changes between what is in the schema and what was last checked into source control and makes a script of the differences and puts this into source control.
The problem, however, is that running Redgate can take some time, and people run it infrequently or not at all for small changes. Also, Redgate will only look at one schema at a time, and it would be VERY time-consuming to manually run it against all schemas to guarantee that they are up to date. However, if the source-controlled code cannot be relied upon, it becomes less useful...
What would seem to be ideal would be to have some software that could periodically (even once a day), or when triggered by DDL being run, update the source control (preferably github as this is used by other teams) from all the schemas.
I cannot seem to see any existing software which can be simply used to do this.
Is there a problem with doing this? (there is no need to address multiple developers overwriting each others work on the same day as we have this covered in a separate process) Is anyone doing this? Can anyone recommend a way to do this?
|
We do this with help of a PL/SQL function, a python script and a shell script:
* The PL/SQL function can generate the DDL of a whole schema and returns this as CLOB
* The python script connects to the database, fetches the DDL and stores it in files
* The shell script runs the Source Control to add the modifications (we use Bazaar here).
You can see the scripts on PasteBin:
* The PL/SQL function is here: <http://pastebin.com/AG2Fa9zL>
* The python program (schema\_exporter.py): <http://pastebin.com/nd8Lf0gK>
The shell script:
```
python schema_exporter.py
d=$(date +%Y-%m-%d__%H_%M_%S)
bzr add
bzr st | grep -q -E 'added|modified' && bzr commit -m "Database objects on $d"
exit 0
```
This shell script is configured to run from cron every day.
|
Being in the database version control space for 5 years (as director of product management at [DBmaestro](http://www2.dbmaestro.com/l/11742/2014-12-31/2grnfp)) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat the database objects as you treat your Java, C# or other files and save the changes in simple DDL scripts.
There are many reasons and I'll name a few:
* Files are stored locally on the developer’s PC and the changes s/he
makes do not affect other developers. Likewise, the developer is not
affected by changes made by her colleague. In a database this is
(usually) not the case: developers share the same database
environment, so any change committed to the database affects
others.
* Publishing code changes is done using Check-In / Submit Changes /
etc. (depending on which source control tool you use). At that point,
the code from the local directory of the developer is inserted into
the source control repository. A developer who wants to get the latest
code needs to request it from the source control tool. In a database the
change already exists and impacts others even if it was not
checked in to the repository.
* During the file check-in, the source control tool performs a conflict
check to see if the same file was modified and checked in by another
developer while you modified your local copy. Again, there
is no such check in the database. If you alter a procedure from
your local PC and at the same time I modify the same procedure with
code from my local PC, then we override each other’s changes.
* The build process for code is done by getting the label / latest
version of the code into an empty directory and then performing a build
(compile). The output is binaries, which we copy over the existing
ones; we don't care what was there before. With a database we cannot
recreate it from scratch, as we need to preserve the data! Also, the
deployment executes SQL scripts which were generated in the build
process.
* When executing the SQL scripts (with the DDL, DCL and DML (for static
content) commands) you assume the current structure of the
environment matches the structure assumed when you created the
scripts. If not, your scripts can fail, for example when you try to
add a new column which already exists.
* Treating SQL scripts as code and manually generating them will cause
syntax errors, database dependency errors, and scripts that are not
reusable, which complicates the tasks of developing, maintaining and
testing those scripts. In addition, those scripts may run on an
environment which is different from the one you thought they would run
on.
* Sometimes the script in the version control repository does not match
the structure of the object that was tested and then errors will
happen in production!
There are many more, but I think you got the picture.
What I found that works is the following:
1. Use an enforced version control system that enforces
check-out/check-in operations on the database objects. This will
make sure the version control repository matches the code that was
checked in, as it reads the metadata of the object during the check-in
operation rather than in a separate manual step. This also allows
several developers to work in parallel on the same database while
preventing them from accidentally overriding each other's code.
2. Use impact analysis that utilizes baselines as part of the
comparison to identify conflicts, and to decide whether a difference
(when comparing the object's structure between the source control
repository and the database) is a real change originating from
development, or a difference that originated from a different path
(such as a different branch or an emergency fix) and should therefore
be skipped.
3. Use a solution that knows how to perform impact analysis for many
schemas at once, using a UI or an API, in order to eventually
automate the build & deploy process.
An article I wrote on this was published [here](http://www2.dbmaestro.com/l/11742/2014-12-31/2grnfr), you are welcome to read it.
|
How to automate source control with Oracle database
|
[
"",
"sql",
"plsql",
"oracle11g",
"oracle12c",
""
] |
Guys I have the following query:
```
select count(*) AS num_items
from desc_look
group by dept
```
This query returns the number of items in each department. However, I can't get ONLY the department with the highest number of items.
I've been trying to use this to retrieve the name and id of the department with the most items
```
select dept, desc_up
from desc_look
where (select count(*) AS num_items
from desc_look
group by dept)
```
However I keep getting an error ORA-00936 and I don't know why.
I know I can't use MAX(COUNT(\*)), but is there a way to work around this?
|
```
select * from
(
select dept, count(*) AS num_items
from desc_look
group by dept
order by count(*) desc
) tmp
WHERE ROWNUM = 1;
```
Also have a look at [how to limit the records in Oracle](https://stackoverflow.com/questions/470542/how-do-i-limit-the-number-of-rows-returned-by-an-oracle-query-after-ordering).
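For illustration, here is the same group-and-limit idea run against SQLite from Python. SQLite has no `ROWNUM`, so `LIMIT 1` takes its place; the table and data are made up:

```python
import sqlite3

# Made-up data: department B has the most items.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE desc_look (dept TEXT, desc_up TEXT)")
conn.executemany("INSERT INTO desc_look VALUES (?, ?)",
                 [("A", "a1"), ("A", "a2"),
                  ("B", "b1"), ("B", "b2"), ("B", "b3"),
                  ("C", "c1")])

# Count per department, sort descending, keep the first row.
row = conn.execute("""
    SELECT dept, COUNT(*) AS num_items
    FROM desc_look
    GROUP BY dept
    ORDER BY num_items DESC
    LIMIT 1
""").fetchone()
```

The ordering plus a single-row limit is the portable way to express "MAX of a COUNT".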
|
This version is basically the same as juergen's, but using an analytic function instead of an aggregate (`GROUP BY`) for counting:
```
SELECT t.dept, t.desc_up FROM
(SELECT dept, desc_up,
COUNT(*) over (partition BY dept) dept_count
FROM desc_look
ORDER BY dept_count DESC
) t
WHERE rownum = 1
```
If you're on Oracle 12, the inline view is not needed because you can use the row-limiting clause (`FETCH FIRST ...`):
```
SELECT dept, desc_up,
COUNT(*) over (partition BY dept) dept_count
FROM desc_look
ORDER BY dept_count DESC
FETCH FIRST 1 ROW ONLY
```
|
Query to get the "MAX COUNT"
|
[
"",
"sql",
"oracle",
""
] |
I need to update a column (type of `datetime`) in the top 1000 rows my table. However the catch is with each additional row I must increment the `GETDATE()` by 1 second... something like `DATEADD(ss,1,GETDATE())`
The only way I know how to do this is something like this:
```
UPDATE tablename
SET columnname = CASE id
WHEN 1 THEN DATEADD(ss,1,GETDATE())
WHEN 2 THEN DATEADD(ss,2,GETDATE())
...
END
```
Obviously this is not plausible. Any ideas?
|
I don't know what your ID is like, and I'm assuming you have at least SQL Server 2008, or else the multi-row `VALUES` insert below won't work (`ROW_NUMBER()` itself needs 2005+).
Note: I used TOP 2 to show that the TOP works. You can change it to TOP 1000 for your actual query.
```
DECLARE @table TABLE (ID int, columnName DATETIME);
INSERT INTO @table(ID)
VALUES(1),(2),(3);
UPDATE @table
SET columnName = DATEADD(SECOND,B.row_num,GETDATE())
FROM @table A
INNER JOIN
(
SELECT TOP 2 *, ROW_NUMBER() OVER (ORDER BY ID) row_num
FROM @table
ORDER BY ID
) B
ON A.ID = B.ID
SELECT *
FROM @table
```
Results:
```
ID columnName
----------- -----------------------
1 2015-03-31 13:11:59.760
2 2015-03-31 13:12:00.760
3 NULL
```
|
How about using `id` rather than a constant?
```
UPDATE tablename
SET columnname = DATEADD(second, id, GETDATE() )
WHERE id <= 1000;
```
If you want the first 1000 rows (by `id`), but the `id` has gaps or other problems, then you can use a `CTE`:
```
with toupdate as (
select t.*, row_number() over (order by id) as seqnum
      from tablename t
)
update toupdate
set columnname = dateadd(second, seqnum, getdate())
where seqnum <= 1000;
```
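A rough sketch of the first query's idea in SQLite via Python. SQLite's `datetime(..., '+N seconds')` stands in for `DATEADD`, and a fixed base time replaces `GETDATE()` so the result is reproducible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (id INTEGER, columnname TEXT)")
conn.executemany("INSERT INTO tablename (id) VALUES (?)", [(1,), (2,), (3,)])

# Each row's id becomes its offset in seconds from the base time.
base = "2015-03-31 13:00:00"
conn.execute(
    "UPDATE tablename "
    "SET columnname = datetime(?, '+' || id || ' seconds') "
    "WHERE id <= 1000",
    (base,),
)
rows = conn.execute("SELECT id, columnname FROM tablename ORDER BY id").fetchall()
```

Every row lands one second apart, exactly as the `DATEADD(second, id, GETDATE())` version would.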
|
Update table with new value for each row
|
[
"",
"sql",
"sql-server",
""
] |
I get an error when I execute the query below. What am I doing wrong?
Msg 512, Level 16, State 1, Line 3
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
```
select
so.name 'Table Name'
,so.id 'Table ID'
,so.xtype
,sc.name 'Column Name'
,sc.id 'Column ID'
,sf.constid
,sf.fkeyid 'Object ID of the table with FOREIGN KEY'
,sf.rkeyid 'Referenced Table ID'
,(select o.name 'Referenced Table'
from sysforeignkeys f
inner join sysobjects o
on o.id=f.rkeyid
where o.xtype='U')
from sysobjects so
inner join syscolumns sc
on so.id=sc.id
inner join sysforeignkeys sf
on so.id=sf.fkeyid
where so.xtype='U'
and (sc.name like 'SSN'
OR sc.name LIKE 'ssn%'
OR sc.name LIKE 'ssn%'
OR sc.name LIKE '%_ssn%'
OR sc.name LIKE '_ocsecno'
OR sc.name LIKE 'Ssn%');
```
|
I don't think your subquery is correct, as it has no way of referencing your `sysobjects` alias `so`. Try this instead. Also, I don't think you need such a long where clause.
```
select so.name [Table Name]
,so.id [Table ID]
,so.xtype
,sc.name [Column Name]
,sc.id [Column ID]
,sf.constid
,sf.fkeyid [Object ID of the table with FOREIGN KEY]
,sf.rkeyid [Referenced Table ID]
,zz.name [Reference Table]
from sysobjects so
inner join syscolumns sc on so.id = sc.id
inner join sysforeignkeys sf on so.id = sf.fkeyid
--Use a join here for the reference table column
inner join sysobjects zz on zz.id = sf.rkeyid
where so.xtype='U'
AND(
sc.name LIKE '%ssn%'
OR sc.name LIKE '_ocsecno'
)
```
|
```
select so.name [Table Name]
,so.id [Table ID]
,so.xtype
,sc.name [Column Name]
,sc.id [Column ID]
,sf.constid
,sf.fkeyid [Object ID of the table with FOREIGN KEY]
,sf.rkeyid [Referenced Table ID]
,(select TOP 1 o.name
from sysforeignkeys f
inner join sysobjects o on o.id=f.rkeyid
where o.xtype='U') AS [Referenced Table]
from sysobjects so
inner join syscolumns sc on so.id = sc.id
inner join sysforeignkeys sf on so.id = sf.fkeyid
where so.xtype='U'
and ( sc.name like 'SSN' --<-- LIKE with no wildcard is just an equality check
OR sc.name LIKE 'ssn%'
OR sc.name LIKE 'ssn%'
OR sc.name LIKE '%_ssn%'
OR sc.name LIKE '_ocsecno'
OR sc.name LIKE 'Ssn%');
```
**Important Note**
A sub-query in a SELECT list MUST return a scalar value; add `TOP 1` to your sub-query and it should fix the error.
Also use Square Brackets `[]` for column names instead of literal strings.
|
Please help me correct the syntax error
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to export each table to its own csv file and name the csv file with a timestamp. For example, if I have a table `t1`, after the export a csv file named t1.YYYYMMDDHHMISS.csv is generated. Here is my code:
```
create or replace procedure
T_to_CSV(Tname varchar2,Directory varchar2)
BEGIN
set colsep , -- separate columns with a comma
set pagesize 0 -- No header rows
set trimspool on -- remove trailing blanks
set headsep off -- this may or may not be useful...depends on your
spool timestamp.csv --don't know how to write in directory
select *
from Tname
end
```
Here are my questions:
(1) I don't know how to output the csv file into the required directory. How should I fix the spool line?
Should I use `spool D:\path\filename.csv`?
(2) In the spool line, how do I change the name of the csv file to include the current timestamp?
|
There are a few steps:
* create a [directory](http://www.adp-gmbh.ch/ora/sql/create_directory.html) using `CREATE DIRECTORY my_dir AS 'C:\dir';`
* make sure that Oracle has read,write on the folder on the computer (best accomplished by creating it in the Oracle install folder)
* grant the user executing the procedure `GRANT read,write on DIRECTORY my_dir to the_user;`
* download and compile the handy procedure [here](http://www.oracle-developer.net/content/utilities/data_dump.sql)
I have used this and it works really nicely.
Usage
```
data_dump ( 'Select emp_name from emp',
CURRENT_TIMESTAMP||'filename.csv',
my_dir);
```
(vastly simplified sample!)
After creating the directory verify your work by running this:
* run `SELECT * FROM ALL_DIRECTORIES;` — you should see your directory
* log on to the machine where the database is located and verify that the folder path exists and the oracle user has permissions on it. Networked drives are only possible if the user running the Oracle service has permissions on that folder
|
Thanks Kevin for sharing the procedure; it was very useful for me. I have customized the code:
1. To add the column names to the output csv file, which was not working earlier.
2. When I passed the delimiter as a parameter it was adding a comma at the start of every row (,1,2,3); this has been corrected.
I am also sharing the customized code, which might help others. The customized code can be downloaded [here](https://www.dropbox.com/s/7v1n2hz13cddlp9/customized_data_dump.sql?dl=0).
1. Customized Code to add column names
> ```
> FOR i IN t_describe.FIRST .. t_describe.LAST LOOP
> IF i <> t_describe.LAST THEN put('UTL_FILE.PUT(v_fh,'''||t_describe(i).col_name||'''||'''||v_delimiter||''');');
> ELSE
> put(' UTL_FILE.PUT(v_fh,'''||t_describe(i).col_name||''');');
> END IF;
> END LOOP;
> put(' UTL_FILE.NEW_LINE(v_fh);');
> ```
2. Customized Code for delimiter

> ```
> IF i <> t_describe.LAST THEN
>   put(' UTL_FILE.PUT(v_fh,"'||t_describe(i).col_name||'"(i) ||'''||v_delimiter||''');');
> ELSE
>   put(' UTL_FILE.PUT(v_fh,"'||t_describe(i).col_name||'"(i));');
> END IF;
> ```

And the correct way to call the procedure is to bind each parameter to its value:

```
data_dump(query_in => 'Select 1 from dual', file_in => 'file.csv', directory_in => 'MY_DIR', delimiter_in => '|');
```
Thanks
Naveen
|
Export table to csv file by using procedure (csv name with timestamp)
|
[
"",
"sql",
"oracle",
"stored-procedures",
""
] |
I am trying to write a query to find whether a string contains part of the value in a column (not to be confused with the query to find whether a column contains part of a string).
Say for example I have a column in a table with values
> ABC,XYZ
If I give search string
> ABCDEFG
then I want the row with **ABC** to be displayed.
If my search string is **XYZDSDS** then the row with value **XYZ** should be displayed
|
The answer would be "use LIKE".
See the documentation: <https://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html>
You can do `WHERE 'string' LIKE CONCAT(column , '%')`
Thus the query becomes:
```
select * from t1 where 'ABCDEFG' LIKE CONCAT(column1,'%');
```
If you need to match anywhere in the string:
```
select * from t1 where 'ABCDEFG' LIKE CONCAT('%',column1,'%');
```
Here you can see it working in a fiddle:
<http://sqlfiddle.com/#!9/d1596/4>
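The same reversed-`LIKE` trick works in SQLite, which uses `||` for concatenation instead of MySQL's `CONCAT()`. A minimal sketch from Python with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (column1 TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [("ABC",), ("XYZ",), ("QRS",)])

# The search string is the left operand; the column builds the pattern.
match = conn.execute(
    "SELECT column1 FROM t1 WHERE ? LIKE column1 || '%'",
    ("ABCDEFG",),
).fetchall()
```

Only the row whose value is a prefix of the search string comes back.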
|
```
Select * from table where @param like '%' + col + '%'
```
|
SQL - Query to find if a string contains part of the value in Column
|
[
"",
"mysql",
"sql",
"string",
"contains",
""
] |
Let's say I have a table like this:
```
id | peru | usa
1 20 10
2 5 100
3 1 5
```
How can I get the top values from `peru` and `usa` as well as the specific ids, so that I get this result:
```
usa_id: 2 | usa: 100 | peru_id: 1 | peru: 20
```
Is this possible in **one** query? Or do I have to do two `ORDER BY` queries?
I'm using PostgreSQL.
|
You can do this with some subqueries and a cross join:
```
select
u.id usa_id,
u.usa,
p.id peru_id,
p.peru
from
(select id, usa from mytable where usa=(select max(usa) from mytable) order by id limit 1) u
cross join (select id, peru from mytable where peru=(select max(peru) from mytable) order by id limit 1) p
;
```
In the case that there are multiple rows with the same max value (for usa or peru, independently), this solution will select the one with the lowest id (I've assumed that id is unique).
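A sketch of this approach in SQLite from Python, with the sample data from the question. The only liberty taken is using `ORDER BY ... DESC LIMIT 1` in each subquery in place of the `max()` lookup, which selects the same row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, peru INTEGER, usa INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [(1, 20, 10), (2, 5, 100), (3, 1, 5)])

# One subquery per column picks its top row; CROSS JOIN glues them together.
row = conn.execute("""
    SELECT u.id AS usa_id, u.usa, p.id AS peru_id, p.peru
    FROM (SELECT id, usa  FROM mytable ORDER BY usa  DESC, id LIMIT 1) u
    CROSS JOIN
         (SELECT id, peru FROM mytable ORDER BY peru DESC, id LIMIT 1) p
""").fetchone()
```

Because each subquery returns exactly one row, the cross join produces exactly one combined row.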
|
```
SELECT
t1.id as peru_id, t1.peru
, t2.id as usa_id, t2.usa
FROM tab1 t1, tab1 t2
ORDER BY t1.peru desc, t2.usa desc
limit 1
```
<http://sqlfiddle.com/#!15/0c12f/6>
Basically this does a simple Cartesian product, so I guess performance WILL be poor for large datasets.
On the fiddle it took [196 ms for a 1k-row table](http://sqlfiddle.com/#!15/cdbf9/2). On a 10k-row table, sqlFiddle hung up.
|
Get top values from two columns
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables named `doctor` and `patient`
Here is my `doctor` table

Here is my `patient` table

First I wanted to retrieve a list of doctor IDs together with the number of appointments made for each doctor with one or more appointments.
I can do it with below query,
```
Select d.doctor_id, count(p.patient_id)
from patient p, doctor d
where d.doctor_id=p.registered_with
group by d.doctor_id;
```
Result:

But now I need to modify the previous query to include any doctors with no appointments too (Ex: The doctor with id 51 doesn't have any appointments. I want to include him in the above result)
How can I do this?
PS: If the title is misleading, someone please correct it for me
|
You are using an implicit join.
You need a `left join`, as in:
```
Select
d.doctor_id,
coalesce(count(p.patient_id),0) as `patient_count`
from doctor d
left join patient p on d.doctor_id=p.registered_with
group by d.doctor_id;
```
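To see the `LEFT JOIN` behavior concretely, here is a minimal sketch in SQLite from Python with made-up doctors and patients; `COUNT(p.patient_id)` skips the NULLs produced by the join, so a doctor with no appointments gets 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doctor (doctor_id INTEGER)")
conn.execute("CREATE TABLE patient (patient_id INTEGER, registered_with INTEGER)")
conn.executemany("INSERT INTO doctor VALUES (?)", [(50,), (51,), (52,)])
conn.executemany("INSERT INTO patient VALUES (?, ?)",
                 [(1, 50), (2, 50), (3, 52)])

# COUNT(p.patient_id) ignores NULLs, so unmatched doctors count as 0.
rows = conn.execute("""
    SELECT d.doctor_id, COUNT(p.patient_id) AS patient_count
    FROM doctor d
    LEFT JOIN patient p ON d.doctor_id = p.registered_with
    GROUP BY d.doctor_id
    ORDER BY d.doctor_id
""").fetchall()
```

Doctor 51 appears with a count of 0 even though no patient row references it.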
|
you can do it with `right outer join`
```
Select d.doctor_id, count(p.patient_id)
from patient p right outer join doctor d
on d.doctor_id=p.registered_with
group by d.doctor_id;
```
|
Retrieve records with no matching rows in joined table
|
[
"",
"mysql",
"sql",
"count",
"aggregate-functions",
""
] |
I need to implement regular expression matching (as I understand it) in PostgreSQL 8.4. It seems regular expression matching is only available in 9.0+.
My need is:
When I give an input `14.1` I need to get these results:
```
14.1.1
14.1.2
14.1.Z
...
```
But exclude:
```
14.1.1.1
14.1.1.K
14.1.Z.3.A
...
```
The pattern is not limited to a single character. There is always a possibility that a pattern like this will be presented: `14.1.1.2K`, `14.1.Z.13.A2` etc., because the pattern is provided by the user. The application has no control over the pattern (**it's not a version number**).
Any idea how to implement this in Postgres 8.4?
After one more question my issue was solved:
[Escaping a LIKE pattern or regexp string in Postgres 8.4 inside a stored procedure](https://stackoverflow.com/questions/29411869/escaping-a-like-pattern-or-regexp-string-in-postgres-8-4-inside-a-stored-procedu)
|
[Regular expression matching](https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP) has been in Postgres practically for ever, at least since version 7.1. Use [these operators](https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-TABLE):
```
~ !~ ~* !~*
```
For an overview, see:
* [Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL](https://dba.stackexchange.com/q/10694/3684)
The point in your case seems to be to disallow more dots:
```
SELECT *
FROM tbl
WHERE version LIKE '14.1.%' -- for performance
AND version ~ '^14\.1\.[^.]+$'; -- for correct result
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_15&fiddle=f7e5ba28e111d771bb9acbc046ac2c0d)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/0a28f/420)
The `LIKE` expression is redundant, but it is going to improve performance dramatically, even without an index. You should have an index, of course.
The `LIKE` expression can use a basic `text_pattern_ops` index, while the regular expression cannot, at least in Postgres 8.4.
Or with COLLATE "C" since Postgres 9.1. See:
* [Is there a difference between text\_pattern\_ops and COLLATE "C"?](https://dba.stackexchange.com/a/291250/3684)
* [PostgreSQL LIKE query performance variations](https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528)
`[^.]` in the regex pattern is a character class that excludes the dot (`.`). So more characters are allowed, just no more dots.
### Performance
To squeeze out top performance for this particular query you could add a specialized index:
```
CREATE INDEX tbl_special_idx ON tbl
((length(version) - length(replace(version, '.', ''))), version text_pattern_ops);
```
And use a matching query, the same as above, just replace the last line with:
```
AND length(version) - length(replace(version, '.', '')) = 2
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=4a3115a417fe8ace1ce4ac66661aadf4)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/8fe99b/1)
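The core of the pattern can be sanity-checked outside the database. Python's `re` module is close enough to POSIX ERE for this particular expression; a quick sketch using the sample values from the question:

```python
import re

# [^.] matches any character except a dot, so exactly one more
# dotted segment after "14.1" is allowed.
pattern = re.compile(r'^14\.1\.[^.]+$')

candidates = ["14.1.1", "14.1.2", "14.1.Z",
              "14.1.1.1", "14.1.1.K", "14.1.Z.13.A2"]
accepted = [s for s in candidates if pattern.match(s)]
```

Values with a second dotted segment after `14.1` are rejected, matching the wanted/excluded lists in the question.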
|
You can't do regex matching, but I believe you can do like operators so:
```
SELECT * FROM table WHERE version LIKE '14.1._';
```
Will match any row with a version of '14.1.' followed by a single character. This should match your examples. Note that this will not match just '14.1', if you needed this as well. You could do this with an OR.
```
SELECT * FROM table WHERE version LIKE '14.1._' OR version = '14.1';
```
|
String matching in PostgreSQL
|
[
"",
"sql",
"regex",
"postgresql",
"pattern-matching",
"postgresql-8.4",
""
] |
I am very new to SQL and I'm attempting to work through some exercises and I am having some trouble with the following problem:
There is a database consisting of six tables:
1. Author(AUTHOR ID, AUTHOR NAME, AUTHOR FIRSTNAME)
2. Book(BOOK ID, ISBN, TITLE, EDITOR, EDITION DATE) which contains the list of books (ISBN attribute is the key).
3. Written(AUTHOR ID, BOOK ID) which is the list of books written by each author.
4. Copy(BOOK ID, COPY ID, STATUS, ACQ DATE) which contains the list of copies for each book.
5. Borrower(USER ID, USER NAME, USER FIRSTNAME, USER ADDRESS) which contains the list of borrowers.
6. Borrowing(BOOK ID, COPY ID, USER ID, BORROWING DATE) which contains the list of
borrowing in the library.
With these tables I am trying to accomplish a few queries:
1. Search the titles of books written by Ernest Hemingway
2. Search the id, name and first name of borrowers which borrow the book with id 10
3. Search the number currently borrowing.
4. Then for each book, search the id, title and number of copies which have the status borrowed
For the first one I used:

```
SELECT title FROM Book
WHERE Author='Ernest Hemingway'
```

For the third:

```
SELECT COUNT(DISTINCT book id) FROM Borrowing
```
I'm a little stumped on the remaining two (I'm a little confused on the join method) and I wanted to make sure these other two queries were correct as well. Any help is much appreciated.
Thanks guys!
|
To start with the join you can use [this schema](http://www.codeproject.com/KB/database/Visual_SQL_Joins/Visual_SQL_JOINS_orig.jpg).
|
The answer to the first question:
```
SELECT
title
FROM book b, author a, written w
WHERE b.bookid = w.bookid AND a.authorid = w.authorid AND authorname = 'Ernest Hemingway';
```
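The same query written with explicit `JOIN ... ON` syntax, sketched against SQLite from Python. The schema is simplified and the data made up; note the exercise's schema splits the name into `authorname` and `authorfirstname`, so the filter below matches on the surname only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author  (authorid INTEGER, authorname TEXT, authorfirstname TEXT);
    CREATE TABLE book    (bookid INTEGER, title TEXT);
    CREATE TABLE written (authorid INTEGER, bookid INTEGER);
    INSERT INTO author  VALUES (1, 'Hemingway', 'Ernest'), (2, 'Austen', 'Jane');
    INSERT INTO book    VALUES (10, 'The Old Man and the Sea'), (11, 'Emma');
    INSERT INTO written (authorid, bookid) VALUES (1, 10), (2, 11);
""")

# Join book -> written -> author, then filter on the author's name.
titles = conn.execute("""
    SELECT b.title
    FROM book b
    JOIN written w ON b.bookid = w.bookid
    JOIN author  a ON a.authorid = w.authorid
    WHERE a.authorname = 'Hemingway'
""").fetchall()
```

The explicit `JOIN ... ON` form is equivalent to the comma-list plus `WHERE` version, just easier to read.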
|
SQL Join and select
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I have two columns having data like below.
Column1: `AMC Standard, School`
Column2: `AMC Standard School.`
I need to compare these two columns so that only the words are compared, not the punctuation. From the above example, Column1 and Column2 match, but due to the comma "," and the period "." a simple comparison of Column1 and Column2 reports a mismatch.
|
You can replace the non-comparable characters with an empty string (in your case `,` and `.`) and then compare. Something like this:
```
SELECT 1 WHERE REPLACE('AMC Standard, School',',','') = REPLACE('AMC Standard School.','.','')
```
Based on jarlh's comments: you should (if possible) update the columns and remove the punctuation marks if they are not used in any comparison or display.
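A minimal sketch of the nested-`REPLACE` comparison in SQLite from Python, stripping both characters from both sides so either column may contain either punctuation mark:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Strip both punctuation marks from both values before comparing.
row = conn.execute("""
    SELECT REPLACE(REPLACE(?, ',', ''), '.', '') =
           REPLACE(REPLACE(?, ',', ''), '.', '') AS matched
""", ("AMC Standard, School", "AMC Standard School.")).fetchone()
```

With the comma and period removed, both sides reduce to `AMC Standard School` and compare equal.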
|
Try like this
```
DECLARE @column1 VARCHAR(100)='AMC Standard, School (Near to ABC Building)'
DECLARE @column2 VARCHAR(100)='AMC Standard, School (Opposite KFC)'
SELECT 'MATCHED' AS COLUMN_COMPARE
WHERE replace(replace(replace(@column1, ',', ''), '.', ''), substring(@column1, CHARINDEX('(', @column1), CHARINDEX(')', @column1) - 1), '') = replace(replace(replace(@column2, ',', ''), '.', ''), substring(@column2, CHARINDEX('(', @column2), CHARINDEX(')', @column2) - 1), '')
```
|
Is it possible to Compare two columns in Microsoft SQL server so that the comparison skips punctuation marks and other character like %, ' etc?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
My customer model has many videos, and videos have many video activities.
I want to join on video activities to limit based off videos that belong to a customer who has a specific email domain.
This code will give me all the video activities belonging to customer with id 52, but since videos don't have customer email, I need to join customer onto video and then do a .where.
```
VideoActivity.joins(:video).where(videos: {customer_id: 52})
```
How is this done? Doing `VideoActivity.joins(:video).joins(:customer)` gives me an error saying VideoActivity doesn't have a customer associated with it.
|
VideoActivity has no relation with customer; you need to say that the customer is related to the video. Then it's easier to use Active Record's [`#merge`](http://apidock.com/rails/ActiveRecord/SpawnMethods/merge) than doing a hash `where`:
```
VideoActivity.joins(video: :customer).merge(Customer.where('some condition'))
```
If you have a scope in videos you could use that too, here's an example
```
VideoActivity.joins(video: :customer).merge(Customer.some_scope)
```
**PS:**
a scope could be
```
# customer model
scope :email_ends_with, ->(string) { where('email ilike ?', "%#{string}") }
```
Then use it
```
VideoActivity.joins(video: :customer).merge(Customer.email_ends_with('gmail.com'))
```
|
There are a bunch of ways to do this and all end up at about the same place, but using an explicit where statement will easily accomplish this goal.
```
VideoActivity.joins(:video).where("videos.customer_id = ?", 52)
```
|
ActiveRecord multiple joins through association
|
[
"",
"sql",
"ruby-on-rails",
"join",
"activerecord",
""
] |
I'm trying to use this query, but I get an error
> Data mismatch in criteria
`Dat` is the `Date` column inside the MS Access database. I'm trying to select the sum of the column named `Total` for every row matching `Pro`, between the dates picked from the date pickers...
There are many posts regarding this, but my query is different from those
```
Dim DTST As String
DTST = DateTimePicker1.Value.ToString
Dim DTEn As String
DTEn = DateTimePicker2.Value.ToString
Dim Query1 As String = "SELECT SUM(Total) FROM [T500] WHERE Pro =@Pro AND Dat BETWEEN'" + DTST + "' AND '" + DTEn + "'"
Dim cmd2 As OleDb.OleDbCommand = New OleDbCommand(Query1, dbCon)
cmd2.Parameters.AddWithValue("@Pro", ComboBoxBP.SelectedItem.ToString)
```
|
My edit seems to have been rejected, so here it is:
```
Dim DTST As String
DTST = DateTimePicker1.Value.ToString("'#'yyyy'/'MM'/'dd'#'")
Dim DTEn As String
DTEn = DateTimePicker2.Value.ToString("'#'yyyy'/'MM'/'dd'#'")
Dim Query1 As String = "SELECT SUM(Total) FROM [T500] WHERE Pro =@Pro AND Dat BETWEEN " + DTST + " AND " + DTEn + ""
```
|
The Dates are just additional parameters. One of the problems with the way you (and the other answers) are doing it is that you are converting perfectly good `DateTime` variables to string. MSAccess/OleDb will usually make sense of things, but it is unnecessary and allowing something else to interpret your intent is usually undesirable.
The DB columns must be implemented as a `Date` type in order for the data to be treated as `Dates` (BETWEEN), but you do not need to "format" the Date variables (ever).
Another problem is disposing of Command and Connection objects when done with them:
```
Dim SQL = "SELECT SUM(Total) FROM [T500] WHERE Pro = @Pro AND Dat BETWEEN @dt1 AND @dt2"
Using dbCon As New OleDbConnection(GetConnection()),
      cmd As New OleDbCommand(SQL, dbCon)
    dbCon.Open()
    ' ToDo: be sure SelectedItems.Count > 0 earlier
    cmd.Parameters.AddWithValue("@Pro", ComboBoxBP.SelectedItem.ToString)
    cmd.Parameters.AddWithValue("@dt1", DateTimePicker1.Value)
    cmd.Parameters.AddWithValue("@dt2", DateTimePicker2.Value)
    Dim Total = cmd.ExecuteScalar()
    ...
End Using ' close and dispose of Command and Connection objects
```
As you can see, you pass DateTime values to it as any other parameter, and the DTP's`.Value` property will work perfectly well without any massaging or processing.
Here is a link for info on the [`GetConnection()` method](https://stackoverflow.com/a/28216964/1070452) and dbConnections in general.
---
Note that `OleDB` does not actually use named parameters (@Pro, @dt1). They are just *placeholders*, you have to `AddWithValue` in the same order as they appear in the SQL statement. It is more common to see params specified as "?", but meaningful params are helpful in mapping the right var to the right param in code.
Finally, it can't happen with a `DateTimePicker`, but gluing bits of string from UI controls together to make SQL can result in SQL injection attacks and should always be avoided. SQL using parameters is generally easier to code, build, read and maintain.
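The same pattern-of-parameters idea, sketched in Python with SQLite (which uses `?` placeholders, much like OleDb's positional parameters). The table and data are made up; the point is that the dates travel as parameter values, never glued into the SQL string:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T500 (Pro TEXT, Dat TEXT, Total REAL)")
conn.executemany("INSERT INTO T500 VALUES (?, ?, ?)", [
    ("X", "2015-01-05", 10.0),
    ("X", "2015-02-10", 20.0),
    ("X", "2015-06-01", 40.0),
    ("Y", "2015-01-20", 99.0),
])

# Dates are bound as parameters, never formatted into the SQL text.
total = conn.execute(
    "SELECT SUM(Total) FROM T500 WHERE Pro = ? AND Dat BETWEEN ? AND ?",
    ("X", date(2015, 1, 1).isoformat(), date(2015, 3, 1).isoformat()),
).fetchone()[0]
```

Only the two `X` rows inside the date range are summed.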
|
'Data criteria mismatch SQL' when using Dates
|
[
"",
"sql",
".net",
"vb.net",
"ms-access",
""
] |
I want to create a dynamic order by statement.
Its purpose is to read a table of daily schedule overrides (0 to \* records) and decide whether to use a global override or a store override for the selected day.
I thought of using a case clause like below, but it's not working as expected.
```
select * from schedule sdl
where day = 3
and (sdl.store_id = 23331 or sdl.store_id is null)
order by
case when sdl.force is true
then 'sdl.store_id nulls first'
else 'sdl.store_id nulls last'
end
limit 1
```
Is it possible to create an order-by statement using a case statement? Or maybe there is a better approach on the subject.
|
You seemed to be on the right track. You just have to generate a value for each row and make it meaningful to how you want them sorted.
```
select *
from schedule sdl
where day = 3
and (sdl.store_id = 23331 or sdl.store_id is null)
order by case when sdl.store_id is null and sdl.force then 0 --nulls in front
when sdl.store_id is null and not sdl.force then 2 --nulls in back
else 1 end; --everything else in the middle
```
Tested on PostgreSQL 9.4.1
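A sketch of this answer in SQLite from Python. SQLite has no boolean type, so the hypothetical `force` column becomes an integer flag (renamed `force_flag` here), with made-up data covering the force case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedule "
             "(store_id INTEGER, day INTEGER, force_flag INTEGER)")
conn.executemany("INSERT INTO schedule VALUES (?, ?, ?)",
                 [(None, 3, 1),    # global override
                  (23331, 3, 1)])  # store override

# With force_flag set, the NULL (global) row sorts first and wins LIMIT 1.
row = conn.execute("""
    SELECT store_id FROM schedule
    WHERE day = 3 AND (store_id = 23331 OR store_id IS NULL)
    ORDER BY CASE WHEN store_id IS NULL AND force_flag     THEN 0
                  WHEN store_id IS NULL AND NOT force_flag THEN 2
                  ELSE 1 END
    LIMIT 1
""").fetchone()
```

The CASE maps each row to a sort key, so the global (NULL) row comes first when the force flag is set.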
|
Yes, it is possible; the link in the comment to your question shows a valid example. But it looks like your issue is
"nulls first/last": you wrapped those clauses inside string literals, so you are ordering by a constant string. Your query should be like
```
select * from schedule sdl
where day = 3 and (sdl.store_id = 23331 or sdl.store_id is null)
order by
case when sdl.force is true
then sdl.store_id end desc,
case when sdl.force is false
then sdl.store_id end
limit 1
```
asc is default sorting (doesn't need to be specified explicitly)
|
ORDER BY with CASE clause
|
[
"",
"sql",
"postgresql",
""
] |
```
SELECT ai.auction_item_id, ai.starting_bid, b.bid_amount, i.*
FROM
auction_item as ai, //Table contains auction specific details about the item
item as i //Table contains general details about the item
INNER JOIN (
SELECT auction_item_id, bid_amount
FROM bid xb //Table contains bids on item
ORDER BY amount DESC
LIMIT 1 ) b
ON b.auction_item_id = ai.auction_item_id
WHERE
ai.auction_id = 4 AND
i.id = ai.listings_id
ORDER BY RAND()
LIMIT 4
```
Currently with the above query I am able to get 4 random items from the current active auction (auction #4). But they are all currently coming back with the same highest bid amount (from the inner join). Each item should have its own highest bid amount (from inner join) - unless that item has no bids then it should be 0 (or something)
What am I doing wrong here?
The bid table has a row for every bid placed thus the LIMIT 1 and order by desc in the inner join, as I want the highest bid for every item in the outer section. (if there is a bid that is).
Thanks
|
Try a sub-query. You get a `0` if there are no entries in the bid table because of the `coalesce`.
```
SELECT
ai.auction_item_id, ai.starting_bid,
Coalesce((SELECT max(xb.bid_amount)
FROM bid xb
WHERE xb.auction_item_id = ai.auction_item_id), 0) AS bid_amount,
i.*
FROM auction_item AS ai
INNER JOIN item AS i ON i.id = ai.listings_id
WHERE ai.auction_id = 4
ORDER BY Rand()
```
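A runnable sketch of the correlated-subquery idea, using an in-memory SQLite database via Python with a pared-down schema (the `item` table and extra columns are omitted for brevity; the sample values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE auction_item (auction_item_id INTEGER, auction_id INTEGER, starting_bid INTEGER);
CREATE TABLE bid (auction_item_id INTEGER, bid_amount INTEGER);
INSERT INTO auction_item VALUES (1, 4, 100), (2, 4, 50);
INSERT INTO bid VALUES (1, 120), (1, 150);
""")

# Each item gets its own MAX bid; items with no bids fall back to 0 via COALESCE
rows = conn.execute("""
    SELECT ai.auction_item_id,
           COALESCE((SELECT MAX(b.bid_amount)
                     FROM bid b
                     WHERE b.auction_item_id = ai.auction_item_id), 0) AS high_bid
    FROM auction_item ai
    WHERE ai.auction_id = 4
    ORDER BY ai.auction_item_id
""").fetchall()
print(rows)  # [(1, 150), (2, 0)]
```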
|
Something like this?
```
SELECT ai.auction_item_id, ai.starting_bid, b.bid_amount, max(bid_amount)
FROM auction_item as ai
inner join bid b
on ai.auction_item_id = b.auction_item_id
inner join items i
on i.id = ai.listings_id
where ai.auction_id = 4
```
Adding your table schema to the question will help
**EDIT**:
```
SELECT ai.auction_item_id, ai.starting_bid, IFNULL(max(bid_amount), 0) max_bid
FROM auction_item as ai
left join bid b
on ai.auction_item_id = b.auction_item_id
inner join items i
on i.id = ai.listings_id
WHERE ai.auction_id = 4
GROUP BY ai.auction_item_id
```
Here you go, including no bid items.
Let me know if i have the fiddle wrong: <http://sqlfiddle.com/#!9/0eae7/2>
|
Simple SQL inner join query - can't get working
|
[
"",
"mysql",
"sql",
""
] |
We have a table (T) with ~3 million rows and just two INT columns (ID1 and ID2), which together are set as a composite clustered key.
Early in a stored procedure we create a table variable (@A) which consists of a list of INTs.
The slow query is the following
```
SELECT T.ID1, T.ID2
FROM T
INNER JOIN @A AS A
ON A.ID = T.ID1
```
@A will have just a few hundred rows, and T contains a few million rows. The problem is that T gets a scan count of several hundred. I don't know how to make that go away.
I have tried to create another index on T with column ID1 and ID2 included, but it does not help (the execution planner shows that the new index is used).
What can be done to reduce the scan count on table T?
(We are using SQL Server 2014, Web Edition)
|
Try creating a covering index (with an included column):

```
create index idx_t_id1 on t(id1) include (id2)
```

This will allow your query to find everything it needs in the index pages and will not have to search in the main table. By the way, is there a clustered index on table t?
|
You could try phrasing the query like this:
```
select a.id as id1,
(select t.id2
from t
where t.id1 = a.id
) as id2
from @a a;
```
This should scan `@a` and use the index for the lookup.
Two notes:
* If there can be multiple matches, use `cross apply` instead.
* If there could be no matches and you want to filter the rows, use a subquery or CTE.
|
How to reduce scan count on table with a composite index?
|
[
"",
"sql",
"sql-server",
""
] |
What is the reason that the following two queries give wildly different results?
```
MariaDB [mydatabase]> SELECT COUNT(DISTINCT(`price`)) FROM `products`; --Good
+--------------------------+
| COUNT(DISTINCT(`price`)) |
+--------------------------+
| 2059 |
+--------------------------+
1 row in set (0.01 sec)
MariaDB [mydatabase]> SELECT COUNT(DISTINCT('price')) FROM `products`; --Bad
+--------------------------+
| COUNT(DISTINCT('price')) |
+--------------------------+
| 1 |
+--------------------------+
1 row in set (0.01 sec)
```
I've googled around for an explanation of the difference between backticks and apostrophes (aka. single quotes), but I am unable to find any indication as to why they would be interpreted differently for a column name like in the above.
Is it that the single-quoted string in the latter query is actually not interpreted as a column name, but just as an arbitrary string literal, of which there could be said to be "1"? If so, it ain't easy to find any pages expounding on this meaning of the apostrophe.
|
'price' (apostrophes, i.e. single quotes) is a string literal. It never changes, so the count is always 1.
`price` (backticks) refers to the column `price`, so the count can be more than 1.
The inner parentheses are irrelevant. `COUNT(DISTINCT price)` is the same as your backtick version.
* `SELECT COUNT(*) FROM tbl WHERE ...` is a common way to ask how many rows.
* `SELECT foo, COUNT(*) FROM tbl GROUP BY foo` is a common way to ask how many rows for each distinct value of `foo`.
* `SELECT foo, COUNT(foo) FROM tbl GROUP BY foo` is the same as above, but does not count rows where `foo IS NULL`.
`SELECT DISTINCT ... GROUP BY ...` is a nonsense statement. Either use DISTINCT or use GROUP BY.
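The same effect can be demonstrated with SQLite via Python; note SQLite uses double quotes for identifiers where MySQL/MariaDB use backticks (the three-row table is an invented example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (price REAL)")
conn.executemany("INSERT INTO products VALUES (?)", [(1.0,), (2.0,), (2.0,)])

# Single quotes: a constant string literal -- one distinct value, always
literal = conn.execute("SELECT COUNT(DISTINCT 'price') FROM products").fetchone()[0]
# Double quotes (SQLite's identifier quoting): the actual column
column = conn.execute('SELECT COUNT(DISTINCT "price") FROM products').fetchone()[0]
print(literal, column)  # 1 2
```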
|
A straight single quote (') is used for [string literals](http://dev.mysql.com/doc/refman/5.6/en/string-literals.html) (along with straight double quote (")).
A backtick quote (`) is for [quoting identifiers](http://dev.mysql.com/doc/refman/5.7/en/identifiers.html).
Identifiers must be quoted if they match a reserved word, or if they contain special characters. Quoted identifiers also can specify lowercase in case-insensitive fields (which otherwise might be shown as uppercase).
```
CREATE TABLE MyTable (Field INT);
DESCRIBE MyTable;
+---------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+-------------+------+-----+---------+-------+
| FIELD | INT | YES | | NULL | |
+---------+-------------+------+-----+---------+-------+
```
See also [ANSI quote mode](http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_ansi_quotes).
|
What makes the big difference between a backtick and an apostrophe?
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |
This question is relevant to SQL Server. I have a table with a column called `gender` where 1 or 0 is saved as an integer. I want to know whether there is a way to assign a label to it when selecting it using a `SELECT` statement.
For example
```
SELECT gender
FROM emp
```
AND if 1 occur `Male` AND 0 occur `Female` should be displayed.
|
Use a CASE expression, i.e. `CASE gender WHEN 1 THEN 'Male' ELSE 'Female' END`.
Full select as follows:
```
SELECT gender,
CASE gender
when 1 then 'Male'
else 'Female'
end as gender_name
FROM emp
```
**UPD:**
An option with 'N/A' to handle all possible values (in case the gender field value is not in the range [0,1]); however, I prefer to use a check constraint to restrict the possible values and avoid weird/unexpected data:
```
SELECT gender,
CASE
gender
when 1 then 'Male'
when 0 then 'Female'
else 'N/A'
end as gender_name
FROM emp
```
|
You can use `CASE`:
```
SELECT CASE WHEN gender = 1 THEN 'Male' ELSE 'Female' END AS gender
FROM emp
```
|
Assign a label to an integer value when using a SELECT query
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I've inherited a database table that has an nvarchar(MAX) column containing ASCII numbers. I need to convert and replace them with plain text. Is this possible using an SQL function?
From:
> 034 067 111 110 118 101 114 116 032 077 101 044 032 068 097 114 110 032 105 116 033 033 034
To:
> "Convert Me, Darn it!!"
Thanks all
|
## Test Data
```
DECLARE @TABLE TABLE (ASCII_Col VARCHAR(1000))
INSERT INTO @TABLE VALUES
('034 067 111 110 118 101 114 116 032 077 101 044 032 068 097 114 110 032 105 116 033 033 034')
```
## Query
```
;WITH CTE AS(
SELECT CHAR(Split.a.value('.', 'VARCHAR(100)')) Char_Vals
FROM (SELECT
Cast ('<M>' + Replace(ASCII_Col, ' ', '</M><M>') + '</M>' AS XML) AS Data
FROM @Table) AS A
CROSS APPLY Data.nodes ('/M') AS Split(a)
)
SELECT (SELECT '' + Char_Vals
FROM CTE
FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)')
```
## Result
```
"Convert Me, Darn it!!"
```
|
A solution (without `split` function):
```
declare @input nvarchar(max);
declare @result nvarchar(max);
select @result = '';
select @input = '034 067 111 110 118 101 114 116 032 077 101 044 032 068 097 114 110 032 105 116 033 033 034';
--------------------------------
declare @index int;
declare @len int;
declare @char char(1);
declare @charNum varchar(8);
set @charNum = '';
set @index = 1;
set @len= LEN(@input);
WHILE @index <= @len + 1
BEGIN
set @char = SUBSTRING(@input, @index, 1);
if (@char = ' ') begin
set @result = @result + char(@charNum);
set @charNum = '';
end else begin
set @charNum = @charNum + @char;
end;
set @index = @index + 1;
END
select @result as [Output];
```
..and the result is:
```
OUTPUT:
----------------------
"Convert Me, Darn it!!"
```
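Outside the database, the same decoding is a one-liner; a Python sketch using the question's sample string:

```python
# Space-separated decimal ASCII codes -> characters
codes = ("034 067 111 110 118 101 114 116 032 077 101 044 032 068 "
         "097 114 110 032 105 116 033 033 034")
decoded = "".join(chr(int(c)) for c in codes.split())
print(decoded)  # "Convert Me, Darn it!!"
```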
|
Does SQL Server 2012 have a function or other way to convert a varchar column that contains ASCII to plain text?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I want to add comments to my `SQL CLR` functions (as I do to other `SQL` objects I am creating or editing - functions, procedures and views). Unfortunately, I am not able to do this for the `SQL CLR` objects.
For example, the following code:
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =================================================================================================================================
-- Author: gotqn
-- Create date: 2015-03-25
-- Description: Converts a string that has been encoded for transmission in a URL into a decoded string.
-- Usage Example:
/*
SELECT [dbo].[fn_UrlDecode]('http://stackoverflow.com/search?q=tql+sql+server');
*/
-- =================================================================================================================================
CREATE FUNCTION [dbo].[fn_UrlDecode] (@value NVARCHAR(MAX))
RETURNS NVARCHAR(MAX)
AS EXTERNAL NAME [Utils].[Utils].[UrlDecode]
GO
```
when the function is script from the `SQL Management studio` is going to produce this:
```
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE FUNCTION [dbo].[fn_UrlDecode](@value [nvarchar](max))
RETURNS [nvarchar](max) WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [Utils].[Utils].[UrlDecode]
GO
```
I tried to fix this by moving the comments after the `AS`, as this is the way comments are added for views, but it failed again. Then I tried putting the comments after the `CREATE` clause and after the `EXTERNAL NAME ...` clause, but nothing changed.
Is there a way to fix this behaviour?
|
Basically, if it's a type not listed as having data stored in [`sys.sql_modules`](https://msdn.microsoft.com/en-us/library/ms175081.aspx) then the original text that created the object is not retained and so comments aren't retained. No CLR object stores such text.
|
While @Damien is correct as to *why* the comments are not saved, there is still a somewhat work-around to store comments: Extended Properties.
For example:
```
EXEC sys.sp_addextendedproperty @name = N'comments', @value = N'
-- =================================================================================================================================
-- Author: gotqn
-- Create date: 2015-03-25
-- Description: Converts a string that has been encoded for transmission in a URL into a decoded string.
-- Usage Example:
/*
SELECT [dbo].[fn_UrlDecode](''http://stackoverflow.com/search?q=tql+sql+server'');
*/
-- =================================================================================================================================
', @level0type = 'SCHEMA', @level0name = N'dbo',
@level1type = 'FUNCTION', @level1name = N'fn_UrlDecode';
```
You just need to escape your embedded single-quotes.
Then you can retrieve them via:
```
SELECT [value]
FROM sys.fn_listextendedproperty(N'comments', 'SCHEMA', N'dbo',
         'FUNCTION', N'fn_UrlDecode', NULL, NULL);
```
---
Minor additional note: if you won't ever decode URLs that are more than 4000 characters long (and I am pretty sure that you won't run into many that are even over 2048 characters), then you would be better served to use `NVARCHAR(4000)` for both input and output datatypes as that will be quite a bit faster than if either, or both, are `NVARCHAR(MAX)`.
|
How to add comments to SQL CLR function?
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
"sqlclr",
""
] |
Accidentally I noticed a bug-like behaviour in SSMS. I was querying a table named Candidate with the below query.
```
select CandidateId, CandidateName from Candidate
where CandidateId='73415005-77C6-4D4B-9947-02D6B148E03F2'
```
I was copy-pasting the CandidateId, which is a unique identifier, but somehow I added a two (2) at the end. Actually the candidate id I was querying for was '`73415005-77C6-4D4B-9947-02D6B148E03F`' and there is no candidate with candidateid `73415005-77C6-4D4B-9947-02D6B148E03F2` (that is not even a GUID, I suppose).
But still, I was getting the result back.

You can see in the query and the result, the CandidateId's are different. Why is it happening so? Anyone please explain.
|
The top-level description is that the string is being converted to a unique identifier, so the last digit is ignored.
This logic is documented. First, uniqueidentifier has a slightly higher data type precedence than the string types. The relevant part of the [documentation](https://msdn.microsoft.com/en-us/library/ms190309.aspx):
> 24. uniqueidentifier
> 25. nvarchar (including nvarchar(max) )
> 26. nchar
> 27. varchar (including varchar(max) )
> 28. char
This is why the conversion is to `uniqueidentifier` rather than to a string.
Second, this is a case where SQL Server does "silent conversion". That is, it converts the first 36 characters and doesn't generate an error for longer strings. This is also [documented](https://msdn.microsoft.com/en-us/library/ms187942.aspx):
> The following example demonstrates the truncation of data when the
> value is too long for the data type being converted to. Because the
> uniqueidentifier type is limited to 36 characters, the characters that
> exceed that length are truncated.
So, the behavior that you see is not a bug. It is documented behavior, combining two different aspects of documented SQL Server functionality.
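For intuition, the truncate-to-36-characters behavior can be sketched in Python with the standard `uuid` module (an analogue only: Python raises on the overlong string where SQL Server silently truncates):

```python
import uuid

s = "73415005-77C6-4D4B-9947-02D6B148E03F2"  # 37 chars -- not a valid GUID

# Python refuses the overlong string outright
try:
    uuid.UUID(s)
    ok = True
except ValueError:
    ok = False
print(ok)  # False

# SQL Server's behavior is equivalent to keeping only the first 36 characters
truncated = uuid.UUID(s[:36])
print(truncated)  # 73415005-77c6-4d4b-9947-02d6b148e03f
```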
|
Because your column CandidateId is of type GUID the right (string) part of the condition gets converted to uniqueidentifier data type and truncated. You can see this in your execution plan. There will be a Scalar Operator(CONVERT\_IMPLICIT(uniqueidentifier,[@1],0)) in your index seek/scan operator.
|
Why is SQL Server giving me wrong output?
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
I'm attempting to calculate days of therapy by month from an oracle database. The (vastly simplified) data is as follows:
```
Therapies
+-----------+-----------+----------+
| Rx Number | StartDate | StopDate |
|-----------+-----------+----------|
| 1 | 12-29-14 | 1-10-15 |
| 2 | 1-2-15 | 1-14-15 |
| 3 | 1-29-15 | 2-15-15 |
+-----------+-----------+----------+
```
For the purposes of this example, all times are assumed to be midnight. The total days of therapy in this table is (10-1 + 32-29) + (14-2) + (15-1 + 32-29) = 41. The total days of therapy in January in this table is (10-1) + (14-2) + (32-29) = 24.
If I wanted to calculate days of therapy for the month of January , my best effort is the following query:
```
SELECT SUM(stopdate - startdate)
FROM therapies
WHERE startdate > to_date('01-JAN-15')
AND stopdate < to_date ('01-FEB-15');
```
However, rx's 1 and 3 are not captured at all. I could try the following instead:
```
SELECT SUM(stopdate - startdate)
FROM therapies
WHERE stopdate > to_date('01-JAN-15')
AND startdate < to_date ('01-FEB-15');
```
But that would include the full duration of the first and third therapies, not just the portion in January. To make the matter more complex, I need these monthly summaries over a period of two years. So my questions are:
1. How do I include overhanging therapies such that only the portion within the target time period is included, and
2. How do I automatically generate these monthly summaries over a two year period?
|
> How do I include overhanging therapies such that only the portion
> within the target time period is included?
```
select sum(
greatest(least(stopdate, date '2015-01-31' + 1)
- greatest(startdate, date '2015-01-01'), 0)) suma
from therapies
```
> How do I automatically generate these monthly summaries over a two
> year period?
```
with period as (select date '2014-01-01' d1, date '2015-12-31' d2 from dual),
months as (select trunc(add_months(d1, level-1), 'Month') dt
from period connect by add_months(d1, level-1)<d2)
select to_char(dt, 'yyyy-mm') mth,
sum(greatest(least(stopdate, add_months(dt, 1)) - greatest(startdate, dt), 0)) suma
from therapies, months
group by to_char(dt, 'yyyy-mm') order by mth
```
The above queries produce the desired output. Insert your own dates in the proper places to change the analyzed periods.
In the second SQL, the inner subquery `months` generates 24 dates, one for each month. The rest is just clamping
with the functions `greatest()` and `least()` plus some date arithmetic.
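The clamping logic (`greatest`/`least` plus subtraction) can be cross-checked outside the database; a minimal Python sketch using the question's sample data:

```python
from datetime import date

def days_in_period(start, stop, p_start, p_end_excl):
    """Days of therapy falling inside [p_start, p_end_excl): clamp, subtract, floor at 0."""
    overlap = (min(stop, p_end_excl) - max(start, p_start)).days
    return max(overlap, 0)

jan1, feb1 = date(2015, 1, 1), date(2015, 2, 1)
therapies = [(date(2014, 12, 29), date(2015, 1, 10)),
             (date(2015, 1, 2), date(2015, 1, 14)),
             (date(2015, 1, 29), date(2015, 2, 15))]

total = sum(days_in_period(s, e, jan1, feb1) for s, e in therapies)
print(total)  # 24 -- matches the hand-computed January total in the question
```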
|
Use a case expression to clamp the start date (and do the same for the stop date if it can run past the period). For example:

```
select sum(
         stopdate -
         (case
            when startdate < to_date(@YourBeginingDate) then to_date(@YourBeginingDate)
            else startdate
          end)
       )
FROM therapies
WHERE stopdate > to_date(@YourBeginingDate)
AND StartDate < to_date(@YourEndingDate)
```
|
Calculating days of therapy in a sql query
|
[
"",
"sql",
"oracle",
""
] |
```
Person
id | name
1 | Mark
2 | John
3 | Cathy
4 | Susan
5 | Rick
Property
id | property_type (A,B,C,D,E,F)
1 | A
1 | B
1 | C
2 | A
2 | A
2 | A
3 | C
4 | D
4 | E
4 | F
4 | A
3 | A
5 | A
5 | A
```
> Query would be - Find out people's id and name who owns property of
> only Type A. Answer should be John and Rick
>
> Right now, I am doing it this way
```
select distinct(p.id), p.name
from
Person p
INNER JOIN Property t
ON p.id = t.id
WHERE t.property_type = 'A'
AND p.id NOT IN (
SELECT
id
from
Property
where property_type IN (
'B','C','D','E','F'
)
)
order by p.id desc
```
The tables have hundreds of thousands of rows, which is why I am trying to optimize this query.
I intend to fetch result in such a way that, find out people who have only type A properties.
Any help is appreciated.
|
You can use conditional aggregation in the `having` clause:
```
select pers.id,
pers.name
from person pers
join property prop
on pers.id = prop.id
group by pers.id,
pers.name
having sum(case when prop.property_type = 'A' then 1 else 0 end) >= 1
and sum(case when prop.property_type <> 'A' then 1 else 0 end) = 0
```
**Fiddle:** <http://sqlfiddle.com/#!9/a3720/1/0>
(I think you meant John and Rick)
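A runnable check of the conditional-aggregation `HAVING` clause against the question's sample data (SQLite via Python standing in for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER, name TEXT);
CREATE TABLE property (id INTEGER, property_type TEXT);
INSERT INTO person VALUES (1,'Mark'),(2,'John'),(3,'Cathy'),(4,'Susan'),(5,'Rick');
INSERT INTO property VALUES (1,'A'),(1,'B'),(1,'C'),(2,'A'),(2,'A'),(2,'A'),
                            (3,'C'),(4,'D'),(4,'E'),(4,'F'),(4,'A'),(3,'A'),(5,'A'),(5,'A');
""")

# At least one 'A' property, and zero non-'A' properties
rows = conn.execute("""
    SELECT p.id, p.name
    FROM person p
    JOIN property t ON p.id = t.id
    GROUP BY p.id, p.name
    HAVING SUM(CASE WHEN t.property_type = 'A' THEN 1 ELSE 0 END) >= 1
       AND SUM(CASE WHEN t.property_type <> 'A' THEN 1 ELSE 0 END) = 0
    ORDER BY p.id
""").fetchall()
print(rows)  # [(2, 'John'), (5, 'Rick')]
```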
|
```
select p.* from
Person p join
(select id from Property
group by id having count(distinct property_type)=1
and max(property_type)='A') t on p.id = t.id
```
Try this out.
I think your major problem is the use of the `IN` and `DISTINCT` keywords.
|
MYSQL query optimization - how to fetch unique column from right table of join
|
[
"",
"mysql",
"sql",
""
] |
I am working with a database that has multiple machines writing to it. Each machine is identified by a column called ControllerID; for example, one machine could have a controllerID of 20 and another 30, etc.
What I want to do is, for each machine, get the elapsed time between the last reading and now. This is what I currently have:
```
SELECT TOP (1) controllerID, convert(Datetime,ReaderTime), DATEDIFF(SECOND,dtReading,GETDATE())
FROM ReaderData
WHERE (controllerID = '30') AND (CardID = 'FFFFFFF0 ')
ORDER BY ReaderIndex DESC
```
This gets the elapsed seconds between the last reading and now, but if I add `AND (controllerID = 'Another ID')` that obviously won't work, and an OR doesn't work either, as it just picks the last one. How can I achieve this?
|
If I have understood your goal correctly, you need something like
```
select
controllerID,
max(convert(datetime, ReaderTime)),
datediff(ss, max(dtReading), getdate())
from ReaderData
where CardID = 'FFFFFFF0 '
group by controllerID
```
This will give you information about last `ReaderTime` and difference from last `dtReading` to now for each `controllerID`.
UPDATE:
*On a side note, if I wanted to compare the last 2 records for each controllerID, how could I do that...*
Well, as a "quick-and-dirty" solution you can do the following: take the results of the previous query and `outer apply` the record that precedes each one in time, like this:
```
select
T.controllerID,
datediff(ss, T.Max_dtReading, T1.Max_dtReading)
from
(
select
controllerID,
max(dtReading) as Max_dtReading
from ReaderData
where CardID = 'FFFFFFF0 '
group by controllerID
) as T
outer apply(
select max(T1.dtReading) as Max_dtReading
from ReaderData as T1
where
T1.CardID = 'FFFFFFF0 '
and T1.controllerID = T.controllerID
and T1.dtReading < T.Max_dtReading
) as T1
```
|
If you want to use OR in the way I think you do, you need to group the OR'd conditions in parentheses as below. Note also that `GROUP BY` must come before `ORDER BY`, and the non-grouped columns need an aggregate:

```
SELECT controllerID, MAX(convert(Datetime,ReaderTime)), DATEDIFF(SECOND,MAX(dtReading),GETDATE())
FROM ReaderData
WHERE (controllerID = '30' OR controllerID = '31' OR controllerID = '32') AND CardID = 'FFFFFFF0 '
GROUP BY controllerID
ORDER BY controllerID
```
|
sql server query with multiple where's
|
[
"",
"sql",
"sql-server",
""
] |
I have tried many combinations of SQL functions so as to produce a 12-character number including the dot, with leading zeroes and two decimal places.
For example:
for the number **121.22**, I want to format it to **000000121.22**
or for the number **12.2**, I want to format it to **000000012.20**
or for the number **100**, I want to format it to **000000100.00**
I have used the following function; but I lost the decimal points if it's zero.
```
SELECT RIGHT('000000000000'+ STR(CONVERT(VARCHAR,MYNUMBER),12,2),12);
```
Any idea on how to solve this problem in Microsoft SQL?
|
If you're on SQL Server 2012 or later, you can use the format() function.
```
SELECT FORMAT(121.22, '000000000000.00')
SELECT FORMAT(12.2, '000000000000.00')
```
---
000000000121.22
---
000000000012.20
|
for `ms sql versions not in (2012,2014)`:
```
cast(right('000000000',9-len(floor(the_number))) as varchar)
+ cast( cast(the_number as decimal(10,2))as varchar)
```
for `ms sql versions in (2012,2014)`:
```
format(the_number ,'000000000000.00')
```
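If you just want to sanity-check the target layout, the same fixed-width zero-padded formatting can be sketched in Python (this only mirrors the idea of T-SQL `FORMAT`; it is not SQL, and width 12 matches the question's examples):

```python
# Zero-fill to a total width of 12, always two decimal places
formatted = [f"{n:012.2f}" for n in (121.22, 12.2, 100)]
print(formatted)  # ['000000121.22', '000000012.20', '000000100.00']
```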
|
How to add Leading Zeroes and Decimal Points at the same time in SQL?
|
[
"",
"sql",
"sql-server",
""
] |
My query keeps returning an ORA-01427 error and I don't know how to resolve it.
```
update db1.CENSUS set (notes)
=
(
select notes
from db2.CENSUS cen
where db1.CENSUS.uid = cen.uid
)
where headcount_ind = 'Y' and capture_FY = '2015'
```
|
You are getting the error because there exists more than one row in `db2.CENSUS` for at least value of `uid`. (There could be more.) You can figure out which values of `uid` are causing the issue by doing the following:
```
SELECT uid, COUNT(*)
FROM db2.census
GROUP BY uid
HAVING COUNT(*) > 1;
```
At that point you can do a number of things. You can delete the extra rows (maybe there aren't that many and you don't want them anyway) and update as in your original query, or you can use aggregation in the subquery you're using to update, e.g.:
```
update db1.CENSUS set (notes)
=
(
select MAX(notes)
from db2.CENSUS cen
where db1.CENSUS.uid = cen.uid
)
where headcount_ind = 'Y' and capture_FY = '2015';
```
In addition, with your query the way it is above, if there is not a corresponding value of `notes` in `db2.CENSUS` for some value of `db1.CENSUS.uid`, `db1.CENSUS.notes` will be set to `NULL`. Maybe that's the behavior you want? If not, you'll want something like the following:
```
UPDATE db1.census c1
SET c1.notes = ( SELECT max(c2.notes)
FROM db2.census c2
WHERE c2.uid = c1.uid )
WHERE c1.headcount_ind = 'Y'
AND c1.capture_FY = '2015'
AND EXISTS ( SELECT 1 FROM db2.census c2
WHERE c2.uid = c1.uid );
```
|
A quick and dirty fix is

```
update db1.CENSUS cen set cen.notes
=
(
  select MAX(notes)
  from db2.census census
  where cen.uid = census.uid
)
where cen.headcount_ind = 'Y' and cen.capture_FY = '2015'
```

But this probably does not do what you want: it arbitrarily keeps one note when a `uid` has several.
To prevent this error you must know the relationship between the tables. Does one `uid` map to exactly one note, or (more likely) can one `uid` have many census notes? If you need row-by-row control over which note wins, a PL/SQL loop is an option:

```
BEGIN
  FOR census_taker IN (SELECT DISTINCT uid FROM db2.census) LOOP
    UPDATE db1.CENSUS cen
    SET cen.notes = (SELECT MAX(census.notes)
                     FROM db2.census census
                     WHERE census.uid = census_taker.uid)
    WHERE cen.headcount_ind = 'Y'
      AND cen.capture_FY = '2015'
      AND cen.uid = census_taker.uid;
  END LOOP;
END;
```
|
How can i resolve an ORA-01427 error (Single-row subquery returns more than one row)?
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm trying to write something like this:
```
case column_01
when column_01 = 5 AND column_02 = 'NO'
then value
else value_other
end
```
Is it somehow possible?
|
You have mixed the two different **syntaxes** for **CASE** expressions.
You could do it as:
```
CASE
    WHEN column_01 = 5 AND column_02 = 'NO' THEN
         value
    ELSE
         value_other
END
```
**1. simple\_case\_statement** (compares one expression against a list of values)

```
CASE expression
     WHEN value_1 THEN result_1
     WHEN value_2 THEN result_2
     ...
     WHEN value_n THEN result_n
     ELSE result
END
```

**2. searched\_case\_statement** (evaluates a list of independent conditions)

```
CASE
     WHEN condition_1 THEN result_1
     WHEN condition_2 THEN result_2
     ...
     WHEN condition_n THEN result_n
     ELSE result
END
```
|
With case statements, you either check for the value of one column/variable and do your checks against that:
```
case column_01
when 1 then 'a'
when 2 then 'b'
...
else 'zzz'
end
```
or you check for conditions in each when clause:
```
case when column_01 = 5 and column_02 = 'NO' then value
when column_01 = 10 and column_03 = 'FRED' then 123
else value_other
end
```
You can't combine the two forms, nor does it make sense to do so.
|
Is there possible to use in CASE statement in WHEN more columns?
|
[
"",
"sql",
"oracle",
"case",
""
] |
I've looked at a bunch of questions and solutions regarding many to many queries. I just can't seem to wrap my head around it. Maybe I'm not completely understanding the keywords in MySQL. But...
I have 3 tables. The first table is a list of peoples contact information. The second table is a list of mailing list categories. The third table is an associative table that holds the `id's` from the first and second table. How would I write a `MySQL` query to get all the contacts from the contact table that match the `VIP list id` (which I already have)?
**Table 1 (contacts)**
```
id | name | email
-----------------------------
1 | John | john@gmail.com
-----------------------------
2 | Jane | jane@gmail.com
-----------------------------
```
**Table 2 (list\_type)**
```
id | list_name |
-----------------
1 | VIP's |
-----------------
2 | Generic |
-----------------
```
**Table 3 (list\_contact\_joiner)**
```
contact_id | list_type_id |
----------------------------
1 | 2 |
----------------------------
2 | 1 |
----------------------------
```
**This is what I tried but get a syntax error**
```
$listID = 1;
SELECT list_contact_joiner.contact_id
FROM list_contact_joiner
WHERE list_id = $listID AS lcj
INNER JOIN contact_lists AS cl
ON cl.id = lcj.contact_id
```
|
```
SELECT c.*
FROM contacts c
JOIN list_contact_joiner j on j.contact_id = c.id
JOIN list_type t on j.list_type_id = t.id
WHERE t.list_name = 'VIP''s'
```
If you already have the `id` of `VIP's` then you need to join only 2 tables
```
SELECT c.*
FROM contacts c
JOIN list_contact_joiner j on j.contact_id = c.id
WHERE j.list_type_id = 1
```
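For completeness, here is a runnable sketch of the two-table variant using the question's sample data in an in-memory SQLite database via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (id INTEGER, name TEXT, email TEXT);
CREATE TABLE list_type (id INTEGER, list_name TEXT);
CREATE TABLE list_contact_joiner (contact_id INTEGER, list_type_id INTEGER);
INSERT INTO contacts VALUES (1,'John','john@gmail.com'),(2,'Jane','jane@gmail.com');
INSERT INTO list_type VALUES (1,'VIP''s'),(2,'Generic');
INSERT INTO list_contact_joiner VALUES (1,2),(2,1);
""")

# Contacts on the VIP list (list_type_id = 1), via the associative table
vips = conn.execute("""
    SELECT c.name, c.email
    FROM contacts c
    JOIN list_contact_joiner j ON j.contact_id = c.id
    WHERE j.list_type_id = ?
""", (1,)).fetchall()
print(vips)  # [('Jane', 'jane@gmail.com')]
```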
|
Yes, the join statement is not correct. It should be something like
```
select
c.name,
c.email,
    lt.list_name
from list_contact_joiner lcj
join contacts c on c.id = lcj.contact_id
join list_type lt on lt.id = lcj.list_type_id
where
lt.id = ?
```
If you are looking for data with `$listID = 1;` then the place holder is
`lt.id = ?`
|
MySQL Many to Many query confusion
|
[
"",
"mysql",
"sql",
""
] |
I need to develop a script that will capture all fields when one car is tied to more than one color.
If one car is tied to one color more than once, that needs to be captured only if that car is tied to additional colors.
If one car is tied to one color more than once and no other colors that does NOT need to be captured.
```
{CREATE TABLE test2
(
ID NUMBER(9),
CAR NUMBER(9),
COLOR NUMBER(9)
);
Insert into test2 (ID, CAR, COLOR) Values (1, 5, 10);
Insert into test2 (ID, CAR, COLOR) Values (2, 5, 11);
Insert into test2 (ID, CAR, COLOR) Values (3, 5, 10);
Insert into test2 (ID, CAR, COLOR) Values (4, 9, 6);
Insert into test2 (ID, CAR, COLOR) Values (5, 9, 6);
Insert into test2 (ID, CAR, COLOR) Values (6, 8, 4);
Insert into test2 (ID, CAR, COLOR) Values (7, 8, 9);
Insert into test2 (ID, CAR, COLOR) Values (8, 12, 9);
COMMIT;}
--expected results
ID CAR COLOR
1 5 10
2 5 11
3 5 10
6 8 4
7 8 9
```
all insights and suggestions deeply appreciated.
|
I would use either an `in` clause or a correlated `exists` clause. The latter should perform better than the former:
```
select id, car, color from test2
where car in (
select car
from test2
group by car
having count(distinct color) > 1
)
select id, car, color from test2 t
where exists (
select car
from test2
where car = t.car
group by car
having count(distinct color) > 1
)
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!4/d5813/1)
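The `IN` variant can also be verified against the question's sample data with an in-memory SQLite database (a Python sketch, not Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test2 (id INTEGER, car INTEGER, color INTEGER)")
conn.executemany("INSERT INTO test2 VALUES (?,?,?)",
                 [(1, 5, 10), (2, 5, 11), (3, 5, 10), (4, 9, 6),
                  (5, 9, 6), (6, 8, 4), (7, 8, 9), (8, 12, 9)])

# Keep rows whose car has more than one distinct color
rows = conn.execute("""
    SELECT id, car, color FROM test2
    WHERE car IN (SELECT car FROM test2
                  GROUP BY car
                  HAVING COUNT(DISTINCT color) > 1)
    ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # [1, 2, 3, 6, 7]
```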
|
you need to perform a `count` twice:
```
with cte as
( select CAR,COLOR,count(*) cn
from test2
group by CAR,COLOR
)
select t.id,t.car,t.color
from test2 t
join(
select car,count(*)
from cte
group by CAR
having count(*)>1)q
on t.car=q.car
order by 1
```
*OUTPUT:*
```
ID CAR COLOR
1 5 10
2 5 11
3 5 10
6 8 4
7 8 9
```
|
select based on one field association with multiple values in another field
|
[
"",
"sql",
"oracle",
""
] |
I have 1000's of tables in SQL Server that have each been created from CSV files. The data in each table is all similar with each table representing a different day.
The problem I am having, is that there are lots of variations in the structure of the tables as well as the names of columns.
Some tables do however have matching structures, and I thought a good starting point would be to combine the data from those tables first.
I have been looking for a way to query the database to find the tables that share the same structure, but have so far been unsuccessful.
Any help would be greatly appreciated.
|
So if the tables are truly identical, then try this out. It actually generates the insert statements for you, and it can drop the old tables if you want it to.
```
IF OBJECT_ID('dbo.table1') IS NOT NULL DROP TABLE dbo.table1;
IF OBJECT_ID('dbo.table2') IS NOT NULL DROP TABLE dbo.table2;
IF OBJECT_ID('dbo.table3') IS NOT NULL DROP TABLE dbo.table3;
IF OBJECT_ID('dbo.table4') IS NOT NULL DROP TABLE dbo.table4;
IF OBJECT_ID('dbo.table5') IS NOT NULL DROP TABLE dbo.table5;
CREATE TABLE table1 (ID INT,FirstName VARCHAR(25),LastName NVARCHAR(25),EntryDate DATETIME,AvgScore NUMERIC(18,6)); --table1
CREATE TABLE table2 (ID INT,FirstName VARCHAR(25),LastName NVARCHAR(25),EntryDate DATETIME,AvgScore NUMERIC(18,6)); --matches table1
CREATE TABLE table3 (ID INT,FirstName VARCHAR(25),LastName NVARCHAR(25),EntryDate DATETIME); --table3
CREATE TABLE table4 (ID INT,FirstName VARCHAR(25),LastName NVARCHAR(25),EntryDate DATETIME); --matches table3
CREATE TABLE table5 (ID INT,FirstName VARCHAR(25),LastName NVARCHAR(25),EntryDate DATETIME,AvgScore NUMERIC(18,6)); --matches table1
WITH CTE_matching_Tables
AS
(
SELECT
A.TABLE_NAME primaryTable,
A.total_columns,
COUNT(*) AS matching_columns,
B.TABLE_NAME AS matchedTable
FROM (SELECT *, MAX(ORDINAL_POSITION) OVER (PARTITION BY Table_NAME) AS total_columns FROM INFORMATION_SCHEMA.COLUMNS) A
INNER JOIN (SELECT *, MAX(ORDINAL_POSITION) OVER (PARTITION BY Table_NAME) AS total_columns FROM INFORMATION_SCHEMA.COLUMNS) B
ON A.TABLE_NAME < B.TABLE_NAME
AND A.ORDINAL_POSITION = B.ORDINAL_POSITION
AND A.total_columns = B.total_columns
AND A.COLUMN_NAME = B.COLUMN_NAME
AND A.DATA_TYPE = B.DATA_TYPE
AND A.IS_NULLABLE = B.IS_NULLABLE
AND ( (A.CHARACTER_MAXIMUM_LENGTH = B.CHARACTER_MAXIMUM_LENGTH)
OR (A.CHARACTER_MAXIMUM_LENGTH IS NULL AND B.CHARACTER_MAXIMUM_LENGTH IS NULL)
)
AND ( (A.NUMERIC_PRECISION = B.NUMERIC_PRECISION)
OR (A.NUMERIC_PRECISION IS NULL AND B.NUMERIC_PRECISION IS NULL)
)
AND ( (A.NUMERIC_SCALE = B.NUMERIC_SCALE)
OR (A.NUMERIC_SCALE IS NULL AND B.NUMERIC_SCALE IS NULL)
)
AND ( (A.DATETIME_PRECISION = B.DATETIME_PRECISION)
OR (A.DATETIME_PRECISION IS NULL AND B.DATETIME_PRECISION IS NULL)
)
GROUP BY A.TABLE_NAME,A.total_columns,B.TABLE_NAME
HAVING A.total_columns = COUNT(*)
)
--CTE has all table matches. I find the lowest occurring primaryTable for each matchedTable
--That way in my case table2 and table 5 insert into table 1 even though table2 and table5 also match
SELECT 'INSERT INTO ' + MIN(primaryTable) + ' SELECT * FROM ' + matchedTable + '; DROP TABLE ' + matchedTable + ';'
FROM CTE_matching_Tables
GROUP BY matchedTable
```
Results:
```
INSERT INTO table1 SELECT * FROM table2; DROP TABLE table2;
INSERT INTO table3 SELECT * FROM table4; DROP TABLE table4;
INSERT INTO table1 SELECT * FROM table5; DROP TABLE table5;
```
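The core idea — group tables by an identical ordered column signature — can be sketched in SQLite too. SQLite has no `INFORMATION_SCHEMA`, so this sketch substitutes `PRAGMA table_info` (column position, name, type) for the `ORDINAL_POSITION`/`DATA_TYPE` comparison above; table definitions are trimmed-down versions of the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (ID INT, FirstName VARCHAR(25), AvgScore NUMERIC(18,6));
    CREATE TABLE table2 (ID INT, FirstName VARCHAR(25), AvgScore NUMERIC(18,6));
    CREATE TABLE table3 (ID INT, FirstName VARCHAR(25));
    CREATE TABLE table4 (ID INT, FirstName VARCHAR(25));
""")
groups = {}
for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    # Signature = ordered (position, column name, declared type) tuples.
    sig = tuple((cid, col, ctype)
                for cid, col, ctype, *_ in conn.execute(f"PRAGMA table_info({name})"))
    groups.setdefault(sig, []).append(name)
# Keep only signatures shared by more than one table.
matches = [tables for tables in groups.values() if len(tables) > 1]
print(matches)
```

Each inner list could then drive the generated `INSERT INTO ... SELECT * FROM ...` statements, as in the answer.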
|
You'll find a wealth of data in the informational view `INFORMATION_SCHEMA.COLUMNS`.
This will give you (among other things) the table name, order of columns, column names, and column definitions.
So, for example, you could do something like this:
```
;
-- Create a list of table pairs. If you have reason to believe that
-- some tables are more likely to be similar than others, you can
-- modify this CTE as you need to.
with A as (
select T1.table_name
, t2.TABLE_NAME as other_table_Name
from information_Schema.TABLES t1
join information_schema.tables t2
on t1.TABLE_NAME < t2.TABLE_NAME
)
-- Pick all the pairs of table names ...
select *
from A
where NOT exists (
-- where the first table does NOT have any columns ...
select 1
from INFORMATION_SCHEMA.columns c1
where A.TABLE_NAME = C1.TABLE_NAME
and not exists (
-- ... that are NOT found in the second table ...
select 1
from INFORMATION_SCHEMA.columns c2
where c2.Table_Name = A.other_table_Name
AND c1.ordinal_position = c2.ordinal_position
and c1.data_type = c2.data_type
and ((c1.CHARACTER_MAXIMUM_LENGTH is null and
c2.CHARACTER_MAXIMUM_LENGTH is null) or
c1.CHARACTER_MAXIMUM_LENGTH = c2.CHARACTER_MAXIMUM_LENGTH)
)
)
and NOT exists (
-- ... and the second table doesn't have any columns ...
select 1
from INFORMATION_SCHEMA.columns c1
where A.OTHER_TABLE_NAME = C1.TABLE_NAME
and not exists (
-- that are not also found in the first table!
select 1
from INFORMATION_SCHEMA.columns c2
where c2.Table_Name = A.TABLE_NAME
AND c1.ordinal_position = c2.ordinal_position
and c1.data_type = c2.data_type
and ((c1.CHARACTER_MAXIMUM_LENGTH is null and
c2.CHARACTER_MAXIMUM_LENGTH is null) or
c1.CHARACTER_MAXIMUM_LENGTH = c2.CHARACTER_MAXIMUM_LENGTH)
)
)
```
|
Find tables that are copied / duplicate structure within database
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2014-express",
""
] |
I want run an `INSERT INTO table SELECT... FROM...`
The problem is that the table that I am inserting to has 5 columns, whereas the table I am selecting from has only 4. The 5th column needs to be set do a default value that I specify. How can I accomplish this? The query would be something like this (*note*: this is Oracle):
```
INSERT INTO five_column_table
SELECT * FROM four_column_table
--and a 5th column with a default value--;
```
|
Just add the default value to your select list.
```
INSERT INTO five_column_table
SELECT column_a, column_b, column_c, column_d, 'Default Value'
FROM four_column_table;
```
|
Just select the default value in your `SELECT` list. It's always a good idea to explicitly list out columns so I do that here even though it's not strictly necessary.
```
INSERT INTO five_column_table( col1, col2, col3, col4, col5 )
SELECT col1, col2, col3, col4, 'Some Default'
FROM four_column_table
```
If you really don't want to list out the columns
```
INSERT INTO five_column_table
SELECT fct.*, 'Some Default'
FROM four_column_table fct
```
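A quick check of the constant-in-the-`SELECT`-list trick, using SQLite in place of Oracle (column names `a`..`e` are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE four_column_table (a, b, c, d)")
conn.execute("CREATE TABLE five_column_table (a, b, c, d, e)")
conn.execute("INSERT INTO four_column_table VALUES (1, 2, 3, 4)")
# The literal in the SELECT list fills the fifth column for every row.
conn.execute("""
    INSERT INTO five_column_table (a, b, c, d, e)
    SELECT a, b, c, d, 'Some Default'
    FROM four_column_table
""")
row = conn.execute("SELECT * FROM five_column_table").fetchone()
print(row)  # (1, 2, 3, 4, 'Some Default')
```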
|
INSERT INTO using a query, and add a default value
|
[
"",
"sql",
"oracle",
"insert-into",
""
] |
Description of my problem:
For example, I have a table "Item" with columns "id", "name", "color", but for some weird reason (please don't ask why - it wasn't me) the "color" column holds a description of the item and essentially should be called "description".
A lot of code in the project depends on it and simply renaming the column name will take a tremendous amount of time.
Question:
Is there a way to create some kind of pointer to that column and call it correctly so the sql below will work as expected:
`select i.id, i.name, i.description from Item i`
(it will know that when I type "description" it should look into the "color" column).
|
Short answer: no.

Long answer: you could create insert and update triggers in Oracle that always keep those columns in sync. But the best thing to do would be to live with it, or rename it and fix your code. Having triggers and storing data in two columns just so your old code doesn't break, while letting your new code use the new column, feels like an anti-pattern to me. Frankly, even if Oracle DID have what you were asking for out of the box it would STILL feel like an anti-pattern. As a developer I would be scratching my head with things like: "Let's see all the places where we ever select `color`." Oops, you just missed all the places where you use `description` instead. It seems VERY confusing to try and go this route.

Again, I suggest living with it, or taking the time to update the project with the new correct name.
|
You can create an updatable view with a properly named column.
See <https://msdn.microsoft.com/en-us/library/ms180800.aspx>
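The read-only half of the view idea is easy to demonstrate; here SQLite stands in for Oracle (in SQLite you would additionally need `INSTEAD OF` triggers to make the view writable, and table/row values here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Item (id INTEGER, name TEXT, color TEXT)")
conn.execute("INSERT INTO Item VALUES (1, 'widget', 'actually a description')")
# The view exposes the badly named "color" column as "description"
# without touching the underlying table or the code that still uses it.
conn.execute("CREATE VIEW Item_v AS SELECT id, name, color AS description FROM Item")
row = conn.execute("SELECT id, name, description FROM Item_v").fetchone()
print(row)
```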
|
Oracle SQL: is it possible to reference a column by different name without renaming it
|
[
"",
"sql",
"oracle",
"tablecolumn",
""
] |
I am trying to create a table (shown below) for my final year project,
but I encountered SQL name conflict error. It seems the words 'User'
and 'role' are reserved by the SQL. Is there any way I can resolve the
issue, without renaming the words.
```
CREATE TABLE User
(
ID INT generated by default as identity PRIMARY KEY,
username varchar(45) NOT NULL,
password varchar(255) NOT NULL,
role varchar(255) NOT NULL
);
```
|
Put double quotes around User. In general it never hurts to quote all your identifiers.
|
Throw square brackets around their name (or any reserved word that you want as your column names).
[User]
or
[Role]
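The double-quote style is the portable one; here it is shown in SQLite, which accepts it like Derby does (note that Derby folds *unquoted* identifiers to upper case, so quoting also makes the name case-sensitive there; the identity clause is replaced by SQLite's `INTEGER PRIMARY KEY`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE "User" (
        ID INTEGER PRIMARY KEY,
        username VARCHAR(45) NOT NULL,
        "role" VARCHAR(255) NOT NULL
    )
""")
conn.execute('INSERT INTO "User" (username, "role") VALUES (?, ?)',
             ("alice", "admin"))
row = conn.execute('SELECT username, "role" FROM "User"').fetchone()
print(row)
```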
|
How to resolve SQL name conflict without renaming table attributes?
|
[
"",
"sql",
"database",
"derby",
""
] |
I am trying to work out the majority sub group of a group. For an example let's say my table looks like this:
```
+--------------------------------------------------+
| city | car_colour | car_type | qty |
+--------------------------------------------------+
| ------------------------------------------------ |
| manchester | Red | Sports | 7 |
| manchester | Red | 4x4 | 9 |
| manchester | Blue | 4x4 | 8 |
| london | Red | Sports | 2 |
| london | Blue | 4x4 | 3 |
| leeds | Red | Sports | 5 |
| leeds | Blue | Sports | 6 |
| leeds | Blue | 4X4 | 1 |
+--------------------------------------------------+
```
I've tried to find a pure SQL solution so that I can see: in each city, which colour of car has the largest quantity.
I can do:
```
select city, cars, sum(qty)
from table
group by city, cars
```
to get:
```
+------------+------+----+
| manchester | red | 16 |
| manchester | blue | 8 |
| london | red | 2 |
| london | blue | 3 |
| leeds | red | 5 |
| leeds | blue | 7 |
+------------+------+----+
```
but is there any way I can use a sub-query to get the max of the results, which would return the max colour for each city, so the results would show:
```
+------------+------+
| manchester | red |
| london | blue |
| leeds | blue |
+------------+------+
```
I can do these calculations in my Python script but would prefer a pure SQL solution.
Hope this makes sense, thanks for any help in advance :)
Tommy
|
```
select distinct p.city, p.car_colour,sq.qty as qty
from ( select t.car_colour,t.city, sum(t.qty) as qty
from table1 t
group by t.car_colour,t.city
)p
join ( select q.city,max(q.qty) qty from
( select t.car_colour,t.city, sum(t.qty) as qty
from table1 t
group by t.car_colour,t.city
)q
group by q.city
)sq
on p.city=sq.city and p.qty=sq.qty
```
[SQL FIDDLE DEMO](http://www.sqlfiddle.com/#!9/1baeb/44)
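The same join-to-the-per-city-max query can be checked against the question's data in SQLite (table name `table1` as in the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE table1 (city TEXT, car_colour TEXT, car_type TEXT, qty INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?,?,?,?)", [
    ("manchester", "Red", "Sports", 7), ("manchester", "Red", "4x4", 9),
    ("manchester", "Blue", "4x4", 8),   ("london", "Red", "Sports", 2),
    ("london", "Blue", "4x4", 3),       ("leeds", "Red", "Sports", 5),
    ("leeds", "Blue", "Sports", 6),     ("leeds", "Blue", "4X4", 1),
])
rows = conn.execute("""
    SELECT DISTINCT p.city, p.car_colour
    FROM (SELECT city, car_colour, SUM(qty) AS qty
          FROM table1 GROUP BY city, car_colour) p
    JOIN (SELECT city, MAX(qty) AS qty
          FROM (SELECT city, car_colour, SUM(qty) AS qty
                FROM table1 GROUP BY city, car_colour)
          GROUP BY city) sq
      ON p.city = sq.city AND p.qty = sq.qty
    ORDER BY p.city
""").fetchall()
print(rows)
```

Red wins manchester (16 vs 8) and Blue wins london (3 vs 2) and leeds (7 vs 5).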
|
In case you use MS SQL:
```
DECLARE @t TABLE
(
city NVARCHAR(MAX) ,
color NVARCHAR(MAX) ,
qty INT
)
INSERT INTO @t
VALUES ( 'manchester', 'Red', 7 ),
( 'manchester', 'Red', 9 ),
( 'manchester', 'Blue', 8 ),
( 'london', 'Red', 2 ),
( 'london', 'Blue', 3 ),
( 'leeds', 'Red', 5 ),
( 'leeds', 'Blue', 6 ),
( 'leeds', 'Blue', 1 )
SELECT city , color
FROM ( SELECT city ,
color ,
SUM(qty) AS q ,
ROW_NUMBER() OVER ( PARTITION BY city ORDER BY SUM(qty) DESC ) AS rn
FROM @t
GROUP BY city , color
) t
WHERE rn = 1
```
Output:
```
city color
leeds Blue
london Blue
manchester Red
```
|
sql group by sub group
|
[
"",
"sql",
""
] |
Suppose I have a table like this:
| subject | flag |
| --- | --- |
| this is a test | 2 |
`subject` is of type `text`, and `flag` is of type `int`. I would like to transform this table to something like this in Postgres:
| token | flag |
| --- | --- |
| this | 2 |
| is | 2 |
| a | 2 |
| test | 2 |
Is there an easy way to do this?
|
Use a `LATERAL` join - with [`string_to_table()`](https://www.postgresql.org/docs/current/functions-string.html#id-1.5.8.10.7.2.2.35.1.1.1) in [Postgres 14+](https://www.postgresql.org/message-id/flat/CAFj8pRD8HOpjq2TqeTBhSo_QkzjLOhXzGCpKJ4nCs7Y9SQkuPw%40mail.gmail.com).
Minimal form:
```
SELECT token, flag
FROM tbl, string_to_table(subject, ' ') token
WHERE flag = 2;
```
The comma in the `FROM` list is (almost) equivalent to `CROSS JOIN`, `LATERAL` is automatically assumed for set-returning functions (SRF) in the `FROM` list. Why "almost"? See:
* ["invalid reference to FROM-clause entry for table" in Postgres query](https://stackoverflow.com/questions/34597700/invalid-reference-to-from-clause-entry-for-table-in-postgres-query/34598292#34598292)
The alias "token" for the derived table is also assumed as column alias for a single anonymous column, and we assumed distinct column names across the query. Equivalent, more verbose and less error-prone:
```
SELECT s.token, t.flag
FROM tbl t
CROSS JOIN LATERAL string_to_table(subject, ' ') AS s(token)
WHERE t.flag = 2;
```
Or move the SRF to the `SELECT` list, which is allowed in Postgres (but not in standard SQL), to (almost) the same effect:
```
SELECT string_to_table(subject, ' ') AS token, flag
FROM tbl
WHERE flag = 2;
```
The last one seems acceptable since SRF in the `SELECT` list have been sanitized in Postgres 10. See:
* [What is the expected behaviour for multiple set-returning functions in SELECT clause?](https://stackoverflow.com/questions/39863505/what-is-the-expected-behaviour-for-multiple-set-returning-functions-in-select-cl/39864815#39864815)
If `string_to_table()` does not return any rows (empty or null `subject`), the (implicit) join eliminates the row from the result. Use `LEFT JOIN ... ON true` to keep qualifying rows from `tbl`. See:
* [What is the difference between a LATERAL JOIN and a subquery in PostgreSQL?](https://stackoverflow.com/questions/28550679/what-is-the-difference-between-lateral-and-a-subquery-in-postgresql/28557803#28557803)
We could also use [`regexp_split_to_table()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER), but that's slower. Regular expressions are powerful but expensive. See:
* [SQL select rows containing substring in text field](https://stackoverflow.com/questions/21832375/sql-select-rows-containing-substring-in-text-field/21832550#21832550)
* [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450)
In **Postgres 13** or older use `unnest(string_to_array(subject, ' '))` instead of `string_to_table(subject, ' ')`.
|
I think it's not necessary to use a join, just the `unnest()` function in conjunction with `string_to_array()` should do it:
```
SELECT unnest(string_to_array(subject, ' ')) as "token", flag FROM test;
token | flag
-------+-------
this | 2
is | 2
a | 2
test | 2
```
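For reference, outside the database the same row expansion is a one-line comprehension; this only shows the target shape of the result set, not a substitute for doing it in Postgres:

```python
# One (subject, flag) row fans out into one (token, flag) row per word.
rows = [("this is a test", 2)]
expanded = [(token, flag)
            for subject, flag in rows
            for token in subject.split(" ")]
print(expanded)
```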
|
Split column into multiple rows in Postgres
|
[
"",
"sql",
"postgresql",
"split",
"set-returning-functions",
""
] |
So I have a table like so . . .
```
Table A
| GROUP_NAME | USERID |
| group_A | user1 |
| group_A | user2 |
| group_B | user3 |
| group_A | user4 |
| group_B | user5 |
| group_C | user6 |
| group_B | user7 |
| group_C | user8 |
| group_C | user9 |
| group_A | user10 |
```
What I want is the total number of rows where the total number of userids in any particular group is less than or greater than a certain number.
The closest I can come is something like this:
```
select count(distinct group_name)
from Table_A
group by userid having count(*) < 5;
```
. . . but this gives me a separate row for each result.
What I want is a total count of all rows that are returned.
This is for a table in an Oracle database.
|
If you want the total number of users in the groups where the count of users in that group is less than 5 (say), use
```
SELECT group_name,
COUNT(userid)
FROM table_a
GROUP BY group_name
HAVING COUNT(userid) < 5;
```
For total number of distinct users, use
```
SELECT group_name,
COUNT(DISTINCT userid)
FROM table_a
GROUP BY group_name
HAVING COUNT(DISTINCT userid) < 5;
```
For the total number of rows returned from the above query, use
```
SELECT COUNT(1)
FROM (SELECT group_name,
COUNT(DISTINCT userid)
FROM table_a
GROUP BY group_name
HAVING COUNT(DISTINCT userid) < 5);
```
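The count-over-a-grouped-subquery pattern runs unchanged in SQLite, using the question's data (threshold 4, as in the update at the end of the other answer, so that the 4-user group_A is excluded):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (group_name TEXT, userid TEXT)")
conn.executemany("INSERT INTO table_a VALUES (?,?)", [
    ("group_A", "user1"), ("group_A", "user2"), ("group_B", "user3"),
    ("group_A", "user4"), ("group_B", "user5"), ("group_C", "user6"),
    ("group_B", "user7"), ("group_C", "user8"), ("group_C", "user9"),
    ("group_A", "user10"),
])
# Inner query: one row per qualifying group; outer query: count those rows.
(total,) = conn.execute("""
    SELECT COUNT(*)
    FROM (SELECT group_name
          FROM table_a
          GROUP BY group_name
          HAVING COUNT(DISTINCT userid) < 4)
""").fetchone()
print(total)  # group_B and group_C qualify
```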
|
One way is to use **COUNT() OVER()** analytic function.
For example,
**Setup**
```
SQL> CREATE TABLE t
2 (GROUP_NAME varchar2(7), USERID varchar2(6))
3 ;
Table created.
SQL>
SQL> INSERT ALL
2 INTO t (GROUP_NAME, USERID)
3 VALUES ('group_A', 'user1')
4 INTO t (GROUP_NAME, USERID)
5 VALUES ('group_A', 'user2')
6 INTO t (GROUP_NAME, USERID)
7 VALUES ('group_B', 'user3')
8 INTO t (GROUP_NAME, USERID)
9 VALUES ('group_A', 'user4')
10 INTO t (GROUP_NAME, USERID)
11 VALUES ('group_B', 'user5')
12 INTO t (GROUP_NAME, USERID)
13 VALUES ('group_C', 'user6')
14 INTO t (GROUP_NAME, USERID)
15 VALUES ('group_B', 'user7')
16 INTO t (GROUP_NAME, USERID)
17 VALUES ('group_C', 'user8')
18 INTO t (GROUP_NAME, USERID)
19 VALUES ('group_C', 'user9')
20 INTO t (GROUP_NAME, USERID)
21 VALUES ('group_A', 'user10')
22 SELECT * FROM dual
23 ;
10 rows created.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
```
**Query**
```
SQL> SELECT t.*,
2 COUNT(GROUP_NAME) OVER(PARTITION BY GROUP_NAME ORDER BY GROUP_NAME) cnt
3 FROM t;
GROUP_N USERID CNT
------- ------ ----------
group_A user10 4
group_A user2 4
group_A user1 4
group_A user4 4
group_B user3 3
group_B user5 3
group_B user7 3
group_C user8 3
group_C user9 3
group_C user6 3
10 rows selected.
SQL>
```
To **filter** out the rows having lesser count than desired:
```
SQL> WITH DATA AS
2 (SELECT t.*,
3 COUNT(GROUP_NAME) OVER(PARTITION BY GROUP_NAME ORDER BY GROUP_NAME) cnt
4 FROM t
5 )
6 SELECT GROUP_NAME, USERID FROM DATA WHERE cnt < 4;
GROUP_N USERID
------- ------
group_B user3
group_B user5
group_B user7
group_C user8
group_C user9
group_C user6
6 rows selected.
SQL>
```
**Update** As per OP's request in the comments:
```
SQL> WITH DATA AS
2 (SELECT t.*,
3 COUNT(GROUP_NAME) OVER(PARTITION BY GROUP_NAME ORDER BY GROUP_NAME) cnt
4 FROM t
5 )
6 SELECT count(DISTINCT group_name) cnt FROM DATA WHERE cnt < 4;
CNT
----------
2
SQL>
```
|
SQL total count of another count
|
[
"",
"sql",
"oracle",
""
] |
I've heard that it is faster to select columns manually ("col1, col2, col3, etc") instead of querying them all with "\*".
But what if I don't even want to query all columns of a table? Would it be faster to query, for example, only "col1, col2" instead of "col1, col2, col3, col4"?
From my understanding SQL has to search through all of the columns anyway, and just the return-result changes. I'd like to know if I can achieve a gain in performance by only choosing the right columns.
(I'm doing this anyway, but a backend API of one of my applications more often than not returns all columns, so I'm thinking about letting the user manually select the columns he wants)
|
In general, reducing the number of columns in the `select` is a minor optimization. It means that less data is being returned from the database server to the application calling the server. Less data is usually faster.
Under most circumstances, this a minor improvement. There are some cases where the improvement can be more important:
* If a covering index is available for the query, so the index satisfies the query without having to access data pages.
* If some fields are very long, so records occupy multiple pages.
* If the volume of data being retrieved is a small fraction (think < 10%) of the overall data in each record.
Listing the columns individually is a good idea, because it protects code from changes in underlying schema. For instance, if the name of a column is changed, then a query that lists columns explicitly will break with an easy-to-understand error. This is better than a query that runs and produces erroneous results.
|
You should try not to use `select *`.
* **Inefficiency in moving data to the consumer.** When you SELECT \*, you're often retrieving more columns from the database than your application really needs to function. This causes more data to move from the database server to the client, slowing access and increasing load on your machines, as well as taking more time to travel across the network. This is especially true when someone adds new columns to underlying tables that didn't exist and weren't needed when the original consumers coded their data access.
* **Indexing issues.** Consider a scenario where you want to tune a query to a high level of performance. If you were to use \*, and it returned more columns than you actually needed, the server would often have to perform more expensive methods to retrieve your data than it otherwise might. For example, you wouldn't be able to create an index which simply covered the columns in your SELECT list, and even if you did (including all columns [*shudder*]), the next guy who came around and added a column to the underlying table would cause the optimizer to ignore your optimized covering index, and you'd likely find that the performance of your query would drop substantially for no readily apparent reason.
* **Binding Problems.** When you SELECT \*, it's possible to retrieve two columns of the same name from two different tables. This can often crash your data consumer. Imagine a query that joins two tables, both of which contain a column called "ID". How would a consumer know which was which? SELECT \* can also confuse views (at least in some versions SQL Server) when underlying table structures change -- [the view is not rebuilt, and the data which comes back can be nonsense](https://stackoverflow.com/questions/3639861/why-is-select-considered-harmful/3639964#3639964). And the worst part of it is that you can take care to name your columns whatever you want, but the next guy who comes along might have no way of knowing that he has to worry about adding a column which will collide with your already-developed names.
I got this from [this](https://stackoverflow.com/questions/3639861/why-is-select-considered-harmful/3639964#3639964) answer.
|
Is it faster to only query specific columns?
|
[
"",
"mysql",
"sql",
"performance",
"select",
""
] |
I have a table named `USER_OPTIONS`. If I query this table with:
```
SELECT * FROM USER_OPTIONS
```
I get the following result:

Now, I need to copy the same values (A,1), (B,2), (C,3), etc to the same table but change the `USER_ID` value.
The final result should be something like:
```
2 A 1
2 B 2
2 C 3
2 D 4
2 E 5
2 F 6
3 A 1
3 B 2
3 C 3
3 D 4
3 E 5
3 F 6
```
"Quite simple just add a loop and do it" you might think. But here's the catch...
I need to do this with just one statement. Is there a way to do this? How?
|
Just do a plain **INSERT INTO..SELECT**.
For example,
```
INSERT INTO user_options SELECT user_id + 1, code, code_value FROM user_options;
COMMIT;
SELECT * FROM user_options;
```
**TEST CASE**
```
SQL> SELECT * FROM user_options;
USER_ID C CODE_VALUE
---------- - ----------
2 A 1
2 B 2
2 C 3
2 D 4
2 E 5
2 F 6
6 rows selected.
SQL>
SQL> INSERT INTO user_options SELECT user_id + 1, code, code_value FROM user_options;
6 rows created.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
SQL> SELECT * FROM user_options ORDER BY 1, 2, 3;
USER_ID C CODE_VALUE
---------- - ----------
2 A 1
2 B 2
2 C 3
2 D 4
2 E 5
2 F 6
3 A 1
3 B 2
3 C 3
3 D 4
3 E 5
3 F 6
12 rows selected.
SQL>
```
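The self-insert with a shifted key works the same way in SQLite; a minimal check with the question's six (code, value) pairs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_options (user_id INTEGER, code TEXT, code_value INTEGER)")
conn.executemany("INSERT INTO user_options VALUES (?,?,?)",
                 [(2, c, i + 1) for i, c in enumerate("ABCDEF")])
# Re-insert every row with a new USER_ID in one statement.
conn.execute("""
    INSERT INTO user_options
    SELECT user_id + 1, code, code_value FROM user_options
""")
rows = conn.execute(
    "SELECT user_id, code, code_value FROM user_options ORDER BY 1, 2").fetchall()
print(rows)
```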
|
Try this
`Insert into user_options Select 3, code, code_value from user_options;`
|
Copy data within a table and changing values
|
[
"",
"sql",
"oracle",
""
] |
I have a table like below. What I need is that, for any particular fund and up to any particular date, the logic will sum the amount value. Let's say I need the sum for 3 dates: 01/28/2015, 03/30/2015 and 04/01/2015. The logic will check how many records there are in the table up to the first date; if it finds more than one record it'll sum their amount values. Then for the next date it'll sum up to that date, but starting from the previous date it had already summed through.
```
Id Fund Date Amount
1 A 01/20/2015 250
2 A 02/28/2015 300
3 A 03/20/2015 400
4 A 03/30/2015 200
5 B 04/01/2015 500
6 B 04/01/2015 600
```
I want result to be like below
```
Id Fund Date SumOfAmount
1 A 02/28/2015 550
2 A 03/30/2015 600
3 B 04/01/2015 1100
```
|
Based on your question, it seems that you want to select a set of dates, and then for each fund and selected date, get the sum of the fund amounts from the selected date to the previous selected date. Here is the result set I think you should be expecting:
```
Fund Date SumOfAmount
A 2015-02-28 550.00
A 2015-03-30 600.00
B 2015-04-01 1100.00
```
Here is the code to produce this output:
```
DECLARE @Dates TABLE
(
SelectedDate DATE PRIMARY KEY
)
INSERT INTO @Dates
VALUES
('02/28/2015')
,('03/30/2015')
,('04/01/2015')
DECLARE @FundAmounts TABLE
(
Id INT PRIMARY KEY
,Fund VARCHAR(5)
,Date DATE
,Amount MONEY
);
INSERT INTO @FundAmounts
VALUES
(1, 'A', '01/20/2015', 250)
,(2, 'A', '02/28/2015', 300)
,(3, 'A', '03/20/2015', 400)
,(4, 'A', '03/30/2015', 200)
,(5, 'B', '04/01/2015', 500)
,(6, 'B', '04/01/2015', 600);
SELECT
F.Fund
,D.SelectedDate AS Date
,SUM(F.Amount) AS SumOfAmount
FROM
(
SELECT
SelectedDate
,LAG(SelectedDate,1,'1/1/1900') OVER (ORDER BY SelectedDate ASC) AS PreviousDate
FROM @Dates
) D
JOIN
@FundAmounts F
ON
F.Date BETWEEN DATEADD(DAY,1,D.PreviousDate) AND D.SelectedDate
GROUP BY
D.SelectedDate
,F.Fund
```
EDIT: Here is an alternative to the `LAG` function for this example:
```
FROM
(
SELECT
SelectedDate
,ISNULL((SELECT TOP 1 SelectedDate FROM @Dates WHERE SelectedDate < Dates.SelectedDate ORDER BY SelectedDate DESC),'1/1/1900') AS PreviousDate
FROM @Dates Dates
) D
```
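A portable sketch of the same bucketing (the correlated-subquery variant from the EDIT), using SQLite: `IFNULL`/`MAX` replace `ISNULL`/`TOP 1`, dates are ISO strings so text comparison orders correctly, and `date > previous_date` stands in for `DATEADD(DAY,1,...) BETWEEN`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dates (selected_date TEXT PRIMARY KEY);
    INSERT INTO dates VALUES ('2015-02-28'), ('2015-03-30'), ('2015-04-01');
    CREATE TABLE fund_amounts (id INTEGER, fund TEXT, date TEXT, amount INTEGER);
    INSERT INTO fund_amounts VALUES
        (1,'A','2015-01-20',250), (2,'A','2015-02-28',300),
        (3,'A','2015-03-20',400), (4,'A','2015-03-30',200),
        (5,'B','2015-04-01',500), (6,'B','2015-04-01',600);
""")
rows = conn.execute("""
    SELECT f.fund, d.selected_date, SUM(f.amount)
    FROM (SELECT selected_date,
                 IFNULL((SELECT MAX(selected_date) FROM dates
                         WHERE selected_date < d0.selected_date),
                        '1900-01-01') AS previous_date
          FROM dates d0) d
    JOIN fund_amounts f
      ON f.date > d.previous_date AND f.date <= d.selected_date
    GROUP BY d.selected_date, f.fund
    ORDER BY f.fund, d.selected_date
""").fetchall()
print(rows)
```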
|
If i change your incorrect sample data to ...
```
CREATE TABLE TableName
([Id] int, [Fund] varchar(1), [Date] datetime, [Amount] int)
;
INSERT INTO TableName
([Id], [Fund], [Date], [Amount])
VALUES
(1, 'A', '2015-01-28 00:00:00', 250),
(2, 'A', '2015-01-28 00:00:00', 300),
(3, 'A', '2015-03-30 00:00:00', 400),
(4, 'A', '2015-03-30 00:00:00', 200),
(5, 'B', '2015-04-01 00:00:00', 500),
(6, 'B', '2015-04-01 00:00:00', 600)
;
```
this query using GROUP BY works:
```
SELECT MIN(Id) AS Id,
MIN(Fund) AS Fund,
[Date],
SUM(Amount) AS SumOfAmount
FROM dbo.TableName t
WHERE [Date] IN ('01/28/2015','03/30/2015','04/01/2015')
GROUP BY [Date]
```
[**Demo**](http://sqlfiddle.com/#!6/e9c89/1/0)
|
Summing up the records as per given conditions
|
[
"",
"sql",
"sql-server",
"vba",
"ms-access",
""
] |
I have derived a table with 3 columns:
```
computerID
ScanDate
vulnerability level.
```
I want to group by **computerID** and get the vulnerability level of the latest **scanDate** WITHOUT having to add an inner join (the table is pretty big).
Is it possible?
|
If you want it without the `join` keyword, you can do this:
```
select *
from your_table t
where (computerID, ScanDate) in
(
select computerID, max(ScanDate)
from your_table t1
where t1.computerID=t.computerID
)
```
[SQLFIDDLE DEMO](http://sqlfiddle.com/#!9/51bc1/2)
|
Sometimes, using `not exists` can work better than `group by`. Not always, but it is worth a try:
```
select d.*
from derived d
where not exists (select 1
from derived d2
where d2.computer = d.computer and d2.scandate > d.scandate
);
```
Alternatively, if you are already doing a `group by`, then there is the `substring_index()`/`group_concat()` trick:
```
select computer, max(scandate),
substring_index(group_concat(vulnerability order by scandate desc), ',', 1)
from derived d2
group by computer;
```
You need to be a little careful with this. If `vulnerability` is a string and can contain commas, then a different separator needs to be used. If `vulnerability` is not a string, it will be converted to one. And, if there are too many dates, then you might hit the limits of the `group_concat()` (the maximum length of the `group_concat()` result is controlled by a parameter, so this can also be fixed).
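The `not exists` variant is easy to verify; this sketch uses SQLite and made-up rows (two scans for one computer, one for another):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE derived (computerID INTEGER, scanDate TEXT, level INTEGER)")
conn.executemany("INSERT INTO derived VALUES (?,?,?)", [
    (1, "2015-01-01", 3), (1, "2015-06-01", 5),
    (2, "2015-03-01", 2),
])
# Keep a row only if no later scan exists for the same computer.
rows = conn.execute("""
    SELECT d.computerID, d.scanDate, d.level
    FROM derived d
    WHERE NOT EXISTS (SELECT 1 FROM derived d2
                      WHERE d2.computerID = d.computerID
                        AND d2.scanDate > d.scanDate)
    ORDER BY d.computerID
""").fetchall()
print(rows)
```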
|
How to get extra information from the highest values from SQL table?
|
[
"",
"mysql",
"sql",
""
] |
A bit of a newbie question, probably an INNER JOIN with an "AS" statement, but I can't figure it out...
This is for a MYSQL based competition app. I want to select the "img\_title" for both img\_id1 and img\_id2. I can't figure out how to do it and still see which title is assigned to the associated \_id1 or \_id2.
My tables:
* competitions
+ comp\_id
+ img\_id1
+ img\_id2
* on\_deck
+ img\_id
+ img\_title
Desired results:
comp\_id | img\_id1 | img\_title1 |img\_id2 | img\_title2
|
```
select comp_id, img_id1, b.img_title as img_title1,
img_id2, b2.img_title as img_title2
from competitions a
left outer join on_deck b on b.img_id = a.img_id1
left outer join on_deck b2 on b2.img_id = a.img_id2
```
Switch left outer join to inner join if you want to exclude rows in competitions that do not have two matching img\_ids.
|
You need a join for each image:
```
SELECT comp.comp_id, img1.img_id, img1.img_title, img2.img_id, img2.img_title
FROM competitions comp
INNER JOIN on_deck img1 ON img1.img_id = comp.img_id1
INNER JOIN on_deck img2 ON img2.img_id = comp.img_id2
```
`LEFT JOIN` if `img_id1` or `img_id2` can be `NULL`.
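Joining the same lookup table twice under two aliases can be checked in SQLite with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE competitions (comp_id INTEGER, img_id1 INTEGER, img_id2 INTEGER);
    CREATE TABLE on_deck (img_id INTEGER, img_title TEXT);
    INSERT INTO competitions VALUES (1, 10, 20);
    INSERT INTO on_deck VALUES (10, 'sunset'), (20, 'harbour');
""")
# One join per image slot; the aliases keep the two titles apart.
row = conn.execute("""
    SELECT c.comp_id, c.img_id1, img1.img_title, c.img_id2, img2.img_title
    FROM competitions c
    JOIN on_deck img1 ON img1.img_id = c.img_id1
    JOIN on_deck img2 ON img2.img_id = c.img_id2
""").fetchone()
print(row)
```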
|
MySQL how to select what I need, most likely an inner join
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have a table containing email addresses and account numbers (amongst other data).
I have removed the other data for simplification.
```
123456, joe@place.com
123457, phil@place.com
123456, jil@place.com
123456, jane@place.com
123458, john@place.com
```
Per the example above, most accounts have multiple email addresses.
I need to create a query to tell me:
* how many accounts have 1 email address
* how many accounts have 2 email addresses
...
* how many accounts have 10 email addresses
|
The inner query (`q`) will count how many distinct email addresses each account has. The outer query will then count how many accounts fall into each counting bucket (1, 2, 3, ...).
```
SELECT q.email_counter, COUNT(*) AS num_accounts
FROM (SELECT account_number,
COUNT(DISTINCT email_address) AS email_counter
FROM YourTable
GROUP BY account_number) q
GROUP BY q.email_counter;
```
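Run against the question's five rows in SQLite, the nested grouping gives the expected distribution (two accounts with 1 email, one account with 3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_number INTEGER, email_address TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?,?)", [
    (123456, "joe@place.com"), (123457, "phil@place.com"),
    (123456, "jil@place.com"), (123456, "jane@place.com"),
    (123458, "john@place.com"),
])
rows = conn.execute("""
    SELECT q.email_counter, COUNT(*) AS num_accounts
    FROM (SELECT account_number,
                 COUNT(DISTINCT email_address) AS email_counter
          FROM accounts
          GROUP BY account_number) q
    GROUP BY q.email_counter
    ORDER BY q.email_counter
""").fetchall()
print(rows)  # [(email_count, num_accounts), ...]
```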
|
You can use
```
select acc_no, count(*) from your_table group by acc_no
```
|
How do I do a SQL query based on number of quantity
|
[
"",
"sql",
"group-by",
"sum",
""
] |
I have this:
```
Dr. LeBron Jordan
John Bon Jovi
```
I would like this:
```
Dr. Jordan
John Jovi
```
How do I go about it? I think it's regexp\_replace.
Thanks for looking.
Any help is much appreciated.
|
Here's a way using regexp\_replace as you mentioned, using several forms of a name for testing. It is more powerful than nested SUBSTR()/INSTR(), but you need to get your head around regular expressions, which will give you far more pattern-matching power for complex patterns once you learn them:
```
with tbl as (
select 'Dr. LeBron Jordan' data from dual
union
select 'John Bon Jovi' data from dual
union
select 'Yogi Bear' data from dual
union
select 'Madonna' data from dual
union
select 'Mr. Henry Cabot Henhouse' data from dual )
select regexp_replace(data, '^([^ ]*) .* ([^ ]*)$', '\1 \2') corrected_string from tbl;
CORRECTED_STRING
----------------
Dr. Jordan
John Jovi
Madonna
Mr. Henhouse
Yogi Bear
```
The regex can be read as:
```
^ At the start of the string (anchor the pattern to the start)
( Start remembered group 1
[^ ]* Zero or more characters that are not a space
) End remembered group 1
space Where followed by a literal space
. Followed by any character
* Followed by any number of the previous any character
space Followed by another literal space
( Start remembered group 2
[^ ]* Zero or more characters that are not a space
) End remembered group 2
$ Where it occurs at the end of the line (anchored to the end)
```
Then the '\1 \2' means return remembered group 1, followed by a space, followed by remembered group 2.
If the pattern cannot be found, the original string is returned. This can be seen by surrounding the returned groups with square brackets and running again:
```
...
select regexp_replace(data, '^([^ ]*) .* ([^ ]*)$', '[\1] [\2]')
corrected_string from tbl;
CORRECTED_STRING
[Dr.] [Jordan]
[John] [Jovi]
Madonna
[Mr.] [Henhouse]
Yogi Bear
```
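The same pattern behaves identically in Python's `re` module, which makes it easy to experiment with: `re.sub` also returns the input unchanged when the pattern doesn't match, mirroring `regexp_replace`:

```python
import re

# Keep the first and last space-delimited words, drop everything between;
# one- and two-word strings come back unchanged because the pattern
# (which requires at least two spaces) does not match them.
pattern = re.compile(r"^([^ ]*) .* ([^ ]*)$")
names = ["Dr. LeBron Jordan", "John Bon Jovi", "Yogi Bear",
         "Madonna", "Mr. Henry Cabot Henhouse"]
fixed = [pattern.sub(r"\1 \2", n) for n in names]
print(fixed)
```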
|
If it is only two words, it will return that. ("Lebron Jordan" will return "Lebron Jordan")
If it is three words, it will take out the middle word ("Dr. LeBron Jordan" will return "Dr. Jordan")
```
DECLARE @firstSpace int = 0
DECLARE @secondSpace int = 0
DECLARE @string nvarchar(50) = 'Dr. Lebron Jordan'
SELECT @string = LTRIM(RTRIM(@string))
SELECT @firstSpace = CHARINDEX(' ', @string, 0)
SELECT @secondSpace = CHARINDEX(' ', @string, @firstSpace + 1)
IF @secondSpace = 0
BEGIN
SELECT @string
END
ELSE
BEGIN
SELECT SUBSTRING(@string, 0, @firstSpace) + SUBSTRING(@string, @secondSpace, (LEN(@string) - @secondSpace) + 1)
END
```
|
Removing one word in a string (or between two white spaces)
|
[
"",
"sql",
"oracle",
"plsql",
"regexp-replace",
""
] |
How do I delete duplicate rows in a Postgres 9 table? The rows are complete duplicates on every field AND there is no individual field that could be used as a unique key, so I can't just `GROUP BY` columns and use a `NOT IN` statement.
I'm looking for a single SQL statement, not a solution that requires me to create a temporary table and insert records into that. I know how to do that, but it requires more work to fit into my automated process.
Table definition:
```
jthinksearch=> \d releases_labels;
Unlogged table "discogs.releases_labels"
Column | Type | Modifiers
------------+---------+-----------
label | text |
release_id | integer |
catno | text |
Indexes:
"releases_labels_catno_idx" btree (catno)
"releases_labels_name_idx" btree (label)
Foreign-key constraints:
"foreign_did" FOREIGN KEY (release_id) REFERENCES release(id)
```
Sample data:
```
jthinksearch=> select * from releases_labels where release_id=6155;
label | release_id | catno
--------------+------------+------------
Warp Records | 6155 | WAP 39 CDR
Warp Records | 6155 | WAP 39 CDR
```
|
If you can afford to rewrite the whole table, this is probably the simplest approach:
```
WITH Deleted AS (
DELETE FROM discogs.releases_labels
RETURNING *
)
INSERT INTO discogs.releases_labels
SELECT DISTINCT * FROM Deleted
```
If you need to specifically target the duplicated records, you can make use of the internal `ctid` field, which uniquely identifies a row:
```
DELETE FROM discogs.releases_labels
WHERE ctid NOT IN (
SELECT MIN(ctid)
FROM discogs.releases_labels
GROUP BY label, release_id, catno
)
```
Be very careful with `ctid`; it changes over time. But you can rely on it staying the same within the scope of a single statement.
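Before running the delete, a dry run can count the rows it would touch (same subquery, `SELECT` instead of `DELETE`):

```
-- Dry run: how many duplicate rows would the DELETE remove?
SELECT count(*)
FROM discogs.releases_labels
WHERE ctid NOT IN (
    SELECT MIN(ctid)
    FROM discogs.releases_labels
    GROUP BY label, release_id, catno
);
```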
|
### Single SQL statement
Here is a solution that deletes duplicates in place:
```
DELETE FROM releases_labels r
WHERE EXISTS (
SELECT 1
FROM releases_labels r1
WHERE r1 = r
AND r1.ctid < r.ctid
);
```
Since there is no unique key I am (ab)using the tuple ID `ctid` for the purpose. The physically first row survives in each set of dupes.
More explanation in these related answers:
* [In-order sequence generation](https://stackoverflow.com/questions/17500013/in-order-sequence-generation/17503095#17503095)
* [How do I (or can I) SELECT DISTINCT on multiple columns?](https://stackoverflow.com/questions/54418/how-do-i-or-can-i-select-distinct-on-multiple-columns/12632129#12632129)
`ctid` is a system column that is not part of the associated row type, so when referencing the whole row with table aliases in the expression `r1 = r`, only *visible* columns are compared (not the `ctid` or others). That's why the whole row can be equal and one `ctid` is still smaller than the other.
With only *few* duplicates, this is also the fastest of all solutions.
With *lots* of duplicates other solutions are faster.
To avoid the problem in the future, I suggest adding a surrogate primary key:
```
ALTER TABLE discogs.releases_labels ADD COLUMN releases_labels_id serial PRIMARY KEY;
```
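With a surrogate key in place, later clean-ups no longer need `ctid`; a sketch of the standard pattern, using the new column:

```
-- Sketch: dedup using the surrogate key instead of ctid
DELETE FROM discogs.releases_labels
WHERE releases_labels_id NOT IN (
    SELECT MIN(releases_labels_id)
    FROM discogs.releases_labels
    GROUP BY label, release_id, catno
);
```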
### Why does it work with NULL values?
This is somewhat surprising. The reason is explained in the [chapter **Composite Type Comparison** in the manual](http://www.postgresql.org/docs/current/interactive/functions-comparisons.html#COMPOSITE-TYPE-COMPARISON):
> The SQL specification requires row-wise comparison to return NULL if
> the result depends on comparing two NULL values or a NULL and a
> non-NULL. PostgreSQL does this only when comparing the results of two
> row constructors (as in Section 9.23.5) or comparing a row constructor
> to the output of a subquery (as in Section 9.22). In other contexts
> where two composite-type values are compared, **two NULL field values
> are considered equal**, and a NULL is considered larger than a non-NULL.
> This is necessary in order to have consistent sorting and indexing
> behavior for composite types.
Bold emphasis mine.
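A quick way to see the difference (a sketch for psql; the table name is made up):

```
CREATE TEMP TABLE nulltest (a int, b int);
INSERT INTO nulltest VALUES (1, NULL), (1, NULL);

-- Row constructor comparison: comparing NULLs yields NULL
SELECT ROW(1, NULL) = ROW(1, NULL);

-- Whole-row comparison of table aliases: the NULL fields compare
-- equal, so the duplicate pair is found
SELECT count(*)
FROM nulltest t1, nulltest t2
WHERE t1 = t2
AND   t1.ctid < t2.ctid;
```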
### Alternatives with second table
I removed that section, because the [solution with a data-modifying CTE provided by @Nick](https://stackoverflow.com/a/29410691/939860) is better.
|
Delete duplicate rows from table with no unique key
|
[
"",
"sql",
"postgresql",
"duplicates",
""
] |
I am trying to select records where the record shares a field value with another record, but the query I am trying only returns one occurrence. For example, in this table:
```
COUNTRY LANGUAGE
Mexico Spanish
Portugal Portuguese
Russia Russian
Spain Spanish
Thailand Thai
United Kingdom English
United States English
```
... I would like to return:
```
COUNTRY LANGUAGE
Mexico Spanish
Spain Spanish
United Kingdom English
United States English
```
Using:
```
SELECT * FROM `table` GROUP BY `language` HAVING COUNT(language) > 1
```
I only get:
```
COUNTRY LANGUAGE
Spain Spanish
United Kingdom English
```
Where am I going wrong here?
|
Found a solution:
```
SELECT * FROM `TABLE` WHERE `language` IN (
    SELECT `language` FROM `TABLE` GROUP BY `language` HAVING COUNT(`language`) > 1
)
```
|
You need to break this up into two steps. Get the list of languages in a sub-select, then get the rows:
```
SELECT * FROM country_language
WHERE language IN (
    SELECT language
    FROM country_language
    GROUP BY language
    HAVING COUNT(*) > 1
)
```
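An equivalent formulation joins against the aggregated list instead of using `IN`, which older MySQL versions often optimize better:

```
SELECT t.country, t.language
FROM country_language t
JOIN (
    SELECT language
    FROM country_language
    GROUP BY language
    HAVING COUNT(*) > 1
) dup ON dup.language = t.language;
```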
|
MySQL select where duplicates in field, not returning all records
|
[
"",
"mysql",
"sql",
""
] |