| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Below are my MySQL tables. I can't figure out what a MySQL query looks like that selects only one row from `parent` for each month (the row with the latest date in that month) and its corresponding `child` rows. So in the given example it should return the rows from the child table with IDs 4, 5, 6, 10, 11, 12.
[](https://i.stack.imgur.com/5I5VN.png)
|
I think something like the following would do the trick for you:
```
SELECT child.id
FROM parent
INNER JOIN child ON parent.id = child.parent_id
WHERE parent.`date` IN (SELECT max(`date`) FROM parent GROUP BY YEAR(`date`), MONTH(`date`))
```
The fun part is the `WHERE` clause where we only grab `parent` table records where the `date` is the `max(date)` for that particular month/year combination.
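Since the original tables are only shown in a screenshot, here is a runnable sketch of the same idea using Python's built-in sqlite3 module with invented sample data; SQLite has no `YEAR()`/`MONTH()`, so `strftime('%Y-%m', ...)` stands in for the grouping:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY, date TEXT);
CREATE TABLE child  (id INTEGER PRIMARY KEY, parent_id INTEGER);
INSERT INTO parent VALUES (1,'2016-01-10'),(2,'2016-01-20'),(3,'2016-02-05'),(4,'2016-02-25');
INSERT INTO child  VALUES (1,1),(2,1),(3,1),(4,2),(5,2),(6,2),(7,3),(8,3),(9,3),(10,4),(11,4),(12,4);
""")

# Keep only parent rows whose date is the max within their year/month,
# then join to pick up the child rows.
rows = conn.execute("""
SELECT child.id
FROM parent
JOIN child ON child.parent_id = parent.id
WHERE parent.date IN (SELECT max(date) FROM parent
                      GROUP BY strftime('%Y-%m', date))
ORDER BY child.id
""").fetchall()
child_ids = [r[0] for r in rows]
print(child_ids)  # [4, 5, 6, 10, 11, 12]
```

With this sample data, parent 2 wins January and parent 4 wins February, so exactly the child IDs from the question come back.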
|
Ok, let's split this in parts:
First, select the max date from the `parent` table, grouping by month:
```
select max(`date`) as max_date from parent group by last_day(`date`)
-- The "last_day()" function returns the last day of the month for a given date
```
Then, select the corresponding row from the parent table:
```
select parent.*
from parent
inner join (select max(`date`) as max_date from parent group by last_day(`date`)) as a
on parent.`date` = a.max_date
```
Finally, select the corresponding rows in the `child` table:
```
select child.*
from child
inner join parent
on child.parent_id = parent.id
inner join (select max(`date`) as max_date from parent group by last_day(`date`)) as a
on parent.`date` = a.max_date;
```
You can check how this works on this [SQL fiddle](http://sqlfiddle.com/#!9/00e2d/1).
---
**EDIT**
The above solution works, but if your tables are big, you may face a problem because the joined data is not indexed. One way to solve this is to create a temporary table and use this temp table to get your final result:
```
drop table if exists temp_max_date;
create temporary table temp_max_date
select max(`date`) as max_date
from parent
group by last_day(`date`);
alter table temp_max_date
add index idx_max_date(max_date);
-- Get the final data:
select child.*
from child
inner join parent
on child.parent_id = parent.id
inner join temp_max_date as a
on parent.`date` = a.max_date;
```
[Here's the SQL fiddle for this second solution](http://sqlfiddle.com/#!9/45f15/1).
Temporary tables are only accessible to the connection that creates them, and are destroyed when the connection is closed or killed.
**Remember:** Add the appropriate indexes to your tables.
|
SQL select query join
|
[
"",
"mysql",
"sql",
"inner-join",
"select-query",
""
] |
I have two tables, TAB A and TAB B. I would like to return
everything from TAB A, plus 1 or 0 depending on whether a matching record exists in TAB B.
But TAB B can contain multiple records with the same id, so I think I need to group that table?
**TAB\_A**
```
╔══════╦══════╦══════╦════╗
║ COLA ║ COLB ║ COLC ║ ID ║
╠══════╬══════╬══════╬════╣
║ AAA  ║ BBB  ║ CAB  ║  1 ║
║ AAA  ║ BBB  ║ CFD  ║  2 ║
║ AAA  ║ BBB  ║ CCD  ║  3 ║
║ AAA  ║ BBB  ║ CTR  ║  4 ║
╚══════╩══════╩══════╩════╝
```
**TAB\_B**
```
╔══════╦══════╦══════╦════╗
║ COLA ║ COLB ║ COLC ║ ID ║
╠══════╬══════╬══════╬════╣
║ AAA  ║ BBB  ║ CAB  ║  1 ║
║ AAA  ║ BBB  ║ CFD  ║  2 ║
║ AAA  ║ BBB  ║ CCD  ║  3 ║
║ AAA  ║ BBB  ║ CCD  ║  3 ║
║ AAA  ║ BBB  ║ CCD  ║  3 ║
║ AAA  ║ BBB  ║ CTR  ║  4 ║
║ AAA  ║ BBB  ║ CTR  ║  5 ║
╚══════╩══════╩══════╩════╝
```
By this example I should get 4 records, but a LEFT JOIN gives me 6:
```
SELECT
A.*
, case when B.ID is not null then 1 else 0 end as NEW_COLUMN
FROM
TAB_A A
left join TAB_B B
on A.ID= B.ID
WHERE
SOMETHING ....
```
|
You can use the `DISTINCT` keyword to remove duplicates which might be present in the `TAB_B` table.
```
SELECT DISTINCT A.*,
CASE WHEN B.ID IS NOT NULL THEN 1 ELSE 0 END AS NEW_COLUMN
FROM TAB_A A LEFT JOIN TAB_B B ON A.ID = B.ID
```
|
You can run the query on TABA only, and in a subquery decide whether a record exists in TABB, so you can return 1 or 0 (as you want).
Try this:
```
SELECT *,
CASE
WHEN
(SELECT COUNT(*) FROM TABB
WHERE TABB.ID = TABA.ID) > 0
THEN 1
ELSE 0
END
FROM TABA
```
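Both answers work; as a runnable aside, the same flag can also be computed with `EXISTS`, which stops at the first match, so duplicates in TAB B cannot multiply the rows. A sketch in Python's sqlite3, using the question's IDs plus an invented unmatched row (9) to show the 0 case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TAB_A (ID INTEGER);
CREATE TABLE TAB_B (ID INTEGER);
INSERT INTO TAB_A VALUES (1),(2),(3),(4),(9);
INSERT INTO TAB_B VALUES (1),(2),(3),(3),(3),(4),(5);
""")

# EXISTS short-circuits on the first match, so the duplicate
# TAB_B rows for ID 3 cannot inflate the row count.
rows = conn.execute("""
SELECT A.ID,
       CASE WHEN EXISTS (SELECT 1 FROM TAB_B B WHERE B.ID = A.ID)
            THEN 1 ELSE 0 END AS NEW_COLUMN
FROM TAB_A A
ORDER BY A.ID
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 1), (4, 1), (9, 0)]
```

One row comes back per TAB A row, with the flag set only where a match exists.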
|
SQL record exists in another table with LEFT JOIN and GROUP BY
|
[
"",
"sql",
"sql-server",
""
] |
I have the following table with data:
mytable (country, gender):
```
+----------+----------+
| country | gender |
+----------+----------+
| China | male |
+----------+----------+
| China | female |
+----------+----------+
| China | male |
+----------+----------+
| China | male |
+----------+----------+
| Russia | male |
+----------+----------+
| Russia | female |
+----------+----------+
```
And I want a query select output like this:
```
+----------+----------+--------+-----------+
| country | gender | count | percent |
+----------+----------+--------+-----------+
| China | male | 3 | 75 |
+----------+----------+--------+-----------+
| China | female | 1 | 25 |
+----------+----------+--------+-----------+
| Russia | male | 1 | 50 |
+----------+----------+--------+-----------+
| Russia | female | 1 | 50 |
+----------+----------+--------+-----------+
```
So basically I want calculate percentages for genders for each country.
How do I do this?
Thanks a lot in advance
|
You can do it like this (adding `count(*)` so the output also has the count column you asked for):
```
select a.country,
       a.gender,
       count(*) as `count`,
       (count(*) / b.qtd) * 100 as percent
from test a inner join
     (select country, count(*) qtd from test group by country) b
     on a.country = b.country
group by a.country, a.gender
```
No need for two subqueries!
See it here: <http://sqlfiddle.com/#!9/f41609/4>
|
Group by country and gender in the main query, and compute in a correlated subquery how many rows the whole country has, applying the percentage formula.
```
select
a.country,
a.gender,
count(a.gender) as `count`,
(
select count(a.gender) / count(gender) * 100
from mytable
where country = a.country
) as percent
from mytable a
group by
a.country,
a.gender
order by
country,
gender desc,
percent desc
```
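Either answer is easy to verify. Here is a runnable sketch of the join-based version (with a count column added) using Python's sqlite3 and the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (country TEXT, gender TEXT);
INSERT INTO mytable VALUES
 ('China','male'),('China','female'),('China','male'),('China','male'),
 ('Russia','male'),('Russia','female');
""")

# Join each (country, gender) group to its per-country total,
# then apply the percentage formula.
rows = conn.execute("""
SELECT a.country, a.gender,
       count(*) AS cnt,
       count(*) * 100.0 / b.qtd AS percent
FROM mytable a
JOIN (SELECT country, count(*) AS qtd FROM mytable GROUP BY country) b
  ON a.country = b.country
GROUP BY a.country, a.gender
ORDER BY a.country, a.gender DESC
""").fetchall()
print(rows)
# [('China', 'male', 3, 75.0), ('China', 'female', 1, 25.0),
#  ('Russia', 'male', 1, 50.0), ('Russia', 'female', 1, 50.0)]
```

The `100.0` keeps the division in floating point, which matters in databases that do integer division on integer operands.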
|
How to group percentage calculations on group by
|
[
"",
"sql",
"count",
"group-by",
"percentage",
""
] |
I have a table with a field `name`.
For a given name like `ABC` I want to get the records with that name and the records which have an `L` appended at the end.
So for `ABC` I want all records with name either `ABC` or `ABCL`.
I tried getting the records using the following code but it doesn't work.
```
SELECT * FROM tbl
WHERE name like "ABC[|L]"
```
I am using TSQL.
How can pattern match these names?
|
Use this SQL:
```
SELECT * FROM tbl WHERE name IN ('ABC', 'ABCL')
```
If you are using this SQL within a Stored Procedure / Function, something like following would work. Assuming, you are passing in the value for `name` in a `@name` variable.
```
SELECT * FROM tbl WHERE name IN (@name, @name + 'L')
```
|
Use the `IN` operator.
```
SELECT *
FROM tbl
WHERE name IN ('ABC', 'ABCL')
```
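A quick runnable check of the `IN` approach, parameterized the way the first answer suggests for a stored procedure, sketched with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (name TEXT);
INSERT INTO tbl VALUES ('ABC'),('ABCL'),('ABCD'),('XYZ');
""")

# The two placeholders play the role of @name and @name + 'L':
# match the exact name or the name with 'L' appended, nothing else.
name = "ABC"
rows = conn.execute(
    "SELECT name FROM tbl WHERE name IN (?, ?)",
    (name, name + "L"),
).fetchall()
print([r[0] for r in rows])  # ['ABC', 'ABCL']
```

Note that `ABCD` is excluded, which a naive `LIKE 'ABC%'` would not do.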
|
SQL pattern matching in search conditions
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
What is the best use of the `Postgres` construct `IS DISTINCT FROM`? In my tests I got the same result using `COALESCE`, but in less time. Here are the tests:
```
SELECT COUNT(P.id)
FROM produto P
INNER JOIN cliente CL ON P.id_cliente = CL.id_cliente
WHERE
COALESCE(CL.tp_pessoa,'') <> 'JURIDICA' -- test with COALESCE, average 610 ms
(CL.tp_pessoa <> 'JURIDICA' OR CL.tp_pessoa IS NULL) -- test with OR, average 668 ms
CL.tp_pessoa IS DISTINCT FROM 'JURIDICA' -- test with IS DISTINCT FROM, average 667 ms
ANOTHER TEST:
COALESCE(CL.tp_pessoa,'') <> COALESCE(P.observacao,'') -- test with COALESCE, average 940 ms
CL.tp_pessoa IS DISTINCT FROM P.observacao -- test with IS DISTINCT FROM, average 930 ms, a little better here
```
It has lower performance, and it is a construct not found in other DBs such as `SQL Server`, which is another reason not to use it.
Doing another test, where both operands can be `NULL`, `IS DISTINCT FROM` had a slight advantage. Would that be its intended use, the case where it applies best?
Edit:
As @hvd said, `IS DISTINCT FROM` is part of `ANSI SQL`; also, the result of `COALESCE(CL.tp_pessoa,'') <> COALESCE(P.observacao,'')` is not the same as that of `CL.tp_pessoa IS DISTINCT FROM P.observacao`.
|
First, it is convenient. Second, you need to run tests on larger amounts of data. A lot can happen on a database server in a second, so small changes in hundredths of a second are not necessarily indicative of overall performance.
On the positive side, I think Postgres will use an index for `is distinct from`. I don't think an index will necessarily be used for all the alternatives.
|
The performance differences you've seen are minimal. Focus on correctness.
You give an example
```
COALESCE(CL.tp_pessoa,'') <> COALESCE(P.observacao,'')
```
versus
```
CL.tp_pessoa IS DISTINCT FROM P.observacao
```
If `CL.tp_pessoa` is `NULL`, and `P.observacao` is `''`, then the first comparison treats them as equal, whereas the second comparison treats them as unequal.
So use the first version if you want `NULL` and `''` to compare as equal, and the second version if you want them to compare as unequal.
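SQLite's `IS NOT` operator has the same NULL-safe semantics as standard `IS DISTINCT FROM`, so the three behaviours can be compared side by side in a short sketch using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Compare NULL against '' three ways. SQLite's "IS NOT" behaves like
# IS DISTINCT FROM: NULL-safe, always true or false, never NULL.
row = conn.execute("""
SELECT NULL <> ''              AS plain,          -- NULL: unknown
       coalesce(NULL,'') <> '' AS coalesced,      -- 0: treated as equal
       NULL IS NOT ''          AS distinct_from   -- 1: treated as unequal
""").fetchone()
print(row)  # (None, 0, 1)
```

The plain `<>` yields NULL (filtered out by a WHERE clause), the COALESCE version treats NULL and `''` as equal, and the DISTINCT-style comparison treats them as unequal, which is exactly the divergence described above.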
|
Why use IS DISTINCT FROM - Postgres
|
[
"",
"sql",
"postgresql",
""
] |
I got a table like this
```
-----------------------------------
| Price | Brands | Colors |
-----------------------------------
| 300 | A | Red |
| 200 | B | Red |
| 500 | D | Green |
| 800 | B | Orange |
| 100 | A | Red |
| 500 | B | Green |
| 200 | C | Orange |
| 400 | D | Red |
```
I want to get **Max (Price)** and **Distinct** values of *Brands and Colors* like this
```
-----------------------------------
| Price | Brands | Colors |
-----------------------------------
| 800 | A | Red |
| | B | Green |
| | D | Orange |
| | C | |
```
What query I should be running to get above result? Is it possible to get above result in single query or I have to run multiple subqueries with joins etc to get this?
|
It *is* possible to get all that in one query, but it isn't good practice, since the result doesn't represent anything. MySQL is a *Relational* DataBase Management System (*R*DBMS), so the data it outputs has to be related.
In this case, you want the max(price), distinct(brands) and distinct(colors), with no relations between them, so you should make 3 different calls.
|
You can use variables and `union all`:
```
select max(price) as price, max(brand) as brand, max(color) as color
from ((select 1 as rn, max(price) as price, NULL as brand, NULL as color from t
      ) union all
      (select (@rnb := @rnb + 1), NULL, brand, NULL
       from t cross join (select @rnb := 0) params
       group by brand
      ) union all
      (select (@rnc := @rnc + 1), NULL, NULL, color
       from t cross join (select @rnc := 0) params
       group by color
      )
     ) t
group by rn
order by rn;
```
The key idea is that you need a way to identify what is on each row. This information is not in the original data. You can generate a "row number" using variables. In this case, `union all` is preferable to a `join`, because MySQL doesn't support `full outer join`.
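User variables are MySQL-specific; the same stitching can be sketched with standard `row_number()` window functions (SQLite 3.25+, here via Python's sqlite3) on the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (price INT, brand TEXT, color TEXT);
INSERT INTO t VALUES (300,'A','Red'),(200,'B','Red'),(500,'D','Green'),
 (800,'B','Orange'),(100,'A','Red'),(500,'B','Green'),
 (200,'C','Orange'),(400,'D','Red');
""")

# Number the rows of each independent list, then stitch the lists
# side by side by grouping on that row number. row_number() replaces
# the MySQL user variables from the answer above.
rows = conn.execute("""
SELECT max(price) AS price, max(brand) AS brand, max(color) AS color
FROM (
  SELECT 1 AS rn, max(price) AS price, NULL AS brand, NULL AS color FROM t
  UNION ALL
  SELECT row_number() OVER (ORDER BY brand), NULL, brand, NULL
  FROM (SELECT DISTINCT brand FROM t)
  UNION ALL
  SELECT row_number() OVER (ORDER BY color), NULL, NULL, color
  FROM (SELECT DISTINCT color FROM t)
)
GROUP BY rn
ORDER BY rn
""").fetchall()
print(rows)
# [(800, 'A', 'Green'), (None, 'B', 'Orange'), (None, 'C', 'Red'), (None, 'D', None)]
```

The gaps (`None`) make the point from the first answer concrete: the columns of one row have no relation to each other.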
|
Get Max value, Distinct values from Table in MySQL
|
[
"",
"mysql",
"sql",
""
] |
I have 3 columns that are string columns calculated using complex substring and replace functions.
I have a 4th column which should be col1 + col2 + col3.
At the moment the 4th column is repeating the calculations required to computed col1 and col2 and col3.
The select statement looks something like this (I've removed most of the functions):
```
select
STR(c1, 2, 0) as col1,
STR(c2, 2, 0) as col2,
STR(c3, 2, 0) as col3,
STR(c1, 2, 0) + STR(c2, 2, 0) + STR(c3, 2, 0) as col4
from blah
```
The issue is that we're repeating the functions to calculate the columns, breaking the DRY principle and opening an opportunity for errors to creep in.
I'd prefer if we could do something like this:
```
select
STR(c1, 2, 0) as col1,
STR(c2, 2, 0) as col2,
STR(c3, 2, 0) as col3,
col1 + col2 + col3 as col4
from blah
```
Is there a neat way of doing this? I think it could be done using a temp table but that seems like overkill.
Is it worth doing, or am I just taking DRY too seriously?
|
You can do it like this (sorry about formatting):
```
select i.col1, i.col2, i.col3, i.col1 + i.col2 + i.col3 as col4
from (
select
STR(c1, 2, 0) as col1,
STR(c2, 2, 0) as col2,
    STR(c3, 2, 0) as col3
from blah ) i
```
|
You can do this with a CTE:
```
;WITH data_cte (col1, col2, col3)
AS
(
select
STR(c1, 2, 0) as col1,
STR(c2, 2, 0) as col2,
STR(c3, 2, 0) as col3
from blah
)
SELECT
col1, col2, col3,
col1 + col2 + col3 as col4
FROM data_cte
```
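The CTE version is easy to check. Since `STR()` and `+` concatenation are T-SQL, this sqlite3 sketch substitutes `CAST` and `||`, but the shape of the query is the same: compute each column once, reuse the aliases in the outer query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE blah (c1 INT, c2 INT, c3 INT);
INSERT INTO blah VALUES (1,2,3),(4,5,6);
""")

# Each expression is written exactly once, inside the CTE;
# col4 is built from the already-computed aliases.
rows = conn.execute("""
WITH data_cte AS (
  SELECT CAST(c1 AS TEXT) AS col1,
         CAST(c2 AS TEXT) AS col2,
         CAST(c3 AS TEXT) AS col3
  FROM blah
)
SELECT col1, col2, col3, col1 || col2 || col3 AS col4
FROM data_cte
""").fetchall()
print(rows)  # [('1', '2', '3', '123'), ('4', '5', '6', '456')]
```

If the real expressions ever change, they now change in one place only, which is the DRY win the question is after.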
|
Concatenate calculated fields in SQL
|
[
"",
"sql",
"sql-server",
"select",
"concatenation",
""
] |
I need strict control of the reading and writing of my Postgres data. Updatable views have always provided very good, strict, control of the reading of my data and allows me to add valuable computed columns. With Postgres 9.5 row level security has introduced a new and powerful way to control my data. But I can't use both technologies views, and row level security together. Why?
|
**EDIT**: As another reply mentioned below, since PostgreSQL 15 it is possible for views to inherit the RLS policies of their origin table with the `security_invoker` flag (<https://www.postgresql.org/docs/15/sql-createview.html>)
---
Basically because it wasn't possible to retroactively change how views work. I'd like to be able to support `SECURITY INVOKER` (or equivalent) for views but as far as I know no such feature presently exists.
You can filter access to the view itself with row security normally.
The tables accessed by the view will also have their row security rules applied. However, they'll see the `current_user` as the *view creator* because views access tables (and other views) with the rights of the user who created/owns the view.
Maybe it'd be worth raising this on pgsql-hackers if you're willing to step in and help with development of the feature you need, or pgsql-general otherwise?
That said, while views access tables as the creating user and change `current_user` accordingly, they don't prevent you from using custom GUCs, the `session_user`, or other contextual information in row security policies. You can use row security with views, just not (usefully) to filter based on `current_user`.
|
You can do this from PostgreSQL v15 on, which introduced the `security_invoker` option on views. If you turn that on, permissions on the underlying tables are checked as the user who calls the view, and RLS policies for the invoking user are used.
You can change existing views with
```
ALTER VIEW view_name SET (security_invoker = on);
```
|
Why isn't row level security enabled for Postgres views?
|
[
"",
"sql",
"postgresql",
"view",
"row-level-security",
"postgresql-9.5",
""
] |
```
SELECT id
FROM table1
WHERE NOT (id = 0)
SELECT id
FROM table1
WHERE id <> 0
```
In the above 2 queries which one need to be considered in terms of `Performance` and `Coding Standards`?
|
`NOT` is a negation and `<>` is a comparison operator; both are ISO standard.
There is no performance difference between them for your example.
[](https://i.stack.imgur.com/IgkOP.jpg)
|
In most cases, `NOT` is used for negation. `<>` means `not equal to`.
|
NOT vs <> Operator in sql server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have to select rows based on some variables which switch certain conditions on or off, like:
```
SELECT *
FROM table
WHERE field =
CASE @param
WHEN NULL THEN field
ELSE @param
END
```
In other words, I want to compare only if @param is not null, but my select doesn't work. How can I do it?
Thanks!
|
When `@param` is null it will use `field`, when `@param` is not null it will use `@param`:
```
SELECT *
FROM table
WHERE field = ISNULL(@param,field)
```
But `field = field` will always be true.
So what you might want is:
```
SELECT *
FROM table
WHERE field = @param
and @param is not null
```
|
Why use a `CASE` for a single switch? An `OR` condition should do the trick.
```
SELECT *
FROM table
WHERE @param IS NULL OR Field = @Param
```
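The `@param IS NULL OR field = @param` pattern works the same with ordinary placeholders; a minimal sketch with Python's sqlite3 and an invented one-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (field TEXT);
INSERT INTO t VALUES ('a'),('b'),('c');
""")

def find(conn, param):
    # A NULL (None) parameter disables the filter; any other value applies it.
    return conn.execute(
        "SELECT field FROM t WHERE ? IS NULL OR field = ?",
        (param, param),
    ).fetchall()

print(find(conn, "b"))        # [('b',)]
print(len(find(conn, None)))  # 3 -- filter switched off
```

One caveat worth knowing: in SQL Server this pattern can produce a cached plan that is poor for one of the two cases, which is why `OPTION (RECOMPILE)` is often suggested alongside it.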
|
SQL Server where case only if not null
|
[
"",
"sql",
"sql-server",
""
] |
I have a small problem I could tackle quite easily in C#, but I have been asked to do it within the SQL.
I have a stored procedure which takes an int as a parameter, and I need to check if that parameter is inside the value of a colon-separated column in the database.
```
(
(
gvf_permitted_projects is null
)
or
( -- @activeProject IN gvf_permitted_projects
-- @activeProject = 11
-- gvf_permitted_projects = '11:17'
)
)
```
This is inside the where clause of my SELECT, I could do this in C# with minimal effort but I'm not too sure how to do it here. Do I need to use a temp table and then do a select into that?
|
You can use the `like` operator with some text manipulation, although this will be somewhat slow, and a normalised structure would be preferable anyway:
```
':' + gvf_permitted_projects + ':' like '%:' + cast(@activeProject as varchar(20)) + ':%'
```
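The same wrap-in-delimiters trick, sketched with Python's sqlite3 and `||` instead of T-SQL's `+` (treating `gvf_permitted_projects` as a column and handling the NULL case, per the question's WHERE clause):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (gvf_permitted_projects TEXT);
INSERT INTO t VALUES ('11:17'), ('1:2'), (NULL);
""")

# Wrap both sides in colons so project 1 cannot match inside '11'.
active_project = 11
rows = conn.execute("""
SELECT gvf_permitted_projects FROM t
WHERE gvf_permitted_projects IS NULL
   OR ':' || gvf_permitted_projects || ':' LIKE '%:' || CAST(? AS TEXT) || ':%'
""", (active_project,)).fetchall()
print(rows)  # [('11:17',), (None,)]
```

The row `'1:2'` is correctly rejected: without the surrounding colons, `LIKE '%11%'`-style matching would be unable to tell `1` and `11` apart.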
|
In this answer I added up some nice tricks you can do with XML and string values:
<https://stackoverflow.com/a/33658220/5089204>
Go to the "Dynamic IN" section. Hope this helps.
|
Checking if a value is IN a colon separated column with in a table
|
[
"",
"sql",
"asp.net",
"sql-server",
"parsing",
""
] |
So, Here are the tables-
```
create table person (
id number,
name varchar2(50)
);
create table injury_place (
id number,
place varchar2(50)
);
create table person_injuryPlace_map (
person_id number,
injury_id number
);
insert into person values (1, 'Adam');
insert into person values (2, 'Lohan');
insert into person values (3, 'Mary');
insert into person values (4, 'John');
insert into person values (5, 'Sam');
insert into injury_place values (1, 'kitchen');
insert into injury_place values (2, 'Washroom');
insert into injury_place values (3, 'Rooftop');
insert into injury_place values (4, 'Garden');
insert into person_injuryPlace_map values (1, 2);
insert into person_injuryPlace_map values (2, 3);
insert into person_injuryPlace_map values (1, 4);
insert into person_injuryPlace_map values (3, 2);
insert into person_injuryPlace_map values (4, 4);
insert into person_injuryPlace_map values (5, 2);
insert into person_injuryPlace_map values (1, 1);
```
Here, the table `person_injuryPlace_map` just maps between the other two tables.
How I wanted to show data is -
```
Kitchen Pct Washroom Pct Rooftop Pct Garden Pct
-----------------------------------------------------------------------
1 14.29% 3 42.86% 1 14.29% 2 28.57%
```
Here, the value in the Kitchen, Washroom, Rooftop and Garden columns is the total number of incidents at that place. The Pct columns show each place's percentage of the total count.
How can I do this in Oracle SQL?
|
You need to use the standard **PIVOT** query.
Depending on your **Oracle database version**, you could do it in two ways:
Using **PIVOT** for **version 11g** and up:
```
SQL> SELECT *
2 FROM
3 (SELECT c.place place,
4 row_number() OVER(PARTITION BY c.place ORDER BY NULL) cnt,
5 (row_number() OVER(PARTITION BY c.place ORDER BY NULL)/
6 COUNT(place) OVER(ORDER BY NULL))*100 pct
7 FROM person_injuryPlace_map A
8 JOIN person b
9 ON(A.person_id = b.ID)
10 JOIN injury_place c
11 ON(A.injury_id = c.ID)
12 ORDER BY c.place
13 ) PIVOT (MAX(cnt),
14 MAX(pct) pct
15 FOR (place) IN ('kitchen' AS kitchen,
16 'Washroom' AS Washroom,
17 'Rooftop' AS Rooftop,
18 'Garden' AS Garden));
KITCHEN KITCHEN_PCT WASHROOM WASHROOM_PCT ROOFTOP ROOFTOP_PCT GARDEN GARDEN_PCT
---------- ----------- ---------- ------------ ---------- ----------- ---------- ----------
1 14.2857143 3 42.8571429 1 14.2857143 2 28.5714286
```
Using **MAX** and **DECODE** for **version 10g** and before:
```
SQL> SELECT MAX(DECODE(t.place,'kitchen',cnt)) Kitchen ,
2 MAX(DECODE(t.place,'kitchen',pct)) Pct ,
3 MAX(DECODE(t.place,'Washroom',cnt)) Washroom ,
4 MAX(DECODE(t.place,'Washroom',pct)) Pct ,
5 MAX(DECODE(t.place,'Rooftop',cnt)) Rooftop ,
6 MAX(DECODE(t.place,'Rooftop',pct)) Pct ,
7 MAX(DECODE(t.place,'Garden',cnt)) Garden ,
8 MAX(DECODE(t.place,'Garden',pct)) Pct
9 FROM
10 (SELECT b.ID bid,
11 b.NAME NAME,
12 c.ID cid,
13 c.place place,
14 row_number() OVER(PARTITION BY c.place ORDER BY NULL) cnt,
15 ROUND((row_number() OVER(PARTITION BY c.place ORDER BY NULL)/
16 COUNT(place) OVER(ORDER BY NULL))*100, 2) pct
17 FROM person_injuryPlace_map A
18 JOIN person b
19 ON(A.person_id = b.ID)
20 JOIN injury_place c
21 ON(A.injury_id = c.ID)
22 ORDER BY c.place
23 ) t;
KITCHEN PCT WASHROOM PCT ROOFTOP PCT GARDEN PCT
---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1 14.29 3 42.86 1 14.29 2 28.57
```
|
If you use Oracle 11g or above, you can use the `PIVOT` clause for your required output.
```
SELECT *
FROM (
SELECT id
,place
,round((
cnt / sum(cnt) OVER (
ORDER BY NULL
)
) * 100, 2) AS pct
FROM (
SELECT a.id
,a.place
,count(a.id) AS cnt
FROM injury_place a
JOIN person_injuryPlace_map b ON a.id = b.injury_id
GROUP BY a.id
,a.place
)
)
pivot(max(id) , max(pct) pct FOR place IN (
'kitchen' AS kitchen
,'Washroom' Washroom
,'Rooftop' Rooftop
,'Garden' Garden
))
```
|
How can I write SQL in Oracle in my case?
|
[
"",
"sql",
"oracle",
"pivot",
""
] |
I want to create random data in an Oracle table:
```
CREATE TABLE EVENTS(
EVENTID INTEGER NOT NULL,
SOURCE VARCHAR2(50 ),
TYPE VARCHAR2(50 ),
EVENT_DATE DATE,
DESCRIPTION VARCHAR2(100 )
)
/
```
I tried this:
```
BEGIN
FOR loop_counter IN 1..1000
LOOP
INSERT INTO EVENTS (EVENTID, SOURCE, TYPE, EVENT_DATE, DESCRIPTION) VALUES (loop_counter, loop_counter, 'warning',
DBMS_RANDOM.VALUE(TO_CHAR(DATE '2000-01-01','J') ,TO_CHAR(DATE '9999-12-31','J')), dbms_random.value(1,100));
END LOOP;
COMMIT;
END;
```
I get this error exception
> Error report - ORA-06550: line 5, column 13: PL/SQL: ORA-00932:
> inconsistent datatypes: expected DATE got NUMBER ORA-06550: line 4,
> column 1: PL/SQL: SQL Statement ignored
> 06550. 00000 - "line %s, column %s:\n%s"
> \*Cause: Usually a PL/SQL compilation error.
> \*Action:
Can you give me advice how I can fix this issue?
|
Use:
```
BEGIN
FOR loop_counter IN 1..1000 LOOP
INSERT INTO "EVENTS" (EVENTID, "SOURCE", TYPE, EVENT_DATE, DESCRIPTION)
VALUES (loop_counter, loop_counter, 'warning',
TO_DATE(TRUNC(DBMS_RANDOM.VALUE(TO_CHAR(DATE '2000-01-01','J') ,TO_CHAR(DATE '9999-12-31','J'))),'J')
,dbms_random.value(1,100)
);
END LOOP;
COMMIT;
END;
```
`SqlFiddleDemo`
Changes:
1. Add missing `;` after final `END`
2. Quote keywords
3. Rewrite random date generation
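The fix works on Julian day numbers: pick a random number between the two boundary days, truncate it, and convert it back to a date. The same idea expressed in plain Python (date ordinals standing in for Julian day numbers), generating rows shaped like the EVENTS table:

```python
import datetime
import random

# Boundaries as day numbers, mirroring TO_CHAR(DATE '...','J') in the PL/SQL.
start = datetime.date(2000, 1, 1).toordinal()
end = datetime.date(9999, 12, 31).toordinal()

random.seed(42)  # deterministic only for this demo
rows = [
    (i,                                                  # EVENTID
     str(i),                                             # SOURCE
     "warning",                                          # TYPE
     datetime.date.fromordinal(random.randint(start, end)),  # EVENT_DATE
     str(random.randint(1, 100)))                        # DESCRIPTION
    for i in range(1, 6)
]
for r in rows:
    print(r)
```

The original ORA-00932 error came from feeding `DBMS_RANDOM.VALUE`'s raw NUMBER into a DATE column; the equivalent mistake here would be storing the ordinal integer instead of converting it back with `fromordinal`.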
|
Also, if you use PL/SQL Developer by Allround Automations, you will find
a good tool for this job: the Data Generator.
It can be very useful, because it can generate data
of any type and place it into your tables.
(see screenshot attached) [](https://i.stack.imgur.com/9Mopu.png)
|
Insert random data in Oracle table
|
[
"",
"sql",
"oracle",
"plsql",
"oracle11g",
""
] |
I have the following function in my postgresql database:
```
CREATE OR REPLACE FUNCTION get_unused_part_ids()
RETURNS integer[] AS
$BODY$
DECLARE
part_ids integer ARRAY;
BEGIN
create temporary table tmp_parts
as
select vendor_id, part_number, max(price) as max_price
from refinery_akouo_parts
where retired = false
group by vendor_id, part_number
having min(price) < max(price);
-- do some work etc etc
-- simulate ids being returned
part_ids = '{1,2,3,4}';
return part_ids;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION get_unused_part_ids()
OWNER TO postgres;
```
This compiles but when I run:
```
select get_unused_part_ids();
```
the temporary table, `tmp_parts`, still exists. I can do a select on it after. Forgive me as I'm used to a particular functionality with t-sql/MSSQL. This wouldn't be the case with MSSQL. What am I doing wrong?
|
The table will only be deleted at the end of the session. You need to specify the `ON COMMIT DROP` option, and the table will then be dropped at the end of the transaction.
```
create temporary table tmp_parts
on commit drop
as
select vendor_id, part_number, max(price) as max_price
from refinery_akouo_parts
where retired = false
group by vendor_id, part_number
having min(price) < max(price);
```
|
From the [manual](http://www.postgresql.org/docs/9.4/static/sql-createtable.html):
> Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see ON COMMIT below)
A session ends on disconnection, not on transaction commit. So the default behaviour is to preserve the temp table for as long as your connection is open. You must add `ON COMMIT DROP` to achieve your desired behaviour:
```
create temporary table tmp_parts
on commit drop
as
select vendor_id, part_number, max(price) as max_price
from refinery_akouo_parts
where retired = false
group by vendor_id, part_number
having min(price) < max(price);
```
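SQLite happens to behave like the Postgres default here (a temp table lives for the whole connection, i.e. session), which makes the scope easy to demonstrate from Python. Note SQLite has no `ON COMMIT DROP`, so this sketch only shows the session scope that surprised the asker, not the fix:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn1 = sqlite3.connect(path)
conn1.execute("CREATE TEMP TABLE tmp_parts (id INT)")
conn1.execute("INSERT INTO tmp_parts VALUES (1)")
conn1.commit()  # the transaction ends here...
n = conn1.execute("SELECT count(*) FROM tmp_parts").fetchone()[0]
print(n)  # 1 -- ...but the temp table is still visible in the same connection

conn2 = sqlite3.connect(path)  # a different "session"
try:
    conn2.execute("SELECT count(*) FROM tmp_parts")
    visible_elsewhere = True
except sqlite3.OperationalError:  # no such table
    visible_elsewhere = False
print(visible_elsewhere)  # False -- gone for any other connection
```

This is exactly the mismatch with T-SQL expectations described in the question: commit does not drop the table, disconnecting does.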
|
Postgres: Temporary table in function is persistent. Why?
|
[
"",
"sql",
"postgresql",
"postgresql-9.3",
"temp-tables",
"sql-function",
""
] |
I have the following tables:
```
Parts
id int (idx)
partnumber varchar (idx)
accountnumber (idx)
enabled
```
Sample data:
[](https://i.stack.imgur.com/JX1tm.png)
```
RefUserGroup
id int (idx)
value varchar (idx)
```
Sample Data:
[](https://i.stack.imgur.com/9i3rs.png)
```
Pdf < has about 15 columns I will list ones I am interested
in currently over 300,000's rows
id int (idx)
accountnumber varchar (idx)
customername varchar (idx)
```
Sample Data: [](https://i.stack.imgur.com/RprsV.png)
```
Ref_UserGroup_Pdf
id
groupid FK (idx)
partsnumber (idx)
enable (idx)
```
Sample Data:
[](https://i.stack.imgur.com/8K1Nf.png)
**Query:** This takes around 47-97 seconds to return data.
```
SELECT p.partsnumber
,rug.`value` AS `group`
,pd.customername AS customer
FROM ref_groupid_to_pdf rgp
INNER JOIN ref_user_group rug ON rug.id = rgp.groupid
INNER JOIN parts p on rgp.partsnumber = p.partsnumber
LEFT JOIN pdf pd on p.accountnumber= pd.accountnumber
WHERE rug.groupid = 2
AND rug.groupid <> 1
ORDER BY customer;
```
Can anyone point me in the right direction on how to optimize this query? I am aware that we normally put indexes on one or more columns; for this I put an index on every column, but even then it is really slow.
The query becomes very slow when it joins onto the pdf table.
Update Explain:
[](https://i.stack.imgur.com/O9SiE.png)
Updated Query based on suggestions:
```
SELECT p.partsnumber
,rug.`value` AS `group`
,pd.customername AS customer
FROM ref_groupid_to_pdfid rgp
INNER JOIN ref_user_group_pdf rug ON rug.groupid = rgp.groupid
INNER JOIN parts p on rgp.partsnumber = p.partsnumber
INNER JOIN pdf_load pd on p.accountnumber = pd.accountnumber
WHERE rug.id = 2
GROUP BY rgp.partsnumber;
```
|
I'd look up the customer name in a subquery. After all a part can have many entries in the pdf table, but you are always only interested in finding one such record arbitrarily. So pick that record with a LIMIT 1 query.
```
select
prt.partsnumber,
grp.value as `group`,
(
select customername
from pdf
where pdf.accountnumber = prt.accountnumber
limit 1
) as customer
from parts prt
join ref_usergroup_pdf ref on ref.partsnumber = prt.partsnumber
join refusergroup grp on grp.id = ref.groupid
where ref.id = 2;
```
Same query with parts table in the subquery instead. Choose whichever you like better:
```
select
ref.partsnumber,
grp.value,
(
select pdf.customername
from pdf
where pdf.accountnumber =
(
select prt.accountnumber
from parts prt
where prt.partsnumber = ref.partsnumber
)
limit 1
) as customer
from ref_usergroup_pdf ref
join refusergroup grp on grp.id = ref.groupid
where ref.id = 2;
```
As you have an index on `pdf(accountnumber)`, lookup should be pretty fast. It would be even faster if you had a composite index on `pdf(accountnumber,customername)`, as then you would gain all data needed from the index alone and the table wouldn't have to be read at all.
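The `LIMIT 1` subquery shape is easy to try out. A sqlite3 sketch with invented sample rows (the duplicate pdf rows are deliberate, to show they cannot multiply the output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parts (partsnumber TEXT, accountnumber TEXT);
CREATE TABLE pdf (accountnumber TEXT, customername TEXT);
INSERT INTO parts VALUES ('P1','A1'),('P2','A2');
INSERT INTO pdf VALUES ('A1','Acme'),('A1','Acme'),('A2','Globex');
-- the composite index suggested above: the lookup can be satisfied
-- from the index alone, without touching the pdf table itself
CREATE INDEX idx_pdf ON pdf (accountnumber, customername);
""")

# One arbitrary customername per part via a correlated LIMIT 1 subquery.
rows = conn.execute("""
SELECT p.partsnumber,
       (SELECT customername FROM pdf
        WHERE pdf.accountnumber = p.accountnumber
        LIMIT 1) AS customer
FROM parts p
ORDER BY p.partsnumber
""").fetchall()
print(rows)  # [('P1', 'Acme'), ('P2', 'Globex')]
```

Two parts in, two rows out, even though pdf holds two rows for account A1: this is the row-multiplication problem of the original LEFT JOIN, avoided.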
|
Try using a subquery and join the pdf table in the outer query.
```
SELECT n.partsnumber, n.group, pd.customername AS customer
FROM (SELECT p.partsnumber,rug.`value` AS `group`,p.accountnumber
FROM ref_groupid_to_pdf rgp
INNER JOIN ref_user_group rug ON rug.id = rgp.groupid
INNER JOIN parts p on rgp.partsnumber = p.partsnumber
WHERE rug.groupid = 2
AND rug.groupid <> 1) n
LEFT JOIN pdf pd on n.accountnumber= pd.accountnumber ORDER BY customer;
```
|
Slow Query Execution joining multiple tables
|
[
"",
"mysql",
"sql",
""
] |
It is well known that you cannot perform a `SELECT` from a stored procedure in either Oracle or SQL Server (and presumably most other mainstream RDBMS products).
Generally speaking, there are several obvious "issues" with selecting from a stored procedure, just two that come to mind:
a) The columns resulting from a stored procedure are indeterminate (not known until runtime)
b) Because of the indeterminate nature of stored procedures, there would be issues with building database statistics and formulating efficient query plans
As this functionality is frequently desired by users, a number of workaround hacks have been developed over time:
<http://www.club-oracle.com/threads/select-from-stored-procedure-results.3147/>
<http://www.sommarskog.se/share_data.html>
SQL Server in particular has the function `OPENROWSET` that allows you to join to or select from almost anything: <https://msdn.microsoft.com/en-us/library/ms190312.aspx>
....however, DBA's tend to be very reluctant to enable this for security reasons.
So to my question: while there are some obvious issues or performance considerations involved in allowing joins to or selects from stored procedures, is there some *fundamental underlying technical reason* why this capability is not supported in RDBMS platforms?
**EDIT:**
A bit more clarification from the initial feedback....yes, you *can* return a resultset from a stored procedure, and yes, you *can* use a (table valued) function rather than a stored procedure if you want to join to (or select from) the resultset - however, this is *not the same thing* as JoiningTo / SelectingFrom a stored procedure. If you are working in a database that you have complete control over, then you have the option of using a TVF. However, it is *extremely* common that you find yourself working in a 3rd party database and you are forced to call pre-existing stored procedures; or, often times you would like to join to system stored procedures such as: sp\_execute\_external\_script (<https://msdn.microsoft.com/en-us/library/mt604368.aspx>).
**EDIT 2:**
On the question of whether PostgreSQL can do this, the answer is also no: [Can PostgreSQL perform a join between two SQL Server stored procedures?](https://stackoverflow.com/questions/33895894/can-postgresql-perform-a-join-between-two-sql-server-stored-procedures#)
|
**TL;DR**: you *can* select from (table-valued) functions, or from any sort of function in PostgreSQL. But not from stored procedures.
Here's an "intuitive", somewhat database-agnostic explanation, for I believe that SQL and its many dialects is too much of an organically grown language / concept for there to be a fundamental, "scientific" explanation for this.
### Procedures vs. Functions, historically
I don't really see the point of selecting from stored procedures, but I'm biased by years of experience and accepting the status quo, and I certainly see how the distinction between *procedures* and *functions* can be confusing and how one would wish them to be more versatile and powerful. Specifically in SQL Server, Sybase or MySQL, procedures can return an arbitrary number of result sets / update counts, although this is not the same as a function that returns a well-defined type.
Think of procedures as *imperative routines* (with side effects) and of functions as *pure routines* without side-effects. A `SELECT` statement itself is also *"pure"* without side-effects (apart from potential locking effects), so it makes sense to think of functions as the only types of routines that can be used in a `SELECT` statement.
In fact, think of functions as being routines with strong constraints on behaviour, whereas procedures are allowed to execute arbitrary programs.
### 4GL vs. 3GL languages
Another way to look at this is from the perspective of SQL being a [4th generation programming language (4GL)](https://en.wikipedia.org/wiki/Fourth-generation_programming_language). A 4GL can only work reasonably if it is restricted heavily in what it can do. [Common Table Expressions made SQL turing-complete](https://stackoverflow.com/questions/900055/is-sql-or-even-tsql-turing-complete), yes, but the declarative nature of SQL still prevents its being a general-purpose language from a practical, every day perspective.
Stored procedures are a way to circumvent this limitation. Sometimes, you *want* to be turing complete *and* practical. So, stored procedures resort to being imperative, having side-effects, being transactional, etc.
Stored functions are a clever way to introduce *some* 3GL / procedural language features into the purer 4GL world at the price of forbidding side-effects inside of them (unless you want to open pandora's box and have completely unpredictable `SELECT` statements).
The fact that some databases allow for their stored procedures to return arbitrary numbers of result sets / cursors is a trait of their allowing arbitrary behaviour, including side-effects. In principle, nothing I said would prevent this particular behaviour also in stored functions, but it would be very impractical and hard to manage if they were allowed to do so within the context of SQL, the 4GL language.
Thus:
* Procedures can call procedures, any function and SQL
* "Pure" functions can call "pure" functions and SQL
* SQL can call "pure" functions and SQL
But:
* "Pure" functions calling procedures become "impure" functions (like procedures)
And:
* SQL cannot call procedures
* SQL cannot call "impure" functions
### Examples of "pure" table-valued functions:
Here are some examples of using table-valued, "pure" functions:
### Oracle
```
CREATE TYPE numbers AS TABLE OF number(10);
/
CREATE OR REPLACE FUNCTION my_function (a number, b number)
RETURN numbers
IS
BEGIN
return numbers(a, b);
END my_function;
/
```
And then:
```
SELECT * FROM TABLE (my_function(1, 2))
```
### SQL Server
```
CREATE FUNCTION my_function(@v1 INTEGER, @v2 INTEGER)
RETURNS @out_table TABLE (
column_value INTEGER
)
AS
BEGIN
INSERT @out_table
VALUES (@v1), (@v2)
RETURN
END
```
And then
```
SELECT * FROM my_function(1, 2)
```
### PostgreSQL
Let me have a word on PostgreSQL.
PostgreSQL is awesome and thus an exception. It is also weird and probably 50% of its features shouldn't be used in production. It only supports "functions", not "procedures", but those functions can act as anything. Check out the following:
```
CREATE OR REPLACE FUNCTION wow ()
RETURNS SETOF INT
AS $$
BEGIN
CREATE TABLE boom (i INT);
RETURN QUERY
INSERT INTO boom VALUES (1)
RETURNING *;
END;
$$ LANGUAGE plpgsql;
```
Side-effects:
* A table is created
* A record is inserted
Yet:
```
SELECT * FROM wow();
```
Yields
```
wow
---
1
```
|
I don't think your question is really about stored procedures. I think it is about the limitations of table valued functions, presumably from a SQL Server perspective:
* You cannot use dynamic SQL.
* You cannot modify tables or the database.
* You have to specify the output columns and types.
* Gosh, you can't even use `rand()` and `newid()` (directly)
(Oracle's restrictions are slightly different.)
The simplest answer is that databases are both a powerful querying language and an environment that supports ACID properties of transactional databases. The ACID properties require a consistent view, so if you could modify existing tables, what would happen when you do this:
```
select t.*, (select count(*) from functionThatModifiesT()) -- f() modifies "t"
from t;
```
Which `t` is used in the `from`? Actually, SQL Server sort of has an answer to this question, but you get the same issue with multiple references in the same clause. In a sense, user defined functions are limited in the same way that this is not accepted:
```
select a = 1, a + 1
```
Defining the semantics is very, very tricky and not worth the effort because there are other powerful features that are more important.
In my opinion, though, the final straw in SQL Server is the ability for stored procedures to "return" multiple result sets. That simply has no meaning in the world of tables.
EDIT:
Postgres's use of `create function` is very powerful. It does allow the function to modify the underlying database, which brings up interesting transactional issues. However, you still do have to define the columns and their types.
|
Why is selecting from stored procedure not supported in relational databases?
|
[
"",
"sql",
"sql-server",
"database",
"oracle",
"stored-procedures",
""
] |
I have a nested `NOT EXISTS` and was wondering whether it is a double negation. If I remove both negations, would the queries be equivalent?
```
Suppliers(sid: integer, sname: string, address: string)
Parts(pid: integer, pname: string, color: string)
Catalog(sid: integer, pid: integer, cost: real)
```
SQL Translation : "C.Sid for which Does not exist the parts that are not supplied by C.Sid"
```
SELECT C.sid
FROM Catalog C
WHERE NOT EXISTS (SELECT P.pid
FROM Parts P
WHERE NOT EXISTS (SELECT C1.sid
FROM Catalog C1
WHERE C1.sid = C.sid
AND C1.pid = P.pid)
)
```
Removed Negations : "C.Sid who supply every part"
```
SELECT C.sid
FROM Catalog C
WHERE EXISTS (SELECT P.pid
FROM Parts P
WHERE EXISTS (SELECT C1.sid
FROM Catalog C1
WHERE C1.sid = C.sid
AND C1.pid = P.pid)
)
```
|
Nope, the queries are not equivalent. What you are trying to achieve is the equivalent of an [algebra division operation](https://en.wikipedia.org/wiki/Relational_algebra#Division_.28.C3.B7.29) in SQL.
Here is a [SQLFiddle](http://sqlfiddle.com/#!15/4f605/1/0) where you can test the following queries:
The first one:
```
SELECT * FROM Catalog C
WHERE NOT EXISTS (SELECT P.pid FROM Part P
WHERE NOT EXISTS (SELECT C1.sid FROM Catalog C1
WHERE C1.sid = C.sid
AND C1.pid = P.pid) );
sid pid
1 1
1 2
2 1
2 2
```
The alternative (where you can see that the results are **not** equivalent):
```
SELECT * FROM Catalog C
WHERE EXISTS (SELECT P.pid FROM Part p
WHERE EXISTS (SELECT C1.sid FROM Catalog C1
WHERE C1.sid = C.sid
AND C1.pid = P.pid) );
sid pid
1 1
1 2
2 1
2 2
3 1
3 3
```
And a classical Database course exercise:
```
-- Suppliers for which there does not exist any part that they do not provide.
SELECT * FROM supplier S
WHERE NOT EXISTS ( SELECT * FROM part P
WHERE NOT EXISTS ( SELECT * FROM catalog C
WHERE S.sid = C.sid
AND P.pid = C.pid ) );
sid name
1 "Dath Vader"
2 "Han Solo"
```
Dissecting part of the above query might give you a better insight on the logic involved in the query.
```
SELECT * FROM part P
WHERE NOT EXISTS ( SELECT * FROM catalog C
WHERE P.pid = C.pid
AND C.sid = 3); -- R2D2 Here!
pid name
2 "Laser Gun"
```
R2D2 was excluded from the final result set because there is a part ("Laser Gun") that it does not supply.
The existence of this row excludes R2D2 from the result of the division query.
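The double `NOT EXISTS` pattern above (relational division) is portable across engines. Here is a minimal, self-contained sketch using Python's `sqlite3`; the table names mirror the answer, but the rows are illustrative, not taken from the original fiddle:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE supplier (sid INTEGER, name TEXT);
CREATE TABLE part     (pid INTEGER, name TEXT);
CREATE TABLE catalog  (sid INTEGER, pid INTEGER);
INSERT INTO supplier VALUES (1, 'Darth Vader'), (2, 'Han Solo'), (3, 'R2D2');
INSERT INTO part     VALUES (1, 'Light Saber'), (2, 'Laser Gun');
-- Suppliers 1 and 2 carry every part; supplier 3 only carries part 1.
INSERT INTO catalog  VALUES (1, 1), (1, 2), (2, 1), (2, 2), (3, 1);
""")

# Relational division: suppliers for which there is NO part they do NOT supply.
rows = con.execute("""
SELECT S.sid, S.name
FROM supplier S
WHERE NOT EXISTS (SELECT 1 FROM part P
                  WHERE NOT EXISTS (SELECT 1 FROM catalog C
                                    WHERE C.sid = S.sid
                                      AND C.pid = P.pid))
ORDER BY S.sid
""").fetchall()
print(rows)  # only suppliers that stock every part
```

R2D2 drops out because the inner `NOT EXISTS` finds a part (the Laser Gun) with no matching catalog row for sid 3.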
|
Not sure if your question is purely educational or if you are asking for a better way to solve it.
If you know how many parts each supplier sells, and how many parts there are in total, it is easy to compare those values.
```
SELECT C.Sid
FROM Catalog C
GROUP BY C.Sid
HAVING COUNT(pid) = (SELECT COUNT(P.pid)
FROM Parts P)
```
|
SQL double negation with Not exists
|
[
"",
"sql",
"logic",
""
] |
I have the table `Pages`
```
+--------------------------------+
| Pages |
+--------------------------------+
| Name | Id | ParentId | Ordinal |
|--------------------------------|
| A | 1 | NULL | 0 |
|--------------------------------|
| B | 2 | 1 | 0 |
|--------------------------------|
| C | 3 | 1 | 0 |
|--------------------------------|
| D | 4 | 1 | 0 |
|--------------------------------|
| E | 5 | 2 | 0 |
|--------------------------------|
| F | 6 | 2 | 0 |
|--------------------------------|
| G | 7 | 3 | 0 |
|--------------------------------|
| H | 8 | 3 | 0 |
|--------------------------------|
| I | 9 | 3 | 0 |
+--------------------------------+
```
and I want to update the table with SQL, so I get
```
+--------------------------------+
| Pages |
+--------------------------------+
| Name | Id | ParentId | Ordinal |
|--------------------------------|
| A | 1 | NULL | 0 |
|--------------------------------|
| B | 2 | 1 | 0 |
|--------------------------------|
| C | 3 | 1 | 1 |
|--------------------------------|
| D | 4 | 1 | 2 |
|--------------------------------|
| E | 5 | 2 | 0 |
|--------------------------------|
| F | 6 | 2 | 1 |
|--------------------------------|
| G | 7 | 3 | 0 |
|--------------------------------|
| H | 8 | 3 | 1 |
|--------------------------------|
| I | 9 | 3 | 2 |
+--------------------------------+
```
Column `Ordinal` must be incremental values, starting from 0.
It should start over every time column `ParentId` changes.
|
I solved it. Below is the code, in case someone is interested.
The `SELECT` statement
```
SET @ordinal := -1;
SET @parent := (SELECT ParentId FROM Pages WHERE ParentId IS NULL);
SELECT p.Name, p.Id, p.ParentId, p.Ordinal
FROM (
SELECT p.Name
,p.Id
,p.ParentId
,CASE WHEN @parent != p.ParentId OR @parent IS NULL
THEN @ordinal:=0
ELSE @ordinal:=@ordinal+1 END
AS Ordinal
,@parent:=p.parentId
FROM Pages p
) p
ORDER BY p.ParentId, p.Ordinal;
```
The `UPDATE` statement
```
SET @ordinal := -1;
SET @parent := (SELECT ParentId FROM Pages WHERE ParentId IS NULL);
UPDATE Pages p JOIN (
SELECT Id
,CASE WHEN @parent != ParentId OR @parent IS NULL
THEN @ordinal:=0
ELSE @ordinal:=@ordinal+1 END
AS Ordinal
,@parent:=ParentId
FROM Pages
) p1 ON p.id = p1.id
SET p.Ordinal = p1.Ordinal;
```
|
A simplified answer.
Sample output :[](https://i.stack.imgur.com/bp6TM.png)
Here is the [**SQLFiddle Demo**](http://sqlfiddle.com/#!9/633634/1)
```
SELECT Name,Id,ParentId,Ordinal
FROM
(SELECT `Name`,
`Id`,
`ParentId`,
(@category_num :=IF(ParentId = @ParentId,@category_num+1,0)) AS Ordinal,
@ParentId:= `ParentId` AS Temp_swap
FROM Pages)T
```
Hope this helps.
|
How to update records with incremented values based on a column
|
[
"",
"mysql",
"sql",
""
] |
I'm looking for a solution where I can select the entries between 2 dates. My table is like this
```
ID | YEAR | MONTH | ....
```
Now i want to SELECT all entries between
```
MONTH 9 | YEAR 2015
MONTH 1 | YEAR 2016
```
I don't get any entries, because the 2nd month is lower than the 1st month. Here is my query:
```
SELECT *
FROM table
WHERE YEAR >= '$year'
AND MONTH >= '$month'
AND YEAR <= '$year2'
AND MONTH <= '$month2'
```
I can't change the columns of the table, because a CSV import is like this. Can anyone help me with this?
|
The years aren't disconnected from the months, so you can't test them separately.
Try something like
```
$date1 = $year*100+$month; // will be 201509
$date2 = $year2*100+$month2; // will be 201602
...
SELECT * FROM table WHERE (YEAR*100)+MONTH >= '$date1' AND (YEAR*100)+MONTH <= '$date2'
```
Make sure you protect against SQL injection though.
|
```
SELECT
*
FROM
`my_table`
WHERE
((`YEAR` * 12) + `MONTH`) >= (($year * 12) + $month)
AND ((`YEAR` * 12) + `MONTH`) <= (($year2 * 12) + $month2)
```
Since they aren't date fields, you need to convert to numbers that can be compared against. Multiplying the year by 12 and adding the month will give you a unique number specific to that month of the year. Then you can compare on that.
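The `(YEAR * 12) + MONTH` trick can be checked quickly in any engine. A minimal sketch using Python's `sqlite3` (table and column names are hypothetical), selecting rows between September 2015 and January 2016 inclusive:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, year INTEGER, month INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?,?)",
                [(1, 2015, 8), (2, 2015, 9), (3, 2015, 12),
                 (4, 2016, 1), (5, 2016, 2)])

# Collapse (year, month) into one comparable number: year*12 + month.
rows = con.execute("""
SELECT id FROM t
WHERE (year * 12) + month BETWEEN (2015 * 12) + 9 AND (2016 * 12) + 1
ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # ids inside the range
```

Rows before September 2015 and after January 2016 are excluded even though their bare month numbers would pass the naive per-column comparison.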
|
PHP SQL Select between 4 columns
|
[
"",
"sql",
""
] |
I have a query like this:
```
select * from table where id <= 10 limit 5; // table has +10 rows
```
The above query matches 10 rows (though only 5 are returned because of the `LIMIT`). Now I want to know how I can get the total number of matching rows for this query:
```
select * from table where col = 'anything' limit 5;
```
How can I calculate the number of all results *(regardless of `LIMIT`)* for this query?
Actually I want this number:
```
select count(*) as total_number from table where col = 'anything'
```
Now I want to know how I can get the total number of results without a second query.
|
Add a column, `total`, for example:
```
select t.*
, (select count(*) from tbl where col = t.col) as total
from tbl t
where t.col = 'anything'
limit 5
```
As stated by *@Tim Biegeleisen*: *`limit` keyword is applied after everything else, so the `count(*)` still returns the right answer.*
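The correlated-subquery approach is easy to verify. A small sketch with Python's `sqlite3` (table name, column, and data are made up): 8 of the 12 rows match, the `LIMIT` keeps 5, and every returned row still carries the full match count of 8:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER, col TEXT)")
con.executemany("INSERT INTO tbl VALUES (?,?)",
                [(i, 'anything' if i <= 8 else 'other') for i in range(1, 13)])

# LIMIT trims the outer result, but the correlated COUNT(*) still sees
# every matching row, so `total` is the count regardless of LIMIT.
rows = con.execute("""
SELECT t.id,
       (SELECT COUNT(*) FROM tbl WHERE col = t.col) AS total
FROM tbl t
WHERE t.col = 'anything'
LIMIT 5
""").fetchall()
print(rows)
```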
|
You need the `SQL_CALC_FOUND_ROWS` option in your query and the `FOUND_ROWS()` function to do this:
```
SELECT SQL_CALC_FOUND_ROWS * FROM table WHERE col = 'anything' LIMIT 5;
SET @rows = FOUND_ROWS(); -- for later use
```
Note that `SQL_CALC_FOUND_ROWS` and `FOUND_ROWS()` are deprecated as of MySQL 8.0.17; a separate `COUNT(*)` query is the recommended replacement.
|
How to get the number of total results when there is LIMIT in query?
|
[
"",
"mysql",
"sql",
""
] |
I realize this is an odd question, but I'd like to know if this is possible.
Let's say I have a DB with ages and IDs. I need to compare each ID's age to the average age, but I can't figure out how to do that without grouping or subqueries.
```
SELECT
ID,
AGE - AVG(AGE)
FROM
TABLE
```
I'm trying to get something like the above, but obviously that doesn't work because ID isn't grouped; and if I group, it calculates the average for each group and not for the table as a whole. How can I get a global average without a subquery?
|
```
SELECT ID,
AGE -
AVG(AGE) OVER (partition by ID) as age_2
FROM Table
```
I just re-read the question: you want the global `avg`
```
SELECT ID,
AGE -
AVG(AGE) OVER () as age_2
FROM Table
```
|
The window logic for average age is:
```
SELECT ID, AGE - ( AVG(AGE) OVER () )
FROM TABLE;
```
You do not want `ORDER BY` in the partitioning clause.
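The empty `OVER ()` clause is supported beyond SQL Server. As a quick sanity check, here is a sketch using Python's `sqlite3` (window functions need SQLite 3.25+, which ships with Python 3.8 and later; the table is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, age INTEGER)")
con.executemany("INSERT INTO people VALUES (?,?)", [(1, 20), (2, 30), (3, 40)])

# An empty OVER () makes the window the entire result set: one global
# average, computed without GROUP BY and without a subquery.
rows = con.execute("""
SELECT id, age - AVG(age) OVER () AS diff
FROM people
ORDER BY id
""").fetchall()
print(rows)
```

With ages 20, 30, 40 the global average is 30, so the differences come back as -10, 0 and 10.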
|
Average without grouping or subquery
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"average",
""
] |
```
PersistentId UserId EnterDate
111 1 June 1, 2015 17:05
112 1 June 1, 2015 17:21
113 1 June 1, 2015 17:27
114 1 June 1, 2015 18:25
115 1 June 1, 2015 19:00
116 2 June 1, 2015 18:05
117 2 June 1, 2015 18:21
118 2 June 1, 2015 19:27
```
I'd like to get a list of UserIds and a count for each UserId such that only rows where the difference between EnterDates < 30 minutes are included.
So for the above data, the output would be
```
UserId Count
1 3
2 2
```
The rows that should be pulled for UserId 1 are with persistentIds 111, 114, 115.
The rows that should be pulled for UserId 2 are with persistentIds 116, 118
Any ideas on how I can write this SQL query?
|
Two queries that both give your expected results and use 30 minute windows but have completely different interpretations of your requirements... you might want to clarify the question.
[SQL Fiddle](http://sqlfiddle.com/#!4/6f765/7)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE table_name (PersistentId, UserId, EnterDate ) AS
SELECT 111, 1, to_date('June 1, 2015 17:05','Month DD, YYYY HH24:MI') FROM DUAL
UNION ALL SELECT 112, 1, to_date('June 1, 2015 17:21','Month DD, YYYY HH24:MI') FROM DUAL
UNION ALL SELECT 113, 1, to_date('June 1, 2015 17:27','Month DD, YYYY HH24:MI') FROM DUAL
UNION ALL SELECT 114, 1, to_date('June 1, 2015 18:25','Month DD, YYYY HH24:MI') FROM DUAL
UNION ALL SELECT 115, 1, to_date('June 1, 2015 19:00','Month DD, YYYY HH24:MI') FROM DUAL
UNION ALL SELECT 116, 2, to_date('June 1, 2015 18:05','Month DD, YYYY HH24:MI') FROM DUAL
UNION ALL SELECT 117, 2, to_date('June 1, 2015 18:21','Month DD, YYYY HH24:MI') FROM DUAL
UNION ALL SELECT 118, 2, to_date('June 1, 2015 19:27','Month DD, YYYY HH24:MI') FROM DUAL
```
**Query 1 - Count results in 30 minute windows**:
```
SELECT UserId,
"Count"
FROM (
SELECT UserID,
COUNT(*) OVER ( PARTITION BY UserId ORDER BY EnterDate RANGE BETWEEN INTERVAL '30' MINUTE PRECEDING AND CURRENT ROW ) AS "Count",
EnterDate,
LEAD(EnterDate) OVER ( PARTITION BY UserId ORDER BY EnterDate ) AS nextEnterDate
FROM Table_Name
)
WHERE "Count" > 1
AND EnterDate + INTERVAL '30' MINUTE < nextEnterDate
```
**[Results](http://sqlfiddle.com/#!4/6f765/7/0)**:
```
| USERID | Count |
|--------|-------|
| 1 | 3 |
| 2 | 2 |
```
**Query 2 - Count all rows that are within 30 minutes of another row**:
```
SELECT UserID,
COUNT(1) AS "Count"
FROM (
SELECT UserID,
EnterDate,
LAG(EnterDate) OVER ( PARTITION BY UserId ORDER BY EnterDate ) AS prevDate,
LEAD(EnterDate) OVER ( PARTITION BY UserId ORDER BY EnterDate ) AS nextDate
FROM Table_Name
)
WHERE EnterDate - INTERVAL '30' MINUTE < prevDate
OR EnterDate + INTERVAL '30' MINUTE > nextDate
GROUP BY UserId
```
**[Results](http://sqlfiddle.com/#!4/6f765/7/1)**:
```
| USERID | Count |
|--------|-------|
| 1 | 3 |
| 2 | 2 |
```
|
Your question isn't worded clearly, but based on your desired results, I think you want to use `NOT EXISTS` to filter out records that are less than 30 minutes after another record with the same user id. Like this:
```
with d as (
SELECT 111 persistent_id, 1 user_id, to_date('June 1, 2015 17:05','Month DD, YYYY HH24:MI') enter_date from dual UNION ALL
SELECT 112 persistent_id, 1 user_id, to_date('June 1, 2015 17:21','Month DD, YYYY HH24:MI') from dual UNION ALL
SELECT 113 persistent_id, 1 user_id, to_date('June 1, 2015 17:27','Month DD, YYYY HH24:MI') from dual UNION ALL
SELECT 114 persistent_id, 1 user_id, to_date('June 1, 2015 18:25','Month DD, YYYY HH24:MI') from dual UNION ALL
SELECT 115 persistent_id, 1 user_id, to_date('June 1, 2015 19:00','Month DD, YYYY HH24:MI') from dual UNION ALL
SELECT 116 persistent_id, 2 user_id, to_date('June 1, 2015 18:05','Month DD, YYYY HH24:MI') from dual UNION ALL
SELECT 117 persistent_id, 2 user_id, to_date('June 1, 2015 18:21','Month DD, YYYY HH24:MI') from dual UNION ALL
SELECT 118 persistent_id, 2 user_id, to_date('June 1, 2015 19:27','Month DD, YYYY HH24:MI') from dual
)
select d.user_id, count(*)
from d
where not exists ( SELECT 'record for same userid but less than 30 minutes earlier'
FROM d d2
WHERE d2.user_id = d.user_id
AND d2.enter_date between d.enter_date - (0.5/24) and d.enter_date
and d2.persistent_id != d.persistent_id )
group by d.user_id
order by d.user_id
```
|
sql count rows where diff Dates is less than 30 minutes
|
[
"",
"sql",
"oracle",
"date",
"compare",
""
] |
Say this is my table schema
```
Create Table PowerReading
{ Device VARCHAR(32);
Power INT;
Time TIMESTAMP;
}
```
Say these are the rows in my table
```
Device | Power | Time
A3 | 5 | 2013-05-01 17:36:00
A3 | 9 | 2013-05-01 17:37:44
B1 | 11 | 2013-05-01 17:35:14
B1 | 5 | 2013-05-01 17:35:55
B7 | 4 | 2013-05-01 17:34:12
B7 | 0 | 2013-05-01 17:34:44
```
I've spent like days trying to figure out how to show the reading that is the most recent for each DISTINCT device name. I want an SQL query that gives THIS output from the above table.
```
Device | Power | Time
A3 | 9 | 2013-05-01 17:37:44
B1 | 5 | 2013-05-01 17:35:55
B7 | 0 | 2013-05-01 17:34:44
```
I've tried to accomplish it using the code below, but it doesn't work:
```
SELECT * FROM (SELECT Device,Power,MAX(Time) as Max FROM PowerReading GROUP
BY Device,Power) t1 JOIN PowerReading on t1.Device=PowerReading.Device AND
t1.max=PowerReading.Power
```
The above code gives me output that doesn't match my desired output.
This little issue has been eating at me for days. Please help me? :-)
|
The derived table was using a `group by` on `power` column as well, which is not needed.
```
SELECT p.* FROM
(SELECT Device, MAX(Time) as maxtime
FROM PowerReading
GROUP BY Device) t1
JOIN PowerReading p on t1.Device = p.Device AND t1.maxtime = p.time
```
|
You can alternatively do it using variables:
```
SELECT Device, Power, Time
FROM (
SELECT Device, Power, Time,
@rn := IF (@dev = Device, @rn + 1,
IF(@dev := Device, 1, 1)) AS rn
FROM PowerReading
CROSS JOIN (SELECT @rn := 0, @dev := '') AS vars
ORDER BY Device, Time DESC) AS t
WHERE t.rn = 1
```
Variable `@rn` essentially simulates the `ROW_NUMBER() OVER (PARTITION BY Device ORDER BY Time DESC)` window function, available in other RDBMSs.
The above query will select *exactly one* row per `Device` even if there are more than one rows sharing the exact same timestamp.
[**Demo here**](http://sqlfiddle.com/#!9/8f7af/2)
|
Show the most recently added row in a table for each distinct name
|
[
"",
"mysql",
"sql",
"database",
""
] |
NB. I don't want to mark the check box in the wizard for deletion. This question's **strictly** about scripting the behavior.
When I run the following script to get a fresh start, I get the error that the database *Duck* can't be deleted because it's currently in use.
```
use Master
drop database Duck
drop login WorkerLogin
drop login AdminLogin
go
```
Be that as it may (even though I'm the only user currently in the system and I run no other queries but that's another story), I need to close all the existing connections. One way is to wait it out or restart the manager. However I'd like to script in that behavior so I can tell the stubborn server to *drop* the duck down. (Yes, "typo" intended.)
What do I need to add to the dropping statement?
|
Try below code.
```
USE master;
ALTER DATABASE [Duck] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [Duck] ;
```
For deep discussion see [this answer](https://dba.stackexchange.com/a/34265).
|
You have to kill first all active connections before you can drop the database.
```
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
--do you stuff here
ALTER DATABASE YourDatabase SET MULTI_USER
```
<http://wiki.lessthandot.com/index.php/Kill_All_Active_Connections_To_A_Database>
[How do you kill all current connections to a SQL Server 2005 database?](https://stackoverflow.com/questions/11620/how-do-you-kill-all-current-connections-to-a-sql-server-2005-database)
|
How to drop a database when it's currently in use?
|
[
"",
"sql",
"sql-server",
""
] |
Let's say I have a column that contains a float value between 1-100.
I'd like to be able to turn that value into a less precise integer between 1-10 then order the results on this new value.
It may seem odd to want to make the ordering less precise but the SQL statement is ordered by 2 columns and if the first is too precise then the 2nd order column would have no weight.
Essentially I would like to group my first order by into 10 groups and then order each of those groups by another column.
```
SELECT "sites".* FROM "sites" ORDER BY "sites"."rating" DESC, "sites"."price" ASC LIMIT 24 OFFSET 0
```
edit: This is a rails app using postgresql
|
```
SELECT "sites".*
FROM "sites"
ORDER BY FLOOR("sites"."rating"/10) DESC, "sites"."price" ASC
LIMIT 24 OFFSET 0
```
|
What SQL is that? Use a division function and, if necessary, round it.
In MySQL look at the `DIV` operator. I don't have the means to test this right now, but it might help point you in the right direction:
```
SELECT "sites".* FROM "sites" ORDER BY "sites"."rating" DIV 10 DESC, ...
```
|
SQL order_by an expression
|
[
"",
"sql",
"ruby-on-rails",
"postgresql",
""
] |
The `CASE` condition for two columns is the same. In the statement below I am using it twice, once for each column. Is there any way to avoid repeating the condition?
```
case [CPHIL_AWD_CD]
when ' ' then 'Not Applicable/ Not a Doctoral Student'
when 'X' then 'Not Applicable/ Not a Doctoral Student'
when 'N' then 'NO'
when 'Y' then 'YES'
end as CPHIL_AWD_CD
,case [FINL_ORAL_REQ_CD]
when ' ' then 'Not Applicable/ Not a Doctoral Student'
when 'X' then 'Not Applicable/ Not a Doctoral Student'
when 'N' then 'NO'
when 'Y' then 'YES'
end as FINL_ORAL_REQ_CD
```
|
A variation on thepirat000's answer:
```
-- Sample data.
declare @Samples as Table (
Frisbee Int Identity Primary Key, Code1 Char(1), Code2 Char(2) );
insert into @Samples values ( 'Y', 'N' ), ( ' ', 'Y' ), ( 'N', 'X' );
select * from @Samples;
-- Handle the lookup.
with Lookup as (
select * from ( values
( ' ', 'Not Applicable/ Not a Doctoral Student' ),
( 'X', 'Not Applicable/ Not a Doctoral Student' ),
( 'N', 'No' ),
( 'Y', 'Yes' ) ) as TableName( Code, Description ) )
select S.Code1, L1.Description, S.Code2, L2.Description
from @Samples as S inner join
Lookup as L1 on L1.Code = S.Code1 inner join
Lookup as L2 on L2.Code = S.Code2;
```
The lookup table is created within a CTE and referenced as needed for multiple columns.
**Update:** The table variable is now blessed with a primary key for some inexplicable reason. If someone can actually explain how it will benefit performance, I'd love to hear it. It isn't obvious from the execution plan.
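The inline-`VALUES` lookup also works outside SQL Server. A minimal sketch of the same idea using Python's `sqlite3` (sample table and rows mirror the answer; names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (code1 TEXT, code2 TEXT)")
con.executemany("INSERT INTO samples VALUES (?,?)",
                [('Y', 'N'), (' ', 'Y'), ('N', 'X')])

# One lookup defined once in a CTE, joined twice -- once per coded column.
rows = con.execute("""
WITH lookup(code, descr) AS (VALUES
    (' ', 'Not Applicable/ Not a Doctoral Student'),
    ('X', 'Not Applicable/ Not a Doctoral Student'),
    ('N', 'No'),
    ('Y', 'Yes'))
SELECT l1.descr, l2.descr
FROM samples s
JOIN lookup l1 ON l1.code = s.code1
JOIN lookup l2 ON l2.code = s.code2
ORDER BY s.rowid
""").fetchall()
print(rows)
```

The decode rules live in exactly one place, so adding a fifth code means one new `VALUES` row instead of editing every `CASE` expression.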
|
Just create a table (temp?) with the mapping
```
CREATE TABLE [Constants]
(
[ID] nvarchar(1) PRIMARY KEY,
[Text] nvarchar(max)
)
INSERT INTO [Constants] VALUES (' ', 'Not Applicable/ Not a Doctoral Student')
INSERT INTO [Constants] VALUES ('X', 'Not Applicable/ Not a Doctoral Student')
INSERT INTO [Constants] VALUES ('N', 'No')
INSERT INTO [Constants] VALUES ('Y', 'Yes')
```
and perform an inner join
```
SELECT C1.Text AS CPHIL_AWD_CD, C2.Text AS FINL_ORAL_REQ_CD, ...
FROM YourTable T
INNER JOIN Constants C1 ON C1.ID = T.CPHIL_AWD_CD
INNER JOIN Constants C2 ON C2.ID = T.FINL_ORAL_REQ_CD
```
|
do i need to rewrite the case statement for every field?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Very simply, I am calling a procedure to get the primary key from one table, then storing the key in a variable so that I can then use it to insert into another table that needs a foreign key. The variable is what I expect it to be; however, when I use it, the query returns nothing.
After looking into it, it appears to be an issue with the `WHERE` clause and using the variable:
```
DECLARE @ClientId bigint;
SELECT *
FROM Testing.dbo.Client
WHERE ClientID = @ClientId
```
`@ClientId` value is 2
There is a row with a client ID of 2 in the table.
When I run this I get the result I expect
```
select *
from Testing.dbo.Client
where ClientID = 2
```
This is where it gets set
```
DECLARE @ClientId int;
EXECUTE @ClientId = Testing.dbo.GetClientID @ClientName;
```
Where GetClientID is the following
```
USE Testing
GO
CREATE PROCEDURE GetClientId
@ClientName nvarchar(50)
AS
SELECT ClientID FROM Testing.dbo.Client WHERE ClientName = @ClientName
GO
```
I've worked it out a bit more now: `@ClientId` is not getting set after the call to the proc.
|
I managed to work it out using OUTPUT parameters, which I had not come across before.
```
USE Testing
GO
CREATE PROCEDURE AddReading
@ClientName NVARCHAR(50),
@MonitorName NVARCHAR(50),
@DateTime DATETIME,
@Temperature DECIMAL(12, 10),
@Humidity DECIMAL (12, 10),
@Staleness DECIMAL (12, 10)
AS
DECLARE @ClientId int;
EXEC Testing.dbo.GetClientId @ClientName, @ClientId OUTPUT;
INSERT INTO Testing.dbo.Reading
(ClientID, MonitorName, DateTime, Temperature, Humidity, Staleness)
VALUES (@ClientId, @MonitorName, @DateTime, @Temperature, @Humidity, @Staleness);
GO
```
Where proc GetClientId was
```
USE Testing
GO
CREATE PROCEDURE GetClientId
@ClientName nvarchar(50),
@ClientId bigint OUTPUT
AS
Select @ClientId = ClientID FROM Testing.dbo.Client WHERE ClientName = @ClientName
GO
```
Please let me know if you believe this to be the best way of doing this, or if there is a better way of doing it.
|
You need to initialize the variable, otherwise it contains `NULL`:
```
DECLARE @ClientId bigint = 2;
select * from Testing.dbo.Client where ClientID = @ClientId;
```
If you have it in the argument list, don't declare another variable with this name:
```
CREATE PROC stored_procedure
    @ClientID BIGINT
AS
BEGIN
select * from Testing.dbo.Client where ClientID = @ClientId;
END
```
|
SQL Server : use bigint variable in stored procedure
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
Hoping someone can help; I've searched, but the answers I found are beyond what we are covering.
I'm looking to pull up "staff member who looks after property in Glasgow or Aberdeen" using the below code:
```
SELECT s.fName, s.lName, propertyNo
FROM Staff s, PropertyForRent p
WHERE s.staffNo = p.staffNo
AND city = 'Glasgow' OR 'Aberdeen';
```
Only Glasgow is being returned. I've also tried `AND`, which returns nothing. I'm completely new to this, so I know I'm missing something very basic.
|
```
SELECT s.fName, s.lName, propertyNo
FROM Staff s, PropertyForRent p
WHERE s.staffNo = p.staffNo
AND (city = 'Glasgow' OR city='Aberdeen');
```
This can also be rewritten to
```
SELECT s.fName, s.lName, propertyNo
FROM Staff s, PropertyForRent p
WHERE s.staffNo = p.staffNo
AND city in('Glasgow', 'Aberdeen');
```
However, you should use a proper join structure to let the optimizer do its thing
```
SELECT s.fName, s.lName, propertyNo
FROM Staff s
INNER JOIN PropertyForRent p ON s.staffNo = p.staffNo
WHERE (city = 'Glasgow' OR city='Aberdeen');
```
|
You have missed the brackets; it should be
```
a AND (b OR c)
```
**Edit:** after the `OR` you have to write `city = x` again. You should also prefix all of your fields as you do with `s.fName`; `propertyNo` and `city` should be prefixed too.
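Why the original query returns only Glasgow is worth seeing concretely: the broken predicate parses as `(city = 'Glasgow') OR ('Aberdeen')`, and the bare string coerces to 0 (false) in MySQL-like engines. A sketch using Python's `sqlite3` (table and rows are made up) shows both forms:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE props (staff TEXT, city TEXT)")
con.executemany("INSERT INTO props VALUES (?,?)",
                [('Ann', 'Glasgow'), ('Bob', 'Aberdeen'), ('Cal', 'London')])

# Broken: the bare string 'Aberdeen' coerces to 0, so it adds nothing.
broken = con.execute(
    "SELECT staff FROM props WHERE city = 'Glasgow' OR 'Aberdeen'").fetchall()

# Fixed: repeat the comparison (or use IN) so both cities match.
fixed = con.execute(
    "SELECT staff FROM props WHERE city = 'Glasgow' OR city = 'Aberdeen' "
    "ORDER BY staff").fetchall()
print(broken, fixed)
```

The broken form silently matches only Glasgow, which is exactly the symptom described in the question.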
|
using basic join for sql query for multiple values
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have two tables. `PartFlights` and `Parts`. A `Part` can (potentially) have many `PartFlights`, a `PartFlight` has one and only one `Part`.
```
+----------------------+---------+
| part_flights_pivot | part |
+----------------------+---------+
| part_flight_id | part_id |
| part_id | |
+----------------------+---------+
```
The question I'm asking is: **How many `PartFlights` are there that have reused `Parts`?**
Getting this into SQL is turning out horribly for me. I've identified some conditions however:
* Ultimately, I need a count statement to add the result up.
* I need to join `PartFlight` to `Part`.
* For `Parts` that have been reused, I need to exclude the first `PartFlight`, as that `PartFlight` is not using a reused `Part` at the time, but a new one.
I managed to produce the following query:
```
SELECT part_flights_pivot.part_flight_id, part_flights_pivot.part_id, COUNT(parts.part_id)-1 as count, SUM(count) FROM part_flights_pivot
JOIN parts ON part_flights_pivot.part_id=parts.part_id
GROUP BY (parts.part_id)
HAVING COUNT(parts.part_id)-1 > 0
```
And while it returns results, I don't believe those results are precisely correct.
|
Since it is given that every single part\_flight must carry one and only one part\_id, this query can be constructed on the part\_flights table only. However, I've constructed the query assuming there are other fields we require from the parts table. (If we required nothing unique from the Parts table this could be stripped down to the code within the parens.) I believe our correct query is along these lines :
```
select p.*, t.counter
from parts p,
(select part_id, count(*)-1 counter
from part_flights_pivot
group by part_id
having count(*)>1) t
where p.part_id = t.part_id
```
|
Use a sub-query:
```
SELECT Part.* FROM Part WHERE
(SELECT COUNT(*) FROM PartFlight WHERE
    PartFlight.key = Part.key) > 1
```
|
Select count of records that have a relationship that has its own relationship condition?
|
[
"",
"mysql",
"sql",
""
] |
there are two models Player and Team which relates as Many-to-Many to each other, so schema contains three tables `players`, `player_teams` and `teams`.
Given that each team may consist from 1 or 2 two players, how to find a team by known player id(s)?
In this SQLFiddle <http://sqlfiddle.com/#!15/27ac5>
* query for player ids 1 and 2 should return team with id 1
* query for player id 2 should return teams with ids 1 and 2
* query for player id 3 should return team with id 3
|
There's a mistake in the third bullet point of your problem statement, I think. There is no team 3. In that third case, I think you want to return team 2. (The only team that player 3 is on.)
This query requires 2 bits of information - the players you are interested in, and the number of players.
```
SELECT team_id, count(*)
FROM players_teams
WHERE player_id IN (1,2)
GROUP BY team_id
HAVING count(*) = 2
-- returns team 1
SELECT team_id, count(*)
FROM players_teams
WHERE player_id IN (2)
GROUP BY team_id
HAVING count(*) = 1
-- returns teams 1 & 2
SELECT team_id, count(*)
FROM players_teams
WHERE player_id IN (3)
GROUP BY team_id
HAVING count(*) = 1
-- returns team 2
```
edit: here's an example of using this from Ruby, which may make it a little clearer how it works...
```
player_ids = [1,2]
sql = <<-EOF
SELECT team_id, count(*)
FROM players_teams
WHERE player_id IN (#{player_ids.join(',')})
GROUP BY team_id
HAVING count(*) = #{player_ids.size}
EOF
```
|
Is this what you are looking for?
```
select t.name
from teams t
inner join players_teams pt on t.id = pt.team_id
where pt.player_id = 1
```
-- "OK, SQL give me a team id where both of those two players played together"
```
select pt1.team_id
from players_teams pt1
inner join players_teams pt2 on pt1.team_id = pt2.team_id
where pt1.player_id = 1
and pt2.player_id = 2
```
|
How to find record by the pair of references from the join table?
|
[
"",
"mysql",
"sql",
"ruby-on-rails",
"postgresql",
"activerecord",
""
] |
Let's say I have two tables in my oracle database
Table A : stDate, endDate, salary
For example:
```
03/02/2010 28/02/2010 2000
05/03/2012 29/03/2012 2500
```
Table B : DateOfActivation, rate
For example:
```
01/01/2010 1.023
01/11/2011 1.063
01/01/2012 1.075
```
I would like to have a SQL query displaying the sum of salary of table A with each salary multiplied by the rate of table B depending on the activation date.
Here, for the first salary the correct rate is the first one (1.023), because the second rate has an activation date later than the stDate to endDate interval.
For the second salary, the third rate applies because that rate's activation date was before the date interval of the second salary.
So the sum is: 2000 \* 1.023 + 2500 \* 1.075 = 4733.5
I hope I am clear
Thanks
|
Assuming the rate must be active before the beginning of the interval (i.e. DateOfActivation < stDate), you could do something like this ([see fiddle](http://sqlfiddle.com/#!4/6d85b/1)):
`SELECT SUM(salary*
(SELECT rate from TableB WHERE DateOfActivation=
(SELECT MAX(DateOfActivation) FROM TableB WHERE DateOfActivation < stDate)
)) FROM TableA;`
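The nested-subquery approach (find the latest activation date before `stDate`, then look up its rate) can be exercised against the question's sample data. A small sketch with Python's `sqlite3` standing in for Oracle, using ISO date strings so lexicographic comparison matches date order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (stDate TEXT, endDate TEXT, salary REAL);
CREATE TABLE TableB (DateOfActivation TEXT, rate REAL);
INSERT INTO TableA VALUES ('2010-02-03','2010-02-28',2000),
                          ('2012-03-05','2012-03-29',2500);
INSERT INTO TableB VALUES ('2010-01-01',1.023),
                          ('2011-11-01',1.063),
                          ('2012-01-01',1.075);
""")

total = conn.execute("""
    SELECT SUM(salary *
        (SELECT rate FROM TableB WHERE DateOfActivation =
            (SELECT MAX(DateOfActivation) FROM TableB
             WHERE DateOfActivation < stDate)))
    FROM TableA
""").fetchone()[0]
print(round(total, 1))  # 4733.5, matching the question's hand calculation
```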
|
The first thing to do is to transform `Table B` (`Table2` in the query) to have, for each row, the start and end date
```
Select DateOfActivation AS startDate
, rate
, NVL(LEAD(DateOfActivation, 1) OVER (ORDER BY DateOfActivation)
, TO_DATE('9999/12/31', 'yyyy/mm/dd')) AS endDate
From Table2
```
Now we can join this table with `Table A` (`Table1` in the query)
```
WITH Rates AS (
Select DateOfActivation AS startDate
, rate
, NVL(LEAD(DateOfActivation, 1) OVER (ORDER BY DateOfActivation)
, TO_DATE('9999/12/31', 'yyyy/mm/dd')) AS endDate
From Table2)
Select SUM(s.salary * r.rate)
From Rates r
INNER JOIN Table1 s ON s.stDate < r.endDate AND s.endDate > r.startDate
```
The `JOIN` condition gets every row in `Table A` that is at least partially within the activation period of the rate; if you need it to be fully inclusive you can alter it as in the following query
```
WITH Rates AS (
Select DateOfActivation AS startDate
, rate
, NVL(LEAD(DateOfActivation, 1) OVER (ORDER BY DateOfActivation)
, TO_DATE('9999/12/31', 'yyyy/mm/dd')) AS endDate
From Table2)
Select SUM(s.salary * r.rate)
From Rates r
INNER JOIN Table1 s ON s.stDate >= r.startDate AND s.endDate <= r.endDate
```
|
Query to apply rate from the interval of dates
|
[
"",
"sql",
"oracle",
""
] |
basically I have two tables - one populated with payment information, one with a payment type and description.
Table 1(not the full table, just the first entries):
[frs\_Payment](https://i.stack.imgur.com/qCyD3.jpg)
[](https://i.stack.imgur.com/qCyD3.jpg)
Table 2:
frs\_PaymentType
[](https://i.stack.imgur.com/eEjLm.jpg)
What I'm meant to do is make a query that returns the sum of the amount for each payment type. In other words, my end result should look something like:
```
ptdescription amountSum
-------------------------
Cash 845.10
Cheque 71.82
Debit 131.67
Credit 203.49
```
(I've worked out the answers)
Getting the ptdescription is easy:
```
SELECT ptdescription
FROM frs_PaymentType
```
And so is getting the amountSum:
```
SELECT SUM(amount) AS amountSum
FROM frs_Payment
GROUP BY ptid
```
The question is, how do I combine the two queries into something that I can use in a general case (i.e. if I add another payment type, etc.)
|
Use a join:
```
Select ptdescription,SUM(amount) AS amountSum
From frs_PaymentType t join frs_Payment p
on t.ptid=p.ptid
GROUP BY t.ptid
```
|
Try as follows
```
Select ptdescription, SUM(amount) AS amountSum
From frs_PaymentType t join frs_Payment p
on t.ptid=p.ptid
GROUP BY ptdescription
```
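Both answers use the same join-then-aggregate idea, which generalises automatically when a new payment type is added. A quick sketch with Python's `sqlite3` (the sample amounts are invented; only `Cash` and `Cheque` are shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE frs_PaymentType (ptid INTEGER, ptdescription TEXT);
CREATE TABLE frs_Payment (ptid INTEGER, amount REAL);
INSERT INTO frs_PaymentType VALUES (1,'Cash'),(2,'Cheque');
INSERT INTO frs_Payment VALUES (1,500.10),(1,345.00),(2,71.82);
""")
rows = [(d, round(s, 2)) for d, s in conn.execute("""
    SELECT t.ptdescription, SUM(p.amount) AS amountSum
    FROM frs_PaymentType t JOIN frs_Payment p ON t.ptid = p.ptid
    GROUP BY t.ptdescription
    ORDER BY t.ptdescription
""")]
print(rows)  # [('Cash', 845.1), ('Cheque', 71.82)]
```

One design note: with an inner join, a payment type that has no payments yet simply disappears from the result; switching to a `LEFT JOIN` with `COALESCE(SUM(p.amount), 0)` would show it with a zero total instead.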
|
Horizontal UNION ALL in SQL
|
[
"",
"mysql",
"sql",
"union",
""
] |
I have a query which is giving me a syntax error. I am not getting it, please help me sort it out.
```
SELECT
T3.[AcctName],
SUM(T0.[DebLTotal]/85) AS buget,(Select sum(T3.[Debit]/85) From JDT1 T3 ) as 'CurrentBudget'
FROM
OBGT T0
INNER JOIN
(SELECT
CASE WHEN T1.[AcctCode] = '5001' THEN T1.[AcctName] = 'Salaries'
END as [AcctName],
[AcctCode]
FROM
OACT AS T1) AS T3 ON T0.[AcctCode] = T3.[AcctCode]
INNER JOIN
OBGS T2 ON T0.[Instance] = T2.[AbsId]
WHERE
T2.[Name] = 'Main Budget 2015'
GROUP BY
T3.[AcctName], T2.[Name]
```
I am getting `Incorrect syntax near '='` error on this line
```
SELECT CASE WHEN T1.[AcctCode] = '5001' THEN T1.[AcctName] = 'Salaries'
```
|
The problem is your assignment in the middle of the `CASE` expression:
```
SELECT
CASE WHEN T1.[AcctCode] = '5001' THEN T1.[AcctName] = 'Salaries'
END as [AcctName],
[AcctCode]
FROM
OACT AS T1
```
This will never work, since `CASE` is just an **expression** in T-SQL - it can only **return one atomic value** - it cannot do assigments or control the flow of code.
You need to use it something like this:
```
SELECT
[AcctName] = CASE
WHEN [AcctCode] = N'5001' THEN 'Salaries'
ELSE 'Other'
END,
[AcctCode]
FROM
OACT AS T1
```
If your `AcctCode` is 5001 - then you use `Salaries` as your `AcctName` (and if it's another value - what do you want to do?)
**Update:** not sure what you mean by *is executing only else part* - check out this demo:
```
DECLARE @t1 TABLE (ID INT NOT NULL, AcctCode INT)
INSERT INTO @T1 VALUES(1, 5000), (2, 5001), (3, 5000), (4, 5002), (5, 5001), (6, 4999)
SELECT
[AcctName] = CASE
WHEN [AcctCode] = 5001 THEN 'Salaries'
ELSE 'Other'
END,
[AcctCode]
FROM @T1
```
The output will **clearly show** that my `CASE` expression works just fine - for values of `5001`, `AcctName` is set to `Salaries`, while for all other values, it returns `Other`.
|
I am getting only else part with this query.
```
SELECT
T3.[AcctName] ,
SUM(T0.[DebLTotal] / 85) AS buget ,
(SELECT SUM(T3.[Debit] / 85)
FROM JDT1 T3) AS 'CurrentBudget'
FROM
OBGT T0
INNER JOIN
(SELECT
AcctName = CASE
WHEN T1.[AcctCode] = '10' THEN 'Fixed Assets'
WHEN T1.[AcctCode] = '50' THEN 'Salaries'
ELSE 'Other'
END,
[AcctCode]
FROM
OACT AS T1) AS T3 ON T0.[AcctCode] = T3.[AcctCode]
INNER JOIN
OBGS T2 ON T0.[Instance] = T2.[AbsId]
WHERE
T2.[Name] = 'Main Budget 2015'
GROUP BY
T3.[AcctName], T2.[Name]
```
Only `ELSE 'Other'` is working. Whatever I put in my `CASE WHEN`, it is returning me one row that is like :
```
AcctName budget
Other 2312836
```
|
Synatx error in SQL Query
|
[
"",
"sql",
"sql-server",
""
] |
I tried counting and grouping data from one column (TrackID) for when its value is 0 and when it is different from 0, with the grouping done on another column, namely RadioID in my code. However, Microsoft Visual Studio gives me an error which I can't understand. Any help?
```
SELECT RadioID,
(SELECT COUNT (TrackID) FROM tblZampMediaLogItem WHERE TrackID!=0) AS NajdeniPesni,
(SELECT COUNT (TrackID) FROM tblZampMediaLogItem WHERE TrackID=0) AS Nenajdenipesni
Group BY RadioID;
```
error message:
> Msg 207, Level 16, State 1, Line 6 Invalid column name 'RadioID'.
>
> Msg 207, Level 16, State 1, Line 3 Invalid column name 'RadioID'.
Why can't it recognise RadioID? What am I doing wrong?
|
The outer query does not have a FROM clause, therefore SQL Server cannot recognise where the RadioID field comes from. Assuming that the RadioID field is in the same table as the one used in the subqueries, I would rewrite your query as follows:
```
SELECT RadioID,
SUM(CASE WHEN TrackID !=0 THEN 1 ELSE 0 END) AS NajdeniPesni,
SUM(CASE WHEN TrackID =0 THEN 1 ELSE 0 END) AS Nenajdenipesni
FROM tblZampMediaLogItem
Group BY RadioID;
```
This way you do not have to run 3 queries, just one.
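This conditional-aggregation pattern (`SUM(CASE WHEN ... THEN 1 ELSE 0 END)`) is easy to verify. A minimal sketch with Python's `sqlite3` and invented log rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblZampMediaLogItem (RadioID INTEGER, TrackID INTEGER);
-- radio 1: two matched tracks, one unmatched; radio 2: two unmatched
INSERT INTO tblZampMediaLogItem VALUES (1,10),(1,0),(1,11),(2,0),(2,0);
""")
rows = conn.execute("""
    SELECT RadioID,
           SUM(CASE WHEN TrackID != 0 THEN 1 ELSE 0 END) AS NajdeniPesni,
           SUM(CASE WHEN TrackID =  0 THEN 1 ELSE 0 END) AS Nenajdenipesni
    FROM tblZampMediaLogItem
    GROUP BY RadioID
    ORDER BY RadioID
""").fetchall()
print(rows)  # [(1, 2, 1), (2, 0, 2)]
```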
|
Your missing `FROM DatabaseName` .. Try this..
```
SELECT RadioID,
(SELECT COUNT (TrackID) FROM tblZampMediaLogItem WHERE TrackID!=0) AS NajdeniPesni,
(SELECT COUNT (TrackID) FROM tblZampMediaLogItem WHERE TrackID=0) AS Nenajdenipesni
FROM tblZampMediaLogItem Group BY RadioID;
```
|
When i make 2 counts for the same column and group them by a third in sql it gives me an error
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am gathering all of the users who have administrative privileges with `@admins = User.find_by(admin: true)`, but when I try and get the number of admins with `puts @admins.size`, I get the error `NoMethodError: undefined method 'size' for #<User:0x00000009b78988>`. I expect to get just 1. Any idea what's going wrong?
|
as mentioned, `find_by` will only return the first instance of record matching the given condition.
Since you want a list of all the admin, use `where`.
Also, you can turn this into a `scope` in the `User` model as follow:
```
scope :admins, -> { where(admin: true) }
```
And then call:
```
User.admins # gets a list of all the admins
User.admins.size # or User.admins.count to return the number of admins you have.
```
|
You're expecting it to return a list of all the matching `User`s, but it doesn't -- [`find_by`](http://apidock.com/rails/v4.0.2/ActiveRecord/FinderMethods/find_by) "finds the **first** record matching the specified conditions."
You can use `User.where(admin: true)` instead, and that should work as you intend, returning an Array of all the `User`s where `admin` is `true`.
|
Getting size of users group not working in Ruby
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
I want to display the latest version(**version is float: 1, 1.1, 1.2...**) in every result group.
here is my code:
```
@results = Result.where(project_id: @project_id).group(:result_id).order("result_version DESC")
```
and in rails server message:
```
SELECT `results`.* FROM `results` WHERE `results`.`project_id` = 2 GROUP BY `results`.`result_id` ORDER BY result_version DESC LIMIT 5 OFFSET 0
```
but it doesn't work, so I tried **ASC**:
```
@results = Result.where(project_id: @project_id).group(:result_id).order(:result_version)
```
message:
```
SELECT `results`.* FROM `results` WHERE `results`.`project_id` = 2 GROUP BY `results`.`result_id` ORDER BY `results`.`result_version` ASC LIMIT 5 OFFSET 0
```
In the view, both methods show the same thing (**ordered ASC**).
How can I order within each group?
data like this:
```
id, project_id, result_version, result_id, result_num
'6', '2', '1', '1', 'N00009999000010'
'7', '2', '1', '2', 'N00009999000020'
'8', '2', '1', '3', 'N00009999000030'
'10', '2', '1.1', '2', 'N00009999000020'
'11', '2', '1.1', '3', 'N00009999000030'
```
P.S. if I simply use `.order(result_version: :desc)`, it works.
my problem just like this issue:
[Maximum in group by active record query](https://stackoverflow.com/questions/6468316/maximum-in-group-by-active-record-query)
but it's answer can't solve my problem.
|
As pointed out in my comment you should be using group\_by instead of group.
The result you're looking for might look like this :
```
Result.where(project_id: @project_id).
       order(result_version: :asc).
group_by {|result| result.result_id}
```
This will generate a hash that looks something like that :
`{5: [result1, result2], 6: [result7, result8]}`
You can then iterate over it in your view as such
```
- @grouped_result.each do |result_id, array_of_results|
Result ID:
= result_id
- array_of_results.each do |result|
= result.result_num
= result.result_version
# etc....
```
|
Try this:-
```
Result.where('project_id = ?', @project_id).group('result_id').order('result_version desc')
```
Use sort\_by instead of order:-
```
Result.where('project_id = ?', @project_id).group('result_id').sort_by(&:result_version)
```
Updated :-
```
Result.where('project_id = ?', @project_id).sort_by(&:result_version).group_by(&:result_id)
```
|
rails activerecord order in group
|
[
"",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
I converted a table's `DateTime` field to `DateTimeOffset`, but now the offset is automatically set to `+00:00`.
I need to change **all** `DateTimeOffset` fields of this table to an offset of +1:00.
How can I do this in an update query?
|
You can use `SWITCHOFFSET` to change the offset. You will need to subtract that number of hours from the date, though, if you don't want the clock value to change.
```
SELECT SWITCHOFFSET(DATEADD(hh, -1, CAST (GETDATE() AS DATETIMEOFFSET)),
'+01:00')
```
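The combined `DATEADD`/`SWITCHOFFSET` trick keeps the wall-clock part of the value intact while changing the stored offset. The same arithmetic can be sketched with Python's `datetime` (the timestamp is arbitrary): subtracting an hour and then converting to a +01:00 zone reproduces the original clock reading under the new offset.

```python
from datetime import datetime, timedelta, timezone

# A value as it comes out of the DATETIME -> DATETIMEOFFSET conversion: +00:00.
original = datetime(2015, 11, 10, 9, 30, tzinfo=timezone.utc)

step1 = original - timedelta(hours=1)                    # 08:30 +00:00 (DATEADD)
result = step1.astimezone(timezone(timedelta(hours=1)))  # SWITCHOFFSET to +01:00

print(result.isoformat())  # 2015-11-10T09:30:00+01:00
# Same clock fields as the original, only the offset changed.
```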
|
You can use `TODATETIMEOFFSET(datetime, '+01:00')`.
This won't affect the datetime part.
|
How do I update a column's offset in SQL Server?
|
[
"",
"sql",
"sql-server",
"datetimeoffset",
""
] |
We have an inventory management system which consists of an Item Catalog, Inventory, and Assets. Currently, we have an entry for every piece of inventory but we are now implementing a quantity on both the Inventory table and Assets table. For instance, data in the Inventory table looks something like this:
```
InventoryID | ItemID
----------------------
100 | 5
101 | 5
102 | 5
103 | 5
104 | 9
105 | 5
```
What we now want to do is to merge the records with the same ItemID and put the Quantity in the field:
```
InventoryID | ItemID | Quantity
---------------------------------
100 | 5 | 5
104 | 9 | 1
```
I have thousands of records that need merging and would like to know a faster way to do this than the current one, which is finding the records, getting the count, deleting all but the latest record, and updating the quantity field with the count (all being done manually in SSMS, not through any scripts).
Any help/suggestions would be appreciated.
|
Make a temp table and insert:
```
SELECT MIN(InventoryID) AS InventoryID, ItemID, COUNT(*) AS Quantity
INTO #TEMP
FROM Inventory
GROUP BY ItemID
```
Then update the main table (create a quantity column first if you haven't):
```
UPDATE I
SET I.Quantity = T.Quantity
FROM Inventory I
INNER JOIN #TEMP T ON I.InventoryID = T.InventoryID AND I.ItemID = T.ItemID
```
Then delete the extra record from Inventory
```
DELETE
FROM INVENTORY
WHERE InventoryID not in(
SELECT InventoryID
FROM #TEMP)
```
|
Assuming you have a quantity field on your inventory table, you can update that field then delete the now-unnecessary records.
```
UPDATE Inventory
SET Inventory.Quantity = Computed.QCount
FROM Inventory
INNER JOIN
(
SELECT ItemId, COUNT(*) as QCount
FROM Inventory
GROUP BY ItemId
) as Computed
on Inventory.ItemId = Computed.ItemId
--Now Delete Duplicates
DELETE Inventory
FROM Inventory
LEFT OUTER JOIN (
SELECT MIN(InventoryId) as RowId, ItemId
FROM Inventory
GROUP BY ItemId
) as KeepRows ON
Inventory.InventoryId = KeepRows.RowId
WHERE
KeepRows.ItemId IS NULL
```
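The whole merge (count per item onto the surviving lowest-id row, then delete the rest) can be condensed into two statements and checked against the question's sample rows. A sketch with Python's `sqlite3`; the correlated-subquery style differs from the T-SQL answers above but the effect is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Inventory (InventoryID INTEGER, ItemID INTEGER, Quantity INTEGER);
INSERT INTO Inventory (InventoryID, ItemID) VALUES
    (100,5),(101,5),(102,5),(103,5),(104,9),(105,5);
""")
conn.executescript("""
-- Put the per-item count on the surviving (lowest-id) row...
UPDATE Inventory
SET Quantity = (SELECT COUNT(*) FROM Inventory i2
                WHERE i2.ItemID = Inventory.ItemID)
WHERE InventoryID = (SELECT MIN(i3.InventoryID) FROM Inventory i3
                     WHERE i3.ItemID = Inventory.ItemID);
-- ...then drop the duplicates, which never got a Quantity.
DELETE FROM Inventory WHERE Quantity IS NULL;
""")
rows = conn.execute("SELECT * FROM Inventory ORDER BY InventoryID").fetchall()
print(rows)  # [(100, 5, 5), (104, 9, 1)]
```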
|
Delete records and update quantity
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have `photo` table:
```
create table photo(
id integer,
...
user_id integer,
created_at date
);
```
I'd like to achieve the same result as:
```
select
json_agg(photo),
created_at,
id_user
from photo
group by created_at, id_user
order by created_at desc, id_user
limit 5;
```
but avoiding a full table scan on `photo`.
Is it possible? I was thinking of a recursive CTE but I couldn't manage to construct one.
|
Assuming you have an index on `photo(id_user, created_at)`, then you can select the five rows that you want using a subquery. Then use a join or correlated subquery to get the rest of the information:
```
select cu.created_at, cu.id_user,
(select json_agg(p.photo)
from photo p
where cu.created_at = p.created_at and cu.id_user = p.id_user
)
from (select distinct created_at, id_user
from photo p
order by created_at desc, id_user
limit 5
) cu
order by cu.created_at desc, cu.id_user ;
```
|
Not recursive, but you can try a single CTE to see if it gets the top 5 without a full scan:
```
WITH cte as (
SELECT DISTINCT created_at, id_user
FROM photo
ORDER BY created_at DESC, id_user
LIMIT 5
)
SELECT p.created_at, p.id_user, json_agg(p.photo)
FROM photo p
JOIN cte c
ON p.created_at = c.created_at
AND p.id_user = c.id_user
GROUP BY p.created_at, p.id_user
ORDER BY p.created_at DESC, p.id_user
```
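The shape of this "pick the top N (date, user) pairs first, then aggregate only their rows" query can be checked on a toy dataset. A sketch with Python's `sqlite3`, using `group_concat` in place of Postgres's `json_agg` and a limit of 3 (sample photos invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE photo (id INTEGER, id_user INTEGER, created_at TEXT);
INSERT INTO photo VALUES
    (1,1,'2015-11-01'),(2,1,'2015-11-01'),(3,2,'2015-11-01'),
    (4,1,'2015-11-02'),(5,2,'2015-11-02'),(6,2,'2015-11-03');
""")
rows = conn.execute("""
    WITH cte AS (
        SELECT DISTINCT created_at, id_user
        FROM photo
        ORDER BY created_at DESC, id_user
        LIMIT 3
    )
    SELECT p.created_at, p.id_user, group_concat(p.id) AS ids
    FROM photo p JOIN cte c
      ON p.created_at = c.created_at AND p.id_user = c.id_user
    GROUP BY p.created_at, p.id_user
    ORDER BY p.created_at DESC, p.id_user
""").fetchall()
print(rows)
# [('2015-11-03', 2, '6'), ('2015-11-02', 1, '4'), ('2015-11-02', 2, '5')]
```

Whether the database actually avoids a full scan still depends on an index covering `(created_at, id_user)`, as both answers note.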
|
Aggregate function with limit and without full table scan
|
[
"",
"sql",
"postgresql",
""
] |
I'm new to SQL and I'm trying to create a new table, however I get an error when I run my script in the SQL command line; the errors I'm getting are either ORA-00907: missing right parenthesis or ORA-00942: table or view does not exist.
Oh and yes I know I have probably written some terrible script but as mentioned earlier I'm learning so any meaningful criticism would be appreciated along with some help :).
```
CREATE TABLE Branch
(
Branch_ID varchar(5),
Branch_Name varchar(255),
Branch_Address varchar(255),
Branch_Town varchar(255),
Branch_Postcode varchar(10),
Branch_Phone varchar(50),
Branch_Fax varchar(50),
Branch_Email varchar(50),
Property_ID varchar(5),
Contract_ID varchar(5),
Staff_ID varchar(5),
PRIMARY KEY (Branch_ID),
FOREIGN KEY (Property_ID) REFERENCES Property(Property_Id),
FOREIGN KEY (Contract_ID) REFERENCES Contract(Contract_Id),
FOREIGN KEY (Staff_ID) REFERENCES Staff(Staff_Id)
);
CREATE TABLE Staff
(
Staff_ID varchar(5),
Staff_Forename varchar(255),
Staff_Surname varchar(255),
Staff_Address varchar(255),
Staff_Town varchar(255)
Staff_Postcode varchar(10),
Staff_Phone varchar(50),
Staff_DOB varchar(50),
Staff_NIN varchar(10),
Staff_Salary varchar(50),
Staff_Date_Joined varchar(100),
Staff_Viewing_Arranged varchar(100),
Branch_ID varchar(5),
Sales_ID varchar(5),
PRIMARY KEY (Staff_ID),
FOREIGN KEY (Branch_ID) REFERENCES Branch(Branch_ID),
FOREIGN KEY (Sales_ID) REFERENCES Sales(Sales_Id)
);
CREATE TABLE Sales
(
Sales_ID varchar(5),
Property_Address varchar(255),
Property_Town varchar(255)
Property_Postcode varchar(10),
Property_Type varchar(255),
Num_Rooms varchar(50),
Date_of_Sale varchar(10),
Sales_Bonus varchar(100),
Branch_ID varchar(5),
Property_ID varchar(5),
Staff_ID varchar(5)
Seller_ID varchar(5),
PRIMARY KEY (Sales_ID),
FOREIGN KEY (Branch_ID) REFERENCES Branch(Branch_ID),
FOREIGN KEY (Property_ID) REFERENCES Property(Property_Id),
FOREIGN KEY (Staff_ID) REFERENCES Staff(Staff_Id),
FOREIGN KEY (Seller_ID) REFERENCES Seller(Seller_Id)
);
CREATE TABLE Contract
(
Contract_ID varchar(5),
Contract_Signed_Date varchar(50),
Property_ID varchar(5),
Buyer_ID varchar(5),
Seller_ID varchar(5),
Branch_ID varchar(5),
PRIMARY KEY (Contract_ID),
FOREIGN KEY (Branch_ID) REFERENCES Branch(Branch_ID),
FOREIGN KEY (Property_ID) REFERENCES Property(Property_Id),
FOREIGN KEY (Seller_ID) REFERENCES Seller(Seller_Id),
FOREIGN KEY (Buyer_ID) REFERENCES Buyer(Buyer_Id)
);
CREATE TABLE Buyer
(
Buyer_ID varchar(5),
Viewing_Data varchar(255),
Maximum_Budject varchar(255),
Purchase_Price varchar (50),
Buyer_Forename varchar(255),
Buyer_Surname varchar(255),
Buyer_Address varchar(255),
Buyer_Town varchar(255),
Buyer_Postcode varchar(10),
Property_ID varchar(5),
Contract_ID varchar(5),
Survey_ID varchar(5),
PRIMARY KEY (Buyer_ID),
FOREIGN KEY (Property_ID) REFERENCES Property(Property_ID),
FOREIGN KEY (Contract_ID) REFERENCES Contract(Contract_Id),
FOREIGN KEY (Survey_ID) REFERENCES Survey(Survey_Id)
);
CREATE TABLE Seller
(
Seller_ID varchar(5),
Seller_Forename varchar(255),
Seller_Surname varchar(255),
Seller_Address varchar(255),
Seller_Town varchar(255),
Seller_Postcode varchar(10),
Seller_Property_ID varchar(5),
No_of_Bed varchar(5),
Contract_ID varchar(5),
Property_ID varchar(5),
Sales_ID varchar(5),
PRIMARY KEY (Seller_ID),
FOREIGN KEY (Contract_ID) REFERENCES Contract(Contract_ID),
FOREIGN KEY (Property_ID) REFERENCES Property(Property_Id),
FOREIGN KEY (Sales_ID) REFERENCES Sales(Sales_Id)
);
CREATE TABLE Property
(
Property_ID varchar(5),
Property_Address varchar(255),
Property_Town varchar(255),
Property_Postcode varchar(10),
Asking_Price varchar(20),
Date_Registered varchar(50),
Property_Fixtures varchar(255),
Size_of_Rooms varchar(100),
Buyer_ID varchar(5),
Staff_ID varchar(5),
Contract_ID varchar(5),
Seller_ID varchar(5),
PRIMARY KEY (Property_ID),
FOREIGN KEY (Buyer_ID) REFERENCES Buyer(Buyer_ID),
FOREIGN KEY (Seller_ID) REFERENCES Seller(Seller_ID),
FOREIGN KEY (Staff_ID) REFERENCES Staff(Staff_Id),
FOREIGN KEY (Contract_ID) REFERENCES Contract(Contract_Id)
);
CREATE TABLE Survey
(
Survey_ID varchar(5),
No_of_Survey varchar(10),
Survey_Type varchar(255),
Organised_By varchar(255),
Property_ID varchar(5),
Staff_ID varchar(5),
Buyer_ID varchar(5),
PRIMARY KEY (Survey_ID),
FOREIGN KEY (Property_ID) REFERENCES Property(Property_ID),
FOREIGN KEY (Staff_ID) REFERENCES Staff(Staff_Id),
FOREIGN KEY (Buyer_ID) REFERENCES Buyer(Buyer_Id)
);
CREATE TABLE Advert
(
Survey_ID Advert_ID varchar(5),
No_of_Adverts varchar(10),
Advert_Website varchar(255),
Advert_Newspaper varchar(255),
Property_ID varchar(5),
PRIMARY KEY (Advert_ID),
FOREIGN KEY (Property_ID) REFERENCES Property(Property_ID)
);
```
|
The `STAFF` table is missing a `,` at the end of this line:
```
Staff_Town varchar(255)
```
The `Sales` table too:
```
Property_Town varchar(255)
Staff_ID varchar(5)
```
Also, you **can't** define constraints that reference a table which has not been created yet. I found the error by removing those constraints.
|
You can't define the foreign key references until the table being referred to is created. So, if the foreign key references are circular, as in A => B => C => A, then you will first have to create the tables and then use ALTER TABLE to define the foreign keys. Otherwise, create the tables in order: first create a table, then create the tables that reference it.
|
Error when creating table in Oracle Database 11g SQL
|
[
"",
"sql",
"oracle",
"ddl",
""
] |
To explain my question easier I will just paste my whole code:
```
drop table tst;
create table tst
(t1 number(2));
set serveroutput on
DECLARE
TYPE vltp IS TABLE OF NUMBER(3);
vl vltp := vltp(2,12,33,344,55,66,7,555,4);
NUMBER_TO_BIG EXCEPTION;
PRAGMA EXCEPTION_INIT(NUMBER_TO_BIG, -01438);
BEGIN
FOR i IN vl.FIRST .. vl.LAST LOOP
INSERT INTO tst VALUES (vl(i));
SAVEPOINT ONE;
END LOOP;
EXCEPTION
WHEN NUMBER_TO_BIG THEN
ROLLBACK TO SAVEPOINT ONE;
END;
/
select * from tst;
```
Basically, when I am inserting 344 into the table I get an exception (NUMBER\_TO\_BIG) and I want it to roll back to the loop but skip that number.
The expected output:
```
tst
-----
2
12
33
55
66
7
4
```
Actual output:
```
no rows selected
```
It is rolling back all the changes, not just that one number.
Any ideas?
|
Hello, I have illustrated a small snippet to replicate your scenario. Let me know if this helps.
```
--Check for an existing table with the same name and drop it if it already exists
DROP TABLE tst;
--Create table
CREATE TABLE tst
(t1 NUMBER(2)
);
--Anonymous block to perform the task
SET serveroutput ON
DECLARE
  TYPE vltp IS TABLE OF NUMBER(3);
  vl vltp := vltp(2,12,33,344,55,66,7,555,4);
  NUMBER_TO_BIG EXCEPTION;
  PRAGMA EXCEPTION_INIT(NUMBER_TO_BIG, -01438);
BEGIN
  FOR i IN vl.FIRST .. vl.LAST
  LOOP
    BEGIN
      INSERT INTO tst VALUES (vl(i));
    EXCEPTION
      WHEN NUMBER_TO_BIG THEN
        dbms_output.put_line('skipping the value');
    END;
  END LOOP;
  COMMIT;
END;
/
```
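The key idea in both answers is the same: wrap the insert in its own inner block so the exception is caught per row and the loop continues, instead of aborting the whole batch. The pattern carries over to any host language; a sketch with Python's `sqlite3`, using a CHECK constraint to stand in for the NUMBER(2) overflow:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# NUMBER(2) only allows two-digit values; a CHECK constraint mimics that here.
conn.execute("CREATE TABLE tst (t1 INTEGER CHECK (t1 BETWEEN -99 AND 99))")

for v in (2, 12, 33, 344, 55, 66, 7, 555, 4):
    try:
        conn.execute("INSERT INTO tst VALUES (?)", (v,))
    except sqlite3.IntegrityError:
        print("skipping", v)  # plays the role of the inner EXCEPTION handler
conn.commit()

surviving = [r[0] for r in conn.execute("SELECT t1 FROM tst")]
print(surviving)  # [2, 12, 33, 55, 66, 7, 4]
```

This produces exactly the expected output from the question: 344 and 555 are skipped, everything else is kept.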
|
you should try this...
```
drop table tst;
--create table
create table tst
(t1 number(2));
--start of code
DECLARE
TYPE vltp IS TABLE OF NUMBER(3);
vl vltp := vltp(2, 12, 33, 344, 55, 66, 7, 555, 4);
NUMBER_TO_BIG EXCEPTION;
PRAGMA EXCEPTION_INIT(NUMBER_TO_BIG, -01438);
BEGIN
FOR i IN vl.FIRST .. vl.LAST LOOP
begin
INSERT INTO tst VALUES (vl(i));
exception
when NUMBER_TO_BIG then
--log exeption into log table here
dbms_output.put_line(sqlerrm);
end;
END LOOP;
commit;
exception
when others then
--log exeption into log table here
dbms_output.put_line(sqlerrm);
END;
```
|
How do you rollback to the next iteration in a loop?
|
[
"",
"sql",
"database",
"oracle",
"exception",
"plsql",
""
] |
I'm trying this code but I'm getting this error message: *An expression of non-boolean type specified in a context where a condition is expected, near ','.*
```
ALTER TABLE customers
ADD active int
DEFAULT (1)
CONSTRAINT chk_active
CHECK (0,1);
```
Thanks in advance!
|
When defining a check constraint you have to actually refer to the column name, i.e. `CHECK (Active IN (0,1))` instead of just `CHECK (0, 1)`. So your syntax would be
```
ALTER TABLE Customers ADD
active INT DEFAULT (1)
CONSTRAINT chk_active CHECK (Active IN (0,1));
```
It is also probably a good idea to name your default constraint, and include the table name in the constraint name so that you don't conflict with similar constraints on other tables:
```
ALTER TABLE Customers ADD
Active INT CONSTRAINT DF_Customers__Active DEFAULT (1)
CONSTRAINT CHK_Customers__Active CHECK (Active IN (0,1));
```
However, it would seem more appropriate to have a not null bit column so the check constraint is not required:
```
ALTER TABLE Customers ADD Active BIT NOT NULL CONSTRAINT DF_Customers__Active DEFAULT(1);
```
|
try this
```
ALTER TABLE Customers ADD
active INT DEFAULT (1)
CONSTRAINT chk_active CHECK (Active IN (0,1));
```
|
How to create column with default and required values?
|
[
"",
"sql",
"sql-server",
""
] |
I know this might be a simple one, but for the life of me I can't get it to work.
I have the below query:
```
SELECT
[1_Data_Set].EAN_CODE,
[1_Data_Set].CPFUSER_CONSUMER_UNIT_DESCRIPTION,
[1_Data_Set].CASE_WIDTH,
[1_Data_Set].CASE_HEIGHT,
[1_Data_Set].CASE_LENGTH,
[1_Data_Set].CPFUSER_CONSUMER_UNIT_INTRODUCED_DATE,
[1_Data_Set].WAREHOUSED_IND INTO DEFAULTS
FROM 1_Data_Set
WHERE
([1_Data_Set].CASE_WIDTH =[1_Data_Set].CASE_HEIGHT
AND
[1_Data_Set].CASE_HEIGHT = [1_Data_Set].CASE_LENGTH
AND
[1_Data_Set].CASE_LENGTH = [1_Data_Set].CASE_WIDTH);
```
I want to return the values where the width, length and height all match (there may be a more logical way than the one I have entered above)
AND
only whole numbers. I don't want to use the INT() function just to display whole numbers; I want to return only whole numbers.
Currently being returned example:
```
EAN Height Width Length
58554 10 10 10
85965 11.1 11.1 11.1
```
Required:
```
EAN Height Width Length
58554 10 10 10
```
|
Perhaps this does what you want:
```
select *
from [1_Data_Set]
where CASE_HEIGHT = round(CASE_HEIGHT, 0) and CASE_WIDTH = round(CASE_WIDTH, 0) and
      CASE_LENGTH = round(CASE_LENGTH, 0) and
      CASE_HEIGHT = CASE_WIDTH and CASE_WIDTH = CASE_LENGTH;
```
`round()` is safer than `int` because of the way that floating point numbers are stored.
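The round-trip test (`x = round(x, 0)`) behaves the same way outside SQL. A small sketch in Python over the question's two example rows, showing both filters combined:

```python
def is_whole(x):
    # Mirrors "height = round(height, 0)": safer than truncation for floats.
    return x == round(x, 0)

rows = [
    ("58554", 10.0, 10.0, 10.0),   # EAN, Height, Width, Length
    ("85965", 11.1, 11.1, 11.1),
]
keep = [r for r in rows if r[1] == r[2] == r[3] and is_whole(r[1])]
print(keep)  # [('58554', 10.0, 10.0, 10.0)]
```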
|
Check that the base number is equal to the INT of the base number
```
SELECT *
FROM 1_Data_Set
WHERE CASE_WIDTH = INT(CASE_WIDTH) AND
CASE_WIDTH = CASE_HEIGHT AND
CASE_HEIGHT = CASE_LENGTH
```
Before I posted this I realised it is basically the same as Jason Boyd's answer, i.e. just check that the number is an integer first.
|
Returning only whole numbers in MS Access query
|
[
"",
"sql",
"ms-access",
""
] |
I have two columns in a SQL table as follows. I need to compare these two columns for mismatches, but due to the extra decimals I am getting false results. When I try to convert the first column it gives the error
> "Error converting data type varchar to numeric."
How can I solve this issue? The length of the first column varies.
```
Column01(varchar) Column02(Decimal)
0.01 0.010000
0.255 0.255000
```
|
You have data in `Column01` that cannot be casted to `DECIMAL`.
With **`SQL Server 2012+`** I would use **[`TRY_PARSE`](https://msdn.microsoft.com/en-us/library/hh213126.aspx)**:
```
SELECT *
FROM your_table
WHERE Column02 = TRY_PARSE(Column01 AS DECIMAL(38,18));
```
`LiveDemo`
When value from column cannot be casted safely you get `NULL`.
For **`SQL Server 2008`** you can use:
```
SELECT *
FROM #tab
WHERE (CASE
WHEN ISNUMERIC(Column01) = 1 THEN CAST(Column01 AS DECIMAL(38,18))
ELSE NULL
END) = Column02;
```
**EDIT:**
If you need it at column level use:
```
SELECT Column01, Column02,
CASE WHEN Column02 = TRY_PARSE(Column01 AS DECIMAL(38,18))
OR (Column02 IS NULL AND Column01 IS NULL)
THEN 'true'
ELSE 'false'
END AS [IsEqual]
FROM #tab;
```
`LiveDemo2`
|
You can do this using a self join and a conversion function:
```
SELECT x.Column01, y.Column02
FROM table1 x, table1 y
WHERE x.Column02 = try_parse(y.Column01 as decimal(38,18))
```
Since I cannot comment, I'd like to thank lad2025 for showing a live demo and introducing me to data.stackexchange for composing queries.
|
SQL Server: Compare two columns
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am working with a table that uses year ranges for some of the data, and I need to be able to
select records by a year that falls within those ranges.
```
| id | Make | Model | Year |
|----|-----------|------------|-------------|
| 1 | Chevrolet | Camaro | 2008 |
| 2 | Chevrolet | Camaro | 2009 - 2014 |
| 3 | Dodge | Avenger | 2010 - 2015 |
| 4 | Dodge | Challenger | 2008 - 2016 |
| 5 | Ford | Escape | 2013 |
| 6 | Ford | Mustang | 2004 - 2012 |
| 7 | Ford | Mustang | 2015 |
```
For example, I want to be able to ***Select all vehicles with a year of 2012***.
This should return: **2**, **3**, **4** and **6** given the example table above.
|
Use `LEFT` and `RIGHT` to determine the ranges.
```
SELECT *
FROM yourtable
WHERE (LEFT(Year,4) <= '2012' AND RIGHT(Year,4) >= '2012')
```
OUTPUT:
```
id Make Model Year
2 Chevrolet Camaro 2009 - 2014
3 Dodge Avenger 2010 - 2015
4 Dodge Challenger 2008 - 2016
6 Ford Mustang 2004 - 2012
```
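Taking the first and last four characters works whether the field holds a single year or a range, because a single year is its own start and end. A sketch of the same idea with Python's `sqlite3`, which has no `LEFT`/`RIGHT` so `substr()` stands in (a negative start counts from the end of the string):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vehicles (id INTEGER, Make TEXT, Model TEXT, Year TEXT);
INSERT INTO vehicles VALUES
    (1,'Chevrolet','Camaro','2008'),
    (2,'Chevrolet','Camaro','2009 - 2014'),
    (3,'Dodge','Avenger','2010 - 2015'),
    (4,'Dodge','Challenger','2008 - 2016'),
    (5,'Ford','Escape','2013'),
    (6,'Ford','Mustang','2004 - 2012'),
    (7,'Ford','Mustang','2015');
""")
ids = [r[0] for r in conn.execute("""
    SELECT id FROM vehicles
    WHERE substr(Year, 1, 4) <= '2012'
      AND substr(Year, -4)  >= '2012'
    ORDER BY id
""")]
print(ids)  # [2, 3, 4, 6]
```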
|
```
SELECT t.*
FROM t WHERE LEFT(Year,4) <= '2012' AND RIGHT(Year,4) >= '2012'
```
|
How to execute MySQL query with range of years in one field
|
[
"",
"mysql",
"sql",
""
] |
I have a date field in my MySQL table and I want to get all rows before the next 1st Of December. So for example the rows I should get back if I run the query today (Nov 2015) would be any date before 1st Dec 2015. If I run the query after 1st Dec 2015 but before 1st December 2016, it should return all rows with a date of before 1st Dec 2016.
Sorry for lack of code, I have literally no idea where to start with this one!
|
I create the limit date based on the current month (using [`now()`](http://www.w3schools.com/sql/func_now.asp) to get the current date). If the current month is 12, then the limit is 1 December of next year (`year(now())+1`), otherwise of the current year.
```
select YourDate
from YourTable
where YourDate <
case when month(now())=12 then
           date(concat(convert(year(now())+1, char), '-12-01'))
else
date( concat(convert(year(now()), char), '-12-01'))
end;
```
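The month-12 special case is the whole trick; once you have the cutoff date, the query is just `YourDate < cutoff`. The cutoff computation alone can be sketched in a few lines of Python (the dates below are arbitrary test inputs):

```python
from datetime import date

def next_first_of_december(today):
    # Before December: this year's 1 Dec. In December: next year's.
    year = today.year + 1 if today.month == 12 else today.year
    return date(year, 12, 1)

print(next_first_of_december(date(2015, 11, 15)))  # 2015-12-01
print(next_first_of_december(date(2015, 12, 5)))   # 2016-12-01
```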
|
I think you should use something like this:
```
SELECT *
FROM Table T
WHERE Some_Date < CASE
                    WHEN MONTH(CURDATE()) < 12 THEN STR_TO_DATE(CONCAT('01,12,', YEAR(CURDATE())), '%d,%m,%Y')
                    ELSE STR_TO_DATE(CONCAT('01,12,', YEAR(CURDATE()) + 1), '%d,%m,%Y')
                  END
```
Disclaimer: I don't have MySQL, so I couldn't test it.
|
How Can I Get The Next 1st December Date?
|
[
"",
"mysql",
"sql",
""
] |
I am trying to compare two columns in two different tables in a database. One column is of type VARCHAR2(4 CHAR) and the other is of type DECIMAL. Although I am pulling these columns into Java as strings to do the comparison, I am running into this issue:
Elements in column (decimal) are: 123, 456, 789
Elements of compared column (varchar) are: 0123, 0456, 0789
So I want to know if there is a way that I can remove this prefixed 0 as I am pulling it from the database (like some sort of TRIM0()) such that elements from both columns would be the same?
I am trying to avoid handling this column in the Java code itself, to improve performance.
|
> (like some sort of TRIM0()) such that elements from both columns would be the same?
**Your requirement is about data type conversion, not string manipulation.**
Elements might look similar, but it is the **data type** that matters. `'123'` and `123` are not the same. They look similar, but they are different data types.
Do not use **LTRIM** as it would return you a **STRING**, you will have to convert it to **NUMBER** again to avoid **implicit data type conversion**. It is pointless to return a string, and then use TO\_NUMBER again to convert it to number.
The correct and efficient way is to use **TO\_NUMBER**. It would convert it into NUMBER in just one step.
For example,
```
SQL> SELECT to_number('00123') num FROM dual;
NUM
----------
123
SQL> SELECT * FROM dual WHERE to_number('00123') = 123;
D
-
X
```
On a side note, if you are sure that always need a **NUMBER** from the column to process, then you must fix it at **design level**. Storing *numbers* as *character type* is a **design flaw**, unless there is a specific *business need*.
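The same type-conversion point holds in any language: once the string is converted to a number, the leading zeros are gone as a side effect, with no trimming step. A trivial sketch in Python over the question's example values:

```python
raw_varchar = ["0123", "0456", "0789"]   # VARCHAR2 column values
decimals = [123, 456, 789]               # DECIMAL column values

# Converting the string to a number makes the leading zeros disappear;
# the comparison is then number to number, as the answer recommends.
converted = [int(s) for s in raw_varchar]
print(converted)              # [123, 456, 789]
print(converted == decimals)  # True
```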
|
You can use LTRIM to remove the leading zeroes from the varchar number:
```
select ltrim( '0001234', '0' ) from dual;
```
But I would suggest revisiting your design and comparing number to number.
|
How to compare a NUMBER with VARCHAR2 type number having leading zeroes?
|
[
"",
"sql",
"oracle",
"numbers",
"type-conversion",
"varchar",
""
] |
I have a DataFrame generated as follow:
```
df.groupBy($"Hour", $"Category")
.agg(sum($"value") as "TotalValue")
.sort($"Hour".asc, $"TotalValue".desc))
```
The results look like:
```
+----+--------+----------+
|Hour|Category|TotalValue|
+----+--------+----------+
| 0| cat26| 30.9|
| 0| cat13| 22.1|
| 0| cat95| 19.6|
| 0| cat105| 1.3|
| 1| cat67| 28.5|
| 1| cat4| 26.8|
| 1| cat13| 12.6|
| 1| cat23| 5.3|
| 2| cat56| 39.6|
| 2| cat40| 29.7|
| 2| cat187| 27.9|
| 2| cat68| 9.8|
| 3| cat8| 35.6|
| ...| ....| ....|
+----+--------+----------+
```
As you can see, the DataFrame is ordered by `Hour` in an increasing order, then by `TotalValue` in a descending order.
I would like to select the top row of each group, i.e.
* from the group of Hour==0 select (0,cat26,30.9)
* from the group of Hour==1 select (1,cat67,28.5)
* from the group of Hour==2 select (2,cat56,39.6)
* and so on
So the desired output would be:
```
+----+--------+----------+
|Hour|Category|TotalValue|
+----+--------+----------+
| 0| cat26| 30.9|
| 1| cat67| 28.5|
| 2| cat56| 39.6|
| 3| cat8| 35.6|
| ...| ...| ...|
+----+--------+----------+
```
It might be handy to be able to select the top N rows of each group as well.
Any help is highly appreciated.
|
**Window functions**:
Something like this should do the trick:
```
import org.apache.spark.sql.functions.{row_number, max, broadcast}
import org.apache.spark.sql.expressions.Window
val df = sc.parallelize(Seq(
(0,"cat26",30.9), (0,"cat13",22.1), (0,"cat95",19.6), (0,"cat105",1.3),
(1,"cat67",28.5), (1,"cat4",26.8), (1,"cat13",12.6), (1,"cat23",5.3),
(2,"cat56",39.6), (2,"cat40",29.7), (2,"cat187",27.9), (2,"cat68",9.8),
(3,"cat8",35.6))).toDF("Hour", "Category", "TotalValue")
val w = Window.partitionBy($"hour").orderBy($"TotalValue".desc)
val dfTop = df.withColumn("rn", row_number.over(w)).where($"rn" === 1).drop("rn")
dfTop.show
// +----+--------+----------+
// |Hour|Category|TotalValue|
// +----+--------+----------+
// | 0| cat26| 30.9|
// | 1| cat67| 28.5|
// | 2| cat56| 39.6|
// | 3| cat8| 35.6|
// +----+--------+----------+
```
This method will be inefficient in case of significant data skew. This problem is tracked by [SPARK-34775](https://issues.apache.org/jira/browse/SPARK-34775) and might be resolved in the future ([SPARK-37099](https://issues.apache.org/jira/browse/SPARK-37099)).
**Plain SQL aggregation followed by `join`**:
Alternatively you can join with aggregated data frame:
```
val dfMax = df.groupBy($"hour".as("max_hour")).agg(max($"TotalValue").as("max_value"))
val dfTopByJoin = df.join(broadcast(dfMax),
($"hour" === $"max_hour") && ($"TotalValue" === $"max_value"))
.drop("max_hour")
.drop("max_value")
dfTopByJoin.show
// +----+--------+----------+
// |Hour|Category|TotalValue|
// +----+--------+----------+
// | 0| cat26| 30.9|
// | 1| cat67| 28.5|
// | 2| cat56| 39.6|
// | 3| cat8| 35.6|
// +----+--------+----------+
```
It will keep duplicate values (if there is more than one category per hour with the same total value). You can remove these as follows:
```
dfTopByJoin
.groupBy($"hour")
.agg(
first("category").alias("category"),
first("TotalValue").alias("TotalValue"))
```
**Using ordering over `structs`**:
Neat, although not very well tested, trick which doesn't require joins or window functions:
```
val dfTop = df.select($"Hour", struct($"TotalValue", $"Category").alias("vs"))
.groupBy($"hour")
.agg(max("vs").alias("vs"))
.select($"Hour", $"vs.Category", $"vs.TotalValue")
dfTop.show
// +----+--------+----------+
// |Hour|Category|TotalValue|
// +----+--------+----------+
// | 0| cat26| 30.9|
// | 1| cat67| 28.5|
// | 2| cat56| 39.6|
// | 3| cat8| 35.6|
// +----+--------+----------+
```
**With DataSet API** (Spark 1.6+, 2.0+):
*Spark 1.6*:
```
case class Record(Hour: Integer, Category: String, TotalValue: Double)
df.as[Record]
.groupBy($"hour")
.reduce((x, y) => if (x.TotalValue > y.TotalValue) x else y)
.show
// +---+--------------+
// | _1| _2|
// +---+--------------+
// |[0]|[0,cat26,30.9]|
// |[1]|[1,cat67,28.5]|
// |[2]|[2,cat56,39.6]|
// |[3]| [3,cat8,35.6]|
// +---+--------------+
```
*Spark 2.0 or later*:
```
df.as[Record]
.groupByKey(_.Hour)
.reduceGroups((x, y) => if (x.TotalValue > y.TotalValue) x else y)
```
The last two methods can leverage map-side combine and don't require a full shuffle, so most of the time they should exhibit better performance compared to window functions and joins. These can also be used with Structured Streaming in `complete` output mode.
**Don't use**:
```
df.orderBy(...).groupBy(...).agg(first(...), ...)
```
It may seem to work (especially in the `local` mode) but it is unreliable (see [SPARK-16207](https://issues.apache.org/jira/browse/SPARK-16207), credits to [Tzach Zohar](https://stackoverflow.com/users/5344058/tzach-zohar) for [linking relevant JIRA issue](https://stackoverflow.com/questions/33878370/how-to-select-the-first-row-of-each-group#comment78445228_45602100), and [SPARK-30335](https://issues.apache.org/jira/browse/SPARK-30335)).
The same note applies to
```
df.orderBy(...).dropDuplicates(...)
```
which internally uses equivalent execution plan.
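The aggregate-then-join variant translates to plain SQL on almost any engine; a minimal sketch of the same top-row-per-group logic using SQLite through Python (table name and data invented for the demo):

```python
import sqlite3

rows = [(0, "cat26", 30.9), (0, "cat13", 22.1), (1, "cat67", 28.5),
        (1, "cat4", 26.8), (2, "cat56", 39.6), (2, "cat40", 29.7)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Hour INT, Category TEXT, TotalValue REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)

# Join each row against its group's maximum and keep only the matches
top = conn.execute("""
    SELECT t.Hour, t.Category, t.TotalValue
    FROM t
    JOIN (SELECT Hour, MAX(TotalValue) AS mv FROM t GROUP BY Hour) m
      ON t.Hour = m.Hour AND t.TotalValue = m.mv
    ORDER BY t.Hour
""").fetchall()
conn.close()
```

As in the Spark version, ties on the maximum would produce duplicate rows per group and need a second deduplication pass.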
|
For Spark 2.0.2 with grouping by multiple columns:
```
import org.apache.spark.sql.functions.row_number
import org.apache.spark.sql.expressions.Window
val w = Window.partitionBy($"col1", $"col2", $"col3").orderBy($"timestamp".desc)
val refined_df = df.withColumn("rn", row_number.over(w)).where($"rn" === 1).drop("rn")
```
|
How to select the first row of each group?
|
[
"",
"sql",
"scala",
"apache-spark",
"dataframe",
"apache-spark-sql",
""
] |
Hi guys, I have a problem selecting values from a set of values in SQL. My code looks like this:
```
Select distinct subreddit from reddit
where author = (select distinct author from reddit where link_id = 't3_j56j2');
```
The code in the parentheses returns more than one author, but when I run this query it only uses the value of one author. What should I do to match all the authors the subquery returns?
|
I guess you simply want `IN` instead:
```
Select distinct subreddit from reddit
where author IN (select author from reddit where link_id = 't3_j56j2');
```
**Note**: No need for `distinct` in the sub-query.
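A small runnable sketch of the `IN` behaviour, using SQLite through Python with invented sample rows (note that some engines silently pick one row for `=` against a multi-row subquery, while others raise an error):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reddit (author TEXT, subreddit TEXT, link_id TEXT)")
conn.executemany("INSERT INTO reddit VALUES (?, ?, ?)", [
    ("alice", "pics",  "t3_j56j2"),
    ("bob",   "funny", "t3_j56j2"),
    ("alice", "news",  "t3_other"),
    ("carol", "news",  "t3_other"),
])

# IN matches any author returned by the sub-query
subs = sorted(r[0] for r in conn.execute("""
    SELECT DISTINCT subreddit FROM reddit
    WHERE author IN (SELECT author FROM reddit WHERE link_id = 't3_j56j2')
"""))
conn.close()
```

Both of alice's and bob's subreddits come back, not just one author's.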
|
Use
`Select distinct T.subreddit from reddit T
where EXISTS
(select author from reddit where link_id = 't3_j56j2'and author = T.author);`
|
SQL Select x from a set of value
|
[
"",
"sql",
""
] |
I have a column that contains data that I want to escape in order to use it as JSON output, to be more precise am trying to escape the same characters listed here but using Oracle 11g: [Special Characters and JSON Escaping Rules](https://msdn.microsoft.com/en-us/library/dn921889.aspx)
I think it can be solved using REGEXP\_REPLACE:
```
SELECT REGEXP_REPLACE(my_column, '("|\\|/)|(' || CHR(9) || ')', '\\\1') FROM my_table;
```
But I am lost about replacing the other characters (tab, new line, backspace, etc), in the previous example I know that \1 will match and replace the first group but I am not sure how to capture the tab in the second group and then replace it with \t. Somebody could give me a hint about how to do the replacement?
I know I can do this:
```
SELECT REGEXP_REPLACE( REGEXP_REPLACE(my_column, '("|\\|/)', '\\\1'), '(' || CHR(9) || ')', '\t')
FROM my_table;
```
But I would have to nest like 5 calls to REGEXP\_REPLACE, and I suspect I should be able to do it in just one or two calls.
I am aware about other packages or libraries for JSON but I think this case is simple enough that it can be solved with the functions that Oracle offers out-of-the-box.
Thank you.
|
Here's a start. Replacing all the regular characters is easy enough, it's the control characters that will be tricky. This method uses a group consisting of a character class that contains the characters you want to add the backslash in front of. Note that characters inside of the class do not need to be escaped. The argument to REGEXP\_REPLACE of 1 means start at the first position and the 0 means to replace all occurrences found in the source string.
```
SELECT REGEXP_REPLACE('t/h"is"'||chr(9)||'is a|te\st', '([/\|"])', '\\\1', 1, 0) FROM dual;
```
Replacing the TAB and a carriage return is easy enough by wrapping the above in REPLACE calls, but it stinks to have to do this for each control character. Thus, I'm afraid my answer isn't really a full answer for you, it only helps you with the regular characters a bit:
```
SQL> SELECT REPLACE(REPLACE(REGEXP_REPLACE('t/h"is"'||chr(9)||'is
2 a|te\st', '([/\|"])', '\\\1', 1, 0), chr(9), '\t'), chr(10), '\n') fixe
3 FROM dual;
FIXED
-------------------------
t\/h\"is\"\tis\na\|te\\st
SQL>
```
**EDIT: Here's a solution!** I don't claim to understand it fully, but basically it creates a translation table that joins to your string (in the inp\_str table). The connect by, level traverses the length of the string and replaces characters where there is a match in the translation table. I modified a solution found here: <http://database.developer-works.com/article/14901746/Replace+%28translate%29+one+char+to+many> that really doesn't have a great explanation. Hopefully someone here will chime in and explain this fully.
```
SQL> with trans_tbl(ch_frm, str_to) as (
select '"', '\"' from dual union
select '/', '\/' from dual union
select '\', '\\' from dual union
select chr(8), '\b' from dual union -- BS
select chr(12), '\f' from dual union -- FF
select chr(10), '\n' from dual union -- NL
select chr(13), '\r' from dual union -- CR
select chr(9), '\t' from dual -- HT
),
inp_str as (
select 'No' || chr(12) || 'w is ' || chr(9) || 'the "time" for /all go\od men to '||
chr(8)||'com' || chr(10) || 'e to the aid of their ' || chr(13) || 'country' txt from dual
)
select max(replace(sys_connect_by_path(ch,'`'),'`')) as txt
from (
select lvl
,decode(str_to,null,substr(txt, lvl, 1),str_to) as ch
from inp_str cross join (select level lvl from inp_str connect by level <= length(txt))
left outer join trans_tbl on (ch_frm = substr(txt, lvl, 1))
)
connect by lvl = prior lvl+1
start with lvl = 1;
TXT
------------------------------------------------------------------------------------------
No\fw is \tthe \"time\" for \/all go\\od men to \bcom\ne to the aid of their \rcountry
SQL>
```
**EDIT 8/10/2016 - Make it a function for encapsulation and reusability so you could use it for multiple columns at once:**
```
create or replace function esc_json(string_in varchar2)
return varchar2
is
s_converted varchar2(4000);
BEGIN
with trans_tbl(ch_frm, str_to) as (
select '"', '\"' from dual union
select '/', '\/' from dual union
select '\', '\\' from dual union
select chr(8), '\b' from dual union -- BS
select chr(12), '\f' from dual union -- FF
select chr(10), '\n' from dual union -- NL
select chr(13), '\r' from dual union -- CR
select chr(9), '\t' from dual -- HT
),
inp_str(txt) as (
select string_in from dual
)
select max(replace(sys_connect_by_path(ch,'`'),'`')) as c_text
into s_converted
from (
select lvl
,decode(str_to,null,substr(txt, lvl, 1),str_to) as ch
from inp_str cross join (select level lvl from inp_str connect by level <= length(txt))
left outer join trans_tbl on (ch_frm = substr(txt, lvl, 1))
)
connect by lvl = prior lvl+1
start with lvl = 1;
return s_converted;
end esc_json;
```
Example to call for multiple columns at once:
```
select esc_json(column_1), esc_json(column_2)
from your_table;
```
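If the escaping can happen outside SQL, the same translation-table idea has a short analogue in a host language; a hedged Python sketch (Python's own `json` module applies equivalent escaping rules, so this is illustrative only):

```python
# One escape entry per character, mirroring the trans_tbl rows above
escapes = {'"': '\\"', '/': '\\/', '\\': '\\\\', '\b': '\\b',
           '\f': '\\f', '\n': '\\n', '\r': '\\r', '\t': '\\t'}
table = str.maketrans(escapes)

# translate() walks the string once and substitutes each mapped character
escaped = 'say "hi"\tnow'.translate(table)
```

The output is `say \"hi\"\tnow`, matching what the SQL translation table produces for the same input.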
|
Inspired by the answer above, I created this simpler "one-liner" function:
```
create or replace function json_esc (
str IN varchar2
) return varchar2
is
begin
return REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(str, chr(8), '\b'), chr(9), '\t'), chr(10), '\n'), chr(12), '\f'), chr(13), '\r');
end;
```
Please note, both this and @Gary\_W's answer above do not escape *all* control characters that json.org seems to require.
|
Escaping special characters for JSON output
|
[
"",
"sql",
"json",
"regex",
"oracle",
""
] |
I need to get the information from the DB limited to 3 rows, which I want sorted in descending order.
I used
```
select * from table where coloumn = 'Myfilter' order by serialNumber desc limit 3
```
After execution I am not getting the latest three records, but rather the first three records, ordered descending.
|
This query solves my question; thank you all for the suggestions:
```
SELECT * FROM (SELECT * FROM table WHERE coloumn='myFilter' ORDER BY serialnumber desc LIMIT 3) a ORDER BY serialnumber asc
```
The query selects the latest 3 rows, ordered by serial number from largest to smallest, then reverses the order of the selected rows. Thanks @Kelvin Barsana.
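The subquery-then-reorder pattern is easy to verify on a small table; a sketch using SQLite through Python (table and values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (serialNumber INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 8)])

# Inner query grabs the 3 largest serial numbers; the outer query
# re-sorts that small result set ascending.
latest_asc = [r[0] for r in conn.execute("""
    SELECT serialNumber FROM
      (SELECT serialNumber FROM t ORDER BY serialNumber DESC LIMIT 3) a
    ORDER BY serialNumber ASC
""")]
conn.close()
```

Out of serial numbers 1..7, the latest three come back, in ascending order.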
|
Applying limit before order by
```
SELECT * FROM (SELECT * FROM table WHERE coloumn = 'Myfilter' ORDER BY serialNumber LIMIT 3) a ORDER BY serialNumber DESC
```
|
how to orderby after applying limit in sql
|
[
"",
"mysql",
"sql",
""
] |
We have a job table that includes ID and title, something like this:
```
JobID | JobTitle
1 president
2 vice-president
3 director
```
The user table includes a jobID that is supposed to map to the job table, but whoever coded it made it a multi-select field in the UI and appended values with pipes between. So a user could be a president AND a vice-president AND a director. Here is an example of the user table
```
UserName | JobID
Suzy 1|2|3
Bob 3
Jane 2|1
```
I'm trying to run a report of all staff and their titles, but am stumped at how to iterate through the multi-value jobIDs and display the jobTitle.
The query I'm currently using is something like:
```
select user.username, job.JobTitle
from user
inner join job on user.JobID = job.JobID
```
This is all on SQL Server 2012
Any suggestions?
I'd settle for displaying additional titles on subsequent rows, or in subsequent columns, whatever is easier.
|
The best answer is obviously to simply split the column and create a separate table with 1 row per child (job) to have [atomic data](https://dba.stackexchange.com/questions/2342/what-is-atomic-relation-in-first-normal-form). But in the meantime, you can do something like :
```
SELECT [user].username, [job].JobTitle
FROM [job]
INNER JOIN [user]
ON ('|'+[user].JobID+'|' LIKE('%|'+CAST([job].JobID as varchar(20))+'|%'))
```
Oh, and before you shout after someone for inserting non-atomic data in a table, start by shunning him for using reserved keywords as table names. Never *ever* **ever** ***ever*** name your table `user`.
---
Btw, this syntax can be used to do something like :
```
SELECT [user].username, [job].JobID
INTO users
FROM [job]
INNER JOIN [user]
ON ('|'+[user].JobID+'|' LIKE('%|'+CAST([job].JobID as varchar(20))+'|%'))
```
which will net you a sanitized atomic table.
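A runnable sketch of the delimiter-wrapping join, using SQLite through Python (the table is named `usr` rather than the reserved-keyword-prone `user`, and the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (JobID INT, JobTitle TEXT)")
conn.execute("CREATE TABLE usr (UserName TEXT, JobID TEXT)")
conn.executemany("INSERT INTO job VALUES (?, ?)",
                 [(1, "president"), (2, "vice-president"), (3, "director")])
conn.executemany("INSERT INTO usr VALUES (?, ?)",
                 [("Suzy", "1|2|3"), ("Bob", "3"), ("Jane", "2|1")])

# Wrap both sides in '|' so JobID 1 cannot accidentally match inside "11"
pairs = conn.execute("""
    SELECT u.UserName, j.JobTitle
    FROM usr u JOIN job j
      ON '|' || u.JobID || '|' LIKE '%|' || j.JobID || '|%'
    ORDER BY u.UserName, j.JobID
""").fetchall()
conn.close()
```

Each user comes back once per title held, i.e. the multi-value field is unpacked into one row per job.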
|
Here is a function you can use to split a delimited field and return the results to a table (which you can then use in subsequent operations):
```
CREATE FUNCTION dbo.ufnGENSplitDelimField (
@InputString nvarchar(max),
@Delimiter nvarchar(10)
)
RETURNS @Results TABLE (
Item nvarchar(50)
)
AS
BEGIN
-- default delimiter to comma if blank
IF ISNULL(@Delimiter,'') = ''
BEGIN
SET @Delimiter = ','
END
DECLARE @Item nvarchar(50);
DECLARE @ItemList nvarchar(max);
DECLARE @DelimIndex int;
SET @ItemList = @InputString
SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
/*
Loop through all items, removing the first at each iteration. For example, if we start with the string 'bob,mary,joe':
1. set @Item = 'bob'
2. insert 'bob' into result table
3. set @ItemList = 'mary,joe'
4. set @DelimIndex = 4
loop
*/
WHILE (@DelimIndex != 0)
BEGIN
SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex)
INSERT INTO @Results VALUES (@Item)
-- Remove the first item from @ItemList
SET @ItemList = SUBSTRING(@ItemList, @DelimIndex+1, LEN(@ItemList)-@DelimIndex)
SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
END -- End WHILE
IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString, and we need to insert the final item in the list
BEGIN
SET @Item = @ItemList
INSERT INTO @Results VALUES (@Item)
END
ELSE -- No delimiters were encountered in @InputString, so just return @InputString
BEGIN
INSERT INTO @Results VALUES (@InputString)
END
RETURN;
END;
```
|
How can I iterate through characters in a SQL result value?
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have the following models and want to get all the available posts that **an specific user** has not put a like on them.
Post Model:
```
class Post < ActiveRecord::Base
has_many :likes
has_many :users, through: :likes
scope :available, -> { where available: true }
end
```
Like Model:
```
class Like < ActiveRecord::Base
belongs_to :post
belongs_to :user
end
```
User Model:
```
class User < ActiveRecord::Base
has_many :likes
has_many :posts, through: :likes
end
```
I came up with this ActiveRecord query:
```
Post.available - Post.available.joins(:likes).where('likes.user_id = ?', user.id)
```
Is there an optimized way to achieve this? Maybe even an equivalent SQL query?
|
This can be achieved with:
```
Post.available.where("id not in (select post_id from likes where user_id = ?)", user.id)
```
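The SQL this produces is a plain `NOT IN` anti-join; a sketch of the equivalent query against SQLite through Python, with invented tables standing in for `posts` and `likes`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INT, available INT)")
conn.execute("CREATE TABLE likes (post_id INT, user_id INT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, 1), (2, 1), (3, 0), (4, 1)])
conn.executemany("INSERT INTO likes VALUES (?, ?)", [(1, 7), (2, 9)])

# Available posts that user 7 has not liked
unliked = [r[0] for r in conn.execute("""
    SELECT id FROM posts
    WHERE available = 1
      AND id NOT IN (SELECT post_id FROM likes WHERE user_id = 7)
    ORDER BY id
""")]
conn.close()
```

Post 1 is excluded (liked by user 7), post 3 is excluded (unavailable), and posts 2 and 4 remain.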
|
That could do it, I guess:
```
Post.available.where( 'NOT EXISTS (?)',
user.likes.where('likes.post_id = posts.id')
)
```
You can also do this without SQL strings at all, like so:
```
Post.available.where(
user.likes.where(post_id: Post.arel_table[:id]).exists.not
)
```
But I've seen constructs like this cause issues with parameter binding. I'm pretty sure it's a bug, but I'm not sure this trick is even in the public ActiveRecord API (and so whether ActiveRecord/Arel is really to blame).
|
Optimizing an ActiveRecord query
|
[
"",
"sql",
"ruby-on-rails",
"postgresql",
"activerecord",
"query-optimization",
""
] |
I'm wondering whether there is a way to display some default value for a select statement where the queried field doesn't exist.
For instance,
```
SELECT t.name, t.type, t.price, t.brand FROM some_table t;
```
If the 'brand' field doesn't exist in the some\_table I would like this statement to display 'brand' as 'not available'.
Eventually I want to create a view from that select statement.
I'm just curious whether there is a way to do that in PL/SQL.
EDIT:
To avoid confusion, I want the statement to compile and work when the 'brand' column doesn't exist in the table.
|
I have just seen the question above. It seems like a very weird design or requirement. I am posting a code snippet which may satisfy your need, but ideally it should not be done like this.
```
-- A simple workaround for your problem is to fetch the column list from the data dictionary
var p_lst refcursor;
SET serveroutput ON;
DECLARE
lv_sql LONG;
lv_tab_name VARCHAR2(100);
lv_col_chk VARCHAR2(1000 CHAR);
BEGIN
FOR I IN
(SELECT * FROM ALL_TAB_COLUMNS WHERE OWNER = 'AVROY' AND TABLE_NAME = 'EMP'
)
LOOP
lv_tab_name:=I.TABLE_NAME;
lv_sql :=lv_sql||','||i.column_name;
END LOOP;
lv_sql:='SELECT '||SUBSTR(lv_sql,2,LENGTH(lv_sql));
dbms_output.put_line(lv_sql);
lv_col_chk:=INSTR(UPPER(lv_sql),'BRAND',1);
dbms_output.put_line(lv_col_chk);
IF lv_col_chk = 0 THEN
lv_sql :=SUBSTR(lv_sql,1,LENGTH(lv_sql))||', ''Not_available'' as Brand_col FROM '||lv_tab_name;
dbms_output.put_line(LV_SQL);
ELSE
lv_sql:=SUBSTR(lv_sql,1,LENGTH(lv_sql))||' FROM '||lv_tab_name;
dbms_output.put_line(LV_SQL);
END IF;
OPEN :p_lst FOR lv_sql;
END;
PRINT p_lst;
```
|
You can use `COALESCE`, change `null` by `not available`
```
SELECT t.name, t.type, t.price, COALESCE(t.brand,'not available') AS brand FROM some_table t;
```
`COALESCE` is standard SQL, and Oracle supports it.
EDIT:
I think you have to check that the field exists in the table first, something like:
```
Select count(*) into v_column_exists
from user_tab_cols
where column_name = 'ADD_TMS'
and table_name = 'EMP';
```
If the count is 1 the column exists, otherwise it does not; then create the view based on the result.
1:
```
SELECT t.name, t.type, t.price, t.brand FROM some_table t;
```
2:
```
SELECT t.name, t.type, t.price, 'not available' AS brand FROM some_table t;
```
But I can't see the right way to use this in a view.
|
SQL Select statement - default field value when field doesn't exist
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a single large SQL table and the following query, which returns some basic info:
```
SELECT
CustomerID AS "Customer Name"
,COUNT(ID) AS "Total Emails"
FROM EmailDatas WHERE Expired = 0
GROUP BY CustomerID
```
# Result
```
Customer Name Total Emails
A 1000
B 9200
C 11400
```
What I am trying to do is get a count of emails for today's date returned in the same query, something like:
# Result
```
Customer Name Total Emails Emails Today
A 1000 34
B 9200 7
C 11400 54
```
I can amend the original query to return the info:
```
SELECT
CustomerID AS "Customer Name"
,COUNT(ID) AS "Total Emails"
FROM EmailDatas WHERE Expired = 0 and starttime > getdate()
GROUP BY CustomerID
```
What I need is basically to have these 2 queries combined.
Hope this makes sense.
Thanks!
|
You can use conditional aggregation. Something like this:
```
SELECT CustomerID AS "Customer Name",
COUNT(ID) AS "Total Emails",
SUM(CASE WHEN CAST(starttime as date) = CAST(getdate() as date) THEN 1 ELSE 0 END) as "Today's"
FROM EmailDatas
WHERE Expired = 0
GROUP BY CustomerID;
```
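Conditional aggregation is portable across engines; a small sketch of the same single-pass total-plus-today count using SQLite through Python (schema simplified and data invented, with dates stored as ISO strings):

```python
import sqlite3
from datetime import date

today = date.today().isoformat()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (CustomerID TEXT, starttime TEXT, Expired INT)")
conn.executemany("INSERT INTO emails VALUES (?, ?, ?)", [
    ("A", "2015-01-01", 0), ("A", today, 0),
    ("B", "2015-01-01", 0), ("B", "2015-01-02", 0),
])

# One pass over the table: a total count plus a CASE-guarded count
counts = conn.execute("""
    SELECT CustomerID,
           COUNT(*) AS total,
           SUM(CASE WHEN starttime = ? THEN 1 ELSE 0 END) AS today
    FROM emails WHERE Expired = 0
    GROUP BY CustomerID ORDER BY CustomerID
""", (today,)).fetchall()
conn.close()
```

The two counts come from the same scan, so there is no need for a second query or a self-join.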
|
You can do something like this also.
```
select t1.CustomerName,t1.TotalEmails,
case when t2.EmailsToday is null then 0
else t2.EmailsToday end as EmailsToday
from (SELECT
CustomerID AS CustomerName
,COUNT(ID) AS TotalEmails
FROM EmailDatas WHERE Expired = 0
GROUP BY CustomerID) t1
left join
(SELECT
CustomerID AS CustomerName
,COUNT(ID) AS EmailsToday
FROM EmailDatas WHERE Expired = 0 and starttime > getdate()
GROUP BY CustomerID) t2
on t1.CustomerName=t2.CustomerName
```
|
Nested SQL select
|
[
"",
"sql",
""
] |
I have two tables.
**TABLE\_A**
```
| SURNAME | COL_X |
```
**TABLE\_B**
```
| ID | COL_Y |
```
COL\_X can be mapped towards the ID column in table B, and I need the values from COL\_Y.
The below query works fine except when COL\_X in TABLE\_A has a NULL value. I would like to include those rows. How do I do that?
```
SELECT a.SURNAME, b.COL_Y
FROM TABLE_A a
INNER JOIN TABLE_B b
ON a.COL_X = b.ID
```
I have tried the following query but it returns duplicate rows and can therefore not be used.
```
SELECT a.SURNAME, b.COL_Y
FROM TABLE_A a
INNER JOIN TABLE_B b
ON a.COL_X = b.ID or a.COL_X IS NULL
```
|
You have tried using an inner join, which returns only the rows that match in both tables. You can use a left join to do what you expect:
```
SELECT a.SURNAME, b.COL_Y
FROM TABLE_A a
LEFT JOIN TABLE_B b
ON a.COL_X = b.ID
```
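A minimal demonstration of how `LEFT JOIN` keeps the NULL rows, using SQLite through Python (table and column names simplified from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (surname TEXT, col_x INT)")
conn.execute("CREATE TABLE b (id INT, col_y TEXT)")
conn.executemany("INSERT INTO a VALUES (?, ?)",
                 [("Smith", 1), ("Jones", None)])
conn.execute("INSERT INTO b VALUES (1, 'value1')")

# LEFT JOIN keeps Jones even though col_x is NULL; col_y comes back NULL
rows = conn.execute("""
    SELECT a.surname, b.col_y
    FROM a LEFT JOIN b ON a.col_x = b.id
    ORDER BY a.surname
""").fetchall()
conn.close()
```

With an `INNER JOIN` the Jones row would simply be dropped.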
|
Just use a `LEFT JOIN` instead of an `INNER JOIN`:
```
SELECT a.SURNAME, b.COL_Y
FROM TABLE_A a
LEFT JOIN TABLE_B b
ON a.COL_X = b.ID
```
|
Oracle JOIN ON colA = colB: how to include null rows from colA
|
[
"",
"sql",
"oracle",
""
] |
```
DECLARE @age DATETIME
SET @age = (GETDATE() - emp.Birthdate)
SELECT
emp.BusinessEntityID, emp.BirthDate, @age
FROM
HumanResources.Employee AS emp
WHERE
emp.OrganizationLevel > = 3
AND ((GETDATE() - emp.Birthdate) BETWEEN '1930-01-01 00:00:00.000' AND '1940-01-01 00:00:00.000')
```
As you pros can see, this will not work. I'm hoping to display the ages of the people aged 30-40, with their ID, birthday, and age. Declaring `@age` is my problem. I tried using `substring(getdate...), 2, 2)` but it doesn't seem to work. Any ideas? Thanks
|
No need to use a variable in the query. You can do this simply as follows:
```
SELECT t.BusinessEntityID, t.BirthDate, t.Age
FROM
(SELECT emp.BusinessEntityID, emp.BirthDate, CAST(datediff(DAY, emp.Birthdate, GETDATE()) / (365.23076923074) as int) as 'Age'
FROM HumanResources.Employee AS emp
WHERE emp.OrganizationLevel > = 3) t
WHERE t.Age >= 30 and t.Age <=40
```
|
At the moment you set the value to `@age`,
```
SET @age = (GETDATE() - emp.Birthdate)
```
there is no `emp`.
You can simply do the following query:
```
SELECT emp.BusinessEntityID, emp.BirthDate, (GETDATE() - emp.Birthdate) AS Age
FROM HumanResources.Employee AS emp
WHERE emp.OrganizationLevel >= 3 AND
      (GETDATE() - emp.Birthdate) BETWEEN '1930-01-01 00:00:00.000' AND '1940-01-01 00:00:00.000'
```
|
Declaring a variable with a value from select query
|
[
"",
"sql",
"sql-server",
"t-sql",
"adventureworks",
""
] |
Example table:
```
Col1 | Col2
A | Apple
A | Banana
B | Apple
C | Banana
```
Output:
```
A
```
I want to get all values of `Col1` which have more than one entry and at least one with `Banana`.
I tried to use `GROUP BY`:
```
SELECT Col1
FROM Table
GROUP BY Col1
HAVING count(*) > 1
AND ??? some kind of ONEOF(Col2) = 'Banana'
```
How to rephrase the `HAVING` clause that my query works?
|
Use *conditional aggregation*:
```
SELECT Col1
FROM Table
GROUP BY Col1
HAVING COUNT(DISTINCT col2) > 1 AND
COUNT(CASE WHEN col2 = 'Banana' THEN 1 END) >= 1
```
You can conditionally check for `Col1` groups having *at least one* `'Banana'` value using `COUNT` with `CASE` expression inside it.
Please note that the first `COUNT` has to use `DISTINCT`, so that groups with at least two *different* `Col1` values are detected. If by *having more than one entry* you mean also rows having the same `Col2` values repeated more than one time, then you can skip `DISTINCT`.
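The two `HAVING` conditions can be checked against the question's sample data; a sketch using SQLite through Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("A", "Apple"), ("A", "Banana"), ("B", "Apple"), ("C", "Banana"),
])

# Keep groups with more than one distinct col2 value AND at least
# one 'Banana'; the CASE yields NULL for non-Banana rows, and COUNT
# skips NULLs.
result = [r[0] for r in conn.execute("""
    SELECT col1 FROM t
    GROUP BY col1
    HAVING COUNT(DISTINCT col2) > 1
       AND COUNT(CASE WHEN col2 = 'Banana' THEN 1 END) >= 1
""")]
conn.close()
```

Only `A` survives: `B` has a single entry with no Banana, and `C` has only one entry.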
|
Here is a simple approach:
```
SELECT Col1
FROM table
GROUP BY Col1
HAVING COUNT(DISTINCT CASE WHEN col2= 'Banana' THEN 1 ELSE 2 END) = 2
```
|
HAVING clause: at least one of the ungrouped values is X
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2014",
""
] |
When I select the max value with the code below, it shows me 2 rows in the result, but I want to get only the one row with the max value.
Code below:
```
select
distinct(c.msisdn),
max(adj_date),
6144 - (round(t.volume /1024/1024,2)) as остаток_мб
from
subscriber_discount_threads t,
client_balance c,
subs_packs p
where
t.dctr_dctr_id = 1283
and p.pack_pack_id in (877,874)
and c.msisdn = '550350057'
and t.adj_date >= '30.10.2015'
and sysdate between p.start_date and p.end_date
and sysdate between t.start_thread and t.end_thread
and c.subs_subs_id = t.subs_subs_id
and p.subs_subs_id = c.subs_subs_id
group by c.msisdn, t.volume
```
Result below (2 rows, but I want it to show me only the max date):
```
25.11.2015 13:08:06
03.11.2015 11:42:06
```
What could be the problem?
|
Edit: Updated my answer as per OP
```
with cte as(
select
distinct(c.msisdn) as msisdn,
max(adj_date) as maxdate,
6144 - (round(t.volume /1024/1024,2)) as остаток_мб
from
subscriber_discount_threads t,
client_balance c,
subs_packs p
where
t.dctr_dctr_id = 1283
and p.pack_pack_id in (877,874)
and c.msisdn = '550350057'
and t.adj_date >= '30.10.2015'
and sysdate between p.start_date and p.end_date
and sysdate between t.start_thread and t.end_thread
and c.subs_subs_id = t.subs_subs_id
and p.subs_subs_id = c.subs_subs_id
group by c.msisdn, t.volume
),
cte2 as(
select c1.*, row_number() over(partition by msisdn order by maxdate desc) as maxdatecol from cte c1)
select c2.* from cte2 c2
where maxdatecol =1
;
```
|
You can try this one:
```
select
distinct c.msisdn,
max(adj_date) over (order by null),
6144 - (round(t.volume /1024/1024,2)) as остаток_мб
from
subscriber_discount_threads t,
client_balance c,
subs_packs p
where
t.dctr_dctr_id = 1283
and p.pack_pack_id in (877,874)
and c.msisdn = '550350057'
and t.adj_date >= '30.10.2015'
and sysdate between p.start_date and p.end_date
and sysdate between t.start_thread and t.end_thread
and c.subs_subs_id = t.subs_subs_id
and p.subs_subs_id = c.subs_subs_id
```
The max value will be the same for all rows; it is the max date in the table. Not sure if that is what you want, but this is how I understood you.
|
Error result for define max value
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
My title sounds complicated, but the situation is very simple. People search on my site using a term such as "blackfriday".
When they conduct the search, my SQL code needs to look in various places such as a `ProductTitle` and `ProductDescription` field to find this term. For example:
```
SELECT *
FROM dbo.Products
WHERE ProductTitle LIKE '%blackfriday%' OR
ProductDescription LIKE '%blackfriday%'
```
However, the term appears differently in the database fields. It is most likely to appear with a space between the words, such as "Black Friday USA 2015". So without going through and adding more combinations to the `WHERE` clause such as `WHERE ProductTitle LIKE '%Black-Friday%'`, is there a better way to accomplish this kind of fuzzy searching?
I have full-text search enabled on the above fields but its really not that good when I use the `CONTAINS` clause. And of course other terms may not be as neat as this example.
|
I should start by saying that "variations (of a string)" is a bit vague. You could mean plurality, verb tenses, synonyms, and/or combined words (or, ignoring spaces and punctuation between 2 words) like the example you posted: "blackfriday" vs. "black friday" vs "black-friday". I have a few solutions of which 1 or more together may work for you depending on your use case.
**Ignoring punctuation**
Full Text searches already ignore punctuation and match them to spaces. So `black-friday` will match `black friday` whether using FREETEXT or CONTAINS. But it won't match `blackfriday`.
**Synonyms and combined words**
Using [FREETEXT](https://msdn.microsoft.com/en-us/library/ms176078.aspx) or [FREETEXTTABLE](https://msdn.microsoft.com/en-us/library/ms177652.aspx) for your full text search is a good way to handle synonyms and *some* matching of combined words (I don't know which ones). You can [customize the thesaurus](https://msdn.microsoft.com/en-us/library/ms142491.aspx) to add more combined words assuming it's practical for you to come up with such a list.
**Handling combinations of any 2 words**
Maybe your use case calls for you to match poorly formatted text or hashtags. In that case I have a couple of ideas:
* Write the full text query to cover each combination of words using a dictionary. For example your data layer can rewrite a search for `black friday` as `CONTAINS(*, '"black friday" OR "blackfriday"')`. This may have to get complex, for example would `black friday treehouse` have to be `("black friday" OR "blackfriday") AND ("treehouse" OR "tree house")`? You would need a dictionary to figure out that "treehouse" is made up of 2 words and thus can be split.
* If it's not practical to use a dictionary for the words being searched for (I don't know why, maybe acronyms or new memes) you could create a long query to cover every letter combination. So searching for `do-re-mi` could be `"do re mi" OR "doremi" OR "do remi" OR "dore mi" OR "d oremi" OR "d o remi" ...`. Yes it will be a lot of combinations, but surprisingly it may run quickly because of how full text efficiently looks up words in the index.
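Generating "every letter combination" from the last bullet can be automated before the query is built; a hypothetical Python helper (`split_variants` is an invented name) that enumerates all space insertions for one word:

```python
from itertools import product

def split_variants(word):
    """All ways to insert spaces between the letters of `word`.

    There are 2**(len(word)-1) combinations, one per yes/no
    decision at each gap between adjacent letters.
    """
    variants = []
    for gaps in product([False, True], repeat=len(word) - 1):
        out = word[0]
        for ch, gap in zip(word[1:], gaps):
            out += (" " if gap else "") + ch
        variants.append(out)
    return variants

variants = split_variants("abc")
```

The variants could then be quoted and joined with `" OR "` to form the `CONTAINS` search condition; for longer words the count grows exponentially, so a cutoff on word length would be sensible.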
|
A hack / workaround if searching for multiple variations is **very important**.
1. Define which fields in the DB are searchable (e.g `ProductTitle`, `ProductDescription`)
2. Before saving these fields in the DB, replace each space (or run of consecutive spaces) with a placeholder, e.g. `"%"`
3. Search the DB for variation matches employing the placeholder
4. Do the reverse process when displaying these fields on your site (i.e replace placeholder with space)
5. Alternatively you can enable regex matching for your users (meaning they can define a regex either explicitly or let your app build one from their search term). But it is slower and probably error-prone to do it this way
|
How to tackle efficient searching of a string that could have multiple variations?
|
[
"",
"sql",
"sql-server",
"search",
"full-text-search",
"sql-server-2014",
""
] |
Below is the sample procedure code,
```
create or replace procedure pro_test(start_date date, end_date date)
is
begin
insert into test1 select col1, col2, col3 from main where range_date between start_date and end_date;
exception
< what are the exception I need to capture >
end;
/
```
Q1: Is this the right way to pass a date directly into the procedure?
Q2: If not, can I pass a varchar into the procedure and convert it to a date in the declaration part?
Q3: In the begin part I am using the BETWEEN operator; can I pass the procedure parameters directly?
While executing this procedure as `exec pro_test('10102015','30102015');`, what do I need to mention in the BETWEEN clause? Is `between start_date and end_date` enough, or do I need to apply a date format mask?
Can someone help me clear this up?
|
> Q1 : Is this right way to pass date directly inside the procedure?
Yes.
> Q3 : In begin part am using between operator, can i pass procedure parameter directly ?
Not sure what you mean by this, but your insert statement is fine. You are passing a **DATE** as parameter and inserting into the table.
In my opinion, all this could be done in a single **INSERT..SELECT** statement in pure **SQL**.
```
insert into test1
select col1, col2, col3
from main
where range_date
between TO_DATE(<date_literal>,<format mask>)
and TO_DATE(<date_literal>,<format mask>);
```
---
**UPDATE** Per OP's comment:
> While executing this procedure, exec pro\_test('10102015','30102015');
> What i need to mention in between sentence? between start\_date and
> end\_date is this enough or i need to mask date format?
`'10102015'` is not a DATE, it is a string literal. You must pass it as DATE, so you must either use **TO\_DATE** with proper format mask or **ANSI Date literal** as you do not have any time portion. ANSI Date literal uses a fixed format `'YYYY-MM-DD'`.
For example,
Using **TO\_DATE**:
```
EXEC pro_test(TO_DATE('10102015','DDMMYYYY'),TO_DATE('30102015','DDMMYYYY'));
```
Using **ANSI Date literal**:
```
EXEC pro_test(DATE '2015-10-10', DATE '2015-10-30');
```
|
**Q1.** You need to tell the procedure whether each parameter is **input** or **output** (by adding **in** or **out**); for your procedure they are input:
`create or replace procedure pro_test(start_date in date, end_date in date)`
Everything else in your procedure is fine.
|
How to pass date parameter inside the oracle procedure?
|
[
"",
"sql",
"oracle",
""
] |
This is my query
```
SELECT dia
FROM CRES
WHERE pro_id = 2
AND 8103434563 LIKE
( SELECT dial_pattern||'%' FROM CDIVN WHERE dial_id = 1
);
```
Now `select dial_pattern||'%' from CDIVN where dial_id = 1` can give `multiple` results. Hence my main query fails, the reason being `"sub query returns more than one row"`. This is because I have used `like`.
But my logic requires `like`, because I want to match 8103434563 against the patterns from table CDIVN.
How do I modify my query? Please help.
=======
```
CREATE TABLE CDIVN
( DIAL_PATTERN_ID NUMBER NOT NULL ENABLE,
DIAL_PATTERN VARCHAR2(30 BYTE) NOT NULL ENABLE,
OTHERS VARCHAR2(64 BYTE),
CONSTRAINT "CDIVN_PK" PRIMARY KEY ("DIAL_PATTERN_ID", "DIAL_PATTERN")
);
Insert into CDIVN (DIAL_PATTERN_ID,DIAL_PATTERN,OTHERS) values (1,'810','abc');
Insert into CDIVN (DIAL_PATTERN_ID,DIAL_PATTERN,OTHERS) values (1,'811','xyz');
Insert into CDIVN (DIAL_PATTERN_ID,DIAL_PATTERN,OTHERS) values (1,'812','aaa');
Insert into CDIVN (DIAL_PATTERN_ID,DIAL_PATTERN,OTHERS) values (5,'999','www');
Insert into CDIVN (DIAL_PATTERN_ID,DIAL_PATTERN,OTHERS) values (9,'333','ewe');
CREATE TABLE CRES
( PROFILE_ID NUMBER NOT NULL ENABLE,
PROFILE_NAME VARCHAR2(50 BYTE) NOT NULL ENABLE,
DIALLED_PATTERN VARCHAR2(15 BYTE),
CONSTRAINT "CRES_PK" PRIMARY KEY ("PROFILE_ID")
) ;
Insert into CRES (PROFILE_ID,PROFILE_NAME,DIALLED_PATTERN) values (1,'A','1');
Insert into CRES (PROFILE_ID,PROFILE_NAME,DIALLED_PATTERN) values (2,'B','5');
Insert into CRES (PROFILE_ID,PROFILE_NAME,DIALLED_PATTERN) values (3,'C','9');
```
I have a CRES profile id and one number with me from other sources; these are 1 and 81034345 respectively.
Now `select DIALLED_PATTERN from CRES where PROFILE_ID=1;`
This will give me DIALLED\_PATTERN as 1.
Now `select DIAL_PATTERN from CDIVN where DIAL_PATTERN_ID = 1` (the DIALLED\_PATTERN value)
This will give DIAL\_PATTERN as 810, 811, 812.
Now if 81034345 matches any one of 810%, 811%, or 812%, then I need 810% as my answer.
|
You are going in the right direction; you just need to **JOIN** the two tables, and there is no need for a sub-query: you can directly use `LIKE dial_pattern||'%'`.
```
SQL> SELECT a.*, b.*
2 FROM CRES A
3 JOIN CDIVN b
4 ON (b.DIAL_PATTERN_ID = A.profile_id)
5 WHERE TO_CHAR(8103434563) LIKE dial_pattern||'%';
PROFILE_ID PROFILE_NAME DIALLED_PATTERN DIAL_PATTERN_ID DIAL_PATTER OTHERS
---------- ------------ --------------- --------------- ----------- ------
1 A 1 1 810 abc
```
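The JOIN + LIKE pattern above can be checked quickly with an in-memory SQLite database standing in for Oracle (same table and column names as the question; `||` concatenation and `LIKE` behave the same way here):

```python
# Quick check of JOIN + LIKE dial_pattern||'%' using SQLite as a stand-in.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CDIVN (DIAL_PATTERN_ID INT, DIAL_PATTERN TEXT);
INSERT INTO CDIVN VALUES (1,'810'),(1,'811'),(1,'812'),(5,'999');
CREATE TABLE CRES (PROFILE_ID INT, PROFILE_NAME TEXT);
INSERT INTO CRES VALUES (1,'A'),(5,'B');
""")
rows = con.execute("""
    SELECT b.DIAL_PATTERN
    FROM CRES a
    JOIN CDIVN b ON b.DIAL_PATTERN_ID = a.PROFILE_ID
    WHERE '8103434563' LIKE b.DIAL_PATTERN || '%'
""").fetchall()
print(rows)  # [('810',)]
```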
|
Collect the LIKE values and build a regexp expression, e.g.
```
-- Test Data
with CDIVN as
(select 1 as dial_id, 'ABC' as val
from dual
union all
select 1 as dial_id, 'ACD' as val
from dual
union all
select 1 as dial_id, 'XXA' as val from dual),
CRES as
(select 2 as proc_id, 'ABCD' as val
from dual
union all
select 2 as proc_id, 'DABCD' as val
from dual
union all
select 2 as proc_id, 'ACF' as val
from dual
union all
select 2 as proc_id, 'XXAF' as val from dual)
,
-- Build regexp expression: 1, 'ABC|ACD|XXA'
CDIVN_PATTERN as
(select dial_id,
listagg(val, '|') within group(order by dial_id) as val_pattern
from CDIVN
group by dial_id)
-- Use this expression by regexp_like
select *
from CRES c
where regexp_like(c.val,
(select '^' || p.val_pattern
from cdivn_pattern p
where p.dial_id = 1));
```
|
Oracle DB : Like with multiple values
|
[
"",
"sql",
"oracle",
"oracle10g",
""
] |
First of all: I've found some possible answers to my problem in previously asked questions, but I've encountered problems with getting them to work properly. I know the question was already asked, but the answers were always working code with little to no explanation of the method used.
So: I've got to find out when a customer reached VIP status, which is when the value of their orders exceeds 50,000. I've got 2 tables: one with orderid, customerid and orderdate, and a second with orderid, quantity and unitprice.
The result of the query I'm writing should be 3 columns wide: one with the customerid, one with true/false named "is VIP?", and the third with the date of getting the VIP status (which is the date of the order that, summed with the previous ones, gave a result of over 50,000). The last one should be blank if the customer didn't reach VIP status.
```
select o.customerid, sum(od.quantity*od.unitprice),
case
when sum(od.quantity*od.unitprice)>50000 then 'VIP'
else 'Normal'
end as 'if vip'
from
orders o join [Order Details] od on od.orderid=o.orderid
group by o.customerid
```
That is as far as I got with the code. It returns the status of the customer, and now I need to get the date when that happened.
|
You can easily calculate a running total using a window functions:
```
select o.customerid,
o.orderdate,
       sum(od.quantity*od.unitprice) over (partition by o.customerid order by orderdate) as running_sum
from orders o
join Order_Details od on od.orderid = o.orderid
order by customerid, orderdate;
```
Now you need to find a way to detect the first row, where the running total exceeds the threshold:
The following query starts numbering the rows in a descending manner once the threshold is reached. Which in turn means the row with the number 1 is the first one to cross the threshold:
```
with totals as (
select o.customerid,
o.orderdate,
sum(od.quantity*od.unitprice) over (partition by o.customerid order by orderdate) as running_sum,
case
when
sum(od.quantity*od.unitprice) over (partition by o.customerid order by orderdate) > 50000 then row_number() over (partition by o.customerid order by orderdate desc)
else 0
end as rn
from orders o
join Order_Details od on od.orderid = o.orderid
)
select *
from totals
where rn = 1
order by customerid;
```
SQLFiddle example: <http://sqlfiddle.com/#!6/a7f18/3>
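The two-step idea (running `SUM ... OVER`, then pick the first row past the threshold) maps directly onto SQLite's window functions (SQLite 3.25+), so it can be tried outside SQL Server; the orders table and the 50,000 threshold below are made up:

```python
# Running total per customer, then the first order crossing the threshold.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customerid INT, orderdate TEXT, amount INT);
INSERT INTO orders VALUES
 (1,'2015-01-01',30000),(1,'2015-02-01',15000),(1,'2015-03-01',10000),
 (2,'2015-01-05',20000);
""")
row = con.execute("""
    SELECT customerid, orderdate
    FROM (SELECT customerid, orderdate,
                 SUM(amount) OVER (PARTITION BY customerid
                                   ORDER BY orderdate) AS running_sum
          FROM orders)
    WHERE running_sum > 50000
    ORDER BY customerid, orderdate
    LIMIT 1
""").fetchone()
print(row)  # (1, '2015-03-01') -- running total hits 55000 on the third order
```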
|
You get the cumulative sum using an Analytic Function, `SUM OVER`. And then add an aggregate to find the minimum date:
```
with cte as
( select o.customerid,
o.orderdate,
case when sum(od.quantity*od.unitprice) -- running total
over (partition by o.customerid
order by orderdate
rows unbounded preceding) > 50000
then 'Y'
else 'N'
end as VIP
from orders o
join Order_Details od on od.orderid = o.orderid
)
select customerid,
MAX(VIP) AS "isVIP?", -- either 'N' or 'Y'
MIN(CASE WHEN VIP = 'Y' THEN orderdate END) AS VIP_date -- only when VIP status reached
from cte
group by customerid
order by customerid;
```
See [fiddle](http://sqlfiddle.com/#!6/7327f/2)
|
Finding out when summed values reached a certain checkpoint in SQL
|
[
"",
"sql",
"sql-server",
"sum",
""
] |
I tried the following query to select the number of groups resulting from a GROUP BY query:
```
SELECT count(*)
FROM (
SELECT count(*)
FROM MyTable
WHERE Col2 = 'x'
GROUP BY Col1
)
```
Unfortunately, this query is not valid: `Incorrect syntax near ')'.`.
Note: I want to get exactly one row (I already found a solution with n times the result, where n = the group count).
|
```
SELECT count(*)
FROM (
SELECT 1 as dummy
FROM MyTable
WHERE Col2 = 'x'
GROUP BY Col1
) dt
```
No need to count rows in the sub-query, the result will be the same anyway.
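A quick sanity check in SQLite that counting the rows of the grouped derived table gives the group count, and matches the `COUNT(DISTINCT Col1)` shortcut from the other answer; the sample data is invented:

```python
# Count groups via a derived table vs. COUNT(DISTINCT).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable (Col1 TEXT, Col2 TEXT);
INSERT INTO MyTable VALUES ('a','x'),('a','x'),('b','x'),('c','y');
""")
(via_subquery,) = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT 1 AS dummy FROM MyTable WHERE Col2 = 'x' GROUP BY Col1
    ) dt
""").fetchone()
(via_distinct,) = con.execute(
    "SELECT COUNT(DISTINCT Col1) FROM MyTable WHERE Col2 = 'x'"
).fetchone()
print(via_subquery, via_distinct)  # 2 2
```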
|
```
SELECT COUNT(*)
FROM (
SELECT value = COUNT(*)
FROM MyTable
WHERE Col2 = 'x'
GROUP BY Col1
    ) t
```
But I think you should try this query instead:
```
SELECT COUNT(DISTINCT Col1)
FROM MyTable
WHERE Col2 = 'x'
```
|
SELECT number of groups resulted from a GROUP BY query
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2014",
""
] |
How can use the function SUM() for bit columns in T-SQL?
When I try do it as below:
```
SELECT SUM(bitColumn) FROM MyTable;
```
I get the error:
> Operand data type bit is invalid for sum operator.
|
```
SELECT SUM(CAST(bitColumn AS INT))
FROM dbo.MyTable
```
You need to cast it to a number.
Another solution:
```
SELECT COUNT(*)
FROM dbo.MyTable
WHERE bitColumn = 1
```
|
```
SELECT SUM(bitColumn * 1) FROM dbo.MyTable
```
Converts the bit to int by multiplication; clean and simple.
|
How can I use SUM for bit columns?
|
[
"",
"sql",
"sql-server",
"t-sql",
"aggregate-functions",
""
] |
I have an issue with a query I am trying to run below. I've covered and closed all brackets but SQL Server still highlights
> Incorrect syntax near ')'.
at the last bracket just by `AS REV`
```
SELECT
spot.spotid, rev.revenue
FROM
[Media].[dbo].[T_MED_Spot] AS SPOT
INNER JOIN
(SELECT
CASE
WHEN spot2.status = '4'
THEN spotcc.mediacost * 5
WHEN spot2.status IN ('7','8')
THEN spotcc.mediacost * 20
END AS revenue
FROM
(SELECT *
FROM [Media].[dbo].[T_MED_Spot] AS SPOT2
INNER JOIN [Media].[dbo].[T_MED_SpotCalculation] AS SPOTCC ON spot2.spotid = spotcc.spotid)
) AS REV ON spot.spotid = rev.spotid
```
I'm kind of at a loss here. I hope it's obvious what I'm trying to achieve overall, i.e. inner join the `T_MED_Spot` table with the sub-query named `REV` using the spotid.
Thanks
|
```
SELECT s.spotid,
CASE
WHEN s.[status] = '4' THEN c.mediacost * 5
WHEN s.[status] IN ('7','8') THEN c.mediacost * 20
END AS revenue
FROM [dbo].[T_MED_Spot] s
JOIN [dbo].[T_MED_SpotCalculation] c ON c.spotid = s.spotid
```
check this query...
|
Does this work:
```
SELECT spot.spotid, rev.revenue
FROM [Media].[dbo].[T_MED_Spot] AS SPOT
INNER JOIN
(SELECT
CASE
WHEN T.status = '4'
THEN T.mediacost*5
WHEN T.status IN ('7','8')
THEN T.mediacost*20
END AS revenue, spotid
FROM (SELECT * FROM [Media].[dbo].[T_MED_Spot] AS SPOT2
INNER JOIN [Media].[dbo].[T_MED_SpotCalculation] AS SPOTCC
ON spot2.spotid = spotcc.spotid) AS T ) AS REV
ON spot.spotid = rev.spotid
```
You didn't alias your inner query
|
SQL Server query syntax (sub queries, brackets and CASE WHEN)
|
[
"",
"sql",
"sql-server",
"subquery",
"inner-join",
"case-when",
""
] |
I am trying to perform *cumulative multiplication*. I am trying two methods to do this
## sample data:
```
DECLARE @TEST TABLE
(
PAR_COLUMN INT,
PERIOD INT,
VALUE NUMERIC(22, 6)
)
INSERT INTO @TEST VALUES
(1,601,10 ),
(1,602,20 ),
(1,603,30 ),
(1,604,40 ),
(1,605,50 ),
(1,606,60 ),
(2,601,100),
(2,602,200),
(2,603,300),
(2,604,400),
(2,605,500),
(2,606,600)
```
**Note:** The data in the `value` column will never be integer; the values will have a decimal part. To show the approximation problem I have kept the example values as integers.
---
## Method 1: EXP + LOG + SUM() Over(Order by)
In this method I am using the `EXP + LOG + SUM() Over(Order by)` technique to find the cumulative multiplication. In this method the values are not accurate; there are rounding and approximation issues in the result.
```
SELECT *,
Exp(Sum(Log(Abs(NULLIF(VALUE, 0))))
OVER(
PARTITION BY PAR_COLUMN
ORDER BY PERIOD)) AS CUM_MUL
FROM @TEST;
```
### Result:
```
PAR_COLUMN PERIOD VALUE CUM_MUL
---------- ------ --------- ----------------
1 601 10.000000 10
1 602 20.000000 200 -- 10 * 20 = 200(correct)
1 603 30.000000 6000.00000000001 -- 200 * 30 = 6000.000000000 (not 6000.00000000001) incorrect
1 604 40.000000 240000
1 605 50.000000 12000000
1 606 60.000000 720000000.000001 -- 12000000 * 60 = 720000000.000000 (not 720000000.000001) incorrect
2 601 100.000000 100
2 602 200.000000 20000
2 603 300.000000 5999999.99999999 -- 20000.000000 *300.000000 = 6000000.000000 (not 5999999.99999999) incorrect
2 604 400.000000 2399999999.99999
2 605 500.000000 1199999999999.99
2 606 600.000000 719999999999998
```
---
## Method 2: Traditional Multiplication (Recursive CTE)
This method works perfectly without any rounding or approximation problem.
```
;WITH CTE
AS (SELECT TOP 1 WITH TIES PAR_COLUMN,
PERIOD,
VALUE,
CUM_MUL = VALUE
FROM @TEST
ORDER BY PERIOD
UNION ALL
SELECT T.PAR_COLUMN,
T.PERIOD,
T.VALUE,
Cast(T.VALUE * C.CUM_MUL AS NUMERIC(22, 6))
FROM CTE C
INNER JOIN @TEST T
ON C.PAR_COLUMN = T.PAR_COLUMN
AND T.PERIOD = C.PERIOD + 1)
SELECT *
FROM CTE
ORDER BY PAR_COLUMN,PERIOD
```
### Result
```
PAR_COLUMN PERIOD VALUE CUM_MUL
---------- ------ --------- ----------------
1 601 10.000000 10.000000
1 602 20.000000 200.000000
1 603 30.000000 6000.000000
1 604 40.000000 240000.000000
1 605 50.000000 12000000.000000
1 606 60.000000 720000000.000000
2 601 100.000000 100.000000
2 602 200.000000 20000.000000
2 603 300.000000 6000000.000000
2 604 400.000000 2400000000.000000
2 605 500.000000 1200000000000.000000
2 606 600.000000 720000000000000.000000
```
---
Can anyone tell me ~~why in method 1 values are not accurate and~~ how to fix it? I tried by changing the data types to `Float` and by increasing the `scale` in `numeric` but no use.
I really want to use method 1 which is much faster than method 2.
**Edit:** Now I know the reason for approximation. Can anyone find a fix for this problem?
|
You can round to a big multiple; for your data:
```
--720000000000000 must be multiple of 600
select
round( 719999999999998/600, 0 ) * 600
--result: 720000000000000
```
[Test it at SQLFiddle](http://sqlfiddle.com/#!6/fe8c4/28)
```
create TABLE T
(
PAR_COLUMN INT,
PERIOD INT,
VALUE NUMERIC(22, 6)
)
INSERT INTO T VALUES
(1,601,10.1 ), --<--- I put decimals just to test!
(1,602,20 ),
(1,603,30 ),
(1,604,40 ),
(1,605,50 ),
(1,606,60 ),
(2,601,100),
(2,602,200),
(2,603,300),
(2,604,400),
(2,605,500),
(2,606,600)
```
**Query 1**:
```
with T1 as (
SELECT *,
Exp(Sum(Log(Abs(NULLIF(VALUE, 0))))
OVER(
PARTITION BY PAR_COLUMN
ORDER BY PERIOD)) AS CUM_MUL,
VALUE AS CUM_MAX1,
LAG( VALUE , 1, 1.)
OVER(
PARTITION BY PAR_COLUMN
ORDER BY PERIOD ) AS CUM_MAX2,
LAG( VALUE , 2, 1.)
OVER(
PARTITION BY PAR_COLUMN
ORDER BY PERIOD ) AS CUM_MAX3
FROM T )
select PAR_COLUMN, PERIOD, VALUE,
( round( ( CUM_MUL / ( CUM_MAX1 * CUM_MAX2 * CUM_MAX3) ) ,6)
*
cast( ( 1000000 * CUM_MAX1 * CUM_MAX2 * CUM_MAX3) as bigint )
) / 1000000.
as CUM_MUL
FROM T1
```
**[Results](http://sqlfiddle.com/#!6/fe8c4/28/0)**:
```
| PAR_COLUMN | PERIOD | VALUE | CUM_MUL |
|------------|--------|-------|-----------------|
| 1 | 601 | 10.1 | 10.1 | --ok! because my data
| 1 | 602 | 20 | 202 |
| 1 | 603 | 30 | 6060 |
| 1 | 604 | 40 | 242400 |
| 1 | 605 | 50 | 12120000 |
| 1 | 606 | 60 | 727200000 |
| 2 | 601 | 100 | 100 |
| 2 | 602 | 200 | 20000 |
| 2 | 603 | 300 | 6000000 |
| 2 | 604 | 400 | 2400000000 |
| 2 | 605 | 500 | 1200000000000 |
| 2 | 606 | 600 | 720000000000000 |
```
*Notice I multiply by 1,000,000 to work without decimals*
|
In pure T-SQL `LOG` and `EXP` operate with the `float` type (8 bytes), which has only [15-17 significant digits](https://en.wikipedia.org/wiki/Double-precision_floating-point_format). Even that last 15th digit can become inaccurate if you sum large enough values. Your data is `numeric(22,6)`, so 15 significant digits is not enough.
`POWER` can return `numeric` type with potentially higher precision, but it is of little use for us, because both `LOG` and `LOG10` can return only `float` anyway.
To demonstrate the problem I'll change the type in your example to `numeric(15,0)` and use `POWER` instead of `EXP`:
```
DECLARE @TEST TABLE
(
PAR_COLUMN INT,
PERIOD INT,
VALUE NUMERIC(15, 0)
);
INSERT INTO @TEST VALUES
(1,601,10 ),
(1,602,20 ),
(1,603,30 ),
(1,604,40 ),
(1,605,50 ),
(1,606,60 ),
(2,601,100),
(2,602,200),
(2,603,300),
(2,604,400),
(2,605,500),
(2,606,600);
SELECT *,
POWER(CAST(10 AS numeric(15,0)),
Sum(LOG10(
Abs(NULLIF(VALUE, 0))
))
OVER(PARTITION BY PAR_COLUMN ORDER BY PERIOD)) AS Mul
FROM @TEST;
```
**Result**
```
+------------+--------+-------+-----------------+
| PAR_COLUMN | PERIOD | VALUE | Mul |
+------------+--------+-------+-----------------+
| 1 | 601 | 10 | 10 |
| 1 | 602 | 20 | 200 |
| 1 | 603 | 30 | 6000 |
| 1 | 604 | 40 | 240000 |
| 1 | 605 | 50 | 12000000 |
| 1 | 606 | 60 | 720000000 |
| 2 | 601 | 100 | 100 |
| 2 | 602 | 200 | 20000 |
| 2 | 603 | 300 | 6000000 |
| 2 | 604 | 400 | 2400000000 |
| 2 | 605 | 500 | 1200000000000 |
| 2 | 606 | 600 | 720000000000001 |
+------------+--------+-------+-----------------+
```
Each step here loses precision. Calculating LOG loses precision, SUM loses precision, EXP/POWER loses precision. With these built-in functions I don't think you can do much about it.
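The precision loss is easy to reproduce outside SQL Server: a double-precision float carries only ~15-17 significant digits, so the EXP(SUM(LOG(x))) round trip drifts, while an exact decimal running product does not. A minimal sketch:

```python
# Float EXP(SUM(LOG)) round trip vs. an exact decimal running product.
import math
from decimal import Decimal

values = [100, 200, 300, 400, 500, 600]

via_exp_log = math.exp(sum(math.log(v) for v in values))  # float round trip
exact = Decimal(1)
for v in values:
    exact *= Decimal(v)  # exact decimal product

print(via_exp_log)  # close to, but typically not exactly, 720000000000000
print(exact)        # 720000000000000
```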
---
So, the answer is - use CLR with C# [`decimal`](https://msdn.microsoft.com/en-us/library/364x0z75.aspx) type (not `double`), which supports higher precision (28-29 significant digits). Your original SQL type `numeric(22,6)` would fit into it. And you wouldn't need the trick with `LOG/EXP`.
---
Oops. I tried to make a CLR aggregate that calculates Product. It works in my tests, but only as a simple aggregate, i.e.
This works:
```
SELECT T.PAR_COLUMN, [dbo].[Product](T.VALUE) AS P
FROM @TEST AS T
GROUP BY T.PAR_COLUMN;
```
And even `OVER (PARTITION BY)` works:
```
SELECT *,
[dbo].[Product](T.VALUE)
OVER (PARTITION BY PAR_COLUMN) AS P
FROM @TEST AS T;
```
But, running product using `OVER (PARTITION BY ... ORDER BY ...)` doesn't work (checked with SQL Server 2014 Express 12.0.2000.8):
```
SELECT *,
[dbo].[Product](T.VALUE)
OVER (PARTITION BY T.PAR_COLUMN ORDER BY T.PERIOD
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS CUM_MUL
FROM @TEST AS T;
```
> Incorrect syntax near the keyword 'ORDER'.
A search found this [connect item](https://connect.microsoft.com/SQLServer/feedback/details/586867/user-defined-ranking-functions), which is closed as "Won't Fix" and this [question](https://stackoverflow.com/questions/22352026/is-it-possible-to-use-user-defined-aggregates-clr-with-window-functions-over).
---
The C# code:
```
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.IO;
using System.Collections.Generic;
using System.Text;
namespace RunningProduct
{
[Serializable]
[SqlUserDefinedAggregate(
Format.UserDefined,
MaxByteSize = 17,
IsInvariantToNulls = true,
IsInvariantToDuplicates = false,
IsInvariantToOrder = true,
IsNullIfEmpty = true)]
public struct Product : IBinarySerialize
{
private bool m_bIsNull; // 1 byte storage
private decimal m_Product; // 16 bytes storage
public void Init()
{
this.m_bIsNull = true;
this.m_Product = 1;
}
public void Accumulate(
[SqlFacet(Precision = 22, Scale = 6)] SqlDecimal ParamValue)
{
if (ParamValue.IsNull) return;
this.m_bIsNull = false;
this.m_Product *= ParamValue.Value;
}
public void Merge(Product other)
{
SqlDecimal otherValue = other.Terminate();
this.Accumulate(otherValue);
}
[return: SqlFacet(Precision = 22, Scale = 6)]
public SqlDecimal Terminate()
{
if (m_bIsNull)
{
return SqlDecimal.Null;
}
else
{
return m_Product;
}
}
public void Read(BinaryReader r)
{
this.m_bIsNull = r.ReadBoolean();
this.m_Product = r.ReadDecimal();
}
public void Write(BinaryWriter w)
{
w.Write(this.m_bIsNull);
w.Write(this.m_Product);
}
}
}
```
Install CLR assembly:
```
-- Turn advanced options on
EXEC sys.sp_configure @configname = 'show advanced options', @configvalue = 1 ;
GO
RECONFIGURE WITH OVERRIDE ;
GO
-- Enable CLR
EXEC sys.sp_configure @configname = 'clr enabled', @configvalue = 1 ;
GO
RECONFIGURE WITH OVERRIDE ;
GO
CREATE ASSEMBLY [RunningProduct]
AUTHORIZATION [dbo]
FROM 'C:\RunningProduct\RunningProduct.dll'
WITH PERMISSION_SET = SAFE;
GO
CREATE AGGREGATE [dbo].[Product](@ParamValue numeric(22,6))
RETURNS numeric(22,6)
EXTERNAL NAME [RunningProduct].[RunningProduct.Product];
GO
```
---
This [question](https://dba.stackexchange.com/questions/114403/date-range-rolling-sum-using-window-functions) discusses calculation of a running SUM in great details and [Paul White shows in his answer](https://dba.stackexchange.com/a/114941/57105) how to write a CLR function that calculates running SUM efficiently. It would be a good start for writing a function that calculates running Product.
Note, that he uses a different approach. Instead of making a custom **aggregate** function, Paul makes a function that returns a table. The function reads the original data into memory and performs all required calculations.
It may be easier to achieve the desired effect by implementing these calculations on your client side using the programming language of your choice. Just read the whole table and calculate running product on the client. Creating CLR function makes sense if the running product calculated on the server is an intermediary step in a more complex calculations that would aggregate data further.
---
One more idea that comes to mind.
Find a third-party .NET math library that offers `Log` and `Exp` functions with high precision. Make a CLR version of these **scalar** functions. And then use the `EXP + LOG + SUM() Over (Order by)` approach, where `SUM` is the built-in T-SQL function, which supports `Over (Order by)` and `Exp` and `Log` are custom CLR functions that return not `float`, but high-precision `decimal`.
Note, that high precision calculations may also be slow. And using CLR scalar functions in the query may also make it slow.
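As a sketch of that last idea in Python: the `decimal` module provides high-precision `ln()` and `exp()`, so the LOG + SUM + EXP round trip can be made accurate to the chosen working precision instead of float's ~15 digits (the precision of 40 below is an arbitrary choice, comfortably above `numeric(22,6)`):

```python
# High-precision LOG + SUM + EXP round trip with the decimal module.
from decimal import Decimal, getcontext

getcontext().prec = 40  # well beyond numeric(22,6)
values = [Decimal(v) for v in (100, 200, 300, 400, 500, 600)]

log_sum = sum(v.ln() for v in values)
product = log_sum.exp()
print(product)  # 720000000000000.000... to within the working precision
```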
|
Rounding issue in LOG and EXP functions
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I am trying to calculate the ratio of rows in table A to rows in table B. This would have been easy if I only had to calculate the ratio for the whole sets, but I want to group them by a category that exists in table B.
Table A and B are linked through a primary key (id).
Edit:
I tried the following:
```
select
(select count(*) from class_a a left join football_players b on (b.id = a.id) group by age)
/
(select count(*) from football_players group by age)
```
but the group by command does not work here.
|
You need an outer join, then divide the count of rows from the outer-joined table by the total number of rows in the group:
```
select fp.age,
       (count(cl.id)::numeric / count(*)::numeric) * 100 as ratio,
count(cl.id) as class_count,
count(*) as age_count
from football_players fp
left join class cl on fp.id = cl.id
group by fp.age
```
The cast `::numeric` is necessary to get fractional values, otherwise the result of the division will be converted to an integer.
SQLFiddle: <http://sqlfiddle.com/#!15/3373c/3>
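A quick illustration of why the cast is needed: with two integer operands the division truncates. SQLite behaves like Postgres here, so an in-memory database shows it directly:

```python
# Integer division truncates; cast one operand to get a fraction.
import sqlite3

con = sqlite3.connect(":memory:")
(int_div,) = con.execute("SELECT 1 / 4").fetchone()
(real_div,) = con.execute("SELECT CAST(1 AS REAL) / 4").fetchone()
print(int_div, real_div)  # 0 0.25
```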
|
Here it is: [DEMO](http://sqlfiddle.com/#!9/e52e7/9)
```
select age,
(1-(select count(*) from tableA a where a.id=b.id)/count(*) )*100
from tableB b
group by age
```
|
Dividing count on one table from another and grouping the results
|
[
"",
"sql",
"postgresql",
"count",
"group-by",
"divide",
""
] |
I have a Student table consists of following parameters
```
[ID] [nvarchar](50) NOT NULL,
[Firsname] [nvarchar](50) NOT NULL,
[Lastname] [nvarchar](50) NOT NULL,
[Melicode] [nchar](10) NOT NULL,
[City] [nvarchar](50) NOT NULL,
[Province] [nvarchar](50) NOT NULL,
[Active] [int] NULL
```
I want to write a table-valued function named Show which takes one parameter, a number. The function should act as follows:
* if @number = 1 , returns all columns from Student table
* if @number = 2 , returns only City from Student
* if @number = 3 , returns only Province from Student
I wrote the following T-SQL, but it only works for `if (@number = 1)`. When the user enters @number as 2 or 3, the function does not work. Thank you.
```
Create function Show(@number int)
RETURNS @result TABLE
(
[ID] [nvarchar](50) NOT NULL,
[Firsname] [nvarchar](50) NOT NULL,
[Lastname] [nvarchar](50) NOT NULL,
[Melicode] [nchar](10) NOT NULL,
[City] [nvarchar](50) NOT NULL,
[Province] [nvarchar](50) NOT NULL,
[Active] [int] NULL
)
AS
BEGIN
IF (@number = 1)
INSERT INTO @result SELECT * from Student
IF (@number = 2)
INSERT INTO @result (City) values ((SELECT City from Student))
IF (@number = 3)
INSERT INTO @result (Province) values ((SELECT Province from Student))
RETURN -- @Players (variable only required for Scalar functions)
END
go
select *from dbo.show(1)
```
|
This is **not** going to work:
```
INSERT INTO @result (City)
VALUES ((SELECT City from Student))
```
**Either** you have all the values as scalar SQL variables, or literals - then you can use
```
INSERT INTO @result (City)
VALUES ('New York')
INSERT INTO @result (City)
VALUES (@ChosenCity)
```
**or** you have a `SELECT` statement to fill the values - then you need this syntax:
```
INSERT INTO @result (City)
SELECT City
FROM Student
```
*without* the `VALUES` keyword. And as @GiorgiNakeuri correctly states - this will then **fail** because all your columns require a value (have the `NOT NULL` attribute), so this insert cannot succeed - you need to provide **all** `NOT NULL` values (or define a default value for each column)
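A minimal SQLite demonstration of the difference: `INSERT ... SELECT` copies however many rows the query returns, which the `VALUES` form cannot do with a multi-row subquery. The table contents are invented:

```python
# INSERT ... SELECT copies all matching rows in one statement.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Student (City TEXT);
INSERT INTO Student VALUES ('Tehran'),('Shiraz'),('Tabriz');
CREATE TABLE result (City TEXT);
""")
con.execute("INSERT INTO result (City) SELECT City FROM Student")
(n,) = con.execute("SELECT COUNT(*) FROM result").fetchone()
print(n)  # 3
```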
|
```
CREATE FUNCTION dbo.Show
(
@number INT
)
RETURNS @result TABLE
(
ID NVARCHAR(50),
Firsname NVARCHAR(50),
Lastname NVARCHAR(50),
Melicode NCHAR(10),
City NVARCHAR(50),
Province NVARCHAR(50),
Active INT
)
AS
BEGIN
IF (@number = 1)
INSERT INTO @result
SELECT * FROM dbo.Student
IF (@number = 2)
INSERT INTO @result (City)
SELECT City FROM dbo.Student
IF (@number = 3)
INSERT INTO @result (Province)
SELECT Province FROM dbo.Student
RETURN
END
GO
SELECT * FROM dbo.Show(2)
```
|
Table-Valued Function using IF statement in SQL Server
|
[
"",
"sql",
"sql-server",
"user-defined-functions",
""
] |
I need to be able to report on clients that visited my business 6 times, the 6th time being today.
I've got this so far, but this won't pick up someone who had visited a total of 5 times yesterday, and then twice today.
```
SELECT Member_FullName, Branch_Name, Member_Email1, Account_Description, Visit_LastVisitDateOnly
FROM View1
WHERE (Visit_TotalVisits = '6')
AND (Visit_LastVisitDateOnly = CONVERT(DATE, GETDATE()))
```
Rows from the visits table (I was asked for the structure of the table):
```
VisitID BranchID MemberID AccountID Visit
BF98FAC1-F430-47AD-B810-02744A1633EA C4E833C0-7675-4650-8D58-F64DF87BB0F2 E90EC99B-8C15-4F01-AEFC-60430BE4B6BF C404B81D-85C5-42D1-8FD2-52657960FD9A 2015-11-20 16:00:00.000
5C0CB2F0-3ED9-441F-A789-03B679FF85E7 C4E833C0-7675-4650-8D58-F64DF87BB0F2 E90EC99B-8C15-4F01-AEFC-60430BE4B6BF C404B81D-85C5-42D1-8FD2-52657960FD9A 2015-11-20 16:00:00.000
```
|
```
SELECT AccountID
FROM View1 v
INNER JOIN (SELECT AccountID
FROM Visits
WHERE Visit < CONVERT(DATE, GETDATE())
GROUP BY AccountID
HAVING COUNT(1) = 5) hist ON v.AccountID = hist.AccountID
AND Visit_LastVisitDate = CONVERT(DATE, GETDATE())
```
The subquery (aliased `hist`) will pick up all visitors who visited the shop exactly 5 times in the past.
Then it joins with the view that has the last visit date. If the last visit is today, it means it was the 6th, 7th, Nth or whatever visit, but the point is that the 6th visit occurred today.
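A compact check of that logic ("exactly 5 earlier visits, plus a visit today") using SQLite's `date('now')` in place of `GETDATE()`; the account IDs and dates are fabricated:

```python
# Accounts with exactly 5 past visits and one visit today qualify.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Visits (AccountID TEXT, Visit TEXT)")
past = [("A", f"2015-11-{d:02d}") for d in range(1, 6)]   # A: 5 past visits
past += [("B", f"2015-11-{d:02d}") for d in range(1, 4)]  # B: only 3 past visits
con.executemany("INSERT INTO Visits VALUES (?, ?)", past)
con.execute("INSERT INTO Visits VALUES ('A', date('now'))")  # A's 6th visit today
con.execute("INSERT INTO Visits VALUES ('B', date('now'))")  # B's 4th visit today

result = con.execute("""
    SELECT v.AccountID
    FROM Visits v
    JOIN (SELECT AccountID
          FROM Visits
          WHERE Visit < date('now')
          GROUP BY AccountID
          HAVING COUNT(*) = 5) hist ON v.AccountID = hist.AccountID
    WHERE v.Visit = date('now')
""").fetchall()
print(result)  # [('A',)]
```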
|
replace:
```
Visit_TotalVisits = 6
```
by:
```
Visit_TotalVisits >= 6
```
|
SQL - Selecting the 6th visit from a customer
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I have added a new field to the data\_entry table and want to fill it with the value of the employee\_write\_task table's `emp_id`, based on `employee_write_task.task_id = data_entry.id and employee_write_task.empType_id = 1` (here 1 is the type of task for data\_entry).
I have tried something like this:
```
INSERT INTO data_entry (emp_id)
SELECT emp_id FROM employee_write_task
WHERE employee_write_task.task_id = data_entry.id AND employee_write_task.empType_id = 1
```
but it doesn't seem to work. What am I missing?
I have attached the tables and their attributes below for reference.
[](https://i.stack.imgur.com/d4oOU.png)
|
I think you should be using an update statement instead of an insert. Try the following:
```
UPDATE data_entry set emp_id =
(SELECT emp_id FROM employee_write_task
WHERE employee_write_task.task_id = data_entry.id AND employee_write_task.empType_id = 1)
```
This query may fail if the below select retrieves more than one value.
```
SELECT emp_id FROM employee_write_task
WHERE employee_write_task.task_id = data_entry.id
AND employee_write_task.empType_id = 1
```
In such a case, you should add a few more conditions to uniquely identify the record.
Regards.
|
```
UPDATE data_entry
LEFT JOIN employee_write_task
ON data_entry.id = employee_write_task.task_id
SET data_entry.emp_id = employee_write_task.emp_id
WHERE employee_write_task.empType_id = 1;
```
|
How to updated all row's field of one table from data of another table with one time query?
|
[
"",
"mysql",
"sql",
""
] |
I have a simple query shown below. I am trying to format the yyyymmdd column to Month Day, Year format. How can I do this?
The yyyymmdd returns 2015-11-13 00:00:00.000
I would like it to return November 13, 2015
```
SELECT
CONVERT(datetime, yyyymmdd, 101) as 'Date of Data',
cd.[text] as 'Last POS Upload Time Each Day'
FROM MyTable CD
WHERE CD.[Store] = @Store
AND YYYYMMDD between @Beginning_Date AND @Ending_Date
AND [text] <> ''
group by cd.[Store], [text], cd.[yyyymmdd]
order by cd.[store], cd.[yyyymmdd], [text]
```
|
There may be a built-in function that returns exactly what you're looking for, but you always have the option to roll it yourself...
```
SELECT
DATENAME(month, yyyymmdd) + ' ' +
CONVERT(nvarchar(50), DATEPART(day, yyyymmdd)) + ', ' +
CONVERT(nvarchar(50), DATEPART(Year, yyyymmdd)) AS [Your Date In Your Format],
...rest of your query
```
Here's a [proof of concept sqlfiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/3887).
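As an aside, if the formatting can happen in the client instead of T-SQL, the same output is a one-liner; shown here in Python purely as an illustration:

```python
# Client-side equivalent of the "Month Day, Year" formatting.
from datetime import datetime

d = datetime.strptime("2015-11-13 00:00:00.000", "%Y-%m-%d %H:%M:%S.%f")
formatted = d.strftime("%B %d, %Y")
print(formatted)  # November 13, 2015
```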
|
Try this:
```
SELECT
DATENAME(MM, '2015-11-13 00:00:00.000')
    + ' ' + RIGHT(CONVERT(VARCHAR(6), '2015-11-13 00:00:00.000', 107), 2)
    + ', ' + RIGHT(CONVERT(VARCHAR(12), '2015-11-13 00:00:00.000', 107), 4) AS 'FormattedDate'
```
Let me know if you have any question.
|
Formatting yyyymmdd varchar column to Month, Day Year in SQL
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I have a view that joins two tables and applies a filter. When I run the view it takes <2s, when I run the same statement as a direct SELECT rather than a view, it also takes <2s.
The confusing part is that if I add an order by when selecting from the view, the query takes >20s, where adding the order by to the query directly takes <2s
My code looks a bit like this:
Creating the view:
```
CREATE VIEW MY_VIEW AS
SELECT T1.COLUMN1, T2.COLUMN2
FROM TABLE1 T1, TABLE2 T2
WHERE T1.COLUMN3=T2.COLUMN3
```
Querying the view (<2s):
```
SELECT * FROM MY_VIEW
```
Querying directly with no view (<2s)
```
SELECT T1.COLUMN1, T2.COLUMN2
FROM TABLE1 T1, TABLE2 T2
WHERE T1.COLUMN3=T2.COLUMN3
```
Querying the view with an ORDER BY (>20s)
```
SELECT * FROM MY_VIEW ORDER BY COLUMN1
```
Querying the view with an ORDER BY (<2s)
```
SELECT T1.COLUMN1, T2.COLUMN2
FROM TABLE1 T1, TABLE2 T2
WHERE T1.COLUMN3=T2.COLUMN3
ORDER BY T1.COLUMN1
```
Do you have any advice on how to ORDER BY more efficiently on views? Or is the only option to scrap the use of views and stick to direct SELECTs?
The plans are the following, respectively (sorry it's images; I found it easier than writing the tables in the StackOverflow editor):
[](https://i.stack.imgur.com/HXtj9.png)
[](https://i.stack.imgur.com/T56LO.png)
[](https://i.stack.imgur.com/Zf2Wk.png)
[](https://i.stack.imgur.com/walJS.png)
|
In Oracle 11g a view is not actually stored unless you create a materialized view. Therefore what is happening here is that all the `ORDER BY` actions in the plan are executed twice, regardless of the fact that it is the same `ORDER BY`.
|
The issue that you are facing is that the view definition is compiled. The execution plan for a query is set when the query is compiled, rather than when it is used.
As you have observed, the best execution plan for:
```
SELECT T1.COLUMN1, T2.COLUMN2
FROM TABLE1 T1 JOIN
TABLE2 T2
ON T1.COLUMN3 = T2.COLUMN3;
```
is different from the best execution plan for:
```
SELECT T1.COLUMN1, T2.COLUMN2
FROM TABLE1 T1 JOIN
TABLE2 T2
ON T1.COLUMN3 = T2.COLUMN3
ORDER BY T1.COLUMN1;
```
(If I had to guess, there is an index on `TABLE1(COLUMN1, COLUMN3)` that can be used for both the `JOIN` and the `ORDER BY`, but the Oracle optimizer might choose another execution path without the `ORDER BY`.)
What can you do? First, you can try recompiling the view. The execution plan may not be optimal if the underlying data has changed. Second, you can insert hints into the query to force a particular plan.
|
ORDER BY slower on a view rather than a direct SELECT
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
This is a shortened example. I realize I can't create indexes for every possible scenario but if I want to get our Data Warehouse down to 3-4 indexes per table, here is an example question.
If I have a table with Product, State, ZipCode, DatePurchased, and 50 other columns.
* and if a user sometimes wants to query by Product,
* and then sometimes by Product & State
* and then sometimes by DatePurchased
* and then sometimes by State only
Do I need to make an index for these 4 scenarios? Or can I make one index that includes all 4 of these columns, and will Oracle see I am only asking for one or two of the fields in my where clause?
Especially for Product & State: do I make one index that includes both of these columns? And if the where clause only has Product, will it hit that index? Or if I only want State, will it hit that index? Or do I need separate indexes: one for Product, one for State, and one for Product & State combined?
|
I would probably create indexes on the following columns:
* (Product, State) <--- should be used by both Product and Product+State
queries
* (State)
* (DatePurchased)
With regards to the last one; if the table is big or going to be big, I'd partition on the DatePurchased column, and that way, you get the benefit of partition pruning when you query on DatePurchased, and you probably wouldn't need a separate index.
As with everything, you should test it out with your tables and data!
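As a rough illustration of the leading-column rule, here is a hedged Python sketch (using SQLite purely as a stand-in for Oracle — the table and index names are made up, but the "leading column of a composite index is usable, a trailing column alone is not" behaviour is similar in spirit):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE purchases (Product TEXT, State TEXT, ZipCode TEXT, DatePurchased TEXT)")
con.execute("CREATE INDEX idx_product_state ON purchases (Product, State)")

def plan(sql):
    # Concatenate the "detail" column of the EXPLAIN QUERY PLAN output
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Leading-column predicate: the composite (Product, State) index is usable
plan_product = plan("SELECT * FROM purchases WHERE Product = 'x'")
# Trailing column alone: a plain table scan, the composite index does not help
plan_state = plan("SELECT * FROM purchases WHERE State = 'x'")
```

Running `EXPLAIN PLAN` against your own Oracle tables is the authoritative way to check, but the pattern above shows why a separate `(State)` index was suggested.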
|
I try to keep it simple and adjust as necessary. Since you know the user will be querying by product, state, and datepurchased, put an index on each of those 3 columns. It should help the product/state case as well. If empirically you discover the product/state case needs improvement, add it. BTW, extra indices will slow down inserts, but should not impact queries negatively.
|
Oracle Index columns
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to make a `SELECT` query based on two other table data.
`_sites_` array has 5 objects, `_cats_` array has 12 objects for now. So, in order to get `news` based on these objects, I have to run `12 * 5` SQL queries (one for each category & site combination).
How can I reduce query count?
```
getSites(function (_sites_) {
getCategories(function (_cats_) {
var array = []
async.each(_sites_, function (site, next) {
async.each(_cats_, function (cat, next2) {
var sql = "SELECT * "+
"FROM news "+
"WHERE category = ? AND site = ? "+
"ORDER BY id DESC LIMIT 30"
db.query(sql, [cat.data, site.data], function (result) {
array = array.concat(result)
next2()
})
}, function () {
next()
})
}, function () {
// finished
console.log(array)
})
})
})
function getCategories (pass) { // get used categories
db.query("SELECT * FROM cat WHERE cat_name in (SELECT news_url.rss_cat FROM news_url)"+
"ORDER by cat_pos ASC", [], function (result) {
var array = []
if(result) {
for (var j = 0; j < result.length; j++) {
array.push({type: 1, data:result[j].cat_name})
};
}
pass(array)
})
}
function getSites (pass) {
db.query("SELECT * FROM news_url", [], function (result) {
var array = []
if(result) {
for (var j = 0; j < result.length; j++) {
array.push({type: 0, data:result[j].rss_site})
};
}
pass(array)
})
}
```
---
**cats** table: (categories)
```
|id|cat_name|cat_pos|
|1 |Fun | 1 |
|1 |Bomm | 2 |
```
**news\_url** table: (rss site names and urls)
```
|id|rss_site|rss_url |rss_cat
|1 |Cnn |http://...|Fun
|2 |Fox |http://...|Bomm
|3 |Cnn |http://...|Bomm
|4 |Routers |http://...|Fun
```
**news** table:
```
|id|news_site | news_cat | news_content
|1 | Cnn | Bomm | Some random news content from Cnn
|2 | Cnn | Fun | Some random news content from Cnn
|3 | Fox | Fun | Some random news content from Fox
```
Expected result is a separate N-row limit for each combination of category and site, if possible using only a few queries.
|
If you don't want to query separately for each category for each site, then why are you doing that? If you just omit the `WHERE` clause from the query you are using now, then in one go it will give you all the news for all categories and sites, subject to the 30-row `LIMIT` you have placed.
It will be trickier if you want a separate 30-row limit for each combination of category and site, as your current approach provides, but I'm inclined to suspect that the `LIMIT` isn't doing quite what you want it to do now, or at least that it would not be doing so if you had more data.
**Edited to add:**
If, as you indicate in comments, you really do want a 30-row limit for each combination of category and site then, for mysql, I see only two alternatives:
* form one giant query by connecting all the individual queries you now perform via the `UNION ALL` operator. Submit only that query. This is likely your best bet, but you could also
* use a simple query with no `WHERE` clause, as I first suggested, and apply the row limits on the node.js side. This approach might be facilitated by ordering additionally by site and category. Although this results in a simpler query, it may involve many rows being transferred and then discarded.
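To make the `UNION ALL` idea concrete, here is a hedged Python/SQLite sketch (the sample rows and the limit value are illustrative, not from the question) that builds one compound statement with a per-combination `LIMIT` and fetches everything in a single round trip:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, news_site TEXT, news_cat TEXT)")
rows = [("Cnn", "Bomm"), ("Cnn", "Fun"), ("Fox", "Fun"), ("Cnn", "Fun")]
con.executemany("INSERT INTO news (news_site, news_cat) VALUES (?, ?)", rows)

sites, cats, limit = ["Cnn", "Fox"], ["Fun", "Bomm"], 30
# One wrapped sub-select per (site, category) pair, each with its own LIMIT,
# glued together with UNION ALL so one query replaces the 12 * 5 round trips.
parts = [
    "SELECT * FROM (SELECT * FROM news WHERE news_site = ? AND news_cat = ? "
    "ORDER BY id DESC LIMIT ?)"
    for _ in sites for _ in cats
]
params = [p for s in sites for c in cats for p in (s, c, limit)]
sql = " UNION ALL ".join(parts)
fetched = con.execute(sql, params).fetchall()
```

Each member is wrapped in `SELECT * FROM (...)` because `LIMIT` cannot be applied directly to an individual member of a compound select.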
|
It sounds to me like you are looking for a Cartesian Product. You get this when you select from two different tables without defining a relationship between them. So, for a simple example:
```
SELECT rss_site, cat_name
FROM categories, news_url;
```
Will return every possible (rss\_site, cat\_name) combination. I would note, however, that since you have a few repeated rss\_site values (cnn in your sample data) you may want to add `DISTINCT` in your select clause to only get distinct pairs:
```
SELECT DISTINCT rss_site, cat_name
FROM categories, news_url;
```
Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/aa4c3/2) example.
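A quick runnable check of the cross-product behaviour (a hedged Python/SQLite sketch with invented rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE categories (cat_name TEXT)")
con.execute("CREATE TABLE news_url (rss_site TEXT)")
con.executemany("INSERT INTO categories VALUES (?)", [("Fun",), ("Bomm",)])
con.executemany("INSERT INTO news_url VALUES (?)", [("Cnn",), ("Fox",), ("Cnn",)])

# Plain cross join: every (rss_site, cat_name) combination, duplicates included
pairs = con.execute("SELECT rss_site, cat_name FROM news_url, categories").fetchall()
# DISTINCT collapses the repeated Cnn rows into unique pairs
unique_pairs = con.execute(
    "SELECT DISTINCT rss_site, cat_name FROM news_url, categories").fetchall()
```

With 3 site rows (Cnn twice) and 2 categories, the plain product has 6 rows while the `DISTINCT` version has 4.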
|
MySQL select rows based on two tables
|
[
"",
"mysql",
"sql",
"node.js",
""
] |
I am currently running a SQL statement to retrieve 'total prize money earned by competitors of every country'.
However, I am currently hardcoding the country names in my `SELECT` statement as follows:
```
SELECT C2.NATIONALITY, SUM(C2.TOTALPRIZEMONEY) AS TOTALPRIZEMONEY
FROM COMPETITOR C2, COMPETITION C1
WHERE NATIONALITY IN ('USA','AUSTRALIA','SINGAPORE')
AND C1.TIMEPLANNED BETWEEN TO_DATE('01-JAN-15') AND TO_DATE ('31-DEC-15')
GROUP BY NATIONALITY;
```
May I know how can I remove the part that declares the country name in the statement, and to use a query to retrieve all countries available?
Below is my table structure:
[](https://i.stack.imgur.com/aO1OH.png)
|
You can use the competition table (country column) to get all potential countries (and get distinct values using group by).
```
SELECT C2.NATIONALITY, SUM(C2.TOTALPRIZEMONEY) AS TOTALPRIZEMONEY
FROM COMPETITOR C2, COMPETITION C1
WHERE NATIONALITY IN (SELECT COUNTRY FROM COMPETITION GROUP BY COUNTRY)
AND C1.TIMEPLANNED BETWEEN TO_DATE('01-JAN-15') AND TO_DATE ('31-DEC-15')
GROUP BY NATIONALITY;
```
|
Neither of the two tables shown here has a foreign key defined, so there must be some other table that contains the mapping.
If there is no such table, you will not be able to get the correct output, because the join will give you a cross product.
|
Oracle Database SQL Query
|
[
"",
"sql",
"oracle",
"select",
""
] |
I have a stored procedure that allows a null parameter and works fine in SSMS. Here is the SP:
```
ALTER PROCEDURE [dbo].[sp_RepInventorySales]
@SalesOrderNo varchar(9) = null
AS
BEGIN
SELECT SalesOrderNo, ItemCode, ContainerNo, SealNo, ContainerType, OrderQty
FROM SalesOrderDetail
where SalesOrderNo = isnull(@SalesOrderNo, SalesOrderNo )
```
Actually the SP is the source of my Crystal Report. When Me.txtSalesOrderNo.Text is NOT empty I get the correct output. However, if the textbox is empty, the stored procedure treats the parameter as an empty string ''. How can I replace the empty string with NULL? Here is the VB code:
```
Private Dset As DataSet
Private Da As SqlDataAdapter
Dim objRpt As New RepInventorySales ' used for the Crystal Report
Dset = New DataSet
Da = New SqlDataAdapter("sp_RepInventorySales '" & trim(Me.txtSalesOrderNo.Text) & "' ", Conn_DB)
Da.Fill(Dset)
If Dset.Tables(0).Rows.Count <> 0 Then
objRpt.SetDataSource(Dset.Tables(0))
```
I tried to use IIF in the query string of the SqlDataAdapter to replace the empty string with null, but it gives me an error:
```
Da = New SqlDataAdapter("sp_RepInventorySales '" & Trim(Me.IIf(txtTextBox.Text Is Nothing, Null, txtTextBox.Text).Text) & "' ", Conn_DB)
```
This is the final solution. Thank you for your help!
```
IF @SalesOrderNo = ''
SET @SalesOrderNo = NULL
IF @ContainerType = ''
SET @ContainerType = NULL
SELECT SalesOrderNo, ItemCode, ContainerNo, SealNo, ContainerType, OrderQty
FROM SalesOrderDetail
WHERE ( SalesOrderNo = @SalesOrderNo OR @SalesOrderNo IS NULL )
AND
( ContainerType = @ContainerType OR @ContainerType IS NULL )
```
|
Just put the condition in your SP:
```
IF @SalesOrderNo = '' SET @SalesOrderNo = NULL
IF @ContainerType = '' SET @ContainerType = NULL

SELECT SalesOrderNo, ItemCode, ContainerNo, SealNo, ContainerType, OrderQty
FROM SalesOrderDetail
WHERE (SalesOrderNo = @SalesOrderNo OR @SalesOrderNo IS NULL)
  AND (ContainerType = @ContainerType OR @ContainerType IS NULL)
```
|
You will always get an empty string because you are passing the parameter as a hardcoded string inside single quotes: `..."sp_RepInventorySales '" & trim(Me.txtSalesOrderNo.Text) & "'...`
Add parameter to the `SqlDataAdapter.SelectCommand.Parameters` collection properly, then you can use `DBNull.Value`
```
Da = New SqlDataAdapter("sp_RepInventorySales", Conn_DB)
DA.SelectCommand.CommandType = CommandType.StoredProcedure
'Parameter's type need to be Object because DBNull.Value cannot be converted to string
Dim paramValue as Object = DirectCast(DBNull.value, Object)
If String.IsNullOrEmpty(Me.txtSalesOrderNo.Text) = false Then
paramValue = Me.txtSalesOrderNo.Text.Trim()
End If
Dim param As New SqlParameter With {.ParameterName = "@SalesOrderNo",
.SqlDbType = SqlDbType.VarChar,
.Value = paramValue}
DA.SelectCommand.Parameters.Add(param)
Da.Fill(Dset)
```
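The same idea works in any client language: normalize the empty textbox value to `NULL` before binding the parameter, and let the query treat a `NULL` parameter as "no filter". A hedged Python/SQLite sketch of that pattern (the table contents are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SalesOrderDetail (SalesOrderNo TEXT, ItemCode TEXT)")
con.executemany("INSERT INTO SalesOrderDetail VALUES (?, ?)",
                [("SO1", "A"), ("SO2", "B")])

def fetch_orders(text_from_textbox: str):
    # Map an empty/whitespace textbox to NULL (None) before binding,
    # mirroring the IF @SalesOrderNo = '' SET @SalesOrderNo = NULL step.
    param = text_from_textbox.strip() or None
    sql = ("SELECT * FROM SalesOrderDetail "
           "WHERE SalesOrderNo = ?1 OR ?1 IS NULL")
    return con.execute(sql, (param,)).fetchall()

all_rows = fetch_orders("")      # NULL parameter -> no filter, both rows
one_row = fetch_orders(" SO1 ")  # trimmed value -> single matching row
```

The key point, as in the accepted VB answer, is a real `NULL` bound through the parameter API rather than an empty string spliced into the SQL text.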
|
How to replace empty string parameter to Null in stored procedure and VB.net
|
[
"",
"sql",
"vb.net",
""
] |
I want to retrieve all rows from the DB where the `OWNERKEY` is 1 but only the highest `DATAVERSION` of the `DATACONTROLID`.
In the example below I have two rows where `DATACONTROLID`= 1, they have 1 and 2 as `DATAVERSION`. I want to get the highest.
MyDB:
```
DATAKEY OWNERKEY OWNERTYPE DATAVERSION MALLKEY DATAVALUE DATAVALUETYPE DATACONTROLID DATADATE DATATIME DATASIGN
=========== ============ =========== =========== =========== ========= ============ ============= ========== =========== =========
4 1 2 1 1 1 2 1 2015-11-24 09:55:00:00 ADMIN
3 1 2 2 1 2 2 1 2015-11-23 20:55:00:00 ADMIN
2 1 2 1 1 3 2 2 2015-11-23 15:39:00:00 ADMIN
1 1 2 1 1 4 2 3 2015-11-23 11:29:00:00 ADMIN
```
Wanted result:
```
DATAKEY OWNERKEY OWNERTYPE DATAVERSION MALLKEY DATAVALUE DATAVALUETYPE DATACONTROLID DATADATE DATATIME DATASIGN
=========== ============ =========== =========== =========== ========= ============ ============= ========== =========== =========
3 1 2 2 1 2 2 1 2015-11-23 20:55:00:00 ADMIN
2 1 2 1 1 3 2 2 2015-11-23 15:39:00:00 ADMIN
1 1 2 1 1 4 2 3 2015-11-23 11:29:00:00 ADMIN
```
Where do I start?
```
SELECT *
FROM MyDB
WHERE OWNERKEY = 1
```
Statement above is the obviuos part, but how do I proceed from that?
I think I should use `MAX(DATAVERSION)` somehow, but what to group on? And can I use both `*` and `MAX`?
|
Here's a simple example using `GROUP BY` in `MSSQL`:
```
SELECT DATACONTROLID,
MAX(DATAVERSION) AS DATAVERSION, -- Note that 'AS' just gives the column a name in the result set, this isn't required.
OWNERKEY
FROM MyDB
WHERE OWNERKEY = 1
GROUP BY DATACONTROLID, OWNERKEY
```
This is assuming that the `DATACONTROLID` determines 'distinct' records and `DATAVERSION` is the instance of the record.
This will group the `DATACONTROLID` column and return the `MAX(DATAVERSION)` for the grouped result. In this case, `DATACONTROLID = 1` has `DATAVERSION = 1 and 2`, so this returns 2.
It's worth noting that if you want to see additional columns in your result *and they're not aggregates such as `MAX()`*, then you'll need to add them to the `GROUP BY` clause - as I have done here with the `OWNERKEY` column. If you try to `SELECT OWNERKEY` without having it in the `GROUP BY`, you'll receive an error.
EDIT:
Here's a better way to accomplish the same result by using a `JOIN`:
```
SELECT *
FROM MyDB mdb
INNER JOIN (
SELECT MAX(DATAVERSION) AS DATAVERSION,
DATACONTROLID
FROM MyDB
WHERE OWNERKEY = 1
GROUP BY DATACONTROLID
) AS mdb2
ON mdb2.DATACONTROLID = mdb.DATACONTROLID
AND mdb2.DATAVERSION = mdb.DATAVERSION
```
What this does is turns the same `GROUP BY` statement that I showed you before into a filtering table. The `INNER JOIN` portion is selecting the `MAX(DATAVERSION)` and the `DATACONTROLID` it belongs to, and returning the result as the temp table `mdb2`.
This new inner table will return the following:
```
DATAVERSION DATACONTROLID
2 1
1 2
1 3
```
Then we're taking that result and getting all of the rows `(SELECT *)` that match this criteria. Since this inner table doesn't contain a result for `DATAVERSION = 1 where DATACONTROLID = 1`, this row gets filtered out.
Conversely, if you only wanted to see the *older* versions and not the newest, you can change this criteria to `LEFT OUTER JOIN` and add `WHERE mdb2.DATAVERSION IS NULL`.
```
SELECT *
FROM MyDB mdb
LEFT OUTER JOIN (
SELECT MAX(DATAVERSION) AS DATAVERSION,
DATACONTROLID
FROM MyDB
WHERE OWNERKEY = 1
GROUP BY DATACONTROLID
) AS mdb2
ON mdb2.DATACONTROLID = mdb.DATACONTROLID
AND mdb2.DATAVERSION = mdb.DATAVERSION
WHERE mdb2.DATAVERSION IS NULL
```
Since we're selecting the `MAX(DATAVERSION)` in `mdb2`, it will contain null for the rows that don't meet this criteria.
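A quick way to convince yourself the join-on-grouped-max pattern works is to run it against a tiny in-memory table. A hedged Python/SQLite sketch using a cut-down version of the question's data (only the columns that matter here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyDB (DATAKEY INT, OWNERKEY INT, DATAVERSION INT, DATACONTROLID INT)")
con.executemany("INSERT INTO MyDB VALUES (?, ?, ?, ?)",
                [(4, 1, 1, 1), (3, 1, 2, 1), (2, 1, 1, 2), (1, 1, 1, 3)])

# Inner-join the table to the per-DATACONTROLID max version, exactly as above
sql = """
SELECT mdb.DATAKEY
FROM MyDB mdb
JOIN (SELECT MAX(DATAVERSION) AS DATAVERSION, DATACONTROLID
      FROM MyDB WHERE OWNERKEY = 1
      GROUP BY DATACONTROLID) mdb2
  ON mdb2.DATACONTROLID = mdb.DATACONTROLID
 AND mdb2.DATAVERSION = mdb.DATAVERSION
ORDER BY mdb.DATAKEY
"""
keys = [r[0] for r in con.execute(sql)]
```

Row 4 (the older version of `DATACONTROLID = 1`) is filtered out, leaving DATAKEYs 1, 2 and 3 — matching the wanted result in the question.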
|
Try this:
```
SELECT *
FROM (SELECT OWNERKEY,DATACONTROLID,DATAVERSION,
rank() over (partition by DATACONTROLID order by DATAVERSION desc) rnk
FROM MyDB WHERE OWNERKEY = 1)
WHERE rnk = 1;
```
Note: I haven't tested, but this should work
|
Select all rows but only the latest (highest) version from database
|
[
"",
"sql",
"pervasive-sql",
""
] |
I want increment `difficulty` by 1 but the value of `difficulty` should not go past 3. How do I do that?
This is my query
```
$query=mysqli_query($con,"UPDATE ticket
SET status = 'open',
difficulty = difficulty + 1
WHERE ticketid = '$_POST[ticketid]'")
```
|
## Schema
```
create table ticket
( ticketid int auto_increment primary key,
status varchar(20) not null,
difficulty int not null
);
insert ticket(status,difficulty) values (0,0),(0,0); -- 2 rows
```
run this query a bunch of times:
```
update ticket
set status ='open',
difficulty=least(difficulty+1,3)
where ticketid=2;
```
Now look at data
```
select * from ticket;
+----------+--------+------------+
| ticketid | status | difficulty |
+----------+--------+------------+
| 1 | 0 | 0 |
| 2 | open | 3 |
+----------+--------+------------+
```
See [Mysql Comparison functions](http://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html)
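For a quick check outside MySQL: SQLite's two-argument `min()` behaves like MySQL's `LEAST()`, so the capped increment can be sketched like this (schema trimmed down to the relevant columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ticket (ticketid INTEGER PRIMARY KEY, status TEXT, difficulty INT)")
con.execute("INSERT INTO ticket (status, difficulty) VALUES ('new', 0)")

# Run the capped increment more times than the cap allows;
# the two-argument min() plays the role of MySQL's LEAST().
for _ in range(5):
    con.execute("UPDATE ticket SET status = 'open', "
                "difficulty = min(difficulty + 1, 3) WHERE ticketid = 1")

status, difficulty = con.execute(
    "SELECT status, difficulty FROM ticket WHERE ticketid = 1").fetchone()
```

After five updates the value is still 3, never 5 — the cap holds no matter how often the statement runs.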
|
You can also use a if statement:
```
CREATE TABLE IF NOT EXISTS `ticket` (
`ticketid` int(11) NOT NULL AUTO_INCREMENT,
`status` varchar(20) NOT NULL,
`difficulty` int(11) NOT NULL,
PRIMARY KEY (`ticketid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
INSERT INTO ticket (difficulty, status) VALUES (0,0);
UPDATE
ticket
SET
`difficulty` = IF(`difficulty` < 3,`difficulty` + 1, 3)
WHERE
`ticketid`=1;
```
For more information have a look at <https://www.ask-sheldon.com/conditions/>.
|
SQL - Setting a maximum number when incrementing a value by 1
|
[
"",
"mysql",
"sql",
""
] |
With the lack of CTEs/recursive queries on VistaDB, I'm trying to formulate a viable query with a certain depth to query a PARENT/ID hierarchical self-referencing table. I had several ideas (SQL is from Firebird, as we are using it on the server side):
1. Do several joins, like this:
```
SELECT
"A"."ID",
"B"."ID",
"C"."ID",
"D"."ID"
FROM "NAVIGATION" AS A
LEFT JOIN "NAVIGATION" AS B ON (B.PARENT = A.ID)
LEFT JOIN "NAVIGATION" AS C ON (C.PARENT = B.ID)
LEFT JOIN "NAVIGATION" AS D ON (D.PARENT = C.ID)
WHERE "A"."ID" = CHAR_TO_UUID('00000000-0000-0000-0000-000000000000');
```
Then COALESCE on the A, B, C, D "ID" columns and use that as a subquery or join source for the actual rows to fetch the required content. However, as the first row on the first level might join onto several other rows, that didn't work - what I would need would be this:
```
A B C D
0 NULL NULL NULL
0 1 NULL NULL
0 1 2 NULL
0 1 2 3
0 1 2 4
0 1 2 5
```
Instead - as expected - I'm getting this:
```
A B C D
0 1 2 3
0 1 2 4
0 1 2 5
```
Any way to get the additional `NULL` rows?
2. Using `UNION` with subqueries. However, I can't figure out a viable syntax to get this done.
3. Maybe an alternative syntax. We only need a few levels of depth. Technically we could evaluate the result from (1.) in the application, but I prefer a more elegant approach, although it doesn't have to be very fast. We will usually only query two or three levels deep on the client, sometimes only one level. Still, it would be nice not to do it procedurally.
Some sample data as requested:
```
ID PARENT TITLE
0 NULL 'Root Node'
1 0 '1st Level Node'
2 1 '2nd Level Node'
3 2 '3nd Level Node 1'
4 2 '3nd Level Node 2'
5 2 '3nd Level Node 3'
```
|
It would help if you had sample data. But, your query cannot return `A`/`NULL`/`NULL`/`NULL` if there are matching rows in the other tables.
One way to get all hierarchies is to add a `NULL` value for each of the joins:
```
SELECT "A"."ID", "B"."ID", "C"."ID", "D"."ID"
FROM "NAVIGATION" AS A LEFT JOIN
     (SELECT N.PARENT, N.ID
      FROM "NAVIGATION" AS N
      UNION ALL
      SELECT NULL, NULL
     ) B
     ON B.PARENT = A.ID LEFT JOIN
     (SELECT N.PARENT, N.ID
      FROM "NAVIGATION" AS N
      UNION ALL
      SELECT NULL, NULL
     ) C
     ON C.PARENT = B.ID LEFT JOIN
     (SELECT N.PARENT, N.ID
      FROM "NAVIGATION" AS N
      UNION ALL
      SELECT NULL, NULL
     ) D
     ON D.PARENT = C.ID
WHERE "A"."ID" = CHAR_TO_UUID('00000000-0000-0000-0000-000000000000');
```
|
Here is the complete code for the curious. It can be expanded for more levels of course.
```
SELECT NAVIGATION.* FROM (
SELECT
COALESCE("E"."ID", "D"."ID", "C"."ID", "B"."ID", "A"."ID") AS FINAL_ID
FROM "NAVIGATION" AS A LEFT JOIN
(
SELECT NULL AS "ID", NULL AS "PARENT", NULL AS "TITLE"
UNION ALL
SELECT "NAVIGATION"."ID", "NAVIGATION"."PARENT", "NAVIGATION"."TITLE"
FROM "NAVIGATION"
) AS B
ON ("B"."PARENT" = "A"."ID") OR ("B"."ID" IS NULL) LEFT JOIN
(
SELECT NULL AS "ID", NULL AS "PARENT", NULL AS "TITLE"
UNION ALL
SELECT "NAVIGATION"."ID", "NAVIGATION"."PARENT", "NAVIGATION"."TITLE"
FROM "NAVIGATION"
) AS C
ON ("C"."PARENT" = "B"."ID") OR ("C"."ID" IS NULL) LEFT JOIN
(
SELECT NULL AS "ID", NULL AS "PARENT", NULL AS "TITLE"
UNION ALL
SELECT "NAVIGATION"."ID", "NAVIGATION"."PARENT", "NAVIGATION"."TITLE"
FROM "NAVIGATION"
) AS D
ON ("D"."PARENT" = "C"."ID") OR ("D"."ID" IS NULL) LEFT JOIN
(
SELECT NULL AS "ID", NULL AS "PARENT", NULL AS "TITLE"
UNION ALL
SELECT "NAVIGATION"."ID", "NAVIGATION"."PARENT", "NAVIGATION"."TITLE"
FROM "NAVIGATION"
) AS E
ON ("E"."PARENT" = "D"."ID") OR ("E"."ID" IS NULL)
WHERE "A"."ID" = '00000000-0000-0000-0000-000000000000'
)
LEFT JOIN NAVIGATION ON NAVIGATION.ID = FINAL_ID;
```
|
Hierarchical query without CTE
|
[
"",
"sql",
"vistadb",
""
] |
I have two tables in an SQL Server database.
EMP table...
```
ID NAME A
-- ---- -
01 Tony Y
02 Fred N
```
and a group membership table (GRP)...
```
ID GRP START FINISH KEYSK
-- ------ -------- -------- -----
01 GRP1 01/01/15 31/01/15 00001
01 GRP2 01/02/15 28/02/15 00002
01 GRP3 01/03/15 30/04/15 00003
01 GRP2 01/15/15 31/12/99 00004
01 GRPA 01/01/15 28/02/15 00005
01 GRPB 01/03/15 31/03/15 00006
01 GRPC 01/14/15 30/04/15 00007
01 GRPB 01/15/15 31/12/99 00008
02 GRPII 01/01/15 28/02/15 00005
02 GRPIII 01/03/15 31/03/15 00006
02 GRPIV 01/14/15 30/04/15 00007
02 GRPV 01/15/15 31/12/99 00008
```
I'm trying to construct a query to produce the following output that would select rows 00004 and 00008 from above as their start and end dates are current...
```
NAME GRP123 GRPABC GRPROMAN
---- ------ ------ --------
Tony GRP2 GRPB N/A
```
However my query returns the following...
```
NAME GRP123 GRPABC GRPROMAN
---- ------ ------ --------
Tony GRP2 N/A N/A
Tony N/A GRPB N/A
```
Here is my attempted query, which I may have over complicated things...
```
select NAME,
"GRP123" = case
when GRP123.GRP='GRP1' then 'GRP1'
when GRP123.GRP='GRP2' then 'GRP2'
when GRP123.GRP='GRP3' then 'GRP3'
else 'N/A' end,
"GRPABC" = case
when GRPABC.GRP='GRP1' then 'GRP1'
when GRPABC.GRP='GRP2' then 'GRP2'
when GRPABC.GRP='GRP3' then 'GRP3'
else 'N/A' end,
"GRPROMAN" = case
when GRPROMAN.GRP='GRPI' then 'GRPI'
when GRPROMAN.GRP='GRPII' then 'GRPII'
when GRPROMAN.GRP='GRPIII' then 'GRPIII'
when GRPROMAN.GRP='GRPIV' then 'GRPIV'
when GRPROMAN.GRP='GRPV' then 'GRPV'
when GRPROMAN.GRP='GRPVI' then 'GRPVI'
else 'N/A' end,
from EMP
full outer join GRP as GRP123 on EMP.ID = GRP123.ID and GRP123.GRP in ('GRP1', 'GRP2', 'GRP3')
full outer join GRP as GRPABC on EMP.ID = GRPABC.ID and GRPABC.GRP in ('GRPA', 'GRPB', 'GRPC')
full outer join GRP as GRPROMAN on EMP.ID = GRPROMAN.ID and GRPROMAN.GRP in ('GRPI', 'GRPII', 'GRPIII', 'GRPIV', 'GRPV', 'GRPVI')
where EMP.A = 'T'
order by EMP.NAME
;
```
Any help / guidance would be much appreciated.
An optional improvement output could be...
```
NAME GRP123 GRP123START GRPABC GRPABCSTART GRPROMAN GRPROMANSTART
---- ------ ----------- ------ ----------- -------- -------------
Tony GRP2 01/15/15 GRPB 01/15/15 N/A N/A
```
Thanks, AjN3806
|
You can use a subquery for each column, like below:
```
SELECT
NAME,
GRP123 = ISNULL((
SELECT TOP 1
GRP
FROM
GRP g
WHERE
GRP IN (
'GRP1', 'GRP2', 'GRP3')
AND g.KEYSK IN ('00004', '00008')
AND e.ID = g.ID
), 'N/A'),
GRPABC = ISNULL((
SELECT TOP 1
GRP
FROM
GRP g
WHERE
GRP IN (
'GRPA', 'GRPB', 'GRPC')
AND g.KEYSK IN ('00004', '00008')
AND e.ID = g.ID
), 'N/A'),
GRPROMAN = ISNULL((
SELECT TOP 1
GRP
FROM
GRP g
WHERE
GRP IN (
'GRPI', 'GRPII', 'GRPIII', 'GRPIV', 'GRPV', 'GRPVI')
AND g.KEYSK IN ('00004', '00008')
AND e.ID = g.ID
), 'N/A')
FROM
EMP e
WHERE
e.A = 'Y'
ORDER BY
e.NAME
```
|
There are various **mistakes in the question**, so my answer moves by steps and hypotheses ~~and is **incomplete**. I hope however it could be a valid starting point to write the targeted query~~.
First of all, there's surely a typo in the case for `"GRPABC"`, misaligned with the criteria of the second full outer join (the one producing results aliased with `GRPABC`), with the result of always falling into the `"N/A"` case.
I'm assuming the criteria on the join is correct and so, at first, the `"GRPABC"` column might be:
```
"GRPABC" = case
when GRPABC.GRP='GRPA' then 'GRPA'
when GRPABC.GRP='GRPB' then 'GRPB'
when GRPABC.GRP='GRPC' then 'GRPC'
else 'N/A' end,
```
Also the *where clause* seems wrong: assuming `EMP.A = 'Y'` (`'T'`?!), it doesn't, however, deal with your question regarding the date issues (current?).
And what about the datetime values (1999!?)?
But move on by steps:
1. we can correct the *case criteria*;
2. I'd move the clauses on the outer joins into where clauses of subqueries, considering that you produce `NULL` values when joining the three *"GRP's configurations of values"* while still being able to express the `"N/A"` case.
3. we can bypass the *case statement* completely thanks to the possibilities opened by the rewriting in the second point, dealing directly with `NULL` or `NOT NULL` values (by means of `coalesce`; as a hint leading to this, consider that the *case statement* is only repeating 'good' values with a default 'not good' case).
In practice you could rewrite to this:
```
select NAME,
coalesce(GRP123.GRP, 'N/A') as 'GRP123',
coalesce(GRPABC.GRP, 'N/A') as 'GRPABC',
coalesce(GRPROMAN.GRP, 'N/A') as 'GRPROMAN'
from EMP
full outer join (select * from GRP WHERE GRP in ('GRP1', 'GRP2', 'GRP3')) as GRP123 on EMP.ID = GRP123.ID
full outer join (select * from GRP WHERE GRP in ('GRPA', 'GRPB', 'GRPC')) as GRPABC on EMP.ID = GRPABC.ID
full outer join (select * from GRP WHERE GRP in ('GRPI', 'GRPII', 'GRPIII', 'GRPIV', 'GRPV', 'GRPVI')) as GRPROMAN on EMP.ID = GRPROMAN.ID
--where EMP.A = 'Y'
```
But the above query doesn't face the *'current' filter* and has a bug, because it can't produce a configuration with all `"N/A"` values.
So the further steps go in this direction, changing the filter and join types, and introducing a *'pivoting'* query for the *all-"N/A"-values* case plus a `DISTINCT` clause.
Dealing with the *'current' filter*, you should avoid the use of a *'bad value'* like `'1999-12-31 00:00:00'` because it's a valid date. Why not use the `NULL` value?
But I'll follow your design, and so the query becomes:
```
select DISTINCT NAME,
coalesce(GRP123.GRP, 'N/A') as 'GRP123',
coalesce(GRPABC.GRP, 'N/A') as 'GRPABC',
coalesce(GRPROMAN.GRP, 'N/A') as 'GRPROMAN'
from EMP
inner join (select * from GRP WHERE FINISH < '2000-01-01 00:00:00') as GRP_PIVOTING on EMP.ID = GRP_PIVOTING.ID
left join (select * from GRP WHERE GRP in ('GRP1', 'GRP2', 'GRP3') AND FINISH < '2000-01-01 00:00:00') as GRP123 on EMP.ID = GRP123.ID
left join (select * from GRP WHERE GRP in ('GRPA', 'GRPB', 'GRPC') AND FINISH < '2000-01-01 00:00:00') as GRPABC on EMP.ID = GRPABC.ID
left join (select * from GRP WHERE GRP in ('GRPI', 'GRPII', 'GRPIII', 'GRPIV', 'GRPV', 'GRPVI') AND FINISH < '2000-01-01 00:00:00') as GRPROMAN on EMP.ID = GRPROMAN.ID
where EMP.A = 'Y'
```
The optional output can be targeted with a quick extension using the same *coalesce logic*, after casting the `datetime` type so it can correctly fall back to the `'N/A'` value.
This brings us to the final query producing the requested output:
```
select DISTINCT NAME,
coalesce(GRP123.GRP, 'N/A') as 'GRP123', coalesce(CONVERT(VARCHAR, GRP123.START, 120), 'N/A') as 'GRP123START',
coalesce(GRPABC.GRP, 'N/A') as 'GRPABC', coalesce(CONVERT(VARCHAR, GRPABC.START, 120), 'N/A') as 'GRPABCSTART',
coalesce(GRPROMAN.GRP, 'N/A') as 'GRPROMAN', coalesce(CONVERT(VARCHAR, GRPROMAN.START, 120), 'N/A') as 'GRPROMANSTART'
from EMP
inner join (select * from GRP WHERE FINISH < '2000-01-01 00:00:00') as GRP_PIVOTING on EMP.ID = GRP_PIVOTING.ID
left join (select * from GRP WHERE GRP in ('GRP1', 'GRP2', 'GRP3') AND FINISH < '2000-01-01 00:00:00') as GRP123 on EMP.ID = GRP123.ID
left join (select * from GRP WHERE GRP in ('GRPA', 'GRPB', 'GRPC') AND FINISH < '2000-01-01 00:00:00') as GRPABC on EMP.ID = GRPABC.ID
left join (select * from GRP WHERE GRP in ('GRPI', 'GRPII', 'GRPIII', 'GRPIV', 'GRPV', 'GRPVI') AND FINISH < '2000-01-01 00:00:00') as GRPROMAN on EMP.ID = GRPROMAN.ID
where EMP.A = 'Y'
```
|
How to produce 1 row output on join case SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
In my database I have 2 concerned tables there are: ***ims\_products*** and ***ims\_order\_details***. They have following structure:
***ims\_order\_details***
> * od\_id(PK)
> * p\_id(FK)
> * od\_price
> * od\_quantity
***ims\_products***
> * p\_id(PK)
> * p\_name
> * p\_category
I want to ***select p\_category from ims\_products where*** its ***p\_id*** is same in ***ims\_order\_details***.
|
In your model or controller file:
```
$this->db->select('p_category');
$this->db->join('ims_order_details', 'ims_order_details.p_id = ims_products.p_id', 'inner');
$query = $this->db->get('ims_products');
if ($query && $query->num_rows()>0)
return $query->result();
```
|
```
select p.p_category
from ims_products p
inner join ims_order_details o
on p.p_id = o.p_id
```
|
selecting column from another table matching id codeigniter
|
[
"",
"sql",
"codeigniter",
""
] |
I have a table like this:
```
// mytable
+----+--------+-------+-------+
| id | name | key | value |
+----+--------+-------+-------+
| 1 | jack | 1 | 10 |
| 2 | peter | 1 | 5 |
| 3 | jack | 2 | 5 |
| 4 | ali | 1 | 2 |
| 5 | jack | 1 | 5 |
| 6 | jack | 1 | 10 |
| 7 | bert | 4 | 2 |
| 8 | peter | 2 | 10 |
| 9 | bert | 4 | 5 |
+----+--------+-------+-------+
```
Now I want to sum the numbers of `value` where both `name` and `key` are identical. So, I want this output:
```
// mynewtable
+----+--------+-------+-------+
| id | name | key | value |
+----+--------+-------+-------+
| 1 | jack | 1 | 25 |
| 2 | peter | 1 | 5 |
| 3 | jack | 2 | 5 |
| 4 | ali | 1 | 2 |
| 7 | bert | 4 | 7 |
| 8 | peter | 2 | 10 |
+----+--------+-------+-------+
```
Is it possible to do that?
---
**Edit:** How can I do that for `insert`?
```
// mytable
+----+--------+-------+-------+
| id | name | key | value |
+----+--------+-------+-------+
| 1 | jack | 1 | 25 |
| 2 | peter | 1 | 5 |
| 3 | jack | 2 | 5 |
| 4 | ali | 1 | 2 |
| 7 | bert | 4 | 7 |
| 8 | peter | 2 | 10 |
+----+--------+-------+-------+
```
Inserting these rows:
```
+----+--------+-------+-------+
| 10 | jack | 1 | 5 |
+----+--------+-------+-------+
+----+--------+-------+-------+
| 11 | bert | 1 | 2 |
+----+--------+-------+-------+
```
What I want: (**output**)
```
// mynewtable
+----+--------+-------+-------+
| id | name | key | value |
+----+--------+-------+-------+
| 1 | jack | 1 | 30 |
| 2 | peter | 1 | 5 |
| 3 | jack | 2 | 5 |
| 4 | ali | 1 | 2 |
| 7 | bert | 4 | 7 |
| 8 | peter | 2 | 10 |
| 11 | bert | 1 | 2 |
+----+--------+-------+-------+
```
|
Group by `name, key`
```
select name, key, sum(value) as value
from mytable group by name,key
```
|
You have to group by more columns.
```
select name, key, sum(value) from mytable group by name, key;
```
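Regarding the `insert` part of the question: with a unique key on `(name, key)` this is the classic upsert. A hedged Python sketch using SQLite's `ON CONFLICT ... DO UPDATE` (the analogue of MySQL's `INSERT ... ON DUPLICATE KEY UPDATE`; schema and rows are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A unique constraint on (name, key) is what makes "duplicate key" meaningful
con.execute('CREATE TABLE mytable (name TEXT, "key" INT, value INT, UNIQUE(name, "key"))')

def add(name, key, value):
    # On a duplicate (name, key), add the incoming value to the stored one;
    # "excluded" refers to the row that failed to insert.
    con.execute('INSERT INTO mytable (name, "key", value) VALUES (?, ?, ?) '
                'ON CONFLICT(name, "key") DO UPDATE SET value = value + excluded.value',
                (name, key, value))

add("jack", 1, 25)
add("jack", 1, 5)   # duplicate (name, key): summed into the existing row
add("bert", 1, 2)   # new (name, key): inserted as its own row

rows = sorted(con.execute('SELECT name, "key", value FROM mytable').fetchall())
```

In MySQL the equivalent update clause would be `value = value + VALUES(value)`.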
|
How to sum the values of column on duplicate key?
|
[
"",
"mysql",
"sql",
"sum",
"on-duplicate-key",
""
] |
I'm trying to get the assignment id of the employee who was the last occupant of the position. That would be the last assignment that held the position, i.e. the one with the maximum date. How can I retrieve both values in the query below?
```
select max(to_char(paaf.effective_start_date, 'yyyymmdd')) || to_char(paaf.assignment_id)
from apps.per_all_assignments_f paaf
where paaf.position_id = 159841
and paaf.assignment_type in ('E', 'C')
and paaf.primary_flag = 'Y'
```
I'm converting the maximum date to a character in order to substring it out in the result.
|
Problems like that can easily be solved using a window function:
```
select *
from (
select paaf.*,
max(paaf.effective_start_date) over (partition by position_id) as max_start_date
from apps.per_all_assignments_f paaf
where paaf.position_id = 159841
and paaf.assignment_type in ('E', 'C')
and paaf.primary_flag = 'Y'
) t
where effective_start_date = max_start_date;
```
The part `max(paaf.effective_start_date) over (partition by position_id) as max_start_date` is essentially the same as a `max(paaf.effective_start_date) ... group by position_id` but without the need to group the whole result.
As you are only selecting a single `position_id` you *could* use `over ()` instead - but by using `over (partition by position_id)` the query could also be used to retrieve that information for multiple positions.
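The window-function version is easy to verify on any engine that supports `OVER (PARTITION BY ...)`. A hedged Python/SQLite sketch (SQLite 3.25+ is needed for window functions; the sample rows are invented):

```python
import sqlite3  # window functions require SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE assignments (assignment_id INT, position_id INT, effective_start_date TEXT)")
con.executemany("INSERT INTO assignments VALUES (?, ?, ?)",
                [(10, 159841, "2015-01-01"), (11, 159841, "2015-06-01"),
                 (12, 159841, "2015-03-01")])

# Same shape as the answer: compute the per-position max in a window,
# then keep only the row(s) whose start date equals it.
sql = """
SELECT assignment_id FROM (
    SELECT assignment_id, effective_start_date,
           MAX(effective_start_date) OVER (PARTITION BY position_id) AS max_start_date
    FROM assignments WHERE position_id = 159841
) WHERE effective_start_date = max_start_date
"""
latest = [r[0] for r in con.execute(sql)]
```

Only assignment 11 survives the outer filter, since it carries the latest start date for the position.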
|
try this
```
select (paaf.assignment_id)
from apps.per_all_assignments_f paaf
where paaf.position_id = 159841
and paaf.primary_flag = 'Y'
UNION all
select (paaf.assignment_id)
from apps.per_all_assignments_f paaf
where paaf.assignment_type in ('E', 'C')
and paaf.primary_flag = 'Y'
AND paaf.effective_start_date = (select max(paaf.effective_start_date) from apps.per_all_assignments_f paaf
where paaf.assignment_type in ('E', 'C')
and paaf.primary_flag = 'Y')
```
|
Oracle SQL - How to get the assignment id of the record with the maximum date
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
I'm writing my own translator to get more control about my translations.
Using SQL Server 2008 this is the table for the english-german dictionary:
```
CREATE TABLE engdeu
(
[eword] [varchar](100) NOT NULL,
[dword] [varchar](100) NOT NULL
-- clipped --
)
```
This is my text that I'm going to translate:
```
Berlin was the first city ...
```
This is the query to translate the given english text into the german equivalent:
```
select *
from engdeu
where eword in ('Berlin', 'was', 'the', 'first', 'city')
```
And this is the result of the query:
```
eword dword
city Stadt
first erst\e\r/es
the der/die/das
was wurde\n\war
Berlin Berlin
```
As you can see the order of the result is unlike the order of the origin English text.
How can I create an SQL that produces a result in the same order like that:
```
eword dword
Berlin Berlin
was wurde\n\war
the der/die/das
first erst\e\r/es
city Stadt
```
|
Use FIND\_IN\_SET
```
ORDER BY FIND_IN_SET(eword, 'Berlin,was,...')
```
**EDIT:**
If FIND\_IN\_SET is not working on Sql-server try (can't test it now sorry)
```
ORDER BY CHARINDEX(eword, 'Berlin,was,...')
```
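As a cross-dialect sanity check: SQLite's `instr(haystack, needle)` plays the same role as `CHARINDEX` (with the arguments reversed), so the ordering trick can be verified in memory. One caveat applies to all of these position-based hacks: a word that also appears as a substring earlier in the sentence would sort incorrectly. The data below is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE engdeu (eword TEXT, dword TEXT)")
con.executemany("INSERT INTO engdeu VALUES (?,?)", [
    ("city", "Stadt"), ("first", "erst"), ("the", "der/die/das"),
    ("was", "wurde"), ("Berlin", "Berlin"),
])
# instr() returns the 1-based position of the word inside the sentence,
# so ordering by it reproduces the original word order.
order = con.execute(
    "SELECT eword FROM engdeu "
    "WHERE eword IN ('Berlin','was','the','first','city') "
    "ORDER BY instr('Berlin was the first city', eword)"
).fetchall()
print([w for (w,) in order])  # ['Berlin', 'was', 'the', 'first', 'city']
```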
|
So this is not a task for the IN clause. You have to give SQL information about the order you want. One possible solution is to use Transact-SQL and store this information in a table variable, then just use JOIN and ORDER BY, like this:
```
DECLARE @vartab TABLE(
    pos int NOT NULL,
    text varchar(30) NOT NULL
);
INSERT @vartab
SELECT 1, 'Berlin' UNION ALL
SELECT 2, 'was' UNION ALL
SELECT 3, 'the' UNION ALL
SELECT 4, 'first' UNION ALL
SELECT 5, 'city';

SELECT
    *
FROM
    @vartab vt
    LEFT JOIN engdeu ed ON (vt.text = ed.eword)
ORDER BY
    vt.pos
```
As I am unable to test it at the moment, there may be some mistakes in my code, but you should get the basic idea behind it.
[Here](https://msdn.microsoft.com/pl-pl/library/ms188927%28v=sql.105%29.aspx) is some manual from masters of good documentation.
And [here](http://www.sqlteam.com/article/using-table-variables) is some text on table variables that seems more straightforward.
|
How to force the result order
|
[
"",
"sql",
"sql-server",
"translation",
""
] |
I want to create a function which will update the table.
I am trying to run it:
```
SELECT insert_function('asda', 1 ,1 , 'asd', 1)
```
But I get the error:
> LINE 3 VALUES("asda","1","1","asd","1") column doesn't exist.
When I am trying to run:
> `SELECT insert_function('1', 1 ,1 , '1', 1)` zero-length delimited
> identifier at near """""""
>
> LINE 3 VALUES(""1"","1","1",""1"","1")
> (^at the first item)
```
CREATE TABLE IF NOT EXISTS commits (
id SERIAL PRIMARY KEY,
goo CHAR(64) NOT NULL,
foo INTEGER NOT NULL,
bla INTEGER NOT NULL,
ma CHAR(512) NOT NULL,
fgt INTEGER NOT NULL
);
CREATE OR REPLACE FUNCTION insert_function(goo char(64), foo INTEGER, bla INTEGER, ma CHAR(512), fgt INTEGER)
RETURNS VOID AS
$func$
BEGIN
EXECUTE format('
INSERT INTO commits
VALUES(%I, %I, %I, %I, %I)',
goo , foo , bla , ma , fgt );
END
$func$ LANGUAGE plpgsql;
```
Could you please tell me how can I insert values? How can I write the function with `EXECUTE format`?
|
Specify column names and use `using`:
```
CREATE OR REPLACE FUNCTION insert_function(goo char(64), foo INTEGER,
bla INTEGER, ma CHAR(512), fgt INTEGER)
RETURNS VOID AS
$func$
BEGIN
EXECUTE format('INSERT INTO commits(goo, foo, bla, ma, fgt)
VALUES($1,$2,$3,$4,$5);') using goo, foo, bla, ma, fgt;
END
$func$ LANGUAGE plpgsql;
```
`SqlFiddleDemo`
|
You have two problems:
When you want to **[insert into](http://www.postgresql.org/docs/9.3/static/sql-insert.html)** a table without listing the column names, you have to supply a value for every column, even auto-generated ones such as your `SERIAL` id (or else list the columns you are providing explicitly).
Concerning the format of your dynamic query, you should use [`%s`](http://www.postgresql.org/docs/9.3/static/functions-string.html) instead of [`%I`](http://www.postgresql.org/docs/9.3/static/functions-string.html): `%s` interpolates the corresponding argument as a plain string, which is what you want here, while `%I` escapes its argument as an SQL identifier (a table or column name), which you don't want.
So you can resolve your problem by replacing:
```
EXECUTE format('
INSERT INTO commits
VALUES(%I, %I, %I, %I, %I)',
goo , foo , bla , ma , fgt );
```
By
```
EXECUTE format('
INSERT INTO commits
VALUES(DEFAULT, %L , %s , %s , %L , %s )',
goo , foo , bla , ma , fgt );
```
|
INSERT EXECUTE FORMAT Postgresql string
|
[
"",
"sql",
"postgresql",
"sql-insert",
"postgresql-9.3",
"execute",
""
] |
I am trying to create a table in mysql database with the following code:
```
CREATE TABLE `abcd` (
`A` int(11) NOT NULL DEFAULT '0' primary key,
`B` varchar(45) NOT NULL,
`C` datetime,
`D` datetime,
`E` varchar(16),
`F` varchar(16),
`G` int(5) NOT NULL DEFAULT '0',
`H` int(11) NOT NULL DEFAULT '0',
`I` int(20),
`J` int(20),
`K` datetime,
) TYPE=MyISAM ;
```
I am getting the following error:
> ERROR 1064: You have an error in your SQL syntax. Check the manual
> that corresponds to your MySQL server version for the right syntax to
> use near ') TYPE=MyISAM' at line 13.
I have also used ENGINE=MyISAM but getting the same error.
Can anyone explain where there is syntax error in the code?
Thanks in advance!!
|
Remove the comma before the parenthesis, like this:
```
CREATE TABLE `abcd` (
`A` int(11) NOT NULL DEFAULT '0' primary key,
`B` varchar(45) NOT NULL,
`C` datetime,
`D` datetime,
`E` varchar(16),
`F` varchar(16),
`G` int(5) NOT NULL DEFAULT '0',
`H` int(11) NOT NULL DEFAULT '0',
`I` int(20),
`J` int(20),
`K` datetime
) TYPE=MyISAM ;
```
|
Working off what @Thanos said, it is `ENGINE`, not `TYPE`:
```
CREATE TABLE `abcd` (
`A` int(11) NOT NULL DEFAULT '0' primary key,
`B` varchar(45) NOT NULL,
`C` datetime,
`D` datetime,
`E` varchar(16),
`F` varchar(16),
`G` int(5) NOT NULL DEFAULT '0',
`H` int(11) NOT NULL DEFAULT '0',
`I` int(20),
`J` int(20),
`K` datetime
) engine=MyISAM ;
```
**tested**
|
Error while creating a table
|
[
"",
"mysql",
"sql",
""
] |
I am having difficulties getting my second alias to work in the example below.
I'm using Squirrel SQL 3.7
Getting an error
> Error: [SQL5001] Column qualifier or table T2 undefined. SQLState:
> 42703 ErrorCode: -5001
```
UPDATE myDatabaseOne.myTableOne t1
SET
firstFieldToCopy = (SELECT DISTINCT alternateField FROM myDatabaseTwo.myTableTwo t2)
WHERE t1.firstFieldToCopy = t2.alternateField;
```
|
Did you mean...
```
UPDATE myDatabaseOne.myTableOne t1
SET firstFieldToCopy = (SELECT DISTINCT alternateField
FROM myDatabaseTwo.myTableTwo t2
WHERE t1.firstFieldToCopy = t2.alternateField);
```
Note the position of the ) ... This is why the t2 alias didn't work...
Otherwise the query is confusing as to your intent.
|
```
UPDATE t1
SET t1.firstfieldtocopy = t2.alternatefield
FROM mydatabaseone.mytableone t1
JOIN mydatabasetwo.mytabletwo t2 on t1.firstfieldtocopy = t2.alternatefield
```
I don't understand your logic though: you're setting copy = alternate, but you're filtering to rows where copy = alternate already.
|
Cannot assign the second sql alias
|
[
"",
"sql",
"db2",
"alias",
"squirrel-sql",
""
] |
I always felt like I needed to improve my SQL knowledge. I am trying to, but I still can't find an answer to my question.
I have 2 tables: `Post` and `Comments`.
`Post` table has a PK `PostID, Username, Title, Content`.
`Comments` table has a PK `CommentID, FK PostID, Username, Content`.
`Username` in `Post` is the `Username` of the post's author; in `Comments` it is the commenter's `Username`. Everything else should be pretty self-explanatory.
**Main challenge: I want to select all posts made by that specific author and display amount of comments for each post AND display zero if no comments are found. All in one query (if possible?)**
So far the closest, I've gotten to was this:
```
SELECT p.*, COUNT(*) as CommentAmount
FROM Posts p
LEFT OUTER JOIN Comments c
ON p.PostID = c.PostID
WHERE p.Username = 'author1'
GROUP BY c.PostID
```
It works in a way, but it displays neither the Post data nor a CommentAmount of 0 if no comments exist for that PostID - which makes sense, since it does not find any p.PostID that has a matching c.PostID. How do I make it display a CommentAmount of 0 as well? Is it even possible?
Here is the sample SQL Fiddle I've set up so you could test as well!: <http://sqlfiddle.com/#!9/f8941>
**UPDATE: I made a little mistake in the schema... sorry guys. Fixed in the fiddle above^**
**UPDATE2: Thanks everyone for amazing answers! Funny how most solutions work perfectly on SQL Fiddle but seem not to work on my DB on the cloud, using MySQL Workbench... I'll have to look into that now then, thanks everyone!**
|
How about this?:
```
SELECT p.*,
(SELECT COUNT(*) FROM Comments WHERE PostID=p.PostID) AS num_comments
FROM Posts p
WHERE p.Username = 'author1'
```
|
You only need to adjust your query by adding a `GROUP BY` and changing the `COUNT()` aggregate argument to a column from `Comments`, so that it yields `0` when there are no comments for a particular post.
It doesn't matter which column from `Comments` you will put inside `COUNT()`, since every column has a `NULL` value when JOIN condition is not met.
```
SELECT p.*, COUNT(c.CommentID) AS CommentAmount
FROM Posts p
LEFT JOIN Comments c
ON p.PostID = c.PostID
WHERE p.username = 'author1'
GROUP BY 1,2,3,4
```
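Either variant (the correlated subquery, or the `LEFT JOIN` with `COUNT` over a column of the joined table) produces `0` for posts without comments; here is a minimal in-memory SQLite check of the `LEFT JOIN` form, with hypothetical data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Posts (PostID INTEGER, Username TEXT)")
con.execute("CREATE TABLE Comments (CommentID INTEGER, PostID INTEGER)")
con.executemany("INSERT INTO Posts VALUES (?,?)", [(1, "author1"), (2, "author1")])
con.executemany("INSERT INTO Comments VALUES (?,?)", [(10, 1), (11, 1)])  # post 2 has none
# COUNT(c.CommentID) only counts non-NULL values, so unmatched posts get 0.
rows = con.execute(
    "SELECT p.PostID, COUNT(c.CommentID) "
    "FROM Posts p LEFT JOIN Comments c ON p.PostID = c.PostID "
    "WHERE p.Username = 'author1' "
    "GROUP BY p.PostID ORDER BY p.PostID"
).fetchall()
print(rows)  # [(1, 2), (2, 0)]
```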
|
SQL: Count amount of comments for each post written by specific author
|
[
"",
"mysql",
"sql",
"database",
"select",
"join",
""
] |
I have this two queries
```
SELECT consejos.consejo as value,
consejos.id,
consejos.texto,
consejos.votos
FROM consejos
ORDER BY fecha DESC
```
And
```
SELECT preguntas.pregunta as value,
preguntas.id,
preguntas.texto,
preguntas.votos
FROM preguntas
ORDER BY fecha DESC
```
Is there any simple way to get both results in the same query, sorted by the `fecha` column? And would it be possible to add a field to determine which table a row comes from?
|
you can use `union`:
```
SELECT value, id, texto, votos, source_table from (
SELECT consejos.consejo as value,
consejos.id,
consejos.texto,
consejos.votos,
'consejos' as source_table,
fecha
FROM consejos
UNION
SELECT preguntas.pregunta as value,
preguntas.id,
preguntas.texto,
preguntas.votos,
'preguntas' as source_table,
fecha
FROM preguntas
) as A ORDER BY fecha DESC
```
**EDIT:**
For your additional request, a column indicating which table the row came from: just add `'consejos' as source_table` to the columns list of the first select and `'preguntas' as source_table` to the second.
**Second EDIT (for strawberry, is this a test or something? did I pass :)?)**
Add `fecha` to the inner queries, but not to the wrapping one if you don't want it there.
|
Use `UNION ALL`
```
Select * From (
SELECT consejos.consejo as value,
        consejos.id,
        consejos.texto,
        consejos.votos,
        consejos.fecha
FROM consejos
UNION ALL
SELECT preguntas.pregunta as value,
        preguntas.id,
        preguntas.texto,
        preguntas.votos,
        preguntas.fecha
FROM preguntas ) t
ORDER BY t.fecha DESC
```
|
Combine two queries from different tables and sort by a common column?
|
[
"",
"mysql",
"sql",
""
] |
I have a table `table` which has three fields: `id`, `field1` and `field2`.
The SQL query I need to create should select the `id`s WHERE `field1 LIKE 'Some string'` but if `field1` is `NULL` I should perform the filtering against `field2`, in pseudo SQL:
```
SELECT id FROM table
WHERE (IF `field1` IS NULL THEN `field2` ELSE `field1`) LIKE 'Some string'
```
How can I achieve this in MySQL?
Thanks for the attention!
|
You can use one of the [Flow Control Operators](http://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_ifnull). You could write:
### `CASE` operator (recommended)
```
SELECT id FROM table
WHERE CASE
WHEN `field1` IS NOT NULL THEN `field1`
ELSE `field2`
END LIKE 'Some string'
```
### `IF` construct
```
SELECT id FROM table
WHERE IF(`field1` IS NOT NULL, `field1`, `field2`) LIKE 'Some string'
```
### `IFNULL` construct
```
SELECT id FROM table
WHERE IFNULL(`field1`, `field2`) LIKE 'Some string'
```
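The `IFNULL` fallback is easy to verify outside MySQL as well; SQLite supports the same function (rows below are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, field1 TEXT, field2 TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    (1, "Some string", "other"),
    (2, None, "Some string"),   # field1 is NULL, so field2 is used instead
    (3, None, "other"),
])
ids = [r[0] for r in con.execute(
    "SELECT id FROM t WHERE IFNULL(field1, field2) LIKE 'Some string' ORDER BY id"
)]
print(ids)  # [1, 2]
```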
|
Or:
```
WHERE IFNULL(`field1`, `field2`) LIKE 'Some string'
```
|
How can I select which column to use in a WHERE condition programmatically?
|
[
"",
"mysql",
"sql",
""
] |
I would like to create and populate temporary table with data to process it inside loop statement like this:
```
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT(tname) INTO cnt from tab where tname = 'MY_TEMP';
IF (cnt > 0) THEN
EXECUTE IMMEDIATE 'DROP TABLE MY_TEMP';
END IF;
EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE MY_TEMP (G NVARCHAR2(128), F NVARCHAR2(128), V NVARCHAR2(128)) ON COMMIT DELETE ROWS';
INSERT INTO MY_TEMP VALUES (N'G-value1', N'F-value1', N'V-value1');
INSERT INTO MY_TEMP VALUES (N'G-value2', N'F-value2', N'V-value2');
...
FOR record IN (SELECT G,F,V FROM MY_TEMP)
LOOP
... Do something sophisticated with record.G, record.F, record.V
END LOOP;
COMMIT;
END;
```
When I run this script inside PL/SQL Developer it tells me, for the very first INSERT, that the MY\_TEMP table or view doesn't exist, even though my EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE ... ' statement seems to execute without errors. I checked: there is no MY\_TEMP table in the table list after script execution.
When I run EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE ... ' alone, it runs fine and the MY\_TEMP table really is created. After that, the whole script runs fine.
How do I use this script without manually pre-creating the MY\_TEMP table?
|
> How do I use this script without manually precreating MY\_TEMP table ?
You can't. Unless of course you run everything after the creation of the temporary table using `EXECUTE IMMEDIATE`. But I cannot for a second recommend that approach.
The point is not that your script fails to *run*, but that it fails to *compile*. Oracle won't start running your block if it can't compile it first. At the point Oracle tries to compile your PL/SQL block, the table doesn't exist. You have a compilation error, not a runtime error.
I suspect that you are more familiar with temporary tables in SQL Server and are trying to use temporary tables in Oracle in the same way. If this is the case, then you will need to know that there are differences between temporary tables in Oracle and in SQL Server.
Firstly, there's no such thing as a local temporary table (i.e. a table visible to only one connected user) in Oracle. Oracle does have global temporary tables, but only the data in a global temporary table is temporary. The table itself is permanent, so once it has been created it will only be dropped if you explicitly drop it. Compare this with SQL Server temporary tables, which are dropped once all relevant users disconnect.
I really don't think you need to be creating the temporary table in your block. It should be sufficient to create it once beforehand.
|
Why do you want to drop and create the temp table? Simply create it once and use it.
|
Why doesn't a PL/SQL block create a temporary table?
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a scenario in which I have three columns with multiple rows, I need to select them in single row.
Below I am attaching temporary table scenario.
```
CREATE TABLE #Temp
(
PersonID INT ,
PhoneType VARCHAR(20) ,
PhoneNumber VARCHAR(20)
)
INSERT INTO #Temp
( PersonID ,
PhoneType ,
PhoneNumber
)
SELECT 1212 ,
'Business' ,
'123456789'
UNION ALL
SELECT 1212 ,
'Cell' ,
'741852963'
UNION ALL
SELECT 1212 ,
'Other' ,
'987654321'
UNION ALL
SELECT 1212 ,
'Home' ,
'951357852'
SELECT *
FROM #Temp
DROP TABLE #Temp
```
I need to Display,
```
PersonID|Business|123456789|Cell|741852963|Other|987654321|Home|951357852
```
Can anybody help?
I also have to combine this result with another SELECT query. How may I do that?
|
You can use conditional aggregation:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/55f15/3/0)
```
SELECT
PersonID,
Business = MAX(CASE WHEN PhoneType = 'Business' THEN PhoneNumber END),
Cell = MAX(CASE WHEN PhoneType = 'Cell' THEN PhoneNumber END),
Other = MAX(CASE WHEN PhoneType = 'Other' THEN PhoneNumber END),
Home = MAX(CASE WHEN PhoneType = 'Home' THEN PhoneNumber END)
FROM #Temp
GROUP BY PersonID
```
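The static crosstab can be exercised against the question's sample rows in an in-memory SQLite session (the `alias = expression` syntax above is T-SQL specific, so standard `AS` aliases are used here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Temp (PersonID INTEGER, PhoneType TEXT, PhoneNumber TEXT)")
con.executemany("INSERT INTO Temp VALUES (?,?,?)", [
    (1212, "Business", "123456789"),
    (1212, "Cell", "741852963"),
    (1212, "Other", "987654321"),
    (1212, "Home", "951357852"),
])
# MAX(CASE WHEN ...) picks out each phone type into its own column.
row = con.execute(
    "SELECT PersonID, "
    "MAX(CASE WHEN PhoneType='Business' THEN PhoneNumber END) AS Business, "
    "MAX(CASE WHEN PhoneType='Cell' THEN PhoneNumber END) AS Cell, "
    "MAX(CASE WHEN PhoneType='Other' THEN PhoneNumber END) AS Other, "
    "MAX(CASE WHEN PhoneType='Home' THEN PhoneNumber END) AS Home "
    "FROM Temp GROUP BY PersonID"
).fetchone()
print(row)  # (1212, '123456789', '741852963', '987654321', '951357852')
```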
---
If you have unknown number of `PhoneType`s, do a dynamic crosstab:
[**SQL Fiddle**](http://sqlfiddle.com/#!6/55f15/1/0)
```
DECLARE @sql NVARCHAR(MAX) = ''
SELECT @sql =
'SELECT
PersonID' + CHAR(10)
SELECT @sql = @sql +
' , MAX(CASE WHEN PhoneType = ''' + PhoneType + ''' THEN PhoneNumber END) AS' + QUOTENAME(PhoneType) + CHAR(10)
FROM (SELECT DISTINCT PhoneType FROM #Temp) t
SELECT @sql = @sql +
'FROM #Temp
GROUP BY PersonID'
EXEC sp_executesql @sql
```
|
You can use this code. It may be Help you.
```
DECLARE @cols AS NVARCHAR(MAX),@query AS NVARCHAR(MAX)
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(PhoneType)
from #Temp
group by PhoneType, PersonID
order by PersonID
FOR XML PATH('')), 1, 1, '')
SET @query = N'SELECT ' + @cols + N' from
(
select *
from #Temp
) x
pivot
(
max(PhoneNumber)
for PhoneType in (' + @cols + N')
) p'
EXEC SP_EXECUTESQL @QUERY;
```
|
Convert a row data to columns in SQL Server
|
[
"",
"sql",
"sql-server",
"multiple-columns",
"rows",
""
] |
I am trying to insert values into sql table, but I am getting this error in sql query
```
SQL Error: ORA-00001: unique constraint (uniqueKey) violated
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
```
Is there a way to skip this error and continue insert. Something like this
```
try
insert query
catch (unique constraint error)
continue inserting other values
```
|
There is an `ignore_row_on_dupkey_index(<table name>, <unique index name>)` hint:
```
HUSQVIK@hq_pdb_tcp> CREATE TABLE tmp (val NUMBER CONSTRAINT pk_tmp PRIMARY KEY);
Table created.
HUSQVIK@hq_pdb_tcp> INSERT /*+ ignore_row_on_dupkey_index(tmp, pk_tmp) */ INTO tmp (val) SELECT 1 FROM DUAL CONNECT BY LEVEL <= 3;
1 row created.
```
Note that I tried to insert the value 1 three times, yet only one row was created.
|
There is a `LOG ERRORS` clause which allows you to log the rows that cause errors into an error table; this error table is created using a DBMS package:
```
DBMS_ERRLOG.CREATE_ERROR_LOG(table_being_inserted_into ,name_of_table_for_errors ,NULL,NULL,TRUE);
```
<https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_errlog.htm>
Function Signature:
```
DBMS_ERRLOG.CREATE_ERROR_LOG (
dml_table_name IN VARCHAR2,
err_log_table_name IN VARCHAR2 := NULL,
err_log_table_owner IN VARCHAR2 := NULL,
err_log_table_space IN VARCHAR2 := NULL,
skip_unsupported IN BOOLEAN := FALSE);
```
Then in your insert statement you end it with a log errors clause:
```
LOG ERRORS INTO your_error_table_name ( 'description of your choosing' ) REJECT LIMIT UNLIMITED;
```
You can choose to accept a reject limit of a fixed number, in effect specifying a tolerance for errors before a real error is thrown instead of the offending rows simply being placed in the error table.
|
How to skip unique constraint error
|
[
"",
"sql",
"oracle",
""
] |
This is a relational database design question, not specific to any RDBMS. A simplified case:
I have two tables, Cars and Trucks. They both have a column, say RegistrationNumber, that must be unique across the two tables.
This could probably be enforced with some insert/update triggers, but I'm looking for a more "clean" solution if possible.
One way to achieve this could be to add a third table, Vehicles, that holds the RegistrationNumber column, then add two additional columns to Vehicles, CarID and TruckID. But then for each row in Vehicles, one of the columns CarID or TruckID would always be NULL, because a RegistrationNumber applies to either a Car or a Truck leaving the other column with a NULL value.
Is there anyway to enforce a unique RegistrationNumber value across multiple tables, without introducing NULL columns or relying on triggers?
|
This is a bit complicated. Having the third table, `Vehicles` is definitely part of the solution. The second part is guaranteeing that a vehicle is either a car or a truck, but not both.
One method is the "list-all-the-possibilities" method. This is what you propose with two columns. In addition, this should have a constraint to verify that only one of the ids is filled in. A similar approach is to have the `CarId` and `TruckId` actually be the `VehicleId`. This reduces the number of different ids floating around.
Another approach uses composite keys. The idea is:
```
create table Vehicles (
    VehicleId int primary key,
    Registration varchar(255),
    Type varchar(255),
    constraint chk_type check (Type in ('car', 'truck')),
    constraint unq_type_vehicle unique (Type, VehicleId), -- this is redundant, but necessary
    . . .
);
create table car (
VehicleId int,
Type varchar(255), -- always 'car' in this table
constraint fk_car_vehicle foreign key (VehicleId) references Vehicles(VehicleId),
constraint fk_car_vehicle_type foreign key (Type, VehicleId) references Vehicles(Type, VehicleId)
);
```
|
See the following tags: [class-table-inheritance](/questions/tagged/class-table-inheritance "show questions tagged 'class-table-inheritance'") [shared-primary-key](/questions/tagged/shared-primary-key "show questions tagged 'shared-primary-key'")
You have already outlined class table inheritance in your question. The tag will just add a few details, and show you some other questions whose answers may help.
Shared primary key is a handy way of enforcing the one-to-one nature of IS-A relationships such as the relationship between a vehicle and a truck. It also allows a foreign key in some other table to reference a vehicle and also a truck or a car, as the case may be.
|
Is unique foreign keys across multiple tables via normalization and without null columns possible?
|
[
"",
"sql",
"database",
"database-design",
"database-normalization",
""
] |
I'm a beginner in SQL and I'm having trouble with this query, which is too inefficient to finish executing. Here it is; it just loads for hours (!)
```
SELECT my_db_client_total.email as EMAIL_TOTAL,
my_db_client_total.name, my_db_client_total.country
FROM my_db_client_total, my_db_client_pro
WHERE (my_db_client_total.email<>'' )
AND (my_db_client_total.country = '120')
AND (my_db_client_total.email IN (SELECT my_db_client_pro.email
FROM my_db_client_pro));
```
My tables are organised in this way :
```
Table 1 : Table 2 :
email email
name name
country country
```
The purpose of the request is to test my database, and to prepare a further request where I want to select `email`, `name` and `country` from `Table 1` where `table1.email` is not in `table2`.
Sorry for my bad English, and for my level, as I said earlier, I'm a beginner and self-taught.
If anyone can help me or at least enlighten me, I'll be so thankful for your time.
EDIT: I didn't mention it before, but I'm working with tables of 20,000+ rows.
|
This is your query (fixed up a bit with table aliases):
```
SELECT ct.email as EMAIL_TOTAL, ct.name, ct.country
FROM my_db_client_total ct
WHERE (ct.email <> '' ) AND
(ct.country = 120) AND
(ct.email IN (SELECT my_db_client_pro.email FROM my_db_client_pro));
```
(I am assuming that `120` is really an integer.)
In older versions of MySQL, `exists` is more efficient than `in`, so rewrite the query as:
```
SELECT ct.email as EMAIL_TOTAL, ct.name, ct.country
FROM my_db_client_total ct
WHERE (ct.email <> '' ) AND
(ct.country = 120) AND
EXISTS (SELECT 1 FROM my_db_client_pro cp WHERE cp.email = ct.email);
```
Next, consider indexes. For this query, good indexes are:
```
create index idx_my_db_client_total_2 on my_db_client_total(country, email, name);
create index idx_my_db_client_pro_email on my_db_client_pro(email);
```
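A minimal sketch of the `EXISTS` filter, using simplified, hypothetical table names and rows in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE client_total (email TEXT, name TEXT, country INTEGER)")
con.execute("CREATE TABLE client_pro (email TEXT)")
con.executemany("INSERT INTO client_total VALUES (?,?,?)", [
    ("a@x.com", "A", 120),
    ("b@x.com", "B", 120),   # right country, but not in client_pro
    ("c@x.com", "C", 99),    # wrong country
])
con.execute("INSERT INTO client_pro VALUES ('a@x.com')")
# The correlated EXISTS keeps only rows with a matching email in client_pro.
rows = con.execute(
    "SELECT ct.email FROM client_total ct "
    "WHERE ct.email <> '' AND ct.country = 120 "
    "AND EXISTS (SELECT 1 FROM client_pro cp WHERE cp.email = ct.email)"
).fetchall()
print(rows)  # [('a@x.com',)]
```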
|
Your question states you are using MySQL, so this will give you an inner join (the record must exist in both tables):
```
SELECT my_db_client_total.email as EMAIL_TOTAL, my_db_client_total.name, my_db_client_total.country
FROM my_db_client_total inner join my_db_client_pro on my_db_client_pro.email = my_db_client_total.email
WHERE (my_db_client_total.email<>'' ) AND (my_db_client_total.country = '120');
```
To select a row that does not exist in the other table do this:
```
SELECT my_db_client_total.email as EMAIL_TOTAL, my_db_client_total.name, my_db_client_total.country
FROM my_db_client_total LEFT OUTER JOIN my_db_client_pro on my_db_client_pro.email = my_db_client_total.email
WHERE (my_db_client_total.email<>'' ) AND (my_db_client_total.country = '120') AND my_db_client_pro.email IS NULL;
```
|
How to make my SELECT query more efficient?
|
[
"",
"mysql",
"sql",
"performance",
"select",
""
] |
I have 3 tables, shown below:
```
mysql> select * from Raccoon;
+----+------------------+----------------------------------------------------------------------------------------------------+
| id | name | image_url |
+----+------------------+----------------------------------------------------------------------------------------------------+
| 3 | Jesse Coon James | http://www.pawfun.com/wp/wp-content/uploads/2010/01/rabbid.png |
| 4 | Bobby Coon | https://c2.staticflickr.com/6/5242/5370143072_dee60d0ce2_n.jpg |
| 5 | Doc Raccoon | http://images.encyclopedia.com/utility/image.aspx?id=2801690&imagetype=Manual&height=300&width=300 |
| 6 | Eddie the Rac | http://www.felid.org/jpg/EDDIE%20THE%20RACCOON.jpg |
+----+------------------+----------------------------------------------------------------------------------------------------+
4 rows in set (0.00 sec)
mysql> select * from Review;
+----+------------+-------------+---------------------------------------------+--------+
| id | raccoon_id | reviewer_id | review | rating |
+----+------------+-------------+---------------------------------------------+--------+
| 1 | 3 | 1 | This raccoon was a fine raccoon indeed. | 5 |
| 2 | 5 | 2 | This raccoon did not do much for me at all. | 2 |
| 3 | 3 | 1 | asdfsadfsadf | 5 |
| 4 | 5 | 2 | asdfsadf | 1 |
+----+------------+-------------+---------------------------------------------+--------+
4 rows in set (0.00 sec)
mysql> select * from Reviewer;
+----+---------------+
| id | reviewer_name |
+----+---------------+
| 1 | Kane Charles |
| 2 | Cameron Foale |
+----+---------------+
2 rows in set (0.00 sec)
```
I'm trying to build a select query that will return all of the columns in `Raccoon` as well as an extra column with the average of `Review.rating` (grouped by id). The problem I face is that there is no guarantee that rows will be present in the `Review` table for every single Raccoon (as determined by the FK `raccoon_id`, which references `Raccoon.id`). In situations where there are zero rows in the `Review` table for a given `Raccoon.id` (i.e. `Review.raccoon_id`), I'd like the query to return `0` as the average for that Raccoon.
Below is the current query I'm using:
```
mysql> SELECT *, (SELECT IFNULL(AVG(rating),0) FROM Review WHERE raccoon_id=Raccoon.id GROUP BY raccoon_id) AS "AVG" FROM Raccoon ORDER BY "AVG" ASC;
+----+------------------+----------------------------------------------------------------------------------------------------+--------+
| id | name | image_url | AVG |
+----+------------------+----------------------------------------------------------------------------------------------------+--------+
| 3 | Jesse Coon James | http://www.pawfun.com/wp/wp-content/uploads/2010/01/rabbid.png | 5.0000 |
| 4 | Bobby Coon | https://c2.staticflickr.com/6/5242/5370143072_dee60d0ce2_n.jpg | NULL |
| 5 | Doc Raccoon | http://images.encyclopedia.com/utility/image.aspx?id=2801690&imagetype=Manual&height=300&width=300 | 1.5000 |
| 6 | Eddie the Rac | http://www.felid.org/jpg/EDDIE%20THE%20RACCOON.jpg | NULL |
+----+------------------+----------------------------------------------------------------------------------------------------+--------+
4 rows in set (0.00 sec)
```
As you can see above, the query isn't returning `0` for Raccoons with id of 4 and 6, it is simply returning `NULL`. I need it to return something like the following (note the ordering, sorted by lowest average review first):
```
+----+------------------+----------------------------------------------------------------------------------------------------+--------+
| id | name | image_url | AVG |
+----+------------------+----------------------------------------------------------------------------------------------------+--------+
| 4 | Bobby Coon | https://c2.staticflickr.com/6/5242/5370143072_dee60d0ce2_n.jpg | 0.0000 |
| 6 | Eddie the Rac | http://www.felid.org/jpg/EDDIE%20THE%20RACCOON.jpg | 0.0000 |
| 5 | Doc Raccoon | http://images.encyclopedia.com/utility/image.aspx?id=2801690&imagetype=Manual&height=300&width=300 | 1.5000 |
| 3 | Jesse Coon James | http://www.pawfun.com/wp/wp-content/uploads/2010/01/rabbid.png | 5.0000 |
+----+------------------+----------------------------------------------------------------------------------------------------+--------+
```
|
Use `IFNULL` outside your subquery, since the subquery will return NULL when there is no match for the outer table's row:
```
IFNULL((SELECT AVG(rating) FROM Review WHERE raccoon_id=Raccoon.id GROUP BY raccoon_id), 0) AS "AVG"
```
Or you can also use `LEFT JOIN`,
```
SELECT ra.id, ra.name, ra.image_url,
IFNULL(AVG(rv.rating),0)AS "AVG"
FROM Raccoon ra
LEFT JOIN Review rv
ON rv.raccoon_id = ra.id
GROUP BY ra.id, ra.name, ra.image_url
ORDER BY "AVG" ASC;
```
|
You don't want a `group by` in the subquery. This is dangerous because it could return more than one row (although the `where` prevents this). More importantly, with no `group by`, the subquery is an aggregation query that always returns one row. So, you can put the logic in the subquery:
```
SELECT r.*,
(SELECT COALESCE(AVG(rev.rating),0)
FROM Review rev
WHERE rev.raccoon_id = r.id
) AS "AVG"
FROM Raccoon r
ORDER BY "AVG" ASC;
```
Also: always use qualified column names when you have a correlated subquery. This is a good practice to prevent problems in the future.
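A quick in-memory SQLite check of the scalar-subquery form with `COALESCE` (hypothetical rows; raccoon 4 has no reviews and therefore averages to 0):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Raccoon (id INTEGER, name TEXT)")
con.execute("CREATE TABLE Review (raccoon_id INTEGER, rating INTEGER)")
con.executemany("INSERT INTO Raccoon VALUES (?,?)", [(3, "Jesse"), (4, "Bobby")])
con.executemany("INSERT INTO Review VALUES (?,?)", [(3, 5), (3, 5)])
# AVG over an empty set is NULL; COALESCE turns that into 0.
rows = con.execute(
    "SELECT r.id, "
    "(SELECT COALESCE(AVG(rev.rating), 0) FROM Review rev "
    " WHERE rev.raccoon_id = r.id) AS avg_rating "
    "FROM Raccoon r ORDER BY avg_rating ASC"
).fetchall()
print(rows)  # [(4, 0), (3, 5.0)]
```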
|
MySQL AVG() return 0 if NULL
|
[
"",
"mysql",
"sql",
"if-statement",
"null",
"average",
""
] |
I have this table for **Response Codes:**
[](https://i.stack.imgur.com/mzKn1.png)
And this table for **invitations:**
[](https://i.stack.imgur.com/sp8tB.png)
My query so far **gives this:**
[](https://i.stack.imgur.com/29lJP.png)
While I want to **achieve this:**
[](https://i.stack.imgur.com/B1pGr.png)
MY QUERY:
```
SELECT
i.eventId
,code.responseCode
,COUNT(i.attendeeResponse) responseCount
FROM invitations i
LEFT JOIN response_codes code
ON code.responseCode = i.attendeeResponse
GROUP BY i.eventId, code.responseCode, i.attendeeResponse;
```
# [SQLFiddle](http://sqlfiddle.com/#!9/bb11e/5)
|
You need to construct a cartesian product of all `eventId`s and `responseCode`s first (you can achieve it with a `join` without a condition):
```
select c.eventId
, c.responseCode
, count( i.attendeeResponse ) as responseCount
from ( select distinct t1.responseCode
, t2.eventId
from `response_codes` t1
join `invitations` t2 ) c
left join `invitations` i on c.responseCode = i.attendeeResponse and c.eventId = i.eventId
group by c.eventId, c.responseCode;
```
[**SQLFiddle**](http://sqlfiddle.com/#!9/bb11e/79)
|
You need to cross join the response_codes table to get all combinations of eventId and responseCode.
[SQL Fiddle](http://sqlfiddle.com/#!9/bb11e/73)
```
SELECT distinct
i.eventId
,code.responseCode
,case when t.responseCount is null then 0
else t.responsecount end rcount
FROM invitations i
cross JOIN response_codes code
left join
(SELECT i.eventId
,code.responseCode
,COUNT(i.attendeeResponse) responseCount
FROM invitations i
JOIN response_codes code
ON code.responseCode = i.attendeeResponse
group by i.eventid, code.responsecode) t
on t.responsecode =code.responsecode and t.eventid = i.eventid
order by i.eventid, code.responsecode desc
```
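For comparison, the same cross-join-plus-subquery variant also runs almost unchanged on SQLite (sample data invented for illustration):

```python
# The counts come from an inner-join subquery; the CROSS JOIN supplies every
# (eventId, responseCode) pair, and CASE turns unmatched pairs into 0.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE response_codes (responseCode TEXT);
CREATE TABLE invitations (eventId INTEGER, attendeeResponse TEXT);
INSERT INTO response_codes VALUES ('Y'), ('N');
INSERT INTO invitations VALUES (1, 'Y'), (1, 'Y'), (2, 'N');
""")
rows = conn.execute("""
SELECT DISTINCT i.eventId,
       code.responseCode,
       CASE WHEN t.responseCount IS NULL THEN 0
            ELSE t.responseCount END AS rcount
  FROM invitations i
 CROSS JOIN response_codes code
  LEFT JOIN (SELECT i.eventId, code.responseCode,
                    COUNT(i.attendeeResponse) AS responseCount
               FROM invitations i
               JOIN response_codes code
                 ON code.responseCode = i.attendeeResponse
              GROUP BY i.eventId, code.responseCode) t
    ON t.responseCode = code.responseCode AND t.eventId = i.eventId
 ORDER BY i.eventId, code.responseCode DESC
""").fetchall()
print(rows)  # [(1, 'Y', 2), (1, 'N', 0), (2, 'Y', 0), (2, 'N', 1)]
```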
|
Count invitation response against events and response codes
|
[
"",
"mysql",
"sql",
"join",
""
] |
Consider this value stored in a SQL Server 2012 table:
```
789-0000000
```
The above number is treated as a string in SQL Server 2012, but whenever I update the record I need the value to increment by 1.
For example:
* When I update record 1 it should increment to `789-0000001`
* When I update record 2 it should increment to `789-0000002`
The increment should apply only to the numeric part of 789-**0000000**.
|
The best solution is to use
* an `ID INT IDENTITY(1,1)` column to get SQL Server to handle the automatic increment of your numeric value
* a **computed, persisted** column to convert that numeric value to the value you need
So try this:
```
CREATE TABLE dbo.YourTable
(ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
CompanyID AS '789-' + RIGHT('000000' + CAST(ID AS VARCHAR(7)), 7) PERSISTED,
.... your other columns here....
)
```
Now, every time you insert a row into `dbo.YourTable` without specifying values for `ID` or `CompanyID`:
```
INSERT INTO dbo.YourTable(Col1, Col2, ..., ColN)
VALUES (Val1, Val2, ....., ValN)
```
then SQL Server will **automatically and safely** increment your `ID` value, and `CompanyID` will contain values like `789-0000001`, `789-0000002`, and so on, reliably and without duplicates.
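The zero-padding expression `RIGHT('000000' + CAST(ID AS VARCHAR(7)), 7)` is the only non-obvious part of the computed column. A quick Python sketch of the same formatting logic (the function name and defaults are illustrative, not part of the answer):

```python
# Mirror of the computed column: prefix + the id zero-padded to 7 digits.
def company_id(id_: int, prefix: str = "789-", width: int = 7) -> str:
    return f"{prefix}{id_:0{width}d}"

print(company_id(1))   # 789-0000001
print(company_id(42))  # 789-0000042
```

The `RIGHT('000000' + ..., 7)` trick and `:07d` padding produce the same string for any id up to seven digits.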
|
```
DECLARE @base int = 0
UPDATE TableX
SET
TableX.Value = 'Prefix' + RIGHT('0000000' + CAST(@base AS nvarchar),7),
@base = @base + 1
FROM
TableX
```
|
Need to auto increment string in SQL Server 2012
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have two tables.
Table A:
```
root_id, root_text
```
Table B:
```
word_id , word_text,root_text
```
Some of the records in `Table B` have root\_text values that might not exist in `Table A`, so I am trying to use a left join.
```
select *
from A a , B b
where a.root_text=+b.root_text
```
I only get the records which have a matching **`root_text`**.
If I use the newer join syntax
```
Select
*
from
A left join B on a.root_text= b.root_text
```
I get many extra records. I am using MySQL 5.6.1.7.
**UPDATE :**
<http://sqlfiddle.com/#!9/7b32a/2>
Query I am running
```
select * from word_detail w left join roots r
on r.root_text =w.root
and w.surah_no=1
and w.verse_no=1
and w.word_no=1
```
What am I doing wrong?
The result contains many records that are not needed; the filters on verse\_no and word\_no are not working.
**Update**
The issue was that the filters have to go in a `WHERE` clause after the left join, not in the `ON` condition:
```
select * from word_detail w left join roots r
on r.root_text =w.root
where w.surah_no=1
and w.verse_no=1
and w.word_no=1
```
|
If you want all records from `Table B` that are not present in `Table A`, you should use this:
```
Select *
from
B left join A
on a.root_text= b.root_text
where
a.root_text is null
```
or, if you want the opposite, all records from `Table A` that are not present in `Table B`:
```
Select *
from
A left join B
on a.root_text= b.root_text
where
b.root_text is null
```
btw, this is not a left join on MySQL:
```
select * from A a , B b where a.root_text=+b.root_text
```
but is evaluated as a simple INNER JOIN (the `+` is parsed as a unary plus).
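The `LEFT JOIN ... WHERE ... IS NULL` anti-join pattern above can be sketched on SQLite (sample rows invented for illustration):

```python
# Find B rows whose root_text has no match in A: left-join, then keep only
# the rows where the joined A columns came back NULL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (root_id INTEGER, root_text TEXT);
CREATE TABLE B (word_id INTEGER, word_text TEXT, root_text TEXT);
INSERT INTO A VALUES (1, 'ktb'), (2, 'qrd');
INSERT INTO B VALUES (1, 'kitab', 'ktb'), (2, 'xyz', 'zzz');  -- 'zzz' is an orphan
""")
orphans = conn.execute("""
SELECT b.word_text
  FROM B b LEFT JOIN A a ON a.root_text = b.root_text
 WHERE a.root_text IS NULL
""").fetchall()
print(orphans)  # [('xyz',)]
```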
|
The `(+)` outer-join operator is not standard SQL but specific to the Oracle RDBMS, so `=+` will not work as expected on MySQL; there it is simply parsed as `=` followed by a unary plus.
The `LEFT JOIN` syntax is correct; the "many extra records" stem from the data in your tables. You might want to use a `WHERE` clause as a filter, or an aggregate to group the result set and make it more manageable.
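The question's own update (filters belong in `WHERE`, not `ON`) is worth demonstrating, since it is the crux of the "extra records" problem. A minimal SQLite sketch with invented rows:

```python
# With a LEFT JOIN, conditions in ON only decide whether a match is found;
# every left-side row survives. Conditions in WHERE actually filter rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE word_detail (word TEXT, root TEXT, surah_no INT, verse_no INT, word_no INT);
CREATE TABLE roots (root_text TEXT);
INSERT INTO word_detail VALUES ('bismi', 'smw', 1, 1, 1), ('alhamdu', 'hmd', 1, 2, 1);
INSERT INTO roots VALUES ('smw');
""")
in_on = conn.execute("""
SELECT w.word FROM word_detail w
  LEFT JOIN roots r
    ON r.root_text = w.root AND w.surah_no = 1 AND w.verse_no = 1 AND w.word_no = 1
 ORDER BY w.word
""").fetchall()
in_where = conn.execute("""
SELECT w.word FROM word_detail w
  LEFT JOIN roots r ON r.root_text = w.root
 WHERE w.surah_no = 1 AND w.verse_no = 1 AND w.word_no = 1
 ORDER BY w.word
""").fetchall()
print(in_on)     # [('alhamdu',), ('bismi',)]  -- both rows kept
print(in_where)  # [('bismi',)]                -- actually filtered
```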
|
MySQL Joins not working as required
|
[
"",
"mysql",
"sql",
"select",
"left-join",
""
] |