| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm using MySQL 5.5, so I can't use FULLTEXT search; please don't suggest it.
What I want to do: say I have 5 records, for example:
```
Amitesh
Ami
Amit
Abhi
Arun
```
and if someone searches for Ami, it should return Ami first as the exact match, then Amit and Amitesh.
|
You can do:
```
select *
from table t
where col like '%Ami%'
order by (col = 'Ami') desc, length(col);
```
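A note on why this works: in MySQL the boolean expression `(col = 'Ami')` evaluates to 1 or 0, so sorting it descending puts exact matches first; `length(col)` then ranks shorter (closer) matches ahead of longer ones. A more portable sketch of the same idea, using a `CASE` expression instead of the MySQL-specific boolean-as-integer trick:
```
select *
from table t
where col like '%Ami%'
order by case when col = 'Ami' then 0 else 1 end,  -- exact match first
         length(col);                              -- then shortest match
```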
|
```
SELECT *
FROM table t
WHERE col LIKE '%Ami%'
ORDER BY INSTR(col,'Ami') asc, col asc;
```
|
How to use MySQL like with order by exact match first
|
[
"",
"mysql",
"sql",
""
] |
When I run this query I get an "Incorrect syntax" error at "(". Is this the correct way of using CONVERT?
```
SELECT
DATEPART(mm,[actualclosedate]) AS [Month],
COUNT([opportunityid]) AS 'Won Opportunities',
CONVERT(SUM(ISNULL([actualvalue],1)) as numeric (10,2)) AS 'Value'
FROM [dbo].[FilteredOpportunity]
WHERE [owneridname]= @SalesPerson
AND YEAR([actualclosedate]) = @Year
GROUP BY
DATEPART(mm,[actualclosedate])
```
|
With `CONVERT`, the target type comes first, before the expression:
```
SELECT
DATEPART(mm,[actualclosedate]) AS [Month],
COUNT([opportunityid]) AS 'Won Opportunities',
CONVERT(numeric (10,2),
SUM(
ISNULL([actualvalue],1)
)
) AS 'Value'
FROM [dbo].[FilteredOpportunity]
WHERE [owneridname]= @SalesPerson
AND YEAR([actualclosedate]) = @Year
GROUP BY
DATEPART(mm,[actualclosedate])
```
|
Instead of CONVERT() use [CAST()](https://technet.microsoft.com/en-us/library/aa226054(v=sql.80).aspx) function.
**Syntax**
```
CAST ( expression AS data_type )
```
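Applied to the query in the question (a sketch, keeping the original column names), the `CAST` form would be:
```
SELECT
    DATEPART(mm, [actualclosedate]) AS [Month],
    COUNT([opportunityid]) AS 'Won Opportunities',
    CAST(SUM(ISNULL([actualvalue], 1)) AS numeric(10,2)) AS 'Value'
FROM [dbo].[FilteredOpportunity]
WHERE [owneridname] = @SalesPerson
  AND YEAR([actualclosedate]) = @Year
GROUP BY DATEPART(mm, [actualclosedate])
```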
|
Convert gives an error
|
[
"",
"sql",
"sql-server",
""
] |
I got an error for the following query in SQL Server 2008
> Subquery returned more than 1 value. This is not permitted when the
> subquery follows =, !=, <, <= , >, >= or when the subquery is used as
> an expression.
I want to use a `SELECT` statement inside a `CASE` expression, after `THEN`.
Below is the query:
```
DECLARE @startTime DATETIME
,@endTime DATETIME
,@personId VARCHAR(max)
,@supplierId UNIQUEIDENTIFIER = NULL
SET @startTime = '2011-1-22'
SET @endTime = '2012-1-27'
SET @personId = '2dd3cd60-4acc-4ff1-9956-2938099c08af,69186022-78b5-4bc6-9878-55b14a44a5aa,e64f0bf8-51cc-4c85-a4bd-2615d3ba7a52,53091d8b-2891-4c46-babd-1f0036ffe003,ea21226c-8be6-48de-a707-fe0edd0b62a3,f5ce7a19-a8da-4c0c-a233-861f9330361b'
DECLARE @table TABLE (personId UNIQUEIDENTIFIER)
INSERT INTO @table
SELECT deviceid
FROM [dbo].[Split](@personId, ',')
CREATE TABLE #tempTable (
PERSON_ID UNIQUEIDENTIFIER
,ASSET_ID UNIQUEIDENTIFIER
,EVENT_TYPE_ID INT
,EVENT_START_DATE DATETIME
,EVENT_DATE DATETIME
,AMEND_TIME INT
,GRP INT
,SEQ INT identity(1, 1)
,ACTIVITY_TIME INT
)
--Adding Raw data to TEMP table
INSERT INTO #tempTable
SELECT ASSET_EVENT.PERSON_ID
,ASSET_ID
,Event_Type_id
,EVENT_START_DATE
,Event_date
,ISNULL(DATEDIFF(ss, event_start_date, Event_date), 0) AS INTERVAL
,0
,0
FROM ASSET_EVENT
INNER JOIN PERSON ON ASSET_EVENT.PERSON_ID = PERSON.PERSON_ID
WHERE event_type_id < 3
AND EVENT_DATE >= @startTime
AND EVENT_DATE <= @endTime
AND ASSET_EVENT.PERSON_ID IN (
CASE (LEN(@personId))
WHEN 0
THEN ASSET_EVENT.PERSON_ID
ELSE (
SELECT deviceid
FROM [dbo].[Split](@personId, ',')
)
END
)
AND ISNULL(CONVERT(VARCHAR(40), PERSON.SUPPLIER_ID), '') = CASE
WHEN @supplierId IS NOT NULL
THEN CONVERT(VARCHAR(36), @supplierId)
ELSE ''
END
ORDER BY person_id
,event_date
SELECT *
FROM #tempTable
DROP TABLE #tempTable
```
Is there any alternative for this query?
|
Try changing your `select` query like this.
```
SELECT ASSET_EVENT.PERSON_ID,
ASSET_ID,
Event_Type_id,
EVENT_START_DATE,
Event_date,
Isnull(Datediff(ss, event_start_date, Event_date), 0) AS INTERVAL,
0,
0
FROM ASSET_EVENT
INNER JOIN PERSON
ON ASSET_EVENT.PERSON_ID = PERSON.PERSON_ID
WHERE event_type_id < 3
AND EVENT_DATE >= @startTime
AND EVENT_DATE <= @endTime
AND ( Len(@personId) = 0
OR ASSET_EVENT.PERSON_ID IN (SELECT deviceid
FROM [dbo].[Split](@personId, ',')) )
AND Isnull(CONVERT(VARCHAR(40), PERSON.SUPPLIER_ID), '') = CASE
WHEN @supplierId IS NOT NULL THEN CONVERT(VARCHAR(36), @supplierId)
ELSE ''
END
```
|
Edit this line
```
and ASSET_EVENT.PERSON_ID in ( CASE (LEN(@personId)) WHEN 0 THEN ASSET_EVENT.PERSON_ID ELSE (select deviceid from [dbo].[Split]( @personId , ',')) END )
```
to
```
and
(
@personId = ''
OR
EXISTS
(
select * from [dbo].[Split]( @personId , ',')
where deviceid = ASSET_EVENT.PERSON_ID
)
)
```
|
select command inside then in case statement
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm using Pentaho and I was wondering if it's possible to do a query like this:
```
SELECT Something
FROM Somewhere
WHERE (CASE WHEN condition1 = 0 THEN Option IN (Parameter) ELSE
(Option IN (SELECT Option FROM Somewhere_else)) END);
```
To be more precise: I want to select everything if my condition is not met in the WHERE clause (the thing I want to select in the WHERE is different from the original select).
Don't hesitate to ask me about my approach, and of course to answer!
Thank you!
PS: The Parameter is a Pentaho parameter which here represents an array
|
Just use regular conditions:
```
SELECT Something
FROM Somewhere
WHERE (condition1 = 0 AND Option IN (Parameter))
OR (condition1 != 0 AND Option IN (SELECT Option FROM Somewhere_else));
```
|
You could try the following:
```
SELECT Something
FROM Somewhere
WHERE
1 = case when condition1 = 0 THEN
case when Option IN (Parameter) then 1 else 0 end
else
case when Option IN (SELECT Option FROM Somewhere_else) then 1 else 0 end
end
```
|
SELECT ALL in a CASE inside a SELECT
|
[
"",
"mysql",
"sql",
"select",
"pentaho",
""
] |
I have a table-valued parameter `@supplierData` in a stored procedure which has columns as follows
```
sName | sLocation | currYearExpt | currYearImpt | prevYearExpt | prevYearImpt
----------
XXX | USA | 1000 | 2000 | 1000 | 2050
YYY | U.K | 8000 | 12000 | 10000 | 29000
```
The table needs to be inserted to the below 2 tables
`SUPPLIER` table:
```
sId | sName | sLocation |
(Identity) | | |
----------
1 | XXX | USA |
2 | YYY | U.K |
```
and
`SUPPLIERRECORD` table:
```
YearId | sId | currYearExpt | currYearImpt | prevYearExpt | prevYearImpt|
----------
1 | 1 | 1000 | 2000 | 1000 | 2050 |
2 | 2 | 8000 | 12000 | 10000 | 29000 |
```
The second table references the `sId` values created as part of the same stored procedure execution.
Should I insert record by record from the `@supplierData` parameter?
|
You could use the DENSE\_RANK function (<https://msdn.microsoft.com/en-us/library/ms173825.aspx>) with OVER to specify the ordering and a partitioning criterion. That gives you numeric IDs for your suppliers. You could insert them into a table variable first and then use it for your two INSERTs.
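A sketch of that idea (assumptions: `SUPPLIER.sId` can be inserted explicitly, i.e. it is not an identity column or `IDENTITY_INSERT` is on; `SUPPLIERRECORD.YearId` is generated automatically; the `MAX(sId)` offset avoids clashing with existing rows):
```
DECLARE @base int = (SELECT ISNULL(MAX(sId), 0) FROM SUPPLIER);
DECLARE @ranked TABLE (
    sId int, sName nvarchar(50), sLocation nvarchar(50),
    currYearExpt int, currYearImpt int, prevYearExpt int, prevYearImpt int
);
-- One numeric ID per supplier, generated with DENSE_RANK
INSERT INTO @ranked
SELECT @base + DENSE_RANK() OVER (ORDER BY sName, sLocation),
       sName, sLocation, currYearExpt, currYearImpt, prevYearExpt, prevYearImpt
FROM @supplierData;
-- Reuse the same table variable for both INSERTs
INSERT INTO SUPPLIER (sId, sName, sLocation)
SELECT sId, sName, sLocation FROM @ranked;
INSERT INTO SUPPLIERRECORD (sId, currYearExpt, currYearImpt, prevYearExpt, prevYearImpt)
SELECT sId, currYearExpt, currYearImpt, prevYearExpt, prevYearImpt FROM @ranked;
```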
|
Here you are. Use `insert into select` or `select into`; they were made for exactly this kind of task:
```
DECLARE @supplierData TABLE
(
sName nvarchar(50),
sLocation nvarchar(50),
currYearExpt int,
currYearImpt int,
prevYearExpt int,
prevYearImpt int
);
DECLARE @SUPPLIER TABLE
(
sId int,
sName nvarchar(50),
sLocation nvarchar(50)
);
insert into @SUPPLIER values(1,'XXX','USA');
insert into @SUPPLIER values(2,'YYY','U.K');
DECLARE @SUPPLIERRECORD TABLE
(
YearId int,
sId int,
currYearExpt int,
currYearImpt int,
prevYearExpt int,
prevYearImpt int
);
insert into @SUPPLIERRECORD values(1,1,1000,2000,1000,2050);
insert into @SUPPLIERRECORD values(2,2,8000,12000,10000,29000);
insert into @supplierData
select a.sName, a.sLocation, b.currYearExpt, b.currYearImpt, b.prevYearExpt, b.prevYearImpt
from @SUPPLIER a inner join @SUPPLIERRECORD b on a.sId = b.sId
select * from @supplierData
```
Hope this helps.
|
Insert Table parameter to 2 different tables within a stored procedure
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
"cursor",
""
] |
In SAS, is there an easy way to extract records from a data set that have more than 2 occurrences?
The DUPS command gives duplicates, but how do you get triplicates and higher?
For example, in this dataset:
```
col1 col2 col3 col4 col5
1 2 3 4 5
1 2 3 5 7
1 2 3 4 8
A B C D E
A B C S W
```
The first 3 columns are my key columns. So in my output I only want the first 3 rows (triplicates), not the last 2 rows (duplicates).
|
You can achieve this using proc sql pretty easily. The below example will keep all rows from the table that are triplicates (or higher).
Create some sample data:
```
data have;
input col1 $
col2 $
col3 $
col4 $
col5 $
;
datalines;
1 2 3 4 5
1 2 3 5 7
1 2 3 4 8
A B C D E
A B C S W
;
run;
```
First identify the triplicates. I'm assuming you want triplicates (or above), and that you're grouping on the first 3 columns:
```
proc sql noprint;
create table tmp as
select col1, col2, col3, count(*)
from have
group by 1,2,3
having count(*) ge 3
;
quit;
```
Then use the tmp table we just created to filter against the original dataset via a join.
```
proc sql noprint;
create table want as
select a.*
from have a
join tmp b on b.col1 = a.col1
and b.col2 = a.col2
and b.col3 = a.col3
;
quit;
```
These 2 steps could be combined into a single step with a subquery if desired, but I'll leave that up to you.
**EDIT:** Keith's answer provides a shorthand way to combine these 2 steps into one.
|
I would use `proc sql` for this, taking advantage of the `group by` and `having` clauses. Even though it's one step of code, it requires 2 passes of the data in the background; however, I believe that's the case whichever method you use.
```
data have;
input col1 $ col2 $ col3 $ col4 $ col5 $;
datalines;
1 2 3 4 5
1 2 3 5 7
1 2 3 4 8
A B C D E
A B C S W
;
run;
proc sql;
create table want as
select * from have
group by col1,col2,col3
having count(*)>2;
quit;
```
|
How to extract triplicates or higher records in SAS
|
[
"",
"sql",
"sas",
""
] |
I have a table like:
```
ID TIMEVALUE
----- -------------
1 06.07.15 06:43:01,000000000
2 06.07.15 12:17:01,000000000
3 06.07.15 18:21:01,000000000
4 06.07.15 23:56:01,000000000
5 07.07.15 04:11:01,000000000
6 07.07.15 10:47:01,000000000
7 07.07.15 12:32:01,000000000
8 07.07.15 14:47:01,000000000
```
and I want to group this data by specific time ranges.
My current query looks like this:
```
SELECT TO_CHAR(TIMEVALUE, 'YYYY\MM\DD'), COUNT(ID),
SUM(CASE WHEN TO_CHAR(TIMEVALUE, 'HH24MI') <=700 THEN 1 ELSE 0 END) as morning,
SUM(CASE WHEN TO_CHAR(TIMEVALUE, 'HH24MI') >700 AND TO_CHAR(TIMEVALUE, 'HH24MI') <1400 THEN 1 ELSE 0 END) as daytime,
SUM(CASE WHEN TO_CHAR(TIMEVALUE, 'HH24MI') >=1400 THEN 1 ELSE 0 END) as evening FROM Table
WHERE TIMEVALUE >= to_timestamp('05.07.2015','DD.MM.YYYY')
GROUP BY TO_CHAR(TIMEVALUE, 'YYYY\MM\DD')
```
and I am getting this output
```
day overall morning daytime evening
----- ---------
2015\07\05 454 0 0 454
2015\07\06 599 113 250 236
2015\07\07 404 139 265 0
```
so that is fine grouping on the same day (0-7 o'clock, 7-14 o'clock and 14-24 o'clock)
But my question now is:
**How can I group over midnight?**
For example count from 6-14 , 14-23 and 23-6 o'clock on next day.
I hope you understand my question. Feel free to improve my query above if there is a better solution.
|
**EDIT**: It is tested now: [SQL Fiddle](http://sqlfiddle.com/#!4/6a7d8/2)
The key is simply to adjust the `group by` so that anything before 6am gets grouped with the previous day. After that, the counts are pretty straight-forward.
```
SELECT TO_CHAR(CASE WHEN EXTRACT(HOUR FROM timevalue) < 6
THEN timevalue - 1
ELSE timevalue
END, 'YYYY\MM\DD') AS day,
COUNT(*) AS overall,
SUM(CASE WHEN EXTRACT(HOUR FROM timevalue) >= 6 AND EXTRACT(HOUR FROM timevalue) < 14
THEN 1 ELSE 0 END) AS morning,
SUM(CASE WHEN EXTRACT(HOUR FROM timevalue) >= 14 AND EXTRACT(HOUR FROM timevalue) < 23
THEN 1 ELSE 0 END) AS daytime,
SUM(CASE WHEN EXTRACT(HOUR FROM timevalue) < 6 OR EXTRACT(HOUR FROM timevalue) >= 23
THEN 1 ELSE 0 END) AS evening
FROM my_table
WHERE timevalue >= TO_TIMESTAMP('05.07.2015','DD.MM.YYYY')
GROUP BY TO_CHAR(CASE WHEN EXTRACT(HOUR FROM timevalue) < 6
THEN timevalue - 1
ELSE timevalue
END, 'YYYY\MM\DD');
```
|
Subtract 1 day from timevalue for times before '06:00' first, and then:
[SQLFiddle demo](http://sqlfiddle.com/#!4/8d2a5/1)
```
select TO_CHAR(day, 'YYYY\MM\DD') day, COUNT(ID) cnt,
SUM(case when '23' < tvh or tvh <= '06' THEN 1 ELSE 0 END) as midnight,
SUM(case when '06' < tvh and tvh <= '14' THEN 1 ELSE 0 END) as daytime,
SUM(case when '14' < tvh and tvh <= '23' THEN 1 ELSE 0 END) as evening
FROM (
select id, to_char(TIMEVALUE, 'HH24') tvh,
trunc(case when (to_char(timevalue, 'hh24') <= '06')
then timevalue - interval '1' day
else timevalue end) day
from t1
)
GROUP BY day
```
|
Select data grouped by time over midnight
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
I need to split one record into 2 when it meets certain criteria, and I'm having difficulty joining the records together after splitting them up.
I have this table:

For meetings that last a whole day, I need to split them into 2 sessions, one in the morning and one in the afternoon. In this example, I need to split Test 2 into AM and PM sessions.

I have used this statement and it serves me well:
```
WITH DATA
AS
(SELECT
CASE
WHEN level=1 THEN 'AM'
WHEN LEVEL=2 THEN 'PM'
END "Session"
FROM dual CONNECT BY level<3)
SELECT "Meeting","From","EndTime","StartTime","Session"
FROM "TEST", DATA
WHERE ("StartTime" < 12 AND "StartTime">=8) AND ( "EndTime" > 12 AND "EndTime" <= 17)
```
However, when I attempted to combine this with the meetings that last half a day, I got the error below:
```
ORA-32034: unsupported use of WITH clause
32034. 00000 - "unsupported use of WITH clause"
*Cause: Inproper use of WITH clause because one of the following two reasons
1. nesting of WITH clause within WITH clause not supported yet
2. For a set query, WITH clause can't be specified for a branch.
3. WITH clause can't sepecified within parentheses.
*Action: correct query and retry
Error at Line: 56 Column: 1
```
This is the sql statement I used:
```
SELECT *
FROM
(
SELECT "Meeting","From","EndTime","StartTime" ,
CASE
WHEN "StartTime" >= 8 AND "EndTime" <= 12 THEN 'AM'
WHEN "StartTime" >= 12 AND "EndTime" <= 17 THEN 'PM'
ELSE 'UNKNOWN'
END "Session"
FROM "TEST"
WHERE ("StartTime" >= 8 AND "EndTime" <= 12)
OR
("StartTime" >= 12 AND "EndTime" <= 17)
) HalfDay
UNION ALL
(
WITH DATA
AS
(SELECT
CASE
WHEN level=1 THEN 'AM'
WHEN LEVEL=2 THEN 'PM'
END "Session"
FROM dual CONNECT BY level<3)
SELECT "Meeting","From","EndTime","StartTime","Session"
FROM "TEST", DATA
WHERE ("StartTime" < 12 AND "StartTime">=8) AND ( "EndTime" > 12 AND "EndTime" <= 17)
) FullDay
```
How do I fix this?
Please find attached the script to create the table and related data.
```
CREATE TABLE "TEST"
( "Meeting" VARCHAR2(20 BYTE),
"From" DATE,
"StartTime" NUMBER,
"EndTime" NUMBER
) ;
Insert into TEST ("Meeting","From","StartTime","EndTime") values ('Test 1',to_date('06-JUL-15','DD-MON-RR'),12,17);
Insert into TEST ("Meeting","From","StartTime","EndTime") values ('Test 2',to_date('12-DEC-15','DD-MON-RR'),8,17);
```
My goal is getting this output

along with this one in 1 view

|
You're not really nesting two `with` clauses, but you are nesting one inside a union.
The `with` clause declares one or more subqueries and assigns them a name that can be used further down in the query, like a view.
So that's what you can do here as well. Move the entire `with` clause and the inline view it defines to the top; the union part follows after it. After a little clean-up it would look like this:
```
WITH
DATA AS
(SELECT
CASE
WHEN level=1 THEN 'AM'
WHEN LEVEL=2 THEN 'PM'
END "Session"
FROM dual CONNECT BY level < 3)
SELECT "Meeting","From","EndTime","StartTime" ,
CASE
WHEN "StartTime" >= 8 AND "EndTime" <= 12 THEN 'AM'
WHEN "StartTime" >= 12 AND "EndTime" <= 17 THEN 'PM'
ELSE 'UNKNOWN'
END "Session"
FROM "TEST"
WHERE ("StartTime" >= 8 AND "EndTime" <= 12)
OR
("StartTime" >= 12 AND "EndTime" <= 17)
UNION ALL
SELECT "Meeting", "From", "EndTime", "StartTime", "Session"
FROM "TEST", DATA
WHERE
"StartTime" < 12 AND "StartTime" >= 8 AND
"EndTime" > 12 AND "EndTime" <= 17
```
The same query without `WITH`:
```
SELECT "Meeting","From","EndTime","StartTime" ,
CASE
WHEN "StartTime" >= 8 AND "EndTime" <= 12 THEN 'AM'
WHEN "StartTime" >= 12 AND "EndTime" <= 17 THEN 'PM'
ELSE 'UNKNOWN'
END "Session"
FROM "TEST"
WHERE ("StartTime" >= 8 AND "EndTime" <= 12)
OR
("StartTime" >= 12 AND "EndTime" <= 17)
UNION ALL
SELECT "Meeting", "From", "EndTime", "StartTime", "Session"
FROM
"TEST",
(SELECT
CASE
WHEN level=1 THEN 'AM'
WHEN LEVEL=2 THEN 'PM'
END "Session"
FROM dual CONNECT BY level < 3)
WHERE
"StartTime" < 12 AND "StartTime" >= 8 AND
"EndTime" > 12 AND "EndTime" <= 17
```
|
A slightly more compact solution using two subqueries (omitting the union).
The first one is a mapping table that drives the join, either one-to-one or as a split into two records.
The second subquery transforms your data source, adding a "Duration" key which represents
the three cases: AM only, PM only, or both AM + PM.
The rest is a simple join.
```
with join_helper as (
select 'AM' "Duration", 'AM' "Session" from dual union all
select 'PM' "Duration", 'PM' "Session" from dual union all
select 'AM-PM' "Duration", 'AM' "Session" from dual union all
select 'AM-PM' "Duration", 'PM' "Session" from dual),
session_duration as (
select test.*,
CASE
WHEN "StartTime" < 12 AND "EndTime" >= 12 THEN 'AM-PM'
WHEN "StartTime" < 12 THEN 'AM'
WHEN "EndTime" >= 12 THEN 'PM'
END "Duration"
from test)
select a."Meeting", a."From",a."StartTime",a."EndTime", b."Session"
from session_duration a, join_helper b
where a."Duration" = b."Duration"
;
```
You may find the logic less scattered in this form of the query.
|
Oracle Nesting With Clauses
|
[
"",
"sql",
"oracle",
"common-table-expression",
""
] |
I have a data set that has timestamped entries over various sets of groups.
```
Timestamp -- Group -- Value
---------------------------
1 -- A -- 10
2 -- A -- 20
3 -- B -- 15
4 -- B -- 25
5 -- C -- 5
6 -- A -- 5
7 -- A -- 10
```
I want to sum these values by the `Group` field, but parsed as it appears in the data. For example, the above data would result in the following output:
```
Group -- Sum
A -- 30
B -- 40
C -- 5
A -- 15
```
I do *not* want this, which is all I've been able to come up with on my own so far:
```
Group -- Sum
A -- 45
B -- 40
C -- 5
```
Using Oracle 11g, this is what I've hobbled together so far. I know that this is wrong, but I'm hoping I'm at least on the right track with `RANK()`. In the real data, entries with the same group could be 2 timestamps apart, or 100; there could be one entry in a group, or 100 consecutive ones. It doesn't matter: I need them separated.
```
WITH SUB_Q AS
(SELECT K_ID
, GRP
, VAL
-- GET THE RANK FROM TIMESTAMP TO SEPARATE GROUPS WITH SAME NAME
, RANK() OVER(PARTITION BY K_ID ORDER BY TMSTAMP) AS RNK
FROM MY_TABLE
WHERE K_ID = 123)
SELECT T1.K_ID
, T1.GRP
, SUM(CASE
WHEN T1.GRP = T2.GRP THEN
T1.VAL
ELSE
0
END) AS TOTAL_VALUE
FROM SUB_Q T1 -- MAIN VALUE
INNER JOIN SUB_Q T2 -- TIMSTAMP AFTER
ON T1.K_ID = T2.K_ID
AND T1.RNK = T2.RNK - 1
GROUP BY T1.K_ID
, T1.GRP
```
Is it possible to group in this way? How would I go about doing this?
|
I approach this problem by defining a grouping key which is the difference of two `row_number()` values:
```
select group, sum(value)
from (select t.*,
(row_number() over (order by timestamp) -
row_number() over (partition by group order by timestamp)
) as grp
from my_table t
) t
group by group, grp
order by min(timestamp);
```
The difference of two row numbers is constant for adjacent values.
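To see why, here are the two row numbers computed for the sample data (`rn_all` ordered over everything, `rn_grp` partitioned by group); their difference `grp` is constant within each consecutive run:
```
Timestamp  Group  rn_all  rn_grp  grp (rn_all - rn_grp)
1          A      1       1       0
2          A      2       2       0
3          B      3       1       2
4          B      4       2       2
5          C      5       1       4
6          A      6       3       3
7          A      7       4       3
```
Grouping by (`Group`, `grp`) then yields the four runs A, B, C, A with sums 30, 40, 5, 15.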
|
A solution using `LAG` and windowed analytic functions:
[SQL Fiddle](http://sqlfiddle.com/#!4/bf7ab/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE TEST ( "Timestamp", "Group", Value ) AS
SELECT 1, 'A', 10 FROM DUAL
UNION ALL SELECT 2, 'A', 20 FROM DUAL
UNION ALL SELECT 3, 'B', 15 FROM DUAL
UNION ALL SELECT 4, 'B', 25 FROM DUAL
UNION ALL SELECT 5, 'C', 5 FROM DUAL
UNION ALL SELECT 6, 'A', 5 FROM DUAL
UNION ALL SELECT 7, 'A', 10 FROM DUAL;
```
**Query 1**:
```
WITH changes AS (
SELECT t.*,
CASE WHEN LAG( "Group" ) OVER ( ORDER BY "Timestamp" ) = "Group" THEN 0 ELSE 1 END AS hasChangedGroup
FROM TEST t
),
groups AS (
SELECT "Group",
VALUE,
SUM( hasChangedGroup ) OVER ( ORDER BY "Timestamp" ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) AS grp
FROM changes
)
SELECT "Group",
SUM( VALUE )
FROM Groups
GROUP BY "Group", grp
ORDER BY grp
```
**[Results](http://sqlfiddle.com/#!4/bf7ab/1/0)**:
```
| Group | SUM(VALUE) |
|-------|------------|
| A | 30 |
| B | 40 |
| C | 5 |
| A | 15 |
```
|
SQL Grouping by Ranges
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I tried to fetch data from an Oracle SQL table along with the count of records, like the following:
```
SELECT *,
(COUNT(BRAND_ID) AS TOTAL)
FROM
(
SELECT BRAND_ID,
BRAND_CODE,
BRAND_TITLE
FROM BRAND
WHERE ACTIVE = '1'
ORDER BY BRAND_TITLE ASC
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY
) BRAND
LEFT JOIN
((
SELECT PRODUCT_ID,
PRODUCT_SKU_ID,
PRODUCT_WEB_ID,
PRODUCT_TITLE,
PRODUCT_SALES_PRICE,
PRODUCT_REGULAR_PRICE,
PRODUCT_RATING
FROM PRODUCT
WHERE
(
PRODUCT_TYPE='B'
OR PRODUCT_TYPE='R'
)
AND AVAILABILITY='1'
) PRDUCT ) ON BRAND.BRAND_CODE= PRDUCT.BRAND_CODE
```
When I execute this I get the following error:
> ERROR: ORA-00923: FROM keyword not found where expected
How can I fix it?
Thanks in advance!
|
I guess you should remove `*` from the `SELECT` statement in the first line. Try the one below.
```
SELECT COUNT(BRAND_ID) AS TOTAL
FROM
(
SELECT BRAND_ID,
BRAND_CODE,
BRAND_TITLE
FROM BRAND
WHERE ACTIVE = '1'
ORDER BY BRAND_TITLE ASC
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY
) BRAND
LEFT JOIN
((
SELECT PRODUCT_ID,
PRODUCT_SKU_ID,
PRODUCT_WEB_ID,
PRODUCT_TITLE,
PRODUCT_SALES_PRICE,
PRODUCT_REGULAR_PRICE,
PRODUCT_RATING
FROM PRODUCT
WHERE
(
PRODUCT_TYPE='B'
OR PRODUCT_TYPE='R'
)
AND AVAILABILITY='1'
) PRDUCT ) ON BRAND.BRAND_CODE= PRDUCT.BRAND_CODE
```
|
I don't have 12c, so I can't test, but maybe this is what you're after:
```
SELECT *
FROM
(
  SELECT BRAND_ID,
         BRAND_CODE,
         BRAND_TITLE,
         TOTAL
  FROM (select b.*,
               count(brand_id) over () total
        from BRAND b
        WHERE ACTIVE = '1'
        ORDER BY BRAND_TITLE ASC
        OFFSET 10 ROWS
        FETCH NEXT 10 ROWS ONLY
       )
  ) BRAND
LEFT JOIN
((
SELECT PRODUCT_ID,
PRODUCT_SKU_ID,
PRODUCT_WEB_ID,
PRODUCT_TITLE,
PRODUCT_SALES_PRICE,
PRODUCT_REGULAR_PRICE,
PRODUCT_RATING
FROM PRODUCT
WHERE
(
PRODUCT_TYPE='B'
OR PRODUCT_TYPE='R'
)
AND AVAILABILITY='1'
) PRDUCT ) ON BRAND.BRAND_CODE= PRDUCT.BRAND_CODE;
```
This uses an analytic query to get the count of all brand\_ids over the whole table before you filter the rows. I'm not sure if you wanted the count per brand\_id (`count(*) over (partition by brand_id)`) or perhaps the count of distinct brand\_ids (`count(distinct brand_id) over ()`), though, so you'll have to play around with the count function to get the results you're after.
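A sketch comparing those analytic variants side by side (hypothetical, run against the `BRAND` table):
```
select brand_id,
       count(brand_id) over ()               as total_rows,      -- all filtered rows
       count(*) over (partition by brand_id) as rows_per_brand,  -- rows per brand_id
       count(distinct brand_id) over ()      as distinct_brands  -- distinct brand ids
from brand
where active = '1';
```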
|
ERROR: ORA-00923: FROM keyword not found where expected
|
[
"",
"sql",
"oracle",
""
] |
I have two tables which I join so that I may compare a field and extract records from one table where the field being compared is not in both tables:
```
Table A
---------
Comp Val
111 327
112 234
113 265
114 865
Table B
-----------
Comp2 Val2
111 7676
112 5678
```
So what I'm doing is joining both tables on Comp-Comp2; then I wish to select all values from Table A for which a corresponding Comp does not exist in Table B. In this case, the query should return:
```
Result
---------
Comp Val
113 265
114 865
```
Here is the query:
```
select * into Result from TableA
inner join TableB
on (TableB.Comp2 = TableA.Comp)
where TableB.Comp2 <> TableA.Comp
```
Problem is, it pulls values from both tables. Is there a way to select values from TableA alone without specifying the fields explicitly?
|
I think you want this, though:
```
select *
from TableA a
where
  not exists (select b.Comp2 from TableB b where a.Comp = b.Comp2)
```
That will find all records in A that don't exist in B.
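An equivalent `LEFT JOIN` form of the same anti-join, shown as a sketch (it should return the same rows as the `NOT EXISTS` version):
```
select a.*
from TableA a
left join TableB b
  on b.Comp2 = a.Comp
where b.Comp2 is null  -- keep only rows with no match in TableB
```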
|
Just prefix the `*` with the desired table name, like this:
```
select TableA.* into Result from TableA
inner join TableB
on (TableB.Comp2 = TableA.Comp)
where TableB.Comp2 <> TableA.Comp
```
|
Inner Join Tables But Select From One Table Only
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have been stuck on a recent problem with a SQL query. What I'm trying to achieve is to get each product in the store and show how many of them have been sold each month. However, in some months these products were not sold, which means they won't be displayed.
For instance, this is the result I'm getting right now
```
Article Month Sold
CN140027 6 312
CN140027 7 293
CN140027 12 122
CN140186 1 10
CN140186 4 2
```
While I want to get something more like this
```
Article Month Sold
CN140027 6 312
CN140027 7 293
CN140027 8 0
CN140027 9 0
CN140027 10 0
CN140027 11 0
CN140027 12 122
CN140186 1 10
CN140186 2 0
CN140186 3 0
CN140186 4 2
```
And here is the query I'm using at the moment
```
SELECT k.artikelnr, Months.datefield as `Months`, IFNULL(SUM(k.menge),0) as `Quantity`
FROM store_shop_korb as k LEFT OUTER JOIN office_calendar AS Months
ON Months.datefield = month(k.date_insert)
WHERE k.date_insert BETWEEN "2014-12-01" AND "2015-12-31"
group by k.artikelnr, Months.datefield
```
What am I missing? Or what am I doing wrong? Any help is really appreciated.
Thanks in advance.
EDIT:
Additional information:
office\_calendar is the calendar table. It only contains the months as entries, from 1 to 12.
Additionally, I'm taking the article/product ID from a table called 'store\_shop\_korb', which contains all the lines of a made order (so it contains the article ID, its price, the quantity for each order..)
|
This works for me:
```
SELECT k.artikelnr, c.datefield AS `Month`, COALESCE(s.Quantity, 0) AS Sold
FROM (
SELECT artikelnr
FROM store_shop_korb
GROUP BY artikelnr
) k
JOIN office_calendar c
LEFT JOIN (
SELECT artikelnr, MONTH(date_insert) AS monthfield, SUM(menge) AS Quantity
FROM store_shop_korb
GROUP BY artikelnr, MONTH(date_insert)
) s ON k.artikelnr = s.artikelnr AND c.datefield = s.monthfield
ORDER BY k.artikelnr, c.datefield
```
If you have a table of articles, you can use it in the place of subquery k. I'm basically normalizing on the fly.
Explanation:
There's basically 3 sets of data that get joined. The first is a distinct set of articles (k), the second is a distinct set of months (c). These two are joined without restriction, meaning you get the cartesian product (every article x every month). This result is then left-joined to the sales per month (s) so that we don't lose 0 entries.
|
I have tried this in MSAccess and it seems to work OK
```
SELECT PRODUCT, CALENDAR.MONTH, A
FROM CALENDAR LEFT JOIN (
SELECT PRODUCT, MONTH(SALEDTE) AS M, SUM(SALEAMOUNT) AS A
FROM SALES
WHERE SALEDTE BETWEEN #1/1/2015# AND #12/31/2015#
GROUP BY PRODUCT, MONTH(SALEDTE) ) AS X
ON X.M = CALENDAR.MONTH
```

|
Get product total sales per moth, with 0 in the gaps
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to make a query that returns the difference in days so I can get the average number of days over a period of time. The situation: I need to get the max date for status 2 and the max date for status 3 of a request, and find how much time the user spent in that period.
So far, with the query I have right now, I get the max and min and the difference between them in days, but they are not the max of status 2 and the max of status 3.
Query I have so far:
```
SELECT distinct t1.user, t1.Request,
Min(t1.Time) as MinDate,
Max(t1.Time) as MaxDate,
DATEDIFF(day, MIN(t1.Time), MAX(t1.Time))
FROM [Hst_Log] t1
where t1.Request = 146800
GROUP BY t1.Request, t1.user
ORDER BY t1.user, max(t1.Time) desc
```
Example table:
```
-------------------------------
user | Request | Status | Time
-------------------------------
User 1 | 2 | 1 | 6/1/15 3:25 PM
User 2 | 1 | 1 | 2/1/15 3:24 PM
User 2 | 3 | 1 | 2/1/15 3:24 PM
User 1 | 4 | 1 | 5/10/15 3:18 PM
User 3 | 3 | 2 | 5/4/15 2:36 PM
User 2 | 2 | 2 | 6/4/15 2:34 PM
User 3 | 2 | 3 | 6/10/15 5:51 PM
User 1 | 1 | 2 | 5/1/15 5:49 PM
User 3 | 4 | 2 | 5/16/15 2:39 PM
User 2 | 4 | 2 | 5/17/15 2:32 PM
User 2 | 3 | 2 | 4/6/15 2:22 PM
User 2 | 3 | 3 | 4/7/15 2:06 PM
-------------------------------
```
I will appreciate any help.
|
You'll need to use subqueries since the groups for the min and max times are different. One query will pull the min value where the status is 2. Another will pull the max value where the status is 3.
Something like this:
```
SELECT MinDt.[User], minDt.MinTime, MaxDt.MaxTime, datediff(d,minDt.MinTime, MaxDt.MaxTime) as TimeSpan
FROM
(SELECT t1.[user], t1.Request,
Min(t1.Time) as MinTime
FROM [Hst_Log] t1
where t1.Request = 146800
and t1.[status] = 2
GROUP BY t1.Request, t1.[user]) MinDt
INNER JOIN
(SELECT t1.[user], t1.Request,
Max(t1.Time) as MaxTime
FROM [Hst_Log] t1
where t1.[status] = 3
GROUP BY t1.Request, t1.[user]) MaxDt
ON MinDt.[User] = MaxDt.[User] and minDt.Request = maxDt.Request
```
|
What is your SQL Server version? You could use your query as a CTE and do a follow-up SELECT where you can use the min and max dates as a date period.
EDIT: Example
```
WITH myCTE AS
(
put your query here
)
SELECT * FROM myCTE
```
You can use myCTE for further joins too, pick out the needed dates, use a sub-select, whatever you need.
Depending on the version you could also think about using OVER; have a look at the link below, it could be helpful:
<https://msdn.microsoft.com/en-us/library/ms189461.aspx>
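A sketch of how `OVER` could apply here (hypothetical, using the sample columns; `MIN`/`MAX` with a `CASE` filter inside the window):
```
SELECT DISTINCT
       t.[user], t.Request,
       MIN(CASE WHEN t.[Status] = 2 THEN t.[Time] END)
           OVER (PARTITION BY t.[user], t.Request) AS MinTime,
       MAX(CASE WHEN t.[Status] = 3 THEN t.[Time] END)
           OVER (PARTITION BY t.[user], t.Request) AS MaxTime
FROM [Hst_Log] t
WHERE t.Request = 146800
```
`DATEDIFF` between the two window results would then give the span per user and request.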
|
SQL Query AVG Date Time In same Table Column
|
[
"",
"sql",
"sql-server",
"max",
"average",
"datediff",
""
] |
I would like to query by the following
```
(statement1 AND statement2 AND (statement3 OR statement4 ))
```
This is my Hive query. I verified that it doesn't work, since it only returns rows for statement3, and I know there are cases where statement4 is true:
```
SELECT
cid,
SUM(count) AS total_count
FROM
count_by_day
WHERE
time >= 1435536000
AND time < 1436140800
AND(
cid = '4eb3441f282d4d657a000016'
OR cid = '14ebe153121a863462300043d'
)
GROUP BY
cid
```
Can someone tell me what is wrong? Thanks
|
1. Is `count` a real column name? Double check that.
2. Also check that your `time` column is a numeric type, probably a bigint. If it isn't, cast it as a bigint like this:
```
WHERE cast(time as bigint) >= 1435536000 AND cast(time as bigint) < 1436140800
```
3. Try changing your `OR` condition to an `IN` condition.
```
SELECT
cid,
SUM(count) AS total_count
FROM
count_by_day
WHERE time >= 1435536000 AND time < 1436140800
AND cid in('4eb3441f282d4d657a000016','14ebe153121a863462300043d')
GROUP BY
cid;
```
Try each change one at a time so you know what the fix is.
|
I was always taught to use UNION instead of OR in relational databases. Try it and see if a union solves your issue.
```
select cols
from table
where statement1 AND statement2 AND statement3
union all
select cols
from table
where statement1 AND statement2 AND statement4
```
|
How do you group OR clause in WHERE statement using HIVE
|
[
"",
"sql",
"hadoop",
"hive",
"bigdata",
""
] |
I have a query that returns a large (10000+ rows) dataset. I want to order by date desc, and display the first 40 results. Is there a way to run a query like this that only retrieves those 40 results without retrieving all 10000 first?
I have something like this:
```
select rownum, date, * from table
order by date desc
```
This selects all the data and orders it by date, but the rownum is not in order so it is useless for selecting only the first 40.
```
ROW_NUMBER() over (ORDER BY date desc) AS rowNumber
```
^ This will display a row number in order, but I can't use it in a WHERE clause because it is a window function. I could run this:
```
select * from (select ROW_NUMBER() over (ORDER BY date desc) AS rowNumber,
rownum, * from table
order by date desc) where rowNumber between pageStart and pageEnd
```
but this is selecting all 10000 rows. How can I do this efficiently?
|
```
SELECT *
FROM (SELECT *
FROM table
ORDER BY date DESC)
WHERE rownum <= 40
```
will return the first 40 rows ordered by `date`. If there is an index on `date` that can be used to find these rows, and assuming statistics are up to date, Oracle should choose to use that index to identify the 40 rows that you want and then do 40 single-row lookups against the table to retrieve the rest of the data. You could throw a `/*+ first_rows(40) */` hint into the inner query if you want, though it shouldn't generally be necessary.
For a more general discussion on pagination queries and Top N queries, here's a nice [discussion from Tom Kyte](http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html) and a [much longer AskTom discussion](https://asktom.oracle.com/pls/apex/f?p=100:11:0%3A%3A%3A%3AP11_QUESTION_ID:127412348064).
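The same Top-N/pagination idea can be sketched outside Oracle. Below is an illustrative Python/SQLite version using LIMIT/OFFSET in place of ROWNUM filtering (table and data are made up); a unique tie-breaker column is added to the ORDER BY so pages never overlap when dates repeat:

```python
import sqlite3

# Sketch of Top-N / pagination in SQLite (names and data invented).
# LIMIT/OFFSET plays the role of ROWNUM / FETCH FIRST here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, d TEXT)")
conn.executemany("INSERT INTO t (d) VALUES (?)",
                 [("2015-07-%02d" % (i % 28 + 1),) for i in range(100)])

# Page 1: the 40 newest rows.
page1 = conn.execute(
    "SELECT id, d FROM t ORDER BY d DESC, id LIMIT 40").fetchall()
# Page 2: rows 41-80.
page2 = conn.execute(
    "SELECT id, d FROM t ORDER BY d DESC, id LIMIT 40 OFFSET 40").fetchall()
print(len(page1), len(page2))  # 40 40
```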
|
Oracle 12c has introduced a [row limiting](https://oracle-base.com/articles/12c/row-limiting-clause-for-top-n-queries-12cr1) clause:
```
SELECT *
FROM table
ORDER BY "date" DESC
FETCH FIRST 40 ROWS ONLY;
```
In earlier versions you can do:
```
SELECT *
FROM ( SELECT *
FROM table
ORDER BY "date" DESC )
WHERE ROWNUM <= 40;
```
or
```
SELECT *
FROM ( SELECT t.*,
              ROW_NUMBER() OVER ( ORDER BY "date" DESC ) AS RN
       FROM table t )
WHERE RN <= 40;
```
or
```
SELECT *
FROM TEST
WHERE ROWID IN ( SELECT ROWID
FROM ( SELECT "Date" FROM TEST ORDER BY "Date" DESC )
WHERE ROWNUM <= 40 );
```
Whatever you do, the database will need to look through all the values in the `date` column to find the 40 first items.
|
Pagination of large dataset
|
[
"",
"sql",
"oracle",
"pagination",
""
] |
Oracle DB.
Spring JPA using Hibernate.
I am having difficulty inserting a Clob value into a native sql query.
The code calling the query is as follows:
```
@SuppressWarnings("unchecked")
public List<Object[]> findQueryColumnsByNativeQuery(String queryString, Map<String, Object> namedParameters)
{
List<Object[]> result = null;
final Query query = em.createNativeQuery(queryString);
if (namedParameters != null)
{
Set<String> keys = namedParameters.keySet();
for (String key : keys)
{
final Object value = namedParameters.get(key);
query.setParameter(key, value);
}
}
query.setHint(QueryHints.HINT_READONLY, Boolean.TRUE);
result = query.getResultList();
return result;
}
```
The query string is of the format
```
SELECT COUNT ( DISTINCT ( <column> ) ) FROM <Table> c where (exact ( <column> , (:clobValue), null ) = 1 )
```
where "(exact ( , (:clobValue), null ) = 1 )" is a function and "clobValue" is a Clob.
I can adjust the query to work as follows:
```
SELECT COUNT ( DISTINCT ( <column> ) ) FROM <Table> c where (exact ( <column> , to_clob((:stringValue)), null ) = 1 )
```
where "stringValue" is a String but obviously this only works up to the max sql string size (4000) and I need to pass in much more than that.
1. I have tried to pass the Clob value as a java.sql.Clob using the method
final Clob clobValue = org.hibernate.engine.jdbc.ClobProxy.generateProxy(stringValue);
This results in a
*java.io.NotSerializableException: org.hibernate.engine.jdbc.ClobProxy*
2. I have tried to Serialize the Clob using
final Clob clob = org.hibernate.engine.jdbc.ClobProxy.generateProxy(stringValue);
final Clob clobValue = SerializableClobProxy.generateProxy(clob);
But this appears to provide the wrong type of argument to the "exact" function, resulting in an error.
3. After reading some post about using Clobs with entities I have tried passing in a byte[] but this also provides the wrong argument type
*(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:144) - SQL Error: 29900, SQLState: 99999
(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:146) - ORA-29900: operator binding does not exist
ORA-06553: PLS-306: wrong number or types of arguments in call to 'EXACT'*
4. I can also just pass in the value as a String as long as it doesn't break the max string value
I have seen a post ([Using function in where clause with clob parameter](https://stackoverflow.com/questions/20100184/using-function-in-where-clause-with-clob-parameter)) which seems to suggest that the only way is to use "plain old JDBC". This is not an option.
I am up against a hard deadline so any help is very welcome.
|
THE ANSWER: Thank you both for your answers. I should have updated this when I solved the issue some time ago. In the end I used JDBC and the problem disappeared in a puff of smoke!
|
I'm afraid your assumptions about CLOBs in Oracle are wrong. In Oracle, a CLOB locator is something like a file handle, and such a handle can only be created by the database. So you cannot simply pass a CLOB as a bind variable. A CLOB must be related to database storage somehow, because it can occupy up to 176TB, and something that size cannot be held in the Java heap.
So the usual approach is to call either the DB function `empty_clob()` or `dbms_lob.createtemporary` (in some form). Then you get a CLOB **from** the database, even if you think of it as an "IN" parameter. Then you can write as much data as you want into that locator (handle, CLOB), and then you can use this CLOB as a parameter for a query.
If you do not follow this pattern, your code will not work. It does not matter whether you use JPA, SpringBatch or plan JDBC. This constrain is given by the database.
|
Setting a Clob value in a native query
|
[
"",
"sql",
"oracle",
"jpa",
"clob",
"nativequery",
""
] |
I have created a query which returns a list of products with their fields and values. Now I want to search through fields for a certain value and get a resultlist matching this search query. The problem is that I want an AND construction, so fieldx value must be like %car% and fieldy value must be like %chrome%.
Here's an example of my query and the resultset:
**Query**
```
SELECT p.id as product,pf.field_name,pfv.field_value
FROM product p
JOIN field pf ON pf.product_id = p.product_id
JOIN field_val pfv ON pfv.field_id = pf.field_id
```
**Resultset**
```
product | field_name | field_value
pr1 | meta_title | Example text
pr1 | meta_kw | keyword,keyword1
pr1 | prod_name | Product 1
pr2 ....
```
So with the above query and resultset in mind I want to do the following:
Query all products where meta\_title contains 'Example' and where prod\_name contains 'Product'. After that, I want to group the results so that only products are returned where both search queries matches.
I tried everything I could think off and I have tried many solutions on kind of the same questions, but I think mine is different because I need the AND match on the field name as well the value over multiple rows.
For example, I tried adding this as WHERE clause:
```
WHERE
(field_name = 'meta_title' AND field_value LIKE '%Example%') AND
(field_name = 'prod_name' AND field_value LIKE '%Product%')
```
Obviously this won't work because after the first condition on meta\_title there is no result left for the other field names. But changing the AND to OR would not give me the desired result.
I also tried with HAVING but seems like same result.
Anyone an idea how to solve this, or is this just not possible?
|
A relatively simple way to do this is to use `group by` and `having`:
```
SELECT p.id as product
FROM product p JOIN
field pf
ON pf.product_id = p.product_id JOIN
field_val pfv
ON pfv.field_id = pf.field_id
WHERE (field_name = 'meta_title' AND field_value LIKE '%Example%') OR
(field_name = 'prod_name' AND field_value LIKE '%Product%')
GROUP BY p.id
HAVING COUNT(DISTINCT field_name) = 2;
```
By modifying the `HAVING` clause (and perhaps removing the `WHERE`), it is possible to express lots of different logic for the presence and absence of different fields. For instance `> 0` would be `OR`, and `= 1` would be one value or the other, but not both.
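Here is a self-contained check of the GROUP BY/HAVING pattern, using SQLite via Python with a tiny invented EAV dataset (product 1 matches both conditions, product 2 only one):

```python
import sqlite3

# Product 1 matches both LIKE conditions; product 2 matches only prod_name,
# so the HAVING COUNT(DISTINCT field_name) = 2 filter drops it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (product_id INTEGER PRIMARY KEY);
CREATE TABLE field (field_id INTEGER PRIMARY KEY, product_id INTEGER,
                    field_name TEXT);
CREATE TABLE field_val (field_id INTEGER, field_value TEXT);
INSERT INTO product VALUES (1), (2);
INSERT INTO field VALUES (10, 1, 'meta_title'), (11, 1, 'prod_name'),
                         (20, 2, 'meta_title'), (21, 2, 'prod_name');
INSERT INTO field_val VALUES (10, 'Example text'), (11, 'Product 1'),
                             (20, 'Other text'),   (21, 'Product 2');
""")

rows = conn.execute("""
    SELECT p.product_id
    FROM product p
    JOIN field pf ON pf.product_id = p.product_id
    JOIN field_val pfv ON pfv.field_id = pf.field_id
    WHERE (pf.field_name = 'meta_title' AND pfv.field_value LIKE '%Example%')
       OR (pf.field_name = 'prod_name'  AND pfv.field_value LIKE '%Product%')
    GROUP BY p.product_id
    HAVING COUNT(DISTINCT pf.field_name) = 2
""").fetchall()
print(rows)  # [(1,)]
```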
|
First: I would go for the easiest way. If Prerak Solas solution works and the performance is OK for you, go for it.
An other solution is to use a pivot table. With this the select of your table would return something like this:
```
product | meta_title | meta_kw | prod_name
pr1 | Example text | keyword,keyword1 | Product 1
pr2 ...
```
You could than easily query like:
```
SELECT
p.Id
FROM
(subquery/view)
WHERE
meta_title LIKE '%Example%' AND
prod_name LIKE '%Product%'
```
For more infos about pivot read [this](https://stackoverflow.com/questions/7674786/mysql-pivot-table)
|
Mysql filter (AND) query on multiple fields in multiple joined rows
|
[
"",
"mysql",
"sql",
"join",
"having",
""
] |
I need to find the max date from a table (MySQL database). I am storing my date as varchar.
`select max(completion_date) from table_name` returns wrong value.
<http://sqlfiddle.com/#!9/c88f6/3>
|
Assuming the date time format you have in your fiddle (e.g. '12/19/2012 05:30 PM') then:
```
select max(STR_TO_DATE(completion_date, '%m/%d/%Y %l:%i %p')) from test;
```
<http://sqlfiddle.com/#!9/c88f6/15>
It's unclear if you want to factor the time into your rankings or just the date. This example accounts for time too, but you can remove that part of the formatter if desired.
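To see why plain MAX() on the raw varchar column goes wrong, compare a lexicographic max with a parsed max. A small Python sketch (Python's `%I`/`%M`/`%p` correspond to MySQL's `%l`/`%i`/`%p`):

```python
from datetime import datetime

# String comparison is lexicographic, so '12/...' sorts after '01/...'
# regardless of year; parsing the dates first gives the real maximum.
dates = ["12/19/2012 05:30 PM", "01/02/2014 09:00 AM", "11/30/2013 11:45 PM"]

wrong = max(dates)  # plain string max
right = max(dates, key=lambda s: datetime.strptime(s, "%m/%d/%Y %I:%M %p"))
print(wrong)  # 12/19/2012 05:30 PM
print(right)  # 01/02/2014 09:00 AM
```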
|
I am not sure why you would want to keep it as varchar not date.
But this will work for varchar:
```
SELECT completion_date
FROM test
ORDER BY STR_TO_DATE(completion_date, '%m/%d/%Y') DESC LIMIT 1;
```
<http://sqlfiddle.com/#!9/c88f6/10>
|
max value from varchar date
|
[
"",
"mysql",
"sql",
"date",
""
] |
I want to count user\_id from `courses_taken` and `quiz_attempts` table but my query brings me wrong numbers.
```
SELECT
u.id,
u.email,
u.user,
u.joined,
MAX(qa.last_attempt_time) as last_attempt_time,
COUNT(qa.user_id) total_quiz,
COUNT(ct.user_id) total_courses
FROM users u
LEFT JOIN courses_taken ct
ON u.id = ct.user_id
LEFT JOIN quiz_attempt qa
ON u.id = qa.user_id AND qa.attempt_mode=1
GROUP BY u.id
ORDER BY total_courses DESC
```
## **Table structure**
`users` table
`id, email, user, joined`
`quiz_attempt` table
`id,user_id, last_attempt_time, attempt_mode etc.`
`courses_taken` table
`id,user_id,course_id,taken_on etc.`
---
Here i am trying to get all users with their total number of quiz attempts and total number of courses taken. But my query returns same numbers for both quiz attempts and courses taken.
|
What you can do is use `COUNT DISTINCT` on a column which varies uniquely with the value that you are trying to count, i.e.:
```
...
COUNT(DISTINCT qa.id) total_quiz,
COUNT(DISTINCT ct.course_id) total_courses
...
```
[SqlFiddle here](http://sqlfiddle.com/#!9/66834/1)
|
You probably have a cartesian product problem because of the join. The better solution is to pre-aggregate the results. However, in many cases if the tables are not too big, then `count(distinct)` solves the problem:
```
SELECT u.id, u.email, u.user, u.joined,
MAX(qa.last_attempt_time) as last_attempt_time,
COUNT(DISTINCT qa.id) as total_quiz,
COUNT(DISTINCT ct.id) as total_courses
FROM users u LEFT JOIN
courses_taken ct
ON u.id = ct.user_id LEFT JOIN
quiz_attempt qa
ON u.id = qa.user_id AND qa.attempt_mode = 1
GROUP BY u.id
ORDER BY total_courses DESC;
```
Note that this works because you are using `MAX()` and `COUNT()`. It would not work with `SUM()` or `AVG()`.
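The fan-out described above is easy to reproduce. In this invented SQLite example, one user with 2 courses and 3 quiz attempts produces 2 x 3 = 6 joined rows, so a plain COUNT() returns 6 for both totals, while COUNT(DISTINCT ...) recovers the true values:

```python
import sqlite3

# One user, 2 courses, 3 quiz attempts -> 6 joined rows (cartesian fan-out).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE courses_taken (id INTEGER PRIMARY KEY, user_id INTEGER);
CREATE TABLE quiz_attempt (id INTEGER PRIMARY KEY, user_id INTEGER,
                           attempt_mode INTEGER);
INSERT INTO users VALUES (1);
INSERT INTO courses_taken VALUES (1, 1), (2, 1);
INSERT INTO quiz_attempt VALUES (1, 1, 1), (2, 1, 1), (3, 1, 1);
""")

join_sql = """
    FROM users u
    LEFT JOIN courses_taken ct ON u.id = ct.user_id
    LEFT JOIN quiz_attempt qa ON u.id = qa.user_id AND qa.attempt_mode = 1
    GROUP BY u.id
"""
inflated = conn.execute(
    "SELECT COUNT(qa.id), COUNT(ct.id) " + join_sql).fetchone()
correct = conn.execute(
    "SELECT COUNT(DISTINCT qa.id), COUNT(DISTINCT ct.id) " + join_sql).fetchone()
print(inflated, correct)  # (6, 6) (3, 2)
```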
|
Count different totals from multiple tables in mysql grouped by user_id in one query
|
[
"",
"mysql",
"sql",
""
] |
I've been looking for an answer to this but couldn't find anything the same as this particular situation.
So I have a one table that I want to remove duplicates from.
```
__________________
| JobNumber-String |
| JobOp - Number |
------------------
```
So there are multiples of these two values, together they make the key for the row. I want keep all distinct job numbers with the lowest job op. How can I do this? I've tried a bunch of things, mainly trying the min function, but that only seems to work on the entire table not just the JobNumber sets. Thanks!
|
Original Table Values:
```
JobNumber Jobop
123 100
123 101
456 200
456 201
780 300
```
Code Ran:
```
DELETE FROM table
WHERE CONCAT(JobNumber,JobOp) NOT IN
(
SELECT CONCAT(JobNumber,MIN(JobOp))
FROM table
GROUP BY JobNumber
)
```
Ending Table Values:
```
JobNumber Jobop
123 100
456 200
780 300
```
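One caveat with the CONCAT-based key: concatenation can collide in general (for example, JobNumber '12' with JobOp 3100 and JobNumber '123' with JobOp 100 both concatenate to '123100'). A correlated subquery sidesteps that. An illustrative SQLite/Python sketch of the same cleanup on the data above:

```python
import sqlite3

# Delete every row whose JobOp is not the minimum for its JobNumber,
# using a correlated subquery instead of a concatenated key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (JobNumber TEXT, JobOp INTEGER)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [("123", 100), ("123", 101), ("456", 200),
                  ("456", 201), ("780", 300)])

conn.execute("""
    DELETE FROM jobs
    WHERE JobOp > (SELECT MIN(j2.JobOp) FROM jobs j2
                   WHERE j2.JobNumber = jobs.JobNumber)
""")
remaining = conn.execute(
    "SELECT JobNumber, JobOp FROM jobs ORDER BY JobNumber").fetchall()
print(remaining)  # [('123', 100), ('456', 200), ('780', 300)]
```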
|
With SQL Server 2008 or higher you can enhance the MIN function with an OVER clause specifying a PARTITION BY section.
Please have a look at <https://msdn.microsoft.com/en-us/library/ms189461.aspx>
|
SQL Remove Duplicates, save lowest of certain column
|
[
"",
"sql",
"sql-server",
"database",
""
] |
My users should be able to search the database by entering a search term in an input box and selecting the column they'd like to search on from a dropdown box.
However, I can't find a way to parameterize a query in golang such that the column name is recognized as a column name instead of a value. Let me give an example:
```
rows, err := db.Query("SELECT * FROM mytable WHERE ? = ?", col, searchStr)
```
I turned on MySQL logging and this was the resulting query:
```
41 Execute SELECT *
FROM mytable
WHERE 'Username' = 'Foo'
```
Clearly `Username` should not have single quotes around it.
How can I parameterize this query? Do I have to write application logic to check each query for the proper column names and use string manipulation to add the column name into the query (perhaps using some placeholder character to find-and-replace)?
|
Parameter binding is trying to keep you safe from bad args (preventing things like SQL injection), but it isn't designed to substitute anything other than a value. You want it to insert a column name. Unfortunately for you, the driver knows `col`'s type (string) and quotes it, because that's how string literals are written in SQL: enclosed in single quotes. It might seem a bit like a hack, but you need this instead;
```
db.Query(fmt.Sprintf("SELECT * FROM mytable WHERE %s = ?", col), searchStr)
```
This puts the column name into your query string before passing it to `Query`, so it doesn't get treated like an argument (i.e. a value used in the WHERE clause).
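Since the identifier is now interpolated directly into the SQL string, it should be validated against a whitelist of known columns first. The answer's code is Go, but the idea is language-agnostic; here is an illustrative Python/SQLite sketch (table, columns, and data invented):

```python
import sqlite3

# Validate the user-supplied column name against a fixed whitelist,
# interpolate only the validated identifier, and keep the search value
# as a bind parameter.
ALLOWED_COLUMNS = {"username", "email"}

def search(conn, col, value):
    if col not in ALLOWED_COLUMNS:
        raise ValueError("unknown column: %r" % col)
    return conn.execute(
        "SELECT * FROM mytable WHERE %s = ?" % col, (value,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (username TEXT, email TEXT)")
conn.execute("INSERT INTO mytable VALUES ('Foo', 'foo@example.com')")
print(search(conn, "username", "Foo"))  # [('Foo', 'foo@example.com')]
```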
|
You should take a look at this package.
<https://github.com/gocraft/dbr>
It's great for what you want to do.
```
import "github.com/gocraft/dbr"
// Simple data model
type Suggestion struct {
Id int64
Title string
CreatedAt dbr.NullTime
}
var connection *dbr.Connection
func main() {
db, _ := sql.Open("mysql","root@unix(/Applications/MAMP/tmp/mysql/mysql.sock)/dbname")
connection = dbr.NewConnection(db, nil)
dbrSess := connection.NewSession(nil)
// Get a record
var suggestion Suggestion
err := dbrSess.Select("id, title").From("suggestions").
Where("id = ?", 13).
LoadStruct(&suggestion)
if err != nil {
fmt.Println(err.Error())
} else {
fmt.Println("Title:", suggestion.Title)
}
}
```
|
How can I use a parameterized query to search on a column by its name?
|
[
"",
"mysql",
"sql",
"go",
"sql-injection",
""
] |
I have this piece of code from which I wish to get a single array that contains all values.
```
$sql = "SELECT * FROM interest where interest='".$interest."' and userid!='".$myuserid."'";
$result = mysqli_query($conn, $sql);
if (mysqli_num_rows($result) > 0)
{
while($row = mysqli_fetch_assoc($result))
{
$userid = $row["userid"];
if($searchtype == 'both')
{
$sql2 = "SELECT * FROM register where id='".$userid."' and discover = 'on' and id!='".$myuserid."'";
$result2 = mysqli_query($conn, $sql2);
if (mysqli_num_rows($result2) > 0)
{
while($row2 = mysqli_fetch_assoc($result2))
{
echo "<pre>";
print_r($row2);
echo "</pre>";
}
}
}
}
}
```
The o/p that i am getting is something like this
```
Array
(
[id] => 1
[email] => A1
[username] =>B1
[password] => C1
[gender] => C1
)
Array
(
[id] => 2
[email] => A2
[username] => B2
[password] => C2
[gender] => D2
)
Array
(
[id] => 3
[email] => A3
[username] => B3
[password] => C3
[gender] => D3
)
```
But i wish to get this all data in a single array like this
```
Array
(
[0] => Array
(
[id] => 1
[email] => A1
[username] =>B1
[password] => C1
[gender] => C1
)
[1] => Array
(
[id] => 2
[email] => A2
[username] => B2
[password] => C2
[gender] => D2
)
[2] => Array
(
[id] => 3
[email] => A3
[username] => B3
[password] => C3
[gender] => D3
)
}
```
Can anyone tell me how I can do so?
|
Declare an array variable before the while loop starts, like `$user_data = array();`, and inside the inner loop append each row with `$user_data[] = $row2;`:
```
if (mysqli_num_rows($result) > 0) {
$user_data = array();
while($row = mysqli_fetch_assoc($result)) {
$userid = $row["userid"];
if($searchtype == 'both') {
$sql2 = "SELECT * FROM register where id='".$userid."' and discover = 'on' and id!='".$myuserid."'";
$result2 = mysqli_query($conn, $sql2);
if (mysqli_num_rows($result2) > 0) {
while($row2 = mysqli_fetch_assoc($result2)) {
$user_data[] = $row2;
}
}
}
}
print_r($user_data); //Print here your user_data outside the loop.
}
```
|
Create an array variable like `$a = array();` at the start of your code.
Inside the while loop, append each row value with `$a[] = $row;`. Then print it outside the loop and you will get all the values in a single array:
```
print_r($a);
```
|
Merge multiple array to single array
|
[
"",
"mysql",
"sql",
"arrays",
"multidimensional-array",
""
] |
I tried to create an `AFTER UPDATE` trigger via phpMyAdmin with an INSERT body:
```
BEGIN
INSERT INTO updatespersonal SET (user, time, type, operator) VALUES (NEW.idDetailToUsers, CURRENT_TIMESTAMP(), "detailtousers", "update");
END
```
But I get error, reason that I can not understand:
> # 1064 on line 2
May be problem in double quotes?
|
Error in **Insert** Statement, remove **SET** keyword:
```
INSERT INTO updatespersonal (user, time, type, operator) VALUES (NEW.idDetailToUsers, CURRENT_TIMESTAMP(), "detailtousers", "update");
```
|
Your syntax is incorrect. You don't need `SET` when using **[INSERT INTO](http://www.w3schools.com/php/php_mysql_insert.asp)**.
Your code should be:
```
BEGIN
INSERT INTO updatespersonal (user, time, type, operator) VALUES (NEW.idDetailToUsers, CURRENT_TIMESTAMP(), "detailtousers", "update");
END
```
|
Trigger error INSERT Mysql
|
[
"",
"mysql",
"sql",
""
] |
I am not getting proper outputs from this function. Does DATEDIFF only calculate the difference in days for dates in the same month?
When I pass in a date in the form of '01 Jan 2015' it always sends me back a 0 =/ Did I miss something in my logic or syntax?
```
CREATE FUNCTION dbo.CanPolicy
(
@ReservationID int,
@CancellationDate date
)
RETURNS smallmoney
AS
BEGIN
DECLARE @DepositPaid smallmoney
SET @DepositPaid = (SELECT ResDepositPaid
FROM Reservation
WHERE ReservationID = @ReservationID)
DECLARE @ResDate date
SET @ResDate = (SELECT ResDate
FROM Reservation
WHERE ReservationID = @ReservationID)
DECLARE @CanceledDaysAhead int
SET @CanceledDaysAhead = DATEDIFF(day, @ResDate, @CancellationDate)
DECLARE @result smallmoney
SET @result = 0
SET @result = CASE WHEN @CanceledDaysAhead > 30 THEN 0
WHEN @CanceledDaysAhead BETWEEN 14 AND 30 THEN @DepositPaid * 0.25 + 25
WHEN @CanceledDaysAhead BETWEEN 8 AND 13 THEN @DepositPaid * 0.50 + 25
ELSE @DepositPaid
END
RETURN @result
END
GO
```
|
No, DATEDIFF counts dates in between. Try:
```
SELECT DATEDIFF(day,{ts'2015-01-01 00:00:00'},{ts'2015-04-01 00:00:00'}) -- 90
```
It could be a date format issue.
Are you sure that @ResDate is set correctly?
EDIT: New approach with CTE
```
DECLARE @ReservationID INT=123;
DECLARE @CancelationDate DATE=GETDATE();
WITH ReservationCTE AS
(
SELECT ResDepositPaid
,ResDate
FROM Reservation
WHERE ReservationID=@ReservationID --assuming that ReservationID is a unique key!
)
,ReservationCTEWithDateDiff AS
(
SELECT ReservationCTE.*
--EDIT: switched dates due to a comment by Me.Name
,DATEDIFF(DAY,@CancelationDate,ResDate) AS CanceledDaysAhead
FROM ReservationCTE
)
SELECT CASE WHEN CanceledDaysAhead>30 THEN 0
WHEN CanceledDaysAhead BETWEEN 14 AND 30 THEN ResDepositPaid * 0.25 + 25
WHEN CanceledDaysAhead BETWEEN 8 AND 13 THEN ResDepositPaid * 0.50 + 25
ELSE ResDepositPaid END AS MyReturnValue
FROM ReservationCTEWithDateDiff
```
|
I think the correct and short version of your function is this - please give it a try:
```
CREATE FUNCTION dbo.CanPolicy
(
@ReservationID int,
@CancellationDate date
)
RETURNS smallmoney
AS
BEGIN
DECLARE @DepositPaid smallmoney,
@CanceledDaysAhead int
SELECT @DepositPaid = ResDepositPaid,
@CanceledDaysAhead = DATEDIFF(DAY,ResDate,@CancellationDate)
FROM Reservation
WHERE ReservationID = @ReservationID
RETURN CAST(CASE WHEN @CanceledDaysAhead > 30 THEN 0 ELSE
CASE WHEN @CanceledDaysAhead BETWEEN 14 AND 30 THEN @DepositPaid * 0.25 + 25 ELSE
CASE WHEN @CanceledDaysAhead BETWEEN 8 AND 13 THEN @DepositPaid * 0.50 + 25 ELSE
@DepositPaid END END END AS smallmoney)
END
GO
```
The main problem was your CASE-WHEN block, I think...
|
Trouble with TSQL calculation - date diff giving me troubles
|
[
"",
"sql",
"sql-server",
"t-sql",
"datediff",
"udf",
""
] |
I have a user table that consists of these columns:
```
| id | username | password | email | pants_size_id | shirt_size_id |
```
`pants_size_id` and `shirt_size_id` are filled with foreign tables id keys where I store a list of sizes for pants and shirts in different country specific measures, example of `pants_size` table:
```
| id | UK_sizing | US_sizing | IT_sizing |
```
a single user will have only one pants and shirt size so the user table is filled with the ID of the corresponding rows in the size tables.
what kind of relationship does this imply between the user model and the pants and shirt sizing models?
Also how can I retrieve the data inside the foreign table column (example IT\_sizing) when returning auth user `return \Auth::user();` instead of the numeric size\_id ?
In other words how can I retrieve say '32' (a pants size) instead of the `pants_size_id` (let's say '1').
|
Cato has the right answer, I can't exactly respond to it because of my rep but the logic in your other answer doesn't make sense from a relational standpoint.
Users don't belong to a size; instead, users have a size.
To me it sounds like you mixed up the foreign and local key assignment; it should be `User->hasOne(pants_size)`.
In your model it would be the following. Being explicit about the keys isn't pretty, but if Laravel can't figure the relationship out on its own, this should work.
```
public function pants_size(){
return $this->hasOne('App\Pants_size','id','pants_size_id');
}
public function shirt_size(){
return $this->hasOne('App\Shirt_size','id','shirt_size_id');
}
```
To answer the other question of how to find the size (32), since you're dealing with three different measurements you have to have a where clause on the specific measurement the 32 represents, and get the id. If you specifically wanted the users you would call the eloquent query as so:
```
\Auth::User()->pants_size()->(..whatever measurement you want..)
```
|
this is how I made it work:
in USER model:
```
public function pants_size(){
return $this->belongsTo('App\Pants_size');
}
public function shirt_size(){
return $this->belongsTo('App\Shirt_size');
}
```
In Pants\_size and Shirt\_size Models:
```
public function user(){
return $this->hasMany('App\User');
}
```
That last one works also with hasOne.
The code I use to retrieve the data is:
```
public function index()
{
echo $user = User::find($id);
echo $pants = User::find($id)->pants_size->it_sizing;
echo $shirt = User::find($id)->shirt_size->it_sizing;
}
```
|
confusion over which relationship to use in models for these two tables
|
[
"",
"sql",
"laravel",
"eloquent",
"relationship",
""
] |
I have several tables that have a common field (column) called `LastName` in a `MySQL database`. Several of the rows in these tables are in mixed case so they don't get selected properly when doing a `SELECT`.
How can I convert those columns to all UPPER CASE? I can easily handle any new entries to convert them to upper case, but the existing records I'm not so sure about.
|
this would work:
```
UPDATE table_name SET `column_name` = UPPER( `column_name` )
```
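A quick way to convince yourself this is safe to run on existing rows is the same statement against an in-memory SQLite table (note that SQLite's UPPER() only folds ASCII letters, whereas MySQL's UPPER() is charset-aware):

```python
import sqlite3

# Run the same in-place UPDATE and read back the uppercased values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (LastName TEXT)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [("McDonald",), ("o'brien",)])
conn.execute("UPDATE people SET LastName = UPPER(LastName)")
names = [r[0] for r in
         conn.execute("SELECT LastName FROM people ORDER BY LastName")]
print(names)  # ['MCDONALD', "O'BRIEN"]
```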
|
would do the job
```
update table set LastName=UPPER(LastName);
```
NOTE - if you are running this from MySQL Workbench you may have to disable safe mode or add a WHERE clause (e.g. WHERE id > 0), otherwise it won't run.
|
Update a field in SQL
|
[
"",
"mysql",
"sql",
""
] |
I was working with SQL Server 2014 when my database became corrupted due to a power failure. After running DBCC CHECKDB, I found that some of the stored procedures I had created were missing. Is there any method by which I can create a reliable backup of my stored procedures?
|
One Possibilty is:
In SQL Server Management Studio **right-click on your Database - Tasks - Generate Scripts...**
Here you can specify which objects you want to script and where to save the script file.
My favourite solution (though it's more work) is to have all my databases in **Database Projects** in Visual Studio (keyword: SQL Server Data Tools). I don't know if that is interesting for you, but it's very powerful and a great way to deploy databases, manage source code (TFS, for example)... all the Visual Studio benefits.
|
The correct solution here is to backup the entire database, as you stored procedure will have dependencies - like tables, columns, views, etc. that are in the database.
If you just want to retrieve the text of the stored procedure and save it to a file, you can use `sp_helptext`. For example:
```
sp_helptext sp_procedureName
```
|
How can I back up a stored procedure in SQL Server?
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I've always been under the assumption that NOT EXISTS is the way to go instead of using a NOT IN condition. However, doing a comparison on a query I've been using, I noticed that execution of the NOT IN condition actually appears to be faster. Any insight into why this could be the case, or whether I've just made a horrible assumption up until this point, would be greatly appreciated!
QUERY 1:
```
SELECT DISTINCT
a.SFAccountID, a.SLXID, a.Name FROM [dbo].[Salesforce_Accounts] a WITH(NOLOCK)
JOIN _SLX_AccountChannel b WITH(NOLOCK)
ON a.SLXID = b.ACCOUNTID
JOIN [dbo].[Salesforce_Contacts] c WITH(NOLOCK)
ON a.SFAccountID = c.SFAccountID
WHERE b.STATUS IN ('Active','Customer', 'Current')
AND c.Primary__C = 0
AND NOT EXISTS
(
SELECT 1 FROM [dbo].[Salesforce_Contacts] c2 WITH(NOLOCK)
WHERE a.SFAccountID = c2.SFAccountID
AND c2.Primary__c = 1
);
```
QUERY 2:
```
SELECT
DISTINCT
a.SFAccountID FROM [dbo].[Salesforce_Accounts] a WITH(NOLOCK)
JOIN _SLX_AccountChannel b WITH(NOLOCK)
ON a.SLXID = b.ACCOUNTID
JOIN [dbo].[Salesforce_Contacts] c WITH(NOLOCK)
ON a.SFAccountID = c.SFAccountID
WHERE b.STATUS IN ('Active','Customer', 'Current')
AND c.Primary__C = 0
AND a.SFAccountID NOT IN (SELECT SFAccountID FROM [dbo].[Salesforce_Contacts] WHERE Primary__c = 1 AND SFAccountID IS NOT NULL);
```
Actual Execution plan for Query 1:

Actual Execution plan for Query 2:
TIME/IO STATISTICS:
Query #1 (using not exists):
```
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 532 ms, elapsed time = 533 ms.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Salesforce_Contacts'. Scan count 2, logical reads 3078, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'INFORMATION'. Scan count 1, logical reads 691, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ACCOUNT'. Scan count 4, logical reads 567, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Salesforce_Accounts'. Scan count 1, logical reads 680, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 250 ms, elapsed time = 271 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
```
Query #2 (using Not In):
```
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 500 ms, elapsed time = 500 ms.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Salesforce_Contacts'. Scan count 2, logical reads 3079, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'INFORMATION'. Scan count 1, logical reads 691, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ACCOUNT'. Scan count 4, logical reads 567, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Salesforce_Accounts'. Scan count 1, logical reads 680, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 157 ms, elapsed time = 166 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
```
|
try
```
SELECT DISTINCT a.SFAccountID, a.SLXID, a.Name
FROM [dbo].[Salesforce_Accounts] a WITH(NOLOCK)
JOIN _SLX_AccountChannel b WITH(NOLOCK)
ON a.SLXID = b.ACCOUNTID
AND b.STATUS IN ('Active','Customer', 'Current')
JOIN [dbo].[Salesforce_Contacts] c WITH(NOLOCK)
ON a.SFAccountID = c.SFAccountID
AND c.Primary__C = 0
LEFT JOIN [dbo].[Salesforce_Contacts] c2 WITH(NOLOCK)
on c2.SFAccountID = a.SFAccountID
AND c2.Primary__c = 1
WHERE c2.SFAccountID is null
```
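Worth noting when comparing these forms: NOT IN behaves differently when the subquery can return a NULL, which is exactly why the question's second query adds `SFAccountID IS NOT NULL`. A small SQLite/Python demonstration with invented data:

```python
import sqlite3

# NOT EXISTS and the LEFT JOIN anti-join ignore the NULL in b;
# NOT IN returns no rows at all because id <> NULL evaluates to NULL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER);
CREATE TABLE b (id INTEGER);
INSERT INTO a VALUES (1), (2), (3);
INSERT INTO b VALUES (2), (NULL);
""")

not_exists = conn.execute(
    "SELECT id FROM a WHERE NOT EXISTS"
    " (SELECT 1 FROM b WHERE b.id = a.id) ORDER BY id").fetchall()
anti_join = conn.execute(
    "SELECT a.id FROM a LEFT JOIN b ON b.id = a.id"
    " WHERE b.id IS NULL ORDER BY a.id").fetchall()
not_in = conn.execute(
    "SELECT id FROM a WHERE id NOT IN (SELECT id FROM b)").fetchall()
print(not_exists, anti_join, not_in)  # [(1,), (3,)] [(1,), (3,)] []
```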
|
I think the missing index causes the difference between the `EXISTS()` and `IN` operations.
Although the question do not ask for a better query, but for me I'll try to avoid the Distinct like this
```
SELECT
a.SFAccountID, a.SLXID, a.Name
FROM
[dbo].[Salesforce_Accounts] a WITH(NOLOCK)
CROSS APPLY
(
SELECT SFAccountID
FROM [dbo].[Salesforce_Contacts] WITH(NOLOCK)
WHERE SFAccountID = a.SFAccountID
GROUP BY SFAccountID
HAVING MAX(Primary__C + 0) = 0 -- Assume Primary__C is a bit value
) b
WHERE
-- Actually it is the filtering condition for account channel
EXISTS
(
SELECT * FROM _SLX_AccountChannel WITH(NOLOCK)
WHERE ACCOUNTID = a.SLXID AND STATUS IN ('Active','Customer', 'Current')
)
```
|
Not Exists vs Not In: efficiency
|
[
"",
"sql",
"sql-server",
"t-sql",
"exists",
"sql-execution-plan",
""
] |
I have tried with the following sample
```
SELECT
FORMAT(CONVERT(DATETIME,'01011900'), 'dd/MM/yyyy')
FROM
identities
WHERE
id_type = 'VID'
```
|
Try this:
```
SELECT FORMAT(CONVERT(DATETIME,STUFF(STUFF('01011900',5,0,'/'),3,0,'/')),'dd/MM/yyyy')
```
Insert `/` using **`STUFF`**, and then convert it.
```
STUFF(STUFF('01011900',5,0,'/'),3,0,'/') -- 01/01/1900
```
**Update**:
I tried the following also,
```
DECLARE @DateString varchar(10) = '12202012' --19991231 --25122000
DECLARE @DateFormat varchar(10)
DECLARE @Date datetime
BEGIN TRY
SET @Date = CAST(@DateString AS DATETIME)
SET @DateFormat = 'Valid'
END TRY
BEGIN CATCH
BEGIN TRY
SET @DateFormat = 'ddMMyyyy'
SET @Date = CONVERT(DATETIME,STUFF(STUFF(@DateString,5,0,'/'),3,0,'/'))
END TRY
BEGIN CATCH
SET @DateFormat = 'MMddyyyy'
SET @Date = CONVERT(DATETIME,STUFF(STUFF(@DateString,1,2,''),3,0,
'/' + LEFT(@DateString,2) + '/'))
END CATCH
END CATCH
SELECT
@DateString InputDate,
@DateFormat InputDateFormat,
@Date OutputDate
```
|
Your data should be in the format 19000101, so your input needs to be modified first; then we can use CONVERT to get the appropriate format.
```
declare @inp varchar(10) = '01011900'
select CONVERT(varchar, cast(right(@inp,4)+''+left(@inp,4) as datetime), 101)
--Output : 01/01/1900
```
|
Convert the string '01011900' or '19990101' or any format to date and with required format '01/01/1990'
|
[
"",
"sql",
"sql-server",
""
] |
I'm using an SQLite database for my Java application and I have a single varchar column with a bunch of user stats that is written, read, and parsed by my Java program.
```
2015/7/4 17:24:38,[(data1, 1, 1436394735787)|(data2, 4, 1436394739288)], 5
```
and I'm trying to order the rows based on that last `5` or whatever else it might be (it can be multiple digits too). I've tried almost everything I could find on the internet, but a lot of the issues I had were because of syntax errors (even when I copied the query exactly) or problems where a specific function doesn't exist, and I'm not really sure what the cause of those errors is. I'm not really familiar with SQL, so a simple answer would be the most appreciated.
|
As a quick and hacky solution (low performance if you have a huge amount of data):
```
SELECT * FROM [tbl] ORDER BY CAST(SUBSTR([col], INSTR([col], '],') + 2) AS INTEGER);
```
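For anyone who wants to verify this quickly, here is a sketch of the same query run through Python's built-in `sqlite3` module; the table and column names (`stats`, `val`) are made up, and the sample values are shortened versions of the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (val TEXT)")
conn.executemany(
    "INSERT INTO stats VALUES (?)",
    [
        ("2015/7/4 17:24:38,[(data1, 1, 1)|(data2, 4, 2)], 5",),
        ("2015/7/5 09:00:00,[(data1, 2, 3)], 12",),
        ("2015/7/6 10:30:00,[(data3, 1, 4)], 3",),
    ],
)
# SUBSTR(val, INSTR(val, '],') + 2) grabs everything after '],', and
# CAST ... AS INTEGER turns that trailing text into a sortable number.
rows = conn.execute(
    "SELECT val FROM stats "
    "ORDER BY CAST(SUBSTR(val, INSTR(val, '],') + 2) AS INTEGER)"
).fetchall()
last_nums = [r[0].rsplit(",", 1)[1].strip() for r in rows]
print(last_nums)  # ['3', '5', '12']
```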
But as Hayley suggested, to re-evaluate the data, you can use:
```
INSERT INTO [new-tbl]
SELECT SUBSTR(val, 0, c1),
SUBSTR(val, c1+1, c2-c1),
CAST(SUBSTR(val, c2+2) AS INTEGER),
[more-cols]
FROM (
SELECT INSTR([col], ',') AS c1,
INSTR([col], '],') AS c2,
[col] AS val,
[more-cols]
FROM [tbl]);
```
|
The solution to your problem is not the answer to your question. A delimited list of stats should not be a column in your table. See [this question](https://stackoverflow.com/questions/3653462/is-storing-a-delimited-list-in-a-database-column-really-that-bad) for more information. Instead, re-evaluate your schema and sort your query by the appropriate column using:
```
SELECT [columns to select] FROM tableName ORDER BY [column to order by]
```
|
SQLite Order Rows by Last Characters in a String
|
[
"",
"sql",
"database",
"sqlite",
""
] |
I'm trying to delete an item from the `items` table, and also from the `saves` table if it exists there.
This is the error:
`You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'i LEFT JOIN saves s ON i.id = s.item_id`
```
DELETE FROM items i
LEFT JOIN saves s
ON i.id = s.item_id
WHERE i.id = ? AND s.item_id = ?
AND
NOT EXISTS (SELECT id FROM pending_wins WHERE item_id = ?)
AND
NOT EXISTS (SELECT id FROM bids WHERE item_id = ?)
```
|
**If you want to delete from a single table, say `items` try this::**
```
DELETE i FROM items i
LEFT JOIN saves s
ON i.id = s.item_id
WHERE i.id = ? AND s.item_id = ?
AND
NOT EXISTS (SELECT id FROM pending_wins WHERE item_id = ?)
AND
NOT EXISTS (SELECT id FROM bids WHERE item_id = ?)
```
**If you want to delete record from both the tables, try this::**
```
DELETE i, s FROM items i
LEFT JOIN saves s
ON i.id = s.item_id
WHERE i.id = ? AND s.item_id = ?
AND
NOT EXISTS (SELECT id FROM pending_wins WHERE item_id = ?)
AND
NOT EXISTS (SELECT id FROM bids WHERE item_id = ?)
```
|
```
DELETE i, s
FROM items i
LEFT JOIN saves s
ON i.id = s.item_id
WHERE s.item_id = ?
AND
NOT EXISTS (SELECT id FROM pending_wins WHERE item_id = ?)
AND
NOT EXISTS (SELECT id FROM bids WHERE item_id = ?)
```
|
MySQL - Delete item from two tables
|
[
"",
"mysql",
"sql",
""
] |
I'm pretty new to SQL, trying to wrap my head around it, but it's getting a little confusing. Here's a simplified version of what I'm working with.
I have this table called `people`:
```
+----+------------+-----------+
| id | name | parent_id |
+----+------------+-----------+
| 1 | Bob | 3 |
| 2 | John | 5 |
| 3 | Larry | 4 |
| 4 | Kevin | 0 |
| 5 | Kyle | 0 |
| 6 | Jason | 5 |
| 7 | Mildred | 4 |
| 8 | Mabel | 6 |
+----+------------+-----------+
```
The above table is a list of people. Their parent\_id column refers to who their parents are. If they have no parents on the table, their parent\_id is 0.
Now, I want to be able to get separate lists of each group of people: grandparents, children, and grandchildren.
It's easy to get grandparents (Kevin and Kyle), I can just do this query:
```
SELECT name FROM people WHERE parent_id = 0
```
But when it comes to getting children (John, Larry, Jason, and Mildred) and grandchildren (Bob and Mabel), I'm lost.
In English, this would be the process of getting the children: "Get all the results from the table. For each one, look at their parent id. Find the result in the table that has that as their id. See *that person's* id, and if it's 0, the original person was a child. Add them to the list of what we will display."
For the grandchildren, it would be the same as above, but just with an additional step.
Does that make sense? How can I write my process above as a SQL query?
|
This can be solved using a simple `JOIN`.
To select the list of children:
```
SELECT c.name
FROM people p
JOIN people c ON c.parent_id = p.id
WHERE p.parent_id = 0
```
To select the list of grandchildren:
```
SELECT gc.name
FROM people p
JOIN people c ON c.parent_id = p.id
JOIN people gc ON gc.parent_id = c.id
WHERE p.parent_id = 0
```
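As a quick check, here are the same two queries run against the question's sample data via Python's `sqlite3` module (the `people` table name and data are from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, parent_id INTEGER)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?, ?)",
    [(1, "Bob", 3), (2, "John", 5), (3, "Larry", 4), (4, "Kevin", 0),
     (5, "Kyle", 0), (6, "Jason", 5), (7, "Mildred", 4), (8, "Mabel", 6)],
)
# Children: people whose parent has parent_id = 0
children = [r[0] for r in conn.execute(
    "SELECT c.name FROM people p "
    "JOIN people c ON c.parent_id = p.id "
    "WHERE p.parent_id = 0 ORDER BY c.id")]
# Grandchildren: one more self-join down the tree
grandchildren = [r[0] for r in conn.execute(
    "SELECT gc.name FROM people p "
    "JOIN people c ON c.parent_id = p.id "
    "JOIN people gc ON gc.parent_id = c.id "
    "WHERE p.parent_id = 0 ORDER BY gc.id")]
print(children)       # ['John', 'Larry', 'Jason', 'Mildred']
print(grandchildren)  # ['Bob', 'Mabel']
```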
|
First of all, it's very important to know that this question is very easy to answer, IF you know that you're working with a fixed set of generations (down to grandchildren, for example). If this table is ultimately going to have many generations, and you want to (for example) find all of Kyle's descendants through the whole family tree, then you are not going to do it with a single query. (I have a stored procedure that deals with arbitrary levels of tree generations.) So for now, let's find up to grandparents / grandchildren.
As you said, finding the grandparents is easy...
```
mysql> select name from people where parent_id = 0;
+-------+
| name |
+-------+
| Kevin |
| Kyle |
+-------+
2 rows in set (0.00 sec)
```
Now, finding children isn't too bad.
Let's find Kyle's children:
```
mysql> select p1.name from people p1 where p1.parent_id in
(select p2.id from people p2 where p2.name = 'Kyle');
+-------+
| name |
+-------+
| John |
| Jason |
+-------+
2 rows in set (0.02 sec)
```
And here's Kyle's grandchildren:
```
mysql> select p3.name from people p3 where p3.parent_id in
(select p2.id from people p2 where p2.parent_id in
(select p3.id from people p3 where p3.name = 'Kyle'));
+-------+
| name |
+-------+
| Mabel |
+-------+
1 row in set (0.01 sec)
mysql>
```
Going the other direction... who is Mabel's parent?
```
mysql> select p1.name from people p1 where p1.id =
(select p2.parent_id from people p2 where p2.name = 'Mabel');
+-------+
| name |
+-------+
| Jason |
+-------+
1 row in set (0.00 sec)
mysql>
```
... and her grandparent:
```
mysql> select p1.name from people p1 where p1.id =
(select p2.parent_id from people p2 where p2.id =
(select p3.parent_id from people p3 where p3.name = 'Mabel'));
+------+
| name |
+------+
| Kyle |
+------+
1 row in set (0.00 sec)
```
So you can see the pattern I followed to make these queries should you need great-grandparents / great-grandchildren. However, the resulting query will become unwieldy if you need more generations, and a stored procedure that loops will be in order.
The Oracle database has a more elegant solution, a SQL extension called "CONNECT BY PRIOR". For some more reading (and a MySQL stored procedure example), check out [Connect By Prior Equivalent for MySQL](https://stackoverflow.com/questions/7631048/connect-by-prior-equivalent-for-mysql) here on StackOverflow.
A final note: do yourself a favor, if you haven't already, and:
```
mysql> create index ix_parent_id on people(parent_id);
Query OK, 0 rows affected (0.06 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql>
```
It will dramatically improve performance for these kinds of queries.
|
SQL query to select only grandchildren
|
[
"",
"mysql",
"sql",
""
] |
I ran a simple query:
```
UPDATE table
SET user_id = '123456'
WHERE user_id = '234567'
```
Then I got the error message:
```
(2942 row(s) affected)
Msg 2627, Level 14, State 1, Line 3
Violation of PRIMARY KEY constraint 'PK__users__6B24EA82'. Cannot insert duplicate key in object 'dbo.users'. The duplicate key value is (123456).
The statement has been terminated.
```
Does this mean that i have actually made changes to 2942 rows? It doesn't appear that way. I haven't had much luck in my searches. Any help is appreciated.
|
No, this means you would have made changes to 2942 rows, except one or more of them violates the PRIMARY KEY constraint so they were rolled back.
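You can see the same behaviour in miniature with Python's `sqlite3` module. This is only a sketch (SQLite, not SQL Server), but the principle is the same: a statement that hits a constraint violation backs out its own changes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("123456",), ("234567",), ("234568",)])
try:
    # This would collide with the existing '123456' row,
    # so the statement fails with a constraint violation.
    conn.execute("UPDATE users SET user_id = '123456' "
                 "WHERE user_id IN ('234567', '234568')")
except sqlite3.IntegrityError as e:
    print("statement failed:", e)
# No rows were changed: the whole statement was rolled back.
remaining = sorted(r[0] for r in conn.execute("SELECT user_id FROM users"))
print(remaining)  # ['123456', '234567', '234568']
```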
|
Assuming that `user_id` is a primary key (which the error message suggests based on the value in the error message), then the query:
```
UPDATE table
SET user_id = '123456'
WHERE user_id = '234567';
```
could never affect more than one row in `table`. `user_id` is a primary key, so it is unique. The `where` clause would choose at most one row.
This suggests that there is a trigger on the table. I think the 2,942 is coming from a *trigger* not from this statement directly. If I had to guess, it is an instead-of update trigger, probably affecting another table.
The error implies that you already have a row in the table where `user_id = '123456'`, so no updates should be made to that table.
(Note: Variations on this idea are possible. The trigger could be updating *another* table and the primary key violation could be there. The above scenario seems the more likely scenario.)
|
What exactly does "(X row(s) affected)" mean?
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have two tables which looks like :
**First table**
```
CustomerId CustomerName
110 XYZ
111 ABC
112 PQR
113 LMN
```
Second Table
```
CustomerId PhoneNumber
110 9823983298329
111 9996709760760
110 0495054905495
112 8394893489843
113 0932023090900
111 0930290909999
113 8993293288888
112 9828239882388
```
**What I want is a table structure similar to :**
```
CustomerId CustomerName PhoneNum1 PhoneNum2
110 XYZ 9823983298329 0495054905495
111 ABC 9996709760760 0930290909999
112 PQR 8394893489843 9828239882388
113 LMN 0932023090900 8993293288888
```
I am really stuck on the logic here. If I join the two tables using an **inner join**, the output table will have **multiple rows per CustomerId**, which is not what I want here. Any help will be highly appreciated.
Thanks in advance.
|
This is a form of pivot; you just need a column to pivot on. You can get this result using conditional aggregation and `row_number()`:
```
select t1.CustomerId, t1.CustomerName,
       max(case when seqnum = 1 then PhoneNumber end) as PhoneNumber1,
       max(case when seqnum = 2 then PhoneNumber end) as PhoneNumber2
from table1 t1 left join
(select t2.*,
row_number() over (partition by customerId order by customerId) as seqnum
from table2 t2
) t2
on t1.CustomerId = t2.CustomerId
group by t1.CustomerId, t1.CustomerName;
```
|
Use a `GROUP BY` to find each customer's min and max phone no. Do a `LEFT JOIN` with that result:
```
select f.CustomerId,
f.CustomerName,
s.min_ph,
case when s.min_ph <> s.max_ph then s.max_ph else null end
from firsttable f
left join (select CustomerId,
min(PhoneNumber) min_ph,
max(PhoneNumber) max_ph
from secondtable
group by CustomerId) s on f.CustomerId = s.CustomerId
```
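Here is a sketch of this approach run in SQLite via Python's `sqlite3` module, using the question's sample data. Note one caveat: `MIN`/`MAX` order the two numbers lexicographically (the phone numbers are stored as text), not by insertion order.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (CustomerId INT, CustomerName TEXT)")
conn.execute("CREATE TABLE phones (CustomerId INT, PhoneNumber TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(110, "XYZ"), (111, "ABC"), (112, "PQR"), (113, "LMN")])
conn.executemany("INSERT INTO phones VALUES (?, ?)",
                 [(110, "9823983298329"), (111, "9996709760760"),
                  (110, "0495054905495"), (112, "8394893489843"),
                  (113, "0932023090900"), (111, "0930290909999"),
                  (113, "8993293288888"), (112, "9828239882388")])
rows = conn.execute("""
    SELECT f.CustomerId, f.CustomerName, s.min_ph,
           CASE WHEN s.min_ph <> s.max_ph THEN s.max_ph END
    FROM customers f
    LEFT JOIN (SELECT CustomerId,
                      MIN(PhoneNumber) AS min_ph,
                      MAX(PhoneNumber) AS max_ph
               FROM phones
               GROUP BY CustomerId) s ON f.CustomerId = s.CustomerId
    ORDER BY f.CustomerId
""").fetchall()
for r in rows:
    print(r)  # one row per customer, two phone columns
```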
|
How to get values from rows on one table to column of other
|
[
"",
"sql",
"database",
"oracle",
""
] |
How do I get the N records before a given one?
I have the following table structure:
```
Id, Message
1, John Doe
2, Jane Smith
3, Error
4, Jane Smith
5, Michael Pirs
7, Gabriel Angelos
8, Error
```
Is there a way to get the N records before each Error and join all such records?
So the expected result for the N =2 will be
```
1, John Doe
2, Jane Smith
5, Michael Pirs
7, Gabriel Angelos
```
[Fiddle](http://sqlfiddle.com/#!3/23589/1)
|
You need to create a row number column if your Ids do not increment without gaps. Then you can use a simple join to find the previous N. Your previous N could overlap... so you have to add `distinct` if you do not want duplicates.
```
declare @N as integer
set @N=2
;with cte_tbl (Id, Message, rownum) AS
(
select *, ROW_NUMBER() over (order by id) as rownum from test
)
select distinct Prev.Id, Prev.Message
from cte_tbl
join cte_tbl Prev
on Prev.rownum between cte_tbl.rownum-@N and cte_tbl.rownum-1
where cte_tbl.Message = 'Error'
and Prev.Message <> 'Error'
order by Prev.Id
```
If the one of the previous `@N` records is an error, the 'error' record will NOT show up. This would have to be modified if you do want those to be included. Just simply remove the line `and Prev.Message <> 'Error'`.
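The same idea ported to SQLite (window functions need SQLite 3.25+), run through Python's `sqlite3` module with `@N` fixed at 2; it reproduces the expected result from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (Id INTEGER, Message TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [(1, "John Doe"), (2, "Jane Smith"), (3, "Error"),
                  (4, "Jane Smith"), (5, "Michael Pirs"),
                  (7, "Gabriel Angelos"), (8, "Error")])
result = conn.execute("""
    WITH cte_tbl AS (
        SELECT Id, Message, ROW_NUMBER() OVER (ORDER BY Id) AS rownum
        FROM test
    )
    SELECT DISTINCT Prev.Id, Prev.Message
    FROM cte_tbl
    JOIN cte_tbl Prev
      ON Prev.rownum BETWEEN cte_tbl.rownum - 2 AND cte_tbl.rownum - 1
    WHERE cte_tbl.Message = 'Error'
      AND Prev.Message <> 'Error'
    ORDER BY Prev.Id
""").fetchall()
print(result)
# [(1, 'John Doe'), (2, 'Jane Smith'), (5, 'Michael Pirs'), (7, 'Gabriel Angelos')]
```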
|
You can do this using `cross apply`. The logic is a bit different from typical applications, because you only want the records from the `cross apply` subquery:
```
select t2.*
from table t cross apply
     (select top 2 t2.*
from table t2
where t2.id < t.id
order by t2.id desc
) t2
where t.message = 'Error';
```
For those inclined, there is also a method using window functions, but it is a little more cumbersome. Do a reverse cumulative sum of `Error` records to identify values before a given error. Then enumerate these and choose the ones you want:
```
select t.id, t.message
from (select t.*, row_number() over (partition by grp order by id desc) as seqnum
from (select t.*,
sum(case when message = 'Error' then 1 else 0 end) over
                        (order by id desc) as grp
from table t
) t
where seqnum between 2 and 3;
```
Note that the filter is between 2 and 3, because `'Error'` has a value of 1.
|
How do I get the N records before a given one?
|
[
"",
"sql",
"sql-server",
""
] |
Below is my sql query:
```
IIf(remedy_src.Position Is Null,(mid(remedy_src.User,instr(1,remedy_src.User,"(")+1,instr(1,remedy_src.User,")")-2-instr(1,remedy_src.User,"(")+1)),remedy_src.Position) AS [Adjusted User]
```
The point is to extract string from a field. Here's an example of the value:
```
n123456 (name lastname)
```
the `IIf` function returns what is in the brackets:
```
name lastname
```
But. Sometimes the source value looks like that:
```
n123456
```
No brackets, and the `IIf` returns the ugly `#Func!` error, which prevents the query from being refreshed in my Excel file (external data connection to the Access db).
I would like to handle this error somehow, preferably by making the `IIf` function return the raw source value if an error is present.
|
Here's how I got a solution for this issue: by completely rewriting the code to avoid any function that returns errors.
```
IIf(remedy_src.Position Is Null,replace(replace(right(remedy_src.User,len(remedy_src.User)-instr(1,remedy_src.User,' ')),'(',''),')',''),remedy_src.Position) AS [Adjusted User]
```
|
You could try to catch the error:
```
IIF(IsERROR(IIf(remedy_src.Position Is Null,(mid(remedy_src.User,instr(1,remedy_src.User,"(")+1,instr(1,remedy_src.User,")")-2-instr(1,remedy_src.User,"(")+1)),remedy_src.Position)),
remedy_src.user,
IIf(remedy_src.Position Is Null,(mid(remedy_src.User,instr(1,remedy_src.User,"(")+1,instr(1,remedy_src.User,")")-2-instr(1,remedy_src.User,"(")+1)),remedy_src.Position))
AS [Adjusted User]
```
or
```
IIF(InStr("(",remedy_src.user)=0,
remedy_src.user,
IIF(IsERROR(IIf(remedy_src.Position Is Null,(mid(remedy_src.User,instr(1,remedy_src.User,"(")+1,instr(1,remedy_src.User,")")-2-instr(1,remedy_src.User,"(")+1)),remedy_src.Position))
As [Adjusted User]
```
|
#Func! Error on iif query in MS Access
|
[
"",
"sql",
"ms-access",
""
] |
I run the following SQL query on my Microsoft SQL Server (2012 Express) database, and it works fine, executing in less than a second:
```
SELECT
StringValue, COUNT(StringValue)
FROM Attributes
WHERE
Name = 'Windows OS Version'
AND StringValue IS NOT NULL
AND ProductAssociation IN (
SELECT ID
FROM ProductAssociations
WHERE ProductCode = 'MyProductCode'
)
GROUP BY StringValue
```
I add a filter in the inner query and it continues to work fine, returning slightly less results (as expected) and also executing in less than a second.
```
SELECT
StringValue, COUNT(StringValue)
FROM Attributes
WHERE
Name = 'Windows OS Version'
AND StringValue IS NOT NULL
AND ProductAssociation IN (
SELECT ID
FROM ProductAssociations
WHERE ProductCode = 'MyProductCode'
AND ID IN (
SELECT A2.ProductAssociation
FROM Attributes A2
WHERE A2.Name = 'Is test' AND A2.BooleanValue = 0
)
)
GROUP BY StringValue
```
But when I add a flag variable to enable me to "turn on/off" the filter in the inner query, and set the flag to zero, **the query seems to execute indefinitely** (I left it running about 5 minutes and then force cancelled):
```
DECLARE @IsTestsIncluded bit
SET @IsTestsIncluded = 0
SELECT
StringValue, COUNT(StringValue)
FROM Attributes
WHERE
Name = 'Windows OS Version'
AND StringValue IS NOT NULL
AND ProductAssociation IN (
SELECT ID
FROM ProductAssociations
WHERE ProductCode = 'MyProductCode'
AND (
@IsTestsIncluded = 1
OR
ID IN (
SELECT A2.ProductAssociation
FROM Attributes A2
WHERE A2.Name = 'Is test' AND A2.BooleanValue = 0
)
)
)
GROUP BY StringValue
```
**Why? What am I doing wrong?** I swear I've used this pattern in the past without a problem.
(When I set `@IsTestsIncluded = 1` in the final query above, the filter is skipped and the execution time is normal - the delay only happens when `@IsTestsIncluded = 0`)
---
EDIT
As per Joel's request in the comments, here is the execution plan for the first query:

And here is the execution plan for the second query:

(I can't post an execution plan for the 3rd query as it never completes - unless there is another way to get it in SSMS?)
|
Try this:
```
SELECT
a.StringValue, COUNT(a.StringValue)
FROM Attributes a
INNER JOIN ProductAssociations p ON a.ProductAssociation = p.ID
AND p.ProductCode = 'MyProductCode'
LEFT JOIN Attributes a2 ON a2.ProductAssociation = p.ID
AND a2.Name = 'Is Test' AND a2.BooleanValue = 0
WHERE
Name = 'Windows OS Version'
AND StringValue IS NOT NULL
AND COALESCE(a2.ProductAssociation, NULLIF(@IsTestsIncluded, 1)) IS NOT NULL
GROUP BY a.StringValue
```
The `coalesce/nullif` combination is not the easiest-to-follow thing I've ever written, but it should be functionally equivalent to what you have as long as the join conditions match 0 or 1 record on the joined table.
|
> Why? What am I doing wrong?
You are trying to *compile* a query that needs to satisfy multiple distinct conditions, based on the variable. The optimizer must come up with *one* plan that works in *both* cases.
Try to avoid this like the plague. Just issue two queries, one for one condition one for the other, so that the optimizer is free to optimize each queries separately and *compile* an execution plan that is optimal for each case.
A lengthy discussion of the topic, with alternatives and pros and cons: [Dynamic Search Conditions in T‑SQL](http://www.sommarskog.se/dyn-search.html)
|
Simple change causes SQL query execution time to dramatically increase
|
[
"",
"sql",
"sql-server",
"t-sql",
"execution-time",
"sql-execution-plan",
""
] |
I am trying to execute a SQL query in my rails app. The following executes normally since it is looking for an exact match:
```
connection.query("SELECT *
FROM test
WHERE y=#{connection.quote(name)}
ORDER BY x ASC")
```
I want to use the LIKE operator to find partial matches. In SQL it would look like:
`SELECT * FROM test WHERE y LIKE '%John%' ORDER BY x ASC`
How do I do this in my Ruby query? I tried adding `%` in a few places but it doesn't work. I get errors that say my query is looking for `%'John'%` instead of `'%John%'`
```
connection.query("SELECT *
FROM test
WHERE y LIKE #{%connection.quote(name)%}
ORDER BY x ASC")
```
|
You need to add the `%`s in Ruby before quoting:
```
connection.query("SELECT *
FROM test
WHERE y LIKE #{connection.quote('%' + name + '%')}
ORDER BY x ASC")
```
`connection.quote` will add single quotes to produce a valid SQL string literal and you want to get the `%`s *inside* that string literal, hence the Ruby string concatenation *before* `connection.quote` is called.
Or you could do it in SQL:
```
connection.query("SELECT *
FROM test
WHERE y LIKE '%' || #{connection.quote(name)} || '%'
ORDER BY x ASC")
```
`||` is the standard SQL string concatenation operator, you might need to use the `concat` function or something else if you're using a database that doesn't really support SQL.
You're better off using the ActiveRecord interface as [spickermann](https://stackoverflow.com/a/31353456/479863) suggests but sometimes you need to do it by hand so it is useful to know how.
|
Since you use Rails anyway I suggest using [ActiveRecord's query interface](http://guides.rubyonrails.org/active_record_querying.html) instead of plain SQL.
With `ActiveRecord` the query could be written like this:
```
Test.where("y LIKE ?", "%#{name}%").order(:x)
```
You need to have an `ActiveRecord` model named `Test` that is configured to use a database table named `test` (Rails default naming would be `tests`) to make this work:
```
# in app/models/test.rb
class Test < ActiveRecord::Base
self.table_name = 'test'
end
```
|
SQL query in Rails using % and LIKE
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
Apologies if the title is not completely accurate, but I am not sure how to phrase my question exactly. I would like to return only rows where the results include all items in the IN statement grouped on particular columns. So my query is:
```
SELECT [DRSY],
[DRRT],
[DRKY],
[DRDL01]
FROM dbo.f0005
WHERE DRKY IN ('FC', 'OO', 'SH')
```
I want to return records where DRSY and DRRT include all of the items 'FC', 'OO', 'SH'. Perhaps a picture will help illustrate this:

The only rows I want returned are where DRSY = '00' and DRRT = 'DT' because this includes all 3 values I specified. I have tried searching for different ways to use IN, EXISTS, and even ALL.
|
I believe this can be done by GROUPING and then ensuring the DISTINCT count is equal to the number of search items:
```
SELECT [DRSY],
[DRRT]
FROM dbo.f0005
WHERE DRKY IN ('FC', 'OO', 'SH')
GROUP BY [DRSY], [DRRT]
HAVING COUNT(DISTINCT DRKY) = 3;
```
**Edit, Re Ensuring Count of Items remains in Sync**
What you could do is build up a derived table or CTE containing the list of desired values (`searchValues`), which you can then join to (instead of using `IN`); you can then `COUNT` the `searchValues` to avoid any maintenance issues with keeping the count in sync.
```
WITH searchValues AS
(
select val
from (values ('FC'), ('OO'), ('SH')) as s(val)
)
SELECT [DRSY],
[DRRT]
FROM dbo.f0005
INNER JOIN searchValues s
ON DRKY = s.val
GROUP BY [DRSY], [DRRT]
HAVING COUNT(DISTINCT DRKY) = (SELECT COUNT(val) FROM searchValues);
```
[SqlFiddle here](http://sqlfiddle.com/#!6/58701/4)
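Here is the first query checked against some made-up rows in SQLite (via Python's `sqlite3` module): the extra `ZZ` code is filtered out by the `WHERE`, and the group with only two of the three values is dropped by the `HAVING`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f0005 (DRSY TEXT, DRRT TEXT, DRKY TEXT)")
conn.executemany(
    "INSERT INTO f0005 VALUES (?, ?, ?)",
    [("00", "DT", "FC"), ("00", "DT", "OO"), ("00", "DT", "SH"),
     ("00", "DT", "ZZ"),                       # extra code, ignored by WHERE
     ("01", "ST", "FC"), ("01", "ST", "OO")],  # only 2 of 3 -> excluded
)
rows = conn.execute("""
    SELECT DRSY, DRRT
    FROM f0005
    WHERE DRKY IN ('FC', 'OO', 'SH')
    GROUP BY DRSY, DRRT
    HAVING COUNT(DISTINCT DRKY) = 3
""").fetchall()
print(rows)  # [('00', 'DT')]
```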
|
What about
```
SELECT DRSY, DRRT, DRKY, DRDL01 FROM dbo.f0005 a
WHERE exists (
SELECT * FROM dbo.f005 b WHERE DRKY = "FC" and a.DRSY = b.DRSY and a.DRRT = b.DRRT
)
AND exists (
SELECT * FROM dbo.f005 c WHERE DRKY = "00" and a.DRSY = c.DRSY and a.DRRT = c.DRRT
)
AND exists (
SELECT * FROM dbo.f005 d WHERE DRKY = "SH" and a.DRSY = d.DRSY and a.DRRT = d.DRRT
)
```
|
Return records in SQL that include all values from IN clause
|
[
"",
"sql",
"sql-server",
"t-sql",
"exists",
""
] |
I have this table X with only one column, A. How can I generate a random number in each row, within specific values? Let's say I want random numbers to be either 1, 2 or 999 (3 numbers), I would have a column B looking something like this:
```
A | B
-----
12| 1
33| 999
67| 1
20| 2
...
```
I've tried the `dbms_random` package but only to generate between 2 numbers, like this:
```
select X.*, round(dbms_random.value(1,2)) from X;
```
Any suggestions?
|
Here is one method using `case`:
```
select x.a,
(case when rand < 2 then 1 when rand < 3 then 2 else 999 end) as b
from (select x.*,
dbms_random.value(1, 4) as rand
from x
) x;
```
|
You could store the values you want (1, 2, 999, bla, bla, bla) in a table, and join into it in a random order, like:
```
create table x (a int);
insert into x values (12);
insert into x values (33);
insert into x values (67);
insert into x values (20);
create table y (z int);
insert into y values (1);
insert into y values (2);
insert into y values (999);
create table new_x as
select a,
z
from (select a,
z,
row_number() over(partition by a order by dbms_random.value) as rn
from x
cross join y)
where rn = 1;
drop table x;
alter table new_x rename to x;
```
**Fiddle:** <http://www.sqlfiddle.com/#!4/8cf70/1/0>
|
SQL Generate Random Number
|
[
"",
"sql",
"database",
"oracle",
"random",
""
] |
This is my table structure:
```
CUST_ID ORDER_MONTH
---------------------
1 1
1 5
2 3
2 4
```
My objective is to tag these customers as either New or Returning customers.
When I filter the query, let's say for month 1, then customer 1 should have the tag 'New', but when I filter it for month 5, customer 1 should show up as 'Return' since he already made a purchase in month 1.
In the same way, customer ID 2 should show up as 'New' for month 3 and 'Return' for month 4.
I want to do this using a CASE statement and not inner join.
Thanks
|
If you insist on using a case statement, the logic would be something like "If this is the first month for that user, write new, otherwise write returning." The query would be as follows:
```
SELECT CASE
WHEN m.month = (SELECT MIN(month) FROM myTable WHERE customer = m.customer) THEN 'New'
ELSE 'Returning' END AS customerType
FROM myTable m;
```
However, I think this would be nicer and more readable in a `JOIN`. You can write an aggregation query to get the earliest month for each user, and then use `COALESCE()` to replace null values with 'Returning'. The aggregation:
```
SELECT customer, MIN(month) AS minMonth, 'New' AS customerType
FROM myTable
GROUP BY customer ;
```
To get the rest:
```
SELECT m.customer, m.month, COALESCE(t.customerType, 'Returning') AS customerType
FROM myTable m
LEFT JOIN(
SELECT customer, MIN(month) AS minMonth, 'New' AS customerType
FROM myTable
GROUP BY customer) t ON t.customer = m.customer AND t.minMonth = m.month;
```
Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/de434/3) example that shows both examples.
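The second query, checked against the question's data in SQLite via Python's `sqlite3` module (the `orders` table name and the generic `customer`/`month` columns are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer INT, month INT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 1), (1, 5), (2, 3), (2, 4)])
# Rows that match a customer's first month get 'New'; the failed join
# leaves NULL for every later month, which COALESCE turns into 'Returning'.
rows = conn.execute("""
    SELECT m.customer, m.month,
           COALESCE(t.customerType, 'Returning') AS customerType
    FROM orders m
    LEFT JOIN (SELECT customer, MIN(month) AS minMonth, 'New' AS customerType
               FROM orders
               GROUP BY customer) t
      ON t.customer = m.customer AND t.minMonth = m.month
    ORDER BY m.customer, m.month
""").fetchall()
print(rows)
# [(1, 1, 'New'), (1, 5, 'Returning'), (2, 3, 'New'), (2, 4, 'Returning')]
```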
|
You don't need a JOIN and a case statement would probably be overkill...
```
SELECT CUST_ID, IF(COUNT(1)>1, 'Returning', 'New') AS blah
FROM the_table
WHERE ORDER_MONTH <= the_month
GROUP BY CUST_ID
;
```
Of course, using just month is going to cause problems after a year (or really, after passing December.)
This would be better
```
SELECT CUST_ID, IF(COUNT(1)>1, 'Returning', 'New') AS blah
FROM the_table
WHERE order_date <= some_date
GROUP BY CUST_ID
;
```
|
CASE Statement instead of inner join
|
[
"",
"mysql",
"sql",
"logic",
""
] |
I am making attempts to run my first dynamic pivot in SQL Server 2012.
My `#temp` table that I am using for the dynamic pivoting looks like this.
```
YearMonth Agreement nr Discount
------------------------------------
201303 123 1
201303 12 0
201304 1 0
```
I am running this code and it does not work:
```
DECLARE @DynamicPivotQuery AS NVARCHAR(MAX)
DECLARE @ColumnName AS NVARCHAR(MAX)
--Get distinct values of the PIVOT Column
SELECT @ColumnName = ISNULL(@ColumnName + ',', '') + QUOTENAME(YearMonth )
FROM (SELECT DISTINCT YearMonth FROM #FINAL) AS Courses
--Prepare the PIVOT query using the dynamic
SET @DynamicPivotQuery =
N'SELECT [Agreement nr],YearMonth , ' + @ColumnName + '
FROM #FINAL
PIVOT(
COUNT(agreement nr)
FOR YearMonth IN (' + @ColumnName + ') AS PVTTable'
--Execute the Dynamic Pivot Query
EXECUTE @DynamicPivotQuery;
```
The error message I am getting is
> FOR YearMonth IN ([201403]) AS PVTTable' is not a valid identifier.
What am I missing here?
|
The cause of the error is that you're missing a parenthesis before you alias the pivot. Beyond this, however, your pivot was rather inefficient.
You should select only what you need for the source table in your pivot; otherwise it could run for a long time and produce a lot of rows with null returns.
The below is fixed and hopefully more efficient:
```
DECLARE @DynamicPivotQuery AS NVARCHAR(MAX)
DECLARE @ColumnName AS NVARCHAR(MAX)
--Get distinct values of the PIVOT Column
SELECT @ColumnName= ISNULL(@ColumnName + ',','')
+ QUOTENAME(YearMonth )
FROM (SELECT DISTINCT YearMonth FROM #FINAL) AS Courses
--Prepare the PIVOT query using the dynamic
SET @DynamicPivotQuery =
N'SELECT ' + @ColumnName + '
FROM (Select [Agreement nr], YearMonth from #FINAL) src
PIVOT(
COUNT([Agreement nr])
FOR YearMonth IN (' + @ColumnName + ')) AS PVTTable'
--Execute the Dynamic Pivot Query
EXECUTE sp_executesql @DynamicPivotQuery;
```
|
You are missing a parenthesis
```
SET @DynamicPivotQuery =
N'SELECT [Agreement nr],YearMonth , ' + @ColumnName + '
FROM #FINAL
PIVOT(
COUNT([agreement nr])
FOR YearMonth IN (' + @ColumnName + ')) AS PVTTable'
--Execute the Dynamic Pivot Query
```
|
Dynamic pivoting SQL Server 2012
|
[
"",
"sql",
"sql-server-2012",
"dynamic-pivot",
""
] |
I have 2 fields:
`Birth` = datatype decimal(12,4) and `Date` = datatype date
```
Birth Date
19650101 2015-07-09
```
How do I get a result that looks like this:
```
Birth Date
1965-01-01 2015-07-09
```
where `Birth` is a date datatype and not a decimal(12,4)?
|
To convert the number `19650101` to a date use
```
CONVERT(DATETIME, CAST(Birth AS VARCHAR(8)), 112)
```
To get the number of years (to one decimal) you could do:
```
ROUND(
DATEDIFF(day,
CONVERT(DATETIME, CAST(Birth AS VARCHAR(8)), 112),
[Date])/365.25
,1)
```
Which won't be exact but should be close enough to tenths of a year.
|
You can use this:
```
-- Create demo data
CREATE TABLE #dates(birth int, date date)
INSERT INTO #dates(birth,date)
VALUES(19650101,N'2015-07-09')
-- Your work
SELECT CONVERT(date,CONVERT(nvarchar(max),birth),112) as birth, date,
DATEDIFF(year,
CONVERT(date,CONVERT(nvarchar(max),birth),112),
date
) as years
FROM #dates
-- Cleanup
DROP TABLE #dates
```
This depends on the exact format you provided (`19650101`).
|
Convert Decimal or Varchar into Date
|
[
"",
"sql",
"sql-server",
"date",
"casting",
"converters",
""
] |
I'm writing a query that generates statistics based on postcodes and I need to be able to count the number of matching records that are within a range of postcodes except when they exist in a secondary table. This is part of a larger query and I need the count of records for each postcodes in columnar format rather than as separate rows and this minimal example demonstrates what I've attempted:
```
CREATE TABLE #People
(
Name nvarchar(10),
Postcode int
)
INSERT INTO #People VALUES ('Adam', 2000)
INSERT INTO #People VALUES ('John', 2001)
INSERT INTO #People VALUES ('Paul', 2001)
INSERT INTO #People VALUES ('Peter', 2099)
INSERT INTO #People VALUES ('Tom', 4000)
CREATE TABLE #PostcodesToIgnore
(
Postcode int
)
INSERT INTO #PostcodesToIgnore VALUES (2099)
SELECT SUM(CASE WHEN PostCode BETWEEN 2000 AND 2099 THEN 1 ELSE 0 END) FROM #People
SELECT SUM(CASE WHEN PostCode BETWEEN 2000 AND 2099
AND PostCode NOT IN (SELECT PostCode FROM #PostcodesToIgnore) THEN 1 ELSE 0 END)
FROM #People
```
The first query that counts all postcodes within the range works but the second one fails with the error:
> Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
While I could refactor the query to include all the criteria from the outer select in each subselect, there are quite a few criteria in the real query, so I was hoping there might be a more elegant way to go about it.
|
You could use a left join instead.
```
SELECT
SUM
(
CASE WHEN PostCode BETWEEN 2000 AND 2099
AND pcti.PostCode is null
THEN 1
ELSE 0
END
)
FROM #People p
left join #PostcodesToIgnore pcti on pcti.PostCode = p.PostCode
```
|
You could remove the `SUM` and push the query into a derived table or CTE.
The following works
```
SELECT SUM(PostCodeFlag)
FROM (SELECT CASE
WHEN PostCode BETWEEN 2000 AND 2099
AND PostCode NOT IN (SELECT PostCode
FROM #PostcodesToIgnore) THEN 1
ELSE 0
END AS PostCodeFlag
FROM #People) T
```
|
Excluding records within an aggregate function based on presence of value in another table
|
[
"",
"sql",
"sql-server",
""
] |
I can easily select all my columns from a table with:
```
SELECT * FROM foo
```
But if I want to rename the column `bar` to `foobar`, I need to specify all the fields manually. This makes my query not robust if a new column is added to the table:
```
SELECT a,b,c,d,bar AS foobar FROM foo
```
Is there a possibility to write something like
```
SELECT *, bar AS foobar from foo
```
|
Yes you can do the following:
```
SELECT bar AS foobar, a.* from foo as a;
```
But in this case you will get `bar` twice: once under the name `foobar`, and once as `bar`, since `*` fetches it as well.
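A quick demonstration of the duplicated column using SQLite via Python's `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (a INTEGER, bar TEXT)")
conn.execute("INSERT INTO foo VALUES (1, 'x')")
cur = conn.execute("SELECT bar AS foobar, foo.* FROM foo")
# The result set contains the alias AND the original column.
cols = [d[0] for d in cur.description]
print(cols)  # ['foobar', 'a', 'bar']
```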
|
There's not really a great way to get at what you're after. Typically, best practices dictate that you should avoid `SELECT * ...` for performance reasons. Especially for the reason you just specified - say you add a large field to this table that is irrelevant to the query at hand. You're still retrieving the data with a "\*" whether you need it or not.
As for your query, you *can* do `SELECT *, bar AS foobar from foo` - but you'll be duplicating your results, as you'll return the original column by name as well as a column with your new alias.
Hope this helps!
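The duplication is easy to see in the result-set metadata; a quick sketch with Python's built-in sqlite3 (hypothetical table `foo` with columns `a`, `b`, `bar`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (a INT, b INT, bar INT)")
con.execute("INSERT INTO foo VALUES (1, 2, 3)")

# cursor.description lists the output columns of the last SELECT
cur = con.execute("SELECT *, bar AS foobar FROM foo")
cols = [d[0] for d in cur.description]
print(cols)  # 'bar' appears both under its own name and as 'foobar'
```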
|
How to SELECT * and rename a column?
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to do something like:
```
$results = $query->leftJoin('checklist_items', function($join) use ($days) {
$join->on('users.id', '=', 'checklist_items.user_id')
->on('checklist_items.due_date', 'IN', $days);
})
->where('checklist_items.user_id', null)
->limit(10)
->get();
```
This is an example of the query I'm attempting to execute:
```
SELECT *
FROM users
LEFT JOIN checklist_items
ON users.id = checklist_items.user_id
AND checklist_items.due_date IN ('2015-07-09', '2015-07-10')
WHERE checklist_items.user_id IS NULL
ORDER BY users.id
```
So this is a left outer join. In query builder, most of this is no problem. The problem is the fact that my AND line uses an IN query. If it were part of a WHERE clause I would use ->whereIn but since I need it in the Join clause, whereIn won't work and there is no orIn or some such.
Suggestions?
|
I think you would need to use `DB::raw()` so it doesn't try to quote your days and wrap your days in parenthesis as well. This should do the trick.
```
$days = '(\'2015-07-09\', \'2015-07-10\')';
$results = DB::table('users')->leftJoin('checklist_items', function($join) use ($days) {
$join->on('users.id', '=', 'checklist_items.user_id')
->on('checklist_items.due_date', 'IN', DB::raw($days));
})
->where('checklist_items.user_id', null)
->limit(10)
->toSql();
echo $results;
```
|
You can use `->whereIn()` within the `->leftJoin()` closure (Tested in Laravel 5.7.16):
```
$days = ['2015-07-09', '2015-07-10'];
$results = \DB::table('users')
->leftJoin('checklist_items', function($join) use ($days) {
$join->on('users.id', '=', 'checklist_items.user_id')
->whereIn('checklist_items.due_date', $days);
})
->where('checklist_items.user_id', null)
->orderby('users.id')
->get();
```
Output from `dd(\DB::getQueryLog());` produces your example query:
```
array:1 [▼
0 => array:3 [▼
"query" => "select * from `users` left join `checklist_items` on `users`.`id` = `checklist_items`.`user_id` and `checklist_items`.`due_date` in (?, ?) where `checklist_items`.`user_id` is null order by `users`.`id` asc ◀"
"bindings" => array:2 [▼
0 => "2015-07-09"
1 => "2015-07-10"
]
"time" => 6.97
]
]
```
|
Laravel 5: Join On with IN query
|
[
"",
"sql",
"laravel",
""
] |
I have a table A with 4 columns for order numbers: Order1, Order2, Order3, Order4 (I know, it's horrible, but that's a given).
And I have to find the records in table B, where the match is that any value in any order column in table A could be in any order in column B:
```
A.Order1 = B.Order1 OR
A.Order1 = B.Order2 OR
A.Order1 = B.Order3 OR
A.Order1 = B.Order4 OR
A.Order2 = B.Order1 etc
```
Is there a better way to write this?
I dread the moment they tell me they want to use 5 or 6 columns.
EDITS TO ORIGINAL QUESTION
* This is for SQL Server 2008 R2
* Table B also has 4 order columns with order numbers
* I'm looking for any order number in any order column in Table A to match any order number in any order column in Table B.
* There's no anticipated most likely finding
|
The only way to do this with the model design you have is something (as @GordonLinoff suggested) like this:
```
where b.order1 in (a.order1, a.order2, a.order3, a.order4) or
b.order2 in (a.order1, a.order2, a.order3, a.order4) or
b.order3 in (a.order1, a.order2, a.order3, a.order4) or
b.order4 in (a.order1, a.order2, a.order3, a.order4)
```
An interesting follow-up question is: how can I change my data model to make this work better? Here is how:
First you have your two tables A and B. I'm going to assume that both A and B have an unique index ID.
You can then make a support table AOrder with the following columns
```
AID
ORDNUM
VALUE
```
If you make a similar table for BOrder then to find out if a given order is the same just join on Value and you get the AID, BID, and the two order numbers.
With this design you don't care how many order numbers there are.
You could convert your current data to this design on the fly like this and get the results you want:
```
SELECT aord.ID as aID, bord.ID as bID, aord.num as a_ordernum, bord.num as b_ordernum, aord.v
FROM (
SELECT a.ID, 1 AS num, a.order1 as V FROM a
UNION ALL
SELECT a.ID, 2 AS num, a.order2 as V FROM a
UNION ALL
SELECT a.ID, 3 AS num, a.order3 as V FROM a
UNION ALL
SELECT a.ID, 4 AS num, a.order4 as V FROM a
) aord
JOIN (
SELECT b.ID, 1 AS num, b.order1 as V FROM b
UNION ALL
SELECT b.ID, 2 AS num, b.order2 as V FROM b
UNION ALL
SELECT b.ID, 3 AS num, b.order3 as V FROM b
UNION ALL
SELECT b.ID, 4 AS num, b.order4 as V FROM b
) bord on aord.v = bord.v
```
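The `UNION ALL` unpivot is portable, so it can be verified on any engine; a minimal sketch using Python's built-in sqlite3 (hypothetical data, trimmed to two order columns per table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (ID INT, order1 INT, order2 INT);
CREATE TABLE B (ID INT, order1 INT, order2 INT);
INSERT INTO A VALUES (1, 10, 20), (2, 30, 40);
INSERT INTO B VALUES (7, 99, 20);
""")
# Unpivot each table into (ID, value) pairs, then join on the value.
rows = con.execute("""
    SELECT aord.ID, bord.ID, aord.v
    FROM (SELECT ID, order1 AS v FROM A
          UNION ALL SELECT ID, order2 FROM A) aord
    JOIN (SELECT ID, order1 AS v FROM B
          UNION ALL SELECT ID, order2 FROM B) bord
      ON aord.v = bord.v
""").fetchall()
print(rows)  # only A.ID = 1 shares a value (20) with B.ID = 7
```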
|
You can unpivot the columns in one table using a cross apply and then check if the value from the cross apply is in any of the columns from the other table.
It does not *automatically* work if you add new columns but you will only have to add them in one or two places.
[SQL Fiddle](http://sqlfiddle.com/#!6/44663/1)
**MS SQL Server 2014 Schema Setup**:
```
create table A
(
Order1 int,
Order2 int,
Order3 int,
Order4 int
)
create table B
(
Order1 int,
Order2 int,
Order3 int,
Order4 int
)
insert into A values
(1, 1, 40, 10),
(2, 2, 2, 20)
insert into B values
(3, 3, 3, 30),
(4, 4, 4, 40)
```
**Query 1**:
```
select *
from A
where exists (
select *
from B
cross apply (values(B.Order1),(B.Order2),(B.Order3),(B.Order4)) as X(O)
where X.O in (A.Order1, A.Order2, A.Order3, A.Order4)
)
```
**[Results](http://sqlfiddle.com/#!6/44663/1/0)**:
```
| Order1 | Order2 | Order3 | Order4 |
|--------|--------|--------|--------|
| 1 | 1 | 40 | 10 |
```
|
SQL WHERE matching any value in any column
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have a few related tables containing film, actor, and category data. I am trying to sum the length of the films in a particular category for each actor.
Here's what I have to try and include the sum of a category ID subquery in my results:
```
SELECT actor.actor_id, (SELECT SUM(length) from film WHERE category_id=14) total
FROM actor
JOIN film_actor USING(actor_id)
JOIN film USING(film_id)
JOIN film_category USING(film_id)
JOIN category USING(category_id)
GROUP BY actor.actor_id
ORDER BY total DESC
```
However my `total` column just contains all NULL values.
This query works, but does not include actors who have 0 minutes worth of films in this category:
```
SELECT actor.actor_id, SUM(film.length) as total
FROM actor
JOIN film_actor USING(actor_id)
JOIN film USING(film_id)
JOIN film_category USING(film_id)
JOIN category USING(category_id)
WHERE category_id = 14
GROUP BY actor.actor_id
```
|
Here's how I got to what I needed:
```
SELECT actor.actor_id, SUM(film.length) as total
FROM actor
JOIN film_actor USING(actor_id)
LEFT JOIN film on film_actor.film_id = film.film_id AND film.film_id IN (SELECT film_id FROM film_category WHERE film_category.category_id = 14)
GROUP BY actor.actor_id
```
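The key point is that the category filter lives in the join condition, not the `WHERE` clause, so unmatched actors survive with a `NULL` total. A minimal sketch using Python's built-in sqlite3 (hypothetical cut-down schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE actor (actor_id INT);
CREATE TABLE film (film_id INT, length INT);
CREATE TABLE film_actor (actor_id INT, film_id INT);
CREATE TABLE film_category (film_id INT, category_id INT);
INSERT INTO actor VALUES (1), (2);
INSERT INTO film VALUES (10, 90), (11, 60);
INSERT INTO film_actor VALUES (1, 10), (1, 11), (2, 11);
INSERT INTO film_category VALUES (10, 14);  -- only film 10 is in category 14
""")
rows = con.execute("""
    SELECT a.actor_id, SUM(f.length) AS total
    FROM actor a
    JOIN film_actor fa ON fa.actor_id = a.actor_id
    LEFT JOIN film f
      ON f.film_id = fa.film_id
     AND f.film_id IN (SELECT film_id FROM film_category
                       WHERE category_id = 14)
    GROUP BY a.actor_id
    ORDER BY a.actor_id
""").fetchall()
print(rows)  # actor 2 is kept, with a NULL total
```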
|
You need an outer join and the criteria on category belongs in the on clause of the outer join:
```
select a.actor_id,
sum(f.length) as total
from actor a
join film_actor fa
on a.actor_id = fa.actor_id
join film f
on fa.film_id = f.film_id
left join film_category fc
on fc.film_id = f.film_id
and fc.category_id = 14
group by a.actor_id
```
|
SQL: Select the sum of a subquery
|
[
"",
"mysql",
"sql",
"sum",
"subquery",
"aggregate-functions",
""
] |
I have a table of people, and I want to pull the most recent record of event, but there may not be one.
So my problem is that even though I'm left joining to make sure I'm getting a record for each person, I'm only bringing back people that have events because I'm pulling the Max of a date as a criteria.
```
SELECT *
FROM tbl_people
LEFT JOIN tbl_events ON
tbl_people.People_UID = tbl_events.People_UID
WHERE tbl_people.Active = 1
AND (
SELECT MAX(Event_Date) FROM tbl_events
)
```
Table **People** would have the following
```
People_UID
```
Table **Events** would have the following
```
Event_UID, People_UID, Event_Name, Event_Date
```
Like I said, what I'd like is an output like this:
```
Jen, Had a Baby, 7/10/2015
Shirley
Susan
Megan, Had a Baby, 8/5/2014
etc.
```
I hope that makes sense.
|
One way to do this is to use a correlated subquery in the join that checks that there doesn't exists any event for the same person with a later date:
```
SELECT *
FROM tbl_people p
LEFT JOIN tbl_events e
ON p.People_UID = e.People_UID
AND NOT EXISTS (
SELECT 1 FROM tbl_Events
WHERE e.People_UID = People_UID
AND Event_Date > e.Event_Date
)
WHERE p.Active = 1
```
It might not be the most efficient solution though; maybe limiting the left joined set first is better. (Or come to think of it the exists should be better).
Given a sample data set like:
```
People_UID Event_UID People_UID Event_Name Event_Date
Jen 1 Jen Had a Baby 2015-07-10
Jen 3 Jen Bought a horse 2013-07-10
Shirley NULL NULL NULL NULL
Susan NULL NULL NULL NULL
Megan 2 Megan Had a Baby 2014-08-05
Megan 4 Megan Had another Baby 2015-08-05
```
This would be the result:
```
People_UID Event_UID People_UID Event_Name Event_Date
Jen 1 Jen Had a Baby 2015-07-10
Shirley NULL NULL NULL NULL
Susan NULL NULL NULL NULL
Megan 4 Megan Had another Baby 2015-08-05
```
Using joins it could look like this:
```
SELECT *
FROM tbl_people p
LEFT JOIN (
SELECT e.*
FROM tbl_events e
JOIN (
SELECT PEOPLE_UID, MAX(EVENT_DATE) MDATE
FROM tbl_Events GROUP BY People_UID
) A ON A.MDATE = E.event_date AND e.People_UID = A.People_UID
) B ON p.People_UID = b.People_UID
WHERE p.Active = 1
```
|
```
select * from
( SELECT *, row_number() over (partition by tbl_people.name order by tbl_events.Event_Date desc) as rn
FROM tbl_people
LEFT JOIN tbl_events
ON tbl_people.People_UID = tbl_events.People_UID
WHERE tbl_people.Active = 1 ) t
where t.rn = 1
```
|
Left Join most recent record, if one exists
|
[
"",
"mysql",
"sql",
"join",
"relationship",
""
] |
I need to find the anniversary date and anniversary year of employees and send an email every 14 days. But I have a problem with the last week of December when using the following query, if the start date and end date are in different years.
```
Select * from Resource
where (DATEPART(dayofyear,JoinDate)
BETWEEN DATEPART(dayofyear,GETDATE())
AND DATEPART(dayofyear,DateAdd(DAY,14,GETDATE())))
```
|
Instead of comparing to a `dayofyear` (which resets to zero at jan 1st and is the reason your query breaks within 14 days of the end of the year) you could update the employee's `joindate` to be the current year for the purpose of the query and just compare to actual dates
```
Select * from Resource
-- Add the number of years difference between joinDate and the current year
where DATEADD(year,DATEDIFF(Year,joinDate,GetDate()),JoinDate)
-- compare to range "today"
BETWEEN GetDate()
-- to 14 days from today
AND DATEADD(Day,14,GetDate())
-- duplicate for following year
OR DATEADD(year,DATEDIFF(Year,joinDate,GetDate())+1,JoinDate) -- 2016-1-1
BETWEEN GetDate()
AND DATEADD(Day,14,GetDate())
```
Test query:
```
declare @joindate DATETIME='2012-1-1'
declare @today DATETIME = '2015-12-26'
SELECT @joinDate
where DATEADD(year,DATEDIFF(Year,@joinDate,@today),@JoinDate) -- 2015-1-1
BETWEEN @today -- 2015-12-26
AND DATEADD(Day,14,@today) -- 2016-01-09
OR DATEADD(year,DATEDIFF(Year,@joinDate,@today)+1,@JoinDate) -- 2016-1-1
BETWEEN @today -- 2015-12-26
AND DATEADD(Day,14,@today) -- 2016-01-09
```
(H/T @Damien\_The\_Unbeliever for a simple fix)
The above correctly selects the `joinDate` which is in the first week of Jan (note I've had to fudge `@today` as I've not managed to invent time travel).
The above solution should also solve the issue with leap years that was hiding in your original solution.
---
**Update**
You expressed in comments the requirement to select `AnniversaryDate` and `Years` of service, you need to apply some `CASE` logic to determine whether to add 1 (year or date) to your select
```
select *,
CASE
WHEN DATEADD(YEAR,DATEDIFF(Year,JoinDate,GETDATE()),JoinDate) < GetDate()
THEN DATEDIFF(Year,JoinDate,GETDATE())+1
ELSE DATEDIFF(Year,JoinDate,GETDATE())
END as [Years],
CASE WHEN DATEADD(YEAR,DATEDIFF(Year,JoinDate,GETDATE()),JoinDate) < GetDate()
THEN DATEADD(YEAR,DATEDIFF(Year,JoinDate,GETDATE())+1,JoinDate)
ELSE DATEADD(YEAR,DATEDIFF(Year,JoinDate,GETDATE()),JoinDate)
end as [AnniversaryDate]
.... // etc
```
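The year-boundary logic can also be sanity-checked outside the database. This is a plain-Python analogue of the same "shift the anniversary into the current or next year" idea, not the answer's T-SQL (`next_anniversary` is a hypothetical helper, and Feb-29 join dates would need extra handling):

```python
from datetime import date, timedelta

def next_anniversary(join_date, today):
    """Anniversary in today's year, or next year if it already passed."""
    ann = join_date.replace(year=today.year)
    if ann < today:
        ann = ann.replace(year=today.year + 1)
    return ann

today = date(2015, 12, 26)  # same fudged "today" as the test query above
ann = next_anniversary(date(2012, 1, 1), today)
within_14_days = today <= ann <= today + timedelta(days=14)
print(ann, within_14_days)  # the Jan 1st anniversary is caught in late December
```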
|
You could do this:
```
Select * from Resource
where DATEPART(dayofyear,JoinDate)
BETWEEN DATEPART(dayofyear,GETDATE())
AND DATEPART(dayofyear,DateAdd(DAY,14,GETDATE()))
OR
DATEPART(dayofyear,JoinDate)
BETWEEN (DATEPART(dayofyear,GETDATE()) + 365)
AND (DATEPART(dayofyear,DateAdd(DAY,14,GETDATE())) + 365)
```
|
SQL query to find employee aniversary
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
We have a slave that follow the master. In near future, the master will shut down and the current slave will become new master.
After becoming master, how can I undo `CHANGE MASTER TO MASTER_HOST='master'...` SQL statement that previously executed on slave?
I can execute `STOP SLAVE;` but I curious to know how to cancel and undo `CHANGE MASTER TO MASTER_HOST='master'...` .
|
There is no need to undo the CHANGE MASTER info, as it will not impact anything.
If you still want to remove the details, you can set all the values to blank.
OR
remove the master.info file and restart MySQL.
The direct command is:
```
RESET SLAVE;
```
In MySQL 8 and newer, `RESET SLAVE` has been deprecated. Use `RESET REPLICA` instead. `RESET REPLICA` does not change any replication connection parameters. If you want to remove all of the replication connection parameters, use `RESET REPLICA ALL`.
Ref. <https://dev.mysql.com/doc/refman/8.0/en/reset-replica.html>
|
I believe the command you are looking for is `RESET SLAVE`:
<https://dev.mysql.com/doc/refman/5.5/en/reset-slave.html>
|
how to undo CHANGE MASTER TO
|
[
"",
"mysql",
"sql",
"master-slave",
"percona",
""
] |
Input
```
+-------+---------+----------+
| Name | Country | State |
+-------+---------+----------+
| jon | US | Chicago |
+-------+---------+----------+
| kathy | US | CA |
+-------+---------+----------+
| linda | US | LA |
+-------+---------+----------+
| harry | UK | wales |
+-------+---------+----------+
| steve | UK | scotland |
+-------+---------+----------+
| ram | India | ap |
+-------+---------+----------+
| jai | India | ka |
+-------+---------+----------+
| raju | India | ap |
+-------+---------+----------+
| ravi | India | mp |
+-------+---------+----------+
```
I am not able to pass the country with the highest count (India) using the query below, as the subquery returns 2 values (country & count). Please correct me.
```
select name from table a
where a.country in
(SELECT country, COUNT(*) as c FROM table b
GROUP BY country
ORDER BY c DESC LIMIT 1);
```
Output:
The rows for the state with the highest count, within the highest-counted country (India), should be displayed.
```
+---------+-------+----+
| ram | india | ap |
+---------+-------+----+
| raju | india | ap |
+---------+-------+----+
```
|
```
SELECT country, COUNT(*) as c FROM table b
GROUP BY country
ORDER BY c DESC LIMIT 1;
```
UPDATE: This should work
```
select state,count(*) c2 from a t1
INNER JOIN (
SELECT country, COUNT(*) as c FROM a t1 GROUP BY country ORDER BY c DESC LIMIT 1) t2 on t1.country = t2.country
GROUP BY state ORDER BY c2 DESC LIMIT 1
```
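An equivalent way to avoid the "2 values" problem is to select only `country` in the subquery and use it as a scalar comparison. A sketch of that variant using Python's built-in sqlite3, with the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, country TEXT, state TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    ('jon', 'US', 'Chicago'), ('kathy', 'US', 'CA'), ('linda', 'US', 'LA'),
    ('harry', 'UK', 'wales'), ('steve', 'UK', 'scotland'),
    ('ram', 'India', 'ap'), ('jai', 'India', 'ka'),
    ('raju', 'India', 'ap'), ('ravi', 'India', 'mp'),
])
# Scalar subquery: SELECT only the country, so "=" comparison works.
top_country = "(SELECT country FROM people GROUP BY country ORDER BY COUNT(*) DESC LIMIT 1)"
rows = con.execute(f"""
    SELECT name, country, state
    FROM people
    WHERE country = {top_country}
      AND state = (SELECT state FROM people
                   WHERE country = {top_country}
                   GROUP BY state
                   ORDER BY COUNT(*) DESC LIMIT 1)
    ORDER BY name
""").fetchall()
print(rows)  # the two 'ap' rows from India
```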
|
Try this
```
select * from table e
where e.state in
(select a.state from table a
where a.country in
(SELECT country FROM table b
GROUP BY country
ORDER BY count(*) DESC LIMIT 1)
GROUP BY state
ORDER BY count(*) DESC LIMIT 1);
```
|
Sub query to fetch the highest value
|
[
"",
"mysql",
"sql",
""
] |
I'm working on a SQL query like that :
```
PROC SQL;
CREATE TABLE WORK.test AS
SELECT
ARTICLES.sku,
ARTICLES.family_set,
ARTICLES.classe_article,
CASE
WHEN EXISTS (SELECT CATALOG_PAGE2 FROM ODS.ARTICLECOMPANY14 WHERE ODS_ARTICLECOMPANY14.ITEMNUM = ARTICLES.sku)
THEN "Get the catalog_page2 value"
ELSE '0'
END AS pag_cat_mega
FROM WORK.ARTICLES AS ARTICLES;
QUIT;
```
And I'm failing on the "THEN" statement; I tried several things to replace "Get the catalog_page2 value" (like the field name) but nothing worked.
I think it's because "EXISTS" only answers yes / no, so I don't know how to get the value...
Perhaps there is another solution?
I tried a `join`, but I lose values because there are fewer rows in my table `ARTICLECOMPANY14` than in `ARTICLES`.
My goal is to get the value if it exists, else put a "0".
Thanks for your help.
|
`proc sql` supports `exists`. I suspect the problem might be the double quotes:
```
PROC SQL;
CREATE TABLE WORK.test AS
SELECT a.sku, a.family_set, a.classe_article,
(CASE WHEN EXISTS (SELECT 1
FROM ODS.ARTICLECOMPANY14 oc
WHERE oc.ITEMNUM = a.sku)
THEN 'Get the catalog_page2 value'
ELSE '0'
END) AS pag_cat_mega
FROM WORK.ARTICLES a ;
```
EDIT:
I suspect you just want a `LEFT JOIN`:
```
PROC SQL;
CREATE TABLE WORK.test AS
SELECT a.sku, a.family_set, a.classe_article, oc.catalog_page2
FROM WORK.ARTICLES a LEFT JOIN
ODS.ARTICLECOMPANY14 oc
ON oc.ITEMNUM = a.sku;
```
If you don't want it to show up as `NULL`, then use `coalesce()` in the `SELECT`, either:
```
COALESCE(oc.catalog_page2, 0)
```
or
```
COALESCE(oc.catalog_page2, '0')
```
depending on the type of the column.
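The `LEFT JOIN` plus `COALESCE` pattern is easy to verify; a minimal sketch using Python's built-in sqlite3 (hypothetical cut-down tables, schema prefixes dropped):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ARTICLES (sku INT);
CREATE TABLE ARTICLECOMPANY14 (ITEMNUM INT, CATALOG_PAGE2 INT);
INSERT INTO ARTICLES VALUES (1), (2);
INSERT INTO ARTICLECOMPANY14 VALUES (1, 42);  -- sku 2 has no match
""")
rows = con.execute("""
    SELECT a.sku, COALESCE(oc.CATALOG_PAGE2, 0)
    FROM ARTICLES a
    LEFT JOIN ARTICLECOMPANY14 oc ON oc.ITEMNUM = a.sku
    ORDER BY a.sku
""").fetchall()
print(rows)  # the unmatched sku falls back to 0 instead of NULL
```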
|
Well I would suggest you to reform it to a `LEFT JOIN` and an `ISNULL` function:
```
SELECT ARTICLES.sku,
ARTICLES.family_set,
ARTICLES.classe_article,
COALESCE(ACompany.CATALOG_PAGE2,0) AS pag_cat_mega
FROM WORK.ARTICLES AS ARTICLES
LEFT JOIN ODS.ARTICLECOMPANY14 AS ACompany
ON ACompany.ITEMNUM = ARTICLES.sku;
```
|
SQL CASE exist then value
|
[
"",
"sql",
"sas",
"case",
"exists",
"proc-sql",
""
] |
```
SELECT zlec_status.nazwa AS Status,
piorytet.nazwa AS Priorytet,
Concat(koord.imie, ' ', koord.nazwisko) AS `Koordynator`,
Concat(zlec_adresy.town, ' - ', zlec_adresy.street, ' ',
zlec_adresy.other)
AS `adres`,
zlec_z_dnia,zlec_id,
zlec_nr,
zlec_do,
zlec_ogran,
awizacje,
awizacja_na_dzien,
termin_zamkniecia,
tresc,
uwagi
FROM zlec
INNER JOIN koord
ON zlec.koord = koord.id
INNER JOIN zlec_adresy
ON zlec.zlec_addres = zlec_adresy.id
INNER JOIN piorytet
ON zlec.priorytet = piorytet.id
INNER JOIN zlec_status
ON zlec.status_zlecenia = zlec_status.id
WHERE `zlec_adresy`.`town` LIKE '%Sz%' LIMIT 0, 10
```
The table `zlec_adresy` is the following:
```
=============================
id | street | town | other
=============================
```
As you see there is a concat for the whole address part
```
Concat(zlec_adresy.town, ' - ', zlec_adresy.street, ' ',zlec_adresy.other)AS `adres`
```
Starting off - the user has an input field for the address lookup, so he can type either the town name, a street name, or the other field. In my code only the town is filtered through the WHERE clause with `town` LIKE userinput. But how can I make this work: `WHERE Concat(zlec_adresy.town, ' - ', zlec_adresy.street, ' ', zlec_adresy.other) LIKE userinput`
So basically it would filter on the `adres` alias in the WHERE statement. Is that possible to do? Or is there another way to accomplish it?
|
Another, possibly cleaner approach uses `OR`:
```
WHERE
town like concat('%',@userinput,'%')
OR
street like concat('%',@userinput,'%')
OR
other like concat('%',@userinput,'%')
```
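A quick check of the `OR` variant, sketched with Python's built-in sqlite3 (hypothetical sample rows; the parameter is concatenated with `%` wildcards inside SQL, as in the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE zlec_adresy (id INT, street TEXT, town TEXT, other TEXT)")
con.executemany("INSERT INTO zlec_adresy VALUES (?, ?, ?, ?)", [
    (1, 'Main St', 'Szczecin', ''),   # matches on town
    (2, 'Szara',   'Warsaw',   ''),   # matches on street
    (3, 'Oak Ave', 'Berlin',   ''),   # no match
])
term = 'Sz'
rows = con.execute("""
    SELECT id FROM zlec_adresy
    WHERE town   LIKE '%' || ? || '%'
       OR street LIKE '%' || ? || '%'
       OR other  LIKE '%' || ? || '%'
    ORDER BY id
""", (term, term, term)).fetchall()
print(rows)  # any of the three columns can satisfy the search
```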
|
The query anatomy looks like this:
* it handles the `from`, because all the other clauses are dependent on it
* it handles the `where`, to filter out unneeded rows
* it handles the `select`, so you will get the columns you want
Note, that I did not mention `group by`, `having` or `order by`, since they are not part of your query, so this anatomy is quite a simplified one. The purpose of this simplification was to achieve clarity.
Now, since the `select` runs after the `where`, you cannot use the renames from the `select` in the `where`, since when the `where` is executed, the selection was not yet executed.
So you have to do a work-around:
- you can define a temporary new relation by `(select ...) mynewrelation` and use the column renamings of the `newrelation`, but it is not advisable for this very situation. However, it is good to know about this option, it might be useful in the future
- you can use something like the one suggested by Madhivanan, but that solution is not performant, since you are doing very slow string operations to concatenate those fields. As a result, the code will be clear but slow
- Hanno Binder's solution is much better, since it uses the benefit of the `or` of not evaluating the second operand if the first was true. His code is fast, but you do not want to see it
My suggestion: You should define a `stored function` to make this calculation. That stored function should be equivalent to the solution suggested by Hanno Binder and you would just call that function in your query. So, your source-code will be clear, easy-to-read, correct and performant.
|
Where statement that will use a concat
|
[
"",
"mysql",
"sql",
""
] |
I have a requirement where i have a select statement like below:
```
SELECT t1.a,t2.b,t2.c FROM t1,t2 WHERE t1.d = t2.e group by a,b;
```
I am fetching column `'b'` and grouping by column `'a'`. However, table `t2` can have multiple rows for a single value of `t1.a`, and `t2.b` can be different in each of those rows.
So for a single value of `t1.a`, **if there are multiple distinct values of `t2.b`, I want it shown as "MULTIPLE"; otherwise, it should show the value of `t2.b`**.
I hope my question is clear.
Thanks in advance,
Anoop
My Query:
```
select prh.requisition_header_id,
case when count(prl.suggested_vendor_name)>1 THEN 'MULTIPLE' ELSE prl.suggested_vendor_name END suggested_vendor_name
from PO_REQ_DISTRIBUTIONS_ALL prd,PO.PO_REQUISITION_LINES_ALL prl, gl_code_combinations gcc,fnd_flex_values_vl f1,
fnd_flex_values_vl f2,PO_REQUISITION_HEADERS_ALL prh,PER_ALL_PEOPLE_F papf
where prd.requisition_line_id = prl.requisition_line_id
and prl.requisition_header_id = prh.requisition_header_id
AND PAPF.PERSON_ID = PRL.TO_PERSON_ID
AND TRUNC(SYSDATE) BETWEEN PAPF.EFFECTIVE_START_DATE AND PAPF.EFFECTIVE_END_DATE
and gcc.code_combination_id = prd.CODE_COMBINATION_ID
and gcc.segment1 = f1.flex_value
and gcc.segment2 = f2.flex_value
and f1.flex_value_set_id = 1
and f2.flex_value_set_id = 2
and prh.AUTHORIZATION_STATUS IN('APPROVED','IN PROCESS')
and prh.preparer_id = 35016
group by prh.requisition_header_id,gcc.segment1,gcc.segment2,f2.description,prh.requisition_header_id,
papf.full_name ,prl.suggested_vendor_name;
```
|
Thanks for the response guys,
The final answer that worked includes the following logic:
```
decode(count(distinct prl.suggested_vendor_name) over (partition by prh.requisition_header_id),1,prl.suggested_vendor_name,0,null,'MULTIPLE')
vendor
```
I used `partition by` to group the vendors and `decode` to show the required result
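`DECODE` and analytic `COUNT(DISTINCT ...) OVER` are Oracle features, but the same "one value or MULTIPLE" idea can be expressed with a grouped `CASE`, which is easy to verify on any engine. A sketch using Python's built-in sqlite3 (hypothetical simplified table in place of the requisition tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE req_lines (header_id INT, vendor TEXT)")
con.executemany("INSERT INTO req_lines VALUES (?, ?)", [
    (1, 'Acme'), (1, 'Acme'),          # one distinct vendor
    (2, 'Acme'), (2, 'Bravo'),         # two distinct vendors
])
rows = con.execute("""
    SELECT header_id,
           CASE WHEN COUNT(DISTINCT vendor) > 1 THEN 'MULTIPLE'
                ELSE MAX(vendor) END AS vendor
    FROM req_lines
    GROUP BY header_id
    ORDER BY header_id
""").fetchall()
print(rows)  # header 2 collapses to 'MULTIPLE'
```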
|
The closest approach i was able to find is the one stated down below. Just note that i've removed all the boilerplate you posted to improve readability.
```
select case
when vendors.vendorsqty > 1 then 'MULTIPLE'
else (select suggested_vendor_name
from po_requisition_lines_all
where requisition_header_id = vendors.headerid)
end
from (select prh.requisition_header_id headerid,
count(prl.suggested_vendor_name) vendorsqty
from PO_REQUISITION_HEADERS_ALL prh,
PO_REQUISITION_LINES_ALL prl
where prl.requisition_header_id = prh.requisition_header_id
group by prh.requisition_header_id) vendors;
```
|
Combine a column in select statement if multiple data is present
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
1. I have a table that specifies the current year (done in C#) `Years.Year`
2. I have a table called Weeks which contains the numbers (1-53) `Weeks.WeekNumber` <http://whatweekisit.org/>
Now from this I want to create a view which will list Monday's date of each week based upon the parameters `Years.Year` and `Weeks.WeekNumber`
the code below gets the week number but doesn't calculate the date correctly:
```
SET DATEFIRST 4
DECLARE @d DATETIME
SET @d = GETDATE()
--BELOW IS WHAT I WANT TO WORK ALTHOUGH THE SUB QUERY RETURNS MORE THAN ONE VALUE
--(SELECT Weeks.WeekNumber FROM Weeks)
SELECT
Weeks.WeekNumber,
DATEADD(dd, (@@DATEFIRST + 5 + DATEPART(dw, @d)) % 7, @d) Monday
FROM Weeks
```
Example data:
```
Week Number Date
1 29/12/2014
2 05/01/2015
3 12/01/2015
4 19/01/2015
5 26/01/2015
... ...
53 28/12/2015
```
**to Summarize:**
the year 2015 contains 53 Mondays, so I am trying to generate a SELECT statement which will specify the week number, and the date for each week, just like the Example data above
thanks for the help below, and the variety along with the suggestion of Calendar tables.... I will try and test all :)
|
As mentioned in a comment, this is much easier if your build yourself a [calendar table](http://blog.jontav.com/post/9380766884/calendar-tables-are-incredibly-useful-in-sql), your query would simply be:
```
DECLARE @Year INT = 2015;
SELECT Date, ISOWeek
FROM CalendarTable
WHERE DayNumberOfWeek = 1
AND ISOYear = @Year;
```
I will assume though that this is not an option for you. The first step is to get the first day of the year, which is fairly simple:
```
SELECT CAST(CAST(@Year AS VARCHAR(4)) + '0101' AS DATE)
```
Then you can get the monday of this week:
```
SELECT DATEADD(WEEK, DATEDIFF(WEEK, 0, YearStart), 0)
```
Then you need a set of numbers from 1-53 to add your weeks:
```
SELECT TOP(53) ROW_NUMBER() OVER(ORDER BY object_id)
FROM sys.all_objects
```
Finally, you need to work out whether the first day of the year falls in the first week of this year or the last week of last year. If it is the latter then you need to add one week to your week-start calculation. Combining the above you get:
```
DECLARE @Year INT = 2016;
SELECT n.Weeknumber,
StartOfWeek = DATEADD(WEEK, DATEDIFF(WEEK, 0, d.YearStart) + n.WeekNumber + n.Factor, 0)
FROM (SELECT CAST(CAST(@Year AS VARCHAR(4)) + '0101' AS DATE)) AS d (YearStart)
CROSS APPLY
( SELECT TOP(53) ROW_NUMBER() OVER(ORDER BY object_id),
CASE WHEN DATEPART(ISO_WEEK, d.YearStart) = 1 THEN -1 ELSE 0 END
FROM sys.all_objects
) AS n (Weeknumber, Factor)
WHERE DATEPART(ISO_WEEK, DATEADD(WEEK, DATEDIFF(WEEK, 0, YearStart) + n.WeekNumber + n.Factor, 0)) = n.WeekNumber;
```
The where clause at the end will just remove the last week to ensure that for 52 week years you don't include the first week of the next year.
**EDIT**
Just realised you already have a table of 53 numbers, so you don't need to generate your own:
```
DECLARE @Year INT = 2016;
SELECT w.Weeknumber,
StartOfWeek = DATEADD(WEEK, DATEDIFF(WEEK, 0, d.YearStart) + w.WeekNumber + f.Factor, 0)
FROM (SELECT CAST(CAST(@Year AS VARCHAR(4)) + '0101' AS DATE)) AS d (YearStart)
CROSS JOIN Weeks AS w
CROSS APPLY (SELECT CASE WHEN DATEPART(ISO_WEEK, YearStart) = 1 THEN -1 ELSE 0 END) f (Factor)
WHERE DATEPART(ISO_WEEK, DATEADD(WEEK, DATEDIFF(WEEK, 0, d.YearStart) + w.WeekNumber + f.Factor, 0)) = w.WeekNumber;
```
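The expected output (including the week-53 edge case) can be cross-checked outside SQL Server. Python's `date.fromisocalendar` (3.8+) implements the same ISO-week rules as `ISO_WEEK`; `iso_mondays` is a hypothetical helper for that check:

```python
from datetime import date

def iso_mondays(year):
    """Return (week_number, monday_date) for every ISO week of `year`."""
    out = []
    for wk in range(1, 54):
        try:
            out.append((wk, date.fromisocalendar(year, wk, 1)))
        except ValueError:  # the year has only 52 ISO weeks
            break
    return out

mondays = iso_mondays(2015)
print(mondays[0], mondays[-1])  # matches the question's example data
```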
|
You can write as:
```
-- SET DATEFIRST to U.S. English default value of 7.
SET DATEFIRST 7;
SELECT DATEPART( wk, [DateofYear]) as [Week Number],
[DateofYear] as[MondayDate]
FROM (
SELECT TOP (365)
DATEADD(DAY, ROW_NUMBER() OVER (ORDER BY number)-1, '20150101') AS [DateofYear]
FROM [master].dbo.spt_values
WHERE [type] = N'P' ORDER BY number
) AS T
WHERE datepart(dw,[DateofYear]) = 2 --Monday
ORDER BY [DateofYear];
```
|
SQL Server - Mondays date based on week number and year (view)
|
[
"",
"sql",
"sql-server",
""
] |
I was trying to figure out how to insert the data into the second table from the first table, but I don't want to insert the data that the second table already has from a previous insertion. Is there any way to check before it inserts the data?
**Here is what I have so far:**
```
INSERT INTO content2 (d1, d2, d3) SELECT d1, d2, d3 FROM content
```
I tried to put "LIMIT 1" into the query, but it didn't work. I'm not sure if I had it in the right place...
|
You can have the database verify that the data is unique. For instance, if the triple of `d1, d2, d3` is supposed to be unique, then create a unique index/constraint on those columns:
```
create unique index idx_content2_d1_d2_d3 on content2(d1, d2, d3);
```
Then if you try to insert a row, you'll get an error. You can get around this error using `on duplicate key update`:
```
INSERT INTO content2(d1, d2, d3)
SELECT d1, d2, d3
FROM content
ON DUPLICATE KEY UPDATE d1 = values(d1);
```
The update doesn't actually change any values. It simply prevents an error from occurring.
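`ON DUPLICATE KEY UPDATE` is MySQL-specific, but the idea (unique index plus a conflict-tolerant insert) is easy to demonstrate; a sketch using Python's built-in sqlite3, where `INSERT OR IGNORE` plays the same role:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE content  (d1 INT, d2 INT, d3 INT);
CREATE TABLE content2 (d1 INT, d2 INT, d3 INT,
                       UNIQUE (d1, d2, d3));
INSERT INTO content  VALUES (1, 2, 3), (4, 5, 6);
INSERT INTO content2 VALUES (1, 2, 3);   -- already copied earlier
""")
# The unique index makes (1,2,3) a conflict; OR IGNORE skips it silently.
con.execute("INSERT OR IGNORE INTO content2 SELECT d1, d2, d3 FROM content")
rows = con.execute("SELECT * FROM content2 ORDER BY d1").fetchall()
print(rows)  # (1,2,3) is not duplicated
```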
|
```
INSERT INTO content2 (d1,d2,d3)
SELECT c1.d1,c1.d2,c1.d3 from content c1
Where not exists (
select *
from content2 as c2
where c1.d1 = c2.d1 and c1.d2 = c2.d2 and c1.d3 = c2.d3
)
```
Hope this will help
[Fiddle for selecting the row from c1 which not exist in c2](http://sqlfiddle.com/#!9/996b7/3)
COnsidering you are checking entire row as containing before inserting thats why condition is for d1,d2,d3. IF only need to check any of the column you can avoid others
|
Only insert the data from the first table that doesn't exist in the second table
|
[
"",
"mysql",
"sql",
""
] |
I've created a table with a computed column as the primary key.
Table is created fine.And here is the script..
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ARITHABORT ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [planning.A062].[RMAllocation](
[Id] [int] IDENTITY(100,1) NOT NULL,
[RMAllocatonId] AS ('RMA_'+CONVERT([nvarchar](100),[Id])) PERSISTED NOT NULL,
[RequsitionNo] [nvarchar](100) NULL,
[RMDemandId] [nvarchar](104) NULL,
[HierarchyId] [nvarchar](102) NULL,
[Season] [nvarchar](50) NULL,
[VendorSupplierNo] [nvarchar](100) NULL,
[Year] [int] NULL,
[Month] [int] NULL,
[Week] [int] NULL,
[Day] [int] NULL,
[PlannedQty] [int] NULL,
[ConfirmedQty] [int] NULL,
[Status] [int] NULL,
[CreatedBy] [int] NULL,
[SyncId] [nvarchar](100) NULL,
[CreatedOn] [datetime2](7) NULL,
[UpdatedBy] [int] NULL,
[UpdatedOn] [datetime2](7) NULL,
[IsActive] [bit] NULL,
[RecordDateTime] [datetime2](7) NULL,
CONSTRAINT [PK_RMAllocation] PRIMARY KEY CLUSTERED
(
[RMAllocatonId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
```
The problem is when I change this table (add/edit a column) using the Designer view, it gives me the following error.
```
Unable to create index 'PK_RMAllocation'.
Cannot define PRIMARY KEY constraint on nullable column in table 'RMAllocation'.
Could not create constraint. See previous errors.
```
When I use a script to make the modifications, it works, even though I have declared the computed column as `NOT NULL`. How can this happen?
|
Something is wrong with the designer. SQL Server is quite clear in the documentation that computed columns *can* be used for primary keys (for instance, [here](https://msdn.microsoft.com/en-us/library/ms191250(v=sql.105).aspx)).
My guess is that the designer is dropping all constraints on the table and adding them back in. It ends up adding them in the wrong order, so the primary key is assigned before the `not null` on the computed column. I have no idea if there is any work-around other than the obvious one of not using the designer.
|
According to [the documentation](https://msdn.microsoft.com/en-us/library/ms188300.aspx) (emphasis mine)
> A computed column cannot be used as a DEFAULT or FOREIGN KEY
> constraint definition **or with a NOT NULL constraint definition**.
So it may be somewhat surprising that it works at all even in TSQL.
When the designer implements the change by recreating the table, it loses the `NOT NULL` on the column definition.
```
[Id] [int] IDENTITY(100,1) NOT NULL,
[RMAllocatonId] AS ('RMA_'+CONVERT([nvarchar](100),[Id])) PERSISTED,
[RequsitionNo] [nvarchar](100) NULL,
```
Semantically this concatenation of a `NOT NULL` constant and a `NOT NULL` column can never be `NULL` anyway.
Another way you can persuade SQL Server that the column will be `NOT NULL`-able even in the absence of a `NOT NULL` is by wrapping the definition in an `ISNULL`.
The following works fine with the designer
```
[RMAllocatonId] AS (ISNULL('RMA_'+CONVERT([nvarchar](100),[Id]),'')) PERSISTED
```
|
SQL Server Computed Column as Primary Key
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to write a PL/SQL block that displays each customer ID together with the sum of that customer's order values. The following code gets me the correct figures, but the issue is that I have multiple rows for the same customer ID, and I need to sum all of a customer's order values under a single output row rather than several. My code gives me multiple output rows for the same customer.
```
Create table sales (customer_ID number(10), product_ID number(10), quantity number(10));
INSERT INTO sales (customer_ID, product_ID, quantity) Values(3,1,23);
INSERT INTO sales (customer_ID, product_ID, quantity) Values(1,2,34);
INSERT INTO sales (customer_ID, product_ID, quantity) Values(1,3,654);
INSERT INTO sales (customer_ID, product_ID, quantity) Values(3,7,32);
INSERT INTO sales (customer_ID, product_ID, quantity) Values(4,3,23);
INSERT INTO sales (customer_ID, product_ID, quantity) Values(3,3,111);
INSERT INTO sales (customer_ID, product_ID, quantity) Values(5,4,6);
Create table products (product_ID number(10), price number(10));
INSERT INTO products (product_ID, price) Values(1,32);
INSERT INTO products (product_ID, price) Values(2,121);
INSERT INTO products (product_ID, price) Values(3,3000);
INSERT INTO products (product_ID, price) Values(4,621);
INSERT INTO products (product_ID, price) Values(5,363);
INSERT INTO products (product_ID, price) Values(6,32);
INSERT INTO products (product_ID, price) Values(7,3);
INSERT INTO products (product_ID, price) Values(8,432);
INSERT INTO products (product_ID, price) Values(9,11);
INSERT INTO products (product_ID, price) Values(10,73);
declare
cursor cur is select unique sales.quantity,products.price,sales.customer_ID
from sales,products
where sales.product_id=products.product_id
order by customer_ID desc;
prod number;
quan number(10);
pri number(10);
c_id number(10);
begin
open cur;
loop
fetch cur into quan,pri,c_id;
exit when cur%notfound;
prod:=pri*quan;
DBMS_OUTPUT.PUT_LINE('customer_id =' || c_id);
DBMS_OUTPUT.PUT_LINE('quantity value =' || quan);
DBMS_OUTPUT.PUT_LINE('price =' || pri);
DBMS_OUTPUT.PUT_LINE('The total value of customer purchases is = ' || prod);
end loop;
close cur;
END;
```
Please advise.
|
You just need to use **SUM** as aggregate function using **GROUP BY**.
Using **PL/SQL**
```
SQL> SET serveroutput ON
SQL> DECLARE
2 CURSOR cur
3 IS
4 SELECT SUM(sales.quantity),
5 SUM(products.price),
6 sales.customer_ID
7 FROM sales,
8 products
9 WHERE sales.product_id=products.product_id
10 GROUP BY customer_ID
11 ORDER BY customer_ID DESC;
12 prod NUMBER;
13 quan NUMBER(10);
14 pri NUMBER(10);
15 c_id NUMBER(10);
16 BEGIN
17 OPEN cur;
18 LOOP
19 FETCH cur INTO quan,pri,c_id;
20 EXIT
21 WHEN cur%notfound;
22 prod:=pri*quan;
23 DBMS_OUTPUT.PUT_LINE('customer_id =' || c_id);
24 DBMS_OUTPUT.PUT_LINE('quantity value =' || quan);
25 DBMS_OUTPUT.PUT_LINE('price =' || pri);
26 DBMS_OUTPUT.PUT_LINE('The total value of customer purchases is = ' || prod);
27 END LOOP;
28 CLOSE cur;
29 END;
30 /
customer_id =5
quantity value =6
price =621
The total value of customer purchases is = 3726
customer_id =4
quantity value =23
price =3000
The total value of customer purchases is = 69000
customer_id =3
quantity value =166
price =3035
The total value of customer purchases is = 503810
customer_id =1
quantity value =688
price =3121
The total value of customer purchases is = 2147248
PL/SQL procedure successfully completed.
SQL>
```
By the way, the whole **PL/SQL** block could be written in pure **SQL**.
Using **SQL**
```
SQL> SELECT SUM(sales.quantity) AS "quantity",
2 SUM(products.price) AS "price",
3 sales.customer_ID
4 FROM sales,
5 products
6 WHERE sales.product_id=products.product_id
7 GROUP BY customer_ID
8 ORDER BY customer_ID DESC;
quantity price CUSTOMER_ID
---------- ---------- -----------
6 621 5
23 3000 4
166 3035 3
688 3121 1
SQL>
```
**Update** OP wants to filter the rows based on user input.
In **SQL\*Plus**, you could declare a variable and use it in the filter predicate:
```
SQL> variable cust_id NUMBER
SQL> EXEC :cust_id:= 4
PL/SQL procedure successfully completed.
```
Now, let's use the above **variable** in the filter predicate:
```
SQL> SET serveroutput ON
SQL> DECLARE
2 CURSOR cur
3 IS
4 SELECT SUM(sales.quantity),
5 SUM(products.price),
6 sales.customer_ID
7 FROM sales,
8 products
9 WHERE sales.product_id=products.product_id
10 AND sales.customer_ID = :cust_id
11 GROUP BY customer_ID
12 ORDER BY customer_ID DESC;
13 prod NUMBER;
14 quan NUMBER(10);
15 pri NUMBER(10);
16 c_id NUMBER(10);
17 BEGIN
18 OPEN cur;
19 LOOP
20 FETCH cur INTO quan,pri,c_id;
21 EXIT
22 WHEN cur%notfound;
23 prod:=pri*quan;
24 DBMS_OUTPUT.PUT_LINE('customer_id =' || c_id);
25 DBMS_OUTPUT.PUT_LINE('quantity value =' || quan);
26 DBMS_OUTPUT.PUT_LINE('price =' || pri);
27 DBMS_OUTPUT.PUT_LINE('The total value of customer purchases is = ' || prod);
28 END LOOP;
29 CLOSE cur;
30 END;
31 /
customer_id =4
quantity value =23
price =3000
The total value of customer purchases is = 69000
PL/SQL procedure successfully completed.
SQL>
```
If you want to do this from a front-end application, you could put the entire logic in a **PROCEDURE** and accept the `customer_id` as **IN parameter**. And use it in the filter predicate.
|
Try this
```
SELECT CUSTOMER_ID
,SUM(QUANTITY) as Total_Quantity
,SUM(QUANTITY*PRICE) as Total_Price
FROM
SALES s INNER JOIN
PRODUCTS p on p.PRODUCT_ID=s.PRODUCT_ID
GROUP BY CUSTOMER_ID -- group by each customer id
```
|
sum of total values PL/Sql
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm a self-taught, vaguely competent SQL user. For a view that I'm writing, I'm trying to develop a 'conditional `LEFT`' string-splitting command (presumably later to be joined by a 'conditional `RIGHT`' - whereby:
* If a string (let's call it 'haystack') contains a particular pattern (let's call it 'needle'), **it will be pruned to the left of that pattern**
* Otherwise, the entire string will be passed unaltered.
So, if our pattern is ' - ',
* 'A long string - containing the pattern' will output as 'A long string'
* 'A string without the pattern' will be returned as-is.
Rather than using the crudest ways to do this, I'm trying to come up with a way that avoids having to repeat any clause (such as if `0 < CHARINDEX`, then take `CHARINDEX` - 1, etc.) and instead leverages conditional `NULL`ing.
Yet - here's what I get for trying to be creative - I've hit what seems to be a really basic stumbling block. Please observe the following code and results, and let me know whether you can replicate it - and hence whether it's a bug or I've missed something peculiar. I have tested this on SQL Server both 2008 R2 and 2014, both Express editions.
```
select
-- ISNULL: returns 'a big old string'
ISNULL(null, 'a big old string'),
-- NULLIF: returns NULL
left(
'a big old string',
nullif
(
CHARINDEX
(
'needle',
'haystack'
), 0
) - 1
),
-- combined: returns just 'a' (1st character of ISNULL condition)
ISNULL(
left
(
'a big old string', -- the input string. In reality, this would be a column alias, etc.
nullif
(
CHARINDEX -- Search for the splitting pattern
(
'needle',
'haystack'
), 0 -- If it's not found, return NULL instead of the usual 0
) - 1 -- so that this subtraction produces a NULL, not an invalid negative index
),
'a big old string' -- If the pattern was not found, we should return the input unaltered
);
/*
---------------- ---- ----
a big old string NULL a
(1 row(s) affected)
*/
```
Why do these 2 clauses work as expected in isolation, but **when I combine them, rather than getting the sum of their effects, I get only the 1st character** of the `ISNULL` string - 'a'?
Is there some kind of implicit `CAST` to `varchar(1)`? Deliberately `cast`ing to `varchar(max)` made no difference. What else could be going on here?
Am I just doing something really stupid? Because from here, I can't figure out what I'm doing wrong, and so it really seems like a bug. I hoped testing on 2014 would prove it to be a bug in the old 2008 R2, but alas, they act identically (or, rather, don't).
Thanks in advance for, hopefully, saving me from what would presumably be an evening of baffled existential crisis.
|
There are two parts to this problem, the first is the nature of the `ISNULL` operator, it will use the datatype and length of the first argument. A simple example would be:
```
DECLARE @A CHAR(1) = NULL,
@B VARCHAR(MAX) = 'This is a test';
SELECT TOP 1 Test = ISNULL(@A, @B);
```
This returns `T` and checking the execution plan XML we can see the implicit conversion of `"This is a Test"` to `CHAR(1)`:
```
<ScalarOperator ScalarString="isnull([@A],CONVERT_IMPLICIT(char(1),[@B],0))">
<Intrinsic FunctionName="isnull">
<ScalarOperator>
<Identifier>
<ColumnReference Column="@A" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Convert DataType="char" Length="1" Style="0" Implicit="true">
<ScalarOperator>
<Identifier>
<ColumnReference Column="@B" />
</Identifier>
</ScalarOperator>
</Convert>
</ScalarOperator>
</Intrinsic>
</ScalarOperator>
```
Your example is not quite so straightforward since you don't have your types nicely defined like above, but if we do define the dataypes:
```
DECLARE @A VARCHAR(MAX) = 'a big old string',
@B VARCHAR(MAX) = 'needle',
@C VARCHAR(MAX) = 'haystack';
SELECT TOP 1 ISNULL(LEFT(@A, NULLIF(CHARINDEX(@B, @C), 0) - 1), @A);
```
We get the result as expected. So something else is happening under the hood. The query plan does not delve into the inner workings of the constant evaluation, but the following demonstrates what is happening:
```
SELECT Test = LEFT('a big old string', NULLIF(CHARINDEX('needle', 'haystack'), 0) - 1)
INTO #T;
SELECT t.name, c.max_length
FROM tempdb.sys.columns AS c
INNER JOIN sys.types AS t
ON t.system_type_id = c.system_type_id
AND t.user_type_id = c.user_type_id
WHERE [object_id] = OBJECT_ID(N'tempdb..#T');
----------------
name max_length
varchar 1
```
Basically, by using the `SELECT INTO` sytax with your left expression shows that a when the NULL length is passed to `LEFT` the resulting datatype is `VARCHAR(1)`, **however**, this is not always the case. If I simply hard code `NULL` into the `LEFT` function:
```
SELECT Test = LEFT('a big old string', NULL)
INTO #T;
--------------------
name max_length
varchar 16
```
Then you get the length of the string passed, but a case expression that should be optimised away to the same thing, yields a length of 1 again:
```
SELECT TOP 1 Test = LEFT('a big old string', CASE WHEN 1 = 1 THEN NULL ELSE 1 END)
INTO #T;
----------------
name max_length
varchar 1
```
I suspect it is related to the default behaviour of `VARCHAR`, where the default length is 1, e.g:
```
DECLARE @A VARCHAR = 'This is a Test';
SELECT Value = @A, -- T
MaxLength = SQL_VARIANT_PROPERTY(@A, 'MaxLength') -- 1
```
But I can't tell you why you would see different behaviour for `NULL` and `CASE WHEN 1 = 1 THEN NULL ELSE 1 END`. If you wanted to get the bottom of what is going on in the constant evaluation I think you would probably need to re-ask on the DBA site and hope that one of the real SQL Server Gurus picks it up.
In summary, `LEFT(<constant>, <constant expression>)` where `<constant expression>` yields `NULL` is implicitly typed as `VARCHAR(1)`, and this implicit type is used in `ISNULL` evaluation.
For what it is worth, if you explicitly type the result of your `LEFT` function then you get the expected result:
```
SELECT ISNULL(
CAST(
LEFT(
'a big old string',
NULLIF(CHARINDEX('needle', 'haystack'), 0) - 1
)
AS VARCHAR(MAX))
, 'a big old string');
```
---
An additional point is that when you say you don't want to repeat any expressions (If 0 < CHARINDEX, then take CHARINDEX - 1, etc.), there are two things you should know, the first is that `NULLIF(<expression>, <value>)` expands to a case expression - `CASE WHEN <expression> = <value> THEN NULL ELSE <expression> END`, so is repeated, the second is that this doesn't matter, SQL Server can identify that this is the same expression used twice, and will evaluate it once and refer to the same result each time it is used.
|
This is the difference between `isnull` and `coalesce` -- and since your first parameter to isnull is char(1), that will be the type of return value of the statement. With coalesce you'll get the correct result.
[Isnull](https://msdn.microsoft.com/en-us/library/ms184325.aspx):
Returns the same type as check\_expression. If a literal NULL is provided as check\_expression, returns the datatype of the replacement\_value. If a literal NULL is provided as check\_expression and no replacement\_value is provided, returns an int.
[Coalesce](https://msdn.microsoft.com/en-us/library/ms190349.aspx):
Returns the data type of expression with the highest data type precedence. If all expressions are nonnullable, the result is typed as nonnullable.
|
SQL Server : ISNULL(compound NULL condition, 'a string') returns only the 1st character, under certain circumstance(s)
|
[
"",
"sql",
"sql-server",
"string",
"isnull",
"nullif",
""
] |
I'm using SQL Server management studio 2008 and use `TOP` to select some data from DB.
```
SELECT
TOP 3 Name, Company, ta_Content, Email, Writedate
FROM dbo.ta_CONTACT
WHERE Name in ('David', 'Filo', 'Rain', 'Cone', 'Source', 'Tailor', 'Fier', 'Venesse')
ORDER BY Writedate;
```
So by using `TOP 3`, I can collect the top 3 rows from the 8 given rows. But what I want to do is select the 5th to 7th rows from the given 8 rows.
I know I could use `ROW_NUMBER()`, but I want to use `TOP` logic together with `NOT IN`. I'm just not sure where to put the `NOT IN` logic to show only the 5th to 7th rows.
|
Try this query:
```
SELECT * FROM (
SELECT ROW_NUMBER() OVER (ORDER BY Writedate ASC) AS rownumber,
Name, Company, ta_Content, Email, Writedate
FROM dbo.ta_CONTACT
) AS t
WHERE rownumber >= 5 AND rownumber <= 7
```
|
Try this query. It is not the cleanest way, but it works:
```
select TOP 2 Name, Company, ta_Content, Email, Writedate from (
SELECT TOP 3 Name, Company, ta_Content, Email, Writedate FROM dbo.ta_CONTACT
WHERE Name in ('David', 'Filo', 'Rain', 'Cone', 'Source', 'Tailor', 'Fier', 'Venesse')
ORDER BY Writedate desc)a ORDER BY Writedate
```
|
Selecting some row numbers from SQL
|
[
"",
"sql",
"sql-server",
""
] |
Can anyone help me to find out below format using `Convert` function in SQL Server?
DateTime
```
yyyy/MM/dd hhmm
yyyy/MM/dd hhmmssstt
```
Time
```
hh:mm
hh:mm:ss tt
hh:mm tt
```
Thanks in advance...
|
I was unable to get the exact format I need with `CONVERT()`, so I am using `FORMAT()` now.
Note that `FORMAT()` has poorer performance compared to `CONVERT()`.
|
[This MSDN page here](https://msdn.microsoft.com/en-us/library/ms187928.aspx) lists all available "styles" for conversion.
If you're using SQL Server **2012** or newer, you can also look [at the new `FORMAT` function](https://msdn.microsoft.com/en-us/library/hh213505.aspx) for even more flexibility
|
Convert DateTime Format using Convert function in SQL Server
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I have a query:
```
SELECT * FROM (SELECT col1,col2,col3,ROWNUM R
FROM TBL WHERE ROWNUM < 3)
WHERE R='2';
```
(the row number will change each time, returning only one record at a time)
```
column1 column2 column3
alpha beta gamma
```
I need result as:
```
alpha,beta,gamma
```
All I need is to put field separators between the output values.
This is not relevant to the suggested link below and not a duplicate!
|
There are two ways:
**CONCAT** function
```
SELECT *
FROM
(SELECT concat(concat(concat(concat('a', ','), col2), ','), col3),
ROWNUM R
FROM TBL
WHERE ROWNUM < 3
)
WHERE R=2;
```
`||` concatenation operator
```
SELECT *
FROM
(SELECT col1||','||col2||','||col3,ROWNUM R FROM TBL WHERE ROWNUM < 3
)
WHERE R=2;
```
Another thing, no need to use single quotes around the value for **ROWNUM**, it unnecessarily converts it into **string**, just leave it as **NUMBER**.
|
You can use the following:
```
SELECT *
FROM (SELECT col1||','||col2||','||col3,ROWNUM R
FROM TBL WHERE ROWNUM < 3)
WHERE R='2';
```
|
concat result of sql query using field seperator
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have lists like this stored in a field in SQL:
```
Id MyListField
1 soy, meat, milk, wheat, nuts
2 soy, meat, nuts, milk, wheat
3 nuts
4 walnuts, nutshell, nuts
5 nuts, beans
6 walnuts, hazel nuts
7 nutshell, nutsize
```
**This is not normalized and I can't change it.** I now have to find `nuts` and replace them with `insane` (but keep e. g. `hazel nuts` and `nutshell`).
A Find is relatively easy, there are four cases and I have the placeholder character `%` at my disposal:
```
SELECT * FROM MyTable WHERE
MyListField LIKE 'nuts' OR
MyListField LIKE 'nuts, %' OR
MyListField LIKE '%, nuts' OR
MyListField LIKE '%, nuts, %'
```
But a Replace is hard, because I don't have placeholder or startofstring/endofstring characters:
```
UPDATE MyTable SET MyListField =
REPLACE(
REPLACE(
REPLACE(
REPLACE(
MyListField,
', nuts, ' ,
', insane, '
),
????,
????
),
????,
????
),
????,
????
)
WHERE
MyListField LIKE 'nuts' OR
MyListField LIKE 'nuts, %' OR
MyListField LIKE '%, nuts' OR
MyListField LIKE '%, nuts, %'
```
I can easily replace it in the middle of the string, but not at the start or the end. Or can I?
I am using SQL Server 2008, if that matters.
|
If you concatenate the field between `,` and `_,` then you only have one case to look for.
e.g.
```
update MyTable set MyFieldList = replace(', ' + MyFieldList + ',', ', nuts,', ', insane,')
update MyTable set MyFieldList = substring(MyFieldList, 3, len(MyFieldList) - 3)
```
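The wrap-and-replace logic is easy to sanity-check outside the database; below is a minimal Python sketch of the same string manipulation (the sample values come from the question):

```python
def replace_in_list(s, needle, replacement):
    # Wrap the list in ', ' ... ',' so every element, including the first
    # and last, is delimited the same way, then do a single replace.
    wrapped = ", " + s + ","
    replaced = wrapped.replace(", " + needle + ",", ", " + replacement + ",")
    # Strip the leading ', ' and trailing ',' again.
    return replaced[2:-1]

print(replace_in_list("soy, meat, nuts, milk, wheat", "nuts", "insane"))
# soy, meat, insane, milk, wheat
print(replace_in_list("walnuts, nutshell, nuts", "nuts", "insane"))
# walnuts, nutshell, insane
```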
|
By putting a comma at the start and end of the list, each element is wrapped in a leading ", " and a trailing ",".
Then your replace method becomes easy....
```
REPLACE(', ' + MyListField + ',',
', ' + @termToReplace + ',',
', ' + @replacement + ',')
```
Finally, strip the leading and trailing commas.
|
Find value in comma-separated list
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to return a minimum `fiscalyear` with a grouping on `productid` using a subquery and the `EXISTS` operator. For whatever reason, I am getting an arbitrary value from a set of fiscal years. The subquery returns the minimum `fiscalyear` but the outer query won't pick it up for the `UPDATE`. Any ideas?
```
UPDATE #Temp
SET SALES_YEAR = sa.fiscalyear
FROM sales sa
JOIN products p ON p.id = sa.productid
JOIN #Temp t ON p.id = t.productid
WHERE exists ( select MIN(sa.fiscalyear),
sa.productid
FROM sales sa
JOIN products p ON p.id = sa.productid
JOIN #Temp t ON p.id = t.productid
GROUP BY sa.productid
)
```
|
You don't need the joins *outside* the `exists`. In addition, I assume you are using SQL Server.
Actually, I don't think you want `exists` at all. Just use `=` like this:
```
UPDATE t
SET SALES_YEAR = sa.fiscalyear
FROM #Temp t
WHERE t.product_id = (Select TOP 1 sa.productid
FROM sales sa JOIN
products p
ON p.id = sa.productid
ORDER BY sa.fiscalyear
)
```
|
try this query
```
UPDATE #Temp
SET SALES_YEAR = A.fiscalyear
FROM #Temp t
JOIN (
select MIN(sa.fiscalyear),
sa.productid
FROM sales sa
JOIN products p ON p.id = sa.productid
JOIN #Temp t ON p.id = t.productid
GROUP BY sa.productid
) A ON T.PRODUCTID=A.PRODUCTID
```
|
Bad value returned using EXISTS
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am currently using BigQuery and GROUP\_CONCAT which works perfectly fine. However, when I try to add a ORDER BY clause to the GROUP\_CONCAT statement like I would do in SQL, I receive an error.
So e.g., something like
`SELECT a, GROUP_CONCAT(b ORDER BY c)
FROM test
GROUP BY a`
The same happens if I try to specify the separator.
Any ideas on how to approach this?
|
Since BigQuery doesn't support ORDER BY clause inside GROUP\_CONCAT function, this functionality can be achieved by use of analytic window functions. And in BigQuery separator for GROUP\_CONCAT is simply a second parameter for the function.
Below example illustrates this:
```
select key, first(grouped_value) concat_value from (
select
key,
group_concat(value, ':') over
(partition by key
order by value asc
rows between unbounded preceding and unbounded following)
grouped_value
from (
select key, value from
(select 1 as key, 'b' as value),
(select 1 as key, 'c' as value),
(select 1 as key, 'a' as value),
(select 2 as key, 'y' as value),
(select 2 as key, 'x' as value))) group by key
```
Will produce the following:
```
Row key concat_value
1 1 a:b:c
2 2 x:y
```
NOTE on Window specification: The query uses "rows between unbounded preceding and unbounded following" window specification, to make sure that all rows within a partition participate in GROUP\_CONCAT aggregation. Per SQL Standard default window specification is "rows between unbounded preceding and current row" which is good for things like running sum, but won't work correctly in this problem.
Performance note: Even though it looks wasteful to recompute the aggregation function multiple times, the BigQuery optimizer does recognize that since the window is not changing, the result will be the same, so it only computes the aggregation once per partition.
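The underlying idea — sort within each key, then concatenate — can also be illustrated outside BigQuery. A small Python sketch using the same sample data as above:

```python
from itertools import groupby

data = [(1, "b"), (1, "c"), (1, "a"), (2, "y"), (2, "x")]

# Sort by (key, value) so that each group is concatenated in value order.
result = {
    key: ":".join(v for _, v in group)
    for key, group in groupby(sorted(data), key=lambda kv: kv[0])
}
print(result)  # {1: 'a:b:c', 2: 'x:y'}
```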
|
Standard SQL mode in BigQuery does support ORDER BY clause within some aggregate functions, including STRING\_AGG, for example:
```
#standardSQL
select string_agg(t.x order by t.y)
from unnest([struct<x STRING, y INT64>('a', 5), ('b', 1), ('c', 10)]) t
```
will result in
`b,a,c`
Documentation is here: <https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#using-order-by-with-aggregate-functions>
|
BigQuery GROUP_CONCAT and ORDER BY
|
[
"",
"sql",
"google-bigquery",
""
] |
How can I extract only the email IDs from the example below (i.e. abc@abc.com, mno@mno.com, etc.)? Each row can contain multiple email IDs...
```
"ABC" <abc@abc.com>; "XYZ" <xyz@xyz.com>; pqr@pqr.com; "MNO" <mno@mno.com>
```
|
In *MySQL* you could use `SUBSTRING_INDEX` as follows:
```
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(Id, '>', 1), '<', -1) Email
FROM Tbl;
```
In *SQL Server* it would be:
```
SELECT SUBSTRING(Id, CHARINDEX('<', Id) + 1 , CHARINDEX('>', Id) - CHARINDEX('<', Id) - 1)
FROM Tbl;
```
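Note that when a row really contains several addresses (as in the sample), a single `SUBSTRING_INDEX`/`SUBSTRING` call only finds the first one. If you can pull the data out of the database, a regular expression at the application level handles all of them; a hedged Python sketch (the naive pattern below is an assumption that addresses are well-formed):

```python
import re

row = '"ABC" <abc@abc.com>; "XYZ" <xyz@xyz.com>; pqr@pqr.com; "MNO" <mno@mno.com>'

# Naive email pattern: good enough for well-formed address lists like this one.
emails = re.findall(r"[\w.+-]+@[\w.-]+", row)
print(emails)  # ['abc@abc.com', 'xyz@xyz.com', 'pqr@pqr.com', 'mno@mno.com']
```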
|
```
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(email,'>',1),'<',-1) AS EmailID
FROM table_name;
```
|
How to Extract only email id in SQL SERVER
|
[
"",
"sql",
"sql-server",
""
] |
I want to select 1000 rows at a time using the query:
```
SELECT * FROM MEMBERID_1M WHERE ROWNUM <1000
```
How do I get the next set of 1000 rows from this table in a for loop?
|
I suggest you to use `ROW_NUMBER()` function like this: (my `id` is your PK)
```
SELECT M.*
FROM (
SELECT MEMBERID_1M.*, ROW_NUMBER() OVER (ORDER BY id) As rn
FROM MEMBERID_1M ) M
WHERE
(rn <= 1000)
```
And for next:
```
SELECT M.*
FROM (
SELECT MEMBERID_1M.*, ROW_NUMBER() OVER (ORDER BY id) As rn
FROM MEMBERID_1M ) M
WHERE
(rn > 1000) AND (rn <= 2000)
```
For page `:i`:
```
SELECT M.*
FROM (
SELECT MEMBERID_1M.*, ROW_NUMBER() OVER (ORDER BY id) As rn
FROM MEMBERID_1M ) M
WHERE
(rn > :i * 1000) AND (rn <= (:i + 1) * 1000)
```
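The pattern is easy to verify with any engine that supports window functions; for example, using Python's built-in `sqlite3` (SQLite ≥ 3.25) with a small hypothetical numbered table standing in for `MEMBERID_1M`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memberid_1m (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO memberid_1m (id) VALUES (?)",
                 [(i,) for i in range(1, 26)])

def page(i, size=10):
    # ROW_NUMBER() paging: returns rows with rn in (i*size, (i+1)*size]
    return [r[0] for r in conn.execute("""
        SELECT id FROM (
            SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM memberid_1m
        ) WHERE rn > ? * ? AND rn <= (? + 1) * ?
    """, (i, size, i, size))]

print(page(0))  # first page: ids 1..10
print(page(1))  # second page: ids 11..20
```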
|
Reproducing the [answer](https://stackoverflow.com/questions/3869311/sql-oracle-to-select-the-first-10-records-then-the-next-10-and-so-on)
There is only a rather convoluted way to do this, which is a real pain with Oracle. They should just implement a LIMIT/OFFSET clause...
The rownum gets assigned *after* the row has been selected by the where clause, so that a rownum must always start with 1. `where rownum > x` will always evaluate to false.
Also, rownum gets assigned *before sorting is done*, so the rownum will not be in the same order as your order by says.
You can get around both problems with a subselect:
```
select a,b,c, rn from
( select a,b,c, rownum rn from
( select a,b,c from the_table where x = ? order by c)
where rownum < Y)
where rn > X
```
If you do not need to sort (but only then), you can simplify to
```
select a,b,c, rn from
( select a,b,c, rownum rn from the_table where rownum < Y )
where rn > X
```
|
Get 1000 row set from Oracle table
|
[
"",
"sql",
"oracle",
"select",
""
] |
In **PostgreSQL 9.4** the window functions have the new option of a `FILTER` to select a sub-set of the window frame for processing. The documentation mentions it, but provides no sample. An online search yields some samples, including from [2ndQuadrant](http://blog.2ndquadrant.com/the-within-group-and-filter-sql-clauses-of-postgresql-9-4/) but all that I found were rather trivial examples with constant expressions. What I am looking for is a filter expression that includes the value of the current row.
Assume I have a table with a bunch of columns, one of which is of `date` type:
```
col1 | col2 | dt
------------------------
1 | a | 2015-07-01
2 | b | 2015-07-03
3 | c | 2015-07-10
4 | d | 2015-07-11
5 | e | 2015-07-11
6 | f | 2015-07-13
...
```
A window definition for processing on the `date` over the entire table is trivially constructed: `WINDOW win AS (ORDER BY dt)`
I am interested in knowing how many rows are present in, say, the 4 days prior to the current row (inclusive). So I want to generate this output:
```
col1 | col2 | dt | count
--------------------------------
1 | a | 2015-07-01 | 1
2 | b | 2015-07-03 | 2
3 | c | 2015-07-10 | 1
4 | d | 2015-07-11 | 3
5 | e | 2015-07-11 | 3
6 | f | 2015-07-13 | 4
...
```
The `FILTER` clause of the window functions seems like the obvious choice:
```
count(*) FILTER (WHERE current_row.dt - dt <= 4) OVER win
```
But how do I specify `current_row.dt` (for lack of a better syntax)? Is this even possible?
If this is not possible, are there other ways of selecting `date` ranges in a window frame? The frame specification is no help as it is all row-based.
I am not interested in alternative solutions using sub-queries, it has to be based on window processing.
|
You are not actually *aggregating* rows, so the new aggregate `FILTER` clause is not the right tool. A window function is more like it, a problem remains, however: the **[frame definition](http://www.postgresql.org/docs/current/interactive/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS)** of a window cannot depend on *values* of the current row. It can only count a given number of rows preceding or following with the `ROWS` clause.
To make that work, aggregate counts per day and `LEFT JOIN` to a full set of days in range. Then you can apply a window function:
```
SELECT t.*, ct.ct_last4days
FROM (
SELECT *, sum(ct) OVER (ORDER BY dt ROWS 3 PRECEDING) AS ct_last4days
FROM (
SELECT generate_series(min(dt), max(dt), interval '1 day')::date AS dt
FROM tbl t1
) d
LEFT JOIN (SELECT dt, count(*) AS ct FROM tbl GROUP BY 1) t USING (dt)
) ct
JOIN tbl t USING (dt);
```
Omitting `ORDER BY dt` in the window frame definition *usually* works, since the order is carried over from `generate_series()` in the subquery. But there are no guarantees in the SQL standard without explicit `ORDER BY` and it might break in more complex queries.
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/f3ebc/1)
Related:
* [Select finishes where athlete didn't finish first for the past 3 events](https://stackoverflow.com/questions/17247725/select-finishes-where-athlete-didnt-finish-first-for-the-past-3-events/17250864#17250864)
* [PostgreSQL: running count of rows for a query 'by minute'](https://stackoverflow.com/questions/8193688/postgresql-running-count-of-rows-for-a-query-by-minute/8194088#8194088)
* [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450)
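As an aside, engines that support `RANGE` frames with a value offset (PostgreSQL 11+, SQLite 3.28+) can express the "current day plus the 3 days before it" window directly in the frame clause, with no join needed. A sketch using Python's `sqlite3` and the question's sample dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (col1 INT, dt TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)", [
    (1, "2015-07-01"), (2, "2015-07-03"), (3, "2015-07-10"),
    (4, "2015-07-11"), (5, "2015-07-11"), (6, "2015-07-13"),
])

# RANGE 3 PRECEDING over the day number = current day and the 3 days before it.
rows = conn.execute("""
    SELECT col1, dt,
           count(*) OVER (ORDER BY julianday(dt)
                          RANGE BETWEEN 3 PRECEDING AND CURRENT ROW) AS ct
    FROM tbl ORDER BY col1
""").fetchall()
for r in rows:
    print(r)
# counts per row: 1, 2, 1, 3, 3, 4
```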
|
I don't think there is any syntax that means "current row" in an expression. The gram.y file for postgres makes a filter clause
take just an a\_expr, which is just the normal expression clauses. There
is nothing specific to window functions or filter clauses in an expression.
As far as I can find, the only current row notion in a window clause is for specifying the window frame boundaries. I don't think this gets you
what you want.
It's possible that you could get some traction from an enclosing query:
<http://www.postgresql.org/docs/current/static/sql-expressions.html>
> When an aggregate expression appears in a subquery (see Section 4.2.11
> and Section 9.22), the aggregate is normally evaluated over the rows
> of the subquery. But an exception occurs if the aggregate's arguments
> (and filter\_clause if any) contain only outer-level variables: the
> aggregate then belongs to the nearest such outer level, and is
> evaluated over the rows of that query.
but it's not obvious to me how.
|
Referencing current row in FILTER clause of window function
|
[
"",
"sql",
"postgresql",
"window-functions",
"postgresql-9.4",
""
] |
I'd like a column of numbers:
Seven occurrences of the integer 1, followed by 7 occurrences of 2, followed by 7 occurrences of 3 ..., followed by 7 occurrences of n-1, followed by 7 occurrences of n. Like so:
```
Num
1
1
1
1
1
1
1
2
2
2
2
2
2
2
...
...
n-1
n-1
n-1
n-1
n-1
n-1
n-1
n
n
n
n
n
n
n
```
Unfortunately I've not progressed too far. My current attempt is the following, where n=4:
```
WITH
one AS
(
SELECT num = 1,
cnt = 0
UNION ALL
SELECT num = num,
cnt = cnt + 1
FROM one
WHERE cnt < 7
),
x AS
(
SELECT num,
cnt = 0
FROM one
UNION ALL
SELECT num = num + 1,
cnt = cnt + 1
FROM one
WHERE cnt < 4
)
SELECT *
FROM x
```
|
```
;WITH Numbers AS
(
SELECT n = 1
UNION ALL
SELECT n + 1
FROM Numbers
WHERE n+1 <= 10
),
se7en AS
(
SELECT n = 1
UNION ALL
SELECT n + 1
FROM se7en
WHERE n+1 <= 7
)
SELECT Numbers.n
FROM Numbers CROSS JOIN se7en
```
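The same cross-join idea ports to any engine with recursive CTEs; a quick check with Python's built-in `sqlite3` (note SQLite requires the `RECURSIVE` keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE
    numbers(n) AS (SELECT 1 UNION ALL SELECT n + 1 FROM numbers WHERE n < 10),
    seven(k)   AS (SELECT 1 UNION ALL SELECT k + 1 FROM seven   WHERE k < 7)
    SELECT n FROM numbers CROSS JOIN seven ORDER BY n
""").fetchall()

nums = [r[0] for r in rows]
print(len(nums))  # 70
print(nums[:8])   # [1, 1, 1, 1, 1, 1, 1, 2]
```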
|
```
WITH t1 AS (SELECT 0 as num UNION ALL SELECT 0)
,t2 AS (SELECT 0 as num FROM t1 as a CROSS JOIN t1 as b)
,t3 AS (SELECT 0 as num FROM t2 as a CROSS JOIN t2 as b)
,t4 AS (SELECT 0 as num FROM t3 as a CROSS JOIN t3 as b)
,Tally (number)
AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) FROM t4)
SELECT t1.number
FROM Tally as t1 cross join Tally as t2
where t2.number <=7
ORDER BY t1.number;
```
|
Recursive cte to repeat several integers
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I'm trying to combine data from several tables into one result. With simplified example-tables, this is what I want to get in result:
```
ID Text Choice
--------------------------------
1 My first choice 0
2 My second choice 0
3 My third choice 1
```
from the three tables 'Persons', 'Choices' and 'Choices_made' below.
'Persons':
```
ID Name Age
-------------------
1 Adam 22
2 Scott 25
3 Tom 28
```
'Choices':
```
ID Text
----------------------
1 My first choice
2 My second choice
3 My third choice
```
'Choices\_made':
```
Person_ID Choice_ID
----------------------
2 3
```
I have tried some different queries, but not found the right one. I got stuck when trying this query:
```
SELECT * FROM (
(SELECT * FROM Choices) t1
UNION
(SELECT 1 as Choice FROM Choices_made WHERE Person_ID=2) t2
) t_union
```
... which does not work. It causes error #1064 - You have an error in your SQL syntax.
Any suggestions on how I can accomplish the wanted result?
|
You can try the next query
```
SELECT Choices.*
,IF(Choices_made.Choice_ID IS NULL, 0, 1) AS Choice
FROM Choices
LEFT JOIN Choices_made ON Choices_made.Choice_ID = Choices.ID AND Choices_made.Person_ID = @PersonID
```
|
```
SELECT c.id,
c.text,
cm.Choice_ID = c.id as persons_choice
FROM Choices c
LEFT JOIN Choices_made cm on cm.Choice_ID = c.id
and cm.Person_ID = 2
```
|
SQL - Combine a select * query with select 1
|
[
"",
"mysql",
"sql",
"select",
""
] |
Within my SQL database I have a table which represents books of tickets [Books] where the number of tickets within a book can vary.
This is represented by two columns `[Books].[StartNo]` and `[Books].[BookSize]`
What I need to achieve is a select statement that repeats each row in the table [Books] for each ticket in that book with an additional calculated column that displays the ticket number for that row.
So from
```
--------+---------+----------
Book | StartNo | BookSize
--------+---------+----------
Book 1 | 1 | 3
Book 2 | 4 | 4
Book 3 | 19 | 4
```
to something like this
```
--------+---------+----------+----------
Book | StartNo | BookSize | TicketNo
--------+---------+----------+----------
Book 1 | 1 | 3 | 1
Book 1 | 1 | 3 | 2
Book 1 | 1 | 3 | 3
Book 2 | 4 | 4 | 4
Book 2 | 4 | 4 | 5
Book 2 | 4 | 4 | 6
Book 2 | 4 | 4 | 7
Book 3 | 19 | 4 | 19
Book 3 | 19 | 4 | 20
Book 3 | 19 | 4 | 21
Book 3 | 19 | 4 | 22
```
I'm just not quite sure where to start.
|
Use `tally table`
```
WITH lv0 AS (SELECT 0 g UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0)
,lv1 AS (SELECT 0 g FROM lv0 a CROSS JOIN lv0 b) --10 * 10 = 100
,lv2 AS (SELECT 0 g FROM lv1 a CROSS JOIN lv0 b) --100 * 10 = 1000
,Tally (num) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM lv2)
SELECT (num+StartNo-1) as TicketNo, *
FROM Tally
CROSS JOIN Yourtable
WHERE num <= booksize
ORDER BY book
```
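The same expansion can be sketched with Python's `sqlite3` module; SQLite's recursive CTE plays the role of the tally table here, and the table and column names follow the question:

```python
import sqlite3

# In-memory copy of the question's Books table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Books (Book TEXT, StartNo INTEGER, BookSize INTEGER);
INSERT INTO Books VALUES ('Book 1', 1, 3), ('Book 2', 4, 4), ('Book 3', 19, 4);
""")

# A recursive CTE generates the number sequence; cross-joining it against
# Books and filtering on BookSize repeats each book once per ticket.
rows = conn.execute("""
WITH RECURSIVE Tally(num) AS (
    SELECT 1 UNION ALL SELECT num + 1 FROM Tally WHERE num < 100
)
SELECT Book, StartNo, BookSize, num + StartNo - 1 AS TicketNo
FROM Books CROSS JOIN Tally
WHERE num <= BookSize
ORDER BY Book, TicketNo
""").fetchall()
```

This yields the 3 + 4 + 4 = 11 ticket rows shown in the question, with `TicketNo` running from each book's `StartNo`.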
|
Try this:
```
;WITH Counts AS (
SELECT Max(StartNo + BookSize) AS TotalBookSize
FROM t
), CTE(Tickets) AS (
SELECT 1
UNION ALL
SELECT Tickets + 1
FROM CTE
WHERE Tickets < (SELECT TotalBookSize FROM Counts)
)
SELECT *
FROM t JOIN CTE ON CTE.Tickets BETWEEN t.StartNo AND t.StartNo + t.BookSize - 1
```
|
Expand header row into multiple child rows
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I have two tables, each with the same schema.
The first one (or `OldDataTable` from here on) looks like this:
```
| PlantID | OutletId | BusinessTypeID | TradeChannelID |
|---------|-----------|----------------|----------------|
| I000 | 500113730 | 1 | 8 |
| I000 | 500113772 | 1 | 12 |
| I000 | 500113819 | 1 | 40 |
| I000 | 500113821 | 1 | 8 |
| I000 | 500113848 | 1 | 7 |
```
The second one (or `NewDataTable` from here on) looks like this:
```
| PlantID | OutletId | BusinessTypeID | TradeChannelID |
|---------|-----------|----------------|----------------|
| I000 | 500113730 | 2 | 5 |
| I000 | 500113772 | 1 | 12 |
| I000 | 500113819 | 1 | 40 |
| I000 | 500113821 | 1 | 8 |
| I000 | 500113848 | 1 | 7 |
```
You can see, there are some differences. For a given `OutletId` (500113730), the `BusinessTypeID` and `TradeChannelID` changed in the `NewDataTable` as opposed to the `OldDataTable`.
I can't figure out a query that accomplishes what I am looking for. I need have a query that produces an output that shows the `OutletID` that changed and then what changed, along with its original value. Given the two examples above, the result should look like this:
```
| PlantID | Change_PlantID | OutletID | Change_OutletID | BusinessTypeID | Change_BusinessTypeID | TradeChannelID | Change_TradeChannelID |
|---------|----------------|-----------|-----------------|----------------|-----------------------|----------------|-----------------------|
| I000 | | 500113730 | | 1 | 2 | 8 | 5 |
```
Couple things to note:
The application will not always know what was changed, if anything was changed. The output only shows the `OutletId`s that changed.
|
It should be easy enough to join the two tables together, using a constraint that checks for anything different:
```
SELECT
old.PlantID
,old.OutletID
,old.BusinessTypeID
,new.BusinessTypeID AS Change_BusinessTypeID
,old.TradeChannelID
,new.TradeChannelID AS Change_TradeChannelID
FROM
OldDataTable old
FULL OUTER JOIN
NewDataTable new
ON
old.PlantID = new.PlantID
AND
old.OutletID = new.OutletID
WHERE
(
old.BusinessTypeID <> new.BusinessTypeID
OR
old.TradeChannelID <> new.TradeChannelID
)
```
What this won't do is hide the old/new pairs of one column that didn't change while another column in the same row did, but you could easily add some checks in the SELECT list (e.g. `SELECT CASE WHEN old.column <> new.column THEN old.column END`).
Update - here's what the statement might look like if you want to handle the case where only what changed *per column* is included in the results:
```
SELECT
old.PlantID
,old.OutletID
,(CASE WHEN old.BusinessTypeID <> new.BusinessTypeID THEN old.BusinessTypeID END) AS BusinessTypeID
,(CASE WHEN old.BusinessTypeID <> new.BusinessTypeID THEN new.BusinessTypeID END) AS Change_BusinessTypeID
,(CASE WHEN old.TradeChannelID <> new.TradeChannelID THEN old.TradeChannelID END) AS TradeChannelID
,(CASE WHEN old.TradeChannelID <> new.TradeChannelID THEN new.TradeChannelID END) AS Change_TradeChannelID
FROM
OldDataTable old
FULL OUTER JOIN
NewDataTable new
ON
old.PlantID = new.PlantID
AND
old.OutletID = new.OutletID
WHERE
(
old.BusinessTypeID <> new.BusinessTypeID
OR
old.TradeChannelID <> new.TradeChannelID
)
```
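For comparison, here is a pure-Python sketch of the same diff: it indexes both tables by their key columns (`PlantID`, `OutletID`, as in the join condition above) and reports only the columns that actually changed, using the question's sample data:

```python
# (PlantID, OutletID, BusinessTypeID, TradeChannelID) rows from the question.
old_rows = [("I000", "500113730", 1, 8), ("I000", "500113772", 1, 12),
            ("I000", "500113819", 1, 40), ("I000", "500113821", 1, 8),
            ("I000", "500113848", 1, 7)]
new_rows = [("I000", "500113730", 2, 5), ("I000", "500113772", 1, 12),
            ("I000", "500113819", 1, 40), ("I000", "500113821", 1, 8),
            ("I000", "500113848", 1, 7)]

# Index both tables by their key columns.
old_by_key = {(p, o): (b, t) for p, o, b, t in old_rows}
new_by_key = {(p, o): (b, t) for p, o, b, t in new_rows}

changes = []
for key in old_by_key.keys() & new_by_key.keys():
    old_b, old_t = old_by_key[key]
    new_b, new_t = new_by_key[key]
    if (old_b, old_t) != (new_b, new_t):
        # None plays the role of the blank cells in the expected output.
        changes.append({
            "PlantID": key[0], "OutletID": key[1],
            "BusinessTypeID": old_b if old_b != new_b else None,
            "Change_BusinessTypeID": new_b if old_b != new_b else None,
            "TradeChannelID": old_t if old_t != new_t else None,
            "Change_TradeChannelID": new_t if old_t != new_t else None,
        })
```

Only outlet 500113730 is reported, with both its old and new values, mirroring the per-column CASE version of the SQL.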
|
Here's a simple version for you. You can adjust the CASE statement(s) to produce whatever output you need (e.g. use bit fields, etc.). If you wish to return all rows, changed or not, just comment out the WHERE clause.
Hope it assists,
```
DECLARE @OldDataTable AS TABLE (PlantID nvarchar(20), OutletId nvarchar(20), BusinessTypeID int,TradeChannelID int )
DECLARE @NewDataTable AS TABLE (PlantID nvarchar(20), OutletId nvarchar(20), BusinessTypeID int,TradeChannelID int )
INSERT INTO @OldDataTable
VALUES
('I000','500113730',1,8)
,('I000','500113772',1,12)
,('I000','500113819',1,40)
,('I000','500113821',1,8)
,('I000','500113848',1,7)
INSERT INTO @NewDataTable
VALUES
('I000','500113730',2,5)
,('I000','500113772',1,12)
,('I000','500113819',3,40)
,('I000','500113821',1,9)
,('I000','500113848',1,7)
SELECT a.PlantID
,a.OutletId
,a.BusinessTypeID
,a.TradeChannelID
,b.BusinessTypeID
,b.TradeChannelID
,LTRIM(RTRIM(CASE WHEN a.BusinessTypeID <> b.BusinessTypeID THEN 'BusinessTypeID' ELSE '' END
+ ' ' + CASE WHEN a.TradeChannelID <> b.TradeChannelID THEN 'TradeChannelID' ELSE '' END)) as ChangeType
FROM @OldDataTable a
INNER JOIN @NewDataTable b ON a.OutletId = b.OutletId
WHERE a.BusinessTypeID <> b.BusinessTypeID
OR a.TradeChannelID <> b.TradeChannelID
```
|
Finding the differences between tables with same schema, with changes and original values in same output
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table like this in MySQL:
```
+-----+------+
| id | type |
+-----+------+
| 149 | 8 |
| 150 | 7 |
| 151 | 8 |
| 152 | 7 |
| 153 | 5 |
| 154 | 6 |
| 155 | 3 |
| 156 | 2 |
| 157 | 4 |
| 158 | 2 |
| 159 | 1 |
| 160 | 0 |
+-----+------+
```
I would like to sort this table and receive results like this:
```
+-----+------+
| id | type |
+-----+------+
| 151 | 8 |
| 152 | 7 |
| 154 | 6 |
| 153 | 5 |
| 157 | 4 |
| 155 | 3 |
| 158 | 2 |
| 159 | 1 |
| 160 | 0 |
| 149 | 8 |
| 150 | 7 |
| 156 | 2 |
+-----+------+
```
As further explanation, I want to sort the `type` column as a continuous countdown, like this: `8,7,6,5,4,3,2,1,0,8,7,6,5,4,3,2,1,0,8,7,...`
Is it possible to sort table like that? or achieve that result by procedures or something else?
|
```
SELECT id, TYPE FROM (
SELECT id, TYPE,
IF(@myvar = 0 OR @myvar = TYPE, @counter := @counter + 1, @counter := 1) sequence,
@myvar := TYPE FROM mytable
JOIN (SELECT @myvar := 0, @counter := 0 ) a
ORDER BY TYPE DESC, id) b
ORDER BY sequence, TYPE DESC, id
```
This query will work for any level.
Put an outer query if necessary to fetch only relevant fields.
Checkout this [SqlFiddle](http://www.sqlfiddle.com/#!9/95290/1)
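The user-variable trick can be hard to follow, so here is a pure-Python sketch of the same logic: number each row within its `type` group (smaller ids first, as the accepted query does), then sort by that sequence number first and `type` descending second. The data is the question's sample:

```python
# (id, type) rows from the question.
rows = [(149, 8), (150, 7), (151, 8), (152, 7), (153, 5), (154, 6),
        (155, 3), (156, 2), (157, 4), (158, 2), (159, 1), (160, 0)]

# Pass 1: walk the rows ordered by type DESC, id ASC, numbering each
# occurrence within its type group (this is what @counter computes).
seen = {}
numbered = []
for rec_id, typ in sorted(rows, key=lambda r: (-r[1], r[0])):
    seen[typ] = seen.get(typ, 0) + 1
    numbered.append((seen[typ], typ, rec_id))

# Pass 2: order by sequence number, then type DESC, then id —
# producing the repeating 8,7,6,...,0 countdown.
ordered = [(rec_id, typ) for _, typ, rec_id in
           sorted(numbered, key=lambda r: (r[0], -r[1], r[2]))]
```

The first full countdown uses the lowest id of each type, then the leftover duplicates form the next countdown pass.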
|
This will work only if each type occurs at most **twice**.
Write the following code in a procedure:
```
drop temporary table if exists tmp;
create temporary table tmp
select min(id) as min_id ,max(id) as max_id from table_name group by type;
(select * from table_name where id in (select max_id from tmp) order by type)
union all
(select * from table_name where id in (select min_id from tmp) order by type);
```
|
mysql advanced order by query
|
[
"",
"mysql",
"sql",
""
] |
How can I compare just the dates of timestamps while ignoring the times?
I just want to compare the month/date/year. For example:
```
select * from Batch
where openingtime <> closingtime
```
The problem is that this will show too many batches, since it will include batches where `OpeningTime` and `ClosingTime` differ only in the time of day:
---
`OpeningTime = 2010-07-07 11:19:29.000`
---
`ClosingTime = 2010-07-07 19:19:22.000`
|
cast both timestamps as dates
For SQL Server
```
Select *
from Batch
where cast(openingtime as date) <> cast(closingtime as date)
```
For Oracle
```
Select *
from Batch
where trunc(openingtime) <> trunc(closingtime)
```
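The idea behind casting both timestamps to dates can be illustrated outside SQL as well; in Python, `datetime.date()` strips the time-of-day the same way `CAST(... AS date)` does, using the question's sample values:

```python
from datetime import datetime

# The question's OpeningTime / ClosingTime: same day, different times.
opening = datetime(2010, 7, 7, 11, 19, 29)
closing = datetime(2010, 7, 7, 19, 19, 22)

# Comparing the full timestamps says they differ...
differ_as_timestamps = opening != closing

# ...but comparing only the date parts ignores the time of day.
same_day = opening.date() == closing.date()
```

So a row like this one is excluded by `cast(openingtime as date) <> cast(closingtime as date)`, which is exactly the behavior the question asks for.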
|
Another way
```
Select * from Batch where
CONVERT(VARCHAR(10),openingtime,110) <> CONVERT(VARCHAR(10),closingtime,110)
```
|
SQL: How to compare dates while ignoring the time?
|
[
"",
"sql",
""
] |
I use this query to get all the stored procedure in my database (I'm using Microsoft SQL Server 2008):
```
SELECT
SPECIFIC_NAME, ROUTINE_DEFINITION
FROM
INFORMATION_SCHEMA.ROUTINES
```
For almost all results everything is OK, but for rows with a very long `ROUTINE_DEFINITION` the result is cropped.
Do you know how to solve it?
|
Please try `sp_helptext 'ProcedureName'`, or use either `sys.sql_modules` or the `OBJECT_DEFINITION([object_id])` function to get the stored procedure text.
All of these return the full procedure code. `INFORMATION_SCHEMA.ROUTINES` only returns up to nvarchar(4000).
|
If you are using SQL Server Management Studio then it is possible that it does not display all of the text of the stored procedure (since I believe [there is a 8192 character limit for each column](https://msdn.microsoft.com/en-us/library/ms178782(v=sql.100))).
One thing you can do to check this is to verify the length of the string stored in the column:
```
SELECT
SPECIFIC_NAME
, ROUTINE_DEFINITION
, LEN(ROUTINE_DEFINITION) [Length in characters]
FROM INFORMATION_SCHEMA.ROUTINES
```
If this length is greater than 8192 then it is possible that the truncation occurs only on display level (SSMS) and not in the actual code of the stored procedure.
There is however a way to increase this number, as mentioned [**here** - Options (Query Results/SQL Server/Results to Grid Page)](https://msdn.microsoft.com/en-us/library/ms190078(v=sql.100).aspx)
> To change the options for the current queries, click Query Options on
> the Query menu, or right-click in the SQL Server Query window and
> select Query Options.
...
> Maximum Characters Retrieved
>
> Non XML data: Enter a number from 1 through 65535 to specify the maximum number of characters that will be displayed in each cell.
|
SQL query to get Stored Procedure crop result if the routine_definition is greater than 4000 char
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables:
```
User : (id, username, created_at, updated_at)
Comment : (comment_id, user_id, username, created_at, updated_at)
```
Note: **yes, I do understand Comment table has a duplicated field, 'username'. However, the table is already designed in that way and I have no permission to redesign the schema.**
---
And this is an output format how I want to extract data from tables.
```
id | username | num_of_counts
```
---
And this is two different sql codes I've tried with (I've simplified the codes to show you what I'm trying to do... minor typos may exist but general ideas are here.)
```
-- Ver 1
SELECT u.id, u.username, COUNT(c.id)
FROM User u
LEFT JOIN Comment c ON u.id = c.id
GROUP BY u.id
-- Ver 2
SELECT u.id, u.username, c.cnt
FROM User u
LEFT JOIN (SELECT id, COUNT(*) AS cnt
FROM Comment
GROUP BY user_id) c
ON u.id = c.id
GROUP BY u.id
```
Both codes gives me the same issue:
> "Column 'username' is invalid in the select list because it is not
> contained in either an aggregate function or the GROUP BY clause"
After reading [some articles](https://stackoverflow.com/search?q=Column%20is%20invalid%20in%20the%20select%20list%20because%20it%20is%20not%20contained%20in%20either%20an%20aggregate%20function%20or%20the%20GROUP%20BY%20clause) regarding it, I've learned that there's a conflict between selecting 'username' and grouping rows by 'id'.
I'm still googling and reading [similar cases](https://stackoverflow.com/questions/7709232/mysql-select-with-a-count-query-on-another-table) but still getting the same issue (I'm not that good at sql stuff...)
What would be the best way to code sql query to get outputs in this format?
---
```
id | username | num_of_comments
1 | Tyler | 3
2 | Jane | 5
3 | Jack | 1
```
|
Neither has solved the issue.... :'(
```
SELECT
    u.*,
    -- correlate on the foreign key; "WHERE id = id" is always true and
    -- would count every comment for every user
    (SELECT COUNT(*) FROM Comment c WHERE c.user_id = u.id) AS Comments
FROM User u
ORDER BY u.id DESC
```
This work-around has solved the issue... it's a simplified version of what i've actually coded tho. I still appreciate your answers.
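For reference, here is a small runnable sketch of the count-per-user query using Python's `sqlite3` module. Note that the correlation/join must be on the real foreign key (`Comment.user_id = User.id`), not `u.id = c.id`; the sample data below is made up to match the expected output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE User (id INTEGER, username TEXT);
CREATE TABLE Comment (comment_id INTEGER, user_id INTEGER);
INSERT INTO User VALUES (1,'Tyler'),(2,'Jane'),(3,'Jack');
INSERT INTO Comment VALUES (1,1),(2,1),(3,1),(4,2),(5,3);
""")

# LEFT JOIN + GROUP BY on both selected User columns avoids the
# "not contained in an aggregate or GROUP BY" error.
rows = conn.execute("""
SELECT u.id, u.username, COUNT(c.comment_id) AS num_of_comments
FROM User u
LEFT JOIN Comment c ON c.user_id = u.id
GROUP BY u.id, u.username
ORDER BY u.id
""").fetchall()
```

With a LEFT JOIN, users with zero comments would still appear (with a count of 0), which an INNER JOIN silently drops.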
|
```
SELECT u.id, u.username, COUNT(c.comment_id) as theCount
FROM User u
JOIN Comment c ON c.user_id = u.id
GROUP BY u.id, u.username
```
|
Select users' info with their total number of comments
|
[
"",
"sql",
"sql-server",
""
] |
Using the Linq fluent syntax, how do I get the contents of table1 using a foreign key from table 2. For example given tables
```
tblPropertyExtras tblExtra
PropertyExtraId ExtraId
propertyId ExtraName
ExtraId
```
I am trying to do the equivalent of the SQL
```
Select tblExtra.*
From tblPropertyExtras Inner Join tblExtra
On tblPropertyExtras.ExtraId = tblExtra.ExtraId
Where propertyId = 1234
```
(Extra part :Is there a converter from SQL to Linq that I can use while I am annoyingly bad a Linq?)
|
This should work for you:-
```
var result = db.tblPropertyExtras.Join(db.tblExtra,
pe => pe.ExtraId,
e => e.ExtraId,
(pe, e) => new { pe, e })
.Where(x => x.pe.propertyId == 1234)
.Select(x => x.e);
```
Although I am not a fan of Method Syntax when it comes to Joins. I prefer query syntax instead:-
```
var result1 = from pe in db.tblPropertyExtras
join e in db.tblExtra
on pe.ExtraId equals e.ExtraId
where pe.propertyId == 1234
select e;
```
|
```
var query =
from p in tblPropertyExtras
where p.propertyId == 1234
select p.tblExtras;
```
If the Database has been mapped properly you should just be able to go `Entity.ForeignEntities`.
Fluent Linq is usually in the order of:
```
from
where
order
select
```
|
Fluent Linq: Get the contents of Table1 where entry in Table2
|
[
"",
"sql",
"linq",
""
] |
I am trying to return data in fifteen minute intervals. The first thing I thought to do was this:
`select * from myTable where DATEPART(minute, Timestamp) % 15 = 0`
But there are two problems with this approach. The first is that there will not necessarily always be data with a timestamp at a given minute, the other is that sometimes there are multiple data points at a given minute with different second values. I want to have exactly one row for each fifteen minute group, at :00, :15, :30, etc.
This data is only recorded when something changes, so if I don't have a data point at 12:30, for example, I could take the closest data point before that and use that value for 12:30 and it would be correct.
So basically I need to be able to return timestamps at exactly :00, :30, etc along with the data from the record closest to that time.
The data could span years but is more likely to be a shorter amount of time, days or weeks. This is what the expected output would look like:
```
Timestamp Value
1/1/2015 12:30:00 25
1/1/2015 12:45:00 41
1/1/2015 1:00:00 45
```
I'm having trouble thinking of a way to do this in SQL. Is it possible?
|
Given a fixed start time, all you would need is a table of numbers to add your intervals to. If you don't already have a table of numbers (which are useful) then a quick way to generate one on the fly is
```
WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2)
SELECT *
FROM Numbers;
```
This simply generates a sequence from 1 to 10,000. For more reading on this see the following series:
* [Generate a set or sequence without loops – part 1](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1)
* [Generate a set or sequence without loops – part 2](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2)
* [Generate a set or sequence without loops – part 3](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-3)
Then once you have your numbers you can generate your intervals:
```
DECLARE @StartDateTime SMALLDATETIME = '20150714 14:00',
@EndDateTime SMALLDATETIME = '20150715 15:00';
WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2)
SELECT Interval = DATEADD(MINUTE, 15 * (N - 1), @StartDateTime)
FROM Numbers
WHERE DATEADD(MINUTE, 15 * (N - 1), @StartDateTime) <= @EndDateTime
```
Which gives something like:
```
Interval
----------------------
2015-07-14 14:00:00
2015-07-14 14:15:00
2015-07-14 14:30:00
2015-07-14 14:45:00
2015-07-14 15:00:00
2015-07-14 15:15:00
2015-07-14 15:30:00
```
Then you just need to find the closest value on or before each interval using [`APPLY`](https://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx) and `TOP`:
```
/*****************************************************************
SAMPLE DATA
*****************************************************************/
DECLARE @T TABLE ([Timestamp] DATETIME, Value INT);
INSERT @T ([Timestamp], Value)
SELECT DATEADD(SECOND, RAND(CHECKSUM(NEWID())) * -100000, GETDATE()),
CEILING(RAND(CHECKSUM(NEWID())) * 100)
FROM sys.all_objects;
/*****************************************************************
QUERY
*****************************************************************/
DECLARE @StartDateTime SMALLDATETIME = '20150714 14:00',
@EndDateTime SMALLDATETIME = '20150715 15:00';
WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2),
Intervals AS
( SELECT Interval = DATEADD(MINUTE, 15 * (N - 1), @StartDateTime)
FROM Numbers
WHERE DATEADD(MINUTE, 15 * (N - 1), @StartDateTime) <= @EndDateTime
)
SELECT i.Interval, t.[Timestamp], t.Value
FROM Intervals AS i
OUTER APPLY
( SELECT TOP 1 t.[Timestamp], t.Value
FROM @T AS t
WHERE t.[Timestamp] <= i.Interval
ORDER BY t.[Timestamp] DESC, t.Value
) AS t
ORDER BY i.Interval;
```
---
**Edit**
One point to note is that in the case of having two equal timestamps that are both on or closest to an interval, I have applied a secondary level of ordering by `Value`:
```
SELECT i.Interval, t.[Timestamp], t.Value
FROM Intervals AS i
OUTER APPLY
( SELECT TOP 1 t.[Timestamp], t.Value
FROM @T AS t
WHERE t.[Timestamp] <= i.Interval
ORDER BY t.[Timestamp] DESC, t.Value --- ORDERING HERE
) AS t
ORDER BY i.Interval;
```
This is arbitrary and could be anything you chose, it would be advisable to ensure that you order by enough items to ensure the results are deterministic, that is to say, if you ran the query on the same data many times the same results would be returned because there is only one row that satisfies the criteria. If you had two rows like this:
```
Timestamp | Value | Field1
-----------------+---------+--------
2015-07-14 14:00 | 100 | 1
2015-07-14 14:00 | 100 | 2
2015-07-14 14:00 | 50 | 2
```
If you just order by timestamp, for the interval `2015-07-14 14:00`, you don't know whether you will get a value of 50 or 100, and it could be different between executions depending on statistics and the execution plan. Similarly if you order by `Timestamp` and `Value`, then you don't know whether `Field1` will be 1 or 2.
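The interval-plus-closest-row logic itself can be sketched compactly in Python: `bisect` finds the last data point at or before each 15-minute boundary, which is what the `OUTER APPLY ... TOP 1 ... ORDER BY Timestamp DESC` lookup does. The timestamps below are made up for illustration:

```python
from datetime import datetime, timedelta
import bisect

# (timestamp, value) pairs, already sorted by timestamp; data is only
# recorded when something changes.
data = [
    (datetime(2015, 1, 1, 12, 20), 25),
    (datetime(2015, 1, 1, 12, 41), 41),
    (datetime(2015, 1, 1, 12, 58), 45),
]
timestamps = [ts for ts, _ in data]

start = datetime(2015, 1, 1, 12, 30)
end = datetime(2015, 1, 1, 13, 0)

result = []
t = start
while t <= end:
    # Index of the last reading at or before this interval boundary.
    i = bisect.bisect_right(timestamps, t) - 1
    result.append((t, data[i][1] if i >= 0 else None))
    t += timedelta(minutes=15)
```

Each :00/:15/:30/:45 boundary carries forward the most recent reading, reproducing the question's expected output of 25, 41, 45.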
|
As Shnugo mentions, you can use a tally table to get your data in intervals of 15 minutes, something like this.
I am creating a dynamic tally table using a CTE; however, you could use a physical calendar table instead, as per your needs.
```
DECLARE @StartTime DATETIME = '2015-01-01 00:00:00',@EndTime DATETIME = '2015-01-01 14:00:00'
DECLARE @TimeData TABLE ([Timestamp] datetime, [Value] int);
INSERT INTO @TimeData([Timestamp], [Value])
VALUES ('2015-01-01 12:30:00', 25),
('2015-01-01 12:45:00', 41),
('2015-01-01 01:00:00', 45);
;WITH CTE(rn) AS
(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), CTE2 as
(
SELECT C1.rn
FROM CTE C1 CROSS JOIN CTE C2
), CTE3 as
(
SELECT TOP (CEILING(DATEDIFF(minute,@StartTime,@EndTime)/15)) ROW_NUMBER()OVER(ORDER BY C1.rn) - 1 rn
FROM CTE2 C1 CROSS JOIN CTE2 C2
)
SELECT DATEADD(minute,rn*15,@StartTime) CurrTime,T.Value
FROM CTE3
CROSS APPLY (SELECT TOP 1 Value FROM @TimeData WHERE [Timestamp] <= DATEADD(minute,rn*15,@StartTime) ORDER BY [Timestamp] DESC) T;
```
**OUTPUT**
```
CurrTime Value
2015-01-01 01:00:00.000 45
2015-01-01 01:15:00.000 45
.
.
.
2015-01-01 12:00:00.000 45
2015-01-01 12:15:00.000 45
2015-01-01 12:30:00.000 25
2015-01-01 12:45:00.000 41
2015-01-01 13:00:00.000 41
2015-01-01 13:15:00.000 41
2015-01-01 13:30:00.000 41
2015-01-01 13:45:00.000 41
```
|
How to return value based on the last available timestamp if the exact time is unavailable?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2014",
""
] |
This is my table structure
```
CUST_ID ORDER_DT
1 01-2013
1 04-2013
1 01-2015
1 02-2015
```
What I am trying to achieve is classify the customer as new customer/existing customer and revived.
Logic is
First time order- New
Time from last purchase within 365 days then Existing
Time more than 1 year then Revived
My output should be
```
CUST_ID ORDER_DT FLAG
1 01-2013 New
1 04-2013 Existing
1 01-2015 Revived
1 02-2015 Existing
```
My SQL
```
select a.cust_id,a.order_dt,coalesce(b.ptye,'other') as typ
from tab a left join
(select min(order_dt),new as ptye from tab group by cust_id) b on a.cust_id=b.cust_id
```
How do I replace the other with a nested logic.
|
The ideal way would be to use `lag()`. Teradata doesn't quite support `lag()`, but it does support other window functions, so you can mimic it:
```
select t.cust_id, t.order_dt,
(case when order_dt - prev_od <= 365 then 'Existing' else 'New'
end) as flag
from (select t.*,
max(order_dt) over (partition by cust_id order by order_dt
rows between 1 preceding and 1 preceding
                                ) as prev_od
from mytable t
) t;
```
I should point out that you don't actually need the subquery, but I think it helps readability:
```
select t.cust_id, t.order_dt,
(case when order_dt -
max(order_dt) over (partition by cust_id order by order_dt
rows between 1 preceding and 1 preceding
) <= 365
then 'Existing' else 'New'
end) as flag
from mytable t;
```
|
This includes the "revived" logic which was missing in Gordon's answer:
```
SELECT
CUST_ID, ORDER_DT,
CASE
WHEN ORDER_DT = MIN(ORDER_DT) -- first order
OVER (PARTITION BY CUST_ID)
THEN 'New'
WHEN ORDER_DT >= MAX(ORDER_DT) -- more than 365 days since previous order
OVER (PARTITION BY CUST_ID
ORDER BY ORDER_DT
ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) + 365
THEN 'Revived'
ELSE 'Existing'
END
FROM tab
```
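The New/Existing/Revived rule itself is easy to verify in plain Python. This sketch walks one customer's orders in date order, comparing each to the previous one; the question's month-year values are padded to first-of-month dates for illustration:

```python
from datetime import date

# One customer's order dates (question's sample, padded to full dates).
orders = [date(2013, 1, 1), date(2013, 4, 1), date(2015, 1, 1), date(2015, 2, 1)]

flags = []
prev = None
for d in sorted(orders):
    if prev is None:
        flags.append("New")              # first order ever
    elif (d - prev).days <= 365:
        flags.append("Existing")         # within a year of the last order
    else:
        flags.append("Revived")          # gap of more than a year
    prev = d
```

This reproduces the expected New / Existing / Revived / Existing sequence, which is exactly what the MIN/MAX window expressions compute per customer in the SQL above.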
|
SQL nested logic
|
[
"",
"sql",
"logic",
"teradata",
""
] |
MySQL is keeping a large number of files open.
I have set **open\_files\_limit** to **150000**, but MySQL still uses almost **80%** of it.
I also have low traffic, at most around 30 concurrent connections, and no query has more than 4 joins.
|
The files opened by the server are visible in the performance\_schema.
See table performance\_schema.file\_instances.
<http://dev.mysql.com/doc/refman/5.5/en/file-instances-table.html>
As for tracing which query opens which file, it does not work that way, due to caching in the server itself (table cache, table definition cache).
|
MySQL shouldn't open that many files, unless you have set a ludicrously large value for the `table_cache` parameter (the default is 64, the maximum is 512K).
You can reduce the number of open files by issuing the `FLUSH TABLES` command.
Otherwise, the appropriate value of `table_cache` can be roughly estimated (in Linux) by running `strace -c` against all MySQLd threads. You get something like:
```
# strace -f -c -p $( pidof mysqld )
Process 13598 attached with 22 threads
[ ...pause while it gathers information... ]
^C
Process 13598 detached
...
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
58.82 0.040000 51 780 io_getevents
29.41 0.020000 105 191 93 futex
11.76 0.008000 103 78 select
0.00 0.000000 0 72 stat
0.00 0.000000 0 20 lstat
0.00 0.000000 0 16 lseek
0.00 0.000000 0 16 read
0.00 0.000000 0 9 3 open
0.00 0.000000 0 5 close
0.00 0.000000 0 6 poll
...
------ ----------- ----------- --------- --------- ----------------
```
...and see whether there's a reasonable difference in impact in open() and close() calls; those are the calls which `table_cache` affects, and that influence how many open files there are at any given point.
If the impact of `open()` is negligible, then by all means **reduce `table_cache`**. It is mostly needed on slow IOSS'es, and there aren't many of those left around.
If you're running on Windows, you'll have to try and use [ProcMon by SysInternals, or some similar tool](https://superuser.com/questions/418364/how-can-i-trace-profile-a-process-in-windows).
Once you have `table_cache` to manageable levels, your query that now opens too many files will simply close and re-open many of those same files. You'll perhaps notice an impact on performances, that in all likelihood will be negligible. Chances are that a smaller table cache might actually get you results *faster*, as fetching an item from a modern, fast IOSS cache may well be faster than searching for it in a really large cache.
If you're into optimizing your server, you may want to look at [this article](http://haydenjames.io/mysql-query-cache-size-performance/) too. The take-away is that as caches go, **larger is not always better** (it also applies to indexing).
## Inspecting a specific query on Linux
On Linux you can use *strace* (see above) and verify what files are opened and how:
```
$ sudo strace -f -p $( pidof mysqld ) 2>&1 | grep 'open("'
```
Meanwhile from a different terminal I run a query, and:
```
[pid 8894] open("./ecm/db.opt", O_RDONLY) = 39
[pid 8894] open("./ecm/prof2_people.frm", O_RDONLY) = 39
[pid 8894] open("./ecm/prof2_discip.frm", O_RDONLY) = 39
[pid 8894] open("./ecm/prof2_discip.ibd", O_RDONLY) = 19
[pid 8894] open("./ecm/prof2_discip.ibd", O_RDWR) = 19
[pid 8894] open("./ecm/prof2_people.ibd", O_RDONLY) = 20
[pid 8894] open("./ecm/prof2_people.ibd", O_RDWR) = 20
[pid 8894] open("/proc/sys/vm/overcommit_memory", O_RDONLY|O_CLOEXEC) = 39
```
...these are the files that the query used (be sure to run the query on a "cold-started" MySQL to prevent caching), and I see that the highest file handle assigned was 39, thus at no point were there more than 40 open files.
The same files can be checked from /proc/$PID/fd or from MySQL:
```
select * from performance_schema.file_instances where open_count > 1;
```
but the count from MySQL is slightly shorter, it does not take into account socket descriptors, log files, and temporary files.
|
Is it possible to check that a particular query opens how many files in MySQL?
|
[
"",
"mysql",
"sql",
"linux",
"database",
"mysql-5.5",
""
] |
```
Some_Column (Table ABC)
___
123
456
NULL
789
```
When I run `SELECT * FROM ABC` it returns all rows.
However, when I run
`SELECT * FROM ABC WHERE Some_Column <> ''`
it returns all the values but the null. Whether `ANSI_NULLS` is on or off.
Can anyone explain this peculiar behavior to me?
|
It's by design. From [the doc](https://msdn.microsoft.com/en-us/library/ms188048(v=sql.100).aspx) (emphasis mine):
> When SET ANSI\_NULLS is OFF, the Equals (=) and Not Equal To (<>) comparison operators do not follow the ISO standard. A SELECT statement that uses WHERE column\_name = NULL returns the rows that have null values in column\_name. A SELECT statement that uses WHERE column\_name <> NULL returns the rows that have nonnull values in the column. Also, **a SELECT statement that uses WHERE column\_name <> XYZ\_value returns all rows that are not XYZ\_value and that are not NULL.**
So, `WHERE Some_Column <> ''` returns all rows that are not `''` and are not NULL.
This query with ANSI\_NULLS OFF:
```
SELECT *
FROM ABC
WHERE Some_Column <> '';
```
Is equivalent to this query:
```
SELECT *
FROM ABC
WHERE Some_Column <> ''
AND Some_Column IS NOT NULL;
```
When ANSI\_NULLS is ON, of course, then normal ANSI [Three Valued Logic](https://en.wikipedia.org/wiki/Three-valued_logic) applies. NULL is never equal to anything, including NULL. NULL is also never *not* equal to anything, including NULL.
Either way, you should expect to use `Some_Column IS NOT NULL` or `Some_Column IS NULL` and explicitly handle NULL values.
```
SELECT *
FROM ABC
WHERE Some_Column <> ''
OR Some_Column IS NULL
```
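You can reproduce the standard (ANSI\_NULLS ON) behaviour with Python's `sqlite3` module, since SQLite always follows three-valued logic: `NULL <> ''` evaluates to NULL rather than TRUE, so the row is filtered out unless you test `IS NULL` explicitly:

```python
import sqlite3

# In-memory copy of the question's ABC table, including the NULL row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ABC (Some_Column TEXT);
INSERT INTO ABC VALUES ('123'), ('456'), (NULL), ('789');
""")

# The comparison alone silently drops the NULL row...
without_null = conn.execute(
    "SELECT Some_Column FROM ABC WHERE Some_Column <> ''").fetchall()

# ...so NULLs must be included explicitly.
with_null = conn.execute(
    "SELECT Some_Column FROM ABC WHERE Some_Column <> '' OR Some_Column IS NULL"
).fetchall()
```

The first query returns 3 rows, the second all 4, mirroring the behavior described in the question.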
|
This happens because NULL is neither equal to, nor unequal to ''...NULL is simply the absence of a value. You could claim it is 'unknown'.
The only way of including NULLs in the return of your query is to add a clause to check for null values:-
```
SELECT * FROM ABC WHERE Some_Column <> '' OR Some_Column IS NULL
```
ANSI\_NULLS won't alter this behaviour. It only allows you to carry out direct comparisons to null as a 'value' e.g.
```
WHERE Some_Column = NULL
```
Rather than
```
WHERE Some_Column IS NULL
```
|
Comparison with NULL behavior the same even when setting ANSI_NULLS
|
[
"",
"sql",
"sql-server-2008",
""
] |
I believe what I'm asking here is possible, but I haven't find a way yet :
Some cells in the result of my SQL *SELECT-FROM-WHERE* query are blank, and in DbVisualizer, `(null)` is written inside. I'd like to display a blank cell instead.
I've already tried with *CASE-WHEN* and the *NVL* operator, but it won't let me replace it with a blank `''`; I'm forced to use some `' '` or `'message'`.
I know I could just delete these spaces or messages with Excel later, but I'd like to know if there is a way to do it directly with DbVisualizer, instead of this workaround.
---
**EDIT:** Here is what my request looks like :
```
SELECT *things*,
CASE WHEN
(SELECT COUNT(*) FROM table d2 WHERE *join-condition* AND *other condition*) = 1
THEN
(*sub-select query*)
ELSE
''
END
AS NAME,
*other things*
FROM table d1
WHERE *something*
```
Thanks a lot !
|
The problem in your query is the following **ELSE** part of the **CASE** expression:
> ```
> ELSE
> ''
> ```
In *Oracle*, an **empty string** is considered as **NULL** value. So, all you need to do is use *something* else instead of `''`.
For example, to use a **space** instead of **NULL**:
```
ELSE
' '
```
**Update** The issue is the **DbVisualizer** tool. OP is on version `8.0.12`. Prior to version `9.2.8` it cannot show **NULL** as an empty string. However, as discussed in this [forum](http://www.dbvis.com/forum/thread.jspa?messageID=11943), it has been fixed in [DbVisualizer 9.2.8](http://www.dbvis.com/download/).
|
Did you try *standard* SQL function [coalesce()](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm), as below ?
```
SELECT COALESCE(columnName, '') AS ColumnName FROM tableName;
```
---
**Syntax:**

```
COALESCE (expr1, expr2)
```
is equivalent to:
```
CASE WHEN expr1 IS NOT NULL THEN expr1 ELSE expr2 END
```
Similarly,
```
COALESCE (expr1, expr2, ..., exprn), for n>=3
```
is equivalent to:
```
CASE WHEN expr1 IS NOT NULL THEN expr1
ELSE COALESCE (expr2, ..., exprn) END
```
Above examples are from [Database SQL Language Reference](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm#SQLRF00617)
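A quick runnable illustration of `COALESCE` using Python's `sqlite3` module. Note this demonstrates the general semantics; in Oracle specifically an empty string *is* NULL, so `COALESCE(col, '')` would still yield NULL there, which is why the accepted answer suggests a non-empty fallback:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), (None,), ("b",)])

# COALESCE returns the first non-NULL argument, so the NULL row
# comes back as an empty string (SQLite distinguishes '' from NULL).
rows = conn.execute("SELECT COALESCE(c, '') FROM t ORDER BY rowid").fetchall()
```

The NULL cell is replaced by `''` while the other values pass through unchanged.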
|
Replacing "(null)" with blank cell in SQL query result
|
[
"",
"sql",
"string",
"oracle",
"null",
"dbvisualizer",
""
] |
I'm creating a data model to register the SERVICES that an EMPLOYEE is able to perform based on their list of completed COURSES (COURSE\_ID). To do this I have two data tables: EMPLOYEES and SERVICES. And I have two map tables: one to relate the COURSES completed by each EMPLOYEE and another to relate the COURSES required by each SERVICE.
The data tables look like this:
EMPLOYEES:
```
EMP_ID NAME
=============
1 Joe
2 Bob
3 Rob
```
SERVICES:
```
SERV_ID SERVICE
====================
1 Install
2 Configure
3 Manage
```
The relation or map tables look like this:
EMPLOYEE\_COURSES
```
EMP_ID COURSE_ID
==================
1 3
1 5
1 6
2 5
3 4
```
SERVICE\_COURSES
```
SERV_ID COURSE_ID
==================
1 3
1 5
1 6
2 3
3 4
```
The query result that I want to produce should look like this, where an EMPLOYEE is listed for a SERVICE only when they have completed all the COURSES required by that SERVICE:
```
SERV_ID EMP_ID SERVICE NAME
================================
1 1 Install Joe
2 1 Configure Joe
3 3 Manage Rob
```
Any help is appreciated.
Thanks!
|
Here is how I would do it:
```
with employees as (select 1 emp_id, 'Joe' name from dual union all
select 2 emp_id, 'Bob' name from dual union all
select 3 emp_id, 'Rob' name from dual),
services as (select 1 serv_id, 'Install' service from dual union all
select 2 serv_id, 'Configure' service from dual union all
select 3 serv_id, 'Manage' service from dual),
employee_courses as (select 1 emp_id, 3 course_id from dual union all
select 1 emp_id, 5 course_id from dual union all
select 1 emp_id, 6 course_id from dual union all
select 2 emp_id, 5 course_id from dual union all
select 3 emp_id, 4 course_id from dual),
service_courses as (select 1 serv_id, 3 course_id from dual union all
select 1 serv_id, 5 course_id from dual union all
select 1 serv_id, 6 course_id from dual union all
select 2 serv_id, 3 course_id from dual union all
select 3 serv_id, 4 course_id from dual),
---- end of mimicking your tables and data
svc_main as (select serv_id,
course_id,
count(course_id) over (partition by serv_id) svc_course_count
from service_courses)
select sc.serv_id,
ec.emp_id,
svc.service,
emp.name
from svc_main sc
inner join employee_courses ec on (ec.course_id = sc.course_id)
inner join employees emp on (ec.emp_id = emp.emp_id)
inner join services svc on (svc.serv_id = sc.serv_id)
group by sc.serv_id,
ec.emp_id,
svc.service,
emp.name,
sc.svc_course_count
having count(ec.course_id) = sc.svc_course_count
order by ec.emp_id,
sc.serv_id;
SERV_ID EMP_ID SERVICE NAME
---------- ---------- --------- ----
1 1 Install Joe
2 1 Configure Joe
3 3 Manage Rob
```
N.B. I am assuming that there is a unique constraint on (emp\_id, course\_id) in employee\_courses and on (serv\_id, course\_id) in service\_courses. If you could have multiple rows for the same emp\_id/serv\_id per course\_id, then you'll have to put "distinct" inside the counts in my query.
Basically, what I'm doing is first of all, finding out how many courses there are per service (done via an analytic function in the svc\_main subquery), and then join everything together before finding out how many courses per service for each employee. Then it's just a case of choosing the employees that have the same number of courses for a given service that matches with the number of courses required for that service.
|
Just start joining tables.
```
select yourfields
from employees join employee_courses using (emp_id)
join service_courses using (course_id)
join services using (serv_id)
etc
```
|
How to create a query to compare two exact results of two additional queries?
|
[
"",
"sql",
"oracle",
""
] |
How can I perform a query with multiple LIKE parameters?
For example, I have this string to search through:
```
"I like searching very much"
```
This is the code I currently use:
```
$searchTerm = "like";
$this->db->or_like('list.description', $searchTerm, 'both');
```
But I want to search with 2 or 3 parameters, like this:
```
$searchTerm1 = "like";
$searchTerm2 = "much";
```
How can I perform this to get the same result?
|
You can simply repeat the `like` parameters on the active record. In your example you would do something like this:
```
$this->db->or_like('list.description', $searchTerm1);
$this->db->or_like('list.description', $searchTerm2);
$this->db->or_like('list.description', $searchTerm3);
...
```
This will just join each `or_like` with an `OR` in the `WHERE` clause.
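For reference, chaining the three calls above would generate a `WHERE` clause roughly like this (a sketch; the exact quoting and escaping depend on your CodeIgniter version):

```sql
SELECT *
FROM list
WHERE list.description LIKE '%like%'
   OR list.description LIKE '%much%'
   OR list.description LIKE '%searching%';
```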
|
Firstly, you need to define an array with the like variables. Then it's very important to put the `or_like` statement above the `where` clause, so that the multiple `OR` conditions from the `like` are combined with the `where` clause using `AND`.
Here is example:
```
$this->db->or_like(array('column_name1' => $k, 'column_name2' => $k));
$this->db->where($whereColumn, $whereValue);
```
|
Codeigniter database queries with multiple LIKE parameters
|
[
"",
"sql",
"codeigniter",
""
] |
I need help with SQL for generating a pivot table with row/column totals calculated. I have 2 tables as given below:
```
Table ProbCat
==============
probcat | probdesc
1 minor
2 high
3 showstopper
Table ProbSummary
===================
prodcat | noofproblems | stage
1 5 Dev
2 1 Dev
3 6 QA
3 6 Prod
```
and I would like to generate a pivot table with row/column total percentages as shown below. I have tried combinations of PIVOT and GROUP BY but could not get the row and column totals right.
# Problem Summary view:
```
ProbCategory CategoryDesc Dev Qa Prod Total(%)
______________________________________________________
1 Minor 5 0 0 5(100*(5/18))
2 High 1 0 0 1(100*(1/18))
3 Showstopper 0 6 6 12(100*(6/18))
Total NA 6(%) 6(%) 6(%)
```
|
Just like others mentioned, your summary/total calculation should be done on the presentation layer. But here is my attempt to getting your output minus the last summary line:
```
;WITH Q1
AS (
SELECT pvt.probcat
,pvt.probdesc
,ISNULL(pvt.[Dev], 0) AS 'Dev'
,ISNULL(pvt.[QA], 0) AS 'QA'
,ISNULL(pvt.[Prod], 0) AS 'Prod'
FROM (
SELECT pc.probcat
,pc.probdesc
,ps.noofproblems
,ps.stage
FROM Probcat pc
LEFT JOIN ProbSummary ps ON pc.probcat = ps.probcat
) t
PIVOT(max(noofproblems) FOR stage IN (
[Dev]
,[QA]
,[Prod]
)) pvt
),
q2 as
(SELECT q1.*
,sum(q1.Dev + q1.QA + q1.Prod) AS Total
FROM q1
GROUP BY q1.probcat
,q1.probdesc
,q1.Dev
,q1.QA
,q1.Prod
)
select q2.probcat
,q2.probdesc
,q2.Dev
,q2.QA
,q2.Prod
,cast(q2.Total as varchar(10)) + ' (' +
cast(cast((cast(q2.Total as decimal(5,2))/cast(d.CrossSum as decimal(5,2)))*100
as decimal(5,2)) as varchar(10))
+ '% )' as FinalTotal
from q2
CROSS APPLY (
SELECT sum(q1.Dev + q1.QA + q1.Prod) AS CrossSum
FROM q1
) d
ORDER BY q2.probcat
```
[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/b6f3e/34/0)
|
```
with x as
(select p.probcat as probcategory, p.probdesc as categotydesc,
case when s.stage = 'Dev' and s.noofproblems > 0 then s.noofproblems else 0 end as Dev,
case when s.stage = 'QA' and s.noofproblems > 0 then s.noofproblems else 0 end as QA,
case when s.stage = 'Prod' and s.noofproblems > 0 then s.noofproblems else 0 end as Prod
from Probcat p join Probsummary s on p.probcat = s.prodcat)
select probcategory,categotydesc,Dev,QA,Prod, Dev+QA+Prod as Total
from x
```
this should give what you need except the `Total` row at the bottom.
|
Summary table with row/column totals using SQL pivot
|
[
"",
"sql",
"sql-server",
"pivot",
"pivot-table",
""
] |
I want to select all the customers that have the same name and birth date in a MySQL table.
My query right now is close, but it seems to have a flaw:
```
SELECT
id,
customer.name,
date
FROM
customer
INNER JOIN (
SELECT
name
FROM
customer
GROUP BY
name,date
HAVING
COUNT(id) > 1
) temp ON customer.name = customer.name
ORDER BY
name;
```
|
Return a customer if there `EXISTS` another one with same name and date, but other id:
```
SELECT
id,
name,
date
FROM
customer c1
where exists (SELECT 1 from customer c2
where c2.name = c1.name
and c2.date = c1.date
and c2.id <> c1.id)
```
`JOIN` version:
```
SELECT
c1.id,
c1.name,
c1.date
FROM
customer c1
JOIN customer c2
ON c2.name = c1.name
and c2.date = c1.date
and c2.id <> c1.id
```
|
Something like this should do it:
```
select
c1.*
from
customer c1
join customer c2 on
c1.name = c2.name
and c1.birth_date = c2.birth_date
and c1.id != c2.id
order by name, birth_date, id
;
```
And here's the full example:
```
drop table customer;
create table customer (
id int,
name varchar(64),
birth_date date
);
insert into customer values (1, 'Joe', '2001-01-01');
insert into customer values (2, 'Joe', '2001-01-02');
insert into customer values (3, 'Joe', '2001-01-03');
insert into customer values (4, 'Jim', '2001-01-01');
insert into customer values (5, 'Jack', '2001-01-01');
insert into customer values (6, 'George', '2001-01-01');
insert into customer values (7, 'George', '2001-01-02');
insert into customer values (8, 'Jeff', '2001-01-02');
insert into customer values (10, 'Joe', '2001-01-01');
insert into customer values (60, 'George', '2001-01-01');
select * from customer;
select
c1.*
from
customer c1
join customer c2 on
c1.name = c2.name
and c1.birth_date = c2.birth_date
and c1.id != c2.id
order by name, birth_date, id
;
+ ------- + --------- + --------------- +
| id | name | birth_date |
+ ------- + --------- + --------------- +
| 6 | George | 2001-01-01 |
| 60 | George | 2001-01-01 |
| 1 | Joe | 2001-01-01 |
| 10 | Joe | 2001-01-01 |
+ ------- + --------- + --------------- +
4 rows
```
|
Select All customers with exact name and birthdate
|
[
"",
"mysql",
"sql",
""
] |
I have data that was sent to me, and I need to normalize it. The data is in a sql table, but each row has multiple multi value columns. An example is the following:
```
ID fname lname projects projdates
1 John Doe projA;projB;projC 20150701;20150801;20150901
2 Jane Smith projD;;projC 20150701;;20150902
3 Lisa Anderson projB;projC 20150801;20150903
4 Nancy Johnson projB;projC;projE 20150601;20150822;20150904
5 Chris Edwards projA 20150905
```
Needs too look like this:
```
ID fname lname projects projdates
1 John Doe projA 20150701
1 John Doe projB 20150801
1 John Doe projC 20150901
2 Jane Smith projD 20150701
2 Jane Smith projC 20150902
3 Lisa Anderson projB 20150801
3 Lisa Anderson projC 20150903
4 Nancy Johnson projB 20150601
4 Nancy Johnson projC 20150822
4 Nancy Johnson projE 20150904
5 Chris Edwards projA 20150905
```
I need to split it into rows for the id, fname, lname, and parse the projects and projdates into separate records. I have found many posts with split functions and I can get it to work for 1 column, but not 2. When I do 2 columns it cross-joins the splits, i.e. for John Doe, it gives me records for projA 3 times, once for each of the projdates. I need to correlate each multivalue project record with only its respective projdate and not the others.
Any thoughts?
Thanks!
|
If you use Jeff Moden's "[DelimitedSplit8K](http://www.sqlservercentral.com/articles/Tally+Table/72993/)" splitter (Which I have renamed here "fDelimitedSplit8K")
(Ref. Figure 21: *The Final "New" Splitter Code, Ready for Testing*)
to do the heavy lifting for the splits, the rest becomes fairly straightforward, using CROSS APPLY and WHERE to get the proper joining.
```
IF object_ID (N'tempdb..#tInputData') is not null
DROP TABLE #tInputData
CREATE TABLE #tInputData (
ID INT
PRIMARY KEY CLUSTERED -- Add IDENTITY if ID needs to be set at INSERT time
, FName VARCHAR (30)
, LName VARCHAR (30)
, Projects VARCHAR (4000)
, ProjDates VARCHAR (4000)
)
INSERT INTO #tInputData
( ID, FName, LName, Projects, ProjDates )
VALUES
( 1, 'John', 'Doe' , 'projA;projB;projC' , '20150701;20150801;20150901'),
( 2, 'Jane', 'Smith' , 'projD;;projC' , '20150701;;20150902'),
( 3, 'Lisa', 'Anderson' , 'projB;projC' , '20150801;20150903'),
( 4, 'Nancy', 'Johnson' , 'projB;projC;projE' , '20150601;20150822;20150904'),
( 5, 'Chris', 'Edwards' , 'projA' , '20150905')
SELECT * FROM #tInputData -- Take a look at the INSERT results
; WITH ResultSet AS
(
SELECT
InData.ID
, InData.FName
, InData.LName
, ProjectList.ItemNumber AS ProjectID
, ProjectList.Item AS Project
, DateList.ItemNumber AS DateID
, DateList.Item AS ProjDate
FROM #tInputData AS InData
CROSS APPLY dbo.fDelimitedSplit8K(InData.Projects,';') AS ProjectList
CROSS APPLY dbo.fDelimitedSplit8K(InData.ProjDates,';') AS DateList
WHERE DateList.ItemNumber = ProjectList.ItemNumber -- Links projects and dates in left-to-r1ght order
AND (ProjectList.Item <> '' AND DateList.Item <> '') -- Ignore input lines when both Projects and ProjDates have no value; note that these aren't NULLs.
)
SELECT
ID
, FName
, LName
, Project
, ProjDate
FROM ResultSet
ORDER BY ID, Project
```
Results in
```
ID FName LName Project ProjDate
-- ----- -------- ------- --------
1 John Doe projA 20150701
1 John Doe projB 20150801
1 John Doe projC 20150901
2 Jane Smith projC 20150902
2 Jane Smith projD 20150701
3 Lisa Anderson projB 20150801
3 Lisa Anderson projC 20150903
4 Nancy Johnson projB 20150601
4 Nancy Johnson projC 20150822
4 Nancy Johnson projE 20150904
5 Chris Edwards projA 20150905
```
This algorithm handles Project and Date lists of equal length. Should one list be shorter than the other for a given row, some special attention will be needed to apply the NULL in the proper place.
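One hedged sketch of that variation, reusing `#tInputData` and `fDelimitedSplit8K` from above: keep every project via `OUTER APPLY`, so a missing date at a given position comes back as NULL instead of dropping the row:

```sql
SELECT InData.ID, InData.FName, InData.LName,
       ProjectList.Item AS Project,
       DateList.Item    AS ProjDate   -- NULL when the date list is shorter
FROM #tInputData AS InData
CROSS APPLY dbo.fDelimitedSplit8K(InData.Projects, ';') AS ProjectList
OUTER APPLY (SELECT d.Item
             FROM dbo.fDelimitedSplit8K(InData.ProjDates, ';') AS d
             WHERE d.ItemNumber = ProjectList.ItemNumber) AS DateList
WHERE ProjectList.Item <> '';
```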
```
-- Cleanup
DROP TABLE #tInputData
```
|
Try the following query:
```
SELECT A.ID, A.fname, A.lname, A.projects,
       LTRIM(Split.a.value('.', 'VARCHAR(100)')) AS projdates
FROM (SELECT ID, fname, lname, projects,
             CAST('<M>' + REPLACE([projdates], ';', '</M><M>') + '</M>' AS XML) AS String
      FROM yourTable) AS A
CROSS APPLY String.nodes('/M') AS Split(a);
```
Try this and you will get your expected output.
Thanks.
|
SQL Split Multiple Multivalue Columns into Rows
|
[
"",
"sql",
"parsing",
"split",
"multivalue",
""
] |
I have three tables: `comics`, `tags` and `comicTags`.
The `comics` table ID is connected through a foreign key with the `comicTags` table while the `tags` table ID is also connected to the `comicTags` table through the `tagID`.
```
comics table
+----+
| ID |
+----+
| 1 |
| 2 |
+----+
comicTags table
+---------+-------+
| comicID | tagID |
+---------+-------+
| 1 | 1 |
| 2 | 1 |
| 2 | 2 |
+---------+-------+
tags table
+----+-------+
| ID | tag |
+----+-------+
| 1 | tag1 |
| 2 | tag2 |
+----+-------+
```
What I'd like to achieve is, if I'm searching for tag1 AND tag2 I'd only like to get comic ID 2 as a result. Given is a string with the two tag names.
```
SELECT c.ID FROM `tags` `t`
LEFT JOIN comicTags AS ct
ON ct.tagID=t.ID
LEFT JOIN comics AS c
ON c.ID=ct.comicID
WHERE ((t.tag LIKE 'tag1')
OR (t.tag LIKE 'tag2'))
GROUP BY c.ID
```
With this statement I'm obviously getting comic ID 1 as well which I do not want. Changing from OR to AND doesn't work in the statement I've created.
Could you point me in the right direction on how to get only those comic IDs that match all tag IDs?
|
You need to search for rows where the tag\_id is one of the tags you are looking for, and then ensure that the number of rows returned is equal to the number of tags, like this:
```
SELECT c.id
FROM comics c
JOIN comicTags ct ON ct.comicID = c.id
JOIN tags t ON t.id = ct.tagID AND t.tag IN ('tag1', 'tag2')
GROUP BY c.id
HAVING COUNT(*) = 2;
```
I have changed the condition to use the `IN` operator. Your current query does not have any wildcards, so I assume you are looking for tags that are an exact match, in which case this query is a little nicer.
Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/5ef04/1) example.
Note that if it's possible for a comic to be aligned with a tag more than once, you may want to change your query to `COUNT(DISTINCT t.id)` to ensure that the number of unique tags matches. [Here is an SQL Fiddle example](http://sqlfiddle.com/#!9/243b7/2) that shows repeated tags not being returned with the current query, but will be with the distinct added.
|
you can run one query per "tag" you are looking for and then join the result:
```
select c.*
from
(
SELECT comicID
FROM tags t JOIN comicTags AS ct ON ct.tagID=t.ID
WHERE (t.tag LIKE 'tag1')
) a join
(
SELECT comicID
FROM tags t
JOIN comicTags AS ct ON ct.tagID=t.ID
WHERE (t.tag LIKE 'tag2')
)b on a.comicID = b.comicID
join comics c on c.ID = a.comicID
```
|
Get result with matching all search queries
|
[
"",
"mysql",
"sql",
""
] |
I have a SQL Server 2014 database with 2 fields:
- one of type `date` for the record date
- one of type `time` for the record time.
I want to retrieve records from the table between 2 dates/times.
Example: from 2015-01-01 at 16:00 until 2015-01-02 at 08:00.
I tried
```
SELECT...
Date BETWEEN '2015-01-01' AND '2015-01-02'
AND Time BETWEEN '16:00' AND '08:00'
```
This fails, as expected.
Can I do what I want or only if I use a single field for date and time (`datetime` type)?
|
Try adding the fields:
```
WHERE cast(date as datetime)+cast(time as datetime) between
'2015-01-01 16:00' AND '2015-01-02 8:00'
```
|
Your query will only work if you want to find records within a certain time window between certain days, e.g. during office hours between 13 and 17 July.
```
WHERE [Date] BETWEEN '2015-07-13' AND '2015-07-17'
AND CONVERT(Time, [Time]) BETWEEN '08:00' AND '16:00'
```
And since your start `Time` value is greater than your end time value, you will not get any results until you combine the date and time into one value like:
```
WHERE CONVERT(DateTime, [Date]) + CONVERT(DateTime, [Time])
BETWEEN '2015-01-01 16:00' AND '2015-01-02 08:00'
```
|
SQL query between date/time with different fields for dates and times
|
[
"",
"sql",
"sql-server",
"date",
"datetime",
"time",
""
] |
Lets say I use `coalesce()` to combine two columns into one in select and subsequently a view constructed around such select.
Tables:
```
values_int
id INTEGER(11) PRIMARY KEY
value INTEGER(11)
values_varchar
id INTEGER(11) PRIMARY KEY
value VARCHAR(255)
vals
id INTEGER(11) PRIMARY KEY
value INTEGER(11) //foreign key to both values_int and values_varchar
```
The primary keys between `values_int` and `values_varchar` are unique and that allows me to do:
```
SELECT vals.id, coalesce(values_int.value, values_varchar.value) AS value
FROM vals
JOIN values_int ON values_int.id = vals.value
JOIN values_varchar ON values_varchar.id = vals.value
```
This produces nice assembled view with ID column and combined value column that contains actual values from two other tables combined into single column.
* What type does this combined column have?
* When turned into view and then queried with a WHERE clause using this combined "value" column, how is that actually handled type-wise? I.e. `WHERE value > 10`
Some rambling thoughts (most likely wrong):
The reason I am asking this is that the alternative to this design is to have all three tables merged into one, with INT values in one column and VARCHAR in the other. That would of course produce a lot of NULL values in both columns but would save me the JOINs. For some reason I do not like that solution because it would require additional type checking to choose the right column and deal with the NULL values, but maybe the design presented here would require the same too (if the resulting column is actually VARCHAR). I would hope that it actually passes the WHERE clause down the view to the source (so that the column does NOT have a type per se) but I am likely wrong about that.
|
Your query should be explicit to be clear. In this case MySQL is using varchar.
I would write this query like this to be clear
```
coalesce(values_int.value,cast(values_varchar.value as integer), 0)
```
or
```
coalesce(cast(values_int.value as varchar(20)),values_varchar.value,'0')
```
you should put in that last value unless you want the column to be null if both columns are null.
|
Returns the data type of expression with the highest data type precedence. If all expressions are nonnullable, the result is typed as nonnullable.
So in your case the type will be `VARCHAR(255)`
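One way to check this empirically in MySQL — a sketch assuming the tables from the question, with `type_check` as a throwaway name — is to materialize the select and inspect the column type MySQL infers:

```sql
-- Materialize the coalesced column, then inspect its inferred type
CREATE TABLE type_check AS
SELECT vals.id,
       COALESCE(values_int.value, values_varchar.value) AS value
FROM vals
JOIN values_int     ON values_int.id     = vals.value
JOIN values_varchar ON values_varchar.id = vals.value;

DESCRIBE type_check;   -- the 'value' column shows the combined type
DROP TABLE type_check;
```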
|
SQL coalesce(): what type does the combined column have?
|
[
"",
"mysql",
"sql",
"coalesce",
""
] |
I have an SQL table with two text columns and I'd like find the number of words in each of those and also the number of common words.
Basically a word should be the regex `[A-Za-z]{3,}` (at least 3 chars) and for a row with the values `left="abc def ghi"` and `right="ghi abc qwe rtz"` the result should be `wordsleft=3`, `wordsright=4`, `wordscommon=2` (since they share `abc` and `ghi`).
Can I do that with SQL only?
|
So, even though SQL might not be the best option to do something like this (it's probably a lot easier using a scripting language) you can do it using common table expressions with `connect by` queries to split the strings into rows and then count the number of distinct items in each column (and the common items).
Given a table like this:
```
| ID | L | R |
|----|--------------|-----------------|
| 1 | abc def ghi | ghi abc qwe rtz |
| 2 | bc de gh | gh ab qw rt xx |
| 3 | xbc de gh yy | gh xbc |
```
You can get this result:
```
| ID | COMMON_COUNT | LEFT_COUNT | RIGHT_COUNT |
|----|--------------|------------|-------------|
| 1 | 2 | 3 | 4 |
| 2 | 1 | 3 | 5 |
| 3 | 2 | 4 | 2 |
```
Using this query (which I'm sure can be optimized and improved - Oracle isn't my thing really):
```
with
cte_left as (
SELECT DISTINCT ID, regexp_substr(l, '[^ ]+', 1, LEVEL) l
FROM t
CONNECT BY regexp_substr(l, '[^ ]+', 1, LEVEL) IS NOT NULL
ORDER BY ID
),
cte_right as (
SELECT DISTINCT ID, regexp_substr(r, '[^ ]+', 1, LEVEL) r
FROM t
CONNECT BY regexp_substr(r, '[^ ]+', 1, LEVEL) IS NOT NULL
ORDER BY ID
),
cte_all as (
select cte_left.id, cte_left.l, cte_right.r
from cte_left
join cte_right on cte_left.id = cte_right.id
)
select id, count(distinct l) as common_count,
(select count(distinct l) from cte_all where id = t.id) as left_count,
(select count(distinct r) from cte_all where id = t.id) as right_count
from cte_all t
where l in (select r from cte_all)
group by t.id;
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!4/73b13/7)
|
SQL is an excellent tool for this. Using jpw's dataset and the OP's requirement that words be 3+ characters in length:
```
with t1(id, l, r) as (
select 1, 'abc def ghi', 'ghi abc qwe rtz' from dual union all
select 2, 'bc de gh', 'gh ab qw rt xx' from dual union all
select 3, 'xbc de gh yy', 'gh xbc' from dual
), t2(id, words, word, idx, cnt) as (
select id
, l
, regexp_substr(l, '[[:alpha:]]{3,}', 1, 1)
, 1
, regexp_count(l,'[[:alpha:]]{3,}')
from t1
union all
select id, words, regexp_substr(words, '[[:alpha:]]{3,}', 1, idx+1)
, idx+1, cnt
from t2
where idx < cnt
), t3(id, words, word, idx, cnt) as (
select id
, r
, regexp_substr(r, '[[:alpha:]]{3,}', 1, 1)
, 1
, regexp_count(r,'[[:alpha:]]{3,}')
from t1
union all
select id, words, regexp_substr(words, '[[:alpha:]]{3,}', 1, idx+1)
, idx+1, cnt
from t3
where idx < cnt
)
select coalesce(t2.id,t3.id) id
, count(t2.word) left_cnt
, count(t3.word) right_cnt
, count(case when t2.word = t3.word then 1 end) common_cnt
from t2
full join t3
on t3.id = t2.id
and t3.word = t2.word
group by coalesce(t2.id,t3.id);
ID LEFT_CNT RIGHT_CNT COMMON_CNT
---------- ---------- ---------- ----------
1 3 4 2
2 0 0 0
3 1 1 1
```
|
Find number of common words in SQL table columns?
|
[
"",
"sql",
"oracle",
""
] |
```
Table1                  Table2
ID Values ID Values
1 100 1 10
2 200 2 20
3 300 3 30
4 400 4 40
null 2000 null 3000
5 500
```
o/p:-
```
ID Table1_Values Table2_Values
1 100 10
2 200 20
3 300 30
4 400 40
5 500 null
null 2000 3000
```
|
Try this:
```
select t1.id, t1.values, t2.values from
table1 t1
left outer join
table2 t2 on nvl(t1.id,0) = nvl(t2.id,0)
```
|
You can add a check to see if both values are `NULL` to the join condition:
```
SELECT t1.ID,
t1.VALUES AS Table1Values,
t2.VALUES AS Table2Values
FROM TABLE1 t1
LEFT OUTER JOIN
TABLE2 t2
ON ( t1.ID = t2.ID OR ( t1.ID IS NULL AND t2.ID IS NULL ) )
```
|
left outer join with null values
|
[
"",
"sql",
"oracle",
""
] |
I have the following table structure (and data example):
```
id code category
1 x c1
1 y c1
1 a c2
2 y c1
2 a c2
3 a c2
4 j c3
```
Given a list of pairs <(category, code)>, one for each category, I need to query the `ids` which match the pairs. The rule is: if a category is present for the `id`, its pair must be in the list for the `id` to be returned.
For example, if the input pairs are (c1, x), (c2, a), (c3, k) the `ids` to be returned are: 1 and 3.
2 must not be returned, because the `c1` category does not match the code `x`.
3 is returned because for the only category present, the code matches `a`.
I've tried using `(EXISTS(c1 and code) or NOT EXISTS(c1)) AND (EXISTS(c2 and code) or NOT EXISTS(c2)) AND...` but could not eliminate `id=2` from the results.
|
I would do it like here:
```
with input(cat, code) as (select 'c1', 'x' from dual
union all select 'c2', 'a' from dual
union all select 'c3', 'k' from dual)
select id from (
select t.id, max(decode(i.code, t.code, 1, 0)) cc from t
left join input i on i.cat = t.cat and i.code = t.code
group by t.id, t.cat)
group by id having min(cc) = 1;
```
[SQLFiddle demo](http://sqlfiddle.com/#!4/af712/1)
This way you don't have to write all these `not exists... or exists...` clauses, and the data is hit only once (important from a performance point of view).
|
Made it work with the following query:
```
select distinct t2.ID from t t2
where
( not exists (select * from t where id = t2.id and cat like 'c2')
or (exists ( select * from t where id = t2.id and cat = 'c2' and code = 'a')))
and
(not exists (select * from t where id = t2.id and cat like 'c1')
or (exists( select * from t where id = t2.id and cat = 'c1' and code = 'x')))
and
(not exists (select * from t where id = t2.id and cat like 'c3')
or (exists( select * from t where id = t2.id and cat = 'c3' and code = 'k')))
;
```
|
SQL Query which depends on the existence of other rows
|
[
"",
"sql",
"oracle",
""
] |
I have a text memo field in a SQL table where I need to remove the last character in the field if it's a comma.
So, for example, if I have these rows, I need to remove the commas from rows 2 and 4.
```
INETSHORTD
1 94
2 85,
3 94, 92
4 89, 99, 32,
```
The output would be:
```
INETSHORTD
94
85
94, 92
89, 99, 32
```
Any ideas?
|
Using `REVERSE` and `STUFF`:
```
SELECT
REVERSE(
STUFF(
REVERSE(LTRIM(RTRIM(INETSHORTD))),
1,
CASE WHEN SUBSTRING((REVERSE(LTRIM(RTRIM(INETSHORTD)))), 1, 1) = ',' THEN 1 ELSE 0 END,
''
)
)
FROM tbl
```
---
First, you want to `TRIM` your data to get rid of leading and trailing spaces. Then `REVERSE` it and check if the first character is `,`. If it is, remove it, otherwise do nothing. Then `REVERSE` it back again. You can remove the first character by using `STUFF(string, 1, 1, '')`.
[**SQL Fiddle**](http://sqlfiddle.com/#!6/76428/1/0)
|
Here's a more elegant / readable way:
```
SET @string = REPLACE(@string + '<END>', ',<END>', '')
```
---
if you can't be sure whether the last comma appears in the string, use this:
```
SET @string = REPLACE(REPLACE(@string + '<END>', ',<END>', ''), '<END>', '')
```
|
SQL : remove last comma in string
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I would like `&Alpha;` to display as Α in the browser, but remain `&Alpha;` in my source code. My reason is that I would like to store the code in a SQL table as `&Alpha;`. However, anytime I add the literal `&Alpha;` and view the source code, I see Α. Since this text might be resent back to the database (as input in a form), I lose the `&Alpha;` version.
I don't seem to have this problem with `&nbsp;`, which is the behavior I'm looking for.
Any suggestions?
|
`&Alpha;` stays the same when viewing the source code in a separate browser window. I inserted `&Alpha;` into my own MySQL table called 'test' using the command below, and it remains the same.
```
INSERT INTO test (column1, column2) VALUES ('&Alpha;', '&nbsp;');
```
Perhaps you were using an element inspector to view the source code? That is when `&Alpha;` will appear as Α. This happens to other special character codes such as `&copy;` (©) and `&#9658;` (►).
If this is the case, then consider right clicking and selecting View Page Source or a similar option instead.
|
You need to encode and decode your HTML while storing it to the database. Here are a few links to help you.
<http://www.strictly-software.com/htmlencode>
<https://msdn.microsoft.com/en-us/library/windows/apps/hh464984.aspx>
|
&Alpha; vs Α displaying in browser and source
|
[
"",
"html",
"sql",
"ascii",
"coldfusion-10",
""
] |
```
create table test (id int, changed timestamp);
insert into test (id, changed) values (1, now());
insert into test (id, changed) values (2, now());
insert into test (id, changed) values (3, now());
select * from test;
+----+---------------------+
| id | changed |
+----+---------------------+
| 1 | 2015-07-14 14:58:00 | <-- correct
| 2 | 2015-07-14 14:58:02 |
| 3 | 2015-07-14 14:58:04 |
+----+---------------------+
update test set id = 5 where id = 1; -- nothing with 'changed' specified!
select * from test;
+----+---------------------+
| id | changed |
+----+---------------------+
| 5 | 2015-07-14 15:00:00 | <-- wrong! why?
| 2 | 2015-07-14 14:58:02 |
| 3 | 2015-07-14 14:58:04 |
+----+---------------------+
```
See the `changed` column: I wanted to set `id`, not `changed`. This is the first time a database has done this to me.
**Why does MySQL do that?**
**How can I stop that behaviour?**
|
It's because you used the `timestamp` data type for the 'changed' column, which updates every time the row is updated.
If you wanted this to work as expected I would change the 'changed' data type to `datetime` instead.
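A minimal sketch of the difference (the `test2` table name is illustrative):

```sql
-- DATETIME has no automatic on-update behaviour, unlike TIMESTAMP
CREATE TABLE test2 (id int, changed datetime);

INSERT INTO test2 (id, changed) VALUES (1, NOW());
UPDATE test2 SET id = 5 WHERE id = 1;

SELECT * FROM test2;  -- 'changed' still holds the original insert time
```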
|
By default a `TIMESTAMP` column will be set to `CURRENT_TIMESTAMP` and this will also be used whenever you updating the row.
When you create you table using:
```
CREATE TABLE test (
id int,
changed TIMESTAMP
);
```
It will be the same as this:
```
CREATE TABLE test (
id int,
changed TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
```
With a constant, the default is the given value. In this case, the column has no automatic properties at all:
```
CREATE TABLE test (
id int,
  changed TIMESTAMP DEFAULT 0
);
```
|
MySQL changes 'changed' column on update by itself
|
[
"",
"mysql",
"sql",
""
] |
Just curious... is there a command in TSQL which will delete rows I just selected from a table?
In pseudo SQL, something like
```
SELECT * FROM TableA THEN DELETE
```
Or
```
SELECT * FROM TableA WITH (DELETEROWS)
```
NOTE: I'm not just looking to delete rows, I want to return the rows to a temptable and *then* delete them, in the same statement
|
You can get the rows that were *just* deleted using the [output clause](https://msdn.microsoft.com/en-us/library/ms177564.aspx):
```
DELETE FROM TableA
OUTPUT deleted.* --TODO name columns, identify table to insert into, etc
```
So, overall should be the same, just the order of operations is reversed.
|
You can use an `OUTPUT` statement in the `DELETE`. This will allow you to capture all the details of the deleted rows.
If you do the `SELECT` first, you would need to capture those rows into a table, and then delete the rows in the table.
If you repeat the logic for the `SELECT`, you *could* run into problems, because new rows may have been inserted/modified/deleted between the `SELECT` and the `DELETE`, so the two might not be consistent.
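A hedged sketch of that capture-then-inspect pattern — the columns on `TableA` and the `#removed` temp table are illustrative, not from the question:

```sql
-- Capture the deleted rows into a temp table in the same statement
CREATE TABLE #removed (id int, name varchar(50));

DELETE FROM TableA
OUTPUT deleted.id, deleted.name INTO #removed (id, name)
WHERE name = 'obsolete';

SELECT * FROM #removed;  -- the rows that were just deleted
```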
|
TSQL Select Delete
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-delete",
""
] |
I am trying to select from a transactions table using an IN clause determined by the value of a varchar parameter "@StatusCode". If @StatusCode is 'All Fail' then I want all records with status codes 1, 2, 3, or 5. Otherwise I want records with status code equal to the numeric value of @StatusCode. I have not found an elegant way to do this other than wrapping the entire select statement in an if condition.
I tried:
```
SELECT * FROM Transactions
WHERE
(@StatusCode = 'All Failed' AND StatusCode IN (1,2,3,5))
OR
(IsNumeric(@StatusCode) = 1 AND StatusCode = @StatusCode)
```
this compiles but throws a conversion error when I pass 'All Failed' since the conditions aren't evaluated lazily.
So I tried with a CASE:
```
SELECT * FROM Transactions
WHERE
CASE @StatusCode
WHEN 'All Failed' THEN StatusCode IN (1,2,3,5)
ELSE StatusCode = @StatusCode
END
```
but this doesn't compile and gives a syntax error at the 'IN'.
Is there a good way to do this? Or am I stuck with
```
IF @StatusCode = 'All Failed'
BEGIN
SELECT * FROM Transactions
WHERE StatusCode IN (1,2,3,5)
END
ELSE
BEGIN
SELECT * FROM Transactions
WHERE StatusCode = @StatusCode
END
```
|
You can use a COALESCE trick if your incoming variable is null to match on all items -- like this
```
WHERE COALESCE(@nullinputvar,MatchingField) = MatchingField
```
but that does not work here because you only want to match on 4 items. I've modified this trick to work (by first creating a local variable that is null in the special case), and it might be faster than some of the other solutions where you have to do a convert or cast for every row. This of course depends on how big your table is -- at some size my solution will be faster, and a lot faster if the StatusCode field is indexed.
```
DECLARE @numCode integer;
IF @StatusCode = 'All Fail'
BEGIN
SET @numCode = null;
END
ELSE
BEGIN
SET @numCode = CAST(@StatusCode as integer)
END
-- now the magic:
SELECT * FROM Transactions
WHERE COALESCE(@numCode,1) = StatusCode
OR COALESCE(@numCode,2) = StatusCode
OR COALESCE(@numCode,3) = StatusCode
OR COALESCE(@numCode,5) = StatusCode
```
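As a quick sanity check (the temp table and data below are made up purely for illustration), the trick behaves like this:

```
-- Hypothetical demo data
CREATE TABLE #Transactions (StatusCode int);
INSERT INTO #Transactions VALUES (1), (2), (3), (4), (5);

DECLARE @numCode int = NULL;  -- i.e. @StatusCode was 'All Fail'

SELECT * FROM #Transactions
WHERE COALESCE(@numCode,1) = StatusCode
   OR COALESCE(@numCode,2) = StatusCode
   OR COALESCE(@numCode,3) = StatusCode
   OR COALESCE(@numCode,5) = StatusCode;
```

With `@numCode` null, each `COALESCE` falls back to one of the four literals, so rows 1, 2, 3 and 5 match; with `@numCode` set (say, to 4), every `COALESCE` returns 4 and only that status matches.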
|
Try this query:
```
SELECT * FROM Transactions
WHERE
(@StatusCode = 'All Failed' AND StatusCode IN (1,2,3,5))
OR
(IsNumeric(@StatusCode) = 1 AND StatusCode =(case when @StatusCode = 'All Failed' then StatusCode else convert(int, @StatusCode) end))
```
|
SQL Server - Conditional IN clause
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to count the number of transactions for the first 30 days from an account's creation for all accounts. The issue is not all accounts were created at the same time.
```
Example: [Acct_createdTable]
Acct Created_date
909099 01/02/2015
878787 02/03/2003
676767 09/03/2013
```
I can't Declare a datetime variable since it can only take one datetime.
and I can't do :
```
Select acctnumber,min,count(*)
from transaction_table
where transactiondate between (
select Created_date from Acct_createdTable where Acct = 909099)
and (
select Created_date from Acct_createdTable where Acct = 909099)+30
```
Since then it'll only count the number of transaction for only one acct.
What I want for my output is.
```
Acct First_30_days_count
909099 23
878787 190
676767 23
```
|
I think what you're looking for is a basic `GROUP BY` query.
```
SELECT
    ac.Acct,
    COUNT(td.acctnumber) AS First_30_days_count
FROM Acct_createdTable ac
LEFT JOIN transaction_table td ON
    td.acctnumber = ac.Acct
    AND
    td.transactiondate BETWEEN ac.Created_date AND DATEADD(DAY, 30, ac.Created_date)
GROUP BY
    ac.Acct
```
This should return the number of transactions within the first 30 days for each account. This is of course pseudocode, as you didn't state your database platform (`DATEADD` here is SQL Server syntax, which takes the datepart first: `DATEADD(DAY, 30, date)`). The left join ensures that accounts with no transactions in that period still get displayed, with a count of 0.
|
An alternative solution would be to use `outer apply` like this:
```
select a.acct, o.First_30_days_count
from acct_createdtable a
outer apply (
select count(*) First_30_days_count
from transaction_table
where acctnumber = a.acct
and transactiondate between a.created_date and dateadd(day, 30, a.created_date)
) o;
```
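The same shape can also be written as a correlated scalar subquery in the select list (a sketch using the question's table and column names):

```
select a.acct,
       (select count(*)
        from transaction_table t
        where t.acctnumber = a.acct
          and t.transactiondate between a.created_date
                                    and dateadd(day, 30, a.created_date)
       ) as First_30_days_count
from acct_createdtable a;
```

Because a `count(*)` over an empty set still returns a single row with 0, accounts with no transactions are kept in the result, just as with `outer apply`.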
|
Count number of transactions for first 30 days of account creation for all accounts
|
[
"",
"sql",
"sql-server",
""
] |