I have a problem with a MySQL query over a large dataset. Optimized with a join, the query returns one week of data in 122 seconds; one month of data takes 526 seconds.
I want to reduce the processing time so the query can handle up to a year of data. Is there any way to optimize the query, or MySQL settings in general?
Table details:
The query joins two tables, mdiaries and tv\_diaries. In both tables I have indexed the relevant columns. mdiaries has 2661331 rows and tv\_diaries has 27074645 rows.
mdiaries table:
```
INDEX area (area),
INDEX date (date),
INDEX district (district),
INDEX gaDivision (gaDivision),
INDEX member_id (member_id),
INDEX tv_channel_id (tv_channel_id),
```
tv\_diaries.
```
INDEX area (area),
INDEX date (date),
INDEX district (district),
INDEX member_id (member_id),
INDEX timeslot_id (timeslot_id),
INDEX tv_channel_id (tv_channel_id),
```
This is my query which takes 122 seconds to execute.
```
$sql = "SELECT COUNT(TvDiary.id) AS m_count,TvDiary.date,TvDiary.timeslot_id,TvDiary.tv_channel_id,TvDiary.district,TvDiary.area
FROM `mdiaries` AS Mdiary INNER JOIN `tv_diaries` AS TvDiary ON Mdiary.member_id = TvDiary.member_id
WHERE Mdiary.date >= '2014-01-01' AND Mdiary.date <= '2014-01-07'
AND TvDiary.date >= '2014-01-01' AND TvDiary.date <= '2014-01-07'
GROUP BY TvDiary.date,
TvDiary.timeslot_id,
TvDiary.tv_channel_id,
TvDiary.district,
TvDiary.area";
```
This is my.cnf file.
```
[mysqld]
## General
datadir = /var/lib/mysql
tmpdir = /var/lib/mysqltmp
socket = /var/lib/mysql/mysql.sock
skip-name-resolve
sql-mode = NO_ENGINE_SUBSTITUTION
#event-scheduler = 1
## Networking
back-log = 100
#max-connections = 200
max-connect-errors = 10000
max-allowed-packet = 32M
interactive-timeout = 3600
wait-timeout = 600
### Storage Engines
#default-storage-engine = InnoDB
innodb = FORCE
## MyISAM
key-buffer-size = 64M
myisam-sort-buffer-size = 128M
## InnoDB
innodb-buffer-pool-size = 16G
innodb_buffer_pool_instances = 16
#innodb-log-file-size = 100M
#innodb-log-buffer-size = 8M
#innodb-file-per-table = 1
#innodb-open-files = 300
## Replication
server-id = 1
#log-bin = /var/log/mysql/bin-log
#relay-log = /var/log/mysql/relay-log
relay-log-space-limit = 16G
expire-logs-days = 7
#read-only = 1
#sync-binlog = 1
#log-slave-updates = 1
#binlog-format = STATEMENT
#auto-increment-offset = 1
#auto-increment-increment = 2
## Logging
log-output = FILE
slow-query-log = 1
slow-query-log-file = /var/log/mysql/slow-log
#log-slow-slave-statements
long-query-time = 2
##
query_cache_size = 512M
query_cache_type = 1
query_cache_limit = 2M
join_buffer_size = 512M
thread_cache_size = 128
[mysqld_safe]
log-error = /var/log/mysqld.log
open-files-limit = 65535
[mysql]
no-auto-rehash
```
|
This is your query:
```
SELECT COUNT(t.id) AS m_count, t.date, t.timeslot_id, t.tv_channel_id,
t.district, t.area
FROM `mdiaries` m INNER JOIN
`tv_diaries` t
ON m.member_id = t.member_id
WHERE m.date >= '2014-01-01' AND m.date <= '2014-01-07' AND
t.date >= '2014-01-01' AND t.date <= '2014-01-07'
GROUP BY t.date, t.timeslot_id, t.tv_channel_id, t.district, t.area;
```
I would start with composite indexes: `tv_diaries(date, member_id)` and `mdiaries(member_id, date)`.
This query is problematic, but these might help.
|
Try adding a multiple-column index on all columns referenced in the `GROUP BY` clause, as mentioned [in the documentation](http://dev.mysql.com/doc/refman/5.0/en/group-by-optimization.html).
```
INDEX grp (date, timeslot_id, tv_channel_id, district, area)
```
|
MySQL Query Optimization for JOIN Large Tables
|
[
"",
"mysql",
"sql",
"innodb",
""
] |
I have a table with ID, SUB\_ID and Value columns:
```
ID SUB_ID Value
100 1 100
100 2 150
101 1 100
101 2 150
101 3 200
102 1 100
```
SUB\_ID can vary from 1 to some maximum value (in this example it is 3). I need the sum of values for each SUB\_ID. If an ID's highest SUB\_ID is less than the maximum, the value at its MAX(SUB\_ID) should be used for the missing rows, as shown below (in this example, for ID=100 at SUB\_ID 3 it should take 150, i.e. 2 < 3 so value = 150).
```
SUB_ID SUM(values) Remarks
1 300 (100+100+100)
2 400 (150+150+100)
3 450 (150+200+100)
```
This can easily be done in PL/SQL. Can plain SQL do the same, using the MODEL clause or any other option?
|
[SQL Fiddle](http://sqlfiddle.com/#!4/af87b/3)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE TableA ( ID, SUB_ID, Value ) AS
SELECT 100, 1, 100 FROM DUAL
UNION ALL SELECT 100, 2, 150 FROM DUAL
UNION ALL SELECT 101, 1, 100 FROM DUAL
UNION ALL SELECT 101, 2, 150 FROM DUAL
UNION ALL SELECT 101, 3, 200 FROM DUAL
UNION ALL SELECT 102, 1, 100 FROM DUAL
```
**Query 1**:
```
WITH sub_ids AS (
SELECT LEVEL AS sub_id
FROM DUAL
CONNECT BY LEVEL <= ( SELECT MAX( SUB_ID ) FROM TableA )
),
max_values AS (
SELECT ID,
MAX( VALUE ) AS max_value
FROM TableA
GROUP BY ID
)
SELECT s.SUB_ID,
SUM( COALESCE( a.VALUE, m.max_value ) ) AS total_value
FROM sub_ids s
CROSS JOIN
max_values m
LEFT OUTER JOIN
TableA a
ON ( s.SUB_ID = a.SUB_ID AND m.ID = a.ID )
GROUP BY
s.SUB_ID
```
**[Results](http://sqlfiddle.com/#!4/af87b/3/0)**:
```
| SUB_ID | TOTAL_VALUE |
|--------|-------------|
| 1 | 300 |
| 2 | 400 |
| 3 | 450 |
```
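For engines without `CONNECT BY`, the same fill-to-max approach can be sketched with a recursive CTE. Below is a minimal Python/SQLite reproduction of the answer's logic; note it assumes, as the answer does, that the value at `MAX(SUB_ID)` for an ID is also `MAX(Value)` for that ID.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TableA (ID INT, SUB_ID INT, Value INT);
    INSERT INTO TableA VALUES
      (100,1,100),(100,2,150),
      (101,1,100),(101,2,150),(101,3,200),
      (102,1,100);
""")
rows = con.execute("""
    WITH RECURSIVE sub_ids(sub_id) AS (
        SELECT 1
        UNION ALL
        SELECT sub_id + 1 FROM sub_ids
        WHERE sub_id < (SELECT MAX(SUB_ID) FROM TableA)
    ),
    max_values AS (
        SELECT ID, MAX(Value) AS max_value FROM TableA GROUP BY ID
    )
    SELECT s.sub_id, SUM(COALESCE(a.Value, m.max_value)) AS total_value
    FROM sub_ids s
    CROSS JOIN max_values m
    LEFT JOIN TableA a ON a.SUB_ID = s.sub_id AND a.ID = m.ID
    GROUP BY s.sub_id
    ORDER BY s.sub_id
""").fetchall()
print(rows)  # [(1, 300), (2, 400), (3, 450)]
```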
|
Try this
```
SELECT SUB_ID,SUM(values),
(SELECT DISTINCT SUBSTRING(
(
SELECT '+'+ CAST(values AS VARCHAR)
FROM table_Name AS T2
WHERE T2.SUB_ID = d.SUB_ID
FOR XML PATH ('')
),2,100000)[values]) as values
FROm table_Name d
GROUP BY SUB_ID
```
|
Oracle sql group sum
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I am running a SQL query where I am trying to get both the DOB and Age field as blank (' ') as opposed to `NULL`.
I have managed to use the `ISNULL` function to change the DOB from `1900-01-01` to ' '. Originally my DOB was
```
SELECT isnull(DOB,'') DOB
```
which was bring back `1900-01-01` instead of `NULL`
With the code below the Age field is appearing as '0' rather then ' '. I'm not sure how to use the `ISNULL` function, as the Age field does not exist in the db table.
```
SELECT ISNULL(CASE WHEN CONVERT(DATE, DOB) = '1900-01-01' THEN ''
ELSE CONVERT(CHAR(10), DOB, 103) END, '') AS DOB,
ISNULL (DATEDIFF(hour,dob,GETDATE())/8766,'')Age,
```
Any help will be much appreciated, thanks :)
Some dummy data of the table:
```
ID | Name |Address |DOB | Gender | Email |
---------------------------------------------------------
01 | Max |Abc Road| 2000-12-19 | Male |Max@mail.net |
02 | Sam |TBH Road| null | Male |Sam@mail.net |
```
This is what im getting with my query
```
ID | Name |Address |DOB | Age | Email |
---------------------------------------------------------
01 | Max |Abc Road|2000-12-19 | 15 |Max@Gmail.net |
02 | Sam |TBH Road| | 0 |Sam@Gmail.net |
```
What I want to get however is:
```
ID | Name |Address |DOB | Age | Email |
---------------------------------------------------------
01 | Max |Abc Road|2000-12-19 | 15 |Max@Gmail.net |
02 | Sam |TBH Road| | |Sam@Gmail.net |
```
|
This is your age:
```
DATEDIFF(hour,dob,GETDATE())/8766
```
It is a number. To make it a string, do this:
```
cast(DATEDIFF(hour,dob,GETDATE())/8766 as varchar (15))
```
Then you can use isnull because your datatypes are the same
```
isnull(cast(DATEDIFF(hour,dob,GETDATE())/8766 as varchar (15)), '')
```
|
You could use a simple CASE statement.
```
case
when dob = ''
then ''
else datediff(hour, dob, getdate())/8766
end as [Age]
```
There's probably a more elegant way to do it though.
|
<SQL Server>DOB and Age field as blank using SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with an int column representing student scores. I want to select the 2nd highest score if it exists, and return NULL if not. How can I implement this in SQL?
I am using MySQL/SQL Workbench.
|
This query returns the second highest value if present, or else NULL:
```
SELECT MAX(score)
FROM table_name
WHERE score<>(
SELECT MAX(score)
FROM table_name);
```
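A quick way to check both branches of that behaviour, value present versus absent, is a small Python/SQLite session (the table and column names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (score INT)")
con.executemany("INSERT INTO scores VALUES (?)", [(90,), (90,), (75,)])

second = "SELECT MAX(score) FROM scores WHERE score <> (SELECT MAX(score) FROM scores)"
with_runner_up = con.execute(second).fetchone()[0]
print(with_runner_up)  # 75

con.execute("DELETE FROM scores WHERE score = 75")
without_runner_up = con.execute(second).fetchone()[0]
print(without_runner_up)  # None: all remaining rows share the top score
```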
|
Try executing the following SQL query to retrieve the 2nd highest score:
```
SELECT score from table order by score desc limit 1,1
```
The above query will return the 2nd highest score if it exists. Note, however, that when there is no second row it returns an empty result set rather than **NULL**.
|
how to check if there exists 2nd highest score in SQL?
|
[
"",
"mysql",
"sql",
""
] |
I'd like to perform a SQL SELECT query which will go across two tables. I understand I'll need to use a join, but I'm not entirely sure how this would work (apologies - quite new to SQL).
```
SELECT COUNT(RSO_ParentID) AS 'Calls in Queue'
FROM Task, Incident
WHERE Task.OwnerTeam = '2nd Line Support'
AND (Task.Status='Accepted' OR Task.Status='Waiting')
AND (Incident.Status='Waiting');
```
But this returns 6,749,340 results when there should only be about 150, so I've clearly gone wrong somewhere - but can't figure out where.
Any help would be hugely appreciated. If there's any additional information that would be useful, let me know and I'll gladly provide you with further details.
Thanks in advance!
|
There are many different ways to solve a problem in SQL, and a number of ways to filter and reduce your dataset to what you want to see. Let's look at your original SQL statement:
```
SELECT COUNT(RSO_ParentID) AS 'Calls in Queue'
FROM Task, Incident
WHERE Task.OwnerTeam = '2nd Line Support'
AND (Task.Status='Accepted' OR Task.Status='Waiting')
AND (Incident.Status='Waiting');
```
Selecting from two tables without a join condition is not recommended. Remember, SQL runs on a relational database management system; what you need to know is whether there is a relationship between `Tasks` and `Incidents`.
To combine a subset of two tables you have to know their relationship. Since I don't know your full schema, here is an illustrative example; you will have to apply it to your exact scenario.
For example, say there is a TaskID in the Incident table, so you know a task was an incident. You would do something like this:
```
SELECT COUNT(RSO_ParentID) AS 'Calls in Queue'
FROM Task t
JOIN Incident i
ON t.TaskID = i.TaskID
WHERE t.OwnerTeam = '2nd Line Support'
AND (t.Status='Accepted' OR t.Status='Waiting')
AND (i.Status='Waiting');
```
That will give you only tasks that were incidents. Probably the 150 you were looking for.
**EDIT:**
Another note on JOINs: there are different types (`RIGHT`, `LEFT`, `INNER`, `OUTER`). The most common is an `INNER JOIN`, which can also be written simply as `JOIN`.
|
```
SELECT COUNT(RSO_ParentID) AS 'Calls in Queue'
FROM Task JOIN Incident
ON --task.somecolumn = incident.somecolumn
WHERE Task.OwnerTeam='2nd Line Support'
AND (Task.Status='Accepted' OR Task.Status='Waiting')
AND Incident.Status='Waiting'
```
You don't have a `join` condition in your query, which means it produces a Cartesian product (every row of one table paired with every row of the other). Include the join condition to make it work.
|
SQL Select Query - Two Tables
|
[
"",
"sql",
""
] |
I've encountered a bit of a mental roadblock regarding the way a specific integer field is storing data.
Specifically, there is a column with integers that range from 1 to 127; each integer represents a combination of different days of the week. For example: Monday = 2^0 or 1, Tuesday = 2^1 or 2, Wednesday = 2^2 or 4; combinations are stored as sums, e.g. Monday + Tuesday = 3.
I've been able to extract the day values partially using the example found [here](https://stackoverflow.com/questions/4804294/enumerate-days-of-week-in-t-sql). However, that particular example does not work when two days are added together (e.g. Monday + Tuesday = 3). Can anyone point me in the right direction?
FYI, I am using SQL Server 2008 R2. My apologies if this has been posted before, I took a look but was unable to find any other postings.
|
What you're dealing with is referred to as bitwise operators.
Here's a [good read](https://www.mssqltips.com/sqlservertip/1218/sql-server-bitwise-operators-store-multiple-values-in-one-column/) on it with clear simple examples.
For the sake of completeness, here is what you're looking at broken down into columns for each day of the week.
```
DECLARE @bitwise TABLE (someValue TINYINT)
INSERT INTO @bitwise (someValue)
SELECT 1 UNION
SELECT 5 UNION
SELECT 127
SELECT someValue, CASE WHEN (1&someValue)=1 THEN 'SUNDAY' END
, CASE WHEN (2&someValue)=2 THEN 'MONDAY' END
, CASE WHEN (4&someValue)=4 THEN 'TUESDAY' END
, CASE WHEN (8&someValue)=8 THEN 'WEDNESDAY' END
, CASE WHEN (16&someValue)=16 THEN 'THURSDAY' END
, CASE WHEN (32&someValue)=32 THEN 'FRIDAY' END
, CASE WHEN (64&someValue)=64 THEN 'SATURDAY' END
FROM @bitwise
```
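The same bit tests translate directly outside SQL. Here is a minimal Python sketch of the decode, using the Sunday = 1 through Saturday = 64 mapping from the T-SQL above:

```python
DAYS = ["SUNDAY", "MONDAY", "TUESDAY", "WEDNESDAY",
        "THURSDAY", "FRIDAY", "SATURDAY"]

def decode_days(mask):
    # Bit 2**i set in the stored value means DAYS[i] is included.
    return [day for i, day in enumerate(DAYS) if mask & (1 << i)]

print(decode_days(5))    # ['SUNDAY', 'TUESDAY']
print(decode_days(127))  # all seven days
```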
|
It seems like you could just grab the bit you need and store the result in its own field for each day of the week.
```
SELECT
cast(day_of_week & 1 as bit) AS 'Monday',
cast(day_of_week & 2 as bit) AS 'Tuesday',
cast(day_of_week & 4 as bit) AS 'Wednesday',
cast(day_of_week & 8 as bit) AS 'Thursday',
etc...
```
|
Multiple days of week stored in one field
|
[
"",
"sql",
"sql-server-2008",
"date",
""
] |
I have a table with a list of dates. They are not ordered in the database, but they are ordered in the output of my SQL SELECT query.
There are two intervals I want to calculate. The first is not a problem: calculating the interval between the dates in the same row.
```
Date 1 Date 2 (Interval 1)
1950-01-01 1960-01-01 (10.00)
1951-07-01 1962-01-01 (10.50)
1952-04-01 1964-07-01 (11.25)
1953-07-01 1968-10-01 (15.25)
1958-01-01 1970-01-01 (12.00)
```
However, I also want to calculate the difference between Date 1 of a row, and Date 2 of the row above it (in the SQL SELECT output). Essentially I want the Date 2 column to be copied and shifted down one row (or something to that effect) so that I can calculate Interval 2.
```
Date 1 Date 2 Interval 1 Date 2_shift Interval 2
1950-01-01 1960-01-01 10.00
1951-07-01 1962-01-01 10.50 1960-01-01 8.50
1952-04-01 1964-07-01 11.25 1962-01-01 9.75
1953-07-01 1968-10-01 15.25 1964-07-01 11.00
1958-01-01 1970-01-01 12.00 1968-10-01 10.75
```
|
Like @Mihai mentioned, you would need to use a `row_number` field and do a left join to get the previous date values:
```
SET @row_number:= 0;
SET @row_number1:= 0;
select q1.*
,q2.Date2 as PreviousDate
from
(SELECT @row_number:= @row_number + 1 AS row_number
,dt.*
FROM datetable dt
) q1
left join (SELECT @row_number1:= @row_number1 + 1 AS row_number
,dt.*
FROM datetable dt
) q2 on q1.row_number-1 = q2.row_number
```
[`SQL Fiddle Demo`](http://www.sqlfiddle.com/#!9/1e048/1/2)
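On engines with window functions (MySQL 8.0+, SQL Server 2012+), `LAG()` replaces the variable trick entirely. A minimal Python/SQLite sketch with illustrative column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE datetable (date1 TEXT, date2 TEXT)")
con.executemany("INSERT INTO datetable VALUES (?, ?)", [
    ("1950-01-01", "1960-01-01"),
    ("1951-07-01", "1962-01-01"),
    ("1952-04-01", "1964-07-01"),
])
# LAG(date2) pulls the previous row's date2 into the current row.
rows = con.execute("""
    SELECT date1, date2,
           LAG(date2) OVER (ORDER BY date1) AS date2_shift
    FROM datetable
""").fetchall()
for r in rows:
    print(r)
```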
|
What you need to do is add a row number column in your query, then left join it with itself on firstinstance.rownumber = secondinstance.rownumber + 1.
Then you can do the calculations you need.
Hope this helps!
|
Copying and shifting column in SQL query
|
[
"",
"mysql",
"sql",
""
] |
I added a suffix to allow merging of data in SQL, using the following query:
```
Update [databasename].[dbo].[customers] set [relnum] = RTRIM(relnum) + '-9999' GO
```
It has been run more than once. I need to know how to remove the suffix completely, and also how to leave only one instance of `-9999` in the **relnum** column:
```
test-9999-9999-9999
0000109-9999-9999-9999
62077-9999-9999-9999 51387-9999-9999-9999
```
Can anyone give me ideas on how to remove it completely, and also on how to leave only one set of `-9999`?
|
This covers all possible scenarios. Using `CHARINDEX()` and `LEFT()` functions:
[Fiddle sample](http://sqlfiddle.com/#!3/81e5d/1)
```
UPDATE Customers SET relnum =
CASE WHEN CHARINDEX('-9999', relnum, 1) > 0 THEN
LEFT(relnum, CHARINDEX('-9999', relnum, 1)-1)
ELSE relnum
END + '-9999'
```
Data before update
```
| relnum |
|---------------|
| abc-9999 |
| xyz-9999-9999 |
| pqr |
```
Data after update
```
| relnum |
|---------------|
| abc-9999 |
| xyz-9999 |
| pqr-9999 |
```
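The CHARINDEX/LEFT logic above, truncate at the first occurrence of the suffix and then append it exactly once, is easy to sanity-check outside the database. A small Python sketch (the function name is illustrative):

```python
def normalize(relnum, suffix="-9999"):
    # Keep everything before the first occurrence of the suffix,
    # then append the suffix exactly once.
    i = relnum.find(suffix)
    return (relnum[:i] if i >= 0 else relnum) + suffix

print(normalize("test-9999-9999-9999"))  # test-9999
print(normalize("pqr"))                  # pqr-9999
```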
|
Maybe:
```
Update [databasename].[dbo].[customers] set [relnum] = replace(relnum,'-9999-9999','-9999')
```
Run it a few times and you'll be fine.
Hope it helps!
|
How to remove suffix id added on sql server that was previously added SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have data as per the table below. I pass in a list of numbers and need the `raceId` where all the numbers appear in the data column for that race.
```
+-----+--------+------+
| Id | raceId | data |
+-----+--------+------+
| 14 | 1 | 1 |
| 12 | 1 | 2 |
| 13 | 1 | 3 |
| 16 | 1 | 8 |
| 47 | 2 | 1 |
| 43 | 2 | 2 |
| 46 | 2 | 6 |
| 40 | 2 | 7 |
| 42 | 2 | 8 |
| 68 | 3 | 3 |
| 69 | 3 | 6 |
| 65 | 3 | 7 |
| 90 | 4 | 1 |
| 89 | 4 | 2 |
| 95 | 4 | 6 |
| 92 | 4 | 7 |
| 93 | 4 | 8 |
| 114 | 5 | 1 |
| 116 | 5 | 2 |
| 117 | 5 | 3 |
| 118 | 5 | 8 |
| 138 | 6 | 2 |
| 139 | 6 | 6 |
| 140 | 6 | 7 |
| 137 | 6 | 8 |
+-----+--------+------+
```
For example, if I pass in `1,2,7` I would get the following raceIds:
```
2 and 4
```
I have tried the simple statement
```
SELECT * FROM table WHERE ((data = 1) or (data = 2) or (data = 7))
```
but I don't really understand the `GROUP BY` clause, or indeed whether it is the correct way of doing this.
|
```
select raceId
from yourtable
where data in (1,2,7)
group by raceId
having count(raceId) = 3 /* length(1,2,7) */
```
This assumes the (raceId, data) pair is unique. If it's not, you should use:
```
select raceId
from (select distinct raceId, data
      from yourtable
      where data in (1,2,7)) t
group by raceId
having count(raceId) = 3
```
|
This is an example of a "set-within-sets" query. I like to solve these with `group by` and `having`.
```
select raceid
from races
where data in (1, 2, 7)
group by raceid
having count(*) = 3;
```
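A runnable check of the set-within-sets pattern, here in Python/SQLite with a subset of the sample rows. `COUNT(DISTINCT data)` is used so the query also survives duplicate (raceId, data) pairs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE races (id INT, raceId INT, data INT)")
con.executemany("INSERT INTO races VALUES (?, ?, ?)", [
    (14, 1, 1), (12, 1, 2), (13, 1, 3),   # race 1: no 7
    (47, 2, 1), (43, 2, 2), (40, 2, 7),   # race 2: has 1, 2, 7
    (90, 4, 1), (89, 4, 2), (92, 4, 7),   # race 4: has 1, 2, 7
    (138, 6, 2), (140, 6, 7),             # race 6: no 1
])
wanted = (1, 2, 7)
placeholders = ",".join("?" * len(wanted))
rows = con.execute(f"""
    SELECT raceId
    FROM races
    WHERE data IN ({placeholders})
    GROUP BY raceId
    HAVING COUNT(DISTINCT data) = ?
    ORDER BY raceId
""", (*wanted, len(wanted))).fetchall()
matched = [r[0] for r in rows]
print(matched)  # [2, 4]
```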
|
SQL Select and Group By clause
|
[
"",
"sql",
"sql-server",
""
] |
I have a set of duplicate records in a SQL db like so:
```
Id | Prop1 | Prop2
--------------------------
1 | aaa | aaa
2 | aaa | aaa
3 | bbb | bbb
4 | bbb | bbb
5 | ccc | ccc
6 | ccc | ccc
```
I need to select each duplicate (every second row) in order to update it. I would like a select statement returning the ids
```
2, 4, 6
```
Is it possible?
|
One way would be to use `group by` and `having`:
```
select max(id) id
from table
group by Prop1, Prop2
having count(*) > 1
```
This would return the highest id for each duplicate group which with your sample data would be 2,4,6. This might, or might not be what you want - the question is a bit ambiguous.
Note that this might not be the correct solution if there can be more than one duplicate row.
|
Use `ROW_NUMBER`
**[SqlFiddleDemo](http://sqlfiddle.com/#!6/ca4423/1/0)**
```
WITH cte AS
(
SELECT
Id
,[rn] = ROW_NUMBER() OVER(PARTITION BY Prop1, Prop2 ORDER BY Id)
FROM your_tab
)
SELECT Id
FROM cte
WHERE [rn] <> 1
```
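The CTE above can be reproduced on the sample data with any engine that supports window functions; here is a minimal Python/SQLite version:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, prop1 TEXT, prop2 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "aaa", "aaa"), (2, "aaa", "aaa"), (3, "bbb", "bbb"),
    (4, "bbb", "bbb"), (5, "ccc", "ccc"), (6, "ccc", "ccc"),
])
# Number rows within each (prop1, prop2) group; anything past rn = 1 is a duplicate.
rows = con.execute("""
    WITH cte AS (
        SELECT id,
               ROW_NUMBER() OVER (PARTITION BY prop1, prop2 ORDER BY id) AS rn
        FROM t
    )
    SELECT id FROM cte WHERE rn <> 1 ORDER BY id
""").fetchall()
dup_ids = [r[0] for r in rows]
print(dup_ids)  # [2, 4, 6]
```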
|
Select duplicates in SQL
|
[
"",
"sql",
"database",
"t-sql",
""
] |
I want the table output as XML. My table result is:
[](https://i.stack.imgur.com/UUsJV.png)
I need the result like:
[](https://i.stack.imgur.com/LtbMo.png)
Query
```
Declare @colorTable table (Category varchar(100),Attribute varchar(100))
insert into @colorTable values ('Color','Red')
insert into @colorTable values ('Color','Blue')
insert into @colorTable values ('Color','Green')
insert into @colorTable values ('Transport','Bus')
insert into @colorTable values ('Transport','Car')
insert into @colorTable values ('Transport','Twoweeler')
select * from @colorTable
FOR XML PAth(''), ROOT ('xml'), ELEMENTS;
```
Thanks,
S.Sundar
|
We have to write a query using `GROUP BY`; a plain `SELECT *` is not allowed directly inside `FOR XML PATH`, so each level gets its own grouped subquery:
```
Declare @colorTable table (Category varchar(100),Attribute varchar(100))
insert into @colorTable values ('Color','Red')
insert into @colorTable values ('Color','Blue')
insert into @colorTable values ('Color','Green')
insert into @colorTable values ('Transport','Bus')
insert into @colorTable values ('Transport','Car')
insert into @colorTable values ('Transport','Twoweeler')
select T1.Category as '@Value',
(
select T2.Attribute as '@Value'
from @colorTable as T2
where T2.Category = T1.Category
group by T2.Attribute
for xml path('Attribute'), type
)
from @colorTable as T1
group by Category
for xml path('Category'), root('xml')
```
OUTPUT
```
<xml>
<Category Value="Color">
<Attribute Value="Blue" />
<Attribute Value="Green" />
<Attribute Value="Red" />
</Category>
<Category Value="Transport">
<Attribute Value="Bus" />
<Attribute Value="Car" />
<Attribute Value="Twoweeler" />
</Category>
</xml>
```
|
Finally I put the code like this:
```
Select Category as value,
(SELECT Attribute as value from @colorTable where Category = a.Category
 FOR XML raw('attribute'), TYPE)
from @colorTable as a group by Category
FOR XML raw('category'), ROOT ('xml'), type;
```
|
Xml output in SQL Server
|
[
"",
"sql",
"sql-server",
"xml",
"sql-server-2008",
"stored-procedures",
""
] |
I have a rather large statement that I've built up using an @SQL variable and then running the query from the variable at the end of the statement. This works fine except for when inserting a date into one of the parameters.
The query then returns no data and comes back with an error:
> Conversion failed when converting date and/or time from character string.
The SQL I currently have is as follows:
```
ALTER PROCEDURE [dbo].[GetVisitListFiltered]
@sitekey int,
@VisitNo int = NULL,
@DNS varchar(max) = NULL,
@SessionStarted varchar(15) = '01/01/1900',
@Page varchar(max) = NULL,
@SecondsOnSite int = NULL,
@SecondsOnSiteRange int = NULL,
@Pages int = NULL,
@Cost int = NULL,
@City varchar(max) = NULL,
@Country varchar(max) = NULL,
@Keywords varchar(max) = NULL,
@Referrer varchar(max) = NULL
AS
BEGIN
BEGIN TRY
SET @SecondsOnSiteRange =
CASE @SecondsOnSiteRange
WHEN 1 THEN '='
WHEN 2 THEN '>'
WHEN 3 THEN '<'
ELSE NULL
END
DECLARE @SQL NVARCHAR(MAX)
, @SQLParams NVARCHAR(MAX);
SET @SQL = N'
SELECT VKey,
VisitIP,
SiteKey,
Alert,
AlertNo,
VisitNo,
Invited,
Chatted,
Prospect,
Customer,
HackRaised,
Spider,
Cost,
Revenue,
Visits,
FirstDate,
TotalCost,
TotalRevenue,
OperatingSystem,
Browser,
SearchEngine,
Referrer,
Keywords,
ReferrerQuery,
Name,
Email,
Company,
Telephone,
Fax,
Street,
City,
Zip,
Country,
Web,
Organization,
CRMID,
Notes,
DNS,
Region,
FirstAlert,
FirstVisitReferrer,
ProspectTypes,
VisitDate,
SecondsOnSite,
Page
FROM dbo.VisitDetail
WHERE SiteKey = @p0';
IF NULLIF(@VisitNo, '') IS NOT NULL SET @SQL += N' AND VisitNo = @p1';
IF NULLIF(@DNS, '') IS NOT NULL SET @SQL += N' AND DNS = @p2';
IF NULLIF(@SessionStarted, '01/01/1900') IS NOT NULL SET @SQL += N' AND VisitDate between @p3 and @p3 23:59:59';
IF NULLIF(@Page, '') IS NOT NULL SET @SQL += N' AND Page = @p4';
IF NULLIF(@SecondsOnSite, '') IS NOT NULL AND NULLIF(@SecondsOnSiteRange, '') IS NOT NULL SET @SQL += N' AND SecondsOnSite' + '@p12' + '@p5';
IF NULLIF(@Pages, '') IS NOT NULL SET @SQL += N' AND PagesSeen = @p6';
IF NULLIF(@Cost, '') IS NOT NULL SET @SQL += N' AND Cost = @p7';
IF NULLIF(@City, '') IS NOT NULL SET @SQL += N' AND City = @p8';
IF NULLIF(@Country, '') IS NOT NULL SET @SQL += N' AND Country = @p9';
IF NULLIF(@Keywords, '') IS NOT NULL SET @SQL += N' AND Keywords = @p10';
IF NULLIF(@Referrer, '') IS NOT NULL SET @SQL += N' AND ReferrerQuery = @p11';
SET @SQLParams = N'
@p0 INT
, @p1 INT
, @p2 VARCHAR(MAX)
, @p3 VARCHAR(15)
, @p4 VARCHAR(MAX)
, @p5 INT
, @p6 INT
, @p7 INT
, @p8 VARCHAR(MAX)
, @p9 VARCHAR(MAX)
, @p10 VARCHAR(MAX)
, @p11 VARCHAR(MAX)
, @p12 VARCHAR(10)';
EXECUTE sp_executesql @SQL
, @SQLParams
, @p0 = @SiteKey
, @p1 = @VisitNo
, @p2 = @DNS
, @p3 = @SessionStarted
, @p4 = @Page
, @p5 = @SecondsOnSite
, @p6 = @Pages
, @p7 = @Cost
, @p8 = @City
, @p9 = @Country
, @p10 = @Keywords
, @p11 = @Referrer
, @p12 = @SecondsOnSiteRange;
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE();
END CATCH
END
```
I know the problem lies somewhere in this area:
```
IF NULLIF(@SessionStarted, '01/01/1900') IS NOT NULL
SET @SQL += N' AND VisitDate between @p3 and @p3 23:59:59';
```
But I'm not sure how to fix it - can anyone suggest what I might be doing wrong?
|
It has already been pointed out that `@SessionStarted` is a `VARCHAR` when it should be a date. And that your final SQL is malformed, and no more valid than:
```
DECLARE @p3 DATE = GETDATE();
SELECT Test = @p3 '23:59:59';
```
Which gives:
> Msg 102, Level 15, State 1, Line 3
>
> Incorrect syntax near '23:59:59'.
But I want to stress another point:
**DON'T USE BETWEEN LIKE THIS**
You are trying to construct a statement like:
```
WHERE Date BETWEEN '2015-09-03' AND '2015-09-03 23:59:59'
```
But what if `Date` is '2015-09-03 23:59:59.5'? Do you really want that to be excluded? The best practice is to use an open-ended range:
```
WHERE Date >= '2015-09-03'
AND Date < '2015-09-04'
```
Pretty much the same, but it covers the entire day, not just most of it. So your exact statement should probably be:
```
IF NULLIF(@SessionStarted, '01/01/1900') IS NOT NULL
SET @SQL += N' AND VisitDate >= @p3 AND VisitDate < DATEADD(DAY, 1, @p3)';
```
Aaron Bertrand has written a [great article](https://sqlblog.org/2011/10/19/what-do-between-and-the-devil-have-in-common) on this for further reading.
So in summary, a partial fix would be to use the concatenation operator `+`:
```
DECLARE @p3 VARCHAR(50) = '03/09/2015';
SELECT Test = @p3 + ' 23:59:59';
```
A better fix would be to convert to the right datatype:
```
DECLARE @p3 VARCHAR(50) = '03/09/2015';
SELECT Test = CONVERT(DATETIME, @p3 + ' 23:59:59');
```
Even better would be to use a culture invariant date format, so it is clear whether you mean 3rd September or 9th March:
```
DECLARE @p3 VARCHAR(50) = '20150903';
SELECT Test = CONVERT(DATETIME, @p3 + ' 23:59:59');
```
Even better still would be to use the correct datatype in the first place:
```
DECLARE @p3 DATETIME = '20150903';
SELECT Test = @p3 + '23:59:59';
```
And better yet, would be to use an open ended date range as described above.
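The difference between `BETWEEN` and the open-ended range is easy to demonstrate. A small Python/SQLite sketch, storing datetimes as ISO-8601 strings so that string comparison orders correctly (the table name is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (visit_date TEXT)")
con.executemany("INSERT INTO visits VALUES (?)", [
    ("2015-09-03 00:00:00",),
    ("2015-09-03 23:59:59.5",),   # the row that BETWEEN silently drops
    ("2015-09-04 00:00:00",),
])
between = con.execute("""
    SELECT COUNT(*) FROM visits
    WHERE visit_date BETWEEN '2015-09-03' AND '2015-09-03 23:59:59'
""").fetchone()[0]
half_open = con.execute("""
    SELECT COUNT(*) FROM visits
    WHERE visit_date >= '2015-09-03' AND visit_date < '2015-09-04'
""").fetchone()[0]
print(between, half_open)  # 1 2
```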
|
You have declared `sessionstarted` as a character string rather than a date. I imagine that this is the root cause of your problem.
Change the type to a date. I would also recommend that you use ISO standard YYYY-MM-DD format for the date, rather than a culture-specific format.
|
How to express a BETWEEN statement in SQL when running query from a variable
|
[
"",
"sql",
"sql-server",
""
] |
I want to achieve something like the following in MS SQL, using two tables and a join instead of iteration.
For each row in table A, I want to find the nearest value in table B; once a value has been selected, it cannot be reused. Please help if you've done something like this before. Thank you in advance! #SOreadyToAsk
[](https://i.stack.imgur.com/IjCJ1.png)
|
I firmly believe **THIS IS NOT GOOD PRACTICE**, because I am bypassing SQL Server's rule that functions must not have side effects (INSERT, UPDATE, DELETE). But since I wanted to solve this without resorting to iteration, I came up with the following, and it gave me a better view of things.
```
create table tablea
(
num INT,
val MONEY
)
create table tableb
(
num INT,
val MONEY
)
```
I created a hard temp table, which I drop and recreate from time to time.
```
if((select 1 from sys.tables where name = 'temp_tableb') is not null) begin drop table temp_tableb end
select * into temp_tableb from tableb
```
I created a function that executes xp\_cmdshell (this is where the side-effect bypassing happens)
```
CREATE FUNCTION [dbo].[GetNearestMatch]
(
@ParamValue MONEY
)
RETURNS MONEY
AS
BEGIN
DECLARE @ReturnNum MONEY
, @ID INT
SELECT TOP 1
@ID = num
, @ReturnNum = val
FROM temp_tableb ORDER BY ABS(val - @ParamValue)
DECLARE @SQL varchar(500)
SELECT @SQL = 'osql -S' + @@servername + ' -E -q "delete from test..temp_tableb where num = ' + CONVERT(NVARCHAR(150),@ID) + ' "'
EXEC master..xp_cmdshell @SQL
RETURN @ReturnNum
END
```
and my usage in my query simply looks like this.
```
-- initialize temp
if((select 1 from sys.tables where name = 'temp_tableb') is not null) begin drop table temp_tableb end
select * into temp_tableb from tableb
-- query nearest match
select
*
, dbo.GetNearestMatch(a.val) AS [NearestValue]
from tablea a
```
and gave me this..
[](https://i.stack.imgur.com/lJoXD.png)
|
Below is a set-based solution using CTEs and windowing functions.
The `ranked_matches` CTE assigns a closest match rank for each row in `TableA` along with a closest match rank for each row in `TableB`, using the `index` value as a tie breaker.
The `best_matches` CTE returns rows from `ranked_matches` that have the best rank (rank value 1) for both rankings.
Finally, the outer query uses a `LEFT JOIN` from `TableA` to the `best_matches` CTE to include the `TableA` rows that were not assigned a best match because their closest match was already taken.
Note that this does not return a match for the index 3 TableA row indicated in your sample results. The closest match for this row is TableB index 3, a difference of 83. However, that TableB row is a closer match to the TableA index 2 row (a difference of 14), so it was already assigned. Please clarify your question if this isn't what you want; I think this technique can be tweaked accordingly.
```
CREATE TABLE dbo.TableA(
[index] int NOT NULL
CONSTRAINT PK_TableA PRIMARY KEY
, value int
);
CREATE TABLE dbo.TableB(
[index] int NOT NULL
CONSTRAINT PK_TableB PRIMARY KEY
, value int
);
INSERT INTO dbo.TableA
( [index], value )
VALUES ( 1, 123 ),
( 2, 245 ),
( 3, 342 ),
( 4, 456 ),
( 5, 608 );
INSERT INTO dbo.TableB
( [index], value )
VALUES ( 1, 152 ),
( 2, 159 ),
( 3, 259 );
WITH
ranked_matches AS (
SELECT
a.[index] AS a_index
, a.value AS a_value
, b.[index] b_index
, b.value AS b_value
, RANK() OVER(PARTITION BY a.[index] ORDER BY ABS(a.Value - b.value), b.[index]) AS a_match_rank
, RANK() OVER(PARTITION BY b.[index] ORDER BY ABS(a.Value - b.value), a.[index]) AS b_match_rank
FROM dbo.TableA AS a
CROSS JOIN dbo.TableB AS b
)
, best_matches AS (
SELECT
a_index
, a_value
, b_index
, b_value
FROM ranked_matches
WHERE
a_match_rank = 1
AND b_match_rank= 1
)
SELECT
TableA.[index] AS a_index
, TableA.value AS a_value
, best_matches.b_index
, best_matches.b_value
FROM dbo.TableA
LEFT JOIN best_matches ON
best_matches.a_index = TableA.[index]
ORDER BY
TableA.[index];
```
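The ranked-matches idea ports to any engine with window functions. Below is a compact Python/SQLite reproduction of the two CTEs, with the column `index` renamed to `idx` to avoid reserved-word quoting:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (idx INT PRIMARY KEY, value INT);
    CREATE TABLE b (idx INT PRIMARY KEY, value INT);
    INSERT INTO a VALUES (1,123),(2,245),(3,342),(4,456),(5,608);
    INSERT INTO b VALUES (1,152),(2,159),(3,259);
""")
rows = con.execute("""
    WITH ranked_matches AS (
        SELECT a.idx AS a_idx, b.idx AS b_idx,
               RANK() OVER (PARTITION BY a.idx
                            ORDER BY ABS(a.value - b.value), b.idx) AS a_rank,
               RANK() OVER (PARTITION BY b.idx
                            ORDER BY ABS(a.value - b.value), a.idx) AS b_rank
        FROM a CROSS JOIN b
    ),
    best_matches AS (
        -- a pair survives only if each side is the other's closest match
        SELECT a_idx, b_idx FROM ranked_matches
        WHERE a_rank = 1 AND b_rank = 1
    )
    SELECT a.idx, best_matches.b_idx
    FROM a
    LEFT JOIN best_matches ON best_matches.a_idx = a.idx
    ORDER BY a.idx
""").fetchall()
print(rows)  # [(1, 1), (2, 3), (3, None), (4, None), (5, None)]
```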
**EDIT:**
Although this method uses CTEs, recursion is not used and is therefore not limited to 32K recursions. There may be room for improvement here from a performance perspective, though.
|
Left join with nearest value without duplicates
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have the following `IF` block that is supposed to be executed only if my table is empty. But it seems to be executing the code between `BEGIN` and `END` even when my table has a row.
```
IF ((SELECT COUNT(*) FROM [dbo].[KeyProperties]) = 0)
BEGIN
INSERT INTO [dbo].[KeyProperties] (KeyId,PropertyId) VALUES
((SELECT Id FROM [dbo].[Keys] WHERE Priority=1),(SELECT Id FROM [dbo].[Properties] WHERE Name='SocialSecurityNumber'))
INSERT INTO [dbo].[KeyProperties] (KeyId,PropertyId) VALUES
((SELECT Id FROM [dbo].[Keys] WHERE Priority=2),(SELECT Id FROM [dbo].[Properties] WHERE Name='FirstName')),
((SELECT Id FROM [dbo].[Keys] WHERE Priority=2),(SELECT Id FROM [dbo].[Properties] WHERE Name='LastName')),
((SELECT Id FROM [dbo].[Keys] WHERE Priority=2),(SELECT Id FROM [dbo].[Properties] WHERE Name='Email'))
END
```
The above code throws this error:
> Cannot insert the value NULL into column 'KeyId', table
> 'MYDATABASE.dbo.KeyProperties'; column does not allow nulls. INSERT
> fails. The statement has been terminated. End
|
1) Use `NOT EXISTS` instead of counting and comparing to 0
2) Use `INSERT INTO tab(col) SELECT`
```
IF NOT EXISTS (SELECT 1 FROM [dbo].[KeyProperties])
BEGIN
INSERT INTO [dbo].[KeyProperties] (KeyId,PropertyId)
SELECT
(SELECT Id FROM [dbo].[Keys] WHERE Priority=1),
(SELECT Id FROM [dbo].[Properties] WHERE Name='SocialSecurityNumber');
INSERT INTO [dbo].[KeyProperties] (KeyId,PropertyId)
SELECT
(SELECT Id FROM [dbo].[Keys] WHERE Priority=2),
(SELECT Id FROM [dbo].[Properties] WHERE Name='FirstName')
UNION ALL
SELECT
(SELECT Id FROM [dbo].[Keys] WHERE Priority=2),
(SELECT Id FROM [dbo].[Properties] WHERE Name='LastName')
UNION ALL
SELECT
(SELECT Id FROM [dbo].[Keys] WHERE Priority=2),
(SELECT Id FROM [dbo].[Properties] WHERE Name='Email');
END
```
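The `NOT EXISTS` guard is easy to verify outside SQL Server as well. Here is a minimal sketch using SQLite from Python (the `Keys`/`Properties` contents are invented for the demo); the second call is a no-op because the table is no longer empty:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE KeyProperties (KeyId INTEGER NOT NULL, PropertyId INTEGER NOT NULL);
CREATE TABLE Keys (Id INTEGER, Priority INTEGER);
CREATE TABLE Properties (Id INTEGER, Name TEXT);
INSERT INTO Keys VALUES (10, 1), (20, 2);
INSERT INTO Properties VALUES (1, 'SocialSecurityNumber'), (2, 'FirstName');
""")

def seed_if_empty(conn):
    # Guard: only seed when the table has no rows (the NOT EXISTS pattern).
    (empty,) = conn.execute(
        "SELECT NOT EXISTS (SELECT 1 FROM KeyProperties)").fetchone()
    if empty:
        conn.execute("""
            INSERT INTO KeyProperties (KeyId, PropertyId)
            SELECT (SELECT Id FROM Keys WHERE Priority = 1),
                   (SELECT Id FROM Properties WHERE Name = 'SocialSecurityNumber')
        """)

seed_if_empty(conn)
seed_if_empty(conn)  # no-op: the table is no longer empty
count = conn.execute("SELECT COUNT(*) FROM KeyProperties").fetchone()[0]
```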
|
Use a variable instead:
```
DECLARE @Count INT
SELECT @Count = COUNT(*) FROM [dbo].[KeyProperties]
IF (@Count = 0)
```
or just
```
IF NOT EXISTS (SELECT TOP 1 NULL FROM [dbo].[KeyProperties])
```
|
IF block executes when the condition does not apply
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
I'm trying to execute the following query as a simpler way to process several records rather than creating an insert statement for each individual one:
```
INSERT INTO wostatus (WO, STATUS, DATE, WOSTATUSID)
SELECT
workorder.wonum, 'CLOSE', '02-SEP-2015',
(SELECT MAX(wostatusid) + 1 FROM wostatus)
FROM
wostatus
JOIN
workorder ON wostatus.wonum = workorder.wonum
```
However I'm getting a duplicate key error, so this isn't working the way I hoped: I thought it would keep adding 1 and retrieving the new value for each subsequent insert. Is there another simple way to do this that would work on both SQL Server and Oracle?
|
Use the query below:
```
INSERT INTO wostatus (WO, STATUS, DATE, WOSTATUSID)
SELECT workorder.wonum, 'CLOSE', '02-SEP-2015',
       (SELECT ISNULL(MAX(wostatusid), 0) FROM wostatus)
         + ROW_NUMBER() OVER (ORDER BY wostatus.wonum)
FROM wostatus
JOIN workorder ON wostatus.wonum = workorder.wonum
```
|
You can use `ROW_NUMBER`:
```
INSERT INTO wostatus (WO,STATUS,DATE,WOSTATUSID)
SELECT
workorder.wonum,
'CLOSE',
'02-SEP-2015',
(SELECT max(wostatusid)+1 from wostatus) + ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) - 1
FROM wostatus
JOIN workorder
ON wostatus.wonum = workorder.wonum
```
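A minimal sketch of the `ROW_NUMBER()` offset idea, using SQLite from Python (window functions need SQLite 3.25+; the join is simplified and the current maximum is read up front, and the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE wostatus (wonum TEXT, wostatusid INTEGER PRIMARY KEY);
CREATE TABLE workorder (wonum TEXT);
INSERT INTO wostatus VALUES ('X', 1), ('Y', 2);
INSERT INTO workorder VALUES ('A'), ('B'), ('C');
""")
# Read the current maximum once, then let ROW_NUMBER() supply a distinct
# offset per inserted row, so the primary key never collides.
(base,) = conn.execute(
    "SELECT COALESCE(MAX(wostatusid), 0) FROM wostatus").fetchone()
conn.execute("""
INSERT INTO wostatus (wonum, wostatusid)
SELECT wonum, ? + ROW_NUMBER() OVER (ORDER BY wonum)
FROM workorder
""", (base,))
ids = [r[0] for r in conn.execute(
    "SELECT wostatusid FROM wostatus ORDER BY wostatusid")]
```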
|
SQL incremental id insert without auto key
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
How can I write a loop that runs n times in MySQL without using a stored procedure?
This is how I do it with a stored procedure:
```
DELIMITER $$
DROP PROCEDURE IF EXISTS test$$
CREATE PROCEDURE test()
BEGIN
DECLARE count INT DEFAULT 0;
WHILE count < 10 DO
/**Sql statement**/
SET count = count + 1;
END WHILE;
END$$
DELIMITER ;
```
And then I execute my procedure this way:
```
call test();
```
If I remove the stored procedure and run the normal query, then it fails with this error:
> 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DECLARE count INT DEFAULT 0; WHILE count < 10 DO' at line 2
I have looked through the Internet for a solution with no luck.
**Edit Based On comments:**
The above stored procedure does exactly what I want: it loops 10 times and executes my SQL statement. Now I want to accomplish the same thing without using a stored procedure. Something like:
```
DECLARE count INT DEFAULT 0;
WHILE count < 10 DO
/**Sql statement**/
SET count = count + 1;
END WHILE;
```
|
MySQL docs on [Flow Control Statements](https://dev.mysql.com/doc/refman/5.6/en/flow-control-statements.html) say:
> MySQL supports the IF, CASE, ITERATE, LEAVE LOOP, WHILE, and REPEAT
> constructs for flow control **within stored programs**.
Docs on [Stored Programs and Views](https://dev.mysql.com/doc/refman/5.6/en/stored-programs-views.html) say:
> Stored program definitions include a body that may use compound
> statements, **loops, conditionals, and declared variables**.
[Compound-Statement Syntax](https://dev.mysql.com/doc/refman/5.6/en/sql-syntax-compound-statements.html)
> This section describes the syntax for the BEGIN ... END compound
> statement and other statements that can be used in the body of **stored
> programs**: Stored procedures and functions, triggers, and events.
>
> A compound statement is a block that can contain other blocks;
> declarations for variables, condition handlers, and cursors; and **flow
> control constructs such as loops** and conditional tests.
So, it looks like you can run an explicit loop only within a stored procedure, function or trigger.
---
Depending on what you do in your SQL statement, it may be acceptable to use a table (or view) of numbers ([Creating a "Numbers Table" in mysql](https://stackoverflow.com/questions/9751318/creating-a-numbers-table-in-mysql), [MYSQL: Sequential Number Table](https://stackoverflow.com/questions/14298154/mysql-sequential-number-table)).
If your query is a `SELECT` and it is OK to return result of your `SELECT` 10 times as one long result set (as opposed to 10 separate result sets) you can do something like this:
```
SELECT MainQuery.*
FROM
(
SELECT 1 AS Number
UNION ALL SELECT 2
UNION ALL SELECT 3
UNION ALL SELECT 4
UNION ALL SELECT 5
UNION ALL SELECT 6
UNION ALL SELECT 7
UNION ALL SELECT 8
UNION ALL SELECT 9
UNION ALL SELECT 10
) AS Numbers
CROSS JOIN
(
SELECT 'some data' AS Result
) AS MainQuery
```
**Example for INSERT**
I recommend having a permanent table of numbers in your database. It is useful in many cases. See the links above for how to generate it.
So, if you have a table `Numbers` with `int` column `Number` with values from 1 to, say, 100K (as I do), and primary key on this column, then instead of this loop:
```
DECLARE count INT DEFAULT 0;
WHILE count < 10 DO
INSERT INTO table_name(col1,col2,col3)
VALUES("val1","val2",count);
SET count = count + 1;
END WHILE;
```
you can write:
```
INSERT INTO table_name(col1,col2,col3)
SELECT "val1", "val2", Numbers.Number - 1
FROM Numbers
WHERE Numbers.Number <= 10;
```
It would also work almost 10 times faster.
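The same numbers-table idea can be sketched with a recursive CTE standing in for a permanent `Numbers` table (SQLite from Python here, with an invented target table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (col1 TEXT, col2 TEXT, col3 INTEGER)")
# A derived numbers table generated by a recursive CTE replaces the WHILE loop:
# one set-based INSERT instead of ten single-row inserts.
conn.execute("""
WITH RECURSIVE Numbers(Number) AS (
    SELECT 1
    UNION ALL
    SELECT Number + 1 FROM Numbers WHERE Number < 10
)
INSERT INTO table_name (col1, col2, col3)
SELECT 'val1', 'val2', Number - 1 FROM Numbers
""")
rows = conn.execute(
    "SELECT COUNT(*), MIN(col3), MAX(col3) FROM table_name").fetchone()
```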
|
You can do it directly with the MariaDB Sequence engine. MariaDB is a drop-in binary replacement for MySQL.
**"A Sequence engine allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment."**
[Manual Sequence Engine]
Here are some Samples:
```
mysql -uroot -p
Enter password: xxxxxxx
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.0.20-MariaDB-log Homebrew
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> use tmp
Database changed
MariaDB [tmp]> select version();
+---------------------+
| version() |
+---------------------+
| 10.0.20-MariaDB-log |
+---------------------+
1 row in set (0.00 sec)
MariaDB [tmp]> select * from seq_1_to_10;
+-----+
| seq |
+-----+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
| 10 |
+-----+
10 rows in set (0.00 sec)
MariaDB [tmp]> select * from seq_1_to_10_step_2;
+-----+
| seq |
+-----+
| 1 |
| 3 |
| 5 |
| 7 |
| 9 |
+-----+
5 rows in set (0.00 sec)
MariaDB [tmp]> SELECT DAYNAME('1980-12-05' + INTERVAL (seq) YEAR) day,
-> '1980-12-05' + INTERVAL (seq) YEAR date FROM seq_0_to_40;
+-----------+------------+
| day | date |
+-----------+------------+
| Friday | 1980-12-05 |
| Saturday | 1981-12-05 |
| Sunday | 1982-12-05 |
| Monday | 1983-12-05 |
| Wednesday | 1984-12-05 |
| Thursday | 1985-12-05 |
| Friday | 1986-12-05 |
| Saturday | 1987-12-05 |
| Monday | 1988-12-05 |
| Tuesday | 1989-12-05 |
| Wednesday | 1990-12-05 |
| Thursday | 1991-12-05 |
| Saturday | 1992-12-05 |
| Sunday | 1993-12-05 |
| Monday | 1994-12-05 |
| Tuesday | 1995-12-05 |
| Thursday | 1996-12-05 |
| Friday | 1997-12-05 |
| Saturday | 1998-12-05 |
| Sunday | 1999-12-05 |
| Tuesday | 2000-12-05 |
| Wednesday | 2001-12-05 |
| Thursday | 2002-12-05 |
| Friday | 2003-12-05 |
| Sunday | 2004-12-05 |
| Monday | 2005-12-05 |
| Tuesday | 2006-12-05 |
| Wednesday | 2007-12-05 |
| Friday | 2008-12-05 |
| Saturday | 2009-12-05 |
| Sunday | 2010-12-05 |
| Monday | 2011-12-05 |
| Wednesday | 2012-12-05 |
| Thursday | 2013-12-05 |
| Friday | 2014-12-05 |
| Saturday | 2015-12-05 |
| Monday | 2016-12-05 |
| Tuesday | 2017-12-05 |
| Wednesday | 2018-12-05 |
| Thursday | 2019-12-05 |
| Saturday | 2020-12-05 |
+-----------+------------+
41 rows in set (0.00 sec)
MariaDB [tmp]>
```
Here one Sample:
```
MariaDB [(none)]> use tmp
Database changed
MariaDB [tmp]> SELECT * FROM seq_1_to_5,
-> (SELECT * FROM animals) AS x
-> ORDER BY seq;
+-----+------+-----------+-----------------+
| seq | id | name | specie |
+-----+------+-----------+-----------------+
| 1 | 1 | dougie | dog-poodle |
| 1 | 6 | tweety | bird-canary |
| 1 | 5 | spotty | turtle-spotted |
| 1 | 4 | mr.turtle | turtle-snapping |
| 1 | 3 | cadi | cat-persian |
| 1 | 2 | bonzo | dog-pitbull |
| 2 | 4 | mr.turtle | turtle-snapping |
| 2 | 3 | cadi | cat-persian |
| 2 | 2 | bonzo | dog-pitbull |
| 2 | 1 | dougie | dog-poodle |
| 2 | 6 | tweety | bird-canary |
| 2 | 5 | spotty | turtle-spotted |
| 3 | 6 | tweety | bird-canary |
| 3 | 5 | spotty | turtle-spotted |
| 3 | 4 | mr.turtle | turtle-snapping |
| 3 | 3 | cadi | cat-persian |
| 3 | 2 | bonzo | dog-pitbull |
| 3 | 1 | dougie | dog-poodle |
| 4 | 2 | bonzo | dog-pitbull |
| 4 | 1 | dougie | dog-poodle |
| 4 | 6 | tweety | bird-canary |
| 4 | 5 | spotty | turtle-spotted |
| 4 | 4 | mr.turtle | turtle-snapping |
| 4 | 3 | cadi | cat-persian |
| 5 | 5 | spotty | turtle-spotted |
| 5 | 4 | mr.turtle | turtle-snapping |
| 5 | 3 | cadi | cat-persian |
| 5 | 2 | bonzo | dog-pitbull |
| 5 | 1 | dougie | dog-poodle |
| 5 | 6 | tweety | bird-canary |
+-----+------+-----------+-----------------+
30 rows in set (0.00 sec)
MariaDB [tmp]>
```
|
Loop n times without using a stored procedure
|
[
"",
"mysql",
"sql",
""
] |
I have written the query below. However, I am not able to get 0 in the corresponding counts. Can you please let me know how I can change the join in this query to display 0's?
```
SELECT b.collected AS Last_Week_Collected,
a.collected AS THIS_Week_Collected,
b.errored AS Last_Week_Errored,
a.errored AS THIS_Week_Errored,
b.processed AS Last_Week_Processed,
a.processed AS THIS_Week_Processed
FROM (
SELECT stream_id,collected, errored, processed
FROM processing_Stats_Archive
WHERE stream_id = '29'
AND HR_OF_DAY ='5'
AND TO_CHAR(batch_Creation_date,'DD-MON-YY')= '03-09-2015'
) a ,
(
SELECT stream_id,collected,errored ,processed
FROM processing_Stats_Archive
WHERE stream_id = '29'
AND HR_OF_DAY ='5'
AND TO_CHAR(batch_Creation_date,'DD-MON-YY')= '27-08-2015'
) b
WHERE a.stream_id=b.stream_id;
```
|
Looking at your syntax, it looks like you are using an `Oracle` database, so the `NVL` function should work for you just fine. Also, since you want to return `0`'s in place of null values, instead of an `inner join` you would want to do some form of `outer join` (left, right, or full, depending on your needs). If you want to return all the rows from both queries, use a `FULL OUTER JOIN` like this:
```
SELECT nvl(b.collected, 0) AS Last_Week_Collected
,nvl(a.collected, 0) AS THIS_Week_Collected
,nvl(b.errored, 0) AS Last_Week_Errored
,nvl(a.errored, 0) AS THIS_Week_Errored
,nvl(b.processed, 0) AS Last_Week_Processed
,nvl(a.processed, 0) AS THIS_Week_Processed
FROM (
SELECT stream_id
,collected
,errored
,processed
FROM processing_Stats_Archive
WHERE stream_id = '29'
AND HR_OF_DAY = '5'
AND TO_CHAR(batch_Creation_date, 'DD-MON-YY') = '03-09-2015'
) a
FULL OUTER JOIN (
SELECT stream_id
,collected
,errored
,processed
FROM processing_Stats_Archive
WHERE stream_id = '29'
AND HR_OF_DAY = '5'
AND TO_CHAR(batch_Creation_date, 'DD-MON-YY') = '27-08-2015'
) b ON a.stream_id = b.stream_id;
```
|
You can do it using a conditional aggregate:
```
SELECT NVL(Last_Week_Collected, 0) AS Last_Week_Collected,
NVL(THIS_Week_Collected, 0) AS THIS_Week_Collected,
NVL(Last_Week_Errored, 0) AS Last_Week_Errored,
NVL(THIS_Week_Errored, 0) AS THIS_Week_Errored,
NVL(Last_Week_Processed, 0) AS Last_Week_Processed,
NVL(THIS_Week_Processed, 0) AS THIS_Week_Processed
FROM
( SELECT MAX(CASE WHEN TO_CHAR(batch_Creation_date,'DD-MON-YY')= '27-08-2015' THEN collected ELSE 0 END) AS Last_Week_Collected,
MAX(CASE WHEN TO_CHAR(batch_Creation_date,'DD-MON-YY')= '03-09-2015' THEN collected ELSE 0 END) AS THIS_Week_Collected,
MAX(CASE WHEN TO_CHAR(batch_Creation_date,'DD-MON-YY')= '27-08-2015' THEN errored ELSE 0 END) AS Last_Week_Errored,
MAX(CASE WHEN TO_CHAR(batch_Creation_date,'DD-MON-YY')= '03-09-2015' THEN errored ELSE 0 END) AS THIS_Week_Errored,
MAX(CASE WHEN TO_CHAR(batch_Creation_date,'DD-MON-YY')= '27-08-2015' THEN processed ELSE 0 END) AS Last_Week_Processed,
MAX(CASE WHEN TO_CHAR(batch_Creation_date,'DD-MON-YY')= '03-09-2015' THEN processed ELSE 0 END) AS THIS_Week_Processed
FROM processing_Stats_Archive
WHERE stream_id = '29'
AND HR_OF_DAY ='5'
AND TO_CHAR(batch_Creation_date,'DD-MON-YY') IN ('27-08-2015', '03-09-2015')
) t;
```
Your current query would only work if there was one row per week, so I have assumed this to be the case. Although I have applied the `MAX` function, it is therefore fairly meaningless: it is the `MAX` of a single row.
This is a scalar aggregate, that is to say an aggregate function with no `GROUP BY`, so it will always return exactly one row, regardless of whether or not there is any data.
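The scalar-aggregate behavior is easy to demonstrate. In this SQLite sketch (invented two-column data), the 'next' bucket has no rows, yet the query still returns exactly one row, with the missing value folded to 0 by `COALESCE` (SQLite's spelling of `NVL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (week TEXT, collected INTEGER)")
conn.executemany("INSERT INTO stats VALUES (?, ?)",
                 [("last", 5), ("this", 8)])
# Conditional aggregation: one CASE bucket per week; an empty bucket
# aggregates to NULL, which COALESCE turns into 0.
row = conn.execute("""
SELECT COALESCE(MAX(CASE WHEN week = 'last' THEN collected END), 0),
       COALESCE(MAX(CASE WHEN week = 'this' THEN collected END), 0),
       COALESCE(MAX(CASE WHEN week = 'next' THEN collected END), 0)
FROM stats
""").fetchone()
```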
|
SQL to display 0 in the counts
|
[
"",
"sql",
""
] |
I have the following table called Orders
```
Order | Date | Total
------------------------------------
34564 | 03/05/2015| 15.00
77456 | 01/01/2001| 3.00
25252 | 02/02/2008| 4.00
34564 | 03/04/2015| 7.00
```
I am trying to select the distinct orders, sum the total, and group by order #. The problem is that it shows two records for 34564 because they have different dates. How can I sum repeated orders and pick only the max(date), but still sum the total of the two instances?
I.E result
```
Order | Date | Total
------------------------------------
34564 | 03/05/2015| 22.00
77456 | 01/01/2001| 3.00
25252 | 02/02/2008| 4.00
```
Tried:
```
SELECT DISTINCT Order, Date, SUM(Total)
FROM Orders
GROUP BY Order, Date
```
Of course the above won't work, as you can see, but I am not sure how to achieve what I intend.
|
```
SELECT [order], MAX(date) AS date, SUM(total) AS total
FROM Orders o
GROUP BY [order]
```
|
You can use the `MAX` aggregate function to choose the latest `Date` to appear from each `Order` group:
```
SELECT Order, MAX(Date) AS Date, SUM(Total) AS Total
FROM Orders
GROUP BY Order
```
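A quick check of this approach using SQLite from Python. `Order` and `Date` are reserved words, so the columns are renamed here, and dates are stored in ISO format so that `MAX` on the text column orders correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (ord INTEGER, dt TEXT, total REAL)")
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)", [
    (34564, "2015-03-05", 15.0),
    (77456, "2001-01-01", 3.0),
    (25252, "2008-02-02", 4.0),
    (34564, "2015-03-04", 7.0),
])
# One row per order: latest date, total summed across the duplicates.
rows = conn.execute("""
SELECT ord, MAX(dt), SUM(total)
FROM Orders
GROUP BY ord
ORDER BY ord
""").fetchall()
```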
|
Querying table with group by and sum
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
This query
```
select distinct owner from dba_objects
```
is throwing this error
```
ORA-00942: table or view does not exist
```
Does that make any sense at all?
|
You have to use an administrative user (such as `sys` or `system`). If you do not have access to such a user, you can use the `all_objects` view instead of `dba_objects`. Any user can query it, and will get results only for the objects they have privileges on.
|
It does if you don't have select privs on the DBA\_OBJECTS view or if you don't have a local or global synonym to the SYS.DBA\_OBJECTS view. You could try selecting from SYS.DBA\_OBJECTS instead.
|
Getting Schemas from Oracle DB throwing error
|
[
"",
"sql",
"oracle",
"select",
"data-dictionary",
""
] |
I have these table
```
Item
+-------+--------+
| ID | Name |
+-------+--------+
| 1 | itemA |
+-------+--------+
Sales
+-------+--------+-------------+-----+
| ID | ItemID | WarehouseID | ... |
+-------+--------+-------------+-----+
| ABC | 1 | null | ... |
+-------+--------+-------------+-----+
ItemID = FK of Item(ID)
WarehouseID = FK of Warehouse(ID)
Warehouse
+--------+----------+------+-------+
| ID | ItemID | Qty | Price |
+--------+----------+------+-------+
| 1 | 1 | 10 | 5.00 |
+--------+----------+------+-------+
ItemID = FK of Item(ID)
**Expected Results:**
+--------+----------+----------------+-------------+------+-------+
| ItemID | ItemName | SalesID | WarehouseID | Qty | Price |
+--------+----------+----------------+-------------+------+-------+
| 1 | itemA | ABC | null | null | null |
+--------+----------+----------------+-------------+------+-------+
```
It is null because the `WarehouseID` in `Sales` is null.
How can I do this? I tried, but the result has no rows returned due to the null value.
|
You can use a `INNER JOIN` + `LEFT OUTER JOIN` for the `WarehouseId`:
```
SELECT item.ID as ItemID, item.Name as ItemName,
sales.Id as SalesID,
sales.WarehouseID,
wh.Qty, wh.Price
FROM Item
INNER JOIN Sales
ON item.ID = sales.ItemID
LEFT OUTER JOIN Warehouse wh
ON sales.WarehouseID = wh.ID
```
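A runnable version of the `INNER JOIN` + `LEFT JOIN` pattern, using the question's sample rows in SQLite (via Python); the warehouse columns come back as NULLs rather than the row disappearing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Item (ID INTEGER, Name TEXT);
CREATE TABLE Sales (ID TEXT, ItemID INTEGER, WarehouseID INTEGER);
CREATE TABLE Warehouse (ID INTEGER, ItemID INTEGER, Qty INTEGER, Price REAL);
INSERT INTO Item VALUES (1, 'itemA');
INSERT INTO Sales VALUES ('ABC', 1, NULL);
INSERT INTO Warehouse VALUES (1, 1, 10, 5.0);
""")
# INNER JOIN keeps only items with sales; LEFT JOIN keeps the sale
# even when WarehouseID is NULL and nothing matches in Warehouse.
row = conn.execute("""
SELECT Item.ID, Item.Name, Sales.ID, Sales.WarehouseID, wh.Qty, wh.Price
FROM Item
JOIN Sales ON Item.ID = Sales.ItemID
LEFT JOIN Warehouse wh ON Sales.WarehouseID = wh.ID
""").fetchone()
```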
|
Eh, something like this?
```
select i.ID as ItemID,
i.Name as ItemName,
s.ID as SalesID,
w.ID as WarehouseID,
w.Qty as Qty,
w.Price as Price
from (Item i join Sales s
on i.Id = s.ItemID) left join
WareHouse w on w.ID = s.WarehouseID
```
*inner join* between `Sales` and `Item` and *outer join* (*left* in the query) to `WareHouse`?
|
SQL command to Retrieve data where one column is null
|
[
"",
"sql",
""
] |
How can I use a CASE that will affect which column will be updated?
```
UPDATE TABLE1
CASE WHEN [Status] = 'Alpha' THEN AlphaStatus = Status
```
Using the solution [here](https://stackoverflow.com/questions/4830191/t-sql-using-a-case-in-an-update-statement-to-update-certain-columns-depending-o), I tried something like the following, but it only updates 1 column.
```
DECLARE @Table TABLE
(
ID int,
AlphaStatus varchar(10),
BetaStatus varchar(10),
GammaStatus varchar(10)
)
insert into @table (id) select 1
insert into @table (id) select 2
Declare @Values Table
(
ID varchar(20),
[Group] varchar(20),
[Status] varchar(20)
)
insert into @Values ( ID, [Group], [Status]) select 1, 'Alpha', 'ENABLED'
insert into @Values ( ID, [Group], [Status]) select 1, 'Beta', 'ENABLED'
insert into @Values ( ID, [Group], [Status]) select 1, 'Gamma', 'DISABLED'
insert into @Values ( ID, [Group], [Status]) select 2, 'Alpha', 'ENABLED'
insert into @Values ( ID, [Group], [Status]) select 2, 'Gamma', 'ENABLED'
insert into @Values ( ID, [Group], [Status]) select 2, 'Beta', 'ENABLED'
update @Table
set
AlphaStatus = (case when [Group] = 'Alpha' then [Status] else AlphaStatus end),
BetaStatus = (case when [Group] = 'Beta' then [Status] else BetaStatus end),
GammaStatus = (case when [Group] = 'Gamma' then [Status] else GammaStatus end)
from @Table t inner join @Values r
on r.id = t.ID
select *From @Table
```
|
By separating each field into a different join, you can do it in a single update.
```
update t
set
AlphaStatus = ISNULL(rA.[Status], AlphaStatus),
BetaStatus = ISNULL(rB.[Status], BetaStatus),
GammaStatus = ISNULL(rG.[Status], GammaStatus)
from @Table t
left join @Values rA on rA.id = t.ID AND rA.[Group] = 'Alpha'
left join @Values rB on rB.id = t.ID AND rB.[Group] = 'Beta'
left join @Values rG on rG.id = t.ID AND rG.[Group] = 'Gamma'
```
|
You could write this in two separate ways:
**First Method:**
```
UPDATE t
SET t.AlphaStatus = coalesce(ra.[Status], t.AlphaStatus)
,t.BetaStatus = coalesce(rb.[Status], t.BetaStatus)
,t.GammaStatus = coalesce(rg.[Status], t.GammaStatus)
FROM @Table t
LEFT JOIN @values ra ON ra.id = t.ID
AND ra.[group] = 'Alpha'
LEFT JOIN @values rb ON rb.id = t.ID
AND rb.[group] = 'Beta'
LEFT JOIN @values rg ON rg.id = t.ID
AND rg.[group] = 'Gamma';
```
**Second Method:**
```
update t
set t.AlphaStatus = coalesce(r.[Status],t.AlphaStatus)
from @Table t
left join @Values r
on r.id = t.ID
and [group] = 'Alpha';
update t
set t.BetaStatus = coalesce(r.[Status],t.BetaStatus)
from @Table t
left join @Values r
on r.id = t.ID
and [group] = 'Beta';
update t
set t.GammaStatus = coalesce(r.[Status],t.GammaStatus)
from @Table t
left join @Values r
on r.id = t.ID
and [group] = 'Gamma';
select * from @Table
```
[`SQL Fiddle Demo`](http://www.sqlfiddle.com/#!3/4989b/8/1)
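`UPDATE ... FROM` with joins is SQL Server syntax. The same `COALESCE` idea can be sketched portably with a correlated subquery; here is a SQLite version with invented two-column tables (only the Alpha column is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ID INTEGER, AlphaStatus TEXT);
CREATE TABLE v (ID INTEGER, Grp TEXT, Status TEXT);
INSERT INTO t VALUES (1, NULL), (2, 'OLD');
INSERT INTO v VALUES (1, 'Alpha', 'ENABLED');
""")
# Correlated-subquery form of the per-group update: COALESCE keeps the
# existing value when no matching row exists for that ID.
conn.execute("""
UPDATE t SET AlphaStatus = COALESCE(
    (SELECT Status FROM v WHERE v.ID = t.ID AND v.Grp = 'Alpha'),
    AlphaStatus)
""")
rows = conn.execute("SELECT ID, AlphaStatus FROM t ORDER BY ID").fetchall()
```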
|
CASE in UPDATE that determines column to update?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to generate an XML element using Oracle's XML documentation generation support features that looks like this
```
<Example Attr=""></Example>
```
Attr is an attribute of element Example and has a value of empty string "".
When I tried to generate an XML Element using Oracle's XML functions, I couldn't generate an XML element which has an attribute whose value is an empty string.
```
select XMLELEMENT("hello", xmlattributes('' as "Max")) from dual
```
The result of the above query is
```
<hello></hello>
```
Note: there is no space between the single quotes for Max attribute.
However my requirement is
```
<hello Max=""></hello> -- there is no space between the double quotes.
```
Is there a way to do this?
|
As you're aware, for XMLAtttribute ["if the value\_expr is null, then no attribute is created for that value expression"](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions242.htm#SQLRF06168).
You can work around this with [InsertChildXML](http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb04cre.htm#ADXDB4289) but it isn't terribly pretty:
```
select insertchildxml(xmlelement("hello"), '/hello', '@Max', null) from dual;
INSERTCHILDXML(XMLELEMENT("HELLO"),'/HELLO','@MAX',NULL)
--------------------------------------------------------------------------------
<hello Max=""/>
```
... and as you can see it collapses an empty node, but that's only a potential issue if you want this to look exactly as you showed - it's still valid XML. There is [an even uglier way around that](https://stackoverflow.com/a/7976567/266304) if you really need to.
That also suggests an alternative to @smnbbrv's `replace`:
```
select updatexml(xmlelement("hello", xmlattributes('$$IMPOSSIBLE-VALUE$$' as "Max")),
'/hello[@Max="$$IMPOSSIBLE-VALUE$$"]/@Max', null) from dual;
UPDATEXML(XMLELEMENT("HELLO",XMLATTRIBUTES('$$IMPOSSIBLE-VALUE$$'AS"MAX")),'/HEL
--------------------------------------------------------------------------------
<hello Max=""/>
```
which might be easier if your max attribute value is coming from data as you can NVL it to the impossible value. I'm not a fan of using magic values though really.
|
What about setting the attribute to some impossible value and then replacing it with the value you need (so, the empty string in your case)?
```
select replace(
XMLELEMENT("hello", xmlattributes('$$IMPOSSIBLE-VALUE$$' as "Max")).getStringVal(),
'$$IMPOSSIBLE-VALUE$$'
)
from dual;
```
I assume you will need the string value in the end anyway, so even if this `XMLELEMENT` is just an example of the problem and you have a big generated XML, you can still generate it first and then, finally, replace all the values with one command as shown above.
|
How to generate XML data in Oracle which has Empty string as an attribute
|
[
"",
"sql",
"xml",
"oracle",
""
] |
I need to find the highest-valued row in each group in a table, e.g., I want to group the following by Color and Shape, and then take the row with the highest Cost. E.g. for input
```
ID Color Shape Cost
-- ----- ----- ----
1 Red Round 45
2 Red Round 18
3 Red Square 13
4 Red Square 92
5 Green Round 25
6 Green Round 21
7 Green Triangle 20
8 Green Triangle 33
```
I want to get
```
ID Color Shape Cost
-- ----- ----- ----
1 Red Round 45
4 Red Square 92
5 Green Round 25
8 Green Triangle 33
```
How can I do this? Something that works on PL/SQL and T/SQL would be fantastic, although my immediate need is PL/SQL.
|
You can use `row_number` to partition on color and shape and then assign 1 as row number to the highest cost in that partition.
```
select id,color,shape,cost
from
(
select *,
row_number() over(partition by color,shape order by cost desc) as rn
from tablename
) t
where rn = 1;
```
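The `row_number()` answer can be checked directly against the question's sample data. A SQLite sketch (window functions need SQLite 3.25+), run from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE shapes (ID INTEGER, Color TEXT, Shape TEXT, Cost INTEGER)")
conn.executemany("INSERT INTO shapes VALUES (?, ?, ?, ?)", [
    (1, 'Red', 'Round', 45), (2, 'Red', 'Round', 18),
    (3, 'Red', 'Square', 13), (4, 'Red', 'Square', 92),
    (5, 'Green', 'Round', 25), (6, 'Green', 'Round', 21),
    (7, 'Green', 'Triangle', 20), (8, 'Green', 'Triangle', 33),
])
# Rank rows within each (Color, Shape) group by Cost descending,
# then keep only the top-ranked row of each group.
rows = conn.execute("""
SELECT ID, Color, Shape, Cost FROM (
    SELECT *, ROW_NUMBER() OVER (
        PARTITION BY Color, Shape ORDER BY Cost DESC) AS rn
    FROM shapes
) WHERE rn = 1
ORDER BY ID
""").fetchall()
```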
|
This should be a straightforward SELECT statement if you have your table set up - we'll call it table\_a:
```
SELECT color, shape, max(cost) as Cost
from table_a
group by color, shape
```
Not sure about the output allowing for Cost to be capitalized in your output - sometimes this depends on your SQL Syntax. (not possible in IMPALA SQL for instance)
|
Return highest-ranked row in each group
|
[
"",
"sql",
"t-sql",
"plsql",
""
] |
I am trying to create a while loop in SQL and it seems kind of complex. Here's what I need it to achieve:
1. Iterate through a single VARCHAR string (ex. '123')
2. If the nth character is in an even position in the string (ex. 2nd, 4th .... letter in the string), it must be added(SUM) to a base variable (Let's assume @z)
3. If the nth character is in an odd position in the string (ex. 1st, 3rd .... letter in the string), it must be multiplied by 2. If this newly generated value (Let's assume @y) is less than 10, it must be added(SUM) to the base variable (Still the same assumed @z). If @y is 10 or greater, we need to subtract 9 from @y before adding(SUM) it to @z
After iterating through the entire string, this should return a numeric value generated by the above process.
Here is what I've done so far, but I'm stuck now (Needless to say, this code does not work yet, but I think I'm heading in the right direction):
```
DECLARE @x varchar(20) = '12345'
DECLARE @p int = len(@x)
WHILE @p > 0
SELECT @x =
stuff(@x, @p, 1,
case when CONVERT(INT,substring(@x, @p, 1)) % 2 = 0 then CONVERT(INT, @x) + CONVERT(INT,substring(@x, @p, 1))
end), @p -= 1
RETURN @x;
```
PS. The input will always be 100% numeric values, but it is formatted as VARCHAR when I receive it.
**UPDATE**
The expected result for the sample string is 15
|
You can do this without using a loop. Here is a solution using Tally Table:
```
DECLARE @x VARCHAR(20) = '12345'
DECLARE @z INT = 0 -- base value
;WITH E1(N) AS( -- 10 ^ 1 = 10 rows
SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N)
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), -- 10 ^ 2 = 100 rows
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), -- 10 ^ 4 = 10,000 rows
CteTally(N) AS(
SELECT TOP(LEN(@x)) ROW_NUMBER() OVER(ORDER BY(SELECT NULL))
FROM E4
),
CteChars(N, num) AS(
SELECT
t.N, CAST(SUBSTRING(@x, t.N, 1) AS INT)
FROM CteTally t
WHERE t.N <= LEN(@x)
)
SELECT
SUM(
CASE
WHEN N % 2 = 0 THEN num
WHEN num * 2 < 10 THEN num * 2
ELSE (num * 2) - 9
END
) + @z
FROM CteChars
```
The `CTE`s up to `CteTally` generate a list of numbers from 1 to `LEN(@x)`. `CteChars` breaks `@x` character by character into separate rows. Then the final `SELECT` does a `SUM` based on the conditions.
```
OUTPUT : 15
```
|
See if the below helps you:
```
DECLARE @x varchar(20) = '12345'
DECLARE @p int = 1
DECLARE @result bigint=0;
DECLARE @tempval int =0;
WHILE @p <= len(@x)
BEGIN
SET @tempval = CONVERT(INT,substring(@x, @p, 1));
if(@p%2 = 1)
BEGIN
SET @tempval = @tempval * 2;
IF(@tempval >= 10) SET @tempval = @tempval - 9;
END
SET @result = @result + @tempval;
SET @p = @p + 1;
END;
PRINT @result;--This is the result
RETURN @x;
```
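Both answers implement the same per-digit rule, which is easy to sanity-check with a plain-Python restatement. Note that both use the 10-or-greater test rather than the question's literal "greater than 10", which is what produces the expected 15:

```python
def checksum(s: str) -> int:
    # Digits at odd (1-based) positions are doubled; doubled values of 10
    # or more have 9 subtracted, matching the CASE expression above.
    total = 0
    for i, ch in enumerate(s, start=1):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d >= 10:
                d -= 9
        total += d
    return total

result = checksum("12345")
```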
|
Complicated SQL while loop
|
[
"",
"sql",
"sql-server",
""
] |
I'm working in an Oracle environment.
In a 1:M table relationship I want to write a query that will bring me each row from the "1" table and **only 1 matching row** from the "many" table.
To give a made up example... ( \* = Primary Key/Foreign Key )
```
EMPLOYEE
*emp_id
name
department
PHONE_NUMBER
*emp_id
num
```
There are **many** phone numbers for **one** employee.
Let's say I wanted to return all employees and **only one** of their phone numbers. (Please forgive the far-fetched example. I'm trying to simulate a workplace scenario)
I tried to run:
```
SELECT emp.*, phone.num
FROM EMPLOYEE emp
JOIN PHONE_NUMBER phone
ON emp.emp_id = phone.emp_id
WHERE phone.ROWNUM <= 1;
```
It turns out (and it makes sense to me now) that ROWNUM only exists within the context of the results returned from the entire query. There is not a "ROWNUM" for each table's data set.
I also tried:
```
SELECT emp.*, phone.num
FROM EMPLOYEE emp
JOIN PHONE_NUMBER phone
ON emp.emp_id = phone.emp_id
WHERE phone.num = (SELECT MAX(num)
FROM PHONE_NUMBER);
```
That one just returned me one row total. I wanted the inner SELECT to run once for **each** row in EMPLOYEE.
I'm not sure how else to think about this. I basically want my result set to be the number of rows in the EMPLOYEE table and for each row the **first** matching row in the PHONE\_NUMBER table.
Obviously there are all sorts of ways to do this with procedures and scripts and such but I feel like there is a single-query solution in there somewhere...
Any ideas?
|
If you want only one phone number, then use `row_number()`:
```
SELECT e.*, p.num
FROM EMPLOYEE e JOIN
(SELECT p.*,
ROW_NUMBER() OVER (PARTITION BY emp_id ORDER BY emp_id) as seqnum
FROM PHONE_NUMBER p
) p
ON e.emp_id = p.emp_id and seqnum = 1;
```
Alternatively, you can use aggregation, to get the minimum or maximum value.
|
All of the above answers will work beautifully with the scenario you described.
But if you have some employees that are missing from the phone table, then you need to do a left outer join like the one below. (I faced a similar scenario where I also needed the isolated parents.)
```
EMP
---------
emp_id Name
---------
1 AA
2 BB
3 CC
PHONE
----------
emp_id no
1 7555
1 7777
2 5555
select emp.emp_id,ph.no from emp left outer join
(
select emp_id,no,
ROW_NUMBER() OVER (PARTITION BY emp_id ORDER BY emp_id) as rnum
FROM phone) ph
on emp.emp_id = ph.emp_id
where ph.rnum = 1 or ph.rnum is null
Result
EMP_ID NO
1 7555
2 5555
3 (null)
```
|
Limit the data set of a single table within a multi-table sql select statement
|
[
"",
"sql",
"oracle",
"join",
"rownum",
""
] |
I have the following mysql table:
```
CREATE TABLE test (id INT, _id INT, name VARCHAR(30), age INT);
INSERT INTO test
(id, _id, name, age) VALUES
(1, 1, 'Lorem', 20),
(2, 1, 'Ipsum', 21),
(3, 1, 'Dolor', 22),
(4, 1, 'Sit', 23),
(5, 1, 'Amet', 24),
(6, 1, 'Consectetur', 25),
(7, 1, 'Adipiscing', 26),
(8, 2, 'Elit', 27),
(9, 2, 'In', 28),
(10, 2, 'Non', 29),
(11, 2, 'Gravida', 30),
(12, 2, 'Erat', 31),
(13, 2, 'Tempor', 32),
(14, 2, 'Augue', 33);
```
I need a query to get the first and last records based on the `_id`. So the end result could be either this:
```
| id | _id | name | age |
| 1 | 1 | Lorem | 20 |
| 7 | 1 | Adipiscing | 26 |
| 8 | 2 | Elit | 27 |
| 14 | 2 | Augue | 33 |
```
or this:
```
| min_id | max_id | _id | first_name | last_name | first_age | last_age |
| 1 | 7 | 1 | Lorem | Adipiscing| 20 | 26 |
| 8 | 14 | 2 | Elit | Augue | 27 | 33 |
```
So far I tried using `group by` and `MAX` and `MIN` functions to get the `id`, but I have no idea how to get the `name` and the `age`.
|
You can use the following query:
```
SELECT t2.id AS min_id, t3.id AS max_id, t1._id,
t2.name AS first_name, t3.name AS last_name,
t2.age AS first_age, t3.age AS last_age
FROM (
SELECT _id, MIN(age) AS minAge, MAX(age) AS maxAge
FROM test
GROUP BY _id ) AS t1
INNER JOIN test AS t2 ON t2._id = t1._id AND t2.age = t1.minAge
INNER JOIN test As t3 ON t3._id = t1._id AND t3.age = t1.maxAge
```
This will give you the second result set. It assumes that there is *only one* min or max record per `_id`.
[**Demo here**](http://sqlfiddle.com/#!9/71cb1/10)
To get the first result set, you can use:
```
SELECT t2.*
FROM (
SELECT _id, MIN(age) AS minAge, MAX(age) AS maxAge
FROM test
GROUP BY _id ) AS t1
INNER JOIN test AS t2
ON t2._id = t1._id AND (t2.age = t1.minAge OR t2.age = t1.maxAge)
```
[**Demo here**](http://sqlfiddle.com/#!9/71cb1/12)
To handle the case of having multiple min, max records per `_id`, you can use:
```
SELECT MAX(CASE WHEN age = minAge THEN id END) AS min_id,
MAX(CASE WHEN age = maxAge THEN id END) AS max_id,
_id,
MAX(CASE WHEN age = minAge THEN name END) AS first_name,
MAX(CASE WHEN age = maxAge THEN name END) AS last_name,
MAX(CASE WHEN age = minAge THEN age END) AS first_age,
MAX(CASE WHEN age = maxAge THEN age END) AS last_age
FROM (
SELECT GROUP_CONCAT(t1.id) AS id, t1._id,
GROUP_CONCAT(t1.name) AS name, t1.age,
(SELECT MIN(age)
FROM test AS t2
WHERE t2._id = t1._id) AS minAge,
(SELECT MAX(age)
FROM test AS t2
WHERE t2._id = t1._id) AS maxAge
FROM test AS t1
GROUP BY _id, age ) AS t3
GROUP BY _id
```
[**Demo here**](http://sqlfiddle.com/#!9/e371d/5)
|
This will give the first result:
```
SELECT id, _id, name, age FROM test
WHERE id IN (SELECT MIN(id) FROM test GROUP BY _id)
OR id IN (SELECT MAX(id) FROM test GROUP BY _id)
```
|
How to get first and last records grouping by a certain column?
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I want to lookup values listed in a temp table:
So let us say:
```
Create Table #mylist
(
eserial nvarchar(35) Collate SQL_Latin1_General_CP850_CI_AS,
refdate datetime
)
Insert Into #mylist (eserial, refdate) Values ('A', '2015-09-15')
Insert Into #mylist (eserial, refdate) Values ('B', '2015-09-14')
Insert Into #mylist (eserial, refdate) Values ('C', '2015-09-13')
Insert Into #mylist (eserial, refdate) Values ('D', '2015-09-12')
```
I need the result to be the `Top 1` date less than the reference date.
And should be returned in the same sequence as is in the temporary table.
What I tried:
```
Select
lst.eserial,
lst.refdate,
app.CREATEDDATETIME
From #mylist lst
Outer Apply
(Select Top 1 rec.CREATEDDATETIME, rec.ESERIAL, rec.ITEMID
From TableSource rec
Where lst.eserial=rec.ESERIAL And rec.CREATEDDATETIME<lst.refdate
Order By rec.CREATEDDATETIME Desc
) As app
```
This works but it is slow. Also, if the number of rows is increased, it does not consistently preserve the sequence of `eserial`. I need the query to preserve the order in which I put the rows into the temporary table.
Again my expected output is simply:
[](https://i.stack.imgur.com/uopXj.png)
Where `eserial` is the same sequence as the temp table and `CREATEDDATETIME` is the maximum date less than the reference date. More like a conditional `Vlookup` if you know Excel.
|
It is not quite clear what you mean by
> maintain the sequence of the items in the temporary table
If you want the result ordered by `eserial`, then you have to add `ORDER BY eserial` to your query. Without `ORDER BY`, the resulting rows can be returned in any order. This applies to any method that you choose.
So, taking your last query as a basis, it will look like this:
```
Select
lst.eserial
,lst.refdate
,app.CREATEDDATETIME
From
#mylist lst
Outer Apply
(
Select Top 1 rec.CREATEDDATETIME
From TableSource rec
Where lst.eserial=rec.ESERIAL And rec.CREATEDDATETIME<lst.refdate
Order By rec.CREATEDDATETIME Desc
) As app
ORDER BY lst.eserial;
```
To make it work fast and efficiently add an index to `TableSource` on `(ESERIAL, CREATEDDATETIME)`. Order of columns in the index is important.
It is also important to know if there are any other columns that you use in `OUTER APPLY` query and how you use them. You mentioned column `AREAID` in the first variant in the question, but not in the last variant. If you do have more columns, then clearly show how you intend to use them, because the correct index would depend on it. The index on `(ESERIAL, CREATEDDATETIME)` is enough for the query I wrote above, but if you have more columns a different index may be required.
It would also help optimizer if you defined your temp table with a `PRIMARY KEY`:
```
Create Table #mylist
(
eserial nvarchar(35) Collate SQL_Latin1_General_CP850_CI_AS PRIMARY KEY,
refdate datetime
)
```
The primary key creates a unique clustered index.
One more important note. What are the types and collations of columns `ESERIAL` and `CREATEDDATETIME` in the main `TableSource` table? Make sure that the types and collations of the columns in your temp table match the main `TableSource` table. If a type is different (`varchar` vs. `nvarchar`, or `datetime` vs. `date`) or a collation is different, the index may not be used and the query will be slow.
**Edit**
You use the phrase "same sequence as the temp table" several times in the question, but it is not really clear what you mean by it. Your sample data doesn't help to resolve the ambiguity. The column name `eserial` also adds to the confusion. I can see two possible meanings:
1. Return rows from temp table ordered by values in `eserial` column.
2. Return rows from temp table in the same order as they were inserted.
My original answer implies (1): it returns rows from temp table ordered by values in `eserial` column.
If you want to preserve the order of rows as they were inserted into the table, you need to explicitly remember this order somehow. The easiest method is to add an `IDENTITY` column to the temp table and later order by this column. Like this:
```
Create Table #mylist
(
ID int IDENTITY PRIMARY KEY,
eserial nvarchar(35) Collate SQL_Latin1_General_CP850_CI_AS,
refdate datetime
)
```
And in the final query use `ORDER BY lst.ID`.
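SQLite has no `OUTER APPLY`, but the same "latest date strictly before the reference date, in insertion order" logic can be sketched there with a correlated `MAX()` subquery plus the identity-style `ID` column described above (sample serials and dates below are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mylist (ID INTEGER PRIMARY KEY AUTOINCREMENT,
                     eserial TEXT, refdate TEXT);
CREATE TABLE TableSource (ESERIAL TEXT, CREATEDDATETIME TEXT);
INSERT INTO mylist (eserial, refdate) VALUES
  ('B', '2015-09-14'), ('A', '2015-09-15');
INSERT INTO TableSource VALUES
  ('A', '2015-09-01'), ('A', '2015-09-20'),
  ('B', '2015-09-10'), ('B', '2015-09-13');
""")

# Correlated MAX() returns the latest date before refdate per serial;
# ORDER BY lst.ID preserves the insertion order ('B' was inserted first).
rows = conn.execute("""
SELECT lst.eserial, lst.refdate,
       (SELECT MAX(rec.CREATEDDATETIME)
        FROM TableSource rec
        WHERE rec.ESERIAL = lst.eserial
          AND rec.CREATEDDATETIME < lst.refdate) AS CREATEDDATETIME
FROM mylist lst
ORDER BY lst.ID
""").fetchall()
print(rows)
```

Note that `'2015-09-20'` for serial `A` is correctly excluded because it is not before that row's reference date.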
|
That's easy using an identity column. A query without `ORDER BY` is not guaranteed to return rows in any particular order in SQL Server.
```
Create Table #mylist
(
seqId int identity(1,1),
eserial nvarchar(35) Collate SQL_Latin1_General_CP850_CI_AS,
refdate datetime
)
```
Use the table freely and put `Order By seqId` at the end of your query
**Edit**
Use `MAX()` instead of `TOP 1` with `ORDER BY` if you have no clustered index on `ESERIAL`, `CREATEDDATETIME` on `TableSource`:
<https://stackoverflow.com/a/21420643/1287352>
```
Select
lst.eserial,
lst.refdate,
app.CREATEDDATETIME
From #mylist lst
Outer Apply
(
Select MAX(rec.CREATEDDATETIME) AS CREATEDDATETIME, rec.ESERIAL, rec.ITEMID
From TableSource rec
Where lst.eserial = rec.ESERIAL And rec.CREATEDDATETIME < lst.refdate
GROUP BY rec.ESERIAL, rec.ITEMID
) As app
ORDER BY lst.seqId
```
|
Pull data without altering the item sequence in the reference table
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I want to extract the string after the character '/' in a PostgreSQL SELECT query.
The field name is `source_path`, table name is `movies_history`.
Data Examples:
Values for source\_path:
* 184738/file1.mov
* 194839/file2.mov
* 183940/file3.mxf
* 118942/file4.mp4
And so forth. All the values for source\_path are in this format
* random\_number/filename.xxx
I need to get only the 'filename.xxx' string.
|
If your case is that simple (*exactly* one `/` in the string) use [`split_part()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER):
```
SELECT split_part(source_path, '/', 2) ...
```
**If** there can be *multiple* `/`, and you want the string after the *last* one, a simple and fast solution would be to process the string backwards with [`reverse()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER), take the first part, and `reverse()` again:
```
SELECT reverse(split_part(reverse(source_path), '/', 1)) ...
```
**Or** you could use the more versatile (and more expensive) [`substring()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) with a regular expression:
```
SELECT substring(source_path, '[^/]*$') ...
```
Explanation:
`[...]` .. encloses a list of characters to form a character class.
`[^...]` .. if the list starts with `^` it's the *inversion* (all characters not in the list).
`*` .. quantifier for 0-n times.
`$` .. anchor to end of string.
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_12&fiddle=098fe4d061eddcb311ee7f2a16718692)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/9eecb/4675)
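The POSIX character-class pattern explained above is not Postgres-specific, so its behavior is easy to verify with Python's `re` module (the sample paths below are illustrative, including a multi-slash and a slash-free case):

```python
import re

# [^/]*$ matches the longest run of non-slash characters anchored
# at the end of the string, i.e. everything after the last '/'.
paths = ["184738/file1.mov", "a/b/file3.mxf", "no_slash.mp4"]
tails = [re.search(r"[^/]*$", p).group() for p in paths]
print(tails)
```

When the string contains no `/` at all, the pattern simply matches the whole string.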
|
You need to use the [*substring*](http://www.postgresql.org/docs/9.1/static/functions-string.html) function
[**SQL FIDDLE**](http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/2989)
```
SELECT substring('1245487/filename.mov' from '%/#"%#"%' for '#');
```
Explanation:
```
%/
```
This means `%` (any text) followed by a `/`
```
#"%#"
```
each `#` is the placeholder defined in the trailing `for '#'` clause, and each needs an additional `"`
So you have `<placeholder> % <placeholder>`, and the function returns what is found between the two placeholders. In this case that is `%`, i.e. the rest of the string after `/`
**FINAL QUERY:**
```
SELECT substring(source_path from '%/#"%#"%' for '#')
FROM movies_history
```
|
Get string after '/' character
|
[
"",
"sql",
"regex",
"postgresql",
"pattern-matching",
""
] |
I am using Microsoft Access 2010 and I have a table `T_Offers` that looks like this:
```
Key ID Date Name Text
--- -- ---------- ----------- -----------
1 10 10/10/2015 Lorem Consectetur
2 10 10/10/2015 Ipsum Amet
3 11 27/09/2014 Dolor Sit
4 13 12/11/2013 Sit Dolor
5 14 11/07/2015 Amet Ipsum
6 14 12/07/2015 Consectetur Lorem
```
I need to get only one row of each ID (the one with the smallest date), so, for example, the result of this table would be:
```
Key ID Date Name Text
--- -- ---------- ----------- -----------
1 10 10/10/2015 Lorem Consectetur
3 11 27/09/2014 Dolor Sit
4 13 12/11/2013 Sit Dolor
5 14 11/07/2015 Amet Ipsum
```
This is one of the queries I've tried:
```
SELECT ID, name, text, MIN (date) AS minDate
FROM (SELECT ID, name, text, date
FROM T_Offers
GROUP BY ID, name, text, date
ORDER BY ID asc) as X
GROUP BY ID, name, text
```
This would work fine, but there's a little problem: if 2 offers with the same ID have the same date, the result table would duplicate the ID, and I don't want that to happen. Is there an alternative?
|
You can use `NOT EXISTS` to exclude all rows where another row with the same ID and an earlier date exists:
```
SELECT t1.Key, t1.ID, t1.Date, t1.Name, t1.Text
FROM t_offers AS t1
WHERE NOT EXISTS
( SELECT 1
FROM T_Offers AS t2
WHERE t2.ID = t1.ID
AND t2.Date < t1.Date
);
```
This will leave 1 row per ID, and it will be the row with the earliest date.
With regard to then removing duplicates where the first date is the same, I am not sure of your logic, but you may need to build in further checks which could get quite messy. In this case I have used `Key` to determine which of the two records should be returned.
```
SELECT t1.Key, t1.ID, t1.Date, t1.Name, t1.Text
FROM t_offers AS t1
WHERE NOT EXISTS
( SELECT 1
FROM T_Offers AS t2
WHERE t2.ID = t1.ID
AND ( t2.Date < t1.Date
OR (t2.Date = t1.Date AND t2.Key < t1.Key)
)
);
```
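Since `NOT EXISTS` is standard SQL, the tie-breaking version is easy to verify in an in-memory SQLite database (dates are shown as ISO strings here so they compare correctly as text; the data mirrors the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T_Offers ("Key" INTEGER, ID INTEGER, "Date" TEXT,
                       Name TEXT, Text TEXT);
INSERT INTO T_Offers VALUES
  (1, 10, '2015-10-10', 'Lorem', 'Consectetur'),
  (2, 10, '2015-10-10', 'Ipsum', 'Amet'),
  (3, 11, '2014-09-27', 'Dolor', 'Sit'),
  (5, 14, '2015-07-11', 'Amet',  'Ipsum'),
  (6, 14, '2015-07-12', 'Consectetur', 'Lorem');
""")

# Keep one row per ID: the earliest Date, with the lowest Key
# breaking ties when two rows share the same date.
rows = conn.execute("""
SELECT t1."Key", t1.ID, t1."Date"
FROM T_Offers AS t1
WHERE NOT EXISTS (
  SELECT 1 FROM T_Offers AS t2
  WHERE t2.ID = t1.ID
    AND (t2."Date" < t1."Date"
         OR (t2."Date" = t1."Date" AND t2."Key" < t1."Key"))
)
ORDER BY t1.ID
""").fetchall()
print(rows)
```

For ID 10, both rows share `2015-10-10`, so the `Key` tiebreak keeps only `Key = 1`.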
|
You need a select distinct query:
```
SELECT DISTINCT ID, name, text, MIN (date) AS minDate
FROM T_Offers
GROUP BY ID, name, text
ORDER BY ID asc;
```
|
Select only one row from a group by?
|
[
"",
"sql",
"database",
"ms-access-2010",
""
] |
I want to list all the tables in my DB, along with each table's column count and its count of (primary + foreign) key columns.
E.g. if table 1 contains 2 columns and one primary key, then the result should look like below:
```
Tables List Total columns Primary+foreign Key count
1 2 1
```
|
Try this
```
SELECT C.TABLE_NAME,
Count(C.COLUMN_NAME) AS TOTAL,
       Count (A.COLUMN_NAME) AS [PRIMARY+FOREIGN KEY COUNT]
FROM INFORMATION_SCHEMA.COLUMNS C
LEFT OUTER JOIN (SELECT DISTINCT TC.TABLE_NAME,
COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS TC
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS KCU
ON KCU.CONSTRAINT_SCHEMA = TC.CONSTRAINT_SCHEMA
AND KCU.CONSTRAINT_NAME = TC.CONSTRAINT_NAME
AND KCU.TABLE_SCHEMA = TC.TABLE_SCHEMA
AND KCU.TABLE_NAME = TC.TABLE_NAME
WHERE CONSTRAINT_TYPE IN( 'PRIMARY KEY', 'FOREIGN KEY' )) A
ON C.TABLE_NAME = A.TABLE_NAME
AND C.COLUMN_NAME = A.COLUMN_NAME
GROUP BY C.TABLE_NAME
```
|
with the help of [Information Schema](https://msdn.microsoft.com/en-in/library/ms186778.aspx)
```
-- All info
SELECT *
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
ORDER BY TABLE_NAME
--Count Info
SELECT
table_name,count(column_name) ColCount,count(constraint_name) ConstraintCount
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
group by table_name
--Seperate Count Info
SELECT
a.table_name,count(a.column_name) ColCount,count(a.constraint_name) ConstraintCount,
(SELECT
count(b.constraint_name)
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE b where b.constraint_name like 'PK%'
AND b.table_name = a.table_name
group by table_name ) PKCount,
(SELECT
count(b.constraint_name)
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE b where b.constraint_name like 'FK%'
AND b.table_name = a.table_name
group by table_name ) FKCount
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE a
group by a.table_name
```
|
List of Tables and total columns and count of primary and foreign keys
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to run this query but I am getting a **Missing Expression error**.
```
SELECT *
FROM (
SELECT ROW_NUMBER() OVER(order by 'rownum') row_num1,
*
FROM A
WHERE refresh_date = (
SELECT max(refresh_date)
FROM A
WHERE upper(flaw_table_name) = upper('B')
)
)
WHERE row_num1 >= 1
AND row_num1 <=20
```
Can you please help me figure out where I am going wrong?
|
You have to add a table alias; you cannot use `*` without a name if you're also selecting other columns or functions.
```
SELECT *
FROM (SELECT row_number() over(ORDER BY 'rownum') row_num1,
t.*
FROM a t
WHERE refresh_date =
(SELECT MAX(refresh_date)
FROM a
WHERE upper(flaw_table_name) = upper('B')))
WHERE row_num1 >= 1
AND row_num1 <= 20
```
|
You have a constant in the `order by` clause: `'rownum'` in quotes is a string literal, not the pseudocolumn. In addition, it is redundant to compute a row number there in the first place. Just use `rownum`.
I think you want:
```
SELECT A.*
FROM A
WHERE refresh_date = (SELECT max(refresh_date)
FROM A
WHERE upper(flaw_table_name) = upper('B')
) AND
rownum between 1 and 20;
```
The subquery is not necessary and Oracle is smart enough to evaluate the `rownum` expression *after* the other conditions in the `WHERE` clause.
|
Missing Expression for the Oracle query
|
[
"",
"sql",
"oracle",
"window-functions",
""
] |
Apologies if this has already been asked, but I can't find how to do this.
I have this table:
```
ID Col1 Col2 Col3 Col4 Col5 Col6
------------------------------------------------------
1 1a 1b 1c 1d 1e 1f
2 2a 2b 2c 2d 2e 2f
3 3a 3b 3c 3d 3e 3f
```
How do I turn it into a single column table with ALL the values from all 6 columns? PERFORMANCE IS IMPORTANT for what I need it for.
```
ColValue
------------------
1a
1b
1c
...
2a
2b
2c
...
3a
3b
3c
...
```
|
You can use `UNPIVOT`, i.e.:
```
SELECT ColName
FROM myTable UNPIVOT
( ColName FOR col IN ( Col1, Col2, Col3, Col4, Col5, Col6 ) ) AS unpvt;
```
|
You can achieve this with the help of [UNPIVOT](https://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx)
```
-- Multiple columns into rows
declare @Data TABLE (Id INT, Col1 VARCHAR(20)
, Col2 VARCHAR(20), Col3 VARCHAR(20), Col4 VARCHAR(20))
INSERT INTO @Data VALUES
(1 , '1a', '1b' ,'1c','1d'),
(2 , '2a', '2b' ,'2c','2d'),
(3 , '3a', '3b' ,'3c','3d')
SELECT Id,Value
FROM @Data t
UNPIVOT (Value FOR Alias IN (Col1, Col2, Col3,Col4))pp
```
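On engines without `UNPIVOT` (MySQL, PostgreSQL, SQLite), the same reshaping can be done with a plain `UNION ALL`, one branch per column. A minimal SQLite sketch with three columns (note one difference: `UNPIVOT` silently drops `NULL` cells, while `UNION ALL` keeps them):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (ID INTEGER, Col1 TEXT, Col2 TEXT, Col3 TEXT);
INSERT INTO myTable VALUES (1, '1a', '1b', '1c'), (2, '2a', '2b', '2c');
""")

# One SELECT per column, stitched together with UNION ALL.
rows = conn.execute("""
SELECT Col1 AS ColValue FROM myTable
UNION ALL SELECT Col2 FROM myTable
UNION ALL SELECT Col3 FROM myTable
ORDER BY ColValue
""").fetchall()
print([r[0] for r in rows])
```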
|
Consolidate multiple columns into single column
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to figure out a way in Hive to select data from a flat source and output into an array of named struct(s). Here is a example of what I am looking for...
Sample Data:
```
house_id,first_name,last_name
1,bob,jones
1,jenny,jones
2,sally,johnson
3,john,smith
3,barb,smith
```
Desired Output:
```
1 [{"first_name":"bob","last_name":"jones"},{"first_name":"jenny","last_name":"jones"}]
2 [{"first_name":"sally","last_name":"johnson"}]
3 [{"first_name":"john","last_name":"smith"},{"first_name":"barb","last_name":"smith"}]
```
I tried collect\_list and collect\_set but they only allow primitive data types. Any thoughts of how I might go about this in Hive?
|
I would use this [jar](https://github.com/klout/brickhouse/blob/master/src/main/java/brickhouse/udf/collect/CollectUDAF.java); it is a much better implementation of `collect` (and it handles complex data types).
**Query**:
```
add jar /path/to/jar/brickhouse-0.7.1.jar;
create temporary function collect as 'brickhouse.udf.collect.CollectUDAF';
select house_id
, collect(named_struct("first_name", first_name, "last_name", last_name))
from db.table
group by house_id
```
**Output**:
```
1 [{"first_name":"bob","last_name":"jones"}, {"first_name":"jenny","last_name":"jones"}]
2 [{"first_name":"sally","last_name":"johnson"}]
3 [{"first_name":"john","last_name":"smith"},{"first_name":"barb","last_name":"smith"}]
```
|
You can also use a workaround
```
select collect_list(full_name) full_name_list from (
select
concat_ws(',',
concat("first_name:",first_name),
concat("last_name:",last_name)
) full_name,
house_id
from house) a
group by house_id
```
|
Hive select data into an array of structs
|
[
"",
"sql",
"hadoop",
"hive",
"apache-pig",
"hiveql",
""
] |
I have a pretty large database of 20 million records of geographic points, and it's growing every day.
```
[id (int)] [group (int)] [latitude (double)] [longitude(double)]
[1] [1] [22.365598] [12.55678]
[2] [1] [22.365548] [12.55238]
[3] [2] [24.665348] [13.10238]
```
Now I want every point inside a given bounding box. The bounding box is about the size of South Africa, and the query should return about 7,000 results. But it takes 30 seconds to get the results.
The query is:
```
SELECT distinct(group), id from `table`
where (latitude between -95.22 and 36.458 and longitude between -51.939 and 103.833);
```
The index is [latitude, longitude] (btree).
How on earth can I speed this up?
---
**EDIT**
What I want to accomplish
The database contains a large set of polygons. Say for example National parks.
Each node of a polygon sits in this table. Now I want to check if a National park is within range of a given position.
The Id is the node id, the group is the polygon it belongs to, and the latitude and longitude is the position of the node.
---
When I don't use distinct, the query finishes in 3 seconds but returns 900,000 results, which is a lot to handle in the rest of the code.
---
**Solution**
As Gordon Linoff says in his answer: it is a really large surface. The query is used for some really detailed results. With this large a surface I shouldn't use all the nodes from all polygons, but a centroid of each polygon instead.
When I need detailed results for small surfaces, this query runs fast enough.
So I think I'll stick with that.
|
As documented under [The Range Access Method for Multiple-Part Indexes](https://dev.mysql.com/doc/en/range-optimization.html#range-access-multi-part):
> The optimizer attempts to use additional key parts to determine the interval as long as the comparison operator is [`=`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_equal), [`<=>`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_equal-to), or [`IS NULL`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_is-null). If the operator is [`>`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_greater-than), [`<`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_less-than), [`>=`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_greater-than-or-equal), [`<=`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_less-than-or-equal), [`!=`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_not-equal), [`<>`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_not-equal), [`BETWEEN`](https://dev.mysql.com/doc/en/comparison-operators.html#operator_between), or [`LIKE`](https://dev.mysql.com/doc/en/string-comparison-functions.html#operator_like), the optimizer uses it but considers no more key parts.
In other words, MySQL uses your index only to find records whose `latitude` falls within the specified range—it then fetches those records from the table and scans through them to perform the filter on `longitude`.
The reason that MySQL does this is obvious if you consider how a [B-tree](https://en.wikipedia.org/wiki/B-tree) is structured:
```
Bd
________/ \_______
/ \
Ad Cd
__/ \__ __/ \__
/ \ / \
Ab Bb Cb Db
/ \ / \ / \ / \
Aa Ac Ba Bc Ca Cc Da Dc
```
Filtering the first key part for a range (e.g. where the first character is `BETWEEN 'B' AND 'C'` in the example above, but the latitude criterion in your case) is very simple, because the tree is already sorted with respect to the first key part:
```
Bd
________/ \_______
/ \
\ Cd
\__ __/
\ /
Bb Cb
/ \ / \
Ba Bc Ca Cc
```
But the resulting pruned tree cannot help when filtering on the second key part (e.g. where the second character is `BETWEEN 'b' AND 'c'` in this example, but the longitude criterion in your case) because it is *not* sorted with respect to the second key part. By contrast, had the first key part been filtered for an *exact* match (rather than a range) then the resulting pruned tree *would* then already be sorted by the second key part.
Thus B-trees cannot help so much with locating multidimensional ranges. The [R-tree](https://en.wikipedia.org/wiki/R-tree) is an alternative data structure that is much better suited to problems of this sort. MySQL is capable of creating R-tree indexes using its [spatial extensions](https://dev.mysql.com/doc/en/spatial-extensions.html):
1. Create a new column of a [spatial data type](https://dev.mysql.com/doc/en/spatial-datatypes.html) (e.g. `POINT`) that will hold your coordinate data and [index](https://dev.mysql.com/doc/en/creating-spatial-indexes.html) it:
```
ALTER TABLE `table`
ADD coordinates POINT,
ADD SPATIAL INDEX (coordinates);
```
2. Populate that column from your existing data:
```
UPDATE `table` SET coordinates = Point(longitude, latitude);
```
You may want to define triggers and/or views to assist with further migration.
3. Perform your search:
```
SELECT DISTINCT `group`, id
FROM `table`
WHERE MBRContains(
MultiPoint(Point(-51.939, -95.22), Point(103.833, 36.458)),
coordinates
)
```
What's particularly nice about this approach is that, as of MySQL 5.6.1, you can [use object shapes](https://dev.mysql.com/doc/en/spatial-relation-functions-object-shapes.html) to perform even more precise searches: e.g. define polygons that exactly represent national boundaries.
4. Update your application to use this new column, for example:
```
SELECT X(coordinates) AS longitude, Y(coordinates) AS latitude FROM `table`
```
You may want to define triggers and/or views to assist with the migration.
5. Drop the old columns:
```
ALTER TABLE `table` DROP longitude, DROP latitude;
```
However, you should note that MySQL's spatial extensions use Euclidean geometry (whereas, obviously, the Earth is spherical): this shouldn't affect the above operation, but be wary of using it to perform calculations such as distance.
|
First, parentheses don't matter for `distinct`. So, just write the query as:
```
SELECT distinct `group`, id
from `table`
where latitude between -95.22 and 36.458 and
longitude between -51.939 and 103.833;
```
This type of query -- with two `between`s -- is not really amenable to indexes. You *can* try an index on `latitude, longitude` or `longitude, latitude`, and it might offer some small increment in speed.
A better approach is to use spatial indexes. [Here](https://dev.mysql.com/doc/refman/5.6/en/spatial-extensions.html) is the place to start learning about them.
However, even a spatial index is not likely to be much help. The areas in your query account for about 1/6 of the earth's surface. If your point are evenly distributed, then this is over 3 million records that need to be aggregated (for the `select distinct`). You probably won't have much luck getting really good performance for this query.
|
Speeding up this mysql query
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have six different select statements used for SSRS reporting purpose.
Below are two statements out of those. I need help on how to do a `Union All` for the below and combine all the statements into one.
```
DECLARE @FromDate AS DATE='04-Aug-2015'
DECLARE @ToDate AS DATE='05-Aug-2015'
SELECT
A.LBrCode AS BranchCode,
(SELECT B.Name
FROM D001003 B
WHERE A.LBrCode = B.PBrCode) AS BranchName,
C.PrdCd AS Product,
SUM(D.FcyTrnAmt) AS Amount
FROM
D009022 A
INNER JOIN
D009021 C ON substring(A.PrdAcctId, 1, 8) = C.PrdCd
AND A.LBrCode = C.LBrCode
LEFT JOIN
D009040 D ON A.PrdAcctId = D.VcrAcctId
AND substring(D.VcrAcctId, 1, 8) = C.PrdCd
AND A.LBrCode = D.LBrCode
WHERE
A.AcctStat <> 3
AND A.DateOpen >= @FromDate
AND A.DateOpen <= @ToDate
AND C.ModuleType = 11
AND D.DrCr = 'D'
AND D.CanceledFlag <> 'C'
GROUP BY
A.LBrCode, C.PrdCd
ORDER BY
A.LBrCode
UNION ALL
SELECT
A.LBrCode AS BranchCode,
(SELECT B.Name FROM D001003 B WHERE A.LBrCode = B.PBrCode) AS BranchName,
C.PrdCd AS Product,
SUM(A.ActTotBalFcy) AS Balance
FROM
D009022 A
INNER JOIN
D009021 C ON substring(A.PrdAcctId, 1, 8) = C.PrdCd
AND A.LBrCode = C.LBrCode
WHERE
C.ModuleType = 11
AND A.AcctStat <> 3
AND A.DateOpen >= @FromDate
AND A.DateOpen <= @ToDate
GROUP BY
A.LBrCode, C.PrdCd
ORDER BY
A.LBrCode
```
|
You can only apply one `ORDER BY` clause to affect the order of results, and it has to apply to the entire result set. If, as I suspect, you want all of the results from the top query to appear before results from the bottom query, you need to add another column to the data to allow that to happen:
```
SELECT A.LBrCode AS BranchCode,
(SELECT B.Name FROM D001003 B WHERE A.LBrCode=B.PBrCode) AS BranchName,
C.PrdCd AS Product,
sum(D.FcyTrnAmt) AS Amount,
1 as ResultSet
FROM D009022 A INNER JOIN D009021 C
ON substring(A.PrdAcctId,1,8)=C.PrdCd
AND A.LBrCode=C.LBrCode
LEFT JOIN D009040 D
ON A.PrdAcctId=D.VcrAcctId
AND substring(D.VcrAcctId,1,8)=C.PrdCd
AND A.LBrCode=D.LBrCode
WHERE A.AcctStat <> 3 AND A.DateOpen>=@FromDate AND A.DateOpen<=@ToDate
AND C.ModuleType=11
AND D.DrCr='D'
AND D.CanceledFlag<>'C'
GROUP BY A.LBrCode, C.PrdCd
--ORDER BY A.LBrCode
UNION ALL
SELECT A.LBrCode AS BranchCode,
(SELECT B.Name FROM D001003 B WHERE A.LBrCode=B.PBrCode) AS BranchName,
C.PrdCd AS Product,
sum(A.ActTotBalFcy) AS Balance,
2
FROM D009022 A INNER JOIN D009021 C
ON substring(A.PrdAcctId,1,8)=C.PrdCd
AND A.LBrCode=C.LBrCode
WHERE C.ModuleType=11
AND A.AcctStat <> 3
AND A.DateOpen>=@FromDate AND A.DateOpen<=@ToDate
GROUP BY A.LBrCode, C.PrdCd
ORDER BY ResultSet,BranchCode
```
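The discriminator-column trick is portable, so a tiny SQLite demo (with a made-up one-column grouping) shows how the constant tag keeps each half of the union together under a single final `ORDER BY`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (branch TEXT, amount INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [("B", 5), ("A", 3)])

# Tag each half of the union with a constant, then sort on it last:
# all ResultSet=1 rows (sums) come before all ResultSet=2 rows (counts).
rows = conn.execute("""
SELECT branch, SUM(amount) AS val, 1 AS ResultSet FROM t GROUP BY branch
UNION ALL
SELECT branch, COUNT(*), 2 FROM t GROUP BY branch
ORDER BY ResultSet, branch
""").fetchall()
print(rows)
```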
|
For a union to work, both queries must have the same number of columns with compatible data types; you may want to convert column data types to ensure they match. For example, you may have issues with your Amount and Balance columns if their data types are incompatible.
You cannot order each individual statement in the union, this does not make sense as you will be producing a single output.
But, with these points in mind (and only one final `ORDER BY`), your query should work.
```
declare @test table (a int, b varchar(10))
insert into @test values (10,'test1')
insert into @test values (20,'test2')
select
a, b
from
@test
UNION ALL
select
a, b
from
@test
order by a
```
|
Union All Help Needed
|
[
"",
"sql",
"sql-server",
""
] |
I have been trying to develop a query to solve a problem, but it's been hard.
Table 1:
```
+------+----+
| NAME | ID |
+------+----+
| A | 1 |
| A | 2 |
| B | 1 |
| B | 5 |
| C | 8 |
+------+----+
```
Table 2:
```
+------+----+
| NAME | ID |
+------+----+
| A | 1 |
| A | 4 |
| B | 3 |
| B | 5 |
| D | 9 |
+------+----+
```
From these tables, I need to return every row from table 2 whose name exists in table 1 but whose ID does not (for that name).
So, in this example, the result should be:
```
+------+----+
| NAME | ID |
+------+----+
| A | 4 |
| B | 3 |
+------+----+
```
|
You might wanna try this:
EDIT: replaced table1 and table2 with simple subqueries in a WITH clause.
```
WITH table1 AS
(
SELECT
DECODE(LEVEL,1, 'A',2, 'A',3, 'B',4, 'B',5, 'C') AS name
,DECODE(LEVEL,1, 1,2, 2,3, 1,4, 5,5, 8) AS id
FROM
dual
CONNECT BY LEVEL < 6
)
,table2 AS
(
SELECT
DECODE(LEVEL,1, 'A',2, 'A',3, 'B',4, 'B',5, 'D') AS name
,DECODE(LEVEL,1, 1,2, 4,3, 3,4, 5,5, 9) AS id
FROM
dual
CONNECT BY LEVEL < 6
)
SELECT
t2.id
,t2.name
FROM
table1 t1
,table2 t2
WHERE
t1.name = t2.name -- here we take all the records from table2, which have the same names as in table1
MINUS -- then we "subtract" the records that have both the same name and id in both tables
SELECT
t2.id
,t2.name
FROM
table1 t1
,table2 t2
WHERE
t1.name = t2.name
AND t1.id = t2.id
```
|
I would do:
```
with t1 as (select 'A' name, 1 id from dual union all
select 'A' name, 2 id from dual union all
select 'B' name, 1 id from dual union all
select 'B' name, 5 id from dual union all
select 'C' name, 8 id from dual),
t2 as (select 'A' name, 1 id from dual union all
select 'A' name, 4 id from dual union all
select 'B' name, 3 id from dual union all
select 'B' name, 5 id from dual union all
select 'D' name, 9 id from dual)
select name, id
from t2
where name in (select name from t1)
minus
select name, id
from t1;
NAME ID
---- ----------
A 4
B 3
```
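Oracle's `MINUS` is spelled `EXCEPT` in SQL Server, PostgreSQL, and SQLite, so the same logic is easy to verify outside Oracle; a SQLite sketch using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (name TEXT, id INTEGER);
CREATE TABLE t2 (name TEXT, id INTEGER);
INSERT INTO t1 VALUES ('A',1),('A',2),('B',1),('B',5),('C',8);
INSERT INTO t2 VALUES ('A',1),('A',4),('B',3),('B',5),('D',9);
""")

# Rows of t2 whose name exists in t1, minus exact (name, id) matches.
rows = conn.execute("""
SELECT name, id FROM t2
WHERE name IN (SELECT name FROM t1)
EXCEPT
SELECT name, id FROM t1
ORDER BY name
""").fetchall()
print(rows)
```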
|
Return from Table 2 different Ids from same names on Table1
|
[
"",
"sql",
"oracle",
"inner-join",
"outer-join",
""
] |
Using Microsoft SQL Server, Table1 has customer requests and table2 has a resolution code (so we can see why customers are writing in, and what we did to resolve their issue). Sometimes reps forget to add the resolution code, so I'm trying to run a query that tells me each rep's name, how many closed issues are missing a resolution code, how many issues they closed in total, and their % of closed issues missing a resolution code versus total closed issues. For example:
```
Agent Name | Missing Resolution Codes | Total issues closed | % of
----------------------------------------------------------------------
Mary | 12 | 120 | 10.00%
Bob | 8 | 704 | 1.14%
```
The query to show missing resolution codes works great. The query to show total issues closed works great. Whenever I try to combine them, I either get this:
(with UNION)
```
Mary | 12
Mary | 120
Bob | 8
Bob | 704
```
or if I try `from (select...) t1, (select...) t2`, I get this:
```
Mary | 12 | 120
Mary | 12 | 704
Bob | 8 | 120
Bob | 8 | 704
```
If I add a Group By at the end, I get all kinds of errors.
The two queries on their own are as follows:
Missing resolution code:
```
select i.assigned_to, count(i.assigned_to)
from customer_issues as i
left join resolution_codes as r on i.issue_ID = r.issue_ID
where r.issue_id is null
and i.status='closed'
group by i.assigned_to
```
Total closed issues:
```
select assigned_to, count(assigned_to)
from customer_issues
where status='closed'
group by assigned_to
```
Appreciate any help you can provide!
EDIT: Just to clarify, the resolution codes are in a different table because one customer issue could contain several problems and several solutions. So while the customer\_issues table does have an indicator that the issue itself is closed, the resolution\_codes table shows the different problems the customer had, which ones we could solve, which ones we couldn't, etc.
|
```
select t1.assigned_to, t1.missing, t2.total from
(
select i.assigned_to as assigned_to, count(i.assigned_to) as missing
from customer_issues as i
left join resolution_codes as r on i.issue_ID = r.issue_ID
where r.issue_id is null
and i.status='closed'
group by i.assigned_to
) t1 INNER JOIN
(
select assigned_to, count(assigned_to) as total
from customer_issues
where status='closed'
group by assigned_to
) t2
ON t1.assigned_to=t2.assigned_to
```
|
Since the only difference between the two queries seems to be the left join and the null filter, you can do it all in one query.
This should work I think:
```
;with source as (
select
i.assigned_to as [Agent Name],
count(r.issue_id) as [Resolution Codes],
count(i.assigned_to) as [Total issues closed]
from customer_issues as i
left join resolution_codes as r on i.issue_ID = r.issue_ID
where i.status='closed'
group by i.assigned_to
)
select
[Agent Name],
[Total issues closed]-[Resolution Codes] as [Missing Resolution Codes],
[Total issues closed],
([Total issues closed]-[Resolution Codes])*100.0/[Total issues closed] as [% of]
from source;
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/c6124/2)
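The single-pass approach hinges on `COUNT(column)` skipping `NULL`s from the unmatched side of the left join. A small SQLite sketch (with made-up issue data) shows both counts coming from one scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_issues (issue_ID INTEGER, assigned_to TEXT, status TEXT);
CREATE TABLE resolution_codes (issue_ID INTEGER);
INSERT INTO customer_issues VALUES
  (1, 'Mary', 'closed'), (2, 'Mary', 'closed'), (3, 'Bob', 'closed');
INSERT INTO resolution_codes VALUES (1), (3);
""")

# COUNT(r.issue_ID) skips NULLs from unmatched left-join rows, so
# total minus matched gives the "missing resolution code" count.
rows = conn.execute("""
SELECT i.assigned_to,
       COUNT(i.assigned_to) - COUNT(r.issue_ID) AS missing,
       COUNT(i.assigned_to) AS total
FROM customer_issues i
LEFT JOIN resolution_codes r ON i.issue_ID = r.issue_ID
WHERE i.status = 'closed'
GROUP BY i.assigned_to
ORDER BY i.assigned_to
""").fetchall()
print(rows)
```

Here Mary's issue 2 has no resolution code, so she shows 1 missing out of 2 closed.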
|
query the same table twice to get different numbers in one query
|
[
"",
"sql",
"sql-server",
""
] |
I already have values in a temp table and I want to insert them into my table.
I follow this syntax
```
IF NOT EXISTS (SELECT 1 FROM ABC abc JOIN #Temp t ON abc.ID = t.ID)
insert into MyTable(Id,Name)
select values (t.ID, t.Name)
From t
```
I just have the name `t` as an alias I created in a condition before this insert.
Is this correct? Some people use `@` etc. I am confused.
|
Correct syntax:
```
insert into MyTable(Id,Name)
select t.ID, t.Name
From #temp t
```
Always [read the manual](https://technet.microsoft.com/en-us/library/dd776381%28v=sql.105%29.aspx)
|
You can check this
```
insert into Table1(Id,Name)
select ID,Name
From t
```
|
How to insert into a table from temp table?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"temp-tables",
""
] |
I'm trying to update a table called `incode_warrants` and set the `warn_docket_no` to the `viol_docket_no` from the `incode_violations` table.
I have the following SQL query in Postgres 9.3, but when it fires I get the following error:
> ```
> Error : ERROR: relation "iw" does not exist
> LINE 1: update iw
> ```
I'm more of an Active Record person so my raw SQL skills are seriously lacking. I was wondering if anyone could help point me in the right direction on how to get this query right.
```
update iw
set iw.warn_docket_no = iv.viol_docket_no
from incode_warrants as iw
INNER JOIN incode_warrantvs as iwvs
on iw.warn_rid = iwvs.warnv_rid
INNER JOIN incode_violations as iv
ON iv.viol_citation_no = iwvs.warnv_citation_no and iv.viol_viol_no = iwvs.warnv_viol_no
```
|
The same as valid `UPDATE` statement in Postgres:
```
UPDATE incode_warrants iw
SET warn_docket_no = iv.viol_docket_no
FROM incode_warrantvs iwvs
JOIN incode_violations iv ON iv.viol_citation_no = iwvs.warnv_citation_no
AND iv.viol_viol_no = iwvs.warnv_viol_no
WHERE iw.warn_rid = iwvs.warnv_rid
-- AND iw.warn_docket_no IS DISTINCT FROM iv.viol_docket_no -- see below
;
```
You cannot just use a table alias in the `FROM` clause as target table in the `UPDATE` clause. The (one!) table to be updated comes right after `UPDATE` keyword (if we ignore a possible `ONLY` keyword in between). You can add an alias there if you want. That's the immediate cause of your error message, but there's more.
The column to be updated is always from the one table to be updated and cannot be table-qualified.
You don't need to repeat the target table in the `FROM` clause - except for special cases like this:
* [PostgreSQL: update with left outer self join ignored](https://stackoverflow.com/questions/8766763/postgresql-update-with-left-outer-self-join-ignored/8766815#8766815)
This optional addition can avoid pointless cost by suppressing updates that do not change anything:
```
AND iw.warn_docket_no IS DISTINCT FROM iv.viol_docket_no
```
See:
* [How do I (or can I) SELECT DISTINCT on multiple columns?](https://stackoverflow.com/questions/54418/how-do-i-or-can-i-select-distinct-on-multiple-columns/12632129#12632129)
More in the excellent [manual on `UPDATE`](https://www.postgresql.org/docs/current/sql-update.html).
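For engines without `UPDATE ... FROM`, the same join-update can be phrased as a correlated subquery. A portable sketch using Python's sqlite3, with the schema trimmed to the columns involved and sample rows invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE incode_warrants   (warn_rid INTEGER, warn_docket_no TEXT);
CREATE TABLE incode_warrantvs  (warnv_rid INTEGER, warnv_citation_no TEXT, warnv_viol_no INTEGER);
CREATE TABLE incode_violations (viol_citation_no TEXT, viol_viol_no INTEGER, viol_docket_no TEXT);
INSERT INTO incode_warrants   VALUES (1, NULL);
INSERT INTO incode_warrantvs  VALUES (1, 'C-100', 1);
INSERT INTO incode_violations VALUES ('C-100', 1, 'D-777');
""")
# Portable equivalent of UPDATE ... FROM: one correlated subquery per target row,
# guarded by EXISTS so unmatched warrants are left untouched.
conn.execute("""
UPDATE incode_warrants
SET warn_docket_no = (
    SELECT iv.viol_docket_no
    FROM incode_warrantvs iwvs
    JOIN incode_violations iv
      ON iv.viol_citation_no = iwvs.warnv_citation_no
     AND iv.viol_viol_no     = iwvs.warnv_viol_no
    WHERE iwvs.warnv_rid = incode_warrants.warn_rid
)
WHERE EXISTS (
    SELECT 1 FROM incode_warrantvs iwvs
    WHERE iwvs.warnv_rid = incode_warrants.warn_rid
)
""")
docket = conn.execute("SELECT warn_docket_no FROM incode_warrants").fetchone()[0]
```

The Postgres `FROM`-clause form above is usually faster, since the join is evaluated once rather than per row.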
|
Your query should look like this:
```
UPDATE incode_warrants
SET warn_docket_no = incode_violations.viol_docket_no
FROM incode_violations
WHERE incode_violations.viol_citation_no = incode_warrants.warnv_citation_no
AND incode_violations.viol_viol_no = incode_warrants.warnv_viol_no;
```
You don't need any other join. With this query you just update a column in one table with values from a column from another table. Of course, it updates only when `WHERE` condition is true.
|
UPDATE statement with multiple joins in PostgreSQL
|
[
"",
"sql",
"postgresql",
"sql-update",
"inner-join",
""
] |
I want to do something like this:
```
Select
(select sum(monkey_value) from mt where monkey_weight > 20) ,
(select sum(monkey_value) from mt where monkey_weight > 30)
from MonkeyTable mt where monkeySanityLevel > 10
```
but I can't use `mt` in the subselects. What I've done right now is declare a table, pass the parent result into it, and right after that run the subselects against it.
Are there any smarter ways so I can avoid inserting rows into a temp table?
|
How about
```
Select
Sum(case when monkey_weight > 20 then monkey_value else 0 end) as WT20,
Sum(case when monkey_weight > 30 then monkey_value else 0 end) as WT30
from MonkeyTable mt where monkeySanityLevel > 10
```
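A runnable sketch of this conditional-sum approach using Python's sqlite3 (sample values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mt (monkey_value INTEGER, monkey_weight INTEGER, monkeySanityLevel INTEGER);
INSERT INTO mt VALUES (5, 25, 11), (7, 35, 11), (9, 15, 11), (100, 40, 5);
""")
row = conn.execute("""
SELECT SUM(CASE WHEN monkey_weight > 20 THEN monkey_value ELSE 0 END) AS wt20,
       SUM(CASE WHEN monkey_weight > 30 THEN monkey_value ELSE 0 END) AS wt30
FROM mt
WHERE monkeySanityLevel > 10
""").fetchone()
```

The single `WHERE` filter applies to both sums, which is exactly what the two original subselects were missing.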
|
following should work:
```
select sum(case when monkey_weight > 20 then monkey_value end) as monkey_weight_20,
sum(case when monkey_weight > 30 then monkey_value end) as monkey_weight_30
from MonkeyTable mt
where monkeySanityLevel > 10
```
|
How to use parent result in sub selects?
|
[
"",
"sql",
"sql-server",
""
] |
I want to concatenate two dates with their times like below in SQL Server 2008. Something like this:
```
2015-09-09 08:30 - 2015-09-09 09:30
```
I tried this method but it didn't work, and I used casting as well.
```
CONVERT(DATETIME, CONVERT(CHAR(8), S.StartTime, 112)+ '-' + CONVERT(CHAR(8), S.endtime, 108)) AS 'OccupiedTime'
```
It is showing a result like this:
```
2015-09-09 09:30:00:000
```
|
```
CONVERT(CHAR(16), s.StartTime, 120) + '-' +
CONVERT(CHAR(16), s.EndTime, 120) AS OccupiedTime
```
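Style 120 is `yyyy-mm-dd hh:mi:ss`, so keeping the first 16 characters yields exactly `yyyy-mm-dd hh:mi`. For comparison, the equivalent formatting in Python:

```python
from datetime import datetime

start = datetime(2015, 9, 9, 8, 30)
end = datetime(2015, 9, 9, 9, 30)
# CONVERT(CHAR(16), dt, 120) keeps 'yyyy-mm-dd hh:mi'; strftime equivalent:
occupied = start.strftime("%Y-%m-%d %H:%M") + " - " + end.strftime("%Y-%m-%d %H:%M")
```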
|
You need 2 parts of the date: the date only, plus the time. You can build 2 strings and concatenate them:
```
SELECT
REPLACE(CONVERT(VARCHAR(50),s.StartTime,103),'/','-') + ' ' +
CONVERT(VARCHAR(5),s.StartTime,114) + ' - ' +
REPLACE(CONVERT(VARCHAR(50),s.EndTime,103),'/','-') + ' ' +
CONVERT(VARCHAR(5),s.EndTime,114) AS OccupiedDateTime
```
You can make a quick check of how it looks using:
```
SELECT
REPLACE(CONVERT(VARCHAR(50),GETDATE(),103),'/','-') + ' ' +
CONVERT(VARCHAR(5),GETDATE(),114) + ' - ' +
REPLACE(CONVERT(VARCHAR(50),GETDATE(),103),'/','-') + ' ' +
CONVERT(VARCHAR(5),GETDATE(),114) AS OccupiedDateTime
```
|
Concatenate Two dates and their times in SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"datetime",
"stored-procedures",
""
] |
I have been working on a query (in SQL Server T-SQL) which pads the left of a number with 0's so the output is always 5 digits.
So:
```
Select MuNumber From Mytable
```
for data 11,011,2132,1111
```
Creates output like
00011
02134
01111
```
I tried the LPAD function, but the number of 0's can be different:
if MuNumber is 1 we need 0000, and if MuNumber is 34 we need 000.
|
Assuming that MuNumber is VARCHAR, simply use `RIGHT`:
```
SELECT RIGHT('00000' + MuNumber, 5)
FROM Mytable
```
Otherwise you need to convert it first
```
SELECT RIGHT('00000' + CONVERT(VARCHAR(5), MuNumber), 5)
FROM Mytable
```
And in general you can use this pattern:
```
DECLARE @num INT = 10;
SELECT RIGHT(REPLICATE('0', @num) + CONVERT(VARCHAR(5), MuNumber), @num)
FROM Mytable
```
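The same prepend-then-take-rightmost trick, sketched in Python for illustration (sample inputs invented):

```python
def pad5(n):
    # Same trick as RIGHT('00000' + value, 5): prepend zeros, keep the last 5 chars.
    return ("00000" + str(n))[-5:]

padded = [pad5(n) for n in (1, 34, 2134)]
```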
|
Try this
```
select right('00000'+cast(col as varchar(5)),5) from table
```
|
Query to pad left of a field with 0's
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have table name `Sample` with the columns `Id` and `Position`.
```
select ID ,POSITION
from SAMPLE
order by 1
```
Sample :
```
ID Position
1 GEN2
1 GEN1
2 GEN1
2 GEN4
2 GEN2
2 GEN3
3 GEN1
4 GEN1
5 GEN1
5 GEN1
5 GEN1
5 GEN4
6 GEN1
```
Here I need to select the records based on the below condition...
If the last record of every unique `id` has the value `GEN1`, then I need to select it.
So I expect the result set as below:
```
ID Position
1 GEN1
3 GEN1
4 GEN1
6 GEN1
```
Since ids 2 and 5 don't have the value `GEN1` in their most recent record, I ignore them.
|
If you have a `history_id`, you can use `row_number()`:
```
select h.*
from (select h.*,
row_number() over (partition by id order by history_id desc) as seqnum
from history h
) h
where seqnum = 1 and Position = 'GEN1';
```
|
Use `row_number()` partitioned by `id` and ordered by `history_id` in descending order in a common table expression:
```
WITH CTE AS (
SELECT
rn = ROW_NUMBER() OVER (PARTITION BY ID ORDER BY History_id DESC)
, *
FROM Sample
)
SELECT * FROM CTE
WHERE Position = 'GEN1' AND rn = 1;
```
An alternative solution is to use a negated `exists` predicate with a correlated subquery:
```
SELECT ID, Position
FROM Sample s
WHERE Position = 'GEN1'
AND NOT EXISTS (
SELECT 1
FROM Sample
WHERE Position <> 'GEN1'
AND History_id > s.History_id AND ID = s.ID
);
```
The version using `row_number` most likely performs better when supported by a proper index (something like: `(id, history_id desc, position)` ).
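A runnable sketch of the `NOT EXISTS` variant using Python's sqlite3, with the `history_id` column both answers assume and rows abbreviated from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample (history_id INTEGER, id INTEGER, position TEXT);
INSERT INTO sample VALUES
 (1, 1, 'GEN2'), (2, 1, 'GEN1'),
 (3, 2, 'GEN1'), (4, 2, 'GEN4'),
 (5, 3, 'GEN1');
""")
# Keep GEN1 rows that have no later non-GEN1 row for the same id.
rows = conn.execute("""
SELECT id, position
FROM sample s
WHERE position = 'GEN1'
  AND NOT EXISTS (
      SELECT 1 FROM sample
      WHERE position <> 'GEN1'
        AND history_id > s.history_id AND id = s.id)
ORDER BY id
""").fetchall()
```

Id 2 drops out because its `GEN1` row is followed by a `GEN4` row.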
|
Filter SQL Query based on Last Values in the resultset
|
[
"",
"sql",
"sql-server",
""
] |
I have a table in my database with the following SQL code:
```
CREATE TABLE `table` (
`id` INTEGER PRIMARY KEY AUTO_INCREMENT NOT NULL,
`column1` VARCHAR(255) NOT NULL,
`column2` INTEGER NULL,
`column3` INTEGER NULL,
`column4` INTEGER NULL,
`default` INTEGER NULL
);
```
And I can insert data into it with:
```
INSERT INTO `table` VALUES (1,'something',1,0,0,1)
```
and it works. But my question is how to insert specific values, given the problem that I have a column named `default`, and SQL interprets that column name as the `DEFAULT` keyword. Therefore I can't use inserts like
```
INSERT INTO `table` (id,column1,column2,column3,column4,default) VALUES (1,'something',1,0,0,1),
```
or
```
INSERT INTO `table` (id,column1,default) VALUES (1,'something',1),
```
and so on.
So my question is:
Is it possible to insert specific data into a table with a column named `default`, and if so, how? Does this work on some other databases? I'm using a MySQL database.
Thanks in advance.
|
Try escaping the column name `default` with backticks, like you have with the table name.
|
Do the same thing that you did with your table name:
```
INSERT INTO `table` (`id`,`column1`,`default`) VALUES (1,'something',1),
```
|
How to insert data into database with column name "default"
|
[
"",
"mysql",
"sql",
""
] |
I have following database tables in SQL Server 2008.
```
customer
------------
id, Name, Address
payment
-------
payment_id, amount, customer_id
Sales
S_id, Date, ItemName, amount, commission, customer_id
```
I am a novice with SQL queries. I want to display this result:
```
name, previous customer balance i.e (sum(Sale.amount)+Sum(commission))-sum(Payment) for all customers
```
|
Something like the below should work:
```
select c.[Name], saleSum.SaleTotal - paymentSum.PaymentTotal
from customer c join
(
select c.[id], sum(isnull(s.amount,0) + isnull(s.commission, 0)) SaleTotal
from customer c left join sales s on c.id = s.customer_id
group by c.[id]
) as saleSum on c.id = saleSum.id
join
(
select c.id, sum(isnull(p.amount,0)) PaymentTotal
from customer c left join payment p on c.id = p.customer_id
group by c.id
) paymentSum on c.id = paymentSum.id
```
|
The query is:
```
Select Tab1.ID,
Min(Tab1.name),
sum(Tab3.amount) + Sum(Tab3.commission) - sum(Tab2.amount)
From Table1 Tab1
inner join Table2 tab2 on Tab1.id = Tab2.customer_id
inner join Table3 tab3 on Tab1.id = Tab3.customer_id
Group by Tab1.ID
```
|
Finding Sum of all previous sales to all customer and sum of today's sale in sql query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have one table (TransactionData) containing transaction data for customers and one table (Subscribers) containing customers that are newsletter subscribers. This is what I need to do.
Select all subscribers from the TransactionData table that are also in the Subscribers table and that have only purchased products online (StoreID = 50001).
The unique key for both tables is the customer email address.
This is the basic table structure.
```
TransactionData
Email StoreID OrderID
customer1@mail.com 50001 101
customer1@mail.com 50001 102
customer2@mail.com 50001 201
customer2@mail.com 51111 202
customer3@mail.com 50001 301
customer3@mail.com 50001 302
Subscribers
Email
customer1@mail.com
customer2@mail.com
Desired result
Email StoreID
customer1@mail.com 50001
```
This is my query so far:
```
SELECT b.Email, b.StoreID
FROM TransactionData AS a
INNER JOIN Subscribers AS b
ON a.Email = b.Email
WHERE a.BoutiqueId = 50001
```
I can’t figure out how to exclude customers that have purchased both online and in store. Can you help me with this?
|
```
SELECT t.Email , t.StoreID
FROM TransactionData t
WHERE EXISTS (SELECT 1
FROM Subscribers
WHERE t.Email = Email)
AND NOT EXISTS (SELECT 1
FROM TransactionData
WHERE t.Email = Email
AND StoreID <> 50001)
GROUP BY t.Email , t.StoreID
```
|
```
declare @TransactionData TABLE
([Email] varchar(18), [StoreID] int, [OrderID] int)
;
INSERT INTO @TransactionData
([Email], [StoreID], [OrderID])
VALUES
('customer1@mail.com', 50001, 101),
('customer1@mail.com', 50001, 102),
('customer2@mail.com', 50001, 201),
('customer2@mail.com', 51111, 202),
('customer3@mail.com', 50001, 301),
('customer3@mail.com', 50001, 302)
;
declare @Subscribers TABLE
([Email] varchar(18))
;
INSERT INTO @Subscribers
([Email])
VALUES
('customer1@mail.com'),
('customer2@mail.com')
;
select T.Email,T.StoreID from @TransactionData T
INNER JOIN @Subscribers TT
ON TT.Email = T.Email
WHERE NOT EXISTS (SELECT 1
FROM @TransactionData
WHERE t.Email = Email
AND StoreID <> 50001)
GROUP BY T.Email,T.StoreID
```
|
SQL server – get rows with customers that has only purchased online
|
[
"",
"sql",
"sql-server",
""
] |
I am having problems with the following query.
This is my array of data:
```
String[] columns = new String[]
{ KEY_ROWELEMENTID, KEY_STUDYID, KEY_ELEMENTCODE, KEY_ELEMENTNAME};
```
I create a cursor over this array where the column KEY\_ELEMENTCODE is equal to a variable result AND KEY\_STUDYID is equal to a variable result:
```
Cursor c =
ourDatabase.query(DATABASE_TABLEELEMENTS, columns, KEY_ELEMENTCODE + "=" + elementRequest
+ "AND" + KEY_STUDYID +
"=" + idStudy, null, null, null, null);
```
What is the problem? I have debugged, and all variables have the correct values.
Thanks
|
Yes, there should be spaces, as in `" AND "`.
One more thing: if you are comparing integer values, then use this line:
```
Cursor c = ourDatabase.query(DATABASE_TABLEELEMENTS, columns,
KEY_ELEMENTCODE + "=" + elementRequest
+ " AND " + KEY_STUDYID +
"=" + idStudy, null, null, null, null);
```
and if you are comparing string values, then you should wrap them in single quotes, as in:
```
Cursor c = ourDatabase.query(DATABASE_TABLEELEMENTS, columns,
                KEY_ELEMENTCODE + "=" + "'" + elementRequest + "'"
+ " AND " + KEY_STUDYID +
"=" + "'"+idStudy+"'", null, null, null, null);
```
Let me know if it works for you...
|
Without seeing the output from
```
KEY_ELEMENTCODE + "=" + elementRequest + "AND" + KEY_STUDYID + "=" + idStudy
```
I guess that you just have to replace
```
"AND"
```
with
```
" AND "
```
|
Unrecognized token AND
|
[
"",
"android",
"sql",
"sqlite",
"token",
""
] |
I need to show a list of partners and the maximum value from the `reservation_limit` column of the `Klass` table.
```
Partner has_many :klasses
Klass belongs_to :partner
# Partner controller
def index
@partners = Partner.includes(:klasses)
end
# view
<% @partners.each do |partner| %>
Up to <%= partner.klasses.maximum("reservation_limit") %> visits per month
<% end %>
```
Unfortunately the query below runs for every single `Partner`.
```
SELECT MAX("klasses"."reservation_limit") FROM "klasses" WHERE "klasses"."partner_id" = $1 [["partner_id", 1]]
```
If there are 40 partners then the query will run 40 times. How do I optimize this?
---
edit: Looks like there's a [limit](http://api.rubyonrails.org/classes/ActiveRecord/QueryMethods.html#method-i-limit) method in rails so I'm changing the `limit` in question to `reservation_limit` to prevent confusion.
|
You can use two forms of SQL to efficiently retrieve this information, and I'm assuming here that you want a result for a partner even where there is no klass record for it
The first is:
```
select partners.*,
max(klasses.limit) as max_klasses_limit
from partners
left join klasses on klasses.partner_id = partners.id
group by partners.id
```
Some RDBMSs require that you use "group by partners.\*", though, which is potentially expensive in terms of the required sort and the possibility of it spilling to disk.
On the other hand you can add a clause such as:
```
having("max(klasses.limit) > ?", 3)
```
... to efficiently filter the partners by their value of maximum klass.limit
The other is:
```
select partners.*,
(Select max(klasses.limit)
from klasses
where klasses.partner_id = partners.id) as max_klasses_limit
from partners
```
The second one does not rely on a group by, and in some RDBMSs may be effectively transformed internally to the first form, but it may execute less efficiently because the subquery is executed once per row in the partners table (which would still be much faster than the raw Rails way of actually submitting a query per row).
The Rails ActiveRecord forms of these would be:
```
Partner.joins("left join klasses on klasses.partner_id = partners.id").
select("partners.*, max(klasses.limit) as max_klasses_limit").
group(:id)
```
... and ...
```
Partner.select("partners.*, (select max(klasses.limit)
from klasses
where klasses.partner_id = partners.id) as max_klasses_limit")
```
Which of these is actually the most efficient is probably going to depend on the RDBMS and even the RDBMS version.
If you don't need a result when there is no klass for the partner, or there is always guaranteed to be one, then:
```
Partner.joins(:klasses).
select("partners.*, max(klasses.limit) as max_klasses_limit").
group(:id)
```
Either way, you can then reference
```
partner.max_klasses_limit
```
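A runnable sketch of the grouped left-join form using Python's sqlite3, with `reservation_limit` in place of `limit` as in the question; note that a partner with no klasses still gets a row, with NULL as its maximum:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE partners (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE klasses (partner_id INTEGER, reservation_limit INTEGER);
INSERT INTO partners VALUES (1, 'A'), (2, 'B'), (3, 'C');
INSERT INTO klasses VALUES (1, 4), (1, 9), (2, 2);
""")
rows = conn.execute("""
SELECT p.id, p.name, MAX(k.reservation_limit) AS max_klasses_limit
FROM partners p
LEFT JOIN klasses k ON k.partner_id = p.id
GROUP BY p.id
ORDER BY p.id
""").fetchall()
```

One grouped query replaces the N per-partner `SELECT MAX(...)` round trips.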
|
Your initial query brings all the information you need. You only need to work with it as you would work with a regular array of objects.
Change
```
Up to <%= partner.klasses.maximum("reservation_limit") %> visits per month
```
to
```
Up to <%= partner.klasses.empty? ? 0 : partner.klasses.max_by { |k| k.reservation_limit }.reservation_limit %> visits per month
```
---
What `maximum("reservation_limit")` does is trigger an **Active Record** query `SELECT MAX...`. But you don't need this, as you already have all the information you need to compute the maximum in your array.
**Note**
Using `.count` on an Active Record result will trigger an extra `SELECT COUNT...` query!
Using `.length` will not.
|
Rails: Optimize querying maximum values from associated table
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
"postgresql",
"rails-postgresql",
""
] |
Consider the following 2 tables:
```
customer( **c_id**, c_name, c_dob)
customer_loan_taken( **loan_no**, c_id, taken_date, loan_amount)
```
How to find out average loan taken by age group 20-25, 30-35, 40-45, and display them in a single table ?
The table contents are as follows:
## customer table
```
C_ID  C_NAME              C_DOB
-------------------------------------
1     Jainam Jhaveri      17-FEB-93
2     Harsh Mehra         10-DEC-91
3     Mohit Desai         15-OCT-75
4     Raj Gupta           31-AUG-80
5     Yash Shah           24-NOV-85
6     Dishank Parikh      02-OCT-78
7     Chandni Jain        06-MAR-83
8     Bhavesh Prajapati   13-MAY-71
9     Priyank Khandelwal  18-JUN-86
10    Mihir Vora          11-NOV-95
```
## customer\_loan\_taken table
```
LOAN_NO  C_ID  TAKEN_DAT  LOAN_AMOUNT
-------------------------------------
1011     1     12-SEP-11  100000
1012     3     20-APR-10  200010
1013     4     15-OCT-12  150000
1014     5     04-JAN-13  2500005
1015     7     15-AUG-16  2600001
1016     8     21-DEC-16  3500000
1017     9     13-NOV-17  4000000
1018     10    05-MAR-18  1010100
```
|
This works in Oracle 12c. The trick for figuring out the age can differ by database, as DATEDIFF does not work in Oracle, so modify it accordingly:
```
with customer( c_id, c_name, c_dob) as
(select 1,'A','31/01/1990' from dual union
select 2,'A','31/01/1980' from dual union
select 3,'C','31/01/1970' from dual union
select 4,'D','31/08/1990' from dual),
ag as
(select c.* ,
FLOOR(TRUNC(MONTHS_BETWEEN(SYSDATE, to_date(c_dob,'DD/MM/YYYY'))) /12) as age,
case when FLOOR(TRUNC(MONTHS_BETWEEN(SYSDATE, to_date(c_dob,'DD/MM/YYYY'))) /12) between 20 and 25 then '20-25' when
FLOOR(TRUNC(MONTHS_BETWEEN(SYSDATE, to_date(c_dob,'DD/MM/YYYY'))) /12) between 30 and 35 then '30-35' when
FLOOR(TRUNC(MONTHS_BETWEEN(SYSDATE, to_date(c_dob,'DD/MM/YYYY'))) /12) between 40 and 45 then '40-45'
end as agegroup from customer c
),
customer_loan_taken( loan_no, c_id, taken_date, loan_amount)
as
(
select 101,1,'01/01/1990',1000 from dual union
select 102,2,'01/01/1990',2000 from dual union
select 103,3,'01/01/1990',3000 from dual union
select 104,4,'01/01/1990',4000 from dual
)
select distinct(ag.agegroup),avg(loan_amount) from customer_loan_taken cl,ag
where ag.c_id=cl.c_id
group by ag.agegroup
```
|
```
;WITH cte AS (
SELECT CASE
WHEN DATEDIFF ("YY", c_dob, GETDATE()) > 20
AND DATEDIFF ("YY", c_dob, GETDATE()) <= 25 THEN '20-25'
WHEN DATEDIFF ("YY", c_dob, GETDATE()) > 25
AND DATEDIFF ("YY", c_dob, GETDATE()) <= 30 THEN '25-30'
WHEN DATEDIFF ("YY", c_dob, GETDATE()) > 30
AND DATEDIFF ("YY", c_dob, GETDATE()) <= 35 THEN '30-35'
END AS rangedate,
l.loan_amount
FROM customer
INNER JOIN customer_loan_taken l ON customer.c_id = l.c_id
)
SELECT rangedate,
AVG(loan_amount) average
FROM cte
GROUP BY rangedate
```
|
Find out average loan taken by age group in specific ranges and display them in a table
|
[
"",
"sql",
"oracle",
"oracle12c",
""
] |
I have a scenario where I have 2 tables with the same foreign key and amount column, such as below:
```
TABLE 1:
ForeignKey Amount
---------- -------------
12 20.0
12 30.0
13 20.0
21 10.0
21 10.0
TABLE 2:
ForeignKey Amount
---------- -------------
12 60.0
12 25.0
13 30.0
21 10.0
21 10.0
EXPECTED OUTPUT:
ForeignKey Amount
---------- -------------
12 35.0
13 10.0
21 0
```
I am using MSSQL
I want to compare the amounts aggregate amount per ForeignKey of each table and get the difference. I realize that I could put these into DataTable objects (C#) and do some complicated looping, but I am wondering if there's a more elegant SQL approach I can take. I am not very strong with SQL. Could someone point me in a general direction that I could explore in order to solve this problem?
Thank you!
|
```
SELECT t1.ForeignKey, t1.total AS total1, t2.total AS total2, t1.total-t2.total AS difference
FROM
(
SELECT ForeignKey,sum(Amount) AS total
FROM table1
GROUP BY ForeignKey
) t1
INNER JOIN
(
SELECT ForeignKey,sum(Amount) AS total
FROM table2
GROUP BY ForeignKey
) t2
ON t1.ForeignKey=t2.ForeignKey
```
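A runnable sketch of the two-subquery approach using Python's sqlite3 with the question's data; the subtraction here is table 2 minus table 1, which matches the expected output in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (fk INTEGER, amount REAL);
CREATE TABLE t2 (fk INTEGER, amount REAL);
INSERT INTO t1 VALUES (12,20),(12,30),(13,20),(21,10),(21,10);
INSERT INTO t2 VALUES (12,60),(12,25),(13,30),(21,10),(21,10);
""")
# Aggregate each table per key first, then join the two aggregates.
rows = conn.execute("""
SELECT a.fk, b.total - a.total AS diff
FROM (SELECT fk, SUM(amount) AS total FROM t1 GROUP BY fk) a
JOIN (SELECT fk, SUM(amount) AS total FROM t2 GROUP BY fk) b
  ON a.fk = b.fk
ORDER BY a.fk
""").fetchall()
```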
|
```
;with t as (select ForeignKey,amount from [table 1]
union
select ForeignKey,-amount from [table 2])
select ForeignKey,sum(amount) from t
group by ForeignKey
```
|
Compare columns from 2 different tables and get the difference in amount
|
[
"",
"sql",
""
] |
I have this table with the following data:
```
id Product Price
1 ELECTRO TV
2 null null
3 null null
4 Samsung 1000
5 LG 2000
6 Philips 1300
7 ELECTRO Mobile
8 null null
9 null null
10 Samsung 1000
11 Nokia 2000
12 Sony 1300
```
I need to add another column and repeat the value; the resultant table should be as below:
```
id Product Price Category
1 ELECTRO TV TV
2 null null TV
3 null null TV
4 Samsung 1000 TV
5 LG 2000 TV
6 Philips 1300 TV
7 ELECTRO Mobile Mobile
8 null null Mobile
9 null null Mobile
10 Samsung 1000 Mobile
11 Nokia 2000 Mobile
12 Sony 1300 Mobile
```
Can someone please help me out with this query? I don't have any idea how to do it.
|
[SQL Fiddle](http://sqlfiddle.com/#!4/3a235/2)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE test (id, Product, Price ) AS
SELECT 1, 'ELECTRO', 'TV' FROM DUAL
UNION ALL SELECT 2, null, null FROM DUAL
UNION ALL SELECT 3, null, null FROM DUAL
UNION ALL SELECT 4, 'Samsung', '1000' FROM DUAL
UNION ALL SELECT 5, 'LG', '2000' FROM DUAL
UNION ALL SELECT 6, 'Philips', '1300' FROM DUAL
UNION ALL SELECT 7, 'ELECTRO', 'Mobile' FROM DUAL
UNION ALL SELECT 8, null, null FROM DUAL
UNION ALL SELECT 9, null, null FROM DUAL
UNION ALL SELECT 10, 'Samsung', '1000' FROM DUAL
UNION ALL SELECT 11, 'Nokia', '2000' FROM DUAL
UNION ALL SELECT 12, 'Sony', '1300' FROM DUAL
```
**Query 1**:
```
SELECT ID,
PRODUCT,
PRICE,
CASE PRODUCT
WHEN 'ELECTRO' THEN PRICE
ELSE LAG( CASE PRODUCT WHEN 'ELECTRO' THEN PRICE END ) IGNORE NULLS OVER ( ORDER BY ID )
END AS CATEGORY
FROM test
```
**[Results](http://sqlfiddle.com/#!4/3a235/2/0)**:
```
| ID | PRODUCT | PRICE | CATEGORY |
|----|---------|--------|----------|
| 1 | ELECTRO | TV | TV |
| 2 | (null) | (null) | TV |
| 3 | (null) | (null) | TV |
| 4 | Samsung | 1000 | TV |
| 5 | LG | 2000 | TV |
| 6 | Philips | 1300 | TV |
| 7 | ELECTRO | Mobile | Mobile |
| 8 | (null) | (null) | Mobile |
| 9 | (null) | (null) | Mobile |
| 10 | Samsung | 1000 | Mobile |
| 11 | Nokia | 2000 | Mobile |
| 12 | Sony | 1300 | Mobile |
```
|
There appear to be a number of things wrong with the design of your data here, and I really think that you need to address them before trying to write your query. For starters, it looks like Electro is a category and should therefore be a column, but Product is already doing that work, which makes Electro in any form look redundant. If Electro is somehow not redundant, it should have its own column.
Your rows containing null are also a symptom that something is almost certainly wrong with your data design, as it would seem unlikely that you have a TV with both no manufacturer and no price.
Additionally you appear to have mixed numeric and character data in the Price column: again suggesting that there is a serious flaw in the data design.
I don't think that you can really get the outcome you want to achieve given the data you appear to have, and to me it looks as if your underlying data needs to be redesigned.
|
repeat a value of a row in a new column
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have the following situation:
One table called cad (id, name, gender, age, fok\_professional, fok\_agegroup, fok\_ativity) and other tables called professional (id, name), agegroup (id, desc), and
ativity (id, name).
How do I select the name value instead of the number that a simple query generally presents? For example:
```
SELECT * FROM cad
```
output is:
```
id -> 1;
name -> Teste;
gender -> Male;
age -> 22;
fok_professional -> 1;
fok_agegroup -> 4;
fok_ativity -> 2;
```
instead I would like:
```
id -> 1;
name -> Teste;
gender -> Male;
age -> 22;
fok_professional -> Administrator;
fok_agegroup -> Age 19 55;
fok_ativity -> Testestetstats;
```
How do I get the name values?
|
You want something like this, using SQL JOINs to connect the tables together:
```
SELECT
cad.id, cad.name, cad.gender, cad.age,
professional.name,
agegroup.desc,
ativity.name
FROM cad INNER JOIN professional ON cad.fok_professional = professional.id
INNER JOIN agegroup ON cad.fok_agegroup = agegroup.id
INNER JOIN ativity ON cad.fok_ativity = ativity.id
```
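A minimal runnable sketch of the same join using Python's sqlite3, trimmed to a single lookup table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE professional (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE cad (id INTEGER PRIMARY KEY, name TEXT, fok_professional INTEGER);
INSERT INTO professional VALUES (1, 'Administrator');
INSERT INTO cad VALUES (1, 'Teste', 1);
""")
# The join swaps the numeric foreign key for the looked-up name.
row = conn.execute("""
SELECT cad.name, professional.name
FROM cad
INNER JOIN professional ON cad.fok_professional = professional.id
""").fetchone()
```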
|
You need to use inner join between those three tables.
Try this:
```
SELECT C.ID,
C.NAME,
C.GENDER,
C.AGE,
P.NAME AS [FOK_PROFESSIONAL],
       A.[DESC] AS [FOK_AGEGROUP],
AT.NAME AS [FOK_ATIVITY]
FROM CAD C
INNER JOIN PROFESSIONAL P
ON C.FOK_PROFESSIONAL = P.ID
INNER JOIN AGEGROUP A
ON C.FOK_AGEGROUP = A.ID
INNER JOIN ATIVITY AT
ON C.FOK_ATIVITY = AT.ID
```
|
How to Select foreign key names instead of number in SQL Server
|
[
"",
"sql",
"sql-server",
"database",
""
] |
Similarly to [this question](https://stackoverflow.com/q/9325762) I would like to perform an SQL "like" operation using my own user defined type called "AccountNumber".
The QueryDSL Entity class the field which defines the column looks like this:
```
public final SimplePath<com.myorg.types.AccountNumber> accountNumber;
```
I have tried the following code to achieve a "like" operation in SQL but get an error when the types are compared before the query is run:
```
final Path path=QBusinessEvent.businessEvent.accountNumber;
final Expression<AccountNumber> constant = Expressions.constant(AccountNumber.valueOfWithWildcard(pRegion.toString()));
final BooleanExpression booleanOperation = Expressions.booleanOperation(Ops.STARTS_WITH, path, constant);
expressionBuilder.and(booleanOperation);
```
The error is:
```
org.springframework.dao.InvalidDataAccessApiUsageException: Parameter value [7!%%] did not match expected type [com.myorg.types.AccountNumber (n/a)]
```
Has anyone ever been able to achieve this using QueryDSL/JPA combination?
|
In the end, I was given a tip by my colleague to do the following:
```
if (pRegion != null) {
expressionBuilder.and(Expressions.booleanTemplate("{0} like concat({1}, '%')", qBusinessEvent.accountNumber, pRegion));
}
```
This seems to do the trick!
|
Did you try using a String constant instead?
```
Path<?> path = QBusinessEvent.businessEvent.accountNumber;
Expression<String> constant = Expressions.constant(pRegion.toString());
Predicate predicate = Expressions.predicate(Ops.STARTS_WITH, path, constant);
```
|
QueryDSL like operation SimplePath
|
[
"",
"sql",
"jpa",
"sql-like",
"querydsl",
""
] |
I am using SQL Server 2012.
I have written the query below which works fine apart from when I include the last line 'where NomDiff <> 0'.
It tells me that NomDiff is an invalid column name. I don't understand why, and I don't know how to get the query to return only rows where NomDiff is not equal to zero.
```
;with pf as
(
select Name, Sedol, Nominal from tblTempPLF
where FundCode = 'CSGE'
), pc as
(
select Name, Sedol, Nominal from tblTempPCF
where FundCode = 'BTCM'
)
select coalesce(pf.Name, pc.Name) Name, coalesce(pf.Sedol, pc.Sedol) Sedol,
isnull(pf.Nominal,0) PfNom, isnull(pc.Nominal,0) PcNom, isnull(pf.Nominal,0) - isnull(pc.Nominal,0) NomDiff
from pf full outer join pc on pf.Sedol = pc.Sedol
where NomDiff <> 0
```
|
Using `CROSS APPLY (VALUES ...)` is a good way of not having to use a subquery or another CTE:
```
WITH pf
AS ( SELECT
Name ,
Sedol ,
Nominal
FROM tblTempPLF
WHERE FundCode = 'CSGE'
),
pc
AS ( SELECT
Name ,
Sedol ,
Nominal
FROM tblTempPCF
WHERE FundCode = 'BTCM'
)
SELECT
Name = COALESCE(pf.Name, pc.Name),
Sedol = COALESCE(pf.Sedol, pc.Sedol),
PfNom = ISNULL(pf.Nominal, 0),
PcNom = ISNULL(pc.Nominal, 0),
NomDiff = Nom.Diff
FROM pf
FULL OUTER JOIN pc
ON pf.Sedol = pc.Sedol
CROSS APPLY(VALUES(ISNULL(pf.Nominal, 0) - ISNULL(pc.Nominal, 0))) AS Nom(Diff)
WHERE Nom.Diff <> 0;
```
|
You can't use a previously defined `alias` in the `where` clause. Use the calculation instead.
```
select coalesce(pf.Name, pc.Name) Name,
coalesce(pf.Sedol, pc.Sedol) Sedol,
isnull(pf.Nominal,0) PfNom, isnull(pc.Nominal,0) PcNom,
isnull(pf.Nominal,0) - isnull(pc.Nominal,0) NomDiff
from pf full outer join pc on pf.Sedol = pc.Sedol
where isnull(pf.Nominal,0) - isnull(pc.Nominal,0)<> 0
```
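SQL Server resolves the `WHERE` clause before the `SELECT` list, so the alias does not exist yet (some engines, such as SQLite, tolerate it as an extension). A sketch of the repeat-the-expression fix using Python's sqlite3; it uses a LEFT JOIN because older SQLite versions lack FULL OUTER JOIN, and sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pf (sedol TEXT, nominal REAL);
CREATE TABLE pc (sedol TEXT, nominal REAL);
INSERT INTO pf VALUES ('A', 100), ('B', 50);
INSERT INTO pc VALUES ('A', 100), ('B', 40);
""")
# The difference expression is repeated in WHERE instead of reusing the alias.
rows = conn.execute("""
SELECT pf.sedol,
       IFNULL(pf.nominal, 0) - IFNULL(pc.nominal, 0) AS nom_diff
FROM pf LEFT JOIN pc ON pf.sedol = pc.sedol
WHERE IFNULL(pf.nominal, 0) - IFNULL(pc.nominal, 0) <> 0
""").fetchall()
```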
|
using where clause causes invalid column name error
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm doing this challenge on Hackerrank:
<https://www.hackerrank.com/challenges/weather-observation-station-5>
I'm a beginner at SQL, and I'm trying to query all the rows that have the maximum value for a column, a maximum that I can only obtain via `MAX()`. So I'm trying this:
```
SELECT CITY, LENGTH(CITY) AS citylength
FROM STATION
WHERE LENGTH(CITY) = (SELECT MIN(CITY) FROM STATION)
```
and I get errors.
I've looked up sub-queries on Google, but I'm not familiar enough with them to know exactly how they work, so I need your help. Thanks.
So to sum up, I need a query that can get the rows in a table that have the maximum value obtained via `MAX()`.
|
This works without sub-queries
```
SELECT CITY, LENGTH(CITY)
FROM STATION
ORDER BY 2,1
LIMIT 1
```
|
You are requested to find two different results:
* The city with maximum length (and the first in the alphabet in case of a tie)
* The city with minimum length (and the first in the alphabet in case of a tie)
This means two different queries, which you glue together with UNION ALL.
```
(
select concat(city, ' ', length(city))
from station
order by length(city), city limit 1
)
union all
(
select concat(city, ' ', length(city))
from station
order by length(city) desc, city limit 1
);
```
As Strawberry pointed out: You need the parentheses in order to place two ORDER BY clauses, one per query part. (Otherwise you can only place one ORDER BY clause at the end for the whole query.)
In your query you are comparing `LENGTH(CITY)`, i.e. an integer holding the name's length and `MIN(CITY)`, i.e. the city name itself, which cannot work of course. You would have to compare with `MIN(LENGTH(CITY))`. Then do the same for the maximum and then use UNION ALL. This doesn't solve the problem with ties, however, which the LIMIT query does.
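A runnable sketch of the two ordered queries using Python's sqlite3, with `||` in place of MySQL's `CONCAT` and invented city names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE station (city TEXT);
INSERT INTO station VALUES ('Amo'), ('Oz'), ('Springfield'), ('Greenwich');
""")
# Order by length (then name for ties) and keep one row from each end.
shortest = conn.execute(
    "SELECT city || ' ' || LENGTH(city) FROM station "
    "ORDER BY LENGTH(city), city LIMIT 1").fetchone()[0]
longest = conn.execute(
    "SELECT city || ' ' || LENGTH(city) FROM station "
    "ORDER BY LENGTH(city) DESC, city LIMIT 1").fetchone()[0]
```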
|
How to select rows that have a maximum value for a column
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to select the sum of the values in the `isOK` column for each `Name` separately, but only if `isOK = 1` on `Day = 2`.
The query for the following example table `tablename`
```
Name | Day | isOK
char | int | int
-----------------
Flo | 1 | 1
Seb | 1 | 1
Tim | 1 | 0
Flo | 2 | 1
Seb | 2 | 0
Tim | 2 | 1
```
should give Flo: 2 and Tim: 1, but not Seb: 1, since his `isOK` on `Day` = 2 is 0.
I've tried using SUM(isOK) with IF constructs, but it's just not working. My alternative solution, to first select all `Name` where `isOK` = 1 and then select the SUM(isOK) for each of the names, is slow and seems in need of improvement.
I guess it's not that difficult, but I've been trying for hours now and I just can't combine my two queries into one.
|
One way to do this is to use a conditional expression together with a `having` clause like this:
```
select name, sum(isOk) ok_sum
from your_table
group by name
having sum(case when day = 2 and isOK = 1 then 1 else 0 end) > 0;
```
With your sample data the result would be:
```
name ok_sum
Flo 2
Tim 1
```
As MySQL evaluates boolean expressions as 1 or 0 it should be possible to reduce the condition to this:
```
having sum(day = 2 and isOK = 1) > 0;
```
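SQLite evaluates boolean expressions as 0/1 the same way, so the shortened `HAVING` form can be sanity-checked with an in-memory sketch (table and column names taken from the question, data from the sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE your_table (Name TEXT, Day INT, isOK INT);
INSERT INTO your_table VALUES
  ('Flo',1,1), ('Seb',1,1), ('Tim',1,0),
  ('Flo',2,1), ('Seb',2,0), ('Tim',2,1);
""")

# The boolean expression inside SUM() counts rows where Day = 2 AND isOK = 1.
rows = conn.execute("""
SELECT Name, SUM(isOK) AS ok_sum
FROM your_table
GROUP BY Name
HAVING SUM(Day = 2 AND isOK = 1) > 0
ORDER BY Name
""").fetchall()

print(rows)  # Seb is filtered out by the HAVING clause
```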
Another way to do it would be to use a correlated subquery that makes sure there exists a row with `Day = 2` and `isOk = 1` for the `Name`:
```
select t1.name, sum(t1.isOk) ok_sum
from your_table t1
where exists (
select 1
from your_table t2
where t2.day = 2 and t2.isOK = 1 and t1.name = t2.name
)
group by t1.name
```
* See [this fiddle](http://sqlfiddle.com/#!9/a5327/2)
|
Try this:
```
SELECT
name, SUM(isok) AS isOk
FROM
table
GROUP BY `name`
HAVING SUM(`day` = 2 AND isok = 1) > 0;
```
|
Aggregate rows if one row meets a specific condition
|
[
"",
"mysql",
"sql",
"conditional-statements",
""
] |
So I have two tables.
`Table 1`
```
ID Receiver Some Columns
1 43523 Ba Baa
2 82822 Boo Boo
```
`Table 2`
```
ID Receiver Some2 Columns2
1 - 43523 OO JJ
2 - 43523 OO NULL
3 - 43523 OO YABA DABA
```
So, now I want to do a `left join` where I join only one of the matching rows from table 2. The join will be on the Receiver column; the ID column in each table is different.
I tried a left join, and it gave me everything from TABLE 2 (understandably). I looked at other queries online but got lost.
Can anyone help me with this?
**EDIT**
Starting query (from OP's comments):
```
SELECT Table1.[ID],
Table1.[Receiver],
Table2.[Some2],
Table1.[Some],
Table1.[Columns]
FROM Table1
LEFT JOIN Table2
ON Table1.Receiver = Table2.ReceiverID
WHERE Some = 544
AND Columns = 'xyz'
```
|
You can change your `left join` to a normal `join`, since you are always expecting a match.
And to limit the matches to a single row, you can use `row_number() over ()`, where the `order by` clause is meaningless since you don't care which row you are matching to.
```
SELECT Table1.[ID],
Table1.[Receiver],
t2.[Some2],
Table1.[Some],
Table1.[Columns]
FROM Table1
JOIN (SELECT *,
row_number() over (partition by ReceiverID order by ReceiverID) as rn
FROM Table2) t2
ON Table1.Receiver = t2.ReceiverID
AND t2.rn = 1
WHERE Some = 544
AND Columns = 'xyz'
```
|
Try using a `group by` to make the `Receiver` column unique:
```
SELECT t1.*, t2.*
FROM Table1 AS t1
LEFT JOIN Table2 AS t2 ON t1.Receiver = t2.Receiver
GROUP BY t2.Receiver
```
|
Trying to do a LEFT JOIN on two tables but want only one from the RIGHT TABLE
|
[
"",
"sql",
"join",
"left-join",
""
] |
I have a Rails model called MentorData and it has an attribute called `os_usage`. The OSes are stored in an array like so: `['apple', 'linux']`.
To recap:
```
$ MentorData.first.os_usage
=> ['apple', 'linux']
```
I am looking to be able to query the data for all MentorData that includes the os\_usage of `apple`, but when I search `MentorData.where(os_usage: 'apple')` I only get the mentors who can only use apple and not apple and linux. I need to search in some way that checks if apple is included in the array.
I have also tried the following.
```
MentorData.where('os_usage like ?', 'apple')
MentorData.where('os_usage contains ?', 'apple')
MentorData.where('os_usage contains @>ARRAY[?]', 'apple')
```
Is it possible to query data in ActiveRecord by attributes that hold an array of items?
The database is on Postgres if that helps in providing a more raw search query.
|
Here are the examples given in the current [Rails Edge Guides](http://edgeguides.rubyonrails.org/active_record_postgresql.html) for PostgreSQL:
```
# db/migrate/20140207133952_create_books.rb
create_table :books do |t|
t.string 'title'
t.string 'tags', array: true
t.integer 'ratings', array: true
end
add_index :books, :tags, using: 'gin'
add_index :books, :ratings, using: 'gin'
# app/models/book.rb
class Book < ActiveRecord::Base
end
# Usage
Book.create title: "Brave New World",
tags: ["fantasy", "fiction"],
ratings: [4, 5]
## Books for a single tag
Book.where("'fantasy' = ANY (tags)")
## Books for multiple tags
Book.where("tags @> ARRAY[?]::varchar[]", ["fantasy", "fiction"])
## Books with 3 or more ratings
Book.where("array_length(ratings, 1) >= 3")
```
|
Have you tried `MentorData.where("'apple' = ANY (os_usage)")`?
|
How can I query Rails ActiveRecord data stored in arrays
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
"postgresql",
"activerecord",
""
] |
I have a table with 2 column like this:
```
city_name car_name
Gurgaon Alto
Gurgaon Mercedez Benz
Gurgaon Swift Dzire
New Delhi Alto
New Delhi Etios
New Delhi Mercedez Benz
New Delhi Swift Dzire
```
I made this by running a query using `group by`, but my task is to display the table with non-repetitive entries, i.e. I want to display my table something like this:
```
city_name car_name
Gurgaon Alto
Mercedez Benz
Swift Dzire
New Delhi Alto
Etios
Mercedez Benz
Swift Dzire
```
I'm using Microsoft SQL Server Management Studio 2014
|
This should be done in a front-end / reporting tool and not in SQL. If you have no other option, try this:
```
select case when sno=1 then city_name else '' end as city_name,car_name from
(
select row_number() over (partition by city_name order by car_name) as sno, * from table
) as t
```
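The same blanking pattern works in other engines with window functions; here is a sketch in SQLite (3.25+ is required for `row_number()`, which ships with Python 3.7+ on most platforms). An outer ORDER BY on the underlying columns is added so the blanked rows line up deterministically:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cities (city_name TEXT, car_name TEXT);
INSERT INTO cities VALUES
  ('Gurgaon','Alto'), ('Gurgaon','Mercedez Benz'), ('Gurgaon','Swift Dzire'),
  ('New Delhi','Alto'), ('New Delhi','Etios');
""")

# sno = 1 marks the first row of each city; later rows get a blank city.
rows = conn.execute("""
SELECT CASE WHEN sno = 1 THEN city_name ELSE '' END AS city, car_name
FROM (SELECT row_number() OVER (PARTITION BY city_name ORDER BY car_name) AS sno,
             city_name, car_name
      FROM cities) AS t
ORDER BY t.city_name, t.car_name
""").fetchall()

print(rows)
```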
|
```
select city_name, (SELECT [car_name]+','
FROM [myTable] AS [t1]
WHERE t1.[city_name] = t2.[city_name]
FOR XML PATH('') ) AS cars
FROM [MyTable] AS [t2]
GROUP BY [t2].[city_name]
```
You can replace the , (or any other string you might use) with newline character in your display medium.
|
How to remove repetitive column entries?
|
[
"",
"sql",
"sql-server",
""
] |
can someone help me to resolve this?
```
CREATE TABLE TT (
A NUMBER PRIMARY KEY,
B VARCHAR2(5)
);
insert into tt values (11,'A');
insert into tt values (12,'A');
insert into tt values (13,'B');
insert into tt values (14,'B');
insert into tt values (15,'C');
insert into tt values (16,'D');
insert into tt values (17,'E');
insert into tt values (18,'E');
insert into tt values (19,'F');
insert into tt values (20,'F');
COMMIT;
SELECT * FROM TT;
+---+---+
| A | B |
+---+---+
|11 | A |
|12 | A |
|13 | B |
|14 | B |
|15 | C |
|16 | D |
|17 | E |
|18 | E |
|19 | F |
|20 | F |
+---+---+
```
My requirement: find all rows whose 'B' value maps to more than one 'A' value (e.g. the value 'E' maps to two rows in column 'A').
Output:
```
+---+
| A |
+---+
| 11|
| 12|
| 13|
| 14|
| 17|
| 18|
| 19|
| 20|
+---+
```
I've achieved this using the analytic query below. I want to know whether it's possible to achieve the same without an analytic function.
```
select a
from (SELECT tt.*, COUNT(*) over (partition by b) cnt
FROM TT
)
where cnt >= 2;
```
|
You can find the candidates using a group by:
```
select a
from tt
where B in (
select B
from tt
group by b
having count(*) >= 2);
```
|
This is easy enough with aggregations:
```
select a
from tt
where b in (select b from tt group by b having count(*) > 1);
```
As a note, you can avoid the use of aggregation, because Oracle provides the `rowid` pseudocolumn:
```
select a
from tt
where exists (select 1
from tt tt2
where tt2.b = tt.b and tt2.rowid <> tt.rowid
);
```
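Both variants are easy to verify on the sample data. A sketch in SQLite (which also exposes a `rowid` pseudocolumn, so the EXISTS form carries over) confirming the two queries agree:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tt (a INTEGER PRIMARY KEY, b TEXT);
INSERT INTO tt VALUES (11,'A'),(12,'A'),(13,'B'),(14,'B'),(15,'C'),
                      (16,'D'),(17,'E'),(18,'E'),(19,'F'),(20,'F');
""")

# Aggregation variant: keep rows whose B value occurs more than once.
group_by = conn.execute("""
SELECT a FROM tt
WHERE b IN (SELECT b FROM tt GROUP BY b HAVING COUNT(*) > 1)
ORDER BY a
""").fetchall()

# rowid variant: another row with the same B but a different rowid exists.
exists_q = conn.execute("""
SELECT a FROM tt
WHERE EXISTS (SELECT 1 FROM tt tt2
              WHERE tt2.b = tt.b AND tt2.rowid <> tt.rowid)
ORDER BY a
""").fetchall()

print(group_by)
assert group_by == exists_q  # both forms return the same rows
```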
|
Any other option is there without using Oracle Analytical function
|
[
"",
"sql",
"oracle",
"window-functions",
""
] |
Is there a way to do the following query? It would save me having to go out of SQL to do post-processing of the query:
```
SELECT date, RANDOM('CA', 'DE', 'AZ') FROM table LIMIT 3
```
And the result would be something like:
```
2014-01-01,"CA"
2014-01-02,"CA"
2014-01-03,"DE"
```
Is there a way to do the `RANDOM` operation in SQL?
|
Get your set of values into a subquery with `UNION ALL`, sort by `RAND()`, and return 1 value:
**[SqlFiddleDemo](http://sqlfiddle.com/#!9/77fb8/4/0)**
```
SELECT id,
(SELECT 'CA' AS 'col'
UNION ALL
SELECT 'DE'
UNION ALL
SELECT'CZ'
ORDER BY RAND() LIMIT 1) AS Random
FROM tab
LIMIT 3
```
Or use:
**[SqlFiddleDemo\_2](http://sqlfiddle.com/#!9/4ea375/2/0)**
```
SELECT id,
ELT(FIELD(CEILING(RAND()*3), 1, 2, 3),'CA','CZ', 'DE') AS Rand
FROM tab
LIMIT 3
```
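Both variants lean on MySQL-specific functions (`RAND()`, `ELT`, `FIELD`). The first pattern ports to other engines by swapping the random function; a sketch in Python's built-in SQLite using `random()`, with a made-up `tab` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab (id INTEGER PRIMARY KEY);
INSERT INTO tab (id) VALUES (1),(2),(3);
""")

# Build the candidate set with UNION ALL, shuffle it, keep one value.
rows = conn.execute("""
SELECT id,
       (SELECT c
        FROM (SELECT 'CA' AS c UNION ALL SELECT 'DE' UNION ALL SELECT 'AZ')
        ORDER BY random() LIMIT 1) AS pick
FROM tab
LIMIT 3
""").fetchall()

for _id, pick in rows:
    assert pick in ('CA', 'DE', 'AZ')
print(rows)
```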
|
```
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX('CA,DE,AZ',',',CEILING(RAND()*3)),',',-1);
```
or something like that
|
Select random value in SQL
|
[
"",
"mysql",
"sql",
""
] |
Is there a way in SQL or PL/SQL to make sure that nothing is committed? Sometimes a function/procedure is called and the consequences are not known; for example, it can trigger an implicit commit. Is there a way to prevent that?
|
I am giving an example in Oracle Database.
This is for DML SQL statements:
```
SQL> set transaction read only;
Transaction set.
SQL> update t set t=14;
update t set t=14
*
ERROR at line 1:
ORA-01456: may not perform insert/delete/update operation inside a READ ONLY
transaction
```
This is for calling a procedure which has commits in it:
```
SQL>alter session DISABLE COMMIT IN PROCEDURE ;
SQL>exec procedureHavingCommit(10);
BEGIN procedureHavingCommit(10); END;
*
ERROR at line 1:
ORA-00034: cannot COMMIT in current PL/SQL session
ORA-06512: at "ND210.DRMOP_UTIL", line 332
ORA-06512: at "ND210.DRMOP_UTIL", line 1664
ORA-00034: cannot COMMIT in current PL/SQL session
ORA-06512: at line 1
```
|
If you want to be sure that you won't modify anything, the "set transaction read only" is the right answer.
Otherwise: if you are writing code that calls other procedures written by other programmers, and your main concern is that these other procedures might issue unwanted commits (or could in future be modified to issue unwanted commits), so you want to catch them before they cause damage, then I have a solution you could find useful that I currently use in my own code for exactly this purpose.
let's say your program does this:
```
procedure MyProcedureThatModifiesData is
begin
update mytables....
SomeOthersProcedure;
update myothertables ...;
commit; -- having a commit in a stored procedure
-- is a bad idea: I wrote this only to mimic the global
-- application behaviour
end;
```
and you want to be sure that, if the author of the SomeOtherProcedure modifies his procedure inserting a commit inside it, this commit will be blocked and rolled back.
with my solution the code becomes this:
```
procedure MyProcedureThatModifiesData is
begin
pkg_block_unwanted_commits.DisableCommits; -- <<!!
update mytables....
SomeOthersProcedure;
update myothertables ...;
pkg_block_unwanted_commits.ReenableCommits; -- <<!!
commit; -- having a commit in a stored procedure
-- is a bad idea: I wrote this only to mimic the global
-- application behaviour
end;
```
Let me explain the idea that makes it possible: all you need is a table containing a deferred constraint.
A deferred constraint is checked only when you issue the "commit": as far as data is not committed it can violate the constraint, but if you try to commit, the whole transaction is rolled back and an oracle error is raised.
Now here is the point: if you INTENTIONALLY insert some data that violates a deferred constraint right at the start of your procedure you will obtain exactly what you are asking for: a rollback instead of a commit.
To re-enable commits, all you have to do is remove the data violating the constraint.
A basic implementation could be this:
```
procedure MyProcedureThatModifiesData is
begin
-- this update "disables" the commits
update myspecialtable set
myfield=unacceptable_value_that_violates_the_constraint;
update mytables....
SomeOthersProcedure;
update myothertables ...;
-- this update "re-enables" the commits
update myspecialtable set
myfield=valid_value;
commit;
end;
```
A further step beyond the basic implementation above is to make "myspecialtable" a global temporary table (ON COMMIT PRESERVE ROWS), so it only contains the temporary values written during the life of your Oracle session and isn't stored permanently in the db. Moreover, this way other sessions can write their own data into this special table without interfering with yours.
The complete solution is this one:
```
create or replace package pkg_block_unwanted_commits is
-- we will implement these two in order to allow nested calls:
-- each DisableCommit must be paired with her EnableCommits.
-- commits will be actually enabled only when each DisableCommit (including
-- nested calls) has been closed by her pairing EnableCommit
procedure DisableCommits;
procedure ReenableCommits;
end;
/
-- this is the temporary table we will use for the above
create global temporary table tbl_block_unwanted_commits
(
-- this primary key, along with the "chk_only_one_row" constraint ensures that
-- this table can contain only one row
only_one_row char(1) default 'X' primary key,
-- it keeps track of the number of "opened" DisableCommits calls:
-- we can commit only if nest_counter is zero
nest_counter number not null
)
on commit preserve rows
/
-- this one, considering that "only_one_row" is the primary key
-- ensures we will have only one row in the table (just to be safe)
alter table tbl_block_unwanted_commits add constraint
chk_only_one_row check (only_one_row='X')
/
-- this is just to reveal errors in our program if we call EnableCommits without having called DisableCommits
-- (mispaired calls)
alter table tbl_block_unwanted_commits add constraint
chk_unbalanced_enables check (nest_counter >=0)
/
-- this one is the constraint that actually does the trick of blocking commits
-- we can commit only if whe have active calls to "disablecommits" that have not been paired with corresponding "ReenableCommits"
alter table tbl_block_unwanted_commits add constraint
chk_blocked_commits check (nest_counter = 0) deferrable initially deferred
/
create or replace package body pkg_block_unwanted_commits is
procedure DisableCommits is
begin
update tbl_block_unwanted_commits set nest_counter = nest_counter +1;
if sql%notfound then
insert into tbl_block_unwanted_commits(only_one_row,nest_counter)
values ('X',1);
end if;
end;
procedure ReenableCommits is
begin
update tbl_block_unwanted_commits set nest_counter = nest_counter -1;
end;
end;
/
```
hope this helps
|
Force rollback in case of an implicit commit
|
[
"",
"sql",
"oracle",
"plsql",
"commit",
"rollback",
""
] |
I'm currently running an instance of MS SQL Server 2014 (12.1.4100.1) on a dedicated machine I rent for $270/month with the following specs:
* Intel Xeon E5-1660 processor (six physical 3.3ghz cores +
hyperthreading + turbo->3.9ghz)
* 64 GB registered DDR3 ECC memory
* 240GB Intel SSD
* 45000 GB of bandwidth transfer
I've been toying around with Azure SQL Database for a bit now, and have been entertaining the idea of switching over to their platform. I fired up an Azure SQL Database using their P2 Premium pricing tier on a V12 server (just to test things out), and loaded a copy of my existing database (from the dedicated machine).
I ran several sets of queries side-by-side, one against the database on the dedicated machine, and one against the P2 Azure SQL Database. The results were sort of shocking: my dedicated machine outperformed (in terms of execution time) the Azure db by a huge margin each time. Typically, the dedicated db instance would finish in under 1/2 to 1/3 of the time that it took the Azure db to execute.
Now, I understand the many benefits of the Azure platform. It's managed vs. my non-managed setup on the dedicated machine, they have point-in-time restore better than what I have, the firewall is easily configured, there's geo-replication, etc., etc. But I have a database with hundreds of tables with tens to hundreds of millions of records in each table, and sometimes need to query across multiple joins, etc., so performance in terms of execution time really matters. I just find it shocking that a ~$930/month service performs that poorly next to a $270/month dedicated machine rental. I'm still pretty new to SQL as a whole, and very new to servers/etc., but does this not add up to anyone else? Does anyone perhaps have some insight into something I'm missing here, or are those other, "managed" features of Azure SQL Database supposed to make up the difference in price?
Bottom line is I'm beginning to outgrow even my dedicated machine's capabilities, and I had really been hoping that Azure's SQL Database would be a nice, next stepping stone, but unless I'm missing something, it's not. I'm too small of a business still to go out and spend hundreds of thousands on some other platform.
Anyone have any advice on if I'm missing something, or is the performance I'm seeing in line with what you would expect? Do I have any other options that can produce better performance than the dedicated machine I'm running currently, but don't cost in the tens of thousand/month? Is there something I can do (configuration/setting) for my Azure SQL Database that would boost execution time? Again, any help is appreciated.
EDIT: Let me revise my question to maybe make it a little more clear: is what I'm seeing in terms of sheer execution time performance to be expected, where a dedicated server @ $270/month is well outperforming Microsoft's Azure SQL DB P2 tier @ $930/month? Ignore the other "perks" like managed vs. unmanaged, ignore intended use like Azure being meant for production, etc. I just need to know if I'm missing something with Azure SQL DB, or if I really am supposed to get MUCH better performance out of a single dedicated machine.
|
There is an alternative from Microsoft to Azure SQL DB:
“Provision a SQL Server virtual machine in Azure”
<https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-provision-sql-server/>
A detailed explanation of the differences between the two offerings: “Understanding Azure SQL Database and SQL Server in Azure VMs”
<https://azure.microsoft.com/en-us/documentation/articles/data-management-azure-sql-database-and-sql-server-iaas/>
One significant difference between your stand alone SQL Server and Azure SQL DB is that with SQL DB you are paying for high levels of availability, which is achieved by running multiple instances on different machines. This would be like renting 4 of your dedicated machines and running them in an AlwaysOn Availability Group, which would change both your cost and performance. However, as you never mentioned availability, I'm guessing this isn't a concern in your scenario. SQL Server in a VM may better match your needs.
|
(Disclaimer: I work for Microsoft, though not on Azure or SQL Server).
"Azure SQL" isn't equivalent to "SQL Server" - and I personally wish that we did offer a kind of "hosted SQL Server" instead of Azure SQL.
On the surface the two are the same: they're both relational database systems with the power of T-SQL to query them (well, they both, under-the-hood use the same DBMS).
Azure SQL is different in that the *idea* is that you have two databases: a development database using a local SQL Server (ideally 2012 or later) and a production database on Azure SQL. You (should) never modify the Azure SQL database directly, and indeed you'll find that SSMS does not offer design tools (Table Designer, View Designer, etc) for Azure SQL. Instead, you design and work with your local SQL Server database and create "DACPAC" files (or special "change" XML files, which can be generated by SSDT) which then modify your Azure DB such that it copies your dev DB, a kind of "design replication" system.
Otherwise, as you noticed, Azure SQL offers built-in resiliency, backups, simplified administration, etc.
As for performance, is it possible you were missing indexes or other optimizations? You also might notice slightly higher latency with Azure SQL compared to a local SQL Server, I've seen ping times (from an Azure VM to an Azure SQL host) around 5-10ms, which means you should design your application to be less-chatty or to parallelise data retrieval operations in order to reduce page load times (assuming this is a web-application you're building).
|
Azure SQL Database vs. MS SQL Server on Dedicated Machine
|
[
"",
"sql",
"azure",
"azure-sql-database",
"sql-server-2014",
""
] |
I'm trying to find rows which are present in only one table. The big table (which contains all rows) is `kwf_uploads`; the small tables go by many different names. My goal is to find rows which exist in `kwf_uploads` but not in the others (in this example these are primary and foreign keys in tables without formal relations).
What I did in sql:
```
select id from
kwf_uploads
left join
(SELECT picture_id as id
FROM documents where picture_id is not null
UNION
SELECT file_id as id
FROM books where file_id is not null
UNION
SELECT picture_id as id
FROM employee where picture_id is not null
UNION
SELECT file_id as id
FROM flightFiles where file_id is not null
UNION
SELECT picture_id as id
FROM tasks where picture_id is not null
UNION
SELECT picture_id as id
FROM trainingContentQuestions where picture_id is not null
UNION
SELECT picture_id as id
FROM trainingQuestions where picture_id is not null) foo
ON kwf_uploads.id = foo.id
```
Find it on: [SQL: cascade UNION and JOIN](https://stackoverflow.com/questions/1418431/sql-cascade-union-and-join)
But it fails with the error: `SQL (1052): Column "id" in field list is ambiguous`.
I don't want to join with every table individually, because I'm not good with joins and the SQL becomes very large and unreadable. I also tried a `not exists` construction without any results.
I think we can find a better solution =)
Result from `select id from kwf_uploads` query (9690 rows):
[](https://i.stack.imgur.com/MGLgR.jpg)
Result from union queries (6096 rows):
[](https://i.stack.imgur.com/mvVge.jpg)
In result I want to see 3594 (9690 minus 6096) rows.
|
UNION (when used by itself, without ALL) is an "expensive" operation.
There is an alternative, NOT EXISTS. This construct is a "semi join" and I suspect may be less expensive than a union approach.
```
SELECT
id
FROM kwf_uploads AS u
WHERE NOT EXISTS (SELECT NULL FROM documents WHERE u.id = picture_id )
AND NOT EXISTS (SELECT NULL FROM books WHERE u.id = file_id )
AND NOT EXISTS (SELECT NULL FROM employee WHERE u.id = picture_id )
AND NOT EXISTS (SELECT NULL FROM flightFiles WHERE u.id = file_id )
AND NOT EXISTS (SELECT NULL FROM tasks WHERE u.id = picture_id )
AND NOT EXISTS (SELECT NULL FROM trainingContentQuestions WHERE u.id = picture_id )
AND NOT EXISTS (SELECT NULL FROM trainingQuestions WHERE u.id = picture_id )
```
|
**To fix your current error**
Prefix the `id` column with the proper result-set alias, in this case `kwf_uploads.id`, because `id` exists in both result sets.
**To get desired output**
Add `where foo.id is null`. This will fetch those ids which failed the join; those are the ids you want.
```
select kwf_uploads.id from
kwf_uploads
left join
(SELECT picture_id as id
FROM documents where picture_id is not null
UNION
SELECT file_id as id
FROM books where file_id is not null
UNION
SELECT picture_id as id
FROM employee where picture_id is not null
UNION
SELECT file_id as id
FROM flightFiles where file_id is not null
UNION
SELECT picture_id as id
FROM tasks where picture_id is not null
UNION
SELECT picture_id as id
FROM trainingContentQuestions where picture_id is not null
UNION
SELECT picture_id as id
FROM trainingQuestions where picture_id is not null) foo
ON kwf_uploads.id = foo.id
where foo.id is null
```
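The `LEFT JOIN … IS NULL` form and the `NOT EXISTS` form from the other answer are both anti-joins and return the same rows. A sketch in SQLite with made-up miniature tables (only two child tables, to keep it short) showing the agreement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE kwf_uploads (id INTEGER PRIMARY KEY);
CREATE TABLE documents (picture_id INT);
CREATE TABLE books (file_id INT);
INSERT INTO kwf_uploads (id) VALUES (1),(2),(3),(4),(5);
INSERT INTO documents VALUES (1),(3),(NULL);
INSERT INTO books VALUES (4);
""")

# Anti-join via LEFT JOIN: keep uploads with no match in the union of references.
left_join = conn.execute("""
SELECT kwf_uploads.id
FROM kwf_uploads
LEFT JOIN (SELECT picture_id AS id FROM documents WHERE picture_id IS NOT NULL
           UNION
           SELECT file_id FROM books WHERE file_id IS NOT NULL) foo
  ON kwf_uploads.id = foo.id
WHERE foo.id IS NULL
ORDER BY 1
""").fetchall()

# Anti-join via NOT EXISTS, one semi-join per child table.
not_exists = conn.execute("""
SELECT u.id FROM kwf_uploads u
WHERE NOT EXISTS (SELECT 1 FROM documents WHERE u.id = picture_id)
  AND NOT EXISTS (SELECT 1 FROM books WHERE u.id = file_id)
ORDER BY 1
""").fetchall()

print(left_join)  # ids referenced by no child table
assert left_join == not_exists
```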
|
How to compare tables with many unions and join?
|
[
"",
"mysql",
"sql",
"join",
"union",
""
] |
I'm dealing with the yelp dataset on `sqlite3`, and I have a schema as follows:
```
CREATE TABLE businesses
(business_id text primary key,
name text,
full_address text,
city text,
stars decimal(1,1),
ratings integer);
```
I run the following query:
```
select name, stars from businesses where ratings >= 400;
```
And I get the following:
```
Pho Kim Long |3.5
Pho So 1 |3.5
Ichiza |4
...
Guy Fieri |3.5
Giada |4
```
If I run:
```
select name, min(stars) from businesses where ratings >= 400;
```
I get `The Quad Las Vegas Resort & Casino|2`
But if I run:
```
select name
from (select name, min(stars)
from businesses
where ratings >= 400);
```
I get `Giada`.
I know that there's a way to do this with `LIMIT X`, but why does this occur in the first place? Is there something about the SQLite parser that I'm neglecting?
|
In the [SQLite docs](https://www.sqlite.org/lang_select.html), it says:
> If the SELECT statement is an aggregate query without a GROUP BY
> clause, then each aggregate expression in the result-set is evaluated
> once across the entire dataset. Each non-aggregate expression in the
> result-set is evaluated once for an arbitrarily selected row of the
> dataset. The same arbitrarily selected row is used for each
> non-aggregate expression.
In other words, the actual name you are getting is purely random in this case, it does not bear any relation to the row that actually has the minimum value of `stars`.
According to the SQL-92 standard, in a query containing aggregate expressions and non-aggregate expressions all of the non-aggregate expressions must appear in the GROUP BY clause. SQL-2003 has a similar but somewhat more complicated set of rules. This [blog article](http://rpbouman.blogspot.com.au/2007/05/debunking-group-by-myths.html) provides a summary.
Some databases allow making queries that include aggregate and non-aggregate expressions without any GROUP BY clause, or without all non-aggregates included in the GROUP BY clause, but as you have discovered the results may be indeterminate. Other databases will display an error and refuse to run the query.
It is hard to give specific advice on how to correct your query because you have not stated what output you are trying to get. If you are trying to find out which row has the minimum value of `stars` then one of the proposals in Juan's answer should work.
|
The problem is that your `MIN()` function brings back the smallest value for `stars`, but not the name matching that row.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!9/44391/2)
You can do a cross join
```
SELECT name
FROM businesses b,
( SELECT min(stars) as MinValue
FROM businesses
WHERE ratings >= 400) as M
WHERE b.stars = M.MinValue;
```
OR Inner Select
```
SELECT name
FROM businesses b
WHERE b.stars = ( SELECT min(stars) as MinValue
FROM businesses
WHERE ratings >= 400);
```
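The second form is straightforward to check in SQLite itself. A sketch with made-up rows (column names from the question; the `ratings >= 400` filter is repeated in the outer query here, an addition to keep the candidate set consistent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE businesses (name TEXT, stars REAL, ratings INTEGER);
INSERT INTO businesses VALUES
  ('The Quad', 2, 500), ('Giada', 4, 450), ('Pho Kim Long', 3.5, 300);
""")

# Correlate on the scalar minimum instead of using a bare aggregate.
rows = conn.execute("""
SELECT name
FROM businesses b
WHERE b.ratings >= 400
  AND b.stars = (SELECT min(stars) FROM businesses WHERE ratings >= 400)
""").fetchall()

print(rows)  # the row actually holding the minimum, not an arbitrary one
```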
|
Why does nested query in SQLite return the wrong value?
|
[
"",
"sql",
"database",
"sqlite",
""
] |
This is the table `test`:
```
ID order_no film_type process line
100 RXQFW40-1 FW SL HL21
101 RXQFW46-1-1 EXFW EX HE15
103 RXQFW49-1 FW SL HL21
173 RXQFW49-1-1 EXFW EX HE15
107 RXQFW4E-1 FW SL HL21
115 RXQFW4E-1 FW SL HL21
169 RXQFW4E-1-1 EXFW EX HE13
168 RXQFW4E-1-1 EXFW EX HE13
104 RXQFW4K-1 FW SL HL21
172 RXQFW4K-1-1 EXFW EX HE15
```
First, I want to filter on process='SL' and get the first 7 characters of order_no:
```
Select distinct substring(order_no,1,7) from test where process='SL'
```
Next, if the result also appears in:
```
Select distinct substring(order_no,1,7) from test where process='EX'
```
That will be the output:
```
order_no
RXQFW49-1
RXQFW4E-1
RXQFW4K-1
```
|
Well you can use a `WHERE EXISTS` like below by combining both the queries
```
Select distinct substring(order_no,1,7)
from test t
where process='EX'
and EXISTS ( Select 1 from test
where process='SL'
and substring(t.order_no,1,7) = substring(order_no,1,7));
```
|
A different approach to `EXISTS`: you can `JOIN` using the `substring` result, which is easier to read.
And it looks like the result is always `SL.order_no`, so there's no need to apply another substring there.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/0408a/4)
```
SELECT DISTINCT SL.order_no
FROM test SL
inner join test EX
on substring(SL.order_no,1,7) = substring(EX.order_no,1,7)
WHERE SL.process = 'SL'
AND EX.process = 'EX'
```
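The same self-join runs unchanged in SQLite apart from `substring` becoming `substr`; a sketch over the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (order_no TEXT, process TEXT);
INSERT INTO test VALUES
  ('RXQFW40-1','SL'), ('RXQFW46-1-1','EX'), ('RXQFW49-1','SL'),
  ('RXQFW49-1-1','EX'), ('RXQFW4E-1','SL'), ('RXQFW4E-1','SL'),
  ('RXQFW4E-1-1','EX'), ('RXQFW4E-1-1','EX'), ('RXQFW4K-1','SL'),
  ('RXQFW4K-1-1','EX');
""")

# Self-join on the 7-character prefix; DISTINCT collapses duplicate matches.
rows = conn.execute("""
SELECT DISTINCT SL.order_no
FROM test SL
JOIN test EX ON substr(SL.order_no, 1, 7) = substr(EX.order_no, 1, 7)
WHERE SL.process = 'SL' AND EX.process = 'EX'
ORDER BY 1
""").fetchall()

print(rows)  # SL orders whose prefix also appears among EX orders
```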
|
Query table and compare with another query (if exist, show in result)
|
[
"",
"sql",
"sql-server",
""
] |
I have a table BIDS which contains a lot of columns and rows, but I only want to select the row which contains the lowest BidPrice.
```
Select min(Cast(Bids.BidPrice as INT)), BidBidderName from bids BidBidderName
```
but it throws error
> Column 'bids.BidBidderName' is invalid in the select list because it
> is not contained in either an aggregate function
When I put BidBidderName in GROUP BY, it shows all records, but I only want the record which contains the lowest bid price.
|
You can use a subquery:
```
Select BidPrice, BidBidderName from bids
where Bids.BidPrice in (Select min(Cast(b.BidPrice as INT)) from bids b)
```
Or `INNER JOIN`:
```
Select b1.BidPrice, b1.BidBidderName from bids b1
inner join (select Min(BidPrice) BidPrice from bids) as b2 on b1.BidPrice = b2.BidPrice
```
|
Here is an option that will get just the row with the lowest price.
```
Select top 1 BidPrice
, BidBidderName
from bids
order by Cast(BidPrice as INT)
```
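`TOP 1` is SQL Server syntax; in MySQL, PostgreSQL, and SQLite the equivalent is `ORDER BY … LIMIT 1`. A sketch with made-up bids (prices stored as text, as the CAST in the question implies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bids (BidPrice TEXT, BidBidderName TEXT);
INSERT INTO bids VALUES ('100','Alice'), ('50','Bob'), ('75','Carol');
""")

# Order numerically via CAST and take only the first row.
row = conn.execute("""
SELECT BidPrice, BidBidderName
FROM bids
ORDER BY CAST(BidPrice AS INTEGER)
LIMIT 1
""").fetchone()

print(row)  # the single lowest bid
```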
|
How to select the record with minimum value from table?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
Say I have table 1 with:
```
stuff | sender_id | recipient_id
```
and I have table 2 with:
```
id | name
```
I want to return:
`stuff | sender_name | recipient_name`
or
`stuff | sender_id | sender_name | recipient_id | recipient_name`
Right now I'm looking at something like:
`SELECT * FROM table1 INNER JOIN table2 ON id=sender_id OR id=recipient_id` but that returns dupicates of stuff as a row fufills both conditionals.
Is there a way to get the information I want in 1 query?
|
A way is to use inner-selects like this:
```
SELECT stuff, sender_id
, (SELECT name FROM table2 WHERE id = sender_id) AS sender_name
, recipient_id
, (SELECT name FROM table2 WHERE id = recipient_id) AS recipient_name
FROM table1;
```
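A runnable sketch of the double scalar subselect in SQLite, with toy ids and names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (stuff TEXT, sender_id INT, recipient_id INT);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO table2 VALUES (1,'Ann'), (2,'Bob');
INSERT INTO table1 VALUES ('hello', 1, 2);
""")

# Each scalar subquery looks up one name; no row duplication occurs.
rows = conn.execute("""
SELECT stuff, sender_id,
       (SELECT name FROM table2 WHERE id = sender_id)    AS sender_name,
       recipient_id,
       (SELECT name FROM table2 WHERE id = recipient_id) AS recipient_name
FROM table1
""").fetchall()

print(rows)
```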
|
Solution with one reference to table2:
```
select t1.stuff
, max(case when t1.sender_id = t2.id then t2.name end) as sender_name
, max(case when t1.recipient_id = t2.id then t2.name end) as recipient_name
from t1
join t2
on t2.id in (t1.sender_id, t1.recipient_id)
group by t1.stuff;
```
It is a bit messy, but there are situations where it will be handy.
I created the tables and stuffed them with 10000 rows each (db2 express-c, 10.5 fixpak 1):
```
db2 "create table t1 (stuff int not null primary key, sender_id int not null, recipient_id int not null)"
db2 "create table t2 (id int not null primary key, name varchar(10) not null)"
db2 "insert into t1 with t (n) as ( values 0 union all select n+1 from t where n+1 < 10000) select n, 2*n, 2*n+1 from t"
db2 "insert into t2 with t (n) as ( values 0 union all select n+1 from t where n+1 < 10000) select 2*n, 'C' || rtrim(cast(2*n as char(10))) from t"
db2 runstats on table t1 with distribution and sampled detailed indexes all
db2 runstats on table t2 with distribution and sampled detailed indexes all
```
and checked the plan for the different queries, adding a where clause to limit the range.
Two sub-selects:
```
db2 "explain plan for SELECT stuff , (SELECT name FROM t2 WHERE id = sender_id) AS sender_name, (SELECT name FROM t2 WHERE id = recipient_id) AS recipient_name FROM t1 where t1.stuff between 500 and 600"
db2exfmt -d sample -g -1 -o sub.exfmt
```
Two joins:
```
db2 "explain plan for SELECT t1.stuff, tA.name as sender_name, tB.name as recipient_name from t1 join t2 as tA on t1.sender_id = tA.id join t2 as tB on t1.recipient_id = tB.id where t1.stuff between 500 and 600"
db2exfmt -d sample -g -1 -o dualjoin.exfmt
```
and finally the variant with aggregates and case:
```
db2 "explain plan for SELECT t1.stuff, max(case when t1.sender_id = t2.id then t2.name end) as sender_name, max(case when t1.recipient_id = t2.id then t2.name end) as recipient_name from t1 join t2 on t2.id in (t1.sender_id, t1.recipient_id) group by t1.stuff"
db2exfmt -d sample -g -1 -o singlejoin.exfmt
```
According to this rather unscientific test, the solution by @Juan Carlos Oropeza is the cheapest:
```
Access Plan:
-----------
Total Cost: 132.657
Query Degree: 1
Rows
RETURN
( 1)
Cost
I/O
|
101.808
^NLJOIN
( 2)
132.657
53
/-------+--------\
101.808 1
TBSCAN FETCH
( 3) ( 7)
13.6735 13.6215
2 2
| /---+----\
101.808 1 10000
SORT IXSCAN TABLE: LELLE
( 4) ( 8) T2
13.6733 6.81423 Q1
2 1
| |
101.808 10000
FETCH INDEX: SYSIBM
( 5) SQL150906110744470
13.6625 Q1
2
/---+----\
101.808 10000
IXSCAN TABLE: LELLE
( 6) T1
6.84113 Q2
1
|
10000
INDEX: SYSIBM
SQL150906110646160
Q2
```
Using two sub-selects as in @shA.t's answer is a bit more expensive:
```
Access Plan:
-----------
Total Cost: 251.679
Query Degree: 1
Rows
RETURN
( 1)
Cost
I/O
|
101.808
>^NLJOIN
( 2)
251.679
103.99
/-------+--------\
101.808 1
TBSCAN FETCH
( 3) ( 12)
132.695 13.6215
52.9898 2
| /---+----\
101.808 1 10000
SORT IXSCAN TABLE: LELLE
( 4) ( 13) T2
132.691 6.81423 Q1
52.9898 1
| |
101.808 10000
>^NLJOIN INDEX: SYSIBM
( 5) SQL150906110744470
132.67 Q1
52.9898
/-------+--------\
101.808 1
TBSCAN FETCH
( 6) ( 10)
13.6881 13.6215
2 2
| /---+----\
101.808 1 10000
SORT IXSCAN TABLE: LELLE
( 7) ( 11) T2
13.6839 6.81423 Q2
2 1
| |
101.808 10000
FETCH INDEX: SYSIBM
( 8) SQL150906110744470
13.6625 Q2
2
/---+----\
101.808 10000
IXSCAN TABLE: LELLE
( 9) T1
6.84113 Q3
1
|
10000
INDEX: SYSIBM
SQL150906110646160
Q3
```
My solution is the most expensive one:
```
Access Plan:
-----------
Total Cost: 758.822
Query Degree: 1
Rows
RETURN
( 1)
Cost
I/O
|
10000
GRPBY
( 2)
758.139
124.996
|
20000
NLJOIN
( 3)
756.923
124.996
/----------+----------\
10000 2
FETCH FETCH
( 4) ( 6)
122.351 27.0171
49 3.96667
/---+----\ /---+----\
10000 10000 2 10000
IXSCAN TABLE: LELLE RIDSCN TABLE: LELLE
( 5) T1 ( 7) T2
58.1551 Q2 13.6291 Q1
21 2
| /-------+-------\
10000 1.0016 1.0016
INDEX: SYSIBM SORT SORT
SQL150906110646160 ( 8) ( 10)
Q2 6.81465 6.81465
1 1
| |
1.0016 1.0016
IXSCAN IXSCAN
( 9) ( 11)
6.81423 6.81423
1 1
| |
10000 10000
INDEX: SYSIBM INDEX: SYSIBM
SQL150906110744470 SQL150906110744470
Q1 Q1
```
|
Map 2 columns to 2 rows in 1 column?
|
[
"",
"sql",
""
] |
I have the following table
```
timestamp parameter value
---------------------------------------------------
2015-09-04 10:00:00.000 par01 1
2015-09-04 10:03:00.000 par02 2
2015-09-04 10:06:00.000 par03 3
2015-09-04 10:09:00.000 par04 4
2015-09-04 10:12:00.000 par05 5
2015-09-04 10:15:00.000 par06 6
2015-09-04 10:18:00.000 par01 7
2015-09-04 10:21:00.000 par02 8
2015-09-04 10:24:00.000 par03 9
2015-09-04 10:27:00.000 par04 10
```
I would like to calculate the weighted average every 15 minutes. The result must be like this:
```
timestamp parameter value
---------------------------------------------------
2015-09-04 10:00:00.000 result1 3
2015-09-04 10:15:00.000 result2 8
```
What's the fastest way? Is it possible to avoid loops?
|
This will give you the result you are requesting:
```
DECLARE @t table(timestamp datetime, parameter char(5), value int)
INSERT @t values
('2015-09-04 10:00:00.000','par01',1),
('2015-09-04 10:03:00.000','par02',2),
('2015-09-04 10:06:00.000','par03',3),
('2015-09-04 10:09:00.000','par04',4),
('2015-09-04 10:12:00.000','par05',5),
('2015-09-04 10:15:00.000','par06',6),
('2015-09-04 10:18:00.000','par01',7),
('2015-09-04 10:21:00.000','par02',8),
('2015-09-04 10:24:00.000','par03',9),
('2015-09-04 10:27:00.000','par04',10)
SELECT
dateadd(minute, datediff(minute, 0,timestamp)/15*15, 0) timestamp,
'Result' + cast(row_number() over (order by datediff(minute, 0,timestamp)/15)
as varchar(10)) parameter,
avg(value) value
FROM @t
GROUP BY datediff(minute, 0,timestamp)/15
```
Result:
```
timestamp parameter value
2015-09-04 10:00 Result1 3
2015-09-04 10:15 Result2 8
```
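The simple (unweighted) 15-minute bucket average can be sketched outside SQL as well; the truncation below mirrors `datediff(minute, 0, timestamp)/15*15`, and the sample data is taken from the question:

```python
from datetime import datetime
from collections import defaultdict

# Sample rows copied from the question's table
rows = [
    ("2015-09-04 10:00:00", 1), ("2015-09-04 10:03:00", 2),
    ("2015-09-04 10:06:00", 3), ("2015-09-04 10:09:00", 4),
    ("2015-09-04 10:12:00", 5), ("2015-09-04 10:15:00", 6),
    ("2015-09-04 10:18:00", 7), ("2015-09-04 10:21:00", 8),
    ("2015-09-04 10:24:00", 9), ("2015-09-04 10:27:00", 10),
]

buckets = defaultdict(list)
for ts, value in rows:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    # Truncate to the start of the 15-minute slot, like datediff(...)/15*15
    slot = t.replace(minute=t.minute // 15 * 15, second=0, microsecond=0)
    buckets[slot].append(value)

result = {slot: sum(v) / len(v) for slot, v in sorted(buckets.items())}
# result -> {10:00 slot: 3.0, 10:15 slot: 8.0}
```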
**EDIT:** Here is a method to calculate the weighted average. It has been a learning experience for me:
```
;WITH CTE as
(
SELECT
cast(coalesce(lead(timestamp) over (order by timestamp),
dateadd(minute, datediff(minute, 0,timestamp)/15*15, '00:15'))
- timestamp as float)*24*60/15*value x,
dateadd(minute, datediff(minute, 0,timestamp)/15*15, 0) truncatedtime
FROM @t
)
SELECT
sum(x) weighted_average,
truncatedtime
FROM
cte
GROUP BY
truncatedtime
```
|
try this:
```
SELECT MIN(timestamp) as timestamp,
'result' + cast(count(*) over(order by min(timestamp)) as varchar(10)) as parameter,
AVG(value) as value
FROM tbl
GROUP BY CAST(timestamp as Date),
DATEPART(Hour, timestamp),
DATEPART(Minute, timestamp) / 15
```
[see fiddle here](http://sqlfiddle.com/#!3/50327/1)
|
Fastest way to calculate time periods weighted average in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I try to use an SQL query in Access but it doesn't work. Why?
```
SELECT * FROM table
EXCEPT
SELECT DISTINCT name FROM table;
```
I get a syntax error in the FROM clause.
|
Use a `HAVING COUNT(name) > 1` clause. Note that the subquery must return only the `name` column for `IN` to work:
```
SELECT * FROM Table1
WHERE [name] IN
 (SELECT name
 FROM Table1
 GROUP BY name
 HAVING COUNT(name)>1)
```
|
MS Access does not support the EXCEPT keyword. You can emulate it with a LEFT JOIN plus a NULL check, like this:
```
select t1.* from table1 t1 left join table2 t2 on t1.name = t2.name where t2.name is null
```
**EDIT:**
If you want to find the duplicates in your table then you can try this:
```
SELECT name, COUNT(*)
FROM table
GROUP BY name
HAVING COUNT(*) > 1
```
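The `GROUP BY` / `HAVING` duplicate search works the same way in any SQL dialect; here is a minimal SQLite sketch (the table name and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [("Alice",), ("Bob",), ("Alice",), ("Carol",), ("Bob",)])

# Names that appear more than once, with their counts
dupes = conn.execute("""
    SELECT name, COUNT(*) AS cnt
    FROM people
    GROUP BY name
    HAVING COUNT(*) > 1
    ORDER BY name
""").fetchall()
# dupes -> [('Alice', 2), ('Bob', 2)]
```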
You can also refer: [Create a Query in Microsoft Access to Find Duplicate Entries in a Table](http://www.howtogeek.com/howto/microsoft-office/create-a-query-in-microsoft-access-to-find-duplicate-entries-in-a-table/) and follow the steps to find the duplicates in your table.
> First open the MDB (Microsoft Database) containing the table you want
> to check for duplicates. Click on the Queries tab and New.
>
> [](https://i.stack.imgur.com/QTVcV.png)
>
> This will open the New Query dialog box. Highlight Find Duplicates
> Query Wizard then click OK.
>
> [](https://i.stack.imgur.com/qCpEV.png)
>
> Now highlight the table you want to check for duplicate data. You can
> also choose Queries or both Tables and Queries. I have never seen a
> use for searching Queries … but perhaps it would come in handy for
> another’s situation. Once you’ve highlighted the appropriate table
> click Next.
>
> [](https://i.stack.imgur.com/YIPYx.png)
>
> Here we will choose the field or fields within the table we want to
> check for duplicate data. Try to avoid generalized fields.
>
> [](https://i.stack.imgur.com/tyzAw.png)
>
> Name the Query and hit Finish. The Query will run right away and pop
> up the results. Also the Query is saved in the Queries section of
> Access.
>
> [](https://i.stack.imgur.com/ayQkB.png)
>
> Depending upon the selected tables and fields your results will look
> something similar to the shots below which show I have nothing
> duplicated in the first shot and the results of duplicates in the
> other.
>
> [](https://i.stack.imgur.com/rHK2E.png) [](https://i.stack.imgur.com/fdB4M.png)
|
How to find duplicates in a table using Access SQL?
|
[
"",
"sql",
"ms-access",
""
] |
I have some duplicate values in a table and I'm trying to use `Row_Number` to filter them out. I want to order the rows using `datediff` and order the results based on the closest value to zero but I'm struggling to account for negative values.
Below is a sample of the data and my current `Row_Number` field (`rn`) column:
```
PersonID SurveyDate DischargeDate DaysToSurvey rn
93638 10/02/2015 30/03/2015 -48 1
93638 27/03/2015 30/03/2015 -3 2
250575 23/10/2014 29/10/2014 -6 1
250575 19/11/2014 24/11/2014 -5 2
203312 23/01/2015 26/01/2015 -3 1
203312 26/01/2015 26/01/2015 0 2
387737 19/02/2015 26/02/2015 -7 1
387737 26/02/2015 26/02/2015 0 2
751915 02/04/2015 04/04/2015 -2 1
751915 10/04/2015 25/03/2015 16 2
712364 24/01/2015 30/01/2015 -6 1
712364 26/01/2015 30/01/2015 -4 2
```
My select statement for the above is:
```
select
PersonID, SurveyDate, DischargeDate,
datediff(dd,dischargedate,surveydate) days,
ROW_NUMBER () over (partition by PersonID
order by datediff(dd, dischargedate, surveydate) asc) as rn
from
Table 1
order by
PersonID, rn
```
What I want to do is change the sort order so it displays like this:
```
PersonID SurveyDate DischargeDate DaysToSurvey rn
93638 27/03/2015 30/03/2015 -3 1
93638 10/02/2015 30/03/2015 -48 2
250575 19/11/2014 24/11/2014 -5 1
250575 23/10/2014 29/10/2014 -6 2
```
So the `DaysToSurvey` value that is closest to the `DischargeDate` is ranked as rn 1.
Is this possible?
|
You're close. Just add [`ABS()`](https://msdn.microsoft.com/en-US/library/ms189800(v=sql.120).aspx) to calculate absolute values of the differences:
```
ROW_NUMBER () OVER (
PARTITION BY PersonID
ORDER BY abs(datediff(dd, dischargedate, surveydate)) asc
) AS rn
```
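The same closest-to-zero ranking can be sketched in plain Python to see what `ABS()` inside the `ORDER BY` does; the data below is a subset of the question's rows:

```python
from itertools import groupby

# (PersonID, DaysToSurvey) pairs copied from the question
rows = [
    (93638, -48), (93638, -3),
    (250575, -6), (250575, -5),
    (203312, -3), (203312, 0),
]

ranked = []
# groupby needs sorted input; sort by person first
for person, group in groupby(sorted(rows), key=lambda r: r[0]):
    # Within each person, rank by distance from zero, like abs(datediff(...))
    for rn, (_, days) in enumerate(sorted(group, key=lambda r: abs(r[1])), 1):
        ranked.append((person, days, rn))
```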
|
You could use [`abs`](https://msdn.microsoft.com/en-us/library/ms189800.aspx) to get the distance from zero:
```
select PersonID, SurveyDate, DischargeDate, datediff(dd,dischargedate,surveydate) days,
ROW_NUMBER () over (partition by PersonID order by abs(datediff(dd,dischargedate,surveydate)) asc) as rn
from Table 1
order by PersonID, rn
```
|
SQL Server order by closest value to zero
|
[
"",
"sql",
"sql-server",
"select",
"sql-order-by",
"row-number",
""
] |
As the title says the query needs to combine multiple select queries. The question is as follows:
**Display the total number of employees, and of that total the number of employees hired in 1995,1996,1997,1998.**
My query:
```
select (select count(*) from employees) as "Total",
(select count(*) from employees where hire_date between 'JAN-1-0095' and 'DEC-1-0095')as "1995",
(select count(*) from employees where hire_date between 'JAN-1-0096' and 'DEC-1-0096') as "1996",
(select count(*) from employees where hire_date between 'JAN-1-0097' and 'DEC-1-0097') as "1997",
(select count(*) from employees where hire_date between 'JAN-1-0098' and 'DEC-1-0098') as "1998"
from employees
```
but the issue is instead of returning only single record this query is being executed for all the records in the table and hence producing the following output:
[](https://i.stack.imgur.com/6QZD0.png)
|
Assuming the years are really what you want, the problem with your query is that you are selecting from `employees`, so you get a row for each one. You could use:
```
select (select count(*) from employees) as "Total",
(select count(*) from employees where hire_date between 'JAN-1-0095' and 'DEC-1-0095')as "1995",
(select count(*) from employees where hire_date between 'JAN-1-0096' and 'DEC-1-0096') as "1996",
(select count(*) from employees where hire_date between 'JAN-1-0097' and 'DEC-1-0097') as "1997",
(select count(*) from employees where hire_date between 'JAN-1-0098' and 'DEC-1-0098') as "1998"
from dual;
```
And I would use `date '1998-01-01'` for the date constants.
However, I prefer @a\_horse\_with\_no\_name's solution.
|
You can use conditional counting:
```
select count(*) as total_count,
count(case when extract(year from hire_date) = 1995 then 1 end) as "1995",
count(case when extract(year from hire_date) = 1996 then 1 end) as "1996",
count(case when extract(year from hire_date) = 1997 then 1 end) as "1997",
       count(case when extract(year from hire_date) = 1998 then 1 end) as "1998"
from employees;
```
This makes use of the fact that aggregate functions ignore NULL values and therefore the `count()` will only count those rows where the `case` expression returns a non-null value.
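A quick way to check the conditional-counting behaviour is with SQLite (note that `extract(year from ...)` becomes `strftime('%Y', ...)` there; the table and data below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (hire_date TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)",
                 [("1995-03-01",), ("1995-07-15",), ("1996-01-10",), ("1998-12-31",)])

# COUNT(expr) skips NULLs, so a CASE with no ELSE counts only matching rows
row = conn.execute("""
    SELECT COUNT(*),
           COUNT(CASE WHEN strftime('%Y', hire_date) = '1995' THEN 1 END),
           COUNT(CASE WHEN strftime('%Y', hire_date) = '1996' THEN 1 END)
    FROM employees
""").fetchone()
# row -> (4, 2, 1)
```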
---
Your query returns one row for each row in the employees table because you do not apply any grouping. Each select is a scalar sub-select that gets executed for each and every row in the `employees` table.
You *could* make it only return a single row if you replace the final `from employees` with `from dual` - but you'd still count over all rows within each sub-select.
---
You should also avoid implicit data type conversion like you did. `'JAN-1-0095'` is a string and will implicitly be converted to a `date` depending on your NLS settings. Your query would not run if executed from my computer (because of different NLS settings).
As you are looking for a complete year, just comparing the year is a bit shorter to write and easier to understand (at least in my eyes).
Another option would be to use proper date literals, e.g. `where hire_date between DATE '1995-01-01' and DATE '1995-12-31'` or a bit more verbose using Oracle's `to_date()` function: `where hire_date between to_date('1995-01-01', 'yyyy-mm-dd') and to_date('1995-12-31', 'yyyy-mm-dd')`
|
Combining multiple SELECT queries in one
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have to generate a sales report for the current day in which the user will select a start hour and an end hour.
It will be max 24 hours.
The start hour will be 4:00 AM and the max end hour can be 4:00 AM the next day.
**The below query will return date-time and amount of sale**
```
select s.StartDate ,
CONVERT(DECIMAL(10,2),sum(OrigionalSubTotal)/100.0) Amt from Sale s
where
s.StartDate
BETWEEN '2015-07-03 04:00:01'
and '2015-07-04 04:00:00'
and s.IsSuspend = 0 and s.IsTrainMode = 0 and wasrefunded=0
and IsCancelled = 0
group by S.StartDate
order by s.StartDate
```
**O/p**
```
StartDate Amt
2015-07-03 17:01:15.780 10.00
2015-07-03 18:45:57.360 10.00
2015-07-03 18:48:41.250 20.00
2015-07-03 19:02:50.850 5.00
2015-07-03 19:04:45.090 15.00
2015-07-03 19:18:38.960 10.00
2015-07-03 21:12:25.700 100.00
2015-07-03 21:16:30.730 20.00
2015-07-03 22:21:09.380 30.00
2015-07-03 23:38:32.050 34.00
2015-07-04 00:39:46.790 200.00
2015-07-04 01:00:14.820 106.00
```
From this I need to compute hourly sales.
Consider the current day is `03-July-2015`.
Let's say the user selects `16:00 (04:00 PM) - 04:00 AM (next day 04-July-2015)`.
**Then the desired o/p should be like below**
```
Hour Amount
16:00 - 17:00 0.00 -- No sale row between this time
17:00 - 18:00 10.00 -- sale between 17:00 to 17:59
18:00 - 19:00 30.00
19:00 - 20:00 30.00
20:00 - 21:00 0.00 -- No sale row between this time
21:00 - 22:00 120.00
22:00 - 23:00 30.00
23:00 - 0:00 34.00
0:00 - 1:00 200.00
1:00 - 2:00 106.00
2:00 - 3:00 0.00
3:00 - 4:00 0.00
```
**I tried below query to achieve this**
```
select STUFF(CONVERT(CHAR(13), s.StartDate , 120), 11, 1, ' ') ,
DATEPART(HOUR,s.startdate),
CONVERT(DECIMAL(10,2),sum(OrigionalSubTotal)/100.0) from Sale s
where
s.StartDate
BETWEEN '2015-07-03 04:00:01'
and '2015-07-04 04:00:00'
and s.IsSuspend = 0 and s.IsTrainMode = 0 and wasrefunded=0
and IsCancelled = 0
group by STUFF(CONVERT(CHAR(13), s.StartDate , 120), 11, 1, ' '),DATEPART(HOUR,s.startdate)
order by STUFF(CONVERT(CHAR(13), s.StartDate , 120), 11, 1, ' '),DATEPART(HOUR,s.startdate)
```
**O/P is like below**
```
Date Hour Amt
2015-07-03 17 17 10.00
2015-07-03 18 18 30.00
2015-07-03 19 19 30.00
2015-07-03 21 21 120.00
2015-07-03 22 22 30.00
2015-07-03 23 23 34.00
2015-07-04 0 0 200.00
2015-07-04 01 1 106.00
```
How can I achieve the desired o/p from this? Please help.
**Edited**
**Table structure**
```
Saleid - uniqueidentifier, e.g. 5D0AC452-2F01-E511-8502-0019178A0F32
startDate - datetime, e.g. 2015-05-23 13:37:32.880
OrigionalSubTotal - int, e.g. 5400 (the last two digits are decimals); it's a table of customized software, so I cannot change the type
```
[SQL Fiddle](http://sqlfiddle.com/#!3/99562/2)
|
```
declare @starttime datetime = '2015-07-03 16:00:01'
declare @endtime datetime = '2015-07-04 04:00:00'
;with reporttable as
(
select s.StartDate ,
CONVERT(DECIMAL(10,2),sum(OrigionalSubTotal)/100.0) Amt from Sale s
where
s.StartDate
BETWEEN @starttime
and @endtime
and s.IsSuspend = 0 and s.IsTrainMode = 0 and wasrefunded=0
and IsCancelled = 0
group by S.StartDate
),
CTE
AS
(
SELECT 0 AS HR
UNION ALL
SELECT HR+1 AS HR FROM CTE WHERE HR<23
)
,cte1 as
(
SELECT (select cast(min(startdate) as date) from ReportTable) as [date],c.hr as hr,cast(c.hr as varchar(100))+'-'+cast(c.hr+1 as varchar(100)) as period,sum(isnull(originalsubtotal,0)) as total
FROM CTE c
left join ReportTable RT on c.hr = datepart(hh,rt.startDate) and cast(rt.startdate as date) = (select cast(min(startdate) as date) from ReportTable)
group by c.hr,cast(rt.startdate as date)
union all
SELECT (select cast(max(startdate) as date) from ReportTable) as [date],c.hr as hr,cast(c.hr as varchar(100))+'-'+cast(c.hr+1 as varchar(100)) as period,sum(isnull(originalsubtotal,0)) as total
FROM CTE c
left join ReportTable RT on c.hr = datepart(hh,rt.startDate) and cast(rt.startdate as date) = (select cast(max(startdate) as date) from ReportTable)
group by c.hr,cast(rt.startdate as date)
)
select * from cte1 where (cast([date] as date)=cast(@starttime as date) and hr>=datepart(hh,@starttime)) and (cast([date] as date)=cast(@endtime as date) and hr<=datepart(hh,@endtime))
order by [date],hr
```
|
I've approached this in two steps:
1. Get the range of `datetime` values using `MIN` and `MAX` on the data
2. Use these values to create the full range of dates and hours in a [**CTE**](https://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396) and join them back on to the data.
The CTE will produce this lookup table to join back on to the main data:
```
| DateVal | HourVal |
|------------|---------|
| 2015-07-03 | 17 |
| 2015-07-03 | 18 |
| 2015-07-03 | 19 |
| 2015-07-03 | 20 |
| 2015-07-03 | 21 |
| 2015-07-03 | 22 |
| 2015-07-03 | 23 |
| 2015-07-04 | 0 |
| 2015-07-04 | 1 |
```
**Runnable sample:**
The sample code is commented to explain what each step is doing.
```
-- dummy table
CREATE TABLE #Sale
(
[StartDate] DATETIME ,
[Amt] INT
);
-- fill dummy data
INSERT INTO #Sale
( [StartDate], [Amt] )
VALUES ( '2015-07-03 17:01:15', 10.00 ),
( '2015-07-03 18:45:57', 10.00 ),
( '2015-07-03 18:48:41', 20.00 ),
( '2015-07-03 19:02:50', 5.00 ),
( '2015-07-03 19:04:45', 15.00 ),
( '2015-07-03 19:18:38', 10.00 ),
( '2015-07-03 21:12:25', 100.00 ),
( '2015-07-03 21:16:30', 20.00 ),
( '2015-07-03 22:21:09', 30.00 ),
( '2015-07-03 23:38:32', 34.00 ),
( '2015-07-04 00:39:46', 200.00 ),
( '2015-07-04 01:00:14', 106.00 );
DECLARE @minDate DATETIME ,
@maxDate DATETIME
-- set min date
SELECT TOP 1
@minDate = StartDate
FROM #Sale
ORDER BY StartDate
-- set max date
SELECT TOP 1
@maxDate = StartDate
FROM #Sale
ORDER BY StartDate DESC
-- cte to iterate between min and max, to generate unique date and hour vals for range
;WITH cte
AS ( SELECT CONVERT(DATE, StartDate) AS DateVal ,
DATEPART(HOUR, StartDate) AS HourVal
FROM #Sale
WHERE StartDate = @minDate
UNION ALL
SELECT CASE WHEN cte.HourVal + 1 > 23
THEN DATEADD(DAY, 1, cte.DateVal)
ELSE cte.DateVal
END AS DateVal ,
CASE WHEN cte.HourVal + 1 = 24 THEN 0
ELSE cte.HourVal + 1
END AS HourVal
FROM cte
WHERE DATEADD(HOUR, CASE WHEN cte.HourVal + 1 = 24 THEN 0
ELSE cte.HourVal + 1
END,
CONVERT(DATETIME, CASE WHEN cte.HourVal + 1 = 24
THEN DATEADD(DAY, 1,
cte.DateVal)
ELSE cte.DateVal
END)) <= @maxDate
)
-- join results of cte to source data on date and hour with sum/group by
SELECT cte.DateVal ,
cte.HourVal ,
-- covers hours with no sales
COALESCE(SUM(s.Amt), 0) AS Amt
FROM cte
LEFT JOIN #Sale s ON cte.DateVal = CONVERT(DATE, s.StartDate)
AND cte.HourVal = DATEPART(HOUR, s.StartDate)
GROUP BY cte.DateVal ,
cte.HourVal
ORDER BY cte.DateVal ,
cte.HourVal
DROP TABLE #Sale
```
**Output**
```
| DateVal | HourVal | Amt |
|------------|---------|-----|
| 2015-07-03 | 17 | 10 |
| 2015-07-03 | 18 | 30 |
| 2015-07-03 | 19 | 30 |
| 2015-07-03 | 20 | 0 |
| 2015-07-03 | 21 | 120 |
| 2015-07-03 | 22 | 30 |
| 2015-07-03 | 23 | 34 |
| 2015-07-04 | 0 | 200 |
| 2015-07-04 | 1 | 106 |
```
## [SQL Fiddle Demo](http://sqlfiddle.com/#!6/1da04/1)
I've ignored the outliers in the above, as there is no data outside the range of the min/max dates. If you need this, you can of course tweak the min/max values as shown in the code below. This modified version will take user input for the date range to produce your desired output:
```
DECLARE @minDate DATETIME ,
@maxDate DATETIME
-- set min date
SET @minDate = '2015-07-03 16:00:00'
-- set max date
SET @maxDate = '2015-07-04 04:00:00'
-- cte to iterate between min and max, to generate unique date and hour vals for range
;WITH cte
AS ( SELECT CONVERT(DATE, @minDate) AS DateVal ,
DATEPART(HOUR, @minDate) AS HourVal
UNION ALL
SELECT CASE WHEN cte.HourVal + 1 > 23
THEN DATEADD(DAY, 1, cte.DateVal)
ELSE cte.DateVal
END AS DateVal ,
CASE WHEN cte.HourVal + 1 = 24 THEN 0
ELSE cte.HourVal + 1
END AS HourVal
FROM cte
WHERE DATEADD(HOUR, CASE WHEN cte.HourVal + 1 = 24 THEN 0
ELSE cte.HourVal + 1
END,
CONVERT(DATETIME, CASE WHEN cte.HourVal + 1 = 24
THEN DATEADD(DAY, 1,
cte.DateVal)
ELSE cte.DateVal
END)) <= @maxDate
)
-- join results of cte to source data on date and hour with sum/group by
SELECT cte.DateVal ,
CONVERT(NVARCHAR(2),cte.HourVal) + ':00 -' +
CONVERT(NVARCHAR(2),cte.HourVal+1) + ':00' AS [Hours],
-- covers hours with no sales
COALESCE(SUM(s.Amt), 0) AS Amt
FROM cte
LEFT JOIN #Sale s ON cte.DateVal = CONVERT(DATE, s.StartDate)
AND cte.HourVal = DATEPART(HOUR, s.StartDate)
GROUP BY cte.DateVal ,
cte.HourVal
ORDER BY cte.DateVal ,
cte.HourVal
```
**Output**
```
| DateVal | Hours | Amt |
|------------|--------------|-----|
| 2015-07-03 | 16:00 -17:00 | 0 |
| 2015-07-03 | 17:00 -18:00 | 10 |
| 2015-07-03 | 18:00 -19:00 | 30 |
| 2015-07-03 | 19:00 -20:00 | 30 |
| 2015-07-03 | 20:00 -21:00 | 0 |
| 2015-07-03 | 21:00 -22:00 | 120 |
| 2015-07-03 | 22:00 -23:00 | 30 |
| 2015-07-03 | 23:00 -24:00 | 34 |
| 2015-07-04 | 0:00 -1:00 | 200 |
| 2015-07-04 | 1:00 -2:00 | 106 |
| 2015-07-04 | 2:00 -3:00 | 0 |
| 2015-07-04 | 3:00 -4:00 | 0 |
| 2015-07-04 | 4:00 -5:00 | 0 |
```
## [SQL Fiddle Demo](http://sqlfiddle.com/#!6/1da04/4)
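The core idea of the answer, pre-building every hour slot in the range and left-joining the sales onto it so empty hours show as zero, can be sketched in plain Python (the sales data is a subset of the question's output):

```python
from datetime import datetime, timedelta

sales = [
    (datetime(2015, 7, 3, 17, 1), 10.0),
    (datetime(2015, 7, 3, 18, 45), 10.0),
    (datetime(2015, 7, 3, 18, 48), 20.0),
    (datetime(2015, 7, 3, 21, 12), 100.0),
]

start = datetime(2015, 7, 3, 16, 0)
end = datetime(2015, 7, 3, 22, 0)

# Zero-fill every hour bucket up front, like the recursive CTE does
report = {}
slot = start
while slot < end:
    report[slot] = 0.0
    slot += timedelta(hours=1)

# Accumulate each sale into its hour bucket (the LEFT JOIN + SUM step)
for ts, amt in sales:
    bucket = ts.replace(minute=0, second=0, microsecond=0)
    if bucket in report:
        report[bucket] += amt
```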
|
Show the total sale in Hourly basis
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
What I am trying to do is create two timestamps: a `StartDate` timestamp which will be `09/08/2015 00:00:00` and an `EndDate` timestamp which should be `09/08/2015 23:59:59`. As easy as this is to achieve in MS SQL, I have not been able to find a `Make_Date` or `Add_Days` function to get either of the timestamps in Oracle PL/SQL.
Can anyone help me out?
|
Use **TO\_DATE** to convert a **string** into a **DATE**.
```
SQL> alter session set nls_date_format='mm/dd/yyyy hh24:mi:ss';
Session altered.
SQL> SELECT to_date('09/08/2015 00:00:00' ,'mm/dd/yyyy hh24:mi:ss') start_date,
2 to_date('09/08/2015 23:59:59' ,'mm/dd/yyyy hh24:mi:ss') end_date
3 FROM dual;
START_DATE END_DATE
------------------- -------------------
09/08/2015 00:00:00 09/08/2015 23:59:59
SQL>
```
You could also use the **[ANSI TIMESTAMP Literal](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements003.htm#BABGIGCJ)**.
```
SQL> SELECT TIMESTAMP '2015-08-09 00:00:00' start_date,
2 TIMESTAMP '2015-08-09 23:59:59' end_date
3 FROM dual;
START_DATE END_DATE
---------------------------- -------------------------------
09-AUG-15 12.00.00.000000000 09-AUG-15 11.59.59.000000000 PM
SQL>
```
**Update** OP wants the date literal to be dynamic.
```
SQL> SELECT TRUNC(SYSDATE) start_date,
2 TRUNC(SYSDATE) + 86399 / 86400 end_date
3 FROM dual;
START_DATE END_DATE
------------------- -------------------
09/08/2015 00:00:00 09/08/2015 23:59:59
```
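The same "start of day / last second of day" computation, where `TRUNC(SYSDATE)` corresponds to zeroing out the time part, looks like this in Python (the fixed `now` value stands in for `SYSDATE`):

```python
from datetime import datetime, timedelta

now = datetime(2015, 8, 9, 15, 46, 14)   # stand-in for SYSDATE
# TRUNC(SYSDATE): drop the time-of-day part
start_date = now.replace(hour=0, minute=0, second=0, microsecond=0)
# TRUNC(SYSDATE) + 86399/86400: one second before the next midnight
end_date = start_date + timedelta(days=1) - timedelta(seconds=1)
# start_date -> 2015-08-09 00:00:00, end_date -> 2015-08-09 23:59:59
```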
**Update 2** OP wants to know why the time part is hidden in the date.
```
SQL> alter session set nls_date_format='mm/dd/yyyy';
Session altered.
SQL> SELECT sysdate FROM DUAL;
SYSDATE
----------
09/08/2015
SQL> alter session set nls_date_format='mm/dd/yyyy hh24:mi:ss';
Session altered.
SQL> SELECT sysdate FROM DUAL;
SYSDATE
-------------------
09/08/2015 15:46:14
```
So, what happened above? The same **SYSDATE** returns two different values. The reason is that the **DATE** has both *datetime* elements, what you see depends on the display properties driven by your **locale-specific NLS settings**.
> Use **TO\_CHAR** to convert the date into string to display it in your
> desired format.
|
Rather than using fractional numbers `86399 / 86400` (which requires some working out when reviewing the code to see why you picked those magic numbers) to get the end date, you can explicitly state the time periods using `INTERVAL`s (which makes it easy to see at a glance what you are doing):
[SQL Fiddle](http://sqlfiddle.com/#!4/9eecb7d/6008)
**Oracle 11g R2 Schema Setup**:
**Query 1**:
```
SELECT TRUNC( CURRENT_DATE ) AS START_DATE,
TRUNC( CURRENT_DATE ) + INTERVAL '1' DAY - INTERVAL '1' SECOND AS END_DATE
FROM DUAL
```
**[Results](http://sqlfiddle.com/#!4/9eecb7d/6008/0)**:
```
| START_DATE | END_DATE |
|-----------------------------|-----------------------------|
| September, 08 2015 00:00:00 | September, 08 2015 23:59:59 |
```
|
How to create day start and end date in PL SQL
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I am working on an Excel-sheet-based algorithm and trying to convert the logic to SQL. At one point in my SQL logic I need to create a temporary table, and I am stuck referencing one aliased column in another aliased column of that table. It looks something like this:
```
INSERT INTO @tbl_CensusTable AS CST
SELECT CS.gender, CS.salary, CS.birthday, CS.Age,
(some calculation) AS 'Column_5', (Some calculation * CST.Column_5) AS Column_6
From @tbl_tempTable CS
```
Am I writing the SQL statement correctly? Can I reference Column\_5 in Column\_6 the way I have used in the above example?
|
Try this:
```
INSERT INTO @tbl_CensusTable
SELECT CS.gender,
CS.salary,
CS.birthday,
CS.Age,
(Col5 calculation) AS 'Column_5',
((Some calculation) * (Col5 calculation)) AS Column_6
From @tbl_tempTable as CS
```
This should work as well:
```
WITH CTE as (
SELECT CS.gender,
CS.salary,
CS.birthday,
CS.Age,
(Col5 calculation) AS 'Column_5'
From @tbl_tempTable as CS
)
INSERT INTO @tbl_CensusTable
SELECT CS.gender,
CS.salary,
CS.birthday,
CS.Age,
Column_5,
((SomeCalculation) * Column_5) AS Column_6
From CTE as CS
```
|
```
INSERT INTO @tbl_CensusTable (<good practice to list the columns here>)
SELECT
CS.gender, CS.salary, CS.birthday, CS.Age,
CS.Column_5,
<another calculation> * CS.Column_5 AS Column_6
FROM
(SELECT *, <some calculation> AS Column_5 FROM @tbl_tempTable) AS CS
```
or
```
;WITH CS AS (
SELECT *, <some calculation> AS Column_5
FROM @tbl_tempTable
)
INSERT INTO @tbl_CensusTable (<good practice to list the columns here>)
SELECT
CS.gender, CS.salary, CS.birthday, CS.Age,
CS.Column_5,
<another calculation> * CS.Column_5 AS Column_6
FROM CS
```
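The CTE approach works because the alias is materialized one level down before it is reused; here is a minimal SQLite sketch of the idea (table name and formulas are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_table (salary INTEGER)")
conn.executemany("INSERT INTO temp_table VALUES (?)", [(100,), (200,)])

# column_5 is computed once in the CTE, then reused for column_6
rows = conn.execute("""
    WITH cs AS (
        SELECT salary, salary * 2 AS column_5
        FROM temp_table
    )
    SELECT salary, column_5, column_5 * 10 AS column_6
    FROM cs
    ORDER BY salary
""").fetchall()
# rows -> [(100, 200, 2000), (200, 400, 4000)]
```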
|
how to Reference aliased column to another column of a temporary table in SQL?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a `Kpi_definition` table in which, for a particular event\_id, more than one `KPI_DEF_ID` can be present. For example, for event\_id = `10250` there are two `KPI_DEF_ID`s. I first have to check whether a `KPI_DEF_ID` is present for the event\_id, and if it is not, set the `STATUS` to `'N'`. As another condition, I have to check that if a `KPI_DEF_ID` is present and none of the fields `KPI_ACTIVE_INITIAL, KPI_ACTIVE_CURRENT, KPI_ACTIVE_MANUAL, KPI_ACTIVE_DOWNTIME` is `'N'` for any `KPI_DEF_ID`, then return `OS.STATUS`, else `'N'`.
[](https://i.stack.imgur.com/ZRy7b.png)
I have written the select query below but it's not working. I am writing this query inside a view.
```
SELECT 'EVENT' AS OBJECT_TYPE,
os.event_id AS OBJECT_ID,
CASE
WHEN NOT EXISTS(SELECT KD.kpi_def_id
FROM rator_monitoring_configuration.kpi_definition KD
WHERE KD.event_id = os.event_id
AND KD.kpi_type IN ( 19, 21 )) THEN 'N'
WHEN EXISTS(SELECT KD.kpi_def_id
FROM rator_monitoring_configuration.kpi_definition KD
WHERE KD.event_id = os.event_id
AND KD.kpi_type IN ( 19, 21 )
AND ( KD.kpi_active_initial = 'N'
OR KD.kpi_active_current = 'N'
OR KD.kpi_active_manual = 'N'
OR KD.kpi_active_downtime = 'N' )) THEN 'N'
ELSE os.status
END AS OBJECT_STATUS
FROM RATOR_MONITORING.EVENT_STATUS OS
```
|
```
select decode(kpi_def_id,NULL,'N',
case when instr(ini||curr||downtime||manual,'N',1) = 0
then status else 'N' end)
from ex_6
```
Hi Rahul, this query worked on my machine and fulfils your requirement. Please mark the answer as accepted if it helps.
|
```
select case when decode(kpi_def_id,NULL,'N',col,0,'N') <> 'N' then status else 'N' end case from
(
select kpi_def_id,instr(ini||curr||downtime||manual,'Y',1) col,ex_6.status status from ex_6
)
```
This query works: I have concatenated the initial, downtime, current and manual columns and, using the `instr` function, check whether that string contains the character 'Y'. The result of the inner query is used in the outer query, and with the help of the `decode` function we can achieve the expected result.
Regards,
|
Select case statement not working in oracle
|
[
"",
"sql",
"oracle",
"view",
"case-when",
""
] |
I'm working in MS Access trying to iron out an SQL statement that works. On a form I have a combobox that displays a list of employees. I have a separate dialog form that allows the user to select multiple items in a listbox. Each item represents a certification. Each employee can have any number and combination of certifications. Ultimately I just want to update the RowSource property of the combobox to reflect the new filtered data by assigning a proper SQL statement.
If I want to filter the list on the combobox of employees, I use this SQL statement:
```
SELECT
Employees.Employee_ID, Employees.Last_Name, Employees.First_Name
FROM
Employees
INNER JOIN
Emp_Certs ON Employees.Employee_ID = Emp_Certs.Employee_ID
WHERE
(((Employees.Active_Member) = Yes)
AND ((Emp_Certs.Employee_ID) = [Employees].[Employee_ID])
AND ((Emp_Certs.Cert_ID) = 1))
ORDER BY
Employees.Last_Name;
```
If I run this query it works because I'm assigning only one value to `Emp_Certs.Cert_ID`. But when I add another like this:
```
SELECT
Employees.Employee_ID, Employees.Last_Name, Employees.First_Name
FROM
Employees
INNER JOIN
Emp_Certs ON Employees.Employee_ID = Emp_Certs.Employee_ID
WHERE
(((Employees.Active_Member) = Yes)
AND ((Emp_Certs.Employee_ID) = [Employees].[Employee_ID])
AND ((Emp_Certs.Cert_ID) = 1)
AND ((Emp_Certs.Cert_ID) = 4))
ORDER BY Employees.Last_Name;
```
I get an empty set. That's not what I expected. The table Emp\_Certs clearly has several employees that have the combination of certifications 1 and 4.
Could someone please explain how this is supposed to be written if I want to indicate more than one Cert\_ID and have the employee record show up only once in the combobox. I don't need the employee records showing up multiple times in the combobox.
This might help:
[](https://i.stack.imgur.com/jpiEx.jpg)
|
When you join tables, you basically query off a result set containing all the combinations of rows from those joined tables that your where clauses then operate off of. Since you are joining to the `Emp_Certs` table just once and linking only by Employee\_ID, you are getting a result set that looks like this (only showing two columns):
```
Last_Name Cert_ID
Jones 1
Jones 3
Jones 4
Smith 1
Smith 2
```
Your where clause then filters those rows, only accepting rows that have `Cert_ID = 1` AND `Cert_ID = 4`, which is impossible so you should not get any rows.
I'm not sure if Access has limitations, but in SQL Server you could handle it in at least two ways:
1) Link to the table twice, joining for each of the certifications. Table alias 'a' joins to the Emp\_Certs table where the Cert\_ID is 1 and table alias 'b' joins to the Emp\_Certs table where the Cert\_ID is 4:
```
SELECT
Employees.Employee_ID, Employees.Last_Name, Employees.First_Name
FROM
Employees
INNER JOIN
Emp_Certs a ON Employees.Employee_ID = a.Employee_ID AND a.Cert_ID = 1
INNER JOIN
Emp_Certs b ON Employees.Employee_ID = b.Employee_ID AND b.Cert_ID = 4
WHERE
Employees.Active_Member = Yes
ORDER BY Employees.Last_Name;
```
This gives you a result set that looks like this (Smith doesn't show up because the join criteria doesn't allow any rows unless the employee can link to table `a` and `b`):
```
Last_Name a.Cert_ID b.Cert_ID
Jones 1 4
```
2) Use sub-selects in the where clause to filter the employee id on ids with those certifications (looks like [Access 2010 supports it](https://msdn.microsoft.com/en-us/library/office/Ff192664(v=office.14).aspx)):
```
SELECT
Employees.Employee_ID, Employees.Last_Name, Employees.First_Name
FROM
Employees
WHERE
Active_Member = Yes
AND Employee_ID in (SELECT Employee_ID FROM Emp_Certs WHERE Cert_ID = 1)
AND Employee_ID in (SELECT Employee_ID FROM Emp_Certs WHERE Cert_ID = 4)
ORDER BY Employees.Last_Name;
```
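Both points can be checked quickly with SQLite: AND-ing `Cert_ID = 1 AND Cert_ID = 4` on a single joined row can never match, while one `IN` subquery per required certification does what was intended (table and data below are invented to match the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_certs (employee_id INTEGER, cert_id INTEGER)")
conn.executemany("INSERT INTO emp_certs VALUES (?, ?)",
                 [(1, 1), (1, 3), (1, 4), (2, 1), (2, 2)])

# Impossible predicate: one row never carries two different cert ids
none = conn.execute(
    "SELECT employee_id FROM emp_certs WHERE cert_id = 1 AND cert_id = 4"
).fetchall()

# One subquery per required certification finds employee 1 only
both = conn.execute("""
    SELECT DISTINCT employee_id FROM emp_certs
    WHERE employee_id IN (SELECT employee_id FROM emp_certs WHERE cert_id = 1)
      AND employee_id IN (SELECT employee_id FROM emp_certs WHERE cert_id = 4)
""").fetchall()
# none -> [], both -> [(1,)]
```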
|
You should use `OR` operator in `WHERE` clause:
```
WHERE
 (((Employees.Active_Member) = Yes)
 AND ((Emp_Certs.Employee_ID) = [Employees].[Employee_ID])
 AND ( Emp_Certs.Cert_ID = 1 OR
 Emp_Certs.Cert_ID = 4 ))
ORDER BY Employees.Last_Name;
```
because `Emp_Certs.Cert_ID = 1 AND Emp_Certs.Cert_ID = 4` is always `FALSE`.
Also the `((Emp_Certs.Employee_ID) = [Employees].[Employee_ID])` condition is redundant due to `INNER JOIN` condition
|
SQL query with JOIN involving two criteria from same table
|
[
"",
"sql",
"ms-access",
"join",
"vba",
"ms-access-2010",
""
] |
I have a column in a SQL table I'm trying to do an order by on. The data needs to be sorted descending so it shows the more recent years first.
```
Sample Data Expect Output
----------- -------------
older 2013
2012 2012
2013 2011
2011 older
```
Using this particular method I only have access to the `Order by` and `WHERE` clauses rather than full blown SQL.
This limits me from being able to pull the data out in 2 sets and joining.
I also can not manipulate the `older` field to something like `999order` and strip the `999` out later.
Is there a way to order by that will give me what I'm looking for?
|
You can use a `CASE` statement in the `ORDER BY` clause:
```
WITH tbl AS(
SELECT 'older' AS date_col UNION ALL
SELECT '2012' UNION ALL
SELECT '2013' UNION ALL
SELECT '2011'
)
SELECT *
FROM tbl
ORDER BY
CASE
WHEN date_col NOT LIKE '%[^0-9]%' THEN 0
ELSE 1
END,
date_col DESC
```
`NOT LIKE '%[^0-9]%'` means that the `date_col` is composed of all numbers. Assign `0` to these rows since they will be first in the sorting. Then sort by `date_col DESC`.
**RESULT**
```
date_col
--------
2013
2012
2011
older
```
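The same ordering trick can be checked in SQLite via Python; since SQLite's `LIKE` has no `[^0-9]` character class, this sketch uses a `GLOB` pattern for the four-digit years instead (an assumption that the numeric values are 4-digit years, as in the sample):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (date_col TEXT)")
con.executemany("INSERT INTO tbl VALUES (?)",
                [("older",), ("2012",), ("2013",), ("2011",)])

# Numeric-looking values get sort key 0 and come first; everything
# else gets 1. Within each group, sort by the value descending.
rows = [r[0] for r in con.execute("""
    SELECT date_col FROM tbl
    ORDER BY CASE WHEN date_col GLOB '[0-9][0-9][0-9][0-9]' THEN 0 ELSE 1 END,
             date_col DESC
""")]
print(rows)  # ['2013', '2012', '2011', 'older']
```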
|
Can we not use the `ISNUMERIC()` function instead of `NOT LIKE`?
```
DECLARE @test TABLE
(
value varchar(10)
)
INSERT INTO @test
VALUES
('2010'),
('2011'),
('2012'),
('2013'),
('2014'),
('2015'),
('older')
SELECT * From @test
ORDER BY
(CASE WHEN ISNUMERIC(value) = 1 THEN 0 ELSE 1 END),value DESC
```
|
SQL Order by DESC on alphanumeric field with numbers before words
|
[
"",
"mysql",
"sql",
"sql-server",
"select",
"kentico",
""
] |
```
select * from payments where amount = 0 order by id desc
```
I need to get only the values that have a `.5` fractional part. Could you tell me how to do that?
e.g. given 12, 12.5, 12.6, 14.5
the result set should contain only the values `12.5, 14.5`.
|
Truncate `amount` to an integer, subtract that from the original value, and compare the remainder with 0.5:
```
SELECT *
FROM payments
WHERE amount - CAST(amount AS INT) = 0.5
ORDER BY id DESC
```
OR:
```
SELECT *
FROM payments
WHERE amount - FLOOR(amount) = 0.5
ORDER BY id DESC
```
A hackish way, using SQL Server's `PARSENAME`:
```
SELECT *
FROM payments
WHERE PARSENAME(amount, 1)= 5
ORDER BY id DESC
```
One more using modulo:
```
SELECT *
FROM payments
WHERE amount % 1 = 0.5
ORDER BY id DESC
```
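A quick check of the truncate-and-compare approach, using SQLite via Python (0.5 is exactly representable as a binary float, so the equality test is safe for this particular fraction):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (id INTEGER, amount REAL)")
con.executemany("INSERT INTO payments VALUES (?, ?)",
                [(1, 12.0), (2, 12.5), (3, 12.6), (4, 14.5)])

# amount - CAST(amount AS INTEGER) isolates the fractional part;
# only the .5 rows survive the equality test.
rows = [r[0] for r in con.execute("""
    SELECT amount FROM payments
    WHERE amount - CAST(amount AS INTEGER) = 0.5
    ORDER BY id
""")]
print(rows)  # [12.5, 14.5]
```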
|
You can also do it this way:
```
declare @temp table (val decimal(18,3))
insert into @temp values (0.5),(10.5),(20.8)
select * from @temp
where (val % 1) = 0.5
```
|
Get the specified decimal part having data only from the sql query
|
[
"",
"sql",
"sql-server",
""
] |
I have an Oracle table with the following data:
```
ID VALUE
10 A
10 B
10 C
20 A1
30 C1
30 D1
```
I want to do a *group by* based on the ID column and print a new column with the sequence number within each group.
The output would look like this:
```
ID VALUE GROUPSEQ
10 A 1
10 B 2
10 C 3
20 A1 1
30 C1 1
30 D1 2
```
Can this be done using an Oracle SQL query, without creating a temporary table?
|
You need [ROW\_NUMBER](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm)
```
SELECT ID, VALUE, row_number() OVER (PARTITION BY ID ORDER BY value) GROUPSEQ
FROM myTable
```
|
You can try this:
```
SELECT ID, VALUE, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY VALUE) "GROUPSEQ"
FROM YOUR_TABLE;
```
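The window function is not Oracle-specific; the same `ROW_NUMBER` query runs unchanged on SQLite 3.25+, sketched here through Python with the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (ID INTEGER, VALUE TEXT)")
con.executemany("INSERT INTO myTable VALUES (?, ?)",
                [(10, 'A'), (10, 'B'), (10, 'C'),
                 (20, 'A1'), (30, 'C1'), (30, 'D1')])

# ROW_NUMBER() restarts at 1 for each ID thanks to PARTITION BY.
rows = con.execute("""
    SELECT ID, VALUE,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY VALUE) AS GROUPSEQ
    FROM myTable
    ORDER BY ID, VALUE
""").fetchall()
print(rows)
```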
|
Add a sequence number for each element in a group using an Oracle SQL query
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have table `friends` below.
```
-----------------------------
| id | user_id | friends_id |
-----------------------------
| 1 | 1 | 2 |
-----------------------------
| 2 | 1 | 3 |
-----------------------------
| 3 | 1 | 4 |
-----------------------------
| 4 | 2 | 1 |
-----------------------------
| 5 | 3 | 5 |
-----------------------------
| 6 | 4 | 5 |
-----------------------------
```
and `SQL` query:
```
select user_id, friends_id from friends where user_id = 1
```
Above query gives me below result.
```
------------------------
| user_id | friends_id |
------------------------
| 1 | 2 |
------------------------
| 1 | 3 |
------------------------
| 1 | 4 |
------------------------
```
I want to get a result like this:
```
------------------------
| user_id | friends_id |
------------------------
| 1 | many |
------------------------
```
* If a user has more than one friend, I must write the word 'many'.
* If a user has exactly one friend, I must write that friend's ID.
How can I do that?
I googled, but couldn't work out how to describe my problem.
|
Do a `GROUP BY` to count the number of friends. Use a `CASE` to return 'Many' if there is more than one friend, or the friend\_id if there is only one (you need to `CAST` it to a character type so it is compatible with 'Many').
```
select user_id,
case when count(friends_id) > 1 then 'Many'
else cast(min(friends_id) as varchar(11)) end as friends_id
from friends
group by user_id
```
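A runnable check of this approach with SQLite via Python, using the question's data (SQLite's `CAST(... AS TEXT)` stands in for the `varchar` cast):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE friends (id INTEGER, user_id INTEGER, friends_id INTEGER)")
con.executemany("INSERT INTO friends VALUES (?, ?, ?)",
                [(1, 1, 2), (2, 1, 3), (3, 1, 4),
                 (4, 2, 1), (5, 3, 5), (6, 4, 5)])

# More than one friend -> the literal 'Many'; otherwise the single
# friend's id, cast to text so both branches have one type.
rows = con.execute("""
    SELECT user_id,
           CASE WHEN COUNT(friends_id) > 1 THEN 'Many'
                ELSE CAST(MIN(friends_id) AS TEXT) END AS friends_id
    FROM friends
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, 'Many'), (2, '1'), (3, '5'), (4, '5')]
```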
|
I updated the query and included a
[**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!6/e2406/8)
```
WITH fCount as (
SELECT user_id, min(friends_id) friends_id, count(friends_id) nFriend
FROM friends
GROUP BY user_id
)
select user_id, CASE
WHEN nFriend > 1 THEN 'many'
ELSE cast(friends_id as nvarchar(10))
END friends_id
from fCount
```
|
condition in returning number of column
|
[
"",
"sql",
"sql-server",
""
] |
I have this SQL statement:
```
SELECT * FROM `table` WHERE type = 3 OR type = 5 OR type = 4 and table.deleted = 1;
```
I've read that I can use parenthesis to accomplish this but I was wondering if then this would be valid:
```
SELECT * FROM `table` WHERE (type = 3 OR type = 5 OR type = 4) and table.deleted = 1;
```
OR
```
SELECT * FROM `table` WHERE (type = 3 OR type = 5) OR type = 4 and table.deleted = 1;
```
|
Both of these would be valid, but [since `AND` has higher precedence than `OR`](https://stackoverflow.com/q/1241142/335858), they would mean different things:
* Your first parenthesized query would pick deleted rows with types 3, 4, 5
* Your second parenthesized query would select all rows with types 3, 5, in addition to deleted rows of type 4; this is the same meaning as in the original query without parentheses.
You can avoid the confusion altogether by using operator `IN`, like this:
```
SELECT * FROM `table` WHERE type IN (3, 4, 5) AND table.deleted = 1;
```
or if you wanted the second meaning
```
SELECT * FROM `table` WHERE type IN (3, 5) OR (type = 4 AND table.deleted = 1)
```
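The precedence difference is easy to demonstrate with SQLite via Python; with made-up rows, the unparenthesized filter matches three rows while the parenthesized one matches only the deleted type-4 row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (type INTEGER, deleted INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(3, 0), (4, 0), (5, 0), (4, 1)])

# Without parentheses, AND binds tighter than OR: the deleted filter
# applies only to type = 4.
unparenthesized = con.execute(
    "SELECT COUNT(*) FROM t WHERE type = 3 OR type = 5 OR type = 4 AND deleted = 1"
).fetchone()[0]
parenthesized = con.execute(
    "SELECT COUNT(*) FROM t WHERE (type = 3 OR type = 5 OR type = 4) AND deleted = 1"
).fetchone()[0]
print(unparenthesized, parenthesized)  # 3 1
```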
|
What you need is `IN` operator like
```
SELECT * FROM `table`
WHERE type IN ( 3, 5, 4) and deleted = 1;
```
|
How can I combine ANDs and ORs in my SQL statement
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to get started with SQLite in Android, but I have some problems.
I took the code from a tutorial written in 2012, but it's not working for me now and shows this error:
> E/SQLiteLog﹕ (1) near "Table": syntax error
The problem is with creating/opening the Database.
```
package db.com.example.kids1.databasetest;
import android.app.ListActivity;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteException;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.ListView;
import android.widget.TextView;
import org.w3c.dom.Text;
import java.io.IOException;
import java.util.ArrayList;
public class MainActivity extends ListActivity{
private final String DB_NAME = "Database";
private final String TABLE_NAME = "Table";
SQLiteDatabase DB = null;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
ArrayList<String> results = new ArrayList<>();
String[] res = {"Red", "Green", "Text"};
try {
DB = this.openOrCreateDatabase(DB_NAME, MODE_PRIVATE, null);
DB.execSQL("CREATE TABLE IF NOT EXISTS " +
TABLE_NAME +
"(Name VARCHAR, Street VARCHAR, Block INT, City VARCHAR, Tel VARCHAR);");
mFillDbsTable();
Cursor c = DB.rawQuery("SELECT Name, Street, Block, City, Tel FROM " +
TABLE_NAME +
" where Blcok == 9 LIMIT 5", null);
if (c!=null){
if (c.moveToFirst()) {
do {
String name = c.getString(c.getColumnIndex("Name"));
String street = c.getString(c.getColumnIndex("Street"));
int block = c.getInt(c.getColumnIndex("Block"));
String city = c.getString(c.getColumnIndex("City"));
String tel = c.getString(c.getColumnIndex("Tel"));
results.add(name + "," + street + "," + block + "," + city + "," + tel);
} while (c.moveToNext());
}
}
ListView list = (ListView)findViewById(android.R.id.list);
ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, res);
list.setAdapter(adapter);
} catch (SQLiteException se){
Log.e(getClass().getSimpleName(), "Create/Open Database Problem.");
}
}
private void mFillDbsTable(){
try {
DB.execSQL("INSERT INTO " +
TABLE_NAME +
" Values('Noam', 'Shkolnik', 9, 'Rehovot', '054-4900807');");
DB.execSQL("INSERT INTO " +
TABLE_NAME +
" Values('Eyal', 'Shkolnik', 9, 'Rehovot', '055-4488779');");
DB.execSQL("INSERT INTO " +
TABLE_NAME +
" Values('Yehontan', 'Shkolnik', 9, 'Rehovot', '058-7789547');");
} catch (SQLiteException se) {
Log.e(getClass().getSimpleName(), "Could not create records.");
}
}
}
```
|
```
private final String TABLE_NAME = "Table"
```
You can't create a table named `Table`, because it's a reserved keyword.
You better create a table named `MyTable` (or `_Table` or better give a more talkative name, such as `Persons` - note the **s** for plurals).
So:
```
private final String TABLE_NAME = "MyTable"
```
For your reference: <https://sqlite.org/lang_keywords.html>
---
You **could** (but it's not recommended) use reserved keywords, but then you have to use special delimiters every time you refer to that table.
Something like
```
private final String TABLE_NAME = "[Table]"
```
---
And there's also another (double) error in your query:
```
" where Blcok == 9 LIMIT 5"
```
Should be
```
" where Block = 9 LIMIT 5"
```
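Both points are easy to reproduce with Python's built-in `sqlite3` module: the unquoted name fails with exactly the error from the question, while the bracket-quoted name works:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Unquoted reserved word: same error as in the question.
err = None
try:
    con.execute("CREATE TABLE Table (Name TEXT)")
except sqlite3.OperationalError as e:
    err = str(e)
print(err)  # near "Table": syntax error

# Bracket-quoted, the reserved word is accepted (though a plain
# non-reserved name is the better fix).
con.execute("CREATE TABLE [Table] (Name TEXT)")
con.execute("INSERT INTO [Table] VALUES ('ok')")
rows = con.execute("SELECT Name FROM [Table]").fetchall()
print(rows)  # [('ok',)]
```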
|
Try `VARCHAR(100)` and remove the trailing `;`
|
E/SQLiteLog﹕ (1) near "Table": syntax error
|
[
"",
"android",
"sql",
"sqlite",
"syntax-error",
""
] |
I am trying to convert `Aug/14/2014` to `2014-08-14` but can't get it to work; it throws the error below. Could someone please suggest a fix?
```
Update test
set UserDATE =convert(nvarchar(10), cast(UserDATE as datetime),105)
```
> Error: The conversion of a nvarchar data type to a datetime data type
> resulted in an out-of-range value.
|
SQL Server doesn't understand that date format, so the `cast` will fail to parse it.
Convert it to the format `'Aug 14 2014'` which you can use format 109 to parse.
The format `yyyy-mm-dd` has the code 120.
```
update
test
set
UserDATE = convert(
nvarchar(10),
convert(datetime, replace(UserDATE, '/', ' '), 109),
120
)
```
|
Try replacing the slashes with spaces and using style 100 for the conversion to date:
```
Update test
set UserDATE = convert(nvarchar(10), convert(date, replace(UserDate, '/', ' '), 100), 105)
```
However, if you are going to store dates as strings -- which I don't recommend -- then use an ISO standard format: YYYY-MM-DD or YYYYMMDD. In SQL Server, that would be format 112, 120, or 121, not 105.
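For comparison, the transformation those style codes perform — month-name date in, ISO date out — looks like this in Python terms (`to_iso` is a hypothetical helper name, not part of either answer):

```python
from datetime import datetime

def to_iso(s: str) -> str:
    # 'Aug/14/2014' -> '2014-08-14': parse the abbreviated month
    # form, then emit the ISO yyyy-mm-dd shape.
    return datetime.strptime(s, "%b/%d/%Y").strftime("%Y-%m-%d")

print(to_iso("Aug/14/2014"))  # 2014-08-14
```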
|
Convert Date time format in SQL
|
[
"",
"sql",
"sql-server-2008",
""
] |
Can anyone help me understand the syntax of using `AND` several times?
I have this piece of code
```
WHERE (ListPrice >= 400) and (ListPrice <= 800) AND (Color = 'Red') OR (Color = 'Black')
```
Once evaluated, it returns many rows that do not have the color red or black. Help please.
|
Perhaps you should change it to
```
WHERE ((ListPrice >= 400) AND (ListPrice <= 800)) AND (Color = 'Red' OR Color = 'Black')
```
Please correct me if I'm wrong.
|
```
WHERE (ListPrice BETWEEN 400 and 800) AND (Color in ('Red','Black'))
```
|
SQL multiple AND and OR
|
[
"",
"sql",
""
] |
I have a result set of several columns that may or may not have nulls. I want the first non-null result that is greater than 1. What's the best way to add this secondary condition when coalescing my columns?
e.g.
```
result:
col1 col2 col3
1 null 2
null 1 3
null null 4
```
should return 2, 3, and 4.
|
lad2025's answer is correct, and you have to have an if for each column. But, since you have to have those if functions anyway, you might as well do
```
SELECT IF(col1>1, col1, if(col2>1, col2, if (col3>1, col3, null)))
```
or similarly
```
SELECT CASE WHEN col1>1 THEN col1
WHEN col2>1 THEN col2
WHEN col3>1 THEN col3
ELSE null END
```
Either of these will be a little more efficient and take less code than using `COALESCE`, because with `COALESCE` you would still need the `IF` calls to null out any values of 1 (or less) in each column.
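A quick check of the cascading-`CASE` version with SQLite via Python (SQLite has no `IF()` function, so only the `CASE` form is shown; sample data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 INTEGER, col2 INTEGER, col3 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, None, 2), (None, 1, 3), (None, None, 4)])

# First column whose value exceeds 1, checked left to right; NULLs
# and 1s fall through to the next branch.
rows = [r[0] for r in con.execute("""
    SELECT CASE WHEN col1 > 1 THEN col1
                WHEN col2 > 1 THEN col2
                WHEN col3 > 1 THEN col3
           END
    FROM t
    ORDER BY rowid
""")]
print(rows)  # [2, 3, 4]
```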
|
```
SELECT COALESCE(IF(col1>1,col1, NULL), IF(col2>1, col2, NULL), IF(col3>1, col3, NULL))
FROM tab
```
|
MySQL coalesce with extra condition
|
[
"",
"mysql",
"sql",
""
] |
I have an SQL question I've been struggling with and hope someone can help.
I have the following data:
```
TEAM | USERID | STEP1 | STEP2 | STEP3 | STEP4 | STEP5
001 | 000001 | Y | Y | N | N | Y
001 | 000002 | Y | N | N | Y | Y
002 | 000003 | N | Y | Y | N | N
002 | 000004 | N | Y | Y | Y | Y
003 | 000005 | Y | N | N | Y | N
003 | 000006 | Y | Y | Y | N | Y
```
What I need to do is return the value of the TEAM where all values of any STEPx are 'N'.
So in the example above I would need TEAM 001 and 002 to be returned because in TEAM 001 all values of STEP3 are 'N', and in TEAM 002 all values of STEP1 are 'N'.
Any help would be much appreciated.
|
```
select team
from table
group by team
having
sum(case step1 when 'N' then 0 else 1 end) = 0
or sum(case step2 when 'N' then 0 else 1 end) = 0
or sum(case step3 when 'N' then 0 else 1 end) = 0
or sum(case step4 when 'N' then 0 else 1 end) = 0
or sum(case step5 when 'N' then 0 else 1 end) = 0
```
Here's a fiddle: <http://sqlfiddle.com/#!6/ecbff/3>
It may be better to normalize your data so that you have an int column named `STEP` and one row per team/user/step. This would make your query much less awkward.
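The `HAVING` trick can be verified with SQLite via Python using the question's rows; a step column is all-'N' for a team exactly when its count of non-'N' values sums to zero:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE teams
               (team TEXT, step1 TEXT, step2 TEXT, step3 TEXT, step4 TEXT, step5 TEXT)""")
con.executemany("INSERT INTO teams VALUES (?,?,?,?,?,?)", [
    ('001', 'Y', 'Y', 'N', 'N', 'Y'),
    ('001', 'Y', 'N', 'N', 'Y', 'Y'),
    ('002', 'N', 'Y', 'Y', 'N', 'N'),
    ('002', 'N', 'Y', 'Y', 'Y', 'Y'),
    ('003', 'Y', 'N', 'N', 'Y', 'N'),
    ('003', 'Y', 'Y', 'Y', 'N', 'Y'),
])

# Each SUM counts the non-'N' values of one step within the team;
# zero means that step is 'N' for every member.
rows = [r[0] for r in con.execute("""
    SELECT team FROM teams
    GROUP BY team
    HAVING SUM(CASE step1 WHEN 'N' THEN 0 ELSE 1 END) = 0
        OR SUM(CASE step2 WHEN 'N' THEN 0 ELSE 1 END) = 0
        OR SUM(CASE step3 WHEN 'N' THEN 0 ELSE 1 END) = 0
        OR SUM(CASE step4 WHEN 'N' THEN 0 ELSE 1 END) = 0
        OR SUM(CASE step5 WHEN 'N' THEN 0 ELSE 1 END) = 0
    ORDER BY team
""")]
print(rows)  # ['001', '002']
```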
|
I went the other way, but Blorgbeard was much faster than me.
```
select TEAM
from TEAMS
group by TEAM
having
count(case STEP1 when 'Y' then 1 end) = 0
or count(case STEP2 when 'Y' then 1 end) = 0
or count(case STEP3 when 'Y' then 1 end) = 0
or count(case STEP4 when 'Y' then 1 end) = 0
or count(case STEP5 when 'Y' then 1 end) = 0
```
|
SQL - Select rows where all values in a column are the same
|
[
"",
"sql",
""
] |
I am using the following query to return the length of the longest value for Field1:
```
SELECT MAX(LEN(Field1))
FROM Table1
```
How can I modify my query to return second largest length for Field1?
|
In the inner query, select the maximum length of the column; the outer query then excludes that length and takes the `MAX` of what remains, giving the second largest length in the column.
```
select max(len(t.val)) as secondlargest
from t join
(
SELECT max(len(val)) as lf1
FROM t
) x
on len(t.val) < x.lf1
```
If you need all the values with the second largest length, it is best to use `cte`s. [SQL Fiddle](http://www.sqlfiddle.com/#!6/af18a/16)
```
with top2 as (
select top 2 len(val) as lf1
from t
order by len(val) desc
) ,
scnd as
(select min(lf1) as snd from top2)
select val
from t join scnd
on len(t.val) = scnd.snd
```
|
SQL Server 2012+ supports `OFFSET/FETCH`:
```
select len(f1)
from table1
group by len(f1)
order by len(f1) desc
offset 1 rows
fetch first 1 row only;
```
In earlier versions there are various methods. Here is one:
```
with ml as (
select len(f1) as lenf
from table1
group by len(f1)
)
select max(lenf)
from ml
where lenf < (select max(lenf) from ml);
```
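The CTE version is portable enough to check with SQLite via Python (made-up sample strings; SQLite's `LENGTH` plays the role of `LEN`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (f1 TEXT)")
con.executemany("INSERT INTO table1 VALUES (?)",
                [("a",), ("bb",), ("ccc",), ("dd",)])

# Distinct lengths, drop the maximum, then take the max of the rest.
second = con.execute("""
    WITH ml AS (SELECT LENGTH(f1) AS lenf FROM table1 GROUP BY LENGTH(f1))
    SELECT MAX(lenf) FROM ml
    WHERE lenf < (SELECT MAX(lenf) FROM ml)
""").fetchone()[0]
print(second)  # 2
```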
|
Selecting second largest length of a field in a table
|
[
"",
"sql",
"sql-server",
""
] |
I have one table `ABC`. I am using queries
```
select count(*) from ABC where COLA=123; -- output is 3
select count(*) from ABC WHERE COLA=321; -- output is 6
```
I want both output combined like
```
| someColumnName |
| 3 |
| 6 |
```
Is there any way to frame the query so that I can achieve this?
|
Use `UNION` between your two queries (or `UNION ALL`, so that two equal counts are not collapsed into one row):
```
select count(*) as someColumnName from ABC where COLA=123
union
select count(*) from ABC WHERE COLA=321;
```
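A quick check with SQLite via Python; the sketch uses `UNION ALL` so that two equal counts would not be collapsed into one row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ABC (COLA INTEGER)")
con.executemany("INSERT INTO ABC VALUES (?)",
                [(123,)] * 3 + [(321,)] * 6)

# Two independent counts stacked into one column.
rows = [r[0] for r in con.execute("""
    SELECT COUNT(*) AS someColumnName FROM ABC WHERE COLA = 123
    UNION ALL
    SELECT COUNT(*) FROM ABC WHERE COLA = 321
""")]
print(rows)  # [3, 6]
```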
|
Use a `GROUP BY` and a `WHERE` clause:
```
SELECT count(*) as SomeColumnName
FROM ABC
WHERE COLA in (123,321)
GROUP BY ColA
```
|
How to get combined output from two or more queries?
|
[
"",
"sql",
"database",
"oracle",
""
] |
I have a SQL table with one float column populated with values like these:
```
1.4313
3.35
2.55467
6.22456
3.325
```
I need to select rows containing only values with more than 4 decimals. In this case, the select must return:
```
2.55467
6.22456
```
Ideas? Thanks!
This is what I have tried so far
```
select *
from table
where CAST(LATITUDE AS DECIMAL(10,5)) - LATITUDE = 0
```
|
```
DECLARE @tbl TABLE (val float)
INSERT INTO @tbl SELECT 1234.567
INSERT INTO @tbl SELECT 1234.5678
INSERT INTO @tbl SELECT -1234.5678
INSERT INTO @tbl SELECT 1234.56789
SELECT *
from @tbl
where (((val*10000) - CONVERT(INT,(val*10000))) <> 0)
```
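A related sketch with SQLite via Python; instead of the `*10000` multiplication (which can pick up float wobble), this variant compares the value against itself rounded to 4 decimals — rows that differ have more than 4 decimal places (a sketch of the idea, not the answer's exact T-SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val REAL)")
con.executemany("INSERT INTO t VALUES (?)",
                [(1.4313,), (3.35,), (2.55467,), (6.22456,), (3.325,)])

# val <> ROUND(val, 4) is true only when rounding to 4 decimals
# loses information, i.e. the value has more than 4 decimal places.
rows = [r[0] for r in con.execute("""
    SELECT val FROM t
    WHERE val <> ROUND(val, 4)
    ORDER BY rowid
""")]
print(rows)  # [2.55467, 6.22456]
```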
|
Why can't we make it simpler with this query:
```
SELECT * FROM table WHERE val LIKE '%._____%'
```
This selects what we want
|
Select numbers with more than 4 decimal places
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to join 2 tables at once; the problem is that I'm getting an empty set with the following query.
```
select a.id, a.name, d.name, b.id, c.id from a
inner join b on a.id = b.a_id
inner join c on a.id = c.a_id
inner join d on a.d_id = d.id
```
My guess is that this is happening because the B and C tables don't always have records matching the A table.
My example DB looks like this:
```
A Table:
|id|name |d_id|
|1 |one |1 |
|2 |two |1 |
|3 |three|2 |
B Table:
|id|a_id|
|1 |1 |
C Table:
|id|a_id|
|1 |2 |
D Table:
|id|name|
|1 |hey |
|2 |ho |
```
What I expect in my result set looks like this:
```
Result-set:
|a.id|a.name|d.name|b.id|c.id
|1 |one |hey |1 |NULL
|2 |two |hey |NULL|2
|3 |three |ho |NULL|NULL
```
[](https://i.stack.imgur.com/2zNMq.png)
(I excluded the D table from my sketch because I think I will have no problem with it.) I also tried a left join, with no luck.
Thanks in advance!
|
From what you say, if B and C not always have matches, you need to left join those (and those only).
```
SELECT a.id, a.name, d.name, b.id, c.id
FROM a
LEFT JOIN b on (a.id = b.a_id)
LEFT JOIN c on (a.id = c.a_id)
JOIN d on (a.d_id = d.id);
```
Here A and D must always match, while matching records from B and C may be there or else be NULL.
(Remember that *all* values from B and C will be NULL in nonmatched rows, in case you add a `WHERE` clause or something like that).
# Test (click [here](http://www.sqlfiddle.com/#!9/48686/1/0) for a fiddle)
```
mysql> CREATE TABLE a
-> ( id integer, name varchar(20), d_id integer);
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO a VALUES (1, 'one', 1), (2, 'two', 1), (3, 'three', 2);
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> CREATE TABLE b ( id integer, a_id integer);
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO b VALUES (1, 1);
Query OK, 1 row affected (0.00 sec)
mysql> CREATE TABLE c ( id integer, a_id integer);
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO c VALUES (1, 2);
Query OK, 1 row affected (0.00 sec)
mysql> CREATE TABLE d ( id integer, name varchar(20) );
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO d VALUES (1, 'hey' ), (2, 'ho');
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> SELECT a.id, a.name, d.name, b.id, c.id
-> FROM a
-> LEFT JOIN b on (a.id = b.a_id)
-> LEFT JOIN c on (a.id = c.a_id)
-> JOIN d on (a.d_id = d.id);
+------+-------+------+------+------+
| id | name | name | id | id |
+------+-------+------+------+------+
| 1 | one | hey | 1 | NULL |
| 2 | two | hey | NULL | 1 |
| 3 | three | ho | NULL | NULL |
+------+-------+------+------+------+
```
(Note, in your example c.id is 2, but in your data there's actually a 1).
|
Use LEFT JOIN:
```
select a.id, a.name, d.name, b.id, c.id from a
LEFT join b on a.id = b.a_id
LEFT join c on a.id = c.a_id
inner join d on a.d_id = d.id
```
|
Mysql joining 2 tables in same query. Inner join or Left join?
|
[
"",
"mysql",
"sql",
""
] |
> I want to join two tables based on an id or a default id.
I have two tables that store all countries in different languages:
**country:**
```
CREATE TABLE `country` (
`ctry_id` smallint(7) unsigned NOT NULL AUTO_INCREMENT,
`ctry_iso` varchar(2) DEFAULT NULL,
PRIMARY KEY (`ctry_id`)
);
```
**country\_lang:**
```
CREATE TABLE `country_lang` (
`ctryl_ctry_id` smallint(7) unsigned NOT NULL,
`ctryl_lang_id` smallint(5) unsigned NOT NULL DEFAULT '1',
`ctryl_name` varchar(50) COLLATE utf32_unicode_ci NOT NULL,
PRIMARY KEY (`ctryl_ctry_id`,`ctryl_lang_id`)
) ;
```
$lang\_id = 129;
```
SELECT c.ctry_id, cl.ctryl_name,cl.ctryl_lang_id
FROM country c
LEFT JOIN country_lang cl on cl.ctryl_ctry_id = c.ctry_id AND ((cl.ctryl_lang_id= $lang_id AND cl.ctryl_lang_id) OR cl.ctryl_lang_id=1)
ORDER BY c.ctry_iso ASC
```
I just want to get records joined by the provided lang\_id. If no country record is found in the country\_lang table for that lang\_id, then fall back to lang\_id 1.
|
Maybe the following is more what you are looking for?
```
SELECT c.ctry_id, cl.ctryl_name,cl.ctryl_lang_id, c.ctry_iso,
c.ctry_iso3, c.ctry_flag, c.ctry_phone_code
FROM country c
LEFT JOIN country_lang cl on cl.ctryl_ctry_id = CASE
WHEN EXISTS(SELECT 1 FROM country_lang WHERE ctryl_ctry_id=ctry_id)
THEN ctryl_ctry_id
ELSE 1 END
ORDER BY c.ctry_iso ASC
```
This will return exactly *one* line per `country`-record and will match *either* the corresponding language id *or* default id `1`.
Or, using `COALESCE()`, you could also write
```
SELECT c.ctry_id, cl.ctryl_name,cl.ctryl_lang_id, c.ctry_iso,
c.ctry_iso3, c.ctry_flag, c.ctry_phone_code
FROM country c
LEFT JOIN country_lang cl on cl.ctryl_ctry_id = COALESCE(
(SELECT ctryl_ctry_id FROM country_lang WHERE ctryl_ctry_id=ctry_id),1)
ORDER BY c.ctry_iso ASC
```
(Admittedly, I have not yet understood, what special significance your `cl.ctryl_lang_id=129` holds and what behaviour you want if this `ctryl_lang_id` pops up.)
|
You can try this:
```
SELECT c.ctry_id, cl.ctryl_name,cl.ctryl_lang_id, c.ctry_iso, c.ctry_iso3, c.ctry_flag, c.ctry_phone_code
FROM country c
LEFT JOIN country_lang cl on cl.ctryl_ctry_id = c.ctry_id
where (cl.ctryl_lang_id=129 OR cl.ctryl_lang_id=1)
ORDER BY c.ctry_iso ASC
```
|
MYSQL: How to JOIN tables based on id if not found any record then by default value
|
[
"",
"mysql",
"sql",
"join",
""
] |
I rarely use SQL; however, for this task it may be the most suitable tool. I am looking to create a query that detects the first occurrence of an incident for each subject.
**Record:**
```
------------------------------
personID | date | incident
------------------------------
1 20150901 F1
2 20150101 B2
3 20150301 C3
1 20150901 B2
3 20150401 R5
2 20150401 C3
1 20150701 F1
```
Wanted Result:
```
------------------------------
personID | date | incident
------------------------------
2 20150101 B2
3 20150301 C3
3 20150401 R5
2 20150401 C3
1 20150701 F1
1 20150901 B2
```
Simply: I am looking for the first (based on date) time each incident occurs for each personID, ignoring any later reoccurrences of the same incident.
Thanks
PS. Using SQL Server 2008
|
Using MIN should work for this:
```
select personId,incident,MIN(convert(date,date)) as date
from [table]
group by personId,incident
```
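The grouped-`MIN` idea can be checked with SQLite via Python (dates kept as `yyyymmdd` text, which compares correctly, instead of the answer's T-SQL `convert`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE record (personID INTEGER, date TEXT, incident TEXT)")
con.executemany("INSERT INTO record VALUES (?, ?, ?)", [
    (1, '20150901', 'F1'), (2, '20150101', 'B2'), (3, '20150301', 'C3'),
    (1, '20150901', 'B2'), (3, '20150401', 'R5'), (2, '20150401', 'C3'),
    (1, '20150701', 'F1'),
])

# Earliest date per (person, incident) pair; yyyymmdd strings sort
# the same way as the dates they represent.
rows = con.execute("""
    SELECT personID, MIN(date) AS date, incident
    FROM record
    GROUP BY personID, incident
    ORDER BY date, personID
""").fetchall()
print(rows)
```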
|
You can use the `row_number` function to get the desired result.
[Fiddle with sample data](http://www.sqlfiddle.com/#!3/a2d86/1)
```
select personid, date, incident
from
(
select *, row_number() over(partition by personid, incident order by date) as rn
from tablename
) x
where x.rn = 1;
```
|
SQL: First occurrence with value
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have the following values in a column of a table. There are two columns in the table; the other column has distinct dates in descending order.
```
3
4
3
21
4
4
-1
3
21
-1
4
4
8
3
3
-1
21
-1
4
```
The graph will be
[](https://i.stack.imgur.com/LSL4X.jpg)
I need only the peaks highlighted with circles in the graph as output:
```
4
21
21
8
21
4
```
|
[SQL Fiddle](http://sqlfiddle.com/#!4/b92dc/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE TEST ( datetime, value ) AS
SELECT DATE '2015-01-01', 3 FROM DUAL
UNION ALL SELECT DATE '2015-01-02', 4 FROM DUAL
UNION ALL SELECT DATE '2015-01-03', 3 FROM DUAL
UNION ALL SELECT DATE '2015-01-04', 21 FROM DUAL
UNION ALL SELECT DATE '2015-01-05', 4 FROM DUAL
UNION ALL SELECT DATE '2015-01-06', 4 FROM DUAL
UNION ALL SELECT DATE '2015-01-07', -1 FROM DUAL
UNION ALL SELECT DATE '2015-01-08', 3 FROM DUAL
UNION ALL SELECT DATE '2015-01-09', 21 FROM DUAL
UNION ALL SELECT DATE '2015-01-10', -1 FROM DUAL
UNION ALL SELECT DATE '2015-01-11', 4 FROM DUAL
UNION ALL SELECT DATE '2015-01-12', 4 FROM DUAL
UNION ALL SELECT DATE '2015-01-13', 8 FROM DUAL
UNION ALL SELECT DATE '2015-01-14', 3 FROM DUAL
UNION ALL SELECT DATE '2015-01-15', 3 FROM DUAL
UNION ALL SELECT DATE '2015-01-16', -1 FROM DUAL
UNION ALL SELECT DATE '2015-01-17', 21 FROM DUAL
UNION ALL SELECT DATE '2015-01-18', -1 FROM DUAL
UNION ALL SELECT DATE '2015-01-19', 4 FROM DUAL
```
**Query 1**:
```
SELECT datetime, value
FROM (
SELECT datetime,
LAG( value ) OVER ( ORDER BY datetime ) AS prv,
value,
LEAD( value ) OVER ( ORDER BY datetime ) AS nxt
FROM test
)
WHERE ( prv IS NULL OR prv < value )
AND ( nxt IS NULL OR nxt < value )
```
**[Results](http://sqlfiddle.com/#!4/b92dc/1/0)**:
```
| DATETIME | VALUE |
|---------------------------|-------|
| January, 02 2015 00:00:00 | 4 |
| January, 04 2015 00:00:00 | 21 |
| January, 09 2015 00:00:00 | 21 |
| January, 13 2015 00:00:00 | 8 |
| January, 17 2015 00:00:00 | 21 |
| January, 19 2015 00:00:00 | 4 |
```
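The `LAG`/`LEAD` query is standard window-function SQL and runs as-is on SQLite 3.25+, sketched here through Python with the question's series:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (datetime TEXT, value INTEGER)")
data = [3, 4, 3, 21, 4, 4, -1, 3, 21, -1, 4, 4, 8, 3, 3, -1, 21, -1, 4]
con.executemany("INSERT INTO test VALUES (?, ?)",
                [(f"2015-01-{d:02d}", v) for d, v in enumerate(data, start=1)])

# A peak is strictly greater than both neighbours; at the edges the
# missing neighbour (NULL) is treated as already satisfied.
peaks = [r[1] for r in con.execute("""
    SELECT datetime, value FROM (
        SELECT datetime, value,
               LAG(value)  OVER (ORDER BY datetime) AS prv,
               LEAD(value) OVER (ORDER BY datetime) AS nxt
        FROM test
    )
    WHERE (prv IS NULL OR prv < value)
      AND (nxt IS NULL OR nxt < value)
    ORDER BY datetime
""")]
print(peaks)  # [4, 21, 21, 8, 21, 4]
```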
|
Just for completeness, the row pattern matching (`MATCH_RECOGNIZE`, available in Oracle 12c and later) example:
```
WITH source_data(datetime, value) AS (
SELECT DATE '2015-01-01', 3 FROM DUAL UNION ALL
SELECT DATE '2015-01-02', 4 FROM DUAL UNION ALL
SELECT DATE '2015-01-03', 3 FROM DUAL UNION ALL
SELECT DATE '2015-01-04', 21 FROM DUAL UNION ALL
SELECT DATE '2015-01-05', 4 FROM DUAL UNION ALL
SELECT DATE '2015-01-06', 4 FROM DUAL UNION ALL
SELECT DATE '2015-01-07', -1 FROM DUAL UNION ALL
SELECT DATE '2015-01-08', 3 FROM DUAL UNION ALL
SELECT DATE '2015-01-09', 21 FROM DUAL UNION ALL
SELECT DATE '2015-01-10', -1 FROM DUAL UNION ALL
SELECT DATE '2015-01-11', 4 FROM DUAL UNION ALL
SELECT DATE '2015-01-12', 4 FROM DUAL UNION ALL
SELECT DATE '2015-01-13', 8 FROM DUAL UNION ALL
SELECT DATE '2015-01-14', 3 FROM DUAL UNION ALL
SELECT DATE '2015-01-15', 3 FROM DUAL UNION ALL
SELECT DATE '2015-01-16', -1 FROM DUAL UNION ALL
SELECT DATE '2015-01-17', 21 FROM DUAL UNION ALL
SELECT DATE '2015-01-18', -1 FROM DUAL UNION ALL
SELECT DATE '2015-01-19', 4 FROM DUAL
)
SELECT *
FROM
source_data MATCH_RECOGNIZE (
ORDER BY datetime
MEASURES
LAST(UP.datetime) AS datetime,
LAST(UP.value) AS value
ONE ROW PER MATCH
PATTERN ((UP DOWN) | UP$)
DEFINE
DOWN AS DOWN.value < PREV(DOWN.value),
UP AS UP.value > PREV(UP.value)
)
ORDER BY
datetime
```
|
Oracle: Identifying peak values in a time series
|
[
"",
"sql",
"oracle",
""
] |
First of all, excuse my bad English.
I have two tables:
1. master table:
```
| product id | pr_name | remain_Qty |
+--------------+------------------+-------------------+
| 1 | x | 13 |
| 2 | y | 18 |
| 3 | z | 21 |
+--------------+------------------+-------------------+
```
2. Detail Table (this table contains detail data of bought products):
```
+--------------+------------------+----------+--------+
| date | pr_id | Qty |price |
+--------------+------------------+----------+--------+
| 2010-01-01 | 1 | 3 | 1000 |
| 2010-01-02 | 1 | 5 | 1200 |
| 2010-01-01 | 2 | 11 | 1100 |
| 2010-01-03 | 1 | 4 | 1400 |
| 2010-01-04 | 3 | 3 | 1300 |
| 2010-01-01 | 2 | 6 | 1600 |
| 2010-01-03 | 1 | 7 | 1700 |
| 2010-01-02 | 3 | 3 | 1300 |
| 2010-01-01 | 3 | 5 | 1500 |
| 2010-01-04 | 3 | 7 | 1700 |
| 2010-01-06 | 2 | 8 | 1800 |
| 2010-01-07 | 2 | 4 | 1400 |
| 2010-01-03 | 1 | 3 | 1300 |
| 2010-01-04 | 3 | 6 | 1600 |
| 2010-01-08 | 1 | 1 | 1100 |
+--------------+------------------+----------+--------+
```
sum Qty of product 1 = 23
sum Qty of product 2 = 29
sum Qty of product 3 = 21
As a result, I want a list of the Detail table rows, sorted by pr\_id, date and price, where the running sum(Qty) per pr\_id does not exceed the remain\_Qty of that product\_id in the Master table.
For example:
```
+--------------+------------------+----------+--------+
| date | pr_id | Qty |price |
+--------------+------------------+----------+--------+
| 2010-01-01 | 1 | 3 | 1000 |
| 2010-01-02 | 1 | 5 | 1200 |
| 2010-01-03 | 1 | 4 | 1400 |
| 2010-01-03 | 1 | 1 | 1700 |
| 2010-01-01 | 2 | 11 | 1100 |
| 2010-01-01 | 2 | 6 | 1600 |
| 2010-01-01 | 3 | 5 | 1500 |
| 2010-01-02 | 3 | 3 | 1300 |
| 2010-01-04 | 3 | 3 | 1300 |
| 2010-01-04 | 3 | 7 | 1700 |
+--------------+------------------+----------+--------+
```
|
More of a clarification than a direct SQL answer. What it LOOKS like they may be wanting is an inventory being depleted to fill orders from the known available quantity, but even that falls short, as they may be missing a second qty of 3 on 2010-01-03 for product 1... which, looking at just ID=1 from the sample data, would show...
```
| date | pr_id | Qty |price | Qty Available to fill order
+--------------+--------+-----+-------+
| 2010-01-01 | 1 | 3 | 1000 | 13 - 3 = 10 avail next order
| 2010-01-02 | 1 | 5 | 1200 | 10 - 5 = 5 avail next order
| 2010-01-03 | 1 | 3 | 1300 | 5 - 3 = 2 avail next order
| 2010-01-03 | 1 | 4 | 1400 | only 2 to PARTIALLY fill this order
| 2010-01-03 | 1 | 7 | 1700 | none available
| 2010-01-08 | 1 | 1 | 1100 | none available
```
With the extra sample record removed, would result in...
```
| date | pr_id | Qty |price | Qty Available to fill order
+--------------+--------+-----+-------+
| 2010-01-01 | 1 | 3 | 1000 | 13 - 3 = 10 avail next order
| 2010-01-02 | 1 | 5 | 1200 | 10 - 5 = 5 avail next order
| 2010-01-03 | 1 | 4 | 1400 | 5 - 4 = 1 avail for next order
| 2010-01-03 | 1 | 7 | 1700 | only 1 of the 7 available
| 2010-01-08 | 1 | 1 | 1100 | no more available...
```
So Aliasghar, does this better represent what you are trying to do? Fill the available orders based on which order was entered into the system first, fill as many as possible based on inventory, and stop there?
Please confirm by adding a comment to this answer and maybe we can help resolve it. Also, confirm which database you are using: SQL Server, Oracle, MySQL, etc.
|
Here is a working query for pr\_id = 1; I used MySQL:
```
select final.pr_date, final.pr_id, count(t_qty) as qty, final.price from
(select * FROM (select q.pr_date, q.pr_id, 1 as t_qty, q.price , @t := @t + t_qty total
FROM(
SELECT d.pr_date, d.pr_id, 1 as t_qty, d.price
FROM detail_table d
JOIN generator_4k i
ON i.n between 1 and d.qty
WHERE d.pr_id= 1
Order by d.id, d.pr_date) q
CROSS JOIN (SELECT @t := 0) i) c
WHERE c.total <= (select remain_qty from master_table WHERE product_id = 1)) final
group by final.pr_date , final.pr_id , final.price ;
```
Here [SQL FIDDLE](http://sqlfiddle.com/#!9/997a6/6/0)
You have to adapt your detail\_table to add a technical id as the primary key and create some views. I renamed the date column to pr\_date; you'll find the schema in the SQL Fiddle.
Here is another query, using SQL Server:
```
select final.pr_date, final.pr_id, count(t_qty) as qty, final.price from
(SELECT top(select remain_qty from master_table WHERE product_id = 1) d.pr_date, d.pr_id, 1 as t_qty, d.price
FROM detail_table d
JOIN generator_4k i
ON i.n between 1 and d.qty
WHERE d.pr_id= 1
Order by d.id, d.pr_date) final
group by final.pr_date , final.pr_id , final.price ;
```
Here [SQL FIDDLE](http://sqlfiddle.com/#!3/997a6/19/0)
|
Sql query for special record
|
[
"",
"sql",
"t-sql",
""
] |
I have an original table named 'student' like this:
```
idstudent studentname studentclass
101 adam A
102 brian NULL
103 charlie C
104 danny D
```
Then, I have a temporary table named 'student\_temp' like this:
```
idstudent studentname studentclass
101 adam A
102 brian B
103 NULL C
105 edward E
```
If I run this query on SQL Server Management Studio 2008 R2:
```
USE [StudentDatabase]
GO
UPDATE stu
SET [studentname] = tmp.[studentname]
, [studentclass] = tmp.[studentclass]
FROM [StudentDatabase].[dbo].[student] stu
INNER JOIN [StudentDatabase].[dbo].[student_temp] tmp
ON stu.[idstudent] = tmp.[idstudent]
GO
INSERT INTO student
SELECT
tmp.[idstudent]
, tmp.[studentname]
, tmp.[studentclass]
FROM [StudentDatabase].[dbo].[student] stu
RIGHT OUTER JOIN [StudentDatabase].[dbo].[student_temp] tmp
ON stu.[idstudent] = tmp.[idstudent]
WHERE stu.[idstudent] IS NULL
```
The results of 'student' table will be like this:
```
idstudent studentname studentclass
101 adam A
102 brian B
103 NULL C
104 danny D
105 edward E
```
But what I want is, the results will be like this:
```
idstudent studentname studentclass
101 adam A
102 brian B
103 charlie C
104 danny D
105 edward E
```
Notice that `idstudent=103` in the `student` table already has the `studentname` value "charlie", but it is overwritten by the record from the `student_temp` table, where it is `NULL`.
Is there any way to resolve this?
|
Try this:
```
UPDATE stu
SET [studentname] = CASE WHEN tmp.[studentname] IS NOT NULL THEN tmp.[studentname] ELSE stu.studentname END
, [studentclass] = CASE WHEN tmp.[studentclass] IS NOT NULL THEN tmp.[studentclass] ELSE stu.studentclass END
FROM [student] stu
INNER JOIN [student_temp] tmp
ON stu.[idstudent] = tmp.[idstudent]
GO
INSERT INTO student
SELECT
tmp.[idstudent]
, tmp.[studentname]
, tmp.[studentclass]
FROM [student] stu
RIGHT OUTER JOIN [student_temp] tmp
ON stu.[idstudent] = tmp.[idstudent]
WHERE stu.[idstudent] IS NULL
```
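The `CASE WHEN ... IS NOT NULL` pattern is equivalent to `COALESCE(tmp.col, stu.col)`. A minimal runnable sketch of the NULL-preserving upsert (SQLite via Python; SQLite's correlated-subquery form replaces the T-SQL `UPDATE ... FROM` join, the data is the question's sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE student (idstudent INTEGER PRIMARY KEY, studentname TEXT, studentclass TEXT);
CREATE TABLE student_temp (idstudent INTEGER PRIMARY KEY, studentname TEXT, studentclass TEXT);
INSERT INTO student VALUES (101,'adam','A'),(102,'brian',NULL),(103,'charlie','C'),(104,'danny','D');
INSERT INTO student_temp VALUES (101,'adam','A'),(102,'brian','B'),(103,NULL,'C'),(105,'edward','E');
""")

# COALESCE keeps the existing value whenever the temp table supplies NULL.
cur.execute("""
UPDATE student
SET studentname = COALESCE((SELECT t.studentname FROM student_temp t
                            WHERE t.idstudent = student.idstudent), studentname),
    studentclass = COALESCE((SELECT t.studentclass FROM student_temp t
                             WHERE t.idstudent = student.idstudent), studentclass)
WHERE idstudent IN (SELECT idstudent FROM student_temp)
""")

# Insert rows that only exist in the temp table.
cur.execute("""
INSERT INTO student
SELECT t.idstudent, t.studentname, t.studentclass
FROM student_temp t
WHERE t.idstudent NOT IN (SELECT idstudent FROM student)
""")

rows = cur.execute("SELECT * FROM student ORDER BY idstudent").fetchall()
print(rows)  # "charlie" survives; 102 gets 'B'; 105 is inserted
```

This produces exactly the desired table from the question, including `103 charlie C`.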
|
Use a condition in your `WHERE` clause to ignore `NULL` values
```
UPDATE stu
SET [studentname] = tmp.[studentname]
, [studentclass] = tmp.[studentclass]
FROM [StudentDatabase].[dbo].[student] stu
INNER JOIN [StudentDatabase].[dbo].[student_temp] tmp
ON stu.[idstudent] = tmp.[idstudent]
WHERE tmp.[studentname] IS NOT NULL
```
|
Updating table without replacing values that already exist in the original table with null values from the temporary table
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I need add a where clause only if a condition is respected, for example :
```
SELECT * FROM car Where <a condition>
and ( if (car.brand = 'Renault') then car.color = 'red')
```
In my example: if the brand is Renault I want only red cars, and I want other cars only if there is no Renault.
Is it possible to write something like this in SQL (without PLSQL)?
|
You can't use an `if` in a `where` clause (standard SQL doesn't even have `if`). To achieve what you want you need to use something like:
```
where ... and (car.brand = 'Renault' and car.color = 'red' or car.brand <> 'Renault')
```
This disjunction says *"if the brand is Renault, then the color needs to be red, for other brand it doesn't matter"*.
If you - as you comment - *"want only renault red, ... other car only if there isn't renault"*, then you need to check for the non-existence of Renault:
```
where ...
and (car.brand = 'Renault' and car.color = 'red'
or not exists (select * from car where car.brand = 'Renault'))
```
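The first disjunction is easy to verify against a small sample (a sketch using SQLite via Python; the sample rows are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE car (brand TEXT, color TEXT);
INSERT INTO car VALUES ('Renault','red'),('Renault','blue'),('Fiat','green');
""")

# "If the brand is Renault, the color must be red; other brands always pass."
rows = cur.execute("""
SELECT brand, color FROM car
WHERE (brand = 'Renault' AND color = 'red') OR brand <> 'Renault'
""").fetchall()
print(rows)  # → [('Renault', 'red'), ('Fiat', 'green')]
```

The blue Renault is filtered out while the non-Renault row passes unconditionally.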
|
Simple logic below.
You might want to write a truth table for any query like this if you are not sure.
```
SELECT *
FROM car
WHERE <a condition>
AND ((car.brand= 'Renault' and car.color = 'red') OR car.brand != 'Renault')
```
|
If statement in clause where
|
[
"",
"sql",
"oracle",
""
] |
I have the following table:
```
+----+----------+
| id | feature |
+----+----------+
| 1 | 10 |
| 1 | 20 |
| 2 | 20 |
| 3 | 40 |
| 4 | 50 |
| 5 | 60 |
+----+----------+
```
And I'd like to have the id's that have both features 10 and 20. So not just the id's that have 10 or 20, but the id's that have both 10 and 20.
|
The easiest way would be to `GROUP BY` the `id` and use `HAVING`:
```
SELECT id
FROM table_name
WHERE feature IN (10,20)
GROUP BY id
HAVING COUNT(distinct feature) = 2
```
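The `HAVING COUNT(DISTINCT ...)` trick can be checked directly against the question's sample data (a sketch using SQLite via Python; the table name is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE features (id INTEGER, feature INTEGER);
INSERT INTO features VALUES (1,10),(1,20),(2,20),(3,40),(4,50),(5,60);
""")

# Keep only ids whose filtered rows cover both distinct features 10 and 20.
rows = cur.execute("""
SELECT id FROM features
WHERE feature IN (10, 20)
GROUP BY id
HAVING COUNT(DISTINCT feature) = 2
""").fetchall()
print(rows)  # → [(1,)]
```

`DISTINCT` matters here: without it, an id with two rows of feature 10 would also reach a count of 2.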
|
Another way to select the ids that have both features 10 and 20:
```
select id
from table1
group by id
having sum(feature = 10)
and sum(feature = 20)
```
[`DEMO`](http://sqlfiddle.com/#!9/dbf10b/1)
|
SQL select rows with filters
|
[
"",
"mysql",
"sql",
""
] |
I have two tables that can be joined using one field:
```
Table_1:
emp_id emp_name department
------ -------- ----------
1 Adam Accounting
2 Peter Engineering
3 Bruce Engineering
Table_2:
emp_id emp_salary
------ ----------
1 1000
3 3500
5 2000
```
I want to select the rows in table 2 that don't appear when joining the two tables (in this example emp\_id=5). I have been trying the following statement but I am getting 0 rows:
```
select * from table_2
where not exists
(
select * from table_1, table_2
where table_1.emp_id = table_2.emp_id);
```
|
So easy, just remove table\_2 from the sub-query:
```
select *
from table_2
where not exists (select 1
from table_1
where table_1.emp_id = table_2.emp_id);
```
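The corrected subquery is correlated with the outer row, so each `table_2` row is tested individually instead of the whole join being tested once. A runnable sketch of the fixed query with the question's sample data (SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table_1 (emp_id INTEGER, emp_name TEXT, department TEXT);
CREATE TABLE table_2 (emp_id INTEGER, emp_salary INTEGER);
INSERT INTO table_1 VALUES (1,'Adam','Accounting'),(2,'Peter','Engineering'),(3,'Bruce','Engineering');
INSERT INTO table_2 VALUES (1,1000),(3,3500),(5,2000);
""")

# Correlated NOT EXISTS: keep table_2 rows with no matching emp_id in table_1.
rows = cur.execute("""
SELECT * FROM table_2
WHERE NOT EXISTS (SELECT 1 FROM table_1
                  WHERE table_1.emp_id = table_2.emp_id)
""").fetchall()
print(rows)  # → [(5, 2000)]
```

Only `emp_id=5` survives, matching the expected anti-join result.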
|
Try:
```
select * from table_2
where
emp_id not in (select emp_id from table_1)
```
|
Selecting rows that don't exist when joining two tables
|
[
"",
"sql",
"oracle",
"join",
"oracle11g",
""
] |
Code
```
CREATE TABLE #Temp (ValA varchar(10) NULL, FK_ID int)
INSERT INTO #Temp
SELECT 'A',1
UNION ALL
SELECT 'A',1
UNION ALL
SELECT 'A',1
UNION ALL
SELECT 'A',2
UNION ALL
SELECT 'B',1
UNION ALL
SELECT 'B',2
UNION ALL
SELECT 'C',1
UNION ALL
SELECT 'C',1
UNION ALL
SELECT 'C',1
SELECT
ValA
, FK_ID
, CASE WHEN COUNT(*) OVER (PARTITION BY ValA, FK_ID) > 1 THEN 1
ELSE 0
END IsMultiple
FROM #Temp
DROP TABLE #Temp
```
Current Output
```
ValA FK_ID IsMultiple
A 1 1
A 1 1
A 1 1
A 2 0
B 1 0
B 2 0
C 1 1
C 1 1
C 1 1
```
Desired Output
```
ValA FK_ID IsMultiple
A 1 1
A 1 1
A 1 1
A 2 **1**
B 1 0
B 2 0
C 1 1
C 1 1
C 1 1
```
Goal
I would like to find multiples partitioned by ValA and FK\_ID, but when a ValA repeats and at least two of its rows share an FK\_ID (while at least one other row has a different FK\_ID), I would like the whole group to be marked with IsMultiple = 1.
i.e. ValA 'A' has 4 records where 3 records share the same FK\_ID and one has a different FK\_ID; the whole set should be marked as IsMultiple = 1.
Thank you
|
If you don't have `NULL` values in `FK_ID`:
```
SELECT
ValA
, FK_ID
, CASE WHEN COUNT(*) OVER (PARTITION BY ValA) >
dense_rank() OVER (PARTITION BY ValA ORDER BY FK_ID ASC) + dense_rank() OVER (PARTITION BY ValA ORDER BY FK_ID DESC) -1 -- Get Distinct FK_ID Count
THEN 1
ELSE 0
END IsMultiple
FROM Temp
```
|
Not very elegant but works:
```
select
t.*,
case when tex.ValA is null
then 0
else 1
end IsMultiple
from #Temp t
left join (
select
ValA
from #Temp
group by
ValA, FK_ID
having
count(*) > 1
) tex on
t.ValA = tex.ValA
```
Here in the inner query we select the ValA values which have multiple identical (ValA, FK\_ID) pairs - this is achieved by grouping on (ValA, FK\_ID) and keeping only groups with `having count(*) > 1`.
Then in left join we use this set to mark records with corresponding ValAs as IsMultiple.
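The left-join variant can be checked against the question's sample data (a sketch using SQLite via Python; a `DISTINCT` is added to the derived table, an assumption beyond the original answer, to guard against duplicate ValA rows when more than one pair repeats):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Temp (ValA TEXT, FK_ID INTEGER);
INSERT INTO Temp VALUES ('A',1),('A',1),('A',1),('A',2),('B',1),('B',2),
                        ('C',1),('C',1),('C',1);
""")

# Any ValA owning at least one duplicated (ValA, FK_ID) pair marks its
# whole group as IsMultiple = 1, including the odd-one-out row ('A', 2).
rows = cur.execute("""
SELECT t.ValA, t.FK_ID,
       CASE WHEN tex.ValA IS NULL THEN 0 ELSE 1 END AS IsMultiple
FROM Temp t
LEFT JOIN (SELECT DISTINCT ValA FROM Temp
           GROUP BY ValA, FK_ID HAVING COUNT(*) > 1) tex
  ON t.ValA = tex.ValA
ORDER BY t.ValA, t.FK_ID
""").fetchall()
print(rows)
```

This reproduces the desired output, with `('A', 2)` flagged as 1 while both `B` rows stay 0.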
|
T-SQL | Find multiples (with a twist?!)
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |