| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a query (in SAP, specifically).
```
SELECT T1.[Name],
sum(T0.[U_pptQuota]) AS [Quota]
FROM OHEM T0
INNER JOIN OUBR T1
ON T0.[branch] = T1.[Code]
```
It returns (correctly) the following info:
```
# Branches Name
1 Main 11.40 Y
2 Mesa 24.70 Y
3 Phoenix 24.90 Y
4 Tempe 21.00 Y
```
However, when I add more joins and run this query:
```
SELECT T1.[Name],
sum(T0.[U_pptQuota]) AS [Quota]
FROM OHEM T0
INNER JOIN OUBR T1
ON T0.[branch] = T1.[Code]
INNER JOIN OINV T2
ON T0.[empID] = T2.[OwnerCode]
INNER JOIN INV1 T3
ON T2.[DocEntry] = T3.[DocEntry]
GROUP BY T1.[Name]
```
then the query suddenly returns the following:
```
# Branches Name Quota
1 Main 140.40 Y
2 Mesa 157.00 Y
3 Phoenix 20.00 Y
4 Tempe 265.60 Y
```
Why would adding those selected joins cause the number to change? This doesn't make any sense at all to me.
EDIT: I've updated the query to read as follows:
```
SELECT DISTINCT T1.[Name],
T0.[U_pptQuota] AS [Quota]
FROM OHEM T0
INNER JOIN OUBR T1
ON T0.[branch] = T1.[Code]
LEFT JOIN OINV T2
ON T0.[empID] = T2.[OwnerCode]
LEFT JOIN INV1 T3
ON T2.[DocEntry] = T3.[DocEntry]
```
This now returns the right amounts in a list
```
# Branches Name Quota
1 Main 0.60 Y
4 Main 1.80 Y
6 Main 2.00 Y
11 Main 3.00 Y
16 Main 4.00 Y
2 Mesa 1.20 Y
7 Mesa 2.00 Y
12 Mesa 3.00 Y
15 Mesa 3.50 Y
20 Mesa 5.00 Y
23 Mesa 8.00 Y
3 Phoenix 1.60 Y
5 Phoenix 1.90 Y
9 Phoenix 2.10 Y
14 Phoenix 3.10 Y
17 Phoenix 4.00 Y
19 Phoenix 5.00 Y
22 Phoenix 7.20 Y
8 Tempe 2.00 Y
10 Tempe 2.70 Y
13 Tempe 3.00 Y
18 Tempe 4.00 Y
21 Tempe 5.30 Y
```
(When you sum the amounts, they appear to equal the correct amounts)
However, when I add a "SUM" and a "GROUP BY" as in the original, it again returns values that are too high.
EDIT: The way the tables are associated is as follows:
OUBR = the Branch table. Each employee has one branch, and a branch can have more than one employee. PKey = OUBR.Code
OHEM = the Employee table. Each employee has a quota. PKey = OHEM.empID
OINV = the Invoice table. Each invoice has exactly one employee associated with it. An employee will hopefully have more than one invoice. PKey = OINV.DocNum
INV1 = the Invoice sub table (for each different item on the invoice). PKey = INV1.DocEntry
I eventually need to get the following information from a query:
```
BRANCH QUOTA TOTAL
```
Where Quota is the sum of all the quotas for each employee in the branch, and TOTAL is the sum of all invoices associated with the employees from each branch.
|
@phroureo Based on the additional information that you added and the assumptions in my last comment... The query should look something like this:
```
SELECT T1.[Name],
sum(T0.[U_pptQuota]) AS [Quota],
sum (T4.Total) as Total
FROM OHEM T0
INNER JOIN OUBR T1 ON T0.[branch] = T1.[Code]
LEFT JOIN (Select T2.OwnerCode, Sum(T2.PaidSum) AS [Total]
From OINV T2
INNER JOIN INV1 T3 ON T2.[DocEntry] = T3.[DocEntry]
Group by T2.OwnerCode) T4 ON T0.[empID] = T4.[OwnerCode]
GROUP BY T1.[Name]
```
Let me know if my assumptions were right.
|
This happens because your joins multiply rows. I'm not sure what the tables represent, but `ON T0.[empID] = T2.[OwnerCode]` multiplies entries because you have many T2 records with the same OwnerCode, and/or `ON T2.[DocEntry] = T3.[DocEntry]` multiplies entries because you have multiple rows with the same DocEntry value in T3.
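The fan-out effect is easy to reproduce. Below is a minimal sketch using Python's sqlite3 module; the table and column names (`employees`, `invoices`, `quota`) are simplified stand-ins for OHEM/OINV, not the real SAP schema. One employee's quota gets counted once per matching invoice:

```python
# Demonstrate how a one-to-many join repeats parent rows and inflates SUM().
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (emp_id INTEGER, branch TEXT, quota REAL);
    CREATE TABLE invoices  (emp_id INTEGER, total REAL);
    INSERT INTO employees VALUES (1, 'Main', 10.0), (2, 'Main', 1.5);
    INSERT INTO invoices  VALUES (1, 100.0), (1, 200.0), (1, 300.0);
""")

# Without the join: the quota sums correctly.
(correct,) = conn.execute(
    "SELECT SUM(quota) FROM employees WHERE branch = 'Main'").fetchone()

# With the join: employee 1's quota row is repeated once per invoice,
# and employee 2 disappears entirely (no invoices, INNER JOIN).
(inflated,) = conn.execute("""
    SELECT SUM(e.quota)
    FROM employees e JOIN invoices i ON e.emp_id = i.emp_id
""").fetchone()

print(correct)   # 11.5
print(inflated)  # 30.0  (10.0 counted three times)
```

Pre-aggregating the invoice side in a derived table, as in the accepted answer, avoids the repetition.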
|
When I add more JOINs in a SQL query, it returns a different amount, even though nothing else changes
|
[
"",
"sql",
"sql-server",
""
] |
I am working on a database and one of the final tasks left is to create user accounts. For some reason I can't seem to get it to work. In fact, none of the commented code works when uncommented. Our primary concern is being able to automate the creation of user accounts rather than creating them manually. I am hoping someone can shed some light on the error of my ways so my code will compile.
```
create or replace TRIGGER trg_Students
BEFORE INSERT OR UPDATE OF SRN, Surname, Forename, Username, DOB, Date_Cv_Submitted, Date_cv_approved, same_address, home_phone_no, home_postcode ON Students
FOR EACH ROW
BEGIN
IF INSERTING THEN
:NEW.SRN := seq_SRN.nextval;
CREATE USER :new.USERNAME
IDENTIFIED BY PASSWORD
PROFILE app_user
PASSWORD EXPIRE;
--IF (ACTIVE_ACCOUNT = 'Y' AND CV_APPROVED = NULL) THEN
-- RAISE_APPLICATION_ERROR(-20000, 'Cannot create an account that is active before the cv is approved!');
--END IF;
END IF;
--IF UPDATING THEN
--IF (DATE_CV_APPROVED != NULL) THEN
--:new.Active_Account := 'Y';
--END IF;
--END IF;
:NEW.forename := INITCAP(:NEW.forename);
:NEW.surname := INITCAP(:NEW.surname);
:NEW.home_postcode := UPPER(:NEW.home_postcode);
:NEW.home_phone_no := REGEXP_REPLACE(:NEW.home_phone_no, '[^[:digit:]]', '');
:NEW.home_phone_no := REGEXP_REPLACE(:NEW.home_phone_no,
'([[:digit:]]{5})([[:digit:]]{6})', '(\1) \2');
IF :NEW.same_address = 'Y' THEN
:NEW.term_no := :NEW.home_no;
:NEW.term_postcode := :NEW.home_postcode;
:NEW.term_phone_no := :NEW.home_phone_no;
ELSE
:NEW.term_postcode := UPPER(:NEW.term_postcode);
:NEW.term_phone_no := REGEXP_REPLACE(:NEW.term_phone_no, '[^[:digit:]]', '');
:NEW.term_phone_no := REGEXP_REPLACE(:NEW.term_phone_no,
'([[:digit:]]{5})([[:digit:]]{6})', '(\1) \2');
END IF;
IF (:NEW.DOB + NUMTOYMINTERVAL(18,'YEAR') > SYSDATE) THEN
RAISE_APPLICATION_ERROR(-20000, 'Client must be at least 18 years of age!');
END IF;
IF (:NEW.Date_cv_approved < :NEW.date_cv_submitted) THEN
RAISE_APPLICATION_ERROR(-20000, 'Cannot approve a cv before it is submitted!');
END IF;
END;
```
the error is
> Compilation failed, line 6 (13:19:44) The line numbers associated with
> compilation errors are relative to the first BEGIN statement. This
> only affects the compilation of database triggers. PLS-00103:
> Encountered the symbol "CREATE" when expecting one of the following: (
> begin case declare else elsif end exit for goto if loop mod null
> pragma raise return select update while with << continue
> close current delete fetch lock insert open rollback savepoint set sql
> execute commit forall merge pipe purge.
I have changed my method to:
```
APEX_UTIL.CREATE_USER(
p_user_name => :new.USERNAME,
P_web_password => 'Password123');
```
and it now produces this error:
> An API call has been prohibited. Contact your administrator. Details
> about this incident are available via debug id "46046".
>
> Contact your application administrator.
|
It seems funny that I am answering my own question, but I solved the issue. The code I used to create APEX users is the following.
```
APEX_UTIL.CREATE_USER(
p_user_name => :new.USERNAME,
P_web_password => 'Password123',
p_change_password_on_first_use => 'Y');
```
The error above was solved by changing the security settings from within the Application Builder to allow the API to work. This is found as follows:
Application Builder -> (Your Application) -> Shared Components -> Security Attributes, then tick the boxes next to Runtime API Usage at the bottom of the page. I ticked all 3 as I needed to.
|
You cannot execute DDL statements such as `CREATE USER` directly from PL/SQL; you need dynamic SQL.
Change this to:
```
IF INSERTING THEN
:NEW.SRN := seq_SRN.nextval;
execute immediate 'CREATE USER '||:new.USERNAME ||'
IDENTIFIED BY PASSWORD
PROFILE app_user
PASSWORD EXPIRE';
```
|
PL/SQL trigger: can't create apex user
|
[
"",
"sql",
"database",
"oracle",
"plsql",
"oracle-apex",
""
] |
I have a few `LEFT JOIN`s, so I would like to merge this `sql` into my other query:
```
SELECT
A.ID
FROM
Table1 as A
LEFT JOIN
(SELECT
T.ID, T.TRF_TAKEN,
SUM(CASE WHEN CAST([UNT_TRNSFER] AS FLOAT) = 0
THEN CAST([TRF_TAKEN] AS FLOAT)
ELSE ISNULL([UNT_TRNSFER], 0)
END) AS 'UNT_TRNSFER'
FROM TRNS_C as T
GROUP BY ID;) ON A.ID = TransfC.ID
WHERE
A.ID = 1;
```
ERROR
> Column 'TRNS\_C.TRF\_TAKEN' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
|
Remove `;` from `GROUP BY ID;)`, add `T.TRF_TAKEN` to `GROUP BY` clause and add the table alias `Transfc` that you joined on.
```
SELECT A.ID, Transfc.UNT_TRNSFER
FROM Table1 AS A
LEFT JOIN (
SELECT T.ID, T.TRF_TAKEN,
SUM(CASE WHEN CAST([UNT_TRNSFER] AS FLOAT) = 0 THEN CAST([TRF_TAKEN] AS FLOAT)
ELSE ISNULL([UNT_TRNSFER], 0) END) AS 'UNT_TRNSFER'
FROM TRNS_C as T
GROUP BY T.ID, T.TRF_TAKEN
) Transfc
ON A.ID = TransfC.ID
WHERE A.ID = 1;
```
|
You just need to put your non-aggregated fields into the GROUP BY. You are asking it to group by `ID` alone, but you are selecting both T.ID and T.TRF\_TAKEN.
```
select A.ID
from Table1 as A
LEFT JOIN (
SELECT T.ID, T.TRF_TAKEN,
SUM(CASE WHEN CAST([UNT_TRNSFER] AS FLOAT) = 0 THEN CAST([TRF_TAKEN] AS FLOAT) ELSE ISNULL([UNT_TRNSFER], 0) END) AS 'UNT_TRNSFER'
FROM TRNS_C as T
GROUP BY T.ID, T.TRF_TAKEN
) Transfc
ON A.ID = TransfC.ID
Where A.ID = 1;
```
|
How to do a left or subquery with this SQL statement?
|
[
"",
"sql",
"sql-server",
""
] |
We are using a procedural approach (a WHILE loop) for inserting records into a particular table. The insert syntax is as follows:
```
DECLARE @CNT INT = 0,
@WEEK DATE = '2015-11-01',
@FLAG INT
CREATE TABLE #Tmpdata (officeId int,id smallint, weekDate date,startsOn varchar(10),endsOn varchar(10),flag bit);
WHILE (@CNT <7)
BEGIN
SET @WEEK = DATEADD(D,@CNT,@WEEK )
IF EXISTS
(SELECT 1
FROM YEARRANGE
WHERE @WEEK BETWEEN CONVERT(DATE,taxseasonbegin)
AND CONVERT (DATE,taxSeasonEnd)
)
BEGIN
SET @FLAG =1
END
ELSE
BEGIN
SET @FLAG = 0
END
INSERT INTO #Tmpdata
(
officeId,id,weekDate,startsOn,endsOn,flag
)
VALUES
(
5134,@lvCounter,@week,'09:00 AM','05:00 PM',@flag
);
SET @cnt=@cnt+1;
end
```
(Note: the tax season runs from January to August.)
Is it possible to re-write the above logic in `set based approach`?
|
This makes a number of assumptions because you didn't post DDL or any consumable sample data. Also, the variable @lvCounter is not defined in your code. This is a perfect opportunity to use a tally or numbers table instead of a loop.
```
declare @lvCounter int = 42;
DECLARE @CNT INT = 0,
@WEEK DATE = '2015-11-01',
@FLAG INT;
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n))
, cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E1
)
select 5134 as officeId
, @lvCounter as Id
, DATEADD(DAY, N - 1, @WEEK) as weekDate
, '09:00 AM' as startsOn
, '05:00 PM' as EndOn
, Flag
from cteTally t
cross apply
(
select CAST(count(*) as bit) as Flag
from YearRange
where DATEADD(Day, t.N, @WEEK) > CONVERT(DATE,taxseasonbegin)
AND DATEADD(Day, t.N, @WEEK) <= CONVERT (DATE,taxSeasonEnd)
) y
where t.N <= 7;
```
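The tally idea is portable. Here is a hedged sketch using Python's sqlite3 module, where a recursive CTE stands in for the tally table and generates all 7 dates plus the flag in a single set-based statement; the table and column names (`year_range`, `season_begin`, `season_end`) are invented for the demo:

```python
# A recursive CTE replaces the WHILE loop: one statement produces 7 dated rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE year_range (season_begin TEXT, season_end TEXT)")
conn.execute("INSERT INTO year_range VALUES ('2015-01-01', '2015-08-31')")

rows = conn.execute("""
    WITH RECURSIVE tally(n) AS (
        SELECT 0 UNION ALL SELECT n + 1 FROM tally WHERE n < 6
    )
    SELECT date('2015-11-01', '+' || n || ' day') AS week_date,
           EXISTS (SELECT 1 FROM year_range
                   WHERE date('2015-11-01', '+' || n || ' day')
                         BETWEEN season_begin AND season_end) AS flag
    FROM tally
""").fetchall()

for week_date, flag in rows:
    print(week_date, flag)
```

Each row carries the date and a 0/1 flag computed inline, mirroring the IF EXISTS branch of the loop; November falls outside the assumed season, so every flag is 0 here.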
|
Please can you provide sample data?
You can do something like:
```
SELECT DateIncrement = SUM(DATEADD(D,@CNT,@WEEK)) OVER (ORDER BY officeID)
FROM...
```
This gets an incremented date value for each record which you can then check against your start and end dates.
|
Convert Procedural Approach into Set Based Approach in Sql-Server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a similar table:
```
ID | Name | Time
-------|---------|-----------------------------
111 | AAA | 2015-11-05 15:39:24.000
222 | BBB | 2015-11-04 11:29:11.000
111 | AAA | 2015-11-02 13:12:10.000
333 | CCC | 2015-11-05 15:39:24.000
111 | AAA | 2015-11-01 15:39:24.000
```
I would like to get the latest `Time` for every `ID`. My output would look something like this:
```
ID | Name | Time
-------|---------|-----------------------------
111 | AAA | 2015-11-05 15:39:24.000
222 | BBB | 2015-11-04 11:29:11.000
333 | CCC | 2015-11-05 15:39:24.000
```
Please help me formulate the query!
Can I use `max(Time)` to get latest `Time`?
Do I need to JOIN the table with itself to be able to list `max(Time)` for all `ID`s?
|
If there are more columns you want to keep you could use a CTE with `ROW_NUMBER` function:
```
WITH CTE AS
(
SELECT Id, Name, Time, OtherColumns...,
RN = ROW_NUMBER() OVER (PARTITION BY ID Order By Time DESC)
FROM dbo.TableName
)
SELECT Id, Name, Time, OtherColumns...
FROM CTE
WHERE RN = 1
```
If it's possible that there are multiple rows with the same max time per ID and you want all of them instead of one arbitrary row, replace `ROW_NUMBER` with `DENSE_RANK`. [Ranking Functions](https://msdn.microsoft.com/en-us/library/ms189798(v=sql.120).aspx)
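As a quick check, the CTE approach runs as-is on any engine with window functions; this sketch uses Python's sqlite3 module (SQLite 3.25+) with the sample data from the question:

```python
# Keep the latest row per id using ROW_NUMBER() over a descending time order.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER, name TEXT, time TEXT);
    INSERT INTO t VALUES
        (111, 'AAA', '2015-11-05 15:39:24'),
        (222, 'BBB', '2015-11-04 11:29:11'),
        (111, 'AAA', '2015-11-02 13:12:10'),
        (333, 'CCC', '2015-11-05 15:39:24'),
        (111, 'AAA', '2015-11-01 15:39:24');
""")

rows = conn.execute("""
    WITH cte AS (
        SELECT id, name, time,
               ROW_NUMBER() OVER (PARTITION BY id ORDER BY time DESC) AS rn
        FROM t
    )
    SELECT id, name, time FROM cte WHERE rn = 1 ORDER BY id
""").fetchall()

for row in rows:
    print(row)
```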
|
Use `MAX` function to find the latest time.
**Query**
```
SELECT ID, Name,
MAX(Time) AS [Time]
FROM your_table_name
GROUP BY ID, Name;
```
|
Query max time (latest) by id SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
Imagine a table with two columns as follows:
```
Account_ID (integer)
Product_ID (integer)
```
Other columns are not material. This lists products bought by accounts. I want to create an output with three columns like so:
```
Account_ID_1 | Account_ID_2 | Count(distinct product_ID)
```
The result should have a combination of all values of Account\_IDs and associated distinct count of common Product\_Ids among each Account\_Id Combination.
I'm using `Google BigQuery`. Is there an SQL method for doing this, or should I plan to code it in a full programming language?
|
this works for me:
```
select
t1.Account_ID, T2.Account_ID, count(t1.Product_ID) count_product_id
from
MYTABLE t1 join MYTABLE t2 on t1.Product_ID = t2.Product_ID
where t1.Account_ID <> t2.Account_ID
group by t1.Account_ID, t2.Account_ID
order by 1,2
```
|
Here I calculate how many products both accounts have in common.
```
SELECT
T1.Account_ID as Account_ID_1,
T2.Account_ID as Account_ID_2,
COUNT(distinct T1.product_id)
From YourTable as T1
JOIN YourTable as T2
ON T1.Account_ID < T2.Account_ID
AND T1.product_ID = T2.product_ID
GROUP BY
T1.Account_ID,
T2.Account_ID
```
|
Find distinct count of common accounts between products
|
[
"",
"sql",
"distinct",
"combinations",
"google-bigquery",
""
] |
This question is quite vague, so I will just give an example.
If I have a DB with for example a country attribute, where the country can appear more than once, how could I check that another attribute, called building, is null in all instances of that country?
And how would you do this for all countries? i.e. check that all buildings are null for a group of the same country, for every country.
```
country | building
--+--
USA | null
USA | null
USA | 2
Germany | null
Germany | null
Nepal | null
Spain | 3
```
The query should return Germany and Nepal if you select country. Totally arbitrary example but it should get the point across.
|
Checked in Postgres: `select country from countries where country not in (select country from countries where building is not null) group by country order by country;`
Where `countries` is the table name.
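A runnable sanity check of this query using Python's sqlite3 module, with the sample data from the question:

```python
# Return countries where every building value is NULL, via NOT IN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE countries (country TEXT, building INTEGER);
    INSERT INTO countries VALUES
        ('USA', NULL), ('USA', NULL), ('USA', 2),
        ('Germany', NULL), ('Germany', NULL),
        ('Nepal', NULL), ('Spain', 3);
""")

rows = conn.execute("""
    SELECT country FROM countries
    WHERE country NOT IN (SELECT country FROM countries
                          WHERE building IS NOT NULL)
    GROUP BY country ORDER BY country
""").fetchall()

print([r[0] for r in rows])  # ['Germany', 'Nepal']
```

An equivalent aggregate-only form is `GROUP BY country HAVING COUNT(building) = 0`, since `COUNT(column)` ignores NULLs.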
|
Here's another one, using group by:
```
SELECT country FROM countries group by country having sum(building) is null;
```
I've tested this in MySql and SQL Server.
|
How to use a where clause on all instances of an attribute
|
[
"",
"sql",
"where-clause",
""
] |
I have a question!
I need to build a university project, and it will have one database table like this:
[](https://i.stack.imgur.com/Agz4B.png)
This table will have a LOT of records!
To manage this I need to create a validation system.
Which is better (and why): creating a buffer table like this:
[](https://i.stack.imgur.com/rZpRz.png)
Or adding a column to my table like this:
[](https://i.stack.imgur.com/sUNu4.png)
Thank you!
|
Your question does not have enough information to provide a real answer. Here is some guidance on how to think about the situation. Which approach depends on the nature of your application and especially on what "validation" means.
One reasonable interpretation is that "validation" is part of a work-flow process, so it happens only once (or 99% of the time only once). And, you never want to see unvalidated advertisements when you look at advertisements. If this is the case, then there would typically be additional information about the validation process.
This scenario suggests two reasonable approaches:
* Do the validation inside a transaction. This would be reasonable if the validation process were entirely in the database and was measured in seconds.
* Have a separate table for advertisements being validated. Perhaps even a separate table per "user" or "entity" responsible for them. Depending on the nature of the validation process, this could be a queue that feeds them to people doing the validation.
Putting them in the "advertisements" table doesn't make sense, because there is likely to be additional information involved with the validation process -- who, what, where, when, how.
If an advertisement can be validated and invalidated multiple times, then the best approach may be to put them in the same table. Once again, there are questions about the nature of the process.
Getting access to the two groups without a full table scan is tricky. If 10% of the rows are invalidated and 90% are validated, then a normal index would require a full table scan for reading either group. To get faster access to the smaller group, here are two options:
* clustered index on the validation flag.
* separate partitions for validated and invalidated rows.
In both cases, changing the validation flag for a record is relatively expensive, because it involves reading and writing the record on different data pages. Unless dozens of changes are made per second, this is probably not a big deal.
|
Here, there is no need to have a separate "buffer table". You can just properly index the `valid` field. So the following index would essentially automatically create a buffer table:
```
create unique index x on y (id)
include (all columns)
where (valid = 0)
```
This index creates a copy of the yet invalid data. You can do lots of variations such as
```
create unique index x on y (valid, id)
```
There's really no need for a separate table. Indexes are very easy compared to partitioning or even manually partitioning. Much less work, more general, more flexible and less potential for human error.
|
Buffer table in a database, Good or not?
|
[
"",
"sql",
"database",
"database-schema",
""
] |
I apologise for the terribly-worded question, I tried to phrase it more clearly, but couldn't think of a way to do so.
My problem is this: I've got a table with ~20 columns. I need to find all rows which have a particular value in one of the columns, and within that set have the same value in another column as at least one other record.
So in analogous form, say I have a table of personal data (Names, DOBs, phone numbers, etc). How would I be able to get from that table the data for all of the people who have the Surname "Jones", and also the same birthday as anyone else with the same surname?
I've tried
```
select *
from personal_details
where surname = 'Jones'
and DOB in (select DOB
from personal_details
where surname = 'Jones'
group by DOB
having count(*) > 1);
```
Which hasn't given me the set I'm looking for. Can anyone point me in the right direction for how to be thinking about getting this properly?
|
Use `EXISTS` to return a person if another person has same surname and DOB but a different phone number (or other unique data).
```
select *
from personal_details pd1
where pd1.surname = 'Jones'
and exists (select 1 from personal_details pd2
where pd2.surname = pd1.surname
and pd2.DOB = pd1.DOB
and pd2.phone <> pd1.phone)
```
|
Try the code below. I have used a self join to get the result.
```
DECLARE @dd AS TABLE (name VARCHAR(1), dob DATE)
INSERT INTO @dd VALUES ('A', '2015-11-04'), ('B','2015-11-04'),('C','2015-05-05'),('D','2015-11-04'), ('E','2015-11-04'),('F','2015-12-04')
SELECT * FROM @dd
SELECT DISTINCT d.name, d.dob
FROM @dd d
INNER JOIN @dd d1
ON (d.dob = d1.dob
AND d.name <> d1.name)
```
|
How do I select data from a table where I need to return rows which have a repeat value in one field and a specified value in another?
|
[
"",
"sql",
"sql-server",
""
] |
I'm doing an analysis using SQL and R, I want to join two tables like below listed:
Table 1:
```
ID date
a11 20150302
a11 20150302
a22 20150303
a22 20150304
a33 20150306
a44 20150306
a55 20150307
a66 20150308
a66 20150309
a66 20150310
```
Table 2
```
ID date
a11 20150303
a22 20150304
a22 20150305
a44 20150306
a66 20150308
a66 20150310
```
The situation is: a customer got called (table 1) and the customer called back for more info (table 2).
So what I want to do in the analysis is to:
1. Only show IDs that in both table.
2. Match table 2 date to table 1 date by:
* Match the closest date
* table 2 date must >= table 1 date
(like the example in result "a66" 20150310 is assigned to table1 date 20150310, while 20150308 is assigned to 20150308, not 20150309)
Result:
```
ID table1 date table2 date
a11 20150302
a11 20150302 20150303
a22 20150303 20150304
a22 20150304 20150305
a44 20150306 20150306
a66 20150308 20150308
a66 20150309
a66 20150310 20150310
```
Are there any solutions for this many-to-many matching/join (but I don't want n\*m as the result, I want 1 to 1)? A solution in either R or SQL would be welcome.
Thanks
|
```
SELECT ID, Date1, Date2 FROM (
SELECT joined.ID, joined.Date1, joined.Date2, ROW_NUMBER() OVER (PARTITION BY ID, Date1 ORDER BY Date2 ASC) AS RowNumber
FROM(
SELECT t1.ID, t1.[Date] as Date1, CASE WHEN t2.[Date] >= t1.[Date] THEN t2.[Date] ELSE NULL END as [Date2]
FROM Table1 t1
LEFT JOIN Table2 t2 ON t1.ID = t2.ID) as joined
WHERE joined.Date2 IS NOT NULL
) partitioned
WHERE RowNumber = 1
```
Joins the two tables on `ID` and removes the rows in `Table 2` that is not in `Table 1`. Then uses the `ROW_NUMBER() OVER (PARTITION BY ID, Date1 ORDER BY Date2 ASC)` to match the closest date which is found by the `WHERE RowNumber = 1` clause.
Produces this output that is consistent with the conditions you listed:
```
+-----+----------+----------+
| ID | Date1 | Date2 |
+-----+----------+----------+
| a11 | 20150302 | 20150303 |
| a22 | 20150303 | 20150304 |
| a22 | 20150304 | 20150304 |
| a44 | 20150306 | 20150306 |
| a66 | 20150308 | 20150308 |
| a66 | 20150309 | 20150310 |
| a66 | 20150310 | 20150310 |
+-----+----------+----------+
```
|
I get the same result as markmanguy in R with `dplyr`. For a22, the closest callback for the 20150304 initial call is 20150304, not 20150305. You need a time component to distinguish this.
```
library(dplyr)
inner_join(table1,table2,"ID")%>%
group_by(ID,date1)%>%
filter(date1<=date2)%>%
filter(row_number() == 1)
>
Source: local data frame [7 x 3]
Groups: ID, date1 [7]
ID date1 date2
(chr) (int) (int)
1 a11 20150302 20150303
2 a22 20150303 20150304
3 a22 20150304 20150304
4 a44 20150306 20150306
5 a66 20150308 20150308
6 a66 20150309 20150310
7 a66 20150310 20150310
```
**Data**
```
table1 <-read.table(text="ID date1
a11 20150302
a11 20150302
a22 20150303
a22 20150304
a33 20150306
a44 20150306
a55 20150307
a66 20150308
a66 20150309
a66 20150310", header=T,stringsAsFactors =F)
table2 <-read.table(text="ID date2
a11 20150303
a22 20150304
a22 20150305
a44 20150306
a66 20150308
a66 20150310", header=T,stringsAsFactors =F)
```
|
many to many join (same ID with different date)
|
[
"",
"sql",
"r",
"many-to-many",
"match",
"seq",
""
] |
I have a field that holds an account code. I've managed to extract the first 2 parts OK but I'm struggling with the last 2.
The field data is as follows:
```
812330/50110/0-0
812330/50110/BDG001-0
812330/50110/0-X001
```
I need to get the string between the second "/" and the "-", and the string after the "-". Both fields have variable lengths, so I would be looking to output 0 and 0 on the first record, BDG001 and 0 on the second record, and 0 and X001 on the third record.
Any help much appreciated, thanks.
|
You can use `CHARINDEX` and `LEFT/RIGHT`:
```
CREATE TABLE #tab(col VARCHAR(1000));
INSERT INTO #tab VALUES ('812330/50110/0-0'),('812330/50110/BDG001-0'),
('812330/50110/0-X001');
WITH cte AS
(
SELECT
col,
r = RIGHT(col, CHARINDEX('/', REVERSE(col))-1)
FROM #tab
)
SELECT col,
r,
sub1 = LEFT(r, CHARINDEX('-', r)-1),
sub2 = RIGHT(r, LEN(r) - CHARINDEX('-', r))
FROM cte;
```
`LiveDemo`
**EDIT:**
or even simpler:
```
SELECT
col
,sub1 = SUBSTRING(col,
LEN(col) - CHARINDEX('/', REVERSE(col)) + 2,
CHARINDEX('/', REVERSE(col)) -CHARINDEX('-', REVERSE(col))-1)
,sub2 = RIGHT(col, CHARINDEX('-', REVERSE(col))-1)
FROM #tab;
```
`LiveDemo2`
**EDIT 2:**
Using `PARSENAME` **SQL SERVER 2012+** (if your data does not contain `.`):
```
SELECT
col,
sub1 = PARSENAME(REPLACE(REPLACE(col, '/', '.'), '-', '.'), 2),
sub2 = PARSENAME(REPLACE(REPLACE(col, '/', '.'), '-', '.'), 1)
FROM #tab;
```
`LiveDemo3`
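For readers who just want the logic spelled out, the same two-step extraction ("take everything after the second '/', then split on '-'") looks like this in plain Python:

```python
# Take the part after the second '/', then split it on '-'.
codes = ['812330/50110/0-0', '812330/50110/BDG001-0', '812330/50110/0-X001']

results = []
for code in codes:
    tail = code.split('/', 2)[2]      # part after the second '/'
    sub1, sub2 = tail.split('-', 1)   # before and after the '-'
    results.append((sub1, sub2))

print(results)  # [('0', '0'), ('BDG001', '0'), ('0', 'X001')]
```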
|
...Or you can do this, working from left to right so you don't need to count from the end in case there are more '/' or '-' signs:
```
SELECT
SUBSTRING(columnName, CHARINDEX('/' , columnName, CHARINDEX('/' , columnName) + 1) + 1,
CHARINDEX('-', columnName) - CHARINDEX('/' , columnName, CHARINDEX('/' , columnName) + 1) - 1) AS FirstPart,
SUBSTRING(columnName, CHARINDEX('-' , columnName) + 1, LEN(columnName)) AS LastPart
FROM table_name
```
|
Extract string between after second / and before -
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two tables, `Show`, `Character`. Each Show has\_many Characters.
```
class Show < ActiveRecord::Base
  has_many :characters
end

class Character < ActiveRecord::Base
  belongs_to :show
end
```
What I want to do is return the results of a Show that is associated with multiple Characters that fit certain criteria.
For example, I want to be able to return a list of Shows that have as characters both Batman and Robin. Not Batman OR Robin, Batman AND Robin.
So the query should be something like
```
Show.includes(:characters).where(characters: {'name = ? AND name = ?', "Batman", "Robin"})
```
But this returns an error. Is there a syntax for this that would work?
**UPDATE**
The query
```
Show.includes(:characters).where('characters.name = ? AND characters.name = ?', "Batman", "Robin")
```
returns a value of 0, even though there are definitely Shows associated with both Batman and Robin.
|
Using plain SQL, one solution is:
```
select s.*
from shows as s
join characters as c1 on (s.id=c1.show_id)
join characters as c2 on (s.id=c2.show_id)
where c1.name='Batman'
and c2.name='Robin';
```
Using Arel, I would translate it as:
```
Show.joins('join characters as c1 on shows.id=c1.show_id').joins('join
characters as c2 on shows.id=c2.show_id').where('c1.name = "Batman"').where(
'c2.name="Robin"')
```
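The double-join pattern is easy to verify outside Rails. This sketch uses Python's sqlite3 module with an invented schema and data; only the show that has both characters comes back:

```python
# Join the characters table twice, once per required character name.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shows (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE characters (show_id INTEGER, name TEXT);
    INSERT INTO shows VALUES (1, 'Batman TAS'), (2, 'Justice League');
    INSERT INTO characters VALUES (1, 'Batman'), (1, 'Robin'), (2, 'Batman');
""")

rows = conn.execute("""
    SELECT s.title
    FROM shows s
    JOIN characters c1 ON s.id = c1.show_id AND c1.name = 'Batman'
    JOIN characters c2 ON s.id = c2.show_id AND c2.name = 'Robin'
""").fetchall()

print([r[0] for r in rows])  # ['Batman TAS']
```

'Justice League' has Batman but no Robin in this toy data, so the second join filters it out.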
|
So you'll have to get a little fancy with SQL here; especially if you want it to be performant and handle different types of matchers.
```
select count(distinct(characters.name)) as matching_characters_count, shows.* from shows
inner join characters on characters.show_id = shows.id and (characters.name = 'Batman' or characters.name = 'Robin')
group by shows.id
having matching_characters_count = 2
```
To translate into ActiveRecord
```
Show.select('count(distinct(characters.name)) as matching_characters_count, shows.*').
joins('inner join characters on characters.show_id = shows.id and (characters.name = "Batman" or characters.name = "Robin")').
group('shows.id').
having('matching_characters_count = 2')
```
Then you'd probably do a pass with interpolation and then AREL it up; so you wouldn't be building string queries.
|
Can I query a record with multiple associated records that fit certain criteria?
|
[
"",
"mysql",
"sql",
"ruby-on-rails",
"ruby",
""
] |
```
SELECT ....
FROM table_name
WHERE a_column is null
OR b_column is null
OR c_column is null
......;
```
This is my SQL query. I get a SELECT result, but I don't know which of the columns are empty in the rows it returns. My table name and column names are dynamic, so I can build the statement with EXECUTE IMMEDIATE. I need to find the empty columns and then use this information. Can you help me?
|
Another way to find all the columns that are completely NULL is to query the **[DBA|ALL|USER]\_TAB\_COLUMNS** view and check `NUM_DISTINCT = 0`.
**NOTE :** The statistics must be gathered up to date to get an accurate result.
For example,
Let's say I have a table "**T**" which has two columns, `EMPNO` and `SAL`, such that the `SAL` column is completely **NULL**.
```
SQL> SELECT * FROM LALIT.t;
EMPNO SAL
---------- ----------
7369
7499
7521
7566
7654
7698
7782
7788
7839
7844
7876
7900
7902
7934
14 rows selected.
```
Let's **gather statistics** to be on the safe side:
```
SQL> BEGIN
2 DBMS_STATS.gather_table_stats(
3 'LALIT',
4 'T');
5 END;
6 /
PL/SQL procedure successfully completed.
```
**Desired output**
```
SQL> SELECT column_name,
2 num_distinct
3 FROM user_tab_columns
4 WHERE NUM_DISTINCT = 0
5 AND table_name ='T';
COLUMN_NAME NUM_DISTINCT
----------- ------------
SAL 0
```
So, you get the column which is completely NULL i.e. **num\_distinct** is `0`.
---
**UPDATE** Based on the OP's comment, the requirement is columns having at least one NULL value.
* You could query the same view for `NUM_NULLS <> 0`.
For example, in the standard **EMP** table in SCOTT schema, let's look for the columns having **at least one NULL value**.
```
SQL> SELECT column_name,
2 num_nulls
3 FROM user_tab_columns
4 WHERE NUM_NULLS <> 0
5 AND table_name ='EMP';
COLUMN_NAME NUM_NULLS
----------- ----------
COMM 11
MGR 1
```
Remember, the statistics must be gathered up to date.
* Another way in **PL/SQL** using **EXECUTE IMMEDIATE**:
Just reverse the `NULL` logic in the demonstration about [**Find all columns having at least a NULL value from all tables in the schema**](http://lalitkumarb.com/2014/09/25/find-all-columns-having-at-least-a-null-value-from-all-tables-in-the-schema/).
For example,
**`FIND_NULL_COL`** is a simple **user defined function**(UDF) which will return 1 for the column which has at least one **`NULL`** value :
```
SQL> CREATE OR REPLACE FUNCTION FIND_NULL_COL(
2 TABLE_NAME VARCHAR2,
3 COLUMN_NAME VARCHAR2)
4 RETURN NUMBER
5 IS
6 cnt NUMBER;
7 BEGIN
8 CNT :=1;
9 EXECUTE IMMEDIATE 'select count(*) from ' ||TABLE_NAME||' where '
10 ||COLUMN_NAME||' is null'
11 INTO cnt;
12 RETURN
13 CASE
14 WHEN CNT > 0 THEN
15 1
16 ELSE
17 0
18 END;
19 END;
20 /
Function created.
```
Call the function in **SQL** to get the NULL status of all the column of any table :
```
SQL> SELECT c.TABLE_NAME,
2 c.COLUMN_NAME,
3 FIND_NULL_COL(c.TABLE_NAME,c.COLUMN_NAME) null_status
4 FROM all_tab_columns c
5 WHERE C.OWNER ='SCOTT'
6 AND c.TABLE_NAME = 'EMP'
7 ORDER BY C.OWNER,
8 C.TABLE_NAME,
9 C.COLUMN_ID
10 /
TABLE_NAME COLUMN_NAME NULL_STATUS
---------- ----------- -----------
EMP EMPNO 0
EMP ENAME 0
EMP JOB 0
EMP MGR 1
EMP HIREDATE 0
EMP SAL 0
EMP COMM 1
EMP DEPTNO 0
8 rows selected.
```
So, **NULL\_STATUS** `1` is the column which has at least one **NULL** value.
|
Use `CASE` in the select clause and output the null column names as a column, like below.
```
select case when o1+o2+o3 = 7 then 'a_column,b_column,c_column'
when o1+o2+o3 = 6 then 'b_column,c_column'
when o1+o2+o3 = 5 then 'a_column,c_column'
when o1+o2+o3 = 4 then 'c_column'
when o1+o2+o3 = 3 then 'a_column,b_column'
when o1+o2+o3 = 2 then 'b_column'
when o1+o2+o3 = 1 then 'a_column'
else 'none'
end as nullColumns, l.*
from (
SELECT
case when a_column is null then 1 else 0 end as o1,
case when b_column is null then 2 else 0 end as o2,
case when c_column is null then 4 else 0 end as o3,
t.*
FROM table_name t WHERE a_column is null OR b_column is null OR c_column is null) l
```
|
How can I find which column is empty?
|
[
"",
"sql",
"oracle",
"select",
""
] |
> Caused by: java.lang.IllegalArgumentException: Cannot bind argument at index 1 because the index is out of range. The statement
> has 0 parameters.
Android doesn't recognise the `?` placeholder inside `'?%'`.
I try to achieve this:
```
String selectQuery = "SELECT * FROM config WHERE Directory Like '?%';";
SQLiteDatabase db = this.getReadableDatabase();
Cursor cursor = db.rawQuery(selectQuery, new String[] {parent});
```
I have searched for a solution but couldn't find anything; I think it's a pretty particular case. I am trying to do it like this because I want the query to work with `Strings` that contain the `'` character.
Maybe someone who had the same problem can answer me how can I solve this.
|
Use `||` to concat parameter and wildcard:
```
SELECT * FROM config WHERE Directory Like ? || '%';
```
|
```
String selectQuery = "SELECT * FROM config WHERE Directory LIKE ? || '%';";
SQLiteDatabase db = this.getReadableDatabase();
Cursor cursor = db.rawQuery(selectQuery, new String[] {parent});
```
using `||` solves this issue
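The parameter-plus-wildcard pattern from both answers is easy to verify with any SQLite binding. A minimal sketch using Python's built-in `sqlite3` module (the table contents are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config (Directory TEXT)")
conn.executemany("INSERT INTO config VALUES (?)",
                 [("/home/user",), ("/home/user/docs",), ("/var/log",)])

parent = "/home/user"  # may contain quotes; parameter binding handles escaping
# concatenate the bound parameter with the wildcard inside the SQL itself
rows = conn.execute(
    "SELECT Directory FROM config WHERE Directory LIKE ? || '%' ORDER BY Directory",
    (parent,),
).fetchall()
print([r[0] for r in rows])  # ['/home/user', '/home/user/docs']
```

Because `parent` is bound rather than spliced into the string, values containing `'` need no special handling.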
|
Cannot use Like statement in Android SQLite with argument and % character
|
[
"",
"android",
"sql",
"sqlite",
""
] |
For example, I've got a table
```
name | ability
kevin|say
kevin|scream
nike |say
```
I want to get only `kevin` in the result when looking for both `say` and `scream`. The number of parameters may change.
|
You can do this with `group by` and `having`:
```
SELECT name
FROM t
WHERE ability in ('say', 'scream')
GROUP BY name
HAVING COUNT(*) = 2;
```
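Because the number of abilities may vary, the `IN` list and the `HAVING` count can be built from the same parameter list. A sketch in Python's `sqlite3`, using `COUNT(DISTINCT ability)` to guard against duplicate rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, ability TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("kevin", "say"), ("kevin", "scream"), ("nike", "say")])

abilities = ["say", "scream"]  # can grow or shrink
placeholders = ",".join("?" * len(abilities))
sql = (f"SELECT name FROM t WHERE ability IN ({placeholders}) "
       f"GROUP BY name HAVING COUNT(DISTINCT ability) = ?")
names = [r[0] for r in conn.execute(sql, abilities + [len(abilities)])]
print(names)  # ['kevin']
```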
|
One way to do it is:
```
SELECT * FROM YourTableName WHERE name='kevin' AND ability IN ('say','scream');
```
Another way to do it:
```
SELECT * FROM YourTableName WHERE name='kevin' AND (ability='say' OR ability='scream');
```
Good luck!!
|
how to select row when column must satisfy multiple value?
|
[
"",
"mysql",
"sql",
""
] |
I am trying to use a field of a table as parameter of the contains method, but I don't know how to do it.
I am trying this:
```
SELECT * FROM tableA AS t1 WITH(UPDLOCK), TableB AS t2
WHERE CONTAINS(t1.FieldX, '"' + t2.FieldY + '"')
AND t2.ID IN(1,2,3,4,5);
```
However I get an error that says that is expected a ")" before the first "+".
How can I do it?
|
You could do it without *contains* and *full-text-search*, I mean using *like* operator:
```
select * from tableA as t1 with(UPDLOCK), TableB as t2
where t1.FieldX like '%"'+t2.FieldY+'"%'
and t2.ID IN(1,2,3,4,5);
```
|
The function you are looking for is CHARINDEX or PATINDEX...
> where CHARINDEX('"' + t2.FieldY + '"', t1.FieldX) <> 0
not sure if you need the '"'
If you want to use wildcards then use the PATINDEX function
Let me know if this works.
|
how to use a field of a table in the parameter of the contains function?
|
[
"",
"sql",
"sql-server",
"full-text-search",
""
] |
I am trying to create a view that will give me the total usage, total amount, and a nickname for each client. If nothing is found, it should leave the other columns null but still display the client name and nickname. I have simplified my query and changed the table name and columns for convenience. Any help will be appreciated. Thanks in advance.
here is my code:
```
select
Client_Name= (case
when account = 123 then 'John'
when account = 234 then 'Doe'
end),
Total_Usage = sum(Credit_Used),
Total_Amount = sum(debit),
Nickname = (case
when account = 123 then 'J'
when account = 234 then 'D'
end)
from myTable
where year= 2015
group by = (case
when account = 123 then 'John'
when account = 234 then 'Doe'
end),
(case
when account = 123 then 'J'
when account = 234 then 'D'
end)
```
I am getting an answer like this :
```
John 34 3 J
```
I want something like this :
```
John 34 3 J
Doe 0 0 D
```
|
Since you don't have the table for users, and you have a huge case/when expression... You really SHOULD create the users table! It would be a nightmare to maintain the 2 large when/case expressions correctly!
If you still prefer to opt out from creating users table, just like Sami suggested, you should create an inline view and OUTER JOIN it to your main table. Select statement similar to the one below should do it.
> NB: I've used syntax that is more familiar to me, but that should work
> in SQL Server as well.
```
select b.client_name,
sum(a.credit_used) total_usage,
sum(a.debit) total_amount,
b.nickname
from MyTable a
right outer join
(select 123 account_id, 'John' client_name, 'J' nickname union all
select 234, 'Doe','D') b
on
(a.account = b.account_id)
where a.year = 2015
group by b.client_name, b.nickname
```
|
Since the database doesn't have rows for that, you cannot do it this way.
If you have a separate table for users, you can use an OUTER join to get this. Surely you have that and don't have a huge CASE WHEN thing in the query itself?
If it's only for two users, you can also have an OUTER join with a simple select/union structure that gives the user IDs and names, but it isn't very good way to do it.
|
How to Have an empty row if query result couldn't find anything with one assigned value under one column?
|
[
"",
"sql",
"sql-server",
"t-sql",
"plsql",
"database-administration",
""
] |
I have an `employee` and a `supervisor` table. A supervisor is an employee. The `employee` table looks like this:
```
employee_id | employee_name
1 | Freeman
2 | Manfred
```
The `supervisor` table looks like this (fields are foreign keys to `employee` table):
```
supervisor_id | employee_id
1 | 2
2 | 1
```
I implemented a composite primary key (supervisor\_id,employee\_id) and that didn't prevent the update.
How can I prevent the above from happening? An employee cannot supervise his/her supervisor.
|
If there is no user interface that needs to be notified in the event the validation fails, siride's solution using triggers is a good one; Gordon Linoff's solution would also work. A simpler solution, which can also notify a user interface that the data failed validation, would be to update the table using a stored procedure....
```
CREATE PROCEDURE AddSupervisor
(
@supervisor_id int,
    @employee_id int
)
AS
INSERT INTO supervisor
SELECT @supervisor_ID, @employee_id
WHERE
NOT EXISTS
(
SELECT 1 FROM supervisor
WHERE
supervisor_id = @employee_id AND
employee_id = @supervisor_id
) AND NOT EXISTS -- EDIT - Add logic to stop inserts for employees who already have a supervisor
(
SELECT 1 FROM supervisor
WHERE
employee_id = @employee_id
)
SELECT @@ROWCOUNT
```
Selecting @@ROWCOUNT at the end will return 0 if no rows were inserted or 1 if a row was inserted. You could combine this answer with a trigger or constraints to ensure the validation isn't circumvented by updating the table through something other than the stored proc.
EDIT: If an employee can only have one supervisor, rather than having a separate supervisor table, you should just have a supervisor\_id column in the employee table. Having a separate Supervisor table with a composite key would cater many to many relationships i.e. Supervisors supervising multiple employees and employees having multiple supervisors.
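A guarded insert like the stored procedure above can be sketched in any engine. Here is an illustrative version in Python's `sqlite3`, where `cursor.rowcount` plays the role of `@@ROWCOUNT` (table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE supervisor (supervisor_id INT, employee_id INT)")
conn.execute("INSERT INTO supervisor VALUES (1, 2)")  # 1 supervises 2

def add_supervisor(conn, supervisor_id, employee_id):
    """Insert only if it creates no cycle and the employee is unsupervised."""
    cur = conn.execute(
        """INSERT INTO supervisor (supervisor_id, employee_id)
           SELECT ?, ?
           WHERE NOT EXISTS (SELECT 1 FROM supervisor
                             WHERE supervisor_id = ? AND employee_id = ?)
             AND NOT EXISTS (SELECT 1 FROM supervisor
                             WHERE employee_id = ?)""",
        (supervisor_id, employee_id, employee_id, supervisor_id, employee_id),
    )
    return cur.rowcount  # sqlite's analogue of @@ROWCOUNT

blocked = add_supervisor(conn, 2, 1)  # would create a cycle with (1, 2)
allowed = add_supervisor(conn, 1, 3)  # no conflict
print(blocked, allowed)  # 0 1
```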
|
This isn't easily possible purely with primary keys or check constraints.
* Primary keys are unique constraints that specify that no row may have the same primary key as any other row. But it doesn't say that certain combinations are invalid.
* Check constraints can use more complex logic, but only about values of columns in the row about to be inserted. Your problem requires looking at other rows. (But see Gordon Linoff's answer for how you could do this with constraints -- even though it's a bit opaque).
The solution is to use a trigger. The trigger can check other rows in the table to see if the employee is already being supervised and cancel the transaction.
Start here: <https://msdn.microsoft.com/en-us/library/ms189799.aspx>
|
is primary key (a,b) different from primary key (b,a)?
|
[
"",
"sql",
"sql-server",
"primary-key",
""
] |
I have the following table called `Tracking`:
```
+----------+------------+-----------+---------+
| DeviceID | DeviceName | PageCount | JobType |
+----------+------------+-----------+---------+
| 1 | Device1 | 22 | print |
| 2 | Device2 | 43 | copy |
| 1 | Device1 | 11 | copy |
| 2 | Device2 | 15 | print |
| 3 | Device3 | 65 | copy |
| 4 | Device4 | 1 | copy |
| 3 | Device3 | 17 | copy |
| 2 | Device2 | 100 | copy |
| 1 | Device1 | 632 | print |
| 2 | Device2 | 2 | print |
| 3 | Device3 | 57 | print |
+----------+------------+-----------+---------+
```
I'm trying create an output query with total copy and print for each device, like this example:
```
+------------+------+-------+
| DeviceName | Copy | Print |
+------------+------+-------+
| Device1 | 11 | 654 |
| Device2 | 143 | 17 |
| Device3 | 82 | 57 |
| Device4 | 1 | 0 |
+------------+------+-------+
```
Can you give me a hint?
Thank you.
|
The easiest way I can think for this is:
```
SELECT DeviceName,
SUM(CASE WHEN JobType = 'copy' THEN PageCount ELSE 0 END) AS Copy,
SUM(CASE WHEN JobType = 'print' THEN PageCount ELSE 0 END) AS Print
FROM Tracking
GROUP BY DeviceName
```
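This conditional-aggregation pattern is portable across engines. A runnable sketch with a subset of the sample data, using Python's `sqlite3` (the aliases `Copy_`/`Print_` just avoid any clash with reserved words):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tracking (DeviceName TEXT, PageCount INT, JobType TEXT)")
conn.executemany("INSERT INTO Tracking VALUES (?, ?, ?)", [
    ("Device1", 22, "print"), ("Device1", 11, "copy"),
    ("Device1", 632, "print"), ("Device4", 1, "copy"),
])

rows = conn.execute("""
    SELECT DeviceName,
           SUM(CASE WHEN JobType = 'copy'  THEN PageCount ELSE 0 END) AS Copy_,
           SUM(CASE WHEN JobType = 'print' THEN PageCount ELSE 0 END) AS Print_
    FROM Tracking
    GROUP BY DeviceName
    ORDER BY DeviceName
""").fetchall()
print(rows)  # [('Device1', 11, 654), ('Device4', 1, 0)]
```

The `ELSE 0` branches are what make a device with no rows of one job type show `0` instead of `NULL`.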
|
what you need to do is:
1. get sum of pageCount for copy
2. get sum of pageCount for print
3. join the two results
getting the sums is easy, just need to:
```
1) select DeviceName, sum(pageCount) as copy from Tracking where jobType = 'copy' group by deviceName
2) select DeviceName, sum(pageCount) as print from Tracking where jobType = 'print' group by deviceName
```
and joining them:
```
select A.deviceName, copy, print from
(select DeviceName, sum(pageCount) as copy from Tracking
where jobType = 'copy' group by deviceName) as A
inner join
(select DeviceName, sum(pageCount) as print from Tracking
where jobType = 'print' group by deviceName) as B
ON A.deviceName = B.deviceName
```
|
SQL - Multiple queries on same table
|
[
"",
"sql",
""
] |
I have a specific request to run on my database (PostgreSQL 9.4.5), and I don't see any elegant solution in pure SQL to solve it (I know I can do it using Python or other tools, but I have several billion lines of data, and the calculation time would increase greatly).
I have two tables : *trades* and *events*. These tables both represent the trades occurring in an orderbook during a day (this is why I have several billions lines, my data is over several years) but there are many more *events* than *trades*.
Both tables have columns *time*, *price* and *volume*; however, each one also has other columns (say *foo* and *bar* respectively) with specific information.
I want to make a correspondence between the two tables on the columns *time*, *volume* and *price*, as **I know this correspondence exists as an injection from trades to events** (if there are *n* rows in *trades* with the same time *t*, the same price *p* and the same volume *v*, I know there are also *n* rows in *events* with the time *t*, the price *p* and the volume *v*).
Trades :
```
id | time | price | volume | foo
-----+-----------+---------+--------+-------
201 | 32400.524 | 53 | 2085 | xxx
202 | 32400.530 | 53 | 1162 | xxx
203 | 32400.531 | 52.99 | 50 | xxx
204 | 32400.532 | 52.91 | 3119 | xxx
205 | 32400.837 | 52.91 | 3119 | xxx <--
206 | 32400.837 | 52.91 | 3119 | xxx <--
207 | 32400.837 | 52.91 | 3119 | xxx <--
208 | 32400.839 | 52.92 | 3220 | xxx <--
209 | 32400.839 | 52.92 | 3220 | xxx <--
210 | 32400.839 | 52.92 | 3220 | xxx <--
```
Events :
```
id | time | price | volume | bar
-----+-----------+---------+--------+------
328 | 32400.835 | 52.91 | 3119 | yyy
329 | 32400.837 | 52.91 | 3119 | yyy <--
330 | 32400.837 | 52.91 | 3119 | yyy <--
331 | 32400.837 | 52.91 | 3119 | yyy <--
332 | 32400.838 | 52.91 | 3119 | yyy
333 | 32400.838 | 52.91 | 3119 | yyy
334 | 32400.839 | 52.92 | 3220 | yyy <--
335 | 32400.839 | 52.92 | 3220 | yyy <--
336 | 32400.839 | 52.92 | 3220 | yyy <--
337 | 32400.840 | 52.91 | 2501 | yyy
```
What I want is :
```
time | price | volume | bar | foo
-----------+---------+--------+------+-------
32400.837 | 52.91 | 3119 | xxx | yyy
32400.837 | 52.91 | 3119 | xxx | yyy
32400.837 | 52.91 | 3119 | xxx | yyy
32400.839 | 52.92 | 3220 | xxx | yyy
32400.839 | 52.92 | 3220 | xxx | yyy
32400.839 | 52.92 | 3220 | xxx | yyy
```
I cannot do a classic INNER JOIN, or else I will get every possible pairing between the two tables (in this case I would have 6x6 = 36 rows).
The tricky part is to match each row to exactly one row, even though several rows could fit.
Thank you for your help.
EDIT :
As I said, if I use a classic INNER JOIN, for example
```
SELECT * FROM events e
INNER JOIN trades t
ON t.time = e.time AND t.price = e.price AND t.volume = e.volume
```
I will have something like :
```
trade_id | event_id | time | price | volume | bar | foo
---------+----------+-----------+---------+--------+------+-------
205 | 329 | 32400.837 | 52.91 | 3119 | xxx | yyy
205 | 330 | 32400.837 | 52.91 | 3119 | xxx | yyy
205 | 331 | 32400.837 | 52.91 | 3119 | xxx | yyy
206 | 329 | 32400.837 | 52.91 | 3119 | xxx | yyy
206 | 330 | 32400.837 | 52.91 | 3119 | xxx | yyy
206 | 331 | 32400.837 | 52.91 | 3119 | xxx | yyy
207 | 329 | 32400.839 | 52.91 | 3119 | xxx | yyy
207 | 330 | 32400.839 | 52.91 | 3119 | xxx | yyy
207 | 331 | 32400.839 | 52.91 | 3119 | xxx | yyy
208 | 334 | 32400.837 | 52.92 | 3220 | xxx | yyy
208 | 335 | 32400.837 | 52.92 | 3220 | xxx | yyy
208 | 336 | 32400.837 | 52.92 | 3220 | xxx | yyy
209 | 334 | 32400.837 | 52.92 | 3220 | xxx | yyy
209 | 335 | 32400.837 | 52.92 | 3220 | xxx | yyy
209 | 336 | 32400.837 | 52.92 | 3220 | xxx | yyy
210 | 334 | 32400.839 | 52.92 | 3220 | xxx | yyy
210 | 335 | 32400.839 | 52.92 | 3220 | xxx | yyy
210 | 336 | 32400.839 | 52.92 | 3220 | xxx | yyy
```
But what I want is :
```
trade_id | event_id | time | price | volume | bar | foo
---------+----------+-----------+---------+--------+------+-------
205 | 329 | 32400.837 | 52.91 | 3119 | xxx | yyy
206 | 330 | 32400.837 | 52.91 | 3119 | xxx | yyy
207 | 331 | 32400.839 | 52.91 | 3119 | xxx | yyy
208 | 334 | 32400.837 | 52.92 | 3220 | xxx | yyy
209 | 335 | 32400.837 | 52.92 | 3220 | xxx | yyy
210 | 336 | 32400.839 | 52.92 | 3220 | xxx | yyy
```
|
Here is my example with row\_number.
Also, SQL Fiddle: [SO 33608351](http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/4351)
```
with
trades AS
(
select 201 as id, 32400.524 as time, 53 as price, 2085 as volume, 'xxx' as foo union all
select 202, 32400.530, 53, 1162, 'xxx' union all
select 203, 32400.531, 52.99, 50, 'xxx' union all
select 204, 32400.532, 52.91, 3119, 'xxx' union all
select 205, 32400.837, 52.91, 3119, 'xxx' union all
select 206, 32400.837, 52.91, 3119, 'xxx' union all
select 207, 32400.837, 52.91, 3119, 'xxx' union all
select 208, 32400.839, 52.92, 3220, 'xxx' union all
select 209, 32400.839, 52.92, 3220, 'xxx' union all
select 210, 32400.839, 52.92, 3220, 'xxx'
),
events as
(
select 328 as id, 32400.835 as time , 52.91 as price , 3119 as volume , 'yyy' as bar union all
select 329 , 32400.837 , 52.91 , 3119 , 'yyy' union all
select 330 , 32400.837 , 52.91 , 3119 , 'yyy' union all
select 331 , 32400.837 , 52.91 , 3119 , 'yyy' union all
select 332 , 32400.838 , 52.91 , 3119 , 'yyy' union all
select 333 , 32400.838 , 52.91 , 3119 , 'yyy' union all
select 334 , 32400.839 , 52.92 , 3220 , 'yyy' union all
select 335 , 32400.839 , 52.92 , 3220 , 'yyy' union all
select 336 , 32400.839 , 52.92 , 3220 , 'yyy' union all
select 337 , 32400.840 , 52.91 , 2501 , 'yyy'
),
tradesWithRowNumber AS
(
select *
,ROW_NUMBER() over (PARTITION by time, price, volume order by time, price, volume) as RowNum
from trades
),
eventsWithRowNumber AS
(
select *
,ROW_NUMBER() over (PARTITION by time, price, volume order by time, price, volume) as RowNum
from events
)
select t.time,
t.price,
t.volume,
t.foo,
e.bar
FROM tradesWithRowNumber t
inner JOIN
eventsWithRowNumber e on e.time = t.time
AND e.price = t.price
AND e.volume = t.volume
and e.RowNum = t.RowNum
```
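The ROW_NUMBER pairing can be checked on a tiny in-memory dataset. A sketch in Python's `sqlite3` (window functions need SQLite 3.25 or newer); ordering by `id` inside each partition makes the pairing deterministic, which the original `ORDER BY time, price, volume` does not guarantee:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trades (id INT, time REAL, price REAL, volume INT, foo TEXT);
CREATE TABLE events (id INT, time REAL, price REAL, volume INT, bar TEXT);
INSERT INTO trades VALUES (205, 32400.837, 52.91, 3119, 'xxx'),
                          (206, 32400.837, 52.91, 3119, 'xxx');
INSERT INTO events VALUES (329, 32400.837, 52.91, 3119, 'yyy'),
                          (330, 32400.837, 52.91, 3119, 'yyy'),
                          (331, 32400.837, 52.91, 3119, 'yyy');
""")

pairs = conn.execute("""
WITH t AS (SELECT *, ROW_NUMBER() OVER (PARTITION BY time, price, volume
                                        ORDER BY id) rn FROM trades),
     e AS (SELECT *, ROW_NUMBER() OVER (PARTITION BY time, price, volume
                                        ORDER BY id) rn FROM events)
SELECT t.id, e.id FROM t
JOIN e ON e.time = t.time AND e.price = t.price
      AND e.volume = t.volume AND e.rn = t.rn
ORDER BY t.id
""").fetchall()
print(pairs)  # [(205, 329), (206, 330)] -- each trade matched exactly once
```

Note the inner join silently drops the unmatched extra event (331 here), which is consistent with the stated injection from trades to events.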
|
Try this and let me know if it works. We could also use a `row_number() over(partition by ...)` clause, but I am not sure whether it will work on PostgreSQL. Anyway, try this.
```
SELECT
min(t.id) as trade_id,min(e.id) as event_id,
min(t.time) as time,min(t.price) as price,
min(t.volume) as volume, min(e.bar) as bar,
min(t.foo) as foo
FROM events e
INNER JOIN trades t
ON t.time = e.time AND t.price = e.price AND t.volume = e.volume
group by t.id
```
|
SQL Select Inner join one by one
|
[
"",
"sql",
"inner-join",
""
] |
I'm trying to calculate the age of a user based on the current year and his year of birth, but something in my syntax appears to be incorrect and I can't spot it.
```
CREATE TABLE Normal_Users(
first_name varchar(20),
last_name varchar(20),
date_of_birth date,
age int = year(CURRENT_TIMESTAMP) - year(date_of_birth)
)
```
Why is this incorrect?
|
Use `AS`:
```
CREATE TABLE Normal_Users(
first_name varchar(20),
last_name varchar(20),
date_of_birth date,
age int AS (year(CURRENT_TIMESTAMP) - year(date_of_birth))
);
```
**[`Generated columns in MySQL`](http://mysqlserverteam.com/generated-columns-in-mysql-5-7-5/)**
> < type> [ GENERATED ALWAYS ] **AS ( < expression> )** [
> VIRTUAL|STORED ] [ UNIQUE [KEY] ] [ [PRIMARY] KEY ] [ NOT NULL ]
> [ COMMENT ]
If you are using `SQL Server` there is no need for datatype in **[`computed columns`](https://technet.microsoft.com/en-us/library/ms191250%28v=sql.105%29.aspx)**:
```
CREATE TABLE Normal_Users(
first_name varchar(20),
last_name varchar(20),
date_of_birth date,
age AS (year(CURRENT_TIMESTAMP) - year(date_of_birth))
);
```
`LiveDemo`
**EDIT:**
For calculating age better use:
```
SELECT TIMESTAMPDIFF( YEAR, date_of_birth, CURDATE()) AS age;
```
Your code for `2014-12-31` and `2015-01-01` will return 1 year, but the real age difference is 0.
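The same birthday-aware correction works in any language. A small Python sketch of the calculation described above:

```python
from datetime import date

def age(dob: date, today: date) -> int:
    # subtract one year when the birthday has not yet occurred this year
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

print(age(date(2014, 12, 31), date(2015, 1, 1)))  # 0, not 1
print(age(date(2000, 5, 17), date(2020, 5, 17)))  # 20 on the birthday itself
```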
|
How do you do it in your head?
You subtract the year of birth from the current year, then you check whether the birthday has already happened this year.
If yes, fine; if not, the person is one year younger than the raw difference of the years.
The difference of the years is easy. To compare month and day and know whether we are before or after the birthday, compute `month*100 + day`, which gives a number in MMDD form.
That number is easy to compare: bigger, equal or smaller tells you whether you are after the birthday, on the birthday, or before it.
Here is the principle for Sybase :
```
declare @test char(10)
set @test='15/02/2010'
select "age" =
case when (month(convert(datetime, @test, 103))*100)+day(convert(datetime, @test, 103)) - ((month(getdate())*100)+day(getdate())) <= 0 then
DATEDIFF(YEAR,convert(datetime, @test, 103),getdate())
else
DATEDIFF(YEAR,convert(datetime, @test, 103),getdate())-1
end
```
|
Calculating age derived from current date and DOB
|
[
"",
"sql",
""
] |
I'm trying to get distinct values from a table. When I ran `select distinct count(id) from table` I got over a million counts. However, when I ran `select count(distinct id) from table` I got only around 300k counts. What is the difference between the two queries?
Thanks
|
When you do `select distinct count(id)` then you are basically doing:
```
select distinct cnt
from (select count(id) as cnt from t) t;
```
Because the inner query only returns one row, the `distinct` is not doing anything. The query counts the *number of rows* in the table (well, more accurately, the number of rows where `id` is not `null`).
On the other hand, when you do:
```
select count(distinct id)
from t;
```
Then the query counts the number of different values that `id` takes on in the table. This would appear to be what you want.
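Both behaviours are easy to demonstrate on a toy table. A sketch in Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (2,), (None,)])

# one row: the count of non-NULL ids (DISTINCT over a single row does nothing)
total_non_null = conn.execute("SELECT DISTINCT COUNT(id) FROM t").fetchone()[0]
# the number of different non-NULL id values
distinct_ids = conn.execute("SELECT COUNT(DISTINCT id) FROM t").fetchone()[0]
print(total_non_null, distinct_ids)  # 3 2
```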
|
If `id` is the PK, the count from `distinct count(id)` will match the number of rows returned by `count(distinct id)`.
If `id` is not the PK but has a unique constraint (on `id` alone, not in combination with any other column), the two counts will again be equal, as in the PK case.
If `id` is just another column, `select distinct count(id) from table` will return `one row` with the number of records where `id` is NOT NULL, whereas `select count(distinct id) from table` will return `one value` counting all distinct non-NULL `id`s in the table.
In no case will the count or the no of rows returned exceed the total no of rows in your table.
|
select distinct count(id) vs select count(distinct id)
|
[
"",
"mysql",
"sql",
""
] |
I have this error. I am new to SQL and can't figure out what is wrong with my syntax. I changed my INTERSECT statement to an inner join after realizing MySQL does not accept that syntax. However, I continue to get an error.
ERROR:
```
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use n
ear 'a
INNER JOIN
(Select h.propertyId as id
From House h, has_weather hw, weather_ye' at line 8
```
The query is supposed to find the userIDs and emails of users whose property has a rating equal to 5 and an AvgTemp over 55.
```
Select has_property.userId
From has_property
Where has_property.propertyId IN
(
(Select hRR.propertyId as id
From hasRatings_Rate hRR
Where hRR.ratingId = 5
) a
INNER JOIN
(Select h.propertyId as id
From House h, has_weather hw, weather_year wy
Where hw.weatherId = wy.weatherId AND hw.homeAddId = h.homeAddId AND wy.AvgTemp > 55
)b
ON (a.id = b.id)
);
```
|
Try this:
```
Select hp.userId
From has_property hp
JOIN hasRatings_Rate hRR ON hp.propertyId = hRR.propertyId
JOIN House h ON h.propertyId = hRR.propertyId
JOIN has_weather hw ON hw.homeAddId = h.homeAddId
JOIN weather_year wy ON hw.weatherId = wy.weatherId
WHERE hRR.ratingId = 5
AND wy.avgTemp > 55
```
No need for any subqueries, just a succession of joins.
|
Try this:
```
Select has_property.userId
From has_property
INNER JOIN
(
SELECT id
FROM
(
Select hRR.propertyId as id
From hasRatings_Rate hRR
Where hRR.ratingId = 5
) a
INNER JOIN
(
Select h.propertyId as id
From House h, has_weather hw, weather_year wy
Where hw.weatherId = wy.weatherId AND hw.homeAddId = h.homeAddId AND wy.AvgTemp > 55
) b ON (a.id = b.id)
) c ON(has_property.propertyId = c.id)
```
|
SQL inner join syntax error
|
[
"",
"mysql",
"sql",
""
] |
I'm porting some Postgres SQL to MySQL and am trying to set the starting values of three columns to specific values. The table is as follows:
```
CREATE TABLE ITEM (
ORDID NUMERIC(4) NOT NULL,
ITEMID NUMERIC(4) NOT NULL,
PRODID NUMERIC(6),
ACTUALPRICE NUMERIC(8,2),
QTY NUMERIC(8),
ITEMTOT NUMERIC(8,2),
CONSTRAINT ITEM_FOREIGN_KEY FOREIGN KEY (ORDID) REFERENCES ORD (ORDID),
CONSTRAINT ITEM_PRIMARY_KEY PRIMARY KEY (ORDID,ITEMID));
```
The code I'm trying to port is as follows:
```
CREATE SEQUENCE ORDID
INCREMENT BY 1
START WITH 622
;
CREATE SEQUENCE PRODID
INCREMENT BY 1
START WITH 200381
;
CREATE SEQUENCE CUSTID
INCREMENT BY 1
START WITH 109
;
```
However, when trying to run this I'm getting the error:
```
SQL query:
CREATE SEQUENCE ORDIDINCREMENT BY 1 START WITH 622 ;
MySQL said: Documentation
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SEQUENCE ORDID
INCREMENT BY 1
START WITH 622' at line 1
```
I know that there is no direct equivalent to a SEQUENCE in MySQL but I can't figure out a reasonable way to achieve the same thing without it. Any ideas?
|
MySQL uses [AUTO\_INCREMENT](https://dev.mysql.com/doc/refman/5.6/en/example-auto-increment.html) for that purpose. Rather than making new sequence types, you apply it to an existing integer column.
[Unfortunately you can only have one per table.](https://dev.mysql.com/doc/refman/5.6/en/create-table.html)
> There can be only one AUTO\_INCREMENT column per table, it must be indexed, and it cannot have a DEFAULT value.
And they must be integers, numeric doesn't work. This will probably improve your schema as 9999 orders and items seems very small.
> AUTO\_INCREMENT applies only to integer and floating-point types.
And if that wasn't enough, you can't have an AUTO\_INCREMENT on a multi-key primary key. Only the vastly inferior MyISAM table format allows that.
So you cannot easily translate your PostgreSQL tables to MySQL verbatim.
You sure you want to convert to MySQL?
---
In your case, `item.ordid` is a reference, so it will be incremented in its own table. `item.prodid` is probably also a reference that somebody forgot to declare as such. This leaves just `item.itemid` to be declared AUTO\_INCREMENT, but it's part of the primary key. It probably doesn't need to be; it can just be unique.
In fact, the `ITEM` table seems more like it's tracking orders of products, not items... but then there's also a product ID? I don't know what an "item" is.
You wind up with something like this:
```
CREATE TABLE ITEM (
ITEMID INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
PRODID INTEGER REFERENCES PROD(PRODID),
ORDID INTEGER NOT NULL REFERENCES ORD (ORDID),
ACTUALPRICE NUMERIC(8,2),
QTY NUMERIC(8),
ITEMTOT NUMERIC(8,2),
UNIQUE(ORDID, ITEMID)
)
CREATE TABLE ORD (
ORDID INTEGER PRIMARY KEY AUTO_INCREMENT,
...
) AUTO_INCREMENT = 622;
CREATE TABLE PROD (
PRODID INTEGER PRIMARY KEY AUTO_INCREMENT,
...
) AUTO_INCREMENT = 200381;
```
You can also set the AUTO\_INCREMENT starting point after the fact with ALTER TABLE. Because it's a table attribute, not a column attribute, it happens on the table itself.
```
ALTER TABLE CUST AUTO_INCREMENT=109;
```
It's largely unnecessary to set the `AUTO_INCREMENT` starting point if you're importing an existing data set. `AUTO_INCREMENT` will always use `MAX(column)` and it cannot be set lower than this. It doesn't matter what you start it at if the table is already populated.
|
You can use table with AUTO\_INCREMENT key to emulate sequences:
```
CREATE TABLE ORDID (id INT PRIMARY KEY AUTO_INCREMENT) AUTO_INCREMENT = 622;
CREATE TABLE PRODID (id INT PRIMARY KEY AUTO_INCREMENT) AUTO_INCREMENT = 200381;
CREATE TABLE CUSTID (id INT PRIMARY KEY AUTO_INCREMENT) AUTO_INCREMENT = 109;
```
Each of the table represents a 'sequence'. To use one in your `CREATE TABLE`:
```
CREATE TABLE ITEM (
ORDID INT NOT NULL,
ITEMID NUMERIC(4) NOT NULL,
PRODID NUMERIC(6),
ACTUALPRICE NUMERIC(8,2),
QTY NUMERIC(8),
ITEMTOT NUMERIC(8,2),
CONSTRAINT ITEM_FOREIGN_KEY FOREIGN KEY (ORDID) REFERENCES ORDID (ID),
CONSTRAINT ITEM_PRIMARY_KEY PRIMARY KEY (ORDID,ITEMID));
```
You can then use `INSERT` to get a new value from your 'sequence':
```
INSERT INTO ordid VALUES (null);
SELECT LAST_INSERT_ID();
```
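The same "sequence table" trick works in SQLite, whose `lastrowid` is the analogue of `LAST_INSERT_ID()`. An illustrative sketch (the seed-then-delete mimics `START WITH 622`; with the `AUTOINCREMENT` keyword, SQLite never reuses a deleted id):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ordid (id INTEGER PRIMARY KEY AUTOINCREMENT)")
# seed so the next generated value is 622, mimicking START WITH 622
conn.execute("INSERT INTO ordid (id) VALUES (621)")
conn.execute("DELETE FROM ordid WHERE id = 621")

def next_ordid(conn):
    cur = conn.execute("INSERT INTO ordid DEFAULT VALUES")
    return cur.lastrowid  # analogue of MySQL's LAST_INSERT_ID()

first, second = next_ordid(conn), next_ordid(conn)
print(first, second)  # 622 623
```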
|
Postgres Sequence Port to MySQL
|
[
"",
"mysql",
"sql",
"postgresql",
""
] |
I have a table in my database which contains a column called DateTimes. This column holds DateTime data, and I would like to select the distinct hours for which the table has data. For example, if the column has these entries:
```
2015-05-03 01:06:45
2015-05-03 04:51:09
2015-05-03 05:08:11
2015-05-03 09:33:35
2015-05-03 13:46:38
```
I would like to return
```
2015-05-03 01:00:00
2015-05-03 04:00:00
2015-05-03 05:00:00
2015-05-03 09:00:00
2015-05-03 13:00:00
```
I have tried the following which is returning an error:
```
SELECT DateTimes
FROM MyTable
GROUP BY DATEPART(hh, DateTimes)
```
I feel like this should be easy to do but I can't seem to get it right (I'm very new to SQL). I'm using MS SQL Management Studio 2012 to access my database.
|
Try this.
Test Data:
```
DECLARE @MyTable AS TABLE(DateTimes DATETIME)
INSERT INTO @MyTable(DateTimes)
VALUES('2015-05-03 01:06:45')
,('2015-05-03 04:51:09')
,('2015-05-03 05:08:11')
,('2015-05-03 09:33:35')
,('2015-05-03 13:46:38')
```
Query:
```
SELECT Hourly
FROM (SELECT DATEADD(HOUR, DATEDIFF(HOUR, 0, DateTimes), 0) AS Hourly
FROM @MyTable) AS DatesAsHours
GROUP BY Hourly
```
Results:
```
Hourly
2015-05-03 01:00:00.000
2015-05-03 04:00:00.000
2015-05-03 05:00:00.000
2015-05-03 09:00:00.000
2015-05-03 13:00:00.000
```
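The `DATEADD`/`DATEDIFF` trick above is SQL Server-specific; the same truncate-to-the-hour grouping can be expressed in other engines with their date-formatting functions. An illustrative SQLite version via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (DateTimes TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [
    ("2015-05-03 01:06:45",), ("2015-05-03 04:51:09",), ("2015-05-03 04:59:59",),
])

# truncate each timestamp to the hour, then group on the truncated value
hours = [r[0] for r in conn.execute("""
    SELECT strftime('%Y-%m-%d %H:00:00', DateTimes) AS Hourly
    FROM MyTable
    GROUP BY Hourly
    ORDER BY Hourly
""")]
print(hours)  # ['2015-05-03 01:00:00', '2015-05-03 04:00:00']
```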
|
Try the following:
```
SELECT DateTimes
FROM MyTable
GROUP BY DATEPART(hh, DateTimes), DateTimes;
```
You were simply missing the DateTimes field directly in your group by.
Also, from the rest of your question, you may want to do the following:
```
SELECT FORMAT(DateTimes, 'dd-MM-yyyy hh:00:00', 'en-US')
FROM MyTable
GROUP BY DATEPART(hh, DateTimes), FORMAT(DateTimes, 'dd-MM-yyyy hh:00:00', 'en-US');
```
|
Grouping DateTime by hour in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I want to count the number of different sheep, and I want it in one table.
Like this:
```
Ewes | Rams | Lambs
8 | 5 | 12
```
The query I tried is this, but it doesn't work:
```
SELECT COUNT(e.EweID) AS 'Ewe', COUNT(r.RamID) AS 'Ram', COUNT(l.LambID) AS 'Lamb'
FROM Sheep s
INNER JOIN Ewe e ON s.SheepID = e.EweID
INNER JOIN Ram r ON s.SheepID = r.RamID
INNER JOIN Lamb l ON s.SheepID = l.LambID
WHERE s.FarmerID = '123'
```
I don't get what I'm doing wrong, this is my database ERD;
[](https://i.stack.imgur.com/vccmQ.png)
|
I don't think you need a `FROM` here at all:
```
select
(select count(*) from Ram where Famerid = 123) as RamCount,
(select count(*) from Ewe where Famerid = 123) as Count,
(select count(*) from Lamb where Famerid = 123) as LambCount
```
(There is no relationship between the rows you are counting, so don't try to create one. Instead count each separately; wrapping it all in an outer select keeps everything in a single result row.)
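A runnable sketch of the three-scalar-subquery shape, using Python's `sqlite3` with made-up single-column tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Ram  (FarmerID INT);
    CREATE TABLE Ewe  (FarmerID INT);
    CREATE TABLE Lamb (FarmerID INT);
    INSERT INTO Ram  VALUES (123), (123);
    INSERT INTO Ewe  VALUES (123);
    INSERT INTO Lamb VALUES (123), (123), (456);
""")

# three independent scalar subqueries, one result row
row = conn.execute("""
    SELECT (SELECT COUNT(*) FROM Ram  WHERE FarmerID = 123) AS Rams,
           (SELECT COUNT(*) FROM Ewe  WHERE FarmerID = 123) AS Ewes,
           (SELECT COUNT(*) FROM Lamb WHERE FarmerID = 123) AS Lambs
""").fetchone()
print(row)  # (2, 1, 2)
```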
|
I think that the problem here is that you don't need an INNER JOIN but an OUTER JOIN ...
```
SELECT COUNT(CASE WHEN e.EweID IS NOT NULL THEN e.EweID END) AS 'Ewe', COUNT(r.RamID) AS 'Ram', COUNT(l.LambID) AS 'Lamb'
FROM Sheep s
LEFT OUTER JOIN Ewe e ON s.SheepID = e.EweID
LEFT OUTER JOIN Ram r ON s.SheepID = r.RamID
LEFT OUTER JOIN Lamb l ON s.SheepID = l.LambID
WHERE s.FarmerID = '123'
```
Also take a look at the CASE expression I've added inside the first COUNT (on Ewe), as a way to handle NULLs in the count.
> The Left Outer Join logical operator returns each row that satisfies
> the join of the first (top) input with the second (bottom) input. It
> also returns any rows from the first input that had no matching rows
> in the second input. The nonmatching rows in the second input are
> returned as null values. If no join predicate exists in the Argument
> column, each row is a matching row.
|
Using the COUNT with multiple tables
|
[
"",
"sql",
"sql-server",
""
] |
How do I split a row with
```
Start Date : 02-OCT-2015
End Date : 31-DEC-2015
```
into rows below in Oracle?
```
02-OCT-2015 31-OCT-2015
01-NOV-2015 30-NOV-2015
01-DEC-2015 31-DEC-2015
```
|
Your desired **DATE arithmetic** could be done using following:
* **ADD\_MONTHS**
* **LAST\_DAY**
* **TRUNC**
* **CONNECT BY** i.e. typical row generator
* **CASE** expression
Let's say you have two dates as **start** and **end** dates, the following query would **split the dates into multiple rows** based on the **MONTHS**.
```
SQL> WITH sample_data AS
2 (SELECT DATE '2015-10-02' Start_Date, DATE '2015-12-25' End_Date FROM DUAL)
3 -- end of sample_date to mock an actual table
4 SELECT CASE
5 WHEN start_date >= TRUNC(add_months(start_date,COLUMN_VALUE - 1),'MM')
6 THEN
7 TO_CHAR(start_date, 'YYYY-MM-DD')
8 ELSE
9 TO_CHAR(TRUNC(add_months(start_date,COLUMN_VALUE - 1),'MM'),'YYYY-MM-DD')
10 END new_start_date,
11 CASE
12 WHEN end_date <= last_day(TRUNC(add_months(start_date,COLUMN_VALUE - 1),'MM'))
13 THEN
14 TO_CHAR(end_date, 'YYYY-MM-DD')
15 ELSE
16 TO_CHAR(last_day(TRUNC(add_months(start_date,COLUMN_VALUE - 1),'MM')),
17 'YYYY-MM-DD')
18 END new_end_date
19 FROM sample_data,
20 TABLE(
21 CAST(
22 MULTISET
23 (SELECT LEVEL
24 FROM dual
25 CONNECT BY add_months(TRUNC(start_date,'MM'),LEVEL - 1) <= end_date
26 ) AS sys.OdciNumberList
27 )
28 )
29 ORDER BY column_value;
NEW_START_DATE NEW_END_DATE
-------------- ------------
2015-10-02 2015-10-31
2015-11-01 2015-11-30
2015-12-01 2015-12-25
```
**How the query works:**
Once you understand how row generation works using the **CONNECT BY clause**, the rest is simple **DATE** arithmetic.
* `TRUNC(date, 'MM')` gives the first day of the month, which in your case becomes the start date.
* `ADD_MONTHS(date, value)` adds as many months to the date as specified in the *value*.
* `LAST_DAY` gives the last day of the month, which in your case becomes the end date.
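Outside the database, the same TRUNC/ADD_MONTHS/LAST_DAY arithmetic is only a few lines. A Python sketch that mirrors the query's month-splitting logic:

```python
from datetime import date, timedelta

def month_spans(start: date, end: date):
    """Split [start, end] into per-month (first_day, last_day) pairs."""
    spans = []
    cur = start
    while cur <= end:
        # first day of the following month (adding 32 days always crosses one boundary)
        nxt = (cur.replace(day=1) + timedelta(days=32)).replace(day=1)
        spans.append((cur, min(end, nxt - timedelta(days=1))))
        cur = nxt
    return spans

spans = month_spans(date(2015, 10, 2), date(2015, 12, 31))
for first, last in spans:
    print(first, last)
# 2015-10-02 2015-10-31
# 2015-11-01 2015-11-30
# 2015-12-01 2015-12-31
```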
|
```
SELECT SUBSTR(t.column_one, 1, INSTR(t.column_one, ' ')-1) AS col_one,
SUBSTR(t.column_one, INSTR(t.column_one, ' ')+1) AS col_two
FROM YOUR_TABLE t
```
**Alternatively**
You could use a regex or the substring function. It won't be super fast on very large datasets, but it'll get the job done:
```
SELECT REGEXP_SUBSTR(t.column_one, '[^-]+', 1, 1) col_one,
REGEXP_SUBSTR(t.column_one, '[^-]+', 1, 2) col_two,
       REGEXP_SUBSTR(t.column_one, '[^-]+', 1, 3) col_three
FROM YOUR_TABLE t;
```
Hope this helps.
|
How to split a row into multiple rows between two given dates?
|
[
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
I need to write SQL query in order to extract some data.
I have this data in my table:
```
ID Store Value
1 9921 NOK
2 9921 NOK1
3 9921 OK3
```
What I need is to get the data from a select query in this form:
```
9921 NOK,NOK1,OK3
```
Any help please ?
|
Go through this below example
[**Demo: SQLFiddle**](http://sqlfiddle.com/#!3/349d3/5)
The SQL I used is as below,
```
SELECT
store,
STUFF(
(SELECT DISTINCT ',' + value
FROM SampleData
WHERE store = a.store
FOR XML PATH (''))
, 1, 1, '') AS CombineValues
FROM SampleData AS a
GROUP BY store
```
you will see your expected result as **"CombineValues"**
```
store CombineValues
9921 NOK,NOK1,NOK2
```
|
Try to accomplish your expected output by using `COALESCE`:
Create a sample table for testing purpose
```
CREATE TABLE SampleData (id INT ,store INT ,value NVARCHAR(50))
INSERT INTO SampleData VALUES (1 ,9921 ,'NOK')
INSERT INTO SampleData VALUES (2 ,9921 ,'NOK1')
INSERT INTO SampleData VALUES (3 ,9921 ,'NOK2')
```
Create a Scalar-Valued Function
```
CREATE FUNCTION fun_GetCombinedData
(
@store int
)
RETURNS nvarchar(max)
AS
BEGIN
-- Declare the return variable here
DECLARE @CombineValue nvarchar(max)
SELECT @CombineValue = COALESCE(@CombineValue + ',', '') + value
FROM SampleData where store=@store
RETURN @CombineValue
END
GO
```
Final Query,
```
SELECT store
,dbo.fun_GetCombinedData(store) AS value
FROM SampleData
GROUP BY store
```
Expected Output:
```
store | value
------------------------
9921 | NOK,NOK1,NOK2
```
This is one way to simplify your select query.
|
Different SQL Select query
|
[
"",
"sql",
"sql-server-2012",
""
] |
I've got a table with certain attributes such as name and address. In a search field I want to let users of a system type in whatever they know about these people and return the results in a table. Obviously I do want the results that are fitting best to be on top.
1. Should the ranking be done in the SQL command or are there other ways that might work better?
2. Does anyone know a good way to order/rank the results based by highest conformity? I basically do want the person to show up on top that fits the search criteria most.
You can see an example here:
<http://sqlfiddle.com/#!2/19e9ad>
Thanks for your help!
(I do use SQL Server currently, but any help is appreciated.)
|
You might want to check the full text indexes and searches - they have options for ranking that might do the job for you:
* [Fulltext index creation info](https://msdn.microsoft.com/en-us/library/ms187317.aspx?f=255&MSPPError=-2147217396)
... and searching functionality info:
* [FREETEXTTABLE](https://msdn.microsoft.com/en-us/library/ms177652.aspx)
* [CONTAINSTABLE](https://msdn.microsoft.com/en-us/library/ms189760.aspx)
Alternatively you can also work on your own solution like parsing the user input and matching word-by-word, then boosting results that match more words, exact words ordering etc. But you will need to handle a number of cases yourself (like multi-word cities like 'New York' etc.).
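That hand-rolled approach can be sketched in a few lines of Python (hypothetical data; each record is scored by the number of search words it contains, then sorted by score, best match first):

```python
def rank(records, query):
    """Order records by how many query words each one contains."""
    words = query.lower().split()

    def score(rec):
        text = ' '.join(rec).lower()
        return sum(1 for w in words if w in text)

    return sorted(records, key=score, reverse=True)

people = [("John", "Smith", "New York"),
          ("Jane", "Miller", "Boston"),
          ("John", "Miller", "New York")]

# Both Johns from New York match 3 words and rank above Jane, who matches none.
print(rank(people, "john new york"))
```

A real implementation would also need tokenization that keeps multi-word values such as 'New York' together, as noted above.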
|
Sounds like you are asking for a string comparison function, such as Jaro-Winkler. This is a built in function for Oracle (<http://docs.oracle.com/cd/E18283_01/appdev.112/e16760/u_match.htm>) and I would imagine many other RDBMs.
Jaro-Winkler is pretty good at recognizing strings are just in the wrong order, and the score will reflect that. Run some tests and see if it gives you the results you want, but I use it a lot in the work I do (marketing customer data integration). I developed an API that essentially does this, but with a lot of added complexity, such as accounting for nick names, etc. The API is at <http://matchbox.io/#match-api>.
|
SQL Sorting with multiple attributes (order by conformity)
|
[
"",
"sql",
"sql-server",
""
] |
I have a query that needs to return a single row if there are `more than 2` rows with a certain `ID`. I figured the best way to do this was to add a counter in my query to count the instances of `x` and then return the results if `x >= 2`, but I'm not sure how in `Oracle`.
My query now returns all of the instances of `x` in all `playlists`. I need it to return the playlist only if it has more than 2 instances of `x`
```
Select * from PLAYLIST p
left join PLAYLIST_SONGS ps on p.PLAYLIST_ID = ps.PLAYLIST_ID
join SONG s on ps.SONG_ID = s.SONG_ID
join Artists art on s.ARTIST_ID = art.ARTIST_ID
where art.BAND='x'
and p.NUM_SONGS >=2;
```
|
If you want to get the playlists with 2 or more songs of a certain artist, something like this might help you out:
```
SELECT *
FROM playlist p
WHERE (SELECT COUNT(1)
FROM playlist_songs ps JOIN song s ON ps.song_id = s.song_id
JOIN artists art ON s.artist_id = art.artist_id
WHERE ps.playlist_id = p.playlist_id
AND art.band = 'X') >= 2
```
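The correlated-count pattern works unchanged in most engines. Here is a runnable check using SQLite from Python, with the table layout simplified from the question and toy data invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE playlist (playlist_id INT);
    CREATE TABLE playlist_songs (playlist_id INT, song_id INT);
    CREATE TABLE song (song_id INT, artist_id INT);
    CREATE TABLE artists (artist_id INT, band TEXT);
    INSERT INTO artists VALUES (1, 'X'), (2, 'Y');
    INSERT INTO song VALUES (10, 1), (11, 1), (12, 2);
    INSERT INTO playlist VALUES (100), (200);
    -- playlist 100 has two songs by band X, playlist 200 only one
    INSERT INTO playlist_songs VALUES (100, 10), (100, 11), (200, 10), (200, 12);
""")

rows = conn.execute("""
    SELECT *
    FROM playlist p
    WHERE (SELECT COUNT(1)
           FROM playlist_songs ps
           JOIN song s    ON ps.song_id = s.song_id
           JOIN artists a ON s.artist_id = a.artist_id
           WHERE ps.playlist_id = p.playlist_id
             AND a.band = 'X') >= 2
""").fetchall()
print(rows)  # [(100,)] — only the playlist with two band-X songs survives
```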
|
Here's the concept of using `HAVING` with a `GROUP BY` to get songs with a count > 1. Note that Field1 through Field4 in this case must not make the records you want to combine unique; otherwise, pablomatico's approach would work better.
```
Select Field1, Field2, Field3, Field4
from PLAYLIST p
left join PLAYLIST_SONGS ps on p.PLAYLIST_ID = ps.PLAYLIST_ID
join SONG s on ps.SONG_ID = s.SONG_ID
join Artists art on s.ARTIST_ID = art.ARTIST_ID
where art.BAND='x'
GROUP BY Field1, Field2, Field3, Field4
HAVING count(PS.Song_ID) >=2
```
You could also use an analytic (window) function such as `ROW_NUMBER() OVER (PARTITION BY song_id)` to assign a row number to each song.
|
Add if statement to Oracle query
|
[
"",
"sql",
"oracle",
""
] |
I am trying to join a company and their details, as well as transactions even if they don't exist.
I am counting the transactions to gauge how many users are going to a course. If there are no transactions I still want to join the company and details, just with a count of 0. In my query below the training\_company table is being selected, but the training\_details is not being selected for some reason:
```
SELECT training.*, count(distinct training_transactions.training_transaction_course) as completed_training_payments
FROM training
LEFT JOIN training_company
ON training.course_main = training_company_id
LEFT JOIN training_details
ON training.course_main = training_details_company
LEFT JOIN training_transactions
ON training.course_user = training_transactions.training_transaction_user
WHERE course_id = ?
AND training_transactions.training_transaction_status = 'complete'
AND training_transactions.training_transaction_payment_status = 'complete'
AND course_enabled = 'enabled'
```
training\_company:
```
CREATE TABLE IF NOT EXISTS `training_company` (
`training_company_id` int(11) NOT NULL,
`training_company_name` varchar(100) NOT NULL,
`training_company_user` int(11) NOT NULL,
`training_company_enabled` varchar(50) NOT NULL DEFAULT 'enabled',
`training_company_has_avatar` int(5) NOT NULL DEFAULT '0',
`training_company_has_banner` int(5) NOT NULL DEFAULT '0'
) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `training_company`
--
INSERT INTO `training_company` (`training_company_id`, `training_company_name`, `training_company_user`, `training_company_enabled`, `training_company_has_avatar`, `training_company_has_banner`) VALUES
(1, '123', 1, 'enabled', 0, 0),
```
training\_details:
```
CREATE TABLE IF NOT EXISTS `training_details` (
`training_details_id` int(11) NOT NULL,
`training_details_user` int(11) NOT NULL,
`training_details_company` int(11) NOT NULL,
`training_details_registration_number` varchar(10) NOT NULL,
`training_details_type` varchar(100) NOT NULL,
`training_details_name` varchar(100) NOT NULL,
`training_details_street` varchar(100) NOT NULL,
`training_details_town` varchar(100) NOT NULL,
`training_details_county` varchar(100) NOT NULL,
`training_details_postcode` varchar(100) NOT NULL,
`training_details_country` varchar(100) NOT NULL,
`training_details_company_name` varchar(100) NOT NULL,
`training_details_company_street` varchar(100) NOT NULL,
`training_details_company_town` varchar(100) NOT NULL,
`training_details_company_county` varchar(100) NOT NULL,
`training_details_company_postcode` varchar(100) NOT NULL,
`training_details_company_country` varchar(100) NOT NULL,
`training_details_total_employees` varchar(100) NOT NULL,
`training_details_fax` varchar(100) NOT NULL,
`training_details_landline` varchar(100) NOT NULL,
`training_details_mobile` varchar(50) NOT NULL,
`training_details_email` varchar(50) NOT NULL,
`training_details_website` varchar(250) NOT NULL,
`company_differs_address` int(11) NOT NULL DEFAULT '0'
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `training_details`
--
INSERT INTO `training_details` (`training_details_id`, `training_details_user`, `training_details_company`, `training_details_registration_number`, `training_details_type`, `training_details_name`, `training_details_street`, `training_details_town`, `training_details_county`, `training_details_postcode`, `training_details_country`, `training_details_company_name`, `training_details_company_street`, `training_details_company_town`, `training_details_company_county`, `training_details_company_postcode`, `training_details_company_country`, `training_details_total_employees`, `training_details_fax`, `training_details_landline`, `training_details_mobile`, `training_details_email`, `training_details_website`, `company_differs_address`) VALUES
(1, 0, 1, '0', '', '123', '123', '123', '123456', 'WN8', 'Australia', '123', '123', '123', '', 'WN8', 'Australia', '', '', '', '', '', '', 4),
```
training:
```
CREATE TABLE IF NOT EXISTS `training` (
`course_id` int(11) NOT NULL,
`course_user` int(11) NOT NULL,
`course_main` int(11) NOT NULL,
`course_type` varchar(255) NOT NULL,
`course_name` varchar(255) NOT NULL,
`course_description` text NOT NULL,
`course_location` varchar(255) NOT NULL,
`course_duration` varchar(255) NOT NULL,
`course_fitness_type` varchar(255) NOT NULL,
`course_instructor_name` varchar(255) NOT NULL,
`course_price` int(15) NOT NULL,
`course_start_date` date NOT NULL,
`course_max_attendees` int(8) NOT NULL,
`course_accommodation` varchar(255) NOT NULL,
`course_accommodation_price` varchar(255) NOT NULL,
`course_status` varchar(50) NOT NULL,
`course_enabled` varchar(10) NOT NULL DEFAULT 'enabled',
`course_location_name` varchar(255) NOT NULL,
`course_location_street` varchar(255) NOT NULL,
`course_location_town` varchar(255) NOT NULL,
`course_location_county` varchar(255) NOT NULL,
`course_location_postcode` varchar(255) NOT NULL,
`course_location_country` varchar(255) NOT NULL,
`course_certificate` varchar(250) NOT NULL,
`course_certificate_valid` int(30) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `training`
--
INSERT INTO `training` (`course_id`, `course_user`, `course_main`, `course_type`, `course_name`, `course_description`, `course_location`, `course_duration`, `course_fitness_type`, `course_instructor_name`, `course_price`, `course_start_date`, `course_max_attendees`, `course_accommodation`, `course_accommodation_price`, `course_status`, `course_enabled`, `course_location_name`, `course_location_street`, `course_location_town`, `course_location_county`, `course_location_postcode`, `course_location_country`, `course_certificate`, `course_certificate_valid`) VALUES
(1, 3, 1, 'Hazardous', '123', 'dddddddddddddd', 'other', '14', 'lol', 'lol', 123, '2015-11-09', 4, '0', '', 'pending', 'enabled', '123', '123', '123', '123456', '123', 'Australia', '123', 2),
```
So how can I get my join to work on details? At the moment it is not joining even though everything matches.
|
As Gordon Linoff suggested in the first comment, move the conditions involving the `training_transactions` from the `WHERE` clause into the `LEFT JOIN`.
Of course, you should also list the columns explicitly instead of `*` and add a corresponding `GROUP BY` for the `COUNT` to work.
```
SELECT
training.course_id
,training.course_user
...
,count(distinct training_transactions.training_transaction_course) as completed_training_payments
FROM
training
LEFT JOIN training_company ON training.course_main = training_company_id
LEFT JOIN training_details ON training.course_main = training_details_company
LEFT JOIN training_transactions
ON training_transactions.training_transaction_user = training.course_user
AND training_transactions.training_transaction_status = 'complete'
AND training_transactions.training_transaction_payment_status = 'complete'
WHERE
training.course_id = ?
AND training.course_enabled = 'enabled'
GROUP BY
training.course_id
,training.course_user
...
```
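To see why moving those conditions matters, here is a toy demonstration using SQLite from Python (made-up tables). Filtering in the `WHERE` clause turns the `LEFT JOIN` into an effective inner join, because `NULL = 'complete'` is never true for the unmatched rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE course (id INT);
    CREATE TABLE tx (course_id INT, status TEXT);
    INSERT INTO course VALUES (1), (2);      -- course 2 has no transactions
    INSERT INTO tx VALUES (1, 'complete');
""")

# Filter in ON: the unmatched course 2 survives with a NULL status.
on_rows = conn.execute("""
    SELECT c.id, t.status FROM course c
    LEFT JOIN tx t ON t.course_id = c.id AND t.status = 'complete'
""").fetchall()

# Filter in WHERE: course 2 disappears from the result.
where_rows = conn.execute("""
    SELECT c.id, t.status FROM course c
    LEFT JOIN tx t ON t.course_id = c.id
    WHERE t.status = 'complete'
""").fetchall()

print(on_rows)     # [(1, 'complete'), (2, None)]
print(where_rows)  # [(1, 'complete')]
```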
There are also a few things that you should pay attention to in this query.
```
WHERE
course_id = ?
AND course_enabled = 'enabled'
```
It is not clear to a new person reading the query (that is, everybody reading this question, and the next person who will be maintaining your code, which could be you in two years' time) what table these fields belong to. Always try to indicate the table explicitly, like this:
```
WHERE
training.course_id = ?
AND training.course_enabled = 'enabled'
```
It also helps to use aliases.
In a similar fashion, it is not clear what is going on in the `JOIN`:
```
LEFT JOIN training_company ON training.course_main = training_company_id
LEFT JOIN training_details ON training.course_main = training_details_company
```
Is it just an oversight when simplifying the query for this question, or this is your real code? In any case, include the table name (or alias).
```
LEFT JOIN training_company ON training.course_main = training_company.training_company_id
LEFT JOIN training_details ON training.course_main = training_details.training_details_company
```
|
I would suggest going a different route altogether and not use left joins. Since you want all data from the training table and you want an aggregate (count in your case) from a different table, I would suggest breaking the aggregate out into a separate subquery. Here's what I mean (note I added aliases to your tables to simplify things as well):
```
SELECT t.*,
(SELECT count(distinct tt.training_transaction_course)
FROM training_company tc
           JOIN training_details td ON td.training_details_company = tc.training_company_id
JOIN training_transactions tt ON t.course_user = tt.training_transaction_user
WHERE t.course_main = tc.training_company_id
AND tt.training_transaction_status = 'complete'
AND tt.training_transaction_payment_status = 'complete')
as completed_training_payments
FROM training t
WHERE t.course_id = ?
AND t.course_enabled = 'enabled'
```
This will guarantee a row returned for every row in the training table that matches the entered course\_id. Also, if the subquery returns no rows then it will return 0.
|
Left join isn't working when I am selecting count from another table
|
[
"",
"mysql",
"sql",
""
] |
I have some SQL statements which I am using with Oracle. These SQL statements are used by a program of mine.
I want to support Oracle and SQL-Server with my program without having different SQL statements for Oracle and SQL-Server.
Which alternative can I use for these Oracle-specific SQL expressions:
* to\_char(FIELDNAME, 'YYYY')
* to\_char(FIELDNAME, 'YYYYMMDD')
* to\_char(FIELDNAME, 'DD.MM.YYYY')
The sql statements have to work for Oracle **and** SQL-Server.
|
Even if at a first glance the SQL implementation from two different vendors looks similar, when working with real life enterprise applications you will stumble upon a large number of differences, and I am only talking about SQL, when comparing PL/SQL with T-SQL there is hardly any resemblance left.
When trying to reduce the usage of two databases to only their common functionality, you will lose a lot of their power; you could as well use a txt file on the file system.
One elegant solution, as someone already suggested, would be to leave the columns in the database as DATE data type and extract your data in the application code that stands above the database, if any. For example, in Java, you will map your database DATE columns to java.sql.Date no matter if that date comes from Oracle or from SQL Server.
Still, if you want to get your formatted data from the database, you could create separate columns that hold the formatted date, for example :
FIELDNAME | FIELDNAME\_YYYY | FIELDNAME\_YYYYMMDD | FIELDNAME\_DDMMYYYY
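If you go the application-side route, the formatting differences vanish entirely. For instance in Python, whichever database the `DATE` value came from, the driver hands you one date object that yields all three formats:

```python
from datetime import date

# Stand-in for a value fetched from a DATE column via any DB driver.
d = date(2015, 11, 9)

print(d.strftime('%Y'))        # 2015       (to_char(FIELDNAME, 'YYYY'))
print(d.strftime('%Y%m%d'))    # 20151109   (to_char(FIELDNAME, 'YYYYMMDD'))
print(d.strftime('%d.%m.%Y'))  # 09.11.2015 (to_char(FIELDNAME, 'DD.MM.YYYY'))
```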
|
Finally I found a solution. Maybe it's useful to some other people too.
You can set the input format for a date...
Oracle: `ALTER SESSION SET NLS_DATE_FORMAT = 'DD.MM.YYYY'`
SQL-Server: `SET DATEFORMAT dmy`
|
SQL: to_char alternative for Oracle AND SQL-Server
|
[
"",
"sql",
"sql-server",
"oracle",
"to-char",
""
] |
Hi, I have problems trying to get the value 3, because my query does not recognize the selection of both countries using the `IN` clause.
a) This is my simple table source:
**id | country**
1 | CL
2 | BR
b) this is my sql query:
```
SELECT
(
CASE
WHEN country ='CL' THEN 1
WHEN country = 'BR' THEN 2
WHEN country IN ('BR','CL') THEN 3
ELSE 0
END) AS result
FROM countries
WHERE country IN ('BR','CL') ;
```
c) This is the current output result:
**Result**
1
2
|
It seems you want something like the following:
```
SELECT CASE
WHEN cntAll = 2 THEN 3
WHEN cntCL >= 1 THEN 1
WHEN cntBR >= 1 THEN 2
ELSE 0
END AS result
FROM (
SELECT COUNT(DISTINCT country) AS cntAll,
COUNT(CASE WHEN country = 'CL' THEN 1 END) AS cntCL,
COUNT(CASE WHEN country = 'BR' THEN 1 END) AS cntBR
FROM countries
WHERE country IN ('CL', 'BR')) AS t
```
When *both* `('CL', 'BR')` values are present in `countries` table, then output is `3`, otherwise if only `'CL'` is present output is `1`, otherwise if only `'BR'` is present output is `2`, otherwise output is `0`.
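The aggregation trick above is portable. Here is a runnable version against SQLite from Python, using the same table and values as in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countries (id INT, country TEXT)")
conn.executemany("INSERT INTO countries VALUES (?, ?)", [(1, 'CL'), (2, 'BR')])

# COUNT(CASE ...) counts only the rows where the CASE yields a non-NULL value.
result = conn.execute("""
    SELECT CASE
             WHEN cntAll = 2 THEN 3
             WHEN cntCL >= 1 THEN 1
             WHEN cntBR >= 1 THEN 2
             ELSE 0
           END AS result
    FROM (SELECT COUNT(DISTINCT country) AS cntAll,
                 COUNT(CASE WHEN country = 'CL' THEN 1 END) AS cntCL,
                 COUNT(CASE WHEN country = 'BR' THEN 1 END) AS cntBR
          FROM countries
          WHERE country IN ('CL', 'BR')) AS t
""").fetchone()[0]
print(result)  # 3 — both countries are present
```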
|
The [CASE ... END expression](https://dev.mysql.com/doc/refman/5.6/en/control-flow-functions.html#operator_case) uses shortcut evaluation:
> ```
> CASE value WHEN [compare_value] THEN result [WHEN [compare_value] THEN result ...] [ELSE result] END
>
> CASE WHEN [condition] THEN result [WHEN [condition] THEN result ...] [ELSE result] END
> ```
>
> The first version returns the result where value=compare\_value. The
> second version returns the result for the **first condition that is
> true**. If there was no matching result value, the result after ELSE is
> returned, or NULL if there is no ELSE part.
|
I try to use IN in mysql 'CASE WHEN' condition
|
[
"",
"mysql",
"sql",
"case",
"conditional-statements",
""
] |
I'm not particularly familiar with Ruby on Rails, but I'm troubleshooting an issue we're experiencing with a rake job that is supposed to be cleaning database tables. The tables grow very large very quickly, and the query generated by ActiveRecord doesn't seem to be efficient enough to handle it.
The Ruby calls looks like this:
```
Source.where("id not IN (#{Log.select('DISTINCT source_id').to_sql})").delete_all
```
and this:
```
Log.joins(:report).where(:report_id => Report.where(cond)).delete_all
```
I'm trying to get at the SQL, so we can have our DBAs attempt to optimize it better. I've noticed that if I drop the `.delete_all` I can add a `.to_sql`, which gives me the SELECT statement of the query prior to the call to `.delete_all`. I'd like to see what SQL is being generated by that `delete_all` method though.
Is there a way to do that?
|
From the [fine manual](http://api.rubyonrails.org/classes/ActiveRecord/Relation.html#method-i-delete_all):
> **delete\_all(conditions = nil)**
>
> Deletes the records matching `conditions` without instantiating the records first, and hence not calling the `destroy` method nor invoking callbacks. This is a single SQL DELETE statement that goes straight to the database, much more efficient than `destroy_all`.
So a `Model.delete_all(conditions)` ends up as
```
delete from models where conditions
```
When you say `Model.where(...).delete_all`, the `conditions` for the `delete_all` come from the `where` calls so these are the same:
```
Model.delete_all(conditions)
Model.where(conditions).delete_all
```
Applying that to your case:
```
Source.where("id not IN (#{Log.select('DISTINCT source_id').to_sql})").delete_all
```
you should see that you're running:
```
delete from sources
where id not in (
select distinct source_id
from logs
)
```
If you run your code in a development console you should see the SQL in the console or the Rails logs but it will be as above.
As far as optimization goes, my first step would be to drop the DISTINCT. DISTINCT usually isn't cheap and `IN` doesn't care about duplicates anyway so `not in (select distinct ...)` is probably pointless busy work. Then maybe an index on `source_id` would help, the query optimizer might be able to slurp the `source_id` list straight out of the index without having to do a table scan to find them. Of course, query optimization is a bit of a dark art so these simple steps may or may not work.
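You can sanity-check the generated statement (and the claim that `DISTINCT` is unnecessary inside `IN`) against a toy SQLite database from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sources (id INT);
    CREATE TABLE logs (source_id INT);
    INSERT INTO sources VALUES (1), (2), (3);
    INSERT INTO logs VALUES (1), (1), (3);   -- source 2 has no log rows
""")

# Same shape as the SQL Rails emits; duplicates in the subquery don't matter.
conn.execute("DELETE FROM sources WHERE id NOT IN (SELECT source_id FROM logs)")
remaining = conn.execute("SELECT id FROM sources ORDER BY id").fetchall()
print(remaining)  # [(1,), (3,)] — only the orphaned source 2 was deleted
```

One caveat worth knowing: if `source_id` can be NULL, `NOT IN` matches nothing at all, so a `WHERE source_id IS NOT NULL` in the subquery is a prudent addition in real data.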
|
Another option is to use raw Arel syntax, similar to a simplified version of what [ActiveRecord::Relation#delete\_all](https://github.com/rails/rails/blob/f4538aa5865da5f16b707d34aa733a5e2353f2fd/activerecord/lib/active_record/relation.rb#L577) does.
```
relation = Model.where(...)
arel = relation.arel
stmt = Arel::DeleteManager.new
stmt.from(arel.join_sources.empty? ? Model.arel_table : arel.source)
stmt.wheres = arel.constraints
sql = Model.connection.to_sql(stmt, relation.bound_attributes)
print sql
```
This will give you the generated delete sql. Here's an example using postgres as the sql adapter
```
relation = User.where('email ilike ?', '%@gmail.com')
arel = relation.arel
stmt = Arel::DeleteManager.new
stmt.from(arel.join_sources.empty? ? User.arel_table : arel.source)
stmt.wheres = arel.constraints
sql = User.connection.to_sql(stmt, relation.bound_attributes)
=> DELETE FROM "users" WHERE (email ilike '%@gmail.com')
```
|
Rails: Get SQL generated by delete_all
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
"rails-activerecord",
""
] |
I need an Oracle query which returns every minute between two given timestamps. I referred to [this](https://stackoverflow.com/questions/22680617/oracle-query-with-every-minute-a-day) Stack Overflow question.
Can we improve the same query?
|
To get all the minutes between two datetime elements using [**Row Generator**](http://lalitkumarb.com/2015/04/15/generate-date-month-name-week-number-day-number-between-two-dates-in-oracle-sql/) technique, you need to convert the difference between the dates into the **number of minutes**. Rest remains same in the **CONNECT BY** clause.
For example, to get all the minutes between `11/09/2015 11:00:00` and `11/09/2015 11:15:00`:
```
SQL> WITH DATA AS
2 (SELECT to_date('11/09/2015 11:00:00', 'DD/MM/YYYY HH24:MI:SS') date_start,
3 to_date('11/09/2015 11:15:00', 'DD/MM/YYYY HH24:MI:SS') date_end
4 FROM dual
5 )
6 SELECT TO_CHAR(date_start+(LEVEL -1)/(24*60), 'DD/MM/YYYY HH24:MI:SS') the_date
7 FROM DATA
8 CONNECT BY level <= (date_end - date_start)*(24*60) +1
9 /
THE_DATE
-------------------
11/09/2015 11:00:00
11/09/2015 11:01:00
11/09/2015 11:02:00
11/09/2015 11:03:00
11/09/2015 11:04:00
11/09/2015 11:05:00
11/09/2015 11:06:00
11/09/2015 11:07:00
11/09/2015 11:08:00
11/09/2015 11:09:00
11/09/2015 11:10:00
11/09/2015 11:11:00
11/09/2015 11:12:00
11/09/2015 11:13:00
11/09/2015 11:14:00
11/09/2015 11:15:00
16 rows selected.
```
Above, `CONNECT BY level <= (date_end - date_start)*(24*60) +1` means that we are generating as many rows as the number `(date_end - date_start)*(24*60) + 1`. You get `16` rows because the range includes both the **start** and **end** of the minute window.
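Outside Oracle, the same row generation is usually done with a recursive CTE. For comparison, here is a SQLite sketch (run from Python) producing the same 16 minutes; note the dates are written in ISO order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE minutes(t) AS (
        SELECT '2015-09-11 11:00:00'
        UNION ALL
        SELECT datetime(t, '+1 minute') FROM minutes
        WHERE t < '2015-09-11 11:15:00'
    )
    SELECT t FROM minutes
""").fetchall()

print(len(rows))  # 16, inclusive of both endpoints
print(rows[0][0], '...', rows[-1][0])
```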
|
You can generate them like this if you want all minutes from SYSDATE until 16-NOV-2015:
```
SELECT to_char(TRUNC(sysdate) + numtodsinterval(level - 1, 'minute'),
'dd.mm.yyyy hh24:mi') min
FROM dual
CONNECT BY LEVEL <=
(trunc((TO_DATE('16-NOV-2015','dd-mon-yyyy')) - sysdate) * 24 * 60);
```
|
Oracle query to fetch every minute between two timestamps
|
[
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
In R, I have a data frame called **df** such as the following:
**A** **B** **C** **D**
a1 b1 c1 2.5
a2 b2 c2 3.5
a3 b3 c3 5 - 7
a4 b4 c4 2.5
I want to split the value of the third row and **D** column by the dash and create another row for the second value retaining the other values for that row.
So I want this:
**A** **B** **C** **D**
a1 b1 c1 2.5
a2 b2 c2 3.5
a3 b3 c3 5
a3 b3 c3 7
a4 b4 c4 2.5
Any idea how this can be achieved?
Ideally, I would also want to create an extra column to specify whether the value I split is either a minimum or maximum.
So this:
**A** **B** **C** **D** **E**
a1 b1 c1 2.5
a2 b2 c2 3.5
a3 b3 c3 5 min
a3 b3 c3 7 max
a4 b4 c4 2.5
Thanks.
|
One option would be to use `sub` to paste 'min' and 'max' into the 'D' column where `-` is found, and then use `cSplit` to split the 'D' column.
```
library(splitstackshape)
df1$D <- sub('(\\d+) - (\\d+)', '\\1,min - \\2,max', df1$D)
res <- cSplit(cSplit(df1, 'D', ' - ', 'long'), 'D', ',')[is.na(D_2), D_2 := '']
setnames(res, 4:5, LETTERS[4:5])
res
# A B C D E
#1: a1 b1 c1 2.5
#2: a2 b2 c2 3.5
#3: a3 b3 c3 5.0 min
#4: a3 b3 c3 7.0 max
#5: a4 b4 c4 2.5
```
|
Here's a dplyrish way:
```
DF %>%
group_by(A,B,C) %>%
do(data.frame(D = as.numeric(strsplit(as.character(.$D), " - ")[[1]]))) %>%
mutate(E = if (n()==2) c("min","max") else "")
A B C D E
(fctr) (fctr) (fctr) (dbl) (chr)
1 a1 b1 c1 2.5
2 a2 b2 c2 3.5
3 a3 b3 c3 5.0 min
4 a3 b3 c3 7.0 max
5 a4 b4 c4 2.5
```
Dplyr has a policy against expanding rows, as far as I can tell, so the ugly
```
do(data.frame(... .$ ...))
```
construct is required. If you are open to data.table, it's arguably simpler here:
```
library(data.table)
setDT(DF)[,{
D = as.numeric(strsplit(as.character(D)," - ")[[1]])
list(D = D, E = if (length(D)==2) c("min","max") else "")
}, by=.(A,B,C)]
A B C D E
1: a1 b1 c1 2.5
2: a2 b2 c2 3.5
3: a3 b3 c3 5.0 min
4: a3 b3 c3 7.0 max
5: a4 b4 c4 2.5
```
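For comparison outside R, the same reshaping is a few lines of plain Python over the toy data (tuples stand in for the data frame rows):

```python
rows = [("a1", "b1", "c1", "2.5"),
        ("a2", "b2", "c2", "3.5"),
        ("a3", "b3", "c3", "5 - 7"),
        ("a4", "b4", "c4", "2.5")]

out = []
for a, b, c, d in rows:
    parts = [p.strip() for p in d.split(" - ")]
    if len(parts) == 2:
        # A dashed range becomes two rows, tagged min and max.
        out.append((a, b, c, float(parts[0]), "min"))
        out.append((a, b, c, float(parts[1]), "max"))
    else:
        out.append((a, b, c, float(parts[0]), ""))

for row in out:
    print(row)
```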
|
Split value from a data.frame and create additional row to store its component
|
[
"",
"sql",
"r",
"dataframe",
"dplyr",
""
] |
Suppose I have this table, named **the\_table**,
whose structure is:
postgreSQL:
```
create table the_table (col3 SERIAL, col2 varchar, col1 varchar, PRIMARY KEY(col3));
```
MySQL:
```
create table the_table ( col3 INT NOT NULL AUTO_INCREMENT PRIMARY KEY, col2 varchar(20), col1 varchar(20) )
```
Then I inserted the table:
```
INSERT INTO the_table (col2,col1) VALUES
('x','a'),
('x','b'),
('y','c'),
('y','d'),
('z','e'),
('z','f');
```
Now the table looks like this:
```
col3 | col2 | col1
------+------+------
1 | x | a
2 | x | b
3 | y | c
4 | y | d
5 | z | e
6 | z | f
```
When I do this query:
```
select * from the_table group by col2
```
then in mysql I get:
```
1 x a
3 y c
5 z e
```
and in postgreSQL, I am getting error:
```
ERROR: column "the_table.col3" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: select * from the_table group by col2;
```
**My Questions:**
What does this error mean?
What is an aggregate function?
When it works in MySQL, why doesn't it work in PostgreSQL?
|
You need to use [AGGREGATE FUNCTION](http://www.postgresql.org/docs/current/static/functions-aggregate.html):
> Aggregate functions compute a single result from a set of input
> values.
```
SELECT col2, MIN(col3) AS col3, MIN(col1) AS col1
FROM the_table
GROUP BY col2;
```
**[db<>fiddle demo](https://dbfiddle.uk/?rdbms=postgres_11&fiddle=5cbc44b41885ec85a2564d3c69f39e97)**
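The aggregate form runs identically everywhere; here is the same query checked against SQLite from Python (SQLite, like old MySQL, would also tolerate the bare-column version, but the explicit aggregates make the result well-defined):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_table (col3 INTEGER PRIMARY KEY, col2 TEXT, col1 TEXT)")
conn.executemany("INSERT INTO the_table (col2, col1) VALUES (?, ?)",
                 [('x', 'a'), ('x', 'b'), ('y', 'c'),
                  ('y', 'd'), ('z', 'e'), ('z', 'f')])

rows = conn.execute("""
    SELECT col2, MIN(col3) AS col3, MIN(col1) AS col1
    FROM the_table
    GROUP BY col2
    ORDER BY col2
""").fetchall()
print(rows)  # [('x', 1, 'a'), ('y', 3, 'c'), ('z', 5, 'e')]
```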
---
> **[MySQL Handling of GROUP BY](https://dev.mysql.com/doc/refman/5.0/en/group-by-handling.html)**:
>
> **In standard SQL, a query that includes a GROUP BY clause cannot refer
> to nonaggregated columns in the select list that are not named in the
> GROUP BY clause**
and:
> MySQL extends the use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. This means that the preceding query is legal in MySQL. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. **The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate**
So with MySQL version without explicit aggregate function you may end up with undetermininistic values. I strongly suggest to use specific aggregate function.
---
**EDIT:**
From [MySQL Handling of GROUP BY](https://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html):
> SQL92 and earlier does not permit queries for which the select list, HAVING condition, or ORDER BY list refer to nonaggregated columns that are not named in the GROUP BY clause.
>
> SQL99 and later permits such nonaggregates per optional feature T301 **if they are functionally dependent on GROUP BY columns:** If such a relationship exists between name and custid, the query is legal. This would be the case, for example, were custid a primary key of customers.
Example:
```
SELECT o.custid, c.name, MAX(o.payment)
FROM orders AS o
JOIN customers AS c
ON o.custid = c.custid
GROUP BY o.custid;
```
|
As an addition to the MySQL answer: the implicit behavior no longer works by default from MySQL 5.7 onwards.
You can use the `ANY_VALUE()` function instead, as stated in the MySQL documentation.
Sources: <https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_any-value>
Example:
```
SELECT MIN(col1), col2, ANY_VALUE(col3) FROM the_table GROUP BY col2
```
|
Group by clause in mySQL and postgreSQL, why the error in postgreSQL?
|
[
"",
"mysql",
"sql",
"postgresql",
"group-by",
""
] |
I am dealing with an issue that I have to backtrack, as the entire project is already in production and the system has been used for a while.
I need to backtrack all the data with the following parameters.
`Select * from table where bundle = 5 and count(bundle) >= 3`
This will be a joined table, so technically I need the count of bundles greater than 2 within the same transaction.
eg
```
id | transaction | bundle
-------------------------
1 | 123 | 5
3 | 234 | 15
12 | 1111 | 5
13 | 1111 | 15
17 | 1111 | 5
18 | 1111 | 5
```
My code so far
```
select * from table_i as ti
right join table_r as tr
on tr.id = ti.t_id
where ti.type_id = x and ti.bundle = 5 and ti.online = 1 and count(ti.bundle) >=5
```
Thanks
# EDIT REAL CODE:
```
SELECT ti.*, tr.*
FROM ticket_items AS ti
INNER join transactions as tr
ON tr.id = ti.trans_id
INNER JOIN
(
SELECT tis.trans_id, COUNT(tis.bundle) AS bundle_count
FROM ticket_items as tis
INNER join transactions as trs
ON trs.id = tis.trans_id
WHERE tis.type_id = 2
AND tis.bundle = 5
AND tis.online = 1
HAVING bundle_count > 2
) sub0
ON sub0.trans_id = ti.trans_id
WHERE ti.type_id = 2
AND ti.bundle = 5
AND ti.online = 1
```
Result:
`1328 1 1 766 2 5 25 1 1 2015-10-26 20:26:41 2015-10-27 00:00:02 0 766 1 0 John Doe 123-123-1234 NULL email@email.com NULL NULL NULL NULL 1 164 Cedar Square NULL 123 rrt city province country 125 2015-10-26 20:26:41 2015-10-26 20:26:41 125.00 0.00 0.00 0.00 0 1`
Table ticket\_items:
```
id | lot | system | trans_id | type | bundle | price | print | online | date | update | void
1 1 2 1 1 1 100 1 0 2015-10-01 23:30:12 2015-10-03 18:49:25 0
2 1 2 1 2 15 50 1 0 2015-10-01 23:30:12 2015-10-03 16:48:15 0
3 1 3 2 1 1 100 1 0 2015-10-02 00:13:57 2015-10-02 00:22:17 1
4 1 3 2 2 15 50 1 0 2015-10-02 00:13:57 2015-10-02 00:19:17 1
```
Table transactions:
```
id | lot_id | cust | first | last| number |||||||
1 | 1 | 23 | john | doe | 123
```
|
If I understand correctly, you want to find transactions where there are 3 or more rows with that transaction code for a bundle of 5 and with online = 1, and then bring back all the rows from the 2 tables for that transaction with a bundle of 5 and with online = 1.
If so, use a sub query to get the transactions with a count of 3 or more and then join that back against the 2 tables:-
```
SELECT ti.*, tr.*
FROM table_i AS ti
INNER join table_r as tr
ON tr.id = ti.t_id
INNER JOIN
(
SELECT tis.transaction, COUNT(tis.bundle) AS bundle_count
FROM table_i as tis
INNER join table_r as trs
ON trs.id = tis.t_id
WHERE tis.type_id = x
AND tis.bundle = 5
AND tis.online = 1
GROUP BY tis.transaction
HAVING bundle_count >= 3
) sub0
ON sub0.transaction = ti.transaction
WHERE ti.type_id = x
AND ti.bundle = 5
AND ti.online = 1
```
|
I'm a little confused by the wording of your question, but try this:
```
SELECT ti.transaction, count(ti.bundle)
FROM table_i as ti
JOIN table_r as tr
ON tr.id = ti.id
WHERE ti.type_id = x
AND ti.bundle = 5
AND ti.online = 1
GROUP BY ti.transaction
HAVING count(ti.bundle) >= 5
```
|
MySQL count rows where column value is same and select them where count is greater than 2
|
[
"",
"mysql",
"sql",
""
] |
Explaining by example:
```
UPDATE Table SET value=(22,55,99) WHERE id IN (2,5,9)
```
So the row with `id=2`, `value` is set to `22`. And the row with `id=5`, `value` is set to `55`. Etc.
|
You can use a derived table:
```
update t set
value = a.value
from Table as t
inner join (values
(22, 2),
(55, 5),
(99, 9)
) as a(id, value) on a.id = t.id
```
For me this is the most elegant way to do this, and it's also easily extendable (you can add more columns if you like).
|
Yes, but not like you have:
```
UPDATE Table
SET value=case when id=2 then 22 when id=5 then 55 when id=9 then 99 end
WHERE id in (2,5,9)
```
If you have to do this for a large number of fields/records, you'd be better off just issuing a bunch of dedicated `update` queries.
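The "bunch of dedicated `update` queries" suggestion can be sketched as one parameterized statement executed once per pair — shown here against SQLite from Python (illustrative table, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, value INTEGER)")
conn.executemany("INSERT INTO t (id, value) VALUES (?, 0)", [(2,), (5,), (9,)])
# one parameterized UPDATE per (value, id) pair instead of a giant CASE expression
conn.executemany("UPDATE t SET value = ? WHERE id = ?", [(22, 2), (55, 5), (99, 9)])
values = dict(conn.execute("SELECT id, value FROM t"))
print(values)  # {2: 22, 5: 55, 9: 99}
```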
|
Is it possible to update rows from a key/value pair?
|
[
"",
"sql",
"sql-server",
""
] |
As shown in the figure, I have four tables. Only `TableB` and `TableBChild` have a foreign key relationship.
What I want to achieve here is to determine whether
`TableBChild.MemberID` matches `TableA.memberNo` where `TableB.isgood` is true.
If not, batch-update `isvalid` to false for the rows where the condition is not satisfied.
But I am unable to compare `TableBChild.MemberID` to `Member.MemberNo`.
```
Update TableB
Set IsValid = 0
From TableA as A
INNER JOIN TableB as B ON A.TableBNo = B.TableBNo And IsGood = 1
INNER JOIN TableBChild as BC ON B.TableBID = BC.TableBID
AND BC.MEMBERID != SELECT MemberID
FROM Member
WHERE MemberNo = A.MemberNo // This line is not working
```
[](https://i.stack.imgur.com/Tf8ZC.png)
|
Try to use `NOT IN` instead of `!=` like this:
```
Update TableB
Set IsValid = 0
From TableA as A INNER JOIN TableB as B
ON A.TableBNo = B.TableBNo And IsGood = 1
INNER JOIN TableBChild as BC
ON B.TableBID = BC.TableBID
AND BC.MEMBERID NOT IN ( SELECT MemberID from Member Where MemberNo = A.MemberNo )
```
|
What I would do:
```
UPDATE TableB
SET IsValid = 0
WHERE TableBID NOT IN
(
-- this will select TableBID's satisfying:
-- TableB.IsGood is true,
-- and corresponding TableBChild has memberID that exists in TableA.MemberNo
SELECT tbc.TableBID
FROM TableBChild tbc
INNER JOIN TableB tb ON tb.TableBID = tbc.TableBID
INNER JOIN TableA ta ON tbc.MemberID = ta.MemberNo
WHERE tb.IsGood = 1
)
```
|
SQL update based on value from other table
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
iReport does not seem to accept the normal SQL SUM function and I am having a hard time figuring out a way around this.
I am trying to use `SUM(qtytofulfill.SOITEM - qtyfulfilled.SOITEM) AS qty`, and it does not seem to like that, nor me simply subtracting the columns as `SUM(qtytofulfill - qtyfulfilled) AS qty`.
This does not seem to be a syntax error; iReport simply will not accept it as an SQL statement. I am posting a picture of me attempting to use this SQL statement and the error it gives. Any help on what I am doing wrong, or what I actually should be using specifically for iReport, is greatly appreciated.
Thanks!
-Colt
[](https://i.stack.imgur.com/dfqhy.png)
|
This will work fine,
> In standard SQL (but not MySQL), when you use GROUP BY, you must list
> all the result columns that are not aggregates in the GROUP BY clause.
```
SELECT
SOITEM.`QTYFULFILLED` AS QTYFULFILLED,
SOITEM.`QTYTOFULFILL` AS QTYTOFULFILL,
SOITEM.`SOID` AS SOITEM_SOID,
SUM(SOITEM.`QTYFULFILLED`) AS Sum_Quantity_Fullfilled,
SUM(SOITEM.`QTYTOFULFILL`) AS Sum_Quantity_to_Fullfill,
(SUM(SOITEM.`QTYFULFILLED`) - SUM(SOITEM.`QTYTOFULFILL`)) AS QTY,
SO.`ID` AS SO_ID
FROM
`SO` SO INNER JOIN `SOITEM` SOITEM ON SO.`ID` = SOITEM.`SOID`
GROUP BY SOITEM.`QTYFULFILLED`, SOITEM.`QTYTOFULFILL`, SOITEM.`SOID`, SO.`ID`
```
Hope this helps.
|
I believe it's treating the 2 column names inside sum as variables, not table columns
|
iReport not accepting SQL SUM function
|
[
"",
"mysql",
"sql",
"jasper-reports",
"ireport",
""
] |
I have two tables
**Table\_1:**
```
╔══════╦══════════╦═════════╗
║ Name ║ Date ║ Revenue ║
╠══════╬══════════╬═════════╣
║ A ║ 1/1/2001 ║ 20 ║
║ A ║ 1/2/2001 ║ 20 ║
║ B ║ 1/1/2001 ║ 40 ║
╚══════╩══════════╩═════════╝
```
**Table\_2:**
```
╔══════╦══════╗
║ Name ║ Task ║
╠══════╬══════╣
║ A ║ Call ║
║ A ║ Foo ║
║ B ║ Bar ║
╚══════╩══════╝
```
So I do a join
```
SELECT sum(Revenue), t2.Name, T2.Task
FROM Table_1 as t1 JOIN Table_2 as t2 ON t1.Name = t2.Name
GROUP BY t2.Name
```
The result table of the join looks like this:
```
Result
Name Sum
A 80
B 40
```
The problem is that I want the sum result of A to be 40. How should I modify my query?
|
Use this query:
```
SELECT Name, SUM(Revenue)
FROM Table_1
GROUP BY Name
```
I don't see any point in joining `Table_1` to `Table_2` since you are not making use of the `Task` column.
|
The root of your problem is that the join is not doing what you expect it to do. By doing the join between the two tables on the 'name', you are creating duplicates. Remove the group by clause in your query and you will see exactly what I mean.
As mentioned in a previous answer, the join (in this case) is superfluous. I would advise looking at it more closely than that, though: how could the data be structured so that this duplication of data doesn't occur?
Without more data I can't provide you any more direction, but hopefully my comments will point you in the right direction and you'll learn a valuable lesson on using discrete key values.
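To see the duplication concretely, here is a small runnable sketch (SQLite via Python, hypothetical data): two revenue rows joined against two task rows for the same name produce four joined rows, so the SUM doubles.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE revenue (name TEXT, amount INTEGER);
CREATE TABLE tasks (name TEXT, task TEXT);
INSERT INTO revenue VALUES ('A', 20), ('A', 20);
INSERT INTO tasks   VALUES ('A', 'Call'), ('A', 'Foo');
""")
# joining on a non-key multiplies rows: 2 revenue rows x 2 task rows = 4
inflated = conn.execute(
    "SELECT SUM(r.amount) FROM revenue r JOIN tasks t ON r.name = t.name"
).fetchone()[0]
correct = conn.execute("SELECT SUM(amount) FROM revenue").fetchone()[0]
print(inflated, correct)  # 80 40
```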
|
SQL Query Summing In Join
|
[
"",
"sql",
""
] |
I have table called `DataFile`. This keeps all my data records that get stored in a daily bases.
Data in this table looks like the below:
```
Id ConfigAccountId Shelf FileIdentifier Created
5356341 23 BSAS020006 C200094 28/01/2013
5356342 23 BSAS020006 C200095 28/01/2013
5356343 23 BSAS020006 C200096 28/01/2013
5356344 23 BSAS020006 C200097 28/01/2013
5356345 23 BSAS020006 C200098 28/01/2013
5356346 23 BSAS020006 C200099 28/01/2013
5356347 23 BSAS020006 C200100 28/01/2013
5356348 23 BSAS020006 C200101 28/01/2013
5356349 23 BSAS020006 C200102 28/02/2013
5356350 23 BSAS020006 C200103 28/02/2013
5356351 23 BSAS020006 C200104 28/02/2013
5356352 23 BSAS020006 C200105 28/02/2013
5356353 23 BSAS020006 C200106 28/02/2013
5356354 23 BSAS020006 C200107 28/02/2013
5356355 23 BSAS020007 C200108 28/02/2013
5356356 23 BSAS020007 C200109 28/02/2013
```
If you look at the data, the `shelf` column only changes number when a shelf is full; however, I need to know how many unique `shelf` codes I have for the month. The problem is that shelf `BSAS020006` runs over a two-month period, so if I run a distinct for `February` it will count shelf `BSAS020006` again (I hope I'm making sense). I need a unique `Shelf` count every month. So if a shelf number has already been reported on in `Jan` and it runs over into `Feb`, it must only show in the count for `Jan`.
This is the code I have so far:
```
select distinct Shelf
from DataFile
where Created Between convert(datetime, '2015-10-01 00:00:01', 102) and
convert(Datetime, '2015-10-31 23:59:59', 102)
```
My output must look like the following. Please note that a shelf can run over into a new month, and therefore must not be counted in that month as well.
```
Month Shelf Count
January 15
February 16
March 10
```
|
If I understand your problem correctly, I think a combination of distinct and min before counting will work for you:
```
SELECT COUNT(Shelf) AS UniqueCodesForShelf, FirstMonthForShelf AS Month
FROM
(
SELECT Shelf, MIN(AllMonthsForShelf) AS FirstMonthForShelf
FROM
(
SELECT DISTINCT
Shelf,
DATEADD(m, DATEDIFF(m, 0, Created), 0) AS AllMonthsForShelf
FROM DataFile
) AS T1
GROUP BY Shelf
) AS T2
GROUP BY FirstMonthForShelf
```
The output of the Month column is the date of the first day of the month.
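The same first-month-per-shelf logic can be sketched in SQLite (via Python), using `strftime('%Y-%m', ...)` in place of the `DATEADD`/`DATEDIFF` month-truncation trick; the data below is a trimmed stand-in for the question's table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datafile (shelf TEXT, created TEXT)")
conn.executemany("INSERT INTO datafile VALUES (?, ?)", [
    ("BSAS020006", "2013-01-28"),
    ("BSAS020006", "2013-02-28"),   # same shelf spills into February
    ("BSAS020007", "2013-02-28"),
])
# each shelf counts only in the first month it appears
rows = conn.execute("""
    SELECT first_month, COUNT(*) FROM (
        SELECT shelf, MIN(strftime('%Y-%m', created)) AS first_month
        FROM datafile GROUP BY shelf
    ) GROUP BY first_month ORDER BY first_month
""").fetchall()
print(rows)  # [('2013-01', 1), ('2013-02', 1)]
```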
|
Try this,
```
Declare @From Datetime='2013-02-01 00:00:00'
Declare @To Datetime='2013-02-28 23:59:59'
DECLARE @t TABLE (
Id INT
,ConfigAccountId INT
,Shelf VARCHAR(20)
,FileIdentifier VARCHAR(20)
,Created DATETIME
)
INSERT INTO @t
VALUES
(5356341, 23,'BSAS020006','C200094','01/28/2013')
,(5356342, 23,'BSAS020006','C200095','01/28/2013')
,(5356343, 23,'BSAS020006','C200096','01/28/2013')
,(5356344, 23,'BSAS020006','C200097','01/28/2013')
,(5356345, 23,'BSAS020006','C200098','01/28/2013')
,(5356346, 23,'BSAS020006','C200099','01/28/2013')
,(5356347, 23,'BSAS020006','C200100','01/28/2013')
,(5356348, 23,'BSAS020006','C200101','01/28/2013')
,(5356349, 23,'BSAS020006','C200102','02/28/2013')
,(5356350, 23,'BSAS020006','C200103','02/28/2013')
,(5356351, 23,'BSAS020006','C200104','02/28/2013')
,(5356352, 23,'BSAS020006','C200105','02/28/2013')
,(5356353, 23,'BSAS020006','C200106','02/28/2013')
,(5356354, 23,'BSAS020006','C200107','02/28/2013')
,(5356355, 23,'BSAS020007','C200108','02/28/2013')
,(5356356, 23,'BSAS020007','C200109','02/28/2013')
SELECT *
FROM @t A
WHERE Created BETWEEN @From
AND @To
AND NOT EXISTS (
SELECT Shelf
FROM @t B
WHERE B.Shelf = A.Shelf
AND Created NOT BETWEEN @From
AND @To
)
```
|
Find Distinct Records over a period of a month
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Where I work, I have recently been told that using distinct in your queries is the sign of a bad programmer. So I am wondering: I guess the only way to avoid this function is to use a **group by**.
It was my understanding that the distinct function works very similarly to a group by, except in how it's read. A distinct function checks each individual selection criterion, whereas a group by does the same thing as a whole.
Keep in mind I only do **reporting**. I do not create/alter the data. So my question is: for best practices, should I be using distinct or group by? If neither, is there an alternative? Maybe the group by should be used in more complex queries than my non-real example here, but you get the idea. I could not find an answer that really explained why or why not I should use distinct in my queries.
```
select distinct
spriden_user_id as "ID",
spriden_last_name as "last",
spriden_first_name as "first",
spriden_mi_name as "MI",
spraddr_street_line1 as "Street",
spraddr_street_line2 as "Street2",
spraddr_city as "city",
spraddr_stat_code as "State",
spraddr_zip as "zip"
from spriden, spraddr
where spriden_user_id = spraddr_id
and spraddr_mail_type = 'MA'
```
**VS**
```
select
spriden_user_id as "ID",
spriden_last_name as "last",
spriden_first_name as "first",
spriden_mi_name as "MI",
spraddr_street_line1 as "Street",
spraddr_street_line2 as "Street2",
spraddr_city as "city",
spraddr_stat_code as "State",
spraddr_zip as "zip"
from spriden, spraddr
where spriden_user_id = spraddr_id
and spraddr_mail_type = 'MA'
group by "ID","last","first","MI","Street","Street2","city","State","zip"
```
|
Databases are smart enough to recognize what you mean. I expect both of your queries to perform equally well. It is important for someone else maintaining your query to know what you meant. If you really meant to retrieve distinct records, use `DISTINCT`. If your intention was to do aggregation, use `GROUP BY`.
Take a look at [this question](https://stackoverflow.com/questions/164319/is-there-any-difference-between-group-by-and-distinct). There are some nice answers that might help.
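A quick runnable illustration of the equivalence (SQLite via Python, toy data): both forms return exactly the same rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (city TEXT)")
conn.executemany("INSERT INTO people VALUES (?)", [("Mesa",), ("Mesa",), ("Tempe",)])
distinct_rows = conn.execute("SELECT DISTINCT city FROM people ORDER BY city").fetchall()
grouped_rows  = conn.execute("SELECT city FROM people GROUP BY city ORDER BY city").fetchall()
print(distinct_rows == grouped_rows)  # True
```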
|
The answer provided by @zedfoxus is useful to understand the context.
However, I don't believe your query should require distinct records if the data is designed correctly.
It appears you are selecting the primary key of table `spriden`, so all that data *should* be unique. You're also joining onto the `spraddr` table; does that table really contain valid duplicate data? Or is there perhaps an additional join criterium that's required to filter out those duplicates?
This is why I get nervous about use of "`distinct`" - the `spraddr` table may include additional columns which you should use to filter out data, and "`distinct`" may be hiding that.
Also, you may be generating a massive result set which needs to be filtered by the "distinct" clause, which can cause performance issues. For instance, there may be 1 million rows in `spraddr` for each row in `spriden`, and you should use the "is\_current" flag to find the 2 or 3 "real" ones.
Finally, I get nervous when I see "group by" used as a substitute for distinct, not because it's "wrong", but because stylistically, I believe group by should be used for aggregate functions. That's just a personal preference.
|
Should I use distinct in my queries
|
[
"",
"sql",
"oracle",
"distinct",
""
] |
I have a table and I want to check the datetime format of its records against `YYYYMMDD,HH24MISS`. If the datetime format of a record is incorrect, I want to write an error message. How can I write a function or procedure for this in PL/SQL?
|
You can write a function like this:
```
CREATE OR REPLACE FUNCTION CheckDateString(str IN VARCHAR2) RETURN DATE IS
BEGIN
RETURN TO_DATE(str,'YYYYMMDD,HH24MISS');
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END CheckDateString;
```
and use it like this
```
SELECT *
FROM my_table
WHERE CheckDateString(DATE_STRING) IS NULL
AND DATE_STRING IS NOT NULL;
```
Of course, the next action point would be to correct the wrong values and change the data type of this column to `DATE` or `TIMESTAMP`, respectively.
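The same catch-the-parse-error idea translates to other stacks; here is a Python stand-in (not Oracle PL/SQL) using the equivalent `strptime` format, where `None` marks a bad value just like the function above:

```python
from datetime import datetime

def check_date_string(s):
    """Return the parsed datetime for a 'YYYYMMDD,HH24MISS' string, else None."""
    try:
        return datetime.strptime(s, "%Y%m%d,%H%M%S")
    except (ValueError, TypeError):
        return None

print(check_date_string("20151110,134500"))  # parses fine
print(check_date_string("2015-11-10"))       # None -> flag as bad
```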
|
In case your column is VARCHAR2 and you need to check that the values inside it can be transformed to date using your desired format, this could be a solution:
```
declare
v_foo_date date;
begin
for r_date in (
select date_col from my_table
) loop
begin
v_foo_date := to_date(r_date.date_col, 'YYYYMMDD,HH24MISS');
exception when others then
dbms_output.put_line('error in validating value ' || r_date.date_col);
end;
end loop;
end;
```
|
How can I check the format of my date?
|
[
"",
"sql",
"oracle",
"date",
"plsql",
""
] |
I have data on locations that includes a Location ID and a set of three 0/1 flags indicating whether the latitude, longitude, or address of a location has changed, as well as the month end in which the change occurred.
So I am looking at something like this:
```
+------------+-------------+--------------+---------------+---------------------+
| LOCATIONID | XCOORDHANGE | YCOORDCHANGE | ADDRESSCHANGE | REPORTPERIOD |
+------------+-------------+--------------+---------------+---------------------+
| 1 | 0 | 0 | 1 | 2010-01-31 00:00:00 |
+------------+-------------+--------------+---------------+---------------------+
| 2 | 1 | 1 | 1 | 2010-03-31 00:00:00 |
+------------+-------------+--------------+---------------+---------------------+
| 1 | 1 | 1 | 0 | 2010-08-31 00:00:00 |
+------------+-------------+--------------+---------------+---------------------+
```
I am tasked with identifying locations that have moved. A move is defined as either x or y coordinate changes AND an address change (sometimes locations are re-spotted and the coordinates change but the address doesn't change, and sometimes addresses are changed without subsequent coordinate changes, and I am not interested in these sites).
Identifying when all 3 flags are set to 1 is easy enough. The issue is that address and coordinate changes don't always come through at the same time. Location 1, for example, shows the address change in 1/31/2010 but the coordinate change in 8/31/2010. I need to look at each record and identify if the "move" criteria is met within a year of the first change. For location 1 in my example above I would consider it a "move" if the x and/or y coordinate changes came through up to 1 year from the address change (that is to say the criteria are met within 1 year of each other). Another wrinkle added in is a location can move multiple times within the 4 year period I am investigating. I am doing this for 1/31/2010 to 12/31/2014.
My first attempt was to use `ROW_NUMBER() OVER (PARTITION BY LOCATIONID ORDER BY REPORTPERIOD ASC) as rn` and use a self-join on `a.rn = b.rn + 1` to link one record to the next, but this ignores locations that have moved multiple times.
The end goal is to add a column `MEETSREQ` which will be a `bit` with a 1 indicating that the location had a coordinate change and address change and these changes occurred within 1 year of each other.
Output would look something like this
```
+------------+-------------+--------------+---------------+---------------------+---------+
| LOCATIONID | XCOORDHANGE | YCOORDCHANGE | ADDRESSCHANGE | REPORTPERIOD | MEETREQ |
+------------+-------------+--------------+---------------+---------------------+---------+
| 1 | 0 | 0 | 1 | 2010-01-31 00:00:00 | 1 |
+------------+-------------+--------------+---------------+---------------------+---------+
| 2 | 1 | 1 | 1 | 2010-03-31 00:00:00 | 1 |
+------------+-------------+--------------+---------------+---------------------+---------+
| 1 | 1 | 1 | 0 | 2010-08-31 00:00:00 | 0 |
+------------+-------------+--------------+---------------+---------------------+---------+
| 3 | 0 | 0 | 1 | 2011-02-28 00:00:00 | 0 |
+------------+-------------+--------------+---------------+---------------------+---------+
| 4 | 1 | 1 | 0 | 2011-03-31 00:00:00 | 0 |
+------------+-------------+--------------+---------------+---------------------+---------+
```
This is SQL Server 2008 R2. Thanks for your time, I hope I have added enough clarity. I can provide additional details if necessary.
|
You could do something like this. Note that although it's an "evil" cursor, I personally feel that when you're performing complex business logic, it keeps things clear.
```
DECLARE @LOCATIONID INT
DECLARE @XCOORDHANGE INT
DECLARE @YCOORDCHANGE INT
DECLARE @ADDRESSCHANGE INT
DECLARE @REPORTPERIOD DATETIME
CREATE TABLE #Temp1 ( LOCATIONID INT, HASMOVED BIT );
-- find all locations that have an address change
DECLARE db_cursor CURSOR FOR
SELECT LOCATIONID, XCOORDHANGE, YCOORDCHANGE, ADDRESSCHANGE, REPORTPERIOD
FROM [TABLENAME]
WHERE ADDRESSCHANGE = 1
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @LOCATIONID, @XCOORDHANGE, @YCOORDCHANGE, @ADDRESSCHANGE, @REPORTPERIOD
WHILE @@FETCH_STATUS = 0
BEGIN
-- find any other occurrence of this location within a year, excluding any we've already looked at
-- and must have an x or y coord change
IF EXISTS(SELECT 0 FROM [TABLENAME] WHERE LOCATIONID = @LOCATIONID
AND LOCATIONID NOT IN(SELECT LOCATIONID FROM #Temp1)
AND (XCOORDHANGE = 1 OR YCOORDCHANGE = 1)
AND REPORTPERIOD BETWEEN DATEADD(year, -1, @REPORTPERIOD) AND DATEADD(year, 1, @REPORTPERIOD)
)
INSERT INTO #Temp1 (LOCATIONID, HASMOVED) VALUES (@LOCATIONID, 1)
ELSE
INSERT INTO #Temp1 (LOCATIONID, HASMOVED) VALUES (@LOCATIONID, 0)
FETCH NEXT FROM db_cursor INTO @LOCATIONID, @XCOORDHANGE, @YCOORDCHANGE, @ADDRESSCHANGE, @REPORTPERIOD
END
CLOSE db_cursor
DEALLOCATE db_cursor
SELECT LOCATIONID, HASMOVED FROM #Temp1
```
You could join on your existing [TABLENAME] at the end if you like which would give you the existing table including the HasMoved column.
This may not be the exact logic you specify but it should give you the general idea of my suggested approach.
|
Since you are only concerned if (x or y) and address is not 0 for a period, I used `SUM` on them in the inner query
```
SELECT LocationID
,SUM(Xcoord) AS x
,SUM(Ycoord) AS y
,SUM(Address) AS a
FROM myTable
WHERE Period BETWEEN '2010-01-01' AND '2010-12-31'
GROUP BY LocationID
```
Then included a column using `CASE` on an outer query
```
SELECT LocationID
,(CASE WHEN (x > 0 OR y > 0) AND a > 0 THEN 1 ELSE 0 END) AS MeetsReq
FROM (
SELECT LocationID
,SUM(Xcoord) AS x
,SUM(Ycoord) AS y
,SUM(Address) AS a
FROM myTable
WHERE Period BETWEEN '2010-01-01' AND '2010-12-31'
GROUP BY LocationID
) AS isrc
```
Then Select from your base table, left joined to the sub-query
Change NULL value of MeetsReq to 0
```
/* This is the final query.
The 2 queries above are included here,
and was just separated for explanation purposes */
SELECT main.*, COALESCE(src.MeetsReq, 0) AS MeetsReq
FROM myTable AS main
LEFT OUTER JOIN (
SELECT LocationID
,(CASE WHEN (x > 0 OR y > 0) AND a > 0 THEN 1 ELSE 0 END) AS MeetsReq
FROM (
SELECT LocationID
,SUM(Xcoord) AS x
,SUM(Ycoord) AS y
,SUM(Address) AS a
FROM myTable
WHERE Period BETWEEN '2010-01-01' AND '2010-12-31'
GROUP BY LocationID
) AS isrc
) AS src ON main.LocationID = src.LocationID
```
Though if a location is tagged as 1 on MeetsReq, it will be the same for all records of that location.
|
How do I identify records that have certain flags equal to 1 within a year of each other?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table: `Employee`
```
columns:
1. id
2. name
3. city1
4. city2
```
I want to `GROUP BY` the combination of `city1` and `city2` so that e.g.
* an employee with city1: `'Bangalore'` and city2: `'Chennai'`
* and another employee with city1: `'Chennai'` and city2: `'Bangalore'`
comes in a single group when done `GROUP BY`.
Help me construct this SQL.
|
The easiest way is to use `least()` and `greatest()`:
```
select least(city1, city2) as city1, greatest(city1, city2) as city2, count(*)
from employees
group by least(city1, city2), greatest(city1, city2);
```
The logic is slightly more complex if you have `NULL` values. The question says nothing about `NULL` values.
Note:
This answer was edited to use:
```
group by city1, city2
```
Although technically correct, I removed the edit for two reasons:
1. `city1` and `city2` are ambiguous, because they are both the names of columns in the data and column aliases.
2. Not all databases support column aliases in the `group by` clause.
The first is the primary reason. If the column aliases were different from names in the tables, then the alternative formulation would be fine. Otherwise, I consider the code to be confusing and hence less maintainable and more prone to error.
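SQLite's two-argument scalar `MIN`/`MAX` behave like `least()`/`greatest()`, so the technique can be demonstrated from Python (toy data; column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (city1 TEXT, city2 TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)", [
    ("Bangalore", "Chennai"),
    ("Chennai", "Bangalore"),   # same pair, reversed order
])
# scalar MIN/MAX normalize the pair order, so both rows land in one group
rows = conn.execute("""
    SELECT MIN(city1, city2), MAX(city1, city2), COUNT(*)
    FROM employees
    GROUP BY MIN(city1, city2), MAX(city1, city2)
""").fetchall()
print(rows)  # [('Bangalore', 'Chennai', 2)]
```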
|
You can build an ordered key from your cities and group on it, using a `CASE WHEN`:
```
SELECT
    CityKey,
    count(*) AS PairCount
FROM
    (SELECT
        CASE WHEN City1 > City2 THEN Concat(City1,City2)
             ELSE Concat(City2,City1)
        END AS CityKey,
        Id,
        name
    FROM Sample) WithKeys
GROUP BY CityKey;
```
|
mysql group by on unique combination of 2 columns
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I want to run a special request on my database (PostgreSQL v9.4.5), but I can't manage to do it.
To simplify, let's say I have the following table **AvgTemperatures**, representing different averages of temperature taken in different cities, calculated over different lengths of time (counted in months):
```
id | city | avg | months
----+-----------+------+--------
1 | New-York | 20 | 3 <--- average temperate over the last 3 months
2 | New-York | 19 | 6 <--- average temperate over the last 6 months
3 | New-York | 15 | 12 <--- etc
4 | New-York | 15 | 24
5 | Boston | 13 | 3
6 | Boston | 18 | 8
7 | Boston | 17 | 12
8 | Boston | 16 | 15
9 | Chicago | 12 | 2
10 | Chicago | 14 | 12
11 | Miami | 28 | 1
12 | Miami | 25 | 4
13 | Miami | 21 | 12
14 | Miami | 22 | 15
15 | Miami | 20 | 24
```
Now, imagine that I want to select all the rows concerning the measures in a city where at least one average has been over 19 degrees. In this case I want:
```
id | city | avg | months
----+-----------+------+--------
1 | New-York | 20 | 3
2 | New-York | 19 | 6
3 | New-York | 15 | 12
4 | New-York | 15 | 24
11 | Miami | 28 | 1
12 | Miami | 25 | 4
13 | Miami | 21 | 12
14 | Miami | 22 | 15
15 | Miami | 20 | 24
```
I could do something like:
```
SELECT *
FROM AvgTemperatures
WHERE MIN(avg) OVER (PARTITION BY city) > 16
```
But :
```
********** Erreur **********
ERROR: window functions not allowed in WHERE clause
```
What's more, I cannot use `GROUP BY` as in:
```
SELECT *
FROM AvtTemperatures
GROUP BY city
HAVING MIN(avg) > 16
```
because I will lose information due to the aggregation (by the way this query is not valid because of the "SELECT \*").
I'm pretty sure I can use `OVER (PARTITION BY ...)` to solve that, but I don't know how. Does someone have an idea?
|
**[All-at-once operation:](http://social.technet.microsoft.com/wiki/contents/articles/20724.all-at-once-operations-in-t-sql.aspx#All-at-Once)**
> "All-at-Once Operations" means that all expressions in the same
> logical query process phase are evaluated logically at the same time.
And great chapter **Impact on Window Functions**:
Suppose you have:
```
CREATE TABLE Test ( Id INT) ;
INSERT INTO Test VALUES ( 1001 ), ( 1002 ) ;
SELECT Id
FROM Test
WHERE Id = 1002
AND ROW_NUMBER() OVER(ORDER BY Id) = 1;
```
> **All-at-Once operations tell us these two conditions evaluated logically at the same point of time.** Therefore, SQL Server can
> evaluate conditions in WHERE clause in arbitrary order, based on
> estimated execution plan. So the main question here is which condition
> evaluates first.
**Case 1:**
`If ( Id = 1002 ) is first, then if ( ROW_NUMBER() OVER(ORDER BY Id) = 1 )`
Result: 1002
**Case 2:**
`If ( ROW_NUMBER() OVER(ORDER BY Id) = 1 ), then check if ( Id = 1002 )`
Result: empty
> **So we have a paradox.**
> This example shows why we cannot use Window Functions in WHERE clause.
> You can think more about this and find why Window Functions are
> allowed to be used just in **SELECT** and **ORDER BY** clauses!
To get what you want you can wrap windowed function with `CTE/subquery` as in [Gordon answer](https://stackoverflow.com/a/33629600/5070879):
```
;WITH cte AS
(
SELECT t.*, MAX(AVG) OVER (PARTITION BY city) AS average
FROM avgTemperatures t
)
SELECT *
FROM cte
where average > 19
ORDER BY id;
```
**[db<>fiddle demo](https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=e700e8854f617482e779e13478c75776)**
Output:
```
╔═════╦══════════╦═════╦═════════╗
║ id ║ city ║ avg ║ months ║
╠═════╬══════════╬═════╬═════════╣
║ 1 ║ New-York ║ 20 ║ 3 ║
║ 2 ║ New-York ║ 19 ║ 6 ║
║ 3 ║ New-York ║ 15 ║ 12 ║
║ 4 ║ New-York ║ 15 ║ 24 ║
║ 11 ║ Miami ║ 28 ║ 1 ║
║ 12 ║ Miami ║ 25 ║ 4 ║
║ 13 ║ Miami ║ 21 ║ 12 ║
║ 14 ║ Miami ║ 22 ║ 15 ║
║ 15 ║ Miami ║ 20 ║ 24 ║
╚═════╩══════════╩═════╩═════════╝
```
|
The simplest solution is to use the [`bool_or` aggregate function](http://www.postgresql.org/docs/current/static/functions-aggregate.html)
```
select id, city, avg, months
from avttemperatures
where city in (
select city
from avttemperatures
group by 1
having bool_or(avg > 19)
)
order by 2, 4
;
id | city | avg | months
----+----------+-----+--------
11 | Miami | 28 | 1
12 | Miami | 25 | 4
13 | Miami | 21 | 12
14 | Miami | 22 | 15
15 | Miami | 20 | 24
1 | New-York | 20 | 3
2 | New-York | 19 | 6
3 | New-York | 15 | 12
4 | New-York | 15 | 24
```
The test table:
```
create table avttemperatures (
id int, city text, avg int, months int
);
insert into avttemperatures (id, city, avg, months) values
( 1,'New-York',20,3),
( 2,'New-York',19,6),
( 3,'New-York',15,12),
( 4,'New-York',15,24),
( 5,'Boston',13,3),
( 6,'Boston',18,8),
( 7,'Boston',17,12),
( 8,'Boston',16,15),
( 9,'Chicago',12,2),
( 10,'Chicago',14,12),
( 11,'Miami',28,1),
( 12,'Miami',25,4),
( 13,'Miami',21,12),
( 14,'Miami',22,15),
( 15,'Miami',20,24);
```
|
SQL Condition on Window function
|
[
"",
"sql",
"postgresql",
"window-functions",
""
] |
I have a query where I want to display the owner and the number of times they took the car, bus or train to work.
so the table should look like this:
```
Owner | Car | Bus | Train
-------------------------
Joe | 1 | 2 | 4
```
This is my query:
```
Select owner, vehicle
From MyTable
INNER JOIN(select
count(case when vehicle = 'Car' then 1 else 0 end) AS [Car],
count(case when vehicle = 'Bus' then 1 else 0 end) AS [Bus],
count(case when vehicle = 'Train' then 1 else 0 end) AS [Train]
from dbo.MyTable
where
YEAR([CreatedOn]) = 2015
group by
vehicle)
```
I'm getting an incorrect syntax error.
|
First, use `sum()` rather than `count()`. Second, you don't need a subquery. Third, you need to group by `owner`, not `vehicle`:
```
Select owner,
sum(case when vehicle = 'Car' then 1 else 0 end) AS [Car],
sum(case when vehicle = 'Bus' then 1 else 0 end) AS [Bus],
sum(case when vehicle = 'Train' then 1 else 0 end) AS [Train]
From MyTable
where YEAR([CreatedOn]) = 2015
group by owner;
```
You can use `count()`, but it counts non-NULL values, so it is misleading here: in your case the conditional logic is effectively ignored, because both branches (`1` and `0`) return non-NULL values.
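A runnable illustration of why `COUNT` over a `CASE ... ELSE 0 END` misleads (SQLite via Python, toy data): the `0` branch is still non-NULL, so `COUNT` counts every row while `SUM` counts only the matches.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (vehicle TEXT)")
conn.executemany("INSERT INTO trips VALUES (?)", [("Car",), ("Bus",), ("Bus",)])
wrong, right = conn.execute("""
    SELECT COUNT(CASE WHEN vehicle = 'Car' THEN 1 ELSE 0 END),
           SUM(CASE WHEN vehicle = 'Car' THEN 1 ELSE 0 END)
    FROM trips
""").fetchone()
print(wrong, right)  # 3 1 -- COUNT sees the non-NULL 0s too
```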
EDIT:
To get the rest, use the same idea, just change the condition:
```
sum(case when vehicle not in ('Car', 'Bus', 'Train') then 1 else 0 end) AS Others
```
|
Try this:
```
select owner
,count(case when vehicle = 'Car' then 1 end) as [Car]
,count(case when vehicle = 'Bus' then 1 end) as [Bus]
,count(case when vehicle = 'Train' then 1 end) as [Train]
from dbo.MyTable
where year([CreatedOn]) = 2015
group by owner
```
|
Sql - Count table query error
|
[
"",
"sql",
"sql-server",
""
] |
I would like to know if it's possible to create a not-null constraint over either column `name` OR column `surname`, e.g.:
```
create table football_players (
id VARCHAR(36) PRIMARY KEY,
name VARCHAR(20),
surname VARCHAR(20)
);
```
Sometimes we know the name or the surname but not both. I don't want a player without a name in my database.
|
The standard SQL method would be:
```
create table football_players (
id VARCHAR(36) PRIMARY KEY,
name VARCHAR(20),
surname VARCHAR(20),
constraint chk_name_surname check (name is not null or surname is not null)
);
```
This does not work in MySQL (the question was not originally tagged MySQL). Doing this in MySQL requires a trigger.
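As a concrete illustration in a database that does enforce `CHECK` constraints, here is the same constraint in SQLite, driven from Python; inserting a row with both columns NULL is rejected:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE football_players (
    id TEXT PRIMARY KEY,
    name TEXT,
    surname TEXT,
    CHECK (name IS NOT NULL OR surname IS NOT NULL)
)""")
conn.execute("INSERT INTO football_players VALUES ('1', 'Lionel', NULL)")  # ok
try:
    conn.execute("INSERT INTO football_players VALUES ('2', NULL, NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the CHECK constraint fired
print(rejected)  # True
```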
|
MySQL doesn't support check constraints, but you can do something similar [with a trigger](https://stackoverflow.com/q/9734920/4794).
|
How define column a OR column b NOT NULL?
|
[
"",
"mysql",
"sql",
"check-constraints",
""
] |
I want to check whether all the tables in my database have a clustered index, on which type of column (e.g. int), and whether the clustered index is on a single or multiple columns. I was not able to figure out which DMVs or views to retrieve this information from. Please help.
|
The SQL below will give you a list of the clustered indexes and table names with a few more details. You can modify this to get to your results.
```
SELECT 'ClusteredIndexName' = SI.Name,
'TableName' = SO.Name,
'ColumCount' = IK.IndexColumnCount,
'IsUnique' = CASE WHEN SI.is_unique = 0 THEN 'N' ELSE 'Y' END
,SI.type_desc
FROM SYS.INDEXES SI
JOIN SYS.OBJECTS SO -- Joining on SYS.OBJECTS to get the TableName
ON SO.OBJECT_ID = SI.Object_ID
JOIN ( -- Joining on a Derived view to work out how many columns exist on the clustered index
SELECT 'IndexColumnCount' = MAX(KEY_ORDINAL), OBJECT_ID, index_id
FROM SYS.INDEX_COLUMNS
GROUP BY OBJECT_ID, index_id
) AS IK
ON IK.object_id = SI.Object_ID
AND IK.index_id = SI.index_id
WHERE SI.type_desc = 'CLUSTERED' and
SI.OBJECT_ID NOT IN (SELECT OBJECT_ID
FROM SYS.ALL_OBJECTS
WHERE TYPE = 'S') -- Not system tables
AND SO.Type = 'U'
AND SO.is_ms_shipped = 0
```
|
SELECT \* FROM sys.indexes (I think; it's been a while since I used SQL Server).
|
DMVs to get index configuration information
|
[
"",
"sql",
"sql-server",
"indexing",
""
] |
I have a table like this:
```
Employee
| id | name | department |
| 01 | Joe | Network |
| 02 | Sam | Quality |
| 03 | Mark | Network |
| 04 | Pete | Quality |
```
And other table like this:
```
Hours
| id | date | hours |
| 01 | 1/1/11 | 7 |
| 02 | 1/2/11 | 15 |
| 03 | 1/5/11 | 13 |
| 01 | 1/7/11 | 5 |
| 01 | 1/11/11 | 9 |
| 02 | 1/11/11 | 11 |
| 03 | 1/12/11 | 14 |
| 04 | 1/12/11 | 14 |
```
I want to query this: show the person who achieves the max total hours in each department (sorted from the max to the min):
```
| id | Name | Department | totalhours |
| 03 | Mark | Network | 27 |
| 02 | Sam | Quality | 26 |
```
My current code doesn't work; it just shows me the total hours of each person:
```
SELECT e.name, e.department, SUM(h.hours) AS total
FROM employee e JOIN hours h ON e.id = h.id
GROUP BY e.name, e.department
ORDER BY total DESC;
```
What do I need to do?
I tried something like this...
```
SELECT e.name, e.department, t.total
FROM (
SELECT e2.department, SUM(h.hours) AS total
FROM employee e2 JOIN hours h ON e2.id=h.id
GROUP BY e2.department, h.hours
) t JOIN employee e JOIN hours h ON e.id=h.id ON e.department = t.department AND t.total = h.hours
ORDER BY t.total DESC;
```
But this shows crazy results (I think my code is crazy lol).
PLEASE HELP!!
Thanks!!
|
```
select id, name, department, total
from
(
SELECT e.id, e.name, e.department, t.total,
rank() over(partition by e.department order by t.total desc) as rnk
FROM (
SELECT id, SUM(hours) AS total
FROM hours
GROUP BY id
) t JOIN employee e ON e.id = t.id
) t
where rnk = 1
order by total desc
```
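To sanity-check the query above, here's a self-contained sqlite3 sketch (SQLite 3.25+ supports the same window functions; the table and column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee(id TEXT, name TEXT, department TEXT);
CREATE TABLE hours(id TEXT, date TEXT, hours INTEGER);
INSERT INTO employee VALUES
  ('01','Joe','Network'), ('02','Sam','Quality'),
  ('03','Mark','Network'), ('04','Pete','Quality');
INSERT INTO hours VALUES
  ('01','1/1/11',7), ('02','1/2/11',15), ('03','1/5/11',13),
  ('01','1/7/11',5), ('01','1/11/11',9), ('02','1/11/11',11),
  ('03','1/12/11',14), ('04','1/12/11',14);
""")
rows = con.execute("""
SELECT id, name, department, total
FROM (
  SELECT e.id, e.name, e.department, t.total,
         RANK() OVER (PARTITION BY e.department ORDER BY t.total DESC) AS rnk
  FROM (SELECT id, SUM(hours) AS total FROM hours GROUP BY id) t
  JOIN employee e ON e.id = t.id
)
WHERE rnk = 1
ORDER BY total DESC
""").fetchall()
print(rows)  # [('03', 'Mark', 'Network', 27), ('02', 'Sam', 'Quality', 26)]
```

This reproduces exactly the two rows the question asked for.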
|
This will show more than one row for a department if there is a tie, which is probably what you do want.
```
SELECT EachEmp.name, EachEmp.department, MAXES.maxTotal AS total
FROM (SELECT e.name, -- sum up each employees totals
e.department,
SUM(h.hours) AS total
FROM employee e
JOIN hours h ON e.id = h.id
GROUP BY e.name, e.department) EachEmp
JOIN
(SELECT department, max(total) maxTotal -- get the maximum emp total for this dept
FROM ( SELECT e.department, SUM(h.hours) AS total
FROM employee e
JOIN hours h ON e.id = h.id
GROUP BY e.name, e.department) AS TOTAL -- note we are grouping by e.name
GROUP BY department) AS MAXES
ON EachEmp.department = MAXES.department
AND EachEmp.total = MAXES.maxTotal
ORDER BY MAXES.maxTotal DESC;
```
|
SQL want to avoid repeated rows
|
[
"",
"sql",
"database",
"postgresql",
""
] |
I am trying to make two columns auto-increment, but the column `[user_id] as id + 0 PRIMARY KEY NOT NULL` shows an error saying
> Only UNIQUE or PRIMARY KEY constraints can be created on computed columns
What I am trying to do is, if id = 1, make user\_id= 1 as well.
```
CREATE TABLE [dbo.TBL_TXN_USER]
(
[id] int NOT NULL IDENTITY(1,1),
[user_id] as id + 0 PRIMARY KEY NOT NULL ,
[username] varchar(150) NOT NULL,
[fullname] varchar(150) NOT NUll,
[pwd] varchar(50) NOT NUll,
[email] varchar(150) NOT NULL,
[mobile] varchar(150) NOT NULL,
[designation] varchar(150) NOT NULL,
[deleted] int NULL,
[created_date] datetime NULL,
[creator_user_id] int NULL,
[changed_date] datetime NULL,
[changer_user_id] int NULL,
[add_content] int NULL,
[edit_content] int NULL,
[delete_content] int NULL,
[manage_user] int NULL,
[view_log] int NULL,
)
```
What is wrong in `[user_id]`? How to solve it?
|
The error message appears because you put the `NOT NULL` constraint on the computed column.
On SQL Server 2012 the complete error message is:
> Only UNIQUE or PRIMARY KEY constraints can be created on computed
> columns, while CHECK, FOREIGN KEY, and NOT NULL constraints require
> that computed columns be persisted.
Here is a working script (I changed the table name):
```
CREATE TABLE dbo.[TBL_TXN_USER]
(
[id] int NOT NULL IDENTITY(1,1),
[user_id] as id + 0 persisted not null primary key,
[username] varchar(150) NOT NULL,
[fullname] varchar(150) NOT NUll,
[pwd] varchar(50) NOT NUll,
[email] varchar(150) NOT NULL,
[mobile] varchar(150) NOT NULL,
[designation] varchar(150) NOT NULL,
[deleted] int NULL,
[created_date] datetime NULL,
[creator_user_id] int NULL,
[changed_date] datetime NULL,
[changer_user_id] int NULL,
[add_content] int NULL,
[edit_content] int NULL,
[delete_content] int NULL,
[manage_user] int NULL,
[view_log] int NULL,
);
GO
```
I have a couple of comments about the question:
- A calculated field with a fixed formula over static values, used as the primary key instead of the id itself, is a waste of resources: one of the two fields should not be there.
- A field with the name of a system function (user\_id) is something I would avoid at all costs.
- The question looks like an attempt to put in place a solution (the calculated field as id) for a hidden issue.
|
Sorry for my misunderstanding; so you want two auto-incrementing columns in one table. SQL Server does not accept that, so I am going to give you another option below:
```
CREATE TRIGGER [dbo].[insert_triger] ON [dbo].[TBL_TXN_USER]
FOR INSERT
AS
-- use the inserted pseudo-table so multi-row inserts are handled correctly
update t set [user_id] = i.id
from TBL_TXN_USER t
join inserted i on t.id = i.id
```
|
Can one table have two identity columns in SQL Server?
|
[
"",
"sql",
"sql-server",
""
] |
I’ve recently moved companies and they do something in SQL Server that I have not seen before. I am used to using a primary key column as an int that increments on inserts, very occasionally I've seen the use of guids but here they use both, see below.
```
PrimaryID | GUID | RestOfColumns
```
I can't get my head round why. When I asked, they told me it was another layer of protection against SQL injection. In stored procedures they use the guid to find the primary key ID and then use the id anyway; I personally cannot see the benefit.
My questions are:
* Is there any security risk to exposing the primary key in code?
* Are there any benefits to using the guid to get the primary key?
**EDIT:**
An example of the usage (pseudo code):
1. I need a contact.
2. Retrieve Contact Guid from QueryString.
3. Call SQL Server stored procedure to get contact.
4. In stored procedure: find the id in the contact table based on the guid.
5. Use the id to get relevant values in other tables, using the id as a foreign key.
6. Return data.
|
If somebody looks at your website's html source, they might see that they are submitting a form and passing their ID which is "1003". So from there, they could construct a form that passes the ID of "1002" and see somebody else's data. It's not exactly "SQL Injection", it's more like parameter-guessing.
But if they see their ID is a GUID, which just looks like random characters, they'd have a much harder time guessing what other valid IDs might be in your system.
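To illustrate the parameter-guessing point, here's a small Python sketch (my own illustration, not from the original answer): sequential integers are trivially enumerable, while a random version-4 UUID carries about 122 bits of entropy, so seeing one value tells you nothing about the others.

```python
import uuid

# Internal sequential keys: trivially guessable (next one is 1004).
sequential_ids = [1001, 1002, 1003]

# Public-facing random identifiers: one per internal key.
public_ids = {n: str(uuid.uuid4()) for n in sequential_ids}

# Each GUID is an opaque 36-character string like
# 'f47ac10b-58cc-4372-a567-0e02b2c3d479'; none reveals the others.
assert len(set(public_ids.values())) == len(sequential_ids)
assert all(len(g) == 36 for g in public_ids.values())
```

Note this mitigates ID enumeration only; it is not a substitute for real authorization checks on the server.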
|
The main benefit would be: We already have a bunch of code that uses PrimaryID, including ORM code that maps PrimaryID as the primary key column. The only other reason I can see to keep PrimaryID is for human readability.
Also, I should let you know that GUID doesn't always protect you from data scraping. You could very well be using a database that generates GUIDs in a predictable pattern (sequential or otherwise).
|
SQL Server Int primary key and Guid
|
[
"",
"sql",
".net",
"sql-server",
""
] |
I am trying to create a table for supplier groups, and when I try to create it, it throws an error.
Here's the code :
```
CREATE TABLE GROUPS_PLUS_SUPPLIERS
(
PRODUCT_GROUP_SUPPLIER_ID NUMBER(3),
GROUP_ID NUMBER(4),
GROUP_NAME VARCHAR2(255),
SUPPLIER_ID NUMBER(4),
CONSTRAINT PRODUCT_GROUP_SUPPLIER_ID_PK PRIMARY KEY (PRODUCT_GROUP_SUPPLIER_ID),
CONSTRAINT FK_SUPPLIER_ID FOREIGN KEY (SUPPLIER_ID) REFERENCES SUPPLIERS (SUPPLIER_ID)
);
CREATE TABLE PRODUCTS
(
PRODUCT_ID NUMBER (4),
PRODUCT_DESCRIPTION VARCHAR2 (255),
PRODUCT_SIZE VARCHAR2 (10),
PRODUCT_GROUP NUMBER (4),
PRODUCT_PRICE NUMBER(4),
NO_IN_STOCK NUMBER (4),
REORDER_LEVEL NUMBER (4),
CONSTRAINT PRODUCTS_ID_PK PRIMARY KEY (PRODUCT_ID),
CONSTRAINT FK_GROUP_PLUS_SUPPLIERS_ID FOREIGN KEY (PRODUCT_GROUP) REFERENCES GROUPS_PLUS_SUPPLIERS(GROUP_ID)
);
```
This is the error message I am getting :
> ORA-02270: no matching unique or primary key for this column-list
Here's what I am trying to add in group plus suppliers:
```
INSERT INTO GROUPS_PLUS_SUPPLIERS VALUES (1,705,'flavoured oil',5588);
INSERT INTO GROUPS_PLUS_SUPPLIERS VALUES (2,705,'flavoured oil',5509);
INSERT INTO GROUPS_PLUS_SUPPLIERS VALUES (3,800,'spice',5543);
INSERT INTO GROUPS_PLUS_SUPPLIERS VALUES (4,800,'spice',5579);
INSERT INTO GROUPS_PLUS_SUPPLIERS VALUES (5,800,'spice',5584);
```
Any help would be appreciated!
|
Since your `GROUP_ID` column is not unique, and foreign key constraints cannot be made to non-unique values (which single row does it relate to?), and given that your `GROUPS_PLUS_SUPPLIERS` table also uses `SUPPLIER_ID`, then you have to have a composite foreign key:
```
CREATE TABLE GROUPS_PLUS_SUPPLIERS (
PRODUCT_GROUP_SUPPLIER_ID NUMBER(3), -- better not to have this column
GROUP_ID NUMBER(4),
GROUP_NAME VARCHAR2(255), -- violates 2nd normal form!
SUPPLIER_ID NUMBER(4),
CONSTRAINT PRODUCT_GROUP_SUPPLIER_ID_PK PRIMARY KEY (PRODUCT_GROUP_SUPPLIER_ID),
CONSTRAINT FK_SUPPLIER_ID FOREIGN KEY (SUPPLIER_ID) REFERENCES SUPPLIERS (SUPPLIER_ID),
-- A new constraint
CONSTRAINT UQ_PRODUCT_GROUP_SUPPLIER_ID_GROUP_ID UNIQUE (SUPPLIER_ID, GROUP_ID)
);
CREATE TABLE PRODUCTS (
PRODUCT_ID NUMBER (4),
PRODUCT_DESCRIPTION VARCHAR2 (255),
PRODUCT_SIZE VARCHAR2 (10),
PRODUCT_GROUP NUMBER (4),
SUPPLIER_ID NUMBER (4), -- needed by the composite FK below
PRODUCT_PRICE NUMBER(4),
NO_IN_STOCK NUMBER (4),
REORDER_LEVEL NUMBER (4),
CONSTRAINT PRODUCTS_ID_PK PRIMARY KEY (PRODUCT_ID),
-- a changed constraint
CONSTRAINT FK_GROUPS_PLUS_SUPPLIERS_SUPPLIER_ID_GROUP_ID
FOREIGN KEY ( SUPPLIER_ID, PRODUCT_GROUP)
REFERENCES GROUPS_PLUS_SUPPLIERS(SUPPLIER_ID, GROUP_ID)
);
```
It's possible I could be off base in this recommendation, and the FK instead needs to point to the `GROUPS` table, which you need to create if it doesn't exist. It's also a possible solution to make your FK point to the `PRODUCT_GROUP_SUPPLIER_ID` column, but I promise you that doing that will create serious problems for you down the road, when you discover that to query your products table by group you will always be forced to join to another table. I predict quite confidently that you will deeply regret it if you do that.
There are also some serious issues with the database design.
1. Seeing your updated example for what you want in the `GROUPS_PLUS_SUPPLIERS` table, it is very bad for `GROUP_NAME` to be in that table, because this violates [second normal form](https://en.wikipedia.org/wiki/Second_normal_form). You need a `GROUPS` table with the `GROUP_ID` and the `GROUP_NAME` columns there.
2. The `GROUPS_PLUS_SUPPLIERS` table appears to be a many-to-many join table, and almost certainly doesn't need its own ID column. I promise that this is true 99% of the time, and that it's better for any other tables referring to this logical relation (a unique relationship between a `SUPPLIER_ID` and a `GROUP_ID`) to just use a composite key. You'll thank me down the road for this.
3. `NUMBER(4)` seems awfully low. Are you sure there will never, ever, ever be more than 9999 products? Even for suppliers this sounds too low. Save yourself a major headache later and make these reasonably large enough to accommodate a real-world enterprise-level scenario.
Also, please forgive me if I'm being uncomplimentary, but your naming scheme needs some work. I know some older versions of Oracle require all upper case and can't handle lower case, so I guess ignore that part if that's what you're working with.
1. Don't use all upper case with underscores. At *least* use Pascal Case, or all lower case with underscores. Better would be `GroupID` or `GroupId` and so on.
2. Name columns the same in all tables. Don't call it `GROUP_ID` in one table and `PRODUCT_GROUP` in another. Not doing so is, frankly, ridiculous and will lead to confusion and hatred for you by future developers. The next developer to work with this database after you had better not know your address!
3. `GROUPS_PLUS_SUPPLIERS` is unnecessarily wordy. Just use `SUPPLIERS_GROUPS`.
4. Don't put the table name before every conceivable column. Just don't. I suppose it's okay for column names that are common to many tables, such as `Description`, but don't do it for the others. (In fact, that's why in my own tables I stopped using `Descr` or `Description` entirely, and now just call the column the singular of the table name--although my tables are singular, too, e.g., the `ProductStatus` table would have name column `ProductStatus` instead of `ProductStatusDescription` or `Description` or some other monstrosity.)
Here's what I think a far more sensible naming scheme and database design would look like:
```
CREATE TABLE Groups (
GroupID NUMBER(4),
GroupName VARCHAR2(255), -- I would normally call this Group but that's reserved
CONSTRAINT PK_Groups PRIMARY KEY (GroupID),
CONSTRAINT UQ_Groups_GroupName UNIQUE (GroupName)
);
CREATE TABLE SupplierGroups (
GroupID NUMBER(4),
SupplierID NUMBER(4),
CONSTRAINT PK_SupplierGroups PRIMARY KEY (SupplierID, GroupID),
CONSTRAINT FK_SupplierID FOREIGN KEY (SupplierID) REFERENCES Suppliers (SupplierID),
CONSTRAINT FK_GroupID FOREIGN KEY (GroupID) REFERENCES Groups (GroupID)
);
CREATE TABLE Products (
ProductID NUMBER (4),
ProductDescription VARCHAR2 (255),
Size VARCHAR2 (10),
GroupID NUMBER (4),
SupplierID NUMBER (4), -- needed by the composite FK below
Price NUMBER(4),
NoInStock NUMBER (4),
ReorderLevel NUMBER (4),
CONSTRAINT PK_Products PRIMARY KEY (ProductID),
CONSTRAINT FK_Products_SupplierID_GroupID FOREIGN KEY (SupplierID, GroupID)
REFERENCES SupplierGroups (SupplierID, GroupID)
);
```
|
The error message says what you are missing. The foreign key to `GROUPS_PLUS_SUPPLIERS(GROUP_ID)` means you need to add a unique index on this column.
```
CREATE TABLE GROUPS_PLUS_SUPPLIERS
(
PRODUCT_GROUP_SUPPLIER_ID NUMBER(3),
GROUP_ID NUMBER(4),
GROUP_NAME VARCHAR2(255),
SUPPLIER_ID NUMBER(4),
CONSTRAINT PRODUCT_GROUP_SUPPLIER_ID_PK PRIMARY KEY (PRODUCT_GROUP_SUPPLIER_ID),
CONSTRAINT FK_SUPPLIER_ID FOREIGN KEY (SUPPLIER_ID) REFERENCES SUPPLIERS (SUPPLIER_ID),
CONSTRAINT AK_GROUP_ID UNIQUE(GROUP_ID)
);
```
|
SQL Primary key struggle
|
[
"",
"sql",
"oracle",
""
] |
This is my first question on this site. I want to know: if I have a table like this one in SQL
* Value
* a
* a
* a
* a
* a
* b
* b
* b
* c
* c
* a
* a
* b
* b
How can I generate a list of sequential numbers or ID's in a query in such a way that it will change according to the value column, this is what I want:
* Value; ID
* a; 1
* a ; 1
* a ; 1
* a ; 1
* a ; 1
* b ; 2
* b ; 2
* b ; 2
* c ; 3
* c ; 3
* a ; 4
* a ; 4
* b ; 5
* b ; 5
Thanks to you all in advance for your answers.
|
You can do this with a series of queries.
1. Start with a table containing an auto increment field and a text field, nothing else. Put no records in it.
2. Write an aggregate query against your table that groups by your value field.
3. Write an append query that appends the records from the query in step 2 into the table from step 1. The results of that query will be what you want.
|
Gordon Linoff is correct that, with no ordering, it is not possible to do exactly what you ask for.
But if you have an additional column to sort by, in SQL Server you can use DENSE\_RANK to assign the numbers:
```
--create table Test(col varchar(100))
--insert into Test values('a'),('a'),('a'),('b'),('b'),('c'),('d')
select col, DENSE_RANK() over (Order BY col)
from Test
```
It will produce
> ```
> a 1
> a 1
> a 1
> b 2
> b 2
> c 3
> d 4
> ```
Or alternatively there is always a loop or cursor loop option. This script seems to be producing your expected result:
```
/*
drop table Test
create table Test(ExcelRowId int not null identity primary key, data varchar(100) not null, idx int)
insert into Test(data) values('a'),('a'),('a'),('a'),('a'),('b'),('b'),('b'),('c'),('c'),('a'),('a'),('b'),('b')
*/
declare @idx int, @prev_data varchar(100), @data varchar(100)
set @idx = 0
declare cur cursor for select Data from Test order by ExcelRowId for update of idx
open cur
fetch next from cur into @data
while @@FETCH_STATUS = 0
begin
if @prev_data is null or @data <> @prev_data set @idx = @idx + 1
set @prev_data = @data
update Test set Idx = @Idx WHERE CURRENT OF cur
fetch next from cur into @data
end
close cur
deallocate cur
select * from test
```
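For reference, the run-numbering logic that the cursor implements (increment the counter every time the value changes) is easy to express outside SQL; here's a small Python sketch over the question's sample values:

```python
# The question's Value column, in ExcelRowId order.
values = ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b',
          'c', 'c', 'a', 'a', 'b', 'b']

ids, idx, prev = [], 0, None
for v in values:
    if v != prev:      # value changed: start a new run
        idx += 1
    ids.append(idx)
    prev = v

print(ids)  # [1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5]
```

On SQL Server 2012+ the same thing can be done set-based with `LAG` plus a running `SUM` over a window, avoiding the cursor entirely.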
|
Same index/ID based on another column value
|
[
"",
"sql",
"sql-server",
"ms-access",
""
] |
I've got this table `supp_props`:
```
supp_id | supp_dry_weight | supp_price | supp_date
--------+-----------------+------------+------------
22 | 88.00 | 27.50 | 2015-06-25 x delete
22 | 89.00 | 28.00 | 2015-10-18 < don't delete, while
22 | 89.00 | 29.50 | 2015-12-20 this row here is in the future
23 | 84.00 | 15.00 | 2015-06-23 x delete
23 | 42.50 | 7.50 | 2015-06-25 x delete
23 | 35.60 | 5.00 | 2015-06-29
24 | 89.00 | 18.20 | 2015-06-25
25 | 89.15 | 18.50 | 2015-08-05
26 | 89.00 | 28.30 | 2015-06-25
```
And I want to delete all rows that are 'old', but there are some conditions:
* a row must be older than two weeks to be deleted.
* if there is no row with a more current date it shouldn't be deleted.
* There may be rows that are in the future, if so, this row shouldn't be seen as a 'more current' date (for the above rule).
How can I achieve that with MySQL?
|
Thanks to all the great help and examples from Alex, Mike Brent and xQbert I've managed to create this query:
```
DELETE sp
FROM supp_props sp
LEFT JOIN (
SELECT supp_id, MAX( supp_date ) AS max_date
FROM supp_props
WHERE supp_date < NOW()
GROUP BY supp_id
) max
ON sp.supp_id = max.supp_id
WHERE sp.supp_date < ( DATE_SUB( CURDATE(), INTERVAL 2 week ) )
AND sp.supp_date <> max.max_date
```
Thank you all for your help!
|
This should do the trick:
```
DELETE sp FROM supp_props AS sp
LEFT JOIN (
SELECT supp_id, MAX(supp_date) AS max_date
FROM supp_props
WHERE supp_date <= DATE(NOW())
GROUP BY supp_id
) AS max
ON sp.supp_id = max.supp_id
AND sp.supp_date = max.max_date
WHERE max.supp_id IS NULL
AND sp.supp_date < DATE_SUB(NOW(), INTERVAL 2 WEEK)
```
What you are doing here is making a join against a subselect (aliased as `max`) that contains the highest date for each `supp_id` (while excluding rows where where the `supp_date` is in the future). By making a LEFT JOIN against that `max` subselect, you would "protect" all rows having the maximum date for the given `supp_id` by only deleting cases where the join result ends up having a `NULL` value for fields in `max`. This is achieved by the first part of the WHERE clause.
You then apply the condition that the record must also be older than two weeks as the second part of the WHERE clause.
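Here's a hedged sqlite3 sketch of the same keep-the-latest logic, using a correlated subquery instead of MySQL's multi-table DELETE, with a fixed "now" of 2015-10-25 so the result is deterministic (a subset of the question's sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE supp_props(supp_id INTEGER, supp_dry_weight REAL,
                        supp_price REAL, supp_date TEXT);
INSERT INTO supp_props VALUES
 (22, 88.00, 27.50, '2015-06-25'),
 (22, 89.00, 28.00, '2015-10-18'),
 (22, 89.00, 29.50, '2015-12-20'),
 (23, 84.00, 15.00, '2015-06-23'),
 (23, 42.50,  7.50, '2015-06-25'),
 (23, 35.60,  5.00, '2015-06-29');
""")
now = '2015-10-25'  # pretend "today" so the example is reproducible
con.execute("""
DELETE FROM supp_props
WHERE supp_date < date(?, '-14 days')            -- older than two weeks
  AND supp_date <> (SELECT MAX(p2.supp_date)     -- and not the newest
                    FROM supp_props p2           -- non-future row for
                    WHERE p2.supp_id = supp_props.supp_id
                      AND p2.supp_date < ?)      -- this supp_id
""", (now, now))
remaining = con.execute(
    "SELECT supp_id, supp_date FROM supp_props ORDER BY supp_id, supp_date"
).fetchall()
print(remaining)
# [(22, '2015-10-18'), (22, '2015-12-20'), (23, '2015-06-29')]
```

The surviving rows match the annotations in the question: the future row for supp_id 22 is ignored when choosing the "most current" date, and the newest past row per supplier is protected.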
|
Delete all rows but the latest date
|
[
"",
"mysql",
"sql",
"sql-delete",
""
] |
I am using Oracle SQL Developer, and I have to categorize some data by each `Date` from a table.
I used the `TO_CHAR` function to change the date format.
```
select to_char(txn_date, 'dd-mon-rrrr') my_day, some_columns
from mwt_wallet_transactions
order by my_day;
```
The query above works fine. But I want to filter it by a given date, like this:
```
select to_char( txn_date, 'dd-mon-rrrr') my_day, some_columns
from mwt_wallet_transactions
where my_day like '08-feb-2015'
order by my_day;
```
I searched for this on the internet; most answers suggested using the `CONVERT` function like this:
```
select convert (VARCHAR(10), txn_date , 23 ) my_day, some_columns
from mwt_wallet_transactions
where my_day like '08-feb-2015'
order by my_day;
```
But it doesn't work for me. Help me to solve this!
|
> where my\_day like '08-feb-2015'
I think you got confused between **SELECTION** and **FILTER PREDICATE**.
Also, remember, `'08-feb-2015'` is NOT a **DATE**; it is a **string**.
You want to filter the rows based on a DATE value. So, convert the literal on the R.H.S. into **DATE** using `TO_DATE` or use **ANSI Date literal** if you don't have a time portion.
Now, remember, a DATE has both date and time elements, so you need to:
* either use `TRUNC` on the date column to get rid of the time element,
* or use a **DATE range condition** for better **performance**, as it can use any **regular index** on the date column.
I am assuming `my_day` as the date column. Modify the filter as:
Using **ANSI Date literal**: fixed format `'YYYY-MM-DD'`
```
where my_day >= DATE '2015-02-08' and my_day < DATE '2015-02-09'
```
Or, `TO_DATE` with proper **format model**. Remember, `TO_DATE` is **NLS dependent**, so I have used `NLS_DATE_LANGUAGE` to make it NLS independent.
```
WHERE my_day >= TO_DATE('08-feb-2015','dd-mon-yyyy','NLS_DATE_LANGUAGE=american')
AND my_day < TO_DATE('09-feb-2015','dd-mon-yyyy','NLS_DATE_LANGUAGE=american')
```
Above, `my_day` is assumed to be the actual date column, not the column alias.
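The half-open range pattern works the same in any engine; here's a quick sqlite3 sketch, where ISO-8601 date strings compare lexicographically and stand in for Oracle's DATE comparisons:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mwt_wallet_transactions(txn_date TEXT)")
con.executemany("INSERT INTO mwt_wallet_transactions VALUES (?)", [
    ('2015-02-07 23:15:00',),
    ('2015-02-08 00:00:00',),  # midnight on the target day: included
    ('2015-02-08 18:40:00',),
    ('2015-02-09 00:00:00',),  # midnight the next day: excluded
])
# Half-open range [2015-02-08, 2015-02-09): catches every time of day
# on Feb 8 without touching the time portion of the column.
count = con.execute("""
    SELECT COUNT(*) FROM mwt_wallet_transactions
    WHERE txn_date >= '2015-02-08' AND txn_date < '2015-02-09'
""").fetchone()[0]
print(count)  # 2
```

Because the column itself is never wrapped in a function, an index on `txn_date` stays usable.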
|
You can't use select list column aliases in the `WHERE` clause. Use derived table to access it:
```
select * from
(
select *some_computed_value* as my_day, some_colomns
from mwt_wallet_transaction
)
where *my_day conditions*
order by my_day
```
|
How to categorize data for each date in sql?
|
[
"",
"sql",
"database",
"oracle",
"date",
"date-arithmetic",
""
] |
I have a table where I want to filter all rows that have Code, Life and TC equal to the results of a select query on the same table filtered by ID.
```
ID Code|Life|TC|PORT
62 XX101 1 1 1
63 XX101 1 1 2
64 AB123 1 1 1
65 AB123 1 1 2
66 AB123 1 1 3
67 CD321 1 1 1
68 CD321 1 1 2
```
This is the best I have come up with but it doesn't seem to be very efficient.
```
select ID from #table
where Code = (Select Code from #table where ID = @Port1) and
Life = (Select Life from #table where ID = @Port1) and
TC = (Select TC from #table where ID = @Port1)
```
|
Here is the query you need:
```
select t2.*
from #table t1
join #table t2 on t1.Code = t2.Code and
                  t1.Life = t2.Life and
                  t1.TC = t2.TC
where t1.id = @Port1
```
With `cross apply`:
```
select ca.*
from #table t1
cross apply (select * from #table t2 where t1.Code = t2.Code and
                                           t1.Life = t2.Life and
                                           t1.TC = t2.TC) ca
where t1.id = @Port1
```
With `cte`:
```
with cte as(select * from #table where id = @Port1)
select t.*
from #table t
join cte c on t.Code = c.Code and
              t.Life = c.Life and
              t.TC = c.TC
```
|
You could use an EXIST statement for this scenario
```
SELECT
ID
FROM
#table t1
WHERE
EXISTS ( SELECT
*
FROM
#table t2
WHERE
t2.ID = @Port1
AND t2.Code = t1.Code
AND t2.Life = t1.Life
AND t2.TC = t1.TC )
```
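Here's a runnable sqlite3 sketch of the EXISTS approach against the question's sample rows (`@Port1` becomes a bound parameter):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(ID INTEGER, Code TEXT, Life INTEGER, TC INTEGER, PORT INTEGER);
INSERT INTO t VALUES
 (62,'XX101',1,1,1),(63,'XX101',1,1,2),
 (64,'AB123',1,1,1),(65,'AB123',1,1,2),(66,'AB123',1,1,3),
 (67,'CD321',1,1,1),(68,'CD321',1,1,2);
""")
port1 = 64  # the @Port1 variable from the question
ids = [r[0] for r in con.execute("""
    SELECT t1.ID FROM t t1
    WHERE EXISTS (SELECT 1 FROM t t2
                  WHERE t2.ID = ?
                    AND t2.Code = t1.Code
                    AND t2.Life = t1.Life
                    AND t2.TC = t1.TC)
    ORDER BY t1.ID
""", (port1,))]
print(ids)  # [64, 65, 66]
```

With `@Port1 = 64` (Code AB123) all three AB123 rows come back, matching the intent of the original triple-subquery version but reading the comparison row only once.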
|
SQL: Select rows in a table by filtering multiple columns from the same table by a 3 column select result
|
[
"",
"sql",
"sql-server",
""
] |
Suppose I have table 1 and table 2 with the same columns. I want to combine them and sort by column3 and column4, as well as preserve the table column order. How can this be done?
I got the following error:
> ORDER BY items must appear in the select list if the statement
> contains a UNION, INTERSECT or EXCEPT operator.
Query
```
SELECT COLUMN1, COLUMN2, COLUMN3, COLUMN4 FROM TABLE 1
UNION SELECT COLUMN1, COLUMN2, COLUMN3, COLUMN4 FROM TABLE 2
ORDER BY COLUMN2, COLUMN3
```
|
Use a subquery. Example:
```
SELECT * FROM
(
SELECT COLUMN1, COLUMN2, COLUMN3, COLUMN4 FROM TABLE1
UNION
SELECT COLUMN1, COLUMN2, COLUMN3, COLUMN4 FROM TABLE2
) ResultTABLE
ORDER BY ResultTABLE.COLUMN2, ResultTABLE.COLUMN3
```
This method helps you avoid ambiguous column references and lets you order the whole combined result.
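A quick sqlite3 sketch of the subquery pattern (hypothetical tables and data, just to show the ORDER BY applies to the combined, de-duplicated result):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1(c1, c2, c3, c4);
CREATE TABLE t2(c1, c2, c3, c4);
INSERT INTO t1 VALUES (1,'b',2,10), (2,'a',1,20);
INSERT INTO t2 VALUES (3,'a',3,30), (1,'b',2,10);  -- duplicate of a t1 row
""")
rows = con.execute("""
    SELECT * FROM (
        SELECT c1, c2, c3, c4 FROM t1
        UNION
        SELECT c1, c2, c3, c4 FROM t2
    ) r
    ORDER BY r.c2, r.c3
""").fetchall()
print(rows)  # [(2, 'a', 1, 20), (3, 'a', 3, 30), (1, 'b', 2, 10)]
```

Note that `UNION` removed the duplicate row, and the sort ran over all three surviving rows at once.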
|
When you use UNION or UNION ALL, the result set gets its column names from the first query by default. So at the end of the query, when you write the ORDER BY, make sure you use the column names from the first query (in your case, table1). I guess in your case table1 and table2 don't have the same column names, so you need to write the first query's (table1's) column names in the ORDER BY.
```
SELECT COLUMN1, COLUMN2, COLUMN3, COLUMN4 FROM TABLE 1
UNION
select COLUMN1, COLUMN2, COLUMN3, COLUMN4 FROM TABLE 2
ORDER BY COLUMN2, COLUMN3 -- here column names should be from the table1
```
|
Sorting in sql union
|
[
"",
"sql",
"sql-server",
"select",
"sql-order-by",
"union",
""
] |
I would like to know how to solve this problem, I have these two tables:
```
TABLE FRIENDSHIP
Name | Friend
--------------
John | Mark
Peter | Paul
Simon | Mary
TABLE SCHOOL
Name | School
--------------
John | SchoolA
Mark | SchoolA
Peter | SchoolB
Paul | SchoolC
Simon | SchoolC
Mary | SchoolD
```
How do I select those people that are friends but do not attend the same school?
I thought to first select the schools that have a count < 2 and then compare the names of these people with the names in the FRIENDSHIP table (comparing both the name and friend columns). How do I transform this into a query? Sorry for being a newbie.
It should return me something like:
```
Name | Friend
---------------
Peter | Paul
Simon | Mary
```
Thanks for the help!
|
You can try joining the `SCHOOL` table to the `FRIENDSHIP` twice, once for each friend in the pair. Then you can restrict your result set by retaining only friend pairs whose respective schools are *not* the same.
```
SELECT f.Name, f.Friend
FROM FRIENDSHIP f INNER JOIN SCHOOL s1 ON f.Name = s1.Name
INNER JOIN SCHOOL s2 ON f.Friend = s2.Name
WHERE s1.School <> s2.School
```
Here is a working demo of this query using your sample data:
[**SQLFiddle**](http://sqlfiddle.com/#!9/34e499/1)
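The same query can also be checked locally with sqlite3 (equivalent to the SQLFiddle above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE FRIENDSHIP(Name TEXT, Friend TEXT);
CREATE TABLE SCHOOL(Name TEXT, School TEXT);
INSERT INTO FRIENDSHIP VALUES ('John','Mark'),('Peter','Paul'),('Simon','Mary');
INSERT INTO SCHOOL VALUES
 ('John','SchoolA'),('Mark','SchoolA'),('Peter','SchoolB'),
 ('Paul','SchoolC'),('Simon','SchoolC'),('Mary','SchoolD');
""")
rows = con.execute("""
    SELECT f.Name, f.Friend
    FROM FRIENDSHIP f
    JOIN SCHOOL s1 ON f.Name = s1.Name   -- school of the first friend
    JOIN SCHOOL s2 ON f.Friend = s2.Name -- school of the second friend
    WHERE s1.School <> s2.School
    ORDER BY f.Name
""").fetchall()
print(rows)  # [('Peter', 'Paul'), ('Simon', 'Mary')]
```

John and Mark are filtered out because both attend SchoolA.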
|
There are 2 type of people on your example.
Let's call the first one as `Name` and the second one as `Friend`.
Both `Name` and `Friend` has `School`.
We need to compare that their `School` is different, so we need to get their `School` first.
Let's get `School` for `Name` by joining `Friend.Name` to `School.Name`
```
SELECT f.Name, School, f.Friend
FROM Friend AS f
INNER JOIN School AS s ON f.Name = s.Name
```
Now let's get `School` for `Friend` by joining `Friend.Friend` to `School.Name`
```
SELECT Friend, School
FROM Friend AS f
INNER JOIN School AS s ON f.Friend = s.Name
```
Now we got result for `SchoolOfName` and `SchoolOfFriend`, let's join them by `Friend` of `SchoolOfName` and `Friend` of `SchoolOfFriend`
And also, `School` of `SchoolOfName` should not be equal to `School` of `SchoolofFriend`
```
SELECT Name
,SchoolOfName.School AS SchoolOfName
,SchoolOfName.Friend
,SchoolOfFriend.School AS SchoolOfFriend
FROM (
SELECT f.Name, School, f.Friend
FROM Friend AS f
INNER JOIN School AS s ON f.Name = s.Name
) AS SchoolOfName
INNER JOIN (
SELECT Friend, School
FROM Friend AS f
INNER JOIN School AS s ON f.Friend = s.Name
) AS SchoolOfFriend
ON SchoolOfName.Friend = SchoolOfFriend.Friend
AND NOT SchoolOfName.School = SchoolOfFriend.School
```
|
SQL Query Common Value after count
|
[
"",
"mysql",
"sql",
"database",
""
] |
I am working on cleaning up a customer list from an ecommerce site. The customer list has a many to many relationship between customer ID and customer email. For example, a customer could place an order with the same email while logged in or anonymous, and the result would be two customer records with the same email but different customer IDs. Similarly, a customer could create orders with two different emails while logged in which would result in customer records with the same ID but different emails. Given this, I want to create a list of customers with truly unique IDs based on either email or customer number. In addition, there are situations where the email is blank, so customers records that both have blank emails but different IDs would need to be considered two different customers.
So given something like this:
```
CUST_ID CUST_EMAIL
------------------------
123 test1@gmail.com
123 test2@gmail.com
124 test3@gmail.com
125 test3@gmail.com
126
127
128 test4@gmail.com
128 test5@gmail.com
129 test4@gmail.com
```
I would want to generate a key like this:
```
CUST_ID CUST_EMAIL NEW_CUST_KEY
------------------------------------
123 test1@gmail.com 1
123 test2@gmail.com 1
124 test3@gmail.com 2
125 test3@gmail.com 2
126 3
127 4
128 test4@gmail.com 5
128 test5@gmail.com 5
129 test4@gmail.com 5
```
|
OLDTABLE is your existing table; NEWTABLE will hold the result.
[](https://i.stack.imgur.com/EUwon.jpg)
```
CREATE TABLE #NEWTABLE
(
NEW_CUST_KEY int not null ,
CUST_ID int not null,
CUST_EMAIL nvarchar(100) null
)
------------------------------------
insert into #NEWTABLE (NEW_CUST_KEY,CUST_ID,CUST_EMAIL)
SELECT ROW_NUMBER() OVER(ORDER BY CUST_ID, CUST_EMAIL) AS NEW_CUST_KEY, CUST_ID, CUST_EMAIL
FROM
(
SELECT CUST_ID, CUST_EMAIL
FROM OLDTABLE
GROUP BY CUST_ID, CUST_EMAIL
) T
UPDATE Upd SET NEW_CUST_KEY = T.NEW_CUST_KEY
FROM #NEWTABLE Upd
join (
SELECT CUST_ID, min(NEW_CUST_KEY) AS NEW_CUST_KEY
FROM #NEWTABLE
GROUP BY CUST_ID) T
on Upd.CUST_ID = T.CUST_ID
UPDATE Upd SET NEW_CUST_KEY = T.NEW_CUST_KEY
FROM #NEWTABLE Upd
join (
SELECT CUST_EMAIL, min(NEW_CUST_KEY) AS NEW_CUST_KEY
FROM #NEWTABLE
GROUP BY CUST_EMAIL) T
on nullif(Upd.CUST_EMAIL,'') = nullif(T.CUST_EMAIL,'')
UPDATE Upd SET NEW_CUST_KEY = T.CHANGE_CUST_KEY
FROM #NEWTABLE Upd
join (
SELECT NEW_CUST_KEY, ROW_NUMBER() OVER(ORDER BY NEW_CUST_KEY) AS CHANGE_CUST_KEY
FROM #NEWTABLE
GROUP BY NEW_CUST_KEY) T
on Upd.NEW_CUST_KEY = T.NEW_CUST_KEY
select * from #NEWTABLE
```
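Stepping back, this is really a connected-components problem: two records belong to the same customer whenever they share an ID or a non-blank email, possibly through a chain (128 ~ test4 ~ 129). A fixed number of UPDATE passes only merges one hop at a time, so for arbitrary chains a union-find pass outside the database is a safer sketch (Python, my own illustration, using the question's sample data):

```python
# Union-find over customer IDs and emails: records sharing either value
# end up in the same component, which becomes NEW_CUST_KEY.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

rows = [(123, 'test1@gmail.com'), (123, 'test2@gmail.com'),
        (124, 'test3@gmail.com'), (125, 'test3@gmail.com'),
        (126, ''), (127, ''),
        (128, 'test4@gmail.com'), (128, 'test5@gmail.com'),
        (129, 'test4@gmail.com')]

parent = {}
for i, (cid, email) in enumerate(rows):
    id_node = ('id', cid)
    parent.setdefault(id_node, id_node)
    parent.setdefault(('row', i), ('row', i))
    union(parent, id_node, ('row', i))
    if email:  # blank emails never link two records together
        e_node = ('email', email)
        parent.setdefault(e_node, e_node)
        union(parent, id_node, e_node)

keys, out = {}, []
for i, (cid, email) in enumerate(rows):
    root = find(parent, ('row', i))
    out.append((cid, email, keys.setdefault(root, len(keys) + 1)))

print([k for _, _, k in out])  # [1, 1, 2, 2, 3, 4, 5, 5, 5]
```

The computed keys match the expected `NEW_CUST_KEY` column in the question, including the blank-email records getting distinct keys.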
|
I think you could use `ROW_NUMBER`. Something like this:
```
SELECT DISTINCT CUST_ID, CUST_EMAIL,
       ROW_NUMBER() OVER(PARTITION BY CUST_ID, CUST_EMAIL ORDER BY (SELECT NULL)) AS New_Cust_Key
FROM YOUR_TABLE
```
|
SQL Server: Generate unique customer key based on two columns
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables: `@CATS` and `@NEWCATS`
```
declare @CATS table (_Group int, _Name nvarchar(50))
declare @NEWCATS table (_Name nvarchar(50))
insert into @CATS (_Group, _Name) values (1, 'Siamese'), (1, 'Japanese'), (2, 'Siamese'), (2, 'Japanese'), (2, 'Russian')
insert into @NEWCATS (_Name) values ('Siamese'), ('Japanese')
```
I want to find if there exists a \_Group in @CATS containing **exactly** the rows from @NEWCATS (e.g. 'Siamese' and 'Japanese').
In this example, I want to return \_Group=1, but not \_Group=2 (because \_Group=2 contains 'Russian').
That is, to complete this:
```
declare @Group int
select TOP 1 @Group = _Group
from ...
```
Note: There is no guarantee that the groups are unique, that's why TOP 1.
|
This is one way to do it:
```
select CATS._Group
from CATS
left join NEWCATS on NEWCATS._Name = CATS._Name
group by CATS._Group
having sum(case when NEWCATS._Name is null then 1 else 0 end) = 0
```
If they were real tables and there were indexes, other ways using different access plans could produce faster results.
---
**Update**
Given new requirement that all rows in `NEWCATS` must be referenced:
```
select CATS._Group
from CATS
left join NEWCATS on NEWCATS._Name = CATS._Name
group by CATS._Group
having sum(case when NEWCATS._Name is null then 1 else 0 end) = 0
and count(*) = ( select count(*) from NEWCATS )
```
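Here's a quick sqlite3 check of the updated query against the sample data (ordinary tables standing in for the table variables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CATS(_Group INTEGER, _Name TEXT);
CREATE TABLE NEWCATS(_Name TEXT);
INSERT INTO CATS VALUES (1,'Siamese'),(1,'Japanese'),
                        (2,'Siamese'),(2,'Japanese'),(2,'Russian');
INSERT INTO NEWCATS VALUES ('Siamese'),('Japanese');
""")
rows = con.execute("""
    SELECT CATS._Group
    FROM CATS
    LEFT JOIN NEWCATS ON NEWCATS._Name = CATS._Name
    GROUP BY CATS._Group
    -- no unmatched member ...
    HAVING SUM(CASE WHEN NEWCATS._Name IS NULL THEN 1 ELSE 0 END) = 0
    -- ... and exactly as many members as NEWCATS
       AND COUNT(*) = (SELECT COUNT(*) FROM NEWCATS)
""").fetchall()
print(rows)  # [(1,)]
```

Group 2 is rejected by the first HAVING condition ('Russian' has no match in NEWCATS), and the count check would also reject any group that is a strict subset of NEWCATS.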
|
You can use `EXCEPT`:
```
SELECT DISTINCT _group
FROM @CATS
EXCEPT
SELECT c._group
FROM @CATS c
LEFT JOIN @NEWCATS nc
ON nc._Name = c._Name
WHERE nc._Name IS NULL;
```
`LiveDemo`
**EDIT:**
```
WITH cte AS
(
SELECT *
FROM (SELECT DISTINCT _Group FROM @CATS) AS c
CROSS JOIN @NEWCATS nc
)
SELECT DISTINCT _group
FROM @CATS
EXCEPT
SELECT COALESCE(c._Group, cte._Group) AS _Group
FROM @CATS c
FULL JOIN cte
ON c._Group = cte._Group
AND c._Name = cte._Name
WHERE c._Name IS NULL
OR cte._Name IS NULL;
```
`LiveDemo2`
|
Get parent ID from two tables
|
[
"",
"sql",
"sql-server",
""
] |
I'm working with a table in MySql that has an int indexed column called "serial". This table has around 2 million rows.
If I apply a select statement with this column in this way:
```
SELECT serial FROM Table WHERE Serial=12345
```
this returns the data in around < 1 sec.
However, if I use a SELECT \* query in this same table, this query takes **around 78 seconds**...
I know it is not useful to index every column in the table, so how can I optimize/minimize the query response time when I need to get all the columns for a specific serial?
```
SELECT * FROM Table WHERE serial= 12345
```
The results from EXPLAIN :
SELECT serial:
> 1 SIMPLE tableName index idx\_tablename\_serial 5 6686620 Using index
SELECT \* :
> 1 SIMPLE agsensepivotreadings ALL (null values) 6686620
Please, any suggestion or guidance will be very much appreciated.
|
There is a partial answer to your question here:
<https://stackoverflow.com/a/3211164/2957840>
But you might also consider partitioning your table:
<https://dev.mysql.com/doc/refman/5.7/en/partitioning.html>
|
It is rather unlikely that the columns are causing the problem. This could happen if one (or more) of the columns are really, really large objects. To get to 78 seconds, you need to be thinking in terms of many megabytes or gigabytes, although even 1 Gbyte might not take that long in many environments.
The use of the index versus the non-index is easy to explain. The first query is covered by the index, so the original data pages are not needed. The second query is not covered by the index. Because so many rows are being selected, all the data may need to be read, in order to find a matching row. This is an optimization to prevent thrashing. It might explain what is going on, although 78 seconds for loading a table into memory seems like a long time -- unless the rows are very wide.
Another possibility is that other operations are locking the table. In many environments, this would be the most likely culprit.
Finally, if the queries were subtly different (such as one having an `order by` or the constant being enclosed in single quotes), then that might account for some difference.
I would check the `explain` to see what is happening. Even searching through a table with a few million rows should not take 78 seconds.
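Both plans hint at the underlying issue: the first query does a full *index* scan (`type = index` over 6.6M rows) rather than a point lookup (`type = ref`), and the second sees no usable key at all. A hedged sketch for checking this, reusing the table/index names from the question's EXPLAIN output (treat them as assumptions):

```
-- If `serial` is stored as a string, comparing it with the bare number 12345
-- prevents MySQL from using the index for a seek; quoting the constant
-- restores the point lookup. Compare the two plans:
EXPLAIN SELECT * FROM tableName WHERE serial = 12345;
EXPLAIN SELECT * FROM tableName WHERE serial = '12345';

-- If no index usable for the lookup exists yet, add one:
CREATE INDEX idx_tablename_serial ON tableName (serial);
```

If the second `EXPLAIN` shows `type = ref` with a small `rows` estimate, the slow `SELECT *` was a type-mismatch problem, not a "too many columns" problem.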
|
MySql, SELECT * FROM with Indexed columns
|
[
"",
"mysql",
"sql",
"database",
"performance",
"indexing",
""
] |
Here is the SQL Puzzle to challenge you:
Write a query that would select top 5 records in three different categories.
Something like this:
```
select top 5 name, age from table1 where category = 22 order by age desc
union all
select top 5 name, age from table1 where category = 27 order by age desc
union all
select top 5 name, age from table1 where category = 53 order by age desc
```
But do it **without** using UNION or UNION ALL
If you are using some vendor-specific SQL extensions please specify the database you are using.
|
Classic `top-n-per-group`, isn't it?
Using SQL Server syntax. `ROW_NUMBER()` should be in all decent databases in 2015.
```
WITH
CTE
AS
(
select
name
,age
,ROW_NUMBER() OVER (PARTITION BY category ORDER BY age desc) AS rn
from table1
where category IN (22, 27, 53)
)
SELECT
name
,age
FROM CTE
WHERE rn <= 5
;
```
---
`UNION` is in some sense the same as `OR`.
If your table has a primary key `ID` you can rewrite the query like this:
```
SELECT name, age
FROM table1
WHERE
ID IN (select top 5 ID from table1 where category = 22 order by age desc)
OR
ID IN (select top 5 ID from table1 where category = 27 order by age desc)
OR
ID IN (select top 5 ID from table1 where category = 53 order by age desc)
```
Normally, `UNION ALL` would be more efficient than this, though.
|
If you truly want top 5 then you'll probably need a tie-breaker. Is `name` unique per `age`?
```
select t.name, t.age from T t
where t.category in (22, 27, 53) and 5 >= (
select count(*) from T t2
where t2.category = t.category
 and (t2.age > t.age or t2.age = t.age and t2.name >= t.name)
)
```
|
SQL Puzzle - UNION ALL without UNION ALL
|
[
"",
"sql",
"intersystems-cache",
""
] |
I have implemented multiple conditions in case statement as below,
```
select officied from budgettable
where budgetid = case @budgetid when 7 then 7
when 7 then 8
when 7 then 10
when 8 then 6
end
```
but it didn't give me any results. If I pass a budget id of `7`, the query should return budget ids `8,10,7`. Is anything wrong with the above query?
|
I suspect that you want something like this:
```
where budgetid = @budgetid or
(@budgetid = 7 and budgetid in (7, 8, 10)) or
(@budgetid = 8 and budgetid = 6)
```
Your query is failing because for all values of `@budgetid` other than 7 or 8, the `case` returns `NULL` -- which is treated as false.
|
This is not an answer; I just want to show how CASE WHEN works, so you can see your mistake. Your query evaluates `case @budgetid` as follows:
1. when 7 then 7 => take a 7 for a @budgetid 7
2. when 7 then 8 => ignored, because we already said to take 7 for a @budgetid 7
3. when 7 then 10 => ignored, because we already said to take 7 for a @budgetid 7
4. when 8 then 6 => take a 6 for a @budgetid 8
5. end => no else here, so any other @budgetid results in NULL.
You then compare the result with `= budgetid`. This is never true for NULL. So you end up with:
```
where (@budgetid = 7 and budgetid = 7)
or (@budgetid = 8 and budgetid = 6)
```
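For completeness, if the mapping from `@budgetid` to target `budgetid`s keeps growing, one alternative sketch (SQL Server syntax, using the table/column names from the question) is to put the pairs in a derived table and join on it instead of chaining `OR`s:

```
SELECT b.officied
FROM budgettable b
JOIN (VALUES (7, 7), (7, 8), (7, 10), (8, 6)) AS m (param_id, budget_id)
    ON m.budget_id = b.budgetid
WHERE m.param_id = @budgetid;
```

The `VALUES` table constructor requires SQL Server 2008 or later; for earlier versions the same pairs could live in a small lookup table.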
|
Use Multiple conditions in Case Statement
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table like this:
```
id | col1 | col2 | col3 |
-------------------------
1 | ab | ab | |
2 | bc | ab | cd |
3 | bc | cd | cd |
```
And I want to produce the count of each of the names in each column, like this:
```
name | col1 | col2 | col3 |
-------------------------
ab | 1 | 2 | 0 |
bc | 2 | 0 | 0 |
cd | 0 | 1 | 2 |
```
Note that this is just an example; in reality there are thousands of names, so the solution will have to specify the columns but not the names.
This is probably very simple, and I tried to search for it but couldn't find anything quite right, likely because I didn't know the right term.
I was using something like this to count the total occurrences of each name, but I can't figure out how to split it back out by column:
[Selecting distinct records from multiple column of a table with their count](https://stackoverflow.com/questions/28248921/selecting-distinct-records-from-multiple-column-of-a-table-with-their-count)
This is using MYSQL.
|
```
select name, sum(cnt1), sum(cnt2), sum(cnt3)
from
(
select col1 as name, count(*) as cnt1 , null as cnt2 , null as cnt3 from t group by col1
union all
select col2, null, count(*), null from t group by col2
union all
select col3, null, null, count(*) from t group by col3
) as dt
```
|
perhaps this would work
```
select colcounts.col, sum(colcounts.cnt) from
(
(select col1 as col, count(*) as cnt from TableLikeThis group by col1)
union all
(select col2 as col, count(*) as cnt from TableLikeThis group by col2)
union all
(select col3 as col, count(*) as cnt from TableLikeThis group by col3)
) as colcounts
group by col
```
[Sql Fiddle Here](http://sqlfiddle.com/#!9/e79081/3)
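Another way to sketch the same idea: MySQL evaluates boolean expressions as 0/1, so conditional aggregation over a tagged union can replace the NULL placeholder columns (table name `t` as in the accepted answer):

```
SELECT name,
       SUM(src = 1) AS col1,
       SUM(src = 2) AS col2,
       SUM(src = 3) AS col3
FROM (
    SELECT col1 AS name, 1 AS src FROM t
    UNION ALL
    SELECT col2, 2 FROM t
    UNION ALL
    SELECT col3, 3 FROM t
) AS u
GROUP BY name;
```

The `src` tag records which column each value came from, so the outer `SUM`s split the counts back out per column.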
|
Count of ID in Each Column by ID
|
[
"",
"mysql",
"sql",
"database",
"union",
"aggregate",
""
] |
Having issues getting rid of this error.
I'm trying to use DigitalOcean to deploy my Django app. I configured Postgres, but when I try to register a new user in my app, I get this problem.
I've tried running `python manage.py makemigrations` and `python manage.py migrate` and they work fine. The SQL migrations are completed upon migration.
|
Try adding this to your imports:
```
from django.utils import timezone
```
Then wherever you grab the `user`'s data to `save` the new `user` add before the `save` procedure:
```
<user_object>.last_login = timezone.now()
```
|
I was upgrading from Django 1.7 to Django 1.10 when I ran into this. It ended up being that I needed to run `python manage.py migrate` to update the database tables and columns.
|
null value in column "last_login" violates not-null constraint
|
[
"",
"sql",
"django",
"postgresql",
"null",
""
] |
***UPDATED :***
I am trying to achieve this:
```
TBL1
PK AK TOT1
1 1 100
2 2 200
3 3 300
TBL2
PK AK TOT2
1 1 50
2 1 50
3 2 150
```
First off, I aggregate TBL2, i.e. all the TBL2 rows sharing an AK value (such as 1) are summed.
Then I use a JOIN to display them.
The script is like this:
```
SELECT t1.AK, t1.TOT1, t2.TOT1
FROM TBL1 t1
JOIN (SELECT AK, SUM(TOT2) as TOT1
FROM TBL2
GROUP BY AK) t2
ON t1.AK = t2.AK
WHERE t1.TOT1 <> t2.TOT1
```
It displays tables where t1.TOT1 <> t2.TOT1.
But my problem now is this:
TBL1.AK has a value (3) that TBL2.AK does not, and I need to display those rows too.
So far I have tried changing the join to a
```
LEFT JOIN
```
and adding
```
WHERE t1.AK IS NULL OR t2.AK IS NULL
```
but still with no success.
|
The query is performing an inner join; it's returning only rows that have matching values.
You are correct, to also get rows from one table that do *not* have a matching row in the other table, you would need an "outer" join.
To get the result you describe:
1) add the keyword `FULL` before the `JOIN` keyword, and
2) either relocate the predicate in the `WHERE` clause to the `ON` clause, or amend the predicate in the `WHERE` clause so that rows with a NULL value for t1.TOT1 or t2.TOT1 can also be returned
---
In the `WHERE` clause, the condition `t1.TOT1 <> t2.TOT1` will negate the "outerness" of the join. For any rows returned from `t1` that don't have a matching row from `t2`, the expression `t2.TOT1` will evaluate to NULL, and the inequality comparison will evaluate to NULL. And that will prevent the row from being returned.
The bare `JOIN` keyword is an "inner" join. To get an outer join, that has to be preceded by a keyword `LEFT`, `RIGHT` or `FULL`. To allow the "non-matching" rows to be returned, the `WHERE` clause must not prohibit those rows from being returned.
---
For example
```
SELECT t1.AK
, t1.TOT1
, t2.TOT1
FROM TBL1 t1
FULL
JOIN ( SELECT AK
            , SUM(TOT2) AS TOT1
FROM TBL2
GROUP BY AK
) t2
ON t1.AK = t2.AK
WHERE t1.TOT1 <> t2.TOT1
OR (t1.TOT1 IS NULL AND t2.TOT1 IS NOT NULL)
OR (t1.TOT1 IS NOT NULL AND t2.TOT1 IS NULL)
```
In this form, with the sample data, the query would return the rows
```
AK t1.TOT1 t2.TOT1
2 200 150
3 300 NULL
```
If TBL2 also contained an AK value (say 4) with no match in TBL1, that row would come back with a NULL in the AK column (e.g. `NULL NULL 400`). If you want to return "4" as the value of the AK column on such a row, you would need a different expression in the SELECT list, e.g.
```
SELECT NVL(t1.AK, t2.AK) AS AK
```
or the ANSI-standard equivalent
```
SELECT CASE WHEN t1.AK IS NOT NULL THEN t1.AK ELSE t2.AK END AS AK
```
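Putting the pieces together, here is a hedged sketch of the complete query, using `COALESCE` (the ANSI equivalent of Oracle's `NVL`); it assumes TBL2's totals live in the `TOT2` column as shown in the question's sample data:

```
SELECT COALESCE(t1.AK, t2.AK) AS AK
     , t1.TOT1
     , t2.TOT1
  FROM TBL1 t1
  FULL
  JOIN ( SELECT AK
              , SUM(TOT2) AS TOT1
           FROM TBL2
          GROUP BY AK
       ) t2
    ON t1.AK = t2.AK
 WHERE t1.TOT1 <> t2.TOT1
    OR t1.TOT1 IS NULL
    OR t2.TOT1 IS NULL
```

The two `IS NULL` conditions are what let the non-matching rows from either side survive the `WHERE` clause.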
|
I'm not sure exactly what you want to display, because you only sum data from one table, but I suggest the following.
To query all records in both tables:
```
SELECT * -- Display as you want
FROM TBL1 t1
FULL OUTER JOIN TBL2 t2
ON t1.AK = t2.AK
```
If you need to merge the AK values from both tables into one column, you can use [NVL](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm):
```
SELECT NVL(t1.AK,t2.AK) --if t1.AK value is null, it will display t2.AK value.
FROM TBL1 t1
FULL OUTER JOIN TBL2 t2
ON t1.AK = t2.AK
```
If you need to "group by" and sum values, you can do it like this:
```
SELECT NVL(t1.AK,t2.AK),
SUM(t1.TOT1),
SUM(t2.TOT2)
FROM TBL1 t1
FULL OUTER JOIN TBL2 t2
ON t1.AK = t2.AK
GROUP BY NVL(t1.AK,t2.AK)
```
Edited: Maybe you need this below.
```
SELECT NVL(t1.AK,t2.AK),
       t1.TOT1,
       t2.TOT1
FROM TBL1 t1
FULL OUTER JOIN (SELECT AK, SUM(TOT2) AS TOT1
                 FROM TBL2
                 GROUP BY AK) t2
ON t1.AK = t2.AK
WHERE t1.TOT1 <> t2.TOT1
   OR t1.TOT1 IS NULL
   OR t2.TOT1 IS NULL
```
|
SQL After aggregation and join. Display the Not Equal value and The NULLs
|
[
"",
"sql",
"oracle",
"join",
"aggregate",
""
] |
I need to search a string in multiple columns with an OR condition, and I also want to split the search string on spaces. For example, if a user searches for 'Software Developer' then a simple query would look like:
```
Select * from TableName Where Col1 LIKE '%Software%' OR Col2 LIKE '%Software%' OR Col3 LIKE '%Software%' OR Col1 LIKE '%Developer%' OR Col2 LIKE '%Developer%' OR Col3 LIKE '%Developer%'
```
The procedure should take one parameter, something like:
```
ProcedureName 'Software Developer'
```
I can work with the above query, but I need to know: is there a simpler, faster, better way to do this?
|
Do you think you could use a query like the following?
> Select \* from Table where ( col1 + col2 + col3 like '%Software%') OR (col1 + col2 + col3 like '%Engineer%')
|
```
SELECT dbo.JD_tblJob.EmployerID, dbo.JD_tblJob.PostDate, dbo.JD_tblJob.JobID, dbo.JD_tblJob.JobTitle, dbo.JD_tblJob.JobDescr, dbo.JD_tblJob.MinWorkExp,
dbo.JD_tblJob.MaxWorkExp, dbo.JD_tblJob.MinSalary, dbo.JD_tblJob.MaxSalary, dbo.JD_tblJob.HideSalary, dbo.JD_tblJob.NoOfVacancies,
dbo.JD_tblJob.JobLocation, dbo.JD_tblJob.Active, dbo.JD_tblEmployer.CompanyName, tblKeySkill.fldName AS KeySkillName,
tblFuncArea.fldName AS FuncAreaName FROM dbo.JD_tblJob INNER JOIN dbo.JD_tblMaster AS tblFuncArea ON dbo.JD_tblJob.FuncAreaID = tblFuncArea.ID INNER JOIN dbo.JD_tblMaster AS tblKeySkill ON dbo.JD_tblJob.KeySkillID = tblKeySkill.ID INNER JOIN dbo.JD_tblEmployer ON dbo.JD_tblJob.EmployerID = dbo.JD_tblEmployer.ID WHERE (dbo.JD_tblJob.JobTitle LIKE '%Engineer%') OR (tblKeySkill.fldName LIKE '%Engineer%') OR (tblFuncArea.fldName LIKE '%Engineer%') OR (dbo.JD_tblJob.JobTitle LIKE '%Software%') OR (tblKeySkill.fldName LIKE '%Software%') OR (tblFuncArea.fldName LIKE '%Software%')
```
It's giving me results, but the query below is not returning any rows:
```
SELECT dbo.JD_tblJob.EmployerID, dbo.JD_tblJob.PostDate, dbo.JD_tblJob.JobID, dbo.JD_tblJob.JobTitle, dbo.JD_tblJob.JobDescr, dbo.JD_tblJob.MinWorkExp,
       dbo.JD_tblJob.MaxWorkExp, dbo.JD_tblJob.MinSalary, dbo.JD_tblJob.MaxSalary, dbo.JD_tblJob.HideSalary, dbo.JD_tblJob.NoOfVacancies,
       dbo.JD_tblJob.JobLocation, dbo.JD_tblJob.Active, dbo.JD_tblEmployer.CompanyName, tblKeySkill.fldName AS KeySkillName,
       tblFuncArea.fldName AS FuncAreaName
FROM dbo.JD_tblJob
INNER JOIN dbo.JD_tblMaster AS tblFuncArea ON dbo.JD_tblJob.FuncAreaID = tblFuncArea.ID
INNER JOIN dbo.JD_tblMaster AS tblKeySkill ON dbo.JD_tblJob.KeySkillID = tblKeySkill.ID
INNER JOIN dbo.JD_tblEmployer ON dbo.JD_tblJob.EmployerID = dbo.JD_tblEmployer.ID
WHERE dbo.JD_tblJob.JobTitle + tblKeySkill.fldName + tblFuncArea.fldName LIKE '%Engineer%Software%'
```
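For what it's worth, on SQL Server 2016 and later the space-splitting can be pushed into the procedure with the built-in `STRING_SPLIT`. This is only a sketch, with hypothetical table/column names (`TableName`, `Col1`-`Col3`) standing in for the real ones:

```
CREATE PROCEDURE SearchKeywords
    @Keywords NVARCHAR(200)
AS
BEGIN
    -- STRING_SPLIT yields one row per space-separated word;
    -- a row matches if any word appears in any of the three columns.
    SELECT DISTINCT t.*
    FROM TableName t
    JOIN STRING_SPLIT(@Keywords, ' ') AS w
        ON t.Col1 LIKE '%' + w.value + '%'
        OR t.Col2 LIKE '%' + w.value + '%'
        OR t.Col3 LIKE '%' + w.value + '%';
END
```

Called as `EXEC SearchKeywords 'Software Developer';`. Note `STRING_SPLIT` requires database compatibility level 130 or higher; on older versions a user-defined split function would be needed instead.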
|
Procedure to Search String with split
|
[
"",
"sql",
"sql-server",
"string",
"stored-procedures",
"split",
""
] |
First I want to show you my tables:
Table Users
```
CREATE TABLE Users(
EMAIL VARCHAR(40) NOT NULL);
```
Table Actions
```
CREATE TABLE Actions(
IDACCION INT NOT NULL,
ACCION VARCHAR(100) NOT NULL);
```
Table UserAction
```
CREATE TABLE UserAction(
EMAIL VARCHAR(40) NOT NULL,
IDACCION INT NOT NULL,
DATETIME DATE NOT NULL,
FOREIGN KEY (IDACCION) REFERENCES Actions(IDACCION),
FOREIGN KEY (EMAIL) REFERENCES Users(EMAIL),
PRIMARY KEY (IDACCION, EMAIL, DATETIME));
```
Then I need to get the actions (ACCION from Actions) and the DATETIME (DATETIME from UserAction) associated with them, where the condition is that the email (from Users) matches the UserAction email. Is that possible?
If it is not possible, I can move DATETIME to the Actions table.
|
You really need to restructure your tables: your Users table does not have a primary key and doesn't seem to serve any purpose in the application; this query doesn't even need the Users table.
```
SELECT a.ACCION, ua.DATETIME FROM Actions a
INNER JOIN UserAction ua on a.IDACCION = ua.IDACCION
```
|
```
select
a.accion,
ua.datetime
from
users u,
actions a,
useraction ua
where
u.email=ua.email
and ua.idaccion=a.idaccion;
```
|
SQL query from 3 different tables.
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have written a stored procedure, my table contains 2 foreign keys from same table, `DISTRICTS`.
Scenario: I am passing a person's job record, which should contain his initial working city and district and then his current city and district. Now I want to show the district names for both, but I am confused about the joins because both come from the same DISTRICTS table.
Stored procedure:
```
select
ServiceInfo.pk_ServiceInfo_ServiceInfoID,
ServiceInfo.fk_Districts_ServiceInfo_CurrentDistrictID,
ServiceInfo.fk_Districts_ServiceInfo_InitialDistrictID,
Districts.DistrictName
from
ServiceInfo
join
Districts on Districts.pk_Districts_DistrictID = ServiceInfo.fk_Districts_ServiceInfo_CurrentDistrictID
join
PersonalInfo on PersonalInfo.pk_PersonalInfo_ID = ServiceInfo.fk_PersonalInfo_ServiceInfo_PID
```
|
I think you need to join on the Districts table a second time: join once for the initial district and once for the current district. Try this:
```
select
ServiceInfo.pk_ServiceInfo_ServiceInfoID,
ServiceInfo.fk_Districts_ServiceInfo_CurrentDistrictID,
currentdistrict.DistrictName,
ServiceInfo.fk_Districts_ServiceInfo_InitialDistrictID,
initialdistrict.DistrictName
from
ServiceInfo
join
    Districts currentdistrict on currentdistrict.pk_Districts_DistrictID = ServiceInfo.fk_Districts_ServiceInfo_CurrentDistrictID
join
    Districts initialdistrict on initialdistrict.pk_Districts_DistrictID = ServiceInfo.fk_Districts_ServiceInfo_InitialDistrictID
join
PersonalInfo on PersonalInfo.pk_PersonalInfo_ID = ServiceInfo.fk_PersonalInfo_ServiceInfo_PID
```
You will notice that I have used a table alias for the Districts table to show which version looks up the current district and which looks up the initial one.
|
Try the following:
```
select s.pk_ServiceInfo_ServiceInfoID,
s.fk_Districts_ServiceInfo_CurrentDistrictID,
s.fk_Districts_ServiceInfo_InitialDistrictID,
d1.DistrictName as CurrentDistrictName,
d2.DistrictName as InitialDistrictName
from ServiceInfo s
join PersonalInfo p on p.pk_PersonalInfo_ID = s.fk_PersonalInfo_ServiceInfo_PID
join Districts d1 on d1.pk_Districts_DistrictID = s.fk_Districts_ServiceInfo_CurrentDistrictID
join Districts d2 on d2.pk_Districts_DistrictID = s.fk_Districts_ServiceInfo_InitialDistrictID
```
Notice that I used an alias on each table and made sure each alias was used every place the table name would have been used, including the `ON` portion of the joins.
Depending on your circumstance, you may want to make the joins to the **Districts** table be `LEFT OUTER JOIN`. This would be if you wanted to see the information from **ServiceInfo** and/or **PersonalInfo** tables even if there were no records in the **Districts** table for the values in the two foreign key fields. Currently, if either foreign key is missing for a person/service, no record will be returned.
|
How to apply inner join in this case?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm having a hard time with grouping. I'm working on ISTAT (Italian Institute of Statistics) data about my region's population; they give me data for each city and each age (0, 1, 2 and so on), and I need to group the ages into 10-year classes (0-9, 10-19, and so on) for EACH city. Example of the first few rows:
```
| ID | CodiceComune | Eta | Celibi | Coniugati | Divorziati | Vedovi | TotMaschi | Nubili | Coniugate | Divorziate | Vedove | TotFemmine |
+----+--------------+-----+--------+-----------+------------+--------+-----------+--------+-----------+------------+--------+------------+
| 1 | 42001 | 0 | 30 | 0 | 0 | 0 | 30 | 22 | 0 | 0 | 0 | 22 |
| 2 | 42001 | 1 | 22 | 0 | 0 | 0 | 22 | 22 | 0 | 0 | 0 | 22 |
| 3 | 42001 | 2 | 27 | 0 | 0 | 0 | 27 | 21 | 0 | 0 | 0 | 21 |
| 4 | 42001 | 3 | 23 | 0 | 0 | 0 | 23 | 26 | 0 | 0 | 0 | 26 |
| 5 | 42001 | 4 | 33 | 0 | 0 | 0 | 33 | 24 | 0 | 0 | 0 | 24
```
where `CodiceComune` is the ISTAT code assigned to each city, `Eta` is age (ranging from 0 to 100), `TotMaschi` is the total number of males having that very age in that city, `TotFemmine` is the total number of females having that very age in that city; you don't need the translation of the other columns since I don't need those data.
What I'd like to get is a view containing, FOR EACH CITY, the total number of males and the total number of females IN EACH AGE CLASS, that is, the number of males in city `42001` being between `0` and `9` years old, and so on.
For the record, I've tried the solution [here](https://stackoverflow.com/questions/3247630/mysql-group-by-age-range-including-null-ranges) but it doesn't fit my purpose and I'm not able to adapt the code in the link to my case; of course I know I could do it in Excel, but that would take forever since the table has more than 24,000 rows.
|
E.g.:
```
SELECT CodiceComune
, CONCAT(FLOOR((Eta+0.5)/10)*10,'-',(CEILING((Eta+0.5)/10)*10)-1) Age_group
, SUM(TotMaschi) m
, SUM(TotFemmine) f
FROM my_table
GROUP
BY CodiceComune
, FLOOR(Eta/10);
```
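A slightly simpler labelling of the same buckets, if it reads better (identical grouping, integer arithmetic only):

```
SELECT CodiceComune
     , CONCAT(FLOOR(Eta/10)*10, '-', FLOOR(Eta/10)*10 + 9) AS Age_group
     , SUM(TotMaschi) m
     , SUM(TotFemmine) f
  FROM my_table
 GROUP
    BY CodiceComune
     , FLOOR(Eta/10);
```

Since `FLOOR(Eta/10)` is constant within each group, the label expression is safe to select alongside the grouped columns.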
|
This will sum the number of males for the 0-9 age range in MySQL:
```
SELECT SUM(TotMaschi) FROM tablename WHERE CodiceComune = 42001 AND Eta BETWEEN 0 AND 9
```
|
Group ages by city
|
[
"",
"mysql",
"sql",
"excel",
"group-by",
""
] |
All I'm trying to do is verify a query.
```
'SELECT * from table_that_does_not_exist'
```
Without the whole request erroring out, I'd like to know the query failed so I can return a response that states "Error: table does not exist" or a generic error.
|
The simplest way to catch any SQL syntax or query error is to catch an `Illuminate\Database\QueryException` after running your query:
```
try {
    $results = \DB::connection("example")
                    ->select(\DB::raw("SELECT * FROM unknown_table"));
    // The exception is thrown when the query actually runs, i.e. on
    // ->select() here, or on builder fetch methods such as ->first(),
    // ->get(), ->pluck(), etc.
} catch(\Illuminate\Database\QueryException $ex){
    dd($ex->getMessage());
    // Note: QueryException extends PDOException, so any PDOException
    // method can be called on $ex.
}
```
If there are any errors, the program will `die(var_dump(...))` whatever it needs to.
Note: for namespacing, you need the leading `\` if the class is not imported with a `use` statement.
Also for reference:
[Laravel 5.5 API - Query Exception](http://laravel.com/api/5.5/Illuminate/Database/QueryException.html)
[Laravel 8.x API - Query Exception](http://laravel.com/api/8.x/Illuminate/Database/QueryException.html)
|
Wrap the lines of code you wish to catch an exception on in a try-catch statement:
```
try
{
//write your codes here
}
catch(Exception $e)
{
dd($e->getMessage());
}
```
Do not forget to include the Exception class at the top of your controller by saying
```
use Exception;
```
|
How do I catch a query exception in laravel to see if it fails?
|
[
"",
"sql",
"laravel-5",
""
] |
How do I filter column with particular value?
This works fine `>`
```
scala> dataframe.filter("postalCode > 900").count()
```
but `==` fails
```
scala> dataframe.filter("postalCode == 900").count()
java.lang.RuntimeException: [1.13] failure: identifier expected
postalCode == 900 ##Error line
```
I know I am missing something obvious but I can't figure out what. I checked the [API doc](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.DataFrame) and SO for the same. I also tried `===`.
|
The expression string you pass to `filter` / `where` should be a valid SQL expression, which means you have to use a single equals operator (the `===` you tried only works on `Column` expressions, e.g. `df.filter(df("postalCode") === 900)`, not inside an expression string):
```
dataframe.filter("postalCode = 900")
```
And example
```
val df = sc.parallelize(Seq(("foo", 900), ("bar", 100))).toDF("k", "postalCode")
df.where("postalCode = 900").show
// +---+----------+
// | k|postalCode|
// +---+----------+
// |foo| 900|
// +---+----------+
```
|
In `python` it may be approached this way (using @zero323 data):
```
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

df = sqlContext.createDataFrame(sc.parallelize(
[("foo", 900), ("bar", 100)]),
StructType([
StructField("k", StringType(), True),
StructField("v", IntegerType(), True)
])
)
filtered_df = df.where(df.v == 900)
filtered_df.show()
```
|
Spark dataframe checking equality and filtering
|
[
"",
"sql",
"apache-spark",
"apache-spark-sql",
""
] |
# Context:
I have an update query for a Microsoft Access DB that keeps failing. The query works when executed from within my DB, but fails when executed from my CodeFile for my .aspx page.
I have ensured that:
* My App\_Data folder has write permissions (via IUSR)
* My DB is not 'Read Only'
* My query syntax is correct
Does anyone have any advice on what I might be missing? **Thank you!!**
# Code:
```
Imports System.Data
Imports System.Data.SqlClient
Imports System.Web.UI.WebControls
Imports System.Data.OleDb
Partial Class jsDB
Inherits System.Web.UI.Page
Private con As New OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0; Data Source = C:\Dustin\App_Data\FAQ.accdb")
Public Sub Page_Load(sender as object, e as System.EventArgs)
If request.QueryString("type") = "helpful" Then
Dim cleanID as string
cleanID = request.querystring("id")
                If IsNumeric(cleanID) Then 'Make sure QueryString hasn't been tampered with (CInt would throw here before IsNumeric could run)
Dim sql as string
sql = "UPDATE QUESTION SET helpful=helpful+1 WHERE questionID=" & cleanID
Dim cmd As New OleDbCommand(sql, con)
con.Open()
cmd.ExecuteNonQuery()
con.Close()
Response.write("Success")
else
Response.write("Invalid ID")
end if
else
Response.write("No recognized type")
end If
End Sub
End Class
```
# Error:
Server Error in '/' Application.
Operation must use an updateable query.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.OleDb.OleDbException: Operation must use an updateable query.
Source Error:
Line 27:
Line 28: con.Open()
Line 29: cmd.ExecuteNonQuery()
Line 30: con.Close()
Line 31:
Source File: C:\Dustin\FAQDB.aspx.vb Line: 29
**Stack Trace:**
[OleDbException (0x80004005): Operation must use an updateable query.]
System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr) +1102900
System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult) +247
System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult) +189
System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult) +58
System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method) +162
System.Data.OleDb.OleDbCommand.ExecuteNonQuery() +107
jsDB.Page\_Load(Object sender, EventArgs e) in C:\Dustin\FAQDB.aspx.vb:29
System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +51
System.Web.UI.Control.OnLoad(EventArgs e) +92
System.Web.UI.Control.LoadRecursive() +54
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +772
|
Shout-out to @Steve for being such a big help on this.
I had several issues here. While I'm sure there are many issues which could produce the error I was experiencing, here's what resolved it for me:
* Created a new application pool in IIS and gave this new pool write permissions to my App\_Data folder
* Assigned this new app pool to my application
* **Modified the new app pool under advanced settings to allow 32 bit applications**
As of right now, the 'Microsoft.ACE.OLEDB.12.0' provider only works as 32-bit. Changing my application pool to allow 32 bit applications resolved the issue.
Thanks all for your input!
|
This help page addresses the error you are receiving:
<https://support.microsoft.com/en-us/kb/175168>
It looks like either you're not opening the connection in the right mode (Mode 3 in this case) or the "table" you are updating has conditions which prevent you from updating it. I'd bet, though, that changing your Mode to 3 will resolve the issue.
|
Update Query from ASP.NET to .MDB Failing
|
[
"",
"sql",
"asp.net",
"vb.net",
"ms-access",
""
] |
I'm looking to get distinct rows with a start and end date from a table with the structure below. I don't want duplicate rows with the same start and end month. Please note that start and end date are of NUMBER type here, not dates.
```
tbl_app_ranges:
rg_id start_month end_month
105 200401 200409
105 200401 200409
110 200701 200712
110 200701 200710
```
What I want is the below result set
```
rg_id start_month end_month
105 200401 200409
110 200701 200712
110 200701 200710
```
I know this can be done with analytics but not sure how. Is there a way to do this in pure SQL? I need the query to work against Oracle database.
|
```
select distinct rg_id, start_month, end_month from tbl_app_ranges;
```
|
You can use `GROUP BY rg_id, start_month, end_month` in your query.
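Spelled out, the `GROUP BY` form produces the same rows as the `DISTINCT` query above:

```
SELECT rg_id, start_month, end_month
FROM tbl_app_ranges
GROUP BY rg_id, start_month, end_month;
```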
|
How to select distinct date ranges with Oracle SQL
|
[
"",
"sql",
"oracle",
""
] |
So I am attempting my very first sub query and ran into a small problem... Why is the query not taking into account my WHERE clause?
My query:
```
SELECT *
FROM Product
WHERE stdUnitPrice < 5*
(SELECT MIN(stdUnitPrice)
FROM Product
WHERE discontinued = 'A')
```
But in the results I am still getting values in the discontinued column that are NOT just 'A'
Here are the results:
```
# productId, prodName, stdUnitPrice, qtyPerUnit, discontinued
3, Aniseed Syrup, 11.00, 12 - 550 ml bottles, A
13, Konbu, 6.60, 2 kg box, A
19, Teatime Chocolate Biscuits, 10.12, 10 boxes x 12 pieces, A
21, Sir Rodney's Scones, 11.00, 24 pkgs. x 4 pieces, A
23, Tunnbrod, 9.90, 12 - 250 g pkgs., A
**24, Guarana Fantastica, 4.95, 12 - 355 ml cans, D**
33, Geitost, 2.75, 500 g, A
41, Jack's New England Clam Chowder, 10.61, 12 - 12 oz cans, A
45, Rogede sild, 10.45, 1k pkg., A
46, Spegesild, 13.20, 4 - 450 g glasses, A
47, Zaanse koeken, 10.45, 10 - 4 oz boxes, A
52, Filo Mix, 7.70, 16 - 2 kg boxes, A
54, Tourtiere, 8.19, 16 pies, A
74, Longlife Tofu, 11.00, 5 kg pkg., A
75, Rhonbrau Klosterbier, 8.52, 24 - 0.5 l bottles, A
78, Bob's Down Home Juice, 7.90, 6 pack, A
```
|
Try this:
```
select *
from Product
where stdUnitPrice < (5 * (select min(stdUnitPrice) from Product ) )
and discontinued = 'A'
```
|
The reason this isn't working the way you want is that the outer query is missing the where clause. Let me break it down a little for you.
```
select * from Product where stdUnitPrice < 5* (select min(stdUnitPrice) from Product where discontinued = 'A')
```
In this query the subquery is executed first, so let's assume we assign the subquery to a variable:
```
var subquery = select min(stdUnitPrice) from Product where discontinued = 'A';
```
After that subquery is executed, its result is plugged back into the original query like so (using the new variable name):
```
select * from Product where stdUnitPrice < 5 * (subquery);
```
So to get this to include `where discontinued = 'A'` in the outer query as well, you would need to change your query to the following:
```
SELECT * FROM Product WHERE stdUnitPrice < 5 * (SELECT MIN(stdUnitPrice) FROM Product WHERE discontinued = 'A') AND discontinued = 'A';
```
I hope this helps.
EDIT: Just to clarify, you can't actually use a variable like that; I'm just using it as an example to show how the query is actually executed.
|
MySQL: Sub-query troubles
|
[
"",
"mysql",
"sql",
""
] |
The upper limit for any `int` data type (excluding `tinyint`), is always one less than the absolute value of the lower limit.
For example, the upper limit for an `int` is 2,147,483,647 and ABS(lower limit) = 2,147,483,648.
Is there a reason why there is always one more negative int than positive int?
**EDIT: Changed since question isn't directly related to DB's**
|
The types you provided are signed integers. Let's see one byte(8-bit) example. With 1 byte you have `2^8` combinations which gives you 256 possible numbers to store.
Now you want to have the same number of positive and negative numbers (each group should have 128).
The point is `0` doesn't have `+0` and `-0`. There is only one `0`.
So you end up with range `-128..-1..0..1..127`.
The same logic works for `16/32/64-bit`.
**EDIT:**
Why the range is `-128 to 127`?
It depends on how you **[`represent signed integer`](https://en.wikipedia.org/wiki/Signed_number_representations)**:
* Signed magnitude representation
* Ones' complement
* **[Two's complement](https://stackoverflow.com/questions/1049722/what-is-2s-complement)**
|
This question isn't really related to databases.
As lad2025 points out, a fixed number of bits gives an even number of values. Since 0 takes one of them, either the positive or the negative side must get one extra value. So the question you are asking is really: "Why is the extra value on the negative side?"
Basically, the reason is the sign bit. One possible implementation of negative numbers is to use n - 1 bits for the absolute value and then 0 or 1 for the sign bit. The problem with this approach is that it permits +0 and -0. That is not desirable.
To fix this, computer scientists devised the twos-complement representation for signed integers. ([Wikipedia explains this in more detail](https://en.wikipedia.org/wiki/Two%27s_complement).) Basically, this representation maintains the concept of a sign bit that can be tested. But it changes the representation. If +1 is represented as 001, then -1 is represented as 111. That is, the negative value is the bit-wise complement of the positive value minus one. In fact the negative is always generated by subtracting 1 and using the bit-wise complement.
The issue is then the value 100 (followed by any number of zeros). The sign bit is set, so it is negative. However, when you subtract 1 and invert, it becomes itself again (011 --> 100). There is an argument for calling this "infinity" or "not a number". Instead it is assigned the smallest possible negative number.
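To tie this back to SQL's integer types: the asymmetry is visible right at the type boundaries. A small SQL Server sketch using the 16-bit `smallint`:

```
SELECT CAST(-32768 AS smallint) AS low_ok    -- -2^15: valid
     , CAST( 32767 AS smallint) AS high_ok;  --  2^15 - 1: valid

-- SELECT CAST(32768 AS smallint);
-- fails with an arithmetic overflow error: the extra value is on the negative side.
```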
|
Why is there one more negative int than positive int?
|
[
"",
"sql",
"binary",
"int",
"signed",
""
] |
Currently I've gotten as far as
```
SELECT
DESCRIPTION,
LEFT(DESCRIPTION,
CASE
WHEN CHARINDEX(',', DESCRIPTION) = 0 THEN LEN(DESCRIPTION)
ELSE charindex(',', DESCRIPTION) - 1
END) AS LEFTDESCRIPTION,
RIGHT(DESCRIPTION,
CASE
WHEN CHARINDEX(',',REVERSE(DESCRIPTION)) = 0 THEN ''
ELSE charindex(',', DESCRIPTION) - 1
END) AS RIGHTDESCRIPTION
FROM TABLE
```
But the RIGHT portion doesn't seem to be correct, for example:
It gets a description of `example,this` and returns `ample,this` rather than just `this`.
|
Since you know how to get the contents `LEFT` of the comma, you can leverage the same manipulation logic on the `REVERSE`'d value to grab what would be to the `RIGHT` of the original string,
```
DECLARE @Var VARCHAR(50) = 'example,this'
SELECT @Var
, LEFT(@Var, CHARINDEX(',', @Var) - 1) -- returns 'example'
, REVERSE(LEFT(REVERSE(@Var), CHARINDEX(',', REVERSE(@Var)) - 1)) -- returns 'this'
-- or when using the RIGHT function,
, RIGHT(@Var, LEN(@Var) - CHARINDEX(',', @Var)) -- returns 'this'
```
Per Juan's comment, below is the same `SELECT` as above (this time taking the `RIGHT` route rather than the `REVERSE` logic) that returns the whole value when the comma delimiter is not found:
```
DECLARE @Var VARCHAR(100) = 'example,this'
SELECT @Var
, CASE WHEN @Var LIKE '%,%' THEN LEFT(@Var, CHARINDEX(',', @Var) - 1)
ELSE @Var END
, CASE WHEN @Var LIKE '%,%' THEN RIGHT(@Var, LEN(@Var) - CHARINDEX(',', @Var))
ELSE @Var END
```
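For what it's worth, the same split can be checked with the generic `SUBSTR`/`INSTR` functions (here in SQLite via Python's `sqlite3`, where `INSTR` plays the role of `CHARINDEX`; the table-less subquery is just for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INSTR returns the 1-based position of the comma, or 0 when it is absent.
row = conn.execute("""
    SELECT CASE WHEN INSTR(v, ',') > 0
                THEN SUBSTR(v, 1, INSTR(v, ',') - 1) ELSE v END AS left_part,
           CASE WHEN INSTR(v, ',') > 0
                THEN SUBSTR(v, INSTR(v, ',') + 1) ELSE v END AS right_part
    FROM (SELECT 'example,this' AS v)
""").fetchone()
assert row == ('example', 'this')
```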
|
I would be inclined to use `stuff()` for this:
```
select description,
left(description, charindex(',', description) - 1) as leftdescription,
       stuff(description, 1, charindex(',', description), '') as rightdescription
from t;  -- t stands for your table
```
If you are concerned about `,` not being there, then use `case`:
```
select description,
(case when description like '%,%'
then left(description, charindex(',', description) - 1)
else description
end) as leftdescription,
(case when description like '%,%'
then stuff(description, 1, charindex(',', description), '')
        end) as rightdescription
from t;  -- t stands for your table
```
|
How do I Select characters to the left of a ',' and then characters to the right of a ','?
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I want to create a query for project listings that would give the number of registered applications, excluding the ones for which the user does not exist.
In this case, considering user 10 does not exist, I should have the query results as follows:
RESULTS
```
+----+------------+--------------+
| id | project | applications |
+----+------------+--------------+
| 1 | MyProject1 | 3 |
| 2 | MyProject2 | 0 |
| 3 | MyProject3 | 0 |
+----+------------+--------------+
```
# TABLES
```
Projects
+----+------------+
| id | name |
+----+------------+
| 1 | MyProject1 |
| 2 | MyProject2 |
| 3 | MyProject3 |
+----+------------+
applications
+----+------+------------+
| id | user | project_id |
+----+------+------------+
| 1 | 3 | 1 |
| 2 | 4 | 1 |
| 3 | 5 | 1 |
| 4 | 10 | 1 |
+----+------+------------+
users
+----+---------+
| id | Name |
+----+---------+
| 1 | Smith |
| 2 | John |
| 3 | Paul |
| 4 | Chris |
| 5 | Gabriel |
+----+---------+
```
The below query is not excluding the non-existing users:
```
SELECT `projects` . * , (
SELECT COUNT( * )
FROM `applications`
WHERE `applications`.`project_id` = `projects`.`id`
AND EXISTS (
SELECT `applications`.`id`
FROM `applications` , `users`,`project`
WHERE `application`.`user` = `users`.`id` AND `applications`.`project_id` = `project`.`id`
)
) AS `applications`
FROM `projects` ORDER BY `id` DESC LIMIT 30
```
|
Your query looks overly complicated. This should do:
```
select
id,
name as project,
(
select count(*)
from applications a
where a.project_id = p.id
and a.user in (select id from users)
) as applications
from projects p;
```
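As a sanity check, here is the same correlated-count query run against the question's sample data in SQLite (via Python's `sqlite3`; table and column names taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE applications (id INTEGER PRIMARY KEY, user INTEGER, project_id INTEGER);
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO projects VALUES (1,'MyProject1'),(2,'MyProject2'),(3,'MyProject3');
    INSERT INTO applications VALUES (1,3,1),(2,4,1),(3,5,1),(4,10,1);
    INSERT INTO users VALUES (1,'Smith'),(2,'John'),(3,'Paul'),(4,'Chris'),(5,'Gabriel');
""")
rows = conn.execute("""
    SELECT id, name,
           (SELECT COUNT(*) FROM applications a
            WHERE a.project_id = p.id
              AND a.user IN (SELECT id FROM users)) AS applications
    FROM projects p ORDER BY id
""").fetchall()
# Application from the non-existent user 10 is excluded from the count.
assert rows == [(1, 'MyProject1', 3), (2, 'MyProject2', 0), (3, 'MyProject3', 0)]
```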
|
I think you want `left join` and `group by`:
```
select p.id, p.name, count(u.id)
from projects p left join
applications a
on p.id = a.project_id left join
users u
     on a.user = u.id
group by p.id, p.name;
```
However, you might want to think about fixing the data. It seems like there should be foreign key relationships between applications and projects, and between applications and users. The ability to have an invalid user means that there is no valid foreign key relationship to users.
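To illustrate that last point, here is a minimal sketch of the foreign key fix in SQLite (via Python's `sqlite3`; note that SQLite only enforces foreign keys once `PRAGMA foreign_keys = ON` is set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE applications (
        id INTEGER PRIMARY KEY,
        user INTEGER NOT NULL REFERENCES users(id),
        project_id INTEGER
    );
    INSERT INTO users VALUES (3, 'Paul');
""")
conn.execute("INSERT INTO applications VALUES (1, 3, 1)")  # valid user: accepted
try:
    conn.execute("INSERT INTO applications VALUES (4, 10, 1)")  # user 10 missing
    raise AssertionError("expected a foreign key violation")
except sqlite3.IntegrityError:
    pass  # the invalid row is rejected, so counts can never include ghost users
```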
|
Count rows from one tables of users in another table
|
[
"",
"mysql",
"sql",
""
] |
What is the SQL WHERE clause that checks whether a column `TopicID` (containing a list such as `'1,5,14,18'`) has the value `'1'`?
```
SELECT TOP 10 *
FROM topics
WHERE {TopicID has the value '1'}
```
thanks.
|
Maybe I'm just interpreting your question incorrectly. I think you need `like`:
```
SELECT TOP 10 *
FROM topics
WHERE TopicID like '1,%' or TopicID like '%,1' or TopicID like '%,1,%' or TopicID = '1'
```
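To see why all four patterns are needed (a plain `'%1%'` would wrongly match `'11,21'`), here is a quick check in SQLite via Python's `sqlite3` (`TOP 10` becomes `LIMIT 10` there; the `ORDER BY rowid` is only for a deterministic result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE topics (TopicID TEXT);
    INSERT INTO topics VALUES ('1,5,14,18'), ('5,14'), ('11,21'), ('1'), ('21,1');
""")
rows = conn.execute("""
    SELECT TopicID FROM topics
    WHERE TopicID LIKE '1,%' OR TopicID LIKE '%,1'
       OR TopicID LIKE '%,1,%' OR TopicID = '1'
    ORDER BY rowid LIMIT 10
""").fetchall()
assert rows == [('1,5,14,18',), ('1',), ('21,1',)]  # '11,21' stays excluded
```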
|
If TopicId is a numeric field then:
```
SELECT TOP 10 * FROM topics where TopicId = 1
```
|
SQL - check if a value is in column list
|
[
"",
"sql",
""
] |
I have a table called `notes` and there I have three fields:
```
id | start_time | duration
1 | 2015-10-21 19:41:35 | 15
2 | 2015-10-21 19:41:50 | 15
3 | 2015-10-21 19:42:05 | 15
4 | 2015-10-21 19:42:35 | 15
etc.
```
`id` is INT field
`start_time` is a TIMESTAMP field
`duration` is INT field that tells the number of seconds of how long each event is.
I'm writing an SQL query that takes 3 fields as input: `duration`, `begin_time` and `end_time`, and returns a `timestamp` of exactly where the new event can fit.
Based on lots of questions similar to mine on StackOverflow (mostly this particular one [MySQL / PHP - Find available time slots](https://stackoverflow.com/questions/13414795/mysql-php-find-available-time-slots) ) I created a query:
```
SELECT (a.start_time + a.duration) AS free_after FROM notes a
WHERE NOT EXISTS ( SELECT 1 FROM notes b
WHERE b.start_time BETWEEN (a.start_time+a.duration) AND
(a.start_time+a.duration) + INTERVAL '$duration' SECOND) AND
(a.start_time+a.duration) BETWEEN '$begin_time' AND '$end_time'
```
But when I run it as follows:
```
SELECT (a.start_time + a.duration) AS free_after FROM notes a
WHERE NOT EXISTS ( SELECT 1 FROM notes b WHERE b.start_time
BETWEEN (a.start_time+a.duration) AND (a.start_time+a.duration) + INTERVAL 15 SECOND )
AND (a.start_time+a.duration) BETWEEN '2015-11-21 19:41:30' AND '2015-11-21 19:43:50'
```
I get no records found, even though - looking at the table mentioned above - I should get a result:
```
free_after
2015-11-21 19:42:20
```
(because between those two records:
```
3 | 2015-10-21 19:42:05 | 15
4 | 2015-10-21 19:42:35 | 15
```
there is a free slot of 15 seconds). So why does my query not work properly?
==== EDIT:
After following Richard's advice I did a `show warnings` query and it returned me the values:
```
Truncated incorrect DOUBLE value: '2015-11-21 19:4...
Truncated incorrect DOUBLE value: '2015-11-21 19:4...
Truncated incorrect DOUBLE value: '2015-11-21 19:4...
Truncated incorrect DOUBLE value: '2015-11-21 19:4...
Incorrect datetime value: '0'
```
and I'm not sure what causes it, could you help me with that?
===== EDIT 2
Okay, after following another @Richard's advice (thanks btw!) I changed my query to the following one:
```
SELECT (a.start_time + INTERVAL a.duration SECOND) AS free_after FROM notes a
WHERE
NOT EXISTS ( SELECT 1 FROM notes b WHERE b.start_time
BETWEEN (a.start_time + INTERVAL a.duration SECOND) AND
(a.start_time + INTERVAL a.duration SECOND) + INTERVAL 15 SECOND ) AND
(a.start_time + INTERVAL a.duration SECOND) BETWEEN '2015-11-21 19:41:30' AND '2015-11-21 19:43:50'
```
and this time I got a result (Which is a good message):
```
2015-10-21 19:42:50
```
but it's a bad value (which is a worse message). It should be `2015-10-21 19:42:20`... Is there anything else that I'm doing wrong here? When I do `show warnings` right after it - there's nothing more here :(
|
The `BETWEEN` clause matches its end dates inclusively. With your sample data, once you add the duration of 15 seconds and the fixed duration, also of 15 seconds, the range reaches the last sample row exactly 30 seconds on, and hence rules that slot out. Modify your query to
```
SELECT (a.start_time + INTERVAL a.duration SECOND) AS free_after FROM notes a
WHERE
NOT EXISTS ( SELECT 1 FROM notes b WHERE b.start_time
BETWEEN (a.start_time + INTERVAL a.duration SECOND) AND
(a.start_time + INTERVAL a.duration SECOND) + INTERVAL 15 SECOND - INTERVAL 1 MICROSECOND) AND
(a.start_time + INTERVAL a.duration SECOND) BETWEEN '2015-10-21 19:41:30' AND '2015-10-21 19:43:50'
```
(i.e. taking off 1 microsecond as an adjustment)
and you get the answer:
```
+---------------------+
| free_after |
+---------------------+
| 2015-10-21 19:42:20 |
| 2015-10-21 19:42:50 |
+---------------------+
2 rows in set (0.00 sec)
```
|
Short answer: Because you can't just add `TIMESTAMP` to `INT`.
See for example: [How to add an offset to all timestamps/DATETIME in a MySQL database?](https://stackoverflow.com/questions/9077074/how-to-add-an-offset-to-all-timestamps-datetime-in-a-mysql-database)
Long answer:
As a general rule when debugging a complicated SQL query, start from a simpler form and then add your more complex clauses one by one until it stops doing what you expect, then fix that. In your case, start with
```
SELECT (a.start_time + a.duration) AS free_after FROM notes a;
```
Which when I try it on my system gives:
```
+----------------+
| free_after |
+----------------+
| 20151021194150 |
| 20151021194165 |
| 20151021194220 |
| 20151021190250 |
+----------------+
```
Those don't look like normal `DATETIME` or `TIMESTAMP` values to me. I notice that in one place your query does use the `INTERVAL <value> SECOND` notation to add an offset to a `TIMESTAMP`, but this is neglected in all the other additions.
```
mysql> SELECT (a.start_time + INTERVAL a.duration SECOND) AS free_after FROM notes a;
+---------------------+
| free_after |
+---------------------+
| 2015-10-21 19:41:50 |
| 2015-10-21 19:42:05 |
| 2015-10-21 19:42:20 |
| 2015-10-21 19:02:50 |
+---------------------+
4 rows in set (0.00 sec)
```
That looks better. Following the same principle throughout, the full query gives the expected result.
Also, as Richard commented, you should have seen warnings reported. Using `show warnings;` in mysql would then have given you a similar clue by complaining about "Incorrect datetime value" and "Truncated incorrect DOUBLE value".
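For completeness, the slot-finding logic itself can be sketched in plain Python with `datetime`, using the question's sample rows; the half-open comparison (`<` on the upper bound) plays the same role as the 1-microsecond adjustment in the other answer:

```python
from datetime import datetime, timedelta

notes = [
    (datetime(2015, 10, 21, 19, 41, 35), 15),
    (datetime(2015, 10, 21, 19, 41, 50), 15),
    (datetime(2015, 10, 21, 19, 42, 5), 15),
    (datetime(2015, 10, 21, 19, 42, 35), 15),
]
need = timedelta(seconds=15)        # duration of the event to place
begin = datetime(2015, 10, 21, 19, 41, 30)
end = datetime(2015, 10, 21, 19, 43, 50)

free = []
for start, dur in notes:
    candidate = start + timedelta(seconds=dur)  # moment this note finishes
    if not (begin <= candidate <= end):
        continue
    # half-open interval: a note starting exactly at candidate + need still fits
    if not any(candidate <= s < candidate + need for s, _ in notes):
        free.append(candidate)

assert free == [datetime(2015, 10, 21, 19, 42, 20),
                datetime(2015, 10, 21, 19, 42, 50)]
```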
|
A query for finding the nearest free time slot in mysql - why it doesn't work?
|
[
"",
"mysql",
"sql",
""
] |
I want to create table from some metadata of another table in hive. I'm using hive.
I know from [this](https://stackoverflow.com/questions/22532419/sql-query-for-searching-column-name-in-database) question that the metadata can be retrieved from a table via `INFORMATION_SCHEMA.COLUMNS` in SQL:
Does `HIVE` have similar access to a table's metadata that would allow me to create a table using the columns of another table? Essentially, I'm copying a table without all of the tuples.
This is the best thing I have so far:
`create table <table_name>( (select <table_name> from INFORMATION_SCHEMA.COLUMNS)) row format delimited fields by '|';`
|
You can try `CTAS` syntax:
```
CREATE TABLE new_table_name
AS
SELECT *
FROM old_table_name
WHERE 1 = 2;
```
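The `WHERE 1 = 2` trick is not Hive-specific. Here is the same idea checked in SQLite via Python's `sqlite3` (hypothetical table names; note SQLite copies column names but not constraints):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE old_table_name (id INTEGER, name TEXT);
    INSERT INTO old_table_name VALUES (1, 'a'), (2, 'b');
    CREATE TABLE new_table_name AS
    SELECT * FROM old_table_name WHERE 1 = 2;
""")
cols = [r[1] for r in conn.execute("PRAGMA table_info(new_table_name)")]
assert cols == ['id', 'name']  # structure copied
assert conn.execute("SELECT COUNT(*) FROM new_table_name").fetchone() == (0,)  # no rows
```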
|
You can use `LIKE` so the new table gets the structure but not the data.
[Hive documentation](http://archive.cloudera.com/cdh4/cdh/4/hive/language_manual/data-manipulation-statements.html)
```
CREATE TABLE yourtable
LIKE table2;
```
|
Create table in HIVE with same columns as another table?
|
[
"",
"sql",
"hadoop",
"hive",
""
] |
When view is created including an `ORDER BY` clause, will that order be maintained when using `SELECT * FROM VIEW`?
Or is it necessary to state `SELECT * FROM VIEW ORDER BY` ?
|
You would need an `order by` on the outer query to guarantee that the rows are returned in the order you want.
If you run a `select * from view` query against the view without a `where` clause, it is likely that the rows would be returned in the order defined in the `order by`. But that's not something that you should depend on.
|
Oracle 19 definitely supports writing `create view ... as select ... from ... order by ...`. As for why anyone would do it: Oracle may use this as a hint to return the results sorted as specified. Still, I'd never recommend writing `order by` in the view definition. Ordering should always be done by the client application, or at least outside of the view, because there can always be a situation where the client, or a select from the view, requires a different result ordering. In that case the `order by` within the view is probably an unnecessary performance cost, and for an application which orders the data only after receiving the results from Oracle, it is definitely unnecessary.
|
Does a select on a view keep the order by from the view?
|
[
"",
"sql",
"oracle",
""
] |
There is an SQL query I'm after (in SQL Server). I need to get the count of instances where company Y is more expensive than company X. How would I start to tackle this? I've looked through various examples, but I cannot find anything similar. I see PARTITION BY could be helpful, but I'm not sure how to start from there - any hint will be very helpful.
```
ReadingId | Product | Price | Company
----------------------------------------------
1 | A | 3 | X
2 | A | 4 | Y
3 | A | 5 | Z
4 | B | 11 | X
5 | B | 12 | Y
6 | B | 13 | Z
...
```
|
One method is conditional aggregation. For each product:
```
select product,
       max(case when company = 'Y' then price end) as Yprice,
max(case when company = 'X' then price end) as Xprice
from t
group by product;
```
For a count, you can then do:
```
select count(*)
from (select product,
             max(case when company = 'Y' then price end) as Yprice,
             max(case when company = 'X' then price end) as Xprice
      from t
      group by product
) p
where Yprice > Xprice;
```
There are other methods as well. `Pivot` can be used, as well as a `join` with aggregation:
```
select count(*)
from t ty join
t tx
on ty.company = 'Y' and tx.company = 'X' and ty.product = tx.product
where ty.price > tx.price;
```
I should point out that all these methods sort of assume that X and Y only appear once for each product. That seems reasonable given your data.
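Here is the conditional-aggregation version checked against the sample data in SQLite via Python's `sqlite3` (Y is more expensive than X for both products A and B, so the count is 2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (ReadingId INTEGER, Product TEXT, Price INTEGER, Company TEXT);
    INSERT INTO t VALUES (1,'A',3,'X'),(2,'A',4,'Y'),(3,'A',5,'Z'),
                         (4,'B',11,'X'),(5,'B',12,'Y'),(6,'B',13,'Z');
""")
count = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT Product,
               MAX(CASE WHEN Company = 'Y' THEN Price END) AS Yprice,
               MAX(CASE WHEN Company = 'X' THEN Price END) AS Xprice
        FROM t GROUP BY Product
    ) p WHERE Yprice > Xprice
""").fetchone()[0]
assert count == 2  # Y beats X for both product A (4 > 3) and product B (12 > 11)
```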
|
This query, though not efficient, will give you the details of what you need:
```
Select
CompanyY.*,
CompanyX.*
FROM
(
select * from OrderDetails
where Company = 'Y'
) CompanyY
JOIN
(
select * from OrderDetails
where Company = 'X'
) CompanyX
ON CompanyX.Product = CompanyY.Product
WHERE CompanyY.Price > CompanyX.Price
```
**Try the SQLFiddle [Here](http://sqlfiddle.com/#!3/75170/4/0)**
|
SQL query to compare subsets of rows between each other
|
[
"",
"sql",
"sql-server",
""
] |
[](https://i.stack.imgur.com/u0uQK.jpg)
SAMPLE DATA:
```
CREATE TABLE poly_and_multipoly (
"id" SERIAL NOT NULL PRIMARY KEY,
"name" char(1) NOT NULL,
"the_geom" geometry NOT NULL
);
-- add data, A is a polygon, B is a multipolygon
INSERT INTO poly_and_multipoly (name, the_geom) VALUES (
'A', 'POLYGON((7.7 3.8,7.7 5.8,9.0 5.8,7.7 3.8))'::geometry
), (
'B',
'MULTIPOLYGON(((0 0,4 0,4 4,0 4,0 0),(1 1,2 1,2 2,1 2,1 1)), ((-1 -1,-1 -2,-2 -2,-2 -1,-1 -1)))'::geometry
);
```
I have a table of multipolygons and polygons and I am trying to calculate the interior angles of the exterior rings in the table (i.e. no interior rings...) using [ST\_Azimuth](http://postgis.net/docs/manual-1.4/ST_Azimuth.html). Is there any way to modify the attached query to use [ST\_Azimuth](http://postgis.net/docs/manual-1.4/ST_Azimuth.html) on the sp and ep of the linestrings?
```
SELECT id, name, ST_AsText( ST_MakeLine(sp,ep) )
FROM
-- extract the endpoints for every 2-point line segment for each linestring
(SELECT id, name,
ST_PointN(geom, generate_series(1, ST_NPoints(geom)-1)) as sp,
ST_PointN(geom, generate_series(2, ST_NPoints(geom) )) as ep
FROM
-- extract the individual linestrings
(SELECT id, name, (ST_Dump(ST_Boundary(the_geom))).geom
FROM poly_and_multipoly
) AS linestrings
) AS segments;
1;"A";"LINESTRING(7.7 3.8,7.7 5.8)"
1;"A";"LINESTRING(7.7 5.8,9 5.8)"
1;"A";"LINESTRING(9 5.8,7.7 3.8)"
2;"B";"LINESTRING(0 0,4 0)"
```
|
Using the ST\_Azimuth function as in aengus's example gives an error, because you can't use two generate\_series calls as parameters of a single function. If you compute the ep / sp values in a separate subquery first, then you won't have any trouble.
Here is my solution:
```
-- 3.- Create segments from points and calculate azimuth for each line.
-- two calls of generate_series for a single function wont work (azimuth).
select id,
name,
polygon_num,
point_order as vertex,
--
case when point_order = 1
then last_value(ST_Astext(ST_Makeline(sp,ep))) over (partition by id, polygon_num)
else lag(ST_Astext(ST_Makeline(sp,ep)),1) over (partition by id, polygon_num order by point_order)
end ||' - '||ST_Astext(ST_Makeline(sp,ep)) as lines,
--
abs(abs(
case when point_order = 1
then last_value(degrees(ST_Azimuth(sp,ep))) over (partition by id, polygon_num)
else lag(degrees(ST_Azimuth(sp,ep)),1) over (partition by id, polygon_num order by point_order)
end - degrees(ST_Azimuth(sp,ep))) -180 ) as ang
from (-- 2.- extract the endpoints for every 2-point line segment for each linestring
-- Group polygons from multipolygon
select id,
name,
coalesce(path[1],0) as polygon_num,
generate_series(1, ST_Npoints(geom)-1) as point_order,
ST_Pointn(geom, generate_series(1, ST_Npoints(geom)-1)) as sp,
ST_Pointn(geom, generate_series(2, ST_Npoints(geom) )) as ep
from ( -- 1.- Extract the individual linestrings and the Polygon number for later identification
select id,
name,
(ST_Dump(ST_Boundary(the_geom))).geom as geom,
(ST_Dump(ST_Boundary(the_geom))).path as path -- To identify the polygon
from poly_and_multipoly ) as pointlist ) as segments;
```
I've added some complexity because I wanted to identify each polygon, to avoid mixing lines from different polygons in the angle calculation.
As sqlfiddle doesn't support PostGIS, I've uploaded this example with some complementary code to github [here](https://github.com/vamaq/rdbms_playground/blob/master/examples/PostGis_polygon_angle_calculation/pa_calculation.sql)
|
I needed the internal angles for only polygons. The accepted solution by vamaq did not consistently give me the internal angles, but sometimes external angles, for an arbitrarily shaped polygon. This is because it calculates the angle between two lines without any consideration of which side of those two lines is internal, so if the internal angle is obtuse vamaq's solution gives the external angle instead.
In my solution I use the cosine rule to determine the angle between two lines without reference to the Azimuth and then use ST\_Contains to check whether the angle of the polygon is acute or obtuse so that I can ensure the angle is internal.
```
CREATE OR REPLACE FUNCTION is_internal(polygon geometry, p2 geometry, p3 geometry)
RETURNS boolean as
$$
BEGIN
return st_contains(polygon, st_makeline(p2, p3));
END
$$ language plpgsql;
CREATE OR REPLACE FUNCTION angle(p1 geometry, p2 geometry, p3 geometry)
RETURNS float AS
$$
DECLARE
p12 float;
p23 float;
p13 float;
BEGIN
select st_distance(p1, p2) into p12;
select st_distance(p1, p3) into p13;
select st_distance(p2, p3) into p23;
return acos((p12^2 + p13^2 - p23^2) / (2*p12*p13));
END
$$ language plpgsql;
CREATE OR REPLACE FUNCTION internal_angle(polygon geometry, p1 geometry, p2 geometry, p3 geometry)
RETURNS float as
$$
DECLARE
ang float;
is_intern boolean;
BEGIN
select angle(p1, p2, p3) into ang;
select is_internal(polygon, p2, p3) into is_intern;
IF not is_intern THEN
select 6.28318530718 - ang into ang;
END IF;
return ang;
END
$$ language plpgsql;
CREATE OR REPLACE FUNCTION corner_triplets(geom geometry)
RETURNS table(corner_number integer, p1 geometry, p2 geometry, p3 geometry) AS
$$
DECLARE
max_corner_number integer;
BEGIN
create temp table corners on commit drop as select path[2] as corner_number, t1.geom as point from (select (st_dumppoints($1)).*) as t1 where path[1] = 1;
select max(corners.corner_number) into max_corner_number from corners;
insert into corners (corner_number, point) select 0, point from corners where corners.corner_number = max_corner_number - 1;
create temp table triplets on commit drop as select t1.corner_number, t1.point as p1, t2.point as p2, t3.point as p3 from corners as t1, corners as t2, corners as t3 where t1.corner_number = t2.corner_number + 1 and t1.corner_number = t3.corner_number - 1;
return QUERY TABLE triplets;
END;
$$
LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION internal_angles(geom geometry)
RETURNS table(corner geometry, angle float)
AS $$
BEGIN
create temp table internal_angs on commit drop as select p1, internal_angle(geom, p1, p2, p3) from (select (c).* from (select corner_triplets(geom) as c) as t1) as t2;
return QUERY TABLE internal_angs;
END;
$$
LANGUAGE plpgsql;
```
Usage:
```
select (c).* into temp from (select internal_angles(geom) as c from my_table) as t;
```
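The cosine-rule step at the heart of `angle()` can be sanity-checked on its own in plain Python (hypothetical points; this computes the angle at `p1` between the segments to `p2` and `p3`):

```python
import math

def angle_at(p1, p2, p3):
    """Angle at p1 (in radians) via the law of cosines, mirroring angle() above."""
    d12 = math.dist(p1, p2)
    d13 = math.dist(p1, p3)
    d23 = math.dist(p2, p3)
    return math.acos((d12**2 + d13**2 - d23**2) / (2 * d12 * d13))

# A unit right angle: the corner of the square (0,0)-(1,0)-(0,1)
assert abs(angle_at((0, 0), (1, 0), (0, 1)) - math.pi / 2) < 1e-6
# An equilateral triangle has 60 degrees at every corner
assert abs(math.degrees(angle_at((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))) - 60) < 1e-6
```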
|
Calculate the angle of exterior rings PostGIS (polygons & multipolygons)
|
[
"",
"sql",
"postgresql",
"postgis",
""
] |
This is a follow up to my previous question [A query for finding the nearest free time slot in mysql - why it doesn't work?](https://stackoverflow.com/questions/33688658/a-query-for-finding-the-nearest-free-time-slot-in-mysql-why-it-doesnt-work/)
Basically I have a table:
```
id | start_time | duration
1 | 2015-10-21 19:41:35 | 15
2 | 2015-10-21 19:41:50 | 15
3 | 2015-10-21 19:42:05 | 15
4 | 2015-10-21 19:42:35 | 15
etc.
```
and it contains the event start\_time and its duration. I asked for help with finding the nearest time slot in which the event can be placed between the existing events. @Richard came up with a perfect answer <https://stackoverflow.com/a/33689786/3766930> and suggested a query:
```
SELECT (a.start_time + INTERVAL a.duration SECOND) AS free_after FROM notes a
WHERE
NOT EXISTS ( SELECT 1 FROM notes b WHERE b.start_time
BETWEEN (a.start_time + INTERVAL a.duration SECOND) AND
(a.start_time + INTERVAL a.duration SECOND) + INTERVAL 15 SECOND - INTERVAL 1 MICROSECOND) AND
(a.start_time + INTERVAL a.duration SECOND) BETWEEN '2015-10-21 19:41:30' AND '2015-10-21 19:43:50'
```
which works great.
Now I was wondering if there's a possibility of finding a most suitable date not only between the existing dates, but also right before them.
For example: I would set a begin\_date as `2015-10-21 16:00:00` and end\_date as `2015-10-21 21:00:00`. Currently the result of @Richard's query would be `2015-10-21 19:42:20`. But is there a way of creating a query whose result will also include `2015-10-21 19:41:20`, i.e. the slot closest to, and just before, the first date that is already in the database?
|
A simple solution for this would be to use date\_sub with an order by statement, limiting the results to show only 1 record.
This is the result:
```
SELECT date_sub(start_time, interval duration second) as free_before FROM `notes` where start_time>'2015-10-21 16:00:00' order by start_time asc limit 1
```
**Bonus for you**
Using the previous solution @Richard provided. Putting it all together to show all free times in 1 table could result in the following:
```
select * from (SELECT date_sub(start_time, interval duration second) as free_times FROM `notes` where start_time>'2015-10-21 16:00:00' order by start_time asc limit 1) a
union
(SELECT (a.start_time + INTERVAL a.duration SECOND) AS free_times FROM notes a
WHERE
NOT EXISTS ( SELECT 1 FROM notes b WHERE b.start_time
BETWEEN (a.start_time + INTERVAL a.duration SECOND) AND
(a.start_time + INTERVAL a.duration SECOND) + INTERVAL 15 SECOND - INTERVAL 1 MICROSECOND) AND
(a.start_time + INTERVAL a.duration SECOND) BETWEEN '2015-10-21 19:41:30' AND '2015-10-21 19:43:50')
```
**Edit:**
I will only write my part of the query. The other part was working correctly, and I will only change it if you want me to (never fix something that ain't broke).
If you want interval of 10 seconds ->
```
SELECT date_sub(start_time, interval 10 second) as free_times FROM `notes` where start_time>'2015-10-21 16:00:00' order by start_time asc limit 1
```
If you want interval of 15 seconds ->
```
SELECT date_sub(start_time, interval 15 second) as free_times FROM `notes` where start_time>'2015-10-21 16:00:00' order by start_time asc limit 1
```
In this case you'll have to change your start\_time and the duration accordingly.
|
Would this work for you?
Given:
```
select * from notes;
+----+---------------------+----------+
| id | start_time | duration |
+----+---------------------+----------+
| 1 | 2015-11-17 10:10:10 | 15 |
| 2 | 2015-11-17 10:20:40 | 15 |
| 3 | 2015-11-17 10:30:00 | 15 |
+----+---------------------+----------+
```
This result:
```
select (start_time - interval 15 second) as earlier_date
from notes
where start_time > '2015-11-17 10:15:00'
AND start_time < '2015-11-17 10:25:00'
order by start_time
limit 1;
+---------------------+
| earlier_date |
+---------------------+
| 2015-11-17 10:20:25 |
+---------------------+
```
**Important:** This sample doesn't pay any attention to entries that might fall immediately in front of the search window (because your example didn't include any). This query as-is *will* create overlaps if there are impinging entries just prior to the search window.
|
I have an SQL query for finding the nearest free time after a specific date. How can I make it also look for a date BEFORE this date?
|
[
"",
"mysql",
"sql",
""
] |
I have a function written in plpythonu.
```
CREATE OR REPLACE FUNCTION temp(t_x integer[])
RETURNS void AS
$BODY$
.
.
.
x=plpy.execute("""select array_to_string(select id from A where id=any(array%s) ), ',')"""%t_x)
```
In some cases, when t\_x is empty, I get an error:
> ERROR: cannot determine type of empty array
How do I fix it?
|
## if-else statement
Pseudocode:
```
if t_x is empty
x=null
else
x=plpy.execute....
```
## cast
```
x=plpy.execute("""select array_to_string(select id from A where id=any(array%s::integer[]) ), ',')"""%t_x)
```
**Why cast help?**
```
postgres=# select pg_typeof(array[1,2]);
pg_typeof
-----------
integer[]
```
`array[1,2]` has type `integer[]`
---
```
postgres=# select array[];
ERROR: cannot determine type of empty array
LINE 1: select array[];
```
`array[]` has no type.
---
```
postgres=# select pg_typeof(array[]::integer[]);
pg_typeof
-----------
integer[]
```
`array[]::integer[]` has type `integer[]`
|
Maybe this can help someone in the future.
The function may receive NULL or an empty array; in that case it is treated as equivalent to ALL:
```
CREATE OR REPLACE FUNCTION temp(t_x integer[])
RETURNS void AS
$BODY$
.
.
.
x=plpy.execute("""select array_to_string(select id from A where id=any(array%s) or cardinality(coalesce(array%s,array[]::integer[])) = 0), ',')"""%t_x)
```
|
How to handle cases where array is empty in PostgreSQL?
|
[
"",
"sql",
"arrays",
"database",
"postgresql",
"types",
""
] |
For example, I have a table called 'Table1' and a column called 'country'.
I want to count the number of distinct words in a string. Below is my data for column 'country':
```
country:
"japan singapore japan chinese chinese chinese"
```
Expected output: in the data above, japan appears twice, singapore once and chinese three times. I want to count each distinct word once, i.e. japan counts as one, singapore as one and chinese as one; hence the output will be 3. Please help me.
```
ValueOfWord: 3
```
|
Firstly, it is a bad design to store multiple values in a single column as delimited string. You should consider **normalizing** the data as a permanent solution.
With the denormalized data, you could do it in a single SQL using **REGEXP\_SUBSTR**:
```
SELECT COUNT(DISTINCT(regexp_substr(country, '[^ ]+', 1, LEVEL))) as "COUNT"
FROM table_name
CONNECT BY LEVEL <= regexp_count(country, ' ')+1
/
```
**Demo:**
```
SQL> WITH sample_data AS
2 ( SELECT 'japan singapore japan chinese chinese chinese' str FROM dual
3 )
4 -- end of sample_data mocking real table
5 SELECT COUNT(DISTINCT(regexp_substr(str, '[^ ]+', 1, LEVEL))) as "COUNT"
6 FROM sample_data
7 CONNECT BY LEVEL <= regexp_count(str, ' ')+1
8 /
COUNT
----------
3
```
See [**Split single comma delimited string into rows in Oracle**](https://lalitkumarb.wordpress.com/2014/12/02/split-comma-delimited-string-into-rows-in-oracle/) to understand how the query works.
---
**UPDATE**
For multiple delimited string rows you need to take care of the number of rows formed by the **CONNECT BY** clause.
See [**Split comma delimited strings in a table in Oracle**](https://lalitkumarb.wordpress.com/2015/03/04/split-comma-delimited-strings-in-a-table-in-oracle/) for more ways of doing the same task.
**Setup**
Let's say you have a table with 3 rows like this:
```
SQL> CREATE TABLE t(country VARCHAR2(200));
Table created.
SQL> INSERT INTO t VALUES('japan singapore japan chinese chinese chinese');
1 row created.
SQL> INSERT INTO t VALUES('singapore indian malaysia');
1 row created.
SQL> INSERT INTO t VALUES('french french french');
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM t;
COUNTRY
---------------------------------------------------------------------------
japan singapore japan chinese chinese chinese
singapore indian malaysia
french french french
```
* Using **REGEXP\_SUBSTR** and **REGEXP\_COUNT**:
We expect the output as `6` since there are 6 unique strings.
```
SQL> SELECT COUNT(DISTINCT(regexp_substr(t.country, '[^ ]+', 1, lines.column_value))) count
2 FROM t,
3 TABLE (CAST (MULTISET
4 (SELECT LEVEL FROM dual
5 CONNECT BY LEVEL <= regexp_count(t.country, ' ')+1
6 ) AS sys.odciNumberList ) ) lines
7 ORDER BY lines.column_value
8 /
COUNT
----------
6
```
There are many other methods to achieve the desired output. Let's see how:
* Using **XMLTABLE**
```
SQL> SELECT COUNT(DISTINCT(country)) COUNT
2 FROM
3 (SELECT trim(COLUMN_VALUE) country
4 FROM t,
5 xmltable(('"'
6 || REPLACE(country, ' ', '","')
7 || '"'))
8 )
9 /
COUNT
----------
6
```
* Using **MODEL** clause
```
SQL> WITH
2 model_param AS
3 (
4 SELECT country AS orig_str ,
5 ' '
6 || country
7 || ' ' AS mod_str ,
8 1 AS start_pos ,
9 Length(country) AS end_pos ,
10 (LENGTH(country) -
11 LENGTH(REPLACE(country, ' '))) + 1 AS element_count ,
12 0 AS element_no ,
13 ROWNUM AS rn
14 FROM t )
15 SELECT COUNT(DISTINCT(Substr(mod_str, start_pos, end_pos-start_pos))) count
16 FROM (
17 SELECT *
18 FROM model_param
19 MODEL PARTITION BY (rn, orig_str, mod_str)
20 DIMENSION BY (element_no)
21 MEASURES (start_pos, end_pos, element_count)
22 RULES ITERATE (2000)
23 UNTIL (ITERATION_NUMBER+1 = element_count[0])
24 ( start_pos[ITERATION_NUMBER+1] =
25 instr(cv(mod_str), ' ', 1, cv(element_no)) + 1,
26 end_pos[ITERATION_NUMBER+1] =
27 instr(cv(mod_str), ' ', 1, cv(element_no) + 1) )
28 )
29 WHERE element_no != 0
30 ORDER BY mod_str , element_no
31 /
COUNT
----------
6
```
|
Did you store that kind of string in a single entry?
If not, try
```
SELECT COUNT(*)
FROM (SELECT DISTINCT T.country FROM Table1 T)
```
If yes, I would write an external program to parse the string and return the result you want, for example in Java:
create a `String` set, use JDBC to retrieve the record, and use split to break the string into tokens using ' ' as the delimiter. For every token, if it is not in the set, add it to the set.
When parsing finishes, the size of the set is the value you want.
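The same parse-and-deduplicate idea is a one-liner in most host languages; a sketch in Python rather than Java:

```python
country = "japan singapore japan chinese chinese chinese"
# split() tokenizes on whitespace; a set keeps each distinct word exactly once
distinct_words = set(country.split())
assert distinct_words == {"japan", "singapore", "chinese"}
assert len(distinct_words) == 3  # the expected ValueOfWord
```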
|
How to count number of words in delimited string in Oracle SQL
|
[
"",
"sql",
"oracle",
"count",
"split",
"csv",
""
] |
I have two tables, a question table, and an answer table.
I then have a comments table, so that comments can be attached to a question or an answer. There an be many comments for each question or answer.
Should I create two comments tables? Or relate both question and answer table to a single comments table. I would prefer this, but it seems like it would be non-normalized.
|
It depends. In general, you would like to have foreign key relationships. If there is only one comment allowed per question/answer, then it is easy. A `commentId` goes in each of the tables, `Questions` and `Answers`. There is an easy foreign key relationship.
To maintain the foreign key relationships and allow multiple comments is a bit trickier. Probably the easiest way is to have two junction tables, `QuestionComments` and `AnswerComments`. These would link to the appropriate parent and to the `Comments` table.
Note: there are reasons why you would want separate comments tables for the two purposes. For instance, you might classify the comments on answers into different groups. Or, you might only allow one comment per user on a question. These different business rules might be easier to implement if the comments are in different tables.
|
Create a comments table with two foreign keys, one to the question table and one to the answer table. That's enough.
|
Can a single table be a child to two parent tables
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like this in a MySQL database:
```
id | item
-----------
1 | 2
2 | 2
3 | 4
4 | 5
5 | 8
6 | 8
7 | 8
```
I want the result to be the 3 records with the highest item value.
`select max(item)` returns only 1 value.
How can I select multiple max values?
Thank you
|
You can use a derived table to get the maximum value and `join` it back to the original table to see all rows corresponding to it.
```
select t.id, t.item
from tablename t
join (select max(item) as mxitem from tablename) x
on x.mxitem = t.item
```
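To check the derived-table approach against the sample data, here is a quick run through Python's sqlite3 (table name kept as `tablename`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (id INTEGER, item INTEGER)")
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [(1, 2), (2, 2), (3, 4), (4, 5), (5, 8), (6, 8), (7, 8)])
rows = conn.execute("""
    SELECT t.id, t.item
    FROM tablename t
    JOIN (SELECT MAX(item) AS mxitem FROM tablename) x
      ON x.mxitem = t.item
    ORDER BY t.id
""").fetchall()
print(rows)  # all three rows sharing the maximum item value, 8
```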
Edit:
```
select t.co_travelers_id, t.booking_id, t.accounts_id
from a_co_travelers t
join (select accounts_id, max(booking_id) as mxitem
from a_co_travelers
group by accounts_id) x
on x.mxitem = t.booking_id and t.accounts_id = x.accounts_id
```
|
If you use an aggregate function without `GROUP BY`, only one row will be returned.
You may use `GROUP BY` with aggregate functions.
Here is **[SQLFiddle Demo](http://sqlfiddle.com/#!9/60f92/1)**
```
SELECT id,max(item) AS item
FROM table_name
GROUP BY id
ORDER BY item DESC
LIMIT 3
```
Hope this helps.
|
selecting multiple max values
|
[
"",
"mysql",
"sql",
""
] |
My use case is an admin view to see all the messages in the db.
I would like a list of message ordered by conversations and beginning dates, the idea is to be able to see each message in its context.
A conversation is a list of interactions between two users.
I don't have a `conversation` table, only a `message` table.
From this data sample :
```
id sender recipient
--------------------------
1 marc rémi
2 gépéto sylvain
3 rémi marc
4 julie georgette
5 rémi marc
6 denis julie
7 julie rémi
8 sylvain gépéto
9 denis marc
10 denis julie
```
I would like to obtain :
```
id sender recipient
--------------------------
1 marc rémi
3 rémi marc
5 rémi marc
2 gépéto sylvain
8 sylvain gépéto
4 julie georgette
6 denis julie
10 denis julie
7 julie rémi
9 denis marc
```
For now, I was more simply looking to get the first message of each conversation, as a first step towards the wanted result:
```
SELECT message.id, message.sender , message.recipient
FROM message
GROUP BY message.sender, message.recipient
HAVING message.id = min(message.id)
ORDER BY message.id DESC;
```
But I can't get this right; I get two conversations instead of one for all bidirectional conversations:
```
id sender recipient
--------------------------
8 sylvain gépéto
2 gépéto sylvain
```
So, I'm stuck here ... And I would appreciate some tips !
Almost forgot: [SQL Fiddle with the sample and the last query](http://sqlfiddle.com/#!9/fc66b/1)
|
You can use the following query to get the `id` of the first message of each conversation:
```
SELECT MIN(id),
IF(sender > recipient, sender, recipient) AS participantA,
IF(sender > recipient, recipient, sender) AS participantB
FROM message
GROUP BY participantA, participantB
```
Now use the above query in a derived table to get desired result:
```
SELECT id, sender, recipient
FROM (
SELECT id, sender, recipient,
IF(sender > recipient, sender, recipient) AS participantA,
IF(sender > recipient, recipient, sender) AS participantB
FROM message) AS t1
INNER JOIN (
SELECT MIN(id) AS minId,
IF(sender > recipient, sender, recipient) AS participantA,
IF(sender > recipient, recipient, sender) AS participantB
FROM message
GROUP BY participantA, participantB
) AS t2 ON t1.participantA = t2.participantA AND t1.participantB = t2.participantB
ORDER BY t2.minId
```
[**Demo here**](http://sqlfiddle.com/#!9/fc66b/22)
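The core trick is building a pair key that is identical regardless of message direction. A small check with Python's sqlite3, using scalar `MIN`/`MAX` in place of `IF` (subset of the sample data, accents dropped for simplicity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER, sender TEXT, recipient TEXT)")
conn.executemany("INSERT INTO message VALUES (?, ?, ?)", [
    (1, 'marc', 'remi'), (2, 'gepeto', 'sylvain'),
    (3, 'remi', 'marc'), (8, 'sylvain', 'gepeto'),
])
rows = conn.execute("""
    SELECT MIN(id) AS first_id,
           MAX(sender, recipient) AS participantA,  -- direction-independent key
           MIN(sender, recipient) AS participantB
    FROM message
    GROUP BY participantA, participantB
    ORDER BY first_id
""").fetchall()
print(rows)  # one row per conversation, keyed by its first message id
```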
|
**[SqlFiddle Demo](http://sqlfiddle.com/#!9/fc66b/35)**
```
SELECT m.*, CASE
WHEN sender <= recipient THEN concat(sender,'-',recipient)
ELSE concat(recipient,'-', sender)
END as conversation
FROM message m
ORDER BY conversation, id
```
**OUTPUT**
```
| id | sender | recipient | conversation |
|----|---------|-----------|-----------------|
| 6 | denis | julie | denis-julie |
| 10 | denis | julie | denis-julie |
| 9 | denis | marc | denis-marc |
| 4 | julie | georgette | georgette-julie |
| 2 | gépéto | sylvain | gépéto-sylvain |
| 8 | sylvain | gépéto | gépéto-sylvain |
| 7 | julie | rémi | julie-rémi |
| 1 | marc | rémi | marc-rémi |
| 3 | rémi | marc | marc-rémi |
| 5 | rémi | marc | marc-rémi |
```
This is a first approach; if you need `marc-rémi` to come first, you need to include another select to get the `MIN()` for each conversation.
Exact solution **[SqlFiddleDemo](http://sqlfiddle.com/#!9/fc66b/37)**
```
SELECT conversation_id, T.id, T.sender, T.recipient, T.conversation
FROM (
SELECT CASE
WHEN sender <= recipient THEN concat(sender,'-',recipient)
ELSE concat(recipient,'-', sender)
END as conversation,
MIN(id) as conversation_id
FROM message m
GROUP BY conversation
) as conversation_start
JOIN (
SELECT m.*, CASE
WHEN sender <= recipient THEN concat(sender,'-',recipient)
ELSE concat(recipient,'-', sender)
END as conversation
FROM message m
) as T
ON
conversation_start.conversation = T.conversation
ORDER BY conversation_id, T.id
```
**OUTPUT**
```
| conversation_id | id | sender | recipient | conversation |
|-----------------|----|---------|-----------|-----------------|
| 1 | 1 | marc | rémi | marc-rémi |
| 1 | 3 | rémi | marc | marc-rémi |
| 1 | 5 | rémi | marc | marc-rémi |
| 2 | 2 | gépéto | sylvain | gépéto-sylvain |
| 2 | 8 | sylvain | gépéto | gépéto-sylvain |
| 4 | 4 | julie | georgette | georgette-julie |
| 6 | 6 | denis | julie | denis-julie |
| 6 | 10 | denis | julie | denis-julie |
| 7 | 7 | julie | rémi | julie-rémi |
| 9 | 9 | denis | marc | denis-marc |
```
|
List messages ordered by conversations for all users
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I'm using Oracle 10.g. I have a table, LIB\_BOOK, containing a listing of books in a library. There are multiple copies of many of the books. I'd like to produce a report that lists all the books with more than one copy. I've constructed a query that lists the books, but I can't find a way to get only one row for the result.
```
Select
title
, copy_number
, isbn_10
, category
, book_pk
, max(copy_number)
From LIB_BOOK
Group by
title
, copy_number
, isbn_10
, category
, book_pk
Order by copy_number desc
;
```
As you can see in the data result below, I get the results for "Conversations with God - Book 1" listed seven times. I'd like that book to be listed only once with a "7" as the copy\_number.
I took the first 32 rows of the query result, exported it to Excel and pasted the image below.
[](https://i.stack.imgur.com/qr8Wh.jpg)
How do I construct a query to result in only one row per book, and avoid books with only one copy (copy\_number > 1)?
Thanks for looking at this.
|
Try this
```
Select
 lib.title
 , lib.copy_number
 , lib.isbn_10
 , lib.category
 , lib.book_pk
From LIB_BOOK lib
join (select title, max(copy_number) as maxcopynumber
      from LIB_BOOK group by title) maxcopy
  on lib.title = maxcopy.title and lib.copy_number = maxcopy.maxcopynumber
Order by lib.copy_number desc
;
;
```
|
You need to remove `copy_number()` and `book_pk` from the `group by`:
```
Select title, isbn_10, category,
max(copy_number)
From LIB_BOOK
Group by title, isbn_10, category
Order by max(copy_number) desc;
```
I'm not sure what you want for `book_pk`, so I just removed it from the `select`.
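Both answers boil down to "group by the book, aggregate the copy number"; a `HAVING` clause then drops single-copy books, as the question asks. A sketch via Python's sqlite3 with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LIB_BOOK (title TEXT, copy_number INTEGER)")
conn.executemany("INSERT INTO LIB_BOOK VALUES (?, ?)", [
    ('Conversations with God - Book 1', 1),
    ('Conversations with God - Book 1', 2),
    ('Conversations with God - Book 1', 3),
    ('Lone Copy', 1),
])
rows = conn.execute("""
    SELECT title, MAX(copy_number) AS copies
    FROM LIB_BOOK
    GROUP BY title
    HAVING MAX(copy_number) > 1   -- skip books with only one copy
    ORDER BY copies DESC
""").fetchall()
print(rows)  # one row per multi-copy book
```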
|
Produce one row in an Oracle sql query using group_by
|
[
"",
"sql",
"group-by",
"oracle10g",
""
] |
Assuming a table
```
source_document_id BIGINT NOT NULL,
target_document_id BIGINT NOT NULL,
similarity DOUBLE NOT NULL,
```
If I wanted to determine the frequency of occurrence of documents in the source\_document\_id column, my query would run something like this:
```
select count(source_document_id) as c1, source_document_id from PROJECT_0622.SIMILARITY_RESULT group by source_document_id order by c1 desc;
```
But considering the fact that a document id can occur in either the source\_document\_id or the target\_document\_id column, ...
how would I find the frequency of documentIds occurring in either column?
For example,
consider data thus
```
source_document_id target_document_id similarity
1 2 0.8
1 3 0.8
1 4 0.7
4 5 0.8
4 8 0.75
9 4 0.9
2 4 0.99
```
I would like to achieve results something like this
```
frequency in source/target doc id column, doc_id
5 4
3 1
2 2
1 8
1 3
1 9
1 10
```
Is this possible? Thanks
|
There is a chance that aggregating the data twice will have better performance:
```
select id, sum(cnt) as n_docs
from ((select source_document_id as id, count(*) as cnt
from similarity
group by source_document_id
) union all
      (select target_document_id as id, count(*) as cnt
from similarity
group by target_document_id
)
) t
group by id;
```
If performance is a consideration, it is worth trying both methods, particularly if there are indexes on the two columns.
|
You can merge your two columns into a single column using a `union all`, then perform your count on this aggregate column:
```
select doc_id, count(*) n_docs
from
(
select source_document_id as doc_id from similarity
union all
select target_document_id as doc_id from similarity
) t
group by doc_id
```
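Running the `union all` approach against the question's sample data (via Python's sqlite3) confirms the expected frequencies — document 4 appears five times across the two columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE similarity (source_document_id INT, target_document_id INT)")
conn.executemany("INSERT INTO similarity VALUES (?, ?)",
                 [(1, 2), (1, 3), (1, 4), (4, 5), (4, 8), (9, 4), (2, 4)])
rows = conn.execute("""
    SELECT doc_id, COUNT(*) AS n_docs
    FROM (SELECT source_document_id AS doc_id FROM similarity
          UNION ALL
          SELECT target_document_id FROM similarity) t
    GROUP BY doc_id
    ORDER BY n_docs DESC, doc_id
""").fetchall()
print(rows)  # frequency of each doc id across both columns
```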
|
Find count of item that occurs in multiple columns
|
[
"",
"sql",
"postgresql",
""
] |
Consider the following regex
```
@(.*\..*){2,}
```
Expected behaviour:
```
a@b doesnt match
a@b.c doesnt match
a@b.c.d matches
a@b.c.d.e matches
and so on
```
Testing in regexpal it works as expected.
Using it it in a mysql select doesn't work as expected. Query:
```
SELECT * FROM `users` where mail regexp '@(.*\..*){2,}'
```
is returning lines like
```
foo@example.com
```
that should not match the given regex. Why?
|
You can use:
```
SELECT * FROM `users` where mail REGEXP '([^.]*\\.){2}'
```
to enforce at least 2 dots in `mail` column.
|
I think the answer to your question is here.
> Because MySQL uses the C escape syntax in strings (for example, “\n”
> to represent the newline character), you must double any “\” that you
> use in your REGEXP strings.
[MYSQL Reference](http://dev.mysql.com/doc/refman/5.7/en/regexp.html "12.5.2 Regular Expressions")
Because your middle dot wasn't properly escaped, it was treated as just another wildcard, and in the end your expression effectively collapsed to `@.{2,}` or `@..+`.
@anubhava's answer is probably a better substitute for what you tried to do, though I would note @dasblinkenlight's comment about using the character class `[.]`, which makes it easy to drop in a regex you've already tested at RegexPal.
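The escaping point is easy to verify outside MySQL. Python's `re` below uses the `[.]` character-class spelling, which sidesteps backslash-doubling entirely (a sketch combining the OP's `@` anchor with the repeated-dot idea, not the exact MySQL query):

```python
import re

# [.] matches a literal dot without needing backslash escaping;
# the group requires two dots somewhere after the @.
at_least_two_dots = re.compile(r'@([^.]*[.]){2}')

print(bool(at_least_two_dots.search('a@b.c.d')))          # True: two dots after @
print(bool(at_least_two_dots.search('foo@example.com')))  # False: only one dot
```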
|
MySQL regex matching at least 2 dots
|
[
"",
"mysql",
"sql",
"regex",
""
] |
I have 140,000 INSERT statements I need to run. I am trying to insert the records into a database table, and then capture the created PKs in a temp table which I then return. It looks something like this:
```
CREATE PROCEDURE sp_MassInsertProcedure
@PassInVariable int
AS
CREATE TABLE #TempTable(OriginalID int IDENTITY(1,1), NewID int);
--These two inserts are essentially repeated 70,000 times
INSERT INTO MyDBTable (something, something, something) VALUES (1, 2, @PassInVariable);
INSERT INTO #TempTable(NewID) SELECT SCOPE_IDENTITY();
SELECT * FROM #TempTable;
DROP TABLE #TempTable;
```
I have a few other stored procedures just like this that have max 2,000 insert statements and this way works perfectly, but I think because of the number of statements in this particular SP it gives me "Query Finished with Errors" when I try to create the procedure, but doesn't actually create the procedure.
I then added a GO every 200 insert statements, but after the first GO executes the temp table must be released and I get "TempTable is unavailable" error (I also get the same error for the @PassInVariable after the first GO executes). The thing that is strange is that when I place the above explained SQL in a standard script (not in a stored procedure) it works.
So my question is, does anyone know how to persist a temporary table/variable in a single stored procedure that uses multiple batch executions via the GO command?
```
CREATE PROCEDURE sp_MassInsertProcedure
@PassInVariable int
AS
CREATE TABLE #TempTable(OriginalID int IDENTITY(1,1), NewID int);
--These inserts are essentially repeated 70,000 times
INSERT INTO MyDBTable (something, something, something) VALUES (1, 2, @PassInVariable);
INSERT INTO #TempTable(NewID) SELECT SCOPE_IDENTITY();
GO
INSERT INTO MyDBTable (something, something, something) VALUES (1, 2, @PassInVariable);
INSERT INTO #TempTable(NewID) SELECT SCOPE_IDENTITY();
SELECT * FROM #TempTable;
DROP TABLE #TempTable;
```
|
Thanks everyone for the answers...I ended up with the following solution:
From the script that was calling my stored procedure I created the #TempTable
```
CREATE TABLE #TempTable (OriginalID int IDENTITY(1,1), NewID int);
EXEC sp_InsertMassRecords 777;
```
And then in my stored procedure I had:
```
CREATE PROCEDURE sp_InsertMassRecords
@IncomingVariable int
AS
BEGIN
SET NOCOUNT ON;
INSERT MyDBTable (col1, col2, col3)
OUTPUT Inserted.ID INTO #TempTable(NewID)
SELECT 1, @IncomingVariable, 3 UNION ALL
SELECT 4, @IncomingVariable, 6 UNION ALL...
```
I repeated the INSERT/OUTPUT lines after about every 100 select statements or so and the whole thing ran successfully pretty fast!
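`OUTPUT ... INTO` is SQL Server-specific, but the underlying "collect each generated key as you insert" pattern is portable. A sketch in Python with sqlite3 (table and column names are made up), where `lastrowid` plays the role of `SCOPE_IDENTITY()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyDBTable (
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    col1 INT, col2 INT, col3 INT)""")

incoming = 777  # stand-in for @IncomingVariable
new_ids = []
for col1, col3 in [(1, 3), (4, 6), (7, 9)]:
    cur = conn.execute(
        "INSERT INTO MyDBTable (col1, col2, col3) VALUES (?, ?, ?)",
        (col1, incoming, col3))
    new_ids.append(cur.lastrowid)  # collect the generated key per insert
print(new_ids)  # [1, 2, 3]
```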
|
The GO statement in MS SQL releases resources and cleans out your session; that is why the temp table is gone, as well as your variables.
In your stored proc, or at least the SQL script above, you don't need the GO statement.
The GO statements you see in others' scripts are there to keep the parser from stopping execution after a previous statement errors out. It's similar to the Visual Basic "On Error Resume Next" statement: that way your script continues execution until the end of the script file.
You will see the GO statement used mostly in a script file that contains multiple transactions, with a GO statement after each transaction — for example, a script file containing multiple CREATE statements for different stored procedures. But within one transaction you don't want to use a GO statement, because you will lose all of your variables (including temp table(s)), as you saw in your script.
I don't see a need for it in your stored proc though.
|
Temp table in stored procedure is unavailable after first GO
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"sql-server-2014",
""
] |
Consider 2 or more tables:
```
users (id, firstname, lastname)
orders (orderid, userid, orderdate, total)
```
I wish to delete all **users** and their **orders** that match first name '**Sam**'. In MySQL, I usually do a left join. In this example userid is unknown to us.
What is the correct format of the query?
|
<http://www.postgresql.org/docs/current/static/sql-delete.html>
```
DELETE
FROM orders o
USING users u
WHERE o.userid = u.id
and u.firstname = 'Sam';
DELETE
FROM users u
WHERE u.firstname = 'Sam';
```
You can also create the table with `ON delete cascade`
<http://www.postgresql.org/docs/current/static/ddl-constraints.html>
```
CREATE TABLE order_items (
product_no integer REFERENCES products ON DELETE RESTRICT,
order_id integer REFERENCES orders ON DELETE CASCADE,
quantity integer,
PRIMARY KEY (product_no, order_id)
);
```
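The `ON DELETE CASCADE` route can be verified with the question's own `users`/`orders` schema; here via Python's sqlite3 (which needs `PRAGMA foreign_keys = ON` for enforcement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite disables FK enforcement by default
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, firstname TEXT, lastname TEXT);
CREATE TABLE orders (
    orderid INTEGER PRIMARY KEY,
    userid INTEGER REFERENCES users(id) ON DELETE CASCADE,
    total REAL
);
INSERT INTO users VALUES (1, 'Sam', 'Smith'), (2, 'Ann', 'Lee');
INSERT INTO orders VALUES (10, 1, 9.99), (11, 1, 5.00), (12, 2, 3.50);
""")
conn.execute("DELETE FROM users WHERE firstname = 'Sam'")
remaining = conn.execute("SELECT orderid FROM orders ORDER BY orderid").fetchall()
print(remaining)  # Sam's orders were cascaded away; only Ann's survives
```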
|
Arranging proper cascading deletes is wise and is usually the correct solution to this.
For certain special cases, there is another solution to this that can be relevant.
If you need to perform multiple deletes based on a common set of data you can use [Common Table Expressions (CTE)](https://www.postgresql.org/docs/current/queries-with.html).
It's hard to come up with a simple example as the main use case for this can be covered by cascading deletes.
For the example we're going to delete all items in table A whose value is in the set of values we're deleting from table B. Usually these would be keys, but where they are not, then cascading delete can't be used.
To solve this you use CTEs
```
WITH Bdeletes AS (
DELETE from B where IsSomethingToDelete = true returning ValueThatRelatesToA
)
delete from A where RelatedValue in (select ValueThatRelatesToA from Bdeletes)
```
This example is deliberately simple because my point is not to argue over key mapping etc, but to show how two or more deletes can be performed off a shared dataset.
This can be much more complex too, including update commands etc.
Here is a more complex example (from Darth Vader's personal database). In this case, we have a table that references an address table. We need to delete addresses from the address table if they are in his list of planets he's destroyed. We want to use this information to delete from the people table, but only if they were on-planet (or on his trophy-kill list)
```
with AddressesToDelete as (
select AddressId from Addresses a
join PlanetsDestroyed pd on pd.PlanetName = a.PlanetName
),
PeopleDeleted as (
delete from People
where AddressId in (select * from AddressesToDelete)
and OffPlanet = false
and TrophyKill = false
returning Id
),
PeopleMissed as (
update People
set AddressId=null, dead=(OffPlanet=false)
where AddressId in (select * from AddressesToDelete)
returning id
)
Delete from Addresses where AddressId in (select * from AddressesToDelete)
```
Now his database is up to date. No integrity failures due to Address deletion.
Note that while we are returning data from the update and the first delete, it doesn't mean we have to use it. I'm uncertain whether you can put a delete in a CTE with no returned data. (My SQL may also be wrong on the use of `returning` from an update; I've not been able to test-run this, as Darth V. was in a cranky mood.)
|
Postgresql delete multiple rows from multiple tables
|
[
"",
"sql",
"database",
"postgresql",
"relational-database",
"sql-delete",
""
] |
It seems it is too long ago that I last needed to write my own SQL statements. I have a table (GAS\_COUNTER) with timestamps (TS) and values (VALUE).
There are hundreds of entries per day, but I only need the latest one of each day. I tried different ways but never got what I need.
**Edit**
Thanks for the fast replies, but some do not meet my needs (I need the latest value of each day in the table) and some don't work. My best attempt so far was:
```
select distinct (COUNT),
from
(select
extract (DAY_OF_YEAR from TS) as COUNT,
extract (YEAR from TS) as YEAR,
extract (MONTH from TS) as MONTH,
extract (DAY from TS) as DAY,
VALUE as VALUE
from GAS_COUNTER
order by COUNT)
```
but the value is missing. If I put it in the first select, all rows return (logically correct, as every line is distinct).
Here an example of the Table content:
```
TS VALUE
2015-07-25 08:47:12.663 0.0
2015-07-25 22:50:52.155 2.269999999552965
2015-08-10 11:18:07.667 52.81999999284744
2015-08-10 20:29:20.875 53.27999997138977
2015-08-11 10:27:21.49 54.439999997615814
```
**2nd Edit and solution**
```
select TS, VALUE from GAS_COUNTER
where TS in (
select max(TS) from GAS_COUNTER group by extract(DAY_OF_YEAR from TS)
)
```
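The solution can be checked against the sample data; this sketch uses Python's sqlite3 and groups by `date(TS)` rather than `DAY_OF_YEAR`, which also avoids collisions between the same day-of-year in different years:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE GAS_COUNTER (TS TEXT, VALUE REAL)")
conn.executemany("INSERT INTO GAS_COUNTER VALUES (?, ?)", [
    ('2015-07-25 08:47:12.663', 0.0),
    ('2015-07-25 22:50:52.155', 2.269999999552965),
    ('2015-08-10 11:18:07.667', 52.81999999284744),
    ('2015-08-10 20:29:20.875', 53.27999997138977),
    ('2015-08-11 10:27:21.49',  54.439999997615814),
])
rows = conn.execute("""
    SELECT TS, VALUE FROM GAS_COUNTER
    WHERE TS IN (SELECT MAX(TS) FROM GAS_COUNTER GROUP BY date(TS))
    ORDER BY TS
""").fetchall()
print(rows)  # one (latest) row per calendar day
```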
|
This one would give you the very last record:
```
select top 1 * from GAS_COUNTER order by TS desc
```
Here is one that would give you last records for every day:
```
select VALUE from GAS_COUNTER
where TS in (
select max(TS) from GAS_COUNTER group by to_date(TS,'yyyy-mm-dd')
)
```
Depending on the database you are using, you might need to replace or adjust the `to_date(TS,'yyyy-mm-dd')` function. Basically it should extract the date-only part from the timestamp.
|
Select the max value for the timestamp.
```
select MAX(TS), value -- or whatever other columns you want from the record
from GAS_COUNTER
group by value
```
|
SQL Statement Only latest entry of the day
|
[
"",
"sql",
"h2",
"greatest-n-per-group",
""
] |
I am almost a newbie to writing SQL queries.
In the context of SQL Server, how do I get a list of the 2nd and 4th Saturday dates
in the year 2016?
|
Done as a derived table simply to show the logic, but you can reduce it if you prefer:
```
select *
from (
select d2016,
datename( weekday, d2016 ) as wkdy,
row_number( ) over ( partition by datepart( month, d2016 ), datename( weekday, d2016 ) order by d2016 ) as rn_dy_mth
from (
select dateadd( day, rn, cast( '2016-01-01' as date ) ) as d2016
from (
select row_number() over( order by object_id ) - 1 as rn
from sys.columns
) as rn
) as dy
) as dy_mth
where rn_dy_mth in ( 2, 4 )
and wkdy = 'Saturday'
order by d2016
```
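Outside the database, the same list is easy to compute, which is handy for sanity-checking the SQL. A Python sketch (the helper name is my own):

```python
from datetime import date, timedelta

def nth_saturdays(year, ns=(2, 4)):
    results = []
    for month in range(1, 13):
        d = date(year, month, 1)
        # Days until the first Saturday (weekday(): Monday=0 ... Saturday=5).
        d += timedelta(days=(5 - d.weekday()) % 7)
        saturdays = []
        while d.month == month:
            saturdays.append(d)
            d += timedelta(days=7)
        results.extend(saturdays[n - 1] for n in ns if n <= len(saturdays))
    return results

dates_2016 = nth_saturdays(2016)
print(dates_2016[:2])  # the 2nd and 4th Saturdays of January 2016
```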
|
**You can get any Saturday of a month using the following query in SQL Server.
Here I'm using the current date; substitute your own date to get a specific month's Saturdays.**
```
select DATEADD(dd, (14 - @@DATEFIRST - DATEPART(dw, DATEADD(MONTH, DATEDIFF(mm, 0,getdate()), 0))) % 7, DATEADD(MONTH, DATEDIFF(mm, 0, getdate()), 0)) as FirstSaturday,
DATEADD(dd,7,DATEADD(dd, (14 - @@DATEFIRST - DATEPART(dw, DATEADD(MONTH, DATEDIFF(mm, 0, getdate()), 0))) % 7, DATEADD(MONTH, DATEDIFF(mm, 0, getdate()), 0))) as SecondSaturday,
DATEADD(dd,14,DATEADD(dd, (14 - @@DATEFIRST - DATEPART(dw, DATEADD(MONTH, DATEDIFF(mm, 0, getdate()), 0))) % 7, DATEADD(MONTH, DATEDIFF(mm, 0, getdate()), 0))) as ThirdSaturday,
DATEADD(dd,21,DATEADD(dd, (14 - @@DATEFIRST - DATEPART(dw, DATEADD(MONTH, DATEDIFF(mm, 0, getdate()), 0))) % 7, DATEADD(MONTH, DATEDIFF(mm, 0, getdate()), 0))) as LastSaturday
```
|
How to get list of 2nd and 4th Saturday dates in SQL Server?
|
[
"",
"sql",
"sql-server",
"date",
""
] |