| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
How can I improve the performance of this query?
```
DECLARE @Form nvarchar(200)
SET @Form=N'mail'
select t1.* from dbo.EDWordForms t1
where t1.CanonicForm in
(
SELECT CanonicForm
FROM dbo.EDWordForms t2
WHERE t2.WordForm=@Form)
```
Thanks for the answers, problem solved - there was no index on the field "CanonicForm"
|
Thanks for the answers, problem solved - there was no index on the field "CanonicForm"
|
In many databases, an explicit `join` or `exists` is more efficient than `in`. I would try:
```
select t1.*
from dbo.EDWordForms t1
where EXISTS (SELECT CanonicForm
FROM dbo.EDWordForms t2
WHERE t2.WordForm = @Form AND t1.CanonicForm = t2.CanonicForm
);
```
Indexes can also help the query. For your version (with `in`), you want an index on `EDWordForms(WordForm, CanonicForm)`. For the version with `exists` (or with a `join`), the best index is `EDWordForms(CanonicForm, WordForm)`.
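As a quick sanity check, the `EXISTS` rewrite can be reproduced with Python's built-in `sqlite3` module; SQLite stands in for SQL Server here, the T-SQL variable is replaced by a literal, and the sample words are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EDWordForms (WordForm TEXT, CanonicForm TEXT)")
conn.executemany(
    "INSERT INTO EDWordForms VALUES (?, ?)",
    [("mail", "mail"), ("mails", "mail"), ("mailed", "mail"), ("run", "run")],
)
# Index matching the EXISTS/JOIN access pattern described above
conn.execute("CREATE INDEX ix_canonic ON EDWordForms (CanonicForm, WordForm)")

# All word forms sharing the canonical form of 'mail'
rows = conn.execute("""
    SELECT t1.*
    FROM EDWordForms t1
    WHERE EXISTS (SELECT 1
                  FROM EDWordForms t2
                  WHERE t2.WordForm = 'mail'
                    AND t1.CanonicForm = t2.CanonicForm)
""").fetchall()
print(rows)
```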
|
SQL query "select ... where select ..." takes too long
|
[
"",
"sql",
"performance",
""
] |
I'm a beginner in MySQL and I need to write a query to select all rows from the event table and count how many users are registered for each event (but I need to split the users by gender).
My tables:
**table event:**
```
id
event_name
...and another insignificant column
```
**table user:**
```
id
username
gender (allowed values: male or female)
...and another insignificant column
```
**table user\_on\_event:**
```
event_id (link to event table to column ID)
user_id (link to user table to column ID)
```
and this query:
```
SELECT *, COUNT(user_on_event.user_id) as total_registred_users_on_event FROM event LEFT JOIN user_on_event ON user_on_event.event_id = event.id
```
This selects everything from the event table and counts all registered users. This is OK, but I also need to count registered women and men.
What do I have to modify in my query to achieve this?
|
```
select
sum(gender = 'male') male_count,
sum(gender = 'female') female_count,
count(*) total_count
from user_on_event uoe
join user u on u.id = uoe.user_id
```
If you want counts for a specific event, add
```
where uoe.event_id = :event_id
```
|
Maybe I have got it :) Does this query look OK? What do you think?
```
SELECT *, COUNT(user_on_event.user_id) as total_registred_users_on_event,
(SELECT COUNT(user.id) FROM user WHERE user.id = user_on_event.user_id
AND user.gender = "male") as men,
(SELECT COUNT(user.id) FROM user
WHERE user.id = user_on_event.user_id
AND user.gender = "female") as women FROM event
LEFT JOIN user_on_event ON user_on_event.event_id = event.id;
```
|
MySQL query - count how many users are registered for an event (men & women)
|
[
"",
"mysql",
"sql",
""
] |
I have a question about the basic syntax of joining with derived tables.
Is this the basic syntax that is used:
```
select *
from table1 a
inner join (select * from table2) as T1
on 1.ID = T1.ID
```
Will that work?
|
You're asking about joining two subqueries?
Try:
```
select * from
(select * from table1) t1 inner join
(select * from table2) t2 on
t1.id = t2.id
```
|
In your posted query, change `on 1.ID = T1.ID` to `on a.ID = T1.ID` and it should work fine. But why do you even need that? You can join `table1` directly with `table2`; there is no need for an inline/derived table:
```
select *
from table1 a
inner join
table2 T1
on a.ID = T1.ID
```
|
Inner Joins with Derived Tables
|
[
"",
"sql",
"inner-join",
"derived-table",
""
] |
I have 3 tables from which I need to select and summarize data.
```
Table: IDEA
REFERENCE SL
128 SL1
200 SL1
201 SL2
205 SL3
Table: ACCT1
IDEA_REF ACCTS
128 5
128 2
200 3
205 4
Table: ACCT2
IDEA_REF ACCTS
201 3
205 4
205 3
```
What I need to do is pull a summary sorted by SL with the totals from the ACCTS field of both tables.
Here is the SQL I am using so far:
```
SELECT I.SL AS SL, COUNT(DISTINCT I.REFERENCE) AS NO,
SUM(CASE WHEN A1.IDEA_REF=I.REFERENCE THEN A1.ACCTS ELSE 0 END) AS ACCT1,
SUM(CASE WHEN A2.IDEA_REF=I.REFERENCE THEN A2.ACCTS ELSE 0 END) AS ACCT2
FROM IDEA I
LEFT JOIN ACCT1 A1 ON A1.IDEA_REF=I.REFERENCE
LEFT JOIN ACCT2 A2 ON A2.IDEA_REF=I.REFERENCE
WHERE A2.IDEA_REF IN I.REFERENCE OR A1.IDEA_REF IN I.REFERENCE
GROUP BY I.SL
```
The problem I am finding is when there are multiple values in the ACCT1 and ACCT2 tables with reference to the IDEA table. Here are the results from this query:
```
SL NO ACCT1 ACCT2
SL1 2 10 0
SL2 1 0 3
SL3 1 8 7
```
The SL3 line adds the ACCT1 and ACCT2 values two times. I can't seem to find the right way to add them the appropriate number of times.
The desired output is:
```
SL NO ACCT1 ACCT2
SL1 2 10 0
SL2 1 0 3
SL3 1 4 7
```
Any help would be much appreciated.
|
You are asking for three separate aggregates, but you're trying to compute them all in a single query.
To get your `NO` (count of distinct items) you can do
```
SELECT SL,
COUNT(*) AS NO
FROM IDEA
GROUP BY SL
```
To get your `ACCT1` item you can do:
```
SELECT SL,
SUM(ACCTS) AS ACCT1
FROM IDEA
JOIN ACCT1 ON IDEA.REFERENCE = ACCT1.IDEA_REF
GROUP BY SL
```
In like manner you can get `ACCT2`
```
SELECT SL,
SUM(ACCTS) AS ACCT2
FROM IDEA
JOIN ACCT2 ON IDEA.REFERENCE = ACCT2.IDEA_REF
GROUP BY SL
```
Then, you need to join all these aggregate queries together on SL to get your result set. Because you have missing entries in some of the aggregates, you need the `LEFT` in `LEFT JOIN` and the `COALESCE()` items.
[SQL Fiddle](http://sqlfiddle.com/#!2/63aa1/1)
Here's the overall query:
```
SELECT Q.SL, NO,
COALESCE(ACCT1,0) AS ACCT1,
COALESCE(ACCT2,0) AS ACCT2
FROM (
SELECT SL,
COUNT(*) AS NO
FROM IDEA
GROUP BY SL
) Q
LEFT JOIN (
SELECT SL,
SUM(ACCTS) AS ACCT1
FROM IDEA
JOIN ACCT1 ON IDEA.REFERENCE = ACCT1.IDEA_REF
GROUP BY SL
) R ON Q.SL = R.SL
LEFT JOIN (
SELECT SL,
SUM(ACCTS) AS ACCT2
FROM IDEA
JOIN ACCT2 ON IDEA.REFERENCE = ACCT2.IDEA_REF
GROUP BY SL
) S ON Q.SL = S.SL
```
[The result is what you are looking for](http://sqlfiddle.com/#!2/63aa1/1/0):
```
| SL | NO | ACCT1 | ACCT2 |
|-----|----|-------|-------|
| SL1 | 2 | 10 | 0 |
| SL2 | 1 | 0 | 3 |
| SL3 | 1 | 4 | 7 |
```
See how this works? You have to do each aggregate separately.
If you're using a DBMS that doesn't support the `JOIN ... USING()` syntax, put in `ON Q.SL = R.SL` or the appropriate `ON` clause instead, as in this fiddle: <http://sqlfiddle.com/#!2/63aa1/3/0>
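The "aggregate separately, then join" pattern above can be verified end to end with the question's sample data, using Python's `sqlite3` module as a stand-in database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE IDEA (REFERENCE INT, SL TEXT);
    CREATE TABLE ACCT1 (IDEA_REF INT, ACCTS INT);
    CREATE TABLE ACCT2 (IDEA_REF INT, ACCTS INT);
    INSERT INTO IDEA VALUES (128,'SL1'),(200,'SL1'),(201,'SL2'),(205,'SL3');
    INSERT INTO ACCT1 VALUES (128,5),(128,2),(200,3),(205,4);
    INSERT INTO ACCT2 VALUES (201,3),(205,4),(205,3);
""")
# Each aggregate is computed in its own derived table, so multiple
# matches in ACCT1/ACCT2 can no longer multiply each other's rows.
rows = conn.execute("""
    SELECT Q.SL, NO, COALESCE(ACCT1,0), COALESCE(ACCT2,0)
    FROM (SELECT SL, COUNT(*) AS NO FROM IDEA GROUP BY SL) Q
    LEFT JOIN (SELECT SL, SUM(ACCTS) AS ACCT1
               FROM IDEA JOIN ACCT1 ON IDEA.REFERENCE = ACCT1.IDEA_REF
               GROUP BY SL) R ON Q.SL = R.SL
    LEFT JOIN (SELECT SL, SUM(ACCTS) AS ACCT2
               FROM IDEA JOIN ACCT2 ON IDEA.REFERENCE = ACCT2.IDEA_REF
               GROUP BY SL) S ON Q.SL = S.SL
    ORDER BY Q.SL
""").fetchall()
print(rows)  # [('SL1', 2, 10, 0), ('SL2', 1, 0, 3), ('SL3', 1, 4, 7)]
```

Note that SL3 now gets ACCT1 = 4 instead of the doubled 8 from the original single-query attempt.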
|
You can achieve the result by using UNION ALL
```
SELECT
COMB.SL AS SL,
COUNT(DISTINCT COMB.REFERENCE) AS NO,
sum(T1) as ACCT1,
sum(T2) ACCT2
FROM (
SELECT
I.*,A1.ACCTS as t1,0 as t2
FROM
IDEA I
LEFT JOIN
ACCT1 A1 ON A1.IDEA_REF=I.REFERENCE
UNION ALL
SELECT
I.*,0 as t1,A2.ACCTS as t2
FROM
IDEA I
LEFT JOIN
ACCT2 A2 ON A2.IDEA_REF=I.REFERENCE
) as COMB
GROUP BY
COMB.SL
```
|
Summing and counting across multiple tables in SQL
|
[
"",
"sql",
"db2-luw",
""
] |
I am having so much trouble trying to work this out. Can somebody help me?
I have a table with colA and colB, and these are the values for each row:
```
colA ColB
2 1
1 2
```
When selecting in this table, I want to retrieve just one row since the values are inverted.
If 2 implies 1, 1 implies 2, so they are equal and I don't want two rows as a return.
Is there any way of writing a SQL query to get the result I want?
For example, just one row with colA = 1, colB = 2.
|
If it is only 2 columns:
```
WITH t(colA, colB) AS (
VALUES (1,2), (2,1)
)
SELECT DISTINCT least(colA, colB), greatest(colA, colB)
FROM t;
```
**EDIT:** As outlined by Daniel, although this might do the job, this approach is not good for filtering as it'll change the original data.
|
No indication was made as to which version of the tuple is preferred over the other. I.e., we know that `(1,2)` is equivalent to `(2,1)`, and in that case one of the two should be displayed, but which of the two? In this solution, the assumption is that the variant with the lower `ColA` value is preferred over the reverse.
```
Select ColA, ColB
From SourceData As S
Where ColA < ColB
Or Not Exists (
Select 1
From SourceData As S1
Where S1.ColA = S.ColB
And S1.ColB = S.ColA
)
```
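The `NOT EXISTS` variant above also runs unchanged on SQLite, so it can be sketched with Python's `sqlite3` module; the extra rows `(5,9)` and `(7,3)` are made up to show both branches of the `WHERE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SourceData (ColA INT, ColB INT);
    INSERT INTO SourceData VALUES (2,1),(1,2),(5,9),(7,3);
""")
# Keep a row if ColA < ColB, or if its mirrored twin does not exist
rows = conn.execute("""
    SELECT ColA, ColB
    FROM SourceData AS S
    WHERE ColA < ColB
       OR NOT EXISTS (SELECT 1 FROM SourceData AS S1
                      WHERE S1.ColA = S.ColB AND S1.ColB = S.ColA)
    ORDER BY ColA
""").fetchall()
print(rows)  # [(1, 2), (5, 9), (7, 3)] - (2,1) dropped, (7,3) kept unmirrored
```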
|
Eliminate rows with columns with inverted values
|
[
"",
"sql",
"database",
"postgresql",
""
] |
Trying to group the result below:
```
+----+-------+-------+
| ID | Coln1 | Coln2 |
+----+-------+-------+
| 10 | A     | NULL  |
| 10 | NULL  | X     |
| 15 | B     | NULL  |
| 15 | NULL  | Y     |
| 20 | C     | NULL  |
| 20 | NULL  | Z     |
+----+-------+-------+
```
With the following select:
```
SELECT * FROM
(
SELECT
DB1.dbo.TABLE1.id AS ID,
DB1.dbo.TABLE1.Coln1 AS Coln1,
NULL AS Coln2
FROM
DB1.dbo.TABLE1
UNION
SELECT
DB2.dbo.TABLE2.id AS ID,
NULL AS Coln1,
DB2.dbo.TABLE2.Coln2 AS Coln2
FROM
DB2.dbo.TABLE2
) AS A
GROUP BY
ID,
Coln1,
Coln2
ORDER BY ID ASC
```
Expecting this:
```
+----+-------+-------+
| ID | Coln1 | Coln2 |
+----+-------+-------+
| 10 | A     | X     |
| 15 | B     | Y     |
| 20 | C     | Z     |
+----+-------+-------+
```
With no luck. I tried using `HAVING` as well, with no luck either. Thanks for any information on this.
|
An aggregate function should help you here, since aggregate functions eliminate `NULL` values. Try this:
```
SELECT ID, max(Coln1), max(Coln2)
FROM tablename
GROUP BY
ID
```
Update your query like this:
```
SELECT ID,
Max(Coln1),
Max(Coln2)
FROM (SELECT DB1.dbo.TABLE1.id AS ID,
DB1.dbo.TABLE1.Coln1 AS Coln1,
NULL AS Coln2
FROM DB1.dbo.TABLE1
UNION
SELECT DB2.dbo.TABLE2.id AS ID,
NULL AS Coln1,
DB2.dbo.TABLE2.Coln2 AS Coln2
FROM DB2.dbo.TABLE2) AS A
GROUP BY ID
ORDER BY ID ASC
```
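The key fact the answer relies on - `MAX()` skips `NULL`s, so grouping by ID collapses the complementary rows - can be checked with Python's `sqlite3` module and the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (ID INT, Coln1 TEXT, Coln2 TEXT);
    INSERT INTO A VALUES
        (10,'A',NULL),(10,NULL,'X'),
        (15,'B',NULL),(15,NULL,'Y'),
        (20,'C',NULL),(20,NULL,'Z');
""")
# MAX() ignores NULLs, so each ID keeps its single non-NULL value per column
rows = conn.execute("""
    SELECT ID, MAX(Coln1), MAX(Coln2)
    FROM A
    GROUP BY ID
    ORDER BY ID
""").fetchall()
print(rows)  # [(10, 'A', 'X'), (15, 'B', 'Y'), (20, 'C', 'Z')]
```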
|
You just have to use `MAX`. The `MAX` function selects the highest value and ignores `NULL` values:
```
SELECT ID, max(Coln1), max(Coln2)
FROM tablename
GROUP BY
ID
```
|
Group by with null values
|
[
"",
"sql",
"sql-server",
""
] |
I have a Customers table and an orders table. Not all customers placed an order, so not all customer IDs are on the orders table. I want my result to show ONLY the values from the customer table who did not place an order, so the orderID column should show as null. The following code works:
```
SELECT c.CustomerID, c.CustomerName, o.OrderID
FROM Customers c
LEFT OUTER JOIN Orders o
ON c.CustomerID = o.CustomerID WHERE o.OrderID IS NULL
```
But the one I tried originally doesn't:
```
SELECT c.CustomerID, c.CustomerName, o.OrderID
FROM Customers c
LEFT OUTER JOIN Orders o
ON c.CustomerID = o.CustomerID AND o.OrderID IS NULL
```
This one instead shows all the rows from the customers table, but ALL of them have NULL for their OrderID.
I don't think I really understand the difference, since I feel like both should return the same thing.
|
This is the nature of an outer join (in this case a left join). A left join takes your main table (Customers) and matches it to the joined table (Orders) by the join criteria. For each row in Customers that doesn't have a match, unlike an inner join, it doesn't remove the row. Instead, it adds all the fields from Orders but fills them with NULL.
Look at this example:
```
Table A           Table B
┌──────┬──────┐   ┌──────┬──────┐
│field1│field2│   │field3│field4│
├──────┼──────┤   ├──────┼──────┤
│A     │1     │   │1     │One   │
│B     │2     │   │3     │Three │
│C     │3     │   └──────┴──────┘
└──────┴──────┘
```
The tables' inner join (between field2 and field3) is:
```
┌──────┬──────┬──────┬──────┐
│field1│field2│field3│field4│
├──────┼──────┼──────┼──────┤
│A     │1     │1     │One   │
│C     │3     │3     │Three │
└──────┴──────┴──────┴──────┘
```
But the tables' outer join has to give you every record, and if there is no match, put nulls instead.
```
┌──────┬──────┬──────┬──────┐
│field1│field2│field3│field4│
├──────┼──────┼──────┼──────┤
│A     │1     │1     │One   │
│B     │2     │NULL  │NULL  │ ← No match
│C     │3     │3     │Three │
└──────┴──────┴──────┴──────┘
```
Now what would happen if there were no matches in Table B at all? For example, if you added an impossible condition in the ON clause? Then all the records in the result would look like the "No match" row:
```
┌──────┬──────┬──────┬──────┐
│field1│field2│field3│field4│
├──────┼──────┼──────┼──────┤
│A     │1     │NULL  │NULL  │ ← No match (because of impossible condition)
│B     │2     │NULL  │NULL  │ ← No match (because of impossible condition)
│C     │3     │NULL  │NULL  │ ← No match (because of impossible condition)
└──────┴──────┴──────┴──────┘
```
So it doesn't matter whether there was no match because there was no record in the second table with the given ID, or because you added an impossible condition. The result of an outer join is that the fields that were supposed to come from the second table are replaced with NULLs, because that's how an outer join is defined.
---
Now to the real world tables:
You don't actually have any records in Orders whose OrderID is null (unless you designed it very badly). So if you put that condition in the ON clause, it will find no records that meet your criteria.
In such a case, because this is an outer (left) join, you get all the original Customers records, and because there were no matches, each of them has all the Orders fields set to NULL.
In the case where you put the condition in the `WHERE`, you were actually making good use of this behaviour of a left join. You were matching each customer with its order. If there was a match - fine, you got the actual order ID. But in cases there was no match - the ones you are looking for - it adds a null order ID.
The where clause then tells it to give you only the records where that happened. That is, the records that did not have a matching order in Orders.
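Both behaviours can be demonstrated side by side with Python's `sqlite3` module; the customer and order rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (CustomerID INT, CustomerName TEXT);
    CREATE TABLE Orders (OrderID INT, CustomerID INT);
    INSERT INTO Customers VALUES (1,'Ann'),(2,'Bob'),(3,'Cara');
    INSERT INTO Orders VALUES (100,1),(101,3);
""")
# Condition in WHERE: filters after the join, so only customers
# without a matching order survive (the anti-join pattern)
no_orders = conn.execute("""
    SELECT c.CustomerID, c.CustomerName, o.OrderID
    FROM Customers c LEFT OUTER JOIN Orders o
      ON c.CustomerID = o.CustomerID
    WHERE o.OrderID IS NULL
""").fetchall()
# Condition in ON: no Orders row can ever match, so every customer
# comes back, all with a NULL OrderID
in_on = conn.execute("""
    SELECT c.CustomerID, c.CustomerName, o.OrderID
    FROM Customers c LEFT OUTER JOIN Orders o
      ON c.CustomerID = o.CustomerID AND o.OrderID IS NULL
""").fetchall()
print(no_orders)   # [(2, 'Bob', None)]
print(len(in_on))  # 3
```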
|
It is important to point out that the two queries are *very* different. The first query is:
```
SELECT c.CustomerID, c.CustomerName, o.OrderID
FROM Customers c LEFT OUTER JOIN
Orders o
ON c.CustomerID = o.CustomerID
WHERE o.OrderID IS NULL;
```
It returns all customers that have no corresponding orders (or `o.OrderId` is null). It does that because the `left outer join` keeps all rows in the first table. If there is no match, then all columns from the second table are `NULL` and the `where` clause would choose these columns.
The second query:
```
SELECT c.CustomerID, c.CustomerName, o.OrderID
FROM Customers c LEFT OUTER JOIN
Orders o
ON c.CustomerID = o.CustomerID AND
o.OrderID IS NULL;
```
finds all rows all customers and also gets order information where `OrderId` is null, if any such records exist. There is no filtering of `Customers`, because the `left outer join` guarantees that all rows from the first table are in the result set.
I would be surprised if a field called `OrderId` ever took on `NULL` values. But, each query is valid SQL and each does something useful. Only one of them, however, does what you intend.
|
Why does this statement only work with a WHERE?
|
[
"",
"sql",
""
] |
I have what I imagine is a somewhat basic SQL question and hoping for some help. I'm using MSSQL.
I have two tables :
```
Table A Table B
----------------------------------------------
A.ID |A.Date| B.ID|B.Date|B.Number
```
A.ID is unique; no duplicate rows. B.ID is not unique, so joining the tables on that column could return multiple rows. To grab the proper record, it's necessary to join on both columns, but this also has issues of undesired results. For example:
```
Table A | Table B
----------------------------------------------
1 |01-01-2014 | 1 |01-01-2014|25
2 |01-30-2014 | 1 |04-30-2013|40
3 |03-24-2014 | 1 |10-12-2018|12
4 |05-28-2014 | 2 |11-17-2013|55
```
The desired outcome is to join on both columns and return a single row. The problem I'm running into is that if I join on both columns being equal using the sample data, only the first row is returned. If I set the ID columns to equal and use a <=, multiple rows are returned for ID 1, which is undesired. I can't use a max date because some fields contain future dates, which shouldn't return records until the two date fields are equal. In the sample data, the desired results would return the numbers 25 and 55. So essentially I need to join on the ID columns being equal and the date fields being equal, but if there is no equal date in table B, return the number of the latest date, assuming it is not a future date.
Hopefully that makes sense. Thanks in advance.
|
I would recommend using a correlated subquery or the apply operator. Here is the method:
```
select a.*, b.*
from tablea a outer apply
(select top 1 b.*
from tableb b
where b.id = a.id and
b.date <= a.date
order by b.date desc
) b;
```
|
Yeah this is a little tricky but this should do it:
```
with equalvalues as (
select b.* from a
inner join b on a.ID=b.ID and a.adate=b.bdate
),
latestdates as (
select b.id, MAX(b.bdate) as maxbdate
from b
where b.bdate<=GETDATE()
group by b.ID
)
select number from equalvalues
union all
select b.number from b
inner join latestdates d on d.ID=b.ID and d.maxbdate=b.bdate
where b.ID not in (select ID from equalvalues)
```
|
Conditional SQL joins
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I want to update the type of a column in an Oracle table from varchar2(50) to varchar2(200).
I've started doing some impact analysis on this change by searching for the packages in which this field is used, but after looking at a few, it seems that the field length is not specified anywhere. The only thing specified is the column type, which in this case is **varchar2**.
My question is:
When the only thing that you want to change is the field length of a varchar2 type of one specific column, does this impact any underlying packages that this field might be referenced from?
Thanks
|
1. You just need to make sure that wherever the `column` is being referred in the `TYPE` declaration, you have mentioned as `column%type` and not hardcoded as `VARCHAR(50)`.
2. See if there are any `type` is explicitly created as `database object` referring this column, if it is hardcoded with the size, then it would fail there too.
Do this impact analysis first, especially if your `application/system` is quite old or legacy. Developers at that time might not have thought about the future and might have hardcoded the size as I described above. I have seen it happen in one of the applications I worked on, and fortunately caught it during `system integration testing`. So, do the `impact analysis` thoroughly and do the `integration testing` as well as possible.
|
> When the only thing that you want to change is the field length of a varchar2 type of one specific column, does this impact any underlying packages that this field might be referenced from
This is exactly the scenario that the [`%Type` type](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/type_attribute.htm) attempts to solve, viz that if you change the underlying type on the referenced column, that dependent artifacts (like packages) referencing the column will remain in synch.
So unfortunately, if you have defined parameters and variables in your packages as `varchar2(50)`, you will need to manually update these to the new width, in which case you can change it to `MYTABLE.MYCOLUMN%TYPE` to protect against a future change.
|
Updating column type in Oracle
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a table with records in one column that differ only in how they are written.
So how can I find those and save the corresponding id's in a new table?
e.g. I have the following records in a column for cities.
```
Id name
1 berlin
2 ber lin
3 ber-lin
4 Berlin
5 Hamburg
6 New York
7 NewYork
```
So my first assumption would be to remove any special characters, including white space, then lowercase, see what matches, and then write the ids to a new table?
What would be the best and most reliable way to find matches?
|
If removing some characters (' ' and '-' in the example) and lower-casing is enough to identify duplicates:
```
CREATE TABLE tbl_folded AS
SELECT lower(translate(name, ' -', '')) AS base_name
, array_agg(id) AS ids
FROM tbl
GROUP BY 1;
```
[SQL Fiddle](http://sqlfiddle.com/#!17/28efe7/1)
[`translate()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) is particularly useful to replace (or remove) a list of single characters.
Use [`CREATE TABLE AS`](https://www.postgresql.org/docs/current/sql-createtableas.html) to create a new table from the results of a query.
Related:
* [Pattern matching with LIKE, SIMILAR TO or regular expressions](https://dba.stackexchange.com/a/10696/3684)
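The same folding idea can be sketched with Python's `sqlite3` module; SQLite has no `translate()` or `array_agg()`, so chained `replace()` and `group_concat()` stand in for them here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (id INT, name TEXT);
    INSERT INTO tbl VALUES
        (1,'berlin'),(2,'ber lin'),(3,'ber-lin'),(4,'Berlin'),
        (5,'Hamburg'),(6,'New York'),(7,'NewYork');
""")
# Fold each name by stripping ' ' and '-' and lower-casing,
# then collect the ids that share the same folded form
rows = conn.execute("""
    SELECT lower(replace(replace(name, ' ', ''), '-', '')) AS base_name,
           group_concat(id) AS ids
    FROM tbl
    GROUP BY base_name
    ORDER BY base_name
""").fetchall()
print(rows)
```

Note that `group_concat()` does not guarantee the order of the ids within a group.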
|
This could certainly be optimized, but it works:
```
CREATE TABLE test (id INT(9) NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50) NOT NULL);
INSERT INTO test (id, name) VALUES ('', 'berlin');
INSERT INTO test (id, name) VALUES ('', 'ber lin');
INSERT INTO test (id, name) VALUES ('', 'ber-lin');
INSERT INTO test (id, name) VALUES ('', 'Berlin');
INSERT INTO test (id, name) VALUES ('', 'Hamburg');
INSERT INTO test (id, name) VALUES ('', 'New York');
INSERT INTO test (id, name) VALUES ('', 'NewYork');
CREATE TABLE tmp_clean_text (id INT(9) NOT NULL, name VARCHAR(50) NOT NULL);
INSERT INTO tmp_clean_text (id, name) SELECT id, REPLACE(REPLACE(LOWER(name), ' ', ''), '-', '') FROM test;
CREATE TABLE results (name VARCHAR(50) NOT NULL);
INSERT INTO results (name) SELECT DISTINCT name FROM tmp_clean_text;
UPDATE results SET results.name = CONCAT(results.name, ' ', (
SELECT GROUP_CONCAT(tmp_clean_text.id)
FROM tmp_clean_text
WHERE tmp_clean_text.name = results.name
));
DROP TABLE tmp_clean_text;
```
|
Which search strategy in SQL (postgres) to use in order to find similar strings
|
[
"",
"sql",
"postgresql",
"pattern-matching",
""
] |
I'm using Oracle SQL Developer
I have a patient table with fields name and phonenumber.
I want to only display the names of people who have the same number as at least someone else.
So the patient table is like this:
```
create table PATIENT(
name varchar2(9) primary key,
phoneNumber varchar2(15) null
);
```
So I tried the following:
```
select name, phoneNumber
from PATIENT
where phoneNumber in(select phoneNumber
from PATIENT
where name not .....);
```
So where the dots are, there should be a reference to `name` in the parent select statement.
Is there any way to do this, or is there a totally different way to solve this?
|
A self join should work for you.
```
SELECT a.NAME,
a.phoneNumber
FROM PATIENT a
JOIN PATIENT b
ON a.phoneNumber = b.phoneNumber
AND a.NAME <> b.NAME
```
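The self-join approach runs unchanged on SQLite, so it can be checked with Python's `sqlite3` module; the patient rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PATIENT (name TEXT PRIMARY KEY, phoneNumber TEXT);
    INSERT INTO PATIENT VALUES
        ('alice','555-1234'),('bob','555-1234'),('carol','555-9999');
""")
# Each row joins to any OTHER patient with the same number,
# so patients with a unique number drop out
rows = conn.execute("""
    SELECT a.name, a.phoneNumber
    FROM PATIENT a
    JOIN PATIENT b ON a.phoneNumber = b.phoneNumber
                  AND a.name <> b.name
    ORDER BY a.name
""").fetchall()
print(rows)  # [('alice', '555-1234'), ('bob', '555-1234')]
```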
|
I think this will give you the expected answer.
```
select parent_q.NAME,
parent_q.phoneNumber
from PATIENT parent_q
where (select count(sub_q.phoneNumber)
from PATIENT sub_q
where sub_q.phoneNumber = parent_q.phoneNumber) > 1
```
|
references in select statements
|
[
"",
"sql",
"oracle",
"select",
"reference",
"subquery",
""
] |
I use mysql. My table look like this:

Lastly, I tried this query:
```
SELECT * FROM movie GROUP BY `group` HAVING category = 'TV'
```
What I want from this query: show all rows, but GROUP BY only the TV category, where category = 'TV'.
**I want this Result**

**But my query gives me this result (the HAVING in the query works like a WHERE clause)**

If I use this query
```
SELECT * FROM movie GROUP BY `group`
```
It gives me this result

I want the query to GROUP BY `group` (ID 9 and IDs 1, 2, 3 are treated as different group names):
> IF `group` has all the same values BUT `category` = 'movie' (RETURN ALL ROWS,
> GROUP BY NOT APPLIED)
>
> IF `group` has all the same values BUT `category` = 'TV' (RETURN 1 ROW, GROUP BY APPLIED)
|
```
SELECT min(ID) as ID, min(Name), `group`, Category
FROM movie
GROUP BY `group`, Category
ORDER BY ID
```
|
You seem to want this query:
```
select m.*
from movie m join
(select `group`, min(id) as minid
from movie
group by `group`
) g
on m.id = g.minid;
```
|
show all data only group by specific rows : Select * from table group by column having column = 'value'
|
[
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
I have a column which returns
```
a:2:{i:0;s:10:"Properties";i:1;s:14:"Movable Assets";}
```
I would like to return only:
```
Properties, Movable Assets
```
How can I use a select statement to retrieve the values between the " symbols
|
Use `SUBSTRING_INDEX()`.
`SUBSTRING_INDEX()` takes a string argument followed by a delimiter character and the number of parts to return. After you break up the string using the delimiter, that number of parts is returned as a single string.
```
select concat(
SUBSTRING_INDEX(
SUBSTRING_INDEX(
SUBSTRING_INDEX(
'a:2:{i:0;s:10:"Properties";i:1;s:14:"Movable Assets";}',
'"',
4
),
'"',
2
),
'"',
-1
),
",",
SUBSTRING_INDEX(
SUBSTRING_INDEX(
SUBSTRING_INDEX(
'a:2:{i:0;s:10:"Properties";i:1;s:14:"Movable Assets";}',
'"',
4
),
'"',
4
),
'"',
-1
)
);
```
|
These are serialized values; you can use PHP to get your desired results.
You can use [unserialize](http://php.net/manual/en/function.unserialize.php), which returns an array, and then [implode](http://php.net/manual/en/function.implode.php) to get the comma-separated values.
[example](http://codepad.org/5CxaOuzD)
|
MYSQL SELECT multiple values between "" in Column
|
[
"",
"mysql",
"sql",
""
] |
I have three tables, libraryitems, copies and loans.
A libraryitem *hasMany* copies, and a copy *hasMany* loans.
I'm trying to get the *latest* loan entry for each copy only; the query below returns *all* loans for a given copy.
```
SELECT
libraryitems.title,
copies.id,
copies.qruuid,
loans.id AS loanid,
loans.status,
loans.byname,
loans.byemail,
loans.createdAt
FROM copies
INNER JOIN libraryitems ON copies.libraryitemid = libraryitems.id AND libraryitems.deletedAt IS NULL
LEFT OUTER JOIN loans ON copies.id = loans.copyid
WHERE copies.libraryitemid = 1
ORDER BY copies.id ASC, loans.createdAt DESC
```
I know there needs to be a subselect of some description in here, but I'm struggling to get the correct syntax. How do I return only the latest, i.e. the MAX(loans.createdAt), row for each distinct copy? Just using GROUP BY copies.id returns the earliest rather than the latest entry.
Image example below:

|
In the subquery we get the maximum created time for each loan, i.e. the latest entry, and join back with loans to get the other details.
```
SELECT
T.title,
T.id,
T.qruuid,
loans.id AS loanid,
loans.status,
loans.byname,
loans.byemail,
loans.createdAt
FROM
(
SELECT C.id, C.qruuid, L.title, MAX(LN.createdAt) as maxCreatedTime
FROM Copies C
INNER JOIN libraryitems L ON C.libraryitemid = L.id
AND L.deletedAt IS NULL
LEFT OUTER JOIN loans LN ON C.id = LN.copyid
GROUP BY C.id, C.qruuid, L.title) T
JOIN loans ON T.id = loans.copyid
AND T.maxCreatedTime = loans.createdAt
```
|
A self left join on loans table will give you latest loan of a copy, you may `join` the query to the other tables to fetch the desired output.
```
select * from loans A
left outer join loans B
on A.copyid = B.copyid and A.createdAt < B.createdAt
where B.createdAt is null;
```
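The self left join trick - keep a loan only if no later loan exists for the same copy - can be sketched with Python's `sqlite3` module; the loan rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loans (id INT, copyid INT, createdAt TEXT);
    INSERT INTO loans VALUES
        (1,100,'2024-01-01'),(2,100,'2024-02-01'),
        (3,200,'2024-01-15');
""")
# A row survives only if no sibling B with a later createdAt exists,
# i.e. it is the latest loan for its copy
rows = conn.execute("""
    SELECT A.id, A.copyid, A.createdAt
    FROM loans A
    LEFT OUTER JOIN loans B
      ON A.copyid = B.copyid AND A.createdAt < B.createdAt
    WHERE B.createdAt IS NULL
    ORDER BY A.copyid
""").fetchall()
print(rows)  # [(2, 100, '2024-02-01'), (3, 200, '2024-01-15')]
```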
|
mySQL Sub Select needed
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
Is it possible to omit the `WHERE` if a variable is a certain value? The following doesn't work and I am struggling to find an answer;
```
DECLARE @rMonth int, @rYear int, @sID int
SET @rMonth = 0;
SET @rYear = 0;
SET @sID= 0;
SELECT
TCS.bStatus AS jStatus, TCS.ID, TCS.sID, TCS.insDate, TCS.statusLabel, TCS.cID
FROM
TCS
CASE WHEN @rMonth > 0 THEN
WHERE month(insDate) = @rMonth AND year(insDate) = @rYear
END
```
|
This is a common scenario: including a clause conditionally.
Check this out:
```
WHERE
(@rMonth = 0 OR MONTH(insDate) = @rMonth)
AND (@rYear = 0 OR YEAR(insDate) = @rYear)
AND (@sID = 0 OR sID = @sID)
```
In the above query, for each clause the right-hand side of the `OR` is applied only if the left-hand side is false. Otherwise the whole clause is considered true and does not filter any rows.
By using this trick, any complex clause can be written on the right side of the `OR`, not just a simple equality comparison.
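The `(@var = 0 OR col = @var)` pattern can be sketched with Python's `sqlite3` module; SQLite's `strftime()` stands in for SQL Server's `MONTH()`/`YEAR()`, and the table rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TCS (ID INT, sID INT, insDate TEXT);
    INSERT INTO TCS VALUES
        (1, 7, '2024-03-05'), (2, 7, '2024-04-10'), (3, 8, '2024-03-20');
""")

def fetch(r_month, r_year, s_id):
    # Each (? = 0 OR col = ?) clause disables itself when the
    # parameter is 0, so 0 means "don't filter on this column"
    return conn.execute("""
        SELECT ID FROM TCS
        WHERE (? = 0 OR CAST(strftime('%m', insDate) AS INT) = ?)
          AND (? = 0 OR CAST(strftime('%Y', insDate) AS INT) = ?)
          AND (? = 0 OR sID = ?)
        ORDER BY ID
    """, (r_month, r_month, r_year, r_year, s_id, s_id)).fetchall()

print(fetch(0, 0, 0))     # all rows
print(fetch(3, 2024, 0))  # March 2024 only
print(fetch(3, 2024, 7))  # March 2024, sID 7 only
```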
|
```
WHERE
(@rMonth <= 0)
OR
(month(insDate) = @rMonth AND year(insDate) = @rYear)
```
|
If Var is certain Value omit the WHERE
|
[
"",
"sql",
"sql-server",
"if-statement",
"case",
""
] |
A few columns of my table look like this:
```
Id Code date latest
1 T 2014-10-04 0
2 B 2014-10-19 0
2 B 2014-10-26 0
1 S 2014-10-05 0
1 T 2014-10-06 0
1 T 2014-10-08 1
2 P 2014-10-27 1
```
I am tracking all changes made by each ID. If there is any change, I insert a new row and update the `latest` column.
What I want: for each ID, find the last code where `latest` is 0. Also, that code should not be equal to the existing code (`latest = 1`). So for id = 1 the answer cannot be
```
Id Code
1 T
```
as for `id = 1` T is existing code (`latest = 1`).
So ideally my output should look like:
```
Id Code
1 S
2 B
```
I think I can get the latest code value for each id where `latest = 0`.
But how do I make sure that it is not equal to the existing code value (`latest = 1`)?
|
Works in **Postgres**:
```
SELECT DISTINCT ON (t0.id)
t0.id, t0.code
FROM tbl t0
LEFT JOIN tbl t1 ON t1.code = t0.code
AND t1.id = t0.id
AND t1.latest = 1
WHERE t0.latest = 0
AND t1.code IS NULL
ORDER BY t0.id, t0.date DESC;
```
I use the combination of a `LEFT JOIN` / `IS NULL` to remove siblings of rows with `latest = 1`. There are various ways to do this:
* [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
Details for `DISTINCT ON`:
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
### Version with CTE and 2x LEFT JOIN
Since **Redshift** does not seem to support `DISTINCT ON`:
```
WITH cte AS (
SELECT t0.*
FROM tbl t0
LEFT JOIN tbl t1 ON t1.code = t0.code
AND t1.id = t0.id
AND t1.latest = 1
WHERE t0.latest = 0
AND t1.id IS NULL
)
SELECT c0.id, c0.code
FROM cte c0
LEFT JOIN cte c1 ON c1.id = c0.id
AND c1.date > c0.date
WHERE c1.id IS NULL
ORDER BY c0.id;
```
[**SQL Fiddle**](http://sqlfiddle.com/#!12/4cc7a/12) showing both.
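The CTE + double `LEFT JOIN` variant runs unchanged on SQLite, so it can be checked against the question's sample data with Python's `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (id INT, code TEXT, date TEXT, latest INT);
    INSERT INTO tbl VALUES
        (1,'T','2014-10-04',0),(2,'B','2014-10-19',0),
        (2,'B','2014-10-26',0),(1,'S','2014-10-05',0),
        (1,'T','2014-10-06',0),(1,'T','2014-10-08',1),
        (2,'P','2014-10-27',1);
""")
rows = conn.execute("""
    WITH cte AS (
        -- latest = 0 rows whose code has no latest = 1 sibling
        SELECT t0.*
        FROM tbl t0
        LEFT JOIN tbl t1 ON t1.code = t0.code
                        AND t1.id = t0.id
                        AND t1.latest = 1
        WHERE t0.latest = 0
          AND t1.id IS NULL
    )
    -- keep only the most recent surviving row per id
    SELECT c0.id, c0.code
    FROM cte c0
    LEFT JOIN cte c1 ON c1.id = c0.id
                    AND c1.date > c0.date
    WHERE c1.id IS NULL
    ORDER BY c0.id
""").fetchall()
print(rows)  # [(1, 'S'), (2, 'B')]
```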
|
I believe you should keep only the current version in this table and store previous revisions in another table, with a foreign key to the `Id`. Your `Id` does not fulfill the general expectations for a column with such a name. So, ideally, you would:
* create a table `Revisions(Id, myTableId, code, date, revision)`, where `Id` would be an `auto_increment` `primary key` and `myTableId` would point to the `Id` of the records (1 and 2 in the example)
* migrate the elements into Revisions: `insert into Revisions(myTableId, code, date, revision) select Id, code, date, latest from MyTable where latest = 0`
* update the migrated records: `update Revisions r1 set r1.revision = (select count(*) from Revisions r2 where r2.date < r1.date)`
* remove the old data from your table: `delete from MyTable where latest = 0`
* drop the `latest` column from MyTable
From here, you will always be able to select the penultimate version, or the second to last, and so on, without problems. Note that my code suggestions might have the wrong syntax for PostgreSQL, as I have never used it, but the idea should work there as well.
|
Comparing different rows in PostgreSQL for each Id
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
"amazon-redshift",
""
] |
I have a token table
```
id | status
------------
1 | taken
1 | used
1 | deleted
2 | taken
2 | deleted
3 | taken
```
I need to count how many tokens are used (in use or used).
If a token is taken and deleted without being used, then it should not be counted.
So the SQL would be something like
```
SELECT count(*) if the id's status is not (taken & deleted)
```
The desired number of used token in above example is 2 as
```
id 1 has been taken used and deleted -> count it
id 3 has been taken -> count it
id 2 has been taken and deleted without being used -> do not count it
```
|
A little bit verbose but efficient and still readable and maintainable:
```
SELECT COUNT(DISTINCT id)
FROM dbo.Token t
WHERE EXISTS
(
SELECT 1 FROM dbo.Token t1
WHERE t.id = t1.id
AND t1.status = 'used'
)
OR
(
EXISTS(
SELECT 1 FROM dbo.Token t1
WHERE t.id = t1.id
AND t1.status = 'taken'
)
AND NOT EXISTS(
SELECT 1 FROM dbo.Token t1
WHERE t.id = t1.id
AND t1.status = 'deleted'
)
)
```
`Demo`
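The `EXISTS` logic above also runs on SQLite, so it can be checked against the question's token data with Python's `sqlite3` module (SQLite stands in for SQL Server here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Token (id INT, status TEXT);
    INSERT INTO Token VALUES
        (1,'taken'),(1,'used'),(1,'deleted'),
        (2,'taken'),(2,'deleted'),
        (3,'taken');
""")
# Count an id if it was ever used, or if it was taken but never deleted
(count,) = conn.execute("""
    SELECT COUNT(DISTINCT id)
    FROM Token t
    WHERE EXISTS (SELECT 1 FROM Token t1
                  WHERE t.id = t1.id AND t1.status = 'used')
       OR (EXISTS (SELECT 1 FROM Token t1
                   WHERE t.id = t1.id AND t1.status = 'taken')
           AND NOT EXISTS (SELECT 1 FROM Token t1
                           WHERE t.id = t1.id AND t1.status = 'deleted'))
""").fetchone()
print(count)  # 2 - ids 1 and 3 counted, id 2 excluded
```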
|
You need to take all three conditions into account, so a naive approach would be to just compare all three with a case statement:
```
WITH grouped as
(
select id from #uses group by id
)
select grouped.id,
used =
CASE WHEN used.id is not null THEN 'YES'
WHEN taken.id is not null and deleted.id is null THEN 'YES'
ELSE 'NO'
END
from grouped
left join #uses taken on grouped.id = taken.id
and taken.use_status = 'taken'
left join #uses used on grouped.id = used.id
and used.use_status = 'used'
left join #uses deleted on grouped.id = deleted.id
and deleted.use_status = 'deleted'
```
The case statement will stop whenever the condition is met, so you only need two `WHEN`s and an `ELSE` to meet the conditions.
This is a naive approach, though, and assumes that you only ever have one row per id and use status type. You'd have to do some additional work if that wasn't the case.
|
sql Select id that matches combination of column
|
[
"",
"sql",
"sql-server",
""
] |
I have the following query:
```
SELECT dbo.saleDocumentDetails.ID,
dbo.saleDocumentDetails.DocumentID,
dbo.saleDocumentDetails.invProductID,
dbo.saleDocumentDetails.Qty,
dbo.saleDocumentDetails.QtyConfirmed,
dbo.saleDocumentDetails.AmountPrice,
dbo.saleDocumentDetails.AmountUserPrice,
dbo.saleDocumentDetails.AmountCost,
dbo.saleDocumentDetails.RespiteDays,
dbo.saleDocumentDetails.Date_UpdateLast,
CASE
WHEN io1.Code = '33' THEN SUM(idd.Qty)
ELSE SUM(0)
END AS Qty_BackSale,
CASE
WHEN io1.Code = '41' THEN SUM(idd.Qty)
ELSE SUM(0)
END AS Qty_Stock
FROM dbo.saleDocuments
INNER JOIN dbo.saleDocumentDetails
ON dbo.saleDocuments.ID = dbo.saleDocumentDetails.DocumentID
LEFT OUTER JOIN dbo.invDocumentDetails AS idd
ON dbo.saleDocumentDetails.ID = idd.saleDocumentDetailID
LEFT OUTER JOIN dbo.invDocuments AS id2
ON id2.ID = idd.invDocumentID
LEFT JOIN dbo.invOperations AS io1
ON (io1.ID = id2.invOperationID)
WHERE (dbo.saleDocumentDetails.ID = 1295617)
GROUP BY
dbo.saleDocumentDetails.ID,
dbo.saleDocumentDetails.DocumentID,
dbo.saleDocumentDetails.invProductID,
dbo.saleDocumentDetails.Qty,
dbo.saleDocumentDetails.QtyConfirmed,
dbo.saleDocumentDetails.AmountPrice,
dbo.saleDocumentDetails.AmountUserPrice,
dbo.saleDocumentDetails.AmountCost,
dbo.saleDocumentDetails.RespiteDays,
dbo.saleDocumentDetails.Date_UpdateLast,
io1.Code
```
its result is:

I would like to achieve:

How should I change the query?
|
As JRLambert mentioned, move the `SUM()` outside the `CASE`, and remove `io1.Code` from the `GROUP BY`.
|
Move the `SUM()` in your `CASE` statements outside of the `CASE`:
```
SELECT
dbo.saleDocumentDetails.ID,
dbo.saleDocumentDetails.DocumentID,
dbo.saleDocumentDetails.invProductID,
dbo.saleDocumentDetails.Qty,
dbo.saleDocumentDetails.QtyConfirmed,
dbo.saleDocumentDetails.AmountPrice,
dbo.saleDocumentDetails.AmountUserPrice,
dbo.saleDocumentDetails.AmountCost,
dbo.saleDocumentDetails.RespiteDays,
dbo.saleDocumentDetails.Date_UpdateLast,
SUM(CASE
WHEN io1.Code = '33' THEN idd.Qty
ELSE 0
END) AS Qty_BackSale,
SUM(CASE
WHEN io1.Code = '41' THEN idd.Qty
ELSE 0
END) AS Qty_Stock
FROM
dbo.saleDocuments
INNER JOIN dbo.saleDocumentDetails
ON dbo.saleDocuments.ID = dbo.saleDocumentDetails.DocumentID
LEFT OUTER JOIN dbo.invDocumentDetails AS idd
ON dbo.saleDocumentDetails.ID = idd.saleDocumentDetailID
LEFT OUTER JOIN dbo.invDocuments AS id2
ON id2.ID = idd.invDocumentID
LEFT JOIN dbo.invOperations AS io1
ON (io1.ID = id2.invOperationID)
WHERE
(dbo.saleDocumentDetails.ID = 1295617)
GROUP BY
dbo.saleDocumentDetails.ID,
dbo.saleDocumentDetails.DocumentID,
dbo.saleDocumentDetails.invProductID,
dbo.saleDocumentDetails.Qty,
dbo.saleDocumentDetails.QtyConfirmed,
dbo.saleDocumentDetails.AmountPrice,
dbo.saleDocumentDetails.AmountUserPrice,
dbo.saleDocumentDetails.AmountCost,
dbo.saleDocumentDetails.RespiteDays,
dbo.saleDocumentDetails.Date_UpdateLast,
io1.Code;
```
|
How to use aggregate function?
|
[
"",
"sql",
"sql-server",
"aggregate-functions",
""
] |
I have a query where I need to get values in 4 columns, but only if the date in a 5th column is greater than today. I have tried the following, but it doesn't seem to be working.
```
Select
(case when clientplans.END_DATE < convert(date,getdate(,101) then '') else insplans.Desc_upper as PLAN NAME,
(case when clientplans.END_DATE < convert(date,getdate(,112) then '') else insplans.ID_NO,
(case when clientplans.END_DATE < convert(date,getdate(,112) then '') else insplans.cert_NO,
```
I have converted the date on the end date as follows:
convert (varchar,clientplans.END\_DATE,112) as POLICY\_EXP\_DATE,
Does it matter that I do the conversion of the end date later in the query? The clientplans.end\_date has to be inserted into the results in a certain order, which happens to be after the description, ID and cert number. Thanks for any help.
|
Perhaps something like this does what you want:
```
Select (case when cp.END_DATE > cast(getdate() as date) then insplans.Desc_upper end) as PLAN_NAME,
(case when cp.END_DATE > cast(getdate() as date) then insplans.ID_NO end) as ID_NO,
(case when cp.END_DATE > cast(getdate() as date) then insplans.cert_NO END) as cert_NO
from clientplans cp . . .
```
Note the following:
* This table uses table aliases (`cp` for `clientplans`), so the query is easier to write and to read.
* It uses `cast()` to a `date` to just get the date.
* Non-matching rows are given a `NULL` value instead of `''`. That usually makes more sense.
EDIT:
Of course, you can use `''`, if you like:
```
Select (case when cp.END_DATE > cast(getdate() as date) then insplans.Desc_upper else '' end) as PLAN_NAME,
(case when cp.END_DATE > cast(getdate() as date) then insplans.ID_NO else '' end) as ID_NO,
(case when cp.END_DATE > cast(getdate() as date) then insplans.cert_NO else '' end) as cert_NO,
(case when cp.END_DATE > cast(getdate() as date)
then convert(varchar(255), cp.StartDate, 121) else ''
end) as StartDate
from clientplans cp . . .
```
|
Use this to get the start of today: `DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)`
For example:
```
SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
```
And for yours...
```
Select
(case when clientplans.END_DATE > DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0) then insplans.Desc_upper else '' end) as PLAN_NAME,
(case when clientplans.END_DATE > DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0) then insplans.ID_NO else '' end) as ID_NO,
(case when clientplans.END_DATE > DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0) then insplans.cert_NO else '' end) as cert_NO,
```
|
Obtaining columns values only if date > today
|
[
"",
"sql",
"sql-server",
""
] |
I have a question for you all. I have 'inherited' a DB at work and I have to create a report from a table using different conditions. Please note I'm no sql expert, I hope what I write makes sense.
Trying to simplify, I have a HARDWARE table that contains the following:
* **HWTYPE** - *type of hardware*
* **HWMODEL** - *model of hardware*
* **PHONENUM** - *phone number*
* **USERID** - *user the hardware is assigned to*
the data looks like this:
```
HWTYPE | HWMODEL | PHONENUM | USERID
-------+------------+------------+----------
SIM | SIMVOICE | 123456 | CIRO
SIM | SIMVOICE | 124578 | LEO
PHONE | APPLE | | CIRO
PHONE | SAMSUNG | | LEO
```
now as you can see, every user has assigned one phone and one SIM with a phone number.
I need to sort the data per user, so that every line of the query result look like:
```
HW | PHONENUM | USERID
---------+--------------+------
APPLE | 123456 | CIRO
SAMSUNG | 124578 | LEO
```
so basically: group column PHONENUM and HWMODEL based on USER.
And this is where I get stuck! I tried union, join, case etc. but I still don't get the correct result.
Again apologies for the (probably) very basic question. I tried to look for something similar but could not find anything.
Thanks to whoever will want to help me.
regards
Leo
|
You can also get your expected result using the query below. It uses a self join on the HARDWARE table and keeps the query syntax short.
```
SELECT H2.HWMODEL AS HW,H1.PHONENUM,H1.USERID FROM HARDWARE H1 INNER JOIN HARDWARE H2 ON H1.USERID = H2.USERID
WHERE ISNULL(H1.PHONENUM,'') <> ''
AND ISNULL(H2.PHONENUM,'') = ''
ORDER BY H2.HWMODEL ASC
```
Hope this will help you.
|
I don't know if I understood your question or not,
but I think you just need to write the following query for your output:
```
SELECT
HWTYPE, HWMODEL, USERID
FROM
HARDWARE
GROUP BY USERID ,HWTYPE,HWMODEL
ORDER BY HWTYPE
```
|
Retrieving values from two columns based on different conditions
|
[
"",
"sql",
"sql-server",
""
] |
The conditional drop table line in the code below doesn't seem to work. It executes without error, but then the next line errors saying the object ID already exists - so it's not actually dropping the table.
Any ideas?
I'm connecting to an MS SQL Server 2005 database through OLE DB (SQLOLEDB provider) using ADO.
```
'Create a temporary table on the SQL Server and insert into it the info for the end part
With ADOcmA
Set .ActiveConnection = mObjCON
.CommandType = adCmdText
.CommandText = "IF OBJECT_ID('" & TEMPTABLE & "') IS NOT NULL DROP TABLE " & TEMPTABLE
.Execute
.CommandText = "CREATE TABLE " & TEMPTABLE & " (ITEM VARCHAR(255),DESCRIP1 CHAR(255),DESCRIP2 CHAR(255), LEV INT, SEQ VARCHAR(255), FLAG1 TINYINT, PRIMARYKEY INT IDENTITY(1,1) PRIMARY KEY,QTY_PER FLOAT)"
.Execute
.CommandText = "Insert Into " & TEMPTABLE & " (ITEM,DESCRIP1,DESCRIP2,LEV,SEQ,FLAG1,QTY_PER) select item_no,Item_desc_1,Item_desc_2,1,1,'1',1 FROM " & cstrMACtItem & " WHERE Item_no ='" & strITEM & "' "
.Execute
End With
```
FWIW the temptable name is created at runtime using the format `#[WSID]TEMP[NOW]` and truncated to 116 characters, so for example:`TEMPTABLE=#LOKSPEC1TEMP141031155408`
Also, I've tried escaping the table name using single quotes or double quotes but always same result (it doesn't drop). I found square brackets raise a syntax error.
|
SQL Server creates tables prefixed with the pound sign in a different database (tempdb), not in your current working database. So if you want to interrogate schema information of a temp table, you need to explicitly reference that other database. It might be easier to see if you tweaked your code like this
```
.CommandText = "IF OBJECT_ID('tempdb.." & TEMPTABLE & "') IS NOT NULL DROP TABLE " & TEMPTABLE
```
|
Change your temp table drop query to this
```
CommandText = "IF OBJECT_ID('tempdb.." & TEMPTABLE & "') IS NOT NULL DROP TABLE " & TEMPTABLE
```
|
Temporary table will not drop
|
[
"",
"sql",
"sql-server",
"ado",
"temp-tables",
"drop-table",
""
] |
I want to use SQL to get the the 5 "best" rated profiles. The problem is, I do not want the ranking solely based on grades.
Say there is a profile with a grade 8.1, but with 5 reviews I want it to be so that a profile with a grade 7.9 and 30 reviews will be ranked higher.
How can I make it so that a profile whose grade is at most 0.3 lower than another's, but which has at least 30% more reviews, is listed higher than the other?
This is my query now: (average = grade, reviews = number of reviews)
```
SELECT name,reviews,average
FROM profiles
WHERE average >= 7
AND reviews >= 50
ORDER BY average DESC, reviews DESC
LIMIT 5
```
|
You could do a similar calculation in the `ORDER BY` clause. I'll keep trying to see if I can get *exactly* what you're going for, but to start, you could weight your sorting something like this.
```
SELECT name,reviews,average
FROM profiles
WHERE average >= 7
AND reviews >= 50
ORDER BY (average*.7 + reviews*.3)
LIMIT 5
```
|
Hmmm, I'm not sure how to implement your specific rule exactly.
When faced with this task, I often subtract one standard error from the estimated value of the reviews. This gives a lower bound of the rating with some level of confidence. In general, the more reviews, the more confidence, and the smaller the standard error.
This could be readily calculated if you knew the individual reviews:
```
SELECT name, count(*) as numreviews, avg(review),
       avg(review) - std(review)/sqrt(count(*)) as avg_lb
FROM reviews r
GROUP BY name
ORDER BY avg_lb DESC
LIMIT 5;
```
If you are not collecting the standard deviation of the reviews for an individual, then this method will not work.
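As an illustration of the "average minus one standard error" ranking, with invented review scores for a single profile (note that MySQL's `STD()` is the population standard deviation, so `pstdev` is used here):

```python
import math
import statistics

reviews = [8, 7, 9, 8, 8]          # individual review scores for one profile
avg = statistics.mean(reviews)
se = statistics.pstdev(reviews) / math.sqrt(len(reviews))
avg_lb = avg - se                  # lower bound used for ranking
print(round(avg_lb, 4))
```

The more reviews a profile has, the smaller `se` becomes, so its lower bound creeps up toward its raw average.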
|
SQL complex sorting by two columns
|
[
"",
"mysql",
"sql",
"sorting",
""
] |
I have a table with `opening_hours` for restaurants:
```
SELECT * FROM opening_hours;
+----+---------------+------------+----------+-----+
| id | restaurant_id | start_time | end_time | day |
+----+---------------+------------+----------+-----+
| 1 | 1 | 12:00:00 | 18:00:00 | 1 |
| 2 | 1 | 09:00:00 | 19:00:00 | 4 |
| 3 | 2 | 09:00:00 | 16:00:00 | 4 |
| 4 | 2 | 09:00:00 | 16:00:00 | 5 |
| 5 | 3 | 09:00:00 | 16:00:00 | 4 |
| 6 | 3 | 09:00:00 | 16:00:00 | 5 |
| 7 | 3 | 09:00:00 | 16:00:00 | 1 |
| 8 | 3 | 09:00:00 | 16:00:00 | 6 |
+----+---------------+------------+----------+-----+
```
<http://www.sqlfiddle.com/#!2/eaea09/1>
Now I want to fetch the "closest" next day or same day for every restaurant to the current day. For example when the current day is `1` the result would be:
```
restaurant_id: 1 day: 1
restaurant_id: 2 day: 4
restaurant_id: 3 day: 1
```
In the case of day `1` I could do this:
```
SELECT day FROM opening_hours WHERE day >= 1 GROUP BY restaurant_id LIMIT 1
```
But if today would be `6` that would not work. I would need the query to go get the maximum number of days (`7`) and if that could not be found it should start trying from `1` again. So the result for day `6` would be in this case:
```
restaurant_id: 1 day: 1
restaurant_id: 2 day: 4
restaurant_id: 3 day: 6
```
How could I achieve this with a query?
I'd think it could be something like this in pseudo SQL:
```
SELECT `day` FROM opening_hours WHERE `day` >= 'today' IF NOT FOUND WHERE `day` >= 1 GROUP BY `restaurant_id` LIMIT 1
```
edit:
I could run 2 queries, and determine if a match for a restaurant was found in the first. If not, run a second query. But there must be a better way.
|
Tested, this one works :) you can keep it simple.
**Query:**
```
SELECT * FROM (
SELECT
oh.*
FROM
opening_hours oh
ORDER BY restaurant_id,
`day` + IF(`day` < $current_day, 7, 0)
) sq
GROUP BY restaurant_id;
```
**Explanation:**
Note though, that this is a bit hacky. To select a column that is not used in the `group by` and has no aggregate function applied to it, usually isn't allowed, because theoretically it could give you a random row of each group. That's why it's not allowed in most database systems. MySQL is actually the only one I know of, that allows this (if not set otherwise via sql-mode). Like I said, in theory. Practically it's a bit different and if you do an `order by` in the subquery, MySQL will always give you the minimum or maximum value (depending on the sort order).
**Tests:**
Desired result with current day = 1:
```
root@VM:playground > SELECT * FROM (
-> SELECT
-> oh.*
-> FROM
-> opening_hours oh
-> ORDER BY restaurant_id,
-> `day` + IF(`day` < 1, 7, 0)
-> ) sq
-> GROUP BY restaurant_id;
+----+---------------+------------+----------+-----+
| id | restaurant_id | start_time | end_time | day |
+----+---------------+------------+----------+-----+
| 1 | 1 | 12:00:00 | 18:00:00 | 1 |
| 3 | 2 | 09:00:00 | 16:00:00 | 4 |
| 7 | 3 | 09:00:00 | 16:00:00 | 1 |
+----+---------------+------------+----------+-----+
3 rows in set (0.00 sec)
```
Desired result with current day = 6:
```
root@VM:playground > SELECT * FROM (
-> SELECT
-> oh.*
-> FROM
-> opening_hours oh
-> ORDER BY restaurant_id,
-> `day` + IF(`day` < 6, 7, 0)
-> ) sq
-> GROUP BY restaurant_id;
+----+---------------+------------+----------+-----+
| id | restaurant_id | start_time | end_time | day |
+----+---------------+------------+----------+-----+
| 1 | 1 | 12:00:00 | 18:00:00 | 1 |
| 3 | 2 | 09:00:00 | 16:00:00 | 4 |
| 8 | 3 | 09:00:00 | 16:00:00 | 6 |
+----+---------------+------------+----------+-----+
3 rows in set (0.00 sec)
```
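The same wrap-around offset can also be written in standard SQL, avoiding the loose `GROUP BY` behaviour described above; a sketch run in SQLite, where `CASE` stands in for MySQL's `IF()` and a modulo maps the shifted minimum back to a weekday:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE opening_hours (id INT, restaurant_id INT, day INT)")
cur.executemany("INSERT INTO opening_hours VALUES (?, ?, ?)", [
    (1, 1, 1), (2, 1, 4), (3, 2, 4), (4, 2, 5),
    (5, 3, 4), (6, 3, 5), (7, 3, 1), (8, 3, 6),
])
current_day = 6
# shift days before today into next week, take the minimum, map back to 1..7
result = cur.execute("""
    SELECT restaurant_id,
           (MIN(day + CASE WHEN day < ? THEN 7 ELSE 0 END) - 1) % 7 + 1 AS next_day
    FROM opening_hours
    GROUP BY restaurant_id
    ORDER BY restaurant_id
""", (current_day,)).fetchall()
print(result)  # [(1, 1), (2, 4), (3, 6)]
```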
|
This is a tricky one.
Best I can come up with is this, which seems to work but I might be missing some edge cases.
```
SELECT sub0.restaurant_id, MIN(sub1.day)
FROM
(
SELECT restaurant_id, MIN( LEAST(ABS(DAYOFWEEK(CURDATE()) - day), ABS(DAYOFWEEK(CURDATE()) - (day + 7)), ABS(DAYOFWEEK(CURDATE()) - (day - 7)))) AS difference
FROM opening_hours
GROUP BY restaurant_id
) sub0
INNER JOIN
(
SELECT restaurant_id, day, LEAST(ABS(DAYOFWEEK(CURDATE()) - day), ABS(DAYOFWEEK(CURDATE()) - (day + 7)), ABS(DAYOFWEEK(CURDATE()) - (day - 7))) AS difference
FROM opening_hours
) sub1
ON sub0.restaurant_id = sub1.restaurant_id
AND sub0.difference = sub1.difference
GROUP BY sub0.restaurant_id
```
The first sub query is getting the absolute difference between todays day and each restaurant day. It is using the day, the day plus 7 and the day minus 7 to compare with, using ABS to just get the difference in days and using LEAST to get the lowest of those differences. This way if the current day is 1 and there is a restaurant day of 6 it is comparing 1 + 7 with 6, 1 - 7 with 6 and 1 with 6 and getting the least of those (in this case that would be 1 + 7).
The 2nd sub query just gets that difference and the day of the week for each possible restaurant / day, and this is joined to the first sub query.
The outer query uses MIN just to pick a single day when 2 are just as close.
|
SQL "First relevant day"
|
[
"",
"mysql",
"sql",
""
] |
I am using the expression below after SELECT in a SQL query. Some of the results have a space at the start, so I believe I need to use ltrim to display them as I'd like.
```
SUBSTRING(mf_mm.DisplayName,8,15) as 'Name'
```
Could someone tell me the syntax to use `ltrim` in this case?
|
Try this:-
```
LTRIM(SUBSTRING(mf_mm.DisplayName,8,15)) as 'Name'
```
|
Just `LTRIM(SUBSTRING(mf_mm.DisplayName,8,15)) as 'Name'` in T-SQL
|
ltrim a substring
|
[
"",
"sql",
""
] |
```
id | name  | elapsed
---+-------+--------
1  | James | 0
2  | John  | 60
3  | Kerry | 60
4  | Janet | 60
5  | Katie | 60
```
Based on the results above, how can I select names where the running total of the 'elapsed' value is less than or equal to 120? For those results, that would mean selecting the names 'James', 'John' and 'Kerry'. If I changed the total elapsed to 180, it would also select 'Janet'.
How can I construct a query that returns this?
|
This should work:
```
SELECT
t1.id,
t1.name,
t1.elapsed
FROM your_table t1
INNER JOIN your_table t2 ON (t1.id >= t2.id)
GROUP BY t1.id, t1.name, t1.elapsed
HAVING SUM(t2.elapsed) <= 120
```
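Verified against the sample data in SQLite (the table name `people` is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE people (id INT, name TEXT, elapsed INT)")
cur.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    (1, "James", 0), (2, "John", 60), (3, "Kerry", 60),
    (4, "Janet", 60), (5, "Katie", 60),
])
# t2 supplies, for each row t1, all rows at or before it; the HAVING
# clause then filters on the running total of elapsed
names = [r[0] for r in cur.execute("""
    SELECT t1.name
    FROM people t1
    INNER JOIN people t2 ON t1.id >= t2.id
    GROUP BY t1.id, t1.name, t1.elapsed
    HAVING SUM(t2.elapsed) <= 120
    ORDER BY t1.id
""")]
print(names)  # ['James', 'John', 'Kerry']
```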
|
Here's something to think about...
```
SELECT * FROM ints;
+---+
| i |
+---+
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+---+
SELECT i,@x:=@x+i running FROM ints i,(SELECT @x:=0)var ORDER BY i;
+---+----------+
| i | running |
+---+----------+
| 0 | 0 |
| 1 | 1 |
| 2 | 3 |
| 3 | 6 |
| 4 | 10 |
| 5 | 15 |
| 6 | 21 |
| 7 | 28 |
| 8 | 36 |
| 9 | 45 |
+---+----------+
```
|
Select rows where total elapsed value is less than or equal to
|
[
"",
"mysql",
"sql",
"innodb",
""
] |
So, here's my query:
```
SELECT '$'
+ CONVERT(VARCHAR (6), Cast(Avg(TotalPrice) AS NUMERIC (6, 2))) AS 'Average Price',
'$'
+ CONVERT(VARCHAR (6), Cast(Min(TotalPrice) AS NUMERIC (6, 2))) AS 'Minimum Price',
'$'
+ CONVERT(VARCHAR (6), Cast(Max(TotalPrice) AS NUMERIC (6, 2))) AS 'Maximum Price'
FROM Invoice;
```
The AVG column and the MIN column work fine but the MAX column returns:
"Arithmetic overflow error converting numeric to data type varchar"
And I'm not sure why I get the error.
|
NUMERIC(6,2) indicates 6 digits in total, of which 2 are decimal places.
If you have a value like 1234.66, the total number of characters needed is 7 (the decimal point counts too).
Check the maximum value and use an appropriate varchar size; here you need at least `varchar(7)`.
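A quick check of the width arithmetic (Python used only to count characters):

```python
# NUMERIC(6,2) can hold up to 9999.99: 6 digits plus the decimal point
widest = "9999.99"
chars_needed = len(widest)
print(chars_needed)  # 7, so CONVERT(VARCHAR(6), ...) overflows; VARCHAR(7) is the minimum
```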
|
Try this
```
SELECT '$'
+ CONVERT(VARCHAR (10), Cast(Avg(TotalPrice) AS NUMERIC (8, 2))) AS 'Average Price',
'$'
+ CONVERT(VARCHAR (10), Cast(Min(TotalPrice) AS NUMERIC (8, 2))) AS 'Minimum Price',
'$'
+ CONVERT(VARCHAR (10), Cast(Max(TotalPrice) AS NUMERIC (8, 2))) AS 'Maximum Price'
FROM Invoice;
```
|
Arithmetic overflow error converting numeric to data type varchar
|
[
"",
"sql",
"sql-server",
""
] |
I've checked other similar questions about repeated commas and errors in commands, but can't find any of those problems in my statement. I've also searched for examples of CREATE TABLE statements to compare with mine, but fail to find any difference :(.
Below is the CREATE table statement:
```
CREATE TABLE DRIVER(L# VARCHAR(15) NOT NULL
, DNAME VARCHAR(75) NOT NULL
, STATUS VARCHAR(50) NOT NULL
, NRIC VARCHAR (15) NOT NULL
, PRIMARY KEY(L#)
, CANDIDATE KEY(NRIC)
);
```
Can anyone help me point out what I can't see or am missing? Thanks (:
|
You can't specify `CANDIDATE KEY` like that in Oracle. The right way is to create a `UNIQUE CONSTRAINT` on your Candidate Key(s).
Like so.
### Here's a working SQLFiddle: <http://sqlfiddle.com/#!4/b392d/1>
```
CREATE TABLE DRIVER(
L# VARCHAR(15) NOT NULL,
DNAME VARCHAR(75) NOT NULL,
STATUS VARCHAR(50) NOT NULL,
NRIC VARCHAR (15) NOT NULL,
PRIMARY KEY(L#),
CONSTRAINT UK_NRIC UNIQUE (NRIC)
);
```
|
ORA-00907 pretty much always indicates a syntax error. It happens when the parser finds a word which is not a keyword in a position where it is expecting either a keyword or a right-hand bracket to close the current statement (hence `Missing Right Parenthesis`). Although sometimes it can just be that we have an unmatched left-hand bracket: a decent text editor can help trace that.
In your case the intruding word is `CANDIDATE`. Syntax errors can be tricky for beginners. I recommend you familiarise yourself with the Oracle documentation: it is online, complete and free, and has quite a lot of examples. [Check it out](http://docs.oracle.com/cd/E11882_01/server.112/e41084/clauses002.htm#SQLRF52180). The [list of reserved words](http://docs.oracle.com/cd/E11882_01/server.112/e41084/ap_keywd001.htm#SQLRF55621) could have helped you, because `CANDIDATE` is not on it, which is a big clue.
|
SQL Error: Missing Right Parenthesis in Line 1 ORA-00907
|
[
"",
"sql",
"oracle",
"sqlplus",
"create-table",
""
] |
I have a requirement for a report and I would like my sql query to repeat each row twice.
Example :
```
**Table 1**
Id Name
1 Ab
2 Cd
3 Ef
```
I want to write a query which outputs the following :
```
1 Ab
1 Ab
2 Cd
2 Cd
3 Ef
3 Ef
```
Is there a way I can do it ?
I cannot think of anything except using `union`
```
Select Id, name from Table1 union select Id, name from Table1
```
|
You can use a `union all`. A `union` will not work, because it will eliminate duplicates. Another way is a `cross join`:
```
select id, name
from table1 t1 cross join
(select 1 as n union all select 2) n;
```
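The cross-join trick generalises: joining against an N-row derived table repeats each row N times. A SQLite check of the two-row case:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE table1 (id INT, name TEXT)")
cur.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, "Ab"), (2, "Cd"), (3, "Ef")])
rows = cur.execute("""
    SELECT id, name
    FROM table1 CROSS JOIN (SELECT 1 AS n UNION ALL SELECT 2) n
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Ab'), (1, 'Ab'), (2, 'Cd'), (2, 'Cd'), (3, 'Ef'), (3, 'Ef')]
```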
|
You can also use `UNION ALL`, put them under `CTE` (Common Table Expression) and `Order By Id`:
```
WITH CTE AS
(
SELECT Id, Name FROM Table_1
UNION ALL
SELECT Id, Name FROM Table_1
)
SELECT Id, Name
FROM CTE
ORDER BY Id;
```
As this will **reorder** them and **stack them** as duplicates
|
how to repeat each row twice
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"sql-server-2012",
""
] |
I am trying to write a query which will give output as MONTH, YEAR.
When I write :
```
select CAST( tbl.MONTH as varchar(2) ) + ', ' + CAST ( tbl.YEAR as varchar(4) ) as [DATE]
from TABLE as tbl
```
I get the output as
```
1,2014
4,2014 (depending upon the input)
```
But now I want to replace 1 with JAN, 4 with APRIL, 12 with DEC, and so on.
So, I tried to write a case statement as :
```
SELECT
case when tbl.MONTH ='1'
then ('JAN' + ', ' + CAST ( tbl.YEAR as varchar(4) )) as [DATE1]
from TABLE as tbl
```
and this gives syntax error.
Can anyone tell me, what I should be doing ?
|
The `case` needs an `end`:
```
SELECT (case when tbl.MONTH = '1' then 'JAN' + ', ' + CAST(tbl.YEAR as varchar(4))
        end) as [DATE1]
from TABLE tbl;
```
If the values are being stored as numbers, then don't put single quotes around the constant value.
|
You are missing the `end`, which is needed to close the `case` statement:
```
case
when tbl.MONTH ='1'
then ('JAN' + ', ' + CAST ( tbl.YEAR as varchar(4) ))
end
as [DATE1]
from TABLE as tbl
```
|
Case Statement in SQL giving error
|
[
"",
"sql",
"sql-server",
""
] |
In the below query I am selecting the product with its current stock. Now I want to return 0 if the current stock is null. Please help me to do this.
```
SELECT p.ProductID,
p.ProductName,
(SELECT ISNULL( CurrentStock,0.00)
FROM Productstatus PS
WHERE PS.ProductID =p.ProductID
AND PS.LocationID = 1
AND PS.StatusDateTime= '2014-08-27'
and PS.productid=p.productid) CurrentStock
FROM Product P
LEFT OUTER JOIN LocationProductMap LMP ON LMP.ProductID=P.ProductID
WHERE LMP.ProductInFlow=1
```
|
```
SELECT p.ProductID,
p.ProductName,
ISNULL((SELECT ISNULL( CurrentStock,0.00)
FROM Productstatus PS
WHERE PS.ProductID =p.ProductID
AND PS.LocationID = 1
AND PS.StatusDateTime= '2014-08-27'
and PS.productid=p.productid),0) CurrentStock
FROM Product P
LEFT OUTER JOIN LocationProductMap LMP ON LMP.ProductID=P.ProductID
WHERE LMP.ProductInFlow=1
```
|
you can use
```
Select case when (CurrentStock) is null then 0 else (CurrentStock) end from table
```
|
To make 0 if the value is null in sql server
|
[
"",
"sql",
"sql-server",
""
] |
Hopefully an easy MS SQL Question! I have four tables:
```
Portfolio:
int portfolioId, string portfolioName
1,'PortFolio 1'
Trip:
int tripId, int portfolioId, int tripType, int maxPeople, string tripName
1, 1, 1, 20, 'Trip A'
2, 1, 1, 21, 'Trip B'
3, 1, 2, 22, 'Trip C'
Person:
int personId, int personStatus, string personName
1, 14, 'Person 1'
2, 15, 'Person 2'
3, 16, 'Person 3'
TripPerson:
int personId, int tripId
1, 1
1, 3
2, 1
2, 2
```
For a given portfolioId, I'm trying to write a clean query that will give me the following columns:
```
tripId, countA, countB, tripType, tripName
```
Where:
CountA: is the total number of Persons on the Trip.
CountB: is the total number of people who are on that trip that have also been on at least one other trip with type of '2'.
The number of rows returned must match the number of trips related to portfolio where portfolioId = 1, ordered by tripName.
Thoughts? I am using MS SQL, have a basic understanding of SQL, and this is driving me bananas.
|
You can write a query as:
```
With CTE1 as
(
-- Total number of persons on a trip:
select count(T.personId) as CountA , tripId
from TripPerson T
group by T.tripId
),
CTE2 as
(
-- Total number of people who are on a trip that have also been on
-- at least one other trip with type of '2'.
select Count (T2.personId)as CountB , CTE1.tripId ,CTE1.CountA
from TripPerson T2
inner join TripPerson T3 on T2.personId = T3.personId and T3.tripId =2
right join CTE1 on CTE1.tripId = T2.tripId
group by CTE1.tripId,CTE1.CountA
)
select CTE2.tripId, CTE2.CountA, CTE2.CountB, Trip.tripType, Trip.tripName
from CTE2
inner join Trip on Trip.tripId = CTE2.tripId
inner join Portfolio P on P.portfolioId = Trip.portfolioId
```
`DEMO`
|
One option is to use sub-queries to get countA and countB, so the query looks something like below:
```
select trip.tripId, trip.tripType, trip.tripName,
(select count(personId) from TripPerson where tripId = trip.tripId) as countA,
(select count(personId) from TripPerson where tripId IN (trip.tripId, 2)) as countB
from Trip trip
where portfolioId = 1 order by trip.tripName
```
|
SQL Server join across multiple tables
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have two sql views in sql-server-2008. Both have columns `ITEM_NO`, `QTY`, and `LOCATION`.
view\_1:
```
-----------------------------------
| ITEM_NO | QTY | LOCATION |
-----------------------------------
| SN-15 | 0 | SEA |
-----------------------------------
| XY-45 | 2 | SFO |
-----------------------------------
| TY-14 | 1 | SEA |
-----------------------------------
```
view\_2:
```
-----------------------------------
| ITEM_NO | QTY | LOCATION |
-----------------------------------
| HN-34 | 1 | SEA |
-----------------------------------
| GJ-13 | 4 | SFO |
-----------------------------------
| RB-02 | 7 | SEA |
-----------------------------------
| AJ-13 | 9 | LAX |
-----------------------------------
```
I want to combine these into a single view, that would look like:
```
-----------------------------------
| ITEM_NO | QTY | LOCATION |
-----------------------------------
| SN-15 | 0 | SEA |
-----------------------------------
| XY-45 | 2 | SFO |
-----------------------------------
| TY-14 | 1 | SEA |
-----------------------------------
| HN-34 | 1 | SEA |
-----------------------------------
| GJ-13 | 4 | SFO |
-----------------------------------
| RB-02 | 7 | SEA |
-----------------------------------
| AJ-13 | 9 | LAX |
-----------------------------------
```
Is there an easy way to do this?
|
You can use `UNION`..
```
SELECT ITEM_NO, QTY, LOCATION from View_1
UNION
SELECT ITEM_NO, QTY, LOCATION from View_2
```
**OR**, if you *don't care about duplicates*, a more efficient way is `UNION ALL`
```
SELECT ITEM_NO, QTY, LOCATION from View_1
UNION ALL
SELECT ITEM_NO, QTY, LOCATION from View_2
```
The `UNION` will perform a `DISTINCT` internally, where `UNION ALL` will not, hence the better performance. You should also note, if you want to `ORDER BY` to sort the complete result set, the `ORDER BY` needs to be placed at the **END** of the query.
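The deduplication difference is easy to demonstrate with an overlapping row (SQLite, invented data, with plain tables standing in for the views):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE view_1 (item_no TEXT, qty INT, location TEXT)")
cur.execute("CREATE TABLE view_2 (item_no TEXT, qty INT, location TEXT)")
cur.executemany("INSERT INTO view_1 VALUES (?, ?, ?)",
                [("SN-15", 0, "SEA"), ("XY-45", 2, "SFO")])
cur.executemany("INSERT INTO view_2 VALUES (?, ?, ?)",
                [("SN-15", 0, "SEA"), ("AJ-13", 9, "LAX")])  # SN-15 duplicated
union_rows = cur.execute(
    "SELECT * FROM view_1 UNION SELECT * FROM view_2").fetchall()
union_all_rows = cur.execute(
    "SELECT * FROM view_1 UNION ALL SELECT * FROM view_2").fetchall()
print(len(union_rows), len(union_all_rows))  # 3 4
```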
|
`UNION` is used to combine the result-set of several `SELECT` statements.
```
SELECT *
FROM View_1
UNION
SELECT *
FROM View_2
```
[Documentation](http://dev.mysql.com/doc/refman/4.1/en/union.html).
|
Combining 2 SQL Tables with same Columns
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table where one of its columns has comma-separated values
(for example: longitude,latitude,longitude1,latitude1, etc.).
Now I need to swap the values to (latitude,longitude,latitude1,longitude1, etc.).
As for a trial purpose:
I have created a table as follows:
```
CREATE TABLE string_table
(
slno NUMBER,
old_string VARCHAR2(50),
new_string VARCHAR2(50)
);
/
INSERT INTO STRING_TABLE (SLNO, OLD_STRING)
VALUES (1, '1,2,3,4,5,6');
INSERT INTO STRING_TABLE (SLNO, OLD_STRING)
VALUES (2, '1,2,3,4,5');
INSERT INTO STRING_TABLE (SLNO, OLD_STRING)
VALUES (3, 'a,b,c,d,e,f');
INSERT INTO STRING_TABLE (SLNO, OLD_STRING)
VALUES (4, 'a,b,c,d,e');
COMMIT;
/
```
Now the table looks like:
```
slno old_string new_string
----- ----------------------
1 1,2,3,4,5,6
2 1,2,3,4,5
3 a,b,c,d,e,f
4 a,b,c,d,e
```
I need to update the swapped values into the new\_string column, and the result should looks like:
```
slno old_string new_string
----- ----------------------
1 1,2,3,4,5,6 2,1,4,3,6,5
2 1,2,3,4,5 2,1,4,3,5
3 a,b,c,d,e,f b,a,d,c,f,e
4 a,b,c,d,e b,a,d,c,e
```
What I have done so far is the following PL/SQL code using a COLLECTION, and it is working fine:
```
SET serveroutput ON
DECLARE
TYPE my_type IS TABLE OF VARCHAR2(50);
my_obj my_type := my_type();
l_temp_var VARCHAR2(50);
l_string VARCHAR2(200);
BEGIN
FOR i IN
( SELECT slno, old_string FROM string_table
)
loop
FOR j IN
(SELECT regexp_substr(i.old_string,'[^,]+',1, LEVEL) val
FROM dual
CONNECT BY regexp_substr(i.old_string, '[^,]+', 1, LEVEL) IS NOT NULL
)
loop
my_obj.EXTEND;
my_obj(my_obj.LAST) := j.val;
IF mod(my_obj.count,2)= 0 THEN
l_temp_var := my_obj(my_obj.LAST -1);
my_obj(my_obj.LAST-1) := my_obj(my_obj.LAST) ;
my_obj(my_obj.LAST):= l_temp_var;
END IF;
END LOOP;
FOR i IN my_obj.FIRST..my_obj.LAST
loop
l_string := l_string||my_obj(i)||',';
END loop;
l_string := substr(l_string , 1, length(l_string)-1);
update string_table
SET new_string = l_string
WHERE slno = i.slno;
l_string := NULL;
my_obj := my_type();
END loop;
COMMIT;
END;
/
```
I think this solution is very lengthy, is there any other good/short/easy method to swap values for the expected result?
Thanks in advance *;)*
|
Using only regexp\_replace,
```
with string_table(slno, old_string)
as (
select 1, '1,2,3,4,5,6' from dual union all
select 2, '1,2,3,4,5' from dual union all
select 3, 'a,b,c,d,e,f' from dual union all
select 4, 'a,b,c,d,e' from dual
)
select
slno,
old_string,
regexp_replace(old_string,'([^,]+),([^,]+)','\2,\1') new_string
from
string_table;
SLNO OLD_STRING NEW_STRING
---------- ----------- ------------------------------------------------------------
1 1,2,3,4,5,6 2,1,4,3,6,5
2 1,2,3,4,5 2,1,4,3,5
3 a,b,c,d,e,f b,a,d,c,f,e
4 a,b,c,d,e b,a,d,c,e
```
Pattern:
```
([^,]+) -- any string without a comma. Enclosed in brackets to form first capture group.
, -- a comma
([^,]+) -- any string without a comma. Enclosed in brackets to form second capture group.
```
So, this pattern matches two strings separated by a comma.
Replace\_String:
```
\2 -- the second capture group from the Pattern
, -- a comma
\1 -- the first capture group from the Pattern
```
So, this replaces the matched pattern with the same string, but interchanging the position.
|
You can use the `connect by` syntax to split your comma-separated lists into their separate elements, and put them back together in a different order, all in plain SQL. The two slightly tricky bits are swapping the pairs over, which can be done by adjusting each position up or down by one depending on whether it's odd or even; and applying this syntax to multiple rows of data at once, which can be done with a trick using a deterministic function:
```
select slno, old_string,
listagg(item, ',') within group (order by new_pos) as new_string
from (
select slno, old_string, regexp_substr(old_string, '[^,]+', 1, level) as item,
case when mod(level, 2) = 1 then level + 1
else level - 1 end as new_pos
from string_table
connect by level <= regexp_count(old_string, '[^,]+')
and prior slno = slno
and prior sys_guid() is not null
)
group by slno, old_string;
SLNO OLD_STRING NEW_STRING
---------- -------------------- --------------------
1 1,2,3,4,5,6 2,1,4,3,6,5
2 1,2,3,4,5 2,1,4,3,5
3 a,b,c,d,e,f b,a,d,c,f,e
4 a,b,c,d,e b,a,d,c,e
```
You can then use that as the `using` clause of a `merge` to update the original table:
```
merge into string_table st
using (
select slno, old_string,
listagg(item, ',') within group (order by new_pos) as new_string
from (
select slno, old_string,
regexp_substr(old_string, '[^,]+', 1, level) as item,
case when mod(level, 2) = 1 then level + 1
else level - 1 end as new_pos
from string_table
connect by level <= regexp_count(old_string, '[^,]+')
and prior slno = slno
and prior sys_guid() is not null
)
group by slno, old_string
) tmp
on (tmp.slno = st.slno)
when matched then
update set st.new_string = tmp.new_string;
select * from string_table order by slno;
SLNO OLD_STRING NEW_STRING
---------- -------------------- --------------------
1 1,2,3,4,5,6 2,1,4,3,6,5
2 1,2,3,4,5 2,1,4,3,5
3 a,b,c,d,e,f b,a,d,c,f,e
4 a,b,c,d,e b,a,d,c,e
```
[SQL Fiddle](http://sqlfiddle.com/#!4/f942c/1) including what the inner query produces.
If you need to use this more generically you could create a function instead:
```
create or replace function swap_pairs (p_old_string varchar2)
return varchar2 as
l_new_string string_table.new_string%type;
begin
select listagg(item, ',') within group (order by new_pos)
into l_new_string
from (
select regexp_substr(p_old_string, '[^,]+', 1, level) as item,
case when mod(level, 2) = 1 then level + 1
else level - 1 end as new_pos
from dual
connect by level <= regexp_count(p_old_string, '[^,]+')
);
return l_new_string;
end;
/
update string_table set new_string = swap_pairs(old_string);
```
[SQL Fiddle](http://sqlfiddle.com/#!4/515fd/1).
Of course it isn't a good idea to store comma-separated values in a column in the first place; each value should be its own column, in a child table if you have multiple pairs. If you're adding a new column I'd really seriously consider a revamp of the data model instead. Sometimes you're stuck with what you have though, and even if you can split the data out, this sort of technique can be useful for doing that one-off exercise too.
|
swapping comma separated values in oracle
|
[
"",
"sql",
"plsql",
"oracle11g",
""
] |
I am getting XML as the result of a web service API call. I need to parse the result and update a database table. My XML (the response text) is below:
```
<?xml version="1.0" encoding="utf-8"?>
<double>1</double>
```
SQL Server 2008 code:
```
declare @xml xml, @rate DECIMAL(10,4)
set @xml=REPLACE(@ResponseText ,'encoding="utf-8"','')
select @rate= @xml.value('(/double)[1]','decimal')
```
I want to get the value of the double element, but it always returns null.
Please help me out.
**EDITED:** I have made the changes as per your suggestion but am still not getting the value:
```
declare @xml XML
DECLARE @responsetext VARCHAR(900)
declare @rate DECIMAL(10,4)
SET @responsetext = '<?xml version="1.0" encoding="utf-8"?>
<double xmlns="http://www.webserviceX.NET/">1</double>'
set @xml=REPLACE(@ResponseText ,'encoding="utf-8"','')
select @rate= @xml.value('(/double)[1]','decimal')
select @rate
```
|
```
DECLARE @X XML = '<?xml version="1.0" encoding="utf-8"?>
<double xmlns="http://www.webserviceX.NET/">1</double>'
SELECT @X.value ('declare namespace x="http://www.webserviceX.NET/"; (/x:double)[1]', 'decimal')
```
Updated to reflect your use of a namespace. The general point is that you don't need to do string manipulation to make this work; the header is fully supported. However, namespaces are important.
|
You need to declare the namespace when querying using the value() Method.
Change the first parameter of value() from
```
'(/double)[1]'
```
to
```
'declare namespace x="http://www.webserviceX.NET/"; (/x:double)[1]'
```
So the full example will look like this
```
declare @xml XML
DECLARE @responsetext VARCHAR(900)
declare @rate DECIMAL(10,4)
SET @responsetext = '<?xml version="1.0" encoding="utf-8"?>
<double xmlns="http://www.webserviceX.NET/">1</double>'
set @xml=REPLACE(@ResponseText ,'encoding="utf-8"','')
select @rate= @xml.value('declare namespace x="http://www.webserviceX.NET/"; (/x:double)[1]','decimal')
select @rate
```
which should return 1.000 (decimal)
|
how to read this xml in sql server
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a `users` table in my database and have three kinds of user roles - manager(admin), expert, and student.
I have tried storing their role as column `role` in the `users` table, but my question is what is the best way to store it?
For example, should I use numbers - 1 for admin, 2 for expert and 3 for student. Or only one character - 'A' for admin , 'E' for expert and 'S' for student? Or full string 'admin' for admin and so on, or what?
**Edit**
I want the most efficient way. Some of you suggested adding another table, but this relationship isn't many-to-many; it's one-to-many, because a user can only have one role.
|
You might want to have a separate junction table, eg `user_roles` that will have columns: `id`, `user_id` and `role_id`. This way each user can have multiple roles in the future, even if right now they only have 1.
And that `role_id` would point to a role in a new `roles` table, which could have all the data associated with the roles.
Or if you really don't want the junction table, then just put the `role_id` directly into your `users` table.
Look up database "normalization". Basically if you have a specific set of data that will be repeating in a column, you might want to put it in its own table.
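A minimal sketch of the two variants described above (all table and column names here are illustrative assumptions, not from the question):

```
-- Junction-table variant: allows multiple roles per user later
CREATE TABLE roles (
    id   INT PRIMARY KEY,
    name VARCHAR(50) NOT NULL    -- 'admin', 'expert', 'student'
);

CREATE TABLE user_roles (
    id      INT PRIMARY KEY,
    user_id INT NOT NULL,
    role_id INT NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users(id),
    FOREIGN KEY (role_id) REFERENCES roles(id)
);

-- Simpler variant: exactly one role per user, role_id directly on users
ALTER TABLE users ADD COLUMN role_id INT;
```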
|
You could create a separate table showing 2 columns: role\_nbr, role with values like:
1, Admin;
2, Expert;
3, Student
Then include role\_nbr in your users table and use the numbers, linking to the role table to look up the role name. That way, if you ever add more roles, it's easy to extend the separate table. Normalization is a good topic to study: it helps you avoid storing duplicate information (like the full role name), while a bare number or letter on its own would be hard to remember the meaning of.
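A minimal sketch of this lookup-table approach (names are illustrative assumptions):

```
CREATE TABLE role (
    role_nbr INT PRIMARY KEY,
    role     VARCHAR(20) NOT NULL
);

INSERT INTO role (role_nbr, role)
VALUES (1, 'Admin'), (2, 'Expert'), (3, 'Student');

-- users carries only the number; join back to get the name:
SELECT u.username, r.role
FROM users u
JOIN role r ON r.role_nbr = u.role_nbr;
```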
|
Best way to store a one-to-many column in database
|
[
"",
"mysql",
"sql",
"database",
"fieldtype",
""
] |
I have a form that works perfectly by itself, but the second I attach it to my navigation form, I start to get prompted for user input, because this line
`[Forms]![frm_addReceiveReportInformation]![cbo_PurchaseOrderID]`
no longer works due to the current form becoming subform in the navigation form which was explained in
[ACCESS 2010 Navigation Form Query Property](https://stackoverflow.com/questions/9405547/access-2010-navigation-form-query-property?answertab=votes#tab-top)
I can't seem to find a way around using the Forms! reference, since I absolutely need to retrieve the ID from one combo box in order to update another combo box.
I have tried multiple ways of using the Forms! reference, but I can't seem to wrap my head around how to retrieve the information I am seeking.
I have a two-way navigation menu (vertical + horizontal tabs). Has anyone encountered this problem in the past, or does anyone have advice to point me down the right path?

|
To access a field inside a form, which is attached to a navigation tab, you should use the following structure:
```
[Forms]![YourNavigationTab]![NavigationSubform].[Form]![YourField]
```
Note: tested in MS Access 2013
|
In order for queries that contain form references to work, the form must be fully loaded. I think the problem you are experiencing is that the query in the Row Source of the Part Code combo is being evaluated before the form is loaded and therefore you are being asked to input the parameter value.
You can work around this by leaving the Row Source property of the Part Code combo empty until it gets focus for the first time, something like:
```
Private Sub cboPartCode_GotFocus()
If Len(cboPartCode.RowSource) = 0 Then
cboPartCode.[RowSource] = "Your Query"
cboPartCode.Requery
End If
End Sub
```
|
Microsoft Access - Navigation form is causing my query to not work
|
[
"",
"sql",
"forms",
"ms-access",
"ms-access-2010",
""
] |
I see a lot of ways to find duplicate records and delete them but I could not find the one I am looking for i.e. to delete them based on certain condition.
The first SQL code below, taken from another post, works very well for locating duplicates. However, I am having a problem with the second part.
```
<cfquery datasource="mydatabase" name="duplist">
SELECT a.*, b.totalCount AS Duplicate
FROM mytable a
INNER JOIN
(
SELECT email, COUNT(*) totalCount
FROM mytable
GROUP BY email
HAVING COUNT(*) > 1
) b ON a.email = b.email
</cfquery>
<cfoutput query="duplist">
<CFQUERY DATASOURCE="mydatabase" name="dlist">
SELECT * FROM mytable
WHERE userid = '#userid#'
AND activedate is null
</CFQUERY>
</cfoutput>
<cfoutput>
Total To Be Deleted: #dlist.recordcount#
</cfoutput>
```
I am trying to use SELECT (before replacing it with DELETE) only on those duplicate user emails that do not have an "activedate", based on the output of the query "duplist" with the duplicated email records. I am only getting one record (in #dlist.recordcount#), which is the last record! Isn't the query supposed to loop through them all?
**EDITED:**
After some feedback and checking: the recordcount wasn't counting correctly because it is outside the loop. The query does work for retrieving the records without a startdate. However, the real problem is in the following scenario:
```
ID USERID EMAIL STARTDATE
1 user1 test@test.com
2 user2 test@test.com 11/01/2014
3 user3 test@test.com
4 user4 test@test.com 11/02/2014
5 user5 me@mydomain.com
6 user6 me@mydomain.com
```
Running the above code will list these 6 records with duplicate emails. The second query will select (delete) those without a startdate, i.e. records 1, 3, 5 and 6. Two problems:
1. Records 1 and 3 will be deleted, but 2 and 4 are still duplicates
2. Records 5 and 6 will both be deleted, leaving me with no user for that email. I need to keep one of them.
How can the code be modified to deal with these scenarios? I want to keep records #4 and #6, because 4 is newer than 2 and 6 is newer than 5:
```
4 user4 test@test.com 11/02/2014
6 user6 me@mydomain.com
```
|
You can use the Rank() function to rank the data and delete the rows which are not ranked 1.
Oracle equivalent of the Rank() query to select the #4 and #6 data:
```
SELECT * FROM
(
SELECT USERNAME, EMAIL, START_DATE,
RANK() OVER (PARTITION BY EMAIL ORDER BY START_DATE DESC NULLS LAST, ROWNUM DESC) AS RANK
FROM TEMP_SOL
)
WHERE RANK = 1
```
Mysql equivalent can be derived using this link [Rank function in MySQL with Order By clause](https://stackoverflow.com/questions/10881990/rank-function-in-mysql-with-order-by-clause)
**Edit : A non-rank solution can be something like:**
First get all the duplicate emails
```
<cfquery name="dupEmail" datasource="XXX">
SELECT EMAIL
FROM TEMP_SOL
GROUP BY EMAIL
HAVING COUNT(*) > 1
</cfquery>
```
Loop through dupEmail and
1. select the username which doesn't have to be deleted.
2. Select/Delete the usernames not equal to the above username having the same emailid.
```
<cfloop query="dupEmail">
<cfquery name="UserToRetain" datasource="XXX">
SELECT USERNAME FROM TEMP_SOL
WHERE EMAIL = '#dupEmail.Email#'
ORDER BY START_DATE DESC NULLS LAST, ROWNUM DESC
LIMIT 1
</cfquery>
<cfquery name="DeleteUsers" datasource="XXX">
SELECT * FROM TEMP_SOL
WHERE USERNAME <> '#UserToRetain.USERNAME#' AND EMAIL='#dupEmail.Email#'
</cfquery>
</cfloop>
```
Note: The code above hasn't been tested, and the queries may not run on MySQL as-is.
|
This is your original query:
```
SELECT a.*, b.totalCount AS Duplicate
FROM mytable a
INNER JOIN
(
SELECT email, COUNT(*) totalCount
FROM mytable
GROUP BY email
HAVING COUNT(*) > 1
) b ON a.email = b.email
```
All you need to do is add:
```
where a.activeDate is null
```
and you have your records. Then once you have seen them, replace your select clause with the word "delete"
**Edit Starts Here**
Since this will still leave duplicates, after you do the above, do something like this:
```
select id, email -- can be changed to delete after
from myTable deleteFrom join myTable keep on deleteFrom.email = keep.email
where deleteFrom.id < keep.id
```
That will keep the newest record.
|
Find duplicate emails in MySql and delete certain ones based on conditions
|
[
"",
"mysql",
"sql",
"coldfusion",
"duplicates",
""
] |
I have the following query:
```
select ad_st_id_state, count(distinct id_visit) as Visits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
group by ad_st_id_state
order by ad_st_id_state
```
and I also have this one:
```
select ad_st_id_state, count(distinct id_visit) as DoneVisits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
join sf_visit_file_time on id_visit = vft_vi_id_visit /* Another join has been added */
group by ad_st_id_state
order by ad_st_id_state
```
As you can see the queries are almost the same one except for an extra `join` statement.
Both queries return me the correct values I need, but I need to join them in a single table, so I do this:
```
select fffuuu.ad_st_id_state, count(distinct id_visit) as Visitas, fffuuu.doneVisits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
join (
select ad_st_id_state, count(distinct id_visit) as doneVisits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
join sf_visit_file_time on id_visit = vft_vi_id_visit
group by ad_st_id_state
order by ad_st_id_state
) as fffuuu on sf_address.ad_st_id_state = fffuuu.ad_st_id_state
group by ad_st_id_state
order by ad_st_id_state
```
Or in other words, I join the first query with the second one as a subquery. The result set is fine and correct, but it takes too long, so I get a timeout in the other system where this query is run. Each query independently runs fast, but joining them is too slow for my needs...
I'd like to know if there is a way to optimize this, I was thinking if there is some join conditional statement or something. I searched for info but I didn't have any luck. I was thinking in something like:
```
select ad_st_id_state, count(distinct id_visit) as Visits, if(@someVariable := true) as DoneVisits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
if (@someVariable == true) then join sf_visit_file_time on id_visit = vft_vi_id_visit
group by ad_st_id_state
order by ad_st_id_state
```
Or something like that. How can I optimize this?
|
Could you use an outer join to sf\_visit\_file\_time and a case statement in your count? Obviously I don't have your schema locally but something like:
```
select ad_st_id_state,
count(distinct id_visit) as Visits,
count(distinct case when vft_vi_id_visit is not null then id_visit end) as DoneVisits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
left join sf_visit_file_time on id_visit = vft_vi_id_visit
group by ad_st_id_state
order by ad_st_id_state
```
|
If both queries are fast and work nicely and you just need the results in one table, you can use a union select <http://dev.mysql.com/doc/refman/5.0/en/union.html>
```
select ad_st_id_state, count(distinct id_visit) as Visits, '' as DoneVisits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
group by ad_st_id_state
UNION
select ad_st_id_state, '' as Visits, count(distinct id_visit) as DoneVisits
from sf_visit
join vr_users on vi_us_id_user = sus_us_id_user
join sf_pdv on vi_pdv_id_pdv = id_pdv
join sf_address on pdv_ad_id_address = id_address
join sf_visit_file_time on id_visit = vft_vi_id_visit /* Another join has been added */
group by ad_st_id_state
order by ad_st_id_state
```
|
MySQL - Conditional Join for a column
|
[
"",
"mysql",
"sql",
"join",
"optimization",
""
] |
How to specify multiple conditions in **SQL** joins? I know `A.key = B.key` is mandatory apart from this, is the below correct for specifying multiple conditions in **SQL**?
```
ON (A.key=B.key and/or cond1 and/or cond2 ...etc)
```
**OR**
```
ON (A.key=B.key where cond1 and/or cond2 ..etc)
```
|
I think your first option is correct.
You can use multiple join conditions like:
```
ON (A.key=B.key AND/OR cond1 AND/OR cond2 ...)
```
For the second one, the WHERE must come after the ON clause (i.e. after all the joins); it then filters the data after the join. Just close the bracket before the WHERE and it will work as well:
```
ON (A.key=B.key) WHERE cond1 AND/OR cond2 ...
```
Both options can be used, depending upon your requirement.
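To make the second form concrete, here is a minimal sketch (the table and column names are illustrative assumptions, not from the question):

```
SELECT *
FROM A
JOIN B ON (A.key = B.key)        -- join condition only
WHERE B.status = 'active'        -- additional filters applied after the join
  AND A.amount > 100;
```

Note that with an OUTER JOIN the two placements are not equivalent: a condition in the WHERE clause also filters out the NULL-extended rows that the outer join produced, while the same condition in ON only restricts which rows match.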
|
You can specify as many conditions as you like in the join like:
```
ON A.Key=B.Key
AND b.value>100
AND (a.value>b.value OR b.value2 IS NULL)
```
|
Multiple conditions in SQL joins
|
[
"",
"mysql",
"sql",
"sql-server",
"oracle",
""
] |
I have a table called `tbl_procie` which has the following structure:
```
CREATE TABLE `tbl_procie` (
`id` int,
`price` varchar(25)
);
```
Data:
```
id price
1 $2.5
2 $5.3
```
I want to be able to `SUM()` the price; however, it is currently a `varchar` column that also contains currency symbols. I should get the answer `7.8`.
|
```
SELECT SUM(REPLACE(price,'$','')) AS cr
FROM TABLENAME
WHERE Valet_Company = "Demo Company"
```
|
`SELECT SUM(price) AS [Price Total] FROM tbl_temp`
Or, if that doesn't remove the $ sign automatically, try
`SELECT SUM(TRIM(LEADING '$' FROM price)) AS [Price Total] FROM tbl_temp`
|
How do I convert a varchar price column containing a currency symbol and then sum it?
|
[
"",
"mysql",
"sql",
"aggregate-functions",
"string-conversion",
""
] |
How to concatenate the row values after joining Table1 and table2.
Table 1:
```
-----
Col1
------
1
2
```
Table 2:
```
-----------------
Col1 Col2
-----------------
1 A
1 B
1 C
2 D
2 E
2 F
```
Desired Result:
```
-----------------
Col1 Col2
-----------------
1 A,B,C
2 D,E,F
```
|
Try this:
```
create table #table1(
col1 int
)
create table #table2(
col1 int,
col2 char(1),
)
insert into #table1
select 1 union all
select 2
insert into #table2
select 1, 'A' union all
select 1, 'B' union all
select 1, 'C' union all
select 2, 'D' union all
select 2, 'E' union all
select 2, 'F'
select
col1,
col2 =
stuff((
select
', ' + t2.col2
from #table2 t2
where
t2.col1 = t1.col1
group by t2.col2
for xml path(''), type).value('.', 'varchar(max)'
), 1, 2, '')
from #table1 t1
drop table #table1
drop table #table2
```
|
Mysql:
```
SELECT group_concat(table2.col2) FROM
table2 JOIN table1 ON table1.col1 = table2.col1
GROUP BY table2.col1
```
|
Concatenate the row values after join
|
[
"",
"sql",
"sql-server",
""
] |
Please consider the table below (table name: Sale):
```
ID Customer Year Amount
1 A 2012 1000
2 B 2012 2000
3 A 2012 1500
4 C 2013 1000
5 B 2013 2000
```
I'm looking for a query which groups the data by `Customer` and shows the sums of `Amount` having `Year` equal to 2012 and 2013 (distinctly), so that it returns:
```
Customer Year2012 Year2013
A 2500 0
B 2000 2000
C 0 1000
```
|
try this:
```
SELECT Customer,ISNULL([2012],0)as [2012],ISNULL([2013],0)as [2013]
FROM (
SELECT
Customer,[Year] as [year], Amount
FROM Mytest
) as s
PIVOT(SUM(AMOUNT)
FOR Year IN ([2012], [2013])) AS PVTTable
```
OUTPUT:
```
Customer 2012 2013
A 2500 0
B 2000 2000
C 0 1000
```
|
```
select Customer,
SUM(case when Year = '2012' then Amount else 0 end) as 'Year2012',
SUM(case when Year = '2013' then Amount else 0 end) as 'Year2013'
from Sale
group by Customer;
```
SQL Fiddle - <http://sqlfiddle.com/#!2/f560b/4>
|
Vertical and Horizontal Group by in sql
|
[
"",
"sql",
""
] |
I have two sets of elements, let's say they are these words:
* set 1: "nuclear", "fission", "dirty" and
* set 2: "device", "explosive"
In my database, I have a text column (`Description`) which contains a sentence or two. I would like to find any records where `Description` contains both an element from set 1 followed by an element from set 2, where the two elements are separated by four words or less. For simplicity, counting (spaces-1) will count words between the two elements.
I'd prefer it if a solution didn't require the installation of anything like CLR functions for regular expression. Rather, if this could be done with a user-defined table function, it would make deployment simpler.
Does this sound possible?
|
It is possible, but I do not think it will perform well with millions of rows.
I have a solution here that handles about 10 000 rows in 2 sec and 100 000 rows in about 20 sec on our server. It also requires the famous [DelimitedSplit8K](http://www.sqlservercentral.com/articles/Tally+Table/72993/) sql table function from SQLServerCentral:
```
DECLARE @set1 VARCHAR(MAX) = 'nuclear, fission, dirty';
DECLARE @set2 VARCHAR(MAX) = 'device, explosive';
WITH GetDistances AS
(
SELECT
DubID = ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ID)
, Distance = dbo.[cf_ValueSetDistance](s.Description, @set1, @set2)
, s.ID
,s.Description
FROM #sentences s
JOIN dbo.cf_DelimitedSplit8K(@set1, ',') s1 ON s.Description LIKE '%' + RTRIM(LTRIM(s1.Item)) + '%'
JOIN dbo.cf_DelimitedSplit8K(@set2, ',') s2 ON s.Description LIKE '%' + RTRIM(LTRIM(s2.Item)) + '%'
) SELECT Distance, ID, Description FROM GetDistances WHERE DubID = 1 AND Distance BETWEEN 1 AND 4;
--10 000 rows: 2sec
--100 000 rows: 20sec
```

**Test data generator**
```
--DROP TABLE #sentences
CREATE TABLE #sentences
(
ID INT IDENTITY(1,1) PRIMARY KEY
, Description VARCHAR(100)
);
GO
--CREATE 10000 random sentences that are 100 chars long
SET NOCOUNT ON;
WHILE((SELECT COUNT(*) FROM #sentences) < 10000)
BEGIN
DECLARE @randomWord VARCHAR(100) = '';
SELECT TOP 100 @randomWord = @randomWord + ' ' + Item FROM dbo.cf_DelimitedSplit8K('nuclear fission dirty device explosive On the other hand, we denounce with righteous indignation and dislike men who are so beguiled and demoralized by the charms of pleasure of the moment, so blinded by desire, that they cannot foresee the pain and trouble that are bound to ensue; and equal blame belongs to those who fail in their duty through weakness of will, which is the same as saying through shrinking from toil and pain. These cases are perfectly simple and easy to distinguish. In a free hour, when our power of choice is untrammelled and when nothing prevents our being able to do what we like best, every pleasure is to be welcomed and every pain avoided. But in certain circumstances and owing to the claims of duty or the obligations of business it will frequently occur that pleasures have to be repudiated and annoyances accepted. The wise man therefore always holds in these matters to this principle of selection: he rejects pleasures to secure other greater pleasures, or else he endures pains to avoid worse pains', ' ') ORDER BY NEWID();
INSERT INTO #sentences
SELECT @randomWord
END
SET NOCOUNT OFF;
```
**Function 1 - cf\_ValueSetDistance**
```
CREATE FUNCTION [dbo].[cf_ValueSetDistance]
(
@value VARCHAR(MAX)
, @compareSet1 VARCHAR(MAX)
, @compareSet2 VARCHAR(MAX)
)
RETURNS INT
AS
BEGIN
SET @value = REPLACE(REPLACE(REPLACE(@value, '.', ''), ',', ''), '?', '');
DECLARE @distance INT;
DECLARE @sentence TABLE( WordIndex INT, Word VARCHAR(MAX) );
DECLARE @set1 TABLE(Word VARCHAR(MAX) );
DECLARE @set2 TABLE(Word VARCHAR(MAX) );
INSERT INTO @sentence
SELECT ItemNumber, RTRIM(LTRIM(Item)) FROM dbo.cf_DelimitedSplit8K(@value, ' ')
INSERT INTO @set1
SELECT RTRIM(LTRIM(Item)) FROM dbo.cf_DelimitedSplit8K(@compareSet1, ',')
IF(EXISTS(SELECT 1 FROM @sentence s JOIN @set1 s1 ON s.Word = s1.Word))
BEGIN
INSERT INTO @set2
SELECT RTRIM(LTRIM(Item)) FROM dbo.cf_DelimitedSplit8K(@compareSet2, ',');
IF(EXISTS(SELECT 1 FROM @sentence s JOIN @set2 s2 ON s.Word = s2.Word))
BEGIN
WITH Set1 AS (
SELECT s.WordIndex, s.Word FROM @sentence s
JOIN @set1 s1 ON s1.Word = s.Word
), Set2 AS
(
SELECT s.WordIndex, s.Word FROM @sentence s
JOIN @set2 s2 ON s2.Word = s.Word
)
SELECT @distance = MIN(ABS(s2.WordIndex - s1.WordIndex)) FROM Set1 s1, Set2 s2
END
END
RETURN @distance;
END
```
**Function 2 - DelimitedSplit8K**
(No need to even try to understand this code, this is an extremely fast function for splitting a string to a table, written by several talented people):
```
CREATE FUNCTION [dbo].[cf_DelimitedSplit8K]
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover NVARCHAR(4000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l;
```
|
I don't know anything about performance, but this could be done with CROSS APPLY and two table variables.
```
--initialize word set data
DECLARE @set1 TABLE (wordFromSet varchar(n))
DECLARE @set2 TABLE (wordFromSet varchar(n))
INSERT INTO @set1 SELECT 'nuclear' UNION SELECT 'fission' UNION SELECT 'dirty'
INSERT INTO @set2 SELECT 'device' UNION SELECT 'explosive'
SELECT *
FROM MyTable m
CROSS APPLY
(
SELECT wordFromSet
,LEN(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description))) - LEN(REPLACE(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description)),' ', '')) AS WordPosition
FROM @set1
WHERE m.Description LIKE '%' + wordFromSet + '%'
) w1
CROSS APPLY
(
SELECT wordFromSet
,LEN(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description))) - LEN(REPLACE(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description)),' ', '')) AS WordPosition
FROM @set2
WHERE m.Description LIKE '%' + wordFromSet + '%'
) w2
WHERE w2.WordPosition - w1.WordPosition <= threshold
```
Essentially it will only return rows from MyTable that have at least a word from both sets, and for these rows it will calculate which word position it holds by calculating the difference in length between the substring that ends at the words position and the same substring with spaces removed.
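The space-counting expression can be seen in isolation with a literal string (a minimal T-SQL illustration):

```
-- Word position of 'device' = number of spaces before its first character
SELECT LEN(SUBSTRING('a dirty device', 1, CHARINDEX('device', 'a dirty device')))
     - LEN(REPLACE(SUBSTRING('a dirty device', 1, CHARINDEX('device', 'a dirty device')), ' ', ''));
-- returns 2: two words ('a' and 'dirty') precede 'device'
```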
|
SQL Server 2012 T-SQL count number of words between elements of two sets
|
[
"",
"sql",
"sql-server",
"text",
"pattern-matching",
""
] |
I have this SQL query so far that finds the percentage based on all kinds of "sex"
```
select (
(select count(*) from hw where DISCHARGE_STATUS = 'B')
/
count(*)
* 100
) as Percentage
from hw
```
Let's say my table is
```
SEX DISCHARGE STATUS
1 A
1 B
2 B
2 B
2 B
```
The percentage of discharge status B would be 80% of the total count (across all sex values).
Now let's say I need to find the percentage the same way, but categorized by sex types 1 and 2:
1 would be 20% of the total count (sex type 1 and discharge type B)
2 would be 60% of the total count (sex type 2 and discharge type B)
I tried adding a GROUP BY SEX to the end of the query, but that did not help. What can I do?
Thanks in advance :)
|
Instead of querying hw twice you can do this in one pass and get the sub counts by using SUM:
```
select
count(*) as count_total,
sum(discharge_status = 'B') as count_b,
sum(sex = 1) as count_sex1,
sum(discharge_status = 'B' AND sex = 1) as count_b_sex1
from hw;
```
(An expression that results in TRUE is 1, one that results in FALSE is 0 in MySQL. Thus you SUM 1 per matching record. In other words: you count.)
Then do whatever math you want on it:
```
select
sum(discharge_status = 'B' AND sex = 1) /
sum(discharge_status = 'B') * 100 as percent_sex1_in_b
from hw;
```
(In this simple example where you only use status B counts, you can move `discharge_status = 'B'` to a WHERE clause of course.)
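That simplification would look something like this (a sketch of the same calculation with the status filter moved into WHERE):

```
select
    sum(sex = 1) / count(*) * 100 as percent_sex1_in_b
from hw
where discharge_status = 'B';
```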
|
You can try this for sex type 1:
```
((select sum(sex) from hw where sex = 1) / (select sum(sex) from hw))
*
((select count(DISCHARGE_STATUS) from hw where DISCHARGE_STATUS = 'B') / (select count(*) from hw))
```
|
SQL Query and Count
|
[
"",
"mysql",
"sql",
"math",
"count",
""
] |
Table `Items` :
```
ItemCode   Name     Stock
111        Book     10
112        Bag      03
113        Pen      08
114        Pencil   13
```
Table `INVOICEITEMS(SoldItems)` :
```
BillNumber   ItemCode   UnitValue(Qty)
1005         111        3
1005         111        2
1005         113        4
1006         111        6
1007         112        5
```
We need to update the `ITEMS` table from `INVOICEITEMS` table.
We need to add to the Stock of each ItemCode in ITEMS table from the sum of the UnitValues of
BillNumber '1005' in INVOICEITEMS.
Query used :
```
UPDATE ITEMS
SET Stock = Stock + (SELECT SUM(T.UnitValue)
                     FROM INVOICEITEMS T
                     WHERE T.BillNumber LIKE '1005'
                       AND T.ItemCode LIKE ItemCode)
WHERE ItemCode IN (SELECT J.ItemCode
                   FROM INVOICEITEMS J
                   WHERE J.BillNumber LIKE '1005')
```
Expected Result After updation :
Table ITEMS
```
ItemCode   Name     Stock
111        Book     15
112        Bag      03
113        Pen      12
114        Pencil   13
```
Result we are getting:
Table ITEMS
```
ItemCode   Name     Stock
111        Book     19
112        Bag      03
113        Pen      17
114        Pencil   13
```
Please help in correcting this query.
|
One way to investigate problems with update statements is to convert them to select statements. For your update statement, I come up with the following query:
```
SELECT Items.*
,(SELECT SUM(T.UnitValue) FROM INVOICEITEMS T WHERE T.BillNumber LIKE '1005' AND T.ItemCode LIKE ItemCode)
FROM Items
WHERE ItemCode IN(SELECT J.ItemCode FROM INVOICEITEMS J WHERE J.BillNumber LIKE '1005')
```
As you can see, the last column is 9 for every row, which isn't surprising because you have not done anything to return different values based on which row you're looking at.
You can then change the select statement to return the data you do want:
```
SELECT Items.ItemCode
,Items.Stock
,SUM(T.UnitValue)
,Items.Stock + SUM(T.UnitValue)
FROM Items
JOIN INVOICEITEMS T ON T.ItemCode = Items.ItemCode
WHERE T.BillNumber = 1005
GROUP BY Items.ItemCode, Items.Stock
```
And finally you can incorporate that back into an UPDATE statement:
```
UPDATE Items SET Stock = NewSum
FROM Items JOIN
(SELECT Items.ItemCode
,Items.Stock
,Items.Stock + SUM(INVOICEITEMS.UnitValue) NewSum
FROM Items
JOIN INVOICEITEMS ON INVOICEITEMS.ItemCode = Items.ItemCode
WHERE INVOICEITEMS.BillNumber = 1005
GROUP BY Items.ItemCode, Items.Stock
) T ON T.ItemCode = Items.ItemCode
```
It might then be prudent to extract hard-coded values from the sub-query and make them part of the higher-level query. One reason this might be a good idea is because then the sub-query is more portable. Then you'd end up with this:
```
UPDATE Items SET Stock = NewSum
FROM Items JOIN
(SELECT Items.ItemCode
,INVOICEITEMS.BillNumber
,Items.Stock
,SUM(INVOICEITEMS.UnitValue) InvoiceQty
,Items.Stock + SUM(INVOICEITEMS.UnitValue) NewSum
FROM Items
JOIN INVOICEITEMS ON INVOICEITEMS.ItemCode = Items.ItemCode
GROUP BY Items.ItemCode, Items.Stock, INVOICEITEMS.BillNumber
) T ON T.ItemCode = Items.ItemCode
WHERE T.BillNumber = 1005
```
|
The `AND T.ItemCode LIKE ItemCode` condition compares `T.ItemCode` with itself, because the unqualified `ItemCode` resolves to the inner table `T` rather than to `ITEMS`.
Qualify `ItemCode` explicitly with the `ITEMS` table and it will work.
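To illustrate the scoping fix, here is a minimal sketch using Python's built-in `sqlite3` with the question's sample data; the correlated subquery qualifies `Items.ItemCode` explicitly, so each row only picks up its own invoice lines:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Items(ItemCode INT, Name TEXT, Stock INT);
INSERT INTO Items VALUES (111,'Book',10),(112,'Bag',3),(113,'Pen',8),(114,'Pencil',13);
CREATE TABLE InvoiceItems(BillNumber INT, ItemCode INT, UnitValue INT);
INSERT INTO InvoiceItems VALUES (1005,111,3),(1005,111,2),(1005,113,4),(1006,111,6),(1007,112,5);
""")

# Qualifying Items.ItemCode correlates the SUM with the row being updated,
# instead of comparing T.ItemCode with itself.
con.execute("""
UPDATE Items
SET Stock = Stock + (SELECT SUM(T.UnitValue)
                     FROM InvoiceItems T
                     WHERE T.BillNumber = 1005
                       AND T.ItemCode = Items.ItemCode)
WHERE ItemCode IN (SELECT ItemCode FROM InvoiceItems WHERE BillNumber = 1005)
""")

stocks = dict(con.execute("SELECT ItemCode, Stock FROM Items"))
print(stocks)  # {111: 15, 112: 3, 113: 12, 114: 13}
```

This reproduces the expected result (111 gains 3+2, 113 gains 4, the rest are untouched).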
|
Update table from another table in SQL is giving an unexpected result
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-update",
""
] |
I want to get all values from the left table (ignoring the common values from the right table with the same id) and all values from the right table that don't exist in left table.
Table 1:
```
----------------------
| id | value |
----------------------
| 1 | 50 |
----------------------
| 2 | 150 |
----------------------
| 4 | 100 |
----------------------
```
Table 2:
```
----------------------
| id | value |
----------------------
| 1 | 300 |
----------------------
| 3 | 150 |
----------------------
| 4 | 250 |
----------------------
```
Expected result:
```
----------------------
| id | value |
----------------------
| 1 | 50 |
----------------------
| 2 | 150 |
----------------------
| 3 | 150 |
----------------------
| 4 | 100 |
----------------------
```
Is there any easy way to do this?
Thank you.
|
You can use `union all` and a `not exists` clause:
```
select id, value
from table1 t1
union all
select id, value
from table2 t2
where not exists (select 1 from table1 t1 where t1.id = t2.id);
```
|
```
select * from
(
select * from tab1
except
select * from tab2
UNION ALL
select id, value
from tab2 t2
where not exists (select 1 from tab1 t1 where t1.id = t2.id))d
order by d.id ;
```
**output**
```
id value
1 50
2 150
3 150
4 100
```
|
SQL Query get all values from left table and uncommon values from right table
|
[
"",
"sql",
"sql-server",
""
] |
I have a table
```
| Location | Unit | ...
+----------|------+----
| A | 1 | ...
| A | 1 | ...
| B | 1 | ...
| A | 2 | ...
| C | 2 | ...
| ... | ... | ...
```
and I want to compute a new table with the 'average' location value for each unit, in the following sense:
```
| Location | Unit | Weight |
+----------|------+--------+
| A | 1 | 0.6667 |
| B | 1 | 0.3333 |
| A | 2 | 0.5 |
| C | 2 | 0.5 |
| ... | ... | ... |
```
Of course it is simple enough to get the totals
```
select unit, location, count(*)
from table1
group by unit, location;
```
and to create the table
```
create table table2 (Location nvarchar(50), Unit int, Weight float);
```
but I'm not sure how to populate it with the average data. (This isn't hard, but somehow I'm stuck... it's been many years since I worked on SQL Server.)
|
You'd use `COUNT OVER` for this:
```
select distinct
location, unit,
cast(count(*) over (partition by unit, location) as decimal) /
cast(count(*) over (partition by unit) as decimal) as weight
from mytable
order by unit, location;
```
### [SQLFiddle](http://sqlfiddle.com/#!6/2b26b/5)
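The same `COUNT ... OVER` technique can be sketched with Python's built-in `sqlite3` (window functions require SQLite 3.25 or newer) on the sample rows from the question:

```python
import sqlite3  # window functions need SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable(Location TEXT, Unit INT);
INSERT INTO mytable VALUES ('A',1),('A',1),('B',1),('A',2),('C',2);
""")

rows = con.execute("""
SELECT DISTINCT Location, Unit,
       1.0 * COUNT(*) OVER (PARTITION BY Unit, Location) /
             COUNT(*) OVER (PARTITION BY Unit) AS Weight
FROM mytable
ORDER BY Unit, Location
""").fetchall()
```

The `1.0 *` forces floating-point division, and `DISTINCT` collapses the repeated (Location, Unit) rows to one weight each.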
|
You want to do a weighted average, with units as the weight. You need both the sum for each location/unit combination and the total for each `location`. To produce your output:
```
select location, unit, (1.0*unit) / sum(unit) over (partition by location)
from table1
group by location, unit;
```
The `1.0*` is just a fast way of converting to a decimal value, in the event that `unit` is actually an integer.
EDIT:
If you just want the counts, then this should work:
```
select location, unit, count(*) as cnt,
(1.0*count(*)) / sum(count(*)) over (partition by unit)
from table1
group by location, unit;
```
|
Adding averages to a table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am attempting to get the highest system number from a set of rows. The system number is preceded with `SYS`, thus `select SYSTEM_ID from TABLE` would yield, `{SYS901,SYS87,SYS3024.....}`
This is the query I'm attempting to use:
`select MAX(REPLACE(SYSTEM_ID,'SYS','')) from TABLE`
The possible results are
`{901,87,3024,20,1}`
It is returning the `901` value where I'm expecting to see the `3024` value. I assume the problem is that the field is a `VARCHAR` not a `NUMBER`. How to address this problem, I do not know.
|
```
select MAX(TO_NUMBER(REPLACE(SYSTEM_ID,'SYS',''))) from TABLE;
```
Use TO\_NUMBER to convert VARCHAR2 to NUMBER; otherwise Oracle compares the strings using their ASCII codes ('9' > '3').
|
Of course it is returning the max. It is just returning the max according to the rules of string comparisons, not numbers.
To get the numeric max, do a conversion:
```
select MAX(TO_NUMBER(REPLACE(SYSTEM_ID, 'SYS', ''))) from TABLE
```
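The string-vs-number effect is easy to reproduce outside Oracle. Here is a sketch with Python's built-in `sqlite3`, where `CAST(... AS INTEGER)` plays the role of Oracle's `TO_NUMBER`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(SYSTEM_ID TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("SYS901",), ("SYS87",), ("SYS3024",), ("SYS20",), ("SYS1",)])

# String comparison: '901' sorts above '3024' because '9' > '3'
string_max = con.execute(
    "SELECT MAX(REPLACE(SYSTEM_ID,'SYS','')) FROM t").fetchone()[0]

# Numeric comparison after an explicit conversion
numeric_max = con.execute(
    "SELECT MAX(CAST(REPLACE(SYSTEM_ID,'SYS','') AS INTEGER)) FROM t").fetchone()[0]

print(string_max, numeric_max)  # 901 3024
```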
|
Oracle SQL MAX function not returning max on VARCHAR to NUMBER conversion
|
[
"",
"sql",
"oracle",
""
] |
I have a select (MS SQL) where I'm joining a parent with up to 3 children. I'd like to select all the children if ANY of the children meets a criteria (Type <> 'Done'). Performance is very important.
I've tried
```
SELECT p.*,c.* FROM Parent p
INNER JOIN Child c ON p.Id=c.ParentId
WHERE p.Id IN (SELECT DISTINCT c2.ParentId FROM Child c2 WHERE c2.ParentId=p.Id AND c2.Type<>'Done')
```
But the inner select often returns 2000+ children so the IN gives bad performance.
I've also tried EXISTS and count:
```
SELECT p.*,c.* FROM Parent p
INNER JOIN Child c ON p.Id=c.ParentId
WHERE EXISTS(SELECT 1 FROM Child c2 WHERE c2.ParentId=p.Id AND c2.Type<>'Done')
SELECT p.*,c.* FROM Parent p
INNER JOIN Child c ON p.Id=c.ParentId
WHERE 0 < (SELECT COUNT(c2.Id) FROM Child c2 WHERE c2.ParentId=p.Id AND c2.Type<>'Done')
```
All these return the correct result but with bad performance.
I think I have all the required indexes.
So does anyone have any alternatives?
|
**New Answer:**
Well, then I would create a temporary table with the queries below, use it instead of your nested subqueries, and profile which combination is the fastest.
Update:
```
DECLARE @tmp TABLE(pid INT);
INSERT INTO @tmp(pid)
SELECT p.Id FROM Parent p
WHERE EXISTS (SELECT 1 FROM Child c WHERE c.ParentId=p.Id AND c.Type<>'Done');
SELECT c.*, p.*
FROM @tmp t INNER JOIN Child c ON t.pid = c.ParentID INNER JOIN Parent p ON t.pid = p.Id
```
You can also declare @tmp the same way you declared Parent and fill in the whole table. This way you could avoid the JOIN with Parent. But without profiling all possible solutions it is not possible to say which one will be the fastest.
**Old Answer:**
I think that you don't need to join with the Child table. That leaves you with:
```
SELECT p.Id FROM Parent p
WHERE EXISTS (SELECT 1 FROM Child c2 WHERE c2.Type<>'Done' AND c2.ParentId = p.Id)
```
You could also try
```
SELECT p.Id FROM Parent p
INNER JOIN Child c ON p.Id=c.ParentId
WHERE c.Type <> 'Done'
GROUP BY p.Id
```
Or
```
SELECT DISTINCT p.Id FROM Parent p
INNER JOIN Child c ON p.Id=c.ParentId
WHERE c.Type <> 'Done'
```
These should all be equivalent.
|
The EXISTS variant should normally be the fastest one.
Assuming you do have all the relevant indices, there is really no way to optimize this query further, other than preparing your data in temp tables. You could make a denormalized table that combines the parent and child tables. This should be a massive performance increase.
This is an especially feasible solution if you never delete or update existing parent records, but only add new ones. Because in this case, you won't suffer from update anomalies.
If you do delete or update existing parent records, this would make the process of keeping your denormalized table up to date much more complex.
|
Select all children if any one meets criteria
|
[
"",
"sql",
"sql-server",
"select",
"parent-child",
""
] |
I'm trying to generate an email address from a full name. The following is what I have, and I'm stuck here:
```
select distinct tbin.Name
,(SELECT SUBSTRING(tbin.Name, 1, CHARINDEX(' ', tbin.Name) - 1) AS [FirstName])
,(select SUBSTRING(tbin.Name, CHARINDEX(' ', tbin.Name) + 1, 8000) AS LastName)
from tblInitialApplicants tbin
```
Name is like 'Natalie Marine Baily'. The name can be of any length and can also have any number of spaces between each part of the name.
Expected result: Natalie.B@gmail.com
'Natalie' - First Name
'B' - First letter of the last name
|
You can get the position of a character with CHARINDEX: look for the first space, and reverse the name to look for the last one. With that you can easily substring the first and last name.
```
DECLARE @Fullname VARCHAR(100) = 'Natalie Marine Baily';
select
*,
    firstname+'.'+lastnameINITIAL+'@gmail.com'
from (
select
@Fullname Fullname,
SUBSTRING(@Fullname,1,CHARINDEX(' ',@Fullname,1)) firstname,
SUBSTRING(SUBSTRING(@Fullname,LEN(@Fullname)-CHARINDEX(' ',REVERSE(@Fullname),1)+2,CHARINDEX(' ',REVERSE(@Fullname),1)),1,1) lastnameINITIAL
) T
```
|
```
DECLARE @name VARCHAR(200) = 'Natalie Marine Baily'
SELECT LEFT(@name,CHARINDEX(' ',@name,1)-1) + '.'+
LEFT(REVERSE(LEFT(REVERSE(@name),CHARINDEX(' ',REVERSE(@name),1)-1)),1) + '@gmail.com'
```
Result:

Just see, whether you can work from here.
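If the transformation can be done in application code instead of T-SQL, it is a one-liner in Python; `str.split()` with no argument collapses any run of spaces, which handles the "any number of spaces" requirement:

```python
def make_email(full_name: str, domain: str = "gmail.com") -> str:
    # split() with no argument collapses runs of whitespace
    parts = full_name.split()
    first, last = parts[0], parts[-1]
    # first name, a dot, the first letter of the last name, then the domain
    return f"{first}.{last[0]}@{domain}"

email = make_email("Natalie  Marine   Baily")
print(email)  # Natalie.B@gmail.com
```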
|
Split and merge string
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to solve the following challenge:
1) If a patient visits the ER within 48 hours, I want to flag that as `1`.
2) If the same patient visits the ER again after 48 hours, I want to flag that as `2`.
3) Each subsequent visit must be flagged as `3`, `4`, `5` etcetera after the first 48 hours.
Here is what my table looks like:
```
PATIENT_ID ADMIT_DATE LOCATION
---------- ---------- --------
33 1/10/2014 ER
33 1/11/2014 ER
33 1/15/2014 ER
33 1/17/2014 ER
45 2/20/2014 OBS
45 2/21/2014 OBS
45 2/25/2014 OBS
45 2/30/2014 OBS
45 2/32/2014 OBS
```
And here is what the desired result should look like:
```
PATIENT_ID ADMIT_DATE LOCATION FLAG
---------- ---------- -------- ----
33 1/10/2014 ER 1
33 1/15/2014 ER 2
33 1/17/2014 ER 3
45 2/20/2014 OBS 1
45 2/25/2014 OBS 2
45 2/30/2014 OBS 3
45 2/32/2014 OBS 4
```
I have started something like this but could not complete it:
```
SELECT PATIENT_ID, ADMIT_DATE, LOCATION,
CASE WHEN MIN(ADMIT_DATE)-MAX(ADMIT_DATE)<48 THEN 1 ELSE 0 AS FLAG
FROM MYTABLE
GROUP BY PATIENT_ID, ADMIT_DATE, LOCATION
```
Can someone please help?
|
You can achieve this easily using the [LAG](http://msdn.microsoft.com/en-us/library/hh231256.aspx), [DATEDIFF](http://msdn.microsoft.com/en-us/library/ms189794.aspx) and [ROW\_NUMBER](http://msdn.microsoft.com/en-us/library/ms186734.aspx) functions. The `LAG` function gets the previous `ADMIT_DATE` value. Then you can calculate the difference in hours using the `DATEDIFF` function. Finally, `ROW_NUMBER` ranks your results.
This is full working example:
```
SET NOCOUNT ON
GO
DECLARE @DataSource TABLE
(
[ATIENT_ID] TINYINT
,[ADMIT_DATE] DATE
,[LOCATION] VARCHAR(3)
)
INSERT INTO @DataSource ([ATIENT_ID], [ADMIT_DATE], [LOCATION])
VALUES (33, '1-10-2014', 'ER')
,(33, '1-11-2014', 'ER')
,(33, '1-15-2014', 'ER')
,(33, '1-17-2014', 'ER')
,(45, '2-15-2014', 'OBS')
,(45, '2-16-2014', 'OBS')
,(45, '2-20-2014', 'OBS')
,(45, '2-25-2014', 'OBS')
,(45, '2-27-2014', 'OBS')
;WITH DataSource ([ATIENT_ID], [ADMIT_DATE], [LOCATION], [DIFF_IN_HOURS]) AS
(
SELECT [ATIENT_ID]
,[ADMIT_DATE]
,[LOCATION]
,DATEDIFF(
HOUR
,LAG([ADMIT_DATE], 1, NULL) OVER (PARTITION BY [ATIENT_ID], [LOCATION] ORDER BY [ADMIT_DATE] ASC)
,[ADMIT_DATE]
)
FROM @DataSource
)
SELECT [ATIENT_ID]
,[ADMIT_DATE]
,[LOCATION]
,ROW_NUMBER() OVER (PARTITION BY [ATIENT_ID], [LOCATION] ORDER BY [ADMIT_DATE] ASC)
FROM DataSource
WHERE [DIFF_IN_HOURS] >= 48
OR [DIFF_IN_HOURS] IS NULL -- these are first records
SET NOCOUNT OFF
GO
```

Note, I have fixed your sample data as it was wrong.
---
This is alternative solution without `LAG` function:
```
;WITH TempDataSource ([ATIENT_ID], [ADMIT_DATE], [LOCATION], [Rank]) AS
(
SELECT [ATIENT_ID]
,[ADMIT_DATE]
,[LOCATION]
,ROW_NUMBER() OVER (PARTITION BY [ATIENT_ID], [LOCATION] ORDER BY [ADMIT_DATE] ASC)
FROM @DataSource
),
DataSource ([ATIENT_ID], [ADMIT_DATE], [LOCATION], [DIFF_IN_HOURS]) AS
(
SELECT DS1.[ATIENT_ID]
,DS1.[ADMIT_DATE]
,DS1.[LOCATION]
,DATEDIFF(HOUR, DS2.[ADMIT_DATE], DS1.[ADMIT_DATE])
FROM TempDataSource DS1
LEFT JOIN TempDataSource DS2
ON DS1.[Rank] - 1 = DS2.[Rank]
AND DS1.[ATIENT_ID] = DS2.[ATIENT_ID]
AND DS1.[LOCATION] = DS2.[LOCATION]
)
SELECT [ATIENT_ID]
,[ADMIT_DATE]
,[LOCATION]
,ROW_NUMBER() OVER (PARTITION BY [ATIENT_ID], [LOCATION] ORDER BY [ADMIT_DATE] ASC)
FROM DataSource
WHERE [DIFF_IN_HOURS] >= 48
OR [DIFF_IN_HOURS] IS NULL -- these are first records
```
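The same `LAG`-based flagging can be sketched with Python's built-in `sqlite3` (window functions require SQLite 3.25 or newer); a `julianday` difference multiplied by 24 stands in for `DATEDIFF(HOUR, ...)`. The dates here are a cleaned-up subset of the sample:

```python
import sqlite3  # window functions need SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE visits(patient_id INT, admit_date TEXT, location TEXT);
INSERT INTO visits VALUES
 (33,'2014-01-10','ER'),(33,'2014-01-11','ER'),
 (33,'2014-01-15','ER'),(33,'2014-01-17','ER');
""")

rows = con.execute("""
WITH diffs AS (
  SELECT patient_id, admit_date, location,
         (julianday(admit_date) -
          julianday(LAG(admit_date) OVER
            (PARTITION BY patient_id, location ORDER BY admit_date))) * 24 AS hours
  FROM visits
)
SELECT patient_id, admit_date,
       ROW_NUMBER() OVER (PARTITION BY patient_id, location ORDER BY admit_date) AS flag
FROM diffs
WHERE hours IS NULL OR hours >= 48
ORDER BY admit_date
""").fetchall()
print(rows)
```

The 2014-01-11 visit is only 24 hours after the first one, so it is filtered out before `ROW_NUMBER` assigns the flags 1, 2, 3.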
|
You can use the DATEDIFF() function available in SQL Server, like:
```
SELECT DATEDIFF(hour,startDate,endDate) AS 'Duration'
```
You can visit <http://msdn.microsoft.com/en-IN/library/ms189794.aspx>
|
How to find the difference between dates within the same column using SQL?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Ok, so I have a specific query I need to write.
I have a table "description", and I need to find whether there is a picture linked to that description. For that, I made a left join with the table "files".
```
SELECT D.id, COUNT(*) as haspicture
FROM `description`as D
LEFT JOIN `files` as F ON D.id=F.id
GROUP BY D.id
```
It works well, and the column haspicture contains what it should: the number of files found.
I just want to set haspicture to 1 if there is at least 1 file attached.
I need it to be 1 because I then have ways of sorting the results with different levels of priority. Of course, I've shortened the real query; in reality I select more than D.id and haspicture.
|
You can use a `case` statement. Also, be careful what you are counting:
```
SELECT D.id,
(CASE WHEN COUNT(f.id) > 0 THEN 1 ELSE 0 END) as haspicture
FROM `description`D LEFT JOIN
`files` F
ON D.id = F.id
GROUP BY D.id;
```
You might find that the following performs better, because there is no larger `group by`:
```
select d.id,
       (exists (select 1 from files f where d.id = f.id)) as haspicture
from description d;
```
An index on `files(id)` is needed for optimal performance.
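The crux is the difference between `COUNT(*)` and `COUNT(f.id)` on a LEFT JOIN: the former counts the unmatched row itself, the latter counts only non-NULL matches. A small sketch with Python's built-in `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE description(id INT);
INSERT INTO description VALUES (1),(2);
CREATE TABLE files(id INT);
INSERT INTO files VALUES (1),(1);   -- description 2 has no file
""")

rows = con.execute("""
SELECT D.id,
       CASE WHEN COUNT(F.id) > 0 THEN 1 ELSE 0 END AS haspicture
FROM description D LEFT JOIN files F ON D.id = F.id
GROUP BY D.id
ORDER BY D.id
""").fetchall()
print(rows)  # [(1, 1), (2, 0)]
```

With `COUNT(*)` instead, description 2 would report 1 even though no file matched.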
|
`SELECT IF(COUNT(F.id) > 0, 1, 0) as haspicture`
|
SQL, left join, group by, count(*) as haspicture?
|
[
"",
"mysql",
"sql",
"count",
"group-by",
"left-join",
""
] |
I have two statements that I want to merge into one output.
Statement One:
```
select name from auxiliary_variable_inquiry
where inquiry_idbr_code = '063'
```
Returns the following list of names:
```
Name
------------
Affiliates
NetBookValue
Parents
Worldbase
```
Statement Two:
```
select name, value from auxiliary_variable_value
where inquiry_idbr_code = '063'
and ru_ref = 20120000008
and period = 200912
```
Returns the following:
```
Name Value
-------------------
Affiliates 112
NetBookValue 225.700
```
I would like to have an output like this:
```
Name Value
-------------------
Affiliates 112
NetBookValue 225.700
Parents 0
Worldbase 0
```
So basically, if the second query only returns 2 names and values, I'd still like to display the complete set of names from the first query, with no values. If all four values were returned by both queries, then all four would be displayed.
Sorry, I must add: I'm using Ingres SQL, so I'm unable to use the ISNULL function.
|
I'd recommend a LEFT OUTER JOIN against the value table. Include your 'extra' conditions from the second query in the JOIN condition, while the first condition stays in the WHERE, like this:
```
select a.name, CASE WHEN b.Value IS NULL THEN 0 ELSE b.Value END AS Value
from
    auxiliary_variable_inquiry a
LEFT JOIN
    auxiliary_variable_value b ON
        a.name = b.name and -- replace this with your real ID-based JOIN
        a.inquiry_idbr_code = b.inquiry_idbr_code AND
        b.ru_ref = 20120000008 AND
        b.period = 200912
where a.inquiry_idbr_code = '063'
```
|
You can do a left join. This ensures that all records from the first table will stay included. Where value is null, no child record was found, and we use coalesce to display 0 in these cases.
```
select i.name, COALESCE(v.Value,0) from auxiliary_variable_inquiry i
left join auxiliary_variable_value v
on v.inquiry_idbr_code = i.inquiry_idbr_code
and v.ru_ref = 20120000008
and v.period = 200912
where i.inquiry_idbr_code = '063'
```
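The key point in both answers is that the per-period filters live in the `ON` clause, so unmatched names survive the left join. That behavior can be sketched with Python's built-in `sqlite3` (the question is about Ingres, but the join semantics are the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE auxiliary_variable_inquiry(name TEXT, inquiry_idbr_code TEXT);
INSERT INTO auxiliary_variable_inquiry VALUES
 ('Affiliates','063'),('NetBookValue','063'),('Parents','063'),('Worldbase','063');
CREATE TABLE auxiliary_variable_value(name TEXT, inquiry_idbr_code TEXT,
                                      ru_ref INT, period INT, value REAL);
INSERT INTO auxiliary_variable_value VALUES
 ('Affiliates','063',20120000008,200912,112),
 ('NetBookValue','063',20120000008,200912,225.7);
""")

rows = con.execute("""
SELECT i.name, COALESCE(v.value, 0)
FROM auxiliary_variable_inquiry i
LEFT JOIN auxiliary_variable_value v
       ON v.name = i.name
      AND v.inquiry_idbr_code = i.inquiry_idbr_code
      AND v.ru_ref = 20120000008
      AND v.period = 200912
WHERE i.inquiry_idbr_code = '063'
ORDER BY i.name
""").fetchall()
```

Moving the `ru_ref`/`period` filters into the WHERE clause would instead discard Parents and Worldbase entirely.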
|
SQL Join / Union
|
[
"",
"sql",
"join",
"ingres",
""
] |
I have a table which looks like this

For each assessment code I would like to create a query which returns the percentage of marks in certain mark ranges... For example
```
FileYear AssessmentCode MarkRange MarkPercentage
2014 11cpssdd 0-10 5.88
2014 11cpssdd 11-20 0
2014 11cpssdd 21-30 0
2014 11cpssdd 31-40 0
2014 11cpssdd 41-50 0
2014 11cpssdd 51-60 0
2014 11cpssdd 61-70 0
2014 11cpssdd 71-80 29.4
2014 11cpssdd 81-90 52.9
2014 11cpssdd 91-100 11.7
```
Not sure how to do this at all and was wondering if anyone was able to help? Thanks in advance for any help.
|
You can do this by calculating the `SUM` of each range and then the percentage of this `SUM` from the whole `SUM`. This is a full working example:
```
SET NOCOUNT ON
GO
DECLARE @DataSource TABLE
(
[FileYear] CHAR(4)
,[AssessmentCode] CHAR(8)
,[StudentID] INT
,[Mark] TINYINT
)
INSERT INTO @DataSource ([FileYear], [AssessmentCode], [StudentID], [Mark])
VALUES ('2014', '11cpssdd', '34323', '75')
,('2014', '11cpssdd', '74666', '38')
,('2014', '11cpssdd', '87664', '34')
,('2014', '11cpssdd', '87576', '66')
,('2014', '11cpssdd', '23455', '87')
,('2014', '11cpssdd', '87654', '75')
,('2014', '11cpssdd', '98776', '75')
,('2014', '11cpssdd', '34543', '55')
,('2014', '11ecoeco', '87687', '89')
,('2014', '11ecoeco', '56466', '77')
,('2014', '11ecoeco', '34544', '45')
,('2014', '11ecoeco', '95554', '23')
,('2014', '11ecoeco', '22322', '56')
,('2014', '11ecoeco', '76557', '66')
SELECT [FileYear]
,[AssessmentCode]
,[MarkPercentage]
,[MarkRange] * 100 /[ALL] AS [MarkRange]
FROM
(
SELECT [FileYear]
,[AssessmentCode]
,SUM(IIF([Mark] BETWEEN 0 AND 10, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 11 AND 20, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 21 AND 30, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 31 AND 40, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 41 AND 50, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 51 AND 60, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 61 AND 70, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 71 AND 80, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 81 AND 90, [Mark], 0))
,SUM(IIF([Mark] BETWEEN 91 AND 100, [Mark], 0))
,SUM([Mark])
FROM @DataSource
GROUP BY [FileYear]
,[AssessmentCode]
) DataSource ([FileYear], [AssessmentCode], [0-10], [11-20], [21-30], [31-40], [41-50], [51-60], [61-70], [71-80], [81-90], [91-100], [ALL])
UNPIVOT
(
[MarkRange] FOR [MarkPercentage] IN ([0-10], [11-20], [21-30], [31-40], [41-50], [51-60], [61-70], [71-80], [81-90], [91-100])
)PVT
ORDER BY [FileYear]
,[AssessmentCode]
,[MarkPercentage]
SET NOCOUNT OFF
GO
```

---
Check this solution. Pay attention that we are filtering for a specific `AssessmentCode`. Also, if you need the result in `decimal` format, just cast the `[MarkRange]` before division.
```
SET NOCOUNT ON
GO
DECLARE @DataSource TABLE
(
[FileYear] CHAR(4)
,[AssessmentCode] CHAR(8)
,[StudentID] INT
,[Mark] TINYINT
)
INSERT INTO @DataSource ([FileYear], [AssessmentCode], [StudentID], [Mark])
VALUES ('2014', '11CPSSDD', '34323', '93')
,('2014', '11CPSSDD', '74666', '93')
,('2014', '11CPSSDD', '87664', '90')
,('2014', '11CPSSDD', '87576', '90')
,('2014', '11CPSSDD', '23455', '89')
,('2014', '11CPSSDD', '87654', '86')
,('2014', '11CPSSDD', '98776', '84')
,('2014', '11CPSSDD', '34543', '84')
,('2014', '11CPSSDD', '87687', '84')
,('2014', '11CPSSDD', '56466', '83')
,('2014', '11CPSSDD', '34544', '82')
,('2014', '11CPSSDD', '95554', '80')
,('2014', '11CPSSDD', '22322', '79')
,('2014', '11CPSSDD', '76557', '78')
,('2014', '11CPSSDD', '76557', '77')
,('2014', '11CPSSDD', '76557', '76')
,('2014', '11CPSSDD', '76557', '70')
SELECT [FileYear]
,[AssessmentCode]
,[MarkPercentage]
,[MarkRange] * 100 /[ALL] AS [MarkRange]
FROM
(
SELECT [FileYear]
,[AssessmentCode]
,SUM(IIF([Mark] BETWEEN 0 AND 10, 1, 0))
,SUM(IIF([Mark] BETWEEN 11 AND 20, 1, 0))
,SUM(IIF([Mark] BETWEEN 21 AND 30, 1, 0))
,SUM(IIF([Mark] BETWEEN 31 AND 40, 1, 0))
,SUM(IIF([Mark] BETWEEN 41 AND 50, 1, 0))
,SUM(IIF([Mark] BETWEEN 51 AND 60, 1, 0))
,SUM(IIF([Mark] BETWEEN 61 AND 70, 1, 0))
,SUM(IIF([Mark] BETWEEN 71 AND 80, 1, 0))
,SUM(IIF([Mark] BETWEEN 81 AND 90, 1, 0))
,SUM(IIF([Mark] BETWEEN 91 AND 100, 1, 0))
,COUNT([Mark])
FROM @DataSource
WHERE [AssessmentCode] = '11CPSSDD'
GROUP BY [FileYear]
,[AssessmentCode]
) DataSource ([FileYear], [AssessmentCode], [0-10], [11-20], [21-30], [31-40], [41-50], [51-60], [61-70], [71-80], [81-90], [91-100], [ALL])
UNPIVOT
(
[MarkRange] FOR [MarkPercentage] IN ([0-10], [11-20], [21-30], [31-40], [41-50], [51-60], [61-70], [71-80], [81-90], [91-100])
)PVT
ORDER BY [FileYear]
,[AssessmentCode]
,[MarkPercentage]
SET NOCOUNT OFF
GO
```
|
This is not the exact answer, but it should help you. I am not sure how you got that percentage, so I have commented out that part; you can tweak it a bit to get the percentage.
```
CREATE TABLE #test
(
FileYear INT,
AssessmentCode VARCHAR(50),
StudentID INT,
Mark INT
)
INSERT INTO #test
VALUES ( 2014,'11cpssdd',34323,75),( 2014,'11cpssdd',74666,38 ),
( 2014,'11cpssdd',87664,34),( 2014,'11cpssdd',87576,66),
( 2014,'11cpssdd',23455,87),( 2014,'11cpssdd',87654,75),
( 2014,'11cpssdd',98776,75),( 2014,'11cpssdd',34543,55),
( 2014,'11ecoeco',87687,89),( 2014,'11ecoeco',56466,77),
( 2014,'11ecoeco',34544,45),( 2014,'11ecoeco',95554,23),
( 2014,'11ecoeco',22322,56),( 2014,'11ecoeco',76557,66)
;WITH cte
AS (SELECT DISTINCT 0 AS fst,
10 AS scd,
fileyear,
AssessmentCode
FROM #test
WHERE AssessmentCode = '11cpssdd'
UNION ALL
SELECT scd + 1 fst,
scd + 10,
fileyear,
AssessmentCode
FROM cte
WHERE scd < 100)
SELECT b.FileYear,
b.AssessmentCode,
CONVERT(VARCHAR(10), fst) + ' - '
+ CONVERT(VARCHAR(10), scd) MarkRange,
count(case when mark IS not null then 1 end) Range_Count
--(count(case when mark IS not null then 1 end)/10.0)*100 percentage
FROM cte b
LEFT JOIN(SELECT CASE
WHEN mark BETWEEN 0 AND 10 THEN '0 - 10'
WHEN mark BETWEEN 11 AND 20 THEN '11 - 20'
WHEN mark BETWEEN 21 AND 30 THEN '21 - 30'
WHEN mark BETWEEN 31 AND 40 THEN '31 - 40'
WHEN mark BETWEEN 41 AND 50 THEN '41 - 50'
WHEN mark BETWEEN 51 AND 60 THEN '51 - 60'
WHEN mark BETWEEN 61 AND 70 THEN '61 - 70'
WHEN mark BETWEEN 71 AND 80 THEN '71 - 80'
WHEN mark BETWEEN 81 AND 90 THEN '81 - 90'
                    WHEN mark BETWEEN 91 AND 100 THEN '91 - 100'
END [range],
FileYear,
mark,
AssessmentCode
FROM #test
WHERE AssessmentCode = '11cpssdd') a
ON CONVERT(VARCHAR(10), fst) + ' - '
+ CONVERT(VARCHAR(10), scd) = [range]
AND a.AssessmentCode = b.AssessmentCode
GROUP BY b.FileYear,
b.AssessmentCode,
CONVERT(VARCHAR(10), b.fst) + ' - '
+ CONVERT(VARCHAR(10), b.scd)
```
|
Percentage of marks in various mark ranges
|
[
"",
"sql",
"t-sql",
"percentage",
""
] |
I have the following table and some sample records in it:
```
id | attr1_id | attr2_id | user_id | rating_id | override_comment
------+----------+----------+-------------------+-----------+------------------
1 | 188 | 201 | user_1@domain.com | 3 |
2 | 193 | 201 | user_2@domain.com | 2 |
3 | 193 | 201 | user_2@domain.com | 1 |
4 | 194 | 201 | user_2@domain.com | 1 |
5 | 194 | 201 | user_1@domain.com | 1 |
6 | 192 | 201 | user_2@domain.com | 1 |
```
The combination of (`attr1_id`, `attr2_id`, `user_id`) is `UNIQUE`, meaning each user can only create one record with a specific pair of attribute ids.
My goal is to count the number of rows where `rating_id = 1`, but count each combination of `attr1_id` and `attr2_id` only once, and only where there doesn't exist any other row (by other users) that has `rating_id > 1` and refers to the same `attr1_id` and `attr2_id`.
Note that the combination of `attr1_id` and `attr2_id` can be switched around, so given these two records:
```
id | attr1_id | attr2_id | user_id | rating_id | override_comment
------+----------+----------+--------------------+-----------+------------------
20 | 5 | 2 | user_1@domain.com | 3 |
------+----------+----------+--------------------+-----------+------------------
21 | 2 | 5 | user_2@domain.com | 1 |
```
no row should be counted, as the rows refer to the same combination of `attr_ids` and one of them has `rating_id > 1`.
However, if these two rows exist:
```
id | attr1_id | attr2_id | user_id | rating_id | override_comment
------+----------+----------+--------------------+-----------+------------------
20 | 5 | 2 | user_1@domain.com | 1 |
------+----------+----------+--------------------+-----------+------------------
21 | 2 | 5 | user_2@domain.com | 1 |
------+----------+----------+--------------------+-----------+------------------
22 | 2 | 5 | user_3@domain.com | 1 |
```
all rows should only be counted as one, because they all share the same combination of `attr1_id` and `attr2_id` and all have `rating_id = 1`.
My approach so far is this, but it results in no rows being selected at all.
```
SELECT *
FROM compatibility c
WHERE rating_id > 1
AND NOT EXISTs
(SELECT *
FROM compatibility c2
WHERE c.rating_id > 1
AND (
(c.attr1_id = c2.attr1_id) AND (c.attr2_id = c2.attr2_id)
OR
(c.attr1_id = c2.attr2_id) AND (c.attr2_id = c2.attr1_id)
)
)
```
How can I achieve this?
|
If I understand correctly, you want pairs of attributes whose ratings are always "1".
This should give you the attributes:
```
select least(attr1_id, attr2_id) as a1, greatest(attr1_id, attr2_id) as a2,
min(rating_id) as minri, max(rating_id) as maxri
from compatibility c
group by least(attr1_id, attr2_id), greatest(attr1_id, attr2_id)
having min(rating_id) = 1 and max(rating_id) = 1;
```
To get the count, just use this as a subquery:
```
select count(*)
from (select least(attr1_id, attr2_id) as a1, greatest(attr1_id, attr2_id) as a2,
min(rating_id) as minri, max(rating_id) as maxri
from compatibility c
group by least(attr1_id, attr2_id), greatest(attr1_id, attr2_id)
having min(rating_id) = 1 and max(rating_id) = 1
) c
```
|
> My goal is to count the number of rows where `rating_id = 1`, but only
> count each combination of `attr1_id` and `attr2_id` only once, and
> only where there doesn't exist any other row (by other users) that
> have `rating_id > 1`.
### Building on your original
Your original query was on the right track to exclude offending rows. You just had `>` instead of `=`, and the count was still missing.
```
SELECT count(*) AS ct
FROM (
SELECT 1
FROM compatibility c
WHERE rating_id = 1
AND NOT EXISTS (
SELECT 1
FROM compatibility c2
WHERE c2.rating_id > 1
AND (c2.attr1_id = c.attr1_id AND c2.attr2_id = c.attr2_id OR
c2.attr1_id = c.attr2_id AND c2.attr2_id = c.attr1_id))
GROUP BY least(attr1_id, attr2_id), greatest(attr1_id, attr2_id)
) sub;
```
### Shorter and faster
```
SELECT count(*) AS ct
FROM (
SELECT FROM compatibility -- empty SELECT list is enough for count(*)
GROUP BY least(attr1_id, attr2_id), greatest(attr1_id, attr2_id)
HAVING max(rating_id) <= 1
) sub;
```
Similar to this [earlier answer with more explanation](https://stackoverflow.com/a/26519157/939860).
`HAVING max(rating_id) <= 1` implements your requirement exactly.
[fiddle](https://dbfiddle.uk/6YBag6ib)
Old [sqlfiddle](http://sqlfiddle.com/#!17/4430f0/1)
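SQLite's scalar two-argument `min(x, y)` and `max(x, y)` behave like `least`/`greatest`, so the canonicalize-then-group trick can be sketched there as well. The sample rows below are hypothetical, built from the question's two scenarios:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE compatibility(attr1_id INT, attr2_id INT, user_id TEXT, rating_id INT);
INSERT INTO compatibility VALUES
 (5, 2, 'user_1@domain.com', 3),   -- pair (2,5) has a rating > 1: excluded
 (2, 5, 'user_2@domain.com', 1),
 (7, 8, 'user_1@domain.com', 1),   -- pair (7,8) only ever rated 1: counted once
 (8, 7, 'user_2@domain.com', 1);
""")

ct = con.execute("""
SELECT COUNT(*) FROM (
  SELECT 1 FROM compatibility
  GROUP BY MIN(attr1_id, attr2_id), MAX(attr1_id, attr2_id)
  HAVING MAX(rating_id) <= 1
)
""").fetchone()[0]
print(ct)  # 1
```

Grouping on the ordered pair makes (5,2) and (2,5) the same group, and `HAVING MAX(rating_id) <= 1` keeps only all-ones groups.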
|
PostgreSQL - select count(*) for rows where a condition holds
|
[
"",
"sql",
"postgresql",
"select",
"count",
""
] |
There is a PhoneNumber table which stores the number as the user has input it (e.g. 555 555 5555, 555-555-5555, 555.555.5555)
We register those numbers with a third-party service, which then consumes one of our web services, but it only provides the number as 10 digits (5555555555).
I need to find a match but filtering out the non-alphanumeric characters in my records.
I added the function that strips out non-alphanumeric characters here: [How to strip all non-alphabetic characters from string in SQL Server?](https://stackoverflow.com/questions/1007697/how-to-strip-all-non-alphabetic-characters-from-string-in-sql-server)
I tried running the following, but I always get 0 records, though I have a 555-555-5555 value for a record in the table:
```
SELECT COUNT(*)
FROM [dbo].[PhoneNumber]
WHERE [dbo].[RemoveNonAlphaCharacters]([Number]) = '5555555555'
```
|
You are removing all *non* alpha characters, including numbers. Try this variant:
```
Create Function [dbo].[RemoveNonAlphaNumCharacters](@Temp VarChar(8000))
Returns VarChar(8000)
AS
Begin
Declare @KeepValues as varchar(50)
Set @KeepValues = '%[^a-zA-Z0-9]%'
While PatIndex(@KeepValues, @Temp) > 0
Set @Temp = Stuff(@Temp, PatIndex(@KeepValues, @Temp), 1, '')
Return @Temp
End;
```
|
```
SELECT COUNT(*)
FROM [dbo].[PhoneNumber]
WHERE REPLACE([Number],'-','') = '5555555555'
```
You can nest REPLACE several times, once for each character you need to strip (dashes, dots, spaces).
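Here is what the nested-REPLACE approach looks like in practice; a sketch with Python's built-in `sqlite3` (the same idea applies in SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PhoneNumber(Number TEXT)")
con.executemany("INSERT INTO PhoneNumber VALUES (?)",
                [("555 555 5555",), ("555-555-5555",), ("555.555.5555",), ("111-222-3333",)])

# Strip the three common separators before comparing
count = con.execute("""
SELECT COUNT(*) FROM PhoneNumber
WHERE REPLACE(REPLACE(REPLACE(Number,'-',''),'.',''),' ','') = '5555555555'
""").fetchone()[0]
print(count)  # 3
```

All three formatting variants of the same number match; the unrelated number does not.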
|
Check for equality on a phone number inputted in a variety of formats
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables, Users and Tasks. I need to get the count of tasks grouped by Users.Name and Tasks.Status. I need some help writing a query to make my result look similar to the third table.
**Users**
```
|- Id -|- Name -|
|- 1 -|- Robert -|
|- 2 -|- Bob -|
|- 3 -|- Vicky -|
|- 4 -|- Don -|
|- 5 -|- Ron -|
|- 6 -|- Harry -|
```
**Tasks**
```
|- Id -|- Status -|- user_id -|
|- 1 -|- Started -|- 1 -|
|- 2 -|- Started -|- 1 -|
|- 3 -|- Started -|- 2 -|
|- 4 -|- Started -|- 2 -|
|- 3 -|- Complete -|- 1 -|
|- 4 -|- Complete -|- 1 -|
|- 5 -|- Complete -|- 2 -|
|- 6 -|- Complete -|- 2 -|
```
**Result**
```
|- Name -|- Status -|- Count -|
|- Robert -|- Complete -|- 2 -|
|- Robert -|- Started -|- 2 -|
|- Bob -|- Complete -|- 2 -|
|- Bob -|- Started -|- 2 -|
|- Vicky -|- Complete -|- 0 -|
|- Vicky -|- Started -|- 0 -|
|- Don -|- Complete -|- 0 -|
|- Don -|- Started -|- 0 -|
|- Ron -|- Complete -|- 0 -|
|- Ron -|- Started -|- 0 -|
|- Harry -|- Complete -|- 0 -|
|- Harry -|- Started -|- 0 -|
```
|
If you want to get zero's in the answer, you need to generate the rows first. Here is a method for getting the results:
```
select u.name, s.status, count(t.user_id) as cnt
from users u cross join
     (select distinct status from tasks) s left join
     tasks t
     on t.user_id = u.id and t.status = s.status
group by u.name, s.status
order by u.name, s.status;
```
Note that this is standard SQL, so it will work in all the databases in your tags.
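The cross-join trick can be verified with Python's built-in `sqlite3`; a reduced, hypothetical data set (one user with tasks, one without) is enough to show the zero rows appearing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users(id INT, name TEXT);
INSERT INTO users VALUES (1,'Robert'),(3,'Vicky');
CREATE TABLE tasks(id INT, status TEXT, user_id INT);
INSERT INTO tasks VALUES (1,'Started',1),(2,'Started',1),(3,'Complete',1);
""")

rows = con.execute("""
SELECT u.name, s.status, COUNT(t.user_id) AS cnt
FROM users u CROSS JOIN
     (SELECT DISTINCT status FROM tasks) s LEFT JOIN
     tasks t ON t.user_id = u.id AND t.status = s.status
GROUP BY u.name, s.status
ORDER BY u.name, s.status
""").fetchall()
print(rows)
```

The cross join generates every (user, status) combination first, so Vicky gets rows with count 0 instead of disappearing from the result.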
|
`Select name, status, count(*) from users inner join tasks on users.id = tasks.user_id group by name, status`

Something along those lines, I am rusty.
|
SQL Query: Group by multiple columns in different tables
|
[
"",
"sql",
""
] |
Consider a table with some rows.
From PHP, I make an SQL query such as `DELETE FROM my_table`.
How can I identify the affected rows before running the SQL query? In other words, I want to have the list of affected rows of an SQL query before it actually runs.
All in all, I want to check some things before running irreversible SQL queries.
Best regards.
|
Suppose you have a DELETE query like:
`DELETE FROM my_table WHERE foo='bar'`
To see how many records will be deleted you could first run:
`SELECT count(*) from my_table where foo='bar'`
The result of that query will be the count of records that meet the same condition.
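This count-first pattern is easy to verify with Python's `sqlite3`, whose cursors report the number of rows a DML statement actually touched. The table and data below are hypothetical; the point is that the `SELECT count(*)` and the `DELETE` with the same `WHERE` clause agree.

```python
import sqlite3

# Hypothetical sample table with two rows matching foo='bar'.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (id INTEGER, foo TEXT);
    INSERT INTO my_table VALUES (1, 'bar'), (2, 'bar'), (3, 'baz');
""")

# Count first with the same WHERE clause the DELETE will use...
to_delete = conn.execute(
    "SELECT count(*) FROM my_table WHERE foo = 'bar'").fetchone()[0]

# ...then delete; Cursor.rowcount reports how many rows were removed.
cur = conn.execute("DELETE FROM my_table WHERE foo = 'bar'")
```

As the answer about transactions points out, this guarantee only holds if no other session modifies the table between the two statements.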
|
You can perform a `SELECT` query with the same `JOIN` and `WHERE` criteria as the `DELETE` would use. This will return the rows that would have been deleted.
If you want perfect safety, you need to use transactions in all the applications that access the tables. The process that's doing the `DELETE` should perform the `SELECT` in the same transaction. Otherwise, another process could add rows to the table between the `SELECT` and `DELETE`, and they might be deleted if they meet the criteria.
If you're just trying to verify the logic of your `DELETE` query with an eyeball examination of the results, this may be overkill.
|
How to check affected rows before an sql query running?
|
[
"",
"mysql",
"sql",
""
] |
I have 2 tables, 'contacts' and 'job\_titles'. Within the 'contacts' table there is a field named 'position'. With the new 'job\_titles' table, I want to update a field in the 'contacts' table called 'job\_title\_id' if the 'position' fields value is similar to the 'title' field in the 'job\_titles' table and I've come across a snag.
Because the position field is a free text field we could have values like:
* Service Delivery Manager of Such a Company
Also, we may have in the 'title' field of the 'job\_titles' table values like:
* Service Delivery Manager
* IT Service Delivery Manager
* Senior Service Delivery Manager
So when I run the following query, I am getting a 'Sub query returns more than 1 row' error.
```
UPDATE contacts
SET job_title_id =
(SELECT id
FROM job_titles
WHERE job_titles.title LIKE CONCAT('%', contacts.position, '%')
);
```
Is there a way I can run a wildcard query like above that will do what I need? Thanks.
|
The following will match the *shortest* title, on the assumption that is more generic:
```
UPDATE contacts c
SET job_title_id = (SELECT id
FROM job_titles jt
WHERE jt.title LIKE CONCAT('%', c.position, '%')
ORDER BY char_length(jt.title)
LIMIT 1
);
```
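The `ORDER BY length / LIMIT 1` trick can be sketched with `sqlite3` (sqlite uses the `||` operator instead of `CONCAT` and `length()` instead of `char_length()`). The sample data below is hypothetical and chosen so the `LIKE` pattern matches; the subquery is forced to return a single row, preferring the shortest (most generic) title.

```python
import sqlite3

# Hypothetical contacts/job_titles data where two titles match the position.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE job_titles (id INTEGER, title TEXT);
    CREATE TABLE contacts (id INTEGER, position TEXT, job_title_id INTEGER);
    INSERT INTO job_titles VALUES
        (10, 'Service Delivery Manager'),
        (11, 'Senior Service Delivery Manager');
    INSERT INTO contacts VALUES (1, 'Service Delivery', NULL);
""")

# ORDER BY length(...) LIMIT 1 makes the scalar subquery pick exactly one
# row -- the shortest matching title -- avoiding the multi-row error.
conn.execute("""
    UPDATE contacts
    SET job_title_id = (SELECT id
                        FROM job_titles jt
                        WHERE jt.title LIKE '%' || contacts.position || '%'
                        ORDER BY length(jt.title)
                        LIMIT 1)
""")
chosen = conn.execute("SELECT job_title_id FROM contacts").fetchone()[0]
```

Here `chosen` is `10`, the id of the shorter 'Service Delivery Manager' title.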
|
A wildcard query may return more than one record, so one solution is to add `LIMIT 1` to the nested query.
Like
```
UPDATE contacts SET job_title_id = (SELECT id FROM job_titles WHERE job_titles.title LIKE CONCAT('%', contacts.position, '%') limit 1);
```
But an issue can occur here: searching for "Service Delivery" may also return the record for "Senior Service Delivery". So if your field's data starts with the job title, you can write the query like this:
```
UPDATE contacts SET job_title_id = (SELECT id FROM job_titles WHERE job_titles.title LIKE CONCAT(contacts.position, '%') limit 1);
```
|
Sub Query Returns More Than 1 Row LIKE & CONCAT with Wildcards
|
[
"",
"mysql",
"sql",
"wildcard",
"concatenation",
"sql-like",
""
] |
I am quite new with SQL and loops especially and need some help with the following problem.
I have a table like this:
```
SpotID EventID MaxTemp
123 1 45
236 1 109
69 1 18
123 2 216
236 2 29
69 2 84
123 3 91
236 3 457
69 3 280
```
I would like to generate a new table with the following output:
```
SpotID Over30 Over70 Over100
123 3 2 1
236 2 2 2
69 2 2 1
```
So what i am after is the count of how many times did the temperature exceed the limits of 30, 70 and 100 per SpotID for different EventIDs.
Is there a way to do this with a loop? My data set is obviously bigger, and I am curious how I could do this efficiently.
Thank you very much.
Mike
|
You just need conditional aggregation:
```
select spotid,
sum(case when maxtemp > 30 then 1 else 0 end) as over_30,
       sum(case when maxtemp > 70 then 1 else 0 end) as over_70,
sum(case when maxtemp > 100 then 1 else 0 end) as over_100
from likethis
group by spotid;
```
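The conditional-aggregation pattern can be run against the question's exact data with `sqlite3` (the table is named `readings` here since the question didn't name it):

```python
import sqlite3

# The question's data, loaded into an in-memory table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (spotid INTEGER, eventid INTEGER, maxtemp INTEGER);
    INSERT INTO readings VALUES
        (123, 1, 45),  (236, 1, 109), (69, 1, 18),
        (123, 2, 216), (236, 2, 29),  (69, 2, 84),
        (123, 3, 91),  (236, 3, 457), (69, 3, 280);
""")

# Each SUM(CASE ...) counts the rows exceeding one threshold per spot.
counts = {}
for spotid, o30, o70, o100 in conn.execute("""
    SELECT spotid,
           SUM(CASE WHEN maxtemp > 30  THEN 1 ELSE 0 END),
           SUM(CASE WHEN maxtemp > 70  THEN 1 ELSE 0 END),
           SUM(CASE WHEN maxtemp > 100 THEN 1 ELSE 0 END)
    FROM readings
    GROUP BY spotid
"""):
    counts[spotid] = (o30, o70, o100)
```

`counts` reproduces the question's expected table: `123 -> (3, 2, 1)`, `236 -> (2, 2, 2)`, `69 -> (2, 2, 1)` — no loop required.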
|
One minor change to the previous answer: my version counts each temperature in only one range; otherwise the lower thresholds also count most of the higher temperatures, not just the temperatures in that range.
```
DECLARE @DATA TABLE (
SpotID INT,
EventID INT,
MaxTemp INT
)
INSERT INTO @DATA VALUES
(123, 1, 45 ),
(236, 1, 109),
(69 , 1, 18 ),
(123, 2, 216),
(236, 2, 29 ),
(69 , 2, 84 ),
(123, 3, 91 ),
(236, 3, 457),
(69 , 3, 280)
SELECT
SpotID,
SUM(CASE WHEN MaxTemp > 30 AND MaxTemp < 70 THEN 1 ELSE 0 END) AS OVER_30,
SUM(CASE WHEN MaxTemp >= 70 AND MaxTemp < 100 THEN 1 ELSE 0 END) AS OVER_70,
SUM(CASE WHEN MaxTemp >= 100 THEN 1 ELSE 0 END) AS OVER_100
FROM
@DATA
GROUP BY
SpotID
```
|
Using a loop in SQL to populate the table (SQL Server)
|
[
"",
"sql",
"sql-server",
"loops",
""
] |
I need to export the data from a table into a .csv file on a schedule. I haven't worked much with SQL Developer before, mainly with SQL Server. What options do I have to automate this export?
|
As a Powershell fan, I'd point your attention to this [tutorial](http://blogs.technet.com/b/heyscriptingguy/archive/2012/12/04/use-oracle-odp-net-and-powershell-to-simplify-data-access.aspx "tutorial").
Though SQL Developer not involved.
|
SQL Developer has a command line interface, SDcli - you'll find it in your SQL Developer BIN directory.
You can build a 'cart' in the SQL Developer GUI, and save it to a file. This cart's contents are database objects, including tables.
[](https://i.stack.imgur.com/WAH5n.png)
You can ask to export the contents of your cart to various file formats, including CSVs for the data.
Here's what the CLI looks like
[](https://i.stack.imgur.com/0Oi8f.png)
[Detailed example here](http://www.thatjeffsmith.com/archive/2016/06/shopping-for-csv-with-sql-developers-cart/).
|
Scheduled export from SQL Developer
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I'm using Oracle's APEX and trying to set the default date of one of my columns to '31-dec-2013', but for the life of me, it's not happening. I've tried many syntax variations and gotten a number of errors such as "not a valid month" and "such a unique or primary key exists", or something to that effect. Please help! Here's my code:
```
Create Table Lease(
LeaseNo number(8) not null unique,
PropertyID number(6) not null,
ClientId varchar2(4) not null,
Leasestartdate date not null,
LeaseEndDate date dEFAULT ('31-12-2013'),
MonthlyRent number(8,2) check (MonthlyRent >1000),
Primary Key (LeaseNo),
Foreign key (propertyId) references property(Propertyid),
Foreign key (clientId) references client(clientid));
```
It threw the "not a valid month" error.
|
You can use `to_date` with an explicit date format model as ThorstenKettner shows, which means you won't be relying on the session's NLS\_DATE\_FORMAT. You can also use a [date literal](http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements003.htm#SQLRF51062), which is always in YYYY-MM-DD format:
```
...
LeaseEndDate date default date '2013-12-31',
...
```
Largely a matter of personal preference between the two though; I happen to prefer this, partly because it's slightly less typing, but also because there is no possibility of ambiguity between DD-MM and MM-DD.
|
Use TO\_DATE to convert a string to date:
```
...
LeaseEndDate date default to_date('31-12-2013','dd-mm-yyyy')
...
```
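As an aside, the same `DEFAULT` idea can be sketched with `sqlite3` — sqlite has no real `DATE` type, so an ISO text literal plays the role of Oracle's `DATE '2013-12-31'`; the table here is a trimmed, hypothetical version of the question's `Lease`:

```python
import sqlite3

# Trimmed lease table: sqlite stores the default date as ISO text.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lease (
        lease_no INTEGER PRIMARY KEY,
        lease_end_date TEXT DEFAULT '2013-12-31'
    )
""")

# Insert without supplying lease_end_date; the default fills it in.
conn.execute("INSERT INTO lease (lease_no) VALUES (1)")
default_end = conn.execute(
    "SELECT lease_end_date FROM lease WHERE lease_no = 1").fetchone()[0]
```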
|
SQL Create Table Default value as specific date
|
[
"",
"sql",
"oracle",
"date",
"default",
"oracle-apex",
""
] |
I am trying to find the student who has the minimum score, which is what the query below returns. However, I was asked to write the query without using MIN(). I've spent several hours but can't find an alternative solution :'(.
```
select s.sname
from student s
where s.score =
(select min(s2.score)
from score s2)
```
|
This is one way, which will work even if two students have same lowest score.
```
SELECT distinct s1.sname
FROM student s1
LEFT JOIN student s2
ON s2.score < s1.score
WHERE s2.score IS NULL
```
The below is the method using limit, which will return lowest score student, but only one of them if multiple of them have same score.
```
select sname
from student
order by score asc
limit 1
```
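The LEFT JOIN anti-join from the first query can be checked with `sqlite3`; the sample data below is made up, with two students tied for the lowest score, and both come back (which the `LIMIT 1` version would not do):

```python
import sqlite3

# Made-up data with a tie for the lowest score.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (sname TEXT, score INTEGER);
    INSERT INTO student VALUES ('Robert', 80), ('Bob', 60), ('Ann', 60);
""")

# A row survives only if no other row has a strictly smaller score,
# i.e. the LEFT JOIN found no match and s2.score is NULL.
lowest = {r[0] for r in conn.execute("""
    SELECT DISTINCT s1.sname
    FROM student s1
    LEFT JOIN student s2 ON s2.score < s1.score
    WHERE s2.score IS NULL
""")}
```

Here `lowest` is `{'Bob', 'Ann'}` — both minimum-score students, no `MIN()` involved.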
|
Here's a possible alternative to the `JOIN` approach:
```
select sname from student where score in
(select score from student order by score asc limit 1)
```
|
find MIN without using min()
|
[
"",
"sql",
"sqlite",
""
] |
I have two databases for two companies running on the same software, thus the DB structures are the same.
Windows Server 2003, MS SQL Sever 2005.
I'm attempting to copy the items list and associated tables from SOURCE to DESTINATION.
There are a total of 11 tables, I'm using the same format for the script to do the job on all 11. Three of them are failing.
The example below is one of the three:
```
--dbo.ITEM_MEASURE 5
SET IDENTITY_INSERT DESTINATION.dbo.ITEM_MEASURE ON
INSERT DESTINATION.dbo.ITEM_MEASURE(
ITEM_MEASURE_ID, MEAS_TYPE, ITEMNO, MEAS_CODE, SELLPRIC, MARKUP, S_PERC_DOC,
MIN_AMOUNT, COSTPERSP, COST, COST_LOW, COST_HIGH, WEIGHT_MEAS, WEIGHT,
SIZE_MEAS, LENGTH, BREADTH, HEIGHT, VOLUME_MEAS, VOLUME, LAST_COST)
SELECT s.ITEM_MEASURE_ID, s.MEAS_TYPE, s.ITEMNO, s.MEAS_CODE, s.SELLPRIC,
s.MARKUP, s.S_PERC_DOC, s.MIN_AMOUNT, s.COSTPERSP, '0', '0', '0' ,
s.WEIGHT_MEAS, s.WEIGHT, s.SIZE_MEAS, s.LENGTH, s.BREADTH, s.HEIGHT,
s.VOLUME_MEAS, s.VOLUME, '0'
FROM SOURCE.dbo.ITEM_MEASURE s
LEFT OUTER JOIN DESTINATION.dbo.ITEM_MEASURE d
ON (d.ITEM_MEASURE_ID = s.ITEM_MEASURE_ID)
WHERE d.ITEM_MEASURE_ID IS NULL
SET IDENTITY_INSERT DESTINATION.dbo.ITEM_MEASURE OFF
/* ERROR
Msg 2627, Level 14, State 1, Line 73
Violation of UNIQUE KEY constraint 'IN_ITEM_MEASURE_UQ_ITEM_TYPE_MEAS'. Cannot insert duplicate key in object 'dbo.ITEM_MEASURE'.
The statement has been terminated.
*/
```
The table PK is ITEM\_MEASURE\_ID, there are no duplicates in either SOURCE or DESTINATION separately and as I understood it the "WHERE d.ITEM\_MEASURE\_ID IS NULL" statement prevented it from trying to copy data into DESTINATION which already exists.
Why am I getting this error?
UPDATE - Can't post images, but here is a look at the constraints and indexes.:
<https://photos-6.dropbox.com/t/1/AAD2EzrJTZFy_BMqcL5i2dWmZn1bAp5C7Y6LAHwJZ1btYQ/12/1501690/png/1024x768/3/1415138400/0/2/constraint.png/vvHTaOuDXOO72MN7IYeDnbLzAjQ65deom5zF9GV3jgw>
UPDATE - Properties on IN\_ITEM\_MEASURE\_UQ\_ITEM\_TYPE\_MEASURE: <https://photos-3.dropbox.com/t/1/AAC8eurM2o8SfHfvLNOsvwt8h_2P_qGpvRBmhovIp3cJzg/12/1501690/png/1024x768/3/1415142000/0/2/properties.PNG/Lf4Q_hE1QTsHgEI1BAxR9WoyL2R71MPFxDZJ5R9kXN0>
|
It seems most likely that the affected tables (and maybe others) have one or more `UNIQUE` constraints (or unique indices) in addition to their PKs. For example, the name of the violated constraint suggests that it might be a constraint such as
```
UNIQUE(ITEMNO, MEAS_TYPE)
```
... or a unique index on such a collection of columns. There is no particular reason to suppose that in two unrelated databases, equal (ITEMNO, MEAS\_TYPE) pairs (or any other non-PK data) would be associated with the same PKs, therefore your strategy to avoid PK violations is not certain to avoid violations of a constraint such as this.
For that matter, you haven't presented any reason to believe that PKs in your source and destination tables are correlated in any way. Although it may make the table copy proceed without error, I see no reason to believe that the query you present is actually doing the right thing.
*Edited to add*:
In fact, it looks like `ITEM_MEASURE_ID` is a surrogate key (i.e. one invented by the application or DBMS, independent of any of the data). It is extremely unlikely that matching source data to independently recorded destination data by such a key will give you meaningful results (not in other tables, either). You should use a natural key instead, such as `(ITEMNO, MEAS_TYPE)` if that indeed suits. Where there are `UNIQUE` constraints or unique indices on your tables they may serve as a clue to the natural keys. For this table in isolation, that might look like this instead:
```
-- NOTE: NOT inserting values for the IDENTITY column
INSERT DESTINATION.dbo.ITEM_MEASURE(
MEAS_TYPE, ITEMNO, MEAS_CODE, SELLPRIC, MARKUP, S_PERC_DOC,
MIN_AMOUNT, COSTPERSP, COST, COST_LOW, COST_HIGH, WEIGHT_MEAS, WEIGHT,
SIZE_MEAS, LENGTH, BREADTH, HEIGHT, VOLUME_MEAS, VOLUME, LAST_COST)
SELECT s.MEAS_TYPE, s.ITEMNO, s.MEAS_CODE, s.SELLPRIC,
s.MARKUP, s.S_PERC_DOC, s.MIN_AMOUNT, s.COSTPERSP, '0', '0', '0' ,
s.WEIGHT_MEAS, s.WEIGHT, s.SIZE_MEAS, s.LENGTH, s.BREADTH, s.HEIGHT,
s.VOLUME_MEAS, s.VOLUME, '0'
FROM SOURCE.dbo.ITEM_MEASURE s
LEFT OUTER JOIN DESTINATION.dbo.ITEM_MEASURE d
ON (d.ITEMNO = s.ITEMNO) AND (d.MEAS_TYPE = s.MEAS_TYPE)
WHERE d.ITEMNO IS NULL
```
That gets a lot messier if you need to deal with related source tables, because PKs in the destination tables differ from PKs in the source tables, but it can be done.
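A minimal sketch of the natural-key anti-join with `sqlite3` (table and column names here are hypothetical, reduced to just the natural key and one payload column): `dst` already holds one `(itemno, meas_type)` pair, so only the missing pair is copied and the `UNIQUE` constraint is never violated.

```python
import sqlite3

# Hypothetical source/destination with a UNIQUE natural key on the destination.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (itemno INTEGER, meas_type TEXT, price REAL);
    CREATE TABLE dst (id INTEGER PRIMARY KEY,
                      itemno INTEGER, meas_type TEXT, price REAL,
                      UNIQUE (itemno, meas_type));
    INSERT INTO src VALUES (100, 'EA', 1.5), (200, 'BOX', 9.0);
    INSERT INTO dst (itemno, meas_type, price) VALUES (100, 'EA', 1.5);
""")

# Anti-join on the natural key, not the surrogate id: only rows whose
# (itemno, meas_type) is absent from dst are inserted.
conn.execute("""
    INSERT INTO dst (itemno, meas_type, price)
    SELECT s.itemno, s.meas_type, s.price
    FROM src s
    LEFT JOIN dst d ON d.itemno = s.itemno AND d.meas_type = s.meas_type
    WHERE d.itemno IS NULL
""")
n = conn.execute("SELECT count(*) FROM dst").fetchone()[0]
```

After the insert, `dst` holds two rows: the pre-existing `(100, 'EA')` and the newly copied `(200, 'BOX')`.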
|
First thing I'd try is making a select statement to see what the conflict is.
```
SELECT * FROM DESTINATION.dbo.ITEM_MEASURE
WHERE ITEM_MEASURE_ID IN (SELECT s.ITEM_MEASURE_ID FROM SOURCE.dbo.ITEM_MEASURE s
LEFT OUTER JOIN DESTINATION.dbo.ITEM_MEASURE d ON (d.ITEM_MEASURE_ID = s.ITEM_MEASURE_ID)
WHERE d.ITEM_MEASURE_ID IS NULL)
```
This should show you what is conflicting. Other than that, is it possible that DESTINATION.dbo.ITEM\_MEASURE has a duplicate?
|
LEFT OUTER JOIN causing violation of unique key constraint
|
[
"",
"sql",
"sql-server",
"outer-join",
"unique-key",
"surrogate-key",
""
] |
I have one table called `route_timetables` which contains timetables for bus routes.
Each row contains one timetable for one direction(i.e north, south, east, and west) and for one type of day(i.e, weekday, Saturday and Sunday).
I need to get the most current timetables for each route. So where route id=X and the route publish\_date has passed, I need to get the most current timetables where the direction and type are distinct. It will likely return 6 rows, one for say West, on weekdays, Saturdays, and Sundays, and one for East on weekdays, Saturdays, and Sundays.
The table fields looks like this
```
id route_id timetable publish_date direction type
```
I have tried several different approaches. I thought this code was working but it was querying an old timetable that also was for weekdays, traveling west.
```
SELECT timetable, direction, type, MAX(publish_date)
FROM route_timetables
WHERE publish_date<=curDate() and route_id=2
GROUP BY direction, type
```
|
I found an answer here [How to group by 2 fields and order by date at the same time?](https://stackoverflow.com/questions/8082379/how-to-group-by-2-fields-and-order-by-date-at-the-same-time) and re-purposed it.
```
SELECT
rt.*
FROM
route_timetables rt
JOIN
( SELECT
direction
, type
, MAX(publish_date) AS latest_date_registered
FROM
route_timetables
WHERE publish_date<=curDate()
GROUP BY
direction
, type
) AS grp
ON grp.direction = rt.direction
AND grp.type = rt.type
AND grp.latest_date_registered = rt.publish_date
Where route_id=2
```
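The groupwise-max join above can be sketched with `sqlite3` (`curDate()` becomes `date('now')`); the data below is hypothetical, with an old and a new weekday timetable, and only the newer one survives:

```python
import sqlite3

# Hypothetical timetables: two weekday/west rows (old and new) plus a
# saturday/west row, all for route 2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE route_timetables (id INTEGER, route_id INTEGER,
        timetable TEXT, publish_date TEXT, direction TEXT, type TEXT);
    INSERT INTO route_timetables VALUES
        (1, 2, 'old west weekday', '2013-01-01', 'west', 'weekday'),
        (2, 2, 'new west weekday', '2014-01-01', 'west', 'weekday'),
        (3, 2, 'west saturday',    '2014-01-01', 'west', 'saturday');
""")

# The derived table finds the latest publish_date per (direction, type);
# joining back on all three columns keeps only those full rows.
timetables = {r[0] for r in conn.execute("""
    SELECT rt.timetable
    FROM route_timetables rt
    JOIN (SELECT direction, type, MAX(publish_date) AS latest
          FROM route_timetables
          WHERE publish_date <= date('now')
          GROUP BY direction, type) grp
      ON grp.direction = rt.direction
     AND grp.type = rt.type
     AND grp.latest = rt.publish_date
    WHERE rt.route_id = 2
""")}
```

The stale 'old west weekday' row is filtered out, which is exactly the bug the original `GROUP BY`-only query had.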
|
You could create a separate table which contains an id (PK), the timetable and direction. Have this new table's id as a FK in your current table and you could call DISTINCT() on this FK.
I suggest this as it sounds like you are having repeating values of timetable and/or direction.
|
Select rows from table where two colmns form a distinct set and order by third field
|
[
"",
"mysql",
"sql",
"select",
"distinct",
""
] |
I'm having some difficulty with a query that I'm writing that is meant to display the "makers" that produce laptops with 3 or more DIFFERENT speeds.
```
SELECT DISTINCT Product.maker, Count(Laptop.speed) AS [3+ Different Speeds]
FROM Laptop INNER JOIN Product ON Laptop.model = Product.model
WHERE type = "laptop"
GROUP BY Product.maker
HAVING Count(*) >= 3;
```
This gives me the 2 correct "makers" that produce 3 or more laptop models, however, one "maker", A, produces two laptop models with the SAME speed, so that "maker" needs to be eliminated from the resulting table, which is below.
```
maker 3+ Different Speeds
A 3
E 3
```
Here's the Laptop table where the model and speed data is stored:
```
model speed
2001 2.00 E
2002 1.73 E
2003 1.80 E
2004 2.00 A
2005 2.16 A
2006 2.00 A
```
E produces the top 3, and A produces the bottom 3. I'm fairly certain that I need to make my Count function DISTINCT, however, I know that Access does not support DISTINCT Count. Any assistance/suggestions would be greatly appreciated!
|
The below query should solve your problem.
```
SELECT
maker,
COUNT(speed)
FROM(
SELECT
p.maker,
l.speed
FROM
Laptop l
INNER JOIN Product p
ON l.model = p.model
WHERE
type = "laptop"
    GROUP BY p.maker, l.speed
) foo
GROUP BY maker
HAVING COUNT(*) >= 3
```
First you aggregate all speeds for the same maker, so the inner query would produce:
```
maker | speed | count
-------+-------+-------
E | 1.73 | 1
E | 1.80 | 1
A | 2.16 | 1
E | 2.00 | 1
A | 2.00 | 2
```
Now you have distinct rows for each pair (maker, speed) so that you can simply run a `COUNT()` over speed.
Result
```
maker | count
-------+-------
E | 3
A | 2
```
Now eliminate A with `HAVING` clause.
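The same two-step aggregation can be checked with `sqlite3` using the question's Laptop data (the maker is folded into one table here for brevity, skipping the Product join): maker A collapses to two distinct speeds and is dropped by `HAVING`.

```python
import sqlite3

# The question's Laptop data, with maker folded into the same table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE laptop (model INTEGER, speed REAL, maker TEXT);
    INSERT INTO laptop VALUES
        (2001, 2.00, 'E'), (2002, 1.73, 'E'), (2003, 1.80, 'E'),
        (2004, 2.00, 'A'), (2005, 2.16, 'A'), (2006, 2.00, 'A');
""")

# Inner GROUP BY deduplicates (maker, speed); outer COUNT then counts
# distinct speeds per maker -- the workaround for a missing DISTINCT count.
rows = conn.execute("""
    SELECT maker, COUNT(speed)
    FROM (SELECT maker, speed
          FROM laptop
          GROUP BY maker, speed) foo
    GROUP BY maker
    HAVING COUNT(*) >= 3
""").fetchall()
```

Only `('E', 3)` survives; A's duplicate 2.00 speed leaves it with two distinct speeds, below the threshold.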
|
Your query will work fine if you take out the distinct keyword. If you only group by the maker, then that query will bring you back only one record per maker, the distinct is actually redundant.
|
Writing a Query with Distinct Count in Access
|
[
"",
"sql",
"ms-access",
""
] |
I have two tables: `data` and `structure` that I am attempting to query to return a recordset with all columns from `data` and a single columns from `structure`. I cannot seem to get the joins figured out. This will be performed in Access 2010, and all fields are strings.
`data` contains, among other things, three columns: `Country`, `State`, and `City`. `structure` also contains these three columns, as well as a storeID. There is a many to 1 relationship if one was to concatenate Country, State, City between `data` and `structure`. I need the query to select all fields from `data` and a new column for `storeID`. Ideally the `storeID` would be in `data` already, but that is unfortunately out of my control.
`data` looks like
```
Country State City Value
US FL MIA 1.0
US FL MIA 2.1
CA BC VAN 0.8
MX CH JUA 1.2
```
`structure` looks like
```
Country State City storeID
US FL MIA 4522
CA BC VAN 3866
MX CH JUA 8985
```
would like the query to return
```
Country State City Value storeID
US FL MIA 1.0 4522
US FL MIA 2.1 4522
CA BC VAN 0.8 3866
MX CH JUA 1.2 8985
```
|
You can use multiple conditions in a JOIN clause: separate them with boolean operators, just as you would in a WHERE clause (AND, OR, etc.). Your goal is to have all conditions evaluate to TRUE or FALSE, but within that range you can do quite a lot.
In your specific case, it's fairly simple - use the Country, State, and City columns from both tables to make the connection. I've set up some sample data in the query below:
```
DECLARE @Data TABLE (Country VARCHAR(2), State VARCHAR(2), City VARCHAR(3), Value DECIMAL(2,1))
DECLARE @Structure TABLE (Country VARCHAR(2), State VARCHAR(2), City VARCHAR(3), StoreID INT)
INSERT INTO @Data (Country, State, City, Value) VALUES ('US', 'FL', 'MIA', 1.0)
INSERT INTO @Data (Country, State, City, Value) VALUES ('US', 'FL', 'MIA', 2.1)
INSERT INTO @Data (Country, State, City, Value) VALUES ('CA', 'BC', 'VAN', 0.8)
INSERT INTO @Data (Country, State, City, Value) VALUES ('MX', 'CH', 'JUA', 1.2)
INSERT INTO @Structure (Country, State, City, StoreID) VALUES ('US', 'FL', 'MIA', 4522)
INSERT INTO @Structure (Country, State, City, StoreID) VALUES ('CA', 'BC', 'VAN', 3866)
INSERT INTO @Structure (Country, State, City, StoreID) VALUES ('MX', 'CH', 'JUA', 8985)
SELECT d.Country, d.State, d.City, d.Value, s.StoreID
FROM
@Data d
LEFT JOIN
@Structure s ON
d.Country = s.Country AND
d.State = s.State AND
d.City = s.City
```
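The same three-column join runs unchanged in `sqlite3`, using the question's data:

```python
import sqlite3

# The question's data and structure tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (Country TEXT, State TEXT, City TEXT, Value REAL);
    CREATE TABLE structure (Country TEXT, State TEXT, City TEXT,
                            storeID INTEGER);
    INSERT INTO data VALUES ('US','FL','MIA',1.0), ('US','FL','MIA',2.1),
                            ('CA','BC','VAN',0.8), ('MX','CH','JUA',1.2);
    INSERT INTO structure VALUES ('US','FL','MIA',4522),
                                 ('CA','BC','VAN',3866),
                                 ('MX','CH','JUA',8985);
""")

# All three key columns joined with AND, exactly as in the answer.
rows = conn.execute("""
    SELECT d.Country, d.State, d.City, d.Value, s.storeID
    FROM data d
    LEFT JOIN structure s
           ON d.Country = s.Country
          AND d.State = s.State
          AND d.City = s.City
""").fetchall()
```

All four `data` rows come back, each carrying the matching `storeID`; the two MIA rows both get 4522, matching the question's desired output.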
|
```
SELECT data.*, structure.storeID
FROM data
LEFT OUTER JOIN structure
  ON data.Country = structure.Country
 AND data.State = structure.State
 AND data.City = structure.City
```
|
Multiple column join
|
[
"",
"sql",
"join",
"ms-access-2010",
""
] |
I've got two tables:
```
Researcher (ResId, Fname, Gname)
Book (bookID, ResID*, Descr)
```
I need to use an `IN` statement (one of the conditions of my assignment) to show a list of researchers that have more than 5 books.
I've got the following SQL query that seems to be logic but it gives an error. The subquery on its own works just fine:
```
select ResId, Fname
from Researcher
where ResId in
(select Book.ResId, count(*)
from Book
group by Book.ResId
having count(*) > 5
)
```
|
```
where ResId in
(select Book.ResId, count(*)
```
When you use `IN (Select...`, you can only select one column in the sub query.
Change the above to:
```
where ResId in
(select Book.ResId
```
only.
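The corrected single-column `IN` subquery can be demonstrated with `sqlite3`; the data below is made up, and the threshold is lowered to 1 book so the sample stays small:

```python
import sqlite3

# Made-up researcher/book data; Ada has two books, Alan has one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE researcher (resid INTEGER, fname TEXT);
    CREATE TABLE book (bookid INTEGER, resid INTEGER, descr TEXT);
    INSERT INTO researcher VALUES (1, 'Ada'), (2, 'Alan');
    INSERT INTO book VALUES (10, 1, 'a'), (11, 1, 'b'), (12, 2, 'c');
""")

# The subquery now selects ONLY resid; the count stays in HAVING.
names = [r[0] for r in conn.execute("""
    SELECT fname FROM researcher
    WHERE resid IN (SELECT resid FROM book
                    GROUP BY resid
                    HAVING count(*) > 1)
""")]
```

With the count moved into `HAVING` and out of the select list, the query runs and returns only Ada.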
|
Does that work as expected?
```
Select ResId, Fname
From Researcher r
Inner Join Book b on b.ResId = r.ResId
Group By r.ResId, r.FName
Having count(*) > 5
```
|
Group by in nested subquery
|
[
"",
"sql",
""
] |
I have tried many combinations. I want to convert my date, which was imported from Excel and is stored as varchar in the format `15-Oct-14`, to the SQL Server datetime format `2014-10-23 00:00:00.000`. I have tried every combination from <http://www.sql-server-helper.com/tips/date-formats.aspx> but nothing is working for me.
|
Check the below script and hope this suits your requirement:
```
Set dateformat ymd
Select convert(datetime,'15-Oct-14',105)
```
|
Try this (solution with guard):
```
set dateformat YMD;
select
case
when isdate(date_column) = 1 then convert(date, date_column, 105)
end [converted_date]
```
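If the conversion can happen outside the database — e.g. while cleaning the Excel export before loading — the same parse is one line of Python (`%b` assumes English month abbreviations, i.e. the C/English locale):

```python
from datetime import datetime

# Parse the dd-Mon-yy varchar, then render it in an unambiguous ISO form.
dt = datetime.strptime("15-Oct-14", "%d-%b-%y")
formatted = dt.strftime("%Y-%m-%d %H:%M:%S")
```

`formatted` is `"2014-10-15 00:00:00"`, which every SQL Server date style accepts without `SET DATEFORMAT` games.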
|
How to convert 15-Oct-14 varchar to datetime format 2014-10-23 00:00:00.000
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I want to find the meter reading for a given transaction day. In some cases there won't be any meter reading, and in that case I would like to see the meter reading for a previous day.
Sample data set follows. I am using SQL Server 2008
```
declare @meter table (UnitID int, reading_Date date,reading int)
declare @Transactions table (Transactions_ID int,UnitID int,Transactions_date date)
insert into @meter (UnitID,reading_Date,reading ) values
(1,'1/1/2014',1000),
(1,'2/1/2014',1010),
(1,'3/1/2014',1020),
(2,'1/1/2014',1001),
(3,'1/1/2014',1002);
insert into @Transactions(Transactions_ID,UnitID,Transactions_date) values
(1,1,'1/1/2014'),
(2,1,'2/1/2014'),
(3,1,'3/1/2014'),
(4,1,'4/1/2014'),
(5,2,'1/1/2014'),
(6,2,'3/1/2014'),
(7,3,'4/1/2014');
select * from @meter;
select * from @Transactions;
```
I expect to get following output
```
Transactions
Transactions_ID UnitID Transactions_date reading
1 1 1/1/2014 1000
2 1 2/1/2014 1010
3 1 3/1/2014 1020
4 1 4/1/2014 1020
5 2 1/1/2014 1001
6 2 3/1/2014 1001
7 3 4/1/2014 1002
```
|
The SQL query to get your desired output is as follows:
```
SELECT Transactions_ID, T.UnitID, Transactions_date
, (CASE WHEN ISNULL(M.reading,'') = '' THEN
(
SELECT MAX(Reading) FROM @meter AS A
JOIN @Transactions AS B ON A.UnitID=B.UnitID AND A.UnitID=T.UnitID
)
ELSE M.reading END) AS Reading
FROM @meter AS M
RIGHT OUTER JOIN @Transactions AS T ON T.UnitID=M.UnitID
AND T.Transactions_date=M.reading_Date
```
|
I can think of two ways to approach this - neither of them are ideal.
The first (and slightly better) way would be to create a SQL Function that took the Transactions\_date as a parameter and returned the reading for Max(Reading\_date) where reading\_date <= transactions\_date. You could then use this function in a select statement against the Transactions table.
The other approach would be to use a cursor to iterate through the transactions table and use the same logic as above where you return the reading for Max(Reading\_date) where reading\_date <= transactions\_date.
|
SQL Server 2008 - need help on a antithetical query
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a table with a hierarchy:
```
create table t_hier (id number primary key, parent number);
insert into t_hier (id, parent) values(0, null);
insert into t_hier (id, parent) values(1, 0);
insert into t_hier (id, parent) values(2, 0);
insert into t_hier (id, parent) values(3, 1);
insert into t_hier (id, parent) values(4, 1);
insert into t_hier (id, parent) values(5, 2);
insert into t_hier (id, parent) values(6, 2);
insert into t_hier (id, parent) values(7, 5);
insert into t_hier (id, parent) values(8, 5);
select rpad('* ', 2*level, '* ')||id id, parent
from t_hier
connect by prior id = parent
start with parent is null;
ID PARENT
____________________ ______
* 0
* * 1 0
* * * 3 1
* * * 4 1
* * 2 0
* * * 5 2
* * * * 7 5
* * * * 8 5
* * * 6 2
```
Given some ID I need to get all its parents, grandparents etc and also every sibling of returned elements (by siblings I mean only elements with the same parent, not the entire level), and also given element itself.
So if I have element with id 5, I need to return 0, 1, 2, 5 and 6.
For element with id 7 I need to return 0, 1, 2, 5, 6, 7, 8.
I think it can be done by just one query, it will be great if someone will help me with it.
|
```
with parents as (
select level lvl, id
from t_hier
start with id = 7
connect by id = prior parent
)
select distinct id from t_hier
where id != 7
start with id in (select id from parents where lvl > 1)
connect by prior id = parent and level <= 2;
```
1. Find all forefathers
2. Go back and for each forefather find his children but only on the second level
3. Exclude the starting id.
|
This may help :
```
with cte_getHierarchy (id,parent)
as
(
select t_hier.id,t_hier.parent from t_hier where id = 7
union all
select t_hier.id,t_hier.parent from t_hier join cte_getHierarchy on t_hier.id = cte_getHierarchy.parent
),
cte_getsibling (id,parent)
as
(
select cte_getHierarchy.id,cte_getHierarchy.parent from cte_getHierarchy
union
select t_hier.id,t_hier.parent from t_hier join cte_getHierarchy on t_hier.parent = cte_getHierarchy.parent
)
select id from cte_getsibling where id <> 7;
```
[sql fiddle](http://sqlfiddle.com/#!4/2c5a8c/29)
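The CTE idea also ports to engines without `CONNECT BY`; a sketch with `sqlite3`, where `WITH RECURSIVE` first walks up to the ancestors and a second pass pulls in every sibling (same `t_hier` data as the question):

```python
import sqlite3

# The question's hierarchy table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t_hier (id INTEGER PRIMARY KEY, parent INTEGER);
    INSERT INTO t_hier VALUES (0, NULL), (1, 0), (2, 0), (3, 1), (4, 1),
                              (5, 2), (6, 2), (7, 5), (8, 5);
""")

# anc walks from id 7 up to the root; the second SELECT adds every row
# sharing a parent with one of those ancestors (the siblings).
ids = {r[0] for r in conn.execute("""
    WITH RECURSIVE anc(id, parent) AS (
        SELECT id, parent FROM t_hier WHERE id = 7
        UNION ALL
        SELECT t.id, t.parent FROM t_hier t JOIN anc ON t.id = anc.parent
    )
    SELECT id FROM anc
    UNION
    SELECT s.id FROM t_hier s JOIN anc ON s.parent = anc.parent
""")}
```

For id 7 this yields `{0, 1, 2, 5, 6, 7, 8}`, the exact set the question asks for.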
|
Get all parents for given element and their siblings
|
[
"",
"sql",
"oracle",
"hierarchy",
""
] |
I have searched both this great forum and googled around but unable to resolve this.
We have two tables (and trust me I have nothing to do with these tables). Both tables have a column called `eventId`.
However, in one table, data type for `eventId` is `float` and in the other table, it is `nvarchar`.
We are selecting from `table1` where `eventI` is defined as `float` and saving that Id into `table2` where `eventId` is defined as `nvarchar(50)`.
As a result of descrepancy in data types, we are getting error converting datatype `nvarchar` to `float`.
Without fooling around with the database, I would like to cast the `eventId` to get rid of this error.
Any ideas what I am doing wrong with the code below?
```
SELECT
CAST(CAST(a.event_id AS NVARCHAR(50)) AS FLOAT) event_id_vre,
```
|
The problem is most likely because some of the rows have `event_id` that is empty. There are two ways to go about solving this:
* **Convert your `float` to `nvarchar`, rather than the other way around** - This conversion will always succeed. The only problem here is if the textual representations differ - say, the table with `float`-as-`nvarchar` uses fewer decimal digits, or
* **Add a condition to check for empty IDs before the conversion** - This may not work if some of the event IDs are non-empty strings, but they are not float-convertible either (e.g. there's a word in the field instead of a number).
The second solution would look like this:
```
SELECT
case when a.eventid <> ''
then cast(cast(a.event_id as nvarchar(50)) as float)
ELSE 0.0
END AS event_id_vre,
```
|
I also received the message "Error converting data type nvarchar to float". This was from a query in SQL Server Management Studio (SSMS) from SQL Server 2019.
In my case there were undecipherable characters in the source data, which was exported to CSV from Excel. Some columns were in the "Accounting" format, which has spaces, currency symbols, commas, and parenthesis, like this:
```
- positive values: $ 123,456.78
- negative values: $(123,456.78)
```
To account for this and resolve the error, I simply inserted a nested `REPLACE()` function to remove some characters and replace others, and wrapped it with `IsNull()` to return zero when necessary.
First I will describe:
* replace any `$` `,` `)` with nothing (aka null string or empty quotes `''`)
* replace any `(` with `-` (aka hyphen or negative symbol)
And here is the SQL:
```
Original:
CONVERT(float, exp_TotalAmount)
Replacement:
CONVERT(float, IsNull(REPLACE(REPLACE(REPLACE(REPLACE(exp_TotalAmount,'(','-'),')',''),',',''),'$',''),0))
Here broken up for easier read, innermost first:
REPLACE(exp_TotalAmount, '(', '-')
REPLACE(X, ')', '')
REPLACE(X, ',', '')
REPLACE(X, '$', '')
IsNull (X, 0)
CONVERT(float, X)
```
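The nested `REPLACE` chain works the same way in `sqlite3`; here it turns one accounting-formatted string into a negative number (note the `(` must be replaced before the `)` is stripped, exactly as in the answer):

```python
import sqlite3

# Clean '$(123,456.78)' -> '-123456.78' -> -123456.78, innermost first:
# '(' becomes '-', then ')' , ',' and '$' are stripped, then CAST to REAL.
conn = sqlite3.connect(":memory:")
value = conn.execute("""
    SELECT CAST(
        REPLACE(REPLACE(REPLACE(REPLACE('$(123,456.78)',
            '(', '-'), ')', ''), ',', ''), '$', '')
    AS REAL)
""").fetchone()[0]
```

`value` comes out as approximately -123456.78.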
|
Error unable to convert data type nvarchar to float
|
[
"",
"sql",
"sql-server",
""
] |
I want to get the 1st of the month following the date the person turns 70. How can I achieve this in SQL?
I am calculating age using the formula:
```
DECLARE @Date_of_birth DATETIME = '1915-10-02 00:00:00.000'
DECLARE @AGE INT
SELECT @AGE = FLOOR((CAST (GETDATE() AS INTEGER) - CAST(@Date_of_birth AS INTEGER)) / 365.25)
IF(@AGE > 70)
   -- How to find the first of the month following the date ??
IF (@AGE >80)
```
|
You can use datediff to calculate their age, and then date add to find their 70th birthday. To find the first of the month afterwards, you can use the Month and Year functions.
```
create table #people (name varchar(30), birthdate date)
insert into #people
values ('Bob', '07/08/1976'), ('Tasha','05/30/1996'),('April','04/01/1971')
--This will give you everyone's age
select DATEDIFF(YY,birthdate,GETDATE()) as age
from #people
--This will give you the first month following the date that they turn 70
select Name, DATEADD(yy,70,birthdate) as [70thBday], convert(varchar,month(dateadd(m,1,DATEADD(yy,70,birthdate)))) + '/01/' + convert(varchar,YEAR(dateadd(m,1,DATEADD(yy,70,birthdate))))
from #people
```
|
```
declare @dob datetime = '1954-06-08'
declare @age int = 70
select DATEADD(m, DATEDIFF(m, -1, DATEADD(yy, @age, @dob)), 0)
```
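For comparison, the same computation in plain Python (a sketch that assumes the birthday is not Feb 29, since `date.replace` raises for Feb 29 in a non-leap target year):

```python
from datetime import date

def first_of_month_after(dob, years=70):
    """First day of the month following the `years`-th birthday."""
    bday = dob.replace(year=dob.year + years)   # the 70th birthday
    # roll forward to the first day of the following month
    if bday.month == 12:
        return date(bday.year + 1, 1, 1)
    return date(bday.year, bday.month + 1, 1)
```

For `date(1954, 6, 8)` this returns `date(2024, 7, 1)`, matching the T-SQL `DATEADD`/`DATEDIFF` expression above.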
|
How to calculate age (in years) based on Date of Birth in SQL
|
[
"",
"sql",
""
] |
I have an requirement where I need to get a record with min time difference with current record.
Let us assume that in a table I have insert date, group Id and Id column.
I have selected a record and now want to get another record for which the difference between its insert date and the selected record's insert date is minimal.
I have tried outer apply, but that query takes forever to run.
Query:
```
select e.id
from B.Emp t
outer apply (
    select top 1 *
    from B.Emp
    where t.group_id = group_id
    order by insert_time desc ) e
where t.id = 5
```
|
If you indeed want to find closest insert time for the selected record' group, regardless of the direction, then something like this might help:
```
select t.insert_date, ca.*
from B.Emp t
outer apply (
select top (1) * from B.Emp e
where e.group_id = t.group_id
and e.id != t.id
order by abs(datediff(ms, t.insert_date, e.insert_date))
) ca
where t.Id = 5;
```
**EDIT:** You will definitely need to check indexing options, though. For starters, assuming you have a clustered primary key on `Id` column, this should help:
```
create index [IX_B_Emp_GroupInsert] on B.Emp (group_id, insert_time);
```
|
```
select * From B.Emp emp
Inner Join
(
select MAX(emp1.insert_time) maxTime, emp1.id From B.Emp emp1 group by emp1.id
) maxDateRec ON maxDateRec.id = emp.id AND maxDateRec.maxTime = emp.insert_time
where emp.id = 5
```
Try with this second one.
|
Get record with min time difference
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to insert a string that contains an @ character in SQL*Plus, but SQL*Plus reads the string as if I were trying to run a .sql file. Is there a way to stop SQL*Plus from doing that?
Example:
```
set define off;
DECLARE
test CLOB
:= q'[
@font-face
{font-family:Wingdings;
panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */ ]' ;
BEGIN
INSERT INTO SOME_TABLE
VALUES(test);
END;
/
```
|
Used Oracle's quote operator: `q'[...]'`
|
Interesting, I don't have this issue in sqlplus (SQL\*Plus: Release 11.2.0.1.0 Production):
```
SQL> set serveroutput on
SQL> create table tmp1(x varchar2(100));
Table created.
SQL> insert into tmp1 values ('@font-face {font-family:"Cambria Math"; panose-1:
2 4 5 3 5 4 6 3 2 4;}');
1 row created.
```
You can set an escape character:
```
SQL> set escape '\';
SQL> insert into tmp1 values ('\@456');
1 row created.
```
This will insert @456
|
SQLPlus using at notation @ in a String
|
[
"",
"sql",
"oracle",
"sqlplus",
""
] |
I have 10+ million articles (200 to 1000 words) in an `InnoDB` table. I only use this type of query when I'm working with the `article` field:
```
SELECT article,title,other_fields from table where id=123;
```
**What is the most "space effective" way to store text in MySQL?**
Now the **table size is *say* 100GB, my objective is to make that as little as possible** without causing too much performance tradeoff.
|
MyISAM is more space-friendly than InnoDB; you can start with that one.
|
As others have pointed out, innodb [compressed rows](http://dev.mysql.com/doc/innodb/1.1/en/innodb-row-format-dynamic.html) is an option.
You need to be sure that your file\_format system parameter is set to Barracuda though. An attempt to alter to make the row format compressed where row format is the default Antelope may result in a wasted table rebuild only to give you a warning that row format wasn't supported.
I've found the compression on various datasets to not be very good and give a lot of cpu overhead, especially hurting performance where joins are involved.
I've been experimenting with the [TokuDB](http://www.tokutek.com/tokudb-for-mysql/) storage engine for MySQL, which seems to offer better compression ratios at better speeds, among other benefits (such as quick online schema changes).
|
What is the most "space effective" way to store text in MySQL?
|
[
"",
"mysql",
"sql",
"optimization",
""
] |
I want to display all the products that do not have a visibility rule with hidden: true. I can't quite figure out the SQL. I have the following 4 tables:
```
products:
id: integer
name: string
role:
id: integer
name: string
user_roles:
user_id: integer
role_id: integer
visibility_rules:
hidden: boolean
product: integer
role: text
```
I tried the following
```
SELECT spree_products.*
FROM spree_products
LEFT OUTER JOIN spree_visibility_rules ON spree_visibility_rules.product_id
= spree_products.id
AND spree_visibility_rules.role_name IN ('Student','Admin')
WHERE spree_products.deleted_at IS NULL
AND (spree_visibility_rules.visible IS NULL)
```
If I have multiple roles (Student & Admin etc.) then I want to find all products that are visible to at least one of the user's roles. The problem with the above SQL is that it only seems to work if the product is visible to all the user's roles. If it's hidden for the Student role then it won't return the product even though it's visible to the Admin role. I think I have misunderstood left outer join.
|
Visible products for a ***given*** `user_id`:
```
SELECT p.*
FROM user_roles ur
JOIN role r ON r.id = ur.role_id
JOIN visibility_rules vr ON vr.role = r.id
JOIN spree_products p ON p.id = vr.product
WHERE ur.user_id = 123 -- given user_id
AND NOT vr.hidden
GROUP BY p.id; -- must be defined as PK
```
Depending on details, performance might be optimized further.
|
```
select sp.id,sp.name,svr.hidden from spree_products sp
inner join spree_visibility_rules svr
on sp.id = svr.product
where svr.hidden = 0
```
|
Left outer sql join problems
|
[
"",
"sql",
"ruby-on-rails",
"postgresql",
""
] |
I have a table `user_groups` and I need to select the maximal `int` value of the field `time`, or `0` if any `user_groups` row has `0` in its `time` field.
Something like this:
```
SELECT time FROM user_groups
WHERE
IF (time == 0) THEN time = 0
ELSE MAX(time)
```
I need multiplatform solution.
**UPDATED:**
I'm using `mysql`.
In this case:
```
id | time |
1 | 5 |
2 | 6 |
```
I'm expecting `6`.
In this case:
```
id | time |
3 | 5 |
4 | 6 |
5 | 0 |
```
I'm expecting `0`
|
This solution uses the abs function to also cover the negative time case:
```
SELECT case when min(abs(time)) = 0 then 0 else max(time) end FROM user_groups;
```
|
How about using `CASE`:
```
SELECT case when min(time) = 0 then 0 else max(time) end FROM user_groups;
```
Or, if time could be < 0 but only times of 0 or more should be considered, adding a where clause can help:
```
SELECT case when min(time) = 0 then 0 else max(time) end FROM user_groups where time >= 0;
```
|
SELECT MAX field value OR zero value
|
[
"",
"mysql",
"sql",
"select",
"max",
""
] |
```
SELECT Field1
FROM Table
WHERE Field1 BETWEEN "A%" AND "D%"
```
Example Data:
```
A432434324324
B435435435
C5654664546
D5465465546546
A768677687
B8768678768
A65446456546
B876867877868
```
The query outcome won't include the "D" lines; any ideas how to deal with it? (I need to operate on text, not numbers.)
|
Leave out the wildcard and try:
```
WHERE Field1 >= 'A' AND Field1 < 'E'
```
Or:
```
WHERE LEFT(Field1, 1) BETWEEN 'A' and 'D'
```
The `'%'` wildcard is used for `LIKE`, not for other string comparisons.
Two notes:
* The first version is better because it would allow MySQL to use an index on `Field1`.
* Use single quotes for string and date constants; this is the common syntax across all databases.
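A runnable illustration of the difference, using Python's sqlite3 with the question's sample values (the behaviour is the same in MySQL, since `%` is only special to `LIKE`):

```python
import sqlite3

# 'D5...' sorts after the literal string 'D%' (digits come after '%'
# in the collation), so the BETWEEN version drops the D rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Field1 TEXT)")
rows = ["A432434324324", "B435435435", "C5654664546", "D5465465546546"]
conn.executemany("INSERT INTO t VALUES (?)", [(r,) for r in rows])

between = conn.execute(
    "SELECT Field1 FROM t WHERE Field1 BETWEEN 'A%' AND 'D%'").fetchall()
ranged = conn.execute(
    "SELECT Field1 FROM t WHERE Field1 >= 'A' AND Field1 < 'E'").fetchall()
print(len(between), len(ranged))  # 3 4 -- the range version keeps the D row
```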
|
You can't use wildcards with `BETWEEN`. With your query you get all values between the literal strings 'A%' and 'D%', and since the % sign sorts before the numbers, the D rows fall outside that range.
|
SQL BETWEEN missing data
|
[
"",
"mysql",
"sql",
"csv",
""
] |
I'm trying to write a SQL query that uses CASE and STRING together and getting an error.
This is what I'm trying to work on. Any help is greatly appreciated.
I tried adding in STRING function as well but also does not work.
```
SELECT Case
when sn.1_code = 1 then 'Attended -- ' ,
sn.mult_1 ,
, 'and' ,
sn.dict_2 ,
' also acted with ' ,
sn.dict_3 ,
'.' ,
when sn.1_code = 3 then 'left because ' ,
sn.mult_2 ,
'.' ,
when sn.dict_1 = 2 then 'Went home' ,
when sn.dict_1 = 24 then 'Canceled' AS 'Attendance'
FROM db.sn
```
|
As other answers have pointed out, you need to concatenate your string values together. From the very very very little I know of Intersystems Cache SQL (I just looked it up), you will [**need to use `||`**](http://docs.intersystems.com/ens20141/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_stringmanipulation) to concatenate the values (you can also use the [**CONCAT()**](http://docs.intersystems.com/ens20091/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_concat) function to do this, but it only allows two paramaters):
```
SELECT Case
when sn.1_code = 1 then 'Attended -- ' ||
sn.mult_1 ||
'and' ||
sn.dict_2 ||
' also acted with ' ||
sn.dict_3 ||
'.'
when sn.1_code = 3 then 'left because ' ||
sn.mult_2 ||
'.'
when sn.dict_1 = 2 then 'Went home'
when sn.dict_1 = 24 then 'Canceled' END AS 'Attendance'
FROM db.sn
```
You also had some extra commas in there, as well as a missing `END` at the end of your [**CASE statement**](http://docs.intersystems.com/ens20091/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_case)
|
It looks like you're trying to concatenate strings. The actual operator may vary depending on your server software, but the idea would be:
```
SELECT
Case
when sn.1_code = 1
then 'Attended -- ' + sn.mult_1 + 'and' + sn.dict_2 + ' also acted with ' + sn.dict_3 + '.'
when sn.1_code = 3
        then 'left because ' + sn.mult_2 + '.'
        when sn.dict_1 = 2
        then 'Went home'
when sn.dict_1 = 24
then 'Canceled'
End AS 'Attendance'
FROM db.sn
```
|
SQL using CASE with STRING function
|
[
"",
"sql",
"string",
"case",
"intersystems-cache",
""
] |
I would like to update a column of type DATETIME such that it takes year, month, and date from a parameter `@date` (in a stored procedure for Microsoft SQL Server 2012) but retains hours, minutes, seconds, etc. from its present value.
Is there a concise SQL expression for combining dates in such a manner (similar to what `java.util.Calendar` would allow) or special syntax for `UPDATE` to achieve this effect?
**UPDATE** here is a concrete example: `UPDATE` of November 1, 2014 3:30 PM to `@date` November 3, 2014 12:40 PM results in November 3, 2014 3:30 PM (i.e. only year, month, and day are affected; hours/minutes/seconds etc. stay the same).
|
Get the date part from the @date parameter and the time part from the existing value in the column.
```
UPDATE mytable
SET mydatetimecolumn =
CAST(CONVERT(date, @date) as datetime) + CONVERT(time, mydatetimecolumn)
WHERE <my_where_clause>
```
|
I might suggest just adding complete days:
```
select dateadd(day,
datediff(day, col, @date),
col)
```
If you want this in an `update`:
```
update table t
set col = dateadd(day, datediff(day, col, @date), col);
```
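The day-shifting logic can be illustrated outside SQL as well; here is a small Python sketch of the same idea (dates taken from the question's concrete example):

```python
from datetime import datetime

# Mirror of the dateadd(day, datediff(day, col, @date), col) trick:
# shift the stored value by whole days so the time-of-day survives
# while the calendar date moves to @date.
col = datetime(2014, 11, 1, 15, 30)      # stored value: Nov 1, 3:30 PM
at_date = datetime(2014, 11, 3, 12, 40)  # the @date parameter
day_shift = at_date.date() - col.date()  # whole-day difference
result = col + day_shift
print(result)  # 2014-11-03 15:30:00 -- date from @date, time from col
```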
|
SQL expression for combining dates such that year/month/date comes from 1st, hours/etc. from 2nd
|
[
"",
"sql",
"sql-server",
"datetime",
"calendar",
""
] |
Let's say that I have a customer table:
```
CustomerID | FirstName | LastName
1 | John | Smith
2 | John | Adams
3 | Kevin | Smith
4 | Kevin | Adams
```
Now imagine that this table has 200k+ rows. Would it increase performance to create a separate FirstName and LastName table as shown below and then using joins to get the view above?
Example:
```
FirstNameID | FirstName
1 | John
2 | Kevin
```
---
```
LastNameID | LastName
1 | Adams
2 | Smith
```
---
```
CustomerID | FirstNameID | LastNameID
1 | 1 | 2
2 | 1 | 1
3 | 2 | 2
4 | 2 | 1
```
|
It depends on your query workload. This is a simple form of data compression. Reducing the set of data that is needed to answer a given query can improve performance.
On the other hand you introduce overhead in many places. It is a trade-off. If you want to retrieve the values of those columns you now need to join. DML becomes slower as well.
Since the name columns are probably rather small it is hard to imagine a query workload that would benefit from this scheme.
`DATA_COMPRESSION` and backup compression can be alternatives. They make very different trade-offs.
Just to prove that there is merit to your scheme: Imagine very long names and a huge (TB-sized) table. The space savings would be significant because there are rather few distinct names.
|
Whether such a move would increase performance depends on what queries you perform, but it is likely to do the opposite for many of the queries that I envision you wanting to perform.
|
First and Last name tables to increase performance?
|
[
"",
"sql",
"sql-server",
""
] |
I am struggling to write a Django query against two models:
```
class Invoice(models.Model):
contact = models.ForeignKey(Contact)
dateCreated = models.DateTimeField(auto_now_add=True)
jobName = models.CharField(max_length=30)
jobAddress = models.CharField(max_length=30, blank=True)
class Contact(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
address = models.CharField(max_length=30)
```
I am trying to replicate the following query:
```
SELECT *
FROM invoice, contact
WHERE invoice.contact_id = contact.id
AND invoice.id = 5
```
Any help would be appreciated.
|
So basically what you want is all information for the invoice and associated contact for the invoice having the id 5; to do this:
```
# Fetch the invoice with id = 5
invoice = Invoice.objects.get(id=5)
```
Now, to fetch information about the related contact, simply "follow" the foreign key:
```
print(invoice.contact.first_name)
print(invoice.contact.last_name)
```
|
You would rather set up your models in this way (Contact first, then Invoice, so the foreign key target is defined first):
```
class Contact(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
address = models.CharField(max_length=30)
class Invoice(models.Model):
contact = models.ForeignKey(Contact, related_name="contact_invoice")
dateCreated = models.DateTimeField(auto_now_add=True)
jobName = models.CharField(max_length=30)
jobAddress = models.CharField(max_length=30, blank=True)
```
then this query:
```
contact = Contact.objects.get(id=someid)#just to get first contact object
contact_address = contact.address
contact_firstname = contact.first_name
contact_lastname = contact.last_name
invoice_of_this_contact = contact.contact_invoice.get(id=5)
```
|
django relationship query
|
[
"",
"sql",
"django",
""
] |
[**SQL Fiddle.**](http://sqlfiddle.com/#!12/1483a/1)
I'm having a slow start to the morning. I thought there was a more efficient way to make the following query using a join, instead of two independent selects -- am I wrong?
Keep in mind that I've simplified/reduced my query into this example for SO purposes, so let me know if you have any questions as well.
```
SELECT DISTINCT c.*
FROM customers c
WHERE c.customer_id IN (select customer_id from customers_cars where car_make = 'BMW')
AND c.customer_id IN (select customer_id from customers_cars where car_make = 'Ford')
;
```
### Sample Table Schemas
```
-- Simple tables to demonstrate point
CREATE TABLE customers (
customer_id serial,
name text
);
CREATE TABLE customers_cars (
customer_id integer,
car_make text
);
-- Populate tables
INSERT INTO customers(name) VALUES
('Joe Dirt'),
('Penny Price'),
('Wooten Nagen'),
('Captain Planet')
;
INSERT INTO customers_cars(customer_id,car_make) VALUES
(1,'BMW'),
(1,'Merc'),
(1,'Ford'),
(2,'BMW'),
(2,'BMW'), -- Notice car_make is not unique
(2,'Ferrari'),
(2,'Porche'),
(3,'BMW'),
(3,'Ford');
-- ids 1 and 3 both have BMW and Ford
```
---
## Other Expectations
* There are ~20 car\_make in the database
* There are typically 1-3 car\_make per customer\_id
* There is expected to be not more than 50 car\_make assignments per customer\_id (generally 20-30)
* The query is generally only going to look for 2-3 specific car\_make per customer (e.g., BMW and Ford), but not 10-20
|
You don't need to join to `customers` at all (given relational integrity).
Generally, this is a case of **relational division**. We assembled an arsenal of techniques under this related question:
* [How to filter SQL results in a has-many-through relation](https://stackoverflow.com/questions/7364969/how-to-filter-sql-results-in-a-has-many-through-relation)
### Unique combinations
If `(customer_id, car_make)` *was* defined unique in `customers_cars`, it would get much simpler:
```
SELECT customer_id
FROM customers_cars
WHERE car_make IN ('BMW', 'Ford')
GROUP BY 1
HAVING count(*) = 2;
```
### Combinations not unique
Since `(customer_id, car_make)` is not unique, we need an extra step.
For **only a few cars**, your original query is not that bad. But (especially with duplicates!) `EXISTS` is typically faster than `IN`, and we don't need the final `DISTINCT`:
```
SELECT customer_id -- no DISTINCT needed.
FROM customers c
WHERE EXISTS (SELECT 1 FROM customers_cars WHERE customer_id = c.customer_id AND car_make = 'BMW')
AND EXISTS (SELECT 1 FROM customers_cars WHERE customer_id = c.customer_id AND car_make = 'Ford');
```
Above query gets verbose and less efficient for a longer list of cars. For an **arbitrary number of cars** I suggest:
```
SELECT customer_id
FROM (
SELECT customer_id, car_make
FROM customers_cars
WHERE car_make IN ('BMW', 'Ford')
GROUP BY 1, 2
) sub
GROUP BY 1
HAVING count(*) = 2;
```
[**SQL Fiddle.**](http://sqlfiddle.com/#!12/1483a/8)
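As a quick check, the final query runs unchanged on SQLite apart from replacing the ordinals with column names; a sketch using Python's sqlite3 with the sample data:

```python
import sqlite3

# Customer 2 owns two BMWs but no Ford, so the inner GROUP BY
# deduplication keeps the outer count honest.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers_cars (customer_id INTEGER, car_make TEXT)")
conn.executemany("INSERT INTO customers_cars VALUES (?, ?)", [
    (1, 'BMW'), (1, 'Merc'), (1, 'Ford'),
    (2, 'BMW'), (2, 'BMW'), (2, 'Ferrari'), (2, 'Porche'),
    (3, 'BMW'), (3, 'Ford')])
ids = conn.execute("""
    SELECT customer_id FROM (
        SELECT customer_id, car_make
        FROM customers_cars
        WHERE car_make IN ('BMW', 'Ford')
        GROUP BY customer_id, car_make
    ) sub
    GROUP BY customer_id
    HAVING count(*) = 2
    ORDER BY customer_id""").fetchall()
print(ids)  # [(1,), (3,)] -- customer 2's duplicate BMWs don't slip through
```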
|
And here is another option; I don't know which one would be fastest on large tables.
```
SELECT customers.*
FROM customers
JOIN customers_cars USING(customer_id)
WHERE car_make = ANY(ARRAY['BMW','Ford'])
GROUP BY
customer_id, name
HAVING array_agg(car_make) @> ARRAY['BMW','Ford'];
```
---
vol7ron:
[Fiddle](http://sqlfiddle.com/#!12/95453/16)
> The following is a modification of the above, taking the same idea using an array for comparison. I'm not sure how much more efficient it would be compared to the dual-query approach, since it would have to create an array in one pass and then do more heavy-handed comparison of the array's elements.
```
SELECT DISTINCT c.*
FROM customers c
WHERE customer_id IN (
select customer_id
from customers_cars
group by customer_id
having array_agg(car_make) @> ARRAY['BMW','Ford']
);
```
|
Making simple SQL more efficient
|
[
"",
"sql",
"postgresql",
"relational-division",
""
] |
When I read about inner or outer joins in SQL, all examples and descriptions are about 2 tables being joined. What if there are more than 2 tables in the query? Is that still considered a join?
I think an inner join still makes sense even if it is between multiple tables, but I'm not sure outer joins make sense between more than 2 tables.
Can someone please clarify this issue?
|
Inner joins and outer joins are perfectly reasonable to use with more than 2 tables.
Inner joins restrict the result to rows that have a match on the join condition, whereas outer joins keep rows from one side even when there is no match.
Let us say you wanted to join 4 tables together...
```
select * from testtable
inner join testtable2 on col1 = othercolumn
inner join testtable3 on col2 = othercolumn
left join testtable4 on col3 = othercolumn
```
In this case, it would return only rows that match in the inner joins, but a match does not have to exist in the outer/left join. You are forcing testtables 2 & 3 to have a value for what you are joining on; it cannot be null.
The left join does not care if the value is null, and will show results anyway.
I hope this helps some... Basically, if you inner join on a value and it can possibly be null, those rows will not show up at all. That is the scenario where you would use an outer join: you are not forcing the value to exist.
|
Most examples of joins will include two tables. However, joins can be done on any number of tables.
You can read more about joins all over the interwebs, but you might want to start with:
<http://www.w3schools.com/sql/sql_join.asp>
<http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/>
The first thing the w3schools article states is:
> SQL joins are used to combine rows from two or more tables.
This isn't entirely true, as you can even join a table to itself!
Consider:
```
Employees
-----
EmployeeId
ManagerId
EmployeeName
```
if you want to find out the employees of a specific manager, that could be written as:
```
select manager.EmployeeName, subordinates.*
from employees manager
inner join employees subordinates on manager.employeeId = subordinates.managerId
```
For multiple table joins consider:
```
Employees
----
EmployeeId
ManagerId
EmployeeName
Departments
----
DepartmentId
DepartmentName
EmployeeDepartments
----
DepartmentId
EmployeeId
```
In this case, if you wanted to find out all department names that employee 5 belonged too, you could do:
```
select d.DepartmentName
from employees e
inner join employeeDepartments ed on e.employeeId = ed.employeeId
inner join departments d on ed.departmentId = d.departmentId
where e.employeeId = 5
```
TLDR; - yes including more than 2 tables is still considered join(s)
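The multi-table join above can be tried directly in SQLite; a small runnable sketch (the rows inserted here are invented to match the example schemas):

```python
import sqlite3

# Build the three example tables and run the answer's three-table join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (employeeId INTEGER, managerId INTEGER, employeeName TEXT);
    CREATE TABLE departments (departmentId INTEGER, departmentName TEXT);
    CREATE TABLE employeeDepartments (departmentId INTEGER, employeeId INTEGER);
    INSERT INTO employees VALUES (5, 1, 'Dana');
    INSERT INTO departments VALUES (10, 'Sales'), (11, 'Support');
    INSERT INTO employeeDepartments VALUES (10, 5), (11, 5);
""")
names = conn.execute("""
    SELECT d.departmentName
    FROM employees e
    INNER JOIN employeeDepartments ed ON e.employeeId = ed.employeeId
    INNER JOIN departments d ON ed.departmentId = d.departmentId
    WHERE e.employeeId = 5
    ORDER BY d.departmentName""").fetchall()
print(names)  # [('Sales',), ('Support',)]
```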
|
SQL Join: Are selects between more than 2 tables still joins?
|
[
"",
"mysql",
"sql",
"oracle",
"join",
""
] |
I am having a little issue inserting the date/time into a database table named TestTable in SAS EG. The date fields in TestTable are defined as
```
Name = EnteredDate
Type = Date
Length = 8
Format = DATETIME22.3
Informat = DATETIME22.3
Name = LastUpdateDate
Type = Date
Length = 8
Format = DATETIME22.3
Informat = DATETIME22.3
```
My existing code is as follows, where I use a select statement to insert as opposed to a values statement. Here I select from a PrepTable which I don't use at all, but the select statement allows me to use the datetime() function, which works perfectly in inserting the date and time without any issues. See below:
```
INSERT INTO LIBNAME.TestTable (Statuscode, EnteredDate, LastUpdateDate)
Select '2', datetime(), datetime()
From work.PrepTable
```
I want to do the following:
```
INSERT INTO LIBNAME.TestTable (Statuscode, EnteredDate, LastUpdateDate)
VALUES ('2', datetime(), datetime())
```
This is far more efficient I think, since I don't have to query work.PrepTable. But datetime() will not work in a values statement. I tried the following variations
```
INSERT INTO LIBNAME.TestTable (Statuscode, EnteredDate, LastUpdateDate)
VALUES ('2', "&sysdate9:00:00:00"dt,"&sysdate9:00:00:00"dt)
```
This method does not enter the time only the date and the date entered is wrong as well. Another variation I tried is the following after looking around on this site
```
INSERT INTO LIBNAME.TestTable (Statuscode, EnteredDate, LastUpdateDate)
VALUES ('2', &today_dttm,&today_dttm)
```
where &today\_dttm is sourced from:
```
%let today_dttm=%sysfunc(dhms(%sysfunc(today()), 0, 0, 0));
```
This hasn't worked too well either, as I keep getting only just the date inserted and not the time. Is there a proper way to enter the current day and time when the insert statement is called, without having to use the select statement and datetime() combo?
I am a newbie to sas and any input would be greatly appreciated.
Thanks
|
There are a few different issues here.
First off, `&sysdate9.` and `&sysdate.` are only [defined at system startup](http://support.sas.com/documentation/cdl/en/mcrolref/68140/HTML/default/viewer.htm#n0psau13pzzhf2n1mndr75gw7vc8.htm). To get the current date, you need `%sysfunc(today(),date9.)` or `%sysfunc(datetime(),datetime17.)`. You could use that in a values statement.
Second, I don't tend to encourage use of the values statement in SAS, because it's pretty minimal in what it supports compared to other SQL flavors. I find it better to insert data using datalines in a SAS dataset and then appending that dataset.
```
data to_insert;
input statuscode $;
entereddate=datetime();
lastupdatedate=datetime();
datalines;
2
;;;;
run;
proc append base=libname.testtable data=to_insert;
run;
```
Third, your various attempts to use date variables/functions (`&sysdate9`, `today()`) are not including the time because they are date variables/functions and don't include times. Really, my first suggestion is simply combining two of your methods. While it is possible to have a decimal-containing date value in SAS which converts to a datetime complete with h/m/s, it's not the "norm" unless your data comes from somewhere else (like Excel).
|
Joe's answer covers it, but here is a solution using your SQL.
You almost have it. Try this
```
%let today_dttm=%sysfunc(datetime());
INSERT INTO LIBNAME.TestTable (Statuscode, EnteredDate, LastUpdateDate)
VALUES ('2', &today_dttm,&today_dttm);
```
|
Inserting date/time in a SQL table in SAS
|
[
"",
"sql",
"sas",
""
] |
So I'm trying to print all the comments from my database, but only if they have been approved.
I recently heard that you can use if statements in your SQL, but I'm not exactly sure how it works.
This is what I tried:
```
SELECT name, created, comment IF('approved' = 1) FROM table_example WHERE id = 3
```
It should only execute the statement `IF 'approved' = 1`.
Can I do something like this or is it way off?
|
```
SELECT name, created, comment FROM table_example WHERE id = 3 AND approved = 1
```
Or am I missing something?
|
you could do something like this:
```
SELECT name, created,
CASE
WHEN approved = 1 THEN comment
ELSE NULL
END as comment
FROM table_example
WHERE id = 3
```
You can replace NULL with the default value you want shown if the comment is not approved.
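To see the practical difference between the two approaches (filtering rows out versus blanking the column), here is a runnable SQLite sketch; the rows and the simplified columns are invented for illustration:

```python
import sqlite3

# Two comments under id 3: one approved, one not.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE table_example (id INTEGER, name TEXT, comment TEXT, approved INTEGER)")
conn.executemany("INSERT INTO table_example VALUES (?, ?, ?, ?)", [
    (3, 'Ann', 'nice post', 1),
    (3, 'Bob', 'spam', 0)])

# WHERE drops the unapproved row entirely.
filtered = conn.execute(
    "SELECT name, comment FROM table_example "
    "WHERE id = 3 AND approved = 1 ORDER BY name").fetchall()
# CASE keeps the row but hides the comment.
cased = conn.execute("""
    SELECT name,
           CASE WHEN approved = 1 THEN comment ELSE NULL END AS comment
    FROM table_example WHERE id = 3 ORDER BY name""").fetchall()
print(filtered)  # [('Ann', 'nice post')]
print(cased)     # [('Ann', 'nice post'), ('Bob', None)]
```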
|
Only execute SELECT statement IF something
|
[
"",
"mysql",
"sql",
""
] |
I have the following contacts table
```
id | date_registered
1 2014-08-07
2 2014-08-13
3 2014-08-14
4 2014-10-08
5 2014-10-08
```
I want to run a query which says tell me the number of users that were registered in the months August, September and October. Hence for the above data the query would return
```
8 3
9 3
10 5
```
This is because 3 people joined in August so until October there are 3 people registered and then another 2 people join making that 5 in October.
I have tried the following query to get the answer (which follows a similar approach to the solution in here [Total number of users at end of each week for last 6 months](https://stackoverflow.com/questions/19318844/total-number-of-users-at-end-of-each-week-for-last-6-months)).
```
SELECT MONTH(c1.date_registered) AS month,
(SELECT COUNT(c2.id)
FROM contacts AS c2
WHERE c2.date_registered <= (DATE_FORMAT(DATE_ADD(c1.date_registered,
INTERVAL 1 MONTH),'%Y-%m-01')))
AS 'contacts on system'
FROM contacts AS c1
WHERE c1.date_registered BETWEEN '2013-11-01' AND '2014-11-01'
GROUP BY MONTH(c1.date_registered)
```
This query returns the following result
```
month | contacts on system
8 3
10 5
```
which is not quite what I want because September is missing from the results.
I have now done what a few people have suggested and used a left join to get september to show.
This is my new query
```
SELECT MONTH(c1.date_registered) AS month,
(SELECT COUNT(c2.id)
FROM contacts AS c2
WHERE c2.date_registered <= (DATE_FORMAT(DATE_ADD(c1.date_registered,
INTERVAL 1 MONTH),'%Y-%m-01')))
AS ContactsOnSystem
FROM (select 8 as mon union
select 9 as mon union
select 10 as mon
) as m left join
contacts as c1
on MONTH(c1.date_registered) = m.mon
GROUP BY MONTH(c1.date_registered)
```
When this query is run it returns
```
month | ContactsONSystem
9 0
8 3
10 5
```
Which is better but still not what I'm after. I want it to say there were 3 contacts on the system for September not 0. The 3 people that registered in August are still registered in September as they haven't gone anywhere.
Got it working on the third attempt:
```
SELECT m.mon AS month, (SELECT COUNT(c2.id)
FROM contacts AS c2
WHERE c2.date_registered < m.monthstart) AS ContactsOnSystem
FROM (select '2014-09-01' as monthstart, 8 as mon union
select '2014-10-01' as monthstart, 9 as mon union
select '2014-11-01' as monthstart, 10 as mon
) as m LEFT JOIN
contacts as c1
on MONTH(c1.date_registered) = m.mon
GROUP BY MONTH(c1.date_registered)
```
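As a sanity check, the core of this final query (the correlated count per month) can be run against the sample data with Python's sqlite3 (string dates compare correctly here because they are ISO-formatted):

```python
import sqlite3

# The question's five sample registrations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER, date_registered TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?, ?)", [
    (1, '2014-08-07'), (2, '2014-08-13'), (3, '2014-08-14'),
    (4, '2014-10-08'), (5, '2014-10-08')])

# For each month, count contacts registered before the next month's start.
rows = conn.execute("""
    SELECT m.mon,
           (SELECT COUNT(*) FROM contacts c
            WHERE c.date_registered < m.monthstart) AS on_system
    FROM (SELECT '2014-09-01' AS monthstart, 8 AS mon UNION ALL
          SELECT '2014-10-01', 9 UNION ALL
          SELECT '2014-11-01', 10) m""").fetchall()
print(rows)  # [(8, 3), (9, 3), (10, 5)]
```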
|
Do a full join to a table containing the months.
Filter to those rows whose registration date is older than that month row's date.
You get a list with one entry per user per month since their registration.
Over that you may group as said before.
|
You can do what you want by explicitly listing the months and using `left join`:
```
SELECT m.yyyymm AS month, count(c1.date_registered) as ContactsOnSystem
FROM (select 201408 as yyyymm union all
      select 201409 union all
      select 201410
     ) m left join
     contacts c1
     on year(c1.date_registered) * 100 + month(c1.date_registered) <= m.yyyymm
GROUP BY m.yyyymm
ORDER BY m.yyyymm;
```
As a note: don't use single quotes for column aliases. Only use single quotes for string and date constants.
|
Mysql How To Get Total Number Of Registered Users At Each Month
|
[
"",
"mysql",
"sql",
""
] |
in Oracle ApEx I have a table like this:
```
CREATE TABLE "ATTENDANCE_HOURS"
("ID" NUMBER NOT NULL ENABLE,
"PERSON_ID" NUMBER,
"PROJECT_ID" NUMBER,
"FROM_X" DATE NOT NULL ENABLE,
"TO_X" DATE NOT NULL ENABLE,
"NOTE" VARCHAR2(300),
"APPROVED" NUMBER DEFAULT 0 NOT NULL ENABLE,
"APPROVAL_NOTE" VARCHAR2(300),
"DAY" DATE NOT NULL ENABLE,
CONSTRAINT "CHECK_TIMES" CHECK (TO_CHAR(TO_X, 'HH24MI') > TO_CHAR(FROM_X, 'HH24MI')) ENABLE,
CONSTRAINT "ATTENDANCE_HOURS_PK" PRIMARY KEY ("ID")
USING INDEX ENABLE
)
```
Note: I cannot change the table. Just nope.
I need to, for each person, select how many hours a week they worked. I already have a selection for how many hours a day (thanks StackOverflow) that looks like this:
```
select
(MAX("TO_X") - MIN("FROM_X"))*24 -
(max(case when PROJECT_ID = 21 then to_x else to_date('01012000','DDMMYYYY') end) -
max(case when PROJECT_ID = 21 then from_x else to_date('01012000','DDMMYYYY') end))*24 AS TIME_SPENT
-- project id = 21 is break
from #OWNER#.ATTENDANCE_HOURS
GROUP BY DAY
```
What occurred to me was to group the entries next by week number and put the whole thing into SUM(), but it doesn't work and it says: not a single-group group function.
Or maybe I should start playing with views?
|
Total hours for each person in a week would look something like this:
```
select person_id, to_char(day, 'YYYY-WW') as week,
sum(to_x - from_x) as hours_worked
from #OWNER#.ATTENDANCE_HOURS
group by person_id, to_char(day, 'YYYY-WW')
order by person_id, week;
```
I have no idea what the extra logic is on the project id. Your question doesn't explain it, so I removed it as unnecessary for calculating the sum of the hours. You can add it back in, if it is important.
|
The following query gives you the sum of worked hours minus breaks for each person (provided project 21 means break and all others mean work):
```
select person_id,
sum(case when project_id = 21 then -1 else 1 end * (to_x - from_x)) * 24
from #OWNER#.ATTENDANCE_HOURS
group by person_id;
```
To get this per day can be quite complicated when work can be over midnight (or even span more than two days). As long as we assume that from\_x and to\_x are aways the same day, we get:
```
select person_id, trunc(from_x),
sum(case when project_id = 21 then -1 else 1 end * (to_x - from_x)) * 24
from #OWNER#.ATTENDANCE_HOURS
group by person_id, trunc(from_x);
```
To get this per week would then be:
```
select person_id, trunc(from_x, 'IW'),
sum(case when project_id = 21 then -1 else 1 end * (to_x - from_x)) * 24
from #OWNER#.ATTENDANCE_HOURS
group by person_id, trunc(from_x, 'IW');
```
EDIT: Oh, I just noticed that I don't know enough about your data. Can a project attendance range from 9am to 11am when there is also a break from 10am to 10:15am? Then my queries work. Or would that be two project attendances, one ranging from 9am to 10am and the other from 10:15am to 11am? Then you would need `sum(case when project_id = 21 then 0 else to_x - from_x end) * 24` instead.
EDIT: Okay, as you say in your comment that project\_id = 21 cannot overlap, simply exclude it from your query:
```
select person_id, trunc(from_x, 'IW'), sum(to_x - from_x) * 24
from #OWNER#.ATTENDANCE_HOURS
where project_id <> 21
group by person_id, trunc(from_x, 'IW');
```
|
Select sum of hours grouped by a day first and then by a week
|
[
"",
"sql",
"oracle",
"select",
"oracle-apex",
""
] |
I am trying to figure out all of the records that only have the value null in two columns.
The problem I am running into is that I have 565 rows with columns including ID, Allegation1, Allegation2, date, and concatenatedalleg12,
so it shows up as
```
ID ALLEG1 ALLEG2 DATE CONCATENATED
1 NULL NULL DATE1 NULL
1 NULL A1 DATE1 A1
1 A2 NULL DATE1 A2
2 NULL NULL DATE2 NULL
2 NULL B1 DATE2 B2
3 NULL NULL DATE3 NULL
4 NULL NULL DATE4 NULL
5 NULL NULL DATE5 NULL
5 NULL C1 DATE5 C1
```
I want to show only:
```
ID ALLEG1 ALLEG2 DATE CONCATENATED
3 NULL NULL DATE3 NULL
4 NULL NULL DATE4 NULL
```
How do I get only the distinct cases where there is only a single all-null row, without including the all-null rows for IDs 1, 2 and 5, whose IDs also have other, non-null rows?
|
```
SELECT
ID,
MAX(ALLEG1) AS ALLEG1,
MAX(ALLEG2) AS ALLEG2,
MAX([DATE]) AS [DATE],
MAX(CONCATENATED) AS CONCATENATED
FROM
@TBL
GROUP BY
ID
HAVING
SUM(CASE WHEN ALLEG1 IS NULL THEN 0 ELSE 1 END + CASE WHEN ALLEG2 IS NULL THEN 0 ELSE 1 END) = 0
```
|
```
WITH idswithonerow AS
(
    SELECT ID FROM
    (
        SELECT ID,
               COUNT(*) OVER (PARTITION BY ID) AS CN
        FROM tablename
    ) as tmp
    WHERE CN = 1
)
select ID, ALLEG1, ALLEG2, [DATE], CONCATENATED
FROM tablename
WHERE ALLEG1 is null and ALLEG2 is null
AND ID IN (SELECT ID FROM idswithonerow)
```
|
Showing DISTINCT cases where NULL
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to find the most efficient way to concatenate text in PL/SQL. My system scans five variables, and does something along these lines:
```
return_value := '';
IF variable_a is null THEN
    return_value := 'Error: Variable A null';
END IF;
IF variable_b is null THEN
    IF return_value = '' THEN
        return_value := 'Error: Variable B null';
    ELSE
        return_value := return_value || ', Variable B null';
    END IF;
END IF;
IF variable_c is null THEN
    IF return_value = '' THEN
        return_value := 'Error: Variable C null';
    ELSE
        return_value := return_value || ', Variable C null';
    END IF;
END IF;
-- etc.
```
Hopefully the logic of what I'm trying to do is clear - determine if any of the five variables are null and concatenate the text of each error message to the return value, so by the end of the bunch of if/else statements the return value is either null (good) or contains an error call (bad).
Is there a better way to do this though? It seems rather lengthy and I feel, as a PL/SQL newbie, I'm missing some obvious way of doing this in a much more efficient, shorter manner.
|
There are plenty of ways to do this I'm sure. One possibility is to concatenate all the errors in the same way, as 'Variable ? null, ', then if necessary prefix with 'Error: ' and remove the redundant comma and space at the end.
```
return_value := '';
IF variable_a is null THEN
return_value := 'Variable A null, ';
END IF;
IF variable_b is null THEN
return_value := return_value||'Variable B null, ';
END IF;
IF variable_c is null THEN
return_value := return_value||'Variable C null, ';
END IF;
IF variable_d is null THEN
return_value := return_value||'Variable D null, ';
END IF;
IF variable_e is null THEN
return_value := return_value||'Variable E null, ';
END IF;
IF return_value IS NOT NULL THEN  -- in Oracle '' is NULL, so test with IS NOT NULL
   return_value := 'Error: '||return_value;
   return_value := RTRIM(return_value, ', ');
END IF;
```
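The same build-everything-then-trim pattern, sketched in Python purely for illustration (the variable names are hypothetical): append every error message uniformly, then add the prefix once at the end.

```python
# Hypothetical inputs: A and C are "missing", B is present.
variable_a, variable_b, variable_c = None, "x", None

# Collect one uniform message per missing variable.
parts = []
for label, value in [("A", variable_a), ("B", variable_b), ("C", variable_c)]:
    if value is None:
        parts.append(f"Variable {label} null")

# Prefix once; an empty list means no error at all.
return_value = ("Error: " + ", ".join(parts)) if parts else ""
```

This mirrors the PL/SQL above: no per-variable special-casing, and the separator handling happens in exactly one place.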
|
You can define a function that simulate ternary operator, this makes more readible your code in my opinion.
The final presentation of error obviously can be improved.
```
declare
return_value varchar2(4001);
variable_a varchar2(1) := null;
variable_b varchar2(1) := 'B';
variable_c varchar2(1) := 'C';
function iif(predicate boolean, if_true varchar2, if_false varchar2 := '') return varchar2 as
begin
if(predicate) then
return if_true;
else
return if_false;
end if;
end;
begin
return_value := iif(variable_a is null, ', Variable A null')
|| iif(variable_b is null, ', Variable B null')
|| iif(variable_c is null, ', Variable C null');
return_value := iif(return_value is not null, 'Error: ' || substr(return_value, 2, length(return_value)));
dbms_output.put_line('[' || return_value || ']');
end;
```
|
Concatenating text in PL/SQL in a more concise way
|
[
"",
"sql",
"oracle",
"plsql",
"concatenation",
""
] |
I have a table "Store" with the attributes "name" of type String (indexes checked for sort query and search).
I want to execute a SQL like query to find all stores with name beginning with a substring.
So I tried this predicate :
```
NSPredicate *pred = [NSPredicate predicateWithFormat:@"ANY name BEGINSWITH %@",substring];
CKQuery *query = [[CKQuery alloc]initWithRecordType:@"Store" predicate:pred];
CKQueryOperation *queryOperation = [[CKQueryOperation alloc] initWithQuery:query];
queryOperation.desiredKeys = @[@"name",@"category"];
NSMutableArray *results = [[NSMutableArray alloc] init];
queryOperation.recordFetchedBlock = ^(CKRecord *record) {
[results addObject:record];
};
queryOperation.queryCompletionBlock = ^(CKQueryCursor *cursor, NSError *error) {
//my own code
}
[publicDatabase addOperation:queryOperation];
```
I have this error :
```
<CKError 0x1887a540: "Invalid Arguments" (12/1009); "Field value type mismatch in query predicate for field 'name'"> for query : <CKQuery: 0x18882650; recordType=Store, predicate=ANY name BEGINSWITH "mysubstring">
```
|
My guess is that you need to get rid of the "ANY".
> The ANY and SOME aggregate operators may be combined with the IN and
> CONTAINS operators to perform list membership tests.
So ANY is for lists (Arrays). Since 'name' is a String, it is not a list, hence the mismatch error: "Field value type mismatch in query predicate for field 'name'"
|
Wild guess...try:
```
[NSPredicate predicateWithFormat:@"ANY {'name','category'} BEGINSWITH %@",substring];
```
That may be an incorrect reversal of key/value but it might work.
|
NSPredicate and BEGINSWITH with CloudKit : Field value type mismatch
|
[
"",
"ios",
"sql",
"nspredicate",
"cloudkit",
""
] |
I have a table which has `transaction_id` as the primary key and also contains `customer_id` which is a foreign key.
Now there is a column `type` which has two values: `'Card'` and `'Cash'`.
Now some of the customers have used both the methods for payment. I want to add a new column and classify the customers as "Only card" "Only cash" and "Both".
```
Transaction id Customer id Type
1 100 Card
2 101 Cash
3 102 Card
4 103 Cash
5 101 Card
```
So in this table I want a new column 'Type of payment' which classifies customer 101 as 'Both', since he has used both methods of payment.
|
Rather than adding a column to the table, if what you want to do is analyze the payment methods, then doing something like this might be better:
```
SELECT DISTINCT Table1.[Customer ID], T1.*
FROM Table1
CROSS APPLY (SELECT SUM(CASE WHEN [Type] = 'Cash' THEN 1 ELSE 0 END) AS Cash,
SUM(CASE WHEN [Type] = 'Card' THEN 1 ELSE 0 END) AS Card
FROM Table1 T WHERE T.[Customer ID] = Table1.[Customer ID]) T1
```
Gives you results like this:
```
CUSTOMER ID CASH CARD
100 0 1
101 1 1
102 0 1
103 1 0
```
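A runnable sketch of the per-customer counting idea, adapted to SQLite via Python's `sqlite3` (the `CROSS APPLY` is replaced by a plain `GROUP BY`, since SQLite has no `APPLY`; data is taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (transaction_id INT, customer_id INT, type TEXT)")
conn.executemany("INSERT INTO txn VALUES (?,?,?)", [
    (1, 100, 'Card'), (2, 101, 'Cash'), (3, 102, 'Card'),
    (4, 103, 'Cash'), (5, 101, 'Card'),
])
# Count each payment type per customer with conditional aggregation.
rows = conn.execute("""
    SELECT customer_id,
           SUM(CASE WHEN type = 'Cash' THEN 1 ELSE 0 END) AS cash,
           SUM(CASE WHEN type = 'Card' THEN 1 ELSE 0 END) AS card
    FROM txn
    GROUP BY customer_id
    ORDER BY customer_id
""").fetchall()
```

The result matches the table above: customer 101 has one of each.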
|
You can use window functions:
```
select t.*,
(case when min(type) over (partition by customerid) = max(type) over (partition by customerid)
then 'Only ' + min(type) over (partition by customerid)
else 'both'
end)
from transactions t;
```
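The min/max comparison at the heart of this answer can be checked quickly in SQLite via Python's `sqlite3`; this sketch expresses the same min-equals-max logic with `GROUP BY` instead of window functions (one classification row per customer rather than one per transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (transaction_id INT, customer_id INT, type TEXT)")
conn.executemany("INSERT INTO txn VALUES (?,?,?)", [
    (1, 100, 'Card'), (2, 101, 'Cash'), (3, 102, 'Card'),
    (4, 103, 'Cash'), (5, 101, 'Card'),
])
# If the smallest and largest type per customer agree, only one type was used.
rows = conn.execute("""
    SELECT customer_id,
           CASE WHEN MIN(type) = MAX(type)
                THEN 'Only ' || MIN(type)
                ELSE 'Both' END AS payment
    FROM txn
    GROUP BY customer_id
    ORDER BY customer_id
""").fetchall()
```

Customer 101, who used both methods, is the only one classified as `'Both'`.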
|
SQL: Add a column and classify into categories
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I am trying to write a case statement inside the IN clause.
The Channel column contains three values: ACT, REN, REP. I want to select rows for more than one value, but my query returns rows for only one value.
```
select * from tbl
WHERE tbl.Channel IN (select
case when @ACT =1 then 'ACT'
when @REN =1 then 'REN'
when @REP =1 then 'REP' END)
```
What changes should I be doing here ?
|
This will work fine. [SQL Fiddle](http://www.sqlfiddle.com/#!6/7f5b1/1)
The change is splitting the single `CASE` into three separate `CASE` expressions (and dropping the `select`), so the input to the `IN` clause is a list of three values rather than a single value.
The implicit `else null` if the variable is not 1 does not cause a problem in an `in` clause.
```
SELECT *
FROM tbl
WHERE tbl.channel IN ( CASE
WHEN @ACT = 1 THEN 'ACT'
END, CASE
WHEN @REN = 1 THEN 'REN'
END, CASE
WHEN @REP = 1 THEN 'REP'
END )
```
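A runnable sketch of the CASE-list-in-`IN` technique, using SQLite through Python's `sqlite3` with the three flags passed as named parameters (sample data is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INT, channel TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?,?)",
                 [(1, 'ACT'), (2, 'REN'), (3, 'REP'), (4, 'ACT')])
# REN is switched off; its CASE yields NULL, which never matches in IN.
act, ren, rep = 1, 0, 1
rows = conn.execute("""
    SELECT id, channel FROM tbl
    WHERE channel IN (CASE WHEN :act = 1 THEN 'ACT' END,
                      CASE WHEN :ren = 1 THEN 'REN' END,
                      CASE WHEN :rep = 1 THEN 'REP' END)
    ORDER BY id
""", {"act": act, "ren": ren, "rep": rep}).fetchall()
```

Only the ACT and REP rows come back, showing that the implicit `else null` is harmless inside `IN`.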
|
Don't end your case. You are returning 3 columns. You only want one.
```
select * from tbl
WHERE tbl.Channel = case when @ACT =1 then 'ACT'
when @REN =1 then 'REN'
when @REP =1 then 'REP'
else NULL END
```
|
Case Statement in 'IN' clause in SQL
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |

Hi,
I have a table "tbl\_address\_replacement"; attached is a screenshot of its data.
My input string is @input = 'Auriga Building 4'.
I want to replace words in the input string by matching them against the "word\_contains" column of the table above, replacing each match with the corresponding "replace\_word" value.
E.g. desired output: Auriga Bldg. 4
Thank you!
|
I wrote something like this a little while ago whilst doing a system migration.
Here's the applicable code:
```
DECLARE @rowcount int;
-- Repeat until there are no iterations left (lazy recursive method)
SET @rowcount = 937;
WHILE (@rowcount > 0)
BEGIN
UPDATE addresses
SET address_line_1 = Replace(x.address_line_1, ' ' + y.word_contains + ' ', ' ' + y.replace_word + ' ')
FROM addresses As x
INNER
JOIN tbl_address_replacement As y
ON x.address_line_1 LIKE '% ' + y.word_contains + ' %'
;
SET @rowcount = @@RowCount;
END
;
```
I found this to be the most efficient method as it reduces the number of iterations required.
Basically it keeps looping until the update statement makes no more changes!
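The loop-until-stable idea can be sketched outside SQL as well; here is a Python illustration (the `replacements` dict is a hypothetical stand-in for `tbl_address_replacement`):

```python
# Stand-in for the replacement table; values must not themselves be keys,
# or the loop would never stabilise.
replacements = {"Building": "Bldg.", "Street": "St."}

def normalize(address: str) -> str:
    changed = True
    while changed:  # repeat full passes until a pass replaces nothing, like the WHILE loop above
        changed = False
        tokens = address.split(" ")
        for i, tok in enumerate(tokens):
            if tok in replacements:
                tokens[i] = replacements[tok]
                changed = True
        address = " ".join(tokens)
    return address

result = normalize("Auriga Building 4")
```

The first pass rewrites "Building", the second pass finds nothing to change and the loop exits.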
|
This will be helpful to you.
**EXAMPLE : SELECT dbo.[fnSplitString1] ('Auriga Building 4')**
```
CREATE FUNCTION [dbo].[fnSplitString1]
(
@InputString NVARCHAR(MAX)
)
RETURNS VARCHAR(50)
AS
BEGIN
DECLARE @delimiter CHAR(1)
DECLARE @start INT, @end INT
DECLARE @Result NVARCHAR(MAX)
DECLARE @Temp AS NVARCHAR(MAX)
SET @delimiter = ' '
SET @Result = ''
---SET @InputString = 'Auriga Building 4'
SELECT @start = 1, @end = CHARINDEX(@delimiter, @InputString)
WHILE @start < LEN(@InputString) + 1
BEGIN
IF @end = 0
SET @end = LEN(@InputString) + 1
            SET @Temp = (SELECT TOP 1 replace_word
                         FROM tbl_address_replacement
                         WHERE word_contains = SUBSTRING(@InputString, @start, @end - @start))
IF (@Temp IS NULL)
SET @Result = @Result + SUBSTRING(@InputString, @start, @end - @start) + ' '
ELSE
SET @Result = @Result + @Temp + ' '
SET @start = @end + 1
SET @end = CHARINDEX(@delimiter, @InputString, @start)
END
RETURN @Result
END
```
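The token-by-token walk this function performs with `CHARINDEX`/`SUBSTRING` amounts to a single pass with one lookup per space-delimited token; a Python sketch of the same idea (the `lookup` dict stands in for `tbl_address_replacement`):

```python
def replace_tokens(input_string: str, lookup: dict) -> str:
    # One pass over space-delimited tokens: each token is either replaced
    # from the lookup table or kept as-is, exactly like the T-SQL loop.
    return " ".join(lookup.get(tok, tok) for tok in input_string.split(" "))

lookup = {"Building": "Bldg."}   # hypothetical contents of the table
result = replace_tokens("Auriga Building 4", lookup)
```

For the example input this yields the desired "Auriga Bldg. 4".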
|
Function to replace words in a inputstring based on values in a table
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
"sql-server-2012",
""
] |
I have two tables one parent table
```
ID Name
1 Sam
2 Ricky
```
Then I have a child table with a one-to-many relationship; for example, the record Sam has multiple addresses.
Like (Child Table)
```
ID Parent_ID Address
1 1 Newyork
2 1 Chicago
```
Now what I want to do is write a query that gets data from both tables, like:
```
ID Name Address1 Address2
1 Sam Newyork Chicago
```
I know the maximum number of addresses present. Some records have only one address; in that case Address2 will be null or empty.
Thanks for the help!
|
Using `INNER JOIN` and `PIVOT` you can get the result. Try this:
```
CREATE TABLE #par
(
ID INT,
Name VARCHAR(100)
)
INSERT #par
VALUES (1,'Sam'),
(2,'Ricky')
CREATE TABLE #chil
(
ID INT,
Parent_ID INT,
Addresss VARCHAR(100)
)
INSERT #chil
VALUES( 1,1,'Newyork'),
(2,1,'Chicago')
DECLARE @col VARCHAR(MAX),
        @col1 VARCHAR(MAX),
        @sql NVARCHAR(MAX)
SET @col=(SELECT ',[' + Cast(Addresss AS VARCHAR(100)) + ']'
FROM #chil
FOR XML PATH(''))
SELECT @col = RIGHT(@col, Len(@col) - 1)
SET @col1=(SELECT ',[' + Cast(Addresss AS VARCHAR(100))
+ '] as Address'
+ CONVERT(VARCHAR(50), Row_number() OVER (ORDER BY ID))
FROM #chil
FOR XML PATH(''))
SELECT @col1 = RIGHT(@col1, Len(@col1) - 1)
SET @sql= 'SELECT id,name,' + @col1
+ '
FROM (SELECT a.id,
a.name,
b.Addresss Addresss
FROM #par a
INNER JOIN #chil b
ON a.id = b.Parent_ID) p
PIVOT (Max(Addresss)
FOR Addresss IN(' + @col + ')) AS piv '
EXEC Sp_executesql
@sql
```
**OUTPUT**
```
id name Address1 Address2
1 Sam Newyork Chicago
```
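When the maximum number of addresses is known (two, per the question), the dynamic `PIVOT` can be sketched as plain conditional aggregation; a runnable SQLite illustration via Python's `sqlite3` (the correlated-count subquery stands in for `ROW_NUMBER` so it runs on older SQLite versions too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (id INT, name TEXT)")
conn.execute("CREATE TABLE child (id INT, parent_id INT, address TEXT)")
conn.executemany("INSERT INTO parent VALUES (?,?)", [(1, 'Sam'), (2, 'Ricky')])
conn.executemany("INSERT INTO child VALUES (?,?,?)",
                 [(1, 1, 'Newyork'), (2, 1, 'Chicago')])
# rn numbers each parent's addresses 1, 2, ...; the CASEs pivot them into columns.
rows = conn.execute("""
    SELECT p.id, p.name,
           MAX(CASE WHEN x.rn = 1 THEN x.address END) AS address1,
           MAX(CASE WHEN x.rn = 2 THEN x.address END) AS address2
    FROM parent p
    LEFT JOIN (
        SELECT c.parent_id, c.address,
               (SELECT COUNT(*) FROM child c2
                WHERE c2.parent_id = c.parent_id AND c2.id <= c.id) AS rn
        FROM child c
    ) x ON x.parent_id = p.id
    GROUP BY p.id, p.name
    ORDER BY p.id
""").fetchall()
```

Sam gets both addresses side by side; Ricky, who has none, gets NULLs, as the question requires.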
|
should be something like
```
select * from parent inner join child on child.parent_id = parent.id
```
but that is just a simple join. Maybe I missed your question?
|
Joint Parent and child table in a sql statement to get the records of the both the table
|
[
"",
"mysql",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to find a way to add a count to the output of my query:
```
SELECT *
FROM
(SELECT
id,
'Event Location' AS name,
venueName AS snippet,
venueLatLng AS coordinates,
(3959
* acos(cos(radians('xx.xxxxxx'))
* cos(radians(SUBSTRING(venueLatLng, 1, CHARINDEX(',', venueLatLng)-1)))
* cos(radians(SUBSTRING(venueLatLng, CHARINDEX(',', venueLatLng) + 1, 1000))
- radians('-xx.xxxxxxx'))
+ sin(radians('xx.xxxxxx'))
* sin(radians(SUBSTRING(venueLatLng, 1, CHARINDEX(',', venueLatLng)-1))))) AS distance
FROM marker) TMP
WHERE
distance < 30
ORDER BY
distance;
```
The way I tried doing it was:
```
SELECT *
FROM (
SELECT id,
'Event Location' AS name,
venueName AS snippet,
venueLatLng AS coordinates,
COUNT(*) AS rCount,
(3959
* acos(cos(radians('xx.xxxxxx'))
* cos(radians(SUBSTRING(venueLatLng, 1, CHARINDEX(',', venueLatLng)-1)))
* cos(radians(SUBSTRING(venueLatLng, CHARINDEX(',', venueLatLng) + 1, 1000))
- radians('-xx.xxxxxxx'))
+ sin(radians('xx.xxxxxx'))
* sin(radians(SUBSTRING(venueLatLng, 1, CHARINDEX(',', venueLatLng)-1))))) AS distance
FROM marker) TMP
WHERE distance < 30
ORDER BY distance;
```
The error is this:
> Column 'marker.id' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
How can I go about doing this correctly?
|
Use the analytic version of `COUNT`, and make sure to put it in your *outer* query so it will respect the `WHERE distance < 30`.
```
SELECT
TMP.*,
COUNT(*) OVER () AS RCount
FROM (
SELECT id,
'Event Location' AS name,
venueName AS snippet,
venueLatLng AS coordinates,
(3959
* acos(cos(radians('xx.xxxxxx'))
* cos(radians(SUBSTRING(venueLatLng, 1, CHARINDEX(',', venueLatLng)-1)))
* cos(radians(SUBSTRING(venueLatLng, CHARINDEX(',', venueLatLng) + 1, 1000))
- radians('-xx.xxxxxxx'))
+ sin(radians('xx.xxxxxx'))
* sin(radians(SUBSTRING(venueLatLng, 1, CHARINDEX(',', venueLatLng)-1))))) AS distance
FROM marker) TMP
WHERE distance < 30
ORDER BY distance;
```
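A runnable illustration of the analytic count using SQLite through Python's `sqlite3` (window functions require SQLite 3.25+, which ships with recent Python builds); the haversine math is replaced by a precomputed `distance` column to keep the sketch short:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE marker (id INT, distance REAL)")
conn.executemany("INSERT INTO marker VALUES (?,?)",
                 [(1, 5.0), (2, 40.0), (3, 12.0), (4, 29.0)])
# COUNT(*) OVER () is evaluated after WHERE, so it counts only the filtered rows.
rows = conn.execute("""
    SELECT id, distance, COUNT(*) OVER () AS rcount
    FROM marker
    WHERE distance < 30
    ORDER BY distance
""").fetchall()
```

Every surviving row carries the same filtered total (3 here); row 2, at distance 40, is excluded from both the result and the count.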
|
Count(\*) is going to make your query an aggregate and so, unless you use a group by, the result will be a single row. The error message is saying that you can't use the other fields and still generate a single row.
|
Adding count(*) to SQL Server query
|
[
"",
"sql",
"sql-server",
"database",
"record-count",
""
] |
I want to select from a table, but the problem is, I have a factor column that indicates how many times each row must be selected (this is a sample table just for demonstration):
```
ID Factor Count
-------------------------------
1 1 235
2 2 345
3 2 214
4 3 95
5 1 135
6 1 750
```
Query:
```
select top 6
[id], [count]
from
table
order by
id --somewhere in here I must take factor into consideration
```
The result must be:
```
ID Count
---------------------
1 235
2 345
2 345
3 214
3 214
4          95
```
|
This should be enough:
```
SELECT A.*
FROM dbo.YourTable A
INNER JOIN (SELECT *
FROM master.dbo.spt_values
WHERE type = 'P') B
ON A.Factor >= B.number+1
```
[**Here is a sqlfiddle**](http://sqlfiddle.com/#!3/0870d/1) with a demo of it.
And the results are:
```
 ID | Factor | Count
----+--------+------
  1 |      1 |   235
  2 |      2 |   345
  2 |      2 |   345
  3 |      2 |   214
  3 |      2 |   214
  4 |      3 |    95
  4 |      3 |    95
  4 |      3 |    95
  5 |      1 |   135
  6 |      1 |   750
```
If the `factor` column can be greater than 2048, then you can use a numbers table.
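On engines without `spt_values`, a recursive-CTE numbers table works the same way; a runnable SQLite sketch via Python's `sqlite3`, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, factor INT, cnt INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    (1, 1, 235), (2, 2, 345), (3, 2, 214),
    (4, 3, 95), (5, 1, 135), (6, 1, 750),
])
# Join each row against the numbers 1..factor to repeat it factor times,
# then take the first 6 rows as in the question's TOP 6.
rows = conn.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM numbers WHERE n < 10
    )
    SELECT t.id, t.cnt
    FROM t JOIN numbers ON numbers.n <= t.factor
    ORDER BY t.id, numbers.n
    LIMIT 6
""").fetchall()
```

The first six rows repeat ids 2 and 3 twice each, matching the expected result.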
|
If you know what the 'max' factor is.
```
SELECT [id],[count]
FROM table
UNION ALL
SELECT [id],[count]
FROM table
WHERE factor > 1
UNION ALL
SELECT [id],[count]
FROM table
WHERE factor > 2
.
.
.
order by [id]
```
There's probably a way to do this with a recursive CTE. Don't know how much effort you want to put into it.
|
SQL Server: selecting a row multiple times by the given factor
|
[
"",
"sql",
"sql-server",
""
] |
I have duplicate rows in my table and I want to delete duplicates in the most efficient way since the table is big. After some research, I have come up with this query:
```
WITH TempEmp AS
(
SELECT name, ROW_NUMBER() OVER(PARTITION by name, address, zipcode ORDER BY name) AS duplicateRecCount
FROM mytable
)
-- Now Delete Duplicate Records
DELETE FROM TempEmp
WHERE duplicateRecCount > 1;
```
But it only works in SQL, not in Netezza. It would seem that it does not like the `DELETE` after the `WITH` clause?
|
I like @erwin-brandstetter 's solution, but wanted to show a solution with the `USING` keyword:
```
DELETE FROM table_with_dups T1
USING table_with_dups T2
WHERE T1.ctid < T2.ctid -- delete the "older" ones
AND T1.name = T2.name -- list columns that define duplicates
AND T1.address = T2.address
AND T1.zipcode = T2.zipcode;
```
If you want to review the records before deleting them, then simply replace `DELETE` with `SELECT *` and `USING` with a comma `,`, i.e.
```
SELECT * FROM table_with_dups T1
, table_with_dups T2
WHERE T1.ctid < T2.ctid -- select the "older" ones
AND T1.name = T2.name -- list columns that define duplicates
AND T1.address = T2.address
AND T1.zipcode = T2.zipcode;
```
Update: I tested some of the different solutions here for speed. If you don't expect many duplicates, then this solution performs much better than the ones that have a `NOT IN (...)` clause as those generate a lot of rows in the subquery.
If you rewrite the query to use `IN (...)` then it performs similarly to the solution presented here, but the SQL code becomes much less concise.
Update 2: If you have `NULL` values in one of the key columns (which you really shouldn't IMO), then you can use `COALESCE()` in the condition for that column, e.g.
```
AND COALESCE(T1.col_with_nulls, '[NULL]') = COALESCE(T2.col_with_nulls, '[NULL]')
```
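SQLite's implicit `rowid` plays the same role as Postgres's `ctid`; here is a runnable sketch of the keep-the-oldest-copy pattern (using `NOT IN` with a grouped subquery, since SQLite lacks `DELETE ... USING`), with hypothetical sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (name TEXT, address TEXT, zipcode TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?,?,?)", [
    ("Ann", "1 Main St", "11111"),
    ("Ann", "1 Main St", "11111"),   # exact duplicate
    ("Bob", "2 Oak Ave", "22222"),
])
# Keep the lowest rowid per (name, address, zipcode) group; delete the rest.
conn.execute("""
    DELETE FROM mytable
    WHERE rowid NOT IN (SELECT MIN(rowid)
                        FROM mytable
                        GROUP BY name, address, zipcode)
""")
remaining = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
```

One of the two Ann rows is removed; Bob's single row is untouched.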
|
If you have no other unique identifier, you can use `ctid`:
```
delete from mytable
where exists (select 1
from mytable t2
where t2.name = mytable.name and
t2.address = mytable.address and
t2.zip = mytable.zip and
t2.ctid > mytable.ctid
);
```
It is a good idea to have a unique, auto-incrementing id in every table. Doing a `delete` like this is one important reason why.
|
How to delete duplicate rows without unique identifier
|
[
"",
"sql",
"postgresql",
""
] |
I have a table 'contacts' which is structured like this:
* id, name, surname, etc
And contacts\_related
* contact\_id, name, surname
One contact can have multiple related contacts (joined by contact\_id). What I would like to do is write a SELECT query on contacts so that, given a search input (example: 'John'), it returns all contacts that have 'John' in their name OR that have a related contact with 'John' in the name.
Any suggestions on how to do this, without making two distinct queries?
|
A possible solution is
```
SELECT
DISTINCT C.*
FROM
contacts C LEFT JOIN contacts_related R
ON C.id=R.contact_id
WHERE
C.name LIKE '%John%' OR
R.name LIKE '%John%'
```
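A runnable sketch of this query in SQLite via Python's `sqlite3`, with hypothetical sample data, showing why `DISTINCT` is needed (the LEFT JOIN could otherwise duplicate a contact with several matching relations):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INT, name TEXT)")
conn.execute("CREATE TABLE contacts_related (contact_id INT, name TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?,?)",
                 [(1, "John Smith"), (2, "Mary Jones"), (3, "Pete Brown")])
conn.executemany("INSERT INTO contacts_related VALUES (?,?)",
                 [(2, "John Jones"), (3, "Sue Brown")])
# Match either the contact's own name or any related contact's name.
rows = conn.execute("""
    SELECT DISTINCT c.id, c.name
    FROM contacts c
    LEFT JOIN contacts_related r ON c.id = r.contact_id
    WHERE c.name LIKE '%John%' OR r.name LIKE '%John%'
    ORDER BY c.id
""").fetchall()
```

Contact 1 matches directly; contact 2 matches through the related "John Jones"; contact 3 is excluded.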
|
You want those contacts that are either Johns themselves or have a John relation. To look up the relations use EXISTS or IN.
```
select *
from contacts
where name like '%John%'
or exists
(
select *
from contacts_related
where name like '%John%'
and contact_id = contacts.id
);
```
Or:
```
select *
from contacts
where name like '%John%'
or id in
(
select contact_id
from contacts_related
where name like '%John%'
);
```
|
Search in joined table with multiple records
|
[
"",
"mysql",
"sql",
"join",
""
] |