| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have the following code that gets the months between two dates using a CTE:
```
declare
@date_start DateTime,
@date_end DateTime
;WITH totalMonths AS
(
SELECT
DATEDIFF(MONTH, @date_start, @date_end) totalM
),
numbers AS
(
SELECT 1 num
UNION ALL
SELECT n.num + 1 num
FROM numbers n, totalMonths c
WHERE n.num <= c.totalM
)
SELECT
CONVERT(varchar(6), DATEADD(MONTH, numbers.num - 1, @date_start), 112)
FROM
numbers
OPTION (MAXRECURSION 0);
```
This works, but I do not understand how it works.
Especially this part:
```
numbers AS
(
SELECT 1 num
UNION ALL
SELECT n.num + 1 num
FROM numbers n, totalMonths c
WHERE n.num <= c.totalM
)
```
Thanks in advance, sorry for my English
|
This query is using two CTEs, one recursive, to generate a list of values from nothing (SQL isn't really good at doing this).
```
totalMonths AS (SELECT DATEDIFF(MONTH, @date_start, @date_end) totalM),
```
This part is basically a convoluted way of binding the result of the `DATEDIFF` to the name `totalM`. It could have been implemented as just a variable, if you are able to declare things:
```
DECLARE @totalM int = DATEDIFF(MONTH, @date_start, @date_end);
```
Then you would of course use `@totalM` to refer to the value.
```
numbers AS (
SELECT 1 num
UNION ALL
SELECT n.num+1 num FROM numbers n, totalMonths c
WHERE n.num<= c.totalM
)
```
This part is essentially a simple loop implemented using recursion to generate the numbers from 1 to `totalMonths`. The first `SELECT` specifies the first value (1) and the one after that specifies the next value, which is one greater than the previous one. Evaluating recursive CTEs has [somewhat special semantics](https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx) so it's a good idea to read up on them. Finally the `WHERE` specifies the stopping condition so that the recursion doesn't go on forever.
What all this does is generate an equivalent to a physical "numbers" table that has a single column containing the numbers from 1 onwards.
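The same seed / recursive-member / stop-condition shape works in any engine with recursive CTEs. As a minimal, runnable sketch (not the author's T-SQL), here it is in SQLite via Python's `sqlite3` module; note SQLite requires the `RECURSIVE` keyword:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Generate the numbers 1..5, mirroring the anchor / recursive-member /
# stop-condition structure explained above.
rows = con.execute("""
    WITH RECURSIVE numbers(num) AS (
        SELECT 1               -- anchor: the first value
        UNION ALL
        SELECT num + 1         -- recursive member: one greater each pass
        FROM numbers
        WHERE num < 5          -- stopping condition
    )
    SELECT num FROM numbers
""").fetchall()

print([n for (n,) in rows])  # -> [1, 2, 3, 4, 5]
```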
The `SELECT` at the very end uses the result of the `numbers` CTE to generate a bunch of dates.
Note that the `OPTION (MAXRECURSION 0)` at the end is also relevant to the recursive CTE. This disables the server-wide recursion depth limit so that the number generating query doesn't stop short if the range is very long, or a bothersome DBA set a very low default limit.
|
`totalMonths` query evaluates to a scalar result (single value) indicating the number of months that need to be generated. It probably makes more sense to just do this inline instead of using a named CTE.
`numbers` generates a sequence of rows with a column called `num` starting at `1` and ending at `totalM + 1` which was computed in the previous step. It is able to reference this value by means of a cross join. Since there's only one row it essentially just appends that one column to the table horizontally. The query is recursive so each pass adds a new row to the result by adding 1 to the last added row (really just the one column) until the value of the *previously added row* exceeds `totalM`. The first half of the `union` is the starting value; the second half refers to *itself* via `from numbers` and incrementally builds the result in a sort of loop.
The output is derived from the `numbers` input. One is subtracted from each `num` giving a range from `0` to `totalM` and that value is treated as the number of months to add to the starting date. The date value is converted to a `varchar` of length six which means the final two characters containing the day are truncated.
Suppose that `@date_start` is January 31, 2016 and `@date_end` is March 1, 2016. There is never any comparison of the actual date values so it doesn't matter that March 31 is generated in the sequence but also falls later than the passed `@date_end` value. Any dates in the respective start and end months can be chosen to generate identical sequences.
|
Need help to understand SQL query
|
[
"sql",
"sql-server",
"common-table-expression"
] |
I want to use the result of a `FULL OUTER JOIN` as a table to `FULL OUTER JOIN` on another table. What is the syntax that I should be using?
For example: T1, T2, T3 are my tables, each with columns id and name. I need something like:
```
T1 FULL OUTER JOIN T2 on T1.id = T2.id ==> Let this be named X
X FULL OUTER JOIN T3 on X.id = t3.id
```
I want this to be achieved so that in the final ON clause, I want the T3.id to match either `T1.id` or `T2.id`. Any alternative way to do this is also OK.
|
```
SELECT COALESCE(X.id,t3.id) AS id, *-- specific columns here instead of the *
FROM
(
SELECT COALESCE(t1.id,t2.id) AS id, * -- specific columns here instead of the *
FROM T1 FULL OUTER JOIN T2 on T1.id = T2.id
) AS X
FULL OUTER JOIN T3 on X.id = t3.id
```
|
Often, chains of full outer joins don't behave quite as expected. One replacement uses `left join`. This works best when one table has all the ids you need. But you can also construct that set:
```
from (select id from t1 union
select id from t2 union
select id from t3
) ids left join
t1
on ids.id = t1.id left join
t2
on ids.id = t2.id left join
t3
on ids.id = t3.id
```
Note that the first subquery can often be replaced by a table. If you have such a table, you can select the matching rows in the `where` clause:
```
from ids left join
t1
on ids.id = t1.id left join
t2
on ids.id = t2.id left join
t3
on ids.id = t3.id
where t1.id is not null or t2.id is not null or t3.id is not null
```
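This union-of-ids pattern can be sketched end to end in SQLite (where the rewrite is genuinely needed, since older SQLite versions lack `FULL OUTER JOIN`); the table names and data below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1(id INT, name TEXT);
    CREATE TABLE t2(id INT, name TEXT);
    CREATE TABLE t3(id INT, name TEXT);
    INSERT INTO t1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO t2 VALUES (2, 'c'), (3, 'd');
    INSERT INTO t3 VALUES (3, 'e'), (4, 'f');
""")

# Build the full set of ids first, then left join each table onto it.
rows = con.execute("""
    SELECT ids.id, t1.name, t2.name, t3.name
    FROM (SELECT id FROM t1 UNION
          SELECT id FROM t2 UNION
          SELECT id FROM t3) ids
    LEFT JOIN t1 ON ids.id = t1.id
    LEFT JOIN t2 ON ids.id = t2.id
    LEFT JOIN t3 ON ids.id = t3.id
    ORDER BY ids.id
""").fetchall()

print(rows)
# -> [(1, 'a', None, None), (2, 'b', 'c', None),
#     (3, None, 'd', 'e'), (4, None, None, 'f')]
```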
|
Multiple Full Outer Joins
|
[
"sql",
"impala"
] |
I have two tables, student and school.
**student**
```
stid | stname | schid | status
```
**school**
```
schid | schname
```
Status can be many things for temporary students, but `NULL` for permanent students.
How do I list the names of schools which have no temporary students?
|
Using a conditional aggregate you can count the number of permanent students in each school.
If the total count for a school is the same as its conditional count, then the school does not have any temporary students.
Using `JOIN`
```
SELECT sc.schid,
sc.schname
FROM student s
JOIN school sc
ON s.schid = sc.schid
GROUP BY sc.schid,
sc.schname
HAVING Count( CASE WHEN status IS NULL THEN 1 END ) = Count(*)
```
Another way using `EXISTS`
```
SELECT sc.schid,
sc.schname
FROM school sc
WHERE EXISTS (SELECT 1
FROM student s
WHERE s.schid = sc.schid
HAVING Count( CASE WHEN status IS NULL THEN 1 END ) = Count(*))
```
|
You can use `not exists` to only select schools that do not have temporary students:
```
select * from school s
where not exists (
select 1 from student s2
where s2.schid = s.schid
and s2.status is not null
)
```
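The `not exists` approach is easy to check on a tiny made-up data set (names and values below are illustrative), here via SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE school(schid INT, schname TEXT);
    CREATE TABLE student(stid INT, stname TEXT, schid INT, status TEXT);
    INSERT INTO school VALUES (1, 'North'), (2, 'South');
    INSERT INTO student VALUES
        (1, 'Ann', 1, 'exchange'),  -- temporary: non-NULL status
        (2, 'Bob', 1, NULL),        -- permanent
        (3, 'Cid', 2, NULL);        -- permanent
""")

# Schools with no temporary (non-NULL status) students.
rows = con.execute("""
    SELECT s.schname FROM school s
    WHERE NOT EXISTS (
        SELECT 1 FROM student s2
        WHERE s2.schid = s.schid AND s2.status IS NOT NULL
    )
""").fetchall()

print(rows)  # -> [('South',)]
```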
|
SQL query using NULL
|
[
"sql",
"postgresql"
] |
I have a MySQL database with tables `countries` and `exchange_rates`:
```
mysql> SELECT * FROM countries;
+----------------+----------+----------------+
| name | currency | GDP |
+----------------+----------+----------------+
| Switzerland | CHF | 163000000000 |
| European Union | EUR | 13900000000000 |
| Singapore | SGD | 403000000000 |
| USA | USD | 17400000000000 |
+----------------+----------+----------------+
mysql> SELECT * FROM exchange_rates;
+----------+------+
| currency | rate |
+----------+------+
| EUR | 0.9 |
| SGD | 1.37 |
+----------+------+
```
I would like to have a joined table with additional column showing GDP in US$.
Currently I have this:
```
mysql> SELECT countries.name, GDP, countries.GDP/exchange_rates.rate AS 'GDP US$'
-> FROM countries, exchange_rates
-> WHERE exchange_rates.currency=countries.currency;
+----------------+----------------+----------------+
| name | GDP | GDP US$ |
+----------------+----------------+----------------+
| European Union | 13900000000000 | 15444444853582 |
| Singapore | 403000000000 | 294160582917 |
+----------------+----------------+----------------+
```
However, I would like to additionally show:
* GDP in local currency, even if exchange rate information is missing
* GDP for countries with local currency 'USD' in both columns
The desired output is:
```
+----------------+----------------+----------------+
| name | GDP | GDP US$ |
+----------------+----------------+----------------+
| European Union | 13900000000000 | 15444444853582 |
| Singapore | 403000000000 | 294160582917 |
| Switzerland | 163000000000 | |
| USA | 17400000000000 | 17400000000000 |
+----------------+----------------+----------------+
```
I will be grateful for the help.
|
Use a `left join` to include countries with missing exchange rates, and a case expression to use the GDP as-is for the USA (it is already in US$):
```
SELECT
c.name,
GDP,
CASE WHEN c.name = 'USA' THEN c.GDP ELSE c.GDP/er.rate END AS 'GDP US$'
FROM countries c
LEFT JOIN exchange_rates er ON er.currency = c.currency;
```
Also, I changed to proper joins and added aliases for the tables to shorten the query a bit.
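A runnable sketch of the same idea in SQLite (a variant keyed on the currency rather than the country name, matching the question's "local currency 'USD'" wording; data abbreviated):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE countries(name TEXT, currency TEXT, GDP REAL);
    CREATE TABLE exchange_rates(currency TEXT, rate REAL);
    INSERT INTO countries VALUES
        ('Switzerland', 'CHF', 163e9),
        ('Singapore',   'SGD', 403e9),
        ('USA',         'USD', 17.4e12);
    INSERT INTO exchange_rates VALUES ('SGD', 1.37);
""")

# LEFT JOIN keeps countries with no rate (GDP US$ becomes NULL);
# the CASE short-circuits USD so USA shows its own GDP.
rows = con.execute("""
    SELECT c.name,
           CASE WHEN c.currency = 'USD' THEN c.GDP
                ELSE c.GDP / er.rate END AS gdp_usd
    FROM countries c
    LEFT JOIN exchange_rates er ON er.currency = c.currency
""").fetchall()

usd = dict(rows)
print(usd['Switzerland'], usd['USA'])  # -> None 17400000000000.0
```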
|
In order to get #1, you just need to use `left join` instead of inner join:
```
SELECT countries.name, GDP, countries.GDP/exchange_rates.rate AS 'GDP US$'
FROM countries LEFT JOIN exchange_rates
ON exchange_rates.currency=countries.currency;
```
In order to get #2, just add to the exchange\_rate table a record for USD with rate 1. If you don't want it in the table, do it in the query:
```
SELECT countries.name, GDP, countries.GDP/full_exchange_rates.rate AS 'GDP US$'
FROM countries LEFT JOIN (
select currency, rate from exchange_rates
union
select 'USD' as currency, 1 as rate
) as full_exchange_rates
ON full_exchange_rates.currency=countries.currency;
```
|
MySQL Join with condition when matching certain value
|
[
"mysql",
"sql",
"join",
"left-join"
] |
Suppose there are the following rows
```
| Id | MachineName | WorkerName | MachineState |
|----------------------------------------------|
| 1 | Alpha | Young | RUNNING |
| 1 | Beta | | STOPPED |
| 1 | Gamma | Foo | READY |
| 1 | Zeta | Zatta | |
| 2 | Guu | Niim | RUNNING |
| 2 | Yuu | Jaam | STOPPED |
| 2 | Nuu | | READY |
| 2 | Faah | Siim | |
| 3 | Iem | | RUNNING |
| 3 | Nyt | Fish | READY |
| 3 | Qwe | Siim | |
```
We want to merge these rows according to following priority :
STOPPED > RUNNING > READY > (null or empty)
If a row has a value for the greatest priority, then the value from that row should be used (only if it is not null). If it is null, a value from any other row should be used. The rows should be grouped by Id.
The correct output for the above input is :
```
| Id | MachineName | WorkerName | MachineState |
|----------------------------------------------|
| 1 | Beta | Foo | STOPPED |
| 2 | Yuu | Jaam | STOPPED |
| 3 | Iem | Fish | RUNNING |
```
What would be a good sql query to accomplish this? I tried using joins, but it did not work out.
|
You can view this as a case of the group-wise maximum problem, provided you can obtain a suitable ordering over your `MachineState` column—e.g. by using a [`CASE`](http://www.postgresql.org/docs/current/static/functions-conditional.html#FUNCTIONS-CASE) expression:
```
SELECT a.Id,
COALESCE(a.MachineName, t.MachineName) MachineName,
COALESCE(a.WorkerName , t.WorkerName ) WorkerName,
a.MachineState
FROM myTable a JOIN (
SELECT Id,
MIN(MachineName) AS MachineName,
MIN(WorkerName ) AS WorkerName,
MAX(CASE MachineState
WHEN 'READY' THEN 1
WHEN 'RUNNING' THEN 2
WHEN 'STOPPED' THEN 3
END) AS MachineState
FROM myTable
GROUP BY Id
) t ON t.Id = a.Id AND t.MachineState = CASE a.MachineState
WHEN 'READY' THEN 1
WHEN 'RUNNING' THEN 2
WHEN 'STOPPED' THEN 3
END
```
See it on [sqlfiddle](http://sqlfiddle.com/#!15/3ca10/2/0):
```
| id | machinename | workername | machinestate |
|----|-------------|------------|--------------|
| 1 | Beta | Foo | STOPPED |
| 2 | Yuu | Jaam | STOPPED |
| 3 | Iem | Fish | RUNNING |
```
You could save yourself the pain of using `CASE` if `MachineState` was an [`ENUM`](http://www.postgresql.org/docs/current/static/datatype-enum.html) type column (defined in the appropriate order). It so happens in this case that a simple lexicographic ordering over the string value will yield the same result, but that's a coincidence on which you really shouldn't rely as it's bound to slip under the radar when someone tries to maintain this code in the future.
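The group-wise-maximum-by-rank technique translates directly to other engines; here is a pared-down sketch in SQLite (one id, made-up data) showing the rank `CASE`, the `MAX` per group, and the `COALESCE` fill-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE machines(id INT, name TEXT, worker TEXT, state TEXT);
    INSERT INTO machines VALUES
        (1, 'Alpha', 'Young', 'RUNNING'),
        (1, 'Beta',  NULL,    'STOPPED'),
        (1, 'Gamma', 'Foo',   'READY');
""")

rank = """CASE {col}
          WHEN 'READY'   THEN 1
          WHEN 'RUNNING' THEN 2
          WHEN 'STOPPED' THEN 3 END"""

# Pick, per id, the row whose state has the highest priority rank,
# filling its NULL worker from the other rows via MIN + COALESCE.
rows = con.execute(f"""
    SELECT a.id, a.name, COALESCE(a.worker, t.worker), a.state
    FROM machines a
    JOIN (SELECT id, MIN(worker) AS worker,
                 MAX({rank.format(col='state')}) AS r
          FROM machines GROUP BY id) t
      ON t.id = a.id AND t.r = {rank.format(col='a.state')}
""").fetchall()

print(rows)  # -> [(1, 'Beta', 'Foo', 'STOPPED')]
```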
|
This is a prioritization query. One method uses variables. Another uses `union all`; this works if the states are not repeated for a given id:
```
select t.*
from table t
where machinestate = 'STOPPED'
union all
select t.*
from table t
where machinestate = 'RUNNING' and
not exists (select 1 from table t2 where t2.id = t.id and t2.machinestate in ('STOPPED'))
union all
select t.*
from table t
where machinestate = 'READY' and
not exists (select 1 from table t2 where t2.id = t.id and t2.machinestate in ('STOPPED', 'RUNNING'));
```
|
Merging multiple rows according to an order
|
[
"sql",
"psql"
] |
I am moving from MS Access to SQL Server, yay! I know that SQL Server has a huge capacity of over 500k terabytes, but I have also been told by my boss that SQL Server will eventually not be able to handle the rate at which I am inserting into my tables.
I have a table with 64 columns and each day around ~20,000 rows are added (1.28 million cells). The majority (~70%) of the data types are strings of on average 16 characters long. The rest of the data types are short numbers or booleans.
I am tracking financial data, which is why there is so much. The assumption is that this data will need to be tracked into perpetuity, if not at least 3-5 years.
So at this rate, will SQL Server be able to handle my data? Will I have to do some special configuration to get it to work, or is this amount laughably minuscule? I feel like its enough but I just want to make sure before moving forward.
Thanks!
|
It has less to do with `sql server` and more to do with the box it's running on. How big is the hard drive? How much memory is in there? What kind of CPU is sitting on it? 20000 a day isn't so much, even with wide varchar(). But without good indexing, partitioning, and the memory, disk space and CPU to handle queries against it, your problem is more likely to be slow performing queries.
At any rate, assuming you are using `VARCHAR()` instead of `NVARCHAR()` a single character is a byte. You say they average 16, but is that the length of the string stored in the `VARCHAR()` or the max size of the `VARCHAR()`? It will make a difference.
Assuming that's the average string length of a field, then you can do `64x16` to understand the byte size of a record (not super dooper accurate because of the need for meta data, but close enough). That would be `1024` bytes or `1kb` per record.
After 5 years that would be `20000*365*5` which is `36,500,000kb` which is `36.5gb`. No biggie. Add indexing on there and metadata and all that, and maybe you'll be pushing `50gb` for this table.
My guess is that your average string length is less than 16 though and that the fields are just defined as `VARCHAR(16)`. `VARCHAR()` only stores as many bytes as the length of the string (plus 2 to define the length), so it's probably less than this estimate. If the table is defined with all `CHAR(16)` then the storage will always be 16 bytes.
Also, if you are storing unicode and using `NVARCHAR()` then double all the calculations, since SQL Server stores `NVARCHAR()` as UCS-2/UTF-16, which is 2 bytes per character.
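The answer's back-of-the-envelope arithmetic can be reproduced in a few lines (same rough assumptions: ~16 bytes per field, no row overhead):

```python
# Rough storage estimate from the answer's figures:
# 64 columns * ~16 bytes each ~= 1 KB per record, 20,000 records per day.
bytes_per_record = 64 * 16          # 1024 bytes, i.e. ~1 KB
records_per_day = 20_000
days = 365 * 5                      # five years

total_kb = records_per_day * days * bytes_per_record // 1024
print(total_kb)  # -> 36500000  (~36.5 GB before indexes and metadata)
```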
|
**Very** rough, back-of-the-envelope calculations:
* 16 bytes per field
* 64 fields per record
* 20,000 records per day
You're adding 20MB per day to the table. 7GB per year.
This is not a large amount of data. There are many people running multi-terabyte databases on SQL Server.
What's more important is the process by which you load the data into the table, your indexing (so that you can efficiently query the data), server configuration (I/O, memory, CPU) and how you're managing it all. Eventually, you may need Enterprise Edition to make use of additional memory, table partitioning, etc.
The short answer to your question: yes, SQL Server can handle this just fine, as long as you design it properly
|
How soon will SQL Server reach capacity at this rate?
|
[
"sql",
"sql-server",
"database"
] |
```
select isnull(column1,'')+','+isnull(column2,'')+','+isnull(column3,'') AS column4 from table
```
From the above query, I am getting what I need, which is really good. But the thing here is, if all the columns are `NULL`, I still get the commas that I used to separate the fields.
I want the result to be `NULL` when every field is `NULL`. Can anyone help me with this? Thank you!
|
You might pack the `+ ','` into the `ISNULL()`, and wrap the whole thing in `NULLIF()` so that a row where every field is `NULL` yields `NULL` instead of an empty string:
```
select nullif(isnull(column1+',','')+isnull(column2+',','')+isnull(column3,''),'') AS column4 from table
```
|
You can do this using `stuff()` like this:
```
select stuff((coalesce(',' + col1, '') +
coalesce(',' + col2, '') +
coalesce(',' + col3, '')
), 1, 1, '')
```
Other databases often have a function called `concat_ws()` that does this as well.
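For a portable sketch of the same comma logic: SQLite has neither `STUFF` nor `FOR XML`, but `',' || col` is `NULL` when `col` is `NULL` (like `col + ','` in T-SQL), and `SUBSTR(..., 2)` plays the role of `STUFF(..., 1, 1, '')`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(c1 TEXT, c2 TEXT, c3 TEXT);
    INSERT INTO t VALUES ('a', NULL, 'c'), (NULL, NULL, NULL);
""")

# ',' || col is NULL when col is NULL, so COALESCE makes it vanish;
# SUBSTR(..., 2) drops the leading comma, like STUFF(..., 1, 1, '');
# NULLIF turns an all-NULL row back into NULL instead of ''.
rows = con.execute("""
    SELECT NULLIF(SUBSTR(COALESCE(',' || c1, '') ||
                         COALESCE(',' || c2, '') ||
                         COALESCE(',' || c3, ''), 2), '')
    FROM t
""").fetchall()

print(rows)  # -> [('a,c',), (None,)]
```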
|
To NULL particular fields in retrieval time in sql
|
[
"sql",
"sql-server"
] |
I don't know exactly how to ask this, so here is an example.
Say I have 2 tables:
pages:
```
idpage title
0 first
1 second
2 third
```
reads:
```
idread idpage time
50 0 8:15
83 0 2:58
```
If I do `SELECT * FROM pages,reads WHERE pages.idpage=reads.idpage AND pages.idpage<2`
I will have something like that:
```
idpage title idread time
0 first 50 8:15
0 first 83 2:58
```
Where I would like that:
```
idpage title idread time
0 first 50 8:15
0 first 83 2:58
1 second 0 0:00
```
Thanks
|
Always use explicit `JOIN` syntax. *Never* use commas in the `FROM` clause.
What you need is a `LEFT JOIN`. And, the way you are expressing the query makes this much harder to figure out. So:
```
SELECT p.idpage, p.title,
COALESCE(idread, 0) as idread,
COALESCE(time, cast('0:00' as time)) as time
FROM pages p LEFT JOIN
reads r
ON p.idpage = r.idpage
WHERE p.idpage < 2;
```
Note that when using `LEFT JOIN`, conditions on the *first* table should go in the `WHERE` clause. Conditions on the *second* table go in the `ON` clause.
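The ON-versus-WHERE distinction is easy to demonstrate on a toy version of these tables (data made up); putting a second-table condition in `WHERE` silently turns the `LEFT JOIN` into an inner join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE pages(idpage INT, title TEXT);
    CREATE TABLE reads(idread INT, idpage INT, time TEXT);
    INSERT INTO pages VALUES (0, 'first'), (1, 'second');
    INSERT INTO reads VALUES (50, 0, '8:15');
""")

# Filtering the second table in ON keeps unmatched pages (NULL-extended)...
in_on = con.execute("""
    SELECT p.title, r.idread FROM pages p
    LEFT JOIN reads r ON p.idpage = r.idpage AND r.idread = 50
    ORDER BY p.idpage
""").fetchall()

# ...while the same filter in WHERE discards rows where r.idread is NULL.
in_where = con.execute("""
    SELECT p.title, r.idread FROM pages p
    LEFT JOIN reads r ON p.idpage = r.idpage
    WHERE r.idread = 50
    ORDER BY p.idpage
""").fetchall()

print(in_on)     # -> [('first', 50), ('second', None)]
print(in_where)  # -> [('first', 50)]
```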
|
You need a left join and CASE expression to complete the values when they are null, like this:
```
SELECT p.*,
case when r.idread is null then 0 else r.idread end as idread,
case when r.time is null then '0:00' else r.time end as time
FROM pages p
LEFT OUTER JOIN reads r
ON(p.idpage = r.idpage)
WHERE p.idpage < 2
```
Note that I've changed your query to explicit join syntax (LEFT OUTER JOIN) instead of your implicit syntax, which can easily lead to problems, especially when left joining.
|
Select row even if a condition is not true
|
[
"sql"
] |
How do I combine the calculation date columns all into one column? What's the SQL function to make this happen? The rest of the fields are distinct values based on the calculation date. I only need the distinct values associated with the dates.
[](https://i.stack.imgur.com/9SKqg.jpg)
***EDIT***
I tried the `ISNULL` and `COALESCE` functions and this is not what I'm looking for because it still brings back all the values for both of the dates. I only need the data as of the date for select accounts. I don't want the data for both dates on the same account.
I also tried the Select Distinct and it's not working for me.
|
You can use `COALESCE`, passing your two actual date columns (the column names below are placeholders):
```
SELECT COALESCE(Calculation_Date1, Calculation_Date2)
FROM tableName
```
|
Assuming only 1 of them will ever have a value, one option is to use `coalesce`:
```
select coalesce(date1, date2)
from yourtable
```
|
T-Sql Combining Multiple Columns into One Column
|
[
"sql",
"sql-server",
"t-sql"
] |
This is with respect to Oracle.
Input
```
CUSTID FROMDT ACTIVITY NEXTDATE
100000914 31/01/2015 14:23:51 Bet 3.999996
100000914 31/01/2015 14:29:07 Bet 3.999996
100000914 31/01/2015 14:32:59 Bet 2
100000914 31/01/2015 14:35:35 Bet 1.999998
100000914 31/01/2015 16:52:32 Settlement 3.999996
100000914 31/01/2015 16:54:39 Settlement 1.999998
100000914 31/01/2015 16:55:04 Settlement 2
100000914 31/01/2015 16:57:00 Settlement 3.999996
100000914 31/01/2015 16:57:10 Bet 3
100000914 31/01/2015 19:21:15 Settlement 3
```
Result
```
CUSTID ACTIVITY AMOUNT
100000914 Bet 11.99999
100000914 Settlement 11.99999
100000914 Bet 3
100000914 Settlement 3
```
The result should contain the sum of the amount for every activity change.
Thanks
|
```
SELECT CUSTID,
ACTIVITY,
total - LAG( total, 1, 0 ) OVER ( PARTITION BY CUSTID ORDER BY FROMDT ) AS total
FROM (
SELECT CUSTID,
FROMDT,
ACTIVITY,
SUM( NEXTDATE ) OVER ( PARTITION BY CUSTID ORDER BY FROMDT ) AS total,
CASE ACTIVITY
WHEN LEAD( ACTIVITY ) OVER ( PARTITION BY CUSTID ORDER BY FROMDT )
THEN 0
ELSE 1
END AS has_changed
FROM your_table
)
WHERE has_changed = 1;
```
**Outputs**:
```
CUSTID ACTIVITY TOTAL
--------- ---------- --------
100000914 Bet 11.99999
100000914 Settlement 11.99999
100000914 Bet 3
100000914 Settlement 3
```
|
```
select custid, activity, sum(amount)
from (select jg_dig_test.*,
(row_number() over (partition by custid order by fromdate) - row_number() over (partition by custid, activity order by fromdate)
) as grp
from jg_dig_test
) jg_dig_test
group by custid, grp, activity
ORDER BY CUSTID, MAX( FROMDaTe )
;
```
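The row_number-difference trick above (often called "gaps and islands") can be checked outside Oracle too; here is a small sketch with made-up data using SQLite's window functions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE activity_log(custid INT, fromdt TEXT, activity TEXT, amount REAL);
    INSERT INTO activity_log VALUES
        (1, '09:00', 'Bet', 2), (1, '09:10', 'Bet', 3),
        (1, '09:20', 'Settlement', 5), (1, '09:30', 'Bet', 4);
""")

# The difference of the two row numbers is constant within each run of
# consecutive rows sharing the same activity, so it identifies the "island".
rows = con.execute("""
    SELECT activity, SUM(amount)
    FROM (SELECT activity, amount, fromdt,
                 ROW_NUMBER() OVER (PARTITION BY custid ORDER BY fromdt)
               - ROW_NUMBER() OVER (PARTITION BY custid, activity ORDER BY fromdt)
                 AS grp
          FROM activity_log)
    GROUP BY grp, activity
    ORDER BY MIN(fromdt)
""").fetchall()

print(rows)  # -> [('Bet', 5.0), ('Settlement', 5.0), ('Bet', 4.0)]
```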
|
Aggregations on Lead & LAG in oracle
|
[
"sql",
"oracle",
"window-functions"
] |
I need to get the COUNT of each `id_prevadzka` IN 4 tables:
First I tried:
```
SELECT
p.id_prevadzka,
COUNT(pv.id_prevadzka) AS `vytoce_pocet`,
COUNT(pn.id_prevadzka) AS `navstevy_pocet`,
COUNT(pa.id_prevadzka) AS `akcie_pocet`,
COUNT(ps.id_prevadzka) AS `servis_pocet`
FROM shop_prevadzky p
LEFT JOIN shop_prevadzky_vytoce pv ON (pv.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_navstevy pn ON (pn.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_akcie pa ON (pa.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_servis ps ON (ps.id_prevadzka = p.id_prevadzka)
GROUP BY p.id_prevadzka
```
But this returned the same number for `vytoce_pocet`, `navstevy_pocet`, `akcie_pocet` and `servis_pocet` - namely the COUNT from `shop_prevadzky_vytoce`.
Then I tried ([as is answered here](https://stackoverflow.com/a/12789493/1631551)):
```
SELECT
p.*,
SUM(CASE WHEN pv.id_prevadzka IS NOT NULL THEN 1 ELSE 0 END) AS `vytoce_pocet`,
SUM(CASE WHEN pn.id_prevadzka IS NOT NULL THEN 1 ELSE 0 END) AS `navstevy_pocet`,
SUM(CASE WHEN pa.id_prevadzka IS NOT NULL THEN 1 ELSE 0 END) AS `akcie_pocet`,
SUM(CASE WHEN ps.id_prevadzka IS NOT NULL THEN 1 ELSE 0 END) AS `servis_pocet`
FROM shop_prevadzky p
LEFT JOIN shop_prevadzky_vytoce pv ON (pv.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_navstevy pn ON (pn.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_akcie pa ON (pa.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_servis ps ON (ps.id_prevadzka = p.id_prevadzka)
WHERE p.cis_status = 1
GROUP BY p.id_prevadzka
ORDER BY p.prevadzka_nazov
```
But it returned the same results as the first example.
What's wrong with my second query? Thanks.
**EDIT:**
To make it clearer, this could be test data in my tables:
*shop\_prevadzky:*
```
id_prevadzka
1
2
```
*shop\_prevadzky\_vytoce:*
```
id | id_prevadzka
1 | 1
2 | 1
3 | 1
4 | 1
5 | 2
```
*shop\_prevadzky\_navstevy:*
```
id | id_prevadzka
1 | 1
2 | 1
```
*shop\_prevadzky\_akcie:*
```
id | id_prevadzka
1 | 2
```
*shop\_prevadzky\_servis:*
```
id | id_prevadzka
```
And the query should return:
```
id_prevadzka | vytoce_pocet | navstevy_pocet | akcie_pocet | servis_pocet
1 4 2 0 0
2 1 0 1 0
```
|
Try `COUNT(DISTINCT ...)`, using your data sample:
**[SQL Fiddle Demo](http://sqlfiddle.com/#!9/9f63cd/3)**
```
SELECT
p.id_prevadzka,
COUNT(distinct pv.id) AS `vytoce_pocet`,
COUNT(distinct pn.id) AS `navstevy_pocet`,
COUNT(distinct pa.id) AS `akcie_pocet`,
COUNT(distinct ps.id) AS `servis_pocet`
FROM shop_prevadzky p
LEFT JOIN shop_prevadzky_vytoce pv ON (pv.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_navstevy pn ON (pn.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_akcie pa ON (pa.id_prevadzka = p.id_prevadzka)
LEFT JOIN shop_prevadzky_servis ps ON (ps.id_prevadzka = p.id_prevadzka)
GROUP BY p.id_prevadzka
```
**OUTPUT**
```
| id_prevadzka | vytoce_pocet | navstevy_pocet | akcie_pocet | servis_pocet |
|--------------|--------------|----------------|-------------|--------------|
| 1 | 4 | 2 | 0 | 0 |
| 2 | 1 | 0 | 1 | 0 |
```
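The fan-out that the `DISTINCT` fixes is easy to see on a reduced example (two child tables instead of four, made-up names), runnable in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE parent(pid INT);
    CREATE TABLE a(id INT, pid INT);
    CREATE TABLE b(id INT, pid INT);
    INSERT INTO parent VALUES (1), (2);
    INSERT INTO a VALUES (1, 1), (2, 1), (3, 1), (4, 1), (5, 2);
    INSERT INTO b VALUES (1, 1), (2, 1);
""")

# Plain COUNT over the join multiplies the per-table counts (4*2 = 8 for
# pid 1); COUNT(DISTINCT child.id) undoes the fan-out.
rows = con.execute("""
    SELECT p.pid, COUNT(a.id), COUNT(DISTINCT a.id), COUNT(DISTINCT b.id)
    FROM parent p
    LEFT JOIN a ON a.pid = p.pid
    LEFT JOIN b ON b.pid = p.pid
    GROUP BY p.pid ORDER BY p.pid
""").fetchall()

print(rows)  # -> [(1, 8, 4, 2), (2, 1, 1, 0)]
```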
|
That is because you are joining all the tables at once, which creates a Cartesian product:
```
shop_prevadzky x _vytoce x _navstevy x _akcie x _servis
```
You may want
```
SELECT
p.*,
(SELECT COUNT(id_prevadzka) FROM shop_prevadzky_vytoce s WHERE s.id_prevadzka = p.id_prevadzka) AS `vytoce_pocet`,
(SELECT COUNT(id_prevadzka) FROM shop_prevadzky_navstevy s WHERE s.id_prevadzka = p.id_prevadzka) AS `navstevy_pocet`,
(SELECT COUNT(id_prevadzka) FROM shop_prevadzky_akcie s WHERE s.id_prevadzka = p.id_prevadzka) AS `akcie_pocet`,
(SELECT COUNT(id_prevadzka) FROM shop_prevadzky_servis s WHERE s.id_prevadzka = p.id_prevadzka) AS `servis_pocet`
FROM shop_prevadzky p
```
You can also do the same with derived tables. Note that a `LEFT JOIN` subquery cannot reference the outer table, so each derived table must aggregate with `GROUP BY` instead of a correlated `WHERE`:
```
SELECT
 p.*,
 COALESCE(vytoce_count, 0) as vytoce_count,
 COALESCE(navstevy_count, 0) as navstevy_count,
 COALESCE(akcie_count, 0) as akcie_count,
 COALESCE(servis_count, 0) as servis_count
FROM shop_prevadzky p
LEFT JOIN (SELECT id_prevadzka, COUNT(*) vytoce_count
 FROM shop_prevadzky_vytoce
 GROUP BY id_prevadzka) AS vytoce
 ON p.id_prevadzka = vytoce.id_prevadzka
LEFT JOIN (SELECT id_prevadzka, COUNT(*) navstevy_count
 FROM shop_prevadzky_navstevy
 GROUP BY id_prevadzka) AS navstevy
 ON p.id_prevadzka = navstevy.id_prevadzka
LEFT JOIN (SELECT id_prevadzka, COUNT(*) akcie_count
 FROM shop_prevadzky_akcie
 GROUP BY id_prevadzka) AS akcie
 ON p.id_prevadzka = akcie.id_prevadzka
LEFT JOIN (SELECT id_prevadzka, COUNT(*) servis_count
 FROM shop_prevadzky_servis
 GROUP BY id_prevadzka) AS servis
 ON p.id_prevadzka = servis.id_prevadzka
```
|
SQL - How to use more COUNT on many tables?
|
[
"mysql",
"sql",
"left-join"
] |
I have two records on my table:
**Table:**
```
ID StartDate EndDate
1 2013-01-01 2016-01-01
2 2016-02-01 NULL
```
My query:
```
@DatePeriodFrom = 2016-01-01
@DatePeriodTo = 2016-01-01
select *
from tableabove ta
where (ta.StartDate >= @DatePeriodFrom and ta.EndDate >= @DatePeriodTo)
```
My problem here is that it returns no results. If I replace `and` with `or`, it returns both rows. I tried using `ISNULL`, but with no luck.
**EDIT**
What I want is to return the rows that match the given start and end dates, regardless of whether the row has a null end date.
On the example above, the 1st row should be returned.
On this example, the 2nd row should be returned:
```
@DatePeriodFrom = 2016-02-01
@DatePeriodTo = 2016-02-01
```
Any idea?
|
In case you need to find all intersections of periods:
```
where ta.StartDate <= @DatePeriodTo
and (ta.endDate IS NULL or ta.endDate >= @DatePeriodFrom)
```
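A small sketch of this overlap predicate against the question's two rows (table and column names are illustrative; ISO date strings compare correctly as text in SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE periods(id INT, start_date TEXT, end_date TEXT);
    INSERT INTO periods VALUES
        (1, '2013-01-01', '2016-01-01'),
        (2, '2016-02-01', NULL);
""")

def overlapping(date_from, date_to):
    # A period overlaps [date_from, date_to] when it starts no later than
    # the window end, and has not ended (NULL) or ends at/after the start.
    return con.execute("""
        SELECT id FROM periods
        WHERE start_date <= :to
          AND (end_date IS NULL OR end_date >= :frm)
    """, {"to": date_to, "frm": date_from}).fetchall()

print(overlapping('2016-01-01', '2016-01-01'))  # -> [(1,)]
print(overlapping('2016-02-01', '2016-02-01'))  # -> [(2,)]
```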
|
```
select *
from tableabove ta
where ta.StartDate <= @DatePeriodFrom and (ta.endDate is NULL or ta.EndDate >= @DatePeriodTo)
```
Output: [Table values](https://i.stack.imgur.com/SPcm0.png)
[](https://i.stack.imgur.com/gRyJW.png)
[](https://i.stack.imgur.com/mtLXJ.png)
|
sql conditional where clause
|
[
"sql"
] |
I've been looking online for a few days now for how to use STUFF in SQL Server. Most of the examples I see involve only two tables, while my query goes through three tables, and I just can't get it to work. Here is the query without the STUFF function, which gets me all the data I want:
```
select c.category_name,r.role_name
from categories as c
join role_categ as rc on c.category_id=rc.category_id
join roles as r on r.role_id=rc.role_id
where rc.c_read='1';
```

What I want is, for each category\_name, all the role\_names in one cell in a single row, e.g.:
```
BCM-Télécopieur-photocopieur Admin,Administation
```
Here is what I have with the STUFF function, but it doesn't work; it just gives me the same table as the other query:
```
select c.category_name,STUFF((
select ','+r.role_name
from roles as r
where rc.role_id=r.role_id
for xml path('')),1,1,'')
from role_categ as rc
join categories as c on c.category_id=rc.category_id
```
Any help would be appreciated.
|
Here's a version of what I came up with. @GiorgosBetsos was correct that the `JOIN` needs to be moved to the inner query. I'm not sure why he's still seeing duplicates, but the following query returns the data as expected:
```
-- Set up the data
DECLARE @roles TABLE (role_id INT, role_name VARCHAR(20))
DECLARE @role_categories TABLE (category_id INT, role_id INT, c_read BIT)
DECLARE @categories TABLE (category_id INT, category_name VARCHAR(20))
INSERT INTO @roles (role_id, role_name) VALUES (1, 'Admin'), (2, 'Administration'), (3, 'Tech')
INSERT INTO @categories (category_id, category_name) VALUES (1, 'Consultant'), (2, 'FTP'), (3, 'Logicals')
INSERT INTO @role_categories (category_id, role_id, c_read) VALUES (1, 1, 1), (1, 2, 1), (1, 3, 1), (2, 1, 1), (2, 3, 1), (3, 1, 1)
-- The query
SELECT
C.category_name,
STUFF((
SELECT ',' + R.role_name
FROM
@role_categories RC
INNER JOIN @roles R ON R.role_id = RC.role_id
WHERE
RC.category_id = C.category_id AND
RC.c_read = 1
FOR XML PATH('')), 1, 1, '')
FROM
@categories C
```
|
Try this:
```
SELECT DISTINCT c_out.category_name,
STUFF((SELECT ',' + r.role_name
FROM roles as r
INNER JOIN role_categ as rc ON rc.role_id=r.role_id
WHERE rc_out.category_id=rc.category_id
FOR XML PATH('')),1,1,'')
FROM role_categ AS rc_out
JOIN categories AS c_out ON c_out.category_id = rc_out.category_id
WHERE rc_out.c_read = '1'
```
You need to `JOIN` to `role_categ` table inside the subquery, so that you can correlate to `category_id`. Also, you have to use `DISTINCT` in the outer query in order to filter out duplicate records.
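As an aside, the `STUFF ... FOR XML PATH` idiom is T-SQL-specific; most other engines expose a string aggregate directly. A sketch of the same three-table roll-up using SQLite's `group_concat` (made-up data, abbreviated columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE categories(category_id INT, category_name TEXT);
    CREATE TABLE roles(role_id INT, role_name TEXT);
    CREATE TABLE role_categ(category_id INT, role_id INT, c_read INT);
    INSERT INTO categories VALUES (1, 'Consultant'), (2, 'FTP');
    INSERT INTO roles VALUES (1, 'Admin'), (2, 'Administration');
    INSERT INTO role_categ VALUES (1, 1, 1), (1, 2, 1), (2, 1, 1);
""")

# group_concat collapses each category's role names into one cell,
# so GROUP BY does the work that STUFF ... FOR XML does in T-SQL.
rows = con.execute("""
    SELECT c.category_name, GROUP_CONCAT(r.role_name, ',')
    FROM categories c
    JOIN role_categ rc ON rc.category_id = c.category_id AND rc.c_read = 1
    JOIN roles r ON r.role_id = rc.role_id
    GROUP BY c.category_name
    ORDER BY c.category_name
""").fetchall()

# e.g. [('Consultant', 'Admin,Administration'), ('FTP', 'Admin')]
# (the order of names inside each cell is not guaranteed by SQLite)
print(rows)
```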
|
SQL Server String Concat with Stuff
|
[
"sql",
"sql-server",
"string",
"t-sql",
"concatenation"
] |
Here is the table information:
Table name is Teaches,
```
+-----------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+-------+
| ID | varchar(5) | NO | PRI | NULL | |
| course_id | varchar(8) | NO | PRI | NULL | |
| sec_id | varchar(8) | NO | PRI | NULL | |
| semester | varchar(6) | NO | PRI | NULL | |
| year | decimal(4,0) | NO | PRI | NULL | |
+-----------+--------------+------+-----+---------+-------+
```
The requirement is to find which course appeared more than once in 2009(ID is the id of teachers)
Here is my query using `GROUP BY`:
```
select course_id
from teaches
where year= 2009
group by course_id
having count(id) >= 2;
```
How could I write this without using `GROUP BY`?
|
You may try this:
```
SELECT DISTINCT
T.course_id
FROM
teaches T
WHERE
T.course_id NOT IN (
SELECT
T1.course_id
FROM teaches AS T1 INNER JOIN teaches AS T2 ON T1.course_id = T2.course_id
AND T1.`year` = T2.`year`
AND T1.id <> T2.id
WHERE T1.`year` = 2009
);
```
---
**Test Schema And Data**:
```
DROP TABLE IF EXISTS `teaches`;
CREATE TABLE `teaches` (
`ID` varchar(5) CHARACTER SET utf8 DEFAULT NULL,
`course_id` varchar(8) CHARACTER SET utf8 DEFAULT NULL,
`sec_id` varchar(8) CHARACTER SET utf8 DEFAULT NULL,
`semester` varchar(6) CHARACTER SET utf8 DEFAULT NULL,
`year` decimal(4,0) DEFAULT NULL
);
INSERT INTO `teaches` VALUES ('66', '100', 'B', '11', '2009');
INSERT INTO `teaches` VALUES ('71', '100', 'A', '11', '2009');
INSERT INTO `teaches` VALUES ('64', '102', 'C', '12', '2010');
INSERT INTO `teaches` VALUES ('77', '102', 'B', '22', '2009');
```
**Expected Output:**
```
course_id
102
```
[**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!9/0ecee8/1/0)
|
For your homework, the SQL below can be used.
It follows your logic: an id existing more than once means the course appeared more than once.
```
select DISTINCT T1.course_id
from teaches T1
where T1.course_id not in (
select a.course_id
from teaches as a inner join teaches as b
on a.course_id = b.course_id and a.year = b.year and a.id <> b.id
where a.year= 2009 )
```
|
Rewriting MySQL query without using GROUP BY
|
[
"",
"mysql",
"sql",
""
] |
I want to list all flights in the flights table whose departure and arrival match a row in table 2.
The catch is that Departure in flights looks like XXXX -dsdjqlkdjlqs or XXXXdkjfhkds etc., while table 2 only stores XXXX.
Code :
```
CREATE TABLE flights
(`Name` varchar(10), `Departure` varchar(50), `Arrival` varchar(10), `Pass` int, `Cargo` int, `Dist` int)
;
INSERT INTO flights
(`Name`, `Departure`, `Arrival`, `Pass`, `Cargo`, `Dist`)
VALUES
('444 737vvv', 'LFLL gee', 'LPMAdsf', 200, 2000, 12),
('gg737vvv', 'LPMA-egege', 'LFLLdsf', 3000, 0, 13),
('747vvv', 'LFLLèèegege', 'LPMAdsf', 0, 5000, 15),
('747vvv', 'OTHHèèegege', 'LPMAdsf', 0, 5000, 15),
('747vvv', 'OMDBèèegege', 'LPMAdsf', 0, 5000, 15),
('a320vvv', 'EGKK-egege', 'LFPOdd', 0, 6000, 14)
;
CREATE TABLE Table2
(`Dep` varchar(21), `Arri` varchar(21),`Type` varchar(21))
;
INSERT INTO Table2
(`Dep`, `Arri`, `type`)
VALUES
('LFLL', 'LFPG', 'cargo'),
('LFPG', 'LFLL', 'cargo'),
('LFLL', 'LPMA', 'com'),
('LPMA', 'LFLL', 'cargo'),
('LFPO', 'EGKK', 'cargo'),
('EGKK', 'LFPO', 'com')
;
```
I have test :
```
select flights.name,
flights.Departure
from flights
where substr(flights.Departure, 1, 4) in (select Dep from table2) and
substr(flights.Arrival, 1, 4) in (select Arri from table2)
;
```
Example:
I fly from LFLL to LFPG.
In table 2 you have these routes:
LFLL LFPG
LFMT LFPO etc.
With my current query, a flight from LFLL to LFPO would appear, but it should not, because LFLL to LFPO is not a listed route.
<http://sqlfiddle.com/#!9/026d6>
|
I think the following is what you are looking for:
```
Select
flights.name,
flights.Departure
from
flights
inner join Table2
on Table2.Dep = SUBSTRING(flights.Departure,1,4) and Table2.Arri = SUBSTRING(flights.Arrival,1,4)
;
```
You need the SUBSTRING instead of wildcards, since you only want part of the FLIGHTS table entries. Also, you needed to include the arrival constraint.
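A minimal check of this join from Python with an in-memory SQLite database (SQLite spells the function `substr`; only a cut-down version of the question's data is used):

```python
import sqlite3

# Join on the first four characters of Departure/Arrival, as in the
# answer above (substr is SQLite's name for SUBSTRING).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE flights (name TEXT, departure TEXT, arrival TEXT);
INSERT INTO flights VALUES ('747vvv', 'LFLLèèegege', 'LPMAdsf'),
                           ('747vvv', 'OTHHèèegege', 'LPMAdsf');
CREATE TABLE table2 (dep TEXT, arri TEXT);
INSERT INTO table2 VALUES ('LFLL', 'LPMA');
""")
rows = con.execute("""
SELECT f.name, f.departure FROM flights f
INNER JOIN table2 t ON t.dep = substr(f.departure, 1, 4)
                   AND t.arri = substr(f.arrival, 1, 4)
""").fetchall()
print(rows)  # only the LFLL -> LPMA flight survives the join
```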
|
```
select flights.name,
flights.Departure
from flights
inner join table2
on substr(flights.Departure, 1, 4) = table2.Dep
```
|
SQL JOIN return record from table flights corresponding form table 2
|
[
"",
"mysql",
"sql",
"join",
"inner-join",
""
] |
I would like to preface this by saying I am VERY new to SQL, but my work now requires that I work in it.
I have a dataset containing topographical point data (x,y,z). I am trying to build a KNN model based on this data. For every point 'P', I search for the 100 points in the data set nearest P (nearest meaning geographically nearest). I then average the values of these points (this average is known as a residual), and add this value to the table in the 'resid' column.
As a proof of concept, I am trying to simply iterate over the table, and set the value of the 'resid' column to 1.0 in every row.
My query is this:
```
CREATE OR REPLACE FUNCTION LoopThroughTable() RETURNS VOID AS '
DECLARE row table%rowtype;
BEGIN
FOR row in SELECT * FROM table LOOP
SET row.resid = 1.0;
END LOOP;
END
' LANGUAGE 'plpgsql';
SELECT LoopThroughTable() as output;
```
This code executes and returns successfully, but when I check the table, no alterations have been made. What is my error?
|
Doing updates row-by-row in a loop is almost always a bad idea and **will** be extremely slow and won't scale. You should really find a way to avoid that.
After having said that:
All your function is doing is changing the value of the column in memory; you are just modifying the contents of a variable. If you want to update the data, you need an `UPDATE` statement inside the loop:
```
CREATE OR REPLACE FUNCTION LoopThroughTable()
RETURNS VOID
AS
$$
DECLARE
t_row the_table%rowtype;
BEGIN
FOR t_row in SELECT * FROM the_table LOOP
update the_table
set resid = 1.0
where pk_column = t_row.pk_column; --<<< !!! important !!!
END LOOP;
END;
$$
LANGUAGE plpgsql;
```
Note that you *have* to add a `where` condition on the primary key to the `update` statement otherwise you would update **all** rows for **each** iteration of the loop.
A *slightly* more efficient solution is to use a cursor, and then do the update using `where current of`
```
CREATE OR REPLACE FUNCTION LoopThroughTable()
RETURNS VOID
AS $$
DECLARE
t_curs cursor for
select * from the_table;
t_row the_table%rowtype;
BEGIN
FOR t_row in t_curs LOOP
update the_table
set resid = 1.0
where current of t_curs;
END LOOP;
END;
$$
LANGUAGE plpgsql;
```
---
> So if I execute the UPDATE query after the loop has finished, will that commit the changes to the table?
No. The call to the function runs in the context of the calling transaction. So you need to `commit` after running `SELECT LoopThroughTable()` if you have disabled auto commit in your SQL client.
---
Note that the language name is an identifier, do not use single quotes around it. You should also avoid using keywords like `row` as variable names.
Using [dollar quoting](http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING) (as I did) also makes writing the function body easier
|
I'm not sure if the proof of concept example does what you want. In general, with SQL, you almost *never* need a FOR loop. While you can use a function, if you have PostgreSQL 9.3 or later, you can use a [`LATERAL` subquery](http://www.postgresql.org/docs/current/static/queries-table-expressions.html) to perform subqueries for each row.
For example, create 10,000 random 3D points with a random `value` column:
```
CREATE TABLE points(
gid serial primary key,
geom geometry(PointZ),
value numeric
);
CREATE INDEX points_geom_gist ON points USING gist (geom);
INSERT INTO points(geom, value)
SELECT ST_SetSRID(ST_MakePoint(random()*1000, random()*1000, random()*100), 0), random()
FROM generate_series(1, 10000);
```
For each point, search for the 100 nearest points (except the point in question), and find the residual between the points' `value` and the average of the 100 nearest:
```
SELECT p.gid, p.value - avg(l.value) residual
FROM points p,
LATERAL (
SELECT value
FROM points j
WHERE j.gid <> p.gid
ORDER BY p.geom <-> j.geom
LIMIT 100
) l
GROUP BY p.gid
ORDER BY p.gid;
```
|
Iterate through table, perform calculation on each row
|
[
"",
"sql",
"postgresql",
"postgis",
""
] |
When I try to insert data using AJAX without a postback, nothing is inserted. The AJAX call is in the ajaxinsert.aspx page and the WebMethod is in its code-behind (ajaxinsert.aspx.cs). What is the problem?
```
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ajaxinsert.aspx.cs" Inherits="ajaxweb.ajaxinsert" %>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<script type="text/javascript" src="//code.jquery.com/jquery-1.10.2.min.js"></script>
<script type="text/javascript" >
$(document).ready(function () {
$("#insert").click(function (e) {
e.preventDefault()
var Name = $("#name1").val();
$.ajax({
type: "post",
dataType: "json",
url: "Contact.aspx/savedata",
contentType: "application/json; charset=utf-8",
data: { studentname: Name },
success: function () {
$("#divreslut").text("isnerted data");
},
error: function () {
alert("not inseted");
}
});
});
});
</script>
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div>
<input type="text" id="name1" name="name1" />
<input type="submit" id="insert" value="isnertdata" />
</div>
</form>
<div id="divreslut"></div>
</body>
</html>
```
```
[WebMethod]
public static void savedata(string studentname)
{
using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["SqlConnection"].ConnectionString))
{
using (SqlCommand cmd = new SqlCommand("sp_savedata", con))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@name", studentname);
//if (con.State == ConnectionState.Closed)
//{
con.Open();
Int32 retVal = cmd.ExecuteNonQuery();
if (retVal > 0)
{
Console.WriteLine("inserted sucess");
}
//if (retVal > 0)
//{
// status = true;
//}
//else
//{
// status = false;
//}
//return status;
}
    }
}
```
|
Please modify the line below in your .aspx page code:
```
data: { studentname: Name },
```
To
```
data: JSON.stringify({studentname: Name }),
```
Javascript Code
```
<script type="text/javascript" >
$(document).ready(function () {
$("#insert").click(function (e) {
e.preventDefault();
var Name = $("#name1").val();
$.ajax({
type: "post",
dataType: "json",
url: "Contact.aspx/savedata",
contentType: "application/json; charset=utf-8",
data: JSON.stringify({studentname: Name }),
success: function () {
$("#divreslut").text("isnerted data");
},
error: function () {
alert("not inseted");
}
});
});
});
</script>
```
|
Save data to database without postback using jQuery ajax in ASP.NET; please see the article below:
<http://www.dotnetfox.com/articles/save-data-to-database-without-postback-using-jquery-ajax-in-Asp-Net-1108.aspx>
|
inserting data in sql database without postback in asp.net
|
[
"",
"jquery",
"sql",
"asp.net",
"asp.net-ajax",
""
] |
I have this grouping problem that I can't seem to figure out. Any advice would be greatly appreciated! Let's say I have a table like this:
```
Name Passed? PlanID Plan
-----------------------------------------
Tom 1 1 Math
Tom 1 1 Reading
Tom 0 2 Math
Tom 0 2 Reading
Tom 0 3 Math
Tom 0 3 Reading
Bobby 1 1 Math
Bobby 0 1 Reading
Bobby 1 2 Math
Bobby 1 2 Reading
Bobby 0 3 Math
Bobby 0 3 Reading
Linda 0 1 Math
Linda 1 1 Reading
Linda 0 2 Math
Linda 1 2 Reading
Linda 1 3 Math
Linda 1 3 Reading
```
What I want to accomplish is something like this:
```
Name Passed? PlanID
---------------------------
Tom 1 1
Bobby 1 2
Linda 1 3
```
So basically, if the first planID hasn't been passed, look at the second one. If that one hasn't been passed, look at the third one. The issue I'm running into is that all the PlanIDs will be 3 or 1 or all the values in the Passed column will be 0.
I've tried a query like this:
```
CASE
WHEN MIN(Passed?) = 1
THEN MIN(PlanID)
ELSE MAX(PlanID)
END
```
I realize that the max and min will only yield a 3 or 1, but I'm not sure how else to go about it. Thanks!
EDIT: Sorry, forgot to mention that if a person has passed a planID, then the rest of the planIDs should read as passed. So since Bobby didn't pass both plans the first time, he must take it again. Since he passed the second time, he does not have to take it a third time. A person must pass both plan to count as passed, if that makes sense. I've added a couple more rows to hopefully communicate what I'm thinking of better. I may be making this a bit too confusing for myself as well.
|
If there is a possibility that multiple plans are passed for the same name and you just want to pick up the first one, use the query below:
```
select pass.name, pass.passed, pass.planID
from
(Select name, passed, planID
from table
where passed = 1) pass,
 (Select name, min(planID) planId
  from table
  where passed = 1
  group by name) min
where pass.planID= min.planID and pass.name = min.name
```
If there could only be one pass, you can simply select the pass:
```
select * from table where Passed = 1
```
|
If you just want to return the rows where `Passed?` = 1:
```
select * from table where Passed? = 1
```
this returns what you want.
|
SQL Grouping calculations
|
[
"",
"sql",
"group-by",
"case",
""
] |
I have a table in my Moodle db that stores, for every session, a `sessid` and a `timestart`. The table looks like this:
```
+----+--------+------------+
| id | sessid | timestart |
+----+--------+------------+
| 1 | 3 | 1456819200 |
| 2 | 3 | 1465887600 |
| 3 | 3 | 1459839600 |
| 4 | 2 | 1457940600 |
| 5 | 2 | 1460529000 |
+----+--------+------------+
```
How do I get, for every session, the earliest date from these timestamps in SQL?
|
You can simply use this:
```
select sessid,min(timestart) FROM mytable GROUP by sessid;
```
**And for your second question, something like this:**
```
SELECT
my.id,
my.sessid,
IF(my.timestart = m.timestart, 'yes', 'NO' ) AS First,
my.timestart
FROM mytable my
LEFT JOIN
(
SELECT sessid,min(timestart) AS timestart FROM mytable GROUP BY sessid
) AS m ON m.sessid = my.sessid;
```
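For what it's worth, the `GROUP BY` / `MIN` query can be verified against the question's data with Python's built-in SQLite driver:

```python
import sqlite3

# Question's data: earliest timestart per sessid via GROUP BY / MIN.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (id INTEGER, sessid INTEGER, timestart INTEGER);
INSERT INTO mytable VALUES (1, 3, 1456819200), (2, 3, 1465887600),
                           (3, 3, 1459839600), (4, 2, 1457940600),
                           (5, 2, 1460529000);
""")
rows = con.execute(
    "SELECT sessid, MIN(timestart) FROM mytable GROUP BY sessid ORDER BY sessid"
).fetchall()
print(rows)  # [(2, 1457940600), (3, 1456819200)]
```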
|
**Query**
```
select sessid, min(timestart) as timestart
from your_table_name
group by sessid;
```
Just another perspective, if you also need the `id`.
```
select t.id, t.sessid, t.timestart from
(
select id, sessid, timestart,
(
case sessid when @curA
then @curRow := @curRow + 1
else @curRow := 1 and @curA := sessid end
) as rn
from your_table_name t,
(select @curRow := 0, @curA := '') r
order by sessid,id
)t
where t.rn = 1;
```
|
Get first date from timestamp in SQL
|
[
"",
"mysql",
"sql",
"moodle",
""
] |
Ignore the practicality of the following sql query
```
DECLARE @limit BIGINT
SELECT TOP (COALESCE(@limit, 9223372036854775807))
*
FROM
sometable
```
It warns that
> The number of rows provided for a TOP or FETCH clauses row count parameter must be an integer.
Why doesn't it work but the following works?
```
SELECT TOP 9223372036854775807
*
FROM
sometable
```
And `COALESCE(@limit, 9223372036854775807)` is indeed `9223372036854775807` when `@limit` is null?
I know that changing `COALESCE` to `ISNULL` works but I want to know the reason.
|
<https://technet.microsoft.com/en-us/library/aa223927%28v=sql.80%29.aspx>
> Specifying bigint Constants
>
> Whole number constants that are outside the range supported by the int
> data type continue to be interpreted as numeric, with a scale of 0 and
> a precision sufficient to hold the value specified. For example, the
> constant 3000000000 is interpreted as numeric. These numeric constants
> are implicitly convertible to bigint and can be assigned to bigint
> columns and variables:
```
DECLARE @limit bigint
SELECT SQL_VARIANT_PROPERTY(COALESCE(@limit, 9223372036854775807),'BaseType')
SELECT SQL_VARIANT_PROPERTY(9223372036854775807, 'BaseType') BaseType
```
shows that 9223372036854775807 is `numeric`, so the return value of coalesce is numeric. Whereas
```
DECLARE @limit bigint
SELECT SQL_VARIANT_PROPERTY(ISNULL(@limit, 9223372036854775807),'BaseType')
```
gives `bigint`. Difference being `ISNULL` return value has the data type of the first expression, but `COALESCE` return value has the highest data type.
```
SELECT TOP (cast(COALESCE(@limit, 9223372036854775807) as bigint))
*
FROM
tbl
```
should work.
|
```
DECLARE
@x AS VARCHAR(3) = NULL,
@y AS VARCHAR(10) = '1234567890';
SELECT
COALESCE(@x, @y) AS COALESCExy, COALESCE(@y, @x)
AS COALESCEyx,
ISNULL(@x, @y) AS ISNULLxy, ISNULL(@y, @x)
AS ISNULLyx;
```
Output:
```
COALESCExy COALESCEyx ISNULLxy ISNULLyx
---------- ---------- -------- ----------
1234567890 1234567890 123 1234567890
```
Notice that with COALESCE, regardless of which input is specified first, the type of the output is VARCHAR(10), the one with the higher precedence. However, with **ISNULL, the type of the output is determined by the first input**. So when the first input is of a VARCHAR(3) data type (the expression aliased as ISNULLxy), the output is VARCHAR(3). As a result, the returned value that originated in the input @y is truncated after three characters. In short, ISNULL does not change the type, but COALESCE can.
|
SELECT TOP COALESCE and bigint
|
[
"",
"sql",
"sql-server",
""
] |
I want to write a query for oracle that verifies if all combinations exist in a table.
My problem is that the "key-columns" of the table are FKs linked to other tables, which means that the combinations are based on the rows of the other tables.
ERD example:
[](https://i.stack.imgur.com/ZagKR.png)
So, if there are 3 rows (1-3) in table A, 4 rows in table B and 2 rows in table C, MyTable must have these rows (3x4x2, 24 totally):
```
id, a_fk, b_fk, c_fk, someValue
x, 1, 1 ,1, ..
x, 1, 1, 2, ..
x, 1, 2, 1, ..
x, 1, 2, 2, ..
x, 1, 3, 1, ..
x, 1, 3, 2, ..
..............
```
I am not sure how to write this, because the available data of the combination may change.
Thanks for any help!
|
You can identify all the possible combinations with [cross joins](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_10002.htm#BABGHCBD), which generate the cartesian product of the rows:
```
select a.id, b.id, c.id
from tablea a
cross join tableb b
cross join tablec c
```
Depending on the exact result you want, you can use that in various ways to see what you do or do not have. To list the combinations that don't exist, use [the `minus` set operator](http://docs.oracle.com/cd/E11882_01/server.112/e41084/operators005.htm):
```
select a.id, b.id, c.id
from tablea a
cross join tableb b
cross join tablec c
minus
select fk_a, fk_b, fk_c
from my_table mt;
```
Or you can use `not exists` instead of minus, as other answers show.
If you want to list them all with the main table's column if it exists, and null otherwise, you can use a left outer join:
```
select a.id, b.id, c.id, mt.id
from tablea a
cross join tableb b
cross join tablec c
left join my_table mt
on mt.fk_a = a.id and mt.fk_b = b.id and mt.fk_c = c.id
```
You can also count the results from the first query, and then use that in a `case` statement to get a simple yes/no answer to show whether all combinations exist. And so on - it really depends what you want to see.
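Here is a small sketch of the cross-join plus set-difference idea, using Python and SQLite (Oracle's `MINUS` is spelled `EXCEPT` in SQLite and standard SQL; the tables and data below are illustrative, not the asker's schema):

```python
import sqlite3

# Cartesian product of the lookup tables, minus the combinations that
# already exist in the main table, leaves the missing combinations.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tablea (id INTEGER); INSERT INTO tablea VALUES (1), (2);
CREATE TABLE tableb (id INTEGER); INSERT INTO tableb VALUES (1), (2);
CREATE TABLE my_table (fk_a INTEGER, fk_b INTEGER);
INSERT INTO my_table VALUES (1, 1), (1, 2), (2, 1);  -- (2, 2) is missing
""")
missing = con.execute("""
SELECT a.id, b.id FROM tablea a CROSS JOIN tableb b
EXCEPT
SELECT fk_a, fk_b FROM my_table
""").fetchall()
print(missing)  # [(2, 2)]
```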
|
To get possible combinations, cross join works.
So you can get your 24 rows with:
```
with a as (
select 1 id1 from dual union all
select 2 id1 from dual union all
select 3 id1 from dual )
, b as (
select 1 id2 from dual union all
select 2 id2 from dual union all
select 3 id2 from dual union all
select 4 id2 from dual )
, c as (
select 1 id3 from dual union all
select 2 id3 from dual )
select id1, id2, id3
from a cross join b cross join c;
```
From there it is a pretty easy step to look for the combinations that do or do not exist in your table. To get the combinations that aren't in the target table you could:
```
with a as (
select 1 id1 from dual union all
select 2 id1 from dual union all
select 3 id1 from dual )
, b as (
select 1 id2 from dual union all
select 2 id2 from dual union all
select 3 id2 from dual union all
select 4 id2 from dual )
, c as (
select 1 id3 from dual union all
select 2 id3 from dual )
, t as (
select 1 id1, 1 id2, 1 id3 from dual union all
select 1 id1, 1 id2, 2 id3 from dual union all
select 1 id1, 2 id2, 1 id3 from dual union all
select 1 id1, 2 id2, 2 id3 from dual union all
select 1 id1, 3 id2, 1 id3 from dual union all
select 1 id1, 4 id2, 2 id3 from dual )
select lst.id1, lst.id2, lst.id3
from (
select id1, id2, id3
from a cross join b cross join c ) lst
where not exists (select 1 from t
where t.id1 = lst.id1
and t.id2 = lst.id2
and t.id3 = lst.id3)
```
Or, use the NOT IN test:
```
select lst.id1, lst.id2, lst.id3
from (
select id1, id2, id3
from a cross join b cross join c ) lst
where (id1, id2, id3) not IN (select distinct id1, id2, id3 from t)
```
Alex's minus does the same thing, all coming up with the same result set - and which option will work best may depend on the number of records in the composite table, available indexes, and - most importantly - exactly what it is you want.
If you just want to know whether one or more combinations are missing, then use an option that short-circuits as quickly as possible. EXISTS, for example, will stop checking the moment it hits a case that evaluates to TRUE.
|
Verify existance of all combinations in table
|
[
"",
"sql",
"database",
"oracle",
"combinations",
""
] |
I'm querying an access db from excel. I have a table similar to this one:
```
id Product Count
1 A 0
1 B 5
3 C 0
2 A 0
2 B 0
2 C 5
3 A 6
3 B 5
3 C 7
```
From which I'd like to return all the rows (including the ones where count for that product is 0) where the sum of the count for this ID is not 0 and the product is either A or B. So from the above table, I would get:
```
id Product Count
1 A 0
1 B 5
3 A 6
3 B 5
```
The following query gives the right output, but is quite slow (takes almost a minute when querying from a somewhat small 7k row db), so I was wondering if there is a more efficient way of doing it.
```
SELECT *
FROM [BD$] BD
WHERE (BD.Product='A' or BD.Product='B')
AND BD.ID IN (
SELECT BD.ID
FROM [BD$] BD
WHERE (Product='A' or Product='B')
GROUP BY BD.ID
HAVING SUM(BD.Count)<>0)
```
|
Use your `GROUP BY` approach in a subquery and `INNER JOIN` that back to the `[BD$]` table.
```
SELECT BD2.*
FROM
(
SELECT BD1.ID
FROM [BD$] AS BD1
WHERE BD1.Product IN ('A','B')
GROUP BY BD1.ID
HAVING SUM(BD1.Count) > 0
) AS sub
INNER JOIN [BD$] AS BD2
ON sub.ID = BD2.ID;
```
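A quick check of this shape of query against the question's sample data, run in SQLite from Python (the `Count` column is renamed `cnt` here to avoid the `COUNT` keyword, and the outer `Product` filter from the original query is kept so only A/B rows come back):

```python
import sqlite3

# GROUP BY in a subquery, joined back to pick up the detail rows.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bd (id INTEGER, product TEXT, cnt INTEGER);
INSERT INTO bd VALUES (1,'A',0),(1,'B',5),(3,'C',0),(2,'A',0),(2,'B',0),
                      (2,'C',5),(3,'A',6),(3,'B',5),(3,'C',7);
""")
rows = con.execute("""
SELECT bd2.id, bd2.product, bd2.cnt
FROM (
    SELECT id FROM bd
    WHERE product IN ('A','B')
    GROUP BY id
    HAVING SUM(cnt) > 0
) sub
INNER JOIN bd bd2 ON bd2.id = sub.id
WHERE bd2.product IN ('A','B')
ORDER BY bd2.id, bd2.product
""").fetchall()
print(rows)  # [(1, 'A', 0), (1, 'B', 5), (3, 'A', 6), (3, 'B', 5)]
```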
|
`IN()` subqueries can perform badly a lot of the time; you can try `EXISTS()` instead:
```
SELECT * FROM [BD$] BD
WHERE BD.Product in('A','B')
AND EXISTS(SELECT 1 FROM [BD$] BD2
WHERE BD.id = BD2.id
AND BD2.Product in('A','B')
AND BD2.Count > 0)
```
|
SELECT all rows where sum of count for this id is not 0
|
[
"",
"sql",
"excel",
"ms-access",
""
] |
I have two SQL queries; in one of them I perform a left outer join. Both should return the same number of records, but the two queries return different numbers of rows.
```
select Txn.txnRecNo
from Txn
inner join Person on Txn.uwId = Person.personId
full outer join TxnInsured on Txn.txnRecNo = TxnInsured.txnRecNo
left join TxnAdditionalInsured on Txn.txnRecNo = TxnAdditionalInsured.txnRecNo
where Txn.visibleFlag=1
and Txn.workingCopy=1
```
returned 20 records
```
select Txn.txnRecNo
from Txn
inner join Person on Txn.uwId = Person.personId
full outer join TxnInsured on Txn.txnRecNo = TxnInsured.txnRecNo
where Txn.visibleFlag=1
and Txn.workingCopy=1
```
returned 15 records
|
I suspect that the `TxnAdditionalInsured` table has **duplicate records** (multiple rows per `txnRecNo`). Use `distinct`:
```
select distinct Txn.txnRecNo
from Txn
inner join Person on Txn.uwId = Person.personId
full outer join TxnInsured on Txn.txnRecNo = TxnInsured.txnRecNo
left join TxnAdditionalInsured on Txn.txnRecNo = TxnAdditionalInsured.txnRecNo
where Txn.visibleFlag=1
and Txn.workingCopy=1
```
|
A `left` join will produce all rows from the left side of the join *at least* once in the result set.
But if your join conditions are such that there are *multiple* rows from the right side that match a particular row on the left, that left row will appear multiple times in the result (as many times as it is matched with a right row).
So, if the results are unexpected, your join criteria aren't as strict as they need to be, or you do not understand your data as well as you thought you did.
Unlike the other answers, I would not suggest just adding `distinct` - I'd suggest you investigate your data and determine whether your `ON` clause needs strengthening or if your data is in fact incorrect. Adding `distinct` to "make the results look right" is usually a poor decision - prefer to investigate and get the *correct* query written.
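A tiny illustration of the row-multiplication effect, using Python and SQLite with made-up data:

```python
import sqlite3

# A LEFT JOIN repeats each left row once per matching right row, so a
# 2-row left table can yield more than 2 result rows.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE txn (txnRecNo INTEGER);
INSERT INTO txn VALUES (1), (2);
CREATE TABLE additional (txnRecNo INTEGER);
INSERT INTO additional VALUES (1), (1), (1);  -- three matches for txn 1
""")
rows = con.execute("""
SELECT t.txnRecNo FROM txn t
LEFT JOIN additional a ON a.txnRecNo = t.txnRecNo
ORDER BY t.txnRecNo
""").fetchall()
print(rows)  # [(1,), (1,), (1,), (2,)] -- 4 rows from a 2-row left table
```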
|
returned no of rows different on left join
|
[
"",
"sql",
"sql-server",
""
] |
The database itself is about storing cocktails with their own recipes (Recipe) and ingredients (RecipeIngredient). Each user (User) has their own "pantry" (UserIngredients) in which they can store the ingredients they have at home. This query should now show them the cocktails they can mix
I've got the following query:
```
SELECT u.User_Name, r.Recipe_Name
FROM User u
INNER JOIN UserIngredient ui ON u.User_ID = ui.User_ID
INNER JOIN RecipeIngredient ri ON ui.Ingredient_ID = ri.Ingredient_ID
INNER JOIN Ingredient i ON ri.Ingredient_ID = i.Ingredient_ID
INNER JOIN Recipe r ON ri.Recipe_ID = r.Recipe_ID
WHERE u.User_Session = 'DgRkQztkvUhotfSf53l7ciiI8rOhKtuvoPqCTvdlBXWTn9cYxz'
```
and would like to know if it is possible to just get one "r.Recipe\_Name" per recipe and not one for each ingredient.
My tablelayout is the following:
```
CREATE TABLE User
(
User_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
User_Pass TEXT NOT NULL,
User_Name TEXT NOT NULL,
User_Surname TEXT NOT NULL,
User_Nickname TEXT,
User_EMail TEXT,
User_Session VARCHAR(50) UNIQUE,
User_Admin BOOLEAN
);
CREATE TABLE Recipe
(
Recipe_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
Recipe_Name TEXT NOT NULL,
Recipe_Clicks INT,
Recipe_Description TEXT
);
CREATE TABLE Ingredient
(
Ingredient_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
Ingredient_Name VARCHAR(255) UNIQUE,
Ingredient_Description TEXT
);
CREATE TABLE RecipeIngredient
(
RecipeIngredient_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
RecipeIngredient_Amount DECIMAL(8,2) NOT NULL,
MeasuringUnit_ID INT NOT NULL,
Recipe_ID INT NOT NULL,
Ingredient_ID INT NOT NULL,
FOREIGN KEY (MeasuringUnit_ID) REFERENCES MeasuringUnit(MeasuringUnit_ID),
FOREIGN KEY (Recipe_ID) REFERENCES Recipe(Recipe_ID),
FOREIGN KEY (Ingredient_ID) REFERENCES Ingredient(Ingredient_ID)
);
CREATE TABLE UserIngredient
(
UserIngredient_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
Ingredient_ID INT NOT NULL,
User_ID INT NOT NULL,
FOREIGN KEY(Ingredient_ID) REFERENCES Ingredient(Ingredient_ID),
FOREIGN KEY(User_ID) REFERENCES User(User_ID)
);
```
|
Try
```
SELECT u.User_Name, MAX(r.Recipe_Name)
FROM User u
INNER JOIN UserIngredient ui ON u.User_ID = ui.User_ID
INNER JOIN RecipeIngredient ri ON ui.Ingredient_ID = ri.Ingredient_ID
INNER JOIN Ingredient i ON ri.Ingredient_ID = i.Ingredient_ID
INNER JOIN Recipe r ON ri.Recipe_ID = r.Recipe_ID
WHERE u.User_Session = 'DgRkQztkvUhotfSf53l7ciiI8rOhKtuvoPqCTvdlBXWTn9cYxz'
GROUP BY u.User_Name, r.Recipe_Name
```
Not sure about this, but it sounds like multiple ingredients will share the same recipe, so MAX will return that single recipe name; grouping by user name plus recipe name might give you what you need.
|
To get the desired result using this database, try
```
SELECT DISTINCT u.User_Name, r.Recipe_Name
FROM User u
INNER JOIN UserIngredient ui ON u.User_ID = ui.User_ID
INNER JOIN RecipeIngredient ri ON ui.Ingredient_ID = ri.Ingredient_ID
INNER JOIN Ingredient i ON ri.Ingredient_ID = i.Ingredient_ID
INNER JOIN Recipe r ON ri.Recipe_ID = r.Recipe_ID
WHERE u.User_Session = 'DgRkQztkvUhotfSf53l7ciiI8rOhKtuvoPqCTvdlBXWTn9cYxz'
```
My guess is that users create recipes; why don't you instead add User\_ID to Recipe?
|
How do I limit the result to just different results?
|
[
"",
"mysql",
"sql",
""
] |
I have a column (XID) that contains a varchar(20) value in the format xxxzzzzz, where each x is a letter or a dash and zzzzz is a number.
I want to write a query that will strip the xxx and evaluate and return which is the highest number in the table column.
For example:
```
aaa1234
bac8123
g-2391
```
After, I would get the result of 8123
Thanks!
|
A bit painful in SQL Server, but possible. Here is one method that assumes that only digits appear after the first digit (which you actually specify as being the case):
```
select max(cast(stuff(col, 1, patindex('%[0-9]%', col) - 1, '') as float))
from t;
```
Note: if the last four characters are always the number you are looking for, this is probably easier to do with `right()`:
```
select max(right(col, 4))
```
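The logic of the `PATINDEX`/`STUFF` version ("drop everything before the first digit, then take the max") can be sketched outside SQL as well, for example in plain Python:

```python
import re

# Strip the letter/dash prefix from each value and take the numeric max,
# mirroring the PATINDEX + STUFF + MAX approach on the question's data.
values = ["aaa1234", "bac8123", "g-2391"]
numbers = [int(re.search(r"\d+", v).group()) for v in values]
print(max(numbers))  # 8123
```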
|
Using Numbers table
```
declare @string varchar(max)
set @string='abc1234'
select top 1 substring(@string,n,len(@string))
from
numbers
where n<=len(@string)
and isnumeric(substring(@string,n,1))=1
order by n
```
**Output: 1234**
|
A query that will search for the highest numeric value in a table where the column has an alphanumeric sequence
|
[
"",
"sql",
"sql-server",
""
] |
I am creating a library system that only has 1 copy of each book. The user would enter the book and the dates they want it for. The system would then check that the book is not reserved for the dates the user wants.
I'm trying to insert data into a table if the variables are not already in the table. e.g. if the id is equal to 3 and the date is in between two dates already in the table, then the information won't be entered. The table is for a simple book reservation system. The code below is what I have, but doesn't seem to work so needs changing. The error that I get is below the code. Thank you.
```
SELECT *
FROM `table_name`
INSERT INTO table_name (name, id, start_date, end_date, days)
VALUES ('test', '4', '0000-00-00', '1000-00-00', 3)
WHERE id != 3
AND start_date NOT BETWEEN 2016-03-31 AND 2016-03-05
```
This is the error that I get when I run the code:
```
Static analysis:
9 errors were found during analysis.
Unrecognized keyword. (near "NOT" at position 203)
Unrecognized keyword. (near "BETWEEN" at position 207)
Unexpected token. (near "2016" at position 215)
Unexpected token. (near "-03" at position 219)
Unexpected token. (near "-31" at position 222)
Unrecognized keyword. (near "AND" at position 226)
Unexpected token. (near "2016" at position 230)
Unexpected token. (near "-03" at position 234)
Unexpected token. (near "-05" at position 237)
SQL query: Documentation
SELECT * FROM `table_name` INSERT LIMIT 0, 25 INTO table_name (name, id, start_date, end_date, days) VALUES ('test', '4', '0000-00-00', '1000-00-00', 3) WHERE id != 3 AND start_date NOT BETWEEN 2016-03-31 AND 2016-03-05
MySQL said: Documentation
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT LIMIT 0, 25 INTO table_name (name, id, start_date, end_date, days)' at line 3#
```
|
Turns out that I needed to just do the two things separately, in different parts of my code.
|
You problably want something like this:
```
INSERT INTO table_name (name, id, start_date, end_date, days)
SELECT name, id, start_date, end_date, days
FROM (SELECT 'test' AS name,
4 AS id,
'0000-00-00' AS start_date,
'1000-00-00' AS end_date,
3 AS days) AS t
WHERE NOT EXISTS (SELECT *
FROM table_name
WHERE id = 3 AND start_date BETWEEN '2016-03-31' AND '2016-03-05')
```
This query will insert the specified *hardcoded* values in `table_name` **if** a row with `id=3` **and** `start_date` between dates `('2016-03-31', '2016-03-05')` **does not exist** in the same table.
You can modify the predicates of the `WHERE` clause as you wish to suit your actual needs.
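The same `INSERT ... SELECT ... WHERE NOT EXISTS` pattern can be tried out in SQLite from Python; the table and column names below are illustrative, not the asker's actual schema:

```python
import sqlite3

# Conditional insert: the row is only added when no overlapping
# reservation exists for the same book.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reservations (name TEXT, book_id INTEGER,
                           start_date TEXT, end_date TEXT);
INSERT INTO reservations VALUES ('alice', 3, '2016-03-05', '2016-03-31');
""")
insert_sql = """
INSERT INTO reservations (name, book_id, start_date, end_date)
SELECT ?1, ?2, ?3, ?4
WHERE NOT EXISTS (
    SELECT 1 FROM reservations
    WHERE book_id = ?2 AND start_date <= ?4 AND end_date >= ?3
)
"""
# Overlaps alice's reservation, so nothing is inserted:
con.execute(insert_sql, ('bob', 3, '2016-03-10', '2016-03-12'))
count1 = con.execute("SELECT COUNT(*) FROM reservations").fetchone()[0]
# A free window, so this insert goes through:
con.execute(insert_sql, ('bob', 3, '2016-04-01', '2016-04-03'))
count2 = con.execute("SELECT COUNT(*) FROM reservations").fetchone()[0]
print(count1, count2)  # 1 2
```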
|
SQL insert into a table depending on values in the table
|
[
"",
"mysql",
"sql",
"select",
"insert-into",
""
] |
I have to show "birthday" dates in the format 'dd.mm' (stored as 'dd.mm.yy'), but only if that birthday is shared by more than 10 employees, from a table named "employeeFirm".
When I run `select birthday from employeeFirm;`
I get:
```
01.11.73
08.09.77
01.11.65
01.11.74
(null)
(null)
01.11.85
(null)
01.11.88
01.11.65
01.11.56
01.11.77
01.11.77
(null)
01.11.77
01.11.77
....
```
I want to get the result in the format 'dd.mm', in this case of course '01.11', because we have more than 10 employees with the same birthday.
|
Try this:
```
SELECT ddmm, count
FROM (
SELECT distinct Substr(Birthday,1,5) as ddmm
, Count(Birthday) OVER(PARTITION BY Substr(Birthday,1,5)) AS count
from employeeFirm
) A
where count> 10
```
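The same grouping idea can be checked with Python's SQLite driver; a plain `GROUP BY`/`HAVING` is used instead of the window function (it gives the same result here), and the threshold is lowered to 2 so the toy data triggers it:

```python
import sqlite3

# Take the 'dd.mm' prefix of each birthday and keep only values shared
# by more than N employees (N = 2 for this small sample).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employeeFirm (birthday TEXT);
INSERT INTO employeeFirm VALUES ('01.11.73'), ('08.09.77'), ('01.11.65'),
                                ('01.11.74'), (NULL), ('01.11.85');
""")
rows = con.execute("""
SELECT substr(birthday, 1, 5) AS ddmm, COUNT(*) AS cnt
FROM employeeFirm
WHERE birthday IS NOT NULL
GROUP BY ddmm
HAVING cnt > 2
""").fetchall()
print(rows)  # [('01.11', 4)]
```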
|
You can use TO\_CHAR like this:
```
SELECT case when t.formated is not null
then t.formated
else to_char(s.birthday,'DD.MM.YYYY')
end as new_birthDay
FROM employeeFirm s
LEFT OUTER JOIN(SELECT TO_CHAR(birthday,'DD.MM') as formated
FROM employeeFirm
GROUP BY TO_CHAR(birthday,'DD.MM')
HAVING COUNT(*) > 10) t
ON(to_char(s.birthday,'DD.MM') = t.formated)
```
If you only want the birthdays that have more than 10 employees on that day, then:
```
SELECT TO_CHAR(birthday,'DD.MM') as formated
FROM employeeFirm
GROUP BY TO_CHAR(birthday,'DD.MM')
HAVING COUNT(*) > 10
```
|
Show "birthday" days in format 'dd.mm' from format 'dd.mm.yy' but only if that "birthday" has more than 10 employees
|
[
"",
"sql",
"oracle",
""
] |
Why am I getting this error
> Incorrect syntax near the keyword
when I execute the below code
```
SELECT *
FROM [dbo].[priority_table] p
WHERE EXISTS ( (SELECT 1
FROM [dbo].[item_table] i
WHERE i.priority_id = p.priority_id)
AND filter = @filter )
OR ( @filter IS NULL )
```
For the past two hours I've been racking my brain over this and couldn't get anywhere. I want that part of the `WHERE` clause to be ignored when the `@filter` variable is `NULL`.
|
Move the `Open parenthesis` before `EXISTS`
```
SELECT *
FROM [dbo].[priority_table] p
WHERE ( EXISTS (SELECT 1
FROM [dbo].[item_table] i
WHERE i.priority_id = p.priority_id)
AND filter = @filter )
OR ( @filter IS NULL )
```
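The `(filter = @filter OR @filter IS NULL)` pattern for an optional parameter can be exercised end-to-end. A minimal sketch using Python's bundled SQLite, with a positional parameter standing in for `@filter` (the table and columns are simplified from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE priority_table (priority_id INTEGER, filter TEXT)")
conn.executemany("INSERT INTO priority_table VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "a")])

def rows_for(f):
    # Same shape as the corrected WHERE clause: apply the filter only
    # when the parameter is not NULL.
    return conn.execute(
        "SELECT priority_id FROM priority_table "
        "WHERE (filter = ?) OR (? IS NULL) ORDER BY priority_id",
        (f, f)).fetchall()

filtered = rows_for("a")      # only the matching rows
everything = rows_for(None)   # parameter is NULL -> no filtering at all
```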
|
This is not ju-ju magic. `EXISTS` in a `WHERE` clause is just written like a sub-query (specifically a semi-join), thus (extra lines added for emphasis):
```
SELECT *
FROM [dbo].[priority_table] p
WHERE EXISTS
(
SELECT 1
FROM [dbo].[item_table] i
WHERE i.priority_id = p.priority_id
AND (i.filter = @filter
OR @filter IS NULL)
)
```
The critical bit is to make sure you use table aliases, so you refer correctly to the outer and inner tables of the sub-query.
|
Incorrect syntax near the keyword when using EXISTS
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
Table `tmp` :
```
CREATE TABLE if not exists tmp (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL);
```
I inserted 5 rows. `select rowid,id,name from tmp;` :
| rowid | id | name |
| --- | --- | --- |
| 1 | 1 | a |
| 2 | 2 | b |
| 3 | 3 | c |
| 4 | 4 | d |
| 5 | 5 | e |
Now I delete rows with id 3 and 4 and run above query again:
| rowid | id | name |
| --- | --- | --- |
| 1 | 1 | a |
| 2 | 2 | b |
| 5 | 5 | e |
`rowid` is not getting reset and leaves holes. Even after `vacuum` it doesn't reset `rowid`.
I want :
| rowid | id | name |
| --- | --- | --- |
| 1 | 1 | a |
| 2 | 2 | b |
| 3 | 5 | e |
How to achieve above output?
|
I assume you already know a little about `rowid`, since you're asking about its interaction with the `VACUUM` command, but this may be useful information for future readers:
`rowid` is [a special column available in all tables](https://www.sqlite.org/lang_createtable.html#rowid) (unless you use `WITHOUT ROWID`), used internally by sqlite. A `VACUUM` is supposed to rebuild the table, aiming to reduce fragmentation in the database file, and [may change the values of the `rowid` column](https://sqlite.org/lang_vacuum.html). Moving on.
Here's the answer to your question: `rowid` is *really* special. So special that if you have an `INTEGER PRIMARY KEY`, it becomes an alias for the `rowid` column. From the docs on [rowid](https://sqlite.org/lang_createtable.html#rowid):
> With one exception noted below, if a rowid table has a primary key that consists of a single column and the declared type of that column is "INTEGER" in any mixture of upper and lower case, **then the column becomes an alias for the rowid**. Such a column is usually referred to as an "integer primary key". A PRIMARY KEY column only becomes an integer primary key if the declared type name is exactly "INTEGER". Other integer type names like "INT" or "BIGINT" or "SHORT INTEGER" or "UNSIGNED INTEGER" causes the primary key column to behave as an ordinary table column with integer affinity and a unique index, not as an alias for the rowid.
This makes your primary key faster than it would've been otherwise (presumably because there's no lookup from your primary key to `rowid`):
> The data for rowid tables is stored as a B-Tree structure containing one entry for each table row, using the rowid value as the key. This means that retrieving or sorting records by rowid is fast. Searching for a record with a specific rowid, or for all records with rowids within a specified range is **around twice as fast** as a similar search made by specifying any other PRIMARY KEY or indexed value.
Of course, when your primary key is an alias for `rowid`, it would be terribly inconvenient if this could change. Since `rowid` is now aliased to *your application data*, it would not be acceptable for sqlite to change it.
Hence, this little note in the [VACUUM docs](https://sqlite.org/lang_vacuum.html):
> The VACUUM command may change the ROWIDs of entries in any tables **that do not have an explicit INTEGER PRIMARY KEY.**
If you *really really really* absolutely need the `rowid` to change on a `VACUUM` (I don't see why -- feel free to discuss your reasons in the comments, I may have some suggestions), you can avoid this aliasing behavior. Note that it will decrease the performance of any table lookups using your primary key.
To avoid the aliasing, and degrade your performance, you can use `INT` instead of `INTEGER` when defining your key:
> **A PRIMARY KEY column only becomes an integer primary key if the declared type name is exactly "INTEGER".** Other integer type names like "INT" or "BIGINT" or "SHORT INTEGER" or "UNSIGNED INTEGER" causes the primary key column to behave as an ordinary table column with integer affinity and a unique index, not as an alias for the rowid.
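The aliasing difference between `INTEGER PRIMARY KEY` and `INT PRIMARY KEY` is easy to observe from Python's bundled SQLite (the table names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "INTEGER PRIMARY KEY" -> the id column is an alias for rowid
conn.execute("CREATE TABLE aliased (id INTEGER PRIMARY KEY, name TEXT)")
# "INT PRIMARY KEY" -> ordinary column; rowid stays independent
conn.execute("CREATE TABLE plain (id INT PRIMARY KEY, name TEXT)")

conn.execute("INSERT INTO aliased (id, name) VALUES (42, 'a')")
conn.execute("INSERT INTO plain (id, name) VALUES (42, 'a')")

aliased_rowid = conn.execute("SELECT rowid FROM aliased").fetchone()[0]
plain_rowid = conn.execute("SELECT rowid FROM plain").fetchone()[0]
```

In the first table `rowid` tracks the inserted key (42); in the second, sqlite assigns its own `rowid` (1) regardless of the key value.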
|
I found a solution for some cases. I don't know why, but this worked.
1. Rename the column "id" to any other name (not a PRIMARY KEY), or delete the column entirely, because you already have `rowid`.
```
CREATE TABLE if not exists tmp (
my_i INTEGER NOT NULL,
name TEXT NOT NULL);
```
2. Insert 5 rows into it and run `select rowid,* from tmp;`:
```
rowid my_i name
1 1 a
2 2 b
3 3 c
4 4 d
5 5 e
```
3. Delete rows with rowid 3 and 4 and run the query again:
```
DELETE FROM tmp WHERE rowid = 3;
DELETE FROM tmp WHERE rowid = 4;
select rowid,* from tmp;
rowid my_i name
1 1 a
2 2 b
5 5 e
```
4. Run:
```
VACUUM;
```
5. Run:
```
select rowid,* from tmp;
```
The output:
```
rowid my_i name
1 1 a
2 2 b
3 5 e
```
|
How to get rid of gaps in rowid numbering after deleting rows?
|
[
"",
"sql",
"sqlite",
""
] |
I have a ticket details table that stores the information of transactions. Here is an example of the table data:
```
Ticket_Number Detail_type_ID Description Date_Created TotalAmount Barcode
1 11 Card Sale 1/1/16 5 123
1 1 Book 1/1/16 5
1 11 Card Red 1/1/16 -5 123
2 1 book 1/5/16 5
3 1 book 1/6/16 5
3 11 Card Red 1/6/16 -5 123
4 11 Card Sale 1/7/16 5 124
5 1 Book 1/7/16 5
5 11 Card Red 1/7/16 -5 124
6 11 Card Sale 1/8/16 5 123
6 1 Book 1/8/16 5
6 11 Card Red 1/8/16 -5 123
7 1 Book 1/9/16 5
7 11 Card Red 1/9/16 -5 124
```
We sell Gift cards - $5 allows you to purchase 2 books. The gift cards are loaded with 2 books. You'll see in the table above, in most cases, we sell a card, and a book is redeemed right away. The customer comes back at some point, and purchases another book, with the remaining balance on the card.
What we're looking to find out is how often customers come back and redeem the balance, and how long it takes them to deplete the card. As you'll see, the barcode is stored in the details table; however, we re-use the cards, so we don't want a card's second life to pollute the data from its first. A Detail\_Type\_ID of '11' means a card sale or redemption. Based on the data above, here is the output I'm looking for:
```
Barcode Days_between_usage Balance_still_remains
123 6 No
124 2 No
123(2) 0 Yes
```
The "balance still remains" will tell me that the card still has a balance.
How can I run a query to get to this output?
EDIT:
Based on the answer below, it looks like the first step is to break the data into sales and redemptions, which I've done. I am unsure how to proceed from here.
```
Select barcode, date_created,
       Case When TotalAmount > 0 Then 'Sale' Else 'Redeem' End as SaleOrRedeem
From Ticketsdetails
Where Date_Created > '1/1/16' and Detail_Type_ID = '11'
  and barcode In (select Barcode
                  From Ticketsdetails as td
                  where Date_Created > '1/1/16' and Detail_Type_ID = '11'
                    and TotalAmount > 0)
Order By Barcode, date_Created
```
Which returns:
```
Barcode Date_Created TransType
123 1/1/16 Sale
123 1/1/16 Redeem
123 1/6/16 Redeem
124 1/7/16 Sale
124 1/7/16 Redeem
124 1/7/16 Sale
123 1/8/16 Sale
123 1/8/16 Redeem
124 1/9/16 Redeem
```
|
Stab in the dark; I'm not sure I've fully understood your requirements.
```
with Sales as (
    select
        t.Barcode,
        t.Date_Created as Sale_Date,
        row_number() over (partition by t.Barcode order by t.Date_Created) as Load_Seq
    from <Transactions> as t
    where t.Description = 'Card Sale'
),
RedemptionWindows as (
    select
        s1.Barcode,
        s1.Load_Seq,
        s1.Sale_Date,
        coalesce(s2.Sale_Date, dateadd(year, 1, s1.Sale_Date)) as End_Date
    from Sales as s1 left outer join Sales as s2
        on s2.Barcode = s1.Barcode and s2.Load_Seq = s1.Load_Seq + 1
)
select
    rw.Barcode
        + case
              when rw.Load_Seq > 1
              then '(' + cast(rw.Load_Seq as varchar(3)) + ')'
              else '' end as Barcode,
    r_summary.Days_Between_Usage,
    case when r_summary.RedemptionCount < 2 then 'Yes' else 'No' end as Balance_Still_Remains,
    5.00 - 2.50 * r_summary.RedemptionCount as Balance_Remaining
from
    RedemptionWindows as rw
    cross apply
    (
        select
            datediff(day, min(r.Date_Created), max(r.Date_Created)) as Days_Between_Usage,
            count(*) as RedemptionCount
        from <Transactions> as r /* redemptions */
        where r.Description = 'Card Red'
          and r.Barcode = rw.Barcode
          and r.Date_Created >= rw.Sale_Date
          and r.Date_Created < rw.End_Date
    ) as r_summary
```
|
You can do this with a self-join or with a subquery.
Here is some pseudo-code
```
SELECT t1.Barcode, SUM(t1.Date - t2.Date)
FROM TheTable t1
JOIN TheTable t2 ON t1.Barcode=t2.Barcode
AND t1.Code = 'Redemption'
AND t2.Code = 'Sale'
GROUP BY Barcode
```
And then to get that last row you'll have to UNION with another query that generates the row you want. You can use a CASE expression to generate a string based on whether the balance remaining is > 0 or not. And a sub-query can be used to generate the number in parenthesis next to the Barcode.
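As a quick sanity check of the "days between usage" arithmetic, here is a small Python sketch over the simplified Sale/Redeem rows from the question's edit. It only takes the difference between a card's first and last redemption and ignores card re-use:

```python
from datetime import date

# (barcode, date, transaction type), taken from the question's edit
rows = [
    ("123", date(2016, 1, 1), "Sale"),
    ("123", date(2016, 1, 1), "Redeem"),
    ("123", date(2016, 1, 6), "Redeem"),
    ("124", date(2016, 1, 7), "Sale"),
    ("124", date(2016, 1, 7), "Redeem"),
    ("124", date(2016, 1, 9), "Redeem"),
]

def days_between_usage(barcode):
    # Difference in days between first and last redemption of one card
    redeems = sorted(d for b, d, t in rows if b == barcode and t == "Redeem")
    return (redeems[-1] - redeems[0]).days

gap_123 = days_between_usage("123")
gap_124 = days_between_usage("124")
```

Note the question's expected output lists 6 days for card 123 while a plain date difference between 2016-01-01 and 2016-01-06 gives 5, so the desired counting convention may be inclusive.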
|
How to find number of days between results
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a mini database (`mymoney`) which has a column called `total_amount`. Now I would like to get this column and ad an extra column `GRP2` (with values `low`, `medium` or `high`) based on the row value in the `total_amount` column. **So I have this**:
```
total_amount
5000
27000
36000
50000
```
**And I would like to the this**:
```
total_amount GRP2
5000 low
27000 low
36000 medium
50000 high
```
I tried with the following SQL query:
```
SELECT mymoney.total_amount,
CASE mymoney.total_amount
WHEN mymoney.total_amount <= 30000 THEN 'low'
WHEN mymoney.total_amount >= 31000 AND mymoney.total_amount <= 40000 THEN 'medium'
ELSE 'high'
END AS GRP2
FROM mymoney
```
But in `GRP2` I get `high` everywhere, even though there are clearly numbers in `total_amount` which should be marked `low` and `medium`. (btw: the data types for the values in `total_amount` are all integers).
```
total_amount GRP2
5000 high
27000 high
36000 high
50000 high
```
Can anyone please tell me what I'm doing wrong?
|
You are misusing the `CASE` expression. Try like this:
```
SELECT total_amount,
CASE
WHEN total_amount <= 30000 THEN 'low'
WHEN total_amount > 30000 AND total_amount <= 40000 THEN 'medium'
ELSE 'high'
END AS GRP2
FROM mymoney
```
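Since the question is tagged sqlite, the corrected searched `CASE` can be verified directly with Python's bundled SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mymoney (total_amount INTEGER)")
conn.executemany("INSERT INTO mymoney VALUES (?)",
                 [(5000,), (27000,), (36000,), (50000,)])

# Searched CASE: each WHEN carries its own full boolean condition
result = conn.execute("""
    SELECT total_amount,
           CASE
               WHEN total_amount <= 30000 THEN 'low'
               WHEN total_amount <= 40000 THEN 'medium'
               ELSE 'high'
           END AS GRP2
    FROM mymoney
    ORDER BY total_amount
""").fetchall()
```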
|
You are using a simple CASE expression, whereas you need a searched CASE expression:
```
SELECT mymoney.total_amount,
CASE
WHEN mymoney.total_amount <= 30000 THEN 'low'
WHEN mymoney.total_amount >= 31000 AND mymoney.total_amount <= 40000 THEN 'medium'
ELSE 'high'
END AS GRP2
FROM mymoney
```
|
SQL CASE conditions always returning ELSE value
|
[
"",
"sql",
"sqlite",
"case",
""
] |
How can I convert month number to month name if have following records:
eg.
```
select date from tableClient;
17.01.07
18.02.08
18.03.08
18.04.08
18.05.08
18.06.08
18.07.08
18.08.08
```
Expected result is :
```
17.January.07
18.February.08
18.March.08
18.April.08
......
```
How can I convert just the MM part of that record?
PS. I have to write one query where exists two or three conditions, eg.
```
select id, firstName, lastName, date from tableClient, tableActivity
where tableClient.id = tableActivity.id and tableActivity.status= 'YES'
AND tableActiviy.type = 'SMS';
```
Condition: the date must be in the form e.g. `01.January.03`.
|
You have to use the `to_char()` function together with the `trim()` function to get rid of extra spaces after the month name:
```
select trim(to_char(sysdate, 'dd.Month')) || '.' || to_char(sysdate,'yy') from dual;
```
With your example:
```
select id, firstName, lastName, trim(to_char(date, 'dd.Month')) || '.' || to_char(date,'yy')
from tableClient, tableActivity
where tableClient.id = tableActivity.id and tableActivity.status= 'YES'
AND tableActiviy.type = 'SMS';
```
|
Use **TO\_CHAR** with `FMMONTH` to **display** the date in your **desired format**.
From the [documentation](https://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements004.htm#SQLRF00216),
> Format Model Modifiers
>
> The FM and FX modifiers, used in format models in the TO\_CHAR
> function, control blank padding and exact format checking.
>
> A modifier can appear in a format model more than once. In such a
> case, each subsequent occurrence toggles the effects of the modifier.
> Its effects are enabled for the portion of the model following its
> first occurrence, and then disabled for the portion following its
> second, and then reenabled for the portion following its third, and so
> on.
>
> FM Fill mode. Oracle uses trailing blank characters and leading zeroes
> to fill format elements to a constant width. The width is equal to the
> display width of the largest element for the relevant format model:
For example,
```
SQL> SELECT to_char(SYSDATE, 'DD-FMMONTH-YY') FROM DUAL;
TO_CHAR(SYSDATE
---------------
09-MARCH-16
SQL> SELECT to_char(to_date('17.01.07','dd.mm.yy'), 'DD-fmMONTH-fmYY') FROM DUAL;
TO_CHAR(TO_DATE
---------------
17-JANUARY-07
```
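If the conversion happens in application code rather than in the database, the same formatting is a one-liner with Python's `datetime`. Note that `%B` is locale-dependent; the English month names below assume the default C locale:

```python
from datetime import datetime

def spell_month(d):
    # '17.01.07' -> '17.January.07'
    return datetime.strptime(d, "%d.%m.%y").strftime("%d.%B.%y")

converted = [spell_month(d) for d in ["17.01.07", "18.02.08", "18.03.08"]]
```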
|
Convert month number to month name from two tables
|
[
"",
"sql",
"oracle",
""
] |
I understand why you use `COUNT(*)` and `COUNT(DISTINCT col)`, but in which cases would you use simply `COUNT(col)`. Wouldn't `COUNT(col)` return the same result as `COUNT(*)`?
```
SELECT COUNT(CustomerID) AS OrdersFromCustomerID7 FROM Orders
WHERE CustomerID=7;
```
and
```
SELECT COUNT(*) AS OrdersFromCustomerID7 FROM Orders
WHERE CustomerID=7;
```
both result in `OrdersFromCustomerID7 = 4` in this W3Schools [example](http://www.w3schools.com/sql/trysql.asp?filename=trysql_func_count).
|
When you use **COUNT(column)**, it won't count **NULLs**,
as opposed to **COUNT(\*)**, which counts **each row** no matter whether it contains NULLs or not.
Let's take this case:
```
ID | NAME
1 John
2 NULL
3 Jonathan
SELECT COUNT(*) FROM Table -- return 3
SELECT COUNT(NAME) FROM Table -- return 2
```
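The same experiment runs unchanged against Python's bundled SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [(1, "John"), (2, None), (3, "Jonathan")])

# COUNT(*) counts rows; COUNT(name) skips the NULL name
count_star, count_name = conn.execute(
    "SELECT COUNT(*), COUNT(name) FROM people").fetchone()
```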
|
Try this:
```
DECLARE @tbl TABLE(ID INT IDENTITY,SomeValue INT);
INSERT INTO @tbl VALUES(1),(2),(NULL);
SELECT *
FROM @tbl
SELECT COUNT(*) AS COUNT_Asterisk
,COUNT(SomeValue) AS COUNT_SomeValue
FROM @tbl
```
|
When would you use a column name instead of * in a count?
|
[
"",
"sql",
"count",
""
] |
I have two separately unique columns in a table: `col1`, `col2`. Both have a unique index (`col1` is unique and so is `col2`).
I need `INSERT ... ON CONFLICT ... DO UPDATE` syntax, and update other columns in case of a conflict, but I can't use both columns as `conflict_target`.
It works:
```
INSERT INTO table
...
ON CONFLICT ( col1 )
DO UPDATE
SET
-- update needed columns here
```
But how to do this for several columns, something like this:
```
...
ON CONFLICT ( col1, col2 )
DO UPDATE
SET
....
```
Currently using Postgres 9.5.
|
## A sample table and data
```
CREATE TABLE dupes(col1 int primary key, col2 int, col3 text,
CONSTRAINT col2_unique UNIQUE (col2)
);
INSERT INTO dupes values(1,1,'a'),(2,2,'b');
```
## Reproducing the problem
```
INSERT INTO dupes values(3,2,'c')
ON CONFLICT (col1) DO UPDATE SET col3 = 'c', col2 = 2
```
Let's call this Q1. The result is
```
ERROR: duplicate key value violates unique constraint "col2_unique"
DETAIL: Key (col2)=(2) already exists.
```
## What the [documentation](https://www.postgresql.org/docs/current/static/sql-insert.html#SQL-ON-CONFLICT) says
> conflict\_target can perform unique index inference. When performing
> inference, it consists of one or more index\_column\_name columns and/or
> index\_expression expressions, and an optional index\_predicate. All
> table\_name unique indexes that, without regard to order, contain
> exactly the conflict\_target-specified columns/expressions are inferred
> (chosen) as arbiter indexes. If an index\_predicate is specified, it
> must, as a further requirement for inference, satisfy arbiter indexes.
This gives the impression that the following query should work, but it does not, because it would actually require a composite unique index over col1 and col2 together. However, such an index would not guarantee that col1 and col2 are unique individually, which is one of the OP's requirements.
```
INSERT INTO dupes values(3,2,'c')
ON CONFLICT (col1,col2) DO UPDATE SET col3 = 'c', col2 = 2
```
Let's call this query Q2 (this fails with a syntax error)
## Why?
PostgreSQL behaves this way because what should happen when a conflict occurs on the second column is not well defined. There are a number of possibilities. For example, in the Q1 query above, should PostgreSQL update `col1` when there is a conflict on `col2`? But what if that leads to another conflict on `col1`? How is PostgreSQL expected to handle that?
## A solution
A solution is to combine ON CONFLICT with [old fashioned UPSERT](https://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING).
```
CREATE OR REPLACE FUNCTION merge_db(key1 INT, key2 INT, data TEXT) RETURNS VOID AS
$$
BEGIN
LOOP
-- first try to update the key
UPDATE dupes SET col3 = data WHERE col1 = key1 and col2 = key2;
IF found THEN
RETURN;
END IF;
-- not there, so try to insert the key
-- if someone else inserts the same key concurrently, or key2
-- already exists in col2,
-- we could get a unique-key failure
BEGIN
INSERT INTO dupes VALUES (key1, key2, data) ON CONFLICT (col1) DO UPDATE SET col3 = data;
RETURN;
EXCEPTION WHEN unique_violation THEN
BEGIN
INSERT INTO dupes VALUES (key1, key2, data) ON CONFLICT (col2) DO UPDATE SET col3 = data;
RETURN;
EXCEPTION WHEN unique_violation THEN
-- Do nothing, and loop to try the UPDATE again.
END;
END;
END LOOP;
END;
$$
LANGUAGE plpgsql;
```
You would need to modify the logic of this stored function so that it updates the columns exactly the way you want it to. Invoke it like
```
SELECT merge_db(3,2,'c');
SELECT merge_db(1,2,'d');
```
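The same update-then-insert-then-fallback loop can also live in application code. Here is a sketch in Python against SQLite, which enforces the same two unique constraints. The fallback `UPDATE ... WHERE col1 = ? OR col2 = ?` is a simplification: in general the conflict could involve two different rows, so real code must pick the semantics it wants, exactly as discussed above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dupes (
    col1 INTEGER PRIMARY KEY,
    col2 INTEGER UNIQUE,
    col3 TEXT)""")
conn.executemany("INSERT INTO dupes VALUES (?, ?, ?)",
                 [(1, 1, "a"), (2, 2, "b")])

def merge_db(key1, key2, data):
    # Mirror of the PL/pgSQL loop: try the update first, otherwise insert
    # and fall back to updating whichever row caused the unique violation.
    cur = conn.execute(
        "UPDATE dupes SET col3 = ? WHERE col1 = ? AND col2 = ?",
        (data, key1, key2))
    if cur.rowcount:
        return
    try:
        conn.execute("INSERT INTO dupes VALUES (?, ?, ?)", (key1, key2, data))
    except sqlite3.IntegrityError:
        # One of the unique columns already exists; update that row.
        conn.execute(
            "UPDATE dupes SET col3 = ? WHERE col1 = ? OR col2 = ?",
            (data, key1, key2))

merge_db(3, 2, "c")   # col2=2 conflicts -> updates the (2, 2) row
merge_db(1, 1, "d")   # exact key exists -> plain update
result = conn.execute(
    "SELECT col1, col2, col3 FROM dupes ORDER BY col1").fetchall()
```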
|
`ON CONFLICT` requires a unique index\* to do the conflict detection. So you just need to create a unique index on both columns:
```
t=# create table t (id integer, a text, b text);
CREATE TABLE
t=# create unique index idx_t_id_a on t (id, a);
CREATE INDEX
t=# insert into t values (1, 'a', 'foo');
INSERT 0 1
t=# insert into t values (1, 'a', 'bar') on conflict (id, a) do update set b = 'bar';
INSERT 0 1
t=# select * from t;
id | a | b
----+---+-----
1 | a | bar
```
\* In addition to unique indexes, you can also use [exclusion constraints](https://www.postgresql.org/docs/9.5/static/ddl-constraints.html#DDL-CONSTRAINTS-EXCLUSION). These are a bit more general than unique constraints. Suppose your table had columns for `id` and `valid_time` (and `valid_time` is a `tsrange`), and you wanted to allow duplicate `id`s, but not for overlapping time periods. A unique constraint won't help you, but with an exclusion constraint you can say "exclude new records if their `id` equals an old `id` and also their `valid_time` overlaps its `valid_time`."
|
Use multiple conflict_target in ON CONFLICT clause
|
[
"",
"sql",
"postgresql",
"upsert",
""
] |
I have this table with multiple columns. The primary key is `(type, ref, code)`, with row types `t1` and `t2`, and two states: Valid (`V`) and Invalid (`I`).
Every invalid row of type `t1` should have a valid row of type `t2`,
but my table already has some rows in the Invalid state that don't have the corresponding valid `t2` row.
Example:
```
type | ref | code | state .....
----------------------------------------
t1 | 1 | c1 | V
t1 | 2 | c1 | V
t1 | 3 | c1 | I
t2 | 3 | c1 | V
t1 | 4 | c1 | V
t1 | 5 | c1 | I
```
So I need to duplicate the missing rows. I'm using
```
INSERT INTO table (type,ref,code,state)
SELECT 't2',ref,code,'V' FROM table
WHERE EXISTS (SELECT ref,code,count(*) from table GROUP BY ref,code HAVING count(*)=1)
AND state='I'
```
but I'm getting
```
Violation of PRIMARY KEY
```
I tried with
```
WHERE NOT EXISTS (SELECT ref,code,count(*) from table GROUP BY ref,code HAVING count(*)>1)
AND state='I'
```
and nothing happened. Any idea how to perform this?
|
You can use the following query to get to-be-duplicated rows:
```
SELECT type, ref, code, state
FROM mytable AS t1
WHERE state = 'I' AND type = 't1' AND
NOT EXISTS (SELECT 1
FROM mytable AS t2
WHERE t1.ref = t2.ref AND t1.code = t2.code AND
state = 'V' AND type = 't2')
```
So, the `INSERT` statement can look like this:
```
INSERT INTO mytable
SELECT 't2', ref, code, 'V'
FROM mytable AS t1
WHERE state = 'I' AND type = 't1' AND
NOT EXISTS (SELECT 1
FROM mytable AS t2
WHERE t1.ref = t2.ref AND t1.code = t2.code AND
state = 'V' AND type = 't2')
```
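The anti-join insert can be checked against the question's exact sample data with Python's bundled SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mytable (
    type TEXT, ref INTEGER, code TEXT, state TEXT,
    PRIMARY KEY (type, ref, code))""")
rows = [("t1", 1, "c1", "V"), ("t1", 2, "c1", "V"), ("t1", 3, "c1", "I"),
        ("t2", 3, "c1", "V"), ("t1", 4, "c1", "V"), ("t1", 5, "c1", "I")]
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?)", rows)

# Insert a valid t2 row for every invalid t1 row that lacks one
conn.execute("""
    INSERT INTO mytable
    SELECT 't2', ref, code, 'V'
    FROM mytable AS t1
    WHERE state = 'I' AND type = 't1' AND
          NOT EXISTS (SELECT 1
                      FROM mytable AS t2
                      WHERE t1.ref = t2.ref AND t1.code = t2.code AND
                            t2.state = 'V' AND t2.type = 't2')
""")
added = conn.execute(
    "SELECT ref FROM mytable WHERE type = 't2' ORDER BY ref").fetchall()
```

Only ref 5 gets a new `t2` row; ref 3 already had one.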
|
If it is a primary key, it must be unique and not null by definition. So if you need two or more states, you need another table, or another field (if there are just two states), but then the query becomes more involved.
|
Duplicate All single rows in database table
|
[
"",
"sql",
"sql-server",
"sql-insert",
"not-exists",
""
] |
I have searched this website for all possible solutions but still can't find an answer for my Pivot problem.
I have a table with the following data.
```
Portfolio | Date | TotalLoans | ActiveLoans | TotalBalance
--------------------------------------------------------------------
P1 | 2015-12-31 | 1,000 | 900 | 100,000.00
P1 | 2015-11-30 | 1,100 | 800 | 100,100.00
P1 | 2015-10-31 | 1,200 | 700 | 100,200.00
```
I am trying to create a pivot with the following output (only where Portfolio = P1)
```
Field | 2015-12-31 | 2015-11-30 | 2015-10-31 |
-----------------------------------------------------
TotalLoans | 1,000 | 1,100 | 1,200 |
ActiveLoans | 900 | 800 | 700 |
TotalBalance | 100,000 | 100,100 | 100,200 |
```
Ideally, I am looking for a dynamic pivot, but a static query would do as well and I can try a dynamic query out of that.
|
You need first to [**`UNPIVOT`**](https://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396) your table. You can do it using this query:
```
SELECT Portfolio, [Date], Val, ColType
FROM (SELECT Portfolio,
[Date],
TotalLoans,
ActiveLoans,
TotalBalance
FROM mytable
WHERE Portfolio = 'P1') AS srcUnpivot
UNPIVOT (
Val FOR ColType IN (TotalLoans, ActiveLoans, TotalBalance)) AS unpvt
```
**Output:**
```
Portfolio Date Val ColType
===============================================
P1 2015-12-31 1000 TotalLoans
P1 2015-12-31 900 ActiveLoans
P1 2015-12-31 100000 TotalBalance
P1 2015-11-30 1100 TotalLoans
P1 2015-11-30 800 ActiveLoans
P1 2015-11-30 100100 TotalBalance
P1 2015-10-31 1200 TotalLoans
P1 2015-10-31 700 ActiveLoans
P1 2015-10-31 100200 TotalBalance
```
**Note:** *All* unpivoted fields must be of the *same type*. The query above assumes a type of *int* for all fields. If this is not the case then you have to use `CAST`.
Using the above query you can apply `PIVOT`:
```
SELECT Portfolio, ColType, [2015-12-31], [2015-11-30], [2015-10-31]
FROM (
   ... above query here ...
) AS srcPivot
PIVOT (
   MAX(Val) FOR [Date] IN ([2015-12-31], [2015-11-30], [2015-10-31])) AS pvt
```
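The unpivot-then-pivot shape is easier to see outside SQL. Here is a plain-Python sketch of the same two steps: melt the measure columns into `(field, date, value)` rows, then spread the dates back out as keys per field:

```python
# Two of the question's source rows
rows = [
    {"Date": "2015-12-31", "TotalLoans": 1000,
     "ActiveLoans": 900, "TotalBalance": 100000},
    {"Date": "2015-11-30", "TotalLoans": 1100,
     "ActiveLoans": 800, "TotalBalance": 100100},
]
measures = ["TotalLoans", "ActiveLoans", "TotalBalance"]

# Unpivot: one (field, date, value) triple per measure per row
long_rows = [(m, r["Date"], r[m]) for r in rows for m in measures]

# Pivot: {field: {date: value}}
pivoted = {}
for field, d, val in long_rows:
    pivoted.setdefault(field, {})[d] = val
```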
|
This is Giorgos Betsos's solution as dynamic SQL, which avoids the need to write the date values explicitly.
Please: If you like this, **do not** mark this solution as accepted; accept Giorgos Betsos's answer instead. There's the hard work! But you may vote on this one :-)
```
CREATE TABLE #tbl(Portfolio VARCHAR(10),[Date] DATE,TotalLoans DECIMAL(10,4),ActiveLoans DECIMAL(10,4),TotalBalance DECIMAL(10,4));
INSERT INTO #tbl VALUES
('P1','20151231',1000,900,100000.00)
,('P1','20151130',1100,800,100100.00)
,('P1','20151031',1200,700,100200.00);
DECLARE @pvtColumns VARCHAR(MAX)=
(
STUFF(
(
SELECT DISTINCT ',['+CONVERT(VARCHAR(10), [Date] ,126) + ']'
FROM #tbl
FOR XML PATH('')
)
,1,1,'')
);
DECLARE @cmd VARCHAR(MAX)=
'SELECT Portfolio, ColType, ' + @pvtColumns +
' FROM (
SELECT Portfolio, [Date], Val, ColType
FROM (SELECT Portfolio,
[Date],
TotalLoans,
CAST(ActiveLoans AS DECIMAL(10,4)) AS ActiveLoans,
TotalBalance
FROM #tbl AS mytable
WHERE Portfolio = ''P1'') AS srcUnpivot
UNPIVOT (
Val FOR ColType IN (TotalLoans, ActiveLoans, TotalBalance)) AS unpvt
) AS srcPivot
PIVOT (
MAX(Val) FOR [Date] IN (' + @pvtColumns + ')) AS pvt';
EXEC (@cmd);
```
|
SQL Server Pivot on multiple fields
|
[
"",
"sql",
"sql-server",
"pivot",
""
] |
I have just been transferred to a new job site and given a set of SQL statements which are in operation. Several of them are written like [A] below. What looks weird to me is that the alias of the table is the same as the table name itself. Allow me to ask two questions about [A].
```
[A] select /*+ INDEX(TableA IDX01) */ * from TableA TableA
[B] select /*+ INDEX(a IDX01) */ * from TableA a
[C] select /*+ INDEX(TableA IDX01) */ * from TableA a
```
Q1. Is hint clause in [A] valid? I know that hint clause in [C] is invalid because the table name is used instead of the alias in the hint clause, though that in [B] is valid.
Q2. Is there any advantage to give a table the same alias as its name?
Thanks in advance for your generous support for an Oracle beginner working alone in a new environment.
|
Yes, the first hint appears to be syntactically valid (though presumably you can test that yourself) assuming that `IDX01` is an index that exists on `TableA` and that it is possible for the query to use that index. I'm not a fan of having this sort of hint in production code since it generally implies that you're trying to cover up an underlying problem with statistics but it appears to be valid.
There is nothing wrong with using an alias that matches the table name. It's possible that someone added that alias to make the hint valid or because the query originally didn't have an alias and some of the columns in the `select` list or `where` clause were using the `table_name.column_name` syntax. Of course, there are other possibilities that we could speculate about. Someone in your organization may know more of the history or source control might show the evolution.
|
I would like to respond only to your second question. It is absolutely OK to provide an alias with the same name as the table, and absolutely unnecessary to do so. It adds no clarity, and you can still use the `table_name.column_name` syntax without the alias.
|
Oracle SQL - a hint clause and a table alias
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to load a local DB with SQLite on a RedHat Linux server. I have a C program to load the database from a very large file, splitting the columns. The bad news is that the sqlite3 development headers are not installed on the machine (`fatal error: sqlite3.h: No such file or directory`) and I won't be able to get permission to install `libsqlite3-dev` ([according to this](https://stackoverflow.com/a/31764947/1709738)), so I can only use it through bash or Python:
```
[dhernandez@zl1:~]$ locate sqlite3
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so.0
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so.0.8.6
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3
/opt/xiv/host_attach/xpyv/lib/python2.7/lib-dynload/_sqlite3.so
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/__init__.py
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/dbapi2.py
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/dump.py
/usr/bin/sqlite3
/usr/lib64/libsqlite3.so.0
/usr/lib64/libsqlite3.so.0.8.6
/usr/lib64/python2.6/sqlite3
/usr/lib64/python2.6/lib-dynload/_sqlite3.so
/usr/lib64/python2.6/sqlite3/__init__.py
/usr/lib64/python2.6/sqlite3/__init__.pyc
/usr/lib64/python2.6/sqlite3/__init__.pyo
/usr/lib64/python2.6/sqlite3/dbapi2.py
/usr/lib64/python2.6/sqlite3/dbapi2.pyc
/usr/lib64/python2.6/sqlite3/dbapi2.pyo
/usr/lib64/python2.6/sqlite3/dump.py
/usr/lib64/python2.6/sqlite3/dump.pyc
/usr/lib64/python2.6/sqlite3/dump.pyo
/usr/lib64/xulrunner/libmozsqlite3.so
/usr/share/man/man1/sqlite3.1.gz
/usr/share/mime/application/x-kexiproject-sqlite3.xml
/usr/share/mime/application/x-sqlite3.xml
```
Which of the following options would be faster?
1. Split the columns in my C program, and then execute the insert like
this:
```
system("echo 'insert into t values(1,2);' | sqlite3 mydb.db");
```
2. Split the columns in my C program, save them to a temp file and, when
   I've got 500,000 rows, execute the script like this (and then empty
   the temp file to continue loading rows):
```
system("sqlite3 mydb.db < temp.sql");
```
3. Split the columns in my C program adding a delimiter between them, save it to a temp file and import it like this:
```
.delimiter '@'
.import temp.txt t
```
|
You can use the [amalgamation](https://www.sqlite.org/amalgamation.html) version. It is a single `.c` file you can include in your project, and all of SQLite is available. No need for dynamic linking.
|
You could try to dynamically load the sqlite3 library at runtime.
There is quite a bit to learn about it, but it is a powerful facility and I am quite sure it would solve your problem.
Here is a link describing how you can do it : <http://tldp.org/HOWTO/Program-Library-HOWTO/dl-libraries.html>
|
How to use sqlite3 in C without the dev library installed?
|
[
"",
"sql",
"c",
"bash",
"sqlite",
""
] |
My scenario is to have a table for logging and reporting some important settings that users can change in a set of sites we maintain (each site has its own DB). When I was designing the table (to be deployed on each DB), I was unsure whether it requires a primary key. I have a column named `SiteID`, and each row in the logging table will have `SiteId`, `newVal of setting`, `oldVal of setting`, and `change date`. I will be using this to see the history of changes on a site, filter changes by date, etc. In this scenario, `SiteId` alone obviously cannot be a PK, but do I really have to add a new column like `LogId` to make a composite PK? Am I missing anything here?
|
You need to focus on your use case.
If you can avoid data duplication (e.g. from the application side) and you know your data and what type of index etc. you need, you can easily do without one; mainly if you are talking about massive scans for reporting, where data duplication is not an issue.
In many MPP databases (databases that target reporting, e.g. Vertica), a primary key is optional.
So if you know your data and your use cases, you can avoid primary keys.
|
A table without duplicate rows *always has* at least one candidate key. We can pick one to declare via `PRIMARY KEY`; any others are via `UNIQUE NOT NULL`. (`UNIQUE NOT NULL` is the constraint that you get from a `PRIMARY KEY` declaration.) If you don't *tell* the DBMS about your candidate keys then it can't help you by, say, reducing errors by preventing duplicates or improving performance by implicitly defining an index on its columns.
It's not clear from your question how you record what setting given new & old values are for. If your columns are `siteId`, `setting`, `newValue`, `oldValue` & `changeDate` then your table has one candidate key, `(siteID, setting, changeDate)`. As the only candidate key, it is the primary key. If setting values identify the settings so that your columns are `siteId`, `newValue`, `oldValue` & `changeDate` then you have two candidate keys, `(siteID, newValue, changeDate)` and `(siteID, oldValue, changeDate)`. Each should get a `UNIQUE NOT NULL` constraint. One could be via a PRIMARY KEY declaration.
If you add another column of unique/surrogate/id values then the table has another candidate key. As before constrain all of them via a `UNIQUE NOT NULL` constraint, one of which can be expressed via `PRIMARY KEY`. There are reasons for adding such a unique/surrogate/id column, but it's not because there's no primary key unless the table would otherwise have duplicates rows.
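Declaring the candidate key pays off immediately: the DBMS then rejects duplicate logical rows on its own. A sketch with Python's bundled SQLite, using the candidate key suggested above (the column names are my guesses at the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE setting_log (
    siteId     INTEGER NOT NULL,
    setting    TEXT    NOT NULL,
    changeDate TEXT    NOT NULL,
    newVal     TEXT,
    oldVal     TEXT,
    PRIMARY KEY (siteId, setting, changeDate))""")

conn.execute("INSERT INTO setting_log VALUES "
             "(1, 'theme', '2016-01-01', 'dark', 'light')")
try:
    # Same site, setting, and timestamp -> violates the candidate key
    conn.execute("INSERT INTO setting_log VALUES "
                 "(1, 'theme', '2016-01-01', 'x', 'y')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```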
|
No primary key vs Composite primary Key
|
[
"",
"sql",
"database-design",
""
] |
I have a table `Product`:
```
Name Description
----------------------
x 1
y 2
z 3
```
I have another table `Producttemp`:
```
Name Description
------------------
x 1
x 1
x 2
r 3
r 3
z 8
z 8
```
I need to insert data from `Producttemp` into `Product` and only that data which is in combination of `Name` and `Description`.
So as of now `(x,1)` should not be inserted because this combination already exists in the `Product` table; only `(x,2)`, `(r,3)` and `(z,8)` should be inserted, and we don't have to insert duplicate combinations.
I am trying with this query :
```
create table #vhf (pk_id numeric)
INSERT INTO product (product_name, product_description)
OUTPUT INSERTED.* INTO #vhf
SELECT
temp.product_name,
temp.product_description
FROM
producttemp
WHERE
NOT EXISTS (SELECT distinct temp.product_name FROM product prj, product temp
WHERE temp.product_description = prj.product_description
AND temp.product_name = prj.product_name)
```
This query returns all the values that are not in the `product` table, but it also inserts the duplicate rows.
|
Try this one
```
INSERT INTO [Product]
SELECT DISTINCT PT.[Name],PT.[Description]
FROM [Producttemp] AS PT
LEFT OUTER JOIN [Product] AS P ON P.[Name] = PT.[Name]
AND P.[Description] = PT.[Description]
WHERE P.[Name] IS NULL
```
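As a quick sanity check, the anti-join pattern above can be reproduced with Python's built-in `sqlite3` (not part of the original answer; table and data are taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Product (Name TEXT, Description TEXT);
    CREATE TABLE Producttemp (Name TEXT, Description TEXT);
    INSERT INTO Product VALUES ('x','1'),('y','2'),('z','3');
    INSERT INTO Producttemp VALUES
        ('x','1'),('x','1'),('x','2'),('r','3'),('r','3'),('z','8'),('z','8');
""")

# Same pattern as the answer: keep only temp rows whose (Name, Description)
# combination has no match in Product, de-duplicated with DISTINCT.
con.execute("""
    INSERT INTO Product
    SELECT DISTINCT PT.Name, PT.Description
    FROM Producttemp AS PT
    LEFT JOIN Product AS P
        ON P.Name = PT.Name AND P.Description = PT.Description
    WHERE P.Name IS NULL
""")
rows = sorted(con.execute("SELECT Name, Description FROM Product").fetchall())
```

Only `(x,2)`, `(r,3)` and `(z,8)` are added; the existing combinations and the in-temp duplicates are skipped.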
|
From what I understood you do not want to insert rows from '**Producttemp**' that are already in '**Product**'.
For this you can use **MERGE**
```
MERGE Product AS P
USING(SELECT DISTINCT Name,Descrip FROM Producttemp) AS PT
ON P.Name = PT.Name AND P.Descrip=PT.Descrip
WHEN NOT MATCHED THEN
INSERT(Name,Descrip) VALUES(PT.Name,PT.Descrip);
```
|
Insert of data in a query in SQL Server
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I have a table (in SQL Server) that has 2 columns. One column contains multiple server names separated with a tilda `~`. This column could or could not contain the actual server name. The other column has the actual server name.
I am looking for a way to separate the first column's values into their own separate columns. The number of aliases ranges from 1 to ?.
```
Server_Alias_Names Actual_Server_Name
------------------------------------------------------------
Server1~ROSCO24~Server3~Server4~~~~~ ROSCO24
STEVETESDB26~~~~~~~~~ STEVETESDB26
RALPHPRD117~RALPHPRD117-adm~Server0025~Server0025a1~Server0025a2~Server0025a3~~~~~ RALPHPRD117
Server1001~Server1001R1~Server1001-adm~~~~~~~ DBTEST1001
```
I have extracted the first two servers from the string, but I am having trouble with the next few. Any help is appreciated!
```
SELECT
LEFT(Server_Alias_Names, CHARINDEX('~', Server_Alias_Names) - 1) as 'First_Server',
SUBSTRING(Server_Alias_Names,len(LEFT(Server_Alias_Names, CHARINDEX('~', Server_Alias_Names)+1)),LEN(LEFT(Server_Alias_Names, CHARINDEX ('~', Server_Alias_Names)))) as 'Second_Server'
FROM
TBL_NAME
```
|
@Hogan - I ended up combining your script with some help from the link that @Sean Lange put in his comment. Here is what I came up with.
```
WITH splitit
AS
(
SELECT
y.i.value('(./text())[1]', 'nvarchar(4000)') as Separated_Server,
Actual_Server_Name
FROM
(
SELECT x = CONVERT(XML, '<i>'
+ REPLACE(Server_Alias_Names,'~', '</i><i>')
+ '</i>').query('.')
, Actual_Server_Name
FROM TBL_NAME
) as a cross apply x.nodes('i') as y(i)
)
SELECT DISTINCT
Actual_Server_Name,
Separated_Server
FROM splitit
Where Separated_Server is not null
ORDER BY 1,2
--Some of the separated items were also sorted by a comma so I added another step to separate those as well.
--Uncomment the code below for an additional and replace the comma after Separated_Server with the special character you want to use for separation
--,splitit_2
--AS
--(
--SELECT distinct
--ServerName,
--y.i.value('(./text())[1]', 'nvarchar(4000)') as Server_Alias
--FROM
-- (
-- SELECT x = CONVERT(XML, '<i>'
-- + REPLACE(Separated_Server,',', '</i><i>')
-- + '</i>').query('.')
-- ,[ServerName]
-- FROM splitit
-- ) as a cross apply x.nodes('i') as y(i)
--)
--SELECT DISTINCT
--Actual_Server_Name,
--Separated_Server
--FROM splitit_2
--Where Separated_Server is not null
--ORDER BY 1,2
```
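The XML trick above is SQL Server-specific; as a cross-check, the same split-and-dedupe logic can be expressed in plain Python (not part of the original answer; two sample rows are reproduced from the question):

```python
# Each row pairs the tilde-separated alias string with the actual server name.
rows = [
    ("Server1~ROSCO24~Server3~Server4~~~~~", "ROSCO24"),
    ("STEVETESDB26~~~~~~~~~", "STEVETESDB26"),
]

# Split on '~', drop the empty strings produced by trailing '~~~',
# and collect distinct (actual, alias) pairs.
result = sorted(
    {(actual, alias)
     for aliases, actual in rows
     for alias in aliases.split("~")
     if alias}
)
```

This mirrors the `DISTINCT ... WHERE Separated_Server IS NOT NULL` filtering of the SQL version.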
|
The easiest way to accomplish this is with a string splitter. Aaron Bertrand has a fairly complete list of viable options here: <http://sqlperformance.com/2012/07/t-sql-queries/split-strings>. Notice that none of these have any loops or recursion. I am not quite sure what you are trying to do with this information, but you stated you just wanted to parse the strings.
|
SQL - Separate Out Text From String
|
[
"",
"sql",
"sql-server",
"substring",
""
] |
```
EmpID EmpName EmpSalary EmpDept
1 Steve 5000 HR
2 Robert 5000 Management
3 Brad 3000 HR
4 Sam 4000 HR
5 Dave 2000 Management
6 Stuvart 4500 Management
```
How do I get the details of the employee in the HR department with the maximum salary from the EMPLOYEE table? My query is:
```
SELECT EmpID,EmpName,EmpSalary,EmpDept
FROM EMPLOYEE
WHERE EmpSalary IN (select max(EmpSalary) from EMPLOYEE)
AND EMPDEPT='HR'
```
I tried the above query and it gives me the correct result, but due to performance issues I cannot use subqueries.
|
You can use `rownum` for Oracle *versions < 12c*, but note that `ROWNUM` is assigned *before* `ORDER BY` is applied, so the sort has to happen in an inline view:
```
SELECT EmpID, EmpName, EmpSalary, EmpDept
FROM (
    SELECT EmpID, EmpName, EmpSalary, EmpDept
    FROM EMPLOYEE
    WHERE EMPDEPT = 'HR'
    ORDER BY EmpSalary DESC
)
WHERE ROWNUM = 1
```
On Oracle 12c and later you can use the following instead:
```
SELECT EmpID, EmpName, EmpSalary, EmpDept
FROM EMPLOYEE
WHERE EMPDEPT = 'HR'
ORDER BY EmpSalary DESC
FETCH FIRST ROW WITH TIES
```
***P.S.:*** the `with ties` option lets you return multiple rows when several employees share the same maximum salary (*so the result set may differ from the `rownum` version, which always returns exactly one row*).
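As a quick cross-check of the "sort, then keep the top row" idea, here is a sketch using Python's built-in `sqlite3` (not part of the original answer; SQLite has neither `ROWNUM` nor `FETCH FIRST`, so `ORDER BY ... LIMIT 1` stands in; table and data are from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE EMPLOYEE (EmpID INTEGER, EmpName TEXT, EmpSalary INTEGER, EmpDept TEXT);
    INSERT INTO EMPLOYEE VALUES
        (1,'Steve',5000,'HR'),(2,'Robert',5000,'Management'),
        (3,'Brad',3000,'HR'),(4,'Sam',4000,'HR'),
        (5,'Dave',2000,'Management'),(6,'Stuvart',4500,'Management');
""")

# Filter to HR first, then sort by salary and keep the top row.
top = con.execute("""
    SELECT EmpID, EmpName, EmpSalary, EmpDept
    FROM EMPLOYEE
    WHERE EmpDept = 'HR'
    ORDER BY EmpSalary DESC
    LIMIT 1
""").fetchone()
```

Steve (5000, HR) is returned, matching what the Oracle queries above produce.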
|
```
SELECT EmpID,EmpName,MAX(EmpSalary),EmpDept
FROM Employee
WHERE EmpDept='HR'
GROUP BY EmpSalary
```
|
Maximum salary of employee in specific department
|
[
"",
"sql",
"oracle",
"performance",
"oracle11g",
""
] |
I have a table called **emp**:
```
╔════╦══════╦═══════════╗
║ id ║ name ║ fathersid ║
╠════╬══════╬═══════════╣
║ 1 ║ a ║ 2 ║
║ 2 ║ s ║ null ║
║ 3 ║ d ║ 1 ║
║ 4 ║ f ║ 3 ║
╚════╩══════╩═══════════╝
```
I want to print the name corresponding with its father's name.
I have written the following query:
```
SELECT name,
(SELECT name FROM emp WHERE father_id = id)
FROM emp
```
Is this correct?
If not, then what is the right answer?
|
It's almost correct; you need to alias the tables so the reader can see that the inner query is correlated with the outer query:
```
SELECT t.NAME,
(SELECT s.name FROM emp s where s.id = t.father_id) as Father_name
FROM emp t
```
You can do this with a join:
```
SELECT t.name,s.name as Father_Name
FROM emp t
LEFT OUTER JOIN emp s
ON(t.father_id = s.id)
```
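As a sanity check, the self-join version can be run against the question's data with Python's built-in `sqlite3` (not part of the original answer; the query uses the question's column name `fathersid` rather than `father_id`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp (id INTEGER, name TEXT, fathersid INTEGER);
    INSERT INTO emp VALUES (1,'a',2),(2,'s',NULL),(3,'d',1),(4,'f',3);
""")

# LEFT OUTER JOIN keeps 's', whose father is unknown (NULL fathersid).
rows = con.execute("""
    SELECT t.name, s.name AS father_name
    FROM emp t
    LEFT OUTER JOIN emp s ON t.fathersid = s.id
    ORDER BY t.id
""").fetchall()
```

The result pairs every employee with their father's name, or `NULL` where there is none.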
|
I prefer to solve this type of problem using a self-join:
```
SELECT e1.name AS name, COALESCE(e2.name, 'Not Available') AS fatherName
FROM emp e1 LEFT JOIN emp e2
ON e1.fathersid = e2.id
```
|
Query for replacing one field with the value of corresponding field
|
[
"",
"mysql",
"sql",
"database",
"oracle",
""
] |
I need to assign two values to my select based on a CASE statement. In pseudo:
```
select
userid
, case
when name in ('A', 'B') then 'Apple'
when name in ('C', 'D') then 'Pear'
end as snack
from
table
;
```
I am assigning a value for `snack`. But lets say I also want to assign a value for another variable, `drink` based on the same conditions. One way would be to repeat the above:
```
select
userid
, case
when name in ('A', 'B') then 'Apple'
when name in ('C', 'D') then 'Pear'
end as snack
, case
when name in ('A', 'B') then 'Milk'
when name in ('C', 'D') then 'Cola'
end as drink
from
table
;
```
However, if I have to assign more values based on the same conditions, say `food`, `drink`, `room`, etc. this code becomes hard to maintain.
Is there a better way of doing this? Can I put this in a SQL function, like you would normally do in another (scripting) language and if so, could you please explain how?
|
Functions destroy performance. But you could use a common-table-expression(cte):
```
with cte as
(
Select IsNameInList1 = case when name in ('A', 'B')
then 1 else 0 end,
IsNameInList2 = case when name in ('C', 'D')
then 1 else 0 end,
             t.*
      from table t
)
select
userid
, case when IsNameInList1=1 then 'Apple'
when IsNameInList2=1 then 'Pear'
end as snack
, case when IsNameInList1=1 then 'Milk'
when IsNameInList2=1 then 'Cola'
end as drink
from
cte
;
```
This way you have only one place to maintain.
If query performance doesn't matter, you can use a scalar-valued function like this:
```
CREATE FUNCTION [dbo].[IsNameInList1]
(
@name varchar(100)
)
RETURNS bit
AS
BEGIN
DECLARE @isNameInList bit
BEGIN
SET @isNameInList =
CASE WHEN @name in ('A', 'B')
THEN 1
ELSE 0
END
END
RETURN @isNameInList
END
```
Then you can use it in your query in this way:
```
select
userid
, case when dbo.IsNameInList1(name) = 1 then 'Apple'
when dbo.IsNameInList2(name) = 1 then 'Pear'
end as snack
from
table
;
```
But a more efficient approach would be to use a real table to store them.
|
When doing things like this I tend to use a join with a [table valued constructor](https://msdn.microsoft.com/en-us/library/dd776382.aspx):
```
SELECT t.UserID,
s.Snack,
s.Drink
FROM Table AS T
LEFT JOIN
(VALUES
(1, 'Apple', 'Milk'),
(2, 'Pear', 'Cola')
) AS s (Condition, Snack, Drink)
ON s.Condition = CASE
WHEN t.name IN ('A', 'B') THEN 1
WHEN t.name IN ('C', 'D') THEN 2
END;
```
I find this to be the most flexible if I need to add further conditions, or columns.
Or more verbose, but also more flexible:
```
SELECT t.UserID,
s.Snack,
s.Drink
FROM Table AS T
LEFT JOIN
(VALUES
('A', 'Apple', 'Milk'),
('B', 'Apple', 'Milk'),
('C', 'Pear', 'Cola'),
('D', 'Pear', 'Cola')
) AS s (Name, Snack, Drink)
ON s.Name= t.name;
```
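To see the lookup-table join above in action, here is a sketch using Python's built-in `sqlite3` (not part of the original answer; the table name `t` and its three sample rows are made up, and the `VALUES` list is wrapped in a CTE so the columns can be named):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (userid INTEGER, name TEXT);
    INSERT INTO t VALUES (1,'A'),(2,'C'),(3,'X');
""")

# The lookup table maps each name to its snack and drink in one place;
# adding a column or a condition means touching only this list.
rows = con.execute("""
    WITH s(name, snack, drink) AS (
        VALUES ('A','Apple','Milk'), ('B','Apple','Milk'),
               ('C','Pear','Cola'),  ('D','Pear','Cola')
    )
    SELECT t.userid, s.snack, s.drink
    FROM t LEFT JOIN s ON s.name = t.name
    ORDER BY t.userid
""").fetchall()
```

Names with no mapping (here `'X'`) come back as `NULL`/`NULL`, which is the behavior of the `LEFT JOIN` in the answer.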
|
How to assign multiple values in CASE statement?
|
[
"",
"sql",
"sql-server",
"case-statement",
""
] |
I have a select statement:
```
select DATEDIFF(day,[Contract Start Date],[Contract End Date]) as contract_time
from table 1
```
now, how to add into next column if statement:
```
contract_time >= 390 then display A
contract_time < 390 then display B
contract_time is null display C? (because Contract start date or End date can be null)
```
thanks for help!
|
Use a [case expression](https://msdn.microsoft.com/en-us/library/ms181765.aspx):
```
;With cte as
(
select DATEDIFF(day,[Contract Start Date],[Contract End Date]) as contract_time,
a,
b,
c
    from [table 1]
)
select contract_time,
case when contract_time is null then c
when contract_time >= 390 then a
when contract_time < 390 then b
end as otherColumn
from cte
```
Note that a, b, and c must all be of the same data type.
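Here is a small sketch of the same `CASE` logic using Python's built-in `sqlite3` (not part of the original answer; SQLite has no `DATEDIFF`, so `julianday()` stands in, and the contract rows and column names `start_date`/`end_date` are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contracts (start_date TEXT, end_date TEXT);
    INSERT INTO contracts VALUES
        ('2015-01-01','2016-03-01'),   -- 425 days -> A
        ('2015-01-01','2015-06-01'),   -- 151 days -> B
        ('2015-01-01',NULL);           -- NULL     -> C
""")

# Check the NULL branch first: NULL compares as unknown, so it would
# otherwise fall through both numeric branches.
rows = con.execute("""
    SELECT CAST(julianday(end_date) - julianday(start_date) AS INTEGER) AS contract_time,
           CASE WHEN end_date IS NULL THEN 'C'
                WHEN julianday(end_date) - julianday(start_date) >= 390 THEN 'A'
                ELSE 'B'
           END AS bucket
    FROM contracts
""").fetchall()
```

The `WHEN ... IS NULL` branch must come first, exactly as in the answer's `CASE` expression.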
|
Use the IIF function, which is supported in SQL Server 2012 and later; otherwise you can use CASE WHEN ... THEN ... END:
```
;With cte_table1 as
(
SELECT
DATEDIFF(day,[Contract Start Date],[Contract End Date]) as contract_time,
A,
B,
C
FROM [table 1]
)
SELECT contract_time,
IIF(contract_time is null,C,
IIF(contract_time >= 390, A,B))
as otherColumn
FROM cte_table1
```
|
If statement based on other column
|
[
"",
"sql",
"sql-server",
"datetime",
"if-statement",
"switch-statement",
""
] |
I have three tables: `albums`, `songs`, and `images`.
The tables look something like this:
`albums` table:
```
id | title | image_id
5 | First Album | 1
6 | Another Album | 2
```
`songs` table:
```
id | album_id | title | image_id
32 | 5 | My Song | 3
33 | 5 | Another One | 4
34 | 5 | First Song | 0
35 | 6 | My Song #2 | 0
36 | 6 | Fancy Title | 0
37 | 6 | My Love Song | 5
```
`images` table:
```
id | path
1 | path/to/image1.jpg
2 | path/to/image2.jpg
3 | path/to/image3.jpg
4 | path/to/image4.jpg
5 | path/to/image5.jpg
```
I am trying to display a list of songs with their titles and corresponding images.
If the songs don't have an image (if the `image_id` column is `0`), then I want to just use the album image.
The only code I have so far is this, and I am stuck:
```
SELECT songs.title, images.path
FROM songs
LEFT JOIN images
ON images.id = songs.image_id
// ...if songs.image_id is 0, i want to just use the image_id of the corresponding album instead
```
It would be nice if it was an efficient query that could work with large tables.
|
Here is a solution using multiple LEFT JOINs, with a demo using the data you provided.
The final query:
```
SELECT
s.title,
COALESCE(i1.path, i2.path)
FROM songs s
LEFT JOIN albums a ON s.album_id = a.id
LEFT JOIN images i1 ON s.image_id = i1.id
LEFT JOIN images i2 ON a.image_id = i2.id
ORDER BY s.id;
```
Below is a full demo.
SQL:
```
-- Data for demo
create table albums(id int, title char(100), image_id int);
insert into albums values
(5 , 'First Album' , 1),
(6 , 'Another Album' , 2);
create table songs(id int, album_id int, title char(100), image_id int);
insert into songs values
(32 , 5 , 'My Song' , 3),
(33 , 5 , 'Another One' , 4),
(34 , 5 , 'First Song' , 0),
(35 , 6 , 'My Song #2' , 0),
(36 , 6 , 'Fancy Title' , 0),
(37 , 6 , 'My Love Song' , 5);
create table images(id int, path char(200));
insert into images values
(1 , 'path/to/image1.jpg'),
(2 , 'path/to/image2.jpg'),
(3 , 'path/to/image3.jpg'),
(4 , 'path/to/image4.jpg'),
(5 , 'path/to/image5.jpg');
SELECT * FROM albums;
SELECT * FROM songs;
SELECT * FROM images;
-- SQL needed
SELECT
s.title,
COALESCE(i1.path, i2.path)
FROM songs s
LEFT JOIN albums a ON s.album_id = a.id
LEFT JOIN images i1 ON s.image_id = i1.id
LEFT JOIN images i2 ON a.image_id = i2.id
ORDER BY s.id;
```
Output:
```
mysql> SELECT
-> s.title,
-> COALESCE(i1.path, i2.path)
-> FROM songs s
-> LEFT JOIN albums a ON s.album_id = a.id
-> LEFT JOIN images i1 ON s.image_id = i1.id
-> LEFT JOIN images i2 ON a.image_id = i2.id
-> ORDER BY s.id;
+--------------+----------------------------+
| title | COALESCE(i1.path, i2.path) |
+--------------+----------------------------+
| My Song | path/to/image3.jpg |
| Another One | path/to/image4.jpg |
| First Song | path/to/image1.jpg |
| My Song #2 | path/to/image2.jpg |
| Fancy Title | path/to/image2.jpg |
| My Love Song | path/to/image5.jpg |
+--------------+----------------------------+
6 rows in set (0.00 sec)
```
|
Something like this should work:
```
SELECT s.title, COALESCE(si.path, ai.path)
FROM albums AS a
INNER JOIN songs s ON a.id = s.album_id
LEFT JOIN images ai ON ai.id = a.image_id
LEFT JOIN images si ON si.id = s.image_id
```
Note that you have to join the `images` table to the `albums` table AND to the `songs` table. `ai` and `si` are separate aliases for these two joins, so that [`COALESCE`](http://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_coalesce) has both candidate paths to choose from.
(Note that the `INNER JOIN` is debatable depending on how you expect your data to look.)
|
MySQL conditional LEFT JOIN?
|
[
"",
"mysql",
"sql",
""
] |
I have a table with two columns.
* The first column is a date stored as: (example): `01/01/2015 00:00:00`
* The seconds column is a number : as (example) : 500
I have written an SQL Statement which looks like this:
```
select *
from cc_open_incident_view
WHERE (DATE =(NOW() - INTERVAL 1 MONTH))
```
If I execute this statement, it doesn't retrieve any data and I can't find the mistake.
I want only the data for the last month (the last 30 days) as the result of the query.
Would appreciate any help..
Edit as per OP comments:
Date is saved as date and it's working, but the column date in my table has stored the dates for the next 5 years and it shows the data like: `09.03.2016` (which is tomorrow) too. Is there any way to only show the date from `30 days` back till `today`?
|
Edit: Changed query as per OP
```
select *
from cc_open_incident_view
WHERE date between (CURDATE() - INTERVAL 1 MONTH ) and CURDATE()
```
Previous Answer:
If date is saved as `date` then use this
```
select *
from cc_open_incident_view
WHERE date >= (CURDATE() - INTERVAL 1 MONTH )
```
If date is saved as a string then use this (assuming it is in `dd/mm/yyyy hh:mm:ss` format):
```
select *
from cc_open_incident_view
WHERE STR_TO_DATE(date, '%d/%m/%Y %H:%i:%s') >= (CURDATE() - INTERVAL 1 MONTH )
```
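As a quick check of the date-window idea, here is the SQLite analogue using Python's built-in `sqlite3` (not part of the original answer; SQLite uses `date()` modifiers instead of `INTERVAL`, and the column name `d` and the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cc_open_incident_view (d TEXT, n INTEGER);
    INSERT INTO cc_open_incident_view VALUES
        (date('now','-5 days'),  500),   -- inside the window
        (date('now','-45 days'), 300),   -- too old
        (date('now','+10 days'), 200);   -- future row, must be excluded
""")

# Bounding the range on both sides excludes future-dated rows,
# which is exactly the OP's follow-up requirement.
rows = con.execute("""
    SELECT n FROM cc_open_incident_view
    WHERE d BETWEEN date('now','-30 days') AND date('now')
""").fetchall()
```

Only the 5-day-old row survives; an open-ended `>=` filter would also have returned the future row.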
|
```
SELECT *
FROM cc_open_incident_view
WHERE yourdatetimecolumn BETWEEN CURDATE() - INTERVAL 30 DAY AND CURDATE()
```
Also note that `CURDATE()` returns only the `DATE` portion of the date, so if you store `yourdatetimecolumn` as a `DATETIME` with the time portion filled, this query will not select the today's records.
In this case, you'll need to use `NOW` instead:
```
SELECT *
FROM cc_open_incident_view
WHERE yourdatetimecolumn BETWEEN NOW() - INTERVAL 30 DAY AND NOW()
```
|
Mysql -- Last 30 days
|
[
"",
"mysql",
"sql",
""
] |
I am using a smartphone to collect data from the accelerometer and then saving it in a postgresql database, in a server. Basically, each time I read the accelerometer, I save the latitude/longitude at which the smartphone is at the moment, as well as the timestamp where it happened.
Now, I want to read from the database every distinct position (latitude/longitude) in the same order as they were saved (ordered by the timestamp). And I want to know how many readings are repeated in each position.
Let me explain with an example. Consider I have the following table in my database:
```
+------------+------------+-----------+
| latitude | longitude | timestamp |
+------------+------------+-----------+
| 43.1784771 | -8.5956853 | 930560045 |
| 43.1784771 | -8.5956853 | 930560054 |
| 41.2784813 | -7.5956853 | 930560063 |
| 42.1786173 | -8.5951757 | 930560072 |
| 42.1786173 | -8.5951757 | 930560082 |
+------------+------------+-----------|
```
Notice that I have the elements ordered by timestamp, and that I have 2 repeated positions. So, I want to query the database to see the repeated positions and have the following result:
```
+------------+------------+-------+
| latitude | longitude | count |
+------------+------------+-------+
| 43.1784771 | -8.5956853 | 2 |
| 41.2784813 | -7.5956853 | 1 |
| 42.1786173 | -8.5951757 | 2 |
+------------+------------+-------|
```
The problem is that I want the elements ordered as the original table (ordered by timestamp).
I am trying the following query, but it is not working, because the order of rows in a subquery is not guaranteed to be preserved:
```
SELECT latitude, longitude, count(*)
FROM
(SELECT latitude, longitude, timestamp FROM table ORDER BY timestamp asc) subquery1
GROUP BY latitude, longitude
```
I have been looking in StackOverflow for answers and the closest was this one: [Is order in a subquery guaranteed to be preserved?](https://stackoverflow.com/questions/16248813/is-order-in-a-subquery-guaranteed-to-be-preserved)
However, it is not working in my case because I need the "group by" clause. Can anyone help me, please?
|
```
SELECT
latitude,
longitude,
count(1) as "Count",
min(timestamp) as "Start",
max(timestamp) as "End"
FROM table
GROUP BY latitude, longitude
ORDER BY min(timestamp) asc
```
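This can be verified against the question's sample data with Python's built-in `sqlite3` (not part of the original answer; the table name `readings` and column `ts` stand in for the question's table and `timestamp` column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE readings (latitude REAL, longitude REAL, ts INTEGER);
    INSERT INTO readings VALUES
        (43.1784771,-8.5956853,930560045),
        (43.1784771,-8.5956853,930560054),
        (41.2784813,-7.5956853,930560063),
        (42.1786173,-8.5951757,930560072),
        (42.1786173,-8.5951757,930560082);
""")

# Grouping discards row order, so the original order is recovered by
# sorting groups on MIN(ts) -- each position's first appearance.
rows = con.execute("""
    SELECT latitude, longitude, COUNT(*) AS cnt
    FROM readings
    GROUP BY latitude, longitude
    ORDER BY MIN(ts) ASC
""").fetchall()
```

The counts come back as 2, 1, 2 in the original timestamp order, matching the expected result in the question.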
|
```
create or replace function foo(
out latitude numeric,
out longitude numeric,
out cnt int,
out start_time numeric,
out end_time numeric
) returns setof record as $$
declare
c record;
p record;
i int := 1;
begin
select null into p;
for c in (select * from table order by timestamp)
loop
if p is null then
start_time := c.timestamp;
elsif p.latitude <> c.latitude and p.longitude <> c.longitude then
latitude := p.latitude;
longitude := p.longitude;
cnt := i;
end_time := p.timestamp;
return next;
i := 1;
start_time := p.timestamp;
else
i := i + 1;
end if;
p := c;
end loop;
if p.latitude = c.latitude and p.longitude = c.longitude then
latitude := p.latitude;
longitude := p.longitude;
cnt := i;
end_time := p.timestamp;
return next;
end if;
return;
end; $$ immutable language plpgsql;
```
Usage:
```
select * from foo();
```
As a little bonus it is also providing start/end timestamps for each series.
|
Preserve order from subquery (with GROUP BY and ORDER BY)
|
[
"",
"sql",
"postgresql",
"group-by",
"sql-order-by",
""
] |
I am writing SQL and I am stuck on the following statement:
```
select employeeid, (case paidL when'1'
Then
1
when '0'
Then
0
end) as 'paidLeave' from Lea order by employeeid group by 'paidLeave'
```
The above statement is giving me an error related to `'paidLeave'`.
Any help improving the syntax to remove the error will be appreciated.
|
Your case syntax is obviously wrong. But you have another error: *Never* use single quotes for column aliases. This is an example of an error waiting to happen.
It is best to use column and table names that do not need to be escaped. If they do, use square brackets or double quotes.
The query you want should look like this:
```
select employeeid,
(case when paidL = '1' then 1
           when paidL = '0' then 0
end) as paidLeave
from Lea
order by employeeid;
```
I have no idea what `group by 'paidleave'` is supposed to do. It is syntactically wrong on two fronts. If you want the sum, then the query would be something like:
```
select employeeid,
sum(case when paidL = '1' then 1
                when paidL = '0' then 0
end) as paidLeave
from Lea
group by employeeid;
```
I think you need to study up on basic SQL syntax. There are many resources on the web and in books.
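The aggregated form can be sketched with Python's built-in `sqlite3` (not part of the original answer; the `Lea` rows are hypothetical, with `paidL` stored as a `'0'`/`'1'` string per leave record):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Lea (employeeid INTEGER, paidL TEXT);
    INSERT INTO Lea VALUES (1,'1'),(1,'0'),(1,'1'),(2,'0'),(2,'0');
""")

# SUM over the CASE expression gives the paid-leave count per employee.
rows = con.execute("""
    SELECT employeeid,
           SUM(CASE WHEN paidL = '1' THEN 1
                    WHEN paidL = '0' THEN 0 END) AS paidLeave
    FROM Lea
    GROUP BY employeeid
    ORDER BY employeeid
""").fetchall()
```

Employee 1 has two paid leaves, employee 2 has none.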
|
Your case syntax is all wrong; use this:
```
select employeeid, (case when paidL = '1'
Then 1
when paidL = '0'
Then 0
end) as paidLeave
from Lea
order by employeeid
```
Also, order by comes after group by!
Another thing: quote marks are used for strings, not column names. Maybe you meant backticks, but you don't have to use them here because `paidLeave` is not a reserved word.
This whole query seems wrong; the group by doesn't make any sense if you have more than one employee. Maybe you meant to group by employeeid and order by paidLeave?
Maybe you meant to group by employeeid and select the maximum paidLeave? In that case:
```
select employeeid, max((case when paidL = '1'
Then 1
when paidL = '0'
Then 0
end)) as paidLeave
from Lea
group by employeeid
order by employeeid
```
|
sql syntax is not working with group by clause
|
[
"",
"sql",
"sql-server",
""
] |
I have a table that contains the number of workers in a set of different job fields, in all of the wards in the country. How can you find the number of wards that have more than x amount of workers (across all job fields)?
My current query looks like this:
```
SELECT
(SELECT COUNT(1) FROM Table WHERE
(SELECT SUM(working) FROM Table GROUP BY table.ward) > x)
AS working FROM Table;
```
|
```
select count(ward)
from
(SELECT ward FROM Table
GROUP BY ward having SUM(working) > x) t
```
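Here is the same nested aggregation run with Python's built-in `sqlite3` (not part of the original answer; the `jobs` table, its ward/field rows, and the threshold `x = 100` are all hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE jobs (ward TEXT, field TEXT, working INTEGER);
    INSERT INTO jobs VALUES
        ('W1','farming',60),('W1','retail',50),
        ('W2','farming',20),('W2','retail',30),
        ('W3','farming',200);
""")

x = 100
# Inner query: wards whose total workers across all fields exceeds x.
# Outer query: count those wards.
n = con.execute("""
    SELECT COUNT(ward)
    FROM (SELECT ward FROM jobs GROUP BY ward HAVING SUM(working) > ?) t
""", (x,)).fetchone()[0]
```

W1 (110) and W3 (200) pass the threshold, W2 (50) does not, so the count is 2.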
|
```
SELECT ward, SUM(working)
FROM YourTable
GROUP BY ward
HAVING SUM(working) > @x
```
|
How do you count all of the values in a column that satisfy your condition? (sqlite)
|
[
"",
"sql",
"sqlite",
"count",
""
] |
I have a table like the one below and wish to select one random person from each set of records, e.g. rows 2, 9, 11, 20. I don't want to select MAX() as that's not random, and I don't want to select Jack twice. It needs to be one person from each set of records.
```
ID Name Category Level
1 Jack Dragon 3
2 Jack Falls 5
3 Jack Spider 5
4 Jack Apprentice 1
5 Jack Jolly 5
6 Luke Dragon 1
7 Luke Falls 1
8 Luke Spider 3
9 Luke Apprentice 5
10 Luke Jolly 5
11 Mark Dragon 3
12 Mark Falls 3
13 Mark Spider 1
14 Mark Apprentice 3
15 Mark Jolly 1
16 Sam Dragon 3
17 Sam Falls 5
18 Sam Spider 5
19 Sam Apprentice 5
20 Sam Jolly 3
```
|
Assuming set of records = rows with the same value of "Name":
```
with cte_random
as
(
  select *, rank() over (partition by Name order by newid()) as rnk from tbl
)
select id, name, category, level from cte_random where rnk = 1
```
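The same ranking trick can be sketched with Python's built-in `sqlite3` (not part of the original answer; SQLite's random key is `random()` rather than `NEWID()`, window functions require SQLite 3.25+, and only a subset of the question's rows is reproduced):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbl (ID INTEGER, Name TEXT, Category TEXT, Level INTEGER);
    INSERT INTO tbl VALUES
        (1,'Jack','Dragon',3),(2,'Jack','Falls',5),(3,'Jack','Spider',5),
        (6,'Luke','Dragon',1),(9,'Luke','Apprentice',5),
        (11,'Mark','Dragon',3),(16,'Sam','Dragon',3),(20,'Sam','Jolly',3);
""")

# Rank each Name's rows in a random order and keep rank 1: one random
# row per person.
rows = con.execute("""
    WITH cte AS (
        SELECT *, RANK() OVER (PARTITION BY Name ORDER BY random()) AS rnk
        FROM tbl
    )
    SELECT ID, Name, Category, Level FROM cte WHERE rnk = 1
""").fetchall()
```

Which row is picked per person varies from run to run, but there is always exactly one row for each of the four names.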
|
This seems trickier than it sounds, creating a `temp` table with an extra `tempId` column should work. Try:
```
create table #temp(ID int, Name char(10), Category char(10), Level int, tempId varchar(36))
insert #temp select ID, Name, Category, Level, NEWID() as 'tempId' from yourTable
select ID, Name, Category, Level from #temp where tempId IN
(select min(tempId) from #temp group by Name)
drop table #temp
```
|
SQL select random record
|
[
"",
"sql",
"sql-server",
"random",
"newid",
""
] |
Here's the problem. :(
A column of my table `Answers` which is like
```
Answers
------------------------------------
id | user_id | question_id | ans
------------------------------------
1 | 1 | 1 | 0
2 | 1 | 2 | 85
3 | 2 | 1 | 5
4 | 2 | 2 | NULL
```
where the value of `ans` are in the range
```
NULL, 0, 5, ..., 95, 100
```
and `NULL` means the user has answered and effectively chosen "no answer". However, when I `LEFT OUTER JOIN` the `Questions` table with the `Answers`, I can't distinguish between a question that hasn't been answered and a question that has been answered with "no answer".
So is there a way that I can do a `LEFT OUTER JOIN` that fills in the non-values with `-1` instead of `NULL`?
(I like to think I'm not a crappy database designer ... the problem is that the requirements of my project keep get changed late in the cycle. The requirements are like a moving target.)
|
Yes. One method uses a `case` statement:
```
select a.*, (case when a.id is not null then coalesce(ans, -1) end) as ans
from b left join
a
on b.answerid = a.id;
```
I just made up the `join` conditions and columns. The key is to use the join key to determine if there is a record match.
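To see the three-way distinction concretely, here is a sketch using Python's built-in `sqlite3` (not part of the original answer; the `Questions`/`Answers` schema and rows are made up to match the question's description):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Questions (id INTEGER, text TEXT);
    CREATE TABLE Answers (id INTEGER, user_id INTEGER, question_id INTEGER, ans INTEGER);
    INSERT INTO Questions VALUES (1,'q1'),(2,'q2'),(3,'q3');
    -- user 1: answered q1 with 0, q2 with NULL ("no answer"), never saw q3
    INSERT INTO Answers VALUES (1,1,1,0),(2,1,2,NULL);
""")

# The join key (a.id) tells apart "no matching row" from "matched row whose
# ans is NULL": the latter becomes -1, the former stays NULL.
rows = con.execute("""
    SELECT q.id,
           CASE WHEN a.id IS NOT NULL THEN COALESCE(a.ans, -1) END AS ans
    FROM Questions q
    LEFT JOIN Answers a ON a.question_id = q.id AND a.user_id = 1
    ORDER BY q.id
""").fetchall()
```

An answered "0" stays 0, the explicit "no answer" becomes -1, and the truly unanswered question stays `NULL`.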
|
You'll need to check the existence of a not-nullable column in the answers table, like this:
```
select ..., case when answers.ans is not null then answers.ans
when answers.id is not null then -1
else null
end
```
|
Is it possible to do a left outer join where something other than NULL fills the non-values?
|
[
"",
"sql",
"sql-server",
"t-sql",
"database-design",
""
] |
I have two tables: `Countries`, which contains `codeCountry` and `nameCountry`, and a second table `Client`, which contains `privat_code_country`. For example, codeCountry is AUS and nameCountry is Austria.
I want to select all the countries that have the minimum number of clients; in other words, I have to join these two tables and find all the countries with the minimum number of clients.
After I wrote a query, I got the error "not a single-group group function".
This is my query:
```
select codeCountry, min(cnt) as Number_of_clients
from (select s.codeCountry, count(*) as cnt
from countries s join
client t
on s.codeCountry = t.privat_code_country
group by s.codeCountry
) t order by number_of_clients desc, codeCountry asc;
```
I want to receive all the countries that have the minimum number of clients, so I am asking for help correcting my query. Thanks.
Expected result is :
```
CodeCountry    Number_of_clients
AUS 1
CRO 1
ITA 1
```
|
You need a RANK:
```
select *
from
(
select s.codeCountry, count(*) as cnt,
rank() over (order by count(*)) as rnk -- rank based on increasing counts
from countries s join
client t
on s.codeCountry = t.privat_code_country
group by s.codeCountry
) dt
where rnk = 1
```
|
If you want the list of countries which share the minimum number of clients:
```
select x.codeCountry, x.cnt as number_of_clients
from
(
select t.codeCountry, min(t.cnt) over () as min_cnt, t.cnt
from
(
select s.codeCountry, count(*) as cnt
from
countries s
join
client t
on s.codeCountry = t.privat_code_country
group by s.codeCountry
) t
) x
where x.cnt=x.min_cnt
order by codeCountry asc;
```
|
Select all the countries that have a minimum number of clients
|
[
"",
"sql",
"oracle",
""
] |
It's not gentlemanly to dislike many items. But I need a query that excludes many items using a list of exact keys (`not in`) together with a list of `not like` patterns.
```
| source |
|========|
| danny |
| ram |
| nach |
| boom |
| trach |
| banana |
| key_exclude |
|================|
| danny |
| ram |
| like_exclude |
|================|
| bo |
| tr |
```
The expected results:
```
| result |
|========|
| banana |
| nach |
```
I want to do something like this:
```
select * from source where key not in (select key from key_exclude.key)
and key not like in(like_exclude.key)
```
But it doesn't work.
I can do:
```
select * from source where key not in (select key from key_exclude.key)
and key not like '%bo%' and key not like '%tr%'
```
The only problem is that `like_exclude` may contain hundreds of records.
|
Here is a solution query using subqueries:
```
select *
from source AS s
where
s.`key` not in( select k.`key` from key_exclude AS k ) AND
NOT EXISTS( select 1 from like_exclude l WHERE s.`key` LIKE CONCAT('%', l.`key`, '%'));
```
Below is a full demo with data verified, also put on SQLFiddle by OP <http://sqlfiddle.com/#!9/22fe8a/1/0>
SQL:
```
-- Data
create table source(`key` char(20));
insert into source values
( 'danny' ),
( 'ram' ),
( 'nach' ),
( 'boom' ),
( 'trach' ),
( 'banana' );
create table key_exclude( `key` char(20));
insert into key_exclude values
( 'danny' ),
( 'ram' );
create table like_exclude( `key` char(20) );
insert into like_exclude values
( 'bo' ),
( 'tr' );
-- SQL Needed
select *
from source AS s
where
s.`key` not in( select k.`key` from key_exclude AS k ) AND
NOT EXISTS( select 1 from like_exclude l WHERE s.`key` LIKE CONCAT('%', l.`key`, '%'));
```
Output:
```
mysql> select *
-> from source AS s
-> where
-> s.`key` not in( select k.`key` from key_exclude AS k ) AND
-> NOT EXISTS( select 1 from like_exclude l WHERE s.`key` LIKE CONCAT('%', l.`key`, '%'));
+--------+
| key |
+--------+
| nach |
| banana |
+--------+
2 rows in set (0.00 sec)
```
Live Sample - SQL Fiddle: <http://sqlfiddle.com/#!9/22fe8a/1/0>
|
Let me first say that it is indeed very ungentlemanly not to like so many things. But sometimes you just have to...
Anyway, one way to do it is to use `RLIKE` in combination with `GROUP_CONCAT` instead of `LIKE`, which results in the query below. This will probably go wrong if there are special regexp characters inside the like\_exclude table. What this does performance-wise is up to you to measure, and with a lot of values in the table you might have to raise the `group_concat_max_len` value:
[SQLFiddle](http://sqlfiddle.com/#!9/f8d79d/6)
```
SELECT
*
FROM
source as s
WHERE
s.key NOT IN (SELECT ke.key FROM key_exclude as ke)
AND
s.key NOT RLIKE (SELECT GROUP_CONCAT(le.key SEPARATOR '|') FROM like_exclude as le);
```
|
How to not likes many items in MySQL?
|
[
"",
"mysql",
"sql",
""
] |
I have created a report using Visual Studio 2015 with the SSDT tools installed from the following link:
<https://msdn.microsoft.com/en-us/mt186501>
The database is on SQL Server 2014. The reports work on my machine; however, when I try to upload a report on the customer's machine (which has SQL Server 2014 and not Visual Studio), I get the following error:
"The definition of this report is not valid or supported by this version of Reporting Services. The report definition may have been created with a later version of Reporting Services, or contain content that is not well-formed or not valid based on Reporting Services schemas. Details: The report definition has an invalid target namespace '<http://schemas.microsoft.com/sqlserver/reporting/2016/01/reportdefinition>' which cannot be upgraded. (rsInvalidReportDefinition)"
|
If you have the solution > Properties > TargetServerVersion set to SQL Server 2008 R2, 2012, or 2014, and then upload the RDL from the bin folder instead of the project folder, it should work. I was getting the same error and that solved it.
|
Your report is targeting SQL Server 2016.
|
Error while uploading a report
|
[
"",
"sql",
"reporting",
"sql-server-data-tools",
""
] |
My `requirement` is to calculate the `distance` between two `locations` on a given [map](https://www.google.co.in/maps?source=tldsi&hl=en) using [mysql](https://www.mysql.com/). I found a function in [mysql](https://www.mysql.com/) named [ST\_Distance\_Sphere](https://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html) which returns the minimum spherical distance between two locations and/or multi locations on a sphere in meters.
When I computed the distance between two locations using [ST\_Distance\_Sphere](https://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html) and the `lat_lng_distance` function , I found that the [ST\_Distance\_Sphere](https://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html) is not giving the same distance as that of the `lat_lng_distance` function.
My `lat_lng_distance` function code is as follows
```
CREATE FUNCTION `lat_lng_distance` (lat1 FLOAT, lng1 FLOAT, lat2 FLOAT, lng2 FLOAT)
RETURNS FLOAT
DETERMINISTIC
BEGIN
RETURN 6371 * 2 * ASIN(SQRT(
POWER(SIN((lat1 - abs(lat2)) * pi()/180 / 2),
2) + COS(lat1 * pi()/180 ) * COS(abs(lat2) *
pi()/180) * POWER(SIN((lng1 - lng2) *
pi()/180 / 2), 2) ));
END
```
The two locations `((38.898556,-77.037852),(38.897147,-77.043934))` passed to the [ST\_Distance\_Sphere](https://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html) and `lat_lng_distance` function is as follows
```
SET @pt1 = ST_GeomFromText('POINT (38.898556 -77.037852)');
SET @pt2 = ST_GeomFromText('POINT (38.897147 -77.043934 )');
SELECT ST_Distance_Sphere(@pt1, @pt2)/1000,lat_lng_distance(38.898556,-77.037852,38.897147,-77.043934 );
```
The Results Obtained is as follows
[](https://i.stack.imgur.com/cpJMr.png)
I checked the distance between the two locations on [google maps](https://www.google.co.in/maps?source=tldsi&hl=en) and found that `lat_lng_distance` is close to the actual distance between the two locations. Can someone let me know why is the [ST\_Distance\_Sphere](https://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html) not giving accurate distance between two locations?
|
[ST\_DISTANCE\_SPHERE](https://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html#function_st-distance-sphere) requires points to be expressed as `POINT(longitude, latitude)`, you have them reversed in your code
```
set @lat1 = 38.898556;
set @lon1 = -77.037852;
set @lat2 = 38.897147;
set @lon2 = -77.043934;
SET @pt1 = point(@lon1, @lat1);
SET @pt2 = point(@lon2, @lat2);
SELECT ST_Distance_Sphere(@pt1, @pt2)/1000,
lat_lng_distance(@lat1,@lon1,@lat2,@lon2);
+-------------------------------------+-------------------------------------------+
| ST_Distance_Sphere(@pt1, @pt2)/1000 | lat_lng_distance(@lat1,@lon1,@lat2,@lon2) |
+-------------------------------------+-------------------------------------------+
| 0.549154584458455 | 0.5496311783790588 |
+-------------------------------------+-------------------------------------------+
```
This gives a result that is much closer to the value returned by your function.
|
**For all who are working with MySQL 8:**
All MySQL geolocation functions must be used with the right **SRID**, otherwise you won't get correct results.
The most commonly used are SRID 4326 (GPS coordinates, Google Earth) and SRID 3857 (used by Google Maps, OpenStreetMap, and most other web maps).
Example of a correct distance calculation between two points:
```
SELECT ST_Distance(ST_GeomFromText('POINT(51.513 -0.08)', 4326), ST_GeomFromText('POINT(37.745 -122.4383)', 4326)) / 1000 AS km;
```
Here is a good explanation of this topic:
<https://medium.com/maatwebsite/the-best-way-to-locate-in-mysql-8-e47a59892443>
There are some very good explanations from the mysqlserverteam:
<https://mysqlserverteam.com/spatial-reference-systems-in-mysql-8-0/>
<https://mysqlserverteam.com/geography-in-mysql-8-0/>
|
ST_Distance_Sphere in mysql not giving accurate distance between two locations
|
[
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
I have rows in an Oracle database table (with 2M+ records) and I want to extract records matching a condition. The constraint is not set up on the table. Any help will be appreciated.
```
nif num_sum
--------------------------
123 456
123 456
123 789 // => I want to select this
134 600
```
Records having a `nif` with one or more same `num_sum` is allowed. But if a `nif` exists for two or more different `num_sum`s, I want to select that `nif` and `num_sum`. I showed the above records for explanation only.
I tried:
```
select distinct nif, num_sum
from sumcon
group by nif, num_sum
having count(distinct nif) > 1
```
|
Given your description,
"Records having a nif with one or more same num\_sum is allowed (a nif shouldn't be for two different num\_sums)"
I think you want the following:
```
SELECT nif
FROM sumcon
GROUP BY nif
HAVING COUNT(DISTINCT num_sum) > 1;
```
First, in your original query `DISTINCT` is superfluous. Next, since you say there should not exist more than one distinct value of `num_sum` for a given value of `nif`, I think you want to group by `nif` and use `COUNT(DISTINCT num_sum)`. Now if you mean that a given value of `num_sum` should not exist for more than one unique value of `nif`, then you would want the following:
```
SELECT num_sum
FROM sumcon
GROUP BY num_sum
HAVING COUNT(DISTINCT nif) > 1;
```
If you need to get all the values of `nif` for which this is the case, then you would have to do something like this:
```
SELECT nif, num_sum FROM sumcon
WHERE num_sum IN (
SELECT num_sum
FROM sumcon
GROUP BY num_sum
HAVING COUNT(DISTINCT nif) > 1
);
```
Hope this helps.
|
You don't return any records because there's only 1 distinct value for `nif`, so the count is not greater than 1.
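For illustration, a sketch using the sample data from the question: grouping by both columns means every group contains exactly one distinct `nif`, so `HAVING COUNT(DISTINCT nif) > 1` can never be satisfied.

```sql
-- Each (nif, num_sum) group holds rows that share a single nif value:
SELECT nif, num_sum, COUNT(DISTINCT nif) AS distinct_nifs
FROM sumcon
GROUP BY nif, num_sum;
-- nif | num_sum | distinct_nifs   (one row per group, order not guaranteed)
-- 123 |     456 |             1
-- 123 |     789 |             1
-- 134 |     600 |             1
```

Grouping by `nif` alone, as in the accepted answer, is what lets `COUNT(DISTINCT num_sum)` exceed 1.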
|
Sql: Extract records with some condition
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to extract transaction details from two already existing tables:
1. `transactions`, containing the total amount received,
2. `bills`, with a row for each bill received in the transaction, and containing the denomination of the bill.
Both are indexed with a common `session` id. [*Correction:* only the `transactions` table is indexed on `session` id.]
I've joined the tables and made subqueries to count the number of each bill denomination per transaction (how many 10s, 20s, etc.). I want to get one record for each transaction with all the counts on the same row.
I made it as far as this query:
```
SELECT
t.session,
to_char(t.amount::numeric, '"$"9990D99') AS "USD",
(select count(b.denom) where b.denom = '50' ) AS "50",
(select count(b.denom) where b.denom = '20') AS "20",
(select count(b.denom) where b.denom = '10') AS "10",
(select count(b.denom) where b.denom = '5') AS "5",
(select count(b.denom) where b.denom = '1') AS "1"
FROM transactions AS t JOIN bills AS b USING (session)
GROUP BY
t.session, t.amount, b.denom
ORDER BY
t.session,
b.denom ASC;
```
... which correctly gives me the bill counts, but with one row for each denomination:
```
session | USD | 50 | 20 | 10 | 5 | 1
--------------+-----------+----+----+----+---+----
c64af32f1815 | $ 135.00 | | | | 1 |
c64af32f1815 | $ 135.00 | | | 1 | |
c64af32f1815 | $ 135.00 | | 6 | | |
643e096b6542 | $ 175.00 | | | | | 10
643e096b6542 | $ 175.00 | | | | 1 |
643e096b6542 | $ 175.00 | | 8 | | |
ce7d2c647eff | $ 200.00 | 4 | | | |
```
What I want is this, with one row per transaction:
```
session | USD | 50 | 20 | 10 | 5 | 1
--------------+-----------+----+----+----+---+----
c64af32f1815 | $ 135.00 | | 6 | 1 | 1 |
643e096b6542 | $ 175.00 | | 8 | | 1 | 10
ce7d2c647eff | $ 200.00 | 4 | | | |
```
What do I need to understand to fix this query?
**Revised Query** (following @erwin suggestion to avoid subqueries):
```
SELECT
t.session,
to_char(t.amount::numeric, '"$"9990D99') AS "USD",
COUNT(NULLIF(b.denom = '100', FALSE)) AS "100",
COUNT(NULLIF(b.denom = '50', FALSE)) AS "50",
COUNT(NULLIF(b.denom = '20', FALSE)) AS "20",
COUNT(NULLIF(b.denom = '10', FALSE)) AS "10",
COUNT(NULLIF(b.denom = '5', FALSE)) AS "5",
COUNT(NULLIF(b.denom = '1', FALSE)) AS "1"
FROM transactions AS t JOIN bills AS b USING (session)
GROUP BY
t.session, t.amount, b.denom
ORDER BY
t.session,
b.denom ASC;
```
This query still generates one line of output for each aggregate (count) function call.
|
Do not use correlated subqueries. That's inefficient.
And **do *not* include `b.denom`** in the `GROUP BY` clause. That's your primary error.
### Postgres 9.4+
In Postgres **9.4** or later use the dedicated aggregate `FILTER` feature:
```
SELECT t.session
, to_char(t.amount::numeric, '"$"9990D99') AS "USD"
, count(*) FILTER (WHERE b.denom = '50') AS "50" -- !
, count(*) FILTER (WHERE b.denom = '20') AS "20" -- !
, ...
FROM ...
GROUP BY t.session, t.amount -- !
ORDER BY ...
```
Explanation and links to more:
* [How can I simplify this game statistics query?](https://stackoverflow.com/questions/27136251/how-can-i-simplify-this-game-statistics-query/27141193#27141193)
### Postgres 9.3-
For older versions (Postgres **9.3** or older), there is a variety of (less elegant) alternatives:
```
SELECT t.session
, to_char(t.amount::numeric, '"$"9990D99') AS "USD" -- why cast to numeric?
, count(b.denom = '100' OR NULL) AS "100" -- bad column name
, count(b.denom = '50' OR NULL) AS "50"
, count(b.denom = '20' OR NULL) AS "20"
, count(b.denom = '10' OR NULL) AS "10"
, count(b.denom = '5' OR NULL) AS "5"
, count(b.denom = '1' OR NULL) AS "1"
FROM transactions t
JOIN bills b USING (session)
GROUP BY t.session, t.amount
ORDER BY t.session;
```
After cleaning up:
```
SELECT t.session, to_char(t.amount, '"$"9990D99') AS usd
, d100, d50, d20, d10, d5, d1
FROM transactions t
LEFT JOIN (
SELECT session
, nullif(count(denom = 100 OR NULL), 0) AS d100
, nullif(count(denom = 50 OR NULL), 0) AS d50
, nullif(count(denom = 20 OR NULL), 0) AS d20
, nullif(count(denom = 10 OR NULL), 0) AS d10
, nullif(count(denom = 5 OR NULL), 0) AS d5
, nullif(count(denom = 1 OR NULL), 0) AS d1
FROM bills
GROUP BY 1
) b USING (session)
ORDER BY session;
```
What's changed?
* Don't use illegal identifiers if you can avoid it, then you also don't need double quotes.
* Assuming `integer` for `bills.denom`, so we also don't need single quotes around the constants.
* Seems like you want `NULL` in columns where no bills are found. Wrap the result in `NULLIF()`
* Since you retrieve the whole table it's faster to aggregate *before* you join.
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/d3247/1)
More techniques and explanation:
* [For absolute performance, is SUM faster or COUNT?](https://dba.stackexchange.com/a/27572/3684)
### Faster alternative: `crosstab()`
For top performance, use actual cross tabulation with `crosstab()`. You need the additional module `tablefunc` installed. ***Read this first*** if you are not familiar with it:
* [PostgreSQL Crosstab Query](https://stackoverflow.com/questions/3002499/postgresql-crosstab-query/11751905#11751905)
```
SELECT t.session, to_char(t.amount, '"$"9990D99') AS usd
, d100, d50, d20, d10, d5, d1
FROM transactions t
LEFT JOIN crosstab(
$$SELECT session, denom, count(*)::int
FROM bills
GROUP BY 1,2
ORDER BY 1$$
, $$SELECT unnest ('{100,50,20,10,5,1}'::int[])$$
) AS ct (session text, d100 int, d50 int, d20 int, d10 int, d5 int, d1 int)
USING (session);
```
|
I think you are almost there. You need to do another group by after your query. For instance:
```
WITH q1 as (SELECT
t.session,
to_char(t.amount::numeric, '"$"9990D99') AS "USD",
(select count(b.denom) where b.denom = '50' ) AS "50",
(select count(b.denom) where b.denom = '20') AS "20",
(select count(b.denom) where b.denom = '10') AS "10",
(select count(b.denom) where b.denom = '5') AS "5",
(select count(b.denom) where b.denom = '1') AS "1"
FROM transactions AS t JOIN bills AS b USING (session)
GROUP BY t.session, t.amount, b.denom)
SELECT session, "USD", SUM("50") AS "50", SUM("20") AS "20", SUM("10") AS "10",
SUM("5") AS "5", SUM("1") AS "1"
FROM q1
GROUP BY session, "USD"
```
|
How to group multiple subqueries in the same output row
|
[
"",
"sql",
"postgresql",
"aggregate",
""
] |
I am trying to find a more efficient way to write a PL/SQL query to select-insert from a table with 300+ columns into the backup version of that table (same column names + 2 extra columns).
I could simply type out all the column names in the script (example below), but with that many names, it will bother me... :(
```
INSERT INTO
TABLE_TEMP
(column1, column2, column3, etc)
(SELECT column1, column2, column3, etc FROM TABLE WHERE id = USER_ID);
```
Thanks in advance
|
Specify literals/null for those two extra columns.
```
INSERT INTO
TABLE_TEMP
SELECT t1.*, null, null FROM TABLE t1 WHERE id = USER_ID
```
|
You can pretty easily build a column list for any given table:
```
select table_catalog
,table_schema
,table_name
,string_agg(column_name, ', ' order by ordinal_position)
from information_schema.columns
where table_catalog = 'catalog_name'
and table_schema = 'schema_name'
and table_name = 'table_name'
group by table_catalog
,table_schema
,table_name
```
That should get you nearly where you need to be.
|
A more efficient way to do a select insert that involves over 300 columns?
|
[
"",
"sql",
"plsql",
""
] |
I have 2 tables: `questions` and `answers`, each question has 4 alternatives and only 1 alternative is correct.
Is there any way to force `answers` table to accept only 1 correct alternative for each question, (something like unique index)?
`answers` table design sample:
```
id question_id is_correct text
0 1 true blue1
1 1 false blue2
2 1 false blue3
3 1 false blue4
4 2 true blue5
5 2 false blue6
6 2 false blue7
7 2 false blue8
8 3 true blue9
9 3 false blue10
```
Is there a way to tell the DB to store only 1 correct answer for question\_id = 1, etc.?
thanks,
|
The one way you can do this without a trigger is by storing the information in the `Questions` table rather than the `Answers` table.
You can just include a column `CorrectAnswerId` with a foreign key relationship to the `Answers` table.
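A minimal sketch of that design (the column and constraint names here are assumed, not from the question):

```sql
-- Store the correct alternative on the question itself;
-- the foreign key guarantees it references an existing answer.
ALTER TABLE questions
  ADD COLUMN correct_answer_id INT NULL,
  ADD CONSTRAINT fk_questions_correct_answer
    FOREIGN KEY (correct_answer_id) REFERENCES answers (id);
```

Since `correct_answer_id` is a single column, each question can point at exactly one correct answer by construction.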
|
Since you want to allow only one correct answer you can alter your design to this direction.
You can remove the column "is\_correct" from the "answers" table and add a column "correct\_answer\_id" to the "questions" table.
So for each question you will have only one correct answer, by definition.
This design has the obvious advantage that easily enforces the requested restriction as it cannot support more than one correct answers.
|
MySQL: store only 1 correct answer (indexing)
|
[
"",
"mysql",
"sql",
"indexing",
""
] |
Hi, I'm saving file names as a date and time with a file extension, like below:
```
43201612150389.docx,
73201611843471.jpg
```
in my SQL table. Now I want to get the extension after the dot using a simple SQL query. Does anyone have an idea?
|
Use [**`SUBSTRING_INDEX`**](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substring-index):
```
SELECT SUBSTRING_INDEX(mycol, '.', -1)
```
[**Demo here**](http://sqlfiddle.com/#!9/bcda7/1)
In SQL Server you can use [**`PARSENAME`**](https://msdn.microsoft.com/en-us/library/ms188006.aspx):
```
SELECT PARSENAME(col, 1)
```
|
MySql:
```
SELECT SUBSTRING_INDEX(field, '.', -1)
```
SQL:
```
SELECT PARSENAME(field, 1)
```
|
How to get the value after dot in sql query
|
[
"",
"sql",
"split",
"sql-server-2012",
""
] |
I have a SQL query that goes like this:
```
SELECT v FROM rsversion WHERE v BETWEEN 'minVer' AND 'maxVer';
```
Where the `version` is expressed as a string of format `x.y.z`
This will return fine all existing versions between 0.2.0 and 0.2.9
but will return nothing if the range is 0.2.0 and 0.2.10
Is there a way to make this work?
|
With Postgres you could do this by splitting up the version into three numbers and then compare those numbers. For other DBMS you would need to find a different way of splitting a string like `0.2.1` into three numbers.
```
with rsversion (v) as (
values
-- only here for sample data
('0.2.0'), ('0.2.1'), ('0.2.2'), ('0.2.10'), ('0.2.12'),
('0.3.0'), ('0.3.1'), ('0.3.2'), ('0.3.4')
), numeric_version (v, major, minor, patch) as (
select v,
split_part(v,'.', 1)::int,
split_part(v,'.', 2)::int,
split_part(v,'.', 3)::int
from rsversion
)
select v
FROM numeric_version
where (major,minor,patch) between (0,2,1) and (0,2,11)
```
The above prints:
```
v
------
0.2.1
0.2.2
0.2.10
```
---
SQLFiddle example: <http://sqlfiddle.com/#!15/403d2/2>
|
The reason this is not working as expected is that 0.2.0 and 0.2.x are not numbers but strings, so the comparison examines the strings character by character.
So 0.2.0, 0.2.1, 0.2.10, 0.2.2, 0.2.3, etc. is how it arranges the strings.
You may be able to make this work by adding a leading 0 to the third part of the string (0.2.00, 0.2.01, 0.2.02, etc.) if that is possible for your purposes.
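A minimal sketch of the zero-padding idea (table and column names taken from the other answer's sample; both the stored values and the range bounds would need to be padded for string order to match numeric order):

```sql
-- As plain strings, '0.2.10' < '0.2.2' because '1' < '2' at the fifth character.
-- With a zero-padded patch component the lexicographic order is correct:
-- '0.2.00' < '0.2.01' < '0.2.02' < '0.2.10'
SELECT v
FROM rsversion
WHERE v BETWEEN '0.2.00' AND '0.2.10';
```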
|
SQL query between strings representing version numbers
|
[
"",
"sql",
""
] |
```
SET @s:='';
SELECT @s:= CONCAT('SHOW GRANTS FOR \'',user,'\'@\'',host,'\';') FROM mysql.user where user = 'root';
PREPARE stmt FROM @s;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```
I don't mind achieving this using a stored proc. Let's say I have 2 users named root:
**'root'@'%'** and **'root'@'localhost'**
What I want is to get the results of multiple prepared statements, but the above only executes the last one (i.e. 'root'@'localhost'). The two challenges I faced:
1. [PREPARE](http://dev.mysql.com/doc/refman/5.7/en/sql-syntax-prepared-statements.html) / EXECUTE stmt can execute only one query at a time
2. Only Stored proc can use loop
**Objective:** All I want is to execute two `SHOW GRANTS FOR` statements in one shot.
|
With the appropriate privileges, you can do something like:
```
mysql> system rm -f /tmp/get_show_grants.sql;
mysql> SELECT CONCAT('system rm -f /tmp/show_grants.sql;
SELECT CONCAT(\'SHOW GRANTS FOR \'\'\', `user`, \'\'\'@\'\'\', `host`,\'\'\';\')
INTO OUTFILE \'/tmp/show_grants.sql\'
FROM `mysql`.`user`
WHERE `user` = \'root\';
source /tmp/show_grants.sql;
system rm -f /tmp/show_grants.sql /tmp/get_show_grants.sql;
') INTO OUTFILE '/tmp/get_show_grants.sql';
Query OK, 1 row affected (0.00 sec)
mysql> source /tmp/get_show_grants.sql;
Query OK, 1 row affected (0.00 sec)
+---------------------------------------------------------------------+
| Grants for root@localhost |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
| GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+
2 rows in set (0.00 sec)
```
|
This is also not a direct answer, as the question is more about the execution of multiple prepared statements (another example would be needing to OPTIMIZE all tables in a database); @wchiquito's answer is accepted for that reason.
Note that Percona already came up with [pt-show-grants](https://www.percona.com/doc/percona-toolkit/2.1/pt-show-grants.html).
One more way I tried myself gets the grants along with the database name, but it may not work on version 5.7. In a more readable format it would be:
```
(SELECT `GRANTEE`, `TABLE_SCHEMA`, (CASE
WHEN GROUP_CONCAT(`PRIVILEGE_TYPE`) = 'SELECT' THEN 'READ ONLY'
WHEN (LOCATE('DELETE',GROUP_CONCAT(`PRIVILEGE_TYPE`))
+ LOCATE('UPDATE',GROUP_CONCAT(`PRIVILEGE_TYPE`))
+ LOCATE('INSERT',GROUP_CONCAT(`PRIVILEGE_TYPE`))
+ LOCATE('SELECT',GROUP_CONCAT(`PRIVILEGE_TYPE`))) >= 4 THEN 'READ+WRITE'
ELSE GROUP_CONCAT(`PRIVILEGE_TYPE` ORDER BY `PRIVILEGE_TYPE`)
END) AS 'PRIVILEGE_TYPE'
FROM INFORMATION_SCHEMA.SCHEMA_PRIVILEGES
WHERE GRANTEE NOT REGEXP '^......$'
GROUP BY `GRANTEE`, `TABLE_SCHEMA`)
UNION
(SELECT `GRANTEE`, 'All Databases' AS `TABLE_SCHEMA`, (CASE
WHEN GROUP_CONCAT(`PRIVILEGE_TYPE`) = 'SELECT' THEN 'READ ONLY'
WHEN (LOCATE('DELETE',GROUP_CONCAT(`PRIVILEGE_TYPE`))
+ LOCATE('UPDATE',GROUP_CONCAT(`PRIVILEGE_TYPE`))
+ LOCATE('INSERT',GROUP_CONCAT(`PRIVILEGE_TYPE`))
+ LOCATE('SELECT',GROUP_CONCAT(`PRIVILEGE_TYPE`))) >= 4 THEN 'READ+WRITE'
ELSE GROUP_CONCAT(`PRIVILEGE_TYPE` ORDER BY `PRIVILEGE_TYPE`)
END) AS 'PRIVILEGE_TYPE'
FROM INFORMATION_SCHEMA.USER_PRIVILEGES
WHERE GRANTEE NOT REGEXP '^......$'
GROUP BY `GRANTEE`
HAVING GROUP_CONCAT(`PRIVILEGE_TYPE`) != 'USAGE')
```
|
MySQL User Privileges list in one shot
|
[
"",
"mysql",
"sql",
""
] |
I have 2 tables,
Table 1 StockCard
```
OutletKey ProductKey Date qty
101 ABC 01/01/2014 10
101 ABC 21/02/2014 5
101 ABC 31/03/2014 5
101 ABC 05/06/2014 2
101 ABC 20/10/2014 3
101 ABC 11/11/2014 4
101 ABC 12/12/2014 7
101 ABC 05/01/2015 8
101 ABC 03/02/2015 10
```
And the second table is stock\_date
```
OutletKey StockDate
101 05/04/2014
101 14/10/2014
101 10/01/2015
```
And I want the result like this
```
OutletKey ProductKey StockDate TotalQty
101 ABC 10/01/2015 44
```
So I need to calculate the total qty before the last date in stock\_date, which is 10/01/2015.
Thanks
EDIT: I wrote a query like this, but the result is not correct.
```
Select SC.outletkey ,SC.productkey ,SD.StockDate ,Sum(SC.qty) as totalqty
From StockCard SC
Inner Join StockDate SD
On SC.outletkey = SD.outletkey and SC.date < SD.stockdate
Group by SC.outletkey ,SC.productkey ,SD.StockDate
```
|
For most RDBMSs, I would do it like this:
```
SELECT sc.OutletKey
,sc.ProductKey
,sd.LastStockDate AS StockDate
,SUM(sc.qty) AS TotalQty
FROM StockCard sc
INNER JOIN (
SELECT OutletKey
,MAX(StockDate) AS LastStockDate
FROM stock_date
GROUP BY OutletKey
) sd
ON sd.OutletKey = sc.OutletKey
AND sc.Date < sd.LastStockDate
GROUP BY sc.OutletKey
,sc.ProductKey
,sd.LastStockDate
```
For SQL Server, I'd convert the `INNER JOIN` to a `CROSS APPLY`, which might perform better:
```
SELECT sc.OutletKey
,sc.ProductKey
,sd.LastStockDate AS StockDate
,SUM(sc.qty) AS TotalQty
FROM StockCard sc
CROSS APPLY (
SELECT MAX(StockDate) AS LastStockDate
FROM stock_date
WHERE OutletKey = sc.OutletKey
) sd
WHERE sc.Date < sd.LastStockDate
GROUP BY sc.OutletKey
,sc.ProductKey
,sd.LastStockDate
```
I don't think it's possible to move the inequality to the correlation, so that's why it's in the WHERE clause.
|
This should do the trick
```
Select StockCard.OutletKey, StockCard.ProductKey, stock_date.StockDate, sum(StockCard.TotalQty) as TotalQty
from StockCard
inner join stock_date on StockCard.OutletKey = stock_date.OutletKey
group by StockCard.OutletKey, StockCard.ProductKey, stock_date.StockDate
```
What this does...
Selects the requested columns, with a sum of the quantity, then joins the other table.
Your result will be the quantities grouped by `ID`, `product`, and `date`.
|
SQL SERVER How To join 2 tables
|
[
"",
"sql",
"join",
""
] |
Now I have one table, let's say table PROPERTY, which has columns A, B, and PIVOT:
```
A B PIVOT
1 dog T
2 cat T
1 chien F
1 gou F
2 chat F
2 miao F
```
Now I want to add one column C, whose content is based on A and B: for each value of A, C takes the B value of the row where PIVOT = 'T'. That's to say,
```
A B C
1 dog dog
2 cat cat
1 chien dog
1 gou dog
2 chat cat
2 miao cat
```
How can I do this? This is a large table (200k rows).
I was thinking about:
```
UPDATE PROPERTY a SET C=
(select min(b.B) from CFG_DIM_PROPERTY b where
b.A= a.A and b.PIVOT= 'T')
```
However, this SQL takes too much time. Do you know other ways, and can you give some indication of how you thought about it?
|
Why not use **merge**:
```
merge into PROPERTY t
using (select min (b) as min_b, a
from CFG_DIM_PROPERTY b
where b.pivot = 'T'
group by a) t1
on (t.a = t1.a)
when matched then
update set t.c = t1.min_b;
```
|
It looks like you want output like this, since your table doesn't have a column `C` yet. If you want to add it, you can use the logic below to get its value.
Also note that for a given `A`, if there is more than one row with `PIVOT1` as `T`, you can get unexpected results.
```
with tbl(A,B,PIVOT1) as(
select 1,'dog ','T' from dual union all
select 2,'cat ','T' from dual union all
select 1,'chien','F' from dual union all
select 1,'gou ','F' from dual union all
select 2,'chat ','F' from dual union all
select 2,'miao ','F' from dual
)
select t1.a,t1.b,t2.b as C from tbl t1 left join
(select a,b from tbl where pivot1='T') t2
on t1.a=t2.a
```
Output:
```
A B C
1 gou dog
1 chien dog
1 dog dog
2 miao cat
2 chat cat
2 cat cat
```
|
Update a column with information from 3 other columns
|
[
"",
"sql",
"oracle",
""
] |
**Dataset:** *(Just highlighting error condition data, other conditions work fine)*
[](https://i.stack.imgur.com/dIFsq.png)
**Query:**
```
SELECT `id`, `uid`, `activity_type`, `source_id`, `parent_id`, `parent_type`, `post_id`, `status`
FROM `user_activity` AS `user_activity`
WHERE
`user_activity`.`activity_type` IN(
'STATUS_UPDATE',
'PROFILE_PICTURE_UPDATED',
'PROFILE_COVER_UPDATED',
'ALBUM'
)
AND `user_activity`.`uid` = '13'
AND `user_activity`.`status` = 'ACTIVE'
-- AND `user_activity`.`parent_type` <> 'ALBUM' -- query works wihtout this condition
ORDER BY id DESC LIMIT 10
```
The query works fine and gets the desired results, but when I add the condition:
```
AND `user_activity`.`parent_type` <> 'ALBUM'
```
the query returns no results.
I don't understand what the problem is with this query.
[Here is the fiddle.](http://www.sqlfiddle.com/#!9/ad043/2/0)
|
SQL uses [three valued logic](https://www.simple-talk.com/sql/learn-sql-server/sql-and-the-snare-of-three-valued-logic/). The NULL <> 'ALBUM' and NULL = 'ALBUM' both evaluate to UNKNOWN. You need to use IS NOT NULL or IS NULL to compare with nulls.
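A quick way to see this in a MySQL session (a sketch; `SELECT` without a table is allowed):

```sql
SELECT NULL <> 'ALBUM';  -- NULL (unknown), so WHERE filters the row out
SELECT NULL = 'ALBUM';   -- NULL (unknown), not FALSE
SELECT NULL IS NULL;     -- 1 (true)
SELECT NULL IS NOT NULL; -- 0 (false)
```

Only `IS NULL` / `IS NOT NULL` yield TRUE or FALSE when an operand is NULL; `=` and `<>` never do.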
|
The `user_activity.parent_type <> 'ALBUM'` condition also filters out every `NULL` value in that column, since `NULL <> 'ALBUM'` isn't `true`; it's unknown. So you can use something like this:
```
AND (`user_activity`.`parent_type` <> 'ALBUM' OR `user_activity`.`parent_type` IS NULL)
```
Or:
```
AND COALESCE(`user_activity`.`parent_type`,'') <> 'ALBUM'
```
|
Logical condition in WHERE - Incorrect results
|
[
"",
"mysql",
"sql",
"where-clause",
""
] |
I'm having problems with date format in an SQL statement
```
CurrentTime = Format(CurrentTime, "dd/mm/yyyy hh:mm")
SQL = "SELECT Count([ID]) AS Booked FROM [Appointment Slots] WHERE [Appointment Slots].Time=#" & CurrentTime & "#"
```
The bizarre thing here is that sometimes when I run the code it works. Other times, without changing the code, it doesn't. It then works when I change the date format to mm/dd/yyyy hh:mm, works for a while, then stops working until I change it back to dd/mm/yyyy hh:mm.
Clearly something is going wrong with the regional date settings and how it's storing dates but I can't pin down a solution to this. Is there no way to compare a date in Access SQL that is independent of format?
|
> Is there no way to compare a date in Access SQL that is independent of
> format?
Consider a query with a parameter in its `WHERE` clause, similar to this ...
```
WHERE [Appointment Slots].Time=[Enter Appointment Time]
```
You can also add a `PARAMETERS` clause at the beginning of your SQL statement, but it's not absolutely required ...
```
PARAMETERS [Enter Appointment Time] DateTime;
```
So when the query expects a Date/Time parameter, you just give it a Date/Time value.
A parameter query avoids date format issues, and also avoids the need for `#` date delimiters in the SQL statement.
|
You should make it a habit to use the *ISO sequence*, and *nn* for minutes.
Also escape the "/" and ":" to have a true slash and colon, otherwise they will be replaced with the localized date and time separators:
```
CurrentTime = Format(CurrentTime, "yyyy\/mm\/dd hh\:nn")
```
This also works for *ADO* and *FindFirst*, which the "reverse" US format (mm/dd/yyyy) does not.
|
Access VBA & SQL date formats
|
[
"",
"sql",
"ms-access",
"vba",
""
] |
I've got this `User` table:
```
+----------+-------+
| Username | Value |
+----------+-------+
| User4 | 2 |
| User1 | 3 |
| User3 | 1 |
| User2 | 6 |
| User4 | 2 |
+----------+-------+
```
And I do this query to get the top 2's sums:
```
SELECT Username, SUM(Value) AS Sum
FROM User
GROUP BY Username
ORDER BY Sum DESC
LIMIT 0, 2
```
Which gives me as a result:
```
+----------+-----+
| Username | Sum |
+----------+-----+
| User2 | 6 |
| User4 | 4 |
+----------+-----+
```
What I'm looking for is **one more row giving the sum of all the values**, like:
```
+----------+-----+
| Username | Sum |
+----------+-----+
| User2 | 6 |
| User4 | 4 |
| All | 14 |
+----------+-----+
```
**Is there any way to achieve that?** Preferably with no procedures.
|
You can use [**`WITH ROLLUP`**](http://dev.mysql.com/doc/refman/5.7/en/group-by-modifiers.html) modifier:
```
SELECT COALESCE(Username, 'All'), SUM(Value) AS Sum
FROM User
GROUP BY Username WITH ROLLUP
ORDER BY Sum DESC
```
or, if you want just top 2 along with the sum of *all*:
```
SELECT Username, s
FROM (
SELECT Username, s
FROM (
SELECT COALESCE(Username, 'All') AS Username, SUM(Value) AS s
FROM User
GROUP BY Username WITH ROLLUP ) AS t
ORDER BY s DESC
LIMIT 0, 3) AS s
ORDER BY IF(Username = 'All', 0, s) DESC
```
|
Use union
```
SELECT Username, SUM(Value) AS Sum
FROM User
GROUP BY Username
ORDER BY Sum DESC
LIMIT 0, 2
union
SELECT'ALL', SUM(Value) AS Sum
FROM User
```
|
MySQL one row of sum after limited results
|
[
"",
"mysql",
"sql",
"mariadb",
"mariasql",
""
] |
I have a table (trips) that has response data with columns:
* TripDate
* Job
* Address
* DispatchDateTime
* OnSceneDateTime
* Vehicle
Often two vehicles will respond to the same address on the same date, and I need to find the one that was there first.
I've tried this:
```
SELECT
TripDate,
Job,
Vehicle,
DispatchDateTime,
(SELECT min(OnSceneDateTime)
FROM Trips AS FirstOnScene
WHERE AllTrips.TripDate = FirstOnScene.TripDate
AND AllTrips.Address = FirstOnScene.Address) AS FirstOnScene
FROM
Trips AS AllTrips
```
But I still get both records returned, and both have the same `FirstOnScene` time.
How do I get only THE record, with its `DispatchDateTime` and `OnSceneDateTime`, and not the row of the trip that was on scene second?
Here are a few example rows from the table:
```
2016-01-01 0169-a 150 Main St 2016-01-01 16:52 2016-01-01 16:59 Truck 1
2016-01-01 0171-a 150 Main St 2016-01-01 16:53 2016-01-01 17:05 Truck 2
2016-01-01 0190-a 29 Spring St 2016-01-01 17:19 2016-01-01 17:30 Truck 5
2016-01-02 0111-a 8 Fist St 2016-01-02 09:30 2016-01-02 09:40 Truck 1
2016-01-02 0112-a 8 Fist St 2016-01-02 09:32 2016-01-02 09:38 Truck 2
```
In the above examples I need to return the first, third, and last row of that data set.
|
You can just filter the rows down by selecting only the `MIN` OnSceneDateTime like below:
```
SELECT TripDate, Job, Vehicle, DispatchDateTime,OnSceneDateTime FirstOnScene
FROM Trips as AllTrips
WHERE AllTrips.OnSceneDateTime = (SELECT MIN(OnSceneDateTime)
FROM Trips as FirstOnScene
WHERE AllTrips.TripDate = FirstOnScene.TripDate
and AllTrips.Address = FirstOnScene.Address
)
```
|
Here is a total shot in the dark based on the sparse information provided. I don't really know what defines a given incident so you can adjust the partition accordingly.
```
with sortedValues as
(
select TripDate
, Job
, Vehicle
, OnSceneDateTime
, ROW_NUMBER() over(partition by Address, DispatchDateTime order by OnSceneDateTime asc) as RowNum
from Trips
)
select TripDate
, Job
, Vehicle
, OnSceneDateTime
from sortedValues
where RowNum = 1
```
|
Compare two rows in SQL Server and return only one row
|
[
"",
"sql",
"sql-server",
""
] |
I need to join tableA to tableB on employee\_id, and the cal\_date from tableA needs to be between date\_start and date\_end from tableB. I ran the query below and received the error message below. Would you please help me correct the query? Thank you for your help!
**Both left and right aliases encountered in JOIN 'date\_start'**.
```
select a.*, b.skill_group
from tableA a
left join tableB b
on a.employee_id= b.employee_id
and a.cal_date >= b.date_start
and a.cal_date <= b.date_end
```
|
RTFM - quoting [LanguageManual Joins](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins)
> Hive does not support join conditions that are not equality conditions
> as it is very difficult to express such conditions as a map/reduce
> job.
You may try to move the BETWEEN filter to a WHERE clause, resulting in a lousy partially-cartesian-join followed by a post-processing cleanup. Yuck. Depending on the actual cardinality of your "skill group" table, it may work fast - or take whole days.
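A sketch of that workaround on the question's tables (note that moving the range into `WHERE` discards unmatched left-side rows, so the `LEFT JOIN` effectively becomes an inner join; that is part of the post-processing cleanup mentioned above):

```sql
-- Only the equality condition stays in ON, which Hive can map/reduce;
-- the range filter moves to WHERE and runs after the join.
SELECT a.*, b.skill_group
FROM tableA a
LEFT JOIN tableB b
  ON a.employee_id = b.employee_id
WHERE a.cal_date >= b.date_start
  AND a.cal_date <= b.date_end;
```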
|
If your situation allows, do it in two queries.
First with an inner join, which can include the range condition; then with an outer join matching on the key columns, with a WHERE clause keeping only rows where one of the joined fields is null.
Ex:
```
create table tableC as
select a.*, b.skill_group
from tableA a
, tableB b
where a.employee_id= b.employee_id
and a.cal_date >= b.date_start
and a.cal_date <= b.date_end;
with c as (select * from TableC)
insert into tableC
select a.*, cast(null as string) as skill_group
from tableA a
left join c
on (a.employee_id= c.employee_id
and a.cal_date = c.cal_date)
where c.employee_id is null ;
```
|
Join Tables on Date Range in Hive
|
[
"",
"sql",
"hadoop",
"hive",
"left-join",
""
] |
I am trying to create a contact report that shows ALL contact persons AND marks those who were present.
The presences are stored in a many-to-many table, holding both the document id and who was present.
The tables are as follow:
```
CREATE TABLE pe
(peid int4, peco int4, pename varchar(30));
INSERT INTO pe
(peid, peco, pename)
VALUES
(1, 1,'Carl'),
(2, 1,'John'),
(3, 1,'Eric'),
(4, 2,'Donald')
;
CREATE TABLE co
(coid int, coname varchar(30));
INSERT INTO co
(coid, coname)
VALUES
(1,'Volvo'),
(2,'BMW'),
(3,'Microsoft'),
(4,'Apple')
;
-- There is also a doc table that is not needed for this query
CREATE TABLE pres
(presid int4, presdoc int4, prespe int4);
INSERT INTO pres
(presid, presdoc, prespe)
VALUES
(1,1,1),
(2,2,2),
(3,2,3),
(4,3,1)
;
```
The query that do not work (regardless of type of join):
```
SELECT pename,coname,presdoc AS present
FROM pe
LEFT JOIN co ON coid=peco
LEFT JOIN pres ON prespe=peid
WHERE peco = 1 AND presdoc=2
```
The output of this query is
```
John Volvo present
Eric Volvo present
```
The desired output is
```
Carl Volvo
John Volvo present
Eric Volvo present
```
SQLFiddle:
<http://sqlfiddle.com/#!15/77b09/21>
Thanks in advance for any clue.
|
You can do it like this (if you simply need to check presence):
```
SELECT pename,coname,presdoc = 2 AS present
FROM pe
LEFT JOIN co ON coid=peco
LEFT JOIN pres ON prespe=peid
WHERE peco = 1
```
Or even better
```
SELECT DISTINCT pename,coname,presdoc = 2 AS present
FROM pe
LEFT JOIN co ON coid=peco
LEFT JOIN pres ON prespe=peid
WHERE peco = 1
```
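As a sanity check, the DISTINCT variant above produces exactly the desired output on the sample data; here is a sketch using SQLite through Python (SQLite renders the boolean expression as 0/1 rather than f/t):

```python
import sqlite3

# Sample data from the question
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pe (peid INTEGER PRIMARY KEY, peco INTEGER, pename TEXT);
INSERT INTO pe VALUES (1,1,'Carl'),(2,1,'John'),(3,1,'Eric'),(4,2,'Donald');
CREATE TABLE co (coid INTEGER, coname TEXT);
INSERT INTO co VALUES (1,'Volvo'),(2,'BMW');
CREATE TABLE pres (presid INTEGER, presdoc INTEGER, prespe INTEGER);
INSERT INTO pres VALUES (1,1,1),(2,2,2),(3,2,3),(4,3,1);
""")

# presdoc = 2 is an expression, not a filter, so Carl survives with present = 0
rows = conn.execute("""
SELECT DISTINCT pename, coname, presdoc = 2 AS present
FROM pe
LEFT JOIN co ON coid = peco
LEFT JOIN pres ON prespe = peid
WHERE peco = 1
""").fetchall()
print(rows)
```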
|
I think the issue is the condition `presdoc=2`, which forces the query to return only records that were present. This is what you need:
`SQLFiddle Demo`
```
SELECT pename,
coname,
CASE
WHEN presdoc=2 THEN 'Present'
ELSE NULL
END AS Present
FROM pe
LEFT JOIN co ON coid=peco
LEFT JOIN pres ON prespe=peid
WHERE peco = 1
GROUP BY pename,
coname,
present
```
|
show all records in table 1 using many to many relations - postgresql
|
[
"",
"sql",
"postgresql",
"postgresql-9.1",
""
] |
I need your assistance with the below query as I am not sure how to modify to show me the correct results.
Column2 has records that begin with:
```
005....
04....
01....
05....
XYZ....
234....
6789....
V875....
```
I would like to modify the query to exclude records that begin with a single zero, for example 04, 01, 05 (from the example above); the rest are okay. I tried NOT LIKE '0%' but it excludes all records starting with zero. How can I do this? Below is a query sample I have.
```
Select * from table1
where column1 in ('A','B','C') and column2 ????
```
|
Use a regular expression to exclude records that begin with a single zero:
```
WHERE NOT Regexp_like( column, '^0[^0]' );
```
|
You can do this using just `like`:
```
where col not like '0%' or col like '00%'
```
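Both answers are easy to verify on the sample values; here is a sketch of the LIKE-based filter using SQLite through Python (the table and column names are invented, with a trailing character appended to the sample prefixes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (column2 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?)",
                 [("005x",), ("04x",), ("01x",), ("05x",),
                  ("XYZ",), ("234",), ("6789",), ("V875",)])

# Keep rows unless they start with a single zero:
# a leading '0' is only allowed when it is followed by another '0'
rows = conn.execute(
    "SELECT column2 FROM table1 "
    "WHERE column2 NOT LIKE '0%' OR column2 LIKE '00%' "
    "ORDER BY rowid"
).fetchall()
print([r[0] for r in rows])
```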
|
Oracle Not Like
|
[
"",
"sql",
"oracle",
""
] |
I have a table I wish to select data from, grouping the rows by a certain key. For each group I would also like to count how many rows in the group meet a certain criterion. Even if 0 rows in a group meet the criterion, I still want that group returned, with a "Count" field showing 0. Because of this, I can't simply filter out undesired rows with a WHERE clause and select the count of elements in each group.
Any help would be greatly appreciated
|
I would suggest using [`CASE WHEN`](http://www.postgresql.org/docs/8.1/static/functions-conditional.html) (standard ISO SQL syntax) like in this example:
```
SELECT a.category,
SUM(CASE WHEN a.is_interesting = 1 THEN 1 END) AS conditional_count,
COUNT(*) group_count
FROM a
GROUP BY a.category
```
This will sum up values of 1 and null values (when the condition is false), which comes down to actually counting the records that meet the condition.
This will however return *null* when no records meet the conditions. If you want to have 0 in that case, you can either wrap the `SUM` like this:
```
COALESCE(SUM(CASE WHEN a.is_interesting = 1 THEN 1 END), 0)
```
or, shorter, use `COUNT` instead of `SUM`:
```
COUNT(CASE WHEN a.is_interesting = 1 THEN 1 END)
```
For `COUNT` it does not matter what value you put in the `THEN` clause, as long as it is not *null*. It will count the instances where the expression is not *null*.
The addition of the `ELSE 0` clause also generally returns 0 with `SUM`:
```
SUM(CASE WHEN a.is_interesting = 1 THEN 1 ELSE 0 END)
```
There is however one boundary case where that `SUM` will still return *null*. This is when there is no `GROUP BY` clause and no records meet the `WHERE` clause. For instance:
```
SELECT SUM(CASE WHEN 1 = 1 THEN 1 ELSE 0 END)
FROM a
WHERE 1 = 0
```
will return *null*, while the `COUNT` or `COALESCE` versions will still return 0.
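The NULL-skipping behaviour of COUNT is easy to verify; here is a sketch using SQLite through Python (table and column names follow the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (category TEXT, is_interesting INTEGER)")
conn.executemany("INSERT INTO a VALUES (?, ?)",
                 [("x", 1), ("x", 0), ("x", 1), ("y", 0)])

# COUNT ignores NULLs, so a CASE with no ELSE counts only the matching rows;
# group 'y' still appears, with a conditional count of 0
rows = conn.execute(
    "SELECT category,"
    "       COUNT(CASE WHEN is_interesting = 1 THEN 1 END) AS conditional_count,"
    "       COUNT(*) AS group_count"
    "  FROM a GROUP BY category ORDER BY category"
).fetchall()
print(rows)
```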
|
You have a condition on which you `GROUP BY`. Simply add a column with a conditional expression that can be summed up:
```
..., SUM(
CASE WHEN othercondition THEN 1
ELSE 0 END
) AS MatchingCondition, ...
```
The rows for which the condition is true will yield 1, thereby appearing as a row count in the grouped results. If no rows match, you will get NULL, so you need to wrap the SUM in a COALESCE to reduce it to the desired value of 0.
# Derived trick
This offers a way to group for different conditions. Say that you have three different conditions which are mutually exclusive, and you want to count all three (i.e., in a GROUP BY tuple you have 32 rows of which 10 match condition 1, 6 match condition 2 and 16 match condition 3). If you additionally know that the maximum number of tuples in a group will never exceed N, you can encode the three conditions in a single number:
```
..., SUM(
CASE WHEN condition1 THEN 1
WHEN condition2 THEN N
WHEN condition3 THEN N*N
WHEN condition4 THEN N*N*N
ELSE 0 END
) AS MatchingCondition, ...
```
The resulting number modulo N will yield how many rows match condition1. The number divided by N, modulo N will yield matches for condition2. The modulo remainder by N\*N will yield matches for condition3, and so on:
```
num1 = result % N
result = (result - num1) / N
num2 = result % N
result = (result - num2) / N
num3 = result % N
...
```
(A further refinement using larger multipliers allows encoding in a single column the result of several *non-mutually exclusive* conditions).
|
How to count how many rows inside a "group by" group meets a certain criteria
|
[
"",
"sql",
"postgresql",
"count",
"group-by",
""
] |
I have a query something like:
```
SELECT *
FROM qAll
WHERE name not in('Alina,Charaidew,Sukapha')
```
which is not working. What is the best way to do this? The list is generated dynamically and may be different every time.
|
In CF, you should use `cfqueryparam` for your query parameters. To pass a list as a parameter, add the `list` attribute to the `cfqueryparam`.
Your query should be similar to the one below:
```
<cfset nameList = "Alina,Charaidew,Sukapha">
<cfquery name="queryName" datasource="#Application.ds#">
SELECT * FROM qAll WHERE name NOT IN (<cfqueryparam cfsqltype="CF_SQL_VARCHAR" list="Yes" value="#nameList#" >)
</cfquery>
```
|
SQL treats `'Alina,Charaidew,Sukapha'` as a single value; that's why you are not getting any results.
The query should be like:
```
SELECT *
FROM qAll
WHERE name not in('Alina','Charaidew','Sukapha')
```
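The difference between one comma-separated string and a real list is easy to demonstrate; a sketch with SQLite through Python (sample names taken from the question, plus one extra row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE qAll (name TEXT)")
conn.executemany("INSERT INTO qAll VALUES (?)",
                 [("Alina",), ("Charaidew",), ("Sukapha",), ("Maria",)])

# One quoted string per value: a single 'A,B,C' string is one value, not three
rows = conn.execute(
    "SELECT name FROM qAll WHERE name NOT IN ('Alina','Charaidew','Sukapha')"
).fetchall()
print(rows)
```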
|
Filtering a list of names from query
|
[
"",
"sql",
"coldfusion",
"qoq",
""
] |
I want to "create or replace" a trigger for a postgres table. However, there is not such sql expression.
I see that I can do a "`DROP TRIGGER IF EXISTS`" first (<http://www.postgresql.org/docs/9.5/static/sql-droptrigger.html>).
My question are:
1. Is there a recommended/better option than (`DROP` + `CREATE` trigger)
2. Is there a reason why there is not such "create or replace trigger" (which might imply that I should not be wanting to do it)
Note that there is a "`Create or Replace Trigger`" in oracle (<https://docs.oracle.com/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm>). Then,
3. Is such command planned for Postgres at all?
|
PostgreSQL has transactional DDL, so `BEGIN > DROP > CREATE > COMMIT` is the equivalent of `CREATE OR REPLACE`.
[This](https://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis) is a nice write-up of how Postgres's transactional DDL compares to other systems (such as Oracle).
The currently planned Postgres features regarding [triggers](https://wiki.postgresql.org/wiki/Todo#Triggers) do not include adding the `REPLACE` syntax.
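A minimal sketch of the DROP-then-CREATE-inside-a-transaction pattern, using SQLite through Python only because it is easy to run inline (the trigger name `trg` and the helper function are invented; in Postgres you would issue the equivalent `DROP TRIGGER IF EXISTS` / `CREATE TRIGGER` between `BEGIN` and `COMMIT`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("CREATE TABLE log (msg TEXT)")

def replace_trigger(body):
    # DROP + CREATE inside one transaction behaves like CREATE OR REPLACE
    with conn:  # commits on success, rolls back on error
        conn.execute("DROP TRIGGER IF EXISTS trg")
        conn.execute(f"CREATE TRIGGER trg AFTER INSERT ON t BEGIN {body} END")

replace_trigger("INSERT INTO log VALUES ('v1');")
replace_trigger("INSERT INTO log VALUES ('v2');")  # replaces the first version
conn.execute("INSERT INTO t VALUES (1)")
rows = conn.execute("SELECT msg FROM log").fetchall()
print(rows)
```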
|
There is no way to create or replace a trigger, but you can do it this way:
```
DROP TRIGGER IF EXISTS yourtrigger_name on "yourschemaname"."yourtablename";
```
|
Create or replace trigger postgres
|
[
"",
"sql",
"postgresql",
"ddl",
"database-trigger",
""
] |
I am new to SQL and I am kind of lost. I have a table that contains products, various fields like `productname`, `category` etc.
I want to have a query where I can say something like: select all products in some category that have a specific word in their product name. The complicating factor is that I only want to return a specific range of that subset, so I also want to say: return me products 100 to 120 that fall within that specification.
I googled and found this query:
```
WITH OrderedRecords AS
(
    SELECT *, ROW_NUMBER() OVER (ORDER BY PRODUCTNUMMER) AS "RowNumber"
FROM (
SELECT *
FROM SHOP.dbo.PRODUCT
WHERE CATEGORY = 'ARDUINO'
and PRODUCTNAME LIKE '%yellow%'
    ) AS Filtered
)
SELECT * FROM OrderedRecords WHERE RowNumber BETWEEN 100 and 120
Go
```
The query works to an extent; however, it assigns the row numbers before filtering, so I won't get enough records, and I don't know how to handle it if there are no parameters. Ideally I want to be able to give no category or search word and have it just list all products.
I have no idea how to achieve this though and any help is appreciated!
|
If you just need to to paginate your query and return a specific range of results, you can simply use [OFFSET FETCH Clause](https://technet.microsoft.com/en-us/library/gg699618.aspx).
That way there is no need to filter result items by `RowNumber`. I think this solution is easier:
```
SELECT *
FROM SHOP.dbo.PRODUCT
WHERE CATEGORY = 'ARDUINO' AND PRODUCTNAME LIKE '%yellow%'
ORDER BY PRODUCTNUMMER
OFFSET 100 ROWS -- start row
FETCH NEXT 20 ROWS ONLY -- page size
```
Find out more [Pagination with OFFSET / FETCH](http://sqlperformance.com/2015/01/t-sql-queries/pagination-with-offset-fetch)
|
Building on what `esiprogrammer` showed in his answer on how to return only rows in a certain range using paging.
Your second question was:
> Ideally I want to be able to not give a category and search word and it will just list all products.
You can either have two queries/stored procedures, one for the case where you do lookup with specific parameters, another for lookup without parameters.
Or, if you insist on keeping one query/stored procedure for all cases, there are two options:
1. Build a Dynamic SQL statement that only has the filters that are present; execute it using `EXECUTE (@sql)` or `EXECUTE sp_executesql @sql`
2. Build a Catch-All Query
---
Example for option 2:
```
-- if no category is given, it will be NULL
DECLARE @search_category VARCHAR(128);
-- if no name is given, it will be NULL
DECLARE @search_name VARCHAR(128);
SELECT *
FROM SHOP.dbo.PRODUCT
WHERE (@search_category IS NULL OR CATEGORY=@search_category) AND
      (@search_name IS NULL OR PRODUCTNAME LIKE '%'+@search_name+'%')
ORDER BY PRODUCTNUMMER
OFFSET 100 ROWS
FETCH NEXT 20 ROWS ONLY
OPTION(RECOMPILE); -- generate a new plan on each execution that is optimized for that execution's set of parameters
```
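The catch-all pattern is easy to test; here is a sketch with SQLite through Python (SQLite uses LIMIT/OFFSET instead of OFFSET/FETCH and has no `OPTION(RECOMPILE)`, so only the NULL-disables-filter part carries over; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (productnumber INTEGER, category TEXT, productname TEXT)")
conn.executemany("INSERT INTO product VALUES (?, ?, ?)",
                 [(1, "ARDUINO", "yellow led"), (2, "ARDUINO", "red led"),
                  (3, "CABLE", "yellow wire")])

def search(category=None, name=None):
    # A NULL parameter disables its filter, exactly like the catch-all WHERE above
    return conn.execute(
        "SELECT productnumber FROM product"
        " WHERE (? IS NULL OR category = ?)"
        "   AND (? IS NULL OR productname LIKE '%' || ? || '%')"
        " ORDER BY productnumber",
        (category, category, name, name)).fetchall()

print(search())                                  # no filters: all products
print(search(category="ARDUINO", name="yellow")) # both filters applied
```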
|
Optional parameters in SQL query
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Existing table t:
```
namz varchar2(255), sumz number, datez date
-------------------------------------------
John 100.50 01.01.2016
Ivan 200.45 02.02.2014
...
John 400.32 03.03.2016
```
Can I get grouped monthly sales for a defined period by using one simple select:
```
select namz, .... from t where datez>date1 and datez<=date2 and ....
```
and get result like:
```
namz 01.2014 02.2014 03.2014 .... 03.2016
-------------------------------------------
John null 1200.34 234.34 ... null
Ivan 1234.45 null null ... 254.23
```
Thank you in advance!
|
Using the dataset
```
CREATE TABLE tmp AS
(SELECT 'John' name, 100.5 Sumz, '01-Jan-16' datez FROM dual UNION
SELECT 'Ivan' name, 200.45 Sumz, '02-Feb-14' datez FROM dual UNION
SELECT 'John' name, 400.32 Sumz, '03-Mar-16' datez FROM dual UNION
SELECT 'John' name, 200.0 Sumz, '03-Mar-16' datez FROM dual UNION
SELECT 'John' name, 8475.36 Sumz, '01-Jan-18' datez FROM dual UNION
SELECT 'Ivan' name, 8759.36 Sumz, '03-Mar-16' datez FROM dual)
;
```
Use the pivot statement
```
select * FROM
(
SELECT name, sumz, datez
FROM tmp
)
PIVOT
(
Sum(sumz)
FOR datez IN ('01-Jan-16','02-Feb-14','03-Mar-16', '01-Jan-18')
)
ORDER BY name;
```
It gives you exactly what you want
|
You can use the functions `YEAR`, `MONTH` and `CONCAT` like this:
```
SELECT namz, CONCAT(MONTH(datez),'.',YEAR(datez)) as d, SUM(something) as s
FROM t
WHERE datez>date1
AND datez<=date2
GROUP BY namz, d
```
The result will be like :
```
namz      d       s
---------------------
Ivan 01.2014 1234.45
John 02.2014 1200.34
...
```
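A third, portable option is conditional aggregation: one `SUM(CASE ...)` per wanted month column. A sketch using SQLite through Python (the `strftime` month formatting is SQLite-specific; in Oracle you would use `TO_CHAR`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (namz TEXT, sumz REAL, datez TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("John", 100.5, "2016-01-01"), ("Ivan", 200.45, "2014-02-02"),
                  ("John", 400.32, "2016-03-03"), ("John", 200.0, "2016-03-03")])

# One SUM(CASE ...) per wanted month turns rows into columns
rows = conn.execute(
    "SELECT namz,"
    "       SUM(CASE WHEN strftime('%m.%Y', datez) = '02.2014' THEN sumz END) AS m_02_2014,"
    "       SUM(CASE WHEN strftime('%m.%Y', datez) = '03.2016' THEN sumz END) AS m_03_2016"
    "  FROM t GROUP BY namz ORDER BY namz"
).fetchall()
print(rows)  # months with no sales stay NULL, as in the desired output
```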
|
SQL Select with difficult grouping
|
[
"",
"sql",
"oracle",
""
] |
I explain it with an example. We have 5 events (each with an start- and end-date) which partly overlap:
```
create table event (
id integer primary key,
date_from date,
date_to date
);
--
insert into event (id, date_from, date_to) values (1, to_date('01.01.2016', 'DD.MM.YYYY'), to_date('03.01.2016 23:59:59', 'DD.MM.YYYY HH24:MI:SS'));
insert into event (id, date_from, date_to) values (2, to_date('05.01.2016', 'DD.MM.YYYY'), to_date('08.01.2016 23:59:59', 'DD.MM.YYYY HH24:MI:SS'));
insert into event (id, date_from, date_to) values (3, to_date('03.01.2016', 'DD.MM.YYYY'), to_date('05.01.2016 23:59:59', 'DD.MM.YYYY HH24:MI:SS'));
insert into event (id, date_from, date_to) values (4, to_date('03.01.2016', 'DD.MM.YYYY'), to_date('03.01.2016 23:59:59', 'DD.MM.YYYY HH24:MI:SS'));
insert into event (id, date_from, date_to) values (5, to_date('05.01.2016', 'DD.MM.YYYY'), to_date('07.01.2016 23:59:59', 'DD.MM.YYYY HH24:MI:SS'));
--
commit;
```
Here the events visualized:
```
1.JAN 2.JAN 3.JAN 4.JAN 5.JAN 6.JAN 7.JAN 8.JAN
---------1--------- ------------2-------------
---------3---------
--4-- ---------5---------
```
Now I would like to select the maximum number of events which overlap in a given timerange.
For the timerange 01.01.2016 00:00:00 - 08.01.2016 23:59:59 the result should be 3 because max 3 events overlap (between 03.01.2016 00:00:00 - 03.01.2016 23:59:59 and between 05.01.2016 00:00:00 - 05.01.2016 23:59:59).
For the timerange 06.01.2016 00:00:00 - 08.01.2016 23:59:59 the result should be 2 because max 2 events overlap (between 06.01.2016 00:00:00 - 07.01.2016 23:59:59).
Would there be a (performant) solution in SQL? I am thinking about performance because there could be many events in a wide timerange.
**Update #1**
I like MTO's answer most. It even works for the timerange 01.01.2016 00:00:00 - 01.01.2016 23:59:59. I adapted the SQL to my exact needs:
```
select max(num_events)
from (
select sum(startend) over (order by dt) num_events
from (
select e1.date_from dt,
1 startend
from event e1
where e1.date_to >= :date_from
and e1.date_from <= :date_to
union all
select e2.date_to dt,
-1 startend
from event e2
where e2.date_to >= :date_from
and e2.date_from <= :date_to
)
);
```
|
This will get all the time ranges and the count of events occurring within those ranges:
```
SELECT *
FROM (
SELECT dt AS date_from,
         LEAD( dt ) OVER ( ORDER BY dt ) AS date_to,
SUM( startend ) OVER ( ORDER BY dt ) AS num_events
FROM (
SELECT date_from AS dt, 1 AS startend FROM event
UNION ALL
SELECT date_to, -1 FROM event
)
)
WHERE date_from < date_to;
```
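The query above is a SQL formulation of the classic sweep-line technique: +1 at every start, -1 at every end, and the running sum is the number of concurrent events. A plain-Python sketch of the same idea on the sample events (the timerange filter from the question is omitted):

```python
# date_from / date_to pairs from the question, with date_to as the last day
events = [("2016-01-01", "2016-01-03"), ("2016-01-05", "2016-01-08"),
          ("2016-01-03", "2016-01-05"), ("2016-01-03", "2016-01-03"),
          ("2016-01-05", "2016-01-07")]

def max_overlap(ranges):
    # End markers sort after start markers on the same date (0 < 1),
    # so an event ending on a day still overlaps one starting that day
    points = sorted([(s, 0, 1) for s, e in ranges] +
                    [(e, 1, -1) for s, e in ranges])
    best = cur = 0
    for _, _, delta in points:
        cur += delta
        best = max(best, cur)
    return best

print(max_overlap(events))
```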
|
If you only need the number and don't need to operate on more precise time values, it is just something like this:
```
SELECT MAX(c) max_overlap FROM
(SELECT d, COUNT(1) c
FROM
(SELECT date_from d FROM event
UNION ALL
SELECT date_to FROM event
) A
GROUP BY A.d
) B
```
Otherwise it will need to use recursion etc.
|
Count max number of overlapping timeranges in a timerange
|
[
"",
"sql",
"oracle",
"performance",
""
] |
I have run into a problem setting the first day of the week in my SQL query. Here is the code below, where I have set it using IF / ELSE IF. But I need one SQL statement, not IF / ELSE IF.
```
DECLARE @CurrentDate DATE, @FirstDayOfWeek INT
SELECT @CurrentDate = DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0)
SELECT @FirstDayOfWeek = DATEPART(DW,@CurrentDate)
IF @FirstDayOfWeek = 6
SET DATEFIRST 5; --friday
ELSE IF @FirstDayOfWeek = 7
SET DATEFIRST 6; --sunday
--AND SO ON
```
|
The result of `DATEPART(DW, . . .)` depends on the current value of `@@DATEFIRST`, so you need to take that into account when setting the new value of `@@DATEFIRST`. The simplest thing would be to set `@@DATEFIRST` back to 1, get the current day of the week, and then set `@@DATEFIRST` to that.
By the way, if you just want the date of a date-time, you can just cast it to `DATE`. Ultimately, though, there's no need to convert the date-time to a date, because the result of `DATEPART(DW, . . .)` will be the same either way.
```
DECLARE @CurrentDate DATE, @CurrentDayOfWeek INT
SET @CurrentDate = CAST(GETDATE() AS DATE)
SET DATEFIRST 1 -- normalize first day of week
SET @CurrentDayOfWeek = DATEPART(DW, @CurrentDate)
SET DATEFIRST @CurrentDayOfWeek
```
Or
```
DECLARE @CurrentDayOfWeek INT
SET DATEFIRST 1 -- normalize first day of week
SET @CurrentDayOfWeek = DATEPART(DW, GETDATE())
SET DATEFIRST @CurrentDayOfWeek
```
Example:
```
DECLARE @CurrentDayOfWeek INT
SET DATEFIRST 1 -- normalize first day of week to Monday
SET @CurrentDayOfWeek = DATEPART(DW, '2016-01-01') -- a Friday, which will be 5 because first day of week is currently 1
SET DATEFIRST @CurrentDayOfWeek -- first day of week is now 5 (Friday)
```
|
I know your question mentions `DATEFIRST`, but is there any reason you don't just calculate it based on how many days it has been since the first of the year?
```
SELECT ((DATEPART(DY, GETDATE()) - 1) % 7) + 1
```
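The day-of-year arithmetic is easy to check in isolation; a sketch in Python (the helper name is invented):

```python
from datetime import date

def datefirst_from_year_start(d):
    # Day-of-year, zero-based, modulo 7, then back into the 1..7 range:
    # Jan 1 maps to 1, Jan 8 wraps back to 1, and so on
    return ((d.timetuple().tm_yday - 1) % 7) + 1

print(datefirst_from_year_start(date(2016, 1, 1)))  # 1
print(datefirst_from_year_start(date(2016, 1, 9)))  # 2
```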
---
*old answer:*
Just got to mention that if `DATEFIRST` has been changed from the default of 7, this will no longer work, since `DATEPART(dw,...)` will return a different value. The value used with `SET DATEFIRST` is *always* 1 = Monday, 7 = Sunday, etc. You can always do a `SET DATEFIRST 7` ahead of time...
Basically what you are doing is subtracting 1 and rolling back over to 7 if you hit 0. You can just use the mod operator (`%`) for that:
|
Set datefirst dynamically by the first day of the year
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have this query:
```
SELECT YEAR(`data`) AS ano, SUM(ativo) AS tempo_ativo
FROM rh.processamento
GROUP BY YEAR(`data`);
```
The result from this query is
```
ano tempo_ativo
2015 108247387
2016 172003845
```
And this query:
```
SELECT YEAR(`data`) AS ano, SUM(tempo) AS tempo_extra
FROM rh.aprovacoes
WHERE tipo = 'BH' OR tipo = 'HE' AND estado=1
GROUP BY YEAR(`data`);
```
The result is:
```
ano tempo_extra
0 8768100
2015 -4410782
2016 -7213369
```
I made this query to join the results from both queries:
```
SELECT YEAR(processamento.`data`) AS ano, SUM(ativo) AS tempo_ativo, SUM(tempo)/3600 AS tempo_extra
FROM rh.processamento
LEFT JOIN rh.aprovacoes ON processamento.`data`=aprovacoes.`data` AND (tipo = 'BH' OR tipo = 'HE') AND estado=1
GROUP BY YEAR(aprovacoes.`data`);
```
But the results are wrong.
I need the results to be something like this:
```
ano tempo_ativo tempo_extra
0 NULL 8768100
2015 108247387 -4410782
2016 172003845 -7213369
```
Can you guys help me?
|
If the second query *always* returns all records of the first one, then you can try using a `LEFT JOIN`:
```
SELECT t1.ano, tempo_extra, tempo_ativo
FROM (
SELECT YEAR(`data`) AS ano, SUM(tempo) AS tempo_extra
FROM rh.aprovacoes
WHERE tipo = 'BH' OR tipo = 'HE' AND estado=1
GROUP BY YEAR(`data`)
) AS t1
LEFT JOIN (
SELECT YEAR(`data`) AS ano, SUM(ativo) AS tempo_ativo
FROM rh.processamento
GROUP BY YEAR(`data`)
) AS t2 ON t1.ano = t2.ano
```
|
Try this:
```
SELECT *
FROM
(
SELECT YEAR(`data`) AS ano, SUM(tempo) AS tempo_extra
FROM rh.aprovacoes
WHERE tipo = 'BH' OR tipo = 'HE' AND estado=1
GROUP BY YEAR(`data`)
) t1
LEFT JOIN
(
SELECT YEAR(`data`) AS ano, SUM(ativo) AS tempo_ativo
FROM rh.processamento
GROUP BY YEAR(`data`)
) t2
USING(ano)
```
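The aggregate-then-join pattern from both answers can be verified quickly; a sketch using SQLite through Python (the `tipo`/`estado` filters are omitted, the year is extracted with SQLite's `strftime`, and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE aprovacoes (data TEXT, tempo INTEGER);
CREATE TABLE processamento (data TEXT, ativo INTEGER);
INSERT INTO aprovacoes VALUES ('2014-02-02', -5), ('2015-06-01', -10), ('2016-06-01', -20);
INSERT INTO processamento VALUES ('2015-06-01', 100), ('2016-06-01', 200);
""")

# Aggregate each table on its own, then LEFT JOIN the two derived tables on the year;
# the year missing from processamento comes back with a NULL total
rows = conn.execute("""
SELECT t1.ano, t1.tempo_extra, t2.tempo_ativo
FROM (SELECT strftime('%Y', data) AS ano, SUM(tempo) AS tempo_extra
      FROM aprovacoes GROUP BY ano) AS t1
LEFT JOIN (SELECT strftime('%Y', data) AS ano, SUM(ativo) AS tempo_ativo
           FROM processamento GROUP BY ano) AS t2 ON t1.ano = t2.ano
ORDER BY t1.ano
""").fetchall()
print(rows)
```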
|
Join 2 queries results
|
[
"",
"mysql",
"sql",
""
] |
I'm new to SQL so please forgive me if I use incorrect terminology and my question sounds confused.
I've been tasked with writing a stored procedure which will be sent 3 variables as strings (varchar I think). I need to take two of the variables and remove text from the end of the variable and only from the end.
The strings/text I need to remove from the end of the variables are
* co
* corp
* corporation
* company
* lp
* llc
* ltd
* limited
For example this string
> Global Widgets LLC
would become
> Global Widgets
However it should only apply once so
> Global Widgets Corporation LLC
Should become
> Global Widgets Corporation
I then need to use the altered variables to do a SQL query.
This is to be used as a backup for an integration piece we have which makes a callout to another system. The other system takes the same variables and uses Regex to remove the strings from the end of variables.
I've tried different combinations of PATINDEX, SUBSTRING, REPLACE, STUFF but cannot seem to come up with something that will do the job.
===============================================================
Edit: I want to thank everyone for the answers provided so far, but I left out some information that I didn't think was important; judging by the answers, it seems it would affect the processing.
My proc will start something like
```
ALTER PROC [dbo].[USP_MyDatabaseTable] @variableToBeAltered nvarchar(50)
AS
```
I will then need to remove all `,` and `.` characters. I've already figured out how to do this. I will then need to do the processing on `@variableToBeAltered` (technically there will be two variables) to remove the strings I listed previously. I must then remove all spaces from `@variableToBeAltered`. (Again I figured that part out). Then finally I will use `@variableToBeAltered` in my SQL query something like
```
SELECT [field1] AS myField
,[field2] AS myOtherField
FROM [MyData].[dbo].[MyDatabaseTable]
WHERE [field1] = (@variableToBeAltered);
```
I hope this information is more useful.
|
Building on the answer given by Tom H, but applying across the entire table:
```
set nocount on;
declare @suffixes table(tag nvarchar(20));
insert into @suffixes values('co');
insert into @suffixes values('corp');
insert into @suffixes values('corporation');
insert into @suffixes values('company');
insert into @suffixes values('lp');
insert into @suffixes values('llc');
insert into @suffixes values('ltd');
insert into @suffixes values('limited');
declare @companynames table(entry nvarchar(100),processed bit default 0);
insert into @companynames values('somecompany llc',0);
insert into @companynames values('business2 co',0);
insert into @companynames values('business3',0);
insert into @companynames values('business4 lpx',0);
while exists(select * from @companynames where processed = 0)
begin
declare @currentcompanyname nvarchar(100) = (select top 1 entry from @companynames where processed = 0);
update @companynames set processed = 1 where entry = @currentcompanyname;
update @companynames
set entry = SUBSTRING(entry, 1, LEN(entry) - LEN(tag))
from @suffixes
where entry like '%' + tag
end
select * from @companynames
```
|
I'd keep all of your suffixes in a table to make this a little easier. You can then perform code like this either within a query or against a variable.
```
DECLARE @company_name VARCHAR(50) = 'Global Widgets Corporation LLC'
DECLARE @Suffixes TABLE (suffix VARCHAR(20))
INSERT INTO @Suffixes (suffix) VALUES ('LLC'), ('CO'), ('CORP'), ('CORPORATION'), ('COMPANY'), ('LP'), ('LTD'), ('LIMITED')
SELECT @company_name = SUBSTRING(@company_name, 1, LEN(@company_name) - LEN(suffix))
FROM @Suffixes
WHERE @company_name LIKE '%' + suffix
SELECT @company_name
```
The keys here are that you are only matching with strings that end in the suffix and it uses `SUBSTRING` rather than `REPLACE` to avoid accidentally removing copies of any of the suffixes from the middle of the string.
The `@Suffixes` table is a table variable here, but it makes more sense for you to just create it and fill it as a permanent table.
The query will just find the one row (if any) that matches its suffix with the end of your string. If a match is found then the variable will be set to a substring with the length of the suffix removed from the end. There will usually be a trailing space, but for a `VARCHAR` that will just get dropped off.
There are still a couple of potential issues to be aware of though...
First, if you have a company name like "Watco" then the "co" would be a false positive here. I'm not sure what can be done about that other than maybe making your suffixes include a leading space.
Second, if one suffix ends with one of your other suffixes then the ordering that they get applied could be a problem. You could get around this by only applying the row with the greatest length for `suffix`, but it gets a little more complicated, so I've left that out for now.
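For comparison, here is the same strip-one-suffix-from-the-end logic in plain Python; it applies both fixes discussed above (longest suffix first, and a required leading space so that "Watco" is not clipped):

```python
# Longest first, so "corp" never pre-empts "corporation"
SUFFIXES = ("corporation", "company", "limited", "corp", "llc", "ltd", "co", "lp")

def strip_suffix(name):
    lowered = name.lower()
    for suffix in SUFFIXES:
        # Require a space before the suffix to avoid clipping names like "Watco"
        if lowered.endswith(" " + suffix):
            return name[:-(len(suffix) + 1)]  # drop suffix and the space
    return name  # applied at most once, so only one suffix is removed

print(strip_suffix("Global Widgets LLC"))
print(strip_suffix("Global Widgets Corporation LLC"))
print(strip_suffix("Watco"))  # unchanged
```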
|
SQL Server 2012: Remove text from end of string
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a table with a `VARCHAR2` column which contains values that are a mixture of pure-numbers and alpha-numerics. I have a `CODE` column that contains:
```
200
215
220
A553
D545
etc.
```
The following query works:
```
select *
from TABLE
where CLASS = 3
AND ((CODE >= 210 and CODE < 220) or CODE = 291)
```
Values that are CLASS 3 are always numeric.
But when I add a `ORDER BY`, it doesn't work:
```
select *
from TABLE
where CLASS = 3
and ((CODE >= 210 and CODE < 220) or CODE = 291)
ORDER BY CODE
```
instead I get `ORA-01722: invalid number`. This [seems to be](http://www.dba-oracle.com/sf_ora_01722_invalid_number.htm) because the Oracle Optimiser is assessing the "order by" before the where clause, and thus non-numeric values get assessed.
I have tried changing it to `ORDER BY TO_CHAR(CODE)` but to no effect. Similar negative result when trying to place it all into a sub-query.
So, how do I order the results of this query by `CODE` (`ASC`ending)? I guess I can specify all possible `CODE` values manually in the where clause as strings (i.e. `code = '210' or code = '211' or...`), but is there a more elegant way?
|
The problem may be in your `WHERE` condition, given that it forces Oracle to cast your CODE to a number.
Try keeping the `WHERE` condition in `varchar2` format:
```
with TABLE_(code, class_) as
(
select '200',3 from dual union all
select '215',3 from dual union all
select '220',3 from dual union all
select 'A553',3 from dual union all
select 'D545',3 from dual
)
select *
from TABLE_
where CLASS_ = 3
and ( (CODE >= '210' and CODE < '220') or CODE = '291')
ORDER BY CODE
```
|
The `ORDER BY` has nothing to do with the problem -- at least not directly.
SQL in general, and Oracle in particular, make no promises about the order of evaluation of conditions in the `WHERE` clause. Hence, the `WHERE` clause is not (necessarily) evaluated in the order written. The presence of the `ORDER BY` might affect the order of evaluation of the conditions in this particular case.
In general, it is really bad practice to mix data types, the way that you are doing it. But, you can guarantee the order of evaluation by using `case`:
```
select *
from TABLE
where CLASS = 3 and
      'true' = (case when class <> 3 then 'false'
                     when (CODE >= 210 and CODE < 220) or CODE = 291 then 'true'
                end);
```
*I do not recommend doing this.* I only want to point out that `case` does force the order of evaluation of the conditions.
The correct solution is to use string comparisons. In this case, I would go with:
```
select *
from TABLE
where CLASS = 3 AND
CODE in ('210', '211', '212', '213', '214', '215', '216', '217', '218', '219', '291')
```
Alternatively, you could do:
```
where CLASS = 3 and length(CODE) = 3 and
((CODE >= '210' and CODE < '220') or CODE = '291')
```
Note that for accuracy you do need to take the length into account.
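The length-plus-string-comparison approach can be checked on any engine; a sketch with SQLite through Python using the sample codes from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (code TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("200",), ("215",), ("220",), ("A553",), ("D545",)])

# Pure string comparison: no cast to number ever happens, so 'A553' cannot blow up,
# and length(code) = 3 keeps longer strings out of the '210'..'220' string range
rows = conn.execute(
    "SELECT code FROM t "
    "WHERE length(code) = 3 AND ((code >= '210' AND code < '220') OR code = '291') "
    "ORDER BY code"
).fetchall()
print(rows)
```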
|
Oracle ordering of results using a mixed varchar column but numeric where clause
|
[
"",
"sql",
"oracle",
"sql-order-by",
""
] |
I want to get a query result of 14 rows even when the table has only 6 records available.
For example, the table has only 6 records, so I want to make a SELECT whose output gives me the 6 records plus 8 blank rows.
Like this:
```
|trackings |
---------------
|track1 |
|track2 |
|track3 |
|track4 |
|track5 |
|track6 |
| *blank* |
| *blank* |
| *blank* |
| *blank* |
| *blank* |
| *blank* |
| *blank* |
| *blank* |
```
I searched on Google for how to achieve this but couldn't find the best tags for it. I read some examples with UNION, but in this case it is more than one blank or custom row.
Note that the records in the table may change, but I always need exactly 14 rows: either 14 rows with data, or some of them blank, depending on the case.
Thanks, and sorry for my English!
|
You might be better off implementing this in your application layer; however, if you want to do this in SQL, you can use a CTE. The first CTE generates a set of empty records, in the second CTE these empty records are unioned with the results of your query, and in the final query the top 14 rows are selected, sorting the non-empty records to the top:
```
WITH cte
AS ( SELECT 0 AS Id ,
' ' AS EmptyData
UNION ALL
SELECT Id + 1 AS Id ,
EmptyData
FROM cte
WHERE Id < 14
),
cte2
AS ( SELECT 1 AS SortOrder ,
trackings
FROM dbo.data
UNION ALL
SELECT 2 AS SortOrder ,
EmptyData
FROM cte
)
SELECT TOP 14
trackings
FROM cte2
ORDER BY SortOrder
```
The advantage of this approach is that you can easily change the total number of records, just replace the two occurrences of 14 with a different number.
|
I have added otherField1, otherField2 just to make the answer generic.
This result set always has 14 records, the last ones filled with NULL if there are fewer than that number in the table:
```
select top 14 tracking, otherField1, otherField2
from (
select tracking, otherField1, otherField2, 1 as orderBy from yourTable
union all select null, null, null, 2
union all select null, null, null, 2
union all select null, null, null, 2
union all select null, null, null, 2
union all select null, null, null, 2
union all select null, null, null, 2
union all select null, null, null, 2
union all select null, null, null, 2
union all select null, null, null, 2
    union all select null, null, null, 2 -- repeat this line until there are 14 filler rows
) as subQuery
order by orderBy, tracking
```
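The padding idea is easy to verify; a sketch with SQLite through Python (SQLite's `LIMIT` plays the role of `TOP`, and the filler rows are generated programmatically instead of being written out by hand):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trackings (tracking TEXT)")
conn.executemany("INSERT INTO trackings VALUES (?)",
                 [(f"track{i}",) for i in range(1, 7)])  # only 6 real rows

# Real rows get orderBy = 1 and sort first; padding rows (orderBy = 2) fill the rest
pad = " UNION ALL ".join(["SELECT NULL, 2"] * 14)  # 14 filler rows is always enough
rows = conn.execute(
    "SELECT tracking FROM "
    "(SELECT tracking, 1 AS orderBy FROM trackings "
    f"UNION ALL {pad}) "
    "ORDER BY orderBy, tracking LIMIT 14"
).fetchall()
print(rows)  # 6 data rows followed by 8 NULL rows
```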
|
SQL select returning a defined number of rows
|
[
"",
"sql",
"sql-server",
"select",
"rows",
""
] |
What is a proper way to make a query in Tarantool DB with SQL LIKE keyword?
For example:
```
SELECT * FROM space where smth LIKE '%some_value%';
```
Can I search for values using part of an index, or do I need to write my own Lua script for such functionality?
|
Yes, you should write a Lua script that iterates over the space and uses the Lua function `gsub` on the `smth` field of each tuple.
There is, for now, no built-in way to search for part of a string.
|
There is nothing wrong with your query if you use Tarantool version 2.x:
```
SELECT * FROM "your_space" WHERE "smth" LIKE '%some_value%';
```
|
SQL LIKE query in Tarantool
|
[
"",
"sql",
"sql-like",
"tarantool",
""
] |
I have to store the words "SI" or "NO" (affirmation and negation in Spanish) in a Boolean field. But I can only use these:
TRUE:
't'
'true'
'y'
'yes'
'on'
'1'
FALSE:
'f'
'false'
'n'
'no'
'off'
'0'
```
create table sales
(
code varchar(3),
sold boolean,
CONSTRAINT pk_codesale PRIMARY KEY (code)
);
```
I try that:
```
insert into sales(code, sold)
values('001','SI');
```
Edit:
Based on Python's post, I tried this code (based on AlexT82):
```
insert into sales(code, sold)
(SELECT '001',CASE
WHEN 'FILLHERE' ='SI' THEN 't'
ELSE 'f' END
);
```
|
you could do this:
```
insert into sales(code, sold)
(SELECT '001',CASE
WHEN 'FILLHERE' ='SI' THEN 1
ELSE 0 END
);
```
Replace `'FILLHERE'` with the "SI" or "NO" text that comes from your application.
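The CASE mapping can be checked quickly; a sketch using SQLite through Python (SQLite stores the boolean as 0/1, and the `insert_sale` helper stands in for the application passing "SI" or "NO"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (code TEXT PRIMARY KEY, sold BOOLEAN)")

def insert_sale(code, answer):
    # Map the Spanish 'SI'/'NO' to a value the boolean column accepts
    conn.execute("INSERT INTO sales (code, sold) "
                 "SELECT ?, CASE WHEN ? = 'SI' THEN 1 ELSE 0 END",
                 (code, answer))

insert_sale("001", "SI")
insert_sale("002", "NO")
rows = conn.execute("SELECT code, sold FROM sales ORDER BY code").fetchall()
print(rows)
```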
|
I tried this code:
```
insert into sales(code, sold)
(
SELECT '001',CASE
WHEN 'FILLHERE' ='SI' THEN 't'
ELSE 'f' END
);
```
|
Add " SI or NO" in a Boolean field
|
[
"",
"sql",
"database",
"boolean",
""
] |
I'm trying to filter a table to show only the distinct values of a column using only the WHERE clause of an SQL statement.
Below is an example table:
```
Person ID | Name | City
----------------------------
1 person1 city1
2 person2 city1
3 person3 city2
4 person4 city3
5 person5 city1
```
I want to get the distinct cities like the following output:
```
City
----
city1
city2
city3
```
However my application restricts me to only specify a WHERE clause.
So the basic format that I'm allowed is this:
```
select city from table1 where __________;
```
UPDATE: I've also found that the SELECT statement takes all columns (\*) rather than that single column, so I cannot use:
```
select * from table1 where 1=1 group by city;
```
|
This in a `where` clause will have the desired effect.
```
PersonId IN (SELECT MAX(PersonId) FROM table1 GROUP BY city)
```
|
Something like this should do it:
```
select *
from table1
where
PersonID IN
(
SELECT MIN(PersonID) AS MinID
FROM table1
GROUP BY City
)
```
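Since only the WHERE clause can change, the trick is to keep one representative row per city via a subquery. A minimal sketch of that idea using Python's `sqlite3` with the question's sample data (the harness itself is illustrative, not part of the answer):

```python
import sqlite3

# Build the question's sample table in memory (illustrative harness).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (PersonID INTEGER, Name TEXT, City TEXT)")
conn.executemany(
    "INSERT INTO table1 VALUES (?, ?, ?)",
    [(1, "person1", "city1"), (2, "person2", "city1"),
     (3, "person3", "city2"), (4, "person4", "city3"),
     (5, "person5", "city1")],
)

# Only the WHERE clause differs from "select city from table1":
# the subquery keeps one representative PersonID per city.
rows = conn.execute(
    """SELECT City FROM table1
       WHERE PersonID IN (SELECT MIN(PersonID) FROM table1 GROUP BY City)
       ORDER BY City"""
).fetchall()
print(rows)  # [('city1',), ('city2',), ('city3',)]
```

Using `MAX(PersonID)` instead, as in the first answer, picks a different representative row per city but yields the same set of cities.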
|
SQL filtering records by unique column using WHERE clause only
|
[
"",
"sql",
"sql-server",
""
] |
This is my **bill** table:
```
shop_id | billing_date | total
------------------------------
ABC | 2016-03-07 | 100
ABC | 2016-03-14 | 200
DEF | 2016-03-07 | 300
DEF | 2016-03-14 | 100
GHI | 2016-03-07 | 30
```
I want to get one line per shop, with **average total per week**, the **current month total**, and the **average total per month**. This final data must look like this:
```
shop | weekly avg. | current month total | monthly avg.
-------------------------------------------------------
ABC | 150 | 300 | 300
DEF | 200 | 500 | 500
GHI | 30 | 30 | 30
```
My question is: Is it possible to get this informations directly from an SQL query?
|
You can try it this way for the current year, using MySQL's `WEEK` and `MONTH` functions, since your table's entries are week-wise:
[SQLFIDDLE](http://sqlfiddle.com/#!9/e7cab/6)
```
select shop_id,
       (sum(total)/(WEEK(MAX(bdate)) - WEEK(MIN(bdate)) + 1)) as weekly_avg,
       (sum(total)/(MONTH(MAX(bdate)) - MONTH(MIN(bdate)) + 1)) as monthly_avg,
       sum(case when MONTH(bdate) = MONTH(NOW()) then total else 0 end) as current_month_total
from bill
where YEAR(bdate) = 2016
group by shop_id
```
For number of year greater than one
[SQL FIDDLE](http://sqlfiddle.com/#!9/3a5f2/1)
```
select shop_id,
sum(total)/(12 * (YEAR(MAX(bdate)) - YEAR(MIN(bdate))) + (MONTH(MAX(bdate)) - MONTH(MIN(bdate)))+1) as month_avg,
sum(total)/(7 * (YEAR(MAX(bdate)) - YEAR(MIN(bdate))) + (WEEK(MAX(bdate)) - WEEK(MIN(bdate)))+1) as weekly_avg,
sum( case when YEAR(bdate) = YEAR(bdate) and MONTH(bdate) = MONTH(NOW()) then total else 0 end) as current_month_total from bill group by shop_id
```
|
Is this the sort of thing you are after?
```
SELECT DISTINCT(bill.shop_id),wk as WeeklyTotal,mt as MonthlyTotal,ma as MonthlyAverage
FROM bill
JOIN (SELECT AVG(total) wk,shop_id
FROM bill
WHERE YEAR(billing_date) = 2016 AND MONTH(billing_date) = 1
GROUP BY shop_id) as weekly ON bill.shop_id = weekly.shop_id
JOIN (SELECT SUM(total) mt,shop_id
FROM bill
WHERE YEAR(billing_date) = 2016 AND MONTH(billing_date) = 1
GROUP BY CONCAT(shop_id,MONTH(billing_date))
) month_total ON month_total.shop_id = bill.shop_id
JOIN (SELECT AVG(total) ma,shop_id
FROM bill
WHERE YEAR(billing_date) = 2016 AND MONTH(billing_date) = 1
GROUP BY CONCAT(shop_id,MONTH(billing_date))
) month_avg ON month_avg.shop_id = bill.shop_id
```
|
MySQL : weekly and monthly average
|
[
"",
"mysql",
"sql",
"database",
""
] |
For some work, the requirement is to retain the table and database structure while truncating the data in multiple tables in one go.
Since `TRUNCATE table_name` only truncates one table at a time, is there a way to truncate multiple tables? Help would be appreciated.
|
The easiest way may be the following:
If you have foreign key constraints, temporarily turn the checks `OFF`:
`SET FOREIGN_KEY_CHECKS=0;`
To set it `ON` again:
`SET FOREIGN_KEY_CHECKS=1;`
---
**To truncate all tables under a particular database**
```
SELECT
CONCAT('TRUNCATE TABLE ',TABLE_NAME,';') AS truncateCommand
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'YOUR_DATABASE_NAME_HERE';
```
**To truncate all tables out of all databases**
```
SELECT
CONCAT('TRUNCATE TABLE ',TABLE_NAME,';') AS truncateCommand
FROM information_schema.TABLES;
```
---
And you will get output like that:
```
TRUNCATE TABLE your_table_1;
TRUNCATE TABLE your_table_2;
TRUNCATE TABLE your_table_3;
TRUNCATE TABLE your_table_4;
TRUNCATE TABLE your_table_5;
TRUNCATE TABLE your_table_6;
TRUNCATE TABLE your_table_7;
TRUNCATE TABLE your_table_8;
.
.
etc..
```
Now grab these truncate commands and execute all.
This approach avoids the hassle of writing a stored procedure, and is fine as long as it's a one-time job.
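The generate-then-execute pattern above can be sketched end to end. This uses Python's `sqlite3` for illustration only: SQLite's `sqlite_master` stands in for `information_schema.TABLES`, and `DELETE FROM` stands in for `TRUNCATE TABLE`, which SQLite lacks:

```python
import sqlite3

# Two toy tables with a row each (illustrative harness).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x INTEGER); CREATE TABLE b (x INTEGER);
INSERT INTO a VALUES (1); INSERT INTO b VALUES (2);
""")

# Step 1: generate one statement per table from the catalog.
stmts = [f"DELETE FROM {name};" for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# Step 2: grab the generated statements and execute them all.
for s in stmts:
    conn.execute(s)

remaining = conn.execute(
    "SELECT (SELECT COUNT(*) FROM a) + (SELECT COUNT(*) FROM b)"
).fetchone()[0]
print(remaining)  # 0
```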
|
Do you have relationships between your tables? If so, you cannot truncate them. However, to remove all data and reseed identity columns, use this (SQL Server) query:
```
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
--Disable all triggers
EXEC sp_MSForEachTable 'ALTER TABLE ? DISABLE TRIGGER ALL'
--Delete
-- Then reseed if Table has identity column
EXEC sp_MSForEachTable 'IF (''?'' NOT LIKE ''%CNF_tb%'')
BEGIN
DELETE from ?
if @@RowCount >0
BEGIN
IF (Select OBJECTPROPERTY(OBJECT_ID(''?''),''Tablehasidentity'')
FROM Sys.Tables
Where object_id=OBJECT_ID(''?''))=1
BEGIN
DBCC CHECKIDENT(''?'' ,Reseed,0)
END
END
END
ELSE
BEGIN
Select ''?'' as ''Not Changed''
END'
--Enable all constraints
EXEC sp_MSForEachTable 'ALTER TABLE ? With Check CHECK CONSTRAINT ALL'
--Enable all triggers
EXEC sp_MSForEachTable 'ALTER TABLE ? ENABLE TRIGGER ALL'
```
|
TRUNCATE data from all tables in a Database
|
[
"",
"mysql",
"sql",
"database",
"truncate",
""
] |
I need to see the complete DDL. I need a SQL query for this.
If I use `DESC` or `DESCRIBE` in Oracle, it shows me all columns and their types.
But I also need all `Constraints`, `Triggers` acting on it, etc., just like how Oracle SQL Developer shows the structure if you click on the `SQL` tab.
What should be my query command?
|
The table definition with constraints:
```
select DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE' )
from dual;
```
The triggers on table:
```
select DBMS_METADATA.GET_DDL('TRIGGER',trigger_name) from user_triggers where table_name='MY_TABLE';
```
|
For constraints:
```
Select constraint_name from all_constraints where table_name='Table_Name';
```
For triggers:
```
select trigger_name from all_triggers where table_name='Table_Name';
```
|
I need to check Table structure with triggers, constraints
|
[
"",
"sql",
"oracle",
"dll",
"triggers",
""
] |
I have a query that scans for people with same name but different ids.
Table structure is Staff(name,id)
What I want to find is people who share the same name but with different id(they are different people).
I do happen to have two people with same name & diff id.
```
+---------+-----+
| NAME | ID |
+---------+-----+
| John S. | 138 |
| John S. | 491 |
+---------+-----+
```
so far I have
```
select a.name, b.name, a.id, b.id
from staff a, staff b
where a.name = b.name and a.id != b.id
```
But when I run this code it gives the output twice, which are
```
+---------+-----+
| NAME | ID |
+---------+-----+
| John S. | 138 |
| John S. | 491 |
| John S. | 491 |
| John S. | 138 |
+---------+-----+
```
I know why this happens: both rows satisfy the checking condition. But is there any way I can suppress pairs that have already been output? I could run a select with WHERE ROWNUM <= 2, but that won't work once more people share a name.
Thanks!
|
If you want only one result you can do something like this:
```
select a.name, b.name, a.id, b.id
from staff a, staff b
where a.name = b.name and a.id > b.id
```
This way, only one of the combinations between them will answer the join condition, therefore , only one will be returned
BTW - please avoid the use of implicit join syntax's(comma separated) . Use only the **explicit** syntax of join, like this:
```
SELECT a.name, b.name, a.id, b.id
FROM staff a
INNER JOIN staff b
ON(a.name = b.name and a.id > b.id)
```
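The `a.id > b.id` trick can be checked quickly. A sketch with Python's `sqlite3` using the question's data (the extra 'Ann K.' row is hypothetical, added to show non-duplicates are excluded):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, id INTEGER)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [("John S.", 138), ("John S.", 491), ("Ann K.", 7)])

# a.id > b.id keeps exactly one ordering of each duplicate pair.
pairs = conn.execute(
    """SELECT a.name, a.id, b.id
       FROM staff a
       INNER JOIN staff b ON a.name = b.name AND a.id > b.id"""
).fetchall()
print(pairs)  # [('John S.', 491, 138)]
```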
|
I don't think you need a `JOIN` for this: *I want to find people who share the same name but have different ids*.
Use a `HAVING` clause to filter the names that have more than one `ID`:
```
select NAME
from yourtable
Group by name
having count(distinct id)> 1
```
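The same data confirms the `HAVING` approach, again sketched with Python's `sqlite3` (the extra 'Ann K.' row is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, id INTEGER)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [("John S.", 138), ("John S.", 491), ("Ann K.", 7)])

# Names with more than one distinct id: each appears once, no self-join.
dupes = conn.execute(
    """SELECT name FROM staff
       GROUP BY name
       HAVING COUNT(DISTINCT id) > 1"""
).fetchall()
print(dupes)  # [('John S.',)]
```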
|
Selecting records with the same name give duplicate results
|
[
"",
"sql",
"oracle",
""
] |
I have a table for packages, where each package consists of Number of Days, Days included [any day(s) Sunday, Monday, ... ]
```
Package | Duration | Days Included
-------------------------------------------
Package 1 | 10 days | '1,2,3' [Sun, Mon, Tue]
Package 2 | 15 days | '4,5,6,7' [Wed, Thu, Fri, Sat]
Package 3 | 30 days | '1,2,3,4,5,6,7' [Sun, Mon, Tue, Wed, Thu, Fri, Sat]
etc
```
When customer selects any package (selecting the start date), I need to calculate the expiry date of that package based on the no. of days and days included in that package.
I need to create a function in which will return the **Expiry Date** providing
the following 3 inputs.
1. Start date
2. Number of days
3. Days to be included
Example:
> For Package 1, starting from 13-Mar-2016, Correct End Date should be:
> 03-Apr-2016
> (10 days would be 13,14,15,20,21,22,27,28,29 March, 03
> Apr)
```
DECLARE @StartDate DATETIME
DECLARE @NoDays INT
DECLARE @EndDate DATETIME
SET @EndDate = DATEADD(DD, @NoDays, @StartDate)
```
So far I have done this, but it is including all 7 days.
Can anybody help how only the specific days can be included to get the correct expiry date?
|
You can do it with a numbers table and a calendar table. I have created some test data which uses a normalized version of your package-days table.
```
---package table
create table packagetable
(
id int,
maxduration int
)
insert into packagetable
select 1,10
----storing number of days in normalized way
create table packagedays
(
pkgid int,
pkgdays int
)
insert into packagedays
select 1,1
union all
select 1,2
create function dbo.getexpirydate
(
@packageno int,
@dt datetime
)
returns datetime
as
begin
declare @expiry datetime
;with cte
as
(
select date,row_number() over ( order by date) as rn from dbo.calendar
where wkdno in (select pkgdays from packagedays where pkgid=@packageno ) and date>=@dt
)
select @expiry= max(Date)+1--after last date of offer add +1 to get next day as expiry date
from cte
where rn=(select maxduration from packagetable where id=@packageno)
return @expiry
end
```
If you don't want to alter the days-included column into a normalized version, you might have to use a tally (string-splitting) function, which does the same thing, and add it to the CTE.
You can see calendar table [here](http://www.sqlservercentral.com/articles/T-SQL/70482/)
|
```
DECLARE @StartDate DATETIME
DECLARE @NoDays INT
DECLARE @DaysIncluded VARCHAR(20)
DECLARE @EndDate DATETIME, @LOOP INT, @Count int
SET @StartDate = getdate()
SET @NoDays = 10
SET @DaysIncluded = '1,2'
SET @LOOP = @NoDays
SET @EndDate = @StartDate
WHILE (@LOOP > 0)
BEGIN
SET @EndDate = DATEADD(DD, 1, @EndDate)
print @EndDate
Select @Count = Count(1) from dbo.splitstring(@DaysIncluded) where name in (DATEPART(dw,@EndDate))
if(@Count > 0)
BEGIN
print 'day added'
SET @LOOP = @LOOP - 1
END
END
```
if you want the function dbo.splitstring, please click [here](https://stackoverflow.com/questions/10914576/t-sql-split-string)
|
Find End Date based on Start date and Duration - T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"date",
""
] |
I've a query :
```
select C.ChapterID, C.ChapterName, TA.TestAllotmentID,
T.TestName, S.StudentFname, B.BatchName, TA.UpdatedDate
from TransTestAllotment TA,
MstStudent S,
MstBatchDetails B,
MstTest T,
MstChapter C
where TA.StudentID = 47
and TA.BatchID = 10
and T.TestID = TA.TestID
and S.StudentID = TA.StudentID
and B.BatchID = TA.BatchID
and T.ChapterID = C.ChapterID
and TA.IsAttempted = 'True'
and TA.IsEvaluated = 'True'
order by TA.UpdatedDate desc
```
It returns result as below.
```
+-----------+-----------------------+-----------------+-------------------------+
| ChapterID | ChapterName | TestAllotmentID | UpdatedDate |
+-----------+-----------------------+-----------------+-------------------------+
| 52 | HTML Basics - Part II | 37 | 2016-03-14 13:12:53.000 |
| 52 | HTML Basics - Part II | 36 | 2016-03-14 13:11:59.000 |
| 52 | HTML Basics - Part II | 35 | 2016-03-14 13:11:35.000 |
| 50 | HTML Basics - Part I | 28 | 2016-03-11 18:45:51.000 |
| 50 | HTML Basics - Part I | 27 | 2016-03-11 18:45:05.000 |
| 50 | HTML Basics - Part I | 26 | 2016-03-11 18:43:48.000 |
| 50 | HTML Basics - Part I | 24 | 2016-03-11 18:42:37.000 |
| 50 | HTML Basics - Part I | 22 | 2016-03-11 18:41:07.000 |
+-----------+-----------------------+-----------------+-------------------------+
```
Now I want to retrieve distinct record based on chapter ID order by Updated date. The final output that I am expecting is:
```
+-----------+-----------------------+-----------------+-------------------------+
| ChapterID | ChapterName | TestAllotmentID | UpdatedDate |
+-----------+-----------------------+-----------------+-------------------------+
| 52 | HTML Basics - Part II | 37 | 2016-03-14 13:12:53.000 |
| 50 | HTML Basics - Part I | 28 | 2016-03-11 18:45:51.000 |
+-----------+-----------------------+-----------------+-------------------------+
```
How do I write a SQL query to get this output?
|
You should fix your query to use proper explicit `JOIN` syntax. But the answer to your question is to use window functions:
```
with q as (
<your query here>
)
select q.*
from (select q.*,
row_number() over (partition by chapterid order by updateddate desc) as seqnum
from q
) q
where seqnum = 1;
```
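The `row_number()` pattern can be exercised against a trimmed version of the intermediate result shown in the question. A sketch with Python's `sqlite3` (window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE results (ChapterID INTEGER, TestAllotmentID INTEGER, "
    "UpdatedDate TEXT)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", [
    (52, 37, "2016-03-14 13:12:53"), (52, 36, "2016-03-14 13:11:59"),
    (50, 28, "2016-03-11 18:45:51"), (50, 27, "2016-03-11 18:45:05"),
])

# seqnum = 1 keeps only the newest row within each ChapterID partition.
latest = conn.execute(
    """SELECT ChapterID, TestAllotmentID FROM (
         SELECT ChapterID, TestAllotmentID,
                ROW_NUMBER() OVER (PARTITION BY ChapterID
                                   ORDER BY UpdatedDate DESC) AS seqnum
         FROM results) q
       WHERE seqnum = 1
       ORDER BY ChapterID DESC"""
).fetchall()
print(latest)  # [(52, 37), (50, 28)]
```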
|
Thank you Gordon Linoff. My final query is as below.
```
Select Y.* from (select X.*, row_number() over (partition by chapterid order by updateddate desc) as SequencNo from
(Select C.ChapterID,C.ChapterName,TA.TestAllotmentID, T.TestName,
S.StudentFname,B.BatchName,TA.UpdatedDate
from TransTestAllotment TA, MstStudent S, MstBatchDetails B,
MstTest T,MstChapter C
where TA.StudentID=47 and TA.BatchID=10 and
T.TestID=TA.TestID and S.StudentID=TA.StudentID and
B.BatchID=TA.BatchID and T.ChapterID=C.ChapterID and
TA.IsAttempted='True' and TA.IsEvaluated='True') X) Y Where SequencNo=1
```
|
Retrive distinct record based on ID order by Date
|
[
"",
"sql",
"sql-server",
""
] |
I have a SQL Server query which takes data from Oracle 8i server using linked server connection. My question here is completely on Oracle SQL.
There is a StyleSizes table. One style can have one or more sizes associated with it. I need to find any one Style\_CD which has more than one Size\_CD. This can be achieved by GROUP BY and HAVING. But I need only one such style. The volume of the table is so big that I cannot wait till the query execution is complete – so I need only one style.
How to get only one record quickly after applying GROUP BY and HAVING clauses in Oracle?
```
SELECT STYLE_CD
FROM OPENQUERY(LinkedORAServer,
'SELECT STYLE_CD
FROM DA.StyleSize M
GROUP BY STYLE_CD
HAVING COUNT(SIZE_CD) > 1
')
```
**FURTHER READING**
1. [Ask Tom - On ROWNUM and Limiting Results](http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html)
2. [Limit - Comparison of different SQL implementations](http://troels.arvin.dk/db/rdbms/#select-limit)
3. [How do I limit the number of rows returned by an Oracle query after ordering?](https://stackoverflow.com/questions/470542/how-do-i-limit-the-number-of-rows-returned-by-an-oracle-query-after-ordering/26051830#26051830)
4. [Row Limiting Clause for Top-N Queries in Oracle Database 12c Release 1 (12.1)](https://oracle-base.com/articles/12c/row-limiting-clause-for-top-n-queries-12cr1)
5. [SQL Server - Pagination with OFFSET / FETCH : A better way](http://sqlperformance.com/2015/01/t-sql-queries/pagination-with-offset-fetch)
|
If you want one style that has more than one size, quickly, then you can use `exists`:
```
select m.*
from da.stylesize m
where exists (select 1 from da.stylesize m2 where m2.style_cd = m.style_cd and m2.size_cd <> m.size_cd) and
rownum = 1;
```
Then, you want to be sure you have an index on `da.stylesize(style_cd, size_cd)`.
With this query, you don't even need to do the aggregation, which should be a big savings.
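A sketch of the `EXISTS` idea with Python's `sqlite3`, where `LIMIT 1` plays the role of Oracle's `ROWNUM = 1` (styles 'A' and 'B' are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stylesize (style_cd TEXT, size_cd TEXT)")
conn.executemany("INSERT INTO stylesize VALUES (?, ?)",
                 [("A", "S"), ("B", "S"), ("B", "M")])

# The scan can stop at the first row that has a sibling with a
# different size; no GROUP BY aggregation is needed.
row = conn.execute(
    """SELECT m.style_cd FROM stylesize m
       WHERE EXISTS (SELECT 1 FROM stylesize m2
                     WHERE m2.style_cd = m.style_cd
                       AND m2.size_cd <> m.size_cd)
       LIMIT 1"""
).fetchone()
print(row)  # ('B',)
```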
|
You should use ROWNUM, which is similar to MySQL's LIMIT and SQL Server's TOP:
```
SELECT STYLE_CD
FROM OPENQUERY(LinkedORAServer,
'SELECT STYLE_CD FROM(SELECT STYLE_CD
FROM DA.StyleSize M
GROUP BY STYLE_CD
HAVING COUNT(SIZE_CD) > 1)
WHERE rownum = 1
')
```
|
First record which meets GROUP BY and HAVING clauses
|
[
"",
"sql",
"oracle",
""
] |
I need to create a table with a variable name.
Here's my code; I don't know why it doesn't work.
```
BEGIN
SET @tablename = tablename;
SET @sql_text = concat('CREATE TABLE ',@tablename,' (ID INT(11) NOT NULL, team0 DOUBLE NOT NULL, team1 DOUBLE NOT NULL)');
PREPARE stmt FROM @sql_text;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END
```
And here is the error:
> Procedure execution failed
>
> 1054 - Unknown column 'TestTableName' in 'field list'
|
Wrap `tablename` with `'` to indicate that it is a string literal and not an identifier.
```
BEGIN
SET @tablename = 'tablename';
SET @sql_text = concat('CREATE TABLE ',@tablename,' (ID INT(11) NOT NULL, team0 DOUBLE NOT NULL, team1 DOUBLE NOT NULL)');
PREPARE stmt FROM @sql_text;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END
```
`SqlFiddleDemo`
And please read [CREATE TABLE @tbl](http://www.sommarskog.se/dynamic_sql.html#Cre_tbl) because creating tables at runtime could indicate poor design.
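The underlying point, that identifiers cannot be bound as parameters and must be validated before being spliced into the SQL text, holds outside stored procedures too. A hypothetical sketch in Python with `sqlite3` (`create_score_table` and its validation are illustrative, not from the answer):

```python
import sqlite3

def create_score_table(conn, tablename):
    # Identifiers can't be passed as ? placeholders, so validate the
    # name before splicing it into the statement (crude whitelist).
    if not tablename.isidentifier():
        raise ValueError("bad table name")
    conn.execute(
        f"CREATE TABLE {tablename} "
        "(ID INTEGER NOT NULL, team0 REAL NOT NULL, team1 REAL NOT NULL)"
    )

conn = sqlite3.connect(":memory:")
create_score_table(conn, "TestTableName")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['TestTableName']
```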
|
```
BEGIN
SET @tablename = 'tablename';
SET @sql_text = concat('CREATE TABLE ',@tablename,' (ID INT(11) NOT NULL, team0 DOUBLE NOT NULL, team1 DOUBLE NOT NULL)');
PREPARE stmt FROM @sql_text;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END
```
|
SQL Procedure CREATE Table with variable Tablename
|
[
"",
"mysql",
"sql",
"procedure",
""
] |
I am trying to learn how to optimize SQL statements, and I was wondering if it's possible to estimate what might be making my queries slow just by looking at the execution plan.
```
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: <derived2>
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 382856
Extra: Using where; Using temporary; Using filesort
*************************** 2. row ***************************
id: 1
select_type: PRIMARY
table: rf
type: ref
possible_keys: rec_id
key: rec_id
key_len: 4
ref: rs.id
rows: 7
Extra: Using index condition
*************************** 3. row ***************************
id: 2
select_type: DERIVED
table: f
type: range
possible_keys: facet_name_and_value,rec_id
key: facet_name_and_value
key_len: 309
ref: NULL
rows: 382856
Extra: Using index condition; Using where; Using temporary; Using filesort
*************************** 4. row ***************************
id: 2
select_type: DERIVED
table: r
type: ref
possible_keys: record_id
key: record_id
key_len: 9
ref: sqlse_test_crescentbconflate.f.rec_id
rows: 1
Extra: Using where; Using index
```
Just by looking at the execution plan I can see that I am using too many joins and the data is too big since SQL is using filesort, but I might be wrong.
|
No, it's not really possible to diagnose the performance issue from just the EXPLAIN output.
But the output does reveal that there's a view query that's returning (an estimated) 384,000 rows. We can't tell if that's a stored view, or an inline view. But we can see that that results from that query are being materialized into a table (MySQL calls it a "derived table"), and then the outer query is running against that. The overhead for that can be considerable.
What we can't tell if it's possible to get the same result without the view, to flatten the query. And if that's not possible, whether there are any predicates on the outer query that could be pushed down into the view.
A "Using filesort" isn't necessarily a bad thing. But that operation can become expensive for really large sets. So we do want to avoid unnecessary sort operations. (What we can't tell from the EXPLAIN output is whether it would be possible to avoid those sort operations.)
And if the query uses a "covering index" then the query is satisfied from the index pages, without needing to lookup/visit pages in the underlying table, which means less work to do.
Also, make sure the predicates are in a form that enables effective use of an index. That means having conditions on bare columns, not wrapping the columns in functions. e.g.
We want to avoid writing a condition like this:
```
where DATE_FORMAT(t.dt,'%Y-%m') = '2016-01'
```
when the same thing can be expressed like this:
```
where t.dt >= '2016-01-01' and t.dt < '2016-02-01'
```
With the former, MySQL has to evaluate the DATE\_FORMAT function for every row in the table, and the compare the return from the function. With the latter form, MySQL could use a "range scan" operation on an index with `dt` as the leading column. A range scan operation has the potential to eliminate vast swaths of rows very efficiently, without actually needing to examine the rows.
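The equivalence of the two predicate forms (and the index-friendliness of the second) can be sanity-checked with Python's `sqlite3`, using `strftime` in place of MySQL's `DATE_FORMAT`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (dt TEXT)")
conn.execute("CREATE INDEX idx_dt ON t(dt)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("2016-01-15",), ("2016-01-31",), ("2016-02-01",)])

# Function-wrapped predicate: must be evaluated per row.
wrapped = conn.execute(
    "SELECT COUNT(*) FROM t WHERE strftime('%Y-%m', dt) = '2016-01'"
).fetchone()[0]

# Bare-column range predicate: eligible for an index range scan.
ranged = conn.execute(
    "SELECT COUNT(*) FROM t WHERE dt >= '2016-01-01' AND dt < '2016-02-01'"
).fetchone()[0]
print(wrapped, ranged)  # 2 2

# On most builds, the plan for the range form reports a SEARCH using
# the index, while the wrapped form reports a full SCAN.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t "
    "WHERE dt >= '2016-01-01' AND dt < '2016-02-01'"
).fetchall()
print(plan)
```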
---
To summarize, the biggest performance improvements would likely come from
* avoiding creating a derived table (no view definitions)
* pushing predicates into view definitions (where view definitions can't be avoided)
* avoiding unnecessary sort operations
* avoiding unnecessary joins
* writing predicates in a form that can make use of suitable indexes
* creating suitable indexes, covering indexes where appropriate
|
I'd look at the `extra` field in the execution plan, and then examine your query and your database schema to find ways to improve performance.
`using temporary` means a temporary table was used, which may slow down the query. Furthermore, temporary tables may end up being written to the disk (and not stored in RAM, which the server typically tries to do if it can) if they are too large.
> According to the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/internal-temporary-tables.html), here are some reasons
> temporary tables are created:
>
> * Evaluation of UNION statements.
> * Evaluation of some views, such those that use the TEMPTABLE algorithm, UNION, or aggregation.
> * Evaluation of statements that contain an ORDER BY clause and a different GROUP BY clause, or for which the ORDER BY or GROUP BY
> contains columns from tables other than the first table in the join
> queue.
> * Evaluation of DISTINCT combined with ORDER BY may require a temporary table.
> * For queries that use the SQL\_SMALL\_RESULT option, MySQL uses an in-memory temporary table, unless the query also contains elements
> (described later) that require on-disk storage.
> * Evaluation of multiple-table UPDATE statements.
> * Evaluation of GROUP\_CONCAT() or COUNT(DISTINCT) expressions.
Then there's `using filesort`, which means that a sort was performed which could not be done with existing indexes. This could be no big deal, but you should check what fields are being sorted on and where your indexes are and make sure you're not giving MySQL too much work to do.
|
Is it possible to see why my queries are so slow just by seeing the execution plan
|
[
"",
"mysql",
"sql",
""
] |
I have a sql query
```
Select * from `products`
where (`product_company`='mad over donuts' OR
`product_company`= 'dunkin donuts')
AND (`flavour`='vanilla')
AND (355 < `price` < 561 ) ;
```
This fetches elements with a price greater than one fixed value and less than another, along with some other constraints. However, when I run this SQL query it gives me:
> > 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"Select \* from `products` where (`product_company`='mad over donuts' OR `product' at line 1.
I checked my SQL syntax and I don't see the problem. Can anyone help, please?
|
Change
```
355 < price < 561)
```
to
```
price between 356 and 560
```
to correct the problem (BETWEEN includes its endpoints, so the range must shrink by one on each side)
i.e.
```
Select * from `products`
where (`product_company`='mad over donuts' OR
`product_company`= 'dunkin donuts')
AND (`flavour`='vanilla')
AND price between 356 and 560
```
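The inclusivity of `BETWEEN` is easy to verify. A sketch with Python's `sqlite3` using boundary prices:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (price INTEGER)")
conn.executemany("INSERT INTO products VALUES (?)",
                 [(355,), (356,), (560,), (561,)])

# Strict comparison excludes both boundary values...
strict = conn.execute(
    "SELECT price FROM products WHERE price > 355 AND price < 561 "
    "ORDER BY price").fetchall()

# ...so BETWEEN needs the range shrunk by one on each side to match.
between = conn.execute(
    "SELECT price FROM products WHERE price BETWEEN 356 AND 560 "
    "ORDER BY price").fetchall()
print(strict, between)  # [(356,), (560,)] [(356,), (560,)]
```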
|
In place of this:
```
Select * from `products`
where (`product_company`='mad over donuts' OR
`product_company`= 'dunkin donuts')
AND (`flavour`='vanilla')
AND (355 < `price` < 561 )";
```
" is causing problem
Try this:
```
Select * from `products`
where (`product_company`='mad over donuts' OR
`product_company`= 'dunkin donuts')
AND (`flavour`='vanilla')
AND (`price`BETWEEN 355 and 561 );
```
|
Sql Error 1064,Syntax Error
|
[
"",
"mysql",
"sql",
""
] |
I have the following problem: I should print the count of sunk ships for all countries. So I wrote the following:
```
use ships;
select CLASSES.CLASS, COUNT(*)
from CLASSES
left join SHIPS on CLASSES.CLASS = SHIPS.CLASS
left join OUTCOMES on NAME = SHIP
where RESULT = 'sunk'
group by CLASSES.CLASS;
```
but if some country doesn't have a sunk ship, it will not appear in the result. I want every country to be in the result, showing a count of 0 if it has no sunk ships.
How can I achieve that?
Thanks in advance.
|
The proper way to do this is using `LEFT JOIN`, but with the condition in the `ON` clause:
```
select c.CLASS, COUNT(o.RESULT)
from CLASSES c left join
SHIPS s
on c.CLASS = s.CLASS left join
OUTCOMES o
on s.NAME = o.SHIP and
o.RESULT = 'sunk'
group by c.CLASS;
```
Notice that the table aliases also make the query easier to write and to read.
Also, I have to guess what tables the columns belong in. When you use qualified column names from the beginning, then there is no guessing.
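A minimal sketch of why the filter belongs in the `ON` clause, using Python's `sqlite3` with hypothetical ship data: moving `o.result = 'sunk'` into `WHERE` would drop the zero-count group entirely, while `COUNT(o.result)` ignores the NULLs that the outer join produces:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE classes (class TEXT);
CREATE TABLE ships   (name TEXT, class TEXT);
CREATE TABLE outcomes(ship TEXT, result TEXT);
INSERT INTO classes VALUES ('Iowa'), ('Bismarck');
INSERT INTO ships VALUES ('Iowa', 'Iowa'), ('Bismarck', 'Bismarck');
INSERT INTO outcomes VALUES ('Bismarck', 'sunk');
""")

# Condition in ON keeps unmatched classes; COUNT(o.result) skips NULLs.
rows = conn.execute(
    """SELECT c.class, COUNT(o.result)
       FROM classes c
       LEFT JOIN ships s ON c.class = s.class
       LEFT JOIN outcomes o ON s.name = o.ship AND o.result = 'sunk'
       GROUP BY c.class
       ORDER BY c.class"""
).fetchall()
print(rows)  # [('Bismarck', 1), ('Iowa', 0)]
```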
|
It depends on the SQL dialect you use. If you use Informix as a database, you can use nvl() function:
```
SELECT <whatever>, nvl(ships, 0) AS ship_num FROM SHIPS;
```
Using Sqlite3 you can use coalesce() function:
```
SELECT <whatever>, coalesce(ships, 0) AS ship_num FROM SHIPS;
```
etc. All dialects of SQL will have one of such functions but they all named differently. But the basic principle is the same - if the value I get is NULL, use the provided value instead - zero in your case, but it can be anything, even column value or a string "Not specified" etc. If you provide the database you're querying against, I can give you the function to use.
|
SQL - show all rows in table using group by and where
|
[
"",
"sql",
""
] |
I am trying to concatenate 2 values from the same column but with different conditions. Here is my sample table:
```
----------------------------------
| user_id | key | value |
----------------------------------
1 firstname maria
1 lastname enuole
2 firstname chris
2 lastname magnolia
```
I want to concatenate values from the value field where the key is firstname or lastname, for the same user\_id. Sorry, it's really hard to explain.
I would like a result like this:
```
--------------------------
| user_id | Name |
--------------------------
1 maria enuole
2 chris magnolia
```
Is there a way to do this? Thanks for the feedback.
|
Another way, using conditional aggregation, though I prefer the `group_concat` approach:
```
select user_id, concat(F_name, ' ', L_name)
From
(
select user_id,
max(case when key = 'firstname' then value end) F_name,
max(case when key = 'lastname' then value end) as L_name
From Yourtable
Group by user_id
) A
```
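A sketch of the conditional-aggregation pivot with Python's `sqlite3`; note SQLite concatenates with `||` rather than `concat()`, and `key` is quoted because it is a reserved word in some dialects:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t (user_id INTEGER, "key" TEXT, value TEXT)')
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "firstname", "maria"), (1, "lastname", "enuole"),
    (2, "firstname", "chris"), (2, "lastname", "magnolia"),
])

# Each MAX(CASE ...) picks out one key's value within the group.
rows = conn.execute(
    """SELECT user_id,
              MAX(CASE WHEN "key" = 'firstname' THEN value END) || ' ' ||
              MAX(CASE WHEN "key" = 'lastname'  THEN value END) AS name
       FROM t GROUP BY user_id ORDER BY user_id"""
).fetchall()
print(rows)  # [(1, 'maria enuole'), (2, 'chris magnolia')]
```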
|
You can do this using `group_concat()` and `order by`. In your case, the solution is pretty simple:
```
select user_id,
group_concat(value separator ' ' order by key) as name
from t
where key in ('firstname', 'lastname')
group by user_id;
```
Or, use the `join` approach:
```
select tfirst.user_id, concat_ws(' ', tfirst.value, tlast.value) as name
from t tfirst join
t tlast
on tfirst.user_id = tlast.user_id and
tfirst.key = 'firstname' and
tlast.key = 'lastname';
```
|
Concatenate 2 values from the same column with different conditions
|
[
"",
"mysql",
"sql",
"join",
""
] |
Let's assume I have table1:
```
id Column2 Column3 Column4
1 YES NO NO
2 NO YES YES
3 NO YES NO
4 YES YES NO
5 NO NO NO
```
and I have table2:
```
id Column5
1 NO
2 YES
3 NO
6 YES
7 YES
```
I want to join them to have output like this.
```
id Column2 Column3 Column4 Column5
1 YES NO NO NO
2 NO YES YES YES
3 NO YES NO NO
4 YES YES NO null
5 NO NO NO null
6 null null null YES
7 null null null YES
```
Which type of join should I use?
I am currently using Oracle 12c
|
Well, if you are looking for the exact output, then you can use `COALESCE` along with `FULL OUTER JOIN`
```
select COALESCE(A.ID,B.ID) ID,COL1,COL2,COL3,COL4
from Table1 A full outer join Table2 B
ON A.id = B.id
order by a.id asc
```
[](https://i.stack.imgur.com/VmaMF.png)
Without `COALESCE` and not selecting columns specifically :
[](https://i.stack.imgur.com/94RJQ.png)
|
You will have to do a `full outer join`. Note the `NULL` in line 4/5 for the right table and in 6/7 for the left table.
Something like
```
SELECT *
FROM table1
FULL OUTER JOIN table2
ON table1.id=table2.id;
```
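Where `FULL OUTER JOIN` is unavailable (MySQL, or SQLite before 3.39), the same result can be emulated with a `LEFT JOIN` unioned with the anti-joined right side. A trimmed sketch with Python's `sqlite3` (one data column per table instead of the question's five):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, c2 TEXT);
CREATE TABLE t2 (id INTEGER, c5 TEXT);
INSERT INTO t1 VALUES (1, 'YES'), (4, 'NO');
INSERT INTO t2 VALUES (1, 'NO'), (6, 'YES');
""")

# LEFT JOIN covers all left rows; the UNION ALL branch adds the
# right-only rows (those with no left match), padding with NULL.
rows = conn.execute(
    """SELECT t1.id, t1.c2, t2.c5
       FROM t1 LEFT JOIN t2 ON t1.id = t2.id
       UNION ALL
       SELECT t2.id, NULL, t2.c5
       FROM t2 LEFT JOIN t1 ON t1.id = t2.id
       WHERE t1.id IS NULL
       ORDER BY 1"""
).fetchall()
print(rows)  # [(1, 'YES', 'NO'), (4, 'NO', None), (6, None, 'YES')]
```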
|
Join two tables having in final table value (or null) for every id
|
[
"",
"sql",
"oracle",
""
] |
I'm having difficulty in removing NULLS. The ISNULL statement seems to work for the dynamic result, but not the second one.
My final extract query is :
```
select itemid,title,description,cat,fibre,washing,colours,promo,
max(case when seqnum = 1 then isnull(chm_sizegrouping,'') end) as sizes_1,
max(case when seqnum = 2 then isnull(chm_sizegrouping,'') end) as sizes_2,
max(case when seqnum = 3 then chm_sizegrouping end) as sizes_3,
max(case when seqnum = 4 then chm_sizegrouping end) as sizes_4)
```
Results shown are:
```
itemid sizes_1 sizes_2
LM008 one NULL
LM009 NULL
LM010 NULL
lm011 NULL
```
Any help is much appreciated,
Thanks.
|
Try this:
```
select itemid,title,description,cat,fibre,washing,colours,promo,
isnull(max(case when seqnum = 1 then chm_sizegrouping end),'') as sizes_1,
isnull(max(case when seqnum = 2 then chm_sizegrouping end),'') as sizes_2,
........
```
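The placement matters: wrap the aggregate, not the expression inside it. A sketch with Python's `sqlite3`, using the portable `COALESCE` in place of SQL Server's `ISNULL`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (itemid TEXT, seqnum INTEGER, sizegrp TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("LM008", 1, "one"), ("LM009", 1, "small")])

# Wrapping the aggregate replaces the NULL an empty CASE group produces;
# wrapping only the CASE would still let MAX() return NULL.
rows = conn.execute(
    """SELECT itemid,
              COALESCE(MAX(CASE WHEN seqnum = 1 THEN sizegrp END), '')
                  AS sizes_1,
              COALESCE(MAX(CASE WHEN seqnum = 2 THEN sizegrp END), '')
                  AS sizes_2
       FROM t GROUP BY itemid ORDER BY itemid"""
).fetchall()
print(rows)  # [('LM008', 'one', ''), ('LM009', 'small', '')]
```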
|
Unfortunately, you have to apply the `ISNULL` function to each field that could return a `NULL` value.
|
Removing NULLS from conditional aggregation results
|
[
"",
"sql",
"pivot",
"isnull",
""
] |