| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a query with results as below:
```
 b_id | l_id |   result    | Count |  avg
------+------+-------------+-------+--------
  1   |  10  | Limited     |   2   |  66.66
  1   |  10  | Significant |   1   |  33.33
  2   |  09  | Critical    |   1   | 100.00
```
I am struggling to get a query right using a case statement as below:
```
SELECT DISTINCT ON (b_id, l_id) b_id, l_id,
(CASE
WHEN result = 'Critical' THEN 'Critical'
WHEN result = 'Significant' AND avg >= 50 THEN 'Critical'
WHEN result = 'Significant' AND result <> 'Critical' THEN 'Significant'
WHEN result = 'Medium' AND avg >= 50 THEN 'Medium'
ELSE 'Limited' END) as cr
From (sub query)
```
the results that I am getting are as below:
```
b_id| l_id | result
-----+------+----------
1 | 10 | Limited
2 | 09 | Critical
```
but what I am expecting is as below:
```
b_id| l_id | result
-----+------+----------
1 | 10 | significant
2 | 09 | Critical
```
1) If there is at least 1 Critical, then Critical.
2) If there is Significant >= 50% and no Critical, then Critical (so if there is only 1 row and it is Significant, it is 100%, hence 'Critical').
3) If there is at least 1 Significant, no Critical, and (Medium, Limited) > Significant, then Significant.
4) If Medium is >= 50% and there is no Critical or Significant, then Medium.
5) The rest will be Limited.
I need Significant rather than Limited because the highest value trumps a lower value in most cases, so Significant trumps Limited. Overall I want the CASE statement to assess the group of pairs (b_id, l_id); so in the group of pairs for 1 | 10, I need the CASE statement to assess the group and return a single result.
|
Use the `bool_or` aggregate (true when the condition holds for at least one row):
```
SELECT b_id, l_id,
       CASE WHEN bool_or(result = 'Critical' OR (result = 'Significant' AND avg >= 50)) THEN 'Critical'
            WHEN bool_or(result = 'Significant') THEN 'Significant'
            WHEN bool_or(result = 'Medium' AND avg >= 50) THEN 'Medium'
            ELSE 'Limited' END AS cr
FROM (sub query)
GROUP BY 1, 2
```
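On engines without `bool_or` (SQL Server, MySQL), the same "at least one row matches" test can be emulated with `MAX` over a 0/1 flag; a portable sketch, assuming the same subquery and column names as above:

```
SELECT b_id, l_id,
       CASE WHEN MAX(CASE WHEN result = 'Critical'
                            OR (result = 'Significant' AND avg >= 50)
                          THEN 1 ELSE 0 END) = 1 THEN 'Critical'
            WHEN MAX(CASE WHEN result = 'Significant' THEN 1 ELSE 0 END) = 1 THEN 'Significant'
            WHEN MAX(CASE WHEN result = 'Medium' AND avg >= 50 THEN 1 ELSE 0 END) = 1 THEN 'Medium'
            ELSE 'Limited' END AS cr
FROM (sub query) AS s
GROUP BY b_id, l_id
```

`MAX(flag) = 1` is true exactly when at least one row in the group sets the flag, which is what `bool_or` computes.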
|
The `WHEN result = 'Significant' AND result <> 'Critical' THEN 'Significant'` issue aside\*, all three rows qualify and then one of the first two rows gets selected because of `DISTINCT ON (b_id, l_id)`. You can't control which of the two rows will be selected; that is basically a function of how your data is organized on disk, and that may change over time.
You will never get a row with `1 | 10 | Critical` because the corresponding row from the table has `result = 'Significant'` but the `avg = 33.33`, so it cannot become `'Critical'`. If you want to favour rows with "Critical" over "Significant" over "Medium" over "Limited", you should add a specific clause for that, for example a table that assigns a numerical value to each `result` level so you can sort on it.
*\* `CASE` statements are evaluated only up to the point where a final result is obtained, so when the first sub-clause matches, remaining clauses are not evaluated.*
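If you do want to keep `DISTINCT ON`, the ranking idea can also be expressed inline: compute the row-level `cr` in a subquery (here called `ranked`, a hypothetical name) and sort each `(b_id, l_id)` group by a numeric severity so the highest level wins; a sketch:

```
SELECT DISTINCT ON (b_id, l_id) b_id, l_id, cr
FROM ranked
ORDER BY b_id, l_id,
         CASE cr WHEN 'Critical'    THEN 4
                 WHEN 'Significant' THEN 3
                 WHEN 'Medium'      THEN 2
                 ELSE 1 END DESC;
```

With the severity in the `ORDER BY`, the row `DISTINCT ON` keeps per group is no longer left to chance.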
|
case statement with pairs giving incorrect value
|
[
"",
"sql",
"postgresql",
"postgresql-9.1",
""
] |
I am trying to convert 3 columns into 2. Is there a way I can do this with the example below or a different way?
For example.
```
Year Temp Temp1
2015 5 6
```
Into:
```
Year Value
Base 5
2015 6
```
|
You could use `CROSS APPLY` and row constructor:
```
SELECT s.*
FROM t
CROSS APPLY(VALUES('Base', Temp),(CAST(Year AS NVARCHAR(100)), Temp1)
) AS s(year,value);
```
|
This is called unpivot; pivot is the exact opposite (turning 2 columns into more).
You can do this with a simple `UNION ALL`:
```
SELECT 'Base',s.temp FROM YourTable s
UNION ALL
SELECT t.year,t.temp1 FROM YourTable t
```
This relies on what you wrote in the comments: if year is constant, you can replace it with '2015'.
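With a constant year, that substitution could look like the following sketch (`YourTable` is a placeholder name):

```
SELECT 'Base' AS Year, s.Temp AS Value FROM YourTable s
UNION ALL
SELECT '2015', t.Temp1 FROM YourTable t
```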
|
SQL convert from 3 to 2 columns
|
[
"",
"sql",
"sql-server",
"unpivot",
""
] |
I've just set up a new PostgreSQL 9.5.2, and it seems that all my transactions are auto committed.
Running the following SQL:
```
CREATE TABLE test (id NUMERIC PRIMARY KEY);
INSERT INTO test (id) VALUES (1);
ROLLBACK;
```
results in a warning:
```
WARNING: there is no transaction in progress
ROLLBACK
```
on a **different** transaction, the following query:
```
SELECT * FROM test;
```
actually returns the row with `1` (as if the insert was committed).
I tried to set `autocommit` off, but it seems that this feature no longer exists (I get the `unrecognized configuration parameter` error).
What the hell is going on here?
|
autocommit in Postgres is controlled by the SQL *client*, not by the server.
In `psql` you can do this using
```
\set AUTOCOMMIT off
```
Details are in the manual:
<http://www.postgresql.org/docs/9.5/static/app-psql.html#APP-PSQL-VARIABLES>
In that case **every** statement you execute starts a transaction until you run `commit` (including `select` statements!)
Other SQL clients have other ways of enabling/disabling autocommit.
---
Alternatively you can use `begin` to start a transaction manually.
<http://www.postgresql.org/docs/current/static/sql-begin.html>
```
psql (9.5.1)
Type "help" for help.
postgres=> \set AUTOCOMMIT on
postgres=> begin;
BEGIN
postgres=> create table test (id integer);
CREATE TABLE
postgres=> insert into test values (1);
INSERT 0 1
postgres=> rollback;
ROLLBACK
postgres=> select * from test;
ERROR: relation "test" does not exist
LINE 1: select * from test;
^
postgres=>
```
|
```
\set AUTOCOMMIT off
```
The variable name must be spelled `AUTOCOMMIT`; psql variable names are case-sensitive, and quotes around the `off` value are optional.
|
Transactions are auto committed on PostgreSQL 9.5.2 with no option to change it?
|
[
"",
"sql",
"postgresql",
"transactions",
"autocommit",
""
] |
I would like to group (a,b) and (b,a) into one group (a,b) in SQL
For e.g the following set
```
SELECT 'a' AS Col1, 'b' AS Col2
UNION ALL
SELECT 'b', 'a'
UNION ALL
SELECT 'c', 'd'
UNION ALL
SELECT 'a', 'c'
UNION ALL
SELECT 'a', 'd'
UNION ALL
SELECT 'b', 'c'
UNION ALL
SELECT 'd', 'a'
```
should yield
```
Col1 | Col2
a b
c d
a c
a d
b c
```
|
Group by a case statement that selects the pairs in alphabetical order:
```
select case when col1 < col2 then col1 else col2 end as col1,
case when col1 < col2 then col2 else col1 end as col2
from (
select 'a' as col1, 'b' as col2
union all
select 'b', 'a'
union all
select 'c', 'd'
union all
select 'a', 'c'
union all
select 'a', 'd'
union all
select 'b', 'c'
union all
select 'd', 'a'
) t group by case when col1 < col2 then col1 else col2 end,
case when col1 < col2 then col2 else col1 end
```
<http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/6977>
If you simply want unique values (as opposed to a grouping for aggregation) then you can use `distinct` instead of `group by`
```
select distinct case when col1 < col2 then col1 else col2 end as col1,
case when col1 < col2 then col2 else col1 end as col2
from (
select 'a' as col1, 'b' as col2
union all
select 'b', 'a'
union all
select 'c', 'd'
union all
select 'a', 'c'
union all
select 'a', 'd'
union all
select 'b', 'c'
union all
select 'd', 'a'
) t
```
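On engines that provide the scalar `LEAST`/`GREATEST` functions (PostgreSQL, MySQL, Oracle, and SQL Server 2022+), the pair ordering collapses into a shorter form; a sketch, assuming the same derived table `t` as above:

```
select distinct least(col1, col2) as col1,
                greatest(col1, col2) as col2
from t
```

This is equivalent to the `case` expressions: `least` picks the alphabetically smaller value and `greatest` the larger one.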
|
As an alternative, you could use a `UNION` to achieve this:
```
WITH cte AS (
SELECT 'a' AS Col1, 'b' AS Col2
UNION ALL
SELECT 'b', 'a'
UNION ALL
SELECT 'c', 'd'
UNION ALL
SELECT 'a', 'c'
UNION ALL
SELECT 'a', 'd'
UNION ALL
SELECT 'b', 'c'
UNION ALL
SELECT 'd', 'a')
SELECT col1, col2 FROM cte WHERE col1 < col2 OR col1 IS NULL
UNION
SELECT col2, col1 FROM cte WHERE col1 >= col2 OR col2 IS NULL
ORDER BY 1, 2
```
[SQL fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/7003)
Note that a `UNION` removes duplicates.
If you don't have `NULL` values, you can of course omit the `OR` part in the `WHERE` clauses.
|
Grouping of pairs in sql
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I can't think of a way to speed this up. It's doing a table scan but I kind of have to because I need to update ALL records...
The problem is that this table has MILLIONS of records... like around 30 million.
This is taking about 50 minutes to run. Anyone have any tips on how I can improve this?
```
update A
set A.product_dollar_amt = round(A.product_dollar_amt, 2),
A.product_local_amt = round(A.product_local_amt, 2),
A.product_trans_amt = round(A.product_trans_amt, 2)
from dbo.table A
```
The table is currently a heap (no clustered index) because it isn't used anywhere else... not sure if creating a clustered index would improve anything here.
|
Here are three options.
The first mentioned by Randy is to do the work in batches.
The second method is to dump the results into a temporary table and recreate the original table:
```
select . . . ,
product_dollar_amt = round(A.product_dollar_amt, 2),
product_local_amt = round(A.product_local_amt, 2),
product_trans_amt = round(A.product_trans_amt, 2)
into a_temp
from a;
drop table a;
exec sp_rename 'a_temp', 'a';
```
Note: This is not guaranteed to be faster, but because logging inserts goes faster than logging updates, it often is. Also, you would need to rebuild indexes and triggers.
Finally, there is the "no-update" solution: keep the stored values and expose rounded computed columns instead. First rename the existing columns:
```
exec sp_rename 'A.product_dollar_amt', '_product_dollar_amt', 'COLUMN';
exec sp_rename 'A.product_local_amt', '_product_local_amt', 'COLUMN';
exec sp_rename 'A.product_trans_amt', '_product_trans_amt', 'COLUMN';
```
Then add the columns back as computed columns over the renamed ones:
```
alter table A add product_dollar_amt as (round(_product_dollar_amt, 2));
alter table A add product_local_amt as (round(_product_local_amt, 2));
alter table A add product_trans_amt as (round(_product_trans_amt, 2));
```
|
You really don't have any alternatives here. You are updating every single row and it's going to take as long as it takes. I can tell you though that updating 30M rows in a single transaction is not a great idea. You could easily blow out your transaction log. And if this table is used by other users, you are probably going to lock them all out until the entire table is updated. You are much better off updating this table in small batches. Overall performance won't be improved but you'll be putting much less strain on your trans log and other users.
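A batched version could look like the following T-SQL sketch; the batch size and the filter (which relies on `ROUND` being idempotent, so already-rounded rows are skipped on later passes) are assumptions to tune for your system:

```
DECLARE @batch INT = 50000;

WHILE 1 = 1
BEGIN
    -- only touch rows that still need rounding, so each pass shrinks the work
    UPDATE TOP (@batch) A
    SET A.product_dollar_amt = ROUND(A.product_dollar_amt, 2),
        A.product_local_amt = ROUND(A.product_local_amt, 2),
        A.product_trans_amt = ROUND(A.product_trans_amt, 2)
    FROM dbo.[table] A
    WHERE A.product_dollar_amt <> ROUND(A.product_dollar_amt, 2)
       OR A.product_local_amt <> ROUND(A.product_local_amt, 2)
       OR A.product_trans_amt <> ROUND(A.product_trans_amt, 2);

    IF @@ROWCOUNT = 0 BREAK; -- nothing left to update
END
```

Each batch is its own transaction, so the log can truncate between batches and locks are held only briefly.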
|
is there ANY way I can improve performance on this simple query?
|
[
"",
"sql",
"sql-server",
"performance",
""
] |
I am fetching records from a table and it returns me a set of mixed data
here is a simple query
```
SELECT code_id, atb_name_id FROM `attribute_combinations` WHERE products_id =109
```
It returns me data like this
```
| code_id |atb_name_id|
-----------------------
| 1 | 31 |
| 2 | 31 |
| 3 | 31 |
| 4 | 31 |
| 5 | 31 |
| 6 | 34 |
| 7 | 34 |
| 8 | 34 |
| 9 | 34 |
```
I want to make another alias "flag" that will have all values filled with "yes" if "atb\_name\_id" column has all same values otherwise filled with "no".
In the above example as "atb\_name\_id" has both set of 31 and 34 so the output will be
```
| code_id |atb_name_id| flag |
------------------------------
| 1 | 31 | no |
| 2 | 31 | no |
| 3 | 31 | no |
| 4 | 31 | no |
| 5 | 31 | no |
| 6 | 34 | no |
| 7 | 34 | no |
| 8 | 34 | no |
| 9 | 34 | no |
```
|
You can do the following:
```
SELECT code_id, atb_name_id,
(SELECT CASE WHEN COUNT(DISTINCT atb_name_id) > 1 THEN 'no' ELSE 'yes' END
FROM `attribute_combinations` ac2
WHERE products_id = ac.products_id) AS flag
FROM `attribute_combinations` ac
WHERE products_id =109
```
|
Produce the additional information in a subselect that you then join to the original table:
```
SELECT
ac.code_id,
ac.atb_name_id,
CASE WHEN f.count = 1 THEN 'yes'
ELSE 'no' END AS flag
FROM
attribute_combinations ac INNER JOIN
(SELECT
products_id,
COUNT(DISTINCT atb_name_id) AS count
FROM
attribute_combinations
GROUP BY
products_id) f ON ac.products_id = f.products_id
WHERE
ac.products_id = 109
```
Beware: I have not tested this code. It is just here to give you an idea and might contain bugs.
Since the MySQL query optimizer is not always the best, you might get better performance by restricting the result of the subselect to only the products you are interested in:
```
SELECT
ac.code_id,
ac.atb_name_id,
CASE WHEN f.count = 1 THEN 'yes'
ELSE 'no' END AS flag
FROM
attribute_combinations ac,
(SELECT
COUNT(DISTINCT atb_name_id) AS count
FROM
attribute_combinations
WHERE
products_id = 109) f
WHERE
ac.products_id = 109
```
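On MySQL 8.0+ you could also avoid the self-join with window functions. `COUNT(DISTINCT ...)` is not supported as a window function in MySQL, so comparing the windowed `MIN` and `MAX` is used instead; a sketch:

```
SELECT code_id, atb_name_id,
       CASE WHEN MIN(atb_name_id) OVER () = MAX(atb_name_id) OVER ()
            THEN 'yes' ELSE 'no' END AS flag
FROM attribute_combinations
WHERE products_id = 109
```

The window runs over the already-filtered rows, so the min/max comparison answers "are all `atb_name_id` values the same for this product?".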
|
Mysql check if values are same or not
|
[
"",
"mysql",
"sql",
"database",
"relational-database",
""
] |
I'm trying to perform the following in a stored procedure
```
DECLARE @TICKET_AGE INT
SELECT @TICKET_AGE = TOP 1 (DATEDIFF(second, DATE_ENTERED, GETDATE())/60) AS TICKET_AGE
FROM TICKETS
```
But it's giving an error saying `Incorrect syntax near 'TOP'`.
What am I doing wrong?
So I've updated my code to look like below... but now am getting `Incorrect syntax near the keyword 'AS'`.
```
-- DECLARE VARIABLE
DECLARE @TICKET_AGE INT
-- PULL THE DATA
SELECT TOP 1 @TICKET_AGE = (DATEDIFF(second, DATE_ENTERED, GETDATE())/60) AS TICKET_AGE
FROM TICKETS
WHERE LOWER(STATUS_DESCRIPTION) LIKE '%new%'
ORDER BY DATE_ENTERED ASC
```
|
The `TOP 1` comes *before* the variable assignment, and you cannot combine an assignment with a column alias (the `AS TICKET_AGE` is what causes your second error):
```
SELECT TOP 1 @TICKET_AGE = DATEDIFF(second, DATE_ENTERED, GETDATE()) / 60
FROM TICKETS
ORDER BY DATE_ENTERED ASC
```
Keep your `WHERE` clause as well; without an `ORDER BY`, which row `TOP 1` picks is arbitrary.
|
Not sure if it matters, but this should give you the same result and *might* perform better
```
SELECT @TICKET_AGE = MAX(DATEDIFF(second,DATE_ENTERED,GETDATE()) / 60)
FROM TICKETS
WHERE LOWER(STATUS_DESCRIPTION) LIKE '%new%'
```
|
SQL Server SELECT @VARIABLE = TOP 1
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have this string in sql 'PIERCESTOWN WEXFORD EIRE'
I want only 'WEXFORD' , the search should be based on the string 'EIRE'
So far I have tried
```
DECLARE @a varchar(500)
SET @a='MALTON ROAD WICKLOW EIRE'
SELECT charindex('EIRE',@a)
SELECT SUBSTRING(@a,1,charindex('EIRE',@a))
```
But it gives me 'MALTON ROAD WICKLOW E'
Any help is appreciated.
|
Try this
```
DECLARE @a varchar(500), @v varchar(500)
SET @a='MALTON ROAD WICKLOW EIRE'
SELECT @v = LTRIM(RTRIM(SUBSTRING(@a,1,charindex('EIRE',@a)-1)))
SELECT REVERSE( LEFT( REVERSE(@v),
ISNULL(NULLIF(CHARINDEX(' ', REVERSE(@v)),0)-1,LEN(@v)) ) )
```
Result:
```
DATA RESULT
-----------------------------------
MALTON EIRE MALTON
MALTON ROAD WICKLOW EIRE WICKLOW
```
|
```
DECLARE @a varchar(500), @x varchar(500)
SET @a='MALTON ROAD WICKLOW EIRE'
SELECT @x = LTRIM(RTRIM(SUBSTRING(@a, 1, charindex('EIRE', @a) - 1)))
SELECT REVERSE( LEFT( REVERSE(@x), CHARINDEX(' ', REVERSE(@x))-1 ) )
```
|
Get the word before particular word in sql
|
[
"",
"sql",
"sql-server",
""
] |
I have 3 tables, and I want to get data from those tables using a join.
This is my table structure and data in these 3 tables.
I am using **MS Sql server**.
[](https://i.stack.imgur.com/rfBwG.jpg)
```
Year Month TotalNewCaseAmount TotalNewCaseCount TotalClosingAmount TotalClosingCount TotalReturnAmount TotalReturnCount
2016 Januray 146825.91 1973 54774.41 147 299.35 41
2016 Fabuary 129453.30 5384 46443.99 7 7568.21 123
2016 March 21412.07 3198 Null Null 78.83 73
2016 April 0.00 5 Null Null Null NULL
```
I don't know which join will give me this result. I have tried CROSS JOIN, but it gives me 36 rows.
|
That will do:
```
;WITH Table1 AS (
SELECT *
FROM (VALUES
(2016, 'Januray', 146825.91, 1973),
(2016, 'Fabuary', 129453.30, 5384),
(2016, 'March', 21412.07, 3198),
(2016, 'April', 0.00, 5)
) as t ([Year], [Month], TotalNewCaseAmount, TotalNewCaseCount)
), Table2 AS (
SELECT *
FROM (VALUES
(2016, 'Januray', 54774.41, 147),
(2016, 'Fabuary', 46443.99, 7)
) as t ([Year], [Month], TotalClosingAmount, TotalClosingCount)
), Table3 AS (
SELECT *
FROM (VALUES
(2016, 'Januray', 299.35, 41),
(2016, 'Fabuary', 7568.21, 123),
(2016, 'March', 78.83, 73)
) as t ([Year], [Month], TotalReturnAmount, TotalReturnCount)
)
SELECT t1.[Year],
t1.[Month],
t1.TotalNewCaseAmount,
t1.TotalNewCaseCount,
t2.TotalClosingAmount,
t2.TotalClosingCount,
t3.TotalReturnAmount,
t3.TotalReturnCount
FROM table1 t1
LEFT JOIN table2 t2
ON t1.[Year] = t2.[Year] AND t1.[Month] = t2.[Month]
LEFT JOIN table3 t3
ON t1.[Year] = t3.[Year] AND t1.[Month] = t3.[Month]
```
Output:
```
Year Month TotalNewCaseAmount TotalNewCaseCount TotalClosingAmount TotalClosingCount TotalReturnAmount TotalReturnCount
2016 Januray 146825.91 1973 54774.41 147 299.35 41
2016 Fabuary 129453.30 5384 46443.99 7 7568.21 123
2016 March 21412.07 3198 NULL NULL 78.83 73
2016 April 0.00 5 NULL NULL NULL NULL
```
Just change `table1`, `table2` and `table3` to your actual table names.
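Note that the `LEFT JOIN` assumes every month appears in `table1`. If a month could instead be missing from `table1` but present in the others, a `FULL JOIN` with `COALESCE` on the keys would keep every month; a sketch, not tested against the data above:

```
SELECT COALESCE(t1.[Year], t2.[Year], t3.[Year]) AS [Year],
       COALESCE(t1.[Month], t2.[Month], t3.[Month]) AS [Month],
       t1.TotalNewCaseAmount, t1.TotalNewCaseCount,
       t2.TotalClosingAmount, t2.TotalClosingCount,
       t3.TotalReturnAmount, t3.TotalReturnCount
FROM table1 t1
FULL JOIN table2 t2
  ON t1.[Year] = t2.[Year] AND t1.[Month] = t2.[Month]
FULL JOIN table3 t3
  ON COALESCE(t1.[Year], t2.[Year]) = t3.[Year]
 AND COALESCE(t1.[Month], t2.[Month]) = t3.[Month]
```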
|
Try like this,
```
select
t1.*,
t2.TotalClosingAmount,
t2.TotalClosingCount,
t3.TotalReturnAmount,
t3.TotalReturnCount
from
Table1 t1
left join
Table2 t2 on t1.year=t2.year
and t1.month=t2.month
left join
Table3 t3 on t1.year=t3.year
and t1.month=t3.month
```
|
Get Data from mutile table in sql server
|
[
"",
"sql",
"sql-server",
"join",
""
] |
Below are the sample tables I have to join.
```
SQL> select 'CH1' chapter , 'HELLO'||chr(10)||'WORLD' output from dual union
2 select 'CH2' chapter , 'HELLO'||chr(10)||'GALAXY' output from dual union
3 select 'CH3' chapter , 'HELLO'||chr(10)||'UNIVERSE' output from dual;
CHAPTER OUTPUT
--------------- --------------
CH1 HELLO
WORLD
CH2 HELLO
GALAXY
CH3 HELLO
UNIVERSE
```
and
```
SQL> select 'WORLD' output, 'PG1' Page from dual union
2 select 'GALAXY' output, 'PG2' Page from dual union
3 select 'UNIVERSE' output, 'PG3' Page from dual;
OUTPUT PAGE
-------- ------------
GALAXY PG2
UNIVERSE PG3
WORLD PG1
```
The OUTPUT column in the first table has multiple values separated with chr(10), which I want to join with the OUTPUT column of the second table so that the output looks like the following:
```
CHAPTER OUTPUT PAGE
--------------- -------------- ----------------
CH1 HELLO PG1
WORLD
CH2 HELLO PG2
GALAXY
CH3 HELLO PG3
UNIVERSE
```
Thanks in advance!
|
```
select chapter, c.output, page
from table_chapters c join table_pages p
on c.output like '%' || p.output || '%'
order by chapter, page
```
The join condition matches if the output in the "pages" table is an exact substring of the output in the "chapters" table. I assume this is what you need.
If you need the output sorted as I have shown, care must be taken because in lexicographic sorting P10 is before P3. Best if page numbers are in NUMBER format, not string format. Same with chapters.
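One caveat with `LIKE '%' || p.output || '%'`: a page entry such as 'WORLD' would also match a chapter value containing 'WORLDWIDE'. If partial words like that can occur in your data, padding both sides with the chr(10) delimiter restricts the match to whole lines; a sketch:

```
select chapter, c.output, page
from table_chapters c join table_pages p
on chr(10) || c.output || chr(10) like '%' || chr(10) || p.output || chr(10) || '%'
order by chapter, page
```

Wrapping `c.output` in leading and trailing chr(10) means every line, including the first and last, is delimiter-bounded on both sides.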
|
If the join needs to be done always on the second row of the string, you can use:
```
SQL> select chapter, a.output, page
2 from test_a A
3 inner join test_B B
4 on ( substr(a.output, instr(a.output, chr(10))+1, length(a.output)) = B.output)
5 order by chapter;
CHA OUTPUT PAG
--- -------------- ---
CH1 HELLO PG1
WORLD
CH2 HELLO PG2
GALAXY
CH3 HELLO PG3
UNIVERSE
```
|
Oracle SQL join on column with multiple Line feed (chr(10)) seperated values
|
[
"",
"sql",
"oracle",
"join",
""
] |
This post has been totally rephrased in order to make the question more understandable.
**Settings**
`PostgreSQL 9.5` running on `Ubuntu Server 14.04 LTS`.
**Data model**
I have dataset tables where I store time-series data separately; all those tables must share the same structure:
```
CREATE TABLE IF NOT EXISTS %s(
Id SERIAL NOT NULL,
ChannelId INTEGER NOT NULL,
GranulityIdIn INTEGER,
GranulityId INTEGER NOT NULL,
TimeValue TIMESTAMP NOT NULL,
FloatValue FLOAT DEFAULT(NULL),
Status BIGINT DEFAULT(NULL),
QualityCodeId INTEGER NOT NULL,
DataArray FLOAT[] DEFAULT(NULL),
DataCount BIGINT DEFAULT(NULL),
Performance FLOAT DEFAULT(NULL),
StepCount INTEGER NOT NULL DEFAULT(0),
TableRegClass regclass NOT NULL,
Updated TIMESTAMP NOT NULL,
Tags TEXT[] DEFAULT(NULL),
--
CONSTRAINT PK_%s PRIMARY KEY(Id),
CONSTRAINT FK_%s_Channel FOREIGN KEY(ChannelId) REFERENCES scientific.Channel(Id),
CONSTRAINT FK_%s_GranulityIn FOREIGN KEY(GranulityIdIn) REFERENCES quality.Granulity(Id),
CONSTRAINT FK_%s_Granulity FOREIGN KEY(GranulityId) REFERENCES quality.Granulity(Id),
CONSTRAINT FK_%s_QualityCode FOREIGN KEY(QualityCodeId) REFERENCES quality.QualityCode(Id),
CONSTRAINT UQ_%s UNIQUE(QualityCodeId, ChannelId, GranulityId, TimeValue)
);
CREATE INDEX IDX_%s_Channel ON %s USING btree(ChannelId);
CREATE INDEX IDX_%s_Quality ON %s USING btree(QualityCodeId);
CREATE INDEX IDX_%s_Granulity ON %s USING btree(GranulityId) WHERE GranulityId > 2;
CREATE INDEX IDX_%s_TimeValue ON %s USING btree(TimeValue);
```
This definition comes from a `FUNCTION`, thus `%s` stands for the dataset name.
The `UNIQUE` constraint ensures that there are no duplicate records within a given dataset. A record in this dataset is a value (`floatvalue`) for a given channel (`channelid`), sampled at a given time (`timevalue`) on a given interval (`granulityid`), with a given quality (`qualitycodeid`). Whatever the value is, there cannot be a duplicate of `(channelid, timevalue, granulityid, qualitycodeid)`.
Records in dataset look like:
```
1;25;;1;"2015-01-01 00:00:00";0.54;160;6;"";;;0;"datastore.rtu";"2016-05-07 16:38:29.28106";""
2;25;;1;"2015-01-01 00:30:00";0.49;160;6;"";;;0;"datastore.rtu";"2016-05-07 16:38:29.28106";""
3;25;;1;"2015-01-01 01:00:00";0.47;160;6;"";;;0;"datastore.rtu";"2016-05-07 16:38:29.28106";""
```
I also have another satellite table where I store significant digits for channels; these parameters can change with time. I store them in the following way:
```
CREATE TABLE SVPOLFactor (
Id SERIAL NOT NULL,
ChannelId INTEGER NOT NULL,
StartTimestamp TIMESTAMP NOT NULL,
Factor FLOAT NOT NULL,
UnitsId VARCHAR(8) NOT NULL,
--
CONSTRAINT PK_SVPOLFactor PRIMARY KEY(Id),
CONSTRAINT FK_SVPOLFactor_Units FOREIGN KEY(UnitsId) REFERENCES Units(Id),
CONSTRAINT UQ_SVPOLFactor UNIQUE(ChannelId, StartTimestamp)
);
```
When a significant digit is defined for a channel, a row is added to this table, and the factor applies from that date on. The first record for a channel always has the sentinel value `'-infinity'::TIMESTAMP`, which means: the factor applies since the beginning. Subsequent rows must have a real defined value. If there is no row for a given channel, the significant digit is unitary.
Records in this table look like:
```
123;277;"-infinity";0.1;"_C"
124;1001;"-infinity";0.01;"-"
125;1001;"2014-03-01 00:00:00";0.1;"-"
126;1001;"2014-06-01 00:00:00";1;"-"
127;1001;"2014-09-01 00:00:00";10;"-"
5001;5181;"-infinity";0.1;"ug/m3"
```
**Goal**
My goal is to perform a comparison audit of two datasets that have been populated by distinct processes. To achieve it, I must:
* Compare records between dataset and assess their differences;
* Check if the difference between similar records is enclosed within the significant digit.
For this purpose, I have written the following query which behaves in a manner that I don't understand:
```
WITH
-- Join records before records (regard to uniqueness constraint) from datastore templated tables in order to make audit comparison:
S0 AS (
SELECT
A.ChannelId
,A.GranulityIdIn AS gidInRef
,B.GranulityIdIn AS gidInAudit
,A.GranulityId AS GranulityId
,A.QualityCodeId
,A.TimeValue
,A.FloatValue AS xRef
,B.FloatValue AS xAudit
,A.StepCount AS scRef
,B.StepCount AS scAudit
,A.DataCount AS dcRef
,B.DataCount AS dcAudit
,round(A.Performance::NUMERIC, 4) AS pRef
,round(B.Performance::NUMERIC, 4) AS pAudit
FROM
datastore.rtu AS A JOIN datastore.audit0 AS B USING(ChannelId, GranulityId, QualityCodeId, TimeValue)
),
-- Join before SVPOL factors in order to determine decimal factor applied to records:
S1 AS (
SELECT
DISTINCT ON(ChannelId, TimeValue)
S0.*
,SF.Factor::NUMERIC AS svpolfactor
,COALESCE(-log(SF.Factor), 0)::INTEGER AS k
FROM
S0 LEFT JOIN settings.SVPOLFactor AS SF ON ((S0.ChannelId = SF.ChannelId) AND (SF.StartTimestamp <= S0.TimeValue))
ORDER BY
ChannelId, TimeValue, StartTimestamp DESC
),
-- Audit computation:
S2 AS (
SELECT
S1.*
,xaudit - xref AS dx
,(xaudit - xref)/NULLIF(xref, 0) AS rdx
,round(xaudit*pow(10, k))*pow(10, -k) AS xroundfloat
,round(xaudit::NUMERIC, k) AS xroundnum
,0.5*pow(10, -k) AS epsilon
FROM S1
)
SELECT
*
,ABS(dx) AS absdx
,ABS(rdx) AS absrdx
,(xroundfloat - xref) AS dxroundfloat
,(xroundnum - xref) AS dxroundnum
,(ABS(dx) - epsilon) AS dxeps
,(ABS(dx) - epsilon)/epsilon AS rdxeps
,(xroundfloat - xroundnum) AS dfround
FROM
S2
ORDER BY
k DESC
,ABS(rdx) DESC
,ChannelId;
```
The query may be somewhat unreadable; roughly, I expect it to:
* Join data from two datasets using uniqueness constraint to compare similar records and compute difference (`S0`);
* For each difference, find the significant digit (`LEFT JOIN`) that applies for the current timestamps (`S1`);
* Perform some other useful statistics (`S2` and final `SELECT`).
**Problem**
When I run the query above, I have missing rows. For example: `channelid=123` with `granulityid=4` has 12 records in common in both tables (`datastore.rtu` and `datastore.audit0`). When I perform the whole query and store it in a `MATERIALIZED VIEW`, there are fewer than 12 rows. I then started investigating to understand why I have missing records, and I faced a strange behaviour with the `WHERE` clause. If I perform an `EXPLAIN ANALYZE` of this query, I get:
```
"Sort (cost=332212.76..332212.77 rows=1 width=232) (actual time=6042.736..6157.235 rows=61692 loops=1)"
" Sort Key: s2.k DESC, (abs(s2.rdx)) DESC, s2.channelid"
" Sort Method: external merge Disk: 10688kB"
" CTE s0"
" -> Merge Join (cost=0.85..332208.25 rows=1 width=84) (actual time=20.408..3894.071 rows=63635 loops=1)"
" Merge Cond: ((a.qualitycodeid = b.qualitycodeid) AND (a.channelid = b.channelid) AND (a.granulityid = b.granulityid) AND (a.timevalue = b.timevalue))"
" -> Index Scan using uq_rtu on rtu a (cost=0.43..289906.29 rows=3101628 width=52) (actual time=0.059..2467.145 rows=3102319 loops=1)"
" -> Index Scan using uq_audit0 on audit0 b (cost=0.42..10305.46 rows=98020 width=52) (actual time=0.049..108.138 rows=98020 loops=1)"
" CTE s1"
" -> Unique (cost=4.37..4.38 rows=1 width=148) (actual time=4445.865..4509.839 rows=61692 loops=1)"
" -> Sort (cost=4.37..4.38 rows=1 width=148) (actual time=4445.863..4471.002 rows=63635 loops=1)"
" Sort Key: s0.channelid, s0.timevalue, sf.starttimestamp DESC"
" Sort Method: external merge Disk: 5624kB"
" -> Hash Right Join (cost=0.03..4.36 rows=1 width=148) (actual time=4102.842..4277.641 rows=63635 loops=1)"
" Hash Cond: (sf.channelid = s0.channelid)"
" Join Filter: (sf.starttimestamp <= s0.timevalue)"
" -> Seq Scan on svpolfactor sf (cost=0.00..3.68 rows=168 width=20) (actual time=0.013..0.083 rows=168 loops=1)"
" -> Hash (cost=0.02..0.02 rows=1 width=132) (actual time=4102.002..4102.002 rows=63635 loops=1)"
" Buckets: 65536 (originally 1024) Batches: 2 (originally 1) Memory Usage: 3841kB"
" -> CTE Scan on s0 (cost=0.00..0.02 rows=1 width=132) (actual time=20.413..4038.078 rows=63635 loops=1)"
" CTE s2"
" -> CTE Scan on s1 (cost=0.00..0.07 rows=1 width=168) (actual time=4445.910..4972.832 rows=61692 loops=1)"
" -> CTE Scan on s2 (cost=0.00..0.05 rows=1 width=232) (actual time=4445.934..5312.884 rows=61692 loops=1)"
"Planning time: 1.782 ms"
"Execution time: 6201.148 ms"
```
And I know that I must have 67106 rows instead.
At the time of writing, I know that `S0` returns the correct number of rows. Therefore the problem must lie in a later `CTE`.
What I find really strange is that:
```
EXPLAIN ANALYZE
WITH
S0 AS (
SELECT * FROM datastore.audit0
),
S1 AS (
SELECT
DISTINCT ON(ChannelId, TimeValue)
S0.*
,SF.Factor::NUMERIC AS svpolfactor
,COALESCE(-log(SF.Factor), 0)::INTEGER AS k
FROM
S0 LEFT JOIN settings.SVPOLFactor AS SF ON ((S0.ChannelId = SF.ChannelId) AND (SF.StartTimestamp <= S0.TimeValue))
ORDER BY
ChannelId, TimeValue, StartTimestamp DESC
)
SELECT * FROM S1 WHERE Channelid=123 AND GranulityId=4 -- POST-FILTERING
```
returns 10 rows:
```
"CTE Scan on s1 (cost=24554.34..24799.39 rows=1 width=196) (actual time=686.211..822.803 rows=10 loops=1)"
" Filter: ((channelid = 123) AND (granulityid = 4))"
" Rows Removed by Filter: 94890"
" CTE s0"
" -> Seq Scan on audit0 (cost=0.00..2603.20 rows=98020 width=160) (actual time=0.009..26.092 rows=98020 loops=1)"
" CTE s1"
" -> Unique (cost=21215.99..21951.14 rows=9802 width=176) (actual time=590.337..705.070 rows=94900 loops=1)"
" -> Sort (cost=21215.99..21461.04 rows=98020 width=176) (actual time=590.335..665.152 rows=99151 loops=1)"
" Sort Key: s0.channelid, s0.timevalue, sf.starttimestamp DESC"
" Sort Method: external merge Disk: 12376kB"
" -> Hash Left Join (cost=5.78..4710.74 rows=98020 width=176) (actual time=0.143..346.949 rows=99151 loops=1)"
" Hash Cond: (s0.channelid = sf.channelid)"
" Join Filter: (sf.starttimestamp <= s0.timevalue)"
" -> CTE Scan on s0 (cost=0.00..1960.40 rows=98020 width=160) (actual time=0.012..116.543 rows=98020 loops=1)"
" -> Hash (cost=3.68..3.68 rows=168 width=20) (actual time=0.096..0.096 rows=168 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 12kB"
" -> Seq Scan on svpolfactor sf (cost=0.00..3.68 rows=168 width=20) (actual time=0.006..0.045 rows=168 loops=1)"
"Planning time: 0.385 ms"
"Execution time: 846.179 ms"
```
And the next one returns the correct number of rows:
```
EXPLAIN ANALYZE
WITH
S0 AS (
SELECT * FROM datastore.audit0
WHERE Channelid=123 AND GranulityId=4 -- PRE FILTERING
),
S1 AS (
SELECT
DISTINCT ON(ChannelId, TimeValue)
S0.*
,SF.Factor::NUMERIC AS svpolfactor
,COALESCE(-log(SF.Factor), 0)::INTEGER AS k
FROM
S0 LEFT JOIN settings.SVPOLFactor AS SF ON ((S0.ChannelId = SF.ChannelId) AND (SF.StartTimestamp <= S0.TimeValue))
ORDER BY
ChannelId, TimeValue, StartTimestamp DESC
)
SELECT * FROM S1
```
Where:
```
"CTE Scan on s1 (cost=133.62..133.86 rows=12 width=196) (actual time=0.580..0.598 rows=12 loops=1)"
" CTE s0"
" -> Bitmap Heap Scan on audit0 (cost=83.26..128.35 rows=12 width=160) (actual time=0.401..0.423 rows=12 loops=1)"
" Recheck Cond: ((channelid = 123) AND (granulityid = 4))"
" Heap Blocks: exact=12"
" -> BitmapAnd (cost=83.26..83.26 rows=12 width=0) (actual time=0.394..0.394 rows=0 loops=1)"
" -> Bitmap Index Scan on idx_audit0_channel (cost=0.00..11.12 rows=377 width=0) (actual time=0.055..0.055 rows=377 loops=1)"
" Index Cond: (channelid = 123)"
" -> Bitmap Index Scan on idx_audit0_granulity (cost=0.00..71.89 rows=3146 width=0) (actual time=0.331..0.331 rows=3120 loops=1)"
" Index Cond: (granulityid = 4)"
" CTE s1"
" -> Unique (cost=5.19..5.28 rows=12 width=176) (actual time=0.576..0.581 rows=12 loops=1)"
" -> Sort (cost=5.19..5.22 rows=12 width=176) (actual time=0.576..0.576 rows=12 loops=1)"
" Sort Key: s0.channelid, s0.timevalue, sf.starttimestamp DESC"
" Sort Method: quicksort Memory: 20kB"
" -> Hash Right Join (cost=0.39..4.97 rows=12 width=176) (actual time=0.522..0.552 rows=12 loops=1)"
" Hash Cond: (sf.channelid = s0.channelid)"
" Join Filter: (sf.starttimestamp <= s0.timevalue)"
" -> Seq Scan on svpolfactor sf (cost=0.00..3.68 rows=168 width=20) (actual time=0.006..0.022 rows=168 loops=1)"
" -> Hash (cost=0.24..0.24 rows=12 width=160) (actual time=0.446..0.446 rows=12 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 6kB"
" -> CTE Scan on s0 (cost=0.00..0.24 rows=12 width=160) (actual time=0.403..0.432 rows=12 loops=1)"
"Planning time: 0.448 ms"
"Execution time: 4.510 ms"
```
Thus the problem seems to lie in `S1`. There is no significant digit defined for `channelid = 123`, therefore, those records should not be generated without the `LEFT JOIN`. But this does not explain why there are some missing.
**Questions**
* **What am I doing wrong in this query?**
I use `LEFT JOIN` in order to keep the correct cardinality when I fetch significant digits, so it must not remove records; after that it is just arithmetic.
* **How can pre-filtering return more rows than post-filtering?**
This sounds a little bit buggy to me. If I do not use `WHERE` clauses, all records (or combinations) are generated (I know that a `JOIN` is a `WHERE` clause) and then computation occurs. When I do not use an additional `WHERE` (original query), I miss rows (as shown in the examples). When I add a `WHERE` clause to filter, the results are different (which might be fine if post-filtering had returned more records than pre-filtering, but not the other way around).
Any constructive answer that points out my mistakes and miscomprehension of the query are welcome. Thank you.
|
# What's happening
You're probably missing rows because of the `DISTINCT ON` clause in `S1`. It appears you're using this to pick only the most recent applicable rows of `SVPOLFactor`. However, you wrote
```
DISTINCT ON(ChannelId, TimeValue)
```
while in the query `S0`, unique rows could also differ by `GranulityId` and/or `QualityCodeId`. So, for example, if you had rows in both `rtu` and `audit0` with the following columns:
```
Id | ChannelId | GranulityId | TimeValue | QualityCodeid
----|-----------+-------------+---------------------+---------------
1 | 123 | 4 | 2015-01-01 00:00:00 | 2
2 | 123 | 5 | 2015-01-01 00:00:00 | 2
```
then `S0` with no `WHERE` filtering would return rows for both of these, because they differ in `GranulityId`. But one of these would be dropped by the `DISTINCT ON` clause in `S1`, because they have the same values for `ChannelId` and `TimeValue`. Even worse, because you only ever sort by `ChannelId` and `TimeValue`, which row is picked and which is dropped is not determined by anything in your query—it's left to chance!
In your example of "post-filtering" `WHERE ChannelId = 123 AND GranulityId = 4`, both these rows are in `S0`. Then it's possible, depending on an ordering that you aren't really in control of, for the `DISTINCT ON` in `S1` to filter out row 1 instead of row 2. Then, row 2 is filtered out at the end, leaving you with *neither* of the rows. The mistake in the `DISTINCT ON` clause caused row 2, which you didn't even want to see, to eliminate row 1 in an intermediate query.
In your example of "pre-filtering" in `S0`, you filter out row 2 before it can interfere with row 1, so row 1 makes it to the final query.
# A fix
One way to stop these rows from being excluded would be to expand the `DISTINCT ON` and `ORDER BY` clauses to include `GranulityId` and `QualityCodeId`:
```
DISTINCT ON(ChannelId, TimeValue, GranulityId, QualityCodeId)
-- ...
ORDER BY ChannelId, TimeValue, GranulityId, QualityCodeId, StartTimestamp DESC
```
Of course, if you filter the results of `S0` so that they all have the same values for some of these columns, you can omit those in the `DISTINCT ON`. In your example of pre-filtering `S0` with `ChannelId` and `GranulityId`, this could be:
```
DISTINCT ON(TimeValue, QualityCodeId)
-- ...
ORDER BY TimeValue, QualityCodeId, StartTimestamp DESC
```
But I doubt you'd save much time doing this, so it's probably safest to keep all those columns, in case you change the query again some day and forget to change the `DISTINCT ON`.
---
I want to mention that [the PostgreSQL docs](http://www.postgresql.org/docs/current/static/queries-select-lists.html#QUERIES-DISTINCT) warn about these sorts of problems with `DISTINCT ON` (emphasis mine):
> A set of rows for which all the [`DISTINCT ON`] expressions are equal are considered duplicates, and only the first row of the set is kept in the output. Note that the "first row" of a set is **unpredictable unless the query is sorted on enough columns to guarantee a unique ordering** of the rows arriving at the `DISTINCT` filter. (`DISTINCT ON` processing occurs after `ORDER BY` sorting.)
>
> The `DISTINCT ON` clause is not part of the SQL standard and is sometimes considered bad style because of the **potentially indeterminate** nature of its results. With judicious use of `GROUP BY` and subqueries in `FROM`, this construct can be avoided, but it is often the most convenient alternative.
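
As a cross-check of the "partition by every identifying column" idea, the same pick-the-latest-row-per-group logic can be written portably with `ROW_NUMBER()`. A minimal sketch with made-up sample rows, run here against SQLite through Python rather than PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE samples (
    ChannelId INTEGER, GranulityId INTEGER,
    TimeValue TEXT, StartTimestamp TEXT
);
INSERT INTO samples VALUES
    (123, 4, '2015-01-01 00:00:00', '2014-12-01'),   -- newest for (123, 4)
    (123, 4, '2015-01-01 00:00:00', '2014-06-01'),   -- older duplicate
    (123, 5, '2015-01-01 00:00:00', '2014-12-01');   -- differs only in GranulityId
""")

# Partition by EVERY column that identifies a distinct row, then keep
# the newest row of each partition: a deterministic DISTINCT ON.
rows = conn.execute("""
    SELECT ChannelId, GranulityId, TimeValue, StartTimestamp
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY ChannelId, GranulityId, TimeValue
                   ORDER BY StartTimestamp DESC
               ) AS rn
        FROM samples
    ) AS ranked
    WHERE rn = 1
    ORDER BY GranulityId
""").fetchall()
print(rows)
```

Both (123, 4) and (123, 5) survive, each represented by its newest `StartTimestamp`; partitioning by `ChannelId, TimeValue` alone would arbitrarily drop one of them.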
|
You already got a correct answer, this is just an addition. When you calculate start/end in a Derived Table, the join returns a single row and you don't need `DISTINCT ON` (and this might be more efficient, too):
```
...
FROM S0 LEFT JOIN
(
SELECT *,
       -- find the next StartTimestamp = end of the current period
       COALESCE(LEAD(StartTimestamp)
                  OVER (PARTITION BY ChannelId
                        ORDER BY StartTimestamp),
                'infinity') AS EndTimestamp
FROM SVPOLFactor AS t
) AS SF
ON (S0.ChannelId = SF.ChannelId)
AND (S0.TimeValue >= SF.StartTimestamp)
AND (S0.TimeValue < SF.EndTimestamp)
```
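A runnable sketch of this LEAD-derived period lookup with a tiny made-up factor table. It is executed against SQLite from Python here, so a far-future sentinel stands in for PostgreSQL's `'infinity'` timestamp:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SVPOLFactor (ChannelId INTEGER, StartTimestamp TEXT, Factor REAL);
INSERT INTO SVPOLFactor VALUES
    (1, '2015-01-01', 10.0),
    (1, '2015-06-01', 20.0);
""")

# Each period ends where the next one starts; the last period gets a
# far-future sentinel in place of PostgreSQL's 'infinity' timestamp.
rows = conn.execute("""
    WITH sf AS (
        SELECT *,
               COALESCE(LEAD(StartTimestamp)
                          OVER (PARTITION BY ChannelId
                                ORDER BY StartTimestamp),
                        '9999-12-31') AS EndTimestamp
        FROM SVPOLFactor
    )
    SELECT Factor
    FROM sf
    WHERE ChannelId = 1
      AND '2015-03-15' >= StartTimestamp
      AND '2015-03-15' <  EndTimestamp
""").fetchall()
print(rows)
```

The derived `EndTimestamp` makes each factor row describe a half-open interval, so the join matches exactly one row per sample.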
|
Strange behaviour with a CTE involving two joins
|
[
"",
"sql",
"postgresql",
"join",
"common-table-expression",
"distinct-on",
""
] |
```
SELECT oi.created_at, count(oi.id_order_item)
FROM order_item oi
GROUP BY oi.created_at
```
The result is the following:
```
2016-05-05 1562
2016-05-06 3865
2016-05-09 1
...etc
```
The problem is that I need information for all days even if there were no id\_order\_item for this date.
Expected result:
```
Date Quantity
2016-05-05 1562
2016-05-06 3865
2016-05-07 0
2016-05-08 0
2016-05-09 1
```
|
You can't count something that is not in the database. So you need to generate the missing dates in order to be able to "count" them.
```
SELECT d.dt, count(oi.id_order_item)
FROM (
select dt::date
from generate_series(
(select min(created_at) from order_item),
(select max(created_at) from order_item), interval '1' day) as x (dt)
) d
left join order_item oi on oi.created_at = d.dt
group by d.dt
order by d.dt;
```
The query gets the minimum and maximum date from the existing order items.
If you want the count for a specific date range you can remove the sub-selects:
```
SELECT d.dt, count(oi.id_order_item)
FROM (
select dt::date
from generate_series(date '2016-05-01', date '2016-05-31', interval '1' day) as x (dt)
) d
left join order_item oi on oi.created_at = d.dt
group by d.dt
order by d.dt;
```
SQLFiddle: <http://sqlfiddle.com/#!15/49024/5>
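`generate_series` is PostgreSQL-specific; on engines without it, a recursive CTE produces the same calendar. A sketch of the gap-filling idea with a few made-up order rows, run against SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE order_item (id_order_item INTEGER, created_at TEXT);
INSERT INTO order_item VALUES (1, '2016-05-05'), (2, '2016-05-05'),
                              (3, '2016-05-06'), (4, '2016-05-09');
""")

rows = conn.execute("""
    WITH RECURSIVE days(dt) AS (
        SELECT '2016-05-05'
        UNION ALL
        SELECT date(dt, '+1 day') FROM days WHERE dt < '2016-05-09'
    )
    SELECT d.dt, COUNT(oi.id_order_item)   -- COUNT(col) skips the NULLs
    FROM days d
    LEFT JOIN order_item oi ON oi.created_at = d.dt
    GROUP BY d.dt
    ORDER BY d.dt
""").fetchall()
print(rows)
```

`COUNT(oi.id_order_item)` rather than `COUNT(*)` is what makes the empty days show 0, because it ignores the NULLs produced by the `LEFT JOIN`.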
|
Friend, the PostgreSQL `COUNT` function ignores NULL values; it literally does not consider NULL values in the column you are counting. For this reason you need to include `oi.created_at` in a `GROUP BY` clause.
Because an integral part of your query is `COUNT`, any column in the `SELECT` list that is not inside an aggregate must appear in the `GROUP BY`. If you group by `oi.created_at`, you will get one count per date.
```
SELECT oi.created_at, count(oi.id_order_item)
FROM order_item oi
Group by oi.created_at
```
From TechontheNet (my most trusted source of information):
*Because you have listed one column in your SELECT statement that is not encapsulated in the count function, you must use a GROUP BY clause. The department field must, therefore, be listed in the GROUP BY section.*
Some info on Count in PostgreSql
<http://www.postgresqltutorial.com/postgresql-count-function/>
<http://www.techonthenet.com/postgresql/functions/count.php>
|
How to select all dates in SQL query
|
[
"",
"sql",
"postgresql",
"date",
""
] |
For voyages (years, ships), how can I group and count sailors by age intervals in standard SQL or MS Access?
```
YEAR SHIP SAILOR_AGE
2003 Flying dolphin 33
2003 Flying dolphin 33
2003 Flying dolphin 34
2001 Flying dolphin 23
2003 Flying dolphin 35
2001 Flying dolphin 38
2001 Flying dolphin 31
2003 Flying dolphin 36
2003 Columbine 41
2003 Columbine 42
2003 Flying dolphin 27
2003 Flying dolphin 51
2003 Flying dolphin 46
```
What I tried:
```
SELECT YEAR, SHIP, SAILOR_AGE, COUNT(*) as `NUMBERS`
FROM TABLE
GROUP BY YEAR, SHIP, SAILOR_AGE;
```
It gives me the number of sailors for each year, ship and exact age:
Example:
```
YEAR |SHIP |SAILOR_AGE | NUMBERS
------------------------------------------
2003 | Flying dolphin| 33 | 2
```
How can I group sailor ages by intervals?
Example:
```
From 20 to 40 years old
From 40 to 60 years old
```
|
You can do it with a single query using `CASE EXPRESSION` :
```
SELECT t.years,t.ship,
       CASE WHEN t.Sailor_age between 21 and 40 then 'From 20th to 40th'
            WHEN t.Sailor_age between 41 and 60 then 'From 40th to 60th'
            ELSE 'Other Ages'
       END as Range,
       COUNT(*) as Numbers
FROM YourTable t
GROUP BY t.years,t.ship,CASE WHEN t.Sailor_age between 21 and 40 then 'From 20th to 40th' WHEN t.Sailor_age between 41 and 60 then 'From 40th to 60th' ELSE 'Other Ages' END
ORDER BY t.years,t.ship,Range
```
Or if you want it as a single row using conditional aggregation:
```
SELECT t.years,t.ship,
COUNT(CASE WHEN t.Sailor_age between 21 and 40 then 1 end) as Age_21_to_40,
COUNT(CASE WHEN t.Sailor_age between 41 and 60 then 1 end) as Age_41_to_60
FROM YourTable t
GROUP BY t.years,t.ship
ORDER BY t.years,t.ship
```
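The conditional-aggregation pattern can be checked against the sample data from the question; a sketch run via SQLite from Python (lower-case column names are just for the illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE voyages (year INTEGER, ship TEXT, sailor_age INTEGER);
INSERT INTO voyages VALUES
    (2003,'Flying dolphin',33),(2003,'Flying dolphin',33),
    (2003,'Flying dolphin',34),(2001,'Flying dolphin',23),
    (2003,'Flying dolphin',35),(2001,'Flying dolphin',38),
    (2001,'Flying dolphin',31),(2003,'Flying dolphin',36),
    (2003,'Columbine',41),(2003,'Columbine',42),
    (2003,'Flying dolphin',27),(2003,'Flying dolphin',51),
    (2003,'Flying dolphin',46);
""")

# COUNT(CASE ... THEN 1 END) counts only rows where the CASE is non-NULL,
# so each age bucket is tallied in a single pass over the table.
rows = conn.execute("""
    SELECT year, ship,
           COUNT(CASE WHEN sailor_age BETWEEN 21 AND 40 THEN 1 END) AS age_21_40,
           COUNT(CASE WHEN sailor_age BETWEEN 41 AND 60 THEN 1 END) AS age_41_60
    FROM voyages
    GROUP BY year, ship
    ORDER BY year, ship
""").fetchall()
print(rows)
```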
|
As this - given the tags - most likely is Access SQL, you can go like this:
```
SELECT
YEAR,
SHIP,
INT(SAILOR_AGE / 20) * 20 AS AgeGroup,
COUNT(*) As [NUMBERS]
FROM
TABLE
GROUP BY
YEAR,
SHIP,
INT(SAILOR_AGE / 20) * 20
```
|
Group and count by age breaks
|
[
"",
"sql",
"sqlite",
"ms-access",
"ms-access-2010",
"ms-access-2007",
""
] |
Is it possible to merge these 2 left joins into one?
I can't think of a way.
```
select left1.field1,
left2.field2
from masterTable left join (
select somefield,
field1,
row_number() over (partition by somefield order by otherfield) as rowNum
from childTable
inner join masterTable
on masterTable.somefield = childTable.somefield
) as left1
on masterTable.somefield = left1.somefield
AND left1.rownum =1
left join (
select somefield,
max(field2) as field2
from childTable
inner join masterTable
on masterTable.somefield = childTable.somefield
where field3 = 1
group by somefield
) as left2
on masterTable.somefield = left2.somefield
```
|
You can use a conditional `max() over()` to get the max of field2 (only over the rows with field3 = 1) per somefield in the same query.
```
select left1.field1,
left1.field2
from masterTable
left join
(select
somefield,field1
,row_number() over (partition by somefield order by otherfield) as rowNum
,max(case when field3 = 1 then field2 end) over(partition by somefield) as field2
from childTable
inner join masterTable on masterTable.somefield = childTable.somefield) as left1
ON masterTable.somefield = left1.somefield
AND left1.rownum = 1
```
|
Try this, but without sample data and output I can't guarantee that it will work properly; it's just a guess.
```
SELECT field1,
MAX(field2) AS field2
FROM (
SELECT ct.field1,
ct1.field2,
ROW_NUMBER() OVER (PARTITION BY ct.somefield ORDER BY ct.otherfield) as rn
FROM masterTable mt
INNER JOIN childTable ct
ON mt.somefield = ct.somefield
LEFT JOIN childTable ct1
ON mt.somefield = ct1.somefield AND ct1.field3 = 1
) as t
WHERE rn = 1
GROUP BY field1
```
|
merging two left join on same table into one
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I need little help with reading my "chats" SQL table.
Column:
```
Chat_ID - decimal(18, 0) primary key, inflexible-yes
Sent_ID - decimal(18, 0)
Receive_ID - decimal(18, 0)
Time - datetime
Message - nvarchar(MAX)
Sent_ID| Receive_ID | Time | Message
-------+----------+------------------+-----------------
1 | 2 | 11/21/2015 10:00 | Hey! test
-------+----------+------------------+-----------------
2 | 1 | 11/21/2015 10:50 | Hi! respond
-------+----------+------------------+-----------------
1 | 2 | 11/21/2015 10:51 | respond 3
-------+----------+------------------+-----------------
2 | 1 | 11/21/2015 11:05 | respond final
-------+----------+------------------+-----------------
1 | 3 | 11/21/2015 11:51 | Message 1
-------+----------+------------------+-----------------
3 | 1 | 11/21/2015 12:05 | Message 2
-------+----------+------------------+-----------------
1 | 3 | 11/21/2015 12:16 | Message Final
-------+----------+------------------+-----------------
4 | 1 | 11/21/2015 12:25 | New message 1
-------+----------+------------------+-----------------
```
**How to get...(last message with each user?)**
```
Sent_ID| Receive_ID | Time | Message
-------+----------+------------------+-----------------
2 | 1 | 11/21/2015 11:05 | respond final
-------+----------+------------------+-----------------
1 | 3 | 11/21/2015 12:16 | Message Final
-------+----------+------------------+-----------------
4 | 1 | 11/21/2015 12:25 | New message 1
-------+----------+------------------+-----------------
```
You notice that I need something like: MAX(Time), WHERE (Sent\_ID=@Sent\_ID or Receive\_ID=@Receive\_ID); in this case Sent\_ID=1 and Receive\_ID=1.
**to simplify**: WHERE (Sent\_ID=1 or Receive\_ID=1)
Thank you....
|
Update: I understand now that you want to get the latest message between two users regardless of whether it was sent or received. If that's the case, you can use `ROW_NUMBER`:
`ONLINE DEMO`
```
WITH Cte AS(
SELECT *,
rn = ROW_NUMBER() OVER(
PARTITION BY
CASE
WHEN Sent_ID > Receive_ID THEN Receive_ID
ELSE Sent_ID
END,
CASE
WHEN Sent_ID > Receive_ID THEN Sent_ID
ELSE Receive_ID
END
ORDER BY Time DESC
)
FROM chats
WHERE
Sent_ID = 1
OR Receive_ID = 1
)
SELECT
Sent_ID, Receive_ID, Time, Message
FROM Cte
WHERE rn = 1
```
What the above query does is to partition the chat messages by the lower ID first, and then the higher ID. This way, you ensure that you have distinct combination of IDs.
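A runnable sketch of that pair-normalizing partition using the question's sample rows, executed against SQLite from Python. SQLite's two-argument scalar `MIN`/`MAX` stand in for the two `CASE` expressions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chats (Sent_ID INTEGER, Receive_ID INTEGER, Time TEXT, Message TEXT);
INSERT INTO chats VALUES
    (1, 2, '2015-11-21 10:00', 'Hey! test'),
    (2, 1, '2015-11-21 10:50', 'Hi! respond'),
    (1, 2, '2015-11-21 10:51', 'respond 3'),
    (2, 1, '2015-11-21 11:05', 'respond final'),
    (1, 3, '2015-11-21 11:51', 'Message 1'),
    (3, 1, '2015-11-21 12:05', 'Message 2'),
    (1, 3, '2015-11-21 12:16', 'Message Final'),
    (4, 1, '2015-11-21 12:25', 'New message 1');
""")

rows = conn.execute("""
    SELECT Sent_ID, Receive_ID, Message
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY MIN(Sent_ID, Receive_ID),  -- lower ID of the pair
                                MAX(Sent_ID, Receive_ID)   -- higher ID of the pair
                   ORDER BY Time DESC
               ) AS rn
        FROM chats
        WHERE Sent_ID = 1 OR Receive_ID = 1
    ) AS latest
    WHERE rn = 1
    ORDER BY Time
""").fetchall()
print(rows)
```

Each unordered user pair (1,2), (1,3) and (1,4) forms one partition, and `rn = 1` picks its most recent message.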
---
First, you need to get the last message sent by the user, and then `UNION` it with the last messages received by the user from each of the other users.
```
SELECT c.*
FROM chats c
INNER JOIN (
SELECT
Sent_ID, MAX(Time) AS MaxTime
FROM chats
WHERE Sent_ID = 1
GROUP BY Sent_ID
) t
ON t.Sent_ID = c.Sent_ID
AND t.MaxTime = c.Time
UNION ALL
SELECT c.*
FROM chats c
INNER JOIN (
SELECT
Sent_ID, MAX(Time) AS MaxTime
FROM chats
WHERE Receive_ID = 1
GROUP BY Sent_ID
) t
ON t.Sent_ID = c.Sent_ID
AND t.MaxTime = c.Time
```
`ONLINE DEMO`
|
Try this
```
select c.* from
( select User1id,User2id,max(time) as newtime
from
( select
case when sent_id < receive_id then sent_id else receive_id
end as User1id,
case when sent_id > receive_id then sent_id else receive_id
end as User2id,
time
from chats where (Sent_ID = 1 or Receive_ID = 1)
) temp
group by User1id,User2id
)t1
inner join chats c on
t1.newtime = c.time
```
|
SQL SELECT - Chats
|
[
"",
"sql",
"select",
""
] |
I am making an Android app for practicing driving licence theory tests. I will have about 3000 questions. A question object would have several attributes (text, category, subcategory, answers, group). I will create them and put them in the app, so the data won't ever change. When the user chooses a category, the app will go through the data, find the questions that meet the requirements (that the user selected), and put them in a list for display. What should I use to store the data/questions, XML or SQLite? Thanks in advance.
Edit:
I forgot to mention that the app won't use an internet connection. Also, I planned to make a simple Java app for entering the data. I would copy the text from the government's website (I don't have access to their database, so I have to create mine), so I thought to just give a question's image URL to the Java program and it would download and name the image automatically. Also, when entering a new question's text, it would tell me if that question already exists before I enter the other data. That would save me time; I wouldn't have to save every picture and name it myself. That is what I thought of doing with XML. Can I do this with JSON or SQLite?
|
If you do not have to perform complex queries, I would recommend storing your data in **JSON**, since it is very well integrated in Android apps using a lib such as [GSON](https://github.com/google/gson) or [Jackson](https://github.com/FasterXML/jackson).
If you don't want to rebuild/redeploy your app on every question change, you can have a small web server (Apache, nginx, Tomcat) that serves the JSON file, which the app requests on loading. That way you download the questions when your app is online, or use the cached copy otherwise.
XML is a verbose format for such a usage and does not bring much extra functionality.
To answer your last question, you can organise your code like this:
```
/**
* SOF POST http://stackoverflow.com/posts/37078005
* @author Jean-Emmanuel
* @company RIZZE
*/
public class SOF_37078005 {
@Test
public void test() {
QuestionsBean questions = new QuestionsBean();
//fill you questions
QuestionBean b=buildQuestionExemple();
questions.add(b); // success
questions.add(b); //skipped
System.out.println(questions.toJson()); //toJson
}
private QuestionBean buildQuestionExemple() {
QuestionBean b= new QuestionBean();
b.title="What is the size of your boat?";
b.pictures.add("/res/images/boatSize.jpg");
b.order= 1;
return b;
}
public class QuestionsBean{
private List<QuestionBean> list = new ArrayList<QuestionBean>();
public QuestionsBean add(QuestionBean b ){
if(b!=null && b.title!=null){
for(QuestionBean i : list){
if(i.title.compareToIgnoreCase(b.title)==0){
System.out.println("Question "+b.title+" already exists - skipped & not added");
return this;
}
}
System.out.println("Question "+b.title+" added");
list.add(b);
}
else{
System.out.println("Question was null / not added");
}
return this;
}
public String toJson() {
ObjectMapper m = new ObjectMapper();
m.configure(Feature.ALLOW_SINGLE_QUOTES, true);
String j = null;
try {
j= m.writeValueAsString(list);
} catch (JsonProcessingException e) {
e.printStackTrace();
System.out.println("JSON Format error:"+ e.getMessage());
}
return j;
}
}
public class QuestionBean{
private int order;
private String title;
private List<String> pictures= new ArrayList<String>(); //path to picture
private List<String> responseChoice = new ArrayList<String>(); //list of possible choices
public int getOrder() {
return order;
}
public void setOrder(int order) {
this.order = order;
}
public String getTitle() {
return title;
}
public void setTitle(String title) {
this.title = title;
}
public List<String> getPictures() {
return pictures;
}
public void setPictures(List<String> pictures) {
this.pictures = pictures;
}
public List<String> getResponseChoice() {
return responseChoice;
}
public void setResponseChoice(List<String> responseChoice) {
this.responseChoice = responseChoice;
}
}
}
```
**CONSOLE OUTPUT**
```
Question What is the size of your boat? added
Question What is the size of your boat? already exists - skipped & not added
[{"order":1,"title":"What is the size of your boat?","pictures":["/res/images/boatSize.jpg"],"responseChoice":[]}]
```
**GIST** :
provides you the complete working code I've made for you
<https://gist.github.com/jeorfevre/5d8cbf352784042c7a7b4975fc321466>
**To conclude, what is a good practice to work with JSON is :**
1) create a bean in order to build your json (see my example here)
2) build your json and store it in a file for example
3) Using android load your json from the file to the bean (you have it in andrdoid)
4) use the bean to build your form...etc (and not the json text file) :D
|
I would recommend a database (SQLite) as it provides superior filtering functionality over xml.
|
Android - XML or SQLite for static data
|
[
"",
"android",
"sql",
"xml",
"sqlite",
""
] |
I need help with a data extraction. I'm an SQL noob and I think I have a serious issue with my data design skills. The DB system is MySQL running on Linux.
Table A is structured like this one:
```
TYPE SUBTYPE ID
-------------------
xyz aaa 0001
xyz aab 0001
xyz aac 0001
xyz aad 0001
xyz aaa 0002
xyz aaj 0002
xyz aac 0002
xyz aav 0002
```
Table B is:
```
TYPE1 SUBTYPE1 TYPE2 SUBTYPE2
-------------------------------------
xyz aaa xyz aab
xyz aac xyz aad
```
Looking at the whole of table A, I need to extract all rows where both type and subtype are present as columns in a single table B row. Of course this condition is never met, since A.subtype can't be equal to B.subtype1 AND B.subtype2 at the same time ...
In the example the result set for id should be:
```
xyz aaa 0001
xyz aab 0001
xyz aac 0001
xyz aad 0001
```
I'm trying to use a join with 2 AND conditions, but of course I get an empty set.
EDIT:
@Barmar thank you for your support. It seems that I'm really near the final solution. Just to keep things clear, I opened this thread with a shortened and simplified data structure, just to highlight the point where I was stuck.
I thought about your solution, and it is acceptable to have both results on a single row. Now I need to reduce the execution time.
The first join takes about 2 minutes to complete and produces around 23 million rows. The second join (table B) probably takes longer.
In fact, I need 3 hours to get the final set of 10 million rows. How can we improve things a bit? I noticed that the MySQL engine is not threaded and the query only uses a single CPU. I indexed all the fields used by the joins, but I'm not sure it's the right thing to do, since I'm not a DBA.
I suppose also that relying on VARCHAR comparisons for such a big join is not the best solution. Probably I should rewrite things using numerical IDs, which should be faster.
Probably splitting things into different queries will help parallelism. Thanks for any feedback.
|
You can join Table A with itself to find all combinations of types and subtypes with the same ID, then compare them with the values in Table B.
```
SELECT t1.type AS type1, t1.subtype AS subtype1, t2.type AS type2, t2.subtype AS subtype2, t1.id
FROM TableA AS t1
JOIN TableA AS t2 ON t1.id = t2.id AND NOT (t1.type = t2.type AND t1.subtype = t2.subtype)
JOIN TableB AS b ON t1.type = b.type1 AND t1.subtype = b.subtype1 AND t2.type = b.type2 AND t2.subtype = b.subtype2
```
This returns the two rows from Table A as a single row in the result, rather than as separate rows; I hope that's OK. If you need to split them up, you can move this into a subquery and join it back with the original table A to return each row.
```
SELECT a.*
FROM TableA AS a
JOIN (the above query) AS x
ON a.id = x.id AND
((a.type = x.type1 AND a.subtype = x.subtype1)
OR
(a.type = x.type2 AND a.subtype = x.subtype2))
```
[DEMO](http://www.sqlfiddle.com/#!9/00d407/4)
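A sketch of the first (self-join) query above, checked against the question's sample data via SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (type TEXT, subtype TEXT, id TEXT);
INSERT INTO TableA VALUES
    ('xyz','aaa','0001'), ('xyz','aab','0001'),
    ('xyz','aac','0001'), ('xyz','aad','0001'),
    ('xyz','aaa','0002'), ('xyz','aaj','0002'),
    ('xyz','aac','0002'), ('xyz','aav','0002');
CREATE TABLE TableB (type1 TEXT, subtype1 TEXT, type2 TEXT, subtype2 TEXT);
INSERT INTO TableB VALUES
    ('xyz','aaa','xyz','aab'),
    ('xyz','aac','xyz','aad');
""")

# Pair up rows of TableA sharing an id, then keep only the pairs that
# match a full TableB row (both halves present for the same id).
rows = conn.execute("""
    SELECT b.type1, b.subtype1, b.type2, b.subtype2, t1.id
    FROM TableA AS t1
    JOIN TableA AS t2
      ON t1.id = t2.id
     AND NOT (t1.type = t2.type AND t1.subtype = t2.subtype)
    JOIN TableB AS b
      ON t1.type = b.type1 AND t1.subtype = b.subtype1
     AND t2.type = b.type2 AND t2.subtype = b.subtype2
    ORDER BY b.subtype1
""").fetchall()
print(rows)
```

Only id `0001` contains both halves of a `TableB` row; id `0002` has `aaa` and `aac` but lacks their partners, so it is correctly excluded.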
|
You can use `EXISTS`:
```
SELECT a.*
FROM TableA a
WHERE EXISTS(
SELECT 1
FROM TableB b
WHERE
(b.Type1 = a.Type AND b.SubType1 = a.SubType)
OR (b.Type2 = a.Type AND b.SubType2 = a.SubType)
)
AND a.ID = '0001'
```
`ONLINE DEMO`
|
SQL: Can't understand how to select from my tables
|
[
"",
"mysql",
"sql",
""
] |
I've tried a lot of techniques though I can't get the right answer.
For instance I have table with Country names and IDs. And I want to return only distinct countries that haven't used ID 3. Because if they have been mentioned on ID 2 or 1 or etc they still get displayed which I don't want.
```
SELECT DISTINCT test.country, test.id
FROM test
WHERE test.id LIKE 2
AND test.id NOT IN (SELECT DISTINCT test.id FROM test WHERE test.id LIKE 3);
```
|
```
SELECT DISTINCT c1.name
FROM countries c1
WHERE NOT EXISTS (
SELECT 1
FROM countries c2
WHERE c1.name = c2.name
AND c2.id = 3
)
```
|
I'm not sure I understood your question, but if you want distinct countries where the id is not 3, you just need this:
```
select distinct c.name from Countries c
where c.id <> 3
```
|
SQL Joining single table where value does not exist
|
[
"",
"sql",
""
] |
From my C# application I am getting the XML data as like below
```
'<NewDataSet>
<tblCFSPFSDDeclaration>
<PKCFSPFSDDeclaration>-1</PKCFSPFSDDeclaration>
<FKCFSPStatus>2</FKCFSPStatus>
<FKBatch>EDCCCL05070801</FKBatch>
<SDWCountSubmitted>112</SDWCountSubmitted>
<SDWCountAccepted>112</SDWCountAccepted>
<CFSPTraderRole>EDCCCL</CFSPTraderRole>
<TurnNo>120220002000</TurnNo>
<CFSPTraderLocation>EDCCCL001</CFSPTraderLocation>
<FSDPeriod>2016-04-01T00:00:00+01:00</FSDPeriod>
</tblCFSPFSDDeclaration>
</NewDataSet>'
```
You can see that the FSDPeriod is 01/Apr/2016
But when I run the below query after inserting the data into a temporary table I am getting the FSDPeriod value as 31/Mar/2016
```
DECLARE @ID xml
SET @ID=
'<NewDataSet>
<tblCFSPFSDDeclaration>
<PKCFSPFSDDeclaration>-1</PKCFSPFSDDeclaration>
<FKCFSPStatus>2</FKCFSPStatus>
<FKBatch>EDCCCL05070801</FKBatch>
<SDWCountSubmitted>112</SDWCountSubmitted>
<SDWCountAccepted>112</SDWCountAccepted>
<CFSPTraderRole>EDCCCL</CFSPTraderRole>
<TurnNo>120220002000</TurnNo>
<CFSPTraderLocation>EDCCCL001</CFSPTraderLocation>
<FSDPeriod>2016-04-01T00:00:00+01:00</FSDPeriod>
</tblCFSPFSDDeclaration>
</NewDataSet>'
--2016-05-07T07:49:39+01:00
DECLARE @hDoc int -- handle to the xml document
DECLARE @tblCFSPFSDDeclaration TABLE
(
PKCFSPFSDDeclaration bigint,
FKCFSPStatus int ,
FKBatch varchar(20) ,
ChiefBatchRef varchar(20),
SDWCountSubmitted int ,
SDWCountAccepted int ,
TurnNo varchar(15) ,
CFSPTraderRole varchar(12) ,
CFSPTraderLocation varchar(14) ,
FSDPeriod datetime
)
EXECUTE sp_xml_preparedocument @hDoc output, @ID -- Open the document
INSERT INTO @tblCFSPFSDDeclaration
(
[FKCFSPStatus] ,
[FKBatch] ,
[SDWCountSubmitted] ,
[SDWCountAccepted] ,
[TurnNo] ,
[CFSPTraderRole],
[CFSPTraderLocation],
[FSDPeriod]
)
SELECT [FKCFSPStatus] ,
[FKBatch] ,
[SDWCountSubmitted] ,
[SDWCountAccepted] ,
[TurnNo] ,
[CFSPTraderRole],
[CFSPTraderLocation],
[FSDPeriod]
FROM OPENXML(@hDoc,'/NewDataSet/tblCFSPFSDDeclaration', 2)
WITH
(
[FKCFSPStatus] int ,
[FKBatch] varchar(20) ,
[SDWCountSubmitted] int ,
[SDWCountAccepted] int ,
[TurnNo] varchar(15) ,
[CFSPTraderRole] varchar(12) ,
[CFSPTraderLocation] varchar(14) ,
[FSDPeriod] datetime
)
select * from @tblCFSPFSDDeclaration
```
Can anyone help to get the data as 01/Apr/2016?
|
I think this will solve your problem. Declare the `[FSDPeriod]` column as `datetimeoffset`.
Then you can cast it as you need. If you want only the date and time, try this:
```
select Convert(varchar(19), cast([FSDPeriod]as datetime),120) from @tblCFSPFSDDeclaration
```
for more conversion see here [conversion helps](http://www.sqlusa.com/bestpractices/datetimeconversion/)
|
> Can anyone help to get the data as 01/Apr/2016?
Change `[FSDPeriod] datetime` to `[FSDPeriod] varchar(19)` and you will extract the `datetime` value without the timezone information. That value will be implicitly converted to a `datetime` when you insert to the table variable.
|
Datetime field value getting reduced by 1 day while converting XML data into @table values
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I am writing a report on SQL injection attacks. I've found an example on [Owasp](https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project) as shown bellow.
Since this is an example and to me, it seems as a simple query getting a row with the specific ID, does it do anything else or my assumption is correct?
```
String query = "SELECT * FROM accounts WHERE custID='" +
request.getParameter("id") + "'";
// Since this is an online example i don't know what getParameter("id") method does.
```
|
> to me it seems as a simple query getting a row with specific ID
That's the magic of injection. The query **should** only get a row that fits certain criteria coming from a request (like a GET or POST from HTML, for example).
So `request.getParameter("id")` provides a parameter submitted by the user (or a very bad guy).
Usually whoever wrote that piece of code expected something like this:
`id = 12`
which would result in
```
SELECT * FROM accounts WHERE custID='12'
```
Now imagine what happens if the user (a bad one in this case) sends this instead:
```
id = 0'; DROP TABLE accounts; --
```
This would execute as
```
SELECT * FROM accounts WHERE custID='0'; DROP TABLE accounts; --'
```
Step-by-step:
1. Make sure the intended query executes without error (`0`)
2. End the query before the intended point (`';`)
3. Inject your malicous code (`DROP TABLE accounts;`)
4. Make sure everything that is left of the original query is treated as a comment (`--`)
The problem in the OWASP example isn't the query itself, but the fact that parameters that come from 'outside' (`request.getParameter("id")`) are used to generate a query, without escaping any potential control characters.
This style of writing code basically allows any user to execute code on your SQL-Server.
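The standard defense is to bind user input as a parameter instead of splicing it into the SQL string. A sketch via SQLite from Python, using a single-statement payload (`' OR '1'='1`) so both variants actually execute:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (custID TEXT, balance INTEGER);
INSERT INTO accounts VALUES ('12', 100), ('13', 200), ('14', 300);
""")

user_input = "' OR '1'='1"   # classic single-statement injection payload

# Vulnerable: concatenation lets the payload rewrite the WHERE clause,
# turning it into  custID='' OR '1'='1'  so every account leaks.
leaked = conn.execute(
    "SELECT * FROM accounts WHERE custID='" + user_input + "'").fetchall()

# Safe: the driver binds the value, so the payload is just an odd custID.
safe = conn.execute(
    "SELECT * FROM accounts WHERE custID=?", (user_input,)).fetchall()

print(len(leaked), len(safe))
```

With binding, the query structure can no longer be rewritten by the input; the same idea applies to `PreparedStatement` in the Java snippet above.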
|
The problem with this query is that the SQL is created dynamically. `Request.getParameter` is probably just a function that returns the id of the row for the specific web request.
But if the webpage allows filling this parameter through a text box, or the function is called directly from JavaScript, any value can be set in id.
This could contain any SQL statement which, given sufficient permissions, could even be `DROP DATABASE`.
|
SQL Injection Query
|
[
"",
"sql",
"sql-injection",
""
] |
**Update**: Do not provide an answer that uses `NOT EXISTS`. According to [MariaDB](http://www.techonthenet.com/mariadb/exists.php) "SQL statements that use the EXISTS condition in MariaDB are very inefficient since the sub-query is RE-RUN for EVERY row in the outer query's table." This query will be used ***a lot***, so it needs to be efficient.
I have two tables `following`:
```
CREATE TABLE `following` (
`follower` int(1) unsigned NOT NULL,
`followee` int(1) unsigned NOT NULL,
PRIMARY KEY (`follower`,`followee`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
and `association_record`:
```
CREATE TABLE `association_record` (
`user_id` int(1) unsigned NOT NULL,
`post_id` int(1) unsigned NOT NULL,
`answer_id` int(1) unsigned NOT NULL,
`date_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`user_id`,`post_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
What I want are the `follower`s of `followee` '5' who do not have an `association_record` with post '88'. The below SQL is what I came up with from reading other posts, but it doesn't get me the desired results:
```
select f.follower
from following f
left outer join association_record a
on f.follower = a.user_id
where f.followee = 5
and a.post_id = 88
and a.user_id is null
```
|
Here was the query that did it:
```
SELECT f.follower
FROM following f
LEFT OUTER JOIN association_record a
ON f.follower = a.user_id
AND a.post_id = 88
WHERE f.followee = 5
AND a.user_id is null
```
I forgot to post the solution to my question after I solved it, and now, a month later, I ended up with a similar problem but without any reference to the original solution, since I didn't think I'd need it anymore.
I almost had to resolve the whole issue again from scratch, which would have been tough given that I never understood how the solution worked. Luckily, MySQL Workbench keeps a log of all the queries run from it, and trying queries to answer this question was one of the few times I've used it.
Moral of the story: don't forget to post your solution; you might be doing it for yourself.
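The crucial difference between the failed query in the question and this one is where `a.post_id = 88` sits. In the `WHERE` clause it discards exactly the NULL-extended rows the anti-join needs; in the `ON` clause it is applied before the NULL-extension. A sketch with made-up rows via SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE following (follower INTEGER, followee INTEGER);
CREATE TABLE association_record (user_id INTEGER, post_id INTEGER);
-- follower 10 answered post 88, 11 answered a different post, 12 answered nothing
INSERT INTO following VALUES (10, 5), (11, 5), (12, 5);
INSERT INTO association_record VALUES (10, 88), (11, 99);
""")

rows = conn.execute("""
    SELECT f.follower
    FROM following f
    LEFT JOIN association_record a
           ON f.follower = a.user_id
          AND a.post_id = 88          -- filter applied BEFORE NULL-extension
    WHERE f.followee = 5
      AND a.user_id IS NULL           -- keep only the unmatched followers
    ORDER BY f.follower
""").fetchall()
print(rows)
```

Followers 11 and 12 come back (no post-88 record), while 10 is excluded; moving `a.post_id = 88` into the `WHERE` clause would return no rows at all.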
|
```
select f.follower
from following f, association_record r
where f.follower = r.user_id
  and f.followee = 5
  and r.post_id = 88
  and r.user_id is null
```
try this
|
MariaDB: Select the fields from one column in one table that are not in a subset of another column from another table
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |
```
Dim sql As String = "SELECT * FROM old where inputdate BETWEEN '" + DateTimePicker2.Value.ToShortDateString() + "' AND '" + DateTimePicker3.Value.ToShortDateString() + "';"
Dim dataadapter As New SqlDataAdapter(sql, connection)
Dim ds As New DataSet()
connection.Open()
dataadapter.Fill(ds, "old_table")
connection.Close()
```
I have 2 DateTimePickers of format Short. In SQL, the column name "inputdate" of DataType: date.
When I choose dates with days <= 12, everything works fine. When I choose dates with days > 12, I get this error. I can see it's a matter of day and month, but I still can't find the solution.
Any help is really appreciated. The error: "Conversion failed when converting date and/or time from character string."
|
The basic solution is that you have to provide the date in either MM/dd/yyyy or yyyy-MM-dd format in the MS SQL query.
So, before passing the date to the query, convert it into either MM/dd/yyyy or yyyy-MM-dd format.
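One note on the formats: for SQL Server's `date` type (the question's `inputdate` column), the `yyyy-MM-dd` form is parsed the same way regardless of the session's `DATEFORMAT`/language settings, whereas `dd/MM` vs `MM/dd` strings flip meaning once the day exceeds 12, which is exactly the symptom described. (For the older `datetime` type, the fully setting-proof literal is `yyyyMMdd`.) A sketch of the formatting in Python; the VB.NET equivalent would be `DateTimePicker2.Value.ToString("yyyy-MM-dd")`:

```python
from datetime import datetime

picked = datetime(2016, 5, 21)        # day > 12: ambiguous in d/m vs m/d formats

short = picked.strftime("%d/%m/%Y")   # '21/05/2016': a US-set server tries month 21 and fails
iso = picked.strftime("%Y-%m-%d")     # '2016-05-21': unambiguous

# Round-tripping the ISO form shows it parses back to the same date.
assert datetime.strptime(iso, "%Y-%m-%d") == picked
print(short, iso)
```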
|
I advise you to use a `Parameter` of type `SqlDbType.DateTime` and then pass the `DateTime` value directly to the parameter (no conversion needed); this also avoids SQL injection, like this:
```
Dim sql As String = "SELECT * FROM old where inputdate BETWEEN @startDate AND @endDate;"
Dim cmd As New SqlCommand(sql, connection)
cmd.Parameters.Add("@startDate", SqlDbType.DateTime).Value = DateTimePicker2.Value
cmd.Parameters.Add("@endDate", SqlDbType.DateTime).Value = DateTimePicker3.Value
Dim dataadapter As New SqlDataAdapter(cmd)
Dim ds As New DataSet()
connection.Open()
dataadapter.Fill(ds, "old_table")
connection.Close()
```
|
Converting Date to string with DateTimePicker
|
[
"",
"sql",
"vb.net",
"datetimepicker",
"datetime-format",
""
] |
Let’s assume I have two tables, A and B, both with an ID column and a foreign key (value).
I want a select query that returns only the matching records (those with the same data in the `idMetadata` and `Value` columns), excluding those that don't meet the condition, sorted by the `Value` column of table B.
Table A
```
SELECT *
FROM (VALUES
(15, 1),
(16, 2),
(17, 3)
) as t(idMetadata, [Value])
```
Table B
```
SELECT *
FROM (VALUES
(185442, 22008, 16, 6 ,2),
(187778, 22269, 16, 6 ,2),
(211260, 24925, 16, 6 ,2),
(251476, 29431, 15, 4 ,1),
(251477, 29431, 16, 5 ,2),
(251478, 29431, 17, 6 ,3)
) as t(idDet, idEnc, idMetadata, OrderValue, [Value])
```
The expected result is
[](https://i.stack.imgur.com/Bhal2.jpg)
Can this be achieved by a single query? Or do I have to create a CTE or subqueries ?
EDIT: Sorry, I forgot to mention another condition for the query: in Table B, the records should have the same idEnc and the OrderValue column should be consecutive. That's why the expected result also has the same idEnc and the OrderValue is 4, 5 & 6.
|
That will give you desired result:
```
SELECT idDet,
idEnc,
idMetadata,
OrderValue,
[Value]
FROM (
SELECT b.idDet,
b.idEnc,
b.idMetadata,
b.OrderValue,
b.[Value],
ROW_NUMBER() OVER (PARTITION BY b.idEnc ORDER BY b.OrderValue) as rn,
DENSE_RANK() OVER (ORDER BY a.[Value]) as dr
FROM TableB b
INNER JOIN TableA a
ON b.idMetadata = a.idMetadata AND b.[Value] = a.[Value]
) as t
WHERE rn = dr
```
|
You can use CTE notation
```
;with cte as(
SELECT B.*,row_number() over(partition by b.idmetadata order by b.value,b.iddet desc) rn
FROM (VALUES
(185442, 22008, 16, 6 ,2),
(187778, 22269, 16, 6 ,2),
(211260, 24925, 16, 6 ,2),
(251476, 29431, 15, 4 ,1),
(251477, 29431, 16, 5 ,2),
(251478, 29431, 17, 6 ,3)
) as B(idDet, idEnc, idMetadata, OrderValue, [Value])
inner join
(VALUES
(15, 1),
(16, 2),
(17, 3)
) as A(idMetadata, [Value]) on A.idMetadata=B.idMetadata
)
select * from cte
where rn=1
```
or without CTE:
```
select * from (
SELECT B.*,row_number() over(partition by b.idmetadata order by b.value,b.iddet desc) rn
FROM (VALUES
(185442, 22008, 16, 6 ,2),
(187778, 22269, 16, 6 ,2),
(211260, 24925, 16, 6 ,2),
(251476, 29431, 15, 4 ,1),
(251477, 29431, 16, 5 ,2),
(251478, 29431, 17, 6 ,3)
) as B(idDet, idEnc, idMetadata, OrderValue, [Value])
inner join
(VALUES
(15, 1),
(16, 2),
(17, 3)
) as A(idMetadata, [Value]) on A.idMetadata=B.idMetadata
) t
where rn=1
```
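The `ROW_NUMBER() OVER (PARTITION BY ...)` dedup step used above can be illustrated on its own; this is a small Python/SQLite sketch (SQLite 3.25+ for window functions, with a trimmed-down hypothetical version of Table B):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (idDet INT, idMetadata INT, value INT)")
conn.executemany("INSERT INTO b VALUES (?, ?, ?)",
                 [(185442, 16, 2), (187778, 16, 2),
                  (251476, 15, 1), (251478, 17, 3)])
# Keep one row per idMetadata: the one with the highest idDet.
rows = conn.execute("""
    SELECT idDet, idMetadata FROM (
        SELECT idDet, idMetadata,
               ROW_NUMBER() OVER (PARTITION BY idMetadata
                                  ORDER BY idDet DESC) AS rn
        FROM b)
    WHERE rn = 1
    ORDER BY idMetadata""").fetchall()
```

Within each `idMetadata` partition only the row numbered 1 survives, which is the core of both answers.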
|
SQL Server : matching records from 2 tables with sorting
|
[
"",
"sql",
"sql-server",
""
] |
I have a table called associate\_ratings with the below structure:
```
id int(11) NO PRI auto_increment
associate varchar(10) NO
skill_id int(11) NO MUL
rating int(11) NO
updated_time datetime NO
```
This table holds the skills (`skill_id`) of each associate and their corresponding `rating` in that skill.
The rating column can take the values (1, 2, 3).
I want to get, for each skill, how many associates have each particular rating; please find the desired output below:
```
Skill_id Rating1_count Rating2_count Rating3_count
Java 2 1 4
C# 3 2 2
```
This says that in Java there are 2 associates with rating 1, 1 associate with rating 2 and 4 associates with rating 3.
I tried the below query, but the output is not in the format I expect:
```
SELECT skill_id, rating, count(*) FROM associate_ratings a
WHERE updated_time = (
SELECT max(updated_time)
FROM skill_set.associate_ratings b
WHERE a.associate = b.associate
) GROUP BY a.skill_id, a.rating order by a.skill_id, a.rating;
```
Could you please let me know how to get the output in the format I want?
|
Use a derived table and CASE:
```
SELECT skill_id, sum(rating_1), sum(rating_2), sum(rating_3)
FROM (
SELECT a.skill_id as skill_id,
case a.rating when '1' then 1 else 0 end as rating_1,
case a.rating when '2' then 1 else 0 end as rating_2,
case a.rating when '3' then 1 else 0 end as rating_3
FROM associate_ratings a
WHERE updated_time = (
SELECT max(updated_time)
FROM skill_set.associate_ratings b
WHERE a.associate = b.associate
) ) as t
GROUP BY skill_id
ORDER BY skill_id;
```
|
```
select Skill_id ,
count(case when rating = 1 then 1 else null end) as Rating1_count ,
count(case when rating = 2 then 1 else null end) as Rating2_count ,
count(case when rating = 3 then 1 else null end) as Rating3_count
from associate_ratings
group by Skill_id
```
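This conditional-aggregation pivot (one `COUNT(CASE ...)` per rating value) is portable across engines. A minimal runnable sketch in Python/SQLite, with made-up skill names instead of ids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (skill TEXT, rating INT)")
conn.executemany("INSERT INTO ratings VALUES (?, ?)",
                 [("Java", 1), ("Java", 1), ("Java", 2),
                  ("Java", 3), ("C#", 1), ("C#", 3)])
# COUNT ignores NULL, so each CASE counts only the matching rating.
rows = conn.execute("""
    SELECT skill,
           COUNT(CASE WHEN rating = 1 THEN 1 END) AS r1,
           COUNT(CASE WHEN rating = 2 THEN 1 END) AS r2,
           COUNT(CASE WHEN rating = 3 THEN 1 END) AS r3
    FROM ratings
    GROUP BY skill
    ORDER BY skill""").fetchall()
```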
|
How display the count of associates at each rating?
|
[
"",
"mysql",
"sql",
""
] |
I need to sum the results of three different queries. Here's what I'm doing but the result is incorrect. If I run each query separately they are fine, but running them together I get different results.
```
select 'Inforce TIV',
ISNULL(SUM(a.TSI), 0)
+
ISNULL(SUM(b.TSI), 0)
+
ISNULL(SUM(c.TSI), 0)
from
(
select ISNULL(SUM(p.[Suma Asegurada Inforce]), 0) as TSI
from temp_portafolio_cy p
--where p.fec_emi between @varFechaDesde and @varFechaHasta
where p.fec_emi between '20160101' and '20160131'
and p.sn_bancaseguros = 0
group by p.cod_suc, p.cod_ramo_comercial, p.Poliza, p.Item
) a,
(
select ISNULL(p.[Suma Asegurada Inforce], 0) as TSI
from temp_portafolio_cy p
--where p.fec_emi between @varFechaDesde and @varFechaHasta
where p.fec_emi between '20160101' and '20160131'
and p.sn_bancaseguros = -1
and not (
(p.cod_suc = 1 and p.cod_ramo_comercial = 34 and p.Poliza = 51385)
or (p.cod_suc = 1 and p.cod_ramo_comercial = 26 and p.Poliza = 53231)
)
group by p.cod_suc, p.cod_ramo_comercial, p.Poliza, p.Item, p.[Suma Asegurada Inforce]
) b,
(
select ISNULL(p.[Suma Asegurada], 0) as TSI
from temp_portafolio_cy p
--where p.fec_emi between @varFechaDesde and @varFechaHasta
where p.fec_emi between '20160101' and '20160131'
and p.sn_bancaseguros = -1
and (
(p.cod_suc = 1 and p.cod_ramo_comercial = 34 and p.Poliza = 51385)
or (p.cod_suc = 1 and p.cod_ramo_comercial = 26 and p.Poliza = 53231)
)
group by p.cod_suc, p.cod_ramo_comercial, p.Poliza, p.Item, p.[Suma Asegurada]
) c
```
|
You're doing a cartesian join of the 3 sets of results.
You have 2 options:
1. Remove all the "group by" statements
2. Include the columns your grouping by in each of the 3 selects statements, and then add a where clause when joining them together (e.g. where a.code\_suc = b.code\_suc and b.code\_suc = c.code\_suc... etc.)
I would recommend option 1, as the "group by"s aren't really necessary in the example you gave and joining with where clauses will give you an implicit inner join
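The cartesian inflation is easy to see on toy data; here is a Python/SQLite sketch (made-up tables) showing how comma-joining two result sets multiplies each side's sum by the other side's row count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (tsi INT)")
conn.execute("CREATE TABLE b (tsi INT)")
conn.executemany("INSERT INTO a VALUES (?)", [(10,), (20,)])
conn.executemany("INSERT INTO b VALUES (?)", [(1,), (2,), (3,)])

# Comma join produces 2 x 3 = 6 rows, so SUM(a.tsi) is tripled and
# SUM(b.tsi) is doubled: (10+20)*3 + (1+2+3)*2 = 102, not 36.
inflated = conn.execute(
    "SELECT SUM(a.tsi) + SUM(b.tsi) FROM a, b").fetchone()[0]
# Summing each set in its own scalar subquery gives the intended total.
correct = conn.execute(
    "SELECT (SELECT SUM(tsi) FROM a) + (SELECT SUM(tsi) FROM b)").fetchone()[0]
```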
|
The `group by` in the sub queries are probably causing you to get cartesian join on the results, i.e. causing wrong sum through multiplication. Maybe you could write like this?
```
select 'Inforce TIV',
ISNULL(SUM(a.TSI), 0)
+
ISNULL(SUM(b.TSI), 0)
+
ISNULL(SUM(c.TSI), 0)
from
(
select ISNULL(SUM(p.[Suma Asegurada Inforce]), 0) as TSI
from temp_portafolio_cy p
--where p.fec_emi between @varFechaDesde and @varFechaHasta
where p.fec_emi between '20160101' and '20160131'
and p.sn_bancaseguros = 0
) a,
(
select ISNULL(p.[Suma Asegurada Inforce], 0) as TSI
from temp_portafolio_cy p
--where p.fec_emi between @varFechaDesde and @varFechaHasta
where p.fec_emi between '20160101' and '20160131'
and p.sn_bancaseguros = -1
and not (
(p.cod_suc = 1 and p.cod_ramo_comercial = 34 and p.Poliza = 51385)
or (p.cod_suc = 1 and p.cod_ramo_comercial = 26 and p.Poliza = 53231)
)
) b,
(
select ISNULL(p.[Suma Asegurada], 0) as TSI
from temp_portafolio_cy p
--where p.fec_emi between @varFechaDesde and @varFechaHasta
where p.fec_emi between '20160101' and '20160131'
and p.sn_bancaseguros = -1
and (
(p.cod_suc = 1 and p.cod_ramo_comercial = 34 and p.Poliza = 51385)
or (p.cod_suc = 1 and p.cod_ramo_comercial = 26 and p.Poliza = 53231)
)
) c
```
|
Sum the results from different queries
|
[
"",
"sql",
"sql-server",
""
] |
I have a table which has the following values:
```
ID | Name
---------------
1 | Anavaras
2 | Lamurep
```
I need a query which outputs the value which doesn't have entry in the table.
For e.g:
If my where clause contains `id in('1','2','3','4')`, should produce output has
```
3 |
4 |
```
for the above entries in the table.
|
You would put this into a "derived table" and use `left join` or a similar construct:
```
select v.id
from (values(1), (2), (3), (4)) v(id) left join
t
on t.id = v.id
where t.id is null;
```
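The derived-table anti-join pattern works anywhere `VALUES` can feed a query. A Python/SQLite sketch (using a CTE, since SQLite spells the derived `VALUES` table that way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "Anavaras"), (2, "Lamurep")])
# Left-join the candidate ids to the table; NULL on the right side
# means the id is missing from the table.
rows = conn.execute("""
    WITH v(id) AS (VALUES (1), (2), (3), (4))
    SELECT v.id FROM v
    LEFT JOIN t ON t.id = v.id
    WHERE t.id IS NULL
    ORDER BY v.id""").fetchall()
```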
|
First you need to split your `in` list into a table. A sample split function is here:
```
CREATE FUNCTION [dbo].[split]
(
@str varchar(max),
@sep char
)
RETURNS
@ids TABLE
(
id varchar(20)
)
AS
BEGIN
declare @pos int,@id varchar(20)
while len(@str)>0
begin
select @pos = charindex(@sep,@str + @sep)
select @id = LEFT(@str,@pos-1),@str = SUBSTRING(@str,@pos+1,10000000)
insert @ids(id) values(@id)
end
RETURN
END
```
Then you can use this function.
```
select id from dbo.split('1,2,3,4,5',',') ids
left join myTable t on t.id=ids.id
where t.id is null
-- if table ID is varchar then '''1'',''2'',''3'''
```
|
SQL Server : searching value doesn't have entry in table
|
[
"",
"sql",
"sql-server",
""
] |
A colleague wrote this piece of SQL (SQL Server 2012):
```
SELECT
a.account_id
,(SELECT SUM(e.amount)
FROM event e
WHERE e.event_type_id <> 47
AND e.master_comm_id = (SELECT c.comm_id
FROM comm c
WHERE c.item_id = a.item_id
AND c.comp_type_id = 20
AND c.comm_type_id = 485))
FROM account a
```
However, there are cases where there are multiple master\_comm\_ids against an event, and so the query fails (*Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.*). I only want the first master\_comm\_id, ie. the MIN one.
I have looked at various similar questions and attempted various things to achieve this (using MIN or ROW\_NUMBER and rearranging the query to use joins etc) but I must be missing something obvious as everything has either resulted in SQL errors or the wrong data or not fixed the issue.
Can anyone help me just find the min master\_comm\_id to then use in the subquery?
|
TOP 1 works, but only if the data is ordered (see the SO link below).
You mentioned you tried MIN, but where? This may work (you are very close):
```
SELECT
a.account_id
,(SELECT SUM(e.amount)
FROM event e
WHERE e.event_type_id <> 47
AND e.master_comm_id = (SELECT MIN(c.comm_id)
FROM comm c
WHERE c.item_id = a.item_id
AND c.comp_type_id = 20
AND c.comm_type_id = 485))
FROM account a
```
[MAX vs Top 1 - which is better?](https://stackoverflow.com/questions/7198274/max-vs-top-1-which-is-better)
|
Try this one. Since I don't have sample data, I cannot check, but it looks obvious - select only top 1, while the select list is ordered by the ID ascending...
```
SELECT
a.account_id
,(SELECT SUM(e.amount)
FROM event e
WHERE e.event_type_id <> 47
AND e.master_comm_id = (SELECT top 1 c.comm_id
FROM comm c
WHERE c.item_id = a.item_id
AND c.comp_type_id = 20
AND c.comm_type_id = 485
ORDER BY C.COMM_ID ASC))
FROM account a
```
|
Finding a min value to use in a subquery
|
[
"",
"sql",
"sql-server",
""
] |
I created the following tables:
```
create table people
(
ID varchar(10),
name varchar(35),
CONSTRAINT pk_ID PRIMARY KEY (ID)
);
create table numbers
(
code varchar(10),
ID varchar(10),
number numeric,
CONSTRAINT pk_code PRIMARY KEY (code)
);
```
I inserted the following data:
```
insert into people(ID, name)
values('fdx1','Peter');
insert into people(ID, name)
values('fdx2','Alice');
insert into people(ID, name)
values('fdx3','Louis');
insert into numbers(code, ID, number)
values('001','fdx1',1);
insert into numbers(code, ID, number)
values('002','fdx1',1);
insert into numbers(code, ID, number)
values('003','fdx2',2);
insert into numbers(code, ID, number)
values('004','fdx2',3);
insert into numbers(code, ID, number)
values('005','fdx3',4);
insert into numbers(code, ID, number)
values('006','fdx3',4);
```
My problem is: how do I select the people whose rows all have the same number, for example "Peter" and "Louis"?
|
By "same number" you mean that there is only one number in `numbers` for the person. You can do this with `group by` and `having`:
```
select n.id
from numbers n
group by n.id
having min(number) = max(number);
```
Note: this doesn't take `NULL` into account. Your question doesn't specify what to do if one of the values is `NULL`.
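The `HAVING MIN(number) = MAX(number)` trick runs on any engine with standard aggregates; here is a Python/SQLite sketch on the question's own data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE numbers (id TEXT, number INT)")
conn.executemany("INSERT INTO numbers VALUES (?, ?)",
                 [("fdx1", 1), ("fdx1", 1), ("fdx2", 2),
                  ("fdx2", 3), ("fdx3", 4), ("fdx3", 4)])
# If the smallest and largest number per person are equal,
# every row for that person carries the same number.
rows = conn.execute("""
    SELECT id FROM numbers
    GROUP BY id
    HAVING MIN(number) = MAX(number)
    ORDER BY id""").fetchall()
```

`fdx1` (Peter) and `fdx3` (Louis) qualify; `fdx2` (Alice) has two different numbers and is filtered out.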
|
If I understand correctly you want to see out of the 2 rows for each user in numbers who has the same number twice? If so you could do.
```
SELECT p.ID, p.name, n.number, COUNT(*) as num
FROM people as p
INNER JOIN numbers as n on n.ID = p.ID
GROUP BY p.ID, n.number
HAVING num > 1
```
This would return a list of people who have the same number in numbers more than once.
In case this was not what you were looking for some other things you could do are:
To return a list of people for a specific number you could do the following
```
SELECT p.ID, p.name
FROM people as p
INNER JOIN numbers as n on n.ID = p.ID
WHERE n.number = [[X]]
```
You would replace [[X]] with the number e.g. 1 this would then return a list of people who are linked to number 1 in this case Peter
If you wanted a list of all people and their associated number you could do:
```
SELECT p.ID, p.name, n.number
FROM people as p
INNER JOIN numbers as n on n.ID = p.ID
ORDER BY n.number
```
This would return the users ID, Name and their associated number.
|
How to select a specific person in MySQL?
|
[
"",
"mysql",
"sql",
""
] |
It is asked many times, but not this way.
I am on SQL Server 2008, and there is no `STRING_SPLIT` function (like in 2016).
A query returns the following row (a single example row). What you see below in bold is actually a single field: one `varchar` column holds it all.
```
Appple|10|admin|845687|Apr|26|11:32:29|2016|AwesomeApplication.zip
```
which I'd like to be split by the pipe `|` character.
I cannot write a CTE for this, or a custom function.
I have to extract the individual pipe delimited elements, into different columns, within one select statement using the built in string functions like `CHARINDEX`, `PATINDEX`.
Does anybody have any idea?
|
```
DECLARE @StringList varchar(max) = 'Appple|10|admin|845687|Apr|26|11:32:29|2016|AwesomeApplication.zip' -- the pipe-delimited input
DECLARE @Result Table(Value varchar(50))
DECLARE @x XML
SELECT @X = CAST('<A>' + REPLACE(@StringList, '|', '</A><A>') + '</A>' AS XML)
INSERT INTO @Result
SELECT t.value('.', 'varchar(50)') as inVal
FROM @X.nodes('/A') AS x(t)
```
This will create a table with one column (Value). Each split value from your pipe-delimited string will create a new record in this table. Then you can join to it however you'd like. Please let me know if this is unclear or if it doesn't work on SQL 2008.
You can increase the size of the varchar, if needed - and you can modify the query to split on different values (comma-delimited, etc.).
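Since the question asks for the built-in string functions (`CHARINDEX`/`SUBSTRING`), here is the same peel-off-the-first-field idea sketched with SQLite's `instr`/`substr` from Python (the row value comes from the question; the two-field split is illustrative, repeat it once per column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = "Appple|10|admin|845687|Apr"
# instr/substr play the role of CHARINDEX/SUBSTRING:
# take everything before the first '|', and everything after it.
first, rest = conn.execute("""
    SELECT substr(?1, 1, instr(?1, '|') - 1),
           substr(?1, instr(?1, '|') + 1)""", (row,)).fetchone()
```

Chaining this expression (applying it again to `rest`) extracts each pipe-delimited element in turn within one select.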
|
```
CREATE FUNCTION [dbo].[split]
(
@string varchar(max),
@separator varchar(1) -- longer separator is also possible
)
RETURNS
@result TABLE (keyword varchar(max) )
AS
BEGIN
declare @pos int=0, @res varchar(100)
while len(@string)>0
begin
set @pos=CHARINDEX(@separator, @string+@separator,0)
select @res=left(@string,@pos-1),@string=SUBSTRING(@string,@pos+1,len(@string))
insert @result values(@res)
end
RETURN
END
```
|
Mimic STRING_SPLIT without custom function in SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
The following sample returns the affected ROWCOUNT. Is there a way to return the affected IDENTITY values?
```
UPDATE HumanResources.Employee
SET JobTitle = N'Executive'
WHERE NationalIDNumber = 123456789
IF @@ROWCOUNT = 0
PRINT 'Warning: No rows were updated';
```
|
```
UPDATE HumanResources.Employee
SET JobTitle = N'Executive'
OUTPUT INSERTED.PrimaryKeyID
WHERE NationalIDNumber = 123456789
```
**Here I assumed Id name is PrimaryKeyID.**
|
```
UPDATE HumanResources.Employee
SET JobTitle = N'Executive'
OUTPUT INSERTED.YourIdentityColumn
WHERE NationalIDNumber = 123456789
```
[OUTPUT](https://msdn.microsoft.com/en-us/library/ms177564.aspx)
|
Retrieving affected records through T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have the following code
```
IF EXISTS(SELECT @FunderID
FROM dbo.FunderCharityTbl
WHERE @ContributionAmount > ( ( sum(TotalContributions) / 100 ) *10 ))
BEGIN
RAISERROR ('Sorry contribution is refused limit is breached', 16,1)
RETURN 99
END
```
And I am getting the following error
> Msg 147, Level 15, State 1, Procedure InsertContribution, Line 33
> An aggregate may not appear in the WHERE clause unless it is in a
> subquery contained in a HAVING clause or a select list, and the column
> being aggregated is an outer reference.
What I am trying to do is check if the `@contributionAmount` (entered amount) is greater than 10% of all the previous contributions made by the person with the entered `funderID` and if it is send an error message
I am relatively new to SQL and am wondering why you can't write the `If Exists` statement the way I did, and what I need to do to fix this error and have my procedure behave as I intended.
|
You can't use a Aggregate function in `WHERE` clause, but you can use it in `HAVING` clause
```
IF EXISTS( SELECT 1 --@FunderID
FROM dbo.FunderCharityTbl
HAVING @ContributionAmount > ((sum(TotalContributions)/100)*10)
)
```
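Moving the aggregate comparison into `HAVING` is portable; here is a Python/SQLite sketch of the same limit check (the table, amounts and 10% rule are hypothetical stand-ins for the question's scenario):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contrib (funder INT, total REAL)")
conn.executemany("INSERT INTO contrib VALUES (?, ?)",
                 [(1, 100.0), (1, 150.0), (1, 250.0)])  # 500 contributed so far

# A row comes back only when the new amount exceeds 10% of the
# funder's summed contributions (the EXISTS condition).
q = """SELECT funder FROM contrib WHERE funder = ?
       GROUP BY funder
       HAVING ? > SUM(total) * 0.10"""
breached = conn.execute(q, (1, 60.0)).fetchone()  # 60 > 50: limit breached
ok = conn.execute(q, (1, 40.0)).fetchone()        # 40 > 50 is false: no row
```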
|
You have to use `GROUP BY` and `HAVING` something like:
```
IF EXISTS(
SELECT @FunderID
FROM dbo.FunderCharityTbl
GROUP BY @FunderID
HAVING @ContributionAmount > ((sum(TotalContributions)/100)*10)
)
```
|
SQL Server : getting an error "Msg147, level15" Why do I get this error and how to fix it
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have two methods for a solution, but both are highly inefficient for values on the order of 10^8 and greater.
**Method 1**
```
select 100 + rownum - 1 from dual connect by level <= (200 - 100 + 1)
```
**Method 2**
```
select rownum + 100 - 1
from (select 1 from dual group by cube(1, 2, 3, 4, 5, 6, 7, 8, 9))
where rownum < (200 - 100 + 1)
```
but the first method fails when the max value is 100,000,000 and the second method takes a lot of time to process.
Please suggest an efficient method.
I could think of sequences, but I suppose the time cost will be higher.
**update**
*Error in First method*
> ORA-30009: Not enough memory for CONNECT BY operation
|
For that many rows a pipelined function would probably be the best solution:
```
create or replace TYPE t_numbers IS TABLE OF NUMBER;
/
create or replace function generate_series(p_min integer, p_max integer)
return t_numbers pipelined
as
begin
for i in p_min..p_max loop
pipe row (i);
end loop;
return;
end;
/
```
A simple:
```
select count(*)
from table(generate_series(1,100000000));
```
This takes about 30 seconds on my laptop
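For comparison, other engines generate number series with a recursive CTE instead of a pipelined function; a small Python/SQLite sketch of the same range generation (short range here just to keep it fast):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Recursive CTE counterpart of generate_series(100, 110).
rows = conn.execute("""
    WITH RECURSIVE series(n) AS (
        SELECT 100
        UNION ALL
        SELECT n + 1 FROM series WHERE n < 110)
    SELECT COUNT(*), MIN(n), MAX(n) FROM series""").fetchone()
```

The same shape scales to large ranges, though for 10^8 rows a pipelined or set-based generator will still beat row-at-a-time recursion.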
|
Some other options:
**Option 1 - Generate a collection**:
```
CREATE TYPE intlist IS TABLE OF NUMBER(10,0);
/
CREATE FUNCTION list_between(
min_value NUMBER,
max_value NUMBER
) RETURN intlist DETERMINISTIC
AS
o_lst intlist := intlist();
BEGIN
IF ( min_value <= max_value ) THEN
o_lst.EXTEND( max_value - min_value + 1 );
FOR i IN 0 .. max_value - min_value LOOP
o_lst( i ) := i + min_value;
END LOOP;
END IF;
RETURN o_lst;
END;
/
SELECT COLUMN_VALUE
FROM TABLE( list_between( 123456789, 987654321 ) );
```
**Option 2 - Use a recursive sub-query factoring clause**:
```
WITH sqfc ( min_value, max_value ) AS (
SELECT 123456789, 987654321 FROM DUAL
UNION ALL
SELECT min_value + 1, max_value
FROM sqfc
WHERE min_value < max_value
)
SELECT min_value AS value
FROM sqfc;
```
**Option 3 - Use a pipelined function**:
```
CREATE FUNCTION pipelined_list_between (
min_value NUMBER,
max_value NUMBER
) RETURN intlist DETERMINISTIC PIPELINED
AS
BEGIN
FOR i IN min_value .. max_value LOOP
PIPE ROW ( i );
END LOOP;
RETURN;
END;
/
SELECT COLUMN_VALUE
FROM TABLE( pipelined_list_between( 123456789, 987654321 ) );
```
|
Generate numbers between min_value and max_value oracle
|
[
"",
"sql",
"oracle",
"performance",
""
] |
I have an SQL table with some data like this, it is sorted by date:
```
+----------+------+
| Date | Col2 |
+----------+------+
| 12:00:01 | a |
| 12:00:02 | a |
| 12:00:03 | b |
| 12:00:04 | b |
| 12:00:05 | c |
| 12:00:06 | c |
| 12:00:07 | a |
| 12:00:08 | a |
+----------+------+
```
So, I want my select result to be the following:
```
+----------+------+
| Date | Col2 |
+----------+------+
| 12:00:01 | a |
| 12:00:03 | b |
| 12:00:05 | c |
| 12:00:07 | a |
+----------+------+
```
I have used the `distinct` clause but it removes the last two rows with `Col2 = 'a'`
|
You can use `lag` (SQL Server 2012+) to get the value in the previous row and then compare it with the current row value. If they are equal assign them to one group (`1` here) and a different group (`0` here) otherwise. Finally select the required rows.
```
select dt,col2
from (
select dt,col2,
case when lag(col2,1,0) over(order by dt) = col2 then 1 else 0 end as somecol
from t) x
where somecol=0
```
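The same `LAG`-and-compare dedup can be run end to end; here is a Python/SQLite sketch on the question's data (SQLite 3.25+ for window functions, keeping a row only when it differs from its predecessor):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (dt TEXT, col2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("12:00:01", "a"), ("12:00:02", "a"),
                  ("12:00:03", "b"), ("12:00:04", "b"),
                  ("12:00:05", "c"), ("12:00:06", "c"),
                  ("12:00:07", "a"), ("12:00:08", "a")])
# Keep the first row (prev IS NULL) and every row whose value changed.
rows = conn.execute("""
    SELECT dt, col2 FROM (
        SELECT dt, col2, LAG(col2) OVER (ORDER BY dt) AS prev
        FROM t)
    WHERE prev IS NULL OR prev <> col2
    ORDER BY dt""").fetchall()
```

Note the trailing `a` run survives as its own row, which plain `DISTINCT` cannot do.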
|
If you are using Microsoft SQL Server 2012 or later, you can do this:
```
select date, col2
from (
select date, col2,
case when isnull(lag(col2) over (order by date, col2), '') = col2 then 1 else 0 end as ignore
from (yourtable)
) x
where ignore = 0
```
This should work as long as col2 cannot contain nulls and if the empty string ('') is not a valid value for col2. The query will need some work if either assumption is not valid.
|
Remove duplicates from query, while repeating
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have this query
```
INNER JOIN view AS vw ON vw.[Id] = vw2.[Id]
```
The problem is the return in vw2.[Id] contains a tab space at the end ('2012 ') and vw does not ('2012'). So I tried doing
```
INNER JOIN view AS vw ON vw.[Id] = Replace(vw2.[Id], char(9), '')
```
Unfortunately, the comparison still returns false. I thought that it cannot be done on joins so I tried it on a select query
```
SELECT *
FROM database.view2
WHERE REPLACE(Id, char(9), '') = '2012 '
```
But this query also returns a false. Can someone explain to me what concept am I missing or misunderstanding?
EDIT
Hello, and thank you to everyone who took the time to help me with this. It seems that
```
vw.[Id] = LTRIM(RTRIM(REPLACE(vw2.[Id], char(9), '')))
```
did the trick. I kept using `REPLACE` on both ends, which caused a very long query time that was not necessary. I also used `LTRIM` and `RTRIM` to get the data, though from the `varbinary` output they should not be needed, and I don't understand what I was doing wrong. But it works now, thank you everyone.
|
I suspect there may be more characters you're dealing with than just a tab. For example, you include
`REPLACE(Id, char(9), '') = '2012 '`
Why is there still a space on the end after the replace?
I was able to get your method to work in SQL 2008R2, so below is proof-of-concept code.
```
CREATE TABLE #table1 (
Id varchar(5)
)
CREATE TABLE #table2 (
Id varchar(5)
)
INSERT INTO #table1
VALUES
('2012')
,('2013')
,('2014')
,('2015')
,('2016')
INSERT INTO #table2
VALUES
('2012'+CHAR(9))
,('2013'+CHAR(9))
,('2014'+CHAR(9))
,('2015'+CHAR(9))
,('2016'+CHAR(9))
SELECT t1.Id, t2.Id
FROM #table1 t1
INNER JOIN #table2 t2
ON t1.Id = REPLACE(t2.Id,CHAR(9),'')
```
See if that gives you the proper results - it does for me.
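The proof-of-concept translates directly to other engines; a compact Python/SQLite version (made-up rows, `char(9)` is the tab character there too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id TEXT)")
conn.execute("CREATE TABLE t2 (id TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [("2012",), ("2013",)])
conn.executemany("INSERT INTO t2 VALUES (?)", [("2012\t",), ("2014\t",)])
# Strip the tab on the dirty side of the join before comparing.
rows = conn.execute("""
    SELECT t1.id FROM t1
    JOIN t2 ON t1.id = replace(t2.id, char(9), '')""").fetchall()
```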
|
Your logic seems right. Have you tried:
```
INNER JOIN view AS vw ON vw.[Id] = RTRIM(vw2.[Id])
```
?
You could also combine trims and replaces as a way to get rid of all of the whitespace. Though, it seems like using a sledgehammer to get what you want...
```
INNER JOIN view AS vw ON REPLACE(LTRIM(RTRIM(vw.[Id])), char(9), '') = REPLACE(LTRIM(RTRIM(vw2.[Id])), char(9), '')
```
|
Removing tab spaces in SQL Server 2012
|
[
"",
"sql",
"sql-server",
""
] |
I have a database table I am working with. What I am looking for is the date of the row where the running SUM of the number column reaches 6, in a query. The query I have now gives the date when the number in the column is six, but not the sum of the previous rows. Example below:
```
Date number
---- ------
6mar16 1
8mar16 4
10mar16 6
12mar16 2
```
I would like a query that returns the 10mar16 date, because on that date the running total first becomes greater than or equal to 6. Earlier dates won't total up to six.
Here is an example of a query i have been working on:
```
SELECT max(date) FROM `numbers` WHERE `number` > 60
```
|
You could use this query, which tracks the accumulated sum and then returns the first one that meets the condition:
```
select date
from (select * from mytable order by date) as base,
(select @sum := 0) init
where (@sum := @sum + number) >= 6
limit 1
```
[SQL Fiddle](http://sqlfiddle.com/#!9/382e3/8)
|
Most databases support ANSI standard window functions. In this case, cumulative sum is your friend:
```
select t.*
from (select t.*, sum(number) over (order by date) as sumnumber
from t
) t
where sumnumber >= 6
order by sumnumber
fetch first 1 row only;
```
In MySQL, you need variables:
```
select t.*
from (select t.*, (@sumn := @sumn + number) as sumnumber
      from t cross join (select @sumn := 0) params
      order by date
     ) t
where sumnumber >= 6
order by sumnumber
fetch first 1 row only;
```
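With window-function support, the cumulative-sum version runs directly; here is a Python/SQLite sketch on the question's data (SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("CREATE TABLE n (dt TEXT, number INT)")
conn.executemany("INSERT INTO n VALUES (?, ?)",
                 [("2016-03-06", 1), ("2016-03-08", 4),
                  ("2016-03-10", 6), ("2016-03-12", 2)])
# Running totals are 1, 5, 11, 13; the first to reach 6 is on 10 March.
first = conn.execute("""
    SELECT dt FROM (
        SELECT dt, SUM(number) OVER (ORDER BY dt) AS running
        FROM n)
    WHERE running >= 6
    ORDER BY running
    LIMIT 1""").fetchone()
```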
|
Stop query when SUM is reached (mysql)
|
[
"",
"mysql",
"sql",
"sum",
""
] |
I would like to recreate a table for path analysis in BigQuery (similar to unpivot).
For each person, I have his/her page path data like follows.
```
[visitor] [page] [page_orders]
A, Pa, 1
A, Pb, 2
A, Pc, 3
A, Pb, 4
A, Pf, 5
B, Px, 1
B, Pb, 2
B, Pz, 3
B, Pk, 4
C, Pb, 1
C, Pz, 2
C, Pa, 3
```
And I would like to create tables like this.
```
[visitor] [page_path]
A, Pa > Pb > Pc > Pb > Pf
B, Px > Pb > Pz > Pk
C, Pb > Pz > Pa
```
What I have done is to create a table like this:
```
[visitor] [Pa] [Pb] [Pc] [Pf] [Pk] [Px] [Pz]
A, 0, 0, 0, 0, 0, 0, 1
A, 0, 1, 0, 0, 0, 0, 0
A, 0, 0, 1, 0, 0, 0, 0
A, 1, 0, 0, 0, 0, 0, 0
A, 0, 0, 0, 1, 0, 0, 0
B, 0, 0, 0, 0, 0, 1, 0
B, 0, 1, 0, 0, 0, 0, 0
B, 0, 0, 0, 0, 0, 0, 1
B, 0, 0, 0, 0, 1, 0, 0
C, 0, 1, 0, 0, 0, 0, 0
C, 0, 0, 0, 0, 0, 0, 1
C, 1, 0, 0, 0, 0, 0, 0
```
AND then I can get a table like this
```
A, Pa Pb Pc Pf Pz
B, Pb Pk Px Pz
C, Pa Pb Pz
```
However, the page order is not correct.
Is there any better idea to create correct page path table via BigQuery?
(It seems I cannot create variables or use loops in BigQuery...)
|
```
SELECT visitor, GROUP_CONCAT(page, ' > ') as page_path
FROM (
SELECT visitor, page, page_order
FROM YourTable
ORDER BY visitor, page_order
)
GROUP BY visitor
```
Another option: use [BigQuery User-Defined Functions](https://cloud.google.com/bigquery/user-defined-functions) (this has no dependency on the ORDER BY issue that Mosha pointed out in his comment, but UDFs have their own limitations, so you will need to choose):
```
SELECT visitor, page_path FROM JS(
// input table
(
SELECT visitor, GROUP_CONCAT(CONCAT(STRING(100000 + page_order), ',', page), ';') AS list
FROM YourTable
GROUP BY visitor
) ,
// input columns
visitor, list,
// output schema
"[
{name: 'visitor', type: 'string'},
{name: 'page_path', type: 'string'}
]",
// function
"function(r, emit){
var list = r.list.split(';');
list.sort();
path = ''
for (var i = 0; i < list.length; i++) {
if (i > 0) path += ' > ';
path += list[i].split(',')[1];
}
emit({
visitor: r.visitor,
page_path: path
});
}"
)
```
|
With BigQuery support for standard SQL, this can be solved by using native operations on ARRAYs.
Below is one possible solution:
```
select visitor, (select string_agg(p, ' --> ') from t.pages p) from
(select visitor, array(select p.page from t.pages p order by p.page_order asc) pages from
(select visitor, array_agg(struct(page, page_order)) pages
from VisitsTable group by visitor) t) t
```
For clarity the solution is written as 3 subselects (it is possible to write it in a shorter form, but that won't change performance). Explanations:
1. Innermost ARRAY\_AGG aggregation builds for every visitor an ARRAY of STRUCTs. Each STRUCT has STRING page and INT64 page\_order.
2. Second subselect runs ORDER BY **within** each array. It sorts pages by their order.
3. Outermost subselect simply computes STRING\_AGG of all strings in each ARRAY, using ' --> ' as separator.
For more details, see documentation at: <https://cloud.google.com/bigquery/sql-reference/>
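Outside BigQuery the same order-then-concatenate logic can be prototyped quickly; a Python/SQLite sketch (made-up visits, the path assembly done client-side so the page order is guaranteed):

```python
import sqlite3
from itertools import groupby

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (visitor TEXT, page TEXT, ord INT)")
conn.executemany("INSERT INTO visits VALUES (?, ?, ?)",
                 [("A", "Pa", 1), ("A", "Pb", 2), ("A", "Pc", 3),
                  ("B", "Px", 1), ("B", "Pb", 2)])
# Sort by visitor and page order, then join each visitor's pages.
rows = conn.execute(
    "SELECT visitor, page FROM visits ORDER BY visitor, ord").fetchall()
paths = {v: " > ".join(p for _, p in grp)
         for v, grp in groupby(rows, key=lambda r: r[0])}
```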
|
Create Page Path Table in BigQuery (Path Analysis ; Unpivot )
|
[
"",
"sql",
"google-bigquery",
""
] |
I am trying to select from a table in SQL Server, so that it groups my `FinalDate` column into week numbers and sums the `ThisApp` column for that week in the next cell. I looked online and I cannot seem to find what I am after.
I was wondering if this was something I can do in T-SQL?
These are currently the rows in my table:
```
FinalDate ThisApp
------------------------
15/04/2016 20459.92
29/05/2016 7521.89
30/05/2016 5963.61
31/05/2016 3293.72
03/06/2016 27413.20
04/06/2016 8392.16
05/06/2016 7789.46
05/06/2016 11414.73
10/06/2016 48893.46
11/06/2016 14685.47
11/06/2016 7030.03
```
I would want to replace the `FinalDate` column with week numbers and sum `ThisApp` for each week number.
**Also:**:
I would need it to display continuous week numbers, so I wouldn't want it to skip any weeks, so for example:
```
FinalDate ThisApp
------------------------
01/01/2016 10.00 -- (Would be week 1)
02/01/2016 10.00 -- (Would be week 1)
15/01/2016 10.00 -- (Would be week 3)
```
Would display like:
```
FinalDate ThisApp
------------------------
1 20.00
2 0.00 --This would show as 0.00 because there was no week 2.
3 10.00
```
I understand this is a very specific request; that's why I was wondering if I could do it in SQL.
|
```
DECLARE @tmp table(dat datetime,
val float)
insert into @tmp
values ('20160415', 20459.92),
('20160529', 7521.89),
('20160530', 5963.61),
('20160531', 3293.72),
('20160603', 27413.20),
('20160604', 8392.16),
('20160605', 7789.46),
('20160605', 11414.73),
('20160610', 48893.46),
('20160611', 14685.47),
('20160611', 7030.03)
SELECT Weeknumb,
ISNULL(sumvals,0) as weekval
FROM
(SELECT DATEPART(ISOWK,DATEADD(wk,t2.number,'20160101')) as Weeknumb
FROM master..spt_values t2
WHERE t2.type = 'P'
AND t2.number <= 255
AND YEAR(DATEADD(wk,t2.number,'20160101'))=2016)allWeeks
LEFT JOIN
(SELECT sum(val) as sumvals,
datepart(ISOWK,dat) as weeks
FROM @tmp
GROUP BY datepart(ISOWK,dat) ) actualData
ON weeks = Weeknumb
ORDER BY Weeknumb asc
```
there you go. All weeks of 2016 with your values summed
|
```
SELECT DATEPART(WEEK, FinalDate) FinalDate
, SUM(ThisApp) ThisApp
FROM Your_Table
GROUP BY DATEPART(WEEK, FinalDate)
```
In order to get `0` for weeks not existent in the dataset you have to create a table with weeknumbers (1-52) and `right join` to it. In that case you'd get something like:
```
SELECT wk.Number
, ISNULL(SUM(ThisApp), 0)
FROM Your_Table T
RIGHT JOIN WeekNumbers wk
ON wk.Number = DATEPART(WEEK, T.FinalDate)
GROUP BY wk.Number
```
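The "continuous weeks with zeros" requirement boils down to generating the full week range and filling gaps; here is a standalone Python sketch of that step using ISO week numbers (dates chosen so they land in ISO weeks 1 and 3 of 2016):

```python
from collections import defaultdict
from datetime import date

# Hypothetical (FinalDate, ThisApp) rows; weeks 1 and 3 have data, week 2 is empty.
rows = [(date(2016, 1, 4), 10.0), (date(2016, 1, 5), 10.0),
        (date(2016, 1, 18), 10.0)]

by_week = defaultdict(float)
for d, amount in rows:
    by_week[d.isocalendar()[1]] += amount  # ISO week number

# Fill every week up to the latest one, defaulting missing weeks to 0.00.
weeks = {wk: by_week.get(wk, 0.0) for wk in range(1, max(by_week) + 1)}
```

Careful with ISO weeks near year boundaries: 2016-01-01 actually falls in ISO week 53 of 2015, which is why `DATEPART(ISOWK, ...)` and `DATEPART(WEEK, ...)` can disagree.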
|
Group by week number including empty weeks and sum other column
|
[
"",
"sql",
"sql-server",
"group-by",
"week-number",
""
] |
I'm using SQL Server 2016, but this is not a 2016 question. Here's what I would like to do. I will have the user's email address and will need to get data for that user. So this is simple:
```
select [firstname],[lastname],[managerid] from employees where workemail='emp@company.com'
```
Very straightforward, but notice I'm pulling the manager's ID. And I now need the exact same data for the manager, minus the manager's ID:
```
select [firstname],[lastname] from employees where managerid=manager_id_from_employee_query
```
You can see all the data is in the same table. I could stop here because I have my data, but I love to learn and look for better solutions. How can I achieve this without writing two separate SQL statements, or what would be a better solution?
|
You can do a left-join. The left side expression will have the employee, and the right side will link to a row in the same table and get the manager's info (assuming it exists). Don't do an inner join because if the "manager" row is missing, then you'll get nothing for the employee.
```
Select [e].[firstname] [emp_fname]
, [e].[lastname] [emp_lname]
, Manager.[firstname] [mgr_fname]
, Manager.[lastname] [mgr_lname]
From [employees] [e]
Left Join [employees] Manager
On [e].[managerid] = Manager.[employee_id]
```
|
You can use a self join where the first instance of the table indicated the employee's details and the second the manager's details:
```
SELECT e.[firstname], e.[lastname], m.[firstname], m.[lastname]
FROM employees e
LEFT JOIN employees m ON e.[managerid] = m.[id]
WHERE e.workemail = 'emp@company.com'
```
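The self-join runs anywhere; here is a Python/SQLite sketch with a two-row hypothetical `employees` table ("Ann" manages "Bob"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees "
             "(id INT, firstname TEXT, managerid INT, workemail TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)",
                 [(1, "Ann", None, "ann@company.com"),
                  (2, "Bob", 1, "emp@company.com")])
# Second instance of the table (m) resolves the employee's managerid.
row = conn.execute("""
    SELECT e.firstname, m.firstname
    FROM employees e
    LEFT JOIN employees m ON e.managerid = m.id
    WHERE e.workemail = 'emp@company.com'""").fetchone()
```

The LEFT JOIN matters: with an INNER JOIN, "Ann" herself (no manager) would vanish from such a query entirely.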
|
Querying the same table twice to get 2 records..in one query
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I need to count the sessions which visited a particular page once and sessions which visited the same page once or more than once. For example: consider these sessions between 1st to 4th April:
```
Session_id| Date
----+---------
1| 01/04/2016
1| 02/04/2016
2| 01/04/2016
3| 01/04/2016
4| 01/04/2016
4| 03/04/2016
4| 04/04/2016
```
I can not do it using a sub query as there are millions of session. Hence a query like this won't work for me:
```
select case when no_of_visits=1 then 'single_visit'
when no_of_visits>1 then 'multiple_visits' end as visit,
count(distinct session_id) as sessions
FROM (
select session_id,
count(distinct date) as no_of_visits
from my_table
group by session_id
) a
group by case when no_of_visits=1 then 'single_visit'
when no_of_visits>1 then 'multiple_visits' end
```
The answer should be like this:
```
Visit|Sessions
single_visit|2
multiple_visit|2
```
Is there any way where i can do something like this:
`count(distinct session_id) where no_of_visits=1 and count(distinct session_id) where no_of_visits>=1` without subquery or self join?
Any help would be deeply appreciated.
|
You can't avoid subqueries to get this result, but you can get rid of the `count(distinct)` (which is probably the most expensive part):
```
select no_of_visits,
count(*) as sessions
FROM
(
select session_id,
case when min(date) <> max(date) -- at least two different dates
then 'multiple_visits'
else 'single_visit'
end as no_of_visits
from table_name
group by session_id
) a
group by no_of_visits
```
|
It *seems* you want something like this:
```
select session_id, count(distinct dt) as no_of_visits,
case when count(distinct dt) = 1 then "Single visit"
else "Multiple visits" end as visit
from my_table
group by session_id;
```
**Note**: I used `dt` rather than `date` as a column name; `DATE` is an Oracle keyword, and such words should not be used as schema, table or column names.
|
SQLCount distinct of one column based on another column
|
[
"",
"sql",
"oracle",
"session",
"count",
""
] |
I want to search for a `student` using multiple conditions in SQL, e.g. `name` and `age`. But the user may supply only a name, only an age, or both; the values come from an `HTML` form. I don't know how to write the query to select when not all text fields are filled in.
If I have only name value the query is
```
SELECT * FROM stuTBL WHERE name = @name
```
If I have only `age` value the query is
```
SELECT * FROM stuTBL WHERE age = @age
```
If I have all value the query is
```
SELECT * FROM stuTBL WHERE name = @name AND age = @age
```
But what query will cover whatever user input ?
|
@austin wernli's comment is one answer, and I found another way to use it:
```
SELECT * FROM stuTBL
WHERE (name = @name OR @name IS NULL)
  AND (age = @age OR @age IS NULL)
```
|
Check for `@name = ''` and `@age = ''` along with the conditions. If `@name` is empty, only the age condition is checked; if `@age` is empty, only the name condition is checked.
If both parameters contain valid data, both conditions apply.
```
SELECT *
FROM stuTBL
WHERE (name = @name OR ISNULL(@name, '') = '')
AND (age = @age OR ISNULL(@age, '') = '')
```
|
Select with multi condition in SQL
|
[
"",
"sql",
""
] |
I have a varchar column, and each field contains a single word, but there is a random number of pipe characters before and after the word.
Something like this:
```
MyVarcharColumn
'|||Apple|||||'
'|||||Pear|||||'
'||Leaf|'
```
When I query the table, I wish to replace the multiple pipes to a single one, so the result would be like this:
```
MyVarcharColumn
'|Apple|'
'|Pear|'
'|Leaf|'
```
Cannot figure out how to solve it with REPLACE function, anybody knows?
|
vkp's method absolutely solves your issue. Another method that works, and also will work in a variety of other situations, is using a triple `REPLACE()`
```
SELECT REPLACE(REPLACE(REPLACE('|||Apple|||||', '|', '><'), '<>',''), '><','|')
```
This method will allow you to keep a delimiter between multiple strings, whereas vkp's method will concatenate the strings and put a delimiter only at the very beginning and the very end.
```
SELECT REPLACE(REPLACE(REPLACE('|||Apple|||||Banana||||||||||', '|', '><'), '<>',''), '><','|')
```
|
One way is to replace all the `|` with blanks and add a pipe character at the beginning and the end of string.
```
select '|'+replace(mycolumn,'|','')+'|' from tablename
```
|
Replace multiple repeating character to one
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I have these tables:
```
Users
Id (PK)
NationalCode
UserProfiles
UserProfileId (PK) One to One with Users
SalaryAmount
Salaries
NationalCode
SalaryAmount
```
I want to update `SalaryAmount` for each user inside `UserProfiles` with the new one from `Salaries`. How can I do that?
I have tried this:
```
UPDATE Users
SET SalaryAmount = t2.Salary
FROM Users t1
INNER JOIN Salaries t2 ON t1.NationalCode = t2.NationalCode
GO
```
The above code works if `SalaryAmount` be inside `Users`, But as you can see `SalaryAmount` is inside `UserProfiles`.
|
All you need is another join on `UserProfiles`:
```
UPDATE up
SET up.SalaryAmount = s.Salary
FROM UserProfiles up
JOIN Users u on up.UserProfileId = u.Id
JOIN Salaries s ON u.NationalCode = s.NationalCode
```
|
```
UPDATE up
SET up.SalaryAmount = t2.Salary
FROM UserProfiles up
INNER JOIN Users t1 ON t1.Id = up.UserProfileId
INNER JOIN Salaries t2 ON t1.NationalCode = t2.NationalCode
GO
```
Is that what you need? I would recommend you to make `UserProfileId` column to be unique in table `UserProfiles` and store Users's unique Id in Column `UserId` which is a `Foreign Key` for table `Users`
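A sketch of those recommended constraints, assuming the column types implied by the question (the constraint names are hypothetical):
```
ALTER TABLE UserProfiles
ADD CONSTRAINT UQ_UserProfiles_UserProfileId UNIQUE (UserProfileId);

ALTER TABLE UserProfiles
ADD UserId INT,
    CONSTRAINT FK_UserProfiles_Users FOREIGN KEY (UserId) REFERENCES Users (Id);
```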
|
TSQL: How to update this field
|
[
"",
"sql",
"t-sql",
""
] |
I have asked this question before here: [Between two tables how does one SELECT the table where the id of a specific value exists mysql](https://stackoverflow.com/questions/36069463/between-two-tables-how-does-one-select-the-table-where-the-id-of-a-specific-valu) however I feel that I didn't phrase it well enough.
I have 2 tables in my "Hiking" database lets say table 1 is called "Forest" and table 2 is called "Mountain". Both tables have a `FOREIGN KEY` "Trip\_id" which is a `PRIMARY KEY` in table "Trip" (or something, this is a made up example) that is `AUTO_INCREMENT`. A trip can either be Mountain or Forest, so the 2 tables do not share any Trip\_ids. They also each have an attribute that they do not share with the other table. Mountains has an attribute "Temperature" and Forests has an attribute "Atmosphere". What I want to do, is extract either "Temperature" or "Atmosphere" depending on which, Mountains or Forests contains the Trip\_id value 74.
```
SELECT Temperature FROM Mountain WHERE Trip_id = 74 OR
SELECT Atmosphere FROM Forest WHERE Trip_id = 74;
```
(I know the code above does not work).
I did end up solving this problem using Java:
```
String sqlStatement = "SELECT Trip_id FROM Mountains WHERE Trip_id = 74";
if (/*(execute sql statement) == null*/){
//Use Forest (and Atmosphere)
}
else{
//Use Mountain (and Temperature)
}
```
However there are no if statements in mysql, so I was wondering if it at all was possible to solve this using mysql.
What I was thinking was something like this: `SELECT * FROM FOREST OR MOUNTAIN WHERE Trip_id = 74 EXISTS;` (I know this code is rubbish and completely wrong but I hope it helps illustrate what I am aiming for).
P.S if the columns were the same, then this would be the best answer:
```
select m.*
from mountains m
where m.trip_id = 74
union all
select f.*
from forests f
where f.trip_id = 74;
```
Thanks to [Gordon Linoff](https://stackoverflow.com/users/1144035/gordon-linoff) for providing that answer.
|
There ARE if statements in mysql. Anyway!
You can do this:
```
SELECT Temperature as Value FROM Mountain WHERE Trip_id = 74
Union All
SELECT Atmosphere as Value FROM Forest WHERE Trip_id = 74
```
|
You can use below the query.
```
Select
T.TripID,
M.Temperature,
F.Atmosphere
From
Trip T
LEFT OUTER JOIN Mountain M
ON T.TripID = M.TripID
LEFT OUTER JOIN Forest F
ON T.TripID = F.TripID
Where T.TripID = 74
```
So in the above query if both Mountain and Forest table have this ID then Temp and Atmosphere will be displayed.
If either Mountain OR Forest table have this ID then Temp OR Atmosphere will be displayed.
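If a single output column is preferred, the two joins can be folded together with `COALESCE`; this is a sketch assuming `Temperature` and `Atmosphere` have compatible types and that at most one side matches per trip:
```
Select
    T.TripID,
    COALESCE(M.Temperature, F.Atmosphere) AS Value
From
    Trip T
    LEFT OUTER JOIN Mountain M ON T.TripID = M.TripID
    LEFT OUTER JOIN Forest F ON T.TripID = F.TripID
Where T.TripID = 74
```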
|
Between two tables how does one SELECT the table where the id of a specific value exists mysql (duplicate)
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have this SELECT statement:
```
SELECT
t.clmUnit,
t.clmConType,
Count(t.clmCaseID) TotalCases,
SUM(CASE WHEN t.clmOutcomeType='Resolved' THEN 1 ELSE 0 END) TotalResolved,
cast(SUM(CASE WHEN t.clmOutcomeType='Resolved' THEN 1 ELSE 0 END) as decimal(18,2))/cast(Count(t.clmCaseID) as decimal(18,2))*100 as [TotalResolved %],
SUM(CASE WHEN t.clmOutcomeType='Unresolved' THEN 1 ELSE 0 END) TotalCaseProgressed,
cast(SUM(CASE WHEN t.clmOutcomeType='Unresolved' THEN 1 ELSE 0 END) as decimal(18,2))/cast(Count(t.clmCaseID)as decimal(18,2))*100 as [TotalCaseProgressed %],
SUM(CASE WHEN t.clmOutcomeType='Work In Progress' THEN 1 ELSE 0 END) TotalFurtherConReq,
cast(SUM(CASE WHEN t.clmOutcomeType='Work In Progress' THEN 1 ELSE 0 END) as decimal(18,2))/cast(Count(t.clmCaseID)as decimal(18,2))*100 as [TotalFurtherConReq %]
```
Which shows the output columns as decimal figures (e.g. 62.24837411). I know I can put this into Excel and format the column as a percentage, but my question is: how do I get SQL to convert it to the percentage? I have tried changing `as decimal(18,2)` to `as decimal(4,2)`, which is what I want, but it just throws a math overflow error.
|
As Arul suggested, you need to apply the CAST to the whole column calculation:
```
SELECT
t.clmUnit,
t.clmConType,
Count(t.clmCaseID) TotalCases,
SUM(CASE WHEN t.clmOutcomeType='Resolved' THEN 1 ELSE 0 END) TotalResolved,
cast(SUM(CASE WHEN t.clmOutcomeType='Resolved' THEN 1 ELSE 0 END) * 100.0 / Count(t.clmCaseID) as decimal(18,2)) as [TotalResolved %],
SUM(CASE WHEN t.clmOutcomeType='Unresolved' THEN 1 ELSE 0 END) TotalCaseProgressed,
cast(SUM(CASE WHEN t.clmOutcomeType='Unresolved' THEN 1 ELSE 0 END) * 100.0 / Count(t.clmCaseID) as decimal(18,2)) as [TotalCaseProgressed %],
SUM(CASE WHEN t.clmOutcomeType='Work In Progress' THEN 1 ELSE 0 END) TotalFurtherConReq,
cast(SUM(CASE WHEN t.clmOutcomeType='Work In Progress' THEN 1 ELSE 0 END) * 100.0 / Count(t.clmCaseID) as decimal(18,2)) as [TotalFurtherConReq %]
```
|
Try this one :
```
select convert(DECIMAL(10,2),column)
```
for precision of 2 decimal places
|
How to fix SQL output into two decimal places
|
[
"",
"sql",
"sql-server",
"decimal",
""
] |
I got a query where iam trying to get the max(date) value of from another table to be used as join condition.
```
SELECT a.col1, a.col2
FROM tablea a,
tableb b
WHERE a.pk_id = b.fk_id
AND a.effdt = (SELECT MAX(effdt)
FROM tablea c
where c.id= a.id
and c.effdt <= sysdate
)
```
Here a index is already created on the tablea for effdt column still the query is taking long time to return values. any help in joining them better would be great.
|
Use the `RANK()` analytic function to eliminate the correlated sub-query:
```
SELECT *
FROM (
SELECT a.*,
RANK() OVER ( PARTITION BY a.id ORDER BY a.effdt DESC ) AS rnk
FROM tablea a
INNER JOIN
tableb b
ON ( a.pk_id = b.fk_id )
WHERE a.effdt <= SYSDATE
)
WHERE rnk = 1;
```
|
For the sub query on table c, an index on `effdt` alone is not enough.
An index on `(id, effdt)` would do better. The key is to cover the `where` clause as well as the `max(effdt)` function in a single index. This is very similar to indexing `order by`: <http://use-the-index-luke.com/de/sql/sortieren-gruppieren/indexed-order-by>
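A sketch of that composite index (Oracle syntax; the index name is made up):
```
CREATE INDEX tablea_id_effdt_ix ON tablea (id, effdt);
```
With this index, the correlated `MAX(effdt)` lookup for a given `id` can be answered by a short index range scan instead of visiting table rows.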
|
make max() function faster when used as inline query join condition
|
[
"",
"sql",
"oracle",
"join",
"max",
"greatest-n-per-group",
""
] |
Writing SQL in MS SQL Server Management Studio.
I have a table where there are two of each row, with almost identical values:
```
code | name | location | group
1 | Thing | 1 | 1
1 | Thing | 2 | NULL
```
I need to update the NULL GROUP to match the group that has a value, where the code is the same.
Currently in this form:
```
code Locationid ItemGroup2
100001 1 TTE
100001 2 NULL
100002 1 TTG
100002 2 NULL
```
I would like to update the table to match:
```
code Locationid ItemGroup2
100001 1 TTE
100001 2 TTE
100002 1 TTG
100002 2 TTG
```
|
Here's one option `joining` the table to itself:
```
update t1
set grp = t2.grp
from yourtable t1
join yourtable t2 on t1.code = t2.code and t2.grp is not null
where t1.grp is null
```
|
One method uses window functions:
```
with toupdate as (
select t.*, max(grp) over (partition by code) as maxgrp
from t
)
update toupdate
set grp = maxgrp
where grp is null;
```
|
If same value in one column in tow rows, update another column with same value from one row
|
[
"",
"sql",
"sql-server",
""
] |
How do I aggregate 2 select clauses without replicating data.
For instance, suppose I have `tab_a` that contains the data from 1 to 10:
```
|id|
|1 |
|2 |
|3 |
|. |
|. |
|10|
```
And then, I want to generate the combination of `tab_b` and `tab_c`, making sure that the result has 10 lines, and add the column of `tab_a` to each result tuple.
Script:
```
SELECT tab_b.id, tab_c.id, tab_a.id
from tab_b, tab_c, tab_a;
```
However this is replicating data from `tab_a` for each combination of `tab_b` and `tab_c`; what I want is that for each combination of tab\_b x tab\_c, exactly one row of tab\_a is added.
Example of data from tab\_b
```
|id|
|1 |
|2 |
```
Example of data from tab\_c
```
|id|
|1 |
|2 |
|3 |
|4 |
|5 |
```
I would like to get this output:
```
|tab_b.id|tab_c.id|tab_a.id|
|1 |1 |1 |
|2 |1 |2 |
|1 |2 |3 |
|... |... |... |
|2 |5 |10 |
```
|
Perhaps you're new to SQL, but this is generally not the way things are done with RDBMSs. Anyway, if this is what you need, PostgreSQL can deal with it nicely, using different strategies:
### Window Functions:
```
with
tab_a (id) as (select generate_series(1,10)),
tab_b (id) as (select generate_series(1,2)),
tab_c (id) as (select generate_series(1,5))
select tab_b_id, tab_c_id, tab_a.id
from (select *, row_number() over () from tab_a) as tab_a
left join (
select tab_b.id as tab_b_id, tab_c.id as tab_c_id, row_number() over ()
from tab_b, tab_c
order by 2, 1
) tabs_b_c ON (tabs_b_c.row_number = tab_a.row_number)
order by tab_a.id;
```
### Arrays:
```
with
tab_a (id) as (select generate_series(1,10)),
tab_b (id) as (select generate_series(1,2)),
tab_c (id) as (select generate_series(1,5))
select bc[s][1], bc[s][2], a[s]
from (
select array(
select id
from tab_a
order by 1
) a,
array(
select array[tab_b.id, tab_c.id]
from tab_b, tab_c
order by tab_c.id, tab_b.id
) bc
) arr
join lateral generate_subscripts(arr.a, 1) s on true
```
|
Your question includes an unstated, invalid assumption: that the *position* of the values in the table (the row number) is meaningful in SQL. It's not. In SQL, rows have *no order*. All joins -- everything, in fact -- are based on values. To join tables, you have to supply the values the DBMS should use to determine which rows go together.
You got a hint of that with your attempted join: `from tab_b, tab_c, tab_a`. You didn't supply any basis for joining the rows, which in SQL means there's no restriction: all rows are "the same" for the purpose of this join. They all match, and *voila*, you get them all!
To do what you want, redesign your tables with at least one more column: the key that serves to identify the value. It could be a number; for example, your source data might be an array. More commonly each value has a name of some kind.
Once you have tables with keys, I think you'll find the join easier to write and understand.
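For illustration only, a keyed redesign might look like this (the key column `k` is hypothetical; the right key depends on what the row positions actually mean in your data):
```
CREATE TABLE tab_a (k int PRIMARY KEY, id int);
-- ... likewise for tab_b and tab_c ...

-- The join is now value-based and deterministic:
SELECT b.id, c.id, a.id
FROM tab_a a
JOIN tab_b b ON b.k = a.k
JOIN tab_c c ON c.k = a.k;
```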
|
Aggregate multiple select statements without replicating data
|
[
"",
"sql",
"postgresql",
""
] |
I have the following SQL Server Stored Procedure which validates a password.
```
ALTER PROC [dbo].[spValidatePassword]
@UserId uniqueidentifier,
@Password NVARCHAR(255)
AS
BEGIN
DECLARE @PasswordHash NVARCHAR(255) = HASHBYTES('SHA2_512', (SELECT @Password + CAST((SELECT p.PasswordSalt FROM Passwords p WHERE p.UserId = @UserId) AS NVARCHAR(255))))
SELECT COUNT(*)
from Passwords
WHERE UserId = @UserId
AND [Password] = @PasswordHash
--return 1 if valid password
--return 0 if not valid
END
```
How can I return 1 from the stored procedure if the count is greater than zero, and zero otherwise?
|
```
ALTER PROC [dbo].[spValidatePassword]
@UserId uniqueidentifier,
@Password NVARCHAR(255)
AS
BEGIN
DECLARE @PasswordHash NVARCHAR(255) = HASHBYTES('SHA2_512', (SELECT @Password + CAST((SELECT p.PasswordSalt FROM Passwords p WHERE p.UserId = @UserId) AS NVARCHAR(255))))
SELECT
CASE WHEN EXISTS (
SELECT *
from Passwords
WHERE UserId = @UserId
AND [Password] = @PasswordHash
)
THEN 1
ELSE 0
END
--return 1 if valid password
--return 0 if not valid
END
```
But consider using some other authentication model like OAuth or Office 365 logins rather than reinventing the wheel
|
Try this query; it will return `1` if there is a result, else `0`:
```
SELECT (CASE WHEN COUNT(*) > 0 THEN 1 ELSE 0 END)
FROM Passwords
WHERE UserId = @UserId
AND [Password] = @PasswordHash
```
|
Return value based on count from SQL Server Stored Procedure
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to figure this one out. In this scenario, gas prices from a particular city in this database are increasing, so they will need to raise their prices 20%. I'm supposed to create a query that will display what the new price will be from that city only. Here is what it is supposed to look like:
[](https://i.stack.imgur.com/LHEAw.jpg)
Here is my code:
```
Select ProductID, tblProduct.ProductType, Price, SUM((Price *.20)+Price) AS 'Increased Price'
From tblProduct join tblCompany
On tblProduct.CompanyID = tblCompany.CompanyID
Where tblCompany.CompanyID IN
(Select CompanyID
From tblCompany
Where City = 'Kalamazoo')
Order By ProductID
```
However, when I go to execute the code, I get the following error:
> Msg 8120, Level 16, State 1, Line 1
> Column 'tblProduct.ProductID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I can't figure out what I'm doing wrong. Can anyone enlighten me?
|
You don't really need the SUM function if you are only calculating the 20% price increase. Do this instead:
```
Select ProductID, tblProduct.ProductType, Price, ((Price *.20)+Price) AS 'Increased Price'
From tblProduct join tblCompany
On tblProduct.CompanyID = tblCompany.CompanyID
Where tblCompany.CompanyID IN
(Select CompanyID
From tblCompany
Where City = 'Kalamazoo')
Order By ProductID
```
|
```
Select tblProduct.ProductID, tblProduct.ProductType, tblProduct.Price, ((tblProduct.Price *.20)+tblProduct.Price) AS 'Increased Price'
From tblProduct join tblCompany
On tblProduct.CompanyID = tblCompany.CompanyID
Where tblCompany.CompanyID IN
(Select c.CompanyID
From tblCompany c
Where c.City = 'Kalamazoo')
Order By ProductID
```
|
Add Column to Query From Same Table, Update Values in Duplicated Column?
|
[
"",
"sql",
"sql-server",
""
] |
I have 3 tables with the following schema
```
create table main (
main_id int PRIMARY KEY,
secondary_id int NOT NULL
);
create table secondary (
secondary_id int NOT NULL,
tags varchar(100)
);
create table bad_words (
words varchar(100) NOT NULL
);
insert into main values (1, 1001);
insert into main values (2, 1002);
insert into main values (3, 1003);
insert into main values (4, 1004);
insert into secondary values (1001, 'good word');
insert into secondary values (1002, 'bad word');
insert into secondary values (1002, 'good word');
insert into secondary values (1002, 'other word');
insert into secondary values (1003, 'ugly');
insert into secondary values (1003, 'bad word');
insert into secondary values (1004, 'pleasant');
insert into secondary values (1004, 'nice');
insert into bad_words values ('bad word');
insert into bad_words values ('ugly');
insert into bad_words values ('worst');
expected output
----------------
1, 1000, good word, 0 (boolean flag indicating whether the tags contain any one of the words from the bad_words table)
2, 1001, bad word,good word,other word , 1
3, 1002, ugly,bad word, 1
4, 1003, pleasant,nice, 0
```
I am trying to use CASE to select 1 or 0 for the last column and a JOIN to combine the main and secondary tables, but I am getting confused and stuck. Can someone please help me with a query? These tables are stored in Redshift, and I want a query compatible with Redshift.
you can use the above schema to try your query in [sqlfiddle](http://sqlfiddle.com)
EDIT: I have updated the schema and expected output now by removing the `PRIMARY KEY` in secondary table so that easier to join with the bad\_words table.
|
```
SELECT m.main_id, m.secondary_id, t.tags, t.is_bad_word
FROM srini.main m
JOIN (
SELECT st.secondary_id, st.tags, exists (select 1 from srini.bad_words b where st.tags like '%'+b.words+'%') is_bad_word
FROM
( SELECT secondary_id, LISTAGG(tags, ',') as tags
FROM srini.secondary
GROUP BY secondary_id ) st
) t on t.secondary_id = m.secondary_id;
```
This worked for me in redshift and produced the following output with the above mentioned schema.
```
1 1001 good word false
3 1003 ugly,bad word true
2 1002 good word,other word,bad word true
4 1004 pleasant,nice false
```
|
You can use EXISTS and a regex comparison with \m and \M (markers for beginning and end of a word, respectively):
```
with
main(main_id, secondary_id) as (values (1, 1000), (2, 1001), (3, 1002), (4, 1003)),
secondary(secondary_id, tags) as (values (1000, 'very good words'), (1001, 'good and bad words'), (1002, 'ugly'),(1003, 'pleasant')),
bad_words(words) as (values ('bad'), ('ugly'), ('worst'))
select *, exists (select 1 from bad_words where s.tags ~* ('\m'||words||'\M'))::int as flag
from main m
join secondary s using (secondary_id)
```
|
sql query to join two tables and a boolean flag to indicate whether it contains any words from third table
|
[
"",
"sql",
"postgresql",
"join",
"amazon-redshift",
""
] |
I am looking for a SQL query to clean up a hacked SQL Server database. I have some basic SQL knowledge, but I have no idea how to solve the below.
For one of our websites we have a SQL Server database that was recently hacked into. Thousands of records were filled with hidden divs, containing all sorts of dodgy references. Our ISP says the content of the database is not their responsibility and they have no knowledge how to help us clean up the database. There are no clean backups available. It is way too much work to go through all records manually.
So I am now desperately trying to find a SQL query to remove these blocks of hidden text from the database.
Two useful bits of information:
1. All the spammy content is contained within div tags. The information between the tags is different in every instance, but they all open and close with the div tag.
2. Our original data will have some HTML-content, but will never contain div tags. So if we can find a way to remove everything from the starting div up to and including the closing div, then we would be sorted.
Any assistance here is much appreciated. Thanks for your time.
|
Try this out; it will only work if your assumptions are correct. A further assumption: the hacker did not add nested DIVs. And yes, TEST this thoroughly before running the update, and back up your data beforehand.
```
CREATE TABLE #temp(id INT IDENTITY, html VARCHAR(MAX));
INSERT #temp(html)
VALUES('<p>Some text</p><strong>other text</strong><div>added by hacker</div>')
,('<p>Some text</p><strong>other text<div>added by hacker within html tag</div></strong>')
,('<p>Some text</p><div>some other text added by <a href="http://google.com">hack</a></div><strong>other text</strong>');
SELECT html
, CHARINDEX('<div',html) AS startPos
, CHARINDEX('</div>',html) AS endPos
, (CHARINDEX('</div>',html)+6)-(CHARINDEX('<div',html)) AS stringLenToRemove
, SUBSTRING(html, CHARINDEX('<div',html), (CHARINDEX('</div>',html)+6)-(CHARINDEX('<div',html))) AS HtmlAddedByHack
,REPLACE(html,SUBSTRING(html, CHARINDEX('<div',html), (CHARINDEX('</div>',html)+6)-(CHARINDEX('<div',html))), '') AS sanitizedHtml
FROM #temp;
--UPDATE #temp
--SET html = REPLACE(html,SUBSTRING(html, CHARINDEX('<div',html), (CHARINDEX('</div>',html)+6)-(CHARINDEX('<div',html))), '');
--SELECT *
--FROM #temp;
```
|
A UDF that uses `PATINDEX` may be able to do it.
Assuming
* All malicious content is in `<DIV>...</DIV>` sections
* No `<DIV>...</DIV>` sections exist that are *not* malicious content
* You TEST THIS EXTENSIVELY on a backup of your data before applying it to your live database
First use this UDF for Pattern replacement, from [here](http://dataeducation.com/splitting-a-string-of-unlimited-length/):
```
CREATE FUNCTION dbo.PatternReplace
(
@InputString VARCHAR(4000),
@Pattern VARCHAR(100),
@ReplaceText VARCHAR(4000)
)
RETURNS VARCHAR(4000)
AS
BEGIN
DECLARE @Result VARCHAR(4000) SET @Result = ''
-- First character in a match
DECLARE @First INT
-- Next character to start search on
DECLARE @Next INT SET @Next = 1
-- Length of the total string -- 8001 if @InputString is NULL
DECLARE @Len INT SET @Len = COALESCE(LEN(@InputString), 8001)
-- End of a pattern
DECLARE @EndPattern INT
WHILE (@Next <= @Len)
BEGIN
SET @First = PATINDEX('%' + @Pattern + '%', SUBSTRING(@InputString, @Next, @Len))
IF COALESCE(@First, 0) = 0 --no match - return
BEGIN
SET @Result = @Result +
CASE --return NULL, just like REPLACE, if inputs are NULL
WHEN @InputString IS NULL
OR @Pattern IS NULL
OR @ReplaceText IS NULL THEN NULL
ELSE SUBSTRING(@InputString, @Next, @Len)
END
BREAK
END
ELSE
BEGIN
-- Concatenate characters before the match to the result
SET @Result = @Result + SUBSTRING(@InputString, @Next, @First - 1)
SET @Next = @Next + @First - 1
SET @EndPattern = 1
-- Find start of end pattern range
WHILE PATINDEX(@Pattern, SUBSTRING(@InputString, @Next, @EndPattern)) = 0
SET @EndPattern = @EndPattern + 1
-- Find end of pattern range
WHILE PATINDEX(@Pattern, SUBSTRING(@InputString, @Next, @EndPattern)) > 0
AND @Len >= (@Next + @EndPattern - 1)
SET @EndPattern = @EndPattern + 1
--Either at the end of the pattern or @Next + @EndPattern = @Len
SET @Result = @Result + @ReplaceText
SET @Next = @Next + @EndPattern - 1
END
END
RETURN(@Result)
END
```
Then, make use of the UDF:
```
UPDATE ContentTable SET ContentColumn = dbo.PatternReplace(ContentColumn, '<DIV>%</DIV>', '')
```
|
SQL replace query for variable content
|
[
"",
"sql",
"sql-server",
""
] |
Is there any query that says
> "in .... but not in ....." ?
Let's say I have a table about "double agent".
This table has Name, Nationality, and Age.
```
+-------------------------+
| Name Nationality Age |
+-------------------------+
| Jony US 20 |
| Jony China 20 |
| Adam Argentina 25 |
| Lukas China 39 |
| Lukas US 39 |
+-------------------------+
```
Then I'm planning to output a list for all agent that has China as nationality but not US.
What I wrote
> Select name from agent where nationality = 'China' and nationality != 'US'
As predicted, it didn't print anything because it is wrong.
I know it may be a stupid question, but can anyone correct me? Thank you.
|
I would do this using aggregation and `having`. I think the clearest approach is:
```
select name
from agent
group by name
having sum(nationality = 'China') > 0 and
sum(nationality = 'US') = 0;
```
There are definitely other methods. Here is one that doesn't require `select distinct` or `group by`:
```
select a.name
from agent a
where a.nationality = 'China' and
not exists (select 1 from agent a2 where a2.name = a.name and a2.nationality = 'US');
```
With indexes on `agent(nationality, name)` and `agent(name, nationality)`, this probably has the best performance.
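Those two indexes would be created as follows (the index names are made up):
```
CREATE INDEX idx_agent_nationality_name ON agent (nationality, name);
CREATE INDEX idx_agent_name_nationality ON agent (name, nationality);
```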
|
If you want to select all agents that are in China but not the US, you can apply the filter in `having`
```
select name from mytable
where nationality in ('US', 'China')
group by name
having sum(nationality = 'US') = 0
```
Note that in your sample data all agents that are in China are also in the US, so this query wouldn't return any rows in that case.
|
MySQL Query: Is There any Query such "BUT NOT IN <blank>"?
|
[
"",
"mysql",
"sql",
""
] |
How can I 'score' results using order by, if possible, by number of matches between a given user and every other user in a table.
Basically, I have a given 'userid' of '1' and I need to check all of this users 'interests' against of users interests and order by the number of like matches between users.
Say userid '1' has 4 interests in common with userid '4', but only 3 interests in common with userid '2', it would return 4 first, then 2, then everyone else with less overall interest matches.
I have tried about 10 different queries to no avail, not even one that was close, which is why I didn't post any of the attempts here.
```
CREATE TABLE `interests` (
`id` int(12) NOT NULL AUTO_INCREMENT,
`userid` int(12) NOT NULL DEFAULT '0',
`interest` varchar(100) NOT NULL DEFAULT '',
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
INSERT INTO `interests` set `userid` = '1', `interest` = 'term1';
INSERT INTO `interests` set `userid` = '1', `interest` = 'term2';
INSERT INTO `interests` set `userid` = '1', `interest` = 'term3';
INSERT INTO `interests` set `userid` = '1', `interest` = 'term4';
INSERT INTO `interests` set `userid` = '2', `interest` = 'term1';
INSERT INTO `interests` set `userid` = '2', `interest` = 'term2';
INSERT INTO `interests` set `userid` = '2', `interest` = 'term8';
INSERT INTO `interests` set `userid` = '2', `interest` = 'term4';
INSERT INTO `interests` set `userid` = '3', `interest` = 'term9';
INSERT INTO `interests` set `userid` = '3', `interest` = 'term2';
INSERT INTO `interests` set `userid` = '3', `interest` = 'term3';
INSERT INTO `interests` set `userid` = '3', `interest` = 'term7';
INSERT INTO `interests` set `userid` = '4', `interest` = 'term1';
INSERT INTO `interests` set `userid` = '4', `interest` = 'term2';
INSERT INTO `interests` set `userid` = '4', `interest` = 'term3';
INSERT INTO `interests` set `userid` = '4', `interest` = 'term4';
```
The proper result set should look like:
```
-------
userid
-------
4
2
3
```
Any help here is greatly appreciated guys.
|
You can do a `SELF JOIN`:
```
SELECT i2.userid
FROM interests i1
INNER JOIN interests i2
ON i2.userid <> i1.userid
AND i2.interest = i1.interest
WHERE i1.userid = 1
GROUP BY i2.userid
ORDER BY COUNT(*) DESC;
```
|
The easiest way is to build a list of all user pairs with their shared-interest counts, and then select and order the ones you want from that sub-query. It makes the logic clear:
```
SELECT userid, otherid, sameCount
FROM (
SELECT base.userid, other.userid as otherid, count(*) sameCount
FROM interests base
JOIN interests other ON base.interest = other.interest and base.userid != other.userid
GROUP BY base.userid, other.userid
) sub
WHERE userid = 1
ORDER BY sameCount DESC
```
|
how do I get a count of matches across multiple tables joining by id?
|
[
"",
"mysql",
"sql",
""
] |
I have two tables. One table shows me inbound inventory by month and the other shows me outbound inventory by month. Essentially I would like to show both inbound and outbound inventory by month on a single table.
1.
```
select
warehouses.id,
count(pallets.id),
to_char(pallets.in_date, 'FMMonth-YY') as in_month
from pallets
inner join reservations
on pallets.reservation_id = reservations.id
inner join warehouses
on reservations.warehouse_id = warehouses.id
where pallets.in_date is not null
group by
warehouses.id,
date_part('month', pallets.in_date),
date_part('year', pallets.in_date),
to_char(pallets.in_date, 'FMMonth-YY')
order by
date_part('year', pallets.in_date),
date_part('month', pallets.in_date)
```
and 2.
```
select
warehouses.id as id,
count(pallets.id),
to_char(pallets.out_date, 'FMMonth-YY') as out_month
from pallets
inner join reservations
on pallets.reservation_id = reservations.id
inner join warehouses
on reservations.warehouse_id = warehouses.id
where pallets.in_date is not null and pallets.out_date is not null
group by
date_part('month', pallets.out_date),
date_part('year', pallets.out_date),
to_char(pallets.out_date, 'FMMonth-YY'),
warehouses.id
order by
date_part('year', pallets.out_date),
date_part('month', pallets.out_date)
```
I tried to join the second table with a sub query but the returned values were all sorts of jacked up.
```
select
warehouses.id,
count(pallets.id),
count(out.out),
to_char(pallets.in_date, 'FMMonth-YY') as in_month
from pallets
inner join reservations
on pallets.reservation_id = reservations.id
inner join warehouses
on reservations.warehouse_id = warehouses.id
full join (select
warehouses.id as id,
count(pallets.id) as out,
to_char(pallets.out_date, 'FMMonth-YY') as out_month
from pallets
inner join reservations
on pallets.reservation_id = reservations.id
inner join warehouses
on reservations.warehouse_id = warehouses.id
where pallets.in_date is not null and pallets.out_date is not null
group by
date_part('month', pallets.out_date),
date_part('year', pallets.out_date),
to_char(pallets.out_date, 'FMMonth-YY'),
warehouses.id
order by
date_part('year', pallets.out_date),
date_part('month', pallets.out_date)) as out
on out.id=warehouses.id
where pallets.in_date is not null
group by
warehouses.id,
date_part('month', pallets.in_date),
date_part('year', pallets.in_date),
to_char(pallets.in_date, 'FMMonth-YY')
order by
date_part('year', pallets.in_date),
date_part('month', pallets.in_date)
```
Any ideas how I can do this?
|
One way to achieve this is to use the existing queries (with some minor modifications) as inline views. And use a FULL OUTER JOIN operation to combine them.
No indication is given whether warehouse\_id could be NULL or not, so in place of the equality comparison, the example below uses a "null-safe" comparison so NULL values of warehouse\_id will get "matched" by the join operation.
And rather than using DATE\_PART and TO\_CHAR for grouping, just use a DATE\_TRUNC function in the inline views. That makes it easier to do the join, and the resulting value can be used both for ordering the result (if that's important) and for returning the month as a formatted string. If this is going to be stored as a table, I would store the date value, rather than the formatted string.
I also prefer to use short table aliases to qualify the column references.
```
select coalesce(i.warehouse_id,o.warehouse_id) as warehouse_id
, coalesce(i.in_month,o.out_month) as month_
, to_char(coalesce(i.in_month,o.out_month), 'FMMonth-YY') as month_yy
, coalesce(i.cnt,0) as in_count
, coalesce(o.cnt,0) as out_count
from ( select w.id as warehouse_id
, date_trunc('month', p.in_date) as in_month
, count(p.id) as cnt
from pallets p
join reservations r
on r.id = p.reservation_id
join warehouses w
on w.id = r.warehouse_id
where p.in_date is not null
group by w.id
, date_trunc('month', p.in_date)
) i
full
join ( select w.id as warehouse_id
, date_trunc('month', p.out_date) as out_month
, count(p.id) as cnt
from pallets p
join reservations r
on r.id = p.reservation_id
join warehouses w
on w.id = r.warehouse_id
where p.in_date is not null
and p.out_date is not null
group
by w.id
, date_trunc('month', p.out_date)
) o
on o.warehouse_id is not distinct from i.warehouse_id
and o.out_month = i.in_month
order by coalesce(i.warehouse_id,o.warehouse_id)
, coalesce(i.in_month,o.out_month)
```
My personal preference is to use upper case for the functions, keywords and reserved words.
```
SELECT COALESCE(i.warehouse_id,o.warehouse_id) AS warehouse_id
, COALESCE(i.in_month,o.out_month) AS month_
, TO_CHAR(COALESCE(I.IN_MONTH,O.OUT_MONTH), 'FMMonth-YY') AS month_yy
, COALESCE(i.cnt,0) AS in_count
, COALESCE(o.cnt,0) AS out_count
FROM ( SELECT w.id AS warehouse_id
, DATE_TRUNC('month', p.in_date) AS in_month
, COUNT(p.id) AS cnt
FROM pallets p
JOIN reservations r
ON r.id = p.reservation_id
JOIN warehouses w
ON w.id = r.warehouse_id
WHERE p.in_date IS NOT NULL
GROUP BY w.id
, DATE_TRUNC('month', p.in_date)
) i
FULL
JOIN ( SELECT w.id AS warehouse_id
, DATE_TRUNC('month', p.out_date) AS out_month
, COUNT(p.id) AS cnt
FROM pallets p
JOIN reservations r
ON r.id = p.reservation_id
JOIN warehouses w
ON w.id = r.warehouse_id
WHERE p.in_date IS NOT NULL
AND p.out_date IS NOT NULL
GROUP BY w.id
, DATE_TRUNC('month', p.out_date)
) o
ON o.warehouse_id IS NOT DISTINCT FROM i.warehouse_id
AND o.out_month = i.in_month
ORDER BY COALESCE(i.warehouse_id,o.warehouse_id)
, COALESCE(i.in_month,o.out_month)
```
|
To simplify the query, I think you can trick the COUNT this way:
*Note: The query inside the WITH clause is maybe sufficient for you. However I added the WITH clause to be able to make a union. It depends on what you want the output to be.*
```
WITH computed_warehouses AS (
-- Maybe this single select (inside the WITH is what you want)
select
warehouses.id,
count(pallets.id) as nb_in_date,
count(pallets.id||pallets.out_date) as nb_out_date,
-- Edit: count(pallets.out_date) as nb_out_date should also work and it's simpler
to_char(pallets.in_date, 'FMMonth-YY') as in_month,
to_char(pallets.out_date, 'FMMonth-YY') as out_month
from pallets
inner join reservations
on pallets.reservation_id = reservations.id
inner join warehouses
on reservations.warehouse_id = warehouses.id
where pallets.in_date is not null
group by
warehouses.id,
date_part('month', pallets.in_date),
date_part('year', pallets.in_date),
to_char(pallets.in_date, 'FMMonth-YY'),
to_char(pallets.out_date, 'FMMonth-YY')
order by
date_part('year', pallets.in_date),
date_part('month', pallets.in_date)
)
-- If you wanna make a union
(
SELECT
t.id AS warehouse_id,
t.in_month AS which_month,
t.nb_in_date AS inbound,
NULL AS outbound
FROM computed_warehouses t
)
UNION ALL
(
SELECT
t.id AS warehouse_id,
t.out_month AS which_month,
NULL AS inbound,
t.nb_out_date AS outbound
FROM computed_warehouses t
WHERE t.nb_out_date IS NOT NULL
AND t.nb_out_date > 0
)
```
count(pallets.id||pallets.out\_date) is a little tricky: it will count only pallets where out\_date is not null. It works because *id* and *out\_date* belong to the same table and because COUNT does not increment on NULL values.
**Edit** :
`count(pallets.out_date) as nb_out_date` should work too; it is simpler and should be faster.
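The NULL-skipping behaviour of `COUNT` that this trick relies on is easy to verify with a throwaway SQLite table (the data below is made up):

```python
import sqlite3

# COUNT(*) counts every row; COUNT(column) skips rows where the column is NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pallets (id INTEGER, out_date TEXT)")
conn.executemany(
    "INSERT INTO pallets VALUES (?, ?)",
    [(1, "2016-01-05"), (2, None), (3, "2016-01-20"), (4, None)],
)

total, shipped = conn.execute(
    "SELECT COUNT(*), COUNT(out_date) FROM pallets"
).fetchone()
print(total, shipped)  # 4 rows in total, only 2 have a non-NULL out_date
```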
|
Combining aggregated columns into single table
|
[
"",
"sql",
"postgresql",
"join",
"aggregate",
""
] |
Alright, so I managed to connect these two tables:
```
CREATE TABLE [dbo].[T_Artikli] (
[ArtikliId] INT IDENTITY (1, 1) NOT NULL,
[Naziv] NVARCHAR (100) NOT NULL,
[Sifra] VARCHAR (13) NOT NULL,
[Vp] FLOAT (53) NOT NULL,
[MP] FLOAT (53) NOT NULL,
[Napomena] NVARCHAR (300) NOT NULL,
PRIMARY KEY CLUSTERED ([ArtikliId] ASC)
);
```
and
```
CREATE TABLE [dbo].[T_Stanje] (
[StanjeId] INT IDENTITY (1, 1) NOT NULL,
[Trenutno] INT NOT NULL,
[Naruceno] INT NOT NULL,
[Datum] DATE NOT NULL,
[Firma] NVARCHAR (40) NOT NULL,
[ArtiklId] INT NOT NULL,
PRIMARY KEY CLUSTERED ([StanjeId] ASC),
CONSTRAINT [FK_T_Stanje_T_Artikli] FOREIGN KEY ([StanjeId]) REFERENCES [dbo].[T_Artikli] ([ArtikliId])
);
```
And it works like a charm. When it comes to deleting one of these tables rows I did it simple like this:
When deleting Artikl table (ArtiklId and ArtikliId is not a typo :D )
```
string deleteSql =
"DELETE FROM T_Stanje WHERE ArtiklId = @Id " +
"DELETE FROM T_Artikli WHERE ArtikliId = @Id;";
```
and when deleting Stanje table
```
string deleteSql =
"DELETE FROM T_Stanje WHERE StanjeId = @Id;";
```
These also work like a charm BUT when I add values to Artikli and Stanje and then delete that Stanje row, I am unable to add a NEW Stanje for that same Artikli.
[](https://i.stack.imgur.com/7oP9l.png)
|
First of all, you are using an identity column, which is automatically generated, so you cannot create the same row again.
Secondly, I believe you referenced the wrong foreign key.
|
The problem is here
```
CONSTRAINT [FK_T_Stanje_T_Artikli]
FOREIGN KEY ([StanjeId])
REFERENCES [dbo].[T_Artikli] ([ArtikliId])
```
Your foreign key is incorrect. It should be:
```
CONSTRAINT [FK_T_Stanje_T_Artikli]
FOREIGN KEY ([ArtiklId])
REFERENCES [dbo].[T_Artikli] ([ArtikliId])
```
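A quick way to see why the corrected key matters, sketched with SQLite (which only enforces foreign keys when the pragma is enabled); the column list is trimmed for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE T_Artikli (ArtikliId INTEGER PRIMARY KEY, Naziv TEXT)")
conn.execute("""
    CREATE TABLE T_Stanje (
        StanjeId INTEGER PRIMARY KEY,
        ArtiklId INTEGER NOT NULL REFERENCES T_Artikli (ArtikliId)
    )
""")
conn.execute("INSERT INTO T_Artikli VALUES (1, 'widget')")
conn.execute("INSERT INTO T_Stanje (ArtiklId) VALUES (1)")  # parent row exists: OK

try:
    # With the FK on ArtiklId, inserting a child with no parent is rejected.
    conn.execute("INSERT INTO T_Stanje (ArtiklId) VALUES (99)")
    fk_violation = False
except sqlite3.IntegrityError:
    fk_violation = True
```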
|
SQL deleting connected rows and adding same values
|
[
"",
"sql",
"database",
""
] |
I have a table with approximately 8 million rows. The rows contain an ID, a date, and an event code. I would like to select all the IDs and dates for which the event code is equal to 1 and there has been an event code equal to 2 at some time in the past.
For example, my table looks like this:
```
ID Date Code
----------------------
1 4/16/2016 6
1 4/10/2016 1
1 3/1/2016 13
1 1/26/2016 2
2 5/2/2016 8
2 3/14/2016 1
2 1/13/2016 14
```
I would want ID = 1 and Date = 4/10/2016 returned but I would not want anything returned with ID=2 because ID=2 never had an event code equal to 2.
How should I write my `SELECT` statement to get these results?
|
If you only want to select the max date for each `ID`:
```
WITH Cte AS(
SELECT *,
rn = ROW_NUMBER() OVER(PARTITION BY ID ORDER BY Date DESC)
FROM tbl
WHERE Code = 1
)
SELECT
ID, Date, Code
FROM Cte c
WHERE
rn = 1
AND EXISTS(
SELECT 1
FROM tbl t
WHERE
t.ID = c.ID
AND t.Date < c.Date
AND t.Code = 2
)
;
```
`ONLINE DEMO`
---
Using `MAX`, `GROUP BY`, and `HAVING`:
```
SELECT
ID, Date = MAX(Date)
FROM tbl t1
WHERE Code = 1
GROUP BY t1.ID
HAVING MAX(Date) > (SELECT Date FROM tbl t2 WHERE t2.ID = t1.ID AND t2.Code = 2)
```
`ONLINE DEMO`
|
You can use `exists`.
```
select *
from t
where code = 1 and
exists (select 1 from t t1 where t.id = t1.id and t.dt > t1.dt and t1.code=2)
```
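The `EXISTS` approach can be sanity-checked against the question's sample rows in SQLite; dates are kept as ISO strings so that string comparison matches date order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INTEGER, dt TEXT, Code INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "2016-04-16", 6), (1, "2016-04-10", 1), (1, "2016-03-01", 13),
    (1, "2016-01-26", 2), (2, "2016-05-02", 8), (2, "2016-03-14", 1),
    (2, "2016-01-13", 14),
])

# Code 1 rows that have an earlier Code 2 row for the same ID.
rows = conn.execute("""
    SELECT ID, dt FROM t
    WHERE Code = 1
      AND EXISTS (SELECT 1 FROM t t1
                  WHERE t.ID = t1.ID AND t.dt > t1.dt AND t1.Code = 2)
""").fetchall()
print(rows)  # ID 2 never had a Code 2 event, so only ID 1 qualifies
```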
|
SQL Server 2012 SELECT max date based on multiple conditions
|
[
"",
"sql",
"sql-server",
"select",
"sql-server-2012",
""
] |
I am writing SQL code for a report page that joins three tables. Here is the query I have written.
```
comm.CommandText = "SELECT Count(DISTINCT Courses.CourseID) AS CourseCount, Count(DISTINCT Students.StudentID) AS StudentCount, Count(Students.StartDate) AS StartCount, School.Name, School.StartDate, School.SchoolFees " +
"FROM Schools " +
"LEFT JOIN Courses ON (School.SchoolID = Courses.SchoolId) " +
"LEFT JOIN Students ON (School.SchoolID = Student.SchoolID) " +
"WHERE School.Active = 1 " +
"GROUP BY School.Name, School.StartDate, School.SchoolFees";
```
The above query works well. But I want to show the count of Student.StartDate for each School where Student.StartDate satisfies a condition. Here is the query I want to use
```
SELECT Count(Students.StartDate)
FROM Students
WHERE Student.StartDate >= DATEADD(month, -1, GETDATE());
```
I want the above query to be returned as part of my main query but don't know how to achieve it. Any help will be appreciated. Thanks
|
When you want aggregates from different tables, you should not join the tables and then aggregate, but always build the aggregates first and join these instead. In your case you were able to avoid issues by counting distinct IDs, but that is not always possible (i.e. when looking for sums or avarages). You can count conditionally with `CASE WHEN`.
```
SELECT
COALESCE(c.CourseCount, 0) AS CourseCount,
COALESCE(s.StudentCount, 0) AS StudentCount,
COALESCE(s.StartCount, 0) AS StartCount,
School.Name,
School.StartDate,
School.SchoolFees
FROM Schools
LEFT JOIN
(
SELECT SchoolID, COUNT(*) AS CourseCount
FROM Courses
GROUP BY SchoolID
) c ON c.SchoolId = School.SchoolID
LEFT JOIN
(
SELECT
SchoolID,
COUNT(*) AS StudentCount,
        COUNT(CASE WHEN StartDate >= DATEADD(month, -1, GETDATE()) THEN 1 END) AS StartCount
FROM Students
GROUP BY SchoolID
) s ON s.SchoolId = School.SchoolID
WHERE School.Active = 1;
```
In case it is guaranteed for every school to have at least one student and one course (which is probably the case), you can change the outer joins to inner joins and get thus rid of the COALESCE expressions.
|
You can do this with conditional aggregation. Just add this to the `SELECT`:
```
SUM(CASE WHEN Student.StartDate >= DATEADD(month,-1, GETDATE()) THEN 1 ELSE 0 END) as RecentStudents
```
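The conditional-aggregation idea is easy to try out in SQLite; since SQLite has no `DATEADD`, a fixed cutoff date stands in for `DATEADD(month, -1, GETDATE())`, and the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Students (SchoolID INTEGER, StartDate TEXT)")
conn.executemany("INSERT INTO Students VALUES (?, ?)", [
    (1, "2016-05-20"), (1, "2016-04-01"), (1, "2016-05-25"),
    (2, "2016-03-10"),
])

# '2016-05-01' plays the role of "one month ago" from the original query.
rows = conn.execute("""
    SELECT SchoolID,
           COUNT(*) AS StudentCount,
           SUM(CASE WHEN StartDate >= '2016-05-01' THEN 1 ELSE 0 END) AS RecentStudents
    FROM Students
    GROUP BY SchoolID
    ORDER BY SchoolID
""").fetchall()
print(rows)  # per school: total students and students starting after the cutoff
```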
|
Using COUNT Function in Table Joins
|
[
"",
"sql",
"asp.net",
"sql-server",
""
] |
If I have a table such as:
```
name1 | name2 | id |
+----------------+--------------+-----------+
| A | E | 1 |
| A | F | 1 |
| B | G | 1 |
| C | H | 1 |
| D | I | 1 |
| A | J | 2 |
| B | K | 2 |
| C | L | 2 |
| D | M | 2 |
| A | N | 2 |
```
what I need is that select all rows of id where name2 <> 'E'
If I do:
```
select * from table where name2 <> 'E'
```
It only gives me this
```
name1 | name2 | id |
+----------------+--------------+-----------+
| A | F | 1 |
| B | G | 1 |
| C | H | 1 |
| D | I | 1 |
| A | J | 2 |
| B | K | 2 |
| C | L | 2 |
| D | M | 2 |
| A | N | 2 |
```
The result I want is (excluding all rows of id which contains name2 = 'E' at least once) :
```
name1 | name2 | id |
+----------------+--------------+-----------+
| A | J | 2 |
| B | K | 2 |
| C | L | 2 |
| D | M | 2 |
| A | N | 2 |
```
Which query should I use?
|
One approach is to use subquery that finds all ids that have a value 'E' in column `name2` and then filter out all these ids:
```
SELECT *
FROM table
WHERE id NOT IN
( SELECT DISTINCT id FROM table WHERE name2 = 'E' )
```
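A quick check of the `NOT IN` approach with a trimmed version of the question's data, using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name1 TEXT, name2 TEXT, id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("A", "E", 1), ("A", "F", 1), ("B", "G", 1),
    ("A", "J", 2), ("B", "K", 2),
])

# Drop every row belonging to an id that has name2 = 'E' at least once.
rows = conn.execute("""
    SELECT * FROM t
    WHERE id NOT IN (SELECT DISTINCT id FROM t WHERE name2 = 'E')
""").fetchall()
ids = {r[2] for r in rows}
```

If the subquery column could ever contain NULLs, `NOT EXISTS` is the safer choice, since `NOT IN` yields no rows at all when the list contains a NULL.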
|
```
SELECT *
FROM `table` WHERE id NOT IN (SELECT id FROM `table` WHERE name2 <> 'E')
```
|
select query -mysql
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have a table called user\_versions. This table doesn't have an identity column due to the requirements. If a user updates their details, a new version is created: a column called version is incremented by 1, the old version is no longer live but is kept (again due to requirements), and the new version goes live.
```
**My Problem**
Simple example
user_versions
------------
user_id|name|email|version
1|John|john@example.com|1
1|John|john@example.com|2
1|John|john@example.com|3
1|John|john@example.com|4
2|Peter|peter@example.com|1
2|Peter|peter@example.com|2
2|Peter|peter@example.com|3
Needed Results!!!
------------
1|John|john@example.com|4
2|Peter|peter@example.com|3
```
If I wanted to return the newest version of every user, how could I do so? I've tried a lot of things and can't figure it out. I am able to do so with two separate queries, but I would really appreciate it if I could return the needed results in one query.
Any help at all would be great thanks.
|
With standard syntax it would be:
```
select uv.*
from user_versions uv
inner join(select user_id, max(version) as version
from user_versions
group by user_id) as t
on uv.user_id = t.user_id and
uv.version = t.version
```
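This join-against-the-grouped-maximum pattern can be checked against the question's sample data with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_versions (user_id INTEGER, name TEXT, email TEXT, version INTEGER)"
)
data = [(1, "John", "john@example.com", v) for v in (1, 2, 3, 4)]
data += [(2, "Peter", "peter@example.com", v) for v in (1, 2, 3)]
conn.executemany("INSERT INTO user_versions VALUES (?, ?, ?, ?)", data)

# Join each user's rows back against their maximum version number.
latest = conn.execute("""
    SELECT uv.user_id, uv.name, uv.version
    FROM user_versions uv
    INNER JOIN (SELECT user_id, MAX(version) AS version
                FROM user_versions
                GROUP BY user_id) t
      ON uv.user_id = t.user_id AND uv.version = t.version
    ORDER BY uv.user_id
""").fetchall()
print(latest)  # one row per user: the highest version only
```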
|
```
with cte as
(
select user_id,name,email,version, row_number() over (partition by user_id order by version desc) as rn
from
user_versions)
select user_id,name, email, version from cte
where rn = 1
```
|
I need to select records based on the value of a column in the same table.
|
[
"",
"sql",
""
] |
I've got a table with close to 7 million rows in it. Here's the table structure
```
CREATE TABLE `ERS_SALES_TRANSACTIONS` (
`saleId` int(12) NOT NULL AUTO_INCREMENT,
`ERS_COMPANY_CODE` int(3) DEFAULT NULL,
`SALE_SECTION` varchar(128) DEFAULT NULL,
`SALE_DATE` date DEFAULT NULL,
`SALE_STOCKAGE_EXACT` int(4) DEFAULT NULL,
`SALE_NET_AMOUNT` decimal(11,2) DEFAULT NULL,
`SALE_ABSOLUTE_CDATE` date DEFAULT NULL,
PRIMARY KEY (`saleId`),
KEY `index_location` (`ERS_COMPANY_CODE`),
KEY `idx-erscode-salesec` (`SALE_SECTION`,`ERS_COMPANY_CODE`) USING BTREE,
  KEY `idx-saledate-section` (`SALE_DATE`,`SALE_SECTION`) USING BTREE,
  KEY `idx_quick_sales_transactions` (`ERS_COMPANY_CODE`,`SALE_SECTION`,`SALE_DATE`,`SALE_STOCKAGE_EXACT`,`SALE_NET_AMOUNT`)
) ENGINE=InnoDB;
```
This query is taking more than 7 secs to execute, is there any way to speed this up?
```
SELECT
A.SALE_SECTION,
SUM(IF(A.SALE_DATE BETWEEN '2016-01-16' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 0 AND 90, A.SALE_NET_AMOUNT, 0)) AS fs1_pd1_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-01-16' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 91 AND 180, A.SALE_NET_AMOUNT, 0)) AS fs2_pd1_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-01-16' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 181 AND 365, A.SALE_NET_AMOUNT, 0)) AS os1_pd1_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-01-16' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 366 AND 9999, A.SALE_NET_AMOUNT, 0)) AS os2_pd1_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-01-16' AND '2016-04-30', A.SALE_NET_AMOUNT, 0)) AS TOTAL_PD1_SALE,
SUM(IF(A.SALE_DATE BETWEEN '2016-04-01' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 0 AND 90, A.SALE_NET_AMOUNT, 0)) AS fs1_pd2_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-04-01' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 91 AND 180, A.SALE_NET_AMOUNT, 0)) AS fs2_pd2_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-04-01' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 181 AND 365, A.SALE_NET_AMOUNT, 0)) AS os1_pd2_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-04-01' AND '2016-04-30'
AND A.SALE_STOCKAGE_EXACT BETWEEN 366 AND 9999, A.SALE_NET_AMOUNT, 0)) AS os2_pd2_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-04-01' AND '2016-04-30', A.SALE_NET_AMOUNT, 0)) AS TOTAL_PD2_SALE,
SUM(IF(A.SALE_DATE BETWEEN '2016-05-01' AND '2016-05-31'
AND A.SALE_ABSOLUTE_CDATE BETWEEN '2016-03-01' AND '2016-05-31', A.SALE_NET_AMOUNT, 0)) AS fs1_achived_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-05-01' AND '2016-05-31'
AND A.SALE_ABSOLUTE_CDATE BETWEEN '2015-12-01' AND '2016-02-29', A.SALE_NET_AMOUNT, 0)) AS fs2_achived_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-05-01' AND '2016-05-31'
AND A.SALE_ABSOLUTE_CDATE BETWEEN '2015-06-01' AND '2015-11-30', A.SALE_NET_AMOUNT, 0)) AS os1_achived_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-05-01' AND '2016-05-31'
AND A.SALE_ABSOLUTE_CDATE BETWEEN '2006-12-26' AND '2015-05-31', A.SALE_NET_AMOUNT, 0)) AS os2_achived_sale,
SUM(IF(A.SALE_DATE BETWEEN '2016-05-01' AND '2016-05-31', A.SALE_NET_AMOUNT, 0)) AS Total_ACHIVED_SALE
FROM ERS_SALES_TRANSACTIONS A WHERE A.ERS_COMPANY_CODE = 48 GROUP BY A.SALE_SECTION
```
Here's Explain query
```
{
"data":
[
{
"id": 1,
"select_type": "SIMPLE",
"table": "A",
"type": "ref",
"possible_keys": "index_location,idx-erscode-salesec,idx-saledate-section",
"key": "index_location",
"key_len": "5",
"ref": "const",
"rows": 1411944,
"Extra": "Using where; Using temporary; Using filesort"
}
]
}
```
After adding the composite index, the time decreased to 4.03 sec. Here's the plan
```
{
"data":
[
{
"id": 1,
"select_type": "SIMPLE",
"table": "A",
"type": "ref",
"possible_keys": "index_location,idx-erscode-salesec,idx-saledate-section,idx_quick_sales_transactions",
"key_len": "5",
"key": "idx_quick_sales_transactions",
"ref": "const",
"rows": 1306058,
"Extra": "Using where"
}
    ]
}
```
|
I don't agree with Jimmy B here. Your query looks perfect in my opinion.
Depending on how many records there are for company 48 either the full table should be read sequentially (when it's many, say, 50% of all table records) or an index on ERS\_COMPANY\_CODE should be used (when it's not that many, say, only 1% of all records).
As the DBMS decided to use the index on ERS\_COMPANY\_CODE, the latter should be the case.
You can try to further speed up the query by creating a composed index. Make that at least `(ERS_COMPANY_CODE , SALE_SECTION)`, so as to have the `GROUP BY` quicker. Better even add all fields, so all data can be gathered from the index and the table itself doesn't have to be accessed any more.
```
CREATE INDEX idx_quick_sales_transactions ON ERS_SALES_TRANSACTIONS
(ERS_COMPANY_CODE, SALE_SECTION, SALE_DATE, SALE_STOCKAGE_EXACT, SALE_NET_AMOUNT);
```
|
I don't know if there is a way to speed this up. But, you can try using an index. I would recommend one on `ERS_SALES_TRANSACTIONS(ERS_COMPANY_CODE, SALE_SECTION, SALE_DATE, SALE_NET_AMOUNT)`.
This is a covering index for the query, meaning that all columns used for the query are in the index -- and hence the data base engine does not need to access the original data pages.
However, the performance still depends on the number of rows that match the particular company code. And, in particular, the performance of the file sort used for aggregation.
|
Mysql Query Optimization
|
[
"",
"mysql",
"sql",
"indexing",
"innodb",
""
] |
I have the following table:
```
custid custname channelid channel dateViewed
--------------------------------------------------------------
1 A 1 ABSS 2016-01-09
2 B 2 STHHG 2016-01-19
3 C 4 XGGTS 2016-01-09
6 D 4 XGGTS 2016-01-09
2 B 2 STHHG 2016-01-26
2 B 2 STHHG 2016-01-28
1 A 3 SSJ 2016-01-28
1 A 1 ABSS 2016-01-28
2 B 2 STHHG 2016-02-02
2 B 7 UUJKS 2016-02-10
2 B 8 AKKDC 2016-02-10
2 B 9 GGSK 2016-02-10
2 B 9 GGSK 2016-02-11
2 B 7 UUJKS 2016-02-27
```
And I want the results to be:
```
custid custname month count
------------------------------
1 A 1 1
2 B 1 1
2 B 2 4
3 C 1 1
6 D 1 1
```
According to the following rules:
* All channel views subscription is billed every 15 days. If the
customer viewed the same channel within the 15 days, he will only be
billed once for that channel. For instance, custid 2, custname B his billing cycle is 19 Jan - 3 Feb (one billing cycle), 4 Feb - 20 Feb (one billing cycle) and so on. Therefore, he is billed only 1 time in Jan since he watch the same channel throughout the billing cycle; and he is billed 4 times in Feb for watching (channelid 7, 8, 9) and channelid 7 watched on 27 Feb (since this falls in another billing cycle, customer B is also charged here). Customer B is not charged on 2 Feb for watching channel 2 since he was already billed in 19 jan - 3 Feb billing cycle.
* An invoice is generated every month for each customer, therefore, the
results should show the 'Month' and the 'Count' of the channels
viewed for each customer.
Can this be done in SQL server?
|
```
;WITH cte AS (
SELECT custid,
custname,
channelid,
channel,
dateViewed,
CAST(DATEADD(day,15,dateViewed) as date) as dateEnd,
ROW_NUMBER() OVER (PARTITION BY custid, channelid ORDER BY dateViewed) AS rn
FROM (VALUES
(1, 'A', 1, 'ABSS', '2016-01-09'),(2, 'B', 2, 'STHHG', '2016-01-19'),
(3, 'C', 4, 'XGGTS', '2016-01-09'),(6, 'D', 4, 'XGGTS', '2016-01-09'),
(2, 'B', 2, 'STHHG', '2016-01-26'),(2, 'B', 2, 'STHHG', '2016-01-28'),
(1, 'A', 3, 'SSJ', '2016-01-28'),(1, 'A', 1, 'ABSS', '2016-01-28'),
(2, 'B', 2, 'STHHG', '2016-02-02'),(2, 'B', 7, 'UUJKS', '2016-02-10'),
(2, 'B', 8, 'AKKDC', '2016-02-10'),(2, 'B', 9, 'GGSK', '2016-02-10'),
(2, 'B', 9, 'GGSK', '2016-02-11'),(2, 'B', 7, 'UUJKS', '2016-02-27')
) as t(custid, custname, channelid, channel, dateViewed)
), res AS (
SELECT custid, channelid, dateViewed, dateEnd, 1 as Lev
FROM cte
WHERE rn = 1
UNION ALL
SELECT c.custid, c.channelid, c.dateViewed, c.dateEnd, lev + 1
FROM res r
INNER JOIN cte c ON c.dateViewed > r.dateEnd and c.custid = r.custid and c.channelid = r.channelid
), final AS (
SELECT * ,
ROW_NUMBER() OVER (PARTITION BY custid, channelid, lev ORDER BY dateViewed) rn,
DENSE_RANK() OVER (ORDER BY custid, channelid, dateEnd) dr
FROM res
)
SELECT b.custid,
b.custname,
MONTH(f.dateViewed) as [month],
COUNT(distinct dr) as [count]
FROM cte b
LEFT JOIN final f
ON b.channelid = f.channelid and b.custid = f.custid and b.dateViewed between f.dateViewed and f.dateEnd
WHERE f.rn = 1
GROUP BY b.custid,
b.custname,
MONTH(f.dateViewed)
```
Output:
```
custid custname month count
----------- -------- ----------- -----------
1 A 1 3
2 B 1 1
2 B 2 4
3 C 1 1
6 D 1 1
(5 row(s) affected)
```
I don't know why you get `1` in `count` field for customer `A`. He got:
```
ABSS 2016-01-09 +1 to count (+15 days = 2016-01-24)
SSJ 2016-01-28 +1 to count
ABSS 2016-01-28 +1 to count (28-01 > 24.01)
```
So in January there must be `count = 3`.
|
I'm not sure how this solution will scale, but with some good index candidates and decent data housekeeping, it'll work.
You're going to need some extra info for starters, and to normalize your data. You will need to know the first charging period start date for each customer. So store that in a customer table.
Here are the tables I used:
```
create table #channelViews
(
custId int, channelId int, viewDate datetime
)
create table #channel
(
channelId int, channelName varchar(max)
)
create table #customer
(
custId int, custname varchar(max), chargingStartDate datetime
)
```
I'll populate some data. I won't get the same results as your sample output, because I don't have the appropriate start dates for each customer. Customer 2 will be OK though.
```
insert into #channel (channelId, channelName)
select 1, 'ABSS'
union select 2, 'STHHG'
union select 4, 'XGGTS'
union select 3, 'SSJ'
union select 7, 'UUJKS'
union select 8, 'AKKDC'
union select 9, 'GGSK'
insert into #customer (custId, custname, chargingStartDate)
select 1, 'A', '4 Jan 2016'
union select 2, 'B', '19 Jan 2016'
union select 3, 'C', '5 Jan 2016'
union select 6, 'D', '5 Jan 2016'
insert into #channelViews (custId, channelId, viewDate)
select 1,1,'2016-01-09'
union select 2,2,'2016-01-19'
union select 3,4,'2016-01-09'
union select 6,4,'2016-01-09'
union select 2,2,'2016-01-26'
union select 2,2,'2016-01-28'
union select 1,3,'2016-01-28'
union select 1,1,'2016-01-28'
union select 2,2,'2016-02-02'
union select 2,7,'2016-02-10'
union select 2,8,'2016-02-10'
union select 2,9,'2016-02-10'
union select 2,9,'2016-02-11'
union select 2,7,'2016-02-27'
```
And here is the somewhat unwieldy query, in a single statement.
The two underlying sub-queries are actually the same data, so there may be more appropriate / efficient ways to generate these.
We need to exclude from billing any channel charged in the same charging period C for the previous Month. This is the essence of the join. I used a right-join so that I could exclude all such matches from the results (using `old.custId is null`).
```
select c.custId, c.[custname], [month], count(*) [count] from
(
select new.custId, new.channelId, new.month, new.chargingPeriod
from
(
select distinct cv.custId, cv.channelId, month(viewdate) [month], (convert(int, cv.viewDate) - convert(int, c.chargingStartDate))/15 chargingPeriod
from #channelViews cv join #customer c on cv.custId = c.custId
) old
right join
(
select distinct cv.custId, cv.channelId, month(viewdate) [month], (convert(int, cv.viewDate) - convert(int, c.chargingStartDate))/15 chargingPeriod
from #channelViews cv join #customer c on cv.custId = c.custId
) new
on old.custId = new.custId
and old.channelId = new.channelId
and old.month = new.Month -1
and old.chargingPeriod = new.chargingPeriod
where old.custId is null
group by new.custId, new.month, new.chargingPeriod, new.channelId
) filteredResults
join #customer c on c.custId = filteredResults.custId
group by c.custId, [month], c.custname
order by c.custId, [month], c.custname
```
And finally my results:
```
custId custname month count
1 A 1 3
2 B 1 1
2 B 2 4
3 C 1 1
6 D 1 1
```
This query does the same thing:
```
select c.custId, c.custname, [month], count(*) from
(
select cv.custId, min(month(viewdate)) [month], cv.channelId
from #channelViews cv join #customer c on cv.custId = c.custId
group by cv.custId, cv.channelId, (convert(int, cv.viewDate) - convert(int, c.chargingStartDate))/15
) x
join #customer c
on c.custId = x.custId
group by c.custId, c.custname, x.[month]
order by custId, [month]
```
|
How to COUNT rows according to specific complicated rules?
|
[
"",
"sql",
"sql-server",
"count",
"rows",
""
] |
Hi, I need help optimizing a query over a large table with more than 1 million records. The current query takes 27-30 seconds to execute.
```
SELECT SQL_CALC_FOUND_ROWS
candidate.candidate_id AS candidateID,
candidate.candidate_id AS exportID,
candidate.is_hot AS isHot,
candidate.date_modified AS dateModifiedSort,
candidate.date_created AS dateCreatedSort,
candidate.first_name AS firstName,
candidate.last_name AS lastName,
candidate.city AS city,
candidate.state AS state,
candidate.key_skills AS keySkills,
owner_user.first_name AS ownerFirstName,
owner_user.last_name AS ownerLastName,
CONCAT(owner_user.last_name,
owner_user.first_name) AS ownerSort,
DATE_FORMAT(candidate.date_created, '%m-%d-%y') AS dateCreated,
DATE_FORMAT(candidate.date_modified, '%m-%d-%y') AS dateModified,
candidate.email2 AS email2 FROM
candidate
LEFT JOIN
user AS owner_user ON candidate.owner = owner_user.user_id
LEFT JOIN
saved_list_entry ON saved_list_entry.data_item_type = 100
AND saved_list_entry.data_item_id = candidate.candidate_id WHERE
is_active = 1 GROUP BY candidate.candidate_id ORDER BY dateModifiedSort
DESC LIMIT 0 , 15
```
Is there any way to reduce the execution time of the query? I have also added indexes to the tables, but it did not help much.
[](https://i.stack.imgur.com/zS306.png)
|
I've changed the table alias in the query below; use this. It should solve your problem.
```
SELECT SQL_CALC_FOUND_ROWS
candidate.candidate_id AS candidateID,
candidate.candidate_id AS exportID,
candidate.is_hot AS isHot,
candidate.date_modified AS dateModifiedSort,
candidate.date_created AS dateCreatedSort,
candidate.first_name AS firstName,
candidate.last_name AS lastName,
candidate.city AS city,
candidate.state AS state,
candidate.key_skills AS keySkills,
user.first_name AS ownerFirstName,
user.last_name AS ownerLastName,
CONCAT(user.last_name,
user.first_name) AS ownerSort,
DATE_FORMAT(candidate.date_created, '%m-%d-%y') AS dateCreated,
DATE_FORMAT(candidate.date_modified, '%m-%d-%y') AS dateModified,
candidate.email2 AS email2 FROM
candidate
LEFT JOIN
user ON candidate.owner = user.user_id
LEFT JOIN
saved_list_entry ON saved_list_entry.data_item_type = 100
AND saved_list_entry.data_item_id = candidate.candidate_id WHERE
is_active = 1 GROUP BY candidate.candidate_id ORDER BY dateModifiedSort
DESC LIMIT 0 , 15
```
Use the queries below to create indexes for the **join conditions**. Note that the last index uses the underlying `date_modified` column, not the `dateModifiedSort` alias (an alias from the SELECT list cannot be indexed):
```
create index index_user on user(user_id);
create index index_saved_list_entry on saved_list_entry(data_item_type, data_item_id);
create index index_candidate on candidate(is_active, candidate_id, date_modified);
```
|
1. Get rid of `saved_list_entry`, it adds nothing.
2. Delay joining to `user`. This will let you get rid of the `GROUP BY`, which is adding a bunch of time, and possibly inflating the value of `FOUND_ROWS()`.
Something like:
```
SELECT c2.*,
ou.first_name AS ownerFirstName,
ou.last_name AS ownerLastName,
CONCAT(ou.last_name, ou.first_name) AS ownerSort,
FROM
( SELECT SQL_CALC_FOUND_ROWS
c.candidate_id AS candidateID, c.candidate_id AS exportID,
c.is_hot AS isHot, c.date_modified AS dateModifiedSort,
c.date_created AS dateCreatedSort, c.first_name AS firstName,
c.last_name AS lastName, c.city AS city, c.state AS state,
c.key_skills AS keySkills,
DATE_FORMAT(c.date_created, '%m-%d-%y') AS dateCreated,
DATE_FORMAT(c.date_modified, '%m-%d-%y') AS dateModified,
c.email2 AS email2
FROM candidate AS c
WHERE is_active = 1
GROUP BY c.candidate_id
ORDER BY c.date_modified DESC -- note change here
LIMIT 0 , 15
) AS c2
LEFT JOIN user AS ou ON c2.owner = ou.user_id;
```
(I messed up the column order, but you can fix that.)
Index needed:
```
candidate: INDEX(is_active, candidate_id, date_modified)
```
|
Query Optimization for large database
|
[
"",
"mysql",
"sql",
"query-optimization",
""
] |
I am trying to retrieve the product with the highest price:
```
SELECT ProductName, Price
FROM [Products]
ORDER BY Price DESC
LIMIT 1
```
I wanted to know if there is a more efficient way of doing this, with MAX for example.
|
Use MAX and GROUP BY
```
SELECT ProductName, MAX(Price) [Price]
FROM [Products]
GROUP BY ProductName
ORDER BY MAX(Price) DESC
LIMIT 1;
```
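Both approaches can be compared side by side in SQLite with made-up data; note that a `MAX` subquery variant returns every row tied for the top price, while `LIMIT 1` returns exactly one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ProductName TEXT, Price REAL)")
conn.executemany("INSERT INTO Products VALUES (?, ?)",
                 [("pen", 1.5), ("laptop", 900.0), ("mug", 4.0)])

# Approach 1: sort and take the first row.
by_order = conn.execute(
    "SELECT ProductName, Price FROM Products ORDER BY Price DESC LIMIT 1"
).fetchone()

# Approach 2: filter against the aggregate maximum.
by_max = conn.execute(
    "SELECT ProductName, Price FROM Products "
    "WHERE Price = (SELECT MAX(Price) FROM Products)"
).fetchone()
```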
|
I've always done it with the following
```
SELECT top 1 Name
FROM tableName
ORDER BY Price DESC
```
|
SQL - getting record with maximum value
|
[
"",
"sql",
"max",
""
] |
I'm writing code to manage users in a sqlite database with Go.
I'm trying to check if a username is taken, but my code is ugly.
My table looks like:
```
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE,
password TEXT
```
And I check if a username is taken with:
```
func UserExists(db * sql.DB, username string) bool {
sqlStmt := `SELECT username FROM userinfo WHERE username = ?`
count := 0
rows, err := db.Query(sqlStmt, username)
Check(err)
for rows.Next() { // Can I just check if rows is non-zero somehow?
count++
}
return count != 0
}
```
Is there a better query I could use that would tell me whether the username exists in the table in a more straightforward way? Or is there a nicer way to check if `rows` is non-empty?
|
Use [`QueryRow`](https://golang.org/pkg/database/sql/#DB.QueryRow) to query at most one row. If the query doesn't return any row, it returns `sql.ErrNoRows`.
```
func UserExists(db * sql.DB, username string) bool {
sqlStmt := `SELECT username FROM userinfo WHERE username = ?`
err := db.QueryRow(sqlStmt, username).Scan(&username)
if err != nil {
if err != sql.ErrNoRows {
// a real error happened! you should change your function return
// to "(bool, error)" and return "false, err" here
log.Print(err)
}
return false
}
return true
}
```
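If you'd rather let the database do the work, a `SELECT EXISTS(...)` query always returns exactly one 0/1 row, no matter how many matches there are. A minimal sketch of that idea, written with Python's built-in sqlite3 rather than Go just to show the query shape (table and column names follow the question; the sample user is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE userinfo ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " username TEXT UNIQUE,"
    " password TEXT)"
)
conn.execute("INSERT INTO userinfo (username, password) VALUES ('alice', 'secret')")

def user_exists(conn, username):
    # EXISTS stops at the first match and always yields a single row
    row = conn.execute(
        "SELECT EXISTS(SELECT 1 FROM userinfo WHERE username = ?)",
        (username,),
    ).fetchone()
    return bool(row[0])

print(user_exists(conn, "alice"))  # True
print(user_exists(conn, "bob"))    # False
```

The same `SELECT EXISTS` statement works unchanged with Go's `QueryRow(...).Scan(&flag)`.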
|
I know this is a bit old, but I don't see any clean answers here. Notice the `rows.Next()` call below, which returns a boolean indicating whether there are any rows:
```
package main
import (
"database/sql"
_ "github.com/mattn/go-sqlite3"
"log"
)
func main() {
exists, _ := SelectDBRowExists(`SELECT * FROM GEO_VELOCITY_EVENTS WHERE USERNAME='bob'`)
log.Println(exists)
}
func SelectDBRowExists(query string) (bool, error) {
DbConn, err := sql.Open("sqlite3", "/path/to/your/sql.sqlite3")
if err != nil {
return false, err
}
defer DbConn.Close()
err = DbConn.Ping()
if err != nil {
return false, err
}
rows, err := DbConn.Query(query)
if err != nil {
return false, err
}
defer rows.Close()
// rows.Next() reports whether at least one row was returned
return rows.Next(), nil
}
```
|
Checking if a value exists in sqlite db with Go
|
[
"",
"sql",
"sqlite",
"go",
""
] |
In this example, I'm trying to return a list of values (in this case, company names) that have no entries in another table (entries in this case meaning invoices). In other words, I'm trying to return a list of companies that have no invoices. Here is my code:
```
Select CompanyName
From tblCompany join tblInvoice
ON tblCompany.CompanyID = tblInvoice.CompanyID
Where tblCompany.CompanyID NOT IN
(Select CompanyID
From tblInvoice)
```
What I'm trying to get is this:
[Desired Results](https://i.stack.imgur.com/WCpKP.jpg)
However, when I run the code, no values show up. Can anyone tell me why?
|
Use left join and filter on nulls:
```
select CompanyName
from tblCompany
left join tblInvoice on tblCompany.CompanyID = tblInvoice.CompanyID
where tblInvoice.CompanyID is null
```
This works because missed joins return nulls in the joined table's values.
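The anti-join can be checked quickly with Python's sqlite3; the table and column names come from the question, while the sample companies and invoices are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblCompany (CompanyID INTEGER, CompanyName TEXT);
CREATE TABLE tblInvoice (InvoiceID INTEGER, CompanyID INTEGER);
INSERT INTO tblCompany VALUES (1, 'Acme'), (2, 'Globex'), (3, 'Initech');
INSERT INTO tblInvoice VALUES (10, 1), (11, 1);  -- only Acme has invoices
""")

# Companies whose LEFT JOIN produced no invoice row come back with NULLs
rows = conn.execute("""
    SELECT CompanyName
    FROM tblCompany
    LEFT JOIN tblInvoice ON tblCompany.CompanyID = tblInvoice.CompanyID
    WHERE tblInvoice.CompanyID IS NULL
    ORDER BY CompanyName
""").fetchall()
print(rows)  # [('Globex',), ('Initech',)]
```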
|
Try this
```
Select CompanyName
From tblCompany
Where tblCompany.CompanyID NOT IN
( Select CompanyID
From tblInvoice)
```
That is get all the `CompanyName` from `tblCompany` where the `CompanyID` not exists in the `tblInvoice`.
Or you can try the below one,
```
select CompanyName
from tblCompany
left join tblInvoice on tblCompany.CompanyID = tblInvoice.CompanyID
where tblInvoice.CompanyID is null
```
|
Return Values from one table that have no entries in another?
|
[
"",
"sql",
"sql-server",
""
] |
I have the following table:
```
ITEM DATE VALUE
----------------------
ITEM1 2016-05-04 1
ITEM1 2016-05-05 3
ITEM1 2016-05-06 3
ITEM1 2016-05-09 3
ITEM1 2016-05-04 4
ITEM2 2016-05-10 1
ITEM2 2016-05-05 2
ITEM2 2016-05-06 3
ITEM2 2016-05-09 1
ITEM2 2016-05-10 1
```
And I want to get out, per item, how many entries back in time the value column has been the same (flat):
```
ITEM DATE VALUE NUM_FLAT_ENTRYPOINTS
------------------------------
ITEM1 2016-05-04 1 0
ITEM1 2016-05-05 3 0
ITEM1 2016-05-06 3 1
ITEM1 2016-05-09 3 2
ITEM1 2016-05-10 4 0
ITEM2 2016-05-04 1 0
ITEM2 2016-05-05 2 0
ITEM2 2016-05-06 3 0
ITEM2 2016-05-09 1 0
ITEM2 2016-05-10 1 1
```
My initial though would be:
```
select
*,
  rank() over (partition by ITEM,VALUE order by DATE) - 1 as NUM_FLAT_ENTRYPOINTS
from my_table
```
This, however, does not work as ITEM2 would partition 2016-05-04, 2016-05-09 and 2016-05-10 together and show 2 instead of 1 for NUM\_FLAT\_ENTRYPOINTS for the last line.
I am using Microsoft SQL Server 2008.
Any ideas?
Edit:
In Oracle (and possible other SQL Servers) it seems I can just do
```
select
  count(VALUE) over (partition by ITEM,VALUE order by DATE) - 1 as NUM_FLAT_ENTRYPOINTS
from my_table
```
but as far as I can tell this syntax does not work in SQL Server 2008. Any way to work around it?
|
It looks like a variation of gaps-and-islands.
**Sample data**
```
DECLARE @T TABLE (ITEM varchar(50), dt date, VALUE int);
INSERT INTO @T(ITEM, dt, VALUE) VALUES
('ITEM1', '2016-05-04', 1),
('ITEM1', '2016-05-05', 3),
('ITEM1', '2016-05-06', 3),
('ITEM1', '2016-05-09', 3),
('ITEM1', '2016-05-10', 4),
('ITEM2', '2016-05-04', 1),
('ITEM2', '2016-05-05', 2),
('ITEM2', '2016-05-06', 3),
('ITEM2', '2016-05-09', 1),
('ITEM2', '2016-05-10', 1);
```
**Query**
```
WITH
CTE
AS
(
SELECT
ITEM
,dt
,VALUE
,ROW_NUMBER() OVER (PARTITION BY ITEM ORDER BY dt) AS rn1
,ROW_NUMBER() OVER (PARTITION BY ITEM, VALUE ORDER BY dt) AS rn2
FROM @T
)
SELECT
ITEM
,dt
,VALUE
,rn1-rn2 AS rnDiff
,ROW_NUMBER() OVER
(PARTITION BY ITEM, VALUE, rn1-rn2 ORDER BY dt) - 1 AS NUM_FLAT_ENTRYPOINTS
FROM CTE
ORDER BY ITEM, dt;
```
**Result**
```
+-------+------------+-------+--------+----------------------+
| ITEM | dt | VALUE | rnDiff | NUM_FLAT_ENTRYPOINTS |
+-------+------------+-------+--------+----------------------+
| ITEM1 | 2016-05-04 | 1 | 0 | 0 |
| ITEM1 | 2016-05-05 | 3 | 1 | 0 |
| ITEM1 | 2016-05-06 | 3 | 1 | 1 |
| ITEM1 | 2016-05-09 | 3 | 1 | 2 |
| ITEM1 | 2016-05-10 | 4 | 4 | 0 |
| ITEM2 | 2016-05-04 | 1 | 0 | 0 |
| ITEM2 | 2016-05-05 | 2 | 1 | 0 |
| ITEM2 | 2016-05-06 | 3 | 2 | 0 |
| ITEM2 | 2016-05-09 | 1 | 2 | 0 |
| ITEM2 | 2016-05-10 | 1 | 2 | 1 |
+-------+------------+-------+--------+----------------------+
```
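The `rn1 - rn2` trick generalizes: the two row numberings drift apart exactly when the value changes, so their difference is constant within each consecutive run. A small sqlite3 sketch of the same idea for ITEM1 (window functions need SQLite 3.25+, bundled with Python 3.7+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (item TEXT, dt TEXT, value INTEGER);
INSERT INTO t VALUES
 ('ITEM1','2016-05-04',1), ('ITEM1','2016-05-05',3), ('ITEM1','2016-05-06',3),
 ('ITEM1','2016-05-09',3), ('ITEM1','2016-05-10',4);
""")

rows = conn.execute("""
WITH cte AS (
  SELECT item, dt, value,
         -- difference of the two numberings identifies each run of equal values
         ROW_NUMBER() OVER (PARTITION BY item ORDER BY dt) -
         ROW_NUMBER() OVER (PARTITION BY item, value ORDER BY dt) AS grp
  FROM t
)
SELECT item, dt, value,
       ROW_NUMBER() OVER (PARTITION BY item, value, grp ORDER BY dt) - 1
FROM cte
ORDER BY item, dt
""").fetchall()
for r in rows:
    print(r)  # flat-entry counts come out as 0, 0, 1, 2, 0
```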
|
Assuming the correction to the sample data I suggested in the comments, this seems to fit the bill:
```
declare @t table (ITEM char(5), Date date, Value tinyint)
insert into @t(ITEM,DATE,VALUE) values
('ITEM1','20160504',1),
('ITEM1','20160505',3),
('ITEM1','20160506',3),
('ITEM1','20160509',3),
('ITEM1','20160510',4),
('ITEM2','20160504',1),
('ITEM2','20160505',2),
('ITEM2','20160506',3),
('ITEM2','20160509',1),
('ITEM2','20160510',1)
;With Ordered as (
select
Item,
Date,
Value,
ROW_NUMBER() OVER (PARTITION BY Item ORDER BY Date) as rn
from @t
)
select
*,
COALESCE(rn -
(select MAX(o2.rn) from Ordered o2
where o2.ITEM = o.ITEM and
o2.rn < o.rn and
o2.Value != o.Value) - 1
, o.rn - 1) as NUM_FLAT_ENTRYPOINTS
from
Ordered o
```
That is, we assign row numbers (separately for each item), and then we simply find the latest row number earlier than the current one where `Value` is different. Subtracting these row numbers (and a further 1) produces the answer we need - assuming such an earlier row can be found. If there's no such earlier row then we're obviously in a sequence that's at the start for a particular item - so we just subtract 1 from the row number.
I've gone for "obviously correct" here - it's possible that there's a way to produce the result that may perform better but I'm not aiming for that right now.
Results:
```
Item Date Value rn NUM_FLAT_ENTRYPOINTS
----- ---------- ----- -------------------- --------------------
ITEM1 2016-05-04 1 1 0
ITEM1 2016-05-05 3 2 0
ITEM1 2016-05-06 3 3 1
ITEM1 2016-05-09 3 4 2
ITEM1 2016-05-10 4 5 0
ITEM2 2016-05-04 1 1 0
ITEM2 2016-05-05 2 2 0
ITEM2 2016-05-06 3 3 0
ITEM2 2016-05-09 1 4 0
ITEM2 2016-05-10 1 5 1
```
|
Assign sequential numbers to rows with repeating values
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"window-functions",
""
] |
I have a dataset with some comments that would exclude subjects, and I want to make a mini dataset collecting these subjects.
I'm trying to use SAS SQL for this, so I tried the following:
```
PROC SQL;
CREATE TABLE EXCLUDE as
SELECT *
FROM data_set
WHERE UPCASE(COMMENT) like '%(INELIGIBLE | REFUSED)%';
QUIT;
```
I also tried
```
PROC SQL;
CREATE TABLE exclude as
SELECT *
FROM Data_set
WHERE UPCASE(COMMENT) like ('%INELIGIBLE%'|'%REFUSED%')
;
QUIT;
```
I keep getting an error that says 'LIKE OPERATOR Requires character operands'
How can I write this query with the proper syntax?
Thanks
|
You could do it via a like-join against a list of the terms to exclude :
```
data words ;
input word $char16. ;
datalines ;
INELIGIBLE
REFUSED
;
run ;
proc sql ;
create table exclude as
select a.*
from data_set a
left join
words b on upcase(a.comment) like cats('%',b.word,'%')
where missing(b.word) ;
quit ;
```
|
You can use Perl regular expressions to do this if you're working with a string that is already formed. (If not, you're better off just writing the separate syntax; PRXs are slow.)
Equivalent code here, one written out, one with a PRX using a single string:
```
proc sql;
select *
from sashelp.class
where not (name like 'A%' or name like 'B%');
quit;
proc sql;
select *
from sashelp.class
where not (prxmatch('~^[AB]~io',name));
quit;
```
|
SAS SQL: WHERE LIKE 'list of words'
|
[
"",
"sql",
"sas",
""
] |
I'm new to SQL and have a large database that contains IDs and Service Dates and I need to write a query to give me the first date each ID had a service.
I tried:
```
SELECT dbo.table.ID, dbo.otherTable.ServiceDate AS EasliestDate
FROM dbo.table INNER JOIN dbo.otherTable ON dbo.table.ID = dbo.otherTable.ID
```
But the output is every service for every ID, which has too many results to sort through. I want the output to only show the ID and the oldest service date. Any advice is appreciated.
EDIT: To be more precise, the output I am looking for is the ID and service date if the oldest service date is during the year that I specify. I.E. if ID = 1 has a service in 2015 and 2016 and I am searching for IDs in 2016 then ID = 1 should not appear in the results because there was an earlier service in 2015.
EDIT: Thanks everyone who helped with this! The answer I accepted did exactly what I asked. Major kudos to Patty though who who elaborated on how to further filter the outcome by year.
|
Use [`GROUP BY`](https://msdn.microsoft.com/en-us/library/ms177673.aspx) and [`MIN`](https://msdn.microsoft.com/en-GB/library/ms179916.aspx) to get the first date for each ID:
```
SELECT dbo.table.ID,
MIN(dbo.otherTable.ServiceDate) AS EasliestDate
FROM dbo.table
INNER JOIN otherTable
ON dbo.table.ID = dbo.otherTable.ID
GROUP BY dbo.table.ID;
```
---
**ADDENDUM**
In reference to a question in the comments:
> how would I also restrict it to show only those who had a service in a specific year?
It would depend on your exact requirements, consider the following set:
```
ID ServiceDate
--------------------
1 2014-05-01
1 2015-08-01
1 2016-07-07
2 2015-08-19
```
You would only want to include ID = 1 if the year you specified was 2016, but assuming you still wanted to return the first date of `2014-05-01` then you would need to add a having clause with a case statement to get this.
```
DECLARE @Year INT = 2016;
DECLARE @YearStart DATE = DATEADD(YEAR, @Year - 1900, '19000101'),
@YearEnd DATE = DATEADD(YEAR, @Year - 1900 + 1, '19000101');
SELECT @YearStart, @YearEnd
SELECT t.ID,
MIN(o.ServiceDate) AS EasliestDate
FROM dbo.table AS t
INNER JOIN otherTable AS o
       ON o.ID = t.ID
GROUP BY t.ID
HAVING COUNT(CASE WHEN o.ServiceDate >= @YearStart
AND o.ServiceDate < @YearEnd THEN 1 END) > 0;
```
If you only want the earliest date in 2016 then a where clause would suffice:
```
DECLARE @Year INT = 2016;
DECLARE @YearStart DATE = DATEADD(YEAR, @Year - 1900, '19000101'),
@YearEnd DATE = DATEADD(YEAR, @Year - 1900 + 1, '19000101');
SELECT @YearStart, @YearEnd
SELECT t.ID,
MIN(o.ServiceDate) AS EasliestDate
FROM dbo.table AS t
INNER JOIN otherTable AS o
       ON o.ID = t.ID
WHERE o.ServiceDate >= @YearStart
AND o.ServiceDate < @YearEnd
GROUP BY t.ID;
```
It is worth noting there is a very good reason I have chosen to calculate the start of the year, and the start of the next year and used
```
WHERE o.ServiceDate >= @YearStart
AND o.ServiceDate < @YearEnd
```
Instead of just
```
WHERE DATEPART(YEAR, o.ServiceDate) = 2016;
```
In the former, an index on `ServiceDate` can be used, whereas in the latter the `DATEPART` calculation must be done on every record, and this can cause significant performance issues.
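Both predicates select the same rows; the half-open range just lets the engine seek on an index instead of evaluating a function per row. An illustrative equivalence check with sqlite3, where `strftime` stands in for `DATEPART` and the sample dates are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE svc (id INTEGER, ServiceDate TEXT)")
conn.executemany(
    "INSERT INTO svc VALUES (?, ?)",
    [(1, "2015-12-31"), (2, "2016-01-01"), (3, "2016-12-31"), (4, "2017-01-01")],
)

# Sargable half-open range: can use an index on ServiceDate
range_rows = conn.execute(
    "SELECT id FROM svc"
    " WHERE ServiceDate >= '2016-01-01' AND ServiceDate < '2017-01-01'"
    " ORDER BY id"
).fetchall()

# Function on the column: same rows, but forces a per-row computation
func_rows = conn.execute(
    "SELECT id FROM svc WHERE strftime('%Y', ServiceDate) = '2016' ORDER BY id"
).fetchall()

print(range_rows == func_rows)  # True
```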
**ADDENDUM 2**
To do the following:
> The exact thing I want then would be IDs who's earliest service is in the year I specify.
Then you would need a having clause, just a different one to the one I posted before:
```
DECLARE @Year INT = 2016;
DECLARE @YearStart DATE = DATEADD(YEAR, @Year - 1900, '19000101'),
@YearEnd DATE = DATEADD(YEAR, @Year - 1900 + 1, '19000101');
SELECT @YearStart, @YearEnd
SELECT t.ID,
MIN(o.ServiceDate) AS EasliestDate
FROM dbo.table AS t
INNER JOIN otherTable AS o
       ON o.ID = t.ID
GROUP BY t.ID
HAVING MIN(o.ServiceDate) >= @YearStart
AND MIN(o.ServiceDate) < @YearEnd;
```
**ADDENDUM 3**
```
CREATE VIEW dbo.YourView
AS
SELECT dbo.table.ID,
MIN(dbo.otherTable.ServiceDate) AS EasliestDate
FROM dbo.table
INNER JOIN otherTable
ON dbo.table.ID = dbo.otherTable.ID
GROUP BY dbo.table.ID;
```
Then you can apply your criteria to the view:
```
SELECT *
FROM dbo.YourView
WHERE EasliestDate >= '2015-01-01'
AND EasliestDate < '2016-01-01';
```
|
You have to include a WHERE in your current query:
```
SELECT dbo.table.ID, dbo.otherTable.ServiceDate AS EasliestDate
FROM dbo.table INNER JOIN dbo.otherTable ON dbo.table.ID = dbo.otherTable.ID
WHERE Month(dbo.otherTable.ServiceDate) = 1
```
Or you can search with Year(dbo.otherTable.ServiceDate) = 2016
Or you can use Day(dbo.otherTable.ServiceDate) = 1
Or an specific date.
|
SQL Query to get oldest date
|
[
"",
"sql",
"sql-server-2008",
""
] |
```
CREATE TABLE #EmpPcodes
(
YearMonth INT,
YEAR INT,
MONTH INT,
RunNo INT,
Amount NUMERIC(18, 3),
GroupCode NvarCHAR(30),
GroupName NvarCHAR(250),
GroupAName NvarCHAR(250),
PayrollGroup INT,
EmployeeId INT
)
CREATE TABLE #pArrangeAllcode
(
YearMonth INT,
YEAR INT,
MONTH INT,
RunNo INT,
Amount NUMERIC(18, 3),
GroupCode NvarCHAR(30),
GroupName NvarCHAR(250),
GroupAName NvarCHAR(250),
PayrollGroup INT,
EmployeeId INT,
CodeArrange INT
)
INSERT INTO #pArrangeAllcode
SELECT YearMonth, YEAR, MONTH, RunNo, Amount, GroupCode, GroupName,
GroupAName, PayrollGroup, EmployeeId,
FROM dbo.#EmpPcodes
SELECT * FROM #pArrangeAllcode
```
I expect to get the data from the #EmPcodes temp table to #pArrangeAllcode
but it has extra column that I'll use it later ... I always get this error :
```
Insert Error: Column name or number of supplied values does not match table definition.
```
Any Help ?
|
You are missing one column in the insert statement (the new `CodeArrange` column also needs a value), and there is a stray trailing comma after `EmployeeId`.
Use this:
```
INSERT INTO #pArrangeAllcode
SELECT YearMonth, YEAR, MONTH, RunNo, Amount, GroupCode, GroupName,
GroupAName, PayrollGroup, EmployeeId,NULL
FROM dbo.#EmpPcodes
```
|
Use below query for inserting records.
```
INSERT INTO #pArrangeAllcode
SELECT YearMonth, YEAR, MONTH, RunNo, Amount, GroupCode, GroupName,
GroupAName, PayrollGroup, EmployeeId,NULL
FROM dbo.#EmpPcodes
```
|
How to add temp table to another temp table with extra column
|
[
"",
"sql",
"sql-server",
"dynamic-sql",
""
] |
Hi SQL'ers. Though I've been dealing with SQL for the past 4 years, I ran into a slightly complex situation today.
I have a MySQL table with columns id, movieid, Lang, comment, and timestamp. I want to render comments for a particular movie, where the user can decide the range of comments he wants for that movie, but the following query does not work. Help me out, I'm stuck.
```
SELECT comment from aaa where movieid=2 and language='en' between6 to 10
```
Basically I need to fetch the 6th through 10th comments from my aaa table (only for the passed movieid parameter). All help will be appreciated.
|
As far as I understand you need to fetch comments 6 through 10 (5 rows, starting from the 6th record), so you want `LIMIT` with an offset of 5:
```
SELECT
    `comment`
FROM
    aaa
WHERE
    movieid = 2
    AND `language` = 'en'
LIMIT 5, 5
```
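The `LIMIT offset, count` form skips `offset` rows and then returns up to `count` rows, so an offset of 5 starts at the 6th row. A quick demonstration with sqlite3 (which accepts the same comma syntax; the twelve sample comments are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE aaa (id INTEGER PRIMARY KEY, movieid INTEGER,"
    " language TEXT, comment TEXT)"
)
conn.executemany(
    "INSERT INTO aaa (movieid, language, comment) VALUES (2, 'en', ?)",
    [("comment %d" % i,) for i in range(1, 13)],
)

# Skip 5 rows, return the next 5 -- i.e. the 6th through 10th comments
rows = conn.execute(
    "SELECT comment FROM aaa"
    " WHERE movieid = 2 AND language = 'en'"
    " ORDER BY id LIMIT 5, 5"
).fetchall()
print([r[0] for r in rows])
```

Note the `ORDER BY`: without it, "6th comment" is not well defined.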
|
You can try this:
```
SELECT comment from aaa
where movieid=2 and language='en'
and id between 6 and 10
```
|
this sql made me little complex situation
|
[
"",
"mysql",
"sql",
""
] |
I have an sql table:
```
DATE USER FLAGGED COMMAND
1 Alice 0 sudo gparted
2 Bob 1 sudo
3 Bob 0 mv
4 Alice 1 sudo rm -rf
5 Charlie 1 sudo chown
```
I want to select last flagged action of a user
```
DATE USER FLAGGED COMMAND
2 Bob 1 sudo
4 Alice 1 sudo rm -rf
5 Charlie 1 sudo chown
```
I tried
```
select DATE, USER, REQUEST
from
(
select DATE, USER, REQUEST, FLAGGED,
row_number() over(partition by USER order by date desc) rn
from USERDATA
) src
where rn = 1 and FLAGGED = 1
```
but it omits Bob. What is the correct way to add "FLAGGED = 1" to the SQL request?
|
Here you first select the latest entry per user and then drop those that are not flagged. Instead you should first select all entries that are FLAGGED and then find the most recent of them. This code should work:
```
select DATE, USER, REQUEST
from
(
select DATE, USER, REQUEST, FLAGGED,
row_number() over(partition by USER order by DATE desc) rn
from USERDATA WHERE FLAGGED = 1
) src
where rn = 1
```
|
Move the flagged condition to the sub-query, so only the rows with flagged=1 will be retrieved and you can select the latest one with rn=1 condition.
```
select DATE, USER, REQUEST
from
(
select DATE, USER, REQUEST, FLAGGED,
row_number() over(partition by USER order by date desc) rn
from USERDATA
where flagged = 1
) src
where rn = 1
```
|
SQL select latest entry in the category among visible
|
[
"",
"sql",
""
] |
I have a problem with my REGEXP expression, which I want to loop so that every iteration deletes the text after the last slash. My expression looks like this now:
```
REGEXP_SUBSTR('L1161148/1/10', '.*(/)')
```
I'm getting L1161148/1/ instead of L1161148/1
|
You said you wanted to loop.
CAVEAT: Both of these solutions assume there are no NULL list elements (all slashes have a value in between them).
```
SQL> with tbl(data) as (
select 'L1161148/1/10' from dual
)
select level, nvl(substr(data, 1, instr(data, '/', 1, level)-1), data) formatted
from tbl
connect by level <= regexp_count(data, '/') + 1 -- Loop # of delimiters +1 times
order by level desc;
LEVEL FORMATTED
---------- -------------
3 L1161148/1/10
2 L1161148/1
1 L1161148
SQL>
```
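The connect-by loop above enumerates every prefix that ends just before a slash; in a host language that is a simple split-and-rejoin. A hypothetical Python equivalent for a single string:

```python
def slash_prefixes(s):
    """All prefixes of s obtained by repeatedly dropping the last /-separated part."""
    parts = s.split("/")
    # longest first, matching the ORDER BY level DESC output above
    return ["/".join(parts[:i]) for i in range(len(parts), 0, -1)]

print(slash_prefixes("L1161148/1/10"))
# ['L1161148/1/10', 'L1161148/1', 'L1161148']
```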
EDIT: To handle multiple rows:
```
SQL> with tbl(rownbr, col1) as (
select 1, 'L1161148/1/10/2/34/5/6' from dual
union
select 2, 'ALKDFJV1161148/123/456/789/1/2/3' from dual
)
SELECT rownbr, column_value substring_nbr,
nvl(substr(col1, 1, instr(col1, '/', 1, column_value)-1), col1) formatted
FROM tbl,
TABLE(
CAST(
MULTISET(SELECT LEVEL
FROM dual
CONNECT BY LEVEL <= REGEXP_COUNT(col1, '/')+1
) AS sys.OdciNumberList
)
)
order by rownbr, substring_nbr desc
;
ROWNBR SUBSTRING_NBR FORMATTED
---------- ------------- --------------------------------
1 7 L1161148/1/10/2/34/5/6
1 6 L1161148/1/10/2/34/5
1 5 L1161148/1/10/2/34
1 4 L1161148/1/10/2
1 3 L1161148/1/10
1 2 L1161148/1
1 1 L1161148
2 7 ALKDFJV1161148/123/456/789/1/2/3
2 6 ALKDFJV1161148/123/456/789/1/2
2 5 ALKDFJV1161148/123/456/789/1
2 4 ALKDFJV1161148/123/456/789
2 3 ALKDFJV1161148/123/456
2 2 ALKDFJV1161148/123
2 1 ALKDFJV1161148
14 rows selected.
SQL>
```
|
You can try removing the string after the last slash:
```
select regexp_replace('L1161148/1/10', '/([^/]*)$', '') from dual
```
|
Regexp_substr expression
|
[
"",
"sql",
"oracle",
""
] |
My table structure is like below
```
TblMemberInfo | TblCarInfo
MemberID Name | Id MemberId CarNumber
1 Sandeep | 1 2 1234
2 Vishal | 2 1 1111
3 John | 3 4 2458
4 Kevin | 4 2 1296
5 Devid | 5 4 7878
| 6 3 4859
```
I need a query that selects from TblMemberInfo and TblCarInfo only the members that have exactly one car (Count(MemberId) = 1):
```
MemberId Name CarNumber
1 Sandeep 1111
3 John 4859
```
|
Here is one method:
```
select mi.MemberID, mi.Name, min(CarNumber) as CarNumber
from TblMemberInfo mi join
TblCarInfo ci
on mi.MemberID = ci.MemberID
group by mi.MemberID, mi.Name
having count(*) = 1;
```
This works because, with only one row in the group, `min()` returns the right value.
An alternative approach uses `not exists`:
```
select mi.MemberID, mi.Name, ci.CarNumber
from TblMemberInfo mi join
TblCarInfo ci
on mi.MemberID = ci.MemberID
where not exists (select 1
from TblCarInfo ci2
where ci2.MemberID = ci.MemberID and ci2.id <> ci.id
);
```
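The `HAVING COUNT(*) = 1` plus `MIN` approach can be sanity-checked with sqlite3, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TblMemberInfo (MemberID INTEGER, Name TEXT);
CREATE TABLE TblCarInfo (Id INTEGER, MemberId INTEGER, CarNumber TEXT);
INSERT INTO TblMemberInfo VALUES
 (1,'Sandeep'),(2,'Vishal'),(3,'John'),(4,'Kevin'),(5,'Devid');
INSERT INTO TblCarInfo VALUES
 (1,2,'1234'),(2,1,'1111'),(3,4,'2458'),(4,2,'1296'),(5,4,'7878'),(6,3,'4859');
""")

# Groups with exactly one car survive HAVING; MIN then picks that single car
rows = conn.execute("""
    SELECT mi.MemberID, mi.Name, MIN(ci.CarNumber)
    FROM TblMemberInfo mi
    JOIN TblCarInfo ci ON mi.MemberID = ci.MemberId
    GROUP BY mi.MemberID, mi.Name
    HAVING COUNT(*) = 1
    ORDER BY mi.MemberID
""").fetchall()
print(rows)  # [(1, 'Sandeep', '1111'), (3, 'John', '4859')]
```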
|
A couple more options!
```
select mi.MemberId, mi.Name, ci.CarNumber
from TblMemberInfo mi
join TblCarInfo ci on
mi.MemberId = ci.MemberId
group by mi.MemberId, mi.Name, ci.CarNumber
having min(ci.Id) = max(ci.Id)
```
Using a subquery to retrieve the single-car `MemberId`s is a good idea if you have a lot of other columns you need to bring in as well:
```
select mi.MemberId, mi.Name, ci.CarNumber
from TblMemberInfo mi
join TblCarInfo ci on
mi.MemberId = ci.MemberId
where mi.MemberId in
(
select MemberId
from TblCarInfo
group by MemberId
having count(*) = 1
)
```
|
Select all columns from two tables grouping by all columns in table1 and specific column in table2
|
[
"",
"sql",
"sql-server",
""
] |
I've got a set of data which has a `type` column and a `created_at` time column. I've already got a query which pulls the relevant data from the database, and this is the data that is returned.
```
type | created_at | row_num
-----------------------------------------------------
"ordersPage" | "2015-07-21 11:32:40.568+12" | 1
"getQuote" | "2015-07-21 15:49:47.072+12" | 2
"completeBrief" | "2015-07-23 01:00:15.341+12" | 3
"sendBrief" | "2015-07-24 08:59:42.41+12" | 4
"sendQuote" | "2015-07-24 18:43:15.967+12" | 5
"acceptQuote" | "2015-08-03 04:40:20.573+12" | 6
```
The row number is returned from the standard row number function in postgres
```
ROW_NUMBER() OVER (ORDER BY created_at ASC) AS row_num
```
What I want to do is somehow aggregate this data so get a time distance between every event, so the output data might look something like this
```
type_1 | type_2 | time_distance
--------------------------------------------------------
"ordersPage" | "getQuote" | 123423.3423
"getQuote" | "completeBrief" | 123423.3423
"completeBrief" | "sendBrief" | 123423.3423
"sendBrief" | "sendQuote" | 123423.3423
"sendQuote" | "acceptQuote" | 123423.3423
```
The time distance would be a float in milliseconds, in other queries I've been using something like this to get time differences.
```
EXTRACT(EPOCH FROM (MAX(events.created_at) - MIN(events.created_at)))
```
But this time i need it for every pair of events in the sequential order of the row\_num so I need the aggregate for `(1,2), (2,3), (3,4)...`
Any ideas if this is possible? It doesn't have to be exact: I can deal with duplicates, and with `type_1` and `type_2` columns returning an existing row in a different order. I just need a way to at least get the values above.
|
What about a [self join](https://stackoverflow.com/questions/3362038/what-is-self-join-and-when-would-you-use-it)? It would look like this:
```
SELECT
t1.type
, t2.type
    , EXTRACT(EPOCH FROM (t1.created_at - t2.created_at)) AS time_diff
FROM your_table t1
INNER JOIN your_table t2
ON t1.row_num = t2.row_num + 1
```
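A sketch of the consecutive-row self join with sqlite3, using invented integer epoch timestamps so the subtraction is plain arithmetic (in Postgres you would subtract the timestamps and take `EXTRACT(EPOCH FROM ...)` of the interval):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ev (type TEXT, created_at INTEGER, row_num INTEGER)")
conn.executemany("INSERT INTO ev VALUES (?, ?, ?)", [
    ("ordersPage", 100, 1),
    ("getQuote",   250, 2),
    ("sendQuote",  600, 3),
])

# Join each row to its successor via row_num + 1, then subtract the times
rows = conn.execute("""
    SELECT t1.type, t2.type, t2.created_at - t1.created_at
    FROM ev t1
    JOIN ev t2 ON t2.row_num = t1.row_num + 1
    ORDER BY t1.row_num
""").fetchall()
print(rows)  # [('ordersPage', 'getQuote', 150), ('getQuote', 'sendQuote', 350)]
```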
|
```
select type_1,
type_2,
created_at_2-created_at_1 as time_distance
from
(select
type type_1,
lead(type,1) over (order by row_num) type_2,
created_at created_at_1,
lead(created_at,1) over (order by row_num) created_at_2
from table_name) temp
where type_2 is not null
```
|
Aggregating multiple rows more than once
|
[
"",
"sql",
"postgresql",
"aggregate",
""
] |
I have to develop an insert query on SQL Server which inserts an nchar value derived from another nchar value, incremented by 1. I have to preserve the leading 0 (though the first character is not always 0). How can I make the conversion? This is my code:
```
DECLARE @CIG int;
SET @CIG = (SELECT MAX([VALUE1]) FROM [DB].[dbo].[TABLE]) +1;
```
For example If I have:
```
0761600002511
```
the result will be:
```
0761600002512
```
if I have:
```
1761600002511
```
the result will be:
```
1761600002512
```
I have tried something like this (it has the same conversion error and the same problem with the possible leading 0):
```
SET @CIG =
CAST
(
(CAST
(
(
SELECT MAX([VALUE1])FROM [DB].[dbo].[TABLE]
)
as int)+1)
as nchar(100))
```
Thanks
|
Assuming your column is wide enough, keep the leading part and increment the numeric remainder. Declare the variable as NVARCHAR so the value is not blank-padded before the cast:
```
DECLARE @tmp NVARCHAR(100) = '0761600002511';
SELECT LEFT(@tmp, 3) + CAST(CAST(RIGHT(@tmp, LEN(@tmp) - 3) AS BIGINT) + 1 AS NVARCHAR(100));
```
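The same keep-the-leading-zeros increment is easy to express in any host language: parse the whole string as a number, add one, and re-pad to the original width. A hypothetical Python helper to illustrate the idea:

```python
def increment_code(code):
    """Increment a numeric string, preserving its width and leading zeros."""
    return str(int(code) + 1).zfill(len(code))

print(increment_code("0761600002511"))  # 0761600002512
print(increment_code("1761600002511"))  # 1761600002512
```

Note that a carry can consume the leading zero (e.g. "0999" becomes "1000"), which keeps the width but changes the first character.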
|
You should Convert/cast first and then take max. Try this.
```
DECLARE @CIG bigint;
SELECT @CIG = MAX(CONVERT(BIGINT, [VALUE1])) + 1 FROM [DB].[dbo].[TABLE];
```
|
SQL Server insert Top nchar +1 that could start by 0
|
[
"",
"sql",
"sql-server",
""
] |
I have an SQL table, in which I store my application logs. I have a column errors, in which I store values like this example
```
+------+--------+----------------------------------------------+
| id   | name   | error                                        |
+------+--------+----------------------------------------------+
| 1    | john   | Flushing folder error on folderid 456        |
| 2    | paul   | Flushing folder error on folderid 440        |
| 3    | gary   | Error connection has timed out on source 320 |
| 4    | ade    | Error connection has timed out on source 220 |
| 5    | fred   | Error connection has timed out on source 821 |
| 6    | bob    | Reading errors occured on folder 400         |
| 7    | ade    | Error connection has timed out on source 320 |
| 8    | fred   | Error connection has timed out on source 320 |
| 9    | bob    | Reading errors occured on folder 402         |
| 10   | ade    | Error connection has timed out on source 320 |
| 11   | fred   | Error connection has timed out on source 320 |
| 12   | bob    | Reading errors occured on folder 400         |
| 13   | paul   | Flushing folder error on folderid 100        |
+------+--------+----------------------------------------------+
```
The result I would like to get is something like:
```
+-------------------------------------------+-----------+
| Error Like                                | Occurence |
+-------------------------------------------+-----------+
| Error connection has timed out on source  | 7         |
| Flushing folder error on folderid         | 3         |
| Reading errors occured on folder          | 3         |
+-------------------------------------------+-----------+
```
Is there any way I could do that in SQL? The errors vary and I don't have an exhaustive list of all possible errors.
Thanks!
|
Remove the last digit part and count.
**Query**
```
SELECT LEFT(error, LEN(error) - 4) AS [Error Like],
COUNT(LEFT(error, LEN(error) - 4)) AS [Occurence]
FROM tbl_error
GROUP BY LEFT(error, LEN(error) - 4);
```
Or you can do it with a sub-query also.
**Query**
```
SELECT t.[Error Like], COUNT(t.[Error Like]) AS [Occurence] FROM(
SELECT LEFT(error, LEN(error) - 4) AS [Error Like]
FROM tbl_error
)t
GROUP BY t.[Error Like];
```
***If you are not aware about the last digit part, then***
**Query**
```
SELECT t.[Error Like], COUNT(t.[Error Like]) as [Occurence] FROM(
SELECT LEFT(error, LEN(error) - CHARINDEX(' ', REVERSE(error), 1)) AS [Error Like]
FROM tbl_error
)t
GROUP BY t.[Error Like]
ORDER BY COUNT(t.[Error Like]) desc, t.[Error Like];
```
**Result**
```
+--------------------------------------------+-----------+
| Error Like | Occurence |
+--------------------------------------------+-----------+
| Error connection has timed out on source | 7 |
| Flushing folder error on folderid | 3 |
| Reading errors occured on folder | 3 |
+--------------------------------------------+-----------+
```
[**Find a demo here**](https://data.stackexchange.com/stackoverflow/query/483956/error-like-occurence)
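Outside the database, the same strip-the-trailing-token grouping is one `rsplit` away. A quick Python cross-check using a subset of the rows from the question (the "occured" spelling is kept as it appears in the data):

```python
from collections import Counter

errors = [
    "Flushing folder error on folderid 456",
    "Flushing folder error on folderid 440",
    "Error connection has timed out on source 320",
    "Error connection has timed out on source 220",
    "Reading errors occured on folder 400",
]

# Drop the last whitespace-separated token (the variable id), then count
counts = Counter(msg.rsplit(" ", 1)[0] for msg in errors)
for prefix, n in counts.most_common():
    print(n, prefix)
```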
|
It looks like you want to strip out the last four characters and aggregate:
```
select left(error, len(error) - 4) as ErrorLike, count(*)
from applogs
group by left(error, len(error) - 4)
order by count(*) desc;
```
|
How to get most frequent values in SQL where value like a variable?
|
[
"",
"sql",
"sql-server",
""
] |
As part of a SQL stored procedure (producing XML output) I have the following line:
```
FORMAT(Quantity * ld.UnitPrice, 'N2') AS '@value'
```
This has produced perfectly acceptable results for several months. Last week, however, the service that consumes the XML this procedure produces failed because a value was over a thousand pounds (sterling) and came out as `1,049.60`.
It would have been nice had the service documentation mentioned the fact that it didn't like commas in the values passed back to it as attributes but that argument is for another day.
I'm after some advice as to the best way to reformat this one line so that it does not emit a thousands separator; in other words I need it to produce `1049.60` (or similar) whenever a value over one thousand pounds is reached. It still needs to retain the two decimal places.
Although I have specifically mentioned Pounds sterling, in truth this format needs to be culture neutral, so that no figure above a thousand gets a separator.
|
You could probably remove the `FORMAT` function. Then, if required, add `CONVERT(DECIMAL(9,2), YourColumn)`.
|
You can use the FORMAT function with format of 'F2'. 'F' is the Fixed-Point format specifier. This is supported by all numeric types.
```
DECLARE @quantity int = 65;
DECLARE @unitPrice money = 150.33;
SELECT FORMAT(@quantity * @unitPrice, 'F2') AS 'Amount';
```
[](https://i.stack.imgur.com/vKxoT.png)
See Books Online > FORMAT (Transact-SQL): <https://msdn.microsoft.com/en-GB/library/hh213505.aspx>
MSDN > Standard Numeric Format Strings: <https://msdn.microsoft.com/library/dwhawy9k.aspx>
However, if you are formatting numbers as strings and then passing them around, you need to be careful about language and culture settings. The decimal separator has the potential to change to a comma.
You could ensure this is always a full-stop, by specifying the culture as 'en-US' to the FORMAT function, which takes an optional third parameter.
```
SET LANGUAGE 'Portuguese';
SELECT FORMAT(@quantity * @unitPrice, 'F2', 'en-US') AS 'Amount';
```
[](https://i.stack.imgur.com/FconJ.png)
See Defensive Database Programming with SQL Server by Alex Kuznetsov, published by Redgate > Chapter 1: Basic Defensive Database Programming Techniques > How SET LANGUAGE can break a query
> Many developers test their code only under the default language setting of their server, and do not test how their code will respond if executed on a server with a different language setting, or if there is a change in the setting at the session level.
Books Online > Local Language Versions in SQL Server: <https://msdn.microsoft.com/en-us/library/ee210665.aspx#BK_ConfigureOS>
|
What is the best way to reformat this piece of SQL to avoid the thousands separator
|
[
"",
"sql",
"sql-server",
""
] |
I have table similar to this:
```
ID ProductName Price
1 Water 0.89
1 Water 0.99
1 Water 0.79
2 Coke 1.99
3 Sprite 1.99
```
What I would like to get is the lowest price of every product (the ID can't change for the same name). If I could group by just one column it would be fine, but Access doesn't let me. My current code, which I've been wrestling with, is:
```
SELECT DISTINCT Products.ProductName, Products.Price
FROM Products
GROUP BY Products.ProductName, Products.Price
```
Information that I would like to get should look like:
```
ProductName Price
Water 0.79
Coke 1.99
Sprite 1.99
```
|
Just use an aggregation on the product name (or id):
```
SELECT Products.ProductName, MIN(Products.Price) as Price
FROM Products
GROUP BY Products.ProductName;
```
|
You are very close to it. Just use `min()` function:
```
SELECT Products.ProductName, min(Products.Price)
FROM Products
GROUP BY Products.ProductName;
```
|
MS Access Selecting distinct rows
|
[
"",
"sql",
"ms-access",
""
] |
I have a string that adds data to a table so I can print a report or labels from the data. The data consists of addresses, and the string fails, I suspect because of a comma (or another character) in the address. The string had been working until it hit some unusual addresses.
```
sqls = "INSERT INTO tInvReportDataWrk(SO,ITEM,QTY,billTO,shipTO,LINEKEY)VALUES('" &
SO & "', '" & it & "', '" & qty & "', '" & billTO & "', '" & shipTO & "', '" & lk & "')"
```
The data it is trying to insert looks like this, from the `debug.print`:
```
INSERT INTO tInvReportDataWrk(SO,ITEM,QTY,billTO,shipTO,LINEKEY)
VALUES('0000001', 'L-R_4-8R2B-01', '2', 'BAR - ANAHEIM
BAR BRANCH
P.O. BOX 00000
VENDOR ID# VC-00001
Saint Louis, MO 00008
', 'ABC ELEMENT WAREHOUSE
2000 O'TOOL AVE.
San Jose, CA 95131-1301
', '000001')
```
|
I'm uncertain whether the comma in the address is the problem. It looks to me like the apostrophe in *O'TOOL* should be a problem. But if it is not the cause of the first error Access complains about, it should trigger another error after you fix the first error.
For a simple example, the following code triggers error 3075, *"Syntax error (missing operator) in query expression ''O'TOOL');'."*
```
Dim strInsert As String
strInsert = "INSERT INTO tblFoo(text_field) VALUES ('O'TOOL');"
CurrentDb.Execute strInsert, dbFailOnError
```
Changing the `INSERT` statement to either of these allows the statement to execute without error.
```
strInsert = "INSERT INTO tblFoo(text_field) VALUES ('O''TOOL');"
strInsert = "INSERT INTO tblFoo(text_field) VALUES (""O'TOOL"");"
```
You could revise your code to use one of those approaches. However consider a [parameter query](https://stackoverflow.com/a/6572841/77335) or the [DAO.Recordset.AddNew](https://stackoverflow.com/a/13712926/77335) method instead ... and then quotes and apostrophes will be less troublesome.
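To illustrate the parameter-query idea, here is a minimal sketch in Python's `sqlite3` (standing in for Access/VBA; the table name `tblFoo` is reused from the example above). The troublesome value is bound as data, so no quote doubling is needed:

```python
import sqlite3

# Sketch only: sqlite3 stands in for Access here. The value containing an
# apostrophe is bound as a parameter, never spliced into the SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblFoo (text_field TEXT)")

ship_to = "ABC ELEMENT WAREHOUSE\n2000 O'TOOL AVE."
conn.execute("INSERT INTO tblFoo (text_field) VALUES (?)", (ship_to,))

stored = conn.execute("SELECT text_field FROM tblFoo").fetchone()[0]
print(stored == ship_to)  # True: apostrophe and newline round-trip intact
```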
|
Yes, it is the apostrophe.
You can use this function to avoid this and most other troubles when concatenating SQL:
```
' Converts a value of any type to its string representation.
' The function can be concatenated into an SQL expression as is
' without any delimiters or leading/trailing white-space.
'
' Examples:
' SQL = "Select * From TableTest Where [Amount]>" & CSql(12.5) & "And [DueDate]<" & CSql(Date) & ""
' SQL -> Select * From TableTest Where [Amount]> 12.5 And [DueDate]< #2016/01/30 00:00:00#
'
' SQL = "Insert Into TableTest ( [Street] ) Values (" & CSql(" ") & ")"
' SQL -> Insert Into TableTest ( [Street] ) Values ( Null )
'
' Trims text variables for leading/trailing Space and secures single quotes.
' Replaces zero length strings with Null.
' Formats date/time variables as safe string expressions.
' Uses Str to format decimal values to string expressions.
' Returns Null for values that cannot be expressed with a string expression.
'
' 2016-01-30. Gustav Brock, Cactus Data ApS, CPH.
'
Public Function CSql( _
ByVal Value As Variant) _
As String
Const vbLongLong As Integer = 20
Const SqlNull As String = " Null"
Dim Sql As String
Dim LongLong As Integer
#If Win32 Then
LongLong = vbLongLong
#End If
#If Win64 Then
LongLong = VBA.vbLongLong
#End If
Select Case VarType(Value)
Case vbEmpty ' 0 Empty (uninitialized).
Sql = SqlNull
Case vbNull ' 1 Null (no valid data).
Sql = SqlNull
Case vbInteger ' 2 Integer.
Sql = Str(Value)
Case vbLong ' 3 Long integer.
Sql = Str(Value)
Case vbSingle ' 4 Single-precision floating-point number.
Sql = Str(Value)
Case vbDouble ' 5 Double-precision floating-point number.
Sql = Str(Value)
Case vbCurrency ' 6 Currency.
Sql = Str(Value)
Case vbDate ' 7 Date.
Sql = Format(Value, " \#yyyy\/mm\/dd hh\:nn\:ss\#")
Case vbString ' 8 String.
Sql = Replace(Trim(Value), "'", "''")
If Sql = "" Then
Sql = SqlNull
Else
Sql = " '" & Sql & "'"
End If
Case vbObject ' 9 Object.
Sql = SqlNull
Case vbError ' 10 Error.
Sql = SqlNull
Case vbBoolean ' 11 Boolean.
Sql = Str(Abs(Value))
Case vbVariant ' 12 Variant (used only with arrays of variants).
Sql = SqlNull
Case vbDataObject ' 13 A data access object.
Sql = SqlNull
Case vbDecimal ' 14 Decimal.
Sql = Str(Value)
Case vbByte ' 17 Byte.
Sql = Str(Value)
Case LongLong ' 20 LongLong integer (Valid on 64-bit platforms only).
Sql = Str(Value)
Case vbUserDefinedType ' 36 Variants that contain user-defined types.
Sql = SqlNull
Case vbArray ' 8192 Array.
Sql = SqlNull
Case Else ' Should not happen.
Sql = SqlNull
End Select
CSql = Sql & " "
End Function
```
|
VBA SQL String Run-Time Error 3075
|
[
"",
"sql",
"ms-access",
"vba",
""
] |
I have a table which is something like the below
```
Key CL EmailAddress CT Product1 Product2 Product3 Product4 Product5
1 X abc@gmail.com A 12 null null null null
2 X abc@gmail.com B 123 22 null null null
```
For each row I can have a maximum of 5 products. Each record would have a minimum of 1 but less than 5 products.
Also the email address can repeat across the same CL.
I need to write a query that checks whether an email address repeats within the same CL and, if it does, merges the product IDs for that email address.
If the merge reaches 5 products, I need to stop and exclude the remaining products.
So the output for the above example should look something like this
```
Key CL EmailAddress CT Product1 Product2 Product3 Product4 Product5
1 X abc@gmail.com A+B 12 123 22 null null
```
Can we do something like this in Oracle SQL query?
|
As an alternative to MTO's approach, you could unpivot the data from your table:
```
select *
from your_table
unpivot (product for pos in (product1 as 1, product2 as 2, product3 as 3,
product4 as 4, product5 as 5));
KEY CL EMAILADDRESS CT POS PRODUCT
---------- -- ------------- --- ---------- ----------
1 X abc@gmail.com A 1 12
2 X abc@gmail.com B 1 123
2 X abc@gmail.com B 2 22
```
Use that to pick the key and generate the CT value (shamelessly pinching MTO's regular expression to remove duplicates), and generating a new position value:
```
with t as (
select *
from your_table
unpivot (product for pos in (product1 as 1, product2 as 2, product3 as 3,
product4 as 4, product5 as 5))
)
select min(key) over (partition by cl, emailaddress) as key,
cl,
emailaddress,
regexp_replace(
listagg(ct, '+') within group (order by key) over (partition by cl, emailaddress),
'(.)(\+\1)+', '\1') as ct,
rank() over (partition by cl, emailaddress order by key, pos) as pos,
product
from t;
KEY CL EMAILADDRESS CT POS PRODUCT
---------- -- ------------- --- ---------- ----------
1 X abc@gmail.com A+B 1 12
1 X abc@gmail.com A+B 2 123
1 X abc@gmail.com A+B 3 22
```
And then finally pivot that back:
```
with t as (
select *
from your_table
unpivot (product for pos in (product1 as 1, product2 as 2, product3 as 3,
product4 as 4, product5 as 5))
)
select key, cl, emailaddress, ct, a_product as product1, b_product as product2,
c_product as product3, d_product as product4, e_product as product5
from (
select min(key) over (partition by cl, emailaddress) as key,
cl,
emailaddress,
regexp_replace(
listagg(ct, '+') within group (order by key) over (partition by cl, emailaddress),
'(.)(\+\1)+', '\1') as ct,
rank() over (partition by cl, emailaddress order by key, pos) as pos,
product
from t
)
pivot (max(product) as product for (pos) in (1 as a, 2 as b, 3 as c, 4 as d, 5 as e));
KEY CL EMAILADDRESS CT PRODUCT1 PRODUCT2 PRODUCT3 PRODUCT4 PRODUCT5
---------- -- ------------- --- ---------- ---------- ---------- ---------- ----------
1 X abc@gmail.com A+B 12 123 22
```
It's made slightly more complicated by making the column names in the final result match your original table. I've also assumed you want to keep the lowest key value, link the CT values in key order, and keep the products in the same order they appeared originally - or at least, with the products from the first key in their original order, followed by the products from the second key in *their* original order, etc.
|
**Oracle Setup**
```
CREATE TABLE table_name (
"Key" INT PRIMARY KEY,
CL CHAR(1),
EmailAddress VARCHAR2(100),
CT VARCHAR2(100),
Product1 INT,
Product2 INT,
Product3 INT,
Product4 INT,
Product5 INT
);
INSERT INTO table_name
SELECT 1, 'X', 'abc@gmail.com', 'A', 12, null, null, null, null FROM DUAL
UNION ALL
SELECT 2, 'X', 'abc@gmail.com', 'B', 123, 22, null, null, null FROM DUAL;
CREATE TYPE stringlist AS TABLE OF VARCHAR2(100);
/
CREATE OR REPLACE FUNCTION nth_item(
collection STRINGLIST,
n INT
) RETURN VARCHAR2 DETERMINISTIC
AS
BEGIN
IF collection IS NULL OR n < 1 OR n > collection.COUNT THEN
RETURN NULL;
END IF;
RETURN collection(n);
END;
/
```
**Query**:
```
SELECT "Key",
CL,
EmailAddress,
CT,
Nth_Item( products, 1 ) AS Product1,
Nth_Item( products, 2 ) AS Product2,
Nth_Item( products, 3 ) AS Product3,
Nth_Item( products, 4 ) AS Product4,
Nth_Item( products, 5 ) AS Product5
FROM (
SELECT MIN( "Key" ) AS "Key",
CL,
EmailAddress,
REGEXP_REPLACE(
LISTAGG( CT, '+' ) WITHIN GROUP ( ORDER BY CT ),
'(.)(\+\1)+',
'\1'
) AS CT,
CAST( COLLECT( COLUMN_VALUE ) AS stringlist ) AS products
FROM table_name t,
TABLE(
STRINGLIST(
t.Product1,
t.Product2,
t.Product3,
t.Product4,
t.Product5
)
)
WHERE COLUMN_VALUE IS NOT NULL
GROUP BY CL, EmailAddress
);
```
**Output**:
```
Key CL EMAILADDRESS CT PRODUCT1 PRODUCT2 PRODUCT3 PRODUCT4 PRODUCT5
--- -- ------------- --- -------- -------- -------- -------- --------
1 X abc@gmail.com A+B 12 22 123
```
|
SQL how to merge records Oracle
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have the following query
```
SELECT COUNT(DISTINCT ETABLISSEMENTS.IU_ETS) AS compte,ETABLISSEMENTS.IU_GREFFE
FROM ENTREPRISES
LEFT OUTER JOIN ETABLISSEMENTS ON ETABLISSEMENTS.IU_ENTREPRISE = ENTREPRISES.IU_ENTREPRISE
LEFT OUTER JOIN dbo.BASES ON dbo.ETABLISSEMENTS.IU_BASE = dbo.BASES.IU_BASE
LEFT OUTER JOIN dbo.ETATS ON dbo.ETABLISSEMENTS.IU_ETAT = dbo.ETATS.IU_ETAT
LEFT OUTER JOIN dbo.NAF ON dbo.ETABLISSEMENTS.IU_NAF_ECO = dbo.NAF.IU_NAF
LEFT OUTER JOIN ADRESSES ON ETABLISSEMENTS.IU_ADR_PHY = ADRESSES.IU_ADR
LEFT OUTER JOIN PARTENAIRES ON
(PARTENAIRES.IU_PART = Etablissements.IU_GREFFE OR Etablissements.IU_GREFFE IS NULL)
WHERE (dbo.ETABLISSEMENTS.SIREN IS NOT NULL)
AND (dbo.ETABLISSEMENTS.SIREN <> '')
AND (dbo.ENTREPRISES.FLG_HISTORISE <> '1')
AND (dbo.ETABLISSEMENTS.NIC IS NOT NULL)
AND (dbo.ETABLISSEMENTS.NIC <> '')
AND (dbo.ETABLISSEMENTS.GESTDEL = '1')
AND (dbo.BASES.CODE = 'J1')
AND (dbo.ETATS.LIBEL = 'Actif')
AND (dbo.NAF.NAF NOT LIKE '000%')
AND (dbo.ENTREPRISES.GESTDEL = '1')
AND PARTENAIRES.IU_TYPE_PART = '3'
GROUP BY ETABLISSEMENTS.IU_GREFFE
```
The aim is to flag the `NULL` values and have them counted (see below).
```
compte | IU_GREFFE
-------------------
2 | 115
1 | 126
4875 | 26
1 | 813
21 | 2021
36 | 5559
6 | 149
11661 | 27
14904 | 130
1 | 1298
13402 | 25
15790 | NULL
1 | 54
11080 | 120
9 | 423
1 | 14
```
I want something neater than just having a count with a number, to have the libel like below
```
compte | Greffes
------------------
2 | Stack
1 | Morris
4875 | Dembe
1 | Dallas
21 | Delhi
36 | Rohintra
6 | Zheng
11661 | Liliane
14904 | T-shirt
1 | Star
13402 | Yes
15790 | NULL
1 | Whatsapp
11080 | Enkai
9 | Algérie
1 | Hewah
```
I changed my query to get the names of the `greffes` I'm interested in:
```
SELECT COUNT(DISTINCT ETABLISSEMENTS.IU_ETS) AS compte,PARTENAIRES.LIBEL AS Greffes
-- changing the ETABLISSEMENTS.IU_GREFFE to PARTENAIRES.LIBEL
FROM ENTREPRISES
LEFT OUTER JOIN ETABLISSEMENTS ON ETABLISSEMENTS.IU_ENTREPRISE = ENTREPRISES.IU_ENTREPRISE
LEFT OUTER JOIN dbo.BASES ON dbo.ETABLISSEMENTS.IU_BASE = dbo.BASES.IU_BASE
LEFT OUTER JOIN dbo.ETATS ON dbo.ETABLISSEMENTS.IU_ETAT = dbo.ETATS.IU_ETAT
LEFT OUTER JOIN dbo.NAF ON dbo.ETABLISSEMENTS.IU_NAF_ECO = dbo.NAF.IU_NAF
LEFT OUTER JOIN ADRESSES ON ETABLISSEMENTS.IU_ADR_PHY = ADRESSES.IU_ADR
LEFT OUTER JOIN PARTENAIRES
ON (PARTENAIRES.IU_PART = Etablissements.IU_GREFFE OR Etablissements.IU_GREFFE IS NULL)
WHERE (dbo.ETABLISSEMENTS.SIREN IS NOT NULL)
AND (dbo.ETABLISSEMENTS.SIREN <> '')
AND (dbo.ENTREPRISES.FLG_HISTORISE <> '1')
AND (dbo.ETABLISSEMENTS.NIC IS NOT NULL)
AND (dbo.ETABLISSEMENTS.NIC <> '')
AND (dbo.ETABLISSEMENTS.GESTDEL = '1')
AND (dbo.BASES.CODE = 'J1')
AND (dbo.ETATS.LIBEL = 'Actif')
AND (dbo.NAF.NAF NOT LIKE '000%')
AND (dbo.ENTREPRISES.GESTDEL = '1')
AND PARTENAIRES.IU_TYPE_PART = '3'
GROUP BY PARTENAIRES.LIBEL
```
Instead of giving me the desired result, it gives me the result below (just a tiny sample, as there are 284 lines):
```
compte |Greffes
-------------------
15790 |Mooshi
15790 |Bazoo
15790 |NouYawk
15790 |Matamata
```
Thinking about it, the issue most likely lies with the null part, as I have added this extra clause to the join:
```
LEFT OUTER JOIN PARTENAIRES ON
(PARTENAIRES.IU_PART = Etablissements.IU_GREFFE OR Etablissements.IU_GREFFE IS NULL)
```
And, more specifically, with the part below:
```
OR Etablissements.IU_GREFFE IS NULL
```
And looking at the results, 15790 is the null part.
I'm not sure what I can do to sort that out, as I'm already using a `LEFT OUTER JOIN` to retrieve the `NULL`.
As another variation, I tried to put the 2nd query into a nested query and do a join with the `partenaires` table but it gives me the same issue.
Any insights are more than welcomed
Thanks
**update**: Thinking about it, I could use a `case` expression to manually give each `iu_greffe` a proper libel, but I would still like to solve the issue without that much manual tweaking. And to be honest, I hate being stuck on something I cannot resolve.
|
You can use a **with** clause (a common table expression):
```
with t as
(
SELECT COUNT(DISTINCT ETABLISSEMENTS.IU_ETS) AS compte,ETABLISSEMENTS.IU_GREFFE
FROM ENTREPRISES
LEFT OUTER JOIN ETABLISSEMENTS ON ETABLISSEMENTS.IU_ENTREPRISE = ENTREPRISES.IU_ENTREPRISE
LEFT OUTER JOIN dbo.BASES ON dbo.ETABLISSEMENTS.IU_BASE = dbo.BASES.IU_BASE
LEFT OUTER JOIN dbo.ETATS ON dbo.ETABLISSEMENTS.IU_ETAT = dbo.ETATS.IU_ETAT
LEFT OUTER JOIN dbo.NAF ON dbo.ETABLISSEMENTS.IU_NAF_ECO = dbo.NAF.IU_NAF
LEFT OUTER JOIN ADRESSES ON ETABLISSEMENTS.IU_ADR_PHY = ADRESSES.IU_ADR
LEFT OUTER JOIN PARTENAIRES ON
(PARTENAIRES.IU_PART = Etablissements.IU_GREFFE OR Etablissements.IU_GREFFE IS NULL)
WHERE (dbo.ETABLISSEMENTS.SIREN IS NOT NULL)
AND (dbo.ETABLISSEMENTS.SIREN <> '')
AND (dbo.ENTREPRISES.FLG_HISTORISE <> '1')
AND (dbo.ETABLISSEMENTS.NIC IS NOT NULL)
AND (dbo.ETABLISSEMENTS.NIC <> '')
AND (dbo.ETABLISSEMENTS.GESTDEL = '1')
AND (dbo.BASES.CODE = 'J1')
AND (dbo.ETATS.LIBEL = 'Actif')
AND (dbo.NAF.NAF NOT LIKE '000%')
AND (dbo.ENTREPRISES.GESTDEL = '1')
AND PARTENAIRES.IU_TYPE_PART = '3'
GROUP BY ETABLISSEMENTS.IU_GREFFE
)
select t.compte, PARTENAIRES.LIBEL AS Greffes
from t
LEFT OUTER JOIN PARTENAIRES
ON (PARTENAIRES.IU_PART = t.IU_GREFFE
    AND PARTENAIRES.IU_TYPE_PART = '3') -- filter in the ON clause so the NULL IU_GREFFE row is kept
```
|
When you are tired, you mess things up. This is what I had done earlier and thought was not working:
```
SELECT t.compte, PARTENAIRES.LIBEL AS Greffes
FROM
(
SELECT COUNT(DISTINCT ETABLISSEMENTS.IU_ETS) AS compte,ETABLISSEMENTS.IU_GREFFE
FROM ENTREPRISES
LEFT OUTER JOIN ETABLISSEMENTS ON ETABLISSEMENTS.IU_ENTREPRISE = ENTREPRISES.IU_ENTREPRISE
LEFT OUTER JOIN dbo.BASES ON dbo.ETABLISSEMENTS.IU_BASE = dbo.BASES.IU_BASE
LEFT OUTER JOIN dbo.ETATS ON dbo.ETABLISSEMENTS.IU_ETAT = dbo.ETATS.IU_ETAT
LEFT OUTER JOIN dbo.NAF ON dbo.ETABLISSEMENTS.IU_NAF_ECO = dbo.NAF.IU_NAF
LEFT OUTER JOIN ADRESSES ON ETABLISSEMENTS.IU_ADR_PHY = ADRESSES.IU_ADR
LEFT OUTER JOIN PARTENAIRES ON
(PARTENAIRES.IU_PART = Etablissements.IU_GREFFE OR Etablissements.IU_GREFFE IS NULL)
WHERE (dbo.ETABLISSEMENTS.SIREN IS NOT NULL)
AND (dbo.ETABLISSEMENTS.SIREN <> '')
AND (dbo.ENTREPRISES.FLG_HISTORISE <> '1')
AND (dbo.ETABLISSEMENTS.NIC IS NOT NULL)
AND (dbo.ETABLISSEMENTS.NIC <> '')
AND (dbo.ETABLISSEMENTS.GESTDEL = '1')
AND (dbo.BASES.CODE = 'J1')
AND (dbo.ETATS.LIBEL = 'Actif')
AND (dbo.NAF.NAF NOT LIKE '000%')
AND (dbo.ENTREPRISES.GESTDEL = '1')
AND PARTENAIRES.IU_TYPE_PART = '3'
GROUP BY ETABLISSEMENTS.IU_GREFFE
) AS t
LEFT OUTER JOIN PARTENAIRES
ON (PARTENAIRES.IU_PART = t.IU_GREFFE)
```
But it actually does ...
Thanks to @proggear for his answer
|
Left join returning different results when changing column name
|
[
"",
"sql",
"sql-server",
""
] |
```
SELECT Count(*) AS MonthTotal
FROM CRMProjects
WHERE CreatedDate between '01 May 2016' and '31 May 2016'
SELECT Count(*) AS YearTotal
FROM CRMProjects
WHERE CreatedDate between '01 Jan 2016' and '31 Dec 2016'
SELECT Count(*) AS MonthNew
FROM CRMProjects
WHERE CreatedDate between '01 May 2016' and '31 May 2016'
AND SystemType = 'O'
SELECT Count(*) AS YearClosed
FROM CRMProjects
WHERE CreatedDate between '01 Jan 2016' and '31 Dec 2016'
AND SystemType = 'C'
```
It only populates the month in the table; it does not populate the other sections, as Visual Studio does not allow multiple SELECT statements for one dataset.
|
I would suggest something like this:
```
SELECT 'Month Total' AS Label, Count(*) AS Value
FROM CRMProjects
WHERE CreatedDate between '01 May 2016' and '31 May 2016'
UNION ALL
SELECT 'Year Total' AS Label, Count(*) AS Value
FROM CRMProjects
WHERE CreatedDate between '01 Jan 2016' and '31 Dec 2016'
UNION ALL
SELECT 'Month New' AS Label, Count(*) AS Value
FROM CRMProjects
WHERE CreatedDate between '01 May 2016' and '31 May 2016'
AND SystemType = 'O'
UNION ALL
SELECT 'Year Closed' AS Label, Count(*) AS Value
FROM CRMProjects
WHERE CreatedDate between '01 Jan 2016' and '31 Dec 2016'
AND SystemType = 'C'
```
|
You can use `SUM(CASE)` as a kind of `SUMIF` for this...
```
SELECT
SUM(CASE WHEN CreatedDate between '01 May 2016' and '31 May 2016' THEN 1 END) AS MonthTotal,
SUM(1) AS YearTotal,
SUM(CASE WHEN CreatedDate between '01 May 2016' and '31 May 2016' AND SystemType = 'O' THEN 1 END) AS MonthNew,
SUM(CASE WHEN SystemType = 'C' THEN 1 END) AS YearClosed
FROM
CRMProjects
WHERE
CreatedDate between '01 Jan 2016' and '31 Dec 2016'
```
This ensures that you only scan the relevant section of the table once, aggregating the results in a single pass.
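As a rough check of this conditional-aggregation pattern, here is a sketch using Python's `sqlite3` with a few made-up `CRMProjects` rows (ISO dates are used instead of the `'01 May 2016'` literals so `BETWEEN` works on plain string comparison):

```python
import sqlite3

# Hedged sketch: sqlite3 stands in for SQL Server, with made-up rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CRMProjects (CreatedDate TEXT, SystemType TEXT)")
conn.executemany("INSERT INTO CRMProjects VALUES (?, ?)", [
    ("2016-05-10", "O"), ("2016-05-20", "C"), ("2016-02-01", "C"),
])
row = conn.execute("""
    SELECT
      SUM(CASE WHEN CreatedDate BETWEEN '2016-05-01' AND '2016-05-31' THEN 1 END) AS MonthTotal,
      SUM(1) AS YearTotal,
      SUM(CASE WHEN CreatedDate BETWEEN '2016-05-01' AND '2016-05-31'
               AND SystemType = 'O' THEN 1 END) AS MonthNew,
      SUM(CASE WHEN SystemType = 'C' THEN 1 END) AS YearClosed
    FROM CRMProjects
    WHERE CreatedDate BETWEEN '2016-01-01' AND '2016-12-31'
""").fetchone()
print(row)  # (2, 3, 1, 2)
```

The `CASE` without an `ELSE` yields `NULL` for non-matching rows, and `SUM` ignores `NULL`, which is what makes the per-column filters work in a single pass.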
|
Can I make one select statement in SQL?
|
[
"",
"sql",
"select",
"relational-database",
""
] |
Imagine i have such a table :
```
Nr Date
2162416 14.02.2014
2162416 11.08.2006
2672007 13.04.2016
2672007 27.11.2007
3030211 31.01.2013
3030211 25.04.2006
3108243 11.04.2016
3108243 24.08.2009
3209248 05.04.2016
3209248 08.06.2012
3232333 11.04.2012
3232333 23.12.2011
3232440 08.04.2013
3232440 23.01.2008
```
As you can see, the entries come in pairs that differ only in the value of the date column. How can I delete one row of each pair by comparing the dates? I want to remove the older ones.
There can be at most two rows with the same Nr.
|
A simple way: use `EXISTS` to remove a row if another row with the same Nr but a later date exists:
```
delete from tablename t1
where exists (select 1 from tablename t2
where t2.nr = t1.nr
and t2.date > t1.date)
```
Alternatively:
```
delete from tablename
where (nr, date) not in (select nr, max(date) from tablename group by nr)
```
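A quick sanity check of the `EXISTS` form with Python's `sqlite3` (ISO-formatted dates are used because the original `dd.mm.yyyy` strings would not compare correctly as text; column `d` stands in for `Date`):

```python
import sqlite3

# Sketch with sqlite3 and ISO dates so string comparison matches date order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (nr INTEGER, d TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (2162416, "2014-02-14"), (2162416, "2006-08-11"),
    (2672007, "2016-04-13"), (2672007, "2007-11-27"),
])
# Delete any row for which a newer row with the same nr exists
conn.execute("""
    DELETE FROM t
    WHERE EXISTS (SELECT 1 FROM t AS t2
                  WHERE t2.nr = t.nr AND t2.d > t.d)
""")
rows = sorted(conn.execute("SELECT nr, d FROM t").fetchall())
print(rows)  # only the newest date per nr survives
```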
|
If you always have pairs of rows, you can use:
```
delete your_table
where (nr, date) in (
select nr, min(date)
from your_table
group by nr
)
```
If you want to handle the case in which you only have one row, you can add a condition:
```
delete your_table
where (nr, date) in (
select nr, min(date)
from your_table
group by nr
having count(1) > 1
)
```
|
Remove one of two Rows where first column is identical and second with different date
|
[
"",
"sql",
"oracle",
""
] |
I have a table `comments` which contains a field `student_id` (a foreign key to the `students` table).
I have another table `students`
What I would like to do is run a query that displays all students who have not made any comments. The SQL I have only shows students who have made comments:
```
SELECT studentID, email, first_name, last_name FROM "students" JOIN comments ON students.id = comments.student_id
```
How do I 'reverse' this SQL to show students who have NOT commented?
|
You could do this:
```
SELECT studentID, email, first_name, last_name
FROM students
LEFT JOIN comments ON students.id = comments.student_id
WHERE comments.student_id IS NULL
```
|
One method uses `not exists`:
```
select s.*
from students s
where not exists (select 1
from comments c
where s.id = c.student_id
);
```
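Both the `LEFT JOIN ... IS NULL` anti-join and the `NOT EXISTS` form return the same students. A minimal sketch with Python's `sqlite3` (sample names are made up):

```python
import sqlite3

# Tiny fixture: three students, two of whom have commented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (id INTEGER PRIMARY KEY, first_name TEXT);
    CREATE TABLE comments (student_id INTEGER, body TEXT);
    INSERT INTO students VALUES (1, 'Ana'), (2, 'Ben'), (3, 'Cem');
    INSERT INTO comments VALUES (1, 'hi'), (3, 'yo');
""")
anti_join = conn.execute("""
    SELECT s.first_name FROM students s
    LEFT JOIN comments c ON s.id = c.student_id
    WHERE c.student_id IS NULL
""").fetchall()
not_exists = conn.execute("""
    SELECT s.first_name FROM students s
    WHERE NOT EXISTS (SELECT 1 FROM comments c WHERE c.student_id = s.id)
""").fetchall()
print(anti_join, not_exists)  # both: [('Ben',)]
```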
|
How to select 'exceptions' in SQL?
|
[
"",
"sql",
""
] |
I have tables called Products and ProductsDetails. I want to compute something like the price of an order. Say I want 5 pairs of "Headphonesv1" (comparing by name, not ID, since the name could change), 2 packs of "GumOrbit" and 7 packs of "crisps". A pair of Headphonesv1 costs $10, gum $1 and crisps $2, so the answer I should get is the bill ID, the bill date, and a TotalCost of 66. My question is: how do I make multiple calculations? The code I've been trying (with even one calculation) gives a syntax error:
```
SELECT Products.billID, Products.Date, (ProductsDetails.Price * 5 WHERE ProductsDetails.name LIKE 'Headphonesv1')
FROM Products INNER JOIN ProductsDetails ON Products.billdID = ProductsDetails.billID
```
I have also tried putting a SELECT inside the parentheses, but then the values I get are wrong, and creating one more inner join inside doesn't seem promising.
|
If you just want to see the total cost for multiple items, you can use an aggregate with a CASE expression to get the SUM.
```
SELECT Products.billID,
Products.Date,
SUM(CASE WHEN ProductsDetails.name LIKE 'Headphonesv1' THEN ProductsDetails.Price * 5
WHEN ProductsDetails.name LIKE 'GumOrbit' THEN ProductsDetails.Price * 2
WHEN ProductsDetails.name LIKE 'crisps' THEN ProductsDetails.Price * 7
END) TotalCost
FROM Products
INNER JOIN ProductsDetails ON Products.billdID = ProductsDetails.billID
GROUP BY Products.billID,
Products.Date
```
This seems very hard-coded to be of much use, though.
|
You probably have another two tables: `cart(cartID int/*,more info*/)` and `cartItems(cartID int /*FK*/, item varchar(50),qty int)` where you add something. Finally,
```
select sum(pd.price * ci.qty) tot
FROM Products p
INNER JOIN ProductsDetails pd ON p.billdID = pd.billID
inner join cartItems ci on ci.item = pd.name
inner join cart c on c.cartID = ci.cartID
where c.cartID = 123456
```
|
SQL Using Multiply 3 Times with different ID's
|
[
"",
"sql",
""
] |
I execute this query:
```
SELECT distinct(accounts.account_id), transactions.trn_code , transactions.trn_date
FROM accounts join transactions ON accounts.account_id = transactions.account_id
WHERE accounts.account_currency = 'USD' AND accounts.account_cards_linked > 1
AND transactions.trn_date >= '2015/03/01' AND transactions.trn_date <= '2015/03/31'
GROUP BY transactions.trn_code , transactions.trn_date,accounts.account_id
HAVING transactions.trn_date >= MAX(transactions.trn_date)
ORDER BY accounts.account_id;
```
and I get these results:
[click image to see the result grid](https://i.stack.imgur.com/ybxqV.jpg)
My problem is that I want, for each account, only the transaction with the latest date to appear. But right now, if an account has more than one transaction, they all appear. (For example, I want account 912...129 to appear only once, with the latest day, 2015/03/05; see the image.)
Any ideas??
|
You need to do a `JOIN` on `MAX(trn_date)` of each account:
```
SELECT
a.account_id,
t.trn_code,
t.trn_date
FROM accounts a
INNER JOIN transactions t
ON a.account_id = t.account_id
INNER JOIN (
SELECT
account_id, MAX(trn_date) AS trn_date
FROM transactions
WHERE
trn_date >= '2015/03/01'
AND trn_date <= '2015/03/31'
GROUP BY account_id
)td
ON t.account_id = td.account_id
AND t.trn_date = td.trn_date
WHERE
a.account_currency = 'USD'
AND a.account_cards_linked > 1
ORDER BY a.account_id
```
*Note: Use meaningful table aliases to improve readability and maintainability.*
|
Select Max(trn\_date) and remove the date from the group by
```
SELECT distinct(accounts.account_id), transactions.trn_code , MAX(transactions.trn_date)
FROM accounts join transactions ON accounts.account_id = transactions.account_id
WHERE accounts.account_currency = 'USD' AND accounts.account_cards_linked > 1
AND transactions.trn_date >= '2015/03/01' AND transactions.trn_date <= '2015/03/31'
GROUP BY transactions.trn_code , accounts.account_id
ORDER BY accounts.account_id;
```
|
cant appear only the max date for a number of transactions
|
[
"",
"mysql",
"sql",
"sql-server",
"date",
"mysql-workbench",
""
] |
I'm trying to get a count for 2-3 types of entry, determined by the value of the 'adsource' column, over the last 8 days. So far I've only been able to grab one type and add it to the results. Database: MySQL.
**Results i'm getting now (only one type):**
```
days_ago monthday type1
---------------------------------
1 2016-05-11 2
2 2016-05-10 34
3 2016-05-09 11
4 2016-05-08 1
5 2016-05-07 0
6 2016-05-06 16
7 2016-05-05 42
8 2016-05-04 76
```
**Result I'm after -> 2-3 types(adsource column):**
```
monthday type1 type2
-----------------------------------------
2016-05-11 1 2
2016-05-10 6 0
2016-05-09 6 6
2016-05-08 1 65
2016-05-07 4 23
2016-05-06 1 12
2016-05-05 3 9
2016-05-04 6 11
```
I don't need the first column, days\_ago, but I'm not sure how to remove it from the display. In the results I need to be able to show:
```
adsource = 9 as type1
adsource = 12 as type2
adsource = 3 as type3
```
**This is the query I'm using:**
```
SELECT *
FROM (
SELECT DATEDIFF(NOW(), edate) AS days_ago, DATE(edate) AS monthday,
COUNT(DISTINCT entries.email) AS type1
FROM entries
WHERE iplocation = 'mx' AND adsource = 9
GROUP BY DATE(edate)) AS temp
WHERE days_ago <= 8
GROUP BY monthday
ORDER BY monthday DESC
```
Thank you for any input you might have :)
|
Try something like this:
```
SELECT *
FROM (
SELECT DATEDIFF(NOW(), edate) AS days_ago, DATE(edate) AS monthday,
COUNT(DISTINCT (case when adsource=9 then entries.email end)) AS type1,
COUNT(DISTINCT (case when adsource=12 then entries.email end)) AS type2,
COUNT(DISTINCT (case when adsource=3 then entries.email end)) AS type3
FROM entries
WHERE iplocation = 'mx'
GROUP BY DATE(edate)) AS temp
WHERE days_ago <= 8
GROUP BY monthday
ORDER BY monthday DESC
```
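A minimal check of the `COUNT(DISTINCT CASE ...)` trick with Python's `sqlite3` (made-up rows; only two adsource types shown, and the `days_ago` wrapper omitted for brevity):

```python
import sqlite3

# Sketch: sqlite3 stands in for MySQL; emails and dates are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (edate TEXT, adsource INTEGER, email TEXT)")
conn.executemany("INSERT INTO entries VALUES (?, ?, ?)", [
    ("2016-05-11", 9, "a@x.com"), ("2016-05-11", 9, "a@x.com"),
    ("2016-05-11", 12, "b@x.com"), ("2016-05-11", 12, "c@x.com"),
])
row = conn.execute("""
    SELECT DATE(edate) AS monthday,
           COUNT(DISTINCT CASE WHEN adsource = 9  THEN email END) AS type1,
           COUNT(DISTINCT CASE WHEN adsource = 12 THEN email END) AS type2
    FROM entries
    GROUP BY DATE(edate)
""").fetchone()
print(row)  # ('2016-05-11', 1, 2)
```

Each `CASE` maps non-matching rows to `NULL`, which `COUNT(DISTINCT ...)` ignores, so every type gets its own distinct-email count in one pass.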
|
What's the second column? Just add it to the query:
```
SELECT DATE(edate) AS monthday, COUNT(DISTINCT entries.email) AS type1, COUNT(DISTINCT entries.type2) AS type2
FROM entries
WHERE iplocation = 'mx' AND adsource = 9 AND DATEDIFF(NOW(), edate) BETWEEN 0 AND 7
GROUP BY DATE(edate)
```
|
How to include multiple COUNT per day in one query?
|
[
"",
"mysql",
"sql",
"join",
"count",
""
] |
I have a table like this:
```
userid | points | position
1 | 100 | NULL
2 | 89 | NULL
3 | 107 | NULL
```
I need a query to update the position column, ordering by points descending. Example result:
```
userid | points | position
1 | 100 | 2
2 | 89 | 3
3 | 107 | 1
```
|
I would not use physical columns that depend on values in other rows, otherwise you have to update the entire table every time one row changes. Use a view or other mechanism to calculate the position on the fly.
The query to calculate "position" would look something like:
```
SELECT
userid,
points,
RANK() OVER (ORDER BY points DESC) AS position
```
However, if you *have* to make it an `UPDATE` then you could use something like
```
UPDATE a
SET a.position = b.position
FROM {table_name} a
INNER JOIN
(
SELECT
userid,
RANK() OVER (ORDER BY points DESC) AS position
FROM {table_name}
) b
ON a.userid = b.userid
```
but keep in mind that you will need to run the update *every time the table is updated*, so performance may be an issue if it's a decent size table that gets updated a lot.
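Here is a small sketch of the same idea with Python's `sqlite3` (which supports `RANK()` from SQLite 3.25 on); because SQLite's `UPDATE ... FROM` syntax is newer, a correlated subquery replaces the join:

```python
import sqlite3

# Sketch: sqlite3 stands in for SQL Server; rows mirror the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (userid INTEGER, points INTEGER, position INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?, NULL)",
                 [(1, 100), (2, 89), (3, 107)])
conn.execute("""
    UPDATE scores
    SET position = (
        SELECT r.position FROM (
            SELECT userid, RANK() OVER (ORDER BY points DESC) AS position
            FROM scores
        ) r
        WHERE r.userid = scores.userid
    )
""")
rows = conn.execute("SELECT userid, position FROM scores ORDER BY userid").fetchall()
print(rows)  # [(1, 2), (2, 3), (3, 1)]
```

This matches the expected output in the question: the highest points (107) get position 1.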
|
Also consider using DENSE\_RANK() instead of RANK() if you want the 'position' ranking to increment by exactly 1 as the 'points' change. RANK() will do what you want, though it creates gaps in the number sequence when several userids are tied on 'points' (if that is ever the case in your spec).
Refer to this [answer](https://stackoverflow.com/questions/7747327/sql-rank-versus-row-number) for the difference between them.
|
update column with incremental value
|
[
"",
"sql",
"sql-server",
""
] |
I've just started using SQL and I have a pretty basic question:
I tried dividing 2 columns (amount/rate). I converted them from 'money' to 'INT', but when I tried executing, it gave me this error:
> Operand data type varchar is invalid for divide operator.
This is the query I typed:
```
select referenceid,
CONVERT(decimal(15,3), sellamount) as 'amount',
CONVERT(decimal(15,3), rateactual) as 'Rate',
CONVERT(decimal(15,3), 'amount' / 'rate') as 'local amount'
FROM currencyfxconversions
```
Can someone help me understand what I did wrong?
|
Try like this,
```
SELECT referenceid
,CONVERT(DECIMAL(15, 3), sellamount) AS 'amount'
,CONVERT(DECIMAL(15, 3), rateactual) AS 'Rate'
,CONVERT(DECIMAL(15, 3), (CONVERT(DECIMAL(15, 3), sellamount) / CONVERT(DECIMAL(15, 3), rateactual))) AS 'local amount'
FROM currencyfxconversions
```
|
Your issue is that you're referencing a column alias defined in the same SELECT. You can't use an alias as if it were a table column within the same SELECT clause. You must repeat the aliased expressions, as follows:
```
SELECT referenceid
,CONVERT(DECIMAL(15, 3), sellamount) AS 'amount'
,CONVERT(DECIMAL(15, 3), rateactual) AS 'Rate'
,CONVERT(DECIMAL(15, 3), (CONVERT(DECIMAL(15, 3), sellamount) / convert(DECIMAL(15, 3), rateactual))) AS 'local amount'
```
|
Operand data type varchar is invalid for divide operator
|
[
"",
"sql",
"sql-server",
""
] |
I have tables `users` and `topics`. Every user can have from 0 to several topics (one-to-many relationship).
How I can get only those users which have at least one topic?
I need all columns from users (without columns from topics) and without duplicates in table `users`. In last column I need number of topics.
**UPDATED:**
Should be like this:
```
SELECT user.*, count(topic.id)
FROM ad
LEFT JOIN topic ON user.id = topic.ad
GROUP BY user.id
HAVING count(topic.id) > 0;
```
but it returns 0 rows, and it should not be 0.
|
Firstly you need your two tables. Because you have given limited information about your table structure, I will use an example to explain how this works; you should then easily be able to apply it to your own tables.
Firstly you need to have two tables (which you do)
Table "user"
```
id | name
1 | Joe Bloggs
2 | Eddy Ready
```
Table "topic"
```
topicid | userid | topic
1 | 1 | Breakfast
2 | 1 | Lunch
3 | 1 | Dinner
```
Now asking for a count against each user is done using the follwing;
```
SELECT user.name, count(topic.topicid)
FROM user
INNER JOIN topic ON user.id = topic.userid
GROUP BY user.name
```
If you use a left join, this will include records from the "user" table which does not have any rows in the "topic" table, however if you use an INNER JOIN this will ONLY include users who have a matching value in both tables.
I.e. because the user id "2" (which we use to join) is not listed in the topic table you will not get any results for this user.
Hope that helps!
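The inner-join behaviour is easy to verify with Python's `sqlite3`, using the sample rows above:

```python
import sqlite3

# Fixture mirroring the example tables: Eddy Ready has no topics.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE topic (topicid INTEGER, userid INTEGER, topic TEXT);
    INSERT INTO user VALUES (1, 'Joe Bloggs'), (2, 'Eddy Ready');
    INSERT INTO topic VALUES (1, 1, 'Breakfast'), (2, 1, 'Lunch'), (3, 1, 'Dinner');
""")
rows = conn.execute("""
    SELECT user.name, COUNT(topic.topicid)
    FROM user
    INNER JOIN topic ON user.id = topic.userid
    GROUP BY user.name
""").fetchall()
print(rows)  # [('Joe Bloggs', 3)] -- users with zero topics are excluded
```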
|
You can select the topic count per user and then join it with the user data. Something like this:
```
with t as
(
select userid, count(*) as n
from topic
group by userid
)
SELECT user.*, t.n
FROM user
JOIN t ON user.id = t.userid
```
|
SQL query (Join without duplicates)
|
[
"",
"sql",
"postgresql",
"having",
""
] |
I'm new to SQL queries and I need to make a join starting from this query:
```
SELECT b.Name, a.*
FROM a INNER JOIN b ON a.VId = b.Id
WHERE b.SId = 100
AND a.Day >= '2016-05-05'
AND a.Day < '2016-05-09';
```
and adding 2 more columns to the first selected data (SCap and ECap from table c).
From what I've tried my code looks like this:
```
SELECT b.Name,a.*,
c.MaxDay,
       c.Cap
FROM a INNER JOIN b
ON a.VId = b.Id
INNER JOIN
(SELECT VId,
MAX(TimestampLocal) AS MaxDay,
CAST (TimestampLocal AS DATE) AS Day,
        Cap
FROM Info
GROUP BY VId,
CAST (TimestampLocal AS DATE),
Cap) AS c
ON a.VId = c.VId
AND a.Day = c.Day
WHERE b.SId = 33
AND a.Day >= '2016-05-05'
AND a.Day < '2016-05-09';
```
But I get more rows than needed.
I need the earliest and the latest TimestampLocal for a given vehicle in a daterange. That would come from two records in Info, and each would have its proper values for Cap.
For example:
I have two Names with values inside table Info for 2 days (Name 1) and only one day for Name 2:
```
- Day 1 2016-05-07:
- Name 1: Values at 2:45, 10:10 and 3:10
- Name 2: Values at 5:13 and 8:22
- Day 2 2016-05-09:
- Name 1: Values at 4:13, 6:15 and 9:20
```
I need to display (if I select the date range `2016-05-05` to `2016-05-09`): for Name 1, SCap should be the value from 2016-05-07 at 2:45 and ECap the value from 2016-05-09 at 9:20; for Name 2, SCap should be the value from 2016-05-07 at 5:13 and ECap the value from 2016-05-07 at 8:22. This should be displayed in 2 lines.
Is there a way I can add those 2 columns into my query without adding more rows for the same Name?
EDIT!
Table a I have:
```
VId | Day
5251 | 05/09/2016
5382 | 05/09/2016
```
Table b:
```
Id | Name
5251 | N1
5382 | N2
```
Table Info:
```
VId | TimestampLocal | Cap
5251 | 2016-05-09 11:33:46.2000000 +03:00 | 0
5251 | 2016-05-09 11:37:11.4000000 +03:00 | 7
5251 | 2016-05-09 11:38:11.4000000 +03:00 | 4
5251 | 2016-05-09 11:39:11.7000000 +03:00 | 2
5382 | 2016-05-09 09:30:56.6000000 -04:00 | 5
5382 | 2016-05-09 09:31:56.6000000 -04:00 | 3
```
And I need to display - if I select a daterange from `2016-05-03` to `2016-05-10`:
```
Id | Name | SCap | ECap
5251 | N1 | 0 | 2
5382 | N2 | 5 | 3
```
|
I found a solution for my query after some further investigations and here is what I managed to do:
```
SELECT b.Id,
b.Name,
c.S_Time,
c.E_Time,
s.Cap AS S_Cap,
e.Cap AS E_Cap
FROM b
INNER JOIN (SELECT VId,
MIN(TimestampLocal) AS S_Time,
MAX(TimestampLocal) AS E_Time
FROM Info
where CAST (TimestampLocal AS DATE) >='2016-04-05'
and CAST (TimestampLocal AS DATE) <'2016-05-10'
GROUP BY VId
) AS c
ON b.Id = c.VId
INNER JOIN Info AS s
ON s.VId = c.VId
AND s.TimestampLocal = c.S_Time
INNER JOIN Info AS e
ON e.VId = c.VId
AND e.TimestampLocal = c.E_Time
WHERE b.SId = 100
```
|
You can use a `CTE` with `ROW_NUMBER` in order to retrieve the two records:
```
;WITH Info_CTE AS (
SELECT VId,
CAST (TimestampLocal AS DATE) AS Day,
Cap,
ROW_NUMBER() OVER (PARTITION BY VId
ORDER BY TimestampLocal) AS rn1,
ROW_NUMBER() OVER (PARTITION BY VId
ORDER BY TimestampLocal DESC) AS rn2
FROM Info
)
SELECT b.Name,
a.*,
c.Day,
c.Cap
FROM a
INNER JOIN b ON a.VId = b.Id
INNER JOIN Info_CTE AS c ON a.VId = c.VId AND
a.Day = c.Day AND
1 IN (c.rn1, c.rn2)
WHERE b.SId = 33
AND a.Day >= '2016-05-05'
AND a.Day < '2016-05-09';
```
If you want `ECap` and `SCap` as separate columns then instead of
```
c.Cap
```
you can use:
```
CASE WHEN c.rn1 = 1 THEN c.Cap END AS SCap,
CASE WHEN c.rn2 = 1 THEN c.Cap END AS ECap
```
|
SQL Select where first value and last value of daterange
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am attempting to get the maximum decimal value from a column and then add 1 to it to get the next value. However, the column's values are of type varchar, meaning that I must filter to make sure that it only attempts to cast values it can, like '123', while avoiding values like '123a'.
What I have attempted so far is this:
```
SELECT
(CAST(MAX(Id) AS DECIMAL(38,0)) + 1) AS 'New ID'
FROM database
WHERE Id NOT LIKE '%[^0-9]%' --filters out non-numeric ID values
AND (LEN(Id) + 1) < 39 --ensures ID is less than length 39
```
However, when I run this, if a value such as 9 and a value such as 100 exist in the ID column, MAX will return the 9 rather than the 100, because the values are compared as strings.
I've also attempted:
```
SELECT
(CAST(MAX(CAST(Id AS DECIMAL(38,0))) AS DECIMAL(38,0)) + 1) AS 'New ID'
FROM database
WHERE Id NOT LIKE '%[^0-9]%'
AND (LEN(Id) + 1) < 39
```
However, this results in an error converting varchar into numeric, since I'm assuming it attempts to cast it as a decimal before filtering out the non-numeric IDs in the WHERE clause. I've also tried getting rid of the extra cast at the start, but that still has the same issue.
Thanks for the help!
|
What you are missing is empty strings. Your condition: *Id NOT LIKE '%[^0-9]%'* is true when Id is the empty string. This should work:
```
SELECT
(MAX(CAST(Id AS DECIMAL(38,0))) + 1) AS 'New ID'
FROM database
WHERE Id NOT LIKE '%[^0-9]%'
AND Id <> ''
AND (LEN(Id) + 1) < 39
```
The outer CAST is redundant (MAX will return the same type as its argument).
PS: The WHERE clause is evaluated before the SELECT clause, so if you filter out the invalid values in the WHERE, conversions in the SELECT will not fail.
|
You should use `try_convert()`:
```
SELECT MAX(TRY_CONVERT(DECIMAL(38, 0), id)) + 1 AS [New ID]
FROM database
WHERE Id NOT LIKE '%[^0-9]%' AND (LEN(Id) + 1) < 39 ;
```
Your example converts back to a decimal, but I think you want a string:
```
SELECT TRY_CONVERT(VARCHAR(38), MAX(TRY_CONVERT(DECIMAL(38, 0), id)) + 1) AS [New ID]
FROM database
WHERE Id NOT LIKE '%[^0-9]%' AND (LEN(Id) + 1) < 39 ;
```
The problem that you are facing is based on how SQL Server processes queries. We *intend* that the `WHERE` clause is processed before the `SELECT`. However, that might not be how the calculation gets set up. So, SQL Server ends up doing unnecessary work on converting values that will get filtered out -- but then an error occurs interrupting the query.
In my opinion, this is an error in the implementation of SQL Server. One presumes that they have a good reason for allowing errors to flow through, even when the rows are not being selected. We can all agree that it is an inconvenience.
Before `TRY_CONVERT()`, the `CASE` statement would be used for this purpose.
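A minimal sketch of that `CASE` idiom, run in SQLite via Python's `sqlite3` (SQLite has no `TRY_CONVERT`, and its `LIKE` lacks character classes, so `GLOB` stands in for the `'%[^0-9]%'` pattern — the principle is the same):

```python
import sqlite3

# Guard the cast inside a CASE so non-numeric rows become NULL
# (which MAX ignores) instead of raising a conversion error.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ids (id TEXT);
    INSERT INTO ids VALUES ('9'), ('100'), ('123a'), ('');
""")

row = con.execute("""
    SELECT MAX(CASE WHEN id <> '' AND id NOT GLOB '*[^0-9]*'
                    THEN CAST(id AS INTEGER) END) + 1
    FROM ids
""").fetchone()
print(row[0])  # 101
```

Note the empty-string guard from the other answer is still needed: `''` contains no non-digit character, so the pattern alone does not exclude it.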
|
Selecting maximum varchar casted into a decimal
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have two tables.
```
table_1 | table_2
id col_1 | col_2 | col_1 | col_2
1 1 | B | 1 | B
2 1 | C | 3 | C
3 1 | D | 5 | D
....
```
I write this query
```
SELECT *
FROM table_1 t1
LEFT JOIN table_2 t2 ON t1.col_1 = t2.col_1
WHERE t1.col_1 = 1
AND (
t2.col_1 IS NULL
AND t2.col_1 != 'B'
)
```
I want get this result.
```
table_1 |
id col_1 | col_2 |
2 1 | C |
3 1 | D |
```
How can I do that? Thanks!
**Update question**
Table 1 - PriceItems, Table 2 - BlockedPrices, col1 - Code, col2 - Brand.
I want to get, from PriceItems, the current code with all its brands, except those listed in BlockedPrices.
|
I *think* this is what you're looking for:
```
Select t1.*
From table_1 t1
Left Join table_2 t2 On t1.col_1 = t2.col_1
And t2.col_1 != 'B'
Where t1.col_1 = 1
And t2.col_1 Is Not Null
```
As I understand your question, you're looking to do a `LEFT JOIN` on the second table, but have the case where `t2.col_1 = 'B'` excluded from your final result.
In this case, you'll need to move the condition `t2.col_1 = 'B'` to your `ON` clause to have records that don't match both of the `LEFT JOIN` conditions come back as `NULL`.
The reason you might have seen strange results is due to the `AND t2.col_1 != 'B'` condition in your `WHERE` clause. This causes the filter to happen *after* the `LEFT JOIN`, which will further filter those results (thus effectively transforming the `LEFT JOIN` into an `INNER JOIN`).
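A small sketch of that distinction, using SQLite via Python's `sqlite3` on made-up data: the same predicate behaves very differently in the `ON` clause of a `LEFT JOIN` versus in the `WHERE` clause.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (col_1 INTEGER);
    CREATE TABLE t2 (col_1 INTEGER, col_2 TEXT);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (1, 'B'), (2, 'C');
""")

# Predicate in the ON clause: unmatched left rows are preserved as NULLs.
in_on = con.execute("""
    SELECT t1.col_1, t2.col_2
    FROM t1 LEFT JOIN t2 ON t1.col_1 = t2.col_1 AND t2.col_2 <> 'B'
    ORDER BY t1.col_1
""").fetchall()

# Same predicate in the WHERE clause: it runs after the join and
# discards the NULL-extended rows, degrading the LEFT JOIN to an INNER.
in_where = con.execute("""
    SELECT t1.col_1, t2.col_2
    FROM t1 LEFT JOIN t2 ON t1.col_1 = t2.col_1
    WHERE t2.col_2 <> 'B'
    ORDER BY t1.col_1
""").fetchall()

print(in_on)     # [(1, None), (2, 'C')]
print(in_where)  # [(2, 'C')]
```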
**Edit, question changed:**
Based on the new information in the question, you are using the second table as an *exclusion* table, and want to remove the results from the first table that appear in the second.
For this, you don't need a `JOIN` at all, you can just do it with a `WHERE NOT EXISTS` as follows:
```
Select *
From PriceItems P
Where Not Exists
(
Select *
From BlockedPrices B
Where P.Code = B.Code
And P.Brand = B.Brand
)
And P.Code = 1
```
|
You only want results from table\_1, so you don't even need a join.
```
select *
from table_1 t1
where not exists (
select 1
from table_2 t2
where t1.col_1 = t2.col_1
and t1.col_2 = t2.col_2
)
```
You can provide any other `where` condition in the inner select statement to specify what type of match should disqualify table\_1 records from being listed.
For example, you could add `and t2.price > 2000` to the inner `where` condition to disqualify all records from table\_1 that have matching record in table\_2 with price above 2000.
|
left join where b.key is null by two columns
|
[
"",
"sql",
"postgresql",
""
] |
Trying to generate a list of genres with the number of tracks and the % of tracks belonging to that genre. The number of tracks should be ordered ascending.
Output should look something like this, with 3 separate columns.
```
Genre No.of Tracks % of Tracks
Rock 5 20%
Genre table consists of - GenreId , Name
Track table consists of - TrackId, Name, GenreId
```
|
[Check at sqlfiddle](http://sqlfiddle.com/#!9/053e1a/2)
```
SELECT g.nameOfGenre,
COUNT(t.TrackId) AS tracks_count,
COUNT(t.TrackId) / tt.total_tracks * 100 AS tracks_percent
FROM Genre g
LEFT JOIN Track t ON t.GenreId = g.genreId
JOIN (SELECT COUNT(*) AS total_tracks FROM Track) AS tt
GROUP BY g.genreId
```
Simple example. You can round percents if you want.
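For what it's worth, here is the same query shape checked on toy data in SQLite via Python's `sqlite3` (note `* 100.0` to force non-integer division in SQLite, and `ROUND` for the suggested rounding; all names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Genre (genreId INTEGER, nameOfGenre TEXT);
    CREATE TABLE Track (trackId INTEGER, name TEXT, genreId INTEGER);
    INSERT INTO Genre VALUES (1, 'Rock'), (2, 'Jazz');
    INSERT INTO Track VALUES (1,'a',1),(2,'b',1),(3,'c',1),(4,'d',1),(5,'e',2);
""")

rows = con.execute("""
    SELECT g.nameOfGenre,
           COUNT(t.trackId) AS tracks_count,
           ROUND(COUNT(t.trackId) * 100.0 / tt.total_tracks, 1) AS pct
    FROM Genre g
    LEFT JOIN Track t ON t.genreId = g.genreId
    JOIN (SELECT COUNT(*) AS total_tracks FROM Track) tt
    GROUP BY g.genreId
    ORDER BY tracks_count
""").fetchall()
print(rows)  # [('Jazz', 1, 20.0), ('Rock', 4, 80.0)]
```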
|
**EDIT**: After seeing the OP's comment, I believe the following query should be more suitable. Unlike the answer below me, I will use an implicit join.
```
SELECT Genre.genreName, COUNT(Track.trackID) AS "Track Count", (COUNT(Track.trackID) * 100.0 / (SELECT COUNT(*) FROM Track)) AS "Percent Of Tracks" FROM Genre, Track WHERE Genre.genreID = Track.genreID GROUP BY Genre.genreID, Genre.genreName ORDER BY "Track Count";
```
**ORIGINAL ANSWER**:
Assuming that these are all in the same table, this can be achieved with a simple SELECT query.
```
SELECT Genre, No_Of_Tracks, Percent_of_Tracks FROM Table ORDER BY No_Of_Tracks
```
You should replace "Genre, No\_Of\_Tracks, Percent\_Of\_Tracks" with your respective column names and Table with the actual table name.
|
generate a list in sql
|
[
"",
"mysql",
"sql",
"sqlite",
""
] |
Can someone help me please?
I have a field with this structure: 2,3,4,5,6,0,1
These are days of the week, and I have to find out whether today is the last delivery day of the week. The problem is this:
For example, today is day 3, and with the following query I can select all the values that finish with the number 3; but if someone has a structure like 3,0,1 or 3,1 or 3,0, I don't get them.
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2
print @today
select cast (cfv.value as VARCHAR)
from CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where cf.name='NextDeliveryDates' and cfv.companyId in(
select cfv.companyId
from CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where cfv.value='Retailer' and c.status=1)
and (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR))
```
To fix it I thought to introduce a CASE, so my query will be this one:
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2
print @today
select cast (cfv.value as VARCHAR)
from CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where cf.name='NextDeliveryDates' and cfv.companyId in(
select cfv.companyId
from CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where cfv.value='Retailer' and c.status=1)
and (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR)) and case
when (@today != 0 or @today!=1)
then (cfv.value not like '%,0' or cfv.value not like '%,1')
```
But in this case I receive this error message:
> Msg 156, Level 15, State 1, Line 14 Incorrect syntax near the keyword
> 'not'.
I'm sure that I'm making some stupid mistake but I don't get it :S
Some help pls!
Thanks so much
|
With a CASE statement should look something like this:
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2
print @today
select cfv.value
from CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where cf.name='NextDeliveryDates' and cfv.companyId in(
select cfv.companyId
from CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where cfv.value='Retailer' and c.status=1)
and
CASE
WHEN (@today !=0 or @today!=1) THEN
(cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR)
or cfv.value like '%' + cast (@today as VARCHAR)+',0'
or cfv.value like '%' + cast (@today as VARCHAR)+',0,1'
or cfv.value like '%' + cast (@today as VARCHAR)+',1')
ELSE
(cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR))
END = 1
```
Or, alternatively:
```
CASE
WHEN ((@today !=0 or @today!=1) AND (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR)
or cfv.value like '%' + cast (@today as VARCHAR)+',0'
or cfv.value like '%' + cast (@today as VARCHAR)+',0,1'
or cfv.value like '%' + cast (@today as VARCHAR)+',1'))
THEN 1
WHEN (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR)) THEN 1
ELSE 0
END = 1
```
But it is true that the expression `(@today != 0 or @today != 1)` is always true. Maybe you should find another, more meaningful condition.
|
You can replace the CASE statement with the logical expression:
```
(((@today != 0 or @today!=1) AND (cfv.value not like '%,0' or cfv.value not like '%,1')) OR NOT (@today != 0 or @today!=1))
```
So your query will become:
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2
print @today
select cast (cfv.value as VARCHAR) from CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where cf.name='NextDeliveryDates' and cfv.companyId in(
select cfv.companyId
from CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where cfv.value='Retailer' and c.status=1)
and (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR))
and (((@today != 0 or @today!=1) AND (cfv.value not like '%,0' or cfv.value not like '%,1')) OR NOT (@today != 0 or @today!=1))
```
|
SQL, Case with Like inside of the where
|
[
"",
"sql",
"database",
"datetime",
"case",
"ssms-2012",
""
] |
This is my statement
```
select * from table1
where col1 not in (select col2 from table2);
```
This is what I tried (but it failed):
```
select t1.* from table1 t1
inner join table2 on t1.col1<> t2.col2
```
Is there another way to do this?
|
```
select t1.* from table1 t1
left join table2 t2 on t1.col1 = t2.col2
where t2.col2 is null
```
This is the standard implementation of a missing-record search.
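The anti-join pattern can be verified on toy data; this sketch uses SQLite via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (col1 INTEGER);
    CREATE TABLE table2 (col2 INTEGER);
    INSERT INTO table1 VALUES (1), (2), (3);
    INSERT INTO table2 VALUES (2);
""")

# Rows of table1 whose col1 never appears in table2.col2 come back
# from the LEFT JOIN with t2.col2 NULL, and the WHERE keeps only those.
missing = con.execute("""
    SELECT t1.col1
    FROM table1 t1
    LEFT JOIN table2 t2 ON t1.col1 = t2.col2
    WHERE t2.col2 IS NULL
    ORDER BY t1.col1
""").fetchall()
print(missing)  # [(1,), (3,)]
```

One practical difference from `NOT IN`: if `table2.col2` can contain `NULL`s, `NOT IN (...)` returns no rows at all, while this anti-join still works.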
|
If You try to find out the data of t1.c1 (t1=table1, c1=column1) that is exist or not exist in t2.c2 then use flowing script
`select t1.c1 from t1 left join t2 on t1.c1=t2.c2`
other wise if you want to find only t1.c1 data that is not in t2.c2 then below script is help full
`select t1.c1 from t1 where t1.c1 not in (select t2.c2 from t2)`
|
SQL:find table1.col1 not exist table2.col2, return table1's records
|
[
"",
"sql",
""
] |
Following is my table structure.
```
Measure type |Grade |date
LNF |K |1/1/2016
LNF |K |2/1/2016
CNF |K |3/1/2016
LDR |K |4/1/2016
```
LNF,CNF,LDR is measure types, K is grade level.
I need to list only the last record (by date) when measure type and grade level equal.
in the example correct query will list only the for LNF and Grade K.
```
LNF |K |2/1/2016
```
Total result must be
```
LNF |K |2/1/2016
CNF |K |3/1/2016
LDR |K |4/1/2016
```
Please help me with this, any MySQL experts.
Thanks in advance
|
Try like this,
```
SELECT Measuretype
,Grade
,Max(DATE) as LatestDate
FROM YourTableName
GROUP BY Measuretype
,Grade
```
This would give output like this
```
Measure type |Grade |date
LNF |K |2/1/2016
CNF |K |3/1/2016
LDR |K |4/1/2016
```
|
Try this:
```
SELECT `Measure type`, Grade, MAX(date)
FROM mytable
GROUP BY `Measure type`, Grade
HAVING COUNT(*) > 1
```
The query returns the latest date for each `Measure type, Grade` group; the `HAVING` clause restricts the output to groups with duplicate rows, so drop it if groups with a single row should also appear, as in the expected output.
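A quick check of the `HAVING` clause's effect on the sample data (SQLite via Python's `sqlite3`; dates stored as ISO strings so `MAX` compares correctly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE m (measure TEXT, grade TEXT, d TEXT);
    INSERT INTO m VALUES
      ('LNF','K','2016-01-01'), ('LNF','K','2016-02-01'),
      ('CNF','K','2016-03-01'), ('LDR','K','2016-04-01');
""")

# With HAVING COUNT(*) > 1 only the duplicated (LNF, K) group survives.
with_having = con.execute("""
    SELECT measure, grade, MAX(d) FROM m
    GROUP BY measure, grade HAVING COUNT(*) > 1
""").fetchall()

# Without it, every group returns its latest date (3 rows).
without = con.execute("""
    SELECT measure, grade, MAX(d) FROM m
    GROUP BY measure, grade
""").fetchall()

print(with_having)   # [('LNF', 'K', '2016-02-01')]
print(len(without))  # 3
```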
|
List only the latest record when measure type and grade same
|
[
"",
"mysql",
"sql",
""
] |
I have a problem with a query....
I have a this query:
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2
select cast (cfv.value as VARCHAR), cfv.companyId
from CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where cf.name='NextDeliveryDates' and cfv.companyId in(
select cfv.companyId
from CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where cfv.value='Retailer' and c.status=1)
/*and cfv.value like '%' + cast (@today as VARCHAR) + '%' */
```
This give me as result a table like that:
[Unique Account of a company, Delivery Days](https://i.stack.imgur.com/RGj7Y.png)
CM001 | 2,4,1
CD04 | 3,3,4
CS7 | 2
CR001 | 4
FG076 | 3,3,5,4
JUH768 | 2,2,2
HG006 | 2
KG040 | 3,2,5
In a few words, I'm just saving in @today the value of the current day of the week (-2 because the system that uses this DB manages the days in a different way), and then I just select the company information and the delivery days from two different tables.
My problem is that I need to select just the companies whose last delivery day is today... so if today is day 2 I can have companies with last delivery day 1,2 - 0,2 - 0,1,2 etc...
If you see in my code there is the last line commented, if you add this line you get this other result:
CM001 | 2,4,1
CS7 | 2
JUH768 | 2,2,2
HG006 | 2
KG040 | 3,2,5
But in this way, as you can see, I select different companies that don't have as last delivery day the current day.
So I calculate a dynamic table that contain all the future date:
```
declare @day int
set @day=DATEPART(dw,GETDATE())-1
declare @week int
set @week=7
declare @extra table
(extraday varchar)
while (@day<@week)
begin
insert into @extra (extraday) values (@day)
set @day=@day+1
end
```
This give me this result:
[Days of the week future than the current one](https://i.stack.imgur.com/18DYF.png)
3
4
5
6
I have tried different joins, differences, and LIKEs, but I can't manage to get just the companies whose last delivery day is today.
Do you know how can I fix it? or if you have another idea of how I can do please let me know.
Thanks so much,
Carlo
|
It looks like you are trying to implement conditional logic within your `WHERE` clause, but you are going about it incorrectly. You'll need to either break the statements up or use dynamic string building to create the query and execute it. It should look something like this. Depending on your validation routines for @today, you may want to add some protection to guard against SQL injection.
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2-2
print @today
declare @nsql nvarchar(max)
set @nsql=N'
select
cast (cfv.value as VARCHAR)
from
CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where
cf.name=''NextDeliveryDates''
and cfv.companyId in
(
select cfv.companyId
from
CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where
cfv.value=''Retailer'' and c.status=1
)
and ( cfv.value like ''%,''' + cast(@today as VARCHAR)+'
    or cfv.value like ''%''' + cast(@today as VARCHAR) + N')'
if (@today != 0 or @today!=1)
set @nsql=@nsql+N'
and ((cfv.value not like ''%,0'' or cfv.value not like ''%,1''))'
print @nsql
--exec sp_executesql @nsql
```
|
Seeing that the data structure is this one: 2,3,4,5,6,0,1, I found a partial solution in this way:
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2-2
print @today
select cast (cfv.value as VARCHAR)
from CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where cf.name='NextDeliveryDates' and cfv.companyId in(
select cfv.companyId
from CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where cfv.value='Retailer' and c.status=1)
and (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR))
```
If the value ends with the current day, it's the last one; but I still have two exceptions:
example:
4,5,0
2,1
etc....
To solve that I thought to use an IF, but I receive an error message. Does someone know how I can do it, please?
```
declare @today int
set @today=DATEPART(dw,GETDATE())-2-2
print @today
select cast (cfv.value as VARCHAR)
from CompanyFieldvalues cfv
join CompanyFields cf on cf.companyFieldId=cfv.companyFieldId
where cf.name='NextDeliveryDates' and cfv.companyId in(
select cfv.companyId
from CompanyFieldvalues cfv
join Companies c on c.companyId=cfv.companyId
where cfv.value='Retailer' and c.status=1)
and (cfv.value like '%,' + cast (@today as VARCHAR) or cfv.value like '%' + cast (@today as VARCHAR)) and (
if (@today != 0 or @today!=1)
(cfv.value not like '%,0' or cfv.value not like '%,1')
)
```
This is the error:
> Msg 156, Level 15, State 1, Line 14 Incorrect syntax near the keyword
> 'if'. Msg 102, Level 15, State 1, Line 15 Incorrect syntax near 'cfv'.
|
SQL - Day of the week issue on select
|
[
"",
"sql",
"datetime",
"sql-like",
"week-number",
"weekday",
""
] |
SQL newbie here.
So we have 3 tables:
```
categories(cat_id,name);
products(prod_id,name);
relationships(prod_id,cat_id);
```
It is a one-to-many relationship.
So, given a category name say "Books". How do I find all the products that come under books?
As an example,
```
categories(1,Books);
categories(2,Phones);
products(302,Sherlock Holmes);
relationships(302,1);
```
|
You have to join tables on related columns and specify `WHERE` clause to select all records where category name = `'Books'`
```
SELECT p.*
FROM categories c
JOIN relationships r ON c.cat_id = r.cat_id
JOIN products p ON r.prod_id = p.prod_id
WHERE c.name = 'Books' -- or specify parameter like @Books
```
|
You need to `JOIN` the three tables.
```
SELECT p.*
FROM relationships r
INNER JOIN products p
ON p.prod_id = r.prod_id
INNER JOIN categories c
ON c.cat_id = r.cat_id
WHERE c.name = 'Books'
```
|
SQL Join involving 3 tables, how to?
|
[
"",
"mysql",
"sql",
"join",
""
] |
Let me describe my doubt. I have a system with three entities: Doctor, Patient and Appointment. An appointment has the doctor's id and the patient's id.
I need now to retrieve all the patients which have an appointment with a concrete doctor, and I'm not sure what will be faster, a distinct or a subselect for the id's, these are the queries:
using distinct->
```
SELECT DISTINCT patient.id, patient.name, patient.surname FROM
appointment INNER JOIN patient ON patient.id = appointment.patientid WHERE
appointment.doctorid = @id;
```
using subselect->
```
SELECT patient.id, patient.name, patient.surname FROM patient
WHERE patient.id IN (select appointment.patientid FROM appointment
WHERE appointment.doctorid = @id);
```
Not sure if this will matter: the system will run on a MariaDB cluster.
|
As with any performance question, you should test on your data and your hardware. The suspected problem in the first version is the `DISTINCT` after the `JOIN`; this can require a lot of extra processing.
You can write the second as:
```
SELECT p.id, p.name, p.surname
FROM patient p
WHERE p.id IN (select a.patientid FROM appointment a WHERE a.doctorid = @id);
```
For this, you want an index on `appointment(doctorid, patientid)`.
You might consider this version as well:
```
select p.id, p.name, p.surname
from patient p join
(select distinct appointment.patientid
from appointment
where appointment.doctorid = @id
) a
on p.id = a.patientid;
```
This specifically wants the same index. This pushes the `distinct` so it is only operating on a single table, meaning that MySQL may be able to use the index for that operation.
And this one:
```
SELECT p.id, p.name, p.surname
FROM patient p
WHERE EXISTS (select 1
from appointment a
where a.doctorid = @id and a.patientid = p.id
);
```
This query wants an index on `appointment(patientid, doctorid)`. It requires a full table scan of `patient` with a fast index lookup on each row. That could often be the fastest approach, depending on the data.
Note: which query performs better may also depend on the size and distribution of the data.
|
Neither.
These suffer from "inflate-deflate". That is, the `JOIN` leads to more rows in a temp table, only to prune back to what you need. This is costly. (And it can give wrong answers for `COUNT` and `SUM`.)
```
SELECT DISTINCT ... JOIN ...
and
SELECT ... JOIN ... GROUP BY ...
```
This performs poorly because of optimizer limitations:
```
... IN ( SELECT ... )
```
This is what you want:
```
SELECT ...
FROM ( SELECT id FROM ... WHERE ... )
JOIN ...
```
It is especially good if the subquery needs `DISTINCT`, `GROUP BY`, and/or `LIMIT`. This is because it will create a small set of rows before doing the `JOIN`, thereby decreasing the number of `JOINs` needed.
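A concrete rendering of that skeleton for the doctor/patient schema from the question, checked on made-up data (SQLite via Python's `sqlite3`): the `DISTINCT` is applied inside the derived table before the join ever touches `patient`.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (id INTEGER, name TEXT, surname TEXT);
    CREATE TABLE appointment (doctorid INTEGER, patientid INTEGER);
    INSERT INTO patient VALUES (1,'Ann','Lee'), (2,'Bob','Kim');
    INSERT INTO appointment VALUES (7,1), (7,1), (8,2);
""")

# Derived table first: shrink appointment to distinct patient ids for
# the doctor, then join that small row set to patient.
rows = con.execute("""
    SELECT p.id, p.name, p.surname
    FROM (SELECT DISTINCT patientid
          FROM appointment
          WHERE doctorid = ?) a
    JOIN patient p ON p.id = a.patientid
""", (7,)).fetchall()
print(rows)  # [(1, 'Ann', 'Lee')]
```

The duplicate appointment rows for doctor 7 collapse to one id before the join, which is the point of the pattern.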
|
What is faster, subselect or distinct (MySQL)?
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |