| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I've searched around, but haven't found a situation similar to what I'm trying to achieve. I want to execute the where clause expression only when the condition is true; otherwise I want to ignore the data that doesn't match it.
```
Where (
case when LeaveDateTime > FromDate and EnteringDateTime < ToDate then
(dateadd(d, Threshhold, LeaveDateTime) >= EnteringDateTime
end
);
```
I get `Incorrect syntax near '>='`.
Can someone please help me solve this?
|
"execute where clause expression only when the condition is true, otherwise I want to ignore the data that doesn't match it"
Since you say you want to ignore, i.e. not include, data when the first condition is not met, it sounds like you want a simple AND. That may not be what you intended to express, which is why it's often important to provide example results to clarify.
```
Where
(
(LeaveDateTime > FromDate and EnteringDateTime < ToDate)
AND
dateadd(day, Threshhold, LeaveDateTime) >= EnteringDateTime
);
```
|
Replace this with simple logic:
```
Where (not (LeaveDateTime > FromDate and EnteringDateTime < ToDate) or
dateadd(day, Threshhold, LeaveDateTime) >= EnteringDateTime
);
```
If the variables take on `NULL` values, then the logic is somewhat more cumbersome.
Your version doesn't work because `case` is a *scalar expression*. A boolean comparison is *not* a scalar expression, so it cannot be the value returned by a `then` branch.
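To see why NULLs make this cumbersome: a row for which the comparison is unknown is excluded by both a predicate *and* its negation, so the NOT-rewrite silently drops NULL rows unless you add explicit `IS NULL` checks. A minimal sketch of the three-valued logic, using SQLite through Python (the one-column table `t` is made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (10,), (None,)])

# a > 5 is UNKNOWN for the NULL row, so that row is filtered out ...
kept = con.execute("SELECT a FROM t WHERE a > 5").fetchall()

# ... and NOT (a > 5) is also UNKNOWN, so the NULL row is dropped here too
kept_negated = con.execute("SELECT a FROM t WHERE NOT (a > 5)").fetchall()

# only an explicit IS NULL check brings the NULL row back
kept_null_safe = con.execute(
    "SELECT a FROM t WHERE NOT (a > 5) OR a IS NULL").fetchall()
```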
|
Conditional where clause SQL Server 2012
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I must write query in SQL which will return Account Id.
For this query i have input parameters, which can be empty.
In database here is rows where some field is null or empty. So in case parameter is empty i also need to check field if it's null.
```
SELECT a.Account_ID
FROM Accounts a
WHERE a.FirstName = @FirstName /*Add check if @FirstName = '' then a.FirstName IS NULL */
AND a.LastName = @LastName
AND a.Middle = @MiddleName
AND a.Email = @Email
AND a.Company = @Company
```
|
I think it is an OR condition you need:
```
SELECT a.Account_ID
FROM Accounts a
WHERE ((@FirstName='' and a.FirstName is null) or a.FirstName = @FirstName) /*Add check if @FirstName = '' then a.FirstName IS NULL */
AND a.LastName = @LastName
AND a.Middle = @MiddleName
AND a.Email = @Email
AND a.Company = @Company
```
|
I normally do something like this if I want special behavior for empty params:
```
SELECT a.Account_ID
FROM Accounts a
WHERE
(
(@firstname != '' AND a.FirstName = @FirstName)
OR (@firstname = '' AND a.FirstName is null)
)
AND a.LastName = @LastName AND a.Middle = @MiddleName
AND a.Email = @Email AND a.Company = @Company
```
The bracketed OR handles the `param = ''` and `param != ''` cases separately.
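Reduced to just the FirstName check, the bracketed-OR pattern behaves like this. A sketch using SQLite through Python, with a hypothetical one-column `Accounts` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Accounts (Account_ID INTEGER, FirstName TEXT)")
con.executemany("INSERT INTO Accounts VALUES (?, ?)",
                [(1, "Mark"), (2, None)])

def find(first_name):
    # non-empty parameter matches the column; empty parameter matches NULL
    return con.execute(
        """SELECT Account_ID FROM Accounts
           WHERE (? != '' AND FirstName = ?)
              OR (? = '' AND FirstName IS NULL)""",
        (first_name, first_name, first_name)).fetchall()
```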
|
In Where clause add check if Parameter is empty string
|
[
"",
"sql",
"select",
"conditional-statements",
"sql-server-2014",
""
] |
I have a table with 4 fields in it, `ID, Price, QTY, Ratting`, and an optional `[Position]`.
I have all the records grouped by the columns `[Qty, Ratting]`.
I have to determine the position group-wise and store that position in the optional column.
For better understanding I have added an image with the data in the table:

On the basis of `QTY` within each `Ratting` group, I have to mark the top 3 as `Top3`, the bottom 3 as `Bottom3`, and the rest as remaining.
I am not sure how to do it.
Can anybody suggest how to do it?
So far what I've tried is:
```
Declare @RankTable TABLE
(
ID INT,
Price Decimal (10,2),
Qty INT,
Ratting INT
)
INSERT INTO @RankTable
SELECT 1,10,15,1
UNION ALL
SELECT 2,11,11,1
UNION ALL
SELECT 3,96,10,1
UNION ALL
SELECT 4,96,8,1
UNION ALL
SELECT 5,56,7,1
UNION ALL
SELECT 6,74,5,1
UNION ALL
SELECT 7,93,4,1
UNION ALL
SELECT 8,98,2,1
UNION ALL
SELECT 9,12,1,1
UNION ALL
SELECT 10,32,80,2
UNION ALL
SELECT 11,74,68,2
UNION ALL
SELECT 12,58,57,2
UNION ALL
SELECT 13,37,43,2
UNION ALL
SELECT 14,79,32,2
UNION ALL
SELECT 15,29,28,2
UNION ALL
SELECT 16,46,17,2
UNION ALL
SELECT 17,86,13,2
UNION ALL
SELECT 19,75,110,3
UNION ALL
SELECT 20,27,108,3
UNION ALL
SELECT 21,38,104,3
UNION ALL
SELECT 22,87,100,3
UNION ALL
SELECT 23,47,89,3
DECLARE @PositionGroup VARCHAR(1)
SELECT *,ISNULL(@PositionGroup,'') AS Position FROM @RankTable
```
|
You can try this:
```
SELECT ID
,Price
,Qty
,Ratting
,CASE WHEN RowID >= 1 AND RowID <= 3
THEN 0
ELSE CASE WHEN RowID > Total - 3 THEN 1 ELSE 2 END END AS Position
FROM (SELECT ID
,Price
,Qty
,Ratting
,COUNT(*) OVER(PARTITION BY Ratting) AS Total
,ROW_NUMBER() OVER(PARTITION BY Ratting ORDER BY Qty DESC) AS RowID
,ISNULL(@PositionGroup,'') AS Position
FROM @RankTable) AS T
```
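As a sanity check, the same logic (partition by Ratting, rank by Qty descending, top 3 mapped to 0, bottom 3 to 1, rest to 2) can be run in plain Python against the sample rows from the question:

```python
from collections import defaultdict

# sample rows from the question: (ID, Price, Qty, Ratting)
rows = [(1, 10, 15, 1), (2, 11, 11, 1), (3, 96, 10, 1), (4, 96, 8, 1),
        (5, 56, 7, 1), (6, 74, 5, 1), (7, 93, 4, 1), (8, 98, 2, 1),
        (9, 12, 1, 1),
        (10, 32, 80, 2), (11, 74, 68, 2), (12, 58, 57, 2), (13, 37, 43, 2),
        (14, 79, 32, 2), (15, 29, 28, 2), (16, 46, 17, 2), (17, 86, 13, 2),
        (19, 75, 110, 3), (20, 27, 108, 3), (21, 38, 104, 3),
        (22, 87, 100, 3), (23, 47, 89, 3)]

groups = defaultdict(list)
for r in rows:
    groups[r[3]].append(r)              # PARTITION BY Ratting

position = {}
for grp in groups.values():
    grp.sort(key=lambda r: -r[2])       # ORDER BY Qty DESC
    total = len(grp)                    # COUNT(*) OVER (PARTITION BY Ratting)
    for rowid, r in enumerate(grp, start=1):
        if rowid <= 3:
            position[r[0]] = 0          # Top3
        elif rowid > total - 3:
            position[r[0]] = 1          # Bottom3
        else:
            position[r[0]] = 2          # remaining
```

Note that in a group with fewer than 6 rows (Ratting 3 here), Top3 wins over Bottom3 for the overlapping rank, just as the first `WHEN` wins in the SQL `CASE`.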
|
Use a window function. Try this.
```
;WITH cte
AS (SELECT *,
           Row_number()OVER(partition BY Ratting ORDER BY id) rn,
           count(id)OVER(partition BY Ratting) mx
    FROM @RankTable)
SELECT ID,
       Price,
       Qty,
       Ratting,
       mx - rn,
       CASE WHEN rn IN ( 1, 2, 3 ) THEN 0
            WHEN mx - rn IN( 0, 1, 2 ) THEN 1
            ELSE 2
       END position
FROM cte
```
|
Find Top Most AND Lowest In a Table's Group Column
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I know how to find all tables with specified column name in database:
```
SELECT TABLE_NAME
, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%Name%'
```
As a result I get a table with `TABLE_NAME` and `COLUMN_NAME`. Is it possible to set all values in columns received by query to NULL?
|
Use dynamic SQL. Try this.
```
DECLARE @sql NVARCHAR(max)=''
SELECT @sql += 'update ' + TABLE_NAME + ' set ' + COLUMN_NAME + '= NULL '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%Name%'
--print @sql
EXEC Sp_executesql @sql
```
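What the `+=` SELECT builds is just one long batch of UPDATE statements, one per matching column. In miniature, with hypothetical table and column names standing in for `INFORMATION_SCHEMA.COLUMNS`:

```python
# hypothetical (table, column) rows as INFORMATION_SCHEMA.COLUMNS would return
columns = [("Customers", "FirstName"), ("Customers", "LastName"),
           ("Orders", "OrderDate")]

sql = ""
for table, column in columns:
    if "Name" in column:                      # the LIKE '%Name%' filter
        sql += "update " + table + " set " + column + "= NULL "
```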
|
You cannot list every table and every column in a static query. For that you need to select the tables and columns dynamically and execute the result. You can use a cursor to do this:
```
DECLARE @TableName VARCHAR(100)=''
DECLARE @ColumnName VARCHAR(100)
DECLARE @PreviousTableName VARCHAR(100)=''
DECLARE @PreviousColumnName NVARCHAR(MAX)=''
DECLARE @ROWNO INT
DECLARE @MAXCOUNT INT
DECLARE @UPDATESQL NVARCHAR(MAX)
DECLARE @COLSSQL NVARCHAR(MAX)
-- Here you declare which all columns you need to loop in Cursor
DECLARE CUR CURSOR FOR
SELECT TABLE_NAME, COLUMN_NAME
-- Find Row number for each table
,ROW_NUMBER() OVER(PARTITION BY TABLE_NAME ORDER BY COLUMN_NAME) RNO,
-- Gets the count of columns in each table
COUNT(COLUMN_NAME) OVER(PARTITION BY TABLE_NAME) CNT
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '%TE%'
ORDER BY TABLE_NAME, COLUMN_NAME
OPEN CUR
FETCH NEXT FROM CUR
INTO @TableName,@ColumnName,@ROWNO,@MAXCOUNT
WHILE @@FETCH_STATUS = 0
BEGIN
SET @PreviousColumnName = @PreviousColumnName + @ColumnName + '=NULL,'
-- If current row is last record of the table in the loop
IF(@ROWNO = @MAXCOUNT)
BEGIN
SET @UPDATESQL = 'UPDATE '+@TableName+' SET ' + @PreviousColumnName
SET @UPDATESQL = LEFT(@UPDATESQL, LEN(@UPDATESQL) - 1)
EXEC SP_EXECUTESQL @UPDATESQL
SET @PreviousColumnName = ''
END
-- Fetches next record and increments the loop
FETCH NEXT FROM CUR
INTO @TableName,@ColumnName,@ROWNO,@MAXCOUNT
END
CLOSE CUR;
DEALLOCATE CUR;
```
* **[SQL FIDDLE](http://sqlfiddle.com/#!3/b1b76/2)**
|
Find all tables with specified column name and set values to NULL
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a table in Hive (SQL) with a bunch of timestamps that need to be grouped in order to create separate sessions based on the time difference between the timestamps.
Example:
Consider the following timestamps(Given in HH:MM for simplicity):
`9.00
9.10
9.20
9.40
9.43
10.30
10.45
11.25
12.30
12.33`
and so on..
So now, all timestamps that fall within 30 mins of the next timestamp come under the same session,
i.e. 9.00,9.10,9.20,9.40,9.43 form 1 session.
But since the difference between 9.43 and 10.30 is more than 30 mins, the time stamp 10.30 falls under a different session. Again, 10.30 and 10.45 fall under one session.
After we have created these sessions, we have to obtain the minimum timestamp for that session and the max timestamp.
I tried subtracting the current timestamp from its LEAD and placing a flag if the difference is greater than 30 mins, but I'm having difficulty with this.
Any suggestion from you guys would be greatly appreciated. Please let me know if the question isn't clear enough.
Expected Output for this sample data:
```
Session_start Session_end
9.00 9.43
10.30 10.45
11.25 11.25 (same because the next time is not within 30 mins)
12.30 12.33
```
Hope this helps.
|
So it's not MySQL but Hive. I don't know Hive, but if it supports LAG, as you say, try this PostgreSQL query. You will probably have to change the time difference calculation, that's usually different from one dbms to another.
```
select min(thetime) as start_time, max(thetime) as end_time
from
(
select thetime, count(gap) over (rows between unbounded preceding and current row) as groupid
from
(
select thetime, case when thetime - lag(thetime) over (order by thetime) > interval '30 minutes' then 1 end as gap
from mytable
) times
) groups
group by groupid
order by min(thetime);
```
The query finds gaps, then uses a running total of gap counts to build group IDs, and the rest is aggregation.
SQL fiddle: <http://www.sqlfiddle.com/#!17/8bc4a/6>.
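The gap-then-running-total idea is easy to verify on the question's sample data in plain Python (a sketch of the same algorithm, not Hive):

```python
from datetime import datetime, timedelta

# timestamps from the question (HH.MM)
times = ["9.00", "9.10", "9.20", "9.40", "9.43",
         "10.30", "10.45", "11.25", "12.30", "12.33"]
ts = sorted(datetime.strptime(t, "%H.%M") for t in times)

# a gap of more than 30 minutes starts a new session; the running count
# of gaps seen so far is the session (group) id
sessions = {}
group_id = 0
for prev, cur in zip([None] + ts[:-1], ts):
    if prev is not None and cur - prev > timedelta(minutes=30):
        group_id += 1
    sessions.setdefault(group_id, []).append(cur)

# min/max per session gives session_start / session_end
result = [(min(g).strftime("%H.%M"), max(g).strftime("%H.%M"))
          for g in sessions.values()]
```

This reproduces the expected output from the question, including the one-row session `11.25 / 11.25`.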
|
With MySQL lacking LAG and LEAD functions, getting the previous or next record is some work already. Here is how:
```
select
thetime,
(select max(thetime) from mytable afore where afore.thetime < mytable.thetime) as afore_time,
(select min(thetime) from mytable after where after.thetime > mytable.thetime) as after_time
from mytable;
```
Based on this we can build the whole query where we are looking for gaps (i.e. the time difference to the previous or next record is more than 30 minutes = 1800 seconds).
```
select
startrec.thetime as start_time,
(
select min(endrec.thetime)
from
(
select
thetime,
coalesce(time_to_sec(timediff((select min(thetime) from mytable after where after.thetime > mytable.thetime), thetime)), 1801) > 1800 as gap
from mytable
) endrec
where gap
and endrec.thetime >= startrec.thetime
) as end_time
from
(
select
thetime,
coalesce(time_to_sec(timediff(thetime, (select max(thetime) from mytable afore where afore.thetime < mytable.thetime))), 1801) > 1800 as gap
from mytable
) startrec
where gap;
```
SQL fiddle: <http://www.sqlfiddle.com/#!2/d307b/20>.
|
Grouping Timestamps based on the interval between them
|
[
"",
"sql",
"session",
"select",
"group-by",
"hive",
""
] |
I have the following two tables
Wishlist
```
id item created_by modified_by
---------------------------------------
1 Lips.. 5 6
```
Users
```
id first_name
----------------
5 Mark
6 Clare
```
I need to replace columns created\_by and modified\_by with the first\_name associated with the ID in those fields.
I'm currently using the following code
```
SELECT wishlist.*, user.first_name FROM wishlist
LEFT JOIN user ON user.id = wishlist.created_by
```
Which returns the following result
```
id item created_by modified_by first_name
-----------------------------------------------------
1 Lips.. 5 6 Mark
```
Which is okay.. I guess - but not what I need.
Ideally, I'd like..
```
id item created_by modified_by
---------------------------------------
1 Lips.. Mark Clare
```
Any help is appreciated. Thanks :)
|
Join the user table twice, specify the exact columns you want and rename them accordingly:
```
SELECT wishlist.id, wishlist.item, u1.first_name AS created_by, u2.first_name AS modified_by FROM wishlist
LEFT JOIN user AS u1 ON u1.id = wishlist.created_by
LEFT JOIN user AS u2 ON u2.id = wishlist.modified_by
```
|
Try this:
```
SELECT w.id, w.item, u1.first_name as created_by, u2.first_name as modified_by
FROM wishlist w
LEFT JOIN user u1 ON u1.id = w.created_by
LEFT JOIN user u2 ON u2.id = w.modified_by
```
|
Replace columns with SQL JOIN
|
[
"",
"mysql",
"sql",
"join",
"foreign-keys",
""
] |
I have created a table empInfo as follow
```
CREATE TABLE empInfo (
empid INT(11) PRIMARY KEY AUTO_INCREMENT ,
firstname VARCHAR(255) DEFAULT NULL,
lastname VARCHAR(255) DEFAULT NULL
)
```
Then I run below Insert statements :-
```
INSERT INTO empInfo VALUES(NULL , 'SHREE','PATIL');
INSERT INTO empInfo(firstname,lastname) VALUES( 'VIKAS','PATIL');
INSERT INTO empInfo VALUES(NULL , 'SHREEKANT','JOHN');
```
I thought the first or third INSERT would fail, as empid is the PRIMARY KEY and we are trying to insert NULL for empid.
But MySQL proved me wrong and all 3 queries ran successfully.
**I wanted to know Why it is not failing when trying to insert `NULL` in `empid` column ?**
Final Data available in table is as below
```
empid firstname lastname
1 SHREE PATIL
2 VIKAS PATIL
3 SHREEKANT JOHN
```
I can figure out that it has something related to `AUTO_INCREMENT`, but I am not able to figure out the reason for it. Any pointers on this?
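For what it's worth, this NULL-means-generate behaviour is not unique to MySQL: SQLite's `INTEGER PRIMARY KEY` does the same thing. The question's three inserts reproduce exactly, via Python's sqlite3 (types simplified from the MySQL table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY in SQLite auto-assigns a rowid when given NULL,
# much like MySQL's AUTO_INCREMENT column
con.execute("""CREATE TABLE empInfo (
                   empid INTEGER PRIMARY KEY,
                   firstname TEXT DEFAULT NULL,
                   lastname TEXT DEFAULT NULL)""")
con.execute("INSERT INTO empInfo VALUES (NULL, 'SHREE', 'PATIL')")
con.execute("INSERT INTO empInfo(firstname, lastname) VALUES ('VIKAS', 'PATIL')")
con.execute("INSERT INTO empInfo VALUES (NULL, 'SHREEKANT', 'JOHN')")

# all three inserts succeed and empid comes back as 1, 2, 3
rows = con.execute("SELECT * FROM empInfo ORDER BY empid").fetchall()
```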
|
This [behaviour is by design](https://dba.stackexchange.com/a/60531/5489), viz inserting `0, NULL, or DEFAULT` into an `AUTO_INCREMENT` column will all trigger the `AUTO_INCREMENT` behaviour.
```
INSERT INTO empInfo VALUES(DEFAULT, 'SHREEKANT','JOHN');
INSERT INTO empInfo VALUES(NULL, 'SHREEKANT','JOHN');
INSERT INTO empInfo VALUES(0, 'SHREEKANT','JOHN');
```
and is [commonplace practice](https://stackoverflow.com/a/1871343/314291)
Note, however, that this wasn't always the case in versions prior to [`4.1.6`](http://bugs.mysql.com/bug.php?id=5510)
**Edit**
> Does that mean AUTO\_INCREMENT is taking precedance over PRIMARY KEY?
Yes, since the primary key is dependent on the `AUTO_INCREMENT` delivering a new sequence prior to constraint checking and record insertion, the `AUTO_INCREMENT` process (including the above re-purposing of `NULL / 0 / DEFAULT`) would need to be resolved prior to checking PRIMARY KEY constraint in any case.
If you remove the `AUTO_INCREMENT` and define the `emp_id` PK as `INT(11) NULL` (which is nonsensical, but MySql will create the column this way), as soon as you insert a NULL into the PK you will get the familiar
> Error Code: 1048. Column 'emp\_id' cannot be null
So it is clear that the `AUTO_INCREMENT` resolution precedes the primary key constraint checks.
|
As per the [documentation page](http://dev.mysql.com/doc/refman/5.7/en/example-auto-increment.html):
> No value was specified for the AUTO\_INCREMENT column, so MySQL assigned sequence numbers automatically. You can also explicitly assign 0 to the column to generate sequence numbers. **If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers.**
So, because you have an auto increment null-allowed field, it *ignores* the fact that you're trying to place a NULL in there, and instead gives you a sequenced number.
You could just leave it as is since, even *without* the `not null` constraint, you can't get a NULL in there, because it will auto-magically convert that to a sequenced number.
Or you can *change* the column to be `empid INT(11) PRIMARY KEY AUTO_INCREMENT NOT NULL` if you wish, but I still think the `insert` will allow you to specify NULLs, converting them into sequenced numbers in spite of what the documentation states (tested on [sqlfiddle](http://sqlfiddle.com/) in MySQL `5.6.6 m9` and `5.5.32`).
In *both* cases, you can still force the column to a specific (non-zero) number, constraints permitting of course.
|
MySQL is not failing when deliberately inserting `NULL` in Primary Key AUTO_INCREMENT column
|
[
"",
"mysql",
"sql",
""
] |
Is there any way to send a recursive query in SQL?
Given the end node id, I need all the rows up to the root node (which has `parentid = NULL`) ordered by level. E.g. if I have something like:
```
nodeid | parentid
a | NULL
b | a
c | b
```
after querying for `end_node_id = c`, I'd get something like:
```
nodeid | parentid | depth
a | NULL | 0
b | a | 1
c | b | 2
```
(Instead of the depth I can also work with the distance to the given end node)
The only (and obvious) way I could come up with is doing a single query per row until I reach the parent node.
Is there a more efficient way of doing it?
|
Ended up with the following solutions (where level is the distance to the end node)
Oracle, using [hierarchical queries](http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm) (thanks to the info provided by @Mureinik):
```
SELECT IDCATEGORY, IDPARENTCATEGORY, LEVEL
FROM TNODES
START WITH IDCATEGORY=122
CONNECT BY IDCATEGORY = PRIOR IDPARENTCATEGORY;
```
Example using a view so it boils down to a single standard SQL query (requires >= 10g):
```
CREATE OR REPLACE VIEW VNODES AS
SELECT CONNECT_BY_ROOT IDCATEGORY "IDBRANCH", IDCATEGORY, IDPARENTCATEGORY, LEVEL AS LVL
FROM TNODES
CONNECT BY IDCATEGORY = PRIOR IDPARENTCATEGORY;
SELECT * FROM VNODES WHERE IDBRANCH = 122 ORDER BY LVL ASC;
```
<http://sqlfiddle.com/#!4/18ba80/3>
Postgres >= 8.4, using a [WITH RECURSIVE Common Table Expression](http://www.postgresql.org/docs/8.4/static/queries-with.html) query:
```
WITH RECURSIVE BRANCH(IDPARENTCATEGORY, IDCATEGORY, LEVEL) AS (
SELECT IDPARENTCATEGORY, IDCATEGORY, 1 AS LEVEL FROM TNODES WHERE IDCATEGORY = 122
UNION ALL
SELECT p.IDPARENTCATEGORY, p.IDCATEGORY, LEVEL+1
FROM BRANCH pr, TNODES p
WHERE p.IDCATEGORY = pr.IDPARENTCATEGORY
)
SELECT IDCATEGORY,IDPARENTCATEGORY, LEVEL
FROM BRANCH
ORDER BY LEVEL ASC
```
Example using a view so it boils down to a single standard SQL query:
```
CREATE OR REPLACE VIEW VNODES AS
WITH RECURSIVE BRANCH(IDBRANCH,IDPARENTCATEGORY,IDCATEGORY,LVL) AS (
SELECT IDCATEGORY AS IDBRANCH, IDPARENTCATEGORY, IDCATEGORY, 1 AS LVL FROM TNODES
UNION ALL
SELECT pr.IDBRANCH, p.IDPARENTCATEGORY, p.IDCATEGORY, LVL+1
FROM BRANCH pr, TNODES p
WHERE p.IDCATEGORY = pr.IDPARENTCATEGORY
)
SELECT IDBRANCH, IDCATEGORY, IDPARENTCATEGORY, LVL
FROM BRANCH;
SELECT * FROM VNODES WHERE IDBRANCH = 122 ORDER BY LVL ASC;
```
<http://sqlfiddle.com/#!11/42870/2>
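The same walk is easy to check in any engine with recursive CTEs, SQLite included. A sketch via Python's sqlite3 on the question's sample rows; here `lvl` is the distance to the given end node, which the question says is acceptable instead of depth:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tnodes (nodeid TEXT, parentid TEXT)")
con.executemany("INSERT INTO tnodes VALUES (?, ?)",
                [("a", None), ("b", "a"), ("c", "b")])

# walk up from end node 'c' to the root; lvl counts steps from the end node
rows = con.execute("""
    WITH RECURSIVE branch(nodeid, parentid, lvl) AS (
        SELECT nodeid, parentid, 0 FROM tnodes WHERE nodeid = 'c'
        UNION ALL
        SELECT t.nodeid, t.parentid, b.lvl + 1
        FROM branch b JOIN tnodes t ON t.nodeid = b.parentid
    )
    SELECT nodeid, parentid, lvl FROM branch ORDER BY lvl DESC
""").fetchall()
```

The recursion stops by itself at the root, because the join on `parentid = NULL` matches nothing.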
|
If you are using SQL Server 2005+ you can do this:
**Test data:**
```
DECLARE @tbl TABLE(nodeId VARCHAR(10),parentid VARCHAR(10))
INSERT INTO @tbl
VALUES ('a',null),('b','a'),('c','b')
```
**Query**
```
;WITH CTE
AS
(
SELECT
tbl.nodeId,
tbl.parentid,
0 AS Depth
FROM
@tbl as tbl
WHERE
tbl.parentid IS NULL
UNION ALL
SELECT
tbl.nodeId,
tbl.parentid,
CTE.Depth+1 AS Depth
FROM
@tbl AS tbl
JOIN CTE
ON tbl.parentid=CTE.nodeId
)
SELECT
*
FROM
CTE
```
|
Hierarchical SQL Queries: Best SQL query to obtain the whole branch of a tree from a [nodeid, parentid] pairs table given the end node id
|
[
"",
"sql",
"tree",
"hierarchical-data",
"recursive-query",
"hierarchical-query",
""
] |
On postgresql 9.3, I have a table with a little over a million records, the table was created as:
```
CREATE TABLE entradas
(
id serial NOT NULL,
uname text,
contenido text,
fecha date,
hora time without time zone,
fecha_hora timestamp with time zone,
geom geometry(Point,4326),
CONSTRAINT entradas_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE entradas
OWNER TO postgres;
CREATE INDEX entradas_date_idx
ON entradas
USING btree
(fecha_hora);
CREATE INDEX entradas_gix
ON entradas
USING gist
(geom);
```
I'm executing a query to aggregate rows on time intervals as follows:
```
WITH x AS (
SELECT t1, t1 + interval '15min' AS t2
FROM generate_series('2014-12-02 0:0' ::timestamp
,'2014-12-02 23:45' ::timestamp, '15min') AS t1
)
select distinct
x.t1,
count(t.id) over w
from x
left join entradas t on t.fecha_hora >= x.t1
AND t.fecha_hora < x.t2
window w as (partition by x.t1)
order by x.t1
```
This query takes about 50 seconds. From the output of explain, you can see that the timestamp index is not used:
```
Unique (cost=86569161.81..87553155.15 rows=131199111 width=12)
CTE x
-> Function Scan on generate_series t1 (cost=0.00..12.50 rows=1000 width=8)
-> Sort (cost=86569149.31..86897147.09 rows=131199111 width=12)
Sort Key: x.t1, (count(t.id) OVER (?))
-> WindowAgg (cost=55371945.38..57667929.83 rows=131199111 width=12)
-> Sort (cost=55371945.38..55699943.16 rows=131199111 width=12)
Sort Key: x.t1
-> Nested Loop Left Join (cost=0.00..26470725.90 rows=131199111 width=12)
Join Filter: ((t.fecha_hora >= x.t1) AND (t.fecha_hora < x.t2))
-> CTE Scan on x (cost=0.00..20.00 rows=1000 width=16)
-> Materialize (cost=0.00..49563.88 rows=1180792 width=12)
-> Seq Scan on entradas t (cost=0.00..37893.92 rows=1180792 width=12)
```
However, if i do `set enable_seqscan=false` (I know, one should never do this), then the query executes in less than a second and the output of explain shows that it is using the index on the timestamp column:
```
Unique (cost=91449584.16..92433577.50 rows=131199111 width=12)
CTE x
-> Function Scan on generate_series t1 (cost=0.00..12.50 rows=1000 width=8)
-> Sort (cost=91449571.66..91777569.44 rows=131199111 width=12)
Sort Key: x.t1, (count(t.id) OVER (?))
-> WindowAgg (cost=60252367.73..62548352.18 rows=131199111 width=12)
-> Sort (cost=60252367.73..60580365.51 rows=131199111 width=12)
Sort Key: x.t1
-> Nested Loop Left Join (cost=1985.15..31351148.25 rows=131199111 width=12)
-> CTE Scan on x (cost=0.00..20.00 rows=1000 width=16)
-> Bitmap Heap Scan on entradas t (cost=1985.15..30039.14 rows=131199 width=12)
Recheck Cond: ((fecha_hora >= x.t1) AND (fecha_hora < x.t2))
-> Bitmap Index Scan on entradas_date_idx (cost=0.00..1952.35 rows=131199 width=0)
Index Cond: ((fecha_hora >= x.t1) AND (fecha_hora < x.t2))
```
Why is postgres not using `entradas_date_idx` unless I force it to even if executing the query is way faster using it?
How could I make postgres use `entradas_date_idx` without resorting to `set enable_seqscan=false`?
|
You can simplify your query quite a bit:
```
SELECT x.t1, count(*) AS ct
FROM generate_series('2014-12-02'::timestamp
, '2014-12-03'::timestamp
, '15 min'::interval) x(t1)
LEFT JOIN entradas t ON t.fecha_hora >= x.t1
AND t.fecha_hora < x.t1 + interval '15 min'
GROUP BY 1
ORDER BY 1;
```
`DISTINCT` in combination with a window function is typically much more expensive (and also harder to estimate) for the query planner.
The CTE is not necessary and typically more expensive than a subquery. And also harder to estimate for the query planner since CTEs are optimization barriers.
It looks like you want to cover a whole day, but you were missing out on the last 15 minutes. Use a simpler `generate_series()` expression to cover the whole day (still not overlapping with adjacent days).
Next, why do you have `fecha_hora timestamp` **`with time zone`**, while you also have `fecha date` and `hora time [without time zone]`? Looks like it should be `fecha_hora timestamp`, with the redundant columns dropped?
This would also avoid the subtle difference to the data type of your `generate_series()` expression - which should not normally be a problem, but `timestamp` depends on the time zone of your session and is not `IMMUTABLE` like `timestamptz`.
If that's still not good enough, add a redundant `WHERE` condition as [advised by @Daniel](https://stackoverflow.com/a/27879301/939860) to instruct the query planner.
Basic advice for bad plans is applicable as well:
* [Keep PostgreSQL from sometimes choosing a bad query plan](https://stackoverflow.com/questions/8228326/how-can-i-avoid-postgresql-sometimes-choosing-a-bad-query-plan-for-one-of-two-ne/8229000#8229000)
|
**Analysis of the wrong estimate**
The gist of the problem here is that the postgres planner has no idea what values, and how many rows, are coming out of the `generate_series` call, and yet has to estimate how many of them will satisfy the JOIN condition against the big `entradas` table. In your case, it fails big time.
In reality, only a small portion of the table will be joined, but the estimate errs on the opposite side, as shown in this part of the EXPLAIN:
```
-> Nested Loop Left Join (cost=0.00..26470725.90 rows=131199111 width=12)
Join Filter: ((t.fecha_hora >= x.t1) AND (t.fecha_hora < x.t2))
-> CTE Scan on x (cost=0.00..20.00 rows=1000 width=16)
-> Materialize (cost=0.00..49563.88 rows=1180792 width=12)
-> Seq Scan on entradas t (cost=0.00..37893.92 rows=1180792 width=12)
```
`entradas` is estimated at `1180792` rows, `x` is estimated at `1000` rows which I believe is just the default for any SRF call. The result of the JOIN is estimated at `131199111` rows, more than 100 times the number of rows of the big table!
**Trick the planner into a better estimate**
Since we know that the timestamps in `x` belong to a narrow range (one day), we may help the planner with that information in the form of an additional JOIN condition:
```
left join entradas t
ON t.fecha_hora >= x.t1
AND t.fecha_hora < x.t2
AND (t.fecha_hora BETWEEN '2014-12-02'::timestamp
AND '2014-12-03'::timestamp)
```
(it does not matter that the BETWEEN range includes the upper bound or is generally a bit bigger, it will be filtered out strictly by the other conditions).
The planner should then be able to make use of the statistics, recognize that only a small portion of the index is concerned by this range of values, and use the index rather than sequentially scanning the entire big table.
|
Postgresql ignoring index on timestamp column even if query is faster using index
|
[
"",
"sql",
"postgresql",
"postgresql-9.3",
"postgresql-performance",
"sql-execution-plan",
""
] |
I have Table1 with Column1:
```
Column1|
-------|
1 aaaa|
2 aaaa|
3 aaaa|
4 aaaa|
10 aaaa|
6 aaaa|
7 aaaa|
8 aaaa|
9 aaaa|
5 aaaa|
```
If I query:
```
Select *
FROM Table1
Order By Column1
```
The output is:
```
1 aaaa
10 aaaa
2 aaaa
3 aaaa
4 aaaa
5 aaaa
6 aaaa
7 aaaa
8 aaaa
9 aaaa
```
What I would like is:
```
1 aaaa
2 aaaa
3 aaaa
4 aaaa
5 aaaa
6 aaaa
7 aaaa
8 aaaa
9 aaaa
10 aaaa
```
I know I could solve it by adding a 0 in front of the numbers 1 through 9.
But unfortunately I cannot do this in the data; it must be done in the query.
|
try this:
```
Select *
FROM Table1
Order By cast(SUBSTRING (Column1,1,charindex(' ',Column1)-1) as int)
```
Please check this [**DEMO**](http://sqlfiddle.com/#!3/3a311/9)
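The key being cast is easy to sanity-check outside SQL; the same idea in Python (split off the part before the first space and sort on it as an integer):

```python
vals = ["1 aaaa", "10 aaaa", "2 aaaa", "3 aaaa", "4 aaaa",
        "5 aaaa", "6 aaaa", "7 aaaa", "8 aaaa", "9 aaaa"]

# a plain text sort puts "10 aaaa" right after "1 aaaa"
text_order = sorted(vals)

# sorting on the integer prefix, like CAST(SUBSTRING(...) as int),
# restores the natural order 1..10
numeric_order = sorted(vals, key=lambda s: int(s.split(" ", 1)[0]))
```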
|
Got it:
```
SELECT C
FROM Table_1
ORDER BY CAST(Left(Column1, CHARINDEX(' ', Column1)) AS int)
```
Thanks everyone
|
Order by with special condition
|
[
"",
"sql",
"sql-order-by",
""
] |
In the query below the condition is failing, but logically it should pass. Is something wrong with `TO_CHAR`?
```
SELECT data
FROM table1
WHERE
TO_CHAR(TO_DATE(value1, 'DD/MM/RRRR HH24:MI:SS'), 'DD/MM/RRRR HH24:MI:SS') <=
TO_CHAR( SYSDATE, 'DD/MM/RRRR HH24:MI:SS')
AND TO_CHAR(TO_DATE(value2, 'DD/MM/RRRR HH24:MI:SS'), 'DD/MM/RRRR HH24:MI:SS') >=
TO_CHAR( SYSDATE, 'DD/MM/RRRR HH24:MI:SS');
value1='02/07/2014 12:30:10'
value2='06/08/2015 09:57:33'
```
In both conditions it is only comparing the day part, i.e. 02 <= 07 (the 7th is today's date). The first condition is satisfied regardless of month and year. If I change value1 to `'15/08/2014 12:30:10'` it fails. Same with the second condition.
|
Finally, after a lot of googling, I got the solution. The query below works for me :) :)
```
SELECT MESSAGE
FROM TABLE1
WHERE TO_TIMESTAMP(value1, 'DD/MM/RRRR HH24:MI:SS') <= CAST(SYSDATE AS TIMESTAMP)
  AND TO_TIMESTAMP(value2, 'DD/MM/RRRR HH24:MI:SS') >= CAST(SYSDATE AS TIMESTAMP);
```
value1='02/07/2014 12:30:10'
value2='06/08/2015 09:57:33'
|
Why are you comparing dates as strings? This also begs the question of why you would store dates as strings in the first place. You should store date/times using the built-in types.
Try this instead:
```
SELECT data
FROM table1
WHERE TO_DATE(value1, 'DD/MM/RRRR HH24:MI:SS') <= sysdate AND
TO_DATE(value2, 'DD/MM/RRRR HH24:MI:SS') >= sysdate;
```
Your problem is presumably that you are comparing strings, rather than dates. And the format you are using `DD/MM/YYYY` doesn't do comparisons the same way. This is, in fact, why you should just use the ISO format of YYYY-MM-DD whenever you are storing date/time values in strings (which I don't recommend in most cases anyway).
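The pitfall is plain character-by-character comparison, which any language reproduces on the question's own values:

```python
# DD/MM/YYYY strings compare day-first
v1_ok = "02/07/2014 12:30:10"
v1_bad = "15/08/2014 12:30:10"
now = "06/08/2015 09:57:33"   # a "current" timestamp in the same format

# true, but only by coincidence: '02' < '06' in the day position
assert v1_ok <= now

# false even though August 2014 is before August 2015: '15' > '06'
assert not (v1_bad <= now)

# ISO YYYY-MM-DD strings order correctly even when compared as text
assert "2014-08-15 12:30:10" <= "2015-08-06 09:57:33"
```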
If your values are already stored in proper types, then you can just do:
```
SELECT data
FROM table1
WHERE value1 <= sysdate AND
value2 >= sysdate;
```
If these are timestamps with time zone, then you can use `SYSTIMESTAMP` instead of `SYSDATE`.
|
TO_CHAR only considering dates while comparing two time stamps?
|
[
"",
"sql",
"oracle",
""
] |
I am fairly new to Access and SQL programming. I am trying to do the following:
```
Sum(SO_SalesOrderPaymentHistoryLineT.Amount) AS [Sum Of PaymentPerYear]
```
and group by year, even when there is no amount in some of the years. I would like to have those years listed as well, for a report with charts. I'm not certain if this is possible, but every bit of help is appreciated.
My code so far is as follows:
```
SELECT
Base_CustomerT.SalesRep,
SO_SalesOrderT.CustomerId,
Base_CustomerT.Customer,
SO_SalesOrderPaymentHistoryLineT.DatePaid,
Sum(SO_SalesOrderPaymentHistoryLineT.Amount) AS [Sum Of PaymentPerYear]
FROM
Base_CustomerT
INNER JOIN (
SO_SalesOrderPaymentHistoryLineT
INNER JOIN SO_SalesOrderT
ON SO_SalesOrderPaymentHistoryLineT.SalesOrderId = SO_SalesOrderT.SalesOrderId
) ON Base_CustomerT.CustomerId = SO_SalesOrderT.CustomerId
GROUP BY
Base_CustomerT.SalesRep,
SO_SalesOrderT.CustomerId,
Base_CustomerT.Customer,
SO_SalesOrderPaymentHistoryLineT.DatePaid,
SO_SalesOrderPaymentHistoryLineT.PaymentType,
Base_CustomerT.IsActive
HAVING
(((SO_SalesOrderPaymentHistoryLineT.PaymentType)=1)
AND ((Base_CustomerT.IsActive)=Yes))
ORDER BY
Base_CustomerT.SalesRep,
Base_CustomerT.Customer;
```
|
Thank you John for your help. I found a solution which works for me. It looks quite different, but I learned a lot from it. If you are interested, here is how it looks now.
```
SELECT DISTINCTROW
Base_Customer_RevenueYearQ.SalesRep,
Base_Customer_RevenueYearQ.CustomerId,
Base_Customer_RevenueYearQ.Customer,
Base_Customer_RevenueYearQ.RevenueYear,
CustomerPaymentPerYearQ.[Sum Of PaymentPerYear]
FROM
Base_Customer_RevenueYearQ
LEFT JOIN CustomerPaymentPerYearQ
ON (Base_Customer_RevenueYearQ.RevenueYear = CustomerPaymentPerYearQ.[RevenueYear])
AND (Base_Customer_RevenueYearQ.CustomerId = CustomerPaymentPerYearQ.CustomerId)
GROUP BY
Base_Customer_RevenueYearQ.SalesRep,
Base_Customer_RevenueYearQ.CustomerId,
Base_Customer_RevenueYearQ.Customer,
Base_Customer_RevenueYearQ.RevenueYear,
CustomerPaymentPerYearQ.[Sum Of PaymentPerYear]
;
```
|
You need another table with all the years listed; you can create this on the fly or have one in the db, and join from that. So if you had a table called alltheyears with a column called y that just listed the years, then you could use code like this:
```
WITH minmax as
(
  select min(year(SO_SalesOrderPaymentHistoryLineT.DatePaid)) as minyear,
         max(year(SO_SalesOrderPaymentHistoryLineT.DatePaid)) as maxyear
  from SO_SalesOrderPaymentHistoryLineT
), yearsused as
(
  select y
  from alltheyears, minmax
  where alltheyears.y >= minyear and alltheyears.y <= maxyear
)
select *
from yearsused
join ( /* your query above goes here */ ) T
  ON year(T.DatePaid) = yearsused.y
```
|
Include missing years in Group By query
|
[
"",
"sql",
"sql-server",
"missing-data",
""
] |
I have three tables: TableThreeItems, TableTwoItems and TableOneItems. I need to get the number of visits that a person had in each of these tables.
I tried the following:
```
SELECT Person.[PersonId]
, Count(TableOneItemId ) as TableOneItemCount
, Count(TableTwoItemIdId ) as TableTwoItemCount
, Count(TableThreeItemId ) as TableThreeItemCount
FROM [dbo].[Person]
LEFT JOIN TableOneItem ON TableOneItem.PersonId=Person.PersonId
LEFT JOIN TableThreeItemId ON TableThreeItemId.PersonId=Person.PersonId
LEFT JOIN TableTwoItemId ON TableTwoItemId.PersonId=Person.PersonId
WHERE [Person].PersonId=1
GROUP BY Person.[PersonId]
```
But this doesn't treat the count from each of these tables as separate, and I always get a wrong count. For example, if a person had 1 TableOneItem and 2 TableTwoItemIds, TableOneItemCount would be 2, the same as TableTwoItemIdCount.
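What seems to be happening, in miniature: the joins multiply the child rows for the person before the counts run. A hypothetical sketch:

```python
# hypothetical child rows for one person: 1 TableOneItem, 2 TableTwoItems
one_items = ["o1"]
two_items = ["t1", "t2"]

# the LEFT JOINs effectively cross-join the child rows per person
joined = [(a, b) for a in one_items for b in two_items]

count_one = len([a for a, _ in joined])           # plain COUNT: inflated to 2
count_one_distinct = len({a for a, _ in joined})  # COUNT(DISTINCT): correct, 1
```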
|
You can do this in a single `join` by using `count(distinct)`:
```
SELECT Person.[PersonId],
Count(distinct TableOneItemId ) as TableOneItemCount,
Count(distinct TableTwoItemIdId ) as TableTwoItemCount,
Count(distinct TableThreeItemId ) as TableThreeItemCount
FROM [dbo].[Person] LEFT JOIN
TableOneItem
ON TableOneItem.PersonId = Person.PersonId LEFT JOIN
TableThreeItemId
ON TableThreeItemId.PersonId = Person.PersonId LEFT JOIN
TableTwoItemId
ON TableTwoItemId.PersonId = Person.PersonId
WHERE [Person].PersonId = 1
GROUP BY Person.[PersonId];
```
This is not recommended if you expect the counts to be even moderately large. If the counts are each 100, then this produces a cartesian product for each person with 100\*100\*100 = 1,000,000 intermediate rows. Way too much processing. In that case, you want to do the counts as a subquery.
```
SELECT Person.[PersonId], TableOneItemCount, TableTwoItemCount, TableThreeItemCount
FROM [dbo].[Person] LEFT JOIN
(select personid, count(TableOneItemId) as TableOneItemCount from TableOneItem where PersonId = 1 group by personid
) t1
ON t1.PersonId = Person.PersonId LEFT JOIN
(select personid, count(TableTwoItemId) as TableTwoItemCount from TableTwoItem where PersonId = 1 group by personid
) t2
ON t2.PersonId = Person.PersonId LEFT JOIN
(select personid, count(TableThreeItemId) as TableThreeItemCount from TableThreeItem where PersonId = 1 group by personid
) t3
ON t3.PersonId = Person.PersonId
WHERE Person.PersonId = 1;
```
In most cases, you would just use `group by PersonId` in the subqueries for the calculation without the `where`. But you are only selecting one person.
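The inflation is easy to reproduce on toy data. A small sketch with Python's `sqlite3` (made-up table names) showing the plain counts doubling once a second one-to-many join is added, and `COUNT(DISTINCT ...)` correcting it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (personid INT);
CREATE TABLE t1 (id INT, personid INT);
CREATE TABLE t2 (id INT, personid INT);
INSERT INTO person VALUES (1);
INSERT INTO t1 VALUES (10, 1);            -- one row for person 1
INSERT INTO t2 VALUES (20, 1), (21, 1);   -- two rows for person 1
""")

base = """
FROM person
LEFT JOIN t1 ON t1.personid = person.personid
LEFT JOIN t2 ON t2.personid = person.personid
WHERE person.personid = 1
"""

# The two joins produce 1 x 2 = 2 intermediate rows, so t1.id is counted twice.
plain = conn.execute("SELECT COUNT(t1.id), COUNT(t2.id) " + base).fetchone()

# COUNT(DISTINCT ...) collapses the duplicated ids back down.
fixed = conn.execute(
    "SELECT COUNT(DISTINCT t1.id), COUNT(DISTINCT t2.id) " + base).fetchone()

print(plain, fixed)  # (2, 2) (1, 2)
```

With 100 rows per table per person the same query would build 10,000 intermediate rows, which is why the subquery form scales better.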
|
Just add a `DISTINCT` to the count and you're done:
```
SELECT Person.[PersonId]
, Count(DISTINCT TableOneItemId ) as TableOneItemCount
, Count(DISTINCT TableTwoItemIdId ) as TableTwoItemCount
, Count(DISTINCT TableThreeItemId ) as TableThreeItemCount
FROM [dbo].[Person]
LEFT JOIN TableOneItem ON TableOneItem.PersonId=Person.PersonId
LEFT JOIN TableThreeItemId ON TableThreeItemId.PersonId=Person.PersonId
LEFT JOIN TableTwoItemId ON TableTwoItemId.PersonId=Person.PersonId
WHERE [Person].PersonId=1
GROUP BY Person.[PersonId]
```
|
Select count from multiple tables
|
[
"",
"sql",
""
] |
I am working with SQLite for the first time and need help with the following problem.
I have two tables:
# test1
```
key   value
a     1
b     2
c     3
e     7
f     41
j     52
```
# test2
```
key   value
a     null
b     null
c     null
d     null
e     null
f     null
g     null
j     null
```
I am trying to update the values of test2: if a key in test2 also exists in test1, set its value from test1; otherwise leave it null.
The expected output is:
# test2
```
key   value
a     1
b     2
c     3
d     null
e     7
f     41
g     null
j     52
```
I try this query
insert into test2([value])
select test1.value
from test1, test2
where test2.value= test1.value;
but it not working
how to solve this?
|
*(Assuming you mean conditional UPDATE, not INSERT)*
Fortuitously, it seems you need to reset `test2` to null where the join fails, so you can do an update with the set specified as a subquery:
```
update test2
set value =
(SELECT t1.value
FROM test1 t1 where t1.key = test2.key
LIMIT 1);
```
[SqlFiddle:](http://sqlfiddle.com/#!7/19653/9)
The `LIMIT` will ensure just one row returned, but if the relationship between test1 and test2 isn't 1:1 you will need to apply logic to determine how to join the two tables.
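The same correlated-subquery update can be checked end to end with Python's `sqlite3` (the column is renamed to `k` here purely to dodge the `KEY` keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test1 (k TEXT, value INT);
CREATE TABLE test2 (k TEXT, value INT);
INSERT INTO test1 VALUES ('a',1),('b',2),('c',3),('e',7),('f',41),('j',52);
INSERT INTO test2 (k) VALUES ('a'),('b'),('c'),('d'),('e'),('f'),('g'),('j');
""")

# Rows with no match in test1 ('d', 'g') get NULL from the subquery,
# which is exactly the desired "else put null" behaviour.
conn.execute("""
UPDATE test2
SET value = (SELECT t1.value FROM test1 t1 WHERE t1.k = test2.k LIMIT 1)
""")

rows = conn.execute("SELECT k, value FROM test2 ORDER BY k").fetchall()
print(rows)
```

The unmatched keys come back as `None`, matching the expected output in the question.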
|
Try this:
```
UPDATE test2
SET value = (SELECT test1.value
             FROM test1
             WHERE test2.key = test1.key);
```
|
conditional UPDATE sqlite statement
|
[
"",
"sql",
"database",
"sqlite",
""
] |
This was an interview question.
If I have a table like this:
```
ID FirstName LastName
-- --------- --------
1 Aaron Aames
2 Malcolm Middle
3 Zamon Zorr
```
How can I get output that looks like this?
```
Aaron Aames
Aames Malcolm
Malcolm Middle
Middle Zamon
Zamon Zorr
```
Note: If you need a specific dialect to do it, use T-SQL.
|
Here is another way using a self-join.
```
CREATE TABLE temp (ID INT IDENTITY, FirstName VARCHAR(25), LastName VARCHAR(25));
INSERT INTO temp VALUES
(N'Aaron', N'Aames'),
(N'Malcolm', N'Middle'),
(N'Zamon', N'Zorr');
WITH names(ID, Name, ColNum) AS(
SELECT
ID, FirstName, 1
FROM temp
UNION ALL
SELECT
ID, LastName, 2
FROM temp
),
numbered AS(
SELECT
rn = ROW_NUMBER() OVER(ORDER BY ID, ColNum),
Name
FROM names
)
SELECT
n.Name AS Name1, n2.Name AS Name2
FROM numbered n
INNER JOIN numbered n2
ON n.rn = n2.rn - 1
DROP TABLE temp
```
<http://sqlfiddle.com/#!3/d91c4/2>
|
You have really high reputation, so this isn't just a "they asked me at an interview" kind of question.
There are several approaches. I think the one that I would take is a `union all`. Recognize that every other row is from the table. The rest are from joining one row to the next. So, that suggests:
```
select firstname, lastname
from likethis t
union all
select t.lastname, lead(t.firstname) over (order by id)
from likethis t
```
Alas, this gives you six rows instead of five, so that last one needs to be filtered out:
```
select firstname, lastname
from (select id, 0 as seq, firstname, lastname
      from likethis t
      union all
      select id, 1, t.lastname, lead(t.firstname) over (order by id)
      from likethis t
     ) t
where lastname is not null
order by id, seq;
```
Note: ordering by `id` with the original row first (`seq = 0`) reproduces the interleaving in the sample output; ordering by `firstname` alone would not, since 'Aames' sorts before 'Aaron'.
Second note: I'm guessing this is not the solution they have in mind. They probably are looking for a self-join. But why bother when `lead()` does the work for you.
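For what it's worth, the `lead()` approach ports directly to any engine with window functions (SQLite 3.25+ included). A runnable sketch with Python's `sqlite3`, ordering by `id` plus a sequence flag so each original row is immediately followed by its "slid" row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE likethis (id INT, firstname TEXT, lastname TEXT);
INSERT INTO likethis VALUES
  (1, 'Aaron', 'Aames'), (2, 'Malcolm', 'Middle'), (3, 'Zamon', 'Zorr');
""")

# seq = 0 for the original row, 1 for the shifted row; LEAD() is NULL on the
# last row, and that extra sixth row is filtered out by the WHERE clause.
rows = conn.execute("""
SELECT firstname, lastname
FROM (SELECT id, 0 AS seq, firstname, lastname
      FROM likethis
      UNION ALL
      SELECT id, 1, lastname, LEAD(firstname) OVER (ORDER BY id)
      FROM likethis) t
WHERE lastname IS NOT NULL
ORDER BY id, seq
""").fetchall()

for first, last in rows:
    print(first, last)
```

This prints the five interleaved rows exactly as in the question's expected output.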
|
How do you make a column value slide to the right or the left using SQL?
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to create SQL trigger which adds a new record to the same table where an insertion is made through a web page. I am not exactly sure how to implement it but I tried the following query
```
CREATE trigger [dbo].[trgI_DealsDoneInserRecord]
on [dbo].[Terms]
after insert
As
Insert into DealsDone
(Company,Grade,Term,Pipeline,[Index],Volume,Price,[Type],CounterParty,
TermID,GradeID,CPID,Locked,Product)
VALUES
(SELECT Company,Grade,Term,Pipeline,[Index],Volume,Price,[Type],CounterParty,
TermID,GradeID,CPID,Locked,Product FROM inserted)
END
```
The above query threw an error in the `SELECT` statement in `VALUES`.
**May I know a way to implement this?**
|
Try this:
```
CREATE trigger [dbo].[trgI_DealsDoneInserRecord]
ON [dbo].[Terms]
AFTER INSERT
As
BEGIN
INSERT INTO DealsDone
(Company,Grade,Term,Pipeline,[Index],Volume,Price,[Type],CounterParty,
TermID,GradeID,CPID,Locked,Product)
SELECT Company,Grade,Term,Pipeline,[Index],Volume,Price,[Type],CounterParty,
TermID,GradeID,CPID,Locked,Product FROM inserted
END
```
|
While I generally advocate *against* using `SELECT *`, in this case it seems like a benefit:
* By not specifying the fields you can automatically account for changes in the tables without having to update this trigger if you add or remove or even rename fields.
* This will help you catch errors in schema updates if one of the tables is updated but the other one isn't and the structure is then different. If that happens, the INSERT operation will fail and you don't have to worry about cleaning up bad data.
So use this:
```
CREATE TRIGGER [dbo].[trgI_DealsDoneInserRecord]
ON [dbo].[Terms]
AFTER INSERT
AS
SET NOCOUNT ON;
INSERT INTO [DealsDone]
SELECT *
FROM inserted;
```
|
SQL trigger to add new records to a table with the same structure when an insertion is made
|
[
"",
"sql",
"sql-server",
"t-sql",
"triggers",
"sql-server-2012",
""
] |
I'm new to CodeIgniter development. I have tested a query successfully, but when I try to read the voteid value I get errors. Please help me.
```
$query=$this->db->query("SELECT voteid FROM Questions WHERE questionid = '$questionid'");
echo $query->voteid; //im getting errors here.
```
`var_dump` of `$query`:
```
object(CI_DB_mysql_result)#18 (8) {
["conn_id"]=>
resource(30) of type (mysql link persistent)
["result_id"]=>
resource(39) of type (mysql result)
["result_array"]=>
array(0) {
}
["result_object"]=>
array(0) {
}
["custom_result_object"]=>
array(0) {
}
["current_row"]=>
int(0)
["num_rows"]=>
int(1)
["row_data"]=>
NULL
```
Error
```
Severity: Notice
Message: Undefined property: CI_DB_mysql_result::$voteid
Filename: models/questions_model.php
```
|
```
$query=$this->db->query("SELECT * FROM Questions WHERE questionid = '$questionid'");
echo $query->row()->voteid;
```
Try this; it should work.
|
Your query returns nothing because it's incorrect. Try: "SELECT voteid FROM Questions WHERE questionid = " . $questionid (I assume that $questionid is a variable of integer type)
|
take value after execute query
|
[
"",
"sql",
"codeigniter",
""
] |
**Background**
We recently had an issue with query plans sql server was using on one of our larger tables (around 175,000,000 rows). The column and index structure of the table has not changed for 5+ years.
The table and indexes looks like this:
```
create table responses (
response_uuid uniqueidentifier not null,
session_uuid uniqueidentifier not null,
create_datetime datetime not null,
create_user_uuid uniqueidentifier not null,
update_datetime datetime not null,
update_user_uuid uniqueidentifier not null,
question_id int not null,
response_data varchar(4096) null,
question_type_id varchar(3) not null,
question_length tinyint null,
constraint pk_responses primary key clustered (response_uuid),
constraint idx_responses__session_uuid__question_id unique nonclustered (session_uuid asc, question_id asc) with (fillfactor=80),
constraint fk_responses_sessions__session_uuid foreign key(session_uuid) references dbo.sessions (session_uuid),
constraint fk_responses_users__create_user_uuid foreign key(create_user_uuid) references dbo.users (user_uuid),
constraint fk_responses_users__update_user_uuid foreign key(update_user_uuid) references dbo.users (user_uuid)
)
create nonclustered index idx_responses__session_uuid_fk on responses(session_uuid) with (fillfactor=80)
```
The query that was performing poorly (~2.5 minutes instead of the normal <1 second performance) looks like this:
```
SELECT
[Extent1].[response_uuid] AS [response_uuid],
[Extent1].[session_uuid] AS [session_uuid],
[Extent1].[create_datetime] AS [create_datetime],
[Extent1].[create_user_uuid] AS [create_user_uuid],
[Extent1].[update_datetime] AS [update_datetime],
[Extent1].[update_user_uuid] AS [update_user_uuid],
[Extent1].[question_id] AS [question_id],
[Extent1].[response_data] AS [response_data],
[Extent1].[question_type_id] AS [question_type_id],
[Extent1].[question_length] AS [question_length]
FROM [dbo].[responses] AS [Extent1]
WHERE [Extent1].[session_uuid] = @f6_p__linq__0;
```
(The query is generated by entity framework and executed using sp\_executesql)
The execution plan during the poor performance period looked like this:

Some background on the data- running the query above would never return more than 400 rows. In other words, filtering on session\_uuid really pares down the result set.
Some background on scheduled maintenance- a scheduled job runs on a weekly basis to rebuild the database's statistics and rebuild the table's indexes. The job runs a script that looks like this:
```
alter index all on responses rebuild with (fillfactor=80)
```
---
The resolution for the performance problem was to run the rebuild index script (above) on this table.
Other possibly relevant tidbits of information... The data distribution didn't change at all since the last index rebuild. There are no joins in the query. We're a SAAS shop, we have at 50 - 100 live production databases with exactly the same schema, some with more data, some with less, all with the same queries executing against them spread across a few sql servers.
**Question:**
What could have happened that would make sql server start using this terrible execution plan in this particular database?
Keep in mind the problem was solved by simply rebuilding the indexes on the table.
Maybe a better question is "what are the circumstances where sql server would stop using an index?"
Another way of looking at it is "why would the optimizer not use an index that was rebuilt a few days ago and then start using it again after doing an emergency rebuild of the index once we noticed the bad query plan?"
|
The reason is simple: the optimizer changes its mind on what the best plan is. This can be due to subtle changes in the distribution of the data (or other reasons, such as a type incompatibility in a `join` key). I wish there were a tool that not only gave the execution plan for a query but also showed thresholds for how close you are to another execution plan. Or a tool that would let you stash an execution plan and give an alert if the same query starts using a different plan.
I've asked myself this exact same question on more than one occasion. You have a system that's running nightly, for months on end. It processes lots of data using really complicated queries. Then, one day, you come in in the morning and the job that normally finishes by 11:00 p.m. is still running. Arrrggg!
The solution that we came up with was to use explicit `join` hints for the failed joins. (`option (merge join, hash join)`). We also started saving the execution plans for all our complex queries, so we could compare changes from one night to the next. In the end, this was of more academic interest than practical interest -- when the plans changed, we were already suffering from a bad execution plan.
|
This is one of my most hated issues with SQL - I've had more than one failure due to it. Once, a query that had been working for months went from ~250ms to beyond the timeout threshold, causing a manufacturing system to crash at 3am of course. It took a while to isolate the query, stick it into SSMS and start breaking it into pieces - but everything I did just "worked". In the end I just added the phrase " AND 1=1" to the query, which got things working again for a few weeks - the final patch was to "blind" the optimizer - basically copying all passed parameters into local variables. If the query works off the bat, it seems like it will continue to work.
To me a reasonably simple fix from MS would be: if this query has been profiled already and ran just fine the last time, and the relevant statistics haven't changed significantly (e.g. come up with some factor of various changes in tables or new indexes, etc), and the "optimizer" decides to spice things up with a new execution plan, how about if that new and improved plan takes more than X-multiple of the old plan, I abort and switch back again. I can understand if a table goes from 100 to 100,000,000 rows or if a key index is deleted, but for a stable production environment to have a query jump in duration to between 100x and 1000x slower, it couldn't be that hard to detect this, flag the plan, and go back to the previous one.
|
Why did SQL Server suddenly decide to use such a terrible execution plan?
|
[
"",
"sql",
"sql-server",
"performance",
""
] |
I have the following SQL statement. It joins three tables: `Person`, `Deliverable`, and `DeliverableActions`
```
select
p.first_name, p. last_name, d.title, da.type
from
Deliverable d
right join
Person p on d.person_responsible_id = p.id
right join
DeliverableAction da on da.DeliverableID = d.id
where
d.date_deadline >= @startDate and
d.date_deadline <= @endDate
order by
d.title
```
The result is the following:
```
first_name | last_name | title | type
-----------+-------------+--------------+------
Joe | Kewl | My_Report_1 | 2
Joe | Kewl | My_Report_1 | 3
Joe | Kewl | My_Report_1 | 1
Sly | Foxx | Other_Rep_1 | 1
Sly | Foxx | Other_Rep_1 | 2
```
My goal result is to get the following table:
```
first_name | last_name | title | type_1 | type_2 | type_3 | type_4
-----------+------------+--------------+--------+--------+--------+---------
Joe | Kewl | My_report_1 | 1 | 1 | 1 | 0
Sly | Foxx | Other_Rep_1 | 1 | 1 | 0 | 0
```
Unfortunately I don't know what term to describe what I'm doing. I've searched 'grouping' and 'aggregation', but I'm left without an answer so I am putting it to the community. Thank you in advance for your help.
|
you can use `case based aggregation` or you can also use `pivot`
```
select p.first_name,
p. last_name,
d.title,
sum(case when da.type = 1 then 1 else 0 end) as type_1,
sum(case when da.type = 2 then 1 else 0 end) as type_2,
sum(case when da.type = 3 then 1 else 0 end) as type_3,
       sum(case when da.type = 4 then 1 else 0 end) as type_4
from Deliverable d
right join Person p on d.person_responsible_id = p.id
right join DeliverableAction da on da.DeliverableID = d.id
where d.date_deadline >= @startDate and
d.date_deadline <= @endDate
group by p.first_name, p.last_name, d.title
```
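Conditional aggregation is portable across engines; here is the same pattern run against the sample rows with Python's `sqlite3` (the joins are dropped since only the aggregation step is of interest, and `actions` is a made-up table name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actions (first_name TEXT, title TEXT, type INT);
INSERT INTO actions VALUES
  ('Joe', 'My_Report_1', 2), ('Joe', 'My_Report_1', 3), ('Joe', 'My_Report_1', 1),
  ('Sly', 'Other_Rep_1', 1), ('Sly', 'Other_Rep_1', 2);
""")

# Each SUM(CASE ...) turns one type value into its own output column.
rows = conn.execute("""
SELECT first_name, title,
       SUM(CASE WHEN type = 1 THEN 1 ELSE 0 END) AS type_1,
       SUM(CASE WHEN type = 2 THEN 1 ELSE 0 END) AS type_2,
       SUM(CASE WHEN type = 3 THEN 1 ELSE 0 END) AS type_3,
       SUM(CASE WHEN type = 4 THEN 1 ELSE 0 END) AS type_4
FROM actions
GROUP BY first_name, title
ORDER BY first_name
""").fetchall()

print(rows)
```

The result matches the goal table in the question: one row per person/title with a flag-style count per type.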
|
```
select
first_name, last_name, title,
sum(case when type = 1 then 1 else 0 end) as type_1
from
(
select p.first_name, p. last_name, d.title, da.type from Deliverable d
right join Person p on d.person_responsible_id = p.id
right join DeliverableAction da on da.DeliverableID = d.id
where d.date_deadline >= @startDate and
d.date_deadline <= @endDate
) as a
group by first_name, last_name, title
```
|
SQL Server query to roll up data
|
[
"",
"sql",
"sql-server",
""
] |
Lets say I have a simple query like:
```
SELECT
A
,B
FROM
C
```
I will get this result:
```
A|B
---
x|x
```
Without changing the select clause in the query (SELECT A, B), how can I receive a result as follows:
```
B|A
---
x|x
```
**EDIT**
The reason I ask is because I want to do the following but is improper syntax:
```
SELECT A, DISTINCT B
FROM C
```
I truly want the `DISTINCT B` in the first column because it is easier to both look at and find patterns in the results.
Is there a better approach to take?
|
Your revised query could be written:
```
SELECT DISTINCT A, B
FROM C;
```
or
```
SELECT DISTINCT B, A
FROM C;
```
|
You could use a subquery:
```
SELECT A, B FROM (SELECT B AS A, A AS B FROM C) AS sq;
```
Obviously the column names won't match but the column content would be swapped.
|
How To Switch Columns In SQL Results
|
[
"",
"sql",
"sql-server",
""
] |
Hello I'm using MS JET OLEDB 4.0 on VBA (MS ACCESS)
```
SQL = "SELECT COUNT(Date) FROM [Orders] WHERE [Orders].[Date] BETWEEN " & "#" & StartDate & "#" & " AND " & "#" & EndDate & "#"
```
The [Date] column is formatted like dd/mm/yyyy (French format), and StartDate and EndDate are formatted like mm/dd/yyyy (two Date objects).
My belief is that BETWEEN only compares dates formatted mm/dd/yyyy, so is there a way, such as a function, to convert the [Date] formatting to mm/dd/yyyy so the BETWEEN comparison works correctly?
Edit: Using a string instead of the dates like follow:
```
StartDateFormatted = Format(StartDate,"dd/mm/yyyy")
EndDateFormatted = Format(EndDate,"dd/mm/yyyy")
```
So as to be sure of the startdate and enddate format. It doesn't work still.
I'm left to assume two things:
* Either BETWEEN onlys compares mm/dd/yyyy formats
* I have to use a function to format [Date] to mm/dd/yyyy
Thanks to @Maciej Los for answering my question.
|
Use [Format function](http://www.techonthenet.com/access/functions/date/format.php).
```
SQL = "SELECT COUNT([Date])" & vbCr & _
      "FROM [Orders]" & vbCr & _
      "WHERE [Orders].[Date] BETWEEN #" & Format(StartDate, "mm/dd/yyyy") & "# AND #" & Format(EndDate, "mm/dd/yyyy") & "#"
```
|
Always use ISO date formatting (YYYY-MM-DD) and you will have no problems - no Format call needed.
|
How to format date in a SQL query?
|
[
"",
"sql",
"vba",
""
] |
I have two tables;
`dbo.Users`:
```
user_id user_name gender has_phone
--------------------------------------
1 admin Male true
2 UserA Male false
3 UserB Female true
```
and `dbo.Filterby`:
```
id name filterid filtervalue
---------------------------------------
1 All NULL -1
2 Males gender Male
3 Females gender Female
4 Phone has_phone true
5 NoPhone has_phone false
```
I need to select all patients based on the result of the query from the filterby, so for example I want all males returned:
```
SELECT
[user_id], [user_name]
FROM
dbo.Users
WHERE
(SELECT filterid FROM dbo.Filterby WHERE id = 2) = (SELECT filtervalue FROM dbo.Filterby WHERE id = 2)
```
This should be more generic, so I can pass in any Filterby.id and it will return the correct results. But basically, I really just need to know how to select using a result from another query as the column identifier.
How do I do this?
Thanks.
|
You could do this by using `CROSS JOIN`
```
SELECT
user_id, user_name, gender, has_phone
FROM (
SELECT
u.user_id,u.user_name, u.gender,u.has_phone,
CASE ISNULL(f.filterid,'')
WHEN '' THEN '-1'
WHEN 'gender' THEN u.gender
WHEN 'has_phone' THEN u.has_phone
END AS val,
f.filtervalue
FROM users AS u
CROSS JOIN Filterby AS f
WHERE f.id=4 -- here id of filter
) AS m
WHERE
val=filtervalue
```
If you need to filter by two or more fields, take for example this filter table:
```
id name filterid filtervalue
---------------------------------------
1 All NULL -1
2 Males gender Male
3 Females gender Female
4 PhoneMales has_phone true
4 PhoneMales gender Male
5 NoPhone has_phone false
```
Then you could use an enhanced version of the query:
```
SELECT
m2.user_id, u2.user_name, u2.gender, u2.has_phone
FROM (
SELECT user_id
FROM (
SELECT
u.user_id,
CASE ISNULL(f.filterid,'')
WHEN '' THEN '-1'
WHEN 'gender' THEN u.gender
WHEN 'has_phone' THEN u.has_phone
END AS val,
f.filtervalue
FROM users AS u
CROSS JOIN Filterby AS f
WHERE f.id=4 -- here id of filter
) AS m
WHERE val=filtervalue
GROUP BY user_id
HAVING COUNT(*)=(
SELECT COUNT(*) FROM Filterby WHERE id=4 -- here must be the same filter id as above
)
) AS m2
INNER JOIN users AS u2
ON m2.user_id=u2.user_id
```
|
In my experience using dynamic SQL is going to be the simplest way to do this. Take a look at the example below. Basically I think you would want to split this up using a variable and then some dynamic SQL.
```
DECLARE @Table_Name VARCHAR(1000) /*Would actually be an input parameter in
our SP.*/
DECLARE @Statement VARCHAR(1000) /*Declaration of our SQL statement to
execute.*/
SET @Table_Name = 'ErrorLog' /*The value for our input parameter*/
SET @Statement = 'SELECT * FROM ' + '[dbo].[' + @Table_Name + ']';
/*Dynamically creating the SQL statement with our table_name variable.*/
EXEC (@Statement); /*We execute the @statement to get our results.*/
```
The above takes a declared variable and then executes a statement based on that. Depending on your needs you could just also use a variable. You might also look at doing this in a stored procedure where your passing in your filtervalue.
|
SELECT WHERE (column name is a result of another query)
|
[
"",
"sql",
"sql-server",
""
] |
What I want to do is have a SQL join inside a table.
Here is what it looks like:
### pie table
```
Id type message
1 2 'hello'
2 0 'bye'
3 2 'meh'
```
### cake table
```
Id data pieId
1 70 1@2
2 6 2
3 8 1@3@2
```
That is what the two tables look like, and what I want to do is be able to select the cake table where `data = 70` and have it return the row 1 of cake table, and return rows 1 and 2 of pie table.
I don't really care that it returns them as a separate table, row, join, or column.
What I really want is to be able to ask the database for `data = 8` and have it return `1`, `2`, and `3` from the pie table. I don't really care how it works, so I'm open to any suggestions.
So I want to be able to query the cake and get a dynamic number of pies. Even the table structure can change if you can think of some way of doing this.
|
Looks like you need to update your database design.
I would get rid of the pieId column in the cake table and create a relationship table. Let's call it CakePies, with Id, PieId and CakeId columns. Add each cake/pie relationship to that table.
The query would be
```
Select *
from Cake c
Inner join CakePies cp on c.Id = cp.CakeId
inner join Pies p on p.Id = cp.PieId
where Data = 70 -- or whatever filter you want
```
**Update**
Your CakePies table should look like the following:
```
CakeID PieId
1 1
1 2
2 2
3 1
3 2
3 3
```
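With the `pieId` string replaced by that junction table, the whole "dynamic amount of pies" requirement becomes a plain join. A quick check with Python's `sqlite3` using the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pie  (id INT, type INT, message TEXT);
CREATE TABLE cake (id INT, data INT);
CREATE TABLE cakepies (cakeid INT, pieid INT);
INSERT INTO pie VALUES (1, 2, 'hello'), (2, 0, 'bye'), (3, 2, 'meh');
INSERT INTO cake VALUES (1, 70), (2, 6), (3, 8);
-- Same relationships as the '1@2' / '2' / '1@3@2' strings, one row each
INSERT INTO cakepies VALUES (1,1),(1,2),(2,2),(3,1),(3,2),(3,3);
""")

def pies_for(data):
    """Return the pie messages linked to the cake(s) with this data value."""
    return [r[0] for r in conn.execute("""
        SELECT p.message
        FROM cake c
        JOIN cakepies cp ON cp.cakeid = c.id
        JOIN pie p       ON p.id = cp.pieid
        WHERE c.data = ?
        ORDER BY p.id
    """, (data,))]

print(pies_for(70))  # ['hello', 'bye']
print(pies_for(8))   # ['hello', 'bye', 'meh']
```

No string splitting anywhere: each cake can have any number of pies just by adding CakePies rows.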
|
Take a look at this example for fnSplitString:
<http://www.sqlservercentral.com/blogs/querying-microsoft-sql-server/2013/09/19/how-to-split-a-string-by-delimited-char-in-sql-server/>
From there, try:
```
SELECT c.id, c.data, (SELECT message FROM pie WHERE pieid IN (SELECT * FROM fnSplitString(c.pieid, '@'))) AS 'message'
FROM cake c
WHERE c.data = 8
```
Which is a bit goofy - you're definitely better off having a relational structure for the pieid column: For example:
```
cake table
Id data pieId
1 70 1
1 70 2
...
```
|
SQL statement inside a table
|
[
"",
"sql",
"sql-server",
"select",
"join",
"sql-like",
""
] |
I have multiple tables in my database with different names but all have the same structure. Is there a way that I can change the structure to add a column to every table?
|
As David Faber suggested, you can accomplish this only in a loop.
I have provided the code sample below to accomplish this.
```
private static final String ALTER_USER_TABLE_ColumnDetails =
" ADD NEW_COLUMN_NAME TEXT";
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion)
{
// strTableList - Array of Table Names
for(int i=0;i<strTableList.length;i++)
{
db.execSQL("ALTER TABLE "+strTableList[i]+ALTER_USER_TABLE_ColumnDetails);
}
}
```
Hope this is of some help.
|
**SQL inheritance.**
(works in PostgreSQL at least)
If all the tables inherit from a base table, adding a column to the base table will add it to all the others (after acquiring a lock on every table in the database).
If your DBMS doesn't do this sort of SQL inheritance, then I think the answer is "no":
get a list of tables and write a looping script (or use find and replace).
|
how to change structure of multiple tables at once in sql
|
[
"",
"sql",
""
] |
Is there a simple way to compare a list of numbers in my query to a column in a table to return the ones that are NOT in the db?
I have a comma separated list of numbers (1,57, 888, 99, 76, 490, etc etc) that I need to compare to the number column in a table in my DB. SOME of those numbers are in the table, some are not. I need the query to return those that are in my comma separated list, but are NOT in the DB...
|
I would put the list of numbers to be checked in a table of their own, then use `WHERE NOT EXISTS` to check whether they exist in the table to be queried. See this [SQLFiddle demo](http://sqlfiddle.com/#!2/645c5/4) for an example of how this might be accomplished:
If you're comfortable with this syntax, you can even avoid putting the numbers into a temp table:
```
SELECT * FROM (
SELECT 1 AS mycolumn
UNION
SELECT 2
UNION
SELECT 3
UNION
SELECT 4
UNION
SELECT 5
UNION
SELECT 6
UNION
SELECT 7
) a
WHERE NOT EXISTS ( SELECT 1 FROM mytable b
WHERE b.mycolumn = a.mycolumn )
```
**UPDATE per comments from OP**
If you can insert your very long list of numbers into a table, then query as follows to get the numbers that are not found in the other table:
```
SELECT mynumber
FROM mytableof37000numbers a
WHERE NOT EXISTS ( SELECT 1 FROM myothertable b
WHERE b.othernumber = a.mynumber)
```
Alternately
```
SELECT mynumber
FROM mytableof37000numbers a
WHERE a.mynumber NOT IN ( SELECT b.othernumber FROM myothertable b )
```
Hope this helps.
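The same pattern, end to end, with Python's `sqlite3` - load the comma-separated list into its own table, then `NOT EXISTS` returns exactly the numbers missing from the main table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (n INT)")
conn.execute("CREATE TABLE checklist (n INT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [(1,), (57,), (888,)])

# Split the CSV list client-side and bulk-insert it into the lookup table
csv_list = "1,57,888,99,76,490"
conn.executemany("INSERT INTO checklist VALUES (?)",
                 [(int(x),) for x in csv_list.split(",")])

missing = [r[0] for r in conn.execute("""
    SELECT a.n FROM checklist a
    WHERE NOT EXISTS (SELECT 1 FROM mytable b WHERE b.n = a.n)
    ORDER BY a.n
""")]
print(missing)  # [76, 99, 490]
```

Only the numbers from the list that are absent from `mytable` come back, which is what the question asks for.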
|
Maybe this is what you are looking for.
Convert your `CSV` to rows using `SUBSTRING_INDEX`. Use the `NOT IN` operator to find the values which are not present in the `DB`.
Then convert the result back to `CSV` using `GROUP_CONCAT`.
```
select group_concat(value) from(
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(t.a, ',', n.n), ',', -1) value
FROM csv t CROSS JOIN
(
SELECT a.N + b.N * 10 + 1 n
FROM
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
,(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b
ORDER BY n
) n
WHERE n.n <= 1 + (LENGTH(t.a) - LENGTH(REPLACE(t.a, ',', '')))) ou
where value not in (select a from db)
```
## [SQLFIDDLE DEMO](http://sqlfiddle.com/#!2/07405/3)
`CSV TO ROWS` referred from this [**ANSWER**](https://stackoverflow.com/questions/19073500/sql-split-comma-separated-row)
|
mysql find numbers in query that are NOT in table
|
[
"",
"mysql",
"sql",
""
] |
Currently my triggers for each table are like this for every field in a table:
```
ALTER TRIGGER [dbo].[trg_Statement] ON [dbo].[tbl_Statement]
FOR INSERT, UPDATE, DELETE
AS
BEGIN
SET NOCOUNT ON
INSERT INTO tbl_ChangeLog(TableName, ID, FieldName, OldValue, NewValue)
SELECT
'Statement', CU.id, 'id', deleted.id,inserted.id
FROM
tbl_Statement CU
LEFT JOIN
inserted on CU.id = inserted.id
LEFT JOIN
deleted on CU.id = deleted.id
WHERE
(inserted.id is not null or deleted.id is not null)
AND IsNull(inserted.id,'') <> IsNull(deleted.id,'')
INSERT INTO tbl_ChangeLog(TableName, ID, FieldName, OldValue, NewValue)
SELECT
'Statement', CU.id, 'idAccount', deleted.idAccount,inserted.idAccount
FROM
tbl_Statement CU
LEFT JOIN
inserted on CU.id = inserted.id
LEFT JOIN
deleted on CU.id = deleted.id
WHERE
(inserted.id is not null or deleted.id is not null)
AND IsNull(inserted.idAccount,'') <> IsNull(deleted.idAccount,'')
INSERT INTO tbl_ChangeLog(TableName, ID, FieldName, OldValue, NewValue)
SELECT
'Statement', CU.id, 'OpeningBalance', deleted.OpeningBalance,inserted.OpeningBalance
FROM
tbl_Statement CU
LEFT JOIN
inserted on CU.id = inserted.id
LEFT JOIN
deleted on CU.id = deleted.id
WHERE
(inserted.id is not null or deleted.id is not null)
AND IsNull(inserted.OpeningBalance,'') <> IsNull(deleted.OpeningBalance,'')
...
```
This however is very costly, especially when there are a lot of fields. Can anyone propose a more efficient way to track changes in our changelog?
|
I agree with Madison, a single row in you AUDIT table that contains the entire record is going to be more useful and more performant. This becomes more true the wider the base table is. In your case:
```
CREATE TRIGGER IUD_Statement_Audit
ON dbo.tbl_Statement AFTER INSERT, UPDATE, DELETE
AS BEGIN
IF (@@rowCount = 0) RETURN;
SET NOCOUNT ON;
INSERT INTO dbo.tbl_Statement_AUDIT
(id, idAccount, OpeningBalance, insertedOrDeleted, modTime, modId)
SELECT id, idAccount, OpeningBalance, 'D', GETDATE(), USER_NAME() FROM DELETED
UNION ALL
SELECT id, idAccount, OpeningBalance, 'I', GETDATE(), USER_NAME() FROM Inserted
END
```
There are many benefits to this approach, including: an easy join on the id column to see all changes, and triggers for all tables become copy/paste exercises. In general an RDBMS will perform better with a wide table (one record containing all columns) and fewer records, as opposed to a narrow table (one record for each column change). Your current approach creates one record per column for every row that is inserted, deleted, or updated.
Rolling back to a past point can be done by ordering the audit table by `modTime desc, insertedOrDeleted desc` and performing the inverse operation... going back as far as you want.
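The same wide-row audit idea can be tried out in SQLite via Python. Note that SQLite has no set-based `inserted`/`deleted` pseudo-tables, only per-row `NEW`/`OLD`, so an UPDATE is logged here as a 'D' plus an 'I' row by the update trigger (table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stmt (id INT, balance INT);
CREATE TABLE stmt_audit (id INT, balance INT, op TEXT);

CREATE TRIGGER stmt_ai AFTER INSERT ON stmt BEGIN
  INSERT INTO stmt_audit VALUES (NEW.id, NEW.balance, 'I');
END;
CREATE TRIGGER stmt_ad AFTER DELETE ON stmt BEGIN
  INSERT INTO stmt_audit VALUES (OLD.id, OLD.balance, 'D');
END;
CREATE TRIGGER stmt_au AFTER UPDATE ON stmt BEGIN
  INSERT INTO stmt_audit VALUES (OLD.id, OLD.balance, 'D');
  INSERT INTO stmt_audit VALUES (NEW.id, NEW.balance, 'I');
END;
""")

conn.execute("INSERT INTO stmt VALUES (1, 100)")
conn.execute("UPDATE stmt SET balance = 150 WHERE id = 1")

audit = conn.execute(
    "SELECT id, balance, op FROM stmt_audit ORDER BY rowid").fetchall()
print(audit)  # [(1, 100, 'I'), (1, 100, 'D'), (1, 150, 'I')]
```

Replaying the audit rows in reverse order is exactly the "rolling back to a past point" operation described above.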
|
I sometimes create an identical table to the table I want to log except I add a new identity primary key and a datetime field to track when the change occurred. Then whenever the original table changes, I insert into the tracking table the entire row that changed plus the current datetime.
The pros to this are it is easy to implement and allows rolling back to a previous set of values if needed. It also allows easy joining to current rows if you want to see total history (remember to index if you do this a lot). Also, in your case it would just be a single insert with no logic other than grabbing the correct row. The downside is it stores all the fields every time anything changes and you have to maintain two tables.
You could obviously reduce the fields you didn't want to track or modify this in other ways more custom to your needs. For example, you might want to keep an extra field in the tracking table if the row gets deleted.
In order to see the old value and new value, you'd need to just look backwards in the table by the date to see when it changed.
|
More efficient way of logging changes in my table trigger
|
[
"",
"sql",
"sql-server",
"database",
"performance",
"t-sql",
""
] |
I have records from a specific table which contains fields: `phone1`, `phone2` .
How can I change the value `057` in these fields to `053`, but only at the beginning of the value?
*For example:* `057-4353009` should change to `053-4353009`, and `057-0575731` should change to `053-0575731` (*the second `057` in this number shouldn't change*).
|
You have to implement two parts:
1. Checking if the phone number starts with the specific sequence
(`WHERE phone LIKE '057%'`)
2. Getting the part after `057` and concatenating it with the new one (`'053' + RIGHT(phone, LEN(phone) - 3)`)
Here is the example query to do this:
```
UPDATE
tbl
SET
phone = '053' + RIGHT(phone, LEN(phone) - 3) -- Leaving the first 3 characters and use another one
WHERE
phone LIKE '057%' -- Starts with 057
```
The general solution is something like this:
```
DECLARE
@SearchString NVARCHAR(MAX) = '057'
, @ReplaceString NVARCHAR(MAX) = '053'
UPDATE
tbl
SET
phone = @ReplaceString + RIGHT(phone, LEN(phone) - LEN(@SearchString))
WHERE
phone LIKE (@SearchString + '%')
```
|
You can use [**REPLACE**](http://msdn.microsoft.com/ru-ru/library/ms186862.aspx) :
```
UPDATE tbl
SET phone = REPLACE(phone, '057-', '053-')
```
### [SQLFiddle](http://sqlfiddle.com/#!3/9201bb/1)
*Edit: in case you are not sure whether the number has a structure with a delimiter, like **xxx-xxxxxxx**:*
```
UPDATE tbl
SET phone = '053' + SUBSTRING(phone, 4 , LEN(phone) - 1)
WHERE LEFT(phone, 3) = '057';
```
### [SQLFiddle](http://sqlfiddle.com/#!3/7b0c8/3)
|
How to change only the beginning of value that matches condition?
|
[
"",
"sql",
"sql-server",
"replace",
""
] |
I have this query
```
SELECT Client.ClientNo,
Client.ContactName,
Deal.Currency,
MAX(Deal.DealDate)
FROM Deal
JOIN Client ON Deal.ClientNo = Client.ClientNo
GROUP BY Client.ClientNo, Client.ContactName, Deal.Currency;
```
which gives me a result
```
1 John Smith EUR 2014-10-07
1 John Smith GBP 2014-11-12
2 Jane Doe GBP 2014-09-17
2 Jane Doe USD 2014-12-23
1 John Smith USD 2013-11-13
2 Jane Doe EUR 2012-09-06
```
Problem is, I need an aggregated result with the latest date per client, like this:
```
1 John Smith GBP 2014-11-12
2 Jane Doe USD 2014-12-23
```
How can I change my query to achieve this?
**UPDATE** Thanks to jarlh for the answer, however I have missed something - if there is a duplicate row - it will remain in the result, looking like this:
```
1 John Smith GBP 2014-11-12
1 John Smith GBP 2014-11-12
2 Jane Doe USD 2014-12-23
```
Any way to make that work?
|
Untested, but should work. It will return several rows for a client if the client has two (or more) deals on the same, latest day.
```
SELECT Client.ClientNo,
Client.ContactName,
Deal.Currency,
Deal.DealDate
FROM Deal
JOIN Client ON Deal.ClientNo = Client.ClientNo
WHERE Deal.DealDate = (select max(DealDate) from Deal
where ClientNo = Client.ClientNo)
```
|
You could do something like this:
**Test data:**
```
DECLARE @Deal TABLE(ClientNo INT,Currency VARCHAR(10),DealDate DATETIME)
DECLARE @Client TABLE(ClientNo INT,ContactName VARCHAR(100))
INSERT INTO @Deal
VALUES (1,'EUR','2014-10-07'),(1,'GBP','2014-11-12'),(2,'GBP','2014-09-17'),
(2,'USD','2014-12-23'),(1,'USD','2013-11-13'),(2,'EUR','2012-09-06')
INSERT INTO @Client
VALUES (1,'John Smith'),(2,'Jane Doe')
```
**Query:**
```
;WITH latestDeals
AS
(
SELECT
ROW_NUMBER() OVER(PARTITION BY ClientNo ORDER BY DealDate DESC) AS RowNbr,
Deal.*
FROM
@Deal AS Deal
)
SELECT
client.ClientNo,
client.ContactName,
latestDeals.Currency,
latestDeals.DealDate
FROM
@Client AS client
JOIN latestDeals
ON client.ClientNo=latestDeals.ClientNo
AND latestDeals.RowNbr=1
```
**Update:**
If you want to use conventional query. You could do something like this:
```
SELECT
client.ClientNo,
client.ContactName,
Latestdeal.maxDealDate as DealDate,
deal.Currency
FROM
@Client AS client
JOIN
(
SELECT
MAX(Deal.DealDate) AS maxDealDate,
Deal.ClientNo
FROM
@Deal AS Deal
GROUP BY
Deal.ClientNo
) AS Latestdeal
ON client.ClientNo=Latestdeal.ClientNo
JOIN @Deal as deal
ON client.ClientNo=deal.ClientNo
AND deal.DealDate=Latestdeal.maxDealDate
```
This will result in the same output
**Result:**
```
1 John Smith GBP 2014-11-12 00:00:00.000
2 Jane Doe USD 2014-12-23 00:00:00.000
```
|
SQL Server: aggregate to single result
|
[
"",
"sql",
"sql-server",
"greatest-n-per-group",
""
] |
I want to get the biggest ZIP code in DB. Normally I do this
```
SELECT *
FROM (
Select * From tbuser ORDER BY zip DESC
)
WHERE rownum = 1
```
with this code I can get the biggest zip code value without a duplicate row (since zip code is not a primary key).
But the main company in Japan said that I can't use it, since when the connection is slow or the DB has very large data, you can't get the right row. It would be a great help if someone could help me.
|
> I want to get the biggest ZIP code in DB.
If you really *only* want the zip code, try that:
```
SELECT MAX(zip) FROM TBUSER;
```
This will use the index on the `zip` column (if it exists).
That being said, Oracle is usually smart enough to properly optimize sub-query selection using `ROWNUM`. Maybe your main company is more concerned about the possible full-table `ORDER BY` in the subquery? On the other hand, if the issue is really with a "slow network", it may be worth taking some time with your DBA to look on the wire, using a network analyzer or some other tool, to see whether your approach really leads to "excessive bandwidth consumption". I sincerely doubt it...
---
If you want to retrieve the whole row having the maximum zip code here is a slight variation on [an other answer](https://stackoverflow.com/a/27820362/2363712) (in my opinion, this is one of the rare case for using a `NATURAL JOIN`):
```
select * from t
natural join (select max(zip) zip from t);
```
Of course, in case of duplicates, this will return multiple rows. You will have to combine that with one of the several options posted in the various other answers to return only 1 row.
As an extra solution, and since you are not allowed to use `ROWNUM` (and assuming `row_number` is arbitrarily forbidden too), you can achieve the desired result using something as contrived as:
```
select * from t
where rowid = (
select min(t.rowid) rid from t
natural join (select max(zip) zip from t)
);
```
See <http://sqlfiddle.com/#!4/3bd63/5>
But honestly, there isn't any serious reason to hope that such query will perform better than the simple `... ORDER BY something DESC) WHERE rownum <= 1` query.
|
This sounds to me like bad advice (masquerading as a rule) from a newbie data base administrator who doesn't understand what he's looking at. That insight isn't going to help you, though. Rarely does a conversation starting with "you're an obstructionist incompetent" achieve anything.
So, here's the thing. First of all, you need to make sure there's an *index* on your `zip` column. It doesn't have to be a primary key.
Second, you can try explaining that Oracle's table servers do, in fact, optimize the `... ORDER BY something DESC) WHERE rownum <= 1` style of query. Their servers do a good job of that. Your use case is very common.
But if that doesn't work on your DBA, try saying "I heard you" and do this.
```
SELECT * FROM (
SELECT a.*
      FROM ( SELECT MAX(zip) zip FROM tbuser ) b
      JOIN tbuser a ON (a.zip = b.zip)
) WHERE rownum <= 1
```
This will get one row with the highest numbered `zip` value without the `ORDER BY` that your DBA mistakenly believes is messing up his server's RAM pool. And, it's reasonably efficient. As long as `zip` has an index.
|
Selecting the biggest ZIP code from a column
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
I have 2 table in my database that tables are in relationship with foreign key
I want to select all records from the main table, and then, for each row, the count of rows in another table that have the same ID as the main table. I tried to create a select query, but it does not work correctly:
this query returns all records from the main table plus the count of all records from the second table (not the count per row in the relationship).
```
SELECT tblForumSubGroups_1.id, tblForumSubGroups_1.GroupID,
tblForumSubGroups_1.SubGroupTitle, tblForumSubGroups_1.SubGroupDesc,
(SELECT COUNT(dbo.tblForumPosts.id) AS Expr1
FROM dbo.tblForumSubGroups INNER JOIN dbo.tblForumPosts ON
dbo.tblForumSubGroups.id = dbo.tblForumPosts.SubGroupID) AS Expr1
FROM dbo.tblForumSubGroups AS tblForumSubGroups_1 INNER JOIN
dbo.tblForumPosts AS tblForumPosts_1 ON tblForumSubGroups_1.id
= tblForumPosts_1.SubGroupID
```
|
```
SELECT tblForumSubGroups_1.id, tblForumSubGroups_1.GroupID, tblForumSubGroups_1.SubGroupTitle, tblForumSubGroups_1.SubGroupDesc,
COUNT(tblForumPosts_1.id) AS Expr1
FROM dbo.tblForumSubGroups AS tblForumSubGroups_1
INNER JOIN dbo.tblForumPosts AS tblForumPosts_1 ON tblForumSubGroups_1.id = tblForumPosts_1.SubGroupID
GROUP BY tblForumSubGroups_1.id, tblForumSubGroups_1.GroupID, tblForumSubGroups_1.SubGroupTitle, tblForumSubGroups_1.SubGroupDesc
```
|
Do not mix sub-query and join logic. Use only one of them. I prefer sub-select.
```
SELECT tblForumSubGroups_1.id,
tblForumSubGroups_1.GroupID,
tblForumSubGroups_1.SubGroupTitle,
tblForumSubGroups_1.SubGroupDesc,
(SELECT COUNT(*)
FROM dbo.tblForumPosts
        WHERE tblForumSubGroups_1.id = dbo.tblForumPosts.SubGroupID) AS Expr1
FROM dbo.tblForumSubGroups AS tblForumSubGroups_1
```
|
selecting records from main table and count of each row in another table
|
[
"",
"sql",
"sql-server",
""
] |
One `Quiz` can have many `Submissions`. I want to fetch all `Quizzes` that have at least one associated `Submission` with `submissions.correct = t` and at least one associated `Submission` with `submissions.correct = f`.
How can I fix the following query and the WHERE statement in particular to make that happen:
```
SELECT quizzes.*,
Count(submissions.id) AS submissions_count
FROM "quizzes"
INNER JOIN "submissions"
ON "submissions"."quiz_id" = "quizzes"."id"
WHERE ( submissions.correct = 'f' )
AND ( submissions.correct = 't' )
GROUP BY quizzes.id
ORDER BY submissions_count ASC
```
**Update:**
Here is the missing information:
I need all row data from Quizzes. I only need the count for ordering within the query (the quizzes with the least amount of submissions first).
```
k-voc_development=# \d quizzes;
Table "public.quizzes"
Column | Type | Modifiers
------------+-----------------------------+------------------------------------------------------
id | integer | not null default nextval('quizzes_id_seq'::regclass)
question | character varying | not null
created_at | timestamp without time zone | not null
updated_at | timestamp without time zone | not null
Indexes:
"quizzes_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "submissions" CONSTRAINT "fk_rails_04e433a811" FOREIGN KEY (quiz_id) REFERENCES quizzes(id)
TABLE "answers" CONSTRAINT "fk_rails_431b8a33a3" FOREIGN KEY (quiz_id) REFERENCES quizzes(id)
k-voc_development=# \d submissions;
Table "public.submissions"
Column | Type | Modifiers
------------+-----------------------------+----------------------------------------------------------
id | integer | not null default nextval('submissions_id_seq'::regclass)
quiz_id | integer | not null
correct | boolean | not null
created_at | timestamp without time zone | not null
updated_at | timestamp without time zone | not null
Indexes:
"submissions_pkey" PRIMARY KEY, btree (id)
"index_submissions_on_quiz_id" btree (quiz_id)
Foreign-key constraints:
"fk_rails_04e433a811" FOREIGN KEY (quiz_id) REFERENCES quizzes(id)
k-voc_development=#
```
|
The best solution depends on the details of your implementation, data distribution and requirements.
**If** you have a typical installation with referential integrity (FK constraint) and defined `submissions.correct` as `boolean NOT NULL` and just want the `quiz_id` along with a count of total submissions, then you don't need to join in `quizzes` at all and this should be fastest:
```
SELECT quiz_id, count(*) AS ct
FROM submissions
-- WHERE correct IS NOT NULL -- only relevant if correct can be NULL
GROUP BY 1
HAVING bool_or(correct)
AND bool_or(NOT correct);
```
The dedicated [aggregate function `bool_or()`](http://www.postgresql.org/docs/current/interactive/functions-aggregate.html#FUNCTIONS-AGGREGATE-TABLE) is particularly useful for your test with boolean values. Simpler and faster than a `CASE` expression or similar constructs.
There are *many* other techniques, the best solution depends on the missing information.
### For your updated requirements
> I need all row data from `quizzes`. I only need the count for ordering
> within the query (the quizzes with the least amount of submissions first).
This should be fastest if **lots** of quizzes qualify (high percentage of total).
```
SELECT q.*
FROM (
SELECT quiz_id, count(*) AS ct
FROM submissions
GROUP BY 1
HAVING count(*) > count(correct OR NULL)
) s
JOIN quizzes q ON q.id = s.quiz_id
ORDER BY s.ct;
```
`count(*) > count(correct OR NULL)` works because `correct` is `boolean NOT NULL`. Should be slightly faster than the variant above for *few* submissions per quiz.
|
```
-- I want to fetch all Quizzes
SELECT * FROM quizzes q
WHERE EXISTS ( -- that have at least one associated Submission with submissions.correct = t
SELECT * FROM submissions s
WHERE s.quiz_id = q.id AND s.correct = 't'
)
AND EXISTS ( -- and at least one associated Submission with submissions.correct = f.
SELECT * FROM submissions s
WHERE s.quiz_id = q.id AND s.correct = 'f'
);
```
|
Postgres: Inner Join with AND condition on same field
|
[
"",
"sql",
"postgresql",
""
] |
I've been working on a school management project. A teacher can have multiple subjects. I need to display only one subject per teacher in the query, but I get all the subjects that a teacher has. For example, I need only one record for Teacher 2 (TeacherId = 2 has two subjects, English & Math, in the query's results); either English or Math will do.
Query in Text:
```
SELECT dbo.Teacher.TeacherId, dbo.Teacher.TeacherName, dbo.Gender.Gender, dbo.Teacher.DOB, ISNULL(dbo.Subject.Subject, 'No Subject') AS Subject,
dbo.Teacher.Contact, dbo.Teacher.Address, dbo.Teacher.Email, dbo.Teacher.Photo
FROM dbo.Subject INNER JOIN
dbo.TeacherSubject ON dbo.Subject.SubjectId = dbo.TeacherSubject.SubjectId RIGHT OUTER JOIN
dbo.Teacher INNER JOIN
dbo.Gender ON dbo.Teacher.Gender = dbo.Gender.GenderId ON dbo.TeacherSubject.TeacherId = dbo.Teacher.TeacherId
```
Thanks in advance for helping me a lot.
|
The easiest way would be to join just one subject per teacher in the first place. So join with an aggregate:
```
select
t.teacherid,
t.teachername,
g.gender,
t.dob,
coalesce(s.subject, 'no subject') as subject,
t.contact,
t.address,
t.email,
t.photo
from teacher t
join gender g on g.genderid = t.gender
left join
(
select teacherid, max(subjectid) as max_subjectid
from teachersubject
group by teacherid
) ts on ts.teacherid = t.teacherid
left join subject s on s.subjectid = ts.max_subjectid;
```
|
Take your query, add a `Row_Number()` statement as another column, and put your entire query in the CTE:
```
WITH CTE AS
(
 Select *, Row_Number() OVER(PARTITION BY TeacherId ORDER BY Subject) RN from Table
)
Select * from CTE WHERE RN = 1
```
OR
TRY LIKE THIS
```
Select * from Dbo.Subject
INNER JOIN (Select DISTINCT ISNULL(MAX(Subjectname), 'NoSubject') As Subject FROM TeacherSubject) As Subjectname
ON Subjectname.SubjectID = Subject.SubjectID
```
|
How to get DISTINCT row from INNER JOIN Query in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I have a table called post which has a column called title. I want to select all posts which have only one word as the title, for example 'cat' or 'dog'. There are posts whose titles have more than one word, such as 'cat and dog are not good'; these I don't want to select. Only posts with a one-word title. How can I do so with MySQL?
|
```
SELECT * FROM post WHERE title NOT LIKE '% %'
```
Chooses rows where titles have no spaces.
**EDIT:**
> If a single word with a space in the end like `cat`, then?
Then you smack your users with a tunafish for making silly inputs, and also your programmers for not trimming the inputs.
Just kidding. If that is a possibility, this should work:
```
SELECT * FROM post WHERE title RLIKE '^[[:space:]]*[^[:space:]]+[[:space:]]*$'
```
|
If a single word has a space at the end, like `'cat '`:
**Query**
```
CREATE TABLE tbl(title VARCHAR(250));
INSERT INTO tbl VALUES('cat ');
INSERT INTO tbl VALUES('dog');
INSERT INTO tbl VALUES('cat and dog');
SELECT * FROM tbl WHERE TRIM(TRAILING ' ' FROM title) NOT LIKE '% %';
```
**[Fiddle Demo 1](http://sqlfiddle.com/#!2/2e3e3e/6)**
---
If space in both start and end. Then,
**Query**
```
SELECT * FROM tbl WHERE TRIM(BOTH ' ' FROM title) NOT LIKE '% %';
```
OR
```
SELECT * FROM tbl WHERE TRIM(title) NOT LIKE '% %';
```
**[Fiddle Demo 2](http://sqlfiddle.com/#!2/d868c/1)**
|
select entries that has one word
|
[
"",
"mysql",
"sql",
""
] |
```
INSERT INTO MonthlyFee(StudentID, ClassTypeID, Fee, MonthID,
[Year], IsDefaulter, FeeStatus, CreatedDate)
SELECT StudentID, ClassTypeID FROM ClassRollNo
```
I want to set the following values (mostly hardcoded parameters): MonthID 1, Year 2015, IsDefaulter FALSE, FeeStatus 0 and CreatedDate GETDATE().
|
Just hard-code those values in the same `SELECT` statement, in the order of the insert column list. Try this syntax:
```
INSERT INTO MonthlyFee
(StudentID,
ClassTypeID,
Fee,
MonthID,
[Year],
IsDefaulter,
FeeStatus,
CreatedDate)
SELECT StudentID,
ClassTypeID,
100, -- You missed
1,
2015,
'false',
0,
Getdate()
FROM ClassRollNo
```
**Note:** You missed the Fee column; I have hard-coded it.
|
Use the same query and the values you described:
```
INSERT INTO MonthlyFee(StudentID, ClassTypeID,Fee,MonthID,
[Year],IsDefaulter,FeeStatus,CreatedDate)
SELECT StudentID, ClassTypeID, 0, 1, 2015, 0, 0, GETDATE() FROM ClassRollNo
```
|
Insert Data from multiple tables into a single table
|
[
"",
"sql",
"sql-server",
"join",
"subquery",
""
] |
Why do we need to use the query `SELECT COLUMN1, COLUMN2` when we can just use `SELECT *` and show only the columns we want?
And what's the difference between `SELECT ALL` and `SELECT`?
|
The question is, why would one use `SELECT col1, col2` when you can easily just show what columns you like in the application and always use `SELECT *`?
The main reasons are these:
1. Selecting only the columns you want means that the server has to gather and send less data to the application, making for a faster query and less resource usage, especially if you have many columns or some of them contain BLOBs.
2. If the structure in the database changes (for example, if a column is renamed or dropped), you get errors that are more easily understood, and they appear in the select, not in the application.
3. If you want to use the fields in the result by their integer index, you *must* specify the columns, or else you are at the mercy of whatever column order the query optimizer comes up with. Your application may just stop working because someone added a column or index.
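Point 3 can be demonstrated with a small sketch (using SQLite via Python; the table and columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE users (name TEXT, email TEXT)")
c.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

# Fragile: relies on the physical column order returned by SELECT *.
row = c.execute("SELECT * FROM users").fetchone()
email_by_index = row[1]  # silently breaks if a column is ever added before email

# Robust: an explicit column list pins both the data sent and its order.
row = c.execute("SELECT email, name FROM users").fetchone()
email_explicit = row[0]
print(email_by_index, email_explicit)
```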
**Edit to answer the second part of the question:**
Oh, and the difference between SELECT ALL and SELECT is that there is no difference.
SELECT ALL is the default behaviour of SELECT in all dialects I know of.
However, SELECT DISTINCT is different. It returns the unique values of a set, so if you run :
```
SELECT DISTINCT col1, col2
```
on a table that contains
```
1,2
1,2
2,2
2,3
```
you will get
```
1,2
2,2
2,3
```
|
There are a lot of previously posted questions comparing the differences between `SELECT Col1, Col2` and `SELECT *`.
You can find some of them here.
[Which is faster/best? SELECT \* or SELECT column1, colum2, column3, etc](https://stackoverflow.com/questions/65512/which-is-faster-best-select-or-select-column1-colum2-column3-etc)
[Is there a difference between Select \* and Select](https://stackoverflow.com/questions/208925/is-there-a-difference-between-select-and-select-list-each-col)
[select \* vs select column](https://stackoverflow.com/questions/3180375/select-vs-select-column)
---
About `SELECT ALL` and `SELECT`:
There is no difference between these two statements and you can use `SELECT Col1, Col2` or `SELECT ALL Col1, Col2`
If you have a look at `SELECT` statement:
`SELECT [ALL | DISTINCT] column1[,column2] FROM table1`
`ALL` is default.
|
SELECT COLUMN SQL
|
[
"",
"mysql",
"sql",
""
] |
While studying, I'm struggling to find an answer to this question:
What is a generic data model?
Are there any alternatives to a generic data model? If so, which one would you recommend?
I've looked everywhere to find a decent answer, but I didn't find one. It might be just a very simple question; I just couldn't find the term 'generic data model'.
|
Perhaps by generic data model you mean an entity-attribute-value (EAV) model: [EAV on wikipedia](http://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model)
This data model has one table and three columns. It is usually burnt by fire if ever discovered by someone of a database (vs developer) persuasion.
The thing with a data model is that you are modeling *something*. You are applying your skills to build a database that is optimised to store and retrieve data about something (widgets in a factory, trades on a stock exchange, Facebook posts). A EAV can store all of this stuff but it causes many other problems down the line.
More info here: [EAV](https://dba.stackexchange.com/questions/20759/is-there-a-name-for-this-database-structure)
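A minimal sketch of what an EAV table looks like in practice (using SQLite via Python; the entity and attribute names are made up). Note how even reading back one logical "row" already requires a pivot:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
# The whole "generic" model: one table, three columns.
c.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
c.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ('widget-1', 'color',  'red'),
    ('widget-1', 'weight', '3.5'),
    ('widget-2', 'color',  'blue'),
])
# Reassembling one entity's logical "row" needs a pivot, one branch per attribute.
row = c.execute("""
    SELECT MAX(CASE WHEN attribute = 'color'  THEN value END) AS color,
           MAX(CASE WHEN attribute = 'weight' THEN value END) AS weight
    FROM eav WHERE entity = 'widget-1'""").fetchone()
print(row)  # ('red', '3.5')
```

Every value is stored as text, nothing is type-checked, and every extra attribute adds another `CASE` branch — which is why this model is usually discouraged.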
|
It could well be an EAV model. But that leaves me with the following question: are there any alternatives to this model?
Thanks.
|
SQL - Alternatives to a generic datamodel
|
[
"",
"sql",
"database",
"generics",
"datamodel",
""
] |
I am using SQL Server. I have two date filters using those fields as show below in query.
```
SELECT *
FROM student
WHERE
Dateofbirth >= '2014-10-03'
AND Dateofbirth <= '2014-10-03'
```
When using the same date on both sides, no results are listed. If I change the date to
```
SELECT *
FROM student
WHERE
Dateofbirth >= '2014-10-03'
AND Dateofbirth <= '2014-10-04'
```
I have my records listed.
This same query works fine in MySql. Can someone help me? Why is this not working in SQL Server?
|
Looking at the issue, Dateofbirth is probably a datetime column.
A datetime column has both a date part and a time part. So, your query with a date alone translates to
```
SELECT * FROM student
WHERE Dateofbirth >= '2014-10-03 00:00:00.000'
AND Dateofbirth <= '2014-10-03 00:00:00.000'
```
So, to get the entire day the query should be like
```
SELECT * FROM student
WHERE Dateofbirth >= '2014-10-03 00:00:00.000'
AND Dateofbirth <= '2014-10-03 23:59:59.997'
```
|
```
SELECT * FROM student
WHERE Dateofbirth >= '2014-10-03'
AND Dateofbirth <= '2014-10-03'
```
This means: show only rows whose value is exactly `2014-10-03`. Your column also has an hour component, while your date string has no hour, minute, or second, so the comparison will not match anything.
The problem is solved by type casting, or by using the proper format when comparing dates.
```
SELECT * FROM student
WHERE cast ( Dateofbirth as date) >= cast ( '2014-10-03' as date)
AND cast( Dateofbirth as date) <= cast( '2014-10-03' as date)
```
This casts both values to the date portion only, without the time.
When you used `2014-10-04`, all hours of the previous day were included, which is why you got the result.
OR
you can also use the BETWEEN operator, which includes the starting date and the ending date.
```
SELECT * FROM student
WHERE cast ( Dateofbirth as date) between cast ( '2014-10-03' as date) and cast ( '2014-10-04' as date)
or
SELECT * FROM student
WHERE Dateofbirth between '2014-10-03' and '2014-10-04'
```
|
SQL Server : where with same date
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need to convert a SQL column which is in `varchar` datatype to `decimal` and the null values to 0. This is my code to convert `varchar` to `decimal`:
```
SELECT
CAST(debit as DECIMAL(9,2)),
CAST(credit as DECIMAL(9,2)),
sum_accname,
sum_date,
sum_description
FROM
sum_balance
```
I need to convert the debit and credit columns' null values to zero. How do I add the null-value conversion to this code?
|
```
SELECT
COALESCE(CAST(debit as DECIMAL(9,2)),0) as debit,
COALESCE(CAST(credit as DECIMAL(9,2)),0) as credit,
sum_accname,
sum_date,
sum_description
FROM
sum_balance
```
|
Use [**COALESCE**](http://msdn.microsoft.com/en-us/library/ms190349(v=sql.105).aspx) function to fetch NOT NULL values.
Try this:
```
SELECT
CAST(COALESCE(debit, '0') AS DECIMAL(9,2)),
CAST(COALESCE(credit, '0') AS DECIMAL(9,2)),
sum_accname,
sum_date,
sum_description
FROM
sum_balance;
```
|
How to convert SQL column null value to 0 and varchar to decimal in sql query
|
[
"",
"sql",
"sql-server",
"select",
"sql-server-2008-r2",
"crystal-reports",
""
] |
I am using SQL Server 2012.
I have a simple query,
```
select * from P_Bld
where datep = (select max(datep) from P_Bld)
and codeBlood like 'nm_%'
order by codeBlood
```
This works fine and returns the data as shown below
```
codeBlood num
nm_aab 1
nm_abc 2
nm_rst 3
```
I was wondering if it is possible to change the codeBlood column so it does not include the `nm_` prefix, and also to make the rest of the string uppercase, so I would get something like below:
```
codeBlood num
AAB 1
ABC 2
RST 3
```
Is this possible?
|
I think this will do,
First, replace "nm_" with an empty string using the `Replace()` function, then convert the remaining string to uppercase using the `Upper()` function:
```
select Upper(Replace(codeBlood,'nm_','')) as codeBlood,num from P_Bld
where datep = (select max(datep) from P_Bld)
and codeBlood like 'nm_%'
order by codeBlood
```
|
If the pattern is always the same, try this:
```
select Upper(substring(codeBlood, 4, LEN(codeBlood))), num from P_Bld
where datep = (select max(datep) from P_Bld)
and codeBlood like 'nm_%'
order by codeBlood
```
|
query and formatting string at the same time
|
[
"",
"sql",
"sql-server",
""
] |
I'm a newbie to SQL (Server) and the following question is really making me clueless.
```
ID Order Type
=====================
26 Automatic Orders
26 Backend Orders
41 Automatic Orders
58 Automatic Orders
86 Automatic Orders
86 Automatic Orders
86 Automatic Orders
111 Automatic Orders
104 Automatic Orders
112 Backend Orders
112 Backend Orders
119 Backend Orders
119 Backend Orders
119 Backend Orders
```
Now it is asked to find "the hotel which has more automatic orders and fewer backend orders".
Though I was able to answer most of the questions that came along with this one, I'm really not getting what I should do to crack it!
|
Not sure this is exactly what you need, but it can guide you to the solution.
If I understood correctly, you want the hotel in which the difference between the automatic and backend orders is the highest? If so, this may help.
Assuming such a table exists and is populated with the values you provided:
```
CREATE TABLE [dbo].[Orders](
[Id] [int] NULL,
[Type] [varchar](50) NULL
) ON [PRIMARY]
```
This query should return the hotel list sorted according to the demand.
```
SELECT a.HotelId , a.NumOfAutomaticOrders - a.NumOfBackendOrders
FROM
(
SELECT Id as HotelId ,
sum(case when [type] = 'Automatic Orders' then 1 else 0 end ) NumOfAutomaticOrders,
sum(case when [type] = 'Backend Orders' then 1 else 0 end ) NumOfBackendOrders
FROM Orders
where 1=1
group by Id
) a
order by 2 desc
```
|
If I understand your question correctly, you want Hotels with more `Automatic Orders` than `Backend Orders`. You can use the `SUM` function and a `HAVING` clause for this.
```
SELECT
ID
FROM temp
GROUP BY ID
HAVING
SUM(CASE WHEN [Order Type] = 'Automatic Orders' THEN 1 ELSE 0 END) >
SUM(CASE WHEN [Order Type] = 'Backend Orders' THEN 1 ELSE 0 END)
```
# EDIT
As per your comment on Asketh's answer, this is probaby what you need:
```
SELECT
TOP 1 *
FROM(
SELECT
ID,
[Automatic Orders] = SUM(CASE WHEN [Order Type] = 'Automatic Orders' THEN 1 ELSE 0 END),
[Backend Orders] = SUM(CASE WHEN [Order Type] = 'Backend Orders' THEN 1 ELSE 0 END)
FROM temp
GROUP BY ID
)t
ORDER BY ([Automatic Orders] - [Backend Orders]) DESC
```
|
SQL Query to find the ID with highest difference of contents
|
[
"",
"sql",
"sql-server",
""
] |
Here is my query:
```
IF EXISTS(%some query%)
BEGIN
BEGIN TRY
ALTER DATABASE [MyDatabase] SET single_user WITH ROLLBACK IMMEDIATE;
--do something
--throw some exception
END TRY
BEGIN CATCH
--here I want to set database back to multi_user
RAISERROR ('DBErrorMessage', @ErrorSeverity, @ErrorState);
END CATCH
END
```
What is the best way to set database back to `multi_user`? I am afraid that the most straightforward way might lead to an
> Database 'MyDatabase' is already open and can only have one user at a time.
exception.
By the most straightforward way I mean this one:
```
ALTER DATABASE [MyDatabase] SET MULTI_USER
```
|
It's been a while, but I believe the `with rollback immediate` option is there to say you want this to succeed no matter what. The normal behavior blocks until all running transactions have completed successfully.
Also, setting the database to multi-user mode when it's already multi-user is safe.
However, if you want to be really sure, use a nested try-catch block: the outer one to handle any error moving to single-user mode, and the inner one to handle errors that occur in single-user mode.
see [try-catch documentation](http://technet.microsoft.com/en-US/library/ms175976(v=sql.105).aspx)
and [alter database documentation](http://technet.microsoft.com/en-US/library/ms174269(v=sql.90).aspx)
A couple of final notes for completeness:
1) If the error is severe enough your connection will be closed, and your database will still be in single-user mode.
2) You should have a test instance where you can safely try this stuff out, so that you can learn what's going to happen in production.
|
What happens if you simply put
```
ALTER DATABASE [MyDatabase] SET MULTI_USER
```
after the CATCH? It shouldn't find any active user, since you are in single-user mode from your program/stored procedure.
Also, have you tried putting everything in a transaction, using ROLLBACK in the CATCH block?
|
How do I go back to multi_user mode
|
[
"",
"sql",
"sql-server",
"t-sql",
"multi-user",
"single-user",
""
] |
I have datetime column value below
```
2015-01-04 20:37:00.000
```
I tried below
```
cast(cast(MyDateColumn as date) as datetime)+cast(datepart(hour,MyDateColumn ) as float)/24
as MyDateColumn
```
and
```
CAST(CONVERT(CHAR(16),MyDateColumn,113) AS datetime) as MyDateColumn
```
These did not work for me.
How can I get the above datetime as **01-04.2015 20:37**?
|
In SQL Server this will work:
```
DECLARE @now [datetime];
SET @now = GETDATE();
SELECT
CONVERT([varchar](10), @now, 105) + ' ' +
RIGHT('0' + CONVERT([varchar](2), DATEPART(HOUR, @now)), 2) + ':' +
RIGHT('0' + CONVERT([varchar](2), DATEPART(MINUTE, @now)), 2);
```
|
Since MS SQL 2012, you can use [`FORMAT`](http://msdn.microsoft.com/en-GB/library/hh213505.aspx),
```
SELECT FORMAT([MyDateColumn], 'dd-MM.yyyy HH:mm')
```
|
Sql Server select datetime without seconds
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I have table person
```
create table Person (
id SERIAL not null,
...
);
```
person can have 0 or more PreviousName
```
create table PreviousName (
id SERIAL not null,
person_id INTEGER not null,
value VARCHAR(120)
);
```
How can I select all persons who have, for example, a previous name with the value "John"?
|
```
select * from Person
where id in (select person_id from PreviousName
             where value = 'John')
```
|
You can find the distinct person_id values from PreviousName where the value contains John:
```
SELECT * FROM Person WHERE id IN (SELECT DISTINCT person_id FROM PreviousName WHERE value LIKE '%John%')
```
|
How to select person depending on one to many relationship
|
[
"",
"sql",
"postgresql",
""
] |
I have written a stored procedure as
```
Declare @Split table
(
Jurisdiction nvarchar(max)
)
Insert into @Split select Jurisdiction from tbLicence where AgentId =@AgentId;
Select * from @Split;
Declare @products varchar(MAX);
Select @products = Jurisdiction from @Split;
Select @products;
```
I want to put all the values of the Jurisdiction column from @Split into the @products variable.
Please note that the above code puts only the last row's column value in @products.
Please help me!
|
First, initialize the variable with an empty string. Then append all the rows using any delimiter. Use this:
```
DECLARE @products VARCHAR(MAX)='';
SELECT @products += Isnull(Jurisdiction, '') + ','
FROM @Split;
SELECT left(@products,len(@products)-1);
```
or Use `For XML path()`
```
set @products=(select ','+Jurisdiction from @Split for xml path(''))
select right(@products,len(@products)-1)
```
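On SQL Server 2017 and later, `STRING_AGG` does the same thing directly, with no delimiter trimming needed (a sketch using the same table variable):

```
SELECT @products = STRING_AGG(Jurisdiction, ',')
FROM @Split;
```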
|
This might help you
```
Declare @Split table( Jurisdiction nvarchar(max))
Declare @products varchar(MAX)=''
declare @count int=(select count(*) from @split)
declare @a int=1
Insert into @Split values ('a'),('b')
while(@a<@count)
begin
Select @products += Jurisdiction from @Split
set @a=@a+1
end
select @products
```
|
How to put all rows in a column to a variable?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a table (tran) with the following values:
```
Set Type Acc Type Tran Value
---- ----- ---------- ---------- ------------------- -----------
1 BS 87411 CCHQ Other Transactions -200
1 BS 87411 RECPT Other Transactions 200
1 BS 87411 CCHQ Other Transactions -200
1 BS 87411 RECPT Other Transactions 200
```
How do I update this table so that Value holds the total summed by Type? I should mention this code is part of an SP; a plain SELECT isn't enough, because I need the table itself updated rather than query output.
I tried something like:
```
update a
set Value = sum(Value)
from tran a
group by TranType
```
but this doesn't work; it shows the error "Incorrect syntax near 'group'".
|
This is one of the things you can do. I'm afraid there's no getting away from a DELETE statement, as you won't be able to overwrite your existing data with a single summed-up row without deleting the originals.
```
BEGIN TRANSACTION
SELECT ROW_NUMBER() OVER (ORDER BY Type2) AS UniqueID, * --- Create Unique ID if you don't already have one.
INTO #tmpTransactionTypes
FROM dbo.TransactionTypes_Test;
UPDATE A
SET A.Value = B.Total
FROM #tmpTransactionTypes AS A
INNER JOIN (
SELECT Type2, SUM(Value) AS Total
FROM #tmpTransactionTypes
GROUP BY Type2) AS B ON A.Type2 = B.Type2;
DECLARE @FirstValue INT = 1
DECLARE @LastValue INT = (SELECT MAX(UniqueID) FROM #tmpTransactionTypes)
WHILE @FirstValue <= @LastValue
BEGIN
DELETE
FROM #tmpTransactionTypes
WHERE UniqueId = @FirstValue;
SET @FirstValue = @FirstValue +2
END
SELECT *
FROM #tmpTransactionTypes;
-- COMMIT TRANSACTION
ROLLBACK TRANSACTION
```
Result:
```
UniqueID Set Type1 Acc Type2 Tran Value
2 1 BS 87411 CCHQ Other Transactions -400.00
4 1 BS 87411 RECPT Other Transactions 400.00
```
|
You can't do it this way. Try this instead:
```
update a
set a.Value = t.Total
from tran a
INNER JOIN
(
SELECT TranType, SUM(Value) Total
FROM tran
GROUP BY TranType
) AS t ON a.TranType = t.TranType
```
|
Update a table in SQL server (without output from select)
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am using MySQL to solve this problem. I need to give points to a user based on the total time spent by him on a question. I have calculated the time spent by the user. Let's say it is in **user\_time** table.
```
user_id question_id time_spent
1 1 7
1 2 50
2 1 11
```
My points are range based:
```
[0-10) seconds: 100 points,
[10-20) seconds: 300 points,
[20-30) seconds: 500 points,
[30, inf): 1000 points
```
Exactly 10 seconds will fetch me 300 points, though the chances of hitting an exact number are low, given that I am computing from the system clock difference.
This information is currently stored in an external table **points\_table**
```
time_spent points
0 100
10 300
20 500
30 1000
```
I need a query which finds out which range the seconds belong to and give me that result.
```
user_id question_id points
1 1 100
1 2 1000
2 1 300
```
I tried thinking of different type of joins but couldn't think of one which will answer this specific requirement.
|
I think the easiest approach is a correlated subquery. Something like this:
```
select ut.*,
(select pt.points
from points_table pt
where pt.time_spent <= ut.time_spent
order by pt.time_spent desc
limit 1
) as points
from user_time ut
```
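The same greatest-lower-bound lookup can also be phrased as a join, if you prefer; this is a sketch using the same two tables:

```
SELECT ut.user_id, ut.question_id, pt.points
FROM user_time ut
JOIN points_table pt
  ON pt.time_spent = (SELECT MAX(p2.time_spent)
                      FROM points_table p2
                      WHERE p2.time_spent <= ut.time_spent);
```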
|
For another take on this, you could achieve the same result without having the points table:
```
SELECT *,
CASE
WHEN time_spent >= 30 THEN 1000
WHEN time_spent >= 20 THEN 500
WHEN time_spent >= 10 THEN 300
ELSE 100
END 'Points'
FROM user_time;
```
|
MySQL: Range based on rows in external table
|
[
"",
"mysql",
"sql",
"select",
"join",
""
] |
I've done a lot of googling as to what JSON is, and how it is used on the web. I understand that it is a way of storing data using Javascript object notation, and that it is a lightweight way to store data and is therefore preferable to XML.
However, I don't quite understand exactly how XML and JSON are used to transmit data in the first place, especially when using SQL. Is SQL data loaded into an XML file/JSON string and then displayed with Ajax?
I know this is a complete newbie question but I am seriously confused.
|
XML and JSON are *serialisation formats*. They're plain text formats which allow you to express more than plain text. JSON can express complex nested hierarchies of arrays and key-value objects, XML can express even more complex object-attribute-value-content hierarchies. Both data formats simply allow you to send more than a simple string in a defined way.
In more practical terms, how would you send *three* items from A to B? You can easily send just "foo", it's just a string. But what about "foo", "bar" and "baz"? Do you want to invent your own serialisation format? Like:
```
foo,bar,baz
```
Then you do some string processing on the other end to split by `,` to get your three elements back? What if you need to send more complex data? What if your data contains a ","? That's where existing serialisation standards come in, like JSON and XML. You don't need to answer all those questions every time from scratch, you just use an existing, well supported standard:
```
["foo","bar","baz"]
```
or:
```
<data>
<item>foo</item>
<item>bar</item>
<item>baz</item>
</data>
```
You would generate this data on the fly as needed; a database can be one possible source for this data. You don't "put it in a file", you output it as needed. Pseudo code example:
```
var data = []
while (row = fetch_from_database())
data.push(row.name)
var json = json_encode(data)
print json
```
Here you're creating an array `data` with the contents of the `name` column of your database, then encode it to JSON and output it. The output will be something like the "foo", "bar", "baz" example from above.
|
SQL is used to store data. XML and JSON are human-readable, text-based data interchange formats.
Typically a request is made over HTTP (often via AJAX). The app that receives the request retrieves the data from an SQL database. The requested data is formatted into XML or JSON and then transmitted to the requester as the response.
XML and JSON can contain all the elements of a database. The original purpose of XML was to transfer data between two systems with dissimilar data storage. JSON was created as a simplified alternative to XML. JSON is key-value pair oriented where a database column name is the key and the data stored in that column is the value associated with the key.
For example Google Geo Coding gives the option for their response to be formatted as JSON or XML.
**XML /geocode/xml/**
```
$data = @file_get_contents('http://maps.googleapis.com/maps/api/geocode/xml?address=1234+Main+St&sensor=false');
```
This creates a very simple XML response.
**To get Latitude from the XML response:**
```
$xml = new SimpleXMLElement($data);
$coordinates = $xml->Response->Placemark->Point->coordinates;
$coordinates = explode(",", (string)$coordinates);
$lat = $coordinates[1];
```
**JSON /geocode/json/**
```
$data = @file_get_contents('http://maps.googleapis.com/maps/api/geocode/json?address=1234+Main+St&sensor=false');
```
To get Latitude from the JSON response:
```
$json = json_decode($data,true);
$lat = $json['results']['0']['geometry']['location']['lat'];
```
An SQL record can be converted to JSON
**Sample mySQL Export to JSON**
```
{"number": 9547839995,
"TimeStamp": "2014-10-09 21:38:10",
"ip": "",
"business": "Creative R Us Computer & Phone Repair",
"addr": "547 E Sample Rd",
"city": "Pompano Beach,
FL 33064",
"email": "",
"exp": "",
"web": "http://www.creativerus.com",
"name": "",
"note": ""}]
```
Now an XML mySQL Export gives a lot more detail.
**Sample mySQL Export to XML**
```
<pma_xml_export version="1.0" xmlns:pma="http://www.phpmyadmin.net/some_doc_url/">
<!--
- Structure schemas
-->
<pma:structure_schemas>
<pma:database name="isl_contact" collation="latin1_swedish_ci" charset="latin1">
<pma:table name="Profile">
CREATE TABLE `Profile` (
`number` bigint(20) NOT NULL DEFAULT '0',
`TimeStamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`ip` char(16) COLLATE utf8_bin NOT NULL,
`business` char(64) COLLATE utf8_bin DEFAULT '',
`addr` char(64) COLLATE utf8_bin DEFAULT '',
`city` char(64) COLLATE utf8_bin DEFAULT '',
`email` char(128) COLLATE utf8_bin NOT NULL,
`exp` char(16) COLLATE utf8_bin DEFAULT '',
`web` char(128) COLLATE utf8_bin DEFAULT NULL,
`name` char(32) COLLATE utf8_bin DEFAULT '',
`note` varchar(256) COLLATE utf8_bin NOT NULL,
PRIMARY KEY (`id`),
KEY `NUMBER` (`ip`,`state`)
) ENGINE=MyISAM AUTO_INCREMENT=369 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
</pma:table>
</pma:database>
</pma:structure_schemas>
<!--
- Database: 'isl_contact'
-->
<database name="isl_contact">
<!-- Table Profile -->
<table name="Profile">
<column name="number">9545555555</column>
<column name="TimeStamp">2014-10-09 21:38:10</column>
<column name="ip"></column>
<column name="business">Creative R Us Computer</column>
<column name="addr">547 E Sample Rd</column>
<column name="city">Pompano Beach, FL 33064</column>
<column name="email"></column>
<column name="exp"></column>
<column name="web">http://www.creativerus.com</column>
<column name="name"></column>
<column name="note"></column>
<column name="link"></column>
<column name="contact"></column>
<column name="source"></column>
<column name="id">1</column>
<column name="mobile"></column>
<column name="state">3</column>
<column name="attributes">0</column>
<column name="fuDate">NULL</column>
<column name="fuTime">NULL</column>
</table>
</database>
</pma_xml_export>
```
<http://php.net/manual/en/xml.examples.php>
|
How are XML and JSON used in conjunction with SQL?
|
[
"",
"sql",
"ajax",
"xml",
"json",
""
] |
I have simple query like
```
SELECT * FROM temp t WHERE t.id IN (:IDs)
```
When executed, it prompts me (Oracle SQL Developer) for entering value if IDs variable.
When I enter, for example, 169, everything runs smoothly, but when I try to enter multiple IDs, like 169,170,171, I get the error "Invalid Number", even when wrapping them in quotes.
I'm used to working with MS SQL and MySQL, so this is a little confusing to me.
Any suggestions?
|
The problem is the `varying-IN list`. In `SQL Developer`, when you are prompted to enter the value for the bind variable, you are simply passing `169,170,171` as a single value, which it does not treat as a `set of values`.
What you could do is, have multiple binds -
`SELECT * FROM temp t WHERE t.id IN (:ID1, :ID2)`
When prompted, enter value for each bind.
**UPDATE** Alright, if the above solution looks ugly, then I would prefer the below solution -
```
WITH DATA AS
(SELECT to_number(trim(regexp_substr(:ids, '[^,]+', 1, LEVEL))) ids
FROM dual
CONNECT BY instr(:ids, ',', 1, LEVEL - 1) > 0
)
SELECT * FROM temp t WHERE t.id IN
(SELECT ids FROM data
)
/
```
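Another option, when you can edit the query text rather than just the bind value, is one of Oracle's built-in collection types; for example (a sketch):

```
SELECT *
FROM temp t
WHERE t.id IN (SELECT column_value
               FROM TABLE(sys.odcinumberlist(169, 170, 171)));
```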
|
If you put them in quotes, you get an error; Oracle doesn't accept quoted values here. You should use just the numbers without quotes,
i.e.: (169,170,171,...)
|
ORACLE substitute variable in IN statement
|
[
"",
"sql",
"oracle",
""
] |
I can't seem to get this query to run. It keeps telling me "Msg 102, Level 15, State 1, Line 5
Incorrect syntax near '='." Not sure what I'm doing wrong here.
Basically i need [forwarder\_preferred\_status] to be P2 or P3 depending on the value of [forwarder\_short]
Any Suggestions would be welcome
```
Select
[forwarder_display],
Case [forwarder_preferred_status]
-- Update P2 Customers
When [forwarder_short] = 'ABL' THEN [forwarder_preferred_status] = 'P2'
When [forwarder_short] = 'ALK' THEN [forwarder_preferred_status] = 'P2'
When [forwarder_short] = 'EIF' THEN [forwarder_preferred_status] = 'P2'
When [forwarder_short] = 'NNR' THEN [forwarder_preferred_status] = 'P2'
When [forwarder_short] = 'ALI' THEN [forwarder_preferred_status] = 'P2'
When [forwarder_short] = 'LAF' THEN [forwarder_preferred_status] = 'P2'
When [forwarder_short] = 'AIT' THEN [forwarder_preferred_status] = 'P2'
-- Update P3 Customers
When [forwarder_short] = 'SCHBAX' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'UPS' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'KUE' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'AGI' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'PAN' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'DGF' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'NEC' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'HWE' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'NIS' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'BDP' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'KWE' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'CEVA' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'VIZ' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'FTN' THEN [forwarder_preferred_status] = 'P3'
When [forwarder_short] = 'UTI' THEN [forwarder_preferred_status] = 'P3'
-- Fix wrong prefered status
When [forwarder_short] = 'OIA' THEN [forwarder_preferred_status] = null
When [forwarder_short] = 'PHO' THEN [forwarder_preferred_status] = null
When [forwarder_short] = 'REI' THEN [forwarder_preferred_status] = null
Else [forwarder_preferred_status]
End as [forwarder_preferred_status]
Sum ([calc_gross_rev]) As [Gr Revenue]
Sum ([charge_weight]) As [Ch Weight]
From
[dbo].[report_bo_awb_revenue_all]
Where
[yr] = 2014
AND [agent_iata_code] not in ('0508634','0514616')
```
|
The code in your question is structured like this:
```
--Bad
CASE A
WHEN X THEN A = B
WHEN Y THEN A = C
...
END
```
Instead, you need to structure it like this:
```
--Good
A = CASE
WHEN X THEN B
WHEN Y THEN C
...
END
```
or like this:
```
--Good
CASE
WHEN X THEN B
WHEN Y THEN C
...
END As A
```
A CASE expression only yields a *value*; it doesn't branch between arbitrary statements or perform assignments. Also, given you have a lot of codes that result in the same value, you can *greatly* simplify things like this:
```
--Better
CASE
WHEN X IN (1,2,3) THEN B
WHEN X IN (4,5,6) THEN C
...
ELSE NULL
END
```
That would look something like this:
```
Select
[forwarder_display],
CASE
WHEN [forwarder_short] IN ('ABL', 'ALK', 'EIF', 'NNR', 'ALI', 'LAF', 'AIT')
THEN 'P2'
WHEN [forwarder_short] IN ('SCHBAX', 'UPS', 'KUE', 'AGI', 'PAN', 'DGF', 'NEC',
'HWE', 'NIS', 'BDP', 'KWE', 'CEVA', 'VIZ', 'FTN', 'UTI')
THEN 'P3'
WHEN [forwarder_short] IN ('OIA','PHO', 'REI')
THEN NULL
ELSE [forwarder_preferred_status]
END as [forwarder_preferred_status],
Sum ([calc_gross_rev]) As [Gr Revenue],
Sum ([charge_weight]) As [Ch Weight]
From
[dbo].[report_bo_awb_revenue_all]
Where
[yr] = 2014
AND [agent_iata_code] not in ('0508634','0514616')
```
Unfortunately, this still won't work, as you're missing the GROUP BY clause needed to make your `SUM()` aggregate functions work. Because of the complex expressions, I find it easier to fix the problem by nesting the query, like this:
```
SELECT forwarder_display, [forwarder_preferred_status],
SUM([calc_gross_rev]) As [Gr Revenue],
SUM([charge_weight]) As [Ch Weight]
FROM
(
Select
[forwarder_display],
CASE
WHEN [forwarder_short] IN ('ABL', 'ALK', 'EIF', 'NNR', 'ALI', 'LAF', 'AIT')
THEN 'P2'
WHEN [forwarder_short] IN ('SCHBAX', 'UPS', 'KUE', 'AGI', 'PAN', 'DGF', 'NEC',
'HWE', 'NIS', 'BDP', 'KWE', 'CEVA', 'VIZ', 'FTN', 'UTI')
THEN 'P3'
WHEN [forwarder_short] IN ('OIA','PHO', 'REI')
THEN NULL
ELSE [forwarder_preferred_status]
END as [forwarder_preferred_status],
[calc_gross_rev],
[charge_weight]
From
[dbo].[report_bo_awb_revenue_all]
Where
[yr] = 2014
AND [agent_iata_code] not in ('0508634','0514616')
) t
GROUP BY [forwarder_display], [forwarder_preferred_status]
```
This should run and give you the desired results, barring some mistake (very likely, given that I've typed all this untested into the answer window). However, you may find you get a better execution plan by just copy/pasting the complex CASE expression to the GROUP BY clause, with no nested query.
Lastly, code like this — with all of those 3- and 4-digit codes hard-coded in — cries out to have these codes put into a table, such that this is done via a JOIN + COALESCE, rather than a CASE. That would be **MUCH** better for both performance and maintenance. That would allow you to write something like this:
```
--Best
SELECT [forwarder_display],
COALESCE(M.status, A.forwarder_preferred_status) As forwarder_preferred_status
Sum ([calc_gross_rev]) As [Gr Revenue],
Sum ([charge_weight]) As [Ch Weight]
FROM [dbo].[report_bo_awb_revenue_all] A
LEFT JOIN forwarder_map M ON m.forwarder_short = A.forwarder_short
WHERE
A.[yr] = 2014
AND [agent_iata_code] not in ('0508634','0514616')
GROUP BY [forwarder_display], COALESCE(M.status, A.forwarder_preferred_status)
```
One final note: for the example above to work, you would need to use empty strings instead of NULL values in the imaginary `forwarder_map` table. You could then use the `NULLIF()` to get NULLs back, if that's really what you need.
|
I bet this isn't the first time someone has needed to translate ABL to a P2 for a report and it won't be the last. As @Aaron pointed out, stop with these horrible case statements and tabularize it.
For the sake of this answer, I am using a table variable. These generally are not advisable as the database engine doesn't record any statistics about values in there and their distribution which is the key for it to make good query plans.
```
DECLARE
@TRANSLATE table
(
forwarder_short varchar(10) NOT NULL
, forwarder_preferred_status varchar(10) NULL
);
INSERT INTO
@TRANSLATE
(
forwarder_short
, forwarder_preferred_status
)
VALUES
('ABL', 'P2')
, ('ALK', 'P2')
, ('EIF', 'P2')
, ('NNR', 'P2')
, ('ALI', 'P2')
, ('LAF', 'P2')
, ('AIT', 'P2')
, ('SCHBAX', 'P3')
, ('UPS', 'P3')
, ('KUE', 'P3')
, ('AGI', 'P3')
, ('PAN', 'P3')
, ('DGF', 'P3')
, ('NEC', 'P3')
, ('HWE', 'P3')
, ('NIS', 'P3')
, ('BDP', 'P3')
, ('KWE', 'P3')
, ('CEVA', 'P3')
, ('VIZ', 'P3')
, ('FTN', 'P3')
, ('UTI', 'P3')
, ('OIA', NULL)
, ('PHO', NULL)
, ('REI', NULL);
```
Now I have a table, `TRANSLATE` that provides a mapping between our known `forwarder_short` values to a rollup `forwarder_preferred_status`.
To cover conditions where the forwarder\_short values are not in this table, we need to use our translation table as an OUTER JOIN.
```
WITH report_bo_awb_revenue_all AS
(
-- simulating your source data
SELECT
T.forwarder_short
, T.forwarder_preferred_status
, 100 AS calc_gross_rev
, 12 AS charge_weigh
, 2014 AS yr
, '' AS agent_iata_code
, 'fwd' AS forwarder_display
FROM
@TRANSLATE AS T
UNION ALL
SELECT
'UNK'
, 'P90x'
, 9000 AS calc_gross_rev
, 12 AS charge_weigh
, 2014 AS yr
, '' AS agent_iata_code
, 'fwd' AS forwarder_display
)
SELECT
R.forwarder_display
, COALESCE(T.forwarder_preferred_status, R.forwarder_preferred_status) AS forwarder_preferred_status
, SUM(R.calc_gross_rev) AS calc_gross_rev
, SUM(R.charge_weigh) AS charge_weigh
FROM
report_bo_awb_revenue_all AS R
LEFT OUTER JOIN
@TRANSLATE AS T
ON T.forwarder_short = R.forwarder_short
WHERE
R.yr = 2014
AND R.agent_iata_code not in ('0508634','0514616')
GROUP BY
R.forwarder_display
, T.forwarder_preferred_status
, R.forwarder_preferred_status;
```
Now, all business logic is distilled down to this
```
, COALESCE(T.forwarder_preferred_status, R.forwarder_preferred_status) AS forwarder_preferred_status
```
"Use the value in our translate table if it exists, otherwise, use what's stored on the report\_bo\_awb\_revenue\_all table."
[SQLFiddle version](http://sqlfiddle.com/#!3/d41d8/42150) This uses a temporary table for translation as the fiddle doesn't handle table variables but the core concept remains.
|
Sql When A = Z then set B = X
|
[
"",
"sql",
"sql-server",
"where-clause",
""
] |
I am completely new to SQL (I've had a couple of days to get to know it) and have the following question:
Here is a syntax for constraints:
While creating the table, I have this kind of line:
```
CONSTRAINT smth UNIQUE(name)
```
I understand that it puts constraint on column `name` for it to be unique but what is `smth` for? Why do we need to name the constraint? Is it used anywhere?
|
You name it (as with many things) so that you can perform *maintenance* in your database easily.
See [ALTER TABLE ... DROP CONSTRAINT](http://www.postgresql.org/docs/9.1/static/sql-altertable.html) and note that you have to supply the *name* of the constraint there.
Also, it's helpful if the constraint is [*violated*](http://www.postgresql.org/docs/9.1/static/sql-createtable.html):
> An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, so constraint names like col must be positive can be used to communicate helpful constraint information to client applications.
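For example, with the constraint from the question (assuming the table is named `customer`), the name is what you reference when dropping it, and it is what shows up in violation messages:

```
ALTER TABLE customer DROP CONSTRAINT smth;
-- a violating INSERT would report something like:
-- ERROR: duplicate key value violates unique constraint "smth"
```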
|
Constraints have names.
It is useful. Just imagine:
* when you need to drop a constraint
* when you list constraints on a object
* when a constraint fails, it will show name in error message.
|
Concerning Constraints
|
[
"",
"sql",
"postgresql",
""
] |
I have a table that looks roughly like this
```
Year Species Count
1979 A 0
1980 A 10
1981 A 4
1982 A 3
1979 B 0
1980 B 1
1981 B 2
1982 B 3
1979 C 9
1980 C 14
1981 C 2
1982 C 1
```
What i want is to return all Year, Species, Count for those species that have a total count (as in summed over all years) of 10 or more. so for a total count of 20 i would want it to just return
```
1979 C 9
1980 C 14
1981 C 2
1982 C 1
```
i played around with having but havent really gotten anything useful (total SQL beginner)
|
This is the easiest approach. You already have the counts. Group on species and filter the table on the results of the subquery. You can get the same functionality with an EXISTS or a JOIN as well.
```
SELECT
[YEAR]
,SPECIES
,[COUNT]
FROM TABLE
WHERE SPECIES IN (
SELECT SPECIES
FROM TABLE
GROUP BY SPECIES
HAVING SUM([COUNT]) > 20)
```
Adding some additional explanation for BootstrapBill:
GROUP BY "makes a set" for each unique value of the GROUP BY column. That allows the aggregate function SUM() to act on only one set of the grouped values at a time. HAVING is like a WHERE clause for the GROUP BY that lets you apply a predicate to the aggregated results. The only fields a GROUP BY query can return are the grouped columns themselves and the results of aggregate functions, so you need to join back to or filter the original set to get the other columns you are targeting.
And I apologize, I did not see where the OP stated this was for MySQL. The core concept is the same, so I am leaving the answer. The [] are MS SQL syntax for escaping the keywords COUNT and YEAR.
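On databases that support window functions (SQL Server 2005+, PostgreSQL, MySQL 8+), the same filter can be written in a single pass over the table; a sketch, using the same placeholder names as above:

```
SELECT [YEAR], SPECIES, [COUNT]
FROM (SELECT [YEAR], SPECIES, [COUNT],
             SUM([COUNT]) OVER (PARTITION BY SPECIES) AS TOTAL
      FROM TABLE) T
WHERE TOTAL > 20
```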
|
In MySQL, you can do this using aggregation and a `join`:
```
select t.*
from table t join
(select species, count(*) as cnt
from table
group by species
) s
on t.species = s.species
where s.cnt >= 10;
```
|
Using a sum of values as a condition (SQL query)
|
[
"",
"mysql",
"sql",
""
] |
could use some SQL help in Oracle 11G. I'm trying to create a result set that takes a current transaction, finds the most recent related transaction, shows the current price along with the previous price, and then calculates the difference.
Assume each item can only have one price for a given month. If there's no earlier data available, then display the current value.
Raw data would look something like:
```
+-------+----------+------------+------------+-------+
| Item | Location | Department | MonthEnd | Price |
+-------+----------+------------+------------+-------+
| Truck | Illinois | BusinessA | 4/30/2014 | 10000 |
| Truck | Illinois | BusinessA | 6/30/2014 | 9500 |
| Truck | Illinois | BusinessA | 10/31/2014 | 8000 |
+-------+----------+------------+------------+-------+
```
And the query result would look something like:
```
+-------+----------+------------+------------+-------+------------------+---------------+------------+
| Item | Location | Department | MonthEnd | Price | PreviousMonthEnd | PreviousPrice | PriceDelta |
+-------+----------+------------+------------+-------+------------------+---------------+------------+
| Truck | Illinois | BusinessA | 10/31/2014 | 8000 | 6/30/2014 | 9500 | -1500 |
| Truck | Illinois | BusinessA | 6/30/2014 | 9500 | 4/30/2014 | 10000 | -500 |
| Truck | Illinois | BusinessA | 4/30/2014 | 10000 | 4/30/2014 | 10000 | 0 |
+-------+----------+------------+------------+-------+------------------+---------------+------------+
```
Thanks in advance!
|
You should be able to use [Lag analytical function](http://www.techonthenet.com/oracle/functions/lag.php) to get that. The query will look like below.
```
SELECT Item,
Location,
Department,
MonthEnd,
Price,
COALESCE(LAG (MonthEnd, 1) OVER (ORDER BY MonthEnd), MonthEnd) PrevMonthEnd,
COALESCE(LAG (Price, 1) OVER (ORDER BY MonthEnd), price) PrevPrice ,
(price - coalesce(LAG (Price, 1) OVER (ORDER BY MonthEnd), price)) PriceDelta
FROM items
ORDER BY monthend desc
```
Here's a [SQLFiddle testing this.](http://sqlfiddle.com/#!4/7b8cb/13)
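One assumption worth noting: if the table can hold more than one Item/Location/Department combination, add a `PARTITION BY` so each series is only compared with its own history, e.g.:

```
LAG(Price, 1) OVER (PARTITION BY Item, Location, Department ORDER BY MonthEnd)
```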
|
Use Analytic Function to generate `row_number` and `Left join` the result .Try this.
```
WITH cte
AS (SELECT t.*, Row_number() OVER (ORDER BY MonthEnd DESC) rn
FROM yourtable t)
SELECT a.Item,
a.Location,
a.Department,
a.MonthEnd,
a.Price,
COALESCE(( a.Price - b.Price ), a.price)
FROM cte a
LEFT JOIN cte b
ON a.rn = b.rn - 1
```
Note : Based on your requirement you may to add `Partition by` in `Over` clause
|
SQL select current transaction and reference most recent transaction
|
[
"",
"sql",
"oracle",
"oracle11g",
"analytics",
""
] |
I have a string and want to get the first n characters of this string but ends with a whole word.
```
DECLARE @text VARCHAR(MAX) = '123 4056 78789 abcdefg 2222222222 3333 444 55555'
,@length INT = 23
```
I have written this query so far:
```
REVERSE(STUFF(REVERSE(LEFT(@text, @length)), 1, CHARINDEX(' ',REVERSE(LEFT(@text, @length))),''))
```
The output for `@length = 23` is `123 4056 78789 abcdefg`
But for `@length = 22` is `123 4056 78789` which is not correct and should be still `123 4056 78789 abcdefg`
Thanks!
**EDIT**
Just to clarify my question:
From `@length = 22` till `@length = 32` I should get `123 4056 78789 abcdefg`
|
If I understand your requirements correctly, I think you want the following:
```
WITH x AS (
SELECT '123 4056 78789 abcdefg 2222222222 3333 444 55555' AS [text], 22 AS [length]
)
SELECT LEFT([text], LEN([text]) - CHARINDEX(' ', REVERSE([text]), LEN([text]) - [length]))
FROM x;
```
(I put everything into the `WITH` clause just so I can refer to the column names instead of plugging in your values.)
[Please see SQL Fiddle demo here](http://sqlfiddle.com/#!3/d41d8/42386). I tested it with values of 22 and 32 for length.
**UPDATE** per comments below:
```
WITH x AS (
SELECT '123 4056 78789 abcdefg 2222222222 3333 444 55555' AS [text], 22 AS [length]
)
SELECT LEFT([text], LEN([text]) - CHARINDEX(' ', REVERSE([text]) + ' ', LEN([text]) - [length]) + 1)
FROM x;
```
|
I think I understand your requirements.
What you need to do is to find the first space after the length, then return all the characters up until that point.
Try this:
```
SELECT LEFT(@text + ' ',CHARINDEX(' ',@text,@length))
```
|
How to get the first n characters of a string but ending with a whole word
|
[
"",
"sql",
"sql-server",
"string",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a huge table with many columns, and I know that these columns sometimes take a specific value, 'MyValue'. How can I select all the rows in that one specific table containing 'MyValue', regardless of which column it is in?
I found related topic here:
[How do I find a value anywhere in a SQL Server Database?](https://stackoverflow.com/questions/436351/how-do-i-find-a-value-anywhere-in-a-sql-server-database)
But my query needs a smaller cannon. I know the table name, it is, let's say 'MyTable'. I do not need to search the whole db.
|
To do it without a special procedure in a simple statement, you could convert each row to XML and then use an XQuery on the XML to search for any value in the row that matches. So for example:
```
declare @SearchValue as varchar(20)
set @SearchValue = 'MyValue'
select *
--,(select MyTable.* for XML PATH (''),TYPE) AllColumns
--,convert(nvarchar(255),(select MyTable.* for XML PATH (''),TYPE).query('for $item in * where $item=sql:variable("@SearchValue") return $item')) FoundColumns
from MyTable
where convert(nvarchar(255),(select MyTable.* for XML PATH (''),TYPE).query('for $item in * where $item=sql:variable("@SearchValue") return $item'))<>''
```
A procedure specifically designed for this task could probably do this more efficiently and could take advantage of indexes... etc. Honestly I would not put this into a production database solution without quite a bit of consideration, but as a throw together search tool it's not bad. I ran a search on a 700,000 record table in 40 seconds. However if I filter by each column individually it runs nearly instantly. Also a few more caveats:
* None of the table columns can have spaces or other XML-unfriendly characters in their names. I couldn't figure out how to get column names with spaces to work. Maybe there's a way.
* The filter has to be written in XQuery... which is not exactly like
SQL. But you can use =, <, >, and there's even pattern matching.
* The parameter for the query function must be a string literal. So
you can't build a string dynamically. This is why I used the variable for your search values, but you could also use a sql:column("ColName") if needed.
* If searching for other types besides strings, the search string you use must match exactly what the field would be converted to as an XML value.
|
You can do this by reversing the value and column in `In` operator.
```
SELECT *
FROM Mytable
WHERE 'Myvalue' IN ( Col1, Col2, col3,.... )
```
If you don't want to type the columns, pull them from the `INFORMATION_SCHEMA.COLUMNS` view and build a `dynamic query`.
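A sketch of generating that column list (this assumes SQL Server 2017+ for `STRING_AGG`; on older versions the `FOR XML PATH` trick can be used instead):

```
SELECT STRING_AGG(QUOTENAME(COLUMN_NAME), ', ')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable';
-- paste the result into: WHERE 'MyValue' IN ( ... )
```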
|
SQL search all columns of a table for text value
|
[
"",
"sql",
"sql-server",
"search",
"select",
"text",
""
] |
How to do something like this...
```
alter table customer_schedule add (week_number as (TO_CHAR((SCHEDULE_DATE),'iw'))
```
Where SCHEDULE\_DATE is one of the existing columns in the table.
|
This is where you need **VIRTUAL COLUMN**. If you are on 11g and up, you could certainly do -
```
alter table table_name
add (column_name [data_type] [generated always] as (column_expression) [virtual]);
```
In your case, it will be something like -
```
-- VARCHAR2(2) is an assumption here; TO_CHAR(date, 'iw') yields two characters
alter table customer_schedule add (week_number varchar2(2) generated always as (to_char(schedule_date, 'iw')) virtual);
```
|
On 9i, you cannot use *virtual columns*, so I'd probably go with a view:
```
create view customer_schedule_view as
select
c.*,
to_char(c.schedule_date, 'iw') week_number
from
customer_schedule c;
```
Of course, in your forms you need then to select from the view rather from the table.
|
Alter table add column as select statement
|
[
"",
"sql",
"database",
"oracle",
"oracle9i",
""
] |
I have a function that returns the following:
```
Title Start End
Task A 2015-01-02 2015-03-31
Task B 2015-02-12 2015-04-01
Task C 2014-11-01 2015-02-05
....
```
I want to return a column for each month and 1 if its within the Start and End period 0 otherwise
```
Title Start End Jan Feb Mar Apr May Jun ....
Task A 2015-01-02 2015-03-31 1 1 1 0 0 0
Task B 2015-02-12 2015-04-01 0 1 1 1 0 0
Task C 2014-11-01 2015-02-05 1 1 0 0 0 0
....
```
Anyone have an idea on how to do this?
|
You would do this with basic `case` statements:
```
select title, start, end,
(case when 1 between month(start) and month(end) then 1 else 0 end) as jan,
(case when 2 between month(start) and month(end) then 1 else 0 end) as feb,
. . .
(case when 12 between month(start) and month(end) then 1 else 0 end) as dec
from table t;
```
Note: I am leaving your column names as in the query, even though some are reserved words and should be escaped (if that is the real name of the columns).
Also note that in your sample data, the dates change between the first table and the second.
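Note too that comparing month numbers alone can misfire when a range crosses a year boundary, as Task C does in the sample data. A hedged alternative (assuming the report year is 2015 and the same column names) is to test each month's interval for overlap with the range:
```
select title, [start], [end],
       (case when [start] <= '2015-01-31' and [end] >= '2015-01-01'
             then 1 else 0 end) as jan,
       (case when [start] <= '2015-02-28' and [end] >= '2015-02-01'
             then 1 else 0 end) as feb
       -- ... one interval-overlap test per month
from table t;
```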
|
If you only wanted to check 1 date, this would work. You should be able to adapt this sample to meet your needs.
```
SELECT c.CreateDateUTC, DATEPART(MONTH, c.CreateDateUTC) 'MONTH',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 1 THEN 1
END 'JAN',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 2 THEN 1
END 'FEB',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 3 THEN 1
END 'MAR',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 4 THEN 1
END 'APR',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 5 THEN 1
END 'MAY',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 6 THEN 1
END 'JUN',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 7 THEN 1
END 'JUL',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 8 THEN 1
END 'AUG',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 9 THEN 1
END 'SEP',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 10 THEN 1
END 'OCT',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 11 THEN 1
END 'NOV',
CASE DATEPART(MONTH, c.CreateDateUTC)
WHEN 12 THEN 1
END 'DEC'
FROM dbo.Code c
```
Result:

|
Get months between two dates in TSQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"date",
"datetime",
""
] |
I am having following values in database table :
```
12.00
15.00
18.00
20.00
```
I want to remove all decimal ZEROS from all values , So how can I do this using SQL query. I tried replace query but that is not working.
I want values like :
```
12
15
18
20
```
My replace query :
```
select height(replace (12.00, '')) from table;
```
Please help.
|
Since all your values end with ".00", there will be no rounding issues, so this will work:
```
SELECT CAST(columnname AS INT) AS columnname from tablename
```
to update
```
UPDATE tablename
SET columnname = CAST(columnname AS INT)
WHERE .....
```
|
Here the column must be of a decimal type:
```
select CAST(columnname AS decimal(38,0)) from table
```
|
Remove decimal values using SQL query
|
[
"",
"sql",
"sql-server",
""
] |
I have a query like this:
```
(SELECT * FROM something WHERE ...) UNION (SELECT * FROM something WHERE ...)
```
Because I want the results from the first `SELECT` to come out on top. But, the second part comes out in reverse order. I can't simply order the entire query because it will push the first `SELECT` that I need at the top to the bottom... So I want to do something like this:
```
(SELECT * FROM something WHERE ...) UNION (SELECT * FROM something WHERE ... ORDER BY timestamp DESC)
```
But this just doesn't work, the `ORDER BY` is completely ignored; I tested this by inserting gibberish, `ORDER BY dosaif30h` still works... One solution would be to make two separate queries, but I'd much prefer to handle it in one.
|
```
SELECT * FROM something WHERE ... UNION SELECT * FROM (SELECT * FROM something WHERE ... ORDER BY timestamp DESC)a
```
|
If you want the first table to come first, then you need to order by that explicitly:
```
SELECT s.*
FROM ((SELECT s.*, 1 as which FROM something s WHERE ...)
UNION ALL
(SELECT s.*, 2 as which FROM something s WHERE ...)
) s
ORDER BY which, timestamp DESC
```
You can only trust an `order by` in the outer query -- in most circumstances -- if you want the results in a particular order.
NOTE: You can write this without the subquery if you like:
```
SELECT s.*, 1 as which FROM something s WHERE ...
UNION ALL
SELECT s.*, 2 as which FROM something s WHERE ...
ORDER BY which, timestamp;
```
|
MySQL: can't order one side of a union
|
[
"",
"mysql",
"sql",
""
] |
Thank you in advance for taking the time to look at this.
I am looking to take a number of records containing a date field and split them into hour columns with a count in each (sql server).
E.g.
```
SpecialDateColumn
14/1/15 10:23
14/1/15 11:34
14/1/15 12:45
14/1/15 12:55
```
I'm looking for the results in a single row as follows:
```
Date 10 11 12 13 etc
14/1/15 1 1 2 0
```
I've tried to do this using a pivot table, but not had much joy.
Thanks again in advance.
|
You can do this :
```
SELECT *
FROM (
    SELECT CAST(SpecialDateColumn AS DATE) AS [Date]
,DATEPART(HOUR, SpecialDateColumn) [Hour]
FROM < TABLE >
) AL1
PIVOT(COUNT([Hour]) FOR [Hour] IN (
[0]
,[1]
,[2]
,[3]
,[4]
,[5]
,[6]
,[7]
,[8]
,[9]
,[10]
,[11]
,[12]
,[13]
,[14]
,[15]
,[16]
,[17]
,[18]
,[19]
,[20]
,[21]
,[22]
,[23]
)) P;
```
|
It is simple enough to write this as conditional aggregation:
```
select cast(SpecialDateColumn as date) as thedate,
sum(case when datepart(hour, SpecialDateColumn) = 10 then 1 else 0 end) as hour_10,
sum(case when datepart(hour, SpecialDateColumn) = 11 then 1 else 0 end) as hour_11,
sum(case when datepart(hour, SpecialDateColumn) = 12 then 1 else 0 end) as hour_12,
sum(case when datepart(hour, SpecialDateColumn) = 13 then 1 else 0 end) as hour_13
from table t
group by cast(SpecialDateColumn as date)
order by thedate;
```
|
Split date column into hour segments
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I found this related answer useful:
* [Export "Create Aggregate" functions from PostgreSQL](https://stackoverflow.com/questions/15112971/export-create-aggregate-functions-from-postgresql)
But how do I get the `CREATE AGGREGATE` statement without a GUI client (e.g. with psql command line)?
|
Something like this, but I'm not sure if this covers all possible ways of creating an aggregate (it definitely does not take the need for quoted identifiers into account)
```
SELECT 'create aggregate '||n.nspname||'.'||p.proname||'('||format_type(a.aggtranstype, null)||') (sfunc = '||a.aggtransfn
||', stype = '||format_type(a.aggtranstype, null)
||case when op.oprname is null then '' else ', sortop = '||op.oprname end
||case when a.agginitval is null then '' else ', initcond = '||a.agginitval end
||')' as source
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
JOIN pg_aggregate a ON a.aggfnoid = p.oid
LEFT JOIN pg_operator op ON op.oid = a.aggsortop
where p.proname = 'your_aggregate'
and n.nspname = 'public' --- replace with your schema name
```
|
A modern version to generate the `CREATE AGGREGATE` statement - using `format()` and casts to object identifier types to make it simple and add double-quotes and schema-qualification to identifiers where required automatically:
```
SELECT format('CREATE AGGREGATE %s (SFUNC = %s, STYPE = %s%s%s%s%s)'
, aggfnoid::regprocedure
, aggtransfn
, aggtranstype::regtype
, ', SORTOP = ' || NULLIF(aggsortop, 0)::regoper
, ', INITCOND = ' || agginitval
, ', FINALFUNC = ' || NULLIF(aggfinalfn, 0)
, CASE WHEN aggfinalextra THEN ', FINALFUNC_EXTRA' END
-- add more to cover special cases like moving-aggregate etc.
) AS ddl_agg
FROM pg_aggregate
WHERE aggfnoid = 'my_agg_func'::regproc; -- name of agg func here
```
You can schema-qualify the aggregate function's name where necessary:
```
'public.my_agg_func'::regproc
```
And / or add function parameters to disambiguate in case of overloaded aggregate functions:
```
'array_agg(anyarray)'::regprocedure
```
This does not cover special cases like moving-aggregate functions etc. Can easily be extended to cover all options for your current Postgres version. But the next major version may bring new options.
A function *`pg_get_aggregatedef()`* similar to the existing [`pg_get_functiondef()`](https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE) would be great to eliminate the need for this custom query - which may have to be adapted with every new major Postgres version ...
|
How to get definition/source code of an aggregate in PostgreSQL?
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"psql",
"ddl",
""
] |
Is it possible to retrieve the maximum value of a key column, increase it and insert it as a new value in the database, like this (pseudo-sql)
```
insert into mytable (mykeycolumn,col1,col2)
values((max(mykeycolumn)+1),'val1','val2');
```
|
Yes, you can `INSERT` from a `SELECT`, but your fixed fields must "come from the `SELECT`" as well, like so:
```
INSERT INTO mytable (mykeycolumn, col1, col2)
SELECT MAX(mykeycolumn)+1, 'val1', 'val2'
FROM mytable;
```
---
**Complementing:** as [a\_horse\_with\_no\_name](https://stackoverflow.com/users/330315/a-horse-with-no-name) pointed out, `MAX() + 1` could cause you problems if you have simultaneous transactions on `mytable`. At some point two identical `mykeycolumn` values would be *generated* and you would get an error. The ideal solution is to convert your table to use *auto\_increment*.
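A sketch of that conversion for MySQL, assuming `mykeycolumn` is an integer column that is not yet a key:
```
ALTER TABLE mytable
  MODIFY mykeycolumn INT NOT NULL AUTO_INCREMENT,
  ADD PRIMARY KEY (mykeycolumn);

-- later inserts can then omit the key entirely
INSERT INTO mytable (col1, col2) VALUES ('val1', 'val2');
```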
|
You can with this:
```
INSERT INTO mytable (mykeycolumn,col1,col2) VALUES
((SELECT MAX(mykeycolumn) FROM mytable AS foo)+1,'val1','val2');
```
|
max value of no autonincrement column in update query
|
[
"",
"mysql",
"sql",
""
] |
I have a table with a column 'Name', like the following:
```
Name
Michael Jackson
Chester Bennington
Mike Shinoda
```
I want to split the 'Name' values into FirstName and LastName, like this:
```
LastName FirstName
Jackson Michael
Bennington Chester
Shinoda Mike
```
And also I need to create new columns and add those values, all in a single query if its possible.
So as a review: I have Name, I need to create LastName and FirstName columns and split the Name values at the first blank space; the value before the blank space will go into FirstName and the value after it will go into LastName. This is what I have tried so far:
```
ALTER TABLE `data` ADD `FirstName` VARCHAR(30),ADD `LastName` VARCHAR(30)
INSERT INTO `data`(`LastName`,`FirstName`) VALUES ('...','...')
```
Help please..
|
Try this
```
UPDATE data SET FirstName = SUBSTRING_INDEX(Name, ' ', 1), LastName = SUBSTRING(SUBSTRING_INDEX(Name, ' ', 2),
LENGTH(SUBSTRING_INDEX(Name, ' ', 1)) + 1)
```
**EDIT**
As David Faber stated, one can use `SUBSTRING_INDEX(Name, ' ', -1)` for updating the `LastName` column.
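Spelled out, that variant would be (assuming the same table and columns, and guarding against single-word names):
```
UPDATE data
SET FirstName = SUBSTRING_INDEX(Name, ' ', 1),
    LastName  = SUBSTRING_INDEX(Name, ' ', -1)
WHERE Name LIKE '% %';
```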
|
You might try this:
```
ALTER TABLE `data` ADD `FirstName` VARCHAR(30)
, ADD `LastName` VARCHAR(30);
UPDATE `data`
SET `FirstName` = SUBSTR( `Name`, 1, INSTR(`Name`, ' ') - 1 )
, `LastName` = SUBSTR( `Name`, INSTR(`Name`, ' ') + 1, LENGTH(`Name`) - INSTR(`Name`, ' ') )
WHERE `Name` LIKE '% %';
```
I added the `WHERE` clause with `LIKE` operator to the update statement; otherwise it could fail if `Name` has any values without spaces since `INSTR()` will return 0 for these.
Alternately, instead of `SUBSTR()` you might use `LEFT()` and `RIGHT()`:
```
UPDATE `data`
SET `FirstName` = LEFT( `Name`, INSTR(`Name`, ' ') - 1 )
, `LastName` = RIGHT( `Name`, LENGTH(`Name`) - INSTR(`Name`, ' ') )
WHERE `Name` LIKE '% %';
```
Hope this helps.
|
Adding a new column,splitting a column value in two then insert a part of the value in the new column
|
[
"",
"mysql",
"sql",
"split",
""
] |
I am using following query on postgres database:
```
insert into user_settings (google_access_token, google_refresh_token)
select 'google_access_token', 'google_refresh_token' from user
where id = user_id
```
table user\_settings has a field `user_id` which corresponds to field named `id` in user table. but this gives the error:
```
column "user_id" does not exist
Hint: There is a column named "user_id" in table "user_settings",
but it cannot be referenced from this part of the query
```
|
You are selecting a column from user\_settings and inserting into that same table with a condition.
I think you have to use an update statement, because your condition is
user.id = user\_settings.user\_id
|
You can update a table without NULLs even if you don't have matching values
```
update user_settings
set
user_settings.google_access_token='google_access_token',
user_settings.google_refresh_token='google_refresh_token'
from user_settings
inner join user
on user_settings.user_id=user.id
```
|
Sql Insert with dynamic values
|
[
"",
"sql",
"postgresql",
""
] |
I don't use SQL every day, so bear with me ... I need to find out if the date has passed a certain month and day period. If this is not clear let me give you an example:
Ex: Given a date: 2012/02/08 (2012 Feb 08) determine if it is after 07/01 (Jul 01) in this case NO.
I need some kind of generic query to check if any given full date (SQL format `smalldatetime`) to see if it is AFTER 07/01. Basically ignoring the year part only verifying based on Month and Day.
Solution: Here is how I used it in a query
```
SELECT *
FROM [dbo].[databaseName]
WHERE DATEPART(mm, RCVD_DATE) >= 7 and DATEPART (DD, RCVD_DATE) >= 1
```
Will only give you results that have received date that are on 7/1 or after (ignoring the year).
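A more general form, with hypothetical variables @m and @d for an arbitrary cutoff, avoids over-filtering when the cutoff day is not 1 (a plain `AND DATEPART(DD, ...) >= 15` would wrongly drop Aug 1-14, for example):
```
DECLARE @m int = 7, @d int = 1;  -- cutoff month and day

SELECT *
FROM [dbo].[databaseName]
WHERE DATEPART(MONTH, RCVD_DATE) > @m
   OR (DATEPART(MONTH, RCVD_DATE) = @m AND DATEPART(DAY, RCVD_DATE) >= @d);
```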
|
```
select table.column from table
where DATEPART(mm,table.column)>=7
and DATEPART(DD,table.column)>=1
```
|
Here is a simple script that lets you do the test. The important part is the conditional in the case statement. Use that where you need to do the test.
```
declare @date datetime
set @date = '2/8/2012'
select case when @date > cast('7/1/' + cast(year(@date) as varchar(10)) as datetime) then 'yes'
else 'no'
end
```
|
Microsoft SQL given ANY DATE find out if AFTER specific month and day
|
[
"",
"jquery",
"sql",
"sql-server-2008",
""
] |
I know this is possible through some complex techniques; I want to know the simplest way to achieve this pattern where every N rows repeat.
for example
```
select a,b from tablename; (repeating 2 for example)
```
will give
```
a1,b1
a2,b2
a1,b1
a2,b2
a3,b3
a4,b4
a3,b3
a4,b4
```
where if it was 10 it will go like
```
a1,b1 to a10,b10 again a1,b1 to a10,b10
```
then
```
a11,b11 to a20,b20 again a11,b11 to a20,b20
```
and so on
|
You want blocks of ten rows repeated twice. So to get:
```
rows 1 to 10
rows 1 to 10
rows 11 to 20
rows 11 to 20
...
```
In order to repeat the rows n-fold, cross join with a table holding n records. (You get such a table for instance by querying a big enough table and stopping at rowcount n.)
You also need the row number of your original records, so you can get block 1 first, then block 2 and so on. Use integer division to get from row numbers to blocks.
```
select t.a, t.b
from (select a, b, row_number() over (order by a, b) as rn from tablename) t
cross join (select rownum as repeatno from bigenoughtable where rownum <= 2) r
order by trunc((t.rn -1) / 10), r.repeatno, t.a, t.b;
```
|
Use a CTE and union all:
```
with rows as (
select a, b
from tablename
where rownum <= 2
)
select *
from rows
union all
select *
from rows;
```
Just some caveats to this. You should use an `order by` if you want particular rows from the table. This is important, because the same select can return different sets of rows. Actually, considering this, a better way is probably:
```
with rows as (
select a, b
from tablename
where rownum <= 2
)
select *
from rows cross join
(select 1 as n from dual union all select 2 from dual) n;
```
|
Simplest way to repeat every N rows in sql
|
[
"",
"sql",
"oracle",
""
] |
**(The database is Postgres )**
Let's say I have a table races with `columns` `(user_name , race_time , race_speed , race_no).`
Each user can have multiple races. I want to get data for each month and show how many races were played in that month. ***But I also want to show which user played the most and which player has the maximum speed.***
```
select
extract ( month from race_time) as month,
extract ( year from race_time) as year,
count(race_no) as total_races
from races r
group by year,month
order by year desc,month desc
```
The above `query` gives me each month and the total races but how can I find for each month which user played the most (that'd be one user)?
I hope someone could help me with this.
|
[SQL Fiddle](http://sqlfiddle.com/#!15/c9d86/1)
```
with r as (
select
to_char(race_time, 'YYYY-MM') as month,
user_name,
count(*) as n_races,
max(race_speed) as max_speed
from races
group by 1, 2
), total_races as (
select sum(n_races) as total_races, month
from r
group by month
)
select *
from (
select distinct on (month)
month, user_name as user_most_races, n_races as most_races
from r
order by 1, 3 desc
) s
inner join (
select distinct on (month)
month, user_name as user_max_speed, max_speed
from r
order by 1, 3 desc
) q using (month)
inner join
total_races using(month)
order by month
```
|
```
SELECT COUNT(DISTINCT user_name), user_name, EXTRACT(month FROM race_time) AS month, EXTRACT(year FROM race_time) AS year
FROM races r
GROUP BY user_name, year, month
ORDER BY year desc, month desc
```
|
Getting data for a user in a month
|
[
"",
"sql",
"postgresql",
"select",
"group-by",
""
] |
Can I transform the following insertion in a way that it can handle the case where the subquery returns more than 1 row? Only the first value is needed.
```
insert into mytable (column1,column2) values ('value1', select value from mytable2 where columnX='ABC')
```
---
Any idea to get this in oracle as well?
|
In Oracle you can use a scalar sub-query with the insert...values syntax. For your example you would need to define "first value" and then write a query that only returns that.
If it is the minimum value:
```
insert into mytable (column1,column2) values (9, (select min(value) from mytable2 where columnX = 'ABC'));
```
If it is the value with the lowest id (assuming your table has an id field):
```
insert into mytable (column1,column2) values (9,
(select value
from (select value,
row_number() over (order by id) rn
from mytable2
where columnX = 'ABC')
where rn = 1));
```
|
Either use `INSERT ..VALUES` syntax or `INSERT ..SELECT`, not both
```
insert into mytable (column1,column2)
select 'value1',value from mytable2 where columnX='ABC'
```
|
subquery - #1242 - Subquery returns more than 1 row
|
[
"",
"mysql",
"sql",
"oracle",
""
] |
I have three tables
```
orders.orderid (and other non-pertinent stuff)
payment.orderid
payment.transactiondate
payment.amount
projectedpayment.orderid
projectedpayment.projecteddate
projectedpayment.projectedamount
```
Essentially, `payment` represents when actual payments are received; `projectedpayment` represents when the system thinks they should be received. I need to build a query to compare projected vs actual.
I'd like to query them such that each row in the query has the orderid, payment.transactiondate, payment.amount, projectedpayment.projecteddate, projectedpayment.projectedamount, with the rows from payment and projectedpayment sorted by their respective dates. e.g.,
```
orderid transactiondate amount projecteddate projectedamount
1 2015-01-01 12.34 2015-01-03 12.34
1 2015-01-15 12.34 2015-01-15 12.44
1 null null 2015-02-01 12.34
2 2014-12-31 50.00 null null
```
So broken down by order, what are the actual and projected payments, where there may be more projected payments than actual, or more actual payments than projected, aligned by date (simply by sorting the two, nothing more complex than that).
It seems like I should be able to achieve this with a `left join` from `orders` to some kind of `union` of the other two tables sorted with an `order by`, but I haven't been able to make it work, so it may be something completely different. I know I cannot join all three of `order`, `payment`, and `projectedpayment` or I get the cross-product of the latter two tables.
I happen to be using `postgresql` 9.4, but hopefully we don't need to get too database-specific.
|
I don't know Postgres, sorry :( but if you know how to do partitioned row numbers something like this should work.
```
select
coalesce(a.orderid,b.orderid) as orderid
,transactiondate
,amount
,projecteddate
,projectedamount
FROM
(select
orderid
,transactiondate
,amount
,row_number() over (partition by orderid order by orderid,transactiondate) as rn
from payment) as a
full join
(select
orderid
,projecteddate
,projectedamount
,row_number() over (partition by orderid order by orderid,projecteddate) as rn
from projectedpayment) as b
on a.orderid= b.orderid
and a.rn = b.rn
```
\*this is sqlserver syntax (2K5+ AFAIK)
The logic here is that you need to assign a unique number to each predicted and actual payment so that you can join the two tables together but only have each row matching a single row from the other table.
If you have *ONLY ONE PAYMENT PER DAY* then you could do the full join on the order ID and date without worrying about row numbers.
the full join allows you to have nulls on either side so you will need to coalesce orderid
\*also this doesn't show orders with NO payments or predictions.. comment if this is an issue.
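If that one-payment-per-day assumption holds, a sketch of the simpler full join (dates assumed to align exactly) could be:
```
select coalesce(p.orderid, pp.orderid) as orderid,
       p.transactiondate, p.amount,
       pp.projecteddate, pp.projectedamount
from payment p
full join projectedpayment pp
  on pp.orderid = p.orderid
 and pp.projecteddate = p.transactiondate;
```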
|
This should work
```
Select * from Orders o
Left Join Payments p on o.ID = p.OrderID
Left Join ProjectedPayment pp on o.ID = pp.OrderID
Order By o.ID
```
|
Query two similar tables and combine sorted results
|
[
"",
"sql",
"postgresql",
"union",
""
] |
If I open a connection to a database, for example, with SSMS, and I run a query like this:
```
SELECT * FROM MySchema.MyTable
```
the query is executed without error.
If I run it as dynamic sql like this
```
declare @qry nvarchar(200);
set @qry = N'SELECT * FROM MySchema.MyTable'
exec master.sys.sp_executesql @qry
```
I get an error stating that the table doesn't exist.
If I put the database name prefix before `MySchema.MyTable` name, i.e. `MyDb.MySchema.MyTable` the query runs correctly.
How can I avoid the error without specifying the database name in the dynamic SQL?
|
Both answers by Recursive and koushik veldanda work fine, but this gives a deeper insight to the problem:
The problem is that executing a query with
```
exec master.sys.sp_executesql @qry
```
changes the context to the `master` database, so `MySchema.MyTable` is not accesible, because it doesn't belong to `master`, but to `MyDb`.
If the query is executed with
```
exec sys.sp_executesql @qry
```
the context is maintained in the current database, so the table is accesible, and the query runs without problem.
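The same mechanism can be used deliberately: prefixing the call with a database name runs the query in that database's context. Assuming the database is called MyDb:
```
declare @qry nvarchar(200);
set @qry = N'SELECT * FROM MySchema.MyTable';
exec MyDb.sys.sp_executesql @qry;  -- executes with MyDb as the current database
```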
|
try this!
change this
```
exec master.sys.sp_executesql @qry
```
to
```
exec sp_executesql @qry
```
or
```
exec @qry
```
|
Table doesn't exist error when executing dynamic sql with sp_executesql
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a problem with my database; let me try to simplify it.
After a lot of tries it seems that a function which reads from table 'X' cannot then be used in a constraint on that same table 'X'...
--after create table:
```
create table Items(
ID int,
Price money
);
```
--Insert some values
```
insert into Items Values(1,4);
insert into Items Values(2,5);
```
--create a function use above table
```
CREATE FUNCTION GetMaxPrice
(
-- Add the parameters for the function here
)
RETURNS Money
AS
BEGIN
-- Declare the return variable here
DECLARE @result money = 5;
select @result = max(Price)
from Items;
-- Return the result of the function
RETURN @result;
END
```
--then alter table to add constraint
--accept any price less or equal price on table
```
alter table Items add check(Price <= dbo.GetMaxPrice())
```
--after that, I try to insert items
```
insert into Items Values(3,4);
insert into Items Values(4,15); -- <-- here, I have problem. it inserted in Database..
--why inserted when max value is 5 ??
```
---
I use sql server 2012 Express, Win7
|
The problem you have is that the new value exists in the table (inside an implicit transaction) when the check constraint fires, so when you insert 15, the max(Price) *is* 15, so the constraint is satisfied, and the INSERT succeeds. I've had a thorough Google to try and find where this is documented but not found anything definitive.
An alternative approach to achieve the effect you are after would be to use an INSTEAD OF trigger, example below.
A word of advice though - this sort of validation strikes me as prone to going wrong somehow. I'd try and separate your limit values from the data - probably in another table.
Hope this helps,
Rhys
```
create table dbo.Items(
ID int,
Price money
);
insert into dbo.Items Values(1,4);
insert into dbo.Items Values(2,5);
go
create trigger trgItemsInsert on dbo.Items instead of insert as
begin
-- Lookup current max price
declare @MaxPrice money = (select max(Price) from dbo.Items)
if exists (select 1 from inserted where Price > @MaxPrice)
begin
-- If there is a price greater than the current max then reject the insert
declare @msg varchar(255) = 'Maximum allowed price is ' + cast(@MaxPrice as varchar(32)) + '.'
rollback
raiserror('%s', 16, 1, @msg)
end
else
begin
-- Otherwise perform the insert
insert into dbo.Items
select ID,Price from inserted
end
end
go
insert into dbo.Items Values(3,4);
insert into dbo.Items Values(4,15);
go
select * from dbo.Items
go
```
|
Try with this code. It is working fine for me.
```
CREATE TABLE item
(
ID INT,
Price MONEY
);
--Insert some values
INSERT INTO item
VALUES (1,
4);
INSERT INTO item
VALUES (2,
5);
--create a function use above table
CREATE FUNCTION GetMax (@price MONEY)
RETURNS MONEY
AS
BEGIN
-- Declare the return variable here
DECLARE @result MONEY = 5;
SELECT @result = max(Price)
FROM item;
IF @price < @result
RETURN 1
RETURN 0
-- Return the result of the function
END
--then alter table to add constraint --accept any price less or equal price on table
ALTER TABLE item
WITH NOCHECK ADD CONSTRAINT ck1 CHECK(dbo.GetMax(Price)=(1))
--ALTER TABLE item
-- DROP CONSTRAINT ck1
--after that, i try to insert item
INSERT INTO item
VALUES (3,
4);
INSERT INTO item
VALUES (4,
15);
```
|
function do not work in CHECK Constraint on sql server
|
[
"",
"sql",
"sql-server",
"database",
"function",
""
] |
I want to use a prefix for the city column too (i.e. l.city) in the main SELECT statement, just like in the inner sub-query, for a more understandable and better looking query so to speak, but I can't: it gives me ORA-00904: "L"."CITY": invalid identifier. Using city without a qualifier works. Why?
This is the code:
```
SELECT d.department_name, l.city
FROM departments d
NATURAL JOIN (SELECT l.location_id,l.city,l.country_id
FROM locations l
INNER JOIN countries c
ON (l.country_id = c.country_id)
INNER JOIN regions r
ON (c.region_id = r.region_id)
WHERE r.region_name = 'Europe');
```
|
Your `"L"` alias resides inside the view and is therefore not visible at the place where you want to use it.
Try this:
```
SELECT d.department_name, x.city
FROM departments d
NATURAL JOIN (SELECT l.location_id,l.city,l.country_id
                        FROM locations l -- "l" has no scope outside the brackets
INNER JOIN countries c
ON (l.country_id = c.country_id)
INNER JOIN regions r
ON (c.region_id = r.region_id)
WHERE r.region_name = 'Europe') x;
```
|
Just give your subquery an alias or remove the `l.` in `l.city`:
```
SELECT d.department_name, l.city
FROM departments d NATURAL JOIN
(SELECT l.location_id,l.city,l.country_id
FROM locations l INNER JOIN
countries c
ON (l.country_id = c.country_id) INNER JOIN
regions r
ON (c.region_id = r.region_id)
WHERE r.region_name = 'Europe'
) l
```
|
ORA-00904: invalid identifier when using natural join in the from clause combined with inner joins in subquery
|
[
"",
"sql",
"oracle11g",
""
] |
Using Postgres, I can perform an update statement and return the rows affected by the command.
```
UPDATE accounts
SET status = merge_accounts.status,
field1 = merge_accounts.field1,
field2 = merge_accounts.field2,
etc.
FROM merge_accounts WHERE merge_accounts.uid =accounts.uid
RETURNING accounts.*
```
This will give me a list of all records that matched the `WHERE` clause, however will not tell me which rows were actually updated by the operation.
In this simplified use-case it of course would be trivial to simply add another guard `AND status != 'Closed'`, however my real world use-case involves updating potentially dozens of fields from a merge table with 10,000+ rows, and I want to be able to detect which rows were actually changed, and which are identical to their previous version. (The expectation is very few rows will actually have changed).
The best I've got so far is
```
UPDATE accounts
SET x=..., y=...
FROM accounts AS old, merge_accounts
WHERE old.uid = accounts.uid AND merge_accounts.uid = accounts.uid
RETURNING accounts, old
```
Which will return a tuple of old and new rows that can then be diff'ed inside my Java codebase itself - however this requires significant additional network traffic and is potentially error prone.
The ideal scenario is to be able to have postgres return just the rows that actually had any values changed - is this possible?
[Here on github](https://gist.github.com/jimmydivvy/c425615646812872d300) is a more real world example of what I'm doing, incorporating some of the suggestions so far.
Using Postgres 9.1, but can use 9.4 if required. The requirements are effectively
* Be able to perform an upsert of new data
* Where we may only know the specific key/value pair to update on any given row
* Get back a result containing just the rows that were actually changed by the upsert
* Bonus - get a copy of the old records as well.
Since this question was opened I've gotten most of this working now, although I'm unsure if my approach is a good idea or not - it's a bit hacked together.
|
### Only update rows that actually change
That saves expensive updates ***and*** expensive checks after the `UPDATE`.
To update every column with the new value provided (if anything changes):
```
UPDATE accounts a
SET (status, field1, field2) -- short syntax for ..
= (m.status, m.field1, m.field2) -- .. updating multiple columns
FROM merge_accounts m
WHERE m.uid = a.uid
AND (a.status IS DISTINCT FROM m.status OR
a.field1 IS DISTINCT FROM m.field1 OR
a.field2 IS DISTINCT FROM m.field2)
RETURNING a.*;
```
Due to PostgreSQL's MVCC model *any* change to a row writes a new row version. Updating a single column is almost as expensive as updating every column in the row at once. Rewriting the rest of the row comes at practically no cost, as soon as you have to update *anything*.
Details:
* [How do I (or can I) SELECT DISTINCT on multiple columns?](https://stackoverflow.com/questions/54418/how-do-i-or-can-i-select-distinct-on-multiple-columns/12632129#12632129)
* [UPDATE a whole row in PL/pgSQL](https://stackoverflow.com/questions/12528981/update-a-whole-row-in-pl-pgsql/12529443#12529443)
### Shorthand for whole rows
If the row types of `accounts` and `merge_accounts` are *identical* and you want to adopt *everything* from `merge_accounts` into `accounts`, there is a shortcut comparing the whole row type:
```
UPDATE accounts a
SET (status, field1, field2)
= (m.status, m.field1, m.field2)
FROM merge_accounts m
WHERE a.uid = m.uid
AND m IS DISTINCT FROM a
RETURNING a.*;
```
This even works for NULL values. [Details in the manual.](https://www.postgresql.org/docs/current/functions-comparisons.html#ROW-WISE-COMPARISON)
But it's **not** going to work for your home-grown solution where (quoting your comment):
> `merge_accounts` is identical, save that all non-pk columns are array types
It requires compatible row types, i.e. each column shares the same data type or there is at least an implicit cast between the two types.
### For your special case
```
UPDATE accounts a
SET (status, field1, field2)
= (COALESCE(m.status[1], a.status) -- default to original ..
, COALESCE(m.field1[1], a.field1) -- .. if m.column[1] IS NULL
, COALESCE(m.field2[1], a.field2))
FROM merge_accounts m
WHERE m.uid = a.uid
AND (m.status[1] IS NOT NULL AND a.status IS DISTINCT FROM m.status[1]
OR m.field1[1] IS NOT NULL AND a.field1 IS DISTINCT FROM m.field1[1]
OR m.field2[1] IS NOT NULL AND a.field2 IS DISTINCT FROM m.field2[1])
RETURNING a.*
```
`m.status IS NOT NULL` works if columns that shouldn't be updated are NULL in `merge_accounts`.
`m.status <> '{}'` if you operate with *empty* arrays.
`m.status[1] IS NOT NULL` covers *both options*.
Related:
* [Return pre-UPDATE column values using SQL only](https://stackoverflow.com/questions/7923237/return-pre-update-column-values-using-sql-only-postgresql-version)
|
if you aren't relying on side-effectts of the update, only update the records that need to change
```
UPDATE accounts
SET status = merge_accounts.status,
field1 = merge_accounts.field1,
field2 = merge_accounts.field2,
etc.
FROM merge_accounts WHERE merge_accounts.uid =accounts.uid
AND NOT (status IS NOT DISTINCT FROM merge_accounts.status
AND field1 IS NOT DISTINCT FROM merge_accounts.field1
AND field2 IS NOT DISTINCT FROM merge_accounts.field2
)
RETURNING accounts.*
```
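The same "only touch rows that actually differ" idea can be sketched outside Postgres. Below is an illustrative SQLite/Python stand-in (table names and data are made up, not the real schema), where SQLite's NULL-safe `IS NOT` operator plays the role of `IS DISTINCT FROM`:

```python
import sqlite3

# Illustrative stand-in: only rows whose status actually differs get updated,
# and the NULL-safe IS NOT comparison handles NULL = NULL correctly.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (uid INTEGER PRIMARY KEY, status TEXT)")
con.execute("CREATE TABLE merge_accounts (uid INTEGER PRIMARY KEY, status TEXT)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, "old"), (2, "same"), (3, None)])
con.executemany("INSERT INTO merge_accounts VALUES (?, ?)", [(1, "new"), (2, "same"), (3, None)])

cur = con.execute("""
    UPDATE accounts
    SET status = (SELECT m.status FROM merge_accounts m WHERE m.uid = accounts.uid)
    WHERE EXISTS (SELECT 1 FROM merge_accounts m
                  WHERE m.uid = accounts.uid
                    AND m.status IS NOT accounts.status)
""")
print(cur.rowcount)  # 1 -- only uid 1 actually differed; uid 3 (NULL = NULL) is untouched
```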
|
Return rows of a table that actually changed in an UPDATE
|
[
"",
"sql",
"postgresql",
"sql-update",
""
] |
I am using `DATEDIFF( day, date1, date2)` and it works great.
Is it possible to have the plus sign if a number is positive and if it is negative, it has a minus.
|
You can CAST or CONVERT the output to a string, and then add the '+' sign if required (the minus will already be there if it is negative).
```
Case When DATEDIFF( day, date1, date2) > 0 Then '+' Else '' End + Cast (DATEDIFF( day, date1, date2) as VarChar(10))
```
Here is an example
```
;With MyTable as
(
Select GETDATE() as Date1, GetDate()+10 as Date2
)
Select
Case When DATEDIFF( day, date1, date2) > 0 Then '+' Else '' End + Cast (DATEDIFF( day, date1, date2) as VarChar(10))
From MyTable
;With MyTable as
(
Select GETDATE() as Date1, GetDate()-10 as Date2
)
Select
Case When DATEDIFF( day, date1, date2) > 0 Then '+' Else '' End + Cast (DATEDIFF( day, date1, date2) as VarChar(10))
From MyTable
```
|
You can use conditional formatting:
```
SELECT FORMAT(DATEDIFF( day, date1, date2),'+#;-#;0')
FROM MyTable
```
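For comparison, the same sign-formatting behavior as `FORMAT(..., '+#;-#;0')` can be sketched in a few lines of Python (a hypothetical helper, not part of the question):

```python
from datetime import date

def signed_day_diff(d1, d2):
    """Render (d2 - d1) in days with an explicit plus sign, like FORMAT(..., '+#;-#;0')."""
    days = (d2 - d1).days
    # f"{0:+d}" would print "+0", so zero is special-cased to match the SQL format string
    return f"{days:+d}" if days != 0 else "0"

print(signed_day_diff(date(2015, 1, 1), date(2015, 1, 11)))  # +10
print(signed_day_diff(date(2015, 1, 11), date(2015, 1, 1)))  # -10
```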
|
SQL DATEDIFF add plus sign if number is positive
|
[
"",
"sql",
"sql-server",
"t-sql",
"datediff",
""
] |
How could I retrieve only records that have a substring in one attribute that exists in the same attribute but on a different record for the same ID?..
to elaborate what I mean here's an example:
Table containing two attributes:
- ID
- version
Data is as follows:
ID version
1 'draft 1.0'
1 'final'
1 '1.0'
2 'draft 1.2'
2 'final'
In the example below we see that for ID '1' the substring of the first row has '1.0' in it. for ID '1' the '1.0' value is also present in the third record. where as for ID '2' the version '1.2' is only as a substring in the the forth record.. this ID doesn't have a record with the version number by itself..
My aim is to write an SQL in Oracle that would return only ID '1' since it has the version repeated separately in different row
any help on this would be very much appreciated :)
Rami
|
This should work:
```
with countOfVersion as
 (select id, count(id) occurence
   from yourTable
  where regexp_like(version, '1\.0')
  group by id)
select id from countOfVersion where occurence > 1;
```
If you want to check for version 1.2 then don't forget to change the 1.0 in the regex.
|
Given what you explained, the better advice is probably the one given by @mason in a comment: *"You should normalize your tables."*.
Assuming this is *not* an option at this time, *my* second option would be to add some virtual column containing the *version number*. At the very least this will allow you to easily index those values for efficient retrieving. I tend to think this could ease transition to a proper schema too:
```
ALTER TABLE T ADD "versnum"
GENERATED ALWAYS AS (REGEXP_SUBSTR("version",'\d+(\.\d+)+'))
;
CREATE INDEX t_versnum_idx ON T("versnum")
;
```
Now, you query is simply something like that:
```
select "ID", "versnum" from t group by "ID", "versnum" having count(*) > 1;
```
Producing:
```
id | versnum
----+---------
1 | 1.0
```
---
If really you *can't at all* alter the schema, using a cte is probably your best bet:
```
with cte as (
select T.*, REGEXP_SUBSTR("version",'\d+(\.\d+)+') "versnum"
from t
)
select "ID", "versnum" from cte group by "ID", "versnum" having count(*) > 1;
```
|
check if a substring exists in another row for the same ID
|
[
"",
"sql",
"oracle",
""
] |
I have the following statement which is still returning divide by zero error even when using the coalesce expression.
```
COALESCE(CAST(sum(case when W0.i13 = 1 then 1 else 0 end)AS FLOAT) / CAST(sum(case when W0.i13 <> 0 then 1 else 0 end) AS FLOAT),0)
```
> Msg 8134, Level 16, State 1, Line 1 Divide by zero error encountered.
I also tried the following and still get the error
```
COALESCE(CAST(sum(case when W0.i13 = 1 then 1 else 0 end)AS FLOAT),0) / COALESCE(CAST(sum(case when W0.i13 <> 0 then 1 else 0 end) AS FLOAT),0)
```
|
As CoderOfCode pointed out in the comment - you are using 0 as the second argument to COALESCE instead of 1. But your real problem is that the `SUM` in the divisor gives 0, and COALESCE will happily use that (because 0 != NULL).
You should be able to use a combination of `COALESCE / NULL` in the divisor part to force COALESCE to use its second argument and therefore avoid the division by 0:
```
CAST(
sum(
case
when W0.i13 = 1 then 1
else 0
end)
AS FLOAT) /
COALESCE(
CAST(
sum(
case
when W0.i13 <> 0 then 1
else NULL end)
AS FLOAT),
1)
```
[SQL Fiddle](http://sqlfiddle.com/#!3/97479/2)
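To see why the `ELSE NULL` branch matters, here is an illustrative SQLite reproduction (the table is a made-up stand-in for `W0`; SQLite's `SUM` and `COALESCE` treat NULL the same way SQL Server does):

```python
import sqlite3

# When no row matches the divisor's CASE, SUM over the ELSE NULL branch yields
# NULL, so COALESCE substitutes 1 and the division can never hit zero.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE w0 (i13 INTEGER)")
con.executemany("INSERT INTO w0 VALUES (?)", [(0,), (0,), (0,)])  # no non-zero rows

row = con.execute("""
    SELECT CAST(SUM(CASE WHEN i13 = 1 THEN 1 ELSE 0 END) AS REAL)
           / COALESCE(SUM(CASE WHEN i13 <> 0 THEN 1 ELSE NULL END), 1)
    FROM w0
""").fetchone()
print(row[0])  # 0.0 rather than a divide-by-zero error
```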
|
When the SUM evaluates to zero, the division is by zero.
Try this...
```
COALESCE(CAST(sum(case when W0.i13 = 1 then 1 else 0 end)AS FLOAT),1) / COALESCE(CAST((case when sum(case when W0.i13 <> 0 then 1 else 0 end) = 0 then 1 else sum(case when W0.i13 <> 0 then 1 else 0 end) end) AS FLOAT),1)
```
|
SQL Server Divide by zero error encountered COALESCE()
|
[
"",
"sql",
"sql-server-2008",
""
] |
I currently have an MS SQL query which calculates the length of time that each User has been logged in to a system during one day. The table I am extracting this information from records each log in/log out as a separate record. Currently, my MS SQL code is as follows:
```
SELECT
CAST(DateTime As Date),
UserID,
MIN(DateTime),
MAX(DateTime),
DATEDIFF(SS, MIN(DateTime), MAX(DateTime))
FROM
LoginLogoutData
WHERE
CAST(DateTime AS DATE) = '01/01/2015'
GROUP BY
CAST(DateTime As Date),
UserID
```
This works as required and creates a table similar to the below.
```
Date UserID FirstLogIn FinalLogOut LoggedInTime
......... ...... .......... ............ ............
01/01/2015 ABC 07:42:57 14:57:13 26056
01/01/2015 DEF 07:45:49 13:57:56 22326
```
This works fine for one day's-worth of data. However, if I wanted to calculate the length of time that someone was logged into the system for during a larger date range, e.g. a week or month, this would not work; it would calculate the length of time between the user's log in on the first day and their log out on the final day.
Basically, I would like my code to calculate `(Max(DateTime) - Min(DateTime))` **FOR EACH DAY** then sum all these values together into one simple table grouped only by `UserId`. I would then be able to set my date range as I please and receive the correct results.
So I would have a table as follows:
```
UserId LoggedInTime
........ .............
ABC 563287
DEF 485823
GEH 126789
```
I assume I need to use a `GROUP BY` within the `MIN()` function but I don't have much experience with this yet.
Does anyone have any experience with this? Any help would be greatly appreciated.
Thank you.
|
First you need to aggregate by date, and then by larger units of time. For instance, for the year to date:
```
SELECT UserId, SUM(diffsecs)
FROM (SELECT CAST(DateTime As Date) as thedate, UserID,
DATEDIFF(second, MIN(DateTime), MAX(DateTime)) as diffsecs
FROM LoginLogoutData
GROUP BY CAST(DateTime As Date), UserID
) ud
WHERE thedate between '2015-01-01' and getdate();
```
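The "aggregate per day first, then sum per user" shape is easy to verify on a toy dataset. An illustrative SQLite version (the sample rows are invented):

```python
import sqlite3

# Per-day (MAX - MIN) login spans, summed per user, mirroring the nested aggregation.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LoginLogoutData (UserID TEXT, dt TEXT)")
con.executemany("INSERT INTO LoginLogoutData VALUES (?, ?)", [
    ("ABC", "2015-01-01 08:00:00"), ("ABC", "2015-01-01 09:00:00"),  # 1 hour
    ("ABC", "2015-01-02 08:00:00"), ("ABC", "2015-01-02 10:00:00"),  # 2 hours
    ("DEF", "2015-01-01 08:00:00"), ("DEF", "2015-01-01 08:30:00"),  # 30 minutes
])
rows = con.execute("""
    SELECT UserID, SUM(diffsecs)
    FROM (SELECT date(dt) AS d, UserID,
                 CAST(strftime('%s', MAX(dt)) AS INTEGER)
                 - CAST(strftime('%s', MIN(dt)) AS INTEGER) AS diffsecs
          FROM LoginLogoutData
          GROUP BY date(dt), UserID) ud
    GROUP BY UserID
    ORDER BY UserID
""").fetchall()
print(rows)  # [('ABC', 10800), ('DEF', 1800)]
```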
|
Here is an example of how to do this with working sample data and the T-SQL included.
```
-- original table you described
CREATE TABLE LoginLogoutData (UserID int, DateTime DateTime)
GO
-- clean any previous sample records
TRUNCATE TABLE LoginLogOutData
/*local variables for iteration*/
DECLARE @i int = 1
DECLARE @n int
DECLARE @entryDate DateTime = GETDATE()
--populate the table with some sample data
/* for each of the five sample users, generate sample login and logout
data for 30 days. Each login and logout are simply an hour apart for demo purposes. */
SET NOCOUNT ON
-- iterate over 5 users (userid)
WHILE (@i <= 5)
BEGIN
--set the initial counter for the date loop
SET @n = 1
--dated entry loop
WHILE (@n <= 30)
BEGIN
-- increment to the next day
SET @entryDate = DateAdd(dd,@n,GETDATE())
--logged in entry
INSERT INTO LoginLogoutData (DateTime, UserID)
SELECT @entryDate,@i
-- logged out entry
INSERT INTO LoginLogoutData (DateTime, UserID)
SELECT DateAdd(hh,1,@entryDate),@i
--increment counter
SET @n = @n+1
END
--increment counter
SET @i=@i+1
END
GO
/* demonstrate that for each user each day has entries and that
the code calculates (Max(DateTime) - Min(DateTime)) FOR EACH DAY
*/
SELECT UserID,
MIN(DateTime) AS LoggedIn,
MAX(DateTime) AS LoggedOut,
DATEDIFF(SS, MIN(DateTime), MAX(DateTime)) AS LoginTime
FROM LoginLogoutData
GROUP BY CAST(DateTime As Date), UserID
/*this is a table variable used to support the "sum all these values together into one
simple table grouped only by UserId*/
DECLARE @SummedUserActivity AS TABLE (UserID int, DailyActivity int)
-- group the subtotals from each day per user
INSERT INTO @SummedUserActivity (UserID, DailyActivity)
SELECT UserID, DATEDIFF(SS, MIN(DateTime), MAX(DateTime))
FROM LoginLogoutData
GROUP BY CAST(DateTime As Date), UserID
-- demonstrate the sum of the subtotals grouped by userid
SELECT UserID, SUM(DailyActivity) AS TotalActivity
FROM @SummedUserActivity
GROUP BY UserID
```
|
MS SQL GROUPED SUM
|
[
"",
"sql",
"sql-server",
"sum",
""
] |
I want to show results according to distance: if there are results with a distance under 300, return only those. If there are none under 300, only then apply the second condition, which is under 400.
But I am getting all results, e.g. 118, 149, 357.
My inner select statement returns the distance through a function:
```
SELECT A.Id, A.Distance FROM
(
SELECT Id, ROUND((dbo.GetDistance(@Latitude, @Longitude, Latitude, Longitude)*1000),0) AS Distance
FROM Messages WITH (NOLOCK) where LEN(Latitude) > 0 AND LEN(Longitude) > 0
) A
WHERE
(A.Distance <= 300
OR
A.Distance <= 400)
ORDER BY Distance ASC
```
|
Try this.
```
;WITH cte
AS (SELECT Id,
Round(( dbo.Getdistance(@Latitude, @Longitude, Latitude, Longitude) * 1000 ), 0) AS Distance
FROM Messages WITH (NOLOCK)
WHERE Len(Latitude) > 0
AND Len(Longitude) > 0)
SELECT A.Id,
A.Distance
FROM cte a
WHERE A.Distance <= 300
OR ( A.Distance <= 400
AND NOT EXISTS (SELECT 1
FROM cte B
WHERE B.Distance <= 300) )
ORDER BY Distance ASC
```
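Stripped of SQL, the intended fallback logic is just "use the 300 bucket if it is non-empty, otherwise the 400 bucket". A tiny illustrative Python sketch of that rule:

```python
# Prefer distances within the primary threshold; only fall back to the wider
# threshold when the primary bucket is empty.
def within_fallback(distances, primary=300, fallback=400):
    near = sorted(d for d in distances if d <= primary)
    return near if near else sorted(d for d in distances if d <= fallback)

print(within_fallback([118, 149, 357]))  # [118, 149]
print(within_fallback([357, 380, 900]))  # [357, 380]
```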
|
A `WHERE` clause is evaluated for each row. Inside your filter for one specific row, you cannot use the existence of other rows in the same result set as a criterion in your filter.
You have to re-think your approach. One way I can think of is to rank the rows by distance, and select the top 1 rank:
```
WITH cte1 AS (
SELECT Id, ROUND((dbo.GetDistance(@Latitude, @Longitude, Latitude, Longitude)*1000),0) AS Distance
FROM Messages WITH (NOLOCK)
WHERE LEN(Latitude) > 0 AND LEN(Longitude) > 0
), cte2 AS (
SELECT Id, Distance, RANK() OVER (ORDER BY CASE WHEN Distance <= 300 THEN 0 ELSE 1 END) AS [Rank]
FROM cte1
WHERE Distance <= 400
)
SELECT Id, Distance
FROM cte2
WHERE [Rank] = 1
ORDER BY Distance ASC
```
Disclaimer: untested, may have trivial errors, but the general approach should work.
|
Conditional statement in where clause not working
|
[
"",
"sql",
""
] |
I would like to delete records from a table if the corresponding record is not present in another table.
I.e. table1 has one-to-many relationship with table2. I need to delete orphaned records from table2 where table2.id is not present in table1.
I have tried this in Access:
```
DELETE *
FROM t2
RIGHT JOIN t2
ON t1.id = t2.id
WHERE t1.id is NULL
```
but I get "Syntax error in JOIN operation". I cannot see what is wrong.
|
Remove the `*` after `DELETE`.
I would have solved it like this:
```
DELETE FROM t2
WHERE id not in (
SELECT id from t1);
```
Not sure if deleting with a join will work. It would need to be a `LEFT JOIN` though, as you want to delete all the rows in the first part of the join that are not joined with anything. Also, you are joining t2 with itself; guessing it's just a typo.
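A quick illustrative check of the `NOT IN` approach with SQLite (made-up tables; it assumes `t1.id` is never NULL, since `NOT IN` matches no rows if the subquery produces a NULL):

```python
import sqlite3

# t1 holds parents, t2 holds children; the row in t2 without a parent is deleted.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE t2 (id INTEGER, payload TEXT)")
con.executemany("INSERT INTO t1 VALUES (?)", [(1,), (2,)])
con.executemany("INSERT INTO t2 VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "orphan")])

con.execute("DELETE FROM t2 WHERE id NOT IN (SELECT id FROM t1)")
remaining = con.execute("SELECT id FROM t2 ORDER BY id").fetchall()
print(remaining)  # [(1,), (2,)] -- the orphan with id 3 is gone
```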
|
This will help:
```
DELETE
from t2
FROM t1
RIGHT JOIN t2
ON t1.id = t2.id
WHERE t2.id is NULL
```
|
How to delete 'orphaned' records from second table
|
[
"",
"sql",
"ms-access",
""
] |
I need to convert the following values from a variable:
```
1234,1234,12345,12346,1344,4564
```
to this:
```
'1234','1234','12345','12346','1344','4564'
```
using an SQL function.
I have tried:
```
DECLARE @VAL AS VARCHAR(MAX)
SELECT @VAL = '1234,1234,12345,12346,1344,4564'
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' ,'') + @VAL
SELECT @listStr AS 'List'
GO
```
But I get:
```
1234,1234,12345,12346,1344,4564
```
|
Try this:
```
SELECT '''' + REPLACE('1234,1234,12345,12346,1344,4564', ',', ''',''') + '''';
```
Check this **[SQL FIDDLE DEMO](http://www.sqlfiddle.com/#!3/d41d8/42450)**
**OUTPUT**
```
| COLUMN_0 |
|---------------------------------------------|
| '1234','1234','12345','12346','1344','4564' |
```
**EDIT**
Use User defined function:
```
CREATE FUNCTION dbo.ufnReplace(@Val VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
SELECT @Val = '''' + REPLACE(@Val, ',', ''',''') + '''';
RETURN @Val;
END;
SELECT dbo.ufnReplace('1234,1234,12345,12346,1344,4564');
```
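The REPLACE trick is the same in any language; for comparison, an equivalent illustrative Python helper:

```python
def quote_csv(vals: str) -> str:
    """Wrap every comma-separated token in single quotes, mirroring the SQL REPLACE trick."""
    return "'" + vals.replace(",", "','") + "'"

print(quote_csv("1234,1234,12345,12346,1344,4564"))
# '1234','1234','12345','12346','1344','4564'
```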
|
```
DECLARE @VAL AS VARCHAR(MAX)
SELECT @VAL = '1234,1234,12345,12346,1344,4564'
SELECT '''' + REPLACE(@VAL,',',''',''') + ''''
```
|
Comma delimited list to string sql
|
[
"",
"sql",
"sql-server",
"t-sql",
"replace",
"string-concatenation",
""
] |
In my table, I have a datetime column and a tag column. I would like to select the latest value before a timespan, if there are no values in the timespan. If there are values in the timespan, I would not like to return any values.
My query has the following input parameters:
```
@StartDate datetime
@EndDate datetime
```
The most important query results:
1. The query should not return any results if there is data between @StartDate and @EndDate
2. The query should return the latest value for each Tag before the @StartDate IF, and only if, there are no results between @StartDate and @EndDate
My issue, I have created two queries:
1. Returns the latest result before the timespan.
2. Returns all results IN the timespan.
The idea is to `SELECT [Values before the timespan] WHERE NOT EXISTS IN [Values in the timespan]`.
I have tried to join these queries to get the end result, but this is where I struggle.
---
STEPS TO REPRODUCE (SETUP):
```
CREATE TABLE dbo.MyTable(id int IDENTITY(1,1) NOT NULL, Tag nvarchar(200) NOT NULL, StartTime datetime NOT NULL)
DECLARE @day int, @month int, @year int
SELECT @day = 15, @month = 1, @year = 2015
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MyTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MySuperTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
SELECT @day = 16, @month = 1, @year = 2015
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MyTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MySuperTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
SELECT @day = 18, @month = 1, @year = 2015
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MyTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MySuperTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
SELECT @day = 19, @month = 1, @year = 2015
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MyTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MySuperTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
SELECT @day = 26, @month = 1, @year = 2015
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MyTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
INSERT INTO dbo.MyTable(Tag, StartTime) VALUES('MySuperTag',dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1))
```
---
STEPS TO REPRODUCE (QUERY):
This should not return any values, since there are values in the timespan.
```
DECLARE @day int, @month int, @year int
DECLARE @StartTime datetime
DECLARE @EndTime datetime
SELECT @day = 17, @month = 1, @year = 2015
SET @StartTime = dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1)
SET @EndTime = dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1 + 3)
SELECT *
FROM (
    SELECT id, Tag, StartTime
    FROM dbo.MyTable
    WHERE StartTime < @StartTime
      AND Tag NOT IN (SELECT Tag
                      FROM dbo.MyTable
                      WHERE (StartTime > @StartTime AND StartTime < @EndTime))
) AS d
WHERE EXISTS (
    SELECT Tag, StartTime, ROW_NUMBER
    FROM (
        SELECT Tag, StartTime,
               ROW_NUMBER() OVER (PARTITION BY Tag ORDER BY StartTime DESC) AS ROW_NUMBER
        FROM dbo.MyTable
        WHERE StartTime < @StartTime
    ) AS b
    WHERE ROW_NUMBER = '1'
)
```
---
STEPS TO REPRODUCE (QUERY2):
This should produce the latest values before the timespan, since there are no values in the timespan.
```
SELECT @day = 21, @month = 1, @year = 2015
SET @StartTime = dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1)
SET @EndTime = dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1 + 3)
SELECT *
FROM (
    SELECT id, Tag, StartTime
    FROM dbo.MyTable
    WHERE StartTime < @StartTime
      AND Tag NOT IN (SELECT Tag
                      FROM dbo.MyTable
                      WHERE (StartTime > @StartTime AND StartTime < @EndTime))
) AS d
WHERE EXISTS (
    SELECT Tag, StartTime, ROW_NUMBER
    FROM (
        SELECT Tag, StartTime,
               ROW_NUMBER() OVER (PARTITION BY Tag ORDER BY StartTime DESC) AS ROW_NUMBER
        FROM dbo.MyTable
        WHERE StartTime < @StartTime
    ) AS b
    WHERE ROW_NUMBER = '1'
)
```
EDIT: Added "latest value for each Tag" in section about expected results.
|
Here is my revised answer for the modified question:
```
SELECT [A].* FROM [dbo].[MyTable] AS [A]
INNER JOIN (
SELECT [Tag], MAX([StartTime]) AS [StartTime]
FROM [dbo].[MyTable]
WHERE [StartTime] < @StartTime
GROUP BY [Tag]
) AS B ON ([A].[Tag] = [B].[Tag] AND [A].[StartTime] = [B].[StartTime])
WHERE
[A].[StartTime] < @StartTime AND
0 = (
SELECT COUNT(*)
FROM [dbo].[MyTable]
WHERE [StartTime] BETWEEN @StartTime AND @EndTime
)
;
```
The joined subquery works out the latest date for each tag before the `@StartTime` and joins back to it's self so that the full row can be returned (with the `id`).
|
```
SELECT *
FROM MyTable t
WHERE StartTime < @start
AND id =
(SELECT TOP 1 mt.id FROM MyTable mt WHERE mt.Tag = t.Tag ORDER BY StartTime DESC)
AND NOT EXISTS
(SELECT 1 FROM MyTable WHERE StartTime >= @start AND StartTime <= @end)
ORDER BY StartTime DESC;
```
|
SQL Query - Select last date before timespan where no dates exists in timespan
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table (Postgres 9.3) defined as follows:
```
CREATE TABLE tsrs (
id SERIAL PRIMARY KEY,
customer_id INTEGER NOT NULL REFERENCES customers,
timestamp TIMESTAMP WITHOUT TIME ZONE,
licensekeys_checksum VARCHAR(32));
```
The pertinent details here are the `customer_id`, the `timestamp`, and the `licensekeys_checksum`. There can be multiple entries with the same `customer_id`, some of those may have matching `licensekey_checksum` entries, and some may be different. There will never be rows with equal checksum and equal timestamps.
I want to return a table containing 1 row for each group of rows with matching `licensekeys_checksum` entries. The row returned for each group should be the one with the newest / most recent timestamp.
Sample Input:
```
1, 2, 2014-08-21 16:03:35, 3FF2561A
2, 2, 2014-08-22 10:00:41, 3FF2561A
2, 2, 2014-06-10 10:00:41, 081AB3CA
3, 5, 2014-02-01 12:03:23, 299AFF90
4, 5, 2013-12-13 08:14:26, 299AFF90
5, 6, 2013-09-09 18:21:53, 49FFA891
```
Desired Output:
```
2, 2, 2014-08-22 10:00:41, 3FF2561A
2, 2, 2014-06-10 10:00:41, 081AB3CA
3, 5, 2014-02-01 12:03:23, 299AFF90
5, 6, 2013-09-09 18:21:53, 49FFA891
```
I have managed to piece together a query based on the comments below, and hours of searching on the internet. :)
```
select * from tsrs
inner join (
select licensekeys_checksum, max(timestamp) as mts
from tsrs
group by licensekeys_checksum
) x on x.licensekeys_checksum = tsrs.licensekeys_checksum
and x.mts = tsrs.timestamp;
```
It seems to work, but I am unsure. Am I on the right track?
|
Your query in the question should perform better than the queries in the (previously) accepted answer. Test with `EXPLAIN ANALYZE`.
`DISTINCT ON` is typically simpler and faster:
```
SELECT DISTINCT ON (licensekeys_checksum) *
FROM tsrs
ORDER BY licensekeys_checksum, timestamp DESC NULLS LAST;
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=376cdaf9d039b6f45ae2afbe7aa7602c)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/2f1fc/1)
Detailed explanation:
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
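The join-on-MAX approach from the question can also be sanity-checked quickly with SQLite from Python (illustrative table with unique ids standing in for the sample data; not the real schema):

```python
import sqlite3

# Greatest-n-per-group via a join on (checksum, MAX(timestamp)); SQLite has no
# DISTINCT ON, so this mirrors the question's own query instead.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tsrs (id INTEGER, customer_id INTEGER, ts TEXT, checksum TEXT)")
con.executemany("INSERT INTO tsrs VALUES (?, ?, ?, ?)", [
    (1, 2, '2014-08-21 16:03:35', '3FF2561A'),
    (2, 2, '2014-08-22 10:00:41', '3FF2561A'),
    (3, 2, '2014-06-10 10:00:41', '081AB3CA'),
    (4, 5, '2014-02-01 12:03:23', '299AFF90'),
    (5, 5, '2013-12-13 08:14:26', '299AFF90'),
    (6, 6, '2013-09-09 18:21:53', '49FFA891'),
])
latest = con.execute("""
    SELECT t.id, t.customer_id, t.ts, t.checksum
    FROM tsrs t
    JOIN (SELECT checksum, MAX(ts) AS mts FROM tsrs GROUP BY checksum) x
      ON x.checksum = t.checksum AND x.mts = t.ts
    ORDER BY t.id
""").fetchall()
print([r[0] for r in latest])  # [2, 3, 4, 6] -- one newest row per checksum
```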
|
Try this
```
select *
from tsrs
where (timestamp,licensekeys_checksum) in (
select max(timestamp)
,licensekeys_checksum
from tsrs
group by licensekeys_checksum)
```
[**>SqlFiddle Demo**](http://sqlfiddle.com/#!15/b73ce/1/0)
or
```
with cte as (
select id
,customer_id
,timestamp
,licensekeys_checksum
,row_number () over (partition by licensekeys_checksum ORDER BY timestamp DESC) as rk
from tsrs)
select id
,customer_id
,timestamp
,licensekeys_checksum
from cte where rk=1 order by id
```
[**>SqlFiddle Demo**](http://sqlfiddle.com/#!15/b73ce/2/0)
---
Reference : [**Window Functions**](http://www.postgresql.org/docs/9.4/static/tutorial-window.html), [**row\_number()**](http://www.postgresql.org/docs/9.4/static/functions-window.html), and [**CTE**](http://www.postgresql.org/docs/9.4/static/queries-with.html)
|
Returning the row with the most recent timestamp from each group
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
I am trying to find the two-tailed inverse of Student's t-distribution. It can be found using the TINV function in Excel, but I need to achieve this in SQL Server. Please suggest an idea.
The TINV function syntax has the following arguments:
**Probability** : The probability associated with the two-tailed Student's t-distribution.
**Deg\_freedom** : The number of degrees of freedom with which to characterize the distribution.
Ex:
```
select tinv( 0.054645, 60);
-- -----------------
-- 1.9599994129833
-- (1 row returned)
```
|
SQL-Server does not incorporate a lot of statistical functions.
`tinv` is not present in SQL-Server.
The only way to add a tinv function, is to use a CLR-Function.
Thus, the problem reduces itself to: "How do I calculate tinv with the subset of C# allowed in SQL Server?".
If you're on .NET 4.0, you could use the chart-class in `System.Web.DataVisualization.dll`
e.g.
```
var someChart = new System.Web.UI.DataVisualization.Charting.Chart();
double res = someChart.DataManipulator.Statistics.InverseTDistribution(.05, 15);
//2.131449546
```
However, you probably don't want that overhead.
So you'll have to rip it out of [Math.NET](http://numerics.mathdotnet.com/api/MathNet.Numerics.Distributions/StudentT.htm)'s SourceCode (MIT/X11 License).
```
StudentT dist = new StudentT(0.0,1.0,7.0);
double prob = dist.CumulativeDistribution(1.8946);
```
Which should result in 0.95
Since you need the inverse, you'll need
```
StudentT.InvCDF(double location, double scale, double freedom, double p)
```
location: The location (μ) of the distribution.
scale: The scale (σ) of the distribution. Range: σ > 0.
freedom: The degrees of freedom (ν) for the distribution. Range: ν > 0.
p: The location at which to compute the inverse cumulative density.
```
[Microsoft.SqlServer.Server.SqlFunction]
public static System.Data.SqlTypes.SqlDouble TInv(double probability, int degFreedom)
{
double result = 0.00;
try
{
result = fnInverseTDistribution(degFreedom, probability);
}
catch
{
// throw; // Optionally throw/log/ignore/whatever
}
return result;
}
```
using DataVisualization, this goes like this:
```
[Microsoft.SqlServer.Server.SqlFunction]
public static System.Data.SqlTypes.SqlDouble TInv(double probability, int degFreedom)
{
double result = 0.00;
try
{
var someChart = new System.Web.UI.DataVisualization.Charting.Chart();
result = someChart.DataManipulator.Statistics.InverseTDistribution( probability, degFreedom);
}
catch
{
// throw; // Optionally throw/log/ignore/whatever
}
return result;
}
```
The DataVisualization trick however won't work on SQL-Server, because you'd need to add a reference to System.Web, which you can't do in SQL-Server.
Also, excel has a lot of similar functions, tinv, t.inv, T.INV.2S, etc., so be sure to choose the right one.
# Edit:
Found some more information:
<http://numerics.mathdotnet.com/api/MathNet.Numerics/ExcelFunctions.htm#TIn>
There is a special class called ExcelFunctions in Math.NET that you can actually use:
```
MathNet.Numerics.ExcelFunctions.TInv (1.1, 55);
```
You find some more information on [TINV](https://wiki.openoffice.org/wiki/Documentation/How_Tos/Calc:_TINV_function) and [TDIST](https://wiki.openoffice.org/wiki/Documentation/How_Tos/Calc:_TDIST_function) on OpenOffice.org along with a list of functions by [by category](https://wiki.openoffice.org/wiki/Documentation/How_Tos/Calc:_Functions_listed_by_category)
So the solution to your problem is
```
[Microsoft.SqlServer.Server.SqlFunction]
public static System.Data.SqlTypes.SqlDouble TInv(double probability, int degFreedom)
{
double result = 0.00;
try
{
result = MathNet.Numerics.ExcelFunctions.TInv (probability, degFreedom);
}
catch
{
// throw; // Optionally throw/log/ignore/whatever
}
return result;
}
```
which is actually the same as
```
[Microsoft.SqlServer.Server.SqlFunction]
public static System.Data.SqlTypes.SqlDouble TInv(double probability, int degFreedom)
{
double result = 0.00;
try
{
result = -StudentT.InvCDF(0d, 1d, degFreedom, probability/2);
}
catch
{
// throw; // Optionally throw/log/ignore/whatever
}
return result;
}
```
So now you grab the sourcecode of Math.Net from
<https://github.com/mathnet/mathnet-numerics>
and drag and drop the contents of mathnet-numerics/src/Numerics/ (or the part thereof that you need) into your project with the CRL-Function, and finished.
When you have your CLR dll, you go into SSMS and execute:
```
EXEC dbo.sp_configure 'clr enabled', 1
RECONFIGURE
GO
CREATE ASSEMBLY SQLServerStatistics from 'C:\SQLServerStatistics.dll' WITH PERMISSION_SET = SAFE
```
After that has succeeded, you still have to register the function with SQL-Server.
```
CREATE FUNCTION [dbo].[tinv](@prob float, @degFreedom int)
RETURNS float WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [SQLServerStatistics].[Functions].[TInv]
```
See [this](http://www.codeproject.com/Articles/37377/SQL-Server-CLR-Functions) article for further information.
If you want to bring the Dll onto a productive server, you'll need to create the assembly from a byte-array-string, like this:
```
CREATE ASSEMBLY [MyFunctions]
AUTHORIZATION [dbo]
FROM 0x4D5A90000[very long string here...];
```
You create the hex string from the byte array like this:
```
byte[] bytes = System.IO.File.ReadAllBytes(@"C:\SQLServerStatistics.dll");
"0x" + BitConverter.ToString(bytes).Replace("-", "")
```
I have uploaded the entire solution [here](https://github.com/ststeiger/SqlServerStatistics) on github.
Then you can run the function like this:
```
SELECT dbo.tinv(0.54645, 60)
```
==> 0.606531559343638
The Script-Generator tool automatically builds the install script for you.
Looks like this:
```
```
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO


DECLARE @sql nvarchar(MAX)
SET @sql = 'ALTER DATABASE ' + QUOTENAME(DB_NAME()) + ' SET TRUSTWORTHY ON;'
-- PRINT @sql;
EXECUTE(@sql);
GO


-- Restore sid when db restored from backup...
DECLARE @Command NVARCHAR(MAX) = N'ALTER AUTHORIZATION ON DATABASE::<<DatabaseName>> TO [<<LoginName>>]'
SELECT @Command = REPLACE(
           REPLACE(@Command, N'<<DatabaseName>>', SD.Name)
         , N'<<LoginName>>', SL.Name)
FROM master..sysdatabases AS SD
JOIN master..syslogins AS SL ON SD.SID = SL.SID
WHERE SD.Name = DB_NAME()
-- PRINT @command
EXECUTE(@command)
GO

IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[TDist]')
             AND type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
    DROP FUNCTION [dbo].[TDist]
GO

IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[TInv]')
             AND type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
    DROP FUNCTION [dbo].[TInv]
GO

IF EXISTS (SELECT * FROM sys.assemblies asms
           WHERE asms.name = N'SQLServerStatistics' and is_user_defined = 1)
    DROP ASSEMBLY [SQLServerStatistics]
GO

CREATE ASSEMBLY SQLServerStatistics
AUTHORIZATION [dbo]
FROM 'c:\users\administrator\documents\visual studio 2013\Projects\SqlServerStatistics\ClrCreationScriptGenerator\bin\Debug\SqlServerStatistics.dll'
WITH PERMISSION_SET = UNSAFE
GO

CREATE FUNCTION [dbo].[TDist](@x AS float, @degFreedom AS int, @tails AS int)
RETURNS float WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SQLServerStatistics].[SqlServerStatistics.ExcelFunctions].[TDist]
GO

CREATE FUNCTION [dbo].[TInv](@probability AS float, @degFreedom AS int)
RETURNS float WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SQLServerStatistics].[SqlServerStatistics.ExcelFunctions].[TInv]
GO
```
|
You could either write your own implementation as a SQL function or use a [CLR](http://msdn.microsoft.com/en-us/library/ms254498%28v=vs.110%29.aspx) and write it in C#.
My advice would be to use a CLR and include the [Accord](http://accord-framework.net) library (I recommend this one because they still have .NET 3.5 versions which you will need for a SQL Server CLR) to implement the statistic function. I've done this for other statistic calculations in the past and it worked like a charme.
|
SQL Server equivalent of Excel's TINV function
|
[
"",
"sql",
"sql-server",
"distribution",
"probability",
""
] |
I have a table of employees which contains about 25 columns. Right now there are a lot of duplicates and I would like to try and get rid of some of these duplicates.
First, I want to find the duplicates by looking for multiple records that have the same values in first name, last name, employee number, company number and status.
```
SELECT
firstname,lastname,employeenumber, companynumber, statusflag
FROM
employeemaster
GROUP BY
firstname,lastname,employeenumber,companynumber, statusflag
HAVING
(COUNT(*) > 1)
```
This gives me duplicates but my goal is to find and keep the best single record and delete the other records. The "best single record" is defined by the record with the least amount of NULL values in all of the other columns. How can I do this?
I am using Microsoft SQL Server 2012 MGMT Studio.
EXAMPLE:

Red: DELETE
Green: KEEP
NOTE: There are a lot more columns in the table than what this table shows.
|
You can use the sys.columns table to get a list of columns and build a dynamic query. This query will return a 'KeepThese' value for every record you want to keep based on your given criteria.
```
-- insert test data
create table EmployeeMaster
(
Record int identity(1,1),
FirstName varchar(50),
LastName varchar(50),
EmployeeNumber int,
CompanyNumber int,
StatusFlag int,
UserName varchar(50),
Branch varchar(50)
);
insert into EmployeeMaster
(
FirstName,
LastName,
EmployeeNumber,
CompanyNumber,
StatusFlag,
UserName,
Branch
)
values
('Jake','Jones',1234,1,1,'JJONES','PHX'),
('Jake','Jones',1234,1,1,NULL,'PHX'),
('Jake','Jones',1234,1,1,NULL,NULL),
('Jane','Jones',5678,1,1,'JJONES2',NULL);
-- get records with most non-null values with dynamic sys.column query
declare @sql varchar(max)
select @sql = '
select e.*,
row_number() over(partition by
e.FirstName,
e.LastName,
e.EmployeeNumber,
e.CompanyNumber,
e.StatusFlag
order by n.NonNullCnt desc) as KeepThese
from EmployeeMaster e
cross apply (select count(n.value) as NonNullCnt from (select ' +
replace((
select 'cast(' + c.name + ' as varchar(50)) as value union all select '
from sys.columns c
where c.object_id = t.object_id
for xml path('')
) + '#',' union all select #','') + ')n)n'
from sys.tables t
where t.name = 'EmployeeMaster'
exec(@sql)
```
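The chosen answer relies on SQL Server's `sys.columns` and dynamic SQL, but the ranking idea itself can be sanity-checked in SQLite (3.25+ for window functions); in this sketch the hard-coded NULL count over `UserName` and `Branch` stands in for the dynamically built column list:

```python
# SQLite sketch of "keep the row with the fewest NULLs per duplicate group".
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE EmployeeMaster (
    Record INTEGER PRIMARY KEY,
    FirstName TEXT, LastName TEXT, EmployeeNumber INT,
    CompanyNumber INT, StatusFlag INT, UserName TEXT, Branch TEXT
);
INSERT INTO EmployeeMaster
    (FirstName, LastName, EmployeeNumber, CompanyNumber, StatusFlag, UserName, Branch)
VALUES
    ('Jake','Jones',1234,1,1,'JJONES','PHX'),
    ('Jake','Jones',1234,1,1,NULL,'PHX'),
    ('Jake','Jones',1234,1,1,NULL,NULL),
    ('Jane','Jones',5678,1,1,'JJONES2',NULL);
""")

# Rank each duplicate group; the row with the fewest NULLs gets rn = 1.
keep = conn.execute("""
    SELECT Record FROM (
        SELECT Record,
               ROW_NUMBER() OVER (
                   PARTITION BY FirstName, LastName, EmployeeNumber,
                                CompanyNumber, StatusFlag
                   ORDER BY (UserName IS NULL) + (Branch IS NULL)
               ) AS rn
        FROM EmployeeMaster
    ) WHERE rn = 1
    ORDER BY Record
""").fetchall()
print(keep)
```

Deleting everything with `rn > 1` then removes the worse duplicates.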
|
Try this.
```
;WITH cte
AS (SELECT Row_number()
OVER(
partition BY firstname, lastname, employeenumber, companynumber, statusflag
ORDER BY (SELECT NULL)) rn,
firstname,
lastname,
employeenumber,
companynumber,
statusflag,
username,
branch
FROM employeemaster),
cte1
AS (SELECT a.firstname,
a.lastname,
a.employeenumber,
a.companynumber,
a.statusflag,
Row_number()
OVER(
partition BY a.firstname, a.lastname, a.employeenumber, a.companynumber, a.statusflag
ORDER BY (CASE WHEN a.username IS NULL THEN 1 ELSE 0 END +CASE WHEN a.branch IS NULL THEN 1 ELSE 0 END) )rn
-- add the remaining columns in case statement
FROM cte a
JOIN employeemaster b
ON a.firstname = b.firstname
AND a.lastname = b.lastname
AND a.employeenumber = b.employeenumber
AND a.companynumber = b.companynumber
AND a.statusflag = b.statusflag)
SELECT *
FROM cte1
WHERE rn = 1
```
|
Remove duplicates with less null values
|
[
"",
"sql",
"sql-server",
""
] |
I would like to create a query that finds the details only for members with the earliest and latest registration_date. So far I have
```
SELECT member_id, UPPER(name), registration_date
from db.member WHERE registration_date >=
'2012-01-01'
```
I am unsure how to find just the min/max; I am only testing it with the fixed date 2012-01-01.
So the query should bring back only two results: the min and the max.
Thank you in advance
|
You can get the `min`/`max` dates in a subquery and then get the results.
```
select member_id, upper(name), registration_date
from db.member
cross join (
select min(registration_date) as minDate, max(registration_date) as maxDate
from db.member ) t
where registration_date in ( t.minDate, t.maxDate)
```
or you can do it with `in` and `union`
```
select member_id, upper(name), registration_date
from db.member
where registration_date in (
select min(registration_date)
from db.member
union
select max(registration_date) as maxDate
from db.member )
```
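As a quick sanity check, here is the `IN` + `UNION` variant run in SQLite against an invented three-row member table (column names follow the question):

```python
# Keep only rows whose registration_date equals the overall min or max.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE member (member_id INT, name TEXT, registration_date TEXT);
INSERT INTO member VALUES
    (1, 'ann',  '2012-03-01'),
    (2, 'bob',  '2013-07-15'),
    (3, 'carl', '2015-11-30');
""")

rows = conn.execute("""
    SELECT member_id, UPPER(name), registration_date
    FROM member
    WHERE registration_date IN (
        SELECT MIN(registration_date) FROM member
        UNION
        SELECT MAX(registration_date) FROM member)
    ORDER BY registration_date
""").fetchall()
print(rows)
```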
|
```
select min(dateColumn), max(dateColumn) from tbl;
```
|
Finding min and max dates SQL
|
[
"",
"sql",
"max",
"min",
""
] |
1. Working on SQL Server (2005 and 2008)
2. The variable with the value '1,2,3' is called @cedis, and it can hold any number of values, for example
```
set @cedis='1' or set @cedis='1,2,3,4,5,6,7' or set @cedis='125,98,91'
```
3. Importantly, this must be a single SELECT only; a loop cannot be used!
4. It must return a result table with the values, for example
```
set @cedis='1,2,3,4' -- this must return the result:
-- number: 1 2 3 4
declare @cedis varchar(max)
set @cedis='1,58,123,8'
;with datos as
(
my select which is going to return the table
)
select * from datos
```
result set is
```
number
1
58
123
8
```
|
If I am not wrong, this is what you need:
```
DECLARE @cedis VARCHAR(500)='1,2,3,4'
SELECT Split.a.value('.', 'VARCHAR(100)') Numbers
FROM (SELECT Cast ('<M>' + Replace(@cedis, ',', '</M><M>') + '</M>' AS XML) AS Numbers) AS A
CROSS APPLY Numbers.nodes ('/M') AS Split(a)
```
**Result:**
```
Numbers
-------
1
2
3
4
```
|
A table valued function would do it.
```
CREATE FUNCTION [dbo].[fn_Split](@text VARCHAR(MAX), @delimiter VARCHAR(5) = ',')
RETURNS @Strings TABLE
(
position int IDENTITY PRIMARY KEY,
value VARCHAR(8000)
)
AS
BEGIN
DECLARE @index int
SET @index = -1
WHILE (LEN(@text) > 0)
BEGIN
SET @index = CHARINDEX(@delimiter , @text)
IF (@index = 0) AND (LEN(@text) > 0)
BEGIN
INSERT INTO @Strings VALUES (@text)
BREAK
END
IF (@index > 1)
BEGIN
INSERT INTO @Strings VALUES (LEFT(@text, @index - 1))
END
SET @text = RIGHT(@text, (LEN(@text) - (@index+LEN(@delimiter)-1)))
END
RETURN
END
```
You can call it as follows:
```
SELECT *
FROM dbo.fn_Split(@cedis,',')
```
|
How to convert a variable with value '1,2,3' to a table (every number as a record)
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
""
] |
If I execute the following query it works fine.
```
SELECT
CASE WHEN description LIKE '%a%' THEN 'A'
WHEN description LIKE '%b%' THEN 'B'
ELSE 'C'
END AS foo
,COUNT(*) AS cnt
FROM product p
GROUP BY foo
ORDER BY foo;
```
The result is something like
```
foo | cnt
-----------
A | 809
B | 29
C | 55
```
However, the following query gives an error. Note that I only changed the last line.
```
SELECT
CASE WHEN description LIKE '%a%' THEN 'A'
WHEN description LIKE '%b%' THEN 'B'
ELSE 'C'
END AS foo
,COUNT(*) AS cnt
FROM product p
GROUP BY foo
ORDER BY FIELD(foo, 'A', 'B', 'C'); -- Different line
```
> SQL Error (1054): Unknown column 'foo' in 'order clause'
Why is that?
I'm using MySQL version 5.0.22
**UPDATE**:
just to expand, I see the same behaviour if I leave out the grouping, that is:
```
SELECT
CASE WHEN description LIKE '%a%' THEN 'A'
WHEN description LIKE '%b%' THEN 'B'
ELSE 'C'
END AS foo
FROM product p
ORDER BY foo;
```
(runs fine)
vs
```
SELECT
CASE WHEN description LIKE '%a%' THEN 'A'
WHEN description LIKE '%b%' THEN 'B'
ELSE 'C'
END AS foo
FROM product p
ORDER BY FIELD(foo, 'A', 'B', 'C'); -- Different line
```
(error)
|
The reason that happens is that earlier versions of MySQL (apparently including 5.0) processed ORDER BY in the following way:
1. If the entire `ORDER BY` clause is an integer, sort by the corresponding element of the projection clause (`ORDER BY 2` sorts by the second element of the projection clause)
2. If the entire `ORDER BY` clause is an identifier, search for a projection list element with such an alias, or, if none found, for a column with such a name, and sort by that.
3. Otherwise, parse the expression. Only column names, but not aliases, are visible during parsing.
So in your first query you get to the second case, hence you see the alias, in your second query you get to the third case, and hence do not see it.
UPDATE: It seems PostgreSQL has exactly the same behavior. See the answer by Peter Eisentraut here:
[ [ORDER BY Alias not working](https://stackoverflow.com/questions/6458669/order-by-alias-not-working) ]
He summarizes the same thing I said, but for PostgreSQL. Order By can have either an alias (**output column**), or an integer, or an expression referencing **input columns**.
|
If you want to group by the alias then put your query in a derived table
```
SELECT foo, count(*) cnt FROM (
SELECT
CASE WHEN description LIKE '%a%' THEN 'A'
WHEN description LIKE '%b%' THEN 'B'
ELSE 'C'
END AS foo
FROM product p
) t1
GROUP BY foo
ORDER BY FIELD(foo, 'A', 'B', 'C'); -- Different line
```
|
Can't use alias in mysql function as part of ORDER BY clause
|
[
"",
"mysql",
"sql",
"sql-order-by",
""
] |
Given this table:
```
create temp table stats (
name text, country text, age integer
)
insert into stats values
('eric', 'se', 1),
('eric', 'dk', 4),
('johan', 'dk', 6),
('johan', 'uk', 7),
('johan', 'de', 3),
('dan', 'de', 3),
('dan', 'de', 3),
('dan', 'de', 4)
```
I want know the count of distinct name that has either the country or the age the same as the key.
```
country age count
se 1 1
de 3 2
de 4 3
dk 4 3
dk 6 2
uk 7 1
```
There are 3 distinct names that have either `country = dk` (eric, johan) or `age = 4` (eric,dan)
So my question is, what is the best way to write this query?
I have this solution but I find it very ugly!
```
with country as (
select count(distinct name), country
from stats
group by country
),
age as (
select count(distinct name), age
from stats
group by age
),
country_and_age as(
select count(distinct name), age, country
from stats
group by age, country
)
select country, age, c.count+a.count-ca.count as count from country_and_age ca join age a using(age) join country c using(country)
```
Any better way?
|
You can join on the original table also:
```
SELECT
s1.country,
s1.age,
COUNT(distinct s2.name)
FROM stats s1
JOIN stats s2 ON s1.country=s2.country OR s1.age=s2.age
GROUP by 1, 2;
```
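The self-join can be verified directly against the question's sample data; this SQLite sketch reproduces the expected distinct-name counts:

```python
# Count distinct names matching on country OR age via a self-join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stats (name TEXT, country TEXT, age INT);
INSERT INTO stats VALUES
    ('eric','se',1), ('eric','dk',4), ('johan','dk',6), ('johan','uk',7),
    ('johan','de',3), ('dan','de',3), ('dan','de',3), ('dan','de',4);
""")

rows = conn.execute("""
    SELECT s1.country, s1.age, COUNT(DISTINCT s2.name)
    FROM stats s1
    JOIN stats s2 ON s1.country = s2.country OR s1.age = s2.age
    GROUP BY s1.country, s1.age
    ORDER BY s1.country, s1.age
""").fetchall()
print(rows)
```

`COUNT(DISTINCT ...)` absorbs the row multiplication the OR-join causes.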
|
Select distinct age and country from stats. For each record count how many distinct names you find in records matching country or age.
```
select
country,
age,
(
select count(distinct name)
from stats s
where s.country = t.country
or s.age = t.age
) as cnt
from (select distinct country, age from stats) t;
```
|
Count distinct where attribute is either of keys
|
[
"",
"sql",
"postgresql",
""
] |
I'm hitting a baffling issue in SQL. Somehow, a JOIN in a CASE statement is returning both TRUE and FALSE!
If you run the below (I'm using SQL Server 2008), you'll see the effect. The JOIN creates 2 records, based on the CASE statements.
Any ideas how to stop this?
```
CREATE TABLE #table1(id int PRIMARY KEY, fieldA nvarchar(MAX))
INSERT INTO #table1
SELECT 1, 'ABC Airlines'
CREATE TABLE #table2(field_id int PRIMARY KEY, fieldB nvarchar(MAX))
INSERT INTO #table2
SELECT 1, 'ABC Airlines'
UNION
SELECT 2, '*** OTHER ***'
SELECT *
FROM #table1 t
JOIN #table2 t2 ON t2.fieldB = CASE WHEN t.fieldA <> t2.fieldB THEN '*** OTHER ***' ELSE t.fieldA END
```
|
It seems that you are using #table2.fieldB as a JOIN condition, so you are getting 2 values ('ABC Airlines' and 'Other'). Your expected result can be achieved if you change your query this way
```
SELECT *
FROM #table1 t
JOIN #table2 t2 ON t.fieldA = CASE WHEN t.fieldA <> t2.fieldB THEN '*** OTHER ***' ELSE t2.fieldB END
```
|
why wouldn't you want:
```
SELECT *
FROM #table1 t
JOIN #table2 t2 ON t.fieldA = t2.fieldB
```
It is a bad practice to use case statements and inequalities in joins. Based on your end result desired all you want is an ordinary inner join.
|
why is SQL CASE returning both true and false?
|
[
"",
"sql",
"sql-server",
""
] |
I am pretty new to SQL and especially to Oracle DB, and I have some doubts about how to update a specific single field's values for all records of a table.
So, in an Oracle DB, I have a very simple table named **PROJECT_INFO_STATUS** that has the following form:
```
ID STATUS
-------------
0 Closed
1 Active
2 Testing
3 Starting
```
As you can see in the previous snippet this table has only 2 fields: an **ID** (a **NUMBER**) and a **STATUS** (a **VARCHAR**).
What can I do to update all the rows so that the ID value is the **old ID value + 1**?
For example the ID value of the first row (**STATUS=Closed**) has to be 1 (and not 0). The ID value of the second row (**STATUS=Active**) has to be 2 (and not 1). And so on.
How can I obtain this behavior?
|
```
update PROJECT_INFO_STATUS
set ID = ID +1
```
You can directly increment the column value like this as long as it is not a `PK` or `FK` referenced by other tables.
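A minimal SQLite check of the single-statement increment (note that with a UNIQUE/PK constraint some engines validate row by row during the update, while Oracle checks at statement end; the column here is left unconstrained):

```python
# Shift every ID up by one in a single UPDATE statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PROJECT_INFO_STATUS (ID INT, STATUS TEXT);
INSERT INTO PROJECT_INFO_STATUS VALUES
    (0, 'Closed'), (1, 'Active'), (2, 'Testing'), (3, 'Starting');
""")

conn.execute("UPDATE PROJECT_INFO_STATUS SET ID = ID + 1")
ids = [r[0] for r in conn.execute(
    "SELECT ID FROM PROJECT_INFO_STATUS ORDER BY ID")]
print(ids)
```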
|
Simply set the column value to itself plus one:
```
update project_info_status set id = id + 1;
select * from project_info_status;
ID STATUS
---------- ----------
1 Closed
2 Active
3 Testing
4 Started
```
If the ID value is used as a foreign key in another table then you would need to update all of the values in that table as well, and if it's controlled by a referential integrity constraint (as it should be) then you'd need to disable the constraint while you update all related tables, and re-enable the constraint once the data is consistent again. Unless the constraint is defined as deferrable, in which case you can update both tables in the same transaction without disabling the constraint.
But if it is a key then its actual value should be irrelevant, and you shouldn't ever really need to modify a synthetic primary key value.
|
How can I update the value of a specific field of my table for all rows?
|
[
"",
"sql",
"database",
"oracle",
"oracle10g",
"oracle-sqldeveloper",
""
] |
For MSSQL or MySQL, here is a query that unites two results into one:
```
SELECT boy as person from table1
union all
SELECT girl as person from table2
```
How do I modify the query above so that the result contains a second (added) column with the name of the table (so it contains the `table1` or `table2` value)?
|
You can supply a string literal with the table name as the second column:
```
SELECT boy as person, 'table1' as column2 from table1
union all
SELECT girl as person, 'table2' as column2 from table2
```
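The literal-column trick is portable across engines; a small SQLite run with invented one-column tables:

```python
# Tag each row with the name of the table it came from.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (boy TEXT);
CREATE TABLE table2 (girl TEXT);
INSERT INTO table1 VALUES ('tom');
INSERT INTO table2 VALUES ('ann');
""")

rows = conn.execute("""
    SELECT boy AS person, 'table1' AS tablename FROM table1
    UNION ALL
    SELECT girl, 'table2' FROM table2
""").fetchall()
print(rows)
```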
|
Just hard-code the table name in the second column:
```
SELECT boy as person,'Table1' as Tablename from table1
UNION ALL
SELECT girl as person ,'Table2' as Tablename from table2
```
|
Select with UNION ALL to display tablename as an additional column
|
[
"",
"mysql",
"sql",
"select",
"union",
""
] |
Postgres
I have a table user_answers:
```
----------------------------
| id | user_id | answer_id |
----------------------------
| 1 | 47 | 121 |
----------------------------
| 2 | 47 | 125 |
----------------------------
| 3 | 47 | 141 |
----------------------------
| 4 | 49 | 122 |
----------------------------
| 5 | 49 | 121 |
----------------------------
| 6 | 49 | 101 |
----------------------------
| 7 | 52 | 121 |
----------------------------
| 8 | 52 | 125 |
----------------------------
| 9 | 52 | 101 |
----------------------------
| 10 | 67 | 101 |
----------------------------
```
I would like to get the user_id values, only those where answer_id = 121 and answer_id = 125.
A good result: 47 and 52,
because:
user_id = 49 has 121, but not 125.
This simple query does not work (returns nothing):
```
SELECT user_id FROM user_answers
WHERE answer_id = 121 AND answer_id = 125
```
|
Your query returns nothing because `answer_id` has only one value on any given row. It cannot have multiple values on the row.
This is an example of a set-within-sets query. I would recommend using `group by` and `having`. Here is one method:
```
SELECT user_id
FROM user_answers
WHERE answer_id IN (121, 125)
GROUP BY user_id
HAVING COUNT(DISTINCT answer_id) = 2;
```
This will return values that have 121, 125, and possibly other values. To get only those two values:
```
SELECT user_id
FROM user_answers
WHERE answer_id IN (121, 125)
GROUP BY user_id
HAVING SUM(CASE WHEN answer_id = 121 THEN 1 ELSE 0 END) > 0 AND
       SUM(CASE WHEN answer_id = 125 THEN 1 ELSE 0 END) > 0;
```
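Both variants can be checked against the question's data; this SQLite sketch runs the first (`HAVING COUNT(DISTINCT ...)`) form:

```python
# Set-within-sets: users who gave both answer 121 and answer 125.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_answers (id INT, user_id INT, answer_id INT);
INSERT INTO user_answers VALUES
    (1, 47, 121), (2, 47, 125), (3, 47, 141),
    (4, 49, 122), (5, 49, 121), (6, 49, 101),
    (7, 52, 121), (8, 52, 125), (9, 52, 101),
    (10, 67, 101);
""")

users = [r[0] for r in conn.execute("""
    SELECT user_id
    FROM user_answers
    WHERE answer_id IN (121, 125)
    GROUP BY user_id
    HAVING COUNT(DISTINCT answer_id) = 2
    ORDER BY user_id
""")]
print(users)
```

User 49 has 121 but not 125, so only its single match survives the `HAVING`.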
|
```
select user_id
from user_answers
where answer_id in (121, 125)
group by user_id
having count(*) = 2
```
|
How to compare the values of one column? SQL(Postgres)
|
[
"",
"sql",
"postgresql",
""
] |
I have the following mysql table (simplified):
```
CREATE TABLE IF NOT EXISTS `mytable` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`emails` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`phones` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`history` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`id`),
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=259 ;
```
I have the following query :
```
SELECT *
FROM mytable
WHERE emails LIKE '%addr@yahoo.com%'
OR phones LIKE '%addr@yahoo.com%'
AND history !='None'
```
This is producing 2 records:
```
INSERT INTO `mytable` (`id`, `emails`, `phones`, `history`) VALUES
(118, 'addr@yahoo.com', NULL, 'None'),
(237, 'addr@yahoo.com', NULL, 'gomez');
```
but I was expecting the one with `history = 'None'` not to be included. What am I doing wrong?
|
Organize your `OR` criteria in `()`
```
SELECT *
FROM mytable
WHERE (emails LIKE '%addr@yahoo.com%'
       OR phones LIKE '%addr@yahoo.com%')
  AND history != 'None'
```
[`Sample Demo`](http://www.sqlfiddle.com/#!2/df6830/3)
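The precedence issue is easy to demonstrate: `AND` binds tighter than `OR`, so without parentheses the `history` filter only applies to the `phones` branch. A SQLite sketch on the question's (simplified) rows:

```python
# AND binds tighter than OR: parentheses change which rows match.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (id INT, emails TEXT, phones TEXT, history TEXT);
INSERT INTO mytable VALUES
    (118, 'addr@yahoo.com', NULL, 'None'),
    (237, 'addr@yahoo.com', NULL, 'gomez');
""")

# Without parentheses the history filter is swallowed by the AND branch.
without_parens = conn.execute("""
    SELECT id FROM mytable
    WHERE emails LIKE '%addr@yahoo.com%'
       OR phones LIKE '%addr@yahoo.com%'
      AND history != 'None'
    ORDER BY id
""").fetchall()

# Parentheses force the intended grouping.
with_parens = conn.execute("""
    SELECT id FROM mytable
    WHERE (emails LIKE '%addr@yahoo.com%'
        OR phones LIKE '%addr@yahoo.com%')
      AND history != 'None'
    ORDER BY id
""").fetchall()
print(without_parens, with_parens)
```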
|
You should explicitly set parentheses in your `WHERE` clause:
```
WHERE (emails LIKE '%addr@yahoo.com%'
       OR phones LIKE '%addr@yahoo.com%')
  AND history != 'None'
```
|
Mysql not equal operator failing?
|
[
"",
"mysql",
"sql",
""
] |
Basically, given a list of strings,
I want to create a table with a `select` statement.
For example,
```
"A", "B", "C",
```
I want to create a table as a sub-select like:
```
sub-select
+---------+
| "A" |
+---------+
| "B" |
+---------+
| "C" |
+---------+
```
How do I do this in redshift and postgres?
Thanks!
Update:
`select 'a' as A;`
is sort of what I want that returns:
```
a
+---------+
| "a" |
+---------+
```
How do I have multiple rows for this column `a` from the query `select 'a' as A;`
|
One way to convert a column value to multiple rows is using the split_part function and UNION.
Here is an example.
Source Table:
```
=> CREATE TABLE t_csv (value varchar(64));
=> INSERT INTO t_csv VALUES ('"A","B","C"');
=> INSERT INTO t_csv VALUES ('"D","E"');
=> INSERT INTO t_csv VALUES ('"F","G","H","I"');
=> SELECT * FROM t_csv;
value
-----------------
"D","E"
"A","B","C"
"F","G","H","I"
(3 rows)
```
Here is the query to get multiple rows.
```
=> WITH a AS (SELECT value FROM t_csv)
SELECT * FROM
(
SELECT split_part(a.value,',',1) AS value FROM a
UNION
SELECT split_part(a.value,',',2) AS value FROM a
UNION
SELECT split_part(a.value,',',3) AS value FROM a
UNION
SELECT split_part(a.value,',',4) AS value FROM a
)
WHERE value != '';
value
-------
"A"
"B"
"C"
"D"
"E"
"F"
"G"
"H"
"I"
(9 rows)
```
|
I have had no chance to test it in a db, but something like this:
```
select * INTO table from (
SELECT CAST('A' AS VARCHAR(100)) AS col
UNION ALL
SELECT 'B' AS col
UNION ALL
SELECT 'C' AS col
) a
```
|
From a string array, create a table
|
[
"",
"sql",
"postgresql",
"amazon-redshift",
""
] |
I have a table analyst\_request
```
-------------------------
id | test | request_id |
-------------------------
1 | a |NDQA201404320
-------------------------
2 | a |NDQA201404320
-------------------------
3 | b |NDQA201404333
-------------------------
4 | b |NDQA201404333
```
How do I delete any duplicates of the record containing 'NDQA201404320' and leave only one such record?
Suggestions?
|
Generally you can do
```
delete from analyst_request
where request_id = 'NDQA201404320'
  and id not in
(
select min(id)
from analyst_request
where request_id IN ( 'NDQA201404320')
group by test, request_id
)
```
But MySQL can't delete from the same table you are selecting from. So you need to make a temp table like this
```
delete from analyst_request
where request_id = 'NDQA201404320'
  and id not in
(
select * from
(
select min(id)
from analyst_request
where request_id IN ( 'NDQA201404320')
group by test, request_id
) tmp
)
```
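In SQLite the single-statement form works as-is (the restriction on selecting from the table being deleted is MySQL-specific). Note the outer `WHERE` must also filter on `request_id`, otherwise rows for other requests, whose ids are not in the subquery, would be deleted too:

```python
# Keep only the lowest id per (test, request_id) group for the given request.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE analyst_request (id INT, test TEXT, request_id TEXT);
INSERT INTO analyst_request VALUES
    (1, 'a', 'NDQA201404320'), (2, 'a', 'NDQA201404320'),
    (3, 'b', 'NDQA201404333'), (4, 'b', 'NDQA201404333');
""")

conn.execute("""
    DELETE FROM analyst_request
    WHERE request_id = 'NDQA201404320'
      AND id NOT IN (
        SELECT MIN(id)
        FROM analyst_request
        WHERE request_id = 'NDQA201404320'
        GROUP BY test, request_id)
""")
remaining = [r[0] for r in conn.execute(
    "SELECT id FROM analyst_request ORDER BY id")]
print(remaining)
```

Only the duplicate with id 2 is removed; the 'NDQA201404333' rows are untouched.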
|
Why not make it a `unique` field and prevent duplicates from getting into the database in the first place?
|
Delete Single value from duplicate record
|
[
"",
"mysql",
"sql",
""
] |
I need to get the following two queries into one statement in PostgreSQL.
```
select count(*) as REDEEMTOTALUSED
from voucher
where campaign_id='123' and redeemcount>0;
select count(*) as REDEEMTOTALUNUSED
from voucher
where campaign_id='123' and redeemcount=0;
```
The result should be the following.
```
REDEEMTOTALUSED | REDEEMTOTALUNUSED
1 | 9
```
Is this possible?
|
Please try:
```
select
(select count(*) as REDEEMTOTALUSED
from voucher
where campaign_id='123' and redeemcount>0) AS REDEEMTOTALUSED ,
(select count(*) as REDEEMTOTALUNUSED
from voucher
where campaign_id='123' and redeemcount=0) AS REDEEMTOTALUNUSED
```
or another way:
```
SELECT
SUM(CASE
WHEN redeemcount>0 THEN 1
ELSE 0
END) as REDEEMTOTALUSED ,
SUM(CASE
WHEN redeemcount=0 THEN 1
ELSE 0
END) as REDEEMTOTALUNUSED
from
voucher
WHERE
campaign_id='123'
```
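A quick SQLite check of the conditional-aggregation form on invented voucher rows (two used, three unused, plus one row from another campaign that must be ignored):

```python
# One pass over the table, two conditional sums.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE voucher (campaign_id TEXT, redeemcount INT);
INSERT INTO voucher VALUES
    ('123', 2), ('123', 1),
    ('123', 0), ('123', 0), ('123', 0),
    ('999', 5);
""")

used, unused = conn.execute("""
    SELECT SUM(CASE WHEN redeemcount > 0 THEN 1 ELSE 0 END),
           SUM(CASE WHEN redeemcount = 0 THEN 1 ELSE 0 END)
    FROM voucher
    WHERE campaign_id = '123'
""").fetchone()
print(used, unused)
```

This scans the table once, unlike the two-subquery version.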
|
Use conditional aggregation:
```
select sum(case when redeemcount > 0 then 1 else 0 end) as REDEEMTOTALUSED,
sum(case when redeemcount = 0 then 1 else 0 end) as REDEEMTOTALUNUSED
from voucher
where campaign_id = '123';
```
|
Multiple queries in one statement
|
[
"",
"sql",
"postgresql",
""
] |
I have a report that produces a list of values that should all be numbers but are currently stored as a varchar. In my export I am casting the varchar as an INT, but when someone incorrectly enters a letter or a special character the report fails because it cannot convert to an INT.
```
Current List
557780
558871
5588a6
5588 7
```
What I need is something that will swap out those values that contain spaces or characters for an error int like 99999999 so the end result would be
```
557780
558871
99999999
99999999
```
I know I can do this by using something like this
```
CASE WHEN SomeNum LIKE '% %' THEN 99999999
     WHEN SomeNum LIKE '%a%' THEN 99999999
     ELSE SomeNum END
```
But is there a better way than comparing to every letter and special character individually?
|
You can try this on SQL 2012+:
```
SELECT COALESCE(TRY_CONVERT(INT, SomeNum), 99999999) AS Value
FROM YourTable
```
|
You can do it with a pattern:
```
case when SomeNum NOT LIKE '%[^0-9]%' THEN 99999999 ELSE cast(SomeNum as int) END
```
|
SQL Server Check String for Spaces, Special Characters, and Characters
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Here's an example: I want to see how well my marketing efforts are working for a product I'm trying to sell in a store. For instance, I want to know how many people bought my product within a month after they received a coupon for it in their email on 12/1/2014, compared to how many people bought my product in that same time period without ever receiving a coupon.
Here's a sample of my Customer table:
```
CUSTOMER_NUMBER PURCHASE_DATE
--------------- -------------
1 2014-12-02
2 2014-12-05
3 2014-12-05
4 2014-12-10
5 2014-12-21
```
Here's a sample of my Email table
```
CUSTOMER_NUMBER EMAIL_ADDR SEND_DATE
--------------- ------------ ----------
1 john@abc.com 2014-12-01
3 mary@xyz.com 2014-12-01
5 beth@def.com 2014-12-01
```
I have a pretty good idea how to determine who bought the product with the coupon: I use an inner join on the two tables. But in order to determine who bought the product anyway, even though they didn't have a coupon for whatever reason (they don't have email, they're part of a control group, etc.), I *think* I need to use a left join to get a result set, and then subtract the results of the inner join from my first result set. Alas, that is where I am stuck. In the example above, Customers 2 and 5 bought the product even though they never received a coupon, but I cannot figure out how to write a query to return that data.
I am using IBM's Netezza DB. Thank you!!
|
Use `Left Outer Join` with `NULL` check
```
SELECT C.*
FROM customer C
LEFT OUTER JOIN email e
ON C.customer_Number = E.customer_Number
WHERE E.customer_Number IS NULL
```
Or use `Not Exists`
```
SELECT *
FROM customer C
WHERE NOT EXISTS (SELECT 1
FROM email e
WHERE c.customer_number = e.customer_number)
```
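Run against the question's sample rows, the anti-join returns the customers with no email row; a SQLite sketch of the `NOT EXISTS` version:

```python
# Anti-join: customers with no matching email (coupon) row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (customer_number INT, purchase_date TEXT);
INSERT INTO customer VALUES
    (1, '2014-12-02'), (2, '2014-12-05'), (3, '2014-12-05'),
    (4, '2014-12-10'), (5, '2014-12-21');
CREATE TABLE email (customer_number INT, email_addr TEXT, send_date TEXT);
INSERT INTO email VALUES
    (1, 'john@abc.com', '2014-12-01'),
    (3, 'mary@xyz.com', '2014-12-01'),
    (5, 'beth@def.com', '2014-12-01');
""")

no_coupon = [r[0] for r in conn.execute("""
    SELECT c.customer_number
    FROM customer c
    WHERE NOT EXISTS (SELECT 1 FROM email e
                      WHERE e.customer_number = c.customer_number)
    ORDER BY c.customer_number
""")]
print(no_coupon)
```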
|
```
select c.*
from customers c left outer join email e
on c.customer_number = e.customer_number
where e.customer_number is null
   or c.purchase_date < e.send_date
```
|
How to write a SQL query that subtracts INNER JOIN results from LEFT JOIN results?
|
[
"",
"sql",
"left-join",
"inner-join",
""
] |
Hi, I am trying to show data from two tables, but the second table is empty.
```
SELECT ST.* , E.data, E.status
FROM `students` ST
INNER JOIN `table2` E ON E.id_student=ST.id_student
WHERE ST.class='$class' AND E.data LIKE '$year-$month-%'
ORDER BY ST.surname, ST.name, E.data
```
What query can I use for my request? (If the second table is empty I need only the students list; if not, I need the joined result.) I know that INNER JOIN shows records only if they exist in both tables.
|
Left join will do it for you.
```
SELECT ST.* , E.data, E.status
FROM `students` ST
LEFT JOIN `table2` E
ON E.id_student=ST.id_student and E.data LIKE '$year-$month-%'
WHERE ST.class='$class'
ORDER BY ST.surname, ST.name, E.data
```
It will give you ST data values even if E data is empty. Of course you have to take care of the null values in the columns from the empty table

Inner join only gives you information when the data is present on both tables

Edit: applied jarlh's suggestion in order to work properly.
|
You have to use **LEFT JOIN** to fetch the data from first table and matched data of second table.
```
SELECT ST.*, E.data, E.status
FROM `students` ST
LEFT JOIN `table2` E ON E.id_student=ST.id_student AND E.data LIKE '$year-$month-%'
WHERE ST.class='$class'
ORDER BY ST.surname, ST.name, E.data
```
|
MySQL - Select from table 1 if second table is empty or not not empty
|
[
"",
"mysql",
"sql",
"select",
"join",
""
] |
I have to calculate percentage based on count. Given below is a sample data table.
```
TeamName Count1 Count0
-------- ----- ------
Team1 1 2
Team2 3 0
Team3 1 1
```
I want to display the percentages based on the greatest value in `Count1`.
Expected output:
```
TeamName Count1 Count0 Percentage1 Percentage0
-------- ----- ------ ----------- -----------
Team1 1 2 33.33% 66.6%
Team2 3 0 100% 0%
Team3 1 1 33.33% 33.33%
```
Help me to find a proper solution. Thank You.
|
Use the `Max() over ()` trick to find the max over all the rows.
```
select TeamName,
Count1,
Count0,
(count1*100.0)/nullif(Max(Count1) over(),0) Percentage1,
(count0*100.0)/nullif(max(Count1) over(),0) Percentage2
from yourtable
```
or Use a `subquery` to find the `Max` and do the math
```
SELECT
TeamName,
Count1,
Count0,
(Count1*100.0) / nullif((SELECT max(Count1) FROM yourTable),0) Percentage1,
(Count0*100.0) / nullif((SELECT max(Count1) FROM yourTable),0) Percentage2
FROM yourTable
```
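SQLite (3.25+) also supports `MAX(...) OVER ()`, so the first query can be checked against the question's three teams; `ROUND` stands in for presentation-layer percent formatting:

```python
# Divide each count by the greatest Count1 across all rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE teams (TeamName TEXT, Count1 INT, Count0 INT);
INSERT INTO teams VALUES ('Team1', 1, 2), ('Team2', 3, 0), ('Team3', 1, 1);
""")

rows = conn.execute("""
    SELECT TeamName,
           ROUND(Count1 * 100.0 / MAX(Count1) OVER (), 2),
           ROUND(Count0 * 100.0 / MAX(Count1) OVER (), 2)
    FROM teams
    ORDER BY TeamName
""").fetchall()
print(rows)
```

Multiplying by `100.0` (not `100`) avoids integer division.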
|
```
SELECT
TeamName,
Count1,
Count0,
    Count1 * 100.0 / (SELECT MAX(Count1) FROM Tbl),
    Count0 * 100.0 / (SELECT MAX(Count1) FROM Tbl)
FROM Tbl
```
Watch out for zero values in Count1. I can improve the answer if you describe what should happen when MAX(Count1) is zero.
|
How to calculate percentage in sql?
|
[
"",
"sql",
""
] |
I have two tables that I want to join. The first table is named `WorkItem`:

The second table is `WorkItem_Schedule`:

I want to get the **1st** joined row with `ActualEndDate IS NULL`.
In case of WorkItemId 3, I only want to join it on WorkItemScheduleId 95. Only the first row!
I tried this SQL statement, but I am having trouble:
```
SELECT
W.WorkItemId
,MIN(WS.WorkItemScheduleId) test
,W.WorkItemName
,WS.[PhaseName]
,WS.[StartDate]
,WS.[EndDate]
,WS.[ActualStartDate]
,WS.[ActualEndDate]
FROM
WorkItem W
INNER JOIN
WorkItem_Schedule WS ON W.WorkItemId = WS.WorkItemId
WHERE
WS.ActualEndDate IS NULL
GROUP BY
W.WorkItemId;
```
I get an error
> Column 'WorkItem.WorkItemName' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I googled this error and found out that I need to add the other columns in the select list to the group by, so I tried this SQL, but it returns all schedules rather than distinct WorkItemIds:
```
SELECT
W.WorkItemId
,MIN(WS.WorkItemScheduleId) test
,W.WorkItemName
,WS.[PhaseName]
,WS.[StartDate]
,WS.[EndDate]
,WS.[ActualStartDate]
,WS.[ActualEndDate]
FROM
WorkItem W
INNER JOIN
WorkItem_Schedule WS ON W.WorkItemId = WS.WorkItemId
WHERE
WS.ActualEndDate IS NULL
GROUP BY
W.WorkItemId, WS.PhaseName, WS.StartDate, WS.EndDate, WS.ActualStartDate, WS.ActualEndDate;
```
Please help! Thanks in advance!
|
`cross apply` is probably the easiest way to do this:
```
select w.*, ws.*
from WorkItem w cross apply
(select top 1 ws.*
from WorkItem_Schedule ws
where ws.WorkItemId = w.WorkItemId and
ws.ActualEndDate is null
order by ws.WorkItemScheduleId
) ws;
```
By "first", I assume you mean the one with the smallest `WorkItemScheduleId`.
|
You could use `ROW_NUMBER()` function to get the "first" joined row.
```
SELECT * FROM (
SELECT
W.WorkItemId
,WS.WorkItemScheduleId
,CASE WHEN WS.ActualEndDate IS NULL
THEN ROW_NUMBER() OVER (PARTITION BY W.WorkItemId ORDER BY WorkItemScheduleId) --ROW_NUMBER() only invoked when the ActualEndDate IS NULL
ELSE -1 END AS ROWN
,W.WorkItemName
,WS.[PhaseName]
,WS.[StartDate]
,WS.[EndDate]
,WS.[ActualStartDate]
,WS.[ActualEndDate]
FROM WorkItem W
INNER JOIN WorkItem_Schedule WS
ON W.WorkItemId = WS.WorkItemId
GROUP BY W.WorkItemId, WS.PhaseName, WS.StartDate, WS.EndDate, WS.ActualStartDate, WS.ActualEndDate) A
WHERE ROWN = 1 --Getting the "first" instance
```
|
SQL Server INNER JOIN with GROUP BY
|
[
"",
"sql",
"sql-server",
"inner-join",
""
] |