```
select KeywordId,currentposition,PsnUpdateDate,PsnUpdateBy from seo.Tbl_KeywordPosition where psnupdatedate = '2015-01-22'
select KeywordId,currentposition,PsnUpdateDate,PsnUpdateBy from seo.Tbl_KeywordPosition where psnupdatedate = '2015-01-23'
1456 10 2015-01-22 00:00:00.000 Ananth
1467 8 2015-01-22 00:00:00.000 gangabhavani
1468 10 2015-01-22 00:00:00.000 admin
1456 9 2015-01-23 00:00:00.000 Ananth
1467 11 2015-01-23 00:00:00.000 gangabhavani
1468 9 2015-01-23 00:00:00.000 admin
```
output needed =
```
KeywordId oldPosition newPosition PsnUpdateBy
1456 10 9 Ananth
1467 8 11 gangabhavani
```
I am giving two inputs, an old date and a new date. I want to see the difference in position per date and per user.
|
you can use [UNION operator](http://www.techonthenet.com/sql/union.php)
```
SELECT KeywordId,currentposition,PsnUpdateDate,PsnUpdateBy
FROM seo.Tbl_KeywordPosition
WHERE psnupdatedate = '2015-01-22'
UNION
SELECT KeywordId,currentposition,PsnUpdateDate,PsnUpdateBy
FROM seo.Tbl_KeywordPosition
WHERE psnupdatedate ='2015-01-23';
```
|
The simplest way to merge these two statements is:
```
select old.KeywordId, oldposition, newposition, old.PsnUpdateBy
from
(select KeywordId,currentposition as oldposition, PsnUpdateBy
from seo.Tbl_KeywordPosition
where psnupdatedate = '2015-01-22' ) as old
inner join
(select KeywordId, currentposition as newposition, PsnUpdateBy
from seo.Tbl_KeywordPosition
where psnupdatedate = '2015-01-23' ) as new
on old.KeywordId = new.KeywordId
```
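The self-join above can be sketched end-to-end with Python's built-in `sqlite3` module standing in for SQL Server (table and column names follow the question; the date literals come from the sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE KeywordPosition (
    KeywordId INTEGER, currentposition INTEGER,
    PsnUpdateDate TEXT, PsnUpdateBy TEXT);
INSERT INTO KeywordPosition VALUES
    (1456, 10, '2015-01-22', 'Ananth'),
    (1467,  8, '2015-01-22', 'gangabhavani'),
    (1468, 10, '2015-01-22', 'admin'),
    (1456,  9, '2015-01-23', 'Ananth'),
    (1467, 11, '2015-01-23', 'gangabhavani'),
    (1468,  9, '2015-01-23', 'admin');
""")

# Join the old-date snapshot to the new-date snapshot on KeywordId.
rows = conn.execute("""
    SELECT old.KeywordId, old.currentposition AS oldposition,
           new.currentposition AS newposition, old.PsnUpdateBy
    FROM KeywordPosition AS old
    JOIN KeywordPosition AS new
      ON old.KeywordId = new.KeywordId
    WHERE old.PsnUpdateDate = '2015-01-22'
      AND new.PsnUpdateDate = '2015-01-23'
    ORDER BY old.KeywordId
""").fetchall()
```

Note that the sample output in the question omits keyword 1468 even though its position also changed; the join naturally returns every keyword present on both dates.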
|
i want to merge two select statements
|
[
"",
"sql",
""
] |
I am unable to get a clear-cut answer to this contentious question.
[MSDN documentation](https://msdn.microsoft.com/en-us/library/ms190457.aspx) mentions
Clustered
* Clustered indexes **sort** and store the data rows in the table or view
based on their key values. These are the columns included in the
index definition. There can be only one clustered index per table,
because the data rows themselves can be **sorted** in only one order.
* The only time the data rows in a table are stored in **sorted** order is
when the table contains a clustered index. When a table has a
clustered index, the table is called a clustered table. If a table
has no clustered index, its data rows are stored in an unordered
structure called a heap.
However, most of the answers I have seen, such as:
* [Does a SELECT query always return rows in the same order? Table with clustered index](https://stackoverflow.com/questions/9584813/does-a-select-query-always-return-rows-in-the-same-order-table-with-clustered-i)
* <http://sqlwithmanoj.com/2013/06/02/clustered-index-do-not-guarantee-physically-ordering-or-sorting-of-rows/>
answer in the negative.
Which is it?
|
Just to be clear. Presumably, you are talking about a simple query such as:
```
select *
from table t;
```
First, if all the data on the table fits on a single page and there are no other indexes on the table, it is hard for me to imagine a scenario where the result set is not ordered by the primary key. However, this is because I think the most reasonable query plan would require a full-table scan, not because of any requirement -- documented or otherwise -- in SQL or SQL Server. Without an explicit `order by`, the ordering in the result set is a consequence of the query plan.
That gets to the heart of the issue. When you are talking about the ordering of result sets, you are really talking about the query plan. And the assumption of ordering by the primary key really means that you are assuming that the query uses a full-table scan. What is ironic is that people make the assumption without actually understanding the "why". Furthermore, people have a tendency to generalize from small examples (okay, this is part of the basis of human intelligence). Unfortunately, they consistently see that result sets from simple queries on small tables are in primary key order and generalize to larger tables. The induction step is incorrect in this example.
What can change this? Off-hand, I think that a full table scan would return the data in primary key order if the following conditions are met:
* Single threaded server.
* Single file filegroup
* No competing indexes
* No table partitions
I'm not saying this is always true. It just seems reasonable that under these circumstances such a query would use a full table scan starting at the beginning of the table.
Even on a small table, you can get surprises. Consider:
```
select NonPrimaryKeyColumn
from table
```
The query plan would probably decide to use an index on `table(NonPrimaryKeyColumn)` rather than doing a full table scan. The results would not be ordered by the primary key (unless by accident). I show this example because indexes can be used for a variety of purposes, not just `order by` or `where` filtering.
If you use a multi-threaded instance of the database and you have reasonably sized tables, you will quickly learn that results without an `order by` have no explicit ordering.
And finally, SQL Server has a pretty smart optimizer. I think there is some reluctance to use `order by` in a query because users think it will automatically do a sort. SQL Server works hard to find the best execution plan for the query. If it recognizes that the `order by` is redundant because of the rest of the plan, then the `order by` will not result in a sort.
And of course, if you want to guarantee the ordering of results, you need `order by` in the outermost query. Even a query like this:
```
select *
from (select top 100 t.* from t order by col1) t
```
does not guarantee that the results are ordered in the final result set. You really need to do:
```
select *
from (select top 100 t.* from t order by col1) t
order by col1;
```
to guarantee the results in a particular order. This behavior *is* documented [here](https://msdn.microsoft.com/en-us/library/ms188385.aspx).
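A minimal sketch of this point, using Python's `sqlite3` (SQLite uses `LIMIT` rather than `TOP`, but the principle is the same): only the outermost `ORDER BY` gives a guaranteed ordering.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 INTEGER);
INSERT INTO t VALUES (3), (1), (2);
""")

# No outer ORDER BY: the engine is free to return these rows in any order.
inner = conn.execute(
    "SELECT col1 FROM (SELECT col1 FROM t ORDER BY col1 LIMIT 2)"
).fetchall()

# Outer ORDER BY: the ordering of the final result set is guaranteed.
outer = conn.execute(
    "SELECT col1 FROM (SELECT col1 FROM t ORDER BY col1 LIMIT 2) ORDER BY col1"
).fetchall()
```

Only the *membership* of `inner` is guaranteed (the two smallest values); asserting its order would rely on an implementation detail.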
|
**Without ORDER BY, there is no default sort order even if you have clustered index**
in [**this link**](http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/05/20/without-order-by-there-is-no-default-sort-order.aspx) there is a good example :
```
CREATE SCHEMA Data AUTHORIZATION dbo
GO
CREATE TABLE Data.Numbers(Number INT NOT NULL PRIMARY KEY)
GO
DECLARE @ID INT;
SET NOCOUNT ON;
SET @ID = 1;
WHILE @ID < 100000 BEGIN
INSERT INTO Data.Numbers(Number)
SELECT @ID;
SET @ID = @ID+1;
END
CREATE TABLE Data.WideTable(ID INT NOT NULL
CONSTRAINT PK_WideTable PRIMARY KEY,
RandomInt INT NOT NULL,
CHARFiller CHAR(1000))
GO
CREATE VIEW dbo.WrappedRand
AS
SELECT RAND() AS random_value
GO
CREATE FUNCTION dbo.RandomInt()
RETURNS INT
AS
BEGIN
DECLARE @ret INT;
SET @ret = (SELECT random_value*1000000 FROM dbo.WrappedRand);
RETURN @ret;
END
GO
INSERT INTO Data.WideTable(ID,RandomInt,CHARFiller)
SELECT Number, dbo.RandomInt(), 'asdf'
FROM Data.Numbers
GO
CREATE INDEX WideTable_RandomInt ON Data.WideTable(RandomInt)
GO
SELECT TOP 100 ID FROM Data.WideTable
```
**OUTPUT:**
```
1407
253
9175
6568
4506
1623
581
```
> As you have seen, the optimizer has chosen to use a non-clustered
> index to satisfy this SELECT TOP query.
>
> Clearly you cannot assume that your results are ordered unless you
> explicitly use ORDER BY clause.
|
Does a clustered index on a column GUARANTEE returning rows sorted by that column?
|
[
"",
"sql",
"sql-server",
""
] |
I have a requirement where there is a database column that stores DateTime such as 24-01-2015 22:31:49. Now, I can many such rows with different values in the column.
I need a query to update the column's data to 24-01-2015 00:00:00 for each row. (Basically convert datetime to date and save it in the same place).
I have to do it in SQL layer as doing it in my Application level (Java) is not feasible for huge number of records.
Thanks for the help.
|
The solution depends on the database. Here are examples:
MySQL and DB2:
```
update t
set col = date(col);
```
SQL Server 2008+:
```
update t
set col = cast(col as date);
```
Oracle:
```
update t
set col = trunc(col);
```
Postgres (and Postgres-derived databases):
```
update t
set col = date_trunc('day', col);
```
Teradata:
```
update t
set col = trunc(col, 'd');
```
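For completeness, SQLite (not listed above) has a `date()` function as well; here is a quick sketch via Python's `sqlite3`, assuming the column is stored as text. If the column must keep a `00:00:00` time part, concatenate it back on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.execute("INSERT INTO t VALUES ('2015-01-24 22:31:49')")

# date() strips the time part; re-append midnight to keep the column shape.
conn.execute("UPDATE t SET col = date(col) || ' 00:00:00'")

val = conn.execute("SELECT col FROM t").fetchone()[0]
```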
|
```
UPDATE table_name
SET date_time_col = CONCAT(date(date_time_col), ' 00:00:00')
```
Explanation: the **DATE** function extracts the date part of the datetime value, and the **CONCAT** function appends `00:00:00` to the end of the date.
|
Update database column data via sql query
|
[
"",
"sql",
""
] |
Is it possible to create a query that joins 2 tables to calculate the total qty per stuff code? I am using SQL Server 2008.
I have table Purchase
```
id_Purchase| stuff_code| qty
------------------------------
1 | G-001 | 6000
2 | G-002 | 4000
3 | G-003 | 2000
4 | G-001 | 5000
```
and table Selling
```
id_selling| id_purchase| qty
------------------------------
1 | 1 | 2000
2 | 1 | 3000
3 | 2 | 1000
```
id\_purchase is foreign key from table Purchase
what I want is the query to generate this
```
stuff_code| qty
-----------------
G-001 | 6000
G-002 | 3000
G-003     | 2000
```
note that the G-001 qty is from 6000 + 5000 - 2000 - 3000
this is my current query
```
SELECT stuff_code, SUM(P.qty)-ISNULL(SUM(S.qty),0)
FROM Purchase P LEFT JOIN Selling S ON P.ID_Purchase = S.ID_Purchase
GROUP BY stuff_code
```
and the result is
```
stuff_code| qty
-----------------
G-001 | 12000
G-002 | 3000
G-003     | 2000
```
|
Below is one method, using common table expressions for purchases and sales summaries by stuff\_code.
```
CREATE TABLE dbo.Purchase(
id_Purchase int
CONSTRAINT PK_Purchase PRIMARY KEY
, stuff_code varchar(10)
, qty int
);
INSERT INTO dbo.Purchase VALUES
(1, 'G-001', 6000)
,(2, 'G-002', 4000)
,(3, 'G-003', 2000)
,(4, 'G-001', 5000);
CREATE TABLE dbo.Selling(
id_selling int
CONSTRAINT PK_Selling PRIMARY KEY
, id_purchase int
, qty int
);
INSERT INTO dbo.Selling VALUES
(1, 1, 2000)
, (2, 1, 3000)
, (3, 2, 1000);
WITH
purchase_summary AS (
SELECT stuff_code, SUM(qty) AS qty
FROM dbo.Purchase
GROUP BY stuff_code
)
,sales_summary AS (
SELECT p.stuff_code, SUM(s.qty) AS qty
FROM dbo.Selling AS s
JOIN dbo.Purchase AS p ON
p.id_purchase = s.id_purchase
GROUP BY p.stuff_code
)
SELECT
purchase_summary.stuff_code
, purchase_summary.qty - COALESCE(sales_summary.qty, 0) AS qty
FROM purchase_summary
LEFT JOIN sales_summary ON
sales_summary.stuff_code = purchase_summary.stuff_code
ORDER BY
stuff_code;
```
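The CTE answer above can be checked against the question's sample data with Python's `sqlite3` standing in for SQL Server (same query shape; `COALESCE` works in both engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Purchase (id_Purchase INTEGER PRIMARY KEY, stuff_code TEXT, qty INTEGER);
INSERT INTO Purchase VALUES (1,'G-001',6000),(2,'G-002',4000),(3,'G-003',2000),(4,'G-001',5000);
CREATE TABLE Selling (id_selling INTEGER PRIMARY KEY, id_purchase INTEGER, qty INTEGER);
INSERT INTO Selling VALUES (1,1,2000),(2,1,3000),(3,2,1000);
""")

# Summarize purchases and sales separately, then subtract per stuff_code.
rows = conn.execute("""
WITH purchase_summary AS (
    SELECT stuff_code, SUM(qty) AS qty FROM Purchase GROUP BY stuff_code
),
sales_summary AS (
    SELECT p.stuff_code, SUM(s.qty) AS qty
    FROM Selling s JOIN Purchase p ON p.id_Purchase = s.id_purchase
    GROUP BY p.stuff_code
)
SELECT ps.stuff_code, ps.qty - COALESCE(ss.qty, 0)
FROM purchase_summary ps
LEFT JOIN sales_summary ss ON ss.stuff_code = ps.stuff_code
ORDER BY ps.stuff_code
""").fetchall()
```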
|
You have data sets at two different granularities. Get them at the same grain and you can just do math straight across.
```
SELECT p.ID_Purchase
, p.stuff_code
, p.StuffPurchased
, ISNULL(s.StuffSold,0) as StuffSold
, p.StuffPurchased - ISNULL(s.StuffSold,0) as StuffLeft
FROM (SELECT
ID_Purchase
, stuff_code
, SUM(Qty) StuffPurchased
FROM Purchase
GROUP BY ID_Purchase
, stuff_code) p
LEFT JOIN (
SELECT ID_Purchase
, SUM(Qty) StuffSold
FROM Selling
GROUP BY ID_Purchase) s
ON P.ID_Purchase = S.ID_Purchase
```
|
what is the query to join 2 tables to calculate qty per stuff code
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I'm looking to plot the number of views on an investment every week from creation date to current date.

Every time someone visits an investment, I'm inserting it to a table with
`investment_id`, `user_id` and `timestamp(datetime)`.
Table looks like this:
investment\_id
```
investment_id user_id Time_stamp
499 233 2015-01-22 09:00:42
499 256 2015-01-21 09:00:42
499 275 2015-01-20 09:00:42
499 233 2015-01-14 09:00:42
499 233 2015-01-14 09:00:42
499 233 2015-01-10 09:00:42
499 273 2015-01-06 09:00:42
499 347 2015-01-02 09:00:42
499 343 2015-01-01 09:00:42
499 344 2015-01-01 09:00:42
```
So for investment\_id 499 created on 2015-01-01 00:00:00, the result will be as below:
```
Week Views
1 4
2 3
3 2
4 1
```
Found a workaround:
```
SELECT (CASE (ceiling(datediff(Time_stamp,"2015-01-01 00:00:00")/7)) WHEN 0 THEN 1
ELSE (ceiling(datediff(Time_stamp,"2015-01-01 00:00:00")/7)) END) as weeks,
count(Investment_Id) as viewCount
from log_table where Investment_Id =499 group by weeks
```
|
What's needed is a temporary/virtual table like this:
```
+ ------------- + ---------- + ---------- + ---------- +
| investment_id | week_start | week_end | weeknumber |
+ ------------- + ---------- + ---------- + ---------- +
| 499 | 2015-01-02 | 2015-01-09 | 1 |
| 499 | 2015-01-09 | 2015-01-16 | 2 |
| 499 | 2015-01-16 | 2015-01-23 | 3 |
| 499 | 2015-01-23 | 2015-01-30 | 4 |
+ ------------- + ---------- + ---------- + ---------- +
```
This can be accomplished with a kind of generate series:
```
SELECT
499 investment_id,
@wstart := @startdate + INTERVAL @wseq * 7 DAY week_start,
@wend := @wstart + INTERVAL 7 DAY week_end,
@wseq := @wseq + 1 weeknumber
FROM
information_schema.collations
CROSS JOIN
(SELECT @startdate := (SELECT MIN(DATE(time_stamp))
FROM log_table
WHERE investment_id = 499), @wseq := 0) u
HAVING
week_start <= CURRENT_DATE
```
The cross join initializes the user variables; with a table that has enough rows, you can generate a series from those user variables. information\_schema.collations is always available and has over 200 rows.
If you put this in a subquery, the table with investment views can be joined on the investment\_id and the time\_stamp between week\_start and week\_end.
This will result in:
```
SELECT
s.weeknumber,
COUNT(v.user_id) views
FROM
(
SELECT
499 investment_id,
@wstart := @startdate + INTERVAL @wseq * 7 DAY week_start,
@wend := @wstart + INTERVAL 7 DAY week_end,
@wseq := @wseq + 1 weeknumber
FROM
information_schema.collations
CROSS JOIN
(SELECT @startdate := (SELECT MIN(DATE(time_stamp))
FROM log_table
WHERE investment_id = 499), @wseq := 0) u
HAVING
week_start <= CURRENT_DATE
) s
LEFT JOIN
log_table v
ON s.investment_id = v.investment_id
AND v.time_stamp >= s.week_start
AND v.time_stamp < s.week_end
GROUP BY s.weeknumber
```
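The same week-bucketing idea works in SQLite via Python's `sqlite3` (hypothetical `log_table` from the question; `julianday` replaces MySQL's `DATEDIFF`, and the creation date `2015-01-01` is hard-coded, as in the question's workaround):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_table (investment_id INT, user_id INT, time_stamp TEXT)")
views = ["2015-01-22 09:00:42", "2015-01-21 09:00:42", "2015-01-20 09:00:42",
         "2015-01-14 09:00:42", "2015-01-14 09:00:42", "2015-01-10 09:00:42",
         "2015-01-06 09:00:42", "2015-01-02 09:00:42", "2015-01-01 09:00:42",
         "2015-01-01 09:00:42"]
conn.executemany("INSERT INTO log_table VALUES (499, 0, ?)", [(v,) for v in views])

# Days since creation, divided by 7 and truncated, gives the week bucket.
rows = conn.execute("""
    SELECT CAST((julianday(date(time_stamp)) - julianday('2015-01-01')) / 7
                AS INTEGER) + 1 AS week,
           COUNT(*) AS views
    FROM log_table
    WHERE investment_id = 499
    GROUP BY week
    ORDER BY week
""").fetchall()
```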
|
something like this?
```
select week(time_stamp) as wk, count(investment_id)
from log_table
where investment_id = 499
group by wk
```
|
Count of views week by week
|
[
"",
"mysql",
"sql",
""
] |
I am trying to select top 2 records from a database table result that looks like this
```
SubjectId | StudentId | Levelid | total
------------------------------------------
1 | 1 | 1 | 89
1 | 2 | 1 | 77
1 | 3 | 1 | 61
2 | 4 | 1 | 60
2 | 5 | 1 | 55
2 | 6 | 1 | 45
```
i tried this query
```
SELECT rv.subjectid,
rv.total,
rv.Studentid,
rv.levelid
FROM ResultView rv
LEFT JOIN ResultView rv2
ON ( rv.subjectid = rv2.subjectid
AND
rv.total <= rv2.total )
GROUP BY rv.subjectid,
rv.total,
rv.Studentid
HAVING COUNT( * ) <= 2
order by rv.subjectid desc
```
but some subjects were missing. I even tried the suggestion from the following link:
[How to select the first N rows of each group?](https://stackoverflow.com/questions/17991943/sqlite-how-to-select-first-n-row-of-each-group)
but I get more than two rows for each subjectid.
What am I doing wrong?
|
You could use a correlated subquery:
```
select *
from ResultView rv1
where SubjectId || '-' || StudentId || '-' || LevelId in
(
select SubjectId || '-' || StudentId || '-' || LevelId
from ResultView rv2
where SubjectID = rv1.SubjectID
order by
total desc
limit 2
)
```
This query constructs a single-column primary key by concatenating three columns. If you have a real primary key (like `ResultViewID`) you can substitute that for `SubjectId || '-' || StudentId || '-' || LevelId`.
[Example at SQL Fiddle.](http://sqlfiddle.com/#!5/ecfd4/3/0)
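Here is a runnable check of this approach with Python's `sqlite3` (the question's `ResultView` is recreated as a plain table for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ResultView (SubjectId INT, StudentId INT, LevelId INT, total INT);
INSERT INTO ResultView VALUES
    (1,1,1,89),(1,2,1,77),(1,3,1,61),
    (2,4,1,60),(2,5,1,55),(2,6,1,45);
""")

# For each row, keep it only if its synthetic key is among the top 2
# rows (by total) for the same SubjectId.
rows = conn.execute("""
    SELECT SubjectId, StudentId, total
    FROM ResultView rv1
    WHERE SubjectId || '-' || StudentId || '-' || LevelId IN (
        SELECT SubjectId || '-' || StudentId || '-' || LevelId
        FROM ResultView rv2
        WHERE SubjectId = rv1.SubjectId
        ORDER BY total DESC
        LIMIT 2
    )
    ORDER BY SubjectId, total DESC
""").fetchall()
```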
|
I hope I'm understanding your question correctly. Let me know if this is correct:
I recreated your table:
```
CREATE TABLE stack (
SubjectId INTEGER(10),
StudentId INTEGER(10),
Levelid INTEGER(10),
total INTEGER(10)
)
;
```
Inserted values
```
INSERT INTO stack VALUES
(1,1,1,89),
(1,2,1,77),
(1,3,1,61),
(2,4,1,60),
(2,5,1,55),
(2,6,1,45)
;
```
If you're trying to get the top group by Levelid (ordered by the total field, assuming StudentID as primary key):
```
SELECT *
FROM stack AS a
WHERE a.StudentID IN (
SELECT b.StudentID
FROM stack AS b
WHERE a.levelid = b.levelid
ORDER BY b.total DESC
LIMIT 2
)
;
```
Yields this result:
```
SubjectId | StudentId | Levelid | total
1 | 1 | 1 | 89
1 | 2 | 1 | 77
```
Example of top 2 by SubjectId, ordered by total:
```
SELECT *
FROM stack AS a
WHERE a.StudentID IN (
SELECT b.StudentID
FROM stack AS b
WHERE a.subjectID = b.subjectID
ORDER BY b.total DESC
LIMIT 2
)
;
```
Result:
```
SubjectId | StudentId | Levelid | total
1 | 1 | 1 | 89
1 | 2 | 1 | 77
2 | 4 | 1 | 60
2 | 5 | 1 | 55
```
I hope that was the answer you were looking for.
|
select top n records from each group in sqlite
|
[
"",
"sql",
"sqlite",
"greatest-n-per-group",
""
] |
I have this query:
```
Select t1.col5,
t1.col6,
       SUM(CASE WHEN t2.col1 = 'A' THEN t2.col2
                WHEN t2.col1 = 'B' THEN t2.col2 * -1
                ELSE 0
           END) AS price,
       SUM(CASE WHEN t2.col1 = 'C' THEN t2.col2 + 100
                ELSE 0
           END) AS bonus
FROM t1, t2
WHERE t1.col1 = t2.col4
AND (price + bonus ) <> 0
GROUP BY t1.col5,
t1.col6
```
But I can't use `(price + bonus) <> 0`; I get the message: `price: invalid identifier`.
How can I use the aliases of my SUMs in a WHERE clause?
|
You can't use column aliases (or, for that matter, aggregates) in the `WHERE` clause. So what you want to do is use a subquery OR put the aggregates in a `HAVING` clause. I'll show the subquery below, which I think is easier since you don't have to repeat the aggregate expressions (in a `HAVING` clause you must use the aggregates themselves, not their aliases):
```
SELECT * FROM (
Select t1.col5,
t1.col6,
       SUM(CASE WHEN t2.col1 = 'A' THEN t2.col2
                WHEN t2.col1 = 'B' THEN t2.col2 * -1
                ELSE 0
           END) AS price,
       SUM(CASE WHEN t2.col1 = 'C' THEN t2.col2 + 100
                ELSE 0
           END) AS bonus
FROM t1, t2
WHERE t1.col1 = t2.col4
GROUP BY t1.col5,
t1.col6
) WHERE (price + bonus ) <> 0
```
|
Constraining on aggregate functions needs to be done in a `HAVING` clause:
```
Select t1.col5,
t1.col6,
       SUM(CASE WHEN t2.col1 = 'A' THEN t2.col2
                WHEN t2.col1 = 'B' THEN t2.col2 * -1
                ELSE 0
           END) AS price,
       SUM(CASE WHEN t2.col1 = 'C' THEN t2.col2 + 100
                ELSE 0
           END) AS bonus
FROM t1, t2
WHERE t1.col1 = t2.col4
GROUP BY t1.col5,
t1.col6
having SUM(CASE WHEN t2.col1 = 'A' THEN t2.col2
                WHEN t2.col1 = 'B' THEN t2.col2 * -1
                ELSE 0
           END)
     + SUM(CASE WHEN t2.col1 = 'C' THEN t2.col2 + 100
                ELSE 0
           END) <> 0
```
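The `WHERE` vs `HAVING` distinction in miniature, via Python's `sqlite3` (toy table; the point is only that a constraint on an aggregate belongs in `HAVING`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (grp TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("a", 5), ("a", -5), ("b", 3), ("b", 4)])

# Group 'a' sums to 0 and is filtered out by the HAVING clause.
rows = conn.execute("""
    SELECT grp, SUM(amount) AS total
    FROM sales
    GROUP BY grp
    HAVING SUM(amount) <> 0
""").fetchall()
```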
|
Using the function SUM?
|
[
"",
"sql",
"sum",
""
] |
I have a table like this -
```
id cat_id
1 1,2
2 1,3
3 5,3,11
```
And I want to get a count of the rows that have `cat_id = 1` (for example, in the table above the count is 2). How can I do it?
|
**To get count of `cat_id = 1`**
```
select count(*)
from (
select unnest(string_to_array(cat_id, ','))::int rowz
from foo ) t
where rowz = 1
```
**To get all `cat_id` values and their *total counts***
```
select rowz cat_id,count(*) total_count from (
select unnest(string_to_array(cat_id, ','))::int rowz
from foo ) t
group by rowz
order by cat_id
```
|
You can use `LIKE`, for example:
```
SELECT COUNT(*)
FROM tbl
WHERE cat_id LIKE '1,%'
```
*If the string starts with something other than 1 (e.g. 5)*, as asked by @POHH:
```
SELECT COUNT(*)
FROM tbl
WHERE cat_id LIKE '%,1,%'
OR cat_id LIKE '1,%'
OR cat_id LIKE '%,1'
```
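Here is a quick check of these `LIKE` patterns against the question's sample rows with Python's `sqlite3`. One extra pattern is added here (`cat_id = '1'`) to cover a row whose list is exactly `1`; note it is the comma-delimited patterns that keep `11` from matching `1`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INT, cat_id TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)",
                 [(1, "1,2"), (2, "1,3"), (3, "5,3,11")])

# Match '1' at the start, middle, or end of the list, or alone.
count = conn.execute("""
    SELECT COUNT(*) FROM tbl
    WHERE cat_id LIKE '1,%' OR cat_id LIKE '%,1,%'
       OR cat_id LIKE '%,1' OR cat_id = '1'
""").fetchone()[0]
```

Storing comma-separated ids defeats indexing, though; if possible, a separate child table with one `cat_id` per row is the more robust design.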
|
How to get rows having specified value in sql?
|
[
"",
"sql",
"database",
"postgresql",
""
] |
I have a table 'order' containing summary order information and a table 'orderItem' with each item of the order.
My issue is: when selecting the Sum of column 'orderQTY' from the 'Order' table, I get an incorrect total if I join the orderItem table.
The first query below gives me the correct total.
However, as soon as I add the join to orderItem, the sum result is incorrectly duplicating the 'orderqty' column for each orderitem record.
> nb: I know the below doesn't utilise the join and isn't necessary.
> I've removed the clauses referring to the joined table to simplify the
> question.
```
--RETURNS Correct value
select sum(o.orderqty)
from [order] o
--RETURNS the sum containing duplicates of o.orderqty
select sum(o.orderqty)
from [order] o
join OrderItem oi on o.Id = oi.OrderId
```
-- adding clarification: ----
I want to sum the column 'orderqty' from table 'order' while joining to orderItem, e.g.:
There would be multiple orderItems for each Order, but I obviously only want to count the orderqty from the order table once per order.
```
select sum(o.ordertotal)
from [order] o with(NOLOCK)
join OrderItem oi on o.Id = oi.OrderId
where oi.mycolumn = 1
```
or would i need to do something like:
```
select sum(o.ordertotal)
from [order] o with(NOLOCK)
where o.id in (select orderid from orderitem where x = y)
```
|
It returns different results because the `join` multiplies the number of rows or filters out rows, both of which affect the sum. It is unclear what you really want to do. If you only want the sum of the quantities of orders that have orderlines, then use exists:
```
select sum(o.orderqty)
from [order] o
where exists (select 1
from OrderItem oi
where o.Id = oi.OrderId
);
```
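The difference between the join and the `EXISTS` version can be seen directly with Python's `sqlite3` (hypothetical quantities; order 1 has two items, so the join counts its qty twice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE "order" (Id INTEGER PRIMARY KEY, orderqty INTEGER);
INSERT INTO "order" VALUES (1, 10), (2, 20), (3, 30);
CREATE TABLE OrderItem (Id INTEGER PRIMARY KEY, OrderId INTEGER);
INSERT INTO OrderItem VALUES (1, 1), (2, 1), (3, 2);
""")

# Join: order 1's qty is duplicated per matching item (10+10+20 = 40).
joined = conn.execute("""
    SELECT SUM(o.orderqty) FROM "order" o
    JOIN OrderItem oi ON o.Id = oi.OrderId
""").fetchone()[0]

# EXISTS: each qualifying order counted once (10+20 = 30).
exists_sum = conn.execute("""
    SELECT SUM(o.orderqty) FROM "order" o
    WHERE EXISTS (SELECT 1 FROM OrderItem oi WHERE oi.OrderId = o.Id)
""").fetchone()[0]
```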
|
You can sum once per grouping (in this case order.id) using the row\_number function:
```
select sum(orderqty)
from (
select
case row_number() over(partition by o.Id order by o.id)
when 1 then orderqty
else 0
end as orderqty
from [order] o
join OrderItem oi on o.Id = oi.OrderId
) o
```
|
SQL SUM Query returns duplicate rows in result
|
[
"",
"sql",
"sql-server",
""
] |
Please tell me what I am missing in this example.
I have a stored procedure where I provide a default value of an empty string for a parameter. When I check the value, it comes back as `NULL`. I'm not sure if the default value is not being applied, or my `ISNULL` check is wrong.
Here is a simple example to illustrate:
```
CREATE PROCEDURE TestDefaultValueParam
@vcTestName varchar(100)= ''
AS
BEGIN
IF ISNULL(@vcTestName, '') = ''
BEGIN
PRINT '@vcTestName IS NULL'
END
ELSE
BEGIN
PRINT '@vcTestName NOT NULL'
END
END
```
I have tried the following 3 commands:
```
EXEC [dbo].[TestDefaultValueParam] @vcTestName = NULL
EXEC [dbo].[TestDefaultValueParam] @vcTestName = ''
EXEC [dbo].[TestDefaultValueParam]
```
The result was `@vcTestName IS NULL` for all three even though there is a default empty string provided in the stored procedure.
|
Your `IF` statement in effect says: if the value is an empty string OR null, then treat it as an empty string (and print "IS NULL"). In all three of your EXEC statements, you're passing in either NULL or an empty string, therefore hitting your ISNULL logic.
```
declare @string varchar(20)   -- never set, so @string is NULL
select isnull(@string, ''),   -- '' is returned
       @string                -- NULL is returned
what this means is, if @string is null, treat it as an empty string. Then you're checking for empty string and printing "string is null"
There's no code issue here, I just think you're having a brain fart maybe?
|
Why not just use `IS NULL` if that is what you want?
```
CREATE PROCEDURE TestDefaultValueParam
@vcTestName varchar(100)= ''
AS
BEGIN
IF @vcTestName IS NULL
BEGIN
PRINT '@vcTestName IS NULL'
END
ELSE
BEGIN
PRINT '@vcTestName NOT NULL'
END
END;
```
Your logic is equivalent to:
```
IF @vcTestName IS NULL OR @vcTestName = ''
```
It is testing for both values.
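The two checks behave differently only for the empty string, which is easy to see from Python via `sqlite3` (SQLite's `IFNULL` plays the role of SQL Server's `ISNULL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def isnull_blank(v):
    # ISNULL-style check: true for NULL *and* for the empty string
    return conn.execute("SELECT IFNULL(?, '') = ''", (v,)).fetchone()[0]

def is_null(v):
    # IS NULL check: true only for NULL
    return conn.execute("SELECT ? IS NULL", (v,)).fetchone()[0]
```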
|
Stored Procedure Default Value Not Being Set
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I'd like to select all the rows in the `plot` table where it exists in `watchlist`. For instance, with the example I've uploaded to [SQLFiddle](http://sqlfiddle.com/#!2/c54c01/7), I should be able to return rows `3`, `5` and `8` in `plot` with the query, because they exist in `watchlist`. The problem is, I am not sure how to go about it. Any ideas?
This is what I've done so far:
```
SELECT id, p_id, area, jobs from plot WHERE code="SA" AND p_id="3";
```
But this only selects one row. I understand it would require a subquery of some sort, i.e. replacing `WHERE code="SA" AND p_id="3";` with a reference to the **watchlist** table.
|
You can use `INNER JOIN` to check this:
```
SELECT t1.id
, t1.p_id
, t1.area
, t1.jobs
FROM plot t1
JOIN watchlist t2 ON t1.p_id = t2.p_id
```
[**SQLFiddle**](http://sqlfiddle.com/#!2/c54c01/15)
|
**Returns ROWS 3, 5, and 8**
```
SELECT *
FROM plot, watchlist
WHERE plot.p_id = watchlist.p_id;
```
|
Select multiple values from one table based on values in another
|
[
"",
"mysql",
"sql",
"join",
""
] |
The following is a sample INSERT statement:
```
INSERT INTO
Foo (c1,c2)
VALUES
(a,1)
,(b,2)
,(c,3)
```
How do I insert these values to produce the following result, without redundant insert statements?
```
c1 | c2
-------
a |1
a |1
a |1
a |1
a |1
a |1
b |2
b |2
b |2
b |2
b |2
b |2
c |3
c |3
c |3
c |3
c |3
c |3
```
|
You can use dynamic sql to replicate your insert x times:
```
declare @sql nvarchar(max)
select @sql = replicate('
INSERT INTO
Foo (c1,c2)
VALUES
(''a'',1)
,(''b'',2)
,(''c'',3)',6)
exec(@sql)
select * from Foo order by c1,c2
```
Alternatively, you can loop until you have the number of desired inserts:
```
while (select count(*) from Foo where c1 = 'a') < 6
begin
INSERT INTO
Foo (c1,c2)
VALUES
('a',1)
,('b',2)
,('c',3)
end
select * from Foo order by c1,c2
```
And yet another option would be:
```
INSERT INTO
Foo (c1,c2)
VALUES
('a',1)
,('b',2)
,('c',3)
GO 6
```
|
After inserting those values, use a recursive CTE to do this:
```
;with cte as
(
select c1,c2,1 as id from foo
union all
select c1,c2,id+1 from cte where id<5
)
Insert into Foo (c1,c2)
select c1,c2 from cte
```
Or do a cross join with a `numbers` table. If you don't have a `numbers` table, use `master.dbo.spt_values`:
```
Insert into Foo(c1,c2)
SELECT c1, c2
FROM Foo
CROSS JOIN (SELECT number
FROM master.dbo.spt_values
WHERE type = 'P'
AND number BETWEEN 1 AND 5) T
```
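The numbers-table trick translates to other engines too; here is a sketch in SQLite via Python, generating the multiplier with a recursive CTE instead of `master.dbo.spt_values` (6 copies of each row, as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Foo (c1 TEXT, c2 INTEGER)")

# Cross join the three source rows against a generated 1..6 series,
# so each (c1, c2) pair is inserted six times in a single statement.
conn.execute("""
    WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM n WHERE i < 6),
         base(c1, c2) AS (VALUES ('a', 1), ('b', 2), ('c', 3))
    INSERT INTO Foo (c1, c2)
    SELECT base.c1, base.c2 FROM base CROSS JOIN n
""")

total = conn.execute("SELECT COUNT(*) FROM Foo").fetchone()[0]
per_a = conn.execute("SELECT COUNT(*) FROM Foo WHERE c1 = 'a'").fetchone()[0]
```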
|
Insert Duplicates Of Distinct Value In One SQL Statement
|
[
"",
"sql",
"sql-server",
"insert",
""
] |
## Situation
* I am new to the QuickBooks world.
* I have a `.qbw` file -> CompanyName.qbw
* It's a huge file that contain almost everything about my company.
* I want to query `some` data out of that file - NOT all, but some. :)
Let's say, I want to query only the `inventory report`.
* Within that inventory report, I only want 3 fields.
1. product\_id
2. name
3. availability
I've been looking for a tool everywhere, but I could not find anything that does exactly what I want.
Can someone get me started on this?
Can anybody at least point me in the right direction?
I am not exactly sure if what I am trying to do is possible.
|
Not a direct answer, but some direction as you requested:
The QODBC driver available here (<http://qodbc.com>) should allow you to access your qbw file like a SQL database and perform queries against it. There is a free trial, but it looks like you'll need to pay $150-$500 to buy this driver if you find that it works and you want to use it long term.
As for querying a specific report like inventory, I don't know, but there are plenty of tutorials around in the form of blog posts and YouTube videos that should help you figure out how to use QODBC for your purpose.
|
Intuit (the people that make QuickBooks) offers something called the QuickBooks SDK specifically for situations like yours.
* <https://developer.intuit.com/docs/0250_qb>
It's free to download. Download, install, and you can connect to QuickBooks and extract any data you want or run any reports you want.
There's about 600 pages of documentation available in PDF format:
* <https://developer-static.intuit.com/qbSDK-current/doc/PDF/QBSDK_ProGuide.pdf>
And you'll probably also want to look at the QuickBooks OSR, which shows all of the XML messages you can send QuickBooks, along with the XML responses you get back and even some auto-generated sample code:
* <https://developer-static.intuit.com/qbSDK-current/Common/newOSR/index.html>
If you post more details (e.g. what language are you developing in?) we could post sample code for you too.
|
How to query some data out of a QuickBooks (.qbw) file?
|
[
"",
"sql",
"quickbooks",
"quickbooks-online",
""
] |
I have a table with data, and now I need a select statement that returns a constant if there are no matching records in the table. I need to use it in a stored procedure.
```
-- If no records exists in below select statement
SELECT ID,Text,Date FROM tblData WHERE ID = 12
IF (@@ROWCOUNT = 0)
BEGIN
SELECT -5 AS ID
END
```
**Output:**
```
ID Text Date
ID
-5
```
**Expected output**
```
ID
-5
```
|
```
SELECT (CASE WHEN Ta.ID IS NULL THEN TBL.ID
ELSE Ta.ID END) AS ID,Ta.Text,Ta.Date
FROM (VALUES(-5)) AS TBL(ID)
LEFT JOIN
(
SELECT ID,Text,Date FROM tblData WHERE ID = 12
)
AS Ta ON Ta.ID = Ta.ID
```
|
I always use an Exists statement.
```
if exists(SELECT ID FROM tblData WHERE ID = 12)
    select 1 as RowsExist
else
    select 0 as RowsExist
```
|
Select statement to return constant when no records found in table in SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
""
] |
I have a stored procedures that accepts a comma separated list, then makes copies of that list with quoted strings and brackets, then uses those in a dynamic sql statement to build a pivot table with flexible number of columns.
My problem is that sometimes my users submit a list with duplicates, and that makes the pivot query fail. So I want to somehow select distinct from my comma separated strings.
Here's how I manipulate the initial string:
```
Declare @data varchar(max) = '150593, 150593, 150603, 150578, 150604'
Declare @bracketed varchar(max) = ''
Declare @quoted varchar(max) = ''
select @bracketed = @bracketed + quotename(rtrim(ltrim(Value))) + ', ',
@quoted = @quoted + quotename(rtrim(ltrim(Value)), '''') + ', '
from [dbo].[fnSplitStringAsTable](@data, ',')
Select @bracketed = LEFT(@bracketed, len(@bracketed) - 1),
@quoted = LEFT(@quoted, len(@quoted) - 1)
```
I'm thinking I should be able to add DISTINCT somewhere in this query,
but I can't make it work. How can I select distinct from comma separated lists?
|
I guess we can add in distinct after you make the table, like this:
```
select @bracketed = @bracketed + quotename(rtrim(ltrim(Value))) + ', ',
@quoted = @quoted + quotename(rtrim(ltrim(Value)), '''') + ', '
from (
SELECT DISTINCT Value FROM [dbo].[fnSplitStringAsTable](@data, ',')
) T
```
If this fails try this:
```
select @bracketed = @bracketed + quotename(Value) + ', ',
@quoted = @quoted + quotename(Value, '''') + ', '
from (
SELECT DISTINCT RTRIM(LTRIM(Value)) AS Value FROM [dbo].[fnSplitStringAsTable](@data, ',')
) T
```
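If the deduplication can happen before the string reaches SQL at all, a few lines of application-side Python do the same job (the names here are illustrative, mirroring the stored procedure's `@bracketed`/`@quoted` variables):

```python
def dedupe_csv(data):
    """Split a comma-separated string, trimming whitespace and
    dropping duplicates while preserving first-seen order."""
    seen = []
    for item in (v.strip() for v in data.split(",")):
        if item and item not in seen:
            seen.append(item)
    return seen

values = dedupe_csv("150593, 150593, 150603, 150578, 150604")
bracketed = ", ".join("[" + v + "]" for v in values)
quoted = ", ".join("'" + v + "'" for v in values)
```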
|
As an alternative solution, you can dedupe in XML and convert back to varchar:
```
Declare @data varchar(max) = '150593, 150593, 150603, 150578, 150604'
set @data= (select '''' + cast(cast('<d>'+replace(@data, ', ',',</d><d>')+'</d>' as xml).query('distinct-values(/d)') as varchar) +'''')
select @data
```
|
SQL create distinct comma separated lists
|
[
"",
"sql",
"sql-server",
"csv",
"pivot",
""
] |
The date in my table looks like **2010-06-16 00:00:00.000**
When I use the below regular expression, no rows get selected.
```
select mydate from mytable where mydate like '2010%'
```
However, when I use this one, rows do get selected correctly.
```
select mydate from mytable where mydate like '%2010%'
```
So, I thought that probably leading white spaces are getting added. To take those into account, I used the below query. But this doesn't work either.
```
select mydate from mytable where LTRIM(RTRIM(mydate)) like '2010%'
```
Since only the second one works, does that mean there is something before the 2010? What else, apart from white space, could it be? I tried pasting it into a text file, and don't see anything.
|
If the datatype is `datetime`, you should use the `YEAR()` function to filter the rows instead of `LIKE`. You should not use the `LIKE` operator to filter data of `datetime` type.
If the datatype is `datetime`, then it won't have leading or trailing spaces.
SQL Server has appropriate date functions; use those to filter the rows.
```
select mydate from mytable where year(mydate) = 2010
```
or even `Datepart`
```
select mydate from mytable where Datepart(YYYY,mydate) = 2010
```
|
You have to convert `datetime` that first in `varchar` to use `like`
```
select mydate from mytable
WHERE CONVERT(VARCHAR(10),mydate,120) like '2010%'
```
[DEMO](http://sqlfiddle.com/#!3/26ecfb/3)
|
regular expression not working on the datetime column
|
[
"",
"sql",
"sql-server",
""
] |
```
select round((cast(56/3 AS DECIMAL (4,2))),1)
```
is showing 18 as output instead of 19, whereas the actual value is 18.66.
My `ROUND` function is not working.
Please help.
|
The problem is `56/3` is an *integer* calculation which means no floating point numbers.
You need to use floating point numbers in the initial calculation e.g. `56.0/3`. Also, if you want to round to 19 then you need to round to the nearest *whole* number, `ROUND(x, 1)` will round to the first decimal place - you need to pass `0` to round up to 19.
```
SELECT ROUND((CAST(56.0/3 AS DECIMAL (4,2))),0)
```
Alternatively, you could switch [ROUND](https://msdn.microsoft.com/en-gb/library/ms175003.aspx) for [CEILING](https://msdn.microsoft.com/en-us/library/ms189818.aspx)
```
select CEILING(CAST(56.0/3 AS DECIMAL(4,2)))
```
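The same integer-division pitfall is easy to reproduce outside SQL Server. Here is a small sketch using Python's `sqlite3` (SQLite follows the same rule: two integer operands give a truncated integer quotient before any rounding can happen):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Integer operands: the division truncates to 18 before ROUND sees it.
int_div = con.execute("SELECT ROUND(56 / 3, 1)").fetchone()[0]

# Make one operand a float and the quotient keeps its fraction.
real_div = con.execute("SELECT ROUND(56.0 / 3, 1)").fetchone()[0]

# Rounding to 0 decimal places gives the nearest whole number.
whole = con.execute("SELECT ROUND(56.0 / 3, 0)").fetchone()[0]

print(int_div, real_div, whole)  # 18.0 18.7 19.0
```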
|
Your section of the code:
```
CAST( 56/3 AS DECIMAL )
```
First evaluates the `56/3` which returns 18. This is then cast to decimal, giving 18.0.
You need to cast either the numerator or denominator before the division occurs.
The `round` function is working fine -- it's just not using the input you think it is.
|
sql server round
|
[
"",
"sql",
"asp.net",
"sql-server",
""
] |
In Oracle, if I make a composite index on 2 columns, then in which situation this index will be used to search the record ?
a) If my query has a WHERE clause which involves first column
e.g. WHERE first\_column = 'John'
b) If my query has a WHERE clause which involves second column
e.g. WHERE second\_column = 'Sharma'
c) Either a or b
d) Both a and b
e) Not specifically these 2 columns but it could be any column in the WHERE clause.
f) Only column a or both columns a and b
|
**d) Both a and b**
If the leading column is used, Oracle will likely use a regular index range scan and just ignore the unused columns.
If a non-leading column is used, Oracle can use an [index skip scan](https://docs.oracle.com/database/121/TGSQL/tgsql_optop.htm#TGSQL238). In practice a skip scan is not used very often.
---
There are two completely different questions here: when *can* Oracle use an index and when *will* Oracle use an index. The above explains that Oracle *can* use an index in either case, and you can test that out with a hint: `/*+ index(table_name index_name) */`.
Determining when Oracle *will* use an index is much trickier. Oracle uses multi-block reads for full table scans and fast full index scans, and uses single-block reads for other index scans. This means a full table scan is more efficient when reading a larger percent of the data. But there are a lot of factors involved: the percentage of data, how big is the index, system statistics that tell Oracle how fast single- and multi-block IO are, the number of distinct values (especially important for choosing a skip scan), index clustering factor (how ordered is the table by the index columns), etc.
|
I happen to think that MySQL does a pretty good job of describing how composite indexes are used. The documentation is [here](http://dev.mysql.com/doc/refman/5.7/en/multiple-column-indexes.html).
The basic idea is that the index would normally be used in the following circumstances:
* When the `where` condition is an equality on `col1` (`col1 = value`).
* When the `where` condition is an inequality or `in` on `col1` (`col1 in (list)`, `col1 < value`)
* When the `where` condition is an equality on `col1` and `col2`, connected by an `and` (`col1 = val1 and col2 = val2`)
* When the `where` condition is an equality on `col1` and an inequality or `in` on `col2`.
* Any of the above four cases where additional columns are used with additional conditions on other columns, connected by an `and`.
In addition, the index would normally be used if `col1` and `col2` are the only columns referenced in the query. This is called a covering index, and -- assuming there are other columns in the table -- it is faster to read the index than the original table because the index is smaller.
Oracle has a pretty smart optimizer, so it might also use the index in some related circumstances, for instance when `col1` uses an `in` condition along with a condition on `col2`.
In general, a condition will not qualify for an index if the column is an argument to a function. So, these clauses would not use a basic index:
```
where month(col1) = 3
where trunc(col1) = trunc(sysdate)
where abs(col1) < 1
```
Oracle supports functional indexes, so if these constructs are actually important, you can create an index on `month(col1)`, `trunc(col1)`, or `abs(col1)`.
Also, `or` tends to make the use of indexes less likely.
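SQLite exposes similar planner behavior through `EXPLAIN QUERY PLAN`, so the leading-column and covering-index cases can be observed directly. A sketch in Python (the table and index names here are made up for the illustration; exact plan wording can vary by SQLite version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (first_name TEXT, last_name TEXT, age INT)")
con.execute("CREATE INDEX idx_name ON people (first_name, last_name)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry a human-readable "detail" column.
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Equality on the leading column: the composite index can be searched.
leading = plan("SELECT * FROM people WHERE first_name = 'John'")

# Only indexed columns referenced: a covering index scan is possible.
covering = plan("SELECT first_name, last_name FROM people")

print(leading)
print(covering)
```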
|
In Oracle, if I make a composite index on 2 columns, then in which situation this index will be used to search the record?
|
[
"",
"sql",
"oracle",
"indexing",
""
] |
I need to make row numbering with ordering, partitioning and grouping: ordering by `IdDocument, DateChange`, partitioning by `IdDocument` and grouping by `IdRole`. The problem lies specifically in the grouping. As can be seen from the example (`NumberingExpected`), `DENSE_RANK()` looks like the best function for this purpose, but it only repeats numbers when the values used for ordering are the same. In my case the values used for ordering (`IdDocument, DateChange`) are always different, and the repetition of numbering must be driven by `IdRole`.
Sure it could be solved by the usage of cursor very easy. But is there any way to make it with numbering/ranking functions?
Test data:
```
declare @LogTest as table (
Id INT
,IdRole INT
,DateChange DATETIME
,IdDocument INT
,NumberingExpected INT
)
insert into @LogTest
select 1 as Id, 7 as IdRole, GETDATE() as DateChange, 13 as IdDocument, 1 as NumberingExpected
union
select 2, 3, DATEADD(HH, 1, GETDATE()), 13, 2
union
select 3, 3, DATEADD(HH, 2, GETDATE()), 13, 2
union
select 4, 3, DATEADD(HH, 3, GETDATE()), 13, 2
union
select 5, 5, DATEADD(HH, 4, GETDATE()), 13, 3
union
select 7, 3, DATEADD(HH, 6, GETDATE()), 13, 4
union
select 6, 3, DATEADD(HH, 5, GETDATE()), 27, 1
union
select 8, 3, DATEADD(HH, 7, GETDATE()), 27, 1
union
select 9, 5, DATEADD(HH, 8, GETDATE()), 27, 2
union
select 10, 3, DATEADD(HH, 9, GETDATE()), 27, 3
select * from @LogTest order by IdDocument, DateChange;
```
Explanation as a step-by-step procedure:
1. Order data by IdDocument, DateChange
2. Set the first row number as i = 1 and go to the next row
3. If IdDocument has changed { i = 1; }
   else { if IdRole has changed { i++; } }
4. Set the row number to i
5. Go to the next row
6. If EOF { exit; } else { go to step 3; }
|
Since 2012 you could use `LAG`/`LEAD`, but in 2008 it is not available, so we'll emulate it. Performance could be poor, you should check with your actual data.
This is the final query:
```
WITH
CTE_rn
AS
(
SELECT
Main.IdRole
,Main.IdDocument
,Main.DateChange
,ROW_NUMBER() OVER(PARTITION BY Main.IdDocument ORDER BY Main.DateChange) AS rn
FROM
@LogTest AS Main
OUTER APPLY
(
SELECT TOP (1) T.IdRole
FROM @LogTest AS T
WHERE
T.IdDocument = Main.IdDocument
AND T.DateChange < Main.DateChange
ORDER BY T.DateChange DESC
) AS Prev
WHERE Main.IdRole <> Prev.IdRole OR Prev.IdRole IS NULL
)
SELECT *
FROM
@LogTest AS LT
CROSS APPLY
(
SELECT TOP(1) CTE_rn.rn
FROM CTE_rn
WHERE
CTE_rn.IdDocument = LT.IdDocument
AND CTE_rn.IdRole = LT.IdRole
AND CTE_rn.DateChange <= LT.DateChange
ORDER BY CTE_rn.DateChange DESC
) CA_rn
ORDER BY IdDocument, DateChange;
```
Final Result set:
```
Id IdRole DateChange IdDocument NumberingExpected rn
1 7 2015-01-26 20:00:41.210 13 1 1
2 3 2015-01-26 21:00:41.210 13 2 2
3 3 2015-01-26 22:00:41.210 13 2 2
4 3 2015-01-26 23:00:41.210 13 2 2
5 5 2015-01-27 00:00:41.210 13 3 3
7 3 2015-01-27 02:00:41.210 13 4 4
6 3 2015-01-27 01:00:41.210 27 1 1
8 3 2015-01-27 03:00:41.210 27 1 1
9 5 2015-01-27 04:00:41.210 27 2 2
10 3 2015-01-27 05:00:41.210 27 3 3
```
## How it works
1) We need the value of IdRole from the previous row when the table is ordered by IdDocument and DateChange. To get it we use `OUTER APPLY` (because `LAG` is not available):
```
SELECT *
FROM
@LogTest AS Main
OUTER APPLY
(
SELECT TOP (1) T.IdRole
FROM @LogTest AS T
WHERE
T.IdDocument = Main.IdDocument
AND T.DateChange < Main.DateChange
ORDER BY T.DateChange DESC
) AS Prev
ORDER BY Main.IdDocument, Main.DateChange;
```
This is result set of this first step:
```
Id IdRole DateChange IdDocument NumberingExpected IdRole
1 7 2015-01-26 20:50:32.560 13 1 NULL
2 3 2015-01-26 21:50:32.560 13 2 7
3 3 2015-01-26 22:50:32.560 13 2 3
4 3 2015-01-26 23:50:32.560 13 2 3
5 5 2015-01-27 00:50:32.560 13 3 3
7 3 2015-01-27 02:50:32.560 13 4 5
6 3 2015-01-27 01:50:32.560 27 1 NULL
8 3 2015-01-27 03:50:32.560 27 1 3
9 5 2015-01-27 04:50:32.560 27 2 3
10 3 2015-01-27 05:50:32.560 27 3 5
```
2) We want to remove rows with repeating IdRole, so we add a `WHERE` and number the rows. You can see that row numbers follow the expected result:
```
SELECT
Main.IdRole
,Main.IdDocument
,Main.DateChange
,ROW_NUMBER() OVER(PARTITION BY Main.IdDocument ORDER BY Main.DateChange) AS rn
FROM
@LogTest AS Main
OUTER APPLY
(
SELECT TOP (1) T.IdRole
FROM @LogTest AS T
WHERE
T.IdDocument = Main.IdDocument
AND T.DateChange < Main.DateChange
ORDER BY T.DateChange DESC
) AS Prev
WHERE Main.IdRole <> Prev.IdRole OR Prev.IdRole IS NULL
;
```
This is result set of this step (it becomes the CTE):
```
IdRole IdDocument DateChange rn
7 13 2015-01-26 20:13:26.247 1
3 13 2015-01-26 21:13:26.247 2
5 13 2015-01-27 00:13:26.247 3
3 13 2015-01-27 02:13:26.247 4
3 27 2015-01-27 01:13:26.247 1
5 27 2015-01-27 04:13:26.247 2
3 27 2015-01-27 05:13:26.247 3
```
3) Finally, we need to get the correct row number from CTE for each row of the original table. I use `CROSS APPLY` to get one row from CTE for each row of the original table.
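For comparison, on SQL Server 2012+ (or any engine with `LAG`, e.g. SQLite ≥ 3.25) the same numbering can be done without the `APPLY` emulation: `LAG` marks rows where `IdRole` changes and a running `SUM` of those marks is the group number. A sketch demonstrated in SQLite via Python, using the question's data (timestamps simplified):

```python
import sqlite3

rows = [
    (1, 7, "2015-01-26 20:00", 13, 1), (2, 3, "2015-01-26 21:00", 13, 2),
    (3, 3, "2015-01-26 22:00", 13, 2), (4, 3, "2015-01-26 23:00", 13, 2),
    (5, 5, "2015-01-27 00:00", 13, 3), (7, 3, "2015-01-27 02:00", 13, 4),
    (6, 3, "2015-01-27 01:00", 27, 1), (8, 3, "2015-01-27 03:00", 27, 1),
    (9, 5, "2015-01-27 04:00", 27, 2), (10, 3, "2015-01-27 05:00", 27, 3),
]
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LogTest (Id INT, IdRole INT, DateChange TEXT,"
            " IdDocument INT, Expected INT)")
con.executemany("INSERT INTO LogTest VALUES (?,?,?,?,?)", rows)

result = con.execute("""
    SELECT Id, Expected,
           SUM(chg) OVER (PARTITION BY IdDocument ORDER BY DateChange) AS rn
    FROM (
        -- chg = 1 on the first row of a document or when IdRole changes
        SELECT Id, IdDocument, DateChange, Expected,
               CASE WHEN IdRole = LAG(IdRole) OVER
                        (PARTITION BY IdDocument ORDER BY DateChange)
                    THEN 0 ELSE 1 END AS chg
        FROM LogTest
    )
    ORDER BY IdDocument, DateChange
""").fetchall()

for _id, expected, rn in result:
    assert rn == expected
print("all rows match the expected numbering")
```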
|
This might not be pretty but it does create the required output.
```
; with cte as (
select l.Id,l.IdRole,l.IdDocument,l.NumberingExpected,l.DateChange,
(select min(x.DateChange) from @LogTest x where x.IdDocument = l.IdDocument and x.IdRole = l.IdRole and x.id<=l.id and
x.id > (select max(y.id) from @LogTest y where y.IdDocument = l.IdDocument and y.IdRole <> l.IdRole and y.id <=l.Id)) as DateChange2
from @LogTest l
)
select c.Id,c.IdRole,c.DateChange,c.IdDocument,c.NumberingExpected,dense_rank() over (partition by c.IdDocument order by c.DateChange2) as rn
from cte c order by c.IdDocument, c.DateChange;
```
If I had some more time I think the x.id predicate in the CTE could be improved.
|
How to make row numbering with ordering, partitioning and grouping
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I need to select the employees that registered in the company this year; the field is of date type (field name: `datemp`), so I used:
```
SELECT employee_name
FROM employees
WHERE datemp BETWEEN to_date(01-JAN-2015) AND to_date (31-DEC-2015)
```
Is there another way to express that the year of a date-type field equals the current year?
|
You can pull it from `sysdate`
```
SELECT employee_name
FROM employees
WHERE EXTRACT(year FROM datemp) = EXTRACT(year FROM sysdate)
```
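The same extract-the-year idea translates to other engines. For example, in SQLite (shown here via Python; SQLite stores dates as text, so the equivalent of `EXTRACT` is `strftime('%Y', ...)`; the sample names and dates are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (employee_name TEXT, datemp TEXT)")
con.executemany("INSERT INTO employees VALUES (?, ?)", [
    ("Alice", "2015-03-14"),
    ("Bob",   "2014-12-31"),
    ("Carol", "2015-11-02"),
])

# Extract the year from the date column and compare it to the target year.
rows = con.execute(
    "SELECT employee_name FROM employees "
    "WHERE CAST(strftime('%Y', datemp) AS INT) = 2015 "
    "ORDER BY employee_name"
).fetchall()
print(rows)  # [('Alice',), ('Carol',)]
```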
|
You can always do:
```
select employee_name
from emploees
where trunc(datemp, 'yyyy') = to_date('01/01/2015', 'dd/mm/yyyy'); -- or trunc(sysdate, 'yyyy') if you need it to refer to this year
```
(or even ... `to_char(datemp, 'yyyy') = '2015';`)
Bear in mind that if you do use a function on the datemp column, any indexes on that column will no longer be used (unless there's a function-based index that matches the new function on the column).
|
How to select actual year in Oracle Date?
|
[
"",
"sql",
"oracle",
"oracle10g",
"oracle-sqldeveloper",
""
] |
I have the following query:
```
select * from [lead].[ContactFeedback] cf
where cf.LeadId in
(select LeadId from [lead].[LeadDetails]
where TeleportReference in (122096,
122097,
122098))
order by LeadId Desc
```
The results are something like this :
```
FeedbackDate LeadId
2015-01-23 16:25:13.547 95920
2015-01-23 16:25:38.960 95919
2015-01-23 16:25:19.393 95917
2015-01-23 16:25:32.837 95916
2015-01-23 16:25:59.840 95914
2015-01-23 16:26:08.840 95913
2015-01-23 16:15:01.933 95910
2015-01-23 16:22:04.820 95910
2015-01-23 16:24:40.477 95909
2015-01-23 16:24:03.523 95908
2015-01-23 16:16:44.290 95908
2015-01-23 16:17:16.047 95907
2015-01-23 16:25:11.783 95907
```
I want to get the top 1 (most recent FeedbackDate) row for each LeadId. How can I achieve this in SQL Server?
|
```
select LeadId, Max(FeedbackDate)
from [lead].[ContactFeedback] cf
where cf.LeadId in
(select LeadId from [lead].[LeadDetails]
where TeleportReference in (122096, 122097, 122098))
group by LeadId
order by LeadId Desc
```
|
Use `Window Function`
```
select leadid,feedbackdate from (
select row_number() over(partition by leadid order by feedbackdate desc) rn,*
from yourtable) a
where rn=1
```
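The two approaches can be checked side by side on toy data. A sketch in Python/SQLite (window functions need SQLite ≥ 3.25; the table name `feedback` is illustrative). Note the window-function variant can return the *whole* latest row, not just the date:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE feedback (LeadId INT, FeedbackDate TEXT)")
con.executemany("INSERT INTO feedback VALUES (?, ?)", [
    (95910, "2015-01-23 16:15:01"),
    (95910, "2015-01-23 16:22:04"),
    (95908, "2015-01-23 16:24:03"),
    (95908, "2015-01-23 16:16:44"),
])

# GROUP BY / MAX: one row per LeadId with its latest date.
grouped = con.execute(
    "SELECT LeadId, MAX(FeedbackDate) FROM feedback "
    "GROUP BY LeadId ORDER BY LeadId DESC"
).fetchall()

# ROW_NUMBER variant: rank rows per LeadId, keep rank 1.
windowed = con.execute(
    "SELECT LeadId, FeedbackDate FROM ("
    "  SELECT *, ROW_NUMBER() OVER"
    "    (PARTITION BY LeadId ORDER BY FeedbackDate DESC) AS rn"
    "  FROM feedback) WHERE rn = 1 ORDER BY LeadId DESC"
).fetchall()

print(grouped)
print(windowed)  # same rows either way on this data
```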
|
SQL - Select Top 1 multiple lines of the same id
|
[
"",
"sql",
"sql-server",
""
] |
I have written a stored procedure for inserting data into my table. These are my table's columns with their datatype:
```
Ad nvarchar(150),
Yazar nvarchar(150),
SayfaSayisi smallint,
KategoriId int,
Gmc datetime,
HostName nvarchar(150)
```
The problem is that `Gmc` and `HostName` have their own default values. So I can't use these two in the stored procedure.
```
Gmc ---> GetDate() (to get insert date)
HostName --> Host_Name( )
```
So when I execute the query I am getting this error.
> There are more columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement
This is the query
```
Create proc Kitap_Insert
@Ad nvarchar(150),
@Yazar nvarchar(150),
@SayfaSayisi smallint,
@KategoriId int
Gmc datetime,
HostName nvarchar(150)
as
Insert into Kitap(Id, Ad, Yazar, SayfaSayisi, KategoriId)
values(@Ad, @Yazar, @SayfaSayisi, @KategoriId)
```
What is the proper way of doing this?
|
You need to remove `ID` from the insert list
```
Insert into Kitap(Ad,Yazar,SayfaSayisi,KategoriId)
values(@Ad,@Yazar,@SayfaSayisi,@KategoriId)
```
or add a value for it as below
```
Insert into Kitap(Id,Ad,Yazar,SayfaSayisi,KategoriId)
values(@ID, @Ad,@Yazar,@SayfaSayisi,@KategoriId)
```
|
Instead of :
```
Insert into Kitap(Id,Ad,Yazar,SayfaSayisi,KategoriId)
values(@Ad,@Yazar,@SayfaSayisi,@KategoriId)
```
Use:
```
INSERT INTO Kitap(Ad,Yazar,SayfaSayisi,KategoriId)
VALUES (@Ad,@Yazar,@SayfaSayisi,@KategoriId)
```
You are telling the SQL engine that you will provide `Id` (an additional field) as well - a field that is auto-incremented or otherwise has no matching parameter - while not providing a value for it, hence your error: `There are more columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement.`
So remove the additional `Id` from your insert query.
|
Confusion about Stored Procedure
|
[
"",
"sql",
"stored-procedures",
""
] |
Does "group by" clause automatically guarantee that the results will be ordered by that key? In other words, is it enough to write:
```
select *
from table
group by a, b, c
```
or does one have to write
```
select *
from table
group by a, b, c
order by a, b, c
```
I know that e.g. in MySQL I don't have to, but I would like to know if I can rely on it across SQL implementations. Is it guaranteed?
|
`group by` does not necessarily order the data. A DB is designed to grab the data as fast as possible and only sort when necessary.
So add the `order by` if you need a guaranteed order.
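A quick way to see the distinction on toy data (sketch in Python/SQLite): without `ORDER BY` you merely *observe* some order, which no standard guarantees; with `ORDER BY` the order is part of the contract.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT, b INT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(3, 1), (1, 2), (2, 3), (1, 4)])

# Without ORDER BY the engine may return the groups in any order it likes.
unordered = con.execute("SELECT a, SUM(b) FROM t GROUP BY a").fetchall()

# With ORDER BY the sorted order is guaranteed.
ordered = con.execute("SELECT a, SUM(b) FROM t GROUP BY a ORDER BY a").fetchall()

print(unordered)  # some order - implementation-defined
print(ordered)    # [(1, 6), (2, 3), (3, 1)] - guaranteed
```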
|
> An efficient implementation of group by would perform the group-ing by sorting the data internally. That's why some RDBMS return sorted output when group-ing. Yet, the SQL specs don't mandate that behavior, so unless explicitly documented by the RDBMS vendor I wouldn't bet on it to work (tomorrow). OTOH, if the RDBMS implicitly does a sort it might also be smart enough to then optimize (away) the redundant order by. [@jimmyb](https://stackoverflow.com/users/1015327/jimmyb)
An example using PostgreSQL proving that concept
Creating a table with 1M records, with random dates in a day range from today - 90 and indexing by date
```
CREATE TABLE WITHDRAW AS
SELECT (random()*1000000)::integer AS IDT_WITHDRAW,
md5(random()::text) AS NAM_PERSON,
(NOW() - ( random() * (NOW() + '90 days' - NOW()) ))::timestamp AS DAT_CREATION, -- from today back to 90 days ago
(random() * 1000)::decimal(12, 2) AS NUM_VALUE
FROM generate_series(1,1000000);
CREATE INDEX WITHDRAW_DAT_CREATION ON WITHDRAW(DAT_CREATION);
```
Grouping by date truncated by day of month, restricting select by dates in a two days range
```
EXPLAIN
SELECT
DATE_TRUNC('DAY', W.dat_creation), COUNT(1), SUM(W.NUM_VALUE)
FROM WITHDRAW W
WHERE W.dat_creation >= (NOW() - INTERVAL '2 DAY')::timestamp
AND W.dat_creation < (NOW() - INTERVAL '1 DAY')::timestamp
GROUP BY 1
HashAggregate (cost=11428.33..11594.13 rows=11053 width=48)
Group Key: date_trunc('DAY'::text, dat_creation)
-> Bitmap Heap Scan on withdraw w (cost=237.73..11345.44 rows=11053 width=14)
Recheck Cond: ((dat_creation >= ((now() - '2 days'::interval))::timestamp without time zone) AND (dat_creation < ((now() - '1 day'::interval))::timestamp without time zone))
-> Bitmap Index Scan on withdraw_dat_creation (cost=0.00..234.97 rows=11053 width=0)
Index Cond: ((dat_creation >= ((now() - '2 days'::interval))::timestamp without time zone) AND (dat_creation < ((now() - '1 day'::interval))::timestamp without time zone))
```
Using a larger restriction date range, it chooses to apply a **SORT**
```
EXPLAIN
SELECT
DATE_TRUNC('DAY', W.dat_creation), COUNT(1), SUM(W.NUM_VALUE)
FROM WITHDRAW W
WHERE W.dat_creation >= (NOW() - INTERVAL '60 DAY')::timestamp
AND W.dat_creation < (NOW() - INTERVAL '1 DAY')::timestamp
GROUP BY 1
GroupAggregate (cost=116522.65..132918.32 rows=655827 width=48)
Group Key: (date_trunc('DAY'::text, dat_creation))
-> Sort (cost=116522.65..118162.22 rows=655827 width=14)
Sort Key: (date_trunc('DAY'::text, dat_creation))
-> Seq Scan on withdraw w (cost=0.00..41949.57 rows=655827 width=14)
Filter: ((dat_creation >= ((now() - '60 days'::interval))::timestamp without time zone) AND (dat_creation < ((now() - '1 day'::interval))::timestamp without time zone))
```
Just by adding `ORDER BY 1` at the end (there is no significant difference)
```
GroupAggregate (cost=116522.44..132918.06 rows=655825 width=48)
Group Key: (date_trunc('DAY'::text, dat_creation))
-> Sort (cost=116522.44..118162.00 rows=655825 width=14)
Sort Key: (date_trunc('DAY'::text, dat_creation))
-> Seq Scan on withdraw w (cost=0.00..41949.56 rows=655825 width=14)
Filter: ((dat_creation >= ((now() - '60 days'::interval))::timestamp without time zone) AND (dat_creation < ((now() - '1 day'::interval))::timestamp without time zone))
```
PostgreSQL 10.3
|
Does "group by" automatically guarantee "order by"?
|
[
"",
"sql",
"database",
"group-by",
"database-agnostic",
""
] |
This one is not easy at all it seems.
I have a table **data**:
```
String ticker, Double price, Date time
--------------------------------------
```
How can I given the above table:
**SELECT ticker WHERE price has increased x percent AND time BETWEEN '2014-10-01' AND '2014-10-31'** ?
What one needs to do is for each ticker, determine the last and first value and divide them.
I have tried this but it is not working for obvious reasons:
```
SELECT * FROM (
SELECT ticker, min(ctid) as min, max(ctid) as max
FROM data
WHERE
time BETWEEN '2014-10-01' AND '2014-10-31'
GROUP by ticker, time
ORDER by ticker, time ASC
) X
WHERE
1.05 < (
SELECT value
FROM data
WHERE
time BETWEEN '2014-10-01' AND '2014-10-31'
AND
ticker = X.ticker
AND
ctid = X.min
)
/
(
SELECT value
FROM data
WHERE
time BETWEEN '2014-10-01' AND '2014-10-31'
AND
ticker = X.ticker
AND
ctid = X.max
)
```
The above query is grouped by ticker and time, but the min and max should be over the entire date range. I do get several tickers back as a result, so I am not sure what is actually going on there.
I have also investigated doing this through WINDOW functions, there is a sample that's also not working:
```
SELECT
ticker,
first_value(price) over W as first, last_value(price) over W as last
FROM data
WHERE
time BETWEEN '2014-10-01' AND '2014-10-31'
WINDOW W as (
partition by ticker, time
)
ORDER BY ticker, time ASC
```
Does anyone know how to do this kind of query, on any database?
I am using PostgreSQL, which is why you are seeing **ctid**; in other databases it is similar to **ROW\_ID**.
But this problem is not related to PostgreSQL only.
**Dataset:**
```
create table data
(
ticker varchar(5),
price numeric(5,2),
time date);
insert into data (ticker, price, time) values ('ABC',1,'2014-10-01');
insert into data (ticker, price, time) values ('ABC',0.95,'2014-10-02');
insert into data (ticker, price, time) values ('ABC',1,'2014-10-03');
insert into data (ticker, price, time) values ('ABC',1.04,'2014-10-04');
insert into data (ticker, price, time) values ('ABC',1.05,'2014-10-05');
insert into data (ticker, price, time) values ('ABC',1.06,'2014-10-06');
insert into data (ticker, price, time) values ('ABC',1.07,'2014-10-07');
insert into data (ticker, price, time) values ('ABC',1.09,'2014-10-08');
insert into data (ticker, price, time) values ('ABC',2,'2014-10-09');
insert into data (ticker, price, time) values ('ABC',2,'2014-10-10');
insert into data (ticker, price, time) values ('ABC',1.9,'2014-10-11');
insert into data (ticker, price, time) values ('ABC',1.8,'2014-10-12');
insert into data (ticker, price, time) values ('ABC',1.7,'2014-10-13');
insert into data (ticker, price, time) values ('ABC',1.6,'2014-10-14');
insert into data (ticker, price, time) values ('ABC',1.5,'2014-10-15');
insert into data (ticker, price, time) values ('ABC',1.4,'2014-10-16');
insert into data (ticker, price, time) values ('ABC',1.6,'2014-10-17');
insert into data (ticker, price, time) values ('ABC',1.4,'2014-10-18');
insert into data (ticker, price, time) values ('ABC',1.3,'2014-10-19');
insert into data (ticker, price, time) values ('ABC',1.2,'2014-10-31');
insert into data (ticker, price, time) values ('XYZ',.95,'2014-10-01');
insert into data (ticker, price, time) values ('XYZ',1,'2014-10-31');
insert into data (ticker, price, time) values ('PDQ',1.4,'2014-10-01');
insert into data (ticker, price, time) values ('PDQ',1.3,'2014-10-31');
insert into data (ticker, price, time) values ('XKCD',.01,'2014-10-01');
insert into data (ticker, price, time) values ('XKCD',100,'2014-10-31');
insert into data (ticker, price, time) values ('Z8T',1,'2014-10-01');
insert into data (ticker, price, time) values ('Z8T',1.04,'2014-10-31');
```
|
In PostgreSQL, we can efficiently perform such a computational query like this.
Note that we first select all the tickers from another table, Ticker, but this could be DISTINCT(ticker) on the data table as well.
```
SELECT A.name, A.symbol, B.value AS start, C.value AS last, D.percent
FROM ticker A,
lateral ( SELECT B.price AS value FROM data B WHERE B.time BETWEEN '2014-12-01' AND '2014-12-10' AND B.ticker = A.id ORDER BY B.time ASC LIMIT 1 ) AS B,
lateral ( SELECT C.price AS value FROM data C WHERE C.time BETWEEN '2014-12-01' AND '2014-12-10' AND C.ticker = A.id ORDER BY C.time DESC LIMIT 1 ) AS C,
lateral ( SELECT C.value / B.value AS percent ) AS D
WHERE D.percent > 1.05
ORDER BY D.percent DESC
```
|
Probably overkill using a cross join, but I thought you might be after any time in the range where a price compared to a future price for the same ticker was greater than 5%:
```
SELECT DS.Ticker, ds.Time as StartTime, ds.Price StartPrice,
De.Time as EndTime, De.Price as endPrice,
De.price/ds.price-1 as Growth
FROM data dS
CROSS JOIN data dE
WHERE dE > DS
and De.Price/DS.Price>=1.05
and DS.Ticker = DE.ticker
and DS.Time = '2014-10-01'
and De.Time = '2014-10-31'
```
I'm not sure how your `BETWEEN` fits into this now though... or why you would be using it.
<http://sqlfiddle.com/#!15/733d7/1/0>
Which is why I thought...
```
SELECT DS.Ticker, ds.Time as StartTime, ds.Price StartPrice,
De.Time as EndTime, De.Price as endPrice,
De.price/ds.price-1 as Growth
FROM data dS
CROSS JOIN data dE
WHERE dE > DS
and De.Price/DS.Price>=1.05
and DS.Ticker = DE.ticker
and DS.Time between '2014-10-01' and '2014-10-31'
and DE.Time between '2014-10-01' and '2014-10-31'
```
might be what you're after instead.
|
Selecting all tickers where price has increased by 5 percent within a time range
|
[
"",
"sql",
"database",
""
] |
I have a PostgreSQL database table called "user\_links" which currently allows the following duplicate fields:
```
year, user_id, sid, cid
```
The unique constraint is currently the first field called "id", however I am now looking to add a constraint to make sure the `year`, `user_id`, `sid` and `cid` are all unique but I cannot apply the constraint because duplicate values already exist which violate this constraint.
Is there a way to find all duplicates?
|
The basic idea will be using a nested query with count aggregation:
```
select * from yourTable ou
where (select count(*) from yourTable inr
where inr.sid = ou.sid) > 1
```
You can adjust the where clause in the inner query to narrow the search.
---
There is another good solution for that mentioned in the comments, (but not everyone reads them):
```
select Column1, Column2, count(*)
from yourTable
group by Column1, Column2
HAVING count(*) > 1
```
Or shorter:
```
SELECT (yourTable.*)::text, count(*)
FROM yourTable
GROUP BY yourTable.*
HAVING count(*) > 1
```
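The `GROUP BY` / `HAVING` form is easy to verify on toy data before adding the constraint. A sketch via Python/SQLite, using the question's column names (sample values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_links "
            "(id INTEGER PRIMARY KEY, year INT, user_id INT, sid INT, cid INT)")
con.executemany(
    "INSERT INTO user_links (year, user_id, sid, cid) VALUES (?, ?, ?, ?)",
    [(2015, 1, 10, 100),
     (2015, 1, 10, 100),   # duplicate of the row above
     (2015, 2, 20, 200)],
)

# Group on the columns of the intended unique constraint;
# any group with more than one row is a violation.
dupes = con.execute(
    "SELECT year, user_id, sid, cid, COUNT(*) "
    "FROM user_links GROUP BY year, user_id, sid, cid "
    "HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [(2015, 1, 10, 100, 2)]
```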
|
From "[Find duplicate rows with PostgreSQL](https://stackoverflow.com/questions/14471179/find-duplicate-rows-with-postgresql/14471928#14471928)" here's smart solution:
```
select * from (
SELECT id,
ROW_NUMBER() OVER(PARTITION BY column1, column2 ORDER BY id asc) AS Row
FROM tbl
) dups
where
dups.Row > 1
```
|
How to find duplicate records in PostgreSQL
|
[
"",
"sql",
"postgresql",
"duplicates",
""
] |
I have the following query I'd like to run on my database:
```
SELECT
u.UserId, u.FullName, u.Location, csr.SponsorId
FROM
[User] u
LEFT JOIN
(SELECT
csr.SponsorId
FROM
ClubSponsorRelation csr
WHERE
csr.ClubId = @clubId) AS csr ON u.UserId = csr.SponsorId
WHERE
u.UserType = 'Sponsor'
AND csr.SponsorId IS NULL
```
This basically tries to run an excluding left join: all Users not in the ClubSponsorRelation table will be returned.
My question is in regards to the `WHERE u.UserType = 'Sponsor'` line. Will SQL Server take this into consideration before the Left Join, or after?
If it applies the `WHERE` after the left join, how can I rewrite this query so that it will only apply the left join on users with the UserType 'Sponsor'? Is a left join even the most performant way? The User and ClubSponsorRelation tables will become pretty big over time and the query will probably be run often.
|
Give this a try. Use `Not Exists` to find the users; since you want users who are *not* present in `ClubSponsorRelation`, selecting `csr.SponsorId` doesn't make any sense to me.
```
SELECT u.UserId,
u.FullName,
u.Location
FROM [User] U
WHERE NOT EXISTS (SELECT 1
FROM ClubSponsorRelation csr
WHERE u.UserId = csr.SponsorId
AND csr.ClubId = @clubId)
AND u.UserType = 'Sponsor'
```
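The anti-join pattern is easy to demonstrate on a toy schema. A sketch in Python/SQLite, assuming user 1 is the only sponsor related to club 42 (all sample values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE User (UserId INT, UserType TEXT)")
con.execute("CREATE TABLE ClubSponsorRelation (SponsorId INT, ClubId INT)")
con.executemany("INSERT INTO User VALUES (?, ?)",
                [(1, "Sponsor"), (2, "Sponsor"), (3, "Member")])
con.execute("INSERT INTO ClubSponsorRelation VALUES (1, 42)")

# Sponsors with no relation row for club 42: only user 2 qualifies.
rows = con.execute("""
    SELECT u.UserId FROM User u
    WHERE u.UserType = 'Sponsor'
      AND NOT EXISTS (SELECT 1 FROM ClubSponsorRelation csr
                      WHERE csr.SponsorId = u.UserId AND csr.ClubId = ?)
""", (42,)).fetchall()
print(rows)  # [(2,)]
```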
|
This is up to the execution engine. The easiest way to check is to let the server generate an execution plan for you - for example, in Management Studio, check `Include actual execution plan`. That will give you a good idea of how the query is actually going to be run, and why.
Note that the reasoning is quite complex, and in many cases might seem counter-intuitive - for example, if the statistics show that the query is going to touch most of the rows, it might ignore indices etc. If you want reasonable results, you want to run this on a realistic (and realistically scaled) data and on a properly maintained database.
And for a bit of code review - there's no need to join on a "subquery". Instead, just use a join with two conditions:
```
left join ClubSponsorRelation csr on csr.ClubId = @clubId and u.UserId = csr.SponsorId
```
There are fewer reasons with each new MS SQL version to ever use subqueries. But of course, profiling is king - there are too many variables to make reliable guesses in complex scenarios.
Another important thing to understand is that we're only talking about possible performance issues here - the statement must not depend on order of evaluations etc. That's part of the whole set/relational algebra SQL is built on.
|
How does SQL Server handle the following query?
|
[
"",
"sql",
"sql-server",
""
] |
Hello, I'm trying to find out how to exclude columns from a SELECT statement.
I have 2 tables joined.
First Table:
```
|--ID--|--Name--|--Date 1--|--Date 2--|
```
Second Table:
```
|--ID--|--Tax--|--Total--|
```
Output:
```
|--ID--|--Name--|--Date 1--|--Date 2--|--ID--|--Tax--|--Total--|
```
This is the output what i want:
```
|--Name--|--Tax--|--Total|
```
Here's my Code:
```
Dim loadq As String =
"SELECT HelperEmpEarns.*, HelperEmpDed.*" & _
"FROM (HelperEmpEarns RIGHT JOIN HelperEmpDed ON HelperEmpEarns.EmpPID = HelperEmpDed.EmpPID) " & _
"WHERE [Position] LIKE '%Helper%' AND [Pday1] >= # " & MetroDateTime1.Value.ToShortDateString & "# AND [Pday2] <= # " & MetroDateTime2.Value.ToShortDateString & "# AND HelperEmpEarns.EmpPID = HelperEmpDed.EmpPID"
```
|
You can make your select statement more specific and get rid of the wildcard(s) \*
```
SELECT firstTable.Name, SecondTable.Tax, SecondTable.Total
...
```
Take a look at [w3schools SQL SELECT Statement](http://www.w3schools.com/sql/sql_select.asp) for more information about the SELECT statement.
|
Just enumerate the fields you want to show, instead of putting \*, which means "all fields":
```
Dim loadq As String =
"SELECT HelperEmpEarns.[Name], HelperEmpDed.Tax, HelperEmpDed.Total" & _
"FROM (HelperEmpEarns RIGHT JOIN HelperEmpDed ON HelperEmpEarns.EmpPID = HelperEmpDed.EmpPID) " & _
"WHERE [Position] LIKE '%Helper%' AND [Pday1] >= # " & MetroDateTime1.Value.ToShortDateString & "# AND [Pday2] <= # " & MetroDateTime2.Value.ToShortDateString & "# AND HelperEmpEarns.EmpPID = HelperEmpDed.EmpPID"
```
I put [Name] in square brackets to help identify it as a field name rather than a SQL keyword.
|
Excluding Columns From SELECT Statement
|
[
"",
"sql",
"vb.net",
"ms-access-2007",
""
] |
I have data something like below.
```
ID | Value
------------
1 | A
1 | B
1 | C
2 | A
2 | C
3 | C
```
I'm trying to get the IDs which have A but not B.
In this case, I should only get ID 2 as a result.
If I query like
```
SELECT DISTINCT ID
FROM TABLE
WHERE (VALUE = 'A' AND VALUE <> 'B')
```
I get 1 and 2 as a result.
Is this even possible from this database structure?
|
You need to exclude any IDs that contain the value 'B'. You can do this with a left join and a check for null... i.e. no match.
```
select d.*
from data d
left join (select id from data where value = 'B') x --exclude these
on d.id = x.id
where x.id is null
and d.value = 'A'
```
|
Even though you accepted an answer, I'll write another variant. I think it is easier to write and understand. In terms of performance you have to check how different variants work on your system with your real data.
```
DECLARE @T TABLE (ID int, Value char(1));
INSERT INTO @T (ID, Value) VALUES (1, 'A');
INSERT INTO @T (ID, Value) VALUES (1, 'B');
INSERT INTO @T (ID, Value) VALUES (1, 'C');
INSERT INTO @T (ID, Value) VALUES (2, 'A');
INSERT INTO @T (ID, Value) VALUES (2, 'C');
INSERT INTO @T (ID, Value) VALUES (3, 'C');
SELECT ID FROM @T WHERE Value = 'A'
EXCEPT
SELECT ID FROM @T WHERE Value = 'B'
```
Result set:
```
ID
2
```
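The `EXCEPT` form runs unchanged in most engines. A quick check in SQLite via Python on the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, value TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "A"), (1, "B"), (1, "C"),
    (2, "A"), (2, "C"),
    (3, "C"),
])

# IDs having 'A', minus IDs having 'B'.
rows = con.execute(
    "SELECT id FROM t WHERE value = 'A' "
    "EXCEPT "
    "SELECT id FROM t WHERE value = 'B'"
).fetchall()
print(rows)  # [(2,)]
```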
|
T-SQL Query - How to not include some results from different row
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am new to the `RANK` and `CTE` queries. The following is my query:
```
SELECT
*
,ROW_NUMBER() OVER(PARTITION BY name, Product ORDER BY Date desc) AS 'RANK'
FROM
Config
WHERE
Name
IN
(SELECT
name
FROM
Config
GROUP BY
Name
,Product
,Amount
,Date
HAVING
COUNT(name) >1)
AND
Product
IN
(SELECT
Product
FROM
Config
GROUP BY
Name
,Product
,Amount
,Date
HAVING
COUNT(name) >1)
```
The following are sample results and satisfy the conditions:
```
ID|name|Product|Amount|Date |RANK|
---------------------------------------
1 |a |0000 |1 |2015-01-01| 1 |
2 |a |0000 |1 |2015-01-01| 2 |
3 |a |1111 |0 |2015-01-01| 1 |
4 |a |1111 |0 |2015-01-01| 2 |
5 |b |2222 |1 |2015-01-01| 1 |
6 |b |2222 |1 |2015-01-01| 2 |
7 |b |3333 |0 |2015-01-01| 1 |
8 |b |3333 |0 |2015-01-01| 2 |
```
What is the best way to `DELETE` the records that satisfy `RANK > 1`?
|
From the looks of it, you want to delete duplicate entries for `Name` and `Product` leaving the newest one. You can simplify your query with this:
```
SELECT
*,
RN = ROW_NUMBER() OVER(PARTITION BY Name, Product ORDER BY [Date] DESC)
FROM Config
```
You can then put it in a CTE and execute the `DELETE`.
```
CREATE TABLE Config(
ID INT,
Name VARCHAR(10),
Product VARCHAR(10),
Amount INT,
[Date] DATE
)
INSERT INTO Config VALUES
(1, 'a', '0000', 1, '2015-01-01'),
(2, 'a', '0000', 1, '2015-01-01'),
(3, 'a', '1111', 0, '2015-01-01'),
(4, 'a', '1111', 0, '2015-01-01'),
(5, 'b', '2222', 1, '2015-01-01'),
(6, 'b', '2222', 1, '2015-01-01'),
(7, 'b', '3333', 0, '2015-01-01'),
(8, 'b', '3333', 0, '2015-01-01');
;WITH CTE AS(
SELECT
*,
RN = ROW_NUMBER() OVER(PARTITION BY Name, Product ORDER BY [Date] DESC)
FROM Config
)
DELETE FROM CTE WHERE RN > 1
SELECT * FROM Config
```
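The `DELETE` through a CTE is SQL Server-specific. On engines without updatable CTEs, the same dedup can be phrased as a grouped anti-join; here is a sketch in SQLite via Python that keeps the lowest ID per (Name, Product) — note the original keeps the newest `Date` instead, so this illustrates the pattern rather than being a drop-in replacement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Config (ID INTEGER, Name TEXT, Product TEXT, Amount INTEGER, Date TEXT);
    INSERT INTO Config VALUES
        (1,'a','0000',1,'2015-01-01'),(2,'a','0000',1,'2015-01-01'),
        (3,'a','1111',0,'2015-01-01'),(4,'a','1111',0,'2015-01-01'),
        (5,'b','2222',1,'2015-01-01'),(6,'b','2222',1,'2015-01-01'),
        (7,'b','3333',0,'2015-01-01'),(8,'b','3333',0,'2015-01-01');
""")

# Keep one row (the lowest ID) per (Name, Product); delete the rest
conn.execute("""
    DELETE FROM Config
    WHERE ID NOT IN (SELECT MIN(ID) FROM Config GROUP BY Name, Product)
""")

remaining = conn.execute("SELECT ID FROM Config ORDER BY ID").fetchall()
print(remaining)
```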
|
You can use a CTE:
```
with todelete as (<your query here>)
delete from todelete
where [rank] = 1;
```
|
DELETE FROM Table WHERE 'RANK' > 1
|
[
"",
"sql",
"sql-server",
"conditional-statements",
"rank",
"sql-delete",
""
] |
Greetings,
My question: does a SQL query (SELECT) **continue or stop reading data (records) from the table when it finds the value** I was looking for?
Reference: "In order to return data for this query, mysql must start at the beginning of the disk data file, read in enough of the record to know where the category field data starts (because long\_text is variable length), read this value, see if it satisfies the where condition (and so decide whether to add to the return record set), then figure out where the next record set is, then repeat."
Link for reference: <http://www.verynoisy.com/sql-indexing-dummies/#how_the_database_finds_records_normally>
|
Yes, it scans the entire table, unless you put something like
```
select * from user where id=100 limit 1
```
This will of course still scan every row if id 100 happens to be the last record.
If `id` is a primary key it is automatically indexed and the search is optimized.
|
How and when the RDBMS **reads data from disk** is something you cannot know, should not care about, and must not rely on.
The issue is too broad to get a precise answer. The engine reads data from storage in blocks, and a block can contain records that are not needed by the query at hand. If all the columns needed by the query are available in an index, the RDBMS won't even read the data file; it will only use the index. The data it needs could already be cached in memory (because it was read during the execution of a previous query). The underlying OS and the storage media also keep their own caches.
On a busy system, all these factors can lead to very different storage access patterns when running the same query several times a couple of minutes apart.
|
Whether or not SQL query (SELECT) continues or stops reading data from table when find the value
|
[
"",
"mysql",
"sql",
""
] |
I have a PostgreSQL table
```
CREATE TABLE my_table
(
id serial NOT NULL,
name text,
actual boolean DEFAULT false,
  CONSTRAINT my_table_pkey PRIMARY KEY (id)
);
```
How can I set a constraint that only one row can have `actual` flag set to `TRUE`?
|
You can create a unique [partial index](https://www.postgresql.org/docs/current/indexes-partial.html) on that column only for true values:
```
create unique index on my_table (actual)
where actual = true;
```
SQLFiddle: <http://sqlfiddle.com/#!15/91f62/1>
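The same behaviour can be sanity-checked from Python: SQLite also supports partial unique indexes, so a second `actual = 1` row is rejected (a sketch; booleans stored as 0/1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (id INTEGER PRIMARY KEY, name TEXT, actual BOOLEAN DEFAULT 0);
    CREATE UNIQUE INDEX one_actual ON my_table (actual) WHERE actual = 1;
""")

conn.execute("INSERT INTO my_table (name, actual) VALUES ('first', 1)")
conn.execute("INSERT INTO my_table (name, actual) VALUES ('second', 0)")  # fine: not actual

try:
    conn.execute("INSERT INTO my_table (name, actual) VALUES ('third', 1)")
    violated = False
except sqlite3.IntegrityError:
    violated = True  # second actual=1 row rejected by the partial unique index

print(violated)
```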
|
My approach would add another feature to an index-based only solution: automatic deactivation of the current flag when setting the flag on another row.
That would involve, of course a trigger.
I would also recommend, as suggested by Frank Heikens, storing the "not actual" state as `null` instead of `false`. In PostgreSQL, each `null` value is considered distinct from every other `null` value, so the uniqueness constraint is quite easy to satisfy: we can allow only one `true` value, and as many `null` values as necessary.
Here is my implementation:
```
CREATE TABLE my_table
(
id serial NOT NULL,
name text,
actual boolean,
CONSTRAINT my_table_pkey PRIMARY KEY (id),
CONSTRAINT actual_not_false CHECK(actual != false)
);
```
.
```
CREATE UNIQUE INDEX ON my_table USING btree(actual nulls LAST);
```
.
```
CREATE OR REPLACE FUNCTION ensure_only_one_enabled_state_trigger()
RETURNS trigger
AS $function$
BEGIN
-- nothing to do if updating the row currently enabled
IF (TG_OP = 'UPDATE' AND OLD.actual = true) THEN
RETURN NEW;
END IF;
-- disable the currently enabled row
EXECUTE format('UPDATE %I.%I SET actual = null WHERE actual = true;', TG_TABLE_SCHEMA, TG_TABLE_NAME);
-- enable new row
NEW.actual := true;
RETURN NEW;
END;
$function$
LANGUAGE plpgsql;
```
.
```
CREATE TRIGGER my_table_only_one_enabled_state
BEFORE INSERT OR UPDATE OF actual ON my_table
FOR EACH ROW WHEN (NEW.actual = true)
EXECUTE PROCEDURE ensure_only_one_enabled_state_trigger();
```
|
PostgreSQL constraint - only one row can have flag set
|
[
"",
"sql",
"postgresql",
""
] |
I am having trouble understanding this code. It actually seems to work, but I don't understand how the correct activity year and month values are "found" to get the proper min and max. Or is it running through all permutations and taking the highest? This is very strange to me.
I do understand how the dateadd works, just not how the query works as a whole. This may be a bad question since I don't actually need help solving a problem, just insight into why this works.
```
select
EmployeeNumber,
sum(BaseCalculation) as BaseCalculation,
min(dateadd(mm, (ActivityYear - 1900) * 12 + ActivityMonth - 1 , 0)) as StartDate,
max(dateadd(mm, (ActivityYear - 1900) * 12 + ActivityMonth - 1 , 0)) as EndDate
from
Compensation
where
1=1
-- and
group by
EmployeeNumber
```
|
For both the `min` and `max` function calls, the expression is
```
dateadd(mm, (ActivityYear - 1900) * 12 + ActivityMonth - 1 , 0)
```
Your query computes a date for every row of the `Compensation` table using this expression. Then it selects the minimum of those dates as `StartDate` and the maximum as `EndDate`.
This is how the proper max and min are returned.
Note that the dateadd signature is `DATEADD (datepart , number , date )`.
Since the last parameter is 0, you are adding the number of months (mm) calculated by the expression to date 0, and returning the corresponding date.
Check this out for more information : <https://msdn.microsoft.com/en-us/library/ms186819.aspx>
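The month arithmetic can also be mimicked outside SQL to see why taking MIN/MAX of the expression yields the earliest and latest month. A hedged Python sketch (`month_serial_to_date` is a helper invented here, not part of the query):

```python
from datetime import date

def month_serial_to_date(activity_year, activity_month):
    """Mimic DATEADD(mm, (year-1900)*12 + month - 1, 0): months since 1900-01-01."""
    months = (activity_year - 1900) * 12 + activity_month - 1
    return date(1900 + months // 12, months % 12 + 1, 1)

print(month_serial_to_date(2015, 1))   # 2015-01-01
print(month_serial_to_date(1999, 12))  # 1999-12-01
```

Because the month count grows monotonically with (year, month), the smallest and largest counts correspond to the earliest and latest months.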
|
It is converting the columns `ActivityYear` and `ActivityMonth` to a date. It is doing so by counting the number of months since 1900 and adding them to date zero (1900-01-01). This seems like a rather arcane calculation.
Of course, this assumes that `ActivityYear` is a recognizable recent year.
I would convert the year and month to the first day of the beginning of the month, with something like this:
```
min(cast(cast(ActivityYear * 10000 + ActivityMonth * 100 + 1 as varchar(255)) as date))
```
|
How does this aggregated query find the correct values
|
[
"",
"sql",
"sql-server",
""
] |
I have a table where I store items and the time where they are relevant. For this question the following columns are relevant:
```
CREATE TABLE my_items
(
id INTEGER,
category INTEGER,
t DOUBLE
);
```
I want to select all items from a specific category (e.g. 1) and the sets of items that have a time within +- 5 (seconds) from these items.
I will probably do this with two types of queries in a script:
```
SELECT id,t from my_items where category=1;
```
then loop over the result set, using each result row's time as t\_q1, and do a separate query:
```
SELECT id from my_items where t >= t_q1-5 AND t <= t_q1+5;
```
How can I do this in one query?
|
You can use a join. Take your subquery that selects all category 1 items, and join it with the original table on the condition that the time is within +/- five. It's possible that duplicate rows are returned, so you can group by id to avoid that:
```
SELECT t.*
FROM myTable t
JOIN (SELECT id, timeCol FROM myTable WHERE category = 1) t1
ON t.timeCol BETWEEN (t1.timeCol - 5) AND (t1.timeCol + 5)
OR t.id = t1.id
GROUP BY t.id;
```
I added the `OR t.id = t1.id` to make sure that the rows of category 1 are still included.
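A quick check of this join on toy data, using SQLite from Python (an illustration with made-up rows; `t` is the question's time column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_items (id INTEGER, category INTEGER, t REAL);
    INSERT INTO my_items VALUES (1, 1, 100.0), (2, 2, 103.0), (3, 2, 120.0);
""")

# Item 2 is within +/- 5 of the category-1 item; item 3 is not
rows = conn.execute("""
    SELECT t.id
    FROM my_items t
    JOIN (SELECT id, t FROM my_items WHERE category = 1) t1
      ON t.t BETWEEN t1.t - 5 AND t1.t + 5
      OR t.id = t1.id
    GROUP BY t.id
    ORDER BY t.id
""").fetchall()

print(rows)
```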
|
You can use a single query with all you criteria if there is only one table
`SELECT id,t from my_items where category=1 AND t >= t_q1-5 AND t <= t_q1+5;`
If there is two tables, use a right join on the timestamps table for performance.
|
Finding item and dependent set of items from the same mysql table in one query
|
[
"",
"mysql",
"sql",
""
] |
I am trying to sum the value by date. Here is the query I currently have:
```
SELECT SUM(LineTotalValue) AS Value, DateTimeCreated
FROM SOPOrderReturnLine AS SOPOrderReturnLine
WHERE (AnalysisCode1 LIKE 'value%')
GROUP BY DateTimeCreated
ORDER BY DateTimeCreated desc
```
Here is what the data looks like from the results of the query.
```
Value DateTimeCreated
433.00 2015-01-26 15:36:28.723
135.00 2015-01-26 15:36:13.883
600.00 2015-01-26 15:28:14.957
0.00 2015-01-26 14:45:57.920
58.25 2015-01-26 14:45:21.080
39.08 2015-01-26 14:45:13.443
41.56 2015-01-26 14:45:07.010
99.80 2015-01-26 14:44:56.243
99.99 2015-01-26 14:44:48.590
75.00 2015-01-26 14:44:39.647
```
As you can see, because the DateTimeCreated value has the times in it, it won't sum the values correctly. What I want it to do is sum based on the date alone. How can I achieve this? Changing the data or table design is not an option.
|
```
SELECT SUM(LineTotalValue) AS Value,
CONVERT(date, DateTimeCreated)
FROM SOPOrderReturnLine AS SOPOrderReturnLine
WHERE AnalysisCode1 LIKE 'value%'
GROUP BY CONVERT(date, DateTimeCreated)
ORDER BY CONVERT(date, DateTimeCreated) desc
```
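The grouping-by-calendar-date idea can be checked quickly in SQLite from Python, where `date()` plays the role of `CONVERT(date, ...)` (an illustration, not T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SOPOrderReturnLine (LineTotalValue REAL, DateTimeCreated TEXT);
    INSERT INTO SOPOrderReturnLine VALUES
        (433.00, '2015-01-26 15:36:28'),
        (135.00, '2015-01-26 15:36:13'),
        (600.00, '2015-01-25 15:28:14');
""")

# Group on the date part only, so rows with different times on the same day sum together
rows = conn.execute("""
    SELECT SUM(LineTotalValue), date(DateTimeCreated)
    FROM SOPOrderReturnLine
    GROUP BY date(DateTimeCreated)
    ORDER BY date(DateTimeCreated) DESC
""").fetchall()

print(rows)
```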
|
You can try this
```
SELECT SUM(LineTotalValue) AS Value,
CAST(DateTimeCreated AS DATE)
FROM SOPOrderReturnLine AS SOPOrderReturnLine
WHERE AnalysisCode1 LIKE 'value%'
GROUP BY CAST(DateTimeCreated AS DATE)
ORDER BY CAST(DateTimeCreated AS DATE) desc
```
|
SQL server, Order by datetime value
|
[
"",
"sql",
"sql-server",
""
] |
I have some problems with the order of my select and don't know how to fix it.
The hierarchy should look like this:
```
10
100
1001
1003
1004
1007
10010
20
210
2101
220
22100
22101
```
But when I do a usual order by this hierarchy I get it like that:
```
10
20
100
210
220
1001
...
```
Does anyone have an idea how I could get the right order?
|
Your column is of a number data type. Convert it to a string like this
```
select * from your_table
order by cast(your_column as varchar(20))
```
|
OK, got it
```
SELECT *
FROM Table
ORDER BY RIGHT('00000000' + CAST(ID AS VARCHAR),8)
```
|
T SQL - Special order for hierarchy
|
[
"",
"sql",
"sql-server",
""
] |
All I'm trying to do is count how many bookings have requested wheelchair access; I'm not sure of the best way....
```
using (var context = new TalyllynContext())
{
var count = context.Bookings.SqlQuery(" SELECT * FROM dbo.Booking Where Wheelchair_Access = 'true' ").Count();
}
ViewBag.Count = count;
```
This is what I want to happen in the SQL, but I'm not sure how to make ViewBag.Count display the variable, or whether there is a much better solution.
In the view it can be anything, as long as it shows the count value!
|
```
using (var context = new TalyllynContext())
{
ViewBag.Count = context.Bookings.SqlQuery(" SELECT * FROM dbo.Booking Where Wheelchair_Access = 'true' ").Count();
}
return View();
```
|
If you are using razor do something like this.
```
<p>@ViewBag.Count</p>
```
That will display the value in the view if you use the code you showed.
|
Count rows of data and display in view ASP.net MVC
|
[
"",
"sql",
"asp.net",
"asp.net-mvc",
"count",
"data-retrieval",
""
] |
So I'm a bit confused as to why I am receiving this error. Here's my (very simple) query.
```
SELECT * FROM consumer_info, respite INNER JOIN respite ON consumer_info.consumer_id = respite.consumer_id;
```
I've even rewritten it and aliased both fields (`consumer_info.consumer_id` and `respite.consumer_id`) to no avail. I have no clue what could possibly be causing this – any help would be appreciated.
|
You're selecting from a table you're also joining:
```
SELECT *
FROM consumer_info, respite
^^^^^
INNER JOIN respite ...
^^^
```
You can join/use a table multiple times, but each usage of the table MUST have its own unique alias. Try
```
FROM consumer_info, respite
INNER JOIN respite AS somethingelse
^^^^^^^^^^^^^^^^---- table alias
```
then `respite.foo` would be using the table copy listed in the `FROM`, and `somethingelse.foo` would be using the copy listed in the `JOIN`.
|
You are mixing up two join syntaxes - the pre-ANSI and ANSI joins. You should rewrite the statement as follows:
```
SELECT *
FROM consumer_info
INNER JOIN respite ON consumer_info.consumer_id = respite.consumer_id;
```
In pre-ANSI syntax the join would look like this (**not recommended**):
```
SELECT *
FROM consumer_info, respite
WHERE consumer_info.consumer_id = respite.consumer_id;
```
|
Not unique/table alias 'respite'
|
[
"",
"mysql",
"sql",
""
] |
I would like to be able to round SQL time to the nearest hour if the time value is 1 minute away from the nearest hour. (Time is in 24-hour clock format.)
For instance
```
23:59 - > 00:00
11:30 - > 11:30
03:59 - > 04:00
01:15 - > 01:15
12:59 - > 13:00
```
This is what I was able to do so far, but it is only rounding to the nearest 1min.
```
declare @t time
declare @t2 time
set @t = '23:59:00.0000000';
set @t2 = '23:30:00.0000000';
select cast(dateadd(millisecond, 29999, @t) as smalldatetime) as T1, cast(dateadd(millisecond, 29999, @t2) as smalldatetime) as T2
```
|
You can make a decision if you try to add one minute and then test the result:
```
SELECT
cast(dateadd(millisecond, 29999, @t) as smalldatetime) as T1,
cast(dateadd(millisecond, 29999, @t2) as smalldatetime) as T2
,CASE WHEN SUBSTRING(CAST(DATEADD(MINUTE, 1, @t) as nvarchar(15)),4,2) = '00' THEN DATEADD(MINUTE, 1, @t) ELSE @t END
,CASE WHEN SUBSTRING(CAST(DATEADD(MINUTE, 1, @t2) as nvarchar(15)),4,2) = '00' THEN DATEADD(MINUTE, 1, @t2) ELSE @t2 END
```
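The add-a-minute-and-test idea is easier to see outside SQL; here is a small Python sketch of the same logic (the helper name is invented for this example):

```python
from datetime import datetime, timedelta

def round_up_if_one_minute_away(t):
    """Add a minute; if that lands exactly on the hour, use it, else keep t."""
    bumped = t + timedelta(minutes=1)
    return bumped if bumped.minute == 0 else t

print(round_up_if_one_minute_away(datetime(2015, 1, 1, 23, 59)))  # 2015-01-02 00:00
print(round_up_if_one_minute_away(datetime(2015, 1, 1, 11, 30)))  # unchanged
```

Note that 23:59 rolls over to 00:00 of the next day, matching the question's first example.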
|
This seems like a strange requirement -- just getting rid of one particular minute out of 60. But a `case` statement should do what you want:
```
select (case when datepart(hour, time) = 59
then dateadd(hour, datediff(hour, 0, @t) + 1, 0)
else @t
end)
```
|
Round SQL Time to nearest hour if 1min away from nearest hour
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Sorry if the question is dumb, but I have a real problem doing this the right way.
I have a 1st table, `announcements`, with `Title, Description`, where users add announcements.
There is another table with
`id, email, keyword1, keyword2, keyword3, keyword4, keyword5` where users add up to 5 keywords; when such keywords appear in any newly added announcement, I need to send out an email to the address mentioned in this table. So the question is: what is the fastest and most optimal solution to this problem?
Currently, when an announcement is added, I take all rows from the second table, and then for each one I loop and run a query with **LIKE %keyword1%** statements, and this takes long. It will take far too long when I have 20,000 entries, for example.
So what would be the best way? Maybe one query which will list
```
title,keyword,email ??
```
|
Do it in an opposite way. Do not match keywords against your announcements, but match announcements against your table with users+keywords.
When you add a new announcement, build a query that matches the announcement's words directly against the keyword columns in the `users` table. For example, let's assume that your newly added announcement is this:
```
New product! (this is title)
We have added new product, check it out!
```
Now take all words from your announcement:
```
New product We have added check it out
```
and build a query WHERE clause:
```
keyword1 IN ("New","product","We","have","added","check","it", "out") OR
keyword2 IN ("New","product","We","have","added","check","it", "out") OR
keyword3 IN ("New","product","We","have","added","check","it", "out") OR
keyword4 IN ("New","product","We","have","added","check","it", "out") OR
keyword5 IN ("New","product","We","have","added","check","it", "out")
```
Finally make the whole query like this:
```
SELECT * FROM users
WHERE
keyword1 IN ("New","product","We","have","added","check","it", "out") OR
keyword2 IN ("New","product","We","have","added","check","it", "out") OR
keyword3 IN ("New","product","We","have","added","check","it", "out") OR
keyword4 IN ("New","product","We","have","added","check","it", "out") OR
keyword5 IN ("New","product","We","have","added","check","it", "out")
```
Make sure you have INDEX on your keyword1,2,3,4,5 columns. This query will be VERY fast and will return you just the users which match your words from newly added announcement.
Just make sure that the entire query string is not longer than your max packet size (which is usually like 8MB or so)
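A hedged sketch of building that WHERE clause with bound parameters instead of string splicing (Python with SQLite standing in for MySQL; the naive whitespace-and-punctuation tokenizer is an assumption of this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, email TEXT,
                        keyword1 TEXT, keyword2 TEXT, keyword3 TEXT,
                        keyword4 TEXT, keyword5 TEXT);
    INSERT INTO users VALUES
        (1, 'a@example.com', 'product', 'sale', NULL, NULL, NULL),
        (2, 'b@example.com', 'car', NULL, NULL, NULL, NULL);
""")

# Tokenize the announcement (naive: split on whitespace, strip punctuation, lowercase)
announcement = "New product! We have added new product, check it out!"
words = [w.strip("!,.").lower() for w in announcement.split()]

# One "?" placeholder per word, repeated for each of the 5 keyword columns
placeholders = ",".join("?" * len(words))
clause = " OR ".join(f"lower(keyword{i}) IN ({placeholders})" for i in range(1, 6))

matched = conn.execute(f"SELECT email FROM users WHERE {clause}", words * 5).fetchall()
print(matched)
```

Only the placeholder skeleton is built by string formatting; the words themselves are bound as parameters, which avoids quoting and injection issues.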
|
It makes it a little more difficult because the keywords are stored as separate columns on the table. I would recommend moving those to another table; I believe that would also simplify the query.
I think something like this will work out
```
select * from users u
inner join announcement a on instr(a.description, u.keyword1) > 0
union
select * from users u
inner join announcement a on instr(a.description, u.keyword2) > 0
union
select * from users u
inner join announcement a on instr(a.description, u.keyword3) > 0
```
You will have to keep adding unions in for each keyword. Here is a [SQL fiddle](http://www.sqlfiddle.com/#!2/1d874/1/0) I put together for it.
|
mysql query strategy to find keyword matches
|
[
"",
"mysql",
"sql",
""
] |
I can't seem to figure out how to use the opposite of the ISNULL or IFNULL statements in SQL. I need to say: if `a.Error1` is not null, then print the `' - '` and the `CHAR(13)+CHAR(10)`. Basically, there should be no dash and no line break if `a.Error1` comes back null. So print the information only if the field isn't null.
```
select a. ....
' - ' + a.Error1 + CHAR(13)+CHAR(10) +
' - ' + a.Error2 + CHAR(13)+CHAR(10) +
' - ' + a.Error3 + CHAR(13)+CHAR(10) +
' - ' + a.Error4 + CHAR(13)+CHAR(10) +
' - ' + a.Error5 + CHAR(13)+CHAR(10) +
' - ' + a.Error6 as 'error_message'
...
from table1 a
```
For example if for a given record error1, 2 and 5 returned output I would like the output to be as follows:
- Error1: There was a ...
- Error2: ....
- Error5: The data was ...
If no errors existed for that row it should simply be an empty/null field.
|
You can use `CASE`:
```
SELECT a. ....
(CASE WHEN a.Error1 IS NOT NULL
THEN ' - ' + a.Error1 + CHAR(13)+CHAR(10)
ELSE ''
END) +
(CASE WHEN a.Error2 IS NOT NULL
THEN ' - ' + a.Error2 + CHAR(13)+CHAR(10)
ELSE ''
END) +
(CASE WHEN a.Error3 IS NOT NULL
THEN ' - ' + a.Error3 + CHAR(13)+CHAR(10)
ELSE ''
END) +
...etc
```
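The conditional-concatenation pattern can be verified on a toy row — here in SQLite from Python, using `||` instead of `+` and `char(10)` for the line break (a sketch, not T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (Error1 TEXT, Error2 TEXT);
    INSERT INTO table1 VALUES ('bad input', NULL);
""")

# Each CASE contributes its dash-prefixed line only when the error is non-null
row = conn.execute("""
    SELECT
        (CASE WHEN Error1 IS NOT NULL THEN ' - ' || Error1 || char(10) ELSE '' END) ||
        (CASE WHEN Error2 IS NOT NULL THEN ' - ' || Error2 || char(10) ELSE '' END)
        AS error_message
    FROM table1
""").fetchone()

print(repr(row[0]))
```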
|
Yes! I know I'm like 5 years too late, but I too encountered this problem.
It's weird that there isn't some kind of !ISNULL(), but whatever.
Try this for cleaner code:
```
select a. ....
IIF(a.Error1 IS NOT NULL, ' - ' + a.Error1 + CHAR(13)+CHAR(10) , '') as Error1,
IIF(a.Error2 IS NOT NULL, ' - ' + a.Error2 + CHAR(13)+CHAR(10) , '') as Error2
from table1 a
```
Learn more about IIF() function : [SQL Server IIF Function](https://www.sqlservertutorial.net/sql-server-system-functions/sql-server-iif-function/)
|
using sql - Is not null in a select statement
|
[
"",
"sql",
"sql-server",
"null",
"isnull",
"ifnull",
""
] |
In SQL Server, I connected to the server from my desktop, and I want to move data from one database to another. I have used both SELECT INTO and the Import Wizard, but the Import Wizard seems to be slow. Why?
Is there a better methodology for transferring the data?
|
SELECT INTO is a SQL query, and it is executed directly.
The Import and Export Wizard is a tool which invokes Integration Services (SSIS).
The wizard is slower, but it can use various data sources.
More about export/import wizard
<https://msdn.microsoft.com/en-US/en-en/library/ms141209.aspx>
Topic about select into and export/import wizard
<https://social.msdn.microsoft.com/forums/sqlserver/en-US/e0524b2a-0ea4-43e7-b74a-e9c7302e34e0/super-slow-performance-while-using-import-export-wizard>
|
I agree with Andrey. The Wizard is super slow. If you perform a Google search on "sql server import and export wizard slow", you will receive nearly 50k hits. You may want to consider a couple of other options.
**BCP Utility**
Note: I have used this on a number of occasions. Very fast processing.
The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.
Example:
```
BULK INSERT TestServer.dbo.EmployeeAddresses
FROM 'D:\Users\Addresses.txt';
GO
```
**OPENROWSET(BULK) Function**
The OPENROWSET(BULK) function connects to an OLE DB data source to read data, and it allows access to remote data by connecting to a remote data source.
Example:
```
INSERT INTO AllAddress(Address)
SELECT * FROM OPENROWSET(
BULK 'D:\Users\Addresses.txt',
SINGLE_BLOB) AS x;
```
**Reference**
<https://msdn.microsoft.com/en-us/library/ms175915.aspx>
<http://solutioncenter.apexsql.com/sql-server-bulk-copy-and-bulk-import-and-export-techniques/>
|
Select into VS Import and export wizard in sql server
|
[
"",
"sql",
"import",
"select-into",
"sql-import-wizard",
""
] |
I have three tables, persons, email, and personemail. Personemail basically has a foreign key to person and email so one person can be linked to multiple email addresses. Also the email table has a field named primaryemail. This field is either 1 or 0. The primary email flag is used for pulling emails into reports/invoices etc.
There was a logic flaw in the UI that allowed users to set no primary email address for a customer. I have closed the logic flaw, but I need a script to force a primary email address for any customer that doesn't have one set. It was decided to set the primary email address to the one with the lowest emailid (the primary key in the email table). Below is the script that was written; it works, but it is very expensive to run and may cause locks for end users while running. The software is deployed in multiple time zones, so even if we run it during the lowest-usage time we need it to run as fast as possible.
Here is the current script. It has temp tables and a while loop so you can see it can really be improved upon. My SQL skills need polishing so I am putting it out here for suggestions.
```
CREATE TABLE #TEMP(PERSONID INT, PRIMARYEMAIL INT,FLAG INT)
CREATE INDEX IDX_TEMP_PERSONID ON #TEMP(PERSONID)
CREATE TABLE #TEMP2(PERSONID INT,PRIMARYEMAIL INT)
CREATE INDEX IDX_TEMP2_PERSONID ON #TEMP2(PERSONID)
--Grab all the person id's that have at least one email addresses that is not primary in the db, also set a flag for the while loop
INSERT INTO #TEMP
SELECT PE.PersonID, E.primaryEmail ,0
FROM Account.tbPersonEmail PE WITH (NOLOCK)
LEFT OUTER JOIN Account.tbEmail E ON E.EmailID=PE.EmailID
WHERE E.primaryEmail=0
--Grab all person ID's that have at least one email address that is primary.
INSERT INTO #TEMP2
SELECT PE.PersonID, E.primaryEmail
FROM Account.tbPersonEmail PE WITH (NOLOCK)
LEFT OUTER JOIN Account.tbEmail E ON E.EmailID=PE.EmailID
WHERE E.primaryEmail=1
--SELECT * FROM #TEMP2
--Remove any customers that already have a primary email set.
DELETE FROM #TEMP WHERE PERSONID IN (SELECT DISTINCT PERSONID FROM #TEMP2)
--Debug line to see how many customers are affected.
--SELECT * FROM #TEMP
--Perfom a while loop to update the min email ID to primary.
DECLARE @INTFLAG INT
DECLARE @PERSONID INT
SET @INTFLAG = (SELECT COUNT(*) FROM #TEMP)
--SELECT @INTFLAG
WHILE (@INTFLAG > 0)
BEGIN
SET @PERSONID =(SELECT TOP(1) PERSONID FROM #TEMP WHERE FLAG=0)
UPDATE Account.tbEmail SET primaryEmail=1 WHERE EmailID=(SELECT MIN(EMAILID) FROM Account.tbPersonEmail where PersonID=@PERSONID)
--Update the flag on the #temp table to grab the next ID
UPDATE #TEMP SET FLAG=1 WHERE PERSONID=@PERSONID
--Reduce the intflag variable that the loop is running off of.
SET @INTFLAG=@INTFLAG-1
END
DROP TABLE #TEMP
DROP TABLE #TEMP2
```
|
Creating temporary tables is a very expensive way to do this, and using loops is a bad idea in SQL, since they are slow and can't be optimized. The typical method uses subqueries instead. To start, try doing this:
```
CREATE TABLE #TEMP(PERSONID INT, PRIMARYEMAIL INT,FLAG INT)
CREATE INDEX IDX_TEMP_PERSONID ON #TEMP(PERSONID)
INSERT INTO #TEMP
SELECT PE.PersonID, E.primaryEmail , 0
FROM Account.tbPersonEmail PE WITH (NOLOCK)
LEFT OUTER JOIN Account.tbEmail E ON E.EmailID=PE.EmailID
WHERE E.primaryEmail=0 and
PE.PersonID not in (SELECT Distinct PE2.PersonID
FROM Account.tbPersonEmail PE2 WITH (NOLOCK)
                         LEFT OUTER JOIN Account.tbEmail E2 ON E2.EmailID=PE2.EmailID
WHERE E2.primaryEmail=1)
```
And then running your while loop. That should help a bit. You can test that this is correct by seeing if #TEMP matches the previous version.
To further optimize, you probably need to rewrite the entire update process as a single query. You also may want to look at this: [How can I optimize this SQL query (Using Indexes)?](https://stackoverflow.com/questions/11299217/how-can-i-optimize-this-sql-query)
|
Single query to set primaryEmail=1 for first email for each person except ones who already have primary email:
```
UPDATE Account.tbEmail E SET E.primaryEmail=1
WHERE
E.EmailID in (
-- get min email id for each person
SELECT min(PE.EmailID) FROM Account.tbPersonEmail PE
-- but exclude persons who already have primary email
WHERE PE.PersonID NOT IN (
SELECT PE1.PersonID
FROM Account.tbPersonEmail PE1
INNER JOIN Account.tbEmail E1 ON E1.EmailID=PE1.EmailID
WHERE E1.primaryEmail=1
)
GROUP BY PE.PersonID
)
```
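The set-based UPDATE can be sanity-checked on a toy version of the schema (SQLite from Python; table names simplified from the question's `Account.*` objects, data made up — person 1 already has a primary email, person 2 does not):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbEmail (EmailID INTEGER, primaryEmail INTEGER);
    CREATE TABLE tbPersonEmail (PersonID INTEGER, EmailID INTEGER);
    INSERT INTO tbEmail VALUES (10, 0), (11, 1), (20, 0), (21, 0);
    INSERT INTO tbPersonEmail VALUES (1, 10), (1, 11), (2, 20), (2, 21);
""")

# Promote the lowest EmailID for each person who has no primary email yet
conn.execute("""
    UPDATE tbEmail SET primaryEmail = 1
    WHERE EmailID IN (
        SELECT MIN(PE.EmailID) FROM tbPersonEmail PE
        WHERE PE.PersonID NOT IN (
            SELECT PE1.PersonID
            FROM tbPersonEmail PE1
            JOIN tbEmail E1 ON E1.EmailID = PE1.EmailID
            WHERE E1.primaryEmail = 1
        )
        GROUP BY PE.PersonID
    )
""")

primaries = conn.execute(
    "SELECT EmailID FROM tbEmail WHERE primaryEmail = 1 ORDER BY EmailID"
).fetchall()
print(primaries)
```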
|
Need a less expensive query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have been trying to find a way to select/delete the duplicated payroll records from my dataset below.
```
ID HDate SiteID RecordID EmployeeID TimeWorked
8066839 24/01/2015 164 840 28930 240
7982224 08/11/2014 164 840 28812 390
8066838 23/01/2015 164 839 28930 360
7982223 07/11/2014 164 839 28812 240
8066837 22/01/2015 164 838 28930 330
7982222 06/11/2014 164 838 28812 0
```
Duplicate is the RecordID and needs to be unique for each SiteID.
I have not been able to find a way of retrieving the lowest 'ID' for each 'RecordID'. I use the below to find my duplicates when required against each site.
```
SELECT *
FROM StaffHours
WHERE (SiteID = '164') AND (RecordID IN
(SELECT RecordID
FROM StaffHours
WHERE (SiteID = '164')
GROUP BY RecordID
HAVING (COUNT(RecordID) > 1)))
ORDER BY RecordID DESC
```
I was attempting something like this, but couldn't get it to work.
```
declare dup_id cursor for
SELECT * FROM StaffHours
WHERE (ID IN
(SELECT MIN(ID) AS Expr1
FROM StaffHours
WHERE (SiteID = '25') AND (RecordID IN
(SELECT RecordID
FROM StaffHours
WHERE (SiteID = '25')
GROUP BY RecordID
HAVING (COUNT(RecordID) > 1)))));
open dup_id;
fetch next from dup_id;
while @@fetch_status = 0
begin
fetch next from dup_id;
end;
close dup_id;
deallocate dup_id
go
```
Any help would be greatly appreciated.
Thanks,
Adam
|
I think you were very close to solving this yourself:
```
SELECT *
FROM StaffHours
WHERE (SiteID = '164') AND (ID IN
(SELECT Min(ID)
FROM StaffHours
WHERE (SiteID = '164')
GROUP BY RecordID
HAVING (COUNT(RecordID) > 1)))
ORDER BY RecordID DESC
```
|
If `ID` is a key or unique:
```
DELETE FROM StaffHours
WHERE ID =
(SELECT MIN(ID) FROM StaffHours
WHERE SiteID = 164
AND (SELECT count(*) FROM StaffHours sh WHERE sh.RecordID = RecordID AND SiteID = sh.SiteID) > 1
GROUP BY RecordID)
```
|
SQL Server Duplicate Payroll Records
|
[
"",
"sql",
"sql-server-2008",
"duplicates",
""
] |
I'm currently working on a query that returns records from within a date range. The ideal is for the start date to be everything from 2015 onward. In theory, shouldn't each of the three SET statements below set the variable to 2015\*?
```
DECLARE @startDate datetime;
SET @startDate = '20150101';
SET @startDate = YEAR(GETDATE());
SET @startDate = DATEPART(yyyy,GETDATE());
```
Only the first one, the hardcoded date, behaves as expected. The other two return ALL records that are being queried through. Am I missing something here?
\*EDIT: I apologize for how unclear I was with that initially. Basically, `@startDate` should be set to 01-01-XXXX, where XXXX is whatever year today's date is a part of. It's being compared against another `DATETIME` variable. I hope that clarifies things.
|
The answer to your question is "No". The variable `@StartDate` is a datetime, so this assignment doesn't do what you expect:
```
set @startDate = 2015
```
An integer that looks like a year is not a date: SQL Server implicitly converts an integer to datetime as a number of days after 1900-01-01, so 2015 becomes a date in mid-1905, which is why your range matched all records.
If you want the first day of the year, you can do:
```
set @startDate = dateadd(day,
1 - datepart(dayofyear, getdate()),
cast(getdate() as date)
                    )
```
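The dateadd/datepart trick is just "subtract (day-of-year minus 1) days"; the same arithmetic in a small Python sketch (helper name invented here):

```python
from datetime import date, timedelta

def first_day_of_year(today):
    """Mimic DATEADD(day, 1 - DATEPART(dayofyear, d), d): back up to Jan 1."""
    return today - timedelta(days=today.timetuple().tm_yday - 1)

print(first_day_of_year(date(2015, 6, 17)))  # 2015-01-01
```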
|
I think this would work (for SQL-Server):
```
SET @startDate = cast(YEAR(GETDATE()) as varchar(4))
SET @startDate = cast(DATEPART(yyyy,GETDATE()) as varchar(4))
```
This will show you what's happening:
```
DECLARE @startDate datetime
SET @startDate = '20150101'
select @startdate
SET @startDate = YEAR(GETDATE())
select @startdate
SET @startDate = cast(YEAR(GETDATE()) as varchar(4))
select @startdate
SET @startDate = DATEPART(yyyy,GETDATE())
select @startdate
SET @startDate = cast(DATEPART(yyyy,GETDATE()) as varchar(4))
select @startdate
```
|
SQL YEAR(GETDATE())
|
[
"",
"sql",
"sql-server",
"t-sql",
"variables",
""
] |
I have a query like
```
SELECT
`campaign_question_options`.`text`,
COUNT(`campaign_submission_answers`.`answer`) as `count`
FROM `campaign_questions`
INNER JOIN `campaign_question_options` ON `campaign_question_options`.`campaign_question_id` = `campaign_questions`.`id`
LEFT JOIN `campaign_submission_answers` ON `campaign_submission_answers`.`answer` = `campaign_question_options`.`text` AND `campaign_submission_answers`.`campaign_question_id` = 1
LEFT JOIN `campaign_submissions` ON `campaign_submissions`.`id` = `campaign_submission_answers`.`campaign_submission_id`
LEFT JOIN `participants` ON `participants`.`id` = `campaign_submissions`.`participant_id`
WHERE
`campaign_questions`.`id` = 1
GROUP BY `campaign_submission_answers`.`answer`
ORDER BY `campaign_question_options`.`index`;
```
This gives me a result set like
```
+--------------+-------+
| text | count |
+--------------+-------+
| 1 (positive) | 114 |
| 2 | 48 |
| 3 (neutral) | 34 |
| 4 | 6 |
| 5 (negative) | 0 |
+--------------+-------+
```
So the problem is that I then need to filter the results further on the `participants`.`appraisee_id` column. However, if I add this to the WHERE clause I lose my zero-count rows, since the WHERE filter discards the NULL rows produced by the LEFT JOINs.
```
SELECT
`campaign_question_options`.`text`,
COUNT(`campaign_submission_answers`.`answer`) as `count`
FROM `campaign_questions`
INNER JOIN `campaign_question_options` ON `campaign_question_options`.`campaign_question_id` = `campaign_questions`.`id`
LEFT JOIN `campaign_submission_answers` ON `campaign_submission_answers`.`answer` = `campaign_question_options`.`text` AND `campaign_submission_answers`.`campaign_question_id` = 1
LEFT JOIN `campaign_submissions` ON `campaign_submissions`.`id` = `campaign_submission_answers`.`campaign_submission_id`
LEFT JOIN `participants` ON `participants`.`id` = `campaign_submissions`.`participant_id`
WHERE
`campaign_questions`.`id` = 1 AND `participants`.`appraisee_id` = 1
GROUP BY `campaign_submission_answers`.`answer`
ORDER BY `campaign_question_options`.`index`;
```
Which returns
```
+--------------+-------+
| text | count |
+--------------+-------+
| 1 (positive) | 16 |
| 2 | 1 |
+--------------+-------+
```
When in fact I wish for
```
+--------------+-------+
| text | count |
+--------------+-------+
| 1 (positive) | 16 |
| 2 | 1 |
| 3 (neutral) | 0 |
| 4 | 0 |
| 5 (negative) | 0 |
+--------------+-------+
```
Can anyone help me improve this query?
Thanks
**UPDATE**
I have created a DB dump of the structure which could be useful if any kind people wish to continue helping me on this. <https://gist.github.com/simonbowen/a8316fe91c78b8464402>
|
Update on this question, I tried attempting this query from a different angle. It seems to output the results I expect, however I am not sure if it's the most efficient way since I have had to use a subquery.
```
SELECT `campaign_question_options`.`text`, COUNT(`csa`.`answer`) FROM `campaign_questions`
INNER JOIN `campaign_question_options` ON `campaign_question_options`.`campaign_question_id` = `campaign_questions`.`id`
LEFT JOIN (
SELECT `campaign_submission_answers`.* FROM `campaign_submission_answers`
INNER JOIN `campaign_submissions` ON `campaign_submissions`.`id` = `campaign_submission_answers`.`campaign_submission_id`
INNER JOIN `participants` ON `participants`.`id` = `campaign_submissions`.`participant_id`
INNER JOIN `campaign_questions` ON `campaign_questions`.`id` = `campaign_submission_answers`.`campaign_question_id`
INNER JOIN `campaign_question_options` ON `campaign_question_options`.`text` = `campaign_submission_answers`.`answer`
WHERE `campaign_submissions`.`campaign_id` = 1 AND `participants`.`appraisee_id` = 1 AND `campaign_submission_answers`.`campaign_question_id` = 1
GROUP BY `campaign_submission_answers`.`id`
) as `csa` ON `csa`.`answer` = `campaign_question_options`.`text`
WHERE `campaign_questions`.`id` = 1
GROUP BY `campaign_question_options`.`text`;
```
|
When you have `left join`s and you want to filter on an any table but the first, you need to put the conditions in the `on` clause:
```
SELECT cqo.`text`,
COUNT(csa.`answer`) as `count`
FROM `campaign_questions` cq INNER JOIN
`campaign_question_options` cqo
ON cqo.`campaign_question_id` = cq.`id` LEFT JOIN
`campaign_submission_answers` csa
ON csa.`answer` = cqo.`text` AND csa.`campaign_question_id` = 1 LEFT JOIN
`campaign_submissions` cs
             ON cs.`id` = csa.`campaign_submission_id` LEFT JOIN
             `participants` p
             ON p.`id` = cs.`participant_id` AND
                p.`appraisee_id` = XXX
WHERE cq.`id` = 1
GROUP BY csa.`answer`
ORDER BY cqo.`index`;
```
I also added table aliases. They make the query easier to write and to read.
|
Grouping results with left and inner join with zero counts
|
[
"",
"mysql",
"sql",
"join",
"count",
"group-by",
""
] |
First and Foremost, this is part of an assignment.
I am trying to use the COUNT function as part of a query in relation to the Northwind database. The query should return the CustomerID, CompanyName, and **the number of of orders placed for each respective customer**.
Of course the first two parts are easy, but I can't get the COUNT function to work properly. The query I have so far is:
```
SELECT DISTINCT Customers.CustomerID, Customers.CompanyName, COUNT(Customers.CustomerID)
FROM Orders, Customers
WHERE Customers.CustomerID = Orders.CustomerID;
```
What would be the correct syntax to use COUNT in that fashion? It would look like:
```
CompanyID | CompanyName | # of orders
1 | Company A | 4
2 | Company B | 3
3 | Company C | 5
```
All examples thus far have been using the COUNT function by itself, and not part of a more complex query.
|
You need a `group by` clause, which will allow you to split your result in to groups, and perform the aggregate function (`count`, in this case), per group:
```
SELECT Customers.CustomerID, Customers.CompanyName, COUNT(*)
FROM Orders, Customers
WHERE Customers.CustomerID = Orders.CustomerID
GROUP BY Customers.CustomerID, Customers.CompanyName
```
Note: Although this is not part of the question, it's recommended to use explicit `join`s instead of the deprecated implicit join syntax you're using. In this case, the query would look like:
```
SELECT Customers.CustomerID, Customers.CompanyName, COUNT(*)
FROM Orders
JOIN Customers ON Customers.CustomerID = Orders.CustomerID
GROUP BY Customers.CustomerID, Customers.CompanyName
```
|
The below query should work.
```
SELECT Customers.CustomerID, Customers.CompanyName, COUNT(*)
FROM Orders, Customers
WHERE Customers.CustomerID = Orders.CustomerID
GROUP BY Customers.CustomerID, Customers.CompanyName
```
|
Using the COUNT function in SQL
|
[
"",
"sql",
"select",
"count",
""
] |
Here's my function which is supposed to return the top row of the record:
```
ALTER FUNCTION [dbo].[fn_PAT_LastTS]
(
-- Add the parameters for the function here
@PATcode varchar(50)
)
RETURNS datetime
AS
BEGIN
-- Declare the return variable here
DECLARE @lastTS datetime
-- Add the T-SQL statements to compute the return value here
SET @lastTS = (select top 1 tsdate from timesheet where pat = @PATcode order by tsdate desc)
-- Return the result of the function
RETURN @lastTS
END
```
For some reason, it always returns all records instead of the top one. Does `TOP` work at all within a T-SQL scalar function?
Edit 1: this is how I call the function.
```
select dbo.fn_PAT_LastTS('ZZ793843') from timesheet
```
Edit: added the picture of returned data which showed multiple rows instead of the top 1.
|
Try this:
```
SELECT TOP 1 @lastTS = tsdate FROM timesheet WHERE pat = @PATcode ORDER BY tsdate DESC
```
# EDIT:
The problem is with how you're calling the function:
```
SELECT dbo.fn_PAT_LastTS('ZZ793843') FROM timesheet
```
This will select all rows from `timesheet` with one column whose value is the result of `dbo.fn_PAT_LastTS('ZZ793843')`. You should be calling it like this:
```
SELECT * FROM timesheet WHERE tsdate = dbo.fn_PAT_LastTS('ZZ793843')
```
OR
```
SELECT dbo.fn_PAT_LastTS('ZZ793843')
```
|
The scalar is being called for each row in timesheet.
Try the following:
```
select dbo.fn_PAT_LastTS('ZZ793843')
```
|
Unable to return top record in a T-SQL scalar function
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to get the `MAX()` of a column and use that value to perform a `DATEADD()` operation,
```
CONVERT(Datetime, DATEADD(MONTH, tb_files.file_retentiondate, MAX((SELECT tb_records.rec_record_date FROM tb_records)))) AS ExpiryDate
```
What I tried is this:
```
SELECT tb_files.file_pk AS 'File Number',
tb_files.file_subject AS 'File Subject',
CONVERT(Datetime, DATEADD(MONTH, tb_files.file_retentiondate, MAX(
(SELECT tb_records.rec_record_date
FROM tb_records)))) AS ExpiryDate,
tb_compartment.comp_cab_fk AS 'Storage Number',
tb_compartment.comp_part AS 'Compartment Number',
tb_departments.dept_code AS 'Department',
tb_location.locationno AS 'Location',
tb_users.usr_fullname AS 'Created by',
tb_files.file_datecreated AS 'Date Created'
FROM tb_files
INNER JOIN tb_fileparts ON tb_files.file_pk = tb_fileparts.fp_file_fk
INNER JOIN tb_records ON tb_fileparts.fp_pk = tb_records.rec_filepart_fk
INNER JOIN tb_users ON tb_files.file_createdby_usr_fk = tb_users.usr_pk
INNER JOIN tb_compartment ON tb_files. file_comp_fk = tb_compartment. comp_pk
INNER JOIN tb_cabinet ON tb_compartment.comp_cab_fk = tb_cabinet.cab_pk
INNER JOIN tb_location ON tb_cabinet.cab_location_fk = tb_location.location_pk
INNER JOIN tb_departments ON tb_cabinet.cab_dept_fk = tb_departments.dept_pk
WHERE (tb_files.file_active = 1)
AND (tb_records.rec_active = 1)
GROUP BY tb_files.file_pk,
tb_files.file_subject,
tb_files.file_retentiondate,
tb_compartment.comp_cab_fk,
tb_compartment.comp_part,
tb_departments.dept_code,
tb_location.locationno,
tb_users.usr_fullname,
tb_files.file_datecreated HAVING(Convert(DateTime, DateAdd(MONTH, tb_files.file_retentiondate, MAX(tb_records.rec_record_date))) <= getdate())
```
I get this error message:
```
Msg 130, Level 15, State 1, Line 2
Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
```
Can you explain what I'm doing wrong?
|
You can't wrap a subquery with a MAX() function like that... Try this:
```
SELECT tb_files.file_pk AS 'File Number',
tb_files.file_subject AS 'File Subject',
CONVERT(Datetime, DATEADD(MONTH, tb_files.file_retentiondate, (SELECT MAX(tb_records.rec_record_date) FROM tb_records))) AS ExpiryDate,
tb_compartment.comp_cab_fk AS 'Storage Number',
tb_compartment.comp_part AS 'Compartment Number',
tb_departments.dept_code AS 'Department',
tb_location.locationno AS 'Location',
tb_users.usr_fullname AS 'Created by',
tb_files.file_datecreated AS 'Date Created'
FROM tb_files
INNER JOIN tb_fileparts ON tb_files.file_pk = tb_fileparts.fp_file_fk
INNER JOIN tb_records ON tb_fileparts.fp_pk = tb_records.rec_filepart_fk
INNER JOIN tb_users ON tb_files.file_createdby_usr_fk = tb_users.usr_pk
INNER JOIN tb_compartment ON tb_files. file_comp_fk = tb_compartment. comp_pk
INNER JOIN tb_cabinet ON tb_compartment.comp_cab_fk = tb_cabinet.cab_pk
INNER JOIN tb_location ON tb_cabinet.cab_location_fk = tb_location.location_pk
INNER JOIN tb_departments ON tb_cabinet.cab_dept_fk = tb_departments.dept_pk
WHERE (tb_files.file_active = 1)
AND (tb_records.rec_active = 1)
GROUP BY tb_files.file_pk,
tb_files.file_subject,
tb_files.file_retentiondate,
tb_compartment.comp_cab_fk,
tb_compartment.comp_part,
tb_departments.dept_code,
tb_location.locationno,
tb_users.usr_fullname,
tb_files.file_datecreated HAVING(Convert(DateTime, DateAdd(MONTH, tb_files.file_retentiondate, MAX(tb_records.rec_record_date))) <= getdate())
```
|
The error message is pretty descriptive. SQL does not allow you to use a subquery for an argument to an aggregation function. The simple solution is to move the aggregation inside the subquery:
```
. . .
CONVERT(Datetime, DATEADD(MONTH, tb_files.file_retentiondate,
(SELECT MAX(tb_records.rec_record_date)
FROM tb_records
)
                  )) AS ExpiryDate,
```
|
Nested SELECT statement in DateAdd()
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
"nested",
""
] |
I have data for current year as well as next year in a table.
Sample Table data:
```
Year,Period,prd_Desc,Amount
2014,11, Humira, 120
2015,11, humira, 140
```
The key columns are Year, Period, prd\_Desc.
If data is present for the next year and the same period, I need that value in a separate column, like below:
```
Year,Period,prd_Desc,Amount_curnt_yr,Amount_next_yr
2014,11, humira, 120, 140
```
I can achieve this by doing a left outer join of the table with itself, using the query below:
```
select a.Year,a.Period,a.prd_Desc,a.Amount as Amount_curnt_yr,b.Amount as Amount_next_yr
from (select Year,Period,prd_Desc,Amount
from tableA) a
left outer join (select Year,Period,prd_Desc,Amount from tableA) b on
b.year=a.year+1 and a.Period=b.period and a.prd_Desc=b.prd_Desc
```
I was trying to get it in a single query without using a left outer join, but could not. If anybody can share an idea, that would help.
|
Assuming your real table has some more rows (i.e. more years and more periods) than you show us, this would be a more generic solution:
```
WITH t AS
(SELECT YEAR,Period,prd_Desc,Amount,
LEAD(amount, 1) OVER (PARTITION BY prd_Desc, period ORDER BY YEAR) AS amount_next_year
FROM a_table)
SELECT *
FROM t
WHERE amount_next_year IS NOT NULL;
```
In case values in column `YEAR` are not continuous, you can do it like this:
```
WITH t AS
(SELECT YEAR,Period,prd_Desc,Amount,
LAST_VALUE(amount) OVER
(PARTITION BY period, prd_Desc ORDER BY YEAR
RANGE BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS amount_next_year
FROM A_TABLE)
SELECT *
FROM t
WHERE amount_next_year IS NOT NULL;
```
|
You can achieve this with a subquery.
```
select t.*, (select amount from sampleTableData where year = t.year+1 and period = t.period and prd_desc = t.prd_desc) Amount_next_yr from sampleTableData t
```
|
Getting a specific row value in a column
|
[
"",
"sql",
"oracle",
"informatica",
""
] |
Right now I just have
```
INSERT INTO MY_TABLE (VAL1, VAL2)
SELECT VAL1, VAL2 FROM OTHER_TABLE;
```
However, if `MY_TABLE` already has the values `(1, 2)`, I don't want to let it insert `(2,1)` if `(2,1)` is in `OTHER_TABLE`;
Is there a way to do this here, or even while creating the tables?
I have tried to `ALTER` the table and create a `UNIQUE` constraint but it doesn't account for duplicates
|
Try this.
```
SELECT VAL1,
VAL2
FROM OTHER_TABLE a
WHERE NOT EXISTS (SELECT 1
FROM my_table b
WHERE ( a.val1 = b.val2
AND a.val2 = b.val1 )
OR ( a.val1 = b.val1
AND a.val2 = b.val2 ))
```
|
We can place a condition while creating the table itself:
```
create table my_table
(
  val1 int,
  val2 int,
  check( (val1, val2) not in (select val1, val2
                              from other_table, my_table
                              where my_table.val1 = other_table.val2
                                and my_table.val2 = other_table.val1
                              union
                              select mt1.val1, mt1.val2
                              from my_table mt1, my_table mt2
                              where mt1.val1 = mt2.val2
                                and mt1.val2 = mt2.val1) )
);
```
|
Don't insert any combination of duplicates in SQL
|
[
"",
"mysql",
"sql",
""
] |
How do I find the last day of the month in Postgres?
I have a date column stored as numeric(18) in the format (YYYYMMDD).
I am trying to make it a date using
```
to_date("act_dt",'YYYYMMDD') AS "act date"
```
then find the last day of this date:
like this:
```
(select (date_trunc('MONTH',to_date("act_dt",'YYYYMMDD')) + INTERVAL '1 MONTH - 1 day')::date)
```
but it gives me this error:
```
ERROR: Interval values with month or year parts are not supported
Detail:
-----------------------------------------------
error: Interval values with month or year parts are not supported
code: 8001
context: interval months: "1"
query: 673376
location: cg_constmanager.cpp:145
process: padbmaster [pid=20937]
-----------------------------------------------
```
Any help?
Postgres version:
`PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.874`
|
If you're using Amazon AWS Redshift then you can use Redshift's `LAST_DAY` function. While Redshift is based on PostgreSQL, **the `LAST_DAY` function is not available in PostgreSQL**, for a solution for PostgreSQL [see @wspurgin's answer](https://stackoverflow.com/a/45740174/159145).
<https://docs.aws.amazon.com/redshift/latest/dg/r_LAST_DAY.html>
> ```
> LAST_DAY( { date | timestamp } )
> ```
>
> `LAST_DAY` returns the date of the last day of the month that contains date. The return type is always `DATE`, regardless of the data type of the date argument.
For example:
```
SELECT LAST_DAY( TO_DATE( act_date, 'YYYYMMDD' ) )
```
|
For anybody coming to this question looking for the *Postgres* way to do this (not using Redshift), here's how you'd do it:
```
SELECT (date_trunc('month', '2017-01-05'::date) + interval '1 month' - interval '1 day')::date
AS end_of_month;
```
Replacing the `'2017-01-05'` with whatever date you want to use. You can make this into a function like this:
```
create function end_of_month(date)
returns date as
$$
select (date_trunc('month', $1) + interval '1 month' - interval '1 day')::date;
$$ language 'sql'
immutable strict;
```
### EDIT Postgres 11+
Pulling this out of the comments from [@Gabriel](https://stackoverflow.com/questions/28186014/how-to-get-the-last-day-of-month-in-postgres/45740174#comment115114539_45740174), you can now combine interval expressions in one `interval` (which makes things a little shorter):
```
select (date_trunc('month', now()) + interval '1 month - 1 day')::date as end_of_month;
-- +--------------+
-- | end_of_month |
-- +--------------+
-- | 2021-11-30 |
-- +--------------+
-- (1 row)
```
|
How to get the last day of month in postgres?
|
[
"",
"sql",
"amazon-redshift",
""
] |
Current SQL:
```
select t1.*
from table t1
where t1.id in ('2', '3', '4')
```
Current results:
```
id | seq
---+----
3 | 5
2 | 7
2 | 5
3 | 7
4 | 3
```
Attempt to select maxes:
```
select t1.*
from table t1
where t1.id in ('2', '3', '4')
and t1.seq = (select max(t2.seq)
from table2 t2
where t2.id = t1.id)
```
This obviously does not work since I'm using an `in` list. How can I adjust my SQL to get these expected results:
```
id | seq
---+----
2 | 7
3 | 7
4 | 3
```
|
Group By is your friend:
```
SELECT
id,
MAX(seq) seq
FROM TABLE
GROUP BY id
```
EDIT: Response to comment. To get the rest of the data from the table matching the max seq and id just join back to the table:
```
SELECT t1.*
FROM TABLE t1
INNER JOIN (
SELECT
            id,
MAX(seq) as seq
FROM TABLE
GROUP BY id
) as t2
on t1.id = t2.id
and t1.seq = t2.seq
```
EDIT: Gordon and Jean-Francois are correct; you can also use the ROW\_NUMBER() analytic function to get the same result. You need to check the performance difference for your application (I did not check). Here is an example of that:
```
SELECT *
FROM (
SELECT ROW_NUMBER() OVER (
PARTITION BY id
ORDER BY seq DESC) as row_num
,*
FROM TABLE
) as TMP
WHERE row_num = 1
```
|
This SQL query will give you the max seq for each individual ID.
```
SELECT t1.*
FROM t1
WHERE t1.id in ('2', '3', '4')
AND NOT EXISTS (
SELECT *
FROM t1 t2
WHERE t2.id = t1.id
       AND t2.seq > t1.seq
)
```
|
Max rows by group
|
[
"",
"sql",
"oracle",
""
] |
I have column (Numbers) which has values as follows:
```
1,2,3
1,2,3,
1,2,3,,,
1,2,3,,,,,,
```
I want to Trim all the Commas at the end of string, So that result would be
```
1,2,3
1,2,3
1,2,3
1,2,3
```
I have tried below Query but by this we can remove only one last comma
```
DECLARE @String as VARCHAR(50)
SET @String='1,2,3,4,,,,,,,,,,,,,,,,'
SELECT CASE WHEN right(rtrim(@String),1) = ',' then substring(rtrim(@String),1,len(rtrim(@String))-1)
ELSE @String
END AS TruncString
```
How can I remove all the commas at the end of string?
|
You can do this using:
```
LEFT(Numbers, LEN(Numbers) - (PATINDEX('%[^,]%', REVERSE(Numbers)) - 1))
```
The premise of this is you first reverse the string using [`REVERSE`](https://msdn.microsoft.com/en-us/library/ms180040.aspx):
```
REVERSE(Numbers) --> ,,,,,,3,2,1
```
You then find the position of the first character that is not a comma using [`PATINDEX`](https://msdn.microsoft.com/en-us/library/ms188395.aspx) and the pattern match `[^,]`:
```
PATINDEX('%[^,]%', REVERSE(Numbers)) --> ,,,,,,3,2,1 = 7
```
Then you can use the length of the string using [`LEN`](https://msdn.microsoft.com/en-us/library/ms190329.aspx), to get the inverse position, i.e. if the position of the first character that is not a comma is 7 in the reversed string, and the length of the string is 10, then you need the first 4 characters of the string. You then use [`SUBSTRING`](https://msdn.microsoft.com/en-us/library/ms187748.aspx) to extract the relevant part
A full example would be
```
SELECT Numbers,
Reversed = REVERSE(Numbers),
Position = PATINDEX('%[^,]%', REVERSE(Numbers)),
TrimEnd = LEFT(Numbers, LEN(Numbers) - (PATINDEX('%[^,]%', REVERSE(Numbers)) - 1))
FROM (VALUES
('1,2,3'),
('1,2,3,'),
('1,2,3,,,'),
('1,2,3,,,,,,'),
('1,2,3,,,5,,,'),
(',,1,2,3,,,5,,')
) t (Numbers);
```
---
**EDIT**
In response to an edit, that had some errors in the syntax, the below has functions to trim the start, and trim both sides of commas:
```
SELECT Numbers,
Reversed = REVERSE(Numbers),
Position = PATINDEX('%[^,]%', REVERSE(Numbers)),
TrimEnd = LEFT(Numbers, LEN(Numbers) - (PATINDEX('%[^,]%', REVERSE(Numbers)) - 1)),
TrimStart = SUBSTRING(Numbers, PATINDEX('%[^,]%', Numbers), LEN(Numbers)),
TrimBothSide = SUBSTRING(Numbers,
PATINDEX('%[^,]%', Numbers),
LEN(Numbers) -
(PATINDEX('%[^,]%', REVERSE(Numbers)) - 1) -
(PATINDEX('%[^,]%', Numbers) - 1)
)
FROM (VALUES
('1,2,3'),
('1,2,3,'),
('1,2,3,,,'),
('1,2,3,,,,,,'),
('1,2,3,,,5,,,'),
(',,1,2,3,,,5,,')
) t (Numbers);
```
|
Because there are multiple occurrences you can't do it with a simple builtin function expression, but a simple user defined function can do the job.
```
create function dbo.MyTrim(@text varchar(max)) returns varchar(max)
as
-- function to remove all commas from the right end of the input.
begin
    while (right(@text, 1) = ',')
begin
set @text = left(@text, len(@text) - 1)
end
return @text
end
go
```
|
TrimEnd Equivalent in SQL Server
|
[
"",
"sql",
"sql-server-2008",
"trim",
""
] |
I've been looking around for the answer but I didn't find anything. Sorry if the answer has been given elsewhere.
Here is my problem :
I have a calculated member which is the number of items (of the current member) divided by the total number of items (sumitem).
```
with
member
sumitem
as
SUM ([FailureReason].[FailureReason].[All],[Measures].[Items])
member
Impact
as
[Measures].[Items]/[Measures].[SumItem]
```
But for a specific member of my dimension **FailureReason**, the result of Impact has to be 0. So I tried to add this :
```
member
ImpactFinal
as
iif ([FailureReason].CurrentMember = [FailureReason].[FailureReason].&[127],
0,
Impact
)
```
and I select my data like this :
```
select
{[Measures].[Items],
ImpactFinal
} on columns,
[FailureReason].members on rows
from
NoOTAR
```
But instead of getting 0 only for this specific member, every member of this dimension has its ImpactFinal equal to 0. What is strange is that if I replace 0 with any other value, the result is good.
|
Just use
`[FailureReason].CurrentMember IS [FailureReason].[FailureReason].&[127]`
instead of
`[FailureReason].CurrentMember = [FailureReason].[FailureReason].&[127]`
and it will work.
**Update**: Several tips:
1. There is also not necessary to use `SUM` function, since you can define only tuple, this will be enough for server: `([FailureReason].[FailureReason].[All],[Measures].[Count])`
2. It's quite reasonable to check `sumitem` measure for dividing by zero in `ImpactFinal` calculation. Because once some filters are applied, this may cause zeroing this measure and errors in reports.
3. If you have an opportunity not only to query, but update cube, `SCOPE ([FailureReason].[FailureReason].&[127],[Measures].[Impact])` with `THIS = 0` is better than additional member because of performance.
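A sketch of the `SCOPE` approach from tip 3, as it would appear in the cube's MDX calculation script rather than in a query (member names follow the question; treat this as illustrative):
```
/* Cube calculation script -- illustrative sketch, not a query */
SCOPE ([FailureReason].[FailureReason].&[127], [Measures].[Impact]);
    THIS = 0;
END SCOPE;
```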
Best of luck!
**UPDATE to fix totals:**
If total should be w/o FailureReason 127, you can substitute your measures with:
```
member Impact
as
iif ([FailureReason].[FailureReason].CurrentMember is [FailureReason].[FailureReason].&[127],
0,
[Measures].[Items]
)
member ImpactFinal
as
iif ([FailureReason].[FailureReason].CurrentMember is [FailureReason].[FailureReason].[All]
,[Measures].[Items]-([FailureReason].[FailureReason].&[127],[Measures].[Items])
,[Measures].[Impact])/[Measures].[SumItem]
```
But I have another solution, which is more readable:
```
member v2_ImpactUncountableFailure
as
iif ([FailureReason].[FailureReason].CurrentMember.Level.Ordinal=0
or
[FailureReason].[FailureReason].CurrentMember is [FailureReason].[FailureReason].&[127]
,([FailureReason].[FailureReason].&[127],[Measures].[Items])
,null)
member v2_ImpactFinal
as
([Measures].[Items]-[Measures].[v2_ImpactUncountableFailure])
/
([FailureReason].[FailureReason].[All],[Measures].[Items])
```
Use only this two measures instead of set of measures `sumitem`,`Impact`,`ImpactFinal`. First one will show result on failure-127 and total. Second subtracts it from clean unfiltered measure, so in the end we have clean members, zeroed failure-127 and corrected total.
Please let me know if it isn't work, I've tested on my DB and everything is OK.
|
Try
```
with
member sumitem
as
SUM ([FailureReason].[FailureReason].[All],[Measures].[Items])
member LeaveOut
as
[FailureReason].[FailureReason].CurrentMember.Properties("Key")
member Impact
as
IIf([Measures].[LeaveOut]= "127", 0, [Measures].[Items]/[Measures].[SumItem])
```
|
Set 0 for specific value MDX query
|
[
"",
"sql",
"sql-server-2008",
"ssas",
"mdx",
"cube",
""
] |
I have one table in SQL Server 2008, say `T1`, which has columns `C1` and `C2`.
I just want to swap values from `C1 to C2` and `C2 to C1`
for Example
```
C1 C2
-------- --------
12/11/2014 12/10/2014
12/13/2014 12/12/2014
12/15/2014 12/14/2014
12/17/2014 12/16/2014
12/19/2014 12/18/2014
```
I need records as follows
```
C1 C2
-------- --------
12/10/2014 12/11/2014
12/12/2014 12/13/2014
12/14/2014 12/15/2014
12/16/2014 12/17/2014
12/18/2014 12/19/2014
```
for that I am trying something like below
```
UPDATE T1 SET C1=C2, C2=C1
```
EDIT :
I am getting `Conversion failed when converting date and/or time from character string.` Error.
C1 is `nvarchar(50)` and C2 is `DateTime`.
I am storing `DateTime.Now.Date` in C1, but its format depends on the regional settings of Windows. If the date format is set to `"MM-DD-yyyy"`, the value in C1 will be in that format. My main problem is that a user can change the Windows setting to any other format at any time, which is why I get the said error.
The answer of [How do I swap column values in sql server 2008?](https://stackoverflow.com/questions/4198587/how-do-i-swap-column-values-in-sql-server-2008) this not meet my requirement due to it has same datatypes
I am So Sorry for providing only int Value
|
Your query will work for you. Here is a sample that gives the correct result:
```
declare @t table(c1 int, c2 int)
insert into @t values(10,11),(11,13),(14,15),(16,17),(18,19)
select * from @t
update @t set c1 = c2 ,c2=c1
select * from @t
```
Updated :- Please check my answer.
You can set dateformat in sqlserver side via `"set dateformat mdy"`
```
set dateformat dmy
select isdate( '18/12/2014' ), convert( datetime, '18/12/2014' ,103)
-- you can use ISDATE with the CONVERT date function.
set dateformat mdy
--the problem is in your insert not in update, if data is save in 'mm-dd-yyyy', then no issue with swap.
;with cte as (
select
case when isdate('19/12/2014') = 1
then '19/12/2014'
else convert( datetime, '19/12/2014' ,103)
end stdt
from @t
)
select * from cte
--update @t set c1 = c2 ,c2=c1
--select * from @t
```
|
Your query will work. If you want another method, you can follow the one below:
```
select * from tablename;
GO

DECLARE @tempcol as int
update tablename
set @tempcol = C1,
    C1 = C2,
    C2 = @tempcol
-- add a WHERE condition here if you only want to swap some rows
GO

select * from tablename;
```
|
How to Swap column values in SQL server of different datatype
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have four tables table\_1, table\_2, table\_3 and table\_4. All the four of them have columns like this:
```
table_1: age_grp, gender, height;
table_2: age_grp, gender, weight;
table_3: age_grp, gender, shoesize;
table_4: age_group, gender, BMI;
```
I would like to create a new table with columns:
```
age_grp, gender, height, weight, shoesize, BMI
```
I want only those columns to be merged where `age_grp and gender` is same in all tables. Any idea how to do this?
|
This can easily be done via an [`INNER JOIN`](http://dev.mysql.com/doc/refman/5.0/en/join.html):
```
SELECT table_1.*, table_2.*, table_3.*, table_4.* FROM table_1
INNER JOIN table_2 ON table_1.age_grp = table_2.age_grp
AND table_1.gender = table_2.gender
INNER JOIN table_3 ON table_2.age_grp = table_3.age_grp
AND table_2.gender = table_3.gender
INNER JOIN table_4 ON table_3.age_grp = table_4.age_grp
AND table_3.gender = table_4.gender
```
You can `JOIN` any table with any, if you have a requirement that all the data in all tables have same values in the columns.
Note that you shouldn't use `*` in real production script, use the column names explicitly.
|
In spite of this request being answered already, I'd like to add an answer for the case of missing values, e.g. only shoesize not given for an age\_grp/gender pair.
For a solution with joins you would need FULL OUTER JOINs which MySQL doesn't support. And mimicking this with LEFT and /or RIGHT OUTER JOINs can be a pain with several tables.
Here is a solution using UNION ALLs and a final aggregation instead.
```
create table mytable as
select age_grp, gender, max(height) as height, max(weight) as weight, max(shoesize) as shoesize, max(bmi) as bmi
from
(
select age_grp, gender, height, cast(null as unsigned integer) as weight, cast(null as unsigned integer) as shoesize, cast(null as unsigned integer) as bmi from table_1
union all
select age_grp, gender, cast(null as unsigned integer) as height, weight, cast(null as unsigned integer) as shoesize, cast(null as unsigned integer) as bmi from table_2
union all
select age_grp, gender, cast(null as unsigned integer) as height, cast(null as unsigned integer) as weight, shoesize, cast(null as unsigned integer) as bmi from table_3
union all
select age_group, gender, cast(null as unsigned integer) as height, cast(null as unsigned integer) as weight, cast(null as unsigned integer) as shoesize, bmi from table_4
) x
group by age_grp, gender;
```
I was surprised that `CAST(NULL AS INT)` results in a syntax error, btw. I had to change it to `CAST(NULL AS UNSIGNED INTEGER)`.
SQL fiddle: <http://www.sqlfiddle.com/#!2/f4fa5c/1>.
|
How to merge multiple tables in MYSQl which have two columns in common
|
[
"",
"mysql",
"sql",
""
] |
How can I make a SQL query that returns me something like
```
---------------------
|DATE | Count |
---------------------
|2015/01/07 | 7 |
|2015/01/06 | 0 |
|2015/01/05 | 8 |
|2015/01/04 | 5 |
|2015/01/03 | 0 |
|2015/01/02 | 4 |
|2015/01/01 | 2 |
---------------------
```
When there are no records for the 6th and 3rd?
|
One solution is to create a calendar table containing all the dates you need. You can then left join it to your data to get what you are after.
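A minimal sketch of that approach (the `calendar` and `events` table and column names are illustrative stand-ins for your own schema):
```
CREATE TABLE calendar (d DATE PRIMARY KEY);
-- populate calendar with every date in the range you care about, then:

SELECT c.d AS `DATE`, COUNT(e.id) AS `Count`
FROM calendar c
LEFT JOIN events e ON DATE(e.created_at) = c.d
WHERE c.d BETWEEN '2015-01-01' AND '2015-01-07'
GROUP BY c.d
ORDER BY c.d DESC;
```
The left join keeps calendar rows with no matching data, and `COUNT(e.id)` counts only non-null matches, so empty days come back as 0.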
|
You need a table of all the sequence numbers from 0 to 6. This is easy to generate in a simple query, as follows.
```
SELECT 0 AS seq
UNION ALL SELECT 1 UNION ALL SELECT 2
UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6
```
Next, let's use this to construct a virtual table of seven dates. For this example, we pick today and the six preceding days.
```
SELECT DATE(NOW())-INTERVAL seq.seq DAY theday
FROM (
SELECT 0 AS seq
UNION ALL SELECT 1 UNION ALL SELECT 2
UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6
) seq
```
Then you do your summary query. You didn't say exactly how it goes so I will guess. This one gives you the records from six days ago until today. Today is still in progress.
```
SELECT DATE(i.item_time) theday,
       COUNT(*) `count`
FROM items i
WHERE i.item_time >= DATE(NOW()) - INTERVAL 6 DAY
GROUP BY DATE(i.item_time)
```
Finally, starting with the list of days, let's LEFT JOIN that summary to it.
```
SELECT thedays.theday, IFNULL(summary.`count`,0) `count`
FROM (
SELECT DATE(NOW())-INTERVAL seq.seq DAY theday
FROM (
SELECT 0 AS seq
UNION ALL SELECT 1 UNION ALL SELECT 2
UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6
) seq
) thedays
LEFT JOIN (
    SELECT DATE(i.item_time) theday,
           COUNT(*) `count`
    FROM items i
    WHERE i.item_time >= DATE(NOW()) - INTERVAL 6 DAY
    GROUP BY DATE(i.item_time)
) summary USING (theday)
ORDER BY thedays.theday
```
It looks complex, but it is simply the combination of three basic queries. Think of it as a sandwich, with bread and cheese and tomato stuck together with an `ORDER BY` toothpick.
Here's a more thorough writeup. <http://www.plumislandmedia.net/mysql/filling-missing-data-sequences-cardinal-integers/>
MariaDB version 10 has built-in virtual tables of cardinal number sequences like seq\_0\_to\_6. This is convenient.
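With those built-in sequence tables, the derived table of dates above shrinks to something like this (assuming MariaDB 10's sequence storage engine is enabled):
```
SELECT DATE(NOW()) - INTERVAL seq DAY AS theday
FROM seq_0_to_6;
```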
|
Get last 7 days count even when there are no records
|
[
"",
"mysql",
"sql",
"laravel-4",
"aggregate-functions",
""
] |
I've been learning SQL for the last week, but I am unsure how to correctly add a CASE statement within a CHECK constraint. Can anybody give me any pointers?
I have the following grade table:
```
CREATE TABLE Grade
(
salary_grade char(1) NOT NULL CHECK (salary_grade = UPPER(salary_grade)),
CONSTRAINT ck_grade_scale CHECK(
CASE
WHEN salary_grade = '[A-D]'
THEN salary_scale = 'S1'
WHEN salary_grade = '[D-G]'
THEN salary_scale = 'S2'
END)
salary_scale char(2) DEFAULT 'S1' NOT NULL,
CONSTRAINT pk_grade PRIMARY KEY (salary_grade),
CONSTRAINT ck_salary_grade CHECK (REGEXP_LIKE(salary_grade, '[A-G]', 'c')),
--constraint must be either S1 or S2
CONSTRAINT ck_salary_scale CHECK (salary_scale IN ('S1', 'S2'))
);
```
I want to check that if the `salary_grade` is between A-D then the `salary_scale` must be 'S1' or if the `salary_grade` is between E-G then it's 'S2'.
I have tried to research this and came up with the above, but it does not work. Have I structured the code correctly?
|
I think you can do the following:
```
CREATE TABLE Grade
(
salary_grade char(1) NOT NULL CHECK (REGEXP_LIKE(salary_grade, '[A-G]', 'c')),
salary_scale char(2) DEFAULT 'S1' NOT NULL,
CONSTRAINT pk_grade PRIMARY KEY (salary_grade),
CONSTRAINT ck_grade_scale CHECK ( REGEXP_LIKE(salary_grade, '[A-D]', 'c') AND salary_scale = 'S1'
OR REGEXP_LIKE(salary_grade, '[E-G]', 'c') AND salary_scale = 'S2' )
);
```
[Please see SQL Fiddle schema here.](http://sqlfiddle.com/#!4/b79f9)
You don't need the `UPPER()` constraint on `salary_grade` since the regex check will suffice (you're already checking to make sure it's an uppercase letter between A and G). I don't think the constraint on `salary_scale` alone is necessary either since it would be contained, logically, in the last constraint.
**UPDATE**
Here is how you might do it with a `CASE` statement:
```
CREATE TABLE Grade
(
salary_grade char(1) NOT NULL CHECK (REGEXP_LIKE(salary_grade, '[A-G]', 'c')),
salary_scale char(2) DEFAULT 'S1' NOT NULL,
CONSTRAINT pk_grade PRIMARY KEY (salary_grade),
CONSTRAINT ck_grade_scale CHECK ( salary_scale = CASE WHEN REGEXP_LIKE(salary_grade, '[A-D]', 'c') THEN 'S1' ELSE 'S2' END )
);
```
[Please see SQL Fiddle schema here.](http://sqlfiddle.com/#!4/1e0f9)
|
A `case` has to be compared to something, which is why you are getting the missing right parenthesis error. Unless you particularly want a `case`, you can just check the combination with and/or:
```
CONSTRAINT ck_grade_scale CHECK(
(salary_grade BETWEEN 'A' AND 'D' AND salary_scale = 'S1')
OR (salary_grade BETWEEN 'D' AND 'G' AND salary_scale = 'S2')),
```
[SQL Fiddle demo](http://sqlfiddle.com/#!4/26a71/2).
As Parado has said, you can't use constraints to set column values conditionally, only to restrict them. You could potentially use a virtual column for the scale, but it would mean putting part of a look-up table into the DDL rather than the data, which seems a bit strange.
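The CASE-inside-a-comparison shape can be exercised in any engine with CHECK constraints. Here is a small runnable sketch using Python's bundled sqlite3 (which lacks `REGEXP_LIKE`, so the BETWEEN form from this answer stands in); table and column names mirror the question:

```python
import sqlite3

# The CASE lives inside a comparison, so the whole constraint stays a
# boolean expression -- the point the missing-parenthesis error was hiding.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Grade (
        salary_grade CHAR(1) NOT NULL PRIMARY KEY
            CHECK (salary_grade BETWEEN 'A' AND 'G'),
        salary_scale CHAR(2) NOT NULL DEFAULT 'S1'
            CHECK (salary_scale IN ('S1', 'S2')),
        CHECK (salary_scale = CASE WHEN salary_grade <= 'D'
                                   THEN 'S1' ELSE 'S2' END)
    )""")
conn.execute("INSERT INTO Grade VALUES ('A', 'S1')")  # accepted
conn.execute("INSERT INTO Grade VALUES ('E', 'S2')")  # accepted
try:
    conn.execute("INSERT INTO Grade VALUES ('B', 'S2')")  # grades A-D need S1
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The third insert fails the CASE check, exactly as the Oracle constraint would.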
|
Using a case statement in a check constraint
|
[
"",
"sql",
"oracle",
"oracle11g",
"check-constraints",
""
] |
I have MySQL table
```
id product p_image
1 G images\20131030164545.jpg
2 S images\20131230164545.jpg
3 V images\20140110164545.jpg
4 R images\20140320164545.jpg
5 K images\20140526164545.jpg
6 L images\20150110164545.jpg
7 SK images\20150120164545.jpg
```
Here I need to extract products from the above table where the p\_image timestamp is between two dates (for example, from 2013/12/01 to 2014/07/30).
The query needs to extract the timestamp from a string like 'images\20140526164545.jpg', convert it to a date, and select the values between the two dates.
|
Assuming the format of the string is fixed (which it looks to be), you can use the `substr` function to extract the timestamp and then cast it to a date and filter by it. Something like this should work:
```
select * from table1
where cast(substr(p_image FROM 8 FOR 14) as date)
between '2013/12/01' and '2014/07/30'
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!2/e5f1f/6)
There might be more efficient ways to do this, but this should give you an idea to start with.
Edit: if the string can vary, then something like `left(right(p_image, 18), 14)` should work.
|
The timestamps in `p_image` begin with the date in YYYYMMDD format, so you can compare those characters as strings. That is, there is no reason to convert the strings to a date data type.
Hence you can just do:
```
where substr(p_image, 8, 8) between '20131201' and '20140730'
```
If the position of the date is not fixed but it always comes after the `\`, you can do:
```
where left(substring_index(p_image, '\\', -1), 8) between '20131201' and '20140730'
```
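A quick sanity check of those string positions, runnable in Python (sample values copied from the question): in `images\20140526164545.jpg` the 14-digit timestamp starts right after the backslash, and its first 8 digits are the YYYYMMDD date, so plain string comparison works:

```python
# Python analogue of left(substring_index(p_image, '\\', -1), 8)
paths = [
    r"images\20131030164545.jpg",
    r"images\20140526164545.jpg",
    r"images\20150110164545.jpg",
]

def day_part(p):
    # take everything after the last backslash, keep the first 8 digits
    return p.rsplit("\\", 1)[-1][:8]

# YYYYMMDD strings sort lexicographically in date order
selected = [p for p in paths if "20131201" <= day_part(p) <= "20140730"]
print(selected)
```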
|
Query to Select where timestamp between date1 and date 2
|
[
"",
"mysql",
"sql",
""
] |
I use this forum all the time for VBA help but this is the first time I have to post something myself.
I am trying to make a report that provides a summary of various alarms stored in Access. I want to provide a simple count of each alarm, each day. I have used some SQL queries but not really any Access. I learned from Access itself that it can do pivot tables. If there is a better way, please let me know.
```
Set CommandQuery.activeConnection = conn
commandQuery.CommandText = _
"TRANSFORM Count(FixAlarms.[Alm_NativeTimeLast]) AS CountOfAlm_NativeTimeLast " & _
"SELECT FixAlarms.Alm_Tagname, FixAlarms.Alm_Desc " & _
"FROM FixAlarms " & _
"WHERE ((FixAlarms.Alm_Tagname) <> """")) AND FixAlarms.Alm_NativeTimeIn > CellTime " & _
"GROUP BY FixAlarms.[Alm_Tagname], FixAlarms.Alm_Descr " & _
"PIVOT Format([Alm_NativeTimeIn],""Short Date"")"
rec.Open commandQuery
```
This is the code I am using. I had to retype it, so please forgive any typos. It does most of what I want, but it does not indicate which day each column represents. I need a header on each column in case there were no alarms one day. I think the answer lies within the IN part of the PIVOT, but I can't get it to work without syntax errors. I thought all I had to do was add on
```
PIVOT Format([Alm_NativeTimeIn],""Short Date"") IN 01/20/15"
```
Please help if you can.
Thanks.
|
So, I just wanted to add a header to my pivot table that would tell me what date the particular column was for.
The part of the code that I did not show was that I was using a rec.getrows to move all of my data into a simpler array variable. While this had all the data from Access, it did not have any headers to inform me what was a tagname, what was a description, and what was which date.
I found that in the recordset itself under fields.item(n) there was a Name attribute. This name told me where the column data came from or the date of the data. Using this and a simple day(date) function, I was able to make my monthly report summarizing all of the alarms.
Thanks for your help guys, but I either was not clear in my description of the problem or it was being over thought.
|
In order to get the records for all days, even those where there was no activity, you need to create these days. The simplest way to do so in Access is to use a set of UNION statements to create a fake table for the days, similar to this:
```
SELECT #2015-01-20# as dt FROM dual
UNION ALL
SELECT #2015-01-21# as dt FROM dual
UNION ALL
SELECT #2015-01-22# as dt FROM dual
```
If you try the above query in Access it will not work, as there is no table called `dual`. You will have to create it. Check [this](https://stackoverflow.com/questions/7933518/table-less-union-query-in-ms-access-jet-ace) SO question.
After you created the above query you can LEFT JOIN it with the source table.
```
TRANSFORM Count(FixAlarms.[Alm_NativeTimeLast]) AS CountOfAlm_NativeTimeLast
SELECT FixAlarms.Alm_Tagname, FixAlarms.Alm_Desc
FROM
(SELECT #2015-01-20# as dt FROM dual
UNION ALL
SELECT #2015-01-21# as dt FROM dual
UNION ALL
SELECT #2015-01-22# as dt FROM dual) as dates LEFT JOIN
FixAlarms ON DateValue(FixAlarms.[Alm_NativeTimeIn]) = dates.dt
WHERE (FixAlarms.Alm_Tagname <> "") AND FixAlarms.Alm_NativeTimeIn > CellTime
GROUP BY FixAlarms.[Alm_Tagname], FixAlarms.Alm_Descr
PIVOT Format(dates.dt, 'Short Date')
```
EDIT: I must add that this is not the only way of achieving it. Another way is to use a `Numbers` table. Create a table called `Numbers` with a single numeric column `n` and fill it with numbers 0 to 100 (depends on the maximum number of days you wish to include into your query). Then your query for the `dates` will be:
```
SELECT DateAdd('d', n, #2015-01-20#) as dt FROM numbers WHERE n < 30;
```
And the resulting query will be:
```
TRANSFORM Count(FixAlarms.[Alm_NativeTimeLast]) AS CountOfAlm_NativeTimeLast
SELECT FixAlarms.Alm_Tagname, FixAlarms.Alm_Desc
FROM
(SELECT DateAdd('d', n, #2015-01-20#) as dt FROM numbers where n < 30) as dates LEFT JOIN
FixAlarms ON DateValue(FixAlarms.[Alm_NativeTimeIn]) = dates.dt
WHERE (FixAlarms.Alm_Tagname <> "") AND FixAlarms.Alm_NativeTimeIn > CellTime
GROUP BY FixAlarms.[Alm_Tagname], FixAlarms.Alm_Descr
PIVOT Format(dates.dt, 'Short Date')
```
|
VBA Access Query for Day Summary
|
[
"",
"sql",
"excel",
"vba",
"ms-access",
"pivot",
""
] |
I have a column in SQL Server holding UTF-8 data in a SQL\_Latin1\_General\_CP1\_CI\_AS collation. How can I convert and save the text in ISO 8859-1 encoding? I would like to do this in a query on SQL Server. Any tips?
> Olá. Gostei do jogo. Quando "baixei" até achei que não iria curtir muito
|
I have written a function to repair UTF-8 text that is stored in a `varchar` field.
To check the fixed values you can use it like this:
```
CREATE TABLE #Table1 (Column1 varchar(max))
INSERT #Table1
VALUES ('Olá. Gostei do jogo. Quando "baixei" até achei que não iria curtir muito')
SELECT *, NewColumn1 = dbo.DecodeUTF8String(Column1)
FROM #Table1
WHERE Column1 <> dbo.DecodeUTF8String(Column1)
```
Output:
```
Column1
-------------------------------
Olá. Gostei do jogo. Quando "baixei" até achei que não iria curtir muito
NewColumn1
-------------------------------
Olá. Gostei do jogo. Quando "baixei" até achei que não iria curtir muito
```
The code:
```
CREATE FUNCTION dbo.DecodeUTF8String (@value varchar(max))
RETURNS nvarchar(max)
AS
BEGIN
-- Transforms a UTF-8 encoded varchar string into Unicode
-- By Anthony Faull 2014-07-31
DECLARE @result nvarchar(max);
-- If ASCII or null there's no work to do
IF (@value IS NULL
OR @value NOT LIKE '%[^ -~]%' COLLATE Latin1_General_BIN
)
RETURN @value;
-- Generate all integers from 1 to the length of string
WITH e0(n) AS (SELECT TOP(POWER(2,POWER(2,0))) NULL FROM (VALUES (NULL),(NULL)) e(n))
, e1(n) AS (SELECT TOP(POWER(2,POWER(2,1))) NULL FROM e0 CROSS JOIN e0 e)
, e2(n) AS (SELECT TOP(POWER(2,POWER(2,2))) NULL FROM e1 CROSS JOIN e1 e)
, e3(n) AS (SELECT TOP(POWER(2,POWER(2,3))) NULL FROM e2 CROSS JOIN e2 e)
, e4(n) AS (SELECT TOP(POWER(2,POWER(2,4))) NULL FROM e3 CROSS JOIN e3 e)
, e5(n) AS (SELECT TOP(POWER(2.,POWER(2,5)-1)-1) NULL FROM e4 CROSS JOIN e4 e)
, numbers(position) AS
(
SELECT TOP(DATALENGTH(@value)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM e5
)
-- UTF-8 Algorithm (http://en.wikipedia.org/wiki/UTF-8)
-- For each octet, count the high-order one bits, and extract the data bits.
, octets AS
(
SELECT position, highorderones, partialcodepoint
FROM numbers a
-- Split UTF8 string into rows of one octet each.
CROSS APPLY (SELECT octet = ASCII(SUBSTRING(@value, position, 1))) b
-- Count the number of leading one bits
CROSS APPLY (SELECT highorderones = 8 - FLOOR(LOG( ~CONVERT(tinyint, octet) * 2 + 1)/LOG(2))) c
CROSS APPLY (SELECT databits = 7 - highorderones) d
CROSS APPLY (SELECT partialcodepoint = octet % POWER(2, databits)) e
)
-- Compute the Unicode codepoint for each sequence of 1 to 4 bytes
, codepoints AS
(
SELECT position, codepoint
FROM
(
-- Get the starting octect for each sequence (i.e. exclude the continuation bytes)
SELECT position, highorderones, partialcodepoint
FROM octets
WHERE highorderones <> 1
) lead
CROSS APPLY (SELECT sequencelength = CASE WHEN highorderones in (1,2,3,4) THEN highorderones ELSE 1 END) b
CROSS APPLY (SELECT endposition = position + sequencelength - 1) c
CROSS APPLY
(
-- Compute the codepoint of a single UTF-8 sequence
SELECT codepoint = SUM(POWER(2, shiftleft) * partialcodepoint)
FROM octets
CROSS APPLY (SELECT shiftleft = 6 * (endposition - position)) b
WHERE position BETWEEN lead.position AND endposition
) d
)
-- Concatenate the codepoints into a Unicode string
SELECT @result = CONVERT(xml,
(
SELECT NCHAR(codepoint)
FROM codepoints
ORDER BY position
FOR XML PATH('')
)).value('.', 'nvarchar(max)');
RETURN @result;
END
GO
```
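The round-trip behind this function can be sketched in a couple of lines of Python: the stored bytes are valid UTF-8 that were read as if they were Latin-1, so re-encoding the garbled text as Latin-1 and decoding the bytes as UTF-8 recovers the original (the sample string here is a short made-up one, not from the question):

```python
# Mojibake repair: undo a Latin-1 misread of UTF-8 bytes.
garbled = "OlÃ¡, nÃ£o"                      # what the varchar column shows
fixed = garbled.encode("latin-1").decode("utf-8")
print(fixed)                                # Olá, não
```

This is exactly the transformation the T-SQL function performs octet by octet.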
|
[Jason Penny](http://jasontpenny.com/) has also [written](http://www.jasontpenny.com/blog/2009/07/31/sql-function-to-get-nvarchar-from-utf-8-stored-in-varchar/) an SQL function to convert UTF-8 to Unicode (MIT licence) which worked on a simple example for me:
```
CREATE FUNCTION dbo.UTF8_TO_NVARCHAR(@in VarChar(MAX))
RETURNS NVarChar(MAX)
AS
BEGIN
DECLARE @out NVarChar(MAX), @i int, @c int, @c2 int, @c3 int, @nc int
SELECT @i = 1, @out = ''
WHILE (@i <= Len(@in))
BEGIN
SET @c = Ascii(SubString(@in, @i, 1))
IF (@c < 128)
BEGIN
SET @nc = @c
SET @i = @i + 1
END
ELSE IF (@c > 191 AND @c < 224)
BEGIN
SET @c2 = Ascii(SubString(@in, @i + 1, 1))
SET @nc = (((@c & 31) * 64 /* << 6 */) | (@c2 & 63))
SET @i = @i + 2
END
ELSE
BEGIN
SET @c2 = Ascii(SubString(@in, @i + 1, 1))
SET @c3 = Ascii(SubString(@in, @i + 2, 1))
SET @nc = (((@c & 15) * 4096 /* << 12 */) | ((@c2 & 63) * 64 /* << 6 */) | (@c3 & 63))
SET @i = @i + 3
END
SET @out = @out + NChar(@nc)
END
RETURN @out
END
GO
```
The ticked answer by Anthony "looks" better to me, but maybe run both if doing a conversion and investigate any discrepancies?!
We also used the *very* ugly code below to detect BMP Unicode characters that were UTF-8 encoded in varchar fields, so those fields could then be converted to nvarchar (UTF-16).
```
LIKE (N'%[' + CONVERT(NVARCHAR,(CHAR(192))) + CONVERT(NVARCHAR,(CHAR(193))) + CONVERT(NVARCHAR,(CHAR(194))) + CONVERT(NVARCHAR,(CHAR(195))) + CONVERT(NVARCHAR,(CHAR(196))) + CONVERT(NVARCHAR,(CHAR(197))) + CONVERT(NVARCHAR,(CHAR(198))) + CONVERT(NVARCHAR,(CHAR(199))) + CONVERT(NVARCHAR,(CHAR(200))) + CONVERT(NVARCHAR,(CHAR(201))) + CONVERT(NVARCHAR,(CHAR(202))) + CONVERT(NVARCHAR,(CHAR(203))) + CONVERT(NVARCHAR,(CHAR(204))) + CONVERT(NVARCHAR,(CHAR(205))) + CONVERT(NVARCHAR,(CHAR(206))) + CONVERT(NVARCHAR,(CHAR(207))) + CONVERT(NVARCHAR,(CHAR(208))) + CONVERT(NVARCHAR,(CHAR(209))) + CONVERT(NVARCHAR,(CHAR(210))) + CONVERT(NVARCHAR,(CHAR(211))) + CONVERT(NVARCHAR,(CHAR(212))) + CONVERT(NVARCHAR,(CHAR(213))) + CONVERT(NVARCHAR,(CHAR(214))) + CONVERT(NVARCHAR,(CHAR(215))) + CONVERT(NVARCHAR,(CHAR(216))) + CONVERT(NVARCHAR,(CHAR(217))) + CONVERT(NVARCHAR,(CHAR(218))) + CONVERT(NVARCHAR,(CHAR(219))) + CONVERT(NVARCHAR,(CHAR(220))) + CONVERT(NVARCHAR,(CHAR(221))) + CONVERT(NVARCHAR,(CHAR(222))) + CONVERT(NVARCHAR,(CHAR(223))) + CONVERT(NVARCHAR,(CHAR(224))) + CONVERT(NVARCHAR,(CHAR(225))) + CONVERT(NVARCHAR,(CHAR(226))) + CONVERT(NVARCHAR,(CHAR(227))) + CONVERT(NVARCHAR,(CHAR(228))) + CONVERT(NVARCHAR,(CHAR(229))) + CONVERT(NVARCHAR,(CHAR(230))) + CONVERT(NVARCHAR,(CHAR(231))) + CONVERT(NVARCHAR,(CHAR(232))) + CONVERT(NVARCHAR,(CHAR(233))) + CONVERT(NVARCHAR,(CHAR(234))) + CONVERT(NVARCHAR,(CHAR(235))) + CONVERT(NVARCHAR,(CHAR(236))) + CONVERT(NVARCHAR,(CHAR(237))) + CONVERT(NVARCHAR,(CHAR(238))) + CONVERT(NVARCHAR,(CHAR(239)))
+ N'][' + CONVERT(NVARCHAR,(CHAR(128))) + CONVERT(NVARCHAR,(CHAR(129))) + CONVERT(NVARCHAR,(CHAR(130))) + CONVERT(NVARCHAR,(CHAR(131))) + CONVERT(NVARCHAR,(CHAR(132))) + CONVERT(NVARCHAR,(CHAR(133))) + CONVERT(NVARCHAR,(CHAR(134))) + CONVERT(NVARCHAR,(CHAR(135))) + CONVERT(NVARCHAR,(CHAR(136))) + CONVERT(NVARCHAR,(CHAR(137))) + CONVERT(NVARCHAR,(CHAR(138))) + CONVERT(NVARCHAR,(CHAR(139))) + CONVERT(NVARCHAR,(CHAR(140))) + CONVERT(NVARCHAR,(CHAR(141))) + CONVERT(NVARCHAR,(CHAR(142))) + CONVERT(NVARCHAR,(CHAR(143))) + CONVERT(NVARCHAR,(CHAR(144))) + CONVERT(NVARCHAR,(CHAR(145))) + CONVERT(NVARCHAR,(CHAR(146))) + CONVERT(NVARCHAR,(CHAR(147))) + CONVERT(NVARCHAR,(CHAR(148))) + CONVERT(NVARCHAR,(CHAR(149))) + CONVERT(NVARCHAR,(CHAR(150))) + CONVERT(NVARCHAR,(CHAR(151))) + CONVERT(NVARCHAR,(CHAR(152))) + CONVERT(NVARCHAR,(CHAR(153))) + CONVERT(NVARCHAR,(CHAR(154))) + CONVERT(NVARCHAR,(CHAR(155))) + CONVERT(NVARCHAR,(CHAR(156))) + CONVERT(NVARCHAR,(CHAR(157))) + CONVERT(NVARCHAR,(CHAR(158))) + CONVERT(NVARCHAR,(CHAR(159))) + CONVERT(NVARCHAR,(CHAR(160))) + CONVERT(NVARCHAR,(CHAR(161))) + CONVERT(NVARCHAR,(CHAR(162))) + CONVERT(NVARCHAR,(CHAR(163))) + CONVERT(NVARCHAR,(CHAR(164))) + CONVERT(NVARCHAR,(CHAR(165))) + CONVERT(NVARCHAR,(CHAR(166))) + CONVERT(NVARCHAR,(CHAR(167))) + CONVERT(NVARCHAR,(CHAR(168))) + CONVERT(NVARCHAR,(CHAR(169))) + CONVERT(NVARCHAR,(CHAR(170))) + CONVERT(NVARCHAR,(CHAR(171))) + CONVERT(NVARCHAR,(CHAR(172))) + CONVERT(NVARCHAR,(CHAR(173))) + CONVERT(NVARCHAR,(CHAR(174))) + CONVERT(NVARCHAR,(CHAR(175))) + CONVERT(NVARCHAR,(CHAR(176))) + CONVERT(NVARCHAR,(CHAR(177))) + CONVERT(NVARCHAR,(CHAR(178))) + CONVERT(NVARCHAR,(CHAR(179))) + CONVERT(NVARCHAR,(CHAR(180))) + CONVERT(NVARCHAR,(CHAR(181))) + CONVERT(NVARCHAR,(CHAR(182))) + CONVERT(NVARCHAR,(CHAR(183))) + CONVERT(NVARCHAR,(CHAR(184))) + CONVERT(NVARCHAR,(CHAR(185))) + CONVERT(NVARCHAR,(CHAR(186))) + CONVERT(NVARCHAR,(CHAR(187))) + CONVERT(NVARCHAR,(CHAR(188))) + CONVERT(NVARCHAR,(CHAR(189))) +
CONVERT(NVARCHAR,(CHAR(190))) + CONVERT(NVARCHAR,(CHAR(191)))
+ N']%') COLLATE Latin1_General_BIN
```
The above:
* detects multi-byte sequences encoding U+0080 to U+FFFF (U+0080 to U+07FF is encoded as 110xxxxx 10xxxxxx, U+0800 to U+FFFF is encoded as 1110xxxx 10xxxxxx 10xxxxxx)
* i.e. it detects hex byte 0xC0 to 0xEF followed by hex byte 0x80 to 0xBF
* ignores ASCII control characters U+0000 to U+001F
* ignores characters that are already correctly encoded to unicode >= U+0100 (i.e. not UTF-8)
* ignores unicode characters U+0080 to U+00FF if they don't appear to be part of a UTF-8 sequence e.g. "coöperatief".
* doesn't use LIKE "%[X-Y]" for X=0x80 to Y=0xBF because of potential collation issues
* uses CONVERT(VARCHAR,CHAR(X)) instead of NCHAR because we had problems with NCHAR getting converted to the wrong value (for some values).
* ignores UTF characters greater than U+FFFF (4 to 6 byte sequences which have a first byte of hex 0xF0 to 0xFD)
|
Convert text value in SQL Server from UTF8 to ISO 8859-1
|
[
"",
"sql",
"sql-server",
"encoding",
"utf-8",
"iso-8859-1",
""
] |
I'm a WPF developer not a sql dba... Trying to do a join in a sub select.
I have two tables: tbArticles & tbRegions. TbArticles holds a regionCode column which is also common to tbRegions. I'm trying to get a count of all articles by region and country:


This is the result I'm looking for:

I've managed to get the countries by region from an inner join:
```
select distinct r.longName, a.Country
from ises.tbarticles a
INNER JOIN ref.tbRegions r
On a.RegionCode = r.regionCode
order by longName
```
But how do I get the result I want with the join as a sub select? I've tried this sort of thing but obviously it's not right.
```
select (select distinct r.longName from ises.tbarticles a
INNER JOIN ref.tbRegions r
on r.regionCode = a.RegionCode) as 'Region', country, COUNT(*) as 'Total Articles' from ises.tbarticles a, ref.tbRegions r
group by country
order by Region
```
|
Try this:
```
select r.longName, a.Country, count(*)
from ises.tbarticles a
INNER JOIN ref.tbRegions r
On a.RegionCode = r.regionCode
group by r.longName, a.Country
order by longName
```
|
You need to use the [GROUP BY Clause](http://www.techonthenet.com/sql/group_by.php) and then the count:
```
Select r.longName, a.Country, Count(*) as ArticleCount
From ises.tbarticles a
INNER JOIN ref.tbRegions r On a.RegionCode = r.regionCode
Group By r.longName, a.Country
```
|
T SQL: Inner join in sub select query?
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I have this string:
Blah blah blah - 2389023823
I want to grab everything to the LEFT of the dash:
"Blah blah blah"
How do I go about doing this?
I'm assuming I would have to do `LEFT()` with `CHARINDEX()` + `LEN()` somehow, but I'm having difficulty.
Appreciate the help.
|
You only need to use [LEFT](https://msdn.microsoft.com/en-us/library/ms177601.aspx) and [CHARINDEX](https://msdn.microsoft.com/en-us/library/ms186323.aspx):
```
SELECT LEFT(ColumnName, CHARINDEX('-', ColumnName)-1)
FROM TableName
```
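The same "everything left of the dash" logic can be checked with SQLite from Python, whose `INSTR`/`SUBSTR` stand in for `CHARINDEX`/`LEFT`; `RTRIM` is added here because the sample value has a space before the dash:

```python
import sqlite3

# SUBSTR(s, 1, INSTR(s, '-') - 1) is the SQLite spelling of
# LEFT(s, CHARINDEX('-', s) - 1); RTRIM drops the trailing space.
conn = sqlite3.connect(":memory:")
row = conn.execute(
    "SELECT RTRIM(SUBSTR(s, 1, INSTR(s, '-') - 1)) "
    "FROM (SELECT 'Blah blah blah - 2389023823' AS s)").fetchone()
print(row[0])  # Blah blah blah
```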
|
declare @string varchar(50)
set @string = 'Blah blah blah - 2389023823'
select SUBSTRING(@string, 1, NULLIF(CHARINDEX('-', @string) - 1, -1)) as ResultingString
|
How do I get a specific string UNTIL a specific character out of a string in SQL Server?
|
[
"",
"sql",
"sql-server",
"string",
""
] |
The incoming data Looks like this:
```
ID Key Year
1 2288 2013
1 2288 2014
1 2831 2012
1 3723 2012
1 5005 2012
```
The o/p should be
```
ID Key Year
1 2288 2013
1 2288 2014
```
If there are multiple "Key" values for the same ID and Year, then those rows should be eliminated.
|
Group by ID and Year.
```
SELECT ID, MIN(KEY), YEAR
FROM TABLE
GROUP BY ID, YEAR
HAVING COUNT(KEY)=1
```
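The `HAVING COUNT(...) = 1` filter can be verified against the sample rows from the question using Python's sqlite3 (`Key` is quoted below because it is a reserved word):

```python
import sqlite3

# Groups with more than one distinct row are dropped by HAVING COUNT = 1.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t (ID INT, "Key" INT, Year INT)')
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 2288, 2013), (1, 2288, 2014),
    (1, 2831, 2012), (1, 3723, 2012), (1, 5005, 2012),
])
rows = conn.execute(
    'SELECT ID, MIN("Key"), Year FROM t '
    'GROUP BY ID, Year HAVING COUNT("Key") = 1 ORDER BY Year').fetchall()
print(rows)
```

Only the 2013 and 2014 rows survive, matching the desired output; the 2012 group has three keys and is eliminated.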
|
Use `Window Function` to get the `min` key value in each `year` when it is duplicated.
```
Select ID, [Key], Year from
(
    select *, row_number() over(partition by ID, Year order by [Key]) Rn
    from YourTable
) a
where Rn = 1
```
|
Select Unique rows from the data (not having mutiple Key values and Year) using SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
So...this is a little confusing. I have 2 tables, one is basically a list of Codes and Names of people and topics and then a value, for example:

The second table is just a list of topics, with a value and a "result" which is just a numerical value too:

Now, what I want to do is do a `LEFT OUTER JOIN` on the first table, matching on topic and value, to get the "Result" field from the second table. This is simple in the majority of cases because they will almost always be an exact match, however there will be some cases there won't be, and in those cases the problem will be that the "Value" in table 1 is lower than all the Values in table 2. In this case, I would like to simply do the JOIN as though the Value in table 1 equalled the lowest value **for that topic** in table 2.
To highlight - the `LEFT OUTER JOIN` will return nothing for Row 2 if I match on topic and value, because there's no Geography row in table 2 with the Value 30. In that case, I'd like it to just pick the row where the value is 35, and return the Result field from there in the JOIN instead.
Does that make sense? And, is it possible?
Much appreciated.
|
You can use Cross Apply here. There may be a better solution performance wise.
```
declare @people table(
Code int,
Name varchar(30),
Topic varchar(30),
Value int
)
declare @topics table(
[Subject] varchar(30),
Value int,
Result int
)
INSERT INTO @people values (1, 'Doe,John', 'History', 25),
(2, 'Doe,John', 'Geography', 30),
(3, 'Doe,John', 'Mathematics', 45),
(4, 'Doe,John', 'Brad Pitt Studies', 100)
INSERT INTO @topics values ('History', 25, 95),
('History', 30, 84),
('History', 35, 75),
('Geography', 35, 51),
('Geography', 40, 84),
('Geography', 45, 65),
('Mathematics', 45, 32),
('Mathematics', 50, 38),
('Mathematics', 55, 15),
('Brad Pitt Studies', 100, 92),
('Brad Pitt Studies', 90, 90)
SELECT p.Code, p.Name,
case when p.Value < mTopic.minValue THEN mTopic.minValue
else p.Value
END, mTopic.minValue
FROM @people p
CROSS APPLY
(
SELECT [Subject],
MIN(value) as minValue
FROM @topics t
WHERE p.Topic = t.Subject
GROUP BY [Subject]
) mTopic
```
I am also assuming that:
**This is simple in the majority of cases because they will almost always be an exact match; however, there will be some cases where there won't be, and in those cases the problem will be that the "Value" in table 1 is lower than all the Values in table 2.**
is correct. If there is a time when Value is not equal to any topic values AND is not less than the minimum, it will currently return the people.value even though it is not a 'valid' value (assuming topics is a list of valid values, but I can't tell from your description.)
Also technically you only need that case statement in the select statement, not the following mTopic.minValue but I thought the example showed the effect better with it.
|
In this case I would make two joins instead of one. Something like this:
```
select *
from Table1 T1
LEFT JOIN Table2 T2 on T1.Topic=T2.subject and T1.Value=T2.VALUE
LEFT JOIN Table2 as T3 on T1.Topic=T3.Subject and T1.Value<T3.Value
```
Then do a case to choose the table to take values from. If T2.Value is null then use T3.Value, else T2.Value. Hope this helps you.
|
LEFT OUTER JOIN...with differing matching keys
|
[
"",
"mysql",
"sql",
""
] |
I want the date in `DD-MMM-YYYY` format eg 29-JAN-2015.
I have tried with:
```
SELECT TRIM(TO_DATE('29 Jan 2015'
,'DD MON YY')) FROM DUAL
```
I got result as: 29-JAN-15
But I am expecting: `29-JAN-2015` in date format not in char format
|
Thanks for the answers.

I got the solution. First we need to alter the session as below:

```
alter session set nls_date_format = 'DD-MON-YYYY';
```

Then run the query:

```
SELECT TRIM(TO_DATE('29 Jan 2015', 'DD MON YYYY')) FROM DUAL;
```

Now I get the result: 29-JAN-2015
|
I'm assuming Oracle DB:
```
select to_char(SYSDATE, 'dd-Mon-yyyy') from dual
```
Returns
```
29-Jan-2015
```
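For comparison, the same DD-MON-YYYY rendering in Python (note that `%b` is locale-dependent; under the default C locale it yields 'Jan', so `upper()` gives the JAN form asked for):

```python
from datetime import date

# strftime with %d-%b-%Y, uppercased to match 29-JAN-2015
d = date(2015, 1, 29)
formatted = d.strftime("%d-%b-%Y").upper()
print(formatted)
```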
|
How to format date in DD-MMM-YYYY format eg 29-JAN-2015?
|
[
"",
"sql",
"oracle",
"date",
""
] |
To describe my query problem, the following data is helpful:

A single table contains the columns ID (int), VAL (varchar) and ORD (int)
The values of VAL may change over time: older items identified by ID won't get updated; instead, new rows are appended. The last valid item for an ID is identified by the highest ORD value (which increases over time).
T0, T1 and T2 are points in time where data got entered.
* How do I get to the Result set in an efficient manner?
A solution must not involve materialized views etc. but should be expressible in a single SQL-query. Using Postgresql 9.3.
|
The correct way to select groupwise maximum in postgres is using [`DISTINCT ON`](https://stackoverflow.com/questions/9795660/postgresql-distinct-on-with-different-order-by)
```
SELECT DISTINCT ON (id) sysid, id, val, ord
FROM my_table
ORDER BY id,ord DESC;
```
[Fiddle](http://sqlfiddle.com/#!15/9afb5/3)
|
You want all records for which no newer record exists:
```
select *
from mytable
where not exists
(
select *
from mytable newer
where newer.id = mytable.id
and newer.ord > mytable.ord
)
order by id;
```
You can do the same with row numbers. Give the latest entry per ID the number 1 and keep these:
```
select sysid, id, val, ord
from
(
select
sysid, id, val, ord,
row_number() over (partition by id order by ord desc) as rn
from mytable
) t
where rn = 1
order by id;
```
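The `not exists` pattern from the first query is easy to verify with Python's sqlite3 (it needs no window functions, so it is portable); the rows below are invented sample data:

```python
import sqlite3

# Keep each id's row for which no newer (higher ord) row exists.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, val TEXT, ord INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "old", 1), (1, "new", 2),
    (2, "only", 1),
])
rows = conn.execute("""
    SELECT id, val, ord FROM t
    WHERE NOT EXISTS (SELECT 1 FROM t newer
                      WHERE newer.id = t.id AND newer.ord > t.ord)
    ORDER BY id""").fetchall()
print(rows)
```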
|
Query historized data
|
[
"",
"sql",
"postgresql",
""
] |
I inserted 3 records into table1. It should get those three identities and insert 3 records into table3, but that's not happening: it inserts 3 records with the same identity (i.e. the last scope identity).
```
create table table1(ID INT identity(1,1),Name varchar(50))
insert into table1 values('Ram'),('Sitha'),('Laxman')
create table table2(ID INT identity(1,1),Name varchar(50))
create table table3(ID INT ,Name varchar(50))
insert into table2(Name)
select Name from table1
declare @id int;
set @id= (select scope_Identity())
begin
insert into table3(ID,Name)
select @id,Name from table2
end
select * from table2
select * from table3
```
How can I get all the identities to insert? Do I need to write a loop, or do I need to create a trigger?
Please give me a solution; I have been struggling with this for the past 4 hours.
Thanks in advance
|
Use the OUTPUT clause to handle multi-row inserts:
```
INSERT INTO dbo.table2(Name)
OUTPUT inserted.ID, inserted.Name INTO table3
SELECT Name FROM dbo.table1;
```
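SQLite has no OUTPUT clause, but a small Python sketch shows the underlying pitfall: a single scope_identity()-style value cannot cover a multi-row insert, so you capture one generated id per row instead (names mirror the question, data is the sample data):

```python
import sqlite3

# One generated identity per statement, collected as we go.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table2 (ID INTEGER PRIMARY KEY, Name TEXT)")
ids = []
for name in ("Ram", "Sitha", "Laxman"):
    cur = conn.execute("INSERT INTO table2 (Name) VALUES (?)", (name,))
    ids.append(cur.lastrowid)  # identity of *this* row, not just the last overall
print(ids)
```

SQL Server's OUTPUT clause does this set-based, in one statement, which is why it is the right tool above.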
|
You can use the OUTPUT clause to get the identity from any number of inserts.
```
create table table1(ID INT identity(1,1),Name varchar(50))
DECLARE @T1 Table (ID int, name varchar(50))
insert into table1
OUTPUT inserted.ID, Inserted.Name INTO @T1
values('Ram'),('Sitha'),('Laxman')
```
|
Get Scope identity for multiple inserts
|
[
"",
"sql",
"sql-server",
""
] |
I found two solutions to this problem. One is hard to write and the other is slow to run. I was wondering if there's a better way I'm missing and I'm always happy to learn new methods.
I have an address table that holds multiple records for the same customer, but different types (billing, shipping, etc.) It might not have all types for all customers. If they have a billing address I want to use that, if not use the shipping address.
For solution 1 I joined the table to itself and for each field had to use a case statement to pick the right address. The when part of each case statement was the same but had to be written for every field.
```
select
case
when billing.customer is not null
then billing.address
else shipping.address
end as address
from
(
select *
from personal
where type = 'billing'
) billing
full outer join
(
select *
from personal
where type = 'shipping'
) shipping
on billing.customer = shipping.customer
```
<http://sqlfiddle.com/#!4/6c5ff/4/0>
The second solution is a bit easier to write but takes longer to execute.
```
select *
from personal
where type = 'billing'
union
select * from personal
where type = 'shipping'
and customer not in (
select customer
from personal
where type = 'billing'
)
```
<http://sqlfiddle.com/#!4/6c5ff/5/0>
If there's any better way that would be awesome to learn.
|
Not sure if this is faster:
```
select *
from (
select p.*,
row_number() over (partition by customer order by type) as rn
from personal p
where type in ('billing', 'shipping')
)
where rn = 1;
```
This works because `'billing'` is ordered *before* `'shipping'` and thus gets the row number 1 if both addresses are present. If you need to include other address types that do not happen to sort in the way you want them "considered", you can use a conditional sorting:
```
select *
from (
select personal.*,
row_number() over (partition by customer
order by
case type
when 'postal' then 1
when 'shipping' then 2
else 3
end) as rn
from personal
where type in ('billing', 'shipping', 'postal')
)
where rn = 1;
```
This would give a postal address a higher priority than a shipping address.
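The priority-pick pattern can be tried out with Python's sqlite3 (window functions are available in recent SQLite builds); the sample rows below are invented, with customer 2 deliberately missing a billing address:

```python
import sqlite3

# CASE in the window ORDER BY ranks 'billing' first, so rn = 1 is the
# preferred address for each customer, falling back to 'shipping'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personal (customer INT, type TEXT, address TEXT)")
conn.executemany("INSERT INTO personal VALUES (?, ?, ?)", [
    (1, "billing",  "1 Bill St"),
    (1, "shipping", "1 Ship St"),
    (2, "shipping", "2 Ship St"),  # no billing address for customer 2
])
rows = conn.execute("""
    SELECT customer, type, address
    FROM (SELECT p.*,
                 ROW_NUMBER() OVER (
                     PARTITION BY customer
                     ORDER BY CASE type WHEN 'billing' THEN 1 ELSE 2 END) AS rn
          FROM personal p) x
    WHERE rn = 1
    ORDER BY customer""").fetchall()
print(rows)
```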
|
I would go with the second approach with two slight modifications. First, create an index on `customer` and `type`:
```
create index idx_personal_type_customer on personal(type, customer)
```
And use `union all` instead of `union`:
```
select *
from personal
where type = 'billing'
union all
select * from personal
where type = 'shipping'
and customer not in (
select customer
from personal
where type = 'billing'
)
```
|
Conditionally select entire record based on a field
|
[
"",
"sql",
"oracle",
""
] |
I am creating a query to get the total hours elapsed in a day by someone, however there can be multiple breaks in the times per day.
Here is the query that I have at the moment.
```
SELECT
CHINA_VISION_DorEvents.DorCtrls_Ref,
CHINA_VISION_PubCards.CardCode,
CHINA_VISION_DorEvents.EventTM
FROM
CHINA_VISION_PubCards
INNER JOIN
CHINA_VISION_DorEvents ON CHINA_VISION_PubCards.CardCode = CHINA_VISION_DorEvents.CardCode
WHERE
(CHINA_VISION_PubCards.CardCode = '000006f1')
AND CHINA_VISION_DorEvents.DorCtrls_Ref = '16'
ORDER BY
CONVERT(Date,CHINA_VISION_DorEvents.EventTM) DESC
```
This query doesn't currently attempt to work out the elapsed time, but here are the results of this so you can see how the data looks.
```
Ref CardCode EventTM
---------------------------------------
16 000006f1 2015-01-27 07:32:35.000
16 000006f1 2015-01-26 07:38:02.000
16 000006f1 2015-01-26 12:30:54.000
16 000006f1 2015-01-26 13:03:28.000
16 000006f1 2015-01-26 17:28:47.000
16 000006f1 2015-01-23 07:31:10.000
16 000006f1 2015-01-23 12:22:50.000
16 000006f1 2015-01-23 12:47:51.000
16 000006f1 2015-01-23 17:00:20.000
16 000006f1 2015-01-22 07:35:03.000
16 000006f1 2015-01-22 12:28:13.000
16 000006f1 2015-01-22 13:03:12.000
16 000006f1 2015-01-22 16:55:56.000
```
As you can see, most days there are 4 records, and I need to work out the elapsed time for them. For example, for the 26th:
```
07:38:02
12:30:54
elapsed time = 4 hours, 52 minutes and 52 seconds
13:03:28
17:28:47
Elapsed time = 4 hours, 25 minutes and 19 seconds
```
So the total elapsed for the 26th would be 9 hours, 17 minutes and 71 seconds
So in the result it would look like
```
Date Elapsed
2015-01-26 9:17:71
```
and so on
We do not need to calculate between events 2 and 3, as the user is not logged on during that gap.
```
1 2 3 4
think of it like this ON - OFF BACK ON - OFF
```
table structure
```
Name type allow null
Reference int Unchecked
DorCtrls_Ref int Checked
EventsID tinyint Checked
EventTM datetime Checked
CardCode varchar(50) Checked
JustificationCode tinyint Checked
RecordIndex bigint Checked
Memo varchar(50) Checked
TempltCard varchar(1024)Checked
Templtlength varchar(32)Checked
TempltDir varchar(50) Checked
```
|
If you're not using a very old version of SQL Server, this will work for you:
Test Data:
```
CREATE TABLE Test(Ref int, CardCode varchar(20), EventTM datetime)
insert into Test
select 16,'000006f1','2015-01-27T07:32:35.000' union all
select 16,'000006f1','2015-01-26T07:38:02.000' union all
select 16,'000006f1','2015-01-26T12:30:54.000' union all
select 16,'000006f1','2015-01-26T13:03:28.000' union all
select 16,'000006f1','2015-01-26T17:28:47.000' union all
select 16,'000006f1','2015-01-23T07:31:10.000' union all
select 16,'000006f1','2015-01-23T12:22:50.000' union all
select 16,'000006f1','2015-01-23T12:47:51.000' union all
select 16,'000006f1','2015-01-23T17:00:20.000' union all
select 16,'000006f1','2015-01-22T07:35:03.000' union all
select 16,'000006f1','2015-01-22T12:28:13.000' union all
select 16,'000006f1','2015-01-22T13:03:12.000' union all
select 16,'000006f1','2015-01-22T16:55:56.000';
```
Query:
```
WITH ByDays AS ( -- Number the entry register in each day
SELECT
EventTm AS T,
CONVERT(VARCHAR(10),EventTm,102) AS Day,
FLOOR(CONVERT(FLOAT,EventTm)) DayNumber,
ROW_NUMBER() OVER(PARTITION BY FLOOR(CONVERT(FLOAT,EventTm)) ORDER BY EventTm) InDay
FROM Test
)
--SELECT * FROM ByDays ORDER BY T
,Diffs AS (
SELECT
E.Day,
E.T ET, O.T OT, O.T-E.T Diff,
DATEDIFF(S,E.T,O.T) DiffSeconds -- difference in seconds
FROM
(SELECT BE.T, BE.Day, BE.InDay
FROM ByDays BE
WHERE BE.InDay % 2 = 1) E -- Even rows
INNER JOIN
(SELECT BO.T, BO.Day, BO.InDay
FROM ByDays BO
WHERE BO.InDay % 2 = 0) O -- Odd rows
ON E.InDay + 1 = O.InDay -- Join rows (1,2), (3,4) and so on
AND E.Day = O.Day -- in the same day
)
--SELECT * FROM Diffs
SELECT Day,
SUM(DiffSeconds) Seconds,
CONVERT(VARCHAR(8),
(DATEADD(S, SUM(DiffSeconds), '1900-01-01T00:00:00')),
108) TotalHHMMSS -- The same, formatted as HH:MM:SS
FROM Diffs GROUP BY Day
```
The result looks like this.
```
Day Seconds TotalHHMMSS
2015.01.22 31554 08:45:54
2015.01.23 32649 09:04:09
2015.01.26 33491 09:18:11
```
See the corresponding sql fiddle: <http://sqlfiddle.com/#!3/e1d31/1>
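The same pair-each-odd-row-with-the-next-even-row idea can be exercised outside SQL Server. Here is a sketch in Python's `sqlite3` (window functions need SQLite 3.25+, bundled with modern Python); table and column names are simplified stand-ins:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events(card TEXT, t TEXT);
INSERT INTO events VALUES
 ('000006f1', '2015-01-26 07:38:02'),
 ('000006f1', '2015-01-26 12:30:54'),
 ('000006f1', '2015-01-26 13:03:28'),
 ('000006f1', '2015-01-26 17:28:47');
""")

rows = con.execute("""
WITH numbered AS (
  SELECT t, date(t) AS day,
         ROW_NUMBER() OVER (PARTITION BY date(t) ORDER BY t) AS n
  FROM events
)
SELECT e.day,
       SUM(strftime('%s', o.t) - strftime('%s', e.t)) AS seconds
FROM numbered e
JOIN numbered o ON o.day = e.day AND o.n = e.n + 1  -- pair (1,2), (3,4), ...
WHERE e.n % 2 = 1
GROUP BY e.day
""").fetchall()
print(rows)  # [('2015-01-26', 33491)], i.e. 09:18:11
```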
|
From your result you have posted in your question, you can try the below code
```
CREATE TABLE #TEMP(Ref INT,CardCode VARCHAR(40),EventTM DATETIME)
INSERT INTO #TEMP
SELECT 16, '000006f1', '2015-01-27 07:32:35.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-26 07:38:02.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-26 12:30:54.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-26 13:03:28.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-26 17:28:47.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-23 07:31:10.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-23 12:22:50.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-23 12:47:51.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-23 17:00:20.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-22 07:35:03.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-22 12:28:13.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-22 13:03:12.000'
UNION ALL
SELECT 16, '000006f1', '2015-01-22 16:55:56.000'
```
**QUERY**
```
;WITH CTE AS
(
-- Gets row number Order the date
SELECT ROW_NUMBER() OVER( ORDER BY EventTM)RNO, *
FROM #TEMP
)
,CTE2 AS
(
-- Split to hours,minutes and seconds
SELECT C1.*,C2.EventTM EM,DATEDIFF(S,C1.EventTM,C2.EventTM)DD,
cast(
(cast(cast(C2.EventTM as float) - cast(C1.EventTM as float) as int) * 24)
+ datepart(hh, C2.EventTM - C1.EventTM)
as INT)HH
,CAST(right('0' + cast(datepart(mi, C2.EventTM - C1.EventTM) as varchar(2)), 2)AS INT)MM
,CAST(right('0' + cast(datepart(ss, C2.EventTM - C1.EventTM) as varchar(2)), 2)AS INT)SS
FROM CTE C1
LEFT JOIN CTE C2 ON C1.RNO=C2.RNO-1
WHERE C1.RNO % 2 <> 0
),
CTE3 AS
(
-- Sum the hours, minutes and seconds
SELECT CAST(EventTM AS DATE)EventTM,
SUM(HH) HH,SUM(MM) MM,SUM(SS) SS
FROM CTE2
GROUP BY CAST(EventTM AS DATE)
)
-- Format the elapsed time (the ELSE branches keep minutes < 60 from producing NULL)
SELECT EventTM,
CAST(CASE WHEN MM >= 60 THEN HH + 1 ELSE HH END AS VARCHAR(10)) + ':' +
RIGHT('0' + CAST(CASE WHEN MM >= 60 THEN MM - 60 ELSE MM END AS VARCHAR(10)), 2) + ':' +
CAST(SS AS VARCHAR(10)) Elapsed
FROM CTE3
```
* [Click here](http://sqlfiddle.com/#!3/a6cf86/1) to view result
**EDIT :**
From your query, you can use the below code
```
;WITH CTE AS
(
-- Gets row number Order the date
SELECT ROW_NUMBER() OVER( ORDER BY CONVERT(DateTime,CHINA_VISION_DorEvents.EventTM))RNO,
CHINA_VISION_DorEvents.DorCtrls_Ref Ref,
CHINA_VISION_PubCards.CardCode,
CONVERT(DateTime,CHINA_VISION_DorEvents.EventTM) EventTM
FROM CHINA_VISION_PubCards INNER JOIN
CHINA_VISION_DorEvents ON CHINA_VISION_PubCards.CardCode = CHINA_VISION_DorEvents.CardCode
WHERE (CHINA_VISION_PubCards.CardCode = '000006f1')
and CHINA_VISION_DorEvents.DorCtrls_Ref= '16'
)
,CTE2 AS
(
-- Split to hours,minutes and seconds
SELECT C1.*,C2.EventTM EM,DATEDIFF(S,C1.EventTM,C2.EventTM)DD,
cast(
(cast(cast(C2.EventTM as float) - cast(C1.EventTM as float) as int) * 24)
+ datepart(hh, C2.EventTM - C1.EventTM)
as INT)HH
,CAST(right('0' + cast(datepart(mi, C2.EventTM - C1.EventTM) as varchar(2)), 2)AS INT)MM
,CAST(right('0' + cast(datepart(ss, C2.EventTM - C1.EventTM) as varchar(2)), 2)AS INT)SS
FROM CTE C1
LEFT JOIN CTE C2 ON C1.RNO=C2.RNO-1
WHERE C1.RNO % 2 <> 0
),
CTE3 AS
(
-- Sum the hours, minutes and seconds
SELECT CAST(EventTM AS DATE)EventTM,
SUM(HH) HH,SUM(MM) MM,SUM(SS) SS
FROM CTE2
GROUP BY CAST(EventTM AS DATE)
)
-- Format the elapsed time (the ELSE branches keep minutes < 60 from producing NULL)
SELECT EventTM,
CAST(CASE WHEN MM >= 60 THEN HH + 1 ELSE HH END AS VARCHAR(10)) + ':' +
RIGHT('0' + CAST(CASE WHEN MM >= 60 THEN MM - 60 ELSE MM END AS VARCHAR(10)), 2) + ':' +
CAST(SS AS VARCHAR(10)) Elapsed
FROM CTE3
```
|
SQL Server, calculating total hours per day with multiple breaks
|
[
"",
"sql",
"sql-server",
""
] |
The CreatedDateTime is the table I am trying to filter by, using SQL
```
SELECT( * ) COUNT
FROM TABLE
WHERE CreatedDateTime like '2015''-''01''-''29%'
```
This returns 0 records,
even though I have two records that have a CreatedDateTime of the following
```
CreatedDateTime
2015-01-29 07:33:25.700
2015-01-29 02:01:54.713
```
However the below works
```
like '%' + CONVERT(VARCHAR(12), GETDATE(), 0) + '%'
```
What am I doing incorrect here?
|
Datetime is not a string; the wildcard `%` is used to look for a pattern in a string value.
To get all the values for a particular day, you can simply cast the column to `date`; that ignores the time part and brings back all the records for that date. Something like:
```
SELECT COUNT(*)
FROM TABLE
WHERE CAST(CreatedDateTime AS DATE) = '20150129' --<-- use ANSI YYYYMMDD
```
Also, always stick to ANSI standards; it will protect you against environment-specific issues.
|
Assuming it's stored as a proper datetime you can strip the time portion and use equivalence, since wildcard matching is for strings:
```
SELECT COUNT(*)
FROM TABLE
WHERE CAST(CreatedDateTime AS DATE) = '2015-01-29'
```
|
SQL filter by date returns 0 results
|
[
"",
"sql",
""
] |
I'm trying to create a view that returns the first occurrence of a value.
I have two tables
```
First table:
sID | dt
12 | DateTimeValue1
12 | DateTimeValue2
second table:
S_ID
12
```
I want the view to join both tables and give me the first occurrence of S\_ID (in this case DateTimeValue1).
How can I accomplish this?
More Info:
in table 1 I have two columns sID and dt. Values for these columns look like this:
```
sID: 1 dt: 2014-06-12
sID: 1 dt: 2014-06-13
sID 1 dt: 2014-06-14 etc...
```
I want to join the two tables in my view so that
where S\_ID matches sID it returns the first value (in this case 2014-06-12)
Sorry for any confusion!
Here's what I got so far:
This is what I got so far:
```
CREATE VIEW view_name AS
SELECT [S_ID]
FROM table1
LEFT JOIN table2
ON table1.[S_ID]=table2.sID;
```
|
You could do something like : <http://sqlfiddle.com/#!3/66ee02/1>
```
create view theview as
select
t1.s_id, min(dt) dt
from
table1 t1 inner join
table2 t2 on t1.s_id = t2.s_id
group by
t1.s_id
```
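A quick runnable check of the grouped `MIN` approach, sketched with Python's `sqlite3` and simplified table names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1(sID INTEGER, dt TEXT);
CREATE TABLE table2(s_id INTEGER);
INSERT INTO table1 VALUES (1, '2014-06-12'), (1, '2014-06-13'), (1, '2014-06-14');
INSERT INTO table2 VALUES (1);
""")

# MIN(dt) per sID picks the first (earliest) occurrence for each match.
rows = con.execute("""
SELECT t1.sID, MIN(t1.dt) AS first_dt
FROM table1 t1
JOIN table2 t2 ON t1.sID = t2.s_id
GROUP BY t1.sID
""").fetchall()
print(rows)  # [(1, '2014-06-12')]
```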
|
In MS SQL Server you can select the first row of table1 and join it with table2 in a view like this:
```
create view view_name
as
select table1.*,table2.*
from table2
inner join
(select top 1 *
from table1
order by table1.what_ever_field) as table1
on table2.id = table1.id
```
This works well if table2 has a foreign key to table1.
If they are independent tables with no foreign keys, you can do this:
```
create view view_name
as
select table1.*,table2.*
from (select top 1 * from table2 order by table2.field1) as table2
,
(select top 1 * from table1 order by table1.field1) as table1
```
|
Creating a View that returns the first occurrence of a value
|
[
"",
"mysql",
"sql",
"sql-server-2008",
""
] |
I have the following query
```
SELECT url
FROM
table_repo3
WHERE
(url LIKE '%auto%'
OR url LIKE '%automobile%'
OR url LIKE '%voiture%'
OR url LIKE '%bagnole%'
OR url LIKE '%vehicule%'
OR url LIKE '%berline%'
OR zpages LIKE '%auto%'
OR zpages LIKE '%automobile%'
OR zpages LIKE '%voiture%'
OR zpages LIKE '%bagnole%'
OR zpages LIKE '%vehicule%'
OR zpages LIKE '%berline%')
OR url like '%google%';
```
It returns every row with `google`, `yahoo`, or other URLs.
If I use an `AND` instead of the last `OR`, I get **no results**.
To be able to apply the condition with google, I did the following
```
CREATE TEMPORARY TABLE toto
SELECT *
FROM
table_repo3
WHERE
(url LIKE '%auto%'
OR url LIKE '%automobile%'
OR url LIKE '%voiture%'
OR url LIKE '%bagnole%'
OR url LIKE '%vehicule%'
OR url LIKE '%berline%'
OR zpages LIKE '%auto%'
OR zpages LIKE '%automobile%'
OR zpages LIKE '%voiture%'
OR zpages LIKE '%bagnole%'
OR zpages LIKE '%vehicule%'
OR zpages LIKE '%berline%')
;
```
Then
```
SELECT url FROM toto WHERE url LIKE '%google%';
```
This solution works but it is tedious and long.
Is there anything easier I can do?
TIA as always.
|
I think you want to do the following:
```
SELECT *
FROM table_repo3
WHERE url LIKE '%google%'
AND ( url LIKE '%auto%'
OR url LIKE '%automobile%'
OR url LIKE '%voiture%'
OR url LIKE '%bagnole%'
OR url LIKE '%vehicule%'
OR url LIKE '%berline%'
OR zpages LIKE '%auto%'
OR zpages LIKE '%automobile%'
OR zpages LIKE '%voiture%'
OR zpages LIKE '%bagnole%'
OR zpages LIKE '%vehicule%'
OR zpages LIKE '%berline%' );
```
But that is really not a good way of doing this. You might use a regular expression instead, but even that probably wouldn't speed things up (`LIKE`s with leading wildcards generally won't use indexes):
```
SELECT * FROM table_repo3
WHERE url LIKE '%google%'
  AND ( url ~ 'auto|voiture|bagnole|vehicule|berline'  -- 'auto' already matches 'automobile'
        OR zpages ~ 'auto|voiture|bagnole|vehicule|berline' );
```
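The underlying issue is operator precedence: `AND` binds tighter than `OR`, so the parentheses matter. A small `sqlite3` demonstration with made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(url TEXT, zpages TEXT);
INSERT INTO t VALUES
 ('http://google.com/auto',  ''),
 ('http://google.com/news',  ''),
 ('http://example.com/page', 'auto parts');
""")

# With parentheses: google AND (auto in url OR auto in zpages)
grouped = con.execute("""
SELECT url FROM t
WHERE url LIKE '%google%' AND (url LIKE '%auto%' OR zpages LIKE '%auto%')
ORDER BY rowid
""").fetchall()

# Without parentheses AND binds first: (google AND auto-in-url) OR auto-in-zpages
ungrouped = con.execute("""
SELECT url FROM t
WHERE url LIKE '%google%' AND url LIKE '%auto%' OR zpages LIKE '%auto%'
ORDER BY rowid
""").fetchall()

print(grouped)    # [('http://google.com/auto',)]
print(ungrouped)  # [('http://google.com/auto',), ('http://example.com/page',)]
```

The ungrouped form leaks the non-google row in, which is why the original query returned rows it shouldn't have.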
|
You can do it simply using `SIMILAR TO` in Postgres for multiple `LIKE`-style checks. Try this:
```
SELECT url
FROM
table_repo3
WHERE
url SIMILAR TO '%(auto|automobile|voiture|bagnole|vehicule|berline|google)%'
OR
zpages SIMILAR TO '%(auto|automobile|voiture|bagnole|vehicule|berline)%'
```
|
why the or condition is not taken into account
|
[
"",
"sql",
"postgresql",
""
] |
I have the following schema:
```
PRODUCT_MASTER_TB
PRODUCT_CODE CATEGORY MODEL_GROUP
A 1 1
B 1 1
C 1 2
D 1 2
E 2 2
SALES_TB
SALES_DATE PRODUCT_CODE SOLD_QTY
20150101 A 2
20150101 A 3
20150102 A 4
20150102 B 5
20150103 B 6
20150104 C 7
...
```
What I'd like to select out of these two table is the total amount of sold qty for each product code based on the category and model group the product code is under.
For example:
```
Sales date from 20150101 - 20150104
PRODUCT_CODE SOLD_QTY_FOR_CATEGORY_MODEL
A 20
B 20
C 7
...
```
A and B have the same category and model\_group, so if you add their sold qty up, you get 20 each (2 + 3 + 4 + 5 + 6)
I thought about using fetch to solve the above problem, but is there a way of using joins to get the result I want?
|
You can do the aggregation based on `category` and `model_group` in a subquery and join it with `product_master_tb`. Using a `left join` returns a row for every product; products with no quantity sold in the given date range come back as zero.
```
select pmt.PRODUCT_CODE , isnull(t.soldqty,0) as SOLD_QTY_FOR_CATEGORY_MODEL
from PRODUCT_MASTER_TB pmt
left JOIN (
select pmt.CATEGORY, pmt.MODEL_GROUP, SUM(sold_qty) soldqty
from PRODUCT_MASTER_TB PMT
join SALES_TB ST
on pmt.PRODUCT_CODE = st.PRODUCT_CODE
where SALES_DATE between '20150101' and '20150104'
GROUP BY PMT.CATEGORY, pmt.MODEL_GROUP ) t
on pmt.MODEL_GROUP = t.MODEL_GROUP
and pmt.CATEGORY = t.CATEGORY
```
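Here is a runnable sketch of that approach in Python's `sqlite3`, using the sample data from the question (`isnull` becomes `COALESCE`, which SQL Server also supports):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product_master_tb(product_code TEXT, category INT, model_group INT);
CREATE TABLE sales_tb(sales_date TEXT, product_code TEXT, sold_qty INT);
INSERT INTO product_master_tb VALUES
 ('A',1,1), ('B',1,1), ('C',1,2), ('D',1,2), ('E',2,2);
INSERT INTO sales_tb VALUES
 ('20150101','A',2), ('20150101','A',3), ('20150102','A',4),
 ('20150102','B',5), ('20150103','B',6), ('20150104','C',7);
""")

rows = con.execute("""
SELECT p.product_code, COALESCE(g.qty, 0) AS sold_qty_for_category_model
FROM product_master_tb p
LEFT JOIN (
  SELECT pm.category, pm.model_group, SUM(s.sold_qty) AS qty
  FROM product_master_tb pm
  JOIN sales_tb s ON s.product_code = pm.product_code
  WHERE s.sales_date BETWEEN '20150101' AND '20150104'
  GROUP BY pm.category, pm.model_group
) g ON g.category = p.category AND g.model_group = p.model_group
ORDER BY p.product_code
""").fetchall()
print(rows)  # [('A', 20), ('B', 20), ('C', 7), ('D', 7), ('E', 0)]
```

A and B share group (1,1) so each reports 20; E has no sales and the `LEFT JOIN` plus `COALESCE` turns its `NULL` into 0.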
|
This is a strange requirement, because you are multiplying the total sum of `sold_qty`. The idea, though, is to append the category and model group and then group by that information:
```
select pm.PRODUCT_CODE, SOLD_QTY_FOR_CATEGORY_MODEL
from product_master_tb pm join
(select pm.category, pm.model_group,
sum(sold_qty) as SOLD_QTY_FOR_CATEGORY_MODEL
from sales_tb s join
product_master_tb pm
on s.product_code = pm.product_code
group by category, model_group
) cm
on pm.category = cm.category and pm.model_group = cm.model_group;
```
|
SQL Select data based on the group of the product
|
[
"",
"sql",
"sql-server",
""
] |
I have two values in a table: `PC` and `PC/name`. But when I write a query,
```
select * from TransDetail TD where td.ModUserId like '%PC/%'
```
it gives me both results. Is there a way to get only one record?
|
You should make your WHERE clause like this:
```
select *
from TransDetail TD
where TD.ModUserId ='PC'
```
This way you will get the results which are matched to 'PC' **only**.
The `LIKE` you wrote returns rows where 'PC' can appear at the start, middle, or end of the field.
|
You should use this query instead for an exact match of the value PC:
```
SELECT * FROM TransDetail TD WHERE TD.ModUserId = 'PC';
```
When using % in the WHERE clause you are using a wildcard that stands for 0 or more occurrences of characters in that position.
Also, if you are actually using `LIKE '%PC/%'` it should match the value PC/name and not the value PC, because of the extra '/' character in the statement.
|
How to do exact string match in SQL
|
[
"",
"mysql",
"sql",
""
] |
I want to create a column with a space in its name. In MySQL I just created this using back ticks, but in SQL Server that causes an error. Are there any possibilities to create a column name with a space?
I've tried the following but it doesn't work.
```
create table space_check
(
`roll num` int,
name varchar(50)
)
```
Can anybody tell me how to create this?
|
Use brackets in SQL-Server
```
create table space_check
(
[roll num] int,
name varchar(50)
)
```
|
Give the column name within `Square brackets`.
```
create table space_check
(
[roll num] int,
name varchar(50)
)
```
This is not good practice, though. Prefer an underscore (roll\_num) or camelCase (rollNum).
|
How to create a column name with space in SQL Server
|
[
"",
"mysql",
"sql",
"sql-server",
"space",
""
] |
I'm using `sp_sqlexec` and was wondering how you deal with single quotes within the statement string **'problem'**
```
declare @id int =1
declare @sql nvarchar(200)
set @sql = '
insert into mytable (col1,col2) values (1, 'problem')
'
exec sp_sqlexec @sql
```
|
Better than handling those extra quotes you could change your dynamic sql to use parameters. (Note this is using sp\_executesql instead of sp\_sqlexec because it allows for parameters)
```
declare @id int = 1
declare @sql nvarchar(2000)
declare @col2 varchar(10) = 'problem'
set @sql = 'insert into mytable (col1, col2) values (1, @col2)'
exec sp_executesql @sql, N'@Col2 varchar(10)', @col2 = @col2
```
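The same parameterisation idea applies when the dynamic SQL is built in client code. For instance, in Python's `sqlite3` a placeholder removes the need to double quotes by hand (the table name here is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable(col1 INTEGER, col2 TEXT)")

# The driver quotes the value itself; the embedded apostrophe needs no escaping.
con.execute("INSERT INTO mytable(col1, col2) VALUES (1, ?)", ("pro'blem",))

value = con.execute("SELECT col2 FROM mytable").fetchone()[0]
print(value)  # pro'blem
```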
|
You can escape them with single quotes `'`:
```
set @sql = '
insert into mytable (col1,col2) values (1, ''problem'')'
```
[**SQLFiddle**](http://sqlfiddle.com/#!3/759a9c/1)
|
sql sp_sqlexec dealing with single quotes in statement
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Given the following schema...

how would I go about converting the following query to use joins (to work with MySQL)?
```
SELECT submissions.SubmissionID,
(SELECT data.DataContent FROM data WHERE data.DataFieldName =
(SELECT forms.FormEmailFieldName FROM forms WHERE forms.FormID = submissions.SubmissionFormID)
AND data.DataSubmissionID = submissions.SubmissionID) AS EmailAddress,
(SELECT data.DataContent FROM data WHERE data.DataFieldName =
(SELECT forms.FormNameFieldName FROM forms WHERE forms.FormID = submissions.SubmissionFormID)
AND data.DataSubmissionID = submissions.SubmissionID) AS UserName
FROM submissions
WHERE submissions.SubmissionFormID = 4
```
Below is some sample data and my desired result...
```
+--------+--------------------+-------------------+
| forms | | |
+--------+--------------------+-------------------+
| FormID | FormEmailFieldName | FormNameFieldName |
| 4 | UserEmailAddress | UserName |
| 5 | email | name |
+--------+--------------------+-------------------+
+--------------+------------------+
| submissions | |
+--------------+------------------+
| SubmissionID | SubmissionFormID |
| 10 | 4 |
| 11 | 5 |
| 12 | 5 |
+--------------+------------------+
+--------+------------------+------------------+------------------+
| data | | | |
+--------+------------------+------------------+------------------+
| DataID | DataSubmissionID | DataFieldName | DataContent |
| 1 | 10 | UserEmailAddress | user@example.com |
| 2 | 10 | UserName | Paul D'Otherone |
| 3 | 11 | email | bob@bobs.com |
| 4 | 11 | name | Bob Bobbington |
| 5 | 11 | phone | 01234 5678910 |
| 6 | 11 | country | UK |
| 7 | 12 | name | Sheila Sausages |
| 8 | 12 | country | UK |
+--------+------------------+------------------+------------------+
+--------------------------+------------------+-----------------+
| DESIRED RESULT | | |
+--------------------------+------------------+-----------------+
| submissions.SubmissionID | EmailAddress | UserName |
| 10 | user@example.com | Paul D'Otherone |
| 11 | bob@bobs.com | Bob Bobbington |
| 12 | NULL | Sheila Sausages |
+--------------------------+------------------+-----------------+
```
Also see <http://sqlfiddle.com/#!2/78dea/1/0>
I've tried various combinations of inner joins and left joins but can't get a result set in the same format as the above query. I am still learning how to use joins and am finding this hard to get my head around.
|
Starting from Olli's and Gordon's suggestions I have come up with the following which gives the same result set as my original query...
```
SELECT
s.SubmissionID,
d1.DataContent AS EmailAddress,
d2.DataContent AS Username
FROM submissions s
INNER JOIN forms AS f1 ON (f1.FormID = s.SubmissionFormID)
LEFT JOIN data AS d1 ON (d1.DataFieldName = f1.FormEmailFieldName and d1.DataSubmissionID = s.SubmissionID)
LEFT JOIN data AS d2 ON (d2.DataFieldName = f1.FormNameFieldName and d2.DataSubmissionID = s.SubmissionID)
WHERE s.SubmissionFormID = 4
```
Does this seem like the correct way to go about achieving what I want?
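Running that join against the question's sample data reproduces the desired result table, including the `NULL` email for submission 12. A self-contained check with Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE forms(FormID INT, FormEmailFieldName TEXT, FormNameFieldName TEXT);
CREATE TABLE submissions(SubmissionID INT, SubmissionFormID INT);
CREATE TABLE data(DataSubmissionID INT, DataFieldName TEXT, DataContent TEXT);
INSERT INTO forms VALUES (4, 'UserEmailAddress', 'UserName'), (5, 'email', 'name');
INSERT INTO submissions VALUES (10, 4), (11, 5), (12, 5);
""")
con.executemany("INSERT INTO data VALUES (?, ?, ?)", [
    (10, 'UserEmailAddress', 'user@example.com'),
    (10, 'UserName', "Paul D'Otherone"),
    (11, 'email', 'bob@bobs.com'),
    (11, 'name', 'Bob Bobbington'),
    (12, 'name', 'Sheila Sausages'),
])

rows = con.execute("""
SELECT s.SubmissionID, d1.DataContent AS EmailAddress, d2.DataContent AS UserName
FROM submissions s
JOIN forms f ON f.FormID = s.SubmissionFormID
LEFT JOIN data d1 ON d1.DataSubmissionID = s.SubmissionID
                 AND d1.DataFieldName = f.FormEmailFieldName
LEFT JOIN data d2 ON d2.DataSubmissionID = s.SubmissionID
                 AND d2.DataFieldName = f.FormNameFieldName
ORDER BY s.SubmissionID
""").fetchall()
print(rows)
# [(10, 'user@example.com', "Paul D'Otherone"),
#  (11, 'bob@bobs.com', 'Bob Bobbington'),
#  (12, None, 'Sheila Sausages')]
```

The `WHERE s.SubmissionFormID = 4` filter is omitted here so all three rows of the desired-result table appear.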
|
Without having sample data, just a quick try:
```
SELECT
s.SubmissionID,
d1.DataContent AS EmailAddress,
d2.DataContent AS Username
FROM submissions s
JOIN forms AS f1 ON (f1.FormID = s.SubmissionFormID)
JOIN data AS d1 ON (d1.DataFieldName = f1.FormEmailFieldName)
JOIN data AS d2 ON (d2.DataFieldName = f1.FormNameFieldName)
WHERE s.SubmissionFormID = 4
```
sql fiddle for it: <http://sqlfiddle.com/#!2/9f917/1>
|
Converting sub-queries to joins
|
[
"",
"mysql",
"sql",
"join",
"subquery",
""
] |
I have a database table with the columns: email, name, id
There are duplicate records with the same email and name (ids may or may not be different)
I want to remove all duplicates and not keep any of them.
Example
```
email | name | id
-------------------------------------------------
a@b.com | a | 2 --- remove
a@b.com | a | 4 --- remove
b@c.com | b | 3
c@d.com | c | 5
```
What sql query can I use in phpmyadmin to do this?
|
In MySQL, you should do this with a `join`:
```
delete t
from example t join
(select email, name
from example
group by email, name
having count(*) > 1
) todelete
on t.email = todelete.email and t.name = todelete.name;
```
Unfortunately, MySQL does not support simple subqueries on the table being modified in an `update` or `delete` statement.
EDIT:
Deleting a lot of records can be a performance bottleneck. To get around this, create a temporary table with the records you want, truncate the original table, and re-insert them.
So, try this:
```
create temporary table tempt as
select t.*
from example t join
(select email, name
from example
group by email, name
having count(*) = 1
) tokeep
on t.email = tokeep.email and t.name = tokeep.name;
truncate table example;
insert into example
select * from tempt;
```
Try the `select` query first to be sure it has reasonable performance and does what you want.
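For engines that do allow the self-referencing subquery (SQLite 3.15+ with row values, for example), the delete can be expressed directly. A minimal sketch with Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE example(email TEXT, name TEXT, id INTEGER);
INSERT INTO example VALUES
 ('a@b.com','a',2), ('a@b.com','a',4), ('b@c.com','b',3), ('c@d.com','c',5);

-- SQLite (unlike MySQL) permits the subquery on the table being deleted from.
DELETE FROM example
WHERE (email, name) IN (
  SELECT email, name FROM example GROUP BY email, name HAVING COUNT(*) > 1
);
""")

rows = con.execute("SELECT email, name, id FROM example ORDER BY id").fetchall()
print(rows)  # [('b@c.com', 'b', 3), ('c@d.com', 'c', 5)]
```

Both copies of the duplicate pair are removed; only the never-duplicated rows survive.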
|
You could use `EXISTS`:
```
DELETE FROM TableName t1
WHERE EXISTS
(
SELECT 1 FROM TableName t2
WHERE t1.id <> t2.id
AND COALESCE(t1.email,'') = COALESCE(t2.email,'')
AND COALESCE(t1.name,'') = COALESCE(t2.name,'')
)
```
I've used `COALESCE` to also delete duplicates if the emails or names are null.
|
How to remove ALL duplicates in a database table and NOT KEEP any of them?
|
[
"",
"mysql",
"sql",
""
] |
I have a query that returns multiple columns currently, below is an example.
What I am looking to do is count the number of times ClientID and ServerID pair up. I am looking to get the ServerID that most serves that Client.
```
ClientID, ServerID, Last.Date
1 AB 1/27/2015
2 DS 1/27/2015
1 JK 1/27/2015
1 AB 1/24/2015
2 PD 1/24/2015
2 DS 1/23/2015
```
What I want:
**ClientID ServerID Last.Date ConnectionCount**
```
1 AB 1/27/2015 2
2 DS 1/27/2015 2
```
I know I need to use the Count function, but the issue is Count(ClientID+ServerID) is not valid and I am not sure how count based on just two columns.
Thanks ahead of time
~Jason
|
You can use `GROUP BY` to count connections per client/server pair, then do one more `GROUP BY` on the client to get the server that serves it most:
**[SQL Fiddle](http://sqlfiddle.com/#!2/0bcbbb/3)**
```
SELECT clientId, serverId,
max(connectionCount) as ConnectionCount, LastDate
from
(
select clientId, ServerID,
count(*) as ConnectionCount,
max(LastDate) as LastDate
from Table1
group by clientId, ServerID
) t
group by clientId
```
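Note that in strict SQL the outer `GROUP BY clientId` with a bare `serverId` is not guaranteed to pair the server with its max count (MySQL merely permits the syntax). A variant that pins the pairing explicitly, sketched with Python's `sqlite3` and the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE conn(client INT, server TEXT, d TEXT);
INSERT INTO conn VALUES
 (1,'AB','2015-01-27'), (2,'DS','2015-01-27'), (1,'JK','2015-01-27'),
 (1,'AB','2015-01-24'), (2,'PD','2015-01-24'), (2,'DS','2015-01-23');
""")

rows = con.execute("""
WITH pair_counts AS (
  SELECT client, server, COUNT(*) AS n, MAX(d) AS last_date
  FROM conn
  GROUP BY client, server
)
SELECT client, server, last_date, n
FROM pair_counts p
WHERE n = (SELECT MAX(n) FROM pair_counts WHERE client = p.client)
ORDER BY client
""").fetchall()
print(rows)  # [(1, 'AB', '2015-01-27', 2), (2, 'DS', '2015-01-27', 2)]
```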
|
You can `GROUP BY` multiple columns, to get the count of each combination.
```
SELECT ClientID, ServerID, MAX(`Last.Date`) AS `Last.Date`, COUNT(*) AS ConnectionCount
FROM YourTable
GROUP BY ClientID, ServerID
```
|
SQL Count multiple columns
|
[
"",
"mysql",
"sql",
"count",
""
] |
I am using sql server 2008 R2. More specifically, Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor). I am new to sql server and procedures/triggers. I have the following code to create a trigger (it works):
```
CREATE TRIGGER [dbo].[Insert_WithdrawalCodes]
ON [dbo].[PupilWithdrawalReason]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
UPDATE [dbo].[PupilWithdrawalReason] SET DateCreated=dbo.SYSTEMTIME()
WHERE WithdrawalCodeID IN (SELECT WithdrawalCodeID FROM inserted)
END
```
How do I conditionally create only if the trigger does not yet exist? What am I doing wrong here? StackOverflow has good examples of `IF NOT EXISTS`, but I can't get this to work in conjunction with a `CREATE`. Here is one of my failed efforts:
```
IF NOT EXISTS (SELECT * FROM sys.objects WHERE type = 'TR' AND name = 'Insert_WithdrawalCodes')
exec('CREATE TRIGGER [dbo].[Insert_WithdrawalCodes] ON [dbo].[PupilWithdrawalReason] AFTER INSERT AS BEGIN SET NOCOUNT ON; UPDATE [dbo].[PupilWithdrawalReason] SET DateCreated=dbo.SYSTEMTIME() WHERE WithdrawalCodeID IN (SELECT WithdrawalCodeID FROM inserted) END')
GO
```
|
```
IF EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[TRIGGERNAME]'))
DROP TRIGGER [dbo].[TRIGGERNAME]
go
IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[TABLENAME]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
BEGIN
CREATE TRIGGER [dbo].[TRIGGERNAME] ON [dbo].[TABLENAME] FOR INSERT, UPDATE
AS ...
END
```
Based on your updated question... try this:
```
IF NOT EXISTS (select * from sys.objects where type = 'TR' and name = 'Insert_WithdrawalCodes')
EXEC dbo.sp_executesql @statement = N'
CREATE TRIGGER [dbo].[Insert_WithdrawalCodes]
ON [dbo].[PupilWithdrawalReason]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
UPDATE [dbo].[PupilWithdrawalReason] SET DateCreated=dbo.SYSTEMTIME()
WHERE WithdrawalCodeID IN (SELECT WithdrawalCodeID FROM inserted)
END
'
```
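For contrast, some engines support the conditional form natively: SQLite accepts `CREATE TRIGGER IF NOT EXISTS`, and SQL Server 2016+ adds `DROP TRIGGER IF EXISTS`. A quick `sqlite3` sketch showing that a second, identical create is silently skipped:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(x INTEGER, created TEXT);
CREATE TRIGGER IF NOT EXISTS trg_insert AFTER INSERT ON t
BEGIN
  UPDATE t SET created = datetime('now') WHERE rowid = NEW.rowid;
END;
-- Running the same statement again is a no-op instead of an error.
CREATE TRIGGER IF NOT EXISTS trg_insert AFTER INSERT ON t
BEGIN
  UPDATE t SET created = datetime('now') WHERE rowid = NEW.rowid;
END;
""")

n = con.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type = 'trigger' AND name = 'trg_insert'"
).fetchone()[0]
print(n)  # 1
```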
|
The best way is to check for objects and drop them if they exist before you create them.
Rather than not creating it at all if it exists, I would approach it the other way: drop it if it exists, then create it.
Normally in long, lengthy scripts, when you want to update the definition of a trigger you simply add this at the end of the script and your trigger definition gets updated.
So the approach should be `create the object but drop it if it already exists` rather than `don't create it at all if it already exists`.
```
IF OBJECT_ID ('[Insert_WithdrawalCodes] ', 'TR') IS NOT NULL
DROP TRIGGER [Insert_WithdrawalCodes];
GO
CREATE TRIGGER .......
```
|
How to add "IF NOT EXISTS" to create trigger statement
|
[
"",
"sql",
"sql-server",
"t-sql",
"if-statement",
"triggers",
""
] |
I have an SQL selection which return the following:
```
Name Code Qty
Janet 10 6
Janet 11 9
Janet 09 8
Jones 12 7
Jones 11 8
James 09 5
James 10 4
```
I want this selection sorted based on the qty for all three people: order the people by their maximum quantity, and then by quantity.
The output should look like this:
```
Janet 11 9
Janet 09 8
Janet 10 6
Jones 11 8
Jones 12 7
James 09 5
James 10 4
```
Any way to achieve this?
|
You can specify more than one sorting condition:
```
SELECT * from names order by name, qty desc
```
Above query will sort by `name` and if names are equal then will sort by `qty`
If you want to select only higher qty for every user then use this query:
```
SELECT name, MAX(qty) FROM names GROUP BY name order by MAX(qty);
```
|
This is a subtle problem. It looks like you want to sort the *names* by the maximum of *qty*. This requires a join and aggregation to get the maximum `qty` for each `name`:
```
select t.*
from table t join
     (select name, max(qty) as maxq
      from table t
      group by name
     ) tt
     on t.name = tt.name
order by tt.maxq desc, tt.name, t.qty desc;
```
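With a database that supports window functions, the join can be avoided entirely by sorting on `MAX(qty) OVER (PARTITION BY name)`. A runnable sketch in Python's `sqlite3` (SQLite 3.25+), using the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE names(name TEXT, code TEXT, qty INT);
INSERT INTO names VALUES
 ('Janet','10',6), ('Janet','11',9), ('Janet','09',8),
 ('Jones','12',7), ('Jones','11',8),
 ('James','09',5), ('James','10',4);
""")

# Sort groups by their per-name maximum, then rows within a group by qty.
rows = con.execute("""
SELECT name, code, qty
FROM names
ORDER BY MAX(qty) OVER (PARTITION BY name) DESC, name, qty DESC
""").fetchall()
for r in rows:
    print(r)
```

This reproduces the desired output exactly: Janet (max 9), then Jones (max 8), then James (max 5).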
|
Sort the SQL table by name using maximum of quantity
|
[
"",
"mysql",
"sql",
""
] |
Why correlated subquery result cannot be used in outer query?
In below query I want to access the count in the inner corrleated query in outer query. But it is giving error.
**What am I doing wrong?**
Reference - <http://sqlzoo.net/wiki/More_JOIN_operations#Looking_at_the_id_field>.
Question- 14 (Obtain a list in alphabetical order of actors who've had at least 30 starring roles.)
```
select distinct a.name , t.count
from actor a
join casting c
on (a.id= c.actorid)
where 25< (select count(movieid) as count
from casting c1
where c1.actorid= a.id) as t
order by a.name
```
Below query works fine.
```
select distinct a.name, (select count(movieid) as count
from casting c1
where c1.actorid= a.id)
from actor a
join casting c
on (a.id= c.actorid)
```
<http://sqlfiddle.com/#!4/4e7be/8>
|
`count` is a column in the subquery, but it is not part of the outer join. Therefore, it is not visible outside the subquery itself.
An easier way to tackle this would be by computing the count within your join itself, like so:
```
select a.name, count(movieid)
from
actor a
join casting c
on a.id = c.actorid
group by a.name, a.id
having count(movieid) > 3 -- 30 in the original question; 3 suits the small fiddle data
order by a.name;
```
[SQL Fiddle](http://sqlfiddle.com/#!4/d438a/5)
|
The question asks two things: one, starring roles; two, at least 30 of them. You can use `WHERE` to handle the first, then group the result to count, and finally use `HAVING` to filter on the second.
You can use the COUNT function in the HAVING clause.
```
SELECT actor.name
FROM actor JOIN casting ON actor.id = casting.actorid
WHERE ord = 1
GROUP BY actor.name
HAVING COUNT(actor.name) >= 30
ORDER BY actor.name
```
|
Why correlated subquery result cannot be used in outer query?
|
[
"",
"sql",
"oracle",
"correlated-subquery",
""
] |
I am terrible with regex and too lazy to learn it (I know, it is bad...) My query
`SELECT regexp_replace('MYSTART1#blah~MYSTART2#blah2~MYSTART3#blah-blah~MYSTART4#blah-blah', '.*MYSTART2#(.+)\~.*', '\1') FROM DUAL;`
Should show the value between `MYSTART2#` and `~`
Result:
`blah2~MYSTART3#blah-blahMYSTART4#blah-blah`
Needs to be `blah2`
|
The plus sign operator is "greedy," which means it matches as many characters as possible. Add a question mark to make it lazy instead:
```
'.*MYSTART2#(.+?)\~.*'
```
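The greedy-versus-lazy behaviour is easy to see outside the database; Python's `re` module follows the same quantifier rules (recent Oracle versions also accept the non-greedy `+?`):

```python
import re

s = 'MYSTART1#blah~MYSTART2#blah2~MYSTART3#blah-blah~MYSTART4#blah-blah'

# Greedy: (.+) runs to the LAST '~' it can still satisfy.
greedy = re.sub(r'.*MYSTART2#(.+)~.*', r'\1', s)
# Lazy: (.+?) stops at the FIRST '~' after MYSTART2#.
lazy = re.sub(r'.*MYSTART2#(.+?)~.*', r'\1', s)

print(greedy)  # blah2~MYSTART3#blah-blah
print(lazy)    # blah2
```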
|
Use regexp\_substr:
```
SELECT regexp_substr('MYSTART1#blah~MYSTART2#blah2~MYSTART3#blah-blah~MYSTART4#blah-blah', '[^\#\~]+', 1, 2*&mystart_pos) FROM DUAL;
```
|
Extract text between two text delimiters (tags)
|
[
"",
"sql",
"regex",
"oracle",
""
] |
I have a table t1 with columns and rows as
```
date(date) plan(numeric) actual(numeric)
2015-01-01 50 36
2015-01-02 60 45
2015-01-03 70 40
2015-01-04 80 36
```
I want to change rows (only in the plan column) with respect to the date. For example, I want to change the rows dated 2015-01-01 through 2015-01-30.
expected ouput:
```
date(date) plan(numeric) actual(numeric)
2014-12-31 45 50
2015-01-01 50 36
2015-01-02 50 45
2015-01-03 50 40
2015-01-04 50 36
.
.
2015-01-28 50 20
```
Can someone please let me know how I can do this?
Thank you.
|
```
UPDATE t1 SET plan = 50 WHERE date >= '2015-01-01' AND date <= '2015-01-30'
```
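A minimal runnable check of the range update, using Python's `sqlite3` (ISO-formatted dates compare correctly even as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1(date TEXT, plan INTEGER, actual INTEGER);
INSERT INTO t1 VALUES
 ('2014-12-31', 45, 50),
 ('2015-01-01', 50, 36),
 ('2015-01-02', 60, 45),
 ('2015-01-03', 70, 40),
 ('2015-01-04', 80, 36);
""")

con.execute("UPDATE t1 SET plan = 50 WHERE date BETWEEN '2015-01-01' AND '2015-01-30'")

rows = con.execute("SELECT date, plan FROM t1 ORDER BY date").fetchall()
print(rows)
# [('2014-12-31', 45), ('2015-01-01', 50), ('2015-01-02', 50),
#  ('2015-01-03', 50), ('2015-01-04', 50)]
```

The row outside the range (2014-12-31) is left untouched.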
|
If I understand correctly, you want the first value of the month to be the value for the entire month.
With a query, you can do:
```
select date,
       first_value(plan) over (partition by extract(year from date), extract(month from date)
                               order by date
                              ) as plan,
       actual
from t1;
```
|
postgresql: update rows in a selected column with respect to date
|
[
"",
"sql",
"postgresql",
""
] |
I have the following query:
```
DECLARE @query AS NVARCHAR(MAX);
SET @query ='
SELECT
col1 [TÜR],
col2 [KOD],
col3 [BANKA/CARİ],
col4 [BANKA HESABI],
col5 [AÇIKLAMA],
col6 [VADE],
'+ @cols +'
FROM
(
(
SELECT
''LEASİNG'' [col1],
d.REGNR [col2],
cl.DEFINITION_ [col3],
'''' [col4],
d.DESCRIPTION [col5],
c.PAYMENTDATE [col6],
a.KDVLI- Isnull(b.KDVLI,0) [AMOUNT],
c.TRCURR [TRCURR],
e.CURCODE [CURCODE]
FROM
(SELECT
LOGICALREF,
SUM(PAYMENTTOTAL) AS KDVSIZ,
SUM(INTTOTAL) AS FAIZ,
SUM(MAINTOTAL) AS ANAPARA,
SUM(VATINPAYMENTTOTAL-PAYMENTTOTAL) AS KDV,
SUM(VATINPAYMENTTOTAL) AS KDVLI
FROM LG_011_LEASINGPAYMENTSLNS
WHERE TRANSTYPE=0
GROUP BY LOGICALREF) a
LEFT OUTER JOIN
(SELECT
PARENTREF,
SUM(PAYMENTTOTAL) AS KDVSIZ,
SUM(INTTOTAL) AS FAIZ,
SUM(MAINTOTAL) AS ANAPARA,
SUM(VATINPAYMENTTOTAL-PAYMENTTOTAL) AS KDV,
SUM(VATINPAYMENTTOTAL) AS KDVLI
FROM LG_011_LEASINGPAYMENTSLNS
WHERE TRANSTYPE=1
GROUP BY PARENTREF
) b
ON a.LOGICALREF= b.PARENTREF
INNER JOIN
LG_011_LEASINGPAYMENTSLNS c
ON a.LOGICALREF=c.LOGICALREF
INNER JOIN
LG_011_LEASINGREG d
ON c.LEASINGREF=d.LOGICALREF
INNER JOIN
LG_011_PURCHOFFER z
ON c.LEASINGREF=z.LEASINGREF
INNER JOIN
(SELECT
MAX(LOGICALREF) LOGICALREF,
LEASINGREF,
CLIENTREF
FROM LG_011_PURCHOFFER
GROUP BY CLIENTREF,LEASINGREF) y
ON z.LOGICALREF=y.LOGICALREF
INNER JOIN LG_011_CLCARD cl
ON z.CLIENTREF=cl.LOGICALREF
INNER JOIN L_CURRENCYLIST e
ON c.TRCURR=e.CURTYPE OR (c.TRCURR=0 AND e.CURTYPE=160)
WHERE e.FIRMNR=11 AND z.STATUS=4 AND a.KDVLI - Isnull(b.KDVLI,0)<>0
)
UNION ALL
(
SELECT
''ÇEK'',
cs.NEWSERINO,
bn.DEFINITION_,
ban.DEFINITION_,
cl.DEFINITION_,
cs.DUEDATE,
cs.AMOUNT,
cs.TRCURR,
cur.CURCODE
FROM
LG_011_01_CSTRANS a
INNER JOIN
(
SELECT
CSREF,
MAX(STATNO) [STATNO]
FROM LG_011_01_CSTRANS
GROUP BY CSREF) b
ON a.CSREF=b.CSREF AND a.STATNO=b.STATNO
INNER JOIN LG_011_01_CSCARD cs ON a.CSREF=cs.LOGICALREF
INNER JOIN LG_011_BANKACC ban ON cs.OURBANKREF=ban.LOGICALREF
INNER JOIN LG_011_BNCARD bn ON ban.BANKREF=bn.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON cs.TRCURR=cur.CURTYPE OR (cs.TRCURR=0 AND cur.CURTYPE=160)
INNER JOIN LG_011_CLCARD cl ON a.CARDREF=cl.LOGICALREF
WHERE cs.DOC=3 AND cs.CURRSTAT=9 AND cur.FIRMNR=11
)
UNION ALL
(
SELECT
CASE WHEN cl.SPECODE=''OTOMATİK'' THEN ''OTOMATİK ÖDEME'' WHEN cl.SPECODE=''ZORUNLU'' THEN ''ZORUNLU CARİ'' END,
CASE WHEN pt.MODULENR=5 AND pt.TRCODE=14 THEN clf.DOCODE WHEN pt.MODULENR=5 AND pt.TRCODE<>14 THEN clf.TRANNO ELSE inv.FICHENO END,
cl.DEFINITION_,
'''',
'''',
pt.DATE_,
pt.TOTAL,
pt.TRCURR,
cur.CURCODE
FROM LG_011_01_PAYTRANS pt
INNER JOIN LG_011_CLCARD cl ON pt.CARDREF=cl.LOGICALREF
LEFT OUTER JOIN LG_011_01_INVOICE inv ON pt.FICHEREF=inv.LOGICALREF
LEFT OUTER JOIN LG_011_01_CLFLINE clf ON pt.FICHEREF=clf.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON pt.TRCURR=cur.CURTYPE OR (pt.TRCURR=0 AND cur.CURTYPE=160)
WHERE pt.MODULENR IN (4,5) AND pt.PAID=0 AND pt.SIGN=1 AND cl.CODE LIKE ''320%'' AND cl.SPECODE IN (''OTOMATİK'',''ZORUNLU'') AND cur.FIRMNR=11
)
UNION ALL
(
SELECT
CASE d.SPECODE WHEN '''' THEN ''KREDİ'' WHEN ''FORWARD'' THEN ''FORWARD'' END [TÜR],
d.CODE,
f.DEFINITION_,
g.DEFINITION_,
d.NAME_,
b.DUEDATE,
a.TAKSIT - Isnull(c.TAKSIT,0) AS TAKSIT,
d.TRCURR,
e.CURCODE
FROM
(SELECT
PARENTREF,
SUM(TOTAL) AS ANAPARA,
SUM(INTTOTAL) AS FAIZ,
SUM(BSMVTOTAL) AS BSMV,
SUM(KKDFTOTAL) AS KKDF,
SUM(TOTAL+INTTOTAL+BSMVTOTAL+KKDFTOTAL) AS TAKSIT
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 0
GROUP BY PARENTREF) a
INNER JOIN (SELECT
LOGICALREF,
PARENTREF,
CREDITREF,
DUEDATE,
OPRDATE
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 0) b
ON a.PARENTREF=b.PARENTREF
LEFT OUTER JOIN (SELECT
PARENTREF,
SUM(TOTAL) AS ANAPARA,
SUM(INTTOTAL) AS FAIZ,
SUM(BSMVTOTAL) AS BSMV,
SUM(KKDFTOTAL) AS KKDF,
SUM(TOTAL+INTTOTAL+BSMVTOTAL+KKDFTOTAL) AS TAKSIT
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 1
GROUP BY PARENTREF) c
ON b.LOGICALREF = c.PARENTREF
INNER JOIN LG_011_BNCREDITCARD d
ON b.CREDITREF=d.LOGICALREF
INNER JOIN L_CURRENCYLIST e
ON d.TRCURR=e.CURTYPE OR (d.TRCURR=0 AND e.CURTYPE=160)
INNER JOIN LG_011_BNCARD f
ON d.BNCRREF=f.LOGICALREF
INNER JOIN LG_011_BANKACC g
ON d.BNACCREF=g.LOGICALREF
WHERE e.FIRMNR=11 AND a.TAKSIT - Isnull(c.TAKSIT,0)<>0
)
) x
PIVOT
(
SUM(AMOUNT)
FOR CURCODE IN ('+ @cols +')
) xx
ORDER BY xx.col6,xx.TRCURR, xx.col1, xx.col3, xx.col4, xx.col2
'
```
When I print this query using `print @query`, I get the following, with the last part of my code cut off:
```
SELECT
col1 [TÜR],
col2 [KOD],
col3 [BANKA/CARİ],
col4 [BANKA HESABI],
col5 [AÇIKLAMA],
col6 [VADE],
[TL],[USD],[EUR]
FROM
(
(
SELECT
'LEASİNG' [col1],
d.REGNR [col2],
cl.DEFINITION_ [col3],
'' [col4],
d.DESCRIPTION [col5],
c.PAYMENTDATE [col6],
a.KDVLI- Isnull(b.KDVLI,0) [AMOUNT],
c.TRCURR [TRCURR],
e.CURCODE [CURCODE]
FROM
(SELECT
LOGICALREF,
SUM(PAYMENTTOTAL) AS KDVSIZ,
SUM(INTTOTAL) AS FAIZ,
SUM(MAINTOTAL) AS ANAPARA,
SUM(VATINPAYMENTTOTAL-PAYMENTTOTAL) AS KDV,
SUM(VATINPAYMENTTOTAL) AS KDVLI
FROM LG_011_LEASINGPAYMENTSLNS
WHERE TRANSTYPE=0
GROUP BY LOGICALREF) a
LEFT OUTER JOIN
(SELECT
PARENTREF,
SUM(PAYMENTTOTAL) AS KDVSIZ,
SUM(INTTOTAL) AS FAIZ,
SUM(MAINTOTAL) AS ANAPARA,
SUM(VATINPAYMENTTOTAL-PAYMENTTOTAL) AS KDV,
SUM(VATINPAYMENTTOTAL) AS KDVLI
FROM LG_011_LEASINGPAYMENTSLNS
WHERE TRANSTYPE=1
GROUP BY PARENTREF
) b
ON a.LOGICALREF= b.PARENTREF
INNER JOIN
LG_011_LEASINGPAYMENTSLNS c
ON a.LOGICALREF=c.LOGICALREF
INNER JOIN
LG_011_LEASINGREG d
ON c.LEASINGREF=d.LOGICALREF
INNER JOIN
LG_011_PURCHOFFER z
ON c.LEASINGREF=z.LEASINGREF
INNER JOIN
(SELECT
MAX(LOGICALREF) LOGICALREF,
LEASINGREF,
CLIENTREF
FROM LG_011_PURCHOFFER
GROUP BY CLIENTREF,LEASINGREF) y
ON z.LOGICALREF=y.LOGICALREF
INNER JOIN LG_011_CLCARD cl
ON z.CLIENTREF=cl.LOGICALREF
INNER JOIN L_CURRENCYLIST e
ON c.TRCURR=e.CURTYPE OR (c.TRCURR=0 AND e.CURTYPE=160)
WHERE e.FIRMNR=11 AND z.STATUS=4 AND a.KDVLI - Isnull(b.KDVLI,0)<>0
)
UNION ALL
(
SELECT
'ÇEK',
cs.NEWSERINO,
bn.DEFINITION_,
ban.DEFINITION_,
cl.DEFINITION_,
cs.DUEDATE,
cs.AMOUNT,
cs.TRCURR,
cur.CURCODE
FROM
LG_011_01_CSTRANS a
INNER JOIN
(
SELECT
CSREF,
MAX(STATNO) [STATNO]
FROM LG_011_01_CSTRANS
GROUP BY CSREF) b
ON a.CSREF=b.CSREF AND a.STATNO=b.STATNO
INNER JOIN LG_011_01_CSCARD cs ON a.CSREF=cs.LOGICALREF
INNER JOIN LG_011_BANKACC ban ON cs.OURBANKREF=ban.LOGICALREF
INNER JOIN LG_011_BNCARD bn ON ban.BANKREF=bn.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON cs.TRCURR=cur.CURTYPE OR (cs.TRCURR=0 AND cur.CURTYPE=160)
INNER JOIN LG_011_CLCARD cl ON a.CARDREF=cl.LOGICALREF
WHERE cs.DOC=3 AND cs.CURRSTAT=9 AND cur.FIRMNR=11
)
UNION ALL
(
SELECT
CASE WHEN cl.SPECODE='OTOMATİK' THEN 'OTOMATİK ÖDEME' WHEN cl.SPECODE='ZORUNLU' THEN 'ZORUNLU CARİ' END,
CASE WHEN pt.MODULENR=5 AND pt.TRCODE=14 THEN clf.DOCODE WHEN pt.MODULENR=5 AND pt.TRCODE<>14 THEN clf.TRANNO ELSE inv.FICHENO END,
cl.DEFINITION_,
'',
'',
pt.DATE_,
pt.TOTAL,
pt.TRCURR,
cur.CURCODE
FROM LG_011_01_PAYTRANS pt
INNER JOIN LG_011_CLCARD cl ON pt.CARDREF=cl.LOGICALREF
LEFT OUTER JOIN LG_011_01_INVOICE inv ON pt.FICHEREF=inv.LOGICALREF
LEFT OUTER JOIN LG_011_01_CLFLINE clf ON pt.FICHEREF=clf.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON pt.TRCURR=cur.CURTYPE OR (pt.TRCURR=0 AND cur.CURTYPE=160)
WHERE pt.MODULENR IN (4,5) AND pt.PAID=0 AND pt.SIGN=1 AND cl.CODE LIKE '320%' AND cl.SPECODE IN ('OTOMATİK','ZORUNLU') AND cur.FIRMNR=11
)
UNION ALL
(
SELECT
CASE d.SPECODE WHEN '' THEN 'KREDİ' WHEN 'FORWARD' THEN 'FORWARD' END [TÜR],
d.CODE,
f.DEFINITION_,
g.DEFINITION_,
d.NAME_,
b.DUEDATE,
a.TAKSIT - Isnull(c.TAKSIT,0) AS TAKSIT,
d.TRCURR,
e.CURCODE
FROM
(SELECT
PARENTREF,
SUM(TOTAL) AS ANAPARA,
SUM(INTTOTAL) AS FAIZ,
SUM(BSMVTOTAL) AS BSMV,
SUM(KKDFTOTAL) AS KKDF,
SUM(TOTAL+INTTOTAL+BSMVTOTAL+KKDFTOTAL) AS TAKSIT
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 0
GROUP BY PARENTREF) a
INNER JOIN (SELECT
LOGICALREF,
PARENTREF,
CREDITREF,
DUEDATE,
OPRDATE
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 0) b
ON a.PARENTREF=b.PARENTREF
LEFT OUTER JOIN (SELECT
PARENTREF,
SUM(TOTAL) AS ANAPARA,
SUM(INTTOTAL) AS FAIZ,
SUM(BSMVTOTAL) AS BSMV,
SUM(KKDFTOTAL) AS KKDF,
SUM(TOTAL+INTTOTAL+BSMVTOTAL+KKDFTOTAL) AS TAKSIT
FROM LG_011_BNCREPAYTR
```
How can I fit my entire query inside @query, so that I can execute it properly? Note: NoDisplayName's statement that the query would execute regardless is not true, as I have tried it. When I removed all the unnecessary spaces and trimmed my code (while reducing the functionality), it worked. So a way to fit the code into @query would be appreciated!
Thanks!
---
When I separate the code into two parts, the query executes without any problems:
```
DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX), @query2 AS NVARCHAR(MAX);
SET @cols= STUFF((SELECT ','+QUOTENAME(c.CURCODE) FROM (
(
SELECT DISTINCT b.CURCODE,a.TRCURR FROM LG_011_BNCREDITCARD a INNER JOIN L_CURRENCYLIST b
ON a.TRCURR=b.CURTYPE OR (a.TRCURR=0 AND b.CURTYPE=160)
)
UNION
(
SELECT DISTINCT b.CURCODE,a.TRCURR
FROM LG_011_LEASINGPAYMENTSLNS a
INNER JOIN LG_011_PURCHOFFER z
ON a.LEASINGREF=z.LEASINGREF
INNER JOIN
(SELECT
MAX(LOGICALREF) LOGICALREF,
LEASINGREF
FROM LG_011_PURCHOFFER
GROUP BY LEASINGREF) y
ON z.LOGICALREF=y.LOGICALREF
INNER JOIN L_CURRENCYLIST b
ON a.TRCURR=b.CURTYPE OR (a.TRCURR=0 AND b.CURTYPE=160)
WHERE z.STATUS=4
)
UNION
(
SELECT DISTINCT cur.CURCODE,cs.TRCURR FROM
LG_011_01_CSTRANS a
INNER JOIN
(
SELECT
CSREF,
MAX(STATNO) [STATNO]
FROM LG_011_01_CSTRANS
GROUP BY CSREF) b
ON a.CSREF=b.CSREF AND a.STATNO=b.STATNO
INNER JOIN LG_011_01_CSCARD cs ON a.CSREF=cs.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON cs.TRCURR=cur.CURTYPE OR (cs.TRCURR=0 AND cur.CURTYPE=160)
WHERE cs.DOC=3 AND cs.CURRSTAT=9 AND cur.FIRMNR=11
)
UNION
(
SELECT DISTINCT cur.CURCODE, pt.TRCURR
FROM LG_011_01_PAYTRANS pt
INNER JOIN LG_011_CLCARD cl ON pt.CARDREF=cl.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON pt.TRCURR=cur.CURTYPE OR (pt.TRCURR=0 AND cur.CURTYPE=160)
WHERE pt.MODULENR IN (4,5) AND pt.PAID=0 AND pt.SIGN=1 AND cl.CODE LIKE '320%' AND cl.SPECODE IN ('OTOMATİK','ZORUNLU')
)
) c ORDER BY c.TRCURR FOR XML PATH(''), TYPE
).value('.','NVARCHAR(MAX)'),1,1,'')
SET @query ='
SELECT
col1 [TÜR],
col2 [KOD],
col3 [BANKA/CARİ],
col4 [BANKA HESABI],
col5 [AÇIKLAMA],
col6 [VADE],
'+ @cols +'
FROM
(
(
SELECT
''LEASİNG'' [col1],
d.REGNR [col2],
cl.DEFINITION_ [col3],
'''' [col4],
d.DESCRIPTION [col5],
c.PAYMENTDATE [col6],
a.KDVLI- Isnull(b.KDVLI,0) [AMOUNT],
c.TRCURR [TRCURR],
e.CURCODE [CURCODE]
FROM
(SELECT
LOGICALREF,
SUM(PAYMENTTOTAL) AS KDVSIZ,
SUM(INTTOTAL) AS FAIZ,
SUM(MAINTOTAL) AS ANAPARA,
SUM(VATINPAYMENTTOTAL-PAYMENTTOTAL) AS KDV,
SUM(VATINPAYMENTTOTAL) AS KDVLI
FROM LG_011_LEASINGPAYMENTSLNS
WHERE TRANSTYPE=0
GROUP BY LOGICALREF) a
LEFT OUTER JOIN
(SELECT
PARENTREF,
SUM(PAYMENTTOTAL) AS KDVSIZ,
SUM(INTTOTAL) AS FAIZ,
SUM(MAINTOTAL) AS ANAPARA,
SUM(VATINPAYMENTTOTAL-PAYMENTTOTAL) AS KDV,
SUM(VATINPAYMENTTOTAL) AS KDVLI
FROM LG_011_LEASINGPAYMENTSLNS
WHERE TRANSTYPE=1
GROUP BY PARENTREF
) b
ON a.LOGICALREF= b.PARENTREF
INNER JOIN
LG_011_LEASINGPAYMENTSLNS c
ON a.LOGICALREF=c.LOGICALREF
INNER JOIN
LG_011_LEASINGREG d
ON c.LEASINGREF=d.LOGICALREF
INNER JOIN
LG_011_PURCHOFFER z
ON c.LEASINGREF=z.LEASINGREF
INNER JOIN
(SELECT
MAX(LOGICALREF) LOGICALREF,
LEASINGREF,
CLIENTREF
FROM LG_011_PURCHOFFER
GROUP BY CLIENTREF,LEASINGREF) y
ON z.LOGICALREF=y.LOGICALREF
INNER JOIN LG_011_CLCARD cl
ON z.CLIENTREF=cl.LOGICALREF
INNER JOIN L_CURRENCYLIST e
ON c.TRCURR=e.CURTYPE OR (c.TRCURR=0 AND e.CURTYPE=160)
WHERE e.FIRMNR=11 AND z.STATUS=4 AND a.KDVLI - Isnull(b.KDVLI,0)<>0
)
UNION ALL
(
SELECT
''ÇEK'',
cs.NEWSERINO,
bn.DEFINITION_,
ban.DEFINITION_,
cl.DEFINITION_,
cs.DUEDATE,
cs.AMOUNT,
cs.TRCURR,
cur.CURCODE
FROM
LG_011_01_CSTRANS a
INNER JOIN
(
SELECT
CSREF,
MAX(STATNO) [STATNO]
FROM LG_011_01_CSTRANS
GROUP BY CSREF) b
ON a.CSREF=b.CSREF AND a.STATNO=b.STATNO
INNER JOIN LG_011_01_CSCARD cs ON a.CSREF=cs.LOGICALREF
INNER JOIN LG_011_BANKACC ban ON cs.OURBANKREF=ban.LOGICALREF
INNER JOIN LG_011_BNCARD bn ON ban.BANKREF=bn.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON cs.TRCURR=cur.CURTYPE OR (cs.TRCURR=0 AND cur.CURTYPE=160)
INNER JOIN LG_011_CLCARD cl ON a.CARDREF=cl.LOGICALREF
WHERE cs.DOC=3 AND cs.CURRSTAT=9 AND cur.FIRMNR=11
)
UNION ALL
(
SELECT
CASE WHEN cl.SPECODE=''OTOMATİK'' THEN ''OTOMATİK ÖDEME'' WHEN cl.SPECODE=''ZORUNLU'' THEN ''ZORUNLU CARİ'' END,
CASE WHEN pt.MODULENR=5 AND pt.TRCODE=14 THEN clf.DOCODE WHEN pt.MODULENR=5 AND pt.TRCODE<>14 THEN clf.TRANNO ELSE inv.FICHENO END,
cl.DEFINITION_,
'''',
'''',
pt.DATE_,
pt.TOTAL,
pt.TRCURR,
cur.CURCODE
FROM LG_011_01_PAYTRANS pt
INNER JOIN LG_011_CLCARD cl ON pt.CARDREF=cl.LOGICALREF
LEFT OUTER JOIN LG_011_01_INVOICE inv ON pt.FICHEREF=inv.LOGICALREF
LEFT OUTER JOIN LG_011_01_CLFLINE clf ON pt.FICHEREF=clf.LOGICALREF
INNER JOIN L_CURRENCYLIST cur ON pt.TRCURR=cur.CURTYPE OR (pt.TRCURR=0 AND cur.CURTYPE=160)
WHERE pt.MODULENR IN (4,5) AND pt.PAID=0 AND pt.SIGN=1 AND cl.CODE LIKE ''320%'' AND cl.SPECODE IN (''OTOMATİK'',''ZORUNLU'') AND cur.FIRMNR=11
'
SET @query2='
)
UNION ALL
(
SELECT
CASE d.SPECODE WHEN '''' THEN ''KREDİ'' WHEN ''FORWARD'' THEN ''FORWARD'' END [TÜR],
d.CODE,
f.DEFINITION_,
g.DEFINITION_,
d.NAME_,
b.DUEDATE,
a.TAKSIT - Isnull(c.TAKSIT,0) AS TAKSIT,
d.TRCURR,
e.CURCODE
FROM
(SELECT
PARENTREF,
SUM(TOTAL) AS ANAPARA,
SUM(INTTOTAL) AS FAIZ,
SUM(BSMVTOTAL) AS BSMV,
SUM(KKDFTOTAL) AS KKDF,
SUM(TOTAL+INTTOTAL+BSMVTOTAL+KKDFTOTAL) AS TAKSIT
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 0
GROUP BY PARENTREF) a
INNER JOIN (SELECT
LOGICALREF,
PARENTREF,
CREDITREF,
DUEDATE,
OPRDATE
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 0) b
ON a.PARENTREF=b.PARENTREF
LEFT OUTER JOIN (SELECT
PARENTREF,
SUM(TOTAL) AS ANAPARA,
SUM(INTTOTAL) AS FAIZ,
SUM(BSMVTOTAL) AS BSMV,
SUM(KKDFTOTAL) AS KKDF,
SUM(TOTAL+INTTOTAL+BSMVTOTAL+KKDFTOTAL) AS TAKSIT
FROM LG_011_BNCREPAYTR
WHERE TRANSTYPE = 1
GROUP BY PARENTREF) c
ON b.LOGICALREF = c.PARENTREF
INNER JOIN LG_011_BNCREDITCARD d
ON b.CREDITREF=d.LOGICALREF
INNER JOIN L_CURRENCYLIST e
ON d.TRCURR=e.CURTYPE OR (d.TRCURR=0 AND e.CURTYPE=160)
INNER JOIN LG_011_BNCARD f
ON d.BNCRREF=f.LOGICALREF
INNER JOIN LG_011_BANKACC g
ON d.BNACCREF=g.LOGICALREF
WHERE e.FIRMNR=11 AND a.TAKSIT - Isnull(c.TAKSIT,0)<>0
)
) x
PIVOT
(
SUM(AMOUNT)
FOR CURCODE IN ('+ @cols +')
) xx
ORDER BY xx.col6,xx.TRCURR, xx.col1, xx.col3, xx.col4, xx.col2
'
EXECUTE(@query + @query2)
```
|
I have hit this before (albeit with varchar(max)). Below is the work-around we came up with. This is one of those situations where I can sorta understand why SQL does this, but I can't really explain it, let alone spell out precisely why it works this way. (Google around enough and you'll probably find a technical explanation, if you can figure out the proper search terms.)
Assuming these variables:
```
DECLARE
@Query AS NVARCHAR(MAX)
  ,@Cols nvarchar(100)
SET @Cols = 'A, B, C, D, E'
```
Set your command as follows:
```
SET @Query = replace( cast('' as nvarchar(max))
+ 'Big chunk of text containing <@Cols> wherever you need to substitute in that string'
,'<@Cols>'
,@Cols)
```
The following checks the results (len <> datalength when you're working with nvarchars)
```
PRINT @Query
PRINT len(@query)
PRINT datalength(@query)
```
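The truncation that the empty-string cast guards against can be reproduced directly: when no operand of a string concatenation is a `(max)` type, SQL Server evaluates the expression as `nvarchar(4000)` (or `varchar(8000)`) and truncates *before* the assignment. A minimal repro sketch:

```
DECLARE @q NVARCHAR(MAX);

-- Both operands are non-MAX nvarchar, so the concatenation itself is
-- evaluated as nvarchar(4000) and truncated before being assigned.
SET @q = REPLICATE(N'x', 3000) + REPLICATE(N'x', 3000);
PRINT LEN(@q);   -- 4000, not 6000

-- Seeding the expression with an empty NVARCHAR(MAX) promotes the whole
-- concatenation to MAX, and no truncation occurs.
SET @q = CAST('' AS NVARCHAR(MAX)) + REPLICATE(N'x', 3000) + REPLICATE(N'x', 3000);
PRINT LEN(@q);   -- 6000
```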
|
From [MSDN](https://msdn.microsoft.com/en-us/library/ms176047.aspx)
> A message string can be up to 8,000 characters long if it is a
> non-Unicode string, and 4,000 characters long if it is a Unicode
> string. Longer strings are truncated. The varchar(max) and
> nvarchar(max) data types are truncated to data types that are no
> larger than varchar(8000) and nvarchar(4000)
Even though `PRINT` doesn't show your entire query, the query will still execute.
|
Query stored in NVARCHAR(MAX) does not fit
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"nvarchar",
""
] |
```
create or replace procedure PROC_MYDATA (inputStr IN VARCHAR2,
p_RecordSet IN OUT SYS_REFCURSOR) is
begin
OPEN p_RecordSet FOR
(select * from myTable where name in (inputStr));
end PROC_MYDATA;
```
In the PLSQL Test window, I am trying to set,
inputStr = 'A','B'
and I am getting this error:
```
ORA-01722: invalid number
```
I also tried to put escape character for single quote.
inputStr = '''A''','''B'''
Same error.
Can someone please help me understand what I am doing wrong?
|
I'm afraid it doesn't work this way:
```
SELECT * from myTable where name in (inputStr);
```
You can use dynamic SQL, as in @Bob Jarvis' answer, or you can do the following:
```
SELECT * FROM myTable WHERE REGEXP_LIKE(name, '^(' || REPLACE(inputStr, ',', '|') || ')$');
```
The difficulty with the latter is that, in Oracle, a regular expression can be at most 512 bytes long. So your `inputStr` would be limited to 508 bytes (since we're adding four bytes for the anchors and the grouping).
|
To use a list of comma-separated values you'll need to build and execute the statement dynamically:
```
create or replace procedure PROC_MYDATA (inputStr IN VARCHAR2,
p_RecordSet IN OUT SYS_REFCURSOR)
is
strSql VARCHAR2(32767);
begin
strSql := 'select * from myTable where name in (' || inputStr || ')';
OPEN p_RecordSet FOR strSql;
end PROC_MYDATA;
```
You should use this with a string which contains the single-quote characters in it to delimit each string; thus, use
```
DECLARE
inputStr VARCHAR2(100);
csrCursor SYS_REFCURSOR;
BEGIN
   inputStr := '''A'', ''B''';
PROC_MYDATA(inputStr, csrCursor);
-- ...code to use csrCursor;
CLOSE csrCursor;
END;
```
Share and enjoy.
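Note that concatenating user input into dynamic SQL like this is open to SQL injection. A safer variant (a sketch, with made-up type and procedure names) passes the values as a collection and avoids dynamic SQL entirely:

```
CREATE OR REPLACE TYPE name_tab AS TABLE OF VARCHAR2(100);
/

CREATE OR REPLACE PROCEDURE proc_mydata_safe (names       IN name_tab,
                                              p_recordset IN OUT SYS_REFCURSOR) IS
BEGIN
  OPEN p_recordset FOR
    -- TABLE() unnests the collection so it can be used as an IN-list
    SELECT *
    FROM myTable
    WHERE name IN (SELECT column_value FROM TABLE(names));
END proc_mydata_safe;
/
```

It would then be called as `proc_mydata_safe(name_tab('A', 'B'), csrCursor);` with no quoting gymnastics required.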
|
comma separated parameter in plsql stored procedure
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have two data sets: `DS_A` and `DS_B`.
My question is: why do I get a quantity of 2 for `pro_id` 71549, and not 3?
The documentation for [ALL](https://msdn.microsoft.com/en-us/library/ms180026(v=sql.90).aspx) says:
> Incorporates all rows into the results. This includes duplicates. If not specified, duplicate rows are removed.
```
loc_id pro_id quantity price
------------- ----------- ----------- -----------
2310 5052 1 0
2365 5433 1 0
2310 7694 1 0
2310 9480 1 0
2310 9502 1 0
2310 14413 1 0
2310 31277 1 0
2310 46180 1 0
2310 65233 1 0
2310 68369 1 0
2310 68372 1 0
2310 77396 1 0
loc_id pro_id quantity price
------------- ----------- ----------- -----------
2310 71549 0.15 0
```
When I do my `UNION ALL`:
```
declare @tax float
set @tax = 0.05
select loc_id
, pro_id
, sum(quantity)
, price
from DS_A
group by loc_id, pro_id
UNION ALL
select 2310
, 71549
, case when sum(quantity)<>0 Then sum(quantity/ @tax) Else 0 End
, price
from DS_B
group by pro_id, loc_id
```
Results:
```
loc_id pro_id quantity price
------------- ----------- ----------- -----------
2310 5052 1 0
2365 5433 1 0
2310 7694 1 0
2310 9480 1 0
2310 9502 1 0
2310 14413 1 0
2310 31277 1 0
2310 46180 1 0
2310 65233 1 0
2310 68369 1 0
2310 68372 1 0
2310 77396 1 0
2310 71549 2 0
```
|
I suspect it is getting cast to int and rounded down
```
declare @tax float
set @tax = 0.05
declare @quantityF float
set @quantityF = 0.15
select @quantityF / @tax -- 3
select cast((@quantityF / @tax) as int) -- 2
```
You should not be using float - use decimal
```
declare @taxD decimal(9,2)
set @taxD = 0.05
declare @quantityD decimal(9,2)
set @quantityD = 0.15
select @quantityD / @taxD -- 3.0000000
select cast((@quantityD / @taxD) as int) -- 3
```
|
Not sure ... works for me on Oracle 11 ... what DBMS are you using, and what version?
Seems like you might have a typo, or it's a 2 when you think it's a 3? It definitely should be a 3 after the UNION if it was a 3 before.
```
SQL> l
1 with w_d1 as (
2 select 2310 as loc_id, 5052 as pro_id, 1 as quantity, 0 as price from dual union all
3 select 2365 as loc_id, 5433 as pro_id, 1 as quantity, 0 as price from dual union all
4 select 2310 as loc_id, 7694 as pro_id, 1 as quantity, 0 as price from dual union all
5 select 2310 as loc_id, 9480 as pro_id, 1 as quantity, 0 as price from dual union all
6 select 2310 as loc_id, 9502 as pro_id, 1 as quantity, 0 as price from dual union all
7 select 2310 as loc_id, 14413 as pro_id, 1 as quantity, 0 as price from dual union all
8 select 2310 as loc_id, 31277 as pro_id, 1 as quantity, 0 as price from dual union all
9 select 2310 as loc_id, 46180 as pro_id, 1 as quantity, 0 as price from dual union all
10 select 2310 as loc_id, 65233 as pro_id, 1 as quantity, 0 as price from dual union all
11 select 2310 as loc_id, 68369 as pro_id, 1 as quantity, 0 as price from dual union all
12 select 2310 as loc_id, 68372 as pro_id, 1 as quantity, 0 as price from dual union all
13 select 2310 as loc_id, 77396 as pro_id, 1 as quantity, 0 as price from dual
14 ),
15 w_d2 as (
16 select 2310 as loc_id, 71549 as pro_id, 3 as quantity, 0 as price from dual
17 )
18 select loc_id, pro_id, quantity, price from w_d1
19 union all
20* select loc_id, pro_id, quantity, price from w_d2
SQL> /
LOC_ID PRO_ID QUANTITY PRICE
---------- ---------- ---------- ----------
2310 5052 1 0
2365 5433 1 0
2310 7694 1 0
2310 9480 1 0
2310 9502 1 0
2310 14413 1 0
2310 31277 1 0
2310 46180 1 0
2310 65233 1 0
2310 68369 1 0
2310 68372 1 0
2310 77396 1 0
2310 71549 3 0
13 rows selected.
```
|
WHY? UNION ALL is doing a Calculation
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
I am currently working on a project for the management of oil distribution, and I need the receipts of every bill to be stored in a database. I am thinking of building a smart key for the receipts which will contain the first 2 letters of the city, the gas station id, the auto-increment number, the first letter of the month and the last 2 digits of the year. So it will be somewhat like this:
"AA-3-0001-J15". What I am wondering is how to make the auto-increment number go back to 0001 when the month changes. Any suggestions?
|
To answer the direct question - how to make the number restart at 1 at the beginning of the month.
Since it is not a simple `IDENTITY` column, you'll have to implement this functionality yourself.
To generate such complex value you'll have to write a user-defined function or a stored procedure. Each time you need a new value of your key to insert a new row in the table you'll call this function or execute this stored procedure.
Inside the function/stored procedure you have to make sure that it works correctly when two different sessions are trying to insert the row at the same time. One possible way to do it is to use [`sp_getapplock`](https://msdn.microsoft.com/en-us/library/ms189823.aspx).
You didn't clarify whether the "auto increment" number is a single sequence across all cities and gas stations, or whether each city and gas station has its own sequence of numbers. Let's assume that we want to have a single sequence of numbers for all cities and gas stations within the same month. When the month changes, the sequence restarts.
The procedure should be able to answer the following question when you run it: Is the row that I'm trying to insert the first row of the current month? If the generated value is the first for the current month, then the counter should be reset to 1.
One method to answer this question is to have a helper table, which would have one row for each month: one column for the date, a second column for the last number of the sequence. Once you have such a helper table, your stored procedure would check: what is the current month? What is the last number generated for this month? If such a number exists in the helper table, increment it there and use it to compose the key. If it doesn't exist, insert 1 and use that to compose the key.
Finally, I would not recommend making this composite value the primary key of the table. It is very unlikely that a user requirement says "make the primary key of your table like this". It is up to you how you handle it internally, as long as the accountant can see this magic set of letters and numbers next to the transaction in his report and user interface. The accountant doesn't know what a "primary key" is, but you do. And you know how to join a few tables of cities, gas stations, etc. together to get the information you need from a normalized database.
Oh, by the way, sooner or later you will have more than 9999 transactions per month.
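A rough T-SQL sketch of the helper-table idea described above (table, procedure and column names are made up; `DATEFROMPARTS` requires SQL Server 2012+):

```
CREATE TABLE dbo.MonthCounter (
    MonthStart DATE NOT NULL PRIMARY KEY,  -- first day of the month
    LastNumber INT  NOT NULL
);
GO
CREATE PROCEDURE dbo.GetNextReceiptNumber
    @Number INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @MonthStart DATE = DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1);

    BEGIN TRAN;
    -- Serialize concurrent callers so two sessions cannot get the same number.
    EXEC sp_getapplock @Resource = 'ReceiptNumber', @LockMode = 'Exclusive';

    IF EXISTS (SELECT 1 FROM dbo.MonthCounter WHERE MonthStart = @MonthStart)
        -- Compound assignment: increments the counter and captures the new value.
        UPDATE dbo.MonthCounter
        SET    @Number = LastNumber = LastNumber + 1
        WHERE  MonthStart = @MonthStart;
    ELSE
    BEGIN
        -- First receipt of the month: restart the sequence at 1.
        SET @Number = 1;
        INSERT dbo.MonthCounter (MonthStart, LastNumber) VALUES (@MonthStart, 1);
    END

    COMMIT;  -- releases the applock taken with Transaction ownership
END
```

The returned `@Number` would then be formatted together with the city letters, gas station id, and month/year letters to compose the receipt code.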
|
Do you want to store all that in one column? That sounds to me like a composite key over four columns...
Which could look like the following:
```
CREATE TABLE receipts (
CityCode VARCHAR2(2),
GasStationId NUMERIC,
AutoKey NUMERIC,
MonthCode VARCHAR2(2),
PRIMARY KEY (CityCode, GasStationId, AutoKey, MonthCode)
);
```
Which DBMS are you using? (MySQL, MSSQL, PostgreSQL, ...?)
If it's MySQL you could have a batch-job which runs on the month's first which executes:
```
ALTER TABLE tablename AUTO_INCREMENT = 1
```
But that logic would be on application layer instead of DB-layer...
|
How to create a smart key, which would reset its auto increment value at the end of the month
|
[
"",
"sql",
"sql-server",
""
] |
I'm having huge difficulty with a 3 table query.
The scenario is that `TEAM` has many or no `MEMBERS`, a `MEMBER` could have many or no `TASKS`. What I want to get is the number of `TASKS` for every `TEAM`. `TEAM` has its own `ID`, `MEMBER` holds this as a FK on `TEAM_ID`, `TASK` has `MEMBER_ID` on the `TASK`.
I want to get a report of TEAM.NAME, COUNT(Person/Team), Count(Tasks/Team)
I have myself so confused. My thinking was to use an outer join on TEAM and MEMBER so I have all the teams with any members they have. From here I'm getting totally confused. If anyone can just point me in the right direction so I have something to work from, I'd be so grateful.
|
You want to use `count distinct`:
[MySQL COUNT DISTINCT](https://stackoverflow.com/questions/5737628/mysql-count-distinct)
```
select t.name as Team,
count(distinct m.ID) as Member_cnt,
count(distinct tk.ID) as Task_cnt
from team t
left join member m
on t.ID = m.TEAM_ID
left join tasks tk
on tk.MEMBER_ID = m.ID
group by t.name;
```
|
I think you can do what you want with aggregation -- and `count(distinct)`:
```
select t.name,
count(distinct m.memberid) as nummembers,
count(distinct tk.taskid) as numtasks
from team t left join
member m
     on t.teamid = m.teamid left join
tasks tk
on tk.memberid = m.memberid
group by t.name;
```
|
3 table query with count
|
[
"",
"mysql",
"sql",
"sqlplus",
""
] |
For example there is some table with dates:
```
2015-01-01
2015-01-02
2015-01-03
2015-01-06
2015-01-07
2015-01-11
```
I have to write an SQL query which will return the count of consecutive dates starting from every date in the table. So the result will be like:
```
2015-01-01 1
2015-01-02 2
2015-01-03 3
2015-01-06 1
2015-01-07 2
2015-01-11 1
```
It seems to me that I should use the `LAG` and `LEAD` functions, but now I can't see how.
|
```
CREATE TABLE #T ( MyDate DATE) ;
INSERT #T VALUES ('2015-01-01'),('2015-01-02'),('2015-01-03'),('2015-01-06'),('2015-01-07'),('2015-01-11')
SELECT
RW=ROW_NUMBER() OVER( PARTITION BY GRP ORDER BY MyDate) ,MyDate
FROM
(
SELECT
MyDate, DATEDIFF(Day, '1900-01-01' , MyDate)- ROW_NUMBER() OVER( ORDER BY MyDate ) AS GRP
FROM #T
) A
DROP TABLE #T;
```
|
You can use this `CTE`:
```
;WITH CTE AS (
SELECT [Date],
ROW_NUMBER() OVER(ORDER BY [Date]) AS rn,
CASE WHEN DATEDIFF(Day, PrevDate, [Date]) IS NULL THEN 0
WHEN DATEDIFF(Day, PrevDate, [Date]) > 1 THEN 0
ELSE 1
END AS flag
FROM (
SELECT [Date], LAG([Date]) OVER (ORDER BY [Date]) AS PrevDate
FROM #Dates ) d
)
```
to produce the following result:
```
Date rn flag
===================
2015-01-01 1 0
2015-01-02 2 1
2015-01-03 3 1
2015-01-06 4 0
2015-01-07 5 1
2015-01-11 6 0
```
All you have to do now is to calculate a running total of `flag` *up to* the *first* occurrence of a *preceding* zero value:
```
;WITH CTE AS (
... cte statements here ...
)
SELECT [Date], b.cnt + 1
FROM CTE AS c
OUTER APPLY (
SELECT TOP 1 COALESCE(rn, 1) AS rn
FROM CTE
WHERE flag = 0 AND rn < c.rn
ORDER BY rn DESC
) a
CROSS APPLY (
SELECT COUNT(*) AS cnt
FROM CTE
WHERE c.flag <> 0 AND rn < c.rn AND rn >= a.rn
) b
```
`OUTER APPLY` calculates the `rn` value of the *first* zero-valued flag that comes before the current row. `CROSS APPLY` calculates the number of records preceding the current record *up to* the first occurrence of a preceding zero valued flag.
|
Get count of consecutive dates
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I've got a table structure with 2 tables
a table person like this:
```
id name etc.
1 test
2 smith
3 shaw
```
and a second table `personhist` where I document changes to names, so if someone changed their name (after marrying, for example) I still have the old name:
```
id personid name etc.
1 1 oldtest
```
Now I have the following query
```
select * from person p
Left join personhist ph on p.id = ph.personid
where (upper(p.name) = upper('oldtest') or upper(ph.name) like upper('oldtest'));
```
This query will result in the following result
```
id name id personid name
1 test 1 1 oldtest
```
But I'd like to have only the actual name in the result. So the question is: is there a way to show only data from the "main" table, even if there is a record in the table I join?
|
> But I'd like to have only the actual name in the result. So the question is: is there a way to show only data from the "main" table, even if there is a record in the table I join?
Simply do NOT include the columns which you don't want to display. You are using **SELECT \*** which is not actually a good thing to do on any production system. You should specify only those column names which are required.
If you really want to display all the columns of one of the tables then, in your query, since you already have an **alias** for each table name, use that alias to include only that table's columns.
`select p.*` - this will include all columns of table `persons` having alias as `p`.
`select ph.*` - this will include all columns of table `personhist` having alias as `ph`.
To include only selected columns, mention them explicitly. For example -
`select p.id, p.name, ph.personid...`
I guess what you need is -
```
select p.*, ph.personid
from person p
Left join personhist ph on p.id = ph.personid
where (upper(p.name) = upper('oldtest') or upper(ph.name) like upper('oldtest'));
```
Note, though, that the **join** is still needed here: it is what allows the `where` clause to match on the historical name in `personhist`.
|
```
select p.*, coalesce(p.name, ph.name)
from person p
Left join personhist ph on p.id = ph.personid
where (upper(p.name) = upper('oldtest') or upper(ph.name) like upper('oldtest'))
```
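An alternative that avoids the join entirely: with `EXISTS`, each person appears exactly once even when several history rows match, and only columns of the main table are produced (a sketch against the tables from the question):

```
select p.*
from person p
where upper(p.name) = upper('oldtest')
   or exists (select 1
              from personhist ph
              where ph.personid = p.id
                and upper(ph.name) like upper('oldtest'));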
|
Selection with left join but without showing the Data inside the joined table
|
[
"",
"sql",
"oracle",
"select",
"join",
"left-join",
""
] |
I've been working on this lately and I couldn't work it out. I have the following table:
```
ID Language Text
---- -------- --------
1 spanish Hola
1 english Hello
2 spanish Chau
2 english Goodbye
2 french Au revoir
3 english Thank you
```
I need to get each ID once and the text in Spanish but if there wasn't any text in Spanish I should get the English one and so on.
So if I run this query I should get:
```
ID Language Text
---- -------- --------
1 spanish Hola
2 spanish Chau
3 english Thank you
```
I can not use
```
Select ID, Language, Text From table Where Language = 'spanish'
```
Because if there is no Spanish text I would not retrieve that ID, and I need one record per ID. I thought of maybe using something like this:
select Distinct(Id), Text from table
order by FIELD(Language, 'Spanish', 'English', 'French', 'Italian')
```
But It didn't work. I get:
```
'FIELD' is not a recognized built-in function name.
```
Can someone help me?
Thank you all very much!
|
This is another option:
```
SELECT i.ID, w.Text
FROM (
SELECT ID
FROM Words
GROUP BY ID) i(ID)
CROSS APPLY (
SELECT TOP 1 [Text]
FROM Words
WHERE ID = i.ID AND [Language] IN ('spanish', 'english')
ORDER BY (CASE [Language] WHEN 'spanish' THEN 1
ELSE 2
END)
) w([Text])
```
For each ID contained in the `Words` table we perform a `CROSS APPLY` to find the matching `Text` that satisfies the criteria set by the OP.
|
For this type of prioritization, you can use `row_number()`:
```
Select t.*
From (select t.*,
row_number() over (partition by id
order by (case when Language = 'Spanish' then 1
when Language = 'English' then 2
else 3
end)
) as seqnum
from table t
) t
where seqnum = 1;
```
|
SQL & SQL Server - Denormalized table, Distinct and Order By Complex Query
|
[
"",
"sql",
"sql-server",
"select",
"sql-order-by",
"distinct",
""
] |
I'm trying to run this SQL statement but I run into an error that states:
> Msg 8120, Level 16, State 1, Procedure MemberCountryCarLocation, Line
> 5
> Column 'Country.Country\_name' is invalid in the select list because
> it is not contained in either an aggregate function or the GROUP BY
> clause.
Code:
```
create view MemberCountryCarLocation
(Member_ID, Country_Of_Rental, Home_Addr, Business_Name)
as
select
M.member_Id, C.country_name, A.addr1, B.business_addr
from
Member M, Country C, License L, Addr A, Bz_Addr B
where
M.member_id = A.member_id
and A.country_code = C.Country_code
and B.addr_code = A.addr_code
and M.member_id = 'M%'
group by
M.member_id
```
|
If you use `group by`, you can only select columns that appear in the `group by` clause, or aggregate functions like `sum()`.
So you either have to remove from the `select` clause all columns which are not in the `group by`:
```
select M.member_Id
from Member M, Country C, License L, Addr A, Bz_Addr B
where M.member_id = A.member_id
and A.country_code = C.Country_code
and B.addr_code = A.addr_code
  and M.member_id like 'M%'
group by M.member_id
```
Or you have to add all the columns in the `select` clause to the `group by`:
```
select M.member_Id, C.country_name, A.addr1, B.business_addr
from Member M, Country C, License L, Addr A, Bz_Addr B
where M.member_id = A.member_id
and A.country_code = C.Country_code
and B.addr_code = A.addr_code
and M.member_id = 'M%'
group by M.member_id, C.country_name, A.addr1, B.business_addr
```
|
In this case, you need to add to the **group by** every column name you are selecting.
Try this:
```
create view MemberCountryCarLocation
(Member_ID, Country_Of_Rental, Home_Addr, Business_Name)
AS
select M.member_Id, C.country_name, A.addr1, B.business_addr
from Member M, Country C, License L, Addr A, Bz_Addr B
where M.member_id = A.member_id
and A.country_code = C.Country_code
and B.addr_code = A.addr_code
and M.member_id = 'M%'
group by M.member_Id, C.country_name, A.addr1, B.business_addr
```
|
SQL Server error : can't seem to solve it
|
[
"",
"sql",
"sql-server",
""
] |
I have a table named `Students` with some columns.
I am using the following three queries to `select` rows.
```
1. select * from Students
2. select * from Students where 1=1
3. select * from Students where null=null
```
When I execute the first query, it returns all rows.
When I execute the second, it also returns all rows.
But when I execute the third, it does not return any rows.
I have two questions:
> 1. What are `1=1` and `null=null`, and why do we use them?
> 2. What is the difference between the first query and the second?
|
To the questions part about "why" using this:
Since `1=1` always evaluates to `true`, and `true` is the neutral element of the logical `AND` operation, it is often used in dynamically built queries.
The problem with dynamically built queries, where several `AND` conditions may or may not be appended depending on runtime state, is that you have to keep track of whether the condition you are adding is the first one. (i.e. `SELECT * FROM mytable WHERE AND bla = 'blub'` is not valid; we must suppress the `AND` for the first condition added to the query.)
All this can be avoided by changing the query's base to `SELECT * FROM mytable WHERE 1=1`. Now we can append our `AND` conditions dynamically without caring whether each is the first one. (i.e. `SELECT * FROM mytable WHERE 1=1 AND bla = 'blub'` is valid.)
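As a sketch, here is the bookkeeping the `1=1` trick removes, in application code. Python is used for illustration; the table name `mytable`, the columns, and the helper name are all made up:

```python
def build_query(filters):
    """Build a SELECT with optional AND conditions.

    filters maps column name -> value; values are passed
    separately as placeholders, not interpolated into the SQL.
    """
    sql = "SELECT * FROM mytable WHERE 1=1"
    params = []
    for column, value in filters.items():
        # Thanks to the 1=1 base, every condition can start with AND,
        # with no special case for the first one.
        sql += f" AND {column} = ?"
        params.append(value)
    return sql, params

sql, params = build_query({"bla": "blub", "id": 7})
print(sql)  # SELECT * FROM mytable WHERE 1=1 AND bla = ? AND id = ?
```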
|
If you set `ANSI_NULLS` off, then `null = null` returns `true`; if it is on, it returns `false`:
```
set ansi_nulls off
if null = null
print 'true'
else
print 'false'
set ansi_nulls ON
if null = null
print 'true'
else
print 'false'
```
Reference: [Why does NULL = NULL evaluate to false in SQL Server](https://stackoverflow.com/questions/1843451/why-does-null-null-evaluate-to-false-in-sql-server)
@jarlh: when you set `ansi_nulls off`, you tell SQL Server not to follow the ANSI standard when comparing `null` values,
so the following query will return all rows:
```
set ansi_nulls off
select * from cust_data where null=null
```
|
What is 'where 1=1' and 'where null=null' in sql server
|
[
"",
"sql",
"sql-server",
""
] |
How can I retrieve distinct MySQL values while excluding both empty-string values and NULL values?
```
SELECT
DISTINCT CON.EMAILADDRESS AS 'E-MAIL'
FROM STUDENT
INNER JOIN CONTACT CON ON STUDENT.CONTACT_ID = CON.ID
WHERE
(CON.EMAILADDRESS IS NOT NULL
OR CON.EMAILADDRESS !=' ');
```
But the output still includes empty e-mails. I can't figure out what I am doing wrong.
|
Try this:
```
SELECT
DISTINCT CON.EMAILADDRESS AS 'E-MAIL'
FROM STUDENT AST
INNER JOIN CONTACT CON ON AST.CONTACT_ID = CON.ID
WHERE length(trim(CON.EMAILADDRESS)) >0 and CON.EMAILADDRESS IS NOT NULL
```
|
Your only problem is that you use OR instead of AND.
Let's look at the case where the value is NULL:
* `CON.EMAILADDRESS IS NOT NULL` => FALSE
* `CON.EMAILADDRESS != ' '` => NULL
FALSE OR NULL => NULL. As the criterion doesn't evaluate to TRUE, you don't select NULLs.
And if the value is an empty string '', ' ', or whatever length:
* `CON.EMAILADDRESS IS NOT NULL` => TRUE
* `CON.EMAILADDRESS != ' '` => FALSE
TRUE OR FALSE => TRUE. You select the empty string.
I suppose this is what confused you: in spite of having mistakenly used OR instead of AND you still removed some empty strings, but not all.
So:
```
WHERE CON.EMAILADDRESS IS NOT NULL AND CON.EMAILADDRESS != ' ';
```
Or, as any string `!= ''` can't be NULL (`NULL != ''` => NULL, not TRUE), simply:
```
WHERE CON.EMAILADDRESS != '';
```
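You can watch the three-valued logic do this directly. The sketch below uses Python's built-in `sqlite3` only because it is easy to run; SQLite applies the same ANSI rule as MySQL here (`NULL = NULL` yields NULL, and `FALSE OR NULL` stays NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# 1=1 is TRUE (1); NULL=NULL is NULL (None); only IS NULL tests for NULL;
# FALSE OR NULL stays NULL, while TRUE OR NULL collapses to TRUE.
row = conn.execute(
    "SELECT 1=1, NULL=NULL, NULL IS NULL, (0 OR NULL), (1 OR NULL)"
).fetchone()
print(row)  # (1, None, 1, None, 1)
```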
|
mysql distinct values without empty string and NULL
|
[
"",
"mysql",
"sql",
""
] |
My repeating table has duplicate ids but I want summary statistics on unique ids.
```
Detail_id code book tree
----------- ------ ------ ------
1 BR54 COOK PINE
1 BR55 COOK PINE
1 BR51 COOK PINE
2 BR55 COOK MAPL
2 BR60 COOK MAPL
3 BR61 FORD PINE
3 BR54 FORD PINE
3 BR55 FORD PINE
```
Here's my query which is also on [SQLFiddle](http://sqlfiddle.com/#!4/889a8/15)
```
select count(case detail_book when 'COOK' THEN 1 else 0 end) as cook_total,
count(case detail_book when 'FORD' THEN 1 else 0 end) as ford_total,
count(case detail_tree when 'PINE' THEN 1 else 0 end) as pine_total,
count(case detail_book when 'MAPL' THEN 1 else 0 end) as mapl_total
from detail_records;
```
Desired results:
```
COOK_TOTAL FORD_TOTAL PINE_TOTAL MAPL_TOTAL
---------- ---------- ---------- ----------
2 1 2 1
```
|
I don't think you need analytic functions here:
```
SELECT COUNT(DISTINCT CASE WHEN detail_book = 'COOK' THEN detail_id END) AS cook_total
, COUNT(DISTINCT CASE WHEN detail_book = 'FORD' THEN detail_id END) AS ford_total
, COUNT(DISTINCT CASE WHEN detail_tree = 'PINE' THEN detail_id END) AS pine_total
, COUNT(DISTINCT CASE WHEN detail_tree = 'MAPL' THEN detail_id END) AS mapl_total
FROM detail_records;
```
The `CASE` statement returns NULL when the values don't match; those aren't counted.
[Updated SQL Fiddle here.](http://sqlfiddle.com/#!4/889a8/46) By the way, in your query you were trying to match `detail_book` to `MAPL` when I think you wanted to match `detail_tree`, and my query reflects that.
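For a self-contained sanity check of the `COUNT(DISTINCT CASE ...)` pattern, here is a sketch with Python's built-in `sqlite3`, loading the sample data from the question (the full column names are assumed from the query, not shown in the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE detail_records (detail_id INT, detail_code TEXT,
                             detail_book TEXT, detail_tree TEXT);
INSERT INTO detail_records VALUES
  (1,'BR54','COOK','PINE'), (1,'BR55','COOK','PINE'), (1,'BR51','COOK','PINE'),
  (2,'BR55','COOK','MAPL'), (2,'BR60','COOK','MAPL'),
  (3,'BR61','FORD','PINE'), (3,'BR54','FORD','PINE'), (3,'BR55','FORD','PINE');
""")

# The CASE yields detail_id on a match and NULL otherwise;
# COUNT(DISTINCT ...) ignores NULLs and collapses duplicate ids.
totals = conn.execute("""
SELECT COUNT(DISTINCT CASE WHEN detail_book = 'COOK' THEN detail_id END),
       COUNT(DISTINCT CASE WHEN detail_book = 'FORD' THEN detail_id END),
       COUNT(DISTINCT CASE WHEN detail_tree = 'PINE' THEN detail_id END),
       COUNT(DISTINCT CASE WHEN detail_tree = 'MAPL' THEN detail_id END)
FROM detail_records
""").fetchone()
print(totals)  # (2, 1, 2, 1)
```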
|
You could use analytic functions and an inline view to avoid the duplicate counting issue:
```
select sum(case when detail_book = 'COOK' and book_cntr = 1 then 1 else 0 end) as cook_total,
sum(case when detail_book = 'FORD' and book_cntr = 1 then 1 else 0 end) as ford_total,
sum(case when detail_tree = 'PINE' and tree_cntr = 1 then 1 else 0 end) as pine_total,
sum(case when detail_tree = 'MAPL' and tree_cntr = 1 then 1 else 0 end) as mapl_total
from (select d.*,
row_number() over(partition by detail_book, detail_id order by detail_book, detail_id) as book_cntr,
row_number() over(partition by detail_tree, detail_id order by detail_tree, detail_id) as tree_cntr
from detail_records d) v
```
**Fiddle:** <http://sqlfiddle.com/#!4/889a8/31/0>
|
How to get summary totals on unique IDs using analytics?
|
[
"",
"sql",
"oracle",
"distinct",
"analytics",
""
] |
I have 2 tables.
```
Table 1:
ID Name Age PhoneNumber
12 Joe 25 873827382
23 Bob 28 928398233
34 Jane 23 237828883
Table 2:
ID Agent QuantitySold
12 A1 100
23 B1 300
12 C1 600
34 A2 400
34 B1 800
23 B2 900
```
I want to show all the details of the employees who have never sold a quantity of 800.
```
SELECT a.ID, a.Name, a.Age, a.PhoneNumber
FROM table1 a LEFT JOIN table2 b
ON a.ID= b.ID AND b.quantity <> 800
```
I want a result set that doesn't have ID 34 in it, but I can't seem to achieve that. Any help?
|
This is what finally worked.
```
SELECT a.ID
, a.Name
, a.Age
, a.PhoneNumber
FROM table1 a
WHERE a.ID NOT IN (SELECT ID FROM table2 WHERE QuantitySold = 800);
```
|
You can use `NOT EXISTS` without any of the `JOIN`s:
```
SELECT a.ID
, a.Name
, a.Age
, a.PhoneNumber
FROM table1 a
WHERE NOT EXISTS (SELECT * FROM table2 WHERE ID = a.ID AND QuantitySold = 800)
```
By the way, the column name is `QuantitySold`, not `quantity`.
[**SQLFiddle**](http://sqlfiddle.com/#!2/452fa/1)
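A runnable sketch of the `NOT EXISTS` approach, using Python's built-in `sqlite3` and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (ID INT, Name TEXT, Age INT, PhoneNumber TEXT);
CREATE TABLE table2 (ID INT, Agent TEXT, QuantitySold INT);
INSERT INTO table1 VALUES
  (12,'Joe',25,'873827382'), (23,'Bob',28,'928398233'), (34,'Jane',23,'237828883');
INSERT INTO table2 VALUES
  (12,'A1',100), (23,'B1',300), (12,'C1',600),
  (34,'A2',400), (34,'B1',800), (23,'B2',900);
""")

# Keep only employees with no sale of exactly 800.
names = [r[0] for r in conn.execute("""
SELECT a.Name
FROM table1 a
WHERE NOT EXISTS (SELECT * FROM table2 WHERE ID = a.ID AND QuantitySold = 800)
ORDER BY a.Name
""")]
print(names)  # ['Bob', 'Joe']  -- Jane (ID 34) once sold 800, so she is excluded
```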
|
Left Join query not working
|
[
"",
"mysql",
"sql",
"join",
""
] |
In my project, I have two tables like this:
```
parameters (
id PRIMARY KEY,
name
)
```
and
```
parameters_offeritems (
id_offeritem,
id_parameter,
value,
PRIMARY KEY (id_offeritem, id_parameter)
)
```
*I'm not showing the structure of the `offeritems` table, because it's not necessary.*
Some sample data:
```
INSERT INTO parameters (id, name) VALUES
(1, 'first parameter'), (2, 'second parameter'), (3, 'third parameter')
INSERT INTO parameters_offeritems (id_offeritem, id_parameter, value) VALUES
(123, 1, 'something'), (123, 2, 'something else'), (321, 2, 'anything')
```
Now my question is: how do I fetch, for a given offer ID, the list of **all** existing parameters, and, where some parameters are set for that offer, also fetch their values, all in one query?
So far, I made query like this:
```
SELECT p.*, p_o.value FROM parameters p LEFT JOIN parameters_offeritems p_o
ON p.id = p_o.id_parameter WHERE id_offeritem = OFFER_ID OR id_offeritem IS NULL
```
But it fetches only those parameters for which there are no records at all in the `parameters_offeritems` table, **or** parameters whose value is set for the current offer.
|
To get all parameters, plus the value of any parameter set for a specific offer item, you need to move the Offer ID logic into the join condition, like this:
```
SELECT p.*, p_o.value
FROM parameters p
LEFT JOIN parameters_offeritems p_o
ON p.id = p_o.id_parameter
AND id_offeritem = OFFER_ID;
```
If you have logic in your `WHERE` clause referring to fields in a table you are doing a `LEFT JOIN` on, you effectively change your `JOIN` to an `INNER JOIN` (unless you are checking for a NULL).
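The ON-versus-WHERE difference is easy to demonstrate. Below is a sketch using Python's built-in `sqlite3` with the sample data from the question, where the literal `123` stands in for `OFFER_ID`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parameters (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE parameters_offeritems (
  id_offeritem INT, id_parameter INT, value TEXT,
  PRIMARY KEY (id_offeritem, id_parameter));
INSERT INTO parameters VALUES
  (1, 'first parameter'), (2, 'second parameter'), (3, 'third parameter');
INSERT INTO parameters_offeritems VALUES
  (123, 1, 'something'), (123, 2, 'something else'), (321, 2, 'anything');
""")

# Filter inside ON: every parameter survives; unmatched ones carry NULL.
on_clause = conn.execute("""
SELECT p.id, p_o.value
FROM parameters p
LEFT JOIN parameters_offeritems p_o
       ON p.id = p_o.id_parameter AND p_o.id_offeritem = 123
ORDER BY p.id
""").fetchall()

# Filter in WHERE: the NULL-extended rows are discarded, so the
# LEFT JOIN effectively degrades to an INNER JOIN.
where_clause = conn.execute("""
SELECT p.id, p_o.value
FROM parameters p
LEFT JOIN parameters_offeritems p_o
       ON p.id = p_o.id_parameter
WHERE p_o.id_offeritem = 123
ORDER BY p.id
""").fetchall()

print(on_clause)     # [(1, 'something'), (2, 'something else'), (3, None)]
print(where_clause)  # [(1, 'something'), (2, 'something else')]
```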
|
The magic word you're looking for is OUTER JOIN. Jeff Atwood did a nice Venn-diagram explanation [here](http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/).
|
LEFT JOIN - fetching all data from left table with no matches in the right one
|
[
"",
"mysql",
"sql",
"left-join",
""
] |
I have a table like this:
```
name | salary
Tom | 10000
Mary | 20000
Jack | 30000
Lisa | 40000
Jake | 60000
```
I need an update query to update the salary column depending on the values it contains.
Salaries need to increase by:
* 5000 for values between 10000 to 15000
* 7000 for values between 15000 to 20000
* 8000 for values between 20000 to 30000
* 10000 for values between 40000 to 60000
|
Try using the [CASE](https://msdn.microsoft.com/en-us/library/ms181765.aspx) statement within the [UPDATE](https://msdn.microsoft.com/en-us/library/ms177523.aspx) command
```
UPDATE
[yourtablename]
SET
salary =
CASE
WHEN salary BETWEEN 10000 AND 15000 THEN salary + 5000
WHEN salary BETWEEN 15000 AND 20000 THEN salary + 7000
WHEN salary BETWEEN 20000 AND 30000 THEN salary + 8000
WHEN salary BETWEEN 40000 AND 60000 THEN salary + 10000
ELSE salary
END
```
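One detail worth verifying: CASE evaluates its WHEN branches in order, so a boundary salary such as 15000 or 20000 falls into the earlier branch. Here is a sketch with Python's built-in `sqlite3`; the table name `staff` is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staff (name TEXT, salary INT);
INSERT INTO staff VALUES
  ('Tom', 10000), ('Mary', 20000), ('Jack', 30000),
  ('Lisa', 40000), ('Jake', 60000);
""")

# The first matching WHEN wins, so a salary of exactly 20000
# gets +7000 (second branch), not +8000 (third branch).
conn.execute("""
UPDATE staff SET salary = CASE
    WHEN salary BETWEEN 10000 AND 15000 THEN salary + 5000
    WHEN salary BETWEEN 15000 AND 20000 THEN salary + 7000
    WHEN salary BETWEEN 20000 AND 30000 THEN salary + 8000
    WHEN salary BETWEEN 40000 AND 60000 THEN salary + 10000
    ELSE salary
END
""")
rows = conn.execute("SELECT name, salary FROM staff ORDER BY name").fetchall()
print(rows)
```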
|
Something like this:
```
UPDATE YourTable
SET salary = CASE
WHEN salary > 10000 AND salary <= 15000 THEN salary + 5000
WHEN salary > 15000 AND salary <=20000 THEN salary + 7000
.
.
.
END
```
|
Update table column values based on conditional logic
|
[
"",
"sql",
"sql-server",
"postgresql",
"t-sql",
"sql-server-2008-r2",
""
] |