Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I use SQL Server 2008 R2.
I have a table ORDER:
```
ORDER_ID LABEL QUAINTITY IS_CLOSED
1 oooo 5 true
2 pppp 6 true
3 oooo 5 true
4 iiii 9 false
```
Table COMMANDE
```
COMMAND_ID THE_ODER
1 1_3
2 2
```
What I want:
```
ORDER_ID LABEL QUAINTITY THE_ODER
1 oooo 5 1_3
2 pppp 6 2
3 oooo 5 1_3
4 iiii 9
```
How can I join these two tables?
Something like `where ORDER.ORDER_ID in COMMANDE.THE_ODER`.
I know this is not a good architecture and violates normalization, but I have to deal with it. | First, `order` is a bad name for a table. It is a SQL reserved word. `orders` would be better.
You can do this with an arcane join. Basically, look for the `order_id` in the list of `the_oder`:
```
select o.ORDER_ID, o.LABEL, o.QUAINTITY, c.THE_ODER
from "order" o left outer join
commande c
on instr('_'||c.the_oder||'_', '_'||cast(o.order_id as varchar2(255))||'_') > 0
```
The above uses Oracle syntax for the string concatenation and finding a substring. Unfortunately, you cannot use `like` easily because `'_'` is a wildcard character for `like`.
EDIT:
In SQL Server, you would do:
```
select o.ORDER_ID, o.LABEL, o.QUAINTITY, c.THE_ODER
from "order" o left outer join
commande c
on charindex('_'+cast(o.order_id as varchar(255))+'_', '_'+c.the_oder+'_') > 0
``` | In this example rather than having a delimited set of values to check against you should simply have a link table to store the values in. It should look something like this:
Table COMMANDE
```
COMMAND_ID ODRER_ID
1 1
1 3
2 2
```
From here you can join to this table to retrieve the information you need. | SQL join special like in | [
"",
"sql",
"select",
"join",
"sql-server-2008-r2",
""
] |
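The delimited-list join in the accepted answer above can be sketched with Python's `sqlite3`, where SQLite's `instr` stands in for SQL Server's `charindex` (table names are simplified; `orders` is used instead of the reserved word `order`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, label TEXT);
CREATE TABLE commande (command_id INTEGER, the_oder TEXT);
INSERT INTO orders VALUES (1,'oooo'), (2,'pppp'), (3,'oooo'), (4,'iiii');
INSERT INTO commande VALUES (1,'1_3'), (2,'2');
""")

# Wrap both sides in '_' so that id 1 matches '1_3' but not '13'.
rows = conn.execute("""
    SELECT o.order_id, o.label, c.the_oder
    FROM orders o
    LEFT OUTER JOIN commande c
      ON instr('_' || c.the_oder || '_', '_' || o.order_id || '_') > 0
    ORDER BY o.order_id
""").fetchall()
print(rows)  # order 4 has no match, so its the_oder is None
```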
I currently have data organized in 2 tables as such:
**Meetings**
meet\_id
meet\_category
**Orders**
order\_id
meet\_id
order\_date
I need to write a single query that returns the total number of meetings, the number of meetings with a category of "long" and the number of meetings with a category of "short".
Count only the meetings that have at least one order\_date after March 1, 2011.
The output should be in 3 fields and 1 row
So far, what I have is:
```
SELECT COUNT(m.meet_id),
COUNT(SELECT m.meet_id WHERE m.meet_category = 'long'),
COUNT(SELECT m.meet_id WHERE m.meet_category = 'short')
FROM Meetings m
INNER JOIN Orders o
ON m.meet_id = o.meet_id
WHERE o.order_date >= '2011-03-01';
```
That is what first comes to mind, but this query doesn't work and I am not even sure if my approach is the correct one. All help appreciated! | Try this:
```
SELECT COUNT(m.meet_id),
SUM(CASE WHEN m.meet_category = 'long' THEN 1 ELSE 0 END),
SUM(CASE WHEN m.meet_category = 'short' THEN 1 ELSE 0 END)
FROM Meetings m where meet_id in
(select meet_id
FROM Orders o
WHERE o.order_date >= '2011-03-01');
``` | try this
```
select count(1) as total_count,innerqry.*
(
select count(m.meet_id), meet_catagory
Meetings m
INNER JOIN Orders o
ON m.meet_id = o.meet_id
and m.meet_id in
(select meet_id from orders where order_date>= '2011-03-01')
group by meet_category
) innerqry
```
if you want only one row and you have few known meeting types try this
```
SELECT COUNT(m.meet_id),
SUM(CASE WHEN m.meet_category = 'long' THEN 1 ELSE 0 END),
SUM(CASE WHEN m.meet_category = 'short' THEN 1 ELSE 0 END)
FROM Meetings m
INNER JOIN Orders o
ON m.meet_id = o.meet_id
WHERE m.meet_id in
(select meet_id from orders where order_date>'2011-03-01')
``` | SQL multiple where and fields | [
"",
"mysql",
"sql",
""
] |
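The `SUM(CASE WHEN …)` conditional-aggregation pattern from the accepted answer above can be checked in SQLite with illustrative data (not the poster's real tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Meetings (meet_id INTEGER, meet_category TEXT);
CREATE TABLE Orders (order_id INTEGER, meet_id INTEGER, order_date TEXT);
INSERT INTO Meetings VALUES (1,'long'), (2,'short'), (3,'long');
INSERT INTO Orders VALUES (10,1,'2011-03-05'), (11,2,'2011-04-01'), (12,3,'2011-01-01');
""")

# One row, three counts: meeting 3 is excluded (no order after March 1, 2011).
row = conn.execute("""
    SELECT COUNT(m.meet_id),
           SUM(CASE WHEN m.meet_category = 'long'  THEN 1 ELSE 0 END),
           SUM(CASE WHEN m.meet_category = 'short' THEN 1 ELSE 0 END)
    FROM Meetings m
    WHERE m.meet_id IN (SELECT meet_id FROM Orders WHERE order_date >= '2011-03-01')
""").fetchone()
print(row)  # (total, long, short)
```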
I've got a query that displays the second result for one customer.
What I now need to do is show the second result for each customer in a particular list (for example, 20 different customers).
How would I do this?
MS SQL2000 via SSMS 2005
The current query for one customer is:
```
SELECT TOP 1 link_to_client, call_ref
FROM
(
SELECT TOP 2 link_to_client, call_ref
FROM calls WITH (NOLOCK)
WHERE link_to_client IN ('G/1931')
AND call_type = 'PM'
ORDER BY call_ref DESC
) x
ORDER BY call_ref
```
Thanks! | You need to use the [row\_number()](http://technet.microsoft.com/ru-RU/library/ms186734.aspx) function; try something like this:
```
select
link_to_client, call_ref
from
(
select
link_to_client, call_ref,
row_number() over (partition by link_to_client order by call_ref desc) n
from
calls with (nolock)
where
link_to_client in ('G/1931')
and call_type = 'PM'
) x
where
n = 2 -- second result for every client
``` | Try this one -
```
SELECT
link_to_client
, call_ref
FROM (
SELECT
link_to_client
, call_ref
, rn = ROW_NUMBER() OVER (PARTITION BY link_to_client ORDER BY call_ref DESC)
FROM dbo.calls WITH (NOLOCK)
WHERE link_to_client = 'G/1931'
AND call_type = 'PM'
) x
WHERE x.rn = 2
``` | show second result for multiple records - MSSQL2005 | [
"",
"sql",
"sql-server",
"sql-server-2005",
"sql-server-2000",
""
] |
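The `ROW_NUMBER() OVER (PARTITION BY … ORDER BY … DESC)` trick both answers above rely on can be run in SQLite 3.25+ (bundled with recent Python builds); sample client IDs are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls (link_to_client TEXT, call_ref INTEGER, call_type TEXT);
INSERT INTO calls VALUES
 ('G/1931', 100, 'PM'), ('G/1931', 101, 'PM'), ('G/1931', 102, 'PM'),
 ('G/2000', 200, 'PM'), ('G/2000', 201, 'PM');
""")

# rn = 2 picks the second-newest call_ref per client in a single pass.
rows = conn.execute("""
    SELECT link_to_client, call_ref
    FROM (SELECT link_to_client, call_ref,
                 ROW_NUMBER() OVER (PARTITION BY link_to_client
                                    ORDER BY call_ref DESC) AS rn
          FROM calls
          WHERE call_type = 'PM') x
    WHERE rn = 2
    ORDER BY link_to_client
""").fetchall()
print(rows)  # second-newest call per client
```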
Ok the title is quite confusing so let me explain with an example.
My table :
```
id ref valid
---------------
1 PRO true
1 OTH true
2 PRO true
2 OTH false
3 PRO true
4 OTH true
```
The primary key here is the combination of id and ref.
I want to select all ids having both a valid "PRO" ref AND another valid ref, meaning in this case it would return only "1".
I don't understand how to do this; IN and SELF JOIN don't seem suited for it. | Here's one way, using `EXISTS`:
```
SELECT id
FROM Table1 a
WHERE a.ref = 'PRO'
AND a.valid = 'true'
AND EXISTS
(
SELECT 1
FROM Table1 b
WHERE b.id = a.id
AND b.ref <> 'PRO'
AND b.valid = 'true'
)
``` | Using `join` :
```
SELECT t1.id
FROM Table1 t1 join Table1 t2 on
t1.id = t2.id and t1.ref = 'PRO' and t2.ref <> 'PRO'
and t1.valid= 'true' and t2.valid= 'true'
``` | select value AND not select value from one column, and merge results | [
"",
"sql",
""
] |
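The `EXISTS` approach from the accepted answer above, run end-to-end in SQLite on the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, ref TEXT, valid TEXT, PRIMARY KEY (id, ref));
INSERT INTO t VALUES (1,'PRO','true'), (1,'OTH','true'),
                     (2,'PRO','true'), (2,'OTH','false'),
                     (3,'PRO','true'), (4,'OTH','true');
""")

# An id qualifies only if its PRO row is valid AND some other valid ref exists.
ids = [r[0] for r in conn.execute("""
    SELECT a.id FROM t a
    WHERE a.ref = 'PRO' AND a.valid = 'true'
      AND EXISTS (SELECT 1 FROM t b
                  WHERE b.id = a.id AND b.ref <> 'PRO' AND b.valid = 'true')
""")]
print(ids)  # only id 1 qualifies
```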
I have the following select, which, on a large database, is slow:
```
SELECT eventid
FROM track_event
WHERE inboundid IN (SELECT messageid FROM temp_message);
```
The temp\_message table is small (100 rows) and only one column (messageid varchar), with a btree index on the column.
The track\_event table has 19 columns and nearly 13 million rows. The columns used in this query (eventid bigint and inboundid varchar) both have btree indexes.
I can't copy/paste the explain plan from the big database, but here's the plan from a smaller database (only 348 rows in track\_event) with the same schema:
```
explain analyse SELECT eventid FROM track_event WHERE inboundid IN (SELECT messageid FROM temp_message);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
Nested Loop Semi Join (cost=0.00..60.78 rows=348 width=8) (actual time=0.033..3.186 rows=348 loops=1)
-> Seq Scan on track_event (cost=0.00..8.48 rows=348 width=25) (actual time=0.012..0.860 rows=348 loops=1)
-> Index Scan using temp_message_idx on temp_message (cost=0.00..0.48 rows=7 width=32) (actual time=0.005..0.005 rows=1 loops=348)
Index Cond: ((temp_message.messageid)::text = (track_event.inboundid)::text)
Total runtime: 3.349 ms
(5 rows)
```
On the large database, this query takes about 450 seconds. Can anyone see any obvious speed-ups? I notice there's a Seq Scan on track\_event in the explain plan - I think I'd like to lose that, but cannot work out which index I could use instead.
EDITS
Postgres 9.0
The track\_event table is part of a very large complicated schema which I can't make significant changes to. Here's the information, including a new index I just added :
```
Table "public.track_event"
Column | Type | Modifiers
--------------------+--------------------------+-----------
eventid | bigint | not null
messageid | character varying | not null
inboundid | character varying | not null
newid | character varying |
parenteventid | bigint |
pmmuser | bigint |
eventdate | timestamp with time zone | not null
routeid | integer |
eventtypeid | integer | not null
adminid | integer |
hostid | integer |
reason | character varying |
expiry | integer |
encryptionendpoint | character varying |
encryptionerror | character varying |
encryptiontype | character varying |
tlsused | integer |
tlsrequested | integer |
encryptionportal | integer |
Indexes:
"track_event_pk" PRIMARY KEY, btree (eventid)
"foo" btree (inboundid, eventid)
"px_event_inboundid" btree (inboundid)
"track_event_idx" btree (messageid, eventtypeid)
Foreign-key constraints:
"track_event_parent_fk" FOREIGN KEY (parenteventid) REFERENCES track_event(eventid)
"track_event_pmi_route_fk" FOREIGN KEY (routeid) REFERENCES pmi_route(routeid)
"track_event_pmim_smtpaddress_fk" FOREIGN KEY (pmmuser) REFERENCES pmim_smtpaddress(smtpaddressid)
"track_event_track_adminuser_fk" FOREIGN KEY (adminid) REFERENCES track_adminuser(adminid)
"track_event_track_encryptionportal_fk" FOREIGN KEY (encryptionportal) REFERENCES track_encryptionportal(id)
"track_event_track_eventtype_fk" FOREIGN KEY (eventtypeid) REFERENCES track_eventtype(eventtypeid)
"track_event_track_host_fk" FOREIGN KEY (hostid) REFERENCES track_host(hostid)
"track_event_track_message_fk" FOREIGN KEY (inboundid) REFERENCES track_message(messageid)
Referenced by:
TABLE "track_event" CONSTRAINT "track_event_parent_fk" FOREIGN KEY (parenteventid) REFERENCES track_event(eventid)
TABLE "track_eventaddress" CONSTRAINT "track_eventaddress_track_event_fk" FOREIGN KEY (eventid) REFERENCES track_event(eventid)
TABLE "track_eventattachment" CONSTRAINT "track_eventattachment_track_event_fk" FOREIGN KEY (eventid) REFERENCES track_event(eventid)
TABLE "track_eventrule" CONSTRAINT "track_eventrule_track_event_fk" FOREIGN KEY (eventid) REFERENCES track_event(eventid)
TABLE "track_eventthreatdescription" CONSTRAINT "track_eventthreatdescription_track_event_fk" FOREIGN KEY (eventid) REFERENCES track_event(eventid)
TABLE "track_eventthreattype" CONSTRAINT "track_eventthreattype_track_event_fk" FOREIGN KEY (eventid) REFERENCES track_event(eventid)
TABLE "track_quarantineevent" CONSTRAINT "track_quarantineevent_track_event_fk" FOREIGN KEY (eventid) REFERENCES track_event(eventid)
Your query is doing a full table scan on the larger table. An obvious speed-up is to add an index on `track_event(inboundid, eventid)`. Postgres should be able to use the index on your query as written. You can rewrite the query as:
```
SELECT te.eventid
FROM track_event te join
temp_message tm
on te.inboundid = tm.messageid;
```
which should definitely use the index. (You might need `select distinct te.eventid` if there are duplicates in the `temp_message` table.)
EDIT:
The last attempted rewrite is to invert the query:
```
select (select eventid from track_event te WHERE tm.messageid = te.inboundid) as eventid
from temp_message tm;
```
This should force the use of the index. If there are non-matches, you might want:
```
select eventid
from (select (select eventid from track_event te WHERE tm.messageid = te.inboundid) as eventid
from temp_message tm
) tm
where eventid is not null;
``` | **It Depends on the number of records in database (Specific Tables).** If there is small database and nature of database is static
and very rare record increase then **join** usage is better then **IN** and **where** Because after join thay behave as a table and in small tables join takes micro seconds . Because Where and **IN** have specific time of execution thay remain better in large database if database is large then thay get quick result if database is small then in case of using In Statment **Query** takes more time
**For Small Database**
```
SELECT t1.column_name,t1.column_name,t2.column_name,t2.column_name FROM tbl1 t1
INNER JOIN tbl2 t2
ON tbl1.column_name=tbl2.column_name;
```
**For Large Database**
```
SELECT column_name,column_name FROM tbl1 t1 WHERE tbl1.column_name IN (SELECT column_name FROM tbl2 t2 where t2.column_name = t1.column_name);
``` | Slow select - PostgreSQL | [
"",
"sql",
"postgresql",
""
] |
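The `IN`-to-join rewrite discussed above can at least be checked for result equivalence on a small SQLite dataset (SQLite's planner differs from PostgreSQL's, so this demonstrates only that the two forms return the same rows, not their relative speed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE track_event (eventid INTEGER, inboundid TEXT);
CREATE TABLE temp_message (messageid TEXT);
CREATE INDEX idx_te ON track_event (inboundid, eventid);
INSERT INTO track_event VALUES (1,'a'), (2,'b'), (3,'a'), (4,'c');
INSERT INTO temp_message VALUES ('a'), ('b');
""")

in_rows = sorted(conn.execute(
    "SELECT eventid FROM track_event "
    "WHERE inboundid IN (SELECT messageid FROM temp_message)"))
join_rows = sorted(conn.execute(
    "SELECT te.eventid FROM track_event te "
    "JOIN temp_message tm ON te.inboundid = tm.messageid"))
print(in_rows, join_rows)  # both forms match events 1, 2, 3
```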
I have a table as
```
NUM | TDATE
1 | 200712
2 | 200708
3 | 200704
4 | 20081210
```
where `mytable` is created as
```
mytable
(
num int,
tdate char(8) -- legacy
);
```
The format of tdate is YYYYMMDD; sometimes the day part is optional.
So a date such as "200712" can be interpreted as 2007-12-01.
I want to write a query such that I can treat `tdate` as a date column and apply date comparisons, like:
```
select num, tdate from mytable where tdate
between '2007-12-31 00:00:00' and '2007-05-01 00:00:00'
```
So far i tried this
```
select num, tdate,
CAST(LEFT(tdate,6)
+ COALESCE(NULLIF(SUBSTRING(CAST(tdate AS VARCHAR(8)),7,8),''),'01') AS Date)
from mytable
```
[SQL Fiddle](http://sqlfiddle.com/#!3/e2e7f/25)
How can I use the above converted date (3rd column) for comparison? (Does it need a join?)
Also, is there a better way to do this?
Edit: I have no control over the table schema for now; we have suggested the change to the DB team. For now I have to stick with char(8). | I think this is a better way to get your fixed date:
```
SELECT CAST(LEFT(RTRIM(tdate) + '01',8) AS DATE)
```
You can create a subquery/cte with the date cast properly:
```
;WITH cte AS (select num, tdate,CAST(LEFT(RTRIM(tdate)+ '01',8) AS DATE)'FixedDate'
from mytable )
select num, FixedDate
from cte
where FixedDate
between '2007-05-01' and '2007-12-31'
```
Or you can just use your fixed date in the query directly:
```
select num, tdate
from mytable
where CAST(LEFT(RTRIM(tdate)+ '01',8) AS DATE) between '2007-05-01' and '2007-12-31'
```
Ideally you would add the fixed date field to your table so that queries can benefit from indexing the date.
Note: Be wary of `BETWEEN` with `DATETIME` as the time portion can result in undesired results if you really only care about the `DATE` portion. | `'2007-12-31 00:00:00'` > `'2007-05-01 00:00:00'`, so your `BETWEEN` clause will never return any records.
This will work, with a subquery, and with the dates flipped:
```
select num, tdate, formattedDate
from
(
select num, tdate
,
CAST(LEFT(tdate,6) + COALESCE(NULLIF(SUBSTRING(CAST(tdate AS VARCHAR(8)),7,8),''),'01') AS Date) as formattedDate
from mytable
) a
where formattedDate between '2007-05-01 00:00:00' and '2007-12-31 00:00:00'
```
[sqlFiddle here](http://sqlfiddle.com/#!3/e2e7f/46) | Cast string as date and use it in comparison | [
"",
"sql",
"sql-server",
""
] |
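The pad-then-truncate trick from the accepted answer above (`LEFT(RTRIM(tdate) + '01', 8)`) also works on plain string comparison, as this SQLite sketch with the question's data shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (num INTEGER, tdate TEXT);  -- char(8) legacy, day optional
INSERT INTO mytable VALUES (1,'200712'), (2,'200708'), (3,'200704'), (4,'20081210');
""")

# Append '01' then keep the first 8 chars: '200712' -> '20071201',
# while a full '20081210' is left unchanged.
rows = conn.execute("""
    SELECT num, substr(rtrim(tdate) || '01', 1, 8) AS fixed
    FROM mytable
    WHERE substr(rtrim(tdate) || '01', 1, 8)
          BETWEEN '20070501' AND '20071231'
    ORDER BY num
""").fetchall()
print(rows)  # rows 1 and 2 fall inside May-Dec 2007
```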
I am trying to find the count of days, listed by person, where they have over 100 records in the recordings table. It is having a problem with the having clause, but I am not sure how else to distinguish the counts by person. There is also a problem with the where clause; I tried "where Count(Recordings.ID) > 100" and that did not work either. Here is what I have so far:
```
SELECT Person.FirstName,
Person.LastName,
Count(Recordings.ID) AS DAYS_ABOVE_100
FROM Recordings
JOIN Person ON Recordings.PersonID=Person.ID
WHERE DAYS_ABOVE_100 > 100
AND Created BETWEEN '2013-08-01 00:00:00.000' AND '2013-08-21 00:00:00.000'
GROUP BY Person.FirstName,
Person.LastName
HAVING Count(DISTINCT PersonID), Count(Distinct Datepart(day, created))
ORDER BY DAYS_ABOVE_100 DESC
```
Example data of what I want to get:
```
First Last Days_Above_100
John Doe 5
Jim Smith 12
```
This means that for 5 of the days in the given time frame, John Doe had over 100 records each day. | For the sake a readability, I would break the problem into two parts.
First, figure out how many recordings each person has for a day. This is the query in the common table expression (the first select statement). Then select against the common table expression to limit the rows to only those that you need.
```
with cteRecordingsByDate as
(
SELECT Person.FirstName,
Person.LastName,
cast(created as date) as Whole_date,
Count(Recordings.ID) AS Recording_COUNT
FROM Recordings
JOIN Person ON Recordings.PersonID=Person.ID
WHERE Created BETWEEN '2013-08-01 00:00:00.000' AND '2013-08-21 00:00:00.000'
GROUP BY Person.FirstName, Person.LastName, cast(created as date)
)
select FirstName, LastName, count(*) as Days_Above_100
from cteRecordingsByDate
where Recording_COUNT > 100
group by FirstName, LastName
order by count(*) desc
``` | You can count what you want using a subquery. The inner query counts the number of records per day. The outer subquery then counts the number of days that exceed 100 (and adds in the person information as well):
```
SELECT p.FirstName, p.LastName,
count(*) as DaysOver100
FROM (select PersonId, cast(Created as Date) as thedate, count(*) as NumRecordings
from Recordings r
where Created BETWEEN '2013-08-01 00:00:00.000' AND '2013-08-21 00:00:00.000'
) r join
Person p
ON r.PersonID = p.ID
WHERE r.NumRecordings > 100
GROUP BY p.FirstName, p.LastName;
```
This uses SQL Server syntax for the conversion from `datetime` to `date`. In other databases you might use `trunc(created)` or `date(created)` to extract the date from a datetime. | SQL Find Count of Records by Day and By User | [
"",
"sql",
"count",
"where-clause",
"having-clause",
"datepart",
""
] |
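The two-level aggregation in the accepted answer above (count per person per day, then count qualifying days) can be run in SQLite; the threshold is lowered to 2 here just to keep the sample data small:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (ID INTEGER, FirstName TEXT, LastName TEXT);
CREATE TABLE Recordings (ID INTEGER PRIMARY KEY, PersonID INTEGER, Created TEXT);
INSERT INTO Person VALUES (1,'John','Doe'), (2,'Jim','Smith');
""")
# John: 3 recordings on each of two days; Jim: only 1 recording total.
data = [(1, '2013-08-02')] * 3 + [(1, '2013-08-03')] * 3 + [(2, '2013-08-02')]
conn.executemany("INSERT INTO Recordings (PersonID, Created) VALUES (?, ?)", data)

result = conn.execute("""
    WITH per_day AS (
        SELECT PersonID, date(Created) AS d, COUNT(*) AS n
        FROM Recordings
        WHERE Created BETWEEN '2013-08-01' AND '2013-08-21'
        GROUP BY PersonID, date(Created)
    )
    SELECT p.FirstName, p.LastName, COUNT(*) AS days_above
    FROM per_day r JOIN Person p ON p.ID = r.PersonID
    WHERE r.n > 2
    GROUP BY p.FirstName, p.LastName
""").fetchall()
print(result)  # only John clears the per-day threshold, on 2 days
```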
So this has been driving me crazy. I'm using the following query
```
SELECT *
FROM
CensusFacility_Records
WHERE
Division_Program ='Division 1'
ORDER by JMS_UpdateDateTime DESC
```
I'm trying to get the latest record. Keep in mind that there are multiple rows with "Division 1" within the Division\_Program field. I just need to get the latest record from today that contains 'Division 1'.
The JMS\_UpdateDateTime field is populated with a timestamp using the month, day, year and time format (e.g. 8/23/2013 8:00:05 AM).
How can I get the latest record from today?
I'm updating my question: I'm trying to write to the latest record in a table.
When I look at the table, the latest record is not updated.
```
<%
divrec = request.QueryString("div")
Set rstest = Server.CreateObject("ADODB.Recordset")
rstest.locktype = adLockOptimistic
sql = "SELECT TOP 1 * FROM CensusFacility_Records WHERE Division_Program ='Division 1' ORDER BY JMS_UpdateDateTime DESC"
rstest.Open sql, db
%>
<%
Shipment_Current = request.form("Shipment_Current")
Closed_Bed_Current = request.form("Closed_Bed_Current")
Available_Current = request.form("Available_Current")
rstest.fields("Shipment") = Shipment_Current
rstest.fields("Closed_Bed") = Closed_Bed_Current
rstest.fields("Current") = Available_Current
rstest.update
Response.Redirect("chooseScreen.asp")
%>
``` | From what I understand, the `TOP` keyword is what you are looking for. Would you mind specifying what version of SQL Server you are querying against? This solution only works if your data is stored in a valid date format (like a timestamp); if your data is stored in a text format, you only need to convert it before sorting on it.
```
SELECT TOP 1 *
FROM
CensusFacility_Records
WHERE
Division_Program ='Division 1'
ORDER BY
JMS_UpdateDateTime DESC
```
If you have any questions feel free to comment! :-)
---
**EDIT**
OK, now it's an ASP question! I've never done classic ASP, but after a few minutes with a tutorial I would recommend this approach:
1. Grab the primary key of the object you got from your SQL select
2. Construct the following update query
3. Execute the update query
The query:
```
UPDATE CensusFacility_Records SET
Shipment = @Shipment_Current,
Closed_Bed = @Closed_Bed_Current,
[Current] = @AvailableCurrent
WHERE CensusFacility_Records.ID = @ID
```
with an ASP script that will probably resemble this:
```
' Note: concatenating form input into SQL is injection-prone;
' prefer an ADODB.Command with parameters where possible.
sql = "UPDATE CensusFacility_Records SET "
sql = sql & "Shipment = '" & Request.Form("Shipment_Current") & "', "
sql = sql & "Closed_Bed = '" & Request.Form("Closed_Bed_Current") & "', "
sql = sql & "[Current] = '" & Request.Form("AvailableCurrent") & "' "
sql = sql & "WHERE CensusFacility_Records.ID = " & ID
conn.Execute sql
conn.Close
``` | you can use the same query you doing but add Select Top 1
```
"SELECT TOP 1 * FROM CensusFacility_Records WHERE Division_Program ='Division 1' ORDER by JMS_UpdateDateTime desc "
``` | How do you get the latest record from a row in SQL within ASP | [
"",
"sql",
"asp-classic",
""
] |
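The two-step pattern in the accepted answer above (fetch the newest row's key, then update that row) is sketched here with Python's `sqlite3` using parameter placeholders instead of string concatenation; names mirror the question, and the classic-ASP equivalent would use an `ADODB.Command` with parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CensusFacility_Records (
    ID INTEGER PRIMARY KEY, Division_Program TEXT,
    JMS_UpdateDateTime TEXT, Shipment TEXT);
INSERT INTO CensusFacility_Records (Division_Program, JMS_UpdateDateTime, Shipment)
VALUES ('Division 1', '2013-08-22 09:00:00', 'old'),
       ('Division 1', '2013-08-23 08:00:05', 'old');
""")

# Step 1: grab the primary key of the newest 'Division 1' row.
(latest_id,) = conn.execute("""
    SELECT ID FROM CensusFacility_Records
    WHERE Division_Program = ?
    ORDER BY JMS_UpdateDateTime DESC LIMIT 1
""", ("Division 1",)).fetchone()

# Step 2: update it via placeholders rather than concatenated strings.
conn.execute("UPDATE CensusFacility_Records SET Shipment = ? WHERE ID = ?",
             ("new shipment", latest_id))
shipment = conn.execute(
    "SELECT Shipment FROM CensusFacility_Records WHERE ID = ?",
    (latest_id,)).fetchone()[0]
print(latest_id, shipment)
```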
I have a table with product values as below:
1. apple iphone
2. iphone apple
3. samsung phone
4. phone samsung
I want to delete the products from the table that are exact reverses (I consider them duplicates), so that instead of 4 records my table has just these 2:
1. apple iphone
2. samsung phone
I understand that there is a REVERSE function in SQL Server, but it reverses the whole string, which is not what I'm looking for.
I'd greatly appreciate any suggestions/ideas. | Assuming that your dictionary does not include any XML entities (e.g. `>` or `<`), and that it is not practical to manually create a bunch of `UPDATE` statements for every combination of words in your table (if it is practical, then simplify your life, stop reading this answer, and use [Justin's answer](https://stackoverflow.com/a/18409066/61305)), you can create a function like this:
```
CREATE FUNCTION dbo.SplitSafeStrings
(
@List NVARCHAR(MAX),
@Delimiter NVARCHAR(255)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
( SELECT Item = LTRIM(RTRIM(y.i.value('(./text())[1]', 'nvarchar(4000)')))
FROM ( SELECT x = CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>') + '</i>').query('.')
) AS a CROSS APPLY x.nodes('i') AS y(i));
GO
```
(If XML is a problem, [there are other, more complex alternatives](http://www.sqlperformance.com/2012/07/t-sql-queries/split-strings), such as CLR.)
Then you can do this:
```
DECLARE @x TABLE(id INT IDENTITY(1,1), s VARCHAR(64));
INSERT @x(s) VALUES
('apple iphone'),
('iphone Apple'),
('iphone samsung hoochie blat'),
('samsung hoochie blat iphone');
;WITH cte1 AS
(
SELECT id, Item FROM @x AS x
CROSS APPLY dbo.SplitSafeStrings(LOWER(x.s), ' ') AS y
),
cte2(id,words) AS
(
SELECT DISTINCT id, STUFF((SELECT ',' + orig.Item
FROM cte1 AS orig
WHERE orig.id = cte1.id
ORDER BY orig.Item
FOR XML PATH(''), TYPE).value('.[1]','nvarchar(max)'),1,1,'')
FROM cte1
),
cte3 AS
(
SELECT id, words, rn = ROW_NUMBER() OVER (PARTITION BY words ORDER BY id)
FROM cte2
)
SELECT id, words, rn FROM cte3
-- WHERE rn = 1 -- rows to keep
-- WHERE rn > 1 -- rows to delete
;
```
So you could, after the three CTEs, instead of the final `SELECT` above, say:
```
DELETE t FROM @x AS t
INNER JOIN cte3 ON cte3.id = t.id
WHERE cte3.rn > 1;
```
And what should be left in `@x`?
```
SELECT id, s FROM @x;
```
Results:
```
id s
-- ---------------------------
1 apple iphone
3 iphone samsung hoochie blat
``` | It seems to me that you are complicating this too much, a simple update statement would work:
```
UPDATE table SET productname = 'apple iphone' WHERE productname = 'iphone apple'
``` | Reverse strings in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
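Outside the database, the accepted answer's core idea — split each name into words, sort them, and compare the canonical forms — is a short Python sketch, useful for sanity-checking which rows the SQL above should delete:

```python
# Canonicalize each product name by lower-casing and sorting its words;
# two names that are reorderings of each other collapse to the same key.
products = ["apple iphone", "iphone apple", "samsung phone", "phone samsung"]

seen, keep = set(), []
for p in products:
    key = tuple(sorted(p.lower().split()))
    if key not in seen:          # first occurrence of each word set wins
        seen.add(key)
        keep.append(p)
print(keep)
```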
I have table like this
```
Reg_No Student_Name Subject1 Subject2 Subject3 Subject4 Total
----------- -------------------- ----------- ----------- ----------- ----------- -----------
101 Kevin 85 94 78 90 347
102 Andy 75 88 91 78 332
```
From this I need to create a temp table or table like this:
```
Reg_No Student_Name Subject Total
----------- -------------------- ----------- -----------
101 Kevin 85 347
94
78
90
102 Andy 75 332
88
91
78
```
Is there a way I can do this in `SQL Server`? | Check [this Fiddle](http://sqlfiddle.com/#!6/a6c86/12)
```
;WITH MyCTE AS
(
SELECT *
FROM (
SELECT Reg_No,
[Subject1],
[Subject2],
[Subject3],
[Subject4]
FROM Table1
)p
UNPIVOT
(
Result FOR SubjectName in ([Subject1], [Subject2], [Subject3], [Subject4])
)unpvt
)
SELECT T.Reg_No,
T.Student_Name,
M.SubjectName,
M.Result,
T.Total
FROM Table1 T
JOIN MyCTE M
ON T.Reg_No = M.Reg_No
```
If you do want NULL values in the rest, you may try the following:
[This is the new Fiddle](http://sqlfiddle.com/#!6/a6c86/23)
And here is the code:
```
;WITH MyCTE AS
(
SELECT *
FROM (
SELECT Reg_No,
[Subject1],
[Subject2],
[Subject3],
[Subject4]
FROM Table1
)p
UNPIVOT
(
Result FOR SubjectName in ([Subject1], [Subject2], [Subject3], [Subject4])
)unpvt
),
MyNumberedCTE AS
(
SELECT *,
ROW_NUMBER() OVER(PARTITION BY Reg_No ORDER BY Reg_No,SubjectName) AS RowNum
FROM MyCTE
)
SELECT T.Reg_No,
T.Student_Name,
M.SubjectName,
M.Result,
T.Total
FROM MyCTE M
LEFT JOIN MyNumberedCTE N
ON N.Reg_No = M.Reg_No
AND N.SubjectName = M.SubjectName
AND N.RowNum=1
LEFT JOIN Table1 T
ON T.Reg_No = N.Reg_No
``` | **DDL:**
```
DECLARE @temp TABLE
(
Reg_No INT
, Student_Name VARCHAR(20)
, Subject1 INT
, Subject2 INT
, Subject3 INT
, Subject4 INT
, Total INT
)
INSERT INTO @temp (Reg_No, Student_Name, Subject1, Subject2, Subject3, Subject4, Total)
VALUES
(101, 'Kevin', 85, 94, 78, 90, 347),
(102, 'Andy ', 75, 88, 91, 78, 332)
```
**Query #1 - ROW\_NUMBER:**
```
SELECT Reg_No = CASE WHEN rn = 1 THEN t.Reg_No END
, Student_Name = CASE WHEN rn = 1 THEN t.Student_Name END
, t.[Subject]
, Total = CASE WHEN rn = 1 THEN t.Total END
FROM (
SELECT
Reg_No
, Student_Name
, [Subject]
, Total
, rn = ROW_NUMBER() OVER (PARTITION BY Reg_No ORDER BY 1/0)
FROM @temp
UNPIVOT
(
[Subject] FOR tt IN (Subject1, Subject2, Subject3, Subject4)
) unpvt
) t
```
**Query #2 - OUTER APPLY:**
```
SELECT t.*
FROM @temp
OUTER APPLY
(
VALUES
(Reg_No, Student_Name, Subject1, Total),
(NULL, NULL, Subject2, NULL),
(NULL, NULL, Subject3, NULL),
(NULL, NULL, Subject4, NULL)
) t(Reg_No, Student_Name, [Subject], Total)
```
**Output:**
```
Reg_No Student_Name Subject Total
----------- -------------------- ----------- -----------
101 Kevin 85 347
NULL NULL 94 NULL
NULL NULL 78 NULL
NULL NULL 90 NULL
102 Andy 75 332
NULL NULL 88 NULL
NULL NULL 91 NULL
NULL NULL 78 NULL
```
**PS:** In your case query with `OUTER APPLY` is faster than `ROW_NUMBER` solution. | sql server single row multiple columns into one column | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
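SQLite has no `UNPIVOT`, but the same wide-to-tall reshaping can be expressed with `UNION ALL`, which is also a useful mental model for what the `UNPIVOT` in the answers above produces (data follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE marks (Reg_No INTEGER, Student_Name TEXT,
                    Subject1 INTEGER, Subject2 INTEGER,
                    Subject3 INTEGER, Subject4 INTEGER, Total INTEGER);
INSERT INTO marks VALUES (101,'Kevin',85,94,78,90,347), (102,'Andy',75,88,91,78,332);
""")

# Each SELECT contributes one subject column as a (name, value) row.
rows = conn.execute("""
    SELECT Reg_No, 'Subject1' AS SubjectName, Subject1 AS Result FROM marks
    UNION ALL SELECT Reg_No, 'Subject2', Subject2 FROM marks
    UNION ALL SELECT Reg_No, 'Subject3', Subject3 FROM marks
    UNION ALL SELECT Reg_No, 'Subject4', Subject4 FROM marks
    ORDER BY Reg_No, SubjectName
""").fetchall()
print(len(rows), rows[0])  # 2 students x 4 subjects = 8 rows
```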
Is there source or library somewhere that would help me generate DDL on the fly?
I have a few hundred remote databases that I need to copy to the local server. Upgrade procedure in the field is to create a new database. Locally, I do the same.
So, instead of generating the DDL for all the different DB versions in the field, I'd like to read the DDL from the source tables and create an identical table locally.
Is there such a lib or source? | Actually, you will discover that your can do this yourself and you will learn something in the process. I use this on several databases I maintain. I create a view that makes it easy to look use DDL style info.
```
create view vw_help as
select
Table_Name as TableName
, Column_Name as ColName
, Ordinal_Position as ColNum
, Data_Type as DataType
, Character_Maximum_Length as MaxChars
, coalesce(Datetime_Precision, Numeric_Precision) as [Precision]
, Numeric_Scale as Scale
, Is_Nullable as Nullable
, case when (Data_Type in ('varchar', 'nvarchar', 'char', 'nchar', 'binary', 'varbinary')) then
case when (Character_Maximum_Length = -1) then Data_Type + '(max)'
else Data_Type + '(' + convert(varchar(6),Character_Maximum_Length) + ')'
end
when (Data_Type in ('decimal', 'numeric')) then
Data_Type + '(' + convert(varchar(4), Numeric_Precision) + ',' + convert(varchar(4), Numeric_Scale) + ')'
when (Data_Type in ('bit', 'money', 'smallmoney', 'int', 'smallint', 'tinyint', 'bigint', 'date', 'time', 'datetime', 'smalldatetime', 'datetime2', 'datetimeoffset', 'datetime2', 'float', 'real', 'text', 'ntext', 'image', 'timestamp', 'uniqueidentifier', 'xml')) then Data_Type
else 'unknown type'
end as DeclA
, case when (Is_Nullable = 'YES') then 'null' else 'not null' end as DeclB
, Collation_Name as Coll
-- ,*
from Information_Schema.Columns
GO
```
And I use the following to "show the table structure"
```
/*
exec ad_Help TableName, 1
*/
ALTER proc [dbo].[ad_Help] (@TableName nvarchar(128), @ByOrdinal int = 0) as
begin
set nocount on
declare @result table
(
TableName nvarchar(128)
, ColName nvarchar(128)
, ColNum int
, DataType nvarchar(128)
, MaxChars int
, [Precision] int
, Scale int
, Nullable varchar(3)
, DeclA varchar(max)
, DeclB varchar(max)
, Coll varchar(128)
)
insert @result
select TableName, ColName, ColNum, DataType, MaxChars, [Precision], Scale, Nullable, DeclA, DeclB, Coll
from dbo.vw_help
where TableName like @TableName
if (select count(*) from @result) <= 0
begin
select 'No tables matching ''' + @TableName + '''' as Error
return
end
if (@ByOrdinal > 0)
begin
select * from @result order by TableName, ColNum
end else begin
select * from @result order by TableName, ColName
end
end
GO
```
You can use other info in InformationSchemas if you also need to generate Foreign keys, etc. It is a bit complex and I never bothered to flesh out everything necessary to generate the DDL, but you should get the right idea. Of course, I would not bother with rolling your own if you can use what has already been suggested.
Added comment: I did not give you an exact answer, but glad to help. You will need lots of dynamic string manipulation to make this work; varchar(max) helps. I will point out that T-SQL is not the language of choice for this kind of project. Personally, if I had to generate full table DDLs, I might be tempted to write this as a CLR proc and do the heavy string manipulation in C#. If this makes sense to you, I would still debug the process outside of SQL Server (e.g., a form project for testing). Just remember that CLR procs target the .NET 2.0 framework.
You can absolutely make a stored proc that returns a set of results, i.e., one for the table columns, one for the foreign keys, etc., then consume those results in C# and build the DDL statements there. | Gary Walker, based on your scripts, I created exactly what I needed. Thank you very much for your help.
Here it is, if anyone else needs it:
```
with ColumnDef (TableName, ColName, ColNum, DeclA, DeclB)
as
(
select
Table_Name as TableName
, Column_Name as ColName
, Ordinal_Position as ColNum
, case when (Data_Type in ('varchar', 'nvarchar', 'char', 'nchar', 'binary', 'varbinary')) then
case when (Character_Maximum_Length = -1) then Data_Type + '(max)'
else Data_Type + '(' + convert(varchar(6),Character_Maximum_Length) + ')'
end
when (Data_Type in ('decimal', 'numeric')) then
Data_Type + '(' + convert(varchar(4), Numeric_Precision) + ',' + convert(varchar(4), Numeric_Scale) + ')'
when (Data_Type in ('bit', 'money', 'smallmoney', 'int', 'smallint', 'tinyint', 'bigint', 'date', 'time', 'datetime', 'smalldatetime', 'datetime2', 'datetimeoffset', 'datetime2', 'float', 'real', 'text', 'ntext', 'image', 'timestamp', 'uniqueidentifier', 'xml')) then Data_Type
else 'unknown type'
end as DeclA
, case when (Is_Nullable = 'YES') then 'null' else 'not null' end as DeclB
from Information_Schema.Columns
)
select 'CREATE TABLE ' + TableName + ' (' +
substring((select ', ' + ColName + ' ' + declA + ' ' + declB
from ColumnDef
where tablename = t.TableName
order by ColNum
for xml path ('')),2,8000) + ') '
from
(select distinct TableName from ColumnDef) t
``` | SSIS: generate create table DDL programmatically | [
"",
"sql",
"sql-server",
"t-sql",
"ssis",
""
] |
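The same catalog-driven approach works in SQLite via `PRAGMA table_info` instead of `Information_Schema.Columns`; a minimal sketch that rebuilds a `CREATE TABLE` statement from the catalog (column constraints and keys are omitted, as in the answers above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (id INTEGER NOT NULL, name TEXT, price REAL)")

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk) per column.
cols = conn.execute("PRAGMA table_info(sample)").fetchall()
decls = ["%s %s %s" % (name, ctype, "not null" if notnull else "null")
         for _, name, ctype, notnull, _, _ in cols]
ddl = "CREATE TABLE sample (" + ", ".join(decls) + ")"
print(ddl)
```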
I have an `ID` that I create in both the ASP.NET VB app and SQL Server that has a format of `MM/YYYY/##`, where the #'s are an integer. The integer is incremented by 1 throughout the month as users generate forms, so currently it is `08/2013/39`.
The code I use for this is as follows
```
Dim get_end_rfa As String = get_RFA_number()
Dim pos As Integer = get_end_rfa.Trim().LastIndexOf("/") + 1
Dim rfa_number = get_end_rfa.Substring(pos)
Convert.ToInt32(rfa_number)
Dim change_rfa As Integer = rfa_number + 1
Dim rfa_date As String = Format(Now, "MM/yyyy")
Dim rfa As String = rfa_date + "/" + Convert.ToString(change_rfa)
RFA_number_box.Text = rfa
Public Function get_RFA_number() As String
Dim conn As New SqlConnection(ConfigurationManager.ConnectionStrings("AnalyticalNewConnectionString").ConnectionString)
conn.Open()
Dim cmd As New SqlCommand("select TOP 1 RFA_Number from New_Analysis_Data order by submitted_date desc", conn)
Dim RFA As String = (cmd.ExecuteScalar())
conn.Close()
Return RFA
End Function
```
I need to reset the integer to 1 at the beginning of each month. How do I go about this? | Select the max(record) from your table, and compare the currentDate with the record's Date. When they are different. reset the ID = 1 .
The SQL like this
```
select ID, recordDate
from TABLE
where ID = (select max(ID) from TABLE)
and datename(YYYY, getdate()) = datename(YYYY, recordDate)
and datename(MM, getdate()) = datename(MM, recordDate)
``` | Using a combination of the answer above and some further research, I am using the code below. This resolves my query.
```
declare @mydate as date
set @mydate = CONVERT(char(10), GetDate(),103)
if convert(char(10), (DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)), 126) = CONVERT(char(10), GetDate(),126)
select right(convert(varchar, @mydate, 103), 7) + '/01'
else
select TOP 1 RFA_Number
from New_Analysis_Data
order by submitted_date desc
``` | How to reset a custom ID to 1 at the beginning of each month | [
"",
"sql",
"sql-server",
"vb.net",
""
] |
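The month-reset logic discussed above can also be sketched outside the database. This is only an illustration (the function name and the in-app approach are assumptions, not the original code): keep the `MM/YYYY` prefix with the stored value and restart the counter at 1 whenever the prefix no longer matches the current month.

```python
from datetime import date

def next_rfa_number(last_rfa, today):
    """Build the next MM/YYYY/## id, resetting the counter each month."""
    prefix = today.strftime("%m/%Y")
    if last_rfa is None or not last_rfa.startswith(prefix):
        counter = 1  # new month (or empty table): restart the sequence
    else:
        counter = int(last_rfa.rsplit("/", 1)[1]) + 1
    return "%s/%d" % (prefix, counter)
```

Note this has the same race-condition caveat as the original query: two concurrent inserts can read the same last value, so the generation should happen inside a transaction or be backed by a unique constraint.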
In my PostgreSQL database I have a unique index created this way:
```
CREATE UNIQUE INDEX <my_index> ON <my_table> USING btree (my_column)
```
Is there a way to alter the index to remove the unique constraint? I looked at [ALTER INDEX documentation](http://www.postgresql.org/docs/8.2/static/sql-alterindex.html) but it doesn't seem to do what I need.
I know I can remove the index and create another one, but I'd like to find a better way, if it exists. | You may be able to remove the unique `CONSTRAINT`, and not the `INDEX` itself.
Check your `CONSTRAINTS` via `select * from information_schema.table_constraints;`
Then if you find one, you should be able to drop it like:
`ALTER TABLE <my_table> DROP CONSTRAINT <constraint_name>`
Edit: a related issue is described in [this question](https://stackoverflow.com/questions/12922500/trouble-dropping-unique-constraint) | Assume you have the following:
```
Indexes:
"feature_pkey" PRIMARY KEY, btree (id, f_id)
"feature_unique" UNIQUE, btree (feature, f_class)
"feature_constraint" UNIQUE CONSTRAINT, btree (feature, f_class)
```
To drop the UNIQUE CONSTRAINT, you would use [ALTER TABLE](https://www.postgresql.org/docs/9.6/static/sql-altertable.html):
```
ALTER TABLE feature DROP CONSTRAINT feature_constraint;
```
To drop the PRIMARY KEY, you would also use [ALTER TABLE](https://www.postgresql.org/docs/9.6/static/sql-altertable.html):
```
ALTER TABLE feature DROP CONSTRAINT feature_pkey;
```
To drop the UNIQUE [index], you would use [DROP INDEX](https://www.postgresql.org/docs/9.6/static/sql-dropindex.html):
```
DROP INDEX feature_unique;
``` | Remove uniqueness of index in PostgreSQL | [
"",
"sql",
"database",
"postgresql",
""
] |
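The distinction the accepted answer draws (a unique constraint vs. a standalone unique index) can be demonstrated with a quick sketch; SQLite is used here purely for illustration and the names are made up — in PostgreSQL you would run `DROP INDEX my_index;` instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (my_column TEXT)")
conn.execute("CREATE UNIQUE INDEX my_index ON my_table (my_column)")
conn.execute("INSERT INTO my_table VALUES ('a')")

try:
    conn.execute("INSERT INTO my_table VALUES ('a')")  # blocked by the unique index
    duplicate_blocked = False
except sqlite3.IntegrityError:
    duplicate_blocked = True

conn.execute("DROP INDEX my_index")                # uniqueness is gone with the index
conn.execute("INSERT INTO my_table VALUES ('a')")  # now accepted
row_count = conn.execute("SELECT COUNT(*) FROM my_table").fetchone()[0]
```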
I'm building the following query:
```
SELECT taskid
FROM tasks, (SELECT @var_sum := 0) a
WHERE (@var_sum := @var_sum + taskid) < 10
```
Result:
```
taskid
1
2
3
```
Right now it's returning all rows whose running sum is < 10. I want to include one extra row in my result — the first row that crosses the limit (10 can be anything; it's just an example value).
So, desired result:
```
taskid
1
2
3
4
``` | I would do it like this:
```
SELECT t.taskid
FROM tasks t
CROSS
JOIN (SELECT @var_sum := 0) a
WHERE IF(@var_sum<10, @var_sum := @var_sum+t.taskid, @var_sum := NULL) IS NOT NULL
```
This checks whether the current value of `@var_sum` is less than 10 (or less than, or less than or equal to, whatever threshold you need). If it is, we add the taskid value to `@var_sum`, and the row is included in the resultset. But when the current value of `@var_sum` doesn't meet the condition, we assign NULL to it, and the row does not get included.
**NOTE** The order of rows returned from the `tasks` table is not guaranteed. MySQL can return the rows in any order; it may look deterministic in your testing, but that's likely because MySQL is choosing the same access plan against the same data. | If you want the first value that is ">= 10":
```
SELECT taskid
FROM tasks, (SELECT @var_sum := 0) a
WHERE (@var_sum < 10) or (@var_sum < 10 and @var_sum := @var_sum + taskid) >= 10);
```
Although this will (probably) work in practice, I don't think it is guaranteed to work. MySQL does not specify the order of evaluation for `where` clauses.
EDIT:
This should work:
```
select taskid
from (SELECT taskid, (@var_sum := @var_sum + taskid) as varsum
FROM tasks t cross join
(SELECT @var_sum := 0) const
) t
WHERE (varsum < 10) or (varsum - taskid < 10 and varsum >= 10);
``` | One extra row in a sum column query | [
"",
"mysql",
"sql",
""
] |
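The intended semantics — keep rows while the running sum stays under the limit, plus the first row that crosses it — can be stated independently of the MySQL variable trick. A plain-Python sketch of the same rule (10 is the example limit from the question):

```python
def rows_until_limit(task_ids, limit):
    """Keep each row whose running sum *before* adding it is still below limit."""
    kept, running = [], 0
    for task_id in task_ids:
        if running >= limit:
            break
        kept.append(task_id)
        running += task_id  # the crossing row is kept, then iteration stops
    return kept
```

With task ids 1..5 and limit 10 this returns `[1, 2, 3, 4]`, matching the desired result. The same caveat as the accepted answer applies: without an explicit ORDER BY, SQL row order is not guaranteed.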
After I `ORDER BY cnt DESC` my results are
```
fld1 cnt
A 9
E 8
D 6
C 2
B 2
F 1
```
I need to have top 3 displayed and the rest to be summed as 'other', like this:
```
fld1 cnt
A 9
E 8
D 6
other 5
```
EDITED:
Thank you all for your input. Maybe it will help if you see the actual statement:
```
SELECT
CAST(u.FA AS VARCHAR(300)) AS FA,
COUNT(*) AS Total,
COUNT(CASE WHEN r.RT IN (1,11,12,17) THEN r.RT END) AS Jr,
COUNT(CASE WHEN r.RT IN (3,4,13) THEN r.RT END) AS Bk,
COUNT(CASE WHEN r.RT NOT IN (1,11,12,17,3,4,13) THEN r.RT END ) AS Other
FROM R r
INNER JOIN DB..RTL rt
ON r.RT = rt.RTID
INNER JOIN U u
ON r.UID = u.UID
WHERE rt.LC = 'en'
GROUP BY CAST(u.FA AS VARCHAR(300))--FA is ntext
ORDER BY Total DESC
```
The produced result has 19 records. I need to show the top 5 and sum up the rest as "Other FA". I don't want to do a select from a select from a select with this kind of statement. I am looking more for some SQL function. Maybe ROW\_NUMBER is a good idea, but I don't know exactly how to apply it in this case.
```
select (case when seqnum <= 3 then fld1 else 'Other' end) as fld1,
sum(cnt) as cnt
from (select t.*, row_number() over (partition by fld1 order by cnt desc) as seqnum
from t
) t
group by (case when seqnum <= 3 then fld1 else 'Other' end);
```
You can actually do this as part of your original aggregation as well:
```
select (case when seqnum <= 3 then fld1 else 'Other' end) as fld1,
sum(cnt) as cnt
from (select fld1, sum(...) as cnt,
row_number() over (partition by fld1 order by sum(...) desc) as seqnum
from t
group by fld1
) t
group by (case when seqnum <= 3 then fld1 else 'Other' end);
```
EDIT (based on revised question):
```
select (case when seqnum <= 3 then FA else 'Other' end) as FA,
sum(Total) as Total
from (SELECT CAST(u.FA AS VARCHAR(300)) AS FA,
COUNT(*) AS Total,
ROW_NUMBER() over (PARTITION BY CAST(u.FA AS VARCHAR(300)) order by COUNT(*) desc
) as seqnum
FROM R r
INNER JOIN DB..RTL rt
ON r.RT = rt.RTID
INNER JOIN U u
ON r.UID = u.UID
WHERE rt.LC = 'en'
GROUP BY CAST(u.FA AS VARCHAR(300))--FA is ntext
) t
group by (case when seqnum <= 3 then FA else 'Other' end)
order by max(seqnum) desc;
```
The final `order by` keeps the records in ascending order by total. | Could be something like this:
```
select top 3 fld1, cnt from mytable
union
select 'Z - Other', sum(cnt) from mytable
where fld1 not in (select top 3 fld1 from mytable order by fld1)
order by fld1
```
(Updated to include order by) | Display top three values and the sum of all other values | [
"",
"sql",
"sql-server",
""
] |
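The same "top N plus Other" reshaping can be checked on any SQLite build using `LIMIT`-based subqueries instead of `ROW_NUMBER()`; this is only an illustrative sketch of the grouping idea, using the sample data from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (fld1 TEXT, cnt INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("A", 9), ("E", 8), ("D", 6), ("C", 2), ("B", 2), ("F", 1)])

top3 = conn.execute(
    "SELECT fld1, cnt FROM t ORDER BY cnt DESC LIMIT 3").fetchall()
other = conn.execute("""
    SELECT COALESCE(SUM(cnt), 0) FROM t
    WHERE fld1 NOT IN (SELECT fld1 FROM t ORDER BY cnt DESC LIMIT 3)
""").fetchone()[0]
result = top3 + [("other", other)]
```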
```
SELECT IFNULL(NULL, 'Replaces the NULL')
--> Replaces the NULL
SELECT COALESCE(NULL, NULL, 'Replaces the NULL')
--> Replaces the NULL
```
The main difference between the two is argument passing: `IFNULL` takes two parameters and `COALESCE` takes multiple parameters. Except for that, is there any other difference between these two?
And how it differs in MS SQL? | The main difference between the two is that `IFNULL` function takes two arguments and returns the first one if it's not `NULL` or the second if the first one is `NULL`.
`COALESCE` function can take two or more parameters and returns the first non-NULL parameter, or `NULL` if all parameters are null, for example:
```
SELECT IFNULL('some value', 'some other value');
-> returns 'some value'
SELECT IFNULL(NULL,'some other value');
-> returns 'some other value'
SELECT COALESCE(NULL, 'some other value');
-> returns 'some other value' - equivalent of the IFNULL function
SELECT COALESCE(NULL, 'some value', 'some other value');
-> returns 'some value'
SELECT COALESCE(NULL, NULL, NULL, NULL, 'first non-null value');
-> returns 'first non-null value'
```
**UPDATE:** MSSQL does stricter type and parameter checking. Further, it doesn't have `IFNULL` function but instead `ISNULL` function, which needs to know the types of the arguments. Therefore:
```
SELECT ISNULL(NULL, NULL);
-> results in an error
SELECT ISNULL(NULL, CAST(NULL as VARCHAR));
-> returns NULL
```
Also `COALESCE` function in MSSQL requires at least one parameter to be non-null, therefore:
```
SELECT COALESCE(NULL, NULL, NULL, NULL, NULL);
-> results in an error
SELECT COALESCE(NULL, NULL, NULL, NULL, 'first non-null value');
-> returns 'first non-null value'
``` | ### Pros of `COALESCE`
* **`COALESCE` is SQL-standard function**.
While `IFNULL` is MySQL-specific and its equivalent in MSSQL (`ISNULL`) is MSSQL-specific.
* **`COALESCE` can work with two or more arguments** (in fact, it can work with a single argument, but is pretty useless in this case: `COALESCE(a)`≡`a`).
While MySQL's `IFNULL` and MSSQL's `ISNULL` are limited versions of `COALESCE` that can work with two arguments only.
### Cons of `COALESCE`
* Per [Transact SQL documentation](//learn.microsoft.com/en-us/sql/t-sql/language-elements/coalesce-transact-sql), `COALESCE` is just syntactic sugar for `CASE` and **can evaluate its arguments more than once**. In more detail: `COALESCE(a1, a2, …, aN)`≡`CASE WHEN (a1 IS NOT NULL) THEN a1 WHEN (a2 IS NOT NULL) THEN a2 ELSE aN END`. This greatly reduces the usefulness of `COALESCE` in MSSQL.
On the other hand, `ISNULL` in MSSQL is a normal function and never evaluates its arguments more than once. `COALESCE` in MySQL and PostgreSQL neither evaluates its arguments more than once.
* At this point of time, I don't know how exactly SQL-standards define `COALESCE`.
As we see from previous point, actual implementations in RDBMS vary: some (e.g. MSSQL) make `COALESCE` to evaluate its arguments more than once, some (e.g. MySQL, PostgreSQL) — don't.
c-treeACE, which [claims it's `COALESCE` implementation is SQL-92 compatible](//docs.faircom.com/doc/sqlref/#33405.htm), says: "This function is not allowed in a GROUP BY clause. Arguments to this function cannot be query expressions." I don't know whether these restrictions are really within SQL-standard; most actual implementations of `COALESCE` (e.g. MySQL, PostgreSQL) don't have such restrictions. `IFNULL`/`ISNULL`, as normal functions, don't have such restrictions either.
### Resume
Unless you face specific restrictions of `COALESCE` in specific RDBMS, I'd recommend to **always use `COALESCE`** as more standard and more generic.
The exceptions are:
* Long-calculated expressions or expressions with side effects in MSSQL (as, per documentation, `COALESCE(expr1, …)` may evaluate `expr1` twice).
* Usage within `GROUP BY` or with query expressions in c-treeACE.
* Etc. | What is the difference between IFNULL and COALESCE in MySQL? | [
"",
"mysql",
"sql",
"sql-server",
""
] |
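SQLite implements both functions with the MySQL-style semantics described above, so the argument-count difference is easy to verify directly (this does not cover the stricter MSSQL `ISNULL` type checking):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# IFNULL: exactly two arguments, returns the first non-NULL one
ifnull_result = conn.execute("SELECT IFNULL(NULL, 'fallback')").fetchone()[0]
# COALESCE: any number of arguments, returns the first non-NULL one
coalesce_result = conn.execute(
    "SELECT COALESCE(NULL, NULL, 'first non-null', 'ignored')"
).fetchone()[0]
```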
I am using an procedure to Select First-name and Last-name
```
declare
@firstName nvarchar(50),
@lastName nvarchar(50),
@text nvarchar(MAX)
SELECT @text = 'First Name : ' + @firstName + ' Second Name : ' + @lastName
```
The `@text` value will be sent to my mail, but the first name and last name come on a single line. I just need to show the last name on a second line.
Output: **First Name : Taylor Last Name : Swift**
I need the output like this below format
```
First Name : Taylor
Last Name : Swift
``` | Try to use `CHAR(13)` -
```
DECLARE
@firstName NVARCHAR(50) = '11'
, @lastName NVARCHAR(50) = '22'
, @text NVARCHAR(MAX)
SELECT @text =
'First Name : ' + @firstName +
CHAR(13) + --<--
'Second Name : ' + @lastName
SELECT @text
```
Output -
```
First Name : 11
Second Name : 22
``` | You may use
```
CHAR(13) + CHAR(10)
``` | New line in sql server | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Suppose we have ten rows in each of table `A` and table `B`.
table `A` with a single column:
```
ColA
1
2
3
4
5
6
7
8
9
10
```
and table `B` with a single column:
```
ColB
11
12
13
14
15
16
17
18
19
20
```
Output required:
```
SingleColumn
1
11
2
12
3
13
4
14
5
15
6
16
7
17
8
18
9
19
10
20
```
P.S.: There is no relation between the two tables; both columns are independent. Also, 1, 2 ... 19, 20 are row `id`s; considering the data only, it is in unordered form.
```
SELECT col
FROM
(
SELECT a.*
FROM
(
SELECT cola col, 1 source, ROW_NUMBER() OVER (ORDER BY cola) rnum
FROM tablea
) a
UNION ALL
SELECT b.*
FROM
(
SELECT colb col, 2 source, ROW_NUMBER() OVER (ORDER BY colb) rnum
FROM tableb
) b
) c
ORDER BY rnum, source
```
Output:
```
| COL |
|-----|
| 1 |
| 11 |
| 2 |
| 12 |
| 3 |
| 13 |
| 4 |
| 14 |
| 5 |
| 15 |
| 6 |
| 16 |
| 7 |
| 17 |
| 8 |
| 18 |
| 9 |
| 19 |
| 10 |
| 20 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/70200/15)** demo (SQL Server)
Here is **[SQLFiddle](http://sqlfiddle.com/#!4/be715/2)** demo (Oracle)
In MySql you can do
```
SELECT col
FROM
(
(
SELECT cola col, 1 source, @n := @n + 1 rnum
FROM tablea CROSS JOIN (SELECT @n := 0) i
ORDER BY cola
)
UNION ALL
(
SELECT colb col, 2 source, @m := @m + 1 rnum
FROM tableb CROSS JOIN (SELECT @m := 0) i
ORDER BY colb
)
) c
ORDER BY rnum, source
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/14f5e/7)** demo | ```
SELECT col FROM (
select colA as col
,row_number() over (order by colA) as order1
,1 as order2
from tableA
union all
select colB
,row_number() over (order by colB)
,2
from tableB
) order by order1, order2
``` | How to merge rows of two tables when there is no relation between the tables in sql | [
"",
"mysql",
"sql",
"sql-server",
"oracle",
""
] |
Actually my query is like this :
```
SELECT ABS(20-80) columnA , ABS(10-70) columnB ,
ABS(30-70) columnC , ABS(40-70) columnD , etc..
```
The problem is each ABS() is in fact some complex calculation, and I need to add a last columnTotal which is the SUM of each ABS(); I'd like to do that without recalculating everything. What I'd like to achieve is:
```
SELECT ABS(20-80) columnA , ABS(10-70) columnB ,
ABS(30-70) columnC , ABS(40-70) columnD , SUM(columnA+columnB+columnC+columnD) columnTotal
```
The expected result looks like this:
```
columnA columnB columnC columnD columnTotal
60 60 40 30 190
```
I don't know if it's possible. | Yes, in MySQL you can do it like this:
```
SELECT
@a:=ABS(40-90) AS column1,
@b:=ABS(50-10) AS column2,
@c:=ABS(100-40) AS column3,
@a+@b+@c as columnTotal;
```
```
+---------+---------+---------+-------------+
| column1 | column2 | column3 | columnTotal |
+---------+---------+---------+-------------+
| 50 | 40 | 60 | 150 |
+---------+---------+---------+-------------+
1 row in set (0.00 sec)
``` | you can wrap it in one more layer like this:
```
select columnA, columnnB, columnnC, columnnD, (columnA+ columnnB+ columnnC+ columnnD) total
from
(
SELECT ABS(20-80) columnA , ABS(10-70) columnB ,
ABS(30-70) columnC , ABS(40-70) columnD , etc..
)
``` | Is possible to count alias result on mysql | [
"",
"mysql",
"sql",
"count",
"alias",
""
] |
I'm writing a stored procedure in SQL Server 2005; at a given point I need to execute another stored procedure. This invocation is dynamic, so I've used the sp\_executesql command as usual:
```
DECLARE @DBName varchar(255)
DECLARE @q varchar(max)
DECLARE @tempTable table(myParam1 int, -- other params)
SET @DBName = 'my_db_name'
SET @q = 'insert into @tempTable exec ['+@DBName+'].[dbo].[my_procedure]'
EXEC sp_executesql @q, '@tempTable table OUTPUT', @tempTable OUTPUT
SELECT * FROM @tempTable
```
But I get this error:
> Must declare the scalar variable "@tempTable".
As you can see that variable is declared. I've read the [documentation](http://technet.microsoft.com/it-it/library/ms188001%28v=sql.90%29.aspx) and seems that only parameters allowed are text, ntext and image. How can I have what I need?
PS: I've found many tips for 2008 and later versions, but none for 2005.
```
DECLARE @DBName varchar(255)
DECLARE @q varchar(max)
CREATE table #tempTable(myParam1 int, -- other params)
SET @DBName = 'my_db_name'
SET @q = 'insert into #tempTable exec ['+@DBName+'].[dbo].[my_procedure]'
EXEC(@q)
SELECT * FROM #tempTable
drop table #tempTable
``` | SQL Server 2005 allows to use INSERT INTO EXEC operation (<https://learn.microsoft.com/en-us/sql/t-sql/statements/insert-transact-sql?view=sqlallproducts-allversions>).
You might create a table valued variable and insert result of stored procedure into this table:
```
DECLARE @tempTable table(myParam1 int, myParam2 int);
DECLARE @statement nvarchar(max) = 'SELECT 1,2';
INSERT INTO @tempTable EXEC sp_executesql @statement;
SELECT * FROM @tempTable;
```
Result:
```
myParam1 myParam2
----------- -----------
1 2
```
or you can use any other your own stored procedure:
```
DECLARE @tempTable table(myParam1 int, myParam2 int);
INSERT INTO @tempTable EXEC [dbo].[my_procedure];
SELECT * FROM @tempTable;
``` | sp_executesql and table output | [
"",
"sql",
"sql-server-2005",
"sp-executesql",
""
] |
I have a table (sql 2008) with A,B,C,D,E values in col1
Is there a way to get counts grouped by col1 so that result returned will be
```
A - #
B - #
other - #
```
Thank you | ```
select
case when col1 in ('a','b') then col1 else 'other' end,
count(*)
from tab
group by case when col1 in ('a','b') then col1 else 'other' end
``` | Repeating the `CASE` expression works, but I find it a little less tedious to only perform that expression once. Plans are identical.
```
;WITH x AS
(
SELECT Name = CASE WHEN Name IN ('A','B') THEN Name
ELSE 'Other' END
FROM dbo.YourTable
)
SELECT Name, COUNT(*) FROM x
GROUP BY Name;
```
If ordering is important (e.g. `Other` should be the last row in the result, even if other names come after it alphabetically), then you can say:
```
ORDER BY CASE WHEN Name = 'Other' THEN 1 ELSE 0 END, Name;
``` | group by in sql combining values | [
"",
"sql",
"sql-server-2008",
""
] |
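The `CASE`-inside-`GROUP BY` pattern from the accepted answer behaves the same way in SQLite, which makes it convenient to sanity-check (the sample values here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col1 TEXT)")
conn.executemany("INSERT INTO tab VALUES (?)",
                 [("A",), ("A",), ("B",), ("C",), ("D",), ("E",)])
counts = dict(conn.execute("""
    SELECT CASE WHEN col1 IN ('A', 'B') THEN col1 ELSE 'other' END AS grp,
           COUNT(*)
    FROM tab
    GROUP BY grp
""").fetchall())
```

Note that grouping by the `grp` alias works in SQLite and MySQL; in SQL Server you would repeat the `CASE` expression (or use the CTE form shown above).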
I am working on `SQL Server 2008 R2` and I have the following table
```
ID Name date
1 XYZ 2010
2 ABC 2011
3 VBL 2010
```
Now I want to prevent insertion if the data is already present, even though the ID is different:
```
ID Name date
4 ABC 2011
```
Kindly guide me on how I should write this trigger.
```
CREATE TRIGGER MyTrigger ON dbo.MyTable
AFTER INSERT
AS
if exists ( select * from table t
inner join inserted i on i.name=t.name and i.date=t.date and i.id <> t.id)
begin
rollback
RAISERROR ('Duplicate Data', 16, 1);
end
go
```
That's just for insert, you might want to consider updates too.
**Update**
A simpler way would be to just create a unique constraint on the table, this will also enforce it for updates too and remove the need for a trigger. Just do:
```
ALTER TABLE [dbo].[TableName]
ADD CONSTRAINT [UQ_ID_Name_Date] UNIQUE NONCLUSTERED
(
[Name], [Date]
)
```
and then you'll be in business. | If you are using a stored procedure to insert data into the table, you don't really need a trigger: first check if the combination exists, and if it does, don't insert.
```
CREATE PROCEDURE usp_InsertData
@Name varchar(50),
@Date DateTime
AS
BEGIN
IF (SELECT COUNT(*) FROM tblData WHERE Name = @Name AND Date=@Date) = 0
BEGIN
INSERT INTO tblData
( Name, Date)
VALUES (@Name, @Date)
Print 'Data now added.'
END
ELSE
BEGIN
Print 'Dah! already exists';
END
END
```
The below trigger can used if you are not inserting data via the store procedure.
```
CREATE TRIGGER checkDuplicate ON tblData
AFTER INSERT
AS
IF EXISTS ( SELECT * FROM tblData A
INNER JOIN inserted B ON B.name=A.name and A.Date=B.Date)
BEGIN
RAISERROR ('Dah! already exists', 16, 1);
END
GO
``` | Trigger to prevent Insertion for duplicate data of two columns | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
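The accepted follow-up — let a UNIQUE constraint enforce the rule instead of a trigger — is straightforward to demonstrate; SQLite raises an integrity error on the duplicate just as SQL Server would reject the insert (the table name here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        ID   INTEGER PRIMARY KEY,
        Name TEXT,
        date TEXT,
        UNIQUE (Name, date)   -- same idea as the ADD CONSTRAINT above
    )
""")
conn.execute("INSERT INTO orders VALUES (2, 'ABC', '2011')")
try:
    conn.execute("INSERT INTO orders VALUES (4, 'ABC', '2011')")  # same Name + date
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```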
I am trying to select a set of records from an orders table based on the latest status of that record. The status is kept in another table called orderStatus. My table is more complex, but here's a basic example
**table - orders:**
```
orderID
```
**table - orderStatus:**
```
orderStatusID
orderID
orderStatusCode
dateTime
```
An order can have many status records, I simply want to get the orders that have the latest statusCode of what I'm querying for. Problem is I'm getting a lot of duplicates. Here's a basic example.
```
select orders.orderID
from orders inner join orderStatus on orders.orderID = orderStatus.orderID
where orderStatusCode = 'PENDING'
```
I've tried doing an inner query to select the top 1 from the orderStatus table ordered by dateTime. But I was still seeing the same duplication. Can someone point me in the right direction on how to go about doing this?
edit: SQL server 2008 | A simple `LEFT JOIN` to check that no newer status exists on an order should do it just fine;
```
SELECT o.*
FROM orders o
JOIN orderStatus os
ON o.orderID = os.orderID
LEFT JOIN orderStatus os2
ON o.orderID = os2.orderID
AND os.dateTime < os2.dateTime
WHERE os.orderStatusCode = 'PENDING' AND os2.dateTime IS NULL;
``` | ```
select DISTINCT orders.orderID
from orders inner join orderStatus on orders.orderID = orderStatus.orderID
where orderStatusCode = 'PENDING'
```
As an alternative your can GROUP BY
```
select orders.orderID
from orders inner join orderStatus on orders.orderID = orderStatus.orderID
where orderStatusCode = 'PENDING'
GROUP BY orders.orderID
``` | SQL Select latest records | [
"",
"sql",
"sql-server-2008",
""
] |
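A runnable sketch of the accepted anti-join: an order qualifies when its newest status row is 'PENDING', i.e. no later status row exists for it. Table and column names follow the question; the data is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (orderID INTEGER PRIMARY KEY);
    CREATE TABLE orderStatus (
        orderStatusID INTEGER PRIMARY KEY,
        orderID INTEGER, orderStatusCode TEXT, dateTime TEXT);
    INSERT INTO orders VALUES (1), (2);
    -- order 1: PENDING then SHIPPED; order 2: latest status is PENDING
    INSERT INTO orderStatus VALUES
        (10, 1, 'PENDING', '2013-01-01'),
        (11, 1, 'SHIPPED', '2013-01-02'),
        (12, 2, 'PENDING', '2013-01-03');
""")
pending_orders = [row[0] for row in conn.execute("""
    SELECT o.orderID
    FROM orders o
    JOIN orderStatus os   ON o.orderID = os.orderID
    LEFT JOIN orderStatus os2
           ON o.orderID = os2.orderID AND os.dateTime < os2.dateTime
    WHERE os.orderStatusCode = 'PENDING' AND os2.dateTime IS NULL
""")]
```

Only order 2 comes back: order 1's PENDING row has a newer SHIPPED row, so the LEFT JOIN finds a match and the `IS NULL` test filters it out.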
I have a question on a SQL join which involves multiple conditions in the second joined table. Below are the table details.
## Table 1
pId status keyVal
---- ------- ------
100 1 45
101 1 46
## Table 2
pId mode modeVal
100 2 5
100 3 6
101 2 7
101 3 8
I have the above two tables and I am trying to join based on the below condition to get pId's:
pId's which have keyVal = 45 and status = 1, joined with table2 rows which have (mode = 2 and modeVal = 5) and (mode = 3 and modeVal = 6).
the result I am expecting is to return pid = 100
Can you please help me with a join query ? | One way is to use `GROUP BY` with `HAVING` to count that the number of rows found is 2, of which 2 are matching the condition;
```
WITH cte AS (SELECT DISTINCT * FROM Table2)
SELECT t1."pId"
FROM Table1 t1 JOIN cte t2 ON t1."pId" = t2."pId"
WHERE t1."status" = 1 AND t1."keyVal" = 45
GROUP BY t1."pId"
HAVING SUM(
CASE WHEN t2."mode"=2 AND t2."modeVal"=5 OR t2."mode"=3 AND t2."modeVal"=6
THEN 1 END) = 2 AND COUNT(*)=2
```
If the values in t2 are already distinct, you can just remove the `cte` and select directly from Table2.
[An SQLfiddle to test with](http://sqlfiddle.com/#!4/087ed/2). | ```
SELECT columns
FROM table1 a, table2 B
WHERE a.pid = B.pid
AND a.keyval = 45
AND a.status = 1
AND (
(B.mode = 2 AND B.modeval = 5)
OR
(B.mode = 3 AND B.modeval = 6)
)
``` | SQL Join with multiple row condition in second table | [
"",
"sql",
"join",
""
] |
I have a MySQL database with postcodes in it. Sometimes the postcodes contain spaces (e.g. NG1 1AB), sometimes they don't (e.g. NG11AB). Similarly, in the PHP query that reads from the database, the person searching may add a space or not. I've tried various formats using LIKE but can't seem to find an effective way of searching so that either form brings up the same corresponding row (e.g. searching for either NG11AB or NG1 1AB brings up 'Bob Smith', or whatever the corresponding row field is).
Any suggestions? | I wouldn't even bother with `LIKE` or regex and simply remove spaces and compare the strings:
```
SELECT *
FROM mytable
WHERE LOWER(REPLACE(post_code_field, ' ', '')) = LOWER(REPLACE(?, ' ', ''))
```
Note that I also convert both values to lower case to still match correctly if the user enters a lower-case post code. | ```
SELECT *
FROM MYTABLE
WHERE REPLACE(MYTABLE.POSTCODE, ' ', '') LIKE '%input%'
```
Make sure your PHP input is trimmed as well | MySQL 'LIKE' query with spaces | [
"",
"mysql",
"sql",
""
] |
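The normalise-both-sides comparison from the accepted answer is easy to verify; this SQLite sketch also shows the user input being bound as a parameter, which the PHP side should do as well (names and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, postcode TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Bob Smith", "NG1 1AB"), ("Ann Jones", "LE2 3CD")])

def find_by_postcode(search):
    # strip spaces and lowercase on both sides before comparing
    return [row[0] for row in conn.execute("""
        SELECT name FROM people
        WHERE LOWER(REPLACE(postcode, ' ', '')) = LOWER(REPLACE(?, ' ', ''))
    """, (search,))]
```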
I have experiments, features, and feature\_values. Features have values in different experiments. So I have something like:
```
Experiments:
experiment_id, experiment_name
Features:
feature_id, feature_name
Feature_values:
experiment_id, feature_id, value
```
Lets say, I have three experiments (exp1, exp2, exp3) and three features (feat1, feat2, feat3).
I would like to have a SQL-result that looks like:
```
feature_name | exp1 | exp2 | exp3
-------------+------+------+-----
feat1 | 100 | 150 | 110
feat2 | 200 | 250 | 210
feat3 | 300 | 350 | 310
```
How can I do this?
Furthermore, it might be possible that one feature does not have a value in one experiment.
```
feature_name | exp1 | exp2 | exp3
-------------+------+------+-----
feat1 | 100 | 150 | 110
feat2 | 200 | | 210
feat3 | | 350 | 310
```
The SQL query should perform well. In the future there might be tens of millions of entries in the feature\_values table.
Or is there a better way to handle the data? | I'm supposing here that `feature_id, experiment_id` is unique key for `Feature_values`.
Standard SQL way to do this is to make n joins
```
select
F.feature_name,
FV1.value as exp1,
FV2.value as exp2,
FV3.value as exp3
from Features as F
left outer join Feature_values as FV1 on FV1.feature_id = F.feature_id and FV1.experiment_id = 1
left outer join Feature_values as FV2 on FV2.feature_id = F.feature_id and FV2.experiment_id = 2
left outer join Feature_values as FV3 on FV3.feature_id = F.feature_id and FV3.experiment_id = 3
```
Or pivot data like this (aggregate `max` is not actually aggregating anything):
```
select
F.feature_name,
max(case when E.experiment_name = 'exp1' then FV.value end) as exp1,
max(case when E.experiment_name = 'exp2' then FV.value end) as exp2,
max(case when E.experiment_name = 'exp3' then FV.value end) as exp3
from Features as F
left outer join Feature_values as FV on FV.feature_id = F.feature_id
left outer join Experiments as E on E.experiment_id = FV.experiment_id
group by F.feature_name
order by F.feature_name
```
[**`sql fiddle demo`**](http://sqlfiddle.com/#!12/5a635/1)
you can also consider using [json](http://www.postgresql.org/docs/9.3/static/datatype-json.html) (in 9.3 version) or [hstore](http://www.postgresql.org/docs/current/static/hstore.html) to get all experiment values into one column. | This is a common request. It's called a pivot or crosstab query. PostgreSQL doesn't have any nice built-in syntax for it, but you can use [the `crosstab` function from the `tablefunc` module to do what you want](http://www.postgresql.org/docs/current/static/tablefunc.html).
For more information search Stack Overflow for `[postgresql] [pivot]` or `[postgresql] [crosstab]`.
Some relational database systems offer a nice way to do this with a built-in query, but as yet PostgreSQL does not. | How to flatten a PostgreSQL result | [
"",
"sql",
"postgresql",
"pivot",
"crosstab",
""
] |
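The conditional-aggregation pivot from the accepted answer runs essentially unchanged on SQLite, which makes the row-to-column flip easy to see — including the NULL cell for a feature with no value in one experiment (experiment names are hard-coded, exactly as in the original):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE features (feature_id INTEGER PRIMARY KEY, feature_name TEXT);
    CREATE TABLE experiments (experiment_id INTEGER PRIMARY KEY, experiment_name TEXT);
    CREATE TABLE feature_values (experiment_id INTEGER, feature_id INTEGER, value INTEGER);
    INSERT INTO features VALUES (1, 'feat1'), (2, 'feat2');
    INSERT INTO experiments VALUES (1, 'exp1'), (2, 'exp2');
    -- feat2 has no value in exp2 -> NULL in the pivot
    INSERT INTO feature_values VALUES (1, 1, 100), (2, 1, 150), (1, 2, 200);
""")
pivot = conn.execute("""
    SELECT f.feature_name,
           MAX(CASE WHEN e.experiment_name = 'exp1' THEN fv.value END) AS exp1,
           MAX(CASE WHEN e.experiment_name = 'exp2' THEN fv.value END) AS exp2
    FROM features f
    LEFT JOIN feature_values fv ON fv.feature_id = f.feature_id
    LEFT JOIN experiments e ON e.experiment_id = fv.experiment_id
    GROUP BY f.feature_name
    ORDER BY f.feature_name
""").fetchall()
```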
I have a MySQL 5.5 DB of +-40GB on a 64GB RAM machine in a production environment. All tables are InnoDB. There is also a slave running as a backup.
One table - the most important one - grew to 150M rows, inserting and deleting became slow. To speed up inserting and deleting I deleted half of the table. This did not speed up as expected; inserting and deleting is still slow.
I've read that running `OPTIMIZE TABLE` can help in such a scenario. As I understand it, this operation will require a read lock on the entire table, and optimizing the table might take quite a while on a big table.
What would be a good strategy to optimize this table while minimizing downtime?
**EDIT** The specific table to be optimized has +- 91M rows and looks like this:
```
+-------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| channel_key | varchar(255) | YES | MUL | NULL | |
| track_id | int(11) | YES | MUL | NULL | |
| created_at | datetime | YES | | NULL | |
| updated_at | datetime | YES | | NULL | |
| posted_at | datetime | YES | | NULL | |
| position | varchar(255) | YES | MUL | NULL | |
| dead | int(11) | YES | | 0 | |
+-------------+--------------+------+-----+---------+----------------+
``` | Percona Toolkit's [pt-online-schema-change](http://www.percona.com/doc/percona-toolkit/2.2/pt-online-schema-change.html) does this for you. In this case it worked very well. | 300 ms to insert seems excessive, even with slow disks. I would look into the root cause. Optimizing this table is going to take a lot of time. MySQL will create a copy of your table on disk.
Depending on the size of your innodb\_buffer\_pool (if the table is InnoDB) and the free memory on the host, I would try to preload the whole table into the OS page cache, so that at least reading the data will be sped up by a couple of orders of magnitude.
If you're using innodb\_file\_per\_table, or if it's a MyISAM table, it's easy enough to make sure the whole file is cached using "time cat /path/to/mysql/data/db/huge\_table.ibd > /dev/null". When you rerun the command, and it runs in under a few seconds, you can assume the file content is sitting in the OS page cache.
You can monitor the progress whilst the "optimize table" is running, by looking at the size of the temporary file. It's usually in the database data directory, with a temp filename starting with a dash (#) character. | How to run OPTIMIZE TABLE with the least downtime | [
"",
"mysql",
"sql",
"optimization",
""
] |
So I have a client that I am building a Rails app for....I am using PostgreSQL.
He made this comment about preferring to hide records, rather than delete them, and given that this is the first time I have heard about this I figured I would ask you guys to hear your thoughts.
> I'd rather hide than delete because deletions in tables eventually lead to table index havoc that causes queries to take longer than expected (much worse than Inserts or Updates). This won't be a problem in the beginning of the site (it gets exponentially worse over time), but seems like an easy issue to never encounter by just not deleting anything (yet) as part of the "everyday" web application functionality. We can always handle deletions much later as part of a Data Optimization & Maintenance process and re-index tables in that process on some (yet to be determined) scheduled basis.
In all the Rails apps I have built, I have never had an issue with records being deleted and it affecting the index.
Am I missing something? Is this a problem that used to exist, but modern RDBMS products have fixed it? | There may be functional reasons for preferring that records not be deleted, but reasons relating to some form of table index "havoc" are almost certainly bogus unless supported by some technical evidence.
You hear this sort of thing quite often in the Oracle world -- that indexes do not re-use space freed up by deletions. It's usually based on some misinterpretation of the facts (eg. that index blocks are not freed for re-use until they are completely empty). Hence you end up with people giving advice to periodically rebuild indexes. If you give these issues some thought, you wonder why the RDBMS developers would not have fixed such an issue, given that it supposedly harms the system performance.
So there may be some piece of Postgres-related, possibly obsolete, information on which this is based, but the onus is really on the person objecting to a perfectly normal type of database operation to come with evidence to support their position.
Another thought: I believe that in Postgres an update is implemented as a delete and insert, hence the advice to vacuum frequently on heavily updated tables. Based on that, updates should also cause the same index problems that are supposed to be associated with deletes. | Other reasons for not deleting the records.
1. you don't have to worry about cascading a delete through various other tables in the database that reference the row you are deleting
2. Every bit of data is useful. Debugging and auditing becomes easy.
3. Easier to rollback if needed. | Benefit to keeping a record in the database, rather than deleting it, for performance issues? | [
"",
"sql",
"ruby-on-rails",
"performance",
"postgresql",
"indexing",
""
] |
I have a table 'optionsproducts' having the following structure and values
```
ID OptionID ProductID
1 1 1
1 2 1
1 3 1
1 2 2
1 2 3
1 3 3
```
Now I want to extract ProductIDs against which both OptionID 2 and OptionID 3 are assigned, which means in this case ProductID 1 and 3 should be returned. I am not sure what I am missing. Any help will be appreciated. | To get `ProductID`s that relate to both `OptionID`s 2 and 3, you can write a query like the following:
```
select productid
from ( select productid
, optionid
, dense_rank() over(partition by productid
order by optionid) as rn
from t1
where optionid in (2,3)
) s
where s.rn = 2
```
Result:
```
PRODUCTID
----------
1
3
```
[**SQLFiddle Demo**](http://sqlfiddle.com/#!6/d1ee3/3) | Try this one -
```
DECLARE @temp TABLE
(
ID INT
, OptionID INT
, ProductID INT
)
INSERT INTO @temp (ID, OptionID, ProductID)
VALUES
(1, 1, 1),(1, 2, 1),
(1, 3, 1),(1, 2, 2),
(1, 2, 3),(1, 3, 3),(1, 3, 3)
SELECT ProductID
FROM @temp
WHERE OptionID IN (2,3)
GROUP BY ProductID
HAVING COUNT(*) > 1
```
Output -
```
ProductID
-----------
1
3
``` | Check single column for multiple values mysql | [
"",
"mysql",
"sql",
"sql-server",
""
] |
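A runnable sketch of the accepted idea above, using Python's built-in `sqlite3` with the question's data. Counting with `COUNT(DISTINCT OptionID)` instead of a plain row count keeps duplicate option rows (like the repeated `(1, 3, 3)` insert) from skewing the check:

```python
import sqlite3

# In-memory copy of the question's optionsproducts table (SQLite syntax).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE optionsproducts (ID INT, OptionID INT, ProductID INT);
    INSERT INTO optionsproducts VALUES
        (1,1,1),(1,2,1),(1,3,1),(1,2,2),(1,2,3),(1,3,3),(1,3,3);
""")

# COUNT(DISTINCT OptionID) = 2 requires both options to be present,
# even if one of them appears more than once for the same product.
rows = conn.execute("""
    SELECT ProductID
    FROM optionsproducts
    WHERE OptionID IN (2, 3)
    GROUP BY ProductID
    HAVING COUNT(DISTINCT OptionID) = 2
    ORDER BY ProductID
""").fetchall()

product_ids = [r[0] for r in rows]
print(product_ids)  # [1, 3]
```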
I'm trying to write a SQL query against the table below. I have tried many ways to get the job done, but it seems too complicated for me to find the solution.
**user\_id="200";** // let's say the user id is now 200.
tb\_conversation
```
-------------------------------------------------------------------------------------
c_id | user_one_id | user_two_id | user_one_delmsg | user_two_delmsg
-------------------------------------------------------------------------------------
001 | 200 | 198 | Y | N
------------------------------------------------------------------------------------
002 | 195 | 200 | Y | N
------------------------------------------------------------------------------------
003 | 200 | 193 | N | N
------------------------------------------------------------------------------------
```
What I'm trying to do is query only the single record that matches the **user\_id** above.
The user can be user\_one or user\_two in the table: if the user\_id is user\_one, then user\_one\_delmsg must not be "Y"; or if the user\_id is user\_two, then user\_two\_delmsg must not be "Y".
**What I have tried :**
```
$q= "SELECT * from conversation ORDER BY c_id DESC ";
$_stmt = $conn->prepare($q);
$_stmt->execute();
$row=$_stmt->fetchAll();
foreach ($row as $r) {
if ($user_id==$r['user_one_id']){
if ( $r['user_one_delmsg']!="Y") {
//do something
}
}
if ($user_id==$r['user_two_id']){
if ( $r['user_two_delmsg']!="Y") {
//do something
}
}
}
```
**What I get is:**
an array of results that match the query.
**But** what I want is only one result: the one with the max c\_id where user\_**x**\_delmsg is not "Y".
I have also tried using only fetch(); it did not give me what I want.
I have also put LIMIT 1 at the end of the query, but it did not help. | For a given userId, try:
```
Select Max(c_id) from conversation
Where 200 in (user_one_id, user_two_id)
And (user_one_id <> 200 Or user_one_delmsg <> 'Y')
And (user_two_id <> 200 Or user_two_delmsg <> 'Y')
```
for all UserIds, try:
```
Select userId , Max(cid) From
(Select c_id cid, user_one_id userId
from conversation
Where user_one_delmsg <> 'Y'
Union
Select c_id cid, user_two_id userId
from conversation
Where user_two_delmsg <> 'Y') Z
Group By UserId
``` | This will select max(c\_id) and will check for user\_one\_delmsg not equal to y.
```
select max(c_id), user_one_id from conversation where user_one_delmsg!='y';
```
This will select max(c\_id) for both user\_one\_id and user\_two\_id (specifically 200, as mentioned) and will check user\_one\_delmsg and user\_two\_delmsg respectively.
```
select max(c_id), user_one_id from conversation where user_one_id='200' and
user_one_delmsg!='y' union select max(c_id), user_two_id from conversation where
user_two_id='200' and user_two_delmsg!='y';
``` | Complicated sql statement with 4 conditions inside | [
"",
"mysql",
"sql",
""
] |
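A quick check of the accepted single-user query above against the question's sample rows, sketched with Python's built-in `sqlite3` (the `c_id` values are stored as plain integers here rather than zero-padded strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE conversation (
        c_id INT, user_one_id INT, user_two_id INT,
        user_one_delmsg TEXT, user_two_delmsg TEXT);
    INSERT INTO conversation VALUES
        (1, 200, 198, 'Y', 'N'),
        (2, 195, 200, 'Y', 'N'),
        (3, 200, 193, 'N', 'N');
""")

user_id = 200
# Row 1 is excluded (user 200 is user_one and user_one_delmsg = 'Y');
# rows 2 and 3 qualify, so MAX(c_id) picks 3.
row = conn.execute("""
    SELECT MAX(c_id) FROM conversation
    WHERE ? IN (user_one_id, user_two_id)
      AND (user_one_id <> ? OR user_one_delmsg <> 'Y')
      AND (user_two_id <> ? OR user_two_delmsg <> 'Y')
""", (user_id, user_id, user_id)).fetchone()

print(row[0])  # 3
```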
I have 2 tables which have identical columns. The first table stores the user's bills, and the second table stores the bill cancellations.
```
first table
-----------
id - total
-----------
1 - 100
2 - 85
3 - 50
second table
-----------
id - total
-----------
2 - 85
Result of JOIN
------------------------
id - total - status
------------------------
1 - 100 - OK
2 - 85 - OK
3 - 50 - OK
2 - 85 - CANCEL
```
How can I get the above result? | Use the `UNION ALL` set operator to combine both tables into one result set.
```
SELECT id, total, 'OK' as status
FROM First_Table
UNION ALL
SELECT id, total, 'CANCEL' as status
FROM Second_Table
``` | Query:
**[SQLFIDDLEExample](http://sqlfiddle.com/#!6/83f25/1)**
```
SELECT t1.id,
t1.total,
'OK' AS status
FROM first t1
UNION ALL
SELECT t2.id,
t2.total,
'CANCEL' AS status
FROM second t2
```
Result:
```
| ID | TOTAL | STATUS |
-----|-------|--------|
| 1 | 100 | OK |
| 2 | 85 | OK |
| 3 | 50 | OK |
| 2 | 85 | CANCEL |
``` | merge 2 identical tables rows | [
"",
"sql",
"sql-server",
"select",
"join",
""
] |
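The accepted `UNION ALL` answer can be tried end to end with Python's built-in `sqlite3`; the table names `first_table` and `second_table` are stand-ins for the question's unnamed tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE first_table  (id INT, total INT);
    CREATE TABLE second_table (id INT, total INT);
    INSERT INTO first_table  VALUES (1,100),(2,85),(3,50);
    INSERT INTO second_table VALUES (2,85);
""")

# UNION ALL keeps every row from both branches, so the cancelled
# bill appears twice: once as 'OK' and once as 'CANCEL'.
rows = conn.execute("""
    SELECT id, total, 'OK' AS status FROM first_table
    UNION ALL
    SELECT id, total, 'CANCEL' AS status FROM second_table
""").fetchall()

print(rows)
```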
I'm trying to compare the sum of 2 different columns, and I'm getting an error saying that I must declare @Base. When I tried something like @Base AS B, the error disappeared, but I'm not retrieving any data.
Can anyone help me figure out whether I have made a typo or whether my INNER JOIN is wrong?
```
Declare @Base table(PickupDate smalldatetime, DeliveryDate smalldatetime, PickupAdrID int, PickupCustID varchar(10), DeliveryType char, DeliveryAdrID int, DeliveryCustID varchar(10), DeliveryAlias varchar (30), Volumen float, Weight float) Insert @Base(PickupDate,DeliveryDate, PickupAdrID, PickupCustID, DeliveryType, DeliveryAdrID, DeliveryCustID, DeliveryAlias, Volumen,Weight)
SELECT PickupDate,DeliveryDate, PickupAdrID, PickupCustID, DeliveryType, DeliveryAdrID, DeliveryCustID, DeliveryAlias, Volumen, Weight
FROM Sending
INNER JOIN Address_ViewI ON Sending.PickupAdrID = Address_ViewI.AdrID
INNER JOIN Address_ViewI AS Address_View_DE ON Sending.DeliveryAdrID = Address_View_DE.AdrID
WHERE (Address_ViewI.CountryUK = @puC AND Address_View_DE.CountryUK = @deC) AND (Sending.PickupDate >= @start) AND (Sending.PickupDate < @end) AND ((PickUpCustID Like 'TMHSE' OR DeliveryCustID like 'TMHSE' ) )
OR (Address_ViewI.CountryUK = @puC AND Address_View_DE.CountryUK = @deC) AND (Sending.PickupDate >= @start) AND (Sending.PickupDate < @end) AND ((PickUpCustID Like 'SomeName' OR DeliveryCustID like 'SomeName' ) )
SELECT totals.DeliveryAdrID, totals.PickupDate,
(CASE WHEN weightTOTAL <= volumenTOTAL THEN volumenTOTAL
WHEN weightTOTAL >= volumenTOTAL THEN weightTOTAL ELSE weightTOTAL END) AS InvoiceWeight
FROM @Base INNER JOIN
(SELECT DeliveryAdrID, CONVERT(CHAR(10),PickupDate,110) AS PickupDate,
CEILING(SUM(CASE Weight When 0 Then @zeroKiloVal ELSE Weight END)) AS WeightTOTAL,
CEILING(SUM(CASE Volumen WHEN 0 THEN (@zeroVoluVal * @zeroVoluFac) ELSE Volumen END)) AS volumenTOTAL,
COUNT(DeliveryAdrID)AS Packages
FROM @Base GROUP BY CONVERT(CHAR(10),PickupDate,110), DeliveryAdrID ) AS totals
ON @Base.DeliveryAdrID = totals.DeliveryAdrID AND CONVERT(CHAR(10),@Base.PickupDate,110) = totals.PickupDate
```
The full code is listed here <http://pastie.org/8238866>
And here is the error I'm getting: | I figured out the error: my declared variables were missing a length. In T-SQL, `DECLARE @x varchar` without a length defaults to `varchar(1)`, so the values were silently truncated to one character:
```
Declare @puC varchar = 'Sweden'
Declare @deC varchar = 'Sweden'
```
I changed it to
```
Declare @puC varchar (50) = 'Sweden'
Declare @deC varchar (50) = 'Sweden'
```
Thanks for your time guys | It worked for me when I placed an alias on the reference to @Base
```
Declare @zeroKiloVal float = 10
Declare @zeroVoluVal float = 10
Declare @zeroVoluFac float = 200
Declare @puC varchar = 'Sweden'
Declare @deC varchar = 'Sweden'
Declare @start smalldatetime = '2013-04-21'
Declare @end smalldatetime = '2013-05-01'
DECLARE @Base TABLE (SendingID INT, Barcode VARCHAR(50), PickupType CHAR, PickupDate SMALLDATETIME, DeliveryDate SMALLDATETIME, PickupAdrID INT, PickupCustID VARCHAR(10), DeliveryType CHAR, DeliveryAdrID INT, DeliveryCustID VARCHAR(10), DeliveryAlias VARCHAR (30), Volumen FLOAT, [Weight] FLOAT)
INSERT INTO @Base(SendingID, Barcode, PickupType, PickupDate,DeliveryDate, PickupAdrID, PickupCustID, DeliveryType, DeliveryAdrID, DeliveryCustID, DeliveryAlias, Volumen,[Weight])
SELECT SendingID = 1, Barcode= 1, PickupType= 1, PickupDate= 1,DeliveryDate= 1, PickupAdrID= 1, PickupCustID= 1, DeliveryType= 1, DeliveryAdrID= 1, DeliveryCustID= 1, DeliveryAlias= 1, Volumen= 1, [Weight] = 1
-- Replacing below code with stubbed data for testing.
-- SELECT SendingID, Barcode, PickupType, PickupDate,DeliveryDate, PickupAdrID, PickupCustID, DeliveryType, DeliveryAdrID, DeliveryCustID, DeliveryAlias, Volumen, Weight
-- FROM Sending
-- INNER JOIN Address_ViewI ON Sending.PickupAdrID = Address_ViewI.AdrID
-- INNER JOIN Address_ViewI AS Address_View_DE ON Sending.DeliveryAdrID = Address_View_DE.AdrID
-- WHERE (Address_ViewI.CountryUK = @puC AND Address_View_DE.CountryUK = @deC) AND (Sending.PickupDate >= @start) AND (Sending.PickupDate < @end) AND ((PickUpCustID Like 'TMHSE' OR DeliveryCustID like 'TMHSE' ) )
-- OR (Address_ViewI.CountryUK = @puC AND Address_View_DE.CountryUK = @deC) AND (Sending.PickupDate >= @start) AND (Sending.PickupDate < @end) AND ((PickUpCustID Like 'TMHSE' OR DeliveryCustID like 'TMHSE' ) )
SELECT totals.DeliveryAdrID
, totals.PickupDate
, InvoiceWeight =
(
CASE WHEN weightTOTAL <= volumenTOTAL THEN volumenTOTAL
WHEN weightTOTAL >= volumenTOTAL THEN weightTOTAL ELSE weightTOTAL END
)
FROM @Base AS B -- <<Added alias here>>
INNER JOIN
(
SELECT DeliveryAdrID
, PickupDate = CONVERT(CHAR(10),PickupDate,110)
, WeightTOTAL = CEILING(SUM(CASE [Weight] WHEN 0 THEN @zeroKiloVal ELSE [Weight] END))
, volumenTOTAL = CEILING(SUM(CASE Volumen WHEN 0 THEN (@zeroVoluVal * @zeroVoluFac) ELSE Volumen END))
, Packages = COUNT(DeliveryAdrID)
FROM @Base
GROUP BY CONVERT(CHAR(10),PickupDate,110), DeliveryAdrID
) AS totals ON B.DeliveryAdrID = totals.DeliveryAdrID
AND CONVERT(CHAR(10),B.PickupDate,110) = totals.PickupDate
``` | Comparing Sum from 2 different Columns | [
"",
"sql",
"inner-join",
"declare",
""
] |
This question ties to my previous question:
[Updating only ID's with the latest date SQL (2 of 6)](https://stackoverflow.com/questions/18462159/updating-only-ids-with-the-latest-date-sql-2-of-6)
I have the following 2 tables:

I already have this query that updates the ID's in Table1 where only the latest date is found.
However, is it possible to match up the appropriate IDs in Table1 with the appropriate IDs in Table2 based on date?
Please see my query below:
```
Update Dairy
SET DY_H_ID = (
SELECT MAX(ID)
FROM History
WHERE H_DateTime <= DY_Date
AND H_IDX = DY_IDX
AND H_HA_ID = 7
AND H_HSA_ID = 19
AND H_Description LIKE 'Diary item added for :%'
)
WHERE DY_H_ID IS NULL
AND DY_IDX IS NOT NULL
AND DY_Date = (SELECT MAX(DY_Date) FROM Dairy)
```
I'd like to do something like this to match the rest up, however this doesn't work:
```
AND SUBSTRING(CAST(DY_Date AS varchar(11)), 1, 10) = (SELECT SUBSTRING(CAST(H_DateTime AS varchar(11)), 1, 10) FROM History)
```
E.g. ID 10029 in Table 1 should get ID 3205 from History. ID 10030 should get the ID of 3206, ID 10031 should get ID 3207, ID 10032 should get 3208 etc etc.
Note the values in both Tables will change. | I found it!! Thanks to @Secret Squirrel and @Gidil for pointing me in the right direction.
```
;WITH cte AS (
SELECT H_IDX, MAX(ID) MaxID, SUBSTRING(H_Description, 24, 10) AS [Date]
FROM History
WHERE H_HA_ID = 7
AND H_HSA_ID = 19
AND H_Description LIKE 'Diary item added for :%'
GROUP BY H_IDX, H_DateTime, H_Description
)
Update Dairy
SET DY_H_ID = MaxID
FROM Dairy
INNER JOIN CTE ON cte.H_IDX = DY_IDX
WHERE DY_H_ID IS NULL
AND DY_IDX IS NOT NULL
AND DATEPART(YYYY, DY_Date) = SUBSTRING(cte.[Date], 1, 4)
AND DATEPART(MM, DY_Date) = SUBSTRING(cte.[Date], 6, 2)
AND DATEPART(DD, DY_Date) = SUBSTRING(cte.[Date], 9, 2)
```
Please feel free to have a look and comment on this. | Could you not use DATEPART to solve this <http://msdn.microsoft.com/en-us/library/ms174420.aspx>
```
UPDATE Dairy
SET DY_H_ID = (
SELECT MAX(ID)
FROM History
WHERE H_DateTime <= DY_Date
AND H_IDX = DY_IDX
AND H_HA_ID = 7
AND H_HSA_ID = 19
AND H_Description LIKE 'Diary item added for :%'
)
WHERE DY_H_ID IS NULL
AND DY_IDX IS NOT NULL
AND DY_Date = (SELECT MAX(DY_Date) FROM Dairy)
AND DATEPART(yyyy, DY_Date) = DATEPART(yyyy, H_DateTime)
AND DATEPART(mm, DY_Date) = DATEPART(mm, H_DateTime)
AND DATEPART(dd, DY_Date) = DATEPART(dd, H_DateTime)
``` | Update columns from one table with appropriate column values from another table | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Let's say I'm running the following sql SELECT query:
```
SELECT Name, Age FROM Friends;
Bilbo|111
Aragorn|69
```
I'd like to be able to get the total number of results **in the same query**.
However, if I do the following:
```
SELECT Count(*),Name, Age FROM Friends;
2|Aragorn|69
```
only the final result is displayed. Is there a way to get all results and still get the count with a single SQLite query? | If your DBMS supports windowed aggregate functions, it is easy:
```
SELECT Count(*) OVER (),Name, Age FROM Friends;
```
Otherwise you have to (cross) join:
```
SELECT cnt,Name, Age FROM Friends, (SELECT Count(*) as cnt FROM Friends);
```
or use a Scalar Subquery:
```
SELECT (SELECT Count(*) FROM Friends) as cnt, Name, Age FROM Friends;
``` | Normally, the results-set object that is returned can be queried for its length; so you don't put an aggregate function like `COUNT` into the SQL query itself, but rather just count how many rows are returned. | Regular results and Count(*) in the same query | [
"",
"sql",
"sqlite",
""
] |
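Of the variants above, the scalar-subquery form is the most portable: it runs on any SQLite version, whereas `COUNT(*) OVER ()` needs SQLite 3.25+ for window-function support. A sketch with Python's built-in `sqlite3` and the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Friends (Name TEXT, Age INT);
    INSERT INTO Friends VALUES ('Bilbo', 111), ('Aragorn', 69);
""")

# The scalar subquery is evaluated once and repeated on every row,
# so each result row carries the total count alongside its own data.
rows = conn.execute("""
    SELECT (SELECT COUNT(*) FROM Friends) AS cnt, Name, Age
    FROM Friends
""").fetchall()

print(rows)  # every row starts with cnt = 2
```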
I'm using SQL SERVER 2008 R2 to store data and C# web application to retrieve data and display it into gridview.
I have already stored data with different dates. Now I want to show data daily in the C# gridview according to date: for example, since today is 15 August, only rows dated 15 August should populate the gridview.
I have created the datasource to populate the gridview, which currently shows all the data irrespective of date.
The code is as follows:
```
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
ConnectionString="<%$ ConnectionStrings:LoginConnectionString %>"
SelectCommand="SELECT * FROM [Student]"></asp:SqlDataSource>
```
Now I want to show only the current (today's) date's data, as explained above.
Please guide me how to write the sql command to achieve the above functionality.
Table name is Student
Column Name is date stored as datetime
Thanks | Assuming that "date" also contains time of day information, you have to strip it when comparing to current date:
```
WHERE CAST(Student.[Date] AS DATE) = CAST(GETDATE() AS DATE)
``` | Add your where clause to your select statement like this:
```
Where Cast(Student.Date As Date) = Cast(GetDate() As Date)
```
Edit: updated to use the cast instead of convert, and made it so it's only equal to today's date. | only retrieve record from sql database which have todays's date | [
"",
"sql",
"sql-server",
"date",
""
] |
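The same day-level comparison can be tried on any engine that can normalize a datetime to a date. A small sketch with Python's built-in `sqlite3`, where the `date()` function plays the role of `CAST(... AS DATE)`; the table and column names follow the question, and the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Student (name TEXT, date TEXT);
    INSERT INTO Student VALUES ('old row', '2012-01-05 09:30:00');
    INSERT INTO Student VALUES ('today row', datetime('now'));
""")

# date("date") strips the time-of-day part, just like
# CAST(col AS DATE) in SQL Server, so rows compare at day granularity.
rows = conn.execute("""
    SELECT name FROM Student
    WHERE date("date") = date('now')
""").fetchall()

print(rows)  # [('today row',)]
```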
I have a problem. I have a table where I keep the incoming records and another one where I keep the outgoing records; what I need to do is group these records by partNumber and produce a single output.
```
InTable
PartNo Qty Date
A 1 1/1/13
A 5 1/1/13
B 2 1/1/13
OutTable
PartNo Qty Date
A 2 1/1/13
B 1 1/1/13
C 3 1/1/13
Result Needed
Date 1/1/13
PartNo In Out Total
A 6 2 4
B 2 1 1
C 0 3 -3
```
I have tried something like this, but it only returns the overall totals because of the sum(qty), and I could not get it to work any other way.
```
select a.PartNo, sum(b.inQty) as inQty,sum(c.outQty) as outQty, sum(b.inQty)-sum(c.outQty) as total from
(Select PartNo FROM InTable
where date= '01-01-2013'
group by PartNo
union
Select PartNo FROM OutTable
where date= '01-01-2013'
group by PartNo) A
cross join
(
SELECT PartNo,SUM(Qty) inQty FROM InTable
where date= '01-01-2013'
group by PartNo
)B
cross join
(
SELECT PartNo,SUM(Qty) outQty FROM OutTable
where date= '01-01-2013'
group by PartNo
)c
group by a.PartNo
```
There I tried to join three queries. Each query individually returns something helpful, but when I try to join them, the result looks like this:
```
PartNo inQty outQty total
A 8 6 2
B 8 6 2
C 8 6 2
```
Any suggestions? Thanks. | Use a Full Join
```
SELECT
Coalesce(I.PartNo,O.PartNo) AS PartNo,
IsNull(I.Qty,0) AS [In],
IsNull(O.Qty,0) AS [Out],
IsNull(I.Qty,0) - IsNull(O.Qty,0) AS [Total]
FROM
(SELECT PartNo, Sum(Qty) AS Qty FROM InTable WHERE Date = '1/1/2013' GROUP BY PartNo) I
FULL JOIN
(SELECT PartNo, Sum(Qty) AS Qty FROM OutTable WHERE Date = '1/1/2013' GROUP BY PartNo) O ON I.PartNo = O.PartNo;
```
[Sql Fiddle Example](http://sqlfiddle.com/#!6/9a3cc/10) | Use a Full Outer Join on two Derived Tables:
```
SELECT
COALESCE(inTab.PartNo, outTab.PartNo) AS PartNo,
COALESCE(inQty, 0),
COALESCE(outQty, 0),
COALESCE(inQty, 0) - COALESCE(outQty, 0) AS total
FROM
(
SELECT PartNo, SUM(Qty) AS inQty
FROM InTable
WHERE DATE= '01-01-2013'
GROUP BY PartNo
) InTab
FULL JOIN
(
SELECT PartNo, SUM(Qty) AS outQty
FROM OutTable
WHERE DATE= '01-01-2013'
GROUP BY PartNo
) OutTab
ON inTab.Partno = outTab.PartNo
``` | Join Query Results | [
"",
"sql",
"sql-server",
""
] |
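Where `FULL JOIN` isn't available (older SQLite and MySQL, for instance), the same report can be produced by normalizing both tables into one signed stream and aggregating. A sketch in Python's built-in `sqlite3` with the question's data; this is an alternative formulation, not the accepted query itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE InTable  (PartNo TEXT, Qty INT, Date TEXT);
    CREATE TABLE OutTable (PartNo TEXT, Qty INT, Date TEXT);
    INSERT INTO InTable  VALUES ('A',1,'2013-01-01'),('A',5,'2013-01-01'),('B',2,'2013-01-01');
    INSERT INTO OutTable VALUES ('A',2,'2013-01-01'),('B',1,'2013-01-01'),('C',3,'2013-01-01');
""")

# Tag each row with its direction, stack both tables with UNION ALL,
# then one GROUP BY produces in, out, and net totals per part.
rows = conn.execute("""
    SELECT PartNo,
           SUM(CASE WHEN dir = 'in'  THEN Qty ELSE 0 END)    AS InQty,
           SUM(CASE WHEN dir = 'out' THEN Qty ELSE 0 END)    AS OutQty,
           SUM(CASE WHEN dir = 'in'  THEN Qty ELSE -Qty END) AS Total
    FROM (
        SELECT PartNo, Qty, 'in'  AS dir FROM InTable  WHERE Date = '2013-01-01'
        UNION ALL
        SELECT PartNo, Qty, 'out' AS dir FROM OutTable WHERE Date = '2013-01-01'
    )
    GROUP BY PartNo
    ORDER BY PartNo
""").fetchall()

print(rows)  # [('A', 6, 2, 4), ('B', 2, 1, 1), ('C', 0, 3, -3)]
```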
```
CUSTOMER(CustID, CustName)
Sale(SaleNo, StockNo, CustNo, SaleDate)
```
How do I delete customers that have not bought anything since 2009?
I can get the CustIDs by using a MINUS:
```
SELECT CustID FROM CUSTOMER
WHERE SaleDate <= to_date('31-12-09', 'DD-MM-YY')
MINUS
SELECT CustID FROM CUSTOMER
WHERE SaleDate > to_date('31-12-09', 'DD-MM-YY');
```
But I have no idea how I would do the delete in a single query.
Any suggestions will be appreciated. | You didn't specify your DBMS, although I suspect it's Oracle due to the use of `MINUS` and `to_date()`.
Anyway the following should work on Oracle (and any DBMS that complies with the ANSI standard)
```
delete from customer
where not exists (select 1
from sale
where sale.custId = customer.CustNo
and sale.saledate >= date '2009-01-01')
``` | ```
delete from CUSTOMER c
inner join Sale S
on C.CustID=S.CustNo
where max(SaleDate)<='31-12-09'
``` | SQL DELETE using DATE | [
"",
"sql",
"oracle",
"date",
"sql-delete",
""
] |
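The accepted `NOT EXISTS` delete can be exercised with Python's built-in `sqlite3`. Dates are stored as ISO text here, so the Oracle `date '2009-01-01'` literal becomes a plain string, and the sample customers are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (CustID INT, CustName TEXT);
    CREATE TABLE sale (SaleNo INT, StockNo INT, CustNo INT, SaleDate TEXT);
    INSERT INTO customer VALUES (1,'Anna'), (2,'Ben'), (3,'Cleo');
    INSERT INTO sale VALUES
        (10, 1, 1, '2008-06-15'),  -- Anna last bought in 2008
        (11, 2, 2, '2010-03-02');  -- Ben bought in 2010; Cleo never bought
""")

# Deletes customers with no sale on or after 2009-01-01, which also
# removes customers who never bought anything at all.
conn.execute("""
    DELETE FROM customer
    WHERE NOT EXISTS (SELECT 1 FROM sale
                      WHERE sale.CustNo = customer.CustID
                        AND sale.SaleDate >= '2009-01-01')
""")

remaining = [r[0] for r in conn.execute(
    "SELECT CustName FROM customer ORDER BY CustID")]
print(remaining)  # ['Ben']
```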
What I want to do is select the price from a table where the date given is between the start and finish date in a table.
```
SELECT price
FROM table
WHERE 'dategiven' BETWEEN startdate AND enddate
```
This is easy enough with a datetime filter. The problem is that I have multiple records in the provided time window, so I also have a version column.
I have below an example of my table:

I want my query output to be as below where my `dategiven` is `2013-08-25`

Milk has 3 records, 2 of which are valid for `dategiven` (2013-08-25). I then want to return the result with the highest version.
something like:
```
SELECT
price
FROM table
WHERE 'dategiven' BETWEEN startdate AND enddate AND max(version)
``` | Using the `row_number()` function to rank the rows
```
select product, price, version
from
(
select
*,
row_number() over (partition by product order by version desc) rn
from yourtable
where @dategiven between startdate and enddate
) v
where rn = 1
``` | May not be the most efficient way but:
```
select price
from table
where
'dategiven' between startdate and enddate
and version =
(
select max(version) from table t2 where t2.product = table.product
and 'dategiven' between startdate and enddate
)
``` | Query to return rows in date range but return only max value of column | [
"",
"sql",
"sql-server",
""
] |
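Where window functions aren't available, the same "highest version inside the date window" result can come from joining back to a per-product `MAX(version)` computed over that window. A sketch with Python's built-in `sqlite3`; the table name `prices` and the sample rows are assumptions modeled on the question's screenshots:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prices (product TEXT, price REAL, version INT,
                         startdate TEXT, enddate TEXT);
    INSERT INTO prices VALUES
        ('Milk', 1.00, 1, '2013-01-01', '2013-06-30'),
        ('Milk', 1.10, 2, '2013-06-01', '2013-12-31'),
        ('Milk', 1.25, 3, '2013-08-01', '2013-12-31');
""")

dategiven = '2013-08-25'
# The inner query finds the highest version per product that is valid
# on the given date; the join pulls back that version's full row.
rows = conn.execute("""
    SELECT p.product, p.price, p.version
    FROM prices p
    JOIN (SELECT product, MAX(version) AS maxv
          FROM prices
          WHERE ? BETWEEN startdate AND enddate
          GROUP BY product) m
      ON m.product = p.product AND m.maxv = p.version
""", (dategiven,)).fetchall()

print(rows)  # [('Milk', 1.25, 3)]
```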
I know that I can search for a term in one column in a table in t-sql by using `like %termToFind%`. And I know I can get all columns in a table with this:
```
SELECT *
FROM MyDataBaseName.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'MyTableName'
```
How can I perform a LIKE comparison on each of the columns of a table? I have a very large table, so I can't just spell out `LIKE` for each column. | As always, I'll suggest xml for this (I'd suggest JSON if SQL Server had native support for it :) ). You can try to use this query, though it could perform not so well on a large number of rows:
```
;with cte as (
select
*,
(select t.* for xml raw('data'), type) as data
from test as t
)
select *
from cte
where data.exist('data/@*[local-name() != "id" and contains(., sql:variable("@search"))]') = 1
```
see [**`sql fiddle demo`**](http://sqlfiddle.com/#!3/b9eec/12) for more detailed example.
**Important** note by [Alexander Fedorenko](https://stackoverflow.com/users/1085940/alexander-fedorenko) in comments: *it should be understood that `contains` function is case-sensitive and uses xQuery default Unicode code point collation for the string comparison*.
More general way would be to use dynamic SQL solution:
```
declare @search nvarchar(max)
declare @stmt nvarchar(max)
select @stmt = isnull(@stmt + ' or ', '') + quotename(name) + ' like @search'
from sys.columns as c
where c.[object_id] = object_id('dbo.test')
--
-- also possible
--
-- select @stmt = isnull(@stmt + ' or ', '') + quotename(column_name) + ' like @search'
-- from INFORMATION_SCHEMA.COLUMNS
-- where TABLE_NAME = 'test'
select @stmt = 'select * from test where ' + @stmt
exec sp_executesql
@stmt = @stmt,
@params = N'@search nvarchar(max)',
@search = @search
```
[**`sql fiddle demo`**](http://sqlfiddle.com/#!3/b9eec/20) | I'd use dynamic SQL here.
Full credit - this answer was initially posted by another user, and deleted. I think it's a good answer so I'm re-adding it.
```
DECLARE @sql NVARCHAR(MAX);
DECLARE @table NVARCHAR(50);
DECLARE @term NVARCHAR(50);
SET @term = '%term to find%';
SET @table = 'TableName';
SET @sql = 'SELECT * FROM ' + @table + ' WHERE '
SELECT @sql = @sql + COALESCE('CAST('+ column_name
+ ' as NVARCHAR(MAX)) like N''' + @term + ''' OR ', '')
FROM INFORMATION_SCHEMA.COLUMNS WHERE [TABLE_NAME] = @table
SET @sql = @sql + ' 1 = 0'
SELECT @sql
EXEC sp_executesql @sql
```
The XML answer is cleaner (I prefer dynamic SQL only when necessary) but the benefit of this is that it will utilize any index you have on your table, and there is no overhead in constructing the XML CTE for querying. | how do I select records that are like some string for any column in a table? | [
"",
"sql",
"sql-server",
"t-sql",
"xquery",
"sqlxml",
""
] |
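Outside T-SQL the same "build the WHERE clause from the column catalog" idea applies; in SQLite, `PRAGMA table_info` stands in for `INFORMATION_SCHEMA.COLUMNS`. A sketch with Python's built-in `sqlite3`: the `search_all_columns` helper and the `people` table are hypothetical, and note that the table name is interpolated into the SQL (so it must come from a trusted source) while the search term stays a bound parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (id INT, first_name TEXT, city TEXT);
    INSERT INTO people VALUES (1,'Ada','London'), (2,'Linus','Helsinki');
""")

def search_all_columns(conn, table, term):
    # Enumerate the table's columns, then OR together one LIKE
    # predicate per column; every value is cast to text first.
    cols = [row[1] for row in conn.execute(f'PRAGMA table_info("{table}")')]
    where = " OR ".join(f'CAST("{c}" AS TEXT) LIKE ?' for c in cols)
    sql = f'SELECT * FROM "{table}" WHERE {where}'
    return conn.execute(sql, [f"%{term}%"] * len(cols)).fetchall()

print(search_all_columns(conn, "people", "Hels"))  # [(2, 'Linus', 'Helsinki')]
```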
I have a table called "places"
```
origin | destiny | distance
---------------------------
A | X | 5
A | Y | 8
B | X | 12
B | Y | 9
```
For each origin, I want to find out which is the closest destiny. In MySQL I could do
```
SELECT origin, destiny, MIN(distance) FROM places GROUP BY origin
```
And I could expect the following result
```
origin | destiny | distance
---------------------------
A | X | 5
B | y | 9
```
Unfortunately, this query does not work in PostgreSQL. Postgres forces me to either wrap "destiny" in its own aggregate function or add it as another argument of the GROUP BY clause. Both "solutions" completely change my desired result.
How could I translate the above MySQL query to PostgreSQL? | MySQL is the only DBMS that allows the broken ("loose" in MySQL terms) GROUP BY handling. Every other DBMS (including Postgres) would reject your original statement.
In Postgres you can use the `distinct on` operator to achieve the same thing:
```
select distinct on (origin)
origin,
destiny,
distance
from places
order by origin, distance;
```
The ANSI solution would be something like this:
```
select p.origin,
p.destiny,
p.distance
from places p
join (select p2.origin, min(p2.distance) as distance
from places p2
group by origin
) t on t.origin = p.origin and t.distance = p.distance
order by origin;
```
Or without a join using window functions
```
select t.origin,
t.destiny,
t.distance
from (
select origin,
destiny,
distance,
min(distance) over (partition by origin) as min_dist
from places
) t
where distance = min_dist
order by origin;
```
Or another solution with window functions:
```
select distinct origin,
first_value(destiny) over (partition by origin order by distance) as destiny,
min(distance) over (partition by origin) as distance
from places
order by origin;
```
My guess is that the first one (Postgres specific) is probably the fastest one.
Here is an SQLFiddle for all three solutions: <http://sqlfiddle.com/#!12/68308/2>
---
Note that the MySQL result might actually be incorrect as it will return an arbitrary (=random) value for destiny. The value returned by MySQL might not be the one that belongs to the lowest distance.
More details on the broken group by handling in MySQL can be found here: <http://www.mysqlperformanceblog.com/2006/09/06/wrong-group-by-makes-your-queries-fragile/> | The neatest (in my opinion) way to do this in PostgreSQL is to use an aggregate function which clearly specifies *which* value of `destiny` should be selected.
The desired value can be described as "the first matching `destiny`, if you order the matching rows by their `distance`".
You therefore need two things:
* A ["first" aggregate](http://wiki.postgresql.org/wiki/First_%28aggregate%29), which simply returns the "first" of a list of values. This is very easy to define, but is not included as standard.
* The ability to specify what order those matches come in (otherwise, like the MySQL "loose Group By", it will be undefined which value you actually get). This was added in PostgreSQL 9.0, and [the syntax is documented under "Aggregate Expressions"](http://www.postgresql.org/docs/current/static/sql-expressions.html#SYNTAX-AGGREGATES).
Once the `first()` aggregate is defined (which you need do only once per database, while you're setting up your initial tables), you can then write:
```
Select
origin,
first(destiny Order by distance Asc) as closest_destiny,
min(distance) as closest_destiny_distance
-- Or, equivalently: first(distance Order by distance Asc) as closest_destiny_distance
from places
group by origin
order by origin;
```
[Here is a SQLFiddle demo](http://sqlfiddle.com/#!12/d85cf/2) showing the whole thing in operation. | PostgreSQL - Get related columns of an aggregated column | [
"",
"sql",
"postgresql",
""
] |
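The ANSI join form runs unchanged on engines without `DISTINCT ON`, SQLite included. A sketch with Python's built-in `sqlite3` and the question's data; note that ties on the minimum distance would return one row per tying destiny here, which is where `DISTINCT ON` behaves differently:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE places (origin TEXT, destiny TEXT, distance INT);
    INSERT INTO places VALUES
        ('A','X',5), ('A','Y',8), ('B','X',12), ('B','Y',9);
""")

# Compute each origin's minimum distance, then join back to recover
# the destiny that carries that minimum.
rows = conn.execute("""
    SELECT p.origin, p.destiny, p.distance
    FROM places p
    JOIN (SELECT origin, MIN(distance) AS distance
          FROM places GROUP BY origin) t
      ON t.origin = p.origin AND t.distance = p.distance
    ORDER BY p.origin
""").fetchall()

print(rows)  # [('A', 'X', 5), ('B', 'Y', 9)]
```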
Newbie Rails question... I know there's a better way to do this. I need help understanding why this isn't working like I thought it should.
I'm doing a simple join of tables using a "has\_many" relationship with a pre-existing database. I need to keep the non-Rails-friendly column titles.
Here's the output <%= room.levels %> as seen in the browser: `[#<Level Name: "01 - FIRST FLOOR">]`
I'd like to only see 01 - FIRST FLOOR without all of the other information.
I have two tables. :rooms and :levels
Here's the schema for the two tables:
```
create_table "levels", :primary_key => "Id", :force => true do |t|
t.integer "TypeId"
t.integer "DesignOption"
t.string "Name"
t.float "Elevation"
end
create_table "rooms", :primary_key => "Id", :force => true do |t|
t.integer "DesignOption"
t.integer "PhaseId"
t.string "Comments"
t.float "Volume"
t.float "Perimeter"
t.integer "Level"
t.string "Occupancy"
t.float "Area"
t.string "Number"
t.string "Name"
end
add_index "rooms", ["Id"], :name => "Id", :unique => true
```
Here's the app/model/room.rb:
```
class Room < ActiveRecord::Base
attr_accessible :Area, :Level, :Name, :Number, :Perimeter, :PhaseId, :Elevation
has_many :levels, :primary_key => 'Level', :foreign_key => 'Id', :select => 'Name'
  set_primary_key :Id
end
```
Here's a snippet from the app/views/rooms/index.html.erb:
```
<% @rooms.each do |room| %>
<tr>
<td><%= room.Name %></td>
<td><%= room.Number %></td>
<td><%= room.PhaseId %></td>
<td><%= room.levels %></td>
<td><%= link_to 'Show', room %></td>
<td><%= link_to 'Edit', edit_room_path(room) %></td>
<td><%= link_to 'Destroy', room, method: :delete, data: { confirm: 'Are you sure?' } %></td>
</tr>
```
Thanks! | You can do this:
```
<td><%= room.levels.map(&:Name).join(', ') %></td>
```
**Why didn't your code work** in the first place? Because `room.levels` returns an array of Level objects. You need to loop through them to get each name, and then display it.
```
room.levels
# => returns all the level objects associated
room.levels.map(&:Name)
# => collect each name of the level objects (makes an array of (String) names)
room.levels.map(&:Name).join(', ')
# => Returns a nice String with all the level names, with ", " between each.
``` | because `levels` gives you a collection back. you need to iterate over `room.levels` and there you get the elements | has_many join display issue | [
"",
"sql",
"ruby-on-rails",
"ruby-on-rails-3",
""
] |
When I press F2, the following code in my form's KeyDown event returns 113:
```
MessageBox.Show(e.KeyValue.ToString());//If pressed key is F2 ->113
```
but when I try to get the char for that KeyValue with the following code, it returns "q":
```
MessageBox.Show(Char.ConvertFromUtf32(113));//return q
```
How can I get "F2" from the key value 113? | As it's a `KeyCode`, cast it to the [`Keys`](http://msdn.microsoft.com/en-us/library/system.windows.forms.keys.aspx) enum:
```
((Keys)e.KeyValue).ToString();
``` | The reason you are getting the letter q when you try and get the char value of 113 is because it maps to the ASCII value of q

And to answer the rest of your question
[ascii codes for windows keyboard keys and the codes for function keys(F1 - F12) and other keys like shift,capslock,backspace,ctrl etc](https://stackoverflow.com/questions/10832522/ascii-codes-for-windows-keyboard-keys-and-the-codes-for-function-keysf1-f12)
I found this article here <http://bytes.com/topic/c-sharp/answers/429451-keycode-keyvalue-keydata>
> There are KeyUp, KeyDown and KeyPress event. Only the KeyPress event
> supplies the character code. I don't say ASCII because this code
> depends on the current language settings; even one code can be
> generated as a result of pressing a sequence of keys. Even in the
> simplest cases with English language settings there is no 1:1 mapping
> between keys and character codes e.g. key A produces ASCII for 'a' or
> key A produces ASCII for 'A' if cpaslock is on or shift is pressed or
> 'a' of both capslock is on and shift is pressed, etc. Even more there
> are keys that don't produce ASCII codes - ctrl, alt, shift, F1, F2.,
> Fx, windows keys, etc | how to return (F1) from 113 key value ? | [
"",
"sql",
"visual-studio-2010",
"c#-4.0",
"onkeydown",
""
] |
I'm trying to add my local SQL Server database to Visual Studio so I can grab the connection string from it, but I am getting this error message:


As you can see, the Test succeeds, but when I try to Add, I receive an error. | I figured out that you need to have SDK.sfc 11.0.0.0 installed. I only had 10.0.0.0 installed.
You can use the link here to download "Microsoft® System CLR Types for Microsoft® SQL Server® 2012" and "Microsoft® SQL Server® 2012 Shared Management Objects"
[Download Link](http://www.microsoft.com/en-us/download/details.aspx?id=29065)
I think these are normally installed with VS 2012 but in either case it fixed the issue. You can check your version in the assembly path:
C:\windows\assembly

The install does not require a restart but you will need to close and reopen your Visual Studio. | It seems that you are missing some updates from Microsoft: [one link](http://social.msdn.microsoft.com/Forums/vstudio/en-US/43d7267f-bc42-4540-9c85-e29a67691784/unable-to-add-data-connection-to-sql-server)
You could search with google/bing/whatever and find a lot of hints to get this working! | Visual Studio Unable to Add Data Connection to SQL Server | [
"",
"sql",
"visual-studio",
"visual-studio-2012",
"sql-server-2008-r2",
""
] |
I would like to be able to create a temp variable within a query (not a stored proc or function) which will not need to be declared and set, so that I don't need to pass the query parameters when I call it.
Trying to work toward this:
```
Select field1,
tempvariable=2+2,
newlycreatedfield=tempvariable*existingfield
From
table
```
Away from this:
```
DECLARE @tempvariable
SET @tempvariable = 2+2
Select field1,
newlycreatedfield=@tempvariable*existingfield
From
table
```
Thank you for your time
I may have overcomplicated the example; more simply, the following gives an "Invalid column name 'QID'" error:
```
Select
QID = 1+1,
THN = QID + 1
```
If this is housed in a query, is there a workaround? | You can do something like this :
```
SELECT field1, tv.tempvariable,
(tv.tempvariable*existingfield) AS newlycreatedfield
FROM table1
INNER JOIN (SELECT 2+2 AS tempvariable) AS tv
```
See SQLFIDDLE : <http://www.sqlfiddle.com/#!2/8b0724/8/0>
And to refer at your simplified example :
```
SELECT var.QID,
(var.QID + 1) AS THN
FROM (SELECT 1+1 as QID) AS var
```
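This derived-table trick is plain SQL, so it can be sanity-checked with an in-memory database; here is a quick sketch using Python's sqlite3 (the names `var`, `QID`, and `THN` are just the placeholders from the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The derived table "var" plays the role of a declared variable:
# QID is computed once and can then be reused in other expressions.
row = conn.execute(
    "SELECT var.QID, (var.QID + 1) AS THN FROM (SELECT 1 + 1 AS QID) AS var"
).fetchone()
print(row)  # (2, 3)
```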
See SQLFIDDLE : <http://www.sqlfiddle.com/#!2/d41d8/19140/0> | ## You can avoid derived tables and subqueries if you do a "hidden" assignment as a part of a complex concat\_ws expression
Since the assignment is part of the expression of the ultimate desired value for the column, as opposed to sitting in its own column, you don't have to worry about whether MySQL will evaluate it in the correct order. Needless to say, if you want to use the temp var in multiple columns, then all bets are off :-/
*caveat: I did this in MySQL 5.1.73; things might have changed in later versions*
I wrap everything in **concat\_ws** because it coalesces null args to empty strings, whereas concat does not.
I wrap the assignment to the var **@stamp** in an **if** so that it is "consumed" instead of becoming an arg to be concatenated. As a side note, I have guaranteed elsewhere that u.status\_timestamp is populated when the user record is first created. Then @stamp is used in two places in **date\_format**, both as the date to be formatted and in the nested if to select which format to use. The final concat is an hour range "h-h" which I have guaranteed elsewhere to exist if the c record exists, otherwise its null return is coalesced by the outer concat\_ws as mentioned above.
```
SELECT
concat_ws( '', if( @stamp := ifnull( cs.checkin_stamp, u.status_timestamp ), '', '' ),
date_format( @stamp, if( timestampdiff( day, @stamp, now() )<120, '%a %b %e', "%b %e %Y" )),
concat( ' ', time_format( cs.start, '%l' ), '-', time_format( cs.end, '%l' ))
) AS as_of
FROM dbi_user AS u LEFT JOIN
(SELECT c.u_id, c.checkin_stamp, s.start, s.end FROM dbi_claim AS c LEFT JOIN
dbi_shift AS s ON(c.shift_id=s.id) ORDER BY c.u_id, c.checkin_stamp DESC) AS cs
ON (cs.u_id=u.id) WHERE u.status='active' GROUP BY u.id ;
```
A final note: while I happen to be using a derived table in this example, it is only because of the requirement to get the latest claim record and its associated shift record for each user. You probably won't need a derived table if a complex join is not involved in the computation of your temp var. This can be demonstrated by going to the first fiddle in @Fabien TheSolution's answer and changing the right hand query to
```
Select field1, concat_ws( '', if(@tempvariable := 2+2,'','') ,
@tempvariable*existingfield ) as newlycreatedfield
from table1
```
Likewise the second fiddle (which appears to be broken) would have a right hand side of
```
SELECT concat_ws( '', if(@QID := 2+2,'',''), @QID + 1) AS THN
``` | Creating Temp Variables within Queries | [
"",
"mysql",
"sql",
""
] |
Is it possible to supply the list of parameters to sp\_ExecuteSql dynamically?
In sp\_ExecuteSql the query and the parameter definitions are strings. We can use string variables for these and pass in any query and parameter definitions we want to execute. However, when assigning values to the parameters, we cannot seem to use strings or string variables for the parameter names.
For example:
```
DECLARE @SelectedUserName NVARCHAR(255) ,
@SelectedJobTitle NVARCHAR(255);
SET @SelectedUserName = N'TEST%';
SET @SelectedJobTitle = N'%Developer%';
DECLARE @sql NVARCHAR(MAX) ,
@paramdefs NVARCHAR(1000);
SET @sql = N'select * from Users where Name LIKE @UserName '
+ N'and JobTitle LIKE @JobTitle;'
SET @paramdefs = N'@UserName nvarchar(255), @JobTitle nvarchar(255)';
EXEC sp_ExecuteSql @sql, @paramdefs, @UserName = @SelectedUserName,
@JobTitle = @SelectedJobTitle;
```
The query @sql, and the parameter definitions, @paramdefs, can be passed into sp\_ExecuteSql dynamically, as string variables. However, it seems to me that when assigning values to the parameters we cannot assign dynamically and must always know the number of parameters and their names ahead of time. Note in my example how I could declare parameters @UserName and @JobTitle dynamically and pass in that declaration as a string variable, but I had to explicitly specify the parameter names when I wanted to set them. Is there any way around this limitation?
I would like to be able to both declare the parameters dynamically and assign to them dynamically as well. Something like:
```
EXEC sp_ExecuteSql @sql, @paramdefs,
N'@UserName = @SelectedUserName, @JobTitle = @SelectedJobTitle';
```
Note that this doesn't actually work but illustrates the sort of thing I'd like to happen. If this sort of thing worked then I could pass in different queries with different numbers of parameters which have different names. The whole thing would be dynamic and I wouldn't have to know the names or numbers of parameters beforehand. | You're trying to work one level too high in abstraction.
Arbitrary parameters require dynamic SQL, a.k.a. building SQL via strings, which then makes the entire point of parameters moot.
Instead, this should be handled as parameters in the *calling code*, such as C#, which *will* allow you to take any SQL statement in a string, apply an arbitrary number of arguments, and execute it. | You can do this by using a table valued parameter as the only parameter:
```
DECLARE @YourQuery NVARCHAR(MAX) = '<your dynamic query>';
CREATE TYPE dbo.SqlVariantTable AS TABLE
(
[Name] VARCHAR(255),
Type VARCHAR(255),
Value SQL_VARIANT
)
DECLARE @Table SqlVariantTable;
-- Insert your dynamic parameters here:
INSERT INTO @Table
VALUES
('@Parameter1', 'VARCHAR(255)', 'some value'),
('@Parameter2', 'INT', 3);
DECLARE @ParameterAssignment NVARCHAR(MAX)
SELECT @ParameterAssignment = ISNULL(@ParameterAssignment + ';','') + 'DECLARE ' + Name + ' ' + Type + ' = (SELECT CAST(Value AS ' + Type + ') FROM @p1 WHERE Name = ''' + Name + ''')'
FROM @Table
SET @YourQuery = @ParameterAssignment + ';' + @YourQuery
EXEC SP_EXECUTESQL @YourQuery, N'@p1 SqlVariantTable READONLY', @Table
```
Now you can simply insert the parameters into the @Table variable, and they will be present with their original names and types within the query executed by SP\_EXECUTESQL. Only make sure you do not use VARCHAR(MAX) or NVARCHAR(MAX) variable types, since they are not supported by SQL\_VARIANT. Use (for instance) VARCHAR(4000) instead | Is it possible to supply sp_ExecuteSql parameter names dynamically? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Does anyone know how can I calculate the number of weekdays between two date fields? I'm using oracle sql developer. I need to find the average of weekdays between multiple start and end dates. So I need to get the count of days for each record so I can average them out. Is this something that can be done as one line in the `SELECT` part of my query? | This answer is similar to Nicholas's, which isn't a surprise because you need a subquery with a `CONNECT BY` to spin out a list of dates. The dates can then be counted while checking for the day of the week. The difference here is that it shows how to get the weekday count value on each line of the results:
```
SELECT
FromDate,
ThruDate,
(SELECT COUNT(*)
FROM DUAL
WHERE TO_CHAR(FromDate + LEVEL - 1, 'DY') NOT IN ('SAT', 'SUN')
CONNECT BY LEVEL <= ThruDate - FromDate + 1
) AS Weekday_Count
FROM myTable
```
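As a quick cross-check of the weekday arithmetic outside the database (pure Python, no Oracle needed), the same inclusive Monday-to-Friday count can be written as:

```python
from datetime import date, timedelta

def weekday_count(from_date, thru_date):
    """Inclusive count of Mon-Fri days between two dates."""
    days = (thru_date - from_date).days + 1
    return sum(
        (from_date + timedelta(d)).weekday() < 5  # 0=Mon .. 4=Fri
        for d in range(days)
    )

# Mon 2013-08-05 .. Sun 2013-08-11 contains exactly 5 weekdays
print(weekday_count(date(2013, 8, 5), date(2013, 8, 11)))  # 5
```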
The count is inclusive, meaning it includes `FromDate` and `ThruDate`. This query assumes that your dates don't have a time component; if they do you'll need to `TRUNC` the date columns in the subquery. | You could do it the following way :
Let's say we want to know how many weekdays there are between `start_date='01.08.2013'` and `end_date='04.08.2013'`. In this example `start_date` and `end_date` are string literals. If your `start_date` and `end_date` are of the `date` datatype, the `TO_DATE()` function won't be needed:
```
select count(*) as num_of_weekdays
from ( select level as dnum
from dual
connect by (to_date(end_date, 'dd.mm.yyyy') -
to_date(start_date, 'dd.mm.yyyy') + 1) - level >= 0) s
where to_char(sysdate + dnum, 'DY',
'NLS_DATE_LANGUAGE=AMERICAN') not in ('SUN', 'SAT')
```
Result:
```
num_of_weekdays
--------------
2
``` | Oracle SQl Dev, how to calc num of weekdays between 2 dates | [
"",
"sql",
"oracle",
"date",
"count",
""
] |
I have two tables Table1 and Table2.
I use an insert on Table2 like below:
```
insert into table2
(colOne, colTwo, colThree) //and many more cols
values
(1, norman, US) //and many more values that come from a form and not table1
```
I'd like the insert to succeed only if values (1,norman, US) are present in Table1. Values (1,Norman,US) come from a form. How can I do something like this.
At present I use two steps to do this: one to check whether the values exist, and a second to perform the insert. | You may use an `INSERT INTO... SELECT... WHERE`
Something like this:
```
insert into table2 (col1, col2, col3)
select (1, 'norman', 'US') from Table1 t1 -- where 1, 'norman', 'US' are your variables from a Form
where t1.id=1 and t1.name = 'norman' and t1.country = 'US' -- same
```
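The pattern is easy to verify with an in-memory database; a sketch with Python's sqlite3 (table and column names copied from the question; note the select list drops the parentheses there):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, name TEXT, country TEXT);
    CREATE TABLE table2 (colOne INTEGER, colTwo TEXT, colThree TEXT);
    INSERT INTO table1 VALUES (1, 'norman', 'US');
""")

def insert_if_in_table1(vals):
    # Insert into table2 only when the same values exist as a row in table1
    conn.execute(
        """INSERT INTO table2 (colOne, colTwo, colThree)
           SELECT ?, ?, ? FROM table1 t1
           WHERE t1.id = ? AND t1.name = ? AND t1.country = ?""",
        vals + vals,
    )

insert_if_in_table1((1, 'norman', 'US'))   # present -> inserted
insert_if_in_table1((9, 'nobody', 'FR'))   # absent  -> no-op
total = conn.execute("SELECT COUNT(*) FROM table2").fetchone()[0]
print(total)  # 1
```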
little [SqlFiddle](http://sqlfiddle.com/#!2/53356/1) demo for "select whatever I want". | You can use this method :
```
INSERT INTO table2
(colOne, colTwo, colThree)
SELECT colOne, colTwo, colThree
FROM
(SELECT 1 AS colOne,'norman' AS colTwo,'US' AS colThree
UNION
SELECT 2,'sabt','US'
UNION
SELECT 3,'ebi','US'
)p
WHERE
EXISTS (
SELECT 1 FROM table1
WHERE table1.colOne = p.colOne AND
table1.colTwo =p.colTwo AND
table1.colThree =p.colThree
)
```
Good luck. | Insert into Table2 only if values being inserted are found in Table1 | [
"",
"mysql",
"sql",
""
] |
I'm working on an application that inserts data in bulk and in order to reduce the number of queries I wanted to use Table Valued parameters.
Example:
```
var sql = require('node-sqlserver');
var connectionString = 'Driver={SQL Server Native Client 11.0};server=tcp:serverName.database.windows.net,1433;UID=user@serverName;PWD=password;Database={databaseName};Encrypt=yes;Connection Timeout=30;';
sql.open(connectionString, function(err, conn) {
if(err) {
return console.error('could not connect to sql', err);
}
var tableValuedObject = ???;
var query = 'usp_InsertSomeTable ?';
conn.query(query, tableValuedObject, function(err, result) {
if(err) {
return console.error('error running insert', err);
}
});
});
```
Where `usp_InsertSomeTable` is defined as
```
CREATE PROCEDURE usp_InsertSomeTable
@TVP SomeTableType READONLY
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO SomeTable (Data) SELECT Data FROM @TVP;
END
```
I have my table valued object defined as
```
CREATE TYPE SomeTableType AS TABLE
(
Data VARCHAR(50)
);
```
What structure should the JavaScript object have or has that not been implemented in the [Node-SqlServer](https://github.com/WindowsAzure/node-sqlserver) project?
Edit 1:
using `var tableValuedObject = { Data: 'Hello World' };` produces an error:
```
node-sqlserver\lib\sql.js:3254: Uncaught Error: [msnodesql] Invalid parameter(s) passed to function query or queryRaw.
```
Edit 2:
using `var tableValuedObject = [{ Data: 'Hello World'}];` produces an error:
```
error running query { [Error: IMNOD: [msnodesql] Parameter 1: Invalid parameter type] sqlstate: 'IMNOD', code: -1 }
``` | If your Node.js application is running on Windows (e.g. in Windows Azure Web Sites), a convenient way of running any SQL commands is to use ADO.NET from .NET Framework within a Node.js process via Edge.js (<http://tjanczuk.github.io/edge>). Edge.js allows you to run .NET and Node.js in-process.
There is also edge-sql, an Edge.js extension that exposes basic CRUD operations to Node.js using ADO.NET and Edge (<http://tomasz.janczuk.org/2013/06/access-sql-azure-from-nodejs-app.html>). While current functionality of edge-sql will not allow you to accomplish what you need here, the implementation of edge-sql itself may be a good starting point for your own (<https://github.com/tjanczuk/edge-sql>). I also do take contributions to edge-sql if you feel like enhancing its feature set. | The parameters to the function `query` have to be passed as an array (see example [here](http://blogs.msdn.com/b/sqlphp/archive/2012/06/08/introducing-the-microsoft-driver-for-node-js-for-sql-server.aspx)). Try calling it as such, it should work:
```
sql.open(connectionString, function(err, conn) {
if(err) {
return console.error('could not connect to sql', err);
}
var tableValuedObject = { Data: 'Hello World' };
var query = 'usp_InsertSomeTable ?';
conn.query(query, [tableValuedObject], function(err, result) {
if(err) {
return console.error('error running insert', err);
}
});
});
``` | How to insert table valued parameters into Sql Azure from Node? | [
"",
"sql",
"node.js",
"azure",
"azure-sql-database",
"azure-mobile-services",
""
] |
I'm trying to turn `object_type,ABC,00,DEF,XY` string into `ABC-00-DEF-XY-`
Here's what I've got, I'm wondering if there is a more efficient way?
```
CONCAT(
REPLACE(
SUBSTR(a.object_name,
INSTR(a.object_name, ',',1,1)+1,
INSTR(a.object_name, ',',1,2)+1
),',','-'
),'-'
)
```
Clarification: I need to strip off everything up to and including the first comma, replace all remaining commas with dashes, and then add a dash onto the end. | Try this
```
replace(substr(a.object_name,instr(a.object_name,',',1,1) + 1),',','-') ||'-'
``` | [Regexp\_replace()](http://docs.oracle.com/cd/B14117_01/server.101/b10759/functions115.htm#sthref1608) regular expression function can come in handy in this situation as well:
```
select ltrim(
regexp_replace( col
, '([^,]+)|,([^,]+)', '\2-'
)
, '-'
) as res
from t1
```
Result:
```
RES
--------------
ABC-00-DEF-XY-
```
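The substr/replace variant is portable enough to sanity-check outside Oracle; here is a sketch using Python's sqlite3 (sqlite's `instr` only locates the first occurrence, which is all this expression needs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
s = "object_type,ABC,00,DEF,XY"
# Drop everything through the first comma, swap the remaining
# commas for dashes, then append a trailing dash.
result = conn.execute(
    "SELECT replace(substr(?1, instr(?1, ',') + 1), ',', '-') || '-'",
    (s,),
).fetchone()[0]
print(result)  # ABC-00-DEF-XY-
```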
[**SQLFiddle Demo**](http://sqlfiddle.com/#!4/06521/1) | Best way to parse and concatenate a string with SQL | [
"",
"sql",
"oracle11g",
""
] |
I have the following scenario
pid and month form a composite primary key.
```
pid month amount
1 1 10
2 2 15
1 2 20
1 3 10
3 3 4
2 3 6
```
Now the column to be generated with the table will be like this
```
pid month amount sum
1 1 10 10
2 2 15 15
1 2 20 30
1 3 10 40
3 3 4 4
2 3 6 21
```
What should the query be? | This query will do the trick:
```
SELECT t1.*, sum(t2.amount)
FROM Table1 t1
INNER JOIN Table1 t2 ON t1.pid = t2.pid AND t1.month >= t2.month
GROUP BY t1.pid, t1.month, t1.amount
```
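The self-join approach is standard SQL, so it can be sanity-checked with an in-memory database; a sketch in Python's sqlite3 reproducing the expected running totals from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (pid INTEGER, month INTEGER, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, 1, 10), (2, 2, 15), (1, 2, 20), (1, 3, 10), (3, 3, 4), (2, 3, 6)],
)
# Each row joins to all earlier-or-equal months for the same pid,
# so SUM(t2.amount) is the cumulative total.
rows = conn.execute(
    """SELECT t1.pid, t1.month, t1.amount, SUM(t2.amount) AS running
       FROM sales t1
       JOIN sales t2 ON t1.pid = t2.pid AND t1.month >= t2.month
       GROUP BY t1.pid, t1.month, t1.amount
       ORDER BY t1.pid, t1.month"""
).fetchall()
print(rows)
# [(1, 1, 10, 10), (1, 2, 20, 30), (1, 3, 10, 40),
#  (2, 2, 15, 15), (2, 3, 6, 21), (3, 3, 4, 4)]
```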
See SQLFIDDLE : <http://www.sqlfiddle.com/#!3/db350/7/0> | If using SQL Server 2012:
```
SELECT *,SUM(amount) OVER(PARTITION BY pid ORDER BY month ROWS UNBOUNDED PRECEDING)'Total'
FROM YourTable
``` | How to get the cumulative column from my sales table? | [
"",
"sql",
"sql-server",
""
] |
I have three columns in a table: id, streetname, count. More than one streetname can be assigned to the same id. Count tells how often the respective street is assigned to the id. How can I get just the id and the streetname with the highest count?
Example table:
```
id streetname count
1 street1 80
1 street2 60
1 street3 5
2 street4 10
2 street5 6
```
Result should be like this:
```
id streetname
1 street1
2 street4
```
Thanks in advance! | You did not specify what database you are using but you should be able to use the following:
```
select t1.id, t1.streetname, t1.count
from yourtable t1
inner join
(
select id, max(count) max_count
from yourtable
group by id
) t2
on t1.id = t2.id
and t1.count = t2.max_count
```
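A quick way to check this greatest-per-group join against the sample data is with Python's sqlite3 (the column is renamed to `cnt` here to sidestep the reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE streets (id INTEGER, streetname TEXT, cnt INTEGER)")
conn.executemany(
    "INSERT INTO streets VALUES (?, ?, ?)",
    [(1, "street1", 80), (1, "street2", 60), (1, "street3", 5),
     (2, "street4", 10), (2, "street5", 6)],
)
# Join each row to its group's maximum; only the top row per id survives.
rows = conn.execute(
    """SELECT t1.id, t1.streetname
       FROM streets t1
       JOIN (SELECT id, MAX(cnt) AS max_cnt FROM streets GROUP BY id) t2
         ON t1.id = t2.id AND t1.cnt = t2.max_cnt
       ORDER BY t1.id"""
).fetchall()
print(rows)  # [(1, 'street1'), (2, 'street4')]
```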
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/17dae/1). Note, you will have to escape the `count` column name using backticks for MySQL or whatever character your database uses to escape reserved words. My suggestion would be to avoid using reserved words for column and table names. | Try this
```
SELECT T1.id, T1.streetname FROM TableName T1
INNER JOIN
(
SELECT id, MAX(count) maxcnt FROM TableName
GROUP BY id
) T2
ON T1.id= T2.id AND T1.count = T2.maxcnt
```
[**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!6/ddd4c/4) | SQL SELECT MAX COUNT | [
"",
"sql",
"select",
"count",
"max",
""
] |
I'm having trouble figuring out how to replace a value on a stored procedure result based on values from another table. I have a table [LOG] that is formatted as such:
```
TIME STAMP, TAG, DESCRIPTION, EVENTCODE, SUBEVENTCODE
30-Aug-2013 10:14:10, TAG X, HI TEMP FAULT, 3, 16
30-Aug-2013 10:12:10, TAG Y, HI PRESS FAULT, 3, 16
...
```
And another table [EVENTS] which basically describes what the EVENTCODE is:
```
EVENT, DESCRIPTION
1, FAULT
2, LOGIC
3, ALARM
```
I would like to have the stored procedure retrieve 2000 entries (rows) of the 1st table and, instead of showing EVENTCODE as a number, display the description contained in the 2nd table on the result.
e.g:
```
TIME STAMP, TAG, DESCRIPTION, EVENTCODE, SUBEVENTCODE
30-Aug-2013 10:14:10, TAG X, HI TEMP FAULT, ALARM, 16
```
Reason is I have another software that interacts with the result of the stored procedure, and wouldn't like to create another table to hold these results within the database.
Here is what the stored procedure looks like so far:
```
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[Get2kEvents]
AS
BEGIN
SELECT TOP 2000 CAST(datepart(day,TIME_STAMP) as char(2)) + '-' +
CAST(datename(month,TIME_STAMP) as char(3)) + '-' +
CAST(datepart(year,TIME_STAMP) as char(4))+ ' ' + CONVERT(varchar,TIME_STAMP,108)
as 'TIME STAMP',
[TAG],
[DESCRIPTION],
[EVENTCODE],
[SUBEVENTCODE]
FROM [Arc_DB].[dbo].[LOG]
ORDER BY TIME_STAMP DESC
END
GO
```
I appreciate your assistance. Sorry if this is too basic, but I wasn't able to figure out a solution for this while browsing this and other websites.
Cheers.
TM | You want to `join` the two tables, i.e.:
```
SELECT TOP 2000 CAST(datepart(day,TIME_STAMP) as char(2)) + '-' +
CAST(datename(month,TIME_STAMP) as char(3)) + '-' +
CAST(datepart(year,TIME_STAMP) as char(4))+ ' ' + CONVERT(varchar,TIME_STAMP,108)
as 'TIME STAMP',
[TAG],
log.[DESCRIPTION],
events.[DESCRIPTION] AS [EVENTCODE],
[SUBEVENTCODE]
FROM [Arc_DB].[dbo].[LOG]
inner join events on log.eventcode = events.event
ORDER BY TIME_STAMP DESC
``` | Use a `JOIN`:
```
SELECT TOP 2000 CAST(datepart(day,TIME_STAMP) as char(2)) + '-' +
CAST(datename(month,TIME_STAMP) as char(3)) + '-' +
CAST(datepart(year,TIME_STAMP) as char(4))+ ' ' + CONVERT(varchar,TIME_STAMP,108)
as 'TIME STAMP',
[TAG],
L.[DESCRIPTION],
E.[DESCRIPTION],
[SUBEVENTCODE]
FROM [Arc_DB].[dbo].[LOG] L
INNER JOIN [Arc_DB].[dbo].[EVENTS] E ON E.EVENT = L.EVENTCODE
ORDER BY TIME_STAMP DESC
``` | SQL Stored Procedure - replace a result column with value from existing table column | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
Display the list of employees from emp ordered by deptno, where NULL deptno should appear first, then rows with deptno=2, and the rest in ascending order.
I tried the following query with DECODE:
```
SELECT empno, ename, deptno
FROM emp
ORDER BY DECODE (deptno, NULL, 0, 2, 1, 3);
```
but I'm unable to achieve it through analytic functions.
Someone please help me solve this using analytical functions!! | I assume that you want to assign `row_number()` based on the ordering, because the analytic functions do not "order" tables. Did you try this?
```
SELECT empno, ename, deptno,
row_number() over (ORDER BY DECODE (deptno, NULL, 0, 2, 1, 3)) as seqnum
FROM emp ;
```
You could also do this without analytic functions at all:
```
select e.*, rownum as seqnum
from (SELECT empno, ename, deptno
FROM emp
ORDER BY DECODE (deptno, NULL, 0, 2, 1, 3)
) e
``` | Even RANK() or DENSE\_RANK() can be used as an analytic function. The ORDER BY clause should contain an additional column (here venue\_id) to order the "else" branch of the DECODE:
```
SELECT course_id, course_title, venue_id
FROM ha_courses
ORDER BY ROW_NUMBER () OVER (ORDER BY DECODE (venue_id, NULL, 0, 2, 1, 3)),
         venue_id;
``` | Custom ordering using Analytical Functions | [
"",
"sql",
"oracle",
"oracle11g",
"oracle10g",
""
] |
First, I am aware that this question has been posted generally [Equals(=) vs. LIKE](https://stackoverflow.com/questions/543580/equals-vs-like).
Here, I am querying date-type data on an Oracle database, and I found the following: when I write the select statement in this way:
```
SELECT ACCOUNT.ACCOUNT_ID, ACCOUNT.LAST_TRANSACTION_DATE
FROM ACCOUNT
WHERE ACCOUNT.LAST_TRANSACTION_DATE LIKE '30-JUL-07';
```
I get all the rows I'm looking for, but when I use the equals sign `=` instead:
```
SELECT ACCOUNT.ACCOUNT_ID, ACCOUNT.LAST_TRANSACTION_DATE
FROM ACCOUNT
WHERE ACCOUNT.LAST_TRANSACTION_DATE = '30-JUL-07';
```
I get nothing, even though nothing is different except the equals sign. Is there an explanation for this? | Assuming `LAST_TRANSACTION_DATE` is a `DATE` column (or `TIMESTAMP`), both versions are very bad practice.
In both cases the `DATE` column will implicitly be converted to a character literal based on the current NLS settings. That means with different clients you will get different results.
When using date literals ***always*** use `to_date()` with(!) a format mask or use an ANSI date literal. That way you compare dates with dates not strings with strings. So for the equal comparison you should use:
```
LAST_TRANSACTION_DATE = to_date('30-JUL-07', 'dd-mon-yy')
```
Note that using 'MON' can still lead to errors with different NLS settings (`'DEC'` vs. `'DEZ'` or `'MAR'` vs. `'MRZ'`). It is much less error prone using month numbers (and four digit years):
```
LAST_TRANSACTION_DATE = to_date('30-07-2007', 'dd-mm-yyyy')
```
or using an ANSI date literal
```
LAST_TRANSACTION_DATE = DATE '2007-07-30'
```
Now the reason why the above query is very likely to return nothing is that in Oracle `DATE` columns include the time as well. The above date literals implicitly contain the time `00:00`. If the time in the table is different (e.g. `19:54`) then of course the dates are not equal.
To workaround this problem you have different options:
1. use `trunc()` on the table column to "normalize" the time to `00:00`
`trunc(LAST_TRANSACTION_DATE) = DATE '2007-07-30'`
this will however prevent the usage of an index defined on `LAST_TRANSACTION_DATE`
2. use `between`
`LAST_TRANSACTION_DATE between to_date('2007-07-30 00:00:00', 'yyyy-mm-dd hh24:mi:ss') and to_date('2007-07-30 23:59:59', 'yyyy-mm-dd hh24:mi:ss')`
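The underlying pitfall (a stored time such as 19:54 versus the implicit midnight of a date literal) can be illustrated with a small Python sketch:

```python
from datetime import datetime

stored = datetime(2007, 7, 30, 19, 54)   # what the DATE column actually holds
literal = datetime(2007, 7, 30)          # '30-JUL-07' implies 00:00:00

print(stored == literal)                 # False: the times differ
# trunc()-style comparison: compare the date parts only
print(stored.date() == literal.date())   # True
```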
The performance problem of the first solution could be worked around by creating an index on `trunc(LAST_TRANSACTION_DATE)` which could be used by that expression. But the expression `LAST_TRANSACTION_DATE = '30-JUL-07'` prevents an index usage as well because internally it's processed as `to_char(LAST_TRANSACTION_DATE) = '30-JUL-07'`
The important things to remember:
1. Never, ever rely on implicit data type conversion. It *will* give you problems at some point. Always compare the correct data types
2. Oracle `DATE` columns always contain a time which is part of the comparison rules. | You should not compare a date to a string directly. You rely on [implicit conversions](http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements002.htm#i53062), the rules of which are difficult to remember.
Furthermore, your choice of date format is not optimal: years have four digits (Y2K bug?), and not all languages have the seventh month of the year named `JUL`. You should use something like `YYYY/MM/DD`.
Finally, dates in Oracle are points in time precise to the second. **All dates have a time component**, even if it is `00:00:00`. When you use the `=` operator, Oracle will compare the date and time for dates.
Here's a test case reproducing the behaviour you described:
```
SQL> create table test_date (d date);
Table created
SQL> alter session set nls_date_format = 'DD-MON-RR';
Session altered
SQL> insert into test_date values
2 (to_date ('2007/07/30 11:50:00', 'yyyy/mm/dd hh24:mi:ss'));
1 row inserted
SQL> select * from test_date where d = '30-JUL-07';
D
-----------
SQL> select * from test_date where d like '30-JUL-07';
D
-----------
30/07/2007
```
When you use the `=` operator, Oracle will convert the constant string `30-JUL-07` to a date and compare the value with the column, like this:
```
SQL> select * from test_date where d = to_date('30-JUL-07', 'DD-MON-RR');
D
-----------
```
When you use the `LIKE` operator, Oracle will convert the column to a string and compare it to the right-hand side, which is equivalent to:
```
SQL> select * from test_date where to_char(d, 'DD-MON-RR') like '30-JUL-07';
D
-----------
30/07/2007
```
Always compare dates to dates and strings to strings. Related question:
* [How to correctly handle dates in queries constraints](https://stackoverflow.com/q/4876771/119634) | Equals(=) vs. LIKE for date data type | [
"",
"sql",
"oracle",
""
] |
I'm wondering: if, on a relational table, I set the two columns below as a composite PRIMARY KEY, does that automatically ensure that all entries are unique?
```
CREATE TABLE UserHasSecurity
(
userID int REFERENCES Users(userID) NOT NULL,
securityID int REFERENCES Security(securityID) NOT NULL,
PRIMARY KEY(userID,securityID)
)
```
or do I need to be more explicit like this...
```
CREATE TABLE UserHasSecurity
(
userID int REFERENCES Users(userID) NOT NULL,
securityID int REFERENCES Security(securityID) NOT NULL,
PRIMARY KEY(userID,securityID),
UNIQUE(userID,securityID)
)
``` | You don't need UNIQUE here. PRIMARY KEY will make sure there is no duplicate (userID,securityID) pairs. | A `PRIMARY KEY` has to be unique, so you only need to declare as a primary key. The underlying index is unique by definition.
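Either way, the composite primary key by itself rejects duplicate pairs; a quick illustration with Python's sqlite3 (the foreign-key references are omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE UserHasSecurity (
           userID INTEGER NOT NULL,
           securityID INTEGER NOT NULL,
           PRIMARY KEY (userID, securityID)
       )"""
)
conn.execute("INSERT INTO UserHasSecurity VALUES (1, 1)")
conn.execute("INSERT INTO UserHasSecurity VALUES (1, 2)")      # fine: new pair
try:
    conn.execute("INSERT INTO UserHasSecurity VALUES (1, 1)")  # duplicate pair
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```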
[Creating Unique Indexes](http://technet.microsoft.com/en-us/library/ms175132%28v=sql.105%29.aspx) | In SQL does a Primary Key in a create table enforce uniqueness? | [
"",
"sql",
"sql-server",
"unique-constraint",
"composite-primary-key",
""
] |
I have an insert query like this
```
INSERT INTO tbl_LandRigs (Company,Rig,[RigType] ,[DrawWorks],[TopDrive] ) VALUES
(@Company,@Rig,@RigType ,@DrawWorks,@TopDrive )
```
What I need is to insert data into the `TopDrive` column only if the data is numeric; otherwise I want to insert 0 into that column. Is this possible in SQL? | ```
INSERT INTO tbl_LandRigs ([Company], [Rig], [RigType], [DrawWorks], [TopDrive])
VALUES (@Company, @Rig, @RigType, @DrawWorks, CASE WHEN ISNUMERIC(@TopDrive) = 1 THEN @TopDrive ELSE 0 END)
```
Here we are putting both [`CASE`](http://technet.microsoft.com/en-us/library/ms181765.aspx) and [`ISNUMERIC`](http://technet.microsoft.com/en-us/library/ms186272.aspx) to good use.
Also, it usually makes it easier for other developers (or yourself at a later date) to be consistent with your usage of brackets and white space, even in SQL code.
As mentioned by others, `isnumeric` may give you some [false positives](https://www.simple-talk.com/blogs/2011/01/13/isnumeric-broken-only-up-to-a-point/). If possible, it's often best to sanitize the input before it gets to your database, perhaps by not using untyped data in this case. Only you know your inputs, but if you anticipate a problem, [there are some options](https://stackoverflow.com/questions/312054/efficient-isnumeric-replacements-on-sql-server). | There is an IsNumeric() function that you can call in SQL Server. Careful of scientific notation though... | Insert into table only if numeric in SQL | [
"",
"sql",
"sql-server-2008",
""
] |
I am currently spooling a pipe-delimited file via this SQL*Plus script:
```
set feedback off
set echo off
set verify off
set pagesize 0
set heading off
set termout off
set trim on
set wrap on
set trimspool on
set linesize 9000
spool c:\exp3.txt
select
to_char(D_DTM, 'mm-dd-yyyy hh24.mi.ss')||'|'||
DAYOFWEEK||'|'||"24HOUR"||'|'||TECHNOLOGY||'|'||VOICEDATA||'|'||MRKT_NM||'|'||REGION_NM||'|'||CLUSTER_NM||'|'||
CLUSTER2_NM||'|'||BSC_NM||'|'||BTS_ID||'|'||BSC_BTS||'|'||CSCD_ID||'|'||CSCD_NM||'|'||SECT_SEQ_ID||'|'||BND_ID||'|'||
FA_ID||'|'||ATT_CNT||'|'||AXS_F_CNT||'|'||CE_BLK_CNT||'|'||CUST_BLK_CNT||'|'||DRP_CALL_CNT||'|'||HHI_ATT_CNT||'|'||
HHI_BAFRM_CNT||'|'||HHI_CALL_SETUP_SXS_CNT||'|'||MBL_ORG_CNT||'|'||MBL_TER_CNT||'|'||NON_BTS_EQ_BLK_CNT||'|'||
PRIM_CALL_ERL||'|'||PWR_BLK_CNT||'|'||SFUL_CALL_CNT||'|'||SILENT_RETRY_CNT||'|'||T1_BHL_BLK_CNT||'|'||WCD_BLK_CNT||'|'||
SMS_ATT_CNT||'|'||SMS_SXS_CNT||'|'||CTRL_CH_USG_CNT||'|'||CTRL_SL_USG_CNT||'|'||DO_SECT_PHL_FWD_PS_TMS||'|'||
DO_SECT_PHL_REV_PS_TMS||'|'||EUSR_CONN_SETUP_ATT_CNT||'|'||EUSR_CONN_SETUP_F_CNT||'|'||FWD_D_TRANSD_QTY||'|'||
MAC_ID_BLK_CNT||'|'||MAC_ID_UTIL_RT||'|'||MS_RQST_CNT||'|'||MS_RQST_D_QTY||'|'||NORM_CONN_CLS_CNT||'|'||
NORM_SESS_RLS_CNT||'|'||RAB_SET_CNT||'|'||RCVD_RAB_CNT||'|'||REV_AIR_PER_BAD_FRM_CNT||'|'||REV_AIR_PER_TRSF_D_QTY||'|'||
REV_D_TRANSD_QTY||'|'||RNC_BLK_CNT||'|'||SESS_ATT_CNT||'|'||SESS_CONF_SXS_CNT||'|'||SL_USG_CNT||'|'||MAX_USER_CNT||'|'||
AVG_USER_CNT||'|'||MOU_TMS||'|'
from ds3r_fh_all_fa_lvl_kpi
where D_DTM = to_date('8/19/2013', 'mm/dd/yyyy');
```
but I can't figure out how to include the column names as the header in the file as well. How do I do that? | You should set the `pagesize` to a non-zero number that way your headers will be displayed. The number you set it to may depend on the number of rows that you have. If you set it 100 as per the following line:
`set pagesize 100`
then 100 rows will be printed under the headers, and the column headers will be repeated for the next 100 rows. The maximum value is 50000. Also set `heading` to on with
`set heading on`
otherwise it will not display the headers, despite the pagesize being non-zero. Also remember when you are selecting columns in that way they will not be padded, so the headings might appear out of place. | ```
set heading on
set pagesize 0 embedded on
```
* Includes header
* Infinite page size | spool with column headers in pipe delimited sqlplus | [
"",
"sql",
"sqlplus",
"spool",
""
] |
I have the following cursor in SQL:
```
DECLARE @Script varchar(max)
DECLARE @getScript CURSOR
SET @getScript = CURSOR FOR
SELECT [Script]
FROM ScriptTable
OPEN @getScript
FETCH NEXT
FROM @getScript INTO @Script
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
EXEC(@Script) --Working part. This executes the query stored in the Script column.
--For example INSERT INTO zTest VALUES(VAL1, VAL2, etc etc..)
UPDATE ScriptTable
SET DateDone = GETDATE(), IsDone = 1, Err = NULL
FETCH NEXT
FROM @getScript INTO @Script
END TRY
BEGIN CATCH
DECLARE @Err varchar(max)
SET @Err = ERROR_MESSAGE()
UPDATE ScriptTable
SET DateDone = GETDATE(), Err = @Err
END CATCH
END
CLOSE @getScript
DEALLOCATE @getScript
```
**Q1:**
Currently, I am getting the values inserted into the "zTest" table specified in my comments when I execute EXEC(@Script).
However, the second part, the "UPDATE ScriptTable" statement, updates all the rows in my ScriptTable. I know I need to identify the row that the cursor is currently on. The question is, how can I do that? I want to update only the appropriate row, move to the next, then update that one.
**Q2:**
My next question is, in the CATCH block, I think I am creating an infinite loop as soon as there is an error in one of the queries in the Script Column of the ScriptTable as when I look at results, it just keeps going and going. I don't want to BREAK; the procedure as I want to write an error to the Err column and continue with the next rows till it reaches the end of @Script, then stop.
IDENT\_CURRENT, Scope\_Identity etc doesn't work because I haven't inserted anything into the Scripts Table.
Please help. | Regarding Q1, you have to have a primary key in order to use the cursor for updating (though there are workarounds).
In general you'll want syntax something like this:
```
update ScriptTable
SET DateDone = GETDATE(), IsDone = 1, Err = NULL
WHERE CURRENT OF @getScript
```
Regarding Q2, it makes sense that it's an infinite loop. When you use the `TRY` and `CATCH` clauses and a statement fails, none of the remaining statements in the `TRY` block execute.
Therefore the `FETCH NEXT` gets skipped, and on the next iteration the same error happens again.
Try to make sure there is always a `FETCH NEXT` in the loop.
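To see the control flow in isolation, here is a minimal Python/sqlite3 sketch of the same loop (the `ScriptTable` layout with an `ID` column is assumed): each row is marked individually, and a failing script records its error while the loop still advances.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ScriptTable (ID INTEGER PRIMARY KEY, Script TEXT, IsDone INT DEFAULT 0, Err TEXT);
    CREATE TABLE zTest (val INT);
    INSERT INTO ScriptTable (Script) VALUES
        ('INSERT INTO zTest VALUES (1)'),
        ('INSERT INTO zTest VALUES (oops)'),  -- deliberately broken
        ('INSERT INTO zTest VALUES (3)');
""")

# fetch the work list up front, then process row by row
for row_id, script in conn.execute("SELECT ID, Script FROM ScriptTable").fetchall():
    try:
        conn.execute(script)
        # Q1: the WHERE clause marks only the row being processed
        conn.execute("UPDATE ScriptTable SET IsDone = 1, Err = NULL WHERE ID = ?", (row_id,))
    except sqlite3.Error as exc:
        # Q2: record the error, then fall through so the next row is still processed
        conn.execute("UPDATE ScriptTable SET Err = ? WHERE ID = ?", (str(exc), row_id))

done = [r[0] for r in conn.execute("SELECT IsDone FROM ScriptTable ORDER BY ID")]
```

The broken middle script is skipped with its error recorded, and the remaining scripts still run.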
Hope this helps you out a bit. | **Here is my final code if anyone is interested:**
```
DECLARE @Script varchar(max)
DECLARE @getScript CURSOR
SET @getScript = CURSOR FOR
SELECT [Script]
FROM ScriptControl
OPEN @getScript
FETCH NEXT
FROM @getScript INTO @Script
DECLARE @Counter int = 1
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
EXEC(@Script)
UPDATE ScriptControl
SET DateDone = GETDATE(), IsDone = 1, Error = NULL WHERE ID = @Counter
FETCH NEXT
FROM @getScript INTO @Script
SET @Counter = (@Counter + 1)
END TRY
BEGIN CATCH
DECLARE @Err varchar(max)
SET @Err = ERROR_MESSAGE()
UPDATE ScriptControl
SET DateDone = NULL, Error = @Err, IsDone = 0 WHERE CURRENT OF @getScript
FETCH NEXT
FROM @getScript INTO @Script
SET @Counter = (@Counter + 1)
END CATCH
END
CLOSE @getScript
DEALLOCATE @getScript
``` | Use "WHERE CURRENT OF" clause to update only the specific row on which the CURSOR is positioned in SQL | [
"",
"sql",
"sql-server",
"t-sql",
"cursor",
""
] |
I have this table:
```
Person Job
PersonA XX
PersonA XX
PersonA XX
PersonB XX
PersonB XX
PersonB YY
PersonB ZZ
PersonC XX
PersonC XX
PersonA XX
PersonA YY
PersonB ZZ
...
```
Now, I want the output to be something like this:
```
Job PersonA PersonB Person C
XX 4 2 2
YY 1 1 0
ZZ 0 2 0
```
So far I have this:
```
SELECT DISTINCT Person,
(SELECT COUNT(Job)
FROM dbo.TableName
GROUP BY Job)
FROM dbo.ExcelImport
```
No luck :( | You should try something like this:
```
select
    Job,
    SUM(case when Person = 'PersonA' then 1 else 0 end) as PersonA,
    SUM(case when Person = 'PersonB' then 1 else 0 end) as PersonB,
    SUM(case when Person = 'PersonC' then 1 else 0 end) as PersonC
from TableName
group by Job
``` | It is an easy task for the `pivot` operator:
```
select *
from (select Job as Job2, * from [TableName]) t
pivot (count(Job2) for Person in ([PersonA],[PersonB],[PersonC])) p
``` | Groupby using PIVOT | [
"",
"sql",
"sql-server-2008",
"pivot",
""
] |
Table 1:
```
TicketNumber | Rules
---------------------------
PR123 | rule_123
PR123 | rule_234
PR123 | rule_456
PR999 | rule_abc
PR999 | rule_xyz
```
---
Table2:
```
TicketNumber | Rules
---------------------------
PR123 | rule_123
PR123 | rule_234
PR999 | rule_abc
```
NOTE: Both tables have the same structure: same column names but different count.
NOTE: Both tables have same set of TicketNumber values
CASE 1:
If I need ticket and rules count of each ticket from table1, the query is:
```
Select [TicketNo], COUNT([TicketNo]) AS Rules_Count from [Table1] group by TicketNo
```
This will give me output in format :
```
ticketNumber | Rules_Count
---------------------------
PR123 | 3
PR999          | 2
```
---
CASE 2: (NEED HELP WITH THIS)
Now, the previous query gets the ticket and the count of the ticket of only 1 table. I need the count of the same ticket (since both have same set of tkt nos) in table2 also.
I need result in this way:
```
ticketNumber | Count(ticketNumber) of table1 | Count(ticketNumber) of table2
---------------------------------------------------------------------------------
PR123 | 3 | 2
PR999 | 2 | 1
```
Both Table1 and table2 have the same set of ticket nos but different counts
How do I get the result as shown above? | A simpler solution from a "statement point of view" (without `COALESCE`, which may be harder to read).
Pay attention to performance:
```
Select T1.TicketNumber,T1.Rules_Count_1,T2.Rules_Count_2
FROM
(
Select [TicketNumber], COUNT([TicketNumber]) AS Rules_Count_1
from [Table1] T1
group by TicketNumber) T1
INNER JOIN
(
Select [TicketNumber], COUNT([TicketNumber]) AS Rules_Count_2
from [Table2] T2
group by TicketNumber
) T2
on T1.TicketNumber = T2.TicketNumber
```
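The shape of this join of two aggregates is easy to cross-check on another engine; here is a quick Python/sqlite3 sketch with the question's sample data (count aliases shortened to `cnt1`/`cnt2`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (TicketNumber TEXT, Rules TEXT);
    CREATE TABLE Table2 (TicketNumber TEXT, Rules TEXT);
    INSERT INTO Table1 VALUES ('PR123','rule_123'),('PR123','rule_234'),
        ('PR123','rule_456'),('PR999','rule_abc'),('PR999','rule_xyz');
    INSERT INTO Table2 VALUES ('PR123','rule_123'),('PR123','rule_234'),('PR999','rule_abc');
""")

# aggregate each table first, then join the two per-ticket counts
rows = conn.execute("""
    SELECT t1.TicketNumber, t1.cnt1, t2.cnt2
    FROM (SELECT TicketNumber, COUNT(*) AS cnt1 FROM Table1 GROUP BY TicketNumber) t1
    JOIN (SELECT TicketNumber, COUNT(*) AS cnt2 FROM Table2 GROUP BY TicketNumber) t2
      ON t1.TicketNumber = t2.TicketNumber
    ORDER BY t1.TicketNumber
""").fetchall()
```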
[SQL Fiddle Demo](http://sqlfiddle.com/#!6/23d4a/3) | You can do this with a `full outer join` after aggregation (or an inner join if you really know that both tables have the same tickets:
```
select coalesce(t1.TicketNo, t2.TicketNo) as TicketNo,
coalesce(t1.Rules_Count, 0) as t1_Rules_Count,
coalesce(t2.Rules_Count, 0) as t2_Rules_Count
from (Select [TicketNo], COUNT([TicketNo]) AS Rules_Count
from [Table1]
group by TicketNo
) t1 full outer join
(Select [TicketNo], COUNT([TicketNo]) AS Rules_Count
from [Table2]
group by TicketNo
) t2
on t1.TicketNo = t2.TicketNo;
``` | SQL Query - Get count of two columns from two tables | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm importing a fairly hefty amount of data into a SQL Server database. The source data originates from PgSql (including table defs), which I throw through some fairly simple regex to translate to TSql. This creates tables with no primary key.
As far as I understand, lack of a primary key/clustering index means that the data is stored in a heap.
Once the import is complete, I add PKs as follows:
```
ALTER TABLE someTable ADD CONSTRAINT PK_someTable PRIMARY KEY (id);
```
(note the lack of `CLUSTERED` keyword). What's going on now? Still a heap? What's the effect on lookup by primary key? Is this really any different to adding a standard index?
Now, say instead I add PKs as follows:
```
ALTER TABLE someTable ADD CONSTRAINT PK_someTable PRIMARY KEY CLUSTERED (id);
```
I assume this now completely restructures the table into a row based structure with more efficient lookup by PK but less desirable insertion characteristics.
Are my assumptions correct?
If my import inserts data in PK order, is there any benefit to omitting the PK in the first place? | When you execute
```
ALTER TABLE someTable ADD CONSTRAINT PK_someTable PRIMARY KEY (id);
```
if there is no clustered index on `someTable` then the PK will be a clustered PK. Otherwise, if there is a clustered index before executing `ALTER .. ADD ... PRIMARY KEY (id)` the PK will be a non-clustered PK.
**-- Test #1**
```
BEGIN TRAN;
CREATE TABLE dbo.MyTable
(
id INT NOT NULL,
Col1 INT NOT NULL,
Col2 VARCHAR(50) NOT NULL
);
SELECT i.name, i.index_id, i.type_desc
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID(N'dbo.MyTable');
/*
name index_id type_desc
---- ----------- ---------
NULL 0 HEAP
*/
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY (id);
SELECT i.name, i.index_id, i.type_desc
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID(N'dbo.MyTable');
/*
name index_id type_desc
----------- ----------- ---------
PK_MyTable 1 CLUSTERED
*/
ROLLBACK;
```
**-- Test #2**
```
BEGIN TRAN;
CREATE TABLE dbo.MyTable
(
id INT NOT NULL,
Col1 INT NOT NULL,
Col2 VARCHAR(50) NOT NULL
);
SELECT i.name, i.index_id, i.type_desc FROM sys.indexes i WHERE i.object_id = OBJECT_ID(N'dbo.MyTable');
/*
name index_id type_desc
---- ----------- ---------
NULL 0 HEAP
*/
CREATE CLUSTERED INDEX ix1
ON dbo.MyTable(Col1);
SELECT i.name, i.index_id, i.type_desc FROM sys.indexes i WHERE i.object_id = OBJECT_ID(N'dbo.MyTable');
/*
name index_id type_desc
---- ----------- ---------
ix1 1 CLUSTERED
*/
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY (id);
SELECT i.name, i.index_id, i.type_desc FROM sys.indexes i WHERE i.object_id = OBJECT_ID(N'dbo.MyTable');
/*
name index_id type_desc
---------- ----------- ------------
ix1 1 CLUSTERED
PK_MyTable 2 NONCLUSTERED
*/
ROLLBACK;
``` | In SQL Server, a primary key defaults to clustered if no clustered index exists. A clustered index really means that the "index" is not kept in a separate storage area (as is a non-clustered index), but that the index data is "interspersed" with the corresponding regular table data. If you think about this, you will realize that there can only be one clustered index.
The real advantage of a clustered index is that the data is near the index data, so you can grab both while the drive head is "in the area". A clustered index is noticeably faster than a non-clustered index when the data you are processing exhibits locality of reference -- when rows of nearly the same value tend to be read at the same time.
For example, if your primary key is SSN, you do not get a large advantage unless you are processing data that is randomly ordered with respect to SSN -- though you do get an advantage due to the nearness of data. But, if you can presort the input by SSN, a clustered key is a large advantage.
So yes, a clustered index does reorder the data so that it is comingled with the clustered index. | Does adding a primary key cause restructuring of underlying data | [
"",
"sql",
"sql-server",
"sql-server-2008",
"primary-key",
"clustered-index",
""
] |
Below is a table that is meant to show when a piece of media will play. So basically it has a start time (`starts`), the length of the track (`clip_length`), and when it ends (`ends` = `starts + clip_length`), and finally the position of the track.
```
|starts | ends |position |file_id| clip_length
|2013-08-30 22:00:00 | 2013-08-30 22:03:08 |0 |16 |00:03:08.081768
|2013-08-30 22:03:08 | 2013-08-30 22:06:33 |1 |17 |00:03:25.436485
|2013-08-30 22:06:33 | 2013-08-30 22:09:07 |2 |7 |00:02:33.79968
|2013-08-30 22:09:07 | 2013-08-30 22:12:21 |3 |3 |00:03:14.020273
|2013-08-30 22:12:21 | 2013-08-30 22:15:31 |4 |8 |00:03:10.466689
```
What I want to do is add a record at, say, `position = 2`, shown below. I have been able to increment the positions; however, the problem is that the times are all messed up.
```
|starts | ends |position |file_id|clip_length
|2013-08-30 22:00:00 | 2013-08-30 22:03:08 |0 |16 |00:03:08.081768
|2013-08-30 22:03:08 | 2013-08-30 22:06:33 |1 |17 |00:03:25.436485
|2013-08-30 22:06:33 | 2013-08-30 22:09:07 |2 |7 |00:02:33.79968
|2013-08-30 22:06:33 | 2013-08-30 22:11:03 |3 |1 |00:04:30.006958
|2013-08-30 22:09:07 | 2013-08-30 22:12:21 |4 |3 |00:03:14.020273
|2013-08-30 22:12:21 | 2013-08-30 22:15:31 |5 |8 |00:03:10.466689
```
So is it possible to use the first start time as point 00, add `clip_length` to `starts` and save it in `ends` for the first row, then for the second row use the first row's `ends` value as its `starts`, and do this recursively till the end (following the positions)?
thanks in advance.. | [SQL Fiddle](http://sqlfiddle.com/#!12/2929d/2)
```
update clip c
set
starts = s.starts,
ends = s.ends
from (
select
starts,
starts + clip_length as ends,
file_id,
position
from (
select
'2013-08-30 22:00:00'::timestamp
+ sum(clip_length) over(order by position)
- clip_length as starts,
clip_length,
file_id,
position
from clip
) s
) s
where c.file_id = s.file_id
``` | Your data model is pretty borked. You're storing at least two pieces of redundant information. You need `starts` and `file_id` and any one of `clip_length`, `ends` or `position`; they can each be calculated from each other.
Right now you are storing *redundant* data, which creates the problem you now have where the data is not internally consistent; it has conflicts within itself.
In this case it sounds like `position` is trusted and the others aren't. Here's what I'd do:
```
SELECT
(SELECT min(starts) FROM Sometable)
+ COALESCE(
sum(clip_length) OVER all_rows_but_last,
INTERVAL '0' second
)
AS starts,
(SELECT min(starts) FROM Sometable)
+ COALESCE(
sum(clip_length) OVER all_rows_but_last,
INTERVAL '0' second
) + clip_length
AS ends,
position,
clip_length
FROM Sometable
WINDOW all_rows_but_last AS (ORDER BY position ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING);
```
see: <http://sqlfiddle.com/#!12/ca5fa/1>
The principle here is to find the lowest start - the one known to be valid - and *ignore* the following ends and starts as redundant and useless. Instead, add the *running total* of the previous durations as the start, and the same running total plus the current interval as the end.
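Procedurally, the same running-total principle can be sketched in a few lines of Python (clip lengths taken from the question's first three rows): each start is the base time plus the sum of all previous durations.

```python
from datetime import datetime, timedelta
from itertools import accumulate

base = datetime(2013, 8, 30, 22, 0, 0)
lengths = [timedelta(minutes=3, seconds=8),
           timedelta(minutes=3, seconds=25),
           timedelta(minutes=2, seconds=34)]

# start of clip i = base + sum of the lengths of clips 0..i-1
offsets = [timedelta(0)] + list(accumulate(lengths))[:-1]
starts = [base + off for off in offsets]
ends = [s + l for s, l in zip(starts, lengths)]
```

Each clip's end coincides with the next clip's start, which is exactly the invariant the windowed SQL enforces.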
You'll notice that this isn't an update. That's because I think you should actually:
* change `position` to a floating point value or large integer so you don't need to re-number when you insert an entry. If you put something between `1` and `2` give it position `0.5`. Or start with `10000`, `20000` etc so you can insert `15000`.
* Get rid of the `ends` column completely. calculate it from `starts` + `clip_length`
* Move `starts` to a separate table where it's stored only once, for the series of segments. Calculate clip start times on the fly from the sum of the lengths of previous clips since the start.
* Re-define the current table as a view over the two tables described above.
That's making a lot of guesses about your data and how you're using it, but you haven't said much about what you're doing. | adding and incrementing date and time (Postgresql) | [
"",
"sql",
"postgresql",
"date",
"time",
"increment",
""
] |
I have a couple of interface elements stored in a database table which the user can filter by entering keywords (stored in another table):
Table Buttons:
```
ID Name
1 Button1
2 Button2
3 Button3
```
Table Keywords:
```
ButtonID Keyword
1 Garden
2 House
3 Garden
3 House
```
If the user enters `Garden`, the db returns `Button1 and Button3`.
If the user enters `House`, the db returns `Button2 and Button3`.
If the user enters `Garden AND House`, the db returns only `Button3`.
The last one is the problem, I managed to put together this query:
```
SELECT T.ID, T.Name
, T.Type
, T.Description
, T.Action
, T.Image
FROM Tiles T
JOIN Keywords K
ON T.ID=K.TileID
WHERE K.Keyword IN ('House', 'Garden')
```
Unfortunately, this query returns all three buttons with ANY of the provided keywords. But I want only the elements with all provided keywords which is `Button3`.
What should the query look like to achieve that?
Thanks a lot :) | ```
declare @params table (Keyword varchar(6) primary key)
insert into @params
select 'House' union all
select 'Garden'
select
b.Name
from Keywords as k
inner join Buttons as b on b.ID = k.ButtonID
where k.Keyword in (select Keyword from @params)
group by b.Name
having count(*) = (select count(*) from @params)
```
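The `GROUP BY ... HAVING COUNT(*) = <number of keywords>` technique is portable; here is a minimal Python/sqlite3 sketch with the question's data (assuming no duplicate keyword rows per button, otherwise use `COUNT(DISTINCT k.Keyword)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Buttons (ID INT, Name TEXT);
    CREATE TABLE Keywords (ButtonID INT, Keyword TEXT);
    INSERT INTO Buttons VALUES (1,'Button1'),(2,'Button2'),(3,'Button3');
    INSERT INTO Keywords VALUES (1,'Garden'),(2,'House'),(3,'Garden'),(3,'House');
""")

wanted = ('Garden', 'House')
# only buttons matching ALL requested keywords survive the HAVING filter
rows = conn.execute("""
    SELECT b.Name
    FROM Keywords k
    JOIN Buttons b ON b.ID = k.ButtonID
    WHERE k.Keyword IN (?, ?)
    GROUP BY b.Name
    HAVING COUNT(*) = ?
""", (*wanted, len(wanted))).fetchall()
```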
[**`sql fiddle demo`**](http://sqlfiddle.com/#!3/b031c/2) | Use Intersect
```
SELECT T.ID, T.Name, T.Type, T.Description, T.Action, T.Image FROM Tiles T JOIN Keywords K ON T.ID=K.TileID WHERE K.Keyword like 'House'
intersect
SELECT T.ID, T.Name, T.Type, T.Description, T.Action, T.Image FROM Tiles T JOIN Keywords K ON T.ID=K.TileID WHERE K.Keyword like 'Garden'
```
That should get you what you want. | Selecting rows from a table depending on keywords in another table | [
"",
"sql",
"sql-server",
""
] |
I am trying to call a stored procedure with a number of parameters but getting the error of
> "Procedure or function 'sp\_User\_Create' expects parameter
> '@adminUser', which was not supplied."
when I send a value of 0
But when I call the stored procedure with a value greater than 0, it works fine.
Here is my code....
```
SqlConnection con;
SqlCommand cmd = new SqlCommand();
con = new SqlConnection("server=XXX.XXX.XXX.XXX; database=xxxxxxx; uid=xxxxxxx; pwd=xxxxx");
cmd = new SqlCommand("sp_User_Create", con);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(new SqlParameter("fullName", txtFirstName.Text + " " + txtLastName.Text));
cmd.Parameters.Add(new SqlParameter("emailAddress", txtEmailAddress.Text));
cmd.Parameters.Add(new SqlParameter("password", txtPassword.Text));
cmd.Parameters.Add(new SqlParameter("phoneNumber", txtPhoneNumber.Text));
cmd.Parameters.Add(new SqlParameter("jobTitle", txtJobTitle.Text));
cmd.Parameters.Add(new SqlParameter("createDate", DateTime.Now));
cmd.Parameters.Add(new SqlParameter("companyId", 116));
cmd.Parameters.Add(new SqlParameter("adminUser", 0));
cmd.Parameters.Add(new SqlParameter("userExpireDate", DateTime.Now.AddDays(7)));
cmd.Parameters.Add(new SqlParameter("userAccessLeaderboardClips", 99));
cmd.Parameters.Add(new SqlParameter("userAccessLeaderboardUsers", 99));
cmd.Parameters.Add(new SqlParameter("userAccessLeaderboardVideos", 99));
cmd.Parameters.Add(new SqlParameter("userAccessShared", 99));
cmd.Parameters.Add(new SqlParameter("userAccessEmbed", 99));
cmd.Parameters.Add(new SqlParameter("userAccessTinyUrl", 1));
cmd.Parameters.Add(new SqlParameter("userAccessAllowCreatedVideos", 1));
cmd.Parameters.Add(new SqlParameter("userAccessDownload", 99));
cmd.Parameters.Add(new SqlParameter("companyName", "My Company Name"));
cmd.Parameters.Add(new SqlParameter("logoName", "the logo name"));
cmd.Parameters.Add(new SqlParameter("postcode", "XXXX XXX"));
cmd.Parameters.Add(new SqlParameter("optIn", true));
cmd.Parameters.Add(new SqlParameter("accountType", ""));
cmd.Parameters.Add(new SqlParameter("transactionDesc", ""));
cmd.Parameters.Add(new SqlParameter("transactionDate", ""));
cmd.Parameters.Add(new SqlParameter("transactionAmount", ""));
con.Open();
cmd.ExecuteNonQuery();
con.Close();
```
I have checked the stored procedure and it's expecting:
```
@adminUser as int
```
Here is the stored procedure:
```
USE [vCreateDev]
GO
/****** Object: StoredProcedure [dbo].[sp_User_Create] Script Date: 29/08/2013 10:47:03 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Trevor Daniel
-- Create date: 28/8/2013
-- Description: Create a user
-- =============================================
ALTER PROCEDURE [dbo].[sp_User_Create]
-- UserLogin Fields
@fullName as varchar(100),
@emailAddress as varchar(100),
@password as varchar(20),
@phoneNumber as varchar(100),
@jobTitle as varchar,
@createDate as datetime,
@companyId as int,
@adminUser as int,
@userExpireDate as datetime,
@userAccessLeaderboardClips as int,
@userAccessLeaderboardUsers as int,
@userAccessLeaderboardVideos as int,
@userAccessShared as int,
@userAccessEmbed as int,
@userAccessTinyUrl as int,
@userAccessAllowCreatedVideos as int,
@userAccessDownload int,
-- ClientCompanyDetailsPerUser
@companyName varchar(200),
@logoName varchar(200),
-- UserCustomerDetails
@postcode varchar(15),
@optIn bit,
@accountType varchar(50),
-- UserTransactions
@transactionDesc varchar(150),
@transactionDate datetime,
@TransactionAmount smallmoney
-- Outputs
--@UserLoginID INT OUTPUT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
INSERT INTO UserLogin(
Username,
Password,
AuthID,
DateCreated,
LastLogin,
Fullname,
Title,
Mobile,
DDI,
Email,
AdminUser,
CompanyID,
UserExpireDate,
CurrentLogin,
LastActionDateTime,
ActiveLicense,
UserAccessLeaderboardClips,
UserAccessLeaderboardUsers,
UserAccessLeaderboardVideos,
UserAccessShared,
UserAccessEmbed,
UserAccessTinyUrl,
UserAccessAllowCreatedVideos,
UserAccessDownload)
VALUES(
@emailAddress,
@password,
null,
@createDate,
null,
@fullName,
@jobTitle,
@phoneNumber,
null,
@emailAddress,
@adminUser,
@companyId,
@userExpireDate,
null,
null,
null,
@userAccessLeaderboardClips,
@userAccessLeaderboardUsers,
@userAccessLeaderboardVideos,
@userAccessShared,
@userAccessEmbed,
@userAccessTinyUrl,
@userAccessAllowCreatedVideos,
@userAccessDownload)
DECLARE @NewUserID as int
SET @NewUserID = SCOPE_IDENTITY()
INSERT INTO ClientCompanyDetailsPerUser(CompanyName, CompanyLogoLocation, LargeCompanyLogoLocation, UserID)
VALUES (@companyName,@logoName,@logoName,@NewUserID)
INSERT INTO UserCustomerDetails(UserID, PostCode, OptIn, AccountType)
VALUES (@NewUserID,@postcode,@optIn,@accountType)
INSERT INTO UserTransactions(UserID, TransactionDesc, TransactionDate, TransactionAmount, AccountType, ResellerParameter, CampaignParameter)
VALUES (@NewUserID,@transactionDesc,@transactionDate,@TransactionAmount,@accountType,NULL,NULL)
END
```
I just cannot understand why when a zero is sent it says the parameter is missing, but when I send something greater than zero it works perfectly? | Use caution when you use this overload of the SqlParameter constructor to specify integer parameter values. Because this overload takes a value of type Object, you must convert the integral value to an Object type when the value is zero.
If you do not perform this conversion, the compiler assumes that you are trying to call the `SqlParameter (string, SqlDbType)` constructor overload. ([MSDN](http://msdn.microsoft.com/en-us/library/0881fz2y.aspx))
Try using this
```
cmd.Parameters.Add("@adminUser", SqlDbType.Int).Value = 0;
``` | you need to add "@" just before parameter name like this for all parameter you are passing to procedure
```
cmd.Parameters.Add(new SqlParameter("@adminUser", 0));
``` | Asp.net Sql Parameter Missing | [
"",
"asp.net",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a table t1:
```
ag_name ag_start ag_end
a 10 20
c 30 50
a 60 70
c 70 75
```
I want to have this:
```
ag_name numberOfCards
a 20
c 25
```
which means a has (20-10) + (70-60) = 20 Cards.
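To make the expected numbers concrete, here is a quick Python/sqlite3 check of the arithmetic above (the per-name sum of `ag_end - ag_start`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (ag_name TEXT, ag_start INT, ag_end INT)")
conn.executemany("INSERT INTO t1 VALUES (?,?,?)",
                 [('a', 10, 20), ('c', 30, 50), ('a', 60, 70), ('c', 70, 75)])

# sum of interval widths per name
rows = conn.execute(
    "SELECT ag_name, SUM(ag_end - ag_start) FROM t1 GROUP BY ag_name ORDER BY ag_name"
).fetchall()
```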
then please help me with the QUERY ?? | ```
select ag_name, sum(ag_end - ag_start)
from the_unknown_table
group by ag_name
``` | Please try:
```
select
ag_name,
sum(ag_end-ag_start) NoOfCards
From t1
group by ag_name
``` | SUM and GROUP BY and Difference of colums | [
"",
"sql",
"select",
"group-by",
"sum",
""
] |
I have a table that holds values like this,
```
menu_id inner_text
1 Inner 1
1 Inner 2
2 Inner 1
2 Inner 2
2 Inner 3
3 Inner 1
```
I want to be able to select the menu\_id distinctly and get a count of values that are present beside menu\_id, for example
```
menu_id count
1 2
2 3
3 1
```
I tried a few queries myself but not able to get the result I needed.
What do I do? | Please try this:
```
SELECT MENU_ID,COUNT(*) AS COUNT FROM TABLE
GROUP BY MENU_ID
```
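A quick Python/sqlite3 sketch with the sample rows (the table name `menus` is assumed here) confirms the result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE menus (menu_id INT, inner_text TEXT)")
conn.executemany("INSERT INTO menus VALUES (?,?)",
                 [(1, 'Inner 1'), (1, 'Inner 2'), (2, 'Inner 1'),
                  (2, 'Inner 2'), (2, 'Inner 3'), (3, 'Inner 1')])

# one output row per menu_id with its row count
rows = conn.execute(
    "SELECT menu_id, COUNT(*) FROM menus GROUP BY menu_id ORDER BY menu_id"
).fetchall()
```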
Regards
Ashutosh Arya | ```
SELECT menu_id, COUNT(*)
FROM table
GROUP BY menu_id
ORDER BY menu_id
``` | selecting a distinct primary key and getting a count of values for them | [
"",
"sql",
""
] |
I have a query which is made up of two select statements with a union in the middle.
This works for what I need it for.
However, there is one value missing which I want to manually enter.
What I'm looking to is:
```
select * from tab1
union
select * from tab2
union
insert values('John',cast('2013-01-01' as date), 'Jim', 130)
```
Unfortunately this is not working. Can someone suggest how I do this please?
I'm using Teradata. | You need to keep selecting:
```
select * from tab1
union
select * from tab2
union
select 'John', cast('2013-01-01' as date), 'Jim', 130 from dual
```
The name `dual` is used in Oracle for a table with one row (and one column). Depending on the DBMS you use, you may be able to omit that final FROM altogether (and you may be able to do this in Oracle too):
```
select * from tab1
union
select * from tab2
union
select 'John', cast('2013-01-01' as date), 'Jim', 130
```
or you may have to choose from a system catalog table and ensure you get one row returned (`FROM systables WHERE tabid = 1` was the classic mechanism in Informix, though you could also use `'sysmaster':sysdual` instead of `dual`, etc, too), or you can select from any other table with a query that is guaranteed one row. There are probably ways to do it using a VALUES clause too.
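For example, on an engine that allows a `FROM`-less `SELECT` (sqlite is one), the pattern can be checked like this (table and column names below are made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (name TEXT, d TEXT, other TEXT, amount INT)")
conn.execute("INSERT INTO tab1 VALUES ('Ann', '2012-05-01', 'Bob', 90)")

rows = conn.execute("""
    SELECT * FROM tab1
    UNION
    SELECT 'John', '2013-01-01', 'Jim', 130   -- the manually added row
    ORDER BY name
""").fetchall()
```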
Note the change from double quotes to single quotes. In strict standard SQL, double quotes enclose a delimited identifier, but single quotes surround strings. | You just want to `SELECT` the data, not `INSERT` it.
Not very familiar with TeraData, perhaps you need a `FROM` in which case limiting to 1 record would also make sense:
```
select * from tab1
union
select * from tab2
union
SELECT 'John', cast('2013-01-01' as date), 'Jim', 130 FROM dbc.columns SAMPLE 1
``` | Using Union with Insert to add one row | [
"",
"sql",
"union",
"teradata",
""
] |
I have the following query:
```
SELECT
DATEDIFF(day, DateUsed, DateExpires) AS DaysBetweenExpirationAndUse
FROM tblOffer
```
How could i get the average number of days from the `DaysBetweenExpirationAndUse` column? | This should work fine:
```
SELECT AVG(DATEDIFF(d, DateUsed, DateExpires)) AS AvgDaysBetweenExpirationAndUse
FROM tbl
```
If you want decimal places in your AVG:
```
SELECT AVG(DATEDIFF(d, DateUsed, DateExpires)*1.0) AS AvgDaysBetweenExpirationAndUse
FROM tbl
```
If you want to select the `AVG` and other fields too, without grouping:
```
SELECT *
,AVG(DATEDIFF(d, DateUsed, DateExpires)*1.0) OVER() AS AvgDaysBetweenExpirationAndUse
FROM tbl
``` | This SQL could also work:
```
SELECT AVG(t.a) from
(
SELECT DATEDIFF(d, DateUsed, DateExpires) AS a
FROM tbl
) as t
```
This is my test:

but the better answer is:
```
SELECT AVG(DATEDIFF(d, DateOne, DateTwo)*1.0) AS avgDate
FROM Test
``` | How to get the average of a DATEDIFF()? | [
"",
"sql",
"sql-server",
""
] |
I'm trying to create a SQL script with a delay.
I could use:
```
blah blah
WAITFOR DELAY '00:30:00'
blah blah
```
but using a script like this requires that I sit through the 30 minutes.
Leaving the website will cancel the script.
---
The problem is that I want to change something in a table, and then change it back automatically 30 minutes later.
The person making the change will leave the webpage, so any kind of **client-side** script is out of the question.
Nor can I wait for the person to return and make the change if 30 minutes have passed, the change must happen after 30 minutes regardless.
---
Is there anyway to do this without making a service or any other program on the server?
Using only ASP/SQL programming.
If this is impossible, how do I make a **service** or **program** on the server to make this change?
Must be able to start this from the website using ASP. | Figured it out by using **SQL Server Agent** and **SQL procedures**.
This is basically how my code is built up now:
*Make the temporary change in the table*
```
UPDATE table SET column = 'temp_value' WHERE column = 'normal_value'
```
*Check if the procedure is there, if so, delete it. Create a procedure to revert the changes in the table.*
```
IF EXISTS ( SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'myRevertProcedure')
AND type IN ( N'P', N'PC' ) )
DROP PROCEDURE myRevertProcedure
CREATE PROCEDURE myRevertProcedure
AS
BEGIN
WAITFOR DELAY '00:30:00'
UPDATE table SET column = 'normal_value' WHERE column = 'temp_value'
END
```
---
*I've created a **job** in the SQL Server Agent that runs the following:*
```
IF EXISTS ( SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'myRevertProcedure')
AND type IN ( N'P', N'PC' ) )
BEGIN
EXEC myRevertProcedure
DROP PROCEDURE myRevertProcedure
END
```
The reason the job does not simply revert the change itself is that the user sets the delay.
*If the delay were always to be 30 mins, I could've made the job run the following:*
```
IF EXISTS (SELECT * FROM table WHERE column = 'temp_value')
BEGIN
WAITFOR DELAY '00:30:00'
UPDATE table SET column = 'normal_value' WHERE column = 'temp_value'
END
```
By doing this, I would not need any procedure.
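For comparison, the same change-then-automatically-revert idea can be sketched at the application level in Python, with a `threading.Timer` standing in for the Agent job and a 0.1-second delay standing in for the 30 minutes (table and values are hypothetical):

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE settings (col TEXT)")
conn.execute("INSERT INTO settings VALUES ('normal_value')")

def revert():
    conn.execute("UPDATE settings SET col = 'normal_value' WHERE col = 'temp_value'")

# make the temporary change, then schedule the automatic revert
conn.execute("UPDATE settings SET col = 'temp_value' WHERE col = 'normal_value'")
timer = threading.Timer(0.1, revert)   # 0.1 s standing in for 30 minutes
timer.start()

changed = conn.execute("SELECT col FROM settings").fetchone()[0]   # still 'temp_value'
timer.join()                                                       # wait for the revert
reverted = conn.execute("SELECT col FROM settings").fetchone()[0]
```

The key point is the same as in the Agent-based solution: nothing has to keep the original request alive while the delay elapses.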
BTW: The job runs every few seconds. | I personally would not approach the situation this way. I don't know exactly what your data structure is, or why you need to change something for 30 minutes, but I would use a 'Change' table.
So you might have something like
**MainTable** (ID, Column1, Column2, Column3, Column4);
**ChangeTable** (ID, Column1, Column2, Column3, Column4, CreatedDateTime);
Whenever you make your change instead of updating your main table you can simply insert the values you would be updating to into the ChangeTable (I'm assuming SQL-Server based on `WAITFOR`).
I would then make a view like so:
```
CREATE VIEW dbo.MainView
AS
SELECT m.ID,
Column1 = ISNULL(c.Column1, m.Column1),
Column2 = ISNULL(c.Column2, m.Column2),
Column3 = ISNULL(c.Column3, m.Column3)
FROM dbo.MainTable m
OUTER APPLY
( SELECT TOP 1 c.Column1, c.Column2, c.Column3
FROM dbo.ChangeTable c
WHERE c.ID = m.ID
AND c.CreatedDate >= DATEADD(MINUTE, -30, GETDATE())
ORDER BY c.CreatedDate DESC
) c;
```
Then refer to this throughout the website.
If space is an issue you could set up a nightly Job to delete any old entries, e.g. set the following to run at 00:30
```
DELETE ChangeTable
WHERE CreatedDate < CAST(GETDATE() AS DATE);
``` | How can I make a SQL script run after a certain time without waiting for it? | [
"",
"sql",
"asp-classic",
"delay",
"delayed-execution",
""
] |
I want to extract the **name of** the primary key column of a table if it is also a foreign key, from the information schema.
For example, two tables:
```
Student
std_id std_name
```
And
```
PHDstudent
std_id reaserchfield
```
The std\_id in PHDstudent is a primary key and a foreign key at the same time, referring to the Student table. | Run this query:
```
SELECT Tab.TABLE_NAME, Col.Column_Name from
INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab,
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col
WHERE
Col.Constraint_Name = Tab.Constraint_Name
AND Col.Table_Name = Tab.Table_Name
AND Constraint_Type = 'PRIMARY KEY'
INTERSECT
SELECT Tab.TABLE_NAME, Col.Column_Name from
INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab,
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col
WHERE
Col.Constraint_Name = Tab.Constraint_Name
AND Col.Table_Name = Tab.Table_Name
AND Constraint_Type = 'FOREIGN KEY'
``` | I think you are asking about the primary key column name; if so, the query below gives you the primary key column name:
```
SELECT column_name
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE OBJECTPROPERTY(OBJECT_ID(constraint_name), 'IsPrimaryKey') = 1
AND table_name = 'PHDstudent'
``` | SQL : Check if primary key is foreign key | [
"",
"sql",
"sql-server",
"sql-server-2008",
"database-schema",
""
] |
I don't know too much about optimized SQL Selects and my query is very slow. Maybe you have some hints that makes my query faster.
SQL Query
```
SELECT DISTINCT CLI.FANTASIA AS Cliente
, DBSMP.VEICULO_PLACA AS Placa
, DBSMP.DTINICIOPREV AS 'Data Inicio Previsto'
, DBSMP.DTFIMPREV AS 'Data Fim Previsto'
, DBSMP.DTINICIOREAL AS 'Data Incio Real'
, DBSMP.DTFIMREAL AS 'Data Fim Real'
, DBSMP.CIDADE_DES AS 'Cidade Destino'
, DBSMP.CIDADE_ORI AS 'Cidade Origem'
, TRA.FANTASIA AS Transportador
FROM DBSMP_WORK WORK
INNER JOIN DBSMP ON WORK.ID_SMP = DBSMP.ID_SMP
INNER JOIN DBCLIENTE CLI ON DBSMP.ID_CLIENTE = CLI.ID_CLIENTE
LEFT JOIN DBCLIENTE TRA ON DBSMP.ID_TRANSPORTADOR = CLI.ID_CLIENTE
WHERE WORK.[status] IN ('F')
AND DBSMP.ID_CLIENTE IN (85, 107, 137, 139, 510, 658, 659, 661, 702)
AND TRA.RAZAO = 'Google'
AND DBSMP.DTINICIOPREV BETWEEN '01/01/1900' AND '02/09/2013'
```
Then, my question is: How can I make the above query faster?
This query must to run in an instance of SQL Server.
Thanks in advance. | Just a few thoughts:
* try not to use `DISTINCT`, instead restrict your data appropriate.
* try not to use `IN`, for example `IN('F')` can be `='F'`
* read about indices and create them for the columns which you are querying/joining
* read how to create and read execution plans to find the bottleneck | Try this one -
```
SELECT DISTINCT CLI.FANTASIA AS Cliente
, DBSMP.VEICULO_PLACA AS Placa
, DBSMP.DTINICIOPREV AS [Data Inicio Previsto]
, DBSMP.DTFIMPREV AS [Data Fim Previsto]
, DBSMP.DTINICIOREAL AS [Data Incio Real]
, DBSMP.DTFIMREAL AS [Data Fim Real]
, DBSMP.CIDADE_DES AS [Cidade Destino]
, DBSMP.CIDADE_ORI AS [Cidade Origem]
, TRA.FANTASIA AS Transportador
FROM (
SELECT *
FROM DBSMP
WHERE DBSMP.DTINICIOPREV BETWEEN '19000101' AND '20130902'
AND DBSMP.ID_CLIENTE IN (85, 107, 137, 139, 510, 658, 659, 661, 702)
) DBSMP
JOIN DBCLIENTE CLI ON DBSMP.ID_CLIENTE = CLI.ID_CLIENTE
JOIN DBCLIENTE TRA ON DBSMP.ID_TRANSPORTADOR = TRA.ID_CLIENTE -- or TRA.ID_TRANSPORTADOR = CLI.ID_CLIENTE
WHERE TRA.RAZAO = 'Google'
AND EXISTS(
SELECT 1
FROM DBSMP_WORK WORK
WHERE WORK.ID_SMP = DBSMP.ID_SMP
AND WORK.[status] = 'F'
)
``` | Tuning SQL Select | [
"",
"sql",
"sql-server",
"sql-tuning",
""
] |
`CREATE OR REPLACE VIEW` doesn't seem to work in SQL Server. So how do I port `CREATE OR REPLACE VIEW` to work on SQL Server?
This is what I'm trying to do:
```
CREATE OR REPLACE VIEW data_VVVV AS
SELECT
VCV.xxxx,
VCV.yyyy AS yyyy,
VCV.zzzz AS zzzz
FROM
TABLE_A
;
```
Any ideas? | **Edit: Although this question has been marked as a duplicate, it has still been getting attention. The answer provided by @JaKXz is correct and should be the accepted answer.**
---
You'll need to check for the existence of the view. Then do a `CREATE VIEW` or `ALTER VIEW` depending on the result.
```
IF OBJECT_ID('dbo.data_VVVV') IS NULL
BEGIN
CREATE VIEW dbo.data_VVVV
AS
SELECT VCV.xxxx, VCV.yyyy AS yyyy, VCV.zzzz AS zzzz FROM TABLE_A VCV
END
ELSE
ALTER VIEW dbo.data_VVVV
AS
SELECT VCV.xxxx, VCV.yyyy AS yyyy, VCV.zzzz AS zzzz FROM TABLE_A VCV
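-- Side note (a sketch, not part of the original answer): on SQL Server
-- 2016 SP1 and later, the whole IF/ELSE branching above can be replaced
-- by a single statement, CREATE OR ALTER:
--
--     CREATE OR ALTER VIEW dbo.data_VVVV
--     AS
--     SELECT VCV.xxxx, VCV.yyyy AS yyyy, VCV.zzzz AS zzzz FROM TABLE_A VCV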
``` | Borrowing from @Khan's answer, I would do:
```
IF OBJECT_ID('dbo.test_abc_def', 'V') IS NOT NULL
DROP VIEW dbo.test_abc_def
GO
CREATE VIEW dbo.test_abc_def AS
SELECT
VCV.xxxx
,VCV.yyyy AS yyyy
,VCV.zzzz AS zzzz
FROM TABLE_A
```
[MSDN Reference](http://msdn.microsoft.com/en-us/library/ms173492.aspx) | How to make CREATE OR REPLACE VIEW work in SQL Server? | [
"",
"sql",
"sql-server",
"view",
"porting",
""
] |
I have a MySQL Table on an Amazon RDS Instance with 250 000 Rows. When I try to
```
SELECT * FROM tableName
```
without any conditions (**just for testing**, the normal query specifies the columns I need, but I need most of them), the query takes between 20 and 60 seconds to execute. This will be the base query for my report, and the report should run in under 60 seconds, so I think this will not work out (it times out the moment I add the joins). The report runs without any problems in our smaller test environments.
Could it be that the query is taking so long because MySQL is trying to lock the table and waiting for all writes to finish? There might be quite a lot of writes on this table. I am doing the query on a MySQL slave, since I do not want to lock up the production system with my queries.
* I have no experience with how many rows are a lot for a relational DB. Are 250,000 rows with ~30 columns (varchar, date and integer types) a lot?
* How can I speed up this query (hardware, software, query optimization ...)
* Can I tell MySQL that I do not care that the Data might be inconsistent (It is a snapshot from a Reporting Database)
* Is there a chance that this query will run under 60 seconds, or do I have to adjust my goals? | Remember that MySQL has to prepare your result set and transport it to your client. In your case, this could be 200MB of data it has to shuttle across the connection, so 20 seconds is not bad at all. Most libraries, by default, wait for the entire result to be received before forwarding it to the application.
To speed it up, fetch *only* the columns you need, or do it in chunks with `LIMIT`. `SELECT *` is usually a sign that someone's being super lazy and not optimizing at all.
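For example, chunked fetching might look something like this (a sketch; the column names and batch size are placeholders):

```
-- First chunk; assumes id is an indexed (e.g. primary key) column:
SELECT id, col1, col2
FROM tableName
ORDER BY id
LIMIT 10000;

-- Subsequent chunks: keyset pagination, resuming after the last id already
-- fetched. This stays fast deep into the table, unlike a growing OFFSET:
SELECT id, col1, col2
FROM tableName
WHERE id > 10000   -- last id seen in the previous chunk
ORDER BY id
LIMIT 10000;
```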
If your library supports streaming resultsets, use that, as then you can start getting data almost immediately. It'll allow you to iterate on rows as they come in without buffering the entire result. | A table with 250,000 rows is not too big for MySQL at all.
However, waiting for those rows to be returned to the application does take time. That is network time, and there are probably a lot of hops between you and Amazon.
Unless your report is really going to process all the data, check the performance of the database with a simpler query, such as:
```
select count(*) from table;
```
EDIT:
Your problem is unlikely to be due to the database. It is probably due to network traffic. As mentioned in another answer, streaming might solve the problem. You might also be able to play with the data formats to get the total size down to something more reasonable.
A last-resort step would be to save the data in a text file, compress the file, move it over, and uncompress it. Although this sounds like a lot of work, you might get 5x - 10x compression on the data, saving oodles of time on the transmission and still have a large improvement in performance with the rest of the processing. | Reporting with MySQL - Simplest Query taking too long | [
"",
"mysql",
"sql",
"performance",
""
] |
I have two tables with the same structure: `Table1` (Original) and `Table2` (Updated)
I need to keep all the data from `Table1` without changes.
`Table2` could have 1 or all of the rows from `Table1` with the same information, EXCEPT for one column, `QUANTITY`, which could vary from the original.
I would like a query where we join the two tables and, as a result, show all the rows from `Table1`, but when a match exists in `Table2`, show the row from `Table2` instead, discarding the row from `Table1` and showing the sum or difference of the column `QUANTITY`.
Example:
**Table1**
```
Product Quantity
Shirt 3
Pants 5
Shoes 9
```
**Table 2**
```
Product Quantity
Pants 2
```
Result:
```
Product Quantity
---------------
Shirt 3
Pants 3
Shoes 9
```
Is that possible from one single query, or should I use extra code (I'm using C# and LINQ)? | ```
var query = from table1 in data.Table1s
join table2 in data.Table2s
on table1.ID equals table2.ID
into x
select new {
table1.Shirt,
Pants = x.Sum(c=>c.Pants) + table1.Pants,
table1.Shoes
};
``` | ```
select table1.product, isnull(table1.quantity - table2.quantity, table1.quantity) as Quantity
from table1
left join table2
on table1.product = table2.product
``` | SQL Server 2005 or LINQ query: joining two tables and calculate field | [
"",
"sql",
"linq",
"sql-server-2005",
""
] |
I am creating an application that allows users to construct complex SELECT statements. The SQL that is generated cannot be trusted, and is totally arbitrary.
I need a way to execute the untrusted SQL in relative safety. My plan is to create a database user who only has SELECT privileges on the relevant schemas and tables. The untrusted SQL would be executed as that user.
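Concretely, I picture something along these lines (the role name and schema are just illustrative):

```
CREATE ROLE untrusted_reader LOGIN PASSWORD 'change_me';
-- Let the role see the schema and read, but not modify, its tables:
GRANT USAGE ON SCHEMA public TO untrusted_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO untrusted_reader;
```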
What could possibly go wrong with that? :)
If we assume postgres itself does not have critical vulnerabilities, the user could do a bunch of cross joins and overload the database. That could be mitigated with a session timeout.
I feel like there is a lot more that could go wrong, but I'm having trouble coming up with a list.
**EDIT:**
Based on the comments/answers so far, I should note that the number of people using this tool at any given time will be very near 0. | Things that come to mind, in addition to having the user SELECT-only and revoking privileges on functions:
* Read-only transaction. When a transaction is started by `BEGIN READ ONLY`, or `SET TRANSACTION READ ONLY` as its first instruction, it cannot write anything, independently of the user permissions.
* At the client side, if you want to restrict it to one `SELECT`, better use a SQL submission function that does not accept several queries bundled into one. For instance, the swiss-knife `PQexec` method of the `libpq` API does accept such queries and so does every driver function that is built on top of it, like PHP's `pg_query`.
* <http://sqlfiddle.com/> is a service dedicated to running arbitrary SQL statements, which may be seen as something of a proof-of-concept that it's doable without being hacked or DDoS'ed all day long. | SELECT queries cannot change anything in the database. Lack of DBA privileges guarantees that no global settings can be changed. So, overload is truly the only concern.
Overload can be the result of complex queries or too many simple queries.
* Overly complex queries can be ruled out by setting `statement_timeout` in `postgresql.conf`
* Receiving floods of simple queries can be avoided too. Firstly, you can set a parallel connection limit per user (`alter user` with `CONNECTION LIMIT`). And if you have some interface program between the user and PostgreSQL, you can additionally (1) add some extra wait after each query completion, and (2) introduce a CAPTCHA to avoid automated DoS attacks
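As a rough sketch of both of those knobs (the role name is a placeholder):

```
-- Abort any statement from this role that runs longer than 30 seconds:
ALTER ROLE report_user SET statement_timeout = '30s';

-- Allow at most 5 concurrent connections for this role:
ALTER ROLE report_user CONNECTION LIMIT 5;
```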
**ADDITION:** PostgreSQL's public system functions give many possible attack vectors. They can be called like `select pg_advisory_lock(1)`, and every user has the privilege to call them. So, you should restrict access to them. A good option is creating a whitelist of all "callable words", or more precisely, identifiers that can be used with `(` after them, and ruling out all queries that include a call-like construct `identifier (` with an identifier not in the whitelist. | Executing untrusted SQL with SELECT only user | [
"",
"sql",
"security",
"postgresql",
"user-input",
"privileges",
""
] |
I have a table of `posts` and a table of `post_tags`
Here is my post table structure example:
```
post_id int(11)
post_user int(11)
post_title text
post_content longtext
```
And this is my post\_tags structure example:
```
post_id int(11)
tag_id int(11)
```
What I need is to select all posts from the `posts` table that have `tag_id` 1 AND 2 at the same time; I've tried different joins without success.
Example of post\_tags table data:
```
post_id tag_id
1 1
2 1
5 2
6 1
6 2
```
Here, for example, my query should return the post (from the posts table) whose id is 6; notice in the example that post\_id 6 has tag\_id 1 AND tag\_id 2, NOT ONLY ONE OF THEM but BOTH at the same time. | You can do this with aggregation:
```
select post_id
from post_tags pt
group by post_id
having sum(tag_id = 1) > 0 and
sum(tag_id = 2) > 0;
```
If you want to see the actual information from `posts`, just join that table in.
EDIT (a bit of an explanation):
You have a "set-within-sets" query. This is a common query and I prefer to solve it using aggregation and a `having` clause, because this is the most general approach.
Each condition in the `having` clause is counting the number of rows that match one of the tags. That is, `sum(tag_id = 1)` is counting up the rows in `post_tags` where this is true. The condition `> 0` is just saying "tag\_id = 1 exists on at least one row".
The reason I like this approach is because you can generalize it easily. If you want tags 3 and 4 as well:
```
having sum(tag_id = 1) > 0 and
sum(tag_id = 2) > 0 and
sum(tag_id = 3) > 0 and
sum(tag_id = 4) > 0;
```
And so on. | Give this a try:
```
SELECT post.*
FROM (SELECT T1.post_id
FROM (SELECT * FROM post_tags WHERE 1 IN(tag_id)) T1
INNER JOIN (SELECT * FROM post_tags WHERE 2 IN(tag_id)) T2 ON T1.post_id = T2.post_id)
T3
INNER JOIN post ON T3.post_id=post.post_id;
```
SQL Fiddle Link: <http://sqlfiddle.com/#!2/04f74/33> | mysql Select rows with exclusive AND | [
"",
"mysql",
"sql",
""
] |
I have a solution that is sharded across multiple SQL Azure databases. These databases all have the exact same data schema and I generate an edmx from one of them.
How can I maintain the schemas of multiple databases with respect to change management?
Any change in one schema has to be automatically applied on all the other databases. Is there something I am missing? I looked at data sync but it seems to be solving another problem. In my case the schema is exactly the same and the data stored is different. | This can be achieved using SSDT (an add-in for VS) and automated deployment tools (I use FinalBuilder).
I currently use the SSDT tools, with which I created a database project from the original schema.
If there are any changes to one of the databases, I use schema compare in SSDT and update the database project.
Then I follow the steps below to roll out the changes to the other databases.
Step 1: Update the schema changes to database Project.
Step 2: Use MSBuild and Generate a deployment script by setting one of the databases as Target.
Step 3: Run the generated script across all the databases using any auto deployment tools. | If you're always using Entity Framework migrations to make changes to the database structure, then you might be able to use one of the approaches below. It depends a little on how your multiple databases are used in relation to the EF application.
* [EF 4.3 Auto-Migrations with multiple DbContexts in one database](https://stackoverflow.com/questions/9110536/ef-4-3-auto-migrations-with-multiple-dbcontexts-in-one-database)
* [EF Code First Migration with Multiple Database / DbContext](https://stackoverflow.com/questions/10507910/ef-code-first-migration-with-multiple-database-dbcontext)
Or you can use the EF migrations to [generate a script](http://msdn.microsoft.com/en-gb/data/jj591621.aspx#script) that you can run on each server manually.
If you're interested in an automated process, it might be worth taking a look at [Deployment Manager](http://www.red-gate.com/delivery/deployment-manager/) from Red Gate (full disclosure: I work for Red Gate).
This lets you take a database and from Visual Studio or SSMS turn it into a package that you can deploy to multiple servers and environments. The starter edition for up to 5 projects and 5 servers is free to use, and it deploys to Azure. It can deploy .NET web applications too, but you can just use it for the DB.
It's very good for ensuring that the same database schema is on all servers within an environment, and for propagating database changes through test/staging/prod in line with any application changes. You can also integrate it with source control and CI systems to automate database changes from Dev to Production | SQL server multiple databases with same schema | [
"",
"sql",
"sql-server-2008",
"azure-sql-database",
""
] |
I have 2 tables: activities and users.
users has columns: name, active
activities: name, type, time, user\_id.
For example, I have these tables:
```
users
-----
id | name | active
1 | marc | true
2 | john | true
3 | mary | true
4 | nico | true
activities
-----
id | name | type | time | user_id
1 | morn | walk | 90 | 2
2 | morn | walk | 22 | 2
3 | morn | run | 12 | 2
4 | sat | walk | 22 | 1
5 | morn | run | 13 | 1
6 | mond | walk | 22 | 3
7 | morn | walk | 22 | 2
8 | even | run | 42 | 1
9 | morn | walk | 22 | 3
10 | morn | walk | 62 | 1
11 | morn | run | 22 | 3
```
Now I would like to get a table that sums the time spent on each type of activity, grouped by user name. So:
```
result
------
user name | type | time
marc | walk | 84
marc | run | 55
john | walk | 134
john | run | 12
mary | walk | 44
mary | run | 22
nico | walk | 0
nico | run | 0
```
How should I write this query to get this result?
Thanks in advance,
Gerard | You can use `coalesce` to get 0 for empty activities, and `distinct` to get all possible activity types:
```
select
u.name, c.type,
coalesce(sum(a.time), 0) as time
from (select distinct type from activities) as c
cross join users as u
left outer join activities as a on a.user_id = u.id and a.type = c.type
group by u.name, c.type
order by u.name, c.type
```
[**`sql fiddle demo`**](http://www.sqlfiddle.com/#!1/4886f/44) | ```
Select u.name, a.type, SUM(a.time) FROM
activities a
LEFT JOIN users u
ON a.user_id = u.id
GROUP BY u.name, a.type
```
**[FIDDLE](http://www.sqlfiddle.com/#!1/4886f/3)**
Use this to get zero count as well
```
SELECT c.name,c.type,aa.time FROM
(Select u.id,u.name, b.type FROM
users u
CROSS JOIN (SELECT DISTINCT type FROM activities) b) c
LEFT JOIN (SELECT a.user_id, a.type, SUM(a.time) as time FROM
activities a
GROUP BY a.user_id, a.type) aa ON
aa.user_id = c.id and c.type = aa.type
```
**[Fiddle2](http://www.sqlfiddle.com/#!1/4886f/37)** | how to sql query 2 tables and group by? | [
"",
"sql",
"postgresql",
"sum",
""
] |
I'm new to LINQ and am struggling to find out how to do the following.
I have a system that has a SQL Server back end and uses EF5 to interact with the database. I have the following tables:
```
Case
CaseStatus
Status
CaseAction
ActionType
```
Relationships are...
```
Case (1 - many) CaseStatus
CaseStatus (1 - many) CaseAction
Status (1 - many) CaseStatus
ActionType (1 - many) CaseAction
```
I need to display a list of Case records that includes the latest Status (based on the last CaseStatus record related to the case) and the latest Action record (based on the latest CaseAction record related to the latest CaseStatus record related to the case)
All cases will have at least one CaseStatus record, but not all CaseStatus records will have related CaseAction records. In these scenarios I still need to display some details from the Case record, but I will leave the Action column blank.
In T-SQL this is the sort of thing that would give me what I want
```
select
c.ID CaseID
,cs1.Name CurrentStatus
,cs1.Date StatusDate
,ca1.Name LatestAction
,ca1.Date ActionDate
from
[Case] c
left join
--Latest CaseStatus record to get current status of case
(
select
cs.ID
,cs.CaseID
,s.Name
,cs.Date
,ROW_NUMBER() over (partition by caseid order by cs.id desc) rn
from
CaseStatus cs
--Join to status to get the name of the statusID
inner join Status s on cs.StatusID=s.ID
) cs1
on c.ID=cs1.CaseID and cs1.rn=1
left join
--Latest CaseAction on the case
(
select
ca.CaseStatusID
,at.Name
,ca.Date
,ROW_NUMBER() over (partition by casestatusID order by ca.id desc) rn
from
CaseAction ca
inner join ActionType at on ca.ActionTypeID=at.ID
) ca1
on ca1.CaseStatusID=cs1.ID and ca1.rn=1
```
I know that one thing I could do to get my data into my application is to include the view that contains this SQL in my EF model.
What I'd like, for the purposes of learning, is to be able to achieve the same results using LINQ. If possible I would really like to see examples using both the query comprehension syntax and the extension method syntax.
Thanks in advance for any assistance. | I managed to come up with the following, which seems to give me what I need. Thanks for the pointers, guys!
```
var cases = db.Cases
.Select(c => new
{
CaseID = c.ID,
LatestStatus = c.CaseStatus
.Where(cs => cs.CaseID == c.ID)
.OrderByDescending(cs => cs.ID)
.FirstOrDefault()
})
.Select(c => new
{
CaseID = c.CaseID,
CurrentStatus = c.LatestStatus.Status.Name,
LatestAction = c.LatestStatus.CaseActions
.Where(ca => ca.CaseStatusID == c.LatestStatus.ID)
.OrderByDescending(ca => ca.ID)
.FirstOrDefault()
})
.Select(c => new CaseListItem
{
CaseID = c.CaseID,
CurrentStatus = c.CurrentStatus,
LastAction = c.LatestAction != null ? c.LatestAction.ActionType.Name : "-",
ActionDate = c.LatestAction != null ? c.LatestAction.Date : null
});
``` | I'll assume that you have a `Case` class that has a `Statuses` property, which is a collection of `CaseStatus` objects. Also that the `CaseStatus` object has a `Status` property and a collection of `CaseAction` objects called Actions. I'll also assume that you have a `DbContext`-descendant entity called `context` through which you access the data.
In that case the queries are relatively simple:
```
context.Cases.Select( c=> {
//get the last status
var lastStatus = c.Statuses.OrderByDescending(cs => cs.ID).First();
//get the last action (might be null)
    var lastAction = lastStatus.Actions.OrderByDescending(ca => ca.ID).FirstOrDefault();
return new
{
CaseID = c.ID,
StatusName = lastStatus.Status.Name,
StatusDate = lastStatus.Date,
ActionName = (lastAction == null) ? null : lastAction.Action.Name,
ActionDate = (lastAction == null) ? null : lastAction.Date,
}
});
``` | How to return list of records along with latest of related record using LINQ | [
"",
"sql",
"linq",
""
] |
I have two tables.
One table contains customer details, i.e.:
```
id(customerid--primary key),name,email,phonenumber
```
And the other table contains orders, i.e.:
```
id,customerid(foreign key),ordervalue,orderdate
```
I need to get the list of customers who have not ordered in the last month, i.e. for the month of August. How can I do it?
This is the query I tried:
```
select a.id,a.name,b.order_date from customers a
left join orders b
on a.id = b.customer_id
where b.order_date is null
``` | This query will extract those customers who haven't ordered in the `past one month from today`:
```
SELECT a.id, a.name
FROM customers a
WHERE NOT EXISTS(SELECT *
FROM orders b
WHERE b.cid = a.id AND
orderdate BETWEEN now()- '1 month'::interval
AND now());
```
Here is the [SQLfiddle](http://sqlfiddle.com/#!1/8d0fd/14)
However, if you want to be more precise, e.g. you want last month's orders from the `1st of last month to the last date of last month`, then you can use this:
```
SELECT a.id, a.name
FROM customers a
WHERE NOT EXISTS(SELECT *
FROM orders b
WHERE b.cid = a.id AND
to_char(orderdate, 'Mon YYYY') =
to_char(now()- '1 month'::interval, 'Mon YYYY') );
```
Here is the [SQLfiddle](http://sqlfiddle.com/#!1/8d0fd/24)
**EDIT**
Please also have a look at [Roman Pekar's](https://stackoverflow.com/a/18557777/2555580) answer which is more efficient. | ```
SELECT *
FROM customers c
WHERE NOT EXISTS (
SELECT *
FROM orders nx
WHERE nx.customer_id = c.id
AND nx.order_date BETWEEN '2013-08-01' AND '2013-08-31'
);
``` | join two table with non-matching rows | [
"",
"sql",
"postgresql",
""
] |
When I search any record by **chapter number**, it works.
But the problem is when I select **chapter number 1 or 2** from the drop-down and then search all records included in that chapter.
It displays all records included in 1,11,21,31... or 2,21,12,... like this.
I know I wrote '**like**' there; that's why it happens. But when I write the "**=**" operator (which I commented out in my code), that also didn't work for me.
What will be the perfect query to solve this problem?
**My Code:**
```
<?php
include("conn.php");
$name=$_POST['fname'];
$name2=$_POST['chapter'];
$sql="SELECT distinct * FROM $user WHERE question like '%".$name."%' and Chapter like '%".$name2."%'";
// $sql="SELECT * FROM $user WHERE question='$name' and Chapter='$name2'";
$result=mysql_query($sql,$connection) or die(mysql_error());
while($row=mysql_fetch_array($result)) {
?>
``` | I would be interested to see what the type of 'Chapter' is in the returned query, and try to see why it is that the equality comparison doesn't work.
If the typing is straightforward (i.e. it really is just plain old strings), then I'd be looking for whitespace characters or something like that which is foiling the equality comparison.
Similarly, I'm wondering whether it's the equality on the 'Question' that is messing up your alternate query.
At a guess, try one of the following:
```
$sql="SELECT distinct * FROM $user WHERE question like '%".$name."%' and Chapter like '$name2'";
$sql="SELECT distinct * FROM $user WHERE question like '%".$name."%' and Chapter='$name2'";
```
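You could also check for stray whitespace directly with something like this (the table name is a placeholder, since yours comes from `$user`):

```
-- If len and trimmed_len differ, the column holds hidden whitespace:
SELECT Chapter, LENGTH(Chapter) AS len, LENGTH(TRIM(Chapter)) AS trimmed_len
FROM your_table
WHERE TRIM(Chapter) = '1';
```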
Oh, and you should really do something about escaping those parameters properly to avoid any nasty SQL injection attacks. | The problem is this part of the first query:
```
Chapter like '%".$name2."%'
```
If `=` doesn't work, then I can think of two things. The first is that `Chapter` is really a list, probably a comma delimited list. The second is that there are extraneous characters in the database.
If `Chapter` is really a list, use `find_in_set()` instead:
```
find_in_set($name2, Chapter) > 0
``` | issue with duplicating record in mysql query | [
"",
"mysql",
"sql",
""
] |
I am trying to insert a new record if the email address does not exist in list\_email.email\_addr AND does not exist in list\_no\_email.email\_addr.
```
INSERT INTO list_email(fname, lname, email_addr) VALUES('bob', 'schmoe', 'bogus@bogus.com'), ('mary', 'lamb', 'hoe@me.com');
SELECT email_addr FROM list_email
WHERE NOT EXIST(
SELECT email_addr FROM email_addr WHERE email_addr = $post_addr
)
WHERE NOT IN (
SELECT email_addr FROM list_no_email WHERE email_addr = $post_addr
)LIMIT 1
**************************** mysql tables ******************************
mysql> desc list_email;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| list_name | varchar(55) | YES | | NULL | |
| fname | char(50) | YES | | NULL | |
| lname | char(50) | YES | | NULL | |
| email_addr | varchar(150) | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)
mysql> desc list_no_email;
+------------+--------------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+-------------------+-----------------------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| date_in | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| email_addr | varchar(150) | YES | | NULL | |
+------------+--------------+------+-----+-------------------+-----------------------------+
3 rows in set (0.00 sec)
```
**Error:**
```
INSERT INTO list_email(fname, lname, email_addr) VALUES('bob', 'schmoe', 'bogus@bogus.com'), ('mary', 'lamb', 'hoe@me.com'); SELECT email_addr FROM list_email WHERE NOT EXIST( SELECT email_addr FROM email_addr WHERE email_addr = $post_addr ) WHERE NOT IN ( SELECT email_addr FROM list_no_email WHERE email_addr = $post_addr )LIMIT 1;
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT email_addr FROM email_addr WHERE email_addr = $post_addr ) WHERE NOT IN ' at line 1
```
**Without `;`:**
```
mysql> INSERT INTO list_email(fname, lname, email_addr) VALUES('bob', 'schmoe', 'bogus@bogus.com'), ('mary', 'lamb', 'hoe@me.com')
-> SELECT email_addr FROM list_email AS tmp
-> WHERE NOT EXIST(
-> SELECT email_addr FROM email_addr WHERE email_addr = $post_addr
-> )
-> WHERE NOT IN (
-> SELECT email_addr FROM list_no_email WHERE email_addr = $post_addr
-> )LIMIT 1;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT email_addr FROM list_email AS tmp
WHERE NOT EXIST(
SELECT email_addr FROM' at line 2
``` | **Replace**
```
WHERE NOT EXIST(
```
**With**
```
WHERE email_addr NOT IN(
```
**EDIT**
**Replace**
```
SELECT email_addr FROM list_email
WHERE NOT IN
(
SELECT email_addr FROM email_addr WHERE email_addr = $post_addr
)
WHERE NOT IN
(
SELECT email_addr FROM list_no_email WHERE email_addr = $post_addr
)LIMIT 1
```
**with**
```
SELECT email_addr FROM list_email
WHERE NOT IN
(
SELECT email_addr FROM email_addr WHERE email_addr = $post_addr
UNION ALL
SELECT email_addr FROM list_no_email WHERE email_addr = $post_addr
)
LIMIT 1
``` | Try something like this:
```
INSERT INTO list_email($username.$rowname, fname, lname, list_email) VALUES(?,?,?,?,?)
SELECT email_addr FROM list_email AS tmp
WHERE email_addr NOT IN(
SELECT email_addr FROM list_email WHERE email_addr = $post_addr)
WHERE email_addr NOT IN (SELECT email_addr FROM list_no_email
WHERE email_addr = $post_addr)
LIMIT 1;
``` | MySQL INSERT INTO with dual condition for IF NOT EXIST | [
"",
"mysql",
"sql",
""
] |
I'm trying to insert a large CSV file (several gigs) into `SQL Server`, but once I go through the `Import Wizard` and finally try to import the file I get the following error report:
* Executing (Error)
Messages
> Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data
> conversion for column ""Title"" returned status value 4 and status
> text "Text was truncated or one or more characters had no match in the
> target code page.".
(`SQL Server Import and Export Wizard`)
> Error 0xc020902a: Data Flow Task 1: The "Source -
> Train\_csv.Outputs[Flat File Source Output].Columns["Title"]" failed
> because truncation occurred, and the truncation row disposition on
> "Source - Train\_csv.Outputs[Flat File Source Output].Columns["Title"]"
> specifies failure on truncation. A truncation error occurred on the
> specified object of the specified component.
(`SQL Server Import and Export Wizard`)
> Error 0xc0202092: Data Flow Task 1: An error occurred while processing
> file "C:\Train.csv" on data row 2.
(`SQL Server Import and Export Wizard`)
> Error 0xc0047038: Data Flow Task 1: SSIS Error Code
> DTS\_E\_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - Train\_csv
> returned error code 0xC0202092. The component returned a failure code
> when the pipeline engine called PrimeOutput(). The meaning of the
> failure code is defined by the component, but the error is fatal and
> the pipeline stopped executing. There may be error messages posted
> before this with more information about the failure.
(`SQL Server Import and Export Wizard`)
I created the table to insert the file into first, and I set each column to hold varchar(MAX), so I don't understand how I can still have this truncation issue. What am I doing wrong? | In SQL Server Import and Export Wizard you can adjust the source data types in the `Advanced` tab (these become the data types of the output if creating a new table, but otherwise are just used for handling the source data).
The data types are annoyingly different than those in MS SQL, instead of `VARCHAR(255)` it's `DT_STR` and the output column width can be set to `255`. For `VARCHAR(MAX)` it's `DT_TEXT`.
So, on the Data Source selection, in the `Advanced` tab, change the data type of any offending columns from `DT_STR` to `DT_TEXT` (You can select multiple columns and change them all at once).
[](https://i.stack.imgur.com/KVQdv.jpg) | This answer may not apply universally, but it fixed the occurrence of this error I was encountering when importing a small text file. The flat file provider was importing based on fixed 50-character text columns in the source, which was incorrect. No amount of remapping the destination columns affected the issue.
To solve the issue, in the "Choose a Data Source" for the flat-file provider, after selecting the file, a "Suggest Types.." button appears beneath the input column list. After hitting this button, even if no changes were made to the ensuing dialog, the Flat File provider then re-queried the source .csv file and then *correctly* determined the lengths of the fields in the source file.
Once this was done, the import proceeded with no further issues. | Errors in SQL Server while importing CSV file despite varchar(MAX) being used for each column | [
"",
"sql",
"sql-server",
"csv",
""
] |
I am trying to find a way to access a database of some management software which uses some kind of raw ISAM files to store the data. The data folder consists of .idx and .fs5 files, and I cannot find any evidence of this being an Informix type as mentioned in another question here on Stack Overflow.
Does anyone have any kind of solution for creating some kind of bridge to the database? I have had no luck finding an open-source ODBC-to-C-ISAM driver; is anyone aware of something that could possibly help here? | FS is possibly a [Flagship](http://www.fship.com/) file, a product that uses the same format as Ashton Tate's [dBase III](http://en.wikipedia.org/wiki/DBase).
[This link](https://groups.google.com/forum/#!topic/multisoft-flagship/xLg7YmeTJao) may help:
> Open your \*.FS5 file by any Hex-Editor. If the first byte is (hex) 03 or 83, it is a dBaseIII+ compatible database, fully supported by FlagShip. If the 1st byte is 04, 05, 13,
> 23, 33, 93, B3 it is a database with FlagShip extensions. If the first byte of \*.idx is (hex) 52, it is FlagShip index. See additional details on <http://www.fship.com/dbfspecs.txt>. If above apply, go to <http://www.fship.com/eval.html> for free test version of FlagShip.
If it *is* a Flagship file, the documentation at the link given above states that the file format for database files (but not indexes apparently) is fully compatible with dBase III so you may be able to find software (dBase, Clipper, FoxPro, Excel et al) that can extract the data if not the index info.
They also handily detail the file format in the LNG section of their docs but dBase III format is well known so you may be able to get more information elsewhere, such as [here](http://www.clicketyclick.dk/databases/xbase/format/index.html).
If it turns out to *not* be a Flagship/dBase file, you'll have to do some more research. The only other thing the internet suggests is that it may be a Flight Simulator file, which seems unlikely :-)
You *could* try running a `strings` command on the files to see if any vendor-type information shows up.
Failing that, it may be that the best solution is to contact the vendor of the management software to see if they'll part with the information. Social engineering like that may provide a quicker solution than trying to reverse engineer the format. | IBM has a C-ISAM bridge for their Online RDBMS engine that re-hosts the C-ISAM files as tables in the RDBMS. You can then use SQL to access the data. | Opening c isam files? | [
"",
"sql",
"c",
"database",
"isam",
""
] |
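The first-byte check quoted in the answer above can be sketched as a small Python helper (a hypothetical `classify_fs_header` function; the byte values are copied verbatim from the quoted FlagShip documentation):

```python
# Hypothetical helper for the first-byte check described above.
# Byte values come from the FlagShip docs quoted in the answer.
DBASE_COMPAT = {0x03, 0x83}                                # dBaseIII+ compatible
FLAGSHIP_EXT = {0x04, 0x05, 0x13, 0x23, 0x33, 0x93, 0xB3}  # FlagShip extensions

def classify_fs_header(first_byte):
    """Classify a .FS5 file by the first byte of its header."""
    if first_byte in DBASE_COMPAT:
        return "dBaseIII+ compatible"
    if first_byte in FLAGSHIP_EXT:
        return "FlagShip extensions"
    return "unknown"

header = b"\x03\x00\x00"  # pretend these are the first bytes of a .FS5 file
print(classify_fs_header(header[0]))  # dBaseIII+ compatible
```

In practice you would read the first byte with `open(path, "rb").read(1)` and pass it to the helper.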
I have the following SQL trigger for one of my tables. I'm wondering if it's possible to track and record the user that makes a change to the table.
See code below
```
ALTER trigger [dbo].[Trigger_Audit_Client] on [dbo].[Client] for insert, update, delete
as
declare @type varchar(1) ,
@UpdateDate datetime
if exists (select * from inserted) and exists (select * from deleted)
select @type = 'U'
else if exists (select * from inserted)
select @type = 'I'
else
select @type = 'D'
select @UpdateDate = getdate()
insert Client_Audit (Client_Id, ClientName, ClientSurname, TelephoneHome, TelephoneWork, TelephoneCellular, DOB, Gender, ClientIdNumber, Company_Id, Region, City, Email, AddressLine1, AddressLine2, Will_UID, WillCreated, WillLatest, UpdateDate, UpdateType)
select Client_Id, ClientName, ClientSurname, TelephoneHome, TelephoneWork, TelephoneCellular, DOB, Gender, ClientIdNumber, Company_Id, Region, City, Email, AddressLine1, AddressLine2, Will_UID, WillCreated, WillLatest, @UpdateDate, @type + '_old'
from deleted
insert Client_Audit (Client_Id, ClientName, ClientSurname, TelephoneHome, TelephoneWork, TelephoneCellular, DOB, Gender, ClientIdNumber, Company_Id, Region, City, Email, AddressLine1, AddressLine2, Will_UID, WillCreated, WillLatest, UpdateDate, UpdateType)
select Client_Id, ClientName, ClientSurname, TelephoneHome, TelephoneWork, TelephoneCellular, DOB, Gender, ClientIdNumber, Company_Id, Region, City, Email, AddressLine1, AddressLine2, Will_UID, WillCreated, WillLatest, @UpdateDate, @type + '_new'
from inserted
``` | `User_Name()`: <http://technet.microsoft.com/en-us/library/ms188014.aspx>
## Syntax
```
USER_NAME ( [ id ] )
```
> When id is omitted, the current user in the current context is assumed. If the parameter contains the word NULL will return NULL.When USER\_NAME is called without specifying an id after an EXECUTE AS statement, USER\_NAME returns the name of the impersonated user. If a Windows principal accesses the database by way of membership in a group, USER\_NAME returns the name of the Windows principal instead of the group.
# ASIDE:
I would change your trigger code to this:
```
SET NOCOUNT ON;
INSERT INTO dbo.Client_Audit (Cliend_Id, ClientName, ..., UpdateDate, UpdateType)
SELECT Coalesce(i.Cliend_Id, d.Cliend_Id) As Cliend_Id
, Coalesce(i.ClientName, d.ClientName) As ClientName
, ...
, Current_Timestamp As UpdateDate
, CASE WHEN i.Cliend_Id IS NULL THEN 'D'
WHEN d.Cliend_Id IS NULL THEN 'I'
ELSE 'U'
END As UpdateType
FROM inserted As i
FULL
JOIN deleted As d
ON d.Cliend_Id = i.Cliend_Id;
```
Does the same thing but in a cleaner manner (no extra logic and variables, just a single statement).
Any questions just ask! | Use [system\_user](http://technet.microsoft.com/en-us/library/ms179930.aspx) or [suser\_name()](http://technet.microsoft.com/en-us/library/ms187934.aspx) for that.
[user\_name()](http://technet.microsoft.com/en-us/library/ms188014.aspx) will return `dbo` if your user is in sysadmin role. | Getting current user with a sql trigger | [
"",
"sql",
"sql-server",
""
] |
I have 2 columns `a` and `b` with a 1:n relation:
```
A.id, a.text, a.b_id(fk), a.value --
B.id(pk), b.text etc
```
I want to create a query that returns the number of `b.id` values that have at least one related row with `a.value = 1`.
I tried this:
```
Select count(b.id)
from a.id
join b on a.b_id=b_id
group by b.id
having count(if(a.value=1),1,null))>0
```
...but it returns no result. It seems simple, but it's a problem for me. | You don't need `HAVING` because standard `INNER` join won't return `b` rows without matching `a` rows anyway. You don't need `GROUP BY` either - use `COUNT(DISTINCT )` instead:
```
SELECT COUNT(DISTINCT b.id) AS cnt
FROM b
JOIN a ON a.b_id = b.id
WHERE a.value = 1
``` | This should do:
```
SELECT COUNT(*)
FROM TableB AS B
WHERE EXISTS(SELECT 1 FROM TableA
WHERE id = B.id
AND value = 1)
``` | query impossible for me? | [
"",
"mysql",
"sql",
"count",
""
] |
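The accepted `COUNT(DISTINCT ...)` approach from the record above can be verified with a quick in-memory sketch (Python's `sqlite3`, hypothetical sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE b (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE a (id INTEGER PRIMARY KEY, text TEXT, b_id INTEGER, value INTEGER);
    INSERT INTO b VALUES (1, 'x'), (2, 'y'), (3, 'z');
    -- b.id 1 and 3 each have at least one row with value = 1; b.id 2 does not
    INSERT INTO a VALUES (1, 't', 1, 1), (2, 't', 1, 1), (3, 't', 2, 0), (4, 't', 3, 1);
""")
# Count distinct b rows that have at least one matching a row with value = 1
(cnt,) = con.execute("""
    SELECT COUNT(DISTINCT b.id)
    FROM b
    JOIN a ON a.b_id = b.id
    WHERE a.value = 1
""").fetchone()
print(cnt)  # 2
```

Note that `COUNT(DISTINCT ...)` is what prevents `b.id = 1` from being counted twice despite its two matching rows.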
My select-statement looks like:
```
Select column1, column2
From Table1
Group By column2
```
`column1` is a CLOB and I want to receive one of the values that is part of one group. I know they are all the same so it doesn't matter which one I get. I've tried functions like `MIN` and `MAX` but they don't accept CLOB as a type.
To be clear, I don't want to aggregate the CLOBs, just pick one of them.
This is a simplification of the actual SELECT statement and the GROUP BY clause is necessary.
So with this data:
```
column1 column2
qwerty 1
qwerty 1
asdfgh 2
asdfgh 2
```
I want to get:
```
qwerty 1
asdfgh 2
```
Any idea how this could be done? | A `CLOB` value cannot be used for grouping or inside a `distinct` clause.
The only chance you have is to convert the `CLOB` to a varchar but that means you cannot compare the complete contents of the column (note: those are *columns*, not rows). If you are certain that all your CLOB values are smaller than 8000 bytes, you can use something like this:
```
select min(dbms_lob.substr(column1)), column2
from foo
group by column2;
``` | You can use sub-queries.
Add unique column ID and then:
```
SELECT t1.col2,t2.col1
FROM
(SELECT max(ID) as IDM, col2 FROM Table1 GROUP BY col2) t1
LEFT JOIN
Table1 t2
ON t1.IDM=t2.ID
``` | Group by with CLOB in select-statement | [
"",
"sql",
"oracle",
""
] |
I have two tables, Player and Shot. There is a 1 to many relationship between Player and Shot. I want to get some player information, like Email, FirstName, and LastName, along with a player's top shot information, which contains various parts- CalculatedScore, AccuracyScore, DistanceScore, and TimeScore. CalculatedScore is the most important value. All the others are components of that score.
Here's my best effort:
```
select s_max.PlayerId as playerId_max, s_max.TopScore, s.PlayerId, p.FirstName,
p.LastName, s.CalculatedScore, s.AccuracyScore, s.TimeScore, s.DistanceScore from Player p
inner join Shot s on s.PlayerId = p.Id
inner join (
select distinct MAX(CalculatedScore) over (partition by PlayerId) as TopScore,
PlayerId from Shot s2
) s_max on s.PlayerId = s_max.PlayerId and s.CalculatedScore = s_max.TopScore
order by PlayerId desc
```
This is almost identical to what I need, but it returns each row that ties for the top score. Getting it to return one row instead is surprisingly frustrating.
Thanks for your time. | I think this is a good case to use `outer/cross apply`.This should perform better than using window function, especially if you `Player` table has small number of rows:
```
select
p.Id, p.FirstName, p.LastName,
s.CalculatedScore, s.AccuracyScore, s.TimeScore, s.DistanceScore
from Player as p
outer apply (
select top 1 s.*
from Shot as s
where s.PlayerId = p.Id
order by s.CalculatedScore desc
) as s
```
here's [**sql fiddle demo**](http://sqlfiddle.com/#!3/54100/1) with examples, you can check performance. | If I understood you correctly, this is what you want:
```
;WITH CTE AS
(
SELECT *,
RN=ROW_NUMBER() OVER(PARTITION BY PlayerId
ORDER BY CalculatedScore DESC)
FROM Shot
)
SELECT A.PlayerId as playerId_max,
B.TopScore,
A.FirstName,
A.LastName,
B.CalculatedScore,
B.AccuracyScore,
B.TimeScore,
B.DistanceScore
FROM Player A
INNER JOIN CTE B
ON A.PlayerId = B.PlayerId
WHERE B.RN = 1
``` | SQL Query, obtain maximum with no duplicates | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
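SQLite has no `OUTER APPLY`, but the one-row-per-player behaviour of the accepted answer above (including tie handling) can be reproduced with a correlated `LIMIT 1` subquery; a runnable sketch with hypothetical sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Player (Id INTEGER PRIMARY KEY, FirstName TEXT);
    CREATE TABLE Shot (Id INTEGER PRIMARY KEY, PlayerId INTEGER, CalculatedScore INTEGER);
    INSERT INTO Player VALUES (1, 'Ann'), (2, 'Bob');
    -- Ann has two shots tied for her top score; Bob has no shots at all.
    INSERT INTO Shot VALUES (10, 1, 50), (11, 1, 50), (12, 1, 40);
""")
rows = con.execute("""
    SELECT p.Id, p.FirstName, s.CalculatedScore
    FROM Player p
    LEFT JOIN Shot s ON s.Id = (
        SELECT s2.Id FROM Shot s2
        WHERE s2.PlayerId = p.Id
        ORDER BY s2.CalculatedScore DESC, s2.Id  -- explicit tie-break: lowest shot id wins
        LIMIT 1
    )
    ORDER BY p.Id
""").fetchall()
print(rows)  # [(1, 'Ann', 50), (2, 'Bob', None)]
```

The explicit `s2.Id` tie-break is what guarantees a single row per player even when top scores are tied, which was the frustration described in the question.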
I am looking to combine two unrelated tables into a single result set so I can sort by a common field and limit the combined result set for display on the page.
Example
```
Table 1
Field1 | Field2 | Field3 | date_posted
--------------------------------------
Blah | Blah2 | Blah3 | 2013-02-01
Table 2
Field4 | Field 5 | date_posted
------------------------------
Blah4 | Blah5 | 2013-01-01
Result
Field1 | Field2 | Field3 | Field4 | Field5 | date_posted
--------------------------------------------------------
Blah | Blah2 | Blah3 | NULL | NULL | 2013-02-01
NULL | NULL | NULL | Blah4 | Blah5 | 2013-01-01
```
The reason for this is that I previously set up a database to display these tables on separate pages, and now the client wants to combine them into a single page. If I run the queries separately and then combine the data in PHP, there are issues such as pagination, as well as having to select a set amount from each table even if those rows are not the latest. | No join necessary:
```
select field1, field2, field3, null as field4, null as field5, date_posted
from table_1
union all
select null, null, null, field4, field5, date_posted
from table_2
```
To sort by a specific column, just add an `order by`
```
select field1, field2, field3, null as field4, null as field5, date_posted
from table_1
union all
select null, null, null, field4, field5, date_posted
from table_2
order by date_posted
```
Note that in a UNION the order by always works on the full result, not on the individual parts. So even if it's placed right after the second select, it will sort everything.
To implement paging using `LIMIT` | ```
SELECT Field1, Field2, Field3, date_posted
FROM Table1
UNION
SELECT Field4, Field5, date_posted
FROM Table2
ORDER BY date_posted LIMIT 20
``` | SQL Join Tables with no common field | [
"",
"mysql",
"sql",
"database",
""
] |
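The NULL-padded `UNION ALL` plus `ORDER BY`/`LIMIT` paging from the accepted answer above can be checked with a small `sqlite3` sketch using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_1 (field1 TEXT, field2 TEXT, field3 TEXT, date_posted TEXT);
    CREATE TABLE table_2 (field4 TEXT, field5 TEXT, date_posted TEXT);
    INSERT INTO table_1 VALUES ('Blah', 'Blah2', 'Blah3', '2013-02-01');
    INSERT INTO table_2 VALUES ('Blah4', 'Blah5', '2013-01-01');
""")
rows = con.execute("""
    SELECT field1, field2, field3, NULL AS field4, NULL AS field5, date_posted
    FROM table_1
    UNION ALL
    SELECT NULL, NULL, NULL, field4, field5, date_posted
    FROM table_2
    ORDER BY date_posted
    LIMIT 10 OFFSET 0   -- paging works over the whole combined result
""").fetchall()
print(rows)
```

As the answer notes, the `ORDER BY` (and the `LIMIT`) applies to the full union, so the `table_2` row sorts first here.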
This is what I have so far:
```
SELECT ListKey
FROM Closing_List
WHERE DATEPART(YEAR, closedate) > DATEPART(YEAR, GETDATE()) - 2
```
But I want 24 Months back.
Thanks! | ```
SELECT ListKey
FROM Closing_List
WHERE closedate > DATEADD(MONTH, -24, GETDATE())
``` | ```
SELECT ListKey
FROM Closing_List
WHERE closedate > DATEADD(MONTH,-24,GETDATE())
``` | Filter date to be greater than 24 months rather than 2 years? | [
"",
"sql",
"reporting-services",
"ssrs-2008",
""
] |
How can I parse a comma-delimited string from a column in an Oracle database and use each value to join to another table?
example:
```
Table 1 Table 2
ID Name ID Rate
--- ------ ---- ------
1,2,3 FISRT 1 90
2 80
3 70
```
I would like to extract each ID from Table 1 to join to Table 2, something like:
```
extract(tbl1.ID, i) = tbl2.ID
```
Thanks in advance. | Based on [this answer](https://stackoverflow.com/a/13580861/1083652) you can do something like this
```
select *
from table1 t1 join table2 t2 on ','||t1.id||',' like '%,'||t2.id||',%'
```
[here is a sqlfiddle](http://www.sqlfiddle.com/#!4/1925d/1) | The typical way is to use a hierarchical query and REGEXP_SUBSTR. If you have one row then this is okayish; if you have multiple then you've got problems.
The following will parse the comma-delimited string.
```
select regexp_substr(:txt, '[^,]+', 1, level)
from dual
connect by regexp_substr(:txt, '[^,]+', 1, level) is not null
```
However, if you're doing more than one at once you will need to add a DISTINCT see [duplicating entries in listagg function](https://stackoverflow.com/questions/17968016/duplicating-entries-in-listagg-function/17971170#17971170).
You can then use this in your JOIN.
```
with the_rows as (
select distinct regexp_substr(col, '[^,]+', 1, level) as id
from table1
connect by regexp_substr(col, '[^,]+', 1, level) is not null
)
select *
from table2 a
join the_rows b
on a.id = b.id
```
This is a horrible way to do it; you're using hierarchical queries and then a sort unique for the DISTINCT. It is about as far from efficient as you can get. You need to normalise your table. | SQL: Parse Comma-delimited string and use as join | [
"",
"sql",
"oracle",
""
] |
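The delimiter-padded `LIKE` join from the accepted answer above works the same way in SQLite (`||` concatenation plus `LIKE`); a runnable sketch with the question's sample data. Padding both sides with commas prevents partial matches (id `1` matching `11`) and sidesteps the `_` wildcard problem the answer mentions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (id TEXT, name TEXT);
    CREATE TABLE t2 (id INTEGER, rate INTEGER);
    INSERT INTO t1 VALUES ('1,2,3', 'FIRST'), ('2', 'SECOND');
    INSERT INTO t2 VALUES (1, 90), (2, 80), (3, 70);
""")
rows = con.execute("""
    SELECT t1.name, t2.id, t2.rate
    FROM t1 JOIN t2
      ON ',' || t1.id || ',' LIKE '%,' || t2.id || ',%'
    ORDER BY t1.name, t2.id
""").fetchall()
print(rows)
```

As both answers point out, normalizing the comma-delimited column into a child table is the proper long-term fix; this join cannot use an index.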
I have the following sql query(s):
```
//Get id
SELECT id from magazine where name = 'Web Designer'
SELECT avatar from person where newsletter = 1 and id in
(
SELECT person_id as id FROM person_magazine WHERE magazine_id = 9 //use id from previous query
)
```
How do I combine them so I don't have to run two queries? | I think you need to learn about joins. The following is the query you really seem to want:
```
SELECT p.avatar
from person p join
person_magazine pm
on pm.person_id = p.id join
magazine m
on pm.magazine_id = m.id
where p.newsletter = 1 and m.name = 'Web Designer'
``` | Nest the queries:
```
SELECT avatar
from person
where
newsletter = 1
and id in (
SELECT person_id as id
FROM person_magazine
WHERE magazine_id = (SELECT id from magazine where name = 'Web Designer' limit 1)
)
```
Or use a variable:
```
set @mag_id = (SELECT id from magazine where name = 'Web Designer' limit 1);
SELECT avatar from person where newsletter = 1 and id in
(
SELECT person_id as id FROM person_magazine WHERE magazine_id = @mag_id
)
```
I use `limit 1` to ensure that only one row is returned.
If there are more rows, consider using `join`. | How to combine two simple sql queries | [
"",
"mysql",
"sql",
""
] |
Is there a MySQL function to convert a date from the format dd.mm.yy to YYYY-MM-DD?
for example, `03.09.13 -> 2013-09-03`. | Since your input is a string in the form `03.09.13`, I'll assume (since today is September 3, 2013) that it's `dd.mm.yy`. You can convert it to a date using [`STR_TO_DATE`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date):
```
STR_TO_DATE(myVal, '%d.%m.%y')
```
Then you can format it back to a string using [`DATE_FORMAT`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format):
```
DATE_FORMAT(STR_TO_DATE(myVal, '%d.%m.%y'), '%Y-%m-%d')
```
Note that the year is `%y` (lowercase "y") in `STR_TO_DATE` and `%Y` (uppercase "Y") in `DATE_FORMAT`. The lowercase version is for two-digit years and the uppercase is for four-digit years. | Use
```
SELECT CONCAT(
'20',
SUBSTR('03.09.13', 7, 2),
'-',
SUBSTR('03.09.13', 4, 2),
'-',
SUBSTR('03.09.13', 1, 2))
```
Fiddle [demo](http://www.sqlfiddle.com/#!2/2bad6e/7).
You can read more about formats in the corresponding [manual page](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-format).
Tip: if this converts a value stored in a non-datetime field, it is better to use the DATE/DATETIME data type instead; operating on dates via string functions is a **bad idea**. Above there is a nice trick with `STR_TO_DATE` (I will not repeat that code here). | SQL date format convert? [dd.mm.yy to YYYY-MM-DD] | [
"",
"mysql",
"sql",
"date",
"format",
"sql-function",
""
] |
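The `STR_TO_DATE`/`DATE_FORMAT` round-trip in the accepted answer above has a direct Python equivalent in `strptime`/`strftime`, including the two-digit-year handling (`%y` in both):

```python
from datetime import datetime

# Parse with a two-digit year (%y), then re-format as four-digit ISO (%Y-%m-%d),
# mirroring STR_TO_DATE(val, '%d.%m.%y') wrapped in DATE_FORMAT(..., '%Y-%m-%d').
iso = datetime.strptime("03.09.13", "%d.%m.%y").strftime("%Y-%m-%d")
print(iso)  # 2013-09-03
```

Python's `%y` maps two-digit years 00-68 to 2000-2068, so `13` becomes `2013` here.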
I have `myTable` with a column `RecTime` of type `DATETIME`:
```
RecTime
-----------------------
2013-05-22 15:32:37.530
2013-05-22 22:11:16.103
2013-05-22 16:24:06.883
2013-05-22 16:38:30.717
2013-05-22 23:54:41.777
2013-05-23 22:01:00.000
2013-05-23 09:59:59.997
```
I need a SQL statement that tells me which rows have a time between 12:00 and 22:00.
The expected result is:
```
RecTime | foo
------------------------|-----
2013-05-22 15:32:37.530 | 1
2013-05-22 22:11:16.103 | 0
2013-05-22 16:24:06.883 | 1
2013-05-22 16:38:30.717 | 1
2013-05-22 23:54:41.777 | 0
2013-05-23 22:01:00.000 | 0
2013-05-23 09:59:59.997 | 0
```
For now I use the following:
```
SELECT
[RecTime]
, CASE WHEN [RecTime] >= CONVERT(DATETIME, CONVERT(VARCHAR(4), DATEPART(YEAR, [RecTime])) + '-' + CONVERT(VARCHAR(2), DATEPART(MONTH, [RecTime])) + '-' + CONVERT(VARCHAR(2), DATEPART(DAY, [RecTime])) + ' 12:00')
AND [RecTime] <= CONVERT(DATETIME, CONVERT(VARCHAR(4), DATEPART(YEAR, [RecTime])) + '-' + CONVERT(VARCHAR(2), DATEPART(MONTH, [RecTime])) + '-' + CONVERT(VARCHAR(2), DATEPART(DAY, [RecTime])) + ' 22:00')
THEN 1
ELSE 0
END
FROM dbo.myTable
```
I know this is not the best solution performance-wise.
help would be appreciated | One more approach
```
SELECT RecTime,
CASE WHEN CAST(RecTime AS TIME)
BETWEEN '12:00:00' AND '22:00:00'
THEN 1 ELSE 0 END foo
FROM Table1
```
Output:
```
| RECTIME | FOO |
|----------------------------|-----|
| May, 22 2013 15:32:37+0000 | 1 |
| May, 22 2013 22:11:16+0000 | 0 |
| May, 22 2013 16:24:06+0000 | 1 |
| May, 22 2013 16:38:30+0000 | 1 |
| May, 22 2013 23:54:41+0000 | 0 |
| May, 23 2013 22:01:00+0000 | 0 |
| May, 23 2013 09:59:59+0000 | 0 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/4a503/6)** demo | You've found [`DATEPART`](http://msdn.microsoft.com/en-us/library/ms174420.aspx) and yet the obvious didn't spring to mind:
```
CASE WHEN DATEPART(hour,[RecTime]) >= 12
AND DATEPART(hour,[RecTime]) < 22
THEN 1
ELSE 0
```
As a side note, when working with a continuum like time, it's almost always more sensible to define semi-open intervals (where you use `>=` on the start and `<` on the finish), otherwise you end up with oddities such as 22:00:00.000 exactly being included in your period, and 22:00:00.003 being excluded. That's rarely correct. I've adjusted my query to match this pattern.
---
> i know this is not the best solution / performance.
Performance is always going to be poor for this query because we'll never be able to leverage an index. If this is a frequent form of query (querying on just the Hour component), you might consider adding a computed column which you can then index.
---
To include 22:00:00.000 exactly, I'd handle that as a separate, specialised case:
```
CASE WHEN DATEPART(hour,[RecTime]) >= 12
AND DATEPART(hour,[RecTime]) < 22
THEN 1
WHEN DATEPART(hour,RecTime) = 22 AND DATEPART(minute,RecTime) = 0 and DATEPART(second,RecTime) = 0 and DATEPART(millisecond,RecTime) = 0
THEN 1
ELSE 0
``` | sql : which dates contain time between 12:00 and 22:00 | [
"",
"sql",
"sql-server-2008-r2",
"sql-server-2012",
""
] |
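The `CAST(RecTime AS TIME)` approach from the first answer above maps to SQLite's `time()` function; a runnable sketch over the question's sample rows. Note that SQLite's `time()` drops fractional seconds, and the comparison below uses the half-open interval the second answer recommends (`>= 12:00` and `< 22:00`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (RecTime TEXT)")
con.executemany("INSERT INTO myTable VALUES (?)", [
    ("2013-05-22 15:32:37.530",), ("2013-05-22 22:11:16.103",),
    ("2013-05-22 16:24:06.883",), ("2013-05-22 16:38:30.717",),
    ("2013-05-22 23:54:41.777",), ("2013-05-23 22:01:00.000",),
    ("2013-05-23 09:59:59.997",),
])
rows = con.execute("""
    SELECT RecTime,
           CASE WHEN time(RecTime) >= '12:00:00'
                 AND time(RecTime) <  '22:00:00'
                THEN 1 ELSE 0 END AS foo
    FROM myTable
""").fetchall()
print([foo for _, foo in rows])  # matches the expected output column in the question
```

The result reproduces the question's expected `foo` column: `[1, 0, 1, 1, 0, 0, 0]`.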
I have a table which stores a code in the Stack1, Stack2, Stack3, Stack4, Stack5 and Stack6 columns. In case the user deletes one of the codes in any of these stacks, I have to auto-stack the remaining codes so that the gap is moved to the end.

For example, in the above screenshot the user has deleted the code in Stack2; now I want the code in Stack3 to move to Stack2 and the code in Stack4 to move to Stack3. The following is the expected output:

Please suggest solutions. | While it would be nice to normalize your current schema, here is one possibility to update the table:
```
;with piv as (
-- Unpivot the data for ordering
select csCode, lane, [row], 1 as ordinal, stack1 as stack from MyTable
union all select csCode, lane, [row], 2 as ordinal, stack2 as stack from MyTable
union all select csCode, lane, [row], 3 as ordinal, stack3 as stack from MyTable
union all select csCode, lane, [row], 4 as ordinal, stack4 as stack from MyTable
union all select csCode, lane, [row], 5 as ordinal, stack5 as stack from MyTable
union all select csCode, lane, [row], 6 as ordinal, stack6 as stack from MyTable
)
, sort as (
-- Order the stacks
select *
, row_number() over (partition by csCode, lane, [row] order by case when stack = '' then 1 else 0 end, ordinal) as stackNumber
from piv
)
update a
set stack1 = (select stack from sort where csCode = a.csCode and lane = a.lane and [row] = a.[row] and stackNumber = 1)
, stack2 = (select stack from sort where csCode = a.csCode and lane = a.lane and [row] = a.[row] and stackNumber = 2)
, stack3 = (select stack from sort where csCode = a.csCode and lane = a.lane and [row] = a.[row] and stackNumber = 3)
, stack4 = (select stack from sort where csCode = a.csCode and lane = a.lane and [row] = a.[row] and stackNumber = 4)
, stack5 = (select stack from sort where csCode = a.csCode and lane = a.lane and [row] = a.[row] and stackNumber = 5)
, stack6 = (select stack from sort where csCode = a.csCode and lane = a.lane and [row] = a.[row] and stackNumber = 6)
from MyTable a
```
I'm using a couple of [CTEs](http://technet.microsoft.com/en-us/library/ms190766.aspx) to unpivot and order the stacks, then updating each stack with a correlated subquery. You can run this on the entire table, or provide a where clause as appropriate for better performance.
There are a couple of assumptions here:
* Your "blank" data is an empty string. If you might have spaces and nulls, sanitize that or qualify it with something like `ltrim(rtrim(coalesce(stack1, ''))) != ''`
* `csCode, lane, row` makes a candidate key (unique and none of those are null). If `csCode` is the primary key by itself, then you wouldn't need `lane` or `row` in any of this query. | Some combination of `Case` statements and `Coalesce` should get you there. It's tough to provide precise code without all the edge case variations, but a *pseudo example* of what you're asking *could* look like:
```
UPDATE MyTable
Stack1 = Coalesce (Stack1, Stack2, Stack3, Stack4, Stack5, Stack6),
Stack2 = Coalesce (Stack2, Stack3, Stack4, Stack5, Stack6)
Stack3 = Coalesce (Stack3, Stack4, Stack5, Stack6),
Stack4 = Coalesce (Stack4, Stack5, Stack6),
Stack5 = Coalesce (Stack5, Stack6),
Stack6 = Stack6
WHERE ID = @SomeIDValue
```
With that said, your question smells of a database design problem. From what you've shared, I would rather store the stacks in a child table.
```
CodeParent
* codeParentID
* csCode
* line
* row
CodeStackChild
* codeStackChildId (int identity)
* codeParentID
* stackValue
```
With a table setup like above you could delete a value in the middle of your stack and just move along without the need to shift values around. To get values in order you just query the `CodeStackChild` table and use a `ROW_NUMBER() OVER (Partition by codeParentID)` in your select statement. | Update multiple columns in a single update query to autostack column values in sql server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
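Stripped of the SQL machinery, the unpivot-and-renumber logic in the accepted answer above boils down to a left shift of the non-empty codes; a pure-Python sketch with hypothetical stack values:

```python
def autostack(stacks):
    """Shift non-empty codes to the front, preserving their order,
    and pad the remaining positions with '' (the answers treat blank
    cells as empty strings)."""
    filled = [code for code in stacks if code != '']
    return filled + [''] * (len(stacks) - len(filled))

print(autostack(['A1', '', 'C3', 'D4', '', '']))  # ['A1', 'C3', 'D4', '', '', '']
```

This is exactly what the `row_number() over (... order by case when stack = '' then 1 else 0 end, ordinal)` ranking computes per `csCode, lane, row` group.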
I need to list the number of column1 entries that have been added to the database over the selected time period (counted back from the day the list is requested): daily, weekly (last 7 days), monthly (last 30 days) and quarterly (last 3 months). For example, below is the table I created to perform this task.
```
Column | Type | Modifiers
------------------+-----------------------------+-----------------------------------------------------
column1 character varying (256) not null default nextval
date timestamp without time zone not null default now()
coloumn2          character varying(256)        ..........
```
Now, I need the total count of entries in column1 with respect to the selected time period.
Like,
```
Column 1 | Date | Coloumn2
------------------+-----------------------------+-----------------------------------------------------
abcdef 2013-05-12 23:03:22.995562 122345rehr566
njhkepr 2013-04-10 21:03:22.337654 45hgjtron
ffb3a36dce315a7 2013-06-14 07:34:59.477735 jkkionmlopp
abcdefgggg 2013-05-12 23:03:22.788888 22345rehr566
```
From the above data, for the daily time period the count should be 2.
I have tried doing this query
```
select count(column1) from table1 where date='2012-05-12 23:03:22';
```
and got exactly one record matching the timestamp. But I need to do it in a proper way; I believe this is not an efficient way of retrieving the count. Anyone who could show me the right and efficient way of writing such a query would be a great help. I am new to the database world, and I am trying to be efficient in any query I write.
Thanks!
**[EDIT]**
*Each query currently takes 175854 ms to process. What would be an efficient way to reduce this time?* Any help would be really great. I am using PostgreSQL. | To be efficient, conditions should compare values of the same type as the columns being compared. In this case, the column being compared - `Date` - has type `timestamp`, so we need to use a range of `timestamp` values.
In keeping with this, you should use `current_timestamp` for the "now" value, and as confirmed by the [documentation](http://www.postgresql.org/docs/9.1/static/functions-datetime.html), subtracting an `interval` from a `timestamp` yields a `timestamp`, so...
For the last 1 day:
```
select count(*) from table1
where "Date" > current_timestamp - interval '1 day'
```
For the last 7 days:
```
select count(*) from table1
where "Date" > current_timestamp - interval '7 days'
```
For the last 30 days:
```
select count(*) from table1
where "Date" > current_timestamp - interval '30 days'
```
For the last 3 months:
```
select count(*) from table1
where "Date" > current_timestamp - interval '3 months'
```
Make sure you have an index on the Date column.
---
If you find that the index is not being used, try converting the condition to a between, eg:
```
where "Date" between current_timestamp - interval '3 months' and current_timestamp
```
Logically the same, but may help the optimizer to choose the index.
---
Note that `column1` is irrelevant to the question; being unique there is no possibility of the *row* count being different from the number of different values of `column1` found by any given criteria.
Also, the choice of "Date" for the column name is poor, because a) it is a reserved word, and b) it is not in fact a date. | ```
select count(distinct column1) from table1 where date > '2012-05-12 23:03:22';
```
I assume "number of column1" means "number of distinct values in `column1`".
**Edit:**
Regarding your second question (speed of the query): I would assume that an index on the date column should speed up the runtime. Depending on the data content, this could even be declared `unique`. | Efficient way of counting a large content from a cloumn or a two in a database using selected time period | [
"",
"sql",
"database",
"postgresql",
"postgresql-9.2",
""
] |
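The rolling-window predicate recommended in the accepted answer above (`"Date" > now - interval`) can be sketched in SQLite with its `datetime()` modifiers, using the question's sample rows and a fixed reference date instead of `current_timestamp` so the result is reproducible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE table1 (column1 TEXT, "date" TEXT)')
con.executemany("INSERT INTO table1 VALUES (?, ?)", [
    ("abcdef",          "2013-05-12 23:03:22.995562"),
    ("njhkepr",         "2013-04-10 21:03:22.337654"),
    ("ffb3a36dce315a7", "2013-06-14 07:34:59.477735"),
    ("abcdefgggg",      "2013-05-12 23:03:22.788888"),
])
# "last 30 days" as of a fixed reference date (2013-06-15):
(cnt,) = con.execute("""
    SELECT COUNT(*) FROM table1
    WHERE "date" > datetime('2013-06-15 00:00:00', '-30 days')
""").fetchone()
print(cnt)  # only the 2013-06-14 row falls in the window
```

A plain range comparison like this (rather than wrapping the column in a function) is what allows an index on the date column to be used, which is the answer's main performance point.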
I have two tables. The first one is T\_EMPLOYEE
```
create table t_employee
(
f_id number(8, 2) not null primary key,
f_name varchar(200)
);
```
The second is T\_SALARY
```
create table t_salary
(
f_id number(8, 2) not null primary key,
f_employee_id number(8,2),
f_salary number(8, 2)
);
ALTER TABLE t_salary ADD CONSTRAINT fk_salary
FOREIGN KEY (f_employee_id) REFERENCES t_employee;
```
I want to get the maximum salary and the name of the corresponding employee, so I wrote this query:
```
select t_employee.f_name, MAX(f_salary)
from t_salary
inner join t_employee on t_salary.f_employee_id=t_employee.f_id
group by f_name;
```
but the result looks like this:
```
Jenny 5000
Andy 3000
Mary 1000
```
But I want to retrieve only the name of the user who has the highest salary. What am I doing wrong? | ```
select f_name,
f_salary
from (
select t_employee.f_name,
t_salary.f_salary,
dense_rank() over (order by t_salary.f_salary desc) as rnk
from t_salary
inner join t_employee on t_salary.f_employee_id=t_employee.f_id
) t
where rnk = 1;
``` | You can use the [rownum psuedcolumn](http://docs.oracle.com/cd/B19306_01/server.102/b14200/pseudocolumns009.htm)
```
select
f_name,
f_salary
from (
select
t_employee.f_name,
MAX(f_salary) as f_salary
from
t_salary
inner join
t_employee
on t_salary.f_employee_id=t_employee.f_id
group by
f_name
order by
max(f_salary) desc
) x
where
rownum = 1;
``` | Select max value with join | [
"",
"sql",
"oracle",
"max",
""
] |
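For this record, a simpler `MAX` subquery (equivalent to the `dense_rank() = 1` filter in the accepted answer when you want all tied top earners) can be run against the question's schema in `sqlite3`; the `number(8, 2)` types are simplified to integers for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t_employee (f_id INTEGER PRIMARY KEY, f_name TEXT);
    CREATE TABLE t_salary (f_id INTEGER PRIMARY KEY, f_employee_id INTEGER, f_salary INTEGER);
    INSERT INTO t_employee VALUES (1, 'Jenny'), (2, 'Andy'), (3, 'Mary');
    INSERT INTO t_salary VALUES (1, 1, 5000), (2, 2, 3000), (3, 3, 1000);
""")
rows = con.execute("""
    SELECT e.f_name, s.f_salary
    FROM t_salary s
    JOIN t_employee e ON s.f_employee_id = e.f_id
    WHERE s.f_salary = (SELECT MAX(f_salary) FROM t_salary)
""").fetchall()
print(rows)  # [('Jenny', 5000)]
```

The `GROUP BY f_name` in the question's attempt is what produced one row per employee; filtering on the overall `MAX` (or on `dense_rank() = 1`) collapses it to the top earner(s) only.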
I have the following SQL, which increases the StartTime of each row in the result from MOCKTABLE.
```
SELECT
DATEADD( "d", ROW_NUMBER() OVER (ORDER BY mt.ID), mt.StartTime ) AS INCREMENT
FROM
MOCKTABLE mt
```
Now I would like to trade the "d" to a column specified in MOCKTABLE. Like this:
```
SELECT
DATEADD( mt.PeriodTime, ROW_NUMBER() OVER (ORDER BY mt.ID), mt.StartTime ) AS INCREMENT
FROM
MOCKTABLE mt
```
How can I do this? The column PeriodTime is nvarchar(5) and will contain 'd', 'ww' or 'm'. The reason I want this is that the user should decide the increment unit of the original date. | You can do it with a big case statement:
```
SELECT (case when PeriodTime = 'day'
then DATEADD(day, ROW_NUMBER() OVER (ORDER BY mt.ID), mt.StartTime)
when PeriodTime = 'month'
then DATEADD(month, ROW_NUMBER() OVER (ORDER BY mt.ID), mt.StartTime)
when PeriodTime = 'year'
then DATEADD(year, ROW_NUMBER() OVER (ORDER BY mt.ID), mt.StartTime)
end) AS INCREMENT
FROM MOCKTABLE mt
``` | If you don't need years or months, then instead of PeriodTime being a string, it could be an int
e.g. if the smallest increment you wanted to make was an hour, then 1, 24 and 168 would give you hour, day and week in
```
DateAdd(hour,PeriodTime * ROW_Number(),Somedate)
```
without the huge and irritating case statement. | DateAdd with abbreviation from column | [
"",
"sql",
"sql-server",
""
] |
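The per-unit branching of the accepted answer's `CASE` expression maps to a plain dispatcher in Python; a hypothetical `date_add` sketch for the question's 'd'/'ww'/'m' codes (the month branch clamps the day to the 28th to keep the example simple):

```python
from datetime import date, timedelta

def date_add(unit, n, start):
    """Hypothetical dispatcher mirroring the CASE expression: one branch per unit code."""
    if unit == 'd':            # days
        return start + timedelta(days=n)
    if unit == 'ww':           # weeks
        return start + timedelta(weeks=n)
    if unit == 'm':            # months: do the year/month arithmetic by hand
        total = start.year * 12 + (start.month - 1) + n
        year, month = divmod(total, 12)
        return date(year, month + 1, min(start.day, 28))  # clamp day for simplicity
    raise ValueError("unknown unit: %r" % unit)

print(date_add('ww', 2, date(2013, 1, 1)))  # 2013-01-15
```

As the rejected answer suggests, storing a numeric increment (hours per step) would remove the branching entirely, at the cost of losing calendar-aware month arithmetic.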
I have a stored procedure that executes properly. Now I am trying to add `TRY CATCH` T-SQL statements.
After adding
```
BEGIN TRY
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
SET XACT_ABORT ON
GO
/****** add new column Accountid to Metainformation table ******/
IF NOT EXISTS (SELECT * FROM SYSCOLUMNS WHERE NAME = 'Accountid' AND ID = OBJECT_ID('[GSF].[dbo].[Metainformation]'))
BEGIN
ALTER TABLE [GSF].[dbo].[Metainformation] ADD Accountid int
END
GO
IF EXISTS (SELECT * FROM SYSCOLUMNS WHERE NAME = 'Accountid' AND ID = OBJECT_ID('[GSF].[dbo].[Metainformation]'))
BEGIN
UPDATE [GSF].[dbo].[Metainformation]
SET MP.Accountid = AD.Accountid
FROM [GSF].[dbo].[Metainformation] MI, [GSF].[dbo].[AccountDetails] AD
WHERE MI.DetailID= AD.DetailID
END
GO
```
I get an error on my `GO` statements: "Incorrect syntax near 'GO'".
Any pointers or alternatives to use?
**Updated Code:**
```
USE GSF
GO
/****** add new column AccountId to MetaInformation table ******/
IF NOT EXISTS (SELECT * FROM **SYS.COLUMNS** WHERE NAME = 'AccountId' AND ID = OBJECT_ID('[GSF].[dbo].[MetaInformation]'))
ALTER TABLE [GSF].[dbo].[MetaInformation] ADD AccountId uniqueidentifier
BEGIN TRANSACTION;
BEGIN TRY
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
/****** If the column AccountId exists, Update all the AccountId values in MetaInformation from AccountDetails ******/
IF EXISTS (SELECT * FROM **SYS.COLUMNS** WHERE NAME = 'AccountId' AND ID = OBJECT_ID('[GSF].[dbo].[MetaInformation]'))
BEGIN
UPDATE MP
SET MP.AccountId = AD.AccountId
FROM [GSF].[dbo].[MetaInformation] MP
**INNER JOIN**
[GSF].[dbo].[AccountDetails] AD
ON MP.AllocationDetailId = AD.AllocationDetailId
END
/****** Drop AccountId column from AccountDetails ******/
IF EXISTS (SELECT * FROM SYS.COLUMNS WHERE NAME = 'AccountId' AND ID = OBJECT_ID('[GSF].[dbo].[AccountDetails]'))
ALTER TABLE [GSF].[dbo].[AccountDetails] DROP COLUMN AccountId
/****** add two new PStage values to [PStageToCategory] table ******/
INSERT INTO [GSF].[dbo].[PStageToCategory]
(PStage, PStageToCategoryName)
VALUES(19,1)
INSERT INTO [GSF].[dbo].[PStageToCategory]
(PStage, PStageToCategoryName)
VALUES(21,1)
/****** Drop and create new ViewName view to remove reference of AccountId ******/
USE GSF
IF EXISTS ( SELECT * FROM sys.views WHERE name = 'ViewName')
DROP VIEW ViewName
DECLARE @sql_view NVARCHAR(MAX);
SET @sql_view = '<VIEW DEFINITION>';
EXEC sp_executesql @sql_view;
COMMIT TRANSACTION
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
END CATCH
```
I'm using this script for changing a schema in production, so I am including TRY/CATCH just to surface any error, and a transaction to roll back if the script (which will be run only once) fails partway through. Do you think this makes sense?
Also, I have made changes to my script according to your comments. What do you think? | "GO is not a Transact-SQL statement; it is a command recognized by the sqlcmd and osql utilities and SQL Server Management Studio Code editor" -- Quote from [MSDN](http://technet.microsoft.com/en-us/library/ms188037.aspx).
The sample code below assumes you are just executing a script from SSMS or one of the other utilities.
Here are some improvements to the code.
1 - The TRY/CATCH code blocks need to be in one batch. Therefore, the GO needs to be after the block.
2 - The SYSCOLUMNS table is deprecated and will be removed from the product eventually. See Jason [Strate's](http://www.jasonstrate.com/2012/07/lost-in-translation-deprecated-system-tables-introduction/) blog article on this topic.
3 - You do not need a BEGIN/END code block if there is one statement after the IF expression.
4 - You should use ANSI-92 join syntax; the old-style `*=`/`=*` outer joins no longer work in SQL Server 2012. See Mike Walsh's blog [article](http://www.straightpathsql.com/archives/2012/11/sql-server-join-syntax-its/) on this topic.
5 - There was a typo in which MP.Accountid was used instead of MI.Accountid.
6 - Use semicolons at the end of statements, since there has been some buzz about this being a requirement in the future.
In summary, I use TRY/CATCH blocks when I write a stored procedure and want to return an error code back to the calling application. This sample code just returns information about the error.
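As a footnote to point 1: GO is purely a client-side batch separator, so tools like SSMS and sqlcmd split the script on lines containing only GO and send each batch to the server separately — GO itself never reaches SQL Server. This hypothetical Python sketch (just an illustration of the splitting rule, not part of the T-SQL script) shows the idea:

```python
import re

def split_batches(script):
    """Split a T-SQL script into batches on lines containing only GO,
    the way client tools like SSMS/sqlcmd do (GO never reaches the server)."""
    batches, current = [], []
    for line in script.splitlines():
        # GO must stand alone on its line (case-insensitive, optional spaces)
        if re.fullmatch(r"\s*GO\s*", line, flags=re.IGNORECASE):
            if current:
                batches.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))
    return batches

script = "USE [GSF];\nGO\nSET XACT_ABORT ON;\nGO\nBEGIN TRY\nEND TRY\n"
print(split_batches(script))
# ['USE [GSF];', 'SET XACT_ABORT ON;', 'BEGIN TRY\nEND TRY']
```

This is why a TRY/CATCH construct cannot have a GO inside it: the TRY and CATCH would end up in different batches.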
Sincerely
John
```
--
-- Use correct database and set session settings
--
-- Switch to correct database
USE [GSF];
GO
-- Change session settings
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
SET XACT_ABORT ON;
GO
--
-- My code block
--
BEGIN TRY
-- Add column if it does not exist
IF NOT EXISTS (SELECT * FROM SYS.COLUMNS AS C WHERE C.NAME = 'Accountid' AND C.OBJECT_ID = OBJECT_ID('dbo.Metainformation'))
ALTER TABLE [dbo].[Metainformation] ADD Accountid INT;
-- Update the column regardless
IF EXISTS (SELECT * FROM SYS.COLUMNS AS C WHERE C.NAME = 'Accountid' AND C.OBJECT_ID = OBJECT_ID('dbo.Metainformation'))
UPDATE Metainformation
SET Accountid = AD.Accountid
FROM [dbo].[Metainformation] AS MI JOIN [dbo].[AccountDetails] AS AD ON MI.DetailID = AD.DetailID;
END TRY
--
-- My error handling
--
-- Error Handler
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
END CATCH
GO
``` | Having seen your code, I'd recommend not to use the [`TRY...CATCH`](http://technet.microsoft.com/en-us/library/ms175976.aspx) at all:
> A TRY…CATCH construct cannot span multiple batches[1]. A TRY…CATCH construct cannot span multiple blocks of Transact-SQL statements. For example, a TRY…CATCH construct cannot span two BEGIN…END blocks of Transact-SQL statements and cannot span an IF…ELSE construct.
1 Batches are sent as separate commands to SQL Server. The client tools need to know how to break a long script up into multiple batches to send to the server. By convention, most client tools (Management Studio, OSQL, etc) use [`GO`](http://technet.microsoft.com/en-us/library/ms188037.aspx):
> GO is not a Transact-SQL statement; | Go command Error | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Hello, I am running the query below and I am getting a NULL value for my [SumOfNetSales] amount. Does anyone know what the problem could be?
Also, if I wanted to use the [SumOfAmountShipped] column instead of summing it again in the subtraction equation, how would I do that? Thank you.
```
DECLARE @Rundate datetime
SET @RunDate = '3/11/2013' -- Date they run the report
--Temp Tables to sum up all accural types
Declare @Chargeback table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @Chargeback
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 2
Group By
ShortItemNo,
ProductName
Declare @AdjustedForNetPrice table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @AdjustedForNetPrice
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 2
Group By
ShortItemNo,
ProductName
Declare @AdminFee table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @AdminFee
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 1
Group By
ShortItemNo,
ProductName
Declare @Returns table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @Returns
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 3
Group By
ShortItemNo,
ProductName
Declare @Rebates table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @Rebates
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 4
Group By
ShortItemNo,
ProductName
Declare @ACCPSW1 table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @ACCPSW1
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 5
Group By
ShortItemNo,
ProductName
Declare @CashDiscount table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @CashDiscount
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 6
Group By
ShortItemNo,
ProductName
Declare @INIT1 table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @INIT1
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 7
Group By
ShortItemNo,
ProductName
Declare @Medicaid table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @Medicaid
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 8
Group By
ShortItemNo,
ProductName
Declare @InitialOrderDiscount table (Amount money, ShortItemNo int, ProductName varchar(50))
INSERT @InitialOrderDiscount
SELECT sum(a.AccrualAmount),
ShortItemNo,
ProductName
FROM
Accruals a
WHERE
@RunDate between a.AccrualBeginDate and a.AccrualEndDate and a.enddate = '12/31/2200' and accrualtype = 9
Group By
ShortItemNo,
ProductName
SELECT
rtrim(IMDSC1) [ItemDesc1],
rtrim(IMDSC2) [ItemDesc2],
sum(QuantityShipped) [SumOfQuantityShipped],
sum(ExtendedPrice) [SumOfAmountShipped],
rtrim(IMSRTX) [BrandName],
'' [SumOfNetSaleUnitPrice],
sum(ExtendedPrice) - c.Amount - a.Amount - af.Amount - r.Amount - rr.Amount - ac.Amount - cd.Amount - i.Amount - m.Amount - id.Amount [SumOfNetSales],
'' [SumOfGrossProfit],
'' [SumOfGrossMargin],
'3/11/2013' [Rundate]
FROM
SalesSummary ss join [Product] p
on ss.ShortItemNo = p.SDITM
join JDE_PRODUCTION.PRODDTA.F4101 im
on im.IMITM = p.SDITM
left join @Chargeback c
on c.ShortItemNo = ss.ShortItemNo
left join @AdjustedForNetPrice a
on a.ShortItemNo = ss.ShortItemNo
left join @AdminFee af
on af.ShortItemNo = ss.ShortItemNo
left join @Returns r
on r.ShortItemNo = ss.ShortItemNo
join @Rebates rr
on rr.ShortItemNo = ss.ShortItemNo
left join @ACCPSW1 ac
on ac.ShortItemNo = ss.ShortItemNo
left join @CashDiscount cd
on cd.ShortItemNo = ss.ShortItemNo
left join @INIT1 i
on i.ShortItemNo = ss.ShortItemNo
left join @Medicaid m
on m.ShortItemNo = ss.ShortItemNo
left join @InitialOrderDiscount id
on id.ShortItemNo = ss.ShortItemNo
WHERE
ss.InvoiceDate = @RunDate
GROUP BY
rtrim(IMDSC1),
rtrim(IMDSC2),
rtrim(IMSRTX),
c.Amount,
a.Amount,
af.Amount,
r.Amount,
rr.Amount,
ac.Amount,
cd.Amount,
i.Amount,
m.Amount,
id.Amount
ORDER BY
rtrim(IMDSC1),
rtrim(IMDSC2)
``` | You are getting `null` because one of the column values in the subtraction is `null`. It's like:
```
1000 - null = null
```
To avoid this, you have to use `coalesce(<column>, 0)` (the ANSI SQL way) or the `ISNULL()` function (SQL Server specific). Your subtraction line would look like:
```
sum(coalesce(ExtendedPrice, 0)) - coalesce(c.Amount,0) - coalesce(a.Amount, 0)
- coalesce(af.Amount, 0) - ... and so on ... [SumOfNetSales]
```
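The NULL propagation above is standard SQL behavior, not something specific to SQL Server. As a quick illustration — a sketch using Python's built-in sqlite3 module, where COALESCE works the same way:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Any arithmetic involving NULL yields NULL
null_result = con.execute("SELECT 1000 - NULL").fetchone()[0]
print(null_result)  # None

# COALESCE substitutes 0 for the NULL, so the subtraction survives
fixed = con.execute("SELECT 1000 - COALESCE(NULL, 0)").fetchone()[0]
print(fixed)  # 1000
```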
As for your second point: you can't avoid writing out the [SumOfAmountShipped] expression in the subtraction, because you can't reference an alias that was only just defined in the same SELECT statement. | Slightly difficult to understand your code, but the problem most likely lies with the JOIN.
When using a LEFT JOIN, if a row exists only on the left side, the result contains the left side's columns plus NULLs for the right side.
If you do any arithmetic where a NULL is involved, the outcome will always be NULL.
To get around this use the `ISNULL` function to return 0 instead of the NULL for each value in the subtraction.
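To make the LEFT JOIN behavior concrete, here is a small reproduction sketched with Python's built-in sqlite3; the table and column names are made up for illustration, and sqlite's two-argument IFNULL plays the role of SQL Server's ISNULL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (item INTEGER, price INTEGER);
    CREATE TABLE accruals (item INTEGER, amount INTEGER);
    INSERT INTO sales VALUES (1, 1000), (2, 500);
    INSERT INTO accruals VALUES (1, 100);  -- no accrual row for item 2
""")

# Item 2 has no accrual, so a.amount is NULL and the whole subtraction is NULL
rows = con.execute("""
    SELECT s.item, s.price - a.amount
    FROM sales s LEFT JOIN accruals a ON a.item = s.item
    ORDER BY s.item
""").fetchall()
print(rows)   # [(1, 900), (2, None)]

# Wrapping the nullable side in IFNULL(..., 0) keeps the arithmetic intact
fixed = con.execute("""
    SELECT s.item, s.price - IFNULL(a.amount, 0)
    FROM sales s LEFT JOIN accruals a ON a.item = s.item
    ORDER BY s.item
""").fetchall()
print(fixed)  # [(1, 900), (2, 500)]
```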
Good luck! | Subtracting values in a query | [
"",
"sql",
"sql-server",
""
] |