| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am an SQL beginner and I want to write SQL code that counts the number of entries in a column in two different ways: first just count them, and then count only those values which are greater than 5.
I am using the following code but it is giving an error.
```
SELECT
table1.col1,
Count(table1.col1) AS Expr1,
count (where(table1.col1)>5) as expr2
FROM table1
GROUP BY tabl1.col1
```
The error is about the `where` expression used in: `count (where(table1.col1)>5) as expr2`.
Is there any other SQL function I can use to filter out values greater than 5 before counting?
Thank you. | ```
select
col1,
count(col1) as Expr1,
sum(case when col1 > 5 then 1 else 0 end) as expr2
from table1
group by col1
```
or
```
select
col1,
count(col1) as Expr1,
count(case when col1 > 5 then col1 end) as expr2
from table1
group by col1
```
The first one adds 1 for each row where `col1 > 5`. The second one counts all non-null values of the expression `case when col1 > 5 then col1 end`, which equals `col1` if `col1 > 5` and is `null` otherwise. | You want conditional aggregation:
```
SELECT table1.col1,
Count(table1.col1) AS Expr1,
SUM(case when table1.col1 > 5 then 1 else 0 end) as expr2
FROM table1
GROUP BY table1.col1;
```
Some versions of SQL treat booleans as the integer values `0` and `1`. For these, you can simplify the expression to:
```
SELECT t1.col1,
Count(t1.col1) AS Expr1,
SUM(t1.col1 > 5) as expr2
FROM table1 t1
GROUP BY t1.col1;
```
This version also introduces table aliases, which makes the query more readable; the rewrite also fixes the typo (`tabl1`) in the `group by` clause. | Implement two different Operation on the same Column in a query using SQL | [
"",
"sql",
""
] |
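The conditional-aggregation pattern from the answers above can be sanity-checked against SQLite's in-memory engine (a sketch only — the question names no dialect, and the table contents below are invented):

```python
import sqlite3

# Toy data: col1 values both below and above the threshold of 5.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (col1 INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?)", [(3,), (3,), (7,), (9,), (7,)])

# SUM(CASE ...) adds 1 per matching row; grouped by col1 as in the question.
rows = con.execute("""
    SELECT col1,
           COUNT(col1) AS Expr1,
           SUM(CASE WHEN col1 > 5 THEN 1 ELSE 0 END) AS expr2
    FROM table1
    GROUP BY col1
    ORDER BY col1
""").fetchall()
print(rows)  # [(3, 2, 0), (7, 2, 2), (9, 1, 1)]
```

Note that because the query groups by `col1` itself, each group's `expr2` is either 0 or the full group count; the per-row filter becomes more interesting when grouping by some other column.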
I am getting this error even though there is absolutely no usage of `int` anywhere.
I have this stored procedure
```
ALTER procedure [dbo].[usp_GetFileGuid] @fileType varchar(25)
as
select [id] from fileTypes where dirId = @fileType
```
Here `id` is a `uniqueidentifier` in the `fileTypes` table.
When I execute the following
```
declare @fileGuid uniqueidentifier
exec @fileGuid = usp_GetFileGuid 'accounts'
print @fileGuid
```
I get the following error
```
(1 row(s) affected)
Msg 206, Level 16, State 2, Procedure usp_GetFileGuid, Line 0
Operand type clash: int is incompatible with uniqueidentifier
```
Is there anything wrong with the syntax of assigning the output of a stored procedure to a local variable? Thank you. | You are using the `EXEC @fileGuid = procedure` syntax, which retrieves *return* values, not result sets. Return values are restricted to `INT` and should only be used to return status / error codes, not data.
What you want to do is use an `OUTPUT` parameter:
```
ALTER procedure [dbo].[usp_GetFileGuid]
@fileType varchar(25),
@id UNIQUEIDENTIFIER OUTPUT
AS
BEGIN
SET NOCOUNT ON;
SELECT @id = [id] from dbo.fileTypes where dirId = @fileType;
-- if the procedure *also* needs to return this as a resultset:
SELECT [id] = @id;
END
GO
```
Then for usage:
```
declare @fileGuid uniqueidentifier;
exec dbo.usp_GetFileGuid @fileType = 'accounts', @id = @fileGuid OUTPUT;
print @fileGuid;
``` | The value returned is an int as it is the status of the execution
From [CREATE PROCEDURE (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms187926.aspx)
> Return a status value to a calling procedure or batch to indicate
> success or failure (and the reason for failure).
You are looking for an output parameter.
> OUT | OUTPUT
>
> Indicates that the parameter is an output parameter. Use
> OUTPUT parameters to return values to the caller of the procedure.
> text, ntext, and image parameters cannot be used as OUTPUT parameters,
> unless the procedure is a CLR procedure. An output parameter can be a
> cursor placeholder, unless the procedure is a CLR procedure. A
> table-value data type cannot be specified as an OUTPUT parameter of a
> procedure. | int is incompatible with uniqueidentifier when no int usage | [
"",
"sql",
"sql-server",
""
] |
Does SQLite support common table expressions?
I'd like to run query like that:
```
with temp (ID, Path)
as (
select ID, Path from Messages
) select * from temp
``` | As of version 3.8.3, SQLite supports common table expressions.
[Change log](http://www.sqlite.org/releaselog/3_8_3.html)
[Instructions](http://www.sqlite.org/lang_with.html) | Another solution is to integrate a "CTE to SQLite" translation layer in your application:
"with w as (y) z" => "create temp view w as y;z"
"with w(x) as (y) z" => "create temp table w(x);insert into w y;z"
As an (ugly, desperate, but working) example:
<http://nbviewer.ipython.org/github/stonebig/baresql/blob/master/examples/baresql_with_cte_code_included.ipynb> | Does SQLite support common table expressions? | [
"",
"sql",
"sqlite",
"common-table-expression",
""
] |
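The accepted answer's version requirement is easy to verify from code — for instance via Python's bundled SQLite (illustrative; the `Messages` row below is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
print(sqlite3.sqlite_version)  # needs to be >= 3.8.3 for WITH to parse

con.execute("CREATE TABLE Messages (ID INTEGER, Path TEXT)")
con.execute("INSERT INTO Messages VALUES (1, '/inbox')")

# The question's CTE, verbatim.
rows = con.execute("""
    WITH temp (ID, Path) AS (
        SELECT ID, Path FROM Messages
    )
    SELECT * FROM temp
""").fetchall()
print(rows)  # [(1, '/inbox')]
```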
I have a table called Work Done where some amount of work is carried out on a daily basis. It has columns
## Id, VoucherDt, Amount
Now my report needs to print the sum of the amount for the month to date. For example, if the current date is 3rd September 2013, the query should pick all records of 1st, 2nd and 3rd Sept and return their sum.
I am able to get the first date of the current month, and I am using the condition
`VoucherDt between FirstDate and GetDate()`, but it does not give the desired result. So kindly suggest the proper `where` condition. | ```
SELECT SUM(AMOUNT) SUM_AMOUNT FROM <table>
WHERE VoucherDt >= DATEADD(MONTH, DATEDIFF(MONTH, 0, CURRENT_TIMESTAMP), 0)
AND VoucherDt < DATEADD(DAY, DATEDIFF(DAY, 0, CURRENT_TIMESTAMP), 1)
``` | I think that there might be a better solution but this should work:
```
where YEAR(VoucherDt) = YEAR(CURRENT_TIMESTAMP)
and MONTH(VoucherDt) = MONTH(CURRENT_TIMESTAMP)
and DAY(VoucherDt) <= DAY(CURRENT_TIMESTAMP)
``` | Taking sum of column based on date range in T-Sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
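The accepted answer's half-open date range (from the first of the month up to, but not including, tomorrow) can be sketched in SQLite's date syntax — the original is T-SQL, and both the table contents and the fixed stand-in for "today" ('2013-09-03') are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE work_done (Id INTEGER, VoucherDt TEXT, Amount REAL)")
con.executemany("INSERT INTO work_done VALUES (?,?,?)", [
    (1, '2013-08-31', 10.0),   # previous month: excluded
    (2, '2013-09-01', 20.0),
    (3, '2013-09-03', 30.0),
    (4, '2013-09-04', 40.0),   # after "today": excluded
])

# Half-open range: [start of month, start of tomorrow)
(total,) = con.execute("""
    SELECT SUM(Amount) FROM work_done
    WHERE VoucherDt >= date('2013-09-03', 'start of month')
      AND VoucherDt <  date('2013-09-03', '+1 day')
""").fetchone()
print(total)  # 50.0
```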
I've got a bit of a messy table on my hands that has two fields, a date field and a time field that are both strings. What I need to do is get the minimum date from those fields, or just the record itself if there is no date/time attached to it. Here's some sample data:
```
ID First Last Date Time
1 Joe Smith 2013-09-06 04:00
1 Joe Smith 2013-09-06 02:00
2 Jack Jones
3 John Jack 2013-09-05 06:00
3 John Jack 2013-09-15 15:00
```
What I would want from a query is to get the following:
```
ID First Last Date Time
1 Joe Smith 2013-09-06 02:00
2 Jack Jones
3 John Jack 2013-09-05 06:00
```
The min date/time for IDs 1 and 3, and then just ID 2 back because he doesn't have a date/time. I came up with the following query that gives me IDs 1 and 3 exactly as I would want them:
```
SELECT *
FROM test as t
where
cast(t.date + ' ' + t.time as Datetime ) = (select top 1 cast(p.date + ' ' + p.time as Datetime ) as dtime from test as p where t.ID = p.ID order by dtime)
```
But it doesn't return row number 2 at all. I imagine there's a better way to go about doing this. Any ideas? | You can do this with `row_number()`:
```
select ID, First, Last, Date, Time
from (select t.*,
row_number() over (partition by id order by date, time) as seqnum
from test t
) t
where seqnum = 1;
```
Although storing dates and times as strings is not recommended, you at least do it right. The values use the ISO standard format (or close enough) so alphabetic sorting is the same as date/time sorting. | Assuming `[Date]` and `[Time]` are the types I think they are, and not strings:
```
SELECT ID,[First],[Last],[Date],[Time] FROM
(
SELECT ID,[First],[Last],[Date],[Time],rn = ROW_NUMBER()
OVER (PARTITION BY ID ORDER BY [Date], [Time])
FROM dbo.test
) AS t WHERE rn = 1;
```
Example:
```
DECLARE @x TABLE
(
ID INT,
[First] VARCHAR(32),
[Last] VARCHAR(32),
[Date] DATE,
[Time] TIME(0)
);
INSERT @x VALUES
(1,'Joe ','Smith','2013-09-06','04:00'),
(1,'Joe ','Smith','2013-09-06','02:00'),
(2,'Jack','Jones',NULL, NULL ),
(3,'John','Jack ','2013-09-05','06:00'),
(3,'John','Jack ','2013-09-15','15:00');
SELECT ID,[First],[Last],[Date],[Time] FROM
(
SELECT ID, [First],[Last],[Date],[Time],rn = ROW_NUMBER()
OVER (PARTITION BY ID ORDER BY [Date], [Time])
FROM @x
) AS x WHERE rn = 1;
```
Results:
```
ID First Last Date Time
-- ----- ----- ---------- --------
1 Joe Smith 2013-09-06 02:00:00
2 Jack Jones NULL NULL
3 John Jack 2013-09-05 06:00:00
``` | SQL Statement to Get The Minimum DateTime from 2 String FIelds | [
"",
"sql",
"sql-server",
""
] |
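Both answers' `ROW_NUMBER()` approach also runs on SQLite 3.25+ (window-function support); here it is against the question's sample data, with the missing-date row kept because NULLs sort first:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (ID INT, First TEXT, Last TEXT, Date TEXT, Time TEXT)")
con.executemany("INSERT INTO test VALUES (?,?,?,?,?)", [
    (1, 'Joe',  'Smith', '2013-09-06', '04:00'),
    (1, 'Joe',  'Smith', '2013-09-06', '02:00'),
    (2, 'Jack', 'Jones', None,         None),
    (3, 'John', 'Jack',  '2013-09-05', '06:00'),
    (3, 'John', 'Jack',  '2013-09-15', '15:00'),
])

# One row per ID: the earliest Date/Time (NULLs order first, so ID 2 survives).
rows = con.execute("""
    SELECT ID, First, Last, Date, Time
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Date, Time) AS seqnum
          FROM test t) t
    WHERE seqnum = 1
    ORDER BY ID
""").fetchall()
for r in rows:
    print(r)
```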
If I do:
```
SELECT * FROM A
WHERE conditions
UNION
SELECT * FROM B
WHERE conditions
```
I get the union of the resultset of query of A **and** resultset of query of B.
Is there a way/operator so that I can get a short-circuit `OR` result instead?
I.e. get the result of `SELECT * FROM A WHERE conditions` and **only** if this returns nothing get the resultset of `SELECT * FROM B WHERE conditions`? | The short answer is no; you can avoid the second query, but you must re-run the first:
```
SELECT * FROM A
WHERE conditions
UNION
SELECT * FROM B
WHERE NOT EXISTS (
SELECT * FROM A
WHERE conditions)
AND conditions
```
This assumes the optimizer helps out and short circuits the second query because the result of the NOT EXISTS is false for all rows.
If the first query is much cheaper to run than the second, you would probably gain performance whenever the first query returns rows. | You can do this with a single SQL query as:
```
SELECT *
FROM A
WHERE conditions
UNION ALL
SELECT *
FROM B
WHERE conditions and not exists (select * from A where conditions);
``` | Short-circuit UNION? (only execute 2nd clause if 1st clause has no results) | [
"",
"mysql",
"sql",
"set",
"union",
"resultset",
""
] |
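The accepted answer's `NOT EXISTS` guard can be demonstrated with a pair of invented one-column tables in SQLite (an `ORDER BY` is added to make the output deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A (x INTEGER)")
con.execute("CREATE TABLE B (x INTEGER)")
con.executemany("INSERT INTO A VALUES (?)", [(1,), (2,)])
con.executemany("INSERT INTO B VALUES (?)", [(9,)])

def a_or_else_b(threshold):
    # B's rows appear only when A's filter matched nothing.
    return con.execute("""
        SELECT x FROM A WHERE x > ?
        UNION ALL
        SELECT x FROM B
        WHERE NOT EXISTS (SELECT * FROM A WHERE x > ?)
        ORDER BY x
    """, (threshold, threshold)).fetchall()

print(a_or_else_b(0))  # [(1,), (2,)]  -> A matched, B suppressed
print(a_or_else_b(5))  # [(9,)]       -> A empty, B returned
```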
I need to extract some data from a postgresql database, taking a few elements from each of two tables. The tables contain data relating to physical network devices, where one table is exclusively for mac addresses of these devices. Each device is identified by location (vehicle) and function (dev\_name).
table1 (assets):
```
vehicle
dev_name
dev_serial
dev_model
```
table2: (macs)
```
vehicle
dev_name
mac
interface
```
What i tried:
```
SELECT assets.vehicle, assets.dev_name, dev_model, dev_serial, mac
FROM assets, macs
AND interface = 'E0'
ORDER BY vehicle, dev_name
;
```
But it seems not to be matching vehicle and dev\_name as I thought it would. Instead it seems to print every combination of mac and dev\_serial, which is not the intended output, as I want one line for each.
How would one make sure that it matches the mac address to the device based on assets.dev\_name = macs.dev\_name and assets.vehicle = macs.vehicle?
Note: Some devices in `assets` may not have a recorded mac address in `macs`, in which case I want them displayed anyway with an empty mac | When using a `join` you can specify which columns have to match
```
SELECT a.vehicle, a.dev_name, dev_model, dev_serial, mac
FROM assets a
LEFT JOIN macs m ON m.vehicle = a.vehicle
AND m.dev_name = a.dev_name
AND m.interface = 'E0'
ORDER BY a.vehicle, a.dev_name
``` | Read about [SQL join](http://en.wikipedia.org/wiki/Join_%28SQL%29).
```
select
a.vehicle, a.dev_name, a.dev_model, a.dev_serial, m.mac
from assets as a
inner join macs as m on m.dev_name = a.dev_name and m.vehicle = a.vehicle
where m.interface = 'E0'
order by a.vehicle, a.dev_name
```
In your case it's inner join, but if you want to get all assets and show mac for those with interface E0, you could use [left outer join](http://en.wikipedia.org/wiki/Join_%28SQL%29#Left_outer_join):
```
select
a.vehicle, a.dev_name, a.dev_model, a.dev_serial, m.mac
from assets as a
left outer join macs as m on
m.dev_name = a.dev_name and m.vehicle = a.vehicle and m.interface = 'E0'
order by a.vehicle, a.dev_name
``` | JOIN across two tables | [
"",
"sql",
"database",
"postgresql",
"select",
"join",
""
] |
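A sketch of the two-column LEFT JOIN in SQLite (the question uses PostgreSQL; table contents are invented). The `interface` filter is placed in the `ON` clause so that assets with no recorded mac still appear with a NULL mac, as the asker requested:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE assets (vehicle TEXT, dev_name TEXT, dev_serial TEXT, dev_model TEXT)")
con.execute("CREATE TABLE macs (vehicle TEXT, dev_name TEXT, mac TEXT, interface TEXT)")
con.execute("INSERT INTO assets VALUES ('v1', 'router', 's1', 'm1')")
con.execute("INSERT INTO assets VALUES ('v1', 'switch', 's2', 'm2')")  # no mac recorded
con.execute("INSERT INTO macs VALUES ('v1', 'router', 'aa:bb', 'E0')")

rows = con.execute("""
    SELECT a.vehicle, a.dev_name, a.dev_serial, m.mac
    FROM assets a
    LEFT JOIN macs m ON m.vehicle = a.vehicle
                    AND m.dev_name = a.dev_name
                    AND m.interface = 'E0'
    ORDER BY a.vehicle, a.dev_name
""").fetchall()
print(rows)  # [('v1', 'router', 's1', 'aa:bb'), ('v1', 'switch', 's2', None)]
```

Moving `m.interface = 'E0'` into a `WHERE` clause would silently drop the unmatched `switch` row, because its `m.interface` is NULL after the outer join.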
Take a look at this execution plan: <http://sdrv.ms/1agLg7K>
It’s not estimated, it’s actual. From an actual execution that took roughly **30 minutes**.
Select the second statement (takes 47.8% of the total execution time – roughly 15 minutes).
Look at the top operation in that statement – View Clustered Index Seek over \_Security\_Tuple4.
The operation costs 51.2% of the statement – roughly 7 minutes.
The view contains about 0.5M rows (for reference, log2(0.5M) ~= 19 – a mere 19 steps given the index tree node size is two, which in reality is probably higher).
The result of that operator is zero rows (doesn’t match the estimate, but never mind that for now).
Actual executions – zero.
**So the question is**: how the bleep could that take seven minutes?! (and of course, how do I fix it?)
---
**EDIT**: *Some clarification on what I'm asking here*.
I am **not** interested in general performance-related advice, such as "look at indexes", "look at sizes", "parameter sniffing", "different execution plans for different data", etc.
I know all that already, I can do all that kind of analysis myself.
What I really need is to know **what could cause that one particular clustered index seek to be so slow**, and then **what could I do to speed it up**.
**Not** the whole query.
**Not** any part of the query.
Just that one particular index seek.
**END EDIT**
---
Also note how the second and third most expensive operations are seeks over \_Security\_Tuple3 and \_Security\_Tuple2 respectively, and they only take 7.5% and 3.7% of time. Meanwhile, \_Security\_Tuple3 contains roughly 2.8M rows, which is six times that of \_Security\_Tuple4.
Also, some background:
1. This is the only database from this project that misbehaves.
There are a couple dozen other databases of the same schema, none of them exhibit this problem.
2. The first time this problem was discovered, it turned out that the indexes were 99% fragmented.
Rebuilding the indexes did speed it up, but not significantly: the whole query took 45 minutes before rebuild and 30 minutes after.
3. While playing with the database, I have noticed that simple queries like “select count(\*) from \_Security\_Tuple4” take several minutes. WTF?!
4. However, they only took several minutes on the first run, and after that they were instant.
5. The problem is **not** connected to the particular server, neither to the particular SQL Server instance: if I back up the database and then restore it on another computer, the behavior remains the same. | First I'd like to point out a little misconception here: although the delete statement is said to take nearly 48% of the entire execution, this does not have to mean it takes 48% of the time needed; in fact, the 51% assigned inside that part of the query plan most definitely should NOT be interpreted as taking 'half of the time' of the entire operation!
Anyway, going by your remark that it takes a couple of minutes to do a COUNT(\*) of the table 'the first time', I'm inclined to say that you have an IO issue related to said table/view. Personally I don't like materialized views very much, so I have no real experience with how they behave internally, but normally I would suggest that fragmentation is taking its toll on the underlying storage system. The reason it works fast the second time is because it's much faster to access the pages from the cache than it was when fetching them from disk, especially when they are *all over the place*. (Are there any `(max)` fields in the view?)
Anyway, to find out what is taking so long I'd suggest you take this code out of the trigger it's currently in, 'fake' the inserted and deleted tables, and then try running the queries again, adding timestamps and/or using a program like SQL Sentry Plan Explorer to see how long each part REALLY takes (it has a duration column when you run a script from within the program).
It might well be that you're looking at the wrong part; experience shows that cost and actual execution times are not always as related as we'd like to think. | Observations include:
1. Is this the biggest of these databases that you are working with? If so, size matters to the optimizer. It will make quite a different plan for large datasets versus smaller data sets.
2. The estimated rows and the actual rows are quite divergent. This is most apparent on the fourth query. "delete c from @alternativeRoutes...." where the \_Security\_Tuple5 estimates returning **16** rows, but actually used **235,904** rows. For that many rows an Index Scan could be more performant than Index Seeks. Are the statistics on the table up to date or do they need to be updated?
3. The "select count(\*) from \_Security\_Tuple4" takes several minutes, the first time. The second time is instant. This is because the data is all now cached in memory (until it ages out) and the second query is fast.
4. Because the problem moves with the database then the statistics, any missing indexes, et cetera are in the database. I would also suggest checking that the indexes match with other databases using the same schema.
This is not a full analysis, but it gives you some things to look at. | View Clustered Index Seek over 0.5 million rows takes 7 minutes | [
"",
"sql",
"sql-server",
"performance",
"indexed-view",
"indexed-views",
""
] |
I have the following MySQL query that runs absolutely fine:
```
SELECT a.id, a.event_name, c.name, a.reg_limit-e.places_occupied AS places_available, a.start_date
FROM nbs_events_detail AS a, nbs_events_venue_rel AS b, nbs_events_venue AS c,
(SELECT e.id, COUNT(d.event_id) AS places_occupied FROM nbs_events_detail AS e LEFT JOIN nbs_events_attendee AS d ON e.id=d.event_id GROUP BY e.id) AS e
WHERE a.id=b.event_id AND b.venue_id=c.id AND a.id=e.id AND a.event_status='A' AND a.start_date>=NOW()
ORDER BY a.start_date
```
However, I'm trying to add one more `WHERE` clause, in order to filter the results shown in the column created for the subtraction: `a.reg_limit-e.places_occupied AS places_available`
What I've done so far was adding a `WHERE` clause like the following:
`WHERE places_available>0`
However, if I try to use this instruction, the query will fail and no results will be shown. The error reported is the following: `#1054 - Unknown column 'places_available' in 'where clause'`
In `a.reg_limit` I have numbers, in `e.places_occupied` I have numbers generated by the `COUNT` in the sub-query.
What am I missing? | You [can not use aliases](http://dev.mysql.com/doc/refman/5.0/en/problems-with-alias.html), defined in query, in `WHERE` clause. In order to make your query works, use condition `a.reg_limit-e.places_occupied>0` in `WHERE` clause. | The `WHERE` clause is executed before the `SELECT` statement, so it doesn't know about that new alias `places_available`, [the logical order of operations in Mysql](http://www.bennadel.com/blog/70-SQL-Query-Order-of-Operations.htm) is like this:
> 1. FROM clause
> 2. WHERE clause
> 3. GROUP BY clause
> 4. HAVING clause
> 5. SELECT clause
> 6. ORDER BY clause
As a workaround for this, you can wrap it in a subquery like this:
```
SELECT *
FROM
(
SELECT
a.id,
a.event_name,
c.name,
a.reg_limit-e.places_occupied AS places_available,
a.start_date
FROM nbs_events_detail AS a
INNER JOIN nbs_events_venue_rel AS b ON a.id=b.event_id
INNER JOIN nbs_events_venue AS c ON b.venue_id=c.id
INNER JOIN
(
SELECT e.id,
COUNT(d.event_id) AS places_occupied
FROM nbs_events_detail AS e
LEFT JOIN nbs_events_attendee AS d ON e.id=d.event_id GROUP BY e.id
) AS e ON a.id=e.id
WHERE a.event_status='A' AND a.start_date>=NOW()
) AS t
WHERE places_available>0
ORDER BY start_date;
```
Also try to use the ANSI-92 `JOIN` syntax instead of the old syntax, and use the explicit `JOIN` instead of mixing the conditions in the `WHERE` clause like what I did. | WHERE clause on subtraction between two columns MySQL | [
"",
"mysql",
"sql",
"where-clause",
""
] |
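The wrap-in-a-subquery workaround, reduced to a toy schema in SQLite (the question is MySQL; the table and column names below are invented) — the outer query can filter on the alias because the derived table has already materialized it as a real column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, reg_limit INTEGER, occupied INTEGER)")
con.executemany("INSERT INTO events VALUES (?,?,?)",
                [(1, 10, 4), (2, 5, 5), (3, 8, 9)])

# The derived column is computed in the inner query; the outer WHERE
# can then reference it by name.
rows = con.execute("""
    SELECT * FROM (
        SELECT id, reg_limit - occupied AS places_available
        FROM events
    ) t
    WHERE places_available > 0
""").fetchall()
print(rows)  # [(1, 6)]
```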
I need to compare two strings character by character using T-SQL. Let's assume I have two strings like these:
```
123456789
212456789
```
Every time the characters do not match, I would like to increase the variable @Diff by 1. In this case the first three characters differ, so @Diff = 3 (0 being the default value).
Thank you for all suggestions. | For columns in a table you don't want to use a row-by-row approach; try this one:
```
with cte(n) as (
select 1
union all
select n + 1 from cte where n < 9
)
select
t.s1, t.s2,
sum(
case
when substring(t.s1, c.n, 1) <> substring(t.s2, c.n, 1) then 1
else 0
end
) as diff
from test as t
cross join cte as c
group by t.s1, t.s2
```
[**=>sql fiddle demo**](http://sqlfiddle.com/#!3/32b0a/1) | This code should count the differences in input strings and save this number to counter variable and display the result:
```
declare @var1 nvarchar(MAX)
declare @var2 nvarchar(MAX)
declare @i int
declare @counter int
set @var1 = '123456789'
set @var2 = '212456789'
set @i = LEN(@var1)
set @counter = 0
while @i > 0
begin
if SUBSTRING(@var1, @i, 1) <> SUBSTRING(@var2, @i, 1)
begin
set @counter = @counter + 1
end
set @i = @i - 1
end
select @counter as Value
``` | T-SQL - compare strings char by char | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
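The accepted answer's recursive-CTE tally also works in SQLite (3.8.3+), which makes it easy to check against the question's sample strings:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# cte generates positions 1..9; substr compares one character per position.
(diff,) = con.execute("""
    WITH RECURSIVE cte(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM cte WHERE n < 9
    )
    SELECT SUM(CASE WHEN substr('123456789', n, 1) <> substr('212456789', n, 1)
                    THEN 1 ELSE 0 END)
    FROM cte
""").fetchone()
print(diff)  # 3
```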
I use the following code to create a new database in SQL Server 2008 R2, program language is VB.NET:
```
Dim WEBconnString As String = "Data Source='NewName'; LCID=1032; Case Sensitive = FALSE; SSCE:Database Password=NewDBpass; Encrypt = TRUE;"
Dim engine As New SqlCeEngine(WEBconnString)
engine.CreateDatabase()
```
Additionally:
I open a web server connection with the following instruction:
```
ServerConn.ConnectionString = "Provider=SQLOLEDB; Data Source=" & WebServerName & _
"; Database=master" & _
"; Uid=" & TempUserName & _
"; Pwd=" & TempUserPass
ServerConn.Open()
```
And I also use a data base connection with the following instruction:
```
SQLConn.ConnectionString = "server=" & WebServer & "; Uid=" & AdminName & "; Pwd=" & AdminPassword & "; database=master"
SQLConn.Open()
```
In order to be able to use my instruction, I have already created a temporary database on my ISP's SQL Server, and I'm using the credentials from this database as login name and password.
On first use it works fine, meaning that a database is created somewhere.
Now I'm trying to see this database and I can't find it anywhere.
When I run the same code again (because it seems to me that the database was not created), the engine instruction gives me the error "The database exists".
And the question is: where is the new database created?
Please give me any solution I may need in VB.NET | 1) It seems you are mixing *SqlServerCompact* (a local database which can be used in ASP.NET since version 4.0) and *SqlServer*. `SqlCeEngine` is part of the `System.Data.SqlServerCe` namespace. So you create a SqlCompact file and the `engine.CreateDatabase()` method raises an exception the second time.
The connection string seems correct (for a SqlServerCompact file). If you don't specify the full path in your connection string (just set the database name like here), the database will be created where the `app.exe` is executed (=`|DirectoryPath|`). You will have to look for a file with `.sdf` extension.
2) I don't know what type `ServerConn` is, but since I see `"Provider=SQLOLEDB;"` in your connection string, I guess you are using the `OLEDB` classes. You should instead use the managed SqlServer classes (the `System.Data.SqlClient` namespace), i.e. the `SqlConnection`, `SqlCommand`, ..., objects. If you already use them, then check your [connection string](http://www.connectionstrings.com/), since the provider is wrong.
At any rate, you can't access the file created in 1) in either case.
3) If your goal is to create a SqlServer database, unfortunately there is no `SqlServerEngine` class like in `SqlServerCe`.
To create a database, some possible ways:
* At design time with Sql Server Management studio
* By executing scripts (via Sql Server Management Studio, or via .Net code, ...)
* Using the `Microsoft.SqlServer.Management.Smo` classes (.Net) | It looks like you are opening the server connection to the master database. The master database is where the SQL Server engine keeps all the important information and metadata about all databases on that server.
You need to replace `database=master` with `database=WHATEVER DATABASE NAME YOU CREATED`. If all else fails, go to the SQL Server and run this
```
use master;
select * from sysdatabases
```
That will give you the name of every database that master is 'monitoring'. Find the one you tried to create and replace the `database=master` with `database=the one you found` | Create new Database in SQL Server 2008 R2 | [
"",
"sql",
"database",
""
] |
I am very new to using SQL and I am attempting to build my first query/report, and was hoping I could get some help on this (this seems to be the place to be for that!). Basically what I want to create is a report that shows when the last time an employee or contractor was paid. We have a database with all this information, I just want to return a distinct list of every person with their last pay date. What I end up getting is either a list of every pay we have made (Person1 is on the list 20+ times with each pay date), or a list with every person and the most recent paydate of anyone, not just that person. Here is what I have so far:
```
SELECT table1.Office ,
table1.EE_No ,
table1.Name ,
table1.Code ,
table1.Freq ,
( SELECT DISTINCT
MAX(table2.PayDate)
FROM table2
) AS Last_Paycheck
FROM table1
INNER JOIN table2 ON table1.UniqueID = table2.UniqueID
WHERE table1.EndDate IS NULL
```
What this returns is a list of every employee with 8/30/2013 listed, which is the last time anyone has got paid, but not everyone. What am I doing wrong here with the Max function? I've tried a lot of different ways and no luck, must be missing something obvious here! | You need to try something like
```
Select table1.Office,
table1.EE_No,
table1.Name,
table1.Code,
table1.Freq,
(Select distinct MAX (table2.PayDate)
from table2
where table1.UniqueID = table2.UniqueID
) as Last_Paycheck
from
table1
where
table1.EndDate is null
```
The other way would be something like
```
Select table1.Office,
table1.EE_No,
table1.Name,
table1.Code,
table1.Freq,
MAX (table2.PayDate) as Last_Paycheck
from
table1 inner join
table2
on table1.UniqueID = table2.UniqueID
where
table1.EndDate is null
GROUP BY table1.Office,
table1.EE_No,
table1.Name,
table1.Code,
table1.Freq
```
Also note, as the comment states: use more descriptive table names, as this will greatly improve maintainability later.
```
(Select distinct MAX (table2.PayDate) from table2) as Last_Paycheck
```
To something like this:
```
MAX (table2.PayDate) OVER (PARTITION BY table2.UniqueID) as Last_Paycheck
```
With no `Select` needed - You've already stated the association needed in your join clause.
Hope this does the trick... | SQL: Using Max function and returning max per client, not overall | [
"",
"sql",
"sql-server",
""
] |
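The first suggested fix — a correlated subquery so each person gets their own `MAX` — sketched on an invented payroll schema in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (UniqueID INTEGER, Name TEXT)")
con.execute("CREATE TABLE table2 (UniqueID INTEGER, PayDate TEXT)")
con.executemany("INSERT INTO table1 VALUES (?,?)", [(1, 'Ann'), (2, 'Bob')])
con.executemany("INSERT INTO table2 VALUES (?,?)", [
    (1, '2013-08-30'), (1, '2013-07-31'), (2, '2013-06-28'),
])

# The WHERE inside the subquery ties the MAX to the outer row's person.
rows = con.execute("""
    SELECT t1.Name,
           (SELECT MAX(t2.PayDate) FROM table2 t2
            WHERE t2.UniqueID = t1.UniqueID) AS Last_Paycheck
    FROM table1 t1
    ORDER BY t1.Name
""").fetchall()
print(rows)  # [('Ann', '2013-08-30'), ('Bob', '2013-06-28')]
```

Without that correlating `WHERE`, the subquery returns one global maximum for everyone — exactly the symptom the asker describes.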
I have a table called `Sales` which has an `ID` for each sale and its date. I have another, called `SaleDetail`, which has the details of each sale: the `Article` and the `Quantity` sold. I want to show a list of all the articles and quantities of them that were sold between two dates. I'm using:
```
SELECT
SaleDetail.Article,
SUM(SaleDetail.Quantity) AS Pieces
FROM
SaleDetail
JOIN Sales ON Sales.ID = SaleDetail.ID
GROUP BY SaleDetail.Article
HAVING Sales.Date BETWEEN '10-01-2013' AND '20-01-2013'
```
But it seems to have logical errors, because I get "'Sales.Date' in clause HAVING is not valid, because it is not contained in an aggregate function or in the GROUP BY clause" How should I do it? Thank you. | Here the condition is supposed to be in where clause, but not having clause.
Having only to groups as a whole, whereas the WHERE clause applies to individual rows. Here you are looking for rows with the specific dates.
```
SELECT
SaleDetail.Article,
SUM(SaleDetail.Quantity) AS Pieces
FROM
SaleDetail
JOIN Sales ON Sales.ID = SaleDetail.ID
where Sales.Date BETWEEN '10-01-2013' AND '20-01-2013'
GROUP BY SaleDetail.Article
``` | The HAVING Clause is for use when trying to filter on an Aggregate. In this case, Sales.Date is not being aggregated. So you can just use WHERE.
```
SELECT
SaleDetail.Article,
SUM(SaleDetail.Quantity) AS Pieces
FROM
SaleDetail
JOIN Sales ON Sales.ID = SaleDetail.ID
WHERE Sales.Date BETWEEN '10-01-2013' AND '20-01-2013'
GROUP BY SaleDetail.Article
``` | Restrict joined and grouped SQL results | [
"",
"sql",
"sql-server",
""
] |
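WHERE-before-grouping, as both answers advise, on a toy version of the question's schema in SQLite (invented rows; ISO dates are used instead of the question's ambiguous 'DD-MM-YYYY' literals):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales (ID INTEGER, Date TEXT)")
con.execute("CREATE TABLE SaleDetail (ID INTEGER, Article TEXT, Quantity INTEGER)")
con.executemany("INSERT INTO Sales VALUES (?,?)",
                [(1, '2013-01-12'), (2, '2013-01-25')])
con.executemany("INSERT INTO SaleDetail VALUES (?,?,?)",
                [(1, 'pen', 3), (1, 'ink', 1), (2, 'pen', 2)])

# Filter the rows first, then aggregate what survives.
rows = con.execute("""
    SELECT d.Article, SUM(d.Quantity) AS Pieces
    FROM SaleDetail d JOIN Sales s ON s.ID = d.ID
    WHERE s.Date BETWEEN '2013-01-10' AND '2013-01-20'
    GROUP BY d.Article
    ORDER BY d.Article
""").fetchall()
print(rows)  # [('ink', 1), ('pen', 3)]
```

Sale 2 falls outside the range, so its two pens never reach the `SUM`.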
I have a column called `MealType` (`VARCHAR`) in my table with a `CHECK` constraint for `{"Veg", "NonVeg", "Vegan"}`
That'll take care of insertion.
I'd like to display these options for selection, but I couldn't figure out the SQL query to find out the constraints of a particular column in a table.
From a first glance at system tables in SQL Server, it seems like I'll need to use SQL Server's API to get the info. I was hoping for a SQL query itself to get it. | This query should show you all the constraints on a table:
```
select chk.definition
from sys.check_constraints chk
inner join sys.columns col
on chk.parent_object_id = col.object_id
inner join sys.tables st
on chk.parent_object_id = st.object_id
where
st.name = 'Tablename'
and col.column_id = chk.parent_column_id
```
You can replace the select statement with this:
```
select substring(chk.Definition,2,3),substring(chk.Definition,9,6),substring(chk.Definition,20,5)
``` | Easiest and quickest way is to use:
```
sp_help 'TableName'
``` | How do I get constraints on a SQL Server table column | [
"",
"sql",
"sql-server",
""
] |
I have code that works:
```
CURSOR c_val IS
SELECT 'Percentage' destype,
1 descode
FROM dual
UNION ALL --16028 change from union to union all for better performance
SELECT 'Unit'destype,
2 descode
FROM dual
ORDER BY descode DESC;
--
```
I changed the UNION to UNION ALL. From what I have read it wouldn't make a big difference as long as I have an ORDER BY (which I do). Is this true, or should I just leave it the way it was? It still works both ways. | The choice between UNION and UNION ALL is not about whether one of them works, or which performs faster, but which one is correct for the logic you want to implement.
UNION ALL is simply the results of the two queries combined into a single result set. You should consider the order of the result sets to be random, although in practice you'd *probably* find that it's the first result set followed by the second.
UNION is the same, but with a DISTINCT applied to the combined sets. Therefore it's possible for it to return fewer rows and take longer and consume more resources. You would probably find that the order of rows is different, but again you should assume that the order is random.
Applying an ORDER BY to either one of them is the only way to guarantee an order on the result set. It would affect performance of the UNION ALL, but may not affect UNION too much (on top of the impact of the DISTINCT) because the optimiser might choose a DISTINCT implementation that will return an ordered result set.
So in your case you want both rows to appear (and in fact UNION would not eliminate either of them), and you want the order to be specified -- so based on that, UNION ALL and the ORDER BY you have provided are the correct approaches. | ```
"UNION" = "UNION ALL" + "DISTINCT"
```
So basically by going for `UNION ALL` you save a dedup operation. In this case you may find that the performance gain is negligible but it can be important on large datasets. | does union or union all make a big difference? | [
"",
"sql",
"oracle",
""
] |
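The `UNION = UNION ALL + DISTINCT` identity is easy to see by adding a deliberate duplicate row (SQLite shown, standing in for the Oracle `DUAL`-based cursor):

```python
import sqlite3

con = sqlite3.connect(":memory:")
q = """
    SELECT 'Percentage' AS destype, 1 AS descode
    UNION {kw}
    SELECT 'Unit', 2
    UNION {kw}
    SELECT 'Unit', 2
    ORDER BY descode DESC
"""
# UNION ALL keeps the duplicate; plain UNION dedups it.
rows_all = con.execute(q.format(kw="ALL")).fetchall()
rows_distinct = con.execute(q.format(kw="")).fetchall()
print(rows_all)       # [('Unit', 2), ('Unit', 2), ('Percentage', 1)]
print(rows_distinct)  # [('Unit', 2), ('Percentage', 1)]
```

For the cursor in the question, both rows are already distinct, so the two forms return identical results and the `ORDER BY` fixes the order either way.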
Using SQL Server 2008, this query works great:
```
select CAST(CollectionDate as DATE), CAST(CollectionTime as TIME)
from field
```
Gives me two columns like this:
```
2013-01-25 18:53:00.0000000
2013-01-25 18:53:00.0000000
2013-01-25 18:53:00.0000000
2013-01-25 18:53:00.0000000
.
.
.
```
I'm trying to combine them into a single datetime using the plus sign, like this:
```
select CAST(CollectionDate as DATE) + CAST(CollectionTime as TIME)
from field
```
I've looked on about ten web sites, including answers on this site (like [this one](https://stackoverflow.com/questions/700619/how-to-combine-date-from-one-field-with-time-from-another-field-ms-sql-server)), and they all seem to agree that the plus sign should work but I get the error:
> Msg 8117, Level 16, State 1, Line 1
> Operand data type date is invalid for add operator.
All fields are non-zero and non-null. I've also tried the CONVERT function and tried to cast these results as varchars, same problem. This can't be as hard as I'm making it.
Can somebody tell me why this doesn't work? Thanks for any help. | Assuming the underlying data types are date/time/datetime types:
```
SELECT CONVERT(DATETIME, CONVERT(CHAR(8), CollectionDate, 112)
+ ' ' + CONVERT(CHAR(8), CollectionTime, 108))
FROM dbo.whatever;
```
This will convert `CollectionDate` and `CollectionTime` to char sequences, combine them, and then convert them to a `datetime`.
The parameters to `CONVERT` are `data_type`, `expression` and the optional `style` (see [syntax documentation](https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql#syntax)).
The [date and time `style`](https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql#date-and-time-styles) value `112` converts to an ISO `yyyymmdd` format. The `style` value `108` converts to `hh:mi:ss` format. Evidently both are 8 characters long which is why the `data_type` is `CHAR(8)` for both.
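As an aside, the same combine-as-text idea can be sanity-checked outside SQL Server. Here is a hypothetical sketch using Python's sqlite3, where `datetime()` parses a concatenated `yyyy-mm-dd hh:mm:ss` string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Render each part in a known text format, concatenate, convert back.
combined = conn.execute(
    "SELECT datetime('2013-01-25' || ' ' || '18:53:00')").fetchone()[0]
print(combined)  # 2013-01-25 18:53:00
```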
The resulting combined char sequence is in format `yyyymmdd hh:mi:ss` and then converted to a `datetime`. | The simple solution
```
SELECT CAST(CollectionDate as DATETIME) + CAST(CollectionTime as DATETIME)
FROM field
``` | Combining (concatenating) date and time into a datetime | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"datetime",
""
] |
I will list some queries. Outputs are given later, towards the end.
This query gives me 7 rows.
```
SELECT
S.companyname AS supplier, S.country,
P.productid, P.productname, P.unitprice
FROM Production.Suppliers AS S
LEFT OUTER JOIN Production.Products AS P
ON S.supplierid = P.supplierid
WHERE S.country = N'Japan'
ORDER BY S.country
```
The next query is the same as above except that WHERE is replaced by AND. This gives 34
rows.
```
SELECT
S.companyname AS supplier, S.country,
P.productid, P.productname, P.unitprice
FROM Production.Suppliers AS S
LEFT OUTER JOIN Production.Products AS P
ON S.supplierid = P.supplierid
AND S.country = N'Japan'
ORDER BY S.country
```
I don't understand why the output for the second query is as shown below. Please explain.
---
Output 1 -
```
supplier country productid productname unitprice
Supplier QOVFD Japan 9 Product AOZBW 97.00
Supplier QOVFD Japan 10 Product YHXGE 31.00
Supplier QOVFD Japan 74 Product BKAZJ 10.00
Supplier QWUSF Japan 13 Product POXFU 6.00
Supplier QWUSF Japan 14 Product PWCJB 23.25
Supplier QWUSF Japan 15 Product KSZOI 15.50
Supplier XYZ Japan NULL NULL NULL
```
Output 2 -
```
supplier,country,productid,productname,unitprice
Supplier GQRCV,Australia,NULL,NULL,NULL
Supplier JNNES,Australia,NULL,NULL,NULL
Supplier UNAHG,Brazil,NULL,NULL,NULL
Supplier ERVYZ,Canada,NULL,NULL,NULL
Supplier OGLRK,Canada,NULL,NULL,NULL
Supplier XOXZA,Denmark,NULL,NULL,NULL
Supplier ELCRN,Finland,NULL,NULL,NULL
Supplier ZRYDZ,France,NULL,NULL,NULL
Supplier OAVQT,France,NULL,NULL,NULL
Supplier LVJUA,France,NULL,NULL,NULL
Supplier ZPYVS,Germany,NULL,NULL,NULL
Supplier SVIYA,Germany,NULL,NULL,NULL
Supplier TEGSC,Germany,NULL,NULL,NULL
Supplier KEREV,Italy,NULL,NULL,NULL
Supplier ZWZDM,Italy,NULL,NULL,NULL
Supplier XYZ,Japan,NULL,NULL,NULL
Supplier QWUSF,Japan,13,Product POXFU,6.00
Supplier QWUSF,Japan,14,Product PWCJB,23.25
Supplier QWUSF,Japan,15,Product KSZOI,15.50
Supplier QOVFD,Japan,9,Product AOZBW,97.00
Supplier QOVFD,Japan,10,Product YHXGE,31.00
Supplier QOVFD,Japan,74,Product BKAZJ,10.00
Supplier FNUXM,Netherlands,NULL,NULL,NULL
Supplier NZLIF,Norway,NULL,NULL,NULL
Supplier CIYNM,Singapore,NULL,NULL,NULL
Supplier EQPNC,Spain,NULL,NULL,NULL
Supplier QQYEU,Sweden,NULL,NULL,NULL
Supplier QZGUF,Sweden,NULL,NULL,NULL
Supplier BWGYE,UK,NULL,NULL,NULL
Supplier SWRXU,UK,NULL,NULL,NULL
Supplier VHQZD,USA,NULL,NULL,NULL
Supplier STUAZ,USA,NULL,NULL,NULL
Supplier JDNUG,USA,NULL,NULL,NULL
Supplier UHZRG,USA,NULL,NULL,NULL
``` | A `LEFT JOIN` brings back every record from the 'LEFT' table, unless you limit it, as you did with your `WHERE` criteria.
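A minimal SQLite session (with made-up table names and data, for illustration only) makes the contrast between the two predicate placements visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE suppliers (id INTEGER, country TEXT)")
cur.execute("CREATE TABLE products (supplier_id INTEGER, name TEXT)")
cur.executemany("INSERT INTO suppliers VALUES (?, ?)", [(1, 'Japan'), (2, 'UK')])
cur.executemany("INSERT INTO products VALUES (?, ?)", [(1, 'A'), (2, 'B')])

# Predicate in WHERE: rows from the left table are filtered away.
in_where = cur.execute("""
    SELECT s.id, p.name
    FROM suppliers s
    LEFT JOIN products p ON s.id = p.supplier_id
    WHERE s.country = 'Japan'""").fetchall()

# Same predicate in ON: every left row survives; non-matches get NULLs.
in_on = cur.execute("""
    SELECT s.id, p.name
    FROM suppliers s
    LEFT JOIN products p ON s.id = p.supplier_id AND s.country = 'Japan'
    ORDER BY s.id""").fetchall()

print(in_where)  # [(1, 'A')]
print(in_on)     # [(1, 'A'), (2, None)]
```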
When you change it to be part of the `JOIN` criteria it's not limiting the results, just limiting which records have non NULL values from the right table. | Because you are doing Left join it always returns everything in the first table (Production.Suppliers).
In the first example the Where clause filters the Suppliers table to just show 'Japan'.
The query is something like:
> "Give me all the suppliers **that are in** Japan and show me their products"
When you do a join you are adding conditions to the table you are joining to (Production.Products). Because it's a left join you will ge all of the suppliers back, and only the products that have suppliers that are in Japan. You are saying something like:
> "Give me **all** the suppliers, and give me all the **products where the supplier is in Japan**"
It's a subtle difference but a very important one with regard to left joins.
If it was an `inner join` instead of a `left join` then you wouldn't have this problem, but you would be asking a different question again:
> "Give me all of the suppliers that **have** products in japan" | Understanding the difference between ON with multiple conditions and WHERE in a JOIN | [
"",
"sql",
"sql-server",
""
] |
I have a table with values like 0, 1, 2, 3, -1, -2, 9, 8, etc. I want to put "blank" in the column where the number is less than 0, i.e. it should show all records and wherever the position is less than 0 it should show ''.
I tried using STR but this made my query too slow. The query looks like this:
```
select CASE
WHEN position < 0 THEN ''
ELSE position
END Position, column2
from
(
Select STR (column1) as position, column2 from table
) as t1
```
Can anybody suggest a better way to show the '' in column1? Is there another way around this? What can I use instead of STR?
Update: column1 is int type and I do not want to put null but an empty string. I know I can't show an empty string in an int column so I have to use a cast. Just wanted to be sure that it should not slow down my query; I'm looking for the best option which returns the results in the least time. | You can put the `CASE` expression into the query, and replace `STR` with a `CAST`, like this:
```
SELECT
CASE WHEN position < 0 THEN '' ELSE CAST(column1 as VARCHAR(10)) END as position
, column2
FROM myTable
``` | Try this way:
```
Select
case
when column1 <0 then null
else column1
end as position, column2
from tab
``` | Put blank in sql server column with integer datatype | [
"",
"sql",
"sql-server-2008",
""
] |
I am running below : `sqlplus ABC_TT/asfddd@\"SADSS.it.uk.hibm.sdkm:1521/UGJG.UK.HIBM.SDKM\"`
After that I am executing one stored procedure: `exec HOLD.TRWER`
I want to capture return code of the above stored procedure in unix file as I am running the above commands in unix. Please suggest. | You can store command return value in variable
```
value=`command`
```
And then check its value:
```
echo "$value"
```
For your case to execute oracle commands within shell script,
```
value=`sqlplus ABC_TT/asfddd@\"SADSS.it.uk.hibm.sdkm:1521/UGJG.UK.HIBM.SDKM\" \
exec HOLD.TRWER`
```
I'm not sure about the sql query, but you can get the returned results by using
```
value=`oraclecommand`.
```
To print the returned results of oracle command,
```
echo "$value"
```
To check whether oracle command or any other command executed successfully, just
check with $? value after executing command. Return value is 0 for success and non-zero for failure.
```
if [ $? -eq 0 ]
then
echo "Success"
else
echo "Failure"
fi
``` | I guess you are looking for `spool`
```
SQL> spool output.txt
SQL> select 1 from dual;
1
----------
1
SQL> spool off
```
Now after you exit, the query/stored procedure output will be stored in a file called output.txt | How to capture sqlplus command line output in unix file | [
"",
"sql",
"oracle",
"unix",
"sed",
"awk",
""
] |
Why do I get a syntax error on the following SQL statements:
```
DECLARE @Count90Day int;
SET @Count90Day = SELECT COUNT(*) FROM Employee WHERE DateAdd(day,30,StartDate) BETWEEN
DATEADD(day,-10,GETDATE()) AND DATEADD(day,10,GETDATE()) AND Active ='Y'
```
I am trying to assign the number of rows returned from my Select statement to the variable @Count90Day. | You need parentheses around the subquery:
```
DECLARE @Count90Day int;
SET @Count90Day = (SELECT COUNT(*)
FROM Employee
WHERE DateAdd(day,30,StartDate) BETWEEN DATEADD(day,-10,GETDATE()) AND
DATEADD(day,10,GETDATE()) AND
Active ='Y'
);
```
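T-SQL variables don't exist in SQLite, but the essential fix, wrapping the SELECT in parentheses so it becomes a scalar subquery expression, can be sketched there too (hypothetical table and data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (active TEXT)")
cur.executemany("INSERT INTO employee VALUES (?)", [('Y',), ('Y',), ('N',)])

# The parentheses make the inner SELECT a scalar subquery expression.
count = cur.execute(
    "SELECT (SELECT COUNT(*) FROM employee WHERE active = 'Y')"
).fetchone()[0]
print(count)  # 2
```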
You can also write this without the `set` as:
```
DECLARE @Count90Day int;
SELECT @Count90Day = COUNT(*)
FROM Employee
WHERE DateAdd(day,30,StartDate) BETWEEN DATEADD(day,-10,GETDATE()) AND DATEADD(day,10,GETDATE()) AND
Active ='Y';
``` | You can assign it within the `SELECT`, like so:
```
DECLARE @Count90Day int;
SELECT @Count90Day = COUNT(*)
FROM Employee
WHERE DateAdd(day,30,StartDate) BETWEEN
DATEADD(day,-10,GETDATE()) AND DATEADD(day,10,GETDATE()) AND Active ='Y'
``` | Assign INT Variable From Select Statement in SQL | [
"",
"sql",
"database",
"t-sql",
""
] |
I have a messages table that has both a `to_id` and a `from_id` for a given message.
I'd like to pass a user\_id to this query and have it return a non-duplicate list of id's, representing any users that have a message to/from the supplied user\_id.
What I have right now is (using 12 as the target user\_id) :
```
SELECT from_id, to_id
FROM messages
WHERE
from_id = 12
OR
to_id = 12
```
This does return all records where that user\_id exists, but I'm not sure how to have it only return non-duplicates, and ONLY one field which would be the user\_id that is not 12.
In short, it would return the id's of any users that user 12 has an existing message record with.
I hope I have explained this well enough, I have to believe it's relatively simple that I have not learned yet.
EDIT :
I should have specified that while my current SQL has two fields, I want only one field to be returned --- contact\_id. And there should be no duplicates.
contact\_id is not a field in the messages table, but that is the field name I'd like the query to return, regardless of whether it is returning the from\_id or to\_id | You can use a [`union`](http://dev.mysql.com/doc/refman/5.0/en/union.html):
```
select
from_id as contact_id
from messages
where to_id = 12
union
select
to_id
from messages
where from_id = 12
```
Also, from [the documentation](http://dev.mysql.com/doc/refman/5.0/en/union.html) (emphasis mine):
> The default behavior for UNION is that **duplicate rows are removed** from the result.
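That de-duplication is easy to check in a small SQLite session (hypothetical data; MySQL behaves the same way for UNION):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE messages (from_id INTEGER, to_id INTEGER)")
cur.executemany("INSERT INTO messages VALUES (?, ?)",
                [(12, 5), (7, 12), (12, 5), (3, 4)])

# UNION merges both directions and removes the duplicate 5.
contacts = cur.execute("""
    SELECT from_id AS contact_id FROM messages WHERE to_id = 12
    UNION
    SELECT to_id FROM messages WHERE from_id = 12
    ORDER BY contact_id""").fetchall()
print(contacts)  # [(5,), (7,)]
```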
Alternatively, you could use [`if()`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_if) (or [`case`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#operator_case)):
```
select distinct
if(from_id = 12, to_id, from_id) as contact_id
from messages
where from_id = 12
or to_id = 12
``` | Here's a solution avoiding a union statement.
```
SELECT distinct (case when to_id = 12 then from_id else to_id end) as contact_id
FROM messages
WHERE
from_id = 12
OR
to_id = 12
``` | Return id where one of two fields matches a value, but only the field other than that vaue | [
"",
"mysql",
"sql",
""
] |
I am working on an Access database which keeps records of all the different batches arrived to and shipped from the warehouse. The locations in Warehouse are indicated by Lot\_ID.
The table contains the following data:
```
Batch_ID | Lot_ID |Arrival_date | Shipping_date
------------------------------------------------------
1 | 1 | 2013/7/08 | 2013/8/21
2 | 2 | 2013/7/10 |
3 | 3 | 2013/7/15 | 2013/8/28
4 | 1 | 2013/7/22 | 2013/8/23
5 | 3 | 2013/8/12 |
```
I am trying to write a query which would show only Lot\_IDs which are currently not occupied.
The problem arises because the table contain both current and historical data. My idea is to group the table by Lot\_ID and choose only those groups where Shipping\_date of every row in the group is not null (which means every Batch which was stored in the Lot was shipped already and currently the Lot is free)
So the result would be:
```
Batch_ID
---------
1
```
So what sql query would be better to use in this case? | Try
```
SELECT Lot_ID
FROM Table1
GROUP BY Lot_ID
HAVING SUM(IIF(Shipping_date IS NULL, 1, 0)) = 0
```
Output:
```
|Lot_ID |
|-------|
| 1 |
``` | You should check the below link:
<http://www.w3schools.com/sql/sql_where.asp>
You should use a WHERE clause like 'where shipping\_date = null ' | Select ID from GROUP BY group if the group satisfied criteria | [
"",
"sql",
"select",
"group-by",
""
] |
I have two tables and I want in one query take few values.
Tables:
```
users
|user_id |
|name |
user_group_link
|user_id |
|group_id |
```
This is my query:
```
SELECT users.*, user_group_link.*
FROM users
LEFT JOIN user_group_link ON users.user_id = user_group_link.user_id
WHERE user_group_link.group_id = 10
AND user_group_link.group_id = 11
AND user_group_link.group_id = 12
```
Problem is that AND doesnt work. I don't understand why it does not work cause i used and before and it seemed to work | The problem with this query: there is not a single row that has 10, 11 **AND** 12 as group\_id... you need **OR** here!
```
SELECT users.*, user_group_link.*
FROM users
LEFT JOIN user_group_link ON users.user_id = user_group_link.user_id
WHERE user_group_link.group_id = 10
OR user_group_link.group_id = 11
OR user_group_link.group_id = 12
```
Alternative:
```
SELECT users.*, user_group_link.*
FROM users
LEFT JOIN user_group_link ON users.user_id = user_group_link.user_id
WHERE user_group_link.group_id IN (10,11,12)
``` | `user_group_link.group_id` can't be `10`, `11`, or `12` **at the same time**, you need `OR` instead of `AND`.
So your query should be:
```
SELECT users.*, user_group_link.*
FROM users
LEFT JOIN user_group_link ON users.user_id = user_group_link.user_id
WHERE user_group_link.group_id = 10
OR user_group_link.group_id = 11
OR user_group_link.group_id = 12
``` | How to make right sql query? | [
"",
"mysql",
"sql",
""
] |
```
userid pageid
123 100
123 101
456 101
567 100
```
I want to return all the `userid` with `pageid` 100 AND 101, should give me 123 only. This is probably super easy but I can't figure it out! I tried:
```
SELECT userid
FROM table_name
WHERE pageid=100 AND pageid=101
```
But it gives me 0 results. Any help please | No record has 2 values of `pageid`. You probably want:
```
SELECT userid
FROM table_name
WHERE pageid=100 OR pageid=101
```
In a cleaner version, you can use:
```
SELECT userid
FROM table_name
WHERE pageid IN (100,101)
```
**UPDATE:** Based on the question edit, Here is the answer:
```
SELECT userid
FROM table_name
WHERE pageid IN (100,101)
GROUP BY userid
HAVING COUNT(DISTINCT pageid) = 2
```
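Run against the sample data (here in SQLite, purely for illustration), the query returns only the user that has both pages:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE visits (userid INTEGER, pageid INTEGER)")
cur.executemany("INSERT INTO visits VALUES (?, ?)",
                [(123, 100), (123, 101), (456, 101), (567, 100)])

# Only groups containing both distinct pageids survive the HAVING filter.
both = cur.execute("""
    SELECT userid FROM visits
    WHERE pageid IN (100, 101)
    GROUP BY userid
    HAVING COUNT(DISTINCT pageid) = 2""").fetchall()
print(both)  # [(123,)]
```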
Explanation:
First, the WHERE clause filters out all rows whose `pageid` is not 100 or 101. Then, by grouping on `userid`, we have a list of unique `userid` values having `COUNT(DISTINCT pageid) = 2`, which means each has at least one row with `pageid = 100` and one with `pageid = 101` | According to your last edit you want to find all (distinct) `userid` where a `pageid` of 100 and also 101 exists. You can use `EXISTS`:
```
SELECT DISTINCT userid
FROM TableName t1
WHERE t1.userid=123
AND EXISTS(
SELECT 1 FROM TableName t2
WHERE t2.userid=t1.userid AND pageid = 100
)
AND EXISTS(
SELECT 1 FROM TableName t2
WHERE t2.userid=t1.userid AND pageid = 101
)
```
`DEMO`
I assume you are confusing `AND` with `OR`, this returns multiple records:
```
SELECT DISTINCT userid
FROM TableName
WHERE pageid=100 OR pageid=101
```
`DEMO`
```
USERID
123
567
```
Note that i have used `DISTINCT` to remove duplicates, since you don't want them. | SQL: how to select multiple rows given condition | [
"",
"sql",
""
] |
I have a table that I select "project managers" from and pull data about them.
Each of them has a number of "clients" that they manage.
Clients are linked to their project manager by name.
For example: John Smith has 3 Clients. Each of those Clients has his name in a column called "manager".
Here's what a simple version of the table looks like:
```
name | type | manager
--------------------------------------
John Smith | manager |
Client 1 | client | John Smith
Client 2 | client | John Smith
Client 3 | client | John Smith
John Carry | manager |
Client 4 | client | John Carry
Client 5 | client | John Carry
Client 6 | client | John Carry
```
How can I return the following data?
John Smith - 3 Clients
John Carry - 3 Clients | You should be able to use a self-join and the `count()` aggregate function to get the result:
```
select t.name,
count(t1.name) TotalClients
from yourtable t
inner join yourtable t1
on t.name = t1.manager
group by t.name;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/d72a87/2).
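The self-join can also be sanity-checked with the sample data in SQLite (illustrative only, using a hypothetical `people` table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE people (name TEXT, type TEXT, manager TEXT)")
cur.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    ('John Smith', 'manager', None),
    ('Client 1', 'client', 'John Smith'),
    ('Client 2', 'client', 'John Smith'),
    ('John Carry', 'manager', None),
    ('Client 3', 'client', 'John Carry')])

# Each manager row joins once per row that names it as manager.
totals = cur.execute("""
    SELECT t.name, COUNT(t1.name)
    FROM people t
    INNER JOIN people t1 ON t.name = t1.manager
    GROUP BY t.name
    ORDER BY t.name""").fetchall()
print(totals)  # [('John Carry', 1), ('John Smith', 2)]
```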
If you want the result to include the `Clients` text, then you the `CONCAT()` function:
```
select t.name,
concat(count(t1.name), ' Clients') TotalClients
from yourtable t
inner join yourtable t1
on t.name = t1.manager
group by t.name;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/d72a87/3)
If you want to add a WHERE clause based on the `type`, then you will use:
```
select t.name,
concat(count(t1.name), ' Clients') TotalClients
from yourtable t
inner join yourtable t1
on t.name = t1.manager
where t.type = 'manager'
group by t.name
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/d72a87/11) | You don't need a join here. A simple `group by` can do the work:
```
Select manager, count(*) as totalClients from yourtable group by 1 order by 2 desc
```
This will list all managers and the number of clients they have. The manager with the most clients comes first. This way you omit the join and the query will be faster.
Between the `from` and the `group by` you can add `where` predicates to filter clients or managers or other columns. | SQL count trouble | [
"",
"mysql",
"sql",
""
] |
As I'm sure you'll be able to tell from this question, I am very new and unfamiliar with SQL. After quite some time (and some help from this wonderful website) I was able to create a query that lists almost exactly what I want:
```
Select p1.user.Office,
p1.user.Loc_No,
p1.user.Name,
p1.user.Code,
p1.user.Default_Freq,
(Select distinct MAX(p2.pay.Paycheck_PayDate)
from p2.pay
where p1.user.Client_TAS_CL_UNIQUE = p2.pay.CL_UniqueID) as Last_Paycheck
from
PR.client
where
p1.user.Client_End_Date is null
and p1.user.Client_Region = 'Z'
and p1.user.Client_Office <> 'ZF'
and substring(p1.user. Code,2,1) <> '0'
```
Now I just need to filter this slightly more using the following logic:
If Default\_Freq = 'W' then only output clients with a Last\_Paycheck 7 or more days past the current date
If Default\_Freq = 'B' then only output clients with a Last\_Paycheck 14 or more days past the current date
Etc., Etc.
I know this is possible, but I have no clue how the syntax should start. I believe I would need to use a Case statement inside the Where clause? Any help is greatly appreciated as always! | ```
SELECT
X.p1.user.Office,
X.p1.user.Loc_No,
X.p1.user.Name,
X.p1.user.Code,
X.Default_Freq,
X.Last_Paycheck
FROM
(Select
p1.user.Office,
p1.user.Loc_No,
p1.user.Name,
p1.user.Code,
p1.user.Default_Freq AS Default_Freq,
(Select distinct MAX(p2.pay.Paycheck_PayDate)
from p2.pay
where p1.user.Client_TAS_CL_UNIQUE = p2.pay.CL_UniqueID) as Last_Paycheck
from
PR.client
where
p1.user.Client_End_Date is null
and p1.user.Client_Region = 'Z'
and p1.user.Client_Office <> 'ZF'
and substring(p1.user. Code,2,1) <> '0') X
WHERE (
(X.Default_Freq = 'W' AND (DATEDIFF ( d , X.Last_Paycheck , @currentDate) >= 7))
OR (X.Default_Freq = 'B' AND (DATEDIFF ( d , X.Last_Paycheck , @currentDate) >= 14))
)
``` | Use your current query as a derived table or a CTE (CTEs exist in SQL Server 2005+). And then you can do:
```
;WITH CTE AS
(
--- Your current query here
)
SELECT *
FROM CTE
WHERE (Default_Freq = 'W' AND DATEDIFF(DAY,Last_Paycheck,GETDATE()) >= 7)
OR (Default_Freq = 'B' AND DATEDIFF(DAY,Last_Paycheck,GETDATE()) >= 14)
OR () -- keep going
``` | SQL: Filtering a query with multiple conditions | [
"",
"sql",
"sql-server",
""
] |
I need to add a primary key on a table "usernames"
I have 3 columns in it :
```
userid int(10)
username char(20)
user char(50)
```
and the primary key is set on 'username' field and i used it as a foreign key to link it to another table. Now i need to add primary key on 'userid' field also... so i tried out :
```
alter table `usernames` drop primary key, add primary key(userid,username);
```
and i get an error saying
```
ERROR 1553 (HY000): Cannot drop index 'PRIMARY":needed in a foreign key constraint
```
is there any possible way to do this ?? | There is:
1. Drop the FK constraint
2. Drop PK Constraint
3. Create New PK
4. Add Unique Constraint on the name column
5. Recreate FK
Raj | I assume you use MySQL (even though you tagged your question as SQL Server).
You can decide to:
Disable all checks and try to remove the primary key, but the new one must have the same name
or
Drop the foreign key constraints that refer to your primary key, then remove your primary key, and finally re-add the foreign keys | adding a primary key when a foreign key already exists | [
"",
"mysql",
"sql",
"sqlite",
"mysql-workbench",
""
] |
I have the following situation:
I need to update customer information on a field in a database, specifically for customers not in the EU. I've already selected and filtered the customers; now I've got the following question.
There is a field, let's call it "order_note", which I need to update. I know how to do that normally, but some of the fields contain notes that have been set by hand and I don't want to lose them; I also want to add a "Careful! Think of XY here" in the field - best before the other information. Is it possible to update a field that already contains content without deleting it?
Thanks for any advice | ```
UPDATE OrderTable
SET order_note = 'Careful! Think of XY here. ' + order_note
WHERE order_id = 1
``` | ```
update customer_information
set order_note = 'Careful! Think of XY here!\n' || order_note
where customer_region != 'EU'
``` | SQL: Update Records without losing their contents | [
"",
"sql",
""
] |
Suppose we have a table:
```
╔═════════════════════════════════════╗
║ Name Date Value ║
╠═════════════════════════════════════╣
║ John 2013-01-01 10:20:00 10 ║
║ John 2013-01-01 12:20:11 20 ║
║ Mark 2013-01-01 11:44:10 10 ║
║ Mark 2013-01-02 12:00:00 20 ║
║ Mark 2013-01-03 15:20:00 20 ║
║ Tim 2013-01-01 15:11:12 5 ║
║ Tim 2013-01-03 18:44:44 10 ║
║ Tim 2013-01-03 20:11:00 15 ║
╚═════════════════════════════════════╝
```
And using a single `SELECT` query, output:
```
╔════════════════════════════════════════════════╗
║ Name 2013-01-01 2013-01-02 2013-01-03 ║
╠════════════════════════════════════════════════╣
║ John 30 0 0 ║
║ Mark 10 20 20 ║
║ Tim 5 0 25 ║
╚════════════════════════════════════════════════╝
```
* The 3 days are fixed (2013-01-01, 2013-01-02, 2013-01-03).
How do you do this in a single `SELECT`? I tried with `SUM(DISTINCT)` but no success. I cannot figure out the logic.
It must be `GROUP BY Name` only (I think), but how would I compute the `SUM()` by intervals? | Try this:
```
SELECT NAME,
SUM(CASE
WHEN CAST(DATE AS DATE) = '2013-01-01' THEN VALUE
ELSE 0
END) [2013-01-01],
SUM(CASE
WHEN CAST(DATE AS DATE) = '2013-01-02' THEN VALUE
ELSE 0
END) [2013-01-02],
SUM(CASE
WHEN CAST(DATE AS DATE) = '2013-01-03' THEN VALUE
ELSE 0
END) [2013-01-03]
FROM TABLE1
GROUP BY NAME
```
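Here is the same pivot-by-CASE pattern checked in SQLite for illustration (`date()` strips the time part, standing in for `CAST(... AS DATE)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (name TEXT, d TEXT, value INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ('John', '2013-01-01 10:20:00', 10),
    ('John', '2013-01-01 12:20:11', 20),
    ('Mark', '2013-01-02 12:00:00', 20)])

# One SUM(CASE ...) column per fixed date.
pivot = cur.execute("""
    SELECT name,
           SUM(CASE WHEN date(d) = '2013-01-01' THEN value ELSE 0 END),
           SUM(CASE WHEN date(d) = '2013-01-02' THEN value ELSE 0 END)
    FROM t
    GROUP BY name
    ORDER BY name""").fetchall()
print(pivot)  # [('John', 30, 0), ('Mark', 0, 20)]
```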
Take a look at the working example on [SQL Fiddle](http://www.sqlfiddle.com/#!6/805fb/5). | If the dates are fixed:
```
SELECT [Name],
SUM(CASE WHEN [Date] >= '20130101'
AND [Date] < '20130102' THEN Value END) [2013-01-01],
SUM(CASE WHEN [Date] >= '20130102'
AND [Date] < '20130103' THEN Value END) [2013-01-02],
SUM(CASE WHEN [Date] >= '20130103'
AND [Date] < '20130104' THEN Value END) [2013-01-03]
FROM YourTable
GROUP BY [Name]
``` | SELECT SUM () by DISTINCT intervals (Date) | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a stored procedure that runs nightly.
It pulls some data from a linked server and inserts it into a table on the server where the sql agent job runs. Before the INSERT statement is run, the procedure checks if the database on the linked server is online (STATE = 0). If not the INSERT statement is not run.
```
IF EXISTS(
SELECT *
FROM OPENQUERY(_LINKEDSERVER,'
SELECT name, state FROM sys.databases
WHERE name = ''_DATABASENAME'' AND state = 0')
)
BEGIN
INSERT INTO _LOCALTABLE (A, B)
SELECT A, B FROM _LINKEDSERVER._DATABASENAME.dbo._REMOTETABLE
END
```
But the procedure gives an error (deferred prepare could not be completed) when the remote database is in restore mode. This is because the statement between BEGIN and END is evaluated **before** the whole script is run. Also when the IF evaluation is not true. And because \_DATABASENAME is in restore mode this already gives an error.
As a workaround I placed the INSERT statement in an execute function:
```
EXECUTE('INSERT INTO _LOCALTABLE (A, B)
SELECT A, B FROM _LINKEDSERVER._DATABASENAME.dbo._REMOTETABLE')
```
But is there another more elegant solution to prevent the evaluation of this statement before this part of the sql is used?
My scenario involves a linked server. Off course the same issue is when the database is on the same server.
I was hoping for some command I am not aware of yet, that prevents evaluation syntax inside an IF:
```
IF(Evaluation)
BEGIN
PREPARE THIS PART ONLY IF Evaluation IS TRUE.
END
```
edit regarding answer:
I tested:
```
IF(EXISTS
(
SELECT *
FROM sys.master_files F WHERE F.name = 'Database'
AND state = 0
))
BEGIN
SELECT * FROM Database.dbo.Table
END
ELSE
BEGIN
SELECT 'ErrorMessage'
END
```
Which still generates this error:
Msg 942, Level 14, State 4, Line 8
Database 'Database' cannot be opened because it is offline. | I don't think there's a way to conditionally prepare only part of a t-sql statement (at least not in the way you've asked about).
The underlying problem with your original query isn't that the remote database is sometimes offline, it's that the query optimizer can't create an execution plan when the remote database is offline. In that sense, the offline database is effectively like a syntax error, i.e. it's a condition that prevents a query plan from being created, so the whole thing fails before it ever gets a chance to execute.
The reason `EXECUTE` works for you is because it defers compilation of the query passed to it until run-time of the query that calls it, which means you now have potentially two query plans, one for your main query that checks to see if the remote db is available, and another that doesn't get created unless and until the `EXECUTE` statement is actually executed.
So when you think about it that way, using `EXECUTE` (or alternatively, `sp_executesql`) is not so much a workaround as it is one possible solution. It's just a mechanism for splitting your query into two separate execution plans.
With that in mind, you don't necessarily have to use dynamic SQL to solve your problem. You could use a second stored procedure to achieve the same result. For example:
```
-- create this sp (when the remote db is online, of course)
CREATE PROCEDURE usp_CopyRemoteData
AS
BEGIN
INSERT INTO _LOCALTABLE (A, B)
SELECT A, B FROM _LINKEDSERVER._DATABASENAME.dbo._REMOTETABLE;
END
GO
```
Then your original query looks like this:
```
IF EXISTS(
SELECT *
FROM OPENQUERY(_LINKEDSERVER,'
SELECT name, state FROM sys.databases
WHERE name = ''_DATABASENAME'' AND state = 0')
)
BEGIN
exec usp_CopyRemoteData;
END
```
Another solution would be to not even bother checking to see if the remote database is available, just try to run the `INSERT INTO _LOCALTABLE` statement and ignore the error if it fails. I'm being a bit facetious, here, but unless there's an `ELSE` for your `IF EXISTS`, i.e. unless you do something different when the remote db is offline, you're basically just suppressing (or ignoring) the error anyway. The functional result is the same in that no data gets copied to the local table.
You could do that in t-sql with a try/catch, like so:
```
BEGIN TRY
/* Same definition for this sp as above. */
exec usp_CopyRemoteData;
/* You need the sp; this won't work:
INSERT INTO _LOCALTABLE (A, B)
SELECT A, B FROM _LINKEDSERVER._DATABASENAME.dbo._REMOTETABLE
*/
END TRY
BEGIN CATCH
/* Do nothing, i.e. suppress the error.
Or do something different?
*/
END CATCH
```
To be fair, this would suppress all errors raised by the sp, not just ones caused by the remote database being offline. And you still have the same root issue as your original query, and would need a stored proc or dynamic SQL to properly trap the error in question. BOL has a pretty good example of this; see the "Errors Unaffected by a TRY…CATCH Construct" section of this page for details: <http://technet.microsoft.com/en-us/library/ms175976(v=sql.105).aspx>
The bottom line is that you need to split your original query into separate batches, and there are lots of ways to do that. The best solution depends on your specific environment and requirements, but if your actual query is as straightforward as the one presented in this question then your original workaround is probably a good solution. | Unfortunately, the only way to keep sql from preparsing the statement is with dynamic SQL. I have the same issue where I add a column and then want to update it. There is no way to get the script to compile without making the update statement dynamic sql. | prevent error when target database in restore mode (sql preparation) | [
"",
"sql",
"sql-server-2008",
"prepared-statement",
""
] |
Not sure if the title really makes the question clear, but basically I am using "SELECT DISTINCT" in a combobox to look at distinct log times. However, the format of the log time column (which cannot be changed) is, for example:
AUGUST 29th 9:21pm
AUGUST 29th 9:22pm
This is no good as even using distinct will judge these as distinct because they are.
WHAT I WANT : I want an sql statement which will look at the values and select a distinct value based on just the first part e.g AUGUST 29th and not take the time into account.
Anyone got any ideas/code ?
UPDATE ::
I used ypercube's code and it works. However, for the filter I want it so that when I select a date, e.g. 29/08/2013, it then brings the datagrid cursor to the cell where the logtime matches this date; however, the logtime must stay in the format "August 29th 9:21pm". Any ideas how to do this in C#? | If the data is stored in a `DATETIME` column and you are on a recent version of SQL Server, you can cast to `DATE`:
```
SELECT DISTINCT
CAST(created_at AS DATE) AS date_created_at
FROM
SSISLog ;
``` | I guess you want something like following
```
SELECT DISTINCT(DATE(created_at)) FROM users;
``` | Select distinct values using like | [
"",
"sql",
"sql-server",
"distinct",
""
] |
I have a problem here,
Have an SSRS report for Bank Transfer see attached

I would like to add a row expression which will Sum the total amount of the same bank, i.e,
03001 - Standard Chartered Bank Ltd BJL, 03002 - Standard Chartered Bank Ltd sk and 03002 - Standard Chartered Bank Base are all standard charters, i would like to get a total of all standard charters GMD figures. if need more clarification, please ask.
NB: Banks which are together, e.g. Standard Chartered above, have a common field called BankAName. So the sum condition can be set to check if BankAName is the same. | I ended up creating a new group that groups bank branches by bank, then created a sum per group.
Thank you guys, your answers gave me a new perspective. | You'll need something like this:
```
=Sum(IIf(Fields!BankAName.Value = "Standard Chartered Bank"
, Fields!Amount.Value
, Nothing)
, "DataSet1")
```
This checks for a certain field (e.g. `BankAName`), and if it's a certain value that row's `Amount` value will be added to the total - this seems to be what you're after. You may have to modify for your field names/values.
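The reporting-layer `IIf` above is the same idea as conditional aggregation in SQL itself. A minimal sketch using Python's built-in SQLite driver (bank names and amounts are made up, not the report's real data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (bank_name TEXT, amount REAL)")
conn.executemany("INSERT INTO transfers VALUES (?, ?)", [
    ("Standard Chartered Bank", 100.0),
    ("Standard Chartered Bank", 250.0),
    ("Trust Bank", 75.0),
])
# Only rows matching the condition contribute to the total:
total = conn.execute(
    "SELECT SUM(CASE WHEN bank_name = 'Standard Chartered Bank' "
    "THEN amount ELSE 0 END) FROM transfers").fetchone()[0]
print(total)  # 350.0
```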
By setting the Scope of the aggregate to the Dataset this will apply to all rows in the table; you can modify this as required. | SSRS Sum Expression with Condition | [
"",
"sql",
"reporting-services",
"ssrs-expression",
""
] |
I have this Table which contains **Term**, **TermStartDate** and **TermEndDate**. I have to consider **Today's date**, check which term it falls under (Current Term), and then consider a date exactly 360 days from the TermEndDate of the Current term (Calculated\_Date).
Once I get that date I have to check which term falls after it. Basically, what is the TermStartDate that falls after the Calculated\_Date?
Note:
Basically, I need all the records of the students who fall between current term and exactly a year before the Current term. say for example, if this term is Fall 2013 , I would need records from Spring 2013. Should not consider Fall 2012.
**Edit:**
Sample Table
```
Term TermStartDate TermEndDate
Fall 2012 2012/08/27 2012/12/15
Spring 2013 2013/01/14 2013/04/26
Sumr I 2013 2013/05/06 2013/06/29
Sumr II 2013 2013/07/01 2013/08/24
Fall 2013 2013/08/26 2013/12/14
Spring 2014 2014/01/13 2014/04/26
```
Step 1: GetDate()
Step 2: Check the TermEndDate falling just after the GetDate() (Gives the Current term)
Step 3: Calculate the date exactly 360 days before the Current Term End Date
Step 4: The First Term That Falls after the date that is Calculated in Step 3 | I think you are overcomplicating the problem, but as you requested, try this:
```
DECLARE @terms TABLE(term varchar(50),termStartDate date, termEndDate date)
INSERT INTO @terms VALUES('Fall 2012','8/27/2012','12/15/2012')
INSERT INTO @terms VALUES('Spring 2013','1/14/2013','4/26/2013')
INSERT INTO @terms VALUES('Sumr I 2013','5/6/2013','6/29/2013')
INSERT INTO @terms VALUES('Sumr II 2013','7/1/2013','8/24/2013')
INSERT INTO @terms VALUES('Fall 2013','8/26/2013','12/14/2013')
INSERT INTO @terms VALUES('Spring 2014','1/13/2014','4/26/2014')
DECLARE @today date =GETDATE()
SELECT @today = termEndDate
FROM @terms
WHERE termStartDate<=@today AND termEndDate>=@today
SELECT term
FROM @terms
WHERE termStartDate>=DATEADD(d,-360,@today) AND termStartDate<=GETDATE()
```
This will list all terms included in the period 360 days prior to the end of the current term.
**UPDATE**
```
SELECT min(termStartDate)startDate FROM (
SELECT termStartDate
FROM @terms
GROUP BY termStartDate
HAVING termStartDate>=DATEADD(d,-360,@today)
AND termStartDate<=GETDATE()
)z
```
will get the startDate for the earliest term. | ```
DECLARE @lastTermEnd datetime
SELECT @lastTermEnd=DATEADD(d,-360,TermEndDate)
FROM Students
where GETDATE() between TermStartDate and TermEndDate
SELECT TOP 1 *
from Students
WHERE TermStartDate between @lastTermEnd and GETDATE()
ORDER BY TermStartDate
```
This will list first term which fall after the calculated date.
UPDATE:
```
DECLARE @lastTermEnd datetime
DECLARE @TermEnd datetime
SELECT @TermEnd=TermEndDate
FROM Students
where GETDATE() between TermStartDate and TermEndDate
SET @lastTermEnd=DATEADD(d,-360,@TermEnd)
SELECT TOP 1 TermStartDate,@TermEnd
from Students
WHERE TermStartDate between @lastTermEnd and GETDATE()
ORDER BY TermStartDate
``` | Date a year from now and check what is the next Term from that Date | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
Is it possible to display query results like below within mysql shell?
```
mysql> select code, created_at from my_records;
code created_at
1213307927 2013-04-26 09:52:10
8400000000 2013-04-29 23:38:48
8311000001 2013-04-29 23:38:48
3 rows in set (0.00 sec)
```
instead of
```
mysql> select code, created_at from my_records;
+------------+---------------------+
| code | created_at |
+------------+---------------------+
| 1213307927 | 2013-04-26 09:52:10 |
| 8400000000 | 2013-04-29 23:38:48 |
| 8311000001 | 2013-04-29 23:38:48 |
+------------+---------------------+
3 rows in set (0.00 sec)
```
The reason I'm asking because I have some tedious task that I need to copy the output and paste it on other tool. | > **--raw, -r**
For tabular output, the “boxing” around columns enables one column value to be distinguished from another. For nontabular output (such as is produced in batch mode or when the --batch or --silent option is given), special characters are escaped in the output so they can be identified easily. Newline, tab, NUL, and backslash are written as \n, \t, \0, and \\. The --raw option disables this character escaping.
The following example demonstrates tabular versus nontabular output and the use of raw mode to disable escaping:
```
% mysql
mysql> SELECT CHAR(92);
+----------+
| CHAR(92) |
+----------+
| \ |
+----------+
% mysql --silent
mysql> SELECT CHAR(92);
CHAR(92)
\\
% mysql --silent --raw
mysql> SELECT CHAR(92);
CHAR(92)
\
```
From [**MySQL Docs**](http://dev.mysql.com/doc/refman/5.1/en/mysql-command-options.html#option_mysql_raw) | Not exactly what you need, but it might be useful. Add \G at the end of the query
```
select code, created_at from my_records\G;
```
Query result will look like this:
```
*************************** 1. row ***************************
code: 1213307927
created_at: 2013-04-26 09:52:10
*************************** 2. row ***************************
code: 8400000000
created_at: 2013-04-29 23:38:48
``` | Display query results without table line within mysql shell ( nontabular output ) | [
"",
"mysql",
"sql",
""
] |
I have a table that looks like this:
```
create (DATETIME) | Message (TEXT)
=========================================
2013-08-05 9:00:00 "Hi there!"
2013-08-05 10:00:00 "Hi again!"
2013-08-07 14:00:00 "Hi from the future!"
```
I would like to be able to get all messages and group on `DATE(create)` so the messages are grouped by `2013-08-05`, `2013-08-07` and so on. This is easily done by an `ORDER BY`:
```
SELECT * FROM messages ORDER BY create
```
The problem is that I would like to be able to paginate by groups. For example, if I have 10 different dates, I would like to show all messages 5 date groups per page and paginate.
I have seen a lot of solutions for `LIMIT`ing the number of elements within a group, but haven't seen any solutions to `LIMIT` the number of groups.
I would basically like results that look like this:
If `LIMIT`ing to only show 1 group:
```
2013-08-05 9:00:00 "Hi there!"
2013-08-05 10:00:00 "Hi again!"
```
If `LIMIT`ing to show 2 groups:
```
2013-08-05 9:00:00 "Hi there!"
2013-08-05 10:00:00 "Hi again!"
2013-08-07 14:00:00 "Hi from the future!"
```
If `LIMIT`ing to show 1 group skipping the first one:
```
2013-08-07 14:00:00 "Hi from the future!"
```
How can this be achieved in SQL? | It is clearer with the example data :)
Query #1: 1 group counting from 0
```
SELECT m1.* FROM messages m1 JOIN (
SELECT DISTINCT date(created) OnlyDate FROM messages
ORDER BY OnlyDate
LIMIT 0,1
) m2 ON date(m1.created) = OnlyDate;
| CREATED | MESSAGE |
|---------------------|-----------|
| 2013-08-05 9:00:00 | Hi there! |
| 2013-08-05 10:00:00 | Hi again! |
```
Query #2: 1 group counting from 1
```
SELECT m1.* FROM messages m1 JOIN (
SELECT DISTINCT date(created) OnlyDate FROM messages
ORDER BY OnlyDate
LIMIT 1,1
) m2 ON date(m1.created) = OnlyDate
| CREATED | MESSAGE |
|---------------------|---------------------|
| 2013-08-07 14:00:00 | Hi from the future! |
```
You just handle the groups with the limit clause (the first value is how many groups to skip and the second is how many groups to take). The query should be quite efficient too, and is not hard to read either.
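To make the pattern concrete, here is a small sketch of query #2 (one group, skipping the first) using Python's built-in SQLite driver and the sample data from the question; SQLite's `LIMIT 1 OFFSET 1` plays the role of MySQL's `LIMIT 1,1`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (created TEXT, message TEXT)")
conn.executemany("INSERT INTO messages VALUES (?, ?)", [
    ("2013-08-05 09:00:00", "Hi there!"),
    ("2013-08-05 10:00:00", "Hi again!"),
    ("2013-08-07 14:00:00", "Hi from the future!"),
])
# LIMIT applies to the distinct dates (the groups), not the messages:
rows = conn.execute("""
    SELECT m.created, m.message
    FROM messages m
    JOIN (SELECT DISTINCT date(created) AS d FROM messages
          ORDER BY d LIMIT 1 OFFSET 1) g
      ON date(m.created) = g.d
""").fetchall()
print(rows)  # only the 2013-08-07 group
```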
Fiddle [here](http://sqlfiddle.com/#!2/3c3f1/1). | I think you would need to calculate the group id and do the `limit` manually:
```
SELECT m.*
FROM messages m join
(select date(create) as createdate, count(*) as cnt, @rn := @rn + 1 as groupnum
from messages m cross join (select @rn := 0) const
group by date(create)
) md
on date(m.create) = createdate
WHERE groupnum between 1 and 5;
``` | Paginating date data in groups | [
"",
"mysql",
"sql",
"group-by",
"pagination",
"sql-order-by",
""
] |
I recently discovered included columns in SQL Server indexes. Do included columns in an index take up extra memory or are they stored on disk?
Also, can someone point me to the performance implications of including columns of differing data types as included columns in a Primary Key, which in my case is typically an int?
Thanks. | I don't fully understand the question: "Do included columns in an index take up extra memory or are they stored on disk?" Indexes are both stored on disk (for persistence) and in memory (for performance when being used).
The answer to your question is that the non-key columns are stored in the index and hence are stored both on disk and memory, along with the rest of the index. Included columns do have a significant performance advantage over key columns in the index. To understand this advantage, you have to understand the key values may be stored more than once in a b-tree index structure. They are used both as "nodes" in the tree and as "leaves" (the latter point to the actual records in the table). Non-key values are stored only in leaves, providing potentially a big savings in storage.
Such a savings means that more of the index can be stored in memory in a memory-limited environment. And that an index takes up less memory, allowing memory to be used for other things.
The use of included columns is to allow the index to be a "covering" index for queries, with a minimum of additional overhead. An index "covers" a query when all the columns needed for the query are in the index, so the index can be used instead of the original data pages. This can be a significant performance savings.
The place to go to learn more about them is the Microsoft [documentation](http://technet.microsoft.com/en-us/library/ms190806.aspx). | In SQL **Server 2005 or upper versions**, you can extend the functionality of nonclustered indexes by adding nonkey columns to the leaf level of the nonclustered index.
By including nonkey columns, you can create nonclustered indexes that cover more queries. This is because the nonkey columns have the following benefits:
• They can be data types not allowed as index key columns. ( All data types are allowed except **text**, **ntext**, and **image**.)
• They are not considered by the Database Engine when calculating the number of index key columns or index key size. You can include nonkey columns in a nonclustered index to avoid exceeding the current index size limitations of a **maximum of 16 key** columns and a maximum index key size of **900** bytes.
An index with included nonkey columns can significantly improve query performance when all columns in the query are included in the index either as key or nonkey columns. Performance gains are achieved because the query optimizer can locate all the column values within the index; table or clustered index data is not accessed resulting in fewer disk I/O operations.
Example:
```
Create Table Script
CREATE TABLE [dbo].[Profile](
[EnrollMentId] [int] IDENTITY(1,1) NOT NULL,
[FName] [varchar](50) NULL,
[MName] [varchar](50) NULL,
[LName] [varchar](50) NULL,
[NickName] [varchar](50) NULL,
[DOB] [date] NULL,
[Qualification] [varchar](50) NULL,
[Profession] [varchar](50) NULL,
[MaritalStatus] [int] NULL,
[CurrentCity] [varchar](50) NULL,
[NativePlace] [varchar](50) NULL,
[District] [varchar](50) NULL,
[State] [varchar](50) NULL,
[Country] [varchar](50) NULL,
[UIDNO] [int] NOT NULL,
[Detail1] [varchar](max) NULL,
[Detail2] [varchar](max) NULL,
[Detail3] [varchar](max) NULL,
[Detail4] [varchar](max) NULL,
PRIMARY KEY CLUSTERED
(
[EnrollMentId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
```
**Stored procedure script**
```
CREATE Proc [dbo].[InsertIntoProfileTable]
As
BEGIN
SET NOCOUNT ON
Declare @currentRow int
Declare @Details varchar(Max)
Declare @dob Date
set @currentRow =1;
set @Details ='Let''s think about the book. Every page in the book has the page number. All information in this book is presented sequentially based on this page number. Speaking in the database terms, page number is the clustered index. Now think about the glossary at the end of the book. This is in alphabetical order and allow you to quickly find the page number specific glossary term belongs to. This represents non-clustered index with glossary term as the key column. Now assuming that every page also shows "chapter" title at the top. If you want to find in what chapter is the glossary term, you have to lookup what page # describes glossary term, next - open corresponding page and see the chapter title on the page. This clearly represents key lookup - when you need to find the data from non-indexed column, you have to find actual data record (clustered index) and look at this column value. Included column helps in terms of performance - think about glossary where each chapter title includes in addition to glossary term. If you need to find out what chapter the glossary term belongs - you don''t need to open actual page - you can get it when you lookup the glossary term. So included column are like those chapter titles. Non clustered Index (glossary) has addition attribute as part of the non-clustered index. Index is not sorted by included columns - it just additional attributes that helps to speed up the lookup (e.g. you don''t need to open actual page because information is already in the glossary index).'
while(@currentRow <=200000)
BEGIN
insert into dbo.Profile values( 'FName'+ Cast(@currentRow as varchar), 'MName' + Cast(@currentRow as varchar), 'MName' + Cast(@currentRow as varchar), 'NickName' + Cast(@currentRow as varchar), DATEADD(DAY, ROUND(10000*RAND(),0),'01-01-1980'),NULL, NULL, @currentRow%3, NULL,NULL,NULL,NULL,NULL, 1000+@currentRow,@Details,@Details,@Details,@Details)
set @currentRow +=1;
END
SET NOCOUNT OFF
END
GO
```
Using the above SP you can **insert 200000 records** at one time.
You can see that there is a **clustered index on column “EnrollMentId”.**
Now Create a non-Clustered index on “ UIDNO” Column.
Script
```
CREATE NONCLUSTERED INDEX [NonClusteredIndex-20140216-223309] ON [dbo].[Profile]
(
[UIDNO] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
```
Now Run the following Query
```
select UIDNO,FName,DOB, MaritalStatus, Detail1 from dbo.Profile --Takes about 30-50 seconds and return 200,000 results.
```
Query 2
```
select UIDNO,FName,DOB, MaritalStatus, Detail1 from dbo.Profile
where DOB between '01-01-1980' and '01-01-1985'
--Takes about 10-15 seconds and return 36,479 records.
```
Now drop the above non-clustered index and re-create with following script
```
CREATE NONCLUSTERED INDEX [NonClusteredIndex-20140216-231011] ON [dbo].[Profile]
(
[UIDNO] ASC,
[FName] ASC,
[DOB] ASC,
[MaritalStatus] ASC,
[Detail1] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
```
**It will throw the following error**
Msg 1919, Level 16, State 1, Line 1
Column 'Detail1' in table 'dbo.Profile' is of a type that is invalid for use as a key column in an index.
**Because we can not use varchar(Max) datatype as key column.**
Now Create a non-Clustered Index with included columns using following script
```
CREATE NONCLUSTERED INDEX [NonClusteredIndex-20140216-231811] ON [dbo].[Profile]
(
[UIDNO] ASC
)
INCLUDE ( [FName],
[DOB],
[MaritalStatus],
[Detail1]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
```
Now Run the following Query
```
select UIDNO,FName,DOB, MaritalStatus, Detail1 from dbo.Profile --Takes about 20-30 seconds and return 200,000 results.
```
Query 2
```
select UIDNO,FName,DOB, MaritalStatus, Detail1 from dbo.Profile
where DOB between '01-01-1980' and '01-01-1985'
--Takes about 3-5 seconds and return 36,479 records.
``` | SQL Server 2008 included columns | [
"",
"sql",
"sql-server",
"indexing",
""
] |
I have data where I need to add leading zeros to it. The problem is that the data type is float, so whenever I add zeros, it automatically omits them. I have tried to add a leading zero and then convert the value to varchar(50).
I used following query (thanks to Mr. Damien\_The\_Unbeliever):
```
select '0' + convert (varchar(50), (wallet_sys)) as wallet_sys from NewSysData1
```
However, the result shows 01.72295e+009 instead of 01718738312.
What should I do?
PS: Some of the sample data are below:
```
1718738312, 8733983312, 9383879312
```
I want these to be:
```
01718738312, 08733983312, 09383879312
``` | you can use the [str() function](http://msdn.microsoft.com/en-us/library/ms189527.aspx) and trim
```
select '0' + ltrim(rtrim(str(wallet_sys))) as wallet_sys from NewSysData1;
```
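The underlying problem is the float-to-string conversion, which can fall back to scientific notation; `STR()` forces a plain decimal rendering. The same idea sketched in Python (sample values from the question; the fixed leading zero mirrors the query above):

```python
# Go through an exact integer representation before padding, so no
# scientific notation like '1.72295e+009' can leak into the string:
values = [1718738312.0, 8733983312.0, 9383879312.0]
padded = ["0" + str(int(v)) for v in values]
print(padded)  # ['01718738312', '08733983312', '09383879312']
```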
[SqlFiddle](http://sqlfiddle.com/#!6/77883/1) | I have done this using the following:
```
select right('00' + convert(varchar, MONTH(getdate())),2)
``` | Add leading zero and convert as varchar to a float | [
"",
"sql",
"sql-server",
""
] |
Can we use a regular expression to select the item from the database?
The table item is like below
```
Table column|name|
10.01.02 | a |
100.2.03 | b |
1021.10.04 | c |
```
Now my problem is that I need to select the column and get the substring like below:
```
Table column|name|
10.01 | a |
100.2 | b |
1021.10 | c |
```
any suggestion with regular expression or substring? | Try this,
```
SELECT SUBSTRING(column1,0,(CHARINDEX('.',column1,CHARINDEX('.',column1)+1)))
FROM Table1
```
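For reference, the transformation itself is just "drop the last dot-separated segment"; a tiny Python sketch of the expected output (values from the question):

```python
values = ["10.01.02", "100.2.03", "1021.10.04"]
# rsplit on the last '.' keeps everything before the final segment:
trimmed = [v.rsplit(".", 1)[0] for v in values]
print(trimmed)  # ['10.01', '100.2', '1021.10']
```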
[SQL Fiddle Demo](http://sqlfiddle.com/#!3/bc9ee/8) | ```
select left(a, datalength(a) - charindex('.', reverse(a))), b
from (
select '10.01.02', 'a' union all
select '100.2.03', 'b' union all
select '1021.10.04', 'c'
) t(a,b)
```
I think there is no solution with regexp.
**UPD**
Another solution:
```
select reverse(parsename(reverse(a), 1)) + '.' + reverse(parsename(reverse(a), 2)), b
from (
select '10.01.02', 'a' union all
select '100.2.03', 'b' union all
select '1021.10.04', 'c'
) t(a,b)
``` | Sql Server select with regular expression | [
"",
"sql",
"sql-server",
"regex",
"database",
"subquery",
""
] |
I have a row in the table `users` with the username `test`. For some reason, though, this query returns an empty result set.
```
SELECT `id` FROM `users` WHERE `username` = "test" AND `id` != null;
```
However, if I remove the `` `id` != null `` segment, the query returns the result `id = 1`.
But `1` != `NULL`. How is this happening?
The `id` field is non-nullable and is auto-increment.
Thanks! | The query doesn't return a row because the predicate " `id != NULL` " will never return TRUE.
The reason for this is that boolean logic in SQL is three-valued: a boolean can have values of `TRUE`, `FALSE` or `NULL`.
And an inequality comparison will return `NULL` whenever one (or both) of the values being compared is `NULL`.
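You can reproduce this three-valued behaviour outside MySQL too. A small sketch using Python's built-in SQLite driver, which follows the same rule (the table and data are made up to mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'test')")
# `id != NULL` evaluates to NULL, never TRUE, so WHERE filters the row out:
bad = conn.execute(
    "SELECT id FROM users WHERE username = 'test' AND id != NULL").fetchall()
# `id IS NOT NULL` is the standard way to test for NULL:
good = conn.execute(
    "SELECT id FROM users WHERE username = 'test' AND id IS NOT NULL").fetchall()
print(bad, good)  # [] [(1,)]
```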
The SQL standard means to compare to a NULL is to use `id IS NULL` or `id IS NOT NULL`. MySQL also adds a convenient null-safe comparison operator which will return TRUE or FALSE:
`col <=> NULL`. Or, in your case `NOT (col <=> NULL)` | Your method of checking for NULL is probably the issue. In MySQL, try the following:
```
SELECT `id` FROM `users` WHERE `username` = "test" AND `id` IS NOT NULL;
```
To check for NULL and an empty string, you can use:
```
SELECT `id`
FROM `users`
WHERE `username` = "test"
AND (`id` IS NOT NULL OR `id` != "");
``` | Why does this simple MySQL query not return the row? | [
"",
"mysql",
"sql",
"select",
"null",
""
] |
I have a list of products and a count corresponding to the quantity sold in a single table. The data is laid out as such:
```
Product Name QTY_SOLD
Mouse 23
Keyboard 25
Monitor 56
TV 10
Laptop 45
...
```
I want to create a group ID where groups are created if the ROLLING sum of the quantity sold is greater than 50. We can order by Product Name to get an output similar to the following.
```
Product Name QTY_SOLD GROUP_NBR
Keyboard 25 1
Laptop 45 1
Monitor 56 2
Mouse 23 3
TV 10 3
```
I created a case statement to produce the output I need, but if I want to change the group id cutoff from 50 to, say, 100, or if I get more products and quantities, I have to keep changing the case statement. Is there an easy way to use either recursion or some other method to accommodate this?
This works on Teradata 13.10
```
UPDATE main FROM prod_list AS main,
(
SEL PROD_NAME
, QTY_SOLD
, SUM(QTY_SOLD) OVER (ORDER BY PROD_NAME ROWS UNBOUNDED PRECEDING) RUNNING FROM prod_list
) inr
SET GROUP_NBR = CASE
WHEN RUNNING < 50 THEN 1
WHEN RUNNING > 50 AND RUNNING < 100 THEN 2
WHEN RUNNING > 100 AND RUNNING < 150 THEN 3
WHEN RUNNING > 150 AND RUNNING < 200 THEN 4
WHEN RUNNING > 200 AND RUNNING < 250 THEN 5
ELSE 6
END
WHERE main.PROD_NAME = inr.PROD_NAME ;
``` | When I first saw your question I thought it was a kind of bin-packing problem. But your query looks like you simply want to put your data into **n** buckets :-)
Teradata supports the QUANTILE function, but it's deprecated and it doesn't fit your requirements as it creates buckets with equal size. You need WIDTH\_BUCKET which creates (as the name implies) buckets of equal width:
```
SELECT
PROD_id
, COUNT(DISTINCT PROD_ID) AS QTY
, SUM(QTY) OVER (ORDER BY QTY ROWS UNBOUNDED PRECEDING) RUNNING
, WIDTH_BUCKET(RUNNING, 1, 120*2000000, 120) AS GRP_NBR
FROM TMP_WORK_DB.PROD_LIST
GROUP BY 1
```
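If your platform lacks `WIDTH_BUCKET`, its equal-width semantics are easy to emulate. A rough Python sketch (the boundaries 1 and 151 and the bucket count 3 are illustrative assumptions, not the query's real values):

```python
def width_bucket(value, low, high, buckets):
    # Mirrors SQL semantics: below low -> 0, at/above high -> buckets + 1,
    # otherwise 1 + floor((value - low) * buckets / (high - low)).
    if value < low:
        return 0
    if value >= high:
        return buckets + 1
    return 1 + int((value - low) * buckets / (high - low))

running_totals = [23, 48, 104, 127, 137]
print([width_bucket(v, 1, 151, 3) for v in running_totals])  # [1, 1, 3, 3, 3]
```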
You can easily change the size of a bucket (2000000) or the number of buckets (120). | Create a reference table and join it...then the change only needs to be done in a table (can even create a procedure to help automate the changes to the table later on)
Pseudo create:
```
Create table group_nbr (low_limit,upper_limit,group_nbr)
```
Insert your case values into that table and inner join to it using greater than and less than conditions.
```
select *, group_nbr.group_nbr
from table inner join group_nbr on RUNNING > low_limit and RUNNING < upper_limit
```
Code won't quite work as it sits there, but hopefully you get the idea well enough to alter your code to it. I find leaving these values in reference tables like this far easier than altering code. You can even allow multiple group\_nbr setups by adding a 'group\_id' to the group\_nbr table and having group\_id 1 be one set of running limits and group\_id of 2,3,4,5 etc having different sets of running limits and use a where clause to choose which group\_id you want to use. | TERADATA Creating Group ID from Rolling SUM Limit | [
"",
"sql",
"group-by",
"sum",
"teradata",
""
] |
I wrote a stored procedure to import and transform data from one database to another. Each import would take a single company ID and import all data related to this company.
To help with the transformation step I use temporary tables. As part of script review, I was told to use table variables rather than temporary tables.
The reviewer claims that if we run two different imports at the same time, the temporary table would be shared and corrupt the import.
---
**Questions:**
* Is it true that the temporary table would be shared if we run two different imports at the same time?
* Does each call to `EXEC` create a new scope?
---
Here is a contrived example of the script.
```
CREATE PROC [dbo].[ImportCompany]
(
@CompanyId AS INTEGER
)
AS
EXEC [dbo].[ImportAddress] @CompanyId = @CompanyId
--Import other data
CREATE PROC [dbo].[ImportAddress]
(
@CompanyId AS INTEGER
)
AS
CREATE TABLE #Companies (OldAddress NVARCHAR(128), NewAddress NVARCHAR(128))
INSERT INTO #Companies(OldAddress, NewAddress)
SELECT
Address as OldAddress,
'Transformed ' + Address as NewAddress
FROM
[OldDb].[dbo].[Addresses]
WHERE
CompanyId = @CompanyId
--Do stuff with the transformed data
DROP TABLE #Companies
EXEC [dbo].[ImportCompany] @CompanyId = 12345
``` | From [`CREATE TABLE`](http://technet.microsoft.com/en-us/library/ms174979.aspx):
> Local temporary tables are visible only in the current session
and (more importantly):
> If a local temporary table is created in a stored procedure or application that can be executed at the same time by several users, the Database Engine must be able to distinguish the tables created by the different users *[sic - almost certainly this should say sessions not users]*. The Database Engine does this by internally appending a numeric suffix to each local temporary table name.
Which exactly rebuts the point of whoever said that they would be shared.
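As an aside, the per-session behaviour is easy to observe in any engine with session-scoped temp tables. This SQLite sketch is only an analogy (not SQL Server itself): two connections share one database, yet the temp table is visible only to the session that created it:

```python
import sqlite3

# Two connections to the same shared in-memory database:
a = sqlite3.connect("file::memory:?cache=shared", uri=True)
b = sqlite3.connect("file::memory:?cache=shared", uri=True)
a.execute("CREATE TEMP TABLE Companies (OldAddress TEXT)")
a.execute("INSERT INTO Companies VALUES ('addr')")
print(a.execute("SELECT COUNT(*) FROM Companies").fetchone())  # (1,)
try:
    b.execute("SELECT COUNT(*) FROM Companies")
    visible_to_b = True
except sqlite3.OperationalError:  # no such table: Companies
    visible_to_b = False
print(visible_to_b)  # False
```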
---
Also, there's no need to `DROP TABLE` at the end of your procedure (from same link again):
> A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished | `##` is used for global temporary tables - will be available to the different imports.
`#` is used for local temporary tables and only available in the current/inner scope. | Scope of temporary tables in SQL Server | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a table in MS Access with rows which have a column called "repeat"
I want to `SELECT` all the rows, duplicated by their "repeat" column value.
For example, if repeat is 4, then I should return 4 rows of the same values. If repeat is 1, then I should return only one row.
This is very similar to this answer:
<https://stackoverflow.com/a/6608143>
Except I need a solution for MS Access. | First create a "Numbers" table and fill it with numbers from 1 to 1000 (or up to whatever value the "Repeat" column can have):
```
CREATE TABLE Numbers
( i INT NOT NULL PRIMARY KEY
) ;
INSERT INTO Numbers
(i)
VALUES
(1), (2), ..., (1000) ;
```
then you can use this:
```
SELECT t.*
FROM TableX AS t
JOIN
Numbers AS n
ON n.i <= t.repeat ;
``` | If repeat has only small values you can try:
```
select id, col1 from table where repeat > 0
union all
select id, col1 from table where repeat > 1
union all
select id, col1 from table where repeat > 2
union all
select id, col1 from table where repeat > 3
union all ....
``` | How can I select multiple copies of the same row? | [
"",
"sql",
"ms-access",
""
] |
I am working on an SSIS job that contains a complex query.
It has some thing like :
```
some sql statements
left outer join
(
select query joining two more tables )
table1
```
Here, I am unable to understand what that table1 means. Is it a kind of temporary view that is created? This table1 is used in other parts of the query. But actually, table1 does exist in the database.
Is it like the results of the select query in the parentheses are created as table1?
Please clarify this for me.
I am not able to put down my code because of Security Policies | `table1` is an *alias* for your subquery. It's the name of the subquery, which you can use to qualify columns, for example `table1.col1` | **[Here is an SQL Fiddle example](http://sqlfiddle.com/#!8/14284/8)**
**Below is the sample query**
```
Select Temp1.id,Table1.id Table1_id
from Temp1
left Outer join
(
Select Temp2.id
from Temp2
join Temp3
On Temp2.id = Temp3.id
) Table1
on Temp1.id = Table1.Id
```
In above example `table1` is the Alias for data coming from `joins`of two tables (`temp2` and `temp3`) | Unable to understand query | [
"",
"sql",
"sql-server-2008",
""
] |
I have the following query:
```
SELECT DISTINCT
YEAR(DateRegistered) as Years,
Months.[MonthName],
COUNT(UserID)as totalReg
FROM
Months WITH(NOLOCK)
LEFT OUTER JOIN
UserProfile WITH(NOLOCK)
ON
Months.MonthID = MONTH(DateRegistered)
AND
DateRegistered > DATEADD(MONTH, -12,GETDATE())
GROUP BY YEAR(DateRegistered), Months.[MonthName]
ORDER BY Months.[MonthName]
```
As you can tell this will always bring back 12 months worth of data. As such it is working, although there is a bug with this method.
It creates NULL values in months where there is no data; now the record should exist (the whole point of the query), but the Year field is returning NULLs, which is something I don't want.
Now I understand the problem is because there is no data, how is it supposed to know what year?
So my question is - is there any way to sort this out and replace the nulls? I suspect I will have to completely change my methodology.
```
**YEAR** **MONTH** **TOTAL**
2013 April 1
2013 August 1
NULL December 0
2013 February 8
2013 January 1
2013 July 1
NULL June 0
2013 March 4
NULL May 0
NULL November 0
NULL October 0
2012 September 3
``` | If you want 12 months of data, then construct a list of numbers from 1 to 12 and use these as offsets with `getdate()`:
```
with nums as (
select 12 as level union all
select level - 1
from nums
where level > 1
)
select YEAR(thedate) as Years,
Months.[MonthName],
COUNT(UserID) as totalReg
FROM (select DATEADD(MONTH, - nums.level, GETDATE()) as thedate
from nums
) mon12 left outer join
Months WITH (NOLOCK)
on month(mon12.thedate) = months.monthid left outer join
UserProfile WITH (NOLOCK)
ON Months.MonthID = MONTH(DateRegistered) and
DateRegistered > DATEADD(MONTH, -12, GETDATE())
GROUP BY YEAR(thedate), Months.[MonthName]
ORDER BY Months.[MonthName];
```
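The same numbers-from-a-recursive-CTE trick works in most engines. A sketch using Python's built-in SQLite driver (a fixed anchor date stands in for `GETDATE()` so the output is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Recursive CTE generating offsets 1..12, each turned into a month offset:
rows = conn.execute("""
    WITH RECURSIVE nums(level) AS (
        SELECT 1 UNION ALL SELECT level + 1 FROM nums WHERE level < 12
    )
    SELECT date('2013-08-15', '-' || level || ' months') FROM nums
""").fetchall()
print(len(rows), rows[0], rows[-1])  # 12 months, newest to oldest
```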
I find something strange about the query though. You are defining the span from the current date. However, you seem to be splitting the months themselves on calendar boundaries. I also find the table `months` to be awkward. Why aren't you just using the `datename()` and `month()` functions? | Try this out:
```
;With dates as (
Select DateName(Month, getdate()) as [Month],
DatePart(Year, getdate()) as [Year],
1 as Iteration
Union All
Select DateName(Month,DATEADD(MONTH, -Iteration, getdate())),
DatePart(Year,DATEADD(MONTH, -Iteration, getdate())),
Iteration + 1
from dates
where Iteration < 12
)
SELECT DISTINCT
d.Year,
d.Month as [MonthName],
COUNT(up.UserID)as totalReg
FROM dates d
LEFT OUTER JOIN UserProfile up ON d.Month = DateName(DateRegistered)
And d.Year = DatePart(Year, DateRegistered)
GROUP BY d.Year, d.Month
ORDER BY d.Year, d.Month
``` | Normalization of Year bringing nulls back | [
"",
"sql",
"sql-server-2008",
""
] |
I've got a many to many table p\_to\_t
```
+----------+-----------+-----------+
| p | t | weight |
+----------+-----------+-----------+
| 1 | bob | 40 |
+----------+-----------+-----------+
| 2 | sue | 24 |
+----------+-----------+-----------+
| 3 | bob | 90 |
+----------+-----------+-----------+
| 4 | joe | 55 |
+----------+-----------+-----------+
| 5 | bob | 33 |
+----------+-----------+-----------+
```
I'm looking to query for value t WHERE p IN(1,2,4,5) while summing the weight for each value of t.
This query: `"SELECT t, SUM(weight) AS sWeight FROM p_to_t WHERE p IN(1,2,4,5)";` just adds all the sums to the first "t" selected, e.g.:
```
+-----------+-----------+
| t | weight |
+-----------+-----------+
| bob | 152 |
+-----------+-----------+
```
When what I would like is:
```
+-----------+-----------+
| t | weight |
+-----------+-----------+
| bob | 97 |
+-----------+-----------+
| joe | 55 |
+-----------+-----------+
```
Thoughts? | Just add a `GROUP BY` clause on t
```
SELECT
t,
SUM(weight) AS sWeight
FROM p_to_t
WHERE p IN(1,2,4,5)
GROUP BY t
```
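A quick check with the question's exact data, sketched here with Python's built-in SQLite driver, confirms the per-`t` sums:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE p_to_t (p INTEGER, t TEXT, weight INTEGER)")
conn.executemany("INSERT INTO p_to_t VALUES (?, ?, ?)", [
    (1, "bob", 40), (2, "sue", 24), (3, "bob", 90),
    (4, "joe", 55), (5, "bob", 33),
])
rows = conn.execute("""
    SELECT t, SUM(weight) FROM p_to_t
    WHERE p IN (1, 2, 4, 5) GROUP BY t ORDER BY t
""").fetchall()
print(rows)  # [('bob', 73), ('joe', 55), ('sue', 24)]
```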
By the way, you will also get *Sue* with these values in the IN clause...
And not `97` for *Bob*, as `40 + 33` is `73` | ```
SELECT t, SUM(weight) AS sWeight FROM p_to_t WHERE p IN(1,2,4,5) GROUP BY t
```
**[FIDDLE](http://sqlfiddle.com/#!2/da370/1)** | MYSQL Summing of Column With Unique Results | [
"",
"mysql",
"sql",
"group-by",
"sum",
""
] |
I have the following two tables with the following data. I would like to return all data when the two tables are joined. For instance: `SELECT t1.data, t2.data FROM t1 INNER JOIN t2 ON t2.t1_id=t2.id WHERE t1.id=1;` Now the tricky part: I don't want to return 3 rows but only one, and I would like t2.data to be a CSV. For instance, the above query would return `"bla1","hi1,hi2,hi3"` (if there are no join results, then null or "", and not ","). Is this fairly easy with just SQL, or am I better off using PHP, etc.? If so, with just SQL, how? Thanks
```
t1
-id
-data
t2
-id
-t1_id
-data
t1
-id=1, data="bla1"
-id=2, data="bla2"
-id=3, data="bla3"
t2
-id=1, t1_id=1, data=hi1
-id=2, t1_id=1, data=hi2
-id=3, t1_id=1, data=hi3
-id=4, t1_id=2, data=hi4
``` | You can use [`GROUP_CONCAT`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) which concatenates non-null values from a group using a delimiter (comma by default)
```
SELECT t1.data, GROUP_CONCAT(t2.data)
FROM t1 JOIN t2
ON t1.id = t2.t1_id
WHERE t1.id = 1;
```
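The same pattern can be sanity-checked offline with SQLite, which also ships a compatible `GROUP_CONCAT` aggregate — a minimal sketch using the question's data:

```python
import sqlite3

# SQLite also ships GROUP_CONCAT, so the query above runs verbatim;
# the tables and rows below are copied from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INTEGER, data TEXT)")
con.execute("CREATE TABLE t2 (id INTEGER, t1_id INTEGER, data TEXT)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(1, "bla1"), (2, "bla2"), (3, "bla3")])
con.executemany("INSERT INTO t2 VALUES (?, ?, ?)",
                [(1, 1, "hi1"), (2, 1, "hi2"), (3, 1, "hi3"), (4, 2, "hi4")])

row = con.execute(
    "SELECT t1.data, GROUP_CONCAT(t2.data) "
    "FROM t1 JOIN t2 ON t1.id = t2.t1_id "
    "WHERE t1.id = 1"
).fetchone()
print(row)  # ('bla1', 'hi1,hi2,hi3')
```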
Example on SQLFiddle: <http://sqlfiddle.com/#!2/68154/5> | You can use `CONCAT` or `CONCAT_WS` to "stick" column values together and `GROUP_CONCAT` to "stick" row values together.
Example:
```
SELECT
x, y, z
FROM
table;
```
Turns into:
```
SELECT
GROUP_CONCAT(CONCAT('"', x, '","', y, '","', z, '"') SEPARATOR '\n')
FROM
table
GROUP BY x; -- considering x would be the unique row identifier
```
The above example will return exactly one cell (one row with one column). | INNER JOIN table and return all joined results as CSV (not separate records) | [
"",
"mysql",
"sql",
""
] |
I want to make a query that returns all customers, but match orders only if they were placed in February 2008. This requires a join of tables Customers and Orders.
I used two queries and got different answers. The second one is right, but why?
Logically speaking, I was expecting the first one to be right, but it's not.
```
SELECT Cust.custid, Cust.contactname, Ord.orderid, Ord.orderdate
FROM Sales.Customers AS Cust LEFT OUTER JOIN
Sales.Orders Ord
ON Cust.custid = Ord.custid
WHERE Ord.orderdate >= '2008-02-01'
AND Ord.orderdate < '2008-03-01'
order by Cust.custid, Ord.orderdate asc
```
Second query -
```
SELECT C.custid, C.companyname, O.orderid, O.orderdate
FROM Sales.Customers AS C
LEFT OUTER JOIN Sales.Orders AS O
ON C.custid = O.custid
AND O.orderdate >= '20080201'
AND O.orderdate < '20080301';
```
---
Result sets for queries 1 and 2 added below. Putting the AND part in brackets of WHERE in Q1 has no effect.
Q1 -
```
custid,companyname,orderid,orderdate
4,Arndt, Torsten,10864,2008-02-02 00:00:00.000
5,Higginbotham, Tom,10866,2008-02-03 00:00:00.000
5,Higginbotham, Tom,10875,2008-02-06 00:00:00.000
9,Raghav, Amritansh,10871,2008-02-05 00:00:00.000
9,Raghav, Amritansh,10876,2008-02-09 00:00:00.000
12,Ray, Mike,10881,2008-02-11 00:00:00.000
18,Rizaldy, Arif,10890,2008-02-16 00:00:00.000
20,Kane, John,10895,2008-02-18 00:00:00.000
24,San Juan, Patricia,10880,2008-02-10 00:00:00.000
24,San Juan, Patricia,10902,2008-02-23 00:00:00.000
29,Kolesnikova, Katerina,10887,2008-02-13 00:00:00.000
30,Shabalin, Rostislav,10872,2008-02-05 00:00:00.000
30,Shabalin, Rostislav,10874,2008-02-06 00:00:00.000
30,Shabalin, Rostislav,10888,2008-02-16 00:00:00.000
30,Shabalin, Rostislav,10911,2008-02-26 00:00:00.000
34,Cohen, Shy,10886,2008-02-13 00:00:00.000
34,Cohen, Shy,10903,2008-02-24 00:00:00.000
35,Langohr, Kris,10863,2008-02-02 00:00:00.000
35,Langohr, Kris,10901,2008-02-23 00:00:00.000
37,Crăciun, Ovidiu V.,10897,2008-02-19 00:00:00.000
37,Crăciun, Ovidiu V.,10912,2008-02-26 00:00:00.000
39,Song, Lolan,10893,2008-02-18 00:00:00.000
44,Louverdis, George,10891,2008-02-17 00:00:00.000
45,Sunkammurali, Krishna,10884,2008-02-12 00:00:00.000
46,Dressler, Marlies,10899,2008-02-20 00:00:00.000
48,Szymczak, Radosław,10867,2008-02-03 00:00:00.000
48,Szymczak, Radosław,10883,2008-02-12 00:00:00.000
50,Mace, Donald,10892,2008-02-17 00:00:00.000
50,Mace, Donald,10896,2008-02-19 00:00:00.000
54,Tiano, Mike,10898,2008-02-20 00:00:00.000
62,Misiec, Anna,10868,2008-02-04 00:00:00.000
62,Misiec, Anna,10913,2008-02-26 00:00:00.000
62,Misiec, Anna,10914,2008-02-27 00:00:00.000
63,Veronesi, Giorgio,10865,2008-02-02 00:00:00.000
63,Veronesi, Giorgio,10878,2008-02-10 00:00:00.000
64,Gaffney, Lawrie,10916,2008-02-27 00:00:00.000
65,Moore, Michael,10889,2008-02-16 00:00:00.000
66,Voss, Florian,10908,2008-02-26 00:00:00.000
67,Garden, Euan,10877,2008-02-09 00:00:00.000
70,Ginters, Kaspars,10909,2008-02-26 00:00:00.000
71,Navarro, Tomás,10882,2008-02-11 00:00:00.000
71,Navarro, Tomás,10894,2008-02-18 00:00:00.000
72,Welcker, Brian,10869,2008-02-04 00:00:00.000
74,O’Brien, Dave,10907,2008-02-25 00:00:00.000
76,Gulbis, Katrin,10885,2008-02-12 00:00:00.000
80,Geschwandtner, Jens,10915,2008-02-27 00:00:00.000
88,Li, Yan,10900,2008-02-20 00:00:00.000
88,Li, Yan,10905,2008-02-24 00:00:00.000
89,Smith Jr., Ronaldo,10904,2008-02-24 00:00:00.000
90,Larsson, Katarina,10873,2008-02-06 00:00:00.000
90,Larsson, Katarina,10879,2008-02-10 00:00:00.000
90,Larsson, Katarina,10910,2008-02-26 00:00:00.000
91,Conn, Steve,10870,2008-02-04 00:00:00.000
91,Conn, Steve,10906,2008-02-25 00:00:00.000
```
Q2 -
```
custid,companyname,orderid,orderdate
1,Customer NRZBB,NULL,NULL
2,Customer MLTDN,NULL,NULL
3,Customer KBUDE,NULL,NULL
4,Customer HFBZG,10864,2008-02-02 00:00:00.000
5,Customer HGVLZ,10866,2008-02-03 00:00:00.000
5,Customer HGVLZ,10875,2008-02-06 00:00:00.000
6,Customer XHXJV,NULL,NULL
7,Customer QXVLA,NULL,NULL
8,Customer QUHWH,NULL,NULL
9,Customer RTXGC,10876,2008-02-09 00:00:00.000
9,Customer RTXGC,10871,2008-02-05 00:00:00.000
10,Customer EEALV,NULL,NULL
11,Customer UBHAU,NULL,NULL
12,Customer PSNMQ,10881,2008-02-11 00:00:00.000
13,Customer VMLOG,NULL,NULL
14,Customer WNMAF,NULL,NULL
15,Customer JUWXK,NULL,NULL
16,Customer GYBBY,NULL,NULL
17,Customer FEVNN,NULL,NULL
18,Customer BSVAR,10890,2008-02-16 00:00:00.000
19,Customer RFNQC,NULL,NULL
20,Customer THHDP,10895,2008-02-18 00:00:00.000
21,Customer KIDPX,NULL,NULL
22,Customer DTDMN,NULL,NULL
23,Customer WVFAF,NULL,NULL
24,Customer CYZTN,10880,2008-02-10 00:00:00.000
24,Customer CYZTN,10902,2008-02-23 00:00:00.000
25,Customer AZJED,NULL,NULL
26,Customer USDBG,NULL,NULL
27,Customer WMFEA,NULL,NULL
28,Customer XYUFB,NULL,NULL
29,Customer MDLWA,10887,2008-02-13 00:00:00.000
30,Customer KSLQF,10888,2008-02-16 00:00:00.000
30,Customer KSLQF,10874,2008-02-06 00:00:00.000
30,Customer KSLQF,10872,2008-02-05 00:00:00.000
30,Customer KSLQF,10911,2008-02-26 00:00:00.000
31,Customer YJCBX,NULL,NULL
32,Customer YSIQX,NULL,NULL
33,Customer FVXPQ,NULL,NULL
34,Customer IBVRG,10903,2008-02-24 00:00:00.000
34,Customer IBVRG,10886,2008-02-13 00:00:00.000
35,Customer UMTLM,10901,2008-02-23 00:00:00.000
35,Customer UMTLM,10863,2008-02-02 00:00:00.000
36,Customer LVJSO,NULL,NULL
37,Customer FRXZL,10897,2008-02-19 00:00:00.000
37,Customer FRXZL,10912,2008-02-26 00:00:00.000
38,Customer LJUCA,NULL,NULL
39,Customer GLLAG,10893,2008-02-18 00:00:00.000
40,Customer EFFTC,NULL,NULL
41,Customer XIIWM,NULL,NULL
42,Customer IAIJK,NULL,NULL
43,Customer UISOJ,NULL,NULL
44,Customer OXFRU,10891,2008-02-17 00:00:00.000
45,Customer QXPPT,10884,2008-02-12 00:00:00.000
46,Customer XPNIK,10899,2008-02-20 00:00:00.000
47,Customer PSQUZ,NULL,NULL
48,Customer DVFMB,10883,2008-02-12 00:00:00.000
48,Customer DVFMB,10867,2008-02-03 00:00:00.000
49,Customer CQRAA,NULL,NULL
50,Customer JYPSC,10892,2008-02-17 00:00:00.000
50,Customer JYPSC,10896,2008-02-19 00:00:00.000
51,Customer PVDZC,NULL,NULL
52,Customer PZNLA,NULL,NULL
53,Customer GCJSG,NULL,NULL
54,Customer TDKEG,10898,2008-02-20 00:00:00.000
55,Customer KZQZT,NULL,NULL
56,Customer QNIVZ,NULL,NULL
57,Customer WVAXS,NULL,NULL
58,Customer AHXHT,NULL,NULL
59,Customer LOLJO,NULL,NULL
60,Customer QZURI,NULL,NULL
61,Customer WULWD,NULL,NULL
62,Customer WFIZJ,10868,2008-02-04 00:00:00.000
62,Customer WFIZJ,10913,2008-02-26 00:00:00.000
62,Customer WFIZJ,10914,2008-02-27 00:00:00.000
63,Customer IRRVL,10865,2008-02-02 00:00:00.000
63,Customer IRRVL,10878,2008-02-10 00:00:00.000
64,Customer LWGMD,10916,2008-02-27 00:00:00.000
65,Customer NYUHS,10889,2008-02-16 00:00:00.000
66,Customer LHANT,10908,2008-02-26 00:00:00.000
67,Customer QVEPD,10877,2008-02-09 00:00:00.000
68,Customer CCKOT,NULL,NULL
69,Customer SIUIH,NULL,NULL
70,Customer TMXGN,10909,2008-02-26 00:00:00.000
71,Customer LCOUJ,10882,2008-02-11 00:00:00.000
71,Customer LCOUJ,10894,2008-02-18 00:00:00.000
72,Customer AHPOP,10869,2008-02-04 00:00:00.000
73,Customer JMIKW,NULL,NULL
74,Customer YSHXL,10907,2008-02-25 00:00:00.000
75,Customer XOJYP,NULL,NULL
76,Customer SFOGW,10885,2008-02-12 00:00:00.000
77,Customer LCYBZ,NULL,NULL
78,Customer NLTYP,NULL,NULL
79,Customer FAPSM,NULL,NULL
80,Customer VONTK,10915,2008-02-27 00:00:00.000
81,Customer YQQWW,NULL,NULL
82,Customer EYHKM,NULL,NULL
83,Customer ZRNDE,NULL,NULL
84,Customer NRCSK,NULL,NULL
85,Customer ENQZT,NULL,NULL
86,Customer SNXOJ,NULL,NULL
87,Customer ZHYOS,NULL,NULL
88,Customer SRQVM,10905,2008-02-24 00:00:00.000
88,Customer SRQVM,10900,2008-02-20 00:00:00.000
89,Customer YBQTI,10904,2008-02-24 00:00:00.000
90,Customer XBBVR,10910,2008-02-26 00:00:00.000
90,Customer XBBVR,10873,2008-02-06 00:00:00.000
90,Customer XBBVR,10879,2008-02-10 00:00:00.000
91,Customer CCFIZ,10870,2008-02-04 00:00:00.000
91,Customer CCFIZ,10906,2008-02-25 00:00:00.000
``` | The first query joins Customers and Orders first, then filters out rows per your WHERE clause. That is, customers that don't have an order in that date range (for them, orderdate and the other order columns will be NULL) are excluded.
The second query will return all Customers, but it will only join an Order record where the Order date is within the specified range.
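A minimal way to see the two behaviors side by side — sketched here with SQLite and a cut-down two-column schema (made-up rows):

```python
import sqlite3

# Cut-down schema: just enough to show WHERE-filtering vs ON-filtering.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customers (custid INTEGER)")
con.execute("CREATE TABLE Orders (custid INTEGER, orderdate TEXT)")
con.executemany("INSERT INTO Customers VALUES (?)", [(1,), (2,)])
con.execute("INSERT INTO Orders VALUES (1, '2008-02-15')")  # only customer 1 ordered in Feb

# Date filter in WHERE: the NULL-extended row for customer 2 is thrown away.
where_rows = con.execute(
    "SELECT c.custid, o.orderdate FROM Customers c "
    "LEFT JOIN Orders o ON c.custid = o.custid "
    "WHERE o.orderdate >= '2008-02-01' AND o.orderdate < '2008-03-01'"
).fetchall()

# Date filter in ON: customer 2 survives with NULL order columns.
on_rows = con.execute(
    "SELECT c.custid, o.orderdate FROM Customers c "
    "LEFT JOIN Orders o ON c.custid = o.custid "
    "AND o.orderdate >= '2008-02-01' AND o.orderdate < '2008-03-01'"
).fetchall()

print(where_rows)  # [(1, '2008-02-15')]
print(on_rows)     # [(1, '2008-02-15'), (2, None)]
```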
I hope this makes sense - let me know if you need further clarification! | The `from` clause is evaluated before the `where` clause.
In the first case, if an order has no date in the range, then the results will be filtered out of the query. This means that there are no rows for the customer to include in the output.
In the second case, if an order has no date in the range, then it fails the `on` clause. The query is still doing a `left join`, so it keeps the row from `customer`. The values from the second table are `NULL`. There is no additional filtering, so the customer record appears in the output. | Filtering with WHERE while using JOIN? | [
"",
"sql",
"sql-server",
""
] |
I am tracking customer store entry data in Microsoft SQL Server 2008 R2 that looks something like this:
```
DoorID DateTimeStamp EntryType
1 2013-09-02 09:01:16.000 IN
1 2013-09-02 09:04:09.000 IN
1 2013-09-02 10:19:29.000 IN
1 2013-09-02 10:19:30.000 IN
1 2013-09-02 10:19:32.000 OUT
1 2013-09-02 10:26:36.000 IN
1 2013-09-02 10:26:40.000 OUT
```
I don't want to count the `OUT` rows, just `IN`.
I believe that it needs to be grouped on `Date`, and `DoorID`, then get the hours totals.
I would like it to come out like this.
```
Date DoorID HourOfDay TotalInPersons
2013-09-02 1 0 0
2013-09-02 1 1 0
2013-09-02 1 2 0
2013-09-02 1 3 0
2013-09-02 1 4 0
2013-09-02 1 5 0
2013-09-02 1 6 0
2013-09-02 1 7 0
2013-09-02 1 8 0
2013-09-02 1 9 2
2013-09-02 1 10 3
2013-09-02 1 11 0
2013-09-02 1 12 0
2013-09-02 1 13 0
2013-09-02 1 14 0
2013-09-02 1 15 0
2013-09-02 1 16 0
2013-09-02 1 17 0
2013-09-02 1 18 0
2013-09-02 1 19 0
2013-09-02 1 20 0
2013-09-02 1 21 0
2013-09-02 1 22 0
2013-09-02 1 23 0
``` | ```
SELECT
[Date] = CONVERT(DATE, DateTimeStamp),
DoorID,
HourOfDay = DATEPART(HOUR, DateTimeStamp),
TotalInPersons = COUNT(*)
FROM dbo.tablename
WHERE EntryType = 'IN'
GROUP BY
CONVERT(DATE, DateTimeStamp),
DoorID,
DATEPART(HOUR, DateTimeStamp)
ORDER BY
[Date], DoorID, HourOfDay;
```
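To cross-check the grouping logic, here is a minimal sketch in SQLite, where `strftime('%H', ...)` and `date(...)` stand in for `DATEPART(HOUR, ...)` and `CONVERT(DATE, ...)`; the rows are copied from the question:

```python
import sqlite3

# SQLite stand-in for the SQL Server query; rows copied from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (DoorID INTEGER, DateTimeStamp TEXT, EntryType TEXT)")
con.executemany("INSERT INTO entries VALUES (?, ?, ?)", [
    (1, "2013-09-02 09:01:16", "IN"),
    (1, "2013-09-02 09:04:09", "IN"),
    (1, "2013-09-02 10:19:29", "IN"),
    (1, "2013-09-02 10:19:30", "IN"),
    (1, "2013-09-02 10:19:32", "OUT"),
    (1, "2013-09-02 10:26:36", "IN"),
    (1, "2013-09-02 10:26:40", "OUT"),
])
rows = con.execute(
    "SELECT date(DateTimeStamp), DoorID, "
    "       CAST(strftime('%H', DateTimeStamp) AS INTEGER) AS HourOfDay, "
    "       COUNT(*) "
    "FROM entries WHERE EntryType = 'IN' "
    "GROUP BY 1, 2, 3 ORDER BY HourOfDay"
).fetchall()
print(rows)  # [('2013-09-02', 1, 9, 2), ('2013-09-02', 1, 10, 3)]
```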
Of course if you need all hours, even where no rows are represented, here is one solution (which limits the output for any day only to the doors that have at least one `IN` entry on that day):
```
;WITH h AS
(
SELECT TOP (24) h = number FROM Master..spt_values
WHERE type = N'P' ORDER BY number
),
doors AS
(
SELECT DISTINCT DoorID, [Date] = CONVERT(DATE,DateTimeStamp)
FROM dbo.tablename WHERE EntryType = 'IN'
)
SELECT
d.[Date],
d.DoorID,
HourOfDay = h.h,
TotalInPersons = COUNT(t.EntryType)
FROM doors AS d CROSS JOIN h
LEFT OUTER JOIN dbo.tablename AS t
ON CONVERT(DATE, t.DateTimeStamp) = d.[Date]
AND t.DoorID = d.DoorID
AND DATEPART(HOUR, t.DateTimeStamp) = h.h
AND t.EntryType = 'IN'
GROUP BY d.[Date], d.DoorID, h.h
ORDER BY d.[Date], d.DoorID, h.h;
``` | How about something like this:
```
SELECT
CAST(DateTimeStamp AS DATE) AS Date
,DoorID
,DATEPART(HOUR, DateTimeStamp) AS HourOfDay
,COUNT(*) AS TotalInPersons
FROM StoreTable
WHERE EntryType = 'IN'
GROUP BY
CAST(DateTimeStamp AS DATE)
,DoorID
,DATEPART(HOUR, DateTimeStamp)
``` | Counting number of rows grouped by date and hour | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"group-by",
""
] |
I have a column in a database where I store pin numbers for a survey. Currently the list of numbers is 5100 - 6674. I need to keep those pin numbers there for a current survey, but I also have to add a new list of pin numbers to the same column. I just want to make sure the query that I have created will insert the new list into the column without affecting the old list. There is also another column which gives the survey type, and I need to add the new type to the new list. For example, currently I have the list of pins ranging from 5100 - 6674 with a survey type of CLN. The next list is a range of numbers from 8100 - 8855 with a survey type of TM. I do not want to mess anything up with the survey that is currently running, so I have one shot to get it right; otherwise there will be a big mess. Here is the query I have come up with.
```
DECLARE @PIN INT
SET @PIN = 8100
WHILE (@PIN <= 8855)
BEGIN
Insert into Survey (Pin) values (@PIN)
Select @PIN = @PIN + 1
from Survey
where Pin > 6674
END
UPDATE Survey SET Type = 'TM'
WHERE Pin > 6674
```
Thank you for any help or advice. | I would strongly suggest that you set the `type` at the same time as the pin and fix the increment step:
```
DECLARE @PIN INT;
SET @PIN = 8100;
WHILE (@PIN <= 8855)
BEGIN
Insert into Survey(Pin, Type)
      values (@PIN, 'TM'); -- or a @TYPE variable you declare
set @PIN = @PIN + 1;
END;
```
You can also do this with a single statement, if you want to learn a bit more SQL:
```
with pins as (
select 8100 as pin
union all
select pin + 1
from pins
where pin < 8855
)
Insert into Survey(Pin, Type)
select pin, 'TM'
from pins;
option (MAXRECURSION 10000);
``` | Well since you are inserting fewer numbers you can just select a range from the existing set and add a constant:
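The recursive-CTE trick carries over to other engines too; a minimal SQLite sketch (note SQLite spells it `WITH RECURSIVE` and has no `MAXRECURSION` option):

```python
import sqlite3

# SQLite version of the number-generating CTE feeding an INSERT ... SELECT.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Survey (Pin INTEGER, Type TEXT)")
con.execute(
    "WITH RECURSIVE pins(pin) AS ("
    "  SELECT 8100 UNION ALL SELECT pin + 1 FROM pins WHERE pin < 8855"
    ") "
    "INSERT INTO Survey (Pin, Type) SELECT pin, 'TM' FROM pins"
)
total, lo, hi = con.execute(
    "SELECT COUNT(*), MIN(Pin), MAX(Pin) FROM Survey WHERE Type = 'TM'"
).fetchone()
print(total, lo, hi)  # 756 8100 8855
```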
```
INSERT INTO Survey (PIN, Type)
SELECT Pin + 3000, 'TM'
FROM Survey
WHERE Pin BETWEEN 5100 and 5855
```
If you were inserting MORE values, you could use the same technique but in batches. | How to insert a list of numbers after an existing list of numbers | [
"",
"sql",
""
] |
Sorry for this simple question, but I looked through some old questions and couldn't find the answer I needed.
I have 3 tables, all with user\_id (though named differently in each table), and I want to join them together with conditions from each table. I've only done a JOIN with 2 tables before, and I'm curious why my query below is throwing an error.
```
SELECT c.user_id FROM
(SELECT userid from database_all where date='2013-09-03' AND college='Harvard') a
JOIN
(SELECT userid2 FROM database_users) b
ON
a.userid = b.userid2
JOIN
(SELECT user_id FROM database_courses where num_courses < 3 ) c
ON
b.userid2 = c.user_id
``` | This query should solve your problem :
```
SELECT
database_courses.user_id
FROM
database_all
JOIN
database_users ON database_all.userid=database_users.userid2
JOIN
    database_courses ON database_users.userid2=database_courses.user_id
WHERE
database_all.date='2013-09-03' AND database_all.college='Harvard' AND database_courses.num_courses < 3
``` | Try something more like this:
```
SELECT c.user_id FROM database_all AS a
JOIN database_users AS u ON a.userid=u.userid2
JOIN database_courses AS c ON u.userid2=c.user_id
WHERE a.date='2013-09-03'
AND a.college='Harvard'
AND c.num_courses<3
```
Generally you need to put your `WHERE` clause at the end, and the `DBMS` will optimize the query for you.
I actually had a very similar question [here](https://stackoverflow.com/questions/18000389/where-clause-followed-by-join).
The MySQL documentation also covers this in their section about [`SELECT` syntax](http://dev.mysql.com/doc/refman/5.0/en/select.html).
```
SELECT
[ALL | DISTINCT | DISTINCTROW ]
[HIGH_PRIORITY]
[STRAIGHT_JOIN]
[SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
[SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
select_expr [, select_expr ...]
[FROM table_references
[WHERE where_condition]
[GROUP BY {col_name | expr | position}
[ASC | DESC], ... [WITH ROLLUP]]
[HAVING where_condition]
[ORDER BY {col_name | expr | position}
[ASC | DESC], ...]
[LIMIT {[offset,] row_count | row_count OFFSET offset}]
[PROCEDURE procedure_name(argument_list)]
[INTO OUTFILE 'file_name' export_options
| INTO DUMPFILE 'file_name'
| INTO var_name [, var_name]]
[FOR UPDATE | LOCK IN SHARE MODE]]
```
As you can see, the `FROM` clause with the table references comes directly before the `WHERE` clause. | SQL: joining 3 tables | [
"",
"sql",
""
] |
Below is an example of the BCP Statement.
I'm not accustomed to using BCP, so your help and candor are greatly appreciated.
I am using it with a format file as well.
If I execute from CMD prompt it works fine but from SQL I get the error.
The BCP statement is all on one line and the SQL Server Agent is running as Local System.
The SQL server, and script are on the same system.
I ran exec master..xp\_fixeddrives
C,45589
E,423686
I've tried output to C and E with the same result
```
EXEC xp_cmdshell 'bcp "Select FILENAME, POLICYNUMBER, INSURED_DRAWER_100, POLICY_INFORMATION, DOCUMENTTYPE, DOCUMENTDATE, POLICYYEAR FROM data.dbo.max" queryout "E:\Storage\Export\Data\max.idx" -fmax-c.fmt -SSERVERNAME -T'
```
Here is the format file rmax-c.fmt
```
10.0
7
1 SQLCHAR 0 255 "$#Y#$" 1 FILENAME
2 SQLCHAR 0 40 "" 2 POLICYNUMBER
3 SQLCHAR 0 40 "" 3 INSURED_DRAWER_100
4 SQLCHAR 0 40 "" 4 POLICY_INFORMATION
5 SQLCHAR 0 40 "" 5 DOCUMENTTYPE
6 SQLCHAR 0 40 "" 6 DOCUMENTDATE
7 SQLCHAR 0 8 "\r\n" 7 POLICYYEAR
```
Due to formatting in this post, the last column of the format file is cut off, but it reads `SQL_Latin1_General_CP1_CI_AS` for each column other than documentdate. | First, rule out an xp\_cmdshell issue by doing a simple 'dir c:\*.\*';
Check out [my blog](http://craftydba.com/?p=1690) on using BCP to export files.
I had problems on my system in which I could not find the path to BCP.EXE.
Either change the PATH variable or hard-code the full path.
Example below works with Adventure Works.
```
-- BCP - Export query, pipe delimited format, trusted security, character format
DECLARE @bcp_cmd4 VARCHAR(1000);
DECLARE @exe_path4 VARCHAR(200) =
' cd C:\Program Files\Microsoft SQL Server\100\Tools\Binn\ & ';
SET @bcp_cmd4 = @exe_path4 +
' BCP.EXE "SELECT FirstName, LastName FROM AdventureWorks2008R2.Sales.vSalesPerson" queryout ' +
' "C:\TEST\PEOPLE.TXT" -T -c -q -t0x7c -r\n';
PRINT @bcp_cmd4;
EXEC master..xp_cmdshell @bcp_cmd4;
GO
```
Before changing the path to \110\ for SQL Server 2012 and the name of the database to [AdventureWorks2012], I received the following error.

After making the changes, the code works fine from SSMS. The service is running under NT AUTHORITY\Local Service. The SQL Server Agent is disabled. The output file was created.
 | Does the output path exist? BCP does not create the folder before trying to create the file.
Try this before your BCP call:
```
EXEC xp_cmdshell 'MKDIR "E:\Storage\Export\Data\"'
``` | Unable to open BCP host data-file | [
"",
"sql",
"sql-server",
"bcp",
""
] |
I am trying to update one column based on whether the categories\_id in a second table is equal to 90, but I am getting an unknown column error.
Here is my sql:
```
UPDATE products SET qty='20'
WHERE products.products_id = products_to_categories.products_id AND products_to_categories.categories_id = '90'
```
The error I am getting is
Unknown column 'products\_to\_categories.products\_id' in 'where clause' | In SQL, you have to introduce table names with `from` (or `update` or `delete`) statements. You may mean:
```
UPDATE products
SET qty = '20'
WHERE exists (select 1
from products_to_categories ptc
where ptc.products_id = products.products_id AND
ptc.categories_id = '90'
);
```
Another way to do this is with a join:
```
UPDATE products p join
products_to_categories ptc
       on ptc.products_id = p.products_id AND
ptc.categories_id = '90'
SET p.qty = '20';
```
This works better if there is more than one match. | ```
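The `EXISTS` form is also portable — for example, SQLite (which has no `UPDATE ... JOIN` syntax at all) runs it unchanged. A minimal sketch with made-up rows:

```python
import sqlite3

# Made-up rows; product 1 is in category 90, product 2 is not.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (products_id INTEGER, qty TEXT)")
con.execute("CREATE TABLE products_to_categories (products_id INTEGER, categories_id TEXT)")
con.executemany("INSERT INTO products VALUES (?, ?)", [(1, "5"), (2, "5")])
con.execute("INSERT INTO products_to_categories VALUES (1, '90')")

con.execute(
    "UPDATE products SET qty = '20' "
    "WHERE EXISTS (SELECT 1 FROM products_to_categories ptc "
    "              WHERE ptc.products_id = products.products_id "
    "                AND ptc.categories_id = '90')"
)
rows = con.execute("SELECT products_id, qty FROM products ORDER BY products_id").fetchall()
print(rows)  # [(1, '20'), (2, '5')]
```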
UPDATE products p, products_to_categories pc
SET p.qty='20'
WHERE p.products_id = pc.products_id AND pc.categories_id = '90'
```
**[FIDDLE](http://sqlfiddle.com/#!2/fd62d)** | update statement giving uknown column error | [
"",
"mysql",
"sql",
""
] |
I am using SQL server 2008 and I am trying to unpivot the data. Here is the SQL code that I am using,
```
CREATE TABLE #pvt1 (VendorID int, Sa int, Emp1 int,Sa1 int,Emp2 int)
GO
INSERT INTO #pvt1 VALUES (1,2,4,3,9);
GO
--Unpivot the table.
SELECT distinct VendorID,Orders,Orders1
FROM
(SELECT VendorID, Emp1, Sa,Emp2,Sa1
FROM #pvt1 ) p
UNPIVOT
(Orders FOR Emp IN
(Emp1,Emp2)
)AS unpvt
UNPIVOT
(Orders1 FOR Emp1 IN
(Sa,Sa1)
)AS unpvt1;
GO
```
And Here is the result of the above code.
```
VendorID Orders Orders1
1 4 2
1 4 3
1 9 2
1 9 3
```
But I want my Output to be the way indicated below
```
VendorID Orders Orders1
1 4 2
1 9 3
```
The relationship in the data above is that 2 is related to 4 (`Sa` pairs with `Emp1`), and 3 is related to 9 (`Sa1` pairs with `Emp2`).
How can I achieve this? | An easier way to unpivot the data would be to use a CROSS APPLY to unpivot the columns in pairs:
```
select vendorid, orders, orders1
from pvt1
cross apply
(
select emp1, sa union all
select emp2, sa1
) c (orders, orders1);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/3f20b/3). Or you can use CROSS APPLY with the VALUES clause if you don't want to use the UNION ALL:
```
select vendorid, orders, orders1
from pvt1
cross apply
(
values
(emp1, sa),
(emp2, sa1)
) c (orders, orders1);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/3f20b/4) | The answer by Taryn is indeed super useful, and I'd like to expand one aspect of it.
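SQLite has no `CROSS APPLY`, but the equivalent `UNION ALL` rewrite makes the expected output easy to check offline — a minimal sketch with the question's single row:

```python
import sqlite3

# SQLite lacks CROSS APPLY, so the pair-wise unpivot is written as the
# equivalent UNION ALL; table and row copied from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pvt1 (VendorID INTEGER, Sa INTEGER, Emp1 INTEGER, Sa1 INTEGER, Emp2 INTEGER)")
con.execute("INSERT INTO pvt1 VALUES (1, 2, 4, 3, 9)")

rows = con.execute(
    "SELECT VendorID, Emp1 AS orders, Sa AS orders1 FROM pvt1 "
    "UNION ALL "
    "SELECT VendorID, Emp2, Sa1 FROM pvt1 "
    "ORDER BY orders"
).fetchall()
print(rows)  # [(1, 4, 2), (1, 9, 3)]
```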
If you have a very un-normalized table like this, with multiple sets of columns for e.g. 4 quarters or 12 months:
```
+-------+------+------+------+------+------+------+-------+------+
| cYear | foo1 | foo2 | foo3 | foo4 | bar1 | bar2 | bar3 | bar4 |
+-------+------+------+------+------+------+------+-------+------+
| 2020 | 42 | 888 | 0 | 33 | one | two | three | four |
+-------+------+------+------+------+------+------+-------+------+
```
Then the CROSS APPLY method is easy to write and understand, when you got the hang of it. For the numbered column, use constant values.
```
SELECT
cYear,
cQuarter,
foo,
bar
FROM temp
CROSS APPLY
(
VALUES
(1, foo1, bar1),
(2, foo2, bar2),
(3, foo3, bar3),
(4, foo4, bar4)
) c (cQuarter, foo, bar)
```
Result:
```
+-------+----------+-----+-------+
| cYear | cQuarter | foo | bar |
+-------+----------+-----+-------+
| 2020 | 1 | 42 | one |
| 2020 | 2 | 888 | two |
| 2020 | 3 | 0 | three |
| 2020 | 4 | 33 | four |
+-------+----------+-----+-------+
```
[SQL Fiddle](http://sqlfiddle.com/#!18/a8890/2/0) | SQL Unpivot multiple columns Data | [
"",
"sql",
"sql-server",
"sql-server-2008",
"unpivot",
""
] |
I have a simple table like this:
```
user letter
--------------
1 A
1 A
1 B
1 B
1 B
1 C
2 A
2 B
2 B
2 C
2 C
2 C
```
I want to get the top 2 occurrences of 'letter' per user, like so
```
user letter rank(within user group)
--------------------
1 B 1
1 A 2
2 C 1
2 B 2
```
or even better: collapsed into columns
```
user 1st-most-occurrence 2nd-most-occurrence
1 B A
2 C B
```
How can I accomplish this in postgres? | ```
with cte as (
select
t.user_id, t.letter,
row_number() over(partition by t.user_id order by count(*) desc) as row_num
from Table1 as t
group by t.user_id, t.letter
)
select
c.user_id,
max(case when c.row_num = 1 then c.letter end) as "1st-most-occurance",
max(case when c.row_num = 2 then c.letter end) as "2st-most-occurance"
from cte as c
where c.row_num <= 2
group by c.user_id
```
[**=> sql fiddle demo**](http://sqlfiddle.com/#!12/8903d/9) | Something like this:
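The same window-function query runs on SQLite 3.25+ as well as Postgres, which makes it easy to test against the question's sample data — a minimal sketch:

```python
import sqlite3

# Window functions need SQLite >= 3.25 (bundled with Python 3.7+ builds);
# data copied from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE letters (user_id INTEGER, letter TEXT)")
con.executemany("INSERT INTO letters VALUES (?, ?)", [
    (1, "A"), (1, "A"), (1, "B"), (1, "B"), (1, "B"), (1, "C"),
    (2, "A"), (2, "B"), (2, "B"), (2, "C"), (2, "C"), (2, "C"),
])
rows = con.execute("""
    WITH cte AS (
        SELECT user_id, letter,
               ROW_NUMBER() OVER (PARTITION BY user_id
                                  ORDER BY COUNT(*) DESC) AS row_num
        FROM letters
        GROUP BY user_id, letter
    )
    SELECT user_id, letter, row_num
    FROM cte
    WHERE row_num <= 2
    ORDER BY user_id, row_num
""").fetchall()
print(rows)  # [(1, 'B', 1), (1, 'A', 2), (2, 'C', 1), (2, 'B', 2)]
```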
```
select *
from (
select userid,
letter,
dense_rank() over (partition by userid order by count(*) desc) as rnk
from letters
group by userid, letter
) t
where rnk <= 2
order by userid, rnk;
```
Note that I replaced `user` with `userid` because using reserved words for columns is a bad habit.
Here is an SQLFiddle: <http://sqlfiddle.com/#!12/ec3ec/1> | postgres: get top n occurrences of a value within each group | [
"",
"sql",
"postgresql",
"group-by",
"sql-order-by",
"greatest-n-per-group",
""
] |
I'm trying to write a MySQL query and I'm having some issues with it. I'm trying to query WooCommerce data out of my WordPress database. Basic invoice data is stored in the wp\_posts table and the rest of the data is stored in the wp\_postmeta table. Now 1 invoice in the wp\_posts table points to multiple items in the wp\_postmeta table. Here is an example.
WP\_POSTS
```
----------------------------------------------------
ID STATUS Date
----------------------------------------------------
0001 OPEN 01/01/2000
0002 OPEN 01/01/2000
0003 CLOSED 01/02/2000
```
WP\_POSTMETA
```
--------------------------------------------------------------------------
ID POST_ID META_KEY META_VALUE
--------------------------------------------------------------------------
0001 0001 CustomerLN Test
0002 0001 CustomerFN Tester
0003 0001 Payment_Type PayPal
0004 0001 Invoice_Total $200
0005 0002 CustomerLN Doe
0006 0002 CustomerFN John
0007 0002 Payment_Type CC-Mastercard
0008 0002 Invoice_Total $1000
```
I've got a basic query that pulls the data in from the wp\_posts table but I can't figure out how to pull data from the second table based on the META\_KEY value.
Any help would be great. Thanks in advance. | Welcome to the **[Entity-Attribute-Value](https://perfectsmoked.com/best-gas-smoker/)** world.
To have a normal representation of the resultset you might need not only `JOIN` these two tables but also `PIVOT` the resultset. You can do that with a query like
```
SELECT p.id, p.status,
MAX(CASE WHEN m.meta_key = 'CustomerLN'
THEN m.meta_value END) customer_last_name,
       MAX(CASE WHEN m.meta_key = 'CustomerFN'
           THEN m.meta_value END) customer_first_name,
       MAX(CASE WHEN m.meta_key = 'Payment_Type'
           THEN m.meta_value END) payment_type,
       MAX(CASE WHEN m.meta_key = 'Invoice_Total'
           THEN m.meta_value END) invoice_total
FROM wp_posts p LEFT JOIN wp_postmeta m
     ON p.id = m.post_id
GROUP BY p.id, p.status
```
Sample output:
```
+------+--------+--------------------+---------------------+---------------+---------------+
| id   | status | customer_last_name | customer_first_name | payment_type  | invoice_total |
+------+--------+--------------------+---------------------+---------------+---------------+
| 1    | OPEN   | Test               | Tester              | PayPal        | $200          |
| 2    | OPEN   | Doe                | John                | CC-Mastercard | $1000         |
| 3    | CLOSED | NULL               | NULL                | NULL          | NULL          |
+------+--------+--------------------+---------------------+---------------+---------------+
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/ba9fc/2)** demo
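The conditional-aggregation pivot is portable; here is a minimal sketch checking it against an in-memory SQLite copy of the two tables (only two of the pivoted columns shown, to keep it short):

```python
import sqlite3

# In-memory copy of the two tables (rows from the question).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE wp_posts (id INTEGER, status TEXT)")
con.execute("CREATE TABLE wp_postmeta (id INTEGER, post_id INTEGER, meta_key TEXT, meta_value TEXT)")
con.executemany("INSERT INTO wp_posts VALUES (?, ?)",
                [(1, "OPEN"), (2, "OPEN"), (3, "CLOSED")])
con.executemany("INSERT INTO wp_postmeta VALUES (?, ?, ?, ?)", [
    (1, 1, "CustomerLN", "Test"), (2, 1, "CustomerFN", "Tester"),
    (3, 1, "Payment_Type", "PayPal"), (4, 1, "Invoice_Total", "$200"),
    (5, 2, "CustomerLN", "Doe"), (6, 2, "CustomerFN", "John"),
    (7, 2, "Payment_Type", "CC-Mastercard"), (8, 2, "Invoice_Total", "$1000"),
])
rows = con.execute("""
    SELECT p.id,
           MAX(CASE WHEN m.meta_key = 'CustomerLN' THEN m.meta_value END),
           MAX(CASE WHEN m.meta_key = 'Payment_Type' THEN m.meta_value END)
    FROM wp_posts p LEFT JOIN wp_postmeta m ON p.id = m.post_id
    GROUP BY p.id
    ORDER BY p.id
""").fetchall()
print(rows)  # [(1, 'Test', 'PayPal'), (2, 'Doe', 'CC-Mastercard'), (3, None, None)]
```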
---
Now to be able to filter your records based on meta keys and meta values you'll have to use `HAVING` clause
For example if you want to get invoices made by customer Jhon Doe
```
SELECT p.id, p.status,
MAX(CASE WHEN m.meta_key = 'CustomerLN'
THEN m.meta_value END) customer_last_name,
MAX(CASE WHEN m.meta_key = 'CustomerFN'
THEN m.meta_value END) customer_first_name,
MAX(CASE WHEN m.meta_key = 'Payment_Type'
THEN m.meta_value END) payment_type,
MAX(CASE WHEN m.meta_key = 'Invoice_Total'
THEN m.meta_value END) invoice_total
FROM wp_posts p LEFT JOIN wp_postmeta m
ON p.id = m.post_id
GROUP BY p.id, p.status
HAVING customer_last_name = 'Doe'
AND customer_first_name = 'John'
```
Output:
```
+------+--------+--------------------+---------------------+---------------+---------------+
| id | status | customer_last_name | customer_first_name | payment_type | invoice_total |
+------+--------+--------------------+---------------------+---------------+---------------+
| 2 | OPEN | Doe | John | CC-Mastercard | $1000 |
+------+--------+--------------------+---------------------+---------------+---------------+
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/ba9fc/7)** demo | To start, I would do
```
select a.*, b.* from WP_POSTS a, wp_postmeta b
where b.POST_ID = a.id
```
What have you done? | Can't get a complicated mysql query to work | [
"",
"mysql",
"sql",
"wordpress",
"woocommerce",
""
] |
I have a query that pulls language knowledge, and each employee has a plan year. They do not all complete one each year, so to get the most recent plan I use MAX on plan year. One of the criteria is whether or not they are willing to move overseas; the issue is that the query brings up both their most recent YES and their most recent NO, when I just need their most recent plan year, whether it is yes or no. I am having difficulty troubleshooting this. The code is as follows:
```
SELECT Employee_ID, Accept_International_Assignment, MAX(Plan_Year) AS Expr1
FROM dbo.v_sc08_CD_Employee_Availabilities
GROUP BY Employee_ID, Accept_International_Assignment
``` | ```
SELECT a.Employee_ID, a.Accept_International_Assignment, a.Plan_Year
FROM dbo.v_sc08_CD_Employee_Availabilities a
INNER JOIN (SELECT Employee_ID, MAX(Plan_Year) maxPlanYear
from dbo.v_sc08_CD_Employee_Availabilities
GROUP BY Employee_ID) m
ON a.Plan_Year = m.maxPlanYear AND a.Employee_ID = m.Employee_ID
``` | I suspect this will be more efficient than the accepted answer, at scale...
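A minimal reproduction of the join-to-`MAX` pattern, sketched in SQLite with made-up rows (column names shortened):

```python
import sqlite3

# Made-up rows; employee 1 answered NO in 2011 and YES in 2013, so only
# the 2013 row should come back.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE avail (Employee_ID INTEGER, Accept TEXT, Plan_Year INTEGER)")
con.executemany("INSERT INTO avail VALUES (?, ?, ?)", [
    (1, "NO", 2011), (1, "YES", 2013),
    (2, "YES", 2012),
])
rows = con.execute("""
    SELECT a.Employee_ID, a.Accept, a.Plan_Year
    FROM avail a
    JOIN (SELECT Employee_ID, MAX(Plan_Year) AS maxPlanYear
          FROM avail GROUP BY Employee_ID) m
      ON a.Employee_ID = m.Employee_ID AND a.Plan_Year = m.maxPlanYear
    ORDER BY a.Employee_ID
""").fetchall()
print(rows)  # [(1, 'YES', 2013), (2, 'YES', 2012)]
```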
```
;WITH x AS
(
SELECT Employee_ID, Accept_International_Assignment, Plan_Year,
rn = ROW_NUMBER() OVER (PARTITION BY Employee_ID ORDER BY Plan_Year DESC)
FROM dbo.v_sc08_CD_Employee_Availabilities -- who comes up with these names?
)
SELECT Employee_ID, Accept_International_Assignment, Plan_Year
FROM x WHERE rn = 1;
``` | How to get the MAX result even if the MAX is different | [
"",
"sql",
"sql-server",
""
] |
I have two tables in my Access DB. I can join the tables based on their primary key, Grant Number; however, both tables contain an expiry date column, and I want to see if the expiry dates match.
```
First Table: "Everything2013"
Columns: Claim Name, Grant Number, Expiry Date
Second Table: "Yukon Claims Govt"
Columns: Claim Name, Claim Owner, Grant Number, StakingDate, ExpiryDate
```
I want to join the tables based on the Grant Number, which is the primary key, I then want to see if the expiry dates are the same for each grant number from both tables. Basically, I'm checking the expiry dates from one table to see if they match the expiry date for the same grant number in another table. | ```
select E.Claim_Name, E.Grant_Number, Y.StakingDate
from Everything2013 e
inner join Yukon_Claims_Govt y
on E.[Grant_Number]=Y.[Grant_number]
where E.Expiry_Date=Y.ExpiryDate;
``` | I assume you want to get the result set which has the same grant number and expiry date in both tables.
```
select Everything2013.Grant_Number, Everything2013.Claim_Name, Yukon_Claims_Govt.StakingDate
from Everything2013
inner join Yukon_Claims_Govt
on Everything2013.Grant_Number=Yukon_Claims_Govt.Grant_Number
AND Everything2013.Expiry_Date=Yukon_Claims_Govt.ExpiryDate;
``` | Join two tables and compare two columns to see if they are equal | [
"",
"sql",
"database",
"ms-access",
"join",
""
] |
I have a SQL table with a date column named `CREATED_TS` which holds dates in different formats, e.g. as shown below
```
Feb 20 2012 12:00AM
11/29/12 8:20:53 PM
Feb 20 2012 12:00AM
11/29/12 8:20:53 PM
Feb 20 2012 12:00AM
11/29/12 8:20:53 PM
Nov 16 2011 12:00AM
Feb 20 2012 12:00AM
11/29/12 8:20:52 PM
```
Now I want to convert these to the format `mm/dd/yyyy`, as I am comparing the dates in the `WHERE` clause of my `SELECT` query.
I tried using
```
CONVERT(VARCHAR(10),CREATED_TS,101)
```
but got the result as,
```
Feb 20 201
11/29/12
Feb 20 201
11/29/12
Feb 20 201
11/29/12
Nov 16 201
Feb 20 201
11/29/12
```
I need the result as eg. `02/20/2012` in order to compare.
Any help will be appreciated. | As your data is already stored as varchar, you have to convert it to a date first:
```
select convert(varchar(10), cast(CREATED_TS as date), 101) from <your table>
``` | Use [`CONVERT`](http://technet.microsoft.com/en-us/library/ms187928%28v=sql.100%29.aspx) with the `Value` specifier of `101`, whilst casting your data to `date`:
```
CONVERT(VARCHAR(10), CAST(Created_TS AS DATE), 101)
``` | how to convert date to a format `mm/dd/yyyy` | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
So for each distinct value in a column of one table I want to insert that unique value into a row of another table.
```
list = select distinct(id) from table0
for distinct_id in list
insert into table1 (id) values (distinct_id)
end
```
Any ideas as to how to go about this? | Whenever you think about doing something in a loop, step back, and think again. SQL is optimized to work with sets. You can do this using a set-based query without the need to loop:
```
INSERT dbo.table1(id) SELECT DISTINCT id FROM dbo.table0;
```
There are some edge cases where looping can make more sense, but as SQL Server matures and more functionality is added, those edge cases get narrower and narrower... | ```
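The set-based statement is easy to try for yourself; a minimal sketch against an in-memory SQLite database (table names follow the question, contents invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table0 (id INTEGER);
    CREATE TABLE table1 (id INTEGER);
    INSERT INTO table0 (id) VALUES (1), (1), (2), (3), (3), (3);
""")

# One statement replaces the whole loop.
con.execute("INSERT INTO table1 (id) SELECT DISTINCT id FROM table0")

print(sorted(r[0] for r in con.execute("SELECT id FROM table1")))  # [1, 2, 3]
```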
insert into table1 (id)
select distinct id from table0
``` | Insert distinct values from one table into another table | [
"",
"sql",
"sql-server",
""
] |
I have a large CSV file (1.7GB - I believe around 4 million lines). The file is a dump from Cisco IronPort of all traffic for a range. My end goal is to import the text into either SQL/Access, or one of the data modeling applications out there to be able to show browsing habits for the unique ids within the file (actually, 2 files).
Upon import to SQL, it bombs as one of the urls has a comma within it. My idea was to try a rewrite on the URL column to dump everything after the TLD (foo.com/blah,tracking?ref=!superuselessstuff to just foo.com).
A coworker came up with the following two codes for PowerShell. The first one works great, but the 1.7G file drags my system to a crawl, and it never finished (ran for 48 hours without finishing). The second one finished, but made the text harder to work with. Help?
Source Data Example:
```
"Begin Date"|"End Date"|"Time (GMT -05:00)"|"URL"|"CONTENT TYPE"|"URL CATEGORY"|"DESTINATION IP"|"Disposition"|"Policy Name"|"Policy Type"|"Application Type"|"User"|"User Type"
"2013-07-24 05:00 GMT"|"2013-07-25 04:59 GMT"|"1374728377"|"hxxp://mediadownloads.mlb.com/mlbam/2013/06/23/mlbtv_bosdet_28278793_1800K.mp4"|"video/mp4"|"Sports and Recreation"|"165.254.94.168"|"Allow"|"Generics"|"Access"|"Media"|"DOMAIN\gen7@Domain 10.XXX.XXX.XXX"|"[-]"
"2013-07-24 05:00 GMT"|"2013-07-25 04:59 GMT"|"1374728376"|"hxxp://stats.pandora.com/v1?callback=jQuery17102006296486278092_1374683921429&type=promo_box&action=auto_scroll&source=PromoBoxView&listener_id=84313100&_=1374728377192"|"text/javascript"|"Streaming Audio"|"208.85.40.44"|"Allow"|"Generics"|"Access"|"Media"|"DOMAIN\gen7@Domain 10.XXX.XXX.XXX"|"[-]"
"2013-07-24 05:00 GMT"|"2013-07-25 04:59 GMT"|"1374728357"|"hxxp://b.scorecardresearch.com/p?c1=1&c2=3005352&c3=&c4=mlb&c5=02&c6=&c10=&c7=hxxp%3A//wapc.mlb.com/det/play/%3Fcontent_id%3D29083605%26topic_id%3D8878748%26c_id%3Ddet&c8=Video%3A%20Recap%3A%20BOS%203%2C%20DET%2010%20%7C%20MLB.com%20Multimedia&c9=hxxp%3A//detroit.tigers.mlb.com/index.jsp%3Fc_id%3Ddet&rn=0.36919005215168&cv=2.0"|"image/gif"|"Business and Industry"|"207.152.125.91"|"Allow"|"Generics"|"Access"|"-"|"DOMAIN\gen7@Domain 10.XXX.XXX.XXX"|"[-]"
"2013-07-24 05:00 GMT"|"2013-07-25 04:59 GMT"|"1374728356"|"hxxp://lt150.tritondigital.com/lt?guid=VEQyNX4wMmIzY2FmZi1mMmExLTQ5OWQtODM5NS1kMjE0ZTkwMzMyMTY%3D&yob=1978&gender=M&zip=55421&hasads=0&devcat=WEB&devtype=WEB&cb=13747283558794766"|"text/plain"|"Business and Industry"|"208.92.52.90"|"Allow"|"Generics"|"Access"|"-"|"DOMAIN\GEN1@Domain 10.XXX.XXX.XXX"|"[-]"
"2013-07-24 05:00 GMT"|"2013-07-25 04:59 GMT"|"1374728356"|"""hxxp://an.mlb.com/b/ss/mlbglobal08,mlbtigers/1/H.26/s93606666143392?AQB=1&ndh=1&t=24%2F6%2F2013%2023%3A59%3A17%203%20300&fid=0DDFB0A0676D5241-080519A2C0D076F2&ce=UTF-8&ns=mlb&pageName=Major%20League%20Baseball%3A%20Multimedia%3A%20Video%20Playback%20Page&g=hxxp%3A%2F%2Fwapc.mlb.com%2Fdet%2Fplay%2F%3Fcontent_id%3D29083605%26topic_id%3D8878748%26c_id%3Ddet&cc=USD&events=event2%2Cevent28%2Cevent4&v13=Video%20Playback%20Page&c24=mlbglobal08%2Cmlbtigers&v28=28307515%7CFLASH_1200K_640X360&c49=mlb.mlb.com&v49=mlb.mlb.com&pe=lnk_o&pev1=hxxp%3A%2F%2FmyGenericURL&pev2=VPP%20Game%20Recaps&s=1440x900&c=32&j=1.6&v=Y&k=Y&bw=1440&bh=719&AQE=1"""|"image/gif"|"Sports and Recreation"|"66.235.133.11"|"Allow"|"Generics"|"Access"|"-"|"DOMAIN\gen7@Domain 10.XXX.XXX.XXX"|"[-]"
"2013-07-24 05:00 GMT"|"2013-07-25 04:59 GMT"|"1374728356"|"hxxp://ad.auditude.com/adserver/e?type=podprogress&br=4&z=50389&u=e91d539c7acb7daed69ab3fcdb2a4ea0&pod=id%3A4%2Cctype%3Al%2Cptype%3At%2Cdur%3A200%2Clot%3A5%2Cedur%3A0%2Celot%3A0%2Ccpos%3A3&advancepattern=1&l=1374710168&cid=1922976207&event=complete&uid=RzsxnCYcRkiQ6p9YxyRdEQ&s=e9c06908&t=1374728168"|"-"|"Advertisements"|"63.140.50.240"|"Allow"|"Generics"|"Access"|"-"|"DOMAIN\gen7@Domain 10.XXX.XXX.XXX"|"[-]"
```
First code that chews up resources, but spits it out as hoped, is this:
```
$filename = 'Dump.csv'
$csv = Import-csv $filename -Delimiter '|'
$csv | foreach {
$url = $_.URL
$_.URL = $url -replace '^\"*(\w*)://([^/]*)/(.*)$','$1://$2'
}
$csv | Export-Csv 'DumpParsed.csv'
```
Spits it out like this:
```
"Begin Date","End Date","Time (GMT -05:00)","URL","CONTENT TYPE","URL CATEGORY","DESTINATION IP","Disposition","Policy Name","Policy Type","Application Type","User","User Type"
"2013-07-24 05:00 GMT","2013-07-25 04:59 GMT","1374728377","hxxp://mediadownloads.mlb.com","video/mp4","Sports and Recreation","165.254.94.168","Allow","Generics","Access","Media","DOMAIN\gen7@Domain 10.XXX.XXX.XXX","[-]"
"2013-07-24 05:00 GMT","2013-07-25 04:59 GMT","1374728376","hxxp://stats.pandora.com","text/javascript","Streaming Audio","208.85.40.44","Allow","Generics","Access","Media","DOMAIN\gen7@Domain 10.XXX.XXX.XXX","[-]"
"2013-07-24 05:00 GMT","2013-07-25 04:59 GMT","1374728357","hxxp://b.scorecardresearch.com","image/gif","Business and Industry","207.152.125.91","Allow","Generics","Access","-","DOMAIN\gen7@Domain 10.XXX.XXX.XXX","[-]"
"2013-07-24 05:00 GMT","2013-07-25 04:59 GMT","1374728356","hxxp://lt150.tritondigital.com","text/plain","Business and Industry","208.92.52.90","Allow","Generics","Access","-","DOMAIN\GEN1@Domain 10.XXX.XXX.XXX","[-]"
"2013-07-24 05:00 GMT","2013-07-25 04:59 GMT","1374728356","hxxp://an.mlb.com","image/gif","Sports and Recreation","66.235.133.11","Allow","Generics","Access","-","DOMAIN\gen7@Domain 10.XXX.XXX.XXX","[-]"
"2013-07-24 05:00 GMT","2013-07-25 04:59 GMT","1374728356","hxxp://ad.auditude.com","-","Advertisements","63.140.50.240","Allow","Generics","Access","-","DOMAIN\gen7@Domain 10.XXX.XXX.XXX","[-]"
```
The second code works markedly faster, but spits out badly formatted data, which SQL doesn't like.
```
$filename = 'Dump.csv'
Import-csv $filename -Delimiter '|' | foreach {
$_.URL = $_.URL -replace '^\"*(\w*)://([^/]*)/(.*)$','$1://$2'
Add-Content 'DumpParsed.csv' "$_"
}
```
Not so pretty output:
```
@{Begin Date=2013-07-24 05:00 GMT; End Date=2013-07-25 04:59 GMT; Time (GMT -05:00)=1374728377; URL=hxxp://mediadownloads.mlb.com; CONTENT TYPE=video/mp4; URL CATEGORY=Sports and Recreation; DESTINATION IP=165.254.94.168; Disposition=Allow; Policy Name=Generics; Policy Type=Access; Application Type=Media; User=DOMAIN\gen7@Domain 10.XXX.XXX.XXX; User Type=[-]}
@{Begin Date=2013-07-24 05:00 GMT; End Date=2013-07-25 04:59 GMT; Time (GMT -05:00)=1374728357; URL=hxxp://b.scorecardresearch.com; CONTENT TYPE=image/gif; URL CATEGORY=Business and Industry; DESTINATION IP=207.152.125.91; Disposition=Allow; Policy Name=Generics; Policy Type=Access; Application Type=-; User=DOMAIN\gen7@Domain 10.XXX.XXX.XXX; User Type=[-]}
@{Begin Date=2013-07-24 05:00 GMT; End Date=2013-07-25 04:59 GMT; Time (GMT -05:00)=1374728356; URL=hxxp://lt150.tritondigital.com; CONTENT TYPE=text/plain; URL CATEGORY=Business and Industry; DESTINATION IP=208.92.52.90; Disposition=Allow; Policy Name=Generics; Policy Type=Access; Application Type=-; User=DOMAIN\GEN1@Domain 10.XXX.XXX.XXX; User Type=[-]}
@{Begin Date=2013-07-24 05:00 GMT; End Date=2013-07-25 04:59 GMT; Time (GMT -05:00)=1374728356; URL=hxxp://an.mlb.com; CONTENT TYPE=image/gif; URL CATEGORY=Sports and Recreation; DESTINATION IP=66.235.133.11; Disposition=Allow; Policy Name=Generics; Policy Type=Access; Application Type=-; User=DOMAIN\gen7@Domain 10.XXX.XXX.XXX; User Type=[-]}
@{Begin Date=2013-07-24 05:00 GMT; End Date=2013-07-25 04:59 GMT; Time (GMT -05:00)=1374728356; URL=hxxp://ad.auditude.com; CONTENT TYPE=-; URL CATEGORY=Advertisements; DESTINATION IP=63.140.50.240; Disposition=Allow; Policy Name=Generics; Policy Type=Access; Application Type=-; User=DOMAIN\gen7@Domain 10.XXX.XXX.XXX; User Type=[-]}
```
Any other ideas? I know a bit of powershell, and a little bit of sql. But I'm open to anything else. | First if you do this:
```
$csv = Import-csv $filename -Delimiter '|'
```
You load the whole file into memory as objects constructed from the fields. So no surprise memory consumption and performance are an issue. The second approach isn't too bad but it should be dumping out in CSV format. As it stands now, it dumps the contents of the objects that it creates. You could try this:
```
$filename = 'Dump.csv'
Import-csv $filename -Delimiter '|' |
Foreach {$_.URL = $_.URL -replace '^\"*(\w*)://([^/]*)/(.*)$','$1://$2'} |
ConvertTo-Csv -NoTypeInfo | Out-File DumpParsed.csv -Enc UTF8 -Append
```
BTW it would be interesting to see if skipping CSV processing would speed this up significantly or not e.g.:
```
Get-Content $filename | Foreach {$_ -replace '\"*(\w*)://([^/]*)/[^"]*"(.*)','$1://$2"$3'} |
Out-File DumpParsed.csv -Enc UTF8
```
I'm just guessing on the log file's original encoding. It could very well be ASCII. | Your second solution works faster because it does not put all the file in memory. You can try to change it like that :
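For what it's worth, the same skip-the-CSV-parser idea is easy to prototype in Python as well; this sketch (regex adapted for the demo, not identical to the PowerShell one) trims a quoted URL field down to scheme://host without parsing the pipe-delimited columns:

```python
import re

# Trim "scheme://host/anything" down to "scheme://host"; fields that
# contain a '/' but no '://' (e.g. "video/mp4") are left alone.
url_trim = re.compile(r'(\w+://[^/|"]+)/[^"|]*')

def trim_line(line):
    return url_trim.sub(r'\1', line)

line = '"2013-07-24 05:00 GMT"|"hxxp://mediadownloads.mlb.com/mlbam/x.mp4"|"video/mp4"'
print(trim_line(line))  # URL field becomes "hxxp://mediadownloads.mlb.com"
```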
```
$filename = 'Dump.csv'
Import-csv $filename -Delimiter '|' |
    foreach { $_.URL = $_.URL -replace '^\"*(\w*)://([^/]*)/(.*)$','$1://$2'; $_ } |
    Export-Csv 'DumpParsed.csv'
``` | Efficiently replace text from column within large CSV | [
"",
"sql",
"powershell",
"csv",
""
] |
I use the query below, but it says `Incorrect syntax near if`. How do I solve this problem?
```
CREATE FUNCTION getCustomerAllReseller
(
@code BIGINT
)
RETURNS TABLE AS
RETURN (
WITH Directories AS
(
IF (@code < 0)
SELECT code FROM CustomerAll WHERE code = @code
ELSE
SELECT code FROM Customer WHERE code = @code
)
SELECT * FROM Directories
)
``` | Try this one -
```
CREATE FUNCTION dbo.getCustomerAllReseller
(
@code BIGINT
)
RETURNS TABLE AS
RETURN
(
SELECT code
FROM dbo.CustomerAll
WHERE code = @code
AND @code < 0
UNION ALL
SELECT code
FROM dbo.Customer
WHERE code = @code
AND @code >= 0
)
``` | The syntax for `with` expects a query, not a statement. You can use conditions in a query with a union:
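The branch-as-predicate trick is portable; here is a quick sketch of the same pattern against an in-memory SQLite database (table layout invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE CustomerAll (code INTEGER);
    CREATE TABLE Customer    (code INTEGER);
    INSERT INTO CustomerAll VALUES (-5);
    INSERT INTO Customer    VALUES (7);
""")

# Each branch's WHERE clause turns the IF condition into a predicate,
# so exactly one SELECT contributes rows for any given :code.
branched = """
    SELECT code FROM CustomerAll WHERE code = :code AND :code < 0
    UNION ALL
    SELECT code FROM Customer    WHERE code = :code AND :code >= 0
"""
print(con.execute(branched, {"code": -5}).fetchall())  # [(-5,)]
print(con.execute(branched, {"code": 7}).fetchall())   # [(7,)]
```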
```
WITH Directories AS (
select code from CustomerAll where @code<0 and code=@code
union all
Select code from Customer where @code>=0 and code=@code
)
SELECT * FROM Directories
``` | IF Else in function | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
We have a DB that stores details of our contract clients - we need to check every month if they have moved base. So I'm trying to run a query that will return any that have changed homebase in the last month.
Using MySQL phpMyAdmin on localhost via a Unix socket; the `for` column holds the month, i.e. '07-2013', as VARCHAR.
```
SELECT crewcode, firstname, surname, homebase,'for' AS 'Date'
FROM `hours`
WHERE homebase ...
```
Not sure what I need here? I need to see if homebase has changed in the last month, from 06-2013 to 07-2013.
Help is greatly appreciated. | Here is the solution for anyone interested:
```
SELECT A.employee_ID as crewcode,
A.emp_firstname,
emp_lastname,
B.base,
B.base_for AS 'Date'
FROM `hs_hr_employee` as A,
`employee_base_table` as B
where B.base_for=(SELECT DATE_FORMAT(STR_TO_DATE( CURDATE( ) , '%Y-%m-%d' ),'%m-%Y'))
AND A.emp_number=B.emp_number
group by A.employee_ID
``` | To look for changes in the last 30 days:
```
WHERE mydate > curdate() - interval 30 day
```
To look for changes in the last calendar month:
```
WHERE year(mydate) = year(curdate() - interval 1 month) and month(mydate) = month(curdate() - interval 1 month)
``` | query for returning the changes in the last month | [
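One caveat with subtracting 1 from the month number directly is the January wraparound (`month(...) - 1` becomes 0 and never matches); shifting a whole date back one month avoids it. A small Python sketch of the safe computation (the `prev_month` helper name is made up for illustration):

```python
from datetime import date, timedelta

def prev_month(today):
    # Step back to the last day of the previous month, then read
    # its year and month; no special case needed for January.
    last_of_prev = today.replace(day=1) - timedelta(days=1)
    return last_of_prev.year, last_of_prev.month

print(prev_month(date(2013, 7, 15)))  # (2013, 6)
print(prev_month(date(2013, 1, 15)))  # (2012, 12)
```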
"",
"mysql",
"sql",
""
] |
I have 3 queries which share the same table; query 1 is below:
```
SELECT MAX(TranDate) AS MaxDate
FROM tblTransaction
WHERE AccNo = 12345
```
After getting the max transaction date of account 12345, I want to use the max date to find the max transaction ID in query 2 as below:
```
SELECT MAX(TransactionID) AS MaxTran
FROM tblTransaction
WHERE AccNo = 12345
AND TranDate = 'pass max tran date here'
```
After that, I want to pass the max transaction ID from query 2 to query 3 as below:
```
SELECT Commission
FROM tblTransaction
WHERE AccNo = 12345
AND TransactionID = 'pass max tranID here'
```
How do I combine these 3 queries? Thanks
```
SELECT Commission
FROM tblTransaction
WHERE AccNo = 12345
ORDER BY TranDate DESC,
TransactionID DESC
LIMIT 1 ;
```
and for **SQL-Server**:
```
SELECT TOP (1) Commission
FROM tblTransaction
WHERE AccNo = 12345
ORDER BY TranDate DESC,
TransactionID DESC ;
``` | ```
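It is easy to convince yourself that the single ORDER BY ... LIMIT 1 query picks the same row as the three nested queries; a sketch using an in-memory SQLite table shaped like `tblTransaction` (contents invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tblTransaction (
        TransactionID INTEGER, AccNo INTEGER,
        TranDate TEXT, Commission REAL);
    INSERT INTO tblTransaction VALUES
        (1, 12345, '2013-01-05', 10.0),
        (2, 12345, '2013-02-09', 20.0),
        (3, 12345, '2013-02-09', 30.0),
        (4, 99999, '2013-03-01', 99.0);
""")

# Latest TranDate, then highest TransactionID, for account 12345.
row = con.execute("""
    SELECT Commission FROM tblTransaction
    WHERE AccNo = 12345
    ORDER BY TranDate DESC, TransactionID DESC
    LIMIT 1
""").fetchone()
print(row[0])  # 30.0
```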
SELECT Commission
FROM tblTransaction
WHERE AccNo = 12345
AND TransactionID = (
SELECT MAX(TransactionID) AS MaxTran
FROM tblTransaction
WHERE AccNo = 12345
AND TranDate = (
SELECT MAX(TranDate) AS MaxDate
FROM tblTransaction
WHERE AccNo = 12345
)
)
``` | SQL combine 3 query by using same table | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I need to run a query on some data that brings back the moving 12 month total. I've already performed the average function on the data... so now I just need to loop through and get a moving 12 month period of data and SUM it up so I can graph it.
Below is the current query I'm issuing but it isn't adding up correctly. Basically I want to have January of 1996 show the SUM of the prior 12 months' averaged data. So January of 1996 would be the sum of January 1995 + February 1995 .... all the way through December 1995. Then February of 1996 would be the SUM of February 1995 + March 1995 ....
Is this possible using the OVER function in Oracle 11g? Seems like it is but I'm not getting the right results for some reason.
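In plain Python, the calculation described above (each month shows the sum of the 12 preceding months, with the current month excluded) looks like this sketch; `trailing_12_month_totals` is a made-up name, and note that the Oracle window frame matching this description would end at `1 PRECEDING` rather than `CURRENT ROW`:

```python
# monthly: list of (year, month, value) in chronological order.
def trailing_12_month_totals(monthly):
    totals = []
    for i, (y, m, _) in enumerate(monthly):
        window = monthly[max(0, i - 12):i]   # the 12 rows before row i
        totals.append((y, m, round(sum(v for _, _, v in window), 3)))
    return totals

data = [(1995, m, 1.0) for m in range(1, 13)] + [(1996, 1, 2.0)]
print(trailing_12_month_totals(data)[-1])  # (1996, 1, 12.0)
```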
Here is my current query:
```
/* Formatted on 9/5/2013 3:20:55 PM (QP5 v5.149.1003.31008) */
SELECT MONTH_DT,
YEAR_DT,
AVERAGE_TOTAL_RAINFALL,
(SUM (AVERAGE_TOTAL_RAINFALL)
OVER (ORDER BY MONTH_DT ROWS BETWEEN 12 PRECEDING AND CURRENT ROW)) as "12_MONTH_TOTAL"
FROM ( SELECT month_dt,
year_dt,
TRUNC ( (SUM (tsvalue_ms)
/ (SELECT COUNT (FEATURE_ID)
FROM nexrad.springshed_featureid
WHERE springshed_id = 1)), 3)
AS average_total_rainfall
FROM ts_monthly_agg a, nexrad.springshed_featureid b
WHERE a.FEATURE_ID = b.FEATURE_ID AND b.springshed_id = 1
GROUP BY year_dt, month_dt
ORDER BY year_dt, month_dt)
ORDER BY year_dt, month_dt;
```
Here are a few rows of the sample data and the current way the OVER function is calculating.
```
MONTH YEAR AVERAGE 12_MONTH_TOTAL
------------------------------------
1 1995 4.31 4.31
2 1995 1.932 27.473
3 1995 3.733 47.523
4 1995 4.216 44.841
5 1995 3.721 31.573
6 1995 8.379 50.459
7 1995 6.028 102.591
8 1995 7.918 105.076
9 1995 3.516 97.507
10 1995 5.623 74.056
11 1995 1.813 30.904
12 1995 1.881 23.03
1 1996 2.625 6.935
2 1996 1.165 27.459
3 1996 9.374 55.274
4 1996 2.84 45.388
5 1996 2.538 32.714
6 1996 5.952 55.483
7 1996 6.562 102.816
8 1996 8.428 108.123
9 1996 3.364 95.583
10 1996 4.222 72.252
11 1996 0.453 31.116
12 1996 4.968 26.6
```
Here is a dataset in case that helps.
Thanks so much for any assistance!
Josh
```
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 1995, 4.31);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 1995, 1.932);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 1995, 3.733);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 1995, 4.216);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 1995, 3.721);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 1995, 8.379);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 1995, 6.028);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 1995, 7.918);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 1995, 3.516);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 1995, 5.623);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 1995, 1.813);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 1995, 1.881);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 1996, 2.625);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 1996, 1.165);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 1996, 9.374);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 1996, 2.84);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 1996, 2.538);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 1996, 5.952);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 1996, 6.562);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 1996, 8.428);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 1996, 3.364);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 1996, 4.222);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 1996, 0.453);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 1996, 4.968);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 1997, 2.384);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 1997, 0.903);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 1997, 2.575);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 1997, 4.692);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 1997, 1.786);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 1997, 6.301);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 1997, 6.515);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 1997, 5.053);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 1997, 5.01);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 1997, 7.267);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 1997, 4.254);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 1997, 8.082);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 1998, 3.205);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 1998, 10.933);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 1998, 4.298);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 1998, 0.385);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 1998, 1.037);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 1998, 2.544);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 1998, 8.21);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 1998, 6.244);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 1998, 10.461);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 1998, 0.378);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 1998, 1.35);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 1998, 0.969);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 1999, 5.353);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 1999, 1.116);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 1999, 1.749);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 1999, 2.02);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 1999, 2.605);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 1999, 8.07);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 1999, 4.378);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 1999, 7.373);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 1999, 5.294);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 1999, 1.281);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 1999, 1.35);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 1999, 1.279);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2000, 2.217);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2000, 0.862);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2000, 1.45);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2000, 0.698);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2000, 0.205);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2000, 8.584);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2000, 5.381);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2000, 5.288);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2000, 6.026);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2000, 0.241);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2000, 1.398);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2000, 0.814);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2001, 2.005);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2001, 0.953);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2001, 6.8);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2001, 0.648);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2001, 0.484);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2001, 6.337);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2001, 8.872);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2001, 5.446);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2001, 8.764);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2001, 0.532);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2001, 0.842);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2001, 1.388);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2002, 1.179);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2002, 1.623);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2002, 2.293);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2002, 1.397);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2002, 0.928);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2002, 9.158);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2002, 7.98);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2002, 8.007);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2002, 4.902);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2002, 2.282);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2002, 3.611);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2002, 7.606);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2003, 0.172);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2003, 4.871);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2003, 7.315);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2003, 1.408);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2003, 3.083);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2003, 12.093);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2003, 5.617);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2003, 6.838);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2003, 3.103);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2003, 2.773);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2003, 1.191);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2003, 0.658);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2004, 1.769);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2004, 5.733);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2004, 1.565);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2004, 1.688);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2004, 1.694);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2004, 7.389);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2004, 9.441);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2004, 7.223);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2004, 18.424);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2004, 2.368);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2004, 2.131);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2004, 1.699);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2005, 0.958);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2005, 2.1);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2005, 6.085);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2005, 6.194);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2005, 5.947);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2005, 10.413);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2005, 11.105);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2005, 11.823);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2005, 2.737);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2005, 4.933);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2005, 2.659);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2005, 4.525);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2006, 1.48);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2006, 6.679);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2006, 0.261);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2006, 2.652);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2006, 2.194);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2006, 6.536);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2006, 7.587);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2006, 8.422);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2006, 5.141);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2006, 1.583);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2006, 1.845);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2006, 4.588);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2007, 3.543);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2007, 5.29);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2007, 2.623);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2007, 1.931);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2007, 1.404);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2007, 5.479);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2007, 10.63);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2007, 5.385);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2007, 6.448);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2007, 6.419);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2007, 0.845);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2007, 2.127);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2008, 3.357);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2008, 4.253);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2008, 5.22);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2008, 2.192);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2008, 0.541);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2008, 8.029);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2008, 9.963);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2008, 13.26);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2008, 2.199);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2008, 1.179);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2008, 1.49);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2008, 0.69);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2009, 3.007);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2009, 1.456);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2009, 1.693);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2009, 2.359);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2009, 10.714);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2009, 5.592);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2009, 8.076);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2009, 5.85);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2009, 4.032);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2009, 1.501);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2009, 3.027);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2009, 2.909);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2010, 3.812);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2010, 5.226);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2010, 6.655);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2010, 1.09);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2010, 5.203);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2010, 5.846);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2010, 6.793);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2010, 9.743);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2010, 2.599);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2010, 0.01);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2010, 0.94);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2010, 0.677);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2011, 4.928);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2011, 2.23);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2011, 3.986);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2011, 2.12);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2011, 2.214);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2011, 5.476);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(7, 2011, 5.713);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(8, 2011, 6.706);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(9, 2011, 4.058);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(10, 2011, 5.27);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(11, 2011, 1.17);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2011, 0.363);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2012, 1.034);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2012, 2.375);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2012, 2.148);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2012, 1.658);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2012, 5.773);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(6, 2012, 14.215);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(12, 2012, 3.889);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(1, 2013, 0.302);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(2, 2013, 1.954);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(3, 2013, 0.781);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(4, 2013, 3.163);
Insert into RAINFALL_DATA
(MONTH_DT, YEAR_DT, AVERAGE_TOTAL_RAINFALL)
Values
(5, 2013, 2.385);
``` | To get the desired result you need to order by the `year_dt` column, or by the combination of the `year_dt` and `month_dt` columns:
```
select month_dt
, year_dt
, average_total_rainfall
, sum(average_total_rainfall) over(order by year_dt, month_dt
rows between 12 preceding
and current row) res
from rainfall_data
```
The result:
```
MONTH_DT YEAR_DT AVERAGE_TOTAL_RAINFALL RES
---------- ---------- ---------------------- ----------
1 1995 4,31 4,31
2 1995 1,932 6,242
3 1995 3,733 9,975
4 1995 4,216 14,191
5 1995 3,721 17,912
6 1995 8,379 26,291
7 1995 6,028 32,319
8 1995 7,918 40,237
9 1995 3,516 43,753
10 1995 5,623 49,376
11 1995 1,813 51,189
12 1995 1,881 53,07
1 1996 2,625 55,695
2 1996 1,165 52,55
3 1996 9,374 59,992
4 1996 2,84 59,099
5 1996 2,538 57,421
6 1996 5,952 59,652
7 1996 6,562 57,835
8 1996 8,428 60,235
```
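The window semantics are easy to check outside the database; a minimal pure-Python sketch (toy values, not the rainfall data) of what `rows between 12 preceding and current row` computes over correctly ordered rows:

```python
# A sliding sum over at most 13 ordered rows: 12 preceding plus the current one.
# Toy data: twelve months of 1.0 in 1995, then 2.0 in January 1996.
rows = [(1995, m, 1.0) for m in range(1, 13)] + [(1996, 1, 2.0)]
rows.sort(key=lambda r: (r[0], r[1]))        # order by year_dt, month_dt

def moving_total(values, preceding=12):
    totals = []
    for i in range(len(values)):
        window = values[max(0, i - preceding):i + 1]
        totals.append(round(sum(window), 3))
    return totals

totals = moving_total([r[2] for r in rows])
print(totals[0], totals[-1])   # 1.0 14.0  (last window: twelve 1.0s + 2.0)
```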
When you are ordering only by `month_dt` the rows will be processed by the analytic function in the following order:
```
month_dt year_dt average
--------------------------
1 1995 4.31
1 1996 2.625
1 1997 2.384
1 1998 3.205
1 1999 5.353
1 2000 2.217
1 2001 2.005
1 2002 1.179
1 2003 0.172
1 2004 1.769
1 2005 0.958
1 2006 1.48
1 2007 3.543
```
producing a result that doesn't meet your expectations. | I think you just need to include the year in the `order by` clause:
```
(SUM (AVERAGE_TOTAL_RAINFALL)
OVER (ORDER BY YEAR_DT, MONTH_DT ROWS BETWEEN 12 PRECEDING AND CURRENT ROW)) as "12_MONTH_TOTAL"
``` | Query Oracle for 12 month moving total (SUM) of data | [
"",
"sql",
"oracle",
"oracle11g",
"analytics",
""
] |
Is there any way to get a datetime when the row was created? Is it stored anywhere? I just forgot to add a column which tells when the row was inserted, and lots of rows have been inserted and I don't know when they were inserted... Thanks. | Unfortunately there's no way to achieve that. `MySQL` doesn't store a `timestamp` of row creation, which is why you always have to create such a column yourself, along with an `ON UPDATE` clause if you want the column updated every time the row is updated. | Nope. You need a column to record that; it's not stored in the system otherwise. | Get the date when the row was inserted MySQL | [
"",
"mysql",
"sql",
""
] |
Please see the table structure below:
```
CREATE TABLE Person (id int not null, PID INT NOT NULL, Name VARCHAR(50))
CREATE TABLE [Order] (OID INT NOT NULL, PID INT NOT NULL)
INSERT INTO Person VALUES (1,1,'Ian')
INSERT INTO Person VALUES (2,2,'Maria')
INSERT INTO [Order] values (1,1)
```
Why does the following query return two results:
```
select * from Person WHERE id IN (SELECT ID FROM [Order])
```
ID does not exist in Order. Why does the query above produce results? I would expect it to error because ID does not exist in Order. | The problem here is that you're not using `Table.Column` notation in your subquery: table `Order` doesn't have a column `ID`, so `ID` in the subquery really means `Person.ID`, not `[Order].ID`. That's why I always insist on using aliases for tables in production code. Compare these two queries:
```
select * from Person WHERE id IN (SELECT ID FROM [Order]);
select * from Person as p WHERE p.id IN (SELECT o.ID FROM [Order] as o)
```
The first one will execute but will return incorrect results, and the second one will raise an error. It's because the outer query's columns may be referenced in a subquery, so in this case you can use `Person` columns inside the subquery.
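This scoping rule is easy to reproduce with Python's built-in `sqlite3` module, which follows the same standard name resolution (the table is named `Ord` here only because `Order` is a reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Person (id INTEGER, pid INTEGER, name TEXT);
CREATE TABLE Ord (oid INTEGER, pid INTEGER);
INSERT INTO Person VALUES (1, 1, 'Ian'), (2, 2, 'Maria');
INSERT INTO Ord VALUES (1, 1);
""")

# Ord has no column "id", yet this does NOT fail: the bare "id" silently
# resolves to the outer Person.id, so every Person row matches.
rows = con.execute(
    "SELECT name FROM Person WHERE id IN (SELECT id FROM Ord)").fetchall()
print(rows)

# Qualifying the column with an alias surfaces the mistake immediately:
try:
    con.execute("SELECT name FROM Person WHERE id IN (SELECT o.id FROM Ord AS o)")
    err = None
except sqlite3.OperationalError as exc:
    err = str(exc)
print(err)   # no such column: o.id
```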
Perhaps you wanted to use the query like this:
```
select * from Person WHERE pid IN (SELECT PID FROM [Order])
```
But you never know when the schema of the `[Order]` table changes, and if somebody drops the column `PID` from `[Order]` then your query will return all rows from the table `Person`. Therefore, use aliases:
```
select * from Person as P WHERE P.pid IN (SELECT O.PID FROM [Order] as O)
```
Just a quick note - this is not SQL Server specific behaviour, it's standard SQL:
* [**SQL Server demo**](http://sqlfiddle.com/#!3/a18a4/1)
* [**PostgreSQL demo**](http://sqlfiddle.com/#!12/a18a4/2)
* [**MySQL demo**](http://sqlfiddle.com/#!2/a18a4/2)
* [**Oracle demo**](http://sqlfiddle.com/#!4/a18a4/15) | This behavior, while unintuitive, is very well defined in Microsoft's Knowledge Base:
[KB #298674 : PRB: Subquery Resolves Names of Column to Outer Tables](http://support.microsoft.com/kb/298674)
From that article:
> To illustrate the behavior, use the following two table structures and query:
```
CREATE TABLE X1 (ColA INT, ColB INT)
CREATE TABLE X2 (ColC INT, ColD INT)
SELECT ColA FROM X1 WHERE ColA IN (Select ColB FROM X2)
```
> The query returns a result where the column ColB is considered from table X1.
>
> By qualifying the column name, the error message occurs as illustrated by the following query:
```
SELECT ColA FROM X1 WHERE ColA in (Select X2.ColB FROM X2)
```
> Server: Msg 207, Level 16, State 3, Line 1
>
> Invalid column name 'ColB'.
Folks have been complaining about this issue for years, but Microsoft isn't going to fix it. It is, after all, complying with the standard, which essentially states:
> If you don't find column x in the current scope, traverse to the next outer scope, and so on, until you find a reference.
More information in the following Connect "bugs" along with multiple official confirmations that this behavior is by design and is not going to change (so you'll have to change yours - i.e. **always use aliases**):
[Connect #338468 : CTE Column Name resolution in Sub Query is not validated](http://connect.microsoft.com/SQLServer/feedback/details/338468/cte-column-name-resolution-in-sub-query-is-not-validated)
[Connect #735178 : T-SQL subquery not working in some cases when IN operator used](http://connect.microsoft.com/SQLServer/feedback/details/735178/t-sql-subquery-not-working-in-some-cases-when-in-operator-used)
[Connect #302281 : Non-existent column causes subquery to be ignored](http://connect.microsoft.com/SQLServer/feedback/details/302281/non-existent-column-causes-subquery-to-be-ignored-rather-than-errored-out)
[Connect #772612 : Alias error not being reported when within an IN operator](http://connect.microsoft.com/SQLServer/feedback/details/772612/alias-error-not-being-reported-when-within-an-in-operator)
[Connect #265772 : Bug using sub select](http://connect.microsoft.com/SQLServer/feedback/details/265772/bug-using-sub-select)
In your case, this "error" will probably be much less likely to occur if you use more meaningful names than ID, OID and PID. Does `Order.PID` point to `Person.id` or `Person.PID`? Design your tables so that people can figure out the relationships without having to ask you. A `PersonID` should always be a `PersonID`, no matter where in the schema it is; same with an `OrderID`. Saving a few characters of typing is not a good price to pay for a completely ambiguous schema.
You could write an `EXISTS` clause instead:
```
... FROM dbo.Person AS p WHERE EXISTS
(
SELECT 1 FROM dbo.[Order] AS o
WHERE o.PID = p.id -- or is it PID? See why it pays to be explicit?
);
``` | SQL IN query produces strange result | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
In my app, I have a requirement to insert 600 rows, and each row has 10 parameters; that means I am inserting 6000 values every 5 seconds. So I am deleting the previous values and inserting 600 rows every 5 seconds. The other way is that I can update the previous data using a WHERE clause, so I am wondering which will have the least performance impact. Also, if I have some millions of rows and I want to select the data for ID 50000, then I will write it this way:
```
select * from mytable where ID=50000.
```
After some time, if I need the data for ID 90000, will SQL Server search from the beginning (ID=1) for every SELECT statement, or will it move directly to ID=90000?
My question is: will SQL Server start looking from ID=1 until it matches the WHERE clause, or how does it search?
**EDIT** ID is the primary key and auto increment | I totally agree with Aakash's answer that UPDATE is most probably the faster approach here.
---
However I would like to clear up a potential misunderstanding (or lack of understanding) on how a relational database works:
> *If i need the data for id 90000 then will the sql server search from beginning ID(ID=1) for every select statement or it will directly move to ID=90000*
Assuming `ID` is the primary key in your table (or has a unique index defined) then SQL Server will look up the row with ID=90000 directly (actually pretty much every relational database will do it that way).
Finding the row with `id=1` takes the same amount of time as finding the row with `ID=90000` or `ID=90000000`
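You can see the same thing in any engine that exposes its query plan; a quick sketch with Python's built-in `sqlite3` module (illustrative, but SQL Server behaves the same way for a clustered primary key):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (ID INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                ((i, 'x') for i in range(1, 100001)))
# The engine seeks the primary-key index directly; it does not scan from ID=1.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM mytable WHERE ID = 90000").fetchall()
print(plan[0][-1])   # e.g. SEARCH mytable USING INTEGER PRIMARY KEY (rowid=?)
```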
I suggest you take some time to read the SQL Server manual to understand how a relational database works. The following topics might be interesting for you:
* <http://msdn.microsoft.com/en-us/library/ms190659%28v=sql.100%29.aspx>
* <http://msdn.microsoft.com/en-us/library/ms190623%28v=sql.100%29.aspx>
* <http://msdn.microsoft.com/en-us/library/bb500155%28v=sql.100%29.aspx>
Additionally you might want to have a look at "Use The Index Luke": <http://use-the-index-luke.com/>
There is a *lot* of very good information on how indexes work and how the database uses them. | Generally, **UPDATE is much faster** than DELETE+INSERT, it being a single command.
The bigger the table (number and size of columns), the more expensive it becomes to delete and insert rather than update. This is because you have to pay the price of UNDO and REDO.
Also, keep in mind that the fragmentation that occurs when DELETE+INSERT is issued, as opposed to a correctly implemented UPDATE, will make a great difference over time. | Which is faster insert query or update in sql? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
## The goal
Add from the 5th value.
## The problem
There is a table on my database called `markets`. Within it, there are 5 values:
```
+----+------+
| Id | Name |
+----+------+
| 0 | NULL |
+----+------+
| 1 | A |
+----+------+
| 2 | B |
+----+------+
| 3 | C |
+----+------+
| 4 | D |
+----+------+
```
`Id` is a primary key and auto-incremented column and its start by zero (I added it manually to use as default). When I add some market to this table, seems that MySQL adds the market in question with `0` as `Id` — and I want to set the `Id` as `5` to continue the table's logic. So I ask: **how can I add some value from the 5th `Id`?**
## What I've already tried
I already set the `auto_increment` value for `5`, but without success. The query that I used is the following:
```
ALTER TABLE `markets` AUTO_INCREMENT = 5;
```
## How do I know that the MySQL is attempting to add the market with its `Id` as `0`?
My application is powered by C# that is throwing an exception:
> Duplicate entry '0' for key 'PRIMARY' | On your `INSERT` statement, leave the `auto_increment` field out of the statement. This will leave MySQL to increment it and add it automatically. | There are already four values in the table, so you can just insert the next row; it will automatically take 5 as the next value.
When we are using auto\_increment, there is no need to specify that column in the insert command.
```
INSERT INTO markets(Name) VALUES('E');
```
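The same behaviour can be sketched with Python's built-in `sqlite3` module (an `INTEGER PRIMARY KEY` auto-assigns the next value much like MySQL's auto\_increment; toy data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE markets (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO markets (id, name) VALUES (?, ?)",
                [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D')])
# Leave the key out of the INSERT and the engine assigns the next value:
con.execute("INSERT INTO markets (name) VALUES ('E')")
new_id = con.execute("SELECT id FROM markets WHERE name = 'E'").fetchone()[0]
print(new_id)   # 5
```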
In case you want to alter the auto\_increment value, you can use the following statement:
```
ALTER TABLE markets AUTO_INCREMENT = 5;
``` | Add from the 5th value | [
"",
"mysql",
"sql",
""
] |
I have a stored procedure that uses a simple UPDATE with some variables passed to it. But I don't want to update those fields when their variables aren't null. This is essentially what my statement looks like.
```
UPDATE myTable
SET myColumn = @myColumn,
mColumn1 = @myColumn1
WHERE myColumn2 = @myColumn2
```
Is there any way to apply some conditional logic within the SET? I have around 10 fields that need to be checked, so I wouldn't want to do an update per field or something like that.
Any ideas? | [`COALESCE`](http://msdn.microsoft.com/en-us/library/ms190349.aspx) is your friend. It returns its first non-NULL argument. I'm not actually sure from your narrative which way around you want things, it's either:
```
UPDATE myTable
SET myColumn = COALESCE(myColumn,@myColumn),
mColumn1 = COALESCE(myColumn1,@myColumn1)
WHERE myColumn2 = @myColumn2
```
Which keeps the current column's value if the column's not null, or
```
UPDATE myTable
SET myColumn = COALESCE(@myColumn,myColumn),
mColumn1 = COALESCE(@myColumn1,myColumn1)
WHERE myColumn2 = @myColumn2
```
Which keeps the current column's value if the *variable* is null. | Try to use [coalesce](http://technet.microsoft.com/pl-pl/library/ms190349.aspx) function as below
```
UPDATE myTable
SET myColumn = coalesce(myColumn,@myColumn),
mColumn1 = coalesce(mColumn1,@myColumn1)
WHERE myColumn2 = @myColumn2
```
Above code updates your columns only when they are null. If they are not null the code sets the same value stored in the columns. | Can I check if a variable is NULL within an Update's SET? | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a table with two columns as shown below:
```
A001 A1;A2;A3
B002 B1
C003 C1;C2
D004 D1;D2;D3;D4
E005 E1
```
The table has two columns; the second column contains either a single value or multiple values as a string separated by a semicolon.
What I would like to do is, for each of the rows where the value is a string with multiple values, split the string and use each value to insert a new row (duplicating the value from the first column for the new row).
With the above sample data, the output would be something like this:
```
A001 A1
A001 A2
A001 A3
B002 B1
C003 C1
C003 C2
D004 D1
D004 D2
D004 D3
D004 D4
E005 E1
``` | ```
with src as
(
select 'A001' col1, 'A1;A2;A3' col2 from dual union all
select 'B002', 'B1' from dual union all
select 'C003', 'C1;C2' from dual union all
select 'D004', 'D1;D2;D3;D4' from dual union all
select 'E005', 'E1' from dual
)
, explode as
(
select col1
, regexp_substr(col2, '\w+', 1, 1) as col2_1
, regexp_substr(col2, '\w+', 1, 2) as col2_2
, regexp_substr(col2, '\w+', 1, 3) as col2_3
, regexp_substr(col2, '\w+', 1, 4) as col2_4
-- if there is more add more...
from src
)
select col1, col2_1 from explode where col2_1 is not null union all
select col1, col2_2 from explode where col2_2 is not null union all
select col1, col2_3 from explode where col2_3 is not null union all
select col1, col2_4 from explode where col2_4 is not null
order by col1
;
```
The result:
```
/*
A001 A1
A001 A2
A001 A3
B002 B1
C003 C2
C003 C1
D004 D1
D004 D4
D004 D2
D004 D3
E005 E1
*/
``` | ```
select id, txt from table1
model partition by (id) dimension by (1 as n) measures (txt) (
txt[for n from regexp_count(txt[1], '[^;]+') to 1 decrement 1] =
regexp_substr(txt[1], '[^;]+', 1, cv(n))
)
order by id, n
```
[fiddle](http://www.sqlfiddle.com/#!4/122f3/1) | SQL - Splitting a string into a new row | [
"",
"sql",
"plsql",
""
] |
I'm having some difficulties trying to get the following to work in SQL Server:
```
CREATE TABLE table_X AS
SELECT
NEXT VALUE FOR GROUP_A AS GROUP_ID, RISK_ID
FROM
( SELECT UNIQUE RISK_ID, FROM table_Y ) SS;
```
I am told:
```
"Incorrect syntax near the keyword 'SELECT'."
"Incorrect syntax near the keyword 'UNIQUE'."
```
What exactly is the problem and how can I fix this to work in `SQL Server`? | ```
CREATE TABLE Person
(
FirstName Varchar(50),
LastName Varchar(50)
);
/*
Insert Some Values Into Person
*/
CREATE SEQUENCE CountBy
START WITH 1
INCREMENT BY 1 ;
SELECT NEXT VALUE FOR CountBy OVER (ORDER BY LastName) AS ListNumber,
FirstName, LastName
INTO table_x
FROM Person
``` | ```
SELECT
NEXT VALUE FOR GROUP_A AS GROUP_ID, SS.RISK_ID
INTO
table_X
FROM
( SELECT DISTINCT RISK_ID FROM table_Y ) SS
``` | How to make SELECT NEXT VALUE work in SQL Server? | [
"",
"sql",
"sql-server",
"syntax",
"syntax-error",
"porting",
""
] |
I have a script that runs 2 rather slow SQL queries that are almost identical. One selects from the database records that are supposed to be "featured" on a page, i.e. they come first. Another selects from the database the regular records. Each query takes approximately 0.3 seconds, and since both run at the same time, I lose 0.6 seconds on them. So I was wondering if it would be possible to combine these queries. Here is what they look like:
```
SELECT * FROM records WHERE [other where conditions] AND featured>=$timestamp ORDER BY featured DESC
SELECT * FROM records WHERE [other where conditions] AND featured<$timestamp ORDER BY created DESC
```
What I have labeled [other where conditions] is identical in both cases. So what I essentially need to do is:
```
SELECT * FROM records WHERE [other where conditions] ORDER BY featured DESC UNLESS featured < $timestamp, created DESC
```
Except I don't know how to tell MySQL "ORDER BY ... UNLESS". | You can do this with the `order by` alone:
```
SELECT *
FROM records
WHERE [other where conditions]
ORDER BY (case when featured >= $timestamp then featured else $timestamp end) DESC,
created desc;
``` | You can use [`IF()`](http://dev.mysql.com/doc/refman/5.6/en/control-flow-functions.html) for this.
```
SELECT * FROM records
WHERE [other where conditions]
ORDER BY IF(featured >= $timestamp, featured, created) DESC
``` | MySQL order by value1 unless value1 < 0, then order by value2 | [
"",
"mysql",
"sql",
""
] |
I am pretty much new to bcp, but I have researched quite a bit and cannot find any resource that says whether we are actually sending the user name and password to the database with this command. So can everyone access the database?
```
bcp AdventureWorks2008.HumanResources.Employee out C:\Data\EmployeeData.dat -T
``` | ```
bcp AdventureWorks2008.HumanResources.Employee out C:\Data\EmployeeData.dat -T -c -U<login_id> -P<password> -S<server_name\instance_name>
``` | You aren't sending the username and password with the -T argument. In fact, it won't send any specified username and password to SQL Server if -T is present in the command line.
-T tells BCP to use Windows Authentication to connect to SQL Server. It will operate as the user who's running the command. | Where we are giving password by using BCP command? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
"bcp",
""
] |
I'm having a problem with grouping in SQL Server Express 2005
I have a `DATETIME` column, but I want to group it only by date.
Here is my SQL statement:
```
SELECT (u.FirstName + ' ' + u.LastName) AS [FullName],d.user_id,CONVERT(varchar,d.log_date,101) AS log_date, min(d.login_time) as LOG_IN, max(d.logout_time) as LOG_OUT, sum(d.totaltime) as TOTHrs
FROM tbldtr d INNER JOIN tblUsers u ON d.user_id = u.User_Id
WHERE d.user_id = 'ADMIN1' and d.log_date BETWEEN '6/1/2013' AND '6/15/2013'
GROUP BY DATEADD(day, DATEDIFF(day, 0, log_date), 0),u.FirstName,u.LastName,d.user_id order by d.log_date asc
```
but it gives me this error:
> **Column 'tbldtr.log\_date' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY
> clause.**
Thanks in advance! | Just move `convert(varchar,d.log_date,101)` into the `group by` clause:
```
select
u.FirstName + ' ' + u.LastName as [FullName],
d.user_id,
convert(varchar, d.log_date, 101) as log_date,
min(d.login_time) as LOG_IN,
max(d.logout_time) as LOG_OUT,
sum(d.totaltime) as TOTHrs
from tbldtr d
inner join tblUsers u on d.user_id = u.User_Id
where d.user_id = 'ADMIN1' and d.log_date between '20130601' AND '20130615'
group by
convert(varchar, d.log_date, 101),
u.FirstName, u.LastName, d.user_id
order by log_date asc
```
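The idea - derive a date-only expression and group by that same expression - can be sketched with Python's built-in `sqlite3` module (illustrative data; SQLite's `date()` plays the role of `convert(varchar, ..., 101)` here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbldtr (log_date TEXT, totaltime REAL)")
con.executemany("INSERT INTO tbldtr VALUES (?, ?)", [
    ("2013-06-01 08:00:00", 4.0),
    ("2013-06-01 13:00:00", 4.5),
    ("2013-06-02 08:30:00", 8.0),
])
# Group by the date-only expression, not the raw datetime column:
rows = con.execute("""
    select date(log_date) as log_day, sum(totaltime) as TOTHrs
    from tbldtr
    group by date(log_date)
    order by log_day
""").fetchall()
print(rows)   # [('2013-06-01', 8.5), ('2013-06-02', 8.0)]
```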
Also, it's safer to change the dates in the `where` clause into an unambiguous format - `YYYYMMDD` | Use the casting function, so it should become something like:
```
GROUP BY CAST(d.log_date AS DATE)
``` | How to group by DATE only in column datetime | [
"",
"sql",
"sql-server",
""
] |
I have a simple table with two columns, like the one below:
```
Id | Name
0 | A
1 | A
2 | B
3 | B
4 | C
5 | D
6 | E
7 | E
```
I want to make a SQL query which will count how many times each "Name" appears on the table. However, I need a few of these values to count as if they were the same. For example, a normal group by query would be:
```
select Name, count(*)
from table
group by Name
```
The above query would produce the result:
```
Name | Count
A | 2
B | 2
C | 1
D | 1
E | 2
```
but I need the query to count "A" and "B" as if they were only "A", and to count "D" and "E" as if they were only "D", so that the result would be like:
```
Name | Count
A | 4 // (2 "A"s + 2 "B"s)
C | 1
D | 3 // (1 "D" + 2 "E"s)
```
How can I make this kind of query? | You can do the translation with `case`. Also, you can use a subquery or CTE so you don't have [to repeat yourself](http://en.wikipedia.org/wiki/Don%27t_repeat_yourself):
```
with cte as (
select
case Name
when 'B' then 'A'
when 'E' then 'D'
else Name
end as Name
from table
)
select Name, count(*)
from cte
group by Name
```
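The `case` translation can be verified with Python's built-in `sqlite3` module (toy data matching the example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                enumerate(['A', 'A', 'B', 'B', 'C', 'D', 'E', 'E']))
# Fold B into A and E into D before grouping:
rows = con.execute("""
    select case name when 'B' then 'A' when 'E' then 'D' else name end as folded,
           count(*)
    from t
    group by case name when 'B' then 'A' when 'E' then 'D' else name end
    order by folded
""").fetchall()
print(rows)   # [('A', 4), ('C', 1), ('D', 3)]
```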
or with an inline translation table:
```
select
isnull(R.B, t.Name), count(*)
from table as t
left outer join (
select 'A', 'B' union all
select 'E', 'D'
) as R(A, B) on R.A = t.Name
group by isnull(R.B, t.Name)
``` | If you need `A` and `B`, `D` and `E`, to count the same, you can build a query like this:
```
SELECT
CASE Name WHEN 'B' THEN 'A' WHEN 'E' THEN 'D' ELSE Name END as Name
, COUNT(*)
FROM table
GROUP BY CASE Name WHEN 'B' THEN 'A' WHEN 'E' THEN 'D' ELSE Name END
```
[Demo on sqlfiddle](http://www.sqlfiddle.com/#!3/45c68/3). | SQL Query with group by clause, but counting two distinct values as if they were the same | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have a table within a SQL Server 2008 database called `Meter`. This table has a column called `Name`.
Each entry within the column `Name` has a prefix of the following `ZAA\`. I'd like to change this prefix to `ZAA_` without affecting the rest of the text within the column. | ```
UPDATE Meter
SET Name = 'ZAA_' + SUBSTRING(Name, 4, LEN(Name))
WHERE SUBSTRING(Name, 1, 4) = 'ZAA\'
```
Edit:
Or as @Damien\_The\_Unbliever states, to use an index:
```
UPDATE Meter
SET Name = 'ZAA_' + SUBSTRING(Name, 4, LEN(Name))
WHERE Name LIKE 'ZAA\%'
```
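A quick check of the substring arithmetic with Python's built-in `sqlite3` module (illustrative; note that `substr` has to start at position 5 to drop the whole 4-character prefix):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Meter (Name TEXT)")
con.executemany("INSERT INTO Meter VALUES (?)",
                [('ZAA\\M1',), ('ZAA\\M2',), ('XYZ\\M3',)])
# Rewrite only the 4-character prefix, keeping the rest of the value:
con.execute("""
    UPDATE Meter
    SET Name = 'ZAA_' || substr(Name, 5)
    WHERE Name LIKE 'ZAA\\%'
""")
names = sorted(r[0] for r in con.execute("SELECT Name FROM Meter"))
print(names)   # ['XYZ\\M3', 'ZAA_M1', 'ZAA_M2']
```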
**EDIT**
From your comment, try this statement to fix the additional `\`:
```
UPDATE Meter
SET Name = 'ZAA_' + SUBSTRING(Name, 5, LEN(Name))
WHERE Name LIKE 'ZAA_\%'
``` | MYSQL:
```
UPDATE Meter
SET Name = CONCAT('ZAA' ,'_', SUBSTRING(Name, 4, LENGTH(Name)))
WHERE Name LIKE 'ZAA\%'
``` | Update substring of a column | [
"",
"sql",
"database",
"sql-server-2008",
"t-sql",
"where-clause",
""
] |
I have a table called Rule.
```
RuleId Name
1 A1
2 A2
3 A3
.
.
.
```
Now I want all the names as a single result, maybe like `@allnames = A1,A2,A3`.
Can somebody advise how to write a query for this without using loops?
Thanks in advance... | Try this:
```
SELECT @allnames = STUFF((SELECT distinct ',' + Name
FROM table1
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
, 1, 1, '');
```
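Other engines expose string aggregation directly (SQL Server 2017+ has `STRING_AGG`, Oracle has `LISTAGG`); a sketch of the same idea with Python's built-in `sqlite3` module and `GROUP_CONCAT` (toy data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rules (RuleId INTEGER, Name TEXT)")
con.executemany("INSERT INTO rules VALUES (?, ?)",
                [(1, 'A1'), (2, 'A2'), (3, 'A3'), (4, 'A3')])
# DISTINCT collapses the duplicate 'A3' before concatenating:
allnames = con.execute(
    "SELECT GROUP_CONCAT(DISTINCT Name) FROM rules").fetchone()[0]
print(sorted(allnames.split(',')))   # ['A1', 'A2', 'A3']
```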
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/1c24c3/5) | ```
DECLARE @names NVARCHAR(MAX)
SELECT @names = coalesce(@names + ',', '') + coalesce(Name, '')
FROM (SELECT distinct Name FROM Rule) x
print @names
``` | concatenate all rows of a column into single value | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a table with currency name, then currency sk.
I have a second table with currency\_sk\_from, currency\_sk\_to and conversion\_rate.
I want to get a new table which combines this so I get currency\_sk\_from, currency\_name, currency\_sk\_to, currency\_name\_to, conversion\_rate
Can someone please help? When I tried to join, it doubled the size of the second table.
Table one, looks like
```
currency name Currency SK
EUR , 1
USD , 2
SEK , 3
```
etc
Table two, looks like
```
currency_sk_from currency_sk_to conversion_rate
1 , 2 , 1.5
2 , 1 , .6666
3 , 1 , 41
```
What I want is
```
currency_sk_from currency_name currency_sk_to currency_name_to conversion_rate
1 , EUR , 2 , USD , 1.5
2 , USD , 1 , EUR , .6666
3 , SEK , 1 , EUR , 41
``` | You could try using subqueries instead of joins:-
```
SELECT CURRENCY_SK_FROM,
(SELECT CURRENCY_NAME
FROM CURRENCY
WHERE CURRENCY_SK = CURRENCY_SK_FROM) AS CURRENCY_NAME,
CURRENCY_SK_TO,
(SELECT CURRENCY_NAME
FROM CURRENCY
WHERE CURRENCY_SK = CURRENCY_SK_TO) AS CURRENCY_NAME_TO,
CONVERSION_RATE
FROM CONVERSION_RATE_TABLE;
``` | If I have understood the question correctly, the below query will do the job:
```
SELECT DISTINCT
cskf.currency_sk_from,
cskfrom.currency_name,
cskf.currency_sk_to,
cskto.currency_name currency_name_to,
cskf.conversion_rate
FROM
currency_sk cskfrom
JOIN
currency_sk_from cskf ON
cskfrom.currency_sk = cskf.currency_sk_from
JOIN
currency_sk cskto ON
cskto.currency_sk = cskf.currency_sk_to
```
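The join-based approach can be sanity-checked with Python's built-in `sqlite3` module (toy data from the question; joining the lookup table twice, once per key, keeps the row count unchanged):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE currency (currency_sk INTEGER, currency_name TEXT);
CREATE TABLE rate (currency_sk_from INTEGER, currency_sk_to INTEGER,
                   conversion_rate REAL);
INSERT INTO currency VALUES (1, 'EUR'), (2, 'USD'), (3, 'SEK');
INSERT INTO rate VALUES (1, 2, 1.5), (2, 1, 0.6666), (3, 1, 41);
""")
# One join per foreign key resolves both names without doubling the rows:
rows = con.execute("""
    SELECT r.currency_sk_from, cf.currency_name,
           r.currency_sk_to,   ct.currency_name, r.conversion_rate
    FROM rate r
    JOIN currency cf ON cf.currency_sk = r.currency_sk_from
    JOIN currency ct ON ct.currency_sk = r.currency_sk_to
    ORDER BY r.currency_sk_from
""").fetchall()
print(rows[0])    # (1, 'EUR', 2, 'USD', 1.5)
print(len(rows))  # 3 -- same as the rate table
```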
fiddle:<http://sqlfiddle.com/#!3/26e09e/2> | Two SK's in one table | [
"",
"sql",
"join",
""
] |
I've read numerous tutorials and graphical representations of MySQL joins, and they still don't make sense to me.
I'm trying to type my own now, and they are working, but I just don't see how they're working.
Take this set of tables
```
images squares
---------------------------- ----------------------------------
image_id | name | square_id square_id | latitude | longitude
---------------------------- ----------------------------------
1 someImg 14 1 42.333 24.232
2 newImg 3 2 38.322 49.2320
3 blandImg 76 3 11.2345 99.4323
... ...
n n
```
This is a one to many relationship - one square can have many images, but an image can only have one square.
Now I run this simple join, but I'm not understanding the syntax of it at all...
```
SELECT images.image_id
FROM squares
LEFT JOIN images ON images.square_id=squares.square_id
WHERE images.square_id=711464;
```
Now, this actually works, which amazes me. It brings up a list of images within the square range.
But I'm having a hard time understanding the `ON` syntax.
* **What does `ON` do exactly?**
* **Does it show how the two tables are related?**
Mainly however, `SELECT images.image_id FROM squares`, makes the least sense.
**How can I select a field in one table but `FROM` another?** | Let's start with the `FROM` clause, which in its entirety is:
```
FROM squares LEFT JOIN images ON images.square_id=squares.square_id
```
(it's not just `FROM squares`).
This defines the source of your data. You specify both tables, `squares` and `images` so they are both sources for the data that the query will work on.
Next, you use the `on` syntax to explain how these tables are related to one another. `images.square_id=squares.square_id` means: consider a row in the `images` table related to a row in the `squares` table if and only if the value of the field `square_id` of the `images` row is equal to the value of the field `square_id` of the `squares` row. At this moment, each row of the result is a combination of a row from the `images` table and a row from the `squares` table (I'll ignore the `LEFT JOIN` at the moment).
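The `ON` condition is easy to see in action with Python's built-in `sqlite3` module (toy rows modeled on the tables above); each result row is one `squares`-row/`images`-row combination whose `square_id` values match:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE squares (square_id INTEGER, latitude REAL, longitude REAL);
CREATE TABLE images  (image_id INTEGER, name TEXT, square_id INTEGER);
INSERT INTO squares VALUES (14, 42.333, 24.232), (3, 38.322, 49.232);
INSERT INTO images  VALUES (1, 'someImg', 14), (2, 'newImg', 3), (9, 'more', 3);
""")
# Square 14 pairs with one image, square 3 with two:
rows = con.execute("""
    SELECT squares.square_id, images.image_id
    FROM squares
    LEFT JOIN images ON images.square_id = squares.square_id
""").fetchall()
print(sorted(rows))   # [(3, 2), (3, 9), (14, 1)]
```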
Next, you have the `WHERE` clause
```
WHERE images.square_id=711464
```
This means: from the rows that are in the result set, just get those where the value of the `square_id` field, in the part of the result row that came from the `images` table, is exactly `711464`.
And last comes the `SELECT` part.
```
SELECT images.image_id
```
This means, from the rows that are in the result set (a combination of a `squares` row and an `images` row), take just the field `image_id` that is from the `images` table. | You should read the query as such:
```
SELECT images.image_id FROM
squares LEFT JOIN images
ON images.square_id=squares.square_id
WHERE
images.square_id=711464
```
So you first join the `squares` table with the `images` table, combining entries in `images` which have the same `square_id` as in the `squares` table. So, the `ON` syntax is actually specifying the condition on which the two tables should be joined.
Then, you do a filter using `WHERE`, which will take the entries with `square_id=711464`.
Note that by the time you do the `SELECT`, you already joined the two tables, which will have the combined fields:
```
images
--------------------------------------------------
square_id | latitude | longitude | image_id | name
--------------------------------------------------
```
So, you can select the `square_id` from the resulting table. | SQL Joins - Why does this simple join work, despite the syntax making no sense? | [
"",
"mysql",
"sql",
"join",
""
] |
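As a runnable footnote to the join question above: the join-then-filter-then-project order that both answers describe can be reproduced in miniature with SQLite driven from Python. This is illustrative only; `square_id` and `image_id` come from the question, but the sample data is invented.

```python
import sqlite3

# Minimal mock-up of the two tables from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE squares (square_id INTEGER);
    CREATE TABLE images (image_id INTEGER, square_id INTEGER);
    INSERT INTO squares VALUES (711464), (711465);
    INSERT INTO images VALUES (1, 711464), (2, 711464);
""")

# Join first (FROM ... LEFT JOIN ... ON), then filter (WHERE), then project (SELECT).
rows = conn.execute("""
    SELECT images.image_id
    FROM squares LEFT JOIN images ON images.square_id = squares.square_id
    WHERE images.square_id = 711464
""").fetchall()
image_ids = sorted(r[0] for r in rows)
```

Square 711465 has no matching images, so its NULL-extended row is filtered out by the `WHERE` clause, leaving only image ids 1 and 2.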
I'm trying to run a query to update the user's password using:
```
alter user dell-sys with password 'Pass@133';
```
But because of the `-` it's giving me an error like:
```
ERROR: syntax error at or near "-"
LINE 1: alter user dell-sys with password 'Pass@133';
^
```
Can anyone shed some light on it? | I have reproduced the issue in my system,
```
postgres=# alter user my-sys with password 'pass11';
ERROR: syntax error at or near "-"
LINE 1: alter user my-sys with password 'pass11';
^
```
Here is the issue,
psql is still asking for input (note the `postgres-#` continuation prompt), and the alter query was entered again; that's why it now reports an error at `alter`:
```
postgres-# alter user "my-sys" with password 'pass11';
ERROR: syntax error at or near "alter"
LINE 2: alter user "my-sys" with password 'pass11';
^
```
The solution is as simple as the error:
```
postgres=# alter user "my-sys" with password 'pass11';
ALTER ROLE
``` | Wrap it in double quotes
```
alter user "dell-sys" with password 'Pass@133';
```
Notice that you will have to use the same case you used when you created the user with double quotes. Say you created `"Dell-Sys"`; then you will have to write exactly the same name whenever you refer to that user.
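The same error class can be demonstrated outside Postgres: SQLite (driven from Python here, purely as an illustration) also parses an unquoted hyphen as the minus operator, and double-quoting the identifier fixes it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unquoted: the parser sees "dell" minus "sys" and rejects the statement.
try:
    conn.execute("CREATE TABLE dell-sys (x INTEGER)")
    failed = False
except sqlite3.OperationalError:
    failed = True

# Quoted: "dell-sys" is a single identifier and is accepted.
conn.execute('CREATE TABLE "dell-sys" (x INTEGER)')
conn.execute('INSERT INTO "dell-sys" VALUES (1)')
count = conn.execute('SELECT COUNT(*) FROM "dell-sys"').fetchone()[0]
```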
I think the best thing to do is to drop that user and recreate it without special identifier characters and without double quotes, so you can later refer to it in any case you want. | syntax error at or near "-" in PostgreSQL | [
"",
"sql",
"database",
"postgresql",
"syntax-error",
"alter",
""
] |
I've inherited a project that somehow ended up with random rows in the application settings table getting duplicated. I got the duplicate rows removed successfully, but then I noticed that the actual values, of type nvarchar, for said rows are also duplicated.
For instance, one row has Key column `Error Email Address` and Value column `websupport@mycompany.com,websupport@mycompany.com`. The Value column should just contain `websupport@mycompany.com`. There are numerous records like this, all following the same pattern of `The value,The value`.
How can I detect when the Value column contains this kind of duplicated data and correct it?
Note that the comma alone is not enough to say the row is invalid, because things like Key `Default Error Message` Value `Oops, something went wrong` are correct and also contain a comma. | Here is the query:
```
Update TableName
set Col1=substring(Col1,0,charindex(',',Col1))
where substring(Col1,0,charindex(',',Col1))=substring(Col1,charindex(',',Col1)+1,500)
```
Replace Col1 with the column name and TableName with the actual table name. It only updates rows where the value before the comma and the value after it are identical, replacing them with the single value. | Here is an `update` that would solve the problem:
```
update t
set value = left(value, len(value)/2)
where left(value, len(value)/2) = right(value, len(value)/2) and
substring(value, (len(value)/2) + 1, 1) = ',';
```
You can validate the logic by doing `select` first:
```
select value
from t
where left(value, len(value)/2) = right(value, len(value)/2) and
substring(value, (len(value)/2) + 1, 1) = ',';
``` | Nvarchar column value is doubled, how do I detect and fix this? | [
"",
"sql",
"sql-server",
""
] |
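A runnable footnote to the doubled-nvarchar question above: the halving test used in the second answer is easy to prototype outside SQL first. Plain Python, illustrative only; the sample strings are taken from the question.

```python
def is_doubled(value: str) -> bool:
    """True when value is exactly '<half>,<half>', i.e. the doubled pattern."""
    mid = len(value) // 2
    # Odd total length: two equal halves separated by a single comma.
    return len(value) % 2 == 1 and value[mid] == "," and value[:mid] == value[mid + 1:]

doubled = "websupport@mycompany.com,websupport@mycompany.com"
legitimate = "Oops, something went wrong"

# Mirror of the UPDATE: keep only the first half when the row is doubled.
fixed = doubled[: len(doubled) // 2] if is_doubled(doubled) else doubled
```

Note how `legitimate` contains a comma but fails the halves-equal check, so it would not be touched.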
first time asking a question on Stack Overflow... Amazing resource, but there's just one thing that's really baffling me as a newcomer to SQL.
I have four tables and I would like to obtain the names of all the Mentors who are linked to Bob's students.
Table 1: TEACHERS
```
================
ID Name
================
1 Bob
```
Table 2: STUDENTS
```
===================================
STUDENT_ID Name TEACHER_ID
===================================
1 Jayne 1
2 Billy 5
3 Mark 2
```
Table 3: MENTOR\_RELATIONSHIPS
```
==============================
ID STUDENT_ID MENTOR_ID
==============================
1 1 3
2 2 2
3 3 3
```
Table 4: MENTORS
```
=====================
MENTOR_ID Name
=====================
1 Sally
2 Gillian
3 Sean
```
I would like to run a query to find all of the mentors of Bob's students. So the mentors for all students with `TEACHER_ID = 1`
In this case Sean would be the result.
I know that it has something to do with joins, or could I find this using a normal query?
Any help is much appreciated! Many thanks... | This should do the trick:
```
select distinct m.name from students s
inner join mentor_relationships mr on mr.student_id=s.student_id
inner join mentors m on m.mentor_id=mr.mentor_id
where s.teacher_id=1;
``` | Without joins (not preferred)
```
SELECT mentors.name FROM mentors
WHERE mentors.id
IN (SELECT MENTOR_RELATIONSHIPS.mentor FROM MENTOR_RELATIONSHIPS
WHERE MENTOR_RELATIONSHIPS.student
IN (SELECT students.id FROM students WHERE students.teacher
= (SELECT teachers.id FROM teachers WHERE teachers.name = 'bob')));
``` | SQL Query (or Join) for 3 tables | [
"",
"sql",
"join",
"inner-join",
""
] |
I have two tables, both containing "callid, callnumber, callduration". Now I want to get a call which has the same callnumber whereas callduration is greater than a given value, i.e.
```
Select a.callnumber
, b.callnumber
from table a
INNER JOIN table b ON ( a.callnumber = b.callnumber
AND a.callduration-b.callduration > value );
```
but this returns multiple results fulfilling the creteria whereas i just want a call which have minimum duration difference. | the join condition in your original sql and your verbal explanation mismatch, so here are two versions: with the durations above a given value as opposed to their difference being greater than the given value:
v1:
```
Select a.callnumber
, min (a.callduration-b.callduration) mindiff
from table a
INNER JOIN table b ON ( a.callnumber = b.callnumber
AND a.callduration > value
AND b.callduration > value )
group by a.callnumber
;
```
v2:
```
Select a.callnumber
, min (a.callduration-b.callduration) mindiff
from table a
INNER JOIN table b ON ( a.callnumber = b.callnumber
AND a.callduration-b.callduration > value )
group by a.callnumber
;
``` | ```
Select
a.callnumber,
b.callnumber
From table a
Join table b
On a.callnumber = b.callnumber
And a.callduration - b.callduration > value And a.rowid < b.rowid
```
--- use rowid to avoid repetition of data (I have tested it on Oracle) | How to select only distinct result based on particular criteria from tables using join | [
"",
"mysql",
"sql",
""
] |
I'm having a very tough time trying to figure out how to do a dynamic pivot in SQL Server 2008 with multiple columns.
My sample table is as follows:
```
ID YEAR TYPE TOTAL VOLUME
DD1 2008 A 1000 10
DD1 2008 B 2000 20
DD1 2008 C 3000 30
DD1 2009 A 4000 40
DD1 2009 B 5000 50
DD1 2009 C 6000 60
DD2 2008 A 7000 70
DD2 2008 B 8000 80
DD2 2008 C 9000 90
DD2 2009 A 10000 100
DD2 2009 B 11000 110
DD2 2009 C 12000 120
```
and I'm trying the pivot it as follows:
```
ID 2008_A_TOTAL 2008_A_VOLUME 2008_B_TOTAL 2008_B_VOLUME 2008_C_TOTAL 2008_C_VOLUME 2009_A_TOTAL 2009_A_VOLUME 2009_B_TOTAL 2009_B_VOLUME 2009_C_TOTAL 2009_C_VOLUME
DD1 1000 10 2000 20 3000 30 4000 40 5000 50 6000 60
DD2 7000 70 8000 80 9000 90 10000 100 11000 110 12000 120
```
My SQL Server 2008 query is as follows to create the table:
```
CREATE TABLE ATM_TRANSACTIONS
(
ID varchar(5),
T_YEAR varchar(4),
T_TYPE varchar(3),
TOTAL int,
VOLUME int
);
INSERT INTO ATM_TRANSACTIONS
(ID,T_YEAR,T_TYPE,TOTAL,VOLUME)
VALUES
('DD1','2008','A',1000,10),
('DD1','2008','B',2000,20),
('DD1','2008','C',3000,30),
('DD1','2009','A',4000,40),
('DD1','2009','B',5000,50),
('DD1','2009','C',6000,60),
('DD2','2008','A',7000,70),
('DD2','2008','B',8000,80),
('DD2','2008','C',9000,90),
('DD2','2009','A',10000,100),
('DD2','2009','B',11000,110),
('DD2','2009','C',1200,120);
```
The `T_Year` column may change in the future but the `T_TYPE` column is generally known, so I'm not sure if I can use a combination of the PIVOT function in SQL Server with dynamic code?
I tried following the example here:
<http://social.technet.microsoft.com/wiki/contents/articles/17510.t-sql-dynamic-pivot-on-multiple-columns.aspx>
but I ended up with with weird results. | In order to get the result, you will need to look at unpivoting the data in the `Total` and `Volume` columns first before applying the PIVOT function to get the final result. My suggestion would be to first write a hard-coded version of the query then convert it to dynamic SQL.
The UNPIVOT process converts these multiple columns into rows. There are a few ways to UNPIVOT, you can use the UNPIVOT function or you can use CROSS APPLY. The code to unpivot the data will be similar to:
```
select id,
col = cast(t_year as varchar(4))+'_'+t_type+'_'+col,
value
from ATM_TRANSACTIONS t
cross apply
(
select 'total', total union all
select 'volume', volume
) c (col, value);
```
This gives you data in the format:
```
+-----+---------------+-------+
| id | col | value |
+-----+---------------+-------+
| DD1 | 2008_A_total | 1000 |
| DD1 | 2008_A_volume | 10 |
| DD1 | 2008_B_total | 2000 |
| DD1 | 2008_B_volume | 20 |
| DD1 | 2008_C_total | 3000 |
| DD1 | 2008_C_volume | 30 |
+-----+---------------+-------+
```
Then you can apply the PIVOT function:
```
select ID,
[2008_A_total], [2008_A_volume], [2008_B_total], [2008_B_volume],
[2008_C_total], [2008_C_volume], [2009_A_total], [2009_A_volume]
from
(
select id,
col = cast(t_year as varchar(4))+'_'+t_type+'_'+col,
value
from ATM_TRANSACTIONS t
cross apply
(
select 'total', total union all
select 'volume', volume
) c (col, value)
) d
pivot
(
max(value)
for col in ([2008_A_total], [2008_A_volume], [2008_B_total], [2008_B_volume],
[2008_C_total], [2008_C_volume], [2009_A_total], [2009_A_volume])
) piv;
```
Now that you have the correct logic, you can convert this to dynamic SQL:
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(cast(t_year as varchar(4))+'_'+t_type+'_'+col)
from ATM_TRANSACTIONS t
cross apply
(
select 'total', 1 union all
select 'volume', 2
) c (col, so)
group by col, so, T_TYPE, T_YEAR
order by T_YEAR, T_TYPE, so
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT id,' + @cols + '
from
(
select id,
col = cast(t_year as varchar(4))+''_''+t_type+''_''+col,
value
from ATM_TRANSACTIONS t
cross apply
(
select ''total'', total union all
select ''volume'', volume
) c (col, value)
) x
pivot
(
max(value)
for col in (' + @cols + ')
) p '
execute sp_executesql @query;
```
This will give you a result:
```
+-----+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+
| id | 2008_A_total | 2008_A_volume | 2008_B_total | 2008_B_volume | 2008_C_total | 2008_C_volume | 2009_A_total | 2009_A_volume | 2009_B_total | 2009_B_volume | 2009_C_total | 2009_C_volume |
+-----+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+
| DD1 | 1000 | 10 | 2000 | 20 | 3000 | 30 | 4000 | 40 | 5000 | 50 | 6000 | 60 |
| DD2 | 7000 | 70 | 8000 | 80 | 9000 | 90 | 10000 | 100 | 11000 | 110 | 1200 | 120 |
+-----+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+--------------+---------------+
``` | ```
declare @stmt nvarchar(max)
select @stmt = isnull(@stmt + ', ', '') +
'sum(case when T_YEAR = ''' + T.T_YEAR + ''' and T_TYPE = ''' + T.T_TYPE + ''' then TOTAL else 0 end) as ' + quotename(T.T_YEAR + '_' + T.T_TYPE + '_TOTAL') + ',' +
'sum(case when T_YEAR = ''' + T.T_YEAR + ''' and T_TYPE = ''' + T.T_TYPE + ''' then VOLUME else 0 end) as ' + quotename(T.T_YEAR + '_' + T.T_TYPE + '_VOLUME')
from (select distinct T_YEAR, T_TYPE from ATM_TRANSACTIONS) as T
order by T_YEAR, T_TYPE
select @stmt = '
select
ID, ' + @stmt + ' from ATM_TRANSACTIONS group by ID'
exec sp_executesql
@stmt = @stmt
```
unfortunately, sqlfiddle.com is not working at the moment, so I cannot create an example for you.
The query created by dynamic SQL would be:
```
select
ID,
  sum(case when T_YEAR = '2008' and T_TYPE = 'A' then TOTAL else 0 end) as [2008_A_TOTAL],
  sum(case when T_YEAR = '2008' and T_TYPE = 'A' then VOLUME else 0 end) as [2008_A_VOLUME],
...
from ATM_TRANSACTIONS
group by ID
``` | SQL Server : dynamic pivot over 5 columns | [
"",
"sql",
"sql-server",
"sql-server-2008",
"pivot",
""
] |
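A runnable footnote to the pivot question above: SQLite (driven from Python here) has neither `PIVOT` nor `CROSS APPLY`, so this is only a sketch of the same unpivot-then-aggregate shape, with `UNION ALL` doing the unpivot and `MAX(CASE ...)` standing in for `PIVOT`. The data is a subset of the question's sample.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE atm (id TEXT, t_year TEXT, t_type TEXT, total INT, volume INT);
    INSERT INTO atm VALUES
        ('DD1','2008','A',1000,10), ('DD1','2008','B',2000,20),
        ('DD2','2008','A',7000,70), ('DD2','2008','B',8000,80);
""")

# Unpivot TOTAL/VOLUME into (id, col, value) rows, then pivot back with MAX(CASE...).
rows = conn.execute("""
    SELECT id,
           MAX(CASE WHEN col = '2008_A_total'  THEN value END) AS "2008_A_total",
           MAX(CASE WHEN col = '2008_A_volume' THEN value END) AS "2008_A_volume"
    FROM (
        SELECT id, t_year || '_' || t_type || '_total'  AS col, total  AS value FROM atm
        UNION ALL
        SELECT id, t_year || '_' || t_type || '_volume' AS col, volume AS value FROM atm
    )
    GROUP BY id
    ORDER BY id
""").fetchall()
```

Only the two `2008_A_*` columns are materialized here; the dynamic-SQL step in the answers exists precisely to generate one such `CASE` pair per year/type combination.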
I'm returning results from a stored procedure and passing them off to a function for further processing. In some cases, it's possible (and perfectly fine) for one of the fields, a date value, to return null.
However, whenever I pass the null into the function, an exception is thrown trying to convert the null into the function parameter's type. What is the best way to handle this?
Data:
```
Name StartDate EndDate
Bob 01/01/2013 NULL
```
Calling Function:
```
MyFunction(
DataRow.Item("StartDate"),
DataRow.Item("EndDate")) ' <--- invalid cast exception
```
Function:
```
Public Function MyFunction(
ByVal StartDate as Date,
ByVal EndDate as Date) As Object
....
Return something
End Function
```
**EDIT:** Lots of great tips but still no dice.
Declaring the DateTime type in the function as nullable, `ByVal EndDate as DateTime?`, results in `System.InvalidCastException: Specified cast is not valid.`
Using DataRow.Field(Of DateTime)("EndDate") along with declaring the parameter as a nullable type results in `System.InvalidCastException: Cannot cast DBNull.Value to type 'System.DateTime'`
**EDIT2:** Found a source of one of my problems. I was using Iif(), and one of the values was of type System.DBNull, the other was of type Date. And both the true and false parts must be the same type. Took me a while to spot that. | You can try making the parameter [`Nullable`](http://msdn.microsoft.com/en-us/library/b3h38hb0.aspx?cs-save-lang=1&cs-lang=vb#code-snippet-2):
```
Public Function MyFunction(ByVal StartDate as DateTime, ByVal EndDate as DateTime?) As Object
....
Return something
End Function
```
But you might need to call it using the [`Field`](http://msdn.microsoft.com/en-us/library/system.data.datarowextensions.field.aspx) method:
```
MyFunction(DataRow.Field(Of DateTime)("StartDate"), DataRow.Field(Of DateTime?)("EndDate"))
``` | Use a [nullable](http://msdn.microsoft.com/en-us/library/b3h38hb0.aspx?cs-save-lang=1&cs-lang=vb#code-snippet-1) DateTime parameter:
```
ByVal EndDate AS Nullable(Of DateTime)
```
Full example:
```
Public Function MyFunction(ByVal StartDate as Date, ByVal EndDate as Nullable(Of DateTime)) As Object
....
Return something
End Function
``` | How to handle passing DBNull into a function? | [
"",
"sql",
".net",
"sql-server",
"vb.net",
""
] |
I have values like this:
```
| UserName | SkillsID |
|------------|------------------|
| Sohil | 1,2,15,16,19, |
| Ankur | 5,8,14,19, |
| Lalu | 4,3,14,15, |
| Vinod | 5, |
| Milind | 8,11 |
```
Now I want to search for the users who have SkillsID = 5; the result would be:
```
| UserName | SkillsID |
|------------|------------------|
| Ankur | 5,8,14,19, |
| Vinod | 5, |
```
What would my query be?
I have thought about splitting SkillsID into a table and then querying that, but I have no idea how to do the split. Can anybody help me?
Thanks in advance to helpers.. | You can do it with [FIND\_IN\_SET()](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set) function or regex:
```
SELECT * FROM t WHERE SkillsID REGEXP '(^|,)5($|,)'
```
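To see why the anchoring in that pattern matters, the same regex can be checked with plain Python's `re` module against the question's sample lists: `15` and `11` never match, while a `5` bounded by the start, the end, or commas does.

```python
import re

# Same pattern as in the SQL above: '5' must be delimited by start/end or commas.
pattern = re.compile(r"(^|,)5($|,)")

skills = ["1,2,15,16,19,", "5,8,14,19,", "4,3,14,15,", "5,", "8,11"]
matches = [s for s in skills if pattern.search(s)]
```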
But you're violating relational DB principles by storing multiple values in one field. You should create a link table to store this properly, with a foreign key to your user table in it. | ## Never, never, never store multiple values in one column!
As you can see, this will only result in problems. Please normalize your DB structure first, like this:
```
User table
+-------------------+
| userid | username |
+-------------------+
| 1 | Sohil |
| 2 | Ankur |
+-------------------+
skill table
--------------------
| Userid| SkillID |
|-------|----------|
| 1 | 1 |
| 1 | 2 |
| 1 | 15 |
| 1 | 16 |
...
| 2 | 5 |
| 2 | 8 |
| 2 | 14 |
...
``` | mysql db query for comma separate values | [
"",
"mysql",
"sql",
""
] |
In Sweden the second to last digit in the [personal identity number](http://en.wikipedia.org/wiki/Personal_identity_number_%28Sweden%29) is ODD if you are a MAN and EVEN if you are a WOMAN.
I want to make this query so that it only selects the women (even numbers).
```
SELECT COL1,COL2,COL3,COL4 FROM TABLE
INNER JOIN TABLE.COL ON TABLE.COL = TABLE.COL
INNER JOIN TABLE.COL ON TABLE.COL = TABLE.COL
WHERE COL = 'TEXT'
AND COL > 5000
AND RIGHT(IDNUMBER,2) = %2 <> 0
``` | ```
WHERE CAST (LEFT(RIGHT(IDNUMBER,2),1) AS INT) % 2 = 0
``` | To pick even numbers, the remainder of the number divided by 2 must be 0, thus:
```
RIGHT(IDNUMBER,2) % 2 = 0
```
But `RIGHT(IDNUMBER,2)` doesn't return the second last digit, it instead returns the last 2 digits. Divide it by 10 to get what you want:
```
RIGHT(IDNUMBER,2)/10 % 2 = 0
```
Another way to get the last 2 digits would be finding the remainder of the number divided by 100, thus:
```
IDNUMBER % 100 / 10 % 2 = 0
```
Actually, we only need to divide by 10, as division by 2 is only dependent on the last digit of a number:
```
IDNUMBER / 10 % 2 = 0
``` | Use a odd number in a condition SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
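A runnable footnote to the odd/even question above: the divide-by-ten arithmetic from the answers is the same in any language. Plain Python, with invented sample numbers standing in for real identity numbers.

```python
def is_woman(idnumber: int) -> bool:
    # Drop the last digit with integer division, then test the parity of
    # what is now the last digit (the original second-to-last digit).
    # Even => woman, per the Swedish personnummer rule in the question.
    return (idnumber // 10) % 2 == 0

parities = [is_woman(n) for n in (1234, 1249)]
```

1234 has second-to-last digit 3 (odd, a man); 1249 has 4 (even, a woman).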
I have the following tables:
```
+---------+ +-----------+ +---------+
| USER | | USER_BOX | | BOX |
| id | | user_id | | id |
| name | | box_id | | name |
| | | | | |
+---------+ +-----------+ +---------+
```
What I want to do is query the DB to list all boxes, but have all boxes that have relations with a user listed first, and **only once**. I'm stuck and I'm not even sure that MySql supports such a query.
```
USER BOX USER_BOX
+--------|--------+ +--------|--------+ +--------|--------+
| 0 | Jonh | | 0 | Boobox | | 0 | 4 |
+--------|--------+ | 1 | RedBox | | 0 | 3 |
| 2 | GGbox | +--------|--------+
| 3 | OKbox |
| 4 | boxy |
+--------|--------+
```
Resulting query should return:
```
4 - boxy
3 - Okbox
0 - Boobox
1 - RedBox
2 - GGbox
```
Edit: The idea is to be able to query by user, so it's easier to locate users boxes in a large list. | ```
SELECT b.id, b.name
FROM BOX b
LEFT JOIN (
USER_BOX ub
JOIN USER u ON (user_id = u.id)
) ON (box_id = b.id)
ORDER BY
u.id IS NULL,
b.id DESC
```
<http://sqlfiddle.com/#!2/a4f304/4/0>
EDIT: Actually there's no need to join on `USER`.
<http://sqlfiddle.com/#!2/a4f304/7/0> | Something like this:
```
SELECT a.id, a.name
FROM BOX AS a
LEFT JOIN USER_BOX AS b
ON a.id = b.box_id
ORDER BY CASE WHEN b.box_id IS NOT NULL THEN 0 ELSE 1 END
,a.id
``` | many to many order by relantionship | [
"",
"mysql",
"sql",
""
] |
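A runnable footnote to the box-ordering question above: the accepted answer's `ORDER BY ... IS NULL, b.id DESC` trick can be reproduced in SQLite from Python (illustrative; data is the question's sample). Note that, unlike the sample output in the question, this ordering also sorts the unlinked boxes descending.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE box (id INT, name TEXT);
    CREATE TABLE user_box (user_id INT, box_id INT);
    INSERT INTO box VALUES (0,'Boobox'), (1,'RedBox'), (2,'GGbox'), (3,'OKbox'), (4,'boxy');
    INSERT INTO user_box VALUES (0, 4), (0, 3);
""")

# `ub.box_id IS NULL` evaluates to 0 for linked boxes and 1 for unlinked ones,
# so ascending order on it puts user-linked boxes first.
rows = conn.execute("""
    SELECT b.id, b.name
    FROM box b LEFT JOIN user_box ub ON ub.box_id = b.id
    ORDER BY ub.box_id IS NULL, b.id DESC
""").fetchall()
order = [r[0] for r in rows]
```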
I have set up a number of different models in my ruby on rails blog project. Currently the different model types are articles, quick tips, quotes and definitions. I've created a page where I would like to display all of these in a single flow ordered by creation time.
**The question**: What is a smart/efficient way to retrieve a specific number of items, chosen among all of the content types, ordered by creation date.
Hopefully illustrative pseudo code:
```
Db.load([:Tip, :Quote, :Article, :Definition]).order_by(:created_at).limit(10)
```
The best solution I've come up with myself so far is to load 10 items of every category, sort them and pick the latest 10 of the sorted list. :P | So the way to do what I want is to make a superclass, say Post, and have all other content types inherit from this Model. This will create a Single Table Inheritance scheme where all of the subclasses attributes are kept in the "posts" table together with a "type" attribute which facillitates differentiating between the content types. I should still be able to use all content types normally, but also get a Posts model where I can query and sort all content types at the same time.
[More information about the technique can be found here](http://api.rubyonrails.org/classes/ActiveRecord/Base.html "Single Table Inheritance") | Just combine them.
```
(Tip.all + Quote.all + Article.all + Definition.all).sort_by(&:created_at).take(10)
```
Or
```
tips = Tip.limit(10)
quotes = Quote.limit(10)
(tips + quotes).sort_by(&:created_at).take(10)
```
Get the idea?
[Possibly useful similar question](https://stackoverflow.com/questions/7498525/merging-two-arrays-of-different-objects-in-rails) | Retrieving many different kinds of models from the database at the same time | [
"",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
"ruby-on-rails-4",
""
] |
**ERROR MESSAGE: #1215 - Cannot add foreign key constraint**
Hello, I can't create my table's foreign keys... how do I do this? I have two referenced tables:
the first table has a composite primary key of two columns, and the second table has a single-column primary key...
the first table is
`courseid varchar(5)`
and `courseyear int`
and the second is `subj_id varchar(5)`
```
create table studentload(
student_id varchar(11) not null,
courseid varchar(5) not null,
courseyear int not null,
subj_id varchar(5) not null,
foreign key (courseid,courseyear) references course(courseid,courseyear),
foreign key (subj_id) references subject(subj_id)
)
```
**EDIT**
this is all table that I inserted already
```
CREATE TABLE IF NOT EXISTS `course` (
`courseid` varchar(5) NOT NULL,
`courseyear` int(11) NOT NULL,
`coursedesc` varchar(50),
`subj_id` varchar(5) NOT NULL,
PRIMARY KEY (`courseid`,`courseyear`)
)
CREATE TABLE IF NOT EXISTS `subject` (
`subj_id` varchar(5) NOT NULL,
`subj_name` varchar(50) NOT NULL,
`courseid` varchar(5),
`courseyear` int(11),
foreign key (`courseid`,`courseyear`) references `courseid` (`courseid`,`courseyear`)
)
``` | Here it is sample [SQLFiddle](http://sqlfiddle.com/#!2/e850e)
You have mandatory to add `primary key (subj_id)` in your `studentload` table
In your foriegn relationship courseid is not tablename. It should be
```
"references course(courseid,courseyear)"
```
Like
```
CREATE TABLE IF NOT EXISTS `course` (
`courseid` varchar(5) NOT NULL,
`courseyear` int(11) NOT NULL,
`coursedesc` varchar(50),
`subj_id` varchar(5) NOT NULL,
PRIMARY KEY (`courseid`,`courseyear`)
);
CREATE TABLE IF NOT EXISTS `subject` (
`subj_id` varchar(5) NOT NULL,
`subj_name` varchar(50) NOT NULL,
`courseid` varchar(5),
`courseyear` int(11),
foreign key (`courseid`,`courseyear`) references `course` (`courseid`,`courseyear`),
primary key (`subj_id`)
);
create table studentload(
student_id varchar(11) not null,
courseid varchar(5) not null,
courseyear int not null,
subj_id varchar(5) not null,
foreign key (courseid,courseyear) references course(courseid,courseyear),
foreign key (subj_id) references subject(subj_id)
);
``` | You have used "references `courseid`(`courseid`,`courseyear`)". `courseid` is not your table name. It should be "references `course`(`courseid`,`courseyear`)" | cannot add foreign key constraint | [
"",
"mysql",
"sql",
""
] |
I have a large table with 100,000,000 rows. I'd like to select every n'th row from the table. My first instinct is to use something like this:
```
SELECT id,name FROM table WHERE id%125000=0
```
to retrieve an even spread of 800 rows (id is a clustered index).
This technique works fine on smaller data sets, but with my larger table the query takes 2.5 minutes. I assume this is because the modulus operation is applied to every row. Is there a more optimal method of row skipping?
```
with ids as (
select 1 as id
union all
select id + 125000
from ids
where id <= 100000000
)
select ids.id,
(select name from table t where t.id = ids.id) as name
from ids
option (MAXRECURSION 1000);
```
I think this formulation will use the index on table.
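The recursive-CTE shape above can be illustrated at a smaller scale in SQLite, driven from Python. This is only a sketch: SQLite needs the `WITH RECURSIVE` keyword and has no `MAXRECURSION` option, and the 50-row table and step of 5 are invented stand-ins for the question's 100,000,000 rows and step of 125,000.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 51)])

# Generate every 5th id, then look each one up by its clustered key,
# instead of scanning all rows and applying a modulus to each.
rows = conn.execute("""
    WITH RECURSIVE ids(id) AS (
        SELECT 5
        UNION ALL
        SELECT id + 5 FROM ids WHERE id + 5 <= 50
    )
    SELECT ids.id, (SELECT name FROM t WHERE t.id = ids.id) AS name
    FROM ids
""").fetchall()
sampled = [r[0] for r in rows]
```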
EDIT:
As I think about this approach, you can actually use it to get actual random ids in the table, rather than just evenly spaced ones:
```
with ids as (
select 1 as cnt,
ABS(CONVERT(BIGINT,CONVERT(BINARY(8), NEWID()))) % 100000000 as id
union all
select cnt + 1, ABS(CONVERT(BIGINT,CONVERT(BINARY(8), NEWID()))) % 100000000
from ids
where cnt < 800
)
select ids.id,
(select name from table t where t.id = ids.id) as name
from ids
option (MAXRECURSION 1000);
```
The code for the actual random number generator came from [here](http://www.blackwasp.co.uk/SQLSelectRandom.aspx).
EDIT:
Due to quirks in SQL Server, you can still get non-contiguous ids, even in your scenario. This accepted [answer](https://stackoverflow.com/questions/14146148/identity-increment-is-jumping-in-sql-server-database) explains the cause. In short, identity values are not allocated one at a time, but rather in groups. The server can fail and even unused values get skipped.
One reason I wanted to do the random sampling was to help avoid this problem. Presumably, the above situation is rather rare on most systems. You can use the random sampling to generate say 900 ids. From these, you should be able to find 800 that are actually available for your sample. | The time is not going into the modulus operation itself, but rather into just reading 124,999 unnecessary rows for every row that you actually want (i.e., the Table Scan or Clustered Index Scan).
Just about the only way to speed up a query like this is something that seems at first illogical: Add an extra non-Clustered index on just that column ([ID]). Additionally, you may have to add an Index Hint to force it to use that index. And finally, it may not actually make it faster, though for a modulus of 125,000+, it should be (though it'll never be truly fast).
---
If your IDs are not necessarily contiguous (any deleted rows will pretty much cause this) and you really do need exactly *every modulo rows, by ID order*, then you can still use the approach above, but you will have to resequence the IDs for the Modulo operation using `ROW_NUMBER() OVER(ORDER BY ID)` in the query. | SQL server modulus operator to skip to every n'th row on a large table | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm working in Access 2010. I have the following query (which is named bird\_year\_species):
```
SELECT sub.Species, Min(sub.obs_year) AS First_sighting_year
FROM (SELECT DISTINCT [Genus_BiLE] & " " & [Species_BiLE] AS Species, [Year_BiLE] AS obs_year
FROM BiLE_Bound
UNION ALL
SELECT DISTINCT [Genus_BiMN] & " " & [Species_BiMN] AS Species, [Year_BiMN] AS obs_year
FROM BiMN_Bound
UNION ALL
SELECT DISTINCT [Genus_BiPC] & " " & [Species_BiPC] AS Species, [Year_BiPC] AS obs_year
FROM BiPC_Bound
UNION ALL
SELECT Distinct [Genus_BiOP] & " " & [Species_BiOP] AS Species, [Year_BiOP] AS obs_year
FROM BiOP_Rec
) AS sub
GROUP BY sub.Species;
```
When I open it I get a popup asking for a parameter value for Query1.obs\_year. If I just fill in anything and hit okay the table pops up and it works. I've no idea why this is happening, and the query is not named Query1. | I tried copying the code into a new query. I tried compacting an repairing. I tried saving the database under another name. Non of which worked.
Eventually I opened the query, switched out of SQL view and into design view, back to SQL view and voila, that appeared to do it. So strange. | When this happens, check your properties, specifically look for references to Queryx in the filter and order by fields. | Why is this query asking for a parameter value? | [
"",
"sql",
"parameters",
"ms-access-2010",
""
] |
If I use an update statement in an update trigger, does that update statement in update trigger causes fire trigger? | Make sure your database property for `RECURSIVE_TRIGGERS` is set to off (which is the default anyway) so that it cannot be fired recursively.
<http://technet.microsoft.com/en-us/library/ms190946.aspx> | By default, a trigger doesn't fire itself. | Does an update statement in update trigger, fire trigger again? | [
"",
"sql",
"sql-server",
"triggers",
""
] |
I am attempting to write an SQLite Query that orders a table `mytable` firstly by a column called `status_id` and secondly by a column called `name`, but with an extra constraint using:
```
SELECT name,
status_id
FROM mytable
ORDER BY status_id, name
```
produces a table that is correctly sorted.
e.g (name, status): ("b", 1), ("a", 2), ("c", 2), ("b", 3), ("a", 4)...
However I require the same sort of output, but with any result having `status_id` equal to 2, being at the top of the results.
e.g. ("a", 2), ("c", 2), ("b", 1), ("b", 3), ("a", 4)...
---
I came up with this, but it doesn't function as required as the list isn't also subsorted by name:
```
SELECT name,
status_id
FROM mytable
ORDER BY CASE status_id
WHEN 2 THEN 0 ELSE 1 END DESC
``` | You still need to sort by status\_id and name, AFTER you sort by the CASE statement.
```
SELECT name
, status_id
FROM mytable
ORDER BY
CASE WHEN status_id = 2 THEN 1 ELSE 0 END DESC
,status_id
,name
``` | Just sort the rest by `status_id` and add `name` yo your `ORDER BY`:
```
SELECT name, status_id
FROM mytable
ORDER BY CASE status_id
WHEN 2 THEN -1
ELSE status_id
END,
name
``` | Sort SQL based on two fields and extra constraint | [
"",
"sql",
"sqlite",
""
] |
I am creating a SQL query in which I need a conditional `where` clause.
It should be something like this:
```
SELECT
DateAppr,
TimeAppr,
TAT,
LaserLTR,
Permit,
LtrPrinter,
JobName,
JobNumber,
JobDesc,
ActQty,
(ActQty-LtrPrinted) AS L,
(ActQty-QtyInserted) AS M,
((ActQty-LtrPrinted)-(ActQty-QtyInserted)) AS N
FROM
[test].[dbo].[MM]
WHERE
DateDropped = 0
--This is where i need the conditional clause
AND CASE
WHEN @JobsOnHold = 1 THEN DateAppr >= 0
ELSE DateAppr != 0
END
```
The above query is not working. Is this not the correct syntax or is there another way to do this that I don't know?
I don't want to use dynamic SQL, so is there any other way or do I have to use a workaround like using `if else` and using the same query with different `where` clauses? | Try this
```
SELECT
DateAppr,
TimeAppr,
TAT,
LaserLTR,
Permit,
LtrPrinter,
JobName,
JobNumber,
JobDesc,
ActQty,
(ActQty-LtrPrinted) AS L,
(ActQty-QtyInserted) AS M,
((ActQty-LtrPrinted)-(ActQty-QtyInserted)) AS N
FROM
[test].[dbo].[MM]
WHERE
DateDropped = 0
AND (
(ISNULL(@JobsOnHold, 0) = 1 AND DateAppr >= 0)
OR
(ISNULL(@JobsOnHold, 0) != 1 AND DateAppr != 0)
)
```
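The OR-of-two-branches pattern can be exercised in miniature, e.g. in SQLite from Python, with `IFNULL` standing in for `ISNULL` and invented sample rows (illustrative only, not the asker's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mm (JobName TEXT, DateDropped INT, DateAppr INT)")
conn.executemany("INSERT INTO mm VALUES (?, ?, ?)",
                 [("held", 0, 0), ("approved", 0, 5), ("dropped", 1, 5)])

def jobs(jobs_on_hold):
    # One branch fires when the flag is 1, the other when it is not (or NULL).
    return [r[0] for r in conn.execute("""
        SELECT JobName FROM mm
        WHERE DateDropped = 0
          AND ( (IFNULL(?, 0) = 1  AND DateAppr >= 0)
             OR (IFNULL(?, 0) != 1 AND DateAppr != 0) )
        ORDER BY JobName
    """, (jobs_on_hold, jobs_on_hold)).fetchall()]
```

With the flag set, on-hold jobs (`DateAppr = 0`) are included; with it unset or NULL, they are filtered out.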
[You can read more about conditional WHERE here.](https://csharpcornerplus.wordpress.com/2014/09/24/conditional-where-clause-in-sql-server/) | Try this one -
```
WHERE DateDropped = 0
AND (
(ISNULL(@JobsOnHold, 0) = 1 AND DateAppr >= 0)
OR
(ISNULL(@JobsOnHold, 0) != 1 AND DateAppr != 0)
)
``` | Conditional WHERE clause in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"where-clause",
""
] |
I have three tables (I didn't create the database). They are essentially the same but with slightly different data; they have Company_Id and Category_Id. I also want to link the Category Name to this result, so I have to join in Category.
Before the join this worked
```
Select Distinct CompanyId, Category_ID From Product1 where Product1.CompanyId = 10
union
Select Distinct Distinct CompanyId, Category_ID From Product2 where Product2.CompanyId = 10
union
Select Distinct Distinct CompanyId, Category_ID From Product3 where Product3.CompanyId = 10
```
This got me a distinct list of Products and categories based on the product ID.
I now need to add in the category name from the Category table. I tried:
```
Select Distinct CompanyId, Category_ID From Product1 where Product1.CompanyId = 10
inner join Category on Category.Id=Product1.Category_ID
union
Select Distinct Distinct CompanyId, Category_ID From Product2 where Product2.CompanyId = 10
inner join Category on Category.Id=Product2.Category_ID
union
Select Distinct Distinct CompanyId, Category_ID From Product3 where Product3.CompanyId = 10
inner join Category on Category.Id=Product3.Category_ID
```
but that returns
The multi-part identifier "Company.CATEGORY\_ID" could not be bound.
Any way to display just the distinct items with the category names from all 3 tables?
(so if product1 and product2 both had a product with Company\_ID = 10, and had two categories, it would only show twice) | The `where` goes after the `from` clause. The `join` keyword is part of `from`, so it needs to go before `where`:
```
Select CompanyId, Category_ID
From Product1 inner join
     Category
     on Category.Id = Product1.Category_ID
where Product1.CompanyId = 10
union
Select CompanyId, Category_ID
From Product2 inner join
     Category
     on Category.Id = Product2.Category_ID
where Product2.CompanyId = 10
union
Select CompanyId, Category_ID
From Product3 inner join
     Category
     on Category.Id = Product3.Category_ID
where Product3.CompanyId = 10;
```
Also, `distinct distinct` might be returning an error. You don't need the `distinct` keyword, because you are using `union` rather than `union all`. | ```
Select distinct CompanyId, Category_ID From Product1
inner join Category on Category.Id=Product1.Category_ID where Product1.CompanyId = 10
union
Select distinct CompanyId, Category_ID From Product2
inner join Category on Category.Id=Product2.Category_ID where Product2.CompanyId = 10
union
Select distinct CompanyId, Category_ID From Product3
inner join Category on Category.Id=Product3.Category_ID where Product3.CompanyId = 10
```
this was not tested, but if there is no typo, it should solve your problem. | Join Multiple tables (basically the same table) with Joins | [
"",
"sql",
"sql-server",
"t-sql",
"join",
""
] |
I have a single value of 9/1/2013 for a mailing date. I also have 50 primary-key IDs whose rows need to be updated with this single mail date of 9/1/2013.
```
UPDATE myTable
SET MailingDate = CONVERT(DATETIME, '2013-09-01 00:00:00', 102)
WHERE (tblID = 1) OR
(tblID = 2) OR
(tblID = 3) OR
(tblID = 4) OR ETC...
```
I have seen some questions about table-to-table updates, so I suppose the next question would be: am I going to be forced to create a tempTable with an ID column and a MailDate column and inner join it with the actual table I want to update?
Is there a way to do this update that doesn't require making my WHERE statement so huge, or creating a tempTable? Any help would be greatly appreciated.
Edit:
Just adding this, so people have a better understanding why I choose the answer I did.
I get the primary keys from rows the user selects on an Access form datasheet, so the update could touch anywhere from 1 row to 500 rows (I doubt it gets that high, but if it does, it looks like I am stuck creating the temp table). Therefore, I will have a VBA string collection of all those IDs, which it looks like I will need to loop through to build the `IN` clause of the SQL statement I send to the SQL server.
```
UPDATE myTable
SET MailingDate = CONVERT(DATETIME, '2013-09-01 00:00:00', 102)
WHERE tblID IN ( 1, 2,3,...)
```
If you are picking your IDs from the result of another query, you can even replace the entire list with the subquery inside `IN`.
Or, if your IDs are sequential, you can also do
```
UPDATE myTable
SET MailingDate = CONVERT(DATETIME, '2013-09-01 00:00:00', 102)
WHERE tblID >= 1 AND tblID <= 50
``` | You can use something like this:
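Since the IDs come from rows the user selects at runtime, the `IN` list generally has to be assembled dynamically. A safe approach is one placeholder per ID; the sketch below uses SQLite via Python, but the same idea applies when building the statement from a VBA collection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (tblID INTEGER PRIMARY KEY, MailingDate TEXT)")
conn.executemany("INSERT INTO myTable (tblID) VALUES (?)", [(i,) for i in range(1, 6)])

selected_ids = [1, 3, 4]  # hypothetical IDs picked on the form
# One "?" per ID keeps the values parameterized instead of spliced into the SQL.
placeholders = ", ".join("?" for _ in selected_ids)
sql = f"UPDATE myTable SET MailingDate = '2013-09-01' WHERE tblID IN ({placeholders})"
conn.execute(sql, selected_ids)

updated = [r[0] for r in conn.execute(
    "SELECT tblID FROM myTable WHERE MailingDate IS NOT NULL ORDER BY tblID")]
assert updated == [1, 3, 4]
```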
```
UPDATE myTable
SET MailingDate = CONVERT(DATETIME, '2013-09-01 00:00:00', 102)
WHERE tblID IN
(Select tblID from myTable WHERE <some condition>)
```
the "some condition" is how you could list which ID's should be changed. | Update multiple rows using single value | [
"",
"mysql",
"sql",
"t-sql",
"sql-update",
""
] |
I have a query joining 2 tables, as below:
```
SELECT * FROM Staff s INNER JOIN Account a on s.AccNo = a.AccNo WHERE a.Status = 'Active'
```
The result as shown:
```
AccNo | Name | ID
------------------
1 | Alex | S01
2 | John | S02
```
After I get the staff IDs, I write a second query to find the max sale, as below:
```
SELECT s.ProductID,Max(s.Amount) from Sales s WHERE StaffID = 'S01' GROUP BY s.ProductID
```
The max sale for staff 'S01' is shown below:
```
ProductID | Amount
------------------
Cloth | 2000
```
How do I combine these 2 queries to produce the result below? Thanks
```
AccNo | Name | ID | Amount
--------------------------
1 | Alex | S01 | 2000
2 | John | S02 | 5000
``` | You can create a subquery and join it:
```
SELECT a.AccNo, b.Name, b.ID, c.maximum
FROM transaction as a
INNER JOIN Account as b
ON a.AccNo = b.AccNo
LEFT JOIN (SELECT StaffID, Max(Amount) as maximum FROM Sales GROUP BY StaffID) as c
ON c.StaffID = b.ID
WHERE b.Status = 'Active'
```
See the [SQLFiddle example](http://sqlfiddle.com/#!2/e2a91/7/0) (I've tried to guess the schema) | I think this should work
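A self-contained check of the derived-table join (SQLite via Python; the schema below is guessed the same way the answer guesses it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Account (AccNo INTEGER, Name TEXT, ID TEXT, Status TEXT);
CREATE TABLE Sales (StaffID TEXT, ProductID TEXT, Amount INTEGER);
INSERT INTO Account VALUES (1, 'Alex', 'S01', 'Active'), (2, 'John', 'S02', 'Active');
INSERT INTO Sales VALUES ('S01', 'Cloth', 2000), ('S01', 'Shoes', 1500),
                         ('S02', 'Cloth', 5000);
""")

# The aggregated subquery collapses Sales to one MAX per staff member,
# so the outer join still produces exactly one row per account.
rows = conn.execute("""
    SELECT a.AccNo, a.Name, a.ID, m.maximum
    FROM Account a
    LEFT JOIN (SELECT StaffID, MAX(Amount) AS maximum
               FROM Sales GROUP BY StaffID) m
      ON m.StaffID = a.ID
    WHERE a.Status = 'Active'
    ORDER BY a.AccNo
""").fetchall()

assert rows == [(1, 'Alex', 'S01', 2000), (2, 'John', 'S02', 5000)]
```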
Just do a join and retrieve the max amount associated with each staff
```
SELECT t.AccNo, t.Name, t.ID, s.ProductID, MAX(s.Amount)
FROM Transaction t
INNER JOIN Account a ON t.AccNo = a.AccNo
INNER JOIN Sales s ON s.StaffID = a.ID
WHERE a.Status = 'Active'
GROUP BY t.AccNo, t.Name, t.ID, s.ProductID;
```
Thanks | Pass value from query 1 to query 2 | [
"",
"mysql",
"sql",
"sql-server",
""
] |
In this procedure, the user passes one project ID. I'd like to be able to pass an indeterminate number of project IDs so I can change the where clauses from "ProjectID=@projectID" (for example) to "ProjectID IN (@ProjectID)".
That way, I can delete a dozen projects with one call to the DB, rather than calling the DB repeatedly.
Is there a good strategy for doing that? I'm calling the SP from Access....
```
create procedure dbo.cpas_DeleteProject
@ProjectID INt = 0,
@errorFlag int OUTPUT
AS
set @errorFlag=0
BEGIN TRY
BEGIN TRANSACTION
DELETE FROM tblWOTasks WHERE tblWOTasks.WorkOrderID IN (SELECT ID FROM tblWorkOrders WHERE ProjectID=@ProjectID)
DELETE FROM tblELaborSpread WHERE tblELaborSpread.WorkOrderID IN (SELECT ID FROM tblWorkOrders WHERE ProjectID=@ProjectID)
DELETE FROM tblWorkOrders WHERE tblWorkOrders.ProjectID IN (SELECT ID FROM tblWorkOrders WHERE ProjectID=@ProjectID)
DELETE FROM tblCPTransactiON WHERE tblCPTransactiON.CPProjectID=@ProjectID
DELETE FROM tblCPJE WHERE tblcpje.jeid IN
(SELECT tblcpje.JEID FROM tblCPJE left joIN tblCPTransactiON as CR ON CR.CPTransID = tblCPJE.JECreditID
left joIN tblCPTransactiON as DR ON DR.CPTransID = tblCPJE.JEDebitID
WHERE DR.CPTransID is null AND cr.CPTransID is null)
DELETE FROM tblProjectTasks WHERE tblProjectTasks.ProjectID=@ProjectID
DELETE FROM xrefProjectMICAP WHERE xrefProjectMICAP.ProjectID=@ProjectID
DELETE FROM tblworkorders WHERE tblWorkOrders.ProjectID=@ProjectID
DELETE FROM tblprojects WHERE tblProjects.ID=@ProjectID
--Project Comments cascade delete....
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
set @errorFlag=1
END CATCH
``` | The only way I have been able to pass multible values to a stored procedure is to concat them to a string and pass the string to the SP and then parse the string in the beginning of the SP, something like this (the projetc\_id's are delimeted with ,
```
CREATE PROCEDURE [dbo].[cpas_DeleteProject]
@ProjectID varchar(3000),
@errorFlag int OUTPUT
AS
--Parse function
DECLARE @tblstring Table(ProjectID int) --table variable to store all parsed ProjectID's
DECLARE @project varchar(10)
DECLARE @StartPos int,
        @Length int, --string length
        @Delimeter varchar(1) = ',' --delimiter
WHILE LEN(@ProjectID) > 0
BEGIN
SET @StartPos = CHARINDEX(@Delimeter, @ProjectID)
IF @StartPos < 0 SET @StartPos = 0
SET @Length = LEN(@ProjectID) - @StartPos - 1
IF @Length < 0 SET @Length = 0
IF @StartPos > 0
BEGIN
SET @Project = SUBSTRING(@ProjectID, 1, @StartPos - 1)
SET @ProjectID = SUBSTRING(@ProjectID, @StartPos + 1, LEN(@ProjectID) - @StartPos)
END
ELSE
BEGIN
SET @Project = @ProjectID
SET @ProjectID = ''
END
INSERT @tblstring (ProjectID) VALUES(@Project)
END
set @errorFlag=0
BEGIN TRY
BEGIN TRANSACTION
DELETE FROM tblWOTasks WHERE tblWOTasks.WorkOrderID IN (SELECT ID FROM tblWorkOrders WHERE ProjectID in (SELECT ProjectID from @tblstring))
DELETE FROM tblELaborSpread WHERE tblELaborSpread.WorkOrderID IN (SELECT ID FROM tblWorkOrders WHERE ProjectID in (SELECT ProjectID from @tblstring))
DELETE FROM tblWorkOrders WHERE tblWorkOrders.ProjectID IN (SELECT ID FROM tblWorkOrders WHERE ProjectID in (SELECT ProjectID from @tblstring))
DELETE FROM tblCPTransactiON WHERE tblCPTransactiON.CPProjectID in (SELECT ProjectID from @tblstring)
DELETE FROM tblCPJE WHERE tblcpje.jeid IN
(SELECT tblcpje.JEID FROM tblCPJE left joIN tblCPTransactiON as CR ON CR.CPTransID = tblCPJE.JECreditID
left joIN tblCPTransactiON as DR ON DR.CPTransID = tblCPJE.JEDebitID
WHERE DR.CPTransID is null AND cr.CPTransID is null)
DELETE FROM tblProjectTasks WHERE tblProjectTasks.ProjectID in (SELECT ProjectID from @tblstring)
DELETE FROM xrefProjectMICAP WHERE xrefProjectMICAP.ProjectID in (SELECT ProjectID from @tblstring)
DELETE FROM tblworkorders WHERE tblWorkOrders.ProjectID in (SELECT ProjectID from @tblstring)
DELETE FROM tblprojects WHERE tblProjects.ID in (SELECT ProjectID from @tblstring)
--Project Comments cascade delete....
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
set @errorFlag=1
END CATCH
``` | Although it's possible to use table valued parameters in SQL Server, there's no way easy way of populating those parameters from Access, as far as I know.
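Here is the split-then-join idea in miniature (SQLite via Python; note that on SQL Server 2016 and later the built-in STRING_SPLIT function can replace the hand-rolled WHILE loop):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblProjects (ID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO tblProjects VALUES (?)", [(i,) for i in (1, 2, 3, 4, 5)])

project_id_list = "2,4,5"  # the single varchar parameter passed from Access

# Parse the delimited string into a temp table, then delete via IN, as in the SP.
conn.execute("CREATE TEMP TABLE ids (ProjectID INTEGER)")
conn.executemany("INSERT INTO ids VALUES (?)",
                 [(int(p),) for p in project_id_list.split(",")])
conn.execute("DELETE FROM tblProjects WHERE ID IN (SELECT ProjectID FROM ids)")

remaining = [r[0] for r in conn.execute("SELECT ID FROM tblProjects ORDER BY ID")]
assert remaining == [1, 3]
```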
A solution could be to use dynamic SQL inside your Stored Procedure, and then having the project ID's in a varchar(max) that is created inside your Access application, for example:
```
@projectIdList = '200,300,400'
```
Then in your dynamic SQL you would just include it as
```
'WHERE tblProjects.ID IN (' + @projectIdList + ')'
```
This is obviously vulnerable to SQL inject attacks, but depending on how your Access application is used, this might be acceptable.
Another, more safe solution, is to allow Access to write the projectId's as rows in a small table before calling the stored procedure, and then inside the stored procedure, do the deletes like:
```
DELETE FROM tblprojects
WHERE tblProjects.ID IN (SELECT ProjectId FROM ProjectIdsToBeDeleted)
```
This is a lot safer and cleaner, since you avoid the dynamic SQL. However, you need some way to handle concurrency - perhaps by generating a unique identifier when populating the ProjectIdsToBeDeleted table, and then filtering on that same identifier when executing the stored procedure. | Parameter to populate an "IN" clause | [
"",
"sql",
"t-sql",
"stored-procedures",
"ms-access-2007",
""
] |
I've been looking around for a while, and it seems this can't be found anywhere. I want to know: how do you select the 100 highest and the 100 lowest values in a column? The MIN function only returns the single lowest value and MAX the single highest.
Anyone out there who knows how you do this? | SQL Server
* Top 100 Highest
```
SELECT TOP 100 * FROM MyTable
ORDER BY MyCol DESC
```
* Top 100 Lowest
```
SELECT TOP 100 * FROM MyTable
ORDER BY MyCol ASC
```
MySQL
* Top 100 Highest
```
SELECT * FROM MyTable
ORDER BY MyCol DESC LIMIT 100
```
* Top 100 Lowest
```
SELECT * FROM MyTable
ORDER BY MyCol ASC LIMIT 100
``` | You can do it as below,
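If both extremes are needed in a single result set, the two ordered queries can be combined with UNION, as long as each LIMIT is applied inside its own subquery first (SQLite via Python sketch, using LIMIT 3 instead of 100 to keep it small):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (MyCol INTEGER)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [(i,) for i in range(1, 11)])

# Each ordered/limited SELECT is wrapped so its LIMIT applies before the UNION.
rows = conn.execute("""
    SELECT * FROM (SELECT MyCol FROM MyTable ORDER BY MyCol DESC LIMIT 3)
    UNION
    SELECT * FROM (SELECT MyCol FROM MyTable ORDER BY MyCol ASC LIMIT 3)
""").fetchall()

assert sorted(r[0] for r in rows) == [1, 2, 3, 8, 9, 10]
```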
**Highest**
```
select * from
tablename
order by
column DESC
limit 0,100
```
**Lowest**
```
select * from
tablename
order by
column ASC
limit 0,100
```
**EDIT**
For SQL Server replace `select * from` with `select TOP 100 * from`
The `SELECT TOP` clause is used to specify the number of records to return. | Select the 100 lowest values in SQL? | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |