Prompt (string, lengths 10–31k) | Chosen (string, lengths 3–29.4k) | Rejected (string, lengths 3–51.1k) | Title (string, lengths 9–150) | Tags (list, lengths 3–7) |
|---|---|---|---|---|
I have an SQL statement that should select all the content of a table ordered by row number, so I used the row\_number function.
But every time I get the error
```
Exception in thread "main" java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
```
or this error
```
ORA-00936: missing expression
```
My statement looks like this:
```
SELECT row_number() OVER(ORDER BY table.column1) AS row, table.*
FROM table
WHERE column2= ("Any String")
ORDER BY row;
```
I hope you can help me with this code. | If you're trying to use the name `row` for that result column, you'll have issues since it's a [Reserved Word](http://docs.oracle.com/cd/B10501_01/appdev.920/a42525/apb.htm) in Oracle. Either pick a different name for the column, or surround it with `""` quotes (a [quoted identifier](http://docs.oracle.com/cd/B10500_01/server.920/a96540/sql_elements9a.htm)):
```
SELECT row_number() OVER(ORDER BY table.column1) AS "row", table.*
FROM table
WHERE column2= ("Any String")
ORDER BY "row";
```
(I'd generally opt for picking a different name, though.) | I'm not an expert, but maybe you have to select the columns you want to work with; try something like this:
```
SELECT name_column1, name_column2, ROW_NUMBER()
OVER( ORDER BY name_column1) AS THE_NAME_THAT_YOU_WANT
FROM NAME_OF_TABLE
WHERE name_column2 ="any_string";
```
Take a look at the origin of the error:
ORA-00923 stems from the requirement that the keyword FROM follow the final selected item in a SELECT statement (or the privileges in a REVOKE statement). ORA-00923 is thrown when, in a SELECT or REVOKE statement, the keyword FROM is:
1. missing
2. misspelled
3. misplaced
There are many areas of interest when resolving ORA-00923:
You should appropriately correct the syntax, inserting the keyword FROM where necessary.
Keep in mind that the SELECT list itself could be in error, which could also cause ORA-00923 to appear.
Quotation marks should also be evaluated when correcting ORA-00923, as they should enclose the alias (if used within the alias).
Also, find out if you used a reserved word as an alias. | row_number() FROM keyword not found where expected | [
"",
"sql",
"select",
""
] |
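The reserved-word failure in the row above can be reproduced outside Oracle. The sketch below uses Python's bundled `sqlite3` as a stand-in (an assumption: the reserved-word list differs per engine, and `order` happens to be reserved in SQLite much as `row` is in Oracle); it shows that an unquoted reserved alias is a syntax error while the double-quoted identifier is accepted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# An unquoted reserved word used as a column alias is a syntax error.
try:
    conn.execute("SELECT 1 AS order")
    unquoted_ok = True
except sqlite3.OperationalError:
    unquoted_ok = False

# Double quotes turn the same word into a quoted identifier, which is legal.
quoted_row = conn.execute('SELECT 1 AS "order"').fetchone()
```

As the chosen answer notes, picking a non-reserved alias in the first place avoids the quoting altogether.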
I am using the following script to add a column to my table. When I hit `F5` in the query window it gives an error on the 2nd update query that `fkRefTabNo` doesn't exist.
```
ALTER TABLE EPFieldSQLScripts
ADD fkRefTabNo int DEFAULT 1
update EPFieldSQLScripts
set fkRefTabNo = 1
where fkRefTabNo is null
ALTER TABLE EPFieldSQLScripts
ALTER COLUMN fkRefTabNo INTEGER NOT NULL
```
But when I run these queries one by one, it doesn't give an error. Can anyone tell me what is wrong with this script? | Put `GO` between statements. This way the result of each statement will be committed before running the next one (each statement will be a separate transaction).
```
ALTER TABLE EPFieldSQLScripts
ADD fkRefTabNo int DEFAULT 1
GO
update EPFieldSQLScripts
set fkRefTabNo = 1
where fkRefTabNo is null
GO
ALTER TABLE EPFieldSQLScripts
ALTER COLUMN fkRefTabNo INTEGER NOT NULL
``` | As others have answered, adding `GO` is the solution. It's not the *reason*, though.
SQL Server is trying to run your entire script as a single transaction, and then only committing the data to the database when the script completes. So at the point at which
```
update EPFieldSQLScripts
set fkRefTabNo = 1
where fkRefTabNo is null
```
is run (and queued ready for commit), the field `fkRefTabNo` has not actually been added to the database table (which only happens on commit). Adding the `GO` statements - or running each statement individually - commits the transactions to the database before continuing with the next statement, which is why you're seeing a difference in behaviour. | SQL script doesn't work but individual queries work | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
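One detail worth adding to the row above: `GO` is not T-SQL at all, but a batch separator interpreted by client tools (SSMS, `sqlcmd`), which send each batch to the server separately. As a rough illustration of that client-side behaviour, here is a hedged Python sketch that splits a script on `GO` lines the way such a tool might (real tools also handle `GO n` repeat counts and comments, which this ignores):

```python
import re

script = """\
ALTER TABLE EPFieldSQLScripts
ADD fkRefTabNo int DEFAULT 1
GO
update EPFieldSQLScripts
set fkRefTabNo = 1
where fkRefTabNo is null
GO
ALTER TABLE EPFieldSQLScripts
ALTER COLUMN fkRefTabNo INTEGER NOT NULL
"""

def split_batches(sql):
    # Split on lines that contain only GO (case-insensitive), the batch separator.
    pieces = re.split(r"(?im)^\s*GO\s*$", sql)
    return [p.strip() for p in pieces if p.strip()]

batches = split_batches(script)  # each element would be sent to the server on its own
```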
There is such code(using PublicActivity gem & Squeel)
```
def index
@activities = Activity.limit(20).order { created_at.desc }
@one = @activities.where{trackable_type == 'Post'}.includes(trackable: [:author, :project])
@two = @activities.where{trackable_type == 'Project'}.includes trackable: [:owner]
@activities = @one + @two
end
```
But it creates **8** SQL requests:
```
SELECT "activities".* FROM "activities" WHERE "activities"."trackable_type" = 'Post' ORDER BY "activities"."created_at" DESC LIMIT 20
SELECT "posts".* FROM "posts" WHERE "posts"."id" IN (800, 799, 798, 797, 796, 795, 794, 793, 792, 791, 790, 789, 788, 787, 786, 785, 784, 783, 782, 781)
SELECT "users".* FROM "users" WHERE "users"."id" IN (880, 879, 878, 877, 876, 875, 874, 873, 872, 871, 869, 868, 867, 866, 865, 864, 863, 862, 861, 860)
SELECT "projects".* FROM "projects" WHERE "projects"."id" IN (80, 79)
SELECT "activities".* FROM "activities" WHERE "activities"."trackable_type" = 'Project' ORDER BY "activities"."created_at" DESC LIMIT 20
SELECT "projects".* FROM "projects" WHERE "projects"."id" IN (80, 79, 78, 77, 76, 75, 74, 73, 72, 71, 70, 69, 68, 67, 66, 65, 64, 63, 62, 61)
SELECT "users".* FROM "users" WHERE "users"."id" IN (870, 859, 848, 837, 826, 815, 804, 793, 782, 771, 760, 749, 738, 727, 716, 705, 694, 683, 672, 661)
```
1. activites request are not joined
2. some users (post owner and project owner) are loaded twice
3. some projects are loaded twice
4. @activities is an Array. Rails relation merge methods (except `+`) don't work with the code above.
Any ideas to optimize it? | In a nutshell, you can't optimize any further without using SQL. This is the way Rails does business. It doesn't allow access to join fields outside the AR model where the query is posed. Therefore to get values in other tables, it does a query on each one.
It also doesn't allow `UNION` or fancy `WHERE` conditions that provide other ways of solving the problem.
The good news is that these queries are all efficient ones (given that trackable\_type is indexed). If the size of the results is anything substantial (say a few dozen rows), the I/O time will dominate the slight additional overhead of 7 simple queries versus 1 complex one.
Even using SQL, it will be difficult to get all the join results you want in one query. (It can be done, but the result will be a hash rather than an AR instance. So dependent code will be ugly.) The one-query-per-table is wired pretty deeply into Active Record.
@Mr.Yoshi's solution is a good compromise using minimal SQL except it doesn't let you selectively load either `author` or `project`+`owner` based on the `trackable_type` field.
**Edit**
The above is all correct for Rails 3. For Rails 4 as @CMW says, the `eager_load` method will do the same as `includes` using an outer join instead of separate queries. This is why I love SO! I always learn something. | A non-rails-4, non-squeel solution is:
```
def index
@activities = Activity.limit(20).order("created_at desc")
@one = @activities.where(trackable_type: 'Post') .joins(trackable: [:author, :project]).includes(trackable: [:author, :project])
@two = @activities.where(trackable_type: 'Project').joins(trackable: [:owner]) .includes(trackable: [:owner])
@activities = @one + @two
end
```
The combination of `joins` and `includes` looks odd, but in my testing it works surprisingly well.
This'll reduce it to two queries, though, not to one. And @activities will still be an array. But maybe using this approach with squeel will solve that, too. I don't use squeel and can't test it, unfortunately.
**EDIT:** I totally missed the point of this being about polymorphic associations. The above works to force eager loading, but not on a polymorphic association.
If you want to use what AR offers, it's a bit hacky but you could define read-only associated projects and posts:
```
belongs_to :project, read_only: true, foreign_key: :trackable_id
belongs_to :post, read_only: true, foreign_key: :trackable_id
```
With those the mentioned way of forcing eager loads should work. The `where` conditions are still needed, so those associations are only called on the right activities.
```
def index
@activities = Activity.limit(20).order("created_at desc")
@one = @activities.where(trackable_type: 'Post') .joins(post: [:author, :project]).includes(post: [:author, :project])
@two = @activities.where(trackable_type: 'Project').joins(project: [:owner]) .includes(project: [:owner])
@activities = @one + @two
end
```
It's no clean solution and the associations should be attr\_protected to make sure they aren't set accidentally (that will break polymorphism, I expect), but from my testing it seems to work. | Optimize difficult query (possibly with squeel) | [
"",
"sql",
"ruby-on-rails",
"activerecord",
"rails-activerecord",
"squeel",
""
] |
What I'm trying to do with the statement is show all of the movies released in 1999, 2000, and 2001 that run over three hours (the actual column is in seconds).
The output table is showing the correct years, but it's showing title\_type values other than Movie, and title\_runtime\_hrs values less than three hours (as well as greater than three).
This is my code:
```
select title_name, title_type, title_release_year,
(title_runtime / 3600.0) as title_runtime_hrs
from nf_titles
where title_type = 'Movie' and
title_runtime > 3 and
title_release_year = '1999' or
title_release_year = '2000' or
title_release_year = '2001'
order by title_release_year, title_runtime_hrs desc
``` | Or do it this way
```
select title_name, title_type, title_release_year, (title_runtime / 3600.0) as title_runtime_hrs
from nf_titles
where title_type = 'Movie' and
title_runtime > 3 and
title_release_year in ('1999','2000','2001')
order by title_release_year, title_runtime_hrs desc
You need to wrap your `OR` conditions in parentheses. Otherwise SQL Server will say "OK, here's a year 2000, we've met the criteria."
```
select title_name, title_type, title_release_year, (title_runtime / 3600.0) as title_runtime_hrs
from nf_titles
where title_type = 'Movie' and
title_runtime > 3 and
(title_release_year = '1999' or
title_release_year = '2000' or
title_release_year = '2001' )
order by title_release_year, title_runtime_hrs desc
``` | Mix of AND / OR in WHERE clause leads to unexpected results | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
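The precedence problem explained in the row above can be demonstrated end to end. Below is a minimal sketch using Python's `sqlite3` with invented sample rows (note it also uses `title_runtime > 10800`, i.e. three hours in seconds, since comparing a seconds column to `3` looks like a second bug in the original query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE nf_titles "
    "(title_type TEXT, title_release_year TEXT, title_runtime INTEGER)"
)
conn.executemany(
    "INSERT INTO nf_titles VALUES (?, ?, ?)",
    [
        ("Movie", "1999", 12000),   # 3.33 hours: should match
        ("Series", "2000", 13000),  # not a movie: should not match
        ("Movie", "2001", 5000),    # under three hours: should not match
    ],
)

# AND binds tighter than OR, so the year conditions escape the movie/runtime filter.
unparenthesized = conn.execute(
    "SELECT COUNT(*) FROM nf_titles "
    "WHERE title_type = 'Movie' AND title_runtime > 10800 "
    "AND title_release_year = '1999' "
    "OR title_release_year = '2000' OR title_release_year = '2001'"
).fetchone()[0]

# Parentheses group the OR list, so all three filters apply together.
parenthesized = conn.execute(
    "SELECT COUNT(*) FROM nf_titles "
    "WHERE title_type = 'Movie' AND title_runtime > 10800 "
    "AND (title_release_year = '1999' "
    "OR title_release_year = '2000' OR title_release_year = '2001')"
).fetchone()[0]
```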
So I think I understand the idea of querying the sys.partitions to get a **total** count on a table.
`SELECT SUM(rows) FROM sys.partitions WHERE [object_id] = OBJECT_ID('dbo.myTable')
AND index_id IN (0,1);`
but how would I add conditions to the table I'm counting from?
E.g. `MyTable WHERE communityID = 123`
A generic SQL count(\*) takes around 4-8 seconds to run so clearly isn't the solution. | ```
select count(*) from table where communityID=123
``` | There's no other way than querying the table directly:
```
SELECT COUNT(*) FROM dbo.myTable WHERE UserName LIKE '%C%'
``` | SQL count on large recordsets with conditions | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to select distinct `customer_id` ordered by either `s.no` or `time`, or any other way.
```
s.no customer_id time
1 3 100001
2 2 100002
3 4 100003
4 3 100004
5 2 100005
```
I am using
```
select distinct(customer_id) from table_name order by time DESC
```
and it is giving the answer as `4 2 3`, but I want it to be `2 3 4` | So your problem statement is "You want the list of customer\_id, sorted in descending order by their largest time value", right?
```
SELECT customer_id, MAX(time)
FROM table_name
GROUP BY customer_id
ORDER BY MAX(time) DESC
``` | You have to do a second select like this:
```
SELECT
`number`
FROM
(
SELECT
DISTINCT(number) AS number
FROM
`table_name`
ORDER BY
`time` DESC
) AS `temp`
ORDER BY
`number` ASC
``` | selecting distinct value order by desc | [
"",
"mysql",
"sql",
""
] |
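The GROUP BY/MAX approach from the chosen answer above can be checked against the question's own sample data; a quick sketch using Python's `sqlite3` (`s.no` renamed to `sno`, since `s.no` isn't a legal column name as written):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (sno INTEGER, customer_id INTEGER, time INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 3, 100001), (2, 2, 100002), (3, 4, 100003),
    (4, 3, 100004), (5, 2, 100005),
])

# One row per customer, ordered by each customer's largest time value.
rows = conn.execute(
    "SELECT customer_id FROM t GROUP BY customer_id ORDER BY MAX(time) DESC"
).fetchall()
result = [r[0] for r in rows]
```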
I have two tables in the database: *Artist* and *Filter*. Is there anyway to query columns of *Artist* based on the subquery results of *Filter* in a **single** SQL statement?
Pseudo Code:
`SELECT (SELECT ColumnName FROM Filter WHERE ShowColumn=1) FROM Artist`
* Artist
```
------------------------------------------
| id | FirstName | LastName | Genre |
------------------------------------------
| 1 | John | Coltrane | Jazz |
| 2 | Jimi | Hendrix | Rock |
| 3 | Ulrich | Schnauss | Electronic |
------------------------------------------
```
* Filter
```
--------------------------------
| id | ColumnName | ShowColumn |
--------------------------------
| 1 | FirstName | 1 |
| 2 | LastName | 0 |
| 3 | Genre | 1 |
--------------------------------
```
* Expected Results
```
-------------------------------
| id | FirstName | Genre |
-------------------------------
| 1 | John | Jazz |
| 2 | Jimi | Rock |
| 3 | Ulrich | Electronic |
-------------------------------
```
The best I have come up with is to use multiple `CASE..WHEN`, but it isn't very elegant and there are some extra annoying NULL fields.
```
SELECT
CASE WHEN (SELECT ShowColumn FROM Filter WHERE ColumnName='FirstName') THEN FirstName END,
CASE WHEN (SELECT ShowColumn FROM Filter WHERE ColumnName='LastName') THEN LastName END,
CASE WHEN (SELECT ShowColumn FROM Filter WHERE ColumnName='Genre') THEN Genre END
FROM Artist
``` | This is not possible in SQLite.
You have to read the columns from `Filter` first, and then construct a query with those columns. | If your target is formatted output, i.e., you don't need a dataset, you can compose a string in the first column.
Example for CSV output:
```
SELECT --Header
'id'
||CASE (SELECT ShowColumn FROM Filter WHERE ColumnName='FirstName') WHEN 1 THEN ',FirstName' ELSE '' END
||CASE (SELECT ShowColumn FROM Filter WHERE ColumnName='LastName') WHEN 1 THEN ',LastName' ELSE '' END
||CASE (SELECT ShowColumn FROM Filter WHERE ColumnName='Genre') WHEN 1 THEN ',Genre' ELSE '' END
UNION ALL
SELECT --Data
CAST(id AS TEXT)
||CASE WHEN mf=1 THEN ','||CAST(FirstName AS TEXT) ELSE '' END
||CASE WHEN ml=1 THEN ','||CAST(LastName AS TEXT) ELSE '' END
||CASE WHEN mg=1 THEN ','||CAST(Genre AS TEXT) ELSE '' END
FROM Artist
JOIN (SELECT ShowColumn AS mf FROM Filter WHERE ColumnName='FirstName')
JOIN (SELECT ShowColumn AS ml FROM Filter WHERE ColumnName='LastName')
JOIN (SELECT ShowColumn AS mg FROM Filter WHERE ColumnName='Genre')
;
``` | SELECT columns based on a subquery results with single statement | [
"",
"sql",
"sqlite",
""
] |
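The chosen answer's two-step approach above (read `Filter` first, then build the statement) might look like this in client code; a hedged sketch with Python's `sqlite3`, including a whitelist so the dynamically built column list can't be abused for injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Artist (id INTEGER, FirstName TEXT, LastName TEXT, Genre TEXT);
INSERT INTO Artist VALUES
  (1, 'John', 'Coltrane', 'Jazz'),
  (2, 'Jimi', 'Hendrix', 'Rock'),
  (3, 'Ulrich', 'Schnauss', 'Electronic');
CREATE TABLE Filter (id INTEGER, ColumnName TEXT, ShowColumn INTEGER);
INSERT INTO Filter VALUES
  (1, 'FirstName', 1), (2, 'LastName', 0), (3, 'Genre', 1);
""")

# Step 1: read the enabled column names from Filter.
allowed = {"FirstName", "LastName", "Genre"}  # whitelist: never splice raw table data into SQL
cols = [
    r[0]
    for r in conn.execute(
        "SELECT ColumnName FROM Filter WHERE ShowColumn = 1 ORDER BY id"
    )
    if r[0] in allowed
]

# Step 2: build the SELECT with exactly those columns and run it.
sql = "SELECT id, {} FROM Artist".format(", ".join('"{}"'.format(c) for c in cols))
rows = conn.execute(sql).fetchall()
```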
I'm using SQL Server 2005 and trying to build a query in a way that I can't seem to get working.
I have two tables. One table has a list of ItemTypes and another with a list of mappings between ItemTypes and users.
I want to have a query that will return these results:
```
ID | ItemType | Enabled?
---------------------------------
1 | Type1 | false
2 | Type2 | true
```
The Enabled column is going to be based on whether there is an entry in the other table. I want to use some sort of CASE statement where, if the LEFT JOIN item exists, it puts TRUE, otherwise FALSE.
```
SELECT systype.ID, systype.TypeDescription,
'Enable' = CASE hid.ID WHEN NULL THEN 'true' ELSE 'false' END
FROM [HardwareTestcaseManagement].[dbo].[A7_SystemItemTypes] systype
LEFT JOIN [dbo].[A7_HiddenNewsFeedTypes] hid
ON systype.ID = hid.SystemItemTypesID
AND hid.UserName = @UserName
```
However, this query I've come up with so far ^ is not working quite as I intended.
Any suggestions how I can tweak this query to work how I want it to?
EDIT: When I do a left join and just select the column "hid.ID" it comes back as either NULL or the actual ID. I was hoping I could modify the value based on whether it was null or not. | You can't compare `null` values like other values. Use the `is null` operator in the `case`:
```
'Enable' = CASE WHEN hid.ID IS NULL THEN 'true' ELSE 'false' END
``` | A simple and efficient approach is to use `EXISTS` instead of a join:
```
SELECT systype.ID, systype.TypeDescription,
[Enable] = CASE WHEN EXISTS(
SELECT 1 FROM A7_HiddenNewsFeedTypes ft
WHERE ft.SystemItemTypesID = systype.ID
AND ft.UserName = @UserName
) THEN 'true' ELSE 'false' END
FROM [HardwareTestcaseManagement].[dbo].[A7_SystemItemTypes] systype
``` | SQL LEFT JOIN with a CASE Statement | [
"",
"sql",
"sql-server-2005",
""
] |
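The `CASE x WHEN NULL` pitfall in the row above is engine-independent, since it comes from `NULL = NULL` evaluating to NULL rather than true. A quick check using Python's `sqlite3` as a stand-in for SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CASE x WHEN NULL compares with '=', and NULL = NULL is not true,
# so the WHEN branch can never fire.
broken = conn.execute(
    "SELECT CASE NULL WHEN NULL THEN 'true' ELSE 'false' END"
).fetchone()[0]

# CASE WHEN x IS NULL tests for NULL correctly.
fixed = conn.execute(
    "SELECT CASE WHEN NULL IS NULL THEN 'true' ELSE 'false' END"
).fetchone()[0]
```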
I have to obtain a substring from a string without giving a position, using SQL in PostgreSQL.
I have tried the following, but it displays only the last number, '2'.
The string is P1.B1.12:
```
SELECT SUBSTRING(REGISTRATIONNO from '.(.)*') AS REGISTRATIONNO
FROM SUBSCRIBER;
```
The expected result is `12` - everything after the second `.` | ```
select (regexp_split_to_array(registrationno, '\.'))[3]
from subscriber
```
Note that this assumes that the desired value is *always* at the third position. If this is not the case, the expression will return the wrong value.
Here is an SQLFiddle example: <http://sqlfiddle.com/#!12/d41d8/1924> | Can you try this?
```
SELECT SUBSTRING(REGISTRATIONNO from '\w*.\w*.(\d+)') AS REGISTRATIONNO
FROM SUBSCRIBER;
``` | Obtain substring in SQL | [
"",
"sql",
"regex",
"postgresql",
""
] |
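Neither answer's regex above is easy to exercise without a PostgreSQL instance, but the underlying idea (split on `.` and take the third field, or capture everything after the second dot) can be sketched in plain Python:

```python
import re

registration_no = "P1.B1.12"

# Split on '.' and take the third field (mirrors regexp_split_to_array(...)[3]).
parts = registration_no.split(".")
third = parts[2] if len(parts) > 2 else None

# Or capture everything after the second dot with one anchored regex.
m = re.match(r"^[^.]*\.[^.]*\.(.*)$", registration_no)
captured = m.group(1) if m else None
```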
An application I am working on invokes a SP like this:
```
exec CreateChildRecord @ParentID = 123, @ChildID = 124
```
The SP needs to copy all fields except the ID from the parent record into the child record. The child record may or may not currently exist.
What I need is something like the below:
```
UPDATE [Table] AS [Table1] SET (data1, data2) = (
SELECT [Table2].[data1], [Table2].[data2]
FROM [Table] AS [Table2]
WHERE [Table2.ID] = @ParentID)
WHERE [Table1].[ID] = @ChildID
IF @@ROWCOUNT = 0
INSERT INTO [TABLE] (id, data1, data2)
(SELECT @ChildID, data1, data2
FROM [TABLE]
WHERE id = @ParentID)
```
I've tried various combinations of the above to no effect. Can anyone help? | If you can use the `merge` statement:
```
merge [Table] as T
using (
select @ChildID as ID, data1, data2
from [Table]
where ID = @ParentID
) as P on P.ID = T.ID
when matched then
update set
data1 = P.data1,
data2 = P.data2
when not matched then
insert (ID, data1, data2)
values (P.ID, P.data1, P.data2);
```
If you cannot use `merge`:
```
if exists (select * from [Table] where ID = @ChildID)
update c set
data1 = p.data1,
data2 = p.data2
from [Table] as c cross join [Table] as p
where c.ID = @ChildID and p.ID = @ParentID
else
insert into [Table] (ID, data1, data2)
select @ChildID, data1, data2
from [Table]
where ID = @ParentID
```
**`sql fiddle demo`** | Depending on the version of SQL Server you are using (2008+) you can make use of [MERGE (Transact-SQL)](http://msdn.microsoft.com/en-us/library/bb510625%28v=sql.100%29.aspx)
> Performs insert, update, or delete operations on a target table based
> on the results of a join with a source table. For example, you can
> synchronize two tables by inserting, updating, or deleting rows in one
> table based on differences found in the other table.
## [SQL Fiddle DEMO](http://www.sqlfiddle.com/#!3/07d07/15) | Duplicating a record with different ID | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
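The question's update-then-insert shape above is workable as-is; what it mainly needs is per-column assignments instead of the `SET (a, b) = (...)` row-value form. A hedged sketch of that fallback pattern (the non-`MERGE` route the chosen answer describes), using Python's `sqlite3` with an invented two-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data1 TEXT, data2 TEXT)")
conn.execute("INSERT INTO t VALUES (123, 'a', 'b')")  # parent record
conn.execute("INSERT INTO t VALUES (124, 'x', 'y')")  # existing child record

def copy_parent_to_child(parent_id, child_id):
    # Try the UPDATE first; fall back to INSERT when no row was touched.
    cur = conn.execute(
        """
        UPDATE t
        SET data1 = (SELECT p.data1 FROM t p WHERE p.id = :pid),
            data2 = (SELECT p.data2 FROM t p WHERE p.id = :pid)
        WHERE id = :cid
        """,
        {"pid": parent_id, "cid": child_id},
    )
    if cur.rowcount == 0:  # the @@ROWCOUNT check from the question
        conn.execute(
            "INSERT INTO t (id, data1, data2) "
            "SELECT :cid, data1, data2 FROM t WHERE id = :pid",
            {"pid": parent_id, "cid": child_id},
        )

copy_parent_to_child(123, 124)  # exercises the update path
copy_parent_to_child(123, 125)  # exercises the insert path
updated = conn.execute("SELECT data1, data2 FROM t WHERE id = 124").fetchone()
inserted = conn.execute("SELECT data1, data2 FROM t WHERE id = 125").fetchone()
```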
> **UPDATE:**
> Someone marked this question as a duplicate of
> [How do I split a string so I can access item x](https://stackoverflow.com/questions/2647/how-do-i-split-a-string-so-i-can-access-item-x).
> But it's different: my question is about Sybase SQL Anywhere, the other is about MS SQL Server. These are two different SQL engines; even though they share an origin, they have different syntax. So it's not a duplicate. I wrote from the start, in the description and tags, that this is all about **Sybase SQL Anywhere**.
I have field `id_list='1234,23,56,576,1231,567,122,87876,57553,1216'`
and I want to use it to search `IN` this field:
```
SELECT *
FROM table1
WHERE id IN (id_list)
```
* `id` is `integer`
* `id_list` is `varchar/text`
But this way it doesn't work, so I need some way to split `id_list` inside the select query.
What solution should I use here? I'm using the T-SQL Sybase ASA 9 database (SQL Anywhere).
The way I see it, I could create my own function with a while loop that extracts each element by searching for the delimiter position, then inserts the elements into a temp table that the function returns as its result. | Like *Mikael Eriksson* said, there is an answer at **[dba.stackexchange.com](https://dba.stackexchange.com/a/51444/29314)** with two very good solutions: the first uses the `sa_split_list` system procedure, and the second, slower one uses a `CAST` statement.
The `sa_split_list` system procedure does not exist in Sybase SQL Anywhere 9, so I made a replacement for it (I used parts of the code from *bsivel*'s answer):
> ```
> CREATE PROCEDURE str_split_list
> (in str long varchar, in delim char(10) default ',')
> RESULT(
> line_num integer,
> row_value long varchar)
> BEGIN
> DECLARE str2 long varchar;
> DECLARE position integer;
>
> CREATE TABLE #str_split_list (
> line_num integer DEFAULT AUTOINCREMENT,
> row_value long varchar null,
> primary key(line_num));
>
> SET str = TRIM(str) || delim;
> SET position = CHARINDEX(delim, str);
>
> separaterows:
> WHILE position > 0 loop
> SET str2 = TRIM(LEFT(str, position - 1));
> INSERT INTO #str_split_list (row_value)
> VALUES (str2);
> SET str = RIGHT(str, LENGTH(str) - position);
> SET position = CHARINDEX(delim, str);
> end loop separaterows;
>
> select * from #str_split_list order by line_num asc;
>
> END
> ```
Execute the same way as `sa_split_list` with default delimiter `,`:
> ```
> select * from str_split_list('1234,23,56,576,1231,567,122,87876,57553,1216')
> ```
or with specified delimiter which can be changed:
> ```
> select * from str_split_list('1234,23,56,576,1231,567,122,87876,57553,1216', ',')
This can be done without using dynamic SQL, but you will need to create a couple of supporting objects. The first object is a table-valued function that will parse your string and return a table of integers. The second object is a stored procedure with a parameter where you can pass the string (id\_list), parse it into a table, and finally join it to your query.
First, create the function to parse the string:
```
CREATE FUNCTION [dbo].[String_To_Int_Table]
(
@list NVARCHAR(1024)
, @delimiter NCHAR(1) = ',' --Defaults to CSV
)
RETURNS
@tableList TABLE(
value INT
)
AS
BEGIN
DECLARE @value NVARCHAR(11)
DECLARE @position INT
SET @list = LTRIM(RTRIM(@list))+ ','
SET @position = CHARINDEX(@delimiter, @list, 1)
IF REPLACE(@list, @delimiter, '') <> ''
BEGIN
WHILE @position > 0
BEGIN
SET @value = LTRIM(RTRIM(LEFT(@list, @position - 1)));
INSERT INTO @tableList (value)
VALUES (cast(@value as int));
SET @list = RIGHT(@list, LEN(@list) - @position);
SET @position = CHARINDEX(@delimiter, @list, 1);
END
END
RETURN
END
```
Now create your stored procedure:
```
CREATE PROCEDURE ParseListExample
@id_list as nvarchar(1024)
AS
BEGIN
SET NOCOUNT ON;
--create a temp table to hold the list of ids
CREATE TABLE #idTable (ID INT);
-- use the table valued function to parse the ids into a table.
INSERT INTO #idTable(ID)
SELECT Value FROM dbo.String_to_int_table(@id_list, ',');
-- join the temp table of ids to the table you want to query...
SELECT T1.*
FROM table1 T1
JOIN #idTable T2
on T1.ID = T2.ID
```
Execution Example:
```
exec ParseListExample @id_list='1234,23,56,576,1231,567,122,87876,57553,1216'
```
I hope this helps... | Split/explode comma delimited string with Sybase SQL Anywhere | [
"",
"sql",
"t-sql",
"split",
"sybase",
"sqlanywhere",
""
] |
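When the splitting can happen in the client instead of the database, the usual alternative to a string-splitting procedure like the ones above is one bind placeholder per id; a sketch in Python with `sqlite3` and a small invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(23,), (576,), (999,)])

id_list = "1234,23,56,576,1231,567,122,87876,57553,1216"

# Client-side split, then a parameterized IN (...) with one placeholder per id.
ids = [int(x) for x in id_list.split(",")]
placeholders = ",".join("?" * len(ids))
rows = conn.execute(
    f"SELECT id FROM table1 WHERE id IN ({placeholders}) ORDER BY id", ids
).fetchall()
found = [r[0] for r in rows]
```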
I'm trying to figure out how to use ActiveRecord to execute the following pseudocode:
```
rows_updated = UPDATE my_table SET my_column="abc" WHERE id=123
if rows_updated == 0 then INSERT INTO my_table (id, my_column) VALUES (123, "abc")
```
The reason for this is most of the time the record already exists. I want to save myself the extra `SELECT` query that `first_or_create!` seems to generate.
Any ideas? | There is also a different and, from my point of view, more elegant solution for this case.
One of the options is using a REPLACE command instead of all this code.
just run:
```
REPLACE INTO my_table (id, my_column) VALUES (123, "abc")
```
This solution works just fine in most cases, but if you have a very high load on this piece of code, you should consider using
[INSERT .... ON DUPLICATE KEY UPDATE](http://dev.mysql.com/doc/refman/5.5/en/insert-on-duplicate.html) | First do a find-or-initialize so that, if the id is not there, it will create the object. Then check if it is new: if new, save it; else update the entry.
```
obj= find_or_initialize_by_id(obj)
obj.new_record? ? prod.save! : *update the entry*
``` | How to efficiently use first_or_initialize when records usually already exist | [
"",
"mysql",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
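SQLite also supports `REPLACE INTO`, so the chosen answer's idea above can be exercised from Python. One caveat worth hedging: `REPLACE` is delete-then-insert, so it resets columns you don't supply and fires delete triggers, which is part of why the answer points hot paths at MySQL's `INSERT ... ON DUPLICATE KEY UPDATE` instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, my_column TEXT)")
conn.execute("INSERT INTO my_table VALUES (123, 'old')")

# REPLACE deletes any conflicting row, then inserts: one statement, no SELECT first.
conn.execute("REPLACE INTO my_table (id, my_column) VALUES (123, 'abc')")
value = conn.execute("SELECT my_column FROM my_table WHERE id = 123").fetchone()[0]
count = conn.execute("SELECT COUNT(*) FROM my_table").fetchone()[0]
```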
I have 2 tables, project/images related:
```
[project]
project_id (int primary key)
completed (date)
[project_media]
project_media_id (int primary key)
project_id (int foreign key)
image (varchar)
```
What I want to select is:
from the last 5 project records based on completed, take the last image added based on project\_media\_id.
In other words the last image inserted from the last 5 projects.
What I have so far:
```
select project.completed, project_media.project_id, project_media.image
from project
inner join project_media
on project.project_id = project_media.project_id
order by completed desc limit 5
``` | ```
SELECT PR.*,
(
SELECT PM.image FROM project_media PM
WHERE PR.project_id = PM.project_id
ORDER BY PM.project_media_id DESC
LIMIT 1
) AS image
FROM
(
SELECT P.project_id, P.completed
FROM project P
ORDER BY P.completed DESC
LIMIT 5
) AS PR
``` | If you wanted to just get the five most recent things in the project table, you'd do this:
```
select * from project order by completed desc limit 5;
```
Joining in the other table, you'd end up with something like this:
```
select * from project p join project_media pm on p.project_id = pm.project_id order by completed desc limit 5;
```
Limiting the columns, your query will be:
```
select pm.image from project p join project_media pm on p.project_id = pm.project_id order by completed desc limit 5;
```
I'm a little unclear about what you're asking in the last bit, but it sounds like you now want to get only one image, based on the media\_project\_id, from these five rows. You could do this with a subquery:
```
select a.image from (select pm.image, pm.project_media_id from project p join project_media pm on p.project_id = pm.project_id order by completed desc limit 5) a order by a.project_media_id limit 1;
```
Does this answer your question? | SQL get the 5 last items inserted based on 2 table join | [
"",
"mysql",
"sql",
""
] |
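The shape of the chosen answer above (limit the projects first, then fetch each project's newest image with a correlated `LIMIT 1` subquery) also runs on SQLite; a sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (project_id INTEGER, completed TEXT);
CREATE TABLE project_media (project_media_id INTEGER, project_id INTEGER, image TEXT);
INSERT INTO project VALUES (1, '2013-01-01'), (2, '2013-02-01'), (3, '2013-03-01');
INSERT INTO project_media VALUES
  (1, 1, 'a.jpg'), (2, 1, 'b.jpg'), (3, 2, 'c.jpg'), (4, 3, 'd.jpg');
""")

rows = conn.execute("""
SELECT pr.project_id,
       (SELECT pm.image FROM project_media pm
        WHERE pm.project_id = pr.project_id
        ORDER BY pm.project_media_id DESC
        LIMIT 1) AS image
FROM (SELECT project_id, completed FROM project
      ORDER BY completed DESC LIMIT 5) pr
""").fetchall()
images = {r[0]: r[1] for r in rows}  # newest image per recent project
```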
I am trying to get the average for an item, so I am using a subquery.
**Update**: I should have been clearer initially, but I want the average to cover the last 5 items only.
First I started with
```
SELECT
y.id
FROM (
SELECT *
FROM (
SELECT *
FROM products
WHERE itemid=1
) x
ORDER BY id DESC
LIMIT 15
) y;
```
Which runs but is fairly useless as it just shows me the ids.
I then added in the below
```
SELECT
y.id,
(SELECT AVG(deposit) FROM (SELECT deposit FROM products WHERE id < y.id ORDER BY id DESC LIMIT 5)z) AVGDEPOSIT
FROM (
SELECT *
FROM (
SELECT *
FROM products
WHERE itemid=1
) x
ORDER BY id DESC
LIMIT 15
) y;
```
When I do this I get the error **Unknown column 'y.id' in 'where clause'**, upon further reading here I believe this is because when the queries go down to the next level they need to be joined?
So I tried the below (removed the unneeded subquery):
```
SELECT
y.id,
(SELECT AVG(deposit) FROM (
SELECT deposit
FROM products
INNER JOIN y as yy ON products.id = yy.id
WHERE id < yy.id
ORDER BY id DESC
LIMIT 5)z
) AVGDEPOSIT
FROM (
SELECT *
FROM products
WHERE itemid=1
ORDER BY id DESC
LIMIT 15
) y;
```
But I get **Table 'test.y' doesn't exist**. Am I on the right track here? What do I need to change to get what I am after here?
The example can be found [here in sqlfiddle](http://sqlfiddle.com/#!2/c13bc).
```
CREATE TABLE products
(`id` int, `itemid` int, `deposit` int);
INSERT INTO products
(`id`, `itemid`, `deposit`)
VALUES
(1, 1, 50),
(2, 1, 75),
(3, 1, 90),
(4, 1, 80),
(5, 1, 100),
(6, 1, 75),
(7, 1, 75),
(8, 1, 90),
(9, 1, 90),
(10, 1, 100);
```
Given my data in this example, my expected result is below, where there is a column next to each ID that has the avg of the previous 5 deposits.
```
id | AVGDEPOSIT
10 | 86 (deposit value of (id9+id8+id7+id6+id5)/5) to get the AVG
9 | 84
8 | 84
7 | 84
6 | 79
5 | 73.75
I'm not a MySQL expert (in MS SQL this could be done more easily), and your question looks a bit unclear to me, but it looks like you're trying to get the average of the previous 5 items.
If you have **Id without gaps**, it's easy:
```
select
p.id,
(
select avg(t.deposit)
from products as t
where t.itemid = 1 and t.id >= p.id - 5 and t.id < p.id
) as avgdeposit
from products as p
where p.itemid = 1
order by p.id desc
limit 15
```
**If not**, then I tried to write the query like this:
```
select
p.id,
(
select avg(t.deposit)
from (
select tt.deposit
from products as tt
where tt.itemid = 1 and tt.id < p.id
order by tt.id desc
limit 5
) as t
) as avgdeposit
from products as p
where p.itemid = 1
order by p.id desc
limit 15
```
But I got the exception `Unknown column 'p.id' in 'where clause'`. It looks like MySQL cannot handle 2 levels of subquery nesting.
But you can get 5 previous items with `offset`, like this:
```
select
p.id,
(
select avg(t.deposit)
from products as t
where t.itemid = 1 and t.id > coalesce(p.prev_id, -1) and t.id < p.id
) as avgdeposit
from
(
select
p.id,
(
select tt.id
from products as tt
where tt.itemid = 1 and tt.id <= p.id
order by tt.id desc
limit 1 offset 6
) as prev_id
from products as p
where p.itemid = 1
order by p.id desc
limit 15
) as p
```
**`sql fiddle demo`** | This is my solution. It is easy to understand how it works, but at the same time it can't be optimized much since I'm using some string functions, and it's far from standard SQL. If you only need to return a few records, it could be still fine.
This query will return, for every ID, a comma-separated list of previous IDs, in descending order:
```
SELECT p1.id, p1.itemid, GROUP_CONCAT(p2.id ORDER BY p2.id DESC) previous_ids
FROM
products p1 LEFT JOIN products p2
ON p1.itemid=p2.itemid AND p1.id>p2.id
GROUP BY
p1.id, p1.itemid
ORDER BY
p1.itemid ASC, p1.id DESC
```
and it will return something like this:
```
| ID | ITEMID | PREVIOUS_IDS |
|----|--------|-------------------|
| 10 | 1 | 9,8,7,6,5,4,3,2,1 |
| 9 | 1 | 8,7,6,5,4,3,2,1 |
| 8 | 1 | 7,6,5,4,3,2,1 |
| 7 | 1 | 6,5,4,3,2,1 |
| 6 | 1 | 5,4,3,2,1 |
| 5 | 1 | 4,3,2,1 |
| 4 | 1 | 3,2,1 |
| 3 | 1 | 2,1 |
| 2 | 1 | 1 |
| 1 | 1 | (null) |
```
Then we can join the result of this query with the products table itself; in the join condition we can use FIND\_IN\_SET(src, csvalues), which returns the position of the src string inside the comma-separated values:
```
ON FIND_IN_SET(id, previous_ids) BETWEEN 1 AND 5
```
and the final query looks like this:
```
SELECT
list_previous.id,
AVG(products.deposit)
FROM (
SELECT p1.id, p1.itemid, GROUP_CONCAT(p2.id ORDER BY p2.id DESC) previous_ids
FROM
products p1 INNER JOIN products p2
ON p1.itemid=p2.itemid AND p1.id>p2.id
GROUP BY
p1.id, p1.itemid
) list_previous LEFT JOIN products
ON list_previous.itemid=products.itemid
AND FIND_IN_SET(products.id, previous_ids) BETWEEN 1 AND 5
GROUP BY
list_previous.id
ORDER BY
id DESC
```
Please see the fiddle [here](http://sqlfiddle.com/#!2/c13bc/81). I wouldn't recommend using this trick for big tables, but for small sets of data it is fine. | Unknown column in mysql subquery | [
"",
"mysql",
"sql",
"join",
"subquery",
""
] |
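The intended "average of the previous 5 deposits" from the question's expected output can be pinned down in plain Python, which makes a handy oracle for checking any of the SQL attempts above (sample deposits copied from the question, itemid 1 only):

```python
# Deposits by id, copied from the question's sample data.
deposits = {1: 50, 2: 75, 3: 90, 4: 80, 5: 100,
            6: 75, 7: 75, 8: 90, 9: 90, 10: 100}

def avg_previous(deposits, item_id, window=5):
    """Average the deposits of up to `window` ids immediately before item_id."""
    prev = [deposits[i] for i in sorted(deposits) if i < item_id][-window:]
    return sum(prev) / len(prev) if prev else None

# Matches the question's expected output: 86, 84, 84, 84, 79, 73.75.
results = {i: avg_previous(deposits, i) for i in (10, 9, 8, 7, 6, 5)}
```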
Sample Table
```
CustomerId | VoucherId | CategoryId | StartDate | EndDate
-------------------------------------------------------------
10 | 1 | 1 | 2013-09-01| 2013-09-30
10 | 1 | 2 | 2013-09-01| 2013-09-30
11 | 2 | 1 | 2013-09-01| 2013-11-30
11 | 2 | 2 | 2013-09-01| 2013-11-30
11 | 2 | 3 | 2013-09-01| 2013-11-30
10 | 3 | 1 | 2013-10-01| 2013-12-31
10 | 3 | 2 | 2013-10-01| 2013-12-31
11 | 4 | 1 | 2013-12-01| 2014-04-30
```
In the sample records above, I want to find the total number of months each customer's vouchers cover.
I need the output in the form:
```
CustomerId | Months
--------------------
10 | 4
11 | 8
```
The catch is that a voucher can have multiple rows for different CategoryIds...
I calculated the months covered by a voucher as DATEDIFF(MM, StartDate, EndDate) + 1...
When I apply SUM(DATEDIFF(MM, StartDate, EndDate) + 1) GROUP BY VoucherId, StartDate, EndDate,
it gives the wrong result because of the multiple rows per VoucherId...
I get something like this...
```
CustomerId | Months
--------------------
10 | 8
11 | 14
```
CategoryId is useless in this scenario
Thanks | **`sql fiddle demo`**
This SQL Fiddle addresses your concerns. You need to generate a Calendar table so that you have something to join your dates to. Then you can do a count of distinct MonthYears for each Customer.
```
create table test(
CustomerId int,
StartDate date,
EndDate date
)
insert into test
values
(10, '9/1/2013', '9/30/2013'),
(10, '9/1/2013', '9/30/2013'),
(11, '9/1/2013', '11/30/2013'),
(11, '9/1/2013', '11/30/2013'),
(11, '9/1/2013', '11/30/2013'),
(10, '10/1/2013', '12/31/2013'),
(10, '10/1/2013', '12/31/2013'),
(11, '12/1/2013', '4/30/2014')
create table calendar(
MY varchar(10),
StartDate date,
EndDate date
)
insert into calendar
values
('9/2013', '9/1/2013', '9/30/2013'),
('10/2013', '10/1/2013', '10/31/2013'),
('11/2013', '11/1/2013', '11/30/2013'),
('12/2013', '12/1/2013', '12/31/2013'),
('1/2014', '1/1/2014', '1/31/2014'),
('2/2014', '2/1/2014', '2/28/2014'),
('3/2014', '3/1/2014', '3/31/2014'),
('4/2014', '4/1/2014', '4/30/2014')
select
t.CustomerId,
count(distinct c.MY)
from
test t
inner join calendar c
on t.StartDate <= c.EndDate
and t.EndDate >= c.StartDate
group by
t.CustomerId
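-- If you would rather not maintain the calendar rows by hand, a recursive CTE
-- can generate one row per month (a sketch; the range is hard-coded here to
-- cover the sample data, and the month names are not needed for the count):
;WITH months AS (
    SELECT CAST('2013-09-01' AS date) AS StartDate
    UNION ALL
    SELECT DATEADD(MONTH, 1, StartDate) FROM months WHERE StartDate < '2014-04-01'
)
SELECT t.CustomerId, COUNT(DISTINCT c.StartDate) AS Months
FROM test t
JOIN months c
  ON t.StartDate < DATEADD(MONTH, 1, c.StartDate)  -- voucher starts before month ends
 AND t.EndDate >= c.StartDate                      -- and ends after month starts
GROUP BY t.CustomerId;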
``` | Actually, in your case (when the periods for each category are equal) you can use this query:
```
with cte as (
select distinct
CustomerId, StartDate, EndDate
from Table1
)
select CustomerId, sum(datediff(mm, StartDate, EndDate) + 1) as diff
from cte
group by CustomerId
```
**`sql fiddle demo`** | SQL: SUM after GROUP BY | [
"sql",
"group-by",
"sum"
] |
I have the following statement entered into the MySQL 5.6 Command Line Client; however, the following error was received. I haven't even been able to add `END//` and `DELIMITER ;` after the select statement.
At the same time, I was wondering: once the stored procedure has been created successfully, how do I CALL it from Java code rather than from the command line?
Kindly assist. Greatly appreciated!
 | ```
mysql> delimiter //
mysql> CREATE PROCEDURE GetUStocke()
-> BEGIN
-> SELECT * FROM buystocks ;
-> END//
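mysql> -- once created, reset the delimiter and the procedure can be called;
mysql> -- from Java, use a JDBC CallableStatement with "{call GetUStocke()}"
mysql> delimiter ;
mysql> CALL GetUStocke();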
``` | Give a space between `delimiter` and `//`. After your `select` statement, write `end;` on the next line and put `//` on the last line (a new line after `end;`):
```
delimiter //
create procedure GetUStocks()
Begin
Select * From buystocks;
end;
//
``` | Cannot create stored procedure | [
"mysql",
"sql",
"database",
"stored-procedures"
] |
The following T-SQL code:
```
DECLARE @usu VARCHAR(10);
SET @usu = 'TOM';
PRINT @usu;
RAISERROR ('Name of USU is %i ',14,2,@usu);
```
returns the following error:
> Msg 2786, Level 16, State 1, Line 4
> The data type of substitution
> parameter 1 does not match the expected type of the format
> specification.
Does anyone know how I can get rid of this error? | Yeah, change your format to `Name of USU is %s`, the `%i` means the value of `@usu` is a signed integer. All of the format types are clearly [documented on MSDN](http://msdn.microsoft.com/en-us/library/ms178592.aspx). | Try to change that:
```
RAISERROR ('Name of USU is %i ',14,2,@usu);
```
into that
```
RAISERROR ('Name of USU is %s ',14,2,@usu);
```
since @usu is varchar(10) and %i means signed integer | RAISERROR raises substitution parameter error | [
"sql",
"sql-server",
"t-sql",
"sql-server-2005"
] |
How to get current date in sql to **MMMYY** i.e. OCT13
```
select Convert(varchar(10), getdate(), 6)  -- this will generate: 11 Oct 13
```
I need to get OCT13.
Any help appreciated.
The front-end application cannot do this formatting; I am exporting the data from one SQL Server to another.
Thanks | ```
SELECT REPLACE(RIGHT(CONVERT(VARCHAR(9), GETDATE(), 6), 6), ' ', '') AS [MMMYY]
``` | With the FORMAT Function its very easy to convert it to the right format:
(for SQL Server 2012)
`SELECT FORMAT(GETDATE(), 'MMMyy')` | Sql Date in Format MMMYY | [
"sql",
"sql-server-2008",
"t-sql"
] |
I have a table that looks like this:
```
Person (Ssn, Name, Age, Petname)
```
I need to form a query that returns the name of all persons that have the same number of pets as the person with Ssn = 1 (e.g. if the person with Ssn = 1 has one pet, "Zeus", then all persons that also have a pet should be returned). I know the table design is poor, but it's from a school assignment and has to look like that.
This is what I've got so far. I think it's partly right but I can't seem to figure it out completely:
```
SELECT Name
FROM Person
WHERE (SELECT COUNT(Petname) FROM Person WHERE Ssn = '1')
= (SELECT COUNT(Petname) FROM Person WHERE Ssn != '1');
``` | This should do the trick:
```
SELECT Name
FROM Person
WHERE SSN <> 1
GROUP BY Name
HAVING COUNT(PetName) = (SELECT COUNT(PetName) FROM PERSON WHERE SSN='1')
```
Here is also a [SQLFiddle](http://sqlfiddle.com/#!3/64860/16) with the code. | Try this
```
select Name from Person
group by Name
having
sum(case when Pnr = '1' then 1 else 0 end) = sum(case when Pnr <> '1' then 1 else 0 end)
``` | Select rows with the same count in a column from a table | [
"sql",
"select",
"join"
] |
I am trying the following query and it works as I expect it to
`SELECT RIGHT('0000' + CAST(MAX(party_id)+1 AS VARCHAR(4)),4) FROM PARTY`
The result is:
> 0147
But when I execute the following query so that I can store this value in a variable
```
DECLARE @pid varchar;
SELECT @pid = RIGHT('0000' + CAST(MAX(party_id)+1 AS VARCHAR(4)),4) FROM PARTY
SELECT @pid as party_id
```
it doesn't return `0147` as the query above does; instead, it returns
> 0
Can anyone please tell me what I am doing wrong here? | You should **always** define a **length** when you declare a `varchar` !!
This
```
DECLARE @pid varchar;
```
gives you a `varchar` of **exactly ONE character** length!!
Use
```
DECLARE @pid varchar(20);
```
and your problem is solved ... | You have not declared the size of the varchar; that is why it is truncating.
Set the size sufficiently large to store the result.
```
DECLARE @pid varchar(10);
SELECT @pid = RIGHT('0000' + CAST(MAX(party_id)+1 AS VARCHAR(4)),4) FROM PARTY
SELECT @pid as party_id
```
[From SQL Server MSDN](http://technet.microsoft.com/en-us/library/ms176089.aspx)
> **varchar [ ( n | max ) ]**
>
> Variable-length, non-Unicode string data. n
> defines the string length and can be a value from 1 through 8,000. max
> indicates that the maximum storage size is 2^31-1 bytes (2 GB). The
> storage size is the actual length of the data entered + 2 bytes. The
> ISO synonyms for varchar are char varying or character varying.
>
> **When n is not specified in a data definition or variable declaration**
> **statement, the default length is 1.** | SQL Server : How to use variables? | [
"sql",
"sql-server",
"stored-procedures"
] |
I have a series of SQL statements that I am reading into my db. Specifically, I am seeding a table with cities and coordinates, but I am a little confused as to how to handle missing ID columns in SQL dumps.
My migration to create the table:
```
class CreateCitiesExtended < ActiveRecord::Migration
def change
create_table :cities_extended do |t|
t.string :city
t.string :state_code
t.integer :zip
t.float :latitude
t.float :longitude
t.string :county
end
end
def down
drop_table :cities_extended
end
end
```
After running the migration:
```
sqlite> PRAGMA table_info(cities_extended)
0|id|INTEGER|1||1
1|city|varchar(255)|0||0
2|state_code|varchar(255)|0||0
3|zip|integer|0||0
4|latitude|float|0||0
5|longitude|float|0||0
6|county|varchar(255)|0||0
```
The sql file looks something like this:
```
INSERT INTO `cities_extended` VALUES ('Holtsville', 'NY', '00501', '40.8152', '-73.0455', 'Suffolk');
INSERT INTO `cities_extended` VALUES ('Holtsville', 'NY', '00544', '40.8152', '-73.0455', 'Suffolk');
INSERT INTO `cities_extended` VALUES ('Adjuntas', 'PR', '00601', '18.1788', '-66.7516', 'Adjuntas');
```
But when I attempt to read the .sql file into my sqlite table, I get a column mismatch error:
```
rails db
sqlite> .read ./db/data/cities_extended.sql
Error: near line 41780: table cities_extended has 7 columns but 6 values were supplied
Error: near line 41781: table cities_extended has 7 columns but 6 values were supplied
```
As you can see from the migrated table, an extra column called id was created by rails. This prevents the table from being seeded. What is the best way to satisfy the column requirements? | If you do require the default id column, you can amend the INSERT sql to specify the columns used:
```
INSERT INTO `cities_extended` (city, state_code, zip, latitude, longitude, county) VALUES ('Holtsville', 'NY', '00501', '40.8152', '-73.0455', 'Suffolk');
```
This should give you the normal auto-incremented id column as generated by the table. | So I found a workaround, but I'm not convinced it's the best way:
```
class CreateCitiesExtended < ActiveRecord::Migration
def change
create_table :cities_extended, :id => false do |t|
```
Setting `:id => false` allows me to bypass the requirement.
It works for my case, but I'm not sure it's the best way because there will not be any unique IDs on any of the records. I'll leave the question open in case someone knows of a better way.
source: [Create an ActiveRecord database table with no :id column?](https://stackoverflow.com/questions/874634/create-an-activerecord-database-table-with-no-id-column) | rails import sql, column mismatch on id | [
"sql",
"ruby-on-rails",
"sqlite"
] |
I am running the SQL query below, which selects data for each property with a LEFT JOIN: mainly from table 1 ('main_table'), and also from table 2 ('houses') when table 2 holds data on that property.
However, I want to show only one result per property.
I have tried various things with DISTINCT before the fields I am selecting, but it hasn't worked.
Below is the SQL, along with a sample of the data returned.
So for example, 128 Mayfield Road has 2 entries in table 2, so it is returned twice, but I only want to show each house once.
Many Thanks
```
SELECT main_table.housenumber, main_table.streetname, main_table.rent, houses.houseID, houses.path, houses.rooms
FROM main_table
LEFT JOIN houses ON main_table.housenumber = houses.housenumber
AND main_table.streetname = houses.streetname
533 Portswood Road 57 NULL NULL NULL
35 Ripstone Gardens 70 NULL NULL NULL
67 Kent Road 68 NULL NULL NULL
21 Bealing Close 65 NULL NULL NULL
75 Broadlands Road 76 NULL NULL NULL
7 Gordon Avenue 70 243 images_housing/GOR1.jpg 4
29 Broadlands Road 74 NULL NULL NULL
10 Westbrook Way 65 NULL NULL NULL
328C Burgess Road 85 NULL NULL NULL
10 Thackeray Road 68 NULL NULL NULL
128 Mayfield Road 70 311 images_housing/mayfield1.jpg 4
128 Mayfield Road 67 311 images_housing/mayfield1.jpg 4
``` | Maybe like this?
```
SELECT
main_table.housenumber,
main_table.streetname,
max(main_table.rent),
houses.houseID,
houses.path,
houses.rooms
FROM main_table
LEFT JOIN houses ON main_table.housenumber = houses.housenumber
AND main_table.streetname = houses.streetname
group by
main_table.housenumber,
main_table.streetname,
houses.houseID,
houses.path,
houses.rooms
``` | Say a house has more than one entry. If you don't care which of the possible values you get in the other columns then you can use a much maligned MySQL extension to group by:
```
Select
m.housenumber,
m.streetname,
m.rent,
h.houseID,
h.path,
h.rooms
From
main_table m
Left Join
houses h
on m.housenumber = h.housenumber and
m.streetname = h.streetname
Group By
m.housenumber,
m.streetname
```
Most databases won't let you do this, because usually you do care which of the other possible values you get. | SQL query multiple columns DISTINCT on one column | [
"mysql",
"sql",
"join"
] |
I just have a quick question. Can a table have its only primary key as a foreign key?
To clarify. When I've been creating tables I sometimes have a table with multiple keys where some of them are foreign keys. For example:
```
create table Pet(
Name varchar(20),
Owner char(1),
Color varchar(10),
primary key(Name, Owner),
foreign key(Owner) references Person(Ssn)
);
```
So now I'm wondering if it's possible to do something like this:
```
create table WorksAs(
Worker char(1),
Work varchar(30),
primary key(Worker),
foreign key(Worker) references Person(Ssn)
);
```
This would result in two tables having the exact same primary key. Is this something that should be avoided or is it an ok way to design a database? If the above is not a good standard I would simply make the Work variable a primary key as well and that would be fine, but it seems simpler to just skip if it is not needed. | Yes. Because of the following reasons.
1. Making them the primary key will force uniqueness (as opposed to imply it).
2. The primary key will presumably be clustered (depending on the dbms) which will improve performance for some queries.
3. It saves the space of adding a unique constraint, which in some DBMSs also creates a unique index. | Yes, it's perfectly legal to do that.
In fact, this is the basis of [IS-A relations](http://www.geomagas.gr/index.php/is-a-relations-in-relational-databases/) ;) | Can a foreign key be the only primary key | [
"sql"
] |
I am stuck working on a homework assignment, and this is what I have so far.
The instruction is: display department name, city, and the number of different jobs in each department.
Tables: employees (has job_id and department_id), deptos (has location_id and department_id, but no job ids), locations (has location_id and city).
I need to include all cities, even the ones without employees.
What I was trying to do...
```
select d.department_name, l.city, count (distinct e.job_id)
from employees e
join deptos d on (e.department_id=d.department_id)
join locations l on (d.location_id=l.location_id)
group by d.department_name
``` | Locations can have rows with no matching data in the other tables, so use a RIGHT JOIN (or start from locations and use LEFT JOINs). You also need to group by city. I tried to make minimal changes to the OP's query.
```
select d.department_name, l.city, count(distinct e.job_id)
from employees e
join deptos d on (e.department_id=d.department_id)
right join locations l on (d.location_id=l.location_id)
group by d.department_name, l.city
```
[SQL Fiddle to test with](http://sqlfiddle.com/#!4/36c00/14/0) | You need to use an OUTER JOIN to accomplish this, like the example below. You do not necessarily have to use the keyword OUTER, as it is implied; just remember the difference between LEFT and RIGHT joins. There is a post on that here:
[LEFT JOIN vs. LEFT OUTER JOIN in SQL Server](https://stackoverflow.com/questions/406294/left-join-and-left-outer-join-in-sql-server)
```
select d.department_name, l.city, count (distinct e.job_id)
from locations l
left outer join deptos d on (e.department_id=d.department_id)
left outer join employees e on (d.location_id=l.location_id)
group by d.department_name
``` | Join 3 tables and Group | [
"sql",
"oracle",
"join",
"oracle11g",
"oracle-sqldeveloper"
] |
I'm trying to return a count of return users, which occurs when a `user_id`/`action_type` pair is duplicated.
So, referring to the table below, I would like the output to be 2, since user_id 5 has 2 identical action_types (234) and user_id 6 also has 2 identical action_types (585).
How do I structure my query to reflect this?
```
Table t1
User_Id Action_Type
--------- ------------
5 234
5 846
5 234
6 585
6 585
7 465
``` | ```
SELECT COUNT(DISTINCT User_Id) FROM (
SELECT User_Id
FROM t1
GROUP BY User_Id, Action_Type
HAVING COUNT(*) > 1
) t
``` | ```
SELECT COUNT(User_Id) FROM (
SELECT User_Id
FROM t1
GROUP BY User_Id, Action_Type
HAVING COUNT(*) > 1
) t
```
`DISTINCT` is not required; just count the ids returned. | SQL count of return users | [
"mysql",
"sql",
"hive",
"hue"
] |
I am using Liquibase for generating a MySQL and a HSQLDB databases.
In several tables I have a column called 'last\_modified' which is the TIMESTAMP of the last update on that particular record.
```
<changeSet author="bob" id="7">
<createTable tableName="myTable">
<column autoIncrement="true" name="id" type="INT">
<constraints nullable="false" primaryKey="true" />
</column>
<column name="name" type="VARCHAR(128)">
<constraints nullable="false" />
</column>
<column name="description" type="VARCHAR(512)" />
<column defaultValueBoolean="true" name="enabled" type="BIT">
<constraints nullable="false" />
</column>
<column name="last_modified" type="TIMESTAMP"/>
</createTable>
<modifySql dbms="mysql">
<append value=" engine innodb" />
</modifySql>
</changeSet>
```
I noticed that if I use MySQL, the generated SQL for that column is:
```
`last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
```
With HSQLDB, however, nothing happens on update; I would like the same behaviour as with MySQL, with a default value on update equal to CURRENT_TIMESTAMP.
How can I set the CURRENT\_TIMESTAMP as a default value ON UPDATE? | You can't do this with a default value. The MySQL behaviour is non-standard and not supported by other databases. The proper way to do this is with a TRIGGER which is defined as BEFORE UPDATE and sets the timestamp each time the row is updated.
Update: From HSQLDB version 2.3.4 this feature is supported. For example: `CREATE TABLE T1(ID INT, last_modified timestamp DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL)`. Note that the NOT NULL constraint must appear after the DEFAULT and ON UPDATE clauses. | Or you could try this, as you already have the `modifySql` tag added:
```
<column defaultValue="CURRENT_TIMESTAMP"
name="timestamp"
type="TIMESTAMP">
<constraints nullable="false"/>
</column>
<modifySql dbms="mysql">
<regExpReplace replace="'CURRENT_TIMESTAMP'" with="CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"/>
<append value=" engine innodb" />
</modifySql>
```
Liquibase 3.1.1 does not produce what you described above. I have to deal with it as given above | Default Value ON UPDATE Liquibase | [
"mysql",
"sql",
"database",
"hsqldb",
"liquibase"
] |
I have a temporary table I insert values into, something like:
```
SELECT ColourID
INTO #TEMP
FROM [Orders]
WHERE [Order] = 12345
```
Then later on I have a statement which does something like this:
```
SELECT ColourName
FROM Colours
WHERE [ID] IN(SELECT ColourID FROM #TEMP)
DROP TABLE #TEMP
```
This will return:
```
Yellow
Red
```
I know there are several instances of Red on this `order`, and the ID for Red is in the temporary table for each. How do I show each instance? I've tried performing a `COUNT` but that returns one beside each colour. | **IF** I understand you correctly, you want to display, say, 'Red' for every order line that contains Red. Then you need `INNER JOIN`:
```
SELECT c.ColourName, t.* FROM #TEMP as t INNER JOIN Colours as c ON t.ColourID = c.ID
``` | If I understand you correctly, you need a join to show the non-unique colors for each instance in your temp table:
```
SELECT ColourName
FROM Colours, #TEMP
WHERE ID = ColourID
``` | WHERE - IN Query Shows Only Distinct Values | [
"sql",
"sql-server"
] |
Is it possible to set the `ORDER BY` in a sql statement to the values you want? For example, if I want to choose the values for Day in this order: `Thu, Sat, Sun, Mon`
```
SELECT *
FROM `NFL_Games`
WHERE Week = '1'
ORDER BY Day
``` | You can use a `case when` expression for custom orderings...
```
order by
(case Day
when 'Thu' then 1
when 'Sat' then 2
when 'Sun' then 3
when 'Mon' then 4
else 5 end),
Day
``` | Yes, you can use expressions in `ORDER BY`
```
SELECT * FROM `NFL_Games` WHERE Week = '1' ORDER BY DAYOFWEEK( date_field )
``` | SQL setting ORDER BY | [
"mysql",
"sql"
] |
I am using PostgreSQL. Is it possible to change a table when a specific time occurs? I would like to modify values when a specific date, stored in the table being modified, arrives. For example:
a piece of artwork is located in a museum; after its exhibition ends, it is automatically placed back into storage, changing its location attribute. This occurs on a specified date. | Postgres does not have triggers on system-wide events (such as time).
What you can do, however, is have the OS's `cron` or `at` services do it for you by scheduling a statement like this:
```
echo "UPDATE artwork SET location='storage' WHERE name='Mona Lisa';" | psql -U some_user -d some_database
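# A data-driven variant is often more robust than one scheduled job per row:
# store the end date on each row (an assumed exhibition_end column, not in the
# question) and run a daily cron job that moves everything whose date passed:
# 0 2 * * * psql -U some_user -d some_database -c "UPDATE artwork SET location='storage' WHERE exhibition_end < current_date"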
``` | It is not possible. See cron jobs, as suggested by Muleinik... but then, to expand on your example:
> a piece of artwork is located in a museum; after its exhibition ends, it is automatically placed back into storage, changing its location attribute. This occurs on a specified date.
What happens if the piece of art is stolen (happens), or the museum it got sent to as part of a temporary exhibition decides to keep it (happens) or return it to its "rightful owner" (happens), or it's shelved in the wrong location (happens), etc.?
Don't just assume that things will go well -- they won't. | SQL change rule based off when a time happens | [
"sql",
"database",
"postgresql"
] |
I have a general question about SQL. What does "TOP 1 1" actually do?
What is the meaning of the query below?
```
select top 1 1 from Worker W where not exists (select 1 from Manager M where M.Id = W.Id)
```
What is the difference between SELECT "TOP 1 1" and "SELECT 1" in a SQL Server query? | `SELECT TOP 1` means selecting the very first record in the result set.
`SELECT 1` means returning the literal 1 as the result set.
`SELECT TOP 1 1 FROM [SomeTable] WHERE <SomeCondition>` Means if the condition is true and any rows are returned from the select, only return top `1` row and only return integer `1` for the row (no data just the integer 1 is returned). | In the following, the first "1", which is part of the "TOP 1" means to stop after it gets to a single result. The second "1" is just because the author really does not care what the result is.
```
SELECT TOP 1 1 FROM WORKER
```
is essentially the same as
```
SELECT TOP 1 * FROM WORKER
```
The only question is whether it would be more efficient in the "EXISTS" part of the query than just
```
SELECT 1 FROM Manager...
``` | Diff between Top 1 1 and Select 1 in SQL Select Query | [
"sql",
"sql-server"
] |
I have multiple tables in a database:
**tblOjt**
```
ID studentid courseid companyid addresseeid dateadded datestarted dateended ojthours
1 3 1 1 1 9/25/2013 500
```
**tblStudent**
```
ID lastname firstname middlename course gender renderedhours dateadded archive
3 Dela Cruz Juan Santos BSIT Male 500
```
**tblCourse**
```
ID coursealias coursename hours
1 BSIT Bachelor of Science in Information Technology 500
```
**tblCompany**
```
ID companyname
1 MyCompany
```
**tblAddressee**
```
ID addresseename
1 John dela Cruz
```
I need an SQL statement with which I can get these values:
```
tableOjt.id tableOJT.surname,firstname, and middlename course companyname addresseename dateadded datestarted dateended ojthours
```
How will I get this in SQL using those join methods? I'm writing it in VB6 with ADODC; is this the same syntax as standard SQL? Thanks | If you are writing a query against an Access database backend, you need to use the following join syntax:
```
select
t1.c1
, t2.c2
, t3.c3
, t4.c4
from ((t1
inner join t2 on t1.something = t2.something)
inner join t3 on t2.something = t3.something)
inner join t4 on t3.something = t4.something
```
The table and column names aren't important here, but the placement of the parentheses is. Basically, you need to have *n - 2* left parentheses after the `from` clause and one right parenthesis before the start of each new `join` clause except for the first, where *n* is the number of tables being joined together.
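Applied to the question's tables, a sketch might look like this (column names are taken from the question where given, otherwise guessed; the parenthesization is the point):

```sql
SELECT s.lastname, s.firstname, s.middlename, s.course,
       c.companyname, a.addresseename,
       o.dateadded, o.datestarted, o.dateended, o.ojthours
FROM ((tblOjt AS o
INNER JOIN tblStudent AS s ON o.studentid = s.ID)
INNER JOIN tblCompany AS c ON o.companyid = c.ID)
INNER JOIN tblAddressee AS a ON o.addresseeid = a.ID;
```

Four tables are joined, so there are two left parentheses after `FROM` and one right parenthesis before each `INNER JOIN` except the first.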
The reason is that Access's join syntax supports joining only two tables at a time, so if you need to join more than two you need to enclose the extra ones in parentheses. | ```
SELECT tblOjt.id, tblStudent.firstname, tblStudent.middlename,
tblStudent.lastname, tblStudent.course, tblCompany.companyname,
tblAddressee.addressee
FROM (((tblOjt
INNER JOIN tblStudent ON tblOjt.studentid = tblStudent.id)
INNER JOIN tblCourse ON tblOjt.courseid = tblCourse.id)
INNER JOIN tblCompany ON tblOjt.companyid = tblCompany.id)
INNER JOIN tblAddressee ON tblOjt.addresseeid = tbladdressee.id
```
Found it! Thanks to Yawar's approach... | Access-SQL: Inner Join with multiple tables | [
"sql",
"ms-access",
"vb6",
"ado"
] |
I have a query which looks like this:
```
select * from tbl1 where field1 in (select field2 from tbl2 where id=1)
```
* where field1 is of type integer
* and `select field2 from tbl2 where id=1` will return '1,2,3' which is a string
Obviously, we cannot pass a string when checking a field of type integer. What I am asking is: how can I fix the above query, or is there an alternate solution?
My DBMS : Postgresql 9.0.3 | ```
select *
from tbl1
where field1 in (
select regexp_split_to_table(field2, ',')::int
from tbl2
where id=1
)
```
Variation suggested by **Igor**
```
select *
from tbl1
where field1 = any (
select regexp_split_to_array(field2, ',')::int[]
from tbl2
where id=1
)
``` | You need to first split the input using [regex\_split\_to\_array()](http://www.postgresql.org/docs/9.1/static/functions-string.html) method and then use those IDs by casting them to integers
Try this -
```
select * from tbl1 where field1 in
(select NULLIF(x[1],'')::int,NULLIF(x[2],'')::int,NULLIF(x[3],'')::int from
(select regex_split_to_array(field2) from tbl2 where id=1) as dt(x))
```
Given that you have known number of elements i.e. 3 in this case coming from that result of the second query. | Pass a query inside Where in () | [
"sql",
"database",
"postgresql"
] |
In postgres I have two tables like so
```
CREATE TABLE foo (
pkey SERIAL PRIMARY KEY,
name TEXT
);
CREATE TABLE bar (
pkey SERIAL PRIMARY KEY,
foo_fk INTEGER REFERENCES foo(pkey) NOT NULL,
other TEXT
);
```
What I want to do is to write a .sql script file that does the following
```
INSERT INTO foo(name) VALUES ('A') RETURNING pkey AS abc;
INSERT INTO bar(foo_fk,other) VALUES
(abc, 'other1'),
(abc, 'other2'),
(abc, 'other3');
```
which produces the error below in pgAdmin
```
Query result with 1 row discarded.
ERROR: column "abc" does not exist
LINE 3: (abc, 'other1'),
********** Error **********
ERROR: column "abc" does not exist
SQL state: 42703
Character: 122
```
Outside of a stored procedure, how do I define a variable that I can use between statements? Or is there some other syntax to insert into bar with the pkey returned from the insert into foo? | You can combine the queries into one. Something like:
```
with foo_ins as (INSERT INTO foo(name)
VALUES ('A')
RETURNING pkey AS foo_id)
INSERT INTO bar(foo_fk,other)
SELECT foo_id, 'other1' FROM foo_ins
UNION ALL
SELECT foo_id, 'other2' FROM foo_ins
UNION ALL
SELECT foo_id, 'other3' FROM foo_ins;
```
Another option: use an anonymous PL/pgSQL block, like:
```
DO $$
DECLARE foo_id INTEGER;
BEGIN
INSERT INTO foo(name)
VALUES ('A')
RETURNING pkey INTO foo_id;
INSERT INTO bar(foo_fk,other)
VALUES (foo_id, 'other1'),
(foo_id, 'other2'),
(foo_id, 'other3');
END$$;
``` | You can use **[`lastval()`](https://www.postgresql.org/docs/current/functions-sequence.html)** to ...
> Return the value most recently returned by `nextval` in the current session.
This way you do not need to know the name of the sequence used.
```
INSERT INTO foo(name) VALUES ('A');
INSERT INTO bar(foo_fk,other) VALUES
(lastval(), 'other1')
, (lastval(), 'other2')
, (lastval(), 'other3')
;
```
This is safe because you control what you called last in your own session.
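If other inserts or triggers might advance a different sequence in between and you want to be explicit, `currval()` with the sequence resolved by name is a stricter alternative:

```sql
INSERT INTO foo(name) VALUES ('A');
INSERT INTO bar(foo_fk, other) VALUES
  (currval(pg_get_serial_sequence('foo', 'pkey')), 'other1')
, (currval(pg_get_serial_sequence('foo', 'pkey')), 'other2');
```

`pg_get_serial_sequence('foo', 'pkey')` looks up the sequence backing the serial column, so this keeps working even if the sequence is renamed.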
---
If you use a [**writable CTE**](https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING) as [proposed by @Ihor](https://stackoverflow.com/a/19337709/939860), you can still use a short `VALUES` expression in the 2nd `INSERT`. Combine it with a `CROSS JOIN` (or append the CTE name after a comma (`, ins`) - same thing):
```
WITH ins AS (
INSERT INTO foo(name)
VALUES ('A')
RETURNING pkey
)
INSERT INTO bar(foo_fk, other)
SELECT ins.pkey, o.other
FROM (
VALUES
('other1'::text)
, ('other2')
, ('other3')
) o(other)
CROSS JOIN ins;
``` | How to get the value of an auto increment column in postgres from a .sql script file? | [
"sql",
"postgresql"
] |
In SQL there is an expressive way to count how many values differ from `'x'`:
```
SELECT COUNT(NULLIF(col, 'x')) FROM table
```
I find it less elegant to count values equal to `'x'`:
* `SELECT COUNT(*) - COUNT(NULLIF(col, 'x')) FROM table`
* `SELECT SUM(CASE WHEN col = 'x' THEN 1 ELSE 0 END) FROM table`
* *[oracle]* `SELECT COUNT(DECODE(col,'x','x',NULL)) FROM table`
Is there a more elegant way to do that? | ```
SELECT
COUNT(CASE WHEN col='x' THEN 1 END) AS XCount,
COUNT(CASE WHEN col='y' THEN 1 END) AS YCount,
COUNT(CASE WHEN col='z' THEN 1 END) AS ZCount
FROM table
``` | The straightforward way is to do the filtration in the `WHERE` clause:
```
SELECT COUNT(*) FROM table WHERE col = 'x'
```
**EDIT:**
If you cannot use the where clause (because you are performing several counts in the same SELECT), then I think the ways you suggested yourself are the most elegant ones.
My personal preference would be the `SUM(CASE WHEN...`. | Aggregate to count values equal to a constant | [
"sql",
"oracle",
"count",
"aggregate-functions"
] |
Consider the following SQL command run in the MySQL command prompt:
```
select 'aBc'='Abc';
```
Output:
```
+-------------+
| 'aBc'='Abc' |
+-------------+
| 1 |
+-------------+
```
I expected the result to show '0' in place of '1'.
How can I differentiate between two strings if they are not in the same case? | You can use a binary collation. For example:
```
select 'aBc'='Abc' collate utf8_bin;
```
Or you can transform one of the strings into a binary type:
```
select binary('aBc')=binary('Abc');
```
For the differences between these two, see [The \_bin and binary Collations](http://dev.mysql.com/doc/refman/5.6/en/charset-binary-collations.html) in the MySQL documentation. | By default, MySQL is case-insensitive.
Either change column collation, or use `COLLATE` keyword, like:
```
SELECT 'abc' COLLATE 'utf8_bin' = 'abc' COLLATE 'utf8_bin'
``` | Differentiate between two strings with different case in mysql | [
"mysql",
"sql",
"string"
] |
I was wondering how to simplify this query; here (<http://sqlfiddle.com/#!3/2789c/4>) you have the complete example:
```
SELECT distinct (R.[roleId])
FROM [Role] R
LEFT JOIN [userRole] U ON R.[roleId] = U.[roleId]
WHERE R.RoleID NOT IN(
SELECT [roleId]
from [dbo].[userRole]
WHERE userId = 2)
```
I want to get all the roles that are not assigned to a specific user. I think the inner select could be removed.
**Update 1**
After your great help, I could use only one `SELECT` <http://sqlfiddle.com/#!3/2789c/87>
```
SELECT R.[roleID]
FROM [Role] R
LEFT JOIN [userRole] U
ON R.[roleID] = U.[roleID] AND U.userId = @userID
WHERE U.userId IS NULL
``` | As simple as it gets:
```
select roleId
from Role
except
select roleId
from userRole
where userId = 2
``` | ```
SELECT R.roleId
FROM [Role] R
LEFT JOIN [userRole] U ON R.roleId = U.roleId
group by r.roleId
having sum(case when U.userId = 2 then 1 else 0 end) = 0
```
## [SQLFiddle demo](http://sqlfiddle.com/#!3/2789c/20) | How to simplify a query between Many to Many Relationship | [
"sql",
"sql-server"
] |
This has been asked in different ways before, but I can't seem to get something that works for what I need exactly.
The goal here is to make a search query that returns Photos based on tags that are selected. Many tags can be applied to the filter simultaneously, which would need to make it so that the query only returns photos that have ALL of the tags selected. Think of any major web shop where you are narrowing down results after performing a basic keyword search.
Table1: Photos
ID|Title|Description|URL|Created
Table2: PhotosTagsXref
ID|PhotoId|TagId
Table3: PhotosTags
ID|Title|Category
What I have:
```
SELECT p.* FROM `PhotosTagsXref` AS pt
LEFT JOIN `Photos` AS p ON p.`ID` = pt.`PhotoId`
LEFT JOIN `PhotosTags` AS t ON pt.`TagId` = t.`ID`
WHERE p.`Description` LIKE "%test%" AND
????
GROUP BY p.`ID`
ORDER BY p.`Created` DESC LIMIT 20
```
The ???? is where I've tried a bunch of things, but I'm stumped. The problem is I can easily find a result set that contains photos with one tag or another, but if applying 2, 3, or 4 tags we'd need to only return photos that have entries for all of those tags in the database. I think this will involve combining result sets, but I'm not 100% sure.
Example:
Photo 1 Tags: Blue, White, Red
Photo 2 Tags: Blue
Searching for a photo with tags of 'blue' returns both photos, searching for a photo with tags of 'blue' and 'white' returns only Photo 1. | Admittedly a bit ugly, but assuming that PhotosTags.Category holds the 'Blue', 'White', etc. values, try something along these lines.
```
SELECT p.*
From `Photos` AS p
WHERE p.`Description` LIKE "%test%" AND
AND Exists
( Select 1 FROM `PhotosTagsXref` AS pt
Inner JOIN `PhotosTags` AS t ON pt.`TagId` = t.`ID`
Where pt.`PhotoId` = p.`ID`
And t.Category = 'FirstCatToSearch'
)
AND Exists
( Select 1 FROM `PhotosTagsXref` AS pt
Inner JOIN `PhotosTags` AS t ON pt.`TagId` = t.`ID`
Where pt.`PhotoId` = p.`ID`
And t.Category = 'SecondCatToSearch'
)
AND Exists
( ...
)
...
``` | Supposing the requested set of tags is (`red`,`blue`) you can do:
```
SELECT * FROM `Photos`
WHERE `Description` LIKE "%test%"
AND `ID` IN (
SELECT pt.`PhotoId` FROM `PhotosTagsXref` AS pt
JOIN `PhotosTags` AS t ON pt.`TagId` = t.`ID`
WHERE t.Title in ('red','blue') /* your set here */
GROUP BY pt.`PhotoId` HAVING COUNT(DISTINCT t.`TagId`)=2 /* # of tags */
)
ORDER BY `Created` DESC LIMIT 20
```
Apparently, the tag set needs to be created dynamically, as well as its count.
*Note*: I'm counting `DISTINCT` `TagID`s because I don't know your table's constraints. If `PhotosTagsXRef` had a PK/UNIQUE (`PhotoId`,`TagId`) and `PhotosTags` had a PK/UNIQUE (`TagId`), then `COUNT(*)` would suffice. | MySQL Query show results based on multiple filters/tags | [
"",
"mysql",
"sql",
""
] |
I added an item to the `dropdownlist`, and when I select the item I added, it does not show up in `Label1`. Here is my code:
**ASPX**
```
<asp:Label ID="Label1" runat="server"></asp:Label>
<asp:DropDownList ID="drpOne" runat="server" AutoPostBack="true">
</asp:DropDownList>
```
**VB**
```
Protected Sub Page_Load(sender As Object, e As System.EventArgs) Handles Me.Load
con.Open()
If Not IsPostBack Then
Dim Sql = "SELECT College FROM College"
cmdAdd = New SqlDataAdapter(Sql, con)
Dim ds As New DataSet()
cmdAdd.Fill(ds)
drpOne.DataSource = ds
drpOne.DataTextField = "College"
drpOne.DataValueField = "College"
drpOne.DataBind()
drpOne.Items.Insert(0, New ListItem("Please select College", ""))
drpOne.SelectedItem.Value = "Please select College"
drpOne.Items.Insert(0, New ListItem("All", ""))
end if
End Sub
Protected Sub drpOne_SelectedIndexChanged(sender As Object, e As System.EventArgs) Handles drpOne.SelectedIndexChanged
Label1.Text = drpOne.SelectedItem.Value
End Sub
I haven't checked the code, but try it this way:
```
Protected Sub Page_Load(sender As Object, e As System.EventArgs) Handles Me.Load
con.Open()
If Not IsPostBack Then
Dim Sql = "SELECT College FROM College"
cmdAdd = New SqlDataAdapter(Sql, con)
Dim ds As New DataSet()
cmdAdd.Fill(ds)
drpOne.AppendDataBoundItems = true
        drpOne.Items.Add(New ListItem("Please select College", "")) 'Pass "Please select College" as the text and "" as the value of the ListItem
        drpOne.Items.Add(New ListItem("All", ""))
drpOne.DataSource = ds
drpOne.DataTextField = "College"
drpOne.DataValueField = "College"
drpOne.DataBind()
drpOne.Items.FindByText("Please select College").Selected = true
end if
End Sub
Protected Sub drpOne_SelectedIndexChanged(sender As Object, e As System.EventArgs) Handles drpOne.SelectedIndexChanged
Label1.Text = drpOne.SelectedItem.Value
End Sub
``` | ```
drpOne.SelectedItem.Text == "enter the value which you want to search"
``` | Get the value of selected item in dropdownlist | [
"",
"asp.net",
".net",
"sql",
"vb.net",
"drop-down-menu",
""
] |
I have a date function returning today's date minus a date of birth. How can I select or exec the result?
```
CREATE FUNCTION [dbo].[udfCalculateAge]
(
@DOB AS DATE,
    @EndDate as DATE = '2999-01-01' -- Default is today's date (see below) but any date can be used here
)
RETURNS TINYINT
AS
BEGIN
DECLARE @Result as TINYINT
-- IF DEFAULT VALUE (marked as 2999-01-01 as it doesn't accept functions) IS USED THEN USE TODAY'S DATE
    IF @EndDate = '2999-01-01'
SET @EndDate = GETDATE()
IF @DOB >= @EndDate -- trap errors
SET @Result = 0
ELSE
BEGIN
-- check if the person had its birthday in the specified year and calculate age
IF (MONTH(@EndDate)*100)+DAY(@EndDate) >= (MONTH(@DOB)*100)+DAY(@DOB)
SET @Result = DATEDIFF(Year,@DOB,@EndDate)
ELSE
SET @Result = DATEDIFF(Year,@DOB,@EndDate)-1
END
RETURN @Result
END
``` | You can simply return it in a select:
```
DECLARE @dob AS DATE = '19800101'
DECLARE @enddate AS DATE = '20200101'
SELECT [dbo].[udfCalculateAge](@dob, @enddate)
```
You can also use it when selecting from a table to return the value of the function together with other columns:
```
SELECT dob, enddate, [dbo].[udfCalculateAge](dob, enddate)
FROM table1
```
You can also use it in other clauses, e.g. `WHERE`, but this is not recommended for performance reasons.
If you want to use the function with the default parameter, use `default` as the 2nd parameter as you still need to specify all parameters:
```
SELECT [dbo].[udfCalculateAge](@dob, default)
``` | Is this what you're looking for?
```
DECLARE @dob date;
SET @dob = '1980-01-01';
SELECT udfCalculateAge(@dob, NULL);
``` | SELECT or exec result | [
"",
"sql",
"sql-server",
""
] |
I have, e.g., 17 rows and 2 columns in my table, like this:
```
ColA ColB
---- ----
X 1
X 2
X 3
X a
Y 1
Y 2
Y a
Z 4
Z 4
Z b
Q 1
Q 2
Q 3
Q a
W 4
W b
W 5
```
Is there a way to look for a pattern in colB of 1,2,3,a for the same value of ColA?
That would give me an output of X and Q. | Your sample data shows distinct rows. In that case, you can use this `GROUP BY` query.
```
SELECT y.ColA
FROM YourTable AS y
WHERE y.ColB In ('1','2','3','a')
GROUP BY y.ColA
HAVING Count(*) = 4;
```
If your actual data might include duplicate rows, you can start with `SELECT DISTINCT` in a subquery before applying the `GROUP BY`.
```
SELECT sub.ColA
FROM
(
SELECT DISTINCT y.ColA, y.ColB
FROM YourTable AS y
WHERE y.ColB In ('1','2','3','a')
) AS sub
GROUP BY sub.ColA
HAVING Count(*) = 4;
``` | *(I've assumed that your table is named [PatternData].)*
If you use Allen Browne's [`ConcatRelated`](http://allenbrowne.com/func-concat.html) function you can create a query to "string together" all of the [ColB] values for each distinct value of [ColA] like this...
```
SELECT
ColA,
ConcatRelated("ColB", "PatternData", "ColA=""" & ColA & """" , "ColB", "") AS ColB_values
FROM (SELECT DISTINCT ColA FROM PatternData)
```
...returning...
```
ColA ColB_values
---- -----------
Q 123a
W 45b
X 123a
Y 12a
Z 44b
```
Then you can use the above query as the basis for a query to find the [ColA] values with the desired pattern
```
SELECT ColA
FROM
(
SELECT
ColA,
ConcatRelated("ColB", "PatternData", "ColA=""" & ColA & """" , "ColB", "") AS ColB_values
FROM (SELECT DISTINCT ColA FROM PatternData)
)
WHERE ColB_values = "123a"
```
...returning...
```
ColA
----
Q
X
``` | How to find a specific pattern in some Access data? | [
"",
"sql",
"ms-access",
"pattern-matching",
""
] |
I just realized I've had a headache for years. Well, metaphorically speaking. In reality I was looking at my database structure and somehow just realized I never use transactions. Doh.
There's a lot of data on the internet about transactions (`begin transaction`, `rollback`, `commit`, etc.), but surprisingly not much detail about exactly why they are vital, and just exactly how vital?
I understand the concept of handling if something goes wrong. This made sense when one is doing multiple updates, for example, in multiple tables in one go, but this is bad practice as far as I know and I don't do this. All of my queries just update one table. If a query errors, it cancels, transaction or no transaction. What else could go wrong or potentially corrupt a one table update, besides my pulling the plug out of my server?
In other words, my question is,
### Exactly how vital is it that I implement transactions on all of my tables - am I fully blasphemous for not having them, or does it really matter that much?
### UPDATE
+1 to invisal, who pointed out that queries are automatically wrapped as transactions, which I did not know. He also pointed out multiple good references on the subject of my question. | > This made a lot of sense when one is doing multiple updates, for
> example, in multiple tables in one go. But **basically all of my queries
> just update one table at a time**. If a query errors, it cancels,
> transaction or no transaction.
In your case, it does nothing. A single statement has its own transaction itself. For more information you can read the existed question and answers:
* [What does a transaction around a single statement do?](https://stackoverflow.com/questions/1171749/what-does-a-transaction-around-a-single-statement-do)
* [Transaction necessary for single update query?](https://stackoverflow.com/questions/17856370/transaction-necessary-for-single-update-query?lq=1)
* [Do i need transaction for joined query?](https://stackoverflow.com/questions/9938930/do-i-need-transaction-for-joined-query?lq=1) | The most important property of a database is to keep your data reliably.
Database reliability is assured by conforming to [ACID](http://en.wikipedia.org/wiki/ACID) principles (Atomicity, Consistency, Isolation, Durability). In the context of databases, a single logical operation on the data is called a transaction. Without transactions, such reliability would not be possible.
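As a self-contained illustration of that atomicity guarantee (sketched with Python's built-in `sqlite3` rather than a full MySQL setup; the `accounts` table and values are made up), a failure part-way through lets you roll the whole change back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts VALUES (1, 500), (2, 100)")
conn.commit()

try:
    # Both updates form one logical transfer: they must succeed or fail together.
    conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")
    raise RuntimeError("simulated failure before the second update")
    conn.execute("UPDATE accounts SET balance = balance + 200 WHERE id = 2")
    conn.commit()
except RuntimeError:
    conn.rollback()  # undo the partial transfer

balances = [row[1] for row in conn.execute("SELECT id, balance FROM accounts ORDER BY id")]
print(balances)  # [500, 100] -- nothing was half-applied
```

Without the rollback, account 1 would have lost 200 with no matching credit anywhere.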
In addition to reliability, using transactions properly lets you improve performance of some data operations considerably. For example, you can start a transaction, insert a lot of data (say 100k rows), and only then commit. The server does not have to actually write to disk until commit is called, effectively batching data in memory. This improves performance a lot. | SQL - Why transactions? | [
"",
"sql",
""
] |
I am trying to sort alphabetically case insensitive using `COLLATE NOCASE`
but getting error
> ORA - 00933 SQL command not properly ended.
below is the query I am firing:
```
SELECT LPN.LPN_ID,
LPN.TC_ORDER_ID,
ORDERS.D_NAME,
ORDERS.PURCHASE_ORDER_NUMBER AS ORDER_PURCHASE_ORDER_NUMBER,
ORDERS.D_NAME AS D_NAME_2, LPN.LPN_NBR_X_OF_Y
FROM ORDERS ORDERS,
LPN LPN
WHERE ORDERS.ORDER_ID=LPN.ORDER_ID
ORDER BY ORDERS.D_NAME COLLATE NOCASE DESC
```
I checked here to try this but still getting error
[How to use SQL Order By statement to sort results case insensitive?](https://stackoverflow.com/questions/2413427/how-to-use-sql-order-by-statement-to-sort-results-case-insensitive)
Any suggestions please ? | Oracle does not support `COLLATE NOCASE` option of the `order by` clause. To be able to perform case-insensitive ordering you have two options:
1. Set `NLS_COMP='ANSI'` and `'NLS_SORT=BINARY_CI'`, `CI` suffix means case-insensitive, session or system wide by using `alter session` or `alter system` statement:
```
alter session set nls_comp='ANSI';
alter session set nls_sort='BINARY_CI';
with t1(col) as(
select 'A' from dual union all
select 'a' from dual union all
select 'b' from dual union all
select 'B' from dual
)
select *
from t1
order by col
```
Result:
```
COL
---
A
a
b
B
```
2. Change case of the character literal by using either `upper()` or `lower()` function.
```
with t1(col) as(
select 'A' from dual union all
select 'a' from dual union all
select 'b' from dual union all
select 'B' from dual
)
select *
from t1
order by upper(col)
```
result:
```
COL
---
A
a
b
B
```
---
**Edit**
> but i need the UpperCase to preceed any LowerCase eg. Alan, alan, Brian, brian, Cris
This is not the case-insensitive ordering, rather quite contrary in some sense. As one of the options you could do the following to produce desired result:
```
with t1(col) as(
select 'alan' from dual union all
select 'Alan' from dual union all
select 'brian' from dual union all
select 'Brian' from dual union all
select 'Cris' from dual
)
select col
from ( select col
, case
when row_number() over(partition by lower(col)
order by col) = 1
then 1
else 0
end as rn_grp
from t1
)
order by sum(rn_grp) over(order by lower(col))
```
Result:
```
COL
-----
Alan
alan
Brian
brian
Cris
``` | `COLLATE NOCASE` does not work with Oracle. Try this:
```
SELECT LPN.LPN_ID,
LPN.TC_ORDER_ID,
ORDERS.D_NAME,
ORDERS.PURCHASE_ORDER_NUMBER AS ORDER_PURCHASE_ORDER_NUMBER,
ORDERS.D_NAME AS D_NAME_2,
LPN.LPN_NBR_X_OF_Y
FROM orders orders,
lpn lpn
where orders.order_id=lpn.order_id
ORDER BY lower(orders.d_name) DESC;
``` | how to sort by case insensitive alphabetical order using COLLATE NOCASE | [
"",
"sql",
"oracle",
"sorting",
""
] |
```
"id" "parent" "name"
"1" "0" "Books"
"2" "1" "Crime Fiction"
"3" "2" "Death On the Nile"
```
From something like the above, how can I select the `name` of the parent row along with the name of the `child`. Here, the name of the child row will be supplied. I need to get the `name` of the parent.
**Desired output:**
```
@id = 3
Crime Fiction //This is the name of the parent row - in this case 2
Death on the Nile // This is the name of the row who's id was supplied.
```
How is selecting inside the same table done? | ```
select parent.name, child.name
from your_table child
left join your_table parent on child.parent = parent.id
where child.id = 3
``` | ```
select t1.name, t2.name as parent_name from tablename t1 join tablename t2
on t1.id=t2.parent where t1.id=3
``` | Select parent row from same table when a child row is supplied | [
"",
"mysql",
"sql",
""
] |
I'm trying to get all users that speak English **and** French based on the schema above. How can I achieve this?
I've tried with something like:
```
SELECT * FROM User
INNER JOIN UserLanguage on User.idUser = UserLanguage.idUser
INNER JOIN Language on UserLanguage.idLanguage = Language.idLanguage
WHERE Language.name = "FR" AND Language.name = "EN"
```
 | I'm using a subquery that counts the number of languages (that are either FR or EN) spoken by each user. It then returns all the id of all users that speaks both of these languages. The outer query returns all columns for each of these users:
```
SELECT Users.*
FROM Users
WHERE idUser IN (
SELECT UserLanguage.idUser
FROM
UserLanguage INNER JOIN Language
ON UserLanguage.idLanguage = Language.idLanguage
WHERE
Language.name IN ("FR", "EN")
GROUP BY
UserLanguage.idUser
HAVING
COUNT(DISTINCT Language.name)=2
)
``` | Change your conditional from
```
WHERE Language.name = "FR" AND Language.name = "EN"
```
to
```
WHERE (Language.name = "FR" OR Language.name = "EN")
```
You should never have an entry in the database that has two values for a single field, but if you use the "OR" operator, you should be selecting the entry if either value is equal. | SQL query - double condition on relational table | [
"",
"mysql",
"sql",
"database",
""
] |
Sorry for the weird title, couldn't write it better...

Take a look at the image above.
In MySQL, is it possible/feasible to have a column UNIQUE based on user_id?
Let me rephrase this: is it possible to have the order column values unique only for a specific user_id?
So that, for user_id = 1, once order = 1 exists there cannot be another order = 1, but for user_id = 2 I can have order = 1 again?
Or is it better/faster to control this in the PHP script that interfaces with MySQL? | You are asking a couple of independent questions.
(1) Yes, you can have unique orders only per customer. In this case you need to create a unique index using both columns.
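A runnable sketch of such a compound unique index (demonstrated with Python's built-in `sqlite3` for self-containment; in MySQL the idea is the same, e.g. a unique key on (user_id, order); the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE user_orders (user_id INTEGER, "order" INTEGER)')
# Uniqueness is enforced on the (user_id, order) *pair*, not on each column alone.
conn.execute('CREATE UNIQUE INDEX uq_user_order ON user_orders (user_id, "order")')

conn.execute("INSERT INTO user_orders VALUES (1, 1)")
conn.execute("INSERT INTO user_orders VALUES (2, 1)")  # same order, different user: allowed

try:
    conn.execute("INSERT INTO user_orders VALUES (1, 1)")  # duplicate pair: rejected
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```

Note that `order` is a reserved word in both SQLite and MySQL, so it has to be quoted (double quotes here, backticks in MySQL).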
(2) Is it better/faster to control this in php script that interfaces with mysql? The first question has to do with data integrity not performance. It will be faster to generate new word\_id's if that is an auto-increment column because if it is not, your PHP code needs to query the database to find the largest value and add one.
If you want to have unique word_id per customer, you will need to generate those in your PHP code. MySQL cannot generate new auto-increment values based on another column (because you are using InnoDB). | You can have a compound index that enforces uniqueness on its compound values like this:
```
CREATE UNIQUE INDEX `myUniqueIndex` ON mytable (user_id, order)
```
Any attempt to insert a row with a duplicate compound value will return an error you can check for | Is it possible to have mysql unique index for specific column value? | [
"",
"mysql",
"sql",
""
] |
Howsit everyone
I am trying to do something I think is impossible in MySQL.
We have a very large database from which we pull production reports for the products we create. At the moment we run multiple queries to get these results and then transfer the data manually to a table before it gets sent off to whoever needs the reports.
Is there an easier way to do this, i.e. a single query?
Examples of the queries I run:
```
a. SELECT count(id) from ProductA_T where created between '<date>' and '<date>' and productstatus = "<successful>";
b. SELECT count(id) from ProductB_T where created between '<date>' and '<date>' and productstatus = "<successful>";
c. SELECT count(id) from ProductC_T where created between '<date>' and '<date>' and productstatus = "<successful>";
```
Example of the result I'm looking for:
1. ProductA ProductB ProductC
2. 500 500 500 | You can put everything into a Select clause, like this:
```
Select
(SELECT count(id) from ProductA_T where created between '<date>' and '<date>' and productstatus = "<successful>") as CountA,
(SELECT count(id) from ProductB_T where created between '<date>' and '<date>' and productstatus = "<successful>") as CountB,
(SELECT count(id) from ProductC_T where created between '<date>' and '<date>' and productstatus = "<successful>") as CountC
```
That way you'll have an output like this:
* CountA CountB CountC
* 500 500 500
You can easily add more count columns later.
```
SELECT 'ProductA', count(id) from ProductA_T where created between '<date>' and '<date>' and productstatus = "<successful>"
UNION ALL
SELECT 'ProductB', count(id) from ProductB_T where created between '<date>' and '<date>' and productstatus = "<successful>"
UNION ALL
SELECT 'ProductC', count(id) from ProductC_T where created between '<date>' and '<date>' and productstatus = "<successful>";
```
Of course, you can just duplicate the select for other 4 products or any amount you wish ;-) | Multiple SQL counts into single result set | [
"",
"mysql",
"sql",
"sqlyog",
""
] |
I have a SQL query question.
I have a table with first name, last name and mobile numbers of clients.
```
Thor Prestby 98726364
Thor Prestby 98726364
Lars Testrud 12938485
Lise Robol 12938485
```
I want to find rows with the same mobile number that have different names. As you see above, Thor has 2 rows, and that's correct. Lars and Lise have the same mobile number, and that is what I want to find. | You pretty much outlined the necessary steps yourself in your question.
In a nutshell
* use a subselect to get all the distinct rows
* group on mobilenumber from this unique resultset from the subselect
* retain only those mobilenumbers that occur at least twice
**SQL Statement**
```
SELECT mobilenumber, COUNT(*)
FROM (
SELECT DISTINCT mobilenumber, firstname, lastname
FROM YourTable
) AS q
GROUP BY
mobilenumber
HAVING COUNT(*) > 1
``` | I am assuming that you are using MS SQL Server here but you could use:
```
Declare @t table
(
FirstName varchar(100),
LastName varchar(100),
Mobile bigint
)
Insert Into @t
values ('Thor','Prestby',98726364),
('Thor','Prestby', 98726364),
('Lars','Testrud',12938485),
('Lise','Robol', 12938485),
('AN','Other', 12345868)
Select Mobile
From @t
Group By Mobile
Having Count(*) > 1
EXCEPT
Select Mobile
From @t
Group By FirstName, LastName, Mobile
Having Count(*) > 1
``` | How do I find duplicate rows, and at the same time distinct? | [
"",
"sql",
""
] |
This is what I have so far to find all tables with more than 100 rows:
```
SELECT sc.name +'.'+ ta.name TableName
,SUM(pa.rows) RowCnt
FROM sys.tables ta
INNER JOIN sys.partitions pa
ON pa.OBJECT_ID = ta.OBJECT_ID
INNER JOIN sys.schemas sc
ON ta.schema_id = sc.schema_id
WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0) AND pa.rows >100
GROUP BY sc.name,ta.name,pa.rows
ORDER BY TABLENAME
```
Is there something similar where I can go through the database to find specific row data for a column within a table?
For example:
Where c.name = GUITARS and GUITARS = 'Fender'
Edit:
I do not have CREATE PROCEDURE permission OR CREATE TABLE
Just looking for any specific data under a certain column name, doesn't matter if it returns a lot of rows. | Please see my answer to [How do I find a value anywhere in a SQL Server Database?](https://stackoverflow.com/a/12306613/880904) where I provide a script to search all tables in a database.
A pseudo-code description of this would be be `select * from * where any like 'foo'`
It also allows you to search specific column names using standard `like` syntax, e.g. `%guitar%` to search column names that have the word "guitar" in them.
It runs ad-hoc, so you do not have to create a stored procedure, but you do need access to information\_schema.
I use this script in SQL 2000 and up almost daily in my DB development work. | I don't know whether my solution works for you or not, but instead I would use the query below to get all possible tables having guitar in any combination.
```
Select t.name, c.name
from sys.columns c
inner join sys.tables t
on c.object_id=t.object_id
Where c.name like '%guitar%'
```
It will likely give 20-25 tables, depending on the number of tables and the usage of guitar-related columns. You can scan the result set and largely identify the tables you need.
Now search for Fenders in your guessed items list.
I say so because I work on maintenance of an ERP app with 6000+ tables and 13000+ procedures, so whenever I need to find the related tables, I just use the same trick and it works.
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
I would like to produce a `MySql` query to be executed on the `sales_flat_order` table,
fetching a list of customers that made their first order between two dates.
```
select customer_id from sales_flat_order where
created_at between '1/1/2013' and '30/1/2013'
and (this is the first time that the customer has made an order.
no records for this customer before the dates)
```
Thanks | First, get every customer's earliest order date:
```
SELECT
customer_id,
MIN(entity_id) AS entity_id,
MIN(increment_id) AS increment_id,
MIN(created_at) AS created_at
FROM sales_flat_order
GROUP BY customer_id
```
Results:
```
+-------------+-----------+--------------+---------------------+
| customer_id | entity_id | increment_id | created_at |
+-------------+-----------+--------------+---------------------+
| 1 | 1 | 100000001 | 2012-04-27 22:43:27 |
| 2 | 15 | 100000015 | 2012-05-10 14:43:27 |
+-------------+-----------+--------------+---------------------+
```
**Note:** The above assumes that the smallest `entity_id` for a customer will match the earliest `created_at` for a customer.
Building on this, you can join with the orders:
```
SELECT o.* FROM sales_flat_order AS o
JOIN (
SELECT
customer_id,
MIN(entity_id) AS entity_id,
MIN(created_at) AS created_at
FROM sales_flat_order
GROUP BY customer_id
) AS first ON first.entity_id = o.entity_id
WHERE
first.created_at BETWEEN '2013-01-01' AND '2013-01-30';
``` | You could try something like this:
```
$collection = Mage::getModel('sales/order')->getCollection()
->addFieldToSelect('customer_id')
->addFieldToFilter('created_at', array(
'from' => '1/1/2013',
'to' => '30/1/2013',
))
->distinct(true);
``` | Magento MySql Query to fetch List of first time buyers | [
"",
"mysql",
"sql",
"magento",
"analytics",
""
] |
I have a procedure where I insert values into my table.
```
declare @fName varchar(50),@lName varchar(50),@check tinyint
INSERT INTO myTbl(fName,lName) values(@fName,@lName)
```
**EDITED:**
Now I want to check: if it inserted successfully, set @check = 0, otherwise set @check = 1. | You can use the `@@ROWCOUNT` server variable immediately after the insert query to check the number of rows affected by the insert operation.
```
declare @fName varchar(50) = 'Abcd',
@lName varchar(50) = 'Efgh'
INSERT INTO myTbl(fName,lName) values(@fName,@lName)
PRINT @@ROWCOUNT --> 0- means no rows affected/nothing inserted
--> 1- means your row has been inserted successfully
```
For your requirement, you could use a `Case` statement(as per comment):
```
--If you need @check as a bit type please change Int to bit
DECLARE @check Int = CASE WHEN @@ROWCOUNT = 0 THEN 1 ELSE 0 END
``` | You need to use [@@ROWCOUNT](http://technet.microsoft.com/en-us/library/ms187316.aspx)
It returns the number of rows affected by the last statement. If the number of rows is more than 2 billion, use `ROWCOUNT_BIG`.
> @@ROWCOUNT is both scope and connection safe.
>
> In fact, it reads only the last statement row count for that
> connection and scope.
>
> It’s safe to use @@ROWCOUNT in SQL Server even when there is a trigger
> on the base table. The trigger will not skew your results; you’ll get
> what you expect. @@ROWCOUNT works correctly even when NOCOUNT is set.
so you query should be:
```
declare @fName varchar(50), @lName varchar(50), @check tinyint = 0
...
INSERT INTO myTbl(fName,lName) values(@fName,@lName)
if @@ROWCOUNT>0
set @check = 1
``` | How to check if value is inserted successfully or not? | [
"",
"sql",
"sql-server",
""
] |
I have one table, analytics_metrics. I am trying to get the count from visitorsStatistics and pageviewsStatistics for the last x days; the date range can change.
```
id metrics count date
67 visitorsStatistics 15779 2013-10-10
69 pageviewsStatistics 282141 2013-10-10
90 visitorsStatistics 14588 2013-10-11
92 pageviewsStatistics 265042 2013-10-11
108 pageviewsStatistics 278523 2013-10-12
106 visitorsStatistics 15015 2013-10-12
122 visitorsStatistics 16474 2013-10-13
124 pageviewsStatistics 312752 2013-10-13
138 visitorsStatistics 16829 2013-10-14
140 pageviewsStatistics 320614 2013-10-14
85 pageviewsStatistics 67976 2013-10-15
83 visitorsStatistics 5452 2013-10-15
```
I am looking to get an output like this:
```
visitorsStatistics pageviewsStatistics
15779 282141
14588 265042
15015 278523
16474 312752
16829 320614
5452 67976
```
I have tried different queries for more than 4 hours now; I just can't seem to find the right way to do it :-(.
Here is what I've got so far:
```
SET @fromDate = '2013-10-10';
set @tillDate = '2013-10-11';
SELECT
*
/* ga_visits.count as visits,
ga_pageviews.count as pageviews
*/
FROM analytics_metrics as ga_visits
LEFT JOIN analytics_metrics as ga_pageviews on (ga_pageviews.date BETWEEN @fromDate AND @tillDate AND ga_pageviews.metrics = 'pageviewsStatistics')
WHERE ga_visits.date BETWEEN @fromDate AND @tillDate AND ga_visits.metrics = 'visitsStatistics'
```
If I use this query for one day it works fine, but not for a date range.
Hope someone can help.
Thank you in advance. | If I got that correctly, you want to combine paired rows within one date, like:
```
SELECT
l.count AS visitorsStatistics,
r.count AS pageviewsStatistics
FROM
(SELECT * FROM analytics_metrics WHERE metrics='visitorsStatistics') AS l
LEFT JOIN
(SELECT * FROM analytics_metrics WHERE metrics='pageviewsStatistics') AS r
ON l.date=r.date
WHERE
l.date BETWEEN @fromDate AND @tillDate
```
-see this [fiddle](http://sqlfiddle.com/#!2/4ae29/2) | Try this out:
```
SELECT
sum(if(metrics = 'visitorsStatistics', `count`, 0)) visitorsStatistics,
sum(if(metrics = 'pageviewsStatistics', `count`, 0)) pageviewsStatistics
FROM analytics_metrics am
WHERE <WHATEVER YOU NEED>
GROUP BY `date`
```
Fiddle [here](http://sqlfiddle.com/#!2/999bb/1). | Mysql join using date between | [
"",
"mysql",
"sql",
""
] |
I am having trouble SUMming time values in a varchar/nvarchar data type field.
This is the format I'm trying to SUM: (HH:mm:ss) | I have figured it out:
```
select convert(decimal(18, 2),
           convert(decimal(18, 2), ppt.hour)
         + convert(decimal(18, 2), convert(decimal(18, 2), ppt.minute / 60))
         + convert(decimal(18, 2), convert(decimal(18, 2), ppt.second / 3600))) AS Hours,
       ppt.[column]
from (select SUM(convert(decimal(2), left([column], 2))) AS hour,
             SUM(convert(decimal(2), left(right([column], 5), 2))) AS minute,
             SUM(convert(decimal(2), right([column], 2))) AS second,
             [column]
      FROM [table]
      GROUP BY [column]) AS ppt
``` | Here you go:
```
create table tblTime (dt datetime)
insert tblTime values
('00:25:00'),
('00:20:00')
```
Here is the calculation:
```
Select cast(sum(datediff(second,0,dt))/3600 as varchar(12)) + ':' +
right('0' + cast(sum(datediff(second,0,dt))/60%60 as varchar(2)),2) +
':' + right('0' + cast(sum(datediff(second,0,dt))%60 as varchar(2)),2)
from tblTime
```
Output: `0:45:00`
"",
"sql",
"time",
"sum",
"varchar",
"nvarchar",
""
] |
I have a database column containing an integer value that represents a system's up time in seconds. I'd really like a query to show me that up time in an easy-to-read format of day(s) hour(s) minute(s), but I'm not quite sure how to do it. A lot of examples I've found use parameters, but rarely show how to use this in a select.
I need the time to be the same as what's displayed on a website too. I tried one query earlier and it added days and removed minutes. Can anyone help me out?
Source data:
```
PDT0014 6141
PDT0008 4990
PDT0024 840227
PDT0033 2301
PDT0035 5439
PDT0005 3434
PDT0019 5482
```
Sample code:
```
SELECT tblAssets.AssetName,
(case when tblAssets.Uptime> (24*60*60)
then
cast(datepart(day,datediff(dd, 0, dateadd(second, tblAssets.Uptime, 0))) as varchar(4))
+ ' Day(s) ' + convert(varchar(2), dateadd(second, tblAssets.Uptime, 0), 108) +' Hour(s)'
else
convert(varchar(5), dateadd(second, tblAssets.Uptime, 0), 108) + ' Hour(s) Minute(s) '
end) AS Uptime
FROM tblAssets
```
Desired Query Output:
```
PDT0014 01:42 Hour(s) Minute(s)
PDT0008 01:23 Hour(s) Minute(s)
PDT0024 10 Day(s) 17 Hour(s)
PDT0033 00:38 Hour(s) Minute(s)
PDT0035 01:30 Hour(s) Minute(s)
PDT0005 00:57 Hour(s) Minute(s)
PDT0019 01:31 Hour(s) Minute(s)
``` | I tend to use:
```
CAST(FLOOR(seconds / 86400) AS VARCHAR(10))+'d ' +
CONVERT(VARCHAR(5), DATEADD(SECOND, Seconds, '19000101'), 8)
```
The top part just gets your days as an integer, the bottom uses SQL-Server's convert to convert a date into a varchar in the format HH:mm:ss after converting seconds into a date.
e.g.
```
SELECT Formatted = CAST(FLOOR(seconds / 86400) AS VARCHAR(10))+'d ' +
CONVERT(VARCHAR(5), DATEADD(SECOND, Seconds, '19000101'), 8),
Seconds
FROM ( SELECT TOP 10
Seconds = (ROW_NUMBER() OVER (ORDER BY Object_ID) * 40000)
FROM sys.all_Objects
ORDER BY Object_ID
) S
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!3/9dbc1/2)**
*N.B. Change* `CONVERT(VARCHAR(5), DATEADD(..` *to* `CONVERT(VARCHAR(8), DATEADD(..` *to keep the seconds in the result*
**EDIT**
If you don't want seconds and need to round to the nearest minute rather than truncate you can use:
```
SELECT Formatted = CAST(FLOOR(ROUND(Seconds / 60.0, 0) * 60 / 86400) AS VARCHAR(10))+'d ' +
CONVERT(VARCHAR(5), DATEADD(SECOND, ROUND(Seconds / 60.0, 0) * 60, '19000101'), 8),
Seconds
FROM ( SELECT Seconds = 3899
) S
```
I have just replaced each reference to the column `seconds` with:
```
ROUND(Seconds / 60.0, 0) * 60
```
So before doing the conversion, the seconds value is rounded to the nearest minute. | Depending on the output you want:
```
DECLARE @s INT = 139905;
SELECT CONVERT(VARCHAR(12), @s /60/60/24) + ' Day(s), '
+ CONVERT(VARCHAR(12), @s /60/60 % 24)
+ ':' + RIGHT('0' + CONVERT(VARCHAR(2), @s /60 % 60), 2)
+ ':' + RIGHT('0' + CONVERT(VARCHAR(2), @s % 60), 2);
```
Result:
```
1 Day(s), 14:51:45
```
Or:
```
DECLARE @s INT = 139905;
SELECT
CONVERT(VARCHAR(12), @s /60/60/24) + ' Day(s), '
+ CONVERT(VARCHAR(12), @s /60/60 % 24) + ' Hour(s), '
+ CONVERT(VARCHAR(2), @s /60 % 60) + ' Minute(s), '
+ CONVERT(VARCHAR(2), @s % 60) + ' Second(s).';
```
Result:
```
1 Day(s), 14 Hour(s), 51 Minute(s), 45 Second(s).
```
You can replace `60/60/24` with `86400` etc. but I find it better self-documenting if you leave in the /seconds/minutes/hours calculations. And if you are going against a table, just use `column_name` in place of `@s`. | SQL server, Converting Seconds to Minutes, Hours, Days | [
"",
"sql",
"sql-server-2008",
""
] |
I've made a stored procedure and got the error message below, and I don't know why. I've been looking around for answers, including with some other guys here at the office, but they're all unsure of the problem. I'm hoping someone here has had the same problem and knows the solution.
```
Msg 116, Level 16, State 1, Procedure Name_Stored_Procedure,
Line 113 Only one expression can be specified in the select list
when the subquery is not introduced with EXISTS.
```
Here is my Code
```
Set @SQLstring =
'Update #TempTable set Col1' + case when len(Convert(Varchar, (4+@counter)))=1
then '0' else '' end
+ Convert(Varchar,(4+@counter)) + '=''' +
(select @Year, @Month,
Convert(Varchar,count(distinct Table.Column1))
from Databse.Table
where DATEPART(yy,Time) = @Year
and DATEPART(mm,Time) = @Month
and Table.Column2 = @Column2 and Column3 in ('X','Z','Y - A'))
+''' where row = ' + CONVERT(varchar,10+@somevariable * 12)
exec('' + @SQLstring +'')
``` | If you're going to build a string of SQL and execute it with dynamic SQL, then you need to treat it as a string
```
Set @SQLstring =
'Update #TempTable set Col'
+ case when len(Convert(Varchar, (4+@counter)))=1 then '0' else '' end
...
```
In your inner select, remove the `@year, @month` from the results
```
+ ( select Convert(Varchar,count(distinct Table.Column1)) from databse.Table....
``` | Compute the year, month, and count from the part below in separate select queries.
```
(select @Year, @Month,
Convert(Varchar,count(distinct Table.Column1))
from Databse.Table
where DATEPART(yy,Time) = @Year
and DATEPART(mm,Time) = @Month
and Table.Column2 = @Column2 and Column3 in ('X','Z','Y - A'))
``` | Subquery is not introduced with exists, stored procedure | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
In a perfect world for this type of setup, we would have an integer column that expects only numbers;
But what if you have a `varchar` column and you want to add a `WHERE` clause that said something like this:
`WHERE <value> is NOT a number`
In essence, you are selecting all rows that contain any characters that are NOT ONLY numbers.
This is for MySQL. | Try this:
```
SELECT * FROM myTable WHERE concat('',col1 * 1) != col1
```
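The reason this works: `col1 * 1` forces an implicit numeric conversion, and concatenating the result back to a string only matches the original when the value was purely numeric. SQLite's implicit coercion is close enough to MySQL's to illustrate the idea; the sketch below uses Python's sqlite3 rather than MySQL itself, so treat edge cases with care:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (col1 TEXT)")
conn.executemany("INSERT INTO myTable VALUES (?)",
                 [("123",), ("12x",), ("abc",), ("45.6",)])

# '' || (col1 * 1) coerces col1 to a number and back to text;
# the round-trip only matches when col1 was a pure number.
not_numeric = conn.execute(
    "SELECT col1 FROM myTable WHERE ('' || (col1 * 1)) != col1"
).fetchall()
print(not_numeric)  # only the rows that are not pure numbers
```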
[demo here](http://www.sqlfiddle.com/#!2/2984dc/1) | [`REGEXP`](http://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_regexp) or [`RLIKE`](http://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_regexp) are your friends:
```
SELECT * FROM `MyTable` WHERE `Column` RLIKE '^[^0-9]*$';
```
---
**UPDv1:**
You may use different regexes to detect negative integers:
```
SELECT
'-2' RLIKE '^[^0-9]*$', -- fails
'-1' RLIKE '-[0-9]'; -- succeeds
```
*For example:*
```
SELECT * FROM `MyTable` WHERE `Column` RLIKE '-[0-9]' OR `Column` RLIKE '^[^0-9]*$';
```
---
Tested with this:
```
SELECT
*
FROM
(
SELECT 'abs 12 x' as `Column`
UNION ALL
SELECT 12
UNION ALL
SELECT -2
UNION ALL
SELECT '-x'
) as `sub`
WHERE
`Column` RLIKE '-[0-9]'
OR
`Column` RLIKE '^[^0-9]*$';
```
Output:
```
-2
-x
``` | Checking rows that are not numbers in a varchar column for MySQL Query | [
"",
"mysql",
"sql",
""
] |
I'm new to SQL Server. I googled the use of COALESCE and found out that it is, in other words, a replacement for ISNULL.
I came across a piece of code posted in the [forum](http://www.mssqltips.com/sqlservertip/1521/the-many-uses-of-coalesce-in-sql-server/) about the different uses of coalesce.
```
use adventureworks
DECLARE @DepartmentName VARCHAR(1000)
SELECT @DepartmentName = COALESCE(@DepartmentName,'') + Name + ';'
FROM HumanResources.Department
WHERE (GroupName = 'Executive General and Administration')
SELECT @DepartmentName AS DepartmentNames
```
which returns the result in a single line.
So why doesn't SQL support string concatenation by default, as in .NET, like below?
```
DECLARE @DepartmentName VARCHAR(1000)
**SELECT @DepartmentName = @DepartmentName + Name + ';'**
FROM HumanResources.Department
WHERE (GroupName = 'Executive General and Administration')
SELECT @DepartmentName AS DepartmentNames
```
What is the use of coalesce in the line below?
```
SELECT @DepartmentName = COALESCE(@DepartmentName,'') + Name + ';'
```
and why
```
**SELECT @DepartmentName = @DepartmentName + Name + ';'**
FROM HumanResources.Department
WHERE (GroupName = 'Executive General and Administration')
```
is not working? | add `set @DepartmentName = ''` after `DECLARE @DepartmentName VARCHAR(1000)` and you won't need the coalesce statement.
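The point of the COALESCE (or the `set @DepartmentName = ''` initialization) is that concatenating anything with NULL yields NULL, so an uninitialized variable would swallow every name appended to it. A quick illustration of the NULL-propagation rule using Python's sqlite3 (the rule is the same in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Concatenating NULL with a string yields NULL ...
(broken,) = conn.execute("SELECT NULL || 'Executive;'").fetchone()
print(broken)   # None: the whole concatenation collapsed to NULL

# ... while COALESCE substitutes '' first, so the concatenation survives.
(fixed,) = conn.execute("SELECT COALESCE(NULL, '') || 'Executive;'").fetchone()
print(fixed)    # Executive;
```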
edit: updated after comment clarified things for me | `coalesce` simply returns the value of **the 1st `non-NULL` argument** in its parameter list.
e.g.
```
1.select coalesce(NULL,0,2,1);
```
will return 0 as **0 doesn't mean NULL**.
```
2.select coalesce(id,emp_id,0) from tab1;
```
In this case attribute-`id` will be returned if it is `NOT NULL` otherwise if `emp_id` is `NOT NULL` `emp_id` will be returned else 0 will be returned.
**In this case, for concatenation you can simply use either the `+` operator or the `concat()` function**. But here `coalesce` is used for the case when **`DepartmentName = NULL`**, because if you concatenate or do any operation with **`NULL`**, the result will be **`NULL`**. Hence, to use a **blank** (i.e. `''`) in place of `NULL`, `coalesce()` has been used.
In this case `coalesce` has been used as `DepartmentName` is `NULL` at the moment when it is declared. Use the syntax **Moho** has given in his answer to substitute the use of `coalesce`. | what is the use of coalesce in sql? | [
"",
"sql",
"sql-server-2008-r2",
""
] |
According to multiple sources ([Microsoft](http://technet.microsoft.com/en-us/library/ms181627.aspx), [SQL Server Administration Blog | zarez.net](http://zarez.net/?p=1581)), adding comments to SQL, and doing so with SSMS, is a piece of cake. And for the most part they are probably right. But when I log in and create a view I have been unable to leave comments in it.
If I use two hyphens (--) the comments get deleted when I save the view, it does not matter if I am creating it from scratch or updating a view that I created some time ago.
If I try the `Edit -> Advanced -> Click ‘Comment Selection’` the `Advanced` option is not displayed (see screen shot)

Am I missing something or is it just impossible to leave comments in a SQL Server view? | Stop using the clunky and buggy view designer.
For a new view, just open a new query window and start typing. This will work fine:
```
USE MyDatabase;
GO
CREATE VIEW dbo.MyView
AS
-- this view is cool
SELECT whatever FROM dbo.wherever;
```
For an existing view, right-click the view and choose Script As > Alter instead. This will give you a much better experience (minus the ability to check and uncheck columns etc).

The various visual designers may look like they'll save you time (and the intentions were certainly good), but the implementation is terrible, there are all kinds of bugs and limitations, and they really haven't been improved or even touched in years. | When you're creating database objects there are two places you can store comments. Before the object definition (and after any GO statements) and inside the object itself.
```
USE GODUCKS;
-- This comment will not be preserved
GO
-- This comment precedes the view definition
-- This too
CREATE VIEW dbo.CommentedView
AS
-- This comment lives inside the view
SELECT 1 AS MyColumn;
```
Hit F5 and then script the view back out. You can see where the comments have/have not been preserved.
```
USE [GODUCKS]
GO
/****** Object: View [dbo].[CommentedView] Script Date: 10/15/2013 8:12:49 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- This comment precedes the view definition
-- This too
CREATE VIEW [dbo].[CommentedView]
AS
-- This comment lives inside the view
SELECT 1 AS MyColumn;
GO
``` | How do you leave comments in SQL Server 2008 R2 view with SSMS? | [
"",
"sql",
"sql-server-2008",
"comments",
"ssms",
""
] |
What is the best way to obtain the numeric value from a guid column?
I am trying this line but I am getting:
```
ORA-00904: "HASHBYTES": invalid identifier
00904. 00000 - "%s: invalid identifier"
```
Query is below:
```
SELECT HASHBYTES('MD5',CAST(prod AS varchar2(30)))
FROM PRODS;
```
Please advise. | Oracle doesn't have the `HASHBYTES` function. You can convert hex to decimal using `TO_NUMBER(hex-string, 'xx')`, but you must have enough `x` characters in your format string to cover the input value. In the case of a GUID, that's 32 `x` characters:
```
SELECT TO_NUMBER(prod, RPAD('x', 32, 'x'))
FROM PRODS;
``` | This will convert MD5 hash (hex) to decimal number
```
SELECT CONV('MD5', 16, 10)
FROM PRODS;
``` | Converting guids to numeric value | [
"",
"sql",
"database",
"oracle",
"primary-key",
"guid",
""
] |
Let's say I have a table with the following columns:
field1 | field2 | field3 | field4
I want to insert multiple rows in this table, but the values of field1, field2 and field3 are identical for each row. Only the value of field4 will change.
Obviously I could insert each row separately but the resulting query would be a bit ugly, and I'm wondering if there is a more efficient / elegant way to do it.
I thought of something like this for example:
```
insert into my_table (field1, field2, field3, field4) values (foo, bar, baz, ('value one','value two','value three','value four'))
```
And the result would be:
```
field1 | field2 | field3 | field4
foo | bar | baz | value one
foo | bar | baz | value two
foo | bar | baz | value three
foo | bar | baz | value four
```
In practice, the 'field4' column is a string type, and the different values are known when I write the query. There's no need to get them from a table or anything (although if that's possible, I'm interested in a solution that can do it).
Is this possible, or will I have to write each insert separately?
EDIT: I've changed the question to be clearer about the data type of the changing column (general textual data) and where the data comes from. Apologies to those who had already answered before I added this information.
Thanks. | You could use a variation on Nicholas Krasnov's answer with a `case` statement to set the string values:
```
insert into my_table(field1, field2, field3, field4)
select 'foo', 'bar', 'baz',
case level
when 1 then 'value one'
when 2 then 'value two'
when 3 then 'value three'
when 4 then 'value four'
end
from dual
connect by level <= 4;
select * from my_table;
FIELD1 FIELD2 FIELD3 FIELD4
------ ------ ------ --------------------
foo bar baz value one
foo bar baz value two
foo bar baz value three
foo bar baz value four
```
[SQL Fiddle](http://sqlfiddle.com/#!4/7309b/1).
Adding more rows/values would just require a change to the `level` limit and extra `when` clauses to match. ([like this](http://sqlfiddle.com/#!4/e021d/1)). You could also have an `else` with a warning in case you get a mismatch in the numbers. There's no special significance to which string value goes with which `level` value, incidentally. | The simplest way to accomplish this would be taking advantage of the `connect by` clause of `select` statement to generate as many synthetic rows as you need.
Suppose `field1` to `field3` are of `varchar2` data type and the `field4` is of number data type, as the sample of data and `insert` statement you have provided imply, then you could write the following `insert` statement
```
Insert into your_table_name(field1, field2, field3, field4)
select 'foo'
, 'bar' /* static string literals */
, 'baz'
, level /* starts at 1 and will be increased by 1 with each iteration */
from dual
connect by level <= 5 /* regulator of number of rows */
```
Result:
```
FIELD1 FIELD2 FIELD3 FIELD4
----------- ----------- ----------- ----------
foo bar baz 1
foo bar baz 2
foo bar baz 3
foo bar baz 4
foo bar baz 5
```
**Edit**:
If you want to literally see `value one`, `value two` and so on as values of the `fiedl4` column, you could change the above `insert` statement as follows:
```
Insert into your_table_name(field1, field2, field3, field4)
select 'foo'
, 'bar'
, 'baz'
, concat('value ', to_char(to_date(level, 'J'), 'jsp'))
from dual
connect by level <= 5
```
Result:
```
FIELD1 FIELD2 FIELD3 FIELD4
------ ------ ------ -------------
foo bar baz value one
foo bar baz value two
foo bar baz value three
foo bar baz value four
foo bar baz value five
```
If you want to populate the `field4` with absolutely random generated string literal you can use `dbms_random` package and `string()` function specifically:
```
Insert into your_table_name(field1, field2, field3, field4)
select 'foo'
, 'bar'
, 'baz'
, dbms_random.string('l', 7)
from dual
connect by level <= 5
```
Result:
```
FIELD1 FIELD2 FIELD3 FIELD4
------ ------ ------ --------
foo bar baz dbtcenz
foo bar baz njykkdy
foo bar baz bcvgabo
foo bar baz ghxcavn
foo bar baz solhgmm
``` | Insert multiple rows into a table with only one value changing | [
"",
"sql",
"oracle",
""
] |
What is the difference between an INNER join and an OUTER join? I am using two tables and want to fetch data from both; which type of join should I use to solve this? | Inner join - An inner join using either of the equivalent queries gives the intersection of the two tables, i.e. the rows they have in common.
Left outer join -
A left outer join will give all rows in A, plus any common rows in B.
Full outer join -
A full outer join will give you the union of A and B, i.e. All the rows in A and all the rows in B. If something in A doesn't have a corresponding datum in B, then the B portion is null, and vice versa.
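These definitions are easy to verify on a tiny example. Below is a sketch using Python's sqlite3, just for illustration; note that older SQLite versions lack FULL OUTER JOIN, so only the inner and left outer cases are shown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INT);
    CREATE TABLE b (id INT);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
""")

# Inner join: only the rows the two tables have in common.
inner = conn.execute(
    "SELECT a.id, b.id FROM a JOIN b ON a.id = b.id ORDER BY a.id"
).fetchall()
print(inner)  # [(2, 2)]

# Left outer join: every row of a, with NULL where b has no match.
left = conn.execute(
    "SELECT a.id, b.id FROM a LEFT JOIN b ON a.id = b.id ORDER BY a.id"
).fetchall()
print(left)   # [(1, None), (2, 2)]
```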
check [this](https://stackoverflow.com/questions/38549/difference-between-inner-and-outer-join) | This is the best and simplest way to understand joins:

**Credits go to the writer of this article [HERE](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins)** | What is difference between INNER join and OUTER join | [
"",
"sql",
"join",
""
] |
I have a little problem with my MySQL query:
I have two tables:
Table **timeline**
```
id | date | text
1 2013-10-13 Hello
```
Table **reps**
```
id | date | text
1 2013-10-12 Its me again
1 2013-10-11 What?
1 2013-10-10 Lorem ipsum
```
What I am doing is a UNION ALL of timeline and reps. The first row should always be the row from timeline (it's always one row), and then all rows from the reps table, but in DESC order.
My query is the following (it works OK except for the ORDER BY):
```
select id,date,text from timeline UNION ALL select * from reps order by date desc
```
Think of something like a comment (it sits on top) with replies to the comment in descending order, newest first.
Thank you in advance. | Put the UNION in a subquery:
```
SELECT id, date, text
FROM (SELECT id, date, text, 1 AS priority
FROM timeline
UNION ALL
SELECT *, 2 AS priority
FROM reps) u
ORDER BY priority, date DESC
``` | Your query is right; you just need to wrap the second query in parentheses, like this, so the ORDER BY applies only to the second query. Without the parentheses it orders the combined result of the two queries:
```
select id,date,text from timeline
UNION all
(select * from reps order by date desc)
``` | Order by date a select query on UNION ALL | [
"",
"mysql",
"sql",
"sql-order-by",
"union-all",
""
] |
I would like to know if there's a way for a mysql query to return the username of the user that issues the query.
Is this possible? | Try the `CURRENT_USER()` function. This returns the username that MySQL used to authenticate your client connection. It is this username that determines your privileges.
This may be different from the username that was sent to MySQL by the client (for example, MySQL might use an anonymous account to authenticate your client, even though you sent a username). If you want the username the client sent to MySQL when connecting use the `USER()` function instead.
The value indicates the user name you specified when connecting to the server, and the client host from which you connected. The value can be different from that of CURRENT\_USER().
<http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_current-user> | Try to run either
```
SELECT USER();
```
or
```
SELECT CURRENT_USER();
```
It can sometimes be different: [USER()](http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_user) returns the login you attempted to authenticate with, and [CURRENT\_USER()](http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_current-user) returns how you were actually allowed to authenticate. | Show the current username in MySQL? | [
"",
"sql",
"mysql",
"database",
"mysqlcommand",
"mysql-command-line-client",
""
] |
I am thinking about and exploring options for designing the database for my new application. In general, I will have registered users and info about them. They will be able to do some things in the app, and that data will be in the same DB as the users data (so I can have shared FKs and such).
But, then I plan to have second database that will be in logic totally independent of the first database except it will share userID as FK.
I don't know whether I should even put that second logic in an extra DB or have everything in the same database. I plan to have a subdomain in my app for the second logic (it is like an app within an app), but what if I discover they should share more data? Will that cross-querying hurt my performance? And is that actually the way to go; is there a real reason to separate databases? | As soon as you have two databases you have potential complexity. You have not given any particular reason why you need two databases. So keep it simple until you have a reason.
An example of what folks do: have a "current" database, small, holding just the data needed right now. That might be where orders are taken and fulfilled. Once the data is no longer current, say some days or weeks after the order is filled, move the data to a "historic" database. There marketing and management folks can look at overall trends in the history without affecting performance of the "current" database, whose performance might be critical to keeping your customers happy.
As an example of complexity: any time you have two databases you need to consider consistency between them; this is much harder to ensure than it might appear. Databases do offer two-phase transactional capabilities, or you can devise batch processes, but there are always subtleties that are hard to catch. | I also agree that unless the volume of your data is going to be huge (judging by the question, that doesn't seem to be the case here), you can use a single database to store your data without performance issues.
For "visual" separation of the data structure, you can always create tables in two schemas of a single database. | Database design - Sharing data between two databases? | [
"",
"sql",
"sql-server",
"database",
"database-design",
""
] |
Ok, I tried to simplify my question by abstracting away the details but I'm afraid I wasn't clear and didn't meet moderator requirements. So I will post the full query with my problem in more detail and the actual query I am struggling with. If the question is still inadequate, could you please comment with specifics about what is unclear and I will do my best to clarify.
First, here is the current query that returns all assignment rows for each bed:
```
SELECT
beds.bed_id,
beds.bedstatus,
beds.position as bed_position,
rooms.room_id,
rooms.room,
wings.wing_id,
wings.name as wing_name,
buildings.building_id,
buildings.name as building_name,
assignments.assignment_id,
assignments.student_id,
assignments.assign_dt,
assignments.assigned_by,
assignments.assignment_status,
assignments.expected_arrival_dt as arrival_dt,
assignments.room_charge_type,
students.first_name,
students.last_name,
meal_plans.name as meal_plan_name,
room_rates.rate_name
FROM
beds
LEFT JOIN
rooms ON (beds.room_id = rooms.room_id)
LEFT JOIN
wings ON (rooms.wing_id = wings.wing_id)
LEFT JOIN
buildings ON (wings.building_id = buildings.buildings_id)
LEFT JOIN assignments ON
((beds.bed_id=assignments.bed_id) AND (term_id = @term_id))
LEFT JOIN
students ON (assignments.student_id = students.student_id)
LEFT JOIN
meal_plans ON (assignments.meal_plan_id = meal_plans.meal_plan_id)
LEFT JOIN
room_rates ON (room_rate_id = room_rates.room_rate_id)
WHERE
(
(rooms.room IS NOT NULL) AND
(rooms.assignable = 1) AND
(buildings.active = 1) AND
(buildings.building_id = @building_id)
)
ORDER BY rooms.room;
```
The problem is that there may be multiple rows in the "assignments" table for each room distinguished by the "assignment\_status" field and I want a single row for each assignment. I want to determine which assignment row to select based on the value in assignment\_status. That is if the assignment status is "active", I want that row, otherwise, if there is a row with status "waiting approval" then I want that row, etc...
Barmar's suggestion is given here:
```
LEFT JOIN (SELECT *
FROM OtherTable
WHERE <criteria>
ORDER BY CASE status
WHEN 'Active' THEN 1
WHEN 'Waiting Approval' THEN 2
WHEN 'Canceled' THEN 3
...
END
LIMIT 1) other
```
This was very helpful and I attempted this approach:
```
SELECT
beds.bed_id,
beds.bedstatus,
beds.position as bed_position,
rooms.room_id,
rooms.room,
wings.wing_id,
wings.name as wing_name,
buildings.building_id,
buildings.name as building_name,
assign.assignment_id,
assign.student_id,
assign.assign_dt,
assign.assigned_by,
assign.assignment_status,
assign.expected_arrival_dt as arrival_dt,
assign.room_charge_type,
students.first_name,
students.last_name,
meal_plans.name as meal_plan_name,
room_rates.rate_name
FROM
beds
LEFT JOIN
rooms ON (beds.room_id = rooms.room_id)
LEFT JOIN
wings ON (rooms.wing_id = wings.wing_id)
LEFT JOIN
buildings ON (wings.building_id = buildings.buildings_id)
LEFT JOIN (SELECT *
FROM assignments
WHERE ((assignments.bed_id==beds.bed_id) AND (term_id = @term_id))
ORDER BY CASE assignment_status
WHEN 'Active' THEN 1
WHEN 'Waiting Approval' THEN 2
WHEN 'Canceled' THEN 3
END
LIMIT 1) assign
LEFT JOIN
students ON (assign.student_id = students.student_id)
LEFT JOIN
meal_plans ON (assign.meal_plan_id = meal_plans.meal_plan_id)
LEFT JOIN
room_rates ON (room_rate_id = room_rates.room_rate_id)
WHERE
(
(rooms.room IS NOT NULL) AND
(rooms.assignable = 1) AND
(buildings.active = 1) AND
(buildings.building_id = @building_id)
)
ORDER BY rooms.room;
```
But I realized, the problem here is that OtherTable (assignments) is joined to the parent query based on a FK:
```
((beds.bed_id=assignments.bed_id) AND (term_id = @term_id))
```
So I can't do the subselect as the beds.bed\_id isn't in scope for the subselect. So as Barmar's comment indicates the join criteria needs to be outside the subselect--but I'm having trouble figuring out how to both restrict the results to a single row per room and move the join outside the subselect. I'm wondering if travelboy's suggestion to use GROUP BY may be more fruitful, but haven't been able to determine how the grouping should be done.
Let me know if I can provide additional clarification.
Original Question:
I need from Table A to do a LEFT JOIN on a SINGLE row in another table, Table B meeting certain criteria (there may be multiple or no rows in Table B that meet the criteria). If there are multiple rows I want to select which row in B to join based on the value of a field in Table B. For example, if there is a row in B with status column='Active', I want that row, if not, if there is a row with status='Waiting Approval', I want that row, if there is a row with status='Canceled', I want that row, etc... Can I do this without a sub select? With a sub select? | In some cases (but not in all cases) you can do it without a sub-select. You would need to `GROUP BY` a unique field in table A, typically an ID. This ensures that you get only one (or none) row from table B. However, selecting the row you want is the tricky part. You need an aggregating function such as MAX(). If the field in B is a number, that's easy to do. If not, you can apply some SQL functions on the fields in B to calculate something like a score to sort by. For example, `Active` could correspond to a higher value than `Cancelled` etc. That will work without a sub-select and likely be faster on big data sets.
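The score idea from the answer above can be sketched like this: map each status to a numeric rank and take the best (minimum) rank per group. The sketch uses Python's sqlite3 rather than MySQL, and the table/column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE b (a_id INT, status TEXT);
    INSERT INTO b VALUES
        (1, 'Canceled'), (1, 'Active'),
        (2, 'Waiting Approval'), (2, 'Canceled');
""")

# One result row per a_id: the lowest (best) status rank wins.
best = conn.execute("""
    SELECT a_id,
           MIN(CASE status
                 WHEN 'Active'           THEN 1
                 WHEN 'Waiting Approval' THEN 2
                 ELSE 3
               END) AS best_rank
    FROM b
    GROUP BY a_id
    ORDER BY a_id
""").fetchall()
print(best)  # [(1, 1), (2, 2)]
```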
With a sub-select it's easy to do. You can either use Barmar's solution, or, if you only need one specific field from B, you can also put the sub-select within the SELECT clause of the outer query. | Use:
```
LEFT JOIN (SELECT *
FROM OtherTable
WHERE <criteria>
ORDER BY CASE status
WHEN 'Active' THEN 1
WHEN 'Waiting Approval' THEN 2
WHEN 'Canceled' THEN 3
...
END
LIMIT 1) other
``` | LEFT JOIN to a single row in order of criteria in MySQL | [
"",
"mysql",
"sql",
""
] |
I have a big strange error in SQL Server 2008R2:
I have the following statement:
```
DROP TABLE TEST_
-- here it says that this table doesn't exists
GO
CREATE TABLE TEST_(
[ID_] [int] IDENTITY(1,1) NOT NULL,
[ITEM_] NVARCHAR(255) NOT NULL,
CONSTRAINT [TEST_] PRIMARY KEY CLUSTERED
(
[ID_] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
-- here says that table already exists
```
Please see the comments in the code. Has this ever happened to you too?
I cannot re-create the table. What should I do?
**PS: I have same database selected** | I think you may be confused, it complains that 'TEST\_' already exists, but likely it means the constraint 'test\_' already exists; use a different name for the constraint, and it works OK for me. | **I cannot run your query, because of next error:**
*Msg 3701, Level 11, State 5, Line 1*
*Cannot drop the table 'TEST\_', because it does not exist or you do not have permission.*
*Msg 102, Level 15, State 1, Line 5*
*Incorrect syntax near '('.*
*Msg 319, Level 15, State 1, Line 7*
*Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.*
**But when I modify your query to:**
```
CREATE TABLE TEST_(
    [ID_] [int] IDENTITY(1,1) NOT NULL,
    [ITEM_] NVARCHAR(255) NOT NULL
) ON [PRIMARY]
```
**it works fine:**
*Command(s) completed successfully.* | When drop a table it says doesn't exists, when create it says that exists in SQL Server 2008R2 | [
"",
"sql",
"sql-server",
""
] |
If I use the following query:
```
SELECT DISTINCT comment FROM table;
```
And I have, for example, the following data (the IDs are just there to SHOW the order):
```
ID | comment
-------------
1 | comment1
2 | comment1
3 | comment2
4 | comment1
```
What I could get back is any of the following three results:
*Result 1:*
```
1 | comment1
3 | comment2
```
*Result 2:*
```
3 | comment2
4 | comment1
```
*Result 3:*
```
order is unpredictable
```
**Question 1:**
Is the result independent of the platform? Can I make sure that I always get a predictable result?
**Question 2:**
I want to select all DISTINCT comments and get only the NEWEST one of each, meaning I always want result 2. Is it possible to achieve that? Maybe ordering by the key would affect the result? | Your query doesn't request the ID column, only the comment column:
```
SELECT DISTINCT comment FROM table;
```
In the result, the ID is not included, so the row each value comes from is irrelevant.
```
comment1
comment2
```
As for how it will sort them, I think it depends on index order. I'll do a test to confirm:
```
mysql> create table t (id int primary key, comment varchar(100));
mysql> insert into t values
-> (1, 'comment2'),
-> (2, 'comment1'),
-> (3, 'comment2'),
-> (4, 'comment1');
```
The default order is that of the primary key:
```
mysql> select distinct comment from t;
+----------+
| comment |
+----------+
| comment2 |
| comment1 |
+----------+
```
Whereas if we have an index on the requested column, it returns the values in index order:
```
mysql> create index i on t(comment);
mysql> select distinct comment from t;
+----------+
| comment |
+----------+
| comment1 |
| comment2 |
+----------+
```
I'm assuming the InnoDB storage engine, because everyone should be using InnoDB. ;-)
---
Your last question indicates that you really want a query that doesn't involve DISTINCT at all, but it's a [greatest-n-per-group](/questions/tagged/greatest-n-per-group "show questions tagged 'greatest-n-per-group'") question. This type of question is very common, and it has been asked and answered hundreds of times on StackOverflow. Follow the link and read the many solutions. | You can experiment and see which of the unique rows is returned, and you can experiment and see which order they're returned in, but that will only show you how things turn out with your experimental table, today, under the current database engine version. Bottom line:
* If you `SELECT DISTINCT comment` the `id` is immaterial because it's not in your `SELECT`
* If you don't `ORDER BY` the database will determine the order.
If you want the most recent distinct comment with its ID, this will work *every time* (*full disclosure: this replaces an earlier answer that works but was over-thinking the problem*):
```
SELECT comment, MAX(id)
FROM myTable
GROUP BY comment
ORDER BY 2 DESC;
```
Note that the `ORDER BY 2 DESC` assumes that the higher the ID, the more recent the comment. | Distinct - which items are taken? The first or the last occurance? | [
"",
"mysql",
"sql",
"distinct",
"greatest-n-per-group",
""
] |
This seems like such an easy query to run yet I cannot get it to work and I'm starting to rethink why I chose to tackle this project.
I am trying to find how many records in a table share the same id. For example:
```
select productid, productgroup, shelflvl, shelfaisle, Count(Shelfaisle) AS totalaisles
from productlocation
Where productid= productid AND productgroup = 'canned'
Group By productid, productgroup, shelflvl, shelfaisle
```
A product with the same id can be in a different aisle and on a different shelflvl. All I am trying to do is see how many aisles a product is in and I cannot for the life of me get it to work properly.
Any help is appreciated thank you! | I'm assuming a product cannot belong to many `productgroup`s.
Remove `Shelfaisle` from `GROUP BY` as this is what you're trying to count.
Also, a `COUNT(DISTINCT Shelfaisle)` would prevent duplicates (if applicable).
Lastly, you don't need a `productid=productid` condition which apparently always yields `true`.
Cut a long story short:
```
select
productid, productgroup, shelflvl, Count(distinct Shelfaisle) AS totalaisles
from productlocation
Where productgroup = 'canned'
Group By productid, productgroup, shelflvl
``` | Don't group by the column you are trying to aggregate:
```
select productid, productgroup, Count(Shelfaisle) AS totalaisles
from productlocation
Where productid=productid AND productgroup = 'canned'
Group By productid, productgroup
``` | Counting records in a table given corresponding ID | [
"",
"sql",
""
] |
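The fix in the accepted answer above (drop the counted column from `GROUP BY`, count it with `COUNT(DISTINCT ...)`) can be sketched with `sqlite3`; the shelf data here is invented, and `shelflvl` is left out of the grouping so the count covers all aisles per product:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE productlocation (
        productid INTEGER, productgroup TEXT,
        shelflvl INTEGER, Shelfaisle INTEGER);
    INSERT INTO productlocation VALUES
        (1, 'canned', 1, 10),
        (1, 'canned', 2, 10),   -- same aisle, different shelf level
        (1, 'canned', 1, 11),
        (2, 'canned', 1, 12);
""")

# DISTINCT prevents the same aisle being counted once per shelf level.
rows = conn.execute("""
    SELECT productid, productgroup, COUNT(DISTINCT Shelfaisle) AS totalaisles
    FROM productlocation
    WHERE productgroup = 'canned'
    GROUP BY productid, productgroup
    ORDER BY productid
""").fetchall()

print(rows)  # [(1, 'canned', 2), (2, 'canned', 1)]
```

Without `DISTINCT`, product 1 would report 3 aisles because aisle 10 appears on two shelf levels.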
I have a table of logs that contain a ID and TIMESTAMP. I want to ORDER BY ID and then TIMESTAMP.
For example, this is what the result set would look like:
```
12345 05:40
12345 05:50
12345 06:22
12345 07:55
12345 08:33
```
Once that's done, I want to INSERT an order value in a third column that signifies its placement in the group from earliest to latest.
So, you would have something like this:
```
12345 05:40 1 <---First entry
12345 05:50 2
12345 06:22 3
12345 07:55 4
12345 08:33 5 <---Last entry
```
How can I do that in a SQL statement? I can select the data and ORDER BY ID, TIMESTAMP. But, I can't seem to INSERT an order value based on the groupings. :( | Try this `update`, *not an* `insert`:
**[Fiddle demo here](http://sqlfiddle.com/#!3/5aecb/3)**:
```
;with cte as(
select id, yourdate, row_number() over(order by id,yourdate) rn
from yourTable
)
Update ut Set thirdCol = rn
From yourTable ut join cte on ut.Id = cte.id and ut.yourdate = cte.yourdate
```
**NOTE:** if you need the `thirdCol` numbering to restart for each id, partition the row number: `row_number() over (partition by id order by yourdate)`
Results:
```
| ID | YOURDATE | THIRDCOL |
|-------|----------|----------|
| 12345 | 05:40 | 1 |
| 12345 | 05:50 | 2 |
| 12345 | 06:22 | 3 |
| 12345 | 07:55 | 4 |
| 12345 | 08:33 | 5 |
``` | Using a derived table and an update.
```
IF OBJECT_ID('tempdb..#TableOne') IS NOT NULL
begin
drop table #TableOne
end
CREATE TABLE #TableOne
(
SomeColumnA int ,
LetterOfAlphabet varchar(12) ,
PositionOrdinal int not null default 0
)
INSERT INTO #TableOne ( SomeColumnA , LetterOfAlphabet )
select 123 , 'x'
union all select 123 , 'b'
union all select 123 , 'z'
union all select 123 , 't'
union all select 123 , 'c'
union all select 123 , 'd'
union all select 123 , 'e'
union all select 123 , 'a'
Select 'pre' as SpaceTimeContinium , * from #TableOne order by LetterOfAlphabet
Update
#TableOne
Set PositionOrdinal = derived1.rowid
From
( select SomeColumnA , LetterOfAlphabet , rowid = row_number() over (order by LetterOfAlphabet asc) from #TableOne innerT1 )
as derived1
join #TableOne t1
on t1.LetterOfAlphabet = derived1.LetterOfAlphabet and t1.SomeColumnA = derived1.SomeColumnA
Select 'post' as SpaceTimeContinium, * from #TableOne order by LetterOfAlphabet
IF OBJECT_ID('tempdb..#TableOne') IS NOT NULL
begin
drop table #TableOne
end
``` | How do you order a group of records then insert their order placement too? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
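The partitioned `ROW_NUMBER()` mentioned in the accepted answer's note can be checked in SQLite too (window functions need SQLite >= 3.25, bundled with Python 3.8+); the second ID is added here to show the numbering restarting per group:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logs (id INTEGER, ts TEXT);
    INSERT INTO logs VALUES
        (12345, '05:40'), (12345, '05:50'), (12345, '06:22'),
        (99999, '08:00'), (99999, '07:00');
""")

# Placement within each id, earliest timestamp first.
rows = conn.execute("""
    SELECT id, ts,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY ts) AS placement
    FROM logs
    ORDER BY id, ts
""").fetchall()

print(rows)
# [(12345, '05:40', 1), (12345, '05:50', 2), (12345, '06:22', 3),
#  (99999, '07:00', 1), (99999, '08:00', 2)]
```

In SQL Server the same window expression feeds the CTE-based `UPDATE` shown in the answer; SQLite would need `UPDATE ... FROM` (3.33+) or a correlated subquery to persist the value.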
I've got two postgresql tables:
```
table name column names
----------- ------------------------
login_log ip | etc.
ip_location ip | location | hostname | etc.
```
I want to get every IP address from `login_log` which doesn't have a row in `ip_location`.
I tried this query but it throws a syntax error.
```
SELECT login_log.ip
FROM login_log
WHERE NOT EXIST (SELECT ip_location.ip
FROM ip_location
WHERE login_log.ip = ip_location.ip)
```
> ```
> ERROR: syntax error at or near "SELECT"
> LINE 3: WHERE NOT EXIST (SELECT ip_location.ip`
> ```
I'm also wondering if this query (with adjustments to make it work) is the best performing query for this purpose. | There are basically 4 techniques for this task, all of them standard SQL.
### [`NOT EXISTS`](https://www.postgresql.org/docs/current/functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS)
Often fastest in Postgres.
```
SELECT ip
FROM login_log l
WHERE NOT EXISTS (
SELECT -- SELECT list mostly irrelevant; can just be empty in Postgres
FROM ip_location
WHERE ip = l.ip
);
```
Also consider:
* [What is easier to read in EXISTS subqueries?](https://stackoverflow.com/questions/7710153/what-is-easier-to-read-in-exists-subqueries)
### [`LEFT JOIN / IS NULL`](https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-FROM)
Sometimes this is fastest. Often shortest. Often results in the same query plan as `NOT EXISTS`.
```
SELECT l.ip
FROM login_log l
LEFT JOIN ip_location i USING (ip) -- short for: ON i.ip = l.ip
WHERE i.ip IS NULL;
```
### [`EXCEPT`](https://www.postgresql.org/docs/current/queries-union.html)
Short. Not as easily integrated in more complex queries.
```
SELECT ip
FROM login_log
EXCEPT ALL -- "ALL" keeps duplicates and makes it faster
SELECT ip
FROM ip_location;
```
Note that ([per documentation](https://www.postgresql.org/docs/current/queries-union.html)):
> duplicates are eliminated unless `EXCEPT ALL` is used.
Typically, you'll want the `ALL` keyword. If you don't care, still use it because it makes the query *faster*.
### [`NOT IN`](https://www.postgresql.org/docs/current/functions-subquery.html#FUNCTIONS-SUBQUERY-NOTIN)
Only good without `null` values or if you know to handle `null` properly. [I would ***not*** use it for this purpose.](https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_NOT_IN) Also, performance can deteriorate with bigger tables.
```
SELECT ip
FROM login_log
WHERE ip NOT IN (
SELECT DISTINCT ip -- DISTINCT is optional
FROM ip_location
);
```
`NOT IN` carries a "trap" for `null` values on either side:
* [Find records where join doesn't exist](https://stackoverflow.com/questions/14251180/postgresql-records-where-join-doesnt-exist/14260510#14260510)
Similar question on dba.SE targeted at MySQL:
* [Select rows where value of second column is not present in first column](https://dba.stackexchange.com/q/16650/3684) | A.) The command is NOT EXISTS, you're missing the 'S'.
B.) Use NOT IN instead
```
SELECT ip
FROM login_log
WHERE ip NOT IN (
SELECT ip
FROM ip_location
)
;
``` | Select rows which are not present in other table | [
"",
"sql",
"postgresql",
"null",
"left-join",
"exists",
""
] |
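The first three techniques from the accepted answer above (`NOT EXISTS`, `LEFT JOIN / IS NULL`, `EXCEPT`) can be shown to agree on a toy dataset. A sketch with `sqlite3` (the answer targets Postgres; plain `EXCEPT` is used because SQLite has no `EXCEPT ALL`, and the subquery needs `SELECT 1` because SQLite disallows an empty select list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE login_log   (ip TEXT);
    CREATE TABLE ip_location (ip TEXT);
    INSERT INTO login_log   VALUES ('1.1.1.1'), ('2.2.2.2'), ('3.3.3.3');
    INSERT INTO ip_location VALUES ('2.2.2.2');
""")

not_exists = conn.execute("""
    SELECT ip FROM login_log l
    WHERE NOT EXISTS (SELECT 1 FROM ip_location WHERE ip = l.ip)
""").fetchall()

left_join = conn.execute("""
    SELECT l.ip FROM login_log l
    LEFT JOIN ip_location i USING (ip)
    WHERE i.ip IS NULL
""").fetchall()

except_q = conn.execute("""
    SELECT ip FROM login_log
    EXCEPT
    SELECT ip FROM ip_location
""").fetchall()

# All three anti-join formulations return the same set of IPs.
print(sorted(not_exists))  # [('1.1.1.1',), ('3.3.3.3',)]
```

`NOT IN` would also work here, but as the answer warns, a single NULL in `ip_location.ip` would make it return no rows at all.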
In my SQL table, I'd like to find the state codes that appear more than 300 times and also what the total row counts are for records containing each respective state code. More than a bit rusty on my SQL (plus it's 8am on a Monday...) and this is what i have so far:
```
SELECT StateCode
FROM ZipCodeTerritory
HAVING COUNT(StateCode) > 300
``` | ```
SELECT StateCode, Count(*) As StateCodeCount
FROM ZipCodeTerritory
Group By StateCode
HAVING COUNT(*) > 300
``` | Something like this?
```
SELECT StateCode, count(StateCode)
FROM ZipCodeTerritory
GROUP BY StateCode
HAVING COUNT(StateCode) > 300
``` | Return row counts for field that appears more than x times | [
"",
"sql",
"t-sql",
""
] |
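The `GROUP BY` + `HAVING` shape from the answers above runs unchanged in most engines. A minimal `sqlite3` sketch (the threshold is lowered to 3 so the toy data triggers it; state codes are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ZipCodeTerritory (StateCode TEXT)")
conn.executemany("INSERT INTO ZipCodeTerritory VALUES (?)",
                 [('TX',)] * 5 + [('RI',)] * 2)

# Only groups whose row count exceeds the threshold survive HAVING.
rows = conn.execute("""
    SELECT StateCode, COUNT(*) AS StateCodeCount
    FROM ZipCodeTerritory
    GROUP BY StateCode
    HAVING COUNT(*) > 3
""").fetchall()

print(rows)  # [('TX', 5)]
```

Note that `WHERE` cannot filter on an aggregate; the per-group filter has to live in `HAVING`.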
I'm running a really simple query, however for some of the results the value in one field is null. How can I set that value to "a string" if its value is null?
Something like
```
SELECT RegName,
RegEmail,
RegPhone,
RegOrg,
RegCountry,
DateReg,
(Website IS NULL? 'no website' : Website) AS WebSite
FROM RegTakePart
WHERE Reject IS NULL
```
It will be running on a sql server 2005
thanks | Use the following:
```
SELECT RegName,
RegEmail,
RegPhone,
RegOrg,
RegCountry,
DateReg,
ISNULL(Website,'no website') AS WebSite
FROM RegTakePart
WHERE Reject IS NULL
```
or as, @Lieven noted:
```
SELECT RegName,
RegEmail,
RegPhone,
RegOrg,
RegCountry,
DateReg,
COALESCE(Website,'no website') AS WebSite
FROM RegTakePart
WHERE Reject IS NULL
```
The dynamic of COALESCE is that you may define more arguments, so if the first is null then get the second, if the second is null get the third etc etc... | As noted above, the coalesce solution is preferred. As an added benefit, you can use the coalesce against a "derived" value vs. a selected value as in:
```
SELECT
{stuff},
COALESCE( (select count(*) from tbl where {stuff} ), 0 ) AS countofstuff
FROM
tbl
WHERE
{something}
```
Using "iif" or "case" you would need to repeat the inline whereas with coalesce you do not and it allows you to avoid a "null" result in that return... | Set default value in query when value is null | [
"",
"sql",
"sql-server-2005",
""
] |
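The `COALESCE` fallback from the answers above is standard SQL, so it can be demonstrated with `sqlite3` (SQLite's two-argument `IFNULL` plays the role of T-SQL's `ISNULL`; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE RegTakePart (RegName TEXT, Website TEXT);
    INSERT INTO RegTakePart VALUES
        ('a', 'http://example.com'),
        ('b', NULL);
""")

# COALESCE returns its first non-NULL argument.
rows = conn.execute("""
    SELECT RegName, COALESCE(Website, 'no website') AS WebSite
    FROM RegTakePart
    ORDER BY RegName
""").fetchall()

print(rows)  # [('a', 'http://example.com'), ('b', 'no website')]
```

`COALESCE` is the portable choice; `ISNULL` is SQL Server-specific and limited to two arguments.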
I am struggling with MySQL joins at the moment.
I have 3 tables, users, gifts and users\_gifts.
```
- Users
- id
- username
- email
- password
- Gifts
- id
- gift
- value
- users_gifts
- uid
- gift_id
```
I want to return all gifts but do not include the gifts that the user has already sent. So, if the available gifts are:
```
card
heart
necklace
perfume
```
If the user has already sent a heart, then the only gifts that would return are
```
card
necklace
perfume
```
I have tried the following join, but for some reason it's not having the desired effect.
```
SELECT *
FROM gifts
JOIN users_gifts
ON gifts.id = user_gifts.gift_id
WHERE users_gifts.uid != 3
```
... assuming the user ID of the current user is 3.
Am I missing something here, or is there any error with my SQL? | I recommend you change your approach.
Currently, you are trying to select all gifts sent by a user other than #3. In addition, you are using an inner join, meaning *only* gifts *sent* by a user other than #3 are included. Gifts which have never been sent by anyone are, therefore, excluded.
Instead, you should start with all gifts and then perform an *outer* join on user #3. This will return all gift columns and user columns *if* the user has sent the gift. If the user has not sent the gift, then user columns will simply contain NULL, but the row will still be included.
Here's an example of an outer join which checks the user columns to determine whether the user has sent the gift. Only rows whose user columns contain NULL are included in the final result, meaning the user has *not* sent the gift:
```
SELECT *
FROM gifts
LEFT JOIN users_gifts
ON users_gifts.gift_id = gifts.id
   AND users_gifts.uid = 3 -- Whatever the User's ID is
WHERE users_gifts.gift_ID IS NULL -- exclude the gift if it's already been sent
```
This is an alternative approach using an [EXISTS](http://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html) subquery:
```
SELECT *
FROM gifts
WHERE NOT Exists(
SELECT 0
FROM users_gifts
WHERE users_gifts.gift_id = gifts.id
  AND users_gifts.uid = 3 -- Whatever the User's ID is
)
```
Which method is faster, or whether one method is faster than the other, depends on your specific circumstances. According to [this article](http://explainextended.com/2009/09/18/not-in-vs-not-exists-vs-left-join-is-null-mysql/), the `NOT EXISTS` approach is 30% slower, but [this article](http://planet.mysql.com/entry/?id=24888) suggests that `NOT EXISTS` is more performant than `LEFT JOIN` if the column contains NULLs. As [Darius mentioned](https://stackoverflow.com/questions/19369029/select-all-and-excluding-results-sql-join/19369135#comment28702978_19369097) in the comments on his answer, "testing is the way to go." | Your query is looking for users other than #3.
Instead, you need to look in users\_gifts, for user=3.
But, you want to exclude any gifts that exist in that sub-query:
```
SELECT *
FROM gifts
WHERE NOT EXISTS
(SELECT 1
FROM users_gifts
WHERE users_gifts.uid = 3
  AND users_gifts.gift_id = gifts.id
)
``` | Select all and excluding results. SQL join | [
"",
"mysql",
"sql",
"database",
""
] |
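The key point of the accepted answer above, that the user filter belongs in the `ON` clause rather than `WHERE` so unsent gifts survive the outer join, can be sketched with `sqlite3` using the question's own data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gifts (id INTEGER PRIMARY KEY, gift TEXT);
    CREATE TABLE users_gifts (uid INTEGER, gift_id INTEGER);
    INSERT INTO gifts (id, gift) VALUES
        (1, 'card'), (2, 'heart'), (3, 'necklace'), (4, 'perfume');
    INSERT INTO users_gifts VALUES (3, 2);  -- user 3 already sent a heart
""")

# uid filter in ON keeps unmatched gift rows; WHERE then picks the NULLs.
rows = conn.execute("""
    SELECT g.gift
    FROM gifts g
    LEFT JOIN users_gifts ug
           ON ug.gift_id = g.id AND ug.uid = 3
    WHERE ug.gift_id IS NULL
    ORDER BY g.id
""").fetchall()

print(rows)  # [('card',), ('necklace',), ('perfume',)]
```

Moving `ug.uid = 3` into the `WHERE` clause would turn the outer join back into an inner join and drop every never-sent gift.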
Consider a table with a composite primary key:
```
create table T (
C1 int not null,
C2 int not null,
...
primary key (C1, C2)
)
```
and the problem of retrieving the rows greater than a certain primary key tuple:
```
select * from T where (C1, C2) > (10, 100)
```
The pseudo-syntax above is invalid on MSSQL and Oracle. It can be implemented as:
```
select * from T where (C1 = 10 and C2 > 100) or (C1 > 10)
```
The more columns the PK has, the uglier this gets. Is there any elegant way of doing this (MSSQL and Oracle)?
Also, it's important to take advantage of the unique index on (C1, C2). I wouldn't want to concatenate the columns in a string to simulate the tuple and then end up with a table scan. | AFAIR there is no elegant and fast way to do it. If you really have only 2 int columns, try to combine them into one bigint; for more, you will need to produce such an ugly where condition. The good news: it will use the primary key index and will work pretty fast.
```
'11000' > '10100'
'10101' > '10100'
'10099' < '10100'
'09999' < '10100'
```
To achieve this you need a virtual column. In Oracle you would just build a functional index; in SQL Server you would create a computed column first and build the index on that.
You could write a deterministic (!) glue function to glue your keys together and use this:
```
alter table T add column GluedKeys as GlueTKeys(c1,c2,c3,c4,c5);
create index IndexGluedTKeys on T(GluedKeys);
select * from T where GluedKeys > GlueTKeys(?,?,?,?,?);
```
Be aware of how long the single keys can be. Don't glue 10 + 100 to 10100 and 20 + 10 to 2010, but rather 20 + 10 to 20010. | Is it possible to use column tuples in a where condition? | [
"",
"sql",
"sql-server",
"oracle",
""
] |
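For what it's worth, some engines do support the row-value syntax from the question natively (Postgres, MySQL, and SQLite >= 3.15, though not SQL Server or older Oracle in the WHERE clause). A `sqlite3` sketch showing the tuple form and the expanded OR form agree:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T (C1 INTEGER, C2 INTEGER, PRIMARY KEY (C1, C2));
    INSERT INTO T VALUES (10, 99), (10, 100), (10, 101), (11, 1);
""")

# Native row-value (tuple) comparison.
tuple_form = conn.execute("""
    SELECT C1, C2 FROM T
    WHERE (C1, C2) > (10, 100)
    ORDER BY C1, C2
""").fetchall()

# Manually expanded equivalent from the question.
expanded = conn.execute("""
    SELECT C1, C2 FROM T
    WHERE (C1 = 10 AND C2 > 100) OR C1 > 10
    ORDER BY C1, C2
""").fetchall()

print(tuple_form)  # [(10, 101), (11, 1)]
```

SQLite's query planner can also use the composite primary-key index for the row-value form, which is exactly the property the question asks to preserve.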
I'm trying to check if a country code exists in a cell `country` before a user can add anything to the database. So, if a country code did not exist, he would not be able to add anything.
I'm storing country codes as comma separated values. The column type is VARCHAR.
```
"id" "rtype" "country"
"1" "0" "US,UK,SE,CA"
"2" "1" "US"
"3" "2" "UK"
```
I ran the following query, but it returns no rows. Where am I going wrong here? The problem is, this query needs to work with comma-separated values as well as single values.
```
select id, rtype from test where id=1 and country in('US')
```
**Expected results**
```
id | rType
1 | 0
```
```
select id, rtype from test where id=2 and country in('US')
```
**Expected results**
```
id | rType
2   | 1
``` | Try this out:
```
SELECT id, rtype FROM test
WHERE FIND_IN_SET('US', country) > 0
```
This is how the `FIND_IN_SET(str,strlist)` function works:
> Returns a value in the range of 1 to N if the string str is in the string list strlist consisting of N substrings. A string list is a string composed of substrings separated by “,” characters. [...] Returns 0 if str is not in strlist or if strlist is the empty string. | MySQL has a really nice [Set Datatype](http://dev.mysql.com/doc/refman/5.0/en/set.html). It is very efficient and can store till 64 distinct members of 255 values. I don't know if you could change your column type, but it would solve a lot of problems:
1. your column would be indexable
2. you wouldn't store duplicated values by mistake
3. it would be stored very efficiently
4. invalid values would be ignored (or error if in strict mode)
5. trailing spaces are removed
You'd search using the [FIND\_IN\_SET](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set) function. | Check if a country code exists in a cell | [
"",
"mysql",
"sql",
""
] |
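`FIND_IN_SET` from the accepted answer is MySQL-specific. A portable equivalent, sketched here with `sqlite3` on the question's data, wraps both the stored list and the needle in commas so `'US'` cannot accidentally match a code like `'USA'`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (id INTEGER, rtype INTEGER, country TEXT);
    INSERT INTO test VALUES
        (1, 0, 'US,UK,SE,CA'),
        (2, 1, 'US'),
        (3, 2, 'UK');
""")

# ',US,UK,SE,CA,' LIKE '%,US,%' -> exact-member match within the CSV list.
rows = conn.execute("""
    SELECT id, rtype FROM test
    WHERE ',' || country || ',' LIKE '%,' || ? || ',%'
    ORDER BY id
""", ('US',)).fetchall()

print(rows)  # [(1, 0), (2, 1)]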
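`FIND_IN_SET` from the accepted answer is MySQL-specific. A portable equivalent, sketched here with `sqlite3` on the question's data, wraps both the stored list and the needle in commas so `'US'` cannot accidentally match a code like `'USA'`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (id INTEGER, rtype INTEGER, country TEXT);
    INSERT INTO test VALUES
        (1, 0, 'US,UK,SE,CA'),
        (2, 1, 'US'),
        (3, 2, 'UK');
""")

# ',US,UK,SE,CA,' LIKE '%,US,%' -> exact-member match within the CSV list.
rows = conn.execute("""
    SELECT id, rtype FROM test
    WHERE ',' || country || ',' LIKE '%,' || ? || ',%'
    ORDER BY id
""", ('US',)).fetchall()

print(rows)  # [(1, 0), (2, 1)]
```

Like `FIND_IN_SET`, this pattern cannot use an index; normalizing the codes into a child table remains the better design.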
I cannot seem to figure out how to get an output of a SQL query as a variable in VB. I need something like what's below so that I can use that variable in another SQL query. SQL doesn't allow the use of variables as column headers when running queries, so I thought I could use VB to insert the actual output of one SQL task as the dynamic variable in a loop of update queries. Let me (hopefully) explain.
I have the following query:
```
DECLARE @id int = (SELECT max(id) FROM [views]), @ViewType nvarchar(3);
WHILE @id IS NOT NULL
BEGIN
SELECT @ViewType = (SELECT [View] FROM [views] WHERE id = @id);
UPDATE a
SET a.[@ViewType] = '1'
FROM [summary] a
INNER JOIN [TeamImage] b
ON a.[Part_Number] = b.[PartNum]
WHERE b.[View] = @ViewType;
SELECT @id = max(id) FROM [views] WHERE id < @id;
END;
```
The SET a.[@ViewType] = '1' obviously will not work in SQL. But if I could have the (SELECT [View] FROM [views] WHERE id = @id) equal to a variable, then I could write the SQL query in VB and execute it and the variable would become part of the string and therefore execute correctly.
I'm newer to VB, but here's what I have so far:
```
Dim cn As SqlConnection = New SqlConnection("Data Source=Server1;" & _
"Initial Catalog=DB1;" & _
"Integrated Security=SSPI")
cn.Open()
Dim cmd As New sqlCommand("SELECT max(id) FROM orientation_view_experiment;", cn)
vID = cmd.ExecuteNonQuery()
Do While vID > 0
Dim cmd2 As New sqlCommand("SELECT [View] FROM [views] WHERE id ='" + vID + "'"
vViewType = cmd2.ExecuteNonQuery()
Dim cmd3 As New sqlCommand("UPDATE a
SET a.'" + vViewType + "' = '1' & _
FROM [summary] a & _
INNER JOIN [TeamImage] b & _
ON a.[Part_Number] = b.[PartNum] & _
WHERE b.[View] = '" + vViewType + "';"
cmd3.ExecuteNonQuery()
vID = vID - 1
Loop
cn.Close()
```
I hope some of that made sense, but I'm kind of lost at this point. I feel like I know what I need the SQL to do, but can't quite figure out how to make the computer/programs submit to my will and just do what I need it to do.
Thank you for any help/direction you can give. | Your code is wrong because you insist on using `ExecuteNonQuery` for `SELECT` statements. `ExecuteNonQuery` doesn't return the rows selected but just a count of the rows affected by an INSERT/DELETE/UPDATE query (I think that for SELECT it always returns zero)
What you need is [ExecuteScalar](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executescalar.aspx) to get the MAX value and the VIEW value because ExecuteScalar is the best choice when you expect to get just the first field of the first row from your SQL statement
```
Dim cmd As New sqlCommand("SELECT max(id) FROM orientation_view_experiment;", cn)
vID = Convert.ToInt32(cmd.ExecuteScalar())
Do While vID > 0
Dim cmd2 As New sqlCommand("SELECT [View] FROM [views] WHERE id =" + vID.ToString()
Dim result = cmd2.ExecuteScalar()
    If result IsNot Nothing Then
vViewType = result.ToString()
Dim cmd3 As New sqlCommand("UPDATE a SET a.[" + vViewType + "] = '1' " & _
"FROM [summary] a " & _
"INNER JOIN [TeamImage] b " & _
"ON a.[Part_Number] = b.[PartNum] " & _
"WHERE b.[View] = @vType"
cmd3.Parameters.AddWithValue("@vType", vViewType)
cmd3.ExecuteNonQuery()
    End If
    vID = vID - 1
Loop
```
The last part of your code is not really clear to me, but you could use a couple of square brackets around the column name in table `summary` and a parameter for the View field in table `TeamImage`.
As a last advice, be sure that the column View in table TeamImage is not directly modifiable by your end user because a string concatenation like this could lead to a [Sql Injection](https://stackoverflow.com/questions/332365/how-does-the-sql-injection-from-the-bobby-tables-xkcd-comic-work) attacks | Do a little research into what the different methods of a `command` are. When you call `ExecuteNonQuery`, this return the *number of records effected*. I think you want `ExecuteScalar` as your `cmd` and `cmd2` methods, so you can get a value from the database. | SQL output as variable in VB.net | [
"",
"sql",
"vb.net",
"variables",
""
] |
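The distinction the accepted answer draws, scalar fetch versus affected-row count, exists in Python's DB-API too, where the `ExecuteScalar` role is played by fetching the first column of the first row. A small sketch with `sqlite3` (the helper name `execute_scalar` is my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE views (id INTEGER, "View" TEXT);
    INSERT INTO views VALUES (1, 'front'), (2, 'back');
""")

def execute_scalar(conn, sql, params=()):
    """Return the first column of the first row, or None if no rows
    (the ADO.NET ExecuteScalar behaviour)."""
    row = conn.execute(sql, params).fetchone()
    return row[0] if row else None

max_id = execute_scalar(conn, "SELECT MAX(id) FROM views")
view = execute_scalar(conn, 'SELECT "View" FROM views WHERE id = ?', (max_id,))

print(max_id, view)  # 2 back
```

The `?` placeholder is the parameterized-query equivalent of the answer's `@vType`; column *names* still cannot be parameterized, which is why the answer falls back to concatenation (with brackets) for the dynamic column.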
So I have the query:
```
select states, count(numberofcounty)
from countytable
group by states
order by count(distinct numberofcounty) ASC;
```
which return a table of 2 columns: number of state and number of county from least to most.
How can I get the average number of counties per state as a single real number?
The table structure is:
```
create table numberofcounty(
numberofcounty text,
states text references states(states),
);
create table states (
states text primary key,
name text,
admitted_to_union text
```
); | This might be it?
```
countytable:
state | county
a       aa
a       ab
a       ac
b       ba

SELECT AVG(counties)
FROM (SELECT state, COUNT(county) AS counties
      FROM countytable
      GROUP BY state) AS temp
```
result: 2 | You can use your current query as a subquery to one that gets the average of the number of counties per state:
```
Select avg (c) average_counties
From (select count(numberofcounty) c
from countytable
group by states) s
``` | sql find average from counting? | [
"",
"sql",
"count",
"average",
""
] |
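The "average over grouped counts" shape from both answers above can be sketched with `sqlite3` using the accepted answer's toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE countytable (state TEXT, county TEXT);
    INSERT INTO countytable VALUES
        ('a', 'aa'), ('a', 'ab'), ('a', 'ac'), ('b', 'ba');
""")

# Inner query: counties per state (3 and 1); outer query: their average.
avg = conn.execute("""
    SELECT AVG(counties)
    FROM (SELECT COUNT(county) AS counties
          FROM countytable
          GROUP BY state)
""").fetchone()[0]

print(avg)  # 2.0
```

SQLite's `AVG` always returns a float; in engines such as SQL Server, where `AVG` over integers does integer division, cast the count (for example `AVG(CAST(counties AS float))`) to get a real number.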
Imagine the following `Loans` table:
```
BorrowerID StartDate DueDate
=============================================
1 2012-09-02 2012-10-01
2 2012-10-05 2012-10-21
3 2012-11-07 2012-11-09
4 2012-12-01 2013-01-01
4 2012-12-01 2013-01-14
1 2012-12-20 2013-01-06
3 2013-01-07 2013-01-22
3 2013-01-15 2013-01-18
1 2013-02-20 2013-02-24
```
How would I go about selecting the distinct `BorrowerID`s of those who have only ever taken out a single loan at a time? This includes borrowers who have only ever taken out a single loan, as well as those who have taken out more than one, provided if you were to draw a time line of their loans, none of them would overlap. For example, in the table above, it should find borrowers 1 and 2 only.
I've tried experimenting with joining the table to itself, but haven't really managed to get anywhere. Any pointers much appreciated! | ## **Solution for dbo.Loan with PRIMARY KEY**
To solve this you need a two step approach as detailed in the following [SQL Fiddle](http://sqlfiddle.com/#!3/4c533/6). I did add a LoanId column to your example data and the query requires that such a unique id exists. If you don't have that, you need to adjust the join clause to make sure that a loan does not get matched to itself.
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE dbo.Loans
(LoanID INT, [BorrowerID] int, [StartDate] datetime, [DueDate] datetime)
GO
INSERT INTO dbo.Loans
(LoanID, [BorrowerID], [StartDate], [DueDate])
VALUES
(1, 1, '2012-09-02 00:00:00', '2012-10-01 00:00:00'),
(2, 2, '2012-10-05 00:00:00', '2012-10-21 00:00:00'),
(3, 3, '2012-11-07 00:00:00', '2012-11-09 00:00:00'),
(4, 4, '2012-12-01 00:00:00', '2013-01-01 00:00:00'),
(5, 4, '2012-12-01 00:00:00', '2013-01-14 00:00:00'),
(6, 1, '2012-12-20 00:00:00', '2013-01-06 00:00:00'),
(7, 3, '2013-01-07 00:00:00', '2013-01-22 00:00:00'),
(8, 3, '2013-01-15 00:00:00', '2013-01-18 00:00:00'),
(9, 1, '2013-02-20 00:00:00', '2013-02-24 00:00:00')
GO
```
First you need to find out which loans overlap with another loan. The query uses `<=` to compare the start and due dates. That counts loans where the second one starts the same day the first one ends as overlapping. If you need those to not be overlapping use `<` instead in both places.
**Query 1**:
```
SELECT
*,
CASE WHEN EXISTS(SELECT 1 FROM dbo.Loans L2
WHERE L2.BorrowerID = L1.BorrowerID
AND L2.LoanID <> L1.LoanID
AND L1.StartDate <= L2.DueDate
AND L2.StartDate <= l1.DueDate)
THEN 1
ELSE 0
END AS HasOverlappingLoan
FROM dbo.Loans L1;
```
**[Results](http://sqlfiddle.com/#!3/4c533/6/0)**:
```
| LOANID | BORROWERID | STARTDATE | DUEDATE | HASOVERLAPPINGLOAN |
|--------|------------|----------------------------------|---------------------------------|--------------------|
| 1 | 1 | September, 02 2012 00:00:00+0000 | October, 01 2012 00:00:00+0000 | 0 |
| 2 | 2 | October, 05 2012 00:00:00+0000 | October, 21 2012 00:00:00+0000 | 0 |
| 3 | 3 | November, 07 2012 00:00:00+0000 | November, 09 2012 00:00:00+0000 | 0 |
| 4 | 4 | December, 01 2012 00:00:00+0000 | January, 01 2013 00:00:00+0000 | 1 |
| 5 | 4 | December, 01 2012 00:00:00+0000 | January, 14 2013 00:00:00+0000 | 1 |
| 6 | 1 | December, 20 2012 00:00:00+0000 | January, 06 2013 00:00:00+0000 | 0 |
| 7 | 3 | January, 07 2013 00:00:00+0000 | January, 22 2013 00:00:00+0000 | 1 |
| 8 | 3 | January, 15 2013 00:00:00+0000 | January, 18 2013 00:00:00+0000 | 1 |
| 9 | 1 | February, 20 2013 00:00:00+0000 | February, 24 2013 00:00:00+0000 | 0 |
```
Now, with that information you can determine the borrowers that have no overlapping loans with this query:
**Query 2**:
```
WITH OverlappingLoans AS (
SELECT
*,
CASE WHEN EXISTS(SELECT 1 FROM dbo.Loans L2
WHERE L2.BorrowerID = L1.BorrowerID
AND L2.LoanID <> L1.LoanID
AND L1.StartDate <= L2.DueDate
AND L2.StartDate <= l1.DueDate)
THEN 1
ELSE 0
END AS HasOverlappingLoan
FROM dbo.Loans L1
),
OverlappingBorrower AS (
SELECT BorrowerID, MAX(HasOverlappingLoan) HasOverlappingLoan
FROM OverlappingLoans
GROUP BY BorrowerID
)
SELECT *
FROM OverlappingBorrower
WHERE hasOverlappingLoan = 0;
```
**[Results](http://sqlfiddle.com/#!3/4c533/6/1)**:
```
| BORROWERID | HASOVERLAPPINGLOAN |
|------------|--------------------|
| 1          | 0                  |
| 2          | 0                  |
```
Or you could even get more information by counting the loans as well as the number of loans that overlap other loans for each borrower in the database. (Note: if loan A and loan B overlap, both will be counted as overlapping loans by this query.)
**Query 3**:
```
WITH OverlappingLoans AS (
SELECT
*,
CASE WHEN EXISTS(SELECT 1 FROM dbo.Loans L2
WHERE L2.BorrowerID = L1.BorrowerID
AND L2.LoanID <> L1.LoanID
AND L1.StartDate <= L2.DueDate
AND L2.StartDate <= l1.DueDate)
THEN 1
ELSE 0
END AS HasOverlappingLoan
FROM dbo.Loans L1
)
SELECT BorrowerID,COUNT(1) LoanCount, SUM(hasOverlappingLoan) OverlappingCount
FROM OverlappingLoans
GROUP BY BorrowerID;
```
**[Results](http://sqlfiddle.com/#!3/4c533/6/2)**:
```
| BORROWERID | LOANCOUNT | OVERLAPPINGCOUNT |
|------------|-----------|------------------|
| 1 | 3 | 0 |
| 2 | 1 | 0 |
| 3 | 3 | 2 |
| 4 | 2 | 2 |
```
---
---
## **Solution for dbo.Loans without PRIMARY KEY**
UPDATE: As the requirement actually calls for a solution that does not rely on a unique identifier for each loan, I made the following changes:
**1)** I added a borrower that has two loans with the same start and due dates
[SQL Fiddle](http://sqlfiddle.com/#!3/4c533/6)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE dbo.Loans
([BorrowerID] int, [StartDate] datetime, [DueDate] datetime)
GO
INSERT INTO dbo.Loans
([BorrowerID], [StartDate], [DueDate])
VALUES
( 1, '2012-09-02 00:00:00', '2012-10-01 00:00:00'),
( 2, '2012-10-05 00:00:00', '2012-10-21 00:00:00'),
( 3, '2012-11-07 00:00:00', '2012-11-09 00:00:00'),
( 4, '2012-12-01 00:00:00', '2013-01-01 00:00:00'),
( 4, '2012-12-01 00:00:00', '2013-01-14 00:00:00'),
( 1, '2012-12-20 00:00:00', '2013-01-06 00:00:00'),
( 3, '2013-01-07 00:00:00', '2013-01-22 00:00:00'),
( 3, '2013-01-15 00:00:00', '2013-01-18 00:00:00'),
( 1, '2013-02-20 00:00:00', '2013-02-24 00:00:00'),
( 5, '2013-02-20 00:00:00', '2013-02-24 00:00:00'),
( 5, '2013-02-20 00:00:00', '2013-02-24 00:00:00')
GO
```
**2)** Those "equal date" loans require an additional step:
**Query 1**:
```
SELECT BorrowerID, StartDate, DueDate, COUNT(1) LoanCount
FROM dbo.Loans
GROUP BY BorrowerID, StartDate, DueDate;
```
**[Results](http://sqlfiddle.com/#!3/4c533/6/0)**:
```
| BORROWERID | STARTDATE | DUEDATE | LOANCOUNT |
|------------|----------------------------------|---------------------------------|-----------|
| 1 | September, 02 2012 00:00:00+0000 | October, 01 2012 00:00:00+0000 | 1 |
| 1 | December, 20 2012 00:00:00+0000 | January, 06 2013 00:00:00+0000 | 1 |
| 1 | February, 20 2013 00:00:00+0000 | February, 24 2013 00:00:00+0000 | 1 |
| 2 | October, 05 2012 00:00:00+0000 | October, 21 2012 00:00:00+0000 | 1 |
| 3 | November, 07 2012 00:00:00+0000 | November, 09 2012 00:00:00+0000 | 1 |
| 3 | January, 07 2013 00:00:00+0000 | January, 22 2013 00:00:00+0000 | 1 |
| 3 | January, 15 2013 00:00:00+0000 | January, 18 2013 00:00:00+0000 | 1 |
| 4 | December, 01 2012 00:00:00+0000 | January, 01 2013 00:00:00+0000 | 1 |
| 4 | December, 01 2012 00:00:00+0000 | January, 14 2013 00:00:00+0000 | 1 |
| 5 | February, 20 2013 00:00:00+0000 | February, 24 2013 00:00:00+0000 | 2 |
```
**3)** Now, with each loan range unique, we can use the old technique again. However, we also need to account for those "equal date" loans. `(L1.StartDate <> L2.StartDate OR L1.DueDate <> L2.DueDate)` prevents a loan getting matched with itself. `OR LoanCount > 1` accounts for "equal date" loans.
**Query 2**:
```
WITH NormalizedLoans AS (
SELECT BorrowerID, StartDate, DueDate, COUNT(1) LoanCount
FROM dbo.Loans
GROUP BY BorrowerID, StartDate, DueDate
)
SELECT
*,
CASE WHEN EXISTS(SELECT 1 FROM dbo.Loans L2
WHERE L2.BorrowerID = L1.BorrowerID
AND L1.StartDate <= L2.DueDate
AND L2.StartDate <= l1.DueDate
AND (L1.StartDate <> L2.StartDate
OR L1.DueDate <> L2.DueDate)
)
OR LoanCount > 1
THEN 1
ELSE 0
END AS HasOverlappingLoan
FROM NormalizedLoans L1;
```
**[Results](http://sqlfiddle.com/#!3/4c533/6/1)**:
```
| BORROWERID | STARTDATE | DUEDATE | LOANCOUNT | HASOVERLAPPINGLOAN |
|------------|----------------------------------|---------------------------------|-----------|--------------------|
| 1 | September, 02 2012 00:00:00+0000 | October, 01 2012 00:00:00+0000 | 1 | 0 |
| 1 | December, 20 2012 00:00:00+0000 | January, 06 2013 00:00:00+0000 | 1 | 0 |
| 1 | February, 20 2013 00:00:00+0000 | February, 24 2013 00:00:00+0000 | 1 | 0 |
| 2 | October, 05 2012 00:00:00+0000 | October, 21 2012 00:00:00+0000 | 1 | 0 |
| 3 | November, 07 2012 00:00:00+0000 | November, 09 2012 00:00:00+0000 | 1 | 0 |
| 3 | January, 07 2013 00:00:00+0000 | January, 22 2013 00:00:00+0000 | 1 | 1 |
| 3 | January, 15 2013 00:00:00+0000 | January, 18 2013 00:00:00+0000 | 1 | 1 |
| 4 | December, 01 2012 00:00:00+0000 | January, 01 2013 00:00:00+0000 | 1 | 1 |
| 4 | December, 01 2012 00:00:00+0000 | January, 14 2013 00:00:00+0000 | 1 | 1 |
| 5 | February, 20 2013 00:00:00+0000 | February, 24 2013 00:00:00+0000 | 2 | 1 |
```
This query logic did not change (other than switching out the beginning).
**Query 3**:
```
WITH NormalizedLoans AS (
SELECT BorrowerID, StartDate, DueDate, COUNT(1) LoanCount
FROM dbo.Loans
GROUP BY BorrowerID, StartDate, DueDate
),
OverlappingLoans AS (
SELECT
*,
CASE WHEN EXISTS(SELECT 1 FROM dbo.Loans L2
WHERE L2.BorrowerID = L1.BorrowerID
AND L1.StartDate <= L2.DueDate
AND L2.StartDate <= l1.DueDate
AND (L1.StartDate <> L2.StartDate
OR L1.DueDate <> L2.DueDate)
)
OR LoanCount > 1
THEN 1
ELSE 0
END AS HasOverlappingLoan
FROM NormalizedLoans L1
),
OverlappingBorrower AS (
SELECT BorrowerID, MAX(HasOverlappingLoan) HasOverlappingLoan
FROM OverlappingLoans
GROUP BY BorrowerID
)
SELECT *
FROM OverlappingBorrower
WHERE hasOverlappingLoan = 0;
```
**[Results](http://sqlfiddle.com/#!3/4c533/6/2)**:
```
| BORROWERID | HASOVERLAPPINGLOAN |
|------------|--------------------|
| 1 | 0 |
| 2 | 0 |
```
**4)** In this counting query we need to incorporate the "equal date" loan counts again. For that we use `SUM(LoanCount)` instead of a plain `COUNT`. We also have to multiply `hasOverlappingLoan` with the LoanCount to get the correct overlapping count again.
**Query 4**:
```
WITH NormalizedLoans AS (
SELECT BorrowerID, StartDate, DueDate, COUNT(1) LoanCount
FROM dbo.Loans
GROUP BY BorrowerID, StartDate, DueDate
),
OverlappingLoans AS (
SELECT
*,
CASE WHEN EXISTS(SELECT 1 FROM dbo.Loans L2
WHERE L2.BorrowerID = L1.BorrowerID
AND L1.StartDate <= L2.DueDate
AND L2.StartDate <= l1.DueDate
AND (L1.StartDate <> L2.StartDate
OR L1.DueDate <> L2.DueDate)
)
OR LoanCount > 1
THEN 1
ELSE 0
END AS HasOverlappingLoan
FROM NormalizedLoans L1
)
SELECT BorrowerID,SUM(LoanCount) LoanCount, SUM(hasOverlappingLoan*LoanCount) OverlappingCount
FROM OverlappingLoans
GROUP BY BorrowerID;
```
**[Results](http://sqlfiddle.com/#!3/4c533/6/3)**:
```
| BORROWERID | LOANCOUNT | OVERLAPPINGCOUNT |
|------------|-----------|------------------|
| 1 | 3 | 0 |
| 2 | 1 | 0 |
| 3 | 3 | 2 |
| 4 | 2 | 2 |
| 5 | 2 | 2 |
```
I strongly suggest finding a way to use my first solution, as a loan table without a primary key is a, let's say, "odd" design. However, if you really can't get there, use the second solution. | I got it working, but in a somewhat convoluted way. It first finds borrowers that don't meet the criteria in the inner query and returns the rest. The inner query has 2 parts:
1. Get all overlapping borrowings not starting on the same day.
2. Get all borrowings starting on the same date.
```
select distinct BorrowerID from borrowings
where BorrowerID NOT IN
(
select b1.BorrowerID from borrowings b1
inner join borrowings b2
on b1.BorrowerID = b2.BorrowerID
and b1.StartDate < b2.StartDate
and b1.DueDate > b2.StartDate
union
select BorrowerID from borrowings
group by BorrowerID, StartDate
having count(*) > 1
)
```
I had to use 2 separate inner queries as your table doesn't have a unique identifier for each record and using `b1.StartDate <= b2.StartDate` as I should have makes a record join to itself. It would be good to have a separate identifier for each record. | Select rows with no date range overlap | [
"",
"sql",
"sql-server",
"date-range",
""
] |
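The overlap test at the heart of the accepted answer above, two ranges overlap iff `start1 <= due2 AND start2 <= due1`, can be run end to end with `sqlite3` on the answer's first (with-LoanID) dataset; ISO date strings compare correctly as text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Loans (LoanID INTEGER PRIMARY KEY,
                        BorrowerID INTEGER, StartDate TEXT, DueDate TEXT);
    INSERT INTO Loans VALUES
        (1, 1, '2012-09-02', '2012-10-01'),
        (2, 2, '2012-10-05', '2012-10-21'),
        (3, 3, '2012-11-07', '2012-11-09'),
        (4, 4, '2012-12-01', '2013-01-01'),
        (5, 4, '2012-12-01', '2013-01-14'),
        (6, 1, '2012-12-20', '2013-01-06'),
        (7, 3, '2013-01-07', '2013-01-22'),
        (8, 3, '2013-01-15', '2013-01-18'),
        (9, 1, '2013-02-20', '2013-02-24');
""")

# Flag each loan that overlaps another loan of the same borrower, then
# keep only borrowers with no flagged loan at all.
rows = conn.execute("""
    WITH flagged AS (
        SELECT l1.BorrowerID,
               EXISTS (SELECT 1 FROM Loans l2
                       WHERE l2.BorrowerID = l1.BorrowerID
                         AND l2.LoanID <> l1.LoanID
                         AND l1.StartDate <= l2.DueDate
                         AND l2.StartDate <= l1.DueDate) AS has_overlap
        FROM Loans l1
    )
    SELECT BorrowerID
    FROM flagged
    GROUP BY BorrowerID
    HAVING MAX(has_overlap) = 0
    ORDER BY BorrowerID
""").fetchall()

print(rows)  # [(1,), (2,)]
```

This matches the answer's expected result: borrowers 1 and 2 only. SQLite's `EXISTS` yields 0/1 directly, which stands in for the answer's `CASE WHEN EXISTS ... THEN 1 ELSE 0 END`.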
Hello guys, I need a little help writing a SQL statement that would point out the rows that don't have a corresponding negative matching number, based on my_id, report_id.
Here is the table declaration for better explanation.
```
CREATE TABLE "TEST"
( "REPORT_ID" VARCHAR2(100 BYTE),
"AMOUNT" NUMBER(17,2),
"MY_ID" VARCHAR2(30 BYTE),
"FUND" VARCHAR2(20 BYTE),
"ORG" VARCHAR2(20 BYTE)
)
```
here is some sample data
```
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('1',50,'910','100000','67120');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('1',-50,'910','100000','67130');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('1',100,'910','100000','67150');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('2',200,'910','100000','67130');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('2',-200,'910','100000','67120');
INSERT INTO TEST (REPORT_ID, AMOUNT, MY_ID, FUND, ORG) VALUES ('1', '40.17', '910', '100000', '67150');
INSERT INTO TEST (REPORT_ID, AMOUNT, MY_ID, FUND, ORG) VALUES ('1', '-40.17', '910', '100000', '67150');
INSERT INTO TEST (REPORT_ID, AMOUNT, MY_ID, FUND, ORG) VALUES ('1', '40.17', '910', '100000', '67150');
```
If you create the table and look closely, you'll notice that, by report\_id and my\_id, most positive amounts have a direct negative amount. On the other hand, I need to identify those positive amounts that do not have a corresponding negative amount, by my\_id and report\_id.
expected result should look like this
```
"REPORT_ID" "FUND" "MY_ID" "ORG" "AMOUNT"
"1" "100000" "910" "67150" "40.17"
"1" "100000" "910" "67150" "100"
```
Any ideas how I can accomplish this?
***EDIT:***
I posted the wrong output result. Just to be clear, the fund and org don't matter until after the match. For example, if I were writing this using PL/SQL, I would find how many minuses I have, then how many pluses I have, compare each plus amount to each minus amount and delete the matches; then I would be left with whatever plus amounts did not have negative amounts.
I apologize for the confusion; I hope this makes it clearer now. Once I have all my matches, I should end up with only the positive amounts that are left behind.
**EDIT:**
Additional inserts:
```
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',71,'911','100000','67150');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',71,'911','100000','67120');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',71,'911','100000','67140');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',71,'911','100000','67130');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',71,'911','100000','67130');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',71,'911','100000','67130');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',-71,'911','100000','67150');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',-71,'911','100000','67150');
Insert into TEST (REPORT_ID,AMOUNT,MY_ID,FUND,ORG) values ('5',-71,'911','100000','67150');
``` | **New Version**
This should return just the rows that you want. If you are not concerned with org or fund then you can just use the query that is aliased x:
```
select distinct t1.report_id, t1.fund, t1.my_id, t1.org, t1.amount
from test t1,
(select distinct t.report_id, t.my_id, abs(amount) as amount
from test t
group by t.report_id, t.my_id, abs(amount)
having sum(t.amount) > 0) x
where t1.report_id = x.report_id
and t1.my_id = x.my_id
and t1.amount = x.amount;
```
**Previous Version**
```
select *
from test t
minus
select t1.*
from test t1,
test t2
where t1.amount = -1*t2.amount
and t1.report_id = t2.report_id
and t1.my_id = t2.my_id;
```
This just gives one row of output, for the row with amt 100. I have asked you to clarify in the comments why any row with 200 should be included (if it should). I am also not sure whether you want one of the 40.17 values to be included. The difficulty with this is that the two positive values are identical in the example data you provided; is this correct? | **EDIT:**
Here's an updated query based on the feedback and additional sample data provided. This query has the advantage of querying the TEST table just once, and it returns the expected results (3 rows of amount 71, one row of amount 100, and one row of amount 40.17).
```
SELECT
report_id, MAX(fund) fund, my_id, MAX(org) org, SUM(amount) amount
FROM (
SELECT
report_id, fund, my_id, org, amount
, ROW_NUMBER() OVER( PARTITION BY report_id, my_id, amount ) rn
FROM
test
) t
GROUP BY
rn, ABS(amount), report_id, my_id
HAVING
SUM(amount) > 0;
```
Results:
```
report_id fund my_id org amount
5 100000 911 67120 71.00
5 100000 911 67140 71.00
5 100000 911 67150 71.00
1 100000 910 67150 40.17
1 100000 910 67150 100.00
```
**INITIAL ANSWER:**
The below query should provide what you're looking for. I'm not sure what should be done if org and/or fund are different since you're not grouping on those values - I decided to use a MAX aggregate function on fund and org to select a single value without affecting the grouping. Maybe those columns should just be left out?
```
SELECT
report_id, MAX(fund) fund, my_id, MAX(org) org, SUM(amount) amount
FROM
test
GROUP BY
report_id, my_id, ABS(amount)
HAVING
SUM(amount) > 0;
```
Results:
```
report_id fund my_id org amount
1 100000 910 67150 40.17
1 100000 910 67150 100.00
```
Note that based on the sample data you provided, the expected result should not show 200 because there's a corresponding -200 for the same report\_id (2) and my\_id (910). | oracle sql how to get the unmatch positive records | [
"sql",
"oracle",
"oracle11g"
]
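The pairing trick behind both answers (number identical amounts, then group by the absolute value and keep groups whose sum is still positive) can be tried end-to-end with Python's sqlite3, which requires SQLite 3.25+ for `ROW_NUMBER`. The data mirrors the question's first sample; fund/org are dropped since they don't affect the match:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (report_id TEXT, amount REAL, my_id TEXT);
    INSERT INTO test VALUES
        ('1',  50,    '910'), ('1', -50,    '910'),
        ('1',  100,   '910'),
        ('1',  40.17, '910'), ('1', -40.17, '910'), ('1', 40.17, '910');
""")

# Number identical amounts per (report_id, my_id, amount), then group by the
# absolute value: a +x with no matching -x leaves a group whose sum is > 0.
rows = conn.execute("""
    SELECT report_id, my_id, SUM(amount) AS amount
    FROM (SELECT report_id, my_id, amount,
                 ROW_NUMBER() OVER (PARTITION BY report_id, my_id, amount) AS rn
          FROM test)
    GROUP BY rn, ABS(amount), report_id, my_id
    HAVING SUM(amount) > 0
    ORDER BY amount
""").fetchall()
print(rows)
```

Only the unmatched 40.17 and the 100 survive, as in the expected result.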
I have a query which returns the following results...
```
SELECT wk, cost FROM my_table WHERE id = '234' order by week
WK COST
--------
17 446
18 446
19 446
26 588
27 588
28 588
```
What I need to try and do is write a query to achieve the following results...
```
WKS COST
------------------
17, 18, 19 446
26, 27, 28 588
```
Is this possible? | Try it like this:
```
WITH t(wk, COST) AS
(
SELECT 17, 446 FROM dual
UNION
SELECT 18, 446 FROM dual
UNION
SELECT 19, 446 FROM dual
UNION
SELECT 26, 588 FROM dual
UNION
SELECT 27, 588 FROM dual
UNION
SELECT 28, 588 FROM dual
)
SELECT listagg(wk,',') WITHIN GROUP(ORDER BY wk) AS wks, COST
FROM t
GROUP BY COST;
``` | By using the LISTAGG function you can achieve what you are trying to do:
```
SELECT deptno
,LISTAGG(sal, '-') WITHIN GROUP (ORDER BY sal) AS employees
FROM emp
GROUP BY
deptno;
-----------------------------------------------------
OUTPUT
-----------------------------------------------------
DEPTNO EMPLOYEES
10 1300-2450-5000
20 800-1100-2975-3000-3000
30 950-1250-1250-1500-1600-2850
3 rows returned in 0.01 seconds
``` | Oracle SQL - Group by one column and list results in another | [
"sql",
"oracle",
"oracle-sqldeveloper"
]
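`LISTAGG` is Oracle-specific; the same group-and-concatenate idea exists elsewhere as `STRING_AGG` (SQL Server 2017+, Postgres) or `GROUP_CONCAT` (MySQL, SQLite). A sketch with Python's sqlite3, using the question's data; note that SQLite does not guarantee the element order inside `GROUP_CONCAT`, so the ordered subquery is a common but non-contractual trick:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (wk INT, cost INT);
    INSERT INTO my_table VALUES (17,446),(18,446),(19,446),(26,588),(27,588),(28,588);
""")

# GROUP_CONCAT is SQLite's counterpart of Oracle's LISTAGG ... WITHIN GROUP.
rows = conn.execute("""
    SELECT GROUP_CONCAT(wk, ', ') AS wks, cost
    FROM (SELECT wk, cost FROM my_table ORDER BY wk)
    GROUP BY cost
    ORDER BY cost
""").fetchall()
print(rows)
```

Each cost group comes back as one comma-separated list of weeks.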
I'm currently unable to email out time based subscription reports from SSRS on a new SQL Server 2012 installation on Server 2012.
I receive the following error in the SSRS LogFiles
> schedule!WindowsService\_5!dc4!10/14/2013-10:01:09:: i INFO: Handling Event TimedSubscription with data 1a762da1-75ab-4c46-b989-471185553304.
> library!WindowsService\_5!dc4!10/14/2013-10:01:09:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: , An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database.;
> library!WindowsService\_5!dc4!10/14/2013-10:01:09:: w WARN: Transaction rollback was not executed connection is invalid
> schedule!WindowsService\_5!dc4!10/14/2013-10:01:09:: i INFO: Error processing event 'TimedSubscription', data = 1a762da1-75ab-4c46-b989-471185553304, error = Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. ---> System.Data.SqlClient.SqlException: Invalid object name 'ReportServerTempDB.dbo.ExecutionCache'.
The databases were migrated from SQL Server 2008; this was done by a third party and I'm unsure whether something was overlooked.
Any assistance would be greatly appreciated.
Thank you.
Dane | This thread seems to address your issue.
<http://www.sqlservercentral.com/Forums/Topic553765-147-1.aspx>
Please do a modicum of research before posting error messages.
From the link:

"After much consternation, I have found a trigger referencing the invalid object. Trigger [Schedule\_UpdateExpiration] on ReportServer table Schedule has the offending reference in it. In test, I altered this trigger to reference the correct report server tempdb, and now subscriptions appear to be working properly. So far I have found nothing else broken."
And:

"If anyone is looking for a quick answer then here is what I did to solve my problem:

* Updated the trigger on dbo.Schedule to reference the correct tempdb.
* Scripted all stored procedures with their permissions onto a new query, then "find and replaced" all instances of the old tempdb with the new one." | After a while searching for a solution to this issue, I found that it is caused by the SQL Server Agent job definitions not being fully migrated to the new service. For every subscription created in SSRS, there is an associated job defined in SQL Server Agent. For services that rely heavily on report delivery via subscriptions, it's best to export those job definitions and import them into the new server. | SSRS - Renamed TempDB and now Subscription Reports not Emailing | [
"sql",
"sql-server",
"reporting-services",
"sql-server-2012",
"database-migration"
]
I'm using a SQL query to determine the z-score ((x - μ) / σ) of several columns.
In particular, I have a table like the following:
```
my_table
id col_a col_b col_c
1 3 6 5
2 5 3 3
3 2 2 9
4 9 8 2
```
...and I want to select the z-score of every number of every row, according to the average and standard deviation of its column.
So the result would look like this:
```
id col_d col_e col_f
1 -0.4343 1.0203 ...
2 0.1434 -0.8729
3 -0.8234 -1.2323
4 1.889 1.5343
```
Currently my code computes the score for two columns and looks like this:
```
select id,
(my_table.col_a - avg(mya.col_a)) / stddev(mya.col_a) as col_d,
(my_table.col_b - avg(myb.col_b)) / stddev(myb.col_b) as col_e
from my_table,
(select col_a from my_table) mya,
(select col_b from my_table) myb
group by id;
```
However, this is extremely slow. I've been waiting minutes for a three column query.
Is there a better way to accomplish this? I'm using postgres but any general language will help me. Thanks! | you can use window functions like this:
```
select
t.id,
(t.col_a - avg(t.col_a) over()) / stddev(t.col_a) over() as col_d,
(t.col_b - avg(t.col_b) over()) / stddev(t.col_b) over() as col_e
from my_table as t
```
or cross join with precalculated `avg` and `stddev`:
```
select
t.id,
(t.col_a - tt.col_a_avg) / tt.col_a_stdev as col_d,
(t.col_b - tt.col_b_avg) / tt.col_b_stdev as col_e
from my_table as t
cross join (
select
avg(tt.col_a) as col_a_avg,
avg(tt.col_b) as col_b_avg,
stddev(tt.col_a) as col_a_stdev,
stddev(tt.col_b) as col_b_stdev
from my_table as tt
) as tt
``` | Using a WITH clause:
```
WITH stats AS ( SELECT avg ( col_a ) a_avg, stddev ( col_a ) a_stddev,
avg ( col_b ) b_avg, stddev ( col_b ) b_stddev
FROM my_table
)
SELECT id, ( col_a - a_avg) / a_stddev col_d,
( col_b - b_avg) / b_stddev col_e
FROM my_table, stats
```
But I like Roman's window solution better.
For Oğuz: to deal with NULL values in my\_table:
```
WITH stats AS (
SELECT avg ( col_a ) a_avg, stddev ( col_a ) as a_stddev,
avg ( col_b ) b_avg, stddev ( col_b ) as b_stddev
FROM my_table
)
SELECT id,
COALESCE ( ( col_a - a_avg) / a_stddev, NULL ) col_d,
COALESCE ( ( col_b - b_avg) / b_stddev, NULL ) col_e
FROM my_table, stats
``` | Calculating the respective z-score of several columns | [
"sql",
"postgresql"
]
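For comparison, here is the cross-join-with-precomputed-aggregates shape from the accepted answer, run on Python's sqlite3 with the question's `col_a` data. SQLite has no `stddev()`, so this sketch registers `sqrt` as a Python function and derives the population standard deviation from `AVG(x*x) - AVG(x)*AVG(x)`; in Postgres you would just call `stddev` directly:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.create_function("sqrt", 1, math.sqrt)  # SQLite builds may lack sqrt()
conn.executescript("""
    CREATE TABLE my_table (id INT, col_a REAL);
    INSERT INTO my_table VALUES (1,3),(2,5),(3,2),(4,9);
""")

# Cross join each row with the column's aggregates, computed once;
# population stddev = sqrt(E[x^2] - E[x]^2).
rows = conn.execute("""
    SELECT t.id, (t.col_a - s.mu) / s.sigma AS z
    FROM my_table t
    CROSS JOIN (SELECT AVG(col_a) AS mu,
                       sqrt(AVG(col_a*col_a) - AVG(col_a)*AVG(col_a)) AS sigma
                FROM my_table) s
    ORDER BY t.id
""").fetchall()
print(rows)
```

The aggregates are evaluated a single time, which is the whole speedup over re-scanning the table per row.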
I need an SQL query that, given a table of values of the form
```
| id | day1 | day2 | day3 | day4 | day5 |
| 1 | 4 | 0 | 5 | 0 | 0 |
| 2 | 2 | 0 | 0 | 0 | 0 |
```
gives
```
| id | trailing_zeros |
| 1 | 2 |
| 2 | 4 |
```
that is, the number of consecutive trailing zeros in the days columns for each id (from day5 backwards) | Here is possible solution
```
select id,
length(@k:=concat(day1,day2,day3,day4,day5&&1))
- length(trim(trailing '0' from @k)) as trailing_zeros
from days_table
``` | I'd go for something like this. Of course this is assuming you only have 5 days:
```
SELECT
id,
CASE WHEN day5 = 0 THEN
CASE WHEN day4 = 0 THEN
CASE WHEN day3 = 0 THEN
CASE WHEN day2 = 0 THEN
CASE WHEN day1 = 0 THEN 5
ELSE 4 END
ELSE 3 END
ELSE 2 END
ELSE 1 END
ELSE 0 END
amount_of_zeros
FROM t
```
Awful, isn't it? | SQL query to find number of trailing zeros across columns | [
"mysql",
"sql"
]
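The trim-based trick in the accepted answer ports to other engines; in SQLite, `RTRIM(s, '0')` plays the role of `TRIM(TRAILING '0' FROM s)`. This sketch also normalizes each day to 0/1 first, so multi-digit values cannot skew the string (data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE days_table (id INT, day1 INT, day2 INT, day3 INT, day4 INT, day5 INT);
    INSERT INTO days_table VALUES (1,4,0,5,0,0),(2,2,0,0,0,0);
""")

# Normalize each day to 0/1, concatenate into a 5-char string, then count how
# many characters RTRIM strips from the right.
rows = conn.execute("""
    SELECT id,
           LENGTH(flags) - LENGTH(RTRIM(flags, '0')) AS trailing_zeros
    FROM (SELECT id,
                 (day1>0) || (day2>0) || (day3>0) || (day4>0) || (day5>0) AS flags
          FROM days_table)
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 2), (2, 4)]
```

Row 1 becomes "10100" (2 trailing zeros); row 2 becomes "10000" (4).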
I have a table with the following data:
```
Name Score
A 2
B 3
A 1
B 3
```
I want a query which gives the following output.
```
Name Score
A 2
A 1
Subtotal 3
B 3
B 3
Subtotal 6
```
Please help me with an SQL code | In general, it would be something like this:
```
select Name, Score
from (
select Name, Score, Name as o
from Table1 as a
union all
select 'Subtotal', sum(Score), Name as o
from Table1 as a
group by Name
) as a
order by o, score
```
But, just as @ypercube said, there are different implementations in different RDBMSs. Your task is a bit complicated because your table doesn't have a primary key, so you can emulate one with the [row\_number()](http://msdn.microsoft.com/en-us/library/ms186734.aspx) function. For SQL Server you can use [grouping sets](http://technet.microsoft.com/en-us/library/ms177673.aspx):
```
with cte as (
select *, row_number() over(order by newid()) as rn
from Table1
)
select
case
when grouping(c.rn) = 1 then 'Subtotal'
else c.Name
end as Name,
sum(c.Score) as Score
from cte as c
group by grouping sets ((c.Name), (c.Name, c.rn))
order by c.Name;
```
Or [rollup()](http://technet.microsoft.com/en-us/library/ms177673.aspx):
```
with cte as (
select *, row_number() over(order by newid()) as rn
from Table1
)
select
case
when grouping(c.rn) = 1 then 'Subtotal'
else c.Name
end as Name,
sum(c.Score) as Score
from cte as c
group by rollup(c.Name, c.rn)
having grouping(c.Name) = 0
order by c.Name;
```
Note the [grouping()](http://sqlfiddle.com/#!3/4fbf9/29) function used to replace the `name` column with the static `'Subtotal'` string. Also note that the order of columns matters in the rollup() query.
**[=> sql fiddle demo](http://sqlfiddle.com/#!3/4fbf9/3)** | Some DBMS (like MySQL and SQL-Server) have the `WITH ROLLUP` modifier of the `GROUP BY` clause, that can be used for such a query:
```
SELECT pk, name,
SUM(score) AS score
FROM tableX
GROUP BY name, pk -- pk is the PRIMARY KEY of the table
WITH ROLLUP ;
```
Test at **[SQL-Fiddle](http://sqlfiddle.com/#!2/778e4/3)**
---
Other DBMS (SQL-Server, Oracle) have implemented the [`GROUP BY GROUPING SETS`](http://technet.microsoft.com/en-us/library/bb522495%28v=sql.105%29.aspx) feature which can be used (and is more powerful than `WITH ROLLUP`):
```
SELECT pk, name,
SUM(score) AS score
FROM tableX
GROUP BY GROUPING SETS
((name, pk), (name), ());
```
Test at **[SQL-Fiddle-2](http://sqlfiddle.com/#!4/63261/6)**
---
@AndriyM corrected me that there is also the `GROUP BY ROLLUP` (implemented by Oracle and SQL-Server), with same results (see updated SQL-Fiddle-2 above):
```
SELECT pk, name,
SUM(score) AS score
FROM t
GROUP BY ROLLUP
(name, pk) ;
``` | How to add a subtotal row in sql | [
"sql"
]
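SQLite supports neither `ROLLUP` nor `GROUPING SETS`, so the portable fallback is exactly the `UNION ALL` shape of the first query above: detail rows plus a per-group aggregate, sorted so each subtotal follows its group. Runnable with Python's sqlite3 and the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (name TEXT, score INT);
    INSERT INTO t VALUES ('A',2),('B',3),('A',1),('B',3);
""")

# sort_key 0 = detail rows, 1 = the subtotal, so each group ends with its sum.
rows = conn.execute("""
    SELECT name, score FROM (
        SELECT name, score, name AS grp, 0 AS sort_key FROM t
        UNION ALL
        SELECT 'Subtotal', SUM(score), name, 1 FROM t GROUP BY name
    )
    ORDER BY grp, sort_key, score
""").fetchall()
print(rows)
# [('A', 1), ('A', 2), ('Subtotal', 3), ('B', 3), ('B', 3), ('Subtotal', 6)]
```

The hidden `grp`/`sort_key` columns only exist to interleave the subtotal rows correctly.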
I want to enforce a specific order of JOINs.
```
SELECT *
FROM (lives_in as t1 NATURAL JOIN preferences p1) l1
JOIN (lives_in t2 NATURAL JOIN preferences p2) l2
ON l1.dormid = l2.dormid
```
This returns an error.
Can anyone help? Thanks a lot! | Your aliased queries are missing a `SELECT` clause, so try this:
```
SELECT *
FROM (
select * -- added this
FROM lives_in as t1
NATURAL JOIN preferences p1) l1
JOIN (
select * -- added this
FROM lives_in t2
NATURAL JOIN preferences p2) l2
ON l1.dormid = l2.dormid
``` | The order of joins doesn't matter for the results. You probably want something like this:
```
SELECT *
FROM lives_in t1
NATURAL JOIN preferences p1 ON p1.some_id = t1.id
NATURAL JOIN preferences p2 ON p2.some_id = t1.id
```
Also, most people call it an INNER JOIN, not a NATURAL JOIN, btw. | (A join B) join (C join D) | [
"mysql",
"sql"
]
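The accepted answer's point (a parenthesized derived table must be a full `SELECT` before it can be aliased and joined) can be checked with Python's sqlite3; table name and data here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE lives_in (person TEXT, dormid INT);
    INSERT INTO lives_in VALUES ('ann', 1), ('bob', 1), ('cat', 2);
""")

# Each side of the join is a complete SELECT wrapped in parentheses and
# aliased; the extra person < person condition avoids self-pairs/duplicates.
rows = conn.execute("""
    SELECT l1.person, l2.person
    FROM (SELECT * FROM lives_in) l1
    JOIN (SELECT * FROM lives_in) l2
      ON l1.dormid = l2.dormid AND l1.person < l2.person
    ORDER BY l1.person
""").fetchall()
print(rows)  # [('ann', 'bob')]
```

Dropping the inner `SELECT *` and joining bare parenthesized table names, as in the question, is a syntax error in most engines.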
I have used four tables, namely VehicleMaster, OwnerMaster, CustomerMaster and CustomerVehicle, in my project.
As the name explains, the first three are master tables and the last table (CustomerVehicle) contains CustomerID & VehicleID. The table hierarchy is as follows:
```
OwnerMaster (OwnerID, OwnerName)
VehicleMaster (VehicleID, OwnerID, VehicleDetails blah blah...)
CustomerMaster (CustomerID, CustomerName..)
CustomerVehicle (CustomerID, VehicleID)
```
Now I would like to get how many vehicles are running under each owner. The output should be something like this:
```
OwnerName, TotalVehicles, No of Running Vehicles, NonRunning Vehicles.
xxxx, 40, 34, 6
```
Any help will be much appreciated.
Thanks, | There can be owners without vehicles, we don't know... something like this should work as well:
```
SELECT
om.OwnerName,
count(vm.VehicleID) Total,
sum(case when vm.VehicleID is not null
and cv.VehicleID is not null then 1 else 0 end) Running,
sum(case when vm.VehicleID is not null
and cv.VehicleID is null then 1 else 0 end) NotRunning
FROM OwnerMaster om
LEFT JOIN VehicleMaster vm ON om.OwnerID = vm.OwnerID
LEFT JOIN CustomerVehicle cv ON vm.VehicleID = cv.VehicleID
GROUP BY om.OwnerName
``` | Something like this should work; left join the tables and (distinct) count the respective fields;
```
SELECT om.OwnerName,
COUNT(DISTINCT vm.VehicleID) TotalVehicles,
COUNT(DISTINCT cv.VehicleID) Running,
COUNT(DISTINCT vm.VehicleID)-COUNT(DISTINCT cv.VehicleID) NotRunning
FROM OwnerMaster om
LEFT JOIN VehicleMaster vm ON om.OwnerID = vm.OwnerID
LEFT JOIN CustomerVehicle cv ON vm.VehicleID = cv.VehicleID
GROUP BY om.OwnerID, om.OwnerName
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!6/43b28/1). | SQL-Using JOINS to get the values from three tables | [
"sql",
"linq"
]
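The LEFT JOIN plus conditional-aggregation pattern from the first answer can be exercised end-to-end with sqlite3. The data is made up for illustration: owner X has three vehicles, one of them rented out; owner Y has none, and still appears:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE OwnerMaster (OwnerID INT, OwnerName TEXT);
    CREATE TABLE VehicleMaster (VehicleID INT, OwnerID INT);
    CREATE TABLE CustomerVehicle (CustomerID INT, VehicleID INT);
    INSERT INTO OwnerMaster VALUES (1,'X'),(2,'Y');
    INSERT INTO VehicleMaster VALUES (10,1),(11,1),(12,1);
    INSERT INTO CustomerVehicle VALUES (100,10);
""")

# LEFT JOINs keep owners with no vehicles; the CASE sums split the counts.
rows = conn.execute("""
    SELECT om.OwnerName,
           COUNT(vm.VehicleID) AS Total,
           SUM(CASE WHEN cv.VehicleID IS NOT NULL THEN 1 ELSE 0 END) AS Running,
           SUM(CASE WHEN vm.VehicleID IS NOT NULL
                     AND cv.VehicleID IS NULL THEN 1 ELSE 0 END) AS NotRunning
    FROM OwnerMaster om
    LEFT JOIN VehicleMaster vm ON om.OwnerID = vm.OwnerID
    LEFT JOIN CustomerVehicle cv ON vm.VehicleID = cv.VehicleID
    GROUP BY om.OwnerName
    ORDER BY om.OwnerName
""").fetchall()
print(rows)  # [('X', 3, 1, 2), ('Y', 0, 0, 0)]
```

`COUNT(vm.VehicleID)` ignores the NULLs produced for owner Y, which is why the totals come out right.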
I have very little knowledge of how Access works, but I need something more efficient than what I am doing now.
I have these queries:
```
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #04/07/2003#
WHERE ((([Receipt Audit].[Receipt Date])=#4/7/303#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #2/27/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#2/27/404#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] =#5/29/2003#
WHERE ((([Receipt Audit].[Receipt Date])=#5/29/303#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] =#8/25/2003#
WHERE ((([Receipt Audit].[Receipt Date])=#8/25/303#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #8/28/2003#
WHERE ((([Receipt Audit].[Receipt Date])=#8/28/303#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #9/29/2003#
WHERE ((([Receipt Audit].[Receipt Date])=#9/29/303#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #2/25/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#2/25/404#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #3/30/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#3/30/404#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #8/23/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#8/23/404#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #8/25/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#8/25/404#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #8/26/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#8/26/404#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #8/27/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#8/27/404#));
UPDATE [Receipt Audit]
SET [Receipt Audit].[Receipt Date] = #8/30/2004#
WHERE ((([Receipt Audit].[Receipt Date])=#8/30/404#));
```
The problem is I have to run them all individually. Is there a way I could combine them into one query? Any help is appreciated. | Perhaps something like this?
```
UPDATE [Receipt Audit]
SET [Receipt Date] = DateSerial(
        Switch(Year([Receipt Date]) = 303, 2003,
               Year([Receipt Date]) = 404, 2004,
               True, Year([Receipt Date])),
        Month([Receipt Date]),
        Day([Receipt Date]))
WHERE Year([Receipt Date]) IN (303,404)
``` | Unfortunately Microsoft Access can't run multiple queries simultaneously. You have two approaches I can think of.
* [Write a Macro or VBA Procedure](http://office.microsoft.com/en-us/access-help/create-a-macro-HA010030811.aspx):
> You can create a macro to perform a specific series of actions, and
> you can create a macro group to perform related series of actions.
>
> In Microsoft Office Access 2007, macros can be contained in macro
> objects (sometimes called standalone macros), or they can be embedded
> into the event properties of forms, reports, or controls. Embedded
> macros become part of the object or control in which they are
> embedded. Macro objects are visible in the Navigation Pane, under
> Macros; embedded macros are not.
However, the other route may be far better. As you won't have to be inside of Access to run the Macro.
* Write a Query Program:
This way you can write a program that will physically connect to the database; then the program can do the heavy lifting. This will give you far more control over it as well, since an actual programming language will be doing the manipulation.
Not sure if those help. I actually have an example project for something similar; I'll post it to Git and edit it here for you.
"sql",
"ms-access",
"updates"
]
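Outside Access, the same collapse-into-one-statement idea is a single `UPDATE` with a `CASE` expression. A sqlite3 sketch that fixes the 3-digit years; dates are stored as ISO strings here purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE receipt_audit (receipt_date TEXT);
    INSERT INTO receipt_audit VALUES ('0303-04-07'),('0404-02-27'),('2003-05-29');
""")

# One UPDATE with CASE replaces the pile of per-date statements.
conn.execute("""
    UPDATE receipt_audit
    SET receipt_date = CASE substr(receipt_date, 1, 4)
                           WHEN '0303' THEN '2003' || substr(receipt_date, 5)
                           WHEN '0404' THEN '2004' || substr(receipt_date, 5)
                       END
    WHERE substr(receipt_date, 1, 4) IN ('0303', '0404')
""")
rows = [r[0] for r in conn.execute(
    "SELECT receipt_date FROM receipt_audit ORDER BY receipt_date")]
print(rows)  # ['2003-04-07', '2003-05-29', '2004-02-27']
```

The `WHERE` clause keeps already-correct rows untouched, so the statement is safe to rerun.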
I have an SQL database with 3 tables:
1. `Customer record` table, holding master data (each row is unique)
2. `Customer notes`, each note date/time stamped (there could be many notes per customer, or none at all)
3. `Product` table, showing which customer has bought which product
**Table: tbl\_Customer\_Records**
CustomerID---- Company Name-----Company Segment----- Promo Code
**Table: tbl\_Customer\_Notes**
CustomerID---- Note-----Created Date
**Table: tbl\_Customer\_Products**
CustomerID---- Product Category
What I want is to pull a list of customer records which includes the latest note only, so there are no duplicate lines if multiple notes exist. But I also want my report to include customer records if **no notes** exist at all. I've achieved the first part of this with a SELECT MAX function, and that works well; the problem is when I add the OR ... IS NULL clause in the final line of the code below. This doesn't work, and I can't figure out a solution.
Any suggestions will be greatly appreciated!
```
SELECT
[tbl_Customer_Records].[CustomerID],
[tbl_Customer_Records].[Company Name],
[tbl_Customer_Records].[Company Segment],
[tbl_Customer_Records].[Promo Code],
[tbl_Customer_Notes].[Note],
[tbl_Customer_Products].[Product Category]
FROM
tbl_Customer_Records
LEFT OUTER JOIN tbl_Customer_Notes
ON tbl_Customer_Records.CustomerID = tbl_Customer_Notes.CustomerID
LEFT OUTER JOIN tbl_Customer_Products
ON tbl_Customer_Records.CustomerID = tbl_Customer_Products.CustomerID
WHERE
[Product Category] in ('Nuts','Bolts','Screws','Spanners')
AND
[Created Date] in (SELECT MAX ([Created Date]) FROM tbl_Customer_Notes GROUP BY [CustomerID])
OR tbl_Customer_Notes.Note is null
``` | There're a few tricks to do this kind of query (row\_number or join with grouped data), but I think most cleanest one in your case is to use outer apply:
```
select
cr.[CustomerID],
cr.[Company Name],
cr.[Company Segment],
cr.[Promo Code],
cn.[Note],
cp.[Product Category]
from tbl_Customer_Records as cr
left outer join tbl_Customer_Products as cp on cp.CustomerID = cr.CustomerID
outer apply (
select top 1
t.[Note]
from tbl_Customer_Notes as t
where t.[CustomerID] = cr.[CustomerID]
order by t.[Created_Date] desc
) as cn
where
cp.[Product Category] in ('Nuts','Bolts','Screws','Spanners')
```
I changed all the clumsy `table name.column name` references to `alias.column name`; I think it's much more readable this way.
Or:
```
select
cr.[CustomerID],
cr.[Company Name],
cr.[Company Segment],
cr.[Promo Code],
cn.[Note],
cp.[Product Category]
from tbl_Customer_Records as cr
left outer join tbl_Customer_Products as cp on cp.CustomerID = cr.CustomerID
left outer join tbl_Customer_Notes as cn on
cn.CustomerID = cr.CustomerID and
cn.[Created_Date] = (select max(t.[Created_Date]) from tbl_Customer_Notes as t where t.CustomerID = cr.CustomerID)
where
cp.[Product Category] in ('Nuts','Bolts','Screws','Spanners')
``` | You can add your filter condition in `ON` predicate to preserve rows from left table and fetch only required matching rows from right table, from first LEFT OUTER JOIN operator. Following query should work:
```
SELECT
CR.[CustomerID],
CR.[Company_Name],
CR.[Company_Segment],
CR.[Promo_Code],
CN.[Note],
CP.[Product_Category]
FROM
tbl_Customer_Records CR
LEFT OUTER JOIN tbl_Customer_Notes CN
ON CR.CustomerID = CN.CustomerID AND CN.[Created_Date] in (SELECT MAX ([Created_Date])
FROM tbl_Customer_Notes
WHERE CR.CustomerID = tbl_Customer_Notes.CustomerID
GROUP BY [CustomerID])
LEFT OUTER JOIN tbl_Customer_Products CP
ON CR.CustomerID = CP.CustomerID
WHERE
[Product_Category] in ('Nuts','Bolts','Screws','Spanners')
``` | SQL - query to report MAX DATE results or NULL results | [
"sql",
"sql-server",
"max"
]
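SQLite has no `OUTER APPLY`, but the same "latest child row per parent, keeping parents with no children" result falls out of a correlated subquery in the join condition, which is the shape of the second query above. Schema and data below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INT, name TEXT);
    CREATE TABLE notes (customer_id INT, note TEXT, created TEXT);
    INSERT INTO customers VALUES (1,'Acme'),(2,'Beta');
    INSERT INTO notes VALUES (1,'old','2013-01-01'),(1,'new','2013-06-01');
""")

# Join only the newest note; the LEFT JOIN keeps customers with no notes.
rows = conn.execute("""
    SELECT c.name, n.note
    FROM customers c
    LEFT JOIN notes n
      ON n.customer_id = c.id
     AND n.created = (SELECT MAX(created) FROM notes
                      WHERE customer_id = c.id)
    ORDER BY c.id
""").fetchall()
print(rows)  # [('Acme', 'new'), ('Beta', None)]
```

Putting the MAX filter in the `ON` clause rather than `WHERE` is the key: filtering in `WHERE` would silently turn the LEFT JOIN into an inner join and drop the note-less customers.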
I want to fetch the **category name** too. I have the following tables:
**Product Table**
```
mysql> describe prod;
+----------+-----------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+-----------------+------+-----+---------+----------------+
| pid | int(4) unsigned | NO | PRI | NULL | auto_increment |
| pro_name | varchar(32) | NO | | NULL | |
+----------+-----------------+------+-----+---------+----------------+
```
**Category Table**
```
mysql> describe cat;
+----------+-----------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+-----------------+------+-----+---------+----------------+
| cid | int(4) unsigned | NO | PRI | NULL | auto_increment |
| cat_name | varchar(32) | NO | | NULL | |
+----------+-----------------+------+-----+---------+----------------+
```
**`cat_pro`** Table [ Relation Table ]
```
mysql> describe cat_pro;
+----------+-----------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+-----------------+------+-----+---------+----------------+
| cat_id | int(4) unsigned | NO | | NULL | |
| pro_id | int(4) unsigned | NO | | NULL | |
+----------+-----------------+------+-----+---------+----------------+
```
My **current** `query`:
```
mysql> select pid, pro_name, cat_pro.cat_id
from prod
left join cat_pro on cat_pro.pro_id=prod.pid
where pid='2';
```
But when I run this command, it gives me an **error** saying `unknown column cat_pro.cat_id in on clause`
```
mysql> select pid, pro_name, cat_pro.cat_id, cat.cat_name
from prod
left join cat_pro on cat_pro.pro_id=prod.pid
left join cat on cat.cid=cat_pro.cat_id
where pid='2'
```
Is there any way I can fetch the **category name** from the `cat` table too, or do I have to run another `query` to fetch the **category name** from the `cat` table?
Thanks | query:
```
select pid, pro_name, cat_pro.cat_id, cat.cat_name
from
prod
inner join
cat_pro
on (cat_pro.pro_id = prod.pid)
inner join
cat
on (cat.cid = cat_pro.cat_id)
where pid='2';
```
works without errors for the db structure you provided | Hope this will be helpful to you. In the query below we are using a sub-query to get the category name based on the category id.
```
select
prod.pid,
prod.pro_name,
cat_pro.cat_id,
(
select
cat_name
from
Category c
where
c.cid = cat_pro.cat_id
) as category_name
from
prod left join
cat_pro on cat_pro.pro_id=prod.pid
where prod.pid='2';
``` | Fetching column data from 3 table with Relation table in-between as mediator in mysql | [
"mysql",
"sql"
]
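The accepted three-table chain, reduced to a runnable sqlite3 sketch; the `LEFT JOIN`s also keep products that have no category row at all (data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prod (pid INTEGER PRIMARY KEY, pro_name TEXT);
    CREATE TABLE cat (cid INTEGER PRIMARY KEY, cat_name TEXT);
    CREATE TABLE cat_pro (cat_id INT, pro_id INT);
    INSERT INTO prod VALUES (1,'pen'),(2,'book');
    INSERT INTO cat VALUES (10,'stationery');
    INSERT INTO cat_pro VALUES (10, 2);
""")

# Chain through the link table; LEFT JOINs keep products with no category.
rows = conn.execute("""
    SELECT p.pid, p.pro_name, c.cat_name
    FROM prod p
    LEFT JOIN cat_pro cp ON cp.pro_id = p.pid
    LEFT JOIN cat c ON c.cid = cp.cat_id
    ORDER BY p.pid
""").fetchall()
print(rows)  # [(1, 'pen', None), (2, 'book', 'stationery')]
```

One query, no second round-trip for the category name.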
I've got this recursive function.
For a table of hierarchical values, I receive an id and want to return the parent of that id at a certain level (unity\_id).
```
CREATE FUNCTION getObs
(
@obs int
)
RETURNS table -- <-- returns table so it's a table function
as
return -- <- here's actual return
(
WITH ret2 AS(
SELECT *
FROM OBS
WHERE OBS_Id = @obs
UNION ALL
SELECT t.*
FROM OBS as t INNER JOIN
ret2 r ON t.OBS_Id = r.UnitId
)
SELECT *
FROM ret2 r
WHERE unity_id = 7
)
```
But I need to return an int instead of a table, so I've tried the following.
I've put `RETURNS int` and not `RETURNS table`, and I'm selecting the property I want (it's an int):
```
CREATE FUNCTION getObs
(
@obs int
)
RETURNS int
as
return
(
WITH ret2 AS(
SELECT *
FROM OBS
WHERE OBS_Id = @obs
UNION ALL
SELECT t.*
FROM OBS as t INNER JOIN
ret2 r ON t.OBS_Id = r.UnitId
)
SELECT r.OBS_Id
FROM ret2 r
WHERE unity_id = 7
)
```
Thank you for any help, I'm new to SQL and functions. | You can use the code below to declare a variable, assign a value to it and return it.
```
CREATE FUNCTION getObs
(
@obs int
)
RETURNS int
as
begin
declare @return int
;WITH ret2 AS(
SELECT *
FROM OBS
WHERE OBS_Id = @obs
UNION ALL
SELECT t.*
FROM OBS as t INNER JOIN
ret2 r ON t.OBS_Id = r.UnitId
)
SELECT @return = r.OBS_Id
FROM ret2 r
WHERE unity_id = 7
return @return
end
``` | Change the function to explicitly return the int, rather than casting it from a table.
```
as
begin
declare @ret int
;WITH ret2 AS(
SELECT *
FROM OBS
WHERE OBS_Id = @obs
UNION ALL
SELECT t.*
FROM OBS as t INNER JOIN
ret2 r ON t.OBS_Id = r.UnitId
)
SELECT @ret = r.OBS_Id
FROM ret2 r
WHERE unity_id = 7
return @ret
end
``` | Returning a scalar value from a recursive function | [
"sql",
"sql-server"
]
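The recursive walk itself is portable: SQLite supports `WITH RECURSIVE`, so the "climb to the ancestor at unity level 7 and return a single value" idea can be sketched as below, with a client-side function standing in for the scalar SQL function. The hierarchy data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE obs (obs_id INT, unit_id INT, unity_id INT);
    -- 3 is a child of 2, which is a child of 1; node 1 sits at level 7
    INSERT INTO obs VALUES (1, NULL, 7), (2, 1, 5), (3, 2, 3);
""")

def get_obs(conn, obs_id):
    # Walk up the hierarchy and return the ancestor at unity_id = 7, or None.
    row = conn.execute("""
        WITH RECURSIVE ret2(obs_id, unit_id, unity_id) AS (
            SELECT obs_id, unit_id, unity_id FROM obs WHERE obs_id = ?
            UNION ALL
            SELECT t.obs_id, t.unit_id, t.unity_id
            FROM obs t JOIN ret2 r ON t.obs_id = r.unit_id
        )
        SELECT obs_id FROM ret2 WHERE unity_id = 7
    """, (obs_id,)).fetchone()
    return row[0] if row else None

print(get_obs(conn, 3))  # 1
```

Fetching the single value from the final SELECT is exactly what the `SELECT @ret = ...` assignment does inside the T-SQL scalar function.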
I have a table `mytable(**id**, colA, colB)` with 5 million records. colA and colB have no other constraints. I have to copy the values in colB to colA and make colB null. Here is the procedure I created, below. Sometimes it runs in 5 minutes and sometimes it takes 45 minutes. What is wrong with this script? I'm sure that no other process is accessing this table during this time. How do I optimize this? (I know that there are many other factors that affect the speed, like the DB engine itself being slow, or the machine running at full capacity at that time. I'm looking for something that I have in my hands, i.e., my script.)
```
DECLARE
l_update_total pls_integer := 0;
CURSOR cur IS SELECT id, colB FROM mytable where colA is null;
TYPE t_recs IS TABLE OF cur%ROWTYPE;
l_loop_count pls_integer := 0;
l_recs t_recs;
l_rec cur%ROWTYPE;
BEGIN
OPEN cur;
LOOP
FETCH cur BULK COLLECT INTO l_recs LIMIT 500;
EXIT WHEN l_recs.COUNT = 0;
FOR indx IN 1 .. l_recs.COUNT
LOOP
l_rec := l_recs(indx);
UPDATE mytable SET colB=null,colA = l_rec.colB WHERE id = l_rec.id;
l_update_total := l_update_total + SQL%ROWCOUNT;
END LOOP;
COMMIT;
END LOOP;
END;
/
``` | Why aren't you writing a simple SQL statement
```
UPDATE mytable
SET colB = null,
colA = colB
WHERE colA IS NULL;
```
A pure SQL solution is going to be faster than a PL/SQL solution.
If you don't try to understand why your runtimes vary, however, it will be exceptionally difficult to do much to improve performance. If the system is so overloaded that it adds 40 minutes to the runtime of your procedure and none of that is due to locks (since you claim that no other sessions are reading from or writing to the table in question), it is entirely possible that it will add 40 minutes to the runtime of a single SQL statement.
If you insist on using a slower PL/SQL method, at least take out the `commit` in the loop to avoid incurring that overhead with every batch. | Dump the cursors and procedural logic, and rewrite it as SQL. | PL/SQL script running very slow | [
"sql",
"performance",
"oracle",
"plsql"
]
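The accepted answer's single-statement rewrite is easy to verify: in SQLite, as in Oracle, every expression on the right-hand side of `SET` sees the row's pre-update values, so `colA = colB` and `colB = NULL` work in one statement regardless of order. A tiny sqlite3 check with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (id INT, colA INT, colB INT);
    INSERT INTO mytable VALUES (1, NULL, 10), (2, 5, 20);
""")

# SET expressions all read the row's old values, so the order doesn't matter.
conn.execute("UPDATE mytable SET colB = NULL, colA = colB WHERE colA IS NULL")
rows = conn.execute("SELECT id, colA, colB FROM mytable ORDER BY id").fetchall()
print(rows)  # [(1, 10, None), (2, 5, 20)]
```

Row 1 gets its colB moved into colA in a single pass; row 2 is untouched, just like the set-based Oracle UPDATE.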
I want to be able to select persons that are *ONLY* Asian *AND* White. For example, in my table below, the only persons I would like to retrieve records for are PersonIds 1 and 2 (Desired Result).
Table:

Desired Result:
 | There are a few ways that you can get the result.
Using a HAVING clause with an aggregate function:
```
select personid, race
from yourtable
where personid in
(select personid
from yourtable
group by personid
having
sum(case when race = 'white' then 1 else 0 end) > 0
and sum(case when race = 'asian' then 1 else 0 end) > 0
and sum(case when race not in('asian', 'white') then 1 else 0 end) = 0);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/8e559/4)
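The same logic can also be sanity-checked quickly with SQLite in Python (the table name and sample data are made up to mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourtable (personid INTEGER, race TEXT);
INSERT INTO yourtable VALUES
  (1, 'white'), (1, 'asian'),
  (2, 'asian'), (2, 'white'),
  (3, 'white'), (3, 'asian'), (3, 'black'),
  (4, 'white');
""")
rows = conn.execute("""
  SELECT personid, race
  FROM yourtable
  WHERE personid IN
    (SELECT personid
     FROM yourtable
     GROUP BY personid
     HAVING SUM(CASE WHEN race = 'white' THEN 1 ELSE 0 END) > 0
        AND SUM(CASE WHEN race = 'asian' THEN 1 ELSE 0 END) > 0
        AND SUM(CASE WHEN race NOT IN ('asian', 'white') THEN 1 ELSE 0 END) = 0)
  ORDER BY personid, race
""").fetchall()
print(rows)  # persons 3 (extra race) and 4 (missing 'asian') are filtered out
```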
Or you can use `count(distinct race)` in a having clause:
```
;with cte as
(
select personid
from yourtable
where race in ('white', 'asian')
and personid not in (select personid
from yourtable
where race not in ('white', 'asian'))
group by personid
having count(distinct race) = 2
)
select personid, race
from yourtable
where personid in (select personid
from cte);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/8e559/7) | This should do
```
select * from table where personID in (
select personID from table
group by personID having count(distinct race) = 2 and min(race) = 'Asian'
and max(race) = 'White')
``` | Selecting records matching ONLY/AND criteria on one column | [
"",
"sql",
"sql-server",
""
] |
Example 1
```
+--------------------------+
| IDENT | CURRENT | SOURCE |
+--------------------------+
| 12345 | 12345 | A |
| 23456 | 12345 | B |
| 34567 | 12345 | C |
+--------------------------+
```
Example 2
```
+--------------------------+
| IDENT | CURRENT | SOURCE |
+--------------------------+
| 12345 | 55555 | A |
| 23456 | 55555 | B |
+--------------------------+
```
I'm trying to write a select query that will show all records where the CURRENT count = 2 and SOURCE contains both A and B (NOT C).
Example A should not show up, as there are 3 entries for that CURRENT and the record is linked to SOURCE C.
Example B is what I'm looking for the query to find: CURRENT has two records and is only linked to SOURCE 'A' and 'B'.
Currently if I run something similar to "where SOURCE = A or SOURCE = B", results are records that just have SOURCE of A, OR A+C.
NOTES: IDENT is always a unique value. CURRENT links multiple IDENTs from different SOURCEs. | 1. If `SOURCE` values are guaranteed to be unique per `CURRENT`:
```
SELECT CURRENT
FROM atable
GROUP BY CURRENT
HAVING COUNT(SOURCE) = 2
AND COUNT(CASE WHEN SOURCE IN ('A', 'B') THEN SOURCE END) = 2
;
```
2. If `SOURCE` values aren't unique per `CURRENT` but `CURRENT`s with duplicate entries of `'A'` or `'B'` are allowed:
```
SELECT CURRENT
FROM atable
GROUP BY CURRENT
HAVING COUNT(DISTINCT SOURCE) = 2
AND COUNT(DISTINCT CASE WHEN SOURCE IN ('A', 'B') THEN SOURCE END) = 2
;
```
3. If `SOURCE` values aren't unique and groups with duplicate `SOURCE` entries aren't allowed:
```
SELECT CURRENT
FROM atable
GROUP BY CURRENT
HAVING COUNT(SOURCE) = 2
AND COUNT(DISTINCT SOURCE) = 2
AND COUNT(DISTINCT CASE WHEN SOURCE IN ('A', 'B') THEN SOURCE END) = 2
;
```
Every query returns only distinct `CURRENT` values matching the requirements. Use the query as a derived dataset and join it back to your table to get the details.
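For a quick sanity check, here is option 2 run against SQLite from Python (the column is renamed `curr` in this sketch because CURRENT is a reserved word in several engines; the data mirrors the examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE atable (ident INTEGER, curr INTEGER, source TEXT);
INSERT INTO atable VALUES
  (1, 111, 'A'), (2, 111, 'B'), (3, 111, 'C'),  -- linked to C: excluded
  (4, 222, 'A'), (5, 222, 'B'),                 -- exactly A and B: kept
  (6, 333, 'A'), (7, 333, 'C');                 -- A and C: excluded
""")
rows = conn.execute("""
  SELECT curr
  FROM atable
  GROUP BY curr
  HAVING COUNT(DISTINCT source) = 2
     AND COUNT(DISTINCT CASE WHEN source IN ('A', 'B') THEN source END) = 2
""").fetchall()
print(rows)  # [(222,)]
```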
All the above options assume that either `SOURCE` is a `NOT NULL` column or that NULLs can just be disregarded. | We're clearly missing more information. Let's take example data (thanks gloomy for the initial fiddle).
```
| ID | CURRENT | SOURCE |
|----|---------|--------|
| 1 | 111 | A |
| 2 | 111 | B |
| 3 | 111 | C |
| 4 | 222 | A |
| 5 | 222 | B |
| 6 | 333 | A |
| 7 | 333 | C |
| 8 | 444 | B |
| 9 | 444 | C |
| 10 | 555 | B |
| 11 | 666 | A |
| 12 | 666 | A |
| 13 | 666 | B |
| 14 | 777 | A |
| 15 | 777 | A |
```
I assume you only need this as the result:
```
| ID | CURRENT | SOURCE |
|----|---------|--------|
| 4 | 222 | A |
| 5 | 222 | B |
```
This query will work with any number of sources and will produce the expected output:
```
SELECT * FROM test
WHERE CURRENT IN (
SELECT CURRENT FROM test
WHERE CURRENT NOT IN (
SELECT CURRENT FROM test
WHERE SOURCE NOT IN ('A', 'B')
)
GROUP BY CURRENT
HAVING count(SOURCE) = 2 AND count(DISTINCT SOURCE) = 2
)
``` | Select query where record count = 2 and column contains either two values | [
"",
"sql",
"count",
"having",
""
] |
I want to make a combined search on the first, middle and lastname. The query is:
```
SELECT picture,person_id, firstname+ ' ' +middlename+ ' ' +lastname AS fullName
FROM PERSON
WHERE (firstname+ ' ' +middlename+ ' ' +lastname LIKE"%#queryString#%")
```
Now this is working for: Peter den Test (first/ middle/ last)
This is not working for: Marc Baker (first / no middle name /last). I have to type in two spaces before Marc Baker shows up. Can I check if the middle name is empty in the query?
Thanks! | Try like this:
```
SELECT picture,person_id, firstname+ ' ' +middlename+ ' ' +lastname AS fullName
FROM PERSON
WHERE (firstname+ ' ' + LTRIM(middlename + ' ') + lastname LIKE"%#queryString#%")
``` | I think this would work:
```
WHERE (firstname+ ' ' +LTRIM(middlename+ ' ' +lastname) LIKE"%#queryString#%")
``` | Firstname, middlename and lastname MS SQL | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table `orders`, consisting of 3 columns:
* `order_id int primary key`,
* `cust_id integer`,
* `order_date integer`
with data:
```
order_id | cust_id | order_date
1 | 10 | 1325376000
2 | 10 | 1325548800
3 | 10 | 1325894400
4 | 11 | 1325462400
5 | 11 | 1325721600
6 | 12 | 1325721600
7 | 12 | 1326326400
```
I'm trying to write a query to give a Cursor containing the most recent order for a given customer that I can then pass to a `SimpleCursorAdapter` and bind to a `ListView`, such that the user sees the following:
* 10 1325894400 (formatted as human readable date)
* 11 1325721600
* 12 1326326400
I've tried joining the table to itself in various ways without any luck:
<http://sqlfiddle.com/#!2/77b22d/1/0>
If I have to populate an `ArrayList` and use an `ArrayAdapter` I will, but I'd like to exhaust this option first. Thanks!
EDIT: Apologies for the differences between here and the SQLFiddle; my brain was running on two separate threads. The Fiddle is the 'correct' data set.
2nd EDIT: Added a new wrinkle (ignore table above, [see the SQL fiddle](http://sqlfiddle.com/#!2/bbc29/2)). Adding a field for free-form text and then running the query returns the first record in the GROUP BY, plus the field for the max\_date. I need to pull the whole record containing the date that equals max\_date. Adding a WHERE clause breaks the query. Thoughts? | Try this
```
select
order_number
, cust_number
, order_date
from orders o1
where order_number =
(
select order_number
from orders o2
where o2.cust_number = o1.cust_number
and order_date =
(
select max(order_date)
from orders o3
where o3.cust_number = o2.cust_number
)
)
```
This will get you the correct records and you can format the date as you like in the main query.
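As a quick sanity check, the same correlated-subquery pattern run against SQLite from Python, using the Fiddle data (an ORDER BY is added only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_number INTEGER PRIMARY KEY,
                     cust_number INTEGER NOT NULL,
                     order_date INTEGER NOT NULL);
INSERT INTO orders VALUES
  (1001,10,1005),(1,10,1325376000),(2,10,1325548800),
  (3,11,1325894400),(4,11,1325462400),(5,11,1325721600),
  (6,12,1325721600),(7,12,1326326400),(8,12,1326326460);
""")
rows = conn.execute("""
  SELECT order_number, cust_number, order_date
  FROM orders o1
  WHERE order_number =
    (SELECT order_number FROM orders o2
     WHERE o2.cust_number = o1.cust_number
       AND order_date =
         (SELECT MAX(order_date) FROM orders o3
          WHERE o3.cust_number = o2.cust_number))
  ORDER BY cust_number
""").fetchall()
print(rows)  # one row per customer: the most recent order
```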
Note: My answer is a bit different from your display since the example here and the Fiddle are different; I used the Fiddle one:
```
create table orders (order_number integer primary key,
cust_number integer not null,
order_date integer not null);
insert into orders values (1001,10,1005),
(1,10,1325376000),
(2,10,1325548800),
(3,11,1325894400),
(4,11,1325462400),
(5,11,1325721600),
(6,12,1325721600),
(7,12,1326326400),
(8,12,1326326460);
``` | If you just want the latest record for each customer, I think this will work:
```
SELECT order_number, cust_number, max(order_date) as max_date FROM orders GROUP BY cust_number
``` | Complex SQL query in Android SQLite | [
"",
"android",
"sql",
"sqlite",
""
] |
I want to know what the (+) in the below query signifies:
```
select ..
from ..., Fat fat
where prop = fat.prop (+)
```
Thanks | It is the obsolete outer join symbol.
In Oracle, **(+)** denotes the "optional" table in the JOIN.
You may check out this for **[Left and Right Outer Joins](http://www.java2s.com/Tutorial/Oracle/0140__Table-Joins/LeftandRightOuterJoins.htm)**.
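Since the `(+)` syntax is Oracle-only, the portable way to write the query in the question is an ANSI LEFT JOIN. A tiny Python/SQLite sketch of the equivalence (made-up tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (prop INTEGER);
CREATE TABLE fat (prop INTEGER, val TEXT);
INSERT INTO t VALUES (1), (2);
INSERT INTO fat VALUES (1, 'x');
""")
# ANSI equivalent of:  WHERE t.prop = fat.prop (+)
rows = conn.execute("""
  SELECT t.prop, fat.val
  FROM t LEFT JOIN fat ON t.prop = fat.prop
  ORDER BY t.prop
""").fetchall()
print(rows)  # [(1, 'x'), (2, None)] -- row 2 is kept even without a match
```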
**On a side note** (although it's obsolete):
The placement of the (+) determines RIGHT or LEFT. If the **(+)** is on the right, it's a **LEFT JOIN** and if **(+)** is on the left, it's a **RIGHT JOIN**. | For Oracle specifically, indicates a Left Outer Join. Older notation. | what is (+) syntax in SQL query | [
"",
"sql",
"oracle",
""
] |
I have 3 input parameters,based on values of each input parameter i have to prepare the SQL in procedure dynamically. I am doing below way but it is failing, if the parameter value is null,then i have to exclude that from the where clause.
**IN Parameters:**
> empid in varchar2 || empname IN varchar2 || empsal IN varchar2
```
SELECT
EMP_NAME
INTO
V_EMP_NAME
FROM
EMPLOYEE
WHERE
( EMP_ID = EMPID
OR ( EMP_ID IS NULL
AND EMPID IS NULL ) )
AND ( EMP_NAME = EMPNAME
OR ( EMP_NAME IS NULL
AND EMPNAME IS NULL ) )
AND ( EMP_SAL = EMPSAL
OR ( EMP_SAL IS NULL
AND EMPSAL IS NULL ) );
```
After updates I modified the query as below. It compiles, but at runtime it fails with ORA-00933: SQL command not properly ended, just before EXECUTE IMMEDIATE.
```
V_SQL :='SELECT EMP_NAME INTO V_empname FROM employee WHERE ';
BEGIN
IF(EMPID IS NOT NULL) THEN
V_SQL := V_SQL || ' emp_id='||EMPID;
END IF;
IF(EMPNAME IS NOT NULL) THEN
V_SQL := V_SQL || ' AND emp_name='||EMPNAME;
END IF;
IF(V_empsal IS NOT NULL) THEN
V_SQL := V_SQL || ' AND emp_sal='||empsal;
V_SQL := V_SQL ||' AND ACTIVE =''Y''' ;
END IF;
EXECUTE IMMEDIATE V_SQL;
EXCEPTION
WHEN NO_DATA_FOUND THEN
V_check:= '' ;
END;
``` | Here you are another alternative:
```
V_SQL_WHERE := null;
V_SQL :='SELECT EMP_NAME INTO V_empname FROM employee';
BEGIN
IF(EMPID IS NOT NULL) THEN
V_SQL_WHERE := V_SQL_WHERE || ' emp_id='||EMPID;
END IF;
IF(EMPNAME IS NOT NULL) THEN
IF (V_SQL_WHERE is not null) THEN
V_SQL_WHERE := V_SQL_WHERE || ' AND ';
END IF;
V_SQL_WHERE := V_SQL_WHERE || ' emp_name='||EMPNAME;
END IF;
IF (V_empsal IS NOT NULL) THEN
IF (V_SQL_WHERE is not null) THEN
V_SQL_WHERE := V_SQL_WHERE || ' AND ';
END IF;
V_SQL_WHERE := V_SQL_WHERE || ' emp_sal=' || empsal;
V_SQL_WHERE := V_SQL_WHERE || ' AND ACTIVE =''Y''' ;
END IF;
IF (V_SQL_WHERE is not null) then
V_SQL := V_SQL || ' WHERE ' || V_SQL_WHERE;
end if;
EXECUTE IMMEDIATE V_SQL;
EXCEPTION
WHEN NO_DATA_FOUND THEN
V_check:= '' ;
END;
``` | You can create one from scratch, including ifs and elses, or you can use a package called plsql-utils, the SQL_BUILDER_PKG. It does all the work for you.
Have a look at **SQL utilities**:
<http://code.google.com/p/plsql-utils/>
Eg:
```
declare
l_my_query sql_builder_pkg.t_query;
l_sql varchar2(32000);
begin
sql_builder_pkg.add_select (l_my_query, 'ename');
sql_builder_pkg.add_select (l_my_query, 'sal');
sql_builder_pkg.add_select (l_my_query, 'deptno');
sql_builder_pkg.add_from (l_my_query, 'emp');
sql_builder_pkg.add_where (l_my_query, 'ename = :p_ename');
sql_builder_pkg.add_where (l_my_query, 'sal > :p_sal');
l_sql := sql_builder_pkg.get_sql (l_my_query);
dbms_output.put_line (l_sql);
end;
```
I used it in one project and it was a good tool. | preparing sql dynamically in oracle stored procedure | [
"",
"sql",
"plsql",
""
] |
I have a table with departments. I need to count how many people are in each dept. This is easily done by:
```
SELECT DEPT,
COUNT(*) as 'Total'
FROM SR
GROUP BY DEPT;
```
Now I also need to do a cumulative count, as below:

I have found some SQL to compute a running total, but not for a case like this one. Could you give me some advice on this case, please? | Here's a way to do it with a CTE instead of a cursor:
```
WITH Base AS
(
SELECT ROW_NUMBER() OVER (ORDER BY [Count] DESC) RowNum,
[Dept],
[Count]
FROM SR
)
SELECT SR.Dept, SR.Count, SUM(SR2.[Count]) Total
FROM Base SR
INNER JOIN Base SR2
ON SR2.RowNum <= SR.RowNum
GROUP BY SR.Dept, SR.Count
ORDER BY SR.[Count] DESC
```
Note that this is ordering by descending `Count` like your sample result does. If there's some other column that's not shown that should be used for ordering just replace `Count` in each of the `ORDER BY` clauses.
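The CTE approach can be sanity-checked with SQLite from Python (requires SQLite 3.25+ for window functions; the dept names and counts are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sr (dept TEXT, cnt INTEGER);
INSERT INTO sr VALUES ('IT', 50), ('HR', 30), ('Sales', 20);
""")
rows = conn.execute("""
  WITH base AS (
    SELECT ROW_NUMBER() OVER (ORDER BY cnt DESC) AS rownum, dept, cnt
    FROM sr
  )
  SELECT s.dept, s.cnt, SUM(s2.cnt) AS total
  FROM base s
  JOIN base s2 ON s2.rownum <= s.rownum
  GROUP BY s.dept, s.cnt
  ORDER BY s.cnt DESC
""").fetchall()
print(rows)  # [('IT', 50, 50), ('HR', 30, 80), ('Sales', 20, 100)]
```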
[SQL Fiddle Demo](http://sqlfiddle.com/#!3/d0771/1/0) | I think you can use some temporary / variable table for this, and use solution from [here](https://stackoverflow.com/questions/860966/calculate-a-running-total-in-sqlserver/13744550#13744550):
```
declare @Temp table (rn int identity(1, 1) primary key, dept varchar(128), Total int)
insert into @Temp (dept, Total)
select
dept, count(*) as Total
from SR
group by dept
;with cte as (
select T.dept, T.Total, T.Total as Cumulative, T.rn
from @Temp as T
where T.rn = 1
union all
select T.dept, T.Total, T.Total + C.Cumulative as Cumulative, T.rn
from cte as C
inner join @Temp as T on T.rn = C.rn + 1
)
select C.dept, C.Total, C.Cumulative
from cte as C
option (maxrecursion 0)
```
**`sql fiddle demo`**
There are some other solutions, but this one is the fastest for SQL Server 2008, I think. | SQL Cumulative Count | [
"",
"sql",
"sql-server",
"sql-server-2008",
"running-count",
""
] |
We are implementing a search application.
We have implemented an exact word search with the following SQL query:
```
SELECT *
FROM jreviews_content
WHERE jr_produits REGEXP '[[:<:]]ryan[[:>:]]'
```
which works well. Now we have another requirement in some of our fields, i.e. in the jr\_title field: if the user types one missing letter, one wrong letter, or one extra letter (for example restauran or restaunts instead of restaurants), then it should still give the result, but not with more than one letter of difference. | This will allow one alphanumeric character before the search string, and one after:
```
SELECT *
FROM jreviews_content
WHERE jr_produits REGEXP '[[:<:]][[:alnum:]]{0,1}ryan[[:alnum:]]{0,1}[[:>:]]'
```
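A rough Python translation of the idea, assuming MySQL's `[[:<:]]`/`[[:>:]]` boundary markers map to `\b` and `[[:alnum:]]{0,1}` to a single optional `\w`:

```python
import re

# One optional word character on either side of the exact word.
pattern = re.compile(r"\b\w?ryan\w?\b")

hits = [s for s in ["ryan", "bryan", "ryans", "maryanne", "ryanair"]
        if pattern.search(s)]
print(hits)  # ['ryan', 'bryan', 'ryans'] -- two extra letters no longer match
```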
If you want to allow one alphanumeric character before the search string OR one after, you could use this instead:
```
SELECT *
FROM jreviews_content
WHERE
jr_produits REGEXP '[[:<:]][[:alnum:]]{0,1}ryan[[:>:]]'
OR jr_produits REGEXP '[[:<:]]ryan[[:alnum:]]{0,1}[[:>:]]'
``` | Look for "Levenshtein distance"
An implementation is here
<http://www.artfulsoftware.com/infotree/queries.php#552>
or as a compiled function
<http://samjlevy.com/2011/03/mysql-levenshtein-and-damerau-levenshtein-udfs/> | One Letter mistake allowed in exact word search | [
"",
"mysql",
"sql",
""
] |
Hi guys, is something like this possible in SQL?
```
declare a as varchar(20)
set a = 'ID = 34 and ID = 22'
select * from something where ID = 1 and a
``` | You can use the `EXEC` command for that: concatenate the strings into one and then run the resulting string as an SQL query.
So in your case it would be:
```
declare @a varchar(20)
set @a = 'ID = 34 and ID = 22'
exec ('select * from something where ID = 1 and ' + @a)
```
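The same build-the-string-then-execute idea, sketched in Python with SQLite; note that the example predicate `ID = 1 and ID = 34 and ID = 22` can never be true, so it returns no rows, which is worth double-checking in the original question too:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE something (ID INTEGER)")
conn.executemany("INSERT INTO something VALUES (?)", [(1,), (22,), (34,)])

a = "ID = 34 and ID = 22"                     # fragment from the question
sql = "select * from something where ID = 1 and " + a
rows = conn.execute(sql).fetchall()
print(rows)  # [] -- the three ANDed equalities are contradictory
```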
Just be careful of SQL injection. | Yes, you can use `EXEC`
```
declare @a as varchar(20)
set @a = 'ID = 34 and ID = 22'
EXEC ('select * from something where ID = 1 and ' + @a)
```
Warning: this kind of SQL string concatenation is a security risk because of [SQL injection](http://en.wikipedia.org/wiki/SQL_injection). | (MS)SQL execute query with additional varchar/text in where condition | [
"",
"sql",
"sql-server",
"select",
""
] |
It's maybe a noob question, but I found a T-SQL query example to verify database size with a `SELECT` and `WHERE` clause [here](http://technet.microsoft.com/en-us/library/ms176061.aspx)
Here is the code:
```
SELECT name, size, size*1.0/128 AS [Size in MBs]
FROM sys.master_files
WHERE name = N'mytest';
```
My question is what does the `N` prefix mean in the `WHERE name = N'keyword'` clause?
I have always used `WHERE name = 'keyword'` before, and I don't see any difference (you can try it yourself).
I've googled it, but I don't know which keyword I'm supposed to search for. | It declares the string as the nvarchar data type (Unicode), rather than varchar (an 8-bit encoding, including ASCII).
FYI, this is a duplicate of the question:
[What is the meaning of the prefix N in T-SQL statements?](https://stackoverflow.com/questions/10025032/what-is-the-meaning-of-the-prefix-n-in-t-sql-statements) | From the [docs](http://databases.aspfaq.com/general/why-do-some-sql-strings-have-an-n-prefix.html):-
> You may have seen Transact-SQL code that passes strings around using
> an N prefix. **This denotes that the subsequent string is in Unicode
> (the N actually stands for National language character set).** Which
> means that you are passing an NCHAR, NVARCHAR or NTEXT value, as
> opposed to CHAR, VARCHAR or TEXT. | SQL Server T-SQL N prefix on string literal | [
"",
"sql",
"sql-server",
"t-sql",
""
] |