Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I'm facing a database that keeps the ORDERING in columns of the table.
It's like:
```
Id Name Description Category OrderByName OrderByDescription OrderByCategory
1 Aaaa bbbb cccc 1 2 3
2 BBbbb Aaaaa bbbb 2 1 2
3 cccc cccc aaaaa 3 3 1
```
So, when the user want's to order by name, the SQL goes with an `ORDER BY OrderByName`.
I think this doesn't make any sense, since that's why Index are for and i tried to find any explanation for that but haven't found. Is this faster than using indexes? Is there any scenario where this is really useful? | It can make sense for many reasons but mainly when you don't want to follow the "natural order" given by the `ORDER BY` clause.
This is a scenario where this can be useful :
[SQL Fiddle](http://sqlfiddle.com/#!3/ce569/1)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE Table1
([Id] int, [Name] varchar(15), [OrderByName] int)
;
INSERT INTO Table1
([Id], [Name], [OrderByName])
VALUES
(1, 'Del Torro', 2 ),
(2, 'Delson', 1),
(3, 'Delugi', 3)
;
```
**Query 1**:
```
SELECT *
FROM Table1
ORDER BY Name
```
**[Results](http://sqlfiddle.com/#!3/ce569/1/0)**:
```
| ID | NAME | ORDERBYNAME |
|----|-----------|-------------|
| 1 | Del Torro | 2 |
| 2 | Delson | 1 |
| 3 | Delugi | 3 |
```
**Query 2**:
```
SELECT *
FROM Table1
ORDER BY OrderByName
```
**[Results](http://sqlfiddle.com/#!3/ce569/1/1)**:
```
| ID | NAME | ORDERBYNAME |
|----|-----------|-------------|
| 2 | Delson | 1 |
| 1 | Del Torro | 2 |
| 3 | Delugi | 3 |
``` | I think it makes little sense for two reasons:
1. Who is going to maintain this set of values in the table? You need to update them every time any row is added, updated, or deleted. You can do this with triggers, or horribly buggy and unreliable constraints using user-defined functions. But why? The information that seems to be in those columns *is already there*. It's redundant because you can get that order by ordering by the actual column.
2. You still have to use massive conditionals or dynamic SQL to tell the application how to order the results, since you can't say `ORDER BY @column_name`.
Now, I'm basing my assumptions on the fact that the ordering columns still reflect the alphabetical order in the relevant columns. It could be useful if there is some customization possible, e.g. if you wanted all Smiths listed first, and then all Morts, and then everyone else. But I don't see any evidence of this in the question or the data. | SQL Server: Use a column to save order of the record | [
"",
"sql",
"sql-server",
"database",
"database-design",
"relational-database",
""
] |
I want to create a query that will return all the route\_id, that contain both stop variables.
```
| route_stop_id | route_id | stop_id | time (in sec) |
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
| 1 | 1 | 1 | 3:24pm |
| 2 | 1 | 2 | 3:26pm |
| 3 | 1 | 3 | 3:29pm |
| 4 | 1 | 4 | 4:04pm |
| 5 | 2 | 1 | 3:03pm |
| 6 | 3 | 1 | 3:02pm |
```
If `route_id` has `stop_id = 1` and `stop_id = 2`
```
SELECT route_id FROM route_stop_list WHERE stop_id = 1 and stop_id = 2
```
But the above statement doesn't return anything, because no row can have both a `stop_id` of `1` and a `stop_id` of `2`. But is it possible to write a statement that will return properly?
---
EDIT, more explanation because I think this is kind of confusing.
This is a transportation app.
My application asks the user to enter a starting stop and a ending stop.
I am trying to write a SQL statement that will return all the routes that both stops exists on. | use this one (edited)
```
SELECT t1.route_id
FROM route_stop_list t1 join route_stop_list t2 on t1.route_id=t2.route_id
where t1.stop_id=1 AND t2.stop_id=2
```
explanation
i am joining the 2 tables first(your table with itself). t1 and t2 are just aliases(nicknames) for the tables so that you can refer to them later. this gives us all the possible combinations. the condition for joining is having the same route id. Try running the query without the last line (where clause). Use select \* to get all columns. If you still won't understand i'll explain more. | Please try:
```
SELECT route_id
FROM route_stop_list
WHERE stop_id IN (1, 2)
GROUP BY route_id
HAVING COUNT(DISTINCT stop_id)=2
``` | SQL Statement filtering | [
"",
"sql",
""
] |
I got a question from <http://sqlzoo.net/wiki/SUM_and_COUNT_Quiz#quiz0>, the 4th one. I chose the second option, but it turned out the fifth option is the right answer. I don't know why this sql sentence is wrong. Can anyone tell me the reason? Thank you in advance.
the sql sentence is shown below:
```
SELECT region, SUM(area)
FROM bbc
WHERE SUM(area) > 15000000
GROUP BY region
```
why the answer to this problem is "No result due to invalid use of the WHERE function"? | You can't use SUM(area) in WHERE clause.
For that to be valid you had to use HAVING after the GROUP BY:
```
SELECT region, SUM(area) FROM bbc GROUP BY region HAVING SUM(area) > 15000000;
``` | To understand why take a look at a logical query processing order which roughly looks like
1. FROM
2. **WHERE**
3. GROUP BY
4. HAVING
5. SELECT
6. ORDER BY
`WHERE` comes before `GROUP BY` therefore you can't use aggregate functions directly in it. That what the `HAVING` clause is for. | sql error about where and group by | [
"",
"sql",
""
] |
I have two tables -
Table 1: `Types`
```
ID | Type
---+------
1 | type1
2 | type2
3 | type3
4 | type4
5 | type5
6 | type6
```
Table 2: `Details`
```
ID | Type | EventName | Cost
---+--------+-----------+------
1 | type1 | name1 | 500
2 | type1 | name2 | 500
3 | type2 | name3 | 500
4 | type3 | name4 | 1500
5 | type3 | name5 | 1000
6 | type3 | name6 | 1000
```
Expected result from two tables:
```
Type | Number | Cost
-------+--------+--------------
type1 | 2 | 1000
type2 | 1 | 500
type3 | 3 | 3500
type4 | 0 | 0
type5 | 0 | 0
type6 | 0 | 0
```
I tried with `LEFT JOIN`. But it does not give me the right numbers. | ```
SELECT t.Type, count(d.Type), sum(d.cost)
FROM Types t LEFT JOIN Details d ON t.Type=d.Type
group by t.Type,d.Type
``` | ```
SELECT
t.Type,
count(t.type) as Number,
sum(d.cost) as Cost
FROM
Types t
INNER JOIN Details d
ON t.type=d.type
GROUP BY t.type
``` | SQl Left Join with tables | [
"",
"sql",
"join",
"left-join",
""
] |
I have an SQL script in oracle which is almost complete but I am stuck on one last issue.
I have 2 database tables I am accessing data from, and inserting the new rows into a new table in my database.
One of the database tables has an account number column and a descriptor column, and each account number has 3 descriptor values, all with the same format. There is a third column as well which I want to grab data from. Here is a visual:
```
Account # | Descriptor | Value
1 Cost Center: ASDF CC123
1 Company: ASDF123 F123
1 Fund: JKL R123
2 Cost Center: ASDF12 CC456
2 Company: ASDF456 F456
2 Fund: JKL23 R456
```
I would like my final table to be like this:
```
Account_Number | Company_Description | Cost_Center_Value
1 Company: ASDF123 CC123
2 Company: ASDF456 CC456
```
The elements for the Cost\_Center\_Value column and the Company\_Description column come from different rows, but the same account object.
The issue I am having is that I have a special `where` clause to help me extract only the descriptor row with the 'Company:' text. If I put in an `AND`, the table is not populated at all, I think because it expects the data to be in the same row, when I want data from 2 rows but different columns. If I use and `OR` statement, it adds an entire new row.
Here is my script:
```
create table mydb.test1 as
select distinct substr(testdb.table1.FAO, 1) as Account_Number,
'CM' || substr(testdb.table1.DESCRIPTOR, 18) as Division,
substr(testdb.table1.value, 1) as Department,
substr(testdb.table2.Workday_Description, 1) as Description,
from testdb.table1
join testdb.table2
on testdb.table1.fao = testdb.table2.workday_number
where testdb.table2.descriptor like 'Company for%'
OR testdb.table1.value like 'CC%'
order by Account_Number Desc
```
I left the OR statement in there to show what I have been playing around with.
That is the closest I can get.
If more info is needed, let me know. | The real answer is to learn about normal form and redesign this database. However you could do something hideous like:
```
Select
Account_Number,
max(Case
When Descriptor Like 'Company: %' then substr(Descriptor, 10, 1000)
end) Company_Description,
max(Case
When Descriptor Like 'Cost Center: %' then value
end) Cost_Center_Value
From
test -- this is the table in the example section
Group By
Account_Number
```
**`Example SQLFiddle`** | try this query:
```
create table mydb.test1 as
select distinct
account as Account_Number,
(select (case when t2.Description like 'Company%' then t2.description end) from testdb.table1 t2 where t1.account=t2.account and
case when t2.Description like 'Company%' then t2.description end is not null)as Company_Description ,
(select (case when t2.Description like 'Cost Center%' then t2.value end) from testdb.table1 t2 where t1.account=t2.account and
case when t2.Description like 'Cost Center%' then t2.value end is not null)as Cost_Center_Value
from testdb.table1 t1;
``` | Combine 2 column values in 1 to many into new row in new table - SQL | [
"",
"sql",
"database",
"oracle",
"one-to-many",
""
] |
I'm trying to insert new rows into a MySQL table, but only if one of the values that I'm inserting isn't in a row that's already in the table.
For example, if I'm doing:
```
insert into `mytable` (`id`, `name`) values (10, `Fred`)
```
I want to be able to check to see if any other row in the table already has `name = 'Fred'`. How can this be done?
Thanks!
**EDIT**
What I tried (can't post the exact statement, but here's a representation):
```
INSERT IGNORE INTO mytable (`domain`, `id`)
VALUES ('i.imgur.com', '12gfa')
WHERE '12gfa' not in (
select id from mytable
)
```
which throws the error:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE '12gfa' not in ( select id from mytable)' at line 3
``` | First of all, your `id` field should be an `autoincrement`, unless it's a foreign key (but I can't assume it from the code you inserted in your question).
In this way you can be sure to have a unique value for `id` for each row.
If it's not the case, you should create a primary key for the table that includes **ALL** the fields you don't want to duplicate and use the `INSERT IGNORE` command.
[Here's a good read](http://www.tutorialspoint.com/mysql/mysql-handling-duplicates.htm) about what you're trying to achieve. | You could use something like this
```
INSERT INTO someTable (someField, someOtherField)
VALUES ("someData", "someOtherData")
ON DUPLICATE KEY UPDATE someOtherField=VALUES("betterData");
```
This will insert a new row, unless a row already exists with a duplicate key, it will update it. | Insert new row unless ONE of the values matches another row? | [
"",
"mysql",
"sql",
""
] |
I have a table and need to verify that a certain column contains only dates. I'm trying to count the number of records that are not follow a date format. If I check a field that I did not define as type "date" then the query works. However, when I check a field that I defined as a date it does not.
Query:
```
SELECT
count(case when ISDATE(Date_Field) = 0 then 1 end) as 'Date_Error'
FROM [table]
```
Column definition:
```
Date_Field(date, null)
```
Sample data: `'2010-06-27'`
Error Message:
> Argument data type date is invalid for argument 1 of isdate function.
Any insight as to why this query is not working for fields I defined as dates?
Thanks! | If you defined the column with the Date type, it ***IS*** a Date. Period. This check is completely unnecessary.
What you may want to do is look for `NULL` values in the column:
```
SELECT SUM(case when Date_Field IS NULL THEN 1 ELSE 0 end) as 'Date_Error' FROM [table]
```
I also sense an additional misunderstanding about how Date fields, including DateTime and DateTime2, work in Sql Server. The values in these fields are not stored as a string in any format at all. They are stored in a binary/numeric format, and only shown as a string as a convenience in your query tool. And that's a good thing. If you want the date in a particular format, use the [`CONVERT()`](http://msdn.microsoft.com/en-us/library/ms187928.aspx) function in your query, or even better, let your client application handle the formatting. | `ISDATE()` only evaluates against a STRING-like parameter (varchar, nvarachar, char,...)
To be sure, `ISDATE()`'s parameter should come wrapped in a cast() function.
i.e.
```
Select isdate(cast(parameter as nvarchar))
```
should return either 1 or 0, even if it's a MULL value.
Hope this helps. | SQL Server ISDATE() Error | [
"",
"sql",
"sql-server",
""
] |
I have a column in database that stores an array of numbers in a nvarchar string and the values look like this
`"1,5,67,122"`
Now, I'd like to use that column value in a query that utilizes IN statement
However, if I have something like
```
WHERE columnID IN (@varsArray);
```
it doesn't work as it sees @varsArray as a string and can't compare it to `columnID` which is of INT type.
how can I convert that variable to something that could be used with a IN statement? | You need dynamic SQL for that.
```
exec('select * from your_table where columnID in (' + @varsArray + ')')
```
### And BTW it is bad DB design to store more than 1 value in a column! | alternatively you can parse your variable with finding a split user defined function on the internet and enter each number into a temp table and then join the temp table
but the person who answered above is correct this is bad database design
create a table so you can join it based on some id, and all the answers will be in a table (temporary or not) | Using a String Array in an IN statement | [
"",
"sql",
"t-sql",
""
] |
I have an MS Access (.accdb) table with data like the following:
```
Location Number
-------- ------
ABC 1
DEF 1
DEF 2
GHI 1
ABC 2
ABC 3
```
Every time I append data to the table I would like the number to be unique to the location.
I am accessing this table through MS Excel VBA - I would like to create a new record (I specify the location in the code) and have a unique sequential number created.
Is there a way to setup the table so this happens autmatically when a record is added?
Should I write a query of some description and to determine the next number per location, and then specify both the Location & Number when I create the record?
I am writing to the table as below:
```
Set rst = New ADODB.Recordset
rst.CursorLocation = adUseServer
rst.Open Source:="Articles", _
ActiveConnection:=cnn, _
CursorType:=adOpenDynamic, _
LockType:=adLockOptimistic, _
Options:=adCmdTable
rst.AddNew
rst("Location") = fLabel.Location 'fLabel is an object contained within a collection called manifest
rst("Number") = 'Determine Unique number per location
rst.Update
```
Any help would be appreciated.
Edit - Added the VBA code I am struggling with as question was put on-hold | I suspect that you are looking for something like this:
```
Dim con As ADODB.Connection, cmd As ADODB.Command, rst As ADODB.Recordset
Dim newNum As Variant
Const fLabel_Location = "O'Hare" ' test data
Set con = New ADODB.Connection
con.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Users\Public\Database1.accdb;"
Set cmd = New ADODB.Command
cmd.ActiveConnection = con
cmd.CommandText = "SELECT MAX(Number) AS maxNum FROM Articles WHERE Location = ?"
cmd.CreateParameter "?", adVarWChar, adParamInput, 255
cmd.Parameters(0).Value = fLabel_Location
Set rst = cmd.Execute
newNum = IIf(IsNull(rst("maxNum").Value), 0, rst("maxNum").Value) + 1
rst.Close
rst.Open "Articles", con, adOpenDynamic, adLockOptimistic, adCmdTable
rst.AddNew
rst("Location").Value = fLabel_Location
rst("Number").Value = newNum
rst.Update
rst.Close
Set rst = Nothing
Set cmd = Nothing
con.Close
Set con = Nothing
```
Note, however, that this code is **not** multiuser-safe. If there is the possibility of more than one user running this code at the same time then you *could* wind up with duplicate [Number] values.
(To make the code multiuser-safe you would need to create a unique index on ([Location], [Number]) and add some error trapping in case the `rst.Update` fails.)
## Edit
For Access 2010 and later consider using an event-driven Data Macro and shown in my [other answer to this question](https://stackoverflow.com/a/19785633/2144390). | You need to add a new column to your table of data type `AutoNumber`.
[office.microsoft.com: Fields that generate numbers automatically in Access](http://office.microsoft.com/en-gb/access-help/fields-that-generate-numbers-automatically-in-access-HA001055067.aspx)
You should probably also set this column as your primary key. | Generate a sequential number (per group) when adding a row to an Access table | [
"",
"sql",
"excel",
"vba",
"ms-access",
""
] |
How can i split a decimal value into two decimal values.
if the decimal number which has fractional part that is less than .50 or greater than .50 .it should split in such a way that first no should end with only .00 or .50. second value should contain the remaining factorial value.
```
ex. 19.97 should return 19.50 & 0.47
19.47 19.00 & 0.47
``` | You can "floor" to the highest multiple of 0.5 by multiplying by 2, calling `FLOOR`, then dividing by 2. From there just subtract that from the original value to get the remainder.
```
DECLARE @test decimal(10,7)
SELECT @test =19.97
SELECT
FLOOR(@test * 2) / 2 AS base,
@test - FLOOR(@test * 2) / 2 AS fraction
```
or to reduce duplication
```
SELECT
base,
@test - base AS fraction
FROM ( SELECT FLOOR(@test * 2) / 2 AS base )
``` | ```
Declare @money money
Set @money = 19.97
Select convert(int,@money - (@money % 1)) as 'LeftPortion'
,convert(int, (@money % 1) * 100) as 'RightPortion'
``` | spliting a decimal values to lower decimal in sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
```
SELECT image_id,
CASE image_id
WHEN 0 THEN (SELECT image_path
FROM images
WHERE image_id IN (SELECT default_image
FROM registration
WHERE reg_id IN (SELECT favorite_id
FROM @favDT)))
ELSE (SELECT image_path
FROM images
WHERE image_id = b.image_id
AND active = 1)
END AS image_path
FROM buddies b
WHERE reg_id IN (SELECT favorite_id
FROM @favDT)
```
i'm facing a problem in this query because the `select favorite_id from @favDT` returns many favorite id in the case but i need to get the same favorite id that is selected in the from section `FROM buddies b where reg_id in (select favorite_id from @favDT)` and use it the `case when 0`
any help? | Maybe this:
```
SELECT image_id,
CASE image_id
WHEN 0 THEN (SELECT image_path
FROM images
WHERE image_id IN (SELECT default_image
FROM registration
WHERE reg_id IN (SELECT favorite_id
FROM @favDT
WHERE favorite_id = b.reg_id
)
)
)
ELSE (SELECT image_path
FROM images
WHERE image_id = b.image_id
AND active = 1)
END AS image_path
FROM buddies b
WHERE reg_id IN (SELECT favorite_id
FROM @favDT)
``` | You need to write condition like below :
```
SELECT image_id,
CASE image_id
WHEN 0 THEN (SELECT image_path
FROM images
WHERE image_id IN (SELECT default_image
FROM registration
WHERE reg_id = b.reg_id))
ELSE (SELECT image_path
FROM images
WHERE image_id = b.image_id
AND active = 1)
END AS image_path
FROM buddies b
WHERE reg_id IN (SELECT favorite_id
FROM @favDT)
``` | multi case selection in a stored procedure | [
"",
"mysql",
"sql",
"sql-server",
"stored-procedures",
"case",
""
] |
I am attempting to set variables whose names are dynamic in a stored procedure:
```
DECLARE @var01 varchar(50)
DECLARE @var02 varchar(50)
...
DECLARE @var30 varchar(50)
DECLARE @sql = varchar(max)
DECLARE @loopcnter INT
-- (Inside some loop where the loopcounter increments each iteration)
...
SET @sql = 'SET @var0'+CAST(@loopcntr AS Varchar)+'= '''+'somevalue'+''''
-- e.g.) SET @var01= 'somevale'
EXEC (@sql)
```
This doesn't work because the variables are declared in a different scope to that of the dynamic sql.
What is the correct way to dynamically set variables in this manner? | Well, it is not pretty, but you can do:
```
if @loopcntr = 1
set var01 = 'somevalue'
else if @loopcntr = 2
set var02 = 'whatever'
else if . . .
```
This should be sufficiently unpleasant that you might think of alternatives. Oh, here's a good one. Define a table variable and just add rows in for each value:
```
declare @vars table (
id int identity(1, 1),
loopcntr int,
value varchar(255)
);
. . .
-- inside the loop
insert into @vars(loopcntr, value)
select @loopcntr, 'whatever';
```
When you want to get a variable, you can do:
```
declare @var varchar(255);
select @var = value from @vars where loopcntr = <the one I want>;
``` | You can also use stored procedure sp\_executesql which allows passing parameters. Although
I recommend to use additional table to store variables. | SQL: Dynamic Variable Names | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
this question I've found is driving me nuts, (edit:) because I'm trying to achieve this without the order by method.
I have a table:
**BookAuthors**(book, author)
```
book author
------ ---------
1 Joy McBean
2 Marti McFly
2 Joahnn Strauss
2 Steven Spoilberg
1 Quentin Toronto
3 Dr E. Brown
```
both, book and author, are keys.
Now I would like to select the 'book' value with highest number of different 'author', and the number of authors.
In our case, the query should retrieve the 'book' 2 with a number of 3 authors.
```
book authors
-------- ------------
2 3
```
I've been able to group them and obtain the number of authors for each book with this query:
```
select B.book, count(B.author) as authors
from BookAuthors B
group by B.book
```
which will results in:
```
book authors
-------- -------------
1 2
2 3
3 1
```
Now I would like to obtain only the book with the highest number of authors.
This is one of the queries I've tried:
```
select Na.libro, Na.authors from (
select B.book, count(B.author) as authors
from BookAuthors B
group by B.book
) as Na
where Na.authors in (select max(authors) from Na)
```
and
```
select Na.libro, Na.authors from (
select B.book, count(B.author) as authors
from BookAuthors B
group by B.book
) as Na
having max( Na.authors)
```
I'm a little bit struggled...
Thank you for your help.
EDIT:
since @Sebas was kind to reply right AND expanding my question, here is the solution come to my mind using the CREATE VIEW method:
```
create view auth as
select A.book, count(A.author)
from BooksAuthors A
group by A.book
;
```
and then
```
select B.book, B.nAuthors
from auth B
where B.nAuthors = (select max(nAuthors)
from auth)
``` | ```
SELECT cnt.book, maxauth.mx
FROM (
SELECT MAX(authors) as mx
FROM
(
SELECT book, COUNT(author) AS authors
FROM BookAuthors
GROUP BY book
) t
) maxauth JOIN
(
SELECT book, COUNT(author) AS authors
FROM BookAuthors
GROUP BY book
) cnt ON cnt.authors = maxauth.mx
```
This solution would be more beautiful and efficient with a view:
```
CREATE VIEW v_book_author_count AS
SELECT book, COUNT(author) AS authors
FROM BookAuthors
GROUP BY book
;
```
and then:
```
SELECT cnt.book, maxauth.mx
FROM (
SELECT MAX(authors) as mx
FROM v_book_author_count
) maxauth JOIN v_book_author_count AS cnt ON cnt.authors = maxauth.mx
;
``` | ```
select book, max(authors)
from ( select B.book, count(B.author) as authors
from BookAuthors B group by B.book )
table1;
```
I could not try this as i dont have mysql with me... You try and let me know... | max() value from a nested select without ORDER BY | [
"",
"mysql",
"sql",
"nested",
"subquery",
""
] |
The column is "CreatedDateTime" which is pretty self-explanatory. It tracks whatever time the record was commited. I need to update this value in over 100 tables and would rather have a cool SQL trick to do it in a couple lines rather than copy pasting 100 lines with the only difference being the table name.
Any help would be appreciated, having a hard time finding anything on updating columns across tables (which is weird and probably bad practice anyways, and I'm sorry for that).
Thanks!
EDIT: This post showed me how to get all the tables that have the column
[I want to show all tables that have specified column name](https://stackoverflow.com/questions/4197657/i-want-to-show-all-tables-that-have-specified-column-name)
if that's any help. It's a start for me anyways. | If that's a one time task, just run this query, copy & paste the result to query window and run it
```
Select 'UPDATE ' + TABLE_NAME + ' SET CreatedDateTime = ''<<New Value>>'' '
From INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'CreatedDateTime'
``` | You could try using a cursor : like this
```
declare cur cursor for Select Table_Name From INFORMATION_SCHEMA.COLUMNS Where column_name = 'CreatedDateTime'
declare @tablename nvarchar(max)
declare @sqlstring nvarchar(max)
open cur
fetch next from cur into @tablename
while @@fetch_status=0
begin
--print @tablename
set @sqlstring = 'update ' + @tablename + ' set CreatedDateTime = getdate()'
exec sp_executesql @sqlstring
fetch next from cur into @tablename
end
close cur
deallocate cur
``` | I have the same column in multiple tables, and want to update that column in all tables to a specific value. How can I do this? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a quite common case of two tables: 'Orders' placed and 'ItemsBought' for each order:
```
Orders: orderId, orderDate, [...]
ItemsBought: boughtId, orderId, itemId, [...]
```
Order can have one or more ItemsBought. Now, I want to select only those orders where users bought BOTH itemId=1 AND itemId=2.
Say, we have such data in ItemsBought table:
```
boughtId | orderId | itemId
---------------------------
1 | 1 | 1
2 | 1 | 2
3 | 1 | 3
4 | 2 | 1
5 | 2 | 3
6 | 2 | 4
```
I need query to return only:
```
orderId
-------
1
```
What would be SQL code in Access 2010? | Try something like this:
```
SELECT orderId
FROM ItemsBought
WHERE itemId IN (1,2)
GROUP BY orderId
HAVING COUNT(orderId) = 2;
```
This will only work if you can't have multiple items 1 or 2 in the same orderid.
`sqlfiddle demo` | I'm positive there is a more efficient way to do this but here's one simple solution. Basically just do an inner join on the two separate subqueries which pull back the lines you want.
The results will be the intersection between the two sets or in otherwords those orders where both items were found.
```
Select orderId
From
(
Select orderId
From Orders o
Inner Join ItemsBought i on o.orderID = i.orderID
Where itemId = 1) qry1
Inner Join
(Select orderId
From Orders o
Inner Join ItemsBought i on o.orderID = i.orderID
Where itemId = 2) qr2 on qry1.orderID = qry2.orderId
Group By orderId
``` | Select orderId if it has both items bought | [
"",
"sql",
"ms-access",
"ms-access-2010",
""
] |
Using TSQL, what's the most efficient way to convert a datatype string formatted date 'mmm-dd-yyyy' to data type date format 'yyyy-mm-dd'
Example:
Original date: Jan-31-2013
Converted date: 2013-01-31
I was looking for the conversion in this document but no success. <http://msdn.microsoft.com/en-us/library/ms187928.aspx> | This solves your problem, not sure if there's a more efficient solution though?
```
declare @DateTime as varchar(max)
set @DateTime = 'Jan-31-2013';
select cast(substring(@DateTime,8,4) + '-' + substring(@DateTime,1,3) + '-' + substring(@DateTime,5,2) as date)
```
 | Most effective way returned as date:
```
select convert(date, replace('Jan-31-2013', '-', ' '), 0)
```
Most effective way returned as char(10):
```
select convert(char(10),convert(date, replace('Jan-31-2013', '-', ' '), 0), 126)
``` | Using T-SQL, how to convert date format mmm-dd-yyyy to yyyy-mm-dd | [
"",
"sql",
"sql-server",
"t-sql",
"date",
""
] |
I have a table A with three columns a b c. I want to insert values in to columns a and b based on the join of column c with another table B
I am using the following query
```
MERGE INTO A
USING
(SELECT * FROM B) **B**
ON (B.c=A.c)
WHEN MATCHED THEN
INSERT(a,b) VALUES(local_varialble,'STRING');
```
I am getting the following error
```
PL/SQL : ORA-00905 : MISSING KEYWORD
```
Please help !! This query always seems to be tricky
EDIT : I found out that using != in the ON condition and following it up with WHEN NOT MATCHED compiles perfectly.. but I don't know whether it yields the correct result | What exactly are you trying to achieve here? You want to insert some data into table `A` when there is connection between rows in tables `A` and `B`, but you do not use any values from columns of the `B` column.
Merge won't work like the way you wrote it, because you **have to** have either an `UPDATE` or `DELETE` statement in the `WHEN MATCHED THEN` clause, you can not have an `INSERT` there. On the other hand, in the `WHEN NOT MATCHED THEN` clasue, you can have **only** `INSERT`.
More about `MERGE` here: [Oracle Documentation - MERGE Statement](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm)
If you change from `=` to `!=`, it won't work, because then you will be doing an `INSERT` in the `WHEN NOT MATCHED THEN` part of `MERGE`, and what you want is to `INSERT` when there is a match between the records.
So, to sum up - what are you trying to achieve? I think that you shouldn't use `MERGE` in that particular situation, instead use a simple `INSERT` with well defined condition which rows should be inserted and from where. | I think that you need to define first a "merge\_update\_clause" (see [merge documentation](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_9016.htm))
like this:
```
MERGE INTO A
USING (SELECT * FROM B) b
ON (B.c = A.c)
WHEN MATCHED THEN UPDATE SET ....
DELETE WHERE ...
WHEN NOT MATCHED THEN INSERT ...
``` | Insert based on inner join oracle sql | [
"",
"sql",
"database",
"oracle",
"plsql",
"oracle11g",
""
] |
A simplified version of my MySQL db looks like this:
```
Table books (ENGINE=MyISAM)
id <- KEY
publisher <- LONGTEXT
publisher_id <- INT <- This is a new field that is currently null for all records
Table publishers (ENGINE=MyISAM)
id <- KEY
name <- LONGTEXT
```
Currently books.publisher holds values that keep getting repeated, but that the publishers.name holds uniquely.
I want to get rid of books.publisher and instead populate the books.publisher\_id field.
The straightforward SQL code that describes what I want done, is as follows:
```
UPDATE books
JOIN publishers ON books.publisher = publishers.name
SET books.publisher_id = publishers.id;
```
The problem is that I have a big number of records, and even though it works, it's taking forever.
Is there a faster solution than using something like this in advance?:
```
CREATE INDEX publisher ON books (publisher(20));
``` | There are a few problems here that might be helped by optimization.
First of all, a few thousand rows doesn't count as "big" ... that's "medium."
Second, in MySQL saying "I want to do this without indexes" is like saying "I want to drive my car to New York City, but my tires are flat and I don't want to pump them up. What's the best route to New York if I'm driving on my rims?"
Third, you're using a `LONGTEXT` item for your publisher. Is there some reason not to use a fully indexable datatype like `VARCHAR(200)`? If you do that your WHERE statement will run faster, index or none. Large scale library catalog systems limit the length of the publisher field, so your system can too.
Fourth, from one of your comments this looks like a routine data maintenance update, not a one time conversion. So you need to figure out how to avoid repeating the whole deal over and over. I am guessing here, but it looks like newly inserted rows in your `books` table have a publisher\_id of zero, and your query updates that column to a valid value.
So here's what to do. First, put an index on tables.publisher\_id.
Second, run this variant of your maintenance query:
```
UPDATE books
JOIN publishers ON books.publisher = publishers.name
SET books.publisher_id = publishers.id
WHERE books.publisher_id = 0
LIMIT 100;
```
This will limit your update to rows that haven't yet been updated. It will also update 100 rows at a time. In your weekly data-maintenance job, re-issue this query until MySQL announces that your query affected zero rows (look at mysqli::rows\_affected or the equivalent in your php-to-mysql interface). That's a great way to monitor database update progress and keep your update operations from getting out of hand. | Your question title says ".. optimize ... query without using an index?"
What have you got against using an index?
You should always examine the execution plan if a query is running slowly. I would guess it's having to scan the `publishers` table for each row in order to find a match. It would make sense to have an index on `publishers.name` to speed the lookup of an `id`.
You can drop the index later but it wouldn't harm to leave it in, since you say the process will have to run for a while until other changes are made. I imagine the `publishers` table doesn't get update very frequently so performance of `INSERT` and `UPDATE` on the table should not be an issue. | Can I optimize such a MySQL query without using an index? | [
"",
"mysql",
"sql",
"query-optimization",
""
] |
I'm creating a report using SQL to pull logged labor hours from our labor database for the previous month. I have it working great, but need to add logic to prevent it from breaking when it runs in January. I've tried adding If/Then statements and CASE logic, but I don't know if I'm just not doing it right, or if our system can't process it. Here's the snippet that pulls the date range:
```
SELECT
...
FROM
...
WHERE
...
AND
YEAR(ENTERDATE) = YEAR(current date) AND MONTH(ENTERDATE) = (MONTH(current date)-1)
``` | Just use AND as a barrier like this. In January, the second clause will be executed instead of the first one:
```
SELECT
...
FROM
...
WHERE
...
AND
(
(
(MONTH(current date) > 1) AND
(YEAR(ENTERDATE) = YEAR(current date) AND MONTH(ENTERDATE) = (MONTH(current date)-1))
-- this one gets used from Feb-Dec
)
OR
(
(MONTH(current date) = 1) AND
(YEAR(ENTERDATE) = YEAR(current date) - 1 AND MONTH(ENTERDATE) = 12)
-- alternatively, in Jan only this one gets used
)
)
``` | If your report is always going to be for the previous month, then I think the simplest idea is to declare the year and month of the previous month and then reference those in the Where clause. For example:
```
Declare LastMo_Month Integer = MONTH(DATEADD(MONTH,-1,getdate()));
Declare LastMo_Year Integer = YEAR(DATEADD(MONTH,-1,getdate()));
Select ...
Where MONTH(EnterDate) = @LastMo_Month
and YEAR(EnterDate) = @LastMo_Year
```
You could even take it a step further and allow the report to be created for any number of months ago:
```
Declare Delay Integer = -1;
Declare LastMo_Month Integer = MONTH(DATEADD(MONTH,@Delay,getdate()));
Declare LastMo_Year Integer = YEAR(DATEADD(MONTH,@Delay,getdate()));
Select ...
Where MONTH(EnterDate) = @LastMo_Month
and YEAR(EnterDate) = @LastMo_Year
```
Hope this helps.
PS - This is my first answer on StackOverflow, so sorry if the formatting isn't right! | How to add check for January in SQL statement | [
"",
"sql",
""
] |
I work with entity-attribute-value database model.
My product have many attribute\_varchars and attribute\_varchars have one attribute. An attribute has many attribute\_varchars and an attribute varchar has one product. Same logic apply to attribute\_decimals and attribute\_texts.
Anyway, i have the following query and i would like to filter the result using a where clause
```
SELECT
products.id,
(select value from attribute_texts where product_id = products.id and attribute_id = 1)
as description,
(select value from attribute_varchars where product_id = products.id and attribute_id = 2)
as model,
(select value from attribute_decimals where product_id = products.id and attribute_id = 9)
as rate,
(select value from attribute_varchars where product_id = products.id and attribute_id = 20)
as batch
FROM products
WHERE products.status_id <> 5
```
**I would like to add a where `rate > 5`**
I tried but I get the following error : `Unknown column 'rate' in 'where clause'`. I tried adding an alias to the value and to the table of the value but nothing seems to work. | In MySQL, you can do:
```
having rate > 5
```
MySQL has extended the `having` clause so it can work without a `group by`. Although questionable as a feature, it does allow you to reference aliases in the `select` clause without using a subquery. | With generated column like that, the best way is to made your query as a subquery, and do your filtering on the upper level like that:
```
SELECT *
FROM
(
SELECT
products.id,
(select value from attribute_texts where product_id = products.id and attribute_id = 1)
as description,
(select value from attribute_varchars where product_id = products.id and attribute_id = 2)
as model,
(select value from attribute_decimals where product_id = products.id and attribute_id = 9)
as rate,
(select value from attribute_varchars where product_id = products.id and attribute_id = 20)
as batch
FROM products
WHERE products.status_id <> 5
) as sub
WHERE sub.Rate > 5
``` | MySQL where clause targeting dependent subquery | [
"",
"mysql",
"sql",
"entity-attribute-value",
""
] |
So someone gave me a spreadsheet of orders, the unique value of each order is the PO, the person that gave me the spreadsheet is lazy and decided for orders with multiple PO's but the same information they'd just separate them by a "/". So for instance my table looks like this
```
PO Vendor State
123456/234567 Bob KY
345678 Joe GA
123432/123456 Sue CA
234234 Mike CA
```
What I hoped to do as separate the PO using the "/" symbol as a delimiter so it looks like this.
```
PO Vendor State
123456 Bob KY
234567 Bob KY
345678 Joe GA
123432 Sue CA
123456 Sue CA
234234 Mike CA
```
Now I have been brainstorming a few ways to go about this. Ultimately I want this data in Access. The data in its original format is in Excel. What I wanted to do is write a vba function in Access that I could use in conjunction with a SQL statement to separate the values. I am struggling at the moment though as I am not sure where to start. | If I had to do it I would
* Import the raw data into a table named [TempTable].
* Copy [TempTable] to a new table named [ActualTable] using the "Structure Only" option.
Then, in a VBA routine I would
* Open two DAO recordsets, `rstIn` for [TempTable] and `rstOut` for [ActualTable]
* Loop through the `rstIn` recordset.
* Use the VBA `Split()` function to split the [PO] values on "/" into an array.
* `For Each` array item I would use `rstOut.AddNew` to write a record into [ActualTable] | About about this:
1) Import the source data into a new Access table called SourceData.
2) Create a new query, go straight into SQL View and add the following code:
```
SELECT * INTO ImportedData
FROM (
SELECT PO, Vendor, State
FROM SourceData
WHERE InStr(PO, '/') = 0
UNION ALL
SELECT Left(PO, InStr(PO, '/') - 1), Vendor, State
FROM SourceData
WHERE InStr(PO, '/') > 0
UNION ALL
SELECT Mid(PO, InStr(PO, '/') + 1), Vendor, State
FROM SourceData
WHERE InStr(PO, '/') > 0) AS CleanedUp;
```
This is a 'make table' query in Access jargon (albeit with a nested union query); for an 'append' query instead, alter the top two lines to be
```
INSERT INTO ImportedData
SELECT * FROM (
```
(The rest doesn't change.) The difference is that re-running a make table query will clear whatever was already in the destination table, whereas an append query adds to any existing data.
3) Run the query. | Split delimited entries into new rows in Access | [
"",
"sql",
"ms-access",
"vba",
"delimiter",
""
] |
Can you format a phone number in an postgreSQL query? I have a phone number column. The phone numbers are held as such: 1234567890. I am wondering if postgres will format to (123) 456-7890. I can do this outside the query, I am using php, but it would be nice if I was able to have the output of the query like (123) 456-7890 | Use [SUBSTRING](http://www.postgresql.org/docs/9.1/static/functions-string.html) function
something like:
```
SELECT
'(' || SUBSTRING((PhoneNumber, 1, 3) + ') ' || SUBSTRING(PhoneNumber, 4,3) || '-' || SUBSTRING((PhoneNumber,7,4)
``` | This will work for you:
```
SELECT
'( ' || SUBSTRING(CAST(NUMBER AS VARCHAR) FROM 1 FOR 3) || ' ) '
|| SUBSTRING(CAST(NUMBER AS VARCHAR) FROM 4 FOR 3) || '-'
|| SUBSTRING(CAST(NUMBER AS VARCHAR) FROM 7 FOR LENGTH(CAST(NUMBER AS VARCHAR)))
FROM
YOURTABLE
```
Also, here is a [SQLFiddle](http://sqlfiddle.com/#!1/99462/20). | formating a phone number in the postresql query | [
"",
"sql",
"postgresql",
""
] |
I need a SQL statement that will find all OrderIDs within a table that have an ActivityID = 1 but not an ActivityID = 2
So here's an example table:
OrderActivityTable:
```
OrderID // ActivityID
1 // 1
1 // 2
2 // 1
3 // 1
```
So in this example, OrderID - 1 has activity of 1 AND 2, so that shouldn't return as a result. Orders 2 and 3 have an activity 1, but not 2... so they SHOULD return as a result.
The final result should be a table with an OrderID column with just 2 and 3 as the rows.
What I HAD tried before was:
```
select OrderID, ActivityID from OrderActivityTable where ActivityID = 1 AND NOT ActivityID = 2
```
That doesn't seem to get the result I want. I think the answer is a little more complicated. | You can use an outer self-join for this:
```
SELECT *
FROM OrderActivityTable t1
LEFT JOIN OrderActivityTable t2
ON t1.OrderID = t2.OrderID AND t2.ActivityID = 2
WHERE t1.ActivityID = 1
AND t2.OrderID IS NULL
```
You can also use the `WHERE NOT IN` solution of Mike Christensen, or the correlated subquery solution of alfasin, but in my experience this method usually has the best performance (I use MySQL -- results may be different in other implementations). In an ideal world, the DBMS would recognize that these are all the same, and optimize them equivalently. | Something like:
```
SELECT OrderID, ActivityID FROM OrderActivityTable
WHERE ActivityID = 1
AND OrderID NOT IN (select distinct OrderID from OrderActivityTable where ActivityID = 2);
```
This will select everything in `OrderActivityTable` where `ActivityID` is 1, but filter out the list of `OrderID` rows that contain an `ActivityID` of 2. | SQL statement to get values that have one property but not another property | [
"",
"sql",
""
] |
I need to insert approximatlly 10 records.I am thinking foreach loop,do u think is it the best way to do that.whenn searching what is the best method to do that,I find something like this,
```
GO
INSERT INTO MyTable (FirstCol, SecondCol)
SELECT 'First' ,1
UNION ALL
SELECT 'Second' ,2
UNION ALL
SELECT 'Third' ,3
UNION ALL
SELECT 'Fourth' ,4
UNION ALL
SELECT 'Fifth' ,5
GO
```
what is the differences between the this and foreach loop.which one is the better.
thank you all | ```
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES ('First',1),
('Second',2),
('Third',3),
('Fourth',4),
('Fifth',5)
GO
```
Both the `foreach` loop and the `union all` methods would be what is called "Row by Agonizing Row" approach | Just Create a single for inserting all the record..
Because foreach loop will hit the database 10 times and u will take more time and resources..
So better to make single query for all the 10..
```
INSERT INTO Table ( FirstCol, SecondCol) VALUES
( Value1, Value2 ), ( Value1, Value2 )...
``` | bulky insert record to database | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table-valued parameter like this
CREATE TYPE dbo.Loc AS TABLE(Lo integer);
My stored procedure looks like this:
```
ALTER PROCEDURE [dbo].[T_TransactionSummary]
@startDate datetime,
@endDate datetime,
@locations dbo.Loc readonly
..........................
...........................
WHERE (Transaction_tbl.dtime BETWEEN @fromDate AND @toDate)
AND (Location_tbl.Locid IN (select Lo from @locations))
```
I have a listbox that contains multiple items. I can select multiple items from my listbox. How can I pass multiple Locationid to my stored procedure
```
cnt = LSTlocations.SelectedItems.Count
If cnt > 0 Then
For i = 0 To cnt - 1
Dim locationanme As String = LSTlocations.SelectedItems(i).ToString
locid = RecordID("Locid", "Location_tbl", "LocName", locationanme)
next
end if
Dim da As New SqlDataAdapter
Dim ds As New DataSet
Dim cmd23 As New SqlCommand("IBS_TransactionSummary", con.connect)
cmd23.CommandType = CommandType.StoredProcedure
cmd23.Parameters.Add("@startDate", SqlDbType.NVarChar, 50, ParameterDirection.Input).Value = startdate
cmd23.Parameters.Add("@endDate", SqlDbType.NVarChar, 50, ParameterDirection.Input).Value = enddate
Dim tvp1 As SqlParameter =cmd23.Parameters.Add("@location", SqlDbType.Int).Value = locid
tvp1.SqlDbType = SqlDbType.Structured
tvp1.TypeName = "dbo.Loc"
da.SelectCommand = cmd23
da.Fill(ds)
```
but i am getting error..i am working on windows forms in vb.net | There are some examples of how to do this at <http://msdn.microsoft.com/en-us/library/bb675163%28v=vs.110%29.aspx> (see the section titled "Passing a Table-Valued Parameter to a Stored Procedure
").
The simplest thing would seem to be filling a `DataTable` with the values the user selected and passing that to the stored procedure for the `@locations` parameter.
Perhaps something along the lines of (note I don't have VB.NET installed, so treat this as an outline of how it should work, not necessarily as code that will work straight away):
```
cnt = LSTlocations.SelectedItems.Count
' *** Set up the DataTable here: *** '
Dim locTable As New DataTable
locTable.Columns.Add("Lo", GetType(Integer))
If cnt > 0 Then
For i = 0 To cnt - 1
Dim locationanme As String = LSTlocations.SelectedItems(i).ToString
locid = RecordID("Locid", "Location_tbl", "LocName", locationanme)
' *** Add the ID to the table here: *** '
locTable.Rows.Add(locid)
next
end if
Dim da As New SqlDataAdapter
Dim ds As New DataSet
Dim cmd23 As New SqlCommand("IBS_TransactionSummary", con.connect)
cmd23.CommandType = CommandType.StoredProcedure
cmd23.Parameters.Add("@startDate", SqlDbType.NVarChar, 50, ParameterDirection.Input).Value = startdate
cmd23.Parameters.Add("@endDate", SqlDbType.NVarChar, 50, ParameterDirection.Input).Value = enddate
' *** Supply the DataTable as a parameter to the procedure here: *** '
Dim tvp1 As SqlParameter =cmd23.Parameters.AddWithValue("@location", locTable)
tvp1.SqlDbType = SqlDbType.Structured
tvp1.TypeName = "dbo.Loc"
da.SelectCommand = cmd23
da.Fill(ds)
``` | ```
cnt = LSTlocations.SelectedItems.Count
' *** Set up the DataTable here: *** '
Dim locTable As New DataTable
locTable.Columns.Add("Lo", GetType(Integer))
If cnt > 0 Then
For i = 0 To cnt - 1
Dim locationanme As String = LSTlocations.SelectedItems(i).ToString
locid = RecordID("Locid", "Location_tbl", "LocName", locationanme)
' *** Add the ID to the table here: *** '
locTable.Rows.Add(locid)
next
end if
Dim da As New SqlDataAdapter
Dim ds As New DataSet
Dim cmd23 As New SqlCommand("IBS_TransactionSummary", con.connect)
cmd23.CommandType = CommandType.StoredProcedure
cmd23.Parameters.Add("@startDate", SqlDbType.NVarChar, 50, ParameterDirection.Input).Value = startdate
cmd23.Parameters.Add("@endDate", SqlDbType.NVarChar, 50, ParameterDirection.Input).Value = enddate
' *** Supply the DataTable as a parameter to the procedure here: *** '
Dim tvp1 As SqlParameter =cmd23.Parameters.AddWithValue("@location", locTable)
tvp1.SqlDbType = SqlDbType.Structured
tvp1.TypeName = "dbo.Loc"
da.SelectCommand = cmd23
da.Fill(ds)
``` | while using table valued parameter how to pass multiple parameter together to stored prcocedure | [
"",
"sql",
"vb.net",
""
] |
I have a `Toplist` table and I want to get a user's rank. How can I get the row's index?
Unfortunately, I am getting all rows and checking in a for loop the user's ID, which has a significant impact on the performance of my application.
How could this performance impact be avoided? | You can use ROW.NUMBER
This is a example syntax for MySQL
```
SELECT t1.toplistId,
@RankRow := @RankRow+ 1 AS Rank
FROM toplist t1
JOIN (SELECT @RankRow := 0) r;
```
This is a example syntax for MsSQL
```
SELECT ROW_NUMBER() OVER(ORDER BY YourColumn) AS Rank,TopListId
FROM TopList
``` | You may also do something like this:
```
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 1)) AS MyIndex
FROM TopList
``` | How can I get row's index from a table in SQL Server? | [
"",
"sql",
"sql-server",
"row",
"rank",
""
] |
I have a table in the database with records like following:
```
match_id | guess | result
125 | 1 | 0
130 | 5 | 0
233 | 11 | 0
125 | 2 | 0
```
my users choose a guess for each match and I have a function that calculate the result of the guess depending on the result of the match:
if the guess is right the result will be (1)
if it is wrong the result will be (2)
if the match did not finish yet the result will be (0 default)
I have eleven possibilities for guesses (more than one could be right at the same time)
for example : if I have a match whit id=125 and I have all the guesses wrong except 8,11
so i should update the result field for all matches that have the match id and their guess is 8 or 11 (i will give 1 for this result field)
and I want to give (2) for the other guesses of the same match
I use this query for all eleven possibilities like following:
```
UPDATE `tahminler` SET result=1 WHERE match_id='1640482' AND tahmin='8'
UPDATE `tahminler` SET result=1 WHERE match_id='1640482' AND tahmin='11'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='1'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='2'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='3'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='4'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='5'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='6'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='7'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='9'
UPDATE `tahminler` SET result=0 WHERE match_id='1640482' AND tahmin='10'
```
I want to know if I can do this job in one query?or not? | use these two query:
```
UPDATE `tahminler`
SET result=0
WHERE match_id='1640482'
AND tahmin IN ('1','2','3','4','5','6','7','9','10')
```
And then use this:
```
UPDATE `tahminler`
SET result=1
WHERE match_id='1640482'
AND tahmin IN ('8','11')
``` | You can do this, but it will be also ugly. Use [CASE()](http://dev.mysql.com/doc/refman/5.0/en/case.html) operator, like:
```
UPDATE tahminler
SET
result=CASE
WHEN tahmin IN ('1','2','3','4','5','6','7','8','9','10') THEN 0
WHEN tahmin IN ('8', 11) THEN 1
END
WHERE
match_id='1640482'
``` | MySQL Update query with multiple values | [
"",
"mysql",
"sql",
""
] |
I have Mysql Database, with two tables, one with two key values, and the second one related to it 1 to N.
the first table always have data, but the second table related to it may not have.
I need to always return data from the first one, independent if theres none in the second table.
Heres my query :
```
select a.*, b.* FROM disp_ofer a, ofer_detl b
WHERE
a.esta_cod = 'Lelis'
AND
a.disp_ofer_data = '2013-10-30 16:07:20'
AND
b.disp_ofer_data = a.disp_ofer_data
AND
b.esta_cod = a.esta_cod
``` | use [join](http://en.wikipedia.org/wiki/Join_%28SQL%29) syntax... to join tables.
In your case, a LEFT JOIN.
```
select a.*, b.* FROM disp_ofer a
Left join ofer_detl b
on b.disp_ofer_data = a.disp_ofer_data and b.esta_cod = a.esta_cod
WHERE
a.esta_cod = 'Lelis'
AND
a.disp_ofer_data = '2013-10-30 16:07:20'
``` | Always use the [`JOIN`](http://dev.mysql.com/doc/refman/5.0/en/join.html) syntax rather than listing multiple tables in the `FROM` statement. It's considered a better practice.
A `LEFT JOIN` will allow you to achieve this goal. The `NATURAL` keyword will perform then join on all columns that exists in both tables automatically.
```
SELECT a.*, b.*
FROM disp_ofer a
NATURAL LEFT JOIN ofer_detl
WHERE a.esta_cod = 'Lelis' AND a.disp_ofer_data = '2013-10-30 16:07:20'
``` | select from two tables and always return from one | [
"",
"mysql",
"sql",
"database",
""
] |
I'm trying to write a query that will tell me the number of each colors for `Female`.
```
White - 2
Blue - 5
Green - 13
```
So far I have the following query with some of my attempts commented out:
```
SELECT a.id AS aid, af.field_name AS aname, afv.field_value
FROM applications app, applicants a, application_fields af, application_fields_values afv, templates t, template_fields tf
WHERE a.application_id = app.id
AND af.application_id = app.id
AND afv.applicant_id = a.id
AND afv.application_field_id = af.id
#AND af.template_id = t.id
AND af.template_field_id = tf.id
AND t.id = tf.template_id
AND afv.created_at >= '2013-01-01'
AND afv.created_at <= '2013-12-31'
#AND af.field_name = 'Male'
AND afv.field_value = 1
ORDER BY aid, aname
#GROUP BY aid, aNAME
#HAVING aname = 'Female';
```
Currently this query returns data like this:
```
aid | aname | field_value
4 Female 1
4 White 1
5 Green 1
5 Female 1
6 Female 1
6 White 1
7 Blue 1
7 Female 1
8 Female 1
8 Blue 1
9 Male 1
9 Green 1
```
Table structure:
```
applications:
id
application_fields:
id
application_id
field_name
applications_fields_values:
id
application_field_id
applicant_id
field_value
template:
id
template_fields:
id
template_id
applicant:
id
application_id
```
Sample data:
```
application_fields
id | application_id | field_name |template_id | template_field_id
1 | 1 | blue | 1 | 1
2 | 1 | green | 1 | 2
3 | 1 | female | 1 | 3
application_fields_values
id | application_field_id | applicant_id | field_value
4 | 1 | 1 | 1
5 | 2 | 1 | 0
6 | 3 | 1 | 1
templates
id | name |
1 | mytemplate |
template_fields
id | template_id | field_name |
1 | 1 | blue
2 | 1 | green
3 | 1 | female
```
**EDIT**
I'm pretty sure the query below gets what i'm looking for, but it's horrendously slow and my largest table has less than 30K rows.
query
```
SELECT af.field_name AS aname, sum(afv.field_value) AS totals
FROM applications app, applicants a, application_fields af, application_fields_values afv, templates t, template_fields tf
WHERE a.application_id = app.id
AND af.application_id = app.id
AND afv.applicant_id = a.id
AND afv.application_field_id = af.id
AND af.template_field_id = tf.id
AND t.id = tf.template_id
AND afv.created_at >= '2013-01-01'
AND afv.created_at <= '2013-12-31'
AND afv.field_value = 1
AND a.id IN (
SELECT
a2.id
FROM applications app2, applicants a2, application_fields af2, application_fields_values afv2, templates t2, template_fields tf2
WHERE af2.application_id = app2.id
AND afv2.applicant_id = a2.id
AND afv2.application_field_id = af2.id
AND af2.template_field_id = tf2.id
AND t2.id = tf2.template_id
AND afv2.created_at >= '2013-01-01'
AND afv2.created_at <= '2013-12-31'
#AND af2.field_name = 'Male'
AND af2.field_name = 'Female'
AND afv2.field_value = 1
)
GROUP BY aname;
```
which produces the results:
```
aname | totals
Green 2
Black 27
Blue 5
``` | ```
SELECT f1.field_name, count(*) as total
FROM application_fields f1
JOIN applications_fields_values v1
ON v1.application_field_id = f1.id
JOIN applications_fields_values v2
ON v1.applicant_id = v2.applicant_id
JOIN applications_fields f2
ON v2.application_field_id = f2.id
WHERE v1.field_value = 1
AND v2.field_value = 1
AND f2.field_name = 'Female'
AND f1.field_name != 'Female'
AND f1.created_at BETWEEN '2013-01-01' AND '2013-12-31'
GROUP BY f1.field_name
```
It doesn't seem you need to refer the tables `templates`, `template_fields`, `applications`, or `applicant` to solve your problem, unless you have additional requirements. Also, it's not at all clear how do you identify which `application_fields` represent colors. If you have more information about that, some condition may be added. | The query looks fine, just add the condition in the `WHERE` clause, adding in within `HAVING` should be done if you wish to filter out the results of based on grouping
Try this
```
SELECT
af.field_name AS aname,
count(afv.field_value) as totals
FROM
applications app,
applicants a,
application_fields af,
application_fields_values afv,
templates t,
template_fields tf
WHERE
a.application_id = app.id
AND af.application_id = app.id
AND afv.applicant_id = a.id
AND afv.application_field_id = af.id
#AND af.template_id = t.id
AND af.template_field_id = tf.id
AND t.id = tf.template_id
AND afv.created_at >= '2013-01-01'
AND afv.created_at <= '2013-12-31'
#AND af.field_name = 'Male'
AND afv.field_value = 1
AND aname = 'Female'
ORDER BY
aname
GROUP BY
aNAME
``` | Trying to write a complex query to retrieve data | [
"",
"mysql",
"sql",
""
] |
Do Impala or Hive have something similar to PL/SQL's `IN` statements? I'm looking for something like this:
```
SELECT *
FROM employees
WHERE start_date IN
(SELECT DISTINCT date_id FROM calendar WHERE weekday = 'MON' AND year = '2013');
```
This would return a list of all the employees that started on a Monday in 2013. | I should mention that this is one possible solution and my preferred one:
```
SELECT *
FROM employees emp
INNER JOIN
(SELECT
DISTINCT date_id
FROM calendar
WHERE weekday = 'FRI' AND year = '2013') dates
ON dates.date_id = emp.start_date;
``` | ```
select *
from employees emp, calendar c
where emp.start_date = c.date_id
and weekday = 'FRI' and year = '2013';
``` | Do Impala or Hive have something like an IN clause in other SQL syntaxes? | [
"",
"sql",
"hadoop",
"plsql",
"hive",
"impala",
""
] |
I have noticed that a table in my database contains duplicate rows. This has happened on various dates.
When i run this query
```
select ACC_REF, CIRC_TYP, CIRC_CD, count(*) from table
group by ACC_REF, CIRC_TYP, CIRC_CD
having count(1)>1
```
I can see the rows which are duplicated and how many times it excists (always seems to be 2).
The rows do have a unique id on them, and i think it would be best to remove the value with the newest id
I want to select the data thats duplicated but only with the highest id so i can move it to another table before deleteing it.
Anyone know how i can do this select?
Thanks a lot | ```
select max(id) as maxid, ACC_REF, CIRC_TYP, CIRC_CD, count(*)
from table
group by ACC_REF, CIRC_TYP, CIRC_CD
having count(*)>1
```
Edit:
I think this is valid in Sybase, it will find ALL duplicates except the one with the lowest id
```
;with a as
(
select ID, ACC_REF, CIRC_TYP, CIRC_CD,
row_number() over (partition by ACC_REF, CIRC_TYP, CIRC_CD order by id) rn,
from table
)
select ID, ACC_REF, CIRC_TYP, CIRC_CD
from a
where rn > 1
``` | It will only output the unique values from your current table along with the criteria you specified for duplicate entries.
This will allow you to do one step "insert into new\_table" from one single select statement.
Without having to delete and then insert.
```
select
id
,acc_ref
,circ_typ
,circ_cd
from(
select
id
,acc_ref
,circ_typ
,circ_cd
,row_number() over ( partition by
acc_ref
,circ_typ
,circ_cd
order by id desc
) as flag_multiple_id
from Table
) a
where a.flag_multiple_id = 1 -- or > 1 if you want to see the duplicates
``` | Sybase removing duplicate data | [
"",
"sql",
"database",
"duplicates",
"sybase",
""
] |
Having an issue calculating money after percentages have been added and then subtracting , I know that 5353.29 + 18% = 6316.88 which I need to do in my tsql but I also need to do the reverse and take 18% from 6316.88 to get back to 5353.29 all in tsql, I might have just been looking at this too long but I just cant get the figures to calculate properly, any help please? | 6316.88 is 118%, so to get back you need to divide 6316.88 by 118, then multiply by 100.
```
(6316.88/118)*100=5353.29
``` | ```
newVal = 5353.29(1 + .18)
origVal = newVal/(1 + .18)
``` | Percentage Plus and Minus | [
"",
"sql",
"t-sql",
""
] |
I've encountered here an inusited situation that I couldn't understand. Nor the documentation of the functions that I will write about has something to light up this thing.
I've a table with a field `titulo varchar2(55)`. I'm in Brazil, some of the characters in this field has accents and my goal is to create a similar field without the accents (replaced by the original character as this `Γ‘` became `a` and so on.).
I could use a bunch of functions to do that as `replace`, `translate` and others but I find over the internet one that seams to be more elegant, then I use it. That is where the problem came.
My update code is like:
```
update myTable
set TITULO_URL = replace(
utl_raw.cast_to_varchar2(
nlssort(titulo, 'nls_sort=binary_ai')
)
,' ','_');
```
As I said the goal is to transform every accented character in its equivalent without the accent plus the spaces character for an `_`
Then I got this error:
```
ORA-12899: value too large for column
"mySchem"."myTable"."TITULO_URL" (actual: 56, maximum: 55)
```
And at first I though maybe those functions are adding some character, let me checkit. I did a select command to get me a row where `titulo` has 55 characters.
```
select titulo from myTable where length(titulo) = 55
```
Then I choose a row to do some tests, the row that I choose has this value: `'FGHJTΓRYO DE YHJKS DA DGHQΓΓA DE ASGA XCVBGL EASDEΓNASD'` (I did change it bit to preserve the data, but the result is the same)
When i do the following select statement that things became weird:
```
select a, length(a), b, length(b)
from ( select 'FGHJTΓRYO DE YHJKS DA DGHQΓΓA DE ASGA XCVBGL EASDEΓNASD' a,
replace(
utl_raw.cast_to_varchar2(
nlssort('FGHJTΓRYO DE YHJKS DA DGHQΓΓA DE ASGA XCVBGL EASDEΓNASD', 'nls_sort=binary_ai')
)
,' ','_') b
from dual
)
```
The result for this sql is (i will put the values one down other for better visualization):
```
a LENGTH(a)
FGHJTΓRYO DE YHJKS DA DGHQΓΓA DE ASGA XCVBGL EASDEΓNASD 55
b LENGTH(b)
fghjtoryo_de_yhjks_da_dghqcaa_de_asga_xcvbgl_easdeonasd 56
```
Comparing the two strings one above other there is no difference in size:
```
FGHJTΓRYO DE YHJKS DA DGHQΓΓA DE ASGA XCVBGL EASDEΓNASD
fghjtoryo_de_yhjks_da_dghqcaa_de_asga_xcvbgl_easdeonasd
```
I've tested this query on Toad, PLSQL Developer and SQLPLUSW all with the same result. So my question is **Where this LENGTH(b)=56 came from**? I know that it can be something with character set, but I couldn't figure out why. I even tested with the `trim` command and the result is the same.
Another tests that i did
* `substr(b, 1,55)` the result was the same text as above
* `lenght(trim(b))` the result was 56
* `substr(b,56)` the result was empty (no null, no space, just empty)
Suggested by @Sebas:
* `LENGTHB(b)` the result was 56
* `ASCII(substr(b,56))`
So, again: **Where this LENGTH(b)=56 came from**?
Sorry for the long post and thank you for the ones who get down here (read everything).
An thanks for the ones who doesn't read anyway :)
Best regards | The 'nlssort' function's documentation does not state that the output string will be a normalization of the input string, or that they will have same length. The purpose of the function is to return data that can be used to sort the input string.
See <http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions113.htm#SQLRF51561>
It is tempting to use it to normalize your string since *apparently* it works, but you are gambling here...
Heck, it could even yield a **LENGTH(b)=200** and *still* be doing what it is supposed to do :) | 1) Oracle distinguishes lengths in bytes and lengths in characters: `varchar2(55)` means 55 bytes, so 55 UTF-8 characters fit only if you are lucky: you should declare your field as `varchar2 (55 char)`.
2) Contortions like
```
replace(utl_raw.cast_to_varchar2(nlssort(
'FGHJTΓRYO DE YHJKS DA DGHQΓΓA DE ASGA XCVBGL EASDEΓNASD',
'nls_sort=binary_ai')),' ','_') b
```
are nonsense, you are merely replacing strings with somewhat similar ones.
Your database has an encoding, and all strings are represented with that encoding, which determines their length in bytes; the arbitrary variations mcalmeida explains introduce random data-dependent noise, never a good thing if you are making comparisons.
3) Regarding the stated task of removing accents, you should do it yourself with REPLACE, TRANSLATE etc. because only you know your requirements; it isn't Unicode normalization or anything "standard", there are no shortcuts.
You can define a function and call it from any query and any PL/SQL program, without ugly copying and pasting. | Strange behavior of LENGTH command - ORACLE | [
"",
"sql",
"oracle",
""
] |
When I used this query above exception has thrown
```
SELECT FINQDET.InquiryNo,FINQDET.Stockcode,FINQDET.BomQty,FINQDET.Quantity,FINQDET.Rate,FINQDET.Required,FINQDET.DeliverTo,FSTCODE.TitleA AS FSTCODE_TitleA ,FSTCODE.TitleB AS FSTCODE_TitleB,FSTCODE.Size AS FSTCODE_Size,FSTCODE.Unit AS FSTCODE_Unit, FINQSUM.TITLE AS FINQSUM_TITLE,FINQSUM.DATED AS FINQSUM_DATED
FROM FINQSUM , FINQDET left outer join [Config]..FSTCODE ON FINQDET.Stockcode=FSTCODE.Stockcode
WHERE FINQDET.InquiryNo=FINQSUM.INQUIRYNO
ORDER BY FINQDET.Stockcode,FINQDET.InquiryNo
```
but if I used below query problem solved,
```
SELECT FINQDET.InquiryNo,FINQDET.Stockcode,FINQDET.BomQty,FINQDET.Quantity,FINQDET.Rate,FINQDET.Required,FINQDET.DeliverTo,FSTCODE.TitleA AS FSTCODE_TitleA ,FSTCODE.TitleB AS FSTCODE_TitleB,FSTCODE.Size AS FSTCODE_Size,FSTCODE.Unit AS FSTCODE_Unit,
FINQSUM.TITLE AS FINQSUM_TITLE,FINQSUM.DATED AS FINQSUM_DATED
FROM FINQSUM As FINQSUM , FINQDET As FINQDET left outer join [Config]..FSTCODE As FSTCODE ON FINQDET.Stockcode=FSTCODE.Stockcode
HERE FINQDET.InquiryNo=FINQSUM.INQUIRYNO
ORDER BY FINQDET.Stockcode,FINQDET.InquiryNo
```
Please can you explain Why using Alias better than using actual table names | The table [Config]..FSTCODE is qualified with database name which works fine if you use alias. Otherwise you need to qualify full name as it is from different database | Looks like `FSTCODE` is a table in a different DB. In the first query, though the `JOIN` uses the DB name to identify the table, the `ON` statement does not. The second statement adresses this by using an `ALIAS`
You can also modify the first statement as
```
left outer join [Config]..FSTCODE
ON FINQDET.Stockcode=[Config]..FSTCODE.Stockcode
``` | The multi-part identifier "tablename.column " could not be bound | [
"",
"sql",
"sql-server",
""
] |
Every day, the requests get weirder and weirder.
I have been asked to put together a query to detect which columns in a table contain the same value for all rows. I said "That needs to be done by program, so that we can do it in one pass of the table instead of N passes."
I have been overruled.
So long story short. I have this very simple query which demonstrates the problem. It makes 4 passes over the test set. I am looking for ideas for SQL Magery which do not involve adding indexes on every column, or writing a program, or taking a full human lifetime to run.
And **sigh** It needs to be able to work on any table.
Thanks in advance for your suggestions.
```
WITH TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
),
KOUNTS AS
(
SELECT SQRT(COUNT(*)) S, 'Column A' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.A = Q.A OR (P.A IS NULL AND Q.A IS NULL)
UNION ALL
SELECT SQRT(COUNT(*)) S, 'Column B' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.B = Q.B OR (P.B IS NULL AND Q.B IS NULL)
UNION ALL
SELECT SQRT(COUNT(*)) S, 'Column C' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.C = Q.C OR (P.C IS NULL AND Q.C IS NULL)
UNION ALL
SELECT SQRT(COUNT(*)) S, 'Column D' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.D = Q.D OR (P.D IS NULL AND Q.D IS NULL)
)
SELECT COLUMNS_WITH_SINGLE_VALUES
FROM KOUNTS
WHERE S = (SELECT COUNT(*) FROM TEST_CASE)
``` | do you mean something like this?
```
WITH
TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
)
select case when min(A) = max(A) THEN 'A'
when min(B) = max(B) THEN 'B'
when min(C) = max(C) THEN 'C'
when min(D) = max(D) THEN 'D'
else 'No one'
end
from TEST_CASE
```
**Edit**
this works:
```
WITH
TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
)
select case when min(nvl(A,0)) = max(nvl(A,0)) THEN 'A ' end ||
case when min(nvl(B,0)) = max(nvl(B,0)) THEN 'B ' end ||
case when min(nvl(C,0)) = max(nvl(C,0)) THEN 'C ' end ||
case when min(nvl(D,0)) = max(nvl(D,0)) THEN 'D ' end c
from TEST_CASE
```
Bonus: I have also added the check for the null values, so the result now is: A and D
And the [SQLFiddle demo](http://sqlfiddle.com/#!4/d41d8/20055) for you. | Optimizer statistics can easily identify columns with more than one distinct value. After statistics are gathered a simple query against the data dictionary will return the results almost instantly.
The results will only be accurate on 10g if you use ESTIMATE\_PERCENT = 100. The results will be accurate on 11g+ if you use ESTIMATE\_PERCENT = 100 or AUTO\_SAMPLE\_SIZE.
**Code**
```
create table test_case(a varchar2(1), b number, c varchar2(3),d number,e number);
--I added a new test case, E. E has null and not-null values.
--This is a useful test because null and not-null values are counted separately.
insert into test_case
SELECT 'X' A, 5 B, 'FRI' C, NULL D, NULL E FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D, NULL E FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D, 1 E FROM DUAL;
--Gather stats with default settings, which uses AUTO_SAMPLE_SIZE.
--One advantage of this method is that you can quickly get information for many
--tables at one time.
begin
dbms_stats.gather_schema_stats(user);
end;
/
--All columns with more than one distinct value.
--Note that nulls and not-nulls are counted differently.
--Not-nulls are counted distinctly, nulls are counted total.
select owner, table_name, column_name
from dba_tab_columns
where owner = user
and num_distinct + least(num_nulls, 1) <= 1
order by column_name;
OWNER TABLE_NAME COLUMN_NAME
------- ---------- -----------
JHELLER TEST_CASE A
JHELLER TEST_CASE D
```
**Performance**
On 11g, this method might be about as fast as mucio's SQL statement. Options like `cascade => false` would improve performance by not analyzing indexes.
But the great thing about this method is that it also produces useful statistics. If the system is already gathering statistics at regular intervals the hard work may already be done.
**Details about AUTO\_SAMPLE\_SIZE algorithm**
AUTO\_SAMPLE\_SIZE was completely changed in 11g. It does not use sampling for estimating number of distinct values (NDV). Instead it scans the whole table and uses a hash-based distinct algorithm. This algorithm does not require large amounts of memory or temporary tablespace. It's much faster to read the whole table than to sort even a small part of it. The Oracle Optimizer blog has a good description of the algorithm [here](https://blogs.oracle.com/optimizer/entry/how_does_auto_sample_size). For even more details, see [this presentation](http://jonathanlewis.files.wordpress.com/2011/12/one-pass-distinct-sampling-presentation.pdf) by Amit Podder. (You'll want to scan through that PDF if you want to verify the details in my next section.)
**Possibility of a wrong result**
Although the new algorithm does not use a simple sampling algorithm it still does not count the number of distinct values 100% correctly. It's easy to find cases where the estimated number of distinct values is not the same as the actual. But if the number of distinct values are clearly inaccurate, how can they be trusted in this solution?
The potential inaccuracy comes from two sources - hash collisions and synopsis splitting. Synopsis splitting is the main source of inaccuracy but does not apply here. It only happens when there are 13864 distinct values. And it never throws out *all* of the values, the final estimate will certainly be much larger than 1.
The only real concern is what are the chances of there being 2 distinct values with a hash collision. With a 64-bit hash the chance could be as low as 1 in 18,446,744,073,709,551,616. Unfortunately I don't know the details of their hashing algorithm and don't know the real probability. I was unable to produce any collisions from some simple testing and from previous real-life tests. (One of my tests was to use large values, since some statistics operations only use the first N bytes of data.)
Now also consider that this will only happen if *all* of the distinct values in the table collide. What are the chances of there being a table with *only* two values that just happen to collide? Probably much less than the chance of winning the lottery and getting struck by a meteorite at the same time. | Detect all columns in an oracle table which have the same value in each row | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
I have an array of words like this one:
```
$word1 = array('test1','test2','test3','test4','test5',...,'test20');
```
I need to search in my table every row that has at least one of these words in the `text` column. So far, I have this sql query:
```
SELECT * FROM TABLE WHERE text LIKE '$word1[0]' OR text LIKE '$word1[1]'
OR ... OR text LIKE '$word1[20]'
```
But I see that this design isn't very efficient. Is there any way I can shorten this query, in such a way that I don't need to write out every word in the where clause?
Example `SELECT * FROM TABLE WHERE text IN ($word1)`
P.S.: this is an example of what I'm looking for, not an actual query I can run. | If you use a table variable instead of a list to store your words then you can use something like:
```
DECLARE @T TABLE (Word VARCHAR(255) NOT NULL);
INSERT @T (Word)
VALUES ('test1'), ('test2'), ('test3'), ('test4'), ('test5'), ('test20');
SELECT *
FROM TABLE t
WHERE EXISTS
( SELECT 1
FROM @T
WHERE t.Text LIKE '%' + word + '%'
);
```
You can also create a table type to store this, then you can pass this as a parameter to a stored procedure if required:
```
CREATE TYPE dbo.StringList (Value VARCHAR(MAX) NOT NULL);
GO
CREATE PROCEDURE dbo.YourProcedures @Words dbo.StringList READONLY
AS
SELECT *
FROM TABLE t
WHERE EXISTS
( SELECT 1
FROM @Words w
WHERE t.Text LIKE '%' + w.word + '%'
);
GO
DECLARE @T dbo.StringList;
INSERT @T (Value)
VALUES ('test1'), ('test2'), ('test3'), ('test4'), ('test5'), ('test20');
EXECUTE dbo.YourProcedure @T;
```
For more on this see [table-valued Parameters](http://msdn.microsoft.com/en-us/library/bb675163%28v=vs.110%29.aspx) on MSDN.
---
**EDIT**
I may have misunderstood your requirements as you used `LIKE` but with no wild card operator, in which case you can just use `IN`, however I would still recommend using a table to store your values:
```
DECLARE @T TABLE (Word VARCHAR(255) NOT NULL);
INSERT @T (Word)
VALUES ('test1'), ('test2'), ('test3'), ('test4'), ('test5'), ('test20');
SELECT *
FROM TABLE t
WHERE t.Text IN (SELECT Word FROM @T);
``` | You can use a SELECT like this without declaring an array:
```
SELECT * FROM TABLE WHERE text IN ('test1', 'test2', 'test3', 'test4', 'test5')
``` | How to minimize sql select? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need to check if two dates over lap with another two dates in my database.
My database looks like this
```
+----+--------------+------------+------------+
| id | code | StartDate | EndDate |
+----+--------------+------------+------------+
| 1 | KUVX-21-40 | 2013-10-23 | 2013-11-22 |
| 2 | UEXA286-1273 | 2013-10-30 | 2013-11-29 |
| 3 | UAJFAU-2817 | 2013-10-21 | 2013-11-20 |
| 4 | KUVX-21-40 | 2013-10-30 | 2013-11-29 |
+----+--------------+------------+------------+
```
In my query i specify the scope: A start date and an enddate
Lets asign them as follows:
```
ScopeStartDate = "2013-10-1"
ScopeEndDate = "2013-11-26"
```
Above should return me all of the records, since the all overlapse the timespan.
However I cannot get a query working :/
I've tried the following query with no luck:
```
WHERE
(
(StartDate < ScopeStartDate AND StartDate > ScopeStartDate)
OR
(StartDate > ScopeStartDate AND EndDate < ScopeEndDate )
)
```
This returns me two results:
**1 and 3**
what am I doing wrong? | I believe the following condition matches every possible overlapping case.
```
WHERE
(
(ScopeStartDate <= EndDate AND ScopeEndDate >= StartDate)
)
```
except if you declare illogic timespans (for example, those which end before starting) | This is an old thread, but use BETWEEN. This is an excerpt from my timeclock, pls modify to your needs...
```
$qs = "SELECT COUNT(*) AS `count` FROM `timeclock` WHERE `userid` = :userid
AND (
(`timein` BETWEEN :timein AND :timeout OR `timeout` BETWEEN :timein AND :timeout )
OR
(:timein BETWEEN `timein` AND `timeout` OR :timeout BETWEEN `timein` AND `timeout`)
);";
``` | MySQL check if two date range overlap with input | [
"",
"mysql",
"sql",
"datetime",
""
] |
I want to get all `data` in `id`'s 1-3 that are NOT in `id`'s > 6
I'm using id's for simplicity, but I'm really using timestamps.
```
CREATE TABLE test (
id bigint(20) NOT NULL AUTO_INCREMENT,
data varchar(3) NOT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO test (id, data) VALUES
(1, 'abc'),
(2, 'def'),
(3, 'ghi'),
(4, 'jkl'),
(5, 'mno'),
(6, 'pqr'),
(7, 'def'),
(8, 'vxw'),
(9, 'yz');
```
One query of the dozens that I've tried.
```
SELECT
t1.data t1_data,
t2.data t2_data
FROM test t1
JOIN test t2
ON t2.id BETWEEN 1 AND 3
AND t1.id > 6
AND t1.data <> t2.data
```
So I want to get this result:
```
+----------+
| data |
+----------+
| abc |
| ghi |
+----------+
``` | ```
SELECT t1.data AS t1_data
FROM test t1
WHERE t1.id BETWEEN 1 AND 3
AND NOT EXISTS(
SELECT *
FROM test t2
WHERE t2.data = t1.data
AND t2.id > 6
);
``` | This is an example of a set-within-sets subquery. I like to approach these using aggregation with the `having` clause, because this is the most general approach. In your case:
```
select t1.data
from test t1
group by t.data
having sum(id between 1 and 3) > 0 and
sum(id > 6) = 0;
```
The conditions in the `having` clause count the number of rows that meet each condition. The first says that there is at least one row (for a given `data`) with the `id` between 1 and 3. The second says there are no rows where the `id` is greater than 6. | Joining table to itself and selecting values that don't match | [
"",
"mysql",
"sql",
"join",
""
] |
I have this query:
```
SELECT Field1, OrderFor, Writeback, Actshipdate, Orderstatus, receivedate, receivetime
FROM orderinfo, shippinginfo
WHERE orderinfo.orderid = shippinginfo.orderid
AND shippinginfo.custid = '37782'
AND receivedate = DATE(NOW())
AND receivetime = ???????
```
I am using Sybase adaptive server anywhere and trying to get records for the last hour. | try this !!
```
SELECT Field1, OrderFor, Writeback, Actshipdate, Orderstatus, receivedate, receivetime
FROM orderinfo, shippinginfo
WHERE orderinfo.orderid = shippinginfo.orderid
AND shippinginfo.custid = '37782'
AND receivedate = DATE(NOW())
AND receivetime > DATEADD(HOUR, -1, GETDATE())
``` | Try below query:
```
SELECT Field1, OrderFor, Writeback, Actshipdate, Orderstatus, receivedate, receivetime
FROM orderinfo, shippinginfo
WHERE orderinfo.orderid = shippinginfo.orderid
AND shippinginfo.custid = '37782'
AND receivedate = DATE(NOW())
AND receivetime >= (sysdate-1/24);
``` | Get records from last hour | [
"",
"sql",
"sybase",
"sybase-asa",
""
] |
We are trying to select the first purchase for each customer in a table similar to this:
```
transaction_no customer_id operator_id purchase_date
20503 1 5 2012-08-24
20504 1 7 2013-10-15
20505 2 5 2013-09-05
20506 3 7 2010-09-06
20507 3 7 2012-07-30
```
The expected result from the query that we are trying to achieve is:
```
transaction_no customer_id operator_id first_occurence
20503 1 5 2012-08-24
20505 2 5 2013-09-05
20506 3 7 2010-09-06
```
The closest we've got is the following query:
```
SELECT customer_id, MIN(purchase_date) As first_occurence
FROM Sales_Transactions_Header
GROUP BY customer_id;
```
With the following result:
```
customer_id first_occurence
1 2012-08-24
2 2013-09-05
3 2010-09-06
```
But when we select the rest of the needed fields we obviously have to add them to the GROUP BY clause which will make the result from MIN different. We have also tried to joining it on itself, but haven't made any progress.
How do we get the rest of the correlated values without making the aggregate function confused? | You can simply treat the query you have come up with as an inner query. This will work on older version of SQL Server as well (you didn't specify version of SQL Server).
```
SELECT H.transaction_no, H.customer_id, H.operator_id, H.purchase_date
FROM Sales_Transactions_Header H
INNER JOIN
(SELECT customer_id, MIN(purchase_date) As first_occurence
FROM Sales_Transactions_Header
GROUP BY customer_id) X
ON H.customer_id = X.customer_id AND H.purchase_date = X.first_occurence
``` | You can use the [ROW\_NUMBER](https://learn.microsoft.com/en-us/sql/t-sql/functions/row-number-transact-sql?view=sql-server-ver15) function to help you with that.
This is how to do it for your case.
```
WITH Occurences AS
(
SELECT
*,
ROW_NUMBER () OVER (PARTITION BY customer_id order by purchase_date ) AS "Occurence"
FROM Sales_Transactions_Header
)
SELECT
transaction_no,
customer_id,
operator_id,
purchase_date
FROM Occurences
WHERE Occurence = 1
``` | Select first purchase for each customer | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
here's my query:
```
SELECT
p1.time_neu as Datum
count(p1.*) as Anzahl_Palette,
count(p2.*) as Anzahl_Stangen_Behaelter
FROM
00_Gesamt_Pickauf p1,
00_Gesamt_Pickauf p2
WHERE
p1.platz_von like '%-%-00-00' AND
p2.platz_von like ...
```
I've got trouble with the second like-condition. I want to select all the rows, where the "platz\_von"-coloumn looks like this:
```
01-01-01-01
02-02-02-02
02-02-03-04
...
```
But not like this:
```
01-02-00-00
02-01-00-00
```
I need to filter all the rows where "platz\_von" does not end on "-00-00". Any hints on how to write the query?
Thanks! | Try like this
```
SELECT
p1.time_neu as Datum
count(p1.*) as Anzahl_Palette,
count(p2.*) as Anzahl_Stangen_Behaelter
FROM
00_Gesamt_Pickauf p1,
00_Gesamt_Pickauf p2
WHERE
p1.platz_von NOT LIKE '%-00-00%' AND
p2.platz_von like ...
``` | ```
SELECT
p1.time_neu AS Datum
count(p1.*) AS Anzahl_Palette,
count(p2.*) AS Anzahl_Stangen_Behaelter
FROM
00_Gesamt_Pickauf p1,
00_Gesamt_Pickauf p2
WHERE
p1.platz_von like '%-%-%-%' AND
NOT (p1.platz_von like '%-%-00-00')
``` | SQL: Additional condition inside the 'like'-condition | [
"",
"mysql",
"sql",
"ms-access",
"sql-like",
""
] |
I'm new to databases and I'm thinking of creating one for a website. I started with SQL, but I really am not sure if I'm using the right kind of database.
Here's the problem:

What I have right now is the first option. So that means that, my query looks something like this:
```
user_id photo_id photo_url
0 0 abc.jpg
0 1 123.jpg
0 2 lol.png
```
etc.. But to me that seems a little bit inefficient when the database becomes BIG. So the thing I want is the second option shown in the picture. Something like this, then:
```
user_id photos
0 {abc.jpg, 123.jpg, lol.png}
```
Or something like that:
```
user_id photo_ids
0 {0, 1, 2}
```
I couldn't find anything like that, I only find the ordinary SQL. Is there anyway to do something like that^ (even if it isn't considered a "database")? If not, why is SQL more efficient for those kinds of situations? How can I make it more efficient?
Thanks in advance. | Your initial approach to having a `user_id, photo_id, photo_url` is correct. This is the normalized relationship that most database management systems use.
The following relationship is called "one to many," as a user can have many photos.

You may want to go as far as separating the photo details and just providing a reference table between the users and photos.
The reason your second approach is inefficient is because databases are not designed to search or store multiple values in a single column. While it's possible to store data in this fashion, you shouldn't.
If you wanted to locate a particular photo for a user using your second approach, you would have to search using `LIKE`, which will most likely not make use of any indexes. The process of extracting or listing those photos would also be inefficient.
You can read more about basic database principles [here](http://en.wikipedia.org/wiki/Database_normalization). | Your first example looks like a traditional relational database, where a table stores a single record per row in a standard 1:1 key-value attribute set. This is how data is stored in RDBMS' like Oracle, MySQL and SQL Server. Your second example looks more like a document database or NoSQL database, where data is stored in nested data objects (like hashes and arrays). This is how data is stored in database systems like MongoDB.
There are benefits and costs to storing data in either model. With relational databases, where data is spread accross multiple tables and linked by keys, it is easy to get at data from multiple angles and aggregate it for multiple purposes. With document databases, data is typically more difficult to join in single queries, but much faster to retrieve, and also typically formatted for quicker application use.
For your application, the latter (document database model) might be best if you only care about referencing a user's images when you have a user ID. This would not be ideal for say, querying for all images of category 'profile pic' or for all images uploaded after a certain date. You could probably accomplish your task with either database type, and choosing the right database will always depend on the application(s) that it will be used for, but as a general rule-of-thumb, relational databases are more flexible and hard to go wrong with. | What is this form of database called? | [
"",
"sql",
"database",
"database-design",
""
] |
The following two queries are returning different results. I understand the difference has to do with the way the time portions of the dates are being handled, but why is it working this way?
```
// QUERY ONE
select top 3 OrderDate
from Orders
where OrderDate >= '2013-11-01 04:00'
and OrderDate <= '2013-11-30 05:00'
order by OrderDate
// RESULTS
// 2013-11-01
// 2013-11-01
// 2013-11-01
// QUERY TWO
exec sp_executesql
N'select top 3 OrderDate
from Orders
where OrderDate >= @p__linq__0
and OrderDate <= @p__linq__1
order by OrderDate',
N'@p__linq__0 datetime2(7),@p__linq__1 datetime2(7)',
@p__linq__0='2013-11-01T04:00:00',
@p__linq__1='2013-11-30T05:00:00'
// RESULTS
// 2013-11-02
// 2013-11-02
// 2013-11-02
```
UPDATE
If I change the types of the parameters passed to sp\_executesql to 'date' instead of 'datetime', the results are the same.
```
// QUERY THREE
exec sp_executesql
N'select top 3 OrderDate
from Orders
where OrderDate >= @p__linq__0
and OrderDate <= @p__linq__1
order by OrderDate',
N'@p__linq__0 date,@p__linq__1 date',
@p__linq__0='2013-11-01T04:00:00',
@p__linq__1='2013-11-30T05:00:00'
// RESULTS
// 2013-11-01
// 2013-11-01
// 2013-11-01
``` | [Data type precedence](http://technet.microsoft.com/en-us/library/ms190309.aspx) is taking the data in your table, which starts as a date, and compares it as a datetime2(7). So your dynamic SQL version is actually running this:
```
WHERE column_as_datetime2 >= @parameter_as_datetime2
```
So, since `2013-11-01 00:00:00.0000000` is *not* greater than or equal to `2013-11-01 04:00:00.0000000`, the rows from November 1st are left out.
The most practical solution is to use `DATE` parameters (preferred, since the parameters should match the underlying data type, after all), and/or stop passing time values along with them. Try these:
```
USE tempdb;
GO
CREATE TABLE dbo.Orders(OrderDate DATE);
INSERT dbo.Orders VALUES('2013-11-01'),('2013-11-01'),('2013-11-01'),
('2013-11-02'),('2013-11-02'),('2013-11-02');
exec sp_executesql N'select top 3 OrderDate
from Orders
where OrderDate >= @p__linq__0
and OrderDate <= @p__linq__1
order by OrderDate;
select top 3 OrderDate
from Orders
where OrderDate >= @p2
and OrderDate <= @p3
order by OrderDate;
select top 3 OrderDate
from Orders
where OrderDate >= @p4
and OrderDate <= @p5
order by OrderDate;',
N'@p__linq__0 datetime2(7),@p__linq__1 datetime2(7),
@p2 datetime2(7),@p3 datetime2(7),@p4 date,@p5 date',
@p__linq__0='2013-11-01T04:00:00',
@p__linq__1='2013-11-30T05:00:00',
@p2='2013-11-01T00:00:00', -- note no time
@p3='2013-11-30T00:00:00', -- note no time
@p4='2013-11-01',
@p5='2013-11-30';
```
Results:
```
OrderDate
----------
2013-11-02
2013-11-02
2013-11-02
OrderDate
----------
2013-11-01
2013-11-01
2013-11-01
OrderDate
----------
2013-11-01
2013-11-01
2013-11-01
``` | I bet the column `OrderDate` is of type `date`, not `datetime`.
So when you do this
```
where OrderDate >= '2013-11-01 04:00'
```
it converts `'2013-11-01 04:00'` to `date`, not `datetime`, and so it loses the time information. Therefore, the condition in the first query is interpreted as `'2013-11-01 00:00:00' >= '2013-11-01 00:00:00'`. Which is true.
In the second query, the SP receives a parameter of type `datetime`, which has the time information. The condition there is interpreted as `'2013-11-01 00:00:00' >= '2013-11-01 04:00:00'` which is false.
If you want the same behavior in the first query, use a `datetime` variable instead of a string.
```
declare @d1 datetime
declare @d2 datetime
set @d1 = '2013-11-01 04:00'
set @d2 = '2013-11-30 05:00'
select top 3 OrderDate
from Orders
where OrderDate >= @d1
and OrderDate <= @d2
order by OrderDate
``` | sp_executesql returning different results than direct query | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am refactoring some old oracle sql statements containing plenty of conditions. Some are single conditions put into brackets. Now, does the brackets matter for single conditions? Is there a difference between the two examples below?
example 1
```
WHERE
(
A = B
AND B = C
)
AND ( A > 5 )
AND ( B <> 0 )
```
example 2
```
WHERE
(
A = B
AND B = C
)
AND A > 5
AND B <> 0
``` | What about logic: you do need brackets if you have some OR logic, but in this case (only AND) it has no meaning. You can remove all brackets in your query.
And if we'll go deep: look the explain query analyze, you can see that interpreter puts brackets automatically even you missed it. | As far as I know there ain't any semantic differences.
In my experience usually this is either
* a relic of some old condition (maybe an `OR` was in that bracket somewhere in the past) or
* just the style of the dev working in this. | Single SQL conditions in brackets | [
"",
"sql",
""
] |
I have just started with `PL/SQL` and writing a simple fundamental code. I am not getting, what is wrong with the code.
```
DECLARE
role_test varchar(40) := 'Programmer';
BEGIN
update veer_test set project_id=1008420 where role=:role_test;
END;
```
I am getting `ORA-01008: not all variables bound`.
Earlier I tried:
```
DECLARE
role varchar(40) := 'Programmer';
BEGIN
update veer_test set project_id=1008420 where role=role;
END;
```
This updated all the rows of the table. Well this was justifiable. But what if I want to use the same variable name as in the table. I also tried:
```
role=:role;
```
But same error `ORA-01008` | Try:
```
DECLARE
role_test varchar(40) := 'Programmer';
BEGIN
update veer_test set project_id=1008420 where role=role_test;
END;
``` | If you want to use the same variable name as your column name, You can try with a procedure like this,
```
CREATE OR REPLACE PROCEDURE role_p(role VARCHAR2)
IS
BEGIN
UPDATE veer_test SET project_id=1008422 WHERE role = role_p.role;
END;
```
To Execute:
```
EXECUTE role_p('Programmer');
``` | Oracle "ORA-01008" Error. Variable Not Bound | [
"",
"sql",
"oracle",
"plsql",
""
] |
Short version: I need to return a query with 3 items from another table and adding it to the existing table.
Long version:
`Table A` contains the following information:
```
| ID | Name | Date | Comment |
--------------------------------
| 1 | AJ | 9/11 | Howdy |
| 2 | AW | 9/13 | Hi |
| 3 | AK | 9/15 | Aloha |
| 4 | AW | 9/15 | Hello |
| 5 | AJ | 9/18 | Greetings |
```
I need `Table B` to resemble:
```
| ID | Comment | Comment2 | Comment3 |
--------------------------------------------
| 1 | Howdy | Aloha | Greetings |
```
I am running
```
SELECT TOP 3 *
FROM a
WHERE Name IN ('AJ','AK')
```
but that makes `Table B` appear like:
```
| ID | Name | Date | Comment |
--------------------------------
| 1 | AJ | 9/11 | Howdy |
| 3 | AK | 9/15 | Aloha |
| 5 | AJ | 9/18 | Greetings |
```
Is it even possible to get what I want? | Not entirely sure what you are after as you have id's for each comment, then your output has a single row with an id (where does this id come from for your output?) but this may be able to be expanded upon:
```
SELECT
[1] AS COMMENT1,
[2] AS COMMENT2,
[3] AS COMMENT3
FROM
TABLE_A
PIVOT (MAX(COMMENT) FOR id IN ([1],[2],[3])) AS PVT
``` | Please try this , it helpful to you
```
select b.id, b.comment as comment
, (select comment from ##temp1 where id = b.id+2 ) as comment1
, (select comment from ##temp1 where id = b.id+4 ) as comment2
from ##temp1 b where b.id=1
``` | SQL Server : return 3 items | [
"",
"sql",
"sql-server",
""
] |
```
SELECT * FROM `people` WHERE first_name like 'm%' and last_name like 'm%';
```
- this selects people with with the same first and last name, but that's for `m` only. How to select all such people from a to z (order by desc is not a matter)? | ```
SELECT * FROM `people` WHERE UPPER(LEFT(first_name, 1)) = UPPER(LEFT(last_name, 1))
```
Explanation: takes the leftmost 1 character of first name and of last name, converts them to uppercase, and compares them. | ```
SELECT * FROM people WHERE LEFT(first_name, 1) = LEFT(last_name, 1);
ORDER BY last_name, first_name
``` | mysql: select * where first and last name starts with the same letter | [
"",
"mysql",
"sql",
"select",
"where-clause",
""
] |
Lets say, I have employees, and I know what fruits they like.
```
fruits(name, fruit_name)
```
My question is: list all the employees which at least like the same fruits as Donald.
So how do I compare two set of values?
This is how I get the fruits which Donald likes:
```
Select name, fruit_name
from fruits
where initcap(name) like '%Donald%';
```
Example: Donald likes apples, pears, peaches. I need the people who like apples, pears, peaches and possibly other fruits, but they must like those 3. | You can use self join to get your desired result-
I have tweaked your query a little to get the output-
```
select distinct e1.name from fruits e1,(Select name, fruit_name
from fruits
where initcap(name) like '%Donald%') e2
where e1.fruit_name = e2.fruit_name;
```
the above query returns employees for whom atleast one fruit matches with Donald
Below tested Query gives employees for whom atleast all the Donald's fruits matches
```
select name from (
select name,count(1) cnt from
(select name,fruit_name, case when fruit_name in (Select distinct fruit_name
from fruits
where initcap(name) like '%Donald%') then 1 else 0 end fruit_match from fruits)
where fruit_match = 1 group by name) where cnt >=
(select count(distinct fruit_name) from fruits where initcap(name) like '%Donald%');
``` | Two ways to do this:
# Using Collections
I find this gives the most comprehensible SQL but it does require defining a collection type:
```
CREATE TYPE VARCHAR2s_Table AS TABLE OF VARCHAR2(20);
```
Then you can just group everything up into collections and use a self join and `SUBMULTISET OF` to find the other names.
```
WITH grouped AS (
SELECT name,
CAST( COLLECT( fruit ) AS VARCHAR2s_Table ) AS list_of_fruits
FROM fruits
GROUP BY name
)
SELECT g.name
FROM grouped f
INNER JOIN
grouped g
ON ( f.list_of_fruits SUBMULTISET OF g.list_of_fruits
AND f.name <> g.name )
WHERE f.name = 'Alice';
```
[SQLFIDDLE](http://sqlfiddle.com/#!4/7ce16/2/0)
Or an alternative version of this:
```
WITH grouped AS (
SELECT name,
CAST( COLLECT( fruit ) AS VARCHAR2s_Table ) AS list_of_fruits
FROM fruits
GROUP BY name
)
SELECT name
FROM grouped
WHERE name <> 'Alice'
AND ( SELECT list_of_fruits FROM grouped WHERE name = 'Alice' )
SUBMULTISET OF list_of_fruits ;
```
# Not using Collections
```
WITH match_by_user AS (
SELECT DISTINCT
name,
fruit
FROM fruits
WHERE name = 'Alice'
)
SELECT f.name
FROM fruits f
INNER JOIN
match_by_user m
ON ( f.fruit = m.fruit
AND f.name <> m.name )
GROUP BY f.name
HAVING COUNT( DISTINCT f.fruit ) = ( SELECT COUNT(1) FROM match_by_user );
```
[SQLFIDDLE](http://sqlfiddle.com/#!4/8fb58/1/0)
As an aside - using `INITCAP(Name) LIKE '%Name%'` has the potential to match multiple names and you might find that you are finding the fruits that one of several people like. | SQL - Compare two sets | [
"",
"sql",
"oracle",
""
] |
It can be needed for inserting and modifying of big amounts of data. | Disable all FKs:
```
EXEC sp_MSforeachtable @command1="ALTER TABLE ? NOCHECK CONSTRAINT ALL"
GO
```
Enable all FKs:
```
EXEC sp_MSforeachtable @command1="ALTER TABLE ? CHECK CONSTRAINT ALL"
GO
```
Disable all triggers:
```
EXEC sp_MSforeachtable @command1="ALTER TABLE ? DISABLE TRIGGER ALL"
GO
```
Enable all triggers:
```
EXEC sp_MSforeachtable @command1="ALTER TABLE ? ENABLE TRIGGER ALL"
GO
```
Of course, note that if you had any FKs/triggers disabled prior to this, the enable script will re-enable these. | To disable all FKs in a table:
```
ALTER TABLE Table2 NOCHECK CONSTRAINT ALL
```
To disable a single FK in a table:
```
ALTER TABLE Table2 NOCHECK CONSTRAINT FK_Table2_Table1
```
To enable them replace `NOCHECK` with `CHECK`. | How to turn off/on all the foreign keys and triggers in Microsoft SQL Server? | [
"",
"sql",
"sql-server",
""
] |
I am new in stored procedures.
```
ALTER PROCEDURE [dbo].[SP_MY_STORE_PROCEDURED]
(
@ID INT,
@NAME VARCHAR(50)
)
AS
BEGIN
UPDATE MY_TABLE
SET
WHEN
(( ID ) > 1)
THEN
ID=@ID
,
NAME = @NAME
END
```
I try to use when then for update my **ID** and `Name`
**If Id is greater than 1 i want to update otherwise no update.**
How can i do it ms sql?
Any help will be appreciated.
Thanks. | I think this is what you are after:
```
ALTER PROCEDURE [dbo].[SP_MY_STORE_PROCEDURED]
(
@ID INT,
@NAME VARCHAR(50)
)
AS
BEGIN
UPDATE MY_TABLE
SET NAME = @NAME
WHERE ID = @ID
END
```
You do not need to check ID>1 since you are checking the equality with @ID. If you want to be sure that this doesn't happen if @ID <=1 then you may try the following:
```
ALTER PROCEDURE [dbo].[SP_MY_STORE_PROCEDURED]
(
@ID INT,
@NAME VARCHAR(50)
)
AS
BEGIN
IF @ID > 1
UPDATE MY_TABLE
SET NAME = @NAME
WHERE ID = @ID
END
``` | I'm not sure exactly what you're trying to update. Are you trying to change the name on a user record with id = @ID?
```
ALTER PROCEDURE [dbo].[SP_MY_STORE_PROCEDURED]
(
@ID INT,
@NAME VARCHAR(50)
)
AS
BEGIN
UPDATE MY_TABLE
SET Name = @Name
WHERE Id = @ID and @ID > 1
END
``` | Sql Server Stored Procedure Case When in Update | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a `item_prices` table with prices. Those prices vary at any time.
I want to display all items where date is highest
**ITEM\_prices**
```
id | Items_name | item_price | item_date
------------------------------------------
1 A 10 2012-01-01
2 B 15 2012-01-01
3 B 16 2013-01-01
4 C 50 2013-01-01
5 A 20 2013-01-01
```
I want to display ABC items once each with highest date like as below
```
id | Items_name | item_price | item_date
-------------------------------------------
3 B 16 2013-01-01
4 C 50 2013-01-01
5 A 20 2013-01-01
``` | when you can use native functions then why to go for any window function or CTE.
```
SELECT t1.*
FROM ITEM_prices t1
JOIN
(
SELECT Items_name,MAX(item_date) AS MaxItemDate
FROM ITEM_prices
GROUP BY Items_name
)t2
ON t1.Items_name=t2.Items_name AND t1.item_date=t2.MaxItemDate
``` | One approach would be to use a CTE (Common Table Expression) if you're on SQL Server **2005 and newer** (you aren't specific enough in that regard).
With this CTE, you can partition your data by some criteria - i.e. your `Items_name` - and have SQL Server number all your rows starting at 1 for each of those "partitions", ordered by some criteria.
So try something like this:
```
;WITH NewestItem AS
(
SELECT
id, Items_name, item_price, item_date,
RowNum = ROW_NUMBER() OVER(PARTITION BY Items_name ORDER BY item_date DESC)
FROM
dbo.ITEM_Prices
)
SELECT
id, Items_name, item_price, item_date
FROM
NewestItem
WHERE
RowNum = 1
```
Here, I am selecting only the "first" entry for each "partition" (i.e. for each `Items_Name`) - ordered by the `item_date` in descending order (newest date gets RowNum = 1).
Does that approach what you're looking for?? | sql server select query a bit complex | [
"",
"sql",
"sql-server",
"select",
""
] |
We currently have a table that logs traffic through our service. We're looking for a way to do some diagnostics/alerts based on the traffic, but find that at 2am we don't have enough traffic to be reliable (1 fail at 2 am can be a 50% failure rate, but at 9am when people work up, 1 fail can be .01%).
We'd like to check on the last 10 minutes by server, if the last 10 minutes doesn't have 300 records in it, we'd like to go back until we have 300 records.
Is there a way to do this in a query?
Our table looks like:
```
ID INT,
ServerID INT,
Success BIT,
ActionDate DATETIME
```
I could use `ROW_NUMBER()` to get the last 300, but if the traffic is high enough that this isn't the full last 10 minutes, we're missing data that might be relevant.
I'm looking to do something along the lines of
```
SET _RowCount = SELECT COUNT(*) FROM tbl WHERE Date >= DATEADD(M, -10, GETDATE());
IF _RowCount < 300 SET @RowCount = 300;
SELECT TOP _RowCount records
```
But can't seem to use a variable as the TOP count. | Using the last 300 records seems like the easiest thing to do:
```
SELECT top 300 t.*
FROM tbl t
order by date desc;
```
But you can do what you want with a single query:
```
select t.*
from (select t.*, row_number() over (order by date desc) as seqnum
from tbl t
) t
where seqnum <= 300 or Date >= DATEADD(M, -10, GETDATE());
``` | Not as neat as yours, but:
```
SET @RowCount = SELECT COUNT(*) FROM tbl WHERE Date >= DATEADD(M, -10, GETDATE());
IF @RowCount < 300
BEGIN
SELECT TOP 300 records order by Date Desc
END
ELSE
SELECT records WHERE Date >= DATEADD(M, -10, GETDATE())
END
``` | Query for last 10 minutes if > 300 records OR last 300 records | [
"",
"sql",
"sql-server",
""
] |
I have two tables as follows:
```
CREATE TABLE `keywords_by_city` (
`idKEYWORD` int(11) NOT NULL,
`CITY` varchar(45) NOT NULL);
CREATE TABLE `city` (
`idCITY_NAME` varchar(50) NOT NULL DEFAULT ''
);
```
I want to get a list of all the cities in which idKeyword=781 is not present.
I tried creating the query as follows but I don't think its correct:
```
SELECT cty.`idCITY_NAME`
FROM `keywords_by_city` kbc
LEFT JOIN `city` cty ON cty.`idCITY_NAME` = kbc.`CITY`
WHERE kbc.`idKEYWORD` IS NULL
AND kbc.`idKEYWORD` = 781
```
I also tried the following:
```
SELECT `CITY`
FROM `keywords_by_city` kbc
WHERE kbc.`idKEYWORD` = 781
AND kbc.`CITY` NOT IN (SELECT `idCITY_NAME` FROM `city`);
```
neither of these seem to work. Can someone please help. I would prefer a solution without a sub query if possible.
**UPDATE**
I am using the following data:
```
INSERT INTO keywords_by_city (idKEYWORD, CITY)
VALUES (781, 'NYC'), (266855, 'NYC'),
(266856, 'NYC'), (266857, 'NYC'),
(266858, 'NYC'), (266859, 'NYC');
INSERT INTO `city`
(`idCITY_NAME`)
VALUES
('NYC'),('Jersey City'),
('San Jose'),('Albany');
``` | You were close in your attempt. You just needed to swap the tables and put the keyword ID requirement into the join clause:
```
SELECT cty.`idCITY_NAME`
FROM `city` cty
LEFT JOIN `keywords_by_city` kbc ON cty.`idCITY_NAME` = kbc.`CITY` AND kbc.`idKEYWORD` = 781
WHERE kbc.`idKEYWORD` IS NULL
```
Also, it is standard practice to add indices on foreign key fields - ie. `keywords_by_city`.`CITY`. This will make the query perform significantly faster, especially as the table grows. | `t1 LEFT JOIN t2` means t1 table is outer and t2 table is inner of `NESTED LOOP JOIN`
something like this:
```
foreach (row in t1) {
if ((t1.col1 matches t2.col1) OR (t1.col1 doest not match t2.col2)) {
JOIN condition match.
}
}
```
so, your query changed like this:
```
SELECT cty.`idCITY_NAME`
FROM `city` cty
LEFT JOIN `keywords_by_city` kbc
ON cty.`idCITY_NAME` = kbc.`CITY` AND kbc.`idKEYWORD` = 781
WHERE kbc.`CITY` IS NULL
```
if you use `INNER JOIN`, the optimizer choose which table is outer using cost function. but `LEFT JOIN` left table is outer in `NESTED LOOP JOIN`, and right table is outer when `RIGHT JOIN` | mySQL - difference between two tables | [
"",
"mysql",
"sql",
""
] |
I'm new to Ruby on Rails. I'm trying to determine the proper ruby query for the following SQL query. The following query works but I'm wondering what the most Railsy way to do this is.
```
@user_urls = Url.find_by_sql(["
select urls.*
from urls
join connections on connections.id = urls.connection_id
join users on users.id = connections.user_id
where urls.marked_for_sending = true and users.id = ?", @user.id])
```
Here are some details about my model associations -
```
User has many Connections
Connection has many URLs
Connection belongs to User
URL belongs to Connection
```
Please give me some idea. | You can add an association in the User model:
```
#user.rb
has_many :urls, :through => :connections
@user_urls = @user.urls.where(marked_for_sending: true)
```
Other option:
```
@user_urls = Url.joins(:connection).where(connections: {user: @user}).where(marked_for_sending: true)
``` | You can use `joins` and `where` in rails
```
@user_urls = Url.joins("LEFT JOIN connections ON connections.id = urls.connection_id").joins(" LEFT JOIN users ON users.id = connections.user_id").where("urls.marked_for_sending =? AND users.id = ?", true, @user.id)
``` | Need help converting SQL Query to Ruby. | [
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
I have a table that has the following structure:
```
Account_No Contact Date
-------------------------
1 2013-10-1
2 2013-9-12
3 2013-10-15
3 2013-8-1
3 2013-8-20
2 2013-10-25
4 2013-9-12
4 2013-10-2
```
I need to search the table and return any account numbers that have two contact dates that are within 30 days of each other. Some account numbers may have 5 or 6 contact dates. I essentially just need to return all of the full account numbers and the records that are within 30 days of each other and ignore the rest. Contact date is being stored as a date data type.
So for example account number 3 would return the 2013-8-1 and the 2013-8-20 records, and both of the records for account number 4 would appear as well, but not the other account number records nor the account number 3 from 2013-10-15.
I am using SQL Server 2008 R2.
Thanks for any help in advance! | You can use DATEADD for the +/-30 days and compare against the time window:
```
DECLARE @ContactDates TABLE (
Account_No int
, Contact Date
)
-- Sample data
INSERT @ContactDates (Account_No, Contact)
VALUES
(1, '2013-10-01')
, (2, '2013-09-12')
, (3, '2013-10-15')
, (3, '2013-08-01')
, (3, '2013-08-20')
, (2, '2013-10-25')
, (4, '2013-09-12')
, (4, '2013-10-02')
-- Find the records within +/-30 days
SELECT c1.Account_No, c1.Contact AS Contact_Date1
FROM @ContactDates AS c1
JOIN (
-- Inner query with the time window
SELECT Account_No
, Contact
, DATEADD(dd, 30, Contact) AS Date_Max
, DATEADD(dd, -30, Contact) AS Date_Min
FROM @ContactDates
) AS c2
-- Compare based on account number, exclude the same date
-- from comparing against itself. Usually this would be
-- a primary key, but this example doesn't show a PK.
ON (c1.Account_No = c2.Account_No AND c1.Contact != c2.Contact)
-- Compare against the +/-30 day window
WHERE c1.Contact BETWEEN c2.Date_Min AND c2.Date_Max
```
This returns the following:
```
Account_No Contact
========== ==========
3 2013-08-20
3 2013-08-01
4 2013-10-02
4 2013-09-12
``` | In SQL Server 2012, you would have the `lag()` and `lead()` functions. In 2008, you can do the following for values that are in the same calendar month:
```
select distinct account_no
from t t1
where exists (select 1
from t t2
where t1.account_no = t2.account_no and
datediff(month, t1.ContactDate, t2.ContactDate) = 0
)
```
There is a bit of a challenge in defining what a "month" is when dates are in different months. (Is March 16 "one month" after Feb 15? They are closer in time than Jan 1 and Jan 31.) You could just go with 30 days:
```
select distinct account_no
from t t1
where exists (select 1
from t t2
where t1.account_no = t2.account_no and
datediff(day, t1.ContactDate, t2.ContactDate) <= 30
)
``` | T-SQL Finding Accounts With Contact Dates within 30 days | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I created function and want to assign variable result of query:
```
CREATE OR REPLACE FUNCTION GetData
(
OUT outValue integer
)
AS $$
DECLARE
records "Records";
BEGIN
records := (SELECT "Value1" FROM "Records");
outValue := (SELECT sum("Value1") FROM records)
END;
$$ LANGUAGE plpgsql;
```
But, postgresql say's:
```
"ERROR: subquery in an expression returned more than one row."
```
If declare a variable of type `<"Records"%ROWTYPE>`, we have same result error.
How to declare variable with result of query? | You can create temporary table inside your function and use it for queries after populating:
```
create or replace function GetData()
returns integer
as $$
declare
outValue int;
begin
create temporary table records(Value1 int);
insert into records
select Value1 from Table1;
outValue := (select sum(Value1) from records);
return outValue;
end;
$$ language plpgsql;
```
**`sql fiddle demo`** | If you just want to return a single value why not declare the function as `returns integer` and use something like this:
```
CREATE OR REPLACE FUNCTION GetData()
returns integer
AS $$
SELECT sum("Value1")::integer FROM "Records";
$$ LANGUAGE sql;
```
Btw: I would highly recommend to stop using quoted identifiers and get rid of the double quotes. That will save you a lot of trouble in the long run. | How to declare variable with result of query? | [
"",
"sql",
"postgresql",
""
] |
I have 2 tables, `members` and `chat`.
I want to get all member ids which had chat with $user\_id, here we assume $user\_id = '5'
`chat` table is like:
```
id from to message
1 32 12 test
2 25 18 new message
3 5 86 another message
4 53 5 final message
```
`members` table:
```
id user_id
1 1
2 2
3 3
4 4
5 5
```
...
So for the above table contents, my quesry should return `53` and `86` because only these user ids had chat with '5'
my query is:
```
$q = "
SELECT members.user_id
FROM members
LEFT JOIN chat ON chat.to = '5' OR chat.from = '5'
"
```
Expected output is :
```
user_id
86
53
```
But it's not working, how could I achieve this?
Thanks | The join clause has to contains one field from members and one field from chat. But you may prefer that way to write your query :
```
$q = "
SELECT CASE chat.to WHEN 5 THEN chat.from ELSE chat.to END as the_other_guy
FROM chat
WHERE chat.from = 5 OR chat.to = 5";
```
**EDIT:** Next time, please ask what you REALLY need in your question, because we cannot guess what you thing. "display id and username of members who talked to user 5" would have been more clear than your first title, which was "how to join 2 table"
```
$q = "
SELECT CASE chat.to WHEN 5 THEN chat.from ELSE chat.to END as the_other_guy, username
FROM chat, members
WHERE
(chat.from = 5 AND members.id=chat.to)
OR (chat.to = 5 AND members.id=chat.from)";
``` | I think you 're looking for this one:
```
select chat.to from chat where chat.from=5
union
select chat.from from chat where chat.to=5
```
meaning every user\_id that user:5 has chatted with | get all rows matching for same value in 2 different columns | [
"",
"mysql",
"sql",
""
] |
In Oracle, is it possible to have a sub-query within a select statement that returns a column if exactly one row is returned by the sub-query and null if none or more than one row is returned by the sub-query?
Example:
```
SELECT X,
Y,
Z,
(SELECT W FROM TABLE2 WHERE X = TABLE1.X) /* but return null if 0 or more than 1 rows is returned */
FROM TABLE1;
```
Thanks! | How about going about it in a different way? A simple LEFT OUTER JOIN with a subquery should do what you want:
```
SELECT T1.X
,T1.Y
,T1.Z
,T2.W
FROM TABLE1 AS T1
LEFT OUTER JOIN (
SELECT X
,W
FROM TABLE2
GROUP BY X,W
HAVING COUNT(X) = 1
) AS T2 ON T2.X = T1.X;
```
This will only return items that have exactly 1 instance of X, and LEFT OUTER JOIN it back to the table when appropriate (leaving the non-matches NULL).
This is also ANSI-compliant, so it is quite performant. | Besides a `CASE` solution or rewriting the inline subquery as an outer join, this will work, if you can apply an aggregate function (`MIN` or `MAX`) on the `W` column:
```
SELECT X,
Y,
Z,
(SELECT MIN(W) FROM TABLE2 WHERE X = TABLE1.X HAVING COUNT(*) = 1) AS W
FROM TABLE1;
``` | SQL return exactly one row or null in a select sub-query | [
"",
"sql",
"oracle",
"subquery",
""
] |
I have no idea how to do this, and have not found any documentation on it, but I am 99% sure that it can be done. in other words if I have say a 5 col and 5 row table, and the string is in 1:1 and I want it in 5:5, what would the code look like?(I have done this in excel and access that is why I believe it is possible in sql). Please forgive me if I didn't word this right but I am new to sql and don't know all the lingo just yet. thanks in advance. | ```
UPDATE TheTable
SET column5=(SELECT column1 FROM TheTable WHERE id=1)
WHERE id=5
``` | Database is not Excel, you cannot select value directly based on columnNo, rowNo. What's the use case for it, anyway?
PS. It can be done by some circus tricks and dynamic queries like described in <http://forums.mysql.com/read.php?10,286477,286489#msg-286489>, but I don't think it's worth the effort. You should consider redesign of your table structure. | How do I move a string from a specific column and row to a different column and row? | [
"",
"mysql",
"sql",
""
] |
I want to be able to update a table of the same schema using a "replace into" statement. In the end, I need to be able to update a large table with values that may have changed.
Here is the query I am using to start off:
```
REPLACE INTO table_name
(visual, inspection_status, inspector_name, gelpak_name, gelpak_location),
VALUES (3, 'Partially Inspected', 'Me', 'GP1234', 'A01');
```
What I don't understand is how does the database engine know what is a duplicate row and what isn't? This data is extremely important and I can't risk the data being corrupted. Is it as simple as "if all columns listed have the same value, it is a duplicate row"?
I am just trying to figure out an efficient way of doing this so I can update > 45,000 rows in under a minute. | As the [documentation](http://dev.mysql.com/doc/refman/5.0/en/replace.html) says:
> REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted. | `REPLACE` does work much like an `INSERT` that just overwrites records that have the same `PRIMARY KEY` or `UNIQUE` index, however, beware.
Shlomi Noach writes about the problem with using `REPLACE INTO` [here](http://code.openark.org/blog/mysql/replace-into-think-twice):
> But weak hearted people as myself should be aware of the following: it is a heavyweight solution. It may be just what you were looking for in terms of ease of use, but the fact is that on duplicate keys, a DELETE and INSERT are performed, and this calls for a closer look.
>
> Whenever a row is deleted, all indexes need to be updated, and most importantly the PRIMARY KEY. When a new row is inserted, the same happens. Especially on InnoDB tables (because of their clustered nature), this means much overhead. The restructuring of an index is an expensive operation. Index nodes may need to be merged upon DELETE. Nodes may need to be split due to INSERT. After many REPLACE INTO executions, it is most probable that your index is more fragmented than it would have been, had you used SELECT/UPDATE or INSERT INTO ... ON DUPLICATE KEY
>
> Also, there's the notion of "well, if the row isn't there, we create it. If it's there, it simply get's updated". This is false. The row doesn't just get updated, it is completely removed. The problem is, if there's a PRIMARY KEY on that table, and the REPLACE INTO does not specify a value for the PRIMARY KEY (for example, it's an AUTO\_INCREMENT column), the new row gets a different value, and this may not be what you were looking for in terms of behavior.
>
> Many uses of REPLACE INTO have no intention of changing PRIMARY KEY (or other UNIQUE KEY) values. In that case, it's better left alone. On a production system I've seen, changing REPLACE INTO to INSERT INTO ... ON DPLICATE KEY resulted in a ten fold more throughput (measured in queries per second) and a drastic decrease in IO operations and in load average.
In summary, `REPLACE INTO` *may* be right for your implementation, but you might find it more appropriate (and less risky) to use [`INSERT ... ON DUPLICATE KEY UPDATE`](https://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html) instead. | Replace Into Query Syntax | [
"",
"mysql",
"sql",
"database",
"replace",
""
] |
I'm on a query.
```
SELECT ttf.default_text
FROM test_template_field ttf, TEST t
WHERE ttf.schema_field_id = 2044
--HERE
AND ttf.test_template_id = t.test_template_id
AND t.workflow_node_id IN (
SELECT wn.workflow_node_id
FROM lims_sys.workflow_node wn, lims_sys.workflow_user wu
WHERE wn.workflow_id = wu.workflow_id
AND wn.workflow_node_type_id = 42
AND wu.u_external_category IN ('M'))
group by ttf.DEFAULT_TEXT
```
This works fine, and in 8 seconds I get my result back.
But if I add another AND function, it will take 28 Minutes before I get my results back.
Its about this one, his location was "--HERE"
```
AND ttf.default_text NOT IN ('Preparation Microbiology', 'Other', 'Preparation')
```
I don't know why it's slow.. Can someone help? | * Use [DBMS\_STATS](http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_stats.htm) to update the table statistics that Oracle uses to decide the execution plan
* If this doesn't help, you can use [optimizer hints](http://docs.oracle.com/cd/B19306_01/server.102/b14211/hintsref.htm) in the query to kick the query optimizer's butt into the right direction | Check if you have got indexses on
test\_template\_field.schema\_field\_id,
test\_template\_field.test\_template\_id,
test.test\_template\_id,
lims\_sys.workflow\_node.workflow\_id,
wu.u\_external\_category,
wn.workflow\_node\_type\_id
Also try this query
```
SELECT ttf.default_text
FROM test_template_field ttf
JOIN TEST t
on ttf.test_template_id = t.test_template_id
JOIN (SELECT wn.workflow_node_id
FROM lims_sys.workflow_node wn
JOIN lims_sys.workflow_user wu
ON wn.workflow_id = wu.workflow_id
AND wu.u_external_category = 'M'
WHERE wn.workflow_node_type_id = 42) wni
on t.workflow_node_id = wni.workflow_node_id
WHERE ttf.schema_field_id = 2044
AND ttf.default_text NOT IN ('Preparation Microbiology', 'Other', 'Preparation')
group by ttf.DEFAULT_TEXT
```
Looks like NOT IN is taking too long ( there is too many rows with ttf.schema\_field\_id = 2044 )
try this query also
```
SELECT *
FROM (SELECT ttf.default_text
FROM test_template_field ttf
JOIN TEST t
on ttf.test_template_id = t.test_template_id
JOIN (SELECT wn.workflow_node_id
FROM lims_sys.workflow_node wn
JOIN lims_sys.workflow_user wu
ON wn.workflow_id = wu.workflow_id
AND wu.u_external_category = 'M'
WHERE wn.workflow_node_type_id = 42) wni
on t.workflow_node_id = wni.workflow_node_id
WHERE ttf.schema_field_id = 2044) ss
WHERE ss.default_text NOT IN
('Preparation Microbiology', 'Other', 'Preparation')
group by ss.DEFAULT_TEXT
``` | Oracle SELECT statement slow | [
"",
"sql",
"oracle",
"select",
""
] |
```
Delete From StudentTb Where StudentType in (1, 2) and StudentI not in
(
Select lg.StudentI From StudentLog lg
)
```
I tried to changed the above code to the following... But, I am not sure if it is correct or not.
```
Delete From StudentTb Where StudentType in (1, 2)
and Not Exists
(
Select 'x'
From StudentLog lg
Where StudentI= lg.StudentI
)
```
Can anyone please help me? | Not in and not exists do not always have the same meaning. I assume you want to convert because "not in" tends to be slow. Here is another way where the logic will always match.
```
Delete From StudentTb
Where StudentType in (1, 2)
and StudentI in
(
select StudentI
from StudentTb
except
Select StudentI From StudentLog
)
``` | ```
Delete From StudentTb s
Where StudentType in (1, 2)
and Not Exists
(
Select *
From StudentLog
Where StudentI = s.StudentI
)
```
In a subquery, unqualified references to columns(which do not specify table name or alias) are first assumed to be in the table referenced in the subquery itself, not in other tables in the outer query. So, in your sql syntax, both sides of the predicate `Where StudentI = lg.StudentI` are the same column in the `StudentLog` subquery table. | SQL, how do I convert to logic ('not in' to 'not exists') | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to insert data into a SQL Server database by calling a stored procedure, but I am getting the error
> Procedure or function 'SHOWuser' expects parameter '@userID', which was not supplied.
My stored procedure is called `SHOWuser`. I have checked it thoroughly and no parameters is missing.
My code is:
```
public void SHOWuser(string userName, string password, string emailAddress, List<int> preferences)
{
SqlConnection dbcon = new SqlConnection(conn);
try
{
SqlCommand cmd = new SqlCommand();
cmd.Connection = dbcon;
cmd.CommandType = System.Data.CommandType.StoredProcedure;
cmd.CommandText = "SHOWuser";
cmd.Parameters.AddWithValue("@userName", userName);
cmd.Parameters.AddWithValue("@password", password);
cmd.Parameters.AddWithValue("@emailAddress", emailAddress);
dbcon.Open();
int i = Convert.ToInt32(cmd.ExecuteScalar());
cmd.Parameters.Clear();
cmd.CommandText = "tbl_pref";
foreach (int preference in preferences)
{
cmd.Parameters.Clear();
cmd.Parameters.AddWithValue("@userID", Convert.ToInt32(i));
cmd.Parameters.AddWithValue("@preferenceID", Convert.ToInt32(preference));
cmd.ExecuteNonQuery();
}
}
catch (Exception)
{
throw;
}
finally
{
dbcon.Close();
}
```
and the stored procedure is:
```
ALTER PROCEDURE [dbo].[SHOWuser]
(
@userName varchar(50),
@password nvarchar(50),
@emailAddress nvarchar(50)
)
AS
BEGIN
INSERT INTO tbl_user(userName, password, emailAddress)
VALUES (@userName, @password, @emailAddress)
SELECT
tbl_user.userID, tbl_user.userName,
tbl_user.password, tbl_user.emailAddress,
STUFF((SELECT ',' + preferenceName
FROM tbl_pref_master
INNER JOIN tbl_preferences ON tbl_pref_master.preferenceID = tbl_preferences.preferenceID
WHERE tbl_preferences.userID = tbl_user.userID
FOR XML PATH ('')), 1, 1, ' ' ) AS Preferences
FROM
tbl_user
SELECT SCOPE_IDENTITY();
END
```
This is the second stored procedure `tbl_pref` which is used in the same function:
```
ALTER PROCEDURE [dbo].[tbl_pref]
@userID int,
@preferenceID int
AS
BEGIN
INSERT INTO tbl_preferences(userID, preferenceID)
VALUES (@userID, @preferenceID)
END
``` | Your stored procedure expects 5 parameters as input
```
@userID int,
@userName varchar(50),
@password nvarchar(50),
@emailAddress nvarchar(50),
@preferenceName varchar(20)
```
So you should add all 5 parameters to this SP call:
```
cmd.CommandText = "SHOWuser";
cmd.Parameters.AddWithValue("@userID",userID);
cmd.Parameters.AddWithValue("@userName", userName);
cmd.Parameters.AddWithValue("@password", password);
cmd.Parameters.AddWithValue("@emailAddress", emailAddress);
cmd.Parameters.AddWithValue("@preferenceName", preferences);
dbcon.Open();
```
PS: It's not clear what these parameter are for. You don't use these parameters in your SP body so your SP should looks like:
```
ALTER PROCEDURE [dbo].[SHOWuser] AS BEGIN ..... END
``` | In my case I received this exception even when all parameter values were correctly supplied but the type of command was not specified :
```
cmd.CommandType = System.Data.CommandType.StoredProcedure;
```
This is obviously not the case in the question above, but exception description is not very clear in this case, so I decided to specify that. | Stored procedure or function expects parameter which is not supplied | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
My Situation:
I have an Employee field which I am getting through a SharePoint list.
The current value I am getting is this:
`EmployeeID;Employee Firstname Employee Lastname`
Say for example:
`43;Stacky Stackerflow`
What I Need is either the ID alone without the `;` or the First and Lastname,
but I have no way of telling if the ID is gonna be 1, 2 or 3 Digits Long or in anyway tell how big the Names are gonna be.
Is there a way to cut these using any of the Tools in SSIS ? If so how ? And if not, is there another way? | This is a Copy and paste from my answer here :[Split a single column of data with comma delimiters into multiple columns in SSIS](https://stackoverflow.com/questions/19653995/split-a-single-column-of-data-with-comma-delimiters-into-multiple-columns-in-ssi/19654637#19654637)
You can use the [Token](http://technet.microsoft.com/en-us/library/hh213216.aspx) expression to isolate strings delimited by well, delimiters.
> TOKEN(character\_expression, delimiter\_string, occurrence)
so
> TOKEN(EmployeeField, ";", 1)
will give you your ID
You can also set up a simple transformation script component. Use your "DATA" column as an input and add as many outputs as you need. Use the split method and you're set.
```
string[] myNewColumns = inputColumn.split(";");
``` | Because this is tagged SQL, I'll answer using that syntax:
```
select left(Employee, charindex(';', Employee) - 1)
```
If you want this as an integer, then use:
```
select (case when isnumeric(left(Employee, charindex(';', Employee) - 1)) = 1
then cast(left(Employee, charindex(';', Employee) - 1) as int)
end) as EmployeeId
``` | SSIS: Trim field for specific part of string | [
"",
"sql",
"string",
"sharepoint",
"ssis",
"expression",
""
] |
I have following two tables, one table is of person and another table is for storing various dynamic properties/information about the person.
```
Id | Persons PersonId | Field | Value
----+------------- ----------+--------+-----------
1 | Peter 1 | City | New York
2 | Jane 1 | Age | 26
2 | City | New York
2 | Age | 50
```
1. Can I apply a search condition in an sql query like person with `age > 25 and city = 'New York'` without `grouping` or `pivoting` the table.
2. What is best way to apply the search criteria with least performance overhead. | ```
SELECT key1.PersonId
FROM KeyValue key1
INNER JOIN KeyValue key2 ON key1.PersonId = key2.PersonId
WHERE key1.[Field] = 'Age' and key1.[Value] > 25
AND key2.[Field] = 'City' and key2.[Value] = 'New York'
```
**Update**
I did some tests and INNER JOIN looks fast enough. Here result and test script
```
SET NOCOUNT ON
SET STATISTICS IO ON
CREATE TABLE KeyValue (
ID INT NOT NULL IDENTITY CONSTRAINT [PK_KeyValue] PRIMARY KEY CLUSTERED
,PersonId INT NOT NULL
,Field varchar(30) NOT NULL
,Value varchar(255) NOT NULL
,CONSTRAINT UQ__KeyValue__PersonId_Field UNIQUE (PersonId, Field)
)
GO
--INSERT INTO KeyValue 500K "users", 4 "Fields" - 2M rows
CREATE NONCLUSTERED INDEX [IX__KeyValue__Field_Value_ID]
ON [dbo].[KeyValue] ([Field],[Value]) INCLUDE ([PersonId])
GO
select PersonId from (
select PersonId, ROW_NUMBER() OVER (PARTITION BY PersonId ORDER BY PersonId) RowNumber from (
select PersonId from KeyValue where [Field] = 'Age' and [Value] > 25 union all
select PersonId from KeyValue where [Field] = 'City' and [Value] = 'Sydney' union all
select PersonId from KeyValue where [Field] = 'Email' and [Value] = 'xxxxx@gmail.com' union all
select PersonId from KeyValue where [Field] = 'Name' and [Value] = 'UserName'
) x
) y where RowNumber = 4
--Table 'KeyValue'. Scan count 20, logical reads 1510, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
--Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
select PersonId from (
select PersonId from KeyValue where [Field] = 'Age' and [Value] > 25 union all
select PersonId from KeyValue where [Field] = 'City' and [Value] = 'Sydney' union all
select PersonId from KeyValue where [Field] = 'Email' and [Value] = 'xxxxx@gmail.com' union all
select PersonId from KeyValue where [Field] = 'Name' and [Value] = 'UserName'
) x GROUP by PersonId
HAVING COUNT(*) = 4
--Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
--Table 'KeyValue'. Scan count 4, logical reads 1377, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SELECT key1.PersonId
FROM KeyValue key1
INNER JOIN KeyValue key2 ON key1.PersonId = key2.PersonId
INNER JOIN KeyValue key3 ON key1.PersonId = key3.PersonId
INNER JOIN KeyValue key4 ON key1.PersonId = key4.PersonId
WHERE key1.[Field] = 'Age' and key1.[Value] > 25
AND key2.[Field] = 'City' and key2.[Value] = 'Sydney'
AND key3.[Field] = 'Email' and key3.[Value] = 'xxxxx@gmail.com'
AND key4.[Field] = 'Name' and key4.[Value] = 'UserName'
-- Table 'KeyValue'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SET STATISTICS IO OFF
GO
``` | You can use the following structure:
```
select PersonId from
(
select PersonId from xxx where [Field] = 'Age' and [Value] > 25
union all
select PersonId from xxx where [Field] = 'City' and [Value] = 'New York'
) x
group by PersonId
having count(*) = 2
```
You can create one more query for the union for each parameter. So this way it will return each `PersonId` as many times as it matches the criteria. Then you can select those `PersonId` that match all the criteria, i.e. `count(*)` equals number of parameters.
You can extend that to more parameters easily.
This should perform well if you have the right indexes on `Field` and `Value`.
---
This is a version without `group by` (though it has the same effect):
```
select PersonId from
(
select PersonId, ROW_NUMBER() OVER (PARTITION BY PersonId ORDER BY PersonId) RowNumber from
(
select PersonId from xxx where [Field] = 'Age' and [Value] > 25
union all
select PersonId from xxx where [Field] = 'City' and [Value] = 'Sydney'
) x
) y
where RowNumber = 2
``` | Apply search criteria on dynamic field table without using grouping or pivoting | [
"",
"sql",
"sql-server",
""
] |
I want to move 1 million rows of data to another table. Im using query:
```
insert into Table1
select * from Table2;
```
in my PL/SQL function. But this way is too slow.
How can I do this with **Bulk Insert** method?
* Source and destination table has same structure.
* Tables have hash partition and 1 index. | Forget about bulk insert. Because the insert into select is the best bulk you can load.
The fastest would be to disable the indexes (mark them unusable) and do this in a SINGLE
insert:
```
insert /*+ append */ into TARGET
select COLS
from SOURCE;
commit;
```
and rebuild the indexes using UNRECOVERABLE (and maybe even parallel).
**PS:** If the table is partitioned (Both source and target, you can even use parallel inserts)
**FOLLOW UP:**
Check the performance of the below select
```
SELECT /*+ PARALLEL(A 4)
USE_HASH(A) ORDERED */
YOUR_COLS
FROM
YOUR_TABLE A
WHERE
ALL_CONDITIONS;
```
If faster then
```
INSERT /*+ APPEND */
INTO
TARGET
SELECT /*+ PARALLEL(A 4)
USE_HASH(A) ORDERED */
YOUR_COLS
FROM
YOUR_TABLE A
WHERE
ALL_CONDITIONS;
``` | USING Bulk Collect
Converting to collections and bulk processing can increase the volume and complexity of your code. If you need a serious boost in performance, however, that increase is well-justified.
Collections, an evolution of PL/SQL tables that allows us to manipulate many variables at once, as a unit. Collections, coupled with two new features introduced with Oracle 8i, BULK\_COLLECT and FORALL, can dramatically increase the performance of data manipulation code within PL/SQL.
```
CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
l_data ARRAY;
CURSOR c IS SELECT * FROM all_objects;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
FORALL i IN 1..l_data.COUNT
INSERT INTO t1 VALUES l_data(i);
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END test_proc;
``` | Move large data between tables in oracle with bulk insert | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"bulkinsert",
""
] |
I'm thinking about a SQL query that returns me all entries from a column whose first 5 characters match. Any ideas?
I'm thinking about entries where ANY first 5 characters match, not specific ones. E.g.
```
HelloA
HelloB
ThereC
ThereD
Something
```
would return the first four entries:
```
HelloA
HelloB
ThereC
ThereD
```
EDIT: I am using SQL92 so cannot use the left command! | Try this :
```
SELECT *
FROM YourTable
WHERE LEFT(stringColumn, 5) IN (
SELECT LEFT(stringColumn, 5)
FROM YOURTABLE
GROUP BY LEFT(stringColumn, 5)
HAVING COUNT(*) > 1
)
```
`SQLFIDDLE DEMO`
This selects the first 5 characters, groups by them and returns only the ones that happen more than once.
Or with Substring:
```
SELECT * FROM YourTable
WHERE substring(stringColumn,1,5) IN (
SELECT substring(stringColumn,1,5)
FROM YOURTABLE
GROUP BY substring(stringColumn,1,5)
HAVING COUNT(*) > 1)
;
```
`SQLFIDDLE DEMO` | Sounds easy enough...
In SQL Server this would be something along the lines of
```
where Left(ColumnName,5) = '12345'
``` | Pattern matching SQL on first 5 characters | [
"",
"sql",
"pattern-matching",
"ansi-sql-92",
""
] |
I have this table:
```
CREATE TABLE [dbo].[Fruit](
[RecId] [int] IDENTITY(1,1) NOT NULL,
[Banana] [int] NULL,
[TextValue] [varchar](50) NULL
)
```
And the following piece of code:
```
DECLARE @FruitInput INT = NULL
SELECT * FROM Fruit WHERE Banana = @FruitInput
```
This record is in that table:
```
1|NULL|Hello
```
Why wont my query return this record? | It is because `Null is undefined`. Null neither equal nor not equal to anything. Please read more on **[MSDN](http://technet.microsoft.com/en-us/library/ms191504%28v=sql.105%29.aspx)**.
Null values can be checked with `is null` (or `is not null`), or `isnull()` *or* `coalesce()` functions depending on the requirement.
Try this:
```
SELECT * FROM Fruit WHERE Banana is null
```
And following query to select `all the records` in case if `@FruitInput is null`:
```
SELECT * FROM Fruit WHERE @FruitInput is null or Banana = @FruitInput
``` | NULL does not equal NULL, use IS NULL for null row checks:
`SELECT * FROM Fruit WHERE Banana IS NULL`
In this case because you have NULL in a variable, you can use the ISNULL operator:
`SELECT * FROM Fruit WHERE ISNULL(Banana, 0) = ISNULL(@FruitInput, 0)`
This will ensure you can check against whatever value NULL is or otherwise compare 0 to 0 - obviously if you have rows where Banana is NULL and @FruitInput is 0 this will match them, adjust as necessary
(e.g. you can jut use -1 or a string)
`SELECT * FROM Fruit WHERE ISNULL(Banana, '') = ISNULL(@FruitInput, '')`
Edit: No you can't because for some reason 0 = '' in SQL....?!
I suppose the way to do it (which might not be that performant, I've not checked the plan) is:
```
select * from Fruit
where 1 = CASE
WHEN @FruitInput IS NULL AND banana IS NULL THEN 1
WHEN @FruitInput = banana THEN 1
ELSE 0
END
``` | Where is my banana? (SQL Query NULL issue) | [
"",
"sql",
"sql-server",
"null",
""
] |
I just found out a SQL Server bug (described [here](https://stackoverflow.com/questions/19819139/merge-into-throws-foreign-key-conflict)). Now I am wondering, if there is any possibility of forcing an `INSERT` statement without performing foreign key checks? Like disabling this checks server-wide for few minutes, or somehow disabling (not deleting) FK's on specific tables? | **Drop Constraint**
```
ALTER TABLE Table_Name
DROP CONSTRAINT fk_ConstraintName
```
Drop the constraint once you have done the operation then create it again.
**Disable Constraint**
```
ALTER TABLE MyTable
NOCHECK CONSTRAINT fk_Constraint_Name
```
**Enable Constraint**
```
ALTER TABLE MyTable
CHECK CONSTRAINT fk_Constraint_Name
``` | ```
ALTER TABLE [schema].[table] NOCHECK CONSTRAINT [constraint_name]
```
Then
```
ALTER TABLE [schema].[table] CHECK CONSTRAINT [constraint_name]
```
However, this results in your FK being not trusted, and when you re-enable the constraint it will take time to re-validate it. | Is there a way of forcing sql INSERT or disabling foreign key check? | [
"",
"sql",
"sql-server",
""
] |
I have an equation stored in my table. I am fetching one equation at a time and want to replace all the operators with any other character.
**Input String:** `(N_100-(6858)*(6858)*N_100/0_2)%N_35`
**Operators or patterns:** `(+, -, *, /, %, (, ))`
**Replacement character:** `~`
**Output String:** `~N_100~~6858~~~6858~~N_100~0_2~~N_35`
I had tried below query with **Nested REPLACE** Functions and I got desired output:
```
DECLARE @NEWSTRING VARCHAR(100)
SET @NEWSTRING = '(N_100-(6858)*(6858)*N_100/0_2)%N_35' ;
SELECT @NEWSTRING = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
@NEWSTRING, '+', '~'), '-', '~'), '*', '~'), '/', '~')
, '%', '~'), '(', '~'), ')', '~')
PRINT @NEWSTRING
```
**Output:** `~N_100~~6858~~~6858~~N_100~0_2~~N_35`
How can I replace all the operators without using nested replace functions? | I had created a `SPLIT` function to implement this because I need to implement this operation multiple time in `PROCEDURE`
**SPLIT FUNCTION**
```
create function [dbo].[Split](@String varchar(8000), @Delimiter char(1))
returns @temptable TABLE (items varchar(8000))
as
begin
declare @idx int
declare @slice varchar(8000)
select @idx = 1
if len(@String)<1 or @String is null return
while @idx!= 0
begin
set @idx = charindex(@Delimiter,@String)
if @idx!=0
set @slice = left(@String,@idx - 1)
else
set @slice = @String
if(len(@slice)>0)
insert into @temptable(Items) values(@slice)
set @String = right(@String,len(@String) - @idx)
if len(@String) = 0 break
end
return
end
```
**Code used in procedure:**
```
DECLARE @NEWSTRING VARCHAR(100)
SET @NEWSTRING = '(N_100-(6858)*(6858)*N_100/0_2)%N_35' ;
SELECT @NEWSTRING = REPLACE(@NEWSTRING, items, '~') FROM dbo.Split('+,-,*,/,%,(,)', ',');
PRINT @NEWSTRING
```
**OUTPUT**
```
~N_100~~6858~~~6858~~N_100~0_2~~N_35
``` | I believe it is easier and more readable if you use a table to drive this.
```
declare @String varchar(max) = '(N_100-(6858)*(6858)*N_100/0_2)%N_35'
--table containing values to be replaced
create table #Replace
(
StringToReplace varchar(100) not null primary key clustered
,ReplacementString varchar(100) not null
)
insert into #Replace (StringToReplace, ReplacementString)
values ('+', '~')
,('-', '~')
,('*', '~')
,('/', '~')
,('%', '~')
,('(', '~')
,(')', '~')
select @String = replace(@String, StringToReplace, ReplacementString)
from #Replace a
select @String
drop table #Replace
``` | Replace multiple characters from string without using any nested replace functions | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"replace",
""
] |
This cross join works fine:
```
select * from x, y
```
I am trying to run this query:
```
select abc.col1 from abc
(select * from x, y) abc
```
But I get this error message:
```
Msg 8156, Level 16, State 1, Line 2
The column 'col1' was specified multiple times for 'abc'.
```
Both tables x and y have the same columns and column definitions.
Any ideas/suggestions? | ```
select abc.col1 from abc
(select * from x, y) abc
```
You are aliasing two tables with the same name. Try:
```
select abc.col1 from abc,
(select x.* from x, y) abc2
``` | you have to specify column name in inner query section.
something like this:
```
select abc.col1 from abc
(select x.col1,y.col1 from x, y) abc
``` | SQL Cross Join Query not working with variable | [
"",
"sql",
"sql-server",
"variables",
"cross-join",
""
] |
Can someone explain to me why when I perform a `LIKE` select in SQL (T-SQL) on a `varchar` column I can do the following:
```
SELECT *
FROM Table
WHERE Name LIKE 'Th%'
```
to get names beginning with `Th`, but when I do the same on a `datetime` column I need a `%` before the year, like:
```
SELECT *
FROM Table
WHERE Date LIKE '%2013%'
```
to get dates in 2013. The datetimes are stored in `yyyy-MM-dd hh:mm:ss` format. I know I could use a `DATEPART` style query but I was just interested in why I need the extra `%` here. | The DATETIME is converted to a VARCHAR before the comparison, and there definitely is no guarantee that the conversion will be in the pattern you mention. DATETIME is not stored internally as a VARCHAR but as a FLOAT. | You should stop wondering because the syntax is not useful.
```
SELECT *
FROM Table
WHERE Date LIKE '%2013%'
```
Will give you a full table scan because the date will be converted to a varchar when comparing. In other words, don't do it !
Use this syntax instead:
```
SELECT *
FROM Table
WHERE Date >= '2013-01-01T00:00:00'
and Date < '2014-01-01T00:00:00'
``` | SQL datetime LIKE select - why do I need an extra %? | [
"",
"sql",
"t-sql",
"datetime",
""
] |
I've read dozens of answers regarding CASE and am not sure that is what i need to be using here, it seems like it should work but its not:
```
Data:
OrderNum OrderLine PartNum
200011 1 ABC-1
200011 2 DEF-1
200012 1 XYZ-1
What I would like to return:
OrderNum Item#
200011 MIXED
200012 XYZ-1
What I am returning instead:
OrderNum Item#
200011 ABC-1
200011 MIXED
200012 XYZ-1
```
My query:
```
SELECT OrderHed.OrderNum,
(CASE WHEN ShipDtl.OrderLine > '1' then 'MIXED' else ShipDtl.PartNum end) as [Item#]
FROM dbo.OrderHed, dbo.ShipDtl
WHERE ShipDtl.Company = OrderHed.Company
AND ShipDtl.OrderNum = OrderHed.OrderNum
GROUP BY OrderHed.OrderNum, ShipDtl.OrderLine, ShipDtl.Part
``` | Try with grouping like
```
SELECT OrderHed.OrderNum,
(CASE WHEN SUM(ShipDtl.OrderLine) > 1 then 'MIXED' else MAX(ShipDtl.PartNum) end) as [Item#]
FROM dbo.OrderHed, dbo.ShipDtl
WHERE ShipDtl.Company = OrderHed.Company
AND ShipDtl.OrderNum = OrderHed.OrderNum
GROUP BY OrderHed.OrderNum
```
SQLFiddle Demo : <http://sqlfiddle.com/#!3/209d8/1> | You didn't write which database engine you use, but if it is SQL 2005 and above, I'd think using the window function for COUNT will make things easier as you then do not need to group.
```
SELECT DISTINCT
OrderHed.OrderNum ,
CASE
WHEN COUNT(ShipDtl.OrderLine) OVER (PARTITION BY ShipDtl.OrderNum) > 1 THEN 'MIXED'
ELSE PartNum
END AS [Item#]
FROM dbo.OrderHed ,
dbo.ShipDtl
WHERE ShipDtl.Company = OrderHed.Company
AND ShipDtl.OrderNum = OrderHed.OrderNum
```
You'll need to DISTINCT though, because it'll select a row per line, but each order with multiple lines will be MIXED, so you can easily distinct.
This will simply select the OrderNum and if multiple orderlines exists per ordernum Count(xxx) OVER (partition by yyy) it'll select 'MIXED', otherwise the partnum.
And then distinct the result. | SQL CASE WHEN to narrow results, change field | [
"",
"sql",
"case-when",
""
] |
I'm writing a stored procedure that looks something like this:
```
SELECT
Positions.PositionId
,Positions.Title
,Positions.Location
,Positions.Description
,Positions.MaximumSalary
,PositionsDepartments.Description Department
,PositionsSubDepartments.Description Subdepartment
,PositionsDepartments.DepartmentId DepartmentId
,PositionsSubDepartments.SubDepartmentId SubdepartmentId
,@TheRole TheRole
,@Essentials Essentials
,@Desirable Desirable
,Positions.Published
,Positions.LastUpdatedDate
,PositionsStatus.Description Status
FROM
Positions
WITH (NOLOCK)
INNER JOIN PositionsStatus ON Positions.StatusId = PositionsStatus.StatusId
INNER JOIN PositionsSubDepartments ON Positions.SubDepartmentId = PositionsSubDepartments.SubDepartmentId
INNER JOIN PositionsDepartments ON PositionsSubDepartments.DepartmentId = PositionsDepartments.DepartmentId
WHERE
Positions.PositionId = @PositionId
```
BUT the Positions.SubDepartmentId could now be null - which means I don't get all the data back that I'm expecting. I've tried this but get a load of duplicate data back:
```
SELECT
Positions.PositionId
,Positions.Title
,Positions.Location
,Positions.Description
,Positions.MaximumSalary
,PositionsDepartments.Description Department
,PositionsSubDepartments.Description Subdepartment
,PositionsDepartments.DepartmentId DepartmentId
,PositionsSubDepartments.SubDepartmentId SubdepartmentId
,@TheRole TheRole
,@Essentials Essentials
,@Desirable Desirable
,Positions.Published, Positions.LastUpdatedDate
,PositionsStatus.Description Status
FROM
Positions
WITH (NOLOCK)
INNER JOIN PositionsStatus ON Positions.StatusId = PositionsStatus.StatusId
INNER JOIN PositionsSubDepartments ON Positions.SubDepartmentId = PositionsSubDepartments.SubDepartmentId OR ( Positions.SubDepartmentId IS NULL)
INNER JOIN PositionsDepartments ON PositionsSubDepartments.DepartmentId = PositionsDepartments.DepartmentId
WHERE
Positions.PositionId = @PositionId
```
What am I doing wrong? | You need to use a left join when an entry could be null
```
SELECT
Positions.[PositionId]
,Positions.[Title]
,Positions.[Location]
,Positions.[Description]
,Positions.[MaximumSalary]
,PositionsDepartments.[Description] Department
,PositionsSubDepartments.[Description] Subdepartment
,PositionsDepartments.[DepartmentId] DepartmentId
,PositionsSubDepartments.[SubDepartmentId] SubdepartmentId
,@TheRole TheRole
,@Essentials Essentials
,@Desirable Desirable
,Positions.[Published]
,Positions.[LastUpdatedDate]
,PositionsStatus.[Description] Status
FROM
Positions WITH (NOLOCK)
INNER JOIN PositionsStatus ON Positions.StatusId = PositionsStatus.StatusId
LEFT JOIN PositionsSubDepartments ON Positions.SubDepartmentId = PositionsSubDepartments.SubDepartmentId
LEFT JOIN PositionsDepartments ON PositionsSubDepartments.DepartmentId = PositionsDepartments.DepartmentId
WHERE
Positions.PositionId = @PositionId
```
You might also find the following useful: <http://www.tutorialspoint.com/sql/sql-left-joins.htm> | ```
select Positions.PositionId, Positions.Title, Positions.Location,Positions.Description,Positions.MaximumSalary,
PositionsDepartments.Description Department, PositionsSubDepartments.Description Subdepartment, PositionsDepartments.DepartmentId DepartmentId, PositionsSubDepartments.SubDepartmentId SubdepartmentId,
@TheRole TheRole, @Essentials Essentials, @Desirable Desirable,
Positions.Published, Positions.LastUpdatedDate, PositionsStatus.Description Status
FROM Positions
WITH (NOLOCK)
INNER JOIN PositionsStatus ON Positions.StatusId = PositionsStatus.StatusId
LEFT JOIN PositionsSubDepartments ON Positions.SubDepartmentId = PositionsSubDepartments.SubDepartmentId
LEFT JOIN PositionsDepartments ON PositionsSubDepartments.DepartmentId = PositionsDepartments.DepartmentId
WHERE Positions.PositionId = @PositionId
``` | SQL if statement for null values in INNER JOINs | [
"",
"mysql",
"sql",
""
] |
So here's what my data table looks like:
```
TeamNum Round Points1 Points2
1 1 5 21
2 1 10 20
3 1 9 29
1 2 6 22
2 2 11 21
3 2 10 30
1 3 80 50
```
I also have a second table with this:
```
TeamNum TeamName
1 goteam1
2 goteam2
3 goteam4-1
```
I want SQL to take it and turn it into this:
```
Team Round1 Round2 Round3 TeamName
1 (points1+points2 of round1) (same but for r2) (same but for r3) goteam1
2 (points1+points2 of round1) (same but for r2) (same but for r3) goteam2
3 (points1+points2 of round1) (same but for r2) (same but for r3) goteam4-1
```
And a sample output from the tables above would be:
```
Team Round1 Round2 Round3 TeamName
1 26 28 130 goteam1
2 30 32 0 goteam2
3 38 40 0 goteam4-1
```
The actual data has a bunch of "points1" and "points2" columns, but there are only 3 rounds.
I am very new to SQL and this is all I have right now:
```
select
`data`.`round`,
`data`.`teamNumber`,
sum(`Points1`) + sum(`Points2`) as score
from `data` join `teams` ON `teams`.`teamNumber` = `data`.`teamNumber`
group by `data`.`teamNumber` , `round`
order by `data`.`teamNumber`, `data`.`round`
```
But it doesn't return anything at all. If I remove the join statement, it shows everything like I want, but doesn't consolidate Round1, 2, and 3 as columns, they are each separate rows. Can you guys help me out? Thanks! | Use conditional aggregation
```
SELECT t.teamnumber, t.teamname,
SUM(CASE WHEN d.round = 1 THEN d.points1 + d.points2 ELSE 0 END) round1,
SUM(CASE WHEN d.round = 2 THEN d.points1 + d.points2 ELSE 0 END) round2,
SUM(CASE WHEN d.round = 3 THEN d.points1 + d.points2 ELSE 0 END) round3
FROM data d JOIN teams t
ON d.teamnumber = t.teamnumber
GROUP BY t.teamnumber, t.teamname
```
Output:
```
| TEAMNUMBER | TEAMNAME | ROUND1 | ROUND2 | ROUND3 |
|------------|-----------|--------|--------|--------|
| 1 | goteam1 | 26 | 28 | 130 |
| 2 | goteam2 | 30 | 32 | 0 |
| 3 | goteam4-1 | 38 | 40 | 0 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/ba0fe/4)** demo | No need to aggregate:
```
SELECT
t.teamnumber,
COALESCE(r1.points1 + r1.points2, 0) AS round1,
COALESCE(r2.points1 + r2.points2, 0) AS round2,
COALESCE(r3.points1 + r3.points2, 0) AS round3,
t.teamname
FROM teams t
LEFT JOIN data r1 ON r1.teamnumber = t.teamnumber AND r1.round = 1
LEFT JOIN data r2 ON r2.teamnumber = t.teamnumber AND r2.round = 2
LEFT JOIN data r3 ON r3.teamnumber = t.teamnumber AND r3.round = 3
``` | SQL consolidate data and turn them into columns | [
"",
"mysql",
"sql",
"database",
""
] |
What I've attempted and which returns incorrect data:
```
SELECT type, pid, MAX(distance) FROM table GROUP BY type
```
This retrieves every unique `type` and the correct corresponding maximum `distance` for each `type` from my database (there are a total of 26 different unique `type` values and this will never change), but the `pid` values are incorrect. How can I retrieve the correct and corresponding `pid` values?
Example table data:
```
pid | type | distance | ...
675 | dcj | 273060192 | ...
743 | mcj | 273046176 | ...
284 | dcj | 271592224 | ...
4091 | lj | 255217488 | ...
743 | lj | 255170160 | ...
4091 | lj | 230840928 | ...
```
What should be returned:
```
pid | type | distance
675 | dcj | 273060192
743 | mcj | 273046176
4091 | lj | 255217488
```
Some notable information about the data:
There are multiple entries per `pid` value and there may be multiple entries per `pid` value that have the same `type` value.
Do I need to utilize PHP to run two different queries where the first one grabs the 26 different `type` values via `GROUP BY type` and the second query is executed 26 different times (once per unique `type` value) and finds the maximum for each `type` via something like `WHERE type="' . $type . '" ORDER BY distance DESC LIMIT 1`? Or can this be done in one SQL query? | This query gives the desired result.
```
SELECT t.type, t.pid, t.distance
FROM table t
INNER JOIN (SELECT type, MAX(distance) as distance FROM table GROUP BY type) as m
ON t.type = m.type and t.distance = m.distance
ORDER BY t.type
```
It will return all pid that have the same maximum distance for the ( not very probable ) case that two or more pid have the same maximum distance. | You can do a `GROUP BY` query, them join it to the original table to get the right `pid`, like this:
```
SELECT x.type, t.pid, x.max_dist
FROM (
SELECT type, MAX(distance) as max_dist FROM table GROUP BY type
) as x
JOIN table t ON x.max_dist=t.distance AND x.type=t.type
```
[Demo on sqlfiddle](http://www.sqlfiddle.com/#!2/3f813/4). | Retrieving Maximum Of One Column For Each Unique Value From Another | [
"",
"mysql",
"sql",
""
] |

I have a table in database which is already has the data, but now I added new column called `Vol_Age` and it has `NULL` value. I want that column filled by calculating Age, the data comes from `Vol_Date_of_Birth` column, I need a query that calculate the age and insert it to the `Vol_Age` column.
which we can use `Insert` or `Update`, and how we can do it?!
Please forgive me for my English.
Thanks | Keep in mind that since you added the column to the database, any values within that column will retain their value. As time goes on, you will need to keep updateding the values within the database.
Another approach would be to remove the column and make it a computed column instead. This way, the age is caculated at query time rather than from your last update.
```
ALTER TABLE tbname ADD Vol_Age AS DATEDIFF(YEAR, Vol_Date_Of_Birth, GETDATE())
``` | Try using the DateDiff function:
```
SELECT DATEDIFF(year,GETDATE(),Vol_Age)
``` | Calculate the age and Insert it to an already column via SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
"sql-server-2012",
""
] |
I have run into a problem when I have tried to select data from more than 1 table and I was wondering if anyone could help me. The tables are linked and I'm trying to select only things with a certain id. I have had success with queries with 1 table but with 2 tables it kicked an error my way:
> Query to get data from Recipe failed: Column `Recipe_ID` in where clause is ambiguous.
Here is my query:
```
$query="SELECT * FROM Recipe, Ingredients_Needed WHERE Recipe_ID ='$chosen_id'";
``` | Does Recipe\_ID appear in both Recipe and Ingredients\_Needed?
IF it does then you need to do something like this:
```
SELECT * FROM Recipe, Ingredients_Needed WHERE Recipe.Recipe_ID = ....
```
As a side note, use correct join syntax not an implicit join ie:
```
SELECT r.*, n.*
FROM Recipe r
INNER JOIN Ingredients_Needed n ON n.Recipe_ID = r.Recipe_ID
WHERE r.Recipe_ID = ...
```
This will make it clearer especially if you are joining on multiple tables with different conditions etc as you can see the aliased tables quickly just by looking at the query instead of guessing. | Specify the table:
```
WHERE table.Recipe_ID=
``` | where clause is ambiguous | [
"",
"mysql",
"sql",
""
] |
I have a query against a large number of big tables (rows and columns) with a number of joins, however one of tables has some duplicate rows of data causing issues for my query. Since this is a read only realtime feed from another department I can't fix that data, however I am trying to prevent issues in my query from it.
Given that, I need to add this crap data as a left join to my good query. The data set looks like:
```
IDNo FirstName LastName ...
-------------------------------------------
uqx bob smith
abc john willis
ABC john willis
aBc john willis
WTF jeff bridges
sss bill doe
ere sally abby
wtf jeff bridges
...
```
(about 2 dozen columns, and 100K rows)
My first instinct was to perform a distinct gave me about 80K rows:
```
SELECT DISTINCT P.IDNo
FROM people P
```
But when I try the following, I get all the rows back:
```
SELECT DISTINCT P.*
FROM people P
```
OR
```
SELECT
DISTINCT(P.IDNo) AS IDNoUnq
,P.FirstName
,P.LastName
...etc.
FROM people P
```
I then thought I would do a FIRST() aggregate function on all the columns, however that feels wrong too. Syntactically am I doing something wrong here?
**Update:**
Just wanted to note: These records are duplicates based on a non-key / non-indexed field of ID listed above. The ID is a text field which although has the same value, it is a different case than the other data causing the issue. | Turns out I was doing it wrong, I needed to perform a nested select first of just the important columns, and do a distinct select off that to prevent trash columns of 'unique' data from corrupting my good data. The following appears to have resolved the issue... but I will try on the full dataset later.
```
SELECT DISTINCT P2.*
FROM (
SELECT
IDNo
, FirstName
, LastName
FROM people P
) P2
```
Here is some play data as requested: <http://sqlfiddle.com/#!3/050e0d/3>
```
CREATE TABLE people
(
[entry] int
, [IDNo] varchar(3)
, [FirstName] varchar(5)
, [LastName] varchar(7)
);
INSERT INTO people
(entry,[IDNo], [FirstName], [LastName])
VALUES
(1,'uqx', 'bob', 'smith'),
(2,'abc', 'john', 'willis'),
(3,'ABC', 'john', 'willis'),
(4,'aBc', 'john', 'willis'),
(5,'WTF', 'jeff', 'bridges'),
(6,'Sss', 'bill', 'doe'),
(7,'sSs', 'bill', 'doe'),
(8,'ssS', 'bill', 'doe'),
(9,'ere', 'sally', 'abby'),
(10,'wtf', 'jeff', 'bridges')
;
``` | `distinct` is **not** a function. It always operates on *all* columns of the select list.
Your problem is a typical "greatest N per group" problem which can easily be solved using a window function:
```
select ...
from (
select IDNo,
FirstName,
LastName,
....,
row_number() over (partition by lower(idno) order by firstname) as rn
from people
) t
where rn = 1;
```
Using the `order by` clause you can select which of the duplicates you want to pick.
The above can be used in a left join, see below:
```
select ...
from x
left join (
select IDNo,
FirstName,
LastName,
....,
row_number() over (partition by lower(idno) order by firstname) as rn
from people
) p on p.idno = x.idno and p.rn = 1
where ...
``` | SQL Left Join first match only | [
"",
"sql",
"sql-server",
"t-sql",
"join",
"greatest-n-per-group",
""
] |
I am using MIN function in mysql to return the smallest value of a column - lets call it price, however i also wish to display various other columns for this particular row with the smallest value for price. So far i have the following:
```
SELECT underlying,volatility FROM shares
where (id,price) IN (
SELECT id,MIN(price) FROM shares
where DATE(issueDate) >= DATE(NOW())
group by id
)
```
This almost works however it returns more than one row, i only wish to display the last row, can anyone please assist me with this? | i think i have solved the problem, it takes a while to execute so i will start by adding a new index :
```
SELECT underlying,volatility,price FROM shares
where (id,price) IN (
SELECT id,MIN(price) FROM shares
where DATE(issueDate) >= DATE(NOW())
group by id
) order by price asc limit 1;
``` | Having clause will do the trick, please check my solution
```
select volatility,underlying, min(price) as min_price from shares
where DATE(issueDate) >= DATE(NOW())
group by issueDate
having min_price = min(price);
```
[fiddle example](http://sqlfiddle.com/#!2/3b5ae/3) | I can't get mysql to return last row when using the aggregate MIN | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I checked this question asked before
[How to add Sysadmin login to SQL Server?](https://stackoverflow.com/questions/14814798/how-to-add-sysadmin-login-to-sql-server)
I tried all the solutions mentioned but I don't have the same options
this is the output for the first solution

and this is the menu I got for the second solution I don't have "properties" in it.

So, does anyone know how to login as a Sysadmin ?! | -> Open Security -> Logins
then right click to add a new Login
or
right click on an existing Login -> Properties -> Server Roles
and give him Sysadmin rights
of course you should have enough rights yourself be allowed to do this | if you have no SQL Server Login Mode right-click on the Server -> Server Properties -> Security and change the Server authentication method to "SQL Server and Windows Authentication mode" | How can I login to the SQL Server with Sysadmin privileges? | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"authentication",
""
] |
Trying to run
```
INSERT INTO BOOKING_EXTRAS (BOOKING_ID, EXTRAS_, EXTRAS_PRICE) VALUES ('1','Phone call: 1.80','1.8');
```
in Oracle SQL Developer. Ive had it running but when I close it, then reopen it I get this error:
```
Error starting at line 1 in command:
INSERT INTO BOOKING_EXTRAS (BOOKING_ID, EXTRAS_, EXTRAS_PRICE) VALUES ('1','Phone call: 1.80','1.8')
Error report:
SQL Error: ORA-00001: unique constraint (COURSEWORK_XE.BOOKING_EXTRAS_PK) violated
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
```
how would I fix this? Its happening to every table I run! | You need to either clear the tables or insert new information, a database doesn't want duplicate rows because that makes it impossible to find the correct rows later. | In addition, if that **BOOKING\_ID** (currently serves as primary key if I guess right) doesn't actually mean something for you, you can set it as AUTO INCREMENT in your schema, then afterwards you don't need to insert a value for BOOKING\_ID, the system will automatically find a value which is not duplicate for you. This might save you a lot effort.
```
INSERT INTO BOOKING_EXTRAS (EXTRAS_, EXTRAS_PRICE) VALUES ('Phone call: 1.80','1.8');
``` | Insert statement attempted to include a duplicate key | [
"",
"sql",
"database",
"oracle",
"syntax",
"syntax-error",
""
] |
I have this query:
```
SELECT p.[PostingID]
,[EmployerID]
,[JobTitle]
,pin.[IndustryID]
FROM [Posting] p
INNER JOIN [City] c
ON p.CityID = c.CityID
LEFT OUTER JOIN PostingIndustry pin
ON p.PostingID = pin.PostingID
WHERE (c.CityID = @CityId OR @CityId IS NULL)
AND (p.StateProvinceID = @StateProvinceId OR @StateProvinceId IS NULL)
AND (pin.IndustryID = @IndustryId OR @IndustryId IS NULL)
AND
(
(p.[Description] LIKE '%' + @Keyword + '%' OR @Keyword IS NULL)
OR (p.[JobTitle] LIKE '%' + @Keyword + '%' OR @Keyword IS NULL)
)
AND p.StreetAddress IS NOT NULL
AND p.ShowOnMap = 1
```
Which returns results for all the pin.[IndustryID] if IndustryId is not selected or if all the industries are selected. If only one industry is selected I am getting one result which is good, but when one posting is included in multiple industries then I am getting multiple results as on the image shown below:

So for example when that's happening I want to get only one result for that posting id otherwise I am getting multiple results for one google map marker per the image below:

Is there a way how I can optimize the query above to do what I need? | What about selecting only the row with the smallest IndustryId:
```
SELECT [PostingID]
,[EmployerID]
,[JobTitle]
,MIN(pin.[IndustryID])
FROM [Posting] p
INNER JOIN [City] c
ON p.CityID = c.CityID
LEFT OUTER JOIN PostingIndustry pin
ON p.PostingID = pin.PostingID
WHERE (c.CityID = @CityId OR @CityId IS NULL)
AND (p.StateProvinceID = @StateProvinceId OR @StateProvinceId IS NULL)
AND (pin.IndustryID = @IndustryId OR @IndustryId IS NULL)
AND
(
(p.[Description] LIKE '%' + @Keyword + '%' OR @Keyword IS NULL)
OR (p.[JobTitle] LIKE '%' + @Keyword + '%' OR @Keyword IS NULL)
)
AND p.StreetAddress IS NOT NULL
AND p.ShowOnMap = 1
GROUP BY [PostingID],[EmployerID],[JobTitle]
```
Whenever you have only one, it returns that one, when you have more than one it returns only the one with the smallest IndustryId. | Try using a group by clause at the end
Use `MAX(IndustryId)`.
Then also
```
GROUP BY PostingId,EmployerId,Jobtitle
``` | SQL query select only one row from multiple rows | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I'm trying to get all participants that have more than 1 record in the table where at lease one of those records has IsCurrent = 0 and IsActive = 1
This is what I have so far, but it's not working:
```
SELECT ParticipantId
FROM Contact
WHERE (IsCurrent = 0 AND IsActive = 1 AND ContactTypeId = 1)
Group by ParticipantId
Having COUNT(ParticipantId) > 1
```
This query brings back a record that matches that description, but I need all of the records that match that description, there are more. | You can use [EXISTS](http://technet.microsoft.com/en-us/library/ms188336.aspx):
```
SELECT ParticipantId
FROM Contact
WHERE EXISTS
( SELECT 1
FROM Contact c2
WHERE c2.ParticipantID = c.ParticipantId
AND ContactTypeId = 1
GROUP BY ParticipantID
HAVING COUNT(*) > 1
AND COUNT(CASE WHEN IsCurrent = 0 AND IsActive = 1 THEN 1 END) >= 1
);
``` | Use it as a subquery and join to it:
```
select * from
(
SELECT ParticipantId
FROM Contact
WHERE (IsCurrent = 0 AND IsActive = 1 AND ContactTypeId = 1)
Group by ParticipantId
Having COUNT(ParticipantId) > 1
) base
inner join Contact c on c.ParticipantId = base.ParticipantID
WHERE (IsCurrent = 0 AND IsActive = 1 AND ContactTypeId = 1)
``` | sql select records having count > 1 where at lease one record has value | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table in this form (this is just the partial view, the table contains more than 100 columns).
```
LOAN NUMBER DOCUMENT_TYPE DOCUMENT_ID
992452533663 Voters ID XPD0355636
992452533663 Pan card CHXPS5522D
992452533663 Drivers licence DL-0420110141769
```
For a single loan number, I have three kinds of documents as proof. I want these details to be converted into columns and take the following shape:
```
LOAN NUMBER VOTERS_ID PAN_CARD DRIVERS LICENCE
992452533663 XPD0355636 CHXPS5522D DL-0420110141769
```
How to go about this? | If you are using Oracle 10g, you can use the `DECODE` function to pivot the rows into columns:
```
CREATE TABLE doc_tab (
loan_number VARCHAR2(20),
document_type VARCHAR2(20),
document_id VARCHAR2(20)
);
INSERT INTO doc_tab VALUES('992452533663', 'Voters ID', 'XPD0355636');
INSERT INTO doc_tab VALUES('992452533663', 'Pan card', 'CHXPS5522D');
INSERT INTO doc_tab VALUES('992452533663', 'Drivers licence', 'DL-0420110141769');
COMMIT;
SELECT
loan_number,
MAX(DECODE(document_type, 'Voters ID', document_id)) AS voters_id,
MAX(DECODE(document_type, 'Pan card', document_id)) AS pan_card,
MAX(DECODE(document_type, 'Drivers licence', document_id)) AS drivers_licence
FROM
doc_tab
GROUP BY loan_number
ORDER BY loan_number;
```
Output:
```
LOAN_NUMBER VOTERS_ID PAN_CARD DRIVERS_LICENCE
------------- -------------------- -------------------- --------------------
992452533663 XPD0355636 CHXPS5522D DL-0420110141769
```
You can achieve the same using Oracle `PIVOT` clause, introduced in 11g:
```
SELECT *
FROM doc_tab
PIVOT (
MAX(document_id) FOR document_type IN ('Voters ID','Pan card','Drivers licence')
);
```
SQLFiddle example with both solutions: `SQLFiddle example`
Read more about pivoting here: [Pivot In Oracle by Tim Hall](http://www.oracle-base.com/articles/11g/pivot-and-unpivot-operators-11gr1.php) | You can do it with a [`pivot` query](http://www.oracle.com/technetwork/articles/sql/11g-pivot-097235.html), like this:
```
select * from (
select LOAN_NUMBER, DOCUMENT_TYPE, DOCUMENT_ID
from my_table t
)
pivot
(
MIN(DOCUMENT_ID)
for DOCUMENT_TYPE in ('Voters ID','Pan card','Drivers licence')
)
```
Here is a [demo on sqlfiddle.com](http://sqlfiddle.com/#!4/8b635/1). | How to convert Rows to Columns in Oracle? | [
"",
"sql",
"oracle",
"pivot",
""
] |
I need to output 3 null columns in my SQL statement, i'm using SQL developer Oracle 11g
For example:
```
SELECT
c.CID,
c.CName,
null,
null,
null,
r.RegionID,
r.RegionName
FROM
Regions r
INNER JOIN Branch b ON b.RegionID = r.RegionID
INNER JOIN Countries c ON c.CID = b.CID;
```
So in the script output/query result window on SQL Developer shows
```
CID| CName | Null | Null | Null | RegionID | RegionName
------------------------------
C1 | ENG | Null | Null | Null | R1 | UK
C2 | ... | Null | Null | Null | R2 | ...
C3 | ... | Null | Null | Null | R3 | ...
```
What would be ideal is if i could do something like:
```
Select
Country,
Region,
Null as Continent,
Null as hemisphere
etc
```
I know this may sound bonkers but it's just for formatting purposes until the database is updated.
EDIT: Thanks for that, didn't realize it was so straight forward, the only issue is my null column is massive on the script output screen, even though it's empty, is there a way make it a more reasonable size? | Well, you can do it just the way you presented...:
```
SELECT
c.CID,
c.CName,
CAST(null AS VARCHAR2(1)) AS Continent,
null AS hemisphere,
null AS something,
r.RegionID,
r.RegionName
FROM
Regions r
INNER JOIN Branch b ON b.RegionID = r.RegionID
INNER JOIN Countries c ON c.CID = b.CID;
``` | Actually, that should work.
```
SELECT
c.CID,
c.CName,
null AS country,
null AS Region,
null AS hemisphere,
r.RegionID,
r.RegionName
FROM
Regions r
INNER JOIN Branch b ON b.RegionID = r.RegionID
INNER JOIN Countries c ON c.CID = b.CID;
-- Or, if you don't have any tables yet:
SELECT
NULL AS CID,
NULL AS CName,
NULL AS country,
NULL AS Region,
NULL AS hemisphere,
NULL AS RegionID,
NULL AS RegionName
FROM dual;
``` | Selecting null columns for script output in SQL Developer | [
"",
"sql",
"database",
"oracle11g",
""
] |
I have a problem with the following SQL. Why on the earth minus of 2 exactly same queries would return non-empty result? I have tried "UNION ALL" instead of UNION, I have tried many other things, none of which worked. PLease advise.
```
SELECT y.segment1 po_num, fad.seq_num seq, fdst.short_text st
FROM applsys.fnd_attached_documents fad,
applsys.fnd_documents fd,
applsys.fnd_documents_short_text fdst,
po_headers_all y
WHERE 1 = 1
AND fad.pk1_value(+) = y.po_header_id
AND fad.entity_name = 'PO_HEADERS'
AND fad.document_id = fd.document_id
AND fd.datatype_id = 1
and fad.seq_num>=100
AND fdst.media_id = fd.media_id
and y.type_lookup_code='STANDARD'
AND NVL(y.CANCEL_FLAG,'N')='N'
-- and y.segment1 in (100,1000,100,650,26268)
-- and y.segment1=1000
UNION
SELECT poh.segment1, 1, '1' --null, null
FROM po.po_headers_all poh
LEFT JOIN
(SELECT fad1.pk1_value
FROM applsys.fnd_attached_documents fad1,
applsys.fnd_documents fd1
WHERE 1 = 1
AND fad1.entity_name = 'PO_HEADERS'
AND fad1.document_id = fd1.document_id
and fad1.seq_num>=100
AND fd1.datatype_id = 1) sub1
ON poh.po_header_id = sub1.pk1_value
WHERE sub1.pk1_value IS NULL
and poh.type_lookup_code='STANDARD'
AND NVL(poh.CANCEL_FLAG,'N')='N'
-- and poh.segment1 in (100,1000,100,650,26268)
-- and poh.segment1=1000
-- and poh.segment1=650)
minus
SELECT y.segment1 po_num, fad.seq_num seq, fdst.short_text st
FROM applsys.fnd_attached_documents fad,
applsys.fnd_documents fd,
applsys.fnd_documents_short_text fdst,
po_headers_all y
WHERE 1 = 1
AND fad.pk1_value(+) = y.po_header_id
AND fad.entity_name = 'PO_HEADERS'
AND fad.document_id = fd.document_id
AND fd.datatype_id = 1
and fad.seq_num>=100
AND fdst.media_id = fd.media_id
and y.type_lookup_code='STANDARD'
AND NVL(y.CANCEL_FLAG,'N')='N'
--and y.segment1 in (100,1000,100,650,26268)
--and y.segment1=1000
UNION
SELECT poh.segment1, 1, '1'--null,null
FROM po.po_headers_all poh
LEFT JOIN
(SELECT fad1.pk1_value
FROM applsys.fnd_attached_documents fad1,
applsys.fnd_documents fd1
WHERE 1 = 1
AND fad1.entity_name = 'PO_HEADERS'
AND fad1.document_id = fd1.document_id
and fad1.seq_num>=100
AND fd1.datatype_id = 1) sub1
ON poh.po_header_id = sub1.pk1_value
WHERE sub1.pk1_value IS NULL
and poh.type_lookup_code='STANDARD'
AND NVL(poh.CANCEL_FLAG,'N')='N'
-- and poh.segment1 in (100,1000,100,650,26268)
-- and poh.segment1=1000
-- and poh.segment1=650)
``` | Use parentheses. Right now, you're doing `((set1 UNION set2) MINUS set1) UNION set2` while you meant to do `(set1 UNION set2) MINUS (set1 UNION set2)`.
In other words, you're uniting set1 and set2, removing set1 from that and uniting set2 with that, while you probably meant to take a union of set1 and set2 and remove a union of set1 and set2 from that. `UNION` and `MINUS` have the same precedence and are processed in the order they're encountered. | The problem is that, firstly, you do a UNION of the first two queries, then, you do a MINUS of the third and then you UNION the result with the fourth query. | Minus (except) of 2 same queries return non-empty resultset | [
"",
"sql",
"oracle",
"except",
""
] |
I'm using sqlcmd to export a query result with two columns to csv. The simple query is:
```
SELECT DISTINCT
CustomerGuid, CustomerPassword
FROM
ServiceOrder
ORDER BY
CustomerGuid
```
When I open the exported csv in Excel both customer and password are on the same column. Is it possible to split them into their own column using sqlcmd. My sqlcmd looks like
```
SQLCMD -S . -d BAS -Q "SQL STATEMENT" -s "," -o "c:\data.csv"
```
Thanks. | The problem is that you are using `,` instead of `;` as the line delimiter. Try:
`SQLCMD -S . -d BAS -Q "SQL STATEMENT" -s ";" -o "c:\data.csv"`
:) | Actually, this is more of an Excel question and it's already answered in [superuser](https://superuser.com/questions/291445/how-do-you-change-default-delimiter-in-the-text-import-in-excel). The default separator when you open a CSV file is locale dependent. In locales where `,` is a decimal separator, the default is `;`.
You can either modify the List separator in Regional settings (not recommended) or open an empty worksheet and import the data, specifying the separator you want.
By the way, the same rules are used in Excel and SharePoint Formulas, where you may have to type `;` instead of `,` to separate values depending on your locale. | Export SQL query result to CSV | [
"",
"sql",
"sql-server",
"excel",
"sqlcmd",
""
] |
Using Postgresql 9.2
What select statement would be used to find all `numeric` results that end with `.00`
Say, from this example:
```
Trans Amt
|-------|
-57059.44
-239.00
-100.61
-181.33
-100.00
```
I would only want to see the `-100.00` and `-239.00` | Try
`SELECT * from TABLE where TRANS_AMT = floor(TRANS_AMT);` | May be
```
select * from Table1 where TransAmt = TransAmt::int
```
**`sql fiddle demo`** | Using Regular expressions with Numeric value | [
"",
"sql",
"postgresql",
""
] |
i have created the next db file -
```
String sql = ""
+ "CREATE TABLE "+ Constants.TABLE_NAME + " ("
+ Constants.NAME_ID + " INTEGER PRIMARY KEY AUTOINCREMENT,"
+ Constants.NAME_PERSON + " TEXT"
+ ")";
db.execSQL(sql);
```
Now what I would like to know is, how to be able to run on the db and to know if a name already exist sin the db, and if so i would like to get the id of that row.
all i can understand is that i should use the
```
Cursor c= db.query(table, columns, selection, selectionArgs, groupBy, having, orderBy)
```
but I don't have a clue what I should do next -
so thanks for any kind of help | you can add this in your DB and call the function passing "to be searched key" as an argument
```
public boolean checkIfExist(String name) {
SQLiteDatabase db = this.getReadableDatabase();
Cursor cursor = db.query(TABLE_INFO, new String[] { KEY_TITLE}, KEY_TITLE + "=?",
new String[] { name }, null, null, null, null);
if (cursor != null)
cursor.moveToFirst();
if (cursor.moveToFirst()) {
return true;
}else{
return false;
}
}
```
Where KEY\_TITLE is the column name in your table. | Take more example on this:
[AndroidSQLite](http://www.vogella.com/articles/AndroidSQLite/article.html)
[AndroidSQLite with multi tables](http://www.androidhive.info/2013/09/android-sqlite-database-with-multiple-tables/) | how to check if a value already exist in db, and if so how to get the id of that row? android sql | [
"",
"android",
"sql",
"database",
""
] |
I've got a one-to-many relationship with these two tables:
```
p2c {
parent_id,
child_id
}
child {
child_id, -- pk
count1,
count2
}
```
If I do this:
```
select distinct parent_id from p2c
join child on p2c.child_id = child.child_id
where count1 = count2
```
I get the parents where one of the children have the counts equal. How do I get the parent where ALL of its children have the counts equal? | You can use GROUP BY/HAVING as follows:
```
select parent_id
from p2c
join child on p2c.child_id = child.child_id
group by parent_id
having count(case when count1 = count2 then 1 end) = count(*);
```
This basically counts rows where `count1 = count2` and only returns `parent_id`s where this count is the same as the total | Use a derived table first , to get the count of rows and rows where they match:
```
select
*
from
pc2
join
(
select
child_id,
count1,
count2,
count(child_id) as NUM_ROWS,
count(case when count1 = count2 then 1 else null) as NUM_MATCHES
from child
group by child_Id
) child
on pc2.child_id = child.child_id
where child.num_rows = num_matches
``` | One to many relationship needs all of the many records to match a condition | [
"",
"sql",
""
] |
I'm trying to compile the stored procedure:
```
create
procedure checkFroud2(code IN varchar2, p_recordset OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_recordset FOR
if code='C' then
select * from emp
//dbms_output.putline('this is if block')
else if code='F' then
dbms_output.putline('this is else block')
else
dbms_output.putline('last else')
end if;
end checkFroud2;
```
but this is showing compile time errors. Can anybody suggest where the problem is? | ```
CREATE
PROCEDURE checkFroud2(
code IN VARCHAR2,
p_recordset OUT SYS_REFCURSOR)
AS
BEGIN
IF code='C' THEN
dbms_output.put_line('this is if block');
OPEN p_recordset FOR
SELECT * FROM emp;
ELSIF code='F' THEN
--you can open p_recordset with dummy as
/*open p_recordset for select * from dual where 1 = 0; */
dbms_output.put_line('this is else block');
ELSE
/*open p_recordset for select * from dual where 1 = 0; */
dbms_output.put_line('last else');
END IF;
END checkFroud2;
/
var o refcursor;
BEGIN
CHECKfroud2
('C',:o);
END;
/
PRINT O;
``` | ```
The correct code is as follows:
create procedure checkFroud2(code IN varchar2, p_recordset OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_recordset FOR
if code='C' then
select * from emp
//dbms_output.putline('this is if block');
elsif code='F' then
dbms_output.putline('this is else block');
else
dbms_output.putline('last else');
end if;
end checkFroud2;
``` | IF else condition in sql stored procedure | [
"",
"sql",
"oracle",
""
] |
I have three entities:
Cue(id, name)
Reaction(id, name)
Freqs(id, cue\_id FK, reaction\_id FK, some\_other\_data)
I want to count number of (cue, reaction) pairs like (cue.name, reaction.name, count)
I already wrote a query
```
SELECT
a.cue_id,
a.reaction_id,
Count(*) as freq
FROM
rdb_freqs a
JOIN rdb_reaction b ON a.cue_id=b.id
JOIN rdb_cue c ON a.reaction_id=c.id
GROUP BY a.cue_id, a.reaction_id
ORDER BY freq DESC;
```
But how should I replace 'id' with 'name'? | ```
SELECT distinct
c.name cue_name,
b.name reaction_name,
(SELECT
Count(*)
FROM
rdb_freqs a1
where
a1.cue_id = a.cue_id and
a1.reaction_id = a.reaction_id
) as count
FROM
rdb_freqs a
inner JOIN rdb_reaction b ON a.reaction_id = b.id
inner JOIN rdb_cue c ON a.cue_id = c.id
order by count desc
```
[**SQLfiddle sample**](http://sqlfiddle.com/#!4/785f6/15) (written in oracle) | ```
SELECT
c.name as cue_name,
b.name as reacion_name,
Count(*) as freq
FROM
rdb_freqs a
JOIN rdb_reaction b ON a.cue_id=b.id
JOIN rdb_cue c ON a.reaction_id=c.id
GROUP BY c.name, b.name
ORDER BY freq DESC;
``` | Select from Many-to-Many with GroupBy | [
"",
"sql",
"postgresql",
"many-to-many",
""
] |
Good Day,
So I'm stuck with a SQL query in which the table I'm querying has multiple sequential columns, such as
```
Property1,
Property2,
Property3,
Property4,
Property5
..etc
```
Now there are about 64 columns descending in the same naming convention.
They are varchar of type and marked by a single "Y" or "N" stating boolean function.(Not my design)
Now where I'm stuck is that in my query I need to return the First Property column that's marked as "Y" in a single record..
I've searched around but could not have come upon the same question asked elsewhere.. Maybe I'm just missing it?
It would really be appreciated should anyone have a hint for me to follow or so on?
Thanks in advance! | Try this:
```
select
CASE WHEN QryGroup1 = 'Y' then 'QryGroup1'
WHEN QryGroup2 = 'Y' then 'QryGroup2'
WHEN QryGroup3 = 'Y' then 'QryGroup3'
WHEN QryGroup10 = 'Y' then 'QryGroup10'
else ''
end as [SelectedBP]
from OCRD
``` | The basic idea is to concatenate all fields in one string and then find the index of the first occurrence of `Y` and form a field label as `PROPERTY`+`FIRST OCCURRENCE INDEX`.
If `Y` is not found then `PROPERTY0` appears in this query you can handle this with CASE statement for example.
[SQLFiddle demo](http://sqlfiddle.com/#!3/8a395/1)
```
select id,
'PROPERTY'+
CONVERT(VARCHAR(10),CHARINDEX('Y',property1+property2+property3))
from T
``` | SQL Select the first Column to match criteria of Many columns in a single row | [
"",
"sql",
"sql-server",
"sql-server-2008",
"multiple-columns",
""
] |
i have the following query for inserting values into my Customer table:
```
INSERT INTO customer(Booking_id,First_name,Last_name,Phone,Address,Town,Postcode,email)
VAlUES
(1,'Elroy','Craddock',01497 3139773','36 Yaffingale Gate','Tadley','RG78 2AB','e.craddock@yautia.co.uk')
```
after running it writes
```
Error starting at line 1,551 in command:
INSERT INTO customer (Booking_id, First_name, Last_name, Phone, Address, Town, Post code, email) VALUES( 1551 ,' Leonard ',' Babbs ', 01959 8159688 ,' 46 Zoophagy Green ',' Choppington ',' NE41 5DB ',' l.babbs@sommelier.co.uk ')
Error at Command Line:1,551 Column:86
Error report:
SQL Error: ORA-00917: missing comma
00917. 00000 - "missing comma"
*Cause:
*Action:
```
i'v been trying to fix this syntax error for almost a day now! Any help/suggestions are appreciated! Thank you | This is your query:
```
INSERT INTO customer (Booking_id, First_name, Last_name, Phone, Address, Town, Post code, email) VALUES( 1551 ,' Leonard ',' Babbs ', 01959 8159688 ,' 46 Zoophagy Green ',' Choppington ',' NE41 5DB ',' l.babbs@sommelier.co.uk ')
```
Your problem is here: `01959 8159688`. This is an invalid number literal.
Depending on `Phone` column type, it's got to be: `'01959 8159688'` (if it is a text column), or `01959.8159688` (if it is a numeric column). | The problem is with `01959 8159688`.
Assuming this is a phone number, and you want to keep the space in order to separate the area code from the rest of the number, you should surround it with single quotes: `'01959 8159688'` - otherwise, it's interpreted as two unrelated numeric literals. | Need help finding syntax in SQL developer | [
"",
"sql",
"database",
"oracle",
"syntax-error",
""
] |
I am trying to insert new information into an already existing table. This is my code which is not working:
```
INSERT INTO customer (cNo, cName, street, city, county, discount)
VALUES (10, Tom, Long Street, Short city, Wide County, 2);
```
Where am I going wwrong? | You are not specifying your strings properly it should be:
```
INSERT INTO customer (cNo, cName, street, city, county, discount)
VALUES (10, 'Tom', 'Long Street', 'Short city', 'Wide County', 2);
``` | You must use quotes for string values:
```
INSERT INTO customer (cNo, cName, street, city, county, discount)
VALUES (10, 'Tom', 'Long Street', 'Short city', 'Wide County', 2);
``` | Inserting information into a table on SQL | [
"",
"sql",
"ms-access",
""
] |
If I have table t like this:
```
id date_from date_to n_n n_d i_s ...
591 2014-04-26 2014-05-03 1 NULL 1 ...
595 2014-04-26 2014-05-03 1 NULL 1 ...
```
And I have a query like this:
```
SELECT id
FROM t
WHERE ROW(date_from, date_to, n_n, n_d, i_s) =
(SELECT date_from, date_to, n_n, n_d, i_s FROM t WHERE id = 591);
```
I would expect the result to be an id column with the two ids, 591 and 595. But I get an empty result. What is wrong with this?
What I need to do: find out the ids of all rows which are equal on certain columns. | That is because of `NULL` comparison. Normally, `NULL` can't be handled with usual equality comparison operators. There are such things as `IS NULL` for it. But in your case, use [<=>](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_equal-to) for your query to get proper results:
```
SELECT id
FROM t
WHERE ROW(date_from, date_to, n_n, n_d, i_s) <=>
(SELECT date_from, date_to, n_n, n_d, i_s FROM t WHERE id = 591);
``` | It does not work because one column contains `null`. That compare results in `unknown` which is `false`. That is why you don't get a result. | Row query for finding ids of similar rows | [
"",
"mysql",
"sql",
""
] |
```
create Table dealing_record(
dealing_record_id int NOT NULL,
Transaction_number Varchar (6) NOT NULL UNIQUE,
Number_of_shares Number NOT NULL,
Amount Number (7,2) NOT NULL,
Stamp_duty Varchar(6) NOT NULL,
commission Varchar(6) NOT NULL,
Date_time SYSDATE NOT NULL,
PRIMARY KEY(dealing_record_id));
```
outcome
```
SQL> create Table dealing_record(
2 dealing_record_id int NOT NULL,
3 Transaction_number Varchar (6) NOT NULL UNIQUE,
4 Number_of_shares Number NOT NULL,
5 Amount Number (7,2) NOT NULL,
6 Stamp_duty Varchar(6) NOT NULL,
7 commission Varchar(6) NOT NULL,
8 Date_time SYSDATE NOT NULL,
9 PRIMARY KEY(dealing_record_id));
Date_time SYSDATE NOT NULL,
*
ERROR at line 8:
ORA-00902: invalid datatype
```
Please what am I getting wrong? | This line:
```
Date_time SYSDATE NOT NULL,
```
should be
```
Date_time date NOT NULL default SYSDATE,
```
Sysdate is a value, not a datatype. | `SYSDATE` is not a data type. You probably mean
```
Date_time DATE NOT NULL,
``` | issues with sysdate in sqlplus | [
"",
"sql",
"sqlplus",
""
] |
I have a working query that is grouping data by hardware model and a result, but the problem is there are many *"results"*. I have tried to reduce that down to *"if result = 0 then keep as 0, else set it to 1"*. This generally works, but I end up having:
```
day | name | type | case | count
------------+----------------+------+------+-------
2013-11-06 | modelA | 1 | 0 | 972
2013-11-06 | modelA | 1 | 1 | 42
2013-11-06 | modelA | 1 | 1 | 2
2013-11-06 | modelA | 1 | 1 | 11
2013-11-06 | modelB | 1 | 0 | 456
2013-11-06 | modelB | 1 | 1 | 16
2013-11-06 | modelB | 1 | 1 | 8
2013-11-06 | modelB | 3 | 0 | 21518
2013-11-06 | modelB | 3 | 1 | 5
2013-11-06 | modelB | 3 | 1 | 7
2013-11-06 | modelB | 3 | 1 | 563
```
Instead of the aggregate I am trying to achieve, where only 1 row per type/case combo.
```
day | name | type | case | count
------------+----------------+------+------+-------
2013-11-06 | modelA | 1 | 0 | 972
2013-11-06 | modelA | 1 | 1 | 55
2013-11-06 | modelB | 1 | 0 | 456
2013-11-06 | modelB | 1 | 1 | 24
2013-11-06 | modelB | 3 | 0 | 21518
2013-11-06 | modelB | 3 | 1 | 575
```
Here is my query:
```
select CURRENT_DATE-1 AS day, model.name, attempt.type,
CASE WHEN attempt.result = 0 THEN 0 ELSE 1 END,
count(*)
from attempt attempt, prod_hw_id prod_hw_id, model model
where time >= '2013-11-06 00:00:00'
AND time < '2013-11-07 00:00:00'
AND attempt.hard_id = prod_hw_id.hard_id
AND prod_hw_id.model_id = model.model_id
group by model.name, attempt.type, attempt.result
order by model.name, attempt.type, attempt.result;
```
Any tips on how I can achieve this would be awesome.
Day will always be defined in the `WHERE` clause, so it will not vary. `name, type, result(case)` and `count` will vary. In short, for any given model I want only 1 row per *"type + case"* combo. As you can see in the first result set I have 3 rows for `modelA` that have `type=1` and `case=1` (because there are many *"result"* values that I have turned into *0=0 and anything else=1*). I want that to be represented as 1 row with the count aggregated as in example data set 2. | Your query would work already - except that you are running into naming conflicts or just confusing the **output column** (the `CASE` expression) with **source column** `result`, which has different content.
```
...
GROUP BY model.name, attempt.type, attempt.result
...
```
You need to `GROUP BY` your `CASE` expression instead of your source column:
```
...
GROUP BY model.name, attempt.type
, CASE WHEN attempt.result = 0 THEN 0 ELSE 1 END
...
```
Or provide a **column alias** that's different from any column name in the `FROM` list - or else that column takes precedence:
```
SELECT ...
, CASE WHEN attempt.result = 0 THEN 0 ELSE 1 END AS result1
...
GROUP BY model.name, attempt.type, result1
...
```
The SQL standard is rather peculiar in this respect. [Quoting the manual here:](https://www.postgresql.org/docs/current/sql-select.html#SQL-SELECT-LIST)
> An output column's name can be used to refer to the column's value in
> `ORDER BY` and `GROUP BY` clauses, but not in the `WHERE` or `HAVING` clauses;
> there you must write out the expression instead.
And:
> If an `ORDER BY` expression is a simple name that matches both an output
> column name and an input column name, `ORDER BY` will interpret it as
> the output column name. **This is the opposite of the choice that `GROUP BY`
> will make** in the same situation. This inconsistency is made to be
> compatible with the SQL standard.
**Bold** emphasis mine.
These conflicts can be avoided by using **positional references** (ordinal numbers) in `GROUP BY` and `ORDER BY`, referencing items in the `SELECT` list from left to right. See solution below.
The drawback is that this may be harder to read and vulnerable to edits in the `SELECT` list: one might forget to adapt positional references accordingly.
But you do *not* have to add the column `day` to the `GROUP BY` clause, as long as it holds a constant value (`CURRENT_DATE-1`).
Rewritten and simplified with proper JOIN syntax and positional references it could look like this:
```
SELECT m.name
, a.type
, CASE WHEN a.result = 0 THEN 0 ELSE 1 END AS result
, CURRENT_DATE - 1 AS day
, count(*) AS ct
FROM attempt a
JOIN prod_hw_id p USING (hard_id)
JOIN model m USING (model_id)
WHERE ts >= '2013-11-06 00:00:00'
AND ts < '2013-11-07 00:00:00'
GROUP BY 1,2,3
ORDER BY 1,2,3;
```
I avoided the column name `time`. That's a [reserved word](https://www.postgresql.org/docs/current/sql-keywords-appendix.html) and should not be used as identifier. Besides, your "time" obviously is a [`timestamp` or `date`](https://www.postgresql.org/docs/current/datatype-datetime.html), so that was rather misleading. | can you please try this:
replace the case statement with the below one
```
Sum(CASE WHEN attempt.result = 0 THEN 0 ELSE 1 END) as Count,
``` | GROUP BY + CASE statement | [
"",
"sql",
"postgresql",
"group-by",
"case",
"aggregate-functions",
""
] |
So this is my current situation.
I have a Table 'Test' with following attributes:
* Id
* PatientID (FK)
* Result
* Type
I need to count how many patients have done a test from a specific type (it's a 1 - \* relationship between Patient and Test).
I solved it like this:
```
SELECT COUNT(*)
FROM Test
WHERE (Type = 'GDS')
GROUP BY PatientID;
```
And now I have to group those results.. ("How many Patients have done n tests")
```
SELECT COUNT(*)
FROM Test
WHERE (Type = 'GDS')
GROUP BY
(
SELECT COUNT(*)
FROM Test WHERE (Type = 'GDS')
GROUP BY PatientID
);
```
Of course it's wrong, since I don't have a clue how to do it .. and I couldn't find anything on the web.
It's a SQL database in Visual Studio ... so I'm not sure if JOINS are going to work.. and if, how would you use it? | Answering your question:
> I need to Count how many How many patients have done a test from a
> specific type (it's a 1 - \* relationship between Patient and Test).
This is the solution:
```
SELECT COUNT(DISTINCT PatientID) FROM Test WHERE Type = 'GDS'
```
For example, if you have:
```
ID | PatiantID | Type
1 | 71 | GDS
2 | 71 | FTP
3 | 53 | GDS
```
Then the code above will return `2`, coz there are two patients who had the GDS test. | Is this homework?
You are searching for the [HAVING](http://www.w3schools.com/sql/sql_having.asp) clause, that permits you to filter an aggregate result:
```
SELECT PatientID
FROM Test
WHERE Type = 'GDS'
GROUP BY PatientID
HAVING COUNT(*) = n
``` | how to Group by a new Select statement | [
"",
"sql",
""
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.