Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
Is it possible to refer to a view's column in a SELECT statement for another view?
How about joining two views together in another view?
How can I refer to a view column? I used GROUP\_CONCAT(column1) in the initial view but then couldn't refer to it later on. | > Yes, it is possible. For reference, go through the [MySQL Docs](http://dev.mysql.com/doc/refman/5.0/en/create-view.html),
but note that nesting views can cause serious performance penalties. | Yes, it is possible.
Without more detail, that's about as descriptive as I can be.
[BTW, I think this is something you could have found easily by searching.] | Can a view select statement contain another view? | [
"",
"mysql",
"sql",
""
] |
In Oracle we have a limit of only 1000 items in the IN clause of SQL. I would like to know if there are any other such limitations in Oracle. | Various limits are documented here for 11g: <http://docs.oracle.com/cd/B28359_01/server.111/b28320/limits003.htm>
Some edition-based limitations: <http://www.oracle.com/us/products/database/enterprise-edition/comparisons/index.html> | Limits for Oracle 10g:
<http://docs.oracle.com/cd/B19306_01/server.102/b14237/limits.htm>
Logical limits:
<http://docs.oracle.com/cd/B19306_01/server.102/b14237/limits003.htm>
Physical limits:
<http://docs.oracle.com/cd/B19306_01/server.102/b14237/limits002.htm> | ORACLE SQL, PL/SQL limitations | [
"",
"sql",
"oracle",
"plsql",
""
] |
I have been facing a strange scenario when comparing dates in PostgreSQL (version 9.2.4 on Windows).
I have a column in my table, say `update_date`, with type `timestamp without time zone`.
Clients can search over this field with only a date (e.g. `2013-05-03`) or a date with time (e.g. `2013-05-03 12:20:00`).
This column currently holds a timestamp for every row; all rows share the same date part `2013-05-03` but differ in the time part.
When I compare over this column, I get different results, like the following:
```
select * from table where update_date >= '2013-05-03' AND update_date <= '2013-05-03' -> No results
select * from table where update_date >= '2013-05-03' AND update_date < '2013-05-03' -> No results
select * from table where update_date >= '2013-05-03' AND update_date <= '2013-05-04' -> results found
select * from table where update_date >= '2013-05-03' -> results found
```
My question is: how can I make the first query return results? In other words, why does the 3rd query work but not the first one? | @Nicolai is correct about casting and why the condition is false for any data. I guess you prefer the first form because you want to avoid date manipulation on the input string, correct? You don't need to be afraid:
```
SELECT *
FROM table
WHERE update_date >= '2013-05-03'::date
AND update_date < ('2013-05-03'::date + '1 day'::interval);
``` | When you compare `update_date >= '2013-05-03'`, Postgres casts the values to the same type in order to compare them. So your '2013-05-03' was cast to '2013-05-03 00:00:00'.
So for update\_date = '2013-05-03 14:45:00' your expression becomes:
```
'2013-05-03 14:45:00' >= '2013-05-03 00:00:00' AND '2013-05-03 14:45:00' <= '2013-05-03 00:00:00'
```
This is always `false`.
To solve this problem, cast update\_date to `date`:
```
select * from table where update_date::date >= '2013-05-03' AND update_date::date <= '2013-05-03' -> Will return result
``` | How to compare dates in datetime fields in Postgresql? | [
"",
"sql",
"database",
"postgresql",
"date",
""
] |
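The behaviour in the question above comes down to comparing a bare date against values that carry a time part, so it can be reproduced outside PostgreSQL. A small sketch using Python's `sqlite3` (the table and data are invented; SQLite compares these ISO strings lexicographically, which mirrors the effect):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (update_date TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?)",
    [("2013-05-03 00:00:00",), ("2013-05-03 14:45:00",), ("2013-05-04 00:00:00",)],
)

# Comparing against the bare date misses every row with a time part,
# because '2013-05-03 ...' sorts after '2013-05-03':
naive = conn.execute(
    "SELECT COUNT(*) FROM t "
    "WHERE update_date >= '2013-05-03' AND update_date <= '2013-05-03'"
).fetchone()[0]

# The half-open range [day, next day) catches the whole day:
half_open = conn.execute(
    "SELECT COUNT(*) FROM t "
    "WHERE update_date >= '2013-05-03' AND update_date < '2013-05-04'"
).fetchone()[0]
```

The half-open range `>= day AND < next day` is the same pattern the accepted answer builds with `'2013-05-03'::date + '1 day'::interval`.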
I have 2 tables, Users and Friends. Users has a userID column, and Friends has a userID column as well. I want to find the userIDs present in the Users table but not present in the Friends table.
I am using this query:
```
SELECT b.userId
FROM users b
where b.userId NOT IN(
select userId
from user_friend
)
```
But it is returning all the rows of the Users table.
Am I missing something?
Example data in MySQL:
```
Users Table
1
2
3
4
Friends Table
2
3
4
```
I actually need to get 1. | Try this:
```
SELECT b.userId
FROM users b
WHERE NOT EXISTS (
SELECT *
FROM user_friend
WHERE b.userId = user_friend.userId
);
``` | Try this one:
```
SELECT b.userId
FROM users b
where b.userId NOT IN(
select user_friend.userId
from user_friend
)
``` | NOT IN not working SQL | [
"",
"mysql",
"sql",
"oracle",
"select",
"notin",
""
] |
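A classic reason `NOT IN` appears to misbehave is a `NULL` in the subquery: `userId NOT IN (..., NULL)` is never true, so no rows come back, while `NOT EXISTS` is NULL-safe. A quick reproduction of the question above with Python's `sqlite3` (hypothetical data, with one NULL added on purpose):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userId INTEGER)")
conn.execute("CREATE TABLE user_friend (userId INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,), (3,), (4,)])
conn.executemany("INSERT INTO user_friend VALUES (?)", [(2,), (3,), (4,), (None,)])

# NOT IN against a list containing NULL is never TRUE, so nothing comes back:
not_in = conn.execute(
    "SELECT userId FROM users "
    "WHERE userId NOT IN (SELECT userId FROM user_friend)"
).fetchall()

# NOT EXISTS is NULL-safe and returns the expected row:
not_exists = conn.execute(
    "SELECT u.userId FROM users u WHERE NOT EXISTS "
    "(SELECT 1 FROM user_friend f WHERE f.userId = u.userId)"
).fetchall()
```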
Basic SQL question -- and I'm new at this, so please bear with me...
I'm trying to join two fields in this manner:
'Buyers' table:
```
Name Date
Greg Jan 01
John Jan 01
Greg Jan 02
```
'Purchases' table:
```
Name Date Product Qty
Greg Jan 01 Apple 2
Greg Jan 01 Banana 3
John Jan 01 Apple 2
Greg Jan 02 Banana 1
```
Joined table:
```
Name Date Apples Bananas
Greg Jan 01 2 3
John Jan 01 2 0
Greg Jan 02 0 1
```
I know it has to be something simple, but I'm just not getting it. | Looks like you're trying to `pivot` your results. You can achieve this using `sum` with `case`:
```
select b.name,
b.date,
sum(case when product='Apple' then qty end) Apples,
sum(case when product='Banana' then qty end) Bananas
from buyers b
join purchases p on b.name = p.name and b.date = p.date
group by b.name,
b.date
``` | If the joined table is named 'buyer\_purchases', then the SQL looks like:
`select buyer_purchases.*,Purchases.* from buyer_purchases,Purchases where buyer_purchases.Name=Purchases.Name and buyer_purchases.Date=Purchases.Date;` | Simple Joining SQL on two fields | [
"",
"mysql",
"sql",
""
] |
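The `sum`-with-`case` pivot from the accepted answer can be checked end to end with Python's `sqlite3` and the sample data from the question. One detail worth noting: `SUM` over an all-`NULL` group yields `NULL`, so a `COALESCE` (an addition here, not in the original answer) is needed to get the `0` shown in the desired output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE buyers (name TEXT, date TEXT)")
conn.execute("CREATE TABLE purchases (name TEXT, date TEXT, product TEXT, qty INTEGER)")
conn.executemany("INSERT INTO buyers VALUES (?, ?)",
                 [("Greg", "Jan 01"), ("John", "Jan 01"), ("Greg", "Jan 02")])
conn.executemany("INSERT INTO purchases VALUES (?, ?, ?, ?)",
                 [("Greg", "Jan 01", "Apple", 2), ("Greg", "Jan 01", "Banana", 3),
                  ("John", "Jan 01", "Apple", 2), ("Greg", "Jan 02", "Banana", 1)])

# One output column per product, filled by a conditional SUM:
rows = conn.execute("""
    SELECT b.name, b.date,
           COALESCE(SUM(CASE WHEN p.product = 'Apple'  THEN p.qty END), 0) AS apples,
           COALESCE(SUM(CASE WHEN p.product = 'Banana' THEN p.qty END), 0) AS bananas
    FROM buyers b
    JOIN purchases p ON b.name = p.name AND b.date = p.date
    GROUP BY b.name, b.date
""").fetchall()
```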
I have a database representing retail items. Some items have multiple scan codes but are in essence the same item, i.e. their name, cost, and retail will ALWAYS be the same. To model this, [the database has the following structure](http://sqlfiddle.com/#!2/3daa6/2):
```
Inventory_Table
INV_PK | INV_ScanCode | INV_Name | INV_Cost | INV_Retail
1 | 000123456789 | Muffins | 0.15 | 0.30
2 | 000987654321 | Cookie | 0.25 | 0.50
3 | 000123454321 | Cake | 0.45 | 0.90
Alternates_Table
ALT_PK | INV_FK | ALT_ScanCode
1 | 2 | 000999888777
2 | 2 | 000666555444
3 | 2 | 000333222111
```
Now say I want a listing of all the scan codes in the database, how would I join the tables to get the following output:
```
ScanCode | Name | Cost | Retail
000123456789 | Muffins | 0.15 | 0.30
000987654321 | Cookie | 0.25 | 0.50
000999888777 | Cookie | 0.25 | 0.50
000666555444 | Cookie | 0.25 | 0.50
000333222111 | Cookie | 0.25 | 0.50
000123454321 | Cake | 0.45 | 0.90
```
[**SQL Fiddle**](http://sqlfiddle.com/#!2/3daa6/2) | You're looking for a `union`:
```
SELECT it.INV_ScanCode, it.INV_Name, it.INV_Cost, it.INV_Retail
FROM Inventory_Table AS it
UNION ALL
SELECT at.ALT_ScanCode, it.INV_Name, it.INV_Cost, it.INV_Retail
FROM Inventory_Table AS it
INNER JOIN Alternate_Table AS at
ON at.INV_FK = it.INV_PK
```
`UNION ALL` is the faster choice when you know the rows won't repeat between the two sets of results (so the DB doesn't need to check for duplicates). | ```
SELECT it.INV_ScanCode, it.INV_Name, it.INV_Cost, it.INV_Retail
FROM Inventory_Table AS it
union all
SELECT at.ALT_ScanCode, it.INV_Name, it.INV_Cost, it.INV_Retail
FROM Alternate_Table AS at
inner join Inventory_Table AS it on at.INV_FK = it.INV_PK
```
[SQL Fiddle](http://sqlfiddle.com/#!2/3daa6/6) | SQL JOIN to get a list of inventory items and alternate items | [
"",
"sql",
"join",
"foreign-keys",
"sybase",
""
] |
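A runnable sketch of the `UNION ALL` approach, again via Python's `sqlite3`, with a cut-down version of the question's schema (column list shortened and data invented for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Inventory_Table (INV_PK INTEGER, INV_ScanCode TEXT, INV_Name TEXT)")
conn.execute("CREATE TABLE Alternates_Table (ALT_PK INTEGER, INV_FK INTEGER, ALT_ScanCode TEXT)")
conn.executemany("INSERT INTO Inventory_Table VALUES (?, ?, ?)",
                 [(1, "000123456789", "Muffins"), (2, "000987654321", "Cookie")])
conn.executemany("INSERT INTO Alternates_Table VALUES (?, ?, ?)",
                 [(1, 2, "000999888777"), (2, 2, "000666555444")])

# Primary scan codes, plus alternate scan codes joined back to their item:
rows = conn.execute("""
    SELECT it.INV_ScanCode, it.INV_Name
    FROM Inventory_Table AS it
    UNION ALL
    SELECT alt.ALT_ScanCode, it.INV_Name
    FROM Alternates_Table AS alt
    JOIN Inventory_Table AS it ON alt.INV_FK = it.INV_PK
""").fetchall()
```

Since a primary scan code can never equal an alternate one here, `UNION ALL` skips the needless duplicate check that plain `UNION` would perform.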
I have this table

and I want the 10 distinct rows with the highest score ordered descending.
So I tried
```
SELECT * FROM `highscores` GROUP BY userID ORDER BY score DESC LIMIT 10 ;
```
which is not correct because it returns:

Then I tried:
`SELECT distinct(userID), userName, userLastname, score FROM highscores ORDER BY score DESC;`
which is not correct either, because it doesn't really return distinct rows based on userID.

And this is the result that I want:

I want to keep the highest score of each player (distinct `userID`) for the first 10 players. Any idea how I can do that? | The right query is:
```
SELECT userName, userLastname, userID, MAX( score )
FROM `highscores`
GROUP BY userID
ORDER BY MAX( score ) DESC
LIMIT 10
```
Thanks to EddieJamsession's comment. | ```
SELECT a.*
FROM highscore a
INNER JOIN
(
SELECT userID, MAX(score) score
FROM highscore
GROUP BY userID
) b ON a.userID = b.userID
AND a.score = b.score
ORDER BY score DESC
LIMIT 10
```
*this does not handle ties, though.* | Distinct values of an SQL Table | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
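Note that the accepted answer relies on MySQL letting you select non-aggregated columns (`userName`, `userLastname`) that are not in the `GROUP BY`; the join-on-`MAX` form from the second answer is the portable version of this greatest-n-per-group pattern. A sketch with Python's `sqlite3` and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE highscores (userID INTEGER, userName TEXT, score INTEGER)")
conn.executemany("INSERT INTO highscores VALUES (?, ?, ?)",
                 [(1, "Ann", 50), (1, "Ann", 90), (2, "Bob", 70), (2, "Bob", 40)])

# Join each row against its user's best score, keeping only the matches:
rows = conn.execute("""
    SELECT h.userID, h.userName, h.score
    FROM highscores h
    JOIN (SELECT userID, MAX(score) AS best
          FROM highscores
          GROUP BY userID) m
      ON m.userID = h.userID AND m.best = h.score
    ORDER BY h.score DESC
    LIMIT 10
""").fetchall()
```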
I have the following table in my database
```
CREATE TABLE Q_GROUP ( Id INTEGER PRIMARY KEY );
```
This is only needed to ensure that different items are in the same group. Each time I add new items, I need to create a unique group; the items are then connected to this group. The usual syntax for adding items with an auto-incremented identifier is to specify the other columns but not the identifier. In this case, SQLite gives a syntax error when attempting this.
Should I add a dummy value to the table, or is there a better way to do this in SQLite?
-- edit --
The following queries give a syntax error:
```
INSERT INTO Q_GROUP VALUES;
INSERT INTO Q_GROUP VALUES ();
INSERT INTO Q_GROUP () VALUES ();
INSERT INTO Q_GROUP ;
``` | Use `null` as placeholder
```
insert into Q_GROUP (Id)
values (null);
```
## [SQLFiddle demo](http://sqlfiddle.com/#!7/c8fda/1) | try like this,
```
insert into Q_GROUP values(1);
``` | SQ Lite insert just a single identifier | [
"",
"sql",
"database",
"sqlite",
"syntax-error",
"identifier",
""
] |
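Besides `VALUES (NULL)`, SQLite also accepts `INSERT ... DEFAULT VALUES` for a table whose only column is the `INTEGER PRIMARY KEY`. Both forms let SQLite pick the next rowid, which can be verified with Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Q_GROUP (Id INTEGER PRIMARY KEY)")

# Both statements make SQLite auto-assign the identifier:
conn.execute("INSERT INTO Q_GROUP (Id) VALUES (NULL)")
conn.execute("INSERT INTO Q_GROUP DEFAULT VALUES")

ids = [r[0] for r in conn.execute("SELECT Id FROM Q_GROUP ORDER BY Id")]
```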
I have the following tables:
```
Table1 Table2
CardNo ID Record Date ID Name Dept
1 101 8.00 11/7/2013 101 Danny Green
2 101 13.00 11/7/2013 102 Tanya Red
3 101 15.00 11/7/2013 103 Susan Blue
4 102 11.00 11/7/2013 104 Gordon Blue
5 103 12.00 11/7/2013
6 104 12.00 11/7/2013
7 104 18.00 11/7/2013
8 101 1.00 12/7/2013
9 101 10.00 12/7/2013
10 102 0.00 12/7/2013
11 102 1.00 12/7/2013
12 104 3.00 12/7/2013
13 104 4.00 12/7/2013
```
i want the result to be like this:
```
Name Dept Record
Danny Green 8.00
Tanya Red 11.00
Susan Blue 12.00
Gordon Blue 18.00
```
where the result shows only the minimum value of "Record" for each "Name", filtered by the selected date. I'm using SQL. | Use:
```
select t2.Name, t2.Dept, min(t1.Record)
from table1 t1
join table2 t2 on t2.ID = t1.ID
group by t2.ID, t2.Name, t2.Dept
```
or
```
select t2.Name, t2.Dept, a.record
from table2 t2
join
(
select t1.ID, min(t1.Record) [record]
from table1 t1
group by t1.ID
)a
on a.ID = t2.ID
```
For filtering add `where` clause, e.g.:
```
select t2.Name, t2.Dept, min(t1.Record)
from table1 t1
join table2 t2 on t2.ID = t1.ID
where t1.Date = '11/7/2013'
group by t2.ID, t2.Name, t2.Dept
``` | Please try:
```
Select
b.Name,
b.Dept,
MIN(Record) Record
from
Table1 a join Table2 b on a.ID=b.ID
GROUP BY b.Name,
b.Dept
``` | Select the minimum value for each row join by another table | [
"",
"sql",
"join",
"group-by",
"multiple-tables",
"minimum",
""
] |
I want to merge the values of three different columns named **ProcessFalseRedirect, ProcessTrueRedirect, GeneralRedirectToPP** into a single column named **PPID**.
The query I'm using is this:
```
select ProcessFalseRedirect,ProcessTrueRedirect,GeneralRedirectToPP from IVR_PPMaster
```
which gives me the following result,

I want my output to be like this,
**PPID**
---------
PP-01
PP-02
PP-03
PP-04
PP-04a
PP-04b
PP-05
and so on.
I want a query that will ignore the blank rows and blank cells as well.
Please help. | `UNION` can help you:
```
select ProcessFalseRedirect PPID from IVR_PPMaster
union
select ProcessTrueRedirect from IVR_PPMaster
union
select GeneralRedirectToPP from IVR_PPMaster
```
To get data in ascending order use this:
```
select PPID from
(
select ProcessFalseRedirect PPID from IVR_PPMaster
union
select ProcessTrueRedirect from IVR_PPMaster
union
select GeneralRedirectToPP from IVR_PPMaster
) p
order by PPID
```
Note that there is a difference between `UNION` and `UNION ALL`: `UNION` removes duplicate rows. | Try this:
```
(SELECT ProcessFalseRedirect PPID FROM IVR_PPMaster WHERE ProcessFalseRedirect IS NOT NULL
UNION ALL
SELECT ProcessTrueRedirect FROM IVR_PPMaster WHERE ProcessTrueRedirect IS NOT NULL
UNION ALL
SELECT GeneralRedirectToPP FROM IVR_PPMaster WHERE GeneralRedirectToPP IS NOT NULL)
ORDER BY ProcessFalseRedirect
``` | SQL Query to merge different column data in to one | [
"",
".net",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I am trying to create a table in Oracle SQL Developer but I am getting error ORA-00902.
Here is my schema for the table creation
```
CREATE TABLE APPOINTMENT(
Appointment NUMBER(8) NOT NULL,
PatientID NUMBER(8) NOT NULL,
DateOfVisit DATE NOT NULL,
PhysioName VARCHAR2(50) NOT NULL,
MassageOffered BOOLEAN NOT NULL, -- the line giving the error
CONSTRAINT APPOINTMENT_PK PRIMARY KEY (Appointment)
);
```
What am I doing wrong?
Thanks in advance | Oracle does not support the `boolean` data type at schema level, though it is supported in PL/SQL blocks. By schema level, I mean you cannot create table columns with type as `boolean`, nor nested table types of records with one of the columns as `boolean`. You have that freedom in PL/SQL though, where you can create a record type collection with a boolean column.
As a workaround I would suggest using the `CHAR(1 BYTE)` type, as it will take just one byte to store your value, as opposed to two bytes for the `NUMBER` format. Read more about data types and sizes [here](http://docs.oracle.com/cd/B19306_01/server.102/b14220/datatype.htm#i16209) on Oracle Docs. | Last I heard, there is no `boolean` type in Oracle. Use `number(1)` instead! | Boolean giving invalid datatype - Oracle | [
"",
"sql",
"oracle",
""
] |
I have a table in SQL SERVER "usage" that consists of 4 columns
```
userID, appid, licenseid, dateUsed
```
Now I need to query the table so that I can find how many users used an app under a license within a time frame. Bear in mind that even if user 1 used the app on, say, January 1st and 2nd, they would only count as 1 in the result, i.e. unique users in the time frame.
so querying
```
where dateUsed > @from AND dateUsed < @dateTo
```
what I would like to get back is a table of rows like:
user count, appid, licenseid
Does anyone have any ideas how? I have been playing with a count as a subquery and then trying to sum the results, but no success yet. | Just tested this; it provides your desired result:
```
SELECT appid, licenseid, COUNT(DISTINCT userID)
FROM usage
WHERE dateUsed BETWEEN @from AND @dateTo
GROUP BY appid, licenseid
``` | Something like this should work:
```
SELECT appid, licenseid, COUNT(DISTINCT userid) as DistictUsers
FROM yourTable
WHERE dateUsed BETWEEN @from AND @dateTo
GROUP BY appid, licenseid
``` | SQL count of distinct records against multiple criteria | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
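The `COUNT(DISTINCT userID)` query can be demonstrated with Python's `sqlite3` and a few invented rows (the table is renamed `app_usage` here to stay clear of reserved words); the repeat visit by user 1 on app 10 counts only once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_usage "
             "(userID INTEGER, appid INTEGER, licenseid INTEGER, dateUsed TEXT)")
conn.executemany("INSERT INTO app_usage VALUES (?, ?, ?, ?)",
                 [(1, 10, 100, "2013-01-01"),   # user 1, twice for app 10
                  (1, 10, 100, "2013-01-02"),
                  (2, 10, 100, "2013-01-03"),
                  (1, 20, 200, "2013-01-04")])

# DISTINCT collapses repeat uses by the same user within the window:
rows = conn.execute("""
    SELECT appid, licenseid, COUNT(DISTINCT userID)
    FROM app_usage
    WHERE dateUsed BETWEEN '2013-01-01' AND '2013-01-31'
    GROUP BY appid, licenseid
""").fetchall()
```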
I use SQL Server 2008 R2 and SQL Server Business Intelligence Development Studio.
I created a `Business Intelligence Project`,
then a `Data Source` for `Adventure Work 2008 DW`, then a `Data Source View`,
and then a `Cube`.
I can `build` and `rebuild` my project, but when I try to `Deploy` I get 34 errors.
The first error is:
```
OLE DB error: OLE DB or ODBC error: Login failed for user 'NT AUTHORITY\NETWORK
SERVICE'.; 28000; Cannot open database "AdventureWorksDW2008" requested by the login.
The login failed.; 42000.
```
I found this link: [SQL Server 2012: Login failed for user 'NT Service\MSSQLServerOLAPService'.; 28000](https://stackoverflow.com/questions/15238559/sql-server-2012-login-failed-for-user-nt-service-mssqlserverolapservice-280)
but it did not work for me. | I fixed this error the following way.
First, open the `Data Source`:

Then `Edit` the connection string:


And then I chose `use specific windows ....` because I want to connect to another server to access my `SSAS`.
This error is regarding **"NT AUTHORITY\SYSTEM"**, so just go to the relational database and expand the Security folder.
[](https://i.stack.imgur.com/sPxzA.png)
Double click on **"NT AUTHORITY\SYSTEM"**, it will open **"Login Properties"** wizard, go to **"User Mapping"** tab and choose the relational database that you are going to use. Give appropriate permissions and click OK.
[](https://i.stack.imgur.com/7VOJw.png) | OLE DB or ODBC error: Login failed for user 'NT AUTHORITY\NETWORK SERVICE | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"ssas",
"olap",
""
] |
Error is
> Unknown column 'num' in 'where' clause
```
SELECT COUNT(*) AS num, books_bookid
FROM bookgenre_has_books
WHERE num > 10
GROUP BY books_bookid
```
What am I doing wrong? Thanks. | The `WHERE` clause can't see aliases; use `HAVING`.
It is not allowed to refer to a column alias in a `WHERE` clause, because the column value might not yet be determined when the `WHERE` clause is executed.
<http://dev.mysql.com/doc/refman/5.0/en/problems-with-alias.html> | Try this; you should use the HAVING clause:
```
SELECT COUNT(*) AS num, books_bookid
FROM bookgenre_has_books
GROUP BY books_bookid
HAVING COUNT(*) > 10
```
The SQL HAVING clause is used in combination with the SQL GROUP BY clause. It can be used in an SQL SELECT statement to filter the records that a SQL GROUP BY returns. | Unknown column error in this COUNT MySQL statement? | [
"",
"mysql",
"sql",
""
] |
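The `WHERE`/`HAVING` distinction above is easy to verify: `WHERE` runs before grouping, so the aggregate alias does not exist yet, while `HAVING` filters the finished groups. A check of the accepted fix with Python's `sqlite3` and made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookgenre_has_books (books_bookid INTEGER)")
# Book 1 appears in 12 genres, book 2 in only 3:
conn.executemany("INSERT INTO bookgenre_has_books VALUES (?)",
                 [(1,)] * 12 + [(2,)] * 3)

# HAVING filters groups after COUNT(*) has been computed:
rows = conn.execute("""
    SELECT books_bookid, COUNT(*) AS num
    FROM bookgenre_has_books
    GROUP BY books_bookid
    HAVING COUNT(*) > 10
""").fetchall()
```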
I have two tables with a common id:
table1 has a task-number column and table2 has a document column.
Each task can have multiple documents. I'm trying to find all task numbers that don't have a specific document.
Fake data:
```
SELECT * FROM table1
id tasknumber
1 3210-012
2 3210-022
3 3210-032
SELECT * FROM table2
id document
1 revision1
1 SB
1 Ref
2 revision1
2 Ref
3 revision1
3 SB
```
But how would I find tasknumbers which don't have a document named SB? | ```
SELECT t1.tasknumber
FROM table1 t1
LEFT JOIN table2 t2 ON t2.id = t1.id AND t2.document = 'SB'
WHERE t2.id IS NULL;
```
There are basically four techniques:
* [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694) | ```
select t1.tasknumber from table1 t1
where not exists
(select 1 from table2 t2 where t1.id = t2.id and t2.document = 'SB')
``` | SQL View to find missing values, should be simple | [
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I'm looking for a query that gives me yesterday's results.
The query can be run at any time of day, so it shouldn't be hour-dependent,
and the result must cover from the start of yesterday to the start of today.
I found this; does it give the exact result?
```
WHERE Date between
select dateadd(d, -2, CAST(GETDATE() AS DATE))
and
select dateadd(d, -1, CAST(GETDATE() AS DATE))
``` | Try something like this:
```
select * from mytable where mydate >= DATEADD(day, -1, convert(date, GETDATE()))
and mydate < convert(date, GETDATE())
``` | If I understood your problem correctly, add the below condition to your select query:
```
WHERE Date >=(select dateadd(d, -1, CAST(GETDATE() AS DATE))) AND Date < GETDATE()
``` | sql query to get exactly the last day result | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table like this:
```
date(timestamp) Error(integer) someOtherColumns
```
I have a query to select all the rows for a specific date:
```
SELECT * from table
WHERE date::date = '2010-01-17'
```
Now I need to count all rows where Error equals 0 (from that day) and divide that by the count of all rows (from that day).
So the result should look like this:
```
Date(timestamp) Percentage of failure
2010-01-17 0.30
```
The database is pretty big, millions of rows.
And it would be great if someone knew how to do this for more days, over an interval from one day to another.
```
Date(timestamp) Percentage of failure
2010-01-17 0.30
2010-01-18 0.71
and so on
``` | What about this (if `error` can only be 1 or 0):
```
select
date,
sum(Error)::numeric / count(Error) as "Percentage of failure"
from Table1
group by date
```
or, if `error` could be any integer:
```
select
date,
sum(case when Error > 0 then 1 end)::numeric / count(Error) as "Percentage of failure"
from Table1
group by date
```
---
Just found that I counted `not 0` (I assumed an error is when Error != 0), and didn't take NULLs into account (I don't know how you want to treat them). So here's another query, which treats NULLs as 0 and counts the percentage of failure in two opposite ways:
```
select
date,
round(count(nullif(Error, 0)) / count(*) ::numeric , 2) as "Percentage of failure",
1- round(count(nullif(Error, 0)) / count(*) ::numeric , 2) as "Percentage of failure2"
from Table1
group by date
order by date;
```
**`sql fiddle demo`** | Try this:
```
select cast(data1.count1 as float) / cast(data2.count2 as float)
from (
    select count(*) as count1 from table where date::date = '2010-01-17' and Error = 0) data1,
    (select count(*) as count2 from table where date::date = '2010-01-17') data2
``` | Divide two counts from one select | [
"",
"sql",
"postgresql",
"select",
"percentage",
""
] |
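The `::numeric` cast in the accepted answer matters because integer division truncates. SQLite shows the same pitfall (it also divides two integers integrally), so here is a sketch with Python's `sqlite3` and ten invented rows, three of them failures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (error INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(0,), (0,), (0,), (1,), (1,), (0,), (0,), (0,), (0,), (1,)])

# Integer / integer truncates, so the ratio silently becomes 0:
truncated = conn.execute("SELECT SUM(error) / COUNT(*) FROM t").fetchone()[0]

# Forcing one operand to a non-integer type (the role of ::numeric) keeps the fraction:
ratio = conn.execute("SELECT SUM(error) * 1.0 / COUNT(*) FROM t").fetchone()[0]
```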
```
SELECT SUM(orders.quantity) AS num, fName, surname
FROM author
INNER JOIN book ON author.aID = book.authorID;
```
I keep getting the error message: "you tried to execute a query that does not include the specified expression 'fName' as part of an aggregate function". What do I do? | The error is because `fName` is included in the `SELECT` list, but is not included in a `GROUP BY` clause and is not part of an aggregate function (`Count()`, `Min()`, `Max()`, `Sum()`, etc.)
You can fix that problem by including `fName` in a `GROUP BY`. But then you will face the same issue with `surname`. So put both in the `GROUP BY`:
```
SELECT
fName,
surname,
Count(*) AS num_rows
FROM
author
INNER JOIN book
ON author.aID = book.authorID;
GROUP BY
fName,
surname
```
Note I used `Count(*)` where you wanted `SUM(orders.quantity)`. However, `orders` isn't included in the `FROM` section of your query, so you must include it before you can `Sum()` one of its fields.
If you have Access available, build the query in the query designer. It can help you understand what features are possible and apply the correct Access SQL syntax. | I had a similar problem in a MS-Access query, and I solved it by changing my equivalent `fName` to an "Expression" (as opposed to "Group By" or "Sum"). So long as all of my fields were "Expression", the Access query builder did not require any `Group By` clause at the end. | "You tried to execute a query that does not include the specified aggregate function" | [
"",
"sql",
"ms-access",
""
] |
I am using SQL Server 2008 R2.
I want the priority based sorting for records in a table.
So I am using a CASE WHEN statement in the ORDER BY clause. The ORDER BY clause is as below:
```
ORDER BY
CASE WHEN TblList.PinRequestCount <> 0 THEN TblList.PinRequestCount desc, TblList.LastName ASC, TblList.FirstName ASC, TblList.MiddleName ASC END,
CASE WHEN TblList.HighCallAlertCount <> 0 THEN TblList.HighCallAlertCount desc, TblList.LastName ASC, TblList.FirstName ASC, TblList.MiddleName ASC END,
Case WHEN TblList.HighAlertCount <> 0 THEN TblList.HighAlertCount DESC, TblList.LastName ASC, TblList.FirstName ASC, TblList.MiddleName ASC END,
CASE WHEN TblList.MediumCallAlertCount <> 0 THEN TblList.MediumCallAlertCount DESC, TblList.LastName ASC, TblList.FirstName ASC, TblList.MiddleName ASC END,
Case WHEN TblList.MediumAlertCount <> 0 THEN TblList.MediumAlertCount DESC, TblList.LastName ASC, TblList.FirstName ASC, Patlist.MiddleName ASC END
```
But it gives `Incorrect syntax near the keyword 'desc'`
Any solution?
Also I can have:
```
TblList.PinRequestCount <> 0 and TblList.HighCallAlertCount <> 0 and
TblList.HighAlertCount <> 0 and TblList.MediumCallAlertCount <> 0 and
TblList.MediumAlertCount <> 0
```
at the same time. | `CASE` is an *expression* - it returns a *single* scalar value (per row). It can't return a complex part of the parse tree of something else, like an `ORDER BY` clause of a `SELECT` statement.
It looks like you just need:
```
ORDER BY
CASE WHEN TblList.PinRequestCount <> 0 THEN TblList.PinRequestCount END desc,
CASE WHEN TblList.HighCallAlertCount <> 0 THEN TblList.HighCallAlertCount END desc,
Case WHEN TblList.HighAlertCount <> 0 THEN TblList.HighAlertCount END DESC,
CASE WHEN TblList.MediumCallAlertCount <> 0 THEN TblList.MediumCallAlertCount END DESC,
Case WHEN TblList.MediumAlertCount <> 0 THEN TblList.MediumAlertCount END DESC,
TblList.LastName ASC, TblList.FirstName ASC, TblList.MiddleName ASC
```
Or possibly:
```
ORDER BY
CASE
WHEN TblList.PinRequestCount <> 0 THEN TblList.PinRequestCount
WHEN TblList.HighCallAlertCount <> 0 THEN TblList.HighCallAlertCount
WHEN TblList.HighAlertCount <> 0 THEN TblList.HighAlertCount
WHEN TblList.MediumCallAlertCount <> 0 THEN TblList.MediumCallAlertCount
WHEN TblList.MediumAlertCount <> 0 THEN TblList.MediumAlertCount
END desc,
TblList.LastName ASC, TblList.FirstName ASC, TblList.MiddleName ASC
```
It's a little tricky to tell which of the above (or something else) is what you're looking for because you've a) not *explained* what actual sort order you're trying to achieve, and b) not supplied any *sample data* and expected results, from which we could attempt to *deduce* the actual sort order you're trying to achieve.
---
This may be the answer you're looking for:
```
ORDER BY
CASE
WHEN TblList.PinRequestCount <> 0 THEN 5
WHEN TblList.HighCallAlertCount <> 0 THEN 4
WHEN TblList.HighAlertCount <> 0 THEN 3
WHEN TblList.MediumCallAlertCount <> 0 THEN 2
WHEN TblList.MediumAlertCount <> 0 THEN 1
END desc,
CASE
WHEN TblList.PinRequestCount <> 0 THEN TblList.PinRequestCount
WHEN TblList.HighCallAlertCount <> 0 THEN TblList.HighCallAlertCount
WHEN TblList.HighAlertCount <> 0 THEN TblList.HighAlertCount
WHEN TblList.MediumCallAlertCount <> 0 THEN TblList.MediumCallAlertCount
WHEN TblList.MediumAlertCount <> 0 THEN TblList.MediumAlertCount
END desc,
TblList.LastName ASC, TblList.FirstName ASC, TblList.MiddleName ASC
Another simple example, from [here](http://codechef4u.com/post/2015/04/07/order-by-clause-with-case-expressions-case-statement):
```
SELECT * FROM dbo.Employee
ORDER BY
CASE WHEN Gender='Male' THEN EmployeeName END Desc,
CASE WHEN Gender='Female' THEN Country END ASC
``` | CASE WHEN statement for ORDER BY clause | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have a requirement to show a report in the following format from two different tables, as shown below. The report should count the order numbers by branch.
```
===================================================================
Branch OrdersCountInTable1 OrdersCountInTable2
===================================================================
100 5 2
200 10 10
300 12 11
```
How can I achieve this using a SQL query?
Following are the columns in the tables:
```
Table1:
________
- Branch
- OrderNo
Table2
__________
- Branch
- OrderNo
```
Table1 Data:
```
===============================
Branch OrderNo
===============================
100 1000
100 1001
200 2001
100 1003
```
Table2 Data:
```
===============================
Branch OrderNo
===============================
100 1000
200 2001
100 1003
```
We just want to reconcile orders from both tables!
Thanks for any valuable feedback. | One *more or less generic* way to do it:
```
SELECT COALESCE(t1.branch, t2.branch) branch,
COALESCE(t1.ordercount, 0) OrdersCountInTable1,
COALESCE(t2.ordercount, 0) OrdersCountInTable2
FROM
(
SELECT branch, COUNT(orderno) ordercount
FROM Table1
GROUP BY branch
) t1 FULL JOIN
(
SELECT branch, COUNT(orderno) ordercount
FROM Table2
GROUP BY branch
) t2
ON t1.branch = t2.branch
```
*The assumption is that the tables may not have entries for all branches; that's why `FULL JOIN` is used.* | Do it like this, using the SUM aggregate function and the UNION ALL operator:
```
SELECT Branch,
SUM( CASE tag WHEN 'table1' THEN 1 ELSE 0 END) as OrdersCountInTable1,
SUM( CASE tag WHEN 'table2' THEN 1 ELSE 0 END) as OrdersCountInTable2
FROM
(
SELECT Branch,'table1' as tag
FROM Table1
UNION ALL
SELECT Branch,'table2' as tag
FROM Table2
) z
GROUP BY Branch
ORDER BY Branch
``` | Group By from two tables | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I am using the below SQL Query to get the data from a table for the last 7 days.
```
SELECT *
FROM emp
WHERE date >= (SELECT CONVERT (VARCHAR(10), Getdate() - 6, 101))
AND date <= (SELECT CONVERT (VARCHAR(10), Getdate(), 101))
ORDER BY date
```
The table also holds last year's data.
The problem is that I am getting output with the Date column as:
```
10/11/2013
10/12/2012
10/12/2013
10/13/2012
10/13/2013
10/14/2012
10/14/2013
10/15/2012
10/15/2013
10/16/2012
10/16/2013
10/17/2012
10/17/2013
```
I don't want the 2012 rows in the output. Please suggest how to change the query to get the data for the last 7 days of this year. | Instead of converting a `date` to a `varchar` and comparing a `varchar` against a `varchar`, convert the `varchar` to a `datetime` and then compare that way.
```
SELECT
*
FROM
emp
WHERE
convert(datetime, date, 101) BETWEEN (Getdate() - 6) AND Getdate()
ORDER BY
date
``` | Why convert to varchar when processing dates? Try this instead:
```
DECLARE @Now DATETIME = GETDATE();
DECLARE @7DaysAgo DATETIME = DATEADD(day,-7,@Now);
SELECT *
FROM emp
WHERE date BETWEEN @7DaysAgo AND @Now
ORDER BY date
``` | SQL Output to get only last 7 days output while using convert on date | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
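The reason the 2012 rows slip through in the question above is that `CONVERT(VARCHAR(10), ..., 101)` produces `MM/DD/YYYY` strings, and string comparison looks at the month and day before it ever reaches the year. Plain Python shows the effect:

```python
from datetime import datetime

# Lexicographic comparison of MM/DD/YYYY strings puts the year last,
# so a date from the previous year can look "in range":
as_strings = "10/12/2012" >= "10/11/2013"

# Comparing real dates gives the intended answer:
fmt = "%m/%d/%Y"
as_dates = datetime.strptime("10/12/2012", fmt) >= datetime.strptime("10/11/2013", fmt)
```

This is why both answers convert back to a date/datetime type before comparing.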
I have two tables as follows:
```
- tblEmployee
employeeID | Name
10 | sothorn
20 | lyhong
30 | sodaly
40 | chantra
50 | sangha
60 | bruno
- tblSale
ID | employeeID | employeeSaleID
1 | 30 | 10
2 | 10 | 40
3 | 50 | 20
```
I would like to select from tblSale and join with tblEmployee, with this result:
```
1 | sodaly | sothorn
2 | sothorn | chantra
3 | sangha | lyhong
``` | Simply select all rows of the `tblSale` table, and join `tblEmployee` table twice:
```
SELECT s.ID, e1.Name, e2.Name
FROM tblSale s
INNER JOIN tblEmployee e1
ON e1.employeeID = s.employeeID
INNER JOIN tblEmployee e2
ON e2.employeeID = s.employeeSaleID
``` | Here is a sample query on your data.
<http://sqlfiddle.com/#!2/b74ca/5/0> | How to do one select two query with join table | [
"",
"mysql",
"sql",
""
] |
I need to extract everything after the last '=' (<http://www.domain.com?query=blablabla> -> blablabla) but this query returns the entire string. Where did I go wrong here:
```
SELECT RIGHT(supplier_reference, CHAR_LENGTH(supplier_reference) - SUBSTRING('=', supplier_reference))
FROM ps_product
``` | ```
select SUBSTRING_INDEX(supplier_reference,'=',-1) from ps_product;
```
Please use [this](https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_substring-index) for further reference. | Try this (it should work if there are multiple '=' characters in the string):
```
SELECT RIGHT(supplier_reference, (CHARINDEX('=',REVERSE(supplier_reference),0))-1) FROM ps_product
``` | SQL SELECT everything after a certain character | [
"",
"mysql",
"sql",
"substring",
"string-length",
""
] |
I have a table "`Customers`".
It has a column named "`CreatedDate`", which is the customer's joining date.
I want to calculate how many customers fall in the ranges 0-5 years, 6-10 years, and 11-15 years between CreatedDate and the current date, like below:
```
Years No of Customer
0-5 200
6-10 500
11-15 100
```
In detail, if a customer's CreatedDate is "5-5-2010" then it should be in the 0-5 year range from the current date.
And if CreatedDate is "5-5-2006" then it should be in the 6-10 year range from the current date. | Something like this:
```
with cte as (
select ((datediff(yy, CreatedDate, getdate()) - 1) / 5) * 5 + 1 as d
from Customers
)
select
cast(d as nvarchar(max)) + '-' + cast(d + 4 as nvarchar(max)),
count(*)
from cte
group by d
```
**`sql fiddle demo`** | Try this
```
SELECT '0-5' as [Years],COUNT(Customer) as [No of Customers] FROM dbo.Customers WHERE DATEDIFF(YY,CreatedDate,GETDATE()) <=5
UNION
SELECT '6-10' as [Years],COUNT(Customer) as [No of Customers] FROM dbo.Customers WHERE DATEDIFF(YY,CreatedDate,GETDATE()) >=5 AND DATEDIFF(YY,CreatedDate,GETDATE()) <=10
UNION
SELECT '11-15' as [Years],COUNT(Customer) as [No of Customers] FROM dbo.Customers WHERE DATEDIFF(YY,CreatedDate,GETDATE()) >=10 AND DATEDIFF(YY,CreatedDate,GETDATE()) <=15
``` | Sql query to get number of customers according to joining date | [
"",
"sql",
"sql-server",
""
] |
I want to select data from one table (T1, in DB1) on one server (Data.Old.S1) into another table (T2, in DB2) on another server (Data.Latest.S2). How can I do this?
Please note the way the servers are named. The query should take care of that too. That is,
SQL server should not be confused about fully qualified table names. For example - this could confuse SQL server - Data.Old.S1.DB1.dbo.T1.
I also want "mapping", e.g. Col1 of T1 should go to Col18 of T2, etc. | Create a [linked server](http://msdn.microsoft.com/en-us/library/aa560998.aspx), then use an [OPENQUERY](http://technet.microsoft.com/en-us/library/ms188427.aspx) SQL statement. | Use Sql Server Management Studio's Import feature.
1. right click on database in the object explorer and select import
2. select your source database
3. select your target database
4. choose the option to 'specify custom query' and just select your data from T1, in DB1
5. choose your destination table in the destination database i.e. T2
6. execute the import | Select into from one sql server into another? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
There are two tables `JOB` and `WORKER`.
JOB table
* JOBID
WORKER table
* WORKERID
* JOBID (FK from JOB table)
* VACATION ('Y' or 'N')
With these two tables, I want to find a list of jobs that no workers are now assigned.
I made the following query, but it seems inefficient and verbose because of the aggregate function `SUM` and the `CASE WHEN` expression.
Any query better than this?
```
SELECT
SUBQUERY.JOBID
FROM
(
SELECT
JOBID,
SUM
(
CASE WHEN
VACATION = 'N'
THEN 1
ELSE 0
END
) NUM_WORKERS
FROM
JOB
LEFT JOIN
WORKER
ON
JOB.JOBID = WORKER.JOBID
GROUP BY
JOB.JOBID
) SUBQUERY
WHERE
SUBQUERY.NUM_WORKERS = 0
``` | Select \*
From job
Where jobid not in( select distinct jobid from worker)
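The same anti-join can be exercised end-to-end in SQLite (table contents invented for illustration; note that `NOT IN` can behave surprisingly if the subquery returns NULLs, and that this version ignores the `VACATION` flag):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE job(jobid INTEGER);
CREATE TABLE worker(workerid INTEGER, jobid INTEGER, vacation TEXT);
INSERT INTO job VALUES (1), (2), (3);
INSERT INTO worker VALUES (10, 1, 'N'), (11, 3, 'Y');
""")
# Jobs with no worker rows at all -- only job 2 here.
unassigned = con.execute(
    "SELECT jobid FROM job "
    "WHERE jobid NOT IN (SELECT DISTINCT jobid FROM worker)"
).fetchall()
print(unassigned)  # [(2,)]
```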
That's it. | Have a look here, hope it helps:
```
SELECT JOBID
FROM JOB
WHERE JOBID NOT IN (
SELECT j.JOBID
FROM JOB j
JOIN WORKER w ON j.JOBID = w.JOBID
WHERE w.VACATION = 'N'
GROUP BY j.JOBID
)
```
Sqlfiddle example: [**EXAMPLE**](http://sqlfiddle.com/#!2/81aa3/4) | Select items that do not meet conditions | [
"",
"sql",
"t-sql",
""
] |
I need to get a list of Customers who have never had an Order Exported
I am passing in a list of CustomerNumbers, grabbing them, joining on Orders, and then grouping. I feel like I am close, but I'm not sure how to get just the Customers where none of the Orders.Exported values is set to 1.
Here is what I have so far:
```
SELECT Customers.CustomerID,
Orders.Exported,
Count(Orders.OrderID) AS OrderCount
FROM Customers WITH (Nolock)
JOIN Orders ON Customers.ManufacturerID = Orders.ManufacturerID
AND Customers.CustomerNumber = Orders.CustomerNumber
WHERE Customers.CustomerNumber IN (
SELECT *
FROM dbo.Split(REPLACE(@CustomerNumbers,'\',''),','))
AND Customers.ManufacturerID=@ManufacturerID
AND Customers.Source = 'ipad'
GROUP BY Customers.CustomerID,
Orders.Exported
```
This almost gets me what I need, my results for this are:
```
CustomerID Exported OrderCount
375408 NULL 1
375408 1 5
375412 1 2
376892 NULL 1
```
So out of this list I would only want 376892, because they have never had an Order exported before. | You could use `Having Min(IsNull(Orders.Exported,0))` *with* a `Left Join` and *remove grouping by* `Orders.Exported` to filter out customers who have exported orders before.
Logically your count will always be 0 and so you don't need to count.
```
SELECT Customers.CustomerID, Min(IsNull(Orders.Exported,0)) Exported, Count(Orders.OrderID) As OrderCount
FROM Customers With (Nolock) LEFT JOIN Orders
ON Customers.ManufacturerID = Orders.ManufacturerID AND
Customers.CustomerNumber = Orders.CustomerNumber
WHERE Customers.CustomerNumber IN (
SELECT colName FROM dbo.Split(REPLACE(@CustomerNumbers,'\',''),',')) AND
Customers.ManufacturerID=@ManufacturerID AND Customers.Source = 'ipad'
GROUP BY Customers.CustomerID
HAVING Min(IsNull(Orders.Exported,0)) = 0
``` | ```
WHERE Customers.CustomerNumber IN (SELECT * FROM dbo.Split(REPLACE(@CustomerNumbers,'\',''),','))
AND Customers.ManufacturerID=@ManufacturerID
AND Customers.Source = 'ipad'
AND Orders.Exported is NuLL
``` | find Customers where none of the Orders have been Exported | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table with some numerical values (diameters)
18
21
27
34
42
48
60
76
89
114
etc...
How can I select the nearest value greater than or equal to a number entered, for example, in a text box?
25 to select 27, 100 to select 114, 48 to select 48.
I used the following code, but it is not behaving correctly: it selects the closest value overall, not the nearest value that is greater than or equal to the input:
```
strSQL = "SELECT * " & "FROM [materials] WHERE ABS([dia] - " & Me.TextBox1.Text & ") = (SELECT MIN(ABS([dia] - " & Me.TextBox1.Text & ")) FROM [materials])"
```
This code is inside a user form in Excel that is connected to a DAO database.
Thank you! | Lets say you were using SQL Server, you could try something like
```
strSQL = "SELECT TOP 1 * " & "FROM [materials] WHERE [dia] >= " & Me.TextBox1.Text & " ORDER BY dia ASC"
```
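The same idea can be tried in SQLite, which — like MySQL — uses `LIMIT` instead of `TOP` (sample diameters taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE materials(dia INTEGER)")
con.executemany("INSERT INTO materials VALUES (?)",
                [(18,), (21,), (27,), (34,), (42,), (48,),
                 (60,), (76,), (89,), (114,)])

def nearest_at_least(n):
    # Smallest diameter >= n, i.e. TOP 1 ... ORDER BY dia ASC in T-SQL terms.
    row = con.execute(
        "SELECT dia FROM materials WHERE dia >= ? ORDER BY dia LIMIT 1",
        (n,)).fetchone()
    return row[0] if row else None

print(nearest_at_least(25), nearest_at_least(100), nearest_at_least(48))  # 27 114 48
```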
If it were MySQL, you would have to use [LIMIT](http://dev.mysql.com/doc/refman/5.5/en/select.html)
> The LIMIT clause can be used to constrain the number of rows returned
> by the SELECT statement. | ```
strSQL = "SELECT TOP 1 * FROM materials " & _
"WHERE dia >= " & Me.TextBox1.Text & " " & _
"ORDER BY dia"
``` | max nearest values sql | [
"",
"sql",
"vba",
""
] |
I have created two indexes with the same name on different materialized views in TimesTen, and now I can drop neither of them. If I try to, I get the following error message:
```
2222: Index name is not unique
```
Could you please advise me how I could get rid of one (or even both) of these indexes?
Thank you! | Oracle doesn't permit the creation of index with the same name in the same schema. Are your indexes in seperate schemas? if Yes, then please specify your schema.index\_name while deletion.To check the schemas of index , you can query all\_indexes.
select \* from all\_indexes where index\_name = 'put your index name here';
Then you can log in to one of the schemas and run delete schema\_name.index\_name. It must be a privilege issue hence you are getting an error | To drop indexes for Materialized Views [or tables] of the same name in two different schemas, you need to either:
1. Log into the first schema and drop the MV index
Log into the second schema and drop the MV index
2. As the instance administrator [the OS user who you installed TimesTen as]
and qualify the index to be dropped by the schema. eg
ttIsql yourDbDSN
drop schema1.index;
drop schema2.index; | 2222: Index name is not unique (TimesTen) | [
"",
"sql",
"database",
"oracle",
"indexing",
"timesten",
""
] |
i have a table that have the following columns and sample data.
(this data is the output of a query, so I have to use that query as a subquery in the FROM clause)
```
Type| Time | Count
--------------------------------
1 |2013-05-09 12:00:00 | 71
2 |2013-05-09 12:00:00 | 48
3 |2013-05-09 12:00:00 | 10
3 |2013-05-09 13:00:00 | 4
2 |2013-05-16 13:00:00 | 30
1 |2013-05-16 13:00:00 | 31
1 |2013-05-16 14:00:00 | 4
3 |2013-05-16 14:00:00 | 5
```
I need to group data based on time so my output should look like this
```
AlarmType1 | AlarmType2 |AlarmType3| AlarmTime
--------------------------------
71 | 48 | 10 | 2013-05-09 12:00:00
31 | 30 | 4 | 2013-05-09 13:00:00
4 | 0 | 5 | 2013-05-09 14:00:00
```
i have tried this query
```
SELECT
SUM(IF (AlarmType = '1',1,0)) as AlarmType1,
SUM(IF (AlarmType = '2',1,0)) as AlarmType2,
SUM(IF (AlarmType = '3',1,0)) as AlarmType3,
AlarmHour
FROM 'Table1'
GROUP BY Time
```
but this did not work, as I am missing the Count column in my query and need to incorporate it. | You need SUM(Count), not SUM(1):
```
SELECT
SUM(IF (AlarmType = '1',Count,0)) as AlarmType1,
SUM(IF (AlarmType = '2',Count,0)) as AlarmType2,
SUM(IF (AlarmType = '3',Count,0)) as AlarmType3,
AlarmHour
FROM 'Table1'
GROUP BY Time
``` | Your `table1` is your subquery right? Well it has to have its alias so it will be something like that
```
SELECT
SUM(IF (AlarmType = '1',1,0)) as AlarmType1,
SUM(IF (AlarmType = '2',1,0)) as AlarmType2,
SUM(IF (AlarmType = '3',1,0)) as AlarmType3,
AlarmHour
FROM (subquery giving table1) as subTable
GROUP BY subtable.Time
```
**EDIT:**
As for summing, why dont you just use multiple `group by`? EG. `GROUP BY Type, Time` and show type for every group and type `SELECT Type,Time,Sum(Count) ...`?
OK, I get now what you are trying to achieve. If you want to "add columns" you will have to use `JOINS`. So create a single query for every column you need (every AlarmType count) and join them on the common value, which is the Time of the alarm. | Multiple Count based on value MYSQL | [
"",
"mysql",
"sql",
""
] |
I have a simple application in which users submit data into a table.
There are currently 3 fields the user can pass values to at the moment. I'm trying to find a solution where the number of description columns can vary, depending on the description list created by the user.
So far I have considered:
1. having the user\_input table have many nullable description
fields, say from 1-15 and thus limiting the number of description
fields the user can define to 15. This solution is very easy to
query and maintain, however is limited to a set number of fields.
(is this a viable and acceptable solution in general?)
2. creating a table where each row would correspond to 1 description
entered. This would allow the user to create unlimited number of
description fields, however storing each all inputs would instead of
1 row now take n-rows, where n is the count of descriptions linked
to the current description\_list. Users are free to choose the amount
of columns, however not so easy to query and maintain.
My current table is something like this:
```
CREATE TABLE `user_input` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`description_list_id` int(11) NOT NULL,
`description1` int(11) NOT NULL,
`description2` int(11) NOT NULL,
`description3` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
Are there any other solutions? | Definitely option #2. Normalizaing is always the best option in a scenario like this. You are correct that it is more work, but you are overcoming the inevitable issue when you will need more than 15 descriptions. | The second solution is preferable in terms of flexibility. If tomorrow you need to add more description fields with the first solution you will need to modify the table and the code to manage it.
The second solution requires a bit more work now, but it will then handle 2 descriptions as easily as 200.
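A minimal SQLite sketch of that second, normalized design — the table and column names (`user_input_description`, `position`) are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user_input(id INTEGER PRIMARY KEY, user_id INTEGER);
CREATE TABLE user_input_description(
    input_id INTEGER REFERENCES user_input(id),
    position INTEGER,
    description TEXT
);
INSERT INTO user_input VALUES (1, 42);
INSERT INTO user_input_description VALUES
    (1, 1, 'red'), (1, 2, 'large'), (1, 3, 'fragile');
""")
# Any number of descriptions per input, one row each.
descriptions = [r[0] for r in con.execute(
    "SELECT description FROM user_input_description "
    "WHERE input_id = 1 ORDER BY position")]
print(descriptions)  # ['red', 'large', 'fragile']
```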
The first approach is more a quick and dirty solution for a small problem, the second also a good exercise if you have time to try something new. | Database design, variable number of columns | [
"",
"mysql",
"sql",
"database-design",
""
] |
I need to select one of the table column value together with some constant variable. For example,
SQL Table :
```
Key KeyName
-------------
1 Normal
2 Basic
3 Super
```
Constant values are R1, R2, R3, R4.
The output result as single column:
```
Normal R1
Normal R2
Normal R3
Normal R4
Basic R1
Basic R2
.
.
.
Super R4
```
Appreciate any advice. Thanks. | Try this:
```
SELECT
T.KeyName,
TT.ConstValues
FROM Tbl T
CROSS JOIN
(VALUES ('R1'), ('R2'), ('R3'), ('R4')) TT(ConstValues)
```
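SQLite doesn't accept the `(VALUES ...) TT(ConstValues)` aliasing used above, but the same cross join can be sketched with a `UNION ALL` derived table (data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl(KeyName TEXT)")
con.executemany("INSERT INTO tbl VALUES (?)",
                [("Normal",), ("Basic",), ("Super",)])
# Every KeyName paired with every constant: 3 x 4 = 12 rows.
rows = con.execute("""
    SELECT t.KeyName || ' ' || c.v
    FROM tbl t
    CROSS JOIN (SELECT 'R1' AS v UNION ALL SELECT 'R2'
                UNION ALL SELECT 'R3' UNION ALL SELECT 'R4') c
    ORDER BY t.KeyName, c.v
""").fetchall()
print(len(rows), rows[0])  # 12 ('Basic R1',)
```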
**[SQL FIDDLE DEMO](http://sqlfiddle.com/#!6/d8e19/1)** | You need to represent the constant values as a result-set, then you can get the cartesian product by selecting from both of them. For example:
```
WITH ConstantValues AS
(
SELECT 'R1' AS ConstantValue
UNION ALL
SELECT 'R2'
UNION ALL
SELECT 'R3'
UNION ALL
SELECT 'R4'
)
SELECT t.KeyName, c.ConstantValue
FROM SqlTable t, ConstantValues c;
```
If you want each pair to be represented into a single result, then you can use `SELECT t.KeyName + ' ' + c.ConstantValue AS ResultColumn` instead. | Select query with constant variable | [
"",
"sql",
"sql-server",
""
] |
I have `Book` and `Author` tables. `Book` has many `Author`s (I know it should be many-to-many; it's simplified for the sake of this example).
How do I select all books that have been written by authors: X **and** by Y in one sql query?
**EDIT**
Number of authors can be variable - 3, 5 or more authors.
I can't figure it out now (I've tried to do `JOIN`s and sub-queries).
`SELECT * FROM book ...`? | Try this:
```
SELECT
B.Name
FROM Books B
JOIN Authors A
ON B.AuthorID = A.ID
WHERE A.Name IN ('X', 'Y')
GROUP BY B.Name
HAVING COUNT(DISTINCT A.ID) = 2
``` | You can just double join the authors table.
```
SELECT Book.* from Book
JOIN Author author1
ON author1.book_id = Book.id AND author1.author_name = 'Some Name'
JOIN Author author2
ON author2.book_id = Book.id AND author1.author_name = 'Some Other Name'
GROUP BY Book.id
```
The JOINs ensure that only books with Both authors are returned, and the GROUP BY just makes the result set only contain unique entries.
It's worth noting by the way that this query will bring back books that have *at least* the two authors specified. For example, if you want books only by Smith and Jones, and not by Smith, Jones and Martin, this query will not do that. This query will pull back books that have *at least* Smith and Jones. | Select books having specified authors | [
"",
"sql",
""
] |
```
select packageid,status+' Date : '+UpdatedOn from [Shipment_Package]
```
The error below appears when executing the above code in SQL Server. The type of `UpdatedOn` is `DateTime` and `status` is a `varchar`. We want to concatenate the status, the 'Date :' label, and UpdatedOn.
error:
> Conversion failed when converting date and/or time from character
> string. | You need to convert `UpdatedOn` to `varchar` something like this:
```
select packageid, status + ' Date : ' + CAST(UpdatedOn AS VARCHAR(10))
from [Shipment_Package];
```
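The same concatenation can be mimicked in SQLite, where `strftime` plays the role that `CAST`/`CONVERT` plays in T-SQL (the 'Shipped' status value is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Format the datetime explicitly before concatenating it to the string.
row = con.execute(
    "SELECT 'Shipped' || ' Date : ' || "
    "strftime('%Y-%m-%d', '2013-10-23 08:10:00')").fetchone()
print(row[0])  # Shipped Date : 2013-10-23
```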
You might also need to use [`CONVERT`](http://msdn.microsoft.com/en-us/library/ms187928.aspx) if you want to format the datetime in a specific way. | To achieve what you need, you would need to CAST the date.
Example would be;
Your current, incorrect code:
```
select packageid,status+' Date : '+UpdatedOn from [Shipment_Package]
```
Suggested solution:
```
select packageid,status + ' Date : ' + CAST(UpdatedOn AS VARCHAR(20))
from [Shipment_Package]
```
[MSDN article for CAST / CONVERT](http://msdn.microsoft.com/en-us/library/ms187928.aspx)
Hope this helps. | SQL Server: How to concatenate string constant with date? | [
"",
"sql",
"sql-server",
"sql-server-2008r2-express",
""
] |
I have a table in a database that has 9 columns containing the same sort of data; these values are **allowed to be null**. I need to select each of the non-null values into a single column of values that don't care about the identity of the row from which they originated.
So, for a table that looks like this:
```
+---------+------+--------+------+
| Id | I1 | I2 | I3 |
+---------+------+--------+------+
| 1 | x1 | x2 | x7 |
| 2 | x3 | null | x8 |
| 3 | null | null | null|
| 4 | x4 | x5 | null|
| 5 | null | x6 | x9 |
+---------+------+--------+------+
```
I wish to select each of the values prefixed with x into a single column. My resultant data should look like the following table. The order needs to be preserved, so the first column value from the first row should be at the top and the last column value from the last row at the bottom:
```
+-------+
| value |
+-------+
| x1 |
| x2 |
| x7 |
| x3 |
| x8 |
| x4 |
| x5 |
| x6 |
| x9 |
+-------+
```
I am using **SQL Server 2008 R2**. Is there a better technique for achieving this than selecting the value of each column in turn, from each row, and inserting the non-null values into the results? | You can use the UNPIVOT function to get the final result:
```
select value
from yourtable
unpivot
(
value
for col in (I1, I2, I3)
) un
order by id, col;
```
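`UNPIVOT` is T-SQL-specific, but the result can be reproduced portably — here in SQLite via `UNION ALL`, using the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(id INTEGER, i1 TEXT, i2 TEXT, i3 TEXT);
INSERT INTO t VALUES
    (1, 'x1', 'x2', 'x7'),
    (2, 'x3', NULL, 'x8'),
    (3, NULL, NULL, NULL),
    (4, 'x4', 'x5', NULL),
    (5, NULL, 'x6', 'x9');
""")
# One branch per source column; col preserves within-row column order.
vals = [r[0] for r in con.execute("""
    SELECT value FROM (
        SELECT id, 1 AS col, i1 AS value FROM t
        UNION ALL SELECT id, 2, i2 FROM t
        UNION ALL SELECT id, 3, i3 FROM t)
    WHERE value IS NOT NULL
    ORDER BY id, col
""")]
print(vals)  # ['x1', 'x2', 'x7', 'x3', 'x8', 'x4', 'x5', 'x6', 'x9']
```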
Since you are using SQL Server 2008+, then you can also use CROSS APPLY with the VALUES clause to unpivot the columns:
```
select value
from yourtable
cross apply
(
values
('I1', I1),
('I2', I2),
('I3', I3)
) c(col, value)
where value is not null
order by id, col
``` | ```
SELECT value FROM (
SELECT ID, 1 AS col, I1 AS [value] FROM t
UNION ALL SELECT ID, 2, I2 FROM t
UNION ALL SELECT ID, 3, I3 FROM t
) AS t WHERE value IS NOT NULL ORDER BY ID, col;
``` | Select values from multiple columns into single column | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"unpivot",
""
] |
I'm currently working on a query which takes all the relevant data AND stock levels within a query, rather than looping over the stock levels separately. So far I have managed to obtain those stock levels by doing this:
```
<cfquery datasource="datasource" name="get">
Select *,
(
Select IsNull(Sum(stocklevel))
From itemstock
Where item_id = itemstock_itemid
) As stock_count
From items
Where item_id = #URL.item_id#
</cfquery>
```
Now the term I tried adding after the "Where" clause is this:
```
And get.stock_count > 0
```
This, however, just throws an error saying it is not a valid field (which makes sense). I'm wondering: how do I reference back to that SUM total? | To reference an alias, use a derived table:
```
select * from
(Select *,
(
Select IsNull(Sum(stocklevel), 0)
From itemstock
Where item_id = itemstock_itemid
) As stock_count
From items
Where item_id = <cfqueryparam value="#URL.item_id#">
) derived_table
where stock_count > 0
```
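A runnable reduction of the derived-table trick in SQLite (schema and stock values invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items(item_id INTEGER);
CREATE TABLE itemstock(itemstock_itemid INTEGER, stocklevel INTEGER);
INSERT INTO items VALUES (1), (2);
INSERT INTO itemstock VALUES (1, 5), (1, 2);
""")
# The stock_count alias only becomes visible to WHERE once it is wrapped
# in a derived table.
in_stock = con.execute("""
    SELECT * FROM (
        SELECT item_id,
               (SELECT IFNULL(SUM(stocklevel), 0) FROM itemstock
                 WHERE itemstock_itemid = items.item_id) AS stock_count
        FROM items) d
    WHERE stock_count > 0
""").fetchall()
print(in_stock)  # [(1, 7)]
```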
Note that my query param tag is not complete. It needs a datatype, but you can do that. | You cannot reference 'get.' inside the query. I think if you just reference stock\_count you should be fine? | SQL reference to alias name in the WHERE clause | [
"",
"mysql",
"sql",
"coldfusion",
"coldfusion-10",
""
] |
I am working on the Magento platform. I am facing a problem inserting values into a specific field: my queries run fine, but one specific column does not work in any query. I tried my best but couldn't find out why. When I change the column type from int to varchar, it works. This is my table structure:
```
CREATE TABLE `followupemails_emaillogs` (
`id` int(8) NOT NULL AUTO_INCREMENT,
`schedule_time` datetime DEFAULT NULL,
`sent_time` datetime DEFAULT NULL,
`email_status` varchar(100) DEFAULT NULL,
`client_name` varchar(250) DEFAULT NULL,
`client_email` varchar(250) DEFAULT NULL,
`followupemails_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=30 DEFAULT CHARSET=latin1.
```
The "followupemails\_id" column does not work in insert or update queries. This is one update query, where a record with id 29 exists: `UPDATE followupemails_emaillogs SET followupemails_id=5 WHERE id =29`.
This is the insertion query: `INSERT INTO followupemails_emaillogs SET followupemails_id=4, schedule_time='2013-10-23 08:10:00', email_status='pending', client_name='ayaz ali'`. It works fine on SQL Fiddle but not in my SQLyog; what could be the issue? At last I found a query that works perfectly:
`INSERT INTO followupemails_emaillogs (followupemails_id,schedule_time,email_status,client_name,client_email) VALUES (26,'2013-10-23 08:10:00','pending','ayaz ali','mamhmood@yahoo.com');`
Can anyone tell me why the SET-style query does not work while the second query works perfectly, so that I can accept an answer? Thanks for all your help. | The whole query is OK:
```
CREATE TABLE `followupemails_emaillogs` (
`id` int NOT NULL AUTO_INCREMENT,
`schedule_time` datetime DEFAULT NULL,
`sent_time` datetime DEFAULT NULL,
`email_status` varchar(100) DEFAULT NULL,
`client_name` varchar(250) DEFAULT NULL,
`client_email` varchar(250) DEFAULT NULL,
`followupemails_id` int DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=30 DEFAULT CHARSET=latin1.
```
But at the end there is a dot, which is actually the error, so remove the dot and create the table:
```
latin1.
```
So remove the dot sign. Also, for the `id` field use the line below; by default fields are NULL, so you don't need to specify DEFAULT NULL:
```
id int (8) AUTO_INCREMENT
CREATE TABLE `followupemails_emaillogs` (
`id` int (8) AUTO_INCREMENT,
`schedule_time` datetime DEFAULT NULL,
`sent_time` datetime DEFAULT NULL,
`email_status` varchar(100),
`client_name` varchar(250),
`client_email` varchar(250),
`followupemails_id` int,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=30 DEFAULT CHARSET=latin1
``` | Try like this
To Create,
```
CREATE TABLE followupemails_emaillogs (
id int(8) NOT NULL AUTO_INCREMENT PRIMARY KEY,
schedule_time datetime DEFAULT NULL,
sent_time datetime DEFAULT NULL,
email_status varchar(100) DEFAULT NULL,
client_name varchar(250) DEFAULT NULL,
client_email varchar(250) DEFAULT NULL,
followupemails_i int(11) DEFAULT NULL,
UNIQUE (id)
)
```
To Insert,
```
INSERT INTO followupemails_emaillogs (schedule_time,sent_time,email_status,client_name,client_email,followupemails_i)
VALUES
('2012-05-05','2012-05-06',"sent","sagar","sagar@xxxx.com",2)
``` | Field not inserting or updating , int type in sql | [
"",
"mysql",
"sql",
"magento",
""
] |
I'm sure this is simple SQL, but I have a table which contains multiple records for each of X (currently 3) levels. I basically want to copy this to csv files, one for each level.
I've got the SQL which selects, and I can copy that out. I can also do a select to get the list of unique levels in the file. What I can't work out is how to get FoxPro to loop over the unique levels, provide a filename, and save only the relevant records.
I'm using scan to loop over the unique records, but clearly what I'm doing with that is then wrong.
```
* identify the different LPG report levels
SELECT STREXTRACT(ALLTRIM(group),"|","|",3) as LPG_level FROM &lcFile GROUP BY LPG_level INTO CURSOR levels
TEXT to lcSql1 noshow textmerge pretext 15
SELECT
LEFT(ALLTRIM(group),ATC("|",ALLTRIM(group))-1) as Sim,
STREXTRACT(ALLTRIM(group),"|","|",1) as Company,
ENDTEXT
TEXT to lcSql2 noshow textmerge pretext 15
time,
SUM(as) as Asset_Share_Stressed,
SUM(as_us) as Asset_Share_Unstressed
FROM <<lcFile>>
GROUP BY Sim,
Company,
Fund,
LPG_level,
Output_group,
time
ORDER BY sim asc,
output_group asc
INTO CURSOR bob
ENDTEXT
TEXT to lcSqlgroup2 noshow textmerge pretext 15
RIGHT(ALLTRIM(group),LEN(ALLTRIM(group)) - ATC("|",ALLTRIM(group),4)) as Output_group,
ENDTEXT
TEXT to lcSql_fund2 noshow textmerge pretext 15
STREXTRACT(ALLTRIM(group),"|","|",2) as Fund,
ENDTEXT
TEXT to lcSql_level noshow textmerge pretext 15
STREXTRACT(ALLTRIM(group),"|","|",3) as LPG_level,
ENDTEXT
&lcSql1 + &lcSql_fund2 + &lcSql_level + &lcSqlgroup2 + &lcSql2
SELECT levels
SCAN
COPY TO output_path + lcFilename + levels.LPG_level for bob.LPG_Level = levels.LPG_Level
endscan
``` | I don't know why you have all the text/endtext. You can just build your SQL-Select as one long statement... just use a semi-colon at the end of each line to indicate that the statement continues on the following line (unlike in C# that ; indicates end of statement)...
Anyhow, this simplified should do what you have
```
SELECT ;
LEFT(ALLTRIM(group),ATC("|",ALLTRIM(group))-1) as Sim, ;
STREXTRACT(ALLTRIM(group),"|","|",1) as Company, ;
STREXTRACT(ALLTRIM(group),"|","|",2) as Fund, ;
STREXTRACT(ALLTRIM(group),"|","|",3) as LPG_level, ;
RIGHT(ALLTRIM(group),LEN(ALLTRIM(group)) - ATC("|",ALLTRIM(group),4)) as Output_group, ;
time, ;
SUM(as) as Asset_Share_Stressed, ;
SUM(as_us) as Asset_Share_Unstressed ;
FROM ;
( lcFile ) ;
GROUP BY ;
Sim, ;
Company, ;
Fund, ;
LPG_level, ;
Output_group, ;
time ;
ORDER BY ;
sim asc,;
output_group ASC ;
INTO ;
CURSOR bob
SELECT distinct LPG_Level ;
FROM Bob ;
INTO CURSOR C_TmpLevels
SELECT C_TmpLevels
SCAN
*/ You might have to be careful if the LPG_Level has spaces or special characters
*/ that might cause problems in file name creation, but at your discretion.
lcOutputFile = output_path + "LPG" + ALLTRIM( C_TmpLevels.LPG_Level ) + ".csv"
SELECT Bob
COPY TO ( lcOutputFile ) ;
FOR LPG_Level = C_TmpLevels.LPG_Level ;
TYPE csv
ENDSCAN
```
In this scenario, I just built your entire SQL query and ran it... From THAT result, I get distinct LPG\_Level so it exactly matches the structure of the result set you have to work with. Notice in the "FROM" clause, I have the (lcFile) in parenthesis. This tells VFP to look to the variable name for the table name, not the actual table named "lcFile" as a literal. Similarly when I'm copying OUT to the CSV file... copy to (lcOutputFile).
Macros "&" can be powerful and useful, but can also bite you too especially if a file name path has a space in it... you are toast in that case... Try to get used to using parens in cases like this. | Try something like:
```
FOR curlevel = 1 TO numlevels
outfile = 'file' + ALLTRIM(STR(curlevel)) + '.csv'
TEXT TO contents
blah blah
ENDTEXT
= STRTOFILE(contents, outfile)
ENDFOR
```
You'll have to adjust things, but that's a technique to use. | Splitting a foxpro table | [
"",
"sql",
"foxpro",
""
] |
I am having an issue with using inner selects where the output isn't quite right. Any help would be greatly appreciated.
Here is my [SQLFiddle](http://sqlfiddle.com/#!2/19f3d/1) example.
Here is the query I am using.
```
SELECT
t.event as event_date,
count((
SELECT
count(s.id)
FROM mytable s
WHERE s.type = 2 AND s.event = event_date
)) AS type_count,
count((
SELECT
count(s.id)
FROM mytable s
WHERE s.type != 3 AND s.event = event_date
)) as non_type_count
FROM mytable t
WHERE t.event >= '2013-10-01' AND t.event <= '2013-10-08'
GROUP BY t.event
```
My current output:
```
October, 01 2013 00:00:00+0000 / 2 / 2
October, 03 2013 00:00:00+0000 / 1 / 1
The output I am trying to get:
October, 01 2013 00:00:00+0000 / 1 / 2
October, 03 2013 00:00:00+0000 / 0 / 0
```
So if you look at the query I am trying to use, I am basically
querying the table over a date range, then using inner selects
to get the rows that match each type.
Thanks for your help in advance. | You can simplify a bit and exclude the sub-selects by using conditional aggregation:
```
SELECT
t.event as event_date,
SUM(t.type = 2) AS type_count,
SUM(t.type != 3) AS non_type_count
FROM mytable t
WHERE t.event >= '2013-10-01' AND t.event <= '2013-10-08'
GROUP BY t.event
```
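SQLite evaluates comparisons to 1/0 just like MySQL, so the conditional-aggregation form can be verified directly (rows invented to match the question's expected output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable(id INTEGER, type INTEGER, event TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?, ?)", [
    (1, 2, '2013-10-01'), (2, 3, '2013-10-01'), (3, 1, '2013-10-01'),
    (4, 3, '2013-10-03'),
])
# SUM over a boolean expression counts the rows where it is true.
rows = con.execute("""
    SELECT event, SUM(type = 2), SUM(type != 3)
    FROM mytable
    GROUP BY event
    ORDER BY event
""").fetchall()
print(rows)  # [('2013-10-01', 1, 2), ('2013-10-03', 0, 0)]
```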
Demo: [SQL Fiddle](http://sqlfiddle.com/#!2/19f3d/7/0)
This works in MySQL because an expression returns 1 or 0 for true/false. In other databases you can achieve the same via `SUM(CASE WHEN type=2 THEN 1 END)` | Try this way:
```
SELECT
t.event as event_date,
SUM( case when type = 2 then 1 else 0 end )
AS type_count,
SUM( case when type != 3 then 1 else 0 end )
as non_type_count
FROM mytable t
WHERE t.event >= '2013-10-01' AND t.event <= '2013-10-08'
GROUP BY t.event
```
demo: --> <http://sqlfiddle.com/#!2/19f3d/13> | Query counts using inner selects | [
"",
"mysql",
"sql",
""
] |
I want to create a user and only allow them to use select statements on the `cameracircle` database. So I have the following code:
```
CREATE USER 'hoeym'@'localhost' IDENTIFIED BY 'password';
CREATE DATABASE cameracircle;
GRANT SELECT ON cameracircle TO 'hoeym'@'localhost';
```
But the phpmyadmin doesn't like that. If I run this it says there is an error cause I don't have a databases selected, and if I add in `USE cameracircle;` before the `GRANT` statement it says that there is no table inside the database with the same name as the database. What have I done wrong? | Solved it with
```
GRANT SELECT ON cameracircle.* TO 'hoeym'@'localhost';
``` | Before you issue a GRANT statement, check that the
> derby.database.sqlAuthorization
property is set to true. The derby.database.sqlAuthorization property enables the SQL Authorization mode. | SQL Grant SELECT | [
"",
"mysql",
"sql",
"phpmyadmin",
"easyphp",
""
] |
I'm a bit stumped how I might go about this.
I have a very basic query, that currently returns sales for each product, by year and month.
It is grouping by year/month, and summing up the quantity.
This returns one row for each product/year/month combo where there was a sale.
If there was no sale for a month, then there is no data.
I'd like my query to return one row of data for each product for each year/month in my date range, regardless of whether there was actually an order.
If there was no order, then I can return 0 for that product/year/month.
Below is my example query.
```
Declare @DateFrom datetime, @DateTo Datetime
Set @DateFrom = '2012-01-01'
set @DateTo = '2013-12-31'
select
Convert(CHAR(4),order_header.oh_datetime,120) + '/' + Convert(CHAR(2),order_header.oh_datetime,110) As YearMonth,
variant_detail.vad_variant_code,
sum(order_line_item.oli_qty_required) as 'TotalQty'
From
variant_Detail
join order_line_item on order_line_item.oli_vad_id = variant_detail.vad_id
join order_header on order_header.oh_id = order_line_item.oli_oh_id
Where
(order_header.oh_datetime between @DateFrom and @DateTo)
Group By
Convert(CHAR(4),order_header.oh_datetime,120) + '/' + Convert(CHAR(2),order_header.oh_datetime,110),
variant_detail.vad_variant_code
``` | Thank your for your suggestions.
I managed to get this working using another method.
```
Declare @DateFrom datetime, @DateTo Datetime
Set @DateFrom = '2012-01-01'
set @DateTo = '2013-12-31'
select
YearMonthTbl.YearMonth,
orders.vad_variant_code,
orders.qty
From
(SELECT Convert(CHAR(4),DATEADD(MONTH, x.number, @DateFrom),120) + '/' + Convert(CHAR(2),DATEADD(MONTH, x.number, @DateFrom),110) As YearMonth
FROM master.dbo.spt_values x
WHERE x.type = 'P'
AND x.number <= DATEDIFF(MONTH, @DateFrom, @DateTo)) YearMonthTbl
left join
(select variant_Detail.vad_variant_code,
sum(order_line_item.oli_qty_required) as 'Qty',
Convert(CHAR(4),order_header.oh_datetime,120) + '/' + Convert(CHAR(2),order_header.oh_datetime,110) As 'YearMonth'
FROM order_line_item
join variant_detail on variant_detail.vad_id = order_line_item.oli_vad_id
join order_header on order_header.oh_id = order_line_item.oli_oh_id
Where
(order_header.oh_datetime between @DateFrom and @DateTo)
GROUP BY variant_Detail.vad_variant_code,
Convert(CHAR(4),order_header.oh_datetime,120) + '/' + Convert(CHAR(2),order_header.oh_datetime,110)
) as Orders on Orders.YearMonth = YearMonthTbl.YearMonth
``` | You can generate this by using CTE.
You will find information on this article :
<http://blog.lysender.com/2010/11/sql-server-generating-date-range-with-cte/>
Especially this piece of code :
```
WITH CTE AS
(
SELECT @start_date AS cte_start_date
UNION ALL
SELECT DATEADD(MONTH, 1, cte_start_date)
FROM CTE
WHERE DATEADD(MONTH, 1, cte_start_date) <= @end_date
)
SELECT *
FROM CTE
``` | Return All Months & Years Between Date Range - SQL | [
"",
"sql",
"sql-server",
"date",
""
] |
I need to calculate the local time from yyyymmddhhmmss and return it as yyyymmddhhmmss. I have tried the below, it is working but I am not able to get rid of the month name.
```
Declare @VarCharDate varchar(max)
Declare @VarCharDate1 varchar(max)
Declare @VarCharDate2 varchar(max)
--Declare
set @VarCharDate = '20131020215735' --- YYYYMMDDHHMMSS
--Convert
set @VarCharDate1 =(select SUBSTRING(@VarCharDate,0,5) + '/' + SUBSTRING(@VarCharDate,5,2) + '/' + SUBSTRING(@VarCharDate,7,2) + ' ' + SUBSTRING(@VarCharDate,9,2) +':'+SUBSTRING(@VarCharDate,11,2) +':' + RIGHT(@VarCharDate,2))
select @VarCharDate1
--Convert to Date and Add offset
set @VarCharDate2 = DATEADD(HOUR,DateDiff(HOUR, GETUTCDATE(),GETDATE()),CONVERT(DATETIME,@VarCharDate1,20))
select @VarCharDate2
-- Now we need to revert it to YYYYMMDDhhmmss
--Tried this but month name still coming
Select convert(datetime, @VarCharDate2, 120)
``` | Try this -
```
Declare @VarCharDate varchar(max)
Declare @VarCharDate1 varchar(max)
Declare @VarCharDate2 varchar(max)
--Declare
set @VarCharDate = '20131020215735' --- YYYYMMDDHHMMSS
--Convert
set @VarCharDate1 =(select SUBSTRING(@VarCharDate,0,5) + '/' + SUBSTRING(@VarCharDate,5,2) + '/' + SUBSTRING(@VarCharDate,7,2) + ' ' + SUBSTRING(@VarCharDate,9,2) +':'+SUBSTRING(@VarCharDate,11,2) +':' + RIGHT(@VarCharDate,2))
select @VarCharDate1
--Convert to Date and Add offset
set @VarCharDate2 = DATEADD(HOUR,DateDiff(HOUR, GETUTCDATE(),GETDATE()),CONVERT(DATETIME,@VarCharDate1,20))
select @VarCharDate2
SELECT REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(19), CONVERT(DATETIME, @VarCharDate2, 112), 126), '-', ''), 'T', ''), ':', '') [date]
```
It will return -
```
date
20131021035700
``` | ```
Declare @VarCharDate varchar(max)
Declare @VarCharDate1 varchar(max)
Declare @VarCharDate2 datetime
--Declare
set @VarCharDate = '20131020215735' --- YYYYMMDDHHMMSS
--Convert
set @VarCharDate1 =(select SUBSTRING(@VarCharDate,0,5) + '/' + SUBSTRING(@VarCharDate,5,2) + '/' + SUBSTRING(@VarCharDate,7,2) + ' ' + SUBSTRING(@VarCharDate,9,2) +':'+SUBSTRING(@VarCharDate,11,2) +':' + RIGHT(@VarCharDate,2))
select @VarCharDate1
--Convert to Date and Add offset
set @VarCharDate2 = DATEADD(HOUR,DateDiff(HOUR, GETUTCDATE(),GETDATE()),CONVERT(DATETIME,@VarCharDate1,120))
select @VarCharDate2
-- Now we need to revert it to YYYYMMDDhhmmss
--Tried this but month name still coming
Select convert(datetime, @VarCharDate2, 120)
```
by using datetime data type you will always have the correct datetime | Convert datetime to yyyymmddhhmmss in sql server | [
"",
"sql",
"sql-server",
"datetime",
"gmt",
""
] |
Given
```
CREATE TABLE Parent (
Id INT IDENTITY(1,1) NOT NULL
Name VARCHAR(255)
SomeProp VARCHAR(255)
)
CREATE TABLE Child (
Id INT IDENTITY(1,1) NOT NULL
ParentId INT NOT NULL
ChildA VARCHAR(255)
ChildZ VARCHAR(255)
)
```
I wish to write a stored procedure that accepts `@name` as a parameter, finds the Parent matching that name (if any), returns that Parent as a result set, and then returns any children of that Parent as a separate result set.
How can I efficiently select the children? My current naive approach is
```
SELECT @id = FROM Parent WHERE Name = @name
SELECT * FROM Parent WHERE Name = @name
SELECT * FROM Child WHERE ParentId=@id
```
Can I avoid selecting from Parent twice? | Your naïve approach looks OK, except that you don't have a UNIQUE constraint on `Parent.Name`, which means you could have duplicate parent names, but would only return children matching the first Id you find. Also there is a syntax error on your first SELECT, which should be:
```
SELECT @id = Id FROM Parent WHERE Name = @name
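-- Putting it together, a minimal sketch of the full procedure using
-- this corrected assignment (the procedure name is illustrative):
--
-- CREATE PROCEDURE GetParentAndChildren @name VARCHAR(255) AS
-- BEGIN
--     SELECT * FROM Parent WHERE Name = @name;
--     SELECT c.*
--     FROM Child c
--     INNER JOIN Parent p ON c.ParentId = p.Id
--     WHERE p.Name = @name;
-- END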
```
An alternative would be:
```
SELECT * FROM Parent
WHERE Name = @Name
ORDER BY Name
SELECT Child.*
FROM Child C INNER JOIN PARENT P ON C.ParentId = P.Id
WHERE P.Name = @Name
ORDER BY P.Name
```
which would return all parents whose name is @Name and all their matching children. | You could use a join like this and never select the id.
```
SELECT *
FROM Child c
JOIN Parent p on c.ParentId = P.Id
WHERE p.Name = @name
``` | Return Parent and Child from Stored Procedure | [
"",
"sql",
"sql-server",
""
] |
Need a query that lists all the dates of the past 12 months
Say my current date is 10-21-2013. I need to use sysdate to get the data.
The result should look like
```
10/21/2013
...
10/01/2013
09/30/2013
...
09/01/2013
...
01/31/2013
...
01/01/2013
...
11/30/2012
...
11/01/2012
```
Please help me with this..
Thanks in advance.
AVG | Allowing for leap years and all, by using add\_months to work out the date 12 months ago and thus how many rows to generate ...
```
select trunc(sysdate) - rownum + 1 the_date
from dual
connect by level <= (trunc(sysdate) - add_months(trunc(sysdate),-12))
``` | You could do something like this:
```
select to_date('21-oct-2012','dd-mon-yyyy') + rownum -1
from all_objects
where rownum <=
to_date('21-oct-2013','dd-mon-yyyy') - to_date('21-oct-2012','dd-mon-yyyy')+1
```
of course, you could use parameters for the start and end date to make it more usable.
-or- using sysdate, like this:
```
select sysdate + interval '-1' year + rownum -1
from all_objects
where rownum <=
sysdate - (sysdate + interval '-1' year)
``` | dates of past 12 months in oracle | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a web portal that should display only the time if the date is today's date; otherwise it should display the date.
```
dossier_date_created | Expected result
----------------------------------------------------
2013-10-22 16:18:46.610 | 2013-10-22
2013-10-23 11:26:56.390 | 11:26
```
I tried something like this :
```
select
case
when CONVERT(date,dossier_date_created) = CONVERT(DATE,getdate()) then convert(char(5), dossier_date_created, 108)
else convert(date, dossier_date_created)
end as timedate
from Proj_Manage_Dossier
```
But the result was :
```
timedate
-----------
2013-10-22
1900-01-01
```
How can I do it only with SQL? And btw Datatype of my column "dossier\_date\_created" is `datetime` | You can just use `CAST` or `CONVERT` to get your values to `DATE` or `TIME` datatypes. But, since you can't combine two data types in same column you have to cast them both to something identical - like `NVARCHAR` afterwards:
```
SELECT CASE WHEN CAST(dossier_date_created AS DATE) = CAST(GETDATE() AS DATE)
THEN CAST(CAST (dossier_date_created AS TIME) AS NVARCHAR(10))
ELSE CAST(CAST(dossier_date_created AS DATE) AS NVARCHAR(10)) END
FROM dbo.Proj_Manage_Dossier
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!3/564d9/1)**
**EDIT** - For SQL Server versions older than 2008, where there are no DATE and TIME data types - you can also directly `CONVERT` to `VARCHAR` [using appropriate style codes](http://msdn.microsoft.com/en-us/library/ms187928.aspx).
```
SELECT CASE WHEN CONVERT(VARCHAR(10),dossier_date_created,102) = CONVERT(VARCHAR(12),GETDATE(),102)
THEN CONVERT(VARCHAR(10),dossier_date_created,108)
ELSE CONVERT(VARCHAR(10),dossier_date_created,102) END
FROM dbo.Proj_Manage_Dossier
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!3/505d6/1)** | This works fine for me:
```
select
case
when (CONVERT(date,PaymentDate) = CONVERT(DATE,getdate()))
then convert(VARCHAR(15), PaymentDate, 108)
else convert(varchar, PaymentDate, 101)
end as timedate
from Payments
```
Hope it will help you.. :) | only date and only time in same column - sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have the following table:
```
CREATE TABLE posts...
id INT(11),
time DATETIME,
...
```
And I want to know the highest ID in general, and I also want to know the latest post disregarding posts from the last hour so I can subtract them to get the number of posts from the past hour.
I could do two queries:
```
SELECT MAX(id) AS old_max FROM posts WHERE time <
DATE_SUB(NOW(), INTERVAL 1 HOUR);
```
And
```
SELECT MAX(id) AS max FROM posts;
```
But I'd like to have them in the same result set. How would I achieve this? | Try this::
```
Select
(SELECT MAX(id) AS max from posts),
(SELECT
MAX(id) AS old_max
FROM posts WHERE time <
DATE_SUB(NOW(), INTERVAL 1 HOUR));
``` | How about this;
```
SELECT (
SELECT MAX(id) AS old_max FROM posts WHERE time <
DATE_SUB(NOW(), INTERVAL 1 HOUR)),
(SELECT MAX(id) AS max FROM posts)
``` | JOIN the same statement without WHERE clause | [
"",
"mysql",
"sql",
""
] |
I want to move data from Column Col11 of DB1.Table1 into Col555 of DB2.Table7, based on a certain condition. How do I do it ? Is there a statement like -
```
select Col11 from DB1.Table1
into Col555 of Db2.Table7
where Col11 = 'Important'
``` | You don't need `COPY` or `INSERT` but `UPDATE`(using JOIN):
```
UPDATE [DB2].[Table7]
SET [Table7].[Col555] = [Table1].[Col11]
FROM [Table1] JOIN [Table7] ON -- add the base for the join here...
WHERE [Table1].[Col11] = 'Important'
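-- As a sketch, assuming a hypothetical shared key column [KeyCol]
-- exists in both tables, the join base could be filled in like so:
--
-- UPDATE t7
-- SET t7.[Col555] = t1.[Col11]
-- FROM [DB1].[Table1] t1
-- JOIN [DB2].[Table7] t7 ON t7.[KeyCol] = t1.[KeyCol]
-- WHERE t1.[Col11] = 'Important'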
```
See this post for more details: [SQL update query using joins](https://stackoverflow.com/questions/982919/sql-update-query-using-joins) | ```
INSERT INTO [DB2].[Table7] (Col555)
SELECT [Table1].[Col11]
FROM [DB1].[Table1]
WHERE [Table1].[Col11] = 'Important'
```
--May need .[dbo] or schema name between database and table names. | Copy data from one column into another column | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I see multiple topics on this issue, where you have tested with a database, and now want to reset the ids.
There are a lot of ways to do this with code, but isn't there a way to do it manually in SQL Server 2008 - to just go to your table's content, clear the values, and reset the IDs?
At the moment, I can only go into my contents and delete them, but my IDs still remain.
[Similar Topic](https://stackoverflow.com/questions/9384776/empty-table-data-and-reset-identity-columns) | Basically - you need to run the [DBCC checkident](http://technet.microsoft.com/en-us/library/ms176057.aspx) to get the database engine to reseed.
```
DBCC CHECKIDENT ('<your table>', RESEED, <new seed>);
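-- For example, to empty a test table and restart its identity at 1
-- (the table name is illustrative):
--
-- DELETE FROM dbo.MyTestTable;
-- DBCC CHECKIDENT ('dbo.MyTestTable', RESEED, 0);
--
-- The next inserted row will then receive identity value 1.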
``` | On SQL server, if you are deleting all data and resetting an identity column, I would use Truncate
<http://technet.microsoft.com/en-us/library/ms177570.aspx> | Reset IDENTITY columns Manually | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Suppose I write a query that finds the names of all artists which have members who were not born in the 1960s and another query to find the names of all artists which do not have any members born in the 1960s.
So we are speaking of these three tables:
## Artists
```
ArtistID, ArtistName, Region, EntryDate
```
## Members
```
MemberId, Firstname, Lastname, Birthday
```
The relationship between the Aritsts and Members Tables is the XrefArtistsMembers table:
## XrefArtistsMembers
```
MemberId, ArtistID, RespParty
```
I've started the Query with:
```
select salesid, firstname, lastname, birthday
from members
where year(birthday) >= 1970 or year(birthday) < 1960;
```
But what would be true of any artist that appeared in the results of the first query but not in the results of the second? | The question seems not to be looking for code. The question is:
> But what would be true of any artist that appeared in the results of the first query but not in the results of the second?
So, the first query gets all artists with SOME members not born in 1960. The second query gets all artists WITH NO member born in 1960. Hence, what you can be sure of is that the artists returned in the first query have SOME BUT NOT ALL members (not) born in 1960. | Each can be done with an IN or NOT IN condition:
```
-- Artists with NO members born in the 60s
SELECT * FROM Artists
WHERE ArtistID NOT IN (
SELECT ArtistID
FROM XrefArtistsMembers x
INNER JOIN Members m ON x.MemberId = m.MemberID
WHERE year(birthday) BETWEEN 1960 AND 1969)
-- Artists with ANY member NOT born in the 60s
SELECT * FROM Artists
WHERE ArtistID IN (
SELECT ArtistID
FROM XrefArtistsMembers x
INNER JOIN Members m ON x.MemberId = m.MemberID
WHERE year(birthday) NOT BETWEEN 1960 AND 1969
)
``` | SQL difficult JOIN tables | [
"",
"sql",
""
] |
Given this table structure:
```
Table A
ID AGE EDUCATION
1 23 3
2 25 6
3 22 5
Table B
ID AGE EDUCATION
1 26 4
2 24 6
3 21 3
```
I want to find all matches between the two tables where the age is within 2 and the education is within 2. However, I do not want to select any row from TableB more than once. Each row in B should be selected 0 or 1 times and each row in A should be selected one or more times (standard left join).
```
SELECT *
FROM TableA as A LEFT JOIN TableB as B ON
abs(A.age - B.age) <= 2 AND
abs(A.education - B.education) <= 2
A.ID A.AGE A.EDUCATION B.ID B.AGE B.EDUCATION
1 23 3 3 21 3
2 25 6 1 26 4
2 25 6 2 24 6
3 22 5 2 24 6
3 22 5 3 21 3
```
As you can see, the last two rows in the output have duplicated B.ID of 2 and 3 when compared to the entire result set. I'd like those rows to return as a single null match with A.ID = 3 since they were both matched to previous A values.
Desired output:
(note that for A.ID = 3, there is no match in B because all rows in B have already been joined to rows in A.)
```
A.ID A.AGE A.EDUCATION B.ID B.AGE B.EDUCATION
1 23 3 3 21 3
2 25 6 1 26 4
2 25 6 2 24 6
3 22 5 null null null
```
I can do this with a short program, but I'd like to solve the problem using a SQL query because it is not for me and I will not have the luxury of ever seeing the data or manipulating the environment.
Any ideas? Thanks | As @Joel Coehoorn said earlier, there has to be a mechanism that selects which pairs of (a,b) with the same (b) are filtered out from the output. SQL is not great on allowing you to select ONE row when multiple match, so a pivot query needs to be created, where you filter out the records you don't want. In this case, filtering can be done by reducing all of match IDs of B as a smallest (or largest, it doesn't really matter), using any function that will return one value from a set, it's just min() and max() are most convenient to use. Once you reduced the result to know which (a,b) pairs you care about, then you join against that result, to pull out the rest of the table data.
```
select a.id a_id, a.age a_age, a.education a_e,
b.id b_id, b.age b_age, b.education b_e
from a left join
(
SELECT
a.id a_id, min(b.id) b_id from a,b where
abs(A.age - B.age) <= 2 AND
abs(A.education - B.education) <= 2
group by a.id
) g on a.id = g.a_id
left join b on b.id = g.b_id;
``` | In SQL-Server, you can use the `CROSS APPLY` syntax:
```
SELECT
a.id, a.age, a.education,
b.id AS b_id, b.age AS b_age, b.education AS b_education
FROM tableB AS b
CROSS APPLY
( SELECT TOP (1) a.*
FROM tableA AS a
WHERE ABS(a.age - b.age) <= 2
AND ABS(a.education - b.education) <= 2
ORDER BY a.id -- your choice here
) AS a ;
```
Depending on the order you choose in the subquery, different rows from `tableA` will be selected.
**Edit** (after your update): But the above query will not show rows from A that have no matching rows in B or even some that have but not been selected.
---
It could also be done with window functions but Access does not have them. Here is a query that I think will work in Access:
```
SELECT
a.id, a.age, a.education,
s.id AS s_id, s.age AS b_age, s.education AS b_education
FROM tableB AS a
LEFT JOIN
( SELECT
b.id, b.age, b.education, MIN(a.id) AS a_id
FROM tableB AS b
JOIN tableA AS a
ON ABS(a.age - b.age) <= 2
AND ABS(a.education - b.education) <= 2
GROUP BY b.id, b.age, b.education
) AS s
ON a.id = s.a_id ;
```
I'm not sure if Access allows such a subquery but if it doesn't, you can define it as a "Query" and then use it in another. | Is it possible to left join two tables and have the right table supply each row no more than once? | [
"",
"sql",
"left-join",
""
] |
I have an Access database called road.mdb.
Inside road.mdb, I have a linked SQL table and
the table name is student.
I can insert records using query design in MSAccess
But I cannot update or delete.
When I run the delete query below, the error is: could not delete from specified table
```
delete from student where studentid=303;
```
And when I run the update query below, the error is: Operation must use an updateable query
```
update student set Name='BOB' where studentid= 303;
```
I have full access to the SQL database and I can run the query OK using SQL Server Management Studio.
Is it impossible to delete and update using query design inside MS Access?
The weird thing is that I can insert new records using query design inside MS Access.
thank you | I solved this by adding a primary key to the SQL table and re-linking the table to Access.
Thanks everyone... | In the case that you can't manipulated the table on SqlServer, you can get around the problem by telling Access which/s column/s are meant to be the primary key. This is done on the last step of creating a Linked table, the window title is "Select Unique Record Identifier". | cannot delete and update records on access linked table | [
"",
"sql",
"ms-access",
""
] |
When I use an if statement to get a result like this:
```
SELECT
IF(huxwz_user_orders.year = YEAR(CURDATE()) + 1, huxwz_user_orders.plannedweek + 52, huxwz_user_orders.plannedweek) as PlannedWeekG
```
When I used PlannedWeekG in my query like this: `WHERE PlannedWeekG > 0`
It's giving me an error "Unknown column 'PlannedWeekG' in where clause. What am I doing wrong? | You need to use 'HAVING' in place of 'WHERE' to look at the derived result. Since PlannedWeekG is not a column in the DB, 'HAVING' is the magic to making this work ;) | You can use the `Having` clause in mysql:
```
SELECT
IF(huxwz_user_orders.year = YEAR(CURDATE()) + 1, huxwz_user_orders.plannedweek + 52, huxwz_user_orders.plannedweek) as PlannedWeekG
HAVING PlannedWeekG > 0
``` | Columns defined through if statement | [
"",
"mysql",
"sql",
""
] |
In mysql - table like this:
```
ID Name Value
1 Color Blue
1 Weight 50
1 Type Fruit
2 Color Red
2 Weight 40
3 Color Yellow
```
I want to get a count of distinct ID's that have a name/characteristic of 'Type' and a distinct count of ID's that don't. The first one (those that do) is easy since it's defined, but if I do a select count(distinct ID) where name <> 'type' - ID 1 will still be part of that count as it has other rows/attributes that <> 'type'.
In this example - the desired result for distinct count = 'type' would be 1 (ID 1) and for distinct count <> 'type' would be 2 (ID's 2 & 3).
Thanks in advance. | This is similar to [SQL Query User has one record in the table but not another one](https://stackoverflow.com/questions/18896462/sql-query-user-has-one-record-in-the-table-but-not-another-one/18896706#18896706)
You can select where the id is not in a subquery looking for ids that have type
```
select count(id) from table where id not in (select id from table where name = 'type')
group by id
``` | ```
SELECT id FROM test GROUP BY id HAVING
CONCAT(",",CONCAT(GROUP_CONCAT(name), ","))
NOT LIKE '%,Type,%'
```
will give you all ids without `Type`: <http://sqlfiddle.com/#!2/30837/1>
(Concat with `,` ensures, that you are not matching `XType` by accident.)
while
```
SELECT COUNT(id) AS count, GROUP_CONCAT(id) AS ids
FROM(SELECT id, count(name) as count FROM test
GROUP BY id HAVING CONCAT(",",CONCAT(GROUP_CONCAT(name), ","))
NOT LIKE '%,Type,%') as temp;
```
will give you the desired count: <http://sqlfiddle.com/#!2/30837/9> | Get count of ID for name-value pair when name doesn't exist | [
"",
"mysql",
"sql",
"t-sql",
""
] |
I am trying to create a stored procedure that takes some values; one of the values is a column name and another is a value.
I've tried the following code
```
create PROC SelectDynamic
@DateFrom DATETIME,
@DateTo DATETIME,
@ColumName NVARCHAR(50),
@ColumID INT
AS
DECLARE @Sql NVARCHAR(MAX)
SET @Sql=
'
SELECT
*
FROM
Ticket t
WHERE t.TicketDate BETWEEN '+ @DateFrom +' AND' + @DateTo+' AND' + @ColumName +'='+ @ColumID
EXEC sp_executesql @Sql
```
It gives me this error: `Conversion failed when converting date and/or time from character string.`
I am not a SQL expert, and this is my first dynamic SQL statement.
Can anyone help?
Thanks! | You have to escape the quotation marks when building a dynamic query.
Thus, your @SQL variable should be something like this
```
SET @Sql= 'SELECT * FROM Ticket t WHERE t.TicketDate BETWEEN ''' + CAST(@DateFrom AS NVARCHAR) + ''' AND ''' + CAST(@DateTo AS NVARCHAR) + ''' AND ' + @ColumName + '=' + CAST(@ColumID AS NVARCHAR) + ''
```
Escaping is done by doubling the quotation marks.
You can do a `SELECT @SQL` after to test if your query has been built correctly. | ```
Try this
declare @DateFrom DATETIME=getdate(),
@DateTo DATETIME=getdate(),
@ColumName NVARCHAR(50)='A',
@ColumID INT=1
DECLARE @Sql NVARCHAR(MAX)
SET @Sql=
'
SELECT
*
FROM
Ticket t
WHERE t.TicketDate BETWEEN '''+ convert(varchar,@DateFrom,110) +''' AND ''' + convert(varchar,@DateTo,110)+''' AND ' + @ColumName +'='+ convert(varchar,@ColumID)
EXEC (@Sql)
```
Which creates the SQL string as
```
SELECT *
FROM Ticket t
WHERE t.TicketDate BETWEEN '10-23-2013' AND '10-23-2013' AND A=1
``` | create Dynamic where in sql SP | [
"",
"sql",
"sql-server",
"stored-procedures",
"dynamic",
"where-clause",
""
] |
I have two tables, a USER table which contains all current users. The next table is called tasks and has a lot of information, one of the columns is PointPerson which is the same as one of the columns in the user table. This column has entries for people who are not users anymore. This is what I want to get from a SELECT statement. I want all of the users in the task table that don't exist on the user table.
This is what I have tried, but I am getting no results, which can't be correct:
```
SELECT DISTINCT TSK.PointPerson FROM [Task] AS TSK
WHERE NOT EXISTS (
SELECT DisplayName FROM [User]
)
```
What am I doing wrong and how do I get the desired result? | Try this:
```
SELECT DISTINCT TSK.PointPerson FROM [Task] AS TSK
WHERE TSK.PointPerson NOT IN (
SELECT DisplayName FROM [User]
)
```
Or a different approach:
```
SELECT DISTINCT TSK.PointPerson FROM [Task] AS TSK
LEFT JOIN [User] AS U ON U.DisplayName = TSK.PointPerson
WHERE U.DisplayName IS NULL
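-- A third option is EXCEPT, which compares the two sets directly
-- (and removes duplicates as well):
SELECT PointPerson FROM [Task]
EXCEPT
SELECT DisplayName FROM [User]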
``` | You may try like this:-
```
SELECT DISTINCT TSK.PointPerson
FROM [Task]
WHERE NOT EXISTS
(SELECT DisplayName
FROM User
WHERE Task.name = User.name)
```
or using a LEFT JOIN like this:-
```
SELECT DISTINCT TSK.PointPerson
FROM Task t1
LEFT JOIN User t2 ON t2.name = t1.name
WHERE t2.name IS NULL
``` | SELECT all elements that don't exist in another table | [
"",
"sql",
"sql-server",
""
] |
There are two tables.
Visits (Parent)
VisitDocs (Child)
I need to show all the visits which have at least docType 3 and don't have docType 1 or 2. There are different docTypes, 1 to 15.
Example:

Result should be 101 because visit 101 doesn't have docType 1 or 2. It is a one-to-many relation.
I have tried the following query but it doesn't work.
```
Select v.visitID, d.visitdoc From Visits v
INNER JOIN VisitDocs d ON d.VisitID = v.VisitID
WHERE d.docType = 3 and d.docType Not IN (1,2)
``` | The answer above seems very close; I just duplicated a similar structure and tested it. My revised select is below (I just removed the 'AS' keyword and changed the last 'NOT IN' to 'IN'), and it seems to work for me.
```
SELECT v.visitID
FROM Visits v
WHERE EXISTS(SELECT *
FROM VisitDocs d
WHERE d.VisitID = v.VisitID
AND d.docType = 3)
AND NOT EXISTS(SELECT *
FROM VisitDocs d
WHERE d.VisitID = v.VisitID
AND d.docType IN (1,2))
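-- An equivalent single pass using MySQL's conditional aggregation
-- (boolean expressions sum as 0/1):
-- SELECT d.VisitID
-- FROM VisitDocs d
-- GROUP BY d.VisitID
-- HAVING SUM(d.docType = 3) > 0
-- AND SUM(d.docType IN (1,2)) = 0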
``` | Try this. I am sure you can do this via a couple JOINs (or at least part of it) but you did not request what your return columns should be
```
SELECT v.visitID
FROM Visits AS v
WHERE EXISTS(SELECT *
FROM VisitDocs AS d
WHERE d.VisitID = v.VisitID
AND d.docType = 3)
AND NOT EXISTS(SELECT *
FROM VisitDocs AS d
WHERE d.VisitID = v.VisitID
AND d.docType Not IN (1,2))
``` | How to check if a column A IN (3,4,5) and A NOT IN (1,2) | [
"",
"mysql",
"sql",
""
] |
I'm trying to create a query that only returns rows with objects that have three or more orders in a week's timeframe and are only orders submitted after 9/1/13.
```
SELECT OrderID, DateSubmitted, ObjectID = i.ObjectID
FROM dbo.Object i
JOIN dbo.Order j
ON i.ObjectID = j.ObjectID
WHERE DateSubmitted >= '9/1/2013'
```
I just can't figure out how to narrow the results to those objects with three or more orders in a week. I've tried numerous `GROUP BY` and `HAVING` clauses with no luck. Any help would be greatly appreciated. | Based on your last comments, the query you are looking for is very simple. Use the DATEPART function to find the WEEK of each object's order date; check the query below. Also, unless your database is already configured otherwise, Sunday (7) is by default the first day of the week in SQL Server, so for this query's purpose you need to set Monday (1) as the first day of the week.
You can check the current setting by executing `SELECT @@DATEFIRST`.
If the same object has 3 or more orders in multiple weeks, the query below returns that object for each of those weeks. If you need only the objects, you can skip weekNumber from the result set and select DISTINCT ObjectID.
**Do not forget to Reset the DATEFIRST setting to its original Value at the END.**
```
DECLARE @Object TABLE
(
objectID INT
)
DECLARE @Order TABLE
(
orderID INT
,objectID INT
,DateSubmitted DATE
)
INSERT INTO @Object( objectID )
SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
INSERT INTO @Order ( orderID, objectID, DateSubmitted )
SELECT 1,1,'10/2/2013'
UNION ALL SELECT 2,1,'10/3/2013'
UNION ALL SELECT 3,1,'10/5/2013'
UNION ALL SELECT 4,1,'10/09/2013'
UNION ALL SELECT 5,1,'10/10/2013'
UNION ALL SELECT 6,1,'10/13/2013'
UNION ALL SELECT 4,2,'10/15/2013'
UNION ALL SELECT 5,2,'10/16/2013'
UNION ALL SELECT 6,2,'10/21/2013'
UNION ALL SELECT 7,3,'09/02/2013'
UNION ALL SELECT 8,3,'09/03/2013'
UNION ALL SELECT 9,3,'09/04/2013'
DECLARE @CurrentDateFirst INT=@@DATEFIRST
SET DATEFIRST 1;
SELECT i.objectID,DATEPART(week,DateSubmitted) AS weekNumber
FROM @Object i
JOIN @Order j
ON i.ObjectID = j.ObjectID
WHERE DateSubmitted >= '9/1/2013'
GROUP BY i.objectID,DATEPART(week,DateSubmitted)
HAVING(COUNT(DISTINCT orderID) >= 3)
ORDER BY i.objectID
SET DATEFIRST @CurrentDateFirst
```
 | Try:
```
SELECT ObjectID
FROM dbo.Object i
JOIN dbo.Order j ON J.ObjectID = i.ObjectID
WHERE DateSubmitted >= '9/1/2013'
GROUP BY ObjectID
HAVING COUNT(1) >=3
``` | Assistance with creating SQL query | [
"",
"sql",
"sql-server",
""
] |
Is it possible to create a column within a MySQL structure that automatically sums two other columns?
So if I have a table called TABLE:
`Column A, Column B, and Column C.`
I would want `Column C` to automatically sum `Column A` and `Column B`.
Is that possible?
If A changes, C changes.
If it is possible, then how? An example would help. | With a trigger:
```
DELIMITER $$
CREATE TRIGGER myTableAutoSum
BEFORE INSERT ON `myTable` FOR EACH ROW
BEGIN
SET NEW.ColumnC = NEW.ColumnA + NEW.ColumnB;
END;
$$
DELIMITER ;
```
Then this query:
```
INSERT INTO myTable (ColumnA, ColumnB) values(1, 1), (2, 3), (1, 4);
```
Will result in rows:
```
ColumnA ColumnB ColumnC
1 1 2
2 3 5
1 4 5
``` | You can achieve this with a VIEW:
```
CREATE TABLE table1 (a INT, b INT);
CREATE
OR replace VIEW V_TABLE AS
SELECT a, b, a + b AS c
FROM table1;
```
`sqlfiddle` | Calculate two columns they automatic put into total column | [
"",
"mysql",
"sql",
""
] |
Here is some SQL to set up with a very simple table.
```
CREATE TABLE CC_TEST2
("CURRENCYID" NUMBER NOT NULL ENABLE,
"NAME" NVARCHAR2(255)) ;
insert into CC_TEST2 (select 1,'Testing issue'from dual);
commit;
```
Then this recreates the issue
```
SELECT (step.Name ||
'Commentary of 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890
1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890
1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890
1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890
1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890
1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890
1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 1234567890 12')
as thing FROM CC_TEST2 step
```
Any ideas?
I think it's something odd about nVarchar2? If I change the column type to varChar2 then it's OK. Sadly I can't change the column type of the actual production database where I'm getting the issue | If "NAME" NVARCHAR2(255) is changed to "NAME" VARCHAR2(255) (i.e by using varchar2) you won't get any issue. You can test the same at
<http://sqlfiddle.com/#!4/cefd8/2> | There seems to be some strangeness with NVARCHAR2 and string concatenation.
See <http://sqlfiddle.com/#!4/936c4/2>
My understanding based on running the various statements in the SQL Fiddle is that the string constant on the right hand side of the concatenation operator || is also treated as an NVARCHAR2, and can be at most 1000 characters. | Why am I getting ORA-01401: inserted value too large for column - when I'm not inserting? | [
"",
"sql",
"oracle",
""
] |
Schema is below:
Ships(name, yearLaunched, country, numGuns, gunSize, displacement)
Battles(ship, battleName, result)
where name and ship are equal. By this I mean if 'Missouri' was one of the tuple
results for name, 'Missouri' would also appear as a tuple result for ship.
(i.e. name = 'Missouri' , ship = 'Missouri)
They are the same
Now the question I have is what SQL statement would I make in order to list
the battleship amongst a list of battleships that has the largest amount
of guns (i.e. gunSize)
I tried:
```
SELECT name, max(gunSize)
FROM Ships
```
But this gave me the wrong result.
I then tried:
```
SELECT s.name
FROM Ships s,
(SELECT MAX(gunSize) as "Largest # of Guns"
FROM Ships
GROUP BY name) maxGuns
WHERE s.name = maxGuns.name
```
But then SQLite Admin gave me an error saying that no such column 'maxGuns' exists
even though I assigned it as an alias: maxGuns
Do any of you know what the correct query for this problem would be?
Thanks! | The problem in your query is that the subquery has no column named `name`.
Anyway, to find the largest amount of guns, just use `SELECT MAX(gunSize) FROM Ships`.
To get all ships with that number of guns, you need nothing more than a simple comparison with that value:
```
SELECT name
FROM Ships
WHERE gunSize = (SELECT MAX(gunSize)
FROM Ships)
``` | It does not exist because you are trying to alias a subquery in the 'Where' clause, instead of aliasing specific column from a table. In order to identify the ship with the most guns you could try something like:
```
with cte as (select *
,ROW_NUMBER() over (order by s.gunsize desc) seq
from ships s )
select * from cte
where seq = '1'
```
Another approach could be the following, which will only select the first row, containing the ship with the highest number of guns:
```
select Top 1 *
from ships s
order by s.gunsize desc
``` | SQL Query: Largest number of guns | [
"",
"sql",
"sqlite",
"max",
""
] |
I have a value in a table that was changed unexpectedly. The column in question is `CreatedDate`, which is set when my item is created, but it's being changed by a stored procedure.
Could I write some type of `SELECT` statement to get all the procedure names that reference this column from my table? | One option is to create a script file.
Right click on the database ***-> Tasks -> Generate Scripts***
Then you can select all the stored procedures and generate the script with all the sps. So you can find the reference from there.
Or
```
-- Search in All Objects
SELECT OBJECT_NAME(OBJECT_ID),
definition
FROM sys.sql_modules
WHERE definition LIKE '%' + 'CreatedDate' + '%'
GO
-- Search in Stored Procedure Only
SELECT DISTINCT OBJECT_NAME(OBJECT_ID),
object_definition(OBJECT_ID)
FROM sys.Procedures
WHERE object_definition(OBJECT_ID) LIKE '%' + 'CreatedDate' + '%'
GO
```
Source [SQL SERVER – Find Column Used in Stored Procedure – Search Stored Procedure for Column Name](http://blog.sqlauthority.com/2012/07/19/sql-server-find-column-used-in-stored-procedure-search-stored-procedure-for-column-name-part-2/) | If you want to get stored procedures using specific column only, you can use try this query:
```
SELECT DISTINCT Name
FROM sys.Procedures
WHERE object_definition(OBJECT_ID) LIKE '%CreatedDate%';
```
If you want to get stored procedures using specific column of table, you can use below query :
```
SELECT DISTINCT Name
FROM sys.procedures
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%tbl_name%'
AND OBJECT_DEFINITION(OBJECT_ID) LIKE '%CreatedDate%';
``` | Find all stored procedures that reference a specific column in some table | [
"",
"sql",
"sql-server",
"select",
"find",
""
] |
I am trying to do a SQL query and to build the where condition dynamically depending if the parameters are null or no. I have something like this:
```
SELECT tblOrder.ProdOrder, tblOrder.Customer FROM tblOrder
CASE WHEN @OrderId IS NOT NULL
THEN
WHERE tblOrder.OrderId = @OrderId
ELSE
END
CASE WHEN @OrderCustomer IS NOT NULL
THEN
AND tblOrder.OrderCustomer = @OrderCustomer
ELSE
END
END
```
This doesn't work, but this is just a small prototype how to assemble the query, so if the orderid is not null include in the where clause, or if the ordercustomer is not null include in the where clause. But I see problem here, for example if the ordercustomer is not null but the orderid is null, there will be error because the where keyword is not included.
How can I tackle this problem? | This should do what you want:
```
SELECT tblOrder.ProdOrder, tblOrder.Customer
FROM tblOrder
WHERE ( @OrderId IS NULL OR tblOrder.OrderId = @OrderId )
AND ( @OrderCustomer IS NULL OR tblOrder.OrderCustomer = @OrderCustomer )
OPTION (RECOMPILE)
```
But as commented, you should include the OPTION (RECOMPILE) hint; otherwise it will perform badly.
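The pattern is easy to exercise with bound parameters in any engine. Below is a minimal, hedged sketch using Python's built-in sqlite3 (table and data invented for the demo; SQLite has no OPTION (RECOMPILE), so only the NULL-check logic is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblOrder (OrderId INTEGER, OrderCustomer TEXT)")
conn.executemany("INSERT INTO tblOrder VALUES (?, ?)",
                 [(1, "acme"), (2, "acme"), (3, "globex")])

# Each parameter is tested twice, mirroring (@p IS NULL OR col = @p).
SQL = """SELECT OrderId, OrderCustomer FROM tblOrder
         WHERE (:oid  IS NULL OR OrderId       = :oid)
           AND (:cust IS NULL OR OrderCustomer = :cust)"""

all_rows  = conn.execute(SQL, {"oid": None, "cust": None}).fetchall()    # no filters
acme_rows = conn.execute(SQL, {"oid": None, "cust": "acme"}).fetchall()  # one filter
one_row   = conn.execute(SQL, {"oid": 3, "cust": None}).fetchall()       # other filter
```

In SQL Server the same shape works with `@`-parameters, with OPTION (RECOMPILE) added so the plan is rebuilt per parameter combination.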
Worth reading:
* <http://www.sommarskog.se/dyn-search-2008.html> | You can't dynamically write the `WHERE` clause, but you can use compound statements to achieve the desired effect.
Since in SQL `NULL` is **never** equal to any value, you can actually achieve a pretty elegant query:
```
SELECT tblOrder.ProdOrder, tblOrder.Customer
FROM tblOrder
WHERE
-- Can only be true if @OrderId isn't NULL, no need to state it explicitly
tblOrder.OrderId = @OrderId
OR
-- Thanks to short-circuit evaluation,
-- we only get here if the first condition evaluated as false
tblOrder.OrderCustomer = @OrderCustomer
``` | SQL query where parameters null not null | [
"",
"sql",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
"case",
""
] |
I am creating a database which stores arbitrary date/time ranges in PostgreSQL 9.2.4. I want to place a constraint on this database which forces the date/time ranges to be non-overlapping, and non-adjacent (since two adjacent ranges can be expressed as a single continuous range).
To do this, I am using an `EXCLUDE` constraint with a GiST index. Here is the constraint I have currently:
```
ADD CONSTRAINT overlap_exclude EXCLUDE USING GIST (
box(
point (
extract(EPOCH FROM "from") - 1,
extract(EPOCH FROM "from") - 1
),
point (
extract(EPOCH FROM "to"),
extract(EPOCH FROM "to")
)
) WITH &&
);
```
The columns `from` and `to` are both `TIMESTAMP WITHOUT TIME ZONE`, and are date/times stored in UTC (I convert to UTC before inserting data into these columns in my application, and I have my database's timezone set to "UTC" in postgresql.conf).
The problem I am thinking I might have, though, is that this constraint is making the (incorrect) assumption that there are no time increments smaller than one second.
It is worth noting that, for the particular data I am storing, I only need second resolution. However, I feel that I may still need to deal with this since the SQL types `timestamp` and `timestamptz` are both higher resolution than one second.
My question is either: is there any problem with simply assuming second resolution, since that's all my application needs (or wants), or, if there is, how can I alter this constraint to deal with fractions-of-a-second in a robust way? | [Range types](https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-CONSTRUCT) consist of lower and upper bound, and each can be included or excluded.
The canonical form (and default for range types) is to *include* the lower and *exclude* the upper bound.
### Inclusive bounds `'[]'`
You could include lower and upper bound (`[]`), and enforce it with a [`CHECK` constraint](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-CHECK-CONSTRAINTS) using [range functions](https://www.postgresql.org/docs/current/functions-range.html#RANGE-FUNCTIONS-TABLE).
Then "adjacent" ranges overlap. Excluding overlapping ranges seems clear. There is a [code example in the manual](https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-CONSTRAINT).
```
CREATE TABLE tbl (
tbl_id serial PRIMARY KEY
, tsr tsrange
, CONSTRAINT tsr_no_overlap EXCLUDE USING gist (tsr WITH &&)
, CONSTRAINT tsr_enforce_incl_bounds CHECK (lower_inc(tsr) AND upper_inc(tsr)) -- all bounds inclusive!
);
```
Only ranges with inclusive bounds are allowed:
```
INSERT INTO tbl(tsr) VALUES ('[2013-10-22 00:00, 2013-10-22 01:00]');
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=7129a88f571116f311b4af54f3a944bc)*
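For intuition, the predicates the constraints stand for can be modelled outside the database. A small Python sketch over half-open `[lo, hi)` intervals, where `&&` is "overlaps" and `-|-` is "adjacent" (illustrative only; the real enforcement stays in PostgreSQL):

```python
def overlaps(a, b):
    # && on half-open intervals [lo, hi): true when they share any point
    return a[0] < b[1] and b[0] < a[1]

def adjacent(a, b):
    # -|- : the intervals touch without overlapping
    return a[1] == b[0] or b[1] == a[0]

def insert_allowed(existing, new):
    # mirrors the pair of exclusion constraints: reject overlap AND adjacency
    return not any(overlaps(new, e) or adjacent(new, e) for e in existing)

rows = [(0, 10)]
ok_far      = insert_allowed(rows, (15, 20))   # gap in between -> allowed
bad_overlap = insert_allowed(rows, (5, 12))    # shares points  -> rejected
bad_touch   = insert_allowed(rows, (10, 15))   # merely touches -> rejected
```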
### Canonical bounds `'[)'`
Enforce `[)` bounds (including lower and excluding upper).
In addition, create another exclusion constraint employing the [adjacent operator **`-|-`**](https://www.postgresql.org/docs/current/functions-range.html#RANGE-OPERATORS-TABLE) to also exclude **adjacent** entries. Both are based on **GiST** indexes as GIN is currently not supported for this.
```
CREATE TABLE tbl (
tbl_id serial PRIMARY KEY
, tsr tsrange
, CONSTRAINT tsr_no_overlap EXCLUDE USING gist (tsr WITH &&)
, CONSTRAINT tsr_no_adjacent EXCLUDE USING gist (tsr WITH -|-)
, CONSTRAINT tsr_enforce_bounds CHECK (lower_inc(tsr) AND NOT upper_inc(tsr))
);
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=5200714f0634a1273c16eab9cc9f4986)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/ca7c4/1)
Unfortunately, this creates two identical GiST indexes to implement both exclusion constraints, where one would suffice, logically. | You can rewrite the exclude with the range type introduced in 9.2. Better yet, you could replace the two fields with a range. See "Constraints on Ranges" here, with an example that basically amounts to your use case:
<http://www.postgresql.org/docs/current/static/rangetypes.html> | Preventing adjacent/overlapping entries with EXCLUDE in PostgreSQL | [
"",
"sql",
"postgresql",
"datetime",
"range-types",
"exclusion-constraint",
""
] |
I'm looking to show, on one line, each client's name and all of their check dates for the month of January 2014. Some clients may have 5 check dates in the month while others may have only 1 or 3. Is there a way to make this happen? Using this SQL query:
```
SELECT CC.co, checkDate FROM CCalendar CC
INNER JOIN CInfo CI on CC.co = CI.co
WHERE checkDate BETWEEN '01/01/2014' AND '01/31/2014'
AND CI.endDate IS NULL AND CI.status IN ('LIVE', 'CONVERSION')
```
The results currently look like this:
```
co | checkDate
---+------------
co | 01/03/2014
co | 01/14/2014
co | 01/17/2014
co | 01/24/2014
co | 01/24/2014
```
and what I want it to look like is this
```
Co | checkdate1| checkdate2| checkdate3| checkdate4| checkdate5|
co | 01/03/2014| 01/14/2014| 01/17/2014| 01/24/2014| 01/24/2014|
``` | OK, What you need to use here is a combination of both [PIVOT](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) and [ROW\_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx).
Your statement should be close to this:
```
SELECT co, [1] as checkDate1, [2] checkDate2, [3] checkDate3, [4] checkDate4, [5] checkDate5 FROM
(
SELECT CC.co,
ROW_NUMBER() OVER (PARTITION BY CC.co ORDER BY CC.co) AS CheckDateNum, checkDate
FROM CCalendar CC
INNER JOIN CInfo CI on CC.co = CI.co
WHERE checkDate BETWEEN '01/01/2014' AND '01/31/2014'
AND CI.endDate IS NULL AND CI.status IN ('LIVE', 'CONVERSION')
) AS T1
PIVOT (MAX(checkDate) for CheckDateNum in ([1],[2],[3],[4],[5])) as checkDate
```
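As an aside, engines without PIVOT can produce the same one-line shape with conditional aggregation. A hedged sketch in Python's sqlite3, where a correlated COUNT(*) stands in for ROW_NUMBER and only simplified co/checkDate columns are modelled:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CCalendar (co TEXT, checkDate TEXT)")
conn.executemany("INSERT INTO CCalendar VALUES (?, ?)",
                 [("co", "2014-01-03"), ("co", "2014-01-14"),
                  ("co", "2014-01-17"), ("co", "2014-01-24")])

row = conn.execute("""
    SELECT co,
           MAX(CASE WHEN n = 1 THEN checkDate END) AS checkdate1,
           MAX(CASE WHEN n = 2 THEN checkDate END) AS checkdate2,
           MAX(CASE WHEN n = 3 THEN checkDate END) AS checkdate3,
           MAX(CASE WHEN n = 4 THEN checkDate END) AS checkdate4
    FROM (SELECT co, checkDate,
                 -- per-group position: count of rows with an earlier-or-equal date
                 (SELECT COUNT(*) FROM CCalendar c2
                  WHERE c2.co = c1.co AND c2.checkDate <= c1.checkDate) AS n
          FROM CCalendar c1)
    GROUP BY co
""").fetchone()
```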
Example:
 | You're what you're looking for is a function similar to MySql's `group_concat` function, only in SQL Server instead.
This is probably a bad idea to do this in your query... instead, you should handle this in your UI / front-end, and build up your comma separated `checkDate` column there instead.
However, if you're still wanting to do this in your query, [this question's answers show how this can be done in SQL Server.](https://stackoverflow.com/questions/451415/simulating-group-concat-mysql-function-in-microsoft-sql-server-2005) | is there a way to show this SQL query on 1 line? | [
"",
"sql",
"sql-server",
""
] |
Edit:
Sorry I did not clarify in beginning. CompanyType is smallint, not null and TotalLicenses is int, null
Basically I only want to check that `TotalLicenses > 0 if C.CompanyType != 4`
I looked at a few reference guides and tutorials, but I couldn't find one that confirmed that I am using CASE WHEN how I intend to in this example.
I am trying to update the "Purchased" column to 1 when Expiration is greater than or equal to today's date, DemoLicense is equal to 0, and TotalLicenses is greater than 0 if the company type is not 4. If it is 4, it can be 0.
```
update WebCatalog.Published.DemoTracking
set Purchased = 1
from WebCatalog.Published.Company C
inner join WebCatalog.Published.RCompany RC
on C.RCompany = RC.Link
inner join WebCatalog.Published.DemoTracking DT
on C.PKey = DT.CompanyPKey and DT.Purchased = 0
where RC.Expiration >= Cast(GETDATE() as Date) and RC.DemoLicense = 0
and C.TotalLicenses >
Case
when C.CompanyType != 4 THEN 0
END
``` | It is not that much different from the way you described it:
```
update WebCatalog.Published.DemoTracking
set Purchased = 1
from WebCatalog.Published.Company C
inner join WebCatalog.Published.RCompany RC
on C.RCompany = RC.Link
inner join WebCatalog.Published.DemoTracking DT
on C.PKey = DT.CompanyPKey and DT.Purchased = 0
where RC.Expiration >= Cast(GETDATE() as Date) and RC.DemoLicense = 0
and (C.TotalLicenses > 0 or C.CompanyType = 4)
``` | I think you want:
```
and ( C.CompanyType <> 4 AND C.TotalLicenses > 0
OR C.CompanyType = 4 AND C.TotalLicenses >= 0
)
```
which can be simplified (under the assumption that these two columns are not nullable) to:
```
and ( C.TotalLicenses > 0
OR C.CompanyType = 4 AND C.TotalLicenses = 0
)
```
which, if `Company.TotalLicenses` can never have negative values, can be further simplified to:
```
and ( C.TotalLicenses > 0
OR C.CompanyType = 4
)
``` | Am I using CASE WHEN Correctly? | [
"",
"sql",
"case",
"case-when",
""
] |
Yes, you can find similar questions numerous times, but:
the most elegant solutions posted here work for SQL Server, but not for Sybase (in my case Sybase Anywhere 11). I have even found some Sybase-related questions marked as duplicates of SQL Server questions, which doesn't help.
One example for solutions I liked, but didn't work, is the `WITH ... DELETE ...` construct.
I have found working solutions using cursors or while-loops, but I hope it is possible without loops.
I hope for a nice, simple and fast query, just deleting all but one exact duplicate.
Here is a little framework for testing:
```
IF OBJECT_ID( 'tempdb..#TestTable' ) IS NOT NULL
DROP TABLE #TestTable;
CREATE TABLE #TestTable (Column1 varchar(1), Column2 int);
INSERT INTO #TestTable VALUES ('A', 1);
INSERT INTO #TestTable VALUES ('A', 1); -- duplicate
INSERT INTO #TestTable VALUES ('A', 1); -- duplicate
INSERT INTO #TestTable VALUES ('A', 2);
INSERT INTO #TestTable VALUES ('B', 1);
INSERT INTO #TestTable VALUES ('B', 2);
INSERT INTO #TestTable VALUES ('B', 2); -- duplicate
INSERT INTO #TestTable VALUES ('C', 1);
INSERT INTO #TestTable VALUES ('C', 2);
SELECT * FROM #TestTable ORDER BY Column1,Column2;
DELETE <your solution here>
SELECT * FROM #TestTable ORDER BY Column1,Column2;
``` | Ok, now that I know the `ROWID()` function, solutions for tables with primary key (PK) can be easily adopted. This one first selects all rows to keep and then deletes the remaining ones:
```
DELETE FROM #TestTable
FROM #TestTable
LEFT OUTER JOIN (
SELECT MIN(ROWID(#TestTable)) rowid
FROM #TestTable
GROUP BY Column1, Column2
) AS KeepRows ON ROWID(#TestTable) = KeepRows.rowid
WHERE KeepRows.rowid IS NULL;
```
...or how about this shorter variant? I like!
```
DELETE FROM #TestTable
WHERE ROWID(#TestTable) NOT IN (
SELECT MIN(ROWID(#TestTable))
FROM #TestTable
GROUP BY Column1, Column2
);
```
In [this post](https://stackoverflow.com/a/18949/1855801), which inspired me most, is a comment that `NOT IN` might be slower. But that's for SQL server, and sometimes elegance is more important :) - I also think it all depends on good indexes.
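SQLite happens to expose the same idea as an implicit rowid column, so the short variant can be tried out directly from Python with the test data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable (Column1 TEXT, Column2 INTEGER)")
conn.executemany("INSERT INTO TestTable VALUES (?, ?)",
                 [("A", 1), ("A", 1), ("A", 1), ("A", 2),
                  ("B", 1), ("B", 2), ("B", 2), ("C", 1), ("C", 2)])

# keep the first physical row of every (Column1, Column2) group
conn.execute("""DELETE FROM TestTable
                WHERE rowid NOT IN (SELECT MIN(rowid)
                                    FROM TestTable
                                    GROUP BY Column1, Column2)""")

remaining = conn.execute(
    "SELECT Column1, Column2 FROM TestTable ORDER BY Column1, Column2").fetchall()
```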
Anyway, usually it is bad design, to have tables without a PK. You should at least add an "autoinc" ID, and if you do, you can use that ID instead of the `ROWID()` function, which is a non-standard extension by Sybase (some others have it, too). | If all fields are identical, you can just do this:
```
select distinct *
into #temp_table
from table_with_duplicates
delete table_with_duplicates
insert into table_with_duplicates select * from #temp_table
```
If all fields are not identical, for example, if you have an id that is different, then you'll need to list all the fields in the select statement, and hard code a value in the id to make it identical, if that is a field you don't care about.
For example:
```
insert into #temp_table (field1, field2, id)
select field1, field2, 999
from table_with_duplicates
``` | How to delete duplicate rows in sybase, when you have no unique key? | [
"",
"sql",
"sybase",
"duplicates",
"sqlanywhere",
""
] |
In SQL Server 2008 I'm trying to combine the data from 4 columns into a single column. I have searched and tried several different things, but none have worked. My latest attempt was:
```
SELECT [2012 Notes] + [2012 STEPS TAKEN TO REMEDY ISSUES] + [2013 NOTES] + [2013 STEPS TAKEN TO REMEDY ISSUES] AS 'ConcatNotes'
FROM ECRSurvey
UPDATE ECRSurvey Set UserNotes = 'ConcatNotes'
```
This does not work, though. Can someone tell me the proper way to do this? I'm basically trying to take 4 columns and combine that data into the UserNotes field. | That should work - you probably have some NULLs.
try
```
SELECT ISNULL([2012 Notes],'') + ISNULL([2012 STEPS TAKEN TO REMEDY ISSUES],'') + ISNULL([2013 NOTES],'') + ISNULL([2013 STEPS TAKEN TO REMEDY ISSUES],'') AS 'ConcatNotes'
FROM ECRSurvey
```
or to update
```
UPDATE ECRSurvey Set UserNotes = ISNULL([2012 Notes],'') + ISNULL([2012 STEPS TAKEN TO REMEDY ISSUES],'') + ISNULL([2013 NOTES],'') + ISNULL([2013 STEPS TAKEN TO REMEDY ISSUES],'')
``` | If any of those columns contain NULLs, the result will be NULL. You'll want COALESCE or ISNULL checks on the fields. | How can I combine 4 columns of data into a single column? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to upload data from a local SQL Server 2008 database to web server database within particular time.
I.e. suppose my database runs on localhost, and some inserts, updates and deletes are performed on the local database. Now I want to upload that data to the web server.
How can this be done?
Both databases are structurally identical. | You should use SQL Server's Copy Database functionality.
Connect to the database server >> right-click any database >> go to Tasks >> select the Copy Database... option.
This SQL Server Copy Database wizard will copy your whole database from one server to another. You can copy it to your web server as well.
Hope this will help you.
References:
[Reference 1, step by step](http://blog.sqlauthority.com/2011/05/27/sql-server-copy-database-from-instance-to-another-instance-copy-paste-in-sql-server/)
[Reference 2, from MSDN](http://msdn.microsoft.com/en-us/library/ms188664.aspx) | You can either do it manually (for example, using a program like <http://www.red-gate.com/products/sql-development/sql-data-compare/>) or you can make scripts/applications that do it for you.
Then you can either run those scripts manually or set up various automated jobs to do it, in various types of software (deploy managers, Windows jobs, SQL Server Agent jobs, etc.).
Basically - you're asking a general question about how to deploy a solution/change, and well, a ton of different methods exists. | Upload local SQL Server 2008 database to web server within particular time | [
"",
"asp.net",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have the following tables:
`account`
```
+------------+--------------+
| account_id | account_name |
+------------+--------------+
| 452 | a |
| 785 | b |
| 985 | c |
+------------+--------------+
```
and
`task`
```
+---------+------------+------------+
| task_id | task_date | account_id |
+---------+------------+------------+
| 2 | 01-01-2013 | 452 |
| 1 | 14-02-2013 | 452 |
| 5 | 03-01-2013 | 452 |
| 1 | 02-02-2013 | 785 |
| 7 | 07-01-2013 | 785 |
| 5 | 01-03-2013 | 785 |
| 1 | 25-03-2013 | 985 |
| 4 | 22-03-2013 | 985 |
+---------+------------+------------+
```
I need to show records from the table `task` and `account` such that only the oldest task can be shown and without showing the date. So the result will be:
```
+--------------+---------+
| account_name | task_id |
+--------------+---------+
| a | 2 |
| b | 7 |
| c | 4 |
+--------------+---------+
```
Using `min` function came to my mind. But since I do not want to show the date, how can I use the function.
```
SELECT A.ACCOUNT_NAME,
T.TASK_ID,
MIN(T.TASK_DATE)
FROM ACCOUNT A
INNER JOIN TASK T
ON A.ACCOUNT_ID = T.ACCOUNT_ID
GROUP BY ACCOUNT_NAME, TASK_ID
``` | You can either use analytics:
```
SELECT account_name, task_id
FROM (SELECT a.account_name,
t.task_id,
row_number()
over(PARTITION BY a.account_id ORDER BY t.task_date) task_num
FROM account a
INNER JOIN task t ON a.account_id = t.account_id
GROUP BY account_name, task_id)
WHERE task_num = 1
```
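Both queries implement the classic greatest-n-per-group pattern. As a quick sanity check of the logic, here is a portable correlated-subquery variant of the same idea, run in SQLite from Python with the sample data (not Oracle, but the same SQL shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (account_id INTEGER, account_name TEXT);
    CREATE TABLE task (task_id INTEGER, task_date TEXT, account_id INTEGER);
    INSERT INTO account VALUES (452,'a'),(785,'b'),(985,'c');
    INSERT INTO task VALUES
      (2,'2013-01-01',452),(1,'2013-02-14',452),(5,'2013-01-03',452),
      (1,'2013-02-02',785),(7,'2013-01-07',785),(5,'2013-03-01',785),
      (1,'2013-03-25',985),(4,'2013-03-22',985);
""")

rows = conn.execute("""
    SELECT a.account_name, t.task_id
    FROM account a
    JOIN task t ON t.account_id = a.account_id
    WHERE t.task_date = (SELECT MIN(t2.task_date)
                         FROM task t2
                         WHERE t2.account_id = t.account_id)
    ORDER BY a.account_name
""").fetchall()
```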
or a self-join:
```
SELECT a.account_name, t.task_id
FROM account a
INNER JOIN task t ON a.account_id = t.account_id
WHERE (t.account_id, t.task_date) = (SELECT t_in.account_id,
MIN(t_in.task_date)
FROM task t_in
GROUP BY t_in.account_id)
``` | Another approach, a bit shorter:
```
select max(a.accountname) as account_name
, max(t.taskid) keep(dense_rank first
order by t.taskdate) as task_id
from account1 a
join task t
on (a.accountid = t.accountid)
group by a.accountid
```
result:
```
ACCOUNT_NAME TASK_ID
------------ ----------
a 2
b 7
c 4
```
[**SQLFiddle Demo**](http://sqlfiddle.com/#!4/12b39/1) | Show the oldest or newest records without showing the date | [
"",
"sql",
"oracle",
""
] |
Quite a lot of database scripts are of the form:
```
IF NOT EXISTS(SELECT * FROM Countries WHERE Name = 'France')
INSERT INTO(Countries)
```
However, I've also seen people do:
```
IF NOT EXISTS(SELECT CountryID FROM Countries WHERE Name = 'France')
INSERT INTO(Countries)
```
And even:
```
IF NOT EXISTS(SELECT 1 FROM Countries WHERE Name = 'France')
INSERT INTO(Countries)
```
The advantage of the last one is supposedly that its more efficient: the query doesn't actually use any of the columns in the subquery, so it might be quicker to not bring any of them back. But it looks odd, so it strikes me that it might confuse some people. And does it make any difference anyway to the actual execution time? | I think it was back in the 6.5 - 7 period of SQL Server that they made the query optimizer smart enough to know that:
```
IF NOT EXISTS(SELECT * FROM Countries WHERE Name = 'France')
```
Does not actually need to return any row data. The advice to use `SELECT 1` pre-dates that, yet continues on as a myth.
Arguably, it's a fault with the SQL standard - they ought to allow `EXISTS` to start with the `FROM` clause and not have a `SELECT` portion at all.
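That the select list inside EXISTS is irrelevant is easy to confirm empirically — all three spellings return the same result. A quick check in SQLite (SQL Server behaves the same way here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Countries (CountryID INTEGER, Name TEXT)")
conn.execute("INSERT INTO Countries VALUES (1, 'France')")

# SELECT *, a named column, and the constant 1 are interchangeable in EXISTS
star, col, one = conn.execute("""
    SELECT EXISTS(SELECT *         FROM Countries WHERE Name = 'France'),
           EXISTS(SELECT CountryID FROM Countries WHERE Name = 'France'),
           EXISTS(SELECT 1         FROM Countries WHERE Name = 'France')
""").fetchone()
```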
---
And from [Subqueries with EXISTS](http://technet.microsoft.com/en-gb/library/ms189259%28v=sql.105%29.aspx):
> The select list of a subquery introduced by EXISTS almost always consists of an asterisk (\*). There is no reason to list column names because you are just testing whether rows that meet the conditions specified in the subquery exist. | No,
sql server is smart enough to make this optimization for you. | When using "IF NOT EXISTS(SELECT..." in Sql Server, does it matter which columns you choose? | [
"",
"sql",
"sql-server",
""
] |
SQlite, Android, true story. I have a table, which I use as a cache:
```
CREATE TABLE cache(key TEXT, ts TIMESTAMP, size INTEGER, data BLOB);
CREATE UNIQUE INDEX by_key ON cache(key);
CREATE INDEX by_ts ON cache(ts);
```
During app lifetime I fill the cache and at some point I want to clear it out and drop `N` records. Typically this table will contain ~25000 blobs ~100-500Kb each, total blobs size in the DB is 600-800Mb, but now I test for ~2000 which are about 60Mb (following numbers are for this case). Clear removes 90% of cache entries.
**I tried different ways to do it, here brief description:**
**[1]** Worst and simplest. First select, than remove one by one, walking cursor. Terribly slow.
**[2]** Make SQLite to do it with query (delete blobs with totally `N` bytes in them):
```
DELETE FROM blobs WHERE
ROWID IN (SELECT ROWID FROM blobs WHERE
(SELECT SUM(size) FROM blobs AS _ WHERE ts <= blobs.ts) <= N);
```
This is faster, but still terribly slow: ~15 sec. It also seems to have quadratic complexity.
**[3]** Select row around where to remove (using average blob size for computations) and delete with simple `WHERE` clause:
```
-- Find row after which to delete, let it's time stamp is T0:
SELECT ts FROM cache ORDER BY ts LIMIT 1 OFFSET count;
-- Delete
DELETE FROM cache WHERE ts < T0;
```
This is much better, but takes ~7 sec.
**[4]** Create a new table, copy what I need to save, and drop the old one. Note that I create the index on the new table AFTER I have copied all this stuff:
```
-- Insert only rows I want leave
INSERT INTO temp(key, ts, size, data) SELECT key, ts, size, data
FROM cache ORDER BY ts LIMIT count;
-- Drop table and indices.
DROP INDEX by_key;
DROP INDEX by_ts;
DROP TABLE cache;
-- Rename temp table and create indices...
```
Copying takes ~300ms for 6Mb for blobs. But `DROP TABLE` is about ~8 sec.
Note that in all cases I do `VACUUM`, which takes another ~1 sec. How can I make this fast? Why are `DROP TABLE` and deletion so slow? I think it might be because of the indices: when I dropped the key index before `DELETE`, it worked faster. How can I make SQLite delete fast? | You are working on a database with "big" data - i.e., each blob uses multiple pages.
At some point near optimal performance you will reach a limit you can't improve.
Checking all your choices, I see different behaviors, not just different algorithms.
[1] This one shouldn't be terribly slow as long as you use a transaction. You need two operations at once: a query (to get the blob size) and a delete.
[2] This is a good approach. It is two queries and a delete, all in a single command, so the SQLite engine can optimize.
[3] This is a different behaviour from all the others. It is the same as `DELETE FROM cache WHERE ts < (SELECT ts FROM cache ORDER BY ts LIMIT 1 OFFSET count)`. The query is less expensive than the previous one, but I bet the number of rows deleted is far smaller too! The expensive part of a query/delete is the delete itself. Query optimization is important, but the delete will always be the slow part.
[4] This is a very bad approach!!! Copying all your data to a new table - maybe another database - will be VERY expensive. I see only one advantage in it: you may copy the data to a new database and avoid `VACUUM`, as the new database is built from scratch and is clean.
About `VACUUM`... even worse than `DELETE` is `VACUUM`. Vacuum is not supposed to be used often on a database. I understand this command is supposed to "clean" your database, but cleaning shouldn't be a frequent operation - databases are optimized for select/insert/delete/update, not for keeping all data at a minimal size.
My choice would be a single `DELETE ... IN (SELECT ...)` operation, according to predefined criteria. `VACUUM` wouldn't be used, at least not often. One good option would be to monitor the db size - when it runs over a limit, run the (admittedly expensive) cleaning to trim the database.
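A minimal, hedged sketch of that choice — one `DELETE ... IN (SELECT ...)` inside an explicit transaction — using Python's sqlite3 and a simplified cache table (keeping only the 3 newest rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (key TEXT, ts INTEGER, size INTEGER)")
conn.executemany("INSERT INTO cache VALUES (?, ?, ?)",
                 [("k%d" % i, i, 100) for i in range(10)])

# One DELETE inside one transaction: drop everything but the 3 newest rows.
# LIMIT -1 OFFSET 3 means "skip the 3 newest, take all the rest" in SQLite.
with conn:
    conn.execute("""DELETE FROM cache
                    WHERE rowid IN (SELECT rowid FROM cache
                                    ORDER BY ts DESC
                                    LIMIT -1 OFFSET 3)""")

remaining = conn.execute("SELECT ts FROM cache ORDER BY ts").fetchall()
```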
At last, when using multiple commands, never forget to use transactions! | Apparently, what is slow is not finding the records to be deleted, but the actual deletion itself.
Check if [PRAGMA secure\_delete](http://www.sqlite.org/pragma.html#pragma_secure_delete) is set by default in your Android's SQLite.
You should just disable it, just to be sure.
You do not need to run `VACUUM`; SQLite automatically reuses freed pages.
You need `VACUUM` only if you *actually know* that the database will not grow again in size in the future. | SQLite: efficient way to drop lots of rows | [
"",
"android",
"sql",
"sqlite",
""
] |
I've got 3 Tables:
```
tblCustomer
CustomerID CustomerName
1 Customer 1
2 Customer 2
tblOrder
OrderID CustomerID OrderTypeID LoanNumber
1 1 1 98513542
2 1 1 71283527
3 1 1 10268541
4 1 1 61258965
tblOrderType
OrderTypeID OrderTypeName
1 Purchase
2 Rental
```
Now, I'm looking to return `CustomerID`, `CustomerName`, `OrderTypeName` and `OrderCount`, where `OrderCount` is how many of each order the customer has. I'm using the following query:
```
SELECT tblCustomer.CustomerID, tblCustomer.customerName, tblOrderType.OrderTypeName, tblOrder.OrderID
FROM tblCustomer
INNER JOIN tblOrder
ON tblCustomer.CustomerID = tblOrder.CustomerID
INNER JOIN tblOrderType
ON tblOrderType.OrderTypeID = tblOrder.OrderTypeID
```
It sort of works. It gets all I'm asking for, except for the `OrderCount`, obviously. The result is like this:
```
CustomerID CustomerName OrderTypeName OrderID
1 Customer 1 Purchase 1
1 Customer 1 Purchase 2
1 Customer 1 Purchase 3
1 Customer 1 Purchase 4
```
But what I'm looking for is this:
```
CustomerID CustomerName OrderTypeName OrderCount
1 Customer 1 Purchase 4
```
Now, I've tried adding `Count()` into the query at various places (`Count(tblOrder.OrderID)` for one), and I get an error that `tblCustomer.CustomerID is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause`.
This isn't a homework assignment. I just don't know a whole lot about sql and database interactions, since it wasn't taught in my school, and I've got a friend who's throwing scenarios at me. | You need to use grouping and aggregation:
```
SELECT tblCustomer.CustomerID, tblCustomer.customerName, tblOrderType.OrderTypeName,
count(tblOrder.OrderID) as OrderCount
FROM tblCustomer
INNER JOIN tblOrder
ON tblCustomer.CustomerID = tblOrder.CustomerID
INNER JOIN tblOrderType
ON tblOrderType.OrderTypeID = tblOrder.OrderTypeID
GROUP BY
tblCustomer.CustomerID,
tblCustomer.customerName,
tblOrderType.OrderTypeName
```
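The grouped query can be verified against the sample data; the SQL below is portable, run here through Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblCustomer (CustomerID INTEGER, CustomerName TEXT);
    CREATE TABLE tblOrderType (OrderTypeID INTEGER, OrderTypeName TEXT);
    CREATE TABLE tblOrder (OrderID INTEGER, CustomerID INTEGER, OrderTypeID INTEGER);
    INSERT INTO tblCustomer VALUES (1,'Customer 1'),(2,'Customer 2');
    INSERT INTO tblOrderType VALUES (1,'Purchase'),(2,'Rental');
    INSERT INTO tblOrder VALUES (1,1,1),(2,1,1),(3,1,1),(4,1,1);
""")

rows = conn.execute("""
    SELECT c.CustomerID, c.CustomerName, ot.OrderTypeName,
           COUNT(o.OrderID) AS OrderCount
    FROM tblCustomer c
    JOIN tblOrder o      ON c.CustomerID = o.CustomerID
    JOIN tblOrderType ot ON ot.OrderTypeID = o.OrderTypeID
    GROUP BY c.CustomerID, c.CustomerName, ot.OrderTypeName
""").fetchall()
```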
You can't use aggregate functions without grouping everything that isn't aggregated. | The error message does tell you the issue. Whenever you use an aggregation operator (`COUNT`, `SUM`, `AVG`, etc.), you have to tell the database how these should be aggregated together. You do this with a [`GROUP BY`](http://technet.microsoft.com/en-us/library/ms177673.aspx) statement (the link there is for SQL Server, but it should generally apply to all database engines):
```
SELECT
C.CustomerID
,C.customerName
,OT.OrderTypeName
,COUNT(O.OrderID)
FROM tblCustomer C
INNER JOIN tblOrder O
ON C.CustomerID = O.CustomerID
INNER JOIN tblOrderType OT
ON OT.OrderTypeID = O.OrderTypeID
GROUP BY
C.CustomerID
,C.customerName
,OT.OrderTypeName
``` | How to get the count and display only one row | [
"",
"sql",
""
] |
I have got a sample table like this.
```
ID Name
1 Jane
2 John
3 Kevin
4 George
5 Jane
```
The result I would like to get from the query would be a table of two values:
```
ID Name
1 Jane
5 Jane
```
I know this could be done using 2 copies of a table and then comparing names of one and the other table but I couldn't get this working, though.
--
Sorry, didn't add this. I want to get all duplicate values.
I can see all the responses about duplicates and so on, and I've seen the post on Stack Overflow about finding duplicates using count(\*), but I wonder whether it could be done by creating two copies of the table (say t and tt) and then checking t.name = tt.name? | I usually recommend using `JOIN`s rather than `EXISTS`, `IN` or `NOT IN` in queries.
```
Select A.ID,B.Name from Tablename A
inner join Tablename B
on A.Name = B.Name
where a.id <> b.id
```
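With the sample data, the join produces exactly the rows asked for. A runnable check through Python's sqlite3 (the table name `People` is invented, since the question doesn't give one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE People (ID INTEGER, Name TEXT)")
conn.executemany("INSERT INTO People VALUES (?, ?)",
                 [(1, "Jane"), (2, "John"), (3, "Kevin"),
                  (4, "George"), (5, "Jane")])

# self-join on the name; the id inequality keeps only rows with a twin
dupes = conn.execute("""
    SELECT A.ID, B.Name
    FROM People A
    JOIN People B ON A.Name = B.Name
    WHERE A.ID <> B.ID
    ORDER BY A.ID
""").fetchall()
```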
and
I don't usually recommend this; IDs should normally be joined on rather than the name.
here is the [DEMO](http://sqlfiddle.com/#!6/25036e/1) | One way of doing this is by using `EXISTS()`.
```
SELECT *
FROM tablename a
WHERE EXISTS
(
SELECT 1
FROM tableName b
WHERE a.Name = b.name
GROUP BY Name
HAVING COUNT(*) > 1
)
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!2/28e729/1) | Compare rows in SQL | [
"",
"sql",
"compare",
""
] |
I have two tables in a SQL Server 2008 R2 database. I want to create a View where it has all the columns from Table1 and receives an additional column called "Photo\_Exist" appended to it and it is assigned "1" or "True" if ID in Table1 exists in Table2.
**Table1**:
ID, Col1, Col2, ..., Coln
**Table2**:
Linked\_ID
**View**:
ID, Col1, Col2, ..., Coln, Photo\_Exist
Thanks in advance!
Alex | Try this
```
SELECT *,
CASE WHEN EXISTS (SELECT * FROM Table2 AS T2 WHERE T2.Linked_ID=T1.ID)
THEN 1
ELSE 0
END AS Photo_Exist
FROM Table1 AS T1
``` | Create a view using this query. This should help
```
select table1.*, case when table2.linked_id is null then 0 else 1 end as Photo_exist
from table1 left outer join table2 on table1.id =table2.linked_id
``` | Set value to a column on the row where its ID exists in another table | [
"",
"sql",
"sql-server",
"database",
"view",
"sql-server-2008-r2",
""
] |
> Exercise 39: Define the ships that "survived for future battles";
> being damaged in one battle, they took part in another.
Database Schema: <http://www.sql-ex.ru/help/select13.php#db_3>
My approach:
```
SELECT distinct o.ship from Outcomes o
WHERE o.RESULT = 'damaged'
AND exists (select 1 FROM Outcomes o2 WHERE o2.ship = o.ship
AND (o2.result='OK' OR o2.result='sunk'))
```
sql-ex says
> Your query produced correct result set on main database, but it failed
> test on second, checking database
The correct result matched my output.
Where did I fail? | Solved! I needed to add DISTINCT:
```
select distinct kk.ship from
(Select ship,date from outcomes oo,battles bb where oo.battle=bb.name) tt
inner join
(Select ship,date as damagedate from outcomes oo,battles bb where result='damaged' AND oo.battle=bb.name) kk
ON tt.ship=kk.ship AND tt.date>kk.damagedate
``` | ```
with a as (select * from outcomes o join battles b on o.battle = b.name)
select distinct a1.ship
from a a1, a a2
where a1.ship = a2.ship
  and a1.result = 'damaged'
  and a1.date < a2.date
``` | sql-ex exercise 39 failed on second server error? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm running into some issues with long query times on very basic queries. This is my first time working with a bigger table (12.5 mil rows) and I'm just trying to figure out what might be bottlenecking or what I can do to improve performance. All data is in 1 table.
**Server Specs:**
Windows Server 2008 R2 Standard
Intel Xeon X3430 @ 2.40GHz
4GB RAM
1TB 7200RPM HDD
**Table Information**
**ID** int(10) UN PK AI
**Store** int(2) UN
**Date** datetime
**Register** int(2) UN
**Cashier** int(3) UN
**Department** int(4) UN
**Total** decimal(7,2)
**Customers** int(5) UN
**Items** int(5) UN
**Time** int(5) UN
**Example Queries and Times**
```
SELECT Store, sum(Total)
FROM sales
GROUP BY Store
```
*19.56sec*
```
SELECT Date, Register, Customers, Items, Total
FROM sales
WHERE Date(Date) = Date('2013-10-22')
AND Store = 1
```
*9.59sec*
I can certainly provide more information, but is there anything glaringly obvious as to why these are running so slow? | Yes there are 2 glaring issues. In the first query you have no criteria -- hence you are asking for the entire sales table to be read, grouped and summarized. The grouping is going to create a temporary table as well. Hence you will make a table scan and that will be limited to what the hardware can provide in terms of io performance, which is likely your bottleneck.
In the second query you are performing a function on the Date column Date(Date) which means that even if you have an index on Date, it will not be able to utilize that index , so once again you are going to table scan the table.
With that said, you should run explain extended SELECT.... on each query to gain confidence in this assessment. | Query 1:
```
SELECT Store, sum(Total)
FROM sales
GROUP BY Store ;
```
Add an index on `(Store, Total)`:
```
ALTER TABLE sales
ADD INDEX store_total_IX -- pick a name for the index
(store, total) ;
```
---
Query 2:
```
SELECT Date, Register, Customers, Items, Total
FROM sales
WHERE Date(Date) = Date('2013-10-22')
AND Store = 1 ;
```
Add an index on `(Store, Date)`:
```
ALTER TABLE sales
ADD INDEX store_date_IX -- pick a name for the index
(store, date) ;
```
and rewrite the query - so the index can be used - as:
```
SELECT Date, Register, Customers, Items, Total
FROM sales
WHERE Date >= '2013-10-22'
AND Date < '2013-10-22' + INTERVAL 1 DAY
AND Store = 1 ;
``` | Database Performance Problems | [
"",
"mysql",
"sql",
"performance",
""
] |
So I have two tables, players and matches with structures like so:
```
Table "public.players"
Column | Type | Modifiers
--------+---------+------------------------------------------------------
id | integer | not null default nextval('players_id_seq'::regclass)
name | text |
wins | integer | default 0
loses | integer | default 0
rating | integer | default 1500
```
and
```
Table "public.matches"
Column | Type | Modifiers
---------------+---------+------------------------------------------------------
id | integer | not null default nextval('matches_id_seq'::regclass)
player1 | integer |
player1rating | integer |
player2 | integer |
player2rating | integer |
winner | boolean |
```
Where winner is true if player1 won that match or false if player2 won that match.
Doing simple win/loss comparisons is easy using the players table; however, I'm wanting to do a comparison of matches where two players have faced each other, i.e. what their records against each other are.
So I'm confused on how I'd summarize where conditions given that a given player may be listed as either player1 or player2:
P1 Wins:
```
(player1 = <player1> AND winner = true) OR (player2 = <player1> AND winner = false)
```
P2 Wins:
```
(player1 = <player2> AND winner = true) OR (player2 = <player2> AND winner = false)
```
Losses would just be the opposite of the other player's wins.
Something that returned information like:
```
id | name | wins | loses | rating | wins_v_opp | loses_v_opp
------+----------+------+-------+--------+------------+------------
4200 | Sinku | 5 | 48 | 1191 | 1 | 4
4201 | Kenshiro | 33 | 29 | 1620 | 4 | 1
```
What I came up with before update2 below:
```
CREATE FUNCTION matchup(text, text) RETURNS SETOF record AS $$
DECLARE
    player1_name ALIAS FOR $1;
    player2_name ALIAS FOR $2;
    player1_id players.id%TYPE;
    player2_id players.id%TYPE;
BEGIN
EXECUTE 'SELECT id FROM players WHERE name LIKE $1'
INTO player1_id
USING player1_name;
IF NOT FOUND THEN
RAISE EXCEPTION 'Player1 % not found', player1_name;
END IF;
EXECUTE 'SELECT id FROM players WHERE name LIKE $1'
INTO player2_id
USING player2_name;
IF NOT FOUND THEN
RAISE EXCEPTION 'Player2 % not found', player2_name;
END IF;
RETURN QUERY EXECUTE 'WITH cte_winners AS (
SELECT
CASE WHEN winner THEN m.player1 ELSE m.player2 END AS player,
COUNT(*) AS wins_v_opp,
sum(count(*)) over() - COUNT(*) AS loses_v_opp
FROM matches AS m
WHERE player1 IN ($1,$2) AND player2 IN ($1,$2)
GROUP BY player
)
SELECT * FROM players AS p
LEFT OUTER JOIN cte_winners AS cw ON cw.player = p.id WHERE p.id IN ($1,$2)'
USING player1_id,player2_id;
END;
$$ LANGUAGE plpgsql;
``` | if you want to select matches won by `<player1>` or `<player2>`, you can use this query:
```
with cte_winners as (
select
id, case when winner then player1 else player2 end as player
from public.matches
)
select *
from cte_winners
where player in (<player1>, <player2>)
```
**update**
to get desired output your can use something like this:
```
with cte_winners as (
select
case when winner then m.player1 else m.player2 end as player,
count(*) as wins_v_opp,
sum(count(*)) over() - count(*) as loses_v_opp
from matches as m
where player1 in (4200, 4201) and player2 in (4200, 4201)
group by player
)
select *
from players as p
left outer join cte_winners as cw on cw.player = p.id
where p.id in (4200, 4201)
```
**`sql fiddle demo`**
**update2**
```
with cte_player as (
select p.id, p.name, p.wins, p.losses, p.rating
from players as p
where p.name in ('Sinku', 'Kenshiro')
), cte_winner as (
select
case when winner then m.player1 else m.player2 end as player,
count(*) as wins_v_opp,
sum(count(*)) over() - count(*) as loses_v_opp
from matches as m
where
m.player1 in (select p.id from cte_player as p) and
m.player2 in (select p.id from cte_player as p)
group by player
)
select
p.id, p.name, p.wins, p.losses, p.rating,
m.wins_v_opp, m.loses_v_opp
from cte_player as p
left outer join cte_winner as m on m.player = p.id
```
**`sql fiddle demo`** | ```
SELECT id, name, wins, losses, rating
,count(CASE WHEN winner THEN m.player1 = p.id ELSE m.player2 = p.id END
OR NULL) AS wins_v_opp
,count(CASE WHEN winner THEN m.player2 = p.id ELSE m.player1 = p.id END
OR NULL) AS loses_v_opp
FROM (
SELECT *, COALESCE(lag(p.id) OVER(), lead(p.id) OVER()) AS id2
FROM players p
WHERE p.name in ('Sinku', 'Kenshiro')
) p
JOIN matches m ON m.player1 IN (id, id2)
AND m.player2 IN (id, id2)
GROUP BY 1,2,3,4,5;
```
-> [SQLfiddle](http://sqlfiddle.com/#!12/72776/1) | Summarize totals given multiple conditions | [
"",
"sql",
"postgresql",
"plpgsql",
""
] |
My tables:
feeds\_time:
```
user_id | time
1 | 123123
2 | 1231111
3 | 1233223
...
```
users\_follow
```
user_id | follow_id
1 | 2
1 | 3
2 | 3
```
And table posts - but this is not important in this problem.
Actually my query looks like:
```
SELECT `id`, `name`, `user_id`, `time`, 'posts' as `what`
FROM `posts`
WHERE `user_id` IN (SELECT `follow_id`
FROM users_follow WHERE user_id = posts.user_id)
```
Now, I want to add one more WHERE condition, `WHERE posts.time > feeds_time.time`, for the "current" `user_id`.
How can I do it? | You can try this:
Use join
```
SELECT A.`id`, A.`name`, A.`user_id`, A.`time`, 'posts' as `what`
FROM `posts` A
JOIN feeds_time B ON A.user_id = B.user_id
WHERE A.`user_id` IN (SELECT `follow_id`
FROM users_follow WHERE user_id = A.`user_id`)
AND A.time > B.time
``` | Try this:
```
SELECT * FROM (SELECT `id`, `name`, `user_id`, `time`, 'posts' as `what`
FROM `posts`
WHERE `user_id` IN (SELECT `follow_id` FROM users_follow
WHERE user_id = posts.user_id)) a, feeds_time
where a.`time` > feeds_time.time and a.`user_id` = feeds_time.`user_id`
``` | SELECT from 3 tables and "IN" | [
"",
"mysql",
"sql",
""
] |
If I write a hive sql like
```
ALTER TABLE tbl_name ADD PARTITION (dt=20131023) LOCATION 'hdfs://path/to/tbl_name/dt=20131023';
```
How can I query this location about partition later? Because I found there is some data in location but I can't query them, hive sql like
```
SELECT data FROM tbl_name where dt=20131023;
``` | ```
show table extended like 'tbl_name' partition (dt='20131023');
```
> [Show Tables/Partitions Extended](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowTables/PartitionsExtended)
>
> `SHOW TABLE EXTENDED` will list information for all tables matching the given regular expression. Users cannot use regular expression for table name if a partition specification is present. This command's output includes basic table information and file system information like `totalNumberFiles`, `totalFileSize`, `maxFileSize`, `minFileSize`, `lastAccessTime`, and `lastUpdateTime`. If partition is present, it will output the given partition's file system information instead of table's file system information. | Do a describe on the partition instead of the full table.
This will show the linked location if it's an external table.
```
describe formatted tbl_name partition (dt='20131023')
``` | How to know location about partition in hive? | [
"",
"sql",
"hadoop",
"hive",
""
] |
I have data that is linked from SQL Server into an Excel document. The column format on the SQL Server side is `datetime2`.
When I get the data via an ODBC connection it comes across as a string?
I tried using
```
CAST(column AS DATE )
```
but that didn't work.
I tried reformatting via
```
CONVERT(VARCHAR(10), column, 103)
```
as well but that didn't work.
I tried retrieving the data via Microsoft query as well but that didn't work.
At the moment I am using VBA code like:
```
While (ActiveCell.Value <> "")
ActiveCell.Value = DATEVALUE(ActiveCell.Value)
ActiveCell.Offset(1,0).Activate
Wend
```
and looping through each column that needs this treatment, but 100,000 rows in multiple columns takes forever to loop through. Are there any alternatives? | It appears that Excel, and Access as well, don't like the SQL Server `datetime2` format.
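For example, converting on the SQL side before Excel ever sees the data may look like this sketch (the table and column names are placeholders, not from the original question):

```sql
-- Cast datetime2 down to datetime so the ODBC driver reports a date/time type:
SELECT CAST(MyDatetime2Col AS datetime) AS MyCol
FROM MyTable;

-- Or hand Excel an unambiguous string (ODBC canonical format, style 120):
SELECT CONVERT(varchar(19), MyDatetime2Col, 120) AS MyCol
FROM MyTable;
```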
I converted the data into the `datetime` format and that seems to be working now. | You can simply convert the date in Excel to the following format:
> "YYYY-MM-DD HH:mm:ss"
This is the [default ODBC canonical format](http://msdn.microsoft.com/en-us/library/ms187928.aspx). In this case the implicit conversion will be made and the correct results will be achieved.
Another way is to try to perform convert using another style:
> CONVERT(VARCHAR(10), column, 120) | MS SQL Server Dates Excel | [
"",
"sql",
"sql-server",
"excel",
"datetime",
"odbc",
""
] |
I am a C# developer; I am not really good with SQL, and I have a simple question here. I need to move more than 50 million records from one database to another. I tried to use the import function in MS SQL, however it got stuck because the log was full (I got the error message "The transaction log for database 'mydatabase' is full due to 'LOG\_BACKUP'"). The database recovery model was set to simple. My friend said that importing millions of records using Tasks -> Import Data will cause the log to become massive, and told me to use a loop instead to transfer the data. Does anyone know how and why? Thanks in advance. | If you are moving the entire database, use backup and restore; it will be the quickest and easiest.
<http://technet.microsoft.com/en-us/library/ms187048.aspx>
If you are just moving a single table, read about and use the BCP command-line tool for this many records:
> The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.
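As a rough sketch of the out/in round trip (the server, database, file, and table names are placeholders; `-n` selects native format, `-T` uses a trusted connection, and `-b` sets the batch size):

```
bcp MyDb.dbo.MyTable out C:\dump\MyTable.dat -n -S SourceServer -T
bcp MyDb.dbo.MyTable in  C:\dump\MyTable.dat -n -S DestServer -T -b 50000
```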
<http://technet.microsoft.com/en-us/library/ms162802.aspx> | The fastest and probably most reliable way is to bulk copy the data out via SQL Server's [bcp.exe utility](http://technet.microsoft.com/en-us/library/ms162802.aspx). If the schema on the destination database is *exactly* identical to that on the source database, including nullability of columns, export it in "native format":
* <http://technet.microsoft.com/en-us/library/ms191232.aspx>
* <http://technet.microsoft.com/en-us/library/ms189941.aspx>
If the schema differs between source and target, you will encounter...interesting (yes, interesting is a good word for it) problems.
If the schemas differ or you need to perform any transforms on the data, consider using text format. Or another format (BCP lets you create and use a format file to specify the format of the data for export/import).
You might consider exporting data in chunks: if you encounter problems it gives you an easier time of restarting without losing all the work done so far.
You might also consider zipping the exported data files up to minimize time on the wire.
Then FTP the files over to the destination server.
bcp them in. You can use the bcp utility on the destination server, or the BULK INSERT statement in SQL Server, to do the work. Makes no real difference.
The nice thing about using BCP to load the data is that the load is what is described as a 'non-logged' transaction, though it's really more like a 'minimally logged' transaction.
If the tables on the destination server have IDENTITY columns, you'll need to use the [SET IDENTITY_INSERT](http://technet.microsoft.com/en-us/library/aa259221%28v=sql.80%29.aspx) statement to disable the identity column on the table(s) involved for the nonce (don't forget to re-enable it). After your data is imported, you'll need to run [DBCC CHECKIDENT](http://technet.microsoft.com/en-us/library/ms176057.aspx) to get things back in sync.
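A minimal sketch of that identity handling (the table name is a placeholder):

```sql
SET IDENTITY_INSERT dbo.MyTable ON;
-- ...load the data with explicit ID values here...
SET IDENTITY_INSERT dbo.MyTable OFF;

-- Check (and, if necessary, correct) the identity seed after the load:
DBCC CHECKIDENT ('dbo.MyTable');
```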
And depending on what you're doing, it can sometimes be helpful to put the database in [single-user mode](http://technet.microsoft.com/en-us/library/ms345598.aspx) or dbo-only mode for the duration of the surgery: <http://msdn.microsoft.com/en-us/library/bb522682.aspx>
Another approach I've used to great effect is to use Perl's DBI/DBD modules (which provide access to the bulk copy interface) and write a perl script to suck out the data from the source server, transform it and bulk load it directly into the destination server, without having to save it to disk and move it. Also means you can trap errors and design things for recovery and restart right at the point of failure. | SQL, moving million records from a database to other database | [
"",
"sql",
"sql-server",
""
] |
I am working on a project that accepts 3 different parameters, the date is required and the First and Last name are optional. We setup the query below, but even if I change the parameters on the report (SSRS) it still looks at @LetterCreated as being '1/1/1950', any ideas on how I can get this to just accept the parameters? We set the date this way because we want the report to show with all of the reports when it is initially opened.
```
Alter Proc
AS
BEGIN
SET NOCOUNT ON;
DECLARE @LetterCreated DATETIME,
@FirstName VARCHAR(20),
@LastName VARCHAR(20)
SELECT @LetterCreated = '1/1/1950'
SELECT @FirstName = ''
SELECT @LastName = ''
SELECT
LETTERCREATETIME,
Firstname,
LastName,
From RedFlagAddress
WHERE
CASE WHEN @LetterCreated='1/1/1950'
THEN '1/1/1950'
ELSE ISNULL(LETTERCREATETIME, '07/05/81')
END = @LetterCreated
AND (LastName LIKE @LASTNAME + '%' AND FirstName LIKE @FirstNAME + '%')
END
```
Any suggestions would be greatly appreciated. | I'm guessing here but what you may want is
* IF parameter `@LetterCreated` is null then it should not be used as a filter at all
* IF the data in `RedFlagData.LETTERCREATETIME` is Null then it should be matched to a filter date of `'07/05/81'` FWR
Assuming that you have the 'allow nulls' check on the `RDL/RDLC` set for the `@LetterCreated` `parameter`, the `where` needs to be changed, the optional filter can be set like so:
```
ISNULL(@LetterCreated, LETTERCREATETIME) = ISNULL(LETTERCREATETIME, '07/05/81')
```
If you get rid of the magic date, then you can guarantee no date filter applied even if `LETTERCREATETIME` is null, by the filter:
```
ISNULL(@LetterCreated, LETTERCREATETIME) = LETTERCREATETIME
OR
(@LetterCreated IS NULL AND LETTERCREATETIME IS NULL)
```
Thus:
```
ALTER PROC XYZ
(
@LetterCreated DATETIME,
@FirstName VARCHAR(20) = NULL,
@LastName VARCHAR(20) = NULL
)
as
begin
select
LETTERCREATETIME,
Firstname,
LastName,
From
RedFlagAddress
where
(
ISNULL(@LetterCreated, LETTERCREATETIME) = LETTERCREATETIME
OR
(@LetterCreated IS NULL AND LETTERCREATETIME IS NULL)
)
AND LastName LIKE ISNULL(@LastName, '') + '%'
AND FirstName LIKE ISNULL(@FirstName, '') + '%'
end
```
One caveat : The performance of this query will be terrible, given the amount of functional manipulation in the where clause, and the `LIKEs` and `ORs` - Hopefully `RedFlagAddress` is a relatively small table? If not, you may need to [rethink your approach](https://softwareengineering.stackexchange.com/questions/160031/best-method-to-implement-a-filtered-search) | You are setting the `@lettercreated` date in the procedure. Variables defined within the procedure are not visible outside it.
You should declare the parameters as parameters, and set the default in the declaration
```
ALTER PROC yourproc
(
@LetterCreated DATETIME = '1950-1-1',
@FirstName VARCHAR(20) = '',
@LastName VARCHAR(20) = ''
)
as
begin
select
LETTERCREATETIME,
Firstname,
LastName,
From
RedFlagAddress
where
(
ISNULL(LETTERCREATETIME, '07/05/81') = @LetterCreated
or
@LetterCreated = '1950-1-1'
)
AND LastName LIKE @LASTNAME + '%'
AND FirstName LIKE @FirstNAME + '%'
end
``` | Passing Parameters to a stored Procedure | [
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
"reporting-services",
""
] |
I have a database and a lot of tables inside it. I wrote some information into each table's and each column's description property. Now, using a query, I want to see all the table and column descriptions.
Note: DATABASE -> ms sql server
Can you please help me? | Check this query:
```
SELECT
t.name AS TableName
, td.value AS TableDescription
, c.name AS ColumnName
, cd.value AS ColumnDescription
FROM sys.tables t
INNER JOIN sys.columns c ON t.object_id = c.object_id
LEFT JOIN sys.extended_properties td
ON td.major_id = t.object_id
AND td.minor_id = 0
AND td.name = 'MS_Description'
LEFT JOIN sys.extended_properties cd
ON cd.major_id = t.object_id
AND cd.minor_id = c.column_id
AND cd.name = 'MS_Description'
``` | You can see that using [INFORMATION\_SCHEMA](http://technet.microsoft.com/en-us/library/ms186778.aspx)
To get columns for each table you can do:
```
SELECT * FROM INFORMATION_SCHEMA.COLUMNS
```
To get table information you can do:
```
SELECT * FROM INFORMATION_SCHEMA.TABLES
``` | Getting the descriptions of the tables and columns via a SQL statement | [
"",
"sql",
"sql-server",
""
] |
I can `INSERT` a row into SQL like this:
```
INSERT INTO MyTable VALUES (0, 'This is some text...');
```
However, what if I wanted *This is some text...* to be the contents of `C:\SomeFile.txt`. Is there a method in Oracle that makes this possible?
I dug through the Docs a bit and found the `LOAD FILE` method, however it appears to be for bulk loading data. For example, it wants a `FIELDS TERMINATED BY` parameter and what not. I want to simply INSERT a single row and set a single column to by the contents of a file on the local disk. | You should never be reading files from the DATABASE SERVER's file system to insert into the db.
Why do you want to do this? You really should read the file in your application code, then insert the string or binary data through standard SQL.
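That said, if it absolutely must happen on the database server, a hedged PL/SQL sketch of the approach could look like the following. It assumes a directory object named `DATA_DIR` has already been created for the folder on the server, and it uses the table from the question; treat it as an illustration, not a recommendation:

```sql
DECLARE
  f    UTL_FILE.FILE_TYPE;
  line VARCHAR2(32767);
  txt  CLOB;
BEGIN
  f := UTL_FILE.FOPEN('DATA_DIR', 'SomeFile.txt', 'r');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(f, line);
      txt := txt || line || CHR(10);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;  -- GET_LINE raises this at end of file
    END;
  END LOOP;
  UTL_FILE.FCLOSE(f);

  INSERT INTO MyTable VALUES (0, txt);
  COMMIT;
END;
/
```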
If you really must do this you will need to use the Oracle [UTL\_FILE](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/u_file.htm) package, and write some PL/SQL to read the file contents into a variable, then insert it. | To do this I think you would have to use PL/SQL. This is technically an "advanced SQL".
search up a bit about PL/SQL. You can copy your SQL directly into PL/SQL and use the same database. | Inserting a single row in SQL, but loading the contents from a file | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a Listing model which has a category column and size column. For each category I have an array of sizes. I want to return only the Listings in each category that correspond to the size arrays. (I also have an array of designers as a condition with params[:designer].)
Params hash:
```
params[:category] => ['tops', 'bottoms', 'outerwear', 'footwear']
params['tops'] => ['M', 'L']
params['bottoms'] => []
params['outerwear'] => ['XL']
params['footwear'] => ['11', '12']
```
I've created a loop to do this:
```
@listings = []
params[:category].each do |category|
@listings += Listing.where(category: category, size: params[category], designer: params[:designer], sold: nil).includes(:photos).page(params[:category_page]).per(@perpage)
end
```
But I need it to be all in one query since I'm using the kaminari gem (the .page call) to paginate it. | I ended up using Arel which is pretty good. Arel lets you build up whatever query you want and then call it on Model.where(). It's kind of complicated but was the only solution I found that worked.
```
t = Listing.arel_table
query = t[:category].eq('rooney')
params[:category].each do |category|
if params[category]
params[category].each do |size|
query = query.or(t[:category].eq(category).and(t[:size].eq(size)))
end
end
end
dquery = t[:designer].eq('rooney')
params[:designer].each do |designer|
dquery = dquery.or(t[:designer].eq(designer))
end
query = query.and(dquery)
@listings = Listing.where(query).includes(:photos).page(params[:category_page]).per(@perpage)
```
EDIT:
The designer query can be simplified using .eq\_any().
```
t = Listing.arel_table
query = t[:category].eq('rooney')
params[:category].each do |category|
if params[category]
params[category].each do |size|
query = query.or(t[:category].eq(category).and(t[:size].eq(size)))
end
end
end
dquery = t[:designer].eq_any(params[:designer])
query = query.and(dquery)
@listings = Listing.where(query).includes(:photos).page(params[:category_page]).per(@perpage)
``` | You could pass an array to `where`:
```
@listings = Listing.where(category: params[:category], s...
``` | Reduce loop of activerecord queries into one query | [
"",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
I have the following query:
```
UPDATE TOP (@MaxRecords) Messages
SET status = 'P'
OUTPUT inserted.*
FROM Messages
where Status = 'N'
and InsertDate >= GETDATE()
```
In the Messages table there is a priority column and I want to select high-priority messages first, so I need an ORDER BY. But I don't need the output itself sorted; I need the data sorted before the update runs.
As far as I know it's not possible to add ORDER BY to an UPDATE statement. Any other ideas? | You can use a common table expression for this:
```
;with cte as (
select top (@MaxRecords)
status
from Messages
where Status = 'N' and InsertDate >= getdate()
order by ...
)
update cte set
status = 'P'
output inserted.*
```
This one uses the fact that in SQL Server it's possible to update cte, like updatable view. | You can try sub query like
```
UPDATE Messages
SET status = 'P'
OUTPUT inserted.*
WHERE MessageId IN (SELECT TOP (@MaxRecords) MessageId FROM Messages WHERE Status = 'N' AND InsertDate >= GETDATE() ORDER BY Priority)
``` | SQL UPDATE TOP with ORDER BY? | [
"",
"sql",
"sql-server",
""
] |
What's wrong with SQL code below:
```
INSERT INTO tbl_HAMP_Actions ( Loan_ID, Action_ID, Event_User, Event_Time, Event_Station, Reason_ID )
VALUES(3238, 3, 'CONTROL', Now(), 'CONTROL', (SELECT tbl_Reasons.Reason_ID
FROM tbl_Reasons
WHERE tbl_Reasons.Reason = 'Incorrect Information'));
``` | I don't think you can use subqueries as values in Access.
Try this instead:
```
INSERT INTO tbl_HAMP_Actions ( Loan_ID, Action_ID, Event_User, Event_Time, Event_Station, Reason_ID )
SELECT 3238, 3, 'CONTROL', Now(), 'CONTROL', Reason_ID
FROM tbl_Reasons
WHERE tbl_Reasons.Reason = 'Incorrect Information'
``` | ```
(SELECT tbl_Reasons.Reason_ID
FROM tbl_Reasons
WHERE tbl_Reasons.Reason = 'Incorrect Information')
```
could this part of the code return more than one row? If so then this will fail. | SQL Access Error - Syntax error in query expression | [
"",
"sql",
"ms-access",
"append",
""
] |
I have the following table in postgres:
```
create table table1 (col1 character varying, col2 character varying);
```
My table has the following data:
```
col1 col2
Questions Tags Users
Value1 Value2 Val
```
I want find the length of col1 and col2 and when the length of values of column 1 and column2 exceeds 6, I want to truncate it and discard the remaining values. i.e. I want my final table to look like the following:
```
col1 col2
Questi Tags U
Value1 Value2
```
Actually the reason why I want to do this is, when I create index on table1 then I am getting the following error:
> ERROR: index row size 2744 exceeds maximum 2712 for index "allstrings\_string\_key"
> HINT: Values larger than 1/3 of a buffer page cannot be indexed.
> Consider a function index of an MD5 hash of the value, or use full text indexing.
I know I can do this by importing the values to some programming language and then truncating the value. Is there some way by which I may achieve the same using an sql query in postgres. | Couldn't you just update them to contain only strings of length 6 at max?
I am no Postgres pro, so this is probably not the best method, but it should do the job anyway:
```
UPDATE table1 SET col1 = SUBSTRING(col1, 1, 6) WHERE length(col1) > 6;
UPDATE table1 SET col2 = SUBSTRING(col2, 1, 6) WHERE length(col2) > 6;
``` | I'd suggest that you actually follow the advice from Postgres, rather than changing your data. Clearly, that column with a 2k character long string shouldn't be indexed -- or not with a btree index anyway.
If the idea behind the index is searching, use full text search instead:
<http://www.postgresql.org/docs/current/static/textsearch.html>
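For instance, a sketch against the question's table (the `'english'` text search configuration is an assumption):

```sql
-- Index the column for full text search instead of btree-indexing the raw string:
CREATE INDEX table1_col1_fts ON table1
    USING gin (to_tsvector('english', col1));

-- Searches then match against the same expression, so the index can be used:
SELECT * FROM table1
WHERE to_tsvector('english', col1) @@ to_tsquery('value1');
```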
If the idea behind the need is for sorting, use a functional index instead. For instance:
```
create index tbl_sort on table1 (substring(col from 1 for 20));
```
Then, instead of ordering by `col`, order by `substring(col from 1 for 20)`. | sql query to truncate columns which are above specified length | [
"",
"sql",
"postgresql",
""
] |
I have a table in SQL Server with a large number of records. I want to convert the ID column, which is the primary key, to an identity column without loss of data. I thought of the following two approaches:
1. Create a new table with identity & drop the existing table.
2. Create a new column with identity & drop the existing column.
but it's clear that they can not be implemented because keeping records is my first priority.
Is there another way to do this? | As you are using SQL Server 2012, another possible alternative could be to create a sequence object that has a starting value of the highest ID + 1 already in your table, then create a default constraint for your column using NEXT VALUE FOR to reference the sequence object you just created. | This solution violates your point 2, but there is no other way, and I think your aim is to keep the old values, because nothing else makes sense...
You could do the following:
1. make it possible to insert into identity columns in your table:
```
set identity_insert YourTable ON
```
2. add a new ID column to your table with identity and insert the values from your old columns
3. turn identity insert off
```
set identity_insert YourTable OFF
```
4. delete old ID column
5. rename new column to old name
6. make it to the primary key
The only problem could be that you have your ID column already connected as foreign key to other tables. Then you have a problem with deleting the old column...
In this case you have to drop the foreign key constraints on your ID column after step 3, then do step 4 to 6 and then recreate your foreign key constraints. | Convert an existing Column to Identity | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I am a SQL beginner and I need to figure out this query: I have three tables joined together from which I am counting certain value, like this:
```
SELECT SEATS_MAX-COUNT(BOOKING_ID)
FROM FLIGHTS
INNER JOIN PLANES ON FLIGHTS.PLANE_ID=PLANES.PLANE_ID
LEFT JOIN BOOKINGS ON FLIGHTS.FLIGHT_ID=BOOKINGS.FLIGHT_ID
GROUP BY SEATS_MAX;
```
This returns the number of free seats on a flight. But I would like to get all the columns from FLIGHTS (as in `SELECT * FROM FLIGHTS;`) plus the count. I.e., something like
```
SELECT FLIGHTS.*, SEATS_MAX-COUNT(BOOKING_ID)
FROM FLIGHTS
INNER JOIN PLANES ON FLIGHTS.PLANE_ID=PLANES.PLANE_ID
LEFT JOIN BOOKINGS ON FLIGHTS.FLIGHT_ID=BOOKINGS.FLIGHT_ID
GROUP BY SEATS_MAX;
```
but this doesn't work (invalid user.table.column, table.column or column specification). Is there a way to achieve this? I'm using Oracle db.
Thanks | In the GROUP BY you need to have all the columns that are not aggregated.
So your query has to become:
```
SELECT FLIGHTS.*,
SEATS_MAX-COUNT(BOOKING_ID)
FROM FLIGHTS
INNER JOIN PLANES
ON FLIGHTS.PLANE_ID = PLANES.PLANE_ID
LEFT JOIN BOOKINGS
ON FLIGHTS.FLIGHT_ID = BOOKINGS.FLIGHT_ID
GROUP BY FLIGHTS.Column1,
...
FLIGHTS.ColumN,
SEATS_MAX;
```
**Edit:**
To list all columns of your table you can use the following query
```
SELECT 'FLIGHTS.' || column_name
FROM user_tab_columns
WHERE table_name = 'FLIGHTS'
ORDER BY column_id;
```
This should make your life a bit easier; then just copy and paste. | This should work:
```
SELECT flights.*, seats_max-booked_seats
FROM flights, (SELECT COUNT(*) booked_seats, flight_id
FROM bookings GROUP BY flight_id) book
WHERE flights.flight_id = book.flight_id
``` | SQL select all from one table of joint tables | [
"",
"sql",
"oracle",
"select",
""
] |
Is there a way to update all the datetime values in a table from
```
2010-12-31 23:59:59.000
```
to
```
2010-12-31 00:00:00.000
```
Something like this??
```
UPDATE t1
SET [Posting Date] = [Posting Date] with ms = 00:00:00.000
WHERE ms = other than 00:00:00.000
GO
``` | ```
UPDATE t1
SET [Posting Date] = cast([Posting Date] as date)
WHERE cast([Posting Date] as time) > '00:00'
``` | ```
UPDATE t1
SET [Posting Date] = CAST([Posting Date] as DATE)
``` | SQL datetime update time to 00:00:00 from all other time values | [
"",
"sql",
"sql-server-2008",
"t-sql",
"datetime",
"milliseconds",
""
] |
I want to match my user to a different user in his/her community every day. Currently, I use code like this:
```
@matched_user = User.near(@user).order("RANDOM()").first
```
But I want to have a different @matched\_user on a daily basis. I haven't been able to find anything in Stack or in the APIs that has given me insight on how to do it. I feel it should be simpler than having to resort to a rake task with cron. (I'm on postgres.) | Whenever I find myself hankering for shared 'memory' or transient state, I think to myself "this is what (distributed) caches were invented for".
```
@matched_user = Rails.cache.fetch(@user.cache_key + '/daily_match', expires_in: 1.day) {
User.near(@user).order("RANDOM()").first
}
```
**NOTE**: While specifying a TTL for cache entry tells Rails/the cache system to try and keep that value for the given timeframe, there's *NO* guarantee that it *will*. In particular, a cache that aggressively tries to reclaim memory may expire an entry well before its desired `expires_in` time.
For this particular use case, it shouldn't be a big deal but in cases where the business/domain logic *demands* periodically generated values that are *durable* then you really have to factor that into your database. | How about using PostgreSQL's `SETSEED` function? I used the date to seed so that every day the seed will change, but within a day, the seed will be consistent.:
```
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
```
You may want to seed a random value after using this so that any future calls to random aren't biased:
```
random = User.connection.execute("SELECT RANDOM()").to_a.first["random"]
# Same code as above:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
# Use random value before seed to make new seed:
User.connection.execute "SELECT SETSEED(#{random})"
``` | Rails show different object every day | [
"",
"sql",
"ruby-on-rails",
"ruby",
"postgresql",
"activerecord",
""
] |
I have one interesting question and could not resolve it. Please help!
These are the tables:
```
t_employee
ID NUMBER,
DEPARTMENT_ID NUMBER,
CHIEF_ID NUMBER,
NAME VARCHAR2(100 BYTE),
SALARY NUMBER,
BIRTH_DATE DATE,
ADDRESS VARCHAR2(200 BYTE),
STATUS VARCHAR2(1 BYTE)
```
and
```
t_department
ID NUMBER,
NAME VARCHAR2(100 BYTE)
```
I need to display the number of employees in each region, taken from the Address column (if there is no region, then use the 'No' area).
Region names should be converted to uppercase.
What is the problem? The problem is that the address column has unstructured data, for example:
Adress:
```
Country,REGION,city,...
```
So REGION must always be between the first (,) and the second (,) AND must include the word (reg).
For example:
```
Russia(Country), reg Moskovskay , Moscow(city), Lenina, (street) .... or
Russia(Country), Moskovskay reg , Moscow(city), Lenina, (street) .... or
```
The separator is (,) and the position is second.
Many thanks! | Free-form strings are seldom a good idea in databases; this query won't be able to use indexes, which will most likely make it a slow performer:
```
WITH a AS ( SELECT TRIM(
REPLACE(
UPPER(
REGEXP_SUBSTR(ADDRESS, ',([^,]*),', 1, 1, 'i', 1)
),
' REG ', ''
)
) REGION
FROM t_employee)
SELECT REGION, COUNT(*) cnt FROM a GROUP BY REGION
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!4/f0e5b/2). | Try the below:
```
SELECT regexp_substr(address, ',(.*?reg.*?),', 1, 1, null, 1) AS region, COUNT(*)
FROM t_employee
GROUP BY regexp_substr(address, ',(.*?reg.*?),', 1, 1, null, 1);
```
I would strongly advise, however, refactoring the schema to break the address into separate fields for street, city, region, etc., before or during table load, if you have the possibility to do so. | oracle sql select with employees | [
"",
"sql",
"database",
"oracle",
"function",
""
] |
I have two tables:
```
CREATE TABLE "status" (
"id" integer NOT NULL PRIMARY KEY,
"created_at" datetime NOT NULL,
"updated_at" datetime NOT NULL);
CREATE TABLE "device" (
"id" integer NOT NULL PRIMARY KEY,
"created_at" datetime NOT NULL,
"updated_at" datetime NOT NULL,
"last_status_object_id" integer REFERENCES "status" ("id"));
```
In the table "device", last\_status\_object\_id references status.id.
I want to delete all status rows that are not referenced by "last\_status\_object\_id". I can't seem to figure out how to make this happen. Can anyone help me out? | ```
DELETE
status
FROM
status
LEFT JOIN device ON (status.id = last_status_object_id)
WHERE
device.id IS NULL
``` | ```
delete from status
where id not in (select last_status_object_id
from device);
```
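A caveat with this `NOT IN` form: if the subquery ever returns a NULL `last_status_object_id` (a device with no status yet), `NOT IN` matches nothing and the DELETE silently removes zero rows. A small illustration using SQLite via Python (the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE status (id INTEGER PRIMARY KEY);
CREATE TABLE device (id INTEGER PRIMARY KEY, last_status_object_id INTEGER);
INSERT INTO status (id) VALUES (1), (2), (3);
INSERT INTO device VALUES (10, 1), (11, NULL);  -- one device has no status yet
""")

# NOT IN against a set containing NULL matches nothing: no rows are deleted.
conn.execute("DELETE FROM status WHERE id NOT IN "
             "(SELECT last_status_object_id FROM device)")
print(conn.execute("SELECT COUNT(*) FROM status").fetchone()[0])  # -> 3

# Filtering out the NULLs (or using NOT EXISTS) deletes the orphans as intended.
conn.execute("""DELETE FROM status WHERE id NOT IN
                (SELECT last_status_object_id FROM device
                  WHERE last_status_object_id IS NOT NULL)""")
print(conn.execute("SELECT COUNT(*) FROM status").fetchone()[0])  # -> 1
```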
Here we go, with demo in [sqlfiddle](http://sqlfiddle.com/#!9/19b06/2) | Delete all rows in one table that aren't referenced by another table | [
"",
"mysql",
"sql",
""
] |
I have following code in SQL Server,
```
declare @previouspulse int = 0
declare @previousox int = 0
declare @Pulse int = 0
declare @SpO2 int = 0
set @previouspulse = null
set @Pulse= 121
set @previousox = 56
set @SpO2 = null
IF @previouspulse <> @Pulse or @previousox <> @SpO2
select -1
else
select -2
```
I am getting the result -2 from the above code.
My question is this:
if the symbol `<>` means `not equal to`, then I should get the result `-1` instead of `-2`.
In the code above, if either condition inside the IF is satisfied I should get -1, and if the symbol `'<>'` really means `'Not Equal to (!=)'`, both conditions should be true in my case.
So does the symbol `'<>'` really mean `'Not Equal to (!=)'`?
Thanks | NULL values represent missing or unknown data. It is not possible to test for NULL values with comparison operators such as =, <, or <>, because a comparison with unknown data yields an unknown result.
Use the ISNULL operator to get correct results from the comparison:
```
declare @previouspulse int = 0
declare @previousox int = 0
declare @Pulse int = 0
declare @SpO2 int = 0
set @previouspulse = null
set @Pulse= 121
set @previousox = 56
set @SpO2 = null
IF isnull(@previouspulse,0) <> isnull(@Pulse,0) or isnull(@previousox,0) <> isnull(@SpO2,0)
select -1
else
select -2
``` | Here's the reason why,
```
@previouspulse <> @Pulse OR @previousox <> @SpO2
NULL OR NULL
NULL
```
since you have only two paths, it falls through to the `FALSE` branch, which results in `-2`. When comparing with a `NULL` value, use `IS NULL` or `IS NOT NULL`.
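The three-valued logic behind this is easy to demonstrate in any SQL engine; here is a minimal sketch using SQLite via Python (SQL Server behaves the same way for these comparisons with the default ANSI_NULLS ON setting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# NULL <> NULL is neither true nor false -- the comparison yields NULL,
# so an IF/WHERE condition built from it does not fire.
print(conn.execute("SELECT NULL <> NULL").fetchone()[0])      # -> None
print(conn.execute("SELECT 121 <> NULL").fetchone()[0])       # -> None
print(conn.execute("SELECT NULL IS NOT NULL").fetchone()[0])  # -> 0
print(conn.execute("SELECT 121 IS NOT NULL").fetchone()[0])   # -> 1
```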
* [SQL Three-Valued Logic](http://en.wikipedia.org/wiki/Null_%28SQL%29) | Issue related to not Equal to (<>) in SQL Server | [
"",
"sql",
"sql-server",
""
] |
I would like to write an IN statement when setting the `RECEIPTIDS` variable, so that I can pass multiple values in that format into my query. I have tried the following:
```
DECLARE @RECEIPTIDS VARCHAR(2000)
SET @RECEIPTIDS = ('R00013','R00028')
```
However, I get the error:
> Incorrect syntax near ','. | You need extra single quotes.
```
create table MyTable
(
ID varchar(50)
)
insert into MyTable values('R00013')
insert into MyTable values('R00028')
insert into MyTable values('R00015')
DECLARE @RECEIPTIDS VARCHAR(2000)
SET @RECEIPTIDS = ('''R00013'',''R00028''')
DECLARE @QUERY VARCHAR(100)
SET @QUERY='SELECT *
from MyTable
where ID IN ('+@RECEIPTIDS+')'
EXEC (@QUERY)
```
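Splicing quoted strings into dynamic SQL works, but it is fragile and injection-prone; client code can pass the list as bind parameters instead. A minimal sketch of the placeholder approach using SQLite via Python (the same idea applies to SQL Server drivers; table contents follow the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (ID TEXT);
INSERT INTO MyTable VALUES ('R00013'), ('R00028'), ('R00015');
""")

receipt_ids = ['R00013', 'R00028']
# Build one placeholder per value instead of splicing quoted strings into
# the statement text; the driver handles quoting and injection safely.
placeholders = ','.join('?' * len(receipt_ids))
rows = conn.execute(
    f"SELECT ID FROM MyTable WHERE ID IN ({placeholders})", receipt_ids
).fetchall()
print(sorted(r[0] for r in rows))  # -> ['R00013', 'R00028']
```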
Edited: use it as a dynamic query. | You need a [table variable](http://blog.sqlauthority.com/2012/10/27/sql-server-storing-variable-values-in-temporary-array-or-temporary-list/) or a [temp table](http://blogs.msdn.com/b/sqlserverstorageengine/archive/2008/03/30/sql-server-table-variable-vs-local-temporary-table.aspx).
```
DECLARE @RECEIPTIDS TABLE(val VARCHAR(100))
Insert into @RECEIPTIDS values ('R00013'), ('R00028')
```
You can use it in IN as
```
where field IN (Select val from @RECEIPTIDS)
``` | Declare multiple value variable in SQL | [
"",
"sql",
"sql-server-2008",
""
] |
`table1` has fields `id`, `a`, `b`.
`table2` has fields `id`, `a`.
I want to update `table1.a` with data from `table2.a` for the records with the same ID. `table1.c` should remain unchanged. Can I do that in one single query? | Just use `JOIN` with `UPDATE`:
```
UPDATE table1 LEFT JOIN table2 ON table1.id=table2.id SET table1.a=table2.a
```
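The multi-table UPDATE above is MySQL-specific syntax; the portable ANSI form uses a correlated subquery. A small runnable illustration using SQLite via Python (table names from the question, sample rows made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, a TEXT, b TEXT);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, a TEXT);
INSERT INTO table1 VALUES (1, 'old', 'keep'), (2, 'old', 'keep');
INSERT INTO table2 VALUES (1, 'new');
""")

# Correlated-subquery form of the same update; only rows with a matching
# id in table2 are touched, and column b is left alone.
conn.execute("""
UPDATE table1
   SET a = (SELECT a FROM table2 WHERE table2.id = table1.id)
 WHERE id IN (SELECT id FROM table2)
""")
print(conn.execute("SELECT id, a, b FROM table1 ORDER BY id").fetchall())
# -> [(1, 'new', 'keep'), (2, 'old', 'keep')]
```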
-you may want to use `INNER JOIN` instead (if you want to update only records with id existing in both tables) | ```
update table1, table2 set table1.a=table2.a where table1.id=table2.id
``` | SQL update a list of key/values with one statement? | [
"",
"mysql",
"sql",
""
] |
```
ID name Dept Manager
101 Mukesh SW
102 Ram SW 101
103 sham SW 101
104 rahul SW 101
105 Rajat HQ
106 akhilesh HQ 105
107 sachin HQ 105
```
I have this table and I want output like the one below:
```
dep Manager name
SW Mukesh Ram
Sham
Rahul
HQ Rajat akhilesh
``` | Kindly find below the query you want. I created MyTest as the table name, which you need to replace with your own.
```
SELECT
Case WHEN ISNULL(SecondTable.Id,0) = 0
THEN FirstTable.Dept
ELSE ''
END As Department,
Case WHEN ISNULL(SecondTable.Id,0) = 0
THEN Manager.Name
ELSE ''
END As Manager,
FirstTable.Name FROM MyTest As FirstTable
LEFT JOIN MyTest As SecondTable ON
SecondTable.ID = (SELECT Top 1 MyTest.ID
FROM MyTest
WHERE MyTest.Manager = FirstTable.Manager
AND MyTest.dept = FirstTable.dept
AND MyTest.Id < FirstTable.Id
ORDER BY MyTest.ID Desc)
LEFT JOIN MyTest As Manager On Manager.Id = FirstTable.Manager WHERE FirstTable.Manager <> 0 ORDER BY FirstTable.dept, FirstTable.ID
``` | What you are looking for is retrieving hierarchical data; with SQL Server 2008 you can use a common table expression. Check out this link: [Recursive Queries Using Common Table Expressions](http://technet.microsoft.com/en-us/library/ms186243%28v=sql.100%29.aspx). Note that the output is flat (it will repeat SW and Mukesh for each row).
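For readers who follow that link, a recursive CTE over this table can be sketched as below. It is shown in SQLite via Python so it is runnable anywhere (SQLite also supports `WITH RECURSIVE`; the T-SQL version is nearly identical, and the sample rows are taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t_employee (id INTEGER, name TEXT, dept TEXT, manager INTEGER);
INSERT INTO t_employee VALUES
  (101, 'Mukesh',   'SW', NULL), (102, 'Ram',   'SW', 101),
  (103, 'Sham',     'SW', 101),  (105, 'Rajat', 'HQ', NULL),
  (106, 'Akhilesh', 'HQ', 105);
""")

# Anchor on the roots (manager IS NULL), then repeatedly join each
# employee to the already-found row for their manager.
rows = conn.execute("""
WITH RECURSIVE org(id, dept, manager_name, name) AS (
    SELECT id, dept, name, name FROM t_employee WHERE manager IS NULL
    UNION ALL
    SELECT e.id, e.dept, o.name, e.name
      FROM t_employee e JOIN org o ON e.manager = o.id
)
SELECT dept, manager_name, name FROM org
 WHERE name <> manager_name
 ORDER BY dept, name
""").fetchall()
print(rows)
# -> [('HQ', 'Rajat', 'Akhilesh'), ('SW', 'Mukesh', 'Ram'), ('SW', 'Mukesh', 'Sham')]
```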
Hope it helps | sql server desired output from tables | [
"",
"sql",
"sql-server-2008",
""
] |
How can I search for a particular date, e.g. '2013-10-22', in a Teradata timestamp(6) field?
```
sel * from table A
where date = '2013-10-22';
```
I tried the above query, but it throws an error. Please help! | You may try it like this:
```
sel * from table A
where date = date '2013-10-22';
```
This is the ANSI standard form (the literal must be preceded by the keyword DATE).
Check out [this](http://www.teradataforum.com/l081007a.htm) | And more formally:
```
select *
from table A
where cast(timestamp_column as date) = date '2013-10-22';
```
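One performance note on the CAST form: wrapping the column in a function generally prevents an index on it from being used. A half-open range predicate on the raw column avoids that. Sketched with SQLite via Python (SQLite stores these timestamps as ISO-8601 text, so plain string comparison works; in Teradata the equivalent predicate would be `ts >= DATE '2013-10-22' AND ts < DATE '2013-10-23'`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (ts TEXT);  -- ISO-8601 timestamps, as SQLite stores them
INSERT INTO A VALUES ('2013-10-21 23:59:59'),
                     ('2013-10-22 08:30:00'),
                     ('2013-10-22 23:59:59.999999'),
                     ('2013-10-23 00:00:00');
""")

# Half-open range keeps the predicate sargable (no function on the column),
# unlike CAST(ts AS DATE) = ..., which defeats an index on ts.
rows = conn.execute("""
SELECT ts FROM A
 WHERE ts >= '2013-10-22' AND ts < '2013-10-23'
""").fetchall()
print(len(rows))  # -> 2
```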
I'm guessing that you were just showing an example, because I don't think you can have a column named `date`; it's a reserved word. The keyword "**date**" above is how you specify an ANSI date constant and is not related to the "date" function. | query to return specific date from teradata timestamp(6) | [
"",
"sql",
"teradata",
""
] |
Is there something similar to `DATEFROMPARTS(year, month, day)` in SQL Server 2008? I want to create a date using the current year and month, but my own day of the month. This needs to be done in one line in order to be used in a computed column formula.
For Example (I'm not sure if it works because I do not have SQL Server 2012):
```
DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 3)
```
Is there a way to do this in SQL Server 2008?
DATEFROMPARTS seems to be available only in SQL Server 2012 [(link)](http://technet.microsoft.com/en-us/library/hh213228.aspx) | Using the `3` from your example, you could do this:
```
dateadd(dd, 3 -1, dateadd(mm, datediff(mm,0, current_timestamp), 0))
```
It works by finding the number of months since the epoch date, adding those months back to the epoch date, and then adding the desired number of days to that prior result. It sounds complicated, but it's built on what was the canonical way to truncate dates prior to the `Date` (not DateTime) type added to SQL Server 2008.
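The same epoch-month arithmetic can be traced step by step outside SQL. A minimal Python sketch (the function name is made up; SQL Server's datetime epoch 0 corresponds to 1900-01-01, matching the 0 in DATEDIFF):

```python
from datetime import date

def nth_day_of_current_month(today, day):
    # Same idea as the T-SQL: count whole months since the 1900-01-01 epoch,
    # add them back to the epoch (giving the 1st of the month), then set the
    # day-of-month (equivalent to adding day - 1 days; day must be valid).
    months_since_epoch = (today.year - 1900) * 12 + (today.month - 1)
    first_of_month = date(1900 + months_since_epoch // 12,
                          months_since_epoch % 12 + 1, 1)
    return first_of_month.replace(day=day)

print(nth_day_of_current_month(date(2013, 10, 22), 3))  # -> 2013-10-03
```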
You're probably going to see other answers here suggesting building date strings. I urge you to **avoid suggestions to use strings**. Using strings is likely to be *much* slower, and there are some potential pitfalls with alternative date collations/formats. | You could use something like this to make your own datetime:
```
DECLARE @year INT = 2012
DECLARE @month INT = 12
DECLARE @day INT = 25
SELECT CAST(CONVERT(VARCHAR, @year) + '-' + CONVERT(VARCHAR, @month) + '-' + CONVERT(VARCHAR, @day)
AS DATETIME)
``` | Creating date in SQL Server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"date",
"calculated-columns",
""
] |