Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a design like this
```
accounts(id, username, email, password, ..)
admin_accounts(account_id, ...)
user_accounts(account_id, ....)
premium_accounts(account_id, ....)
```
id is the primary key in accounts
account\_id is a foreign key (references id on the accounts table) and the primary key in these three tables (admin, user, premium)
Knowing the id, how can I find which type this user is with only one query? Also note that an id can only exist in one of the three tables (admin, user, premium). | Using `case`:
```
select
a.id,
case
when aa.account_id is not null then 'admin_accounts'
when ua.account_id is not null then 'user_accounts'
when pa.account_id is not null then 'premium_accounts'
else
'No detail found'
end as found_in
from
accounts a
left join admin_accounts aa on aa.account_id = a.id
left join user_accounts ua on ua.account_id = a.id
left join premium_accounts pa on pa.account_id = a.id
/*where -- In case you want to filter.
a.id = ....*/
```
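Here is a minimal runnable sketch of the `CASE`/`LEFT JOIN` query above, using SQLite through Python's `sqlite3` (the schema is trimmed to the columns involved; sample usernames are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE admin_accounts (account_id INTEGER PRIMARY KEY REFERENCES accounts(id));
    CREATE TABLE user_accounts (account_id INTEGER PRIMARY KEY REFERENCES accounts(id));
    CREATE TABLE premium_accounts (account_id INTEGER PRIMARY KEY REFERENCES accounts(id));
    INSERT INTO accounts VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
    INSERT INTO admin_accounts VALUES (1);
    INSERT INTO premium_accounts VALUES (3);
""")

# One query, filtered on a single id: each LEFT JOIN is NULL unless the
# id exists in that subtype table, so the CASE names the matching table.
row = conn.execute("""
    SELECT a.id,
           CASE
               WHEN aa.account_id IS NOT NULL THEN 'admin_accounts'
               WHEN ua.account_id IS NOT NULL THEN 'user_accounts'
               WHEN pa.account_id IS NOT NULL THEN 'premium_accounts'
               ELSE 'No detail found'
           END AS found_in
    FROM accounts a
    LEFT JOIN admin_accounts aa ON aa.account_id = a.id
    LEFT JOIN user_accounts ua ON ua.account_id = a.id
    LEFT JOIN premium_accounts pa ON pa.account_id = a.id
    WHERE a.id = ?
""", (3,)).fetchone()
print(row)  # (3, 'premium_accounts')
```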
using `union`
```
select
id,
found_in
from
(select account_id as id, 'admin_accounts' as found_in
from admin_accounts aa
union all
select account_id, 'user_accounts'
from user_accounts ua
union all
select account_id, 'premium_accounts'
from premium_accounts pa) a
/*where -- In case you want to filter.
a.account_id = ....*/
``` | If you use `UNION` you can try applying a solution proposed here - [mysql return table name](https://stackoverflow.com/questions/2168068/mysql-return-table-name) | Mysql to return the table name the id exists in | [
"",
"mysql",
"sql",
""
] |
I want to make a function that searches a table and returns rows that contain a certain word that I pass in, like below. But when I use LIKE it gives me an error: Incorrect syntax near '@perberesi'
```
CREATE FUNCTION perberesit7
(@perberesi varchar(100))
RETURNS @menu_rest TABLE
(emri_hotelit varchar(50),
emri_menuse varchar(50),
perberesit varchar(255))
AS
Begin
insert into @menu_rest
Select dbo.RESTORANTET.Emri_Rest, dbo.MENU.Emri_Pjatës, dbo.MENU.Pershkrimi
From RESTORANTET, MENU
Where dbo.MENU.Rest_ID=dbo.RESTORANTET.ID_Rest and
dbo.MENU.Pershkrimi LIKE %@perberesi%
return
End
```
Please help me. How can I use LIKE in this case? | Ok, I just realized that you are creating a function, which means that you **can't** use `INSERT`. You should also really take Gordon's advice and use explicit joins and table aliases.
```
CREATE FUNCTION perberesit7(@perberesi varchar(100))
RETURNS TABLE
AS
return(
    Select R.Emri_Rest AS emri_hotelit,
           M.Emri_Pjatës AS emri_menuse,
           M.Pershkrimi AS perberesit
    From RESTORANTET R
    INNER JOIN MENU M
        ON M.Rest_ID = R.ID_Rest
    Where M.Pershkrimi LIKE '%' + @perberesi + '%')
``` | try using:
`'%' + @perberesi + '%'`
instead of:
`%@perberesi%`
[Some Examples](http://forums.asp.net/t/1256985.aspx) | Use LIKE for strings in sql server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
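The wildcard-concatenation fix above can be sketched with SQLite through Python's `sqlite3` (the column name comes from the question; the sample rows are invented). The portable idea is the same as the T-SQL `'%' + @perberesi + '%'`: concatenate the wildcards around the value, not around the bare parameter name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE menu (Pershkrimi VARCHAR(255))")
conn.executemany("INSERT INTO menu VALUES (?)",
                 [("domate dhe djath",), ("mish pule",), ("djath i bardhe",)])

# SQLite concatenates with ||; the bound value stays a plain parameter.
term = "djath"
rows = conn.execute(
    "SELECT Pershkrimi FROM menu "
    "WHERE Pershkrimi LIKE '%' || ? || '%' ORDER BY rowid",
    (term,),
).fetchall()
print(rows)  # [('domate dhe djath',), ('djath i bardhe',)]
```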
I am working on a project that contains tables in SQL and shows them in ListViews. I get a "column '\_id' does not exist" crash when I try to open the activity containing the ListView. I have checked other answers here, and they all say I have to have a column named "\_id", which I do. What else could cause this error?
This is my constants class:
```
final static String DB_CLIENTS_ID = "_id";
final static String DB_CLIENTS_NAME = "name";
final static String DB_CLIENTS_BALANCE = "balance";
final static String DB_CLIENTS_IDNUM = "idNum";
final static String DB_CLIENTS_TYPE = "type";
```
Here is the handler function that gets the cursor:
```
public Cursor queryClients(){
db = dbOpenHelper.getReadableDatabase();
Cursor cursor = db.query(dbConstants.TABLE_NAME_CLIENTS,
null, null, null, null, null,
dbConstants.DB_CLIENTS_NAME+ " ASC");
return cursor;
}
```
> Here is the snippet that uses the cursor to make the ListView:
```
dbhendler = new dbHendler(this);
Cursor c = dbhendler.queryClients();
startManagingCursor(c);
String[] from = new String[] {dbConstants.DB_CLIENTS_ID, dbConstants.DB_CLIENTS_NAME,dbConstants.DB_CLIENTS_BALANCE};
int[] to = new int[] {R.id.account_list_id_number, R.id.account_list_client_name, R.id.account_list_balance};
adapter = new SimpleCursorAdapter(this, R.layout.account_list_line, c, from, to);
ListView lv = (ListView) findViewById(R.id.listView1);
lv.setAdapter(adapter);
```
What could be the problem besides having no column named "\_id"?
edit:
here is the log cat:
01-28 10:00:31.806: E/AndroidRuntime(27937): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.ziv.bank\_app/com.ziv.bank\_app.ClientListActivity}: java.lang.IllegalArgumentException: column '\_id' does not exist
edit:
code for creating table:
```
public void onCreate(SQLiteDatabase db) {
String sql1 = ""
+ "CREATE TABLE "+ dbConstants.TABLE_NAME_CLIENTS+ " ("
+dbConstants.DB_CLIENTS_ID + " INTEGER PRIMARY KEY AUTOINCREMENT,"
+dbConstants.DB_CLIENTS_NAME + " TEXT,"
+dbConstants.DB_CLIENTS_IDNUM + " INTEGER,"
+dbConstants.DB_CLIENTS_TYPE + " TEXT,"
+dbConstants.DB_CLIENTS_BALANCE + " REAL"
+ ")";
db.execSQL(sql1);
``` | When first seeing the problem, I read here that the answer was to have a column named "\_id". I changed it in my table-creation file; however, a new database file was never created. It would have been created on a new device/emulator, but on mine the app still used the one I had created before.
Create a new database file by simply changing its name in the table-creation code, and the problem is solved.
edit:
Raising the database version number would also do the trick. | If I am not wrong, then try this in your `queryClients()`:
```
Cursor cursor = db.query(dbConstants.TABLE_NAME_CLIENTS, new String[] {"_id","name","balance","idNum","type"},
null, null, null, null, "name ASC ");
```
or Try this:
```
Cursor mCursor = db.query(true, dbConstants.TABLE_NAME_CLIENTS,
new String[] {"_id","name","balance","idNum","type"},
null,null, null, null, null, null);
if (mCursor != null) {
mCursor.moveToFirst();
}
``` | column '_id' does not exist using SQLite with android | [
"",
"android",
"sql",
"listview",
"cursor",
"simplecursoradapter",
""
] |
In my MySQL database, I have two tables, `item` and `users`. In the item table I have two columns called `created_by` and `created_by_alias`. The created by alias column is fully populated with names but the `created_by` column is empty. The next table I have is the `users` table. This has the `id` and `name` columns inside of it.
I would like to know whether it is possible to use MySQL to match the `created_by_alias` in the item table with the `name` column in the `users` table, then take the id of the user and put it into the `created_by` column.
I was thinking some sort of `JOIN` function. Any ideas would be greatly appreciated. | Actually, you are indeed in the right direction - MySQL has an `update join` syntax:
```
UPDATE items
JOIN users ON users.name = items.created_by_alias
SET items.created_by = users.id
``` | This should work:
```
UPDATE items
SET created_by = (SELECT u.id
FROM users u
WHERE u.name = items.created_by_alias)
``` | Find and replace table data with another table in MySQL | [
"",
"mysql",
"sql",
"join",
"sql-update",
"inner-join",
""
] |
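The backfill above can be checked quickly with SQLite through Python's `sqlite3`. SQLite has no `UPDATE ... JOIN`, so this sketch uses the correlated-subquery form (the second answer); MySQL's update-join syntax produces the same result. Sample names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE items (created_by INTEGER, created_by_alias TEXT);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO items (created_by_alias) VALUES ('bob'), ('ann'), ('bob');
""")

# Look up each row's alias in users and copy the matching id
# into the empty created_by column.
conn.execute("""
    UPDATE items
    SET created_by = (SELECT u.id FROM users u
                      WHERE u.name = items.created_by_alias)
""")
rows = conn.execute("SELECT created_by FROM items ORDER BY rowid").fetchall()
print(rows)  # [(2,), (1,), (2,)]
```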
I'm after an SQL statement (if one exists), or a method using several SQL statements, to achieve the following.
I have a listbox and a search text box.
In the search box, the user would enter a surname, e.g. smith.
I then want to query the database with something like this:
```
select * FROM customer where surname LIKE searchparam
```
This would give me all the results for customers with a surname containing SMITH. Simple, right?
What I need to do is limit the results returned. This statement could give me thousands of rows if the search param was just S.
What I want is the result limited to the first 20 matches AND the 10 rows prior to the first match.
For example, SMI search:
```
Sives
Skimmings
Skinner
Skipper
Slater
Sloan
Slow
Small
Smallwood
Smetain
Smith ----------- This is the first match of my query. But i want the previous 10 and following 20.
Smith
Smith
Smith
Smith
Smoday
Smyth
Snedden
Snell
Snow
Sohn
Solis
Solomon
Solway
Sommer
Sommers
Soper
Sorace
Spears
Spedding
```
Is there any way to do this?
With as few SQL statements as possible.
The reason? I am creating an app for users with slow internet connections.
I am using POSTGRESQL v9
Thanks
Andrew | ```
WITH ranked AS (
SELECT *, ROW_NUMBER() over (ORDER BY surname) AS rowNumber FROM customer
)
SELECT ranked.*
FROM ranked, (SELECT MIN(rowNumber) target FROM ranked WHERE surname LIKE searchparam) found
WHERE ranked.rowNumber BETWEEN found.target - 10 AND found.target + 20
ORDER BY ranked.rowNumber
```
SQL Fiddle [here](http://www.sqlfiddle.com/#!15/a47b1/4). Note that the fiddle uses the example data, and I modified the range to 3 entries before and 6 entries past. | I'm assuming that you're looking for a general algorithm ...
It sounds like you're looking for a combination of finding the matches "greater than or equal to smith", and "less than smith".
For the former you'd order by surname and limit the result to 20, and for the latter you'd order by surname descending and limit to 10.
The two result sets can then be added together as arrays and reordered. | sql statement to select previous rows to a search param | [
"",
"sql",
"postgresql",
"select",
""
] |
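The ranked-CTE approach from the accepted answer runs unchanged on SQLite (window functions need SQLite 3.25+, bundled with recent Python builds). This sketch uses a shortened surname list and a scaled-down window of 2 rows before and 3 rows from the first match (the answer uses 10 and 20):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
names = ["Sloan", "Small", "Smetain", "Smith", "Smith", "Smyth", "Snell", "Snow"]
conn.execute("CREATE TABLE customer (surname TEXT)")
conn.executemany("INSERT INTO customer VALUES (?)", [(n,) for n in names])

# Number every row by surname, find the first match's row number,
# then keep a window of rows around it.
rows = conn.execute("""
    WITH ranked AS (
        SELECT surname, ROW_NUMBER() OVER (ORDER BY surname) AS rn
        FROM customer
    )
    SELECT ranked.surname
    FROM ranked,
         (SELECT MIN(rn) AS target FROM ranked WHERE surname LIKE 'Smi%') found
    WHERE ranked.rn BETWEEN found.target - 2 AND found.target + 3
    ORDER BY ranked.rn
""").fetchall()
print([r[0] for r in rows])
# ['Small', 'Smetain', 'Smith', 'Smith', 'Smyth', 'Snell']
```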
I have table like this...
```
cola colb
11 r
11 r
11 r
21 k
21 k
21 m
31 x
31 y
31 z
```
I want to get:
```
cola count()
11 1
21 2
31 3
```
I want to count how many distinct values in col b exists in each col a.
Thanks! | Assuming MySQL or MSSQL the following should work for you:
```
SELECT `ColA`, COUNT(DISTINCT `ColB`)
FROM `Data`
GROUP BY `ColA`
ORDER BY `ColA` ASC;
```
This gets the Distinct (unique) Count of column B for each value of Column A.
[SQL Fiddle](http://www.sqlfiddle.com/#!2/48ee7/1/0) | Assuming SQL is SQL Server, try this:
```
SELECT
ColA,
COUNT(DISTINCT ColB) Cnt
FROM
TableName
GROUP BY
ColA
```
The DISTINCT clause will take care of only counting distinct entries in ColB | SQL count() in query | [
"",
"sql",
""
] |
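The `COUNT(DISTINCT ...)` behavior above can be reproduced directly on the question's sample data with SQLite through Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (cola INTEGER, colb TEXT)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [
    (11, 'r'), (11, 'r'), (11, 'r'),
    (21, 'k'), (21, 'k'), (21, 'm'),
    (31, 'x'), (31, 'y'), (31, 'z'),
])

# For each cola group, count only the distinct colb values.
rows = conn.execute("""
    SELECT cola, COUNT(DISTINCT colb)
    FROM data
    GROUP BY cola
    ORDER BY cola ASC
""").fetchall()
print(rows)  # [(11, 1), (21, 2), (31, 3)]
```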
This is my SQL query :
```
SELECT DISTINCT dest
FROM messages
WHERE exp = ?
UNION
SELECT DISTINCT exp
FROM messages
WHERE dest = ?
```
With this query, I get all the messages I've sent or received. But my table `messages` also has a field `timestamp`, and I need to add an ORDER BY on that timestamp to this query... but how? | ```
SELECT *
FROM
( SELECT
col1 = dest,
col2 = MAX(timestampCol)
FROM
messages
WHERE
exp = ?
GROUP BY
dest
UNION
SELECT
col1 = exp,
col2 = MAX(timestampCol)
FROM
messages
WHERE
dest= ?
GROUP BY
exp
) tbl
ORDER BY col2
```
This should return only one row per distinct `exp`/`dest`, though this could probably be done without a union; the `GROUP BY` will only keep the most recent one.
**Updated SQL:** Given that it is possible for an `exp` on one record to equal a `dest` on the same or another record.
```
SELECT
CASE WHEN exp = ? THEN dest ELSE exp END AS col1
,MAX(timestampCol) AS col2
FROM
messages
WHERE
exp = ?
OR dest = ?
GROUP BY
(CASE WHEN exp = ? THEN dest ELSE exp END)
ORDER BY
MAX(timestampCol) DESC;
```
You might want to consider adding an [SQL Fiddle](http://www.sqlfiddle.com/) with some dummy data to allow users to better help you. | You can do this without a `union`:
```
SELECT (case when exp = ? then dest else exp end), timestamp
FROM messages
WHERE exp = ? or dest = ?;
```
Then to get the most recent message for each participant, use `group by` not `distinct`:
```
SELECT (case when exp = ? then dest else exp end) as other, max(timestamp)
FROM messages
WHERE exp = ? or dest = ?
group by (case when exp = ? then dest else exp end)
order by max(timestamp) desc;
``` | Combining select distinct and order by | [
"",
"mysql",
"sql",
"union",
""
] |
I have 2 tables

```
Select distinct
ID
,ValueA
     ,Place  -- How do I get the Place value from Table2, based on a match
             -- between the two columns ValueA and ValueB?
             -- (Table2 is just a reference table I'm using.)
,Getdate() as time
Into #Temp
From Table1
```
For example when we receive value aa in ValueA column - I want the value of "Place" = "LA"
For example when we receive value bb in ValueA column - I want the value of "Place" = "TN"
Thanks in advance! | ```
SELECT A.ID
, A.ValueA
, B.Place
, GETDATE() INTO #TempTable
FROM Table1 A INNER JOIN Table2 B
ON A.ValueA = B.ValueB
``` | You can do this dude:
Select ID, ValueA, Place, getdate() as Date FROM Table1 INNER JOIN Table2 ON Table1.ValueA = Table2.ValueB.
Hope this works!
Regards... | Getting the value upon match from the ref table | [
"",
"sql",
"join",
""
] |
Do mysql support something like this?
```
INSERT INTO `table` VALUES (NULL,"1234") IF TABLE EXISTS `table` ELSE CREATE TABLE `table` (id INT(10), word VARCHAR(500));
``` | I'd create 2 statements. Try this:
```
CREATE TABLE IF NOT EXISTS `table` (
id INT(10),
word VARCHAR(500)
);
INSERT INTO `table` VALUES (NULL,"1234");
You can first check if the table exists; if it doesn't, you can create it.
Do the insert statement after this.
<http://dev.mysql.com/doc/refman/5.1/en/create-table.html>
Something like
```
CREATE TABLE IF NOT EXISTS table (...)
INSERT INTO table VALUES (...)
```
Note:
> However, there is no verification that the existing table has a structure identical to that indicated by the CREATE TABLE statement. | INSERT INTO table IF table exists otherwise CREATE TABLE | [
"",
"mysql",
"sql",
"database",
""
] |
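The create-then-insert pattern from both answers can be sketched with SQLite through Python's `sqlite3` (SQLite supports the same `CREATE TABLE IF NOT EXISTS` clause; the table name `words` is chosen here just to avoid the reserved-looking `table`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Run the pair of statements twice: on the second pass the table
# already exists, so CREATE TABLE IF NOT EXISTS is a silent no-op
# and only the INSERT takes effect.
for _ in range(2):
    conn.execute("CREATE TABLE IF NOT EXISTS words (id INTEGER, word VARCHAR(500))")
    conn.execute("INSERT INTO words VALUES (NULL, '1234')")
count = conn.execute("SELECT COUNT(*) FROM words").fetchone()[0]
print(count)  # 2
```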
The request is 2-part:
1. Records should fall off the report **if the record has expired AND** `RecordStatus != 'Funded'`.
2. Records should stay on the report **if** `RecordStatus = 'Funded'`, it has no `PURCHASEDATE`, and the record has expired (is in the past relative to `currentdate`).
I've tried to do this with `CASE`, but this request appears to be boolean logic and therefore impractical for it. I've tried looking into `IF...THEN...ELSE`, but I'm lost as to how to do it. I've spent 2 full days trying to get this to work and I've come up with nothing. Is there anyone out there that can assist me with this and then explain it?
Below is a simple example of failure (and yes, I fully understand this fails due to it being a condition, not an expression):
```
Select distinct ...
from ... --with joins
where ...
and case when RecordStatus != 'Funded' then getdate() <= EXPIRATIONDATE else ?? end
```
Thank you!! | The requirements seem suspect to me, but here is my view on it (no "conditional logic" needed) and why it seems "suspicious", if I'm reading it correctly ..
> Records should stay on the report If RecordStatus = 'Funded', has no PURCHASEDATE, and if the Record has expired (is in the past from currentdate).
```
RecordStatus = 'Funded'
AND PurchaseDate IS NULL
AND ExpiredDate < CURRENT_TIMESTAMP
```
> Records should fall off report If the Record has expired AND if RecordStatus != 'Funded'.
```
NOT (HasExpired AND RecordStatus <> 'Funded')
```
Combined:
```
RecordStatus = 'Funded'
AND PurchaseDate IS NULL
AND ExpiredDate < CURRENT_TIMESTAMP
AND NOT (HasExpired AND RecordStatus <> 'Funded')
```
With De Morgan's transformation:
```
RecordStatus = 'Funded'
AND PurchaseDate IS NULL
AND ExpiredDate < CURRENT_TIMESTAMP
AND (NOT HasExpired OR RecordStatus = 'Funded')
```
Elimination of subsumed condition after distribution.
```
RecordStatus = 'Funded'
AND PurchaseDate IS NULL
AND ExpiredDate < CURRENT_TIMESTAMP
AND NOT HasExpired
```
However, assuming that `HasExpired` is the same as `ExpiredDate < CURRENT_TIMESTAMP`
```
RecordStatus = 'Funded'
AND PurchaseDate IS NULL
AND ExpiredDate < CURRENT_TIMESTAMP
AND NOT ExpiredDate < CURRENT_TIMESTAMP
```
Which of course is never true, and why I am suspicious. | Your current criteria are a bit misleading.
You say in one case records should fall off the report and in the other they should stay on, but you are referring to different statuses.
I like to think of the things I want to see in a SQL query.
So I want to see the Funded status, and I want to see PurchaseDates of NULL.
```
SELECT ...
FROM ...
WHERE (RecordStatus = 'Funded' AND purchaseDate IS NULL)
```
If you also wanted to see records of RecordStatus != 'Funded' but only if the record is current then add an `OR`
```
WHERE (RecordStatus = 'Funded' AND purchaseDate IS NULL) OR
(RecordStatus != 'Funded' AND getDate() <= ExpirationDate)
``` | How to write conditions in a WHERE clause? | [
"",
"sql",
"sql-server",
"conditional-statements",
"where-clause",
""
] |
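The De Morgan step in the chosen answer, and its conclusion that the combined predicate is unsatisfiable, can be checked mechanically with a small truth table:

```python
from itertools import product

# De Morgan: NOT (expired AND status <> 'Funded')
#        <=> (NOT expired) OR status = 'Funded'
equivalent = all(
    (not (expired and not funded)) == ((not expired) or funded)
    for expired, funded in product([True, False], repeat=2)
)

# The combined predicate ends with "expired AND NOT expired",
# which can never hold -- the reason the answer calls the
# requirements suspect.
satisfiable = any(expired and not expired for expired in (True, False))

print(equivalent, satisfiable)  # True False
```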
I've been scratching my head about this.
I have a table with multiple columns for the same project.
However, each project can have multiple rows of a different type.
I would like to find only projects type `O` and only if they don't have other types associated with them.
Ex:
```
Project_Num | Type
1 | O
1 | P
2 | O
3 | P
```
In the case above, only project 2 should be returned.
Is there a query or a method to filter this information? Any suggestions are welcome. | If I understand correctly, you want to check that the project has only record for its project number and it has type 'O'. You can use below query to implement this:
```
;with cte_proj as
(
select Project_Num from YourTable
group by Project_Num
having count(Project_Num) = 1)
select Project_Num from cte_proj c
inner join YourTable t on c.Project_Num = t.Project_Num
where t.Type = 'O'
``` | You can do this using `not exists`:
```
select p.*
from projects p
where type = 'O' and
not exists (select 1
from projects p2
where p2.project_num = p.project_num and p2.type <> 'O'
);
```
You can also do this using aggregation:
```
select p.project_num
from projects p
group by p.project_num
having sum(case when p.type = 'O' then 1 else 0 end) > 0 and
sum(case when p.type <> 'O' then 1 else 0 end) = 0;
``` | Query to Not selecting columns if multiple | [
"",
"sql",
"database",
"select",
""
] |
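The `NOT EXISTS` approach can be verified against the question's sample data with SQLite through Python's `sqlite3`; as expected, only project 2 comes back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (project_num INTEGER, type TEXT)")
conn.executemany("INSERT INTO projects VALUES (?, ?)",
                 [(1, 'O'), (1, 'P'), (2, 'O'), (3, 'P')])

# Keep type-'O' projects that have no row of any other type.
rows = conn.execute("""
    SELECT DISTINCT p.project_num
    FROM projects p
    WHERE p.type = 'O'
      AND NOT EXISTS (SELECT 1
                      FROM projects p2
                      WHERE p2.project_num = p.project_num
                        AND p2.type <> 'O')
""").fetchall()
print(rows)  # [(2,)]
```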
In a many-to-many relationship MySql DB, I need to select all resources (documents with titles) that belong to the categories in a list (categories with the id 9,10 in the example below).
Here is the model:

*However, and this is the important thing*: for each resource I want to also get ALL the categories that resource has associated with it - not just the ones queried for...
So if the resource with title: title\_1 matches categories with id 9 and 10 for instance, but if that resource in fact also has category 11 and 12, I would want to get that information.
So for a query asking for the resource where category matches either 9 or 10, I would want that resource, and all of its associated categories. I.e the information I need for such a resource is:
resource title: title\_1
categories: 9,10,11,12
Even though I only specified that it needed to match 9 and 10 in the query…
The reason for this is that if a user searches based on a couple of categories, the results should still give all the information about that object regarding all the categories set on it.
What I have so far is this:
```
SELECT
r.title,
cr.category_id
FROM
category_resource cr
INNER JOIN resource r
ON cr.resource_id = r.id
WHERE cr.category_id IN (9,10)
```
This gives me rows with resources tagged with either of these, very fast (about 10 ms) from a DB of 5000 rows. The result is 3 titles:
title\_1, category: 9
title\_1, category 10
title\_2, category: 10
I.e, I’m getting two results for title\_1, because it has both category 9 and 10. But I only get these two categories because they were in the query. But this does not give me the other categories associated with the resource title\_1.
Note that I have asked a similar question before, but not quite. I got an answer to that one that sort of resolved it. But it got too complex and I needed to rephrase it and also give it a different focus. The point is here: I need to get this info, but it also must be almost as fast as the current simpler query I have, i.e around 10 ms. That may be asking a lot, but at the same time the requirement doesn’t seem too far-fetched to me for a database? I.e get an item matching a criterion, and get all the values associated with in from a many-to-many relationship table…
Being far from an expert in this area, still this seems to me what relational databases should be made for, handling queries on relationships…so I feel like there must be a simple and efficient solution to this problem, and I just hope I am able to explain it clearly enough… | You can accomplish this with a [Self-Join](https://stackoverflow.com/questions/1284441/how-does-a-mysql-self-join-work):
Here is a working sqlfiddle: <http://sqlfiddle.com/#!2/1826f/6>
You already joined on category\_resource to limit to (9,10), so we join on a copy of it and get all the results that match:
```
SELECT
r.title,
cr2.category_id
FROM
category_resource cr
INNER JOIN resource r
ON cr.resource_id = r.id
INNER JOIN category_resource cr2 on cr2.resource_id = r.id
WHERE cr.category_id IN (9,10)
GROUP BY cr2.category_id,r.title
ORDER BY r.title
```
Edit based on comment:
If you are requiring ALL categories (9,10) you could change it to self-join AGAIN as cr3:
```
SELECT
r.title,
cr2.category_id
FROM
category_resource cr
INNER JOIN resource r
ON cr.resource_id = r.id
INNER JOIN category_resource cr2 on cr2.resource_id = r.id
INNER JOIN category_resource cr3 on cr3.resource_id = r.id
WHERE cr.category_id = 9
and cr3.category_id = 10
GROUP BY cr2.category_id,r.title
ORDER BY r.title
```
sqlfiddle: <http://sqlfiddle.com/#!2/1826f/11>
OR here is the version using HAVING: <http://sqlfiddle.com/#!2/1826f/12>
```
SELECT
r.title,
cr2.category_id
FROM
category_resource cr
INNER JOIN resource r
ON cr.resource_id = r.id
INNER JOIN category_resource cr2 on cr2.resource_id = r.id
WHERE cr.category_id IN (9,10)
GROUP BY cr2.category_id,r.title
HAVING count(cr.category_id)=2
ORDER BY r.title
``` | Try using a left join instead of the inner join. I think that might be what you are looking for, but I am not certain that I have followed your description properly. | Select all related values in a many-to-many relationship | [
"",
"mysql",
"sql",
"many-to-many",
""
] |
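The self-join from the accepted answer can be run on a small SQLite dataset through Python's `sqlite3` (titles and category ids follow the question; `title_3` is an invented non-matching resource). Note how `title_1` comes back with all four of its categories even though only 9 and 10 were queried:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resource (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE category_resource (category_id INTEGER, resource_id INTEGER);
    INSERT INTO resource VALUES (1, 'title_1'), (2, 'title_2'), (3, 'title_3');
    INSERT INTO category_resource VALUES
        (9, 1), (10, 1), (11, 1), (12, 1),  -- title_1: categories 9,10,11,12
        (10, 2),                            -- title_2: category 10
        (11, 3);                            -- title_3: category 11 (no match)
""")

# cr filters to the searched categories; cr2 re-joins the same link
# table to fetch ALL categories of each matching resource.
rows = conn.execute("""
    SELECT r.title, cr2.category_id
    FROM category_resource cr
    JOIN resource r ON cr.resource_id = r.id
    JOIN category_resource cr2 ON cr2.resource_id = r.id
    WHERE cr.category_id IN (9, 10)
    GROUP BY cr2.category_id, r.title
    ORDER BY r.title, cr2.category_id
""").fetchall()
print(rows)
# [('title_1', 9), ('title_1', 10), ('title_1', 11), ('title_1', 12), ('title_2', 10)]
```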
As someone who is fairly new to SQL and doesn't use it much, I'm sure there is an answer to this question out there, but I don't know what to search for to find it, so I apologize.
Question: if I had a bunch of rows in a database with many columns but only needed to get back the IDs, which of these is faster, or are they the same speed?
```
SELECT * FROM...
```
vs
```
SELECT ID FROM...
``` | You asked about performance in particular vs. all the other reasons to avoid `SELECT *`: so it is performance to which I will limit my answer.
On my system, SQL Profiler initially indicated less CPU overhead for the ID-only query, but with the small # or rows involved, each query took the same amount of time.
I think really this was only due to the ID-only query being run first, though. On re-run (in opposite order), they took equally little CPU overhead.
Here is the view of things in SQL Profiler:

With extremely high column and row counts, extremely wide rows, there may be a perceptible difference in the database engine, but nothing glaring here.
*Where you will really see the difference* is in sending the result set back across the network! The ID-only result set will typically be much smaller of course - i.e. less to send back. | Never use \* to return all columns in a table–it’s lazy. You should only extract the data you need.
so-> select field from is more faster | "SELECT * FROM..." VS "SELECT ID FROM..." Performance | [
"",
"asp.net",
"sql",
".net",
"sql-server",
"database",
""
] |
I am working with Yii 1.1.x and get the following error when performing an insert.
```
CDbCommand failed to execute the SQL statement: SQLSTATE[23000]:
Integrity constraint violation: 1062 Duplicate entry '5375' for key 'PRIMARY'
```
This happens on a table called 'competition\_prizes' - there is only one primary key called 'id' (there are no other indexes or anything like that).
I can see that there is one row with the id of 5375 so that entry does already exist (as per the insert query).
The controller code is as follows; it has some functionality within afterSave().
```
protected function setPrizes($prizes, $prize_type)
{
if(!empty($prizes) && is_array($prizes))
{
$prize_model = CompetitionPrizes::model();
$competition_id = $this->competition_id;
$prize_model->deleteAll('competition_id =:competition_id AND prize_type = :prize_type ',array(
':competition_id'=> $competition_id,
':prize_type'=> $prize_type
));
foreach($prizes as $prize_position => $prize_desc) :
$prize_model->setIsNewRecord(true);
$prize_model->setAttributes(compact('competition_id', 'prize_position', 'prize_type','prize_desc'));
$prize_model->save();
endforeach;
}
}
```
Any ideas of how to get around the error - please note I'm new to Yii so be gentle :) | On top of the `$prize_model->setIsNewRecord(true)` (or `$prize_model->isNewRecord = true`) you also have to empty out the id attribute. As I've mentioned in my comment above, the setIsNewRecord only determines if ether the insert or update scenario is used when you call save(). If your PK is still set it will simply attempt an insert with those values set, resulting in a duplicate error.
The following should do the trick:
```
$id = NULL;
foreach($prizes as $prize_position => $prize_desc) :
$prize_model->setIsNewRecord(true);
$prize_model->setAttributes(compact('id', 'competition_id', 'prize_position', 'prize_type','prize_desc'));
$prize_model->save();
endforeach;
``` | Instead of setting new record flag,
```
$prize_model->setIsNewRecord(true); // sometimes it may not work.
```
The issues with `setIsNewRecord` are discussed [here](https://github.com/yiisoft/yii2/issues/566)
You can do like this also:
```
$prize_model->isNewRecord = true;
```
1. The Yii forum suggests creating a new object inside the for loop. You can check [here](http://www.yiiframework.com/forum/index.php/topic/17027-duplicate-a-module-with-setisnewrecordtrue/)
2. You have to unset values manually, like `$model->id = null;`
You can get details [here](http://www.yiiframework.com/forum/index.php/topic/47559-insert-multiple-rows-in-database-using-loop/) | Yii - CDbCommand failed to execute the SQL statement: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '5375' for key 'PRIMARY' | [
"",
"sql",
"activerecord",
"yii",
"frameworks",
""
] |
I've got the following query in Oracle
```
select (case when seqnum = 1 then id else '0' end) as id,
(case when seqnum = 1 then start else "end" end) as timestamp,
unit
from (select t.*,
row_number() over (partition by id, unit order by start) as seqnum,
count(*) over (partition by id, unit) as cnt
from table t
) t
where seqnum = 1 or seqnum = cnt;
```
instead of selecting unit, which is a number, I want to bring its description. The description is stored in another table, and I'm trying to bring the description instead of the number.
```
SELECT t2.unit_description from t2,t where t2.unit = t.unit
```
I'm trying to JOIN these results but it's not working. Can anyone help me in this situation? | I think this is what you want:
```
select (case when t.seqnum = 1 then t.id else '0' end) as id,
(case when t.seqnum = 1 then t.start else t."end" end) as timestamp,
t.unit,
t2.unit_description
from (select t.*,
row_number() over (partition by id, unit order by start) as seqnum,
count(*) over (partition by id, unit) as cnt
from table t
) t join
t2
on t2.unit = t.unit
where seqnum = 1 or seqnum = cnt;
``` | Does this not work?
```
select (case when seqnum = 1 then id else '0' end) as id,
(case when seqnum = 1 then start else "end" end) as timestamp,
t2.unit_description
from (select t.*,
row_number() over (partition by id, unit order by start) as seqnum,
count(*) over (partition by id, unit) as cnt
from table t
) t
join t2 on t.unit = t2.unit
where seqnum = 1 or seqnum = cnt;
``` | ORACLE - How to JOIN in this situation [other option other than JOIN]? | [
"",
"sql",
"oracle",
""
] |
The dataset is the following:
```
FirstName LastName city street housenum
john silver london ukitgam 780/19
gret garbo berlin akstrass 102
le chen berlin oppenhaim NULL
daniel defo rome corso vinchi 25
maggi forth london bolken str NULL
voich lutz paris pinchi platz NULL
anna poperplatz milan via domani 15/4
```
I write the following query:
```
SELECT Trim(a.FirstName) & ' ' & Trim(a.LastName) AS employee_name,
a.city, a.street + ' ' + a.housenum AS address
FROM Employees AS a
```
The result is this:
```
employee_name city address
john silver london ukitgam 780/19
gret garbo berlin akstrass 102
le chen berlin NULL
daniel defo rome corso vinchi 25
maggi forth london NULL
voich lutz paris NULL
anna poperplatz milan via domani 15/4
```
but I want this:
```
employee_name city address
john silver london ukitgam 780/19
gret garbo berlin akstrass 102
le chen berlin oppenhaim
daniel defo rome corso vinchi 25
maggi forth london bolken str
voich lutz paris pinchi platz
anna poperplatz milan via domani 15/4
```
please help me. | ```
SELECT Trim(a.FirstName) & ' ' & Trim(a.LastName) AS employee_name,
a.city, a.street & (' ' +a.housenum) AS address
FROM Employees AS a
``` | Just use [**ISNULL()**](http://technet.microsoft.com/en-us/library/ms184325.aspx) function for `housenum` column.
```
SELECT Trim(a.FirstName) & ' ' & Trim(a.LastName) AS employee_name,
a.city, a.street + ' ' + ISNULL(a.housenum,'') AS address
FROM Employees AS a
```
You are getting address as NULL when `housenum` column value is **NULL** because **NULL** concatenated with anything gives **NULL** as final result.
As has been posted in other answers , you can also use **[COALESCE()](http://msdn.microsoft.com/en-us/library/ms190349.aspx)** to handle NULL values.
**COALESCE** is in the SQL '92 standard and supported by more different databases. On other hand **ISNULL()** is provided in SQL Server only as so would not be much portable. | Concatenate data from two fields ignoring NULL values in one of them? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
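The NULL-safe concatenation from the chosen answer can be sketched with SQLite through Python's `sqlite3` (SQLite concatenates with `||` and offers both `COALESCE` and `IFNULL`; sample rows are taken from the question's dataset):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (street TEXT, housenum TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [('akstrass', '102'), ('oppenhaim', None)])

# COALESCE turns a NULL house number into '' so the whole
# concatenation no longer collapses to NULL; TRIM drops the
# trailing space left behind in that case.
rows = conn.execute("""
    SELECT TRIM(street || ' ' || COALESCE(housenum, '')) AS address
    FROM employees
    ORDER BY rowid
""").fetchall()
print(rows)  # [('akstrass 102',), ('oppenhaim',)]
```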
My sub-query gets me the brand name and the sum of the sales amount from 2013. My main query is trying to get the brand name and the highest sales amount from that year, but this returns duplicates because the query tries to get the max(amount) for each brand name. How do I filter it so it returns only the highest amount with a single brand name? This is my query so far; any pointers would be helpful. Thanks!
```
SELECT
maxamt.brnd_nm,
MAX(maxamt.amt) AS amt
FROM
(
SELECT
c.brnd_nm AS BRND_NM,
SUM(b.sales_amt) AS AMT
FROM prd.client A
INNER JOIN db1.table1 B
ON b.pty_key = a.pty_key
INNER JOIN db1.table2 c
ON b.d_key = c.d_key
INNER JOIN db1.table3 d
ON b.posat_key = d.posat_key
INNER JOIN db1.table4 e
ON b.frmly_key = e.frmly_key
WHERE B.date BETWEEN '2013-01-01'
AND '2013-12-31'
AND b.C_ID IN ( 'abc', 'def', 'wqs')
GROUP BY 1
) MaxAmt
GROUP BY 1
``` | Can't you use the `TOP` operator?:
```
SELECT TOP 1 *
FROM ( SELECT c.brnd_nm AS BRND_NM,
SUM(b.sales_amt) AS AMT
FROM prd.client A
INNER JOIN db1.table1 B
ON b.pty_key = a.pty_key
INNER JOIN db1.table2 c
ON b.d_key = c.d_key
INNER JOIN db1.table3 d
ON b.posat_key = d.posat_key
INNER JOIN db1.table4 e
ON b.frmly_key = e.frmly_key
WHERE B.date BETWEEN '2013-01-01'
AND '2013-12-31'
AND b.C_ID IN ( 'abc', 'def', 'wqs')
GROUP BY c.brnd_nm) MaxAmt
ORDER BY AMT DESC
``` | You don't need a subquery, just use `ORDER BY` and `LIMIT`:
```
SELECT
c.brnd_nm AS BRND_NM,
SUM(b.sales_amt) AS AMT
FROM prd.client A
INNER JOIN db1.table1 B
ON b.pty_key = a.pty_key
INNER JOIN db1.table2 c
ON b.d_key = c.d_key
INNER JOIN db1.table3 d
ON b.posat_key = d.posat_key
INNER JOIN db1.table4 e
ON b.frmly_key = e.frmly_key
WHERE B.date BETWEEN '2013-01-01'
AND '2013-12-31'
AND b.C_ID IN ( 'abc', 'def', 'wqs')
GROUP BY BRND_NM
ORDER BY AMT DESC
LIMIT 1
``` | How to get the max() amount from a sum() subquery | [
"",
"sql",
"aggregate-functions",
"teradata",
""
] |
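The "highest grouped SUM" pattern from the answers above can be sketched in `sqlite3`; `TOP 1` is Teradata/T-SQL syntax, and `LIMIT 1` is the sqlite analogue. Table and column names are illustrative:

```python
import sqlite3

# Sum per brand, then keep only the largest total.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (brnd_nm TEXT, sales_amt INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("A", 100), ("A", 50), ("B", 120), ("C", 30)])
top = conn.execute("""
    SELECT brnd_nm, SUM(sales_amt) AS amt
    FROM sales
    GROUP BY brnd_nm
    ORDER BY amt DESC
    LIMIT 1
""").fetchone()
print(top)  # ('A', 150)
```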
I have a table that contains these columns
ID, NAME, JOB
what I want is to select one record of every distinct job in the table
from this table
```
ID NAME JOB
1 Juan Janitor
2 Jun Waiter
3 Jani Janitor
4 Jeni Bartender
```
to something like this
```
ID NAME JOB
1 Juan Janitor
2 Jun Waiter
4 Jeni Bartender
```
Using distinct will only allow me to select one distinct column, but I want to select every column in the table. Does anyone have an idea how? | You may try this:
```
SELECT ID, NAME,JOB FROM
(
SELECT ID, NAME, JOB, Row_Number() Over (Partition BY JOB Order By ID) AS RN FROM `table1`
) AS T
WHERE RN = 1
``` | ```
SELECT MIN(ID), NAME, JOB FROM `table`
Group by NAME, JOB
``` | Select all columns with distinct values from a table | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
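The `ROW_NUMBER()` technique from the answer above can be verified in `sqlite3` (window functions need SQLite >= 3.25). Note that for the question's goal the partition must be on JOB, not NAME, so exactly one row per distinct job survives; the data mirrors the question:

```python
import sqlite3

# Keep the lowest-ID row within each JOB partition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (id INTEGER, name TEXT, job TEXT)")
conn.executemany("INSERT INTO staff VALUES (?, ?, ?)", [
    (1, "Juan", "Janitor"), (2, "Jun", "Waiter"),
    (3, "Jani", "Janitor"), (4, "Jeni", "Bartender")])
rows = conn.execute("""
    SELECT id, name, job FROM (
        SELECT id, name, job,
               ROW_NUMBER() OVER (PARTITION BY job ORDER BY id) AS rn
        FROM staff
    ) WHERE rn = 1
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Juan', 'Janitor'), (2, 'Jun', 'Waiter'), (4, 'Jeni', 'Bartender')]
```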
How can I set a variable in MSSQL and use it further in the statement?
```
set @DOT =((case when[Day 1] > 12 then ([Day 1] - 12) else 0 end)
+ (case when[Day 2] > 12 then ([Day 2] - 12) else 0 end)
+ (case when[Day 3] > 12 then ([Day 3] - 12) else 0 end)
+ (case when[Day 4] > 12 then ([Day 4] - 12) else 0 end)
+ (case when[Day 5] > 12 then ([Day 5] - 12) else 0 end)
+ (case when[Day 6] > 12 then ([Day 6] - 12) else 0 end)
+ (case when[Day 7] > 12 then ([Day 7] - 12) else 0 end))
case when [Timesheet Total] <= 40 and @DOT = 0 then 0
when [Timesheet Total] <= 40 and @DOT > 0 then @DOT
when [Timesheet Total] > 40 and @DOT = 0 then ([Timesheet Total] -40)
when [Timesheet Total] > 40 and @DOT > 0 and ([Timesheet Total] -40) > @DOT then ([Timesheet Total] -40)
when [Timesheet Total] > 40 and @DOT > 0 and ([Timesheet Total] -40) < @DOT then @DOT
else 0
end
``` | If I'm reading your business logic correctly, you are calculating an overtime payment where:
```
DOT (daily overtime) = sum of all daily hours worked beyond 12 hours / day
WOT (weekly overtime) = number of weekly hours worked beyond 40 hours / week
EOT (effective overtime) = larger of (DOT, WOT)
```
This can be **greatly simplified** to calculate the result for all employees, instead of painstakingly iterating through them.
```
SELECT *
FROM tblHoursWorked t1
CROSS APPLY (
SELECT SUM(CASE WHEN [hr] > 12 THEN [hr] - 12 ELSE 0 END)
FROM (VALUES ([Day 1]),([Day 2]),([Day 3]),([Day 4]),([Day 5]),([Day 6]),([Day 7]) ) d(hr)
) t2(DOT)
CROSS APPLY (
SELECT CASE WHEN [TimeSheet Total] > 40 THEN [TimeSheet Total] - 40 ELSE 0 END
) t3(WOT)
CROSS APPLY (
SELECT MAX(OT) FROM ( VALUES (WOT),(DOT) ) w(OT)
) t4(EOT)
``` | Like so.
```
DECLARE @DOT INT
SELECT @DOT = ((case when[Day 1] > 12 then ([Day 1] - 12) else 0 end)
+ (case when[Day 2] > 12 then ([Day 2] - 12) else 0 end)
+ (case when[Day 3] > 12 then ([Day 3] - 12) else 0 end)
+ (case when[Day 4] > 12 then ([Day 4] - 12) else 0 end)
+ (case when[Day 5] > 12 then ([Day 5] - 12) else 0 end)
+ (case when[Day 6] > 12 then ([Day 6] - 12) else 0 end)
+ (case when[Day 7] > 12 then ([Day 7] - 12) else 0 end))
case when [Timesheet Total] <= 40 and @DOT = 0 then 0
when [Timesheet Total] <= 40 and @DOT > 0 then @DOT
when [Timesheet Total] > 40 and @DOT = 0 then ([Timesheet Total] -40)
when [Timesheet Total] > 40 and @DOT > 0 and ([Timesheet Total] -40) > @DOT then ([Timesheet Total] -40)
when [Timesheet Total] > 40 and @DOT > 0 and ([Timesheet Total] -40) < @DOT then @DOT
else 0
end
``` | SQL case statement and use returned value | [
"",
"sql",
"sql-server",
""
] |
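The overtime business logic described in the accepted answer above can be restated in plain Python for clarity: DOT is the sum of daily hours beyond 12 per day, WOT is weekly hours beyond 40, and the effective overtime is the larger of the two.

```python
# Plain-Python sketch of the DOT/WOT/EOT calculation.
def effective_overtime(daily_hours):
    dot = sum(max(h - 12, 0) for h in daily_hours)   # daily OT past 12 h/day
    wot = max(sum(daily_hours) - 40, 0)              # weekly OT past 40 h/week
    return max(dot, wot)                             # effective OT

print(effective_overtime([14, 8, 8, 8, 8, 0, 0]))   # 6 (WOT=6 beats DOT=2)
print(effective_overtime([13, 13, 4, 4, 4, 0, 0]))  # 2 (DOT=2, WOT=0)
```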
I'm using MySQL Workbench 6.0 and having a problem with a script that drops a table if it exists and then creates a new table.
Below is my sql script.
```
delimiter $$
DROP TABLE IF EXISTS `tableName`;
CREATE TABLE `tableName` (
`column1` varchar(45) NOT NULL,
`column2` varchar(45) NOT NULL,
PRIMARY KEY (`column1`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1$$
```
Error message :
```
Syntax error near CREATE TABLE `tableName`
```
What am I doing wrong ? | UPDATED Answer
You need to terminate the `DROP TABLE` statement with the delimiter `$$` as well:
```
delimiter $$
DROP TABLE IF EXISTS `tableName`$$
CREATE TABLE `tableName` (
`column1` varchar(45) NOT NULL,
`column2` varchar(45) NOT NULL,
PRIMARY KEY (`column1`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1$$
``` | Use following sql :
```
drop table if exists test;
create table test(
var1 int,
var2 int
)
```
Fiddle : <http://sqlfiddle.com/#!2/a6e60> | Delete table and create table again | [
"",
"mysql",
"sql",
"database",
"syntax",
"syntax-error",
""
] |
I am getting an error when trying to run:
```
SELECT MAX(TopDayScore) AS TopDayScore, Username
FROM Users
WHERE PartnerID = '{0}' and Validated = 1
```
The error is:
> "Column 'Users.Username' is invalid in the select list because it is
> not contained in either an aggregate function or the GROUP BY clause."
Not sure what I'm missing, as I can't seem to find an example of mixing the MAX method with multiple properties. | This query should give you what you want:
```
SELECT MAX(TopDayScore) AS TopDayScore, Username FROM users GROUP BY Username
``` | Anytime you use an aggregate (MAX, MIN, SUM, COUNT, etc.) in a `SELECT`, all columns not contained in an aggregate function need to be in the `GROUP BY` clause; in this case, the Username column:
```
SELECT MAX(TopDayScore) AS TopDayScore
,Username
FROM Users
WHERE PartnerID = '{0}'
and Validated = 1
GROUP BY Username
``` | SELECT MAX(TopDayScore) AS TopDayScore, Username FROM ERROR | [
"",
"sql",
"sql-server",
"aggregate-functions",
""
] |
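The fix above can be checked in `sqlite3`. In SQL Server, every non-aggregated column in the SELECT list must appear in GROUP BY (sqlite is laxer about this, but the grouped form is the portable one); data here is illustrative:

```python
import sqlite3

# MAX per username, with username in the GROUP BY clause.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, top_day_score INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ann", 10), ("ann", 25), ("bob", 7)])
rows = conn.execute("""
    SELECT MAX(top_day_score) AS top, username
    FROM users
    GROUP BY username
    ORDER BY username
""").fetchall()
print(rows)  # [(25, 'ann'), (7, 'bob')]
```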
**Update:** I’ve found a solution for my case (my answer below) involving being judicious about when the “in” is used, but there may be more generally useful advice yet to be had.
I’m sure an answer to this question exists in plenty of places, but I’m having trouble finding it because my situation is slightly more complicated than what I’ve found discussion of in the Postgres documentation but much less complicated than any of the questions I’ve found on here that involve multiple tables or subqueries and are answered with an elaborate plan of attack. So I don’t mind being pointed to one of those existing answers that I’ve failed to find, so long as it actually helps in my situation.
Here’s an example of a query that’s causing me trouble:
```
SELECT trees.id FROM "trees" WHERE "trees"."trashed" = 'f' AND (trees.chapter_id IN (1,8,9,12,18,11,6,10,5,2,4,7,16,15,17,3,14,13)) ORDER BY LOWER(trees.shortcode);
```
This is generated by ActiveRecord in my Rails application and maybe I could rephrase the query to be more optimal somehow, but this result set (the IDs of all trees, in a textual order, filtered by "trashed" and belonging to a subset of "chapters") is something I currently need for a big paginated list of trees in the interface. (The subset of chapters is decided by the system of user permissions, so this query has to be invoked at least once when a user begins looking at the list.)
In my local version, there are about 67,000 trees in this table, and there will only ever be more in production.
Here’s the query plan given by `EXPLAIN`:
```
Sort (cost=9406.85..9543.34 rows=54595 width=17)
Sort Key: (lower((shortcode)::text))
-> Seq Scan on trees (cost=0.00..3991.18 rows=54595 width=17)
Filter: ((NOT trashed) AND (chapter_id = ANY ('{1,8,9,12,18,11,6,10,5,2,4,7,16,15,17,3,14,13}'::integer[])))
```
This becomes much faster if I remove the order by, obviously, but again, I need this list of IDs in a specific order to display even a page of this list. Locally, this query executes in about 2-3 seconds, which is way too long, and generally I’ve found that the database on heroku where the production version is takes similar or longer times as my local database does.
There are individual (btree) indices on trees.trashed, trees.chapter\_id, and LOWER(trees.shortcode). I experimented with adding a multi-column index on trashed and chapter\_id, but predictably, that didn’t help, because that’s not the slow part of this query. I don’t know enough about postgres or SQL to have an idea of where to go from here, which is why I’m asking for help. (I’d like to learn more, so any pointers to sections of the documentation that would give me a better sense of the kinds of things to investigate would be greatly appreciated as well.)
The list of chapters is never going to get much longer than this, so maybe it would be faster to filter on each individually? There are similar queries elsewhere in the application, so I would rather learn a general way to improve this kind of thing.
I may have forgotten to add some important information while writing this, so if there’s something that seems obviously wrong, please comment and I’ll try to clarify.
**Update:** Here’s the description of the trees table, as requested by a commenter.
```
Table "public.trees"
Column | Type | Modifiers
-------------------+-----------------------------+----------------------------------------------------
id | integer | not null default nextval('trees_id_seq'::regclass)
created_at | timestamp without time zone |
updated_at | timestamp without time zone |
shortcode | character varying(255) |
cross_id | integer |
chapter_id | integer |
name | character varying(255) |
classification | character varying(255) |
tag | character varying(255) |
alive | boolean | not null default true
latitude | numeric(14,10) |
longitude | numeric(14,10) |
city | character varying(255) |
county | character varying(255) |
state | character varying(255) |
comments | text |
trashed | boolean | not null default false
created_by_id | integer |
death_date | date |
planted_as | character varying(255) | not null default 'seed'::character varying
wild | boolean | not null default false
submitted_by_id | integer |
owned_by_id | integer |
steward_id | integer |
planting_id | integer |
planting_cross_id | integer |
Indexes:
"trees_pkey" PRIMARY KEY, btree (id)
"index_trees_on_chapter_id" btree (chapter_id)
"index_trees_on_created_by_id" btree (created_by_id)
"index_trees_on_cross_id" btree (cross_id)
"index_trees_on_trashed" btree (trashed)
"trees_lower_classification_idx" btree (lower(classification::text))
"trees_lower_name_idx" btree (lower(name::text))
"trees_lower_shortcode_idx" btree (lower(shortcode::text))
"trees_lower_tag_idx" btree (lower(tag::text))
```
My local trees table has 67406 rows, and there will be more in production. | Since much of the complication comes from the "in" here, and the most common cases are going to be "all chapters" (for administrators) or "one or two chapters" (for normal users, and which run much, much, much faster), I’ve decided to just optimize the "all chapters" case by leaving out the chapters clause when the application detects that that will be the case. This does not solve my problem in general, but it solves the issue in practice.
In general, I’ve decided to compare the list of included “parent IDs” in situations like this to all possible parent IDs, and to switch to an equivalent NOT IN if it’s more than half, or drop the clause entirely if the lists are the same. In all the practical cases I’ve tested, this drastically improves performance, and it will only be in very unusual cases that it will be anywhere near as slow as my initial strategy. | Based on your query plan, you're fetching 55k of 67k rows. No indexes are going to help you do that. The fastest plan will be to read the entire table, filter out the occasional unneeded row, and sort.
Naturally, the real question is whether you should be fetching that many rows to begin with, instead of paging them using limit ... offset. In the latter case, your indexes would become useful. The one on lower(shortcode) in particular, since it'll find matching rows very fast, and do so in the correct order. | Speeding up a postgres query that filters on two columns and sorts on a function | [
"",
"sql",
"postgresql",
""
] |
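The heuristic described in the answer above can be sketched in plain Python: compare the permitted chapter IDs to the full set and emit the cheapest filter clause (function and column names are illustrative):

```python
# Emit the cheapest chapter filter: no clause when all chapters are
# permitted, NOT IN when more than half are, IN otherwise.
def chapter_filter(requested, all_chapters):
    req, full = set(requested), set(all_chapters)
    if req == full:
        return ""  # all chapters permitted: drop the clause entirely
    if len(req) > len(full) / 2:
        excluded = sorted(full - req)
        return "chapter_id NOT IN (%s)" % ",".join(map(str, excluded))
    return "chapter_id IN (%s)" % ",".join(map(str, sorted(req)))

print(chapter_filter([1, 2, 3], range(1, 19)))     # chapter_id IN (1,2,3)
print(chapter_filter(range(1, 19), range(1, 19)))  # (empty string)
```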
I have a column that is returning either nothing, 0 or Budget codes.
The column is ExerciceBudgetaire and I want to replace 0 by '' (empty field) and keep other fields as they are: if 0 then '', if not, return ExerciceBudgetaire
Here is what I did, but it's not working:
```
Select DescriptionLigne,ref1
case ExerciceBudgetaire
WHEN 0 THEN ''
else ExerciceBudgetaire
end
from dbo.CODA
```
Thanks for your help! | There's one issue where you were missing a comma, which both [current](https://stackoverflow.com/a/21376558/15498) [answers](https://stackoverflow.com/a/21377103/15498) address.
The second issue is that
```
case ExerciceBudgetaire
WHEN 0
```
Has a string value coming from `ExerciceBudgetaire` and a constant `int` value, `0`. `int` has a higher [type precedence](http://technet.microsoft.com/en-us/library/ms190309.aspx) than any of the (`n`)(`var`)`char` types, and so it tries to convert all of the strings to `int`s to see if they're `0`. So, instead, pass it a string constant instead:
```
Select DescriptionLigne,ref1, --New comma
case ExerciceBudgetaire
WHEN '0' THEN ''
else ExerciceBudgetaire
end
from dbo.CODA
```
(Both of the current answers also switch the [`CASE` expression](http://technet.microsoft.com/en-us/library/ms181765.aspx) from a simple `CASE` expression to a searched one. However, I see no necessity to do that) | Try this
```
Select DescriptionLigne,ref1, case when ExerciceBudgetaire = 0 THEN '' else ExerciceBudgetaire end from dbo.CODA
```
(Assuming `ref1` to be a column) | SQL: How to replace field when 0 and return same data when it's not? | [
"",
"sql",
""
] |
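The string-constant CASE from the accepted answer above can be demonstrated in `sqlite3`: comparing against `'0'` rather than the integer `0` keeps the comparison in the varchar column's own type (in SQL Server, an integer constant would force a cast of the strings):

```python
import sqlite3

# Replace the literal string '0' with '' and keep everything else.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coda (exercice TEXT)")
conn.executemany("INSERT INTO coda VALUES (?)", [("0",), ("2014",), ("",)])
rows = conn.execute("""
    SELECT CASE exercice WHEN '0' THEN '' ELSE exercice END FROM coda
""").fetchall()
print(rows)  # [('',), ('2014',), ('',)]
```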
As you can see in the image below, the month is shown as 1 and 2. How can I convert 1 to Jan, 2 to Feb, and so on? Below is my SQL.
Help will be appreciated. Thanks in advance! :)

```
SELECT DISTINCT fbmenuname,
Sum(quantity) AS ordercount,
Month(Str_to_date(date, '%d/%m/%Y')) AS month
FROM fb.fborders
INNER JOIN fb.fbmenu
ON fborders.fbmenuid = fbmenu.fbmenuid
INNER JOIN fb.fbrestaurant
ON fbrestaurant.fbrestaurantid = fbmenu.fbrestaurantid
WHERE fborders.status = 'completed'
AND ( Year(Str_to_date(date, '%d/%m/%Y')) = '2013'
AND fbrestaurant.fbrestaurantid = 'R0001' )
GROUP BY fbmenuname,
Month(Str_to_date(date, '%d/%m/%Y'))
``` | As stated [in this other answer](https://stackoverflow.com/a/7027175/447356):
> You can use [`STR_TO_DATE()`](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_str-to-date) to convert the number to a date, and
> then back with `MONTHNAME()`
>
> ```
> SELECT MONTHNAME(STR_TO_DATE(6, '%m'));
>
> +---------------------------------+
> | MONTHNAME(STR_TO_DATE(6, '%m')) |
> +---------------------------------+
> | June |
> +---------------------------------+
> ```
Use the [`LEFT`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_left) function to trim the unwanted characters.
So try this
```
SELECT DISTINCT fbmenuname,
Sum(quantity)
AS ordercount,
Month(Str_to_date(date, '%d/%m/%Y'))
AS month,
Monthname(Str_to_date(Month(Str_to_date(date, '%d/%m/%Y')), '%m'
)) AS MonthName
FROM fb.fborders
INNER JOIN fb.fbmenu
ON fborders.fbmenuid = fbmenu.fbmenuid
INNER JOIN fb.fbrestaurant
ON fbrestaurant.fbrestaurantid = fbmenu.fbrestaurantid
WHERE fborders.status = 'completed'
AND ( Year(Str_to_date(date, '%d/%m/%Y')) = '2013'
AND fbrestaurant.fbrestaurantid = 'R0001' )
GROUP BY fbmenuname,
Month(Str_to_date(date, '%d/%m/%Y'))
```
# [FIDDLE](http://www.sqlfiddle.com/#!2/d41d8/30380) | Assuming you are using `MSSQL`, the query would be:
```
SELECT DATENAME(month, 1)
FROM fborders
```
and for the rest of your query, you can use it in either an `Update` clause or a `Select` clause. | SQL Statement Convert it to 1 to Jan, 2 to Feb and so on | [
"",
"sql",
"select",
"monthcalendar",
"str-to-date",
""
] |
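For comparison with MySQL's `MONTHNAME()` (which returns the full name, e.g. 'January'), the abbreviated names the question asks for ('Jan', 'Feb', ...) can also be produced outside SQL with Python's `calendar` module:

```python
import calendar

# Month number -> short name mapping, 1-based like MySQL's MONTH().
print([calendar.month_abbr[m] for m in (1, 2, 12)])  # ['Jan', 'Feb', 'Dec']
print(calendar.month_name[1])                        # January
```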
I'm running a mysql-query from a bash shell which should write the result into a file. The thingy looks like this (simplified a bit):
```
mysql -uname -ppwd wmap -e "select netpoints.bssid,netpoints.lat,netpoints.lon from netpoints,users WHERE ((users.flags & 1 = 1) AND users.idx=netpoints.userid) OR (netpoints.source=5);" >db/db.csv
```
The query itself is OK and worked fine, but at some point execution of the script started failing with
```
line 1: 11427 Killed
```
So... how can I avoid the query being terminated this way? The result would be a big amount of data and the query needs a really long time to execute, but that's OK as long as there is a result.
(Add the command and error message into code tags) | By default the entire result set is fetched in memory. If that becomes too much, the mysql client will be killed. You can start the mysql client with the `--quick` option to prevent this:
```
mysql --quick -uname -ppwd wmap -e ...
``` | You might be hitting a timeout in MySQL. You may want to check your MySQL slow query log to see if that's the case. If so, you may want to look at optimizing the query if possible, or perhaps increasing the timeouts in MySQL (<http://www.rackspace.com/knowledge_center/article/how-to-change-the-mysql-timeout-on-a-server> for info on how to do this). | MySQL query killed - how to prevent that? | [
"",
"mysql",
"sql",
"linux",
"bash",
""
] |
I struggled with this issue for too long before finally tracking down how to avoid/fix it. It seems like something that should be on StackOverflow for the benefit of others.
I had an SSRS report where the query worked fine and displayed the string results I expected. However, when I tried to add that field to the report, it kept showing "ERROR#". I was eventually able to find a little bit more info:
> The Value expression used in [textbox] returned a data type that is
> not valid.
But, I knew my data was valid. | Found the answer [here](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/fad557d1-f32b-4291-a5ef-33220a99fa1b/the-value-expression-used-in-textrun-textboxparagraphs0textruns0-returned-a-data-type-that?forum=sqlreportingservices).
Basically, it's a problem with caching and you need to delete the ".data" file that is created in the same directory as your report. Some also suggested copying the query/report to a new report, but that appears to be the hard way to achieve the same thing. I deleted the .data file for the report I was having trouble with and it immediately started working as-expected. | After you preview the report, click the refresh button on the report and it will pull the data again creating an updated rdl.data file. | SQL Server Reporting Studio report showing "ERROR#" or invalid data type error | [
"",
"sql",
"sql-server",
"reporting-services",
"ssrs-2012",
""
] |
I have two tables : request and location
Sample data for request
```
request id | requestor | locations
1 | ankur | 2,5
2 | akshay | 1
3 | avneet | 3,4
4 | priya | 4
```
Sample data for locations
```
loc_id | loc_name |
1 | gondor
2 | rohan
3 | mordor
4 | bree
5 | shire
```
I'd like to find the request\_id for a particular location. If I do this with location\_id, I am getting correct results.
```
select request_id from request where locations like "%,3%" or locations like "%3,%";
```
This query gives me the requests raised for location id = 3
How can I achieve this for loc\_name instead? Replacing the digit in the "like" part of the query with
```
select loc_id from locations where loc_name = "mordor"
```
Any help with this would be very helpful. Thanks. | You can use `FIND_IN_SET()`
```
SELECT *
FROM request r JOIN locations l
ON FIND_IN_SET(loc_id, locations) > 0
WHERE loc_name = 'mordor'
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/95b0a/2)** demo
But you **better normalize** your data by introducing a many-to-many table that may look like
```
CREATE TABLE request_location
(
request_id INT NOT NULL,
loc_id INT NOT NULL,
PRIMARY KEY (request_id, loc_id),
FOREIGN KEY (request_id) REFERENCES request (request_id),
FOREIGN KEY (loc_id) REFERENCES locations (loc_id)
);
```
This will pay off big time in a long run enabling you to maintain and query your data normally.
Your query then may look like
```
SELECT *
FROM request_location rl JOIN request r
ON rl.request_id = r.request_id JOIN locations l
ON rl.loc_id = l.loc_id
WHERE l.loc_name = 'mordor'
```
or even
```
SELECT rl.request_id
FROM request_location rl JOIN locations l
ON rl.loc_id = l.loc_id
WHERE l.loc_name = 'mordor';
```
if you need to return only `request_id`
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/1c4c5/1)** demo | solution if locations is a varchar field!
join your tables with a like string (concat your r.locations for a starting and ending comma)
code is untested:
```
SELECT
r.request_id
FROM location l
INNER JOIN request r
ON CONCAT(',', r.locations, ',') LIKE CONCAT('%,',l.loc_id,',%')
WHERE l.loc_name = 'mordor'
``` | Replace values in an sql query according to results of a nested query | [
"",
"mysql",
"sql",
""
] |
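The normalized many-to-many design recommended in the accepted answer above can be sketched in `sqlite3`, with sample data shaped like the question's:

```python
import sqlite3

# One row per (request, location) pair instead of a comma-separated list.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE locations (loc_id INTEGER, loc_name TEXT);
    CREATE TABLE request_location (request_id INTEGER, loc_id INTEGER);
    INSERT INTO locations VALUES (2,'rohan'),(3,'mordor'),(4,'bree');
    INSERT INTO request_location VALUES (1,2),(1,5),(3,3),(3,4),(4,4);
""")
rows = conn.execute("""
    SELECT rl.request_id
    FROM request_location rl
    JOIN locations l ON rl.loc_id = l.loc_id
    WHERE l.loc_name = 'mordor'
""").fetchall()
print(rows)  # [(3,)]
```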
**Edit:**
> Database - Oracle 11gR2 - Over Exadata (X2)
---
I am framing a problem investigation report for past issues, and I'm slightly confused by the situation below.
Say I have a table `MYACCT`. It has `138` columns and holds `10 million` records, with regular updates of at least 1000 records (inserts/updates/deletes) per hour.
Primary Key is `COL1 (VARCHAR2(18))` (Application rarely uses this, except for a join with other tables)
There's another unique index over `COL2 (VARCHAR2(9))`, which is the one the application regularly uses. Whatever updates I mentioned previously happen based on both these columns, whereas any `SELECT`-only operation over this table always refers to `COL2`. So `COL2` is our column of interest.
We do a Query below,
`SELECT COUNT(COL2) FROM MYACCT; /* Use the Unique Column (Not PK) */`
There's no issue with the result, whereas I was the one recommending changing it to
`SELECT COUNT(COL1) FROM MYACCT; /* Use the primary index */`
I just calculated the time taken for actual execution
> The query using the PRIMARY KEY was always faster by 0.8-1.0 seconds!
Now I am trying to explain this behavior, so I'm drafting the explain plans behind these queries.
## Query 1: (Using Primary Key)
```
SELECT COUNT(COL1) FROM MYACCT;
```
**Plan :**
```
SQL> select * from TABLE(dbms_xplan.display);
Plan hash value: 2417095184
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 11337 (1)| 00:02:17 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX STORAGE FAST FULL SCAN| PK_ACCT | 10M| 11337 (1)| 00:02:17 |
---------------------------------------------------------------------------------
9 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
41332 consistent gets
0 physical reads
0 redo size
210 bytes sent via SQL*Net to client
346 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
```
---
## Query 2:(NOT using Primary Key)
```
SELECT COUNT(COL2) FROM MYACCT;
```
**Plan :**
```
Plan hash value: 1130703739
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 7868 (1)| 00:01:35 |
| 1 | SORT AGGREGATE | | 1 | 10 | | |
| 2 | INDEX STORAGE FAST FULL SCAN| MYINDX01 | 10M| 95M| 7868 (1)| 00:01:35 |
------------------------------------------------------------------------------------------
9 rows selected.
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
28151 consistent gets
23 physical reads
784 redo size
233 bytes sent via SQL*Net to client
346 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
```
> In terms of `Cost` and `Time`, we can see that the query without the primary
> key wins. Then how come the execution time with the primary key is
> faster???
EDIT:
```
SQL> select segment_name, bytes from dba_segments where segment_name in ('MYINDX01','PK_ACCT');
PK_ACCT 343932928
MYINDX01 234881024
``` | You're reading more data from the PK index than you are from the other one. `COL1` is `VARCHAR2(18)` while `COL2` is `VARCHAR(9)`, which doesn't necessarily mean anything but implies you probably have values in `COL1` that are consistently longer than those in `COL2`. They will therefore use more storage, both in the table and in the index, and the index scan has to pull more data from the block buffer and/or disk for the PK-based query.
The execution statistics show that; 41332 consistent gets for the PK-based query, and only 28151 for the faster one, so it's doing more work with the PK. And the segment sizes show it too - for the PK you need to read about 328M, for the UK only 224M.
The block buffer is likely to be crucial if you're seeing the PK version run faster sometimes. In the example you've shown both queries are hitting the block buffer - the 23 physical reads are a trivial number, If the index data wasn't cached consistently then you might see 41k consistent gets versus 28k physical reads, which would likely reverse the apparent winner as physical reads from disk will be slower. This often manifests if running two queries back to back shows one faster, but reversing the order they run shows the other as faster.
You can't generalise this to 'PK query is slower then UK query'; this is because of your specific data. You'd probably also get better performance if your PK was actually a number column, rather than a `VARCHAR2` column holding numbers, which is never a good idea. | Given a statement like
```
select count(x) from some_table
```
If there is a *covering index* for column `x`, the query optimizer is likely to use it, so it doesn't have to fetch the [ginormous] data page.
It sounds like the two columns (`col1` and `col2`) involved in your similar queries are both indexed[1]. What you don't say is whether or not either of these indices are *clustered*.
That can make a big difference. If the index is clustered, the leaf node in the B-tree that is the index is the table's data page. Given how big your rows are (or seem to be), that means a scan of the clustered index will likely move a lot more data around -- meaning more paging -- than it would if it was scanning a non-clustered index.
Most aggregate functions eliminate nulls in computing the value of the aggregate function. `count()` is a little different. `count(*)` includes nulls in the results while `count(expression)` excludes nulls from the results. Since you're not using `distinct`, and assuming your `col1` and `col2` columns are `not null`, you might get better performance by trying
```
select count(*) from myacct
```
or
```
select count(1) from myacct
```
so the optimizer doesn't have to consider whether or not the column is null.
Just a thought.
[1]And I assume that they are the **only** column in their respective index. | Impact of COUNT() Based on Column in a Table | [
"",
"sql",
"performance",
"oracle",
"oracle11g",
"exadata",
""
] |
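The NULL-handling difference between `count(*)` and `count(col)` mentioned above is easy to demonstrate in `sqlite3` (illustrative one-column table): `COUNT(*)` counts rows, while `COUNT(col)` skips NULLs.

```python
import sqlite3

# Three rows, one of them NULL in column c.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), (None,), ("b",)])
row = conn.execute("SELECT COUNT(*), COUNT(c) FROM t").fetchone()
print(row)  # (3, 2)
```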
I don't quite understand how this DECODE function will resolve itself, specifically in the pointed (==>) parameter:
```
DECODE
(
SH.RESPCODE, 0, SH.AMOUNT-DEVICE_FEE,
102, SH.NEW_AMOUNT-DEVICE_FEE,
==>AMOUNT-DEVICE_FEE,
0
)
```
Thanks in advance for any enlightenment on what that parameter will resolve to. | In pseudo-code, this would be equivalent to the following IF statement
```
IF( sh.respcode = 0 )
THEN
RETURN sh.amount-device_fee
ELSIF( sh.respcode = 102 )
THEN
RETURN sh.new_amount-device_fee
ELSIF( sh.respcode = amount-device_fee )
THEN
RETURN 0;
END IF;
``` | It says:
```
If the value of SH.RESPCODE is 0, then return SH.AMOUNT-DEVICE_FEE.
Otherwise, if it's equal to 102, then return SH.NEW_AMOUNT-DEVICE_FEE.
Otherwise, if it's equal to AMOUNT-DEVICE_FEE, then return 0
```
As a case statement it would be:
```
Case SH.RESPCODE
when 0 then SH.AMOUNT-DEVICE_FEE
when 102 then SH.NEW_AMOUNT-DEVICE_FEE
when AMOUNT-DEVICE_FEE then 0
else null
end
```
There are some subtle differences with respect to treatment of NULL between the two. | DECODE in ORACLE | [
"",
"sql",
"oracle",
"plsql",
""
] |
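The DECODE from the question can be rewritten as the equivalent simple CASE and checked in `sqlite3` (which has no DECODE); the sample rows exercise each branch, including the odd third pair where RESPCODE is compared to AMOUNT-DEVICE_FEE:

```python
import sqlite3

# DECODE(respcode, 0, amount-fee, 102, new_amount-fee, amount-fee, 0)
# expressed as a simple CASE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sh (respcode INTEGER, amount INTEGER,"
             " new_amount INTEGER, device_fee INTEGER)")
conn.executemany("INSERT INTO sh VALUES (?,?,?,?)",
                 [(0, 100, 90, 10), (102, 100, 90, 10), (7, 100, 90, 10)])
rows = conn.execute("""
    SELECT CASE respcode
             WHEN 0 THEN amount - device_fee
             WHEN 102 THEN new_amount - device_fee
             WHEN amount - device_fee THEN 0
             ELSE NULL
           END
    FROM sh
""").fetchall()
print(rows)  # [(90,), (80,), (None,)]
```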
I have a table with survey results. It basically contains the question ID, the response(s) chosen, and the user ID of the person taking the survey. Some sample data is -
```
Q_ID / Response ID / Response / Username
23 / 14 / Male / testuser1
23 / 14 / Male / testuser2
23 / 15 / Female / testuser3
24 / 16 / Male / testuser2
24 / 17 / Married / testuser3
25 / 19 / Engineer / testuser1
25 / 21 / Surgeon / testuser3
```
I also have another simple table with the Question ID, and the actual corresponding question. For example, Question with Q\_ID 23 is "What is your Gender?"
What query can I use in order to get the results similar to this -
```
Question No / Question / Response # / Response / Count
23 / What is your Gender / 14 / Male / 27
23 / What is your Gender / 15 / Female / 14
```
This is what I tried, but it doesn't quite do what I'm looking for.. (I am a beginner)
```
Select a.Q_ID, b.Question, a.response_id, a.response, count(a.response)
from survey_responses a, survey_questions b
where a.Q_ID = b.Q_ID group by count(response)
``` | ```
Select a.Q_ID, b.Question, a.response_id, a.response, count(a.response)
from survey_responses a, survey_questions b
where a.Q_ID = b.Q_ID
group by a.Q_ID, b.Question, a.response_id, a.response
```
Group by the columns you want to count up, not by the count.
Edit in : Newer syntax using joins...you should probably be creating your sql like this (since 1992)
```
SELECT a.Q_ID, b.Question, a.response_id, a.response, count(a.response)
FROM survey_responses a
INNER JOIN survey_questions b ON b.Q_ID = a.Q_ID
group by a.Q_ID, b.Question, a.response_id, a.response
order by a.Q_ID
```
1 more edit...put in the order by | Try using a `JOIN` like this:
```
SELECT a.Q_ID, b.Question, a.response_id, a.response, count(a.response)
FROM survey_responses a
INNER JOIN survey_questions b ON b.Q_ID = a.Q_ID
GROUP BY a.Q_ID
ORDER BY count(a.response)
``` | SQL Group by Query help needed | [
"",
"mysql",
"sql",
""
] |
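The join-plus-GROUP-BY fix from the accepted answer above can be run in `sqlite3` with the question's sample survey data:

```python
import sqlite3

# Count responses per (question, response) combination.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE survey_responses (q_id INTEGER, response_id INTEGER,
                                   response TEXT, username TEXT);
    CREATE TABLE survey_questions (q_id INTEGER, question TEXT);
    INSERT INTO survey_questions VALUES (23, 'What is your Gender?');
    INSERT INTO survey_responses VALUES
        (23, 14, 'Male', 'testuser1'),
        (23, 14, 'Male', 'testuser2'),
        (23, 15, 'Female', 'testuser3');
""")
rows = conn.execute("""
    SELECT a.q_id, b.question, a.response_id, a.response, COUNT(a.response)
    FROM survey_responses a
    JOIN survey_questions b ON b.q_id = a.q_id
    GROUP BY a.q_id, b.question, a.response_id, a.response
    ORDER BY a.response_id
""").fetchall()
print(rows)
```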
I'm trying to find out how I can remove 00:00:00.000 from my monthly sales date, so it gets even easier for my economics colleagues.
When I do my:
```
select SALES_DATE from SALES
where SALES_DATE BETWEEN '2013-11-02' and '2013-12-01'
```
I get the dates formatted as: 2013-11-28 00:00:00.000
while I really want it as 2013-11-28
any ideas? | ```
SELECT DATE(SALES_DATE) FROM SALES
WHERE DATE(SALES_DATE)
BETWEEN '2013-11-02' AND '2013-12-01'
```
Using the DATE() function this way, you get your date in the required format.
Here, we are converting it explicitly.
The different format you were seeing comes from the implicit conversion between the SQL date and the util date. | Are you looking for the **[DATE()](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date)** function in mysql?
According to mysql documentation
> DATE(expr) Extracts the date part of the date or datetime expression
> expr. mysql> SELECT DATE('2003-12-31 01:02:03');
> -> '2003-12-31'
So your case will be
```
select DATE(SALES_DATE) from SALES
where SALES_DATE BETWEEN '2013-11-02' and '2013-12-01'
``` | Remove the 00:00:00.000 from a sales query MYSQL | [
"",
"mysql",
"sql",
"date",
"datetime",
"select",
""
] |
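SQLite's `DATE()` mirrors the MySQL `DATE()` trick from the answers above, dropping the time part of a datetime string:

```python
import sqlite3

# DATE() strips the time component from a datetime literal.
conn = sqlite3.connect(":memory:")
row = conn.execute("SELECT DATE('2013-11-28 00:00:00')").fetchone()
print(row)  # ('2013-11-28',)
```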
I need help with an SQL query to find positions. For example, if I have the string "111444411114444411111",
I need the positions of the runs of 4. For the above string, the from/to positions of 4 are:
4 to 7,
12 to 16 | Please try:
```
DECLARE @var NVARCHAR(MAX)='111444411114444411111'
;WITH T AS(
SELECT LEFT(@var, 1) Cha, 1 pos, 1 Ind
UNION ALL
SELECT SUBSTRING(@var, Ind+1, 1),
CASE WHEN Cha=SUBSTRING(@var, Ind+1, 1) THEN pos ELSE pos+1 END,
Ind+1
FROM T
WHERE Ind+1<=LEN(@var)
)
SELECT CAST(MIN(Ind) AS NVARCHAR)+' to '+CAST(MAX(Ind) AS NVARCHAR) Positions
FROM T WHERE Cha='4'
GROUP BY pos
``` | Use [CHARINDEX](https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&cad=rja&ved=0CEIQFjAD&url=http://www.sql-server-helper.com/tips/tip-of-the-day.aspx?tkey=4a59645b-a5a1-4099-b6d2-959d8644cbed&tkw=uses-of-the-charindex-function&ei=lxnqUtexEsixrgeWtYHADQ&usg=AFQjCNGzpFcFNrzuKoC1R_ghwZgx7yTSBg&bvm=bv.60444564,d.bmk) to find the postion of the character. | SQL query for finding positions in string | [
"",
"sql",
"string",
"position",
""
] |
I need to compare 2 parameters in a select query with 2 parameters in a subquery.
Example:
```
SELECT *
FROM pupil
WHERE name NOT IN (SELECT name FROM bad_names)
```
This example contains only one parameter to compare. But what if I need to compare subquery result with pair of 2 parameters?
This will work:
```
SELECT *
FROM pupil
WHERE name||lastname NOT IN (
SELECT name||lastname FROM bad_name_lastname_combinations)
```
So string concatenation is the only way to do this? | PostgreSQL is flexible enough to do this with a [*row constructor* and NOT IN](http://www.postgresql.org/docs/current/interactive/functions-subquery.html#FUNCTIONS-SUBQUERY-NOTIN):
> ```
> row_constructor NOT IN (subquery)
> ```
>
> The left-hand side of this form of `NOT IN` is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of `NOT IN` is "true" if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is "false" if any equal row is found.
So you can do it in the straight forward way if you write the `row_constructor` properly:
```
select *
from pupil
where (name, lastname) not in (
select name, lastname
from bad_last_name_combinations
)
```
Demo: <http://sqlfiddle.com/#!15/a6863/1> | You can do this with a `left outer join`:
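The row-constructor form is not Postgres-only; SQLite (3.15+) accepts it too, so the idea can be exercised with Python's built-in `sqlite3` (the sample names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pupil (name TEXT, lastname TEXT);
CREATE TABLE bad_name_lastname_combinations (name TEXT, lastname TEXT);
INSERT INTO pupil VALUES ('John', 'Doe'), ('Jane', 'Roe');
INSERT INTO bad_name_lastname_combinations VALUES ('John', 'Doe');
""")

# (name, lastname) is a row constructor compared pair-wise against the subquery
rows = conn.execute("""
    SELECT name, lastname FROM pupil
    WHERE (name, lastname) NOT IN
          (SELECT name, lastname FROM bad_name_lastname_combinations)
""").fetchall()
```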
```
SELECT p.*
FROM pupil p left outer join
bad_name_lastname_combinations bnlc
on p.name = bnlc.name and p.lastname = bnlc.lastname
WHERE bnlc.name is null;
``` | Compare subquery result with pair of 2 parameters | [
"",
"sql",
"postgresql",
"subquery",
""
] |
I have some data like this in the database as varchar.
When the number of characters after the decimal point is 1, I would like to add a 0:
```
4.2
4.80
2.43
2.45
```
becomes
```
4.20
4.80
2.43
2.45
```
Any ideas? I am currently trying to play around with LEFT and LEN but can't get it to work.
```
SELECT v,
AsDecimal = CAST(v AS DECIMAL(3, 2))
FROM (VALUES ('4.2'), ('4.80'), ('2.43'), ('2.45')) t (v)
```
Will give:
```
v AsDecimal
4.2 4.20
4.80 4.80
2.43 2.43
2.45 2.45
```
If this is not an option you can use:
```
SELECT v,
AsDecimal = CAST(v AS DECIMAL(4, 2)),
AsVarchar = CASE WHEN CHARINDEX('.', v) = 0 THEN v + '.00'
WHEN CHARINDEX('.', REVERSE(v)) > 3 THEN SUBSTRING(v, 1, CHARINDEX('.', v) + 2)
ELSE v + REPLICATE('0', 3 - CHARINDEX('.', REVERSE(v)))
END
FROM (VALUES ('4.2'), ('4.80'), ('2.43'), ('2.45'), ('54'), ('4.001'), ('35.051')) t (v);
```
Which gives:
```
v AsDecimal AsVarchar
4.2 4.20 4.20
4.80 4.80 4.80
2.43 2.43 2.43
2.45 2.45 2.45
54 54.00 54.00
4.001 4.00 4.00
35.051 35.05 35.05
```
Finally, if you have non varchar values you need to check the conversion first with ISNUMERIC, but [this has its flaws](http://classicasp.aspfaq.com/general/what-is-wrong-with-isnumeric.html):
```
SELECT v,
AsDecimal = CASE WHEN ISNUMERIC(v) = 1 THEN CAST(v AS DECIMAL(4, 2)) END,
AsVarchar = CASE WHEN ISNUMERIC(v) = 0 THEN v
WHEN CHARINDEX('.', v) = 0 THEN v + '.00'
WHEN CHARINDEX('.', REVERSE(v)) > 3 THEN SUBSTRING(v, 1, CHARINDEX('.', v) + 2)
ELSE v + REPLICATE('0', 3 - CHARINDEX('.', REVERSE(v)))
END,
SQLServer2012 = TRY_CONVERT(DECIMAL(4, 2), v)
FROM (VALUES ('4.2'), ('4.80'), ('2.43'), ('2.45'), ('54'), ('4.001'), ('35.051'), ('fail')) t (v);
```
Which gives:
```
v AsDecimal AsVarchar SQLServer2012
4.2 4.20 4.20 4.20
4.80 4.80 4.80 4.80
2.43 2.43 2.43 2.43
2.45 2.45 2.45 2.45
54 54.00 54.00 54.00
4.001 4.00 4.00 4.00
35.051 35.05 35.05 35.05
fail NULL fail NULL
``` | You can convert you varchars to decimals if you want, but it could be unsafe as you don't know if it will always work:
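If the padding ever has to happen outside the database, the same two-decimal formatting can be sketched in Python with `decimal`. This version truncates extra digits, matching the `AsVarchar` branch above rather than `CAST`'s rounding:

```python
from decimal import Decimal, ROUND_DOWN

def pad_two_places(v: str) -> str:
    # always show exactly two digits after the point, truncating any extras
    return str(Decimal(v).quantize(Decimal("0.01"), rounding=ROUND_DOWN))

padded = [pad_two_places(v) for v in ["4.2", "4.80", "2.43", "54", "4.001"]]
```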
```
CONVERT(decimal(10,2), MyColumn)
```
That kind of thing.
It should be noted that the precision value (in the example above, the '2') will trim values off if your varchars were say 4.103.
Additionally, I notice that if you're using SQL Server 2012, that there is now 'TRY\_CONVERT' and 'TRY\_CAST' which would be safer:
<http://msdn.microsoft.com/en-us/library/hh230993.aspx> | CASE WHEN LEN after decimal point is 1 add 0 | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Our PostgreSQL database contains the following tables:
* categories
```
id SERIAL PRIMARY KEY
name TEXT
```
* articles
```
id SERIAL PRIMARY KEY
content TEXT
```
* categories\_articles (many-to-many relationship)
```
category_id INT REFERENCES categories (id)
article_id INT REFERENCES articles (id)
UNIQUE (category_id, article_id)
```
* comments
```
article_id INT REFERENCES articles (id)
posted_date TIMESTAMP NOT NULL
is_visible BOOLEAN NOT NULL
is_banned BOOLEAN NOT NULL
message TEXT
```
We have partial index on `comments` table:
```
CREATE INDEX comments_posted_date_idx
ON comments USING btree (posted_date)
WHERE is_visible = TRUE AND is_banned = FALSE;
```
So, we need to get recent comments by category:
```
SELECT * FROM comments co
JOIN categories_articles ca
ON ca.article_id = co.article_id
WHERE ca.category_id = 1
AND co.is_visible = TRUE
AND co.is_banned = FALSE
ORDER BY co.posted_date DESC
LIMIT 20;
```
`EXPLAIN ANALYZE` output:
```
Limit (cost=0.00..1445.20 rows=20 width=24) (actual time=93969.479..98515.109 rows=20 loops=1)
-> Nested Loop (cost=0.00..7577979.47 rows=104871 width=24) (actual time=93969.475..98515.084 rows=20 loops=1)
-> Index Scan Backward using comments_posted_date_idx on comments co (cost=0.00..3248957.69 rows=9282514 width=40) (actual time=13.405..82860.852 rows=117881 loops=1)
-> Index Scan using categories_articles_article_id_idx on categories_articles ca (cost=0.00..0.45 rows=1 width=16) (actual time=0.132..0.132 rows=0 loops=117881)
Index Cond: (article_id = co.article_id)
Filter: (category_id = 1)
Total runtime: 98515.179 ms
```
Is there any way to optimize the query?
UPD: table `comments` has ~11 million rows. | It's a pathological plan where no good fix really exists… In short, the options to find the rows basically are:
* Run through an index on `posted_date` in reverse order, and nest join using `article_id` until you locate 20 matches — scanning an enormous part of the table in the process because not so many rows match, as it's doing now — and stop; or
* Run through an index on e.g. `category_id`, nest or hash join on `article_id` to find all matching comments, and top-n sort the first 20 comments.
If you've lots of articles, the first will be faster. If you've very few, the second will be. Trouble is, Postgres doesn't collect correlated stats; it's making assumptions, and not necessarily good ones.
You *might* be able to get a faster index scan for this part:
```
Index Cond: (article_id = co.article_id)
Filter: (category_id = 1)
```
By adding the reverse (and unique) index on `(article_id, category_id)` on the `categories_articles` table instead of on plain `(article_id)` — which you forgot to mention in your question, but still appears in your plan.
With and without it around, also try (partial) indexes on `(article_id, posted_date)` and `(posted_date, article_id)` on the `comments` table, instead of on plain `(posted_date)`. | Since the EXPLAIN output shows only index scans, the real question is: where does the time go? I would make an educated guess your disk IO is saturated, which you can verify by running "iostat 1" or a similar tool and seeing if the %busy counter is 100% or (if there is no such counter) see if your "wait" CPU state is near 100%. | PostgreSQL very slow index scan | [
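The effect of the suggested composite index can be sketched with SQLite's `EXPLAIN QUERY PLAN` as a rough stand-in for Postgres `EXPLAIN` (the index name `ix_art_cat` is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories_articles (category_id INT, article_id INT);
CREATE UNIQUE INDEX ix_art_cat ON categories_articles (article_id, category_id);
""")

# with both filter columns in one index, the lookup needs no separate Filter step
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM categories_articles
    WHERE article_id = 7 AND category_id = 1
""").fetchall()
```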
"",
"sql",
"postgresql",
"database-design",
""
] |
I have a script that is currently importing data from one table and inserting it into another table. Now I have to modify the script so that if the value of the field is A, the code must appear in column 10; else if it is B, the code should appear in column 50.
Table 1:
```
Id Field 1 Field 2 Field 3 Field 4
1 A
2 B
```
Table 2:
```
Id Field 1 Field 2 Field 3 Field 4....Field 10.....Field 50
1 0/Null
2 1
```
Could anyone suggest a CASE statement to implement this requirement?
Any advice is greatly appreciated!
Thank You, New Bee | You can write the T-SQL with case statement which is:
```
Create table #T(
col1 int, col2 varchar(2), col3 int, col4 int, col5 int, col6 int, col7 int, col8 int,
col9 int, col10 int, col11 int)
GO
Create table #T1(
col1 int, col2 varchar(2), col3 int, col4 int, col5 int, col6 int, col7 int, col8 int,
col9 int, col10 int, col11 int)

Insert into #T
Select 1,'B',5,null,null,null,null,null,null,null,null union all
Select 2,'A',5,null,null,null,null,null,null,null,null union all
Select 112,'B',5,null,null,null,null,null,null,null,null union all
Select 41,'B',5,null,null,null,null,null,null,null,null union all
Select 15,'A',5,null,null,null,null,null,null,null,null

Select * from #T

Insert into #T1
Select col1, col2, col3, col4, col5, col6, col7, col8,
       col9, CASE WHEN col2 = 'A' THEN col2 ELSE 'B' END AS col10, col11
from #T
``` | You could use a case expression when selecting the data: <http://technet.microsoft.com/en-us/library/ms181765.aspx> | T-SQL Stored Procedure | [
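Setting aside the temp tables, the core routing idea (value `A` populates one column, `B` another) can be sketched portably with `CASE` expressions. A check with Python's `sqlite3`, where the invented `col10`/`col50` stand in for "Field 10"/"Field 50":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (id INT, field1 TEXT);
INSERT INTO src VALUES (1, 'A'), (2, 'B');
""")

# each CASE yields a marker in its own column and NULL in the other
rows = conn.execute("""
    SELECT id,
           CASE WHEN field1 = 'A' THEN 1 END AS col10,
           CASE WHEN field1 = 'B' THEN 1 END AS col50
    FROM src ORDER BY id
""").fetchall()
```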
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have the following table:
```
PriceId PriceType FromDate ToDate Price ElementId
1 F 2014-01-01 NULL 0.00900 1
3 F 2014-01-01 2014-02-01 0.31470 4
4 F 2014-01-01 NULL 432.00000 3
6 F 2014-02-01 NULL 0.30950 4
7 F 2014-02-01 NULL 0.28990 2
```
The input is ElementID = 4, a fromDate and a toDate
I want all prices for an element where the from and todate are between the input period.
The toDate is exclusive: up to, but not including.
So, 2014-01-28 to 2014-03-01 should return record 3 & 6
2014-01-28 to 2014-02-01 should only return record 3.
2014-02-07 to 2014-03-01 should return record 6.
Can anyone help me complete the WHERE clause:
```
WHERE ElementId = 4 AND...
```
TIA...
--
Dag Sunde. | It is not entirely clear what the significance of a NULL `ToDate` is intended to be. I'll assume it means 'ongoing' or, basically , `NOW()`.
In that case, you need
```
WHERE whatever
AND FromDate >= ?starting
AND IFNULL(ToDate, NOW()) < ?ending
```
This will work correctly as long as your FromDate values are always in the past and before your `ToDate` values, and your `?starting` parameters are before your `?ending`.
There may be a serious efficiency problem with this design. It's hard to use indexes to search for null values. | ```
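Note that the question's expected outputs describe interval *overlap* rather than strict containment. A runnable sketch of that overlap variant (ToDate exclusive, NULL treated as open-ended and capped at the query end so the result is deterministic), using SQLite's `COALESCE` in place of `IFNULL` and the question's rows for `ElementId` 4:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (PriceId INT, FromDate TEXT, ToDate TEXT, ElementId INT);
INSERT INTO prices VALUES
  (3, '2014-01-01', '2014-02-01', 4),
  (6, '2014-02-01', NULL,         4);
""")

def prices_between(start, end):
    # half-open interval: a row overlaps [start, end) when it begins before
    # `end` and (its exclusive ToDate, or `end` if NULL) lies after `start`
    return [r[0] for r in conn.execute("""
        SELECT PriceId FROM prices
        WHERE ElementId = 4
          AND FromDate < :end
          AND COALESCE(ToDate, :end) > :start
        ORDER BY PriceId
    """, {"start": start, "end": end})]
```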
fromDate <= :yourMaxDateParameterHere and ( ToDate >= :yourMinDateParameterHere or ToDate is null )
``` | Select date ranges with NULL in "toDate" | [
"",
"mysql",
"sql",
"t-sql",
""
] |
I'd appreciate if someone could help me with my query:
My first query:
```
SELECT
1 as num, 'TITLE1' as tt, [DATE], CURRENCY,
SUM(AMOUNT) as assets, NULL as liabilities
FROM
[MYTABLE] T1
WHERE
GROUP_ID IN (1700) AND [DATE] = '2014-01-20'
GROUP BY
CURRENCY, [DATE]
```
Which gives me:
```
num tt DATE CURRENCY assets liabilities
1 TITLE1 2014-01-20 USD 1111 NULL
```
Second query is:
```
SELECT
1, 'TITLE1', [DATE], CURRENCY, NULL as assets,
SUM(AMOUNT) as liabilities
FROM
[MYTABLE] T2
WHERE
GROUP_ID IN (2700, 2770)
AND [DATE] = '2014-01-20'
GROUP BY
CURRENCY, [DATE]
```
Which gives me:
```
num tt DATE CURRENCY assets liabilities
1 TITLE1 2014-01-20 EUR NULL 22222
```
I want to join this two queries so that the result would be:
```
num tt DATE CURRENCY assets liabilities
1 TITLE1 2014-01-20 EUR NULL 22222
1 TITLE1 2014-01-20 USD 1111 NULL
```
This is the case when `CURRENCY` is different. But when the `CURRENCY` is the same the result should be like:
```
num tt DATE CURRENCY assets liabilities
1 TITLE1 2014-01-20 EUR 3333 22222
1 TITLE1 2014-01-20 USD 1111 44444
```
The logic is that I calculate `Assets` and `liabilities` grouped by `CURRENCY` and `DATE` from one table, but in different `GROUP_ID`s. I tried to use a subquery, but I am getting only one entry (in the case when currencies are different for `assets` and `liabilities`):
```
SELECT
1, 'TITLE1', [DATE], CURRENCY, SUM(AMOUNT) as assets,
(SELECT SUM(AMOUNT) as liabilities
FROM [MYTABLE] T2
WHERE (GROUP_ID IN (2700, 2770))
AND [DATE] = '2014-01-20' AND T1.CURRENCY = T2.CURRENCY
GROUP BY CURRENCY, [DATE])
FROM
[MYTABLE] T1
WHERE
GROUP_ID IN (1700) AND [DATE] = '2014-01-20'
GROUP BY
CURRENCY, [DATE]
```
I believe I can handle this with left join. I'd appreciate any help. | Try this:
```
SELECT 1 as num, 'TITLE1' as tt, [DATE], CURRENCY,
SUM(CASE WHEN GROUP_ID IN (1700) THEN AMOUNT END) as assets,
SUM(CASE WHEN GROUP_ID IN (2700, 2770) THEN AMOUNT END) as liabilities
FROM [MYTABLE] T1
WHERE GROUP_ID IN (1700, 2700, 2770)
AND [DATE] = '2014-01-20'
GROUP BY CURRENCY, [DATE]
``` | ```
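The conditional-aggregation pattern is portable. A minimal runnable check with Python's `sqlite3` and invented amounts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (d TEXT, currency TEXT, group_id INT, amount REAL);
INSERT INTO mytable VALUES
  ('2014-01-20', 'USD', 1700, 1111),
  ('2014-01-20', 'USD', 2700, 44444),
  ('2014-01-20', 'EUR', 2700, 22222);
""")

# SUM over a CASE ignores the NULLs, so each column totals only its own groups;
# a currency with no matching rows at all yields NULL, as in the expected output
rows = conn.execute("""
    SELECT currency,
           SUM(CASE WHEN group_id = 1700 THEN amount END)          AS assets,
           SUM(CASE WHEN group_id IN (2700, 2770) THEN amount END) AS liabilities
    FROM mytable
    WHERE d = '2014-01-20'
    GROUP BY currency
    ORDER BY currency
""").fetchall()
```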
declare @MTable table ( Date date, Currency char(3), Group_ID int, amount int)
insert @Mtable
values
('20140120', 'USD',1700,1111),
('20140120', 'USD',2700,1111),
('20140120', 'EUR',2770,1111)
declare @Assets table ( ID int)
insert @Assets values (1700)
declare @Liabilities table ( ID int)
insert @Liabilities values (2700),(2770)
;
with RES as (
select [DATE], CURRENCY,
asset = sum(T1.Amount) over (PARTITION by [DATE], CURRENCY) ,
liab = null
FROM @MTABLE T1
join @Assets T2 on T2.ID = T1.Group_ID
union
select [DATE], CURRENCY,
null,
sum(T1.Amount) over (PARTITION by [DATE], CURRENCY)
FROM @MTABLE T1 join @Liabilities T2
on T2.ID = T1.Group_ID
)
select distinct
num = 1, tt = 'TITLE', RES.Date, RES.Currency,
assets = sum(RES.asset) over (PARTITION by [DATE], CURRENCY),
liabilities = sum(RES.liab) over (PARTITION by [DATE], CURRENCY)
from RES
/* Result :
num tt Date Currency assets liabilities
----------- ----- ---------- -------- ----------- -----------
1 TITLE 2014-01-20 EUR NULL 1111
1 TITLE 2014-01-20 USD 1111 1111
*/
``` | Improve query to join table with select statement in t-sql | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
**UPDATE: Initially, I had the order date in the line item table and realized that was a mistake, so I moved it to the Order table. I have updated my example query as well. Sorry**
I am trying to write a query to load all orders whose line item order date is after a certain date along with loading all other orders which are out there for the same product returned by the first part of the query. Maybe an example could help
```
CREATE TABLE DemandOrder
(OrderId INT, OrderDate date, Customer VARCHAR(25))
CREATE TABLE LineItem
(OrderId INT, LineItemId INT, ProductId VARCHAR(10))
INSERT INTO DemandOrder VALUES(1, '01/23/2014', 'ABC');
INSERT INTO DemandOrder VALUES(2, '01/24/2014', 'DEF');
INSERT INTO DemandOrder VALUES(3, '01/24/2014', 'XYZ');
INSERT INTO DemandOrder VALUES(4, '01/23/2014', 'ABC');
INSERT INTO LineItem VALUES(1, 1, 'A');
INSERT INTO LineItem VALUES(1, 2, 'C');
INSERT INTO LineItem VALUES(2, 1, 'B');
INSERT INTO LineItem VALUES(3, 1, 'A');
INSERT INTO LineItem VALUES(4, 1, 'C');
```
In the above example, I need to query for all orders where the order date is on or after 01/24, along with all other orders that share a product with those returned by the first part of the query. The result should have orders 1, 2 & 3.
Here is the updated sql code (using ErikE's suggestions from a post below)
```
SELECT
DISTINCT O.*
FROM
dbo.[DemandOrder] O
INNER JOIN dbo.LineItem LI
ON O.OrderID = LI.OrderID
WHERE
EXISTS (
SELECT *
FROM
dbo.DemandOrder O2 INNER JOIN
dbo.LineItem L2 ON O2.OrderId = L2.OrderId
WHERE
O2.OrderDate >= '01/24/2014'
AND LI.ProductID = L2.ProductID -- not clear if correct
);
```
Thanks for your help and suggestions | I believe this is going to be close to what you're looking for.
All orders that have at least one productID that matches any product ID in an order 1/24/2014 or later.
```
SELECT
O.*
FROM
dbo.[Order] O
INNER JOIN dbo.LineItem LI
ON O.OrderID = LI.OrderID
WHERE
EXISTS (
SELECT *
FROM
dbo.LineItem L2
INNER JOIN dbo.LineItem L3
ON L2.ProductID = L3.ProductID
INNER JOIN dbo.[Order] O2
ON L3.OrderID = O2.OrderID
WHERE
O2.OrderDate >= '20140124'
AND O.OrderID = L2.OrderID
)
;
``` | To get a result set with 1 row per order (meaning you're not interesting in line item data, just the order summary), something like this should do:
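The `EXISTS` approach above can be exercised end-to-end against the question's sample data with Python's `sqlite3` (lowercased names, ISO date literals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE demandorder (orderid INT, orderdate TEXT);
CREATE TABLE lineitem (orderid INT, productid TEXT);
INSERT INTO demandorder VALUES
  (1, '2014-01-23'), (2, '2014-01-24'), (3, '2014-01-24'), (4, '2014-01-23');
INSERT INTO lineitem VALUES (1, 'A'), (1, 'C'), (2, 'B'), (3, 'A'), (4, 'C');
""")

# an order qualifies when any of its products appears on some order
# dated 2014-01-24 or later; order 4 (product C only) does not
ids = [r[0] for r in conn.execute("""
    SELECT DISTINCT o.orderid
    FROM demandorder o
    JOIN lineitem li ON o.orderid = li.orderid
    WHERE EXISTS (
        SELECT 1
        FROM lineitem l3
        JOIN demandorder o2 ON l3.orderid = o2.orderid
        WHERE l3.productid = li.productid
          AND o2.orderdate >= '2014-01-24')
    ORDER BY o.orderid
""")]
```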
```
select o.*
from ( select distinct OrderId
from dbo.LineItem t1
where exists ( select *
from dbo.LineItem t2
where t2.Product = t1.Product
and t2.OrderDate >= @SomeLowerBoundDateTimeValue
)
) t
join dbo.Order o on o.OrderId = t.OrderId
```
The first item in the `from` clause is a *derived table* consisting of the set of order ids associated with a product that was part of an order dated on or after the specified date. Having done that, the rest is trival: just join against the order table.
Generally, for performance, you want to use correlated subqueries with `[not] exists (...)` in preference to uncorrelated subqueries with `[not] in (...)`.
`exists` short circuits as soon as possible; `in` does not as it must construct the entire result set of the subquery. | SQL query with JOIN and WHERE IN clause | [
"",
"sql",
"sql-server-2008",
"join",
""
] |
This is how I imagine what I want:
```
SELECT dbo.AUDIT WHERE AUD_CloseDate IS NULL
IF AUD_CloseDate IS NULL
check if score is 100%
then update closed date
```
I need to select `AUD_CloseDate` where it is NULL; if it is NULL, then check whether `total` = 100%, and if that is true, update the `AUD_CloseDate` column with today's date.
```
Total = (Scored / Total - NA) * 100
```
Not sure how to code this?
UPDATE dbo.AUDIT
SET AUD_CloseDate = GETDATE()
WHERE (AUD_CloseDate IS NULL) AND (Scored / Total - NA) *100 >= 100
``` | This should work as intended :
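A runnable sketch of the update with Python's `sqlite3`. The formula is assumed to mean `Scored / (Total - NA)`; the parenthesization in the question is ambiguous, and the column names here are adapted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit (id INT, close_date TEXT, scored REAL, total REAL, na REAL);
INSERT INTO audit VALUES
  (1, NULL,         8, 10, 2),  -- 8 / (10 - 2) = 100%: should be closed
  (2, NULL,         5, 10, 0),  -- 50%: stays open
  (3, '2014-01-01', 9,  9, 0);  -- already closed: left alone
""")

conn.execute("""
    UPDATE audit
    SET close_date = DATE('now')
    WHERE close_date IS NULL
      AND (scored / (total - na)) * 100 >= 100
""")
closed = [r[0] for r in conn.execute(
    "SELECT id FROM audit WHERE close_date IS NOT NULL ORDER BY id")]
```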
```
UPDATE dbo.AUDIT
SET AUD_CloseDate = GETDATE()
WHERE AUD_CloseDate IS NULL AND (Scored / Total - NA) *100 >= 100
``` | update if multiple IF statements are true | [
"",
"sql",
"sql-server",
""
] |
I am trying to create a SELECT statement, but I am not really sure how to accomplish it.
I have 2 tables, `user` and `group`. Each user has a `userid` and each group has a `ownerid` that specifies who owns the group. Each group also has a `name` and then inside the user table, there is a column `group` designating which group that person belongs to. (excuse the annoying structure, I did not create it). I am trying to find all rows in `group` where the `ownerid` of that group does not have `group` (inside the user table) set to the `name` of that group. If this helps:
**User**
```
|-----------------------|
| id | username | group |
|----|----------|-------|
| 0 | Steve | night |
| 1 | Sally | night |
| 2 | Susan | sun |
| 3 | David | xray |
|-----------------------|
```
**Group**
```
|---------------------|
| ownerid | name |
|---------|-----------|
| 1 | night |
| 3 | bravo |
| 2 | sun |
|---------------------|
```
Where the SQL statement would return the group row for `bravo` because bravo's owner does not have his group set to bravo. | This is a join back to the original table and then a comparison of the values:
```
select g.*
from `group` g join
     `user` u
     on g.ownerid = u.id
where g.name <> u.`group`;
```
If the values can be `NULL`, then the logic would need to take that into account. | An anti-join is a familiar pattern:
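A runnable sketch with Python's `sqlite3`; the invented `usr`/`grp` names stand in for the `user`/`group` tables, since both originals are reserved words:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE usr (id INT, username TEXT, grp TEXT);
CREATE TABLE grp (ownerid INT, name TEXT);
INSERT INTO usr VALUES
  (0, 'Steve', 'night'), (1, 'Sally', 'night'),
  (2, 'Susan', 'sun'),   (3, 'David', 'xray');
INSERT INTO grp VALUES (1, 'night'), (3, 'bravo'), (2, 'sun');
""")

# join each group to its owner and keep the rows where the names disagree
mismatched = conn.execute("""
    SELECT g.ownerid, g.name
    FROM grp g JOIN usr u ON g.ownerid = u.id
    WHERE g.name <> u.grp
""").fetchall()
```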
```
SELECT g.*
FROM `Group` g
LEFT
JOIN `User` u
ON u.group = g.name
AND u.id = g.ownerid
WHERE u.id IS NULL
```
Let's unpack that a bit. We're going to start with returning all rows from Group. Then, we're going to "match" each row in Group with a row (or rows) from User. To be considered a "match", the `User.id` has to match the `Group.ownerid`, and the `User.group` value has to match the `Group.name`.
The "trick" is to eliminate all rows where we found a match (that's what the WHERE clause does), and that leaves us with only those rows from `Group` that didn't have a match.
Another way to obtain an equivalent result using a NOT EXISTS predicate
```
SELECT g.*
FROM `Group` g
WHERE NOT EXISTS
( SELECT 1
FROM `User` u
WHERE u.group = g.name
AND u.id = g.ownerid
)
```
This is uses a correlated subquery; it usually doesn't perform as fast as a join.
Note that these have the potential to return a slightly different result than the query from Gordon Linoff, if you had a row in Group with an ownerid value that wasn't in the user table.
"",
"mysql",
"sql",
""
] |
What is the optimal solution, use Inner Join or multiple queries?
**something like this:**
```
SELECT * FROM brands
INNER JOIN cars
ON brands.id = cars.brand_id
```
**or like this:**
```
SELECT * FROM brands
```
... (while looping over each returned brand) ...
```
SELECT * FROM cars WHERE brand_id = [row(brands.id)]
``` | Generally speaking, one query is better, but there are come caveats to that. For example, older versions of SQL Server had a great decreases in performance if you did more than seven joins. The answer will really depend on the database engine, version, query, schema, fields, etc., so we can't say for sure which is better. Always look into minimizing the number of queries when possible without going too overboard and creating result sets that are crazy or impossible to maintain. | This is a very subjective question but remember that each time you call the database there's significant overhead.
Almost without exception the optimum is to issue as few commands as possible and pull out all the data you'll need. However, for practical reasons this clearly may not be possible.
Generally speaking, if a database is well maintained, one query is quicker than two. If it's not, you need to look at your data/indices and determine why.
A final point, you're hinting in your second example that you'd load the brand then issue a command to get all the cars in each brand. This is without a doubt your worst option as it doesn't issue 2 commands - it issues N+1 where N is the number of brands you have... 100 brands is 101 DB hits! | What is the optimal solution, use Inner Join or multiple queries? | [
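The difference between the two shapes can be sketched with Python's `sqlite3` (brands and cars invented): both produce the same rows, but the loop issues N+1 statements where the join issues one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE brands (id INT, name TEXT);
CREATE TABLE cars (brand_id INT, model TEXT);
INSERT INTO brands VALUES (1, 'Acme'), (2, 'Bolt');
INSERT INTO cars VALUES (1, 'Roadster'), (1, 'Wagon'), (2, 'Sprint');
""")

# one round trip: a single JOIN
joined = conn.execute("""
    SELECT b.name, c.model
    FROM brands b JOIN cars c ON b.id = c.brand_id
    ORDER BY b.id, c.model
""").fetchall()

# N+1 round trips: one extra query per brand row
looped = []
for bid, name in conn.execute("SELECT id, name FROM brands ORDER BY id"):
    for (model,) in conn.execute(
            "SELECT model FROM cars WHERE brand_id = ? ORDER BY model", (bid,)):
        looped.append((name, model))
```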
"",
"mysql",
"sql",
""
] |
I have written this query to pick the time out, time in, date visited, and total overtime for an employee. That works, but the problem is that I want to get records for the current month only. It currently picks every month, from the day the employee joined the organization until today, but I want this month only.
Query:
```
WITH times
AS (
SELECT t1.EmplID
,t3.EmplName
,min(t1.RecTime) AS InTime
,max(t2.RecTime) AS [TimeOut]
,cast(min(t1.RecTime) AS DATETIME) AS InTimeSub
,cast(max(t2.RecTime) AS DATETIME) AS TimeOutSub
,t1.RecDate AS [DateVisited]
FROM AtdRecord t1
INNER JOIN AtdRecord t2 ON t1.EmplID = t2.EmplID
AND t1.RecDate = t2.RecDate
AND t1.RecTime < t2.RecTime
INNER JOIN HrEmployee t3 ON t3.EmplID = t1.EmplID
GROUP BY t1.EmplID
,t3.EmplName
,t1.RecDate
)
SELECT EmplID
,EmplName
,InTime
,[TimeOut]
,[DateVisited]
,convert(CHAR(5), cast([TimeOutSub] - InTimeSub AS TIME), 108) totaltime
,CONVERT(CHAR(5), CASE
WHEN CAST([TimeOutSub] AS DATETIME) >= '18:00'
AND EmplId NOT IN (
5
,43
,11
,40
,46
,42
,31
)
THEN LEFT(CONVERT(VARCHAR(12), DATEADD(ms, DATEDIFF(ms, CAST('18:00' AS DATETIME), CAST([TimeOutSub] AS DATETIME)), 0), 108), 5)
WHEN CAST([TimeOutSub] AS DATETIME) >= '17:00'
AND EmplId IN (
5
,43
,40
,46
,42
,31
)
THEN LEFT(CONVERT(VARCHAR(12), DATEADD(ms, DATEDIFF(ms, CAST('17:00' AS DATETIME), CAST([TimeOutSub] AS DATETIME)), 0), 108), 5)
ELSE '00:00'
END, 108) AS OVERTIME
FROM times
``` | Change your CTE to this:
```
WITH times
AS (
SELECT t1.EmplID
,t3.EmplName
,min(t1.RecTime) AS InTime
,max(t2.RecTime) AS [TimeOut]
,cast(min(t1.RecTime) AS DATETIME) AS InTimeSub
,cast(max(t2.RecTime) AS DATETIME) AS TimeOutSub
,t1.RecDate AS [DateVisited]
FROM AtdRecord t1
INNER JOIN AtdRecord t2 ON t1.EmplID = t2.EmplID
AND t1.RecDate = t2.RecDate
AND t1.RecTime < t2.RecTime
INNER JOIN HrEmployee t3 ON t3.EmplID = t1.EmplID
WHERE MONTH(t1.RecDate)=MONTH(Getdate())
AND YEAR (t1.RecDate)=YEAR(Getdate())
GROUP BY t1.EmplID
,t3.EmplName
,t1.RecDate
)
```
Please note that if you run this on the first of any month, you will get data only for that day and not for the previous month. | You can use [Date](http://www.tutorialspoint.com/sql/sql-date-functions.htm) functions
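The month filter can be sketched portably: in SQLite, `strftime('%Y-%m', ...)` plays the role of the `MONTH()`/`YEAR()` pair (table and rows invented, with one row pinned to the current month so the check is deterministic):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atdrecord (emplid INT, recdate TEXT)")

this_month = date.today().replace(day=15).isoformat()  # day 15 is always valid
conn.executemany("INSERT INTO atdrecord VALUES (?, ?)",
                 [(1, this_month), (1, "2014-01-20")])

# keep only rows whose year-month equals the current year-month
rows = conn.execute(
    "SELECT recdate FROM atdrecord WHERE strftime('%Y-%m', recdate) = ?",
    (date.today().strftime("%Y-%m"),)
).fetchall()
```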
SO you need to add this in your where conditions
```
WHERE MONTH(t1.RecDate) = MONTH(GetDate())
AND
YEAR(t1.RecDate) = YEAR(GetDate())
```
Try like this
```
WITH times
AS (
SELECT t1.EmplID
,t3.EmplName
,min(t1.RecTime) AS InTime
,max(t2.RecTime) AS [TimeOut]
,cast(min(t1.RecTime) AS DATETIME) AS InTimeSub
,cast(max(t2.RecTime) AS DATETIME) AS TimeOutSub
,t1.RecDate AS [DateVisited]
FROM AtdRecord t1
INNER JOIN AtdRecord t2 ON t1.EmplID = t2.EmplID
AND t1.RecDate = t2.RecDate
AND t1.RecTime < t2.RecTime
INNER JOIN HrEmployee t3 ON t3.EmplID = t1.EmplID
WHERE MONTH(t1.RecDate) = MONTH(GetDate())
AND
YEAR(t1.RecDate) = YEAR(GetDate())
GROUP BY t1.EmplID
,t3.EmplName
,t1.RecDate
)
SELECT EmplID
,EmplName
,InTime
,[TimeOut]
,[DateVisited]
,convert(CHAR(5), cast([TimeOutSub] - InTimeSub AS TIME), 108) totaltime
,CONVERT(CHAR(5), CASE
WHEN CAST([TimeOutSub] AS DATETIME) >= '18:00'
AND EmplId NOT IN (
5
,43
,11
,40
,46
,42
,31
)
THEN LEFT(CONVERT(VARCHAR(12), DATEADD(ms, DATEDIFF(ms, CAST('18:00' AS DATETIME), CAST([TimeOutSub] AS DATETIME)), 0), 108), 5)
WHEN CAST([TimeOutSub] AS DATETIME) >= '17:00'
AND EmplId IN (
5
,43
,40
,46
,42
,31
)
THEN LEFT(CONVERT(VARCHAR(12), DATEADD(ms, DATEDIFF(ms, CAST('17:00' AS DATETIME), CAST([TimeOutSub] AS DATETIME)), 0), 108), 5)
ELSE '00:00'
END, 108) AS OVERTIME
FROM times
``` | getting records for current month only | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2012",
""
] |
As a simplified proof of concept, I have the following query, using character-typed columns:
```
select AH_NAME1 from GGIMAIN.SYSADM.BW_AUFTR_KOPF
union
select AH_NAME1 from GGI2014.SYSADM.BW_AUFTR_KOPF
```
and I get the following error:
> Msg 468, Level 16, State 9, Line 2
> Cannot resolve the collation conflict between "SQL\_Latin1\_General\_CP1\_CI\_AS" and "Latin1\_General\_CS\_AS" in the UNION operation.
`GGI2014` was indeed created with collation `SQL_Latin1_General_CP1_CI_AS`. This has been changed in SMS and the instance has been restarted, also in SMS.
When I look in SMS, as well as query:
```
select name, collation_name from sys.databases
```
all indications are that both `GGIMAIN` and `GGI2014` are collated `Latin1_General_CS_AS`.
Does anyone have any advice on what else needs to be done?
Thanks,
Matt | ```
select AH_NAME1 COLLATE DATABASE_DEFAULT from GGIMAIN.SYSADM.BW_AUFTR_KOPF
union
select AH_NAME1 COLLATE DATABASE_DEFAULT from GGI2014.SYSADM.BW_AUFTR_KOPF
```
Unless I am mistaken, changing the collation of the database does not change the collation of the already existing objects. Only new objects will be affected | Try this one (maybe you're columns have different collation) -
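While `COLLATE DATABASE_DEFAULT` is SQL Server-specific, the underlying idea of forcing one collation onto a comparison can be sketched with SQLite's built-in collations (`NOCASE` playing the case-insensitive role, `BINARY` the case-sensitive one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (v TEXT COLLATE NOCASE);   -- case-insensitive column
CREATE TABLE b (v TEXT COLLATE BINARY);   -- case-sensitive column
INSERT INTO a VALUES ('Foo');
INSERT INTO b VALUES ('foo');
""")

# an explicit COLLATE on the comparison overrides the columns' own rules,
# resolving the conflict much like the COLLATE clause in the answer above
cs = conn.execute(
    "SELECT COUNT(*) FROM a JOIN b ON a.v = b.v COLLATE BINARY").fetchone()[0]
ci = conn.execute(
    "SELECT COUNT(*) FROM a JOIN b ON a.v = b.v COLLATE NOCASE").fetchone()[0]
```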
```
SELECT AH_NAME1 COLLATE database_default
FROM GGIMAIN.SYSADM.BW_AUFTR_KOPF
UNION
SELECT AH_NAME1 COLLATE database_default
FROM GGI2014.SYSADM.BW_AUFTR_KOPF
``` | SQL Server 2008 Collation conflict - how to resolve? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Well, I am stuck at a point where I need to distribute a value across multiple rows. Since I do not know the specific term for this, I'll put it in the form of an example below for better understanding:
Assuming the value of x to be 20, I need to distribute/subtract it across the rows in order, so the remainder decreases as it goes.
TABLE:
```
ID Value1
1 6
2 5
3 4
4 3
5 9
```
Result should look like: (x=20)
```
ID Value1 Answer
1 6 14
2 5 9
3 4 5
4 3 2
5 9 0
```
Can anyone give me an idea of how I could go about this?
If you are using SQL Server 2012, then you have cumulative sum built-in. You can do this as:
```
select id, value1,
(case when @X - cumvalue1 < 0 then 0 else @X - cumvalue1 end) as answer
from (select id, value1,
sum(value1) over (order by id) as cumvalue1
from table t
) t;
```
If you don't have cumulative sum, you can do this with a subquery instead:
```
select id, value1,
(case when @X - cumvalue1 < 0 then 0 else @X - cumvalue1 end) as answer
from (select id, value1,
(select sum(value1)
from table t2
where t2.id <= t.id
) as cumvalue1
from table t
) t;
``` | Untested for syntax, but the idea should work in SQL Server 2005 and newer.
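The correlated-subquery variant runs as-is on SQLite, which also offers a two-argument scalar `MAX` to clamp at zero, so the question's exact table can be checked with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, value1 INT);
INSERT INTO t VALUES (1, 6), (2, 5), (3, 4), (4, 3), (5, 9);
""")

x = 20
# running total via a correlated subquery; MAX(expr, 0) floors the remainder
rows = conn.execute("""
    SELECT id, value1,
           MAX(:x - (SELECT SUM(value1) FROM t t2 WHERE t2.id <= t.id), 0) AS answer
    FROM t ORDER BY id
""", {"x": x}).fetchall()
```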
SQL Server 2012 has SUM OVER clause which makes this even handier.
```
SELECT ID, Value1, CASE WHEN 20-SumA < 0 THEN 0 ELSE 20-SumA END AS Answer
FROM TABLE A
CROSS APPLY (SELECT SUM(B.Answer) SumA FROM TABLE B
WHERE B.ID <= A.ID) CA
``` | Subtract value to multiple rows | [
"",
"sql",
"select",
""
] |
I have the following SP which works correctly when ran on its own:
```
USE [Orders]
GO
SET FMTONLY OFF;
CREATE PROCEDURE [dbo].[Get_Details_by_Type]
@isArchived varchar(10),
@Type varchar(50)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
declare @sqlQuery nvarchar(max)
IF(@isArchived = 'ALL')
BEGIN
set @sqlQuery = 'SELECT * FROM [dbo].[Orders]
WHERE ' + @Type + ' != ''''
ORDER BY [IDNumber]'
exec sp_executesql @sqlQuery
END
ELSE
BEGIN
set @sqlQuery = 'SELECT * FROM [dbo].[Orders]
WHERE ' + @Type + ' != '''' AND [isArchived] = ' + @isArchived + ' ORDER BY [IDNumber]'
exec sp_executesql @sqlQuery
END
END
SET FMTONLY ON;
```
The problem I'm having is that when I add a DataSet for an SSRS report, it pulls no fields/columns in the Fields section. I'm guessing it's due to the dynamic SQL?
How can I resolve that? | **The Problem**
Stored procs which contain Dynamic Sql and Temp tables are the bane of wizards like SSRS and ORM generators like Linq2SQL and EF reverse engineering tools.
This is because the tools issue `SET FMTONLY ON;` (or, more recently, [`sp_describe_first_result_set`](http://technet.microsoft.com/en-us/library/ff878602.aspx)) prior to running the PROC, in order to derive the result-set schema produced by the PROC so that mappings for the ReportViewer UI can be generated. However, neither `FMTONLY ON` nor `sp_describe_first_result_set` actually executes the PROC.
e.g. the tool will do something like:
```
SET FMTONLY ON;
EXEC dbo.MyProc NULL;
```
**Some Workarounds:**
* Manually editing the RDL / RDLC file to insert the actual result set column names and types.
* Temporarily dropping the real proc and substituting it with one which returns a data set of zero or more rows with the actual data types and column names returned by the real proc, running the wizard, and then reverting the real proc.
* Temporarily adding `SET FMTONLY OFF;` as the first line in the PROC - this will force execution of the PROC. Revert the original PROC once done (although note your proc may fail because of null or dummy parameters passed in by the tool). Also, `FMTONLY` is [being deprecated](http://technet.microsoft.com/en-us/library/ms173839.aspx)
* At the start of the proc, adding a dummy statement which returns the actual schema(s) of the result set(s), wrapped in a conditional branch which never gets executed.
Here's an example of the last hack:
```
CREATE PROCEDURE [dbo].[Get_Details_by_Type]
@isArchived varchar(10),
@Type varchar(50)
AS
BEGIN
-- For FMTONLY ON tools only
IF 1 = 2
BEGIN
-- These are the actual column names and types returned by the real proc
SELECT CAST('' AS NVARCHAR(20)) AS Col1,
CAST(0 AS DECIMAL(5,3)) AS Col2, ...
END;
-- Rest of the actual PROC goes here
```
`FMTONLY ON` / `sp_describe_first_result_set` are fooled by the dummy conditional and assumes the schema from the never-executed branch.
As an aside, for your own sanity, I would suggest that you don't `SELECT *` in your PROC - rather explicitly list all the real column names returned from `Orders`
Finally, just make sure you don't include the `SET FMTONLY ON;` statement in your proc (from your code above!)
```
END  -- Proc
GO
SET FMTONLY ON;  -- This isn't part of the Proc!
``` | If anyone is still facing this issue, I solved what appears to be a similar problem with ssrs and dynamic sql.
1. For ssrs to properly map the fields from the SP,
2. select the "Text" option under query type,
3. Enter the SP name and parameters as if you were calling it from an SSMS window: `sp_YourStoredProc @Parameter1....@ParameterN`
4. Click on the refresh fields button.
5. When the statement runs, the fields will refresh, and your matrix will be populated.
BTW, I'm using SQL 2012
Hope this helps. | No fields for dynamic SQL stored procedure in SSRS with SET FMTONLY | [
"",
"sql",
"sql-server",
"stored-procedures",
"reporting-services",
""
] |
I am new to SQL and am having difficulty with the queries needed for the following SSRS report.
We have one table that lists name and date. The date corresponds with the date that person has attended an event. We'd like to run a query with a date range, and pull up a list of names that have attended an event in that date range along with ALL of their attendance dates, even if one of the dates is outside the range.
so for table
```
NAME, DATE OF ATTENDENCE
ALICE, 1/1/2000
BOB, 1/1/2000
CHARLIE, 1/1/2000
ALICE, 1/1/2001
ALICE, 1/1/2002
BOB, 1/1/2002
```
We would want to query for a year, say 2002, and then see a list of anyone who had attended an event in 02, and all the other attendance dates for those people, so 2002 would show
```
ALICE, 1/1/2000
ALICE, 1/1/2001
ALICE, 1/1/2002
BOB, 1/1/2000
BOB, 1/1/2002
```
I'm having trouble with how to approach this; I assume I need multiple queries, but can't quite wrap my head around how to do this.
Any help or advice would be appreciated.
Thanks.
MC | You can do this with an `exists` clause:
```
select e.*
from events e
where exists (select 1
from events e2
where e2.name = e.name and
year(e2.DateOfAttendence) = 2002
);
``` | You will want to be using a nested query. Consider the following table:
```
CREATE TABLE IF NOT EXISTS `events` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(32) NOT NULL,
`when` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
```
Then you could get all the information you requested by using the following query:
```
select * from `events` where name in (
SELECT distinct name
FROM `events`
WHERE `when` >= STR_TO_DATE('24/01/2014', '%d/%m/%Y')
and `when` <= STR_TO_DATE('26/01/2014', '%d/%m/%Y')
)
```
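If you want to sanity-check the approach, here is a small Python + SQLite sketch of the same `IN`-subquery idea (table and column names are illustrative, and ISO date strings stand in for `STR_TO_DATE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, att_date TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [
    ("ALICE", "2000-01-01"), ("BOB", "2000-01-01"), ("CHARLIE", "2000-01-01"),
    ("ALICE", "2001-01-01"), ("ALICE", "2002-01-01"), ("BOB", "2002-01-01"),
])

# Everyone who attended in 2002, together with ALL of their attendance dates
rows = conn.execute("""
    SELECT name, att_date
    FROM events
    WHERE name IN (SELECT DISTINCT name
                   FROM events
                   WHERE att_date BETWEEN '2002-01-01' AND '2002-12-31')
    ORDER BY name, att_date
""").fetchall()
print(rows)  # ALICE's three dates and BOB's two dates; CHARLIE is excluded
```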
..got ninja'd :(.. | how to write sql subquery with 2 date ranges | [
"",
"sql",
"reporting-services",
""
] |
Here is the thing:
I have these `Payment_types`, `Sales`, `Payments` and `Orders` tables; either a sale or an order generates a payment.
My tables are thus:
```
Payment_types: id, name (may be a order or a sale)
Payments: id, payment_type_id, sale_id, order_id, price, paid, description
Sales: id, price
Orders: id, price
```
then, if the payment is a sale, the `order_id` is null and vice versa.
So how can I write a select that displays each sales or orders record whose price equals the sum of the payments related to that id? And I would also like to know if my database is structured correctly. | You could try something like:
```
select Sales.id,
Max(Sales.price) Sales_price,
Sum(Payment.price) Payment_price
from Sales, Payment
where Sales.id = Payment.sale_id
group by Sales.id
having Max(Sales.price) = Sum(Payment.price)
```
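As a runnable illustration of the `HAVING` pattern (a Python + SQLite sketch with made-up data, not your actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Sales (id INTEGER, price REAL);
    CREATE TABLE Payments (sale_id INTEGER, price REAL);
    INSERT INTO Sales VALUES (1, 100), (2, 50);
    INSERT INTO Payments VALUES (1, 60), (1, 40), (2, 20);
""")

# Sale 1 is fully paid (60 + 40 = 100); sale 2 is not (20 <> 50)
rows = conn.execute("""
    SELECT s.id
    FROM Sales s
    JOIN Payments p ON s.id = p.sale_id
    GROUP BY s.id
    HAVING MAX(s.price) = SUM(p.price)
""").fetchall()
print(rows)  # only the fully paid sale
```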
Your query does not work as expected because you use a WHERE clause on the result of an aggregation, but you should use a HAVING-clause. | You could try something like:
```
SELECT Sales.*
FROM Sales
WHERE Sales.price = (
SELECT SUM(Payments.price)
FROM Payments
WHERE Payments.sale_id = Sales.id
)
``` | MySQL Count paid sales and orders | [
"",
"mysql",
"sql",
""
] |
```
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles txtsearch.Click
Dim SearchData = Search.StoredProcedure4(txtsearch.Text)
DataGridView1.DataSource = SearchData
End Sub
```
Conversion from string "&search" to type 'Integer' is not valid... How can I convert string into integer? | ```
Dim SearchData = Search.StoredProcedure4(txtsearch.Text)
Dim SearchDataInt As Integer = Convert.ToInt32(SearchData) ' or CInt(SearchData)
DataGridView1.DataSource = SearchDataInt
```
Just add this line between your define and display. | ```
string s = "&search";
foreach( char c in s)
{
Console.WriteLine(System.Convert.ToInt32(c));
}
``` | Conversion from string "&search" to type 'Integer' | [
"",
"sql",
"vb.net",
"type-conversion",
""
] |
I'm trying to add a foreign key to the following table, so when a row is deleted from it's parent table 'accounts', the corresponding rows in 'threads' are deleted. I've already done this successfully with the 'messages' table, which is a child of 'threads'.
```
ALTER TABLE `threads` ADD FOREIGN KEY ( `owner_id` )
REFERENCES `social_network`.`accounts` (`id`)
ON DELETE CASCADE
```
Now though, I'm getting the error:
```
#1050 - Table '.\social_network\threads' already exists
```
Which doesn't make sense to me, because I wouldn't be trying to 'ALTER TABLE' if it didn't already exist. To add more confusion, this SQL was generated automatically by phpMyAdmin using its built-in tools.
Can someone explain to me what causes this error to be thrown and how to fix it? | This answer is a followup to Bruno Casali's and Maverick's posted answers. I was in fact able to fix this by repairing my table. The InnoDB engine doesn't support the REPAIR operation, so I just recreated and repopulated the table:
```
CREATE TABLE threads_tmp LIKE threads;
INSERT INTO threads_tmp SELECT * FROM threads;
TRUNCATE TABLE threads;
INSERT INTO threads SELECT * FROM threads_tmp;
DROP TABLE threads_tmp;
```
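The round trip can be sketched like this in Python + SQLite (SQLite has no `CREATE TABLE ... LIKE` or `TRUNCATE`, so `CREATE TABLE ... AS SELECT` and `DELETE` stand in for them here — the point is just that the data survives the rebuild):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE threads (id INTEGER, owner_id INTEGER);
    INSERT INTO threads VALUES (1, 10), (2, 20);

    CREATE TABLE threads_tmp AS SELECT * FROM threads;  -- copy out
    DELETE FROM threads;                                -- stands in for TRUNCATE
    INSERT INTO threads SELECT * FROM threads_tmp;      -- copy back
    DROP TABLE threads_tmp;
""")
rows = conn.execute("SELECT * FROM threads ORDER BY id").fetchall()
print(rows)
```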
Hope this helps anyone having the same issue. | I think what you can do are as follows:
* Check if table exists `DROP TABLE IF EXISTS`
* `REPAIR TABLE`
* Try Inserting values, it should work. | MySQL - Adding Foreign Key To Existing Table | [
"",
"mysql",
"sql",
"database",
"foreign-keys",
"cascading-deletes",
""
] |
I am using ASP.NET MVC 4. I want to bring some data from the database, through a model, and display it as points on a chart in the view. On the chart I want to display at most 70 data points (date, value), but not more.
My model StudentGrades consists of the following:
```
StudentID, ModuleID, TestDate, TestGrade.
```
I have written the following code, which does the job, but is **really** slow, and the time it needs is not acceptable.
Do you know how I can really optimize my code, (by changing the query, the datastructures, the loops, or anything other)?
My code is shown below.
```
var query = (from b in db.StudentGrades
where b.StudentID.Equals(InputStudentID)
orderby b.Date
select b);
var dates = query.Select(x => EntityFunctions.DiffDays(query.Min(y => y.Date), x.Date));
var grades = query.Select(x => x.grades);
var datestable = dates.ToArray();
var gradetable = grades.ToArray();
List<int?> dateslist = new List<int?>();
List<double> gradelist = new List<double>();
double result = dates.Count() * 1.0 / 70.0;
int pointStep = (int)Math.Ceiling(result);
for (int i = 0; i < dates.Count(); i = i + pointStep)
{
dateslist.Add(datestable.ElementAt(i));
gradelist.Add(gradetable.ElementAt(i));
}
ViewBag.dates = dateslist;
ViewBag.grades = gradelist;
```
Thanks a lot
**EDIT**
I forgot to mention. I don't just want 70 points. I could then just do a take(70). What I actually want is those 70 points to be taken uniformly from my data, based at their dates. So if for example I have the following records at my database for a specific student, and instead of 70 points I wanted 3:
```
ModuleID, TestDate, TestGrade
23, 1 January 2014, 5
34, 2 January 2014, 54
45, 3 January 2014, 35
56, 4 January 2014, 55
67, 5 January 2014, 35
78, 6 January 2014, 56
89, 7 January 2014, 53
90, 8 January 2014, 55
94, 9 January 2014, 57
```
I would choose the record for 1 January 2014, 4 January 2014 and 7 January 2014 or something very similar. I mean the records I want should have equal, or relatively equal distance (in dates) between them.
This is also the reason why I have the pointStep variable above.
***Edit 2***
In addition, can you come up with a really clever way for this to be done in the query (without adding an id, as that changes my model)? If not, that's ok, I cannot do it either.
Thanks a lot | First do one query to get the minimum date and store it in `minDate`.
Then you can do:
```
var query = from b in db.StudentGrades
where b.StudentID == InputStudentID
orderby b.Date
select x => new { Day = EntityFunctions.DiffDays(minDate, x.Date),
Grade = x.grades
}
```
Then get the records where `Day` is a multiple of `pointStep`:
```
var results = query.Where(x => x.Day % pointStep == 0).ToList();
var dateslist = results.Select(x => x.Day).ToList();
var gradelist = results.Select(x => x.Grade).ToList();
```
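The thinning logic itself is easy to check in isolation. Here is a plain-Python sketch of picking roughly 70 evenly spaced points (index-based and purely illustrative — in the answer above the equivalent `% pointStep` filtering happens server-side):

```python
import math

def downsample(points, max_points=70):
    """Keep at most max_points items, evenly spaced by index."""
    step = max(1, math.ceil(len(points) / max_points))
    return points[::step]

kept = downsample(list(range(200)))
print(len(kept), kept[:3])  # 67 points, starting 0, 3, 6
```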
Now you will fetch only the required data from the database, and nothing more, in one query (well, two, frankly). Building the final lists happens in the blink of an eye. | The term `query.Min(y => y.Date)` is executed for each row. Why don't you pull the minimum date once?
```
var minDate = query.Min(y => y.Date);
var dates = query.Select(x => EntityFunctions.DiffDays(minDate, x.Date));
```
Also, the query is executed for each of the following lines
```
var datestable = dates.ToArray();
var gradetable = grades.ToArray();
```
So your query is executed (N+3) times where N is the number of rows. This might happen in the server but is still quite a slow solution.
Solution: Put the `query` in an array at the very beginning to avoid multiple executions of the query.
```
var query = (from b in db.StudentGrades
where b.StudentID.Equals(InputStudentID)
orderby b.Date
select b).ToArray(); // <---- ToArray() does the trick.
```
But still you could clean up a bit by storing the minimum date once. This also makes the code a bit more readable. Try this:
```
var query = (from b in db.StudentGrades
where b.StudentID.Equals(InputStudentID)
orderby b.Date
select b).ToArray();
var minDate = query.Min(y => y.Date);
var dates = query.Select(x => EntityFunctions.DiffDays(minDate, x.Date));
// Nothing changed after this point
var grades = query.Select(x => x.grades);
...
```
Does this improve performance? | Optimize the very slow following code in C# | [
"",
"asp.net",
"sql",
"asp.net-mvc",
"linq",
"asp.net-mvc-4",
""
] |
Hi I have the table below...
```
FirstNumber SecondNumber
3 2
2 5
9 0
1 4
8 7
2 2
1 4
```
If I use this SQL statement...
```
SELECT *
FROM Table
ORDER BY FirstNumber,SecondNumber;
```
I should have the data shown below...This orders the number by FirstNumber, in case of duplicates, it orders the rows by SecondNumber in ASCENDING order since by default it is...
```
FirstNumber SecondNumber
1 4
1 4
2 2
2 5
3 2
8 7
9 0
```
However, I want to achieve a different ordering. I want to have ordering by FirstNumber in ASCENDING, in case of duplicates, I want them ordered by DESCENDING SecondNumber. How do I do this? So basically I want to have the data below...
```
FirstNumber SecondNumber
1 4
1 4
2 5
2 2
3 2
8 7
9 0
```
Thank you very much :) | Change the query to
```
SELECT *
FROM Table
ORDER BY FirstNumber,SecondNumber DESC;
```
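You can verify the mixed-direction sort quickly (a Python + SQLite sketch of the same sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (FirstNumber INTEGER, SecondNumber INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(3, 2), (2, 5), (9, 0), (1, 4), (8, 7), (2, 2), (1, 4)])

# FirstNumber ascending (the default), ties broken by SecondNumber descending
rows = conn.execute(
    "SELECT FirstNumber, SecondNumber FROM t "
    "ORDER BY FirstNumber, SecondNumber DESC").fetchall()
print(rows)
```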
From [SQL ORDER BY Keyword](http://www.w3schools.com/sql/sql_orderby.asp)
> The ORDER BY keyword is used to sort the result-set by one or more
> columns. The ORDER BY keyword sorts the records in ascending order
> by default. To sort the records in a descending order, you can use the
> DESC keyword. | Well it seems this code will do
```
SELECT *
FROM Table
ORDER BY FirstNumber asc, -- <- ascending
SecondNumber desc -- <- descending
``` | Sql Order by multiple fields in different order | [
"",
"sql",
""
] |
I'm trying to find the 10 year period that produced the most movies. I have a table Called Movie with id, name, and year.
What I've been trying to do is get the count for the number of movies in each year:
```
SELECT year, count(id)
FROM Movie
GROUP BY year
```
Then use that as a subquery to sum every 10 year period and find the max. Is this the right way to do it? Any idea where I'm going wrong?
Any help is appreciated, thank you
EDIT:
Okay I'm able to get a list with each Year and the number of movies made in the next 10 years. Now I'm having trouble only returning the DecadeStart with the max MovieCount. Here's what I have:
```
SELECT m.year AS DecadeStart, (
SELECT COUNT(*)
FROM Movie m2
WHERE m2.year >= m.year AND m2.year < (m.year + 10) AND m2.year IS NOT NULL) AS MovieCount
FROM Movie m
GROUP BY m.year);
```
Sample input from Movie table:
```
45|Beach|2008
46|Affair with a Stranger|1953
47|A Letter to Uncle Sam|1913
48|A Woman Scorned|1999
50|Caryl of the Mountains|1914
```
Sample output so far:
```
DecadeStart MovieCount
1890 3478
1891 4334
1892 5271
```
What I want to do is write a query that will return the year from the row with max MovieCount from this output. So I only want the DecadeStart value back, not the MovieCount | This should work
```
SELECT DecadeStart, MAX(MovieCount)
from (
SELECT m.year AS DecadeStart, (
SELECT COUNT(*)
FROM Movie m2
WHERE m2.year >= m.year AND m2.year < (m.year + 10)
AND m2.year IS NOT NULL)
as MovieCount
FROM Movie m GROUP BY m.year
) movie ;
``` | If you want the decade-wise, like `1970`, `1980`, etc. data and you are using Oracle, the following will do:
```
SELECT round(year, -1) decade, count(id) cnt
FROM movies
GROUP BY round(year, -1);
```
If you are using some other RDBMS, try `year%10`. | Return the sum of counts from multiple years | [
"",
"sql",
"azure",
""
] |
I want to do custom sort by `Customercode` for `tblCustomer` table.
`CustomerCode` consists of `(3 chars of Surname) + 1 + (PostCode)`
Here, the 1 is incremented when a customer with the same surname and postcode already exists.
For example: `ABB12615`, `ABB22615`
So mainly I want to sort this by
```
First 3 Letters of Surname + Index + PostCode.
```
I tried to do in this manner :
```
ORDER BY CHARINDEX(SUBSTRING(customerCode, 1, 3), customerCode)
```
but it gives me output like this:
```
ABB12615
ABB12715
...
...
...
..
.
ABB22615
```
But I want output in this order:
```
ABB12615
ABB22615
ABB12715
```
and so on
Is it possible to do? | Based on your expected results you *really* want to sort on
```
Surname, postcode, index
```
which would be
```
ORDER BY SUBSTRING(customerCode, 1, 3),
SUBSTRING(customerCode, 5, 4),
SUBSTRING(customerCode, 4, 1)
``` | Try this
```
SELECT *
FROM TABLE1
ORDER BY CASE WHEN COlumn1 = 'ABB12615' THEN 1
WHEN COlumn1 = 'ABB22615' THEN 2
WHEN COlumn1 = 'ABB12715' THEN 3
END
``` | How to do Custom Sorting in SQL Server 2005 | [
"",
"sql",
"sql-server",
"sorting",
""
] |
I have two tables that I will be using for tracking purposes, a *Date Table* and an *Item Table*. The *Date Table* is used to track the start and end dates of a tracked id. The *Item Table* records the number of items that are pulled on a specific date for an id. The id is the foreign key between these two tables.
What I want to do, is a sum of the items with a GROUP BY of the id of the items, but only by summing the items based on if the date of the pulled item falls between the start\_date and end\_date of the tracked id.
## The Date Table
```
id start_date end_date
1 2014-01-01 NULL
2 2014-01-01 2014-01-02
3 2014-01-25 NULL
```
## The Item Table
```
id items date
1 3 2014-01-01
1 5 2014-01-02
1 5 2014-01-26
2 2 2014-01-01
2 3 2014-01-05
2 2 2014-01-26
3 2 2014-01-01
3 3 2014-01-05
3 2 2014-01-26
```
SQL I have so far, but I'm lost as to what to add to it from here.
```
SELECT
a.id,
SUM(items)
FROM
ww_test.dbo.items a
INNER JOIN ww_test.dbo.dates b ON
a.id = b.id
WHERE
a.date >= '2014-01-01' AND a.date <= '2014-01-30'
GROUP BY
a.id
ORDER BY
a.id
```
## The output should be:
```
id items
1 13
2 2
3 2
```
## Instead of:
```
id items
1 13
2 7
3 7
``` | First of all, I strongly recommend that you stop using `NULL` in your date ranges to represent "no end date" and instead use a *sentinel value* such as `9999-12-31`. The reason for this is primarily **performance** and secondarily query simplicity--a benefit to yourself now in writing the queries and to you or others later who have to maintain them. In front-end or middle-tier code, there is little difference to comparing a date range to `Null` or to `9999-12-31`, and in fact you get some of the same benefits of simplified code there as you do in your SQL. I base this recommendation on over 10 years of full-time professional SQL query writing experience.
To fix your query as is, I think this would work:
```
SELECT
a.id,
ItemsSum = SUM(items)
FROM
ww_test.dbo.items a
INNER JOIN ww_test.dbo.dates b
ON a.id = b.id
AND a.date >= Coalesce(b.start_date, 0)
AND a.date <= Coalesce(b.end_date, '99991231')
WHERE
a.date >= '20140101'
AND a.date <= '20140130'
GROUP BY
a.id
ORDER BY
a.id
;
```
Note that if you followed my recommendation, your query `JOIN` conditions could look like this:
```
INNER JOIN ww_test.dbo.dates b
ON a.id = b.id
AND a.date >= b.start_date
AND a.date <= b.end_date
```
You will find that if your data sets become large, having to put a `Coalesce` or `IsNull` in there will hurt performance in a significant way. It doesn't help to use `OR` clauses, either:
```
INNER JOIN ww_test.dbo.dates b
ON a.id = b.id
AND (a.date >= b.start_date OR b.start_date IS NULL)
AND (a.date <= b.end_date OR b.end_date IS NULL)
```
That's going to have the same problems (for example converting what could have been a seek when there's a suitable index, into a scan, which would be very sad).
Last, I also recommend that you change your end dates to be *exclusive* instead of *inclusive*. This means that for the end date, instead of entering the date of the beginning of the final day the information is true, you put the date of the first day it is *no longer true*. There are several reasons for this recommendation:
* If your date resolution ever changes to hours, or minutes, or seconds, every piece of code you have ever written dealing with this data will have to change (and it won't if you use exclusive end dates).
* If you ever have to compare date ranges to each other (to collapse date ranges together or locate contiguous ranges or even locate non-contiguous ranges), you now have to do all the comparisons on `a.end_date + 1 = b.start_date` instead of a simple equijoin of `a.end_date = b.start_date`. This is painful, and easy to make mistakes.
* Always thinking of dates as suggesting time of day will be extremely salutary to your coding ability in any language. Many mistakes are made, over and over, by people forgetting that dates, even ones in formats that can't denote a time portion (such as the `date` data type in SQL 2008 and up) still have an implicit time portion, and can be converted directly to date data types that do have a time portion, and that time portion will always be `0` or `12 a.m.`.
The only drawback is that in some cases, you have to do some twiddling about what date you show users (to convert to the inclusive date) and then convert dates they enter into the exclusive date for storing into the database. But this is confined to UI-handling code and is not throughout your database, so it's not that big a drawback.
The only change to your query would be:
```
INNER JOIN ww_test.dbo.dates b
ON a.id = b.id
AND a.date >= b.start_date
AND a.date < b.end_date -- no equal sign now
```
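Here is a compact check of the sentinel-plus-exclusive-end approach (a Python + SQLite sketch; ISO date strings compare correctly as text, and no `Coalesce` is needed because the open-ended row stores '9999-12-31' instead of NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dates (id INTEGER, start_date TEXT, end_date TEXT);
    CREATE TABLE items (id INTEGER, items INTEGER, date TEXT);
    INSERT INTO dates VALUES
      (1, '2014-01-01', '9999-12-31'),   -- open-ended: sentinel, not NULL
      (2, '2014-01-01', '2014-01-03');   -- exclusive end: 2014-01-03 not counted
    INSERT INTO items VALUES
      (1, 3, '2014-01-01'), (1, 5, '2014-01-02'), (1, 5, '2014-01-26'),
      (2, 2, '2014-01-01'), (2, 3, '2014-01-05');
""")
rows = conn.execute("""
    SELECT a.id, SUM(a.items)
    FROM items a
    JOIN dates b ON a.id = b.id
               AND a.date >= b.start_date
               AND a.date <  b.end_date
    GROUP BY a.id
    ORDER BY a.id
""").fetchall()
print(rows)  # only in-range items are summed per id
```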
One last thing: be aware that the date format 'yyyy-mm-dd' is not culture-safe.
```
SET LANGUAGE FRENCH;
SELECT Convert(datetime, '2014-01-30'); -- fails with an error
```
The only invariantly culture-safe formats for datetime in SQL Server are:
```
yyyymmdd
yyyy-mm-ddThh:mm:ss
``` | I think what you want to do is to compare the dates to be between the `start_date` and `end_date` of your `Data` table.
Change your query to the following and try
```
SELECT
a.id,
SUM(items)
FROM
ww_test.dbo.items a
INNER JOIN ww_test.dbo.dates b ON a.id = b.id
WHERE
a.date >= ISNULL(b.start_date, GETDATE())
AND a.date <= ISNULL(b.end_date, GETDATE())
GROUP BY a.id
ORDER BY a.id
``` | SELECT Query between dates, only selecting items between start and end fields | [
"",
"sql",
"sql-server",
"date",
""
] |
How can I do this in Linq?
```
SELECT [...] WHERE A.Year = YEAR(ISNULL(B.Date, '1900-01-01'))
```
data types of fields:
```
A.Year : int not null
B.Date : datetime null
``` | LINQ to Entities:
```
(...)
where a.year == SqlFunctions.DatePart("y", b.date ?? new DateTime(1900, 1, 1))
```
LINQ to SQL:
```
(...)
where a.year == (b.date ?? new DateTime(1900, 1, 1)).Year
``` | Without knowing what your data structures look like on the .NET side of things...
```
var defaultDate = new DateTime(1900, 1, 1);
var result = from x in xxx
where x.Year == (b.Date ?? defaultDate).Year
``` | Both ISNULL and YEAR SQL functions in in Linq | [
"",
"sql",
"linq",
"t-sql",
""
] |
I have a MySQL table with a field called `end_date` of type `DATETIME` which stores a UTC date. I want to get the difference in minutes between `NOW()` and the `end_date` value. I already found a lot of questions and answers here on Stack Overflow about this: some use `UNIX_TIMESTAMP`, which from what I understand takes the input value and converts it to UTC (so I can't use it because my date is already UTC), and others suggest using `TIME_TO_SEC` and dividing by `60`. This is what I picked up:
```
INTERVAL (TIME_TO_SEC(UTC_TIMESTAMP()) - TIME_TO_SEC(end_date)) / 60 MINUTE
```
It works fine until the current day is different from the `end_date`'s day. At that point, the result of the subtraction is negative and when I use this code inside a `DATE_ADD` function it outputs the wrong result. How can I fix this? | Try to use
```
INTERVAL (TO_SECONDS(UTC_TIMESTAMP()) - TO_SECONDS(end_date)) / 60 MINUTE
``` | Try `ABS`
```
INTERVAL (ABS( TIME_TO_SEC(UTC_TIMESTAMP()) - TIME_TO_SEC(end_date))) / 60 MINUTE
```
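For intuition, plain datetime arithmetic (a Python illustration, not MySQL) shows why time-of-day subtraction goes negative across midnight while a total-seconds difference stays correct:

```python
from datetime import datetime

end = datetime(2014, 1, 1, 23, 50)   # end_date, UTC
now = datetime(2014, 1, 2, 0, 10)    # 20 minutes later, on the next day

# TIME_TO_SEC-style: seconds since midnight -- breaks across midnight
time_only = (now.hour * 3600 + now.minute * 60) - (end.hour * 3600 + end.minute * 60)

# TO_SECONDS-style: absolute seconds -- correct across day boundaries
absolute = int((now - end).total_seconds())

print(time_only, absolute // 60)  # negative value vs. the real 20 minutes
```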
Or, if you always want a *positive* difference, then on `end_date` check if it is later than `now` or `utc_timestamp`. If it is later replace it with `now` and the difference would be `0`. | Get dates difference in minutes with MySQL | [
"",
"mysql",
"sql",
""
] |
I have these kinds of tables:
Customer1:
```
"Customer1" table:
--------------------------------
id | branch | company_id
--------------------------------
1 | typical | 1
2 | natural | 8
--------------------------------
```
Customer2
```
"Customer2" table:
-------------------------------
id | company | group_id
-------------------------------
1 | deux | 1
-------------------------------
```
Customer3
```
"Customer3" table:
-------------------------------
id | group
-------------------------------
1 | alpha
-------------------------------
```
Now, how can I get an output like the one below, using a SQL command/statement based on the tables above:
```
----------------------------
group | company | branch
----------------------------
alpha | deux | typical
``` | This should be what you are looking for- However this may not be what you are looking for if tables have multiple records for same id
```
SELECT C3.group, C2.company, C1.branch
FROm Customer1 c1 INNER JOIN
Customer2 c2 ON c1.company_id = c2.id INNER JOIN
Customer3 c3 ON c2.group_id = c3.id
``` | You could try something like
```
SELECT [group],
[company],
[branch]
FROm Customer1 c1 INNER JOIN
Customer2 c2 ON c1.company_id = c2.id INNER JOIN
Customer3 c3 ON c2.group_id = c3.id
```
To ensure that you always show all the values, you might want to take a look at using `LEFT JOINS`
Something like
```
SELECT [group],
[company],
[branch]
FROm Customer1 c1 LEFT JOIN
Customer2 c2 ON c1.company_id = c2.id LEFT JOIN
Customer3 c3 ON c2.group_id = c3.id
```
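The behavioural difference is easy to demonstrate on the sample data (a Python + SQLite sketch; `group` is a reserved word, so the column is renamed `grp` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer1 (id INTEGER, branch TEXT, company_id INTEGER);
    CREATE TABLE Customer2 (id INTEGER, company TEXT, group_id INTEGER);
    CREATE TABLE Customer3 (id INTEGER, grp TEXT);
    INSERT INTO Customer1 VALUES (1, 'typical', 1), (2, 'natural', 8);
    INSERT INTO Customer2 VALUES (1, 'deux', 1);
    INSERT INTO Customer3 VALUES (1, 'alpha');
""")
# INNER JOIN keeps only fully matched rows
inner = conn.execute("""
    SELECT c3.grp, c2.company, c1.branch
    FROM Customer1 c1
    JOIN Customer2 c2 ON c1.company_id = c2.id
    JOIN Customer3 c3 ON c2.group_id = c3.id
""").fetchall()
# LEFT JOIN keeps every Customer1 row, padding missing matches with NULL
left = conn.execute("""
    SELECT c3.grp, c2.company, c1.branch
    FROM Customer1 c1
    LEFT JOIN Customer2 c2 ON c1.company_id = c2.id
    LEFT JOIN Customer3 c3 ON c2.group_id = c3.id
""").fetchall()
print(inner, left)
```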
Here is a nice article that explains the difference between the various JOIN types
[Introduction to JOINs – Basic of JOINs](http://blog.sqlauthority.com/2009/04/13/sql-server-introduction-to-joins-basic-of-joins/) | SQL statement using join | [
"",
"sql",
"sql-server",
"join",
""
] |
I have 2 tables that I'm unioning and want to custom order using `ORDER BY CASE` but I keep getting the ORA-01785: ORDER BY item must be the number of a SELECT-list expression error.
My 2 tables each have a column named "VISIT", containing (something similar to) the following data:
FollowUp table: BASELINE, 1_MONTH, 2_MONTH
Procedure table: PROCEDURE
I want to union these tables and sort in the following order: BASELINE, PROCEDURE, 1_MONTH, 2_MONTH
Here is what I think should work:
```
SELECT VISIT
FROM FollowUp
UNION
SELECT VISIT
FROM Procedure
ORDER BY
CASE VISIT
WHEN 'BASELINE' THEN 1
WHEN 'PROCEDURE' THEN 2
WHEN '1_MONTH' THEN 3
WHEN '2_MONTH' THEN 4
ELSE 5 END
```
However I'm getting the 01785 error. I've also tried replacing `CASE VISIT` with `CASE 1` and get the same error. Thank you for the help! | Try doing this with a subquery:
```
select visit
from ((SELECT VISIT
FROM FollowUp
) union
(SELECT VISIT
FROM Procedure
)
) t
ORDER BY
CASE VISIT
WHEN 'BASELINE' THEN 1
WHEN 'PROCEDURE' THEN 2
WHEN '1_MONTH' THEN 3
WHEN '2_MONTH' THEN 4
ELSE 5 END;
```
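A quick check of the subquery-plus-`CASE` ordering (a Python + SQLite sketch; `PROCEDURE` is a reserved word in Oracle, so the table is called `ProcTable` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE FollowUp (visit TEXT);
    CREATE TABLE ProcTable (visit TEXT);
    INSERT INTO FollowUp VALUES ('BASELINE'), ('1_MONTH'), ('2_MONTH');
    INSERT INTO ProcTable VALUES ('PROCEDURE');
""")
rows = conn.execute("""
    SELECT visit
    FROM (SELECT visit FROM FollowUp
          UNION
          SELECT visit FROM ProcTable) t
    ORDER BY CASE visit
               WHEN 'BASELINE'  THEN 1
               WHEN 'PROCEDURE' THEN 2
               WHEN '1_MONTH'   THEN 3
               WHEN '2_MONTH'   THEN 4
               ELSE 5
             END
""").fetchall()
print([v for (v,) in rows])  # custom order, not alphabetical
```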
If you do not need duplicate removal, then use `union all`. | Try this:
```
SELECT x.visit,
CASE x.VISIT
WHEN 'BASELINE' THEN 1
WHEN 'PROCEDURE' THEN 2
WHEN '1_MONTH' THEN 3
WHEN '2_MONTH' THEN 4
ELSE 5
END
sort_by
FROM (SELECT VISIT FROM FollowUp
UNION ALL
SELECT VISIT FROM Procedure) x
ORDER BY SORT_BY
``` | Oracle SQL Custom Sort with Union | [
"",
"sql",
"oracle",
"case",
"union",
""
] |
I am new to SQL, this is what I am trying to figure out. Any help will be appreciated.
This is what I have as Input.
```
Number Start End Age
PRB1 1/2/2014 1/2/2014 0
PRB1 1/2/2014 1/3/2014 1
PRB1 1/3/2014 1/6/2014 1
PRB2 1/13/2014 1/14/2014 1
PRB3 1/9/2014 1/9/2014 0
PRB4 1/22/2014 1/22/2014 0
PRB4 1/25/2014 1/28/2014 1
```
This is the output I am expecting
```
Number Age
PRB1 2
PRB2 1
PRB3 0
PRB4 1
``` | It will work
```
SELECT number,
Sum(age) AS Age
FROM tablename
GROUP BY number
```
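The grouping can be checked against the sample data (a Python + SQLite sketch; the date columns are omitted since only `Number` and `Age` matter here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (number TEXT, age INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("PRB1", 0), ("PRB1", 1), ("PRB1", 1), ("PRB2", 1),
                  ("PRB3", 0), ("PRB4", 0), ("PRB4", 1)])

rows = conn.execute(
    "SELECT number, SUM(age) FROM t GROUP BY number ORDER BY number").fetchall()
print(rows)  # one row per Number with its summed Age
```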
# [SQL FIDDLE](http://www.sqlfiddle.com/#!3/77580/4) | Just group by number, sum by age.
```
select number, sum(age) as total
from my_table
group by number
``` | SQL Server 2008 - Aggregate totals | [
"",
"sql",
"sql-server",
"aggregate-functions",
""
] |
Simple question, just out of curiosity.
For example, something like `select 1,2,3`, but one that will show a table with one column and three rows.
Something like this: `select values(1),(2),(3)`
**with one select statement** | It turns out there is a simple query that does what I've been searching for:
`select c from (values (1),(2),(3)) D(c);` | An example for my comment in your post.
```
DECLARE @TABLE TABLE (ONE INT, TWO INT, THREE INT)
INSERT INTO @TABLE VALUES (1,2,3)
SELECT UP.COL, UP.VALUE
FROM @TABLE
UNPIVOT (VALUE FOR COL IN (ONE,TWO,THREE)) UP
``` | Is there a way to return more than 1 row in select without using existing tables | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have two tables :
```
CREATE TABLE `Users` (
`user_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`first_name` varchar(50) NOT NULL DEFAULT '',
`last_name` varchar(50) NOT NULL DEFAULT '',
`login` varchar(50) NOT NULL DEFAULT '',
`password` varchar(50) NOT NULL DEFAULT '',
PRIMARY KEY (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;
```
AND
```
CREATE TABLE `Books` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(50) NOT NULL DEFAULT '',
`author` varchar(50) NOT NULL DEFAULT '',
`year` int(4) NOT NULL,
`available` int(3) NOT NULL DEFAULT '0',
`availabledate` date NOT NULL,
`user_id` int(11) unsigned NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;
```
I am trying to create a relationship between those two, so that one user may have multiple books (user_id), but whatever I'm doing I'm getting errors. Either
> Cannot add or update a child row: a foreign key constraint fails
> (`bookstore`.,
> CONSTRAINT `books_fk` FOREIGN KEY (`user_id`) REFERENCES `users`
> (`user_id`) ON DELETE CASCADE ON UPDATE CASCADE)
or before I didn't use unsigned int in the Books table and I said that default value is 0 (which I would prefer, but I don't think I can do that?). In that case I got error 150. | I'd recommend changing your database schema. Why?
1. Can a book exist without having a user? If yes, you shouldn't have a foreign key from books referencing users. Can a user exist without having a book? If yes, you shouldn't have a foreign key from users referencing books.
2. Can a user have multiple books? And a book multiple users? If yes, you have a m:n relationship. This means you need a bridge table.
In your tables you don't need foreign keys:
```
CREATE TABLE `Users` (
`user_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`first_name` varchar(50) NOT NULL DEFAULT '',
`last_name` varchar(50) NOT NULL DEFAULT '',
`login` varchar(50) NOT NULL DEFAULT '',
`password` varchar(50) NOT NULL DEFAULT '',
PRIMARY KEY (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;
CREATE TABLE `Books` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(50) NOT NULL DEFAULT '',
`author` varchar(50) NOT NULL DEFAULT '',
`year` int(4) NOT NULL,
`available` int(3) NOT NULL DEFAULT '0',
`availabledate` date NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;
```
And a bridge table would look like this:
```
CREATE TABLE books_users (
book_id int(11) unsigned NOT NULL,
user_id int(11) unsigned NOT NULL,
PRIMARY KEY (book_id, user_id),
KEY idx_user_id (user_id),
FOREIGN KEY fk_books (book_id) REFERENCES Books(id),
FOREIGN KEY fk_users (user_id) REFERENCES Users(user_id)
) ENGINE=InnoDB;
```
This solves both problems and is common practice.
To query both users and books in one query, you join them like this:
```
SELECT
whatever
FROM
Books b
INNER JOIN books_users bu ON b.id = bu.book_id
INNER JOIN users u ON bu.user_id = u.user_id
WHERE user_id = 1 /*for example*/
;
```
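The whole pattern can be exercised end to end (a Python + SQLite sketch with trimmed columns; `PRAGMA foreign_keys = ON` stands in for InnoDB's foreign-key enforcement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE Users (user_id INTEGER PRIMARY KEY, first_name TEXT);
    CREATE TABLE Books (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE books_users (
        book_id INTEGER NOT NULL REFERENCES Books(id),
        user_id INTEGER NOT NULL REFERENCES Users(user_id),
        PRIMARY KEY (book_id, user_id));
    INSERT INTO Users VALUES (1, 'Ann');
    INSERT INTO Books VALUES (10, 'SQL 101'), (11, 'InnoDB Tips');
    INSERT INTO books_users VALUES (10, 1), (11, 1);
""")
# All books for user 1, resolved through the bridge table
rows = conn.execute("""
    SELECT b.title
    FROM Books b
    JOIN books_users bu ON b.id = bu.book_id
    JOIN Users u ON bu.user_id = u.user_id
    WHERE u.user_id = 1
    ORDER BY b.id
""").fetchall()
print(rows)
```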
If you want to insert something in the tables, just do the insert and get the id which was generated for the row with `SELECT LAST_INSERT_ID();`, then insert this id in the `books_users` bridge table.
Updates don't affect anything, you can simply perform them on `users` or `books`. If you really really have to update the auto_increment column (which usually isn't needed and not recommended) you can add `ON UPDATE CASCADE` after the foreign keys in the `books_users` table. | Change these lines and try again
> change `user_id int(11) unsigned NOT NULL AUTO_INCREMENT` to `user_id
> int(11) NOT NULL AUTO_INCREMENT`
and
> `id` int(11) unsigned NOT NULL AUTO_INCREMENT, to `id` int(11) NOT
> NULL AUTO_INCREMENT,
and
> `user_id` int(11) unsigned NOT NULL, to `user_id` int(11) NOT NULL,
Finally try
```
CREATE TABLE `Users` (
`user_id` int(11) NOT NULL AUTO_INCREMENT,
`first_name` varchar(50) NOT NULL DEFAULT '',
`last_name` varchar(50) NOT NULL DEFAULT '',
`login` varchar(50) NOT NULL DEFAULT '',
`password` varchar(50) NOT NULL DEFAULT '',
PRIMARY KEY (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;
```
and
```
CREATE TABLE `Books` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(50) NOT NULL DEFAULT '',
`author` varchar(50) NOT NULL DEFAULT '',
`year` int(4) NOT NULL,
`available` int(3) NOT NULL DEFAULT '0',
`availabledate` date NOT NULL,
`user_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;
``` | Fail to create relationship between two tables | [
"",
"mysql",
"sql",
"relation",
""
] |
I have the following schema:
```
TABLE ATTRIBUTES
---------------------------
movies id
title
stars id
first_name
last_name
genres id
name
stars_in_movies movie_id
star_id
genres_in_movies movie_id
genre_id
```
I want to create a query that returns a view that is grouped by `movie.id` and that concatenates the list of associated `stars` and `genres`
This is my attempt:
```
SELECT movies.id, title,
GROUP_CONCAT(stars.id separator ',') AS starIds,
GROUP_CONCAT(concat(first_name,' ',last_name) separator ',') AS starNames,
GROUP_CONCAT(genres.id separator ',') AS genreIds,
GROUP_CONCAT(genres.name separator ',') AS genreNames
FROM movies
JOIN stars_in_movies ON stars_in_movies.movie_id = movies.id
JOIN stars ON stars_in_movies.star_id = stars.id
JOIN genres_in_movies ON genres_in_movies.movie_id = movies.id
JOIN genres ON genres_in_movies.genre_id = genres.id
```
For some reason, for each row in the view, the query concatenates the same star for each genre. For example here is an example of a row:
```
movie.id movie.title stars.id genres.id
-----------------------------------------------------------
12345 Ocean's Twelve 3,3,3,4,4,4 2,4,6,2,4,6,2,4,6
```
EDIT:
This is the result of the query without using `GROUP_CONCAT`. It lists all combinations of tuples.
```
SELECT movies.id, stars.id, genres.id FROM movies
JOIN stars_in_movies ON stars_in_movies.movie_id = movies.id
JOIN stars ON stars_in_movies.star_id = stars.id
JOIN genres_in_movies ON genres_in_movies.movie_id = movies.id
JOIN genres ON genres_in_movies.genre_id = genres.id
WHERE movies.title = 'Oceans Twelve';
+--------+--------+--------+
|movies.id|stars.id|genres.id|
+--------+--------+--------+
| 764010 | 492108 | 907000 |
| 764010 | 492108 | 907001 |
| 764010 | 492108 | 855203 |
| 764010 | 492108 | 907002 |
| 764010 | 492108 | 150008 |
| 764010 | 492108 | 907005 |
| 764010 | 48004 | 907000 |
| 764010 | 48004 | 907001 |
| 764010 | 48004 | 855203 |
| 764010 | 48004 | 907002 |
| 764010 | 48004 | 150008 |
| 764010 | 48004 | 907005 |
| 764010 | 48014 | 907000 |
| 764010 | 48014 | 907001 |
| 764010 | 48014 | 855203 |
| 764010 | 48014 | 907002 |
| 764010 | 48014 | 150008 |
| 764010 | 48014 | 907005 |
| 764010 | 855108 | 907000 |
| 764010 | 855108 | 907001 |
| 764010 | 855108 | 855203 |
| 764010 | 855108 | 907002 |
| 764010 | 855108 | 150008 |
| 764010 | 855108 | 907005 |
| 764010 | 658011 | 907000 |
| 764010 | 658011 | 907001 |
| 764010 | 658011 | 855203 |
| 764010 | 658011 | 907002 |
| 764010 | 658011 | 150008 |
| 764010 | 658011 | 907005 |
| 764010 | 855106 | 907000 |
| 764010 | 855106 | 907001 |
| 764010 | 855106 | 855203 |
| 764010 | 855106 | 907002 |
| 764010 | 855106 | 150008 |
| 764010 | 855106 | 907005 |
| 764010 | 693109 | 907000 |
| 764010 | 693109 | 907001 |
| 764010 | 693109 | 855203 |
| 764010 | 693109 | 907002 |
| 764010 | 693109 | 150008 |
| 764010 | 693109 | 907005 |
+--------+--------+--------+
``` | It looks like there is a Many to Many relationship between stars\_in\_movies and genres\_in\_movies. You need to perform your query step by step. First create a sub-query that groups on movies JOIN genres\_in\_movies. Then create a second sub query that groups on movies JOIN stars\_in\_movies. Then join between the sub queries on movie.id.
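A runnable sketch of that shape (sqlite3 stands in for MySQL here — `GROUP_CONCAT` behaves the same for this purpose — and the sample rows are invented):

```python
import sqlite3

# Aggregate each link table in its own subquery, then join the one-row-per-movie
# results, so no cartesian product can duplicate values.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE movies (id INTEGER, title TEXT);
    CREATE TABLE stars_in_movies (movie_id INTEGER, star_id INTEGER);
    CREATE TABLE genres_in_movies (movie_id INTEGER, genre_id INTEGER);
    INSERT INTO movies VALUES (12345, 'Ocean''s Twelve');
    INSERT INTO stars_in_movies VALUES (12345, 3), (12345, 4);
    INSERT INTO genres_in_movies VALUES (12345, 2), (12345, 4), (12345, 6);
""")
rows = con.execute("""
    SELECT m.id, m.title, s.starIds, g.genreIds
    FROM movies m
    JOIN (SELECT movie_id, GROUP_CONCAT(star_id) AS starIds
          FROM stars_in_movies GROUP BY movie_id) s ON s.movie_id = m.id
    JOIN (SELECT movie_id, GROUP_CONCAT(genre_id) AS genreIds
          FROM genres_in_movies GROUP BY movie_id) g ON g.movie_id = m.id
""").fetchall()
print(rows)  # one row per movie, with no duplicated ids in either list
```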
This way you have a one to one relationship between the sub queries and you avoid the cartesian product that you currently have. | Add GROUP BY movies.id, title to the query. | Grouping row on column and flattening the other columns | [
"",
"mysql",
"sql",
"group-by",
"group-concat",
""
] |
Why is the below query not giving results if I remove the `<` sign from the query? Even without `<` it should still match rows, shouldn't it?
Query used to get second max id value:
```
select min(id)
from(
select distinct id
from student
order by id desc
)
where rownum <=2
```
```
student id
1
2
3
4
``` | Rownum has a special meaning in Oracle. It is increased with every row, but the optimizer knows that it is increasing continuously and that all consecutive rows must meet the rownum condition. So if you specify `rownum = 2` it will never occur, since the first row is already rejected.
You can see this very nicely if you do an explain plan on your query. It will show something like:
Plan for `rownum <=`:
```
COUNT STOPKEY
```
Plan for `rownum =`:
```
FILTER
``` | A ROWNUM value is not assigned permanently to a row (this is a common misconception). A row in a table does not have a number; you cannot ask for row 2 or 3 from a table.
Click [here](http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html) for more info.
This is from the link provided:
Also confusing to many people is when a ROWNUM value is actually assigned. A ROWNUM value is assigned to a row after it passes the predicate phase of the query but before the query does any sorting or aggregation. Also, a ROWNUM value is incremented only after it is assigned, which is why the following query will never return a row:
```
select *
from t
where ROWNUM > 1;
```
Because ROWNUM > 1 is not true for the first row, ROWNUM does not advance to 2. Hence, no ROWNUM value ever gets to be greater than 1. Consider a query with this structure:
```
select ..., ROWNUM
from t
where <where clause>
group by <columns>
having <having clause>
order by <columns>;
``` | oracle sql with rownum <= | [
"",
"sql",
"oracle",
""
] |
I hate to ask something that I know has been answered, but after 45 minutes of searching I yield to everyone here.
I have a table where I need to select a list of items based on a value, and then take one field and add a year to it and update another column with that value.
Example:
```
select * from table where description='value_i_need'
UPDATE table SET endDate=DATEADD(year, 1, beginDate) -- For each item in the select
```
EDIT: The select statement will return a ton of results. I need to update each row uniquely.
I don't know how to combine the select or loop through the results.
My SQL knowledge is limited, and I wish I could have figured this out without having to ask. I appreciate any help
Edit 2:
I have one table that I need to go through and update the end date on each item that matches a description. So I have 1000 "Computers" for example, that I need to update a field in each row with that description based on another field in the same row. All coming from one table | Try like this
```
UPDATE
Table T1
SET
T1.endDate=DATEADD(year, 1, beginDate)
WHERE T1.description='value_i_need'
``` | Unless I missed something, you should be able to simply add your criteria to your update statement as follows:
```
UPDATE [table]
SET endDate=DATEADD(year, 1, beginDate)
WHERE description='value_i_need'
``` | SQL Server - Select list and update a field in each row | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Is it possible to use the DatePart function to show a week running from Sat - Fri as opposed to your typical Monday - Sunday week? I know this will return Monday - Sunday, but can you change it to Sat - Fri?
```
DATEPART(WEEK,[HireDate]) AS Week_Number
``` | Yes use [SET DATEFIRST](http://msdn.microsoft.com/en-us/library/ms181598.aspx) this sets the day to count as the first day of the week so it changes on the day you specify. | You can alter the first day of the week:
```
SET DATEFIRST { number | @number_var }
``` | Using DatePart to show specific Days | [
"",
"sql",
"sql-server",
"t-sql",
""
] |

I have a table dbo.[sg\_Attendance] with the schema shown in the image above.
Now, using the query below, I am getting output with only the listed data:
```
SELECT Year(SG_PROCESS_DATE) AS yearname,
       DATENAME(MONTH, DateAdd(MONTH, MONTH(SG_PROCESS_DATE), 0) - 1) AS monthname,
       SUM(SG_WORKTIME) AS workhour
FROM dbo.[SG_ATTENDANCE]
WHERE (SG_PROCESS_DATE BETWEEN '2013-01-01' AND '2013-05-01')
GROUP BY YEAR(SG_PROCESS_DATE), MONTH(SG_PROCESS_DATE)
```
My requirement is to get data for all months falling in the range:
 | Finally I got the Solution:
```
SELECT Year(SG_PROCESS_DATE) AS yearname,
       DATENAME(MONTH, DateAdd(MONTH, MONTH(SG_PROCESS_DATE), 0) - 1) AS monthname,
       SUM(SG_WORKTIME) AS workhour
FROM innovator.[SG_ATTENDANCE]
WHERE (SG_PROCESS_DATE BETWEEN '2013-01-01' AND '2014-03-01')
GROUP BY YEAR(SG_PROCESS_DATE), MONTH(SG_PROCESS_DATE)
UNION
SELECT DATENAME(YEAR, DATEADD(MONTH, x.number, '2013-01-01')) AS yearname,
       DATENAME(MONTH, DATEADD(MONTH, x.number, '2013-01-01')) AS monthname,
       0 AS workhour
FROM master.dbo.spt_values x
WHERE x.type = 'P'
  AND x.number <= DATEDIFF(MONTH, '2013-01-01', '2014-03-01')
EXCEPT
SELECT Year(SG_PROCESS_DATE) AS yearname,
       DATENAME(MONTH, DateAdd(MONTH, MONTH(SG_PROCESS_DATE), 0) - 1) AS monthname,
       0 AS workhour
FROM innovator.[SG_ATTENDANCE]
WHERE (SG_PROCESS_DATE BETWEEN '2013-01-01' AND '2014-03-01')
GROUP BY YEAR(SG_PROCESS_DATE), MONTH(SG_PROCESS_DATE)
``` | Here is another option using a little trick with the `master..spt_values` table, which contains sequential number from 0 to 2047.
```
declare @months table (mm date)
declare @start date, @end date
set @start = '2013-01-01'
set @end = '2013-05-01'
insert into @months (mm)
select dateadd(day,v.number,@start)
from master..spt_values v
where
v.number >= 0 and
v.number <= DATEDIFF(day,@start,@end) and
v.type = 'p';
SELECT Year(m.mm) as yearname,
DATENAME(MONTH,DateAdd(MONTH, MONTH(m.mm),0) - 1) AS monthname,
SUM(SG_WORKTIME) as workhour
from @months m
left join dbo.[SG_ATTENDANCE] a on a.SG_PROCESS_DATE = m.mm
where SG_PROCESS_DATE is null or (SG_PROCESS_DATE BETWEEN @start AND @end)
Group By YEAR(m.mm), MONTH(m.mm)
``` | Get all year and months from date range in sql with or without data | [
"",
"sql",
"sql-server",
""
] |
I have a smalldatetime column and want to do the equivalent of a wildcard search on it and find all rows that have times of 00:00:00 irrespective of the date.
eg
```
id | col
1 | 2014-01-01 00:01:00
2 | 2014-01-02 00:01:00
3 | 2014-01-03 00:00:00
4 | 2014-01-04 00:00:00
```
would return
```
id | col
3 | 2014-01-03 00:00:00
4 | 2014-01-04 00:00:00
```
What's the easiest way to do this? | ```
WHERE CONVERT(VARCHAR(8),col,108) = '00:00:00'
```
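The same idea can be exercised end-to-end with a quick script (a sketch using sqlite3, where `strftime('%H:%M:%S', col)` plays the role of SQL Server's `CONVERT(VARCHAR(8), col, 108)`):

```python
import sqlite3

# Filter rows whose time-of-day part is exactly midnight, regardless of date.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (id INTEGER, col TEXT);
    INSERT INTO t VALUES
        (1, '2014-01-01 00:01:00'),
        (2, '2014-01-02 00:01:00'),
        (3, '2014-01-03 00:00:00'),
        (4, '2014-01-04 00:00:00');
""")
rows = con.execute("""
    SELECT id, col FROM t
    WHERE strftime('%H:%M:%S', col) = '00:00:00'
""").fetchall()
print(rows)  # [(3, '2014-01-03 00:00:00'), (4, '2014-01-04 00:00:00')]
```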
See <http://www.w3schools.com/sql/func_convert.asp> | Try this one -
```
DECLARE @t TABLE
(
id INT PRIMARY KEY,
col SMALLDATETIME
)
INSERT INTO @t (id, col)
VALUES
(1, '2014-01-01 00:01:00'),
(2, '2014-01-02 00:01:00'),
(3, '2014-01-03 00:00:00'),
(4, '2014-01-04 00:00:00')
SELECT *
FROM @t
WHERE CAST(col AS FLOAT) - ROUND(CAST(col AS FLOAT), 0) = 0
```
Output -
```
id col
----------- -----------------------
3 2014-01-03 00:00:00
4 2014-01-04 00:00:00
``` | MS-SQL Server 2008R2 selecting the time from smalldatetime irrespective of date | [
"",
"sql",
"sql-server",
""
] |
Here's my table.

When I execute the below query using NOT IN it gives me namal and Ann.
```
SELECT firstname
FROM info.student_info
WHERE firstname NOT IN (SELECT firstname
FROM info.student_info
WHERE lastname IS NULL)
```
But when I execute the below query using NOT EXISTS it gives me no rows.
```
SELECT firstname
FROM info.student_info
WHERE NOT EXISTS (SELECT firstname
FROM info.student_info
WHERE lastname IS NULL)
```
Why is this? I have researched some areas on NOT IN and NOT EXISTS, but couldn't find an answer for this. | `NOT IN` is true when the value is not in the set returned by the subquery.
`NOT EXISTS` is true when the subquery returns no rows at all. Your subquery is not correlated with the outer query, so it returns the same rows for every outer row; since it finds at least one row with a NULL lastname, `NOT EXISTS` is false for every row. | You have not joined the tables in your `NOT EXISTS` query, and that is why you're not getting any result from your second query.
Your second query would have returned results only if the sub-query could not find any record at all.
Change your query to correlate on the firstname column of the outer table, and then you should get results.
```
SELECT firstname
FROM info.student_info a
WHERE NOT EXISTS (SELECT b.firstname
FROM info.student_info b
WHERE b.lastname IS NULL
AND b.firstname = a.firstname
)
```
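The uncorrelated-vs-correlated difference can be verified with a small script (a sketch using sqlite3; the NULL-lastname row is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student_info (firstname TEXT, lastname TEXT);
    INSERT INTO student_info VALUES
        ('namal', 'perera'), ('Ann', 'Fernando'), ('sachini', NULL);
""")

# Uncorrelated: the subquery finds a NULL-lastname row, so NOT EXISTS is
# false for *every* outer row -> 0 rows.
uncorrelated = con.execute("""
    SELECT firstname FROM student_info
    WHERE NOT EXISTS (SELECT 1 FROM student_info WHERE lastname IS NULL)
""").fetchall()

# Correlated on firstname: evaluated per outer row -> behaves like NOT IN.
correlated = con.execute("""
    SELECT firstname FROM student_info a
    WHERE NOT EXISTS (SELECT 1 FROM student_info b
                      WHERE b.lastname IS NULL
                        AND b.firstname = a.firstname)
""").fetchall()
print(uncorrelated, correlated)  # [] vs the expected two names
```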
Disclaimer: Not tested the query | Different between NOT IN and NOT EXISTS in postgres SQL | [
"",
"sql",
"postgresql",
""
] |
I have the following relationships in the User model.
```
has_many :subscriptions, dependent: :destroy
has_one :subscription, -> { where 'end_date >= ?', Date.today }
```
:subscription is essentially their most current monthly subscription. :subscriptions is an array of all their previous monthly subscriptions (including the most current). They list the start\_date, end\_date, user\_id, plan\_id, payment\_method.
When a subscription plan is upgraded or downgraded, I change the end\_date on the current subscription to Date.today (signifying that the subscription ended today) and then create a new subscription that starts today and ends in a month.
The problem with having the end\_date as Date.today is that the WHERE clause will pick up both the cancelled subscription and the new subscription, but will favor the cancelled subscription because the generated SQL query tacks on an ORDER BY clause for the primary id, ascending.
```
ORDER BY "subscriptions"."id" ASC LIMIT 1
```
For the life of me, I can't figure out how to reverse the ORDER BY from ASC to DESC on the id. But I'd much prefer it to order based on the created\_at timestamp. The following would be the appropriate code just for ordering, but it's necessary that I have the WHERE clause in addition to the ORDER BY clause.
```
has_one :subscription, -> { order 'created_at desc' }
```
Has anyone figured how to combine the two? | As of Rails 4, Rails automatically adds the ordering by id when you call the `first` method. (You can call `take` instead to avoid this.) So I'd guess that `has_one` is calling `first` internally.
Since `has_one` is just auto-creating a method called `subscription`, why not just create it yourself?:
```
def subscription
subscriptions.where('end_date >= ?', Date.today).order("created_at DESC").take
end
```
If you don't like this, it's not clear from your question whether you've tried this:
```
has_one :subscription, -> { where('end_date >= ?', Date.today).order("created_at DESC") }
```
There's no reason you can't chain the two items like that. But as I recall the Rails 4 `first` will override your `order` directive and replace it with `order(:id)`, so this might not work. You can find out for sure by saying `@user.subscription.to_sql`. | The problem appears to be a mismatch in your design between expiring subscriptions and dating them as "today", and stating that only one subscription can have an end date >= today. They contradict each other, so you need to resolve that issue.
The heart of it seems to be that subscription start dates are inclusive and end dates are exclusive -- unless you define end dates as being "the last date on which the subscription is active" rather than "the day after the last day on which the subscription is active", you have two subscriptions being active on the same day. | Rails - has_one relationship with both WHERE & ORDER? | [
"",
"sql",
"ruby-on-rails",
"ruby",
"relationship",
"has-one",
""
] |
My data looks as below:
```
Student_ID State
1 WI
1 IL
1 IL
2 WI
3 IL
4 IL
4 MN
```
I want my Output to be as follows:
Output: If the same student is in both WI and any other state then mark them as 'multiple'; when a student is only in WI mark them as 'InState'; and when the student is only in other states mark them as 'OUT of STATE'. This query needs to be kept in an SSIS lookup.
```
Student ID Status
1 MULTIPLE
2 IN
3 OUT
4 OUT
```
Please let me know how we can achieve this output in SQL.
Thanks | You'll need to tidy it up but it should work
```
WITH x AS
-- eliminate all duplicates,set up a counter for each distinct record
-- as well as flag for WI
(SELECT DISTINCT studentID, STATE, 1 AS cnt, CASE WHEN STATE = 'WI' THEN 1 ELSE 0 END WI
FROM DataTable)
SELECT studentID,
CASE
WHEN MAX(wi) = 0 THEN 'OUT' -- this student is not in WI at all
WHEN MAX(wi) = 1 AND SUM(cnt) > 1 THEN 'MULTI' -- this student is in WI as well as another state
WHEN MAX(wi) = 1 AND SUM(cnt) = 1 THEN 'IN' END AS Status -- this student is in WI and and no other state
FROM x GROUP BY studentID
``` | ```
;WITH CTE AS
(
SELECT DISTINCT SID, [State],
Score =
CASE
WHEN [State] = 'WI' THEN 100
ELSE 1
END
FROM T
)
SELECT SID,
Status =
CASE
WHEN SUM(Score) > 100 THEN 'MULTIPLE'
WHEN SUM(Score) = 100 THEN 'IN'
ELSE 'OUT'
END
FROM CTE
GROUP BY SID
``` | Choosing and assigning a value based among the duplicate records in sql | [
"",
"sql",
"sql-server",
"t-sql",
"ssis",
"lookup",
""
] |
If I have a table `mytable` and a list
```
set vals = (1,2,3,4);
```
and I want to cross-join the table with the list (getting a new table which has 4 times as many rows as the original table and an extra `val` column), do I have a better option than creating [an explicit temp table](https://stackoverflow.com/questions/21413217/how-do-i-create-a-hive-table-without-any-intermediate-files)?
What I can do is:
```
select a.*, b.val
from mytable a cross join
(select stack(4,1,2,3,4) as (val) from
(select * from mytable limit 1) z) b;
```
EDIT: My main use case would be passing `-hiveconf vals='4,1,2,3,4'` to `hive` and replacing `stack(4,1,2,3,4)` with `stack(${hiveconf:vals})` in the above code. | ```
select a.*, b.val
from a lateral view explode(array(1,2,3,4)) b as val;
``` | I don't know if this will help.
```
SELECT *
from mytable cross join
(select 1 as p
union
select 2
union
select 3
union
select 4) as x
``` | How do I cross-join a table with a list? | [
"",
"sql",
"join",
"hive",
""
] |
[**STRUCTURE TABLES AND SQL QUERY ON SQLFIDDLE**](http://sqlfiddle.com/#!3/299fd/4)
Table Price:
```
name id_firm id_city name
ООО 1429 73041 ПЕРЕВОЗКА ГРУЗА (ГРУЗОПЕРЕВОЗКА) АВТО
ООО 1429 73041 ПЕРЕВОЗКА ГРУЗА (ГРУЗОПЕРЕВОЗКА) АВТО
ООО 1429 73041 ПЕРЕВОЗКА ГРУЗА (ГРУЗОПЕРЕВОЗКА) АВТО
ООО 1429 73041 ПЛИТКА КЕРАМИЧЕСКАЯ ГРАНИТ (КЕРАМОГРАНИТ) АССОРТ.
ООО 1429 73041 РАБОТЫ ГРУЗОПОДЪЕМНЫЕ АВТОВЫШКА (ПОДЪЕМНИК)
ООО 1429 73041 РАБОТЫ ГРУЗОПОДЪЕМНЫЕ АВТОКРАНОМ Г/П 25Т
ООО 1429 73041 РАБОТЫ КОМПРЕССОРОМ ВСЕ ВИДЫ
ООО 1429 73041 РАБОТЫ ПОГРУЗОЧНО-РАЗГРУЗОЧНЫЕ АВТОКРАНОМ-МАНИПУЛЯТОРОМ
```
I use this query:
```
SELECT * FROM (
SELECT Row_Number()OVER(PARTITION BY f.name ORDER BY f.priority, d.end_date, p.datetime DESC) AS NameOrder,
p.id_price as p_id_price,
p.id_service as p_id_service,
p.name as p_name,
p.name_original as p_name_original,
p.id_producer_country as p_id_producer_country,
p.id_firm as p_id_firm,
p.unit as p_unit,
f.name as f_name,
f.priority as f_priority,
f.address as f_address,
f.phone as f_phone,
city.name as city_name,
pc.name as pc_name
FROM Price p
left join Firm f on f.id_service = p.id_service AND f.id_city = p.id_city AND f.id_firm = p.id_firm
left join City city on city.id_city = p.id_city
left join Producer_country pc on pc.id_producer_country = p.id_producer_country
left join Dogovor d on d.id_service=p.id_service AND d.id_city=p.id_city AND d.id_firm=p.id_firm
WHERE
p.name LIKE '%АВТОВЫШКА%'
)
AS S WHERE S.NameOrder = 1
ORDER BY S.f_priority DESC
```
But I get 0 results.
Please tell me why I get 0 results when I search for `АВТОВЫШК_ (%` if the table has the row `ООО 1429 73041 РАБОТЫ ГРУЗОПОДЪЕМНЫЕ АВТОВЫШКА (ПОДЪЕМНИК)`
Why is the row count in the results 0? | The clause `LIKE 'АВТОВЫШК %'` means that the value of that column for the current row has to *start with* that string. If you want to match it *anywhere* in the string you need an "any number of characters" wildcard on both ends:
```
LIKE '%АВТОВЫШК %'
``` | I'm guessing that because of your language encoding you are storing p.name as an `NVARCHAR(xx)` datatype.
The string you are passing in with `LIKE '%АВТОВЫШК %'` is effectively a `VARCHAR`, so all the special characters are lost and so the strings don't match.
Try specifying it like this:
```
LIKE N'%АВТОВЫШК%'
```
Adding the N to the start of the string tells SQL server that it is a unicode string and so the special characters will be preserved, then you have a better chance of matching. | Why i get 0 results in query(one row in table have search word)? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm trying to match similar email addresses from two tables. What I would like to do is find records where the email address from table 1 is similar to the email address in table 2 (where the text is the same before the @ symbol). For example JohnSmith@gmail.com would match JohnSmith@yahoo.com. | For Access you can use a query like this:
```
SELECT
t1.ID AS ID1,
t1.Email AS Email1,
t2.ID AS ID2,
t2.Email AS Email2
FROM
Table1 AS t1
INNER JOIN
Table2 as t2
ON Left(t1.Email,InStr(t1.Email,'@')-1) = Left(t2.Email,InStr(t2.Email,'@')-1)
```
**Edit re: comment**
If you receive a "Data type mismatch in criteria expression" error due to Null values in the fields that you are attempting to join then try this
```
SELECT
t1.ID AS ID1,
t1.Username AS Email1,
t2.ID AS ID2,
t2.email AS Email2
FROM
USERS AS t1
INNER JOIN
LICENSE AS t2
ON Left(Nz(t1.Username,''),InStr(Nz(t1.Username,''),'@')-1) = Left(Nz(t2.email,''),InStr(Nz(t2.email,''),'@')-1);
```
**Edit 2**
...or perhaps you'd prefer something like this:
```
SELECT
t1.ID,
t1.Username AS Email
FROM
USERS AS t1
WHERE
EXISTS
(
SELECT *
FROM LICENSE AS t2
WHERE Left(Nz(t1.Username,''),InStr(Nz(t1.Username,''),'@')-1) = Left(Nz(t2.email,''),InStr(Nz(t2.email,''),'@')-1)
)
``` | In SQL you could do something like this, just an IN and selecting everything before the @; the column name is 'email' in both tables:
```
SELECT SUBSTRING(Email,0, CHARINDEX('@',Email))
from [TABLE1]
where SUBSTRING(email,0, CHARINDEX('@',email))
in (SELECT SUBSTRING(email,0, CHARINDEX('@',email)) from [TABLE2])
``` | SQL - Match email address text before @ symbol from two tables | [
"",
"mysql",
"sql",
"ms-access",
""
] |
I have a MySQL db table with a column containing strings. And I have a list of strings. Some of the strings from the list can be found in my table, others can't. See my table TableName:
```
<TableName>
IntegerColumn, StringColumn
1, one
2, two
3, three
4, four
```
If I execute the query
```
SELECT * FROM TableName WHERE StringColumn NOT IN ('three', 'four', 'five', 'six');
```
I get a result of two rows, which contain nothing but NULL.
How can I see for which of the strings there was **no** match? Because I want to add them to the db table
Thx in advance | I think what you want is: given 'three', 'four', 'five', 'six', identify which of these strings are not present in the database.
Doing that in a single query will be tough. You can just use the query below to get the available strings and their counts. Then, if you are using a programming language, you can identify which strings are not present in the result and proceed from there.
```
SELECT StringColumn, count(*) FROM TableName group by StringColumn
```
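The application-side diff step could look like this (a sketch in Python with sqlite3 standing in for MySQL; table and column names are taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TableName (IntegerColumn INTEGER, StringColumn TEXT);
    INSERT INTO TableName VALUES (1, 'one'), (2, 'two'), (3, 'three'), (4, 'four');
""")
candidates = ['three', 'four', 'five', 'six']
placeholders = ','.join('?' * len(candidates))
# Fetch the candidates that DO exist, then subtract them from the list.
found = {r[0] for r in con.execute(
    f"SELECT StringColumn FROM TableName WHERE StringColumn IN ({placeholders})",
    candidates)}
missing = [s for s in candidates if s not in found]
print(missing)  # ['five', 'six'] -> the strings to INSERT
```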
Not sure if this is what you are looking for. | Using the following sample
```
CREATE TABLE sample_table
(
id int auto_increment primary key,
details varchar(30)
);
INSERT INTO sample_table
(id, details)
VALUES
(1, 'One'),
(2, 'Two'),
(3, 'Three'),
(4, 'Four'),
(5, 'Five');
```
I ran the query
```
SELECT * FROM sample_table
WHERE details NOT IN ('two', 'three', 'nine');
```
which gave the correct output of:
```
1 One
4 Four
5 Five
```
If you've got NULL returned then there is something you're not explaining in your question. Can you provide schema information or even a [SQL Fiddle](http://sqlfiddle.com/) and I'm sure you'll get a much better answer. | MySQL: show strings in result that were not found | [
"",
"mysql",
"sql",
""
] |
Is it OK to swallow Duplicate key violation exceptions for INSERTS or should you check if the record exists?
So let's say that I have a table Photo with one field: PhotoName. I'm looking through a file directory to add items to Photo. In the process, it's possible that when I find a photoname, it might already be in the database. So there are two ways to go about this:
```
1) //Look to see if it exists before adding it. Only add it if it does not exist.
bool photoExists = SQLSELECTStatementToCheckIfThePhotoExists(photoName);
if(!photoExists)
SQLCommandToInsertPhoto(photoName)
or 2) //Assume that it doesn't exist. If it does, catch and ignore.
try
{
SQLCommandToInsertPhoto(photoName);
}
catch(DuplicateKeyException ex)
{
//swallow it and continue on as if nothing happened.
}
```
On the one hand, I don't necessarily like the notion of just "swallowing" an exception, but on the other hand, try...catch uses only one call to the DB. This happens to be in SQL Server. | You should definitely not just "swallow" the exception. You should be trying to find these duplicates and not insert them if needed.
One method could be checking `WHERE NOT EXISTS` on the key.
```
INSERT INTO TargetTable
SELECT
KeyID,
blah,
blerg
FROM SourceTable AS S
WHERE NOT EXISTS (
SELECT 1
FROM TargetTable AS T
WHERE S.KeyID = T.KeyID
)
```
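The pattern can be exercised on a small scale (a sketch with sqlite3 and the single-column `PhotoName` key from the question; SQL Server-specific concerns such as locking hints are left out):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Photo (PhotoName TEXT PRIMARY KEY);
    INSERT INTO Photo VALUES ('a.jpg');
""")

def add_photo(name):
    # Inserts only when the key is not already present -- no exception to swallow.
    con.execute("""
        INSERT INTO Photo (PhotoName)
        SELECT ? WHERE NOT EXISTS (SELECT 1 FROM Photo WHERE PhotoName = ?)
    """, (name, name))

add_photo('a.jpg')   # duplicate: silently skipped, no error raised
add_photo('b.jpg')   # new: inserted
rows = [r[0] for r in con.execute("SELECT PhotoName FROM Photo ORDER BY PhotoName")]
print(rows)  # ['a.jpg', 'b.jpg']
```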
This method will allow you to only `INSERT` new rows into the table. It of course does not account for any matching you may need to do for an `UPDATE`; however, that's outside the scope of this question but should still be thought about. Most users could also use `MERGE`; I'll post an example of that when I get time. | It can be very expensive to let SQL Server raise exceptions (even if you only swallow them) - see [here](http://www.mssqltips.com/sqlservertip/2632/checking-for-potential-constraint-violations-before-entering-sql-server-try-and-catch-logic/?utm_source-AaronBertrand) and [here](http://www.sqlperformance.com/2012/08/t-sql-queries/error-handling).
So my suggestion is to check for the violation first, and only insert if you have to. However, I wouldn't separate these out into separate statements, especially in completely separate round-trips to the app, as you can have this scenario:
```
-- connection A, at 12:00:00.0000001:
SELECT FROM TABLE WHERE key = 'x'; -- 0 rows returned
-- connection B, at 12:00:00.0000002:
SELECT FROM TABLE WHERE key = 'x'; -- 0 rows returned
-- connection A, at 12:00:00.0000003:
INSERT dbo.TABLE(key) VALUES('x'); -- succeeds
-- connection B, at 12:00:00.0000003:
INSERT dbo.TABLE(key) VALUES('x'); -- fails
```
I would rather do this in a single `INSERT ... WHERE NOT EXISTS` statement, as [@Zane's answer](https://stackoverflow.com/a/21485328/61305) demonstrates, though I would add higher escalation on the `SELECT` portion. Or you could use an `INSTEAD OF INSERT` trigger to just bail from the insert if a key violation is spotted ([I wrote about this here](http://www.mssqltips.com/sqlservertip/3139/improve-sql-server-efficiency-by-switching-to-instead-of-triggers/?utm_source-AaronBertrand).)
As an aside, I'd use extreme caution with `MERGE` - see [this article for my reasoning](http://www.mssqltips.com/sqlservertip/3074/use-caution-with-sql-servers-merge-statement/?utm_source-AaronBertrand) and [some other opinions, too](https://sqlblog.org/merge). | Is it OK to swallow an exception for DB INSERT | [
"",
"sql",
"sql-server",
"exception",
""
] |
This little problem is driving me up the wall... I have this table:
```
ID N1 N2 N3
----------------------------------------------
1 - 2 -
1 9 - -
1 - - 3
2 - - 2
2 1 - -
3 - - -
3 1 - 3
```
and I want this result:
```
ID N1 N2 N3
-----------------------------
1 9 2 3
2 1 - 2
3 1 - 3
```
I mean, I want to put rows together depending on the ID and preserving the data that is not null in the other columns, in Oracle...
Please help me! I don't know much about PL/SQL but I imagine that the solution could be found by using functions or procedures. | Using min, this can be achieved
```
with tab(ID,N1,N2,N3) as (
select 1,NULL,2,NULL from dual union all
select 1,9,NULL,NULL from dual union all
select 1,NULL,NULL,3 from dual union all
select 2,NULL,NULL,2 from dual union all
select 2,1,NULL,NULL from dual union all
select 3,NULL,NULL,NULL from dual union all
select 3,1,NULL,3 from dual)
-----
--End of data
-----
select id, min(n1), min(n2), min(n3)
from tab
group by id
```
Output
```
| ID | MIN(N1) | MIN(N2) | MIN(N3) |
|----|---------|---------|---------|
| 1 | 9 | 2 | 3 |
| 2 | 1 | (null) | 2 |
| 3 | 1 | (null) | 3 |
``` | Assuming that those "-" represent values that are null, you could use the FIRST\_VALUE analytical function, with the IGNORE NULLS option.
Example Query:
```
select id,
first_value(n1) ignore nulls over (partition by n1 order by n1) as N1,
first_value(n2) ignore nulls over (partition by n2 order by n2) as N2,
first_value(n3) ignore nulls over (partition by n3 order by n3) as N3
from table;
```
More information on the function can be found [on Oracle Base](http://www.oracle-base.com/articles/misc/first-value-and-last-value-analytic-functions.php "First Value and Last Value Analytic Functions") | Put rows together in result of query? | [
"",
"sql",
"oracle",
"group-by",
""
] |
I need to copy a number from a table where the status is `9` to where the status is `8`
This is what the table currently looks like:
```
REF STATUS number
ab12 9 3452
ab12 8
cd23 9 2112
cd23 8
```
SQL:
```
SELECT
[REF]
,[STATUS]
,(select [number]
FROM table
where [REF] in
(select [REF]
from table
where [number] = ''
and [STATUS] = '8')
and [STATUS] = '9')
FROM [table]
```
I would like it to return something like:
```
REF STATUS number
ab12 8 3452
cd23 8 2112
```
However, because my subquery `select [REF] from table where [number] = '' and [STATUS] = '8'`
returns more than one row it won't work
Any idea how to do this? | If you are trying to `UPDATE` the original table, you can do this:
```
UPDATE t
SET t.number = s.number
FROM [table] t
INNER JOIN [table] s
ON t.REF = s.REF
AND s.status = 9
WHERE t.status = 8
AND t.number = ''
``` | You can try joining the table on itself.
```
SELECT
t.REF,
t2.STATUS,
t.number
FROM [table] t
INNER JOIN [table] t2 ON t.REF = t2.REF
WHERE t.STATUS = 9
AND t2.STATUS = 8
``` | SQL Subquery returns more than one result in copy data | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
my problem is about this kind of query :
```
select * from SOMETABLE where SOMEFIELD in ('STRING1','STRING2');
```
The previous code works fine within SQL Developer.
The same static query also works fine and returns a few results:
```
Query nativeQuery = em.createNativeQuery(thePreviousQuery,new someResultSet());
return nativeQuery.getResultList();
```
But when I try to parameterize this, I encounter a problem.
```
final String parameterizedQuery = "select * from SOMETABLE where SOMEFIELD in (?selectedValues)";
Query nativeQuery = em.createNativeQuery(parameterizedQuery ,new someResultSet());
nativeQuery.setParameter("selectedValues","'STRING1','STRING2'");
return nativeQuery.getResultList();
```
I got no result (but no error in console).
And when I look at the log, I see such a thing :
```
select * from SOMETABLE where SOMEFIELD in (?)
bind => [STRING1,STRING2]
```
I also tried using no quotes (with a similar result), or a named (non-positional) parameter (:selectedValues), which leads to this error:
```
SQL Error: Missing IN or OUT parameter at index:: 1
```
I eventually tried to have the parentheses set directly in the parameter, instead of in the query, but this didn't work either...
I could build my query at runtime, to match the first (working) case, but I'd rather do it the proper way; thus, if anyone has an idea, I'll read them with great interest!
FYI :
JPA version 1.0
Oracle 11G | JPA support the use of a collection as a list literal parameter only in JPQL queries, not in native queries. Some JPA providers support it as a proprietary feature, but it's not part of the JPA specification (see <https://stackoverflow.com/a/3145275/1285097>).
Named parameters in native queries also aren't part of the JPA specification. Their behavior depends on the persistence provider and/or the JDBC driver.
Hibernate with the JDBC driver for Oracle support both of these features.
```
List<String> selectedValues = Arrays.asList("STRING1", "STRING2");
final String parameterizedQuery = "select * from SOMETABLE where SOMEFIELD in (:selectedValues)";
return em.createNativeQuery(parameterizedQuery)
.setParameter("selectedValues", selectedValues)
.getResultList();
``` | Instead of:
```
nativeQuery.setParameter("selectedValues", params);
```
I had to use:
```
nativeQuery.setParameterList("selectedValues", params);
``` | How to use a dynamic parameter in a IN clause of a JPA named query? | [
"",
"sql",
"oracle",
"jpa",
"oracle11g",
""
] |
In a PostgreSQL table I have a column which has values like
```
AX,B,C
A,BD
X,Y
J,K,L,M,N
```
In short, it will have a few comma-separated strings in the column for each record. I wanted to get the last one in each record. I ended up with this.
```
select id, reverse(substr(reverse(mycolumn),1,position(',' in reverse(mycolumn)))) from mytable order by id ;
```
Is there an easier way? | With `regexp_replace`:
```
select id, regexp_replace(mycolumn, '.*,', '')
from mytable
order by id;
``` | I would do it this way:
```
select reverse(split_part(reverse(myColumn), ',', 1))
``` | PostgreSQL get last value in a comma separated list of values | [
"",
"sql",
"postgresql",
""
] |
I have data recording the `StartDateTime` and `EndDateTime` (both `DATETIME2`) of a process for all of the year 2013.
My task is to find the maximum number of times the process was running at any specific time throughout the year.
I have written some code that checks, every minute/second, how many processes were running at that specific time, but this takes a very long time and would be impossible to let run for the whole year.
Here is the code (in this case checking every minute for the date 25/10/2013)
```
CREATE TABLE dbo.#Hit
(
ID INT IDENTITY (1,1) PRIMARY KEY,
Moment DATETIME2,
COUNT INT
)
DECLARE @moment DATETIME2
SET @moment = '2013-10-24 00:00:00'
WHILE @moment < '2013-10-25'
BEGIN
INSERT INTO #Hit ( Moment, COUNT )
SELECT @moment, COUNT(*)
FROM dbo.tblProcessTimeLog
WHERE ProcessFK IN (25)
AND @moment BETWEEN StartDateTime AND EndDateTime
AND DelInd = 0
PRINT @moment
SET @moment = DATEADD(MINute,1,@moment)
END
SELECT * FROM #Hit
ORDER BY COUNT DESC
```
Can anyone think how I could get a similar result (I just need the maximum number of processes being run at any given time), but for the whole year?
Thanks | ```
DECLARE @d DATETIME = '20130101'; -- the first day of the year you care about
;WITH m(m) AS
( -- all the minutes in a day
SELECT TOP (1440) ROW_NUMBER() OVER (ORDER BY number) - 1
FROM master..spt_values
),
d(d) AS
( -- all the days in *that* year (accounts for leap years vs. hard-coding 365)
SELECT TOP (DATEDIFF(DAY, @d, DATEADD(YEAR, 1, @d))) DATEADD(DAY, number, @d)
FROM master..spt_values WHERE type = N'P' ORDER BY number
),
x AS
( -- all the minutes in *that* year
SELECT moment = DATEADD(MINUTE, m.m, d.d) FROM m CROSS JOIN d
)
SELECT TOP (1) WITH TIES -- in case more than one at the top
x.moment, [COUNT] = COUNT(l.ProcessFK)
FROM x
INNER JOIN dbo.tblProcessTimeLog AS l
ON x.moment >= l.StartDateTime
AND x.moment <= l.EndDateTime
WHERE l.ProcessFK = 25 AND l.DelInd = 0
GROUP BY x.moment
ORDER BY [COUNT] DESC;
```
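As a cross-check of the result (not a replacement for the SQL), the same peak-concurrency question can be answered with a start/end event sweep. A minimal Python sketch over hypothetical intervals:

```python
# Sweep over interval endpoints: +1 at each start, -1 just after each end.
# The peak of the running total is the max number of simultaneous processes.
def max_concurrent(intervals):
    events = []
    for start, end in intervals:
        events.append((start, 1))     # process becomes active
        events.append((end + 1, -1))  # inclusive end: still active at `end`
    events.sort()
    running = peak = 0
    for _, delta in events:
        running += delta
        peak = max(peak, running)
    return peak

# Hypothetical start/end times expressed as minutes since some epoch
intervals = [(0, 10), (5, 15), (8, 9), (20, 25)]
print(max_concurrent(intervals))  # 3 (minutes 8-9 have three overlaps)
```

Sorting all endpoints costs O(n log n), regardless of how long the year is, so it scales well when the number of log rows is much smaller than the number of minutes.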
[See this post for why I don't think you should use `BETWEEN` for range queries, even in cases where it does semantically do what you want](https://sqlblog.org/2011/10/19/what-do-between-and-the-devil-have-in-common). | Create a table `T` whose rows represent some time segments.
This table could well be a temporary table (depending on your case).
Say:
`row 1 - [from=00:00:00, to=00:00:01)`
`row 2 - [from=00:00:01, to=00:00:02)`
`row 3 - [from=00:00:02, to=00:00:03)`
and so on.
Then just join from your main table
(`tblProcessTimeLog`, I think) to this table
based on the datetime values recorded in
`tblProcessTimeLog`.
A year has just about half million minutes
so it is not that many rows to store in `T`. | trying to find the maximum number of occurrences over time T-SQL | [
"",
"sql",
"sql-server",
""
] |
Here's my current query:
```
SELECT SUM(round("records"."value")) AS sum_value,
to_char(service_date_time, 'Mon YYYY') AS to_char_service_date_time_mon_yyyy
FROM "records"
INNER JOIN "categories" ON "categories"."id" = "records"."category_id"
WHERE (value < 10000)
AND (categories.kind = 'Attendance')
AND (records.service_date_time >= '2013-01-01 00:00:00.000000'
AND records.service_date_time <= '2014-01-29 00:37:00.400862')
GROUP BY to_char_service_date_time_mon_yyyy
```
This returns all the proper data:
```
3036710 Aug 2013
2792991 Jul 2013
3344243 Jun 2013
3121535 Apr 2013
3752803 Sep 2013
2931149 May 2013
3046820 Nov 2013
4437698 Mar 2013
2709170 Jan 2014
3709154 Jan 2013
3361630 Dec 2013
3008767 Oct 2013
3474820 Feb 2013
```
However, I can't seem to order by the new date format column because it's a string. If I do so, it'll sort Apr 2013 first, then Aug 2013. I need it sorted by month. I've tried a few different things, like using to\_date instead of to\_char, and also trying to cast the column to a date before ordering. No dice.
Any ideas?
Thanks! | I think the simplest way is to do:
```
order by min(service_date_time)
``` | You could add an auxiliary column for sorting:
```
SELECT SUM(round("records"."value")) AS sum_value,
to_char(service_date_time, 'Mon YYYY') AS to_char_service_date_time_mon_yyyy,
max(to_char(service_date_time, 'yyyy-mm')) as yyyy_mm
FROM "records"
INNER JOIN "categories" ON "categories"."id" = "records"."category_id"
WHERE (value < 10000)
AND (categories.kind = 'Attendance')
AND (records.service_date_time >= '2013-01-01 00:00:00.000000'
AND records.service_date_time <= '2014-01-29 00:37:00.400862')
GROUP BY to_char_service_date_time_mon_yyyy
order by yyyy_mm
```
If you want to remove the auxiliary column, then:
```
SELECT sum_value, to_char_service_date_time_mon_yyyy
FROM (... copy the above query here...) t
``` | Postgres: Ordering result by to_char date field | [
"",
"sql",
"postgresql",
""
] |
When I add a new argument to the `SELECT` clause of a `UNION`, I get more records... how can this be? Isn't a `UNION` just mashing them together? Example:
EDIT: They're absolutely distinct. The `code` column is either "IN" or "OUT", and that's what I'm using to separate the two.
EDIT2: UNION ALL gives me 80 records, like it should, but it's odd because my two SELECT statements are absolutely distinct.
FINAL EDIT: The ultimate problem was records within one of my SELECT statements not being DISTINCT, not duplicates between the two SELECT statements. Thanks all.
```
-- Yields 76 records
SELECT
f.date
, f.code
, f.cost
FROM a.fact f
WHERE f.code = 'IN'
UNION
SELECT
f2.date
, f2.code
, f2.cost
FROM a.fact2 f2
WHERE f2.code = 'OUT'
;
-- Yields 80 records
SELECT
f.key
, f.date
, f.code
, f.cost
FROM a.fact f
WHERE f.code = 'IN'
UNION
SELECT
f2.key
, f2.date
, f2.code
, f2.cost
FROM a.fact2 f2
WHERE f2.code = 'OUT'
;
``` | By default UNION selects distinct results, there must be duplicates between your result sets. | change `UNION` to `UNION ALL` and you should get the same results. `UNION` selects distinct rows, `UNION ALL` should select all. | Adding argument to SELECT statement with UNION changes record number | [
"",
"sql",
"oracle",
""
] |
I have three tables with common fields - `users`, `guests` and `admins`.
The last two tables have some of the `users` fields.
Here's an example:
**users**
```
id|username|password|email|city|country|phone|birthday|status
```
**guests**
```
id|city|country|phone|birthday
```
**admins**
```
id|username|password|status
```
I'm wondering if it's better to:
a)use one table with many NULL values
b)use three tables | The question is less about "one table with many NULLs versus three tables" than about the data structure. The real question is how other tables in your data structure will refer to these entities.
This is a classic situation, where you have "one-of" relationships and need to represent them in SQL. There is a "right" way, and that is to have *four* tables:
* "users" (I can't think of a good name) would encompass everyone and have a unique id that could be referenced by other tables
* "normal", "admins", "guests" each of which would have a 1-0/1 relationship with "users"
This allows other tables to refer to any of the three types of users, or to users in general. This is important for maintaining proper relationships.
You have suggested two shortcuts. One is that there is no information about "normal" users so you dispense with that table. However, this means that you can't refer to "normal" users in another table.
Often, when the data structures are similar, the data is simply denormalized into a single row (as in your solution a).
All three approaches are reasonable, in the context of applications that have specific needs. As for performance, the difference from having additional `NULLABLE` columns is generally minimal when the data types are variable length. If a lot of the additional columns are numeric, then these occupy real space even when `NULL`, which can be a factor in designing the best solution.
In short, I wouldn't choose between the different options based on the premature optimization of which might be better. I would choose between them based on the overall data structure needed for the database, and in particular, the relationships that these entities have with other entities.
EDIT:
Then there is the question of the `id` that you use for the specialized tables. There are two ways of doing this. One is to have a separate id, such as `AdminId` and `GuestId` for each of these tables. Another column in each table would be the `UserId`.
This makes sense when other entities have relationships with these particular entities. For instance, "admins" might have a sub-system that describes rights and roles and privileges that they have, perhaps along with a history of changes. These tables (ahem, entities) would want to refer to an `AdminId`. And, you should probably oblige by letting them.
If you don't have such tables, then you might still split out the Admins, because the 100 integer columns they need are a waste of space for the zillion other users. In that case, you can get by without a separate id.
I want to emphasize that you have asked a question that doesn't have a "best" answer in general. It does have a "correct" answer by the rules of normalization (that would be 4 tables with 4 separate ids). But the best answer in a given situation depends on the overall data model. | Why not have one parent user table with three foreign keyed detail tables. Allows unique user id that can transition. | Many null values in one table vs three tables | [
"",
"mysql",
"sql",
"database",
"database-design",
"null",
""
] |
I have the following query:
```
SELECT
SUM("balance_transactions"."fee") AS sum_id
FROM "balance_transactions"
JOIN charges ON balance_transactions.source = charges.balance_id
WHERE "balance_transactions"."account_id" = 6
AND (balance_transactions.type = 'charge'
AND charges.refunded = false
AND charges.invoice IS NOT NULL)
AND ("balance_transactions"."created" BETWEEN '2013-12-20' AND '2014-01-19');
```
What that does is adds up all the "fees" that occurred between those two dates. Great. Works fine.
The problem is that I almost always need those fees for hundreds of date ranges at a time, which amounts to me running that same query hundreds of times. Not efficient.
But is there some way to condense this into a single query for all the date ranges?
For instance, I'd be calling `SUM` for a series of ranges like this:
```
2013-12-20 to 2014-01-19
2013-12-21 to 2014-01-20
2013-12-22 to 2014-01-21
2013-12-23 to 2014-01-22
2013-12-24 to 2014-01-23
...so on and so on
```
I need to output the sum of fees collected in each date range (and ultimately need that in an array).
So, any ideas on a way to handle that and reduce database transactions?
FWIW, this is on Postgres inside a Rails app. | Assuming I understand your request correctly I think what you need is something along these lines:
```
SELECT "periods"."start_date",
"periods"."end_date",
SUM(CASE WHEN "balance_transactions"."created" BETWEEN "periods"."start_date" AND "periods"."end_date" THEN "balance_transactions"."fee" ELSE 0.00 END) AS period_sum
FROM "balance_transactions"
JOIN charges ON balance_transactions.source = charges.balance_id
JOIN ( SELECT '2013-12-20'::date as start_date, '2014-01-19'::date as end_date UNION ALL
SELECT '2013-12-21'::date as start_date, '2014-01-20'::date as end_date UNION ALL
SELECT '2013-12-22'::date as start_date, '2014-01-21'::date as end_date UNION ALL
SELECT '2013-12-23'::date as start_date, '2014-01-22'::date as end_date UNION ALL
SELECT '2013-12-24'::date as start_date, '2014-01-23'::date as end_date
) as periods
ON "balance_transactions"."created" BETWEEN "periods"."start_date" AND "periods"."end_date"
WHERE "balance_transactions"."account_id" = 6
AND "balance_transactions"."type" = 'charge'
AND "charges"."refunded" = false
AND "charges"."invoice" IS NOT NULL
GROUP BY "periods"."start_date", "periods"."end_date"
```
This should return you all the periods you're interested in in one single resultset.
Since the query is 'generated' on the fly in your front-end you can add as many rows to the periods part as you want.
Edit: with some trial and error I managed to get it working in SQL Fiddle and updated the syntax above accordingly. | Here's an untested procedure you can use.
```
CREATE OR REPLACE PROCEDURE sum_fees(v_start IN Date, v_end in Date) IS
BEGIN
SELECT
SUM("balance_transactions"."fee") AS sum_id
FROM "balance_transactions"
JOIN charges ON balance_transactions.source = charges.balance_id
WHERE "balance_transactions"."account_id" = 6
AND (balance_transactions.type = 'charge'
AND charges.refunded = false
AND charges.invoice IS NOT NULL)
AND ("balance_transactions"."created" BETWEEN v_start AND v_end);
END;
```
Then call the procedure with your date range. | Sum for multiple date ranges in a single query? | [
"",
"sql",
"postgresql",
"sum",
"query-optimization",
""
] |
So I'm trying to convert strings in an SQL database into datetime values.
I have some dates in a table like this:
```
23/12/2013 16:34:32
24/12/2013 07:53:44
24/12/2013 09:59:57
24/12/2013 12:57:14
24/12/2013 12:48:49
24/12/2013 13:04:17
24/12/2013 13:15:47
24/12/2013 13:21:02
24/12/2013 14:01:28
24/12/2013 14:02:22
24/12/2013 14:02:51
```
They are stored as strings unfortunately
And I want to convert them to datetime
```
SELECT CONVERT(datetime, analysed, 103 )
FROM OIL_SAMPLE_UPLOAD
```
However I get this message when I run the query
> The conversion of a varchar data type to a datetime data type resulted
> in an out-of-range value.
Presumably because some values are badly formed (although I have yet to spot any of these).
It's ok if some values don't convert, I just need a way of handling this situation.
Something like ISNULL(CONVERT(datetime, analysed, 103 )) would be good, except that the convert function does not return NULL when it fails. | For SQL Server you can use the [**ISDATE()**](http://technet.microsoft.com/en-us/library/ms187347.aspx) function to check whether a value is a valid date
```
SELECT CASE WHEN ISDATE(analysed)=1 THEN CONVERT(datetime, analysed, 103 )
ELSE '' END
FROM OIL_SAMPLE_UPLOAD
``` | You can use [TRY\_CONVERT](http://msdn.microsoft.com/en-us/library/hh230993.aspx) or [TRY\_CAST](http://msdn.microsoft.com/en-us/library/hh974669.aspx) functions | How to handle date conversion error in SQL? | [
"",
"sql",
"string",
"t-sql",
"error-handling",
"type-conversion",
""
] |
I have a table with 3 columns VALUE1, step, date with values as given below. Now I want a view with a 4th column as in the picture below. For example, the maximum value of date for value1 '1' is 13.3.2014 and its corresponding step is C. So, the value of max(step-date) for '1' should be 'C', and so on. I want to do this without performing a join on the table itself. Hope I am clear in my requirement. Thanks in advance.
 | You want to use analytical functions:
```
first_value(step) over (partition by value1 order by date_ desc)
```
`first_value (step)` tells Oracle that you want to get the *first value* of a list of steps. The elements and their order are specified in the parentheses after the `over` clause.
The "lists" are created with the `partition by value1`. Since there are two different value1's, two lists are created. The list belonging to *value1* constists of the elements A, B and C, the list belonging to *value2* of the elements A and B. These lists are ordered with the `order by date_ desc` clause.
Then, Oracle can "return" the first element of these lists.
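If it helps to see the mechanics outside the database, here is a small Python sketch (with made-up rows) of the same partition-then-order-then-pick-first idea:

```python
from itertools import groupby

# rows: (value1, step, date) -- made-up sample data
rows = [
    ('1', 'A', '2014-03-11'),
    ('1', 'B', '2014-03-12'),
    ('1', 'C', '2014-03-13'),
    ('2', 'A', '2014-03-01'),
    ('2', 'B', '2014-03-02'),
]

def first_value_per_partition(rows):
    # partition by value1, then within each partition pick the step
    # of the row with the latest date (i.e. first row when ordered desc)
    out = {}
    keyed = sorted(rows, key=lambda r: r[0])
    for value1, group in groupby(keyed, key=lambda r: r[0]):
        latest = max(group, key=lambda r: r[2])
        out[value1] = latest[1]
    return out

print(first_value_per_partition(rows))  # {'1': 'C', '2': 'B'}
```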
See also [this SQL fiddle](http://sqlfiddle.com/#!4/80345/1) | You want the last value based on date. You can do that with Oracle using an analytic function:
```
select Value1, step, date,
max(step) keep (dense_rank last order by date) over (partition by value1) as maxval
from table t;
```
The important part here is the `keep` part and the part after that. The `keep (dense_rank last order by date)` says to get the last value by date. The `over (partition by value1)` says to do that within groups where `value1` has the same value. | Populate the maximum value without a join oracle | [
"",
"sql",
"oracle",
""
] |
I need to get the max lesson\_score from the following table, along with the respective date for a particular user:
```
--------------------------------
|uid |lesson_score |date |
--------------------------------
|1 |2 |1391023460 |
|1 |8 |1391023518 |
|1 |4 |1391023596 |
--------------------------------
```
I need a result of:
```
---------------------------
|lesson_score |date |
---------------------------
|8 |1391023596 |
---------------------------
```
My SQL looks like this:
```
SELECT date, MAX(lesson_score) AS lesson_score
FROM cdu_user_session_progress
WHERE uid = 1
GROUP BY date";
```
But it just gives me three rows:
```
---------------------------
|lesson_score |date |
---------------------------
|2 |1391023460 |
|4 |1391023596 |
|8 |1391023518 |
---------------------------
```
What am I doing wrong? Thanks! | Try using
```
SELECT lesson_score, date FROM cdu_user_session_progress ORDER BY lesson_score DESC LIMIT 1;
```
The `ORDER BY` part is responsible for fetching the maximum lesson\_score first.
After the order by, you get the following result:
```
---------------------------
|lesson_score |date |
---------------------------
|8 |1391023518 |
|4 |1391023596 |
|2 |1391023460 |
---------------------------
```
Now the `LIMIT` part tells the database to return only the first row; all other result rows are ignored, and the result is this:
```
---------------------------
|lesson_score |date |
---------------------------
|8 |1391023518 |
---------------------------
```
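The whole pattern is easy to verify yourself; here is a runnable sketch using SQLite from Python (the query syntax shown is portable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cdu_user_session_progress (uid INT, lesson_score INT, date INT)")
con.executemany(
    "INSERT INTO cdu_user_session_progress VALUES (1, ?, ?)",
    [(2, 1391023460), (8, 1391023518), (4, 1391023596)],
)

# Highest score first, then keep only that row
row = con.execute("""
    SELECT lesson_score, date
    FROM cdu_user_session_progress
    WHERE uid = 1
    ORDER BY lesson_score DESC
    LIMIT 1
""").fetchone()
print(row)  # (8, 1391023518)
```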
In order to get the minimum score, you just write `ASC` instead of `DESC` (or remove it, because `ASC` is the default value) | ```
SELECT date, MAX(lesson_score) AS lesson_score
FROM cdu_user_session_progress
WHERE uid = 1
GROUP BY date";
```
MAX is an aggregate function; it will return the maximum of lesson\_score if there are multiple values for lesson\_score within a group.
In your query each group has only one value, so it returns that. Remember that with GROUP BY, SELECT MAX(...) does not work over the whole table's records; it works on each group separately.
You can get your result using order by like this
```
SELECT top 1 date, lesson_score AS lesson_score
FROM cdu_user_session_progress
WHERE uid = 1
ORDER BY lesson_score DESC;
``` | SQL: get maximum value and it's corresponding field(s) | [
"",
"sql",
""
] |
```
user_list
+---------------------+
+ user_id + name +
+---------------------+
+ 001 + Anna +
+---------------------+
+ 004 + David +
+----------------------
...
job_list
+---------------------+
+ user_id + job +
+---------------------+
+ 001 + Norse +
+---------------------+
+ 002 + Doctor +
+---------------------+
+ 003 + Sales +
+----------------------
+ 004 + Driver +
+----------------------
...
```
I want to find the records that exist in `job_list` but not in `user_list`.
For example, some users may have been deleted for some reason,
but their records still stay in the database, so I want to find them and remove them.
In this case, the returned result should be 002 and 003.
```
SELECT * FROM `user_list` ul, `job_list` jl WHERE jl.user_id NOT LIKE ul.user_id;
```
I tried the SQL command above but not work, please help | ```
SELECT * FROM job_list AS jl
LEFT JOIN user_list AS ul ON ul.user_id = jl.user_id
WHERE ul.user_id IS NULL
```
should do the job | Try this
```
SELECT *
FROM `job_list` jl
WHERE NOT EXISTS (SELECT 1 FROM user_list ul WHERE ul.user_id=jl.user_Id)
``` | Find two table where one table has redundant value | [
"",
"mysql",
"sql",
""
] |
Hello all, I read somewhere in a book or blog that there is a rule of thumb in SQL that we must create an ID column in each and every table with the PRIMARY KEY and AUTO-INCREMENT attributes. What if I am using a mobile number as the user id in my table? Should I create an id column in my table, or will the mobile number that is the primary key in my table be enough? | No it is not necessary, but for anything short of an association table it is recommended.
This identity column provides a unique and unchanging identifier for your data, and it makes setting up foreign key relations quite easy.
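As a quick illustration of the idea (hypothetical table and column names; SQLite used for convenience, where `INTEGER PRIMARY KEY` auto-assigns ids):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, phone TEXT, name TEXT);
CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                        customer_id INTEGER REFERENCES customers(id),
                        total REAL);
""")
cur = con.execute("INSERT INTO customers (phone, name) VALUES ('555-0100', 'Anna')")
anna_id = cur.lastrowid  # the generated identity value
con.execute("INSERT INTO orders (customer_id, total) VALUES (?, 9.99)", (anna_id,))

# Even if Anna's phone number changes later, the order still points at her id
con.execute("UPDATE customers SET phone = '555-0199' WHERE id = ?", (anna_id,))
row = con.execute("""
    SELECT c.name, o.total FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchone()
print(row)  # ('Anna', 9.99)
```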
An association table would not have one of these identity columns because it has no data itself; it generally consists of 2 or more foreign key columns. | If you want to use phone number as primary key you can
```
create table MyData(
phonenumber int not null primary key,
fname varchar(25),
lname varchar(25)
...
)
```
But this may not be the best practice, because a phone number might change: a number that once belonged to customer A may now belong to customer B. With this model, you may eventually run into a problem where you can't pull your unique customer based on the phone number alone. If you add a self-incrementing index as the primary key, you can always refer to a unique customer by that ID. **This is safer, cleaner, more intuitive and easy to code against**. It especially helps when you add foreign keys, etc. By selecting the wrong primary key in the beginning, you may put yourself in big trouble a year or two down the road.
What you can also do is add a self-incrementing index as the primary key and put a UNIQUE constraint on the phone number, so the phone number must be unique in order to be entered into the DB. This is mainly done for data integrity: for example, if someone mistypes something and it now matches another number, it will be caught right away. I would still not recommend this, because you again might run into a problem where this constraint catches a valid case.
So I would use a self-incrementing index as the primary key to make things simple, and add a phone number field which cannot be null (this would be another constraint :)) | Is it necessary to create id column in SQL table? | [
"",
"mysql",
"sql",
""
] |
at first here is the alpha version of what I want: <http://sqlfiddle.com/#!2/45c89/2>
However, I don't want to count all representative\_id values, but only the rows with the lowest id, e.g.:
```
(`id`, `economy_id`, `representative_id`)
(1, 1, 5), <-this one, lowest id through the same economy_id=1
(2, 1, 6),
(3, 1, 7),
(4, 1, 8),
(5, 1, 3),
(6, 1, 4),
(7, 1, 1),
(8, 1, 2),
(9, 1, 102),
(10, 2, 7), <-this one, lowest id through the same economy_id=2
(11, 2, 8),
(12, 2, 102),
(13, 2, 1),
(14, 2, 2),
(15, 2, 3),
(16, 2, 4),
(17, 3, 3), <-this one, lowest id through the same economy_id=3
(18, 3, 4),
(19, 3, 1),
(20, 3, 2),
(21, 3, 102),
(22, 4, 1), <-this one, lowest id through the same economy_id=4
(23, 4, 2),
(24, 4, 102),
(25, 5, 1), <-this one, lowest id through the same economy_id=5
(26, 5, 2),
(27, 5, 102),
(28, 5, 7),
(29, 6, 1), <-this one, lowest id through the same economy_id=6
```
The output should be:
representative\_id, count()
According to the above example:
```
5, 1
7, 1
3, 1
1, 3
```
Is it possible only in SQL? | If I'm understanding your question correctly, I think this should work using `min` in a `subquery` and joining back to itself:
```
select s.representative_id, count(*)
from stl_parliament s
join
(
select min(id) minid
from stl_parliament
group by economy_id
) t on s.id = t.minid
group by s.representative_id
```
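To sanity-check the query against a subset of the sample data, here is a runnable sketch using SQLite from Python (the SQL is the same shape as above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stl_parliament (id INT, economy_id INT, representative_id INT)")
con.executemany("INSERT INTO stl_parliament VALUES (?,?,?)", [
    (1, 1, 5), (2, 1, 6), (3, 1, 7),        # lowest id for economy 1 -> rep 5
    (10, 2, 7), (11, 2, 8),                 # economy 2 -> rep 7
    (17, 3, 3), (18, 3, 4),                 # economy 3 -> rep 3
    (22, 4, 1), (25, 5, 1), (29, 6, 1),     # economies 4, 5, 6 -> rep 1
])
rows = con.execute("""
    SELECT s.representative_id, COUNT(*)
    FROM stl_parliament s
    JOIN (SELECT MIN(id) AS minid FROM stl_parliament GROUP BY economy_id) t
      ON s.id = t.minid
    GROUP BY s.representative_id
""").fetchall()
print(sorted(rows))  # [(1, 3), (3, 1), (5, 1), (7, 1)]
```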
* [Updated Fiddle Demo](http://sqlfiddle.com/#!2/45c89/30) | ```
SELECT x.representative_id
, COUNT(*) total
FROM stl_parliament x
JOIN
( SELECT economy_id
, MIN(id) min_id
FROM stl_parliament
GROUP
BY economy_id
) y
ON y.economy_id = x.economy_id
AND y.min_id = x.id
GROUP
BY representative_id;
```
<http://sqlfiddle.com/#!2/45c89/34> | GROUP BY with only first row of sequence of one column? | [
"",
"mysql",
"sql",
"group-by",
""
] |
I'm taking my very first programming (SQL) class. I've got no tech background whatsoever and I'm having a little trouble getting the code down.
here is the what the database looks like.
### BOOK
```
BOOK_CODE (UNIQUE)
BOOKTITLE
PUBLISHER_CODE
BOOKTYPE
PRICE
```
### INVENTORY
```
BOOK_CODE (UNIQUE)
BRANCH_NUM (UNIQUE)
ON_HAND
```
The question is, list the books (titles) that are available in branches 3 and 4 (on both at the same time).
I'm thinking I need to use the following columns: booktitle on the book table, bookcode on both tables (book and inventory), and branch\_num on the inventory table.
Also, the answer can only show the book titles available in branches 3 and 4 (at the same time) and no other columns.
Sorry if I'm making no sense. Like I said, I'm a n00b. | ```
SELECT BOOKTITLE FROM BOOK
WHERE BOOK_CODE IN (SELECT BOOK_CODE FROM INVENTORY WHERE BRANCH_NUM = 3 AND ON_HAND > 0)
AND BOOK_CODE IN (SELECT BOOK_CODE FROM INVENTORY WHERE BRANCH_NUM = 4 AND ON_HAND > 0);
```
OR
```
SELECT BOOKTITLE FROM
(
SELECT BOOK.BOOK_CODE, BOOKTITLE, COUNT(*) AS BRANCH_COUNT FROM BOOK
INNER JOIN INVENTORY ON BOOK.BOOK_CODE = INVENTORY.BOOK_CODE
AND INVENTORY.BRANCH_NUM IN (3, 4)
GROUP BY BOOK.BOOK_CODE, BOOKTITLE
) B
WHERE B.BRANCH_COUNT = 2;
``` | ```
select distinct BOOKTITLE from BOOK a, INVENTORY b
where a.BOOK_CODE = b.BOOK_CODE and a.BOOK_CODE in
(select distinct p.BOOK_CODE from Inventory p, Inventory q where p.BOOK_CODE =
q.BOOK_CODE
and p.BRANCH_NUM = 3 AND q.BRANCH_NUM = 4);
``` | Looking for help on joining query | [
"",
"mysql",
"sql",
"t-sql",
"inner-join",
""
] |
I have a type of data called a `chain`. Each `chain` is made up of a specific sequence of another type of data called a `step`. So a `chain` is ultimately made up of multiple `steps` in a specific order. I'm trying to figure out the best way to set this up in MySQL that will allow me to do the following:
1. Look up all steps in a chain, and get them in the right order
2. Look up all chains that contain a step
I'm currently considering the following table set up as the appropriate solution:
```
TABLE chains
id date_created
TABLE steps
id description
TABLE chains_steps (this would be used for joins)
chain_id step_id step_position
```
In the table `chains_steps`, the `step_position` column would be used to order the steps in a chain correctly. It seems unusual for a JOIN table to contain its own distinct piece of data, such as `step_position` in this case. But maybe it's not unusual at all and I'm just inexperienced/paranoid.
I don't have much experience in all this, so I wanted to get some feedback. Are the three tables I suggested the correct way to do this? Are there any viable alternatives, and if so, what are the advantages/drawbacks? | You're doing it right.
Consider a database containing the `Employees` and `Projects` tables, and how you'd want to link them in a many-to-many fashion. You'd probably come up with an `Assignments` table (or `Project_Employees` in some naming conventions).
At some point you'd decide you want not only to store each project assignment, but you'd also want to store when the assignment started, and when it finished. The natural place to put that is in the assignment itself; it doesn't make sense to store it either with the project or with the employee.
In further designs you might even find it necessary to store further information about the assignment, for example in an employee review process you may wish to store feedback related to their performance in that project, so you'd make the assignment the "one" end of a relationship with a `Review` table, which would relate back to `Assignments` with a FK on `assignment_id`.
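A minimal sketch of such a schema (names are illustrative; SQLite via Python is used only so the example is runnable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employees   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE projects    (id INTEGER PRIMARY KEY, title TEXT);
-- the junction table carries its own data: start/end dates of the assignment
CREATE TABLE assignments (id INTEGER PRIMARY KEY,
                          employee_id INTEGER REFERENCES employees(id),
                          project_id  INTEGER REFERENCES projects(id),
                          started     TEXT,
                          finished    TEXT);
INSERT INTO employees VALUES (1, 'Dana');
INSERT INTO projects  VALUES (1, 'Migration');
INSERT INTO assignments VALUES (1, 1, 1, '2014-01-01', NULL);
""")
row = con.execute("""
    SELECT e.name, p.title, a.started
    FROM assignments a
    JOIN employees e ON e.id = a.employee_id
    JOIN projects  p ON p.id = a.project_id
""").fetchone()
print(row)  # ('Dana', 'Migration', '2014-01-01')
```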
So in short, it's perfectly normal to have a junction table that has its own data. | That looks fine, and it's not unusual for the join table to contain a position/rank field.
Look up all steps in a chain, and get them in the right order
```
SELECT * FROM chains_steps
LEFT JOIN steps ON steps.id = chains_steps.step_id
WHERE chains_steps.chain_id = ?
ORDER BY chains_steps.step_position ASC
```
Look up all chains that contain a step
```
SELECT DISTINCT chain_id FROM chains_steps
LEFT JOIN chains ON chains.id = chains_steps.chain_id
``` | How to set up relational database tables for this many-to-many relationship? | [
"",
"mysql",
"sql",
"join",
"many-to-many",
""
] |
------------------------here is my SQL procedure to update the table------------------------
```
create procedure sp_stepUpdate
@ID int,
@StepOrder int
AS
BEGIN
IF OBJECT_ID('tempdb.dbo.#UpdateBatch','u') IS NOT NULL
begin
DROP TABLE #UpdateBatch
end
IF OBJECT_ID('tempdb.dbo.#UpdateBatch2','u') IS NOT NULL
begin
DROP TABLE #UpdateBatch2
end
create table #UpdateBatch2
(
ID int,
StepOrder int
)
insert into #UpdateBatch2 values (@ID,@StepOrder)
Select *,ROW_NUMBER() OVER(ORDER BY ID) as newIID into #UpdateBatch
from #UpdateBatch2
set identity_insert [ODM_BatchSteps] ON
Update [ODM_BatchSteps]
set [StepOrder] = newIID
From #UpdateBatch
where [ODM_BatchSteps].ID = #UpdateBatch.ID
set identity_insert [ODM_BatchSteps] off
END
go
```
---------------and here is my code in the program to get the new order from the list------
```
connection.Open()
For Each item As ListViewItem In ListView1.Items
Dim command As SqlCommand = New SqlCommand("sp_stepUpdate", connection)
command.CommandType = CommandType.StoredProcedure
command.Parameters.AddWithValue("@ID", item.SubItems(0).Text)
command.Parameters.AddWithValue("@StepOrder", item.SubItems(1).Text)
command.ExecuteNonQuery()
Next
```
I get a Violation of UNIQUE KEY constraint when trying to update the table with the new order from my listview.
-----------here is the table whose order I am trying to update-----
```
create table [dbo].[ODM_BatchSteps]
(
[ID] uniqueidentifier primary key not null default newid(),
[StepOrder]int ,
[StepType]int,
[StepGrid]nvarchar(max),
[BatchGrid]int,
foreign key (BatchGrid) REFERENCES ODM_Batches(ID)
)
``` | OK, thanks for the help guys, much appreciated... I took the easy way out and added a count to track which row number the new order is, and replaced the old StepOrder in the DB from a new table
-------------------- sql update---------------------
```
Update [ODM_BatchSteps]
set ODM_BatchSteps.StepOrder = UpdateBatch2.StepOrder
From UpdateBatch2
where [ODM_BatchSteps].ID = UpdateBatch2.ID
```
---------------- code in program ---------------------
```
Dim count As Integer
For Each item As ListViewItem In ListView1.Items
count = count + 1
Dim command As SqlCommand = New SqlCommand("sp_stepUpdate", connection)
command.CommandType = CommandType.StoredProcedure
command.Parameters.AddWithValue("@ID", item.SubItems(0).Text)
command.Parameters.AddWithValue("@StepOrder", count)
command.Parameters.AddWithValue("@StepType", item.SubItems(2).Text)
command.Parameters.AddWithValue("@StepGrid", item.SubItems(3).Text)
command.Parameters.AddWithValue("@BatchGrid", item.SubItems(4).Text)
command.ExecuteNonQuery()
Next
connection.Close()
``` | I suppose that your field `BatchGrid` identifies a group of records to be kept in a particular order.
If this is the case, and there are no foreign keys that refer to your `ODM_BatchSteps` fields, a crude but effective way to correctly rewrite this block of records is to remove every entry that refers to the same `BatchGrid` and then reinsert everything from your ListView items:
```
Dim tran as SqlTransaction
Try
connection.Open()
tran = connection.BeginTransaction()
Dim command As SqlCommand = new SqlCommand("DELETE FROM ODM_BatchSteps WHERE BatchGrid = @grd", connection, tran)
command.Parameters.AddWithValue("@grd", currentGrid)
command.ExecuteNonQuery()
For Each item As ListViewItem In ListView1.Items
' Now we INSERT every item in the grid passing the parameters
' required to rebuild the block of records for the same BatchGrid
command = New SqlCommand("usp_stepInsert", connection, tran)
command.CommandType = CommandType.StoredProcedure
command.Parameters.AddWithValue("@ID", item.SubItems(0).Text)
command.Parameters.AddWithValue("@StepOrder", item.SubItems(1).Text)
command.Parameters.AddWithValue("add the other parameters to rebuild the record")
command.ExecuteNonQuery()
Next
tran.Commit()
Catch Ex as Exception
' Log the exception, message to user ???
tran.RollBack
End Try
```
Of course your sp\_stepUpdate should be renamed and rewritten (`usp_stepInsert`?) to accept all the parameters required to INSERT a new record in the correct step order
If this is a viable approach, then you could try to boost performance using a [Table Valued Parameter](http://msdn.microsoft.com/en-us/library/bb675163%28v=vs.110%29.aspx) instead of making a separate call to the database for every item | Violation of UNIQUE KEY constraint Cannot insert duplicate key in object | [
"",
"sql",
"sql-server",
"vb.net",
""
] |
I have read over and over again that SQL, at its heart, is an unordered model. That means executing the same SQL query multiple times can return the result set in a different order, unless there's an "order by" clause included. Can someone explain why a SQL query can return its result set in a different order across different runs? It may not always be the case, but it's certainly possible.
Algorithmically speaking, does the query plan not play any role in determining the order of the result set when there is no "order by" clause? I mean, when there is a query plan for some query, how does the algorithm not always return data in the same order?
Note: I am not questioning the use of ORDER BY; I am asking why there is no guarantee — that is, I am trying to understand the challenges due to which there cannot be any guarantee. | Some SQL Server examples where the **exact same execution plan** can return differently ordered results are:
1. An unordered index scan might be carried out in either [allocation order](http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/11/09/when-can-allocation-order-scans-be-used.aspx?Redirected=true) or key order dependent on the isolation level in effect.
2. The [merry go round scanning](http://technet.microsoft.com/en-us/library/ms191475%28v=sql.105%29.aspx) feature allows scans to be shared between concurrent queries.
3. Parallel plans are often non-deterministic, and the order of results might depend on the degree of parallelism selected at runtime and the concurrent workload on the server.
4. If the plan has nested loops with unordered prefetch, this [allows the inner side of the join to proceed using data from whichever I/Os happened to complete first](http://sqlblog.com/blogs/paul_white/archive/2013/08/31/sql-server-internals-nested-loops-prefetching.aspx).
```
CREATE TABLE dbo.floob
(
blat INT PRIMARY KEY,
x VARCHAR(32)
);
INSERT dbo.floob VALUES(1,'zzz'),(2,'aaa'),(3,'mmm');
```
This will order by the clustered PK:
```
SELECT x FROM dbo.floob;
```
Results:
```
x
----
zzz
aaa
mmm
```
Now, let's add an index that happens to cover the query above.
```
CREATE INDEX x ON dbo.floob(x);
```
The index causes a recompile of the above query when we run it again; now it orders by the new index, because that index provides a more efficient way for SQL Server to return the results to satisfy the query:
```
SELECT x FROM dbo.floob;
```
Results:
```
x
----
aaa
mmm
zzz
```
Take a look at the plans - neither has a sort operator, they are just - without any other ordering input - relying on the inherent order of the index, and they are scanning the whole index because they have to (and the cheapest way for SQL Server to scan the index is in order). (Of course even in these simple cases, some of the factors in Martin's answer could influence a different order; but this holds true in the absence of any of those factors.)
As others have stated, the ***ONLY WAY TO RELY ON ORDER*** is to ***SPECIFY AN ORDER BY***. Please write that down somewhere. It doesn't matter how many scenarios exist where this belief can break; the fact that there is even one makes it futile to try to find some guidelines for when you can be lazy and not use an ORDER BY clause. Just use it, always, or be prepared for the data to not always come back in the same order.
Some related thoughts on this:
* [Bad habits to kick : relying on undocumented behavior](https://sqlblog.org/2010/02/08/bad-habits-to-kick-relying-on-undocumented-behavior)
* [Why people think some SQL Server 2000 behaviors live on… 12 years later](https://sqlblog.org/2012/02/28/why-people-think-some-sql-server-2000-behaviors-live-on-12-years-later) | Unordered results in SQL | [
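To make the conclusion concrete, here is a small Python/sqlite3 sketch (SQLite standing in for SQL Server, reusing the toy `floob` data); only the explicit ORDER BY makes the output order part of the query's contract:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE floob (blat INTEGER PRIMARY KEY, x TEXT)")
conn.executemany("INSERT INTO floob VALUES (?, ?)",
                 [(1, "zzz"), (2, "aaa"), (3, "mmm")])

# Without ORDER BY the rows may come back in whatever order is cheapest
# for the engine; with ORDER BY the order is guaranteed.
ordered = [r[0] for r in conn.execute("SELECT x FROM floob ORDER BY x")]
print(ordered)  # ['aaa', 'mmm', 'zzz']
```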
"",
"sql",
"sql-server",
"oracle",
"t-sql",
""
] |
I have a query like so that ONLY shows data for the specific day. But how can I modify this to show data for the previous 7 days? So when I run the query this week it shows data for the date range of
01/24/2014 - 01/30/2014, and next week it shows data for 01/31/2014 - 02/06/2014.
```
SELECT * FROM SalesInformation
WHERE (datename(dw, getdate()) = 'Monday' AND CONVERT(VARCHAR(10), DateSold, 101) = DATEADD (Day,DATEDIFF(Day,0,GetDate()),0))
OR (datename(dw, getdate()) = 'Tuesday' AND CONVERT(VARCHAR(10), DateSold, 101) = DATEADD(Day,DATEDIFF(Day,0,GetDate()),0))
OR (datename(dw, getdate()) = 'Wednesday' AND CONVERT(VARCHAR(10), DateSold, 101) = DATEADD(Day,DATEDIFF(Day,0,GetDate()),0))
OR (datename(dw, getdate()) = 'Thursday' AND CONVERT(VARCHAR(10), DateSold, 101) = DATEADD(Day,DATEDIFF(Day,0,GetDate()),0))
OR (datename(dw, getdate()) = 'Friday' AND CONVERT(VARCHAR(10), DateSold, 101) = DATEADD(Day,DATEDIFF(Day,0,GetDate()),0))
``` | This will be seven days including time. So 01/23/2014 10:06 AM to 1/30/2014 10:06 AM. If you want the whole day of 01/23, cast to a DATE.
Here is a complete working example for 2012. I changed to getdate() just in case you are using an older version of the engine.
```
-- Just playing
use tempdb;
go
-- drop existing
if object_id ('sales') > 0
drop table sales
go
-- create new
create table sales
(
id int identity (1,1) primary key,
sold smalldatetime,
amt smallmoney
);
go
-- clear data
truncate table sales;
go
-- insert data
declare @dte date = '20131231';
declare @amt int;
while (@dte < '20140201')
begin
set @amt = floor(rand(checksum(newid())) * 50000);
set @dte = dateadd(d, 1, @dte);
insert into sales values (@dte, @amt);
end
go
-- Show 7 * 24 hrs worth of data
select *
from sales
where
sold >= dateadd(d, -7, getdate()) and
sold < getdate()
go
```

Check out Aaron's blog on dates. It goes over the good, bad, and ugly practices. For instance, do not use BETWEEN.
* [Bad habits to kick : mis-handling date / range queries](https://sqlblog.org/2009/10/16/bad-habits-to-kick-mis-handling-date-range-queries) | To get last seven day's data:
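The half-open `>= / <` range from the query above ports to any engine. A small Python/sqlite3 sketch (the fixed "now" and the toy `sales` rows are assumptions for illustration):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sold TEXT, amt INTEGER)")

now = datetime(2014, 1, 30, 10, 6)  # stand-in for getdate()
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [((now - timedelta(days=d)).isoformat(sep=" "), 100 + d)
                  for d in range(14)])

# Half-open window: sold >= now - 7 days AND sold < now
lo = (now - timedelta(days=7)).isoformat(sep=" ")
hi = now.isoformat(sep=" ")
recent = conn.execute("SELECT COUNT(*) FROM sales WHERE sold >= ? AND sold < ?",
                      (lo, hi)).fetchone()[0]
print(recent)  # 7 rows fall inside the seven-day window
```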
```
WHERE ... AND DATEDIFF(Day,dw,GetDate())<=7
```
Edit: To get current week's data:
```
AND dw BETWEEN DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0)
           AND DATEADD(week, DATEDIFF(day, 0, getdate())/7, 6)
```
Better performance:
```
DECLARE @dw1 datetime = DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0)
select ... WHERE dw >= @dw1
```
For multiple weeks:
```
DECLARE @dw1 datetime = DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0)
DECLARE @dw2 datetime = DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0)-7
select ... WHERE dw >= @dw1 -- current week
union all
select ... WHERE dw >= @dw2 AND dw < @dw1 -- last week
``` | How to have Query Pull Weekly Data | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a query that calculates an invoice from the `invoice` table. This invoice has taxes associated with it, located in the `tax_recv` table. The `tax_recv` table will have multiple rows that are tied to an invoice in the `invoice` table.
I have a query that calculates 12 months worth of invoices and orders them by their corresponding date. Here is the query:
```
SELECT
invoice_amount + late_fee + SUM(c.tax) AS amount, tollfree_json, date_generated
FROM
invoices as i
LEFT JOIN
csi_tax_recv as c
ON
c.invoice_number = i.id
WHERE
DATE_FORMAT(date_generated,'%Y-%m') < DATE_FORMAT(NOW(),'%Y-%m')
AND
DATE_FORMAT(date_generated,'%Y-%m') >= DATE_FORMAT(NOW() - INTERVAL 12 MONTH,'%Y-%m')
ORDER BY
date_generated
```
The only problem with this query is that it only returns one row, and I'm not sure exactly why. The minute I remove the left join and the SUM(c.tax) (which is what I think is causing the issue), the query works great.
The end result should look like this:
```
invoice_amount + total_taxes_for_invoices, tollfree_json, date_generated
```
Cheers. | As people said, you need to group by the fields you want to get the sum of the taxes and make calculations with that sum, something like this:
```
SELECT
i.tollfree_json,
i.date_generated,
(i.invoice_amount + i.late_fee + SUM(c.tax)) AS amount
FROM
invoices as i JOIN csi_tax_recv as c ON i.id = c.invoice_number
WHERE
DATE_FORMAT(date_generated,'%Y-%m') < DATE_FORMAT(NOW(),'%Y-%m')
AND
DATE_FORMAT(date_generated,'%Y-%m') >= DATE_FORMAT(NOW() - INTERVAL 12 MONTH,'%Y-%m')
GROUP BY
i.tollfree_json,
i.date_generated
ORDER BY
i.date_generated
```
With this query you will get the sum of the taxes aggregated by every `tollfree_json` and `date_generated` combination, and you can add the `invoice_amount` and `late_fee` to that sum, if this is what you were looking for.
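A quick way to sanity-check the join-then-aggregate shape is with toy data; this Python/sqlite3 sketch uses simplified stand-ins for the `invoices` and `csi_tax_recv` tables (column subset chosen for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, invoice_amount REAL, late_fee REAL)")
conn.execute("CREATE TABLE tax_recv (invoice_number INTEGER, tax REAL)")
conn.execute("INSERT INTO invoices VALUES (1, 100.0, 5.0), (2, 200.0, 0.0)")
conn.executemany("INSERT INTO tax_recv VALUES (?, ?)",
                 [(1, 7.0), (1, 3.0), (2, 10.0)])

# Group by the non-aggregated invoice columns, then add them to the tax sum.
totals = conn.execute("""
    SELECT i.id, i.invoice_amount + i.late_fee + SUM(t.tax) AS amount
    FROM invoices i JOIN tax_recv t ON t.invoice_number = i.id
    GROUP BY i.id, i.invoice_amount, i.late_fee
    ORDER BY i.id
""").fetchall()
print(totals)  # [(1, 115.0), (2, 210.0)]
```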
SELECT
invoice_amount + late_fee + SUM(c.tax) AS amount, tollfree_json, date_generated
FROM
invoices as i
LEFT JOIN
csi_tax_recv as c
ON
c.invoice_number = i.id
WHERE
DATE_FORMAT(date_generated,'%Y-%m') < DATE_FORMAT(NOW(),'%Y-%m')
AND
DATE_FORMAT(date_generated,'%Y-%m') >= DATE_FORMAT(NOW() - INTERVAL 12 MONTH,'%Y-%m')
GROUP BY date_generated
ORDER BY
date_generated
```
Looking over your query, I would normally say group on everything before the aggregate, but in this case I'm not sure it makes sense. So grouping by what you order by (`date_generated`) will group them on the associated date of the invoice.
"",
"mysql",
"sql",
""
] |
I have been experimenting a bit but can't find the right way to do this.
My query looks like this:
```
select name, value from table1
union select name, value from table2
```
Currently my query returns the below:
```
Name | Value
-------------
Name1 | null
Name1 | value1
Name1 | value2
Name2 | null
```
The null values in the above result come only from table1 and I would like to return null values only if the respective name has no non-null values, e.g.:
```
Name | Value
-------------
Name1 | value1
Name1 | value2
Name2 | null
```
Any suggestions on how to perform this query? | A simpler approach than the answers you have so far, IMO, would be to explicitly exclude null values from the results for those names where a non-null value exists. It can be written as
```
with unioncte as (
select name, value from table1
union
select name, value from table2
)
select name, value from unioncte
minus
select name, null from unioncte
where value is not null;
``` | You could also use analytic functions to compute the number of NULL values for the name and filter by that:
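The same pattern can be tried locally with SQLite, whose `EXCEPT` plays the role of Oracle's `MINUS`; a Python sketch with assumed toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (name TEXT, value TEXT)")
conn.execute("CREATE TABLE table2 (name TEXT, value TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [("Name1", None), ("Name1", "value1"), ("Name2", None)])
conn.execute("INSERT INTO table2 VALUES ('Name1', 'value2')")

# EXCEPT (like MINUS) removes the NULL row for names that have a real value.
rows = conn.execute("""
    WITH unioncte AS (
        SELECT name, value FROM table1
        UNION
        SELECT name, value FROM table2
    )
    SELECT name, value FROM unioncte
    EXCEPT
    SELECT name, NULL FROM unioncte WHERE value IS NOT NULL
    ORDER BY name, value
""").fetchall()
print(rows)  # [('Name1', 'value1'), ('Name1', 'value2'), ('Name2', None)]
```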
```
with v_data as (
select name, value from table1
union
select name, value from table2
)
select v2.* from (
select
v1.*,
count(value) over (partition by name) as value_cnt
from v_data v1
) v2 where value_cnt = 0 or value is not null
``` | Oracle: Union columns if non-null value exists | [
"",
"sql",
"oracle",
""
] |
I have looked at a few Stack Overflow forum posts, but nothing fits (or at least I don't think so) what I need help with.
I'm looking for general advice: my company has 'tasked' me to look at moving some data from tables stored in our parent company's databases into a database of our own that has all the information we need in one place.
For instance, if we want information that relates to one thing, we may have to pull data from several different databases. I think I can get my head around moving the data and can create a SQL query to do it; however, we're currently using SQL Express as our SQL db (the company is more than happy to buy/create a SQL server, but as far as we can see SQL Express does what we need it to (feel free to correct me)).
I need to look at scheduling the data move for every hour or few hours to keep the data 'up to date' for when reports are generated using the data.
I have looked at a few programs, but as the queries and the database are on a Server 2008 R2 system, some of the 'programs' don't like it, as they were last updated pre-2010 etc. I have also installed SQL Management Suite 2012 for SQL Server Agent, but I can't even get that working (the service is enabled and I have restarted the DB, but still nothing within the suite).
I'm not looking (however happy to take the help) for a 'do this and that and that' type reply, but I'm more than happy to accept that amount of help if you guys / gals can point me in the right direction.
Summary:
-Combining data already on databases from our parent company into a table / DB of our own making
-Currently using SQL Express but willing to upgrade to something else that does the job
-Schedule the data moves for every X hours (Windows scheduling?)
-Automating the entire thing so we don't have to do the moves manually.
Help on any of the points above would be greatly appreciated and I would 'love you long time' for the help.
JB | There are a bunch of limitations for SQL Express. One of them is that SQL Agent is not supported. SSIS like SQL Agent is not supported.
<http://msdn.microsoft.com/en-us/library/cc645993.aspx>
Do not fret, you can always schedule a job with Windows Scheduler.
<http://windows.microsoft.com/en-US/windows/schedule-task#1TC=windows-7>
As for moving the data, it is up to you to select a solution.
1 - Write a **PowerShell** application to perform the Extract, Transform, and Load (ETL).
<http://technet.microsoft.com/en-us/library/cc281945(v=sql.105).aspx>
2 - Use the **SQLCMD** to perform logic like calling stored procedures.
<http://technet.microsoft.com/en-us/library/ms162773.aspx>
3 - Use **BCP** to dump and load data.
<http://craftydba.com/?p=1245>
<http://craftydba.com/?p=1255>
<http://craftydba.com/?p=1584>
<http://craftydba.com/?p=1690>
It is funny how youngsters think they need to spend a lot of $ to create a solution for a business.
However, Microsoft does supply you with a lot of free tools.
You just have to put them together for a solution.
PS: I remember about 10 years ago I created a custom ETL solution using VBScript. Unlike PowerShell, it is installed on both old and new systems.
Good luck! | I think you should have a look at [Integartion Services](http://msdn.microsoft.com/en-us/library/cc645993%28v=sql.105%29.ASPX#SSIS), which is not available for Express Edition. Have a look at this article to get started with [SSIS](http://www.codeproject.com/Articles/441678/Getting-Started-With-SSIS-Script-transforms). | New to SQL servers, wanting to schedule table / data moves every X hours | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I have a database table with around 2400 'items'. A user will have any arbitrary combination of items from the 2400 item set. Each item's details then need to be looked up and displayed to the user. What is the most efficient way to get all of the item details? I can think of three methods:
* Select all 2400 items and parse the required details client-side.
* Select the specific items with a SELECT which could be a very long SQL string (0-2400 ids)?
* Select each item one at a time (too many connections)?
I'm not clued up on SQL efficiency so any guidance would help. It may help to know this is a web app heavily AJAX based.
Edit: On average a user will select ~150 items and very rarely more than 400-500. | The best method is to return the data you want from the database in a single query:
```
select i.*
from items i
where i.itemID in (<list of ids>);
```
MySQL queries can be quite large (see [here](http://dev.mysql.com/doc/refman/5.7/en/packet-too-large.html)), so I wouldn't worry about being able to pass in the values.
However, if your users have so many items, I would suggest storing them in the database first and then doing a `join` to get the additional information. | If the users never/rarely select more than ~50 elements, then I agree with Gordons answer.
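From application code, the usual way to build that `IN (<list of ids>)` safely is one bound placeholder per id rather than string concatenation; a Python/sqlite3 sketch with illustrative table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item{i}") for i in range(1, 11)])

selected_ids = [2, 5, 9]  # whatever subset the user picked
placeholders = ",".join("?" for _ in selected_ids)  # "?,?,?"
rows = conn.execute(
    f"SELECT id, name FROM items WHERE id IN ({placeholders}) ORDER BY id",
    selected_ids).fetchall()
print(rows)  # [(2, 'item2'), (5, 'item5'), (9, 'item9')]
```

Only the placeholder string is interpolated into the SQL; the values themselves are always bound parameters.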
If it is really plausible that they might select up to 2400 items, you'll probably be better off by inserting the selected ids into a holding table and then joining with that.
However, a more thorough answer can be found [here](http://explainextended.com/2009/08/18/passing-parameters-in-mysql-in-list-vs-temporary-table/) - which I found through [this](https://stackoverflow.com/a/1537685/806549) answer.
He concludes that:
> We see that for a large list of parameters, passing them in a temporary table is much faster that as a constant list, while for small lists performance is almost the same.
'Small' and 'large' are hardly static, but dependent upon your hardware - so you should test. My guess would be that with an average of 150 elements in your IN-list, you will see the temp table win.
(If you do test, please come back here and say what is the fastest in your setup.) | Most efficient way to select lots of values from database | [
"",
"mysql",
"sql",
""
] |