| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a posts table like this:
----- POSTS -----
- id
- ..etc..
- category1
- category2
- category3
Any of these categories could be filtered, and I was planning to build SQL statements like:
```
SELECT *
FROM posts
WHERE category1='..sth..'
AND category1='..sth..'
OR category1='..sth..'
OR category2='..sth..'
OR category3='..sth..'
AND category3='..sth..'
OR ...
```
Is this a bad practice, and will it have performance issues?
edit: The AND and OR statements in the query I gave were only an example. | If your database is well designed then no, this is not bad practice and is in fact the ideal way to select your data.
The best thing to do is to think about how your data is going to be used and design the storage to make it easy to access the data you will need most often.
There are lots of posts here about the specifics of database design.
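As a side note on combining conditions: AND binds more tightly than OR, so ungrouped mixtures of the two rarely mean what they look like. A quick sqlite3 sketch (table and values are made up for illustration) showing the difference brackets make:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id INTEGER, category1 TEXT, category2 TEXT)")
con.executemany("INSERT INTO posts VALUES (?, ?, ?)",
                [(1, "a", "x"), (2, "b", "y"), (3, "a", "y")])

# AND binds tighter than OR, so these two filters are not equivalent
ungrouped = con.execute(
    "SELECT id FROM posts WHERE category1 = 'a' AND category2 = 'x' "
    "OR category2 = 'y' ORDER BY id").fetchall()
grouped = con.execute(
    "SELECT id FROM posts WHERE category1 = 'a' AND (category2 = 'x' "
    "OR category2 = 'y') ORDER BY id").fetchall()

print(ungrouped)  # [(1,), (2,), (3,)] -- parsed as (c1='a' AND c2='x') OR c2='y'
print(grouped)    # [(1,), (3,)]
```

The ungrouped filter quietly matches every row with `category2 = 'y'` regardless of `category1`, which is exactly the kind of surprise brackets prevent.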
The SQL you suggest is nonsensical as written (use brackets), but I suspect you intended it only as an example. | Try this:
```
SELECT * FROM posts
WHERE category1 in ('sth','sth')
OR category2 in ('sth','sth')
OR category3 in ('sth','sth')
.....
``` | Is using AND & OR statements too much a bad practice for MySQL? | [
"mysql",
"sql",
"database"
] |
```
$query="SELECT a.pk_i_id,a.i_price,b.s_title,c.pk_i_id AS img_id,c.s_extension,d.s_city,d.s_city_area from zl_t_item a, zl_t_item_description b, zl_t_item_resource c, zl_t_item_location d where a.fk_i_category_id=$cat_id and a.pk_i_id=b.fk_i_item_id and a.pk_i_id=c.fk_i_item_id and a.pk_i_id=d.fk_i_item_id ORDER BY a.dt_pub_date DESC";
```
In the above query, I need to add DISTINCT before **c.pk\_i\_id AS img\_id**.
It shows an error when I tried it like below:
```
$query="SELECT a.pk_i_id,a.i_price,b.s_title,DISTINCT c.pk_i_id AS img_id,c.s_extension,d.s_city,d.s_city_area from zl_t_item a, zl_t_item_description b, zl_t_item_resource c, zl_t_item_location d where a.fk_i_category_id=$cat_id and a.pk_i_id=b.fk_i_item_id and a.pk_i_id=c.fk_i_item_id and a.pk_i_id=d.fk_i_item_id ORDER BY a.dt_pub_date DESC";
```
What is the problem with it? | This is invalid use of the DISTINCT keyword. You can only apply it to the whole set of selected columns, not to one specific column while skipping the others. | `DISTINCT` should be applied right after `SELECT`, to a column or set of columns; you cannot use `DISTINCT` between the columns:
```
SELECT DISTINCT c.pk_i_id AS img_id,
a.pk_i_id,a.i_price,b.s_title,c.s_extension,d.s_city,d.s_city_area
from zl_t_item a, zl_t_item_description b, zl_t_item_resource c,
zl_t_item_location d where a.fk_i_category_id=$cat_id
and a.pk_i_id=b.fk_i_item_id and a.pk_i_id=c.fk_i_item_id
and a.pk_i_id=d.fk_i_item_id ORDER BY a.dt_pub_date DESC
``` | DISTINCT with as clause | [
"mysql",
"sql",
"distinct"
] |
I would like to ask for help in understanding this piece of code... I am a beginner at MySQL, so I find it hard to read what this code does...
```
SELECT b.bcode
FROM (SELECT bcode FROM tbl_items
UNION
SELECT bcode FROM tbl_items_bulk
) b left outer join
tbl_reservation r
on b.bcode = r.bcode and
@USERTO <= r.ToDate and
@USERFROM >= r.FromDate
WHERE r.bcode is null;
```
Would someone help me understand what this code does?
Thanks in advance!
@Ashutosh Arya, I was trying to get all the items that haven't been reserved for a given date and timespan. By being 'reserved', I mean the equipment is present in tbl\_reservations. I'm just trying to get the opposite of getting all the equipment reserved for a certain date and timespan.
Logic:
1. get all the equipment bcode currently present in tbl\_items, tbl\_items\_bulk(UNION).
2. get all the equipment bcode currently present in tbl\_reservations and see if there are matches with the UNION in number 1.
3. if there are matches, filter the results further using the resDate(reservation date).
\*\*this results in NOT displaying all the bcodes that have the resDate that the user specified.
4. if there are still matches, filter the results further using the timeSTART and timeEND.
meaning to say,
-if an equipment bcode exists in the tbl\_reservations, DO NOT DISPLAY IT.
-if an equipment bcode exists in the tbl\_reservations, but doesn't have the resDate the user will specify, then SHOW it.
-if an equipment bcode exists in the tbl\_reservations, has the resDate the user will specify, but doesn't have the time covered by timeStart and timeEnd, then SHOW IT.
It seems like my code only checks for a date.
I was thinking of something that checks for date and time as well, like:
```
SELECT b.*
FROM (SELECT * FROM tbl_items
UNION
SELECT * FROM tbl_items_bulk
) b left outer join
tbl_test t
on b.bcode = t.bcode and
'2013-09-16' = t.resDate AND
'9:30' <= t.timeSTART AND
'7:00' >= t.timeEND
WHERE t.bcode is null;
```
AND ALSO, my system doesn't allow borrowing equipment for a long period of time. No one can borrow equipment for more than 7:00AM-9:00PM (15 hours).
HERE's the LATEST code (timeFROM and timeTO changed to timeSTART and timeEND, because my system doesn't allow more than 15 hours of reservation a day and does not allow reservations of more than 1 day):
```
SELECT b.*
FROM (SELECT * FROM tbl_items
UNION
SELECT * FROM tbl_items_bulk
) b left outer join
tbl_test t
on b.bcode = t.bcode and
NOT ('4:30' < t.timeSTART OR '7:00' > t.timeEND)
WHERE t.bcode is null;
``` | OK, I'll try to explain; I hope you understand it:
```
SELECT b.bcode                        -- select the bcode from tbl_items and tbl_items_bulk
FROM (SELECT bcode FROM tbl_items
      UNION
      SELECT bcode FROM tbl_items_bulk
     ) b LEFT OUTER JOIN              -- join to tbl_reservation where the bcode is equal and in
                                      -- a given timespan (USERTO and USERFROM are two variables
                                      -- passed in from outside)
     tbl_reservation r
     ON b.bcode = r.bcode AND
        @USERTO <= r.ToDate AND
        @USERFROM >= r.FromDate
WHERE r.bcode IS NULL;                -- and keep only the rows where no reservation matched
``` | Effectively, your query shows you all of the bcode values from the tables `tbl_items` and `tbl_items_bulk` that do not have a match in the bcode column of the `tbl_reservation` table.
This query could be done more efficiently like so:
```
SELECT t.bcode
FROM (
SELECT bcode
FROM tbl_items
UNION
SELECT bcode
FROM tbl_items_bulk
) AS t
WHERE NOT EXISTS (
SELECT *
FROM tbl_reservation AS r
WHERE r.bcode=t.bcode
AND r.toDate>='20130916 07:00:00' AND r.fromDate<='20130916 22:00:00'
)
``` | Query: I dont understand this piece of code | [
"mysql",
"sql",
"select"
] |
I have looked over numerous tutorials, manuals and documentations, but I still can not get this to work.
I am trying to create a stored procedure using phpMyAdmin.
I can't seem to find the errors here; the SQL errors are so vague...
```
CREATE PROCEDURE insertToonOneShot(IN locale CHAR(2), IN name VARCHAR(16), IN realm VARCHAR(24), IN faction CHAR(1), IN toon_level INT, IN class_name INT)
BEGIN
DECLARE @realmID INT;
DECLARE @classID INT;
DECLARE @toonID INT;
SET @realmID = SELECT id FROM realms WHERE realms.name = realm;
SET @classID = SELECT id FROM classes WHERE classes.name = class_name;
IF NOT @realmID IS NULL AND NOT @classID IS NULL AND @toonID IS NULL THEN
INSERT INTO
toon (`locale`, `name`, `realm_id`, `faction`, `level`, `class_id`)
VALUES
(locale, name, @realmID, faction, toon_level, @classID);
END IF;
END;
```
The error I am getting right now is:
> #1064 - You have an error in your SQL syntax; check the manual that
> corresponds to your MySQL server version for the right syntax to use
> near @realmID INT; DECLARE @classID INT; DECLARE @toonID INT; SET @rea
> at line 3
Probably one of the more frustrating things I have ever had to do...
I have seen many tutorials online that show using the @ symbol in variable declaration, and others not using it, I have even seen some that use VAR instead of DECLARE. What is the right syntax?... | This does the trick:
```
CREATE PROCEDURE insertToonOneShot(IN locale CHAR(2), IN name VARCHAR(16), IN realm VARCHAR(24), IN faction CHAR(1), IN toon_level INT, IN class_name VARCHAR(12))
BEGIN
SELECT @realmID := id FROM realms WHERE realms.name = realm;
SELECT @classID := id FROM classes WHERE classes.name = class_name;
SELECT @toonID := id FROM toon WHERE toon.name = name AND toon.realm_id = @realmID;
IF NOT @realmID IS NULL AND NOT @classID IS NULL AND @toonID IS NULL
THEN
INSERT INTO toon (`locale`, `name`, `class_id`, `realm_id`, `faction`, `level`)
VALUES (locale, name, @classID, @realmID, faction, toon_level);
END IF;
END;
//
```
Apparently the declare statements were not required... Who would have known?
Thanks to Gordon Linoff for pointing me in the right direction. | This question, and its answers, seem to be confusing session variables (which are prefixed with `@`) with procedural variables (which are not prefixed.) See the answers to [this question](https://stackoverflow.com/q/1009954/1255289) for more information.
The [accepted solution](https://stackoverflow.com/a/18834683/1255289) resolves the error by using session variables, but it could introduce problems related to variable scope. For example, if a variable called `realmID` has been defined outside the procedure, its value will be overwritten when the procedure is run.
The correct way to resolve the problem is to use only procedural variables. These are `DECLARE`d at the start of the procedure ***without the `@` prefix***.
```
DELIMITER $$
CREATE PROCEDURE insertToonOneShot(
IN locale CHAR(2),
IN name VARCHAR(16),
IN realm VARCHAR(24),
IN faction CHAR(1),
IN toon_level INT,
IN class_name INT
)
BEGIN
DECLARE realmID INT;
DECLARE classID INT;
SELECT id INTO realmID FROM realms WHERE realms.name = realm LIMIT 1;
SELECT id INTO classID FROM classes WHERE classes.name = class_name LIMIT 1;
IF realmID IS NOT NULL AND classID IS NOT NULL THEN
INSERT INTO toon (`locale`, `name`, `realm_id`, `faction`, `level`, `class_id`)
VALUES (locale, name, realmID, faction, toon_level, classID);
END IF;
END$$
DELIMITER ;
``` | MYSQL Stored Procedures: Variable Declaration and Conditional Statements | [
"mysql",
"sql",
"stored-procedures",
"variable-declaration"
] |
```
SELECT DISTINCT HouseHoldMembers.FamilyID,
Year(GetDate())-Year(DOB) as Age,
FirstName+' '+LastName 'HeadOfHousehold',
FamilyFolder.HouseNo
from HouseHoldMembers
left outer join FamilyFolder on FamilyFolder.FamilyID=HouseHoldMembers.FamilyID
WHERE HouseHoldMembers.FamilyID
in (select familyid from HouseHoldMembers where FirstName like 'laxmi')
and HouseHoldMembers.FamilyID like 'alv'+'%'
```
For this I am getting the following output:
```
FamilyID age HeadOfHousehold HouseNo
ALV01L11034A 25 ashoka 34
ALV01L11034A 74 arpana 34
ALV01L11034A 62 laxmi 34
ALV01L11059A 34 shama 63
ALV01L11059A 62 baby 63
ALV01L11059A 35 laxmi 63
```
But,
I want the query to retrieve the row with the maximum age for each familyid (select the familyid's row according to the maximum age of that particular family; the output I want is explained below). Please give the solution.
```
FamilyID age HeadOfHousehold HouseNo
ALV01L11034A 74 arpana 34
ALV01L11059A 62 baby 63
```
table structure is:
for householdmembers table->
```
memberid varchar(50) primary key,
firstname varchar(50) null,
lastname varchar(50) null,
relationwithhead varchar(50) null,
familyid varchar(50) null,
dob datetime null
```
for familyfolder table->
```
familyid varchar(50) primary key,
houseno varchar(50) null
```
Thanks in advance | If your DBMS supports OLAP functions, it's a simple ROW\_NUMBER:
```
SELECT *
FROM
(
SELECT
HouseHoldMembers.FamilyID
,YEAR(GetDate()) - YEAR(DOB) AS Age
,FirstName + ' ' + LastName 'HeadOfHousehold'
,FamilyFolder.HouseNo
,ROW_NUMBER() OVER(PARTITION BY FamilyId
                   ORDER BY YEAR(GetDate()) - YEAR(DOB) DESC) AS rn
FROM HouseHoldMembers LEFT OUTER JOIN FamilyFolder
ON FamilyFolder.FamilyID = HouseHoldMembers.FamilyID
WHERE
HouseHoldMembers.FamilyID IN
(
SELECT familyid
FROM HouseHoldMembers
WHERE FirstName LIKE 'laxmi'
)
AND HouseHoldMembers.FamilyID LIKE 'alv' + '%'
) AS dt
WHERE rn = 1
``` | I think you can use `max(age)` and `group by(FamilyID)` to get your answer.
Like this:
```
select FamilyID,max(Age)
from
(SELECT DISTINCT HouseHoldMembers.FamilyID,Year(GetDate())-Year(DOB) as Age,
FirstName+' '+LastName 'HeadOfHousehold',FamilyFolder.HouseNo
from HouseHoldMembers
left outer join FamilyFolder
on FamilyFolder.FamilyID=HouseHoldMembers.FamilyID
WHERE HouseHoldMembers.FamilyID in (select familyid from HouseHoldMembers where FirstName like 'laxmi')
and HouseHoldMembers.FamilyID like 'alv'+'%') t
group by FamilyID
```
BTW: I haven't tested it. | sql query is not showing proper output for my requirement | [
"sql"
] |
I have a scenario whereby I need to use multiple values in my where clause if a param passed to the query is 99999; otherwise I just use the param itself. I am using "IN" as part of the where clause, but I get the following error: "Incorrect syntax near the keyword 'CASE'".
Can anyone provide an alternative to the incorrect query that will get me the results I need?
```
DECLARE @IPD_NEWS_TYPE_PARAM decimal (18,0)
SET @IPD_NEWS_TYPE_PARAM = 99999
SELECT
n.id,
n.headline,
n.news_type_id
FROM news n
WHERE n.news_type_id IN
CASE WHEN @IPD_NEWS_TYPE_PARAM = 99999 THEN (1,2,3,4,5)
ELSE
(@IPD_NEWS_TYPE_PARAM)
END
``` | Perhaps you want:
```
DECLARE @IPD_NEWS_TYPE_PARAM decimal (18,0)
SET @IPD_NEWS_TYPE_PARAM = 99999
SELECT
n.id,
n.headline,
n.news_type_id
FROM news n
WHERE (n.news_type_id IN ('1','2','3','4','5') AND @IPD_NEWS_TYPE_PARAM = 99999)
OR (n.news_type_id = @IPD_NEWS_TYPE_PARAM AND @IPD_NEWS_TYPE_PARAM <> 99999)
``` | Use an IF statement:
```
DECLARE @IPD_NEWS_TYPE_PARAM decimal (18,0)
SET @IPD_NEWS_TYPE_PARAM = 99999
IF (@IPD_NEWS_TYPE_PARAM = 99999)
SELECT
n.id,
n.headline,
n.news_type_id
FROM news n
WHERE n.news_type_id IN (1,2,3,4,5)
ELSE
SELECT
n.id,
n.headline,
n.news_type_id
FROM news n
WHERE n.news_type_id = @IPD_NEWS_TYPE_PARAM
``` | Case statement with "where in" clause | [
"sql",
"sql-server",
"t-sql",
"case",
"where-clause"
] |
I want to dynamically declare the range of my partition function.
I don't want to hard-code the range values, like below:
> CREATE PARTITION FUNCTION PartFun(smallint)AS RANGE LEFT FOR VALUES (1,2,3,4,5)
The problem is: the column (ID in the IDMASTER table) on which I want to create the partition is of smallint datatype.
```
Declare @IDS NVARCHAR(100)
SELECT @IDS = coalesce(@IDS + ', ', '') + CAST(a.ID AS VARCHAR) FROM
(SELECT DISTINCT TOP 100 ID from IDMASTER ORDER BY ID ) a
--PRINT @IDS --it prints 0,1,2,3,4 like that
CREATE PARTITION FUNCTION PartFun(smallint)
AS RANGE LEFT FOR VALUES (@IDS)
```
Getting this error:
> "Could not implicitly convert range values type specified at ordinal 1 to partition function parameter type."
Any workaround ideas?
Also, is it that people mostly use hard-coded values for the partition range? | This may not be an exact solution to what you are looking for. Here is the scenario I am faced with:
We have a DB that has multiple tables partitioned on a column named 'PriceListDate', but strangely, the data type is Varchar(8). We are in the middle of redesigning the application and the DB, so decided to change the data type to 'Date'. Here is how we are doing this dynamically:
```
IF NOT EXISTS (SELECT NULL FROM sys.partition_functions WHERE name = N'PriceListDateFunction')
BEGIN;
DECLARE @CreatePartitionFunctionScript NVARCHAR(MAX);
SET @CreatePartitionFunctionScript = 'CREATE PARTITION FUNCTION [PriceListDateFunction] (Date) AS RANGE LEFT FOR VALUES (' +
STUFF((SELECT ','+'N'+''''+CAST(prv.value as varchar(8))+''''
FROM sys.partition_range_values prv
INNER JOIN sys.partition_functions pf
ON pf.function_id = prv.function_id
WHERE pf.name = 'PriceListFunction'
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)'),1,1,'') --Get list of existing partitons from existing partition function
+ ')';
-- Create Partition Function
EXECUTE sp_executesql @CreatePartitionFunctionScript;
END;
```
Hope this gives you some ideas.
Raj | Thanks, I got the idea.
This is how I solved it:
```
DECLARE @sqlcmd nvarchar(400),@ids nvarchar(100);
SET @sqlcmd = N'CREATE PARTITION FUNCTION PartFun(smallint) AS RANGE LEFT FOR VALUES (' + @ids + N')' ;
--PRINT @sqlcmd
EXEC SP_EXECUTESQL @sqlcmd
```
So the solution is: dynamic SQL, and making everything NVARCHAR! | How to dynamically declare PARTITION RANGE IN Partition function in sql server | [
"sql",
"sql-server",
"partitioning"
] |
We have table with companies names and a numeric primary key identity. We are cleaning up the data and we have discovered the name column is full of similar names that represent the same company.
E.g. BA and Ba or GTC Ltd and GTC Limited.
Is there any way, using SQL Server, that we can get counts and a summary of all items that have similar names, with a list of IDs? I wondered if there was some sort of similarity comparison we could set a threshold value for, etc.
We need to present a list of names to the client that look like they need merged. | The basic answer is "No". Name rectification is a hard problem. Two names like "GTC Ltd" and "GTC Limited" are more different than "GTC" and "GTE" (by more obvious measures). There are outside service bureaus and special-purpose software for this purpose.
If you are dealing with a smallish amount of data, I would suggest that you alphabetize the values, load them into Excel, and add a column in Excel with the "official" name. You can then re-import this as a table in the database to do what you want. It might help if you remove known suffixes and prefixes, such as "ltd", "bros", "partners" and so on.
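If you do explore phonetic matching, the classic American Soundex algorithm is small enough to sketch. This is a simplified Python version (SQL Server's `soundex()` has additional quirks, e.g. around non-alphabetic characters, so treat it only as an illustration of why the code over-matches):

```python
def soundex(name: str) -> str:
    """Simplified American Soundex: first letter plus up to three digit codes."""
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    name = name.lower()
    out = [codes.get(name[0], "")]
    for ch in name[1:]:
        if ch in "aeiouy":
            out.append("")          # vowels separate duplicate codes
        elif ch in "hw":
            continue                # h and w are ignored entirely
        else:
            d = codes.get(ch, "")
            if d and d != out[-1]:
                out.append(d)
    return (name[0].upper() + "".join(out[1:]) + "000")[:4]

print(soundex("Smith"), soundex("Smyth"))    # S530 S530
print(soundex("Robert"), soundex("Rupert"))  # R163 R163
```

"Smith"/"Smyth" and "Robert"/"Rupert" collapse to the same codes, which is both the appeal and the danger of the approach for name rectification.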
If you do try to go down the path of something like `soundex()`, then be sure that you [understand it well](http://technet.microsoft.com/en-us/library/ms187384.aspx). For instance, the `soundex()` values of the following two strings are the same: "gte, blah blah blah" and "gdteey, junk goes here". | Your answer lies in the [`SoundEx()`](http://technet.microsoft.com/en-us/library/ms187384.aspx) and [`Difference()`](http://technet.microsoft.com/en-us/library/ms188753.aspx) functions.
```
DECLARE @a varchar(50) = 'BA'
, @b varchar(50) = 'Ba'
;
SELECT @a
, @b
, SoundEx(@a)
, SoundEx(@b)
, Difference(@a, @b)
;
SET @a = 'GTC Ltd';
SET @b = 'GTC Limited';
SELECT @a
, @b
, SoundEx(@a)
, SoundEx(@b)
, Difference(@a, @b)
;
SET @a = 'BLAH';
SELECT @a
, @b
, SoundEx(@a)
, SoundEx(@b)
, Difference(@a, @b)
;
```
Think of `SoundEx` as *"sounds like"* - it's a function that returns a representation of the input which you can compare with other outputs.
The `Difference()` function returns a value between 0 and 4, where the higher numbers represent better matches. | Data cleansing - how to decide which names are misspellings or are equivalent but slightly different? | [
"sql",
"sql-server",
"data-analysis",
"data-cleaning"
] |
Suppose I have 2 tables for user & task. The task table has user\_id & status columns, with status having possible values "complete" & "not complete".
Now I want to retrieve users who have not completed even 1 task.
The most crude way is to first find users who have at least 1 complete task and run a "not in" query.
Any better ways to achieve this without an "in" subquery? Please note that the data set is huge and I can't afford to hold a lock on the task table for a long time! | ```
SELECT *
FROM users u
WHERE NOT EXISTS (
SELECT * FROM tasks t
WHERE t.user_id = u.user_id
AND t.status = 'complete'
);
``` | When `task.user_id` cannot contain NULL (i.e. has a `NOT NULL` constraint), `LEFT JOIN` with `IS NULL` is your [best choice](http://planet.mysql.com/entry/?id=24888):
```
SELECT user.* FROM user
LEFT JOIN task ON (task.user_id = user.id AND task.status = 'complete')
WHERE task.user_id IS NULL
``` | mysql join query that needs to return negative set | [
"mysql",
"sql",
"join"
] |
First of all, I'm not an SQL expert, so probably I'm missing some brackets or something else that's stupid simple ;-)
Can anyone please tell me why this works and returns 4 rows:
```
SELECT ppa.products_id FROM products_propattributes ppa
WHERE ( (options_id = "11" AND options_values_id = "18")
OR (options_id = "4" AND options_values_id = "4") )
```
And this one doesn't, while it should match 1 row:
```
SELECT ppa.products_id FROM products_propattributes ppa
WHERE ( (options_id = "11" AND options_values_id = "18")
AND (options_id = "4" AND options_values_id = "4") )
```
Any help would be highly appreciated!
Thanks in advance..
Bas | Because there can only be one value in a column in any given row, a single row can never satisfy both conditions at once. If you want the `ID` that matches both conditions, use `OR` and require two matches:
```
SELECT ppa.products_id
FROM products_propattributes ppa
WHERE (options_id = "11" AND options_values_id = "18") OR
(options_id = "4" AND options_values_id = "4")
GROUP BY ppa.products_id
HAVING COUNT(*) = 2
``` | In the second query you have only `AND`s, so it's impossible that `options_id = "11"` AND `options_id = "4"` both match.
And of course the same with `options_values_id = "18"` AND `options_values_id = "4"`. | Multiple where clause working with OR, not with AND | [
"mysql",
"sql"
] |
I have a text field in MS Access and it has text like this:
```
bla bla bla [hhh] bla bla bla [kkkd]
blo blo blo blo [ttt] blo blo blo [ppp]
jh asdjahsuz uizasdui asudz j jksdf
```
I'm trying to search for all records that have '[something]' in that field.
```
SELECT pruefhinweis
FROM tb_bauteile
WHERE pruefhinweis LIKE '%[%]%'
```
but this SQL doesn't work. Could you please tell me how I can do this? | Try this:
```
SELECT pruefhinweis
FROM tb_bauteile
WHERE pruefhinweis LIKE '*[[a-z]]*';
```
**Or**
```
SELECT b.pruefhinweis
FROM tb_bauteile AS b
WHERE b.pruefhinweis Like '*[[]*]*';
``` | Matching square brackets in a pattern is not intuitive. See the help topic *"Using Wildcard Characters in String Comparisons"*:
*You can use the special characters opening bracket ([ ), question mark (?), number sign (#), and asterisk (\*) to match themselves directly only if enclosed in brackets. You cannot use the closing bracket ( ]) within a group to match itself, but you can use it outside a group as an individual character.*
Either of these will return what you want with a query run from DAO (eg in an Access session using the query designer).
```
SELECT b.pruefhinweis
FROM tb_bauteile AS b
WHERE b.pruefhinweis Like '*[[]*]*';
SELECT b.pruefhinweis
FROM tb_bauteile AS b
WHERE b.pruefhinweis ALike '%[[]%]%';
```
From ADO, you could use the second query or this one ...
```
SELECT b.pruefhinweis
FROM tb_bauteile AS b
WHERE b.pruefhinweis Like '%[[]%]%';
``` | How to filter data with square brackets? | [
"sql",
"ms-access",
"pattern-matching"
] |
I am using SQL Server 2012. I am new to T-SQL.
I have a table called `tblEmp` where the phone number of some employees is NULL.
I want to copy all this data to a new table using the `INTO` clause, but in the new table all the NULL phone numbers should be replaced with the string 'NA'.
I think this can be done with a `CASE` statement. | You can use the [ISNULL](http://technet.microsoft.com/en-us/library/ms184325.aspx) function
Query:
```
INSERT INTO table_name (column1,column2,column3,...)
SELECT isnull(phone_no, 'NA') AS phone_no,
name,
.....
FROM tblEmp
```
OR the [SELECT-INTO](http://technet.microsoft.com/en-us/library/ms188029.aspx) statement:
```
SELECT isnull(phone_no, 'NA') AS phone_no,
name,
.....
INTO newtable
FROM tblEmp
```
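To sanity-check the substitution outside SQL Server, here is a sqlite3 sketch of the same idea (SQLite spells the function `IFNULL`, and `CREATE TABLE ... AS` plays the role of `SELECT ... INTO`; the sample data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblEmp (name TEXT, phone_no TEXT)")
con.executemany("INSERT INTO tblEmp VALUES (?, ?)",
                [("Ann", "555-0100"), ("Bob", None)])

# IFNULL(phone_no, 'NA') substitutes 'NA' wherever phone_no is NULL
con.execute("""
    CREATE TABLE newtable AS
    SELECT name, IFNULL(phone_no, 'NA') AS phone_no
    FROM tblEmp
""")
rows = con.execute("SELECT name, phone_no FROM newtable ORDER BY name").fetchall()
print(rows)  # [('Ann', '555-0100'), ('Bob', 'NA')]
```

The NULL phone number is replaced during the copy, leaving the original table untouched.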
**[ExampleSqlFiddle](http://sqlfiddle.com/#!6/083ae/1)** | ```
INSERT INTO NewTable(col1,col2,....)
SELECT case when phone_no is null then 'NA' else phone_no end,
name,....
FROM tblEmp
``` | How to insert data to a new table based on condtion using INTO CLAUSE? | [
"sql",
"t-sql",
"sql-server-2012"
] |
I have 2 tables in my database, one is `tablea`, and the other is `tableb`.
```
SELECT *
FROM TABLEA
```
Results in:
```
ip            mac
------------  -----------------
1.10.0.0.97   00 14 2A 2F 72 FE
1.10.0.0.98   08 CC 68 71 A1 C0
```

```
SELECT *
FROM TABLEB
```
Results in:
```
mac                port
-----------------  ----
0:14:2a:2f:72:fe   24
8:cc:68:71:a1:c0   7
```
I now want to create a third table, which joins tablea and tableb and has 3 columns displaying ip, mac and port. The table has already been created, and this is what I have in mind:
```
INSERT INTO TABLEC
SELECT A.IP,
A.MAC,
B.PORT
FROM TABLEA A,
TABLEB B
WHERE A.MAC = REPLACE('REPLACE('B.MAC',':',' ')','0','00')
```
The query gives me an error
```
ERROR 1064(42000): You have an error in your SQL syntax; check the manual that corresponds to your mysql server version for the right syntax to use near 'b.mac',':',' ')','0','00')
```
I have checked the manual and i could not identify where my error is . And also my replace function can be able to convert 0:14:2a:2f:72:fe to 00 14 2A 2F 72 FE but it will not work if tried for 8:cc:68:71:a1:c0. I would really appreciate any help i can get in creating this 3rd table | You should remove `'` before `replace` and after `)` and the quotes around `b.mac`.
Also you can't replace `0` with `00` because for `c0` it will return `c00` you should use **[LPAD](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_lpad)** instead.
```
INSERT INTO TABLEC
SELECT A.IP,
A.MAC,
B.PORT
FROM TABLEA A,
TABLEB B
WHERE A.MAC = UPPER(LPAD(REPLACE(B.MAC, ':', ' '), 17, '0'))
```
[**SQL Fiddle DEMO**](http://sqlfiddle.com/#!2/dc0944/12/0) | you should lpad (with 0) tableb.mac before doing your replace. | Joining 2 tables in which the common row has different data formats | [
"mysql",
"sql"
] |
How many times will this nested sub-query be executed?
```
SELECT CID, CNAME
FROM CUSTOMER
WHERE EXISTS ( SELECT CID
FROM RENTALS
WHERE CUSTOMER.CID = RENTALS.CID
AND PICKUP = 'CARY' )
```
This is a theoretical question i.e. it is found in my book. The provided answer was 6, but I don't understand why this is so.
---
OK, I think there is some problem with the book itself. I will go through the book and maybe ask the question later. | As others point out, this correlated subquery can be rewritten as a join, but that is not the complete story, because the execution plan for an untransformed EXISTS is going to look pretty much like a join anyway. So this is not really a syntax question but a query optimisation issue.
EXISTS is really just syntactic shorthand for "join to this data set but only to a single row in it even if there are 1,000,000 matches", or what is also known as a [semijoin](http://en.wikipedia.org/wiki/Relational_algebra#Semijoin_.28.E2.8B.89.29.28.E2.8B.8A.29).
So the semijoin required by an EXISTS predicate against a correlated or uncorrelated subquery can be implemented in a number of ways, which depend to a large extent on the numbers of records in the two tables.
If you imagine that CUSTOMER is estimated to have a single row, and the optimiser estimates that there are many thousands of rows in RENTALS for which PICKUP = 'CARY', then the optimiser is pretty likely to read the row from the CUSTOMER table and perform a single lookup against the RENTALS table.
If there are an estimated one million CUSTOMER rows and only one row in the RENTALS table, then that execution plan would be crazy -- the optimiser could instead invert the join by leading with the RENTALS table and looking up the single row to be returned against the CUSTOMER table. In that case the subquery has arguably not been executed at all.
In between these extremes there are various other optimisations. For example, building the unique values of the RENTAL.CID column into an in-memory hash table for rows where PICKUP='CARY' and full-scanning the CUSTOMER table to probe that hash table for every row, which would be a HASH SEMIJOIN. Again, no execution of a recognisable subquery. (And the query rewrite that Barmer suggests is likely to lead to that kind of plan, but might also constrain the optimiser from seeing other plans suitable for other data distributions and cardinalities).
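One way to make this concrete is to ask a database for its plan. A sqlite3 sketch (SQLite's planner, not the one assumed by the book, so treat the exact plan text as illustrative) shows the EXISTS becoming a per-row indexed lookup rather than a literal re-execution of the subquery text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (cid INTEGER PRIMARY KEY, cname TEXT);
CREATE TABLE rentals (cid INTEGER, pickup TEXT);
CREATE INDEX rentals_cid ON rentals (cid);
""")

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT cid, cname
    FROM customer c
    WHERE EXISTS (SELECT 1 FROM rentals r
                  WHERE r.cid = c.cid AND r.pickup = 'CARY')
""").fetchall()
for row in plan:
    print(row)  # e.g. a SCAN of customer plus a SEARCH of rentals via the index
```

The plan is a scan of one table plus an index lookup into the other, i.e. a semijoin shape, not "run the subquery N times as written".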
So, as other answers say, the question is really moot, because I think that there are two important lessons here:
1. *Many different SQL statements can lead to the same execution plan, and a single SQL statement can also lead to multiple execution plans.*
2. *You should write queries that express syntactically the result that you want, and not in general to preempt and constrain the query optimiser's choices.*
The second point is important, as it argues against some developer's instinct to avoid writing correlated subqueries (and EXISTS or NOT EXISTS in particular it seems) in the hope of providing their own optimisation. In particular, replacing an EXISTS with an outer join can be very suboptimal given the right/wrong data distributions.
To get to the direct answer to the question, I'd say either:
* 0
* 1
* However many rows are in CUSTOMERS
* Possibly something else. | There's no correct *theoretical* answer to this question. A smart query optimizer can transform the query into a JOIN:
```
SELECT CID, CNAME
FROM Customer
LEFT JOIN (SELECT DISTINCT CID FROM Rentals WHERE PICKUP = 'CARY') as Rentals
ON Customer.CID = Rentals.CID
WHERE Rentals.CID IS NOT NULL
```
Now the sub-query is just executed once. | Number of times a nested query is executed | [
"sql",
"database-agnostic"
] |
I need my query to select unix timestamps ranging from a particular past date till now. For example, I need to select from '2013-01-01' till the current date. I was able to do it for a particular year. Any help would be appreciated.
```
SELECT mdl_user_info_data.data, mdl_user.firstname, mdl_user.lastname, mdl_user.id AS userid, SUM( mdl_quiz.fcpd ) AS cpdtotal
FROM mdl_grade_grades
INNER JOIN mdl_user ON mdl_grade_grades.userid = mdl_user.id
INNER JOIN mdl_grade_items ON mdl_grade_grades.itemid = mdl_grade_items.id
INNER JOIN mdl_quiz ON mdl_grade_items.itemname = mdl_quiz.name
INNER JOIN mdl_course ON mdl_grade_items.courseid = mdl_course.id
INNER JOIN mdl_user_info_data ON mdl_user.id = mdl_user_info_data.userid
WHERE mdl_user_info_data.fieldid =1
AND mdl_grade_items.itemname IS NOT NULL
AND YEAR( FROM_UNIXTIME( mdl_grade_grades.timemodified ) ) =2013
GROUP BY mdl_user.id
``` | Simply replace this line:
```
AND YEAR(FROM_UNIXTIME(mdl_grade_grades.timemodified)) = 2013
```
to:
```
AND FROM_UNIXTIME(mdl_grade_grades.timemodified)
BETWEEN '2013-01-01'
AND now()
```
It seems that you are storing a unix timestamp in your `timemodified` column. You should consider changing it to the `timestamp` type - this will allow you to take advantage of an index on that field. Right now, your query (and this query as well) is slow because it has to compute `FROM_UNIXTIME()` on every row in that table and only then compare the year or date.
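The sargability point can be checked with a small sqlite3 sketch: the epoch boundaries are computed once and the raw integer column is compared directly, which is the shape that lets an index on `timemodified` be used (table and values are made up):

```python
import sqlite3
from datetime import datetime, timezone

def epoch(day: str) -> int:
    """Convert a YYYY-MM-DD string to unix seconds (UTC), like unix_timestamp()."""
    return int(datetime.strptime(day, "%Y-%m-%d")
               .replace(tzinfo=timezone.utc).timestamp())

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mdl_grade_grades (timemodified INTEGER)")
con.executemany("INSERT INTO mdl_grade_grades VALUES (?)",
                [(epoch("2010-06-01"),), (epoch("2013-03-15"),), (epoch("2013-06-01"),)])

# The column stays bare on the left side of BETWEEN, so an index can be used
count = con.execute(
    "SELECT COUNT(*) FROM mdl_grade_grades WHERE timemodified BETWEEN ? AND ?",
    (epoch("2013-01-01"), epoch("2014-01-01"))).fetchone()[0]
print(count)  # 2
```

Only the two 2013 rows fall inside the boundaries; no per-row date conversion is needed.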
**UPDATE**:
If you don't want to change types, using this should make your query work much faster (provided that there is an index on `timemodified` column):
```
AND mdl_grade_grades.timemodified
BETWEEN unix_timestamp('2013-01-01')
AND unix_timestamp(now())
``` | To check for a timestamp value between a given datetime in the past, and the current datetime, something like this:
```
AND mdl_grade_grades.timemodified >= '2013-07-25 15:30'
AND mdl_grade_grades.timemodified < NOW()
``` | select certain date till now mysql | [
"mysql",
"sql"
] |
I have a very complicated table I'd like to take a temporary backup of whilst I make some changes. Normally, I'd just do the following:
```
SELECT *
INTO temp_User
FROM dbo.[User] AS u
```
Unfortunately I'm using Azure, and it appears this isn't supported:
> Msg 40510, Level 16, State 1, Line 2 Statement 'SELECT INTO' is not
> supported in this version of SQL Server.
Is there a way to re-create this feature into a function, potentially? I could do this by scripting the table, creating it and then inserting data using a select statement but given how frequently I use Azure, and how many databases I need to work on in this area this is very unwieldy. | The new [Azure DB Update preview](http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/) has this problem resolved:
> The V12 preview enables you to create a table that has no clustered
> index. This feature is especially helpful for its support of the T-SQL
> SELECT...INTO statement which creates a table from a query result.
<http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/> | Azure requires a clustered index on all tables, therefore `SELECT INTO` is not supported.
You'll have to:
```
CREATE TABLE temp_User () --fill in table structure
INSERT INTO temp_User
SELECT *
FROM dbo.[User]
```
To script table easily you can write your own or use one of the answers to this question:
[Script CREATE Table SQL Server](https://stackoverflow.com/questions/21547/in-sql-server-how-do-i-generate-a-create-table-statement-for-a-given-table/317864#317864)
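A minimal sketch of the two-step replacement, with SQLite as a stand-in (table names illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT)")
conn.executemany("INSERT INTO users (Name) VALUES (?)", [("alice",), ("bob",)])

# Step 1: create the target table explicitly (SELECT INTO would have done this)
conn.execute("CREATE TABLE temp_users (Id INTEGER PRIMARY KEY, Name TEXT)")
# Step 2: copy the data with INSERT ... SELECT
conn.execute("INSERT INTO temp_users SELECT * FROM users")

copied = conn.execute("SELECT COUNT(*) FROM temp_users").fetchone()[0]
print(copied)  # 2
```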
**Update:** As Jordan B pointed out, V12 will include support for heaps (no clustered index requirement) which means `SELECT INTO` will work. At the moment V12 Preview is available, Microsoft of course only recommends upgrading with test databases. | SQL Azure doesn't support 'select into' - Is there another way? | [
"",
"sql",
"t-sql",
"azure",
"azure-sql-database",
""
] |
I have two tables as below -
Sale Record:
```
Date | Customer | ItemSold
-----------------------------------------
11/01/2013 | Alex | Pen
12/01/2013 | Rony | Paper
13/01/2013 | Alex | Eraser
14/01/2013 | Marty | Eraser
15/01/2013 | Alex | Pen
16/01/2013 | Rob | Paper
17/01/2013 | Alex | Pencil
18/01/2013 | Alex | Pen
19/01/2013 | Ned | Pen
20/01/2013 | Alex | Paper
21/01/2013 | Alex | Pencil
22/01/2013 | Ned | Pen
23/01/2013 | Alex | Eraser
24/01/2013 | Alex | Pen
25/01/2013 | Alex | Pen
26/01/2013 | Alex | Paper
27/01/2013 | Ned | Paper
28/01/2013 | Alex | Pen
29/01/2013 | Alex | Eraser
30/01/2013 | Alex | Pen
31/01/2013 | Rony | Pencil
01/02/2013 | Alex | Eraser
02/02/2013 | Ned | Paper
03/02/2013 | Alex | Pen
```
Priority:
```
ItemName | Priority
--------------------------
Pen | 1
Paper | 2
Pencil | 3
Eraser | 4
```
I want to get a list to know which customer is likely to buy what as below -
```
Name | Item
----------------
Alex | Pen
Rob | Paper
Ned | Pen
Marty | Eraser
Rony | Paper
```
If there is a tie between items, the item with the highest priority should be selected. Ned bought Pen and Paper twice each, but Pen should be selected as it has higher priority than Paper.
What will be the sql query for this ? | Statistically, the term for what you are looking for is the `mode`. Here is one way to calculate it using window/analytic functions:
```
select customer, ItemSold
from (select customer, ItemSold, count(*),
row_number() over (partition by customer order by count(*) desc, p.priority
) as seqnum
from sales s left outer join
priority p
on s.ItemSold = p.ItemName
group by customer, ItemSold
) ci
where seqnum = 1;
``` | [SQL Fiddle](http://sqlfiddle.com/#!12/76e5c/1)
```
select distinct on (customer)
customer, itemsold, total
from
(
select customer, itemsold, count(*) total
from sales
group by customer, itemsold
) s
inner join priority on itemsold = itemname
order by customer, total desc, priority
``` | SQL query to get most prevalent value in a column | [
"",
"sql",
"postgresql",
""
] |
We have some tables which keep track of processed transactions. These tables have millions of rows. Often times I want to look at the most recent X transactions, so I have a query like this to pull the info I want from a few tables:
```
select a.id, b.field_one, c.field_two
from trans a, table_two b, table_three c
where a.id = b.id
and a.id = c.id
and a.id in
(select id from trans where id > (select max(id) from trans) - 100);
```
Right now the query is very slow. The explain plan shows a full table scan on B and C. Now, if I evaluate the nested query separately and replace it with a list of comma separated IDs, the query is very fast. This seems obvious to me - it will only have 100 rows to join together so of course it will be faster than if it answered the query by first joining A and B together.
Conceptually I understand the query optimizer is trying to find a good execution plan but in this case it seems like it is doing a terrible job. Is there any way to force the DBMS to execute the nested query first? Possibly by using a Hint?
Thanks | Your method is probably obscuring the fact that only a maximum of 100 rows are being selected from trans.
Try this:
```
with cte_last_trans as (
select id
from (select id
from trans
where id > (select max(id)-100 from trans)
order by id desc)
where rownum <= 100)
select a.id,
b.field_one,
c.field_two
from cte_last_trans a,
table_two b,
table_three c
where a.id = b.id
and a.id = c.id
```
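The shape of this rewrite - restrict to the newest 100 ids first, then join - can be sketched in SQLite, with `LIMIT` standing in for Oracle's `rownum` filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trans (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE table_two (id INTEGER, field_one TEXT)")
conn.executemany("INSERT INTO trans VALUES (?)", [(i,) for i in range(1, 1001)])
conn.executemany("INSERT INTO table_two VALUES (?, ?)",
                 [(i, "x") for i in range(1, 1001)])

# Materialise the newest 100 ids first, then join - mirroring the CTE above
rows = conn.execute("""
    WITH last_trans AS (SELECT id FROM trans ORDER BY id DESC LIMIT 100)
    SELECT a.id, b.field_one
    FROM last_trans a JOIN table_two b ON a.id = b.id
""").fetchall()

print(len(rows), min(r[0] for r in rows))  # 100 901
```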
By the way, have you thought of the possibility that not all values of id might be present? If you want 100 rows to be returned, use:
```
with cte_last_trans as (
select id
from (select id
from trans
order by id desc)
where rownum <= 100)
select a.id,
b.field_one,
c.field_two
from cte_last_trans a,
table_two b,
table_three c
where a.id = b.id
and a.id = c.id
``` | You can use a NO\_MERGE hint which will force Oracle to execute the inner query first and not try to merge the two queries. Here's an example:
```
SELECT /*+NO_MERGE(seattle_dept)*/ e1.last_name, seattle_dept.department_name
FROM employees e1,
(SELECT location_id, department_id, department_name
FROM departments
WHERE location_id = 1700) seattle_dept
WHERE e1.department_id = seattle_dept.department_id;
-- "inner" is a reserved word in Oracle, so the subquery alias is renamed to inr
select /*+ no_merge(inr) */ a.id, b.field_one, c.field_two
from trans a,
table_two b,
table_three c,
(select id from trans where id > (select max(id) from trans) - 100) inr
where a.id = b.id
and a.id = c.id
and a.id = inr.id;
``` | How can I get Oracle to have a better execution plan for this query? | [
"",
"sql",
"oracle",
"query-optimization",
""
] |
I want to execute this query
```
select * from locations where region=1 or region=2
```
but in certain cases I don't want to include the second clause in the WHERE (`or region=2`)
Is there a way to conditionally run the or statement? | Check for the condition in the where statement
```
select * from locations
where region=1
or (region=2 and @checkRegion2=1)
``` | If you want a generalizable query, you can declare a table variable for this (I'm assuming you're using SQL Server):
```
declare @regions table (id int primary key)
```
and then fill it depending on condition:
```
insert into @regions select 1
if @use2 = 1
insert into @regions select 2
```
now you can run this query:
```
select *
from location as l
where exists (select * from @regions as r where r.id = l.region)
```
or
```
select *
from location as l
where l.region in (select r.id from @regions as r)
``` | Dynamic WHERE clauses in SQL Query | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have the following tables
```
CREATE TABLE Foos (
[Id] INT IDENTITY,
-- Other fields
)
CREATE TABLE Boos (
[Id] INT IDENTITY,
[FooId] INT,
-- Other fields
)
```
I am trying to execute a very simple query:
```
SELECT f.Id, COUNT(*)
FROM Foos f
JOIN Boos b on f.Id = b.FooId
GROUP BY b.FooId
```
Obviously I am getting an error because of the `GROUP BY`. The error message is
> Column 'Foo.Id' is invalid in the select list because it is not
> contained in either an aggregate function or the GROUP BY clause.
When I change the group by to `GROUP BY f.Id` everything works fine again.
My question is, why would SQL Server throw that error while it already knows for sure that `f.Id = b.FooId` from the join, that `Foo.Id` is unique from the `IDENTITY`, and that the grouping will logically return the same count due to the join on a primary key? | SQL Server cannot consistently reason about such key relationships - if boos.FooId is nullable and your query is:
```
SELECT f.Id, COUNT(*)
FROM Foos f
LEFT JOIN Boos b on f.Id = b.FooId
GROUP BY b.FooId
```
you have a set of records where b.fooId is NULL and f.Id can be different between them.
You can also join on other conditions which may also cause NULLS or other sorts of non-equivalence - e.g. you could join like:
```
SELECT f.Id, COUNT(*)
FROM Foos f
JOIN Boos b on SUBSTRING(f.Id, 2, 4) = SUBSTRING(b.FooId, 2, 4)
GROUP BY b.FooId
```
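A small SQLite sketch of the nullable-join case: two different `f.Id` values collapse into a single NULL group, so `f.Id` is not determined by the grouping key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Foos (Id INTEGER PRIMARY KEY);
CREATE TABLE Boos (Id INTEGER PRIMARY KEY, FooId INTEGER);
INSERT INTO Foos VALUES (1),(2),(3);
INSERT INTO Boos VALUES (10,1),(11,1);  -- Foos 2 and 3 have no matching Boos
""")

# Both unmatched Foos land in one NULL group despite having different Ids
rows = conn.execute("""
    SELECT b.FooId, COUNT(*)
    FROM Foos f LEFT JOIN Boos b ON f.Id = b.FooId
    GROUP BY b.FooId
""").fetchall()

print(dict(rows))
```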
So - yes this can be reasoned about in one simple, common case, but only in that case. That would cause inconsistency, and ultimately prove more frustrating to developers. | SQL Server simply doesn't take these facts into account and therefore it can't infer that `b.FooId` will always be the same as `f.Id` in this particular query. | Do I outsmart SQL Server or there is a reason for this? | [
"",
"sql",
"sql-server",
"select",
"join",
"group-by",
""
] |
I have the following SQL query (SQL Server 2005):
```
Select TOP 1000 col1,
LTRIM(RTRIM(cast(col2 as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(col3 as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(col4 as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(col5 as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(col6 as varchar))))
from mytable
```
But I'm having some problems because these columns are all nullable and I cannot have NULL values in this concatenation. What's the nicest way to avoid NULL values in this situation?
Thanks in advance! | Wrap the fields in `ISNULL()`
```
Select TOP 1000 col1,
LTRIM(RTRIM(cast(ISNULL(col2,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col3,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col4,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col5,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col6,'') as varchar))))
from mytable
```
Or `COALESCE()`
```
Select TOP 1000 col1,
LTRIM(RTRIM(cast(COALESCE(col2,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(COALESCE(col3,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(COALESCE(col4,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(COALESCE(col5,'') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(COALESCE(col6,'') as varchar))))
from mytable
```
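The effect can be sketched quickly in SQLite, where `||` propagates NULL just as `+` does in T-SQL, and `COALESCE` plugs the hole:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, col3 TEXT)")
conn.execute("INSERT INTO t VALUES ('a', NULL, 'c')")

# One NULL field makes the whole concatenation NULL...
raw = conn.execute(
    "SELECT col1 || ' | ' || col2 || ' | ' || col3 FROM t").fetchone()[0]

# ...unless each nullable field is wrapped (COALESCE here, ISNULL in T-SQL)
fixed = conn.execute("""
    SELECT COALESCE(col1,'') || ' | ' || COALESCE(col2,'') || ' | ' || COALESCE(col3,'')
    FROM t
""").fetchone()[0]

print(raw, fixed)  # None a |  | c
```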
If using SQL Server 2012 it's even easier thanks to `CONCAT()` (you can replace `SPACE(2)` with `' | '` above too):
```
Select TOP 1000 col1
,CONCAT(LTRIM(RTRIM(cast(col2 as varchar)))
,' | ', LTRIM(RTRIM(col3))
,' | ', LTRIM(RTRIM(col4))
,' | ', LTRIM(RTRIM(col5))
,' | ', LTRIM(RTRIM(col6)))
from mytable
``` | You can have some other value when there is a NULL, through the ISNULL function. Here is a sample query:
```
Select TOP 1000 col1,
LTRIM(RTRIM(cast(ISNULL(col2,'SOMEVALUE') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col3,'SOMEVALUE') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col4,'SOMEVALUE') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col5,'SOMEVALUE') as varchar)))
+ SPACE(2) + '|' + SPACE(2) + LTRIM(RTRIM(cast(ISNULL(col6,'SOMEVALUE') as varchar))))
from mytable
``` | Avoiding NULL values - CONCATENATION | [
"",
"sql",
"sql-server",
""
] |
I have a table like this
```
| customer_id | item_id | price | amount |
```
I want to retrieve the amount of money each customer has spent in a single query.
I have tried:
```
SELECT SUM(price * amount) AS total FROM table GROUP BY customer_id
```
but this spits out astronomically high values for `total`. It cannot be correct.
I also tried
```
SELECT @total := @total + (price * amount) AS total FROM table
CROSS JOIN (SELECT @total := 0) CONST
GROUP BY customer_id
```
but this doesn't start from `0` for each customer, so the previous totals will stack up...
How do I properly retrieve the data I want? | Your initial query is correct:
```
SELECT SUM(price * amount) AS total
FROM table
GROUP BY customer_id;
```
(although I would include `customer_id` in the `SELECT` clause.)
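A quick check of that grouped sum in SQLite (illustrative data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, price REAL, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?,?,?)",
                 [(1, 2.5, 4), (1, 1.0, 2), (2, 3.0, 1)])

# One row per customer: SUM(price * amount), grouped by customer_id
totals = conn.execute("""
    SELECT customer_id, SUM(price * amount) AS total
    FROM orders
    GROUP BY customer_id
""").fetchall()

print(sorted(totals))  # [(1, 12.0), (2, 3.0)]
```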
If you have "astronomically high values" then the issue is your data. Quite possibly, the issue is that "price" is really "total" on each row. | You have to add customer id to your select clause. | Calculate proper total in query? | [
"",
"mysql",
"sql",
"innodb",
""
] |
I'm building a stock keeping system and decided to store each product's balance (everytime it's updated) into the following table:
```
+------------+--------------+---------+------+
| Product_id | Warehouse_id | Balance | Date |
+------------+--------------+---------+------+
```
Example:
Staff adds 10 pieces to product\_id 123 in warehouse\_id 5
```
+------------+--------------+---------+-------------+
| Product_id | Warehouse_id | Balance | Date |
+------------+--------------+---------+-------------+
| 123 | 5 | 10 | 2013-09-16 |
+------------+--------------+---------+-------------+
```
Staff then adds 3 pieces to product 234 in warehouse\_id 5, and
5 pieces to 123 in warehouse\_id 5,
```
+------------+--------------+---------+-------------+
| Product_id | Warehouse_id | Balance | Date |
+------------+--------------+---------+-------------+
| 123 | 5 | 10 | 2013-09-16 |
| 234 | 5 | 3 | 2013-09-18 |
| 123 | 5 | 15 | 2013-09-21 |
+------------+--------------+---------+-------------+
```
\*Notice the date column
Now let me add a few more rows
```
+------------+--------------+---------+-------------+
| Product_id | Warehouse_id | Balance | Date |
+------------+--------------+---------+-------------+
| 123 | 5 | 10 | 2013-09-16 |
| 234 | 5 | 3 | 2013-09-18 |
| 123 | 5 | 15 | 2013-09-21 |
| 123 | 5 | 18 | 2013-09-24 |
| 234 | 5 | 10 | 2013-09-26 |
| 123 | 5 | 22 | 2013-09-29 |
+------------+--------------+---------+-------------+
```
---
How do i run a query that would get me all products' balances as at 25th of September 2013?
That means i need the following result:
```
+------------+--------------+---------+-------------+
| Product_id | Warehouse_id | Balance | Date |
+------------+--------------+---------+-------------+
| 234 | 5 | 3 | 2013-09-18 |
| 123 | 5 | 18 | 2013-09-24 |
+------------+--------------+---------+-------------+
```
In short I need the latest row (by date), per product\_id.
Any help would be greatly appreciated! | Assuming that products' balances are being maintained per warehouse you can do it like this
```
SELECT t.product_id, t.warehouse_id, t.balance, t.date
FROM table1 t JOIN
(
SELECT warehouse_id, product_id, MAX(date) date
FROM table1
WHERE date <= '2013-09-25'
GROUP BY warehouse_id, product_id
) q
ON t.warehouse_id = q.warehouse_id
AND t.product_id = q.product_id
AND t.date = q.date
```
Output:
```
| PRODUCT_ID | WAREHOUSE_ID | BALANCE | DATE |
|------------|--------------|---------|------------|
| 234 | 5 | 3 | 2013-09-18 |
| 123 | 5 | 18 | 2013-09-24 |
```
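The same greatest-date-per-group join, sketched in SQLite with ISO date strings (which compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock (product_id INTEGER, warehouse_id INTEGER, balance INTEGER, d TEXT);
INSERT INTO stock VALUES
 (123,5,10,'2013-09-16'),(234,5,3,'2013-09-18'),(123,5,15,'2013-09-21'),
 (123,5,18,'2013-09-24'),(234,5,10,'2013-09-26'),(123,5,22,'2013-09-29');
""")

# Latest row per (warehouse, product) on or before the cut-off date
rows = conn.execute("""
    SELECT t.product_id, t.balance, t.d
    FROM stock t
    JOIN (SELECT warehouse_id, product_id, MAX(d) AS d
          FROM stock WHERE d <= '2013-09-25'
          GROUP BY warehouse_id, product_id) q
      ON t.warehouse_id = q.warehouse_id
     AND t.product_id = q.product_id
     AND t.d = q.d
""").fetchall()

print(sorted(rows))  # [(123, 18, '2013-09-24'), (234, 3, '2013-09-18')]
```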
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/a4f49/30)** demo | ```
SELECT *
FROM TABLE
WHERE (PRODUCT_ID, DATE) IN
(SELECT PRODUCT_ID, MAX(DATE) FROM TABLE
WHERE DATE <= '2013-09-25'
GROUP BY PRODUCT_ID )
``` | MySQL Select one row with unique attribute value | [
"",
"mysql",
"sql",
"database",
""
] |
I have 3 tables with mainly string data and unique id column:
```
categories ~45 rows
clientfuncs ~800 rows
serverfuncs ~600 rows
```
All tables have unique primary AI column 'id'.
I try to count rows in one query:
```
SELECT COUNT(categories.id), COUNT(serverfuncs.id), COUNT(clientfuncs.id) FROM categories, serverfuncs, clientfuncs
```
It takes 1.5 - 1.7 s.
And when I try
```
SELECT COUNT(categories.id), COUNT(serverfuncs.id) FROM categories, serverfuncs
```
or
```
SELECT COUNT(categories.id), COUNT(clientfuncs.id) FROM categories, clientfuncs
```
or
```
SELECT COUNT(clientfuncs.id), COUNT(serverfuncs.id) FROM clientfuncs, serverfuncs
```
, it takes 0.005 - 0.01 s. (as it should be)
Can someone explain, what is the reason for this? | You're doing a cross join of 45\*800\*600 rows, you'll notice that when you check the result of the counts :-)
Try this instead:
```
SELECT
(SELECT COUNT(*) FROM categories),
(SELECT COUNT(*) FROM serverfuncs),
(SELECT COUNT(*) FROM clientfuncs);
``` | The queries are doing [cartesian product](http://en.wikipedia.org/wiki/Cartesian_product) since no join condition is applied so:
```
1 query : 800*600*45 = 21,6 mil
2 query : 45*600 = 27 k
3 query : 45*800 ...
``` | Counting rows in multiple tables cause large delay | [
"",
"mysql",
"sql",
""
] |
Could somebody suggest a SQL Server SELECT query to get values from a table column only if the values are not in another table? This is what I want to do:
```
SELECT t.TeacherID
FROM Teacher t ...... (Who are not in Class c, c.TeacherID Column)
``` | ```
select teacherID from Teacher
Except
select teacherID from Class
```
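`EXCEPT` behaves the same way in most engines; a quick SQLite check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Teacher (teacherID INTEGER);
CREATE TABLE Class (teacherID INTEGER);
INSERT INTO Teacher VALUES (1),(2),(3);
INSERT INTO Class VALUES (2);
""")

# Teachers whose id never appears in Class
free = conn.execute(
    "SELECT teacherID FROM Teacher EXCEPT SELECT teacherID FROM Class"
).fetchall()

print(sorted(free))  # [(1,), (3,)]
```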
or you can also go for:
```
select teacherID from Teacher where TeacherId not in(select teacherID from Class)
``` | Teachers who ***do not*** have classes:
```
SELECT *
FROM teacher
WHERE NOT EXISTS (
SELECT *
FROM class
WHERE class.teacherid = teacher.teacherid
)
```
If you want to do the inverse (find teachers who ***do*** have classes) then all you have to do is reverse the condition: remove the `NOT`! | Select query to get values from a table column only if the values are not in another table | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
As far as I know, according to mathematics, rounding should work as below when the rounding digit is 5.
```
2.435 => 2.44 (Round Up, if rounding to digit(3) is odd number)
2.445 => 2.44 (Round Down, if rounding to digit(4) is even number)
```
if we do summation all fine,
```
2.435 + 2.445 = 4.88
2.44 + 2.44 = 4.88
```
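This half-to-even rule is easy to check with exact decimals (Python's `decimal` module shown here purely as an illustration):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_half_even(value: str) -> Decimal:
    # Quantize to 2 decimal places, ties going to the nearest even digit
    return Decimal(value).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

a = round_half_even("2.435")  # preceding digit 3 is odd  -> up to 2.44
b = round_half_even("2.445")  # preceding digit 4 is even -> down to 2.44
print(a, b, a + b)  # 2.44 2.44 4.88
```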
I'm pretty sure rounding also works like this in .NET.
But in SQL Server, 5 always rounds up, which is not correct according to maths.
```
SELECT round(2.345, 2) = 2.35
SELECT round(2.335, 2) => 2.34
```
this results in 1-cent discrepancies in the summation of rounded values.
```
2.345 + 2.335 = 4.68
2.35 + 2.34 = 4.69 => which is not correct
```
I have tried this with decimal and money data types.
Am I doing something wrong? Is there a workaround for this? | If you do want to use banker's rounding in SQL Server...
```
CREATE FUNCTION BankersRounding(@value decimal(36,11), @significantDigits INT)
RETURNS MONEY
AS
BEGIN
-- if value = 12.345 and signficantDigits = 2...
-- base = 1000
declare @base int = power(10, @significantDigits + 1)
-- roundingValue = 12345
declare @roundingValue decimal(36,11) = floor(abs(@value) * @base)
-- roundingDigit = 5
declare @roundingDigit int = @roundingValue % 10
-- significantValue = 1234
declare @significantValue decimal(36,11) = floor(@roundingValue / 10)
-- lastSignificantDigit = 4
declare @lastSignificantDigit int = @significantValue % 10
-- awayFromZero = 12.35
declare @awayFromZero money = (@significantValue + 1) / (@base / 10)
-- towardsZero = 12.34
declare @towardsZero money = @significantValue / (@base / 10)
-- negative values handled slightly different
if @value < 0
begin
-- awayFromZero = -12.35
set @awayFromZero = ((-1 * @significantValue) - 1) / (@base / 10)
-- towardsZero = -12.34
set @towardsZero = (-1 * @significantValue) / (@base / 10)
end
-- default to towards zero (i.e. assume thousandths digit is 0-4)
declare @rv money = @towardsZero
if @roundingDigit > 5
set @rv = @awayFromZero -- 5-9 goes away from 0
else if @roundingDigit = 5
begin
-- 5 goes to nearest even number (towards zero if even, away from zero if odd)
set @rv = case when @lastSignificantDigit % 2 = 0 then @towardsZero else @awayFromZero end
end
return @rv
end
``` | You're looking for *Banker's Rounding* - which is the default rounding in C# but is not how SQL Server ROUND() works.
See [Why does TSQL on Sql Server 2000 round decimals inconsistently?](https://stackoverflow.com/questions/7852892/why-does-tsql-on-sql-server-2000-round-decimals-inconsistently/7852940#7852940) as well as <http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/sql-server-rounding-methods> and <http://www.chrispoulter.com/blog/post/rounding-decimals-using-net-and-t-sql> | SQL Server Rounding Issue where there is 5 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
One of my tables stores the UserAgent from the user's browser along with the corresponding UID associated with it, with some other data. This occurs every time a user logs in, so there will be many entries per user. I am trying to query this table to find some unique users based on qualities.
For example, I am trying to find only users that have used IE6 and no other browsers. The closest I can get so far is through this method:
```
select distinct (tb1.UID) from TABLE1 tb1
inner join TABLE1 tb2 on tb1.UID = tb2.UID
where tb1.UserAgent like '%MSIE 6.%'
and tb2.UserAgent like '%MSIE 6.%'
```
This seems to return users who have used IE6 and any other browser as well. I am trying to find basically the opposite of this: users that have used IE6 and IE6 only. I also tried the one below, but it didn't quite work either because a good chunk of these users had other entries with non-IE6 browsers.
```
select distinct (tb1.UID) from TABLE1 tb1
inner join TABLE1 tb2 on tb1.UID = tb2.UID
where tb1.UserAgent like '%MSIE 6.%'
and tb2.UserAgent not like '%MSIE 6.%'
```
I think I am on the right track but could be way off here.
TIA! | Select users whose user agent is `like '%MSIE 6.%'` and who have used no other user agent. The inner query finds the users who have used an agent not like `'%MSIE 6.%'`
```
select distinct tb1.UID from TABLE1 tb1
where tb1.UserAgent like '%MSIE 6.%' and
NOT EXISTS ( select tb2.UID from TABLE1 tb2
where tb1.UID = tb2.UID AND
tb2.UserAgent not like '%MSIE 6.%' )
```
You can even use `NOT IN` instead of `NOT EXISTS` like `tb1.UID NOT IN (...)`.
```
select distinct tb1.UID from TABLE1 tb1
where tb1.UserAgent like '%MSIE 6.%' and
tb1.UID NOT IN ( select tb2.UID from TABLE1 tb2
where tb2.UserAgent not like '%MSIE 6.%' )
```
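A compact SQLite sketch of the `NOT EXISTS` variant:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logins (UID INTEGER, UserAgent TEXT);
INSERT INTO logins VALUES
 (1,'MSIE 6.0'),(1,'MSIE 6.0'),    -- IE6 only
 (2,'MSIE 6.0'),(2,'Firefox 23'),  -- IE6 plus another browser
 (3,'Chrome 29');                  -- never used IE6
""")

# Users with an IE6 row and no row for any other agent
only_ie6 = conn.execute("""
    SELECT DISTINCT a.UID FROM logins a
    WHERE a.UserAgent LIKE '%MSIE 6.%'
      AND NOT EXISTS (SELECT 1 FROM logins b
                      WHERE b.UID = a.UID
                        AND b.UserAgent NOT LIKE '%MSIE 6.%')
""").fetchall()

print(only_ie6)  # [(1,)]
```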
The where clause condition `tb1.UserAgent like '%MSIE 6.%' and` can also be dropped without any side effects, as the `NOT` and the inner query make sure that the user's agent matches %MSIE 6.% | ```
select distinct (tb1.UID) from TABLE1 tb1
where not exists (
select 1
from TABLE1
where UID = tb1.UID and UserAgent not like '%MSIE 6.%'
)
``` | Joining same table on itself | [
"",
"sql",
"sql-server-2000",
"user-agent",
""
] |
I have a normal select query which results in the following output.
```
select cid,x1,x2,x3,x4,fy
from temp_table;
cid x1 x2 x3 x4 fy
----------------------------
6657 100 0 0 200 2014
6658 300 0 0 400 2015
6659 500 0 0 600 2016
```
I want to rewrite it to print the following output.
```
2014 2015 2016
-------------------------
x1 100 300 500
x2 0 0 0
x3 0 0 0
x4 200 400 600
```
How can this be achieved? | Here is a way to do this with just subqueries and aggregation:
```
select name,
sum(case when fy = 2014 then x end) as "2014",
sum(case when fy = 2015 then x end) as "2015",
sum(case when fy = 2016 then x end) as "2016"
from (select fy,
(case when n.n = 1 then 'x1'
when n.n = 2 then 'x2'
when n.n = 3 then 'x3'
when n.n = 4 then 'x4'
end) as name,
(case when n.n = 1 then x1
when n.n = 2 then x2
when n.n = 3 then x3
when n.n = 4 then x4
end) as x
from temp_table cross join
(select 1 as n from dual union all
select 2 from dual union all
select 3 from dual union all
select 4 from dual
) n
) t
group by name;
```
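The same conditional-aggregation trick, reduced to two columns and sketched in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp_table (cid INTEGER, x1 INTEGER, x4 INTEGER, fy INTEGER);
INSERT INTO temp_table VALUES
 (6657,100,200,2014),(6658,300,400,2015),(6659,500,600,2016);
""")

# Unpivot via a cross join with a row-number helper, then pivot back by year
rows = conn.execute("""
    SELECT name,
           SUM(CASE WHEN fy = 2014 THEN x END) AS "2014",
           SUM(CASE WHEN fy = 2015 THEN x END) AS "2015",
           SUM(CASE WHEN fy = 2016 THEN x END) AS "2016"
    FROM (SELECT fy,
                 CASE n WHEN 1 THEN 'x1' ELSE 'x4' END AS name,
                 CASE n WHEN 1 THEN x1 ELSE x4 END AS x
          FROM temp_table
          CROSS JOIN (SELECT 1 AS n UNION ALL SELECT 2) nums) t
    GROUP BY name ORDER BY name
""").fetchall()

print(rows)  # [('x1', 100, 300, 500), ('x4', 200, 400, 600)]
```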
You can also use `pivot`, but that is a very recent addition to Oracle SQL, so I'm inclined to use this method. | A more general solution is with [UNPIVOT and PIVOT](http://www.oracle-developer.net/display.php?id=506).
Your `temp_table` is already a pivot table with *(x1 x2 x3 x4)* as **x-axis** and *fy* as **y-axis**.
First we need to UNPIVOT `temp_table` into `unpivoted_temp_table` and then PIVOT it with *fy* as x-axis and *(x1 x2 x3 x4)* as y-axis:
```
with unpivoted_temp_table as (
SELECT *
FROM temp_table
UNPIVOT (
totalSales
FOR x
IN (x1, x2, x3, x4)
)
)
select *
FROM unpivoted_temp_table
PIVOT (
SUM(totalSales)
FOR fy
IN (2014, 2015, 2016)
)
order by 1 --order by column X
```
<http://sqlfiddle.com/#!4/4fdfc/1/0> | Transpose a query output | [
"",
"sql",
"oracle11g",
"transpose",
""
] |
I have the below data in a table **A** which I need to insert into table **B** along with one computed column.
TABLE A:
```
Account_No | Balance | As_on_date
1001 |-100 | 1-Jan-2013
1001 |-150 | 2-Jan-2013
1001 | 200 | 3-Jan-2013
1001 |-250 | 4-Jan-2013
1001 |-300 | 5-Jan-2013
1001 |-310 | 6-Jan-2013
```
Table B:
In table B, the number of days the balance has been negative should be shown, along with
the date on which it went negative.
So, for 6-Jan-2013, this table should show below data:
```
Account_No | Balance | As_on_date | Days_passed | Start_date
1001 | -310 | 6-Jan-2013 | 3 | 4-Jan-2013
```
Here, the number of days should be counted from the most recent time the balance went negative,
not from the old entry.
I need to write a SQL query to get the number of days passed and the start date from when the
balance went negative.
I tried to formulate a query using the `LAG` analytic function, but I am not succeeding.
How should I check for the first instance of a negative balance by traversing back using the `LAG` function?
I gave the `FIRST_VALUE` function a try as well, but could not work out how to partition it based on the negative values.
Any help or direction on this will be really helpful. | Here's a way to achieve this using analytical functions.
```
INSERT INTO tableb
WITH tablea_grouped1
AS (SELECT account_no,
balance,
as_on_date,
SUM (CASE WHEN balance >= 0 THEN 1 ELSE 0 END)
OVER (PARTITION BY account_no ORDER BY as_on_date)
grp
FROM tablea),
tablea_grouped2
AS (SELECT account_no,
balance,
as_on_date,
grp,
LAST_VALUE (
balance)
OVER (
PARTITION BY account_no, grp
ORDER BY as_on_date
ROWS BETWEEN UNBOUNDED PRECEDING
AND UNBOUNDED FOLLOWING)
closing_balance
FROM tablea_grouped1
WHERE balance < 0
AND grp != 0 --keep this, if starting negative balance is to be ignored
)
SELECT account_no,
closing_balance,
MAX (as_on_date),
MAX (as_on_date) - MIN (as_on_date) + 1,
MIN (as_on_date)
FROM tablea_grouped2
GROUP BY account_no, grp, closing_balance
ORDER BY account_no, MIN (as_on_date);
```
* First, SUM is used as analytical function to assign group number to consecutive balances less than 0.
* LAST\_VALUE function is then used to find the last -ve balance in each group
* Finally, the result is aggregated based on each group. MAX(date) gives the last date, MIN(date) gives the starting date, and the difference of the two gives number of days.
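The grouping idea can be checked procedurally for a single account (plain Python, mirroring the "new group at each non-negative balance" flag):

```python
from datetime import date

rows = [  # (balance, as_on_date) for one account, ordered by date
    (-100, date(2013, 1, 1)), (-150, date(2013, 1, 2)), (200, date(2013, 1, 3)),
    (-250, date(2013, 1, 4)), (-300, date(2013, 1, 5)), (-310, date(2013, 1, 6)),
]

# Reset the streak at every non-negative balance, as the SUM-over-flags grouping does
start = None
for balance, d in rows:
    if balance >= 0:
        start = None
    elif start is None:
        start = d  # first negative day of the current streak

last_balance, last_date = rows[-1]
days_passed = (last_date - start).days + 1
print(last_balance, days_passed, start)  # -310 3 2013-01-04
```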
Demo at [sqlfiddle](http://sqlfiddle.com/#!4/4304a/1). | Try this, and use `gone_negative` to compute the specified column value for insert into another table:
```
select temp.account_no,
temp.balance,
temp.prev_balance,
temp.on_date,
temp.prev_on_date,
case
WHEN (temp.balance < 0 and temp.prev_balance >= 0) THEN
1
else
0
end as gone_negative
from (select account_no,
balance,
on_date,
lag(balance, 1, 0) OVER(partition by account_no ORDER BY account_no) prev_balance,
lag(on_date, 1) OVER(partition by account_no ORDER BY account_no) prev_on_date
from tblA
order by account_no) temp;
```
Hope this helps pal. | Query for getting previous date in oracle in specific scenario | [
"",
"sql",
"oracle",
""
] |
I am trying to select from one table a list of products ordered by price, year, name, and others....
The problem is that I must make zero values come last when sorting ascending.
My code is:
```
SELECT * FROM Product P
ORDER BY CASE WHEN @OrderBy='Date ASC' THEN Date END ASC,
CASE WHEN @OrderBy='Price ASC' THEN Price END ASC,
CASE WHEN @OrderBy='Title ASC' THEN Title END ASC,
CASE WHEN @OrderBy='' THEN Match END
```
This works but doesn't put the zeroes at the bottom of the list.
So, I tried to transform it (see next code), but it gave me the error *Incorrect syntax near ','*.
```
SELECT * FROM Product P
ORDER BY CASE WHEN @OrderBy='Price ASC' THEN
(case A.Price WHEN 0 THEN 1 ELSE 0 END,A.Price )
END ASC
```
I appreciate any help | You can do it by testing for price-ordering twice:
```
SELECT * FROM Product P
ORDER BY CASE WHEN @OrderBy='Date ASC' THEN Date END ASC,
CASE WHEN @OrderBy='Price ASC' THEN CASE WHEN Price = 0 THEN 1 ELSE 0 END END ASC,
CASE WHEN @OrderBy='Price ASC' THEN Price END ASC,
CASE WHEN @OrderBy='Title ASC' THEN Title END ASC,
CASE WHEN @OrderBy='' THEN Match END
```
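The two-key trick in isolation (SQLite, without the `@OrderBy` switching):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (title TEXT, price INTEGER)")
conn.executemany("INSERT INTO product VALUES (?,?)",
                 [("a", 0), ("b", 5), ("d", 3)])

# First key pushes zero prices to the end; second key sorts the rest ascending
rows = conn.execute("""
    SELECT title, price FROM product
    ORDER BY CASE WHEN price = 0 THEN 1 ELSE 0 END, price
""").fetchall()

print(rows)  # [('d', 3), ('b', 5), ('a', 0)]
```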
By the way, the implicit value of the case expression when `@orderBy` doesn't equal the string is null. When the sort column contains all nulls, it effectively disables sorting for that attribute. | You can try with this syntax:
```
SELECT *,
CASE WHEN @OrderBy = 'Price ASC' AND Price = 0 THEN 1 ELSE 0 END AS OrderPriceZeroLast
FROM Product P
ORDER BY OrderPriceZeroLast,
CASE WHEN @OrderBy = 'Date ASC' THEN Date END ASC,
CASE WHEN @OrderBy = 'Price ASC' THEN Price END ASC,
CASE WHEN @OrderBy = 'Title ASC' THEN Title END ASC,
CASE WHEN @OrderBy = '' THEN Match END
``` | 0 come last when sorting ascending | [
"",
"sql",
"sorting",
"select",
"sql-order-by",
""
] |
I am trying to build a vb.net application with a where clause to filter data using a DateTime field.
My table contains 5 fields and more than 10000 rows. I wanna use a DateTime field to find all the rows older than 7 years from now.
But this script will be re-used many times. So I don't wanna use this kind of where clause cause I don't wanna need to modify the where clause every time I wanna run the application :
```
select * from myTable WHERE myDateTimeField < '2006-09-07 00:00:00.000'
```
I'd like to find a way to write a where clause like this :
`select * from myTable WHERE myDateTimeField "is older than 7 years from NOW"`
I don't use VB.net very often (as you can see), so this thing is really bugging me | Just make use of `DateTime`:
```
Dim dateTime As DateTime
dateTime = DateTime.Now.AddYears(-7)
```
When you're building your SQL string just call `ToString` on your date (you can obviously format it however you need):
```
dateTime.ToString("MM/dd/yyyy")
``` | I think, provided you have SQL Server, this would be better:
```
strQuery = "select * from myTable WHERE myDateTimeField
<= DATEADD(YEAR, -7, GETDATE())"
``` | filter DateTime field with now as reference in WHERE clause | [
"",
"sql",
"vb.net",
"datetime",
"where-clause",
""
] |
I would like to count the number of sorted subgroups in the table below :
```
id nmb
1 11
2 12
3 13
4 22
5 23
6 31
7 32
8 33
9 11
10 12
11 13
12 12
13 13
14 21
15 22
16 11
17 12
18 13
19 14
```
And want to obtain something like this in postgresql 8.4 :
```
id nmb local
1 11 1
2 12 1
3 13 1
4 22 1
5 23 1
6 31 1
7 32 1
8 33 1
9 11 2
10 12 2
11 13 2
12 12 3
13 13 3
14 21 3
15 22 3
16 11 4
17 12 4
18 13 4
19 14 4
```
EDIT: the last few numbers on the 'local' column were wrong. Corrected!
Thank you. | It appears that you are trying to enumerate the groups, where a group starts with a lower `nmb` value than the previous row ("previous" defined by the `id` order).
The idea is to identify the start of a group by using `lag()`. Then take the cumulative sum to get the group identifier that you want:
```
select id, nmb, sum(StartFlag) over (order by id) as local
from (select id, nmb,
lag(nmb) over (order by id) as lastnmb,
(case when lag(nmb) over (order by id) < nmb then 0
else 1
end) as StartFlag
from t
) t
order by id;
``` | I thought I finally understood what you want, consecutively growing values:
```
select id, nmb,
sum(flag)
over (order by id
rows unbounded preceding) as local
from
(
select
id, nmb,
case
when lag(nmb)
over (order by id) < nmb
then 0
else 1
end as flag
from t
) as dt
order by id
```
But group 4 doesn't fit
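(The flag-and-running-sum logic can be checked procedurally against the corrected data:)

```python
data = [11, 12, 13, 22, 23, 31, 32, 33, 11, 12, 13, 12, 13, 21, 22, 11, 12, 13, 14]

# flag = 1 whenever the previous value is NOT strictly smaller (a new run starts);
# the running sum of flags is the group id, like sum(flag) OVER (ORDER BY id)
local, prev, group = [], None, 0
for nmb in data:
    if prev is None or prev >= nmb:
        group += 1
    local.append(group)
    prev = nmb

print(local)
# [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4]
```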
Edit: Now they fit :-) | Counting number of sorted subgroups in postgresql | [
"",
"sql",
"postgresql",
""
] |
I've got a few different SQL tables that I need to perform some counts on. This is for my job, so obviously the data is confidential, but I'll try to abstract it to something analogous. Imagine a University registrar's office...
Our first table contains a list of all courses that students have completed.
Table: `student_courses`
```
student_id | course_id
----------------------
123456 | MATH101
123456 | MATH203
785426 | PHYS305
... | ...
```
We also have tables for the different departments at our University, listing (among other things) the course ID and the year level of the course:
Table: `math_courses`
```
course_id | year
--------------------
MATH101 | 1
MATH201 | 2
MATH202 | 2
CALC103 | 1
STAT402 | 4
... | ...
```
Imagine similar `science_courses` and `general_courses` tables. The university has other departments, but we are only considering these three for now.
What we're looking for is a count of how many students have reached each combination of year/department.
For instance, if a student has taken both MATH101 and MATH201, their "year" for the math department will be considered "2".
Not all students have completed courses in all departments, and not all students have necessarily taken any courses in these three departments.
Basically, I'm looking to produce the following table:
```
max_math_year | max_science_year | max_general_year | student_count
---------------------------------------------------------------------
NULL | NULL | NULL | 39847
NULL | NULL | 1 | 172
NULL | NULL | 2 | 0
... | ... | ... | ...
4 | 4 | 4 | 4986
```
I've built a temporary table to store the maximum level for each student, and it seemed to work properly, so for sake of this exercise, assume we have the following table:
Table: `#student_maximums`
```
student_id | max_math_year | max_science_year | max_general_year
----------------------------------------------------------------
123465 | 2 | NULL | 1
782514 | 3 | 1 | NULL
874252 | NULL | NULL | NULL
... | ... | ... | ...
```
I want to return `0` when a row has no students who fit a certain combination, but if I run a count on the above table, it won't return zero-values.
Most of the time I'd do a `LEFT OUTER JOIN` to include the zeroes, but in this case, we're counting against data originally from four different tables. I've considered using a `CROSS JOIN` but that won't include `NULL` values in the rows, which I also need.
Thanks! | Based on both suggestions, I came up with the following. I performed the join on the coalesced values, so that the null rows would actually line up across the tables.
Note, I was able to insert rows into the courses tables. If I wasn't, I'd UNION the zero value rows in the query.
```
INSERT INTO math_courses VALUES (0, NULL)
INSERT INTO science_courses VALUES (0, NULL)
INSERT INTO general_courses VALUES (0, NULL)
SELECT
COALESCE(m.year, 0)
, COALESCE(s.year, 0)
, COALESCE(g.year, 0)
, COUNT(DISTINCT student_id)
FROM
math_courses m
CROSS JOIN
science_courses s
CROSS JOIN
general_courses g
LEFT OUTER JOIN
student_maximums sm
ON COALESCE(sm.max_math_year, 0) = COALESCE(m.year, 0)
AND COALESCE(sm.max_science_year, 0) = COALESCE(s.year, 0)
AND COALESCE(sm.max_general_year, 0) = COALESCE(g.year, 0)
GROUP BY
COALESCE(m.year, 0)
, COALESCE(s.year, 0)
, COALESCE(g.year, 0)
ORDER BY
COALESCE(m.year, 0)
, COALESCE(s.year, 0)
, COALESCE(g.year, 0)
``` | I'm not sure if I understood correctly, but is this what you want?
```
select
coalesce(max_math_year, 0)
,coalesce(max_science_year, 0)
,coalesce(max_general_year, 0)
,count(*)
from #student_maximums
group by
coalesce(max_math_year, 0)
,coalesce(max_science_year, 0)
,coalesce(max_general_year, 0)
``` | Counting zero-values when joining in T-SQL | [
"",
"sql",
"sql-server",
"database",
"join",
"null",
""
] |
We're running into a problem here and I can't figure out why it behaves how it behaves.
Given the following two (simplified) stored procedures in TSQL (SQL Server 2008R2)
```
create procedure [datetransaction1]
as
begin
begin try
begin transaction
declare @a datetime
exec datetransaction2 '2013-02-02 22:21', @a output
select @a
exec datetransaction2 '2013-020222:22', @a output
select @a
exec datetransaction2 '2013-02-02 22:23', @a output
select @a
commit transaction
end try
begin catch
print 'Catch'
end catch
end
```
and
```
create procedure [dbo].[datetransaction2] @text nvarchar(100), @res datetime OUTPUT
AS
BEGIN
BEGIN TRY
if (LEN(@text) = 16) SET @text = replace(@text, ' ', 'T') + ':00.000'
else if (LEN(@text) = 19) SET @text = replace(@text, ' ', 'T') + '.000'
else SET @text = replace(@text, ' ', 'T')
PRINT 'trydate:' + @text
SELECT @res =convert(datetime, @text, 126)
END TRY
BEGIN CATCH
PRINT ERROR_SEVERITY()
PRINT 'errordate:' + @text
END CATCH
END
```
If you then execute `exec datetransaction1`, we see that all 3 calls to `datetransaction2` are executed, with the first and last (as expected) running correctly, and the second one entering the `CATCH` block within `datetransaction2`.
So far, so good.
But then we land in the catch block of `datetransaction1` with the message that the transaction is uncommittable:
```
Msg 266, Level 16, State 2, Procedure datetransaction1, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
```
Which isn't supposed to happen (I think). We caught the errors in the sub procedures, so why would the transaction suddenly become uncommittable?
Can someone explain that to me?
Note that we can probably find a way around this, but I'm intrigued more by the idea behind it. Why does this transaction suddenly become uncommittable here? | The reason is: SQL Server dooms the transaction WHENEVER an error occurs, whatever the error is, whether it is in a TRY block or not, whether you saved a transaction state or not, whether the error occurs in a procedure or not, whatever you do.
When the error occurs in one of the procedure calls, the transaction is doomed. You can only roll it back completely (any savepoint will not help).
At the end, since the transaction is doomed, you cannot commit it...
Try this:
```
SET XACT_ABORT OFF -- pityful attempt to avoid the doom
BEGIN TRANSACTION
--
-- some useful TSQL instructions could be here
--
SAVE TRANSACTION SQL_SERVER_IS_GARBAGE -- another pityful attempt to do a partial restore
PRINT 'XACT_STATE='+CONVERT(varchar(10),XACT_STATE())
BEGIN TRY
DECLARE @n int
SELECT @n = CONVERT(int,'ABC') -- some very benign data error here (example)
COMMIT TRANSACTION -- will never reach here
END TRY
BEGIN CATCH
PRINT ERROR_MESSAGE()
PRINT 'XACT_STATE='+CONVERT(varchar(10),XACT_STATE())
IF XACT_STATE()=-1 BEGIN
PRINT 'The transaction is doomed, say thanks to Sql Server!'
PRINT 'CANNOT restore to the save point!'
-- You can just cry here and abort all, you lost all the useful work
ROLLBACK TRANSACTION
END
ELSE BEGIN
-- would restore before the error if the transaction was not doomed
ROLLBACK TRANSACTION SQL_SERVER_IS_GARBAGE -- will never reach here either!
END
END CATCH
``` | Since the second call to the datetransaction2 procedure caused a severity level 16 error, SQL Server automatically rolled back your transaction. That's the reason for the error you are seeing.
Here is a [really nice article](http://blogs.msdn.com/b/sqlserverfaq/archive/2011/05/11/errors-raised-with-severity-level-16-may-cause-transactions-into-doomed-state.aspx) explaining why the transaction gets into a doomed state when a severity level 16 error occurs.
To verify that it's getting rolled back automatically, I added the following line to your datetransaction2 proc: `print XACT_STATE()`
```
create procedure [dbo].[datetransaction2] @text nvarchar(100), @res datetime OUTPUT
AS
BEGIN
print 'Start'
print XACT_STATE()
BEGIN TRY
if (LEN(@text) = 16) SET @text = replace(@text, ' ', 'T') + ':00.000'
else if (LEN(@text) = 19) SET @text = replace(@text, ' ', 'T') + '.000'
else SET @text = replace(@text, ' ', 'T')
PRINT 'trydate:' + @text
SELECT @res =convert(datetime, @text, 126)
END TRY
BEGIN CATCH
print XACT_STATE()
print 'Catch'
PRINT ERROR_SEVERITY()
PRINT 'errordate:' + @text
END CATCH
print XACT_STATE()
print 'End'
END
``` | SQL Transaction uncommittable while using try..catch.. Why? | [
"",
"sql",
"sql-server",
"transactions",
"sql-server-2008-r2",
"try-catch",
""
] |
What is the best approach?
**Approach-I**
```
SELECT
SUM(CASE WHEN CODE = 'A' THEN ISNULL(UNIT,0.00)+ISNULL(UNIT_1,0.00) END) AS UNIT_SUM
FROM tblA
```
**Approach-II**
```
SELECT
ISNULL(SUM(CASE WHEN CODE = 'A' THEN UNIT+UNIT_1 END),0.00) AS UNIT_SUM
FROM tblA
```
1) My concern is: do I have to place ISNULL inside the CASE statement or outside the CASE statement?
Does it affect the summation of units, or do both queries give the same result?
And what happens if any UNIT column has a NULL value? Does it result in a NULL total?
2) Must I use ELSE in the CASE statement, as given below:
THEN ... **ELSE 0.00** END | **Answer to the original question:**
> *Do I have to place ISNULL inside CASE statement...?*
No you don't. [`SUM()`](http://technet.microsoft.com/en-us/library/ms187810%28v=sql.105%29.aspx) ignores `NULL`s. Therefore the presence of NULL values won't affect the result of summation. The exception is when all values are NULL, or you don't have any values at all. In that case you'll get `NULL` as a result.
> *Do I must use ELSE in CASE statement..?*
No you don't for the same reasons.
> *...Do I need to use `ISNULL()` outside `CASE`..?*
It depends on what you expect in the resultset in case when all values are `NULL`s or you don't have any values. If you want to have `0` or any other value then use `ISNULL()` or `COALESCE()`. Otherwise don't.
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/c5432/3)** demo
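The SUM()-ignores-NULLs behaviour is easy to verify locally as well; here is a small, self-contained sqlite3 session (SQLite follows the same aggregate semantics, with `COALESCE` standing in for `ISNULL`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblA (code TEXT, unit REAL)")
con.executemany("INSERT INTO tblA VALUES (?, ?)",
                [("A", 1.0), ("A", None), ("A", 2.0), ("B", 5.0)])

# NULLs are simply skipped by SUM():
partial = con.execute("SELECT SUM(unit) FROM tblA WHERE code = 'A'").fetchone()[0]

# ...but when *no* row matches, SUM() yields NULL, not 0:
empty = con.execute("SELECT SUM(unit) FROM tblA WHERE code = 'Z'").fetchone()[0]

# COALESCE around the aggregate turns that NULL into 0:
coalesced = con.execute(
    "SELECT COALESCE(SUM(unit), 0) FROM tblA WHERE code = 'Z'").fetchone()[0]

print(partial, empty, coalesced)  # 3.0 None 0
```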
**UPDATE**
Given the expression like one in your changed question
```
SUM(CASE WHEN CODE = 'A' THEN UNIT + UNIT_1 END)
```
you most likely want to use `ISNULL()` or `COALESCE()` around both `UNIT` and `UNIT_1` if you want to treat an absent value of one of these columns as zero values.
In that case if you want to get a zero in a resultset even if there are no values that match your `CASE` condition you **still need** to use `ISNULL()` or `COALESCE()` around `SUM()`.
```
SELECT ISNULL(SUM(CASE WHEN CODE = 'A'
THEN ISNULL(UNIT, 0) + ISNULL(UNIT_1, 0)
END), 0) AS UNIT_SUM
```
or
```
SELECT COALESCE(SUM(CASE WHEN CODE = 'A'
THEN COALESCE(UNIT, 0) + COALESCE(UNIT_1, 0)
END), 0) AS UNIT_SUM
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/c5432/6)** demo | You have other answers pointing out that you better put your filter `= 'A'` into where clause to speed up the query, and that's right thing to do.
I also want to note that your queries produce different results in case there are no rows to sum:
```
SELECT
ISNULL(SUM(CASE WHEN CODE = 'A' THEN UNIT END),0.00) AS UNIT_SUM
FROM tblA;
-- output: 0
SELECT
SUM(CASE WHEN CODE = 'A' THEN ISNULL(UNIT,0.00) END) AS UNIT_SUM
FROM tblA;
-- output: null
```
If you want to get the sum of all your rows and put this data into columns, you can use this query:
```
select
isnull(sum(case when code = 'A' then unit end), 0) as [A],
isnull(sum(case when code = 'B' then unit end), 0) as [B]
from tblA
-- filter out rows if there's more codes in your table
where code in ('A', 'B')
```
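The same conditional-aggregation pivot works in other engines too; here is a sqlite3 sketch of it (with `COALESCE` in place of T-SQL's `isnull`, toy data assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblA (code TEXT, unit REAL)")
con.executemany("INSERT INTO tblA VALUES (?, ?)",
                [("A", 1.0), ("A", 2.0), ("B", 5.0), ("C", 9.0)])

# One output column per code, via SUM over a CASE expression.
a_total, b_total = con.execute("""
    SELECT COALESCE(SUM(CASE WHEN code = 'A' THEN unit END), 0) AS A,
           COALESCE(SUM(CASE WHEN code = 'B' THEN unit END), 0) AS B
    FROM tblA
    WHERE code IN ('A', 'B')   -- filter out other codes early
""").fetchone()

print(a_total, b_total)  # 3.0 5.0
```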
**`sql fiddle demo`** | T-SQL help needed regarding CASE statement in SELECT clause | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am writing a MySQL query where I can get all records from the user table with duplicate email addresses.
This is the query I have so far; my phpMyAdmin keeps loading:
```
select personal_email, userid
from user
where personal_email in (
select personal_email
from user
group by personal_email
having count(personal_email) > 1);
```
If I use the query below I get all double emails just once:
```
select personal_email, count(personal_email)
from user
group by personal_email
having count(personal_email) > 1
```
The goal is getting all the records with duplicate emails. | Try the query with a JOIN instead of `IN`:
```
SELECT user.personal_email, user.userid
FROM user
INNER JOIN
( SELECT personal_email
FROM User
GROUP BY personal_email
HAVING COUNT(*) > 1
) dupe
ON dupe.personal_email = user.personal_email;
```
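A quick way to see the shape of the result is to run the same derived-table join against a throwaway sqlite3 database (toy data; column names assumed to match the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (userid INTEGER, personal_email TEXT)")
con.executemany("INSERT INTO user VALUES (?, ?)",
                [(1, "a@x.com"), (2, "a@x.com"), (3, "b@x.com"),
                 (4, "c@x.com"), (5, "c@x.com")])

# Every row whose email occurs more than once comes back, not just one per email.
rows = con.execute("""
    SELECT u.personal_email, u.userid
    FROM user u
    INNER JOIN (SELECT personal_email
                FROM user
                GROUP BY personal_email
                HAVING COUNT(*) > 1) dupe
      ON dupe.personal_email = u.personal_email
    ORDER BY u.userid
""").fetchall()

print(rows)
```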
MySQL often optimises INNER JOINs much better. | Perhaps the most efficient way to do this in MySQL is to use an `exists` clause:
```
select u.*
from user u
where exists (select 1
from user u2
where u.personal_email = u2.personal_email and
u.userid <> u2.userid
);
```
For optimal performance, create an index on `user(personal_email, userid)`. | Get all users with duplicate email addresses | [
"",
"mysql",
"sql",
""
] |
Let me explain my situation, it's really clearer than the title! I have two tables. Let's have a look here and then I will explain what I would like to do.
First table (lesson table)
```
+------------------+----------------+
| LessonCode | SubjectCode |
+------------------+----------------+
| 1 | 10 |
| 2 | 11 |
| 3 | 12 |
+------------------+----------------+
```
Second table (subject table)
```
+------------------+----------------+------------+
| subjectCode | SubjectName | teacherCode|
+------------------+----------------+------------+
| 10 | Maths | 15 |
| 11 | English | 20 |
| 12 | Greek | 30 |
+------------------+----------------+------------+
```
I would like to select from table `lesson` the subject that has `teacherCode=20`. Is it possible in one query? | To get all lessons of teacher X, you will have to join both tables:
```
select LessonCode from lesson l
inner join subject s on s.subjectCode = l.SubjectCode
where s.teacherCode = x
```
This basically does the following: It creates the cross product of the two tables and then deletes all lines which do not comply with the `on` clause. Then it deletes all lines which do not comply with the `where` clause. This leaves only lessons of a given teacher X. | Try this:
```
select *
from lesson l
join subject s on s.subjectCode=l.SubjectCode
where s.teacherCode=20
``` | Select a value in a first table according to another value of a second table (sql) | [
"",
"mysql",
"sql",
""
] |
I have two tables that I need to pull data from. The first table is applications and the second is verification\_status. Each application has 20 steps and the verification\_status contains a date of when that step was completed. I am trying to pull the application along with the completion dates from a number of specific steps.
Essentially what I want is a row that has all the applications data, along with the dates from the specific verification status rows matched based on applications\_id and verification\_step\_id.
I tried this and I am getting an error that says "Unknown column 'applications.id' in 'where clause'"
It works fine when I take out the subquery.
```
SELECT * FROM applications,
(SELECT date FROM verification_status WHERE application_id = applications.id AND verification_step_id = 1) as steponedate
LEFT JOIN application_details ON application_details.application_id = applications.id
WHERE application_details.application_status_id != 1 AND application_details.application_status_id != 2
``` | Give this a go:
```
SELECT
a.*,
ad.*,
vs.date as steponedate
FROM applications a
LEFT JOIN verification_status vs
ON vs.application_id = a.id
AND vs.verification_step_id = 1
LEFT JOIN application_details ad
ON ad.application_id = a.id
WHERE ad.application_status_id != 1
AND ad.application_status_id != 2;
``` | I am not certain that this is the problem, but you seem to be mixing ANSI89 and ANSI92 syntax in your query. I have never written a working query mixing the two together. Also, it seems as if the subquery you ran was supposed to be run for each row of data, not as a table.
I think the following query will likely work for you:
```
SELECT
applications.*,
(
SELECT
date
FROM
verification_status
WHERE
application_id = applications.id
AND verification_step_id = 1
) as steponedate
FROM
applications
LEFT JOIN
application_details
ON application_details.application_id = applications.id
WHERE
application_details.application_status_id != 1
AND application_details.application_status_id != 2
``` | Mysql query with multiple sub queries for each row | [
"",
"mysql",
"sql",
""
] |
I have to establish a connection to the DB using a connection string.
I am using **SQL Server 2008 R2**, for which I need a connection string.
It is not **SQLEXPRESS**.
Current String:
```
<connectionStrings>
<add name="ApplicationServices"
connectionString="server=USER-PC\SQL Server;database=TLE;Connection Timeout=40"
providerName="System.Data.SqlClient"/>
</connectionStrings>
```
But it is not working. I am getting this error:
> A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
 | The following connection string should probably work:
```
connectionString=
"server=USER-PC;Integrated Security=True;Initial Catalog=TLE;Connection Timeout=40;providerName=System.Data.SqlClient"
``` | Identify the sql server instance name. While installing SQL Server Express, you might have installed it under the default instance name (SQLEXPRESS). In that case you should use USER-PC\SQLEXPRESS
If you have not given the instance name and still you are unable to connect then try to give the IP address like this:
```
connectionString=
"server=Your_system_ip_address;Integrated Security=True;Initial Catalog=TLE;Connection Timeout=40;providerName=System.Data.SqlClient"
```
For more information regarding connection strings refer to [this](http://www.connectionstrings.com/sql-server/) | SQL string connection issue | [
"",
"sql",
"sql-server",
"sql-server-2008",
"connection-string",
"sqlconnection",
""
] |
I have the following data in a table called: dbo.DecodedCSVMessages\_Staging
**Here is a sample of the data:**
```
MMSI Message_ID Time Ship_Type Vessel_Name Latitude Longitude
2102930 3 2012-06-01 NULL NULL 56.528003 85.233443
2102930 5 2012-07-01 70 RIO_CUBAL NULL NULL
2109300 1 2012-08-01 NULL NULL 57.43234 131.123343
2109300 1 2012-09-01 NULL NULL 62.432345 121.123343
2109300 1 2012-09-02 NULL NULL 65.432345 140.123343
2109300 5 2012-08-01 70 RIO_CUBAL NULL NULL
2225555 1 2012-08-01 NULL NULL 66.432345 143.123343
2225555 1 2012-09-01 NULL NULL 25.432345 145.123343
2225555 5 2012-08-01 70 RIO_II NULL NULL
```
What I need is as follows:
* An MMSI number is a unique number that represents a ship. A ship sends out different information in unique Message types. I am interested in only Message\_ID 1's and 3's as they have Latitudinal and Longitudinal Information. MMSI's will be reoccuring in this database (as demonstrated in the data above)
* Problem is, I need information about the ship which is unfortunately only available in Message_ID 5's, such as Vessel_name and Ship_type.
* I only need ships which fall within the following longitude and latitude
* **Ship\_Type might change! In which case it must reflect the later date**
**where Latitude > 55 and Latitude <85 and Longitude > 50 and Longitude < 141;**
**The result of querying the data** would produce the following (which I will save to CSV format, as I am using this information to build point features in ArcGIS):
```
MMSI Message_ID Time Ship_Type Vessel_Name Latitude Longitude
2102930 3 2012-06-01 70 RIO_CUBAL 56.528003 85.233443
2109300 1 2012-08-01 70 RIO_CUBAL 57.43234 131.123343
2109300 1 2012-09-01 70 RIO_CUBAL 62.432345 121.123343
2109300 1 2012-09-02 70 RIO_CUBAL 65.432345 140.123343
2225555 1 2012-08-01 70 RIO_II 66.432345 143.123343
```
* So all the Message\_ID 5's are gone
* The second to last row is gone (as the Latitude was only 25.432345)
* The information Ship\_Type and Vessel\_Name has been added to the
Message\_ID 1's and 3's by way of linking using the MMSI.
RESTRAINTS:
1. Cannot create a new table (do not have access)
2. There are over a billion rows of data in this database (only about 4.3 million meet the latitude/longitude constraints, though)
**What might the query be? Thanks!** | Something like this should do it:
```
; WITH positions AS (
SELECT MMSI
, Message_ID
, "Time"
, Latitude
, Longitude
FROM dbo.DecodedCSVMessages_Staging
WHERE Message_ID IN (1, 3)
AND Latitude > 55
AND Latitude < 85
AND Longitude > 50
AND Longitude < 141
)
, details AS (
SELECT MMSI
, Ship_Type
, Vessel_Name
, Row_Number() OVER (PARTITION BY MMSI ORDER BY "Time" DESC) As row_num
FROM dbo.DecodedCSVMessages_Staging
WHERE Message_ID = 5
)
SELECT positions.MMSI
, positions.Message_ID
, positions."Time"
, details.Ship_Type
, details.Vessel_Name
, positions.Latitude
, positions.Longitude
FROM positions
INNER
JOIN details
ON details.MMSI = positions.MMSI
AND details.row_num = 1 -- Limit to "latest" ship details per MMSI
``` | I would think it relatively simple... something like..
```
select
MS.MMSI,
MS.Message_ID,
MS.Time,
Ship.Ship_Type,
Ship.Vessel_Name,
MS.Latitude,
MS.Longitude
from
dbo.DecodedCSVMessages_Staging MS
JOIN dbo.DecodedCSVMessages_Staging Ship
on MS.MMSI = Ship.MMSI
AND Ship.Message_ID = 5
where
MS.Message_ID in ( 1, 3 )
AND MS.Latitude > 55
AND MS.Latitude < 85
AND MS.Longitude > 50
AND MS.Longitude < 141
```
I would also suggest an index on
( MMSI, Message\_ID, Latitude, Longitude )
so it covers as much of the criteria you are looking for and join conditions as well | SQL AdvancedQuery | [
"",
"sql",
"sql-server-2008",
""
] |
I have following table structure:
```
MID AutoID MikatName
1 3 ABC
4 17 XYZ
9 20 MNS
15 26 MNVS
```
Now I want to form a query so that I can get or select the next record after a particular AutoID.
Meaning: what query do I have to form if I want the record next after AutoID 17? I.e. it is MID=9, AutoID=20, MikatName=MNS.
I tried forming it as:
```
select * from ejamatPass where AutoID>17
```
But it gives me both records having AutoID greater than 17. I wanted just the next record after AutoID 17.
Please help me. | You want to add a `limit` clause and `order by`:
```
select *
from ejamatPass
where AutoID>17
order by AutoID
limit 1;
```
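Since the question is tagged sqlite, here is a quick check of the same pattern with Python's sqlite3 and the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ejamatPass (MID INTEGER, AutoID INTEGER, MikatName TEXT)")
con.executemany("INSERT INTO ejamatPass VALUES (?, ?, ?)",
                [(1, 3, "ABC"), (4, 17, "XYZ"), (9, 20, "MNS"), (15, 26, "MNVS")])

# ORDER BY + LIMIT 1 picks exactly the next record after AutoID 17.
row = con.execute("""
    SELECT * FROM ejamatPass
    WHERE AutoID > 17
    ORDER BY AutoID
    LIMIT 1
""").fetchone()

print(row)  # (9, 20, 'MNS')
```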
By the way, SQL tables are inherently unordered. When you do a `select` there is no guarantee on the order of the results returned. You need an `order by` to ensure ordering. | ```
SELECT * FROM ejamatPass WHERE AutoID>17 ORDER BY AutoID ASC LIMIT 1;
``` | select next record against particular ID | [
"",
"sql",
"sqlite",
""
] |
I am hoping someone wouldn't mind helping with something that I am stuck on.
I need to produce a report which gives a count of items which have been created over a certain date range. For example, when a user would run this report on the 15/09/2013 it would show a count of any records which has a creation date of the 1st of the current month, or before.
The code I've tried to get working is below, but doesn't run. I wonder if I'd be better using datepart for this?
```
SELECT RECORDTYPE, COUNT(RECORDTYPE) AS 'COUNT OF RECORDS'
FROM VIEW_GLOBALSEARCH
WHERE DATEPART(YYYY,CREATIONDATE) = DATEPART(YYYY, GETDATE ())
AND DATEPART(MM,CREATIONDATE) = DATEPART(MM, GETDATE ())
AND DATEPART(DD,CREATIONDATE)= '01'
OR CREATIONDATE <= DATEPART(MM, CREATIONDATE) = DATEPART(MM, GETDATE())
DATEPART(YYYY,CREATIONDATE) = DATEPART(YYYY, GETDATE ())
```
Any advice is appreciated - Sorry if this is basic. | For "a creation date of the 1st of the current month, or before," use the following WHERE clause. The second piece of this will always return the first day of the current month.
`WHERE CREATIONDATE <= DATEADD(m, DATEDIFF(m, 0, getdate()), 0)` | This will get all records up through 8/31.
```
SELECT RECORDTYPE, COUNT(RECORDTYPE) AS 'COUNT OF RECORDS'
FROM VIEW_GLOBALSEARCH
WHERE Year(CREATOINDATE)
``` | SQL: Datepart searches | [
"",
"sql",
"t-sql",
""
] |
I have a table named stocks with the following columns:
```
Stockcode High Low Close Date
MSFT | 100 | 90 | 95 |2013-09-18
MSFT | 105 | 96 | 97 |2013-09-17
MSFT | 104 | 92 | 91 |2013-09-16
MSFT | 101 | 88 | 100 |2013-09-13
```
I want to list the lowest value from the Low column for the last 3 days from the Date column.
I tried the following query but it just gives me the lowest value from all days rather than the last 3 days.
```
SELECT `LOW`
FROM `stocksdata`
WHERE `STOCKSYMBOL` = "MSFT"
ORDER by TIMESTAMP
DESC LIMIT 1;
```
How could I `SELECT` the values from the last 3 days ? | By the looks of your data you're looking for the lowest value over the last three trading days:
```
select min(low) from(
SELECT
`LOW`
FROM `stocksdata`
WHERE
`STOCKSYMBOL` = "MSFT"
ORDER by
TIMESTAMP DESC LIMIT 3) ltd
``` | ```
SELECT LOW
FROM stocksdata
WHERE STOCKSYMBOL = 'MSFT'
AND date BETWEEN DATE_ADD(NOW(),INTERVAL -3 DAY) AND NOW()
ORDER by date DESC LIMIT 1
``` | How to get the Smallest value from the Top 3 rows of the table in mysql | [
"",
"mysql",
"sql",
""
] |
I am trying to return just the first result of a query after it is sorted in alphabetical order:
```
SELECT MIN(DisplayName) FROM [User] GROUP BY DisplayName
```
I believed the above would do the trick. But it is still returning all results in alphabetical order. How do I get the desired results? | You don't need to `GROUP BY`:
```
SELECT MIN(DisplayName)
FROM [User]
```
OR:
```
SELECT TOP 1 DisplayName
FROM [User]
ORDER BY DisplayName
```
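The difference between the two is easy to see on toy data; a sqlite3 sketch (table renamed to `users` here to avoid the bracketed name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (DisplayName TEXT)")
con.executemany("INSERT INTO users VALUES (?)", [("Carol",), ("Alice",), ("Bob",)])

# GROUP BY DisplayName yields one row per distinct name...
grouped = con.execute(
    "SELECT MIN(DisplayName) FROM users GROUP BY DisplayName").fetchall()

# ...while a bare MIN() collapses everything to a single row.
first = con.execute("SELECT MIN(DisplayName) FROM users").fetchone()[0]

print(len(grouped), first)  # 3 Alice
```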
`GROUP BY` will return each distinct value/combination of values for the fields listed. | Only the first row? Simply remove the group by:
```
SELECT MIN(DisplayName) FROM [User]
``` | Return only the first alphabetical result of SELECT | [
"",
"sql",
"sql-server",
""
] |
The first table:
```
day | title
1 | "Lorem"
2 | "Ipsum"
```
The second table:
```
day | title
2 | "Amor"
3 | "Zeus"
```
Now I need such a result:
```
day | title
1 | "Lorem"
2 | "Amor"
3 | "Zeus"
```
How to do that with a single query?
**Edit:**
"Zeus" added to the second table. | <http://sqlfiddle.com/#!2/0dc08/1>
```
SELECT t1.Day, COALESCE(t2.Word, t1.Word) AS Word
FROM Table1 t1
LEFT JOIN Table2 t2 ON t2.Day = t1.Day
UNION ALL
SELECT t2.Day, t2.Word
FROM Table2 t2
LEFT JOIN Table1 t1 ON t1.Day = t2.Day
WHERE t1.Day IS NULL
```
The code above prefers t2 to t1 when a day is found in both tables. And the `UNION ALL` at the bottom gets all records that are JUST in t2. If that case can't happen, then remove the `UNION ALL` statement and below. | ```
SELECT t1.day, COALESCE(t2.title, t1.title) title
FROM t1 LEFT JOIN t2 on t1.day = t2.day
```
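For reference, here is how the LEFT JOIN + COALESCE overwrite behaves on the sample data, sketched with Python's sqlite3 (toy tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (day INTEGER, title TEXT);
    CREATE TABLE t2 (day INTEGER, title TEXT);
    INSERT INTO t1 VALUES (1, 'Lorem'), (2, 'Ipsum');
    INSERT INTO t2 VALUES (2, 'Amor');
""")

# t2's title wins whenever both tables have the day.
rows = con.execute("""
    SELECT t1.day, COALESCE(t2.title, t1.title) AS title
    FROM t1 LEFT JOIN t2 ON t1.day = t2.day
    ORDER BY t1.day
""").fetchall()

print(rows)  # [(1, 'Lorem'), (2, 'Amor')]
```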
## Edit:
Following @Glavić's comment, here is a solution where there is data in `t2` not in `t1` that should be considered:
```
SELECT t1.day, COALESCE(t2.title, t1.title) title
FROM t1 LEFT JOIN t2 on t1.day = t2.day
UNION ALL
SELECT t2.day, t2.title
FROM t2 LEFT JOIN t1 on t2.day = t1.day
WHERE t1.day IS NULL
``` | MySQL How can I select data from two tables so that rows from the second one overwrote those from the first one? | [
"",
"mysql",
"sql",
"database",
""
] |
I have a situation where I want to add a unique constraint only when there is a certain value in another field.
For example, the table is:
ID
CategoryName
Name
Value
CategoryID
The constraint will be on ID, CategoryName and Name only if CategoryID is 0.
is it possible? | ```
CREATE UNIQUE NONCLUSTERED INDEX [UQ_ID_CategoryName_Name] ON TableName
(
ID, CategoryName, Name
)
WHERE (CategoryID = 0)
``` | One way to achieve this would be to have two tables, one for `categoryId` 0 and one for the rest. You can define the uniqueness constraint on the first table, and use a view and union any time you want to use them as a single table. | Add unique constraint depending on value in SQL | [
"",
"sql",
"sql-server",
""
] |
Using a SQL tool like SQL Developer / Toad for Oracle
Is it possible to write a SQL query that will do the following
```
SELECT * FROM TABLE
WHERE COLUMN1 IN CSV_FILE
```
The CSV file is just one column of data with no delimiters.
How can I achieve this?
**Constraints**
I cannot create a temp table to insert CSV file (no create permissions)
The data I am using of this column is the only index in that table so I cannot use other columns to query or else it will be really slow.
Thanks | Creating an external table is the best way. If you don't have permission, then the other way is to move the file to the path of an Oracle directory (Oracle object - Directory), and with the help of utl_file read the file, loop through it, and do your operation inside a PL/SQL block, which is rather tedious.
See the examples for using utl_file - <http://psoug.org/reference/utl_file.html>
But it's better if you try to get create access. | 1. Toad for Oracle data import (uses sqlldr internally)
* Create a temp table and load the data using this utility and select the values
2. External tables
* Create external table, load the data through the same and select the values. | Finding rows in table using input from a CSV file - SQL | [
"",
"sql",
"oracle-sqldeveloper",
"toad",
""
] |
I have two user tables with same schema.
```
| useraward_new | | useraward_his |
| User | Award | | User | Award |
| user1 | 5 | | user1 | 1 |
| user2 | 2 | | user1 | 3 |
| | | | user2 | 4 |
| | | | user3 | 5 |
```
useraward\_new contains the data on newest year and useraward\_his contains historical data.
The difference is that 'new'.User is unique but 'his'.User isn't so 'his' may contain duplicated user data.
I would like to have
```
| useraward_total |
| User | Award |
| user1 | 9 |
| user2 | 6 |
```
UNION works fine.
But it returns all the data contained in the historical table, while only data for users in the new table is needed.
I have tried various combinations of JOIN, UNION, SUM, ... but no luck.
Any help would be appreciated, thanks! | ```
SELECT n.[User], n.Award + ISNULL(SUM(h.Award),0) AS Award
FROM useraward_new n
LEFT JOIN useraward_his h ON h.[User] = n.[User]
GROUP BY n.[User], n.Award
``` | You need to select from the new table and do an inner join on the historical table:
```
SELECT User, SUM(Award)
FROM useraward_new N
INNER JOIN useraward_his H ON N.User = H.User
GROUP BY User
```
edit: I guess that assumes that there is always going to be an equivalent user in the historical table. Otherwise Markus' answer will include any users in new which aren't in the historical table. | merge two tables and get sum without select all from huge table | [
"",
"sql",
"sql-server",
"sum",
""
] |
Here is what my data looks like:
```
item_ID | group_ID | count_of_items_in_group
2|ABC|3
5|ABC|3
9|ABC|3
29|DEF|3
3|DEF|3
4|DEF|3
200|XYZ|2
300|XYZ|2
600|GHI|1
```
**SQL Fiddle:** <http://sqlfiddle.com/#!2/dfe09/1>
For each group, I want to limit the number of item\_IDs returned to a max of 2. I do not care which 2 are returned. If the group has less than 2 rows return only the 1 row.
I can't write a select top \* 2 for each group and union the selects because I have a few hundred groups.
I'm not sure where to start, your help is greatly appreciated.
Using MS SQL 2005
(Table layout is exactly as it is in the sample; it is a view based on a number of queries.) | What about using [ROW_NUMBER()](http://technet.microsoft.com/en-us/library/ms186734.aspx) to enumerate the rows of your data set, partitioned by each group_id?
And after that just return the rows where it is less than or equal to 2, or any number that you want:
```
SELECT * FROM
(
select item_id, group_id, count_of_items,
ROW_NUMBER() OVER(PARTITION BY group_id ORDER BY count_of_items DESC)
AS RN
from items_in_groups
) A
WHERE RN <= 2
```
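If you just want to sanity-check the at-most-two-per-group logic, here is the same idea in plain Python over the sample rows (an illustration only; the SQL above is what you would actually run):

```python
from itertools import groupby

rows = [(2, "ABC"), (5, "ABC"), (9, "ABC"),
        (29, "DEF"), (3, "DEF"), (4, "DEF"),
        (200, "XYZ"), (300, "XYZ"), (600, "GHI")]

# Number the rows within each group_ID and keep only the first two,
# mirroring ROW_NUMBER() OVER (PARTITION BY group_id) ... WHERE RN <= 2.
limited = []
for _, group in groupby(sorted(rows, key=lambda r: r[1]), key=lambda r: r[1]):
    limited.extend(list(group)[:2])

counts = {}
for _, g in limited:
    counts[g] = counts.get(g, 0) + 1

print(counts)  # each group capped at two rows; GHI keeps its single row
```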
Here is the [Sql Fiddle](http://sqlfiddle.com/#!3/dfe09/17) | Use the `ROW_NUMBER()` function for this:
```
SELECT *
FROM (select *,ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY item_id) as RowRank
from items_in_groups
)sub
WHERE RowRank <=2
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/dfe09/13/0)
The `ROW_NUMBER()` function assigns a number to each row. `PARTITION BY` is optional, but used to start the numbering over for each value in that group, ie: if you `PARTITION BY group_id` then for each unique `group_id` value the numbering would start over at 1. `ORDER BY` of course is used to define how the counting should go, and is required in the `ROW_NUMBER()` function. | SQL - Restrict the number of rows returned based on a count of rows | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
## The goal
Concatenate a parameter with a `%` operator of LIKE using MySQL.
## The problem
There's the following query on my database and it returns me the results very well:
```
SELECT people.LOGIN, people.EMAIL FROM people
WHERE people.LOGIN LIKE 'o%'
ORDER BY people.LOGIN ASC
LIMIT 10;
```
But I want to call this query from a stored procedure and, instead of using `'o%'`, use `'<myvariable>%'` — and I'm not managing to do this.
## The scenario
I've tried the following on my procedure:
```
BEGIN
SELECT people.EMAIL FROM people
WHERE people.LOGIN LIKE CONCAT(expression, '%')
ORDER BY people.LOGIN ASC
LIMIT 10;
END
```
The result? **Empty.** This query returns me nothing.
**Does anyone have an idea to help me?** | Often what actually gets passed in is not what we think.
Check that the parameter:
* is not null
* does not have leading or trailing whitespace
Try this to guard against the above:
```
WHERE people.LOGIN LIKE CONCAT(TRIM(IFNULL(expression, '')), '%')
``` | You need to use [prepared statements](http://dev.mysql.com/doc/refman/5.6/en/sql-syntax-prepared-statements.html).
Create a string with a query, and execute it using prepared statements. Something like this -
```
SET @query = CONCAT('SELECT * FROM table1 WHERE column1 LIKE ''', expression, '%''');
PREPARE stmt FROM @query;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
``` | Concatening LIKE with Stored Procedure's parameter using MySQL | [
"",
"mysql",
"sql",
"stored-procedures",
""
] |
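The pattern-building idea from the accepted answer above can be sketched with SQLite, whose `||` operator plays the role of MySQL's `CONCAT(expression, '%')`; the table contents here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (login TEXT, email TEXT);
    INSERT INTO people VALUES ('olivia', 'olivia@example.com'),
                              ('oscar', 'oscar@example.com'),
                              ('amy', 'amy@example.com');
""")
# Build the LIKE pattern inside the SQL: parameter || '%' is the SQLite
# equivalent of MySQL's CONCAT(expression, '%').
rows = conn.execute(
    "SELECT email FROM people WHERE login LIKE ? || '%' ORDER BY login LIMIT 10",
    ("o",),
).fetchall()
```

Binding the prefix as a parameter (rather than interpolating it) also sidesteps quoting problems like the leading/trailing whitespace the accepted answer warns about.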
I have the following query in MySQL. The query logic is correct; however, since there are over 10,000 seeker emails and 24,000 guest emails, it takes a long time to execute. Is there a better way to do the same thing?
```
SELECT g.email, g.name
FROM guest g
WHERE g.type='guest'
AND g.email NOT IN (SELECT email FROM seeker GROUP BY email)
GROUP BY g.email
``` | Try this:
```
SELECT
g.email, g.name
FROM
guest g
LEFT JOIN
seeker s
ON
s.email = g.email
WHERE
g.type = 'guest'
AND
s.email IS NULL
GROUP BY
g.email;
```
<http://sqlfiddle.com/#!2/d94bf/5> | ```
SELECT DISTINCT g.email, g.name
FROM guest g
WHERE g.type='guest'
AND NOT EXISTS (SELECT 1 FROM seeker s WHERE g.email = s.email)
```
And be sure you have an index on seeker.email, guest.type, guest.email, which would be awesome if the columns are `NOT NULL` on top of that. | MySQL optimize a NOT IN query how to? | [
"",
"mysql",
"sql",
""
] |
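The `LEFT JOIN ... IS NULL` anti-join from the accepted answer above, sketched against SQLite via Python's `sqlite3` (the sample emails are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE guest (email TEXT, name TEXT, type TEXT);
    CREATE TABLE seeker (email TEXT);
    CREATE INDEX idx_seeker_email ON seeker(email);
    INSERT INTO guest VALUES ('a@example.com', 'Ann', 'guest'),
                             ('b@example.com', 'Bob', 'guest'),
                             ('c@example.com', 'Cid', 'host');
    INSERT INTO seeker VALUES ('b@example.com');
""")
# Guests with no matching seeker row: the join finds no partner,
# so s.email comes back NULL and the WHERE keeps the row.
rows = conn.execute("""
    SELECT g.email, g.name
    FROM guest g
    LEFT JOIN seeker s ON s.email = g.email
    WHERE g.type = 'guest' AND s.email IS NULL
    GROUP BY g.email
""").fetchall()
```

The index on `seeker(email)` is the part that makes this fast at the question's scale.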
```
select
country_olympic_name,
SUM(part_gold) as 'Number of Gold Medals'
From
games.country,
games.participation
where
participation.country_isocode = country.country_isocode
group by
country_olympic_name;
```
I have been getting the error ORA-00923: FROM keyword not found where expected and do not know why, please help | Identifiers need to be quoted with double quotes (`"`). Single quotes (`'`) denote a character *value* (not a "name").
Therefore you need to use:
```
SUM(part_gold) as "Number of Gold Medals"
```
More details in the manual:
* [Database Object Names and Qualifiers](https://docs.oracle.com/database/121/SQLRF/sql_elements008.htm#SQLRF00223)
* [Text literals](https://docs.oracle.com/database/121/SQLRF/sql_elements003.htm#SQLRF00218) | ## Add comma after SELECT QUERY
---
In my case, I had this query
```
SELECT BANK_NAME
DECODE (SWIFT_CODE, 'BRDEROBU', 'BRD',
'NO RESULT') RESULT
FROM BANK_GAR;
```
As you may see, **I didn't have the comma** after the `SELECT BANK_NAME` line.
The correct query is:
```
SELECT BANK_NAME,
DECODE (SWIFT_CODE, 'BRDEROBU', 'BRD',
'NO RESULT') RESULT
FROM BANK_GAR;
``` | Error (ORA-00923: FROM keyword not found where expected) | [
"",
"sql",
"oracle",
""
] |
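A small check of the quoting rule above: SQLite also treats double quotes as identifier quotes, so the corrected alias can be demonstrated there (Oracle is what actually raises ORA-00923; the table here is a stand-in for the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE participation (part_gold INT)")
conn.executemany("INSERT INTO participation VALUES (?)", [(1,), (2,)])

# Double quotes mark an identifier, so a multi-word alias is legal;
# single quotes here would denote a string literal instead.
cur = conn.execute(
    'SELECT SUM(part_gold) AS "Number of Gold Medals" FROM participation'
)
column_name = cur.description[0][0]
total = cur.fetchone()[0]
```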
I've been trying to count the number of columns in a table in my database using the below statement:
```
rs = st.executeQuery("SELECT COUNT(*) FROM information_schema.columns WHERE table_name = '"+TableName+"'");
while (rs.next()) {
i = rs.getInt(1);
}
```
For the table below, this code returns "6". If it only counted the columns it should return 3, and if it counted the columns of every record the returned value should be 12. My question is: how come I get the value 6?
```
|Username| |Password| |Level|
|Admin | |Admin | |1 |
|User | |User | |2 |
|Staff | |Staff | |3 |
|User1 | |User1 | |2 |
```
And how can I properly count the columns of a table? | The following query returns the number of columns that you have. If you want to get 12, you would have to multiply by the number of rows in the table:
```
SELECT COUNT(*)
FROM `INFORMATION_SCHEMA`.`COLUMNS`
WHERE `TABLE_SCHEMA`='Yourdatabase'
AND `TABLE_NAME`='yourtable';
```
See it in [sqlfiddle](http://sqlfiddle.com/#!2/794e8/6) | Also specify the database; you might have a table with the same name in another database:
```
...FROM information_schema.columns WHERE table_schema = 'myDB'...
``` | Counting the number of columns in the database | [
"",
"mysql",
"sql",
""
] |
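SQLite has no `INFORMATION_SCHEMA`; its analogue for counting a table's columns is `PRAGMA table_info`, which makes the point above easy to verify (in MySQL, remember to also filter on `TABLE_SCHEMA`, or a same-named table in another database inflates the count):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (Username TEXT, Password TEXT, Level INT)")

# PRAGMA table_info returns one row per column; field 1 is the column name.
columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
column_count = len(columns)
```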
I have a table that contains a text field (summary). I recently received a request to query that field for any occurrences of a long list of strings. For example, I'd like to search the "summary" field for *any* occurrences of these strings:
* apple
* banana
* carrot
* dog
* elephant
* ... x 50 more items
* zebra
Is there a simple query I can write that would return the IDs of the rows that contain any of these values? | You could put them in one giant regular expression:
```
select t.*
from t
where summary similar to '%((apple)|(banana)|(carrot)|(dog)| . . .|(zebra))%';
```
An alternative, if you want more flexibility and care even less about performance:
```
with tosearch as (
select '%apple%' as pattern union all
. . .
select '%zebra%'
)
select t.*
from t join
tosearch
on t.summary like tosearch.pattern;
``` | The simplest way that comes to mind is something like this:
```
SELECT "ID" FROM table WHERE "summary" IN ('apple', 'banana', 'carrot', 'dog', ....)
```
Try this out.
**EDIT AFTER COMMENT:** use the [similar to](http://www.postgresql.org/docs/8.3/static/functions-matching.html) operator:
```
select "ID" from table where upper("summary") similar to upper('%((apple)|(banana)|...|(zebra))%')
``` | Postgresql query to search text field for multiple strings | [
"",
"sql",
"postgresql",
""
] |
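The "table of patterns" idea from the accepted answer above, sketched in SQLite; the `DISTINCT` guards against a row that happens to match more than one pattern (data and patterns are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INT, summary TEXT);
    INSERT INTO t VALUES (1, 'I saw a dog today'),
                         (2, 'nothing relevant here'),
                         (3, 'apple pie recipe');
    CREATE TABLE tosearch (pattern TEXT);
    INSERT INTO tosearch VALUES ('%apple%'), ('%dog%'), ('%zebra%');
""")
# Join each row against every pattern; a LIKE hit keeps the pair.
ids = [r[0] for r in conn.execute("""
    SELECT DISTINCT t.id
    FROM t JOIN tosearch ON t.summary LIKE tosearch.pattern
    ORDER BY t.id
""")]
```

Keeping the 50+ strings in a table rather than one giant pattern also makes the list easy to maintain.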
Is it possible to just create a table in MSSQL without columns?
I'm asking this because I'll have to check for every column whether it already exists or not.
```
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Tags]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].Tags(
[Id] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [PK_Tags] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
END
GO
```
The execution might break when creating the column Id if the table and the column already exist.
This would be ideally:
```
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Tags]') AND type in (N'U'))
CREATE TABLE [dbo].Tags
GO
if NOT EXISTS (select column_name from INFORMATION_SCHEMA.columns where table_name = 'Tags' and column_name = 'Id')
alter table MyTable add MyColumn int IDENTITY(1,1) NOT NULL
GO
-- add new primary key constraint on new column
ALTER TABLE dbo.Tags
ADD CONSTRAINT PK_Tags
PRIMARY KEY CLUSTERED ([Id] ASC)
GO
```
I can't find an example on the second piece of code. But I can imagine that a table can't be created with 0 columns.
Any suggestions?
[Edit]
Some more context: some tables exist, some don't. Some columns exist, some don't. I want to create a script which could be executed at any time without breaking or causing error messages like "the table/column already exists". | I think you can achieve what you're trying to do using the following:
Create the table with the ID and only check for the existence of the `Id` column if you didn't have to create the table.
```
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Tags]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].Tags (Id int IDENTITY(1,1) NOT NULL)
END
ELSE
BEGIN
        IF NOT EXISTS (select column_name from INFORMATION_SCHEMA.columns where table_name = 'Tags' and column_name = 'Id')
        BEGIN
            ALTER TABLE [dbo].Tags ADD Id int IDENTITY(1,1) NOT NULL
            -- add new primary key constraint on new column
            ALTER TABLE dbo.Tags
            ADD CONSTRAINT PK_Tags
            PRIMARY KEY CLUSTERED ([Id] ASC)
        END
    END
GO
``` | This code
```
create table tab
```
raises an error
> Incorrect syntax near 'tab'.
and this code
```
create table tab1
(
id int
)
alter table tab1 drop id
```
raises an error
> ALTER TABLE 'tab1' failed. Dropping all columns in a table is not
> allowed.
so a table without columns is not possible. | Create Table if not exists without columns | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
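The create-if-missing / add-column-if-missing flow above can be sketched with SQLite's equivalents (`CREATE TABLE IF NOT EXISTS` plus a `PRAGMA table_info` check standing in for `sys.objects`/`INFORMATION_SCHEMA`); table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Create the table only if it is missing; it always has at least one column.
conn.execute("CREATE TABLE IF NOT EXISTS tags (id INTEGER PRIMARY KEY)")

# Add the extra column only if it is not already there, so the script
# can be re-run any number of times without errors.
existing = {row[1] for row in conn.execute("PRAGMA table_info(tags)")}
if "label" not in existing:
    conn.execute("ALTER TABLE tags ADD COLUMN label TEXT")

columns = {row[1] for row in conn.execute("PRAGMA table_info(tags)")}
```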
Say suppose I have two mysql queries which are something like this
```
if(!empty($company))
{
$q1="SELECT * FROM TABLE1 WHERE company='$company'";
}
else
{
$q1="SELECT * FROM TABLE1;
}
$r1=mysqli_query($q1);
if(!empty($color))
{
$q2="SELECT * FROM TABLE1 WHERE color='$color'";
}
else
{
$q2="SELECT * FROM TABLE1;
}
$r2=mysqli_query($q2);
```
While displaying the results I tried comparing the pid (an auto-incrementing primary key):
```
if($r1['pid'] == $r2['pid'])
{
/* display the contents */
}
```
But obviously this won't work, because if the pids are not equal then nothing is displayed. Moreover, the two result sets will not advance at the same pace: the next pid in r1 might be 23 whereas in r2 it is only 12, so nothing gets displayed.
The query with fewer results will obviously drive the filter. How can I set the smaller pid equal to the greater one within one cycle of the while loop that displays the results? If that can be done, then I guess my problem is solved too.
How can I achieve this?
I simply want to display the results that are common to both the queries.
**PS**: These queries are being generated dynamically so if any of $color or $company is empty then everything will be selected from the table.
**UPDATE: Example**
r1 returns this
```
pid company color
1 acer red
3 hp red
5 logitech red
6 compaq red
7 microsoft red
8 lenovo red
```
AND r2 returns this
```
pid company color
5 logitech red
6 compaq red
7 microsoft red
13 nvidia red
```
From the above results we see that pids 5, 6 and 7 exist in both r1 and r2. How can I display all the columns for these pids? | If your application variables are blank when unassigned, you may combine the two conditions, *and* all of your application logic (testing for blank in app code), with this single, simple but elegant query:
```
SELECT *
FROM TABLE1
WHERE '$company' IN ('', company)
AND '$color' IN ('', color)
```
Note how by reversing the usual way you use `IN (...)`, you capture the ignore-if-blank logic with a minimum of code. | Usually this is done with `INTERSECT`. MySQL, however, does not support `INTERSECT`, so you have to use a `JOIN`:
`SELECT * FROM TABLE1 a WHERE company='$company' INNER JOIN
(SELECT * FROM TABLE1 WHERE color='$color') b ON (a.pid = b.pid)`
Of course, if your case is so simple, you can just add the WHERE clauses
`SELECT * FROM TABLE1 a WHERE company='$company' AND color='$color'`
This is how you'd build the query dynamically, bases on whether company or color is selected:
```
$q1="SELECT * FROM TABLE1 WHERE 1=1";
if(!empty($company))
{
$q1 .=" AND company='$company'";
}
if(!empty($color))
{
$q1 .=" AND color='$color'";
}
$r=mysqli_query($q1);
``` | How to fetch the common result of two queries? | [
"",
"mysql",
"sql",
""
] |
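The ignore-if-blank trick from the accepted answer above (`'$param' IN ('', column)`), sketched with bound parameters instead of string interpolation — which also avoids the SQL injection risk in the original PHP. Data and names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (pid INT, company TEXT, color TEXT);
    INSERT INTO table1 VALUES (5, 'logitech', 'red'),
                              (6, 'compaq', 'red'),
                              (13, 'nvidia', 'green');
""")

def search(company, color):
    # An empty string means "do not filter on this column":
    # '' IN ('', column) is always true, so the condition drops out.
    return [r[0] for r in conn.execute(
        "SELECT pid FROM table1"
        " WHERE ? IN ('', company) AND ? IN ('', color) ORDER BY pid",
        (company, color),
    )]
```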
Using [this solution](https://stackoverflow.com/questions/16062825/combine-two-columns-in-one-column), I tried to use COALESCE as part of a MySQL query that outputs to a CSV file, using `SELECT ... AS` to name the columns when exporting the data.
```
SELECT FirstName AS First_Name
, LastName AS Last_Name
, ContactPhoneAreaCode1
, ContactPhoneNumber1
, COALESCE(ContactPhoneAreaCode1, ContactPhoneNumber1) AS Contact_Phone
FROM TABLE1
```
I wanted 3 columns: First\_Name, Last\_Name and Contact\_Phone
I am getting 5 columns: First\_Name, Last\_Name, ContactPhoneAreaCode1, ContactPhoneNumber1 and Contact\_Phone
How do I hide the merging of ContactPhoneAreaCode1, ContactPhoneNumber1 into a single column for Contact\_Phone from within the query? | If both columns can contain `NULL`, but you still want to merge them to a single string, the easiest solution is to use [**CONCAT\_WS()**](http://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_concat-ws):
```
SELECT FirstName AS First_Name
, LastName AS Last_Name
, CONCAT_WS('', ContactPhoneAreaCode1, ContactPhoneNumber1) AS Contact_Phone
FROM TABLE1
```
This way you won't have to check for `NULL`-ness of each column separately.
Alternatively, if both columns are actually defined as `NOT NULL`, [**CONCAT()**](http://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_concat) will be quite enough:
```
SELECT FirstName AS First_Name
, LastName AS Last_Name
, CONCAT(ContactPhoneAreaCode1, ContactPhoneNumber1) AS Contact_Phone
FROM TABLE1
```
As for `COALESCE`, it's a bit of a different beast: given a list of arguments, it returns the first one that's not `NULL`. | You don't need to list ContactPhoneAreaCode1 and ContactPhoneNumber1
```
SELECT FirstName AS First_Name,
LastName AS Last_Name,
COALESCE(ContactPhoneAreaCode1, ContactPhoneNumber1) AS Contact_Phone
FROM TABLE1
``` | MySQL SELECT AS combine two columns into one | [
"",
"mysql",
"sql",
""
] |
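The NULL-handling point from the accepted answer above is easy to reproduce in SQLite: plain concatenation propagates NULL, while wrapping each part in `COALESCE` (the effect `CONCAT_WS` gives you in MySQL) does not. SQLite only gained `CONCAT_WS` in 3.44, so `COALESCE` is used here; the phone data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (area_code TEXT, phone_number TEXT);
    INSERT INTO table1 VALUES ('212', '5551234'), (NULL, '5559999');
""")

# Plain concatenation: any NULL part makes the whole result NULL ...
raw = conn.execute(
    "SELECT area_code || phone_number FROM table1"
).fetchall()

# ... while COALESCE-ing each part to '' keeps the non-NULL half.
safe = conn.execute(
    "SELECT COALESCE(area_code, '') || COALESCE(phone_number, '') FROM table1"
).fetchall()
```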
I'm using Microsoft Sql Server Management Studio.
I currently have an existing database with data in it, which I will call DatabaseProd
And I have a second database with data used for testing, so the data isn't exactly correct nor up to date. I will call this database DatabaseDev.
However DatabaseDev now contains newly added tables and newly added columns,etc etc.
I would like to copy this new schema from DatabaseDev to DatabaseProd while keeping DatabaseProd's data.
Ex.
DatabaseProd contains 2 tables
TableA with column ID and Name
TableB with column ID and jobName
and these tables contains data that I would like to keep
DatabaseDev contains 3 tables
TableA with columns ID, Name and phoneNum
TableB with column ID and jobName
TableC with column ID and document
and these tables contains Data that I dont need
Copy DatabaseDev Schema to DatabaseProd but keep the data from DatabaseProd
So DatabaseProd after the copy would look like this
TableA with columns ID, Name and phoneNum
TableB with column ID and jobName
TableC with column ID and document
But the tables would contain it's original Data.
Is that possible?
Thank you | You can use [Red-Gate SQL Compare](http://www.red-gate.com/products/sql-development/sql-compare/); this will allow you to compare both DBs and generate a script to run on the source DB. You have to pay for a license, but you will get a 14-day trial period.
This tool, along with [Data Compare](http://www.red-gate.com/products/sql-development/sql-data-compare/) and two tools I always insist on with new roles as they speed up development time, and minimise human error.
Also, a good **tip** when using SQL Compare: if you need to generate a `rollback script`, you can edit the project (after creating your `rollout script`) and switch the source and destination around; this will create a script which returns the schema to its original state if the `rollout script` fails. However, be **very careful** when doing this, and **don't select synchronize with SQL Compare**; rather, generate a script, [see image](http://www.red-gate.com/support/SQL_Data_Compare/help/9.0/sc_w-ex-1_sync_wiz_1.gif). I can't upload an image, but I have linked to one here; you can see the two options to select Generate Script / Sync using SQL Compare. | You need an ALTER TABLE statement
```
ALTER TABLE tableA ADD PhoneNum Varchar(10) --Insert variable of choice here
```
Looked like no changes to TableB
Add TableC
```
CREATE TABLE TableC (ColumnID int, Document Varchar(50))
```
Do you need to copy constraints, indexes or triggers over? | Copy database schema to an existing database | [
"",
"sql",
"sql-server",
"database",
""
] |
I have two tables
**Table 1**
```
Column1
_______
1
2
3
4
5
6
```
**Table 2**
```
Column 1
________
4
NULL //This NULL value added after answering the question, to show the real problem
5
6
7
8
9
```
This is an example case. When I tried,
```
SELECT column1 FROM Table1 WHERE column1 IN (SELECT column1 FROM Table2)
```
I got 4,5,6
WHEN
```
SELECT column1 FROM Table1 WHERE column1 NOT IN (SELECT column1 FROM Table2)
```
**I didn't get 1,2,3 but NULL**.
In the real case, column1 of table1 is nvarchar(max) and column1 of table2 is varchar(50). However, I tried casting both to varchar(50). | Check the documentation for [`IN`](http://technet.microsoft.com/en-us/library/ms177682.aspx), specifically:
> Any null values returned by *subquery* or *expression* that are compared to *test\_expression* using IN or NOT IN return UNKNOWN. Using null values in together with IN or NOT IN can produce unexpected results.
You haven't *shown* them, but I'm *certain* that you've got at least one `NULL` value lurking in your data.
You can exclude the `NULL`(s) and then the `NOT IN` will work as you expected:
```
SELECT column1 FROM Table1
WHERE column1 NOT IN (SELECT t2.column1 FROM Table2 t2
WHERE t2.column1 IS NOT NULL)
```
---
`IN` and `NOT IN` are, within a hand wave, opposites, but you have to keep SQL's [three-valued logic](http://en.wikipedia.org/wiki/Three-valued_logic) in mind. Imagine we'd written the `IN` using the *expression* form
```
a IN (1,2,NULL)
```
Which is treated the same as:
```
a = 1 OR a = 2 or a = NULL
```
For any row where a = 1, we have:
```
TRUE OR TRUE OR UNKNOWN
```
which is `TRUE`. And for any row where a = 3, say, we have:
```
FALSE OR FALSE OR UNKNOWN
```
which is `UNKNOWN`
Now, consider `NOT IN` in the same way:
```
a NOT IN (1,2,NULL)
```
Which is treated the same as:
```
a != 1 AND a != 2 AND a != NULL
```
For any row where a = 1, we have:
```
FALSE AND TRUE AND UNKNOWN
```
Which is `FALSE`. And for a = 3, we have:
```
TRUE AND TRUE AND UNKNOWN
```
Which is `UNKNOWN`. The presence of the `NULL` means that there's no way to ever get this chain of `AND`s to produce a `TRUE` value. | That may happen if you have `null` values in your Table2. Use this query instead:
```
select *
from Table1 as t1
where not exists (select * from Table2 as t2 where t2.column1 = t1.column1);
```
**`sql fiddle demo`**
Test query:
```
-- Table2 doesn't have null values, works good
SELECT column1 FROM Table1 WHERE column1 IN (SELECT column1 FROM Table2);
SELECT column1 FROM Table1 WHERE column1 NOT IN (SELECT column1 FROM Table2);
insert into Table2
select null;
-- nothing returned by query, because of null values in Table2
SELECT column1 FROM Table1 WHERE column1 NOT IN (SELECT column1 FROM Table2);
-- works good
select *
from Table1 as t1
where not exists (select * from Table2 as t2 where t2.column1 = t1.column1);
```
This happens because of SQL's [three-valued logic](http://en.wikipedia.org/wiki/Three-valued_logic); see Damien\_The\_Unbeliever's [nice explanation](https://stackoverflow.com/a/18865243/1744834). You can add a `not null` filter, like this:
```
SELECT column1 FROM Table1 WHERE column1 NOT IN (SELECT column1 FROM Table2 where column1 is not null);
```
But I like `exists` better, because it filters out nulls implicitly (simply by using the `=` condition).
In addition, don't write queries like the ones in your question without aliases (or rather, without dot notation, like `Table.column` or `Alias.column`), because you can get [incorrect results](https://stackoverflow.com/questions/18577622/sql-in-query-produces-strange-result/18579128#18579128). Always use dot notation for your columns. So your query should be something like:
```
SELECT t1.column1 FROM Table1 as t1 WHERE t1.column1 NOT IN (SELECT t2.column1 FROM Table2 as t2 where t2.column1 is not null);
``` | Is IN and NOT IN mutually Exclusive? | [
"",
"sql",
"sql-server",
""
] |
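The three-valued-logic behaviour described above is easy to reproduce in SQLite: with a NULL in the subquery, `NOT IN` returns nothing, while `NOT EXISTS` returns the expected rows. The data mirrors the question's two tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (c INT);
    CREATE TABLE t2 (c INT);
    INSERT INTO t1 VALUES (1), (2), (3), (4);
    INSERT INTO t2 VALUES (4), (NULL), (5);
""")

# NOT IN against a list containing NULL can never be TRUE ...
not_in = conn.execute(
    "SELECT c FROM t1 WHERE c NOT IN (SELECT c FROM t2)"
).fetchall()

# ... while NOT EXISTS ignores the NULL and behaves as expected.
not_exists = conn.execute("""
    SELECT a.c FROM t1 a
    WHERE NOT EXISTS (SELECT 1 FROM t2 b WHERE b.c = a.c)
    ORDER BY a.c
""").fetchall()
```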
If I have, for example, a model:
```
User :firstname :lastname
joseph smith
anna bauer
... ...
```
And an input field where you can search for a **user**. The different search queries could be:
```
searchquery1: smith joseph
searchquery2: joseph smith
searchquery3: joseph
searchquery4: smith
```
Which search query in SQL would be the best? Actually I could imagine this search query:
```
where 'firstname OR lastname LIKE ?', "%#{search}"
```
**First try:**
```
def self.search(search)
if search
select('CONCAT(vorname, " ", nachname) as full_name, *')
where ['vorname LIKE :s OR nachname LIKE :s OR full_name LIKE :s', :s => "%#{search}"]
else
scoped
end
end
```
error: `SQLite3::SQLException: no such column: full_name`
**Second try:**
```
def self.search(search)
if search
a = search.split
where('firstname OR lastname LIKE ?', a[1])
where('firstname OR lastname LIKE ?', a[2]) unless a[2].nil?
else
scoped
end
end
```
Problem: it finds nothing! | Yes, you have to search on both the first name and the last name, like this:
```
select('(firstname || " " || lastname) as \'full_name\', *')
where ['firstname LIKE :s OR lastname LIKE :s OR full_name LIKE :s', :s => "%#{search}"]
```
But if the data set is too big, you can use a full-text search engine like Solr or Thinking Sphinx. | To search on firstname, lastname and fullname:
```
Member.where ['first_name LIKE :s OR last_name LIKE :s OR CONCAT(first_name, last_name) LIKE :s', :s => "#{search.delete(' ')}"]
```
It works! | How to make SQL search query for firstname, lastname | [
"",
"sql",
"ruby-on-rails",
"ruby-on-rails-3",
""
] |
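The first/last/full-name `LIKE` search from the accepted answer above, sketched in SQLite (SQLite concatenates with `||` where MySQL would use `CONCAT`). Note that a reversed query like "smith joseph" still matches nothing without splitting the terms or a full-text engine; the users here are the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (firstname TEXT, lastname TEXT);
    INSERT INTO users VALUES ('joseph', 'smith'), ('anna', 'bauer');
""")

def search(query):
    pattern = "%" + query + "%"
    # Match against first name, last name, or the concatenated full name.
    return conn.execute(
        """SELECT firstname, lastname FROM users
           WHERE firstname LIKE :s OR lastname LIKE :s
              OR (firstname || ' ' || lastname) LIKE :s""",
        {"s": pattern},
    ).fetchall()
```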
I have two tables that join easily and a third that is giving me trouble. First, the tables:
```
tbl_customer
id dt value group
a 2013-01-01 10 cavalry
a 2012-06-01 20 lance
a 2011-03-01 10 infantry
b 2013-01-01 20 court
b 2012-07-01 5 maiden
b 2005-06-01 10 chivalry
tbl_title
id dt_active dt_inactive title
a 2001-01-01 2012-01-01 mister
a 2012-01-02 3001-01-01 sir
a 2012-01-02 3001-01-01 king
b 2001-01-01 2012-01-01 miss
b 2012-01-02 3001-01-01 lady
b 2012-01-02 3001-01-01 queen
```
This is easy to join to return a title for a given ID using:
```
SELECT
id, dt, value, title
FROM
tbl_customer AS cust
INNER JOIN tbl_title AS titles
ON titles.id = cust.id
AND dt >= titles.dt_active
AND dt <= titles.dt_inactive
```
This returns all of the rows in `tbl_customer` with some duplicates where there are multiple "active" titles. For example, the first row in `tbl_customer` returns twice, once with "king" and once with "sir" as a `title`.
I have a third table that links groups to titles. It can help resolve these relationships:
```
tbl_group
group title rank
cavalry sir 10
lance king 20
infantry mister 30
court lady 10
court queen 20
maiden lady 10
chivalry miss 5
```
I can then join the tables to include the title from `tbl_group` and limit the results to those that match:
```
SELECT
id, dt, value, titles.title
FROM
tbl_customer AS cust
INNER JOIN tbl_title AS titles
ON titles.id = cust.id
AND dt >= titles.dt_active
AND dt <= titles.dt_inactive
INNER JOIN tbl_group AS group
ON group.group = cust.group
WHERE
titles.title = group.title
```
This largely solves my problems. The third table helps to remove the ambiguity when there are multiple "active" records in `tbl_title`.
However, there is a problem. The third table can also have multiple titles associated with a group. Therefore, there can still be duplication in the final output. In the example above, "cavalry" is unambiguously associated with the title "sir". However, the group "court" can be either "lady" or "queen", and `tbl_title` isn't any help either, since both "lady" and "queen" are active for the date in `tbl_customer`.
At this point, I have no other way to choose other than "rank" in `tbl_group`. "Queen" outranks "lady" 20 to 10, so I want to use that title for the entry in `tbl_customer`. If the ranks are equal, I want to just use the first record in `tbl_group`. (*Could this be resolved with some kind of internal ordering?*)
The tricky part is that the active/inactive dynamic needs to be taken into account before the rank. For example, I don't want to first filter `tbl_group` for one title per group, since that would prevent possible matches based on date.
Ideally, I need to return one row per entry from `tbl_customer` that shows title, first based on active/inactive date. Next, I want to eliminate duplicates using `tbl_group`. Finally, I want to limit the results to just one row per entry in `tbl_customer` using matching and rank from `tbl_group`. Is this even possible? | First, some actual table schema would be helpful as the table data you have provided is missing some key elements. What are the keys in each of the tables? I.e., what can be used to uniquely identify a row in each of the tables? What does the `id` column in the customer and title table represent? Surely not a customer identifier since there are duplicates.
Second, to answer your question, you can use a ranking function such as `Row_Number` to rank the items in your list and only return the first item for a given set:
```
With RnkItems As
(
Select C.id, C.dt, C.value, T.Title
, Row_Number() Over ( Partition By C.id, C.dt, C.value, C.[group]
Order By G.rank Desc ) As Rnk
From tbl_customer As C
Join tbl_title As T
On T.id = C.id
And C.dt Between T.dt_active And T.dt_inactive
Join tbl_group As G
On G.[group] = C.[group]
And G.title = T.title
)
Select id, dt, value, title
From RnkItems
Where Rnk = 1
```
The key is the `Partition By C.id, C.dt, C.value, C.[group]` statement. Why all four columns? The reason is that I have no idea what uniquely identifies a customer row. If there were a primary key on that table, we could simply partition by that.
[SQL Fiddle version](http://sqlfiddle.com/#!3/a38f5/4) | Try this:
```
SELECT *
FROM
(
SELECT
id,
dt,
value,
titles.title,
          group.rank, MAX(rank) OVER(PARTITION BY cust.id) AS MAX_RANK
FROM
tbl_customer AS cust
INNER JOIN tbl_title AS titles
ON titles.id = cust.id
AND dt BETWEEN titles.dt_active AND titles.dt_inactive
INNER JOIN tbl_group AS group
ON group.group = cust.group
WHERE
titles.title = group.title
) AS RESULTS
WHERE rank = MAX_RANK
```
The windowing function `MAX(rank) OVER(PARTITION BY cust.id)` will get the max rank per id. Then we filter the rows to only return the row whose rank is that customer's maximum rank. | Difficult SQL join across tables | [
"",
"sql",
"sql-server",
""
] |
I'm trying to join two fields in SQL with the below command;
```
SELECT [NUM] & [ITEMNUM] AS LINKCODE
FROM mydatabase
```
When I run this I get the error;
```
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value 'O00000001' to data type int.
```
This is probably something easy but I am quite new to SQL. Any help would be appreciated :)
Thank you. | You can't just add a number and a string together. You have to bring them to the same data type first. Cast to `varchar` first:
```
SELECT cast([NUM] as varchar) + [ITEMNUM] AS LINKCODE
FROM mydatabase
``` | If you are on SQL Server 2012+ you can use
```
SELECT CONCAT([NUM],[ITEMNUM]) AS LINKCODE
FROM mytable
```
This treats `NULL` as empty string and does the casting of numeric values to string. | Error when trying to run join command in SQL | [
"",
"sql",
""
] |
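The cast-before-concatenate fix above, sketched in SQLite (its `||` operator would coerce the integer anyway, but the explicit `CAST` mirrors the T-SQL fix; the sample values follow the question's error message):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (num INTEGER, itemnum TEXT)")
conn.execute("INSERT INTO mytable VALUES (7, 'O00000001')")

# Convert the numeric column to text before concatenating it
# with the varchar column, as in CAST([NUM] AS varchar) + [ITEMNUM].
linkcode = conn.execute(
    "SELECT CAST(num AS TEXT) || itemnum AS linkcode FROM mytable"
).fetchone()[0]
```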
I have a set of dates that are in the format DD-MMM-YYYY. I need to be able to compare dates by using only the DD-MMM part of the date, since the year isn't important.
How would I achieve this?
I have tried reading up on the DATEPART function (edit: which evidently wouldn't work) but I can only theoretically get that to return either the DD or the MMM parts, not both of them at once.
Edit: added oracle tag. Sorry.
Example of date field: 01-MAR-1994 | If your column is of type `DATE` then it doesn't have a format.
If I understand you right, you want to view only the `mon-dd` part, so you need to convert it with the `TO_CHAR` function,
i.e.:
```
select to_char(your_date_column, 'mon-dd') from your_table
``` | Convert your dates using the following format; it will return only the month and day parts. You have to replace getdate() with your date field:
```
select convert(varchar(5),getdate(),110)
``` | Getting only day and month from a date field | [
"",
"sql",
"oracle",
""
] |
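The month-and-day comparison above can be sketched with SQLite's `strftime` as a stand-in for Oracle's `TO_CHAR(d, 'mon-dd')` — convert both dates to a year-less string and compare those:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Compare two ISO dates on month and day only, ignoring the year.
same_day = conn.execute(
    "SELECT strftime('%m-%d', '1994-03-01') = strftime('%m-%d', '2013-03-01')"
).fetchone()[0]

month_day = conn.execute(
    "SELECT strftime('%m-%d', '1994-03-01')"
).fetchone()[0]
```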
I have two dates, for example 08/08/2013 and 11/11/2013, and I need the last date of each month from August through November in a table, so that I can iterate over the table and pick those dates individually.
I know how to pick the last date of any given month, but I am stuck with a date range.
Kindly help; it will be highly appreciated.
**Note: I am using SQL Server 2008, and the date range could be 1 month, 2 months, 6 months, a year, or even more.** | You can use a [CTE](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) to get all the last days of the months within the defined range:
```
Declare @Start datetime
Declare @End datetime
Select @Start = '20130808'
Select @End = '20131111'
;With CTE as
(
Select @Start as Date,Case When DatePart(mm,@Start)<>DatePart(mm,@Start+1) then 1 else 0 end as [Last]
UNION ALL
Select Date+1,Case When DatePart(mm,Date+1)<>DatePart(mm,Date+2) then 1 else 0 end from CTE
Where Date<@End
)
Select * from CTE
where [Last]=1 OPTION ( MAXRECURSION 0 )
``` | ```
DECLARE @tmpTable table (LastDates DATE);
DECLARE @startDate DATE = '01/01/2012'; --1 Jan 2012
DECLARE @endDate DATE = '05/31/2012'; --31 May 2012
DECLARE @tmpEndDate DATE;
SET @startDate = DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@startDate)+1,1));
SET @tmpEndDate = DATEADD(DAY, 1, @endDate);
WHILE (@startDate <= @tmpEndDate)
BEGIN
INSERT INTO @tmpTable (LastDates) values (DATEADD(DAY, -1, @startDate));
SET @startDate = DATEADD(MONTH, 1, @startDate);
END
SELECT [LastDates] FROM @tmpTable;
```
**Output:**
**Example: 1**
```
@startDate DATE = '01/01/2012'; --1 Jan 2012
@endDate DATE = '05/31/2012'; --31 May 2012
LastDates
----------
2012-01-31
2012-02-29
2012-03-31
2012-04-30
2012-05-31
```
**Example: 2**
```
@startDate DATE = '11/01/2011'; --1 Nov 2011
@endDate DATE = '03/13/2012'; --13 Mar 2012
LastDates
----------
2011-11-30
2011-12-31
2012-01-31
2012-02-29
``` | tsql: How to retrieve the last date of each month between given date range | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
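The month-end generation above, sketched as a recursive CTE in SQLite (supported since 3.8.3) using its date modifiers instead of `DATEADD`/`DATEPART`; the range is the one from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Walk month by month from the start month to the end month,
# then turn each first-of-month into that month's last day.
rows = conn.execute("""
    WITH RECURSIVE months(m) AS (
        SELECT date('2013-08-08', 'start of month')
        UNION ALL
        SELECT date(m, '+1 month') FROM months
        WHERE m < date('2013-11-11', 'start of month')
    )
    SELECT date(m, '+1 month', '-1 day') AS month_end FROM months
""").fetchall()
```

Stepping on the first of each month (rather than day by day) keeps the recursion to one row per month, so no `MAXRECURSION`-style escape hatch is needed for long ranges.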
How do I split file on batches using powershell?
I have a sql post deploy file with multiple batches. Below is example of my postDeployment.sql file.
```
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'StoredProc') AND type in (N'P', N'PC'))
BEGIN
DROP PROCEDURE [dbo].[StoredProc]
END
** GO **
CREATE PROCEDURE [dbo].[StoredProc]
@nodeId int = 0
AS
BEGIN
SET NOCOUNT ON;
END
**GO**
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'StoredProc') AND type in (N'P', N'PC'))
BEGIN
DROP PROCEDURE [dbo].[StoredProc]
END
**GO**
CREATE PROCEDURE [dbo].[StoredProc]
@nodeId int = 0
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
END
```
**I need to execute each batch by itself, while splitting on the highlighted \*\* GO \*\* lines.**
My PS Command is as follows (not working correctly):
```
function GetSqlBatchesFromString($file)
{
$buffer = new-object System.Text.StringBuilder
switch -regex -file $file {
"^\s*GO[\s\d]*$"
{
# $__ returns go
$buffer.ToString();
$buffer.Length = 0;
}
default
{
$temp = $buffer.AppendLine($_);
}
}
$buffer.ToString();
}
function ExecuteBatch($batch)
{
Write-Host "Batch part ==== start"
Write-Host $batch
Write-Host "Batch part ==== end"
}
GetSqlBatchesFromString "D:\PostDeployment.sql" | % { ExecuteBatch $_ }
``` | I have found a solution, but it does not cover all cases, like a GO inside the SQL itself:
```
$file = "D:\PostDeployment.sql"
$SQLCommandText = @(Get-Content -Path $file) # loads content
foreach($SQLString in $SQLCommandText)
{
if($SQLString -ne "go") # if string is not equal to go then add it to existing string
{
#Preparation of SQL packet
$SQLPacket += $SQLString + "`n"
}
else
{
Write-Host "---------------------------------------------"
Write-Host "Executed SQL packet:"
Write-Host $SQLPacket
$SQLPacket = "" # clear up sql string
}
}
``` | this:
```
$a = ( [system.io.file]::ReadAllText( "c:\file.sql" ) -split '(?:\bGO\b)' ) |
% { $_ + "`r`nGO" }
```
This creates an array with the batches; then you can try:
```
$a | % { ExecuteBatch $__}
``` | How do I split file on batches using powershell | [
"",
"sql",
"powershell",
""
] |
I have a query that returns many rows:
```
SELECT c.id AS courseID,
c.fullname AS courseName,
c.continuingeducationcredits AS cec,
cc.name AS catName,
IFNULL(ccp.name, '-No Parent Category-') AS catParent,
ue.userid AS userID,
CONCAT(u.firstname, ' ', u.lastname) AS name,
ud.data AS employeeid,
date_format(date_add('1970-01-01',interval round(c.startdate/60/60/24) DAY), '%m/%d/%Y') AS startdate,
u.id AS uid,
c.startdate AS cstartdate
FROM user_enrolments AS ue
INNER JOIN USER AS u ON ue.userid = u.id
INNER JOIN user_info_data AS ud ON (ud.fieldid = 1
AND ud.userid = u.id)
INNER JOIN enrol AS e ON e.id = ue.enrolid
INNER JOIN course AS c ON c.id = e.courseid
INNER JOIN course_categories AS cc ON cc.id = c.category
LEFT OUTER JOIN course_categories AS ccp ON ccp.id = cc.parent
WHERE 1 = 1
AND (cc.ID = 13
OR ccp.ID = 13)
ORDER BY catParent,
uid,
cstartdate DESC
```
That works fine, and returns multiple rows. But if I try to use it as a subquery:
```
SELECT *
FROM
(SELECT c.id AS courseID,
c.fullname AS courseName,
c.continuingeducationcredits AS cec,
cc.name AS catName,
IFNULL(ccp.name, '-NO Parent Category-') AS catParent, ue.userid AS userID, CONCAT(u.firstname, ' ', u.lastname) AS name,
ud.data AS employeeid,
date_format(date_add('1970-01-01',interval round(c.startdate/60/60/24) DAY), '%m/%d/%Y') AS startdate,
u.id AS uid,
c.startdate AS cstartdate
FROM user_enrolments AS ue
INNER JOIN USER AS u ON ue.userid = u.id
INNER JOIN user_info_data AS ud ON (ud.fieldid = 1
AND ud.userid = u.id)
INNER JOIN enrol AS e ON e.id = ue.enrolid
INNER JOIN course AS c ON c.id = e.courseid
INNER JOIN course_categories AS cc ON cc.id = c.category
LEFT OUTER JOIN course_categories AS ccp ON ccp.id = cc.parent
WHERE 1 = 1
AND (cc.ID = 13
OR ccp.ID = 13)) AS Pseudotable
WHERE courseID = 8946
ORDER BY catParent,
uid,
cstartdate DESC
```
It will only ever return one row.
What am I doing wrong? :(
The first query is the inner query of the second query.
The first query gets all the users who attended a course within a category. The second is intended to filter that set, and get only the users who attended a single course. | I finally figured out the issue ... the application is Moodle, and it loads queries into an array, using a function called get\_records\_sql.
It *requires* the first column of the results to be a unique key. It uses that column as the index of the array.
So ... to make that happen:
Begin the SELECT like this-
```
SELECT @a:=@a+1 as serial_number, c.id as courseID,
```
And add this to the FROM clause-
```
LEFT OUTER JOIN mdl_course_categories AS ccp ON ccp.id = cc.parent, (SELECT @a:= 0) AS a WHERE
``` | To be fair... my comment with this answer was first :)
```
WHERE courseID = 8946
```
Is in the second query. Remove it and it should work. | mysql subquery only returning one row | [
"",
"mysql",
"sql",
""
] |
I need to do some simple calculations counting the percentage ranges of each project. It seems as if this should be a very simple, straightforward query, but my results (counts of percentages <10%, >50%, and >100%) are coming back incorrect, and I'm having trouble figuring out why. I'm working in DB2, and here is a sample of my data.

My query that returns the incorrect results ("24") is:
```
SELECT
COUNT(*)
FROM Fact_Table
WHERE Dept_NAME = 'sales'
AND PERCENTAGE_USED > '100.00%'
``` | It looks like your values are being stored as strings (hence the `'%'`) rather than as numbers.
Here is a method to compare them as numbers, and do the ranges at the same time:
```
SELECT (case when cast(replace(PERCENTAGE_USED, '%', '') as float) <= 10 then '<10'
when cast(replace(PERCENTAGE_USED, '%', '') as float) <= 50 then '10-50'
when cast(replace(PERCENTAGE_USED, '%', '') as float) <= 100 then '50-100'
when cast(replace(PERCENTAGE_USED, '%', '') as float) > 100 then '>100'
end) as range,
COUNT(*)
FROM Fact_Table
WHERE Dept_NAME = 'sales'
GROUP BY (case when cast(replace(PERCENTAGE_USED, '%', '') as float) <= 10 then '<10'
when cast(replace(PERCENTAGE_USED, '%', '') as float) <= 50 then '10-50'
when cast(replace(PERCENTAGE_USED, '%', '') as float) <= 100 then '50-100'
when cast(replace(PERCENTAGE_USED, '%', '') as float) > 100 then '>100'
end);
``` | I'm not too sure what you are expecting or what the field data types are set as, but could this be down to you comparing a string value rather than a numeric one for the comparison > '100.00%' | Count and percentage ranges | [
"",
"sql",
"db2",
"aggregate-functions",
""
] |
I need to get a list of the ids of duplicate rows in my table so I can delete them using `where id in`.
Here is my table :
```
id|col1|col2
1 |22 | text
2 |22 | text
3 |23 | text
4 |22 | text2
```
So here, ids `1` and `2` are duplicates and the others are not. I know how to find them by using `group by` and `having count(*) > 1`.
But I want to keep one and delete the others. So this is how this table would look like after purge of duplicates :
```
id|col1|col2
1 |22 | text
3 |23 | text
4 |22 | text2
```
or :
```
id|col1|col2
2 |22 | text
3 |23 | text
4 |22 | text2
```
Either one is fine. How can I remove the duplicates but keep one copy of each, so that it is no longer a duplicate?
My next goal is to add indexes to these fields so this won't happen again. | Try something like:
```
delete from table_name
where id not in (select min(id)
from table_name
group by col1, col2);
```
It will delete all rows with a non-minimal `id` in every `col1, col2` group.
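For a quick self-check, the first query can be run end to end with Python's built-in SQLite; the table and rows below mirror the question, and this particular statement behaves the same way in SQLite's dialect:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table table_name (id integer primary key, col1 integer, col2 text)")
con.executemany("insert into table_name values (?, ?, ?)",
                [(1, 22, "text"), (2, 22, "text"), (3, 23, "text"), (4, 22, "text2")])

# Keep the smallest id in each (col1, col2) group and delete the rest.
con.execute("""delete from table_name
               where id not in (select min(id)
                                from table_name
                                group by col1, col2)""")

remaining = [row[0] for row in con.execute("select id from table_name order by id")]
print(remaining)  # [1, 3, 4]
```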
Alternative query:
```
delete from table_name t1
where exists (select *
from table_name t2
where t1.col1 = t2.col1
and t1.col2 = t2.col2
and t1.id < t2.id );
```
It does the same thing, just in another way. | Igor Romanchenko gave good solutions; another one could be:
```
with cte as (
select id, row_number() over(partition by col1, col2 order by id) as rn
from Table1
)
delete Table1 as t
from cte as c
where c.id = t.id and c.rn > 1
``` | Filtering out duplicates from database but preserving the originals | [
"",
"sql",
"postgresql",
""
] |
I have a database with 5 columns (`A1,A2,A3,A4,A5`) which store 5 numbers.
The 5 numbers are `"1,2,3,4,5"`.
```
A1 A2 A3 A4 A5
-------------------------------
2 4 5 Null Null
```
I want to get the missing numbers, which are "1" and "3".
How do I find the missing numbers from the 5 numbers? | An elegant solution that is extensible beyond the 5 values presented in your question makes use of a [numbers table](https://dba.stackexchange.com/questions/11506/why-are-numbers-tables-invaluable) (as shown in the link, a numbers table is invaluable for a multitude of tasks).
Here's the setup for the numbers table, which in this case is limited to 256 numbers since you're using a small range in the question.
```
/*
create a numbers table with numbers from 0 to
255 (the range of the tinyint data type).
*/
DROP TABLE IF EXISTS [#numbers];
GO
CREATE TABLE [#numbers]
(
[n] tinyint NOT NULL
);
WITH [numbers]
AS
(
SELECT [v].[n]
FROM
(
VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)
) [v]([n])
)
INSERT INTO [#numbers]
(
[n]
)
SELECT TOP(255)
[number] =
([n1].[n] * 100)
+ ([n2].[n] * 10)
+ ([n3].[n] * 1)
FROM
[numbers] [n1]
CROSS JOIN [numbers] [n2]
CROSS JOIN [numbers] [n3]
ORDER BY
([n1].[n] * 100)
+ ([n2].[n] * 10)
+ ([n3].[n] * 1);
```
Next, we'll create the table you mention in your question:
```
DROP TABLE IF EXISTS [#rows];
CREATE TABLE [#rows]
(
[a1] tinyint NULL
, [a2] tinyint NULL
, [a3] tinyint NULL
, [a4] tinyint NULL
, [a5] tinyint NULL
);
```
Here, we populate the row with your values:
```
INSERT INTO [#rows] ([a1], [a2], [a3])
VALUES (2, 4, 5);
```
The contents of the `[#rows]` table:
| a1 | a2 | a3 | a4 | a5 |
| --- | --- | --- | --- | --- |
| 2 | 4 | 5 | NULL | NULL |
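(Stepping back for a second: all of the T-SQL below is implementing a plain set difference between the full range and the stored values. A rough Python rendering of the end result we're after:)

```python
full_range = set(range(1, 6))   # the numbers 1..5
stored = {2, 4, 5}              # the non-NULL values in the row above
missing = sorted(full_range - stored)
print(missing)  # [1, 3]
```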
I'm going to use an [UNPIVOT clause](https://learn.microsoft.com/en-us/sql/t-sql/queries/from-transact-sql?view=sql-server-ver16#unpivot-unpivot_clause) to turn columns into rows. For clarity, this is what the `UNPIVOT` looks like on its own:
```
SELECT
[upv].[value]
FROM
(
SELECT *
FROM [#rows]
) [src]
UNPIVOT
(
[value] FOR [column] IN ([a1], [a2], [a3], [a4], [a5])
) [upv]
ORDER BY
[upv].[value];
```
The intermediate results of the `UNPIVOT` query:
| value |
| --- |
| 2 |
| 4 |
| 5 |
As you can see, we're missing the values `1` and `3`. Here's where we display the missing values by combining the results of the `UNPIVOT` with an [EXCEPT T-SQL operator](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/set-operators-except-and-intersect-transact-sql?view=sql-server-ver16) against the `[#numbers]` table to show the missing values:
```
SELECT
[missing_number] = [n].[n]
FROM
[#numbers] [n]
WHERE
[n].[n] > 0
AND [n].[n] <= 5
EXCEPT
SELECT
[upv].[value]
FROM
(
SELECT *
FROM [#rows]
) [src]
UNPIVOT
(
[value] FOR [column] IN ([a1], [a2], [a3], [a4], [a5])
) [upv];
```
The `EXCEPT` operator shows distinct results from the left side of the operator that do not exist in the results from the right side of the operator. The results are:
| missing\_number |
| --- |
| 1 |
| 3 | | ```
Select Replace(Replace(Replace(Replace(
Replace('12345',(Cast(Coalesce(A5,0) as varchar(1))),''),
(Cast(Coalesce(A4,0) as varchar(1))),''),
(Cast(Coalesce(A3,0) as varchar(1))),''),
(Cast(Coalesce(A2,0) as varchar(1))),''),
(Cast(Coalesce(A1,0) as varchar(1))),'') from Table1
```
[**Sql Fiddle Demo**](http://sqlfiddle.com/#!3/b717b/6) | Find missing number from not sequence number | [
"",
"sql",
"sql-server",
"vb.net",
""
] |
I've got 3 tables: `Movies`, `Actors`, and `MovieActors`. `MovieActors` is a many-to-many relationship of `Movies` and `Actors` with columns `MovieActorId`, `MovieId`, and `ActorId`
How do I find movies that have a certain set of actors in them? For example, I want to find all movies that have both Michael Fassbender (actor Id 1) and Brad Pitt (actor Id 2) in them. What would the query look like? | To keep it simple, you can just do two `in` clauses:
```
select * from Movies m
where m.MovieId in (select MovieId from MovieActors where ActorId = 1)
and m.MovieId in (select MovieId from MovieActors where ActorId = 2)
```
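If you want to sanity-check it, the same statement runs unchanged against SQLite; the two sample movies below are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table Movies (MovieId integer primary key, Title text);
create table MovieActors (MovieActorId integer primary key, MovieId integer, ActorId integer);
insert into Movies values (10, 'Movie with both'), (20, 'Movie with only actor 1');
insert into MovieActors values (1, 10, 1), (2, 10, 2), (3, 20, 1);
""")

rows = con.execute("""select m.MovieId from Movies m
                      where m.MovieId in (select MovieId from MovieActors where ActorId = 1)
                        and m.MovieId in (select MovieId from MovieActors where ActorId = 2)""").fetchall()
print(rows)  # [(10,)]
```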
Performance may not be as good as a single `join`, but it's clean and easy to read. | One way is to join the tables. Filter for the actors and then ensure the count has the number of actors you want in it (2 in this case):
```
SELECT
m.MovieID
FROM
Movies m
INNER JOIN MovieActors ma
ON m.MovieID = ma.MovieID
WHERE
ma.ActorID IN (1,2)
GROUP BY
m.MovieID
HAVING COUNT(DISTINCT ma.ActorID) = 2
```
[DEMO](http://sqlfiddle.com/#!6/a8e92/2)
**Note**
Thanks to user814064 for pointing out that since actors can have more than one role in a movie, we need to count `DISTINCT ma.ActorID`, not just `*`. The SQL Fiddle demo demonstrates the difference. | SQL select on a many-to-many table | [
"",
"sql",
"sql-server",
""
] |
I am creating four database tables with the following schemas, but am getting an integrity constraint violation error. Here is my entire schema along with the queries to populate the tables:
```
create table Employee(
Lastname varchar(10),
FirstName varchar(10),
MidInitial char(1),
gender char(1),
street varchar(10),
city varchar(10),
primary key(Lastname, FirstName, MidInitial));
create table company(
company_name varchar(20),
city varchar(10),
primary key(company_name));
create table Works(
Lastname varchar(10),
FirstName varchar(10),
MidInitial char(1),
company_name varchar(20),
salary numeric(8,2),
primary key(Lastname, FirstName, MidInitial, company_name),
foreign key(Lastname, FirstName, MidInitial) references Employee,
foreign key(company_name) references company);
create table Manages(
Lastname varchar(10),
FirstName varchar(10),
MidInitial char(1),
ManagerLastname varchar(10),
MFirstName varchar(10),
MMidInitial char(1),
start_date date,
primary key(Lastname, FirstName, MidInitial, ManagerLastname, MFirstName, MMidInitial),
foreign key(Lastname, FirstName, MidInitial) references Employee);
insert into Employee
values('Anderson', 'Susan', 'L', 'F', '108th', 'Omaha');
insert into Employee
values('Brandy', 'Dan', 'L', 'M', 'P street', 'Lincoln');
insert into Employee
values('Chen', 'Peter', 'K', 'M', '124th', 'Omaha');
insert into Employee
values('Clemson', 'Ann', 'M', 'F', 'O', 'Lincoln');
insert into Employee
values('Dale', 'Mary', 'K', 'F', '132nd', 'Omaha');
insert into Employee
values('Gill', 'Mary', 'L', 'F', 'P street', 'Lincoln');
insert into Employee
values('Harrison', 'Susan', 'M', 'F', 'Old Mill', 'Omaha');
insert into Employee
values('Jackson', 'Kim', 'A', 'F', '178th', 'Omaha');
insert into Employee
values('Jason', 'Pat', 'M', 'M', '8th', 'C. Bluffs');
insert into Employee
values('Kumar', 'Paul', 'T', 'M', 'Dodge', 'Omaha');
insert into Employee
values('Orr', 'Susanne', 'S', 'F', 'Q', 'Omaha');
insert into Employee
values('Peterson', 'Mike', 'E', 'M', 'Blondo', 'Omaha');
insert into Employee
values('Powell', 'Tom', 'E', 'M', 'Broadway', 'C. Bluffs');
insert into Employee
values('Quinn', 'Jacky', 'M', 'F', '168th', 'Omaha');
insert into Employee
values('Rand', 'Pam', 'R', 'F', '13th', 'Omaha');
insert into Employee
values('Schreck', 'Ann', 'P', 'F', '214th', 'Elkhorn');
insert into Employee
values('Simon', 'Eric', 'K', 'M', '77th', 'Lincoln');
insert into Employee
values('Smith', 'John', 'A', 'M', '8th', 'C. Bluffs');
insert into Employee
values('Thomson', 'Ron', 'F', 'M', 'Farnam', 'Omaha');
insert into Employee
values('Young', 'Don', 'R', 'M', 'Dodge', 'Omaha');
insert into Employee
values('Wong', 'Carole', 'S', 'F', 'Broadway', 'C. Bluffs');
insert into company
values('First Bank', 'Omaha');
insert into company
values('FDR', 'Omaha');
insert into company
values('Lincoln Star', 'Lincoln');
insert into company
values('Mutual of Omaha', 'Omaha');
insert into company
values('Union Pacific (UP)', 'Omaha');
insert into Works
values('Anderson', 'Susan', 'L', 'Mutual of Omaha', 48000);
insert into Works
values('Brandy', 'Dan', 'L', 'FDR', 42000);
insert into Works
values('Chen', 'Peter', 'K', 'FDR', 53000);
insert into Works
values('Clemson', 'Ann', 'M', 'First Bank', 39000);
insert into Works
values('Dale', 'Mary', 'K', 'Mutual of Omaha', 58000);
insert into Works
values('Gill', 'Mary', 'L', 'Union Pacific', 48700);
insert into Works
values('Harrison', 'Susan', 'M', 'Union Pacific', 54320);
insert into Works
values('Jackson', 'Kim', 'A', 'FDR', 68000);
insert into Works
values('Jason', 'Pat', 'M', 'FDR', 83000);
insert into Works
values('Kumar', 'Paul', 'T', 'FDR', 44000);
insert into Works
values('Orr', 'Susanne', 'S', 'Lincoln Star', 32000);
insert into Works
values('Peterson', 'Mike', 'E', 'First Bank', 37000);
insert into Works
values('Powell', 'Tom', 'E', 'First Bank', 54000);
insert into Works
values('Quinn', 'Jacky', 'M', 'First Bank', 85000);
insert into Works
values('Rand', 'Pam', 'R', 'FDR', 37000);
insert into Works
values('Schreck', 'Ann', 'P', 'First Bank', 36000);
insert into Works
values('Simon', 'Eric', 'K', 'Mutual of Omaha', 45000);
insert into Works
values('Smith', 'John', 'A', 'FDR', 55000);
insert into Works
values('Thomson', 'Ron', 'F', 'First Bank', 39000);
insert into Works
values('Young', 'Don', 'R', 'Mutual of Omaha', 27000);
insert into Works
values('Wong', 'Carole', 'S', 'First Bank', 44000);
insert into Manages
values('Anderson', 'Susan', 'L', 'Dale', 'Mary', 'K', "15-SEP-2002");
insert into Manages
values('Brandy', 'Dan', 'L', 'Jason', 'Pat', 'M', "05-MAR-2004");
insert into Manages
values('Chen', 'Peter', 'K', 'Jason', 'Pat', 'M', "12-JUNE-2002");
insert into Manages
values('Clemson', 'Ann', 'M', 'Powell', 'Tom', 'E', "30-AUG-2001");
insert into Manages
values('Gill', 'Mary', 'L', 'Harrison', 'Susan', 'M', "25-JAN-2003");
insert into Manages
values('Jackson', 'Kim', 'A', 'Jason', 'Pat', 'M', "09-MAY-2008");
insert into Manages
values('Kumar', 'Paul', 'T', 'Jason', 'Pat', 'M', "09-MAY-2001");
insert into Manages
values('Rand', 'Pam', 'R', 'Smith', 'John', 'A', "15-JAN-2004");
insert into Manages
values('Peterson', 'Mike', 'E', 'Powell', 'Tom', 'E', "30-AUG-2006");
insert into Manages
values('Schreck', 'Ann', 'P', 'Quinn', 'Jacky', 'M', "28-FEB-1996");
insert into Manages
values('Simon', 'Eric', 'K', 'Anderson', 'Susan', 'L', "22-FEB-2004");
insert into Manages
values('Smith', 'John', 'A', 'Jason', 'Pat', 'M', "05-MAR-2003");
insert into Manages
values('Thomson', 'Ron', 'F', 'Powell', 'Tom', 'E', "30-AUG-1998");
insert into Manages
values('Wong', 'Carole', 'S', 'Quinn', 'Jacky', 'M', "28-FEB-2001");
insert into Manages
values('Young', 'Don', 'R', 'Dale', 'Mary', 'K', "15-SEP-2007");
```
And here is the error I'm getting again:
```
Schema Creation Failed: ORA-02291: integrity constraint (USER_4_3C8F2.SYS_C00778260) violated - parent key not found
```
It would be great to get another set of eyes to take a look at this and see what's going wrong with it... I'm wondering if I should have created/populated the tables in a different order but I thought that what I was doing was correct.
Any help is greatly appreciated. Thanks!
----UPDATE---- Also, I am using SQL Fiddle and Oracle 11g R2. | OK. A few errors noted...
```
insert into Works
values('Gill', 'Mary', 'L', 'Union Pacific (UP)', 48700);
insert into Works
values('Harrison', 'Susan', 'M', 'Union Pacific (UP)', 54320);
```
-- Changed 'Union Pacific' to 'Union Pacific (UP)'
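As an aside, this class of failure (a child row naming a parent key that does not exist) is easy to reproduce in miniature with Python's SQLite. The table shapes here are simplified, and note that SQLite only enforces foreign keys after the pragma below:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("pragma foreign_keys = on")  # SQLite needs this to enforce FKs
con.executescript("""
create table company (company_name varchar(20) primary key, city varchar(10));
create table works (company_name varchar(20) references company(company_name), salary numeric);
""")
con.execute("insert into company values ('Union Pacific (UP)', 'Omaha')")

try:
    # Misspelled parent key -> same failure mode as ORA-02291
    con.execute("insert into works values ('Union Pacific', 48700)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True
```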
```
insert into Manages
values('Anderson', 'Susan', 'L', 'Dale', 'Mary', 'K', '15-SEP-2002');
insert into Manages
values('Brandy', 'Dan', 'L', 'Jason', 'Pat', 'M', '05-MAR-2004');
insert into Manages
values('Chen', 'Peter', 'K', 'Jason', 'Pat', 'M', '12-JUNE-2002');
insert into Manages
values('Clemson', 'Ann', 'M', 'Powell', 'Tom', 'E', '30-AUG-2001');
insert into Manages
values('Gill', 'Mary', 'L', 'Harrison', 'Susan', 'M', '25-JAN-2003');
insert into Manages
values('Jackson', 'Kim', 'A', 'Jason', 'Pat', 'M', '09-MAY-2008');
insert into Manages
values('Kumar', 'Paul', 'T', 'Jason', 'Pat', 'M', '09-MAY-2001');
insert into Manages
values('Rand', 'Pam', 'R', 'Smith', 'John', 'A', '15-JAN-2004');
insert into Manages
values('Peterson', 'Mike', 'E', 'Powell', 'Tom', 'E', '30-AUG-2006');
insert into Manages
values('Schreck', 'Ann', 'P', 'Quinn', 'Jacky', 'M', '28-FEB-1996');
insert into Manages
values('Simon', 'Eric', 'K', 'Anderson', 'Susan', 'L', '22-FEB-2004');
insert into Manages
values('Smith', 'John', 'A', 'Jason', 'Pat', 'M', '05-MAR-2003');
insert into Manages
values('Thomson', 'Ron', 'F', 'Powell', 'Tom', 'E', '30-AUG-1998');
insert into Manages
values('Wong', 'Carole', 'S', 'Quinn', 'Jacky', 'M', '28-FEB-2001');
insert into Manages
values('Young', 'Don', 'R', 'Dale', 'Mary', 'K', '15-SEP-2007');
```
-- Changed all double quotes to single quoted | This is your problem:
```
insert into Works values('Gill', 'Mary', 'L', 'Union Pacific', 48700);
```
'Union Pacific' is not one of your companies, you have 'Union Pacific (UP)'
As an aside, this is a really shabby way to do it anyway. Duplicating your data across tables like this is going to mean you have problems like this over and over again.
You should really have a surrogate key for each of your reference data tables (`Employee` & `Company`) and your detail tables (`Works` and `Manages`) should use those. For example:
```
create table Employee(
Id numeric(8),
Lastname varchar(10),
FirstName varchar(10),
MidInitial char(1),
gender char(1),
street varchar(10),
city varchar(10),
primary key(Id));
```
---
```
create table company(
Id numeric(8),
company_name varchar(20),
city varchar(10),
primary key(Id));
```
---
```
create table Works(
Employee_Id numeric(8),
Company_Id numeric(8),
salary numeric(8,2),
primary key(Employee_Id, Company_Id),
foreign key(Employee_Id) references Employee(Id),
foreign key(Company_Id) references Company(Id));
```
---
```
create table Manages(
Employee_Id numeric(8),
Manager_Id numeric(8),
start_date date,
primary key(Employee_Id,Manager_id),
foreign key(Employee_Id) references Employee(Id),
foreign key(Manager_Id) references Employee(Id));
``` | Integrity constraint error again | [
"",
"sql",
"oracle",
"referential-integrity",
""
] |
I am implementing a competition where there might be a lot of simultaneous entries. I am collecting some user data, which I am putting in one table called entries. I have another table of pre-generated unique discount codes in a table called discountCodes. I am then assigning one to each entry. I thought I would do this by putting an entry id in the discountCodes table.
As there may be a lot of concurrent users I think I should select the first unassigned row and then assign the entry id to that row. I need to make sure between picking an unassigned row and adding the entry id that another thread doesn't find the same row.
What is the best way of ensuring that the row doesn't get assigned twice? | One option is something like the following.
*The following example sets the TRANSACTION ISOLATION LEVEL for the session. For each Transact-SQL statement that follows, SQL Server holds all of the shared locks until the end of the transaction.* **Source:MSDN**
```
USE databaseName;
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO
BEGIN TRANSACTION;
GO
SELECT *
FROM Table1;
GO
SELECT *
FROM Table2;
GO
COMMIT TRANSACTION;
GO
```
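A different way to attack the underlying problem (claiming one unassigned row exactly once) is to do the pick and the assignment in a single conditional `UPDATE`, so there is no window between reading and writing. Here is a rough sketch of that idea against SQLite; the table and column names are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table discountCodes (id integer primary key, code text, entryId integer);
insert into discountCodes (id, code) values (1, 'SAVE10'), (2, 'SAVE20');
""")

def claim_code(entry_id):
    # Pick and assign in one statement: the 'entryId is null' re-check means
    # two concurrent claims can never grab the same row.
    cur = con.execute("""update discountCodes
                         set entryId = ?
                         where id = (select min(id) from discountCodes where entryId is null)
                           and entryId is null""", (entry_id,))
    return cur.rowcount  # 1 if a code was claimed, 0 if none left

claims = [claim_code(e) for e in (101, 102, 103)]  # only two codes exist
print(claims)  # [1, 1, 0]
```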
Read more [SET TRANSACTION ISOLATION LEVEL](http://msdn.microsoft.com/en-us/library/ms173763.aspx) | I would recommend building a bridge table instead of having the `EntryId` in the `DiscountCodes` table with an `EntryId` and a `DiscountCodeId`. Place a `Unique Constraint` on both of those fields.
This way your entry point will encounter a constraint violation when it tries to enter a duplicate. | MS SQL locking for update | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am having problems with has\_and\_belongs\_to\_many relation. I have 2 models, Doctors and PatientMeeting.
```
class Doctor < ActiveRecord::Base
has_and_belongs_to_many :patient_meetings
end
class PatientMeeting < ActiveRecord::Base
has_and_belongs_to_many :doctors
end
```
The problem is to show the patient meetings for doctors marked in the top50 (I have a bool variable called top50). I need to create a query that lists the meetings of these doctors in descending order and includes today's and tomorrow's dates only. I tried this and variations of it, but it didn't work:
```
Doctor.joins(:patient_meetings).where(:top50 => true).where('patient_meetings.meeting_date' => Date.today.strftime("%Y-%m-%d")..Date.tomorrow.strftime("%Y-%m-%d")).order('patient_meetings.meeting_date DESC').limit(50)
```
I am out of ideas about what to do here next. Any help would be much appreciated. | Assuming you're using `mysql`. Try this
```
Doctor.joins(:patient_meetings).where("patient_meetings.meeting_date <= CURDATE() + INTERVAL 1 DAY AND patient_meetings.meeting_date >= CURDATE()").order("patient_meetings.meeting_date DESC").limit(50)
``` | Try this:
```
PatientMeeting.joins(:doctors).where(doctors: {top50: true})
``` | has_and_belongs_to_many and joining tables | [
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
Is it possible to have an `OR` statement in an SQL query without having to repeat information that doesn't change? Take this as an example:
```
SELECT *
FROM table
WHERE a = 1
AND b = 1
AND c = 1
AND x = 1
AND y = 2
OR a = 1
AND b = 1
AND c = 1
AND x = 2
AND y = 1
```
This is the only way I can figure out how to do it. a, b and c do not change, so I see it as illogical to have to repeat them when they stay the same. Is there any way to avoid having to repeat values that don't change when including an `OR` statement? | ```
WHERE a = 1
AND b = 1
AND c = 1
AND (
( x = 1 AND y = 2 )
OR
( x = 2 AND y = 1 )
)
```
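The parentheses matter because `AND` binds more tightly than `OR`. The un-parenthesized original happens to group the same way, but it is easy to verify that the factored form is equivalent; a quick exhaustive check in SQLite:

```python
import sqlite3
from itertools import product

con = sqlite3.connect(":memory:")
con.execute("create table t (a, b, c, x, y)")
# Every combination of 1/2 in the five columns, as exhaustive test data.
con.executemany("insert into t values (?, ?, ?, ?, ?)", product((1, 2), repeat=5))

original = con.execute("""select count(*) from t
                          where a = 1 and b = 1 and c = 1 and x = 1 and y = 2
                             or a = 1 and b = 1 and c = 1 and x = 2 and y = 1""").fetchone()[0]
factored = con.execute("""select count(*) from t
                          where a = 1 and b = 1 and c = 1
                            and ((x = 1 and y = 2) or (x = 2 and y = 1))""").fetchone()[0]
print(original, factored)  # 2 2
```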
Alternative option (which is clearer IMO):
```
SELECT *
FROM (
SELECT *
FROM table
WHERE a = 1
AND b = 1
AND c = 1
) As a_subquery
WHERE (x = 1 AND y = 2)
OR (x = 2 AND y = 1)
``` | ```
SELECT * FROM table
WHERE a = 1 AND b = 1 AND c = 1
AND
(
(x = 1 AND y = 2) OR (x = 2 AND y = 1)
)
``` | SQL OR statement without having to repeat | [
"",
"sql",
""
] |
I have two tables (bold=PK):
log: [**id**,type, value]
totals: [**type**,total]
Where the "total" value at "totals" is the sum of the values of all entries in the log with the same type. I want to update the totals table so I run a query:
```
INSERT INTO totals (type,total) (SELECT type,sum(value) AS total FROM log GROUP BY type)
```
This works fine except when the PK already exists. I'm trying to solve that by adding:
```
ON DUPLICATE KEY UPDATE total=XXXXXXXX
```
but I don't know what to put as XXXXXXXX to refer to the new value. | You want:
```
ON DUPLICATE KEY UPDATE total = VALUES(total)
``` | As Gordon already said, you use VALUES(`field`) to get the value that would be inserted, or just `field` to get the old value
```
INSERT INTO .... ON DUPLICATE KEY UPDATE `field` = VALUES(`field`)
```
<http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html>
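For comparison, SQLite's upsert syntax expresses the same "value that would have been inserted" idea with `excluded`. A runnable sketch of the question's query in that dialect (note the `where 1`, which SQLite's parser wants between a plain `select` source and `on conflict`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table log (id integer primary key, type text, value integer);
create table totals (type text primary key, total integer);
insert into log (type, value) values ('a', 1), ('a', 2), ('b', 5);
insert into totals values ('a', 999);  -- stale row that must be overwritten
""")

# excluded.total plays the role of MySQL's VALUES(total)
con.execute("""insert into totals (type, total)
               select type, sum(value) from log where 1 group by type
               on conflict (type) do update set total = excluded.total""")

totals = dict(con.execute("select type, total from totals order by type"))
print(totals)  # {'a': 3, 'b': 5}
```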
In your case - the complete query should look something like:
```
INSERT INTO totals (type,total) (SELECT type,sum(value) AS total FROM log GROUP BY type) ON DUPLICATE KEY UPDATE total = VALUES(total)
``` | On duplicate key update with the new value | [
"",
"mysql",
"sql",
""
] |
I have been working on a query:
```
SELECT P.[Name]+' - '+P.[Description] AS Sprint, S.[Number] AS Story, T.[Name] AS Task
FROM DailyTaskHours D
INNER JOIN Task T ON D.TaskId = T.PK_Task
INNER JOIN Story S ON T.StoryId = S.PK_Story
INNER JOIN Sprint P ON S.SprintId = P.PK_Sprint
GROUP BY P.[Name], P.[Description], S.[Number], T.[Name]
```
The Sprint column may or may not be NULL:

The above query will only return the requested columns if there is a SprintId associated. If it is NULL, the whole row will not be returned. This makes sense: `S.SprintId = P.PK_Sprint` cannot be satisfied when SprintId is NULL on the Story table.
If it's NULL, I still want it to return the row with all of the other tables' column data, but with the word KanBan, instead of not returning anything. How do I achieve this? | Change `S.Number` to `ISNULL(S.Number, 'KanBan')`. This will add 'KanBan' in case no matching SprintId is found in the Sprint table.
Change `INNER JOIN Sprint P ON S.SprintId = P.PK_Sprint` to `LEFT JOIN Sprint P ON S.SprintId = P.PK_Sprint`. This ensures that all other records will still show, even in case of no match.
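The effect of the two changes is easy to see in miniature with Python's SQLite (which spells `ISNULL` as `IFNULL`), using invented toy rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table Story (PK_Story integer primary key, Number text, SprintId integer);
create table Sprint (PK_Sprint integer primary key, Name text);
insert into Sprint values (1, 'Sprint 1');
insert into Story values (10, 'S-10', 1), (11, 'S-11', null);  -- S-11 has no sprint
""")

inner = con.execute("""select s.Number from Story s
                       join Sprint p on s.SprintId = p.PK_Sprint""").fetchall()
left = con.execute("""select s.Number, ifnull(p.Name, 'KanBan') from Story s
                      left join Sprint p on s.SprintId = p.PK_Sprint
                      order by s.Number""").fetchall()
print(inner)  # [('S-10',)]       -- the inner join drops the sprint-less story
print(left)   # [('S-10', 'Sprint 1'), ('S-11', 'KanBan')]
```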
The full query then becomes:
```
SELECT P.[Name]+' - '+P.[Description] AS Sprint, ISNULL(S.Number, 'KanBan') AS Story, T.[Name] AS Task
FROM DailyTaskHours D
INNER JOIN Task T ON D.TaskId = T.PK_Task
INNER JOIN Story S ON T.StoryId = S.PK_Story
LEFT JOIN Sprint P ON S.SprintId = P.PK_Sprint
GROUP BY P.[Name], P.[Description], S.[Number], T.[Name]
``` | ```
SELECT ISNULL(P.[Name]+' - '+P.[Description],'KanBan') AS Sprint, S.[Number] AS Story, T.[Name] AS Task
FROM DailyTaskHours D
INNER JOIN Task T ON D.TaskId = T.PK_Task
INNER JOIN Story S ON T.StoryId = S.PK_Story
LEFT OUTER JOIN Sprint P ON S.SprintId = P.PK_Sprint
GROUP BY P.[Name], P.[Description], S.[Number], T.[Name]
``` | SELECT using multiple JOINS needs to return even if one compairson is not equal | [
"",
"sql",
"sql-server",
""
] |
I have a table with numbers in a `varchar(255)` field. They're all greater than one and have multiple decimal places. I'd like to convert them to integers. According to every web site I've consulted, including [this one on StackOverflow](https://stackoverflow.com/questions/728833/convert-a-string-to-int-using-sql-query), either of these should work:
```
SELECT CAST(VarcharCol AS INT) FROM MyTable
SELECT CONVERT(INT, VarcharCol) FROM MyTable
```
These both work for me for every kind of numeric value but `integer` - I can convert to `float`, `decimal`, etc. just fine, but trying to convert to `integer` gives me the following error:
> Conversion failed when converting the varchar value '7082.7758172'
> to data type int.
I've worked around the problem by converting to data type `Decimal(6,0)`, which works fine. But just for my education, can anyone tell me why converting to data type `int` (or `integer`) gives me an error? Thanks. | Converting a `varchar` value into an `int` fails when the value includes a decimal point to prevent loss of data.
If you convert to a `decimal` or `float` value first, then convert to `int`, the conversion works.
Either example below will return 7082:
```
SELECT CONVERT(int, CONVERT(decimal(12,7), '7082.7758172'));
SELECT CAST(CAST('7082.7758172' as float) as int);
```
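Incidentally, the same two-step rule appears outside SQL as well; Python's `int()` also refuses a string containing a decimal point, and the workaround has the same shape:

```python
try:
    int("7082.7758172")        # rejected, just like CAST(... AS int)
    direct_ok = True
except ValueError:
    direct_ok = False

two_step = int(float("7082.7758172"))  # convert to float first, then truncate
print(direct_ok, two_step)  # False 7082
```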
Be aware that converting to a `float` value may result, in rare circumstances, in a loss of precision. I would tend towards using a `decimal` value, however you'll need to specify precision and scale values that make sense for the `varchar` data you're converting. | Actually whether there are digits or not is irrelevant. The . (dot) is forbidden if you want to cast to int. Dot can't - logically - be part of Integer definition, so even:
```
select cast ('7.0' as int)
select cast ('7.' as int)
```
will fail but both are fine for floats. | tSQL - Conversion from varchar to numeric works for all but integer | [
"",
"sql",
"sql-server",
"number-formatting",
""
] |
I have a table `test_table` as below
```
test_id varchar2(10)
new_test_id number
```
Though `test_id` is a `varchar`, it stores numbers. Now I want to store the negative of the current value of `test_id` into `new_test_id`. The statement I tried is below:
```
update test_table a set a.new_test_id = TO_NUMBER(-a.test_id);
```
This throws the invalid number error although on running a test query as
```
select TO_NUMBER(-test_id) from test_table
```
it shows properly formatted number results.
Why does `to_number` fails to convert string to number in the `update` statement where as it works fine in `select`? And what is the alternative in this scenario? | I just figured out it was not working because the test\_id column unexpectedly had a couple of records with alphabets. To\_number was failing for only those records under update and was fine for others which had numbers. | `update test_table a set a.new_test_id = -1 * TO_NUMBER(a.test_id);` | To_number fails to convert varchar to number in update statement | [
"",
"sql",
"oracle",
""
] |
I got this "single-row subquery returns more than one row"
and my Query is
```
Select contact_ID from contact where (
Select b.Contact_ID from company a, contact b where a.Company_ID = b.Company_ID
and a.IS_ACTIVE <> 'N' and b.IS_ACTIVE <> 'N'
and a.CREATED_DATE >= '01-Jun-2013') =
(
Select b.Contact_ID from company a, contact b where a.Company_ID = b.Company_ID
and a.IS_ACTIVE <> 'N' and b.IS_ACTIVE <> 'N'
and a.CREATED_DATE < '01-Jun-2013');
```
I don't know why this is not working. Please help me out | try this
```
Select contact_ID from contact where Contact_ID in
(
Select b.Contact_ID from company a
inner join contact b
on a.Company_ID = b.Company_ID
where a.IS_ACTIVE <> 'N' and b.IS_ACTIVE <> 'N'
and a.CREATED_DATE >= '01-Jun-2013'
) ;
```
edit:
more simpler without `IN` clause.
```
Select b.Contact_ID from company a
inner join contact b
on a.Company_ID = b.Company_ID
where a.IS_ACTIVE <> 'N' and b.IS_ACTIVE <> 'N'
and a.CREATED_DATE >= '01-Jun-2013' ;
``` | Try this:
Removed sub select and replaced with union.
```
Select b.Contact_ID
from company a, contact b
where a.Company_ID = b.Company_ID
and a.IS_ACTIVE <> 'N' and b.IS_ACTIVE <> 'N'
and a.CREATED_DATE >= '01-Jun-2013'
union
Select b.Contact_ID
from company a, contact b
where a.Company_ID = b.Company_ID
and a.IS_ACTIVE <> 'N' and b.IS_ACTIVE <> 'N'
and a.CREATED_DATE < '01-Jun-2013');
``` | SQL error sign (single-row subquery returns more than one row | [
"",
"sql",
"oracle",
"select",
""
] |
I have probably in excess of 100 databases on this one SQL Server (2005) instance. I'd like to list them in order of their create dates, or even better, in the order of the date of the latest modification to any table.
Is there a SELECT query I can write, and just as importantly, from what context do I write it and with what permissions do I need to execute it? | You can easily write this query against the `sys.databases` catalog view
```
SELECT * FROM sys.databases
ORDER BY create_date
```
but unfortunately, there's no equivalent for the "last modification date" that I'm aware of ...
This should work from any database on that server - doesn't matter which database you're in, those `sys` catalog views should be accessible from anywhere. | ```
create table #db_name (db_name nvarchar(128), last_change datetime);
exec sp_MSForEachDB 'Use ?; insert into #db_name (db_name, last_change) select ''?'', max(modify_date) from sys.tables'
select * from #db_name order by last_change desc
```
this is not exactly one select but at least you got what you want. I'm db\_owner on one of our databases and probably nothing impressive server-wide so it's not very demanding. | List all the databases on one SQL Server in the order they were created | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
I need to do the following without using a #Temp table or a table variable. Can't do CTE with a GROUP BY. Is there a way to do this?
```
select Title, count(*) as PlayCount
into #Temp
from LC l
join Play p on p.lcID = l.lcID
group by Title
declare @Max int
select @Max = max(PlayCount) from #Temp
select Title, PlayCount, cast(PlayCount as float) / cast(@Max as float) as Weight
from #Temp
``` | Use a subquery/cte and `MAX() OVER()`:
```
SELECT Title, PlayCount, PlayCount*1.0 / Max_CT AS WEIGHT
FROM (SELECT Title
, COUNT(*) AS PlayCount
, MAX(COUNT(*)) OVER() AS Max_CT
FROM LC l
JOIN PLAY P ON P.LCID = L.LCID
GROUP BY Title
)sub
```
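The derived-table + `MAX() OVER()` pattern above runs on any engine with window functions; here is a minimal runnable sketch using Python's built-in SQLite with invented sample rows:

```python
import sqlite3  # stdlib; SQLite >= 3.25 supports window functions

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE LC (lcID INTEGER, Title TEXT);
CREATE TABLE Play (lcID INTEGER);
INSERT INTO LC VALUES (1, 'A'), (2, 'B');
INSERT INTO Play VALUES (1), (1), (2);   -- 'A' played twice, 'B' once
""")
rows = con.execute("""
SELECT Title, PlayCount,
       PlayCount * 1.0 / MAX(PlayCount) OVER () AS Weight
FROM (SELECT Title, COUNT(*) AS PlayCount
      FROM LC l JOIN Play p ON p.lcID = l.lcID
      GROUP BY Title) sub
ORDER BY Title
""").fetchall()
print(rows)  # [('A', 2, 1.0), ('B', 1, 0.5)]
```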
(Assuming MS SQL 2005 or newer) | sub query? perhaps not the cleanest
```
select Title, count(*) as PlayCount , count(*) / tot
from LC l join Play p on p.lcID = l.lcID ,
(select max(a) as tot from (select count(*) as a from LC group by lcid))
group by Title , tot
```
not entirely sure which table is doing what, but above should give you an idea | Making an in-line statement without a table or a cte | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am using sybase.
I have two tables like this:
```
USER | HITS
------------
A | 2
B | 1
C | 4
E | 3
USER | HITS
------------
A | 2
B | 8
C | 1
D | 2
```
I want to combine them on USER and SUM the HITS column to get a result like this:
```
USER | HITS
------------
A | 4
B | 9
C | 5
D | 2
E | 3
```
I was able to combine them partially, using this query:
```
select a.USERNAME, sum(a.HITS+b.HITS)
from table1 a, table2 b
where a.USERNAME=b.USERNAME
group by a.USERNAME
```
I am not sure what to do to get the whole data set I want... I'm having trouble when trying to use a JOIN or UNION... please help :) | Try this:
```
SELECT USER, SUM(HITS) HITS
FROM
(SELECT USER, HITS
FROM Tbl1 UNION ALL
SELECT USER, HITS
FROM Tbl2) A
GROUP BY USER
``` | Not sure if Sybase has `with`, if not you can nest the subquery as per Hamlet's solution.
```
;with x (user, hits) as (
select
user, hits
from
table1
union all
select
user, hits
from
table2
)
select
user,
sum(hits) as hits
from
x
group by
user;
``` | Aggregating 2 Tables in tSQL | [
"",
"sql",
"t-sql",
"sybase",
""
] |
**Database -** Microsoft Adventureworks
**Tables -**
Sales.SalesOrderHeader
**Problem -** Which customers (ie customerID's) ordered something in March 2003 or April 2003 but not both.
**Concept -**
Get the blue parts, ie elements/rows *unique* to A and those *unique* to B.

**My query -**
```
select Soh.CustomerID, Soh.OrderDate
from Sales.SalesOrderHeader as Soh
where Soh.OrderDate >= '2003-03-01' AND Soh.OrderDate < '2003-04-01' -- march only
UNION
select Soh.CustomerID, Soh.OrderDate
from Sales.SalesOrderHeader as Soh
where Soh.OrderDate >= '2003-04-01' AND Soh.OrderDate < '2003-05-01' -- april only
order by Soh.OrderDate asc;
```
**My questions -**
Did I answer the problem correctly ?
Any other ways of solving this problem, preferably lesser amount of code ?
**EDIT -** Ooops. This will only give orders in the 2 months and not answer the question. So, I am wrong. Trying to fix it.
Thanks. | ```
SELECT Cust.CustomerID
FROM Sales.Customer AS Cust
INNER JOIN Sales.SalesOrderHeader AS Soh
ON Cust.CustomerID = Soh.CustomerID
WHERE Soh.OrderDate >= '2003-03-01' AND Soh.OrderDate < '2003-05-01'
GROUP BY Cust.CustomerID
HAVING COUNT(DISTINCT CASE WHEN MONTH(Soh.OrderDate) = 3 AND
YEAR(Soh.OrderDate) = 2003 THEN 1 ELSE 2 END) = 1
```
* [SQLFiddle Demo (*simplified records*)](http://sqlfiddle.com/#!3/d375c/3)
Well, `SELECT`, `FROM`, `WHERE` and `GROUP BY` are pretty self explanatory. The tricky part here is the `HAVING` clause. So let me simplify it, the `CASE` statement you see gives a value which categorized the record into a group. Instead of `1` and `2`, I will be using `March` and `April` to make it more understandable.
```
SELECT CustomerID,
CASE WHEN MONTH(OrderDate) = 3 AND YEAR(OrderDate) = 2003
THEN 'March'
ELSE 'April'
END AS MonthBought
FROM TableName
WHERE OrderDate >= '2003-03-01' AND OrderDate < '2003-05-01'
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!3/d375c/4)
As you can see in the demo, when an order date falls anywhere in March 2003, the corresponding value of `MonthBought` is `March`; otherwise it is `April`, since the `WHERE` clause guarantees that all records fall between March and April 2003.
The `HAVING` clause then keeps only the customers whose count of distinct `MonthBought` values is exactly `1`, which means the customer bought in only one of the two months.
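The same "one month but not both" trick can be sketched on any engine; here with Python's built-in SQLite and made-up rows, using `strftime('%m', ...)` in place of `MONTH()`:

```python
import sqlite3  # stdlib; sketch of HAVING COUNT(DISTINCT month) = 1

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (CustomerID INTEGER, OrderDate TEXT);
INSERT INTO orders VALUES
  (1, '2003-03-10'), (1, '2003-04-02'),   -- both months
  (2, '2003-03-15'),                      -- March only
  (3, '2003-04-20'), (3, '2003-04-25');   -- April only
""")
rows = con.execute("""
SELECT CustomerID
FROM orders
WHERE OrderDate >= '2003-03-01' AND OrderDate < '2003-05-01'
GROUP BY CustomerID
HAVING COUNT(DISTINCT strftime('%m', OrderDate)) = 1
ORDER BY CustomerID
""").fetchall()
print(rows)  # [(2,), (3,)] -- customer 1 ordered in both months
```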
* [SQLFiddle Demo](http://sqlfiddle.com/#!3/d375c/7) | ```
CREATE TABLE table_a ( id INTEGER NOT NULL PRIMARY KEY
, OrderDate DATE NOT NULL DEFAULT '2003-03-15');
CREATE TABLE table_b ( id INTEGER NOT NULL PRIMARY KEY
, OrderDate DATE NOT NULL DEFAULT '2003-04-15');
INSERT INTO table_a(id) VALUES (0),(2),(4),(6),(8),(10),(12),(14),(16),(18),(20);
INSERT INTO table_b(id) VALUES (0),(3),(6),(9),(12),(15),(18),(21);
SELECT COALESCE (a.id, b.id) AS id
FROM (
SELECT DISTINCT id
FROM table_a
WHERE OrderDate >= '2003-03-01' AND OrderDate < '2003-04-01'
) a
FULL OUTER JOIN (
SELECT DISTINCT id
FROM table_b
WHERE OrderDate >= '2003-04-01' AND OrderDate < '2003-05-01'
) b ON b.id = a.id
WHERE a.id IS NULL OR b.id IS NULL
;
```
Note: I had to invent my own data, since the OP did not supply any, and I am too lazy to type it in.
UPDATE: the original UNION query (here with the table\_a/table\_b construct, for the original data model, use table\_a = table\_b = `Sales.SalesOrderHeader`)
```
SELECT a.id, a.OrderDate
FROM table_a as a
WHERE a.OrderDate >= '2003-03-01' AND a.OrderDate < '2003-04-01' -- march only
AND NOT EXISTS (
SELECT * FROM table_b nx
WHERE nx.id = a.id
AND nx.OrderDate >= '2003-04-01' AND nx.OrderDate < '2003-05-01' -- april only
)
UNION ALL
SELECT b.id, b.OrderDate
FROM table_b as b
WHERE b.OrderDate >= '2003-04-01' AND b.OrderDate < '2003-05-01' -- april only
AND NOT EXISTS (
SELECT * FROM table_a nx
WHERE nx.id = b.id
AND nx.OrderDate >= '2003-03-01' AND nx.OrderDate < '2003-04-01' -- march only
)
ORDER BY OrderDate ASC;
```
Notes:
* the `UNION` should be a `UNION ALL`, since duplicates are impossible, and don't have to be removed
* The `NOT EXISTS ()` clauses are necessary: you want the records in march that don't exist in april, and vice versa.
* the need for `UNION` is often an indication of a sub-optimal data model (in this case it is not)
* `FULL OUTER JOIN` can be considered a special form of relational division | Get combined difference (not common/"intersecting" records) from two queries | [
"",
"sql",
""
] |
**Preamble**
As of late, I see too many geeks commenting on Oracle questions saying "Do not use (+) operator, rather use JOIN syntax".
**Question**
I do see that both work well. But what is the real difference between using them? I would welcome answers more from experience.
1. Is there anything to do with limitations in application, performance, etc. while using them?
2. What would you suggest for me?
I did read something on [Oracle documentation](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_10002.htm#i2080416) but itβs not good enough to make me understand or feel comfortable with the comprehensive information.
**Note**: I am planning to migrate 200+ packages and procedures, if the Keyword should be used instead of (+)
3. Also are there any freeware tools to do the rewrite?
**Posting samples**
```
βββββββββββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββββββββββββββ
β INNER JOIN - CONVENTIONAL β INNER JOIN - ANSI SYNTAX β
βββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββ€
β SELECT β SELECT β
β emp.deptno β ename, β
β FROM β dname, β
β emp, β emp.deptno, β
β dept β dept.deptno β
β WHERE β FROM β
β emp.deptno = dept.deptno; β scott.emp INNER JOIN scott.dept β
β β ON emp.deptno = dept.deptno; β
βββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββ€
β LEFT OUTER JOIN - CONVENTIONAL β LEFT OUTER JOIN - ANSI SYNTAX β
βββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββ€
β SELECT β SELECT β
β emp.deptno β ename, β
β FROM β dname, β
β emp, β emp.deptno, β
β dept β dept.deptno β
β WHERE β FROM β
β emp.deptno = dept.deptno(+); β scott.emp LEFT OUTER JOIN scott.dept β
β β ON emp.deptno = dept.deptno; β
βββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββ€
β RIGHT OUTER JOIN - CONVENTIONAL β RIGHT OUTER JOIN - ANSI SYNTAX β
βββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββ€
β SELECT β SELECT β
β emp.deptno β ename, β
β FROM β dname, β
β emp, β emp.deptno, β
β dept β dept.deptno β
β WHERE β FROM β
β emp.deptno(+) = dept.deptno; β scott.emp RIGHT OUTER JOIN scott.dept β
β β ON emp.deptno = dept.deptno; β
βββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββ€
β FULL OUTER JOIN - CONVENTIONAL β FULL OUTER JOIN - ANSI SYNTAX β
βββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββ€
β SELECT β SELECT β
β * β * β
β FROM β FROM β
β emp, β scott.emp FULL OUTER JOIN scott.dept β
β dept β ON emp.deptno = dept.deptno; β
β WHERE β β
β emp.deptno = dept.deptno(+) β β
β UNION ALL β β
β SELECT β β
β * β β
β FROM β β
β emp, β β
β dept β β
β WHERE β β
β emp.deptno(+) = dept.deptno β β
β AND emp.deptno IS NULL; β β
βββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββ
```
**PS**: Read the summary of answers for all updates grouped. | **Grouping answers together**
1. Use explicit JOINs rather than implicit ones (regardless of whether they are outer joins or not): with implicit joins it is much easier to accidentally create a Cartesian product, while with explicit JOINs you cannot create one "by accident". The more tables are involved, the higher the risk of missing a join condition.
2. Basically (+) is severely limited compared to ANSI joins. Furthermore it is only available in Oracle whereas the ANSI join syntax is supported by all major DBMS
3. SQL will not start to perform better after migration to ANSI syntax - it's just different syntax.
4. Oracle strongly recommends that you use the more flexible FROM clause join syntax shown in the former example. In the past there were some bugs with ANSI syntax but if you go with latest 11.2 or 12.1 that should be fixed already.
5. Using the JOIN operators ensure your SQL code is ANSI compliant, and thus would allow a front-end application to be more easily ported for other database platforms.
6. Join conditions have a very low selectivity on each table and a high selectivity on the tuples in the theoretical cross product. Conditions in the where statement usually have a much higher selectivity.
7. Oracle internally converts ANSI syntax to the (+) syntax, you can see this happening in the execution plan's Predicate Information section.
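The ANSI outer-join behaviour illustrated in the table above is portable; a minimal sketch using Python's built-in SQLite (which, like most non-Oracle engines, has no `(+)` operator), with invented rows:

```python
import sqlite3  # stdlib; ANSI LEFT OUTER JOIN keeps unmatched left-side rows

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (ename TEXT, deptno INTEGER);
CREATE TABLE dept (deptno INTEGER, dname TEXT);
INSERT INTO emp VALUES ('SMITH', 10), ('JONES', NULL);
INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
""")
rows = con.execute("""
SELECT e.ename, d.dname
FROM emp e LEFT OUTER JOIN dept d ON e.deptno = d.deptno
ORDER BY e.ename
""").fetchall()
print(rows)  # [('JONES', None), ('SMITH', 'ACCOUNTING')]
```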
**Possible Pitfall in using ANSI syntax on 12c engine**
Including a possibility of bug in JOIN in 12c. See [here](https://stackoverflow.com/questions/19686262/query-featuring-outer-joins-behaves-differently-in-oracle-12c)
**FOLLOW UP:**
`Quest SQL optimizer tool` rewrites the SQL to ANSI syntax. | If your 200+ packages work as intended with "old fashioned" syntax, let it be.
SQL will not start to perform better after migration to ANSI syntax - it's just different syntax.
All that being said, ANSI syntax is cleaner - you are not going to normal join if you forget (+) in some multi-column outer join.
In the past there were some bugs with ANSI syntax but if you go with latest 11.2 or 12.1 that should be fixed already.
Of course, you know your environment and priorities better - as SchmitzIT said - ANSI syntax is part of SQL standard and it would help when going to use some other RDBMS product. | Oracle Joins - Comparison between conventional syntax VS ANSI Syntax | [
"",
"sql",
"oracle",
"oracle11g",
"oracle10g",
"oracle12c",
""
] |
How to insert a default value in all columns after inserting a value in a single column,
Example: I created a `product` table and the columns are `id (autoincrement)`, `product_name`, and `item`.
```
CREATE TABLE product
(
id int AUTOINCREMENT,
product_name varchar(255),
item int
);
```
How can I insert only the `product_name` and have the item automatically populated with the value `30`?
```
insert into
product
values (
'burger'
)
```
and have the result be
```
id: product_name: item:
7 burger 30
``` | Set Default for item:
```
CREATE TABLE product
(
  id int AUTOINCREMENT,
  product_name varchar(255),
  item int DEFAULT 30
);
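-- With this DEFAULT in place, inserting only product_name fills item with 30
-- (example row; the exact auto-increment syntax varies by dialect):
INSERT INTO product (product_name) VALUES ('burger');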
``` | ```
CREATE TABLE product ( id int AUTOINCREMENT, product_name varchar(255), item int );
```
In MS SQL Server, you can attach a default to the existing column with:
```
ALTER TABLE product
ADD CONSTRAINT DF_TestTable_NewCol DEFAULT 30 FOR item
GO
``` | INSERT in a single column, the rest of the column will have a default value | [
"",
"mysql",
"sql",
""
] |
If I run this query on my oracle database:
```
SELECT COLUMN_NAME,
DATA_DEFAULT,
DATA_TYPE
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'MY_TABLE'
AND DATA_DEFAULT IS NOT NULL;
```
The result I'm getting is:
```
STOICHIOMETRY NULL NUMBER
```
My Question is WHY? I've asked for not NULL.
If I run:
```
describe MY_TABLE
```
I get:
```
Name Null Type
------------- -------- ------------
TID NOT NULL NUMBER(9)
COMPONENT_ID NOT NULL NUMBER(9)
RELATIONSHIP NOT NULL VARCHAR2(20)
STOICHIOMETRY NUMBER(3)
TARGCOMP_ID NOT NULL NUMBER(9)
HOMOLOGUE NOT NULL NUMBER(1)
``` | I think NULL is your default value? For example I have a table and I've altered the default value to NULL:
```
ALTER TABLE UNIQUEVALTEST MODIFY ZIP DEFAULT NULL;
```
Now when I select:
```
SELECT COLUMN_NAME, DATA_DEFAULT
FROM ALL_TAB_COLUMNS WHERE TABLE_NAME = 'UNIQUEVALTEST'
AND data_default IS NOT NULL;
```
I get:
```
ZIP NULL
```
Edit: To explain better, if the field was (null), then there is no default value. But the default value is now set to NULL, so any new data inserted would automatically be NULL unless specified otherwise. I don't know if that changes the functionality at all (having no default value, or having the default value set to NULL), but I would think it would operate the same. | The nullability of a column is stored in `NULLABLE`, not in `DATA_DEFAULT`. The `DATA_DEFAULT` stores the default value for a column (and if you haven't provided a default value in the `CREATE TABLE` statement, all columns get `DEFAULT NULL` (even non-nullable ones!)
Try:
```
SELECT COLUMN_NAME,
DATA_DEFAULT,
DATA_TYPE,
NULLABLE
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'MY_TABLE'
AND NULLABLE = 'N'; -- 'N' means No, not nullable
-- 'Y' means Yes, nullable
``` | Oracle, NULL but not NULL | [
"",
"sql",
"oracle",
"null",
"oracle-sqldeveloper",
"isnull",
""
] |
When I try to insert a value which has '&' in TOAD I get a prompt for a substitution variable.
So when I try:
```
insert into x values('hello & world');
```
it throws a prompt for substituting the variable 'world'.
I have tried right clicking in the editor and disabling prompt for substitution variables but this seems to work only for Execute Statement (`F9`) It doesn't work for Execute as script(`F5`).
I am also aware of using `chr(38)` like:
```
insert into x values('hello '||chr(38)||'world');
```
but I do not want this.
Is there something else that will help me run the script? | Try putting `set define off` at the beginning of your script. It should work with `F5`:
```
set define off;
insert into x values('hello & world');
set define on;
``` | In TOAD version 12.120.39
Right-click in the Editor, click "Substitution Variable Prompting", and select "None".
This worked for me. | How to escape ampersand in TOAD? | [
"",
"sql",
"plsql",
"toad",
""
] |
I have a sql table :
Levels
```
LevelId Min Product
1 x 1
2 y 1
3 z 1
4 a 1
```
I need to duplicate the same data into the database by changing only the product Id from 1 2,3.... 40
example
```
LevelId Min Product
1 x 2
2 y 2
3 z 2
4 a 2
```
I could do something like
`INSERT INTO dbo.Levels SELECT top 4 * FROM dbo.Levels`
but that would just copy paste the data.
Is there a way I can copy the data and paste it changing only the Product value? | You're most of the way there - you just need to take one more logical step:
```
INSERT INTO dbo.Levels (LevelID, Min, Product)
SELECT LevelID, Min, 2 FROM dbo.Levels WHERE Product = 1
```
...will duplicate your rows with a different product ID.
Also consider that `WHERE Product = 1` is going to be more reliable than `TOP 4`. Once you have more than four rows in the table, you will not be able to guarantee that `TOP 4` will return the same four rows unless you also add an `ORDER BY` to the select, however `WHERE Product = ...` will always return the same rows, and will continue to work even if you add an extra row with a product ID of 1 (where as you'd have to consider changing `TOP 4` to `TOP 5`, and so on if extra rows are added). | You can generate the product id's and then load them in:
```
with cte as (
select 2 as n
union all
select n + 1
from cte
where n < 40
)
INSERT INTO dbo.Levels([Min], Product)
SELECT [Min], cte.n as Product
FROM dbo.Levels l cross join
     cte
where l.Product = 1;
```
This assumes that the `LevelId` is an identity column, that auto-increments on insert. If not:
```
with cte as (
select 2 as n
union all
select n + 1
from cte
where n < 40
)
INSERT INTO dbo.Levels(LevelId, [Min], Product)
SELECT l.LevelId + (cte.n - 1) * 4, [Min], cte.n as Product
FROM dbo.Levels l cross join
     cte
where l.Product = 1;
``` | Populate a sql table with duplicate data except for one column | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
Can anyone help? I have a table which has a datetime field and a duration. What I need is to merge the records that happened on one day. As below, my first column shows the date, the second the amount of hours available, and then the minutes. As you can see there is a morning and an afternoon; what I need is to select just one record for each day and a total duration, so it shows for 10th July 2004 just one record with a duration count of 7 hours 30 minutes.
```
2004-10-07 09:00:00.000 4 0
2004-10-07 14:00:00.000 3 30
2004-10-08 09:00:00.000 4 0
``` | The idea is that you want to convert the date time to a date and group by that. Then you want to sum the hours and minutes. It is easy to sum them as minutes `60*h + m`. However, to split this back into hours and minutes requires a bit of arithmetic.
Here is one method:
```
select cast(dt as date),
cast(sum(h * 60 + m)/60 as int) as hours,
sum(h * 60 + m) - 60 * cast(sum(h * 60 + m)/60 as int) as minutes
from t
group by cast(dt as date);
```
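A runnable sketch of the same arithmetic, using Python's built-in SQLite (invented rows), where integer division and `%` keep the hours/minutes split simple:

```python
import sqlite3  # stdlib; sum per-day durations, then split back into h/m

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (dt TEXT, h INTEGER, m INTEGER);
INSERT INTO t VALUES
  ('2004-10-07 09:00:00', 4, 0),
  ('2004-10-07 14:00:00', 3, 30),
  ('2004-10-08 09:00:00', 4, 0);
""")
rows = con.execute("""
SELECT date(dt) AS day,
       SUM(h * 60 + m) / 60 AS hours,    -- integer division in SQLite
       SUM(h * 60 + m) % 60 AS minutes
FROM t
GROUP BY date(dt)
ORDER BY day
""").fetchall()
print(rows)  # [('2004-10-07', 7, 30), ('2004-10-08', 4, 0)]
```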
Unfortunately, the specific operations for converting to a date depend on the database. Also, the syntax for the "modulus" operation may differ among databases, so this takes the safer route of doing the calculation directly. | In case you're using some database which supports the `SUM` and `GROUP BY` commands, try this:
This works fine for sqlite3:
```
SELECT date(date), (sum(hours)*60+sum(minutes))/60, sum(minutes)%60
FROM dates
GROUP BY date(date)
```
This is the answer to the follow-up question in the comments:
```
SELECT
date(free.date),
(SUM(hours)*60+SUM(free.minutes))-
(SELECT (SUM(hours)*60+SUM(used.minutes))
FROM used
WHERE date(used.date)=date(free.date))
FROM free
WHERE date(free.date) IN (SELECT DISTINCT date(used.date) FROM used)
GROUP BY date(free.date);
``` | SQL to Merge Date fields | [
"",
"sql",
"date",
"merge",
"group-by",
""
] |
Imagine we have a table such as:
```
Number Value GroupValue
1 FOO GR1
22 BAR GR2
100 FOO3 GR1
20 BAR23 GR2
```
I want to get the summed values of these rows based on group value, but I also want to see which rows it has grouped, in another column, such as:
```
SUM values groups
101 FOO,FOO3 GR1
42 BAR,BAR23 GR2
```
How can I achieve that in SQL?
Tried:
```
> SELECT SUM(ID) AS SUM ,
STUFF(( SELECT ',' + A
FROM dbo.Table_1 AS T2
WHERE A = T2.A
FOR
XML PATH('')
), 1, 1, '') AS A ,
B
FROM dbo.Table_1 AS T
GROUP BY B ,
A
```
But this can not handle the groups properly... | You can't use an alias created in the `SELECT` list in your `GROUP BY`.
You can fix this by separating your listing from your aggregation (cte/subquery):
```
WITH cte AS (SELECT *
,STUFF(( SELECT ',' + Value
FROM Table1 AS T2
WHERE T.GroupValue = T2.GroupValue
FOR XML PATH('')
), 1, 1, '') AS Value_List
FROM Table1 AS T)
SELECT SUM(Number) AS Total
,Value_List
,GroupValue
FROM cte
GROUP BY Value_List, GroupValue
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/8e531/8/0)
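`STUFF(... FOR XML PATH(''))` is SQL Server-specific; on SQLite (and MySQL) `group_concat()` plays the same role. A small runnable sketch with the question's data, via Python's built-in SQLite:

```python
import sqlite3  # stdlib; group_concat() as the STUFF/FOR XML equivalent

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (Number INTEGER, Value TEXT, GroupValue TEXT);
INSERT INTO Table1 VALUES (1, 'FOO', 'GR1'), (22, 'BAR', 'GR2'),
                          (100, 'FOO3', 'GR1'), (20, 'BAR23', 'GR2');
""")
rows = con.execute("""
SELECT SUM(Number) AS Total,
       group_concat(Value, ',') AS Value_List,  -- concatenation order is not guaranteed
       GroupValue
FROM Table1
GROUP BY GroupValue
ORDER BY GroupValue
""").fetchall()
print(rows)  # e.g. [(101, 'FOO,FOO3', 'GR1'), (42, 'BAR,BAR23', 'GR2')]
```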
Or you can use a partitioned `SUM()` (window function):
```
SELECT DISTINCT SUM(Number) OVER(PARTITION BY GroupValue) AS Total
,STUFF(( SELECT ',' + Value
FROM Table1 AS T2
WHERE T.GroupValue = T2.GroupValue
FOR XML PATH('')
), 1, 1, '') AS Value_List
,GroupValue
FROM Table1 AS T
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/8e531/10/0) | You just need to make a small change, stick your SQL **without** the group into another select with the group. I think I converted your SQL right:
```
SELECT SUM(ID) as IdSum, B FROM (
SELECT ID,
STUFF(( SELECT ',' + A
FROM dbo.Table_1 AS T2
WHERE A = T2.A
FOR
XML PATH('')
), 1, 1, '') AS B
FROM dbo.Table_1 AS T) AS B2
GROUP BY B
``` | Concatenating rows using group by and aggregate functions | [
"",
"sql",
"sql-server",
""
] |
How to search data from month to month if:
> txtbox1.text = "m/y" or "06/2013"
>
> txtbox2.text = "m/y" or "09/2013"
mysql query code :
> SELECT \* FROM tb\_user WHERE dateregister ?.....
and if, record in table is:
> id name dateregister
>
> 1 abc 05/05/2013
>
> 2 ccc 06/05/2013
>
> 3 ddd 01/06/2013
>
> 4 ggg 01/07/2013
>
> 5 ttt 10/07/2013
>
> 6 kkm 20/08/2013
>
> 7 ooo 01/09/2013
and I just wanted to take the data from month "06/2013" to "09/2013" and data that I get is:
> 3 ddd 01/06/2013
>
> 4 ggg 01/07/2013
>
> 5 ttt 10/07/2013
>
> 6 kkm 20/08/2013
>
> 7 ooo 01/09/2013
I've tried a variety of queries but I always fail.
This is my code in VB.NET:
> Imports MySql.Data.MySqlClient
>
> Dim connString As String = "Database=dbuser;Data Source=localhost;User Id=root;Password="
>
> Dim conn As New MySqlConnection(connString)
>
> Dim adapter As New MySqlDataAdapter
>
> Dim ds As New DSReportPO
>
> Dim cmd As MySqlCommand
>
> monthx1= txtbox1.text
>
> monthx2 = txtbox2.text
>
> cmd = New MySqlCommand("SELECT \* FROM tb\_user WHERE dateregister?...", conn)
>
> adapter.SelectCommand = cmd
>
> adapter.Fill(ds.Tables(0))
>
> Frm1.Show()
>
> Frm1.ReportViewer1.ProcessingMode = Microsoft.Reporting.WinForms.ProcessingMode.Local
>
> Frm1.ReportViewer1.LocalReport.ReportPath = System.Environment.CurrentDirectory & "\Report1.rdlc"
>
> Frm1.ReportViewer1.LocalReport.DataSources.Clear()
>
> Frm1.ReportViewer1.LocalReport.DataSources.Add(New Microsoft.Reporting.WinForms.ReportDataSource("DSPOListSETUP", ds.Tables(0)))
>
> Frm1.ReportViewer1.DocumentMapCollapsed = True
>
> Frm1.ReportViewer1.RefreshReport()
thanks a lot | ```
SELECT *
FROM tableName
WHERE STR_TO_DATE(dateregister, '%d/%m/%Y')
BETWEEN STR_TO_DATE(CONCAT('01/', '06/2013'), '%d/%m/%Y') AND
LAST_DAY(STR_TO_DATE(CONCAT('01/', '09/2013'), '%d/%m/%Y'))
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!2/43d5e/2)
OUTPUT
```
ββββββ¦βββββββ¦βββββββββββββββ
β ID β NAME β DATEREGISTER β
β βββββ¬βββββββ¬βββββββββββββββ£
β 3 β ddd β 01/06/2013 β
β 4 β ggg β 01/07/2013 β
β 5 β ttt β 10/07/2013 β
β 6 β kkm β 20/08/2013 β
β 7 β ooo β 01/09/2013 β
ββββββ©βββββββ©βββββββββββββββ
```
Here's what happened to the query above, the column `dateregister` was converted into proper date using `STR_TO_DATE()` since it was saved as string in the format of `dd/mm/YYYY`.
Your input `06/2013` and `09/2013` were concatenated with `01/` and converted into date just like how the column `dateregister` was converted.
The function `LAST_DAY` takes a valid date and returns the last date of the month.
If you need faster performance in terms of searching for dates, you need to change the data type of column `dateregister` into `DATETIME or DATE` data type to avoid conversion.
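`STR_TO_DATE` and `LAST_DAY` are MySQL functions; the same idea can be sketched on SQLite by rebuilding an ISO date from the `dd/mm/YYYY` string with `substr()` (sample rows invented, shown here via Python's built-in SQLite):

```python
import sqlite3  # stdlib; substr() re-ordering stands in for STR_TO_DATE

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tb_user (id INTEGER, name TEXT, dateregister TEXT);
INSERT INTO tb_user VALUES
  (2, 'ccc', '06/05/2013'), (3, 'ddd', '01/06/2013'),
  (5, 'ttt', '10/07/2013'), (7, 'ooo', '01/09/2013');
""")
# dd/mm/YYYY -> YYYY-mm-dd, so plain string comparison sorts chronologically
iso = ("substr(dateregister, 7, 4) || '-' || "
       "substr(dateregister, 4, 2) || '-' || "
       "substr(dateregister, 1, 2)")
rows = con.execute(f"""
SELECT id FROM tb_user
WHERE {iso} BETWEEN '2013-06-01' AND '2013-09-30'
ORDER BY id
""").fetchall()
print(rows)  # [(3,), (5,), (7,)]
```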
Other References
* [STR\_TO\_DATE](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date)
* [LAST\_DAY](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_last-day)
---
```
monthx1= txtbox1.text
monthx2 = txtbox2.text
Dim sqlQuery As new System.Text.StringBuilder
sqlQuery.Append("SELECT * " & vbCrLf)
sqlQuery.Append("FROM tablename " & vbCrLf)
sqlQuery.Append("WHERE Str_to_date(dateregister, '%d/%m/%Y') BETWEEN " & vbCrLf)
sqlQuery.Append(" Str_to_date(Concat('01/', " & vbCrLf)
sqlQuery.Append(" '" & monthx1 & "'), '%d/%m/%Y') AND Last_day( " & vbCrLf)
sqlQuery.Append(" Str_to_date(Concat('01/', " & vbCrLf)
sqlQuery.Append(" '" & monthx2 & "'), '%d/%m/%Y')) ")
cmd = New MySqlCommand(sqlQuery.ToString(), conn)
``` | You just have to use the `BETWEEN` keyword.
```
SELECT * FROM tb_user
WHERE dateregister between @textBox1Value and @textBox2Value
```
Make sure the dates you are passing (@textBox1Value and @textBox2Value) are well typed.
Refer to this link for the [date formatting](http://dev.mysql.com/doc/refman/4.1/en/date-and-time-functions.html). | vb.net - mysql query select record from month to month | [
"",
"mysql",
"sql",
"vb.net",
""
] |
I'm trying to execute a query with union and then put an additional condition on the result. Basically, I want to have something like
> SELECT \* FROM (A UNION B) WHERE condition
Is this possible to do? When I try to execute
> SELECT \* FROM (A UNION B)
SQL Management Studio complains about the closing bracket:
> Incorrect syntax near ')'. | ```
SELECT *
FROM ( SELECT * FROM A
UNION
SELECT * FROM B
) temp
WHERE condition
``` | Here is the correct syntax:
```
SELECT *
FROM (select * from A
      UNION
      select * from B
) ab
WHERE condition;
```
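A runnable sketch of the aliased derived-table pattern, using Python's built-in SQLite with invented tables and a concrete condition:

```python
import sqlite3  # stdlib; filter a UNION through an aliased derived table

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (v INTEGER);
CREATE TABLE B (v INTEGER);
INSERT INTO A VALUES (1), (2);
INSERT INTO B VALUES (2), (3);
""")
rows = con.execute("""
SELECT * FROM (SELECT v FROM A
               UNION
               SELECT v FROM B) ab
WHERE v > 1
ORDER BY v
""").fetchall()
# UNION de-duplicates to {1, 2, 3}; the outer WHERE then keeps 2 and 3
print(rows)  # [(2,), (3,)]
```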
I agree that the original syntax "looks right" for a `from` statement, but SQL doesn't allow it. Also, you need an alias on the subquery (requirement of SQL Server). Finally, you might consider using `union all`, if you know there are no duplicates in tables. | SQL condition on unioned tables | [
"",
"sql",
"sql-server-2008-r2",
"union",
""
] |
I am trying to execute the sql query:
```
select * from table where column like '%value%';
```
But the data is saved as 'Value' (the V is capitalized).
When I execute this query I don't get any rows.
How do I make the call such that it looks for 'value' irrespective of the casing of the characters? | Use the LOWER function on both the column and the search term. That way, even if the query contains something like %VaLuE%, the casing won't matter:
```
select qt.*
from query_table qt
where LOWER(column_name) LIKE LOWER('%vAlUe%');
``` | If you want this column to be case insensitive:
```
ALTER TABLE `schema`.`table`
CHANGE COLUMN `column` `column` TEXT CHARACTER SET 'utf8' COLLATE 'utf8_general_ci';
```
Thus, you don't have to change your query.
And the MySQL engine will process your query quicker than using lower() function or any other tricks.
And I'm not sure that using lower function will be a good solution for **index searching performance**. | SQL SELECT LIKE (Insensitive casing) | [
"",
"mysql",
"sql",
"sql-like",
"case-insensitive",
""
] |
I have two tables:
Table 1:
```
------------------------------
|Country |Emp Level |Name|
|-----------|-----------|-----
|UK |Staff |A
|US |Admin Staff|B
|CA |Staff |C
|NL |Admin Staff|D
|MN |Intern |E
|IN |Staff |F
|UK |Staff |G
|US |Admin Staff|H
------------------------------
```
Table 2:
```
---------------------------------------------
|Level Guid |Level name |Country code |
|-----------|-----------|-------------------|
|101 |Admin Staff|UK |
|102 |Admin Staff|US |
|103 |Admin Staff|Global |
|105 |Staff |UK |
|106 |Staff |US |
|107 |Staff |Global |
|108 |Intern |UK |
|109 |Intern |US |
|110 |Intern |Global |
---------------------------------------------
```
And result should be:
```
|Level ID |Level |Country| Count |
|105 |Staff |UK | 2 |
|102 |Admin Staff|US | 2 |
|107 |Staff |CA | 1 |
|103 |Admin Staff|NL | 1 |
|110 |Intern |MN | 1 |
|107 |Staff |IN | 1 |
```
In table 2 any country except US, UK is considered Global. Note that the countries are counted separately even though in Table 2, all the countries except US, UK are Global. I have tried a query like this but it is giving me the error `More than 1 value returned in subquery`
```
SELECT country,
       emp_levels,
       Count(emp_levels) count,
       "LevelID" = CASE
                     WHEN (SELECT level_guid
                           FROM table_2 T
                           WHERE u.emp_levels = T.level_name
                                 AND u.country IN ( 'UK', 'US' )) IS NOT NULL
                     THEN
                       (SELECT level_guid
                        FROM table_2 T
                        WHERE u.emp_levels = T.level_name
                              AND u.country NOT IN ( 'UK', 'US' ))
                     ELSE (SELECT level_guid
                           FROM [DPN_Reporting].dbo.taxonomytermset T
                           WHERE u.emp_levels = T.level_name
                                 AND T.level_name = 'Global')
                   END
FROM table_1 u
WHERE emp_levels IS NOT NULL
      AND emp_levels != ''
GROUP BY emp_levels,
         country
ORDER BY Count(emp_levels) DESC
``` | Try this one
```
CASE
WHEN EXISTS(SELECT level_guid
FROM table_2 T
WHERE u.emp_levels = T.level_name
AND u.country IN ( 'UK', 'US' ))
THEN (SELECT MAX(level_guid)
FROM table_2 T
WHERE u.emp_levels = T.level_name
AND u.country NOT IN ( 'UK', 'US' ))
ELSE (SELECT MAX(level_guid)
FROM [DPN_Reporting].dbo.taxonomytermset T
WHERE u.emp_levels = T.level_name
AND T.level_name = 'Global')
END
``` | Here there are two queries:
```
SELECT T2.[Level Guid] [Level ID], T1.[Emp Level] Level, T1.Country, COUNT(*) Count
FROM Table1 T1
INNER JOIN Table2 T2 ON T1.[Emp Level] = T2.[Level name]
AND CASE T1.Country WHEN 'UK' THEN 'UK' WHEN 'US' THEN 'US' ELSE 'Global' END = T2.[Country code]
GROUP BY T2.[Level Guid], T1.[Emp Level], T1.Country
```
SQLFiddle: <http://sqlfiddle.com/#!3/18e8f/8>
and
```
SELECT DISTINCT T2.[Level Guid] [Level ID], T1.[Emp Level] Level, T1.Country,
(SELECT COUNT(*) FROM Table1 T3 WHERE T1.Country = T3.Country) Count
FROM Table1 T1
INNER JOIN Table2 T2 ON T1.[Emp Level] = T2.[Level name]
AND CASE T1.Country WHEN 'UK' THEN 'UK' WHEN 'US' THEN 'US' ELSE 'Global' END = T2.[Country code]
```
SQLFiddle: <http://sqlfiddle.com/#!3/18e8f/7>
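A runnable sketch of the first query's CASE-mapped join (Python's built-in SQLite; column names without spaces, in line with the closing advice, and a few invented rows):

```python
import sqlite3  # stdlib; CASE-mapped join condition + GROUP BY count

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (Country TEXT, EmpLevel TEXT);
CREATE TABLE Table2 (LevelGuid INTEGER, LevelName TEXT, CountryCode TEXT);
INSERT INTO Table1 VALUES ('UK', 'Staff'), ('UK', 'Staff'),
                          ('US', 'Admin Staff'), ('CA', 'Staff');
INSERT INTO Table2 VALUES (102, 'Admin Staff', 'US'),
                          (105, 'Staff', 'UK'), (107, 'Staff', 'Global');
""")
rows = con.execute("""
SELECT T2.LevelGuid, T1.EmpLevel, T1.Country, COUNT(*) AS cnt
FROM Table1 T1
INNER JOIN Table2 T2
   ON T1.EmpLevel = T2.LevelName
  AND CASE T1.Country WHEN 'UK' THEN 'UK' WHEN 'US' THEN 'US'
      ELSE 'Global' END = T2.CountryCode
GROUP BY T2.LevelGuid, T1.EmpLevel, T1.Country
ORDER BY T2.LevelGuid
""").fetchall()
print(rows)  # [(102, 'Admin Staff', 'US', 1), (105, 'Staff', 'UK', 2), (107, 'Staff', 'CA', 1)]
```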
The first one in `Count` will give the number of persons with the same `T2.[Level Guid], T1.[Emp Level], T1.Country`.
The second one in `Count` will give the number of persons of the same `Country`. Note that I've added a DISTINCT to make the result the same as you.
I think the second one is useless as a query, because you lose the information that there are 2x `UK Staff` and 2x `US Admin Staff` for example.
As a suggestion, please **don't put spaces in the column names!** . | Count number of people in each country in SQL server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a database in SQL Server 2008 R2, and I want to copy this database onto another machine.
How do I make a copy?
And how do I restore it?
Thanks | There are probably more ways to do this but I usually right-click the database and choose "Tasks β Back up..." with Backup type "Full". After that you can copy the created file to your target machine, connect to its SQL Server in SQL Server Management Studio, right-click the "Databases" folder and choose "Restore Database". Select "Device" and choose the file, then restore.
Another approach would be to script the database in SQL Server Management Studio (right-click the database, then "Tasks → Generate scripts..."). During this process there'll be a step called "Set Scripting Options", where you'll have to click the "Advanced" button and carefully go through the options. You'll definitely want to choose "Data and schema" for the option "Types of data to script". I sometimes prefer this method if I really just want the data structures and the data itself to be transferred.
Update: Sorry, I forgot to mention how to restore the database for the scripting option. I always generate the scripts by selecting "Save to new query window" during the "Set Scripting Options" step. After the script is generated, just leave it there for a moment.
On the target server create a new database with the same name as the one you generated the scripts for. Or you can create a script for that on the source server, too (right-click the database, choose "Script Database as → CREATE TO... → Clipboard") and execute this script by right-clicking the server node in the SSMS Object Explorer, selecting "New query", pasting the script into the query window and executing it. This second option is the better choice if you really need a complete copy of the database and not just the data itself.
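(Editor's aside, not in the original answer: the "script schema and data, then replay the script on the target" workflow can be sketched with Python's `sqlite3` and its `iterdump()`; SQL Server itself would of course go through SSMS as described above. The table and rows below are invented.)

```python
import sqlite3

# Invented source database with some schema and data.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO clients (name) VALUES ('alice'), ('bob');
""")

# "Generate scripts": serialize schema plus data as plain SQL statements.
script = "\n".join(src.iterdump())

# "Execute on the target server": replay the script on an empty database.
dst = sqlite3.connect(":memory:")
dst.executescript(script)

rows = dst.execute("SELECT name FROM clients ORDER BY id").fetchall()
print(rows)  # [('alice',), ('bob',)]
```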
Once you've gone down one of these two roads you should have created a new database. Right-click this database in Object Explorer and select "New Query", then copy and paste the script containing the database structure and data into the new query window and execute the query. This should do the trick. | You can't copy Database to another machine.
Yes, you can take a backup on one machine, copy it to another machine, and restore it there.
To take backup follow procedure:
1. Right Click on the database you want to take backup.
2. Choose Task -> Back Up.
3. In Destination, Choose Add.
4. In File Name, click the `...` button and choose the destination folder where you want to save the backup as backupname.bak. Click Ok, Ok and Ok, and wait until the backup process is completed. Click Ok.
Now copy that backup file to a pen drive or any other media, paste it on the other machine, and open SQL Server 2008 R2.
To restore backup follow procedure:
1. Right Click on the Databases.
2. Choose Restore Database.
3. Write database name which you want to restore in To Database field
4. Select From device radio button in Source for restore. Click on `...`
5. Click on Add button, Select database backup file you have pasted. Click Ok, Ok.
6. Check the Restore checkbox in "Select the backup sets to restore".
7. Go to Options and check "Overwrite the existing database" & "Preserve the replication settings" (these fields need to be checked only when you restore a database that already resides on the other device).
8. Click Ok. wait until restore complete and click ok.
Tell me if you face any problem.
---
**By Code**
*To Backup:*
```
USE DATABASE_NAME;
GO
BACKUP DATABASE DATABASE_NAME
TO DISK = 'D:\DATABASE_NAME.Bak'
WITH FORMAT, MEDIANAME = 'D_SQLServerBackups',
NAME = 'Full Backup of DATABASE_NAME';
GO
```
(If you want to put the backup in a particular folder, the folder must exist before you take the backup.)
*To Restore:*
Step 1: Retrieve the logical file name of the database from the backup.
```
RESTORE FILELISTONLY
FROM DISK = 'D:\BackUp\YourBackUpFile.bak'
GO
```
Step 2: Use the values in the LogicalName Column in following Step.
----Make Database to single user Mode
```
ALTER DATABASE YourDB
SET SINGLE_USER WITH
ROLLBACK IMMEDIATE
```
----Restore Database
```
RESTORE DATABASE YourDB
FROM DISK = 'D:\BackUp\YourBackUpFile.bak'
WITH MOVE 'YourMDFLogicalName' TO 'D:\Data\YourMDFFile.mdf',
MOVE 'YourLDFLogicalName' TO 'D:\Data\YourLDFFile.ldf'
```
If there is no error in the statement above, the database will be back in multi-user mode. If an error occurs, execute the following command to convert the database to multi-user mode:
```
ALTER DATABASE YourDB SET MULTI_USER
GO
``` | How to copy SQL Server 2008 R2 database from one machine to another | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I have a field in a table I am querying that looks similar to this:
```
Name Phone Category_IDS Category_Labels
Sample 1111111111 ["1"] ["foo", "bar"]
```
I am trying to use the FIND\_IN\_SET function to find all rows that contain one of the values listed in the comma separated list. Something like this returns nothing:
```
SELECT * FROM sampletable WHERE FIND_IN_SET('1', category_ids) <> 0
```
It does work if I do this:
```
SELECT * FROM factual_usplaces WHERE FIND_IN_SET('["1"]', category_ids) <> 0
```
But of course that limits searches to rows where the category\_ids or labels contain only a single value in the comma-separated list. So ["1"] would be found but ["1", "2"] would not.
Is there a way to remove the brackets and quotations from the string on the fly in the query? | If data is stored exactly how you showed it then you can use [`REPLACE()`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_replace) to strip double quotes and brackets before feeding `category_ids` to `FIND_IN_SET()`.
```
SELECT *
FROM Table1
WHERE FIND_IN_SET(1, REPLACE(
REPLACE(
REPLACE(category_ids, '"', ''),
'[', ''),
']','')) > 0
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/b269a/1)**
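(Editor's sketch: to see what the nested `REPLACE()` calls actually produce, `sqlite3` stands in for MySQL below, and `FIND_IN_SET` is emulated in Python since SQLite lacks it. The sample value follows the question.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def strip_list(raw):
    # The same triple REPLACE as in the query above, run by the engine.
    return conn.execute(
        "SELECT REPLACE(REPLACE(REPLACE(?, '\"', ''), '[', ''), ']', '')",
        (raw,),
    ).fetchone()[0]

def find_in_set(needle, csv):
    # Minimal stand-in for MySQL's FIND_IN_SET: 1-based position, 0 if absent.
    # Like MySQL, it does not trim spaces around the items.
    items = csv.split(",")
    return items.index(needle) + 1 if needle in items else 0

cleaned = strip_list('["1", "2"]')
print(cleaned)                    # 1, 2
print(find_in_set("1", cleaned))  # 1 -> the row would match
print(find_in_set("3", cleaned))  # 0 -> no match
```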
---
Now if you will use it a lot, you may consider creating a user-defined function to simplify your code
```
CREATE FUNCTION UNQOUTE_LIST(_list VARCHAR(512))
RETURNS VARCHAR(512)
RETURN
REPLACE(
REPLACE(
REPLACE(_list, '"', ''),
'[', ''),
']','');
```
And use it
```
SELECT *
FROM Table1
WHERE FIND_IN_SET(1, UNQOUTE_LIST(category_ids)) > 0
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/f1c39/1)** | try below sql query with like operator
```
SELECT * FROM factual_usplaces WHERE category_ids LIKE '%1,2%'
```
Hope this helps you | MySQL FIND_IN_SET With Array String | [
"",
"mysql",
"sql",
""
] |
I have a restaurant db and I need to total up the value of all the items sold individually. So if I sold a hamburger that has a base price of $10.00 with bacon which costs $1.00 and a hamburger (again $10.00) with avocado that costs $0.50, I need to get $21.50 returned. My invoice table looks like this:
```
invoice_num item_num price item_id parent_item_id
111 hmbg 10.00 guid_1 ''
111 bacn 1.00 guid_2 guid_1
112 hmbg 10.00 guid_3 ''
112 avcd 0.50 guid_4 guid_3
```
I can get the sum of all the parent items like this:
```
SELECT item_num, SUM(price) FROM invoices WHERE parent_item_id = ''
```
it is the adding of the toppings that is confusing me. I feel like I need to add a subquery in the SUM but I'm not sure how to go about doing it and referencing the original query to use the item\_id. | ```
SELECT item_num, sum(i.price) + sum(nvl(x.ingred_price,0))
FROM invoices i
LEFT OUTER JOIN
(SELECT parent_item_id
, sum(price) ingred_price
FROM invoices
WHERE parent_item_id IS NOT NULL
GROUP BY parent_item_id) x
ON x.parent_item_id = i.item_id
WHERE i.parent_item_id IS NULL
GROUP BY item_num
```
Here's a [SQL Fiddle](http://sqlfiddle.com/#!4/eb532/1) that proves the above code works. I used Oracle, but you should be able to adapt it to whatever DB you are using.
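(Editor's sketch of the same idea on SQLite: Oracle's `NVL` becomes `COALESCE`, and the empty parent ids are stored as `NULL` here. The resulting total matches the $21.50 from the question.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (
        invoice_num INTEGER, item_num TEXT, price REAL,
        item_id TEXT, parent_item_id TEXT
    );
    INSERT INTO invoices VALUES
        (111, 'hmbg', 10.00, 'guid_1', NULL),
        (111, 'bacn',  1.00, 'guid_2', 'guid_1'),
        (112, 'hmbg', 10.00, 'guid_3', NULL),
        (112, 'avcd',  0.50, 'guid_4', 'guid_3');
""")

# Same shape as the query above; COALESCE replaces Oracle's NVL.
totals = conn.execute("""
    SELECT i.item_num, SUM(i.price) + SUM(COALESCE(x.ingred_price, 0))
    FROM invoices i
    LEFT OUTER JOIN (
        SELECT parent_item_id, SUM(price) AS ingred_price
        FROM invoices
        WHERE parent_item_id IS NOT NULL
        GROUP BY parent_item_id
    ) x ON x.parent_item_id = i.item_id
    WHERE i.parent_item_id IS NULL
    GROUP BY i.item_num
""").fetchall()

print(totals)  # two $10 burgers plus $1.50 of toppings = 21.5
```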
Assumption: You don't have more than one level in a parent child relationship. E.g. A can have a child B, but B won't have any other children. | Not clear based on your question (see my comment) but as I understand it a simple group by will give you what you want. If not please explain (in the original question) why does this query does not work --- what is it missing from your requirements?
```
SELECT item_num, SUM(price)
FROM invoices
GROUP BY item_num
``` | subquery the same table in select statement | [
"",
"sql",
""
] |
I want to preserve the record order, which is provided as a comma-delimited string. The 5th item in my delimited string is null. I need the 5th row to be null as well.
```
with test as
(select 'ABC,DEF,GHI,JKL,,MNO' str from dual
)
select rownum, regexp_substr (str, '[^,]+', 1, rownum) split
from test
connect by level <= length (regexp_replace (str, '[^,]+' )) + 1
```
The current result I'm getting shifts MNO into the 5th position and leaves the 6th empty:
```
1 ABC
2 DEF
3 GHI
4 JKL
5 MNO
6
``` | Order **is** preserved by your expression, but your regular expression doesn't match nulls correctly, so the 5th item disappears. The 6th row is a `NULL` because there are no more match after the 5th match.
You could do this instead:
```
SQL> with test as
2 (select 'ABC,DEF,GHI,JKL,,MNO' str from dual
3 )
4 SELECT rownum,
5 rtrim(regexp_substr(str || ',', '[^,]*,', 1, rownum), ',') split
6 FROM test
7 CONNECT BY LEVEL <= length(regexp_replace(str, '[^,]+')) + 1;
ROWNUM SPLIT
---------- ---------------------------------------------------------------
1 ABC
2 DEF
3 GHI
4 JKL
5
6 MNO
6 rows selected
```
Or this:
```
SQL> with test as
2 (select 'ABC,DEF,GHI,JKL,,MNO' str from dual
3 )
4 SELECT rownum,
5 regexp_substr(str, '([^,]*)(,|$)', 1, rownum, 'i', 1) split
6 FROM test
7 CONNECT BY LEVEL <= length(regexp_replace(str, '[^,]+')) + 1;
ROWNUM SPLIT
---------- ------------------------------------------------------------
1 ABC
2 DEF
3 GHI
4 JKL
5
6 MNO
6 rows selected
``` | Try Something like this:
```
SELECT
STR,
REPLACE ( SUBSTR ( STR,
CASE LEVEL
WHEN 1
THEN
0
ELSE
INSTR ( STR,
'~',
1,
LEVEL
- 1 )
END
+ 1,
1 ),
'~' )
FROM
(SELECT 'A~~C~~E' AS STR FROM DUAL)
CONNECT BY
LEVEL <= LENGTH ( REGEXP_REPLACE ( STR,
'[^~]+' ) )
+ 1;
``` | Preserve order when converting a delimited string to a column | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I'm trying to pull the `ClientID` from the following sub-query with a group by and having clause but I get the following error:
> Msg 116, Level 16, State 1, Line 1
> Only one expression can be specified in the select list when the subquery is not introduced w ith
> EXISTS.
Query:
```
select
ClientID
from
SurveyResponses
where
ClientID in (select ClientID, count (surveyresponseid)
from SurveyResponses
where SurveyID in (1988,1989,2750,3206,15561)
group by
ClientID
having count (SurveyResponseID) > 1) and SurveyID = 1989
``` | Remove the **count()** from the select clause
```
select ClientID from SurveyResponses where ClientID in
(select ClientID from SurveyResponses
where SurveyID in (1988,1989,2750,3206,15561)
group by ClientID
having count (SurveyResponseID) > 1) and SurveyID = 1989
``` | You are pulling two columns in your sub query and can only pull one, because you are telling sql, check where client ID exists in the count of survey responses AND the clientIDs in the SurveyResponses table.
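(Editor's sketch: the corrected `IN (... GROUP BY ... HAVING ...)` pattern, checked against an in-memory SQLite database with invented ids.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SurveyResponses (
        SurveyResponseID INTEGER PRIMARY KEY,
        ClientID INTEGER, SurveyID INTEGER
    );
    INSERT INTO SurveyResponses (ClientID, SurveyID) VALUES
        (1, 1988), (1, 1989),   -- client 1: two responses, one for 1989
        (2, 1989),              -- client 2: only a single response
        (3, 2750), (3, 3206);   -- client 3: two responses, none for 1989
""")

clients = conn.execute("""
    SELECT ClientID FROM SurveyResponses
    WHERE ClientID IN (
        SELECT ClientID FROM SurveyResponses
        WHERE SurveyID IN (1988, 1989, 2750, 3206, 15561)
        GROUP BY ClientID
        HAVING COUNT(SurveyResponseID) > 1
    ) AND SurveyID = 1989
""").fetchall()

print(clients)  # only client 1 has more than one response AND one for 1989
```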
Try this, it is untested
```
select ClientID from SurveyResponses
where ClientID in (select ClientID from SurveyResponses
where SurveyID in (1988,1989,2750,3206,15561)
group by ClientID
having count (SurveyResponseID) > 1) and SurveyID = 1989
``` | How do you use a group by / having clause in a sub-query for? | [
"",
"sql",
"sql-server",
""
] |
I have a query for getting search results, which works fine.
### Example of successful query:
```
SELECT
individuals.individual_id,
individuals.unique_id,
TIMESTAMPDIFF(YEAR,individuals.day_of_birth,CURDATE()) AS age,
individuals_dynamics.id,
individuals_achievements.degree
FROM
individuals as individuals
LEFT JOIN
individuals_dynamics AS individuals_dynamics ON individuals.unique_id = individuals_dynamics.individual_id
LEFT JOIN
individuals_achievements AS individuals_achievements ON individuals.unique_id = individuals_achievements.individual_id
WHERE
$uuid_access_status $display_type $detailed_search_query
ORDER BY
$search_sort $search_order
```
From now on, I have more than one record in `individuals_achievements` per individual, and this is where I would like to get the MAX value (latest id).
I tried many different queries but always got the error ***Call to a member function rowCount() on a non-object.***
I understand what that error means but I can't figure out where I'm making that mistake and what is wrong in general.
### Example of my unsuccessful attempt:
```
SELECT
individuals.individual_id,
individuals.unique_id,
TIMESTAMPDIFF(YEAR,individuals.day_of_birth,CURDATE()) AS age,
individuals_dynamics.id,
individuals_achievements.degree
FROM
individuals as individuals
LEFT JOIN
individuals_dynamics AS individuals_dynamics ON individuals.unique_id = individuals_dynamics.individual_id
INNER JOIN
(
SELECT
degree, MAX(id) AS latest_record
FROM
individuals_achievements
GROUP BY
latest_record
) individuals_achievements AS individuals_achievements ON individuals.unique_id = individuals_achievements.individual_id
WHERE
$uuid_access_status $display_type $detailed_search_query
ORDER BY
$search_sort $search_order
```
What am I missing here? Any help please? | This is your `from` clause:
```
FROM
individuals as individuals
LEFT JOIN
individuals_dynamics AS individuals_dynamics ON individuals.unique_id = individuals_dynamics.individual_id
INNER JOIN
(
SELECT
degree, MAX(id) AS latest_record
FROM
individuals_achievements
GROUP BY
latest_record
) individuals_achievements AS individuals_achievements ON individuals.unique_id = individuals_achievements.individual_id;
```
I can spot at least three problems. The first is `individuals_achievements AS individuals_achievements`; the second is the reference to `individuals_achievements.individual_id` which isn't in the subquery. The third is the `group by latest_record`.
```
FROM individuals LEFT JOIN
individuals_dynamics
ON individuals.unique_id = individuals_dynamics.individual_id LEFT JOIN
individuals_achievements
ON individuals.unique_id = individuals_achievements.individual_id JOIN
(SELECT ia.individual_id, MAX(ia.id) AS latest_record
FROM individuals_achievements ia
GROUP BY ia.individual_id
) iamax
ON individuals.unique_id = iamax.individual_id and
individuals_achievements.id = iamax.latest_record
```
This adds an additional subquery, with the id of the latest record.
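(Editor's miniature of the "join back to MAX(id) per group" step, using a simplified `individuals_achievements` table in SQLite with invented rows.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE individuals_achievements (
        id INTEGER PRIMARY KEY, individual_id TEXT, degree TEXT
    );
    INSERT INTO individuals_achievements VALUES
        (1, 'u1', 'BSc'), (2, 'u1', 'MSc'),   -- u1's latest is id 2
        (3, 'u2', 'BA');                      -- u2 has a single record
""")

# The derived table keeps MAX(id) per individual; joining on both columns
# filters individuals_achievements down to each person's latest row.
latest = conn.execute("""
    SELECT ia.individual_id, ia.degree
    FROM individuals_achievements ia
    JOIN (
        SELECT individual_id, MAX(id) AS latest_record
        FROM individuals_achievements
        GROUP BY individual_id
    ) iamax ON iamax.individual_id = ia.individual_id
           AND iamax.latest_record = ia.id
    ORDER BY ia.individual_id
""").fetchall()

print(latest)  # each individual's most recent degree
```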
By the way, it is redundant to have a table alias be the same name as the table name. That just clutters up the query. Also, it is a good idea to use table abbreviations for the aliases, such as `ia` for `individuals_achievements`. Because this answer focuses only on the `from` clause, I have not made that change. | `PDO::query` returns a `FALSE` if there's an error while processing your query, and applying `rowCount` to `FALSE` as a member function will result in the error you're seeing.
Check your query for syntax errors and parse any error messages `PDO` would return you. | Using sub-query to SELECT MAX value along with LEFT JOIN | [
"",
"mysql",
"sql",
"join",
"subquery",
""
] |