Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I have these MySQL commands:
```
CREATE DATABASE IF NOT EXISTS courses;
USE courses
CREATE TABLE IF NOT EXISTS teachers(
id INT(10) UNSIGNED PRIMARY KEY NOT NULL AUTO_INCREMENT,
name VAR_CHAR(50) NOT NULL,
addr VAR_CHAR(255) NOT NULL,
phone INT NOT NULL,
);
```
When I run it, I get an error:
```
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'VAR_CHAR(50) NOT NULL, addr VAR_CHAR(255) NOT
NULL, phone INT NOT NULL, )' at line 3
``` | It is `varchar`, not `var_char`. You should also remove the trailing comma after the last column definition and terminate `USE courses` with a semicolon:
```
CREATE DATABASE IF NOT EXISTS courses;
USE courses;
CREATE TABLE IF NOT EXISTS teachers(
id INT(10) UNSIGNED PRIMARY KEY NOT NULL AUTO_INCREMENT,
name VARCHAR(50) NOT NULL,
addr VARCHAR(255) NOT NULL,
phone INT NOT NULL
);
```
You could use a SQL tool such as *MySQL Workbench* to highlight possible errors. | MySQL doesn't accept reserved words such as `name` as column names.
Therefore, to avoid this problem you could use back-ticks(`). This way, MySQL will treat them as column names rather than reserved keywords.
Try this:
Use back-ticks for NAME
```
CREATE TABLE `teachers` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(50) NOT NULL,
`addr` varchar(255) NOT NULL,
`phone` int(10) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
``` | ERROR 1064 (42000): You have an error in your SQL syntax; | [
"mysql",
"sql"
] |
I know that the "select record corresponding to the maximum value for a field" has been exhaustively answered, but I was wondering why nobody suggested using an ORDER BY clause to get the right row.
For example, I have this table:
```
| other_field | target_field |
| 1 | 15 |
| 2 | 25 |
| 3 | 20 |
```
and I want to find the `other_field` value corresponding to the maximum `target_field` (e.g. in this case, I want to find 2).
Many people suggested using GROUP and JOIN, however my first idea was to use:
```
SELECT other_field FROM table ORDER by target_field DESC LIMIT 1;
```
Is there anything wrong with this? The only problem I can think of is that maybe ordering takes longer than just finding the maximum (although, on the other hand, the JOIN might also take a while).
Thanks!
EDIT: sorry guys for the late replies, I'm new here and I was expecting to get some e-mails for notifications :) | Yes.
It actually has to sort every record before it can return any data. It's highly inefficient. It will return what you want, but not in the best possible way. Aggregate functions tend to do it much better, and much quicker.
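As a rough sketch of the aggregate alternative (SQLite via Python, using the sample data from the question; the table name `t` is invented, since `table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (other_field INTEGER, target_field INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 15), (2, 25), (3, 20)])

# The ORDER BY ... LIMIT 1 approach from the question
via_order = conn.execute(
    "SELECT other_field FROM t ORDER BY target_field DESC LIMIT 1"
).fetchone()[0]

# The aggregate approach: filter on the maximum via a subquery
via_max = conn.execute(
    "SELECT other_field FROM t WHERE target_field = (SELECT MAX(target_field) FROM t)"
).fetchone()[0]

print(via_order, via_max)  # both are 2 for this sample data
```

Both return the same row here; the difference only shows up in how the engine gets there.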
With your current query, once you reach a much higher data load, it will take ages to process and materialize. (With smaller data sets, you should be fine.) | 1. If you need a single value from one table or more than one table, then you have to go for MAX and GROUP BY.
2. If you are querying only one table and require multiple columns, then it is OK to use ORDER BY ... DESC.
3. If you need a single value from a single table, then MAX is preferred there too.
I hope you got my points | SQL getting record for maximum value: why not use "ORDER BY"? | [
"sql",
"max",
"sql-order-by"
] |
I apologize for the confusing title, I can't figure out the proper wording for this question. Instead, I'll just give you the background info and the goal:
This is in a table where a person may or may not have multiple rows of data, and those rows may contain the same value for the `activity_id`, or may not. Each row has an auto-incremented ID. The people do not have a unique identifier attached to their names, so we can only use first\_name/last\_name to identify a person.
I need to be able to find the people that have multiple rows in this table, but only the ones who have multiple rows that contain more than one different `activity_id`.
Here's a sample of the data we're looking through:
```
unique_id | first_name | last_name | activity_id
---------------------------------------------------------------
1 | ted | stevens | 544
2 | ted | stevens | 544
3 | ted | stevens | 545
4 | ted | stevens | 546
5 | rachel | jameson | 633
6 | jennifer | tyler | 644
7 | jennifer | tyler | 655
8 | jennifer | tyler | 655
9 | jack | fillion | 544
10 | mallory | taylor | 633
11 | mallory | taylor | 633
```
From that small sample, here are the records I would want returned:
```
unique_id | first_name | last_name | activity_id
---------------------------------------------------------------
dontcare | ted | stevens | 544
dontcare | jennifer | tyler | 655
```
Note that which value of `unique_id` gets returned is irrelevant, as long as it's one of the `unique_id`s belonging to that person, and as long as only one record is returned for that person.
Can anyone figure out how to write a query like this? **I don't care what version of SQL you use, I can probably translate it into Oracle if it's somehow different.** | I would do:
```
SELECT first_name, last_name, COUNT(DISTINCT activity_id)
FROM <table_name>
GROUP BY first_name, last_name
HAVING COUNT(DISTINCT activity_id) > 1;
``` | I'll build through the logic with you. First, let's find all people that have more than one entry:
Unique list of name + activity ID:
```
select first_name, last_name,activity_id, count(1)
from yourtable
group by first_name, last_name,activity_id
```
Now we'll turn that into a subquery and look for users with more than 1 activity\_ID
```
Select first_name, last_name
from
(select first_name, last_name,activity_id, count(1)
from yourtable
group by first_name, last_name,activity_id) a
group by first_name, last_name
having count(1) > 1
```
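A quick runnable check of this subquery approach (SQLite via Python, using the sample rows from the question; `yourtable` named as in the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE yourtable (unique_id INTEGER, first_name TEXT, "
    "last_name TEXT, activity_id INTEGER)"
)
rows = [
    (1, "ted", "stevens", 544), (2, "ted", "stevens", 544),
    (3, "ted", "stevens", 545), (4, "ted", "stevens", 546),
    (5, "rachel", "jameson", 633), (6, "jennifer", "tyler", 644),
    (7, "jennifer", "tyler", 655), (8, "jennifer", "tyler", 655),
    (9, "jack", "fillion", 544), (10, "mallory", "taylor", 633),
    (11, "mallory", "taylor", 633),
]
conn.executemany("INSERT INTO yourtable VALUES (?, ?, ?, ?)", rows)

people = conn.execute("""
    SELECT first_name, last_name
    FROM (SELECT first_name, last_name, activity_id, COUNT(1)
          FROM yourtable
          GROUP BY first_name, last_name, activity_id) a
    GROUP BY first_name, last_name
    HAVING COUNT(1) > 1
""").fetchall()
# Only the people with more than one distinct activity_id survive:
# ted stevens (544, 545, 546) and jennifer tyler (644, 655)
```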
That should work. I didn't return an activity\_id; adding max(activity\_id) to the select statement will grab the highest one. | SELECT Only Records With Duplicate (Column A || Column B) But Different (Column C) Values | [
"mysql",
"sql",
"oracle"
] |
I want to get the email address part of a string.
For example if the string is
> "your name(your@name.com)" aaa@bbb.com
then I want to get only
> aaa@bbb.com
basically if I can remove the string within
> ""
then it does the trick. I am using below regular expression with `REGEXP_SUBSTR`
```
REGEXP_SUBSTR('"your name(abc@dd.com)" aaa@bbb.com',
'([a-zA-Z0-9_.\-])+@(([a-zA-Z0-9-])+[.])+([a-zA-Z0-9]{2,4})+')
```
Kindly help. | You can simply indicate that the match must occur at the end of the string, using the `$` anchor.
```
with t1(col) as(
select '"your name(your@name.com)" aaa@bbb.com' from dual
)
select regexp_substr(col, '[[:alnum:]._%-]+@[[:alnum:]._%-]+\.com$') as res
from t1
```
Result:
```
RES
-----------
aaa@bbb.com
``` | You probably need something more along the lines of:
```
REGEXP_SUBSTR('"your name(abc@dd.com)" aaa@bbb.com','[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}')
```
Things like `[.]` don't really make sense here. Outside a character class, the dot matches any character, and the square brackets are a kind of "OR" statement where any character inside can go in that place; in your case you actually want to match the literal dot, so you need to escape it as `\.`. I'm not sure how Oracle handles the escapes; you might need to double escape them. | Regular Expression in Oracle with REGEXP_SUBSTR | [
"sql",
"regex",
"oracle"
] |
Is it possible to return the result of two function calls in a select statement?
I have the below T-SQL snippet that accepts two CSV strings which I want to insert into a temp table in order. `dbo.Split` takes the string and returns a table of values.
**Snippet**
```
DECLARE @MeasureCategoryIDs as nvarchar(100)
DECLARE @SnapshotIDs as nvarchar(100)
SET @MeasureCategoryIDs = '1,2,3'
SET @SnapshotIDs = '9,8,7'
CREATE TABLE #tmpMeasureCats
(
MeasureCatID int PRIMARY KEY NOT NULL,
SnapshotID int NOT NULL
)
INSERT INTO #tmpMeasureCats
SELECT value FROM dbo.Split(',',@MeasureCategoryIDs), value FROM dbo.Split(',',@SnapshotIDs)
DROP TABLE #tmpMeasureCats
```
**Split**
```
ALTER FUNCTION [dbo].[Split]
( @Delimiter varchar(5),
@List varchar(8000)
)
RETURNS @TableOfValues table
( RowID smallint IDENTITY(1,1),
[Value] varchar(50)
)
AS
BEGIN
DECLARE @LenString int
WHILE len( @List ) > 0
BEGIN
SELECT @LenString =
(CASE charindex( @Delimiter, @List )
WHEN 0 THEN len( @List )
ELSE ( charindex( @Delimiter, @List ) -1 )
END
)
INSERT INTO @TableOfValues
SELECT substring( @List, 1, @LenString )
SELECT @List =
(CASE ( len( @List ) - @LenString )
WHEN 0 THEN ''
ELSE right( @List, len( @List ) - @LenString - 1 )
END
)
END
RETURN
END
```
**Results from calling the split function on both csv strings**

When I run the snippet above I get the below error
> Msg 156, Level 15, State 1, Line 15
> Incorrect syntax near the keyword 'FROM'.
What I am looking for is as below
```
Col1 Col2
1 9
2 8
3 7
```
Is there another way to form the SELECT statement to return the two values for the INSERT? | ```
INSERT INTO #tmpMeasureCats(MeasureCatID, SnapshotID)
select t1.value, t2.value
from
(SELECT value, row_number() over (order by (select 1)) rn FROM dbo.Split(',',@MeasureCategoryIDs)) t1
join
(SELECT value, row_number() over (order by (select 1)) rn FROM dbo.Split(',',@SnapshotIDs)) t2
on t1.rn = t2.rn
``` | Assuming your functions are working as expected
```
INSERT INTO #tmpMeasureCats
SELECT
A.value as 'MeasureCat',
B.value as 'Snapshot'
FROM
dbo.Split(',',@MeasureCategoryIDs) A,
dbo.Split(',',@SnapshotIDs) B
``` | TSQL : Return the results of two functions in a single select statement | [
"sql",
"sql-server-2008",
"t-sql",
"function",
"csv"
] |
I have two tables.
```
Table1
ID Text
Table2
ID ParentID Text
```
I am trying to join to the same table twice using 2 different columns. I want all the rows where Table1.ID = Table2.ID as well as all the rows where Table1.ID = Table2.ParentID. I've tried the following but it returns no rows. When I run the script below in two different select statements each with one of the joins only, I get the result I want but not in the same select statement. Any ideas on what I'm doing wrong?
```
SELECT *
FROM Table 1
JOIN Table2 2
on 2.ID = Table1.ID
JOIN Table2 22
on 22.ParentID = Table1.ID
``` | You're only getting the records where Table 1 is both a Parent ID and an ID... maybe you want:
```
SELECT *
FROM Table1 t1
JOIN Table2 t2 ON t2.ID = t1.ID
UNION
SELECT *
FROM Table1 t1
JOIN Table2 t2 ON t2.ParentID = t1.ID
```
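For what it's worth, here is a tiny runnable check of the UNION approach (SQLite via Python; the column layout and sample rows are invented purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (ID INTEGER, Txt TEXT)")
conn.execute("CREATE TABLE Table2 (ID INTEGER, ParentID INTEGER, Txt TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)", [(1, "a"), (2, "b")])
# Row 1 matches Table1 on ID only; row 2 matches on ParentID only
conn.executemany("INSERT INTO Table2 VALUES (?, ?, ?)",
                 [(1, 99, "x"), (50, 2, "y")])

rows = conn.execute("""
    SELECT t1.ID FROM Table1 t1 JOIN Table2 t2 ON t2.ID = t1.ID
    UNION
    SELECT t1.ID FROM Table1 t1 JOIN Table2 t2 ON t2.ParentID = t1.ID
""").fetchall()
# Both rows come back, even though neither satisfies both join conditions
```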
Or potentially UNION ALL? | You could try this query:
```
SELECT
*
FROM
TABLE1,
TABLE2
WHERE
TABLE1.ID = TABLE2.ID
OR
TABLE1.ID = TABLE2.ParentID
``` | Joining to Same Column but Getting No Rows | [
"sql",
"sql-server-2008"
] |
In a procedure I have statements like below
```
declare @mnth int
select @mnth=month(cast('08/12/2013' as datetime))
if( @mnth<7)
begin
--some statements
end
else
begin
--some other statements
end
```
I am getting error
> Incorrect syntax near the keyword 'else'.
I can't understand what is wrong. Please help | Have you really entered some statements in between the `begin` and `end` statements? Because I'm pretty sure the error message will vanish once there is something to begin with.
Side note: This probably should have been a comment rather than an answer, but I can't comment yet and I think it will solve the problem (if there really is nothing between the mentioned lines, at least). | There should be some actual code inside begin/end, I guess you only have comments there. | Error in if else condition TSQL | [
"sql",
"sql-server",
"t-sql",
"sql-server-2005"
] |
In building a web app recently, I started thinking about the information returned from a query I was making:
Find the user information and (for simplicity sake) the associated phone numbers tied to this user. Something as simple as:
```
SELECT a.fname, a.lname, b.phone
FROM users a
JOIN users_phones b
ON (a.userid = b.userid)
WHERE a.userid = 12345;
```
No problem here (yes, I'm preventing injection, etc.; that's not the point of this question). When I think about the data that is returned, though, I am returning (potentially) several rows of information with that user's name on each one. Let's say that single user has 1000 phone numbers associated with it. That's the same first name and last name being returned 1000 times in one call. Let's also assume I want to return a lot more than just the first name and last name, and in fact I'm starting to return quite a few extra columns whose values I really only needed once.
Are there circumstances in which it is "more appropriate" to make multiple calls to a database?
e.g.
```
SELECT firstname, lastname
FROM users
WHERE userid = 12345;
SELECT phone
FROM users_phones
WHERE userid = 12345;
```
If the answer is yes, is there a good/proper method of determining when to use multiple queries versus a single one? | A query with a `JOIN` may be slower than two independent queries. It really depends on the type of access you're doing.
For your example, I'd go with the two query approach. These queries could be executed in parallel, they could be cached, and there's no real reason to `JOIN` other than for arbitrary presentation concerns.
You'll also want to be concerned about returning duplicate data. In your example it looks like `fname` and `lname` would be repeated for each and every phone number, resulting in a lot of data being transmitted that's actually not useful. This is because of the *one-to-many* relationship you've described.
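To make that duplication concrete, a small sketch (SQLite via Python, with invented sample data following the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userid INTEGER, fname TEXT, lname TEXT)")
conn.execute("CREATE TABLE users_phones (userid INTEGER, phone TEXT)")
conn.execute("INSERT INTO users VALUES (12345, 'Ada', 'Lovelace')")
conn.executemany("INSERT INTO users_phones VALUES (12345, ?)",
                 [("555-0001",), ("555-0002",), ("555-0003",)])

rows = conn.execute("""
    SELECT a.fname, a.lname, b.phone
    FROM users a JOIN users_phones b ON a.userid = b.userid
    WHERE a.userid = 12345
""").fetchall()
# The name travels with every phone row: 3 rows, with
# 'Ada' / 'Lovelace' repeated on each one
```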
Generally you'll want to `JOIN` if it means sending less data, or because the two queries are not independent. | I think that really depends on your use case. In the example you gave, it seems to make sense to return it as two queries, especially if you're passing that info back to a mobile device where you want to make sure you send them as little data as possible (not everyone has unlimited data.....)
I'd probably stick a DISTINCT in those queries as well if that's going to make a difference based on your tables. | Is it ever "ok" to use multiple queries instead of one? | [
"mysql",
"sql",
"database",
"join"
] |
I have a table with fields `id` (primary key) and `fid`. I want to get the record where `id` matches a particular value, as well as all related records that have its same `fid` value.
I can do this:
```
SELECT * FROM mytable
WHERE fid = (SELECT TOP 1 fid FROM mytable WHERE id = 'somevalue')
```
But I don't want the related records if the `fid` is a particular value (in my case an empty guid value).
Is there a way to do this in a single SQL statement? I am using SQL Server 2008 R2.
**UPDATE:**
Looking at the answers so far I think I may not have asked my question clearly. `id` and `fid` will never be equal. `LEFT JOIN` may be what I need, but I'm a bit SQL ignorant. What I'm hoping for is the following two queries as a single statement:
```
SELECT * FROM mytable WHERE id = 'somevalue'
SELECT * FROM mytable WHERE fid =
(SELECT TOP 1 fid FROM mytable
WHERE id = 'somevalue' AND fid != '00000000-0000-0000-0000-000000000000')
Based on your revision, the problem seems to be: select the row where `id` matches 'somevalue', plus all other rows that share its `fid`, provided that `fid` is not the empty GUID.
The following captures this logic:
```
SELECT t.*
FROM mytable t left outer join
(SELECT TOP 1 fid
FROM mytable
WHERE id = 'somevalue' AND fid <> '00000000-0000-0000-0000-000000000000'
) t1
on t.fid = t1.fid
WHERE id = 'somevalue' or t1.fid is not null;
```
Because `id` is a primary key, the `t1` subquery will return 0 or 1 rows. When it returns 0 rows, you will only get the original row matching `'somevalue'`. | I'm not certain I understand your question, but I'll take a stab at it. What I think you're asking is if you can select all records from one table where either the id or fid fields equal a particular value, but you don't want the related fields if the particular value you're searching on equals an empty guid value. If so, here's how you can do it:
```
SELECT
*
FROM
mytable t1
LEFT JOIN
mytable t2 ON (t1.id = t2.fid) AND (t2.fid IS NOT NULL);
```
Is this what you were looking for? | SQL statement to conditionally select related records | [
"sql",
"sql-server"
] |
I have the following tables:
**TABLE: teachers:**
> teacherID
> teacherName
**TABLE: students:**
> studentID
> studentName
> teacherID
> advisorID
so, usually, I know I can get a single row per student, with their teacher's name, using an INNER JOIN.
but in this case, the advisor and teacher are both from the same teachers table. so how can I join onto the teachers table twice: once getting the teacher name, and then again to get the advisor name?
hope this is clear
thanks! | This lists students with the names of their teachers and advisors if any, in alpha order of student, without either (a) the teacher or (b) the advisor having to exist. If you want only where those names exist, change the respective join to an INNER join.
```
SELECT s.studentname as [Student], t.teachername as [Teacher], a.teachername as [Advisor]
FROM Students s
LEFT JOIN Teachers t ON s.TeacherID = t.TeacherID
LEFT JOIN Teachers a ON s.AdvisorID = a.TeacherID
ORDER BY 1, 2
``` | You can join to the same table more than once, just give it a different alias for each join, and name your fields in a descriptive enough way. Use a left join if there might not be a link, but if a student always has both a teacher and an advisor, a straight join should be fine.
Something like this:
```
select s.studentname student
, t.teachername teacher
, a.teachername advisor
from students s
join teachers t
on t.teacherID = s.teacherID
join teachers a
on a.teacherID = s.advisorID
``` | Multiple joins onto same table | [
"sql",
"sql-server"
] |
I have a column of data that's `varchar(50)`. A sample data point reads "31-Dec-2001".
Is it possible to convert this into `DateTime`? I use SQL Server 2008. Thanks. | ```
CREATE TABLE T
(
X VARCHAR(50) NOT NULL
);
INSERT INTO T VALUES ('31-Dec-2001');
SET LANGUAGE ENGLISH;
ALTER TABLE T ALTER COLUMN X DATETIME NOT NULL;
``` | Yes it is!!
```
select convert(datetime,'31-Dec-2001')
``` | How to convert a varchar(50) into DateTime in SQL? | [
"sql",
"sql-server",
"sql-server-2008"
] |
I have the following table
* quizId is the ID of a quiz in the db
* userId corresponds to the user who took up the quiz with the respective quizId
```
quizId | userId
1 | 9
1 | 10
1 | 11
2 | 11
3 | 11
```
* Now, consider the userId to be 9.
* I need a query that will return only the quizId's 2 and 3
* Meaning, since quizId 1 is taken by userId 9, I don't want them again.
The idea is:
* In the quiz application that I'm making, a person can take a quiz only once.
* There is a page where he can see the list of available quizzes.
* So I want to show only the quizzes which he have not taken already.
* The table above is a quizLog table where the quizId and userId who took up that quiz are stored
I'm sorry, but this is the best way I could explain it. I hope someone can understand and help me out. And I couldn't make sure whether this is a duplicate question, since I have no clue what keywords I should search for. Sorry if this is a duplicate.
**EDIT:**
* I've a table called 'quiz' where quiz details are stored
* I've a table called 'users' where user details are stored
* The table above is called 'quizLog'
This is my existing query
```
SELECT quiz.quizId,quiz.title
FROM quiz
JOIN quizlog
ON quiz.quizId = quizlog.quizId
WHERE condition
``` | The simplest to understand query is:
```
select *
from quiz
where id not in (
select quiz_id
from user_quiz
where user_id = 9)
```
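A quick sanity check of the `NOT IN` version (SQLite via Python, using the sample rows from the question; table and column names follow the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quiz (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("CREATE TABLE user_quiz (quiz_id INTEGER, user_id INTEGER)")
conn.executemany("INSERT INTO quiz VALUES (?, ?)",
                 [(1, "q1"), (2, "q2"), (3, "q3")])
conn.executemany("INSERT INTO user_quiz VALUES (?, ?)",
                 [(1, 9), (1, 10), (1, 11), (2, 11), (3, 11)])

open_quizzes = conn.execute("""
    SELECT id FROM quiz
    WHERE id NOT IN (SELECT quiz_id FROM user_quiz WHERE user_id = 9)
""").fetchall()
# User 9 has only taken quiz 1, so quizzes 2 and 3 remain available
```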
A more advanced way would be:
```
select *
from quiz q
left join user_quiz uq
on uq.quiz_id = q.id
and uq.user_id = 9
where uq.user_id is null
```
The join version may perform better on most databases. | You also need to use the table where the quizes are stored. If you take the quizId from the quizLog table, you will only see the ones that someone already have taken, so new quizes would never show up.
You can make a left join on the quizLog table, and get the quizes where there is no match.
```
select q.quizId
from quiz q
left join quizLog l on l.quizId = q.quizId and l.userId = 9
where l.quizId is null
``` | How to find a column with respect to another column in the same table? | [
"mysql",
"sql"
] |
I have an application running in different cities. Each city has a server with different host name (IP ADDRESS). My Application connects to a database using the connectionString configuration from the config file like the sample below:
```
<add name="SAC_NG.My.MySettings.db_SACConnectionString" connectionString="Data Source=localhost;Initial Catalog=db_SAC;Persist Security Info=True;User
```
As the system is running on different servers, I would like to select which database to connect to when using the system.
So the point is, instead of Data Source=localhost, I would like to choose an option before the system starts running to select which database it should use.
Thanks. | I suggest you do as Mr. MarkAnswer suggests first:
Set up three connection strings with appropriate names that indicate to you which city they are for.
After that is done, you can have the user select the city (from a Dropdownlist or Radiobuttonlist for instance) and store it in a `session` variable, and in accordance with that variable you can select which connection string to use.
(Or even better, you can store the name of the connection string in a variable and point to that variable on each instance the connection string is needed.) | Add the IP address of your server, with the SQL Server instance name, to the data source:
```
example: 192.168.1.251\PISDEV
```
Create different connection strings for the different IPs.
In the code-behind, try to get the country and choose the connection based on the selected country.
(If you are not clear about the IP and instance, just run SQL Server on the server machine and connect to the database engine; the server name shown when you connect is what you pass as the data source.) | Use config file to connect in different database sql server Application in .NET | [
".net",
"sql",
"sql-server"
] |
I have a problem with an Oracle split query.
While splitting comma-separated data into multiple rows using CONNECT BY and a regular expression, I am getting **duplicate rows**. For example, my table has 150 rows, and two of those rows contain comma-separated strings, so overall I should get only 155 rows, but I am getting 2000. If I use DISTINCT it works fine, but I don't want duplicate rows in the query result in the first place.
I tried the following query; however, it generates duplicate rows in the result:
```
WITH CTE AS (SELECT 'a,b,c,d,e' temp,1 slno FROM DUAL
UNION
SELECT 'f,g',2 from dual
UNION
SELECT 'h',3 FROM DUAL)
SELECT TRIM(REGEXP_SUBSTR( TEMP, '[^,]+', 1, LEVEL)) ,SLNO FROM CTE
CONNECT BY LEVEL <= LENGTH(REGEXP_REPLACE(temp, '[^,]+')) + 1
```
**EDIT**
The above select query is only able to split a **single comma-delimited string**; however, it produces duplicate rows when executed on a table with **multiple rows**. How can I restrict the duplicate rows? | Finally I came up with this answer
```
WITH CTE AS (SELECT 'a,b,c,d,e' temp, 1 slno FROM DUAL
UNION
SELECT 'f,g' temp, 2 slno FROM DUAL
UNION
SELECT 'h' temp, 3 slno FROM DUAL)
SELECT TRIM(REGEXP_SUBSTR(temp, '[^,]+', 1, level)), slno
FROM CTE
CONNECT BY level <= REGEXP_COUNT(temp, '[^,]+')
AND PRIOR slno = slno
AND PRIOR DBMS_RANDOM.VALUE IS NOT NULL
``` | Try like this,
```
WITH CTE AS (SELECT 'a,b,c,d,e' temp,1 slno FROM DUAL
UNION
SELECT 'f,g',2 from dual
UNION
SELECT 'h',3 FROM DUAL)
SELECT regexp_substr (temp, '[^,]+', 1, rn)temp, slno
FROM cte
CROSS JOIN
(
SELECT ROWNUM rn
FROM (SELECT MAX (LENGTH (regexp_replace (temp, '[^,]+'))) + 1 max_l
from cte
)
connect by level <= max_l
)
WHERE regexp_substr (temp, '[^,]+', 1, rn) IS NOT NULL
order by temp;
``` | oracle -- Split multiple comma separated values in oracle table to multiple rows | [
"sql",
"regex",
"oracle",
"split"
] |
I have been practicing queries, and my current scenario is to find the nearest Saturday for a given date. After I got the logic down, I came up with what looks like a long and messy query, and I was wondering if there is a way to simplify it.
Here's my query
```
DECLARE @DATE DATE
SET @DATE ='2013-09-13'
IF DATENAME(DW,@DATE) = 'SUNDAY'
BEGIN
SELECT DATEADD(DAY,-1,@DATE) AS DATE, 'IS THE NEAREST SATURDAY'
END
ELSE IF DATENAME(DW,@DATE) = 'MONDAY'
BEGIN
SELECT DATEADD(DAY,-2,@DATE) AS DATE, 'IS THE NEAREST SATURDAY'
END
ELSE IF DATENAME(DW,@DATE) = 'TUESDAY'
BEGIN
SELECT DATEADD(DAY,-3,@DATE) AS DATE, 'IS THE NEAREST SATURDAY'
END
ELSE IF DATENAME(DW,@DATE) = 'WEDNESDAY'
BEGIN
SELECT DATEADD(DAY,3,@DATE) AS DATE, 'IS THE NEAREST SATURDAY'
END
ELSE IF DATENAME(DW,@DATE) = 'THURSDAY'
BEGIN
SELECT DATEADD(DAY,2,@DATE) AS DATE, 'IS THE NEAREST SATURDAY'
END
ELSE IF DATENAME(DW,@DATE) = 'FRIDAY'
BEGIN
SELECT DATEADD(DAY,1,@DATE) AS DATE, 'IS THE NEAREST SATURDAY'
END
ELSE IF DATENAME(DW,@DATE) = 'SATURDAY'
BEGIN
SELECT CONVERT(NVARCHAR,@DATE) + ' IS SATURDAY' AS DATE_DOW
END
```
As we can see, the query is long and runs through multiple IFs looking for the TRUE condition.
Please don't write the query, just hints. I would like to work on the query myself. | Here's my hint without answering, as you requested:
Consider date manipulation, using current date (you can use `GETDATE()` or your date variable), `DATEADD()` and `DATEDIFF()` can be used to write this in a couple lines.
If you just want to simplify your method you could use a `CASE` statement:
```
DECLARE @date DATE = '2013-09-13'
SELECT CASE WHEN DATENAME(DW,@DATE) = 'SUNDAY' THEN CAST(DATEADD(DAY,-1,@DATE) AS VARCHAR(12)) + ' IS THE NEAREST SATURDAY'
            WHEN DATENAME(DW,@DATE) = 'MONDAY' THEN CAST(DATEADD(DAY,-2,@DATE) AS VARCHAR(12)) + ' IS THE NEAREST SATURDAY'
            WHEN DATENAME(DW,@DATE) = 'TUESDAY' THEN CAST(DATEADD(DAY,-3,@DATE) AS VARCHAR(12)) + ' IS THE NEAREST SATURDAY'
            WHEN DATENAME(DW,@DATE) = 'WEDNESDAY' THEN CAST(DATEADD(DAY,3,@DATE) AS VARCHAR(12)) + ' IS THE NEAREST SATURDAY'
            WHEN DATENAME(DW,@DATE) = 'THURSDAY' THEN CAST(DATEADD(DAY,2,@DATE) AS VARCHAR(12)) + ' IS THE NEAREST SATURDAY'
            WHEN DATENAME(DW,@DATE) = 'FRIDAY' THEN CAST(DATEADD(DAY,1,@DATE) AS VARCHAR(12)) + ' IS THE NEAREST SATURDAY'
ELSE CONVERT(NVARCHAR,@DATE) + ' IS SATURDAY'
END
```
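As a side note, the same nearest-Saturday rule is easy to sanity-check outside SQL; a small Python sketch (purely illustrative, not part of the T-SQL answer):

```python
from datetime import date, timedelta

def nearest_saturday(d: date) -> date:
    # Python's weekday(): Monday=0 ... Saturday=5, Sunday=6
    offset = (5 - d.weekday()) % 7   # days forward to the next Saturday
    if offset > 3:                   # the previous Saturday is closer
        offset -= 7
    return d + timedelta(days=offset)

# 2013-09-13 was a Friday; the nearest Saturday is 2013-09-14
print(nearest_saturday(date(2013, 9, 13)))  # 2013-09-14
```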
To clarify the method I was hinting at (and Sparky posted): you need to adjust `DATEFIRST` to make this work. It works relative to whichever day is the first day of the week; Saturday is the 7th day of the week, so:
```
SET DATEFIRST 7
DECLARE @date DATE = '2013-09-21'
SELECT DATEADD(day,7-DATEPART(weekday,@date),@date)
``` | Try this:
```
select dateAdd(dd,7-DATEPART(dw,getDate()),GETDATE())
```
**datePart(dw,...)** returns day of the week for current date.
**7 - that number**, returns number of days until Saturday
Add the result to the date to get the next Saturday...
Similar logic if you need to go back to previous Saturday
Sql Fiddle: <http://www.sqlfiddle.com/#!3/61998/2> | How to find the Nearest (day of the week) for a given date | [
"sql",
"sql-server-2008-r2"
] |
I have a news table as follows. When editing news, I would like to see the original version of the article and the versions edited by all users. How do I design the database table structure for that?
```
Table news
News id, title, headline, content, date, user
``` | The long answer is to learn about Time-Oriented Database Applications, there's a [web site](http://www.cs.arizona.edu/~rts/publications.html) that has an [eBook](http://www.cs.arizona.edu/~rts/tdbbook.pdf) about it.
The short version is to either have another table for the history or change you existing table to allow for history.
Think about how you would differentiate between "original version", "edited versions" and "most recent versions". It could be simply by date, or you could try to maintain your own version number column, or you might even want some kind of flag or bit field (I don't recommend this option).
**UPDATE**
One possible solution (hard to know the best one for your scenario, so just an example):
Keep your existing table for the most recent version, as that's what you will probably need most of the time.
Add another table for historical versions, and in code (or a SQL Trigger if you have to) copy the existing version to the history table before saving the update. The schema of your history table could look something like this:
```
CREATE TABLE [dbo].[NewsHistory]
(
NewsHistoryID int IDENTITY(1, 1) PRIMARY KEY,
NewsID int NOT NULL,
Title varchar(100),
Headline varchar(200),
Content varchar(MAX),
CreatedAt datetime NOT NULL,
CreatedBy int NOT NULL
)
```
Just populate `NewsID, Title, Headline, Content, CreatedAt, CreatedBy` from the existing table before writing an update.
When you want to see the history for a news item, simply:
```
SELECT
Title,
Headline,
Content,
CreatedAt,
CreatedBy
FROM NewsHistory
WHERE NewsID = @newsID
ORDER BY NewsHistoryID
```
Sort by date if you prefer, but natural ordering should work here if everything is inserted in the right order. Again, it depends on your needs/environment. | I'd go with a pretty much identical table structure for the history.
Table NewsHistory
News\_History\_ID, News id, Revision, title, headline, content, date, user
News\_ID would reference the latest version of the news in the news table
Revision would be used to make tracking easier
What you can do is to create UPDATE trigger in your news table that will automatically copy current version into NewsHistory table. | Database structure for edited news | [
"sql",
"sql-server"
] |
I am trying to SUM the two CASE expressions that I just created in my query. I need both of these columns to return in my result set, but I also need the sum of the two columns to return in another column as 'DegreeDays.' I tried the SUM function, but to no avail, and I tried a simple 'HeatingDegreeDays + CoolingDegreeDays AS DegreeDays', and that didn't work either. Suggestions?
```
,CASE
WHEN TempLow > 60.5 THEN 0
WHEN ((TempHigh + TempLow)/2) > 60.5 THEN ((60.5-TempLow)/4)
WHEN TempHigh >= 60.5 THEN (((60.5 - TempLow)/2)-((TempHigh-TempLow)/4))
WHEN TempHigh < 60.5 THEN (60.5-(TempHigh+TempLow)/4)
END AS HeatingDegreeDays
,CASE
WHEN TempHigh < 66.25 THEN 0
WHEN ((TempHigh + TempLow)/2) < 66.25 THEN ((TempHigh-66.25)/4)
WHEN TempLow <= 66.25 THEN (((TempHigh - 66.25)/2)-((66.25-TempLow)/4))
WHEN TempLow > 66.25 THEN ((TempHigh+TempLow)/2)-66.25
END AS CoolingDegreeDays
``` | One way is to simply make your existing query into a [derived table](http://www.sqlteam.com/article/using-derived-tables-to-calculate-aggregate-values):
```
SELECT a.HeatingDegreeDays,
a.CoolingDegreeDays,
a.HeatingDegreeDays + a.CoolingDegreeDays as DegreeDays
FROM
(
SELECT
CASE
WHEN TempLow > 60.5 THEN 0
WHEN ((TempHigh + TempLow)/2) > 60.5 THEN ((60.5-TempLow)/4)
WHEN TempHigh >= 60.5 THEN (((60.5 - TempLow)/2)-((TempHigh-TempLow)/4))
WHEN TempHigh < 60.5 THEN (60.5-(TempHigh+TempLow)/4)
END AS HeatingDegreeDays
,CASE
WHEN TempHigh < 66.25 THEN 0
WHEN ((TempHigh + TempLow)/2) < 66.25 THEN ((TempHigh-66.25)/4)
WHEN TempLow <= 66.25 THEN (((TempHigh - 66.25)/2)-((66.25-TempLow)/4))
WHEN TempLow > 66.25 THEN ((TempHigh+TempLow)/2)-66.25
END AS CoolingDegreeDays
FROM MyTable
) AS a; -- Derived table aliased as "a"
```
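As a toy, runnable check of the derived-table pattern (SQLite via Python, with a deliberately simplified pair of CASE expressions and invented sample temperatures):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (TempHigh REAL, TempLow REAL)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [(80.0, 70.0), (50.0, 40.0)])

rows = conn.execute("""
    SELECT a.HeatingDegreeDays,
           a.CoolingDegreeDays,
           a.HeatingDegreeDays + a.CoolingDegreeDays AS DegreeDays
    FROM (SELECT CASE WHEN TempLow > 60.5 THEN 0
                      ELSE 60.5 - TempLow END AS HeatingDegreeDays,
                 CASE WHEN TempHigh < 66.25 THEN 0
                      ELSE TempHigh - 66.25 END AS CoolingDegreeDays
          FROM MyTable) AS a
""").fetchall()
# Aliases defined in the inner query can be reused (and summed) in the outer one
```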
If you're on SQL Server 2005 or greater, this would also work just as well as a [Common Table Expression (CTE)](http://technet.microsoft.com/en-us/library/ms190766.aspx):
```
;WITH cte AS (
SELECT
CASE
WHEN TempLow > 60.5 THEN 0
WHEN ((TempHigh + TempLow)/2) > 60.5 THEN ((60.5-TempLow)/4)
WHEN TempHigh >= 60.5 THEN (((60.5 - TempLow)/2)-((TempHigh-TempLow)/4))
WHEN TempHigh < 60.5 THEN (60.5-(TempHigh+TempLow)/4)
END AS HeatingDegreeDays
,CASE
WHEN TempHigh < 66.25 THEN 0
WHEN ((TempHigh + TempLow)/2) < 66.25 THEN ((TempHigh-66.25)/4)
WHEN TempLow <= 66.25 THEN (((TempHigh - 66.25)/2)-((66.25-TempLow)/4))
WHEN TempLow > 66.25 THEN ((TempHigh+TempLow)/2)-66.25
END AS CoolingDegreeDays
FROM MyTable
)
SELECT HeatingDegreeDays,
CoolingDegreeDays,
HeatingDegreeDays + CoolingDegreeDays as DegreeDays
FROM cte;
```
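Outside SQL Server, the same derived-table pattern can be sanity-checked with Python's built-in sqlite3 driver (the temperatures below are made up for the demo; the CTE form is SQL Server syntax, but the derived table runs anywhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (TempHigh REAL, TempLow REAL)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)", [(70, 50), (55, 40)])

# The derived table computes the two CASE columns once;
# the outer query reuses them by alias instead of repeating the formulas.
rows = conn.execute("""
    SELECT a.HeatingDegreeDays,
           a.CoolingDegreeDays,
           a.HeatingDegreeDays + a.CoolingDegreeDays AS DegreeDays
    FROM (
        SELECT CASE WHEN TempLow > 60.5 THEN 0
                    WHEN (TempHigh + TempLow)/2 > 60.5 THEN (60.5 - TempLow)/4
                    WHEN TempHigh >= 60.5 THEN ((60.5 - TempLow)/2) - ((TempHigh - TempLow)/4)
                    WHEN TempHigh < 60.5 THEN 60.5 - (TempHigh + TempLow)/4
               END AS HeatingDegreeDays,
               CASE WHEN TempHigh < 66.25 THEN 0
                    WHEN (TempHigh + TempLow)/2 < 66.25 THEN (TempHigh - 66.25)/4
                    WHEN TempLow <= 66.25 THEN ((TempHigh - 66.25)/2) - ((66.25 - TempLow)/4)
                    WHEN TempLow > 66.25 THEN (TempHigh + TempLow)/2 - 66.25
               END AS CoolingDegreeDays
        FROM MyTable
    ) AS a
""").fetchall()
print(rows)
```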
Either of these would seem to be better to maintain than repeating the logic and hard-coded values of your query just to make a calculation. | You can repeat the formulas:
```
,CASE
WHEN TempLow > 60.5 THEN 0
WHEN ((TempHigh + TempLow)/2) > 60.5 THEN ((60.5-TempLow)/4)
WHEN TempHigh >= 60.5 THEN (((60.5 - TempLow)/2)-((TempHigh-TempLow)/4))
WHEN TempHigh < 60.5 THEN (60.5-(TempHigh+TempLow)/4)
END +
CASE
WHEN TempHigh < 66.25 THEN 0
WHEN ((TempHigh + TempLow)/2) < 66.25 THEN ((TempHigh-66.25)/4)
WHEN TempLow <= 66.25 THEN (((TempHigh - 66.25)/2)-((66.25-TempLow)/4))
WHEN TempLow > 66.25 THEN ((TempHigh+TempLow)/2)-66.25
END AS TotalDays
```
Or you can put it in a subquery (or CTE) and say:
```
select . . .,
(HeatingDegreeDays + CoolingDegreeDays) as TotalDays
``` | SUM two CASE functions | [
"",
"sql",
"sql-server",
""
] |
I've been scouring the internet trying to find an answer to this but I am coming up empty handed. I was wondering if there is a function that acts as an opposite to the `nvl` function. Something like this:
```
UPDATE transaction_review
SET "function use if null"(review1,review2) = 'Yellow',
"function use if null"(reason1,reason2) = 'Audit'
WHERE ACCOUNT = '11111111'
```
Essentially, the `update` would pick review2 if review1 is already populated. Is there any `function` like this? Thanks. | If I understand you correctly, try using the **[coalesce](http://www.techonthenet.com/oracle/functions/coalesce.php)** function as below:
```
UPDATE transaction_review
set review1 = coalesce(review1,'Yellow'),
review2 = coalesce(review2,'Audit')
WHERE ACCOUNT = '11111111'
``` | Try the NVL2 function in PL/SQL.
> The NVL2 function accepts three parameters. If the first parameter
> value is not null it returns the value in the second parameter. If the
> first parameter value is null, it returns the third parameter. | Function that is opposite of nvl | [
"",
"sql",
"oracle",
""
] |
Basically I have the following in one table (only the first 25 rows) and I get it by doing this simple query.
```
select * from SampleDidYouBuy
order by MemberID, SampleID, DidYouBuy
SampleDidYouBuyID SampleID MemberID DidYouBuy DateAdded
-----------------------------------------------------------------
1217 23185 5 1 35:27.9
58458 23184 22 0 47:15.4
58459 23184 22 1 47:36.8
58457 23203 22 1 47:12.6
299576 23257 22 1 33:38.4
59470 23182 23 0 36:22.1
97656 23183 24 1 53:46.5
97677 23214 24 0 53:59.6
212732 23214 24 0 42:53.3
226583 23245 24 1 28:29.6
191718 23184 27 0 00:19.4
156363 23184 27 0 09:45.6
121106 23184 27 0 50:57.0
156362 23224 27 0 09:42.8
191716 23224 27 0 00:17.7
191715 23235 27 1 00:15.2
318100 23254 27 0 24:36.6
335410 23254 27 0 57:33.2
335409 23259 27 0 57:31.9
318099 23259 27 0 24:34.5
118989 23184 32 0 55:03.6
119013 23184 32 0 56:57.4
119842 23183 34 1 38:12.6
129364 23181 40 0 23:59.7
139977 23181 40 0 04:08.8
```
What I want to do is count the number Yes’s per member ID which I already know how to do **DidYouBuy = ‘1’**
But what I want to also do is count the number of No’s which is a bit trickier **‘DidYouBuy = 0’**
As you can see in the above table there are multiple entries for No's for the same memberID and Sample ID (this is the ID of the sample they are marketing), and this is because each time someone selects a No answer on the website, the question still remains, and each time they click No it registers for that Sample. However, when they click Yes the question disappears and there aren't any more No's registered for that sample for that particular member.
I want to count the number of unique No's that **HAVE NOT** turned into a Yes. I know it sounds confusing, so when you get time give us a shout; I can't figure this out. Does it need a conditional statement?
I can get the Yes's without problems, but counting the number of No's who have not selected Yes is a problem I can't figure out. I have a feeling it needs to be done using the group by clause?
Expected Output
```
SampleDidYouBuyID SampleID MemberID DidYouBuy DateAdded
-----------------------------------------------------------------
59470 23182 23 0 36:22.1
212732 23214 24 0 42:53.3
121106 23184 27 0 50:57.0
191716 23224 27 0 00:17.7
335410 23254 27 0 57:33.2
318099 23259 27 0 24:34.5
119013 23184 32 0 56:57.4
139977 23181 40 0 04:08.8
```
This is what I would like it to look like when I'm querying for the No's; notice how the people who have a No but later answered Yes are excluded from the result. | Updated:
```
select a.MemberId,
count(distinct a.SampleId) as unique_nos
from SampleDidYouBuy a
where a.DIdYouBuy = 0
and not exists (select 1
from SampleDidYouBuy b
where b.MemberId = a.MemberId
and b.SampleId = a.SampleId
and b.DidYouBuy = 1)
group by a.MemberId;
```
Demo at [fiddle](http://sqlfiddle.com/#!2/9a6c7e/3).
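The `NOT EXISTS` anti-join can also be checked end-to-end with Python's sqlite3 module (hypothetical rows shaped like the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE SampleDidYouBuy (
    SampleDidYouBuyID INTEGER, SampleID INTEGER, MemberID INTEGER, DidYouBuy INTEGER)""")
conn.executemany(
    "INSERT INTO SampleDidYouBuy VALUES (?, ?, ?, ?)",
    [
        (58458, 23184, 22, 0),   # a No that later became a Yes ...
        (58459, 23184, 22, 1),   # ... so member 22 / sample 23184 is excluded
        (59470, 23182, 23, 0),   # a No that never became a Yes -> counted
        (97677, 23214, 24, 0),   # two No rows for the same sample ...
        (212732, 23214, 24, 0),  # ... counted only once thanks to DISTINCT
    ],
)

rows = conn.execute("""
    SELECT a.MemberID, COUNT(DISTINCT a.SampleID) AS unique_nos
    FROM SampleDidYouBuy a
    WHERE a.DidYouBuy = 0
      AND NOT EXISTS (SELECT 1
                      FROM SampleDidYouBuy b
                      WHERE b.MemberID = a.MemberID
                        AND b.SampleID = a.SampleID
                        AND b.DidYouBuy = 1)
    GROUP BY a.MemberID
""").fetchall()
counts = dict(rows)
print(counts)
```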
To get rows with all columns,
```
select SampleDidYouBuyID, sampleid, MemberID, DIdYouBuy, DateAdded
from SampleDidYouBuy a
where a.DIdYouBuy = 0
and not exists (select 1
from SampleDidYouBuy b
where b.MemberId = a.MemberId
and b.SampleId = a.SampleId
and b.DidYouBuy = 1)
group by memberid, sampleid
order by MemberID,sampleid;
```
It is not quite clear from your expected output which one of the duplicate rows should be shown. | Well, the simplification of your problem is to count unique MemberIds:
```
SELECT COUNT(*) FROM (
SELECT * FROM SampleDidYouBuy
WHERE SampleID = {YourSampleID}
GROUP BY MemberID
) AS sample;
``` | Counting and Excluding based on conditions | [
"",
"mysql",
"sql",
"count",
""
] |
I'm getting the following error:
**#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'JOIN product\_catalog ON product\_catalog.entity\_id**
As a result of the following query:
```
SELECT sales_order.created_at , order_item.order_id, sales_order.increment_id, SUM(order_item.qty_ordered) AS qty_ordered , COUNT( * )
FROM order_item
JOIN sales_order
ON sales_order.entity_id = order_item.order_id
WHERE sales_order.created_at > '2012-11-15 00:00:00'
JOIN product_catalog
ON product_catalog.entity_id = order_item.product_id
WHERE product_catalog.size = 14
GROUP BY order_item.order_id;
```
Variations on this query have worked for grouping different types of product by sales order in the past where I only needed to perform one `JOIN` to get all the info I needed. The problem I'm encountering is from the second `JOIN`. Clearly I'm missing something but I really am not sure what. :( | Make sure that the `WHERE` clause comes after all of the `JOIN`s:
```
SELECT sales_order.created_at , order_item.order_id, sales_order.increment_id, SUM(order_item.qty_ordered) AS qty_ordered , COUNT( * )
FROM order_item
JOIN sales_order
ON sales_order.entity_id = order_item.order_id
JOIN product_catalog
ON product_catalog.entity_id = order_item.product_id
WHERE product_catalog.size = 14
AND sales_order.created_at > '2012-11-15 00:00:00'
GROUP BY order_item.order_id;
```
First `JOIN` the tables you need; the conditions in the `WHERE` clause come after. | The `JOIN ... ON ...` clause is also a place to put conditions, so you don't need a `WHERE` clause; just add `AND` instead.
```
SELECT sales_order.created_at , order_item.order_id, sales_order.increment_id,
SUM(order_item.qty_ordered) AS qty_ordered , COUNT( * )
FROM order_item
JOIN sales_order ON sales_order.entity_id = order_item.order_id
and sales_order.created_at > '2012-11-15 00:00:00'
JOIN product_catalog ON product_catalog.entity_id = order_item.product_id
and product_catalog.size = 14
GROUP BY order_item.order_id;
```
Please consider the example below, where I added aliases. It's good practice to use them because the code is more readable.
```
SELECT SO.created_at , OI.order_id, SO.increment_id,
SUM(OI.qty_ordered) AS qty_ordered , COUNT( * )
FROM order_item OI
JOIN sales_order SO ON SO.entity_id = OI.order_id
and SO.created_at > '2012-11-15 00:00:00'
JOIN product_catalog PC ON PC.entity_id = OI.product_id
and PC.size = 14
GROUP BY OI.order_id;
``` | MySQL throwing error on second JOIN | [
"",
"mysql",
"sql",
"join",
"syntax-error",
"inner-join",
""
] |
This query returns missing right parenthesis although when I run the inside query it runs fine.
```
select t.id
from
(select
v.id,
max(c.image_type_id),
max(vp.x),
max(vp.y),
max(vp.z)
FROM
v,
vp,
c,
WHERE v.id = vp.id
AND v.id = c.id
group by v.id;) t
``` | Remove semicolon `;` and `,`
```
SELECT t.id
FROM
(SELECT
v.id,
max(c.image_type_id),
max(vp.x),
max(vp.y),
max(vp.z)
FROM
v,
vp,
c -- HERE the comma
WHERE v.id = vp.id
AND v.id = c.id
GROUP BY v.id) t -- HERE the semicolon
``` | Just remove semicolon and last comma. Try to run below code.
```
select t.id
from
(select
v.id,
max(c.image_type_id),
max(vp.x),
max(vp.y),
max(vp.z)
FROM
v,
vp,
c
WHERE v.id = vp.id
AND v.id = c.id
group by v.id) t
``` | Missing right parenthesis subquery | [
"",
"sql",
""
] |
I have a date column in a table. The date column is in varchar. I want to identify a particular date range from that date column. My query is like this:
```
SELECT *
FROM [003 AccptReg].[dbo].[SysData1]
WHERE [RegDate_Sys] > '18 jul 2013'
```
But the result is not accurate, i.e. it gives dates which are prior to 18 jul 2013.
Is there anything wrong I am doing? | The problem is that you have the date as a `varchar`, but don't convert it to a date when you are doing the comparison. The database doesn't know that you see the data as dates, and will simply compare them as strings, so for example `'2 jan 1736'` will be larger than `'18 jul 2013'` because `2` comes after `1`.
The best would be if you could store the data as `datetime` values (or `date`), then you don't need to do the conversion when you compare the values, which would give better performance.
If that's not possible, do the conversion in the query:
```
select * from [003 AccptReg].[dbo].[SysData1]
where convert(datetime, [RegDate_Sys], 106) > '18 jul 2013'
```
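SQLite has no `CONVERT`, but the failure mode and the principle behind the fix are easy to see from Python: compared as plain strings, the dates sort alphabetically, while values stored in a sortable date format (ISO `yyyy-mm-dd` here) compare chronologically. The rows are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SysData1 (RegDate_Sys TEXT)")
conn.executemany("INSERT INTO SysData1 VALUES (?)",
                 [("2 jan 1736",), ("18 jul 2013",), ("19 jul 2013",)])

# String comparison: '2 jan 1736' > '18 jul 2013' because '2' > '1'.
wrong = [r[0] for r in conn.execute(
    "SELECT RegDate_Sys FROM SysData1 WHERE RegDate_Sys > '18 jul 2013'")]
print(wrong)  # the 1736 date slips in

# Store ISO-8601 strings instead: lexical order matches chronological order.
iso = {"2 jan 1736": "1736-01-02", "18 jul 2013": "2013-07-18",
       "19 jul 2013": "2013-07-19"}
conn.executemany("UPDATE SysData1 SET RegDate_Sys = ? WHERE RegDate_Sys = ?",
                 [(new, old) for old, new in iso.items()])
right = [r[0] for r in conn.execute(
    "SELECT RegDate_Sys FROM SysData1 WHERE RegDate_Sys > '2013-07-18'")]
print(right)
```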
Depending on the settings on the server, you might also need to convert `'18 jul 2013'` in the same way for the database to understand it correctly as a date. | For date column, you should compare as **DATE**
```
select * from [003 AccptReg].[dbo].[SysData1]
where CAST([RegDate_Sys] AS DATE) > CAST('18 jul 2013' AS DATE)
``` | Need to identify number that is higher than a given date | [
"",
"sql",
"sql-server",
"date",
""
] |
I have a small select query which picks data from a table as per the parameter passed to a procedure.
```
DECLARE @flgParam bit
```
.
.
```
SELECT *
FROM tablename
WHERE flgRequired like <If @flgparam is 0 then 1 or zero , Else 1>
```
What is the best way to construct the where clause? | A bit rough, but it should work, based on requirements:
```
select
S.itemname
,S.flgrequired
from
sample S
where
(S.flgRequired >= @flgParam)
```
[Tested on sqlfiddle](http://sqlfiddle.com/#!3/fb559/3). | I'm thinking something like this:
```
SELECT *
from tablename
where @flgparam is null or @flgcolumnval = @flgparam;
```
@flgparam is declared as a bit, so it can only take on the values of `NULL`, `0`, and `1`.
EDIT:
I'm trying to understand the logic. Adapted for the right names:
```
SELECT *
from sample
where (@flgparam = 0 and flgRequired is not null) or
(coalesce(@flgparam, 1) = 1 and flgRequired = 1)
```
The `like` is unnecessary; you can do strict equality. | SELECT statement having where clause with dynamic condition | [
"",
"sql",
"sql-server",
"select",
""
] |
I have a table Item and I want to get the minimum price of the Item for the particular id
**Table Item:**
```
Id Price1 Price2 Price3
1 10 20 30
2 20 30 40
```
According to the above example, the minimum price for **id-1** is **10** and for **id-2** is **20**. I simply want to get the minimum value from the three columns for a particular id.
**Remember:** I can't write out the CASE comparisons by hand, as any column can be null. Thanks in advance. | One approach could be like this:
```
SELECT Id, MIN(Price) FROM (
SELECT Id, Price1 As Price FROM Table1
UNION ALL
SELECT Id, Price2 As Price FROM Table1
UNION ALL
SELECT Id, Price3 As Price FROM Table1
) As AllValues
GROUP BY Id
```
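A quick way to convince yourself, including the `NULL` handling, is to run the same shape of query against SQLite from Python; the third row here is invented to exercise the `NULL` case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Id INTEGER, Price1 REAL, Price2 REAL, Price3 REAL)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?, ?, ?)",
                 [(1, 10, 20, 30), (2, 20, 30, 40), (3, None, 5, None)])

# Unpivot the three columns with UNION ALL, then take the aggregate MIN,
# which simply ignores NULLs.
rows = conn.execute("""
    SELECT Id, MIN(Price) FROM (
        SELECT Id, Price1 AS Price FROM Table1
        UNION ALL
        SELECT Id, Price2 FROM Table1
        UNION ALL
        SELECT Id, Price3 FROM Table1
    ) AS AllValues
    GROUP BY Id
    ORDER BY Id
""").fetchall()
print(rows)
```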
This works even if there are `null` values. Here is the [working demo](http://sqlfiddle.com/#!3/ce957/1). | Two similar solutions, using **[`APPLY`](http://technet.microsoft.com/en-us/library/ms175156%28v=sql.90%29.aspx)** operator:
```
SELECT t.Id,
MIN(m.Price)
FROM
tableX AS t
CROSS APPLY
( SELECT Price = Price1 UNION
SELECT Price2 UNION
SELECT Price3
) AS m
GROUP BY t.Id ;
SELECT t.Id,
x.Price
FROM
tableX AS t
OUTER APPLY
( SELECT TOP (1) Price
FROM
( SELECT Price1 UNION
SELECT Price2 UNION
SELECT Price3
) AS m (Price)
WHERE Price IS NOT NULL
ORDER BY Price ASC
) x ;
```
Tested at **[SQL-Fiddle](http://sqlfiddle.com/#!3/e1abeb/19)** | How to get the minimum value from the columns | [
"",
"sql",
"sql-server-2008",
"sql-server-2005",
""
] |
How can I get all the system stored procedures listed by MS SQL Server (2012) using an SQL query? | `sysobjects` is deprecated. You can use
```
SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
FROM sys.all_objects
WHERE type = 'P'
AND is_ms_shipped = 1
``` | ```
SELECT sch.name + '.' + obj.name
FROM sysobjects obj, sys.schemas sch
WHERE obj.type = 'P'
AND sch.schema_id = obj.uid
AND sch.name = 'sys'
--AND obj.name like '%my_search_string%'--use this for filtering
ORDER BY sch.name + '.' + obj.name
``` | Query returning all system stored procedures listed by MS SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I want to swap two records in SQL Server, for example:

How can i do it? | You can do it in one statement with `JOIN` like this
```
UPDATE t1
SET t1.id = t2.id
FROM table1 t1 JOIN table1 t2
ON (t1.id = 1 AND t2.id = 2)
OR (t1.id = 2 AND t2.id = 1)
```
Output after update:
```
| ID | NAME | FAMILY |
|----|------|--------|
| 1 | Mary | Simson |
| 2 | Jame | Ander |
```
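The `UPDATE ... FROM` join syntax is SQL Server specific. When the id column carries no uniqueness constraint, the same swap can be written portably with a `CASE`; a sketch checked against SQLite from Python (with a primary key you would need the join trick above or a temporary third value):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, family TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(1, 'Jame', 'Ander'), (2, 'Mary', 'Simson')])

# A single-statement swap: every touched row maps its id to the other one.
conn.execute("""
    UPDATE people
    SET id = CASE id WHEN 1 THEN 2 ELSE 1 END
    WHERE id IN (1, 2)
""")

rows = conn.execute("SELECT id, name, family FROM people ORDER BY id").fetchall()
print(rows)
```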
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/837b2/2)** demo | In the case that ID is an `identity` column, you could use a combined solution between @Vaibs\_Cool and @user814064,
something like this:
```
DECLARE @aID INT, @bID INT
SELECT @aID = 123, @bID = 987
;WITH ToSwap AS
(
SELECT CASE WHEN id = @aID THEN @bID ELSE @aID END AS id, [Name], [Family]
FROM YourTable
WHERE id IN (@aID, @bID)
)
UPDATE YourTable
SET [Name]=ToSwap.[Name], [Family] = ToSwap.[Family]
FROM YourTable
INNER JOIN ToSwap ON YourTable.id =ToSwap.Id
``` | how can I swap two records in sql server with script | [
"",
"sql",
"sql-server-2008",
""
] |
I have a column named [car\_no] and I want to rename it.
I am thinking of something like this:
```
exec sp_rename 'add_duty_slip.[car_no]' , 'car_no', 'column'
```
but it didn't work.
thanks | Your code runs like a charm. See [this fiddle](http://sqlfiddle.com/#!3/d89cb/1).
But using the same name for both old and new names won't do anything.
With using a different name for both (i.e. actually renaming the column), like :
```
exec sp_rename 'add_duty_slip.[car_no]' , 'car_no1', 'column'
```
works well too. See [this other fiddle](http://sqlfiddle.com/#!3/31283/2).
**EDIT** :
In case this is the real meaning of your question, brackets ( `[]` ) are not part of a column name, they're just a delimiter for names containing irregular characters (e.g. spaces). So `[car_no]` and `car_no` are in fact the exact same name.
And in the case your column name is **really** enclosed within brackets, you can do :
```
exec sp_rename 'add_duty_slip."[car_no]"' , 'car_no', 'column'
```
See [this fiddle](http://sqlfiddle.com/#!3/05fbe/1). | This would help you.
```
sp_RENAME 'TableName.[OldColumnName]' , '[NewColumnName]', 'COLUMN'
```
Also, since both the old and new names are the same, you won't notice any change even after you run your code. | I want to rename column name in sql server | [
"",
"sql",
"sql-server",
""
] |
I have Products, from many Brands, which a User can favorite (many-to-many) and also the User can follow Brands. And for the homepage I need all the products which the user has added to favorites and the products from Brands the User follows.
I have come up with the following SQL query, which however does not work as expected: it returns only products which are from followed brands and at the same time in favorites.
```
SELECT * FROM products
INNER JOIN favorites ON products.id = favorites.favorable_id
INNER JOIN followings ON products.merchant_id = followings.followable_id
WHERE favorites.favorable_type = 'Product' AND favorites.user_id = ?
AND followings.followable_type = 'Merchant' AND followings.user_id = ?
```
How can I properly fix the query?
Thanks in advance.
EDIT:
I will have 10k+ products and 1k+ users, so I need the query that is fastest in execution time. | One 'simple' approach (just modifying your existing query) could be to turn both of your `INNER JOIN`s into `LEFT JOIN`s and check that `at least one of the two joined successfully` in the `WHERE` clause...
```
SELECT
*
FROM
products
LEFT JOIN
favorites
ON products.id = favorites.favorable_id
AND favorites.favorable_type = 'Product'
AND favorites.user_id = ?
LEFT JOIN
followings
ON products.merchant_id = followings.followable_id
AND followings.followable_type = 'Merchant'
AND followings.user_id = ?
WHERE
favorites.user_id IS NOT NULL
OR followings.user_id IS NOT NULL
```
This has the downside that ***every*** record in the `product` table has to be checked by the `WHERE` clause.
If that table is small, or you normally return a 'large' fraction of the table any way, this may be fine. If you, however, only return a 'small' fraction of the table, you may want to optimise this, such as using two queries with a `UNION`, as implied by your question title...
```
SELECT
products.*
FROM
products
INNER JOIN
favorites
ON products.id = favorites.favorable_id
AND favorites.favorable_type = 'Product'
AND favorites.user_id = ?
UNION
SELECT
products.*
FROM
products
INNER JOIN
followings
ON products.merchant_id = followings.followable_id
AND followings.followable_type = 'Merchant'
AND followings.user_id = ?
```
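Here is the `UNION` variant exercised against SQLite from Python, with a hypothetical user 7 who favorited product 1 and follows merchant 20 (product 3 matches neither branch and is excluded; `UNION` deduplicates products matched by both):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER, merchant_id INTEGER);
    CREATE TABLE favorites (favorable_id INTEGER, favorable_type TEXT, user_id INTEGER);
    CREATE TABLE followings (followable_id INTEGER, followable_type TEXT, user_id INTEGER);
    INSERT INTO products VALUES (1, 10), (2, 20), (3, 30);
    INSERT INTO favorites VALUES (1, 'Product', 7);    -- user 7 favorited product 1
    INSERT INTO followings VALUES (20, 'Merchant', 7); -- user 7 follows merchant 20
""")

rows = conn.execute("""
    SELECT products.id FROM products
    JOIN favorites ON products.id = favorites.favorable_id
                  AND favorites.favorable_type = 'Product'
                  AND favorites.user_id = ?
    UNION
    SELECT products.id FROM products
    JOIN followings ON products.merchant_id = followings.followable_id
                   AND followings.followable_type = 'Merchant'
                   AND followings.user_id = ?
    ORDER BY 1
""", (7, 7)).fetchall()
print(rows)
```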
Each `INNER JOIN` here may be significantly faster than the whole of the `LEFT JOIN` based query, due to potentially being able to use indexes on `followings(user_id, followable_type)` and `favorites(user_id, favorable_type)`. | Try to replace the
```
INNER JOIN followings ON products.merchant_id = followings.followable_id
```
with
```
left JOIN followings ON favorites.favorable_id = followings.followable_id
``` | SQL left join between three tables with union | [
"",
"sql",
"postgresql",
"join",
""
] |
Consider this table :
```
dt qnt
---------- -------
1 10
2 -2
3 -4
4 3
5 -1
6 5
```
How do I create a query to get this result? (res is a running total column):
```
dt qnt res
---- ----- -----
1 10 10
2 -2 8
3 -4 4
4 3 7
5 -1 6
6 5 11
``` | You can do it using a simple subquery that calculates the sum up to the current row, which should work well on any version of SQL Server;
```
SELECT dt, qnt,
(SELECT SUM(qnt) FROM Table1 ts WHERE ts.dt <= t1.dt) res
FROM Table1 t1
ORDER BY dt;
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!6/0b7c4/1).
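The correlated-subquery running total is easy to verify with the question's six rows and Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (dt INTEGER, qnt INTEGER)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)",
                 [(1, 10), (2, -2), (3, -4), (4, 3), (5, -1), (6, 5)])

# For each row, the subquery sums every qnt up to and including that row's dt.
rows = conn.execute("""
    SELECT dt, qnt,
           (SELECT SUM(qnt) FROM Table1 ts WHERE ts.dt <= t1.dt) AS res
    FROM Table1 t1
    ORDER BY dt
""").fetchall()
print(rows)
```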
If you're using SQL Server 2012, see [Amit's answer](https://stackoverflow.com/a/18756013/477878) for a more efficient query. | If you are using SQL Server 2012, you can try this:
```
Select * ,Sum([qnt]) Over(order by dt) from table1
```
[**Sql Fiddle Demo**](http://sqlfiddle.com/#!6/2341f/1) | Please help me to write this query in SQL Server | [
"",
"sql",
"sql-server",
""
] |
I have a query like this:
```
SELECT *
FROM table1 ref1,
table1 ref2,
table2 ref3,
table2 ref4,
table3
WHERE ref3.a = ref1.b
AND ref4.a = ref2.b
AND ref3.c = f
AND ref4.c = d
```
and it works great, it gives me 1 record with all the columns I want.
Two of these columns have the same name but the latter one gets intuitively the extension \_1, so the first column with that name has the name `frubberducks` and the second one has the name `frubberducks_1`, and that's great. I need a query that only gives me these two columns so I tried:
```
SELECT frubberducks
FROM table1 ref1,
table1 ref2,
table2 ref3,
table2 ref4,
table3
WHERE ref3.a = ref1.b
AND ref4.a = ref2.b
AND ref3.c = f
AND ref4.c = d
```
and I get the error:
> ORA-00918: column ambiguously defined
What's the best way of doing this? | The best way to do that is using table aliases and column aliases, as below.
for one column:
```
SELECT ref1.frubberducks
FROM table1 ref1,table1 ref2,table2 ref3,table2 ref4,table3
WHERE ref3.a = ref1.b and ref4.a = ref2.b and ref3.c=f and ref4.c=d
```
for two columns with the same name:
```
SELECT ref1.frubberducks,
ref2.frubberducks
FROM table1 ref1,table1 ref2,table2 ref3,table2 ref4,table3
WHERE ref3.a = ref1.b and ref4.a = ref2.b and ref3.c=f and ref4.c=d
```
for two columns with the same name and with column aliases:
```
SELECT ref1.frubberducks ref1frubberducks,
ref2.frubberducks ref2frubberducks
FROM table1 ref1,table1 ref2,table2 ref3,table2 ref4,table3
WHERE ref3.a = ref1.b and ref4.a = ref2.b and ref3.c=f and ref4.c=d
``` | The best way is to be explicit such as with:
```
SELECT ref1.frubberducks as frubberducks,
ref3.frubberducks as frubberducks_1 ...
```
`select *` means you want everything and you're not overly concerned about where it is in the result set. If you *did* care, you'd be explicitly listing the columns. There are precious few cases where you should actually use `select *` anyway. | "column ambigiously defined" error | [
"",
"sql",
"oracle",
"name-collision",
""
] |
I ask this question with a bit of sheepishness because I should know the answer. Could someone be kind and explain if and how injection could occur in the following code?
```
<cfquery>
select * from tableName
where fieldName = '#value#'
</cfquery>
```
I'm specifically curious about injection attempts and other malicious input, not about best practices or input validation for handling "normal" user input. I see folks strongly advocating use of CFQueryParam, but don't think I see the point. If user input has been validated for consistency to the database schema (e.g. so that input must be numeric for numerical database fields), is there anything else gained by using CFQueryParam? What does `<cfqueryparam CFSQLType = "CF_SQL_VARCHAR">` do that `'#value#'` doesn't do? | > doesn't CF already "magically" do this in CF query tag when you wrap evaluated variables in single quotes?
Yep, it'll convert `'` to `''` for you.
Now guess what SQL you get from this code:
```
<cfset value = "\'; DROP TABLE tableName -- " />
<cfquery>
select * from tableName
where fieldName = '#value#'
</cfquery>
```
The cfqueryparam tag works; using query params solves SQL injection.
Any custom written attempts at validating, sanitizing, or escaping (all separate things, btw) are, at best, only as good as the developer's knowledge of the database system the code is running against.
If the developer is unaware of other escape methods, or if the values are modified between validation/escaping and them being rendered into SQL, or even if the codebase is ported to another database system and *seems* to be fine, there's a chance of custom code breaking down.
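The difference between interpolating a value and binding it is the same in every client library; a sketch in Python with SQLite (table name borrowed from the query in the question, payload invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableName (fieldName TEXT)")
conn.execute("INSERT INTO tableName VALUES ('safe')")

value = "x' OR '1'='1"  # a classic injection payload

# Naive interpolation: the payload becomes part of the SQL text, so the
# WHERE clause degenerates to ... OR '1'='1' and matches every row.
injected = conn.execute(
    "SELECT * FROM tableName WHERE fieldName = '%s'" % value).fetchall()

# Bind variable: the payload is passed as data and never parsed as SQL.
bound = conn.execute(
    "SELECT * FROM tableName WHERE fieldName = ?", (value,)).fetchall()

print(injected, bound)
```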
When it comes to security, you don't want chances like that. So use cfqueryparam. | **Update:**
While this answers part of your question, [Peter's response](https://stackoverflow.com/a/18797042/104223) is better, in that it directly addresses your question of "Why use cfqueryparam, when CF automatically adds protection by escaping single quotes?". Answer: In short, because the latter does not always work. Bind variables do.
---
> It says in the docs "escapes string variables in single-quotation
> marks" but doesn't CF already "magically" do this in CF query tag when
> you wrap evaluated variables in single quotes?
Yes, most versions automatically escape single quotes as a protection measure for those not using cfqueryparam. However, as Scott noted above, it is better to use cfqueryparam (ie bind variables) because they [ensure parameters are not *executed* as sql commands](http://en.wikipedia.org/wiki/Bind_variable). Bind variables work, even in cases where the automatic escaping does not, [as Peter's answer demonstrates](https://stackoverflow.com/a/18797042/104223).
That said, sql injection protection is really just a side effect of using bind variables. The primary reason to use bind variables is performance. [Bind variables encourage databases to re-use query plans](https://stackoverflow.com/questions/17574276/how-can-cfqueryparam-affect-performance-for-constants-and-null-values/17582859#17582859), instead of creating a new plan every time your #parameters# change. That cuts down on compilation time, improving performance.
Cfqueryparam also has a number of other benefits:
* Provides data type checking (length, value, type, ...)
* Provides attributes that simplify handling of "lists" and `null` values
* Performs data type checking *before* any sql is sent to the database, preventing wasted database calls
While it does not really apply to string columns, IMO another big reason to use it is accuracy. When you pass a quoted string to the database, you are relying on [implicit conversion](http://blogs.msdn.com/b/craigfr/archive/2008/06/05/implicit-conversions.aspx). Essentially you are leaving it up to the database to figure out how to best perform the comparison, and the results are not always what you were expecting. (Date strings are a prime example). You may end with inaccurate results, or sometimes slower queries, depending on how the database decides to execute the sql. Using cfqueryparam avoids those issues by eliminating the ambiguity. | ColdFusion Query - Injection Protection | [
"",
"sql",
"coldfusion",
"sql-injection",
"cfml",
""
] |
I've found a number of examples showing how to select a single oldest/newest row from a grouped set, but am having trouble getting the oldest two rows from a data set.
Here's my sample table:
```
CREATE TABLE IF NOT EXISTS `orderTable` (
`customer_id` varchar(10) NOT NULL,
`order_id` varchar(4) NOT NULL,
`date_added` date NOT NULL,
PRIMARY KEY (`customer_id`,`order_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
INSERT INTO `orderTable` (`customer_id`, `order_id`, `date_added`) VALUES
('1234', '5A', '1997-01-22'),
('1234', '88B', '1992-05-09'),
('0487', 'F9', '2002-01-23'),
('5799', 'A12F', '2007-01-23'),
('1234', '3A', '2009-01-22'),
('3333', '7FHS', '2009-01-22'),
('0487', 'Z33', '2004-06-23'),
('3333', 'FF44', '2013-09-11'),
('3333', '44f5', '2013-09-02');
```
This query returns more than two rows:
```
SELECT customer_id, order_id, date_added
FROM orderTable T1
WHERE (
select count(*) FROM orderTable T2
where T2.order_id = T1.order_id AND T2.date_added <= T1.date_added
) <= 2;
```
Since I am not looking for a single row, this is not a standard `greatest-n-per-group` type query.
What am I missing that I can get the first two orders for each customer\_id? | The best (i.e. most performant) approach is to use a User Defined Variable in the query.
```
SELECT tmp.customer_id, tmp.date_added
FROM (
SELECT
customer_id, date_added,
IF (@prev <> customer_id, @rownum := 1, @rownum := @rownum+1 ) rank,
@prev := customer_id
FROM orderTable t
JOIN (SELECT @rownum := NULL, @prev := 0) r
ORDER BY t.customer_id, t.date_added
) tmp
WHERE tmp.rank <= 2
ORDER BY customer_id, date_added
```
**Results**:
```
| CUSTOMER_ID | DATE_ADDED |
|-------------|----------------------------------|
| 0487 | January, 23 2002 00:00:00+0000 |
| 0487 | June, 23 2004 00:00:00+0000 |
| 1234 | May, 09 1992 00:00:00+0000 |
| 1234 | January, 22 1997 00:00:00+0000 |
| 3333 | January, 22 2009 00:00:00+0000 |
| 3333 | September, 02 2013 00:00:00+0000 |
| 5799 | January, 23 2007 00:00:00+0000 |
```
Fiddle [here](http://sqlfiddle.com/#!2/23e88/1).
Note that the join is just being used to initialise the variables. | Your original query should be (use customer\_id in subquery)
```
SELECT customer_id, order_id, date_added
FROM orderTable T1
WHERE (
select count(*) FROM orderTable T2
where T2.customer_id = T1.customer_id AND T2.date_added <= T1.date_added
) <= 2;
```
You can also use variables:
```
SELECT customer_id, order_id, date_added FROM (
SELECT customer_id, order_id, date_added,
@rownum := if(@prev_cust = customer_id, @rownum + 1,1) as rn,
@prev_cust := customer_id cust_var
FROM orderTable T1,
(SELECT @rownum := 0) r,
(SELECT @prev_cust := '') c
order by customer_id, date_added
) o where o.rn < 3;
```
[SQL DEMO](http://sqlfiddle.com/#!2/a6ae0/14) | Select oldest two records from group | [
"",
"mysql",
"sql",
"group-by",
"greatest-n-per-group",
""
] |
I have a select statement
```
select
name
,age
from table_employees
where id=@impId;
```
I want to check if age is null and then return zero. I tried the following but it doesn't work:
```
select
name
,age isnull(age,0.00)
from table_employees
where id=@impId;
```
Please let me know how to fix this.
Thanks | Try this way
```
select
name,age=isnull(age,0.00)
from table_employees
where id=@impId;
```
or
```
select
name,
isnull(age,0.00) as age
from table_employees
where id=@impId;
``` | In SQL Server 2005 or later, you can use the [`COALESCE`](http://msdn.microsoft.com/en-us/library/ms190349.aspx) function for that:
```
SELECT
name
, COALESCE(age, 0) as age
FROM table_employees
WHERE id=@impId
```
The function evaluates its arguments one by one, and returns the first non-`NULL` value. | Returning zeros if null | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a SQL query, where I'm using a subquery in a WHERE clause. I then need to use the same subquery again to compare it to a different column.
I'm assuming there isn't a way for me to access 'emp\_education\_list li' outside of the subquery?
I'm guessing the subquery repetition is redundant. I'm a bit rusty in SQL to be frank though.
```
SELECT e.fname, e.lname
FROM employee e, emp_education_list l
WHERE
e.skillsID = l.skillsID
AND
(
SELECT li.educationID
FROM emp_education_list li, job j
WHERE j.skillsID = li.skillsID
AND j.jobID = 1001
) = l.educationID
AND
(
SELECT li.edu_level
FROM emp_education_list li, job j
WHERE j.skillsID = li.skillsID
AND j.jobID = 1001
) < l.edu_level
;
``` | Give this a try,
```
SELECT e.fname, e.lname
FROM employee e
INNER JOIN emp_education_list l
ON e.skillsID = l.skillsID
INNER JOIN
(
SELECT li.educationID, li.edu_level
FROM emp_education_list li
INNER JOIN job j
ON j.skillsID = li.skillsID
WHERE j.jobID = 1001
) x ON l.educationID = x.educationID
WHERE x.edu_level < l.edu_level
``` | Try this one with `JOIN` i guess proper joining will take care of rest of the conditions
```
SELECT e.fname, e.lname
FROM employee e
JOIN emp_education_list l ON (e.skillsID = l.skillsID)
JOIN job j ON ( j.skillsID = l.skillsID AND j.jobID = 1001)
``` | MySQL Subquery Repetition Avoidance in WHERE/AND | [
"",
"mysql",
"sql",
"subquery",
"reusability",
""
] |
What SQL select query can I use to sort each row and then order the sorted rows?
E.g.: *table tab (c1, c2, c3, c4)*
```
2,5,8,4
2,1,6,7
5,2,9,3
```
The query must give:
```
1,2,6,7
2,3,5,9
2,4,5,8
``` | ```
SELECT
MIN(c1, c2, c3, c4) AS new_c1,
CASE MIN(c1, c2, c3, c4) WHEN c1 THEN MIN(c2, c3, c4)
WHEN c2 THEN MIN(c1, c3, c4)
WHEN c3 THEN MIN(c1, c2, c4)
WHEN c4 THEN MIN(c1, c2, c3)
END AS new_c2,
CASE MAX(c1, c2, c3, c4) WHEN c1 THEN MAX(c2, c3, c4)
WHEN c2 THEN MAX(c1, c3, c4)
WHEN c3 THEN MAX(c1, c2, c4)
WHEN c4 THEN MAX(c1, c2, c3)
END AS new_c3,
MAX(c1, c2, c3, c4) AS new_c4
FROM tab
ORDER BY new_c1, new_c2, new_c3, new_c4
```
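Because the answer is already SQLite syntax, the whole query can be executed from Python's sqlite3 module against the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (c1 INTEGER, c2 INTEGER, c3 INTEGER, c4 INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?, ?, ?, ?)",
                 [(2, 5, 8, 4), (2, 1, 6, 7), (5, 2, 9, 3)])

# Scalar min()/max() take several arguments; the CASEs pick the
# second-smallest and second-largest by excluding the extreme column.
rows = conn.execute("""
    SELECT MIN(c1, c2, c3, c4) AS new_c1,
           CASE MIN(c1, c2, c3, c4) WHEN c1 THEN MIN(c2, c3, c4)
                                    WHEN c2 THEN MIN(c1, c3, c4)
                                    WHEN c3 THEN MIN(c1, c2, c4)
                                    WHEN c4 THEN MIN(c1, c2, c3) END AS new_c2,
           CASE MAX(c1, c2, c3, c4) WHEN c1 THEN MAX(c2, c3, c4)
                                    WHEN c2 THEN MAX(c1, c3, c4)
                                    WHEN c3 THEN MAX(c1, c2, c4)
                                    WHEN c4 THEN MAX(c1, c2, c3) END AS new_c3,
           MAX(c1, c2, c3, c4) AS new_c4
    FROM tab
    ORDER BY new_c1, new_c2, new_c3, new_c4
""").fetchall()
print(rows)
```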
* see it working in an [sqlfiddle](http://sqlfiddle.com/#!5/010dc/2/0)
* see [here](http://www.sqlite.org/lang_corefunc.html) for `min()` and `max()` functions
Quote:
> Note that max() [and min()] is a simple function when it has 2 or more arguments but operates as an aggregate function if given only a single argument. | It's just something to start with, maybe not the cleanest code, but it will work:
```
SELECT CASE WHEN C1 < C2 THEN
CASE WHEN C1 < C3 THEN C1 ELSE C3 END
ELSE
CASE WHEN C2 < C3 THEN C2 ELSE C3 END
END
From YourTable
```
This will select the smallest element in the first place.
Three more of those and you are good to go.
Once you have this result set, you can order by the first column. | SQL to sort columns within each row and order rows based on this | [
"",
"sql",
"sqlite",
"sorting",
"select",
""
] |
I have a table where I store some items with prices and others items without prices.
I want to select all the items with a price, but on the other hand I also want to select some items without prices at the same time. Is there any way to do this?
Right now I have this select statement:
```
SELECT DISTINCT TOP 100 PERCENT idItem, itemDescription, price
FROM myTable
WHERE price > 0 and idItem = '000228'
``` | If you want to select all items with price > 0 plus the one with id 000228, the where clause needs to be "price > 0 OR idItem = '000228'" | Try to add a `price is null` condition as below
```
SELECT DISTINCT TOP 100 PERCENT idItem, itemDescription, price
FROM myTable
WHERE (price > 0 or price is null) and idItem = '000228'
``` | sql where clause but not for some records | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Every time I run this and add as an input the words 'yellow' or 'red' or 'blue', the same error appears.
```
set serveroutput on
undefine color1
undefine color2
accept color1 prompt 'Type the 1st primary color: '
accept color2 prompt 'Type the 2nd primary color: '
begin
if &color1 = &color2 then
dbms_output.put_line(&color1||' plus '||&color2||' then '||&color1);
elsif (&color1 = 'red' and &color2 = 'blue') or (&color2 = 'red' and &color1 = 'blue') then
dbms_output.put_line(&color1||' plus '||&color2||' is purple');
elsif (&color1 = 'red' and &color2 = 'yellow') or (&color2 = 'red' and &color1 = 'yellow') then
dbms_output.put_line(&color1||' plus '||&color2||' is orange');
else
dbms_output.put_line(&color1||' plus '||&color2||' is green');
end if;
end;
/
```
ERROR:
```
ORA-06550: line 2, column 7:
PLS-00201: identifier 'BLUE' must be declared
ORA-06550: line 2, column 4:
PL/SQL: Statement ignored
```
Please Help! :) | Try like this,
```
set serveroutput on
undefine color1
undefine color2
accept color1 prompt 'Type the 1st primary color: '
accept color2 prompt 'Type the 2nd primary color: '
BEGIN
IF '&color1' = '&color2' THEN
dbms_output.put_line('&color1'||' plus '||'&color2'||' then '||'&color1');
elsif ('&color1' LIKE 'red' AND '&color2' LIKE 'blue') OR ('&color2' LIKE 'red' AND '&color1' LIKE 'blue') THEN
dbms_output.put_line('&color1'||' plus '||'&color2'||' is purple');
elsif ('&color1' LIKE 'red' AND '&color2' LIKE 'yellow') OR ('&color2' LIKE 'red' AND '&color1' LIKE 'yellow') THEN
dbms_output.put_line('&color1'||' plus '||'&color2'||' is orange');
ELSE
dbms_output.put_line('&color1'||' plus '||'&color2'||' is green');
end if;
END;
/
```
Or it would be better to store color1 & color2 in local variables like this,
```
SET serveroutput ON
undefine color1
undefine color2
accept color1 prompt 'Type the 1st primary color: '
accept color2 prompt 'Type the 2nd primary color: '
DECLARE
color_1 VARCHAR2(10) := '&color1';
color_2 VARCHAR2(10) := '&color2';
BEGIN
IF color_1 = color_2 THEN
dbms_output.put_line(color_1||' plus '||color_2||' then '||color_1);
elsif (color_1 LIKE 'red' AND color_2 LIKE 'blue') OR (color_2 LIKE 'red' AND color_1 LIKE 'blue') THEN
dbms_output.put_line(color_1||' plus '||color_2||' is purple');
elsif (color_1 LIKE 'red' AND color_2 LIKE 'yellow') OR (color_2 LIKE 'red' AND color_1 LIKE 'yellow') THEN
dbms_output.put_line(color_1||' plus '||color_2||' is orange');
ELSE
dbms_output.put_line(color_1||' plus '||color_2||' is green');
END IF;
end;
``` | `&color1` is a substitution variable; sqlplus basically replaces `&color1` with the user input contents, which has to be a single-quoted literal here. When it is not, your `blue` input is treated as a variable name, which is not defined.
So, you have two options - either to input quoted `'blue'`, `'yellow'` etc. in sqlplus, or to replace `&color1` with `'&color1'` in your code. | Identifier 'input' must be declared - PL SQL | [
"",
"sql",
"oracle",
"plsql",
""
] |
I have 3 tables (user, item, userlike) and 2 sql queries. How can I unify these two queries?
```
SELECT item.userid, item.id, user.name FROM item
INNER JOIN user ON item.userid = user.id
SELECT userid,itemid, COUNT(*) AS `liked` FROM userlike
WHERE userid=9
GROUP BY itemid
```
I want to know whether a specific user (9) has liked the item or not.
Result should be something like this
```
itemid userid name liked* (*whether 'user 9' liked this item or not)
1 7 foo 0
2 4 asd 1
```
Thanks | You want to use an `OUTER JOIN` for this
```
SELECT i.id itemid, u.id userid, u.name, COALESCE(liked, 0) liked
FROM item i JOIN user u
ON i.userid = u.id LEFT JOIN
(
SELECT itemid, COUNT(*) liked
FROM userlike
WHERE userid = 9
GROUP BY itemid
) l
ON i.id = l.itemid;
```
or
```
SELECT i.id itemid, u.id userid, u.name, l.userid IS NOT NULL liked
FROM item i JOIN user u
ON i.userid = u.id LEFT JOIN userlike l
ON i.id = l.itemid
AND l.userid = 9;
```
Sample output:
```
| ITEMID | USERID | NAME | LIKED |
|--------|--------|-------|-------|
| 2 | 4 | user4 | 1 |
| 1 | 7 | user7 | 0 |
```
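The second query can be sanity-checked against SQLite through Python's `sqlite3` module (sample data reconstructed from the question; SQLite needs an explicit `AS` before the `liked` alias, otherwise the queries are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER, name TEXT);
    CREATE TABLE item (id INTEGER, userid INTEGER);
    CREATE TABLE userlike (userid INTEGER, itemid INTEGER);
    INSERT INTO user VALUES (7, 'foo'), (4, 'asd');
    INSERT INTO item VALUES (1, 7), (2, 4);
    INSERT INTO userlike VALUES (9, 2);   -- user 9 liked item 2 only
""")

rows = conn.execute("""
    SELECT i.id AS itemid, u.id AS userid, u.name,
           l.userid IS NOT NULL AS liked
    FROM item i
    JOIN user u ON i.userid = u.id
    LEFT JOIN userlike l ON i.id = l.itemid AND l.userid = 9
    ORDER BY i.id
""").fetchall()
print(rows)  # [(1, 7, 'foo', 0), (2, 4, 'asd', 1)]
```

The `LEFT JOIN` keeps every item; `l.userid` is only non-NULL where the outer-joined like row matched, which is exactly the 0/1 `liked` flag the question asks for.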
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/ac755/16)** demo | ```
SELECT item.id, item.userid, user.name, userlike.liked
FROM item
JOIN user ON user.id = item.userid
JOIN userlike ON item.id = userlike.itemid
WHERE userlike.liked = 1
GROUP BY item.id
```
**OR**
```
SELECT item.id, item.userid, user.name, userlike.liked
FROM item
JOIN user ON user.id = item.userid
JOIN userlike ON item.id = userlike.itemid
WHERE COUNT(userlike.liked) >= 1
GROUP BY item.id
``` | SQL INNER JOIN and COUNT | [
"",
"mysql",
"sql",
"union",
""
] |
I am tasked with calculating the difference in months and fiscal years between a `NUMBERS` column and a `SHIP_DATES` column. I have trouble converting the dates from the character strings in the `NUMBERS` column (`NUMBER_MONTH` and `NUMBER_FY`) before the calculations.
Could someone please look? Basically, the issue I have is that I can't take `SHIP_DATE_MONTH` minus `NUMBER_MONTH` and `SHIP_DATE_FY` minus `NUMBER_FY`.
* Column `NUMBERS` is `varchar`. It is formatted as `SSSYYMMFFFFFF`, where
+ `SSS` is the store station,
+ `YY` is the fiscal year,
+ `MM` is the month, and
+ `FFFFFF` is the frequency
* Column `SHIP_DATES` is `datetime`.
* Column `ID` is int.
Here is my code:
```
CREATE TABLE #TEMP
(
NUMBERS VARCHAR (20),
SHIP_DATES DATETIME,
ID INT
)
INSERT INTO #TEMP VALUES ( 'ABC1006000046' , '6/5/2010' , '123' )
INSERT INTO #TEMP VALUES ( 'ABC1006000046' , '7/15/2013' , '123' )
INSERT INTO #TEMP VALUES ( 'CDS0809000059' , '9/8/2008' , '124' )
INSERT INTO #TEMP VALUES ( 'CDS0809000059' , '1/31/2013' , '124' )
SELECT SUBSTRING(NUMBERS, 6, 2) AS NUMBER_MONTH,
SUBSTRING(NUMBERS, 4, 2) AS NUMBER_FY,
DATEPART (MONTH, SHIP_DATES) AS SHIP_DATE_MONTH,
DATEPART (YEAR, SHIP_DATES) AS SHIP_DATE_FY,
ID
INTO #TEMP1
FROM #TEMP
--calculate the difference in month and fiscal year
SELECT DATEDIFF( YEAR , NUMBER_FY , SHIP_DATE_FY ) AS DIFF_YEAR ,
DATEDIFF( MONTH , NUMBER_MONTH , SHIP_DATE_MONTH ) AS DIFF_MONTH ,
ID
FROM #TEMP1
```
and this is the error message I receive:
`Conversion failed when converting date and/or time from character string.`
Please note, here is a convert-to-datetime approach, but I have not figured out how to make it work yet:
```
CONVERT(VARCHAR , SHIP_DATES , 101 ) AS 'MM/DD/YYYY'
``` | When you call [DATEDIFF](http://technet.microsoft.com/en-us/library/ms189794.aspx), the 2nd and 3rd arguments are of type DateTime, but you are passing strings for the 2nd argument and integers for the 3rd argument.
I think (if I understand what you are trying to do) you should convert the strings to numbers, get rid of the `DATEDIFF` and do a regular minus operation.
Addition after comment:
To cast an `nvarchar` to an int, you can change your code like this:
```
SELECT
CAST(SUBSTRING(NUMBERS, 6, 2) AS BIGINT) AS NUMBER_MONTH,
CAST(SUBSTRING(NUMBERS, 4, 2) AS BIGINT) AS NUMBER_FY,
DATEPART (MONTH, SHIP_DATES) AS SHIP_DATE_MONTH,
DATEPART (YEAR, SHIP_DATES) AS SHIP_DATE_FY,
ID
INTO #TEMP1
FROM #TEMP
``` | I would try something more along these lines:
```
CREATE TABLE #TEMP
(NUMBERS VARCHAR (20), SHIP_DATES_A DATETIME, SHIP_DATES_B DATETIME, ID INT)
```
Then for testing:
```
INSERT INTO TEMP VALUES ('ABC1006000046', '6/5/2010','7/15/2013', '123')
INSERT INTO TEMP VALUES ('CDS0809000059', '9/8/2008','1/31/2013', '124')
```
Then this query:
```
select NUMBERS,SHIP_DATES_A,SHIP_DATES_B,ID,
DATEDIFF(month,ship_dates_a,ship_dates_b)
from temp
``` | Convert Varchar columns to Datetime | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm looking to update multiple rows in PostgreSQL in one statement. Is there a way to do something like the following?
```
UPDATE table
SET
column_a = 1 where column_b = '123',
column_a = 2 where column_b = '345'
``` | You can also use [`update ... from`](https://www.postgresql.org/docs/current/static/sql-update.html) syntax and use a mapping table. If you want to update more than one column, it's much more generalizable:
```
update test as t set
column_a = c.column_a
from (values
('123', 1),
('345', 2)
) as c(column_b, column_a)
where c.column_b = t.column_b;
```
You can add as many columns as you like:
```
update test as t set
column_a = c.column_a,
column_c = c.column_c
from (values
('123', 1, '---'),
('345', 2, '+++')
) as c(column_b, column_a, column_c)
where c.column_b = t.column_b;
```
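The same mapping-table pattern also works in SQLite 3.33+ (which added `UPDATE ... FROM`), though SQLite doesn't support column aliases on a `VALUES` list, so in this Python sketch a subquery stands in for the mapping table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (column_b TEXT, column_a INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [("123", 0), ("345", 0), ("999", 7)])

# Mapping table built with UNION ALL instead of VALUES (...) AS c(...)
conn.execute("""
    UPDATE test AS t SET
        column_a = c.column_a
    FROM (SELECT '123' AS column_b, 1 AS column_a
          UNION ALL
          SELECT '345', 2) AS c
    WHERE c.column_b = t.column_b
""")

rows = conn.execute(
    "SELECT column_b, column_a FROM test ORDER BY column_b").fetchall()
print(rows)  # [('123', 1), ('345', 2), ('999', 7)] -- row '999' untouched
```

Rows that don't appear in the mapping are left alone, which is the behavior you want from a bulk keyed update.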
**`sql fiddle demo`** | Based on the solution of @Roman, you can set multiple values:
```
update users as u set -- postgres FTW
email = u2.email,
first_name = u2.first_name,
last_name = u2.last_name
from (values
(1, 'hollis@weimann.biz', 'Hollis', 'Connell'),
(2, 'robert@duncan.info', 'Robert', 'Duncan')
) as u2(id, email, first_name, last_name)
where u2.id = u.id;
``` | Update multiple rows in same query using PostgreSQL | [
"",
"sql",
"postgresql",
""
] |
I'm having difficulties managing queries like this:
```
SELECT c.id, p.id FROM claim c, procedure p LIMIT 1;
```
This query will return following set:
```
id | id
----+----
49 | 1
```
Is there any way to make it return `c.id` and `p.id` for column names? This one doesn't work:
```
SELECT c.id as c.id, p.id as c.id FROM claim c, procedure p LIMIT 1;
```
Or is this my final solution?
```
SELECT c.id as c_id, p.id as p_id FROM claim c, procedure p LIMIT 1;
``` | ```
SELECT c.id AS "c.id", p.id AS "p.id" FROM claim c, procedure p LIMIT 1;
```
or simply:
```
SELECT c.id "c.id", p.id "p.id" FROM claim c, procedure p LIMIT 1;
``` | With a proper **naming convention** you would rarely have to deal with this problem to begin with.
I suggest to use something like this instead:
```
CREATE TABLE claim (claim_id serial PRIMARY KEY, ...);
CREATE TABLE procedure (procedure_id serial PRIMARY KEY, ...);
```
`"id"` is a *very bad choice* for a column name. Unfortunately, some half-wit ORMs use this anti-pattern. Avoid it where you can.
Related:
[Join one to many and retrieve single result](https://stackoverflow.com/questions/12468623/join-one-to-many-and-retrieve-single-result/12469295#12469295)
[Using UNNEST with a JOIN](https://stackoverflow.com/questions/16054925/using-unnest-with-a-join/16070732#16070732) | SELECT statement returning unqualified column names | [
"",
"sql",
"postgresql",
"naming-conventions",
"alias",
""
] |
I have this SQL for retrieving all items
```
SELECT A.item_code,A.description,A.uom, A.open_stock, B.recd_total, C.issue_qty
FROM chem_master as A
LEFT JOIN ( SELECT item_code, sum(recd_qty) as recd_total
from chem_reciepts GROUP BY item_code) as B on A.item_code=B.item_code
LEFT JOIN(SELECT item_code, sum(iss_qty) as issue_qty
from chem_issue GROUP BY item_code) as C on A.item_code=C.item_code
```
the returned data is
```
Item_code Description unit op.stock recd_total issue_qty
25139 ALUMINIUM OXIDE ACTIVE NEUTRAL GM 0 0 0
54006 L-ASCORBIC ACID GM 0 0 0
AC001 ACETIC ACID GLACIAL LTR 0 14 2
AC002 ACETONE AR LTR 0 0 0
AC005 ACACIA POWDER GM 0 0 0
```
I want only those records where `open_stock + recd_total - issue_qty = 0`
How to modify the SQL? | You can add a `HAVING` clause with the condition you mentioned to your query (at the end of the query).
Something like below:
```
HAVING op.stock+recd_total-issue_qty=0
``` | Try this one
```
SELECT q.*, (q.stock+(q.recd_total-q.issue_qty)) as resultant FROM (
SELECT A.item_code,A.description,A.uom, A.open_stock, B.recd_total, C.issue_qty
FROM chem_master as A
LEFT JOIN ( SELECT item_code, sum(recd_qty) as recd_total
from chem_reciepts GROUP BY item_code) as B on A.item_code=B.item_code
LEFT JOIN(SELECT item_code, sum(iss_qty) as issue_qty
from chem_issue GROUP BY item_code) as C on A.item_code=C.item_code
) q HAVING q.resultant =0
``` | Adding and subtracting sum columns in mysql queries | [
"",
"mysql",
"sql",
""
] |
Is it possible to use a Linux environment variable inside a .sql file? I'm using the copy/select query to write to an output file, and I'll like to put that directory in a variable. So I want to do something like:
```
COPY (SELECT * FROM a)
TO $outputdir/a.csv
```
Outputdir would be set in my environment. Is this possible? | You can store the result of a shell command inside a `psql` variable like this:
```
\set afile `echo "$outputdir/a.csv"`
COPY (SELECT * FROM a) TO :'afile';
```
Another (better in my opinion) solution is to use only `psql` variables, see [this answer of mine about psql variables](https://stackoverflow.com/questions/18673208/how-to-use-variables-in-a-psql-script/18683163#18683163), which is similar to your example. A example for your case would be:
```
\set outputdir '/path/to/output'
\set afile :outputdir '/a.csv'
COPY (SELECT * FROM a) TO :'afile';
```
Note that, in the example, you need to set the variable inside the script file, but you can skip the first line if you set it when you call `psql`:
```
psql --set=outputdir="$outputdir" <conn parameters> -f /path/to/yourscript.sql
``` | This appears to work for your use case, provided you single quote the output file name as I mentioned. It will escape any double quotes as well contained within the SQL.
```
psql -c "$(eval echo '"' $(<envvars.sql | sed 's/"/\\"/g') '"')"
```
Of course, note that if your file contains any dollar quoted variables, the shell is going to try to interpret as a variable, and your script will break, so you will need to escape any dollar signs you need preserved literally with a backslash.
See also the second snippet in the accepted answer to [this question](https://superuser.com/questions/235738/how-do-i-substitute-environment-variables-when-i-ouput-a-file) for a possibly more robust answer. | Using an environment variable in a PSQL script | [
"",
"sql",
"postgresql",
"environment-variables",
""
] |
I think I have a somewhat simple problem, but due to my lack of experience in DB modelling and SQL I'm unable to get the right solution.
I can draw the basic concept, but I'm having a hard time implementing a solution.
Let's say i have some sets A,B and C.
A is inside B
B is inside C
A is the set of the 10 objects with greatest value inside B set
B is the set of all values greater than 100
C is the set of all values greater than 50

that's the easy part,
Now I need to get the 50 objects with the greatest value from C that are not inside A.
How would i translate that problem to the SQL language, or what commands, tools or design patterns should i use to solve this problem?
I'm not sure if I was clear enough,
again, sorry if the problem is too trivial for some, but I'm trying to learn more about SQL and design.
I'm looking for a general solution that is not bound to a framework or OS.
Thanks for any help | Let me assume that you have a table `t` with a `value` column, and that the labels "A", "B", and "C" are based on these.
```
select t.*
from t left outer join
(select value
from t
where value >= 100
order by value desc
limit 1
) A
on t.value = A.value
where t.value >= 50 and A.value is null
order by value desc
limit 50;
```
The use of `limit` is database specific. It might be `top` or `rownum` in the `where` clause or something else depending on the database.
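On toy data (hypothetical values, chosen so every branch of the filter fires) the first query behaves as described; here it is run against SQLite from Python, where `limit` works as written (the outer `ORDER BY` column is qualified as `t.value` to avoid ambiguity in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (value INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(40,), (60,), (80,), (120,), (150,)])

rows = conn.execute("""
    select t.*
    from t left outer join
         (select value
          from t
          where value >= 100
          order by value desc
          limit 1
         ) A
         on t.value = A.value
    where t.value >= 50 and A.value is null
    order by t.value desc
    limit 50
""").fetchall()
print(rows)  # [(120,), (80,), (60,)] -- 150 is in A and excluded; 40 fails the >= 50 filter
```

The anti-join (`A.value is null`) is what removes the members of A from the C result set.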
EDIT:
If the sets are large enough, you could just do:
```
select t.*
from t
where value >= 50 and value < 100
order by value desc
limit 50;
``` | Maybe this query would help:
```
select Col1,Max(Col2) from
(select col1, col2 from TableC
Except
Select col1, col2 from TableA)as A
Group By Col1
```
Here the inner query would only select the records which are in table C but not in table A, and then the outer query would take the max of those records. | How to query sets and subsets and get a set that doesn't contain the other set | [
"",
"sql",
"set",
""
] |
How can I sum sub-queries data in SQL Server 2008?
In this query I want to sum the `Score` column data produced by the `CASE` expression, where `point` falls between ranges of numbers; every time I try to sum this column it shows me an error.
Now I want to sum the `Score` column with a `GROUP BY` on ApplicantId. How can I sum the `Score` column data together?
But I can't get this query to work. Please help me; I'm a beginner.
```
SELECT
ApplicantId, point,
'Score' = CASE
WHEN point BETWEEN 4.00 AND 4.50 THEN ('3' )
WHEN point BETWEEN 3.5 AND 4.00 THEN ('2' )
END
FROM tblAcademicInfo
WHERE tblAcademicInfo.ApplicantId = tblAcademicInfo.ApplicantId
```
Output:
```
ApplicantId point Score
--------------------------------
xzc1 3.25 NULL
xzc1 4.36 3
xzc1 3.59 2
xzc1 4.00 3
``` | There are several problems
1. Your `WHERE` clause makes no sense
2. To sum scores per applicant use `SUM()` and `GROUP BY` clause
3. To be able to sum values you should return numeric values in `CASE` instead of string literals
In your query you have the `point` column in your select, but it's not part of the `GROUP BY` because you want to group by `applicantid`. Therefore you should either apply an aggregate function to this column (e.g. also `SUM()`) or remove it from the resultset.
Try
```
SELECT ApplicantId,
SUM(CASE WHEN point BETWEEN 4.0 AND 4.5 THEN 3
WHEN point BETWEEN 3.5 AND 4.0 THEN 2
END) Score
FROM tblAcademicInfo
GROUP BY ApplicantId
```
Output:
```
| APPLICANTID | SCORE |
|-------------|-------|
| xzc1 | 8 |
```
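For anyone without SQL Server at hand, the same conditional aggregation can be exercised against SQLite via Python (the question targets SQL Server 2008, so this is only a behavioral check of the `SUM(CASE ...)` pattern, not of T-SQL itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblAcademicInfo (ApplicantId TEXT, point REAL)")
conn.executemany("INSERT INTO tblAcademicInfo VALUES (?, ?)",
                 [("xzc1", 3.25), ("xzc1", 4.36), ("xzc1", 3.59), ("xzc1", 4.00)])

rows = conn.execute("""
    SELECT ApplicantId,
           SUM(CASE WHEN point BETWEEN 4.0 AND 4.5 THEN 3
                    WHEN point BETWEEN 3.5 AND 4.0 THEN 2
               END) AS Score
    FROM tblAcademicInfo
    GROUP BY ApplicantId
""").fetchall()
print(rows)  # [('xzc1', 8)] -- 3.25 matches neither range, contributes NULL, and SUM ignores it
```

Note 4.00 sits on the boundary of both ranges; `CASE` takes the first matching branch, so it scores 3.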
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/750af/3)** demo | Have you tried the following:
```
SELECT ApplicantId,point,
SUM(CASE
WHEN point BETWEEN 4.00 AND 4.50 THEN 3
WHEN point BETWEEN 3.5 AND 4.00 THEN 2
END
) [Score]
FROM tblAcademicInfo
WHERE tblAcademicInfo.ApplicantId
GROUP BY ApplicantId, point
``` | How can I sum subqueries data in SQL Server 2008? | [
"",
"sql",
"sql-server-2008-r2",
""
] |
First off, I already know that Windows Firewall is blocking my connection -- I just don't know what to do about it. When I turn WF all the way off, my remote connection works. I already have a Rule to allow incoming requests at port 1433, and it seems that's the right port, but I'm unsure. Now for more details.
I had SQL Server 2008 R2 on my remote dedicated server and had no problem connecting remotely via SSMS. But then I installed SQL Server 2012. It works fine, but I was forced to create a named instance for it during installation because the default instance is in use by SQL08.
My ASP.NET connection string running on that box works fine and connects to sql12 with no problem. Here is the obfuscated connection string:
Server=myserver\MSSQLSSERVER2012;Database=MyDB;User Id=Me; password=MyPaSS;
It looks like the named instance is using port 1433. I verified that SQL Browser is active and running. I have a windows firewall exception that allows sql server and port 1433, though that was there before and I'm wondering if that's only working for the now-disabled SQL08 (how do I ensure this is working for SQL12).
Say the IP to that server is 1.2.3.4
From SSMS I have tried connecting to the server thusly:
1. 1.2.3.4
2. 1.2.3.4\MSSQLSSERVER2012
3. 1.2.3.4\MSSQLSSERVER2012,1433
Nothing works. Always the error is:

I have read a number of articles on this topic and they all say:
1. enable SQL Browser (done)
2. open a port in Windows firewall (seems done ??)
So how can I discover what exactly I need to do in Windows Firewall to fix this? I cannot turn the whole thing off, this is a public web server. | Here is a list of the ports that SQL Server features use. Depending on the features/services you use, you may need additional TCP/UDP ports opened. Also, under Server Properties > Connections, confirm that "Allow remote connections to this server" is checked. The remote-connection instructions are for 2008; since you mentioned that you can connect remotely when the firewall is off, I would guess this is not needed, but it would not hurt to double-check.
[Configure the Windows Firewall to Allow SQL Server Access](http://technet.microsoft.com/en-us/library/cc646023.aspx)
[Enable Remote Connections](http://blogs.msdn.com/b/walzenbach/archive/2010/04/14/how-to-enable-remote-connections-in-sql-server-2008.aspx) | A short answer is:
1. SQL Server default instance uses TCP port 1433, unless messed about with (don't do it!). If you only have a default instance, that's the only port you need, I believe - SQL Server clients don't use the browser service to find a default instance.
2. SQL Browser comes into play if named instances are used. This listens on UDP port 1434; it responds to requests for port numbers for named instances.
3. By default, named instances use dynamic TCP ports - so a different port each time they start up. The browser sorts this out for you but it's a pain through a firewall.
4. There is no reason why you shouldn't set a fixed port in SQL Server config manager for a named instance (say, TCP Port 1434 or whatever) so that you only need to open one extra port for the named instance. | Can't connect to SQL 2012 remotely by IP and named instance | [
"",
"sql",
"sql-server",
"windows",
"sql-server-2012",
"windows-firewall",
""
] |
I am using following query to select certain values
```
select
sb.company
,b.id as Id
,bds.id as PId
,brp.bId
from supp b
left outer join sales bds on (bds.sId = b.id)
left outer join tperiod brp on (brp.id = bds.rId)
left outer join tbuyer sb on (sb.id = brp.bId)
where
b.ownerId = @oId;
```
In `where b.ownerId = @oId`, ownerId has multiple duplicate values in the column; I want to select distinct or unique values of it, so that the query is done on distinct ownerId only.
Please let me know how to achieve this.
For example : column `ownerId` has values
```
2231
2231
2231
3341
2231
```
So I want the query to use 2231 only once rather than for all occurrences of 2231 | Since you want to select the values based on Owner ID uniqueness, you have to use a GROUP BY clause and add "distinct" in the select clause, like
```
select distinct
sb.company
,b.id as Id
,bds.id as PId
,brp.bId
from supp b
left outer join sales bds on (bds.sId = b.id)
left outer join tperiod brp on (brp.id = bds.rId)
left outer join tbuyer sb on (sb.id = brp.bId)
where b.ownerId = @oId
group by b.ownerId
``` | You need to use GROUP BY on the ownerId, but then you will need to use some aggregate function on all the other fields in the select statement
If you want a unique ownerId and the other related fields are different, SQL needs to know what to display. Do you just want the MAX value of the related fields? You are asking for a unique value for the owner, so what do you want to display if the joined fields have different values for that ownerId?
This is something that could work, depending on what you want.
```
select
MAX(sb.company)
,MAX(b.id) as Id
,MAX(bds.id) as PId
,MAX(brp.bId)
from supp b
left outer join sales bds on (bds.sId = b.id)
left outer join tperiod brp on (brp.id = bds.rId)
left outer join tbuyer sb on (sb.id = brp.bId)
where
b.ownerId = @oId
GROUP BY b.ownerId
``` | Selecting distinct on where clause | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a table that contains words and an input field to search that table using a live search. Currently, I use the following query to search the table:
```
SELECT word FROM words WHERE word LIKE '%searchstring%' ORDER BY word ASC
```
Is there a way to order the results so that the ones where the string is found at the beginning of the word come first and those where the string appears later in the word come last?
An example: searching for '**hab**' currently returns
1. **a**lphabet
2. **h**abit
3. **r**ehab
but I'd like it this way:
1. **hab**it (first because 'hab' is the beginning)
2. alp**hab**et (second because 'hab' is in the middle of the word)
3. re**hab** (last because 'hab' is at the end of the word)
or at least this way:
1. **hab**it (first because 'hab' is the beginning)
2. re**hab** (second because 'hab' starts at the third letter)
3. alp**hab**et (last because 'hab' starts latest, at the fourth letter)
Would be great if anyone could help me out with this! | To do it the first way (starts word, in the middle of the word, ends word), try something like this:
```
SELECT word
FROM words
WHERE word LIKE '%searchstring%'
ORDER BY
CASE
WHEN word LIKE 'searchstring%' THEN 1
WHEN word LIKE '%searchstring' THEN 3
ELSE 2
END
```
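Here is the first variant exercised against SQLite through Python (`LIKE` behaves the same for this purpose), with the three words from the question and `hab` substituted for the search string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT)")
conn.executemany("INSERT INTO words VALUES (?)",
                 [("alphabet",), ("habit",), ("rehab",)])

rows = [r[0] for r in conn.execute("""
    SELECT word
    FROM words
    WHERE word LIKE '%hab%'
    ORDER BY
    CASE
        WHEN word LIKE 'hab%' THEN 1
        WHEN word LIKE '%hab' THEN 3
        ELSE 2
    END
""")]
print(rows)  # ['habit', 'alphabet', 'rehab']
```

Prefix matches sort first (1), suffix matches last (3), and everything else lands in the middle (2), exactly the first ordering the question asks for.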
To do it the second way (position of the matched string), use the [`LOCATE` function](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_locate):
```
SELECT word
FROM words
WHERE word LIKE '%searchstring%'
ORDER BY LOCATE('searchstring', word)
```
You may also want a tie-breaker in case, for example, more than one word starts with `hab`. To do that, I'd suggest:
```
SELECT word
FROM words
WHERE word LIKE '%searchstring%'
ORDER BY <whatever>, word
```
In the case of multiple words starting with `hab`, the words starting with `hab` will be grouped together and sorted alphabetically. | Try this way:
```
SELECT word
FROM words
WHERE word LIKE '%searchstring%'
ORDER BY CASE WHEN word = 'searchstring' THEN 0
WHEN word LIKE 'searchstring%' THEN 1
WHEN word LIKE '%searchstring%' THEN 2
WHEN word LIKE '%searchstring' THEN 3
ELSE 4
END, word ASC
``` | MySQL order by "best match" | [
"",
"mysql",
"sql",
"sql-order-by",
""
] |
Is it possible to calculate this sales report in one query? I'm not so hot with SQL and having trouble wrapping my head around joins considering the all-inclusive where clause.
TABLES (assume 1-N for foreign keys):
```
TABLE contacts
id
TABLE reservations
id
created_at
contact_id
TABLE reservation_trips
id
reservation_id -- never null
amount
quantity
trip_type_id -- never null
TABLE cost_adjustments
id
reservation_trip_id -- never null
cost_type_id -- never null
TABLE trip_types
id
title
TABLE cost_types
id
title
kind
```
EXAMPLE WHERE:
```
reservations.created_at >= '2013-01-01T09:00:00+00:00' AND
reservations.created_at < '2014-01-01T09:00:00+00:00' AND
reservation_trips.trip_type_id in (1, 2, 3, 4) AND
cost_adjustments.trip_type_id in (5, 6, 7, 8) AND
contact_id in (9, 10, 11, 12)
```
RESULT COLUMNS:
```
trip_types.title as trip_type_title,
COUNT(DISTINCT reservations.id) as bookings, -- distinct reservations
SUM(reservations.commissions) as commissions, -- distinct reservations
SUM(reservation_trips.id) as guests, -- distinct reservation_trips
SUM(reservation_trips.amount * reservation_trips.quantity) as gross_sales, -- distinct reservation_trips
SUM(cost_adjustments.amount * cost_adjustments.quantity) as adjustments, -- distinct cost_adjustments where cost_types.kind != 'tax'
SUM(gross_sales + adjustments - commissions) as net_sales
```
The table columns would look like this:
```
TRIP TYPE | BOOKINGS | GUESTS | GROSS SALES | ADJUSTMENTS | COMMISSIONS | NET SALES
```
Thanks! | The main problem is to avoid counting the intermediate levels multiple times. One way to do it is to divide by the number of items at each level.
Another approach, which can be more efficient, is to put each level in its own subquery and then join them all up.
Here's an untested stab at the first method. It might need some `IsNull(xxx, 0)` fixup.
```
Select
tt.title as trip_type_title,
count(distinct r.id) as bookings,
sum(r.commissions) / count(distinct r.id) as commissions,
count(distinct rt.id) as guests, -- this seems odd, but it's in the example
sum(rt.amount * rt.quantity) / count(distinct rt.id) as gross_sales,
sum(ca.amount * ca.quantity) as adjustments,
sum(rt.amount * rt.quantity) / count(distinct rt.id) + sum(r.commissions) / count(distinct r.id) + sum(ca.amount * ca.quantity) as net_sales
From
trip_types tt
inner Join reservation_trips rt on tt.id = rt.trip_type_id
inner Join reservations r on rt.reservation_id = r.id
left outer join cost_adjustments ca on ca.reservation_trip_id = rt.id
left outer join cost_types ct on ca.cost_type_id = ct.id
Where
r.created_at >= '2013-01-01T09:00:00+00:00' and
r.created_at < '2014-01-01T09:00:00+00:00' and
rt.trip_type_id in (1, 2, 3, 4) and
ca.trip_type_id in (5, 6, 7, 8) and
contact_id in (9, 10, 11, 12) and
ct.kind != 'tax'
Group By
tt.title
``` | Give this a whirl, I think it covers it
```
SELECT trip_types.title as trip_type_title,
COUNT(DISTINCT reservations.id) as bookings,
SUM(reservations.commissions) as commissions, distinct reservations,
SUM(reservation_trips.id) as guests, distinct reservation_trips,
SUM(reservation_trips.amount * reservation_trips.quantity) as gross_sales,
SUM(cost_adjustments.amount * cost_adjustments.quantity) as adjustments, distinct cost_adjustments where cost_types.kind != 'tax'
SUM(gross_sales + adjustments - commissions) as net_sales
FROM trip_types tt
JOIN reservation_trips on tt.id = reservation_trips.trip_type_id
JOIN reservations on reservation_trips.reservation_id = reservations.id
JOIN cost_adjustments ON cost_adjustments.reservation_trip_id = reservation_trips.id
WHERE
reservations.created_at >= '2013-01-01T09:00:00+00:00' AND
reservations.created_at < '2014-01-01T09:00:00+00:00' AND
reservation_trips.trip_type_id in (1, 2, 3, 4) AND
cost_adjustments.trip_type_id in (5, 6, 7, 8) AND
contact_id in (9, 10, 11, 12)
GROUP BY trip_types.title
``` | Trying to formulate exclusive sql query from all-inclusive where clause | [
"",
"sql",
"join",
"subquery",
""
] |
I'm looking for the best way to count how many rows there are in a large (15 million+ rows) table. The naive way of `select count(*) from table;` is apparently O(n) according to a few older posts I've found on the matter, e.g. <http://osdir.com/ml/sqlite-users/2010-07/msg00437.html>.
Is there a constant time mechanism to get this information, or failing that are there preferred alternatives to the straightforward `select count(*)` query? | SQLite has a special optimization for `COUNT(*)` without a `WHERE` clause, where it goes through the table's B-tree pages and counts entries without actually loading the records.
However, this still requires that all the table's data (except overflow pages for large records) is visited, so the runtime is still O(n).
SQLite does not store a separate record count in the database because that would make all changes slower. | As a workaround you could query [ROWID](http://www.sqlite.org/autoinc.html). If you don't delete from the table it'll be accurate; otherwise it will be high
```
select max(rowid) from table
``` | Is count(*) constant time in SQLite, and if not what are alternatives? | [
"",
"sql",
"sqlite",
"count",
""
] |
I am trying to update a table using the following query
```
update at_product A join
(
SELECT atbillfields.billeditemguid,count(*) AS numberOfPeopleBought
,sum(atbillfields.billeditemqty) AS soldquantity
FROM jtbillingtest.at_salesbill atsalesbill
JOIN jtbillingtest.at_billfields atbillfields
ON atsalesbill.billbatchguid=atbillfields.billbatchguid
WHERE atsalesbill.billcreationdate BETWEEN '2013-09-09' AND date_add('2013-09-09', INTERVAL 1 DAY)
GROUP BY atbillfields.billeditemguid) B ON B.billeditemguid = A.productguid
SET A.productQuantity = A.productQuantity - B.soldquantity
```
But I am getting the following exception:
> Error Code: 1175. You are using safe update mode and you tried to
> update a table without a WHERE that uses a KEY column To disable safe
> mode, toggle the option in Preferences -> SQL Queries and reconnect.
When I gave a `where` clause with the update like `A.productQuantity = 1`, it updated that particular row.
Can someone point why I am unable to execute the query and how to solve the issue? | Have a look at:
<http://justalittlebrain.wordpress.com/2010/09/15/you-are-using-safe-update-mode-and-you-tried-to-update-a-table-without-a-where-that-uses-a-key-column/>
If you want to update without a where key you must execute
```
SET SQL_SAFE_UPDATES=0;
```
right before your query.
Another option is to rewrite your query to include a key.
* you need to provide a where clause that includes an index for the update to be successful or
* You can disable this feature by doing `SET SQL_SAFE_UPDATES = 0;` | Update query failing with error : 1175 | [
"",
"mysql",
"sql",
""
] |
I have this schema
```
create table t(id int, d date)
insert into t (id, d) values (1, getdate()),
(2, NULL)
```
When doing
```
declare @mindate date
select @mindate = min(d) from t
```
I get the warning
> Null value is eliminated by an aggregate or other SET operation
Why and what can I do about it? | Mostly you should do nothing about it.
* It is possible to disable the warning by setting `ansi_warnings` off but this has [other effects, e.g. on how division by zero is handled](https://learn.microsoft.com/en-us/sql/t-sql/statements/set-ansi-warnings-transact-sql?view=sql-server-ver15) and can cause failures when your queries use features like indexed views, computed columns or XML methods.
* In some limited cases you can rewrite the aggregate to avoid it. e.g. `COUNT(nullable_column)` can be rewritten as `SUM(CASE WHEN nullable_column IS NULL THEN 0 ELSE 1 END)` but this isn't always possible to do straightforwardly without changing the semantics.
It's just an informational message [required in the SQL standard](https://dba.stackexchange.com/questions/10952/meaning-of-set-in-error-message-null-value-is-eliminated-by-an-aggregate-or-o/18454#18454). Apart from adding unwanted noise to the messages stream it has no ill effects (other than meaning that SQL Server can't just bypass reading `NULL` rows, which can have [an overhead](https://feedback.azure.com/forums/908035-sql-server/suggestions/40496701-scalargbaggtotop-transformation-with-max-nullable) but disabling the warning doesn't give better execution plans in this respect)
The reason for returning this message is that throughout most operations in SQL nulls propagate.
`SELECT NULL + 3 + 7` returns `NULL` (regarding `NULL` as an unknown quantity this makes sense as `? + 3 + 7` is also unknown)
but
```
SELECT SUM(N)
FROM (VALUES (NULL),
(3),
(7)) V(N)
```
Returns `10` and the warning that nulls were ignored.
However these are **exactly the semantics you want** for typical aggregation queries. Otherwise the presence of a single `NULL` would mean aggregations on that column over all rows would always end up yielding `NULL` which is not very useful.
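The contrast is easy to reproduce. A quick sketch using SQLite from Python (SQLite applies the same NULL-ignoring rule for aggregates; it simply doesn't emit the warning):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE v(n INTEGER)")
conn.executemany("INSERT INTO v VALUES (?)", [(None,), (3,), (7,)])

# Scalar arithmetic propagates NULL ...
scalar = conn.execute("SELECT NULL + 3 + 7").fetchone()[0]
# ... but aggregates silently skip NULL rows.
total = conn.execute("SELECT SUM(n) FROM v").fetchone()[0]
# COUNT(col) skips NULLs, COUNT(*) does not.
counted = conn.execute("SELECT COUNT(n), COUNT(*) FROM v").fetchone()
```

Here `scalar` comes back as `None`, while `SUM(n)` yields `10` from the two non-NULL rows.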
**Which is the heaviest cake below?** ([Image Source](https://www.flickr.com/photos/8514720@N04/6862029495/), [Creative Commons](https://creativecommons.org/licenses/by/2.0/) image altered (cropped and annotated) by me)
[](https://i.stack.imgur.com/z8fj6.png)
After the third cake was weighed the scales broke and so no information is available about the fourth but it was still possible to measure the circumference.
```
+--------+--------+---------------+
| CakeId | Weight | Circumference |
+--------+--------+---------------+
| 1 | 50 | 12.0 |
| 2 | 80 | 14.2 |
| 3 | 70 | 13.7 |
| 4 | NULL | 13.4 |
+--------+--------+---------------+
```
The query
```
SELECT MAX(Weight) AS MaxWeight,
AVG(Circumference) AS AvgCircumference
FROM Cakes
```
Returns
```
+-----------+------------------+
| MaxWeight | AvgCircumference |
+-----------+------------------+
| 80 | 13.325 |
+-----------+------------------+
```
even though technically it is not possible to say with certainty that 80 was the weight of the heaviest cake (as the unknown number may be larger) the results above are generally more useful than simply returning unknown.
```
+-----------+------------------+
| MaxWeight | AvgCircumference |
+-----------+------------------+
| ? | 13.325 |
+-----------+------------------+
```
So likely you want NULLs to be ignored, and the warning just alerts you to the fact that this is happening. | @juergen provided two good answers:
* Suppress the warning using `SET ANSI_WARNINGS OFF`
* Assuming you want to include NULL values and treat them as (say) the zero date, use `select @mindate = min(isnull(d, cast(0 as datetime))) from t`
However if you want to ignore rows where the d column is null and not concern yourself with the `ANSI_WARNINGS` option then you can do this by excluding all rows where d is set to null as so:
```
select @mindate = min(d) from t where (d IS NOT NULL)
``` | Getting warning: Null value is eliminated by an aggregate or other SET operation | [
"",
"sql",
"sql-server",
"settings",
""
] |
I'm using this SQL query to go through a table and search for a customer name and return that row's id and date column:
```
SELECT custName, date, id FROM booking
WHERE custName LIKE '%$s'
OR custName LIKE '$s%'
```
($s being a PHP variable)
If I'm looking for John Dorian, I could input $s as the first name John, or family name Dorian and my function will find him. My problem is that John Dorian may appear in more than one row, and if that's the case I would like the query to return only the most recent row (using the date column to figure this out).
IE if my table looks like this and $s = John:
```
(custName, date, id)
John Dorian - 2013/01/01 - 1
John Doe - 2013/01/02 - 2
John Dorian - 2013/01/10 - 3
```
I would like my query to return
```
John Doe - 2013/01/02 - 2
John Dorian - 2013/01/10 - 3
``` | How about:
```
SELECT custName, date, id
FROM booking b
INNER JOIN
(
SELECT max(date) MaxDate, custName
FROM booking
WHERE custName LIKE '%$s%'
GROUP BY custName
) bm
ON b.custName = bm.custName
AND b.date = bm.maxDate
WHERE custName LIKE '%$s%'
ORDER BY b.date DESC
``` | 1) You can search and get only one row per customer easily using `DISTINCT` or `GROUP BY`:
```
SELECT DISTINCT custName
FROM booking
WHERE custName LIKE '%$s' OR custName LIKE '$s%';
```
or
```
SELECT custName
FROM booking
WHERE custName LIKE '%$s' OR custName LIKE '$s%'
GROUP BY custName;
```
2) You can get the max date by coupling an aggregrate function (ie `MAX`) with the `GROUP BY`
```
SELECT custName, MAX(date) as date
FROM booking
WHERE custName LIKE '%$s' OR custName LIKE '$s%'
GROUP BY custName;
```
3) Finally, you can get the full table row by joining the results back to the original table:
```
SELECT b.custName, b.date, b.id
FROM booking AS b
INNER JOIN
(SELECT custName, MAX(date) AS maxDate
FROM booking
WHERE custName LIKE '%$s' OR custName LIKE '$s%'
GROUP BY custName
) AS gb
ON b.custName = gb.custName AND b.date = gb.maxDate;
```
or (probably slower):
```
SELECT b.custName, b.date, b.id
FROM booking AS b
INNER JOIN
(SELECT custName, MAX(date) AS maxDate
FROM booking
GROUP BY custName
) AS gb
ON b.custName = gb.custName AND b.date = gb.maxDate
WHERE b.custName LIKE '%$s' OR b.custName LIKE '$s%';
```
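As a sanity check of the join-back pattern in step 3, here is a runnable sketch against the sample data from the question (SQLite via Python, with the `$s` placeholder turned into a bound parameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE booking(custName TEXT, date TEXT, id INTEGER)")
conn.executemany("INSERT INTO booking VALUES (?, ?, ?)", [
    ("John Dorian", "2013/01/01", 1),
    ("John Doe",    "2013/01/02", 2),
    ("John Dorian", "2013/01/10", 3),
])

# Join each row back to its customer's max date, so only the
# most recent booking per matching customer survives.
rows = conn.execute("""
    SELECT b.custName, b.date, b.id
    FROM booking AS b
    INNER JOIN (SELECT custName, MAX(date) AS maxDate
                FROM booking
                WHERE custName LIKE ?
                GROUP BY custName) AS gb
      ON b.custName = gb.custName AND b.date = gb.maxDate
    ORDER BY b.id
""", ("John%",)).fetchall()
```

With the question's data this returns row 2 for John Doe and row 3 (the later date) for John Dorian.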
---
p.s.
The following may seem promising, and may even give the correct results sometimes, but is not guaranteed to work.
```
SELECT *
FROM (
SELECT custName, date, id
FROM booking
WHERE b.custName LIKE '%$s' OR b.custName LIKE '$s%'
ORDER BY date DESC
) AS t
GROUP BY custNAME;
```
Unfortunately you can't rely on the `GROUP BY` to maintain the supplied order.
**EDIT** See also
* [MySQL order by before group by](https://stackoverflow.com/questions/14770671/mysql-order-by-before-group-by)
* [How can I SELECT rows with MAX(Column value), DISTINCT by another column in SQL?](https://stackoverflow.com/questions/612231/how-can-i-select-rows-with-maxcolumn-value-distinct-by-another-column-in-sql)
* <http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/debunking-group-by-myths.html> | SQL SELECT names LIKE $name but not exact same name twice | [
"",
"sql",
"sql-like",
""
] |
I have a table that contains a list of performances. These performances are grouped by production number. What I am trying to do is create a stored procedure that will return the last performance for each production entered. I would like to be able to input the production ids as a list of ids. Below is my procedure so far. The difficulty is that I'm not sure how best to declare the `@prod_no` parameter to be used in the IN statement.
```
CREATE PROCEDURE IP_MAX_PERF_DATE
-- Add the parameters for the stored procedure here
@prod_no
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
SELECT [prod_season_no], MAX([perf_dt]) As max_dt FROM [T_PERF] WHERE [prod_season_no] IN (@prod)
GROUP By [prod_season_no];
END
GO
```
Any ideas? | Try `sp_executesql`:
```
CREATE PROCEDURE IP_MAX_PERF_DATE
@prod_no nvarchar(500)
AS
BEGIN
SET NOCOUNT ON;
declare @statement nvarchar(1000)
set @statement = N'SELECT [prod_season_no], MAX([perf_dt]) As max_dt FROM [T_PERF] WHERE [prod_season_no] IN (' + @prod_no + ') GROUP By [prod_season_no]'
EXEC sp_executesql
@stmt = @statement
END
GO
``` | generally there are three ways to pass in a list of Ids:
Option 1: use a comma-separated list and split it in the stored procedure. This requires you to have a split function, or to use dynamic SQL (not preferred most of the time due to performance problems - it is harder to see the execution plan and you lose the point of using a stored procedure to optimize your query)
Option 2: use xml, and again, you need to query the xml to find out the Ids
Option 3: use table valued parameter, this requires you to have a user defined table type
a detailed comparison could be found here:
<http://www.adathedev.co.uk/2010/02/sql-server-2008-table-valued-parameters.html> | declaring T-Sql parameter for comma delimited list of integers | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Here is what I have so far but the results are wrong.
```
SELECT c.CompanyName,
COUNT(o.OrderID) AS [Total Orders],
SUM(
(od.UnitPrice -
(od.UnitPrice * od.Discount))* Quantity) AS [Purchase Total]
FROM Customers AS c,
Orders AS o,
[Order Details] AS od
WHERE c.CustomerID = o.CustomerID
AND o.OrderID = od.OrderID
GROUP BY c.CompanyName
ORDER BY c.CompanyName;
```
The issue I am having is with the count, it is off by double or more. I believe that this is because the OrderID appears multiple times in the Order Details table. I think I need a nested SELECT statement but I am unsure how to do that.
Would I be removing the SUM() expression, Order Details, and the AND clause from the first query? Or am I way off?
With help I have gotten the COUNT field to work but now my SUM field is wrong. This is my most recent attempt and it produces the same value for every customer.
```
SELECT c.CompanyName,
COUNT(o.OrderID) AS [Total Orders],
(SELECT SUM(
(odIN.UnitPrice -
(odIN.UnitPrice * odIN.Discount)) * odIN.Quantity) AS [OrderTotal]
FROM [Order Details] AS odIN, Orders As oIN
WHERE odIN.OrderID = oIN.OrderID) AS [Purchase Total]
FROM
Customers AS c, Orders AS o
WHERE c.CustomerID = o.CustomerID
GROUP BY c.CompanyName
ORDER BY c.CompanyName;
```
I was unsuccessful at getting the query to fully work the way I wanted it to. Then I realized that maybe I was looking for the wrong data. So I switched the name for the COUNT field to Num Products Purchased.
I would still like to get the other way working, but I think that will require creating a temporary table or view that could be used to do one of the calculations and then call it from the query. That is something I'll have to figure out.
Thank you for the attempts to help. | Because Access doesn't have `COUNT(DISTINCT)`, you need to create an inner query.
What this does is compute the sum of each item in an order in the inner query, and then sums up all the order totals for the customer as the purchase total. An individual OrderID will not be counted twice, as `o` and `od` now have a one to one relationship.
There might be a syntax error in there somewhere, but the idea should work.
```
SELECT c.CompanyName,
COUNT(o.OrderID) AS [Total Orders],
SUM(od.OrderTotal) AS [Purchase Total]
FROM
Customers AS c,
Orders AS o,
(SELECT odIn.OrderID,
SUM(
(odIn.UnitPrice -
(odIn.UnitPrice * odIn.Discount)) * odIn.Quantity) AS [OrderTotal]
FROM [Order Details] AS odIn
GROUP BY odIn.OrderID) AS od
WHERE c.CustomerID = o.CustomerID
AND o.OrderID = od.OrderID
GROUP BY c.CompanyName
ORDER BY c.CompanyName;
``` | If the problem is that OrderID appears multiple times, try:
```
SELECT c.CompanyName, COUNT(DISTINCT o.OrderID) AS [Total Orders], SUM((od.UnitPrice - (od.UnitPrice * od.Discount)) * Quantity) AS [Purchase Total]
FROM Customers AS c, Orders AS o, [Order Details] AS od
WHERE c.CustomerID = o.CustomerID AND o.OrderID = od.OrderID
GROUP BY c.CompanyName
ORDER BY c.CompanyName;
```
The DISTINCT keyword ensures each order is counted only once. | Does this SQL statement require a nested SELECT query? | [
"",
"sql",
"ms-access",
""
] |
Relatively new to SQL querying. I can successfully get results from a simple query that shows a customer number and a total dollars invoiced sorted highest dollar amount to lowest. I want to also display the customer name. The customer name, `[Name]`, is in another table along with the customer number but the column name for customer number is different, ie. `Table 1` is `[Bill-to Customer No_]` and `Table 2` is just `[No_]`. How would I get the information from `Table 2` to display in the same row with the customer number? | ```
SELECT [Bill-to Customer No_], [Invoice Amount] AS amt, [Name]
FROM Table1 t1 JOIN Table2 t2
ON t1.[Bill-to Customer No_] = t2.[No_]
ORDER BY amt DESC;
```
I haven't grasped your column names yet, but hope you get the idea.
***EDIT** : (as per your new query)*
```
SELECT [Sell-to Customer No_], [Name], SUM([Amount]) as "Total Dollars Spent"
FROM [Table 1 - LIVE$Sales Invoice Line] a JOIN [Table 2 - LIVE$Customer] b
ON a.[Sell-to Customer No_] = b.[No_]
WHERE [Source Code] = 'RENTAL' and [Sell-to Customer No_] != 'GOLF'
GROUP BY [Sell-to Customer No_], [Name]
ORDER BY SUM([Amount]) DESC;
```
You need to add `[Name]` to the `GROUP BY` clause as well. Remember you cannot `SELECT` a column that's not a part of `GROUP BY` unless it's being processed by a group function like `[Amount]` is being processed by `SUM()`. | ```
SELECT
[bill-to Customer No_]
,customer_name
FROM table1 AS a
INNER JOIN table2 AS b on a.[bill-to Customer No_]=b.No_
``` | How to join two sql tables when the common column has different names but information is the same in both tables | [
"",
"sql",
""
] |
I'm trying to create a sub-table from another table of all the last name fields sorted A-Z which have a phone number field that isn't null. I could do this pretty easy with SQL, but I have no clue how to go about running a SQL query within Excel. I'm tempted to import the data into postgresql and just query it there, but that seems a little excessive.
For what I'm trying to do, the SQL query `SELECT lastname, firstname, phonenumber WHERE phonenumber IS NOT NULL ORDER BY lastname` would do the trick. It seems too simple for it to be something that Excel can't do natively. How can I run a SQL query like this from within Excel? | There are many fine ways to get this done, which others have already suggestioned. Following along the "get Excel data via SQL track", here are some pointers.
1. Excel has the **"Data Connection Wizard"** which allows you to import or link from another data source or even within the very same Excel file.
2. As part of Microsoft Office (and OS's) are two providers of interest: the old "Microsoft.Jet.OLEDB", and the latest "Microsoft.ACE.OLEDB". Look for them when setting up a connection (such as with the Data Connection Wizard).
3. Once connected to an Excel workbook, a worksheet or range is the equivalent of a table or view. The table name of a worksheet is the name of the worksheet with a dollar sign ("$") appended to it, and surrounded with square brackets ("[" and "]"); of a range, it is simply the name of the range. To specify an unnamed range of cells as your recordsource, append standard Excel row/column notation to the end of the sheet name in the square brackets.
4. The native SQL will (more or less be) the SQL of Microsoft Access. (In the past, it was called JET SQL; however Access SQL has evolved, and I believe JET is deprecated old tech.)
5. Example, reading a worksheet: `SELECT * FROM [Sheet1$]`
6. Example, reading a range: `SELECT * FROM MyRange`
7. Example, reading an unnamed range of cells: `SELECT * FROM [Sheet1$A1:B10]`
8. There are many many many books and web sites available to help you work through the particulars.
## Further notes
By default, it is assumed that the first row of your Excel data source contains column headings that can be used as field names. If this is not the case, you must turn this setting off, or your first row of data "disappears" to be used as field names. This is done by adding the optional `HDR=` setting to the Extended Properties of the connection string. The default, which does not need to be specified, is `HDR=Yes`. If you do not have column headings, you need to specify `HDR=No`; the provider names your fields F1, F2, etc.
A caution about specifying worksheets: The provider assumes that your table of data begins with the upper-most, left-most, non-blank cell on the specified worksheet. In other words, your table of data can begin in Row 3, Column C without a problem. However, you cannot, for example, type a worksheet title above and to the left of the data in cell A1.
A caution about specifying ranges: When you specify a worksheet as your recordsource, the provider adds new records below existing records in the worksheet as space allows. When you specify a range (named or unnamed), Jet also adds new records below the existing records in the range as space allows. However, if you requery on the original range, the resulting recordset does not include the newly added records outside the range.
Data types (worth trying) for `CREATE TABLE`: `Short, Long, Single, Double, Currency, DateTime, Bit, Byte, GUID, BigBinary, LongBinary, VarBinary, LongText, VarChar, Decimal`.
Connecting to "old tech" Excel (files with the xls extention): `Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\MyFolder\MyWorkbook.xls;Extended Properties=Excel 8.0;`. Use the Excel 5.0 source database type for Microsoft Excel 5.0 and 7.0 (95) workbooks and use the Excel 8.0 source database type for Microsoft Excel 8.0 (97), 9.0 (2000) and 10.0 (2002) workbooks.
Connecting to "latest" Excel (files with the xlsx file extension): `Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Excel2007file.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;"`
Treating data as text: IMEX setting treats all data as text. `Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Excel2007file.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1";`
(More details at <http://www.connectionstrings.com/excel>)
More information at <http://msdn.microsoft.com/en-US/library/ms141683(v=sql.90).aspx>, and at <http://support.microsoft.com/kb/316934>
Connecting to Excel via ADODB via VBA detailed at <http://support.microsoft.com/kb/257819>
Microsoft JET 4 details at <http://support.microsoft.com/kb/275561> | Might I suggest giving [QueryStorm](https://www.querystorm.com) a try - it's a plugin for Excel that makes it quite convenient to use SQL in Excel.
[](https://i.stack.imgur.com/w2rmT.gif)
In the SQL scripts Excel tables are visible as if they were regular database tables.
All four SQL data operations are supported: **select/update/insert/delete**.
The engine that executes the queries is **SQLite** so you can use joins, common table expressions, window functions, etc... And you get the fancy stuff like code completion, auto-formatting, symbol tooltips etc...
It has a completely free community edition for use by individuals and small companies. If you're in a company that has more than 5 employees or more than $1M in yearly revenue, you'll need a paid license but you can use a free trial key for evaluation purposes.
[This blog post](https://querystorm.com/how-to-use-sql-in-excel/) describes the SQL functionality of the plugin in much more detail.
Disclaimer: I'm the author. | How to run a SQL query on an Excel table? | [
"",
"sql",
"excel",
"filtering",
""
] |
I've an application that checks recorded waste types. Part of the system allows the user to forecast how much waste they will recycle, and in a report it will list the forecasted waste type with how much was forecast and the actual waste that's been recorded.
The way it works is that there is one table called `forecastwaste` and a table called `wastestream`. `wastestream` holds all of the data about waste types that actually have been recycled, and `forecastwaste` holds the waste types that have been forecast. The `wastetypes` table holds the name of the available wastetypes that the user can choose from.
I have this SQL Statement (`$contractid` contains the id of the contract):
```
SELECT ws.wastetype, SUM(ws.recordedweight) totalWeight, SUM(ws.percent) totalPercent, wt.id, wt.category, wt.defrecycling, fw.tonnes
FROM wastestream ws, wastetypes wt, forecastwaste fw
WHERE ws.contractid = ".$contractid."
AND fw.contractid = ".$contractid."
AND ws.wastetype = wt.id
AND fw.wastetype = wt.id
GROUP BY ws.wastetype
```
However, the problem I have is that if there is a waste type in the `forecastwaste` table that isn't in the `wastestream` table, the query won't display anything. I want to get it so that if no results can be found in the `wastestream` table, the query will still display the `forecastwaste` record and return `0` if it can't find anything. The current query doesn't allow this.
How can I make the query work so that it does what I need it to?
**EDIT**
Thanks to Bandydan I've rewritten the query so it now looks like this:
```
SELECT ws.wastetype, SUM(ws.recordedweight) totalWeight, SUM(ws.percent) totalPercent, wt.id, wt.category, wt.defrecycling, fw.tonnes
FROM c1skips.wastestream ws
LEFT JOIN c1skips.wastetypes wt ON (wt.id = ws.wastetype)
INNER JOIN c1skips.forecastwaste fw ON (wt.id = fw.wastetype)
WHERE fw.contractid = '602'
AND ws.contractid = '602'
GROUP BY ws.wastetype;
```
I will explain what I'm trying to do a bit better too.
I have a table called `forecastwaste` and in that table I have the following data:
```
|---------------------------------|
| wastetype | tonnes | contractid |
|-----------|--------|------------|
| 1 | 10 | 602 |
| 2 | 20 | 602 |
| 3 | 50 | 602 |
|-----------|--------|------------|
```
This table is then used to look at the `wastestream` table so see how much of the material is recycled. The `wastestream` table looks like this:
```
|-----------------------------------------|
| wastetype | recordedweight | contractid |
|-----------|----------------|------------|
| 1 | 2 | 602 |
| 1 | 4 | 602 |
| 2 | 20 | 602 |
|-----------|----------------|------------|
```
Both tables reference the `wastetype` table, which identifies the number with a waste type.
With the current query, it will only return the results if they show up in the `wastestream` table. However, I want it so that even if there isn't a record in the `wastestream` table it will return 0.
**EDIT 2**
I've added `COALESCE` to my query like this:
```
SELECT ws.wastetype, SUM(ws.recordedweight) totalWeight, SUM(ws.percent) totalPercent, wt.id, wt.category, wt.defrecycling,
COALESCE(ws.recordedweight, 0) tonnes
FROM c1skips.wastestream ws
LEFT JOIN c1skips.wastetypes wt ON (wt.id = ws.wastetype)
INNER JOIN c1skips.forecastwaste fw ON (wt.id = fw.wastetype)
WHERE fw.contractid = '602'
AND ws.contractid = '602'
GROUP BY ws.wastetype;
```
But the results are still the same. It's going to be either the `SUM(ws.recordedweight) totalWeight` or the `SUM(ws.percent) totalPercent` that returns the NULL values in the `wastestream` table, but there will be a value in the `forecastwaste` table trying to reference them, and `COALESCE` won't work with that.
```
SELECT ws.wastetype, SUM(ws.recordedweight) totalWeight, SUM(ws.percent) totalPercent, wt.id, wt.category, wt.defrecycling, fw.tonnes
FROM c1skips.forecastwaste fw
JOIN c1skips.wastetypes wt ON (wt.id = fw.wastetype)
LEFT JOIN c1skips.wastestream ws
ON (ws.contractid = fw.contractid) AND (ws.wastetype = fw.wastetype)
WHERE fw.contractid = '602'
GROUP BY ws.wastetype;
```
I replaced **fw.contractid = '602'** with **(ws.contractid = fw.contractid)** as the join condition. Now there's no need to write the contractid in two places, and you can run the query without the WHERE clause to return all rows.
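Applying that shape to the sample data in the question gives a runnable sketch (SQLite via Python). Note that to get `0` for forecast-only waste types, `COALESCE` has to wrap the aggregated `SUM`, not the raw column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE forecastwaste(wastetype INTEGER, tonnes REAL, contractid INTEGER);
    CREATE TABLE wastestream(wastetype INTEGER, recordedweight REAL, contractid INTEGER);
    INSERT INTO forecastwaste VALUES (1, 10, 602), (2, 20, 602), (3, 50, 602);
    INSERT INTO wastestream  VALUES (1, 2, 602), (1, 4, 602), (2, 20, 602);
""")

rows = conn.execute("""
    SELECT fw.wastetype,
           fw.tonnes,
           COALESCE(SUM(ws.recordedweight), 0) AS totalWeight
    FROM forecastwaste fw
    LEFT JOIN wastestream ws
      ON ws.contractid = fw.contractid AND ws.wastetype = fw.wastetype
    WHERE fw.contractid = 602
    GROUP BY fw.wastetype, fw.tonnes
    ORDER BY fw.wastetype
""").fetchall()
```

Wastetype 3 has no `wastestream` rows, but it still appears with a total weight of 0.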
Edit: Changed the outer table from wastestream to forecastwaste | I think you need to rewrite that one using joins, then you will be able to use LEFT JOIN. It works the way you need.
The thing I can do in some minutes looks like that:
```
SELECT ws.wastetype, SUM(ws.recordedweight) totalWeight, SUM(ws.percent) totalPercent, wt.id, wt.category, wt.defrecycling, fw.tonnes
FROM wastetypes wt
LEFT JOIN wastestream ws ON (wt.id = ws.wastetype)
INNER JOIN forecastwaste fw ON (wt.id = fw.wastetype)
WHERE wt.contractid = ".$contractid."
GROUP BY ws.wastetype
```
Here I assume that you need information from wt, information about the same type that **MAYBE** exists in ws, and information about that type from fw (that definitely exists). I'm not sure that this is right, cause I don't have tables to check, but I wanted to show you the whole idea.
Take a look at the answers to [that question](https://stackoverflow.com/questions/5706437/whats-the-difference-between-inner-join-left-join-right-join-and-full-join) and you will find a way to solve your problem. | Retrieve all records from one table even if a related record can't be found in a related table | [
"",
"mysql",
"sql",
"cakephp-2.0",
""
] |
I've created two datasets.
Dataset1 (from a stored procedure):
```
CREATE PROCEDURE [dbo].[usp_GetPerson](
@Category varchar(20))
AS
BEGIN
SELECT FirstName, LastName, Category
FROM tblPerson
WHERE (Category IN (@Category))
END
```
Dataset2:
```
SELECT DISTINCT Category
FROM tblPerson
```
In SSRS, I've edited the parameter to allow multiple values and to pull available values from Dataset2.
I've tried filtering based on Dataset1 alone, but receive all the inputs which are repetitive (which is why I opted using dataset 2).
When I use the stored procedure, I can't seem to select multiple values. I'm only able to select single values, otherwise the report goes blank.
So I recreated Dataset1, but did not use the stored procedure. Instead I just wrote the SQL statement in the text editor, and I'm able to select the multiple values just fine.
Does anyone know why this happens and could help me fix this?
Note: I'm using stored procedures for when my SQL statements become more complex where I will be joining multiple databases. I tried doing this in SSRS, but it was much faster using stored procedures.
Thank you! | The issue when running from an SP using a query like this:
```
SELECT FirstName, LastName, Category
FROM tblPerson
WHERE (Category IN (@Category))
```
where `@Category` is something like `'Cat1,Cat2,Cat3'`
is that the `IN` clause is treating `@Category` as a single string, i.e. one single value, not a set of multiple values, so it will most likely never match anything.
This is why when you have only one value it works - something like `'Cat1'` will match one or more rows correctly.
When run as an embedded query in the report itself, SSRS will essentially treat this as a piece of dynamic T-SQL, which means the string `@Category` gets added into the main query as written and works correctly.
So there are a couple of options for the SP.
You can run dynamic T-SQL in the SP, something like:
```
DECLARE @Query NVARCHAR(max) = N'SELECT FirstName, LastName, Category
FROM tblPerson
WHERE (Category IN (' + @Category + N'))'
EXEC (@Query)
```
Dynamic T-SQL is seldom ideal, so the other option is to turn `@Category` into a set for use in the query, typically using a function to split the string and return a table, which will look something like:
```
SELECT FirstName, LastName, Category
FROM tblPerson
WHERE (Category IN (SELECT values from dbo.SplitString(@Category)))
```
There are any number of ways to get a set returned from a delimited string.
See the SO question for [many split options](https://stackoverflow.com/questions/697519/split-function-equivalent-in-tsql).
Or the definitive article from [Erland Sommarskog](http://www.sommarskog.se/arrays-in-sql.html).
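To make the splitting idea concrete, here is one way it can work, sketched as a recursive CTE in SQLite via Python (SQL Server syntax differs, and SQL Server 2016+ ships a built-in `STRING_SPLIT` function; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblPerson(FirstName TEXT, LastName TEXT, Category TEXT)")
conn.executemany("INSERT INTO tblPerson VALUES (?, ?, ?)", [
    ("Ann", "Lee", "Cat1"),
    ("Bob", "Kim", "Cat2"),
    ("Cid", "Roe", "Cat9"),
])

category_param = "Cat1,Cat2,Cat3"  # what a multi-value parameter arrives as

rows = conn.execute("""
    WITH RECURSIVE split(item, rest) AS (
        SELECT '', ? || ','
        UNION ALL
        SELECT substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT FirstName, LastName, Category
    FROM tblPerson
    WHERE Category IN (SELECT item FROM split WHERE item <> '')
    ORDER BY FirstName
""", (category_param,)).fetchall()
```

The CTE peels one comma-delimited item off per recursion step, turning the single string into a proper set for the `IN` clause.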
Choose the one you like the best and go from there. | You can only use IN with a list defined statically within your query. In this case, use `=`:
```
CREATE PROCEDURE [dbo].[usp_GetPerson](
@Category varchar(20))
AS
BEGIN
SELECT FirstName, LastName, Category
FROM tblPerson
WHERE (Category = @Category)
END
``` | Using Multiple Parameters in SSRS from Stored Procedure Returns Blank | [
"",
"sql",
"stored-procedures",
"reporting-services",
"parameters",
"sql-server-2012",
""
] |
It may seem quite basic but I've not come across this yet and can't figure it out. Say I have two tables of users and in each table is a field `email_address`. How can I select all email addresses from both tables as one list? I'll most likely need the DISTINCT keyword since there may be occurrences of the same email address in both tables.
I bet it's deadly obvious and I just can't see the logic. | ```
select email_address from table1
union
select email_address from table2
```
`UNION` will merge the identical email addresses.
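A quick runnable illustration of the difference, using SQLite via Python and made-up addresses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1(email_address TEXT);
    CREATE TABLE table2(email_address TEXT);
    INSERT INTO table1 VALUES ('a@x.com'), ('b@x.com');
    INSERT INTO table2 VALUES ('b@x.com'), ('c@x.com');
""")

# UNION removes duplicates across the two result sets ...
union = conn.execute("""
    SELECT email_address FROM table1
    UNION
    SELECT email_address FROM table2
    ORDER BY email_address
""").fetchall()

# ... while UNION ALL keeps every row, duplicates included.
union_all = conn.execute("""
    SELECT email_address FROM table1
    UNION ALL
    SELECT email_address FROM table2
""").fetchall()
```

Here `union` contains three distinct addresses while `union_all` contains four rows, with `b@x.com` appearing twice.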
If you wanna keep the duplicates, use `UNION ALL` | Just `UNION` the two tables:
```
SELECT `email_address`
FROM `a`
UNION
SELECT `email_address`
FROM `b`
``` | Selecting 'field_name' from multiple tables in SQL? | [
"",
"mysql",
"sql",
""
] |
First of all, I use:
Windows 7 x64
Oracle Enterprise 11g R2 x32
I have some basic experience with SQL Server and MySQL (more with MySQL). Today I started working with Oracle, and I've been bumping a lot on the way.
The problem I'm having now is that, for some reason, I can't connect to any sys user (sysdba or sysoper). Probably I made a mistake writing the password when creating the database, but I'm not sure, so I'm trying to change it.
I've been trying, as some searches had led me to try, to use
```
sqlplus /nolog
connect / as sysdba
alter user sys identified by new_pass;
```
And it seems to work. I mean, it says "User altered". However, I still can't log in sqlplus with the new password. It's a little weird, since I can connect to "sysman" with my original password, but it doesn't have the privileges I need.
I'd appreciate any help I could get, and I thank you beforehand. | After you log in with 'sqlplus / as sysdba', try changing the password for system by issuing
```
ALTER USER system IDENTIFIED BY abcdef;
```
Then, to make sure, without quitting sqlplus:
```
CONN system/abcdef
```
That should work. After that, you can log on with system/abcdef when you start sqlplus. | The ORA-01994 error happens when you forget to use the orapwd command. It is critical to note that the name of the password file must be orapw<sid> (the sid part is case sensitive), and you must supply the full path name when using the orapwd command. | "Alter user sys identified by" not working | [
"",
"sql",
"oracle",
""
] |
I'll try to explain the type of the query that I want:
Assume I have a table like this:
```
| ID | someID | Number |
|----|--------|--------|
| 1 | 1 | 10 |
| 2 | 1 | 11 |
| 3 | 1 | 14 |
| 4 | 2 | 10 |
| 5 | 2 | 13 |
```
Now, I want to find the someID that have a specific numbers (For example query for numbers 10, 11, 14 will return someID 1 and query for numbers 10, 13 will return 2). But, if someID contains all the query numbers but also more numbers, it will not return by the query. (For example query for 10, 11 will return nothing).
Is it possible? | ```
SELECT t1.someId
FROM yourTable t1
WHERE t1.number IN (10,14,11)
GROUP BY t1.someID
HAVING COUNT(DISTINCT t1.ID) = (SELECT COUNT(DISTINCT t2.ID) FROM yourTable t2 WHERE t1.someID=t2.someID)
```
[**Example Fiddle**](http://sqlfiddle.com/#!2/0269c/5) | ```
select someID
from yourtable
where number in (10,11,14)
and not exists (select * from yourtable t2 where number not in(10,11,14)
and t2.someid=yourtable.someid)
group by someID
having count(distinct ID) = 3
```
Where `3` is the number of items you are querying for | SELECT query with cross rows WHERE statement | [
"",
"sql",
"sql-server",
""
] |
I have 1 table called `accounts` and another one called `level_points`
Basically the idea is to determine the minimum number of points you need to be level X.
Account Structure
`id`, `name`.. etc. `points`
Level\_Points Structure
`level`, `points`
Values in here such as
```
(1, 5)
(2, 10)
(3, 15)
```
I'm able to calculate the level using this query
```
SELECT `level`
FROM `level_points`
WHERE `points` <= (SELECT `points`
FROM `accounts`
WHERE `id` = 'x')
ORDER BY `level`
DESC LIMIT 1
```
My problem is that now I'm trying to join the tables to get something like this (for every user in the accounts table)
Result:
```
For user 1: `id`, `name` etc... `points`, `level`
For user 2: `id`, `name` etc... `points`, `level`
For user 3: `id`, `name` etc... `points`, `level`
```
I'm not exactly sure how to do this using joins, and I can't seem to find an answer here that helps. | Try something like this:
```
SELECT ac.id,
ac.name,
ac.points,
le.level
FROM account ac,
level le
WHERE ac.id = 'x'
AND le.level= (
SELECT level
FROM level_points
WHERE points <= (
SELECT points
FROM accounts
WHERE id = 'x'
)
ORDER BY level DESC LIMIT 1
);
```
To get it for all players you can do:
```
SELECT ac.id,
ac.name,
ac.points,
le.level
FROM accounts ac,
level_points le
WHERE le.level= (
SELECT level
FROM level_points
WHERE points <= (
SELECT points
FROM accounts ac2
WHERE ac2.id = ac.id
)
ORDER BY level DESC LIMIT 1
);
```
Check [this fiddle](http://sqlfiddle.com/#!2/bcf8d2/1) to see it working. | Try this:
```
select column1 from table1
inner join table2 on table1.column = table2.column
where table2.column=0
```
Have a look at this
<http://www.w3schools.com/sql/sql_join.asp> | SQL: Join value from another table | [
"",
"mysql",
"sql",
""
] |
I've been banging my head over this problem for the whole day. I'm posting here in the hope that someone will be able to help me.
I'm trying to fix a database that has missing values, by adding them.
Let's say we have the following tables:
```
Color with columns (itemId, colorId)
Process with columns (itemId, colorId, iteration, rating)
```
Process contains entries only if the rating is bigger than 0. I want to fix that by adding the missing entries with rating == 0 for each iteration.
For instance, the table process contains:
```
item id | color id | iteration | rating
1 | 1 | 1 | 1
1 | 1 | 2 | 2
1 | 1 | 3 | 2
1 | 2 | 3 | 1
1 | 1 | 4 | 5
1 | 2 | 4 | 5
```
the missing entries are:
```
item id | color id | iteration | rating
1 | 2 | 1 | 0
1 | 2 | 2 | 0
```
I'm a beginner in SQL and not very familiar with joins. I was thinking of using an insert-select combined with a join. Maybe a cross join?
I would greatly appreciate your help! Thanks a lot!
PS: I'm using mysql, but I guess this wouldn't affect the answer.
EDIT: Thanks for your proposed solutions. However, none of them works, because they assume that the column iteration has no missing values; in my case it does have gaps. Please consider "iteration" as another id, not necessarily incremental...
So what I really want is something that, whenever there is one entry in table process for a given itemId and iteration, then there is also one entry per colorId (matching itemId in table Color). The rating of this missing entry should be 0. | ```
SELECT
P.itemId
,C.colorId
,I.iteration
,0 AS rating
FROM
(SELECT DISTINCT itemId FROM Process) P
CROSS JOIN
(SELECT DISTINCT colorId FROM Color) C
CROSS JOIN
(SELECT DISTINCT iteration FROM Process) I
LEFT JOIN
Process E
ON P.itemId = E.itemId
AND C.colorId = E.colorId
AND I.iteration = E.iteration
WHERE E.rating IS NULL
``` | `Updated` based on no missing iterations
```
INSERT INTO process (itemId, colorId, iteration, rating)
SELECT p1.itemId, p1.colorId, i.iteration, 0
FROM (SELECT itemId, colorId, MAX(iteration) as max_iteration
FROM process
GROUP BY itemId, colorId
) p1
INNER
JOIN (SELECT DISTINCT iteration FROM process) i
ON i.iteration < p1.max_iteration
LEFT OUTER
JOIN process p2
ON p2.itemId = p1.itemId
AND p2.colorId = p1.colorId
AND p2.iteration = i.iteration
WHERE p2.iteration IS NULL;
``` | SQL insert if missing value | [
"",
"mysql",
"sql",
""
] |
Alright, this is a relatively simple question that I just need confirmation on. I am pretty sure I know the answer, but I need expert validation because I have OCD.
Anyways, I am wondering what the difference is between just a simple Inner Join Statement
and simply using the where clause to set the primary keys of the tables I am matching as = to each other.
Example: I wrote this
```
select a.CONTACT_ID, a.TRADE_REP as Rep_Trading_ID, p.CRD_NUMBER, c.FIRST_NAME, c.LAST_NAME
from dbo.REP_PROFILE p, dbo.REP_ALIAS a, dbo.CONTACT c
where
c.CONTACT_ID = p.CONTACT_ID
and p.CONTACT_ID = a.TRADE_REP
and a.PRIMARY_YN = 'y'
```
In my mind, this seems to perform the exact same query as if I were to have created an Inner Join between tables c and p and another inner join between tables p and a on Contact ID (Contact ID is my primary key for most everything).
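For reference, I believe the equivalent explicit-join form of my query would be something like this (just a sketch of what I have in mind):

```
SELECT a.CONTACT_ID, a.TRADE_REP AS Rep_Trading_ID, p.CRD_NUMBER, c.FIRST_NAME, c.LAST_NAME
FROM dbo.CONTACT c
INNER JOIN dbo.REP_PROFILE p ON p.CONTACT_ID = c.CONTACT_ID
INNER JOIN dbo.REP_ALIAS a ON a.TRADE_REP = p.CONTACT_ID
WHERE a.PRIMARY_YN = 'y';
```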
Am I correct? Am I partially correct? Am I totally, hopelessly ignorant? Any validation will do. | It is the same!
look at these two tables:
```
CREATE TABLE table1 (
id INT,
name VARCHAR(20)
);
CREATE TABLE table2 (
id INT,
name VARCHAR(20)
);
```
The execution plan for the query using the inner join:
```
-- with inner join
EXPLAIN PLAN FOR
SELECT * FROM table1 t1
INNER JOIN table2 t2 ON t1.id = t2.id;
SELECT *
FROM TABLE (DBMS_XPLAN.DISPLAY);
-- 0 select statement
-- 1 hash join (access("T1"."ID"="T2"."ID"))
-- 2 table access full table1
-- 3 table access full table2
```
And the execution plan for the query using a WHERE clause.
```
-- with where clause
EXPLAIN PLAN FOR
SELECT * FROM table1 t1, table2 t2
WHERE t1.id = t2.id;
SELECT *
FROM TABLE (DBMS_XPLAN.DISPLAY);
-- 0 select statement
-- 1 hash join (access("T1"."ID"="T2"."ID"))
-- 2 table access full table1
-- 3 table access full table2
```
In my opinion is more readable to use JOIN. | Yes, using the format you provided works fine for INNER JOINS.
But what to do when you get to LEFT/RIGHT JOINS? Then you cannot use the joining conditions in the where clause like that.
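For example, an outer join has to be spelled out with explicit JOIN syntax; here is a sketch using the tables from the question (the join condition lives in the ON clause, not the WHERE clause):

```
SELECT c.FIRST_NAME, c.LAST_NAME, p.CRD_NUMBER
FROM dbo.CONTACT c
LEFT JOIN dbo.REP_PROFILE p ON p.CONTACT_ID = c.CONTACT_ID;
```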
Now if I am not mistaken, the old school syntax would be something like
```
=* for LEFT JOIN
```
and
```
*= for RIGHT JOIN
```
You might find the following article a nice read (look more specifically at the various JOIN types).
[Join (SQL)](http://en.wikipedia.org/wiki/Join_%28SQL%29) | SQL Primary Key/Table Joining | [
"",
"sql",
"sql-server",
"join",
"inner-join",
""
] |
I'm using PostgreSQL 9.2.
I have a table containing the times when some devices went out of service.
```
+----------+----------+---------------------+
| event_id | device | time |
+----------+----------+---------------------+
| 1 | Switch4 | 2013-09-01 00:01:00 |
| 2 | Switch1 | 2013-09-01 00:02:30 |
| 3 | Switch10 | 2013-09-01 00:02:40 |
| 4 | Switch51 | 2013-09-01 03:05:00 |
| 5 | Switch49 | 2013-09-02 13:00:00 |
| 6 | Switch28 | 2013-09-02 13:01:00 |
| 7 | Switch9 | 2013-09-02 13:02:00 |
+----------+----------+---------------------+
```
I want the rows to be grouped by a +/-3 minutes' time difference, like this:
```
+----------+----------+---------------------+--------+
| event_id | device | time | group |
+----------+----------+---------------------+--------+
| 1 | Switch4 | 2013-09-01 00:01:00 | 1 |
| 2 | Switch1 | 2013-09-01 00:02:30 | 1 |
| 3 | Switch10 | 2013-09-01 00:02:40 | 1 |
| 4 | Switch51 | 2013-09-01 03:05:00 | 2 |
| 5 | Switch49 | 2013-09-02 13:00:00 | 3 |
| 6 | Switch28 | 2013-09-02 13:01:00 | 3 |
| 7 | Switch9 | 2013-09-02 13:02:00 | 3 |
+----------+----------+---------------------+--------+
```
I tried to do it using a window function, but in the clause
> [ RANGE | ROWS ] BETWEEN frame\_start AND frame\_end,
> where frame\_start and frame\_end can be one of
> UNBOUNDED PRECEDING value PRECEDING CURRENT ROW value FOLLOWING
> UNBOUNDED FOLLOWING,
>
> value must be an integer expression not containing any variables,
> aggregate functions, or window functions
So, considering this, I'm not able to indicate the time interval. Now I doubt that a window function can resolve my problem. Could you help me? | [SQL Fiddle](http://sqlfiddle.com/#!12/25dfc/7)
```
select
event_id, device, ts,
floor(extract(epoch from ts) / 180) as group
from t
order by ts
```
It is possible to make the group number a sequence starting at 1 using a window function, but it has a non-trivial cost that I am not sure is necessary. This is it:
```
select
event_id, device, ts,
dense_rank() over(order by "group") as group
from (
select
event_id, device, ts,
floor(extract(epoch from ts) / 180) as group
from t
) s
order by ts
```
`time` is a reserved word. Pick another one as the column name. | [SQLFiddle](http://sqlfiddle.com/#!12/25dfc/4)
```
with u as (
select
*,
extract(epoch from ts - lag(ts) over(order by ts))/ 60 > 180 or lag(ts) over(order by ts) is null as test
from
t
)
select *, sum(test::int) over(order by ts) from u
``` | Group by floating date range | [
"",
"sql",
"postgresql",
"grouping",
"window-functions",
"group-by",
""
] |
I am passing 3 parameters into my stored procedure: `@Time, @DeptID, @Value`.
1. `@Time` represents: 1=past 24 hours, 2=past week, 3=past month, 4=past year
2. `@DeptID` is the ID of the various departments
3. `@Value` 1=Sort by Low, 2=Sort by High
Here is my current code:
```
DECLARE @SQL VARCHAR(MAX)
SET @SQL = ('SELECT S.ID, S.[Description], D.Department, S.Value, S.[Date] FROM Suggestions S INNER JOIN Department D ON D.ID = S.DeptID WHERE Approved =1')
IF (@DeptID = 0 AND @Value = 0 AND @Time = 0)
BEGIN
SET @SQL = (@SQL +' ORDER BY [Date] DESC')
END
IF (@Time > 0)
BEGIN
SET @SQL = (CASE WHEN @Time = 1 THEN (@SQL + ' AND [Date] >= DATEADD(DAY, -1, GETDATE()) ORDER BY S.[Date] DESC')
WHEN @Time = 2 THEN (@SQL + ' AND [Date] >= DATEADD(DAY, -7, GETDATE()) ORDER BY S.[Date] DESC' )
WHEN @Time = 3 THEN (@SQL + ' AND [Date] >= DATEADD(DAY, -30, GETDATE()) ORDER BY S.[Date] DESC')
WHEN @Time = 4 THEN (@SQL + ' AND [Date] >= DATEADD(DAY, -365, GETDATE()) ORDER BY S.[Date] DESC') END)
END
IF (@DeptID > 0)
BEGIN
SET @SQL = @SQL + ' AND S.DeptID = @DeptID ORDER BY S.[Date] DESC')
END
IF (@Value > 0)
BEGIN
SET @SQL = (CASE WHEN @Value = 1 THEN (@SQL + ' ORDER BY S.Value DESC')
WHEN @Value = 2 THEN (@SQL + ' ORDER BY S.Value ASC')
```
This is fine when only one parameter is passed in, but when trying to filter by two or more parameters then I run into a problem...
So far I have been getting errors because I am adding two 'Order By' clauses onto the end of my statement:
```
ORDER BY S.[Date] DESC ORDER BY S.Value DESC
```
Can anyone point me in the right direction with this?
Any help at all would be much appreciated. | I rewrote your code to not use dynamic SQL:
```
SELECT S.ID,
S.[Description],
D.Department,
S.Value,
S.[Date]
FROM Suggestions S
INNER JOIN Department D
ON D.ID = S.DeptID
WHERE Approved =1
AND (@Time = 0
OR (@Time = 1 AND [Date] >= DATEADD(DAY, -1, GETDATE()))
OR (@Time = 2 AND [Date] >= DATEADD(DAY, -30, GETDATE()))
OR (@Time = 3 AND [Date] >= DATEADD(DAY, -365, GETDATE()))
)
AND (@DeptID = 0
OR (@DeptID > 0 AND S.DeptID = @DeptID)
)
ORDER BY [Date] DESC,
CASE WHEN @Value = 1 THEN S.Value
ELSE 1 END DESC,
CASE WHEN @Value = 2 THEN S.Value
ELSE 1 END ASC
```
**Updated with dynamic SQL version**
Ok, if you want the dynamic SQL solution, then this is one way (but first, did you read [this link](http://www.sommarskog.se/dynamic_sql.html%E2%80%8E)?):
```
DECLARE @SQL VARCHAR(MAX), @WHERE VARCHAR(MAX), @ORDER VARCHAR(MAX)
SET @SQL = ('SELECT S.ID, S.[Description], D.Department, S.Value, S.[Date] FROM Suggestions S INNER JOIN Department D ON D.ID = S.DeptID WHERE Approved =1')
SET @WHERE = ' AND ' +
CASE WHEN @Time = 0 THEN '1 = 1'
WHEN @Time = 1 THEN '[Date] >= DATEADD(DAY, -1, GETDATE())'
WHEN @Time = 2 THEN '[Date] >= DATEADD(DAY, -7, GETDATE())'
WHEN @Time = 3 THEN '[Date] >= DATEADD(DAY, -30, GETDATE())'
WHEN @Time = 4 THEN '[Date] >= DATEADD(DAY, -365, GETDATE())' END
SET @WHERE = @WHERE +
CASE WHEN @DeptID > 0 THEN ' AND S.DeptID = @DeptID'
ELSE '' END
SET @ORDER = ' ORDER BY S.[Date] DESC' +
CASE WHEN @Value = 0 THEN ''
WHEN @Value = 1 THEN ', S.Value DESC'
WHEN @Value = 2 THEN ', S.Value ASC' END
SET @SQL = @SQL + @WHERE + @ORDER
PRINT @SQL
``` | Not sure I agree with all of your dynamic SQL but why not just check if the order by clause is already in the SQL String and if it is just add the additional parameter otherwise add the entire clause. You can do this using CharIndex.
I would also suggest only adding the DESC at the very end just before you execute the @SQL
```
IF (@DeptID = 0 AND @Value = 0 AND @Time = 0)
BEGIN
IF CHARINDEX('ORDER BY',@SQL) = 0
BEGIN
SET @SQL = (@SQL +' ORDER BY [Date] ')
END
ELSE
BEGIN
SET @SQL = (@SQL +',[Date] ')
END
END
SET @SQL = @SQL + ' DESC'
``` | How can I change the ORDER BY clause depending on the parameters that are passed into the stored procedure | [
"",
"sql",
"sql-server",
"stored-procedures",
"sql-order-by",
"dynamic-sql",
""
] |
I have a MySQL table with an 'Order' field, but when a record gets deleted a gap appears.
How can I update my 'Order' field sequentially?
If possible, in one query.
```
id.........order
1...........1
5...........2
4...........4
3...........6
5...........8
```
to
```
id.........order
1...........1
5...........2
4...........3
3...........4
5...........5
```
I could do this record by record
Getting a SELECT ordered by Order and changing the Order field row by row,
but to be honest I don't like it.
thanks
Extra info :
I also would like to change it this way :
```
id.........order
1...........1
5...........2
4...........3
3...........3.5
5...........4
```
to
```
id.........order
1...........1
5...........2
4...........3
3...........4
5...........5
``` | In MySQL you can do this:
```
update t join
(select t.*, (@rn := @rn + 1) as rn
from t cross join
(select @rn := 0) const
order by t.`order`
) torder
on t.id = torder.id
set `order` = torder.rn;
```
In most databases, you can also do this with a correlated subquery. But this might be a problem in MySQL because it doesn't allow the table being updated as a subquery:
```
update t
set `order` = (select count(*)
from t t2
where t2.`order` < t.`order` or
(t2.`order` = t.`order` and t2.id <= t.id)
);
``` | There is no need to re-number or re-order. The table just gives you all your data. If you need it presented a certain way, that is the job of a query.
You don't even need to change the order value in the query either, just do:
```
SELECT * FROM MyTable WHERE mycolumn = 'MyCondition' ORDER BY order;
``` | Reorder a MYSQL table | [
"",
"mysql",
"sql",
"database",
"select",
"sql-order-by",
""
] |
I have a table of items with a 'date\_added' column. What I want to do is select all the items added during the last two weeks. How can I do that?
```
$sql = "SELECT *
FROM iteminfo
WHERE quantity > 0
ORDER BY ID ASC";
// $query = mysql_query($sql);
``` | If you are using MS SQL Server try this code:
```
SELECT tb.date_added
FROM MyTable tb
WHERE tb.date_added > DATEADD(week, -2, GETDATE())
```
For MySQL try:
```
SELECT tb.date_added
FROM MyTable tb
WHERE DATE_ADD(tb.date_added, INTERVAL 2 WEEK) >= NOW();
``` | Basically it depends on database, but you can try something like this:
```
select * from table where date_added > getdate()-14
``` | SQL Query Comparing Date | [
"",
"sql",
"date",
"time",
""
] |
I just think that the answer is false, because a foreign key doesn't have the `uniqueness` property.
But some people said that it can be in case of self joining the table.
I am new to `SQL`. If it's true, please explain how and why.
```
Employee table
| e_id | e_name | e_sala | d_id |
|---- |------- |----- |--------|
| 1 | Tom | 50K | A |
| 2 | Billy | 15K | A |
| 3 | Bucky | 15K | B |
department table
| d_id | d_name |
|---- |------- |
| A | XXX |
| B | YYY |
```
Now, d\_id is a foreign key, so how can it be a primary key? And explain something about `join`. What is its use? | I think the question is a bit confusing.
If you mean "can foreign key 'refer' to a primary key in the same table?", the answer is a firm yes as some replied. For example, in an employee table, a row for an employee may have a column for storing manager's employee number where the manager is also an employee and hence will have a row in the table like a row of any other employee.
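A minimal sketch of such a table (the names here are illustrative, not from the question):

```
CREATE TABLE employee (
    emp_no     INT PRIMARY KEY,
    name       VARCHAR(30) NOT NULL,
    manager_no INT NULL REFERENCES employee (emp_no) -- self-referencing foreign key
);
```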
If you mean "can column(or set of columns) be a primary key as well as a foreign key in the same table?", the answer, in my view, is a no; it seems meaningless. However, the following definition succeeds in SQL Server!
```
create table t1(c1 int not null primary key foreign key references t1(c1))
```
But I think it is meaningless to have such a constraint unless somebody comes up with a practical example.
AmanS, in your example, in no circumstance can d\_id be a primary key in the Employee table. A table can have only one primary key. I hope this clears your doubt. d\_id is/can be a primary key only in the department table. | **This may be a good explanation example**
```
CREATE TABLE employees (
id INTEGER NOT NULL PRIMARY KEY,
managerId INTEGER REFERENCES employees(id),
name VARCHAR(30) NOT NULL
);
INSERT INTO employees(id, managerId, name) VALUES(1, NULL, 'John');
INSERT INTO employees(id, managerId, name) VALUES(2, 1, 'Mike');
```
-- Explanation:
-- In this example.
-- John is Mike's manager. Mike does not manage anyone.
-- Mike is the only employee who does not manage anyone. | Can a foreign key refer to a primary key in the same table? | [
"",
"sql",
"foreign-keys",
"primary-key",
""
] |
I'm trying to write a query that queries table Z\_INSUR. I want to find all instances where the same EMP has more than 1 row where the INSUR\_TYPE is M. I want to see all of the EMPs that do this, which is why I added the UNIQUE part to the query. Below is what I've tried so far, but it doesn't work. Can someone help me with this? I'm using Oracle.
```
select UNIQUE(EMP) from Z_INSUR where COUNT(INSUR_TYPE = 'M') > 1;
``` | ```
SELECT EMP
FROM Z_INSUR
WHERE INSUR_TYPE = 'M'
GROUP BY EMP
HAVING count(*) > 1;
``` | Since you didn't specify, I am going to assume you're using SQL Server.
`select emp from z_insur where insur_type = 'm' group by emp having count(emp) > 1` | SQL Query finding field with multiple rows | [
"",
"sql",
"oracle",
""
] |
I want to perform a query which would look like this in native SQL:
```
SELECT
AVG(t.column) AS average_value
FROM
table t
WHERE
YEAR(t.timestamp) = 2013 AND
MONTH(t.timestamp) = 09 AND
DAY(t.timestamp) = 16 AND
t.somethingelse LIKE 'somethingelse'
GROUP BY
t.somethingelse;
```
If I am trying to implement this in Doctrine's query builder like this:
```
$qb = $this->getDoctrine()->createQueryBuilder();
$qb->select('e.column AS average_value')
->from('MyBundle:MyEntity', 'e')
->where('YEAR(e.timestamp) = 2013')
->andWhere('MONTH(e.timestamp) = 09')
->andWhere('DAY(e.timestamp) = 16')
->andWhere('u.somethingelse LIKE somethingelse')
->groupBy('somethingelse');
```
I get the error exception
> [Syntax Error] line 0, col 63: Error: Expected known function, got 'YEAR'
How can I implement my query with Doctrine's query builder?
**Notes:**
* I know about [Doctrine's Native SQL](http://docs.doctrine-project.org/en/latest/reference/native-sql.html). I've tried this, but it leads to the problem that my production and my development database tables have different names. I want to work database agnostic, so this is no option.
* Although I want to work db agnostic: FYI, I am using MySQL.
* There is way to extend Doctrine to "learn" the `YEAR()` etc. statements, e.g. as [seen here](http://www.simukti.net/blog/2012/04/05/how-to-select-year-month-day-in-doctrine2/). But I am looking for a way to avoid including third party plugins. | You can add [Doctrine extension](https://github.com/beberlei/DoctrineExtensions) so you can use the MySql `YEAR` and `MONTH` statement by adding this configuration if you're on Symfony:
```
doctrine:
orm:
dql:
string_functions:
MONTH: DoctrineExtensions\Query\Mysql\Month
YEAR: DoctrineExtensions\Query\Mysql\Year
```
now you can use the MONTH and YEAR statements in your DQL or querybuilder.
Note: The extension supports MySQL, Oracle, PostgreSQL and SQLite. | In **Symfony 4** you must install [DoctrineExtensions](https://github.com/beberlei/DoctrineExtensions):
```
composer require beberlei/DoctrineExtensions
```
And then edit the doctrine config file (config/packages/doctrine.yaml) as follow:
```
doctrine:
orm:
dql:
string_functions:
MONTH: DoctrineExtensions\Query\Mysql\Month
YEAR: DoctrineExtensions\Query\Mysql\Year
``` | How can I use SQL's YEAR(), MONTH() and DAY() in Doctrine2? | [
"",
"mysql",
"sql",
"doctrine-orm",
"sql-date-functions",
""
] |
Using LIKE is very common in MySQL. We use it like this: `WHERE field LIKE '%substring%'`, where we have a substring and the field has the full string. But what I need is the opposite: I have substrings in the field, so I want the rows whose value is a substring of my string. Suppose the table is:
```
----+-------------------
id | keyword
----+-------------------
1 | admission
----+-------------------
2 | head of the dept
----+-------------------
```
and I have a string from the user: `Tell me about admission info`. I need a MySQL query that returns `admission`, as this is a substring of the user's string. Something like:
```
SELECT keyword FROM table WHERE (keyword is a substring of 'Tell me about admission info')
```
Thanks in advance. | You're looking for the [LIKE](http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html#operator_like) operator
> Pattern matching using SQL simple regular expression comparison.
> Returns 1 (TRUE) or 0 (FALSE). If either expr or pat is NULL, the
> result is NULL.
Something like
```
SELECT keyword
FROM table
WHERE ('Tell me about admission info' LIKE CONCAT('%', keyword, '%'))
```
## [SQL Fiddle DEMO](http://www.sqlfiddle.com/#!2/f6873/1) | This work fine, using `REGEXP`:
```
SELECT keyword
FROM table
WHERE 'Tell me about admission info' REGEXP keyword;
```
But this work only if `keyword` don't contain *Regular expression*'s escapes...
I.e. This will work fine while `keyword` contain only letters, numbers, spaces and so on. | SELECT MySQL field that contains a substring | [
"",
"mysql",
"sql",
"string",
""
] |
I have a table `USERS` with two columns (`RID`, `USERNAME`)
I am just passing `RID` to get `USERNAME`.
I will pass 4 parameters as User1, User2, User3, User4 which has `RID`
Now I want results like follow:
```
Level1User Level2User Level3User Level4User
ABC DEF GHI JKL
```
I have tried following query:
```
SELECT
L1.USERNAME AS Level1User,
L2.USERNAME AS Level2User,
L3.USERNAME AS Level3User,
L4.USERNAME AS Level4User
FROM [USERS] AS L1,
[USERS] AS L2,
[USERS] AS L3,
[USERS] AS L4
WHERE L1.RID = @User1RID AND L2.RID = @User2RID AND L3.RID = @User3RID
AND L4.RID = @User4RID
```
It is working fine but it fails when I pass `NULL` for any USER's `RID`.
I don't know how to do this. | Rewrite it using Scalar Subqueries and you don't have to care about NULLs anymore:
```
SELECT
(SELECT USERNAME FROM [USERS] WHERE RID = @User1RID) AS Level1User,
(SELECT USERNAME FROM [USERS] WHERE RID = @User2RID) AS Level2User,
(SELECT USERNAME FROM [USERS] WHERE RID = @User3RID) AS Level3User,
(SELECT USERNAME FROM [USERS] WHERE RID = @User4RID) AS Level4User
``` | You can do this with aggregation:
```
select max(case when u.RID = @User1RID then USERNAME end) as Level1User,
max(case when u.RID = @User2RID then USERNAME end) as Level2User,
max(case when u.RID = @User3RID then USERNAME end) as Level3User,
max(case when u.RID = @User4RID then USERNAME end) as Level4User
from [users] u
where u.RID in (@User1RID, @User2RID, @User3RID, @User4RID)
``` | Select Statement with rows as columns | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I'd like to join a temporary table in MySQL, which fails; the idea is quite simple:
```
CREATE TEMPORARY TABLE temp_table LIKE any_other_table; -- srsly it does not matter which table
(
SELECT p1,p2,p3 FROM temp_table WHERE p4 = 1
) UNION (
SELECT p1,p2,p3 FROM temp_table WHERE p4 = 2
)
```
Any help is greatly appreciated.
EDIT: The error thrown by MySQL is `ERROR 1137 (HY000): Can't reopen table: 'temp_table'` | You cannot refer to a TEMPORARY table more than once in the same query.
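A common workaround (a sketch, assuming you can afford a second copy of the data) is to duplicate the temporary table so each reference in the UNION names a distinct table:

```
-- Second temporary copy with the same structure, then fill it.
CREATE TEMPORARY TABLE temp_table2 LIKE any_other_table;
INSERT INTO temp_table2 SELECT * FROM temp_table;

(SELECT p1, p2, p3 FROM temp_table WHERE p4 = 1)
UNION
(SELECT p1, p2, p3 FROM temp_table2 WHERE p4 = 2);
```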
Please read the following link
<http://dev.mysql.com/doc/refman/5.5/en/temporary-table-problems.html> | Does this work?
```
SELECT p1, p2, p3
FROM temp_table
WHERE p4 in (1, 2);
```
It is a much simpler way to write the same query.
EDIT:
If by "fail" you mean "doesn't return any rows", then you have a simple problem. `CREATE TABLE LIKE` does not populate the table. It creates a table with the same structure as `any_other_table`, but with no rows. You should then populate it with `insert`. Or, use `create table` with the `select` statement only. | Join the same table temporary table in MySQL | [
"",
"mysql",
"sql",
"join",
"temp-tables",
""
] |
I am struggling to get my head around this sql.
I have a function that returns a list of items associated with a Bill of Materials BOM.
The result of the sql select
```
SELECT
BOM,
ITEMID,
QTY
FROM boms
WHERE
bom='A'
```
is
```
BOM | ITEMID | QTY
A | ITEMB | 1
A | ITEMC | 2
```
Now using that result set I am looking to query my salestable to find sales where ITEMB and ITEMC were sold in enough quantity.
The format of the salestable is as follows
```
SELECT
salesid,
itemid,
sum(qtyordered) 'ordered'
FROM salesline
WHERE
itemid='ITEMB'
or itemid='ITEMC'
GROUP BY salesid, itemid
```
This would give me something like
```
salesid | itemid | ordered
SO-10000 | ITEMB | 1
SO-10001 | ITEMB | 1
SO-10001 | ITEMC | 1
SO-10002 | ITEMB | 1
SO-10002 | ITEMC | 2
```
Ideally I would like to return only SO-10002, as this is the only sale where all necessary units were sold.
Any suggestions would be appreciated. Ideally this would be a single query, but I am not sure if that is possible. Performance is not a must, as this would be run once a week in the early hours of the morning.
EDIT
With the always excellent help, the code is now complete. I have wrapped it all up into a UDF which simply returns the sales for a specified BOM over a specified period of time.
Function is
```
CREATE FUNCTION [dbo].[BOMSALES] (@bom varchar(20),@startdate datetime, @enddate datetime)
RETURNS TABLE
AS
RETURN(
select count(q.SALESID) SOLD FROM (SELECT s.SALESID
FROM
(
SELECT s.SALESID, ITEMID, SUM(qtyordered) AS SOLD
FROM salesline s inner join SALESTABLE st on st.salesid=s.SALESID
where st.createddate>=@startdate and st.CREATEDDATE<=@enddate and st.salestype=3
GROUP BY s.SALESID, ITEMID
) AS s
JOIN dbo.BOM1 AS b ON b.ITEMID = s.ITEMID AND b.QTY <= s.SOLD
where b.BOM=@bom
GROUP BY s.SALESID
HAVING COUNT(*) = (SELECT COUNT(*) FROM dbo.BOM1 WHERE BOM = @bom)) q
)
``` | This should return all sales with an exact match, i.e. same itemid and same quantity:
```
SELECT s.salesid
FROM
(
SELECT salesid, itemid, SUM(qtyordered) AS ordered
FROM salesline AS s
GROUP BY salesid, itemid
) AS s
JOIN
boms AS b
ON b.itemid = s.itemid
AND b.QTY = s.ordered
WHERE b.BOM='A'
GROUP BY s.salesid
HAVING COUNT(*) = (SELECT COUNT(*) FROM boms WHERE BOM='A');
```
If you want to return a sale where the quantity is greater than boms.qty, you have to change the join accordingly:
```
JOIN
boms AS b
ON b.itemid = s.itemid
AND b.QTY <= s.ordered
```
Untested... | You can do this aggregation and a `having` clause:
```
select salesid
from salesline sl
group by salesid
having sum(case when itemid = 'ITEMB' then 1 else 0 end) > 0 and
sum(case when itemid = 'ITEMA' then 1 else 0 end) > 0;
```
Each condition in the `having` clause is counting the number of rows with each item. | sql 2000 select id where multiple row conditions are met | [
"",
"sql",
"sql-server-2000",
""
] |
I am trying to run the following sql server stored procedure, and I am unable to get it to execute. The procedure query is:
```
ALTER PROCEDURE [dbo].[get_StockNavigationReportData]
@startDate VARCHAR(200),
@endDate VARCHAR(200),
@groupBy VARCHAR(200)
AS
BEGIN
SELECT M.VRNOA, M.VRDATE 'DATE', M.REMARKS, D.qty 'QTY', g2.name 'Godown_2', g.name 'Godown_1'
FROM
Stockmain M, StockDetail D, GODOWN G, (
SELECT * FROM GODOWN
) AS g2
WHERE
M.Etype='navigation'
AND M.STID = D.STID
AND D.GODOWN_ID = G.GODOWN_ID
AND g2.godown_id = D.GODOWN_ID2
AND VRDATE BETWEEN CONVERT(VARCHAR, CAST(@startDate AS DATETIME),101)
AND CONVERT(VARCHAR, CAST(@endDate AS DATETIME), 101)
ORDER BY
@groupBy ASC
END
```
And the error that I get is:
> Msg 1008, Level 16, State 1, Procedure get\_StockNavigationReportData, Line 25
> The SELECT item identified by the ORDER BY number 1 contains a variable as part of the expression identifying a column position. Variables are only allowed when ordering by an expression referencing a column name.
Can anybody please have a look and tell me what I am doing wrong here + How may I get this to work? | yes you cannot use variable in the order by clause which specify order position.
<http://exacthelp.blogspot.com/2012/03/how-to-use-variable-in-order-by-clause.html>
You should write something like this:
```
ORDER BY
CASE
WHEN @groupBy = 1 THEN VRNOA
WHEN @groupBy = 2 THEN DATE
WHEN @groupBy = 3 THEN REMARKS
...............
............
END
``` | You cannot use a variable in the `order by` clause. Instead, you have to do something like:
```
order by (case when @groupBy = 'VRNOA' then VRNOA
. . .
end)
```
Be careful, though, because if the columns are of different types, either unexpected things might happen or might get another error. (There is an alternative to use dynamic SQL, but I would not recommend that.)
Also, the name `@groupby` is a bit misleading. "Grouping" is a SQL term equivalent to "aggregation". Wouldn't `@OrderBy` or `@SortBy` be more appropriate? | Sql Server 2008: Strange error in stored procedure | [
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
""
] |
Is there any difference between using `SPACE(2) + '|' + SPACE(2)` or just `' | '`? I know the output will be the same, but I'm not sure about other aspects... Actually I can't see the point in using the SPACE function if there's no difference...
Thanks! | The first difference which comes to my mind is the amount of spaces. Let's say you'd like to write 20 spaces, a word and another 20 spaces:
```
' abcd '
```
How many spaces are there? Much more useful:
```
SPACE(20)+'abcd'+SPACE(20)
```
Isn't it?
There are many cases in which you edit somebody's code that generates SQL queries. It's more than probable that you'll delete a space somewhere, and that is hard to debug. There's no such problem if the `SPACE` function is used. | I agree with others who have answered this that SPACE() is useful to indicate the number of spaces used or to add large numbers of spaces.
Using SPACE may also be simpler if you want to generate a fixed-width format file - you can use the data length as a parameter e.g.
```
SELECT
name + SPACE(20-LEN(name))
FROM
SomeTableWithNames
``` | SQL - Difference between SPACE(2) + '|' + SPACE(2) and ‘ | ‘? | [
"",
"sql",
"sql-server",
""
] |
I have a procedure executed in SQL Server 2008 R2; the script is:
```
DECLARE @LocalVar SMALLINT = GetLocalVarFunction();
SELECT
[TT].[ID],
[TT].[Title]
FROM [TargetTable] AS [TT]
LEFT JOIN [AcceccTable] AS [AT] ON [AT].[AccessID] = [TT].[ID]
WHERE
(
(@LocalVar = 1 AND ([AT].[Access] = 0 OR [AT].[Access] Is Null) AND
([TT].[Level] > 7)
);
GO
```
This Procedure executed in `16` seconds.
But When I change the Where Clause to:
```
WHERE
(
((1=1) AND [AT].[Access] = 0 OR [AT].[Access] Is Null) AND
([TT].[Level] > 7)
);
```
The Procedure Executed in less than `1` second.
As You see I just remove the local variable.
So where is the problem? Is there anything I'm missing about using a local variable in a where clause? Any suggestion to improve execution time when using a local variable in the where clause?
**Update:**
I also thought of adding an `if` statement before the script and splitting the procedure into 2 procedures, but I have 4 or 5 variables like the above, and using `if` statements is too complex.
**Update2:**
I changed the initialization of `@LocalVar`:
```
DECLARE @LocalVar SMALLINT = 1;
```
There is no change in execute time. | When you use local variables in a `WHERE` filter, it causes a **FULL TABLE SCAN**. The value of the local variable is not known to SQL Server at compile time, hence SQL Server creates an execution plan for the largest scale that is available for that column.
As you have seen, when you pass `1=1` SQL Server knows the value at compile time, and hence the performance is not degraded. But the moment you pass a local variable, the value is unknown.
One solution may be to use **OPTION ( RECOMPILE )** at the end of your SQL query
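Applied to the query from the question, the hint simply goes at the very end of the statement — a sketch (with the `WHERE` parentheses balanced):

```
SELECT
    [TT].[ID],
    [TT].[Title]
FROM [TargetTable] AS [TT]
LEFT JOIN [AcceccTable] AS [AT] ON [AT].[AccessID] = [TT].[ID]
WHERE
    (@LocalVar = 1 AND ([AT].[Access] = 0 OR [AT].[Access] Is Null))
    AND [TT].[Level] > 7
OPTION (RECOMPILE);
```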
You can check out the [OPTIMIZE FOR UNKNOWN](http://blogs.msdn.com/b/sqlprogrammability/archive/2008/11/26/optimize-for-unknown-a-little-known-sql-server-2008-feature.aspx) | When you use a local variable in WHERE optimizer doesn't know what to do with it.
You may check this [link](http://connectsql.blogspot.gr/2012/07/sql-server-how-local-variables-can.html)
What you could do in your case is run your query with displaying the actual plan in both cases and see how SQL is treating them. | Increase Execute Duration of Procedure When Using Variables in WhereClause | [
"",
"sql",
"sql-server",
"performance",
"variables",
"stored-procedures",
""
] |
I would like to execute a query that will show only the string before the dash in a particular field.
**For example:**
Original data: `AB-123`
After query: `AB` | You can use [`substr`](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions181.htm#i87066):
```
SQL> WITH DATA AS (SELECT 'AB-123' txt FROM dual)
2 SELECT substr(txt, 1, instr(txt, '-') - 1)
3 FROM DATA;
SUBSTR(TXT,1,INSTR(TXT,'-')-1)
------------------------------
AB
```
or [`regexp_substr`](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions150.htm#i1239858) (10g+):
```
SQL> WITH DATA AS (SELECT 'AB-123' txt FROM dual)
2 SELECT regexp_substr(txt, '^[^-]*')
3 FROM DATA;
REGEXP_SUBSTR(TXT,'^[^-]*')
---------------------------
AB
``` | I found this simple
```
SELECT distinct
regexp_replace(d.pyid, '-.*$', '') as result
FROM schema.table d;
```
pyID column contains ABC-123, DEF-3454
SQL Result:
ABC
DEF | SQL Query to show string before a dash | [
"",
"sql",
"oracle",
""
] |
I want to select all column values irrespective of NULL as column value.
My problem is that when I try to select values using IS NULL, I'm getting only those records having NULL as the value.
```
declare @status varchar(50)=NULL
declare @path varchar(50)= 'India'
select
*
from [vwMYDATA]
where
Path like '%' + @path + '%' and
(Status = @status or @status IS NULL)
```
vwMYDATA
```
path status
INDIA1 NULL
INDIA2 close
INDIA3 open
```
If I execute the above query I'm getting only the records whose status column has a NULL value. Expected: if no status value is specified, show all status records:
```
path status
INDIA1 NULL
INDIA2 close
INDIA3 open
```
It is showing:
```
path status
INDIA1 NULL
```
I'm passing these 2 parameters to a stored procedure which contains this select statement.
How can I filter out and show all the records if status is not given as input? | ```
select
*
from [vwMYDATA]
where
Path like '%' + @path + '%' and
((@status IS NULL )OR (Status = @status or status IS NULL))
``` | You need to change the WHERE clause to a Case statement if you want different actions based on whether the @status varchar is NULL.
```
Select *
From [vwMYDATA]
Where
Path Like Concat('%', @path, '%') And
1 = (
Case When @status Is Null Then 1
When Status = @status Then 1
Else 0
End
)
```
This might be inefficient and nonperformant. For performance, it would be better to have a completely different query that matches based on NULL or not, so that proper indexing can be applied. | How to take all values from SQL table column on condition? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I was trying to find the number of days between 2 dates.
I tried running the below query and ended up with **ORA-30076** error.
Query is,
```
select extract(day from (sysdate - to_date('2009-10-01', 'yyyy-mm-dd')))
from dual
```
Error is,
```
Error at Command Line:1 Column:34
Error report:
SQL Error: ORA-30076: invalid extract field for extract source
30076. 00000 - "invalid extract field for extract source"
*Cause: The extract source does not contain the specified extract field.
```
But when I try to run
```
Select extract(day from sysdate) from dual;
```
I got the correct output.
Please help if anyone knows the reason why I am getting the above error. | Subtracting one date from another doesn't return a date; it returns a number that represents the number of days' difference, so you can't use extract. | `sysdate - to_date('2009-10-01', 'yyyy-mm-dd')` will give the difference **in days** between two dates - there is no need to use the `extract` function. Moreover, in this situation it would be incorrect usage of the `extract` function, simply because it expects an expression of date or interval datatype after the `from` clause, and the result of the `sysdate - to_date('2009-10-01', 'yyyy-mm-dd')` subtraction is of number datatype - the cause of the error you are facing. | ORA-30076 when trying to find days between dates | [
"",
"sql",
"oracle",
""
] |
a table like this
table
```
|primary_key| project | tag |
| 1 | 2 | 3 |
| 2 | 2 | 0 |
| 3 | 2 | 4 |
| 4 | 2 | 5 |
| 5 | 3 | 0 |
| 6 | 2 | 0 |
```
I want to query project with tag ' (3 and 4) or (0 and 4) and (5 and 0)',
In this example, the output should be project 2?
How could I write this in SQL?
I tried using phpMyAdmin to generate several results, but it did not work as I expected.
It's really kind of you guys to help me.
I changed the question: if the condition is much more complex, can the query be
from
table a, table b, table c? | You can join the table to itself and check if every row with tag 3/0 has another row with tag 4.
```
SELECT DISTINCT a.project
FROM table a, table b
WHERE a.project = b.project AND
      ( ( a.tag = 3 AND b.tag = 4 ) OR
        ( a.tag = 0 AND b.tag = 4 ) )
```
Updated according to the updated question. | You want to do this with aggregation and a `having` clause:
```
select project
from t
group by project
having sum(tag = 0 or tag = 3) > 0 and
sum(tag = 4) > 0;
```
Each `sum()` expression is counting the number of rows where the condition is true. So, the two conditions are saying "there is at least one row with tag = 0 or 3 and there is at least one row with tag = 4". | How to query multi 'or' and a and in SQL? | [
"",
"mysql",
"sql",
""
] |
Using SQL Server 2008
**Objective:** Select a series of columns from table1 to insert into table2
**Issue:** In table2 there is one additional column that needs to be inserted that can be derived from a join between table1 and table3
**Current Code**
```
SELECT
table1.name,
table1.email,
table1.phone,
CASE WHEN table1.status = 'active' THEN 1 ELSE 0 END AS Active,
CASE WHEN table1.group_id = 3 THEN 5 ELSE table1.group_id END AS RoleId,
(SELECT
table3.UserID AS ParentID
FROM
table3
INNER JOIN
table1 ON
table3.ID = table1.table3_ID)
FROM
table1
WHERE
table1.group_id = 3 AND
table1.status = 'active'
```
Currently this code does not work and returns "Subquery returned more than 1 value" error.
I am aware this may not be the correct way to use a nested select, what would be the correct way to do this?
Additional data can be provided if necessary.
Thank you in advance. | The problem is that you want a correlated subquery. That means that you need to remove `table1` from the subquery:
```
SELECT table1.name, table1.email, table1.phone,
(CASE WHEN table1.status = 'active' THEN 1 ELSE 0 END) AS Active,
(CASE WHEN table1.group_id = 3 THEN 5 ELSE table1.group_id END) AS RoleId,
(SELECT table3.UserID
FROM table3
WHERE table3.ID = table1.table3_ID
) as ParentID
FROM table1
WHERE table1.group_id = 3 AND table1.status = 'active';
```
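Since the stated objective is inserting these rows into table2, the corrected select can feed the insert directly — a sketch in which the table2 column names are assumptions:

```
INSERT INTO table2 (name, email, phone, Active, RoleId, ParentID)
SELECT table1.name, table1.email, table1.phone,
       (CASE WHEN table1.status = 'active' THEN 1 ELSE 0 END),
       (CASE WHEN table1.group_id = 3 THEN 5 ELSE table1.group_id END),
       (SELECT table3.UserID
        FROM table3
        WHERE table3.ID = table1.table3_ID
       )
FROM table1
WHERE table1.group_id = 3 AND table1.status = 'active';
```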
If there is more than one possible match in `table3`, then you will need something like `select top 1 table3.UserId` or `select max(table3.UserId)`. | If you really (really really really) want to use a `subquery` instead of a `join`, your code is quite close. You should change your subquery's `join` as below.
```
SELECT
table1.name,
table1.email,
table1.phone,
CASE WHEN table1.status = 'active' THEN 1 ELSE 0 END AS Active,
CASE WHEN table1.group_id = 3 THEN 5 ELSE table1.group_id END AS RoleId,
(SELECT
table3.UserID
FROM
table3
WHERE
table3.ID = table1.table3_ID) AS ParentID
FROM
table1
WHERE
table1.group_id = 3 AND
table1.status = 'active'
```
Apart from this I put an alias `ParentID` outside the `subquery`.
---
Please consider the below code with table aliases. It's good practice to use them because the code is more readable.
```
SELECT
t1.name,
t1.email,
t1.phone,
CASE WHEN t1.status = 'active' THEN 1 ELSE 0 END AS Active,
CASE WHEN t1.group_id = 3 THEN 5 ELSE t1.group_id END AS RoleId,
(SELECT
table3.UserID
FROM
table3 t3
WHERE
t3.ID = t1.table3_ID) AS ParentID
FROM
table1 t1
WHERE
t1.group_id = 3 AND
t1.status = 'active'
``` | How do I add a column derived from a select statement within a select statement? | [
"",
"sql",
"sql-server",
"select",
"insert",
""
] |
What are the practical differences between COALESCE() and ISNULL(,'')?
When avoiding NULL values in SQL concatenations, which one is the best to be used?
Thanks! | > Comparing COALESCE and ISNULL
>
> The ISNULL function and the COALESCE expression have a similar purpose but can behave differently.
>
> 1. Because ISNULL is a function, it is evaluated only once. As described above,
> the input values for the COALESCE expression can be evaluated multiple
> times.
> 2. Data type determination of the resulting expression is
> different. ISNULL uses the data type of the first parameter, COALESCE
> follows the CASE expression rules and returns the data type of value
> with the highest precedence.
> 3. The NULLability of the result expression is different for ISNULL and COALESCE. The
> ISNULL return value is always considered NOT NULLable (assuming the return value is a
> non-nullable one) whereas COALESCE with non-null parameters is
> considered to be NULL. So the expressions ISNULL(NULL, 1) and
> COALESCE(NULL, 1) although equivalent have different nullability
> values. This makes a difference if you are using these expressions in
> computed columns, creating key constraints or making the return value
> of a scalar UDF deterministic so that it can be indexed as shown in
> the following example.
>
> ```
> USE tempdb;
> GO
>
> -- This statement fails because the PRIMARY KEY cannot accept NULL values
> -- and the nullability of the COALESCE expression for col2
> -- evaluates to NULL.
>
> CREATE TABLE #Demo ( col1 integer NULL, col2 AS COALESCE(col1, 0) PRIMARY KEY, col3 AS ISNULL(col1, 0) );
>
> -- This statement succeeds because the nullability of the
> -- ISNULL function evaluates AS NOT NULL.
>
> CREATE TABLE #Demo ( col1 integer NULL, col2 AS COALESCE(col1, 0), col3 AS ISNULL(col1, 0) PRIMARY KEY );
> ```
>
> Validations for ISNULL and
> COALESCE are also different. For example, a NULL value for ISNULL is
> converted to int whereas for COALESCE, you must provide a data type.
> ISNULL takes only 2 parameters whereas COALESCE takes a variable
> number of parameters.
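The data-type point above is easy to hit in practice — `ISNULL` silently truncates the replacement value to the type of its first argument. A small demo (not part of the quoted text; the variable name is illustrative):

```
DECLARE @s varchar(2) = NULL;

SELECT ISNULL(@s, 'abcdef')   AS isnull_result,   -- 'ab' (typed as varchar(2))
       COALESCE(@s, 'abcdef') AS coalesce_result; -- 'abcdef'
```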
Source: [BOL](http://msdn.microsoft.com/en-us/library/ms190349.aspx) | The main difference is that `COALESCE` is ANSI standard, so you will also find it in other RDBMSs; the other difference is that you can give a whole list of values to be checked to `COALESCE`, whereas to `ISNULL` you can only pass one. | SQL - Difference between COALESCE and ISNULL? | [
"",
"sql",
"sql-server",
""
] |
Hi I'm working with data depending mostly on the day of the week. Data is formatted in a table
Date - position - count/number.
There are multiple different positions.
I was able to sort my data for each day of the week using:
```
select MOD(to_char(time, 'J'),7),
sum(COUNT)
from TABLE
where time > sysdate -x
group by to_char(time, 'J')
order by to_char(time, 'J');
```
This outputs daily sums according to day of the week.
Now I'm able to get an average for a single day of a week in a year.
This code outputs an average for only Sunday
```
SELECT AVG(asset_sums)
FROM (
select MOD(to_char(time, 'J'),7),
sum(COUNT) as asset_sums
from table
where time > sysdate -365
and MOD(TO_CHAR(time, 'J'), 7) + 1 IN (7)
group by to_char(time, 'J')
order by to_char(time, 'J')
);
```
My goal is to be able to get a table with daily sum compared with yearly average for that particular day of the week.
For example yearly average number for Mondays is 57 , Tuesdays 60.
This week my Monday is 59 and Tuesday is 57. Output of the table is
Monday +2, Tuesday -3.
What is the easiest way / most efficient ?
Thanks for your help.
Edit : Format of my data
> Date : yyyy-mm-dd | Place : xxxx | Number( of customers) 0 to 10000
>
> ```
> 2013-09-16 | AAAA | 1534
> 2013-09-16 | AAAB | 534
> 2013-09-17 | AAAA | 1434
> 2013-09-17 | AAAC | 834
> 2013-09-18 | AAAA | 134
> 2013-09-18 | AAAD | 183
> ```
Needed output
> 2013-09-16 | Day of the week | Sum | Average monday this year | Difference Sum-AVG
>
> 2013-09-16 | 1 (= Monday) | 2068 | 2015| 53 | For clarity I will use [subquery factoring](http://www.oracle-developer.net/display.php?id=212). First, select the current week's data. Next, subquery the sum for each day over the current week. Then, subquery the sum for each day over the past year. Then, average the daily sums for each day of the week. Finally, join the two and display the difference.
```
with
this_week as (
  select
    time
  from table
  where time > sysdate - 7
  group by time
),
this_week_dly_sum as (
  select
    to_char(time, 'd') dow,
    sum(count) day_sum
  from table
  where time > sysdate - 7
  group by to_char(time, 'd')
),
this_year_dly_sum as (
  select
    time,
    sum(count) day_sum
  from table
  where time > sysdate - 365
  group by time
),
this_year_dly_avg as (
  select
    to_char(time, 'd') dow,
    avg(day_sum) day_avg
  from this_year_dly_sum
  group by to_char(time, 'd')
)
select
  this_week.time,
  to_char(this_week.time, 'day') day_of_week,
  this_week_dly_sum.day_sum,
  this_year_dly_avg.day_avg,
  this_week_dly_sum.day_sum - this_year_dly_avg.day_avg difference
from this_week
inner join this_week_dly_sum
  on to_char(this_week.time, 'd') = this_week_dly_sum.dow
inner join this_year_dly_avg
  on to_char(this_week.time, 'd') = this_year_dly_avg.dow;
``` | You can use analytic function for this.
```
select date1, to_char(date1, 'd'),
sum(val) over(partition by to_char(date1, 'd')),
avg(val) over(partition by to_char(date1, 'd')),
sum(val) over(partition by to_char(date1, 'd'))-
avg(val) over(partition by to_char(date1, 'd'))
from table1
where date1 > add_months(sysdate, -12);
``` | Calculating difference between daily sum and a average for the same day of the week in defined time range. SQL 10g Oracle | [
"",
"sql",
"oracle",
""
] |
I have a stored procedure to load the data from one table to another table.
I need to set the column value of the destination table based on two values of the select statement, something like the below example.
```
insert into table table_name
( value1, value 2,value 3)
select (value 1,value2 ,
case value3
when value1 = 'somevalue' &&* value2 = 'somevalue'
then 'x'
else 'y'
End
from table_name.
```
Can anyone help me find out how to set a column based on the two previous column values in the same select query?
I have tried the below sample example to understand, but it failed to parse.
```
INSERT INTO HumanResources.departmentcopy
( DepartmentID,GroupName,Name,temp)
SELECT DepartmentID,GroupName,Name,
CASE temp
WHEN DepartmentID = 1 && Name = 'Engineering and Research'
THEN 'sucessful'
ELSE 'unsucessful'
END
FROM HumanResources.department
```
Help me on this!!
thanks,
Venkat | You were very close:
```
INSERT INTO HumanResources.departmentcopy(DepartmentID, GroupName, Name, temp)
SELECT DepartmentID,
GroupName,
Name,
CASE WHEN DepartmentID = 1 AND Name = 'Engineering and Research'
THEN 'sucessful' ELSE 'unsucessful' END
FROM HumanResources.department
``` | `&&` is not valid in SQL. Use `AND` to append a condition.
```
INSERT INTO HumanResources.departmentcopy( DepartmentID,GroupName,Name,temp)
SELECT DepartmentID,
GroupName,
Name,
CASE
WHEN DepartmentID = 1 AND Name = 'Engineering and Research' THEN 'sucessful'
ELSE 'unsucessful'
END
FROM HumanResources.department
``` | case statement in select query in sql | [
"",
"sql",
"sql-server",
""
] |
Hi, I am in need of a script that updates the rows in a table from another table based on an ID and after that deletes from the second table the data it copied.
These are the tables I have
```
Documents
--------
DocumentID(PK)
RealFileName
FileName
ImageDocuments
--------------
ImageDocumentId(PK)
DocumentId(FK)
OriginalFileName
StorageFileName
```
Mapping between the two tables is one to one.
At the moment I have the script that updates the table, but I do not know how to proceed to delete the values from the documents table. This is what I have so far:
```
UPDATE [dbo].[ImageDocuments]
SET [dbo].[ImageDocuments].[OriginalFileName] = d.FileName,
[dbo].[ImageDocuments].[StorageFileName] = d.RealName
FROM [dbo].[ImageDocuments] as fu,
[dbo].[Documents] as d
WHERE fu.DocumentID = d.DocumentID
```
How can I delete from the documents table the rows that were added to the ImageDocuments table?
**UPDATE**
I created a delete query; can anyone please validate whether this will work correctly:
```
DELETE [dbo].[Documents]
FROM [dbo].[ImageDocuments] as fu
INNER JOIN [dbo].[Documents] as d ON d.DocumentID = fu.DocumentID
``` | Use this:
```
DECLARE @Ids TABLE (DocumentID int);

UPDATE [dbo].[ImageDocuments]
SET [dbo].[ImageDocuments].[OriginalFileName] = d.FileName,
    [dbo].[ImageDocuments].[StorageFileName] = d.RealName
OUTPUT INSERTED.DocumentId
INTO @Ids
FROM [dbo].[ImageDocuments] as fu,
     [dbo].[Documents] as d
WHERE fu.DocumentID = d.DocumentID;

DELETE FROM Documents
WHERE DocumentID IN (SELECT DocumentID FROM @Ids);
``` | I think this should work.
As you already copied all rows in ImageDocuments you can safely delete all the DocumentID in Documents
```
UPDATE [dbo].[ImageDocuments]
SET [dbo].[ImageDocuments].[OriginalFileName] = d.FileName,
[dbo].[ImageDocuments].[StorageFileName] = d.RealName
FROM [dbo].[ImageDocuments] as fu,
[dbo].[Documents] as d
WHERE fu.DocumentID = d.DocumentID;
Delete From Documents where Documents.DocumentID in
(Select ImageDocuments.DocumentID from ImageDocuments);
``` | How to migrate data to another table | [
"",
"sql",
"sql-server",
""
] |
```
declare @A table (
name varchar(10)
)
declare @B table (
name varchar(10)
)
insert into @A(name) values('A')
insert into @A(name) values('A')
insert into @A(name) values('A')
insert into @A(name) values('B')
insert into @A(name) values('B')
insert into @A(name) values('B')
insert into @A(name) values('B')
insert into @B(name) values('B')
insert into @B(name) values('B')
insert into @B(name) values('B')
insert into @B(name) values('B')
insert into @B(name) values('B')
insert into @B(name) values('B')
insert into @B(name) values('B')
insert into @B(name) values('A')
insert into @B(name) values('A')
```
@A has 3A's and 4B's
@B has 2A's and 7B's
you can say that
```
@A @B
A 3 2
B 4 7
```
The deal is we have to get the highest counts into one table.
like 3A's of @A and 7B's of @B should be in one table.
@C should contain
# @C
```
A
A
A
B
B
B
B
B
B
B
```
I want to have the A's from @A and the B's from @B in the table @C.
how?
If possible, give me an optimal SQL query for the above example. | New answer, now the requirement has been clarified:
```
;With x as (
Select
name,
row_number() over (partition by name order by name) rn
From
@A
Union
Select
name,
row_number() over (partition by name order by name)
From
@B
)
Insert Into
@C
Select
name
From
x
```
**`Example Fiddle`**
Previous answer:
```
Select
name,
max(c)
From (
Select
name,
count(*) c
From
@A
Group By
name
union all
Select
name,
count(*)
From
@B
Group By
name
) x
Group By
name
```
**`Example Fiddle`** | If the actual datasets that you need to work with indeed have just one column, I believe [@Laurence's suggestion](https://stackoverflow.com/a/18813654/297408) should perfectly suffice.
However, if they have other columns that need to be returned too and those columns do not have the same data like those you've shown us, you could try this approach:
1. Count rows per `name` partition in each table.
2. Combine (`UNION ALL`) the two sets and get the maximum count value per `name` partition in the combined row set.
3. Get the rows where the count value matches the maximum one.
Here's one implementation of the above:
```
WITH counted AS (
SELECT *, cnt = COUNT(*) OVER (PARTITION BY name)
FROM A
UNION ALL
SELECT *, cnt = COUNT(*) OVER (PARTITION BY name)
FROM B
),
compared AS (
SELECT *, maxcnt = MAX(cnt) OVER (PARTITION BY name)
FROM counted
)
SELECT name, data
FROM compared
WHERE cnt = maxcnt
;
```
You can take a look at [this SQL Fiddle demo](http://sqlfiddle.com/#!3/3c0cd/1) too. | SQL Query for putting highest counters of two tables in one table | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm currently working with SQL Server 2008 R2, and I have only `READ` access to a few tables that house production data.
I'm finding that in many cases, it would be extremely nice if I could run something like the following, and get the total record count back that was affected :
```
USE DB
GO
BEGIN TRANSACTION
UPDATE Person
SET pType = 'retailer'
WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
ROLLBACK TRANSACTION
```
However, seeing as I don't have the `UPDATE` permission, I cannot successfully run this script without getting :
```
Msg 229, Level 14, State 5, Line 5
The UPDATE permission was denied on the object 'Person', database 'DB', schema 'dbo'.
```
**My questions :**
* Is there any way that my account in SQL Server can be configured so that if I want to run an `UPDATE` script, it would automatically be wrapped in a transaction with a rollback (so no data is actually affected)
I know I could make a copy of that data and run my script against a local SSMS instance, but I'm wondering if there is a permission-based way of accomplishing this. | I don't think there is a way to bypass SQL Server permissions. And I don't think it's a good idea to develop on a production database anyway. It would be much better to have a development version of the database you work with.
---
If the number of affected rows is all you need then you can run select instead of update.
For example:
```
select count(*)
from Person
where pTrackId = 20
AND pWebId LIKE 'rtlr%';
``` | If you are only after the amount of rows that would be affected by this update, that would be the same amount of rows that currently comply with the `WHERE` clause.
So you can just run a `SELECT` statement as such:
```
SELECT COUNT(pType)
FROM Person WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
```
And you'd get the count of potentially affected rows. | SQL Server Update Permissions | [
"",
"sql",
"sql-server-2008",
""
] |
Consider a table entitled 'Calls' that, among other things, contains the following columns:
```
CalledNumber | CallBegin
-----------------------------------
004401151234567|10/08/2013 09:06:53
004303111238493|15/09/2013 14:56:29
```
and so on...
How would one go about picking out the hour that, averaged over a date range, holds the highest number of rows? The goal is to find the 'Busy Hour' for each quarter in a year. SQL really isn't my forte here, and I'm looking at manipulating up to 9.6 million rows at times, although execution time isn't a key problem. | [`DATEPART`](http://msdn.microsoft.com/en-us/library/ms174420.aspx) is what you're looking for:
```
SELECT
DATEPART(hour, CallBegin), COUNT(*) as NumberOfCalls
FROM
Test
WHERE
CallBegin BETWEEN '2013-01-01' AND '2013-12-31'
GROUP BY
DATEPART(hour, CallBegin)
ORDER BY
NumberOfCalls DESC
```
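Since the goal is the busy hour per quarter, the same idea extends with a second `DATEPART` in the grouping — a sketch reusing the names above:

```
SELECT
    DATEPART(quarter, CallBegin) AS Qtr,
    DATEPART(hour, CallBegin)    AS Hr,
    COUNT(*)                     AS NumberOfCalls
FROM
    Test
WHERE
    CallBegin BETWEEN '2013-01-01' AND '2013-12-31'
GROUP BY
    DATEPART(quarter, CallBegin), DATEPART(hour, CallBegin)
ORDER BY
    Qtr, NumberOfCalls DESC
```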
Working [**DEMO**](http://sqlfiddle.com/#!3/1d08b/2) | You can use `DATEPART()` and `CONVERT()`, I'm assuming CallBegin is not stored as datetime:
```
SELECT YEAR(CONVERT(DATETIME,CallBegin,103))'Year'
, DATEPART(quarter,CONVERT(DATETIME,CallBegin,103))'Quarter'
, DATEPART(hour,CONVERT(DATETIME,CallBegin,103))'Hour'
, COUNT(*)
FROM Calls
GROUP BY YEAR(CONVERT(DATETIME,CallBegin,103))
, DATEPART(quarter,CONVERT(DATETIME,CallBegin,103))
, DATEPART(hour,CONVERT(DATETIME,CallBegin,103))
```
If you want to span years just eliminate the year portion:
```
SELECT DATEPART(quarter,CONVERT(DATETIME,CallBegin,103))'Quarter'
, DATEPART(hour,CONVERT(DATETIME,CallBegin,103))'Hour'
, COUNT(*)
FROM Calls
GROUP BY DATEPART(quarter,CONVERT(DATETIME,CallBegin,103))
, DATEPART(hour,CONVERT(DATETIME,CallBegin,103))
``` | Get hour with highest row count | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table named `test` in which two fields and their data are listed as below
```
id test_no
--- -------
1 2
2 2
3 2
4 2
5 3
6 3
```
Now I want to get the maximum `test_no`; in my case I want to get 3 and 3 (because there are two rows with 3),
so I use this query but it gives me 3 and 2
I want this
```
id test_no
--- -------
5 3
6 3
```
my query is
```
SELECT MAX( `test_no` )
FROM `test`
GROUP BY `test_no`
``` | Try this way:
```
select `id`,`test_no`
from `test`
where `test_no` = (
SELECT MAX( `test_no` )
FROM `test`
)
``` | You could try this query:
```
select * from test
where test_no = (select max(test_no) from test)
``` | SQL query to get max id | [
"",
"sql",
""
] |
I have the table `ACCOUNT` with the following structure:
```
ACCOUNT_ID | ACCOUNT_STATUS|
004460721 | 2 |
042056291 | 5 |
601272065 | 3 |
```
I need to update the three rows at once using one statement such that the second column will be 5, 3, 2 respectively.
I used the following query, but it seems there is something missing
```
UPDATE ACCOUNT
SET ACCOUNT_STATUS = CASE
WHEN ACCOUNT_STATUS = '004460721' THEN 5
WHEN ACCOUNT_STATUS = '042056291' THEN 3
WHEN ACCOUNT_STATUS = '601272065' THEN 2
WHERE ACCOUNT_ID IN ('004460721','042056291','601272065')
```
My question: is this way correct? If not, can I use a `CASE WHEN` statement (and how), or is my only choice to use a `SUB-SELECT` to achieve that in one statement?
Kindly note this is for `ORACLE SQL`. | OK, based on the fiddle you have given I have tried these and they worked for me
```
create table account( account_id number primary key,
account_status varchar2(30));
insert into account values(1, '5');
insert into account values(2, '3');
insert into account values(3, '2');
select * from account
update account
set account_status= case
when account_id=1 then '2'
when account_id=2 then '5'
when account_id=3 then '3'
END
select * from account
```
I didn't use the where condition | try the following
```
update account
set account_status = CASE account_id
WHEN 004460721 then 5
WHEN 042056291 THEN 3
WHEN 601272065 THEN 2
END
WHERE account_id IN (004460721, 042056291, 601272065 )
;
``` | Update multiple rows using CASE WHEN - ORACLE | [
"",
"sql",
"oracle",
""
] |
I was having a problem retrieving from SQL Server, so I posted this [question][1]
I did not get any suitable answers. So I have changed the column datatype from `datetime` to `varchar` and now it works fine.
```
SELECT *
FROM test
WHERE (timeStamp BETWEEN '05-09-2013 18:23:57' AND '05-09-2013 18:23:59')
```
But my question is: if the `varchar` datatype can play the role of `datetime`, and in `varchar` we can also store strings, then why does SQL provide a `datetime` datatype? I know `varchar` occupies more space than `datetime`. I would like to know other reasons. | Change the datatype of your column to `datetime`. You can do your query if you use datetime instead of varchar in the where clause:
```
select *
from test
where timeStamp between convert(datetime, '2013-09-05 18:23:57', 120) and convert(datetime, '2013-09-05 18:23:59', 120)
```
I'm pretty sure it would work even with implicit cast if you use ISO format of date:
```
select *
from test
where timeStamp between '2013-09-05 18:23:57' and '2013-09-05 18:23:59'
```
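One concrete reason `datetime` exists: comparisons on `varchar` are lexicographic, character by character, not chronological. A small illustration (assuming a dd-mm-yyyy string format like in the question):

```
-- '05-12-2013' (5 Dec 2013) sorts BEFORE '06-01-2013' (6 Jan 2013) as text,
-- even though it is eleven months later as a date:
SELECT CASE WHEN '05-12-2013' < '06-01-2013'
            THEN 'text comparison says earlier'
            ELSE 'text comparison says later'
       END AS result;
```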
Here's more info about [cast and convert](http://technet.microsoft.com/en-us/library/ms187928.aspx). | Another reason apart from space is this:
**Datetime** has other functions for picking out the day, year, month, hours, minutes, seconds, etc., so that you don't have to write them yourself. If you use **varchar** then it will be your responsibility to provide functions for future use. You would have to use a split function to retrieve the part of the date you want.
Another is that a query on a **varchar** works slower compared to **Datetime** when you use conditions to compare month / day / year | varchar vs datetime in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have used row-oriented database design for a long time and, except for data warehouse projects and Big Data samples, I have not used column-oriented database design for an OLTP app.
My row oriented table looks like
```
ID, Make, Model, Month, Miles, Cost
1 BMW Z3 12 12000 100
```
Some people on our team are advocating column-oriented database design.
They suggest that all the column names should be property names in a Property table.
Then another table Quote will have two columns PropertyName and PropertyValue.
In the .NET code, we read each key, compare, and convert it to a strongly typed object. The code is really getting messy.
```
if (qwi.DomainCode == typeof(CoreBO.Base.iQQConstants.MBPCollateralInfo).Name)
{
if (qwi.RefCode == iQQConstants.MBPCollateralInfo.ENGINETYPE)
{
Aspiration = qwi.Value;
}
else if (qwi.RefCode == iQQConstants.MBPCollateralInfo.FUELTYPE)
{
FuelType = qwi.Value;
}
else if (qwi.RefCode == iQQConstants.MBPCollateralInfo.MAKE)
{
Make = qwi.Value;
}
else if (qwi.RefCode == iQQConstants.MBPCollateralInfo.MILEAGE)
{
int reading = 0;
bool success = int.TryParse(qwi.Value, out reading);
if (success)
{
OdometerReading = reading;
}
}
}
```
The argument for this column-oriented design is that we won't have to change the table schema and the stored proc (we are still using stored procs instead of Entity Framework).
Seems like we are heading into a real problem. Is column-oriented design well accepted in the industry? | I am having trouble with your terminology. You are describing an EAV structure (standing for Entity-Attribute-Value).
Aside: A "column-oriented" database usually refers to a database that stores each column separately from others (when I learned about databases, this was called "vertical partitioning", but I don't think that caught on). Examples include Paracel and Vertica.
An entity-attribute-value database is storing each attribute for an entity as a separate row.
The first problem that you have with your particular structure is typing. Some of the attributes are strings and some are numbers. This becomes a management nightmare in an EAV world. Either you store everything as strings (losing the ability to type check values and to guarantee that arithmetic works) or you include multiple columns for different types with a type column (making queries much more complicated).
Similarly, constraints and foreign key references are much harder to implement. Also, because you are repeating the entity id and attribute id on each row, the data often takes up more space. `NULL` values are typically quite space efficient.
On the OLTP side, you have another problem. When you want to insert an entity, you typically want to insert a bunch of attributes as well. One insert has now turned into many inserts, and you'll want to start wrapping these in transactions, affecting performance.
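To make the trade-off concrete, a hybrid layout for the question's vehicle data might look like this (a sketch — all table and column names are illustrative):

```
-- Typed, indexable columns for the attributes every row has:
CREATE TABLE Vehicle (
    ID      int IDENTITY PRIMARY KEY,
    Make    varchar(50)   NOT NULL,
    Model   varchar(50)   NOT NULL,
    [Month] int           NOT NULL,
    Miles   int           NOT NULL,
    Cost    decimal(10,2) NOT NULL
);

-- EAV side table only for optional, changing attributes:
CREATE TABLE VehicleProperty (
    VehicleID int          NOT NULL REFERENCES Vehicle(ID),
    Name      varchar(50)  NOT NULL,
    Value     varchar(255) NOT NULL,
    PRIMARY KEY (VehicleID, Name)
);
```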
Given all these shortcomings, you might think you should *never* use EAV models. But there is a place for them. They are particularly useful when attributes change over time. Say you have an application where users can put in their own information with tags. In such cases, a hybrid approach is the best solution: use a regular relational table with many columns for the common information, and an EAV table for the optional information for each entity. | Source: WIKI
1. Column-oriented organizations are more efficient when an aggregate needs to be computed over many rows but only for a notably smaller subset of all columns of data, because reading that smaller subset of data can be faster than reading all data.
2. Column-oriented organizations are more efficient when new values of a column are supplied for all rows at once, because that column data can be written efficiently and replace old column data without touching any other columns for the rows.
3. Row-oriented organizations are more efficient when many columns of a single row are required at the same time, and when row-size is relatively small, as the entire row can be retrieved with a single disk seek.
4. Row-oriented organizations are more efficient when writing a new row if all of the column data is supplied at the same time, as the entire row can be written with a single disk seek.
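Points 1 and 3 can be illustrated with both layouts held in plain Python structures (no database involved; purely to show which data each layout touches):

```python
# The same 3-column table stored two ways.
rows = [(i, f"name{i}", i * 2) for i in range(5)]             # row-oriented
cols = {
    "id":    [r[0] for r in rows],
    "name":  [r[1] for r in rows],
    "value": [r[2] for r in rows],                            # column-oriented
}

# Point 1: an aggregate over one column reads only that column here...
total_columnar = sum(cols["value"])
# ...while the row store must walk every full row to reach the same field.
total_rowwise = sum(r[2] for r in rows)

# Point 3: fetching one whole record is a single access in the row store...
record = rows[2]
# ...but requires touching every column list in the column store.
record_columnar = tuple(cols[c][2] for c in ("id", "name", "value"))

print(total_columnar, total_rowwise, record == record_columnar)
```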
In practice, row-oriented storage layouts are well-suited for OLTP-like workloads which are more heavily loaded with interactive transactions. Column-oriented storage layouts are well-suited for OLAP-like workloads (e.g., data warehouses) which typically involve a smaller number of highly complex queries over all data (possibly terabytes). | Column oriented database vs row oriented database | [
"",
"sql",
"database",
"database-design",
"architecture",
""
] |
Example: I have some `articles` and `comments` and I want to get something like this:
```
[{
title: "Article 1",
content: "Super long article goes here",
comments: [
{ author: "Troll", message: "You suck, Sir!" },
{ author: "SpamBot", message: "http://superawesomething.com/"}
]
},{
title: "Article 2",
content: "Another long article goes here",
comments: [ ... ]
}]
```
Right now I see two solutions:
1. Get the articles first, then the comments in a second query with some `IN` condition and finally add the comments to the respective articles.
2. Good old joins. For one I will still have to fiddle around with the data a lot to get into the structure I want. But beyond that I'm a little concerned since payload like `articles.content` will be transmitted for every comment - unless there is a way to do the join I am not aware of.
I'm hoping that my SQL-illiteracy makes me miss the simple solution. | You can do this, using aggregates and/or subqueries. Something like:
```
select title, content,
       json_agg(json_build_object('author', comments.author,
                                  'message', comments.message)) as comments
from articles
join comments on articles.article_id = comments.article_id
group by articles.article_id;
```
If you need this aggregated into one string/json/something - just wrap it into another aggregate query like this:
```
select json_agg(sub)
from (
select title, content,
       json_agg(json_build_object('author', comments.author,
                                  'message', comments.message)) as comments
from articles
join comments on articles.article_id = comments.article_id
group by articles.article_id) sub;
```
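The aggregate-per-article shape can be sanity-checked with SQLite from Python (`group_concat` standing in for `json_agg`, since SQLite's JSON functions are not always compiled in; data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (article_id INTEGER PRIMARY KEY, title TEXT, content TEXT);
CREATE TABLE comments (article_id INTEGER, author TEXT, message TEXT);
INSERT INTO articles VALUES (1, 'Article 1', 'Super long article goes here');
INSERT INTO comments VALUES (1, 'Troll', 'You suck, Sir!'),
                            (1, 'SpamBot', 'http://superawesomething.com/');
""")

# One row per article; the comments are folded into a single value,
# so the article content is transmitted once rather than per comment.
rows = conn.execute("""
    SELECT a.title, a.content,
           group_concat(c.author || ': ' || c.message, ' | ') AS comments
    FROM articles a
    JOIN comments c ON c.article_id = a.article_id
    GROUP BY a.article_id
""").fetchall()
print(rows[0][0])  # Article 1
```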
This is a Postgres query. I have no experience with MySQL. | Here's a MySQL solution:
```
SELECT CONCAT( '[ { '
,GROUP_CONCAT( CONCAT( 'title: "', REPLACE( a.title, '"', '\"' ), '"'
,', contents: "', REPLACE( a.content, '"', '\"' ), '"'
,', comments: ', a.comments
)
SEPARATOR ' }, { '
)
,' } ]'
)
FROM (SELECT a1.title
,a1.content
,CONCAT( '[ { '
,GROUP_CONCAT( CONCAT( 'author: "', REPLACE( c.author, '"', '\"' ), '"'
,', message: "', REPLACE( c.message, '"', '\"' ), '"'
)
SEPARATOR ' }, { '
)
,' } ]'
) as comments
FROM articles a1
LEFT OUTER
JOIN comments c
ON c.articleId = a1.articleId
GROUP BY a1.title, a1.content
) a
;
```
This will need some tweaking as strings get large. Would probably be best to return one row per article:
```
SELECT a1.title
,a1.content
,CONCAT( '[ { '
,GROUP_CONCAT( CONCAT( 'author: "', REPLACE( c.author, '"', '\"' ), '"'
,', message: "', REPLACE( c.message, '"', '\"' ), '"'
)
SEPARATOR ' }, { '
)
,' } ]'
) as comments
FROM articles a1
LEFT OUTER
JOIN comments c
ON c.articleId = a1.articleId
GROUP BY a1.title, a1.content
```
SQLFiddle: <http://sqlfiddle.com/#!2/5edcd/13> | Construct nested object graph from SQL hasmany relationship | [
"",
"mysql",
"sql",
"postgresql",
""
] |
I have 3 tables shown below in MS Access 2010:
Table: **devices**
```
id | device_id | Company | Version | Revision |
-----------------------------------------------
1 | dev_a | Almaras | 1.5.1 | 0.2A |
2 | dev_b | Enigma | 1.5.1 | 0.2A |
3 | dev_c | Almaras | 1.5.1 | 0.2C |
*Field: device_id is Primary Key Unique String
*Field ID is just an auto-number column
```
**Table: activities**
```
id | act_id | act_date | act_type | act_note |
------------------------------------------------
1 | dev_a | 07/22/2013 | usb_axc | ok |
2 | dev_a | 07/23/2013 | usb_axe | ok | (LAST ROW for dev_a)
3 | dev_c | 07/22/2013 | usb_axc | ok | (LAST ROW for dev_c)
4 | dev_b | 07/21/2013 | usb_axc | ok | (LAST ROW for dev_b)
*Field: act_id contains device_id; NOT UNIQUE
*Field ID is just an auto-number column
```
**Table: matrix**
```
id | mat_id | tc | ts | bat | cycles |
-----------------------------------------
1 | dev_a | 2811 | 10 | 99 | 200 |
2 | dev_a | 2911 | 10 | 97 | 400 |
3 | dev_a | 3007 | 10 | 94 | 600 |
4 | dev_a | 3210 | 10 | 92 | 800 | (LAST ROW for dev_a)
5 | dev_b | 1100 | 5 | 98 | 100 |
6 | dev_b | 1300 | 8 | 93 | 200 |
7 | dev_b | 1411 | 11 | 90 | 300 | (LAST ROW for dev_b)
8 | dev_c | 4000 | 27 | 77 | 478 | (LAST ROW for dev_c)
*Field: mat_id contains device_id; NOT UNIQUE
*Field ID is just an auto-number column
```
Is there any way to query tables to get results as shown below (each device from devices and only last row added [see example output table] from each of the other two tables):
**Query Results:**
```
device_id | Company | act_date | act_type | bat | cycles |
------------------------------------------------------------
device_a | Almaras | 07/23/2013 | usb_axe | 92 | 800 |
device_b | Enigma | 07/21/2013 | usb_axc | 90 | 300 |
device_c | Almaras | 07/22/2013 | usb_axc | 77 | 478 |
```
Any ideas? Thank you in advance for reading and helping me out :) | I think this is what you want:
```
SELECT a.device_id, a.Company,
b.act_date, b.act_type,
c.bat, c.cycles
FROM ((((devices AS a
INNER JOIN activities AS b
ON a.device_id = b.act_id)
INNER JOIN matrix AS c
ON a.device_id = c.mat_id)
INNER JOIN
(
SELECT act_id, MAX(act_date) AS max_date
FROM activities
GROUP BY act_id
) AS d ON b.act_id = d.act_id AND b.act_date = d.max_date)
INNER JOIN
(
SELECT mat_id, MAX(tc) AS max_tc
FROM matrix
GROUP BY mat_id
) AS e ON c.mat_id = e.mat_id AND c.tc = e.max_tc)
```
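The join-back-to-a-`MAX()` pattern used for subquery `d` can be verified in isolation (SQLite from Python; the dates are rewritten as ISO strings so plain string comparison orders them correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activities (act_id TEXT, act_date TEXT, act_type TEXT);
INSERT INTO activities VALUES
  ('dev_a', '2013-07-22', 'usb_axc'),
  ('dev_a', '2013-07-23', 'usb_axe'),
  ('dev_b', '2013-07-21', 'usb_axc');
""")

# Keep only each device's latest activity by joining back to a MAX() subquery.
latest = conn.execute("""
    SELECT a.act_id, a.act_date, a.act_type
    FROM activities a
    JOIN (SELECT act_id, MAX(act_date) AS max_date
          FROM activities
          GROUP BY act_id) d
      ON a.act_id = d.act_id AND a.act_date = d.max_date
    ORDER BY a.act_id
""").fetchall()
print(latest)  # [('dev_a', '2013-07-23', 'usb_axe'), ('dev_b', '2013-07-21', 'usb_axc')]
```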
The subqueries `d` and `e` separately get the latest row for each device: `d` by `MAX(act_date)` in `activities`, and `e` by `MAX(tc)` in `matrix`. | Try
```
SELECT devices.device_id, devices.Company, activities.act_date, activities.act_type, matrix.bat, matrix.cycles
FROM devices
LEFT JOIN activities
ON devices.device_id = activities.act_id
LEFT JOIN matrix
ON devices.device_id = matrix.mat_id;
``` | MS Access SQL query from 3 tables | [
"",
"sql",
"database",
"ms-access",
"vba",
"ms-access-2010",
""
] |
How can I use "greater than or equal to" and "less than or equal to" instead of `between` in SQL?
I'm checking a date variable in SQL.
I tried like this:
```
Date between coalesce(@fromDate,Date) and coalesce(@toDate,Date)
```
but this does not work if the user does not enter one of the dates (fromDate or toDate),
so I need to convert the condition above into "greater than or equal to" and "less than or equal to".
How can I write this syntax in SQL? | ```
IF @fromDate IS NULL
BEGIN
SET @fromDate = '1900-01-01';
END;
IF @toDate IS NULL
BEGIN
SET @toDate = '2099-01-01';
END;
SELECT things
FROM table
WHERE date_field BETWEEN @fromDate AND @toDate;
```
This code will essentially give you an arbitrarily large range to check which *should* give you reasonable performance and return all results (assuming that's what you want when neither value is supplied).
This code can be shortened to:
```
SELECT things
FROM table
WHERE date_field BETWEEN Coalesce(@fromDate, '1900-01-01') AND Coalesce(@toDate, '2099-01-01');
```
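Here is the shortened form exercised end to end (SQLite from Python; ISO date strings so `BETWEEN` compares correctly, and `?` placeholders standing in for the T-SQL variables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (date_field TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("2013-01-15",), ("2013-06-01",), ("2014-02-10",)])

def in_range(from_date, to_date):
    # A NULL bound falls back to an arbitrarily wide default, as in the answer.
    return [r[0] for r in conn.execute("""
        SELECT date_field FROM t
        WHERE date_field BETWEEN coalesce(?, '1900-01-01')
                             AND coalesce(?, '2099-01-01')
        ORDER BY date_field
    """, (from_date, to_date))]

print(in_range(None, "2013-12-31"))  # ['2013-01-15', '2013-06-01']
print(in_range(None, None))          # all three rows come back
```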
But I kept the verbose version to illustrate. | Try this
```
SELECT Date FROM TableName WHERE Date > @fromDate AND Date < @toDate
``` | Greater than and less than or equal to in SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a table:
```
select * from tmp_dib;
1360 DIBAKAR SE1
1360 DIBAKAR SE
1361 JAI SE1
1361 JAI SE
1365 NITISH SE1
1365 NITISH SE
```
where `SE1` are the old record and `SE` is new record.
By the below query I am getting the New record.
```
select a.* from tmp_dib a where rowid >
(select min(rowid) from tmp_dib b where a.ID=b.ID);
1360 DIBAKAR SE
1361 JAI SE
1365 NITISH SE
```
Please help me with a query to get the old records. I want to fetch the old records and then delete them.
My desired output:
```
1360 DIBAKAR SE1
1361 JAI SE1
1365 NITISH SE1
```
This is a sample table (not the actual table/data). I have a table with around 10k rows and it is on a live system, so I need to identify the entries with old records. | Try this to fetch and delete the old records using rowid:
```
DELETE FROM tmp_dib A WHERE ROWID <
(SELECT max(ROWID) FROM tmp_dib b WHERE A.ID=b.ID);
``` | Wouldn't you reference the third column, as opposed to using the id's?
You haven't specified what the column names are but I'll assume it's called Column3
```
DELETE TMP_DIB
WHERE Column3 = 'SE1'
``` | how to fetch and delete old duplicate records from table in sql | [
"",
"sql",
"oracle",
""
] |
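The rowid trick in the accepted answer above carries over almost verbatim to SQLite, which also gives every table an implicit `rowid` (sketched from Python; the column names `id`/`name`/`code` are invented here, since the question never names them, and SQLite's `DELETE` takes the bare table name rather than an alias):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tmp_dib (id INTEGER, name TEXT, code TEXT);
INSERT INTO tmp_dib VALUES
  (1360, 'DIBAKAR', 'SE1'), (1360, 'DIBAKAR', 'SE'),
  (1361, 'JAI',     'SE1'), (1361, 'JAI',     'SE');
""")

# Old duplicates were inserted first, so they have the smaller rowid per id.
conn.execute("""
    DELETE FROM tmp_dib
    WHERE rowid < (SELECT max(rowid) FROM tmp_dib b WHERE tmp_dib.id = b.id)
""")
remaining = conn.execute(
    "SELECT id, name, code FROM tmp_dib ORDER BY id").fetchall()
print(remaining)  # [(1360, 'DIBAKAR', 'SE'), (1361, 'JAI', 'SE')]
```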
I need some help with an SQL query; I'm a bit rusty with this. The WordPress database I'm using is set up like this:
```
post_id, meta_key, meta_value
33, opt_in, 1
33, email, john@hotmail.com
```
How can I collect all email addresses where opt_in has the value '1'? The post_id is what matches the two rows. I have looked at SQL UNION, and this is the closest I have got:
```
SELECT post_id
FROM wp_postmeta
WHERE meta_key='opt_in' AND meta_value='1'
UNION
SELECT meta_value
FROM wp_postmeta
WHERE meta_key='email'
``` | You can do it with JOIN
```
SELECT p2.meta_value email
FROM wp_postmeta p1 JOIN wp_postmeta p2
ON p1.post_id = p2.post_id
AND p1.meta_key = 'opt_in'
AND p2.meta_key = 'email'
WHERE p1.meta_value = 1
```
If you were to have
```
| POST_ID | META_KEY | META_VALUE |
|---------|----------|-------------------|
| 33 | opt_in | 1 |
| 33 | email | john@hotmail.com |
| 34 | opt_in | 0 |
| 34 | email | helen@hotmail.com |
| 35 | opt_in | 1 |
| 35 | email | mark@hotmail.com |
```
Query output would be:
```
| EMAIL |
|------------------|
| john@hotmail.com |
| mark@hotmail.com |
```
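The self-join can be reproduced end to end with SQLite from Python (same table shape, with the sample rows from the illustration above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE wp_postmeta (post_id INTEGER, meta_key TEXT, meta_value TEXT);
INSERT INTO wp_postmeta VALUES
  (33, 'opt_in', '1'), (33, 'email', 'john@hotmail.com'),
  (34, 'opt_in', '0'), (34, 'email', 'helen@hotmail.com'),
  (35, 'opt_in', '1'), (35, 'email', 'mark@hotmail.com');
""")

# One alias per meta_key, matched on post_id: p1 filters opt_in, p2 yields email.
emails = [r[0] for r in conn.execute("""
    SELECT p2.meta_value
    FROM wp_postmeta p1
    JOIN wp_postmeta p2 ON p1.post_id = p2.post_id
                       AND p1.meta_key = 'opt_in'
                       AND p2.meta_key = 'email'
    WHERE p1.meta_value = '1'
    ORDER BY p2.post_id
""")]
print(emails)  # ['john@hotmail.com', 'mark@hotmail.com']
```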
Here is a **[SQLFiddle](http://sqlfiddle.com/#!2/d2abf/3)** demo. | I think what you need is the `meta_value` field, not `post_id`:
```
SELECT meta_value
FROM wp_postmeta
WHERE meta_key = 'email' AND
post_id = (SELECT post_id
FROM wp_postmeta
WHERE meta_key = 'opt_in' AND
meta_value = 1)
``` | SQL for the wordpress database | [
"",
"sql",
""
] |