Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
Say I have a table A with attributes X
How do I find the X's with the largest occurrences? (There can be multiple that have the same highest occurrence)
i.e.
table A
```
X
--
'a'
'b'
'c'
'c'
'b'
```
I would want to get back
```
X
--
'b'
'c'
```
I can't use the keyword ALL in Sqlite so I'm at a loss.
I thought of getting the count of each X, then using ORDER BY ... DESC so that the biggest is at the top, and then LIMIT with a comparison to check whether the values below the first tuple are equal (which would mean they are just as common), but I'm not sure about the LIMIT syntax and whether I can have a condition like that
Please give a hint and not the answer; are there any resources I can reference so I can figure out a way? | This takes care of multiple values having the maximum occurrence
```
SELECT X FROM yourTable
GROUP BY X
HAVING COUNT(*) = (
SELECT MAX(Cnt)
FROM(
SELECT COUNT(*) as Cnt
FROM yourTable
GROUP BY X
) tmp
)
```
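As a quick sanity check, the same query can be run on the question's sample data with Python's built-in `sqlite3` module (the table name `A` follows the question; this is only an illustration):

```python
import sqlite3

# Build the sample table from the question and run the query above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A (X TEXT)")
con.executemany("INSERT INTO A VALUES (?)",
                [("a",), ("b",), ("c",), ("c",), ("b",)])
rows = con.execute("""
    SELECT X FROM A
    GROUP BY X
    HAVING COUNT(*) = (
        SELECT MAX(Cnt)
        FROM (
            SELECT COUNT(*) AS Cnt
            FROM A
            GROUP BY X
        ) tmp
    )
""").fetchall()
print(sorted(r[0] for r in rows))  # ['b', 'c']
```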
[**SQL FIDDLE**](http://sqlfiddle.com/#!5/5d642/7) | Query like
```
SELECT x,COUNT(x) AS cnt FROM a
GROUP BY x
ORDER BY cnt DESC;
```
and stop processing the result rows when `cnt` changes. | SQLite How to find the most common occurrences of a value | [
"",
"sql",
"sqlite",
""
] |
I have read tips about sql server database on <http://www.sql-server-performance.com/2007/clustered-indexes>
In conclusion section author mentioned:
"Since you can only create one clustered index per table, take extra time to carefully consider how it will be used."
My question is:
**Why should only one clustered index per table be created in SQL Server?** | **Clustered Index:**
A clustered index defines the way data is physically ordered on disk.
There can be only one physical ordering of the data,
hence there can be only one clustered index per table.
**Why care about a clustered index?**
With a clustered index on the table, data can often be retrieved much faster, because SQL Server doesn't have to read all the data (this depends on the query).
But when it applies, data retrieval is much faster.
**NOTE**: You can, however, create more than one non-clustered index on a single table. | This is simply SQL Server's implementation decision. Theoretically, there can be any number of clustering indexes. MongoDB and MyISAM have no clustering indexes and store data in a flat file. InnoDB for MySQL has one clustering index, the primary key, which may be hidden if a primary key is not declared. TokuDB for MySQL and TokuMX (both of which I work on) allow users to have multiple clustering indexes, with the implicit tradeoff being more disk space used for faster queries. | Why only one clustered index per table should be created in sql server? | [
"",
"sql",
"sql-server",
"database-administration",
""
] |
Apologies, I haven't slept, so just want to clarify something in my head.
A team can have many users, but a user can be a member of several teams.
Is this a many-to-many relationship? If so, do I need a third table to join them in MySQL? | ## Many to many relationships
**Many to many relationships** exist, and you're right to point out that they work somewhat differently from many-to-one / one-to-many. So it's not **at all** a stupid question.
Most of the time, you will need to add extra fields that are characteristic of the relation, because you'll want some **information** about the relationships themselves.
Whether or not you need extra information (and thus extra fields) will determine whether you need a third table.
## Real life example : men & women
Let's say you have Men and Women tables. A man can date many women through the course of his life, and reciprocally. So, you have a typical many-to-many relationship. You can model it without a third table.
But now, suppose you want to add some information: for each relationship (be it in a romantic sense or in a database sense (never thought I would say this sentence one day in my life)), you want to record when the union started and when it ended.
THEN you need a third table.
## Structure
Let's have a look at what our real life example structure would look like.
I wrote this in simplified [Doctrine2-inspired Yaml](http://docs.doctrine-project.org/en/latest/), the model that runs [Symfony2](http://symfony.com/) by default:
```
tableName: Man
fields:
id:
type: int
id: true
firstName:
type: string
dateOfBirth:
type: datetime
favSport:
type: string
womenHePrefers:
type: string
oneToMany:
relationships:
target: ManMeetsWoman
mappedBy: man
cascade: remove
tableName: Woman
fields:
id:
type: int
id: true
firstName:
type: string
dateOfBirth:
type: datetime
favAuthor:
type: string
menShePrefers:
type: string
oneToMany:
relationships:
target: ManMeetsWoman
mappedBy: woman
cascade: remove
tableName: ManMeetsWoman
fields:
id:
type: int
id: true
dateOfEncounter:
type: datetime
dateOfBreakUp:
type: datetime
manyToOne:
man:
target: Man
inversedBy: relationships
woman:
target: Woman
inversedBy: relationships
```
## Key points
* Your **Man and Woman** tables both have a **oneToMany** relationship to your ManMeetsWoman table
* Your **ManMeetsWoman** table will have a
**manyToOne** relationship to both other tables
* Ideally, you will place **cascading rules** to ensure that when a Man is deleted from the Man table, relationships referencing him will also disappear from the third table.
## What if you don't want another table
Then you will have to declare actual **many to many relationships**.
It will work pretty much the same, but **you won't have any additional information** about what existed between this man and this woman, other than yeah, something happened between them. We don't know what, we don't know when, we don't know for how long.
To do this, you will replace the oneToMany and manyToOne by this:
```
# Man:
ManyToMany:
woman:
target: Woman
# Woman:
ManyToMany:
man:
target: Man
```
But then it begins to get tricky to manage without the support of a good ORM. So, the third table usually remains the best and easiest solution. Plus, it allows you to add information about the relationship itself. | Yes. And you almost certainly need to have a third 'relationship' table.
The easiest option is to have a relationship table that indicates users on teams.
Teams == Rel === Users
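A minimal sketch of that layout, run here through Python's `sqlite3` module purely as an illustration (all table and column names below are made up, not prescribed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE teams (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    -- the third 'relationship' (junction) table
    CREATE TABLE team_users (
        team_id INTEGER REFERENCES teams(id),
        user_id INTEGER REFERENCES users(id),
        PRIMARY KEY (team_id, user_id)
    );
    INSERT INTO teams VALUES (1, 'Red'), (2, 'Blue');
    INSERT INTO users VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO team_users VALUES (1, 1), (1, 2), (2, 1);
""")
# All users on team 1:
rows = con.execute("""
    SELECT u.name
    FROM users u
    JOIN team_users tu ON tu.user_id = u.id
    WHERE tu.team_id = 1
""").fetchall()
print(sorted(r[0] for r in rows))  # ['Ann', 'Bob']
```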
To query for users on a team, search rel for all usersIds for a teamId and vice versa to check for teams that a user belongs to. | Simple many-to-many relationships | [
"",
"mysql",
"sql",
""
] |
I need to mark duplicated records as invalid, but keep only one valid record.
my table structure is as follows:
**field Name**
```
id (primary key)
phoneNo
IsValid
```
Sample data is as follows:
```
id | phoneNo | IsValid
--------------------------------------
1 | 011 | 1
2 | 011 | 1
3 | 022 | 1
4 | 022 | 1
5 | 033 | 1
6 | 011 | 1
```
my **output** should be something like this :
Sample data is as follows:
```
id | phoneNo | IsValid
--------------------------------------
1 | 011 | 0
2 | 011 | 0
3 | 022 | 0
4 | 022 | 1
5 | 033 | 1
6 | 011 | 1
```
Whether it's the minimum or the maximum id does NOT matter; the only condition is that exactly **one** row should remain valid.
What I tried so far:
```
update phone set IsValid=0 where id in
(select id from phone
where id not in (
select MAX(id)id
from phone
group by phoneNo
having COUNT(phoneNo)>1))
and IsValid=1
```
In this case my output is as follows:
```
id | phoneNo | IsValid
--------------------------------------
1 | 011 | 0
2 | 011 | 0
3 | 022 | 0
4 | 022 | 1
5 | 033 | 0
6 | 011 | 1
```
Here the **033** record is also marked as 0. How can I solve this? | Remove the HAVING clause from your query (there is no need for it) and try:
```
update phone set IsValid=0 where id in
(select id from phone
where id not in (
select MAX(id)id
from phone
group by phoneNo))
and IsValid=1
``` | Try this query:
```
UPDATE phone t1 SET IsValid = 0 WHERE
t1.ID NOT in (SELECT MAX(t2.ID) FROM phone t2 GROUP BY t2.PHONENO);
``` | Update record with count as a condition | [
"",
"sql",
"sql-server-2005",
""
] |
I am constantly selecting columns from a table after trimming them like the following:
```
SELECT TOP 1 RTRIM(LTRIM([UN_DataIN])) FROM [Names]
```
This is returning the name `Fadi`
```
SELECT TOP 1 RTRIM(LTRIM([UN_DataIN])), LEN(RTRIM(LTRIM([UN_DataIN]))) FROM [Names]
```
When I select the length of the trimmed column, I get back 10,
which means `RTRIM` and `LTRIM` are not doing their jobs.
Is there an alternative to them? | > UN\_DataIN == 0x45062706470631062920292029202920292029202000
So, presuming Arabic, your string ends with Unicode paragraph separators (*U+2029*) followed by a single space, all of which you need to remove:
```
select rtrim(replace(UN_DataIN, nchar(0x2029), '')) + '!'
```
# ماهر! | This may work for you; it solved my problem too. ^-^
```
select rtrim(ltrim(replace(replace(replace(colname,char(9),' '),char(10),' '),char(13),' ')))
from yourtable
```
source : <http://www.sqlservercentral.com/Forums/Topic288843-8-1.aspx> | SQL Server - RTRIM(LTRIM(column)) does not work | [
"",
"sql",
"sql-server",
"trim",
""
] |
I am using sql 2008
My data set looks like
```
Entity Type1 Type2 Balance
1 A R 100
1 B Z 200
1 C R 300
2 A X 1000
2 B Y 2000
```
My output should look like
```
Entity A-Type2 A-Balance B-Type2 B-Balance C-Type2 C-Balance
1 R 100 Z 200 R 300
2 X 1000 Y 2000 0
```
Now I started writing a pivot query, and I think I can get away with MAX because there should be one record per Entity/Type1 combination. But I cannot figure out how to do two fields in one pivot. Is this possible? Is this something that a CTE could help out with? | Easiest is the MAX idea, but with a CASE statement, e.g.:
```
SELECT
Entity,
MAX(CASE WHEN Type1 = 'A' THEN Type2 ELSE NULL END) AS AType2,
MAX(CASE WHEN Type1 = 'A' THEN Balance ELSE NULL END) AS ABalance,
MAX(CASE WHEN Type1 = 'B' THEN Type2 ELSE NULL END) AS BType2,
MAX(CASE WHEN Type1 = 'B' THEN Balance ELSE NULL END) AS BBalance,
MAX(CASE WHEN Type1 = 'C' THEN Type2 ELSE NULL END) AS CType2,
MAX(CASE WHEN Type1 = 'C' THEN Balance ELSE NULL END) AS CBalance
FROM
...
GROUP BY
Entity
```
In other words, only use the value when Type1 is a specific value (with other Type1 values getting a null). | You just use conditional aggregation for the pivoting like this:
```
select Entity,
max(case when Type1 = 'A' then Type2 end) as A_Type2,
max(case when Type1 = 'A' then Balance else 0 end) as A_Balance,
max(case when Type1 = 'B' then Type2 end) as B_Type2,
max(case when Type1 = 'B' then Balance else 0 end) as B_Balance,
max(case when Type1 = 'C' then Type2 end) as C_Type2,
max(case when Type1 = 'C' then Balance else 0 end) as C_Balance
from MyDataSet mds
group by Entity;
``` | Using Pivot or CTE to horizontalize a query | [
"",
"sql",
"sql-server-2008",
""
] |
I would like to produce the letters of the alphabet via SQL.
Something like this:
```
A
B
C
D
E
F
```
I have tried:
```
SELECT
'A','B','C'
```
But this just produces the letters side by side in columns. | ```
--
-- tally = 9466 rows in my db, select upper & lower alphas
--
;
with
cte_tally as
(
select row_number() over (order by (select 1)) as n
from sys.all_columns
)
select
char(n) as alpha
from
cte_tally
where
(n > 64 and n < 91) or
(n > 96 and n < 123);
go
```
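The same idea (generate a run of numbers, then map each number to a character) can also be sketched with a recursive CTE; here it is shown in SQLite through Python's `sqlite3` module purely for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Generate code points 65..90 and map them to 'A'..'Z'.
rows = con.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT 65
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 90
    )
    SELECT CHAR(n) AS alpha FROM nums
""").fetchall()
letters = [r[0] for r in rows]
print(letters[0], letters[-1], len(letters))  # A Z 26
```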
sys.all\_columns is a documented catalog view. It will be around for a while.
<http://technet.microsoft.com/en-us/library/ms177522.aspx>
It seems clear that the table, spt\_values, is undocumented and can be removed in the future without any comment from Microsoft. | Use table `spt_values` and convert values to chars
```
SELECT Char(number+65)
FROM master.dbo.spt_values
WHERE name IS NULL AND
number < 26
```
EDIT: This table is undocumented. But it's used by many system stored procedures, and it's extremely unlikely for this table to disappear, since all those procs would have to be rewritten. This would be like poking a sleeping lion. | create a list of the alphabet via SQL | [
"",
"sql",
"sql-server",
""
] |
I have a table in DB, with following columns:
* action\_type
* file\_id
* time
Now, I need to select all actions with a particular type and time and use file\_id to delete the file. The problem is that I might get several rows from that query, but I only need one: the one that was last added to the DB.
Suppose the example records in DB:
```
_id = 1, action_type = 1, file_id= 1, time = 1000
_id = 2, action_type = 1, file_id= 1, time = 2000
_id = 3, action_type = 1, file_id= 1, time = 3000
_id = 4, action_type = 1, file_id= 2, time = 1000
_id = 5, action_type = 1, file_id= 2, time = 2000
_id = 6, action_type = 1, file_id= 2, time = 3000
```
Now, I only need records with \_id=3 and \_id=6. How do I construct such query?
Note: I'm using SQLite. | OK, you can get the latest record for each distinct file\_id. Normally, this query should work fine in SQLite:
```
SELECT file_id,
_id,
action_type,
Max(time)
FROM test
GROUP BY file_id;
```
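This leans on SQLite's documented "bare column" behavior: when a query contains a single MAX() (or MIN()) aggregate, the other selected columns take their values from the row holding that maximum. A quick check through Python's `sqlite3` module (illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test "
            "(_id INTEGER, action_type INTEGER, file_id INTEGER, time INTEGER)")
con.executemany("INSERT INTO test VALUES (?, ?, ?, ?)", [
    (1, 1, 1, 1000), (2, 1, 1, 2000), (3, 1, 1, 3000),
    (4, 1, 2, 1000), (5, 1, 2, 2000), (6, 1, 2, 3000),
])
# For each file_id, the bare columns come from the row with MAX(time).
rows = con.execute("""
    SELECT file_id, _id, action_type, MAX(time)
    FROM test
    GROUP BY file_id
""").fetchall()
print(sorted(r[1] for r in rows))  # [3, 6]
```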
Edit:
OK, your query is more complex:
```
SELECT `_id`, `action_type`, `file_id`, `time` FROM testtable t1
GROUP BY file_id
HAVING time = (
SELECT MAX(time) FROM testtable t2
WHERE t2.file_id = t1.file_id
)
```
<http://sqlfiddle.com/#!5/9f936/2> | You can use a subquery to get this result:
```
SELECT _id, file_id
FROM table t
WHERE action_type = x
AND t.time = (SELECT Max(time)
FROM Table s
WHERE s.action_type = t.action_type
AND s.file_id = t.file_id)
```
See the following link for how to use subqueries:
<http://www.tutorialspoint.com/sqlite/sqlite_sub_queries.htm> | Select last record from similar in SQLite | [
"",
"android",
"sql",
"sqlite",
""
] |
I have a "Has-many-through" table `keyword_sentence` which contains links from the sentences to the keywords.
```
TABLE `keyword_sentence` {
`id` int(11) NOT NULL AUTO_INCREMENT,
`sentence_id` int(11) NOT NULL,
`keyword_id` int(11) NOT NULL,
`created` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `sentence_id` (`sentence_id`),
KEY `keyword_id` (`keyword_id`)
)
```
**How do I get the top 5 keywords per week?**
I would like to see which `keyword_id`'s are being used each week so I can watch for trending items. I currently have the following query which isn't quite working.
```
SELECT ks.keyword_id
FROM
keyword_sentence ks
WHERE ks.keyword_id IN (
SELECT DISTINCT ks2.keyword_id FROM keyword_sentence ks WHERE
from_unixtime(ks.created) >= CURRENT_DATE - INTERVAL 2 MONTH
AND
from_unixtime(ks.created) < CURRENT_DATE - INTERVAL 1 MONTH
)
ORDER BY COUNT(*) DESC
``` | Try this query
```
SELECT *
FROM (
SELECT *, @rowNo := if(@pv = week, @rowNo+1, 1) as rNo, @pv := week
FROM (
SELECT keyword_id, COUNT(*), YEARWEEK(FROM_UNIXTIME(created)) AS week
FROM keyword_sentence
WHERE
FROM_UNIXTIME(created) >= CURRENT_DATE - INTERVAL 2 MONTH
AND
FROM_UNIXTIME(created) < CURRENT_DATE - INTERVAL 1 MONTH
GROUP BY week, keyword_id
ORDER BY week, COUNT(*) DESC
) temp
JOIN (
SELECT @rowNo := 0, @pv := 0
) tempValue
) tmp
WHERE
tmp.rNo < 6
```
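On engines with window functions (MySQL 8+, SQLite 3.25+), the same greatest-n-per-group step can be written with ROW_NUMBER() instead of session variables. A compact sketch run through Python's `sqlite3` module, with made-up sample data and a simplified `week` column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keyword_sentence (keyword_id INTEGER, week TEXT)")
con.executemany("INSERT INTO keyword_sentence VALUES (?, ?)", [
    (1, "2014-01"), (1, "2014-01"), (2, "2014-01"),
    (3, "2014-02"), (3, "2014-02"), (3, "2014-02"), (4, "2014-02"),
])
# Rank keywords per week by count, then keep the top N of each week.
rows = con.execute("""
    SELECT week, keyword_id, cnt FROM (
        SELECT week, keyword_id, COUNT(*) AS cnt,
               ROW_NUMBER() OVER (
                   PARTITION BY week ORDER BY COUNT(*) DESC
               ) AS rn
        FROM keyword_sentence
        GROUP BY week, keyword_id
    ) ranked
    WHERE rn <= 1      -- top 1 per week in this tiny sample; use rn <= 5 for top 5
    ORDER BY week
""").fetchall()
print(rows)  # [('2014-01', 1, 2), ('2014-02', 3, 3)]
```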
## **[Fiddle](http://www.sqlfiddle.com/#!2/bcfd0e/7)**
Hope this helps | Have you tried these queries?
```
SELECT DISTINCT(keyword_id) FROM tmp WHERE created BETWEEN '2014-01-01' AND '2014-02-01' LIMIT 0, 20;
```
OR without DISTINCT
```
SELECT keyword_id FROM tmp WHERE created BETWEEN '2014-01-01' AND '2014-02-01' LIMIT 0, 20;
```
Then, take the first five that appeared per week.
I hope this helps! | MySQL tagging system - top keywords per month | [
"",
"mysql",
"sql",
"subquery",
"greatest-n-per-group",
"tagging",
""
] |
I have a string in the database which is comma separated.Like
'apple,banana,pineapple,grapes'
I need to split this string on the basis of comma and iterate through this.Since there is no built in function in sql server, Is there any efficient way in which this objective can be attained. | Try this function
```
CREATE FUNCTION [dbo].[func_Split]
(
@DelimitedString varchar(8000),
@Delimiter varchar(100)
)
RETURNS @tblArray TABLE
(
ElementID int IDENTITY(1,1), -- Array index
Element varchar(1000) -- Array element contents
)
AS
BEGIN
-- Local Variable Declarations
-- ---------------------------
DECLARE @Index smallint,
@Start smallint,
@DelSize smallint
SET @DelSize = LEN(@Delimiter)
-- Loop through source string and add elements to destination table array
-- ----------------------------------------------------------------------
WHILE LEN(@DelimitedString) > 0
BEGIN
SET @Index = CHARINDEX(@Delimiter, @DelimitedString)
IF @Index = 0
BEGIN
INSERT INTO
@tblArray
(Element)
VALUES
(LTRIM(RTRIM(@DelimitedString)))
BREAK
END
ELSE
BEGIN
INSERT INTO
@tblArray
(Element)
VALUES
(LTRIM(RTRIM(SUBSTRING(@DelimitedString, 1,@Index - 1))))
SET @Start = @Index + @DelSize
SET @DelimitedString = SUBSTRING(@DelimitedString, @Start , LEN(@DelimitedString) - @Start + 1)
END
END
RETURN
END
```
**Example Usage** – simply pass the function the comma delimited string as well as your required delimiter.
```
DECLARE @SQLStr varchar(100)
SELECT @SQLStr = 'Mickey Mouse, Goofy, Donald Duck, Pluto, Minnie Mouse'
SELECT
*
FROM
dbo.func_split(@SQLStr, ',')
```
Result will be like this
 | > ... Since there is no built in function in sql server ...
That was true at the time you asked this question but SQL Server 2016 introduces [`STRING_SPLIT`](https://msdn.microsoft.com/en-gb/library/mt684588.aspx).
So you can just use
```
SELECT value
FROM STRING_SPLIT ('apple,banana,pineapple,grapes', ',')
```
There are some limitations (only single character delimiters accepted and a lack of any column indicating the split index being the most eye catching). The various restrictions and some promising results of performance testing are in [this blog post by Aaron Bertrand](http://sqlperformance.com/2016/03/t-sql-queries/string-split). | Splitting the string in sql server | [
"",
"sql",
"sql-server",
"database",
"string",
""
] |
I have a mysql table (file\_payments) to keep records of payments contained in a file and looks like this
```
ID FILE START_DATE END_DATE NO_PAYMTS
-- ----- ---------- ---------- ---------
1 file1 2013-10-11 2013-10-15 6
2 file2 2013-10-16 2013-10-20 10
```
Then I have another table (payments) with more details about this payments
```
ID DATE AMOUNT BANK
--- ---------- ---------- ----
1 2013-10-11 100.00 3
2 2013-10-12 500.00 3
3 2013-10-13 400.00 2
4 2013-10-15 200.00 2
5 2013-10-16 400.00 4
6 2013-10-18 300.00 1
7 2013-10-19 700.00 3
```
I need to relate both tables to verify that the **NO\_PAYMTS** in first table correspond to the actual number of payments in the second one, So I'm thinking about counting the records on the second table which are between **START\_DATE** and **END\_DATE** from the first one. The output expected in this example is:
```
START_DATE END_DATE NO_PAYMTS ACTUAL_PAYMTS
---------- ---------- --------- -------------
2013-10-11 2013-10-15 6 4
2013-10-16 2013-10-20 10 3
```
I'm confused how to do the query, but probably would be something like:
```
SELECT ID, FILE, START_DATE, END_DATE, NO_PAYMTS FROM file_payments
WHERE ()
```
Obviously this doesn't work because there are no criteria in the WHERE clause to join the tables; how can I make it work? | Query:
**[SQLFIDDLEExample](http://sqlfiddle.com/#!2/06499/2)**
```
SELECT t1.ID,
t1.FILE,
t1.START_DATE,
t1.END_DATE,
t1.NO_PAYMTS,
(SELECT COUNT(*)
FROM Table2 t2
WHERE t2.DATE >= t1.START_DATE
AND t2.DATE <= t1.END_DATE ) AS ACTUAL_PAYMTS
FROM Table1 t1
```
Result:
```
| ID | FILE | START_DATE | END_DATE | NO_PAYMTS | ACTUAL_PAYMTS |
|----|-------|------------|------------|-----------|---------------|
| 1 | file1 | 2013-10-11 | 2013-10-15 | 6 | 4 |
| 2 | file2 | 2013-10-16 | 2013-10-20 | 10 | 3 |
``` | Query :
```
SELECT f.id, f.file, f.start_date, f.end_date, f.no_paymnts, COUNT(p.bank)
from file_payments f, payments p
WHERE p.date BETWEEN f.start_date AND f.end_date
GROUP BY f.id;
```
SQL Fiddle: <http://sqlfiddle.com/#!2/8446f/13> | Select between two dates from other table | [
"",
"mysql",
"sql",
""
] |
In the controller I want to verify if the record with the id in the query `params` exists in the database, but doing something like this `Project.find(params[:id]).present?` errors out if the id doesn't exist.
How should I verify presence instead? | You can use `find_by(id: params[:id])` (Rails 4) or `find_by_id(params[:id])` (Rails < 4) which will return `nil` instead of raising ActiveRecord::RecordNotFound.
You can also use `Project.exists?(params[:id])`. | In my experience there are two ways.
First and preferable, as stated before, you can use find\_by\_id, which returns nil if no object was found.
Second, you can use the method try(:present?), but it will return either true or nil (so you'd have to use Project.find(params[:id]).try(:present?).present?, and this seems ridiculous :) )
I also didn't know about Model.exists? method :) | How can I verify Project.find(12).present? and not get an error if ID 12 doesn't exist? | [
"",
"sql",
"ruby-on-rails",
"ruby",
"ruby-on-rails-4",
""
] |
I'm trying to compare the result of one query (example: number of rows), between the beginning of the current month and the current date, compared to what happened the previous month.
Example: Today is 25/01, so I'd like to know the number of rows created between the 01/01 and 25/01, vs the previous month (same interval) 01/12 and 25/12.
I'd like to retrieve it in one row, so that I can return the value of the current month, and the string : up/down depending on whether there have been more or less rows, compared to the previous month.
I've managed to get it working this way, but it looks too hacky and I'd like to know if there is a better approach (apart from retrieving two rows and processing the result).
```
SELECT MAX(total_current) as current, IF(MAX(total_current) > MAX(total_previous), 'up', 'down') as status, 'Number of Items'
FROM
(SELECT INTEGER(count(*)) as total_current, INTEGER(0) as total_previous
FROM [mybucket.mytable]
WHERE mydate BETWEEN TIMESTAMP(STRFTIME_UTC_USEC(CURRENT_TIMESTAMP(), "%Y-%m-01")) and CURRENT_TIMESTAMP()),
(SELECT INTEGER(count(*)) as total_previous, INTEGER(0) as total_current
FROM [mybucket.mytable]
WHERE mydate
BETWEEN DATE_ADD(TIMESTAMP(STRFTIME_UTC_USEC(CURRENT_TIMESTAMP(), "%Y-%m-01")), -1, 'MONTH')
AND DATE_ADD(CURRENT_TIMESTAMP(), -1, 'MONTH'))
```
Does it make sense, or is it absolutely wrong? If so, how could I improve it? Or is it just that this kind of thing isn't supposed to be done in a query? | I've managed to optimise/simplify the query as follows:
```
SELECT TOP(LEFT(DATE(mydate), 7), 2) as month, count(*) as total
FROM [mybucket.mytable]
WHERE DAY(mydate) BETWEEN 1 and DAY(CURRENT_TIMESTAMP())
AND LEFT(DATE(mydate), 7) >= LEFT(DATE(DATE_ADD(CURRENT_TIMESTAMP(), -1, 'MONTH')), 7);
```
However, I still would like to get only one row with the result of the current month and up/down compared to the previous month.
For example if the query returns this:
```
1 2013-12 48946
2 2014-01 40497
```
In that case I'd like to get the following row:
`1 40497 'down'` (because previous month's value was higher).
Is there any way to do it? Thanks | OK, let's try again, now that I think I understand what you want. This is somewhat cleaner. I also included a CASE statement to check whether Current and Previous are equal.
```
DECLARE @dayInt INTEGER = 18;
SELECT COUNT(CASE WHEN MONTH(GETDATE()) = MONTH(d.Date_Dt) THEN 1
ELSE NULL
END) AS 'Current'
,CASE WHEN COUNT(CASE WHEN MONTH(GETDATE()) = MONTH(d.Date_Dt) THEN 1
ELSE NULL
END) > COUNT(CASE WHEN MONTH(GETDATE()) <> MONTH(d.Date_Dt) THEN 1
ELSE NULL
END) THEN 'UP'
WHEN COUNT(CASE WHEN MONTH(GETDATE()) = MONTH(d.Date_Dt) THEN 1
ELSE NULL
END) = COUNT(CASE WHEN MONTH(GETDATE()) <> MONTH(d.Date_Dt) THEN 1
ELSE NULL
END) THEN 'Equal'
ELSE 'Down'
END AS 'Status'
,'Number of Items'
FROM dbo.Date AS d
WHERE DAY(d.Date_Dt) <= @dayInt
AND ( d.Date_Dt BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 1, 0) AND GETDATE() )
``` | Optimization of BigQuery aggregated data between two dates | [
"",
"sql",
"optimization",
""
] |
I have SQL generated by `JPA`, which in the worst case had 1250 lines.
The structure of my query was 20 sub-queries nested inside the `WHERE` statement of a query. This query ran in 0.015 seconds.
I tried to optimize my query as I noticed I had reused a lot of joins in the sub-queries (e.g. where two sub-queries only differed by their `WHERE` statement). This reduced the SQL down to 750 lines and to 12 sub-queries, but for some reason it took 0.9 seconds to run.
Is there anything to explain this? Might my optimized version actually run faster once there is much more data available?
Thanks | With the limited information provided in the question I can only speculate as to exactly why the execution time increases in your specific case, but the long and the short of it is that less code does not equal faster queries.
One of the main reasons "simplifying" queries can result in longer execution times is that the simplification means indexes are no longer used: while the query may appear simpler to read, you are actually asking the optimiser to do something more complicated.
Imagine this simple schema:
```
CREATE TABLE T1 (ID INT AUTO_INCREMENT PRIMARY KEY, A INT);
CREATE TABLE T2 (ID INT AUTO_INCREMENT PRIMARY KEY, A INT, B INT);
CREATE INDEX IX_T2_A ON T2 (A);
CREATE INDEX IX_T2_B ON T2 (B);
```
Now, supposing I have the following query:
```
SELECT COUNT(T1.ID)
FROM T1
INNER JOIN
( SELECT ID
FROM T2
WHERE A IN (1, 10)
UNION
SELECT ID
FROM T2
WHERE B IN (1, 10)
) T2
ON t2.ID = t1.ID;
```
You might think, that this can be "simplified" to remove the `UNION` as follows:
```
SELECT COUNT(T1.ID)
FROM T1
INNER JOIN
( SELECT ID
FROM T2
WHERE A IN (1, 10)
OR B IN (1, 10)
) T2
ON t2.ID = t1.ID;
```
**HOWEVER**, by combining your criteria you have ensured that neither index (on `T2.A` or `T2.B`) will be used, because the optimiser is trying to perform both at once. So instead of using the two indexes you have in place, a full table scan will be performed, and depending on the distribution of your data this can be much more costly.
This is confirmed when running `EXPLAIN`:
```
ID SELECT_TYPE TABLE TYPE POSSIBLE_KEYS KEY KEY_LEN REF ROWS FILTERED EXTRA
1 PRIMARY <derived2> system (null) (null) (null) (null) 1 100
1 PRIMARY T1 const PRIMARY PRIMARY 4 const 1 100 Using index
2 DERIVED T2 index IX_T2_A IX_T2_A 5 (null) 1 100 Using where; Using index
3 UNION T2 index IX_T2_B IX_T2_B 5 (null) 1 100 Using where; Using index
(null) UNION RESULT <union2,3> ALL (null) (null) (null) (null) (null) (null)
ID SELECT_TYPE TABLE TYPE POSSIBLE_KEYS KEY KEY_LEN REF ROWS FILTERED EXTRA
1 PRIMARY <derived2> system (null) (null) (null) (null) 1 100
1 PRIMARY T1 const PRIMARY PRIMARY 4 const 1 100 Using index
2 DERIVED T2 ALL IX_T2_A,IX_T2_B (null) (null) (null) 1 100 Using where
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!2/52268/1)** | Maybe those extra lines created much smaller tables *(or selected less data)* than the ones you have at the moment, hence you could compare tables *(data)* more quickly. However now, as you have reduced the number of smaller tables and presumably increased the size of the bigger ones, the queries have to page through larger tables *(more data)* when you run a specific query, therefore they take more time.
More data to compare = More processing time | Reduced my SQL by 40% but it runs 60 times slower? | [
"",
"mysql",
"sql",
"hibernate",
"jpa",
""
] |
For example:
```
RouteID StopName
1 stop_1
1 stop_2
1 stop_3
2 stop_1
2 stop_2
3 stop_4
4 stop_5
```
I want to select the routes that have a stop named 'stop\_1'. I expect the results as follows:
```
RouteID StopName
1 stop_1
1 stop_2
1 stop_3
2 stop_1
2 stop_2
```
**EDIT**
What if `RouteID` is from the table `Route`
and `StopName` is from the table `Stop`? Actually, the above table is their relation table. | You can use an inner query that selects the routes for that:
```
select r.RouteID, s.StopName from route r
inner join stop s on r.StopID = s.StopID
where RouteID in
(select t1.RouteID from route t1
where exists (select * from stop s2 where t1.StopID = s2.StopID and s2.StopName = 'stop_1'))
order by r.RouteID, s.StopName
```
[SQL Fiddle demo](http://sqlfiddle.com/#!3/9894f/7) | **New Answer for your edit**
Again, assuming routes table is named `Routes` and your relation table is named `RouteStops`.
```
SELECT * FROM Routes r
JOIN RouteStops rs ON rs.RouteID = r.RouteID
WHERE rs.StopName = 'stop_1'
```
**Old Answer:**
For the sake of example, I'm going to assume your table name is Routes
```
SELECT * FROM Routes r
JOIN Routes r2 ON r.RouteID = r2.RouteID
WHERE r2.StopName = 'stop_1'
```
I'm basically joining the table with itself whenever a route contains `stop_1` and then listing all of that routes entries. | SQL - select rows that their child rows's column contains specific string | [
"",
"sql",
""
] |
I am attempting to find duplicates in a single table, where at least one of those duplicates was created in the last day.
Here is my query:
```
SELECT DateOfBirth DOB,
FirstName FirstName,
LastName LastName,
COUNT(*) TotalCount
FROM TABLE
WHERE DateOfBirth IS NOT NULL
AND DATEDIFF(d,dateCreated,getDate()) <= 1
GROUP BY DateofBirth, FirstName, LastName
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC
```
The problem is that this query returns nothing, because both duplicates would need to be created within the last day (the way this reads).
I did some testing and found that the DATEDIFF condition requires both duplicates' `dateCreated` values to be within the last day.
Any way to bring back these duplicates where the *most recent duplicate* was created within the last day? Even if the *oldest duplicate* was created a year ago? | ```
;WITH x AS
(
SELECT FirstName, LastName, DateOfBirth, DateCreated,
TotalCount = COUNT(*) OVER
(
PARTITION BY FirstName, LastName, DateOfBirth
)
FROM dbo.[TABLE]
)
SELECT FirstName, LastName, DateOfBirth, DateCreated, TotalCount
FROM x
WHERE TotalCount > 1
AND DateCreated >= DATEADD(DAY, -1, CURRENT_TIMESTAMP);
```
If you then want to eliminate those duplicates that were erroneously created in the last day, just change the outer query to:
```
;WITH x AS
(
...
)
DELETE x WHERE TotalCount > 1
AND DateCreated >= DATEADD(DAY, -1, CURRENT_TIMESTAMP);
``` | I have revised this as an alternative to Aaron's answer, in case you wish to see only the duplicates which are not the original record.
```
;WITH x AS
(
SELECT FirstName, LastName, DateOfBirth, DateCreated,
Row_number() OVER
(
PARTITION BY FirstName, LastName, DateOfBirth
order by dateCreated) as rowNumber
FROM dbo.[TABLE1]
)
SELECT FirstName, LastName, DateOfBirth, DateCreated, rowNumber
FROM x
WHERE rowNumber > 1
AND DateCreated >= DATEADD(DAY, -1, CURRENT_TIMESTAMP);
``` | Find Duplicates By Created Date TSQL | [
"",
"sql",
"sql-server",
""
] |
I have table *t* that contains a person's name and the products they have:
```
NAME PRODUCT
Adam a
Adam b
Adam c
Ben c
Ben d
Chris b
Dave a
Dave b
Dave c
Dave d
Evan a
Evan b
Fred a
```
And I want a SQL query that returns NAME when the person has Product a or b or both, AND does NOT have either product c nor d (nor e, f, ...):
```
NAME
Chris
Evan
Fred
```
The actual 'does NOT have' list I'm working with is long, so I would like to avoid having to type in every single product name to exclude, if possible.
Thanks in advance for your help. | ```
SELECT DISTINCT name FROM T
WHERE product IN ('a', 'b')
AND name NOT IN (SELECT name FROM T WHERE product IN ('c', 'd', 'e', 'f'))
```
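Checked against the question's sample data with Python's built-in `sqlite3` module (purely an illustration; the short `('c', 'd', 'e', 'f')` list stands in for the long exclusion list):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (name TEXT, product TEXT)")
con.executemany("INSERT INTO T VALUES (?, ?)", [
    ("Adam", "a"), ("Adam", "b"), ("Adam", "c"),
    ("Ben", "c"), ("Ben", "d"), ("Chris", "b"),
    ("Dave", "a"), ("Dave", "b"), ("Dave", "c"), ("Dave", "d"),
    ("Evan", "a"), ("Evan", "b"), ("Fred", "a"),
])
rows = con.execute("""
    SELECT DISTINCT name FROM T
    WHERE product IN ('a', 'b')
      AND name NOT IN (SELECT name FROM T
                       WHERE product IN ('c', 'd', 'e', 'f'))
""").fetchall()
print(sorted(r[0] for r in rows))  # ['Chris', 'Evan', 'Fred']
```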
Or, if you have all those unwanted c d e f products in another table T2,
```
SELECT DISTINCT name FROM T
WHERE product IN ('a', 'b')
AND name NOT IN (SELECT name FROM T
WHERE product IN (SELECT product FROM T2))
```
Or, if the unwanted products are actually all except a and b:
```
SELECT DISTINCT name FROM T
WHERE product IN ('a', 'b')
AND name NOT IN (SELECT name FROM T
WHERE product NOT IN ('a', 'b'))
``` | I know in T-SQL (MSSQL) you can use `EXCEPT` to exclude results:
```
SELECT Name FROM t WHERE Product IN ('a','b')
EXCEPT
SELECT Name FROM t WHERE Product NOT IN ('a','b')
```
`EXCEPT` also only returns `DISTINCT` results.
See [SQL Fiddle](http://www.sqlfiddle.com/#!6/8835a/2).
The second `SELECT` query after the `EXCEPT` doesn't have to have the Products to exclude hard-coded and can retrieve them from another table or however else you wish. | SQL include AND exclude query | [
"",
"sql",
""
] |
Consider the example table named "Person".
```
Name |Date |Work_Hours
---------------------------
John| 22/1/13 |0
John| 23/1/13 |0
Joseph| 22/1/13 |1
Joseph| 23/1/13 |1
Johnny| 22/1/13 |0
Johnny| 23/1/13 |0
Jim| 22/1/13 |1
Jim| 23/1/13 |0
```
In the above table, I have to find rows with the sequence of '0' followed by '1' in the column Work\_Hours. Please share the idea/Query to do it.
The output I need is
```
Name |Date |Work_Hours
---------------------------
John| 23/1/13 |0
Joseph| 22/1/13 |1
Johnny| 23/1/13 |0
Jim| 22/1/13 |1
``` | To look into previous or following records, you would usually use the analytic (window) functions LAG and LEAD:
```
select first_name, work_date, work_hours
from
(
select first_name, work_date, work_hours
, lag(work_hours) over (order by first_name, work_date) as prev_work_hours
, lead(work_hours) over (order by first_name, work_date) as next_work_hours
from person
)
where (work_hours = 0 and next_work_hours = 1) or (work_hours = 1 and prev_work_hours = 0)
order by first_name, work_date;
``` | Your problem (as phrased) is equivalent to asking: Is there a `1` that follows any given row with a 0 for a `name`?
You can do this with a correlated subquery:
```
select Name, Date, Work_Hours
from (select t.*,
(select min(date)
from table t2
where t2.name = t.name and t2.date > t.date and t2.Work_Hours = 1
) as DateOfLater1
from table t
) t
where DateOfLater1 is not null and work_hours = 0 or
(DateOfLater1 = date and work_hours = 1);
``` | How to find rows with the sequence of values in a column using SQL? | [
"",
"sql",
""
] |
I have a table that has attendance of employee. This table has two columns:
* first is the personnel number
* second is the time of arrival
I want to isolate the earliest time in this table, because an employee can register multiple times.
In other words, I want the least (earliest) AttendanceTime for each PersonNo.
I wrote the following query, but it's wrong: it doesn't collapse the rows to one per person.
```
SELECT tal.PersonNo, min(tal.AttendanceTime)
FROM mqa.T_AttendanceLog tal
GROUP BY tal.PersonNo, tal.AttendanceTime
``` | You're almost there. Just remove the `AttendanceTime` from the group by.
```
SELECT tal.PersonNo, min(tal.AttendanceTime)
FROM mqa.T_AttendanceLog tal
GROUP BY tal.PersonNo;
```
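A quick check of the grouped query with Python's sqlite3 (schema prefix dropped; the data here is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T_AttendanceLog (PersonNo INTEGER, AttendanceTime TEXT)")
conn.executemany("INSERT INTO T_AttendanceLog VALUES (?, ?)", [
    (1, "2014-01-20 08:05:00"),
    (1, "2014-01-20 08:07:00"),  # second badge-in, should be ignored
    (2, "2014-01-20 08:55:00"),
])

# Grouping by PersonNo alone gives one row per person.
earliest = conn.execute("""
    SELECT PersonNo, MIN(AttendanceTime)
    FROM T_AttendanceLog
    GROUP BY PersonNo
    ORDER BY PersonNo
""").fetchall()
print(earliest)
# [(1, '2014-01-20 08:05:00'), (2, '2014-01-20 08:55:00')]
```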
If you want the entire row (incase you have other columns) you can use something like this:
```
select *
from mqa.T_AttendanceLog a
where (PersonNo, AttendanceTime) in(
select b.PersonNo, min(b.AttendanceTime)
from mqa.T_AttendanceLog b
group by b.PersonNo);
``` | Modify your group by clause
```
SELECT tal.PersonNo,min(tal.AttendanceTime)
FROM mqa.T_AttendanceLog tal
GROUP BY tal.PersonNo
``` | Separate rows based on a column that has min value | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
If I have several `LOJ`s and several `INNER JOINS` is there a correct **Standard** syntactical structure I should use?
*Example scenario*
* 5 tables #A - #E all with a UserId column and each with an additional column for a measure - MeasureA in table #A, MeasureB in table #B etc.
* Tables #A, #B, #C all have the same set of UserIds
* Tables #D and #E have different subsets of the set of UserIds in #A-#C.
Is this the correct structure to use:
```
SELECT
#A.UserId,
#A.MeasureA,
#B.MeasureB,
#C.MeasureC,
D = COALESCE(#D.MeasureD,0.),
E = COALESCE(#E.MeasureE,0.)
FROM
#A
JOIN #B
ON #A.UserId = #B.UserId
JOIN #C
ON #A.UserId = #C.UserId
LEFT OUTER JOIN #D
ON #A.UserId = #D.UserId
LEFT OUTER JOIN #E
ON #A.UserId = #E.UserId
```
Or should the LOJs be applied within a subquery on #A?
```
SELECT
X.UserId,
X.MeasureA,
#B.MeasureB,
#C.MeasureC,
X.D,
X.E
FROM
(
SELECT
#A.UserId,
#A.MeasureA,
D = COALESCE(#D.MeasureD,0.),
E = COALESCE(#E.MeasureE,0.)
FROM #A
LEFT OUTER JOIN #D
ON #A.UserId = #D.UserId
LEFT OUTER JOIN #E
ON #A.UserId = #E.UserId
) X
JOIN #B
ON X.UserId = #B.UserId
JOIN #C
ON X.UserId = #C.UserId
``` | When you are using left outer joins, the intention is that one of the tables is keeping all of its rows, regardless of matches in the other tables.
My preferred structure is to put this table first:
```
select . . .
from <really important table> t left outer join
. . .
```
This doesn't work if you have inner joins later in the `from` clause, because these would filter out rows with no matches.
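The pitfall can be shown concretely. This sketch uses Python's sqlite3 with toy versions of the #A, #B and #D tables (names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (UserId INTEGER);
    CREATE TABLE b (UserId INTEGER, MeasureB REAL);
    CREATE TABLE d (UserId INTEGER, MeasureD REAL);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1, 10.0), (2, 20.0);
    INSERT INTO d VALUES (1, 1.5);  -- user 2 has no D row
""")

# LEFT JOIN first, then an INNER JOIN on a column from d:
# user 2's NULL d.UserId can never satisfy the inner join, so user 2 vanishes.
bad = conn.execute("""
    SELECT a.UserId FROM a
    LEFT JOIN d ON d.UserId = a.UserId
    JOIN b ON b.UserId = d.UserId
    ORDER BY a.UserId
""").fetchall()

# Inner joins first, left joins last: user 2 survives with no D measure.
good = conn.execute("""
    SELECT a.UserId FROM a
    JOIN b ON b.UserId = a.UserId
    LEFT JOIN d ON d.UserId = a.UserId
    ORDER BY a.UserId
""").fetchall()

print(bad, good)  # [(1,)] [(1,), (2,)]
```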
In terms of your query, I think the first does what you expect. The second *happens* to do what you want, because you are only joining on the `id` column. But the structure is very dangerous. If one of your subsequent inner joins were on a column from `#E`, then it would (inadvertently) change the left joins to inner joins.
So, put the inner joins first, then the left outer joins. | As app developers, we trust frameworks, so why can't we trust SQL engines to do their work? The first syntax is what SQL expects; don't complicate it when not necessary.
However, if A -> D is one-to-many, A -> E is one-to-many, and there is no relation between D and E, then I would GROUP BY the matching D and E rows in independent sub-queries before plugging them back into the main query.
However, this practice doesn't seem to apply to your use case. | Standard approach when mixing several INNER with several LEFT OUTER JOINs | [
"",
"sql",
""
] |
I am attempting to speed up a query that takes around 60 seconds to complete on a table of ~20 million rows.
For this example, the table has three columns (id, dateAdded, name).
id is the primary key.
The indexes I have added to the table are:
```
(dateAdded)
(name)
(id, name)
(id, name, dateAdded)
```
The query I am trying to run is:
```
SELECT MAX(id) as id, name
FROM exampletable
WHERE dateAdded <= '2014-01-20 12:00:00'
GROUP BY name
ORDER BY NULL;
```
The date is variable from query to query.
The objective of this is to get the most recent entry for each name at or before the date added.
When I use explain on the query it tells me that it is using the (id, name, dateAdded) index.
```
+----+-------------+------------------+-------+------------------+----------------------------------------------+---------+------+----------+-----------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------------+-------+------------------+----------------------------------------------+---------+------+----------+-----------------------------------------------------------+
| 1 | SIMPLE | exampletable | index | date_added_index | id_element_name_date_added_index | 162 | NULL | 22016957 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+------------------+-------+------------------+----------------------------------------------+---------+------+----------+-----------------------------------------------------------+
```
**Edit:**
Added two new indexes from comments:
```
(dateAdded, name, id)
(name, id)
+----+-------------+------------------+-------+---------------------------------------------------------------+----------------------------------------------+---------+------+----------+-------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------------+-------+---------------------------------------------------------------+----------------------------------------------+---------+------+----------+-------------------------------------------+
| 1 | SIMPLE | exampletable | index | date_added_index,date_added_name_id_index | id__name_date_added_index | 162 | NULL | 22040469 | Using where; Using index; Using temporary |
+----+-------------+------------------+-------+---------------------------------------------------------------+----------------------------------------------+---------+------+----------+-------------------------------------------+
```
**Edit:**
Added create table script.
```
CREATE TABLE `exampletable` (
`id` int(10) NOT NULL auto_increment,
`dateAdded` timestamp NULL default CURRENT_TIMESTAMP,
`name` varchar(50) character set utf8 default '',
PRIMARY KEY (`id`),
KEY `date_added_index` (`dateAdded`),
KEY `name_index` USING BTREE (`name`),
KEY `id_name_index` USING BTREE (`id`,`name`),
KEY `id_name_date_added_index` USING BTREE (`id`,`dateAdded`,`name`),
KEY `date_added_name_id_index` USING BTREE (`dateAdded`,`name`,`id`),
KEY `name_id_index` USING BTREE (`name`,`id`)
) ENGINE=MyISAM AUTO_INCREMENT=22046064 DEFAULT CHARSET=latin1
```
**Edit:**
Here is the Explain from the answer provided by HeavyE.
```
+----+-------------+--------------+-------+------------------------------------------------------------------------------------------+--------------------------+---------+--------------------------------------------------+------+---------------------------------------+
| id | select_type | table | type | possible_k | key | key_len | ref | rows | Extra |
+----+-------------+--------------+-------+------------------------------------------------------------------------------------------+--------------------------+---------+--------------------------------------------------+------+---------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 1732 | Using temporary; Using filesort |
| 1 | PRIMARY | example1 | ref | date_added_index,name_index,date_added_name_id_index,name_id_index,name_date_added_index | date_added_name_id_index | 158 | maxDateByElement.dateAdded,maxDateByElement.name | 1 | Using where; Using index |
| 2 | DERIVED | exampletable | range | date_added_index,date_added_name_id_index | name_date_added_index | 158 | NULL | 1743 | Using where; Using index for group-by |
+----+-------------+--------------+-------+------------------------------------------------------------------------------------------+--------------------------+---------+--------------------------------------------------+------+---------------------------------------+
``` | There is a great Stack Overflow post on optimization of Selecting rows with the max value in a column: <https://stackoverflow.com/a/7745635/633063>
This seems a little messy but works very well:
```
SELECT example1.name, MAX(example1.id)
FROM exampletable example1
INNER JOIN (
select name, max(dateAdded) dateAdded
from exampletable
where dateAdded <= '2014-01-20 12:00:00'
group by name
) maxDateByElement on example1.name = maxDateByElement.name AND example1.dateAdded = maxDateByElement.dateAdded
GROUP BY name;
``` | Why are you using indexes on so many keys? If your WHERE clause contains only one column, then use that index only: put indexes on dateAdded and on name separately, and then use them in the SQL statement like this:
```
SELECT MAX(id) as id, name
FROM exampletable
USE INDEX (dateAdded_index) USE INDEX FOR GROUP BY (name_index)
WHERE dateAdded <= '2014-01-20 12:00:00'
GROUP BY name
ORDER BY NULL;
```
Here is the [link](http://dev.mysql.com/doc/refman/5.1/en/index-hints.html) if you want to know more. Please let me know whether it gives positive results or not.
"",
"mysql",
"sql",
"sqlperformance",
""
] |
I have been searching around for the proper way of connecting to the database (MS Access 2007) using VB 6.0... The problem is that I get the error "SYNTAX ERROR IN INSERT INTO STATEMENT".
DECLARATION CODE:
```
Dim adoConn As New ADODB.Connection
Dim adoRS As New ADODB.Recordset
Dim conStr, sqlStr As String
```
CONNECTION CODE:
```
conStr = "Provider=Microsoft.Jet.OLEDB.3.51;Data Source= " & App.Path & "\curriculum.mdb;Persist Security Info=False"
Set adoConn = New ADODB.Connection
adoConn.ConnectionString = conStr
adoConn.Open
```
Here is the BUTTON code:
```
sqlStr = "INSERT INTO cur(CourseCode, Units, Days, Time, RoomNumber, Instructor, Course, YearLevel, Term) VALUES ("
sqlStr = sqlStr & "'" & txtCurCourseCode.Text & "',"
sqlStr = sqlStr & "'" & txtCurUnits.Text & "',"
sqlStr = sqlStr & "'" & txtCurDays.Text & "',"
sqlStr = sqlStr & "'" & txtCurTime.Text & "',"
sqlStr = sqlStr & "'" & txtCurDays.Text & "',"
sqlStr = sqlStr & "'" & txtCurRoom.Text & "',"
sqlStr = sqlStr & "'" & txtCurInstructor.Text & "',"
sqlStr = sqlStr & "'" & cboCurCourse.Text & "',"
sqlStr = sqlStr & "'" & txtCurYearLevel.Text & "',"
sqlStr = sqlStr & "'" & txtCurTerm.Text & "')"
adoConn.Execute sqlStr
```
THE ERROR IS FOUND IN THIS LINE OF CODE WHEN I CLICK DEBUG: adoConn.Execute sqlStr
Your help would be greatly appreciated as this school project is needed by tomorrow. Been sleepless for many nights. Thanks | Escape the column names that match reserved words: `TIME` by enclosing in `[]`:
```
sqlStr = "INSERT INTO cur(CourseCode, Units, Days, [Time], RoomNumber, Instructor, Course, YearLevel, Term) VALUES ("
```
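Both points, escaping a reserved column name and preferring parameters over string concatenation, can be sketched with Python's sqlite3 (in SQLite `order` is reserved much the way `Time` is in Access, and SQLite also accepts the square-bracket quoting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unquoted reserved word fails, just as Access rejects Time:
try:
    conn.execute("CREATE TABLE cur (CourseCode TEXT, order TEXT)")
except sqlite3.OperationalError as e:
    print("unescaped reserved word:", e)

# Square brackets escape the name, and ? placeholders replace concatenation:
conn.execute("CREATE TABLE cur (CourseCode TEXT, [order] TEXT)")
conn.execute("INSERT INTO cur (CourseCode, [order]) VALUES (?, ?)",
             ("CS101", "first"))
row = conn.execute("SELECT CourseCode, [order] FROM cur").fetchone()
print(row)  # ('CS101', 'first')
```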
You should also use parameterized queries, as what you have is vulnerable to SQL injection. (Run it with a ' in one of the textboxes.) | Unfortunately, you are passing a duplicate value.
I mean you are trying to `INSERT INTO` 9 columns (CourseCode, Units, Days, Time, RoomNumber, Instructor, Course, YearLevel, Term), but you are supplying 10 values.
`txtCurDays` is duplicated. | Cannot insert records to ms access 2007 with vb6 | [
"",
"sql",
"database",
"insert",
"vb6",
""
] |
I am creating an access log for my website: whenever a person enters the site, the visit date and time are stored in the same record in the database. However, instead of discarding the previous visit, I want to keep it together with the new one. Example:
this is the current value: 2014-01-31 17:18:27
this is the new value: 2014-02-01 17:18:27
If I use a plain UPDATE, it will replace the current value with the new one.
What I would like to end up with is the following: 2014-01-31 17:18:27, 2014-02-01 17:18:27
**Let's get a little closer to what I'm doing.**
this is the function i'm creating:
```
function functionInd() {
$this = 'this';
$conn = db();
$prepare = $conn->prepare('
UPDATE table
SET dates=:date
WHERE this=:this');
$prepare->bindParam(':date',date("Y-m-d H:i:s"));
$prepare->bindParam(':this', $this, PDO::PARAM_STR);
$prepare->execute();
disconnectBase($conn);
}
```
What do I need to do? | You don't want to do this. Storing lists in SQL is not the right approach to using a database. You need a separate table that stores the dates, with one row per `this` per `date`. If you had such a table, your code would be easy:
```
insert into ThisDateTable(this, date)
select :this, :date;
```
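A minimal sketch of that one-row-per-visit design, using Python's sqlite3 (the table and column names follow the answer and are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ThisDateTable (this TEXT, date TEXT)")

def log_visit(conn, this, when):
    # One INSERT per visit; no string surgery on an existing row.
    conn.execute("INSERT INTO ThisDateTable (this, date) VALUES (?, ?)",
                 (this, when))

log_visit(conn, "this", "2014-01-31 17:18:27")
log_visit(conn, "this", "2014-02-01 17:18:27")

# Every visit is already a separate row, trivially queryable.
visits = [r[0] for r in conn.execute(
    "SELECT date FROM ThisDateTable WHERE this = ? ORDER BY date", ("this",))]
print(visits)  # ['2014-01-31 17:18:27', '2014-02-01 17:18:27']
```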
That said, if you still insist on storing lists in a field -- and all the subsequent delays and problems this will cause you -- the syntax is one of the following:
```
set date = concat(date, ',', :date)
set date = date || ',' || :date
set date = date + ',' + :date
set date = date & "," & :date
```
The first uses the ANSI-standard function and is supported by MySQL, Oracle, SQL Server 2012, Postgres, DB2, and Teradata (and perhaps others).
The second uses the ANSI-standard operator and is supported by several databases, notably Oracle and Postgres (and probably others).
The third is "t-sql" format, and is used by SQL Server (all versions) and Sybase.
The fourth is used by Access and is highly non-standard, both in the use of `&` and the use of `"` to delimit strings. | ```
Update TABLE
SET COLUMN_NAME = COLUMN_NAME_A + COLUMN_NAME_B
WHERE "Condition"
``` | how can I add a value to the current with SQL UPDATE | [
"",
"sql",
"pdo",
"sql-update",
""
] |
First of all I would like to greet all users and apologize for my English :).
I'm new user on this forum.
I have a question about MySQL queries.
I have table Items with let say 2 columns for example itemsID and ItemsQty.
```
itemsID ItemsQty
11 2
12 3
13 3
15 5
16 1
```
I need to select itemsID, duplicated as many times as indicated in the ItemsQty column.
```
itemsID ItemsQty
11 2
11 2
12 3
12 3
12 3
13 3
13 3
13 3
15 5
15 5
15 5
15 5
15 5
16 1
```
I tried that query:
```
SELECT items.itemsID, items.itemsQty
FROM base.items
LEFT OUTER JOIN
(
SELECT items.itemsQty AS Qty FROM base.items
) AS Numbers ON items.itemsQty <=Numbers.Qty
ORDER BY items.itemsID;
```
but it doesn't work correctly.
Thanks in advance for help. | **SQL answer - Option 1**
You need another table called `numbers` with the numbers 1 up to the maximum for ItemsQuantity
```
Table: NUMBERS
1
2
3
4
5
......
max number for ItemsQuantity
```
Then the following SELECT statement will work
```
SELECT ItemsID, ItemsQty
FROM originaltable
JOIN numbers
ON originaltable.ItemsQty >= numbers.number
ORDER BY ItemsID, number
```
See [this fiddle](http://sqlfiddle.com/#!2/1f1255/1) -> you should always set up a fiddle like this when you can; it makes everyone's life easier!!!
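Here is option 1 reproduced with Python's sqlite3 on a small slice of the sample data (a sketch; the `numbers` table is populated 1 through 5 as described above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (itemsID INTEGER, ItemsQty INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(11, 2), (12, 3), (16, 1)])
conn.execute("CREATE TABLE numbers (number INTEGER)")
conn.executemany("INSERT INTO numbers VALUES (?)",
                 [(n,) for n in range(1, 6)])

# Each item joins once per number <= its quantity, duplicating the row.
rows = conn.execute("""
    SELECT itemsID, ItemsQty
    FROM items
    JOIN numbers ON items.ItemsQty >= numbers.number
    ORDER BY itemsID, number
""").fetchall()
print(rows)
# [(11, 2), (11, 2), (12, 3), (12, 3), (12, 3), (16, 1)]
```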
**code answer - option 2**
MySQL probably won't do what you want 'cleanly' without a second table (although some clever person might know how)
What is wrong with doing it with script?
Just run a `SELECT itemsID, ItemsQty FROM table`
Then when looping through the result just do (pseudo code as no language specified)
```
newArray = array(); // new array
While Rows Returned from database{ //loop all rows returned
loop number of times in column 'ItemsQty'{
newArray -> add 'ItemsID'
}
}//end of while loop
```
This will give you a new array
```
0 => 11
1 => 11
2 => 12
3 => 12
4 => 12
5 => 13
```
etc. | ```
Select DISTINCT items.itemsID, items.itemsQty From base.items left outer join (select items.itemsQty as Qty from base.items) As Numbers On items.itemsQty <=Numbers.Qty
order by items.itemsID;
```
Use **DISTINCT** to remove duplicates. Read more here - <http://dev.mysql.com/doc/refman/5.0/en/select.html> | Duplicating rows in one select MySql query | [
"",
"mysql",
"sql",
"multiple-records",
""
] |
While explaining some concepts about CTEs, I was asked a cute question by a fresher: can we find the product of the rows, given that we always concentrate on `SUM()`? That gave me a thought: is it ever possible with just SQL? I was also considering the maximum numeric precision we can support, since the product may be huge.
Having said that, we can't write our own aggregate function (can we?). I am wondering whether it is possible with SQL only.
I thought of `2*3` as adding 2 to itself 3 times, but when the set is huge that becomes too cumbersome.
Another possibility is the identity `log(a*b) = log a + log b`: sum the logarithms, then take the antilogarithm to get the result. This is what I managed to respond with. For Oracle, I would have a function using `EXECUTE IMMEDIATE`.
Can we actually multiply itself. Which will be more beautiful SQL-ly.. This is for pure learning experience. | Oracle has exposed these [ODCI](http://docs.oracle.com/cd/B28359_01/appdev.111/b28425.pdf) (Oracle Data Cartridge Interface) methods to do the Aggregate! [Reference](http://docs.oracle.com/cd/B28359_01/appdev.111/b28425/ext_agg_ref.htm#CACJCHFF):
* ODCIAggregateDelete() - Removes an input value from the current group.
* ODCIAggregateInitialize() - Initializes the aggregation context and instance of the implementation object type, and returns it as an OUT parameter.
* ODCIAggregateIterate() - Iterates through input rows by processing the input values, updating and then returning the aggregation context.
* ODCIAggregateMerge() - Merges two aggregation contexts into a single object instance during either serial or parallel evaluation of the user-defined aggregate.
* ODCIAggregateTerminate() - Calculates the result of the aggregate computation and performs all necessary cleanup, such as freeing memory.
* ODCIAggregateWrapContext() Integrates all external pieces of the current aggregation context to make the context self-contained.
**Code For PRODUCT() Aggregate function :**
```
CREATE OR REPLACE type PRODUCT_IMPL
AS
object
(
result NUMBER,
static FUNCTION ODCIAggregateInitialize(sctx IN OUT PRODUCT_IMPL)
RETURN NUMBER,
member FUNCTION ODCIAggregateIterate(self IN OUT PRODUCT_IMPL,
value IN NUMBER)
RETURN NUMBER,
member FUNCTION ODCIAggregateTerminate( self IN PRODUCT_IMPL,
returnValue OUT NUMBER,
flags IN NUMBER)
RETURN NUMBER,
member FUNCTION ODCIAggregateMerge(self IN OUT PRODUCT_IMPL,
ctx2 IN PRODUCT_IMPL )
RETURN NUMBER );
/
/* 1.Initializes the computation by initializing the aggregation context—the rows over which aggregation is performed: */
CREATE OR REPLACE type body PRODUCT_IMPL
IS
static FUNCTION ODCIAggregateInitialize(sctx IN OUT PRODUCT_IMPL)
RETURN NUMBER
IS
BEGIN
sctx := PRODUCT_IMPL(1);
RETURN ODCIConst.Success;
END;
/* 2.Iteratively processes each successive input value and updates the context: */
member FUNCTION ODCIAggregateIterate(self IN OUT PRODUCT_IMPL,
value IN NUMBER)
RETURN NUMBER
IS
BEGIN
self.result := value * self.result;
RETURN ODCIConst.Success;
END;
member FUNCTION ODCIAggregateTerminate(
self IN PRODUCT_IMPL,
returnValue OUT NUMBER,
flags IN NUMBER)
RETURN NUMBER
IS
BEGIN
returnValue := self.result;
RETURN ODCIConst.Success;
END;
member FUNCTION ODCIAggregateMerge(self IN OUT PRODUCT_IMPL,
ctx2 IN PRODUCT_IMPL)
RETURN NUMBER
IS
BEGIN
self.result := self.result;
RETURN ODCIConst.Success;
END;
END;
/
/* Create A function using the PRODUCT_IMPL implementation we did above */
CREATE OR REPLACE FUNCTION product(input NUMBER)
RETURN NUMBER
PARALLEL_ENABLE AGGREGATE USING PRODUCT_IMPL;
/
```
**Results:**
```
SELECT group_name,product(num) FROM product_test GROUP BY group_name;
Mahesh -60000
Mahesh_1 9
``` | The logarithm/exponential approach is the one generally used. For Oracle, that is:
```
select exp(sum(ln(col)))
from table;
```
I don't know why the original database designers didn't include `PRODUCT()` as an aggregation function. My best guess is that they were all computer scientists, with no statisticians. Such functions are very useful in statistics, but they don't show up much in computer science. Perhaps they didn't want to deal with overflow issues, that such a function would imply (especially on integers).
By the way, this function is missing from most databases, even those that implement lots of statistical aggregation functions.
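The identity this relies on is `ln(a*b) = ln(a) + ln(b)`, so for positive values the product equals `exp` of the summed logs. That is easy to verify outside the database (plain Python; `math.prod` needs Python 3.8+):

```python
import math

values = [2.0, 3.0, 5.0]

direct = math.prod(values)  # straightforward product: 2 * 3 * 5
via_logs = math.exp(sum(math.log(v) for v in values))  # the SQL trick

print(direct, round(via_logs, 9))  # 30.0 30.0
```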
edit:
Oy, the problem of negative numbers makes it a little more complicated:
```
select ((case when mod(sum(sign(col)), 2) = 0 then 1 else -1 end) *
exp(sum(ln(abs(col))))
) as product
```
I am not sure of a safe way in Oracle to handle `0`s. This is a "logical" approach:
```
select (case when sum(case when col = 0 then 1 else 0 end) > 0
then NULL
when mod(sum(sign(col)), 2) = 0
             then exp(sum(ln(abs(col))))
             else - exp(sum(ln(abs(col))))
        end
) as product
```
The problem is that the database engine might get an error on the log before executing the `case` statement. That happens to be how SQL Server works. I'm not sure about Oracle.
Ah, this might work:
```
select (case when sum(case when col = 0 then 1 else 0 end) > 0
then NULL
when mod(sum(sign(col)), 2) = 0
then exp(sum(ln(case when col <> 0 then abs(col) end)))
else - exp(sum(ln(case when col <> 0 then abs(col) end)))
        end
) as product
```
It returns `NULL` when there is a `0`. | Product() aggregate function | [
"",
"sql",
"oracle",
"statistics",
""
] |
Simple enough: I'm trying to take information out of a column in an SQL database where the Members\_ID is equal to the integer the user inputs into a Textbox.
* SendFromID.Text is the textbox containing the Members\_ID
* Current\_Points is the textbox I want to hold/show the SQL Column info
* U\_G\_Studio is the SQL Column holding the data
Only 1 row can match the Members\_ID as it's a Primary Key.
This is as far as I've managed to get with my code; after this I just get lost. If you could provide an example in code I'd really appreciate it.
```
stcon = "CONNECTION STRING;"
con = New SqlConnection(stcon)
con.Open()
Dim cmd As New SqlCommand("SELECT U_G_Studio FROM PersonsA WHERE Members_ID ='" & SendFromID.Text & "'", con)
cmd.CommandType = CommandType.Text
```
{EDIT 1}
Just want to clarify, I need to DOWNLOAD the data **from** the SQL Column **into** the textbox.
```
stcon = "CONNECTION STRING"
con = New SqlConnection(stcon)
da = New SqlDataAdapter("SELECT U_G_Studio FROM PersonsA WHERE Members_ID =" & SendFromID.Text, con)
Dim dt = New DataTable
da.Fill(dt)
CurrentPoints.Text = dt.Rows(0)("U_G_Studio").ToString()
``` | Try something like this. [Using](http://msdn.microsoft.com/en-us/library/yh598w02.aspx) automatically takes care of disposing resources. Also parameterize your query to avoid sql injection attacks.
```
stcon = "CONNECTION STRING;"
Dim strUGStudio As String
Dim sql As String = _
"SELECT U_G_Studio FROM PersonsA WHERE Members_ID = @MemberId"
Using conn As New SqlConnection(stcon)
Dim cmd As New SqlCommand(sql, conn)
cmd.Parameters.AddWithValue("@MemberId", SendFromID.Text)
Try
conn.Open()
strUGStudio = Convert.ToString(cmd.ExecuteScalar())
Catch ex As Exception
Console.WriteLine(ex.Message)
End Try
End Using
Return strUGStudio
``` | SQL To Text Box | [
"",
"sql",
"sql-server",
"vb.net",
"vba",
""
] |
I'm very new to programming and SQL, I can't figure this one out, perhaps I haven't learned the concept yet, but I'm hoping you can help me. Sorry if it's too easy and boring.
```
/*2.37 Write an SQL statement to display the WarehouseID and the sum of
QuantityOnHand,grouped by WarehouseID. Omit all SKU items that have 3 or more items
on hand from the sum, and name the sum TotalItemsOnHandLT3 and display the results
in descending order of TotalItemsOnHandLT3.*/
SELECT WarehouseID, SUM(QuantityOnHand) AS TotalItemsOnHandLT3
FROM INVENTORY
GROUP BY WarehouseID
HAVING COUNT(WarehouseID) >= 3
ORDER BY TotalItemsOnHandLT3 DESC
``` | "*Omit all SKU items that have 3 or more items on hand from the sum*", sounds more like :
```
FROM INVENTORY WHERE QuantitiyOnHand < 3
```
rather than :
```
HAVING COUNT(WarehouseID) >= 3
``` | `INVENTORY` is the list of products (`SKU` = Stock Keeping Unit = Individual Product Stored in the warehouse) where every product has a `WarehouseID`. This `warehouseID` presumably determines where the product is stored.
By **Omit all SKU items**, it asks you to only display those products that are stored in minimum 3 places in the warehouse. This can be done with the `having` clause,
```
HAVING COUNT(WarehouseID) >= 3
```
I do not know the structure and data of your `INVENTORY` table, but simply put, consider data like this:
```
SKUID WareHouseID QuantityOnHand
1 1 10
1 2 10
2 1 10
1 3 5
2 2 20
```
In the above case, Product = 1 (SKUID), is stored in 3 different warehouses whereas product 2 is stored in 2 warehouses. Hence,
```
SKUID COUNT(WareHouseID) SUM(QuantityOnHand)
1 3 25
2 2 30
```
In this case, your query will only "Omit" product 1, and not the product 2. | Omit item from Sum SQL | [
"",
"sql",
"having",
""
] |
[**Structure tables and result query on sqlfiddle**](http://sqlfiddle.com/#!3/d59e5/1)
I want use query:
```
INSERT INTO Firm('name', 'name_original', 'id_city', 'id_service', 'id_firm')
VALUES
('РЭД-АВТО ООО', 'РЭД-АВТО ООО', '73041', '2', '1429'),
('УМ-3 ЗАО ', 'УМ-3 ЗАО ', '73041', '2', '49806'),
('ООО West Hole', 'РЭД-АВТО ООО', '73041', '2', '10004');
```
But i get errors:
```
Parameters supplied for object 'Firm' which is not a function. If the parameters are intended as a table hint, a WITH keyword is required.:
INSERT INTO Firm('name', 'name_original', 'id_city', 'id_service', 'id_firm')
VALUES
('РЭД-АВТО ООО', 'РЭД-АВТО ООО', '73041', '2', '1429'),
('УМ-3 ЗАО ', 'УМ-3 ЗАО ', '73041', '2', '49806'),
('ООО West Hole', 'РЭД-АВТО ООО', '73041', '2', '10004')
```
Tell me please why i get errors and how correct insert data ? | Remove the quotes around your column names.
```
INSERT INTO Firm(name, name_original, id_city, id_service, id_firm)
VALUES
('РЭД-АВТО ООО', 'РЭД-АВТО ООО', '73041', '2', '1429'),
('УМ-3 ЗАО ', 'УМ-3 ЗАО ', '73041', '2', '49806'),
('ООО West Hole', 'РЭД-АВТО ООО', '73041', '2', '10004');
``` | For Example:
`Insert into TableName ( Name,ID ) THEN Values ( 'Joe',2).`
**Note:** The data type of column name should match with the data that you are inserting. | Sql Parameters supplied for object 'Firm' which is not a function | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
My tables are:
```
frequents(bar,drinker);
likes(beer,drinker);
serves(bar,beer)
```
I want to "select pairs of drinkers who frequent exactly the same bars". I think I can write that query using only the frequents table (as it has both bar and drinker columns) with self joins; I tried but couldn't get it. I don't mind using other tables too to get the exact query. The query must select only drinkers who go to the same bars; in other words, they should have all their bars in common. The query must be in generalized form and should not depend on the data, which is why I didn't put any data.
```
DRINKER | BAR
____________________
John | Hyatt
Smith | Blue
William | Hilton
John | Geoffreys
Smith | Hyatt
Joe | Blue
Mike | Hilton
William | Dublin
Jeff | Hilton
Jake | Hilton
```
This is my frequents table. I need to select only Joe and Smith, and also Jake and Jeff, because they visit exactly the same bars. | The easiest way in MySQL is to use `group_concat()` to put the values together and compare:
```
select fd.bars, group_concat(fd.drinker) as drinkers
from (select f.drinker, group_concat(f.bar order by f.bar) as bars
from frequents f
group by f.drinker
) fd
group by bars
having count(*) > 1;
```
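The same idea can be sketched with Python's sqlite3, which also has `group_concat()` (SQLite's version lacks the `ORDER BY` inside the call, which is harmless here since only single-bar visitors end up grouped). Note that with this data the exact-match group is Jeff and Jake plus Mike, who likewise visits only Hilton; Joe and Smith are not an exact match because Smith also visits Hyatt:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE frequents (drinker TEXT, bar TEXT)")
conn.executemany("INSERT INTO frequents VALUES (?, ?)", [
    ("John", "Hyatt"), ("Smith", "Blue"), ("William", "Hilton"),
    ("John", "Geoffreys"), ("Smith", "Hyatt"), ("Joe", "Blue"),
    ("Mike", "Hilton"), ("William", "Dublin"), ("Jeff", "Hilton"),
    ("Jake", "Hilton"),
])

# Group each drinker's bars into one string, then group drinkers by that string.
groups = conn.execute("""
    SELECT bars, group_concat(drinker) AS drinkers
    FROM (SELECT drinker, group_concat(bar) AS bars
          FROM frequents
          GROUP BY drinker) fd
    GROUP BY bars
    HAVING COUNT(*) > 1
""").fetchall()
print(groups)  # one group: bar set 'Hilton', shared by Jeff, Jake and Mike
```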
EDIT
You can also do this using `join`s, but to do it right, you need a `full outer join` -- which MySQL does not support.
Another way is to count the number of bars that each goes to, do an inner join, and be sure that the counts match as well as the bars:
```
select f1.drinker, f2.drinker
from frequents f1 join
frequents f2
on f1.bar = f2.bar join
(select f.drinker, count(*) as numbars
from frequents f
group by f.drinker
) fd1
     on f1.drinker = fd1.drinker join
(select f.drinker, count(*) as numbars
from frequents f
group by f.drinker
) fd2
     on f2.drinker = fd2.drinker
group by f1.drinker, f2.drinker
having count(*) = max(fd1.numbars) and count(*) = max(fd2.numbars);
``` | This may help, however I'm using my knowledge from Microsoft SQL server. The syntax may differ from MYSQL. That being said you would need to JOIN the tables into themselves.
```
SELECT F.Bars, F.Drinkers, FB.Drinker
FROM frequents F
JOIN frequents FB ON (FB.Drinkers = F.Bars)
WHERE FB.Drinkers = F.Drinkers
```
basically I join the same table to itself saying the FB.Drinkers contains the bars that drinkers go to and F.Drinkers is the list of people who drink. I'm not 100% on the syntax but the idea is correct.
I haven't done SQL in 4 months. If you don't use it, you lose it :) | How to compare rows from same table in sql | [
"",
"mysql",
"sql",
""
] |
I am not an expert.
I have the following query, which involves 10 tables.
MainTable has 10 fields: the 1st is the primary key and the rest are foreign keys to 9 tables called TableE1 - TableE9.
The following query makes an outer join to each table; I want to optimize this query with indexes.
I want to know how we can optimize queries with indexes. This query fetches 10 lacs (1 million) records in 36 seconds; how much time can we shave off?
MainTable contains 10 lacs (1 million) records; tables TableE1 - TableE9 each contain 5000 records.
```
select M.RecID,
M.E1, E1.Descr as E1_D,
M.E2, E2.Descr as E2_D,
M.E3, E3.Descr as E3_D,
M.E4, E4.Descr as E4_D,
M.E5, E5.Descr as E5_D,
M.E6, E6.Descr as E6_D,
M.E7, E7.Descr as E7_D,
M.E8, E8.Descr as E8_D,
M.E9, E9.Descr as E9_D
from ((((((((tableMain M
Left Outer Join TableE1 E1 ON (E1.RecID = M.E1) )
Left Outer Join TableE2 E2 ON (E2.RecID = M.E2) )
Left Outer Join TableE3 E3 ON (E3.RecID = M.E3) )
Left Outer Join TableE4 E4 ON (E4.RecID = M.E4) )
Left Outer Join TableE5 E5 ON (E5.RecID = M.E5) )
Left Outer Join TableE6 E6 ON (E6.RecID = M.E6) )
Left Outer Join TableE7 E7 ON (E7.RecID = M.E7) )
Left Outer Join TableE8 E8 ON (E8.RecID = M.E8) )
Left Outer Join TableE9 E9 ON (E9.RecID = M.E9)
Order by RecID
``` | Indexes are probably not going to help this query very much, because the query has no filtering. You are retrieving a million records. How much of the time spent on the query is retrieving the values and how much is spent processing the query?
SQL Server has a good optimizer that will use sophisticated join algorithms. It is quite possible that the query will run pretty well even with no indexes.
That said, an index on each of the "E" tables with both `RecID` and `Descr` could help the query: `E1(RecID, Descr)`, `E2(RecID, Descr)`, and so on. These are covering indexes. For this query, SQL Server would use these indexes without having to read from the data pages. An index on only `RecID` would not work as well, because the `Descr` data would still need to be looked up on the data pages.
Note that these indexes would be unnecessary (redundant?) if `RecId` is already the primary key and `Descr` is the only column in the table.
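A quick way to see the covering-index effect outside SQL Server is SQLite's query plan output; this is a hedged sketch, with table and column names invented to mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableE1 (RecID INTEGER, Descr TEXT)")
conn.executemany("INSERT INTO TableE1 VALUES (?, ?)",
                 [(i, f"desc-{i}") for i in range(1000)])

# Composite index holding both the join key and the looked-up column:
conn.execute("CREATE INDEX ix_e1_cover ON TableE1 (RecID, Descr)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT Descr FROM TableE1 WHERE RecID = 42"
).fetchall()
print(plan)  # the detail column mentions 'USING COVERING INDEX ix_e1_cover'
```

"Covering" here means the lookup is satisfied entirely from the index, never touching the table rows; in SQL Server the equivalent would be a composite nonclustered index (or one using INCLUDE), as described above.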
EDIT:
This is too long for a comment (I think).
Here are some ideas for optimizing this query:
First, are all the rows necessary? For instance, can you just add a `top 1000` to get what you need? A lot of time is spent just passing the rows back to the application. Consider putting them into a temporary table (`select into`). That will probably run much faster.
Second, how much time is the `order by` taking? Try running the query without the `order by` to see if that is dominating the time.
Third, how long are the `descr` fields? If they are very long, even just a few thousand could be dominating the size of the data. Note "very long" here means many kbytes, not a few hundred bytes.
Fourth, are the `descr` fields `varchar()` or `char()` (or `nvarchar()` versus `nchar()`). `char()` and `nchar()` are very bad choices, because they occupy a lot of space in the result set.
Fifth (probably should be first), look at the execution plan. You have present a pretty simple scenario so I have assumed that the execution plan is a scan of the first table with index lookups into each of the other. If the plan doesn't look like this, then there may be opportunities for optimization.
EDIT II:
I will repeat. Transferring hundreds of megabytes from the server to an application will take time, and 30'ish seconds isn't unreasonable. (The return set has 10 ids = 40 bytes plus the description fields which are likely to be 100s of bytes per record.) The problem is the design of the layer between the database and the application, not the database performance. | If your tables `TableE1`-`TableE9` have lots of records, you need to create an index on `RecID` in all nine tables.
I guess you have many records and no indices currently, because 30 seconds is really slow for such a simple query. | How to optimize select query with index | [
"",
"sql",
"sql-server",
"indexing",
"query-optimization",
""
] |
I have the following extract of code used in SAS, and I want to write it in SQL Server to extract data.
```
substr(zipname,1,4) in("2000","9000","3000","1000");run;
```
How do I write this in SQL Server?
I tried and got this error:
> An expression of non-boolean type specified in a context where a
> condition is expected | In sql server, there's no `substr` function (it's [substring](http://msdn.microsoft.com/en-us/library/ms187748.aspx))
by the way, you need a complete query...
```
select blabla
from blibli
where substring(zipname, 1, 4) in ('2000', '9000', '3000', '1000')
```
assuming zipname is a varchar or something like that... | You need a table that you are getting the records from, and zipname would be a column in the table. The statement would be something like this:
```
select * from tablename where substring(zipname,1,4) in ('2000','9000','3000','1000')
``` | how to use substr in SQL Server? | [
"",
"sql",
"sql-server",
"sas",
""
] |
I have two tables.
table A:
```
code desc
001 sam
002 bob
003 mala
004 anna
```
table B:
```
code desc
001 marley
001 sam
002 bob
003 mala
004 anna
005 sana
```
I want to retrieve all the rows from both tables where the `code` value is common, regardless of the value of `desc`. That is, my final result should be:
```
001 marley
001 sam
002 bob
003 mala
004 anna
```
I tried this, but it's not returning the duplicate, i.e. `001 marley`.
```
SELECT COUNT(*)
FROM TABLEA
WHERE NOT EXISTS(SELECT * FROM TABLEB);
``` | Although I suspect you just want a MySQL option, here is the Oracle RDBMS solution. It uses a couple of neat features: the INTERSECT operator to produce a set of common values, and the WITH clause to improve the performance of the sub-query. It also employs the UNION operator, which produces a set of all the distinct values.
```
with cc as ( select code from a
intersect
select code from b )
select * from a
where code in ( select code from cc )
union
select * from b
where code in ( select code from cc )
/
```
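The `WITH` + `INTERSECT` construction above is standard enough to run on other engines too; here is a self-contained check using SQLite through Python (the column is renamed to `descr` because `desc` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (code TEXT, descr TEXT);
    CREATE TABLE b (code TEXT, descr TEXT);
    INSERT INTO a VALUES ('001','sam'),('002','bob'),('003','mala'),('004','anna');
    INSERT INTO b VALUES ('001','marley'),('001','sam'),('002','bob'),
                         ('003','mala'),('004','anna'),('005','sana');
""")

rows = conn.execute("""
    WITH cc AS (SELECT code FROM a INTERSECT SELECT code FROM b)
    SELECT * FROM a WHERE code IN (SELECT code FROM cc)
    UNION
    SELECT * FROM b WHERE code IN (SELECT code FROM cc)
    ORDER BY code, descr
""").fetchall()
print(rows)  # ('005', 'sana') is excluded; ('001', 'marley') is kept
```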
[Here is Teh Obliquitory Fiddle!](http://sqlfiddle.com/#!4/aaf41/1) | You can do
```
SELECT a.code, a.desc
FROM tablea a JOIN tableb b
ON a.code = b.code
UNION
SELECT b.code, b.desc
FROM tablea a JOIN tableb b
ON a.code = b.code
ORDER BY code, `desc`
```
Output:
```
| CODE | DESC |
|------|--------|
| 1 | marley |
| 1 | sam |
| 2 | bob |
| 3 | mala |
| 4 | anna |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/1b019/4)** demo | Retrieving duplicate from two tables in MySQL | [
"",
"mysql",
"sql",
"oracle",
""
] |
My problem is that Intellisense does not provide complete auto-suggest for the columns that I have in my tables.
Here is an example:

As you can see, in SSMS it does give me auto-suggest for my tables, but not for columns. I have read a couple of articles about solving some Intellisense issues, but nothing helped. Here are the things I tried, described in this article: <http://www.mssqltips.com/sqlservertip/2591/troubleshooting-intellisense-in-sql-server-management-studio-2012/>
Any suggestions would be greatly appreciated, thank you for your time! | IntelliSense can't predict which table you're going to select from, so it will wait until you have at least one table in the FROM clause; in the case of a join or other multi-table query, it will probably only populate the columns once you type an `alias.` prefix.
There's a good reason for this. Imagine if you have `CustomerID` or `InvoiceID` in 20 different tables in your database. Should it list this 20 times? Which one should you pick? Do you really want *all* the columns in your entire database in a drop-down list? In a lot of scenarios this will be a *very long* list. And not pretty either, in things like SharePoint, NAV Dynamics, etc.
If you're not happy with the way the native IntelliSense works, there are 3rd party tools that might do what you want, but I'm not sure what you want will actually help you work any better. | First, that's because at the time of the screenshot SSMS does not know from what object you are selecting. In other words, it cannot guess what columns you're interested in when there is not a `from` clause in your `select` statement. If you try to type in the columns in the following `select` statement...
```
select from dbo.Invoices
```
you will see that SSMS will start to pick up your columns because you have already specified a `from` clause, so SSMS knows how to suggest you column names...because there is a table specified in the `from` clause | SQL Server 2012 Intellisense issue | [
"",
"sql",
"sql-server-2012",
""
] |
I am executing the following sql. I get a syntax error which is (Incorrect syntax near '=')
The query executes fine and gives proper results when executed normally. I couldn't understand why. Please take a look.
```
DECLARE @pvchMachineId VARCHAR(100) = ''
DECLARE @pvchMake VARCHAR(100) = ''
DECLARE @sql NVARCHAR(1000)
SELECT @sql = ' SELECT TOP 20 x.intId, x.vchMachineId, x.AUDenom, x.intGroupId,
x.vchMake, x.vchModel, x.mCurrency
from dbo.Machine x
inner join
(select max(m1.AUDenom) as audenom, m1.vchMachineId
from dbo.Machine m1
left JOIN dbo.ImportedFile ife on m1.intImportedFileId = ife.intId
WHERE ife.dtFileDate >= ''1-1-2013'' AND ife.dtFileDate <= ''1-29-2014'' AND
--following two lines cause the error
(' + @pvchMake + '= ''0'' OR m1.vchMake = @pvchMake) AND
(' + @pvchMachineId +'= ''0'' OR m1.vchMachineId = @pvchMachineId)
group by vchMachineId) y
on x.AUDenom = y.audenom and x.vchMachineId = y.vchMachineId
ORDER BY x.AUDenom DESC'
``` | Update your query to the following
```
(@pvchMake = ''0'' OR m1.vchMake = @pvchMake) AND
(@pvchMachineId = ''0'' OR m1.vchMachineId = @pvchMachineId)
```
Then, when you go to execute, just pass the values in as parameters to the sp_executesql function.
```
EXEC sp_executesql @sql
,N'@pvchMachineId VARCHAR(100), @pvchMake VARCHAR(100)'
,@pvchMachineId,@pvchMake
```
or this which is cleaner
```
Declare @ParametersDefinition NVARCHAR(max) = N'@pvchMachineId VARCHAR(100), @pvchMake VARCHAR(100)'
EXEC sp_executesql @sql, @ParametersDefinition, @pvchMachineId,@pvchMake
```
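The same principle applies in client code as well: keep placeholders in the SQL text and pass the values separately. Here is a small SQLite sketch of the `= '0'` sentinel filter used above (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Machine (vchMachineId TEXT, vchMake TEXT)")
conn.executemany("INSERT INTO Machine VALUES (?, ?)",
                 [("M1", "Acme"), ("M2", "Acme"), ("M3", "Globex")])

# The query text keeps placeholders; values travel separately (no concatenation).
sql = """SELECT vchMachineId FROM Machine
         WHERE (:make = '0' OR vchMake = :make)"""

all_rows = conn.execute(sql, {"make": "0"}).fetchall()     # '0' sentinel: no filter
acme_rows = conn.execute(sql, {"make": "Acme"}).fetchall()
print(len(all_rows), len(acme_rows))  # 3 2
```

Because the value never becomes part of the SQL text, malicious input cannot change the shape of the statement.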
In the end you do not want to concatenate your dynamic SQL statement; it opens it up to SQL injection. Even though it is a valid option, it should be avoided at all costs. | This statement:
```
'(' + @pvchMake + '= ''0'' OR m1.vchMake = @pvchMake)'
```
Will output, since the variables are not initialized by anything else than `''` :
```
(= '0' OR m1.vchMake = @pvchMake)
```
Which is not correct syntaxically.
You should use :
```
'(''' + @pvchMake + '''= ''0'' OR m1.vchMake = @pvchMake)'
```
Which would output :
```
(''= '0' OR m1.vchMake = @pvchMake)
``` | Dynamic sql is giving syntax error. | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a requirement to convert a binary number to decimal. The converted decimal will be no greater than 256 (100000000 in binary).
It involves an iOS app where, to read a problem investigation report, four binary integers are dynamically chosen based on that (some UI stuff with 0s and 1s).
Then we concatenate them with '.' as the delimiter (similar to an IP address).
Based on the final sequence formed, we fetch the report id after converting the parts into decimals.
Say, `10.100.1.11`(Input) could become `2.4.1.3`(Output)
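The intended mapping itself is easy to sanity-check outside the database; for instance, a small Python sketch:

```python
def binary_quad_to_decimal(s: str) -> str:
    # Each dot-separated group is a base-2 number; int(x, 2) decodes it.
    return ".".join(str(int(part, 2)) for part in s.split("."))

print(binary_quad_to_decimal("10.100.1.11"))  # 2.4.1.3
print(binary_quad_to_decimal("100000000"))    # 256
```

So 256 is indeed the decimal value of `100000000`, matching the SQL session below.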
I have been going through numerous references and Objective-C, and have literally lost my mind. Being in iOS dev, I couldn't get PL/SQL compiled, so I have to accomplish this with plain SQL.
**EDIT:**
Luckily, I was able to write something like this. (but just one number at a time :( )
```
SQL> var NUM number;
SQL> exec :NUM := 100000000;
PL/SQL procedure successfully completed.
SQL> SELECT SUM(value)
FROM
(SELECT POWER(2,LENGTH(TO_CHAR(:NUM))-level)*to_number(SUBSTR(TO_CHAR(:NUM),level,1),'FM9') AS value
FROM DUAL
CONNECT BY level <= LENGTH(TO_CHAR(:NUM))
);
SUM(VALUE)
----------
256
``` | There are different ways you can approach this. So, I'm choosing what might seem like a rather arcane way. The motivation for this is that `to_number()` accepts hexadecimal formats but not binary formats. What would be so hard about supporting binary and octal as well as hex? Well, that's not a question for me to ask. Oracle doesn't.
But, we can sort of readily convert from binary to hex. You are only dealing with 8 binary digits, so that is only two hex digits. Here is the code:
```
with bin2hex as (
select '0000' as bin, '0' as hex from dual union all
select '0001' as bin, '1' as hex from dual union all
select '0010' as bin, '2' as hex from dual union all
select '0011' as bin, '3' as hex from dual union all
select '0100' as bin, '4' as hex from dual union all
select '0101' as bin, '5' as hex from dual union all
select '0110' as bin, '6' as hex from dual union all
select '0111' as bin, '7' as hex from dual union all
select '1000' as bin, '8' as hex from dual union all
select '1001' as bin, '9' as hex from dual union all
select '1010' as bin, 'A' as hex from dual union all
select '1011' as bin, 'B' as hex from dual union all
select '1100' as bin, 'C' as hex from dual union all
select '1101' as bin, 'D' as hex from dual union all
select '1110' as bin, 'E' as hex from dual union all
select '1111' as bin, 'F' as hex from dual
)
select t.*, c1.bin as bin1, c2.bin as bin2, c1.hex as hex1, c2.hex as hex2,
to_number(c2.hex||c1.hex, 'xx')
from (select '10010010' as num from dual union all
select '10010' from dual
) t left outer join
bin2hex c1
on substr('00000000'||t.num, -4) = c1.bin left outer join
bin2hex c2
on substr('00000000'||t.num, -8, 4) = c2.bin;
``` | Just for fun, a version using a recursive CTE (so requires 11gR2), because apparently I look for any excuse to play with these at the moment:
```
with data as (
select '10.100.1.11' as str from dual
),
t as (
select level as pos,
regexp_substr(str, '[^\.]+', 1, level) as val
from data
connect by level <= regexp_count(str, '[^\.]+')
),
r (pos, val, len, lvl, pos_val) as (
select pos, val, length(val), 0, 0
from t
union all
select pos, val, len, lvl + 1,
power(2, lvl) * substr(val, len - lvl, 1)
from r
where lvl < length(val)
)
select listagg(sum(pos_val), '.') within group (order by pos)
from r
group by pos;
2.4.1.3
```
[SQL Fiddle](http://sqlfiddle.com/#!4/d41d8/24348).
Still not as arcane as Gordon's though... | Convert binary to decimal in the given sequence | [
"",
"sql",
"oracle",
"binary",
""
] |
I am trying to access values from 2 tables in a single query, e.g. customer name, id, and address. The customer's state and country live in another table; the Customer table stores only the state and country codes. I am trying to get the whole address into a single column in the query. Currently I can access customer name, id, and address (as a single column) in the query, but I am not able to get the state and country.
```
SELECT CustomerName, CustomerId, (BFlatNo +','+ BPremises +','+ BStreet +','+ BArea +','+ BLocation +','+ BCity +','+ BState +','+ BCountry) as Address FROM Customer WHERE CustomerId=11;
```
This code works successfully and I got this result:

I am getting 24 and 1, which are the ids of the state and country.
So I modified the code again to get the exact address in a single column: the state 'delhi' instead of 24 and the country 'india' instead of 1.
```
SELECT CustomerName, CustomerId, (BFlatNo +','+ BPremises +','+ BStreet +','+ BArea +','+ BLocation +','+ BCity +','+ BState +','+ (select CountryName from Country where CountryIdId=Customer.Country) AS Country) AS Address FROM Customer;
```
This code shows a syntax error! How do I solve this? | I'm not sure about MS Access syntax vs. SQL Server, but you can give this a shot:
```
SELECT
CustomerName,
CustomerId,
(
(BFlatNo +','+ BPremises +','+ BStreet +','+ BArea +','+ BLocation +','+ BCity +','+ BState)
+','+
(select top 1 CountryName from Country where CountryIdId=Customer.Country)
)
AS Address
FROM Customer;
```
Basically you don't need to say "as Country" as you're doing in the subquery, and you should return the top 1 result because if there are more results this will cause a problem. | If you want to `SELECT` from multiple tables then include the tables in the `FROM` clause or use a `JOIN`.
```
SELECT CustomerName,
CustomerId,
(BFlatNo & ',' & BPremises & ',' & BStreet & ',' & BArea & ',' & BLocation & ',' & BCity & ',' & BState & ',' & CountryName) AS Address
FROM Customer
INNER JOIN
Country
ON Country.CountryId = Customer.Country;
``` | Error in Sql Query in MS Access Database? | [
"",
"sql",
"ms-access",
""
] |
I'm working within one table of a MS Access DB. I would like to use an iif statement to determine if a value from Field A conforms to a valid format (in this case, either one or two numbers followed by a letter). If it does, I would like to take just the numeric portion of Field A (e.g., if an entry for Field A is "15B", I would like to consider only the "15" part) and insert it into a currently empty Field B that I have created.
How can I write a MS Access query that only considers the numeric portion of Field A and then inserts it into Field B? | For the pattern you described, you can build an update query with `like`, like this:
```
UPDATE tbl1 SET tbl1.ValB =
Switch([ValA] Like "#[a-z]",Left([valA],1),
[valA] Like "##[a-z]",Left([valA],2),True,NULL);
```
Or use the `Val` function, which will try to convert as much as possible from the string into a number:
```
UPDATE tbl1 SET tbl1.ValB =
iif(valA like "#[a-z]" or valA like "##[a-z]",val(ValA),NULL)
``` | For the validation part of your question, you can use `Like` pattern-matching. Here is an example from the Immediate window.
```
? "15A" Like "#[A-Z]" Or "15A" Like "##[A-Z]"
True
? "4B" Like "#[A-Z]" Or "4B" Like "##[A-Z]"
True
? "123A" Like "#[A-Z]" Or "123A" Like "##[A-Z]"
False
? "15AB" Like "#[A-Z]" Or "15AB" Like "##[A-Z]"
False
? "15!" Like "#[A-Z]" Or "15!" Like "##[A-Z]"
False
```
If those tests correctly express your intent, you could use this as the *Validation Rule* for `Field A`:
```
Like "#[A-Z]" Or Like "##[A-Z]"
```
As for `Field B`, you could make that a field expression in a query.
```
SELECT
[Field A],
Val([Field A]) AS [Field B]
FROM YourTable;
```
Use that query anywhere you need to see `[Field B]`. With that approach, if `[Field B]` doesn't exist in your table, you needn't be concerned about updating `[Field B]` stored values whenever `[Field A]` values change. | Inserting part of a value from one field into another | [
"",
"sql",
"ms-access",
""
] |
Here's my query:
```
INSERT INTO cities (name, country_id)
VALUES ('New York', 11), ('London', 215), ('Moscow', 66)
```
Is it right to say that rows will be inserted exactly in the order of the query? So, New York will be inserted first, then London, then Moscow. Is it possible that London will be inserted first?
I can't find this information in postgresql documentation.
edit:
So, I need to know is it safe to think that first id will be for New York, second for London, third for Moscow.
edit2:
I just need to associate city with returned id.
Full query:
```
INSERT INTO cities (name, country_id)
VALUES ('New York', 11), ('London', 215), ('Moscow', 66)
RETURNING id
```
So, it's not possible for multiple insertion?
edit3:
My question duplicates this one: [Is INSERT RETURNING guaranteed to return things in the "right" order?](https://stackoverflow.com/questions/5439293/is-insert-returning-guaranteed-to-return-things-in-the-right-order)
So, I guess to make sure I need to make each insert in its own query.
Thanks. | It is possible to return multiple values, so you could match them up that way, for example:
```
INSERT INTO cities (name, country_id)
VALUES ('New York', 11), ('London', 215), ('Moscow', 66)
RETURNING id, name, country_id
```
just pick the columns you want to return. | the data are not stored in specified order. | INSERT multiple values in one query (order) | [
"",
"sql",
"postgresql",
""
] |
I need to use the system function `MAX()` in a `WHERE` clause, so I tried the code below:
```
select *
from tb_sales_entry_total_product
where Sno = MAX(Sno)
```
but it's showing error
> An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.
What's my error? Thanks | Use a sub-query:
```
SELECT p1.*
FROM Tb_sales_entry_total_product p1
WHERE p1.Sno = (SELECT Max(p2.Sno)
FROM Tb_sales_entry_total_product p2)
``` | Try this:
```
select * from tb_sales_entry_total_product p1
where p1.Sno = (select MAX(p2.Sno) from tb_sales_entry_total_product p2)
``` | How to use system function in where SQL Server 2008? | [
"",
"sql",
"sql-server",
""
] |
Well, I am stuck with a simple but very irritating problem.
I am creating a program that will save documents to *SharePoint*; when I click, it will save the document's:
```
name,
date,
username,
location,
version
```
to SQL. Now here is my problem: each new document's version needs to be incremented by 1.
So let's say we have version 1; it must become version 2 when clicked,
and I can't use auto-increment because there might be different docs
with different names and the version needs to be added according to the correct doc name.
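What is being described, a next version number per document name, can be sketched in a couple of SQL statements; here is a runnable SQLite illustration with a made-up table name (under concurrent writers this would also need a transaction or lock):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (name TEXT, version INTEGER)")

def insert_doc(name):
    # COALESCE(MAX(version), 0) + 1 yields 1 for a new name, max+1 otherwise.
    conn.execute("""INSERT INTO docs (name, version)
                    SELECT ?, COALESCE(MAX(version), 0) + 1
                    FROM docs WHERE name = ?""", (name, name))

for name in ["Manual", "Manual", "Report", "Manual"]:
    insert_doc(name)

print(conn.execute(
    "SELECT name, version FROM docs ORDER BY name, version").fetchall())
# [('Manual', 1), ('Manual', 2), ('Manual', 3), ('Report', 1)]
```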
Now be warned: I am tired as hell and my SQL proc is not pretty:
```
@username varchar(100),
@Type varchar(100),
@Name varchar(100),
@Version int,
@Location varchar(100)
AS
SET @VERSION = 1
INSERT INTO TABLE (Username,[Date],[Type],Name,[Version],Location)
VALUES('Wolf',GetDate(),'pdf','doc',@version ,@Location)
SELECT max([Version]) FROM table
```
This is roughly what it will look like. Now I just need to be able to add 1 to the version each time a new doc is inserted with the same name; let's say, for argument's sake, it's a Manual.
Aah, to the folks editing this: please note I have not slept in 48 hours, I am tired, and spelling is the least of my worries. | ```
@username varchar(100),
@Type varchar(100),
@Name varchar(100),
@Version int,
@Location varchar(100)
as
IF NOT EXISTS(select * from table where type = 'USERGUIDE')
BEGIN
SET @VERSION = 1
END ELSE BEGIN
SET @VERSION = (SELECT MAX(VERSION) + 1 FROM table WHERE TYPE = 'USERGUIDE')
END
SELECT @VERSION [VERSION]
``` | The problem with your code is that you are always setting the Version Number to 1 before the insert:
```
SET @VERSION = 1
```
You need to get the current max version from your table *prior* to the insert. Try this:
```
Declare @username varchar(100) = 'Wolf',
@Type varchar(100) = 'pdf',
@Name varchar(100) = 'doc',
@Version int,
@Location varchar(100) = 'TestLoc'
Select @Version = IsNull(Max([Version]), 0) + 1
From Test
Where Name = @Name
INSERT INTO Test (Username,[Date],[Type],Name,[Version],Location)
VALUES(@username,GetDate(),@type,@name,@version ,@Location)
Select Max([Version]) From Test
Where Name = @Name
```
You can see this in action on [SqlFiddle](http://sqlfiddle.com/#!6/592e9/4) | how to add 1 to a value each time a new item is inserted SQL | [
"",
"sql",
"vb.net",
""
] |
What is faster:
Using a join to get user details for posts, or getting only the post data (which includes the userid), collecting the userIDs, and after the posts are queried running one:
SELECT x,y,z FROM users WHERE id in (1,2,3,4,5,6,7...etc.)
Short:
What is better?:
```
SELECT x,y,z,userid
FROM posts
WHERE id > x
ORDER BY id
LIMIT 20
SELECT x,y,z
FROM users
WHERE id IN (1,2,3,4,5,6,7...etc.)
```
or:
```
SELECT p.x,p.y,p.z, u.username,u.useretc,u.user.etc
FROM posts p
INNER JOIN users u
ON u.id = p.userid
AND id > n
ORDER BY id
LIMIT 20
```
In some scenarios this could reduce the querying of the user table to 2 instead of 20 times, e.g. a page in a discussion where only two users posted. | Anyway, the second way is better:
1. You have only one call to the database instead of two, so the channel between your DB and application server is less loaded.
2. The second way should usually be faster and less memory-consuming, because the analyser can decide better how to manage its resources (it has all the requirements in one query).
3. In the first example you force the database to use non-cached queries (the second query of the first example is not constant, because the in-list has a varying number of inputs), so it parses the second query more often, which leads to performance losses. | If I'm not wrong... normally dealing with INNER JOIN is more readable and cleaner. | Split a mysql query into two to avoid a join -> sum ids and use where in | [
"",
"mysql",
"sql",
"join",
"where-in",
""
] |
Will my code work for solving the following problem: the total cost of all items ordered, for all orders placed in July 1996? Looking at 2 tables (`Orders` and `OrderDetails`), this is what I have so far.
```
SELECT
Orders.OrderID, Customers.ContactName, Orders.OrderDate
FROM
Customers
INNER JOIN
Orders ON Customers.CustomerID = Orders.CustomerID
INNER JOIN
[Order Details] ON Orders.OrderID = [Order Details].OrderID
WHERE
(Orders.OrderDate BETWEEN CONVERT(DATETIME, '1996-07-01 00:00:00', 102) AND CONVERT(DATETIME, '1996-07-31 00:00:00', 102))
AND SUM(Quantity * UnitPrice) AS grand_total;
```
The intent is to find the sum of each row and keep that number, then sum the rows to produce a grand total. When I run the query, it definitely doesn't produce what it should. | Try this:
```
SELECT
o.OrderID, c.ContactName, o.OrderDate, SUM(od.Quantity * od.UnitPrice) AS grand_total
FROM
dbo.Customers c
INNER JOIN
dbo.Orders o ON c.CustomerID = o.CustomerID
INNER JOIN
dbo.[Order Details] od ON o.OrderID = od.OrderID
WHERE
o.OrderDate BETWEEN {ts '1996-07-01 00:00:00.000'} AND {ts '1996-07-31 23:59:59.997'}
GROUP BY
GROUPING SETS ((o.OrderID, c.ContactName, o.OrderDate), ())
```
`GROUPING SETS ((o.OrderID, c.ContactName, o.OrderDate), ...)` will group the source rows by `o.OrderID, c.ContactName, o.OrderDate` and, also, will compute the `SUM` for every `OrderID, ContactName, OrderDate` pair of values.
`GROUPING SETS ((...), ())` instructs SQL Server to compute, also, the grand total (total of all order totals):
```
SELECT OrderID, SUM(OrderDetailValue) AS OrderTotal
FROM (
SELECT 11, 1, 'A', 1 UNION ALL
SELECT 12, 1, 'B', 10 UNION ALL
SELECT 13, 2, 'A', 100
) AS Orders(OrderDetailID, OrderID, ProductName, OrderDetailValue)
GROUP BY GROUPING SETS ((OrderID), ());
```
Results:
```
OrderID OrderTotal
------- ----------
1 11 <-- Total generated by GROUPING SETS ((OrderID), ...)
2 100 <-- Total generated by GROUPING SETS ((OrderID), ...)
NULL 111 <-- Total generated by GROUPING SETS (..., ())
``` | There are many ways to do this.
GROUP BY ROLLUP - <http://technet.microsoft.com/en-us/library/ms177673.aspx>
GROUPING SETS - <http://technet.microsoft.com/en-us/library/bb522495(v=SQL.105).aspx>
OVER - <http://technet.microsoft.com/en-us/library/ms189461.aspx>
Do not use Shiva's solution since COMPUTE is defunct in SQL Server 2012!
Here are some issues that I corrected in my solution.
```
1 - Use table aliases
2 - Do not use between or convert on date ranges.
It will not be SARGABLE.
```
Since you were not specific, I chose a simple sum by each customer's order id, with a month-to-date total for the customer regardless of the order id.
To solve this, I used an OVER clause. Since I did not have test data or tables from you, it is your homework to check the solution for syntax errors.
```
-- Customer sum by order id, total sum by customer.
SELECT
C.ContactName,
O.OrderID,
O.OrderDate,
SUM(D.Quantity * D.UnitPrice) AS OrderTotal,
SUM(D.Quantity * D.UnitPrice) OVER (PARTITION BY C.ContactName) as CustomerTotal
FROM Customers as c INNER JOIN Orders as O
ON C.CustomerID = O.CustomerID
INNER JOIN [Order Details] D
ON O.OrderID = D.OrderID
WHERE
O.OrderDate >= '1996-07-01 00:00:00' AND
O.OrderDate < '1996-08-01 00:00:00'
GROUP BY
C.ContactName,
O.OrderID,
O.OrderDate
``` | SQL Subtotal and Grand Totals | [
"",
"sql",
"sql-server",
""
] |
I have a table with an id column. My query is
```
select id from tb_abc where title='xyz'
```
the result is
```
1
3
6
8
```
But what I want is to add a letter to the selected id value,
like
```
1d
3d
6d
8d
```
The letter d is fixed and declared in the query. Kindly guide. | ```
select concat(id,'d') as id
from tb_abc
where title='xyz'
```
### [SQLFiddle](http://sqlfiddle.com/#!2/d41d8/30679) | ```
SELECT CONCAT(id,'d') AS id FROM tb_abc WHERE title = 'xyz'
```
This will work for you.
Earlier I had posted the query for MSSQL. | Write a query to select a column with a additional letter | [
"",
"mysql",
"sql",
"t-sql",
""
] |
I have the names of primary keys in a variable and I need to find the table to which they belong. The db has many tables, so a linear search is not an option. | You can try this out:
```
SELECT table_name
FROM information_Schema.columns
WHERE column_name='dept_id'
and ordinal_position = 1;
``` | You can use the `information_schema` tables. If the primary key name is the first column in the table, you can just do:
```
select table_name
from information_schema.columns
where column_name in (<your list here>) and
ordinal_position = 1;
```
Otherwise, you have to go through the constraints to get what you want. Something like:
```
select kcu.table_name, kcu.column_name
from information_schema.table_constraints tc join
information_schema.key_column_usage kcu
on tc.constraint_name = kcu.constraint_name and
tc.table_name = kcu.table_name
where tc.constraint_type = 'PRIMARY KEY' and
column_name in (<your list here>);
```
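`information_schema` is engine-specific; as a runnable illustration of the same lookup, SQLite exposes its catalog through `sqlite_master` and `PRAGMA table_info` instead (table names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (emp_id  INTEGER PRIMARY KEY, dept_id INTEGER);
""")

def tables_with_pk(column_name):
    found = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for t in tables:
        # table_info rows: (cid, name, type, notnull, dflt_value, pk)
        for cid, name, ctype, notnull, dflt, pk in conn.execute(
                f"PRAGMA table_info({t})"):
            if name == column_name and pk:
                found.append(t)
    return found

print(tables_with_pk("dept_id"))  # ['dept']  (emp has dept_id, but not as its PK)
```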
You can also do this using the system tables and views. | How to find table name using primary key name in sql? | [
"",
"sql",
"sql-server",
"t-sql",
"key",
""
] |
this is my table layout simplified:
table1: pID (pkey), data
table2: rowID (pkey), pID (fkey), data, date
I want to select some rows from table1 joining one row from table2 per pID for the most recent date for that pID.
I currently do this with the following query:
```
SELECT * FROM table1 as a
LEFT JOIN table2 AS b ON b.rowID = (SELECT TOP(1) rowID FROM table2 WHERE pID = a.pID ORDER BY date DESC)
```
This way of working is slow, probably because it has to do a subquery on each row of table 1. Is there a way to improve performance on this or do it another way? | You can try something along these lines: use the subquery to get the latest row based on the date field (grouping by pID), then join that with the first table. This way the subquery does not have to be executed for *each* row of Table1, which will result in better performance:
```
Select *
FROM Table1 a
INNER JOIN
(
SELECT pID, Max(Date) FROM Table2
GROUP BY pID
) b
ON a.pID = b.pID
```
I have provided the sample SQL for one column using **group by**; in case you need additional columns, add them to the GROUP BY clause. Hope this helps. | Use the code below, and note that I added the ORDER BY Date DESC to get the most recent data:
```
select *
from table1 a
inner join table2 b on a.pID=b.pID
where b.rowID in(select top(1) from table2 t where t.pID=a.pID order by Date desc)
``` | tsql: alternative to select subquery in join | [
"",
"sql",
"sql-server",
""
] |
Can anyone tell me what is going on in this function??
In the following code snippet, `user.Id = 0`, `id.Value = 0` and `id.SqlDbType = Int`.. as expected since `user.Id` is an int field.
However, `error.Value = null` and `error.SqlDbType = BigInt`. What gives? If I use non-zero it detects an int and the correct value.
Note: the Value properties are the same before and after declaring the parameter direction.
```
public static long InsertUpdate(User user) {
SqlParameter id = new SqlParameter("@id", user.Id);
id.Direction = ParameterDirection.InputOutput;
cmd.Parameters.Add(id);
SqlParameter error = new SqlParameter("@error_code", 0);
error.Direction = ParameterDirection.Output;
cmd.Parameters.Add(error);
.... other stuff
}
```
As well, if @SET @error\_Code = 0 in the sproc, error.Value = NULL and error.SqlDbType = NVarChar AFTER the procedure runs. If I set it to an integer I get an Int type.
UPDATE:
After specifying SqlDbType.Int the parameter now has the correct SqlDbType before and after the command... however the stored procedure is still setting @error\_code = null when I in fact set it to 0.
UPDATE:
When the sproc executes the SELECT statement the @error_code parameter is always returned as null, regardless of whether or not it has been set... this only happens when there's a select statement...
Here is the procedure to reproduce:
```
ALTER PROCEDURE [dbo].[usp_user_insert_v1]
@username VARCHAR(255),
@password VARCHAR(255),
@gender CHAR(1),
@birthday DATETIME,
@error_code INT OUTPUT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @default_dt DATETIME
EXEC @default_dt = uf_get_default_date
DECLARE @dt DATETIME = GETUTCDATE()
INSERT INTO users(username, password, gender, birthday, create_dt, last_login_dt, update_dt, deleted)
VALUES(@username, @password, @gender, @birthday, @dt, @default_dt, @default_dt, 0)
SELECT * FROM users WHERE id = SCOPE_IDENTITY()
SET @error_code = 3
RETURN
END
```
***SOLUTION?***
<http://forums.asp.net/t/1208409.aspx?Interesting+problem+with+getting+OUTPUT+parameters+from+SQL+Server+using+C+>
Found this link on the ASP forums... apparently you can't read the output parameter until you have read all the results from the SqlDataReader... very unfortunate for me since I decide whether or not I even WANT to read the results based on the output param... | Both of the current answers are slightly incorrect because they're based on the assumption that the constructor being called for your `error` object is the [`(string,object)`](http://msdn.microsoft.com/en-us/library/0881fz2y%28v=vs.110%29.aspx) one. This is not the case. A literal `0` can be converted to any enum type1, and such a conversion would be preferred over a conversion to `object`. So the constructor being called is the [`(string,SqlDbType)`](http://msdn.microsoft.com/en-us/library/h8f14f0z%28v=vs.110%29.aspx) constructor.
So the type is set to `BigInt` because that's the `0` value for the [`SqlDbType`](http://msdn.microsoft.com/en-us/library/system.data.sqldbtype%28v=vs.110%29.aspx) enumeration, and the `Value` is null because you have no code that attempts to set the value.
```
SqlParameter error = new SqlParameter("@error_code", (object)0);
```
should cause it to select the correct overload.
---
Demo:
```
using System;
using System.Data;
namespace ConsoleApplication
{
internal class Program
{
private static void Main()
{
var a = new ABC("ignore", 0);
var b = new ABC("ignore", (object)0);
var c = new ABC("ignore", 1);
int i = 0;
var d = new ABC("ignore", i);
Console.ReadLine();
}
}
public class ABC
{
public ABC(string ignore, object value)
{
Console.WriteLine("Object");
}
public ABC(string ignore, SqlDbType value)
{
Console.WriteLine("SqlDbType");
}
}
}
```
Prints:
```
SqlDbType
Object
Object
Object
```
---
¹ From the [C# Language specification, version 5](http://www.microsoft.com/en-us/download/details.aspx?id=7029), section 1.10 (that is, just in the *introduction* to the language, not buried deep down in the language-lawyery bits):
> In order for the default value of an enum type to be easily available, the literal 0 implicitly converts to any enum type. Thus, the following is permitted.
```
Color c = 0;
```
I'd have also thought this important enough to be in the Language Reference on MSDN but haven't found a definitive source yet. | From [SqlParameter.Value on MSDN](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.value(v=vs.110).aspx)
> For output and return value parameters, the value is set on completion of the SqlCommand
i.e. I wouldn't rely on type inference to set the return type implicitly.
I would explicitly set the type of the output parameter:
```
var error = new SqlParameter("@error_code", SqlDbType.Int)
{
Direction = ParameterDirection.Output
};
```
**Edit**
After some reflection of `SqlParameter`:
The `BigInt` is easy to explain - it is the default `SqlDbType`, and the `SqlParameter(string parameterName, object value)` ctor doesn't overwrite this value.
```
public enum SqlDbType
{
BigInt = 0,
...
```
Re: `@error_code` is returned as NULL
The only thing I can think of is that the PROC fails to complete cleanly. Try moving the `SET @error_code = 0` above the `EXEC @default_dt = uf_get_default_date` ?
**Edit**
Confirmed, @Damien's point is correct
```
SqlParameter error = new SqlParameter("@error_code", 0);
```
Actually calls this ctor:
```
public SqlParameter(string parameterName, SqlDbType dbType)
```
whereas
```
SqlParameter error = new SqlParameter("@error_code", 1234);
```
calls
```
public SqlParameter(string parameterName, object value)
```
Reason : [0 is implicitly castable to enum.](https://stackoverflow.com/questions/2043554/method-overload-resolution-unexpected-behavior) | .NET SqlParameter constructor inconsistent? | [
"",
"sql",
".net",
"stored-procedures",
"sqlparameter",
""
] |
```
DECLARE @CityId AS VARCHAR(20) = NULL
DECLARE @CityList AS VARCHAR(20) = '1, 2, 3, 4, 5';
IF (@CityId IS NULL)
SET @CityId = @CityList;
SELECT *
FROM City
WHERE CityID IN (@CityId)
```
I have a stored procedure that lists all cities. But if the parameter is set, then it should display only information about that specific city. CityID in the City table is bigint. When CityId is left null, it gives an error saying '***Error converting data type varchar to bigint.***'
Note: If I construct the following query, everything seems OK.
```
SELECT * FROM City WHERE CityID IN (1, 2, 3, 4, 5)
```
but if I go ahead with the following query, it gives an error.
```
SELECT * FROM City WHERE CityID IN ('1, 2, 3, 4, 5')
```
I guess I should be constructing int array in this case but I don't know how to do that.
Any suggestions? | You can use dynamic SQL
```
exec('SELECT * FROM City WHERE CityID IN (' + @CityId + ')')
``` | Try this:
```
IF (@CityId IS NULL)
SET @CityId = ',' + REPLACE(@CityList, ' ', '') + ',';
ELSE
SET @CityId = ',' + @CityId + ',';
SELECT *
FROM City
WHERE charindex(',' + CAST(CityID as nvarchar(20)) + ',', @CityId) > 0
``` | SELECT Cities in stored procedure | [
"",
"sql",
""
] |
SQL query:
```
SELECT *
FROM Account
WHERE (type <>100000002 ? Id='something': Id=null)
```
but it shows the error:
> Incorrect syntax near '?'
Please help me. | This is for SQL Server. `IIF` is not available in SQL Server 2008 or earlier.
```
SELECT *
FROM Account
WHERE
Id = IIF(type <> 100000002, 'something', NULL)
```
If you are using SQL Server 2008 or earlier, then try this.
```
SELECT *
FROM Account
WHERE (Id= CASE WHEN type <>100000002 THEN 'something' ELSE null END)
``` | You can do this :
```
SELECT *
FROM Account
where (type <>100000002 and Id='something') or (type =100000002 and id is null)
```
or
```
SELECT *
FROM Account
where isnull(id,'_null_')= case when type <>100000002 then 'something' else isnull(id,'_null_') end
``` | How to use ternary operator in SQL Server 2008? | [
"",
"sql",
"sql-server",
""
] |
When I do this select:
```
SELECT tabuoj.id, tabuoj.tabuo, montritaj_tabuoj.id, montritaj_tabuoj.id_de_tabuo
FROM tabuoj LEFT JOIN montritaj_tabuoj
ON (tabuoj.id=montritaj_tabuoj.id_de_tabuo)
```
I get:
```
id | tabuo | id | id_de_tabuo |
1 | dom | 2 | 1 |
2 | samochód | null | null |
3 | okno | 1 | 3 |
```
but if I add a WHERE in which I compare the id\_de\_tabuo column to null, I don't get any results.
```
SELECT tabuoj.id, tabuoj.tabuo, montritaj_tabuoj.id, montritaj_tabuoj.id_de_tabuo
FROM tabuoj LEFT JOIN montritaj_tabuoj
ON (tabuoj.id=montritaj_tabuoj.id_de_tabuo)
WHERE montritaj_tabuoj.id_de_tabuo=null
```
If I compare for example 1 like this:
```
SELECT tabuoj.id, tabuoj.tabuo, montritaj_tabuoj.id, montritaj_tabuoj.id_de_tabuo
FROM tabuoj LEFT JOIN montritaj_tabuoj
ON (tabuoj.id=montritaj_tabuoj.id_de_tabuo)
WHERE montritaj_tabuoj.id_de_tabuo=1
```
I get the correct result:
```
id | tabuo | id | id_de_tabuo |
1 | dom | 2 | 1 |
```
But I need rows with null in the id\_de\_tabuo column. What am I doing wrong? | Try with `IS NULL` instead of `= null` like this
```
SELECT tabuoj.id, tabuoj.tabuo, montritaj_tabuoj.id, montritaj_tabuoj.id_de_tabuo
FROM tabuoj LEFT JOIN montritaj_tabuoj
ON (tabuoj.id=montritaj_tabuoj.id_de_tabuo)
WHERE montritaj_tabuoj.id_de_tabuo is null
```
Reason: Actually
`montritaj_tabuoj.id_de_tabuo is null` checks that the value is null, whereas
`montritaj_tabuoj.id_de_tabuo = null` checks that the value is equal to NULL which is never true.
Even *NULL is not equal to NULL*. You can check using this.
```
if(null = null)
print 'equal'
else
print 'not equal'
```
it will print `not equal`. Now try this
```
if(null is null)
print 'equal'
else
print 'not equal'
```
it will print `equal`. | When you filter on a left joined table in the where clause, the join becomes an inner join. To solve the problem, filter in the from clause, like this:
```
FROM tabuoj LEFT JOIN montritaj_tabuoj
ON (tabuoj.id=montritaj_tabuoj.id_de_tabuo)
and montritaj_tabuoj.id_de_tabuo is null
-- where clause starts here
``` | WHERE column.name=null doesn't return rows when left join | [
"",
"android",
"sql",
"sqlite",
""
] |
I need to get the total number of visitors for each day and order them by date.
Below is a sample of the data:
```
NoOfBooking VisitDate
2 2014-01-21 11:16:28.490
2 2014-01-20 06:12:46.983
1 2014-01-19 11:28:24.743
2 2014-01-19 02:43:46.867
1 2014-01-18 16:24:25.200
2 2014-01-05 20:20:29.597
2 2014-01-05 16:10:31.760
2 2014-01-02 10:23:52.333
2 2014-01-01 02:30:11.780
2 2013-12-31 10:02:01.083
```
My output is pretty much the same, maybe because of the timestamp in the date. How can I get details based on the date only?
T-SQL query
```
SELECT
SUM(NoOfBooking) AS Total, VisitDate
FROM
Booking
GROUP BY
NoOfBooking, VisitDate
ORDER BY
VisitDate DESC
```
UPDATE: I also tried the below; it didn't work either.
```
SELECT
SUM(NoOfBooking) AS Total,
VisitDate
FROM
Booking
GROUP BY
NoOfBooking, CAST(VisitDate AS DATE)
ORDER BY
VisitDate DESC
``` | Try with this:
```
SELECT SUM(NoOfBooking) AS Total, cast(VisitDate as date) as VisitDate
FROM Booking
GROUP BY cast(VisitDate as date)
order by VisitDate DESC
```
Adding `NoOfBooking` in your `group by` will produce incorrect results... | try this ::
```
SELECT SUM(NoOfBooking) AS Total, VisitDate
FROM Booking GROUP BY NoOfBooking,
date(VisitDate) order by VisitDate DESC
``` | Total no of visitors dates wise not getting desired result | [
"",
"sql",
"sql-server-2008",
""
] |
I'm somewhat new to SQL, so bear with me if this is a n00b question. So my code runs something akin to the following:
```
(select "Balance"."CodeValue" AS "CodeValue"
, "Balance"."OtherValue" AS "OtherValue"
from "SomeDB"."dbo"."AValue" "Balance"
where ("Balance"."CodeValue" between 'A' and'Z'
or "Balance"."CodeValue" in ('ABCDEFG'))
and "Balance"."CodeValue" NOT in ('XYZ', '1234', 'Etc')
or "Balance"."CodeValue" between 'A' and 'Z') "Balance"
on "SomeMatrix"."CodeValue" = "Balance"."CodeValue"
```
Reading it, it would seem that it checks for the "Balance"."CodeValue" to be between A and Z or in 'ABCDEFG' and not in 'XYZ', '1234', 'Etc' or between A and Z. Wouldn't the two checks for A and Z cancel each other out?
Thank you ahead of time for your assistance. | As written above you are correct, the first bit isn't doing anything because it's negated by the last bit:
```
;WITH cte AS (SELECT 'XYZ' AS CodeValue
UNION
SELECT 'A')
SELECT *
FROM cte
WHERE (CodeValue between 'A' and'Z' or CodeValue in ('ABCDEFG'))
AND CodeValue NOT in ('XYZ', '1234', 'Etc')
OR CodeValue between 'A' and 'Z'
```
Will Return `XYZ` even though `XYZ` is listed in the `NOT IN` portion.
Demonstration: [SQL Fiddle](http://sqlfiddle.com/#!3/1fa93/11217/0) | ```
select Balance.CodeValue AS CodeValue
,Balance.OtherValue AS OtherValue
from SomeDB.dbo.AValue Balance INNER JOIN SomeMatrix
on SomeMatrix.CodeValue = Balance.CodeValue
where
(
Balance.CodeValue between 'A' and'Z' ----\
OR -- Either of this is true
Balance.CodeValue in ('ABCDEFG') ----/
)
AND -- AND
(
Balance.CodeValue
NOT IN ('XYZ', '1234', 'Etc') ----\
OR -- Either of this is true
Balance.CodeValue between 'A' and 'Z' ----/
)
```
The precedence of operator is `NOT --> AND --> OR`
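A minimal illustration of that precedence, using hypothetical constant comparisons:

```
-- AND binds tighter than OR, so  a OR b AND c  parses as  a OR (b AND c)
SELECT CASE WHEN 1 = 1 OR 1 = 1 AND 1 = 0
            THEN 'parsed as a OR (b AND c)'  -- this branch is taken
            ELSE 'parsed as (a OR b) AND c'
       END AS result;
```

With explicit parentheses, `(1 = 1 OR 1 = 1) AND 1 = 0` evaluates to false instead — which is exactly why the parenthesized version of the query behaves differently from the original.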
When you have somewhat complex or tricky NOT INs, ANDs, and ORs in your WHERE clause, enclosing related conditions in parentheses `()` makes your code easier to read and debug. | SQL - Clarification of Between and NOT IN Combo | [
"",
"sql",
"sql-server",
"between",
"notin",
""
] |
I'm working on a school assignment that wants me to insert some new values into a database table and then print a message based on whether or not the INSERT was successful.
The question goes like this:
> Write a script that attempts to insert a new category named “Guitars”
> into the Categories table. If the insert is successful, the script
> should display this message: SUCCESS: Record was inserted.
>
> If the update is unsuccessful, the script should display a message
> something like this: FAILURE: Record was not inserted. Error 2627:
> Violation of UNIQUE KEY constraint 'UQ\_*Categori*\_8517B2E0A87CE853'.
> Cannot insert duplicate key in object 'dbo.Categories'. The duplicate
> key value is (Guitars).
Currently, this Categories table consists of 2 columns: CategoryID and Category name. It's populated with the values
```
1 Guitars
2 Basses
3 Drums
4 Keyboards
```
Obviously the Guitars category that the question wants you to insert is already there, so I'm guessing the whole point of the question is to get it to print the error message. The logic of the question seems fairly straightforward; insert the Guitars category into the table. If the insert was successful, print such-and-such. If it was unsuccessful, print so-and-so. I'm just not sure about the syntax. Here's the SQL code I've got so far:
```
USE MyGuitarShop;
INSERT INTO Categories (CategoryID, CategoryName)
VALUES (5, 'Guitars')
IF ( ) --insert is successful
PRINT 'SUCCESS: Record was inserted'
ELSE --if insert is unsuccessful
PRINT 'FAILURE: Record was not inserted.'
PRINT 'Error 2627: Violation of UNIQUE KEY constraint 'UQ__Categori__8517B2E0A87CE853'.'
PRINT 'Cannot insert duplicate key in object 'dbo.Categories'. The duplicate key value is (Guitars).'
```
I feel like there'd be some sort of boolean equation in that IF statement (IF INSERT = success, IF success = TRUE, etc.) but I'm just not sure how to write it. Am I on the right track?
EDIT: I should mention I'm using SQL Server 2012 | I would use try/catch myself
```
begin try
    -- insert query here
    print 'success message'
end try
begin catch
    print 'failure message'
end catch
```
You should be able to take it from here. | ```
USE MyGuitarShop
GO
BEGIN TRY
-- Insert the data
INSERT INTO Categories (CategoryName)
VALUES ('Guitars')
PRINT 'SUCCESS: Record was inserted.'
END TRY
BEGIN CATCH
PRINT 'FAILURE: Record was not inserted.';
PRINT 'Error ' + CONVERT(VARCHAR, ERROR_NUMBER(), 1) + ': '+ ERROR_MESSAGE()
END CATCH
GO
``` | SQL syntax for checking to see if an INSERT was successful? | [
"",
"sql",
"sql-server",
"database",
"insert",
"sql-server-2012",
""
] |
I want SQL Server to show field2 if field1 is null; field3 if field1 and field2 are null; field4 if field1, field2, and field3 are null; and NULL if all 4 fields are null. How would this be written in SQL Server? I am assuming maybe a CASE statement with CASE WHEN etc., but I am lost on this.
EDIT --
I tried running a straight COALESCE function like suggested and I get an error
The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator.
```
Select
Count(SoldNum),
Coalesce(Store1, Store2, Store3, Store4) As Store_Item_Sold_From,
ItemSoldBy,
ItemSold
FROM salesDatabase
Where Sold Is not null
Group By ItemSoldBy, ItemSold, Coalesce(Store1, Store2, Store3, Store4)
``` | COALESCE Evaluates the arguments in order and returns the current value of the first expression that initially does not evaluate to NULL.
```
SELECT COALESCE(field1, field2, field3, field4) FROM yourtable
```
# [COALESCE](http://msdn.microsoft.com/en-us/library/ms190349.aspx)
Or you can use a CASE statement:
```
SELECT CASE WHEN Field1 IS NULL AND Field2 IS NULL AND Field3 IS NULL THEN Field4
            WHEN Field1 IS NULL AND Field2 IS NULL THEN Field3
            WHEN Field1 IS NULL THEN Field2
            ELSE Field1
       END
``` | You're looking for the [COALESCE](http://msdn.microsoft.com/en-us/library/ms190349.aspx) operator
```
SELECT COALESCE(field1, field2, field3, field4)
``` | text, ntext, and image data types cannot be compared or sorted | [
"",
"sql",
"sql-server",
""
] |
I have a table like this
```
SA_ID ---+---Acct_Id---+---SA_Type
101 111 TYPE1
102 111 TYPE2
103 112 TYPE1
```
I have a query to get acct\_id having more than one sa\_type
```
select acct_id,count(*) from sa_tbl
group by acct_id
having count(*) > 1;
```
I get a result like this
```
acct_id ---+---count(*)
111 2
```
But I need to get the result like this:
```
acct_id ---+---count(*)-----+sa_type1----+---sa_type2
111 2 TYPE1 TYPE2
``` | I tried to get the result like you mentioned in the question, but was only able to get something similar to this
check it on [SQL Fiddle](http://sqlfiddle.com/#!2/a54af4/1)
```
SELECT tbl.`Acct_Id` AS 'Acct_Id',COUNT(`Acct_Id`) AS 'counts',
GROUP_CONCAT(`SA_Type`) AS 'types'
FROM `sa_tbl` AS tbl
GROUP BY tbl.`Acct_Id`
HAVING counts > 1
```
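If the types are wanted as separate columns (as in the question's desired output) rather than a concatenated list, conditional aggregation is another option — a sketch that assumes at most two types per account:

```
SELECT Acct_Id,
       COUNT(*) AS counts,
       MIN(SA_Type) AS sa_type1,  -- first type alphabetically
       MAX(SA_Type) AS sa_type2   -- second type alphabetically
FROM sa_tbl
GROUP BY Acct_Id
HAVING COUNT(*) > 1;
```

For the sample data this returns `111, 2, TYPE1, TYPE2`; with more than two types per account a PIVOT-style approach would be needed instead.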
hope this will help you ! | Check the pivot function, see question here [MySQL pivot table](https://stackoverflow.com/questions/7674786/mysql-pivot-table)
This function works for Oracle and MySQL. For other databases (if you need them) you can check further yourself. | Select different values of a column having count more than one | [
"",
"sql",
""
] |
I have two datasets `a` and `b`, each with fields for cusip and ticker. The **SQL** I'd like to perform would take a column from set `b` `if a.cusip=b.cusip`, but if I cannot find a match for the cusip, I'd like to take the column from `b` `if a.ticker=b.ticker`.
Is there an easy way to execute this? I'm having trouble constructing the code in one go. | An inner join with an `OR` in the join condition should do the job.
```
Select * from a
inner join b on a.cusip=b.cusip
or a.ticker = b.ticker;
``` | You'll need to join to your B table twice, once on cusip and once on ticker. Then you can use `coalesce` to take the first non-null value.
```
select
coalesce (b_cusip.column, b_ticker.column),
...
from
a
left outer join b b_cusip
on a.cusip = b_cusip.cusip
left outer join b b_ticker
on a.ticker = b_ticker.ticker
``` | Multiple case merge sql | [
"",
"sql",
"merge",
"sas",
""
] |
I can't get rid of this error. I have added the "NT AUTHORITY\NETWORK" user via SSMS, along with the relevant roles using this thread as reference: [Login failed for user 'NT AUTHORITY\NETWORK SERVICE'](https://stackoverflow.com/questions/2251839/login-failed-for-user-nt-authority-network-service)
I am trying to make a DB connection via a Windows Service. In debug mode the DB connection works fine. It is only when I run the installed service that I get this error.
Here is my connection string from the app.config:
```
<connectionStrings>
<remove name="LocalSqlServer" />
<add name="LocalSqlServer" connectionString="Data Source=(LocalDb)\v11.0; database=MyDB; Integrated Security=True;" />
<remove name="SqlServer" />
<add name="SqlServer" connectionString="Data Source=(LocalDb)\v11.0; database=MyDB; Integrated Security=True;" />
<remove name="SqlServer" />
</connectionStrings>
```
I also tried with adding `User ID=myDomain\myUsername;` to the connection string, but that didn't help. | First read [this description of the security limitations](http://technet.microsoft.com/en-us/library/hh510202.aspx#sectionToggle5) of using LocalDB. Reading that makes me think that it may not be possible to use "NT AUTHORITY\NETWORK SERVICE"; I'm not sure. I think you'll need to use your credentials.
Not to be too obvious, but if you are using Integrated authentication **the credentials that the service is running under must match credentials that have access to the database**. If you don't want to use credentials for the service (that is, you want it to run under "NT AUTHORITY\NETWORK SERVICE"), then you'll need to add "NT AUTHORITY\NETWORK SERVICE" as a user **with adequate access** to the database MyDB.
If possible, start with setting that user to `db_owner` for MyDB. If that works, then start adjusting permissions in SSMS to lower levels. If that *doesn't* work then something else is wrong with the database configuration. Ensure also that the user "NT AUTHORITY\NETWORK SERVICE" has file system access to the files that MyDB is using.
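A sketch of granting that access in T-SQL (assumes the database name `MyDB` from the question; run in SSMS as an administrator):

```
-- Create a server login for the service account (skip if it already exists)
CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;
GO
USE MyDB;
GO
-- Map the login to a database user and grant it broad rights to start with
CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
ALTER ROLE db_owner ADD MEMBER [NT AUTHORITY\NETWORK SERVICE];
GO
```

Note that `ALTER ROLE ... ADD MEMBER` is SQL Server 2012+ syntax; on older versions use `EXEC sp_addrolemember 'db_owner', 'NT AUTHORITY\NETWORK SERVICE';` instead.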
Also, you have an extra `<remove name="SqlServer" />` at the end there... not sure if that's deliberate. | In the link that you used as a reference, I think Jed gave the correct answer to your problem. Try it. You should add 'NT AUTHORITY\NETWORK SERVICE', not 'NT AUTHORITY\NETWORK'.
I think the default user for windows services is Local Service. So try to add 'NT AUTHORITY\LOCAL SERVICE'.
You could also try to change the user that your service runs under. Try to run the service with your credentials and see if this helps.
I hope it helps. | The login failed. Login failed for user 'NT AUTHORITY\NETWORK | [
"",
"sql",
"sql-server",
""
] |
Given the following table:
```
CREATE TABLE public.parenttest (
id bigserial NOT NULL PRIMARY KEY,
mydata varchar(30),
parent bigint
) WITH (
OIDS = FALSE
);
```
I'd like to insert a bunch of rows. Some of these rows should take the sequence-generated `id` of a row inserted before as value for the column `parent`.
For example:
```
INSERT INTO parenttest (mydata,parent) VALUES ('rootnode',null);
INSERT INTO parenttest (mydata,parent) VALUES ('child1', /*id of rootnode*/);
INSERT INTO parenttest (mydata,parent) VALUES ('child2', /*id of rootnode*/);
INSERT INTO parenttest (mydata,parent) VALUES ('child2.1', /*id of child 2*/);
INSERT INTO parenttest (mydata,parent) VALUES ('child2.2', /*id of child 2*/);
```
...should result in the following datasets (id, mydata,parent)
```
1,'rootnode',null
2,'child1',1
3,'child2',1
4,'child2.1',3
5,'child2.2',3
```
Up until `child2.2`, everything is fine when I'm using
```
SELECT currval('parenttest_id_seq');
```
to get the parent's `id`, but then I'm getting the id of `child2.2`, of course.
It's important to me that I can do all the necessary work with as few client-side requests as possible - and I'd like to do all id generation on the server side. | Here's how I'd do it, if it's acceptable for the rows to briefly have a `NULL` parent:
```
INSERT INTO parenttest (mydata) VALUES
('rootnode'),
('child1'),
('child2'),
('child2.1'),
('child2.2');
UPDATE parenttest SET parent = (select id from parenttest pt where
(pt.mydata = 'rootnode' and parenttest.mydata in ('child1','child2')) or
(pt.mydata = 'child2' and parenttest.mydata in ('child2.1','child2.2')))
WHERE
mydata in ('child1',
'child2',
'child2.1',
'child2.2');
```
[fiddle](http://sqlfiddle.com/#!12/15c1e/1) | You could do something as ugly as:
```
INSERT INTO parenttest (mydata,parent) VALUES ('rootnode',null);
INSERT INTO parenttest (mydata,parent) SELECT 'child1', id FROM parenttest WHERE mydata='rootnode';
INSERT INTO parenttest (mydata,parent) SELECT 'child2', id FROM parenttest WHERE mydata='rootnode';
INSERT INTO parenttest (mydata,parent) SELECT 'child2.1', id FROM parenttest WHERE mydata='child2';
INSERT INTO parenttest (mydata,parent) SELECT 'child2.2', id FROM parenttest WHERE mydata='child2';
```
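Alternatively, since this is PostgreSQL (9.2 per the tags), `INSERT ... RETURNING` inside writable CTEs can keep all id generation server-side in a single statement — a sketch, not tested against the question's schema:

```
WITH root AS (
    INSERT INTO parenttest (mydata, parent)
    VALUES ('rootnode', NULL)
    RETURNING id
), c2 AS (
    INSERT INTO parenttest (mydata, parent)
    SELECT 'child2', id FROM root
    RETURNING id
)
INSERT INTO parenttest (mydata, parent)
SELECT 'child1', id FROM root
UNION ALL
SELECT 'child2.1', id FROM c2
UNION ALL
SELECT 'child2.2', id FROM c2;
```

The generated ids may come out in a different order than in the example, but every child ends up pointing at the right parent.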
I think the right thing to do here is go with client-side, retrieving the id of 'rootnode' after inserting it, and then sending the following two statements with its id explicitly in the INSERT statement. | How to insert rows which optionally have parent rows in the same table at once | [
"",
"sql",
"postgresql",
"postgresql-9.2",
""
] |
I'm trying to calculate and list the websites in order of biggest overall reduction in response time from one time period to the next.
I don't strictly need to use a single query to do this; I can potentially run multiple queries.
websites:
```
| id | url |
| 1 | stackoverflow.com |
| 2 | serverfault.com |
| 3 | stackexchange.com |
```
responses:
```
| id | website_id | response_time | created_at |
| 1 | 1 | 93.26 | 2014-01-28 11:51:39
| 2 | 1 | 99.46 | 2014-01-28 11:52:38
| 2 | 1 | 94.51 | 2014-01-28 11:53:38
| 2 | 1 | 104.46 | 2014-01-28 11:54:38
| 2 | 1 | 85.46 | 2014-01-28 11:56:38
| 2 | 1 | 100.00 | 2014-01-28 11:57:36
| 2 | 1 | 50.00 | 2014-01-28 11:58:37
| 2 | 2 | 100.00 | 2014-01-28 11:58:38
| 2 | 2 | 80 | 2014-01-28 11:58:39
```
Ideally the result would look like:
```
| percentage_change | website_id |
| 52 | 1 |
| 20 | 2 |
```
I've got as far as figuring out the largest response time, but I have no idea how to write another query to calculate the lowest response time, do the math, and then sort the results.
```
SELECT * FROM websites
LEFT JOIN (
SELECT distinct *
FROM responses
ORDER BY response_time desc) responsetable
ON websites.id=responsetable.website_id group by website_id
```
Thanks | Using a couple of sequence numbers:-
```
SELECT a.id, a.url, MAX(100 * (LeadingResponse.response_time - TrailingResponse.response_time) / LeadingResponse.response_time)
FROM
(
SELECT website_id, created_at, response_time, @aCnt1 := @aCnt1 + 1 AS SeqCnt
FROM responses
CROSS JOIN
(
SELECT @aCnt1:=1
) Deriv1
ORDER BY website_id, created_at
) TrailingResponse
INNER JOIN
(
SELECT website_id, created_at, response_time, @aCnt2 := @aCnt2 + 1 AS SeqCnt
FROM responses
CROSS JOIN
(
SELECT @aCnt2:=2
) Deriv2
ORDER BY website_id, created_at
) LeadingResponse
ON LeadingResponse.SeqCnt = TrailingResponse.SeqCnt
AND LeadingResponse.website_id = TrailingResponse.website_id
INNER JOIN websites a
ON LeadingResponse.website_id = a.id
GROUP BY a.id, a.url
```
SQL fiddle for this:-
<http://www.sqlfiddle.com/#!2/ace08/1>
EDIT - different way of doing it. This will only work if the id on the responses table is in date / time order.
```
SELECT a.id, a.url, MAX(100 * (r2.response_time - r1.response_time) / r2.response_time)
FROM responses r1
INNER JOIN responses r2
ON r1.website_id = r2.website_id
INNER JOIN
(
SELECT r1.website_id, r1.id, MAX(r2.id) AS prev_id
FROM responses r1
INNER JOIN responses r2
ON r1.website_id = r2.website_id
AND r1.id > r2.id
GROUP BY r1.website_id, r1.id
) ordering_query
ON r1.website_id = ordering_query.website_id
AND r1.id = ordering_query.id
AND r2.id = ordering_query.prev_id
INNER JOIN websites a
ON r1.website_id = a.id
GROUP BY a.id, a.url
```
You could do a similar thing based on the response\_time field rather than the id, but that would require the response\_time for a website to be unique.
EDIT
Just seen that you do not just want consecutive changes, rather just the highest to lowest response. Assuming that the lowest doesn't have to come after the highest:-
```
SELECT id, url, MAX(100 * (max_response - min_response) / max_response)
FROM
(
SELECT a.id, a.url, MIN(r1.response_time) AS min_response, MAX(r1.response_time) AS max_response
FROM responses r1
INNER JOIN websites a
ON r1.website_id = a.id
GROUP BY a.id, a.url
) Sub1
```
If you are only interested in the lower response time being after the higher one:-
```
SELECT id, url, MAX(100 * (max_response - min_following_response) / max_response)
FROM
(
SELECT a.id, a.url, MAX(r1.response_time) AS max_response, MIN(r2.response_time) AS min_following_response
FROM responses r1
INNER JOIN responses r2
ON r1.website_id = r2.website_id
AND (r1.created_at < r2.created_at
OR (r1.created_at = r2.created_at
AND r1.id < r2.id))
INNER JOIN websites a
ON r1.website_id = a.id
GROUP BY a.id, a.url
) Sub1
```
(assuming that the id field on the response table is unique and in created at order) | You need the equivalent of the `lag()` or `lead()` function. In MySQL, I do this using a correlated subquery:
```
select website_id, max(1 - (prev_response_time / response_time)) * 100
from (select t.*,
(select t2.response_time
from table t2
where t2.website_id = t.website_id and
t2.created_at < t.created_at
order by t2.created_at desc
limit 1
) as prev_response_time
from table t
) t
group by website_id;
```
EDIT:
If you want the change from the highest to the lowest:
```
select website_id, (1 - min(response_time) / max(response_time)) * 100
from table t
group by website_id;
``` | SQL: Find the biggest percentage change in response time | [
"",
"mysql",
"sql",
"statistics",
""
] |
Say I have the following results
```
----------------------
| col1 | col2 |
----------------------
| a | b |
| b | a |
| c | d |
| e | f |
----------------------
```
I would like to get distinct tuples regardless of column order. In other words, (a, b) and (b, a) are considered the same because swapping the order makes one equal to the other. So, after executing the query, the result should be:
```
----------------------
| col1 | col2 |
----------------------
| a | b | // or (b, a)
| c | d |
| e | f |
----------------------
```
Can any query expert help me on this? I've been stuck for a few hours and wasn't able to solve this.
Below is my detailed scenario I'm working on.
I have the following relations:
```
Ships(name, country) // ("Lincoln", "USA") = "Ship Lincoln belongs to USA"
Battles(ship, battleName) // ("Lincoln", "WW2") = "Ship Lincoln fought in WW2"
```
And I need to find: **List all pairs of countries that fought each other in battles**
I was able to find all pairs by executing the query below:
```
SELECT DISTINCT c1, c2
FROM
(SELECT DISTINCT s1.country as c1, battleName as b1
FROM Ships as s1, Battles
WHERE s1.name = ship) as t1
JOIN
(SELECT DISTINCT s2.country as c2, battleName as b2
FROM Ships as s2, Battles
WHERE s2.name = ship) as t2
ON (b1 = b2)
WHERE c1 <> c2
```
And the result of executing above query is:
```
---------------------------------
| c1 | c2 |
---------------------------------
| USA | Japan | // Row_1
| Japan | USA | // Row_2
| Germany | Great Britain | // Row_3
| Great Britain | Germany | // Row_4
---------------------------------
```
But Row\_1 and Row\_2 are the same, as are Row\_3 and Row\_4.
What I need is to print either one of Row\_1 or Row\_2 and either Row\_3 or Row\_4.
Thank you | Try it this way
```
SELECT DISTINCT
LEAST(s1.country, s2.country) c1,
GREATEST(s1.country, s2.country) c2
FROM battles b1 JOIN battles b2
ON b1.battlename = b2.battlename
AND b1.ship <> b2.ship JOIN ships s1
ON b1.ship = s1.name JOIN ships s2
ON b2.ship = s2.name
HAVING c1 <> c2
```
Output:
```
| C1 | C2 |
|---------|---------------|
| Germany | Great Britain |
| Japan | USA |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/8d81d/3)** demo | Here is how you can do it
Sample data
```
| COL1 | COL2 |
|------|------|
| a | b |
| b | a |
| c | d |
| e | f |
```
Query
```
SELECT
k.*
FROM test k
LEFT JOIN (SELECT
t.col1
FROM test t
INNER JOIN test r
ON (r.col1 = t.col2
AND t.col1 = r.col2)
LIMIT 1) b
ON b.col1 = k.col1
WHERE b.col1 IS NULL
```
OUTPUT
```
| COL1 | COL2 |
|------|------|
| a | b |
| c | d |
| e | f |
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!2/b082b/1) | Show distinct tuples regardless of column order | [
"",
"mysql",
"sql",
"distinct",
""
] |
I have a table called Locations that looks like this
```
Name Date
Location A 01/01/2014
Location A 12/12/2013
Location B 01/01/2014
Location C 01/01/2014
Location D 01/01/2014
Location D 12/12/2013
Location E 12/12/2013
```
I want to return only the Names where the date is MAX(Date), i.e. 01/01/2014, and where there is only 1 row for this Name with that MAX(Date).
To explain further I would like to return
```
Name Date
Location B 01/01/2014
Location C 01/01/2014
```
I have tried a few queries, like a HAVING statement, but cannot seem to get the desired result. | This seems to work:
```
declare @t table (Name varchar(49), [Date] date)
insert into @t(Name,[Date]) values
('Location A','20140101'),
('Location A','20131212'),
('Location B','20140101'),
('Location C','20140101'),
('Location D','20140101'),
('Location D','20131212'),
('Location E','20131212')
select Name,MAX(Date)
from @t
group by Name
having MIN(Date) = (select MAX(Date) from @t)
```
It doesn't matter much which aggregate you use here:
```
select Name,MAX(Date)
```
provided the reference to `Date` is in an aggregate.
Result:
```
Name
------------------------------------------------- ----------
Location B 2014-01-01
Location C 2014-01-01
```
The logic is - if the *earliest* date for a particular `Name` is also the *latest* date for the whole table, then logically there's only one entry for this `Name` and it is for the latest date in the table.
(This is based on the assumption that each location can only have one entry per day) | We can first find the names which are not repeated, using `GROUP BY` and `COUNT` in an inner query, and then `INNER JOIN` that with the original table, grouping by name and date to calculate the max dates and the corresponding names
```
SELECT t.Name, MAX(Date) From table INNER JOIN
(SELECT DISTINCT Name from table group by Name having count(Name)=1) t
ON t.name=table.name
GROUP BY Date,t.Name
``` | SQL - Only choose unique names by max date? | [
"",
"sql",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I am trying to insert records into table B from table A, where the records don't already exist in table B. Only some of the fields I need are in table A, so I have set up some local variables to insert the data for these. On running the query below, I get the error message
"Msg 8114, Level 16, State 5, Line 17 Error converting data type varchar to numeric."
Would anyone be able to tell me what I am doing wrong, and perhaps provide an alternative method that would work? Many thanks (and apologies for the formatting of the query).
```
DECLARE @SupplierID as integer
DECLARE @UnitOfMeasurementID as integer
DECLARE @MinOrderQuantity as integer
DECLARE @SupplierProductGroupID as integer
DECLARE @ProductCondition as varchar (3)
SET @SupplierID = 1007
SET @UnitOfMeasurementID = 1
SET @MinOrderQuantity = 1
SET @SupplierProductGroupID = 41
SET @ProductCondition = 'N'
-- Insert
insert into tblProduct (SupplierID,
UnitOfMeasurementID,
MinOrderQuantity,
SupplierProductGroupID,
ProductCondition,
PartNumber,
ProductName,
CostPrice)
select
PartNumber,
ProductName,
CostPrice,
@SupplierID,
@UnitOfMeasurementID,
@MinOrderQuantity,
@SupplierProductGroupID,
@ProductCondition
from BearmachTemp source
where not exists
(
select * from tblProduct
where tblProduct.PartNumber = source.PartNumber
and tblProduct.ProductName = source.ProductName
)
The SELECT has its columns in the wrong order:
```
select
@SupplierID,
@UnitOfMeasurementID,
@MinOrderQuantity,
@SupplierProductGroupID,
@ProductCondition,
PartNumber,
ProductName,
CostPrice
from BearmachTemp source
where not exists
(
select * from tblProduct
where tblProduct.PartNumber = source.PartNumber
and tblProduct.ProductName = source.ProductName
)
``` | In an INSERT statement, the insert columns and the selected values should be in the same order, so:
```
insert into tblProduct (SupplierID,
UnitOfMeasurementID,
MinOrderQuantity,
SupplierProductGroupID,
ProductCondition,
PartNumber,
ProductName,
CostPrice)
select
@SupplierID,
@UnitOfMeasurementID,
@MinOrderQuantity,
@SupplierProductGroupID,
@ProductCondition,
PartNumber,
ProductName,
CostPrice
from BearmachTemp source
where not exists
(
select * from tblProduct
where tblProduct.PartNumber = source.PartNumber
and tblProduct.ProductName = source.ProductName
)
``` | SQL insert using select from .. where not exists and local variables in the select clause fails | [
"sql",
"sql-server"
] |
I have two tables, one with users and another with errors. They have 2 relations. Errors can be fixed or submitted by users. I want to show usernames with submitted and fixed errors.
I tried something like this:
```
SELECT usr.username, err.description AS ERROR_SOLVED
FROM Users usr, Errors err
WHERE err.solved_by_id=usr.id
UNION
SELECT usr.username, err.description AS ERROR_SUBMITED
FROM Users usr, Errors err
WHERE err.submited_by_id=usr.id
```
Obviously it doesn't work, please help. | The column lists of the two SELECTs in a UNION must match.
```
SELECT usr.username, err.description AS ERROR, 1 as SOLVED
FROM Users usr, Errors err
WHERE err.solved_by_id=usr.id
UNION
SELECT usr.username, err.description AS ERROR, 0 as SOLVED
FROM Users usr, Errors err
WHERE err.submited_by_id=usr.id
```
would work. You can also define an additional column to show which type it is. | Something like this...
```
Select usr.username, submittedErrs.description AS ERROR_SUBMITTED,
fixedErrs.description AS ERROR_SOLVED
from Users usr
left outer join Errors submittedErrs ON submittedErrs.submitted_by_id=usr.id
left outer join Errors fixedErrs ON fixedErrs.solved_by_id=usr.id
``` | How to do select with 2 conditions into two columns | [
"sql"
] |
I am working on a long query, here is a portion of it:
```
SELECT '3' AS RowType
,DTH.EnteredBy AS Person
,COALESCE(PDT.[Name], APP.AppName) AS Project
,(
CASE WHEN (
STY.KanBanProductId IS NOT NULL
AND STY.SprintId IS NULL
) THEN 'Kanban' WHEN (
STY.KanBanProductId IS NULL
AND STY.SprintId IS NOT NULL
) THEN 'Sprint' ELSE SCY.Catagory END
) AS ProjectType
,dbo.primaryTheme(STY.[Number], ???) AS Theme
```
Where the ??? is, I am having a problem.
I need to pass the result of the previous column, that is:
```
CASE WHEN (
STY.KanBanProductId IS NOT NULL
AND STY.SprintId IS NULL
) THEN 'Kanban' WHEN (
STY.KanBanProductId IS NULL
AND STY.SprintId IS NOT NULL
) THEN 'Sprint' ELSE SCY.Catagory END
)
```
What is the best way to achieve this? | You can either use 'Cross Apply' or a CTE (Common Table Expression) to do this. I prefer Cross Apply, so here is an example that way:
```
SELECT '3' AS RowType
,DTH.EnteredBy AS Person
,COALESCE(PDT.[Name], APP.AppName) AS Project
,CAResult.ProjectType
,dbo.primaryTheme(STY.[Number], CAResult.ProjectType) AS Theme
FROM [SomeTable]
CROSS APPLY (SELECT CASE WHEN (
KanBanProductId IS NOT NULL
AND SprintId IS NULL
) THEN 'Kanban' WHEN (
KanBanProductId IS NULL
AND SprintId IS NOT NULL
) THEN 'Sprint' ELSE Catagory END
AS ProjectType) AS CAResult
``` | You can either repeat the query, it will be executed only once, or you can use a sub-query/ CTE:
```
WITH CTE AS
(
SELECT '3' AS RowType
,DTH.EnteredBy AS Person
,COALESCE(PDT.[Name], APP.AppName) AS Project
,(
CASE WHEN (
STY.KanBanProductId IS NOT NULL
AND STY.SprintId IS NULL
) THEN 'Kanban' WHEN (
STY.KanBanProductId IS NULL
AND STY.SprintId IS NOT NULL
) THEN 'Sprint' ELSE SCY.Catagory END
) AS ProjectType
FROM dbo.TableName
)
SELECT *, Theme = dbo.primaryTheme(Number, ProjectType)
FROM CTE
``` | Passing logic of previous column as parameter | [
"sql",
"sql-server"
] |
Hi, my query is getting this error; help me to resolve it:
```
SELECT CompanyId, CompanyName, RegistrationNumber,
(select CompanyAddress from RPT_Company_Address where
RPT_Company_Address.CompanyId=Company.CompanyId) AS CompanyAddress,
MobileNumber, FaxNumber, CompanyEmail, CompanyWebsite, VatTinNumber
FROM Company;
``` | It appears that your RPT\_Company\_Address table has more than one address for a given company. If this should not be possible, you should try to correct the data and modify your schema to prevent the possibility of this happening.
On the other hand, if there can be multiple addresses, you must decide how your query should handle them:
1) Do you want the same company row listed multiple times-- one per each address? If so, use an `INNER JOIN` to return them all:
```
SELECT Company.CompanyId, CompanyName, RegistrationNumber, CompanyAddress, ...
FROM Company
INNER JOIN RPT_Company_Address RCA ON RCA.CompanyId = Company.CompanyId
```
2) If you want only the first matching address, do a subquery on the first matching address corresponding to each company:
```
SELECT Company.CompanyId, CompanyName, RegistrationNumber, CompanyAddress, ...
FROM Company
INNER JOIN
(
SELECT CompanyId, CompanyAddress, ROW_NUMBER() OVER (PARTITION BY CompanyId ORDER BY CompanyAddress) AS Num
FROM RPT_Company_Address
) Addresses
ON Addresses.CompanyId = Company.CompanyId
WHERE Num = 1
```
3) If you have some other way to identify the "primary" address that you want, include a `WHERE` clause with that criteria:
```
SELECT Company.CompanyId, CompanyName, RegistrationNumber, CompanyAddress, ...
FROM Company
INNER JOIN RPT_Company_Address RCA ON RCA.CompanyId = Company.CompanyId
WHERE RCA.PrimaryAddress = 1
``` | Your subquery below is returning more than one result
```
select CompanyAddress
from RPT_Company_Address
where RPT_Company_Address.CompanyId = Company.CompanyId
```
Therefore more than one address matches your company id.
Try fixing the data or using:
```
select top 1 CompanyAddress
from RPT_Company_Address
where RPT_Company_Address.CompanyId = Company.CompanyId
``` | At most one record can be returned by this subquery. (Error 3354) | [
"sql",
"ms-access"
] |
Ok, so I have a query that is returning more rows than expected with repeating data. Here is my query:
```
SELECT AP.RECEIPTNUMBER
,AP.FOLDERRSN
,ABS(AP.PAYMENTAMOUNT)
,ABS(AP.PAYMENTAMOUNT - AP.AMOUNTAPPLIED)
,TO_CHAR(AP.PAYMENTDATE,'MM/DD/YYYY')
,F.REFERENCEFILE
,F.FOLDERTYPE
,VS.SUBDESC
,P.NAMEFIRST||' '||P.NAMELAST
,P.ORGANIZATIONNAME
,VAF.FEEDESC
,VAF.GLACCOUNTNUMBER
FROM ACCOUNTPAYMENT AP
INNER JOIN FOLDER F ON AP.FOLDERRSN = F.FOLDERRSN
INNER JOIN VALIDSUB VS ON F.SUBCODE = VS.SUBCODE
INNER JOIN FOLDERPEOPLE FP ON FP.FOLDERRSN = F.FOLDERRSN
INNER JOIN PEOPLE P ON FP.PEOPLERSN = P.PEOPLERSN
INNER JOIN ACCOUNTBILLFEE ABF ON F.FOLDERRSN = ABF.FOLDERRSN
INNER JOIN VALIDACCOUNTFEE VAF ON ABF.FEECODE = VAF.FEECODE
WHERE AP.NSFFLAG = 'Y'
AND F.FOLDERTYPE IN ('405B','405O')
```
Everything works fine until I add the bottom two Inner Joins. I'm basically trying to get all payments that had NSF. When I run the simple query:
```
SELECT *
FROM ACCOUNTPAYMENT
WHERE NSFFLAG = 'Y'
```
I get only 3 rows pertaining to 405B and 405O folders. So I'm only expecting 3 rows to be returned in the above query but I get 9 with information repeating in some columns. I need the exact feedesc and gl account number based on the fee code that can be found in both the Valid Account Fee and Account Bill Fee tables.
I can't post a picture of my output.
Note: when I run the query without the two bottom joins I get the expected output.
Can someone help me make my query more efficient? Thanks!
As requested, below are the results that my query is returning for vaf.feedesc and vaf.glaccountnumber columns:
```
Boiler Operator License Fee 2423809
Boiler Certificate of Operation without Manway - Revolving 2423813
Installers (Boiler License)/API Exam 2423807
Boiler Public Inspection/Certification (State or Insurance) 2423816
Boiler Certificate of Operation with Manway 2423801
Boiler Certificate of Operation without Manway 2423801
Boiler Certificate of Operation with Manway - Revolving 2423813
BPV Owner/User Program Fee 2423801
Installers (Boiler License)/API Exam Renewal 2423807
``` | The cause is that at least one of the connections `ACCOUNTBILLFEE-FOLDER` or `VALIDACCOUNTFEE-ACCOUNTBILLFEE` is not one-to-one. It allows for one *Folder* to have many *AccountBillFees* or for one *ValidAccountFee* to have many *AccountBillFees*.
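A one-to-many join multiplies the left-hand row once per match; the effect is easy to reproduce with a tiny in-memory SQLite sketch (the table and column names below are made up for illustration):

```python
import sqlite3

# One folder, two matching bill-fee rows.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE folder(folderrsn INTEGER, foldertype TEXT);
    CREATE TABLE accountbillfee(folderrsn INTEGER, feecode TEXT);
    INSERT INTO folder VALUES (1, '405B');
    INSERT INTO accountbillfee VALUES (1, 'FEE_A'), (1, 'FEE_B');
""")

# Without the join there is one folder row ...
before = con.execute("SELECT COUNT(*) FROM folder").fetchone()[0]

# ... with the join, that row appears once per matching fee row.
after = con.execute("""
    SELECT COUNT(*)
    FROM folder f
    JOIN accountbillfee abf ON abf.folderrsn = f.folderrsn
""").fetchone()[0]
print(before, after)  # 1 2
```

Two such joins in a row multiply the counts, which is how 3 payments can turn into 9 result rows.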
To find the cause of such a problem this is what I usually do:
* Change the `SELECT A, B, C` part of your query to `SELECT *`.
* Reduce the results to one of the rows that is causing you trouble (by adding a `WHERE ...`). That is a single row without your last two joins and a few rows after you add those two joins.
* Look at the result table from left to right. The first columns will probably show the same values for all rows. Once you see a difference between the values in a column, you know that the table of the column you are currently looking at is causing your "multiple row problem".
* Now create a `SELECT *` statement that includes only the two tables joined together that cause multiple rows with the same `WHERE ...` you used above.
* The result should give you a clear picture of the cause.
* Once you know the reason for your problem, you can think of a solution ;) | Try this; if it helps, then those tables have additional rows which are not relevant. If it doesn't, then look at the results of the subqueries I have below to see what additional filters are needed
```
SELECT AP.RECEIPTNUMBER
,AP.FOLDERRSN
,ABS(AP.PAYMENTAMOUNT)
,ABS(AP.PAYMENTAMOUNT - AP.AMOUNTAPPLIED)
,TO_CHAR(AP.PAYMENTDATE,'MM/DD/YYYY')
,F.REFERENCEFILE
,F.FOLDERTYPE
,VS.SUBDESC
,P.NAMEFIRST||' '||P.NAMELAST
,P.ORGANIZATIONNAME
,VAF.FEEDESC
,VAF.GLACCOUNTNUMBER
FROM ACCOUNTPAYMENT AP
INNER JOIN FOLDER F ON AP.FOLDERRSN = F.FOLDERRSN
INNER JOIN VALIDSUB VS ON F.SUBCODE = VS.SUBCODE
INNER JOIN FOLDERPEOPLE FP ON FP.FOLDERRSN = F.FOLDERRSN
INNER JOIN PEOPLE P ON FP.PEOPLERSN = P.PEOPLERSN
INNER JOIN
(
SELECT DISTINCT ABF.FEECODE, ABF.FOLDERRSN
FROM ACCOUNTBILLFEE ABF
) ABF ON F.FOLDERRSN = ABF.FOLDERRSN
INNER JOIN
(
SELECT DISTINCT VAF.FEEDESC, VAF.GLACCOUNTNUMBER, VAF.FEECODE
FROM VALIDACCOUNTFEE VAF
) VAF ON ABF.FEECODE = VAF.FEECODE
WHERE AP.NSFFLAG = 'Y'
AND F.FOLDERTYPE IN ('405B','405O')
``` | Last two joins cause duplicate rows | [
"sql",
"sql-server",
"join"
] |
Suppose that you are given the following simple database table called Employee that has 2 columns named Employee ID and Salary:
```
Employee
Employee ID Salary
3 200
4 800
7 450
```
I wish to write a query like `select max(salary) as max_salary, 2nd_max_salary from employee`,
then it should return
```
max_salary 2nd_max_salary
800 450
```
I know how to find the 2nd highest salary:
```
SELECT MAX(Salary) FROM Employee
WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employee )
```
or to find the nth
```
SELECT Emp1.Salary FROM Employee Emp1 WHERE (N-1) = ( SELECT COUNT(DISTINCT(Emp2.Salary)) FROM Employee Emp2
WHERE Emp2.Salary > Emp1.Salary)
```
but I am unable to figure out how to join these 2 results for the desired result | You can just run 2 queries as inner queries to return 2 columns:
```
select
(SELECT MAX(Salary) FROM Employee) maxsalary,
(SELECT MAX(Salary) FROM Employee
WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employee )) as [2nd_max_salary]
```
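To sanity-check the idea, here are the same two scalar subqueries run against the question's sample data in in-memory SQLite (plain aliases instead of the T-SQL-style `[2nd_max_salary]`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee(EmployeeID INTEGER, Salary INTEGER)")
con.executemany("INSERT INTO Employee VALUES (?, ?)", [(3, 200), (4, 800), (7, 450)])

# Each parenthesised SELECT is a scalar subquery producing one column.
row = con.execute("""
    SELECT
        (SELECT MAX(Salary) FROM Employee) AS max_salary,
        (SELECT MAX(Salary) FROM Employee
         WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employee)) AS second_max_salary
""").fetchone()
print(row)  # (800, 450)
```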
[SQL Fiddle Demo](http://sqlfiddle.com/#!2/2445c/2) | Try like this
```
SELECT (SELECT MAX(Salary) FROM Employee) AS Maximum, (SELECT MAX(Salary) FROM Employee WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employee)) AS SecondMaximum;
```
**(Or)**
Try this, n would be the nth item you would want to return
```
SELECT DISTINCT(Salary) FROM table ORDER BY Salary DESC LIMIT n,1
```
In your case
```
SELECT DISTINCT(column_name) FROM table_name ORDER BY column_name DESC limit 2,1;
``` | Find max and second max salary for an employee table MySQL | [
"mysql",
"sql"
] |
I have two tables (`entity` and `kind`) plus an n:m table (`entity_kind`).
```
CREATE TABLE
entity
(
entity_id INT
, name NVARCHAR(100)
, PRIMARY KEY(entity_id)
)
CREATE TABLE
kind
(
kind_id INT
, name NVARCHAR(100)
, PRIMARY KEY(kind_id)
)
CREATE TABLE
entity_kind
(
entity_id INT
, kind_id INT
, PRIMARY KEY(entity_id, kind_id)
)
```
Test data:
```
INSERT INTO
entity
VALUES
(1, 'Entity A')
, (2, 'Entity B')
, (3, 'Entity C')
INSERT INTO
kind
VALUES
(1, 'Kind 1')
, (2, 'Kind 2')
, (3, 'Kind 3')
, (4, 'Kind 4')
INSERT INTO
entity_kind
VALUES
(1, 1)
, (1, 3)
, (2, 1)
, (2, 2)
, (3, 4)
```
My code so far:
```
DECLARE
@selected_entities
TABLE
(
entity_id INT
)
DECLARE
@same_kinds BIT;
INSERT INTO
@selected_entities
VALUES
(1), (2)
-- Missing code here
SELECT
@same_kinds AS "same_kinds"
```
The table var `@selected_entities` is filled with entities that should be compared.
The logical var `@same_kinds` should indicate whether the selected entities have exactly the same kinds assigned.
How can I achieve this? | This is a compare two sets of things type problem. The query I'm going to show gives all pairs along with a flag. You can easily incorporate comparing a subquery by changing the first two `entity` tables to the table of ids you want to compare.
This query has a few parts. First, it produces all pairs of entities from the entity tables. This is important, because this will pick up even entities that have no "kinds" associated with them. You want a flag, rather than just a list of those that match.
Then the heart of the logic is to do a self-join on the entity-kinds table with the match on "kind". This is then aggregated by the two entities. The result is a count of the kinds that two entities share.
The final logic is to compare this count to the count of "kinds" on each entity. If all of these counts are the same, then the entities match. If not, they do not. This approach does assume that there are no duplicates in `entity_kinds`.
```
select e1.entity_id as e1, e2.entity_id as e2,
(case when count(ek1.entity_id) = max(ek1.numkinds) and
count(ek2.entity_id) = count(ek1.entity_id) and
max(ek1.numkinds) = max(ek2.numkinds)
then 1
else 0
end) as IsSame
from entity e1 join
entity e2
on e1.entity_id < e2.entity_id left outer join
(select ek.*, count(*) over (partition by entity_id) as numkinds
from entity_kind ek
) ek1
on e1.entity_id = ek1.entity_id left outer join
(select ek.*, count(*) over (partition by entity_id) as numkinds
from entity_kind ek
) ek2
on e2.entity_id = ek2.entity_id and
ek2.kind_id = ek1.kind_id
group by e1.entity_id, e2.entity_id;
```
The SQL Fiddle is [here](http://www.sqlfiddle.com/#!6/abeb5/1). | You can do this with two checks: First, if the kind-count on each entity is not the same, then they cannot match. Second, provided the count is the same, you just need to find one kind that doesn't match the list of an arbitrary other entity (I just take the first entity in the compare list). In code:
```
DECLARE @firstEntity int = (SELECT TOP 1 entity_id from @selected_entities)
IF EXISTS(SELECT TOP 1 se.entity_id FROM @selected_entities se
INNER JOIN entity_kind ek ON ek.entity_id = se.entity_id
WHERE ek.kind_id NOT IN (SELECT kind_id from entity_kind where entity_id = @firstEntity)
OR ((SELECT COUNT(1) FROM entity_kind WHERE entity_id = ek.entity_id)
<> (SELECT COUNT(1) FROM entity_kind WHERE entity_id = @firstEntity)))
SET @same_kinds = 0
ELSE
SET @same_kinds = 1
``` | How to compare n:m assignments? | [
"sql",
"sql-server"
] |
I just got this query code:
```
SELECT o.id,o.sort_order,od.object FROM i_objects o, i_objects_description od
WHERE o.id=od.objects_id AND o.object_status = ? AND od.languages_id = ?
ORDER BY o.sort_order ASC
```
I want to figure out what "?" means in this query.
If I run this query, it gives me this error:
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?
```
I'm using [PEAR](http://pear.php.net/) and this is my function:
```
function getArrayObjects( $language_id )
{
$q = 'SELECT o.id,o.sort_order,od.object FROM ' . TABLE_OBJECTS . ' o, ' . TABLE_OBJECTS_DESCRIPTION . ' od ';
$q.= 'WHERE o.id=od.objects_id AND o.object_status = ? AND od.languages_id = ? ';
$q.= 'ORDER BY o.sort_order ASC';
$sth = $this->_db->prepare( $q );
$res = $sth->execute( array( 'active', $language_id ) );
//var_dump($res);echo "<br>";echo "<br>";echo "<br>";
$objects = array();
while( $row = $res->fetchRow())
{
$objects[$row['id']] = $row;
}
return $objects;
}
``` | It's a placeholder for parameter. In your query you have this:
```
AND o.object_status = ? AND od.languages_id = ?
```
And then you execute it like this:
```
$res = $sth->execute( array( 'active', $language_id ) );
```
So, when the query is actually executed by the database server, `object_status` is `'active'` and `language_id` is `$language_id`.
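The same mechanism exists outside PEAR; for example, Python's `sqlite3` driver also uses `?` markers and substitutes the values at execute time (the table and columns here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE objects(id INTEGER, status TEXT, languages_id INTEGER)")
con.executemany("INSERT INTO objects VALUES (?, ?, ?)",
                [(1, "active", 2), (2, "inactive", 2), (3, "active", 5)])

# The tuple passed to execute() fills the ? markers in order.
rows = con.execute(
    "SELECT id FROM objects WHERE status = ? AND languages_id = ?",
    ("active", 2),
).fetchall()
print(rows)  # [(1,)]
```

Because the values are never pasted into the SQL string itself, there is nothing an attacker can inject through them.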
This is done this way to guard against SQL injection. Another reason is efficiency: when you use prepared statements, the database doesn't need to parse/compile the query each time. It uses the template and just substitutes values into it. (more on this: [Prepared statement](http://en.wikipedia.org/wiki/Prepared_statement)) | The `?` are placeholders whose values are filled in by the `$sth->execute( array( 'active', $language_id ) )` statement.
One of the main purposes for this construct is to prevent *SQL injection attacks*. | What does "?" mean in an SQL query? | [
"sql",
"pear"
] |
Not even sure how to word this, which is probably why I can't immediately think of a solution, however…
I have a table of store locations and a second table which are the stats which record, amongst other things, which stores have appeared in 'find my closest store' type searches.
So, for instance, table A has this content
```
ID | Store_Name
1 | London
2 | Edinburgh
3 | Bristol
4 | Crawley
5 | Brighton
6 | Cambridge
```
When a user does a search, resultant store ids are saved in the stats table like this (there are more columns, but keeping this simple)
```
ID | Search_Results
1 | 1,4,5,6
```
Which would indicate a visitor search had pulled up London, Crawley, Brighton and Cambridge in their results.
Now what I'm attempting to do is pull useful stats from this data and want to find the number of times each store had appeared in a search result. I played with this, but it obviously doesn't work (and I can see why, but you can probably see my confused thought process):
```
SELECT COUNT(ID) as Total, ID, Store_Name FROM a WHERE id IN (SELECT Search_Results FROM b) group by ID
```
What I'm trying to do is get a recordset that tells me the id of the store, the store name and the number of times it's appeared in a search result.
MySQL gurus, what do you think? | You have a very poor format for the `stats` table. You should have a second table for "StatsIds".
Here is the way to phrase your join:
```
SELECT COUNT(a.ID) as Total, a.ID, a.Store_Name
FROM a join
b
on find_in_set(a.id, b.Search_Results) > 0
group by a.ID;
```
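If you want to experiment without a MySQL server: SQLite has no `FIND_IN_SET`, but for a quick sketch you can register a Python stand-in and run the same join shape (the question's sample data plus one extra search row):

```python
import sqlite3

# Minimal stand-in for MySQL's FIND_IN_SET (returns 1-based position or 0).
def find_in_set(needle, haystack):
    parts = haystack.split(",")
    return parts.index(str(needle)) + 1 if str(needle) in parts else 0

con = sqlite3.connect(":memory:")
con.create_function("find_in_set", 2, find_in_set)
con.executescript("""
    CREATE TABLE a(ID INTEGER, Store_Name TEXT);
    CREATE TABLE b(ID INTEGER, Search_Results TEXT);
    INSERT INTO a VALUES (1,'London'),(4,'Crawley'),(5,'Brighton'),(6,'Cambridge');
    INSERT INTO b VALUES (1, '1,4,5,6'), (2, '1,4');
""")
totals = con.execute("""
    SELECT a.ID, a.Store_Name, COUNT(*) AS Total
    FROM a JOIN b ON find_in_set(a.ID, b.Search_Results) > 0
    GROUP BY a.ID
    ORDER BY a.ID
""").fetchall()
print(totals)  # each store with the number of searches it appeared in
```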
Unfortunately, this query cannot be optimized using indexes. | One option I can think of is:
```
SELECT stores.id, stores.name, (SELECT count(*)
FROM stats
WHERE stats.search_results like CONCAT('%', stores.id, '%')) hit_count
FROM stores
```
I don't have a database to try this, but I think this should work | Query to count number of times an id appears in string | [
"mysql",
"sql"
] |
**Schema and data**
I have two tables with the following schema and data:
*#table1:*
```
create table #table1(
PK int IDENTITY(1,1) NOT NULL,
[TEXT] nvarchar(50) NOT NULL
);
PK TEXT
1 a
2 b
3 c
4 d
5 e
```
*#table2:*
```
create table #table2(
PK int IDENTITY(1,1) NOT NULL,
FK int NOT NULL,
[TEXT] nvarchar(50) NOT NULL
);
PK FK TEXT
1 2 B
2 3 C
```
**Problem**
Now, if I select all from `#table1` and left join `#table2` like this:
```
select
#table1.PK,
(case #table2.[TEXT] when NULL then #table1.[TEXT] else #table2.[TEXT] end) as [TEXT]
from
#table1
left join
#table2 on #table2.FK = #table1.PK
;
```
the output is as follows:
```
PK TEXT
1 NULL
2 B
3 C
4 NULL
5 NULL
```
**Question**
I expected the result to be:
```
PK TEXT
1 a <
2 B
3 C
4 d <
5 e <
```
So why does this happen (or what am I doing wrong) and how can I fix this?
**Source code**
```
if (OBJECT_ID('tempdb..#table1') is not null) drop table #table1;
if (OBJECT_ID('tempdb..#table2') is not null) drop table #table2;
create table #table1(PK int IDENTITY(1,1) NOT NULL, [TEXT] nvarchar(50) NOT NULL);
create table #table2(PK int IDENTITY(1,1) NOT NULL, FK int NOT NULL, [TEXT] nvarchar(50) NOT NULL);
insert into #table1 ([TEXT]) VALUES ('a'), ('b'), ('c'), ('d'), ('e');
insert into #table2 (FK, [TEXT]) VALUES (2, 'B'), (3, 'C');
select
#table1.PK,
(case #table2.[TEXT] when NULL then #table1.[TEXT] else #table2.[TEXT] end) as [TEXT]
from
#table1
left join
#table2 on #table2.FK = #table1.PK
;
drop table #table1;
drop table #table2;
``` | From my perspective this is equivalent of
```
select isnull(table2.text, table1.text) as text from ...
``` | You should check whether a field is null with `is null`. Your `case when` is syntactically valid, but the simple form `case x when NULL` compares with `=`, and `NULL = NULL` is never true, so the `WHEN NULL` branch can never match. Use the searched form instead:
```
case
when #table2.[TEXT] is null then #table1.[TEXT]
else #table2.[TEXT]
end
``` | Unexpected result in SELECT CASE WHEN NULL | [
"sql",
"sql-server-2008",
"select-case"
] |
I have 2 Tables..
**User 1**
```
user_id mobile_no
1 1111
2 2222
```
**User 2**
```
user_id mobile_no
1 3333
2 2222
```
I want to first UNION these tables, then group by, and then count the total records.
I am using this query but it's not working
```
SELECT COUNT(Q2.total) AS Overall
FROM (
SELECT COUNT(Q.user_id) AS total
FROM (
SELECT * FROM user1
UNION ALL
SELECT * FROM user2
) Q
GROUP BY Q.mobile_no
) Q2
```
If I use the inner query, e.g.:
```
SELECT COUNT(Q.user_id) AS total
FROM (
SELECT * FROM user1
UNION ALL
SELECT * FROM user2
) Q
GROUP BY Q.mobile_no
```
I get these results; actually I want to count these results again...
```
total
2
1
1
```
I expect this result
```
Overall
3
``` | This is weird. No one seems to have realised it's as simple as:
```
select count(*) overall
from (select mobile_no from user1 union select mobile_no from user2) t
```
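With the sample rows from the question, a quick in-memory SQLite check confirms the expected 3, and shows what `UNION ALL` would give instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE user1(user_id INTEGER, mobile_no TEXT);
    CREATE TABLE user2(user_id INTEGER, mobile_no TEXT);
    INSERT INTO user1 VALUES (1, '1111'), (2, '2222');
    INSERT INTO user2 VALUES (1, '3333'), (2, '2222');
""")

# UNION removes the duplicate '2222'; UNION ALL keeps it.
overall = con.execute("""
    SELECT COUNT(*)
    FROM (SELECT mobile_no FROM user1 UNION SELECT mobile_no FROM user2)
""").fetchone()[0]
with_dupes = con.execute("""
    SELECT COUNT(*)
    FROM (SELECT mobile_no FROM user1 UNION ALL SELECT mobile_no FROM user2)
""").fetchone()[0]
print(overall, with_dupes)  # 3 4
```

(Note that MySQL, unlike SQLite, insists on an alias for the derived table.)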
The difference between `union` and `union all` is that `union` removes duplicates. | Try this:
```
SELECT COUNT(*) FROM
( (SELECT * FROM user1) UNION
(SELECT user_id as u1,mobile_no as m1
FROM user2) ) as a1 GROUP BY a1 .1
``` | UNION All, Group BY and then get overall COUNT | [
"mysql",
"sql",
"group-by",
"union"
] |
This is my database table structure:
```
-----------------------------------
| id | title | date |
-----------------------------------
| 001 | my event OO1 | 04/02/2014 |
| 002 | my event OO2 | 04/02/2014 |
| 003 | my event OO3 | 04/02/2014 |
| 004 | my event OO4 | 05/02/2014 |
| 005 | my event OO5 | 05/02/2014 |
| 006 | my event OO6 | 05/02/2014 |
| 007 | my event OO7 | 05/02/2014 |
| 008 | my event OO8 | 06/02/2014 |
| 009 | my event OO9 | 06/02/2014 |
-----------------------------------
```
I succeeded in displaying it like this:
```
04/02/2014
- my event OO1
- my event OO2
- my event OO3
05/02/2014
- my event OO4
- my event OO5
- my event OO6
- my event OO7
06/02/2014
- my event OO8
- my event OO9
```
But I want to display just 1 result (event) per date randomly like:
```
04/02/2014
- my event OO2
05/02/2014
- my event OO6
06/02/2014
- my event OO8
```
On my sql result, I have tried `ORDER BY RAND() LIMIT 1` but of course I have not sorted by date
so results are randomly ordered
So, how to get date sorted and have random results? | ```
SELECT distinct t1.date,
(SELECT t2.title
FROM mytable as t2
WHERE t2.date = t1.date
ORDER BY RAND()
LIMIT 1) AS event_title
FROM mytable as t1
ORDER BY t1.date;
``` | Use:
```
SELECT d1.date, (
SELECT d2.title
FROM table_1 d2
WHERE d2.date = d1.date
ORDER BY RAND()
LIMIT 1
) AS title
FROM table_1 d1
GROUP BY d1.date
HAVING COUNT(d1.date)>=1
ORDER BY d1.date ASC
```
The only thing you should need to change is the name of your table. Make sure you change `table_1` to the actual name of your table, in both `FROM` clauses.
The idea is you retrieve the date, then you do a subquery to retrieve a random event within that date, by selecting from the same table, and joining on the date. | Display random results ordered by date | [
"sql",
"random",
"sql-order-by"
] |
I have a mysql table with country, state, city and locality. I want to get only those countries which are having either city or locality as null or empty. I tried this query:
```
select distinct country from xyz where state != "" and ((city="" and Locality="") or (city="" and Locality!="") or (city!="" and Locality="")) order by country
```
Basically I need to fetch all the countries where either the city or the locality value is empty. This query gives me a few countries which have both city and locality in the same row. What am I doing wrong? It's giving me countries which have both city and Locality values.
I need country list which doesn't have city or Locality which means all the cities or locality of the country is empty or null. Do not want country if even one record of the country has either city or locality value. | Are you looking for a simple `or`:
```
select distinct country
from xyz
where state <> '' and
(city = '' or Locality= '')
order by country;
```
If this doesn't return what you want, you might have a problem with `NULL` values:
```
select distinct country
from xyz
where state <> '' and
(city = '' or Locality= '' or city is null or Locality is null)
order by country;
```
Or possibly the condition on `state` is not needed.
By the way, you should use single quotes rather than double quotes for string constants in SQL.
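The NULL point is worth a quick demonstration; SQLite uses the same three-valued logic as MySQL, and `city = ''` never matches a NULL city:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xyz(country TEXT, city TEXT)")
con.executemany("INSERT INTO xyz VALUES (?, ?)",
                [("A", ""), ("B", None), ("C", "Paris")])

# '' matches only the empty string; NULL rows need an explicit IS NULL test.
eq_only = [r[0] for r in con.execute(
    "SELECT country FROM xyz WHERE city = '' ORDER BY country")]
eq_or_null = [r[0] for r in con.execute(
    "SELECT country FROM xyz WHERE city = '' OR city IS NULL ORDER BY country")]
print(eq_only, eq_or_null)  # ['A'] ['A', 'B']
```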
EDIT:
If you want a query where *all* the values are empty or `NULL` for a given country, then use aggregation and a `having` clause:
```
select country
from xyz
group by country
having min(city = '' or Locality= '' or city is null or Locality is null) > 0
order by country;
``` | ```
select distinct country
from xyz
where state != ""
and (city="" or Locality="")
``` | get specific records based on where condition | [
"mysql",
"sql"
] |
I want to write a query that deletes duplicates from a table where the Access column value is 1 - 5, always keeping the highest number, except that 5 should be treated as the lowest because of how they designed the database: 5 means no access. (5 should have been 0 in my opinion.)
So there is an ID column and an Access column, if there are more than 1 IDs, then delete the ID with the lowest Access value. But remember treat 5 as 0 or lowest.

So I thought something like:
```
Delete from [table]
Where [ID] > 1
AND [Access] = (CASE [Access]
WHEN Access = 1 THEN ____ <----'Do nothing')
...
WHEN Access = 5 THEN ____ <----'Do Delete')
```
That is where I struggle. How would I check ID, and see which Access is the highest and delete all the lowest if they exist. Remember, if it's 5 then 4 is actually higher so delete 5.
So confusing! | If you apply modulo 5 to the Access column, you will get the following transformation:
```
1 % 5 = 1
2 % 5 = 2
3 % 5 = 3
4 % 5 = 4
5 % 5 = 0
```
As you can see, 5 yields the lowest result, others remain unchanged – seems the proper ranking of privileges for your case. With that in mind, I would probably try the following method:
```
DELETE FROM u
FROM dbo.UserTable AS u
LEFT JOIN (
SELECT ID, MAX(Access % 5) AS Access
FROM dbo.UserTable
GROUP BY ID
) AS keep
ON u.ID = keep.ID
AND u.Access % 5 = keep.Access
WHERE keep.ID IS NULL
;
```
The subquery returns a set of IDs with Access values to keep. The main query anti-joins the target table to that set to determine which rows to delete.
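The keep-the-maximum logic can be sanity-checked in SQLite; its DELETE has no join syntax, so the sketch below expresses the anti-join as a correlated subquery (sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE UserTable(ID INTEGER, Access INTEGER)")
# User 1 holds accesses 5 and 3 (3 should win), user 2 holds 2 and 4 (4 should win).
con.executemany("INSERT INTO UserTable VALUES (?, ?)",
                [(1, 5), (1, 3), (2, 2), (2, 4)])

# Delete every row whose modulo-5 rank is below the per-ID maximum.
con.execute("""
    DELETE FROM UserTable
    WHERE Access % 5 < (SELECT MAX(Access % 5)
                        FROM UserTable AS k
                        WHERE k.ID = UserTable.ID)
""")
kept = con.execute("SELECT ID, Access FROM UserTable ORDER BY ID").fetchall()
print(kept)  # [(1, 3), (2, 4)]
```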
This method is somewhat specific to your particular situation: it may not work correctly after extending the current set of valid values for `Access`. As an alternative, you could use CASE, as others suggested, that would certainly be more flexible. However, I would actually suggest you add an `AccessRank` column to the `Access` table to indicate which privilege is higher or lower than others.
That would make your DELETE query more complex but you wouldn't need to adapt it every time new `Access` values are introduced (you'd just need to define proper ranking *in your data*):
```
DELETE FROM u
FROM dbo.UserTable AS u
INNER JOIN dbo.Access AS a
ON u.Access = a.ID
LEFT JOIN (
SELECT u.ID, MAX(a.AccessRank) AS AccessRank
FROM dbo.UserTable AS u
INNER JOIN dbo.Access AS a
ON u.Access = a.ID
GROUP BY u.ID
) AS keep
ON u.ID = keep.ID
AND a.AccessRank = keep.AccessRank
WHERE keep.ID IS NULL
;
```
It is implied that `AccessRank` contains unique rankings only. | If you want to delete the row for each id that first has access 5 and the rest ordered, then do:
```
with todelete as (
select t.*, row_number() over (partition by id
order by (case when access = 5 then -1 else access end)
) as seqnum
from table t
)
delete from todelete
where seqnum = 1;
```
If you only want to do this when there is more than one record for the `id`:
```
with todelete as (
select t.*, row_number() over (partition by id
order by (case when access = 5 then -1 else access end)
) as seqnum,
count(*) over (partition by id) as cnt
from table t
)
delete from todelete
where seqnum = 1 and cnt > 1;
```
EDIT:
If you want to delete everything except one row according to your precedence rules:
```
with todelete as (
select t.*, row_number() over (partition by id
order by (case when access = 5 then -1 else access end)
) as seqnum,
count(*) over (partition by id) as cnt
from table t
)
delete from todelete
where seqnum < cnt;
``` | Delete row with a complex if | [
"sql",
"sql-server"
] |
I have tables:
```
'gallery' - information about existing galleries
id_gallery name date
========== ==== ===========
1 ... timestamp
'photo' - information and name of every photo in system
id_photo photo_name
======== ===========
1 some name
'photo_gallery' - "connecting" table, which says which photo are in which gallery
id_photo_gallery id_photo id_gallery
================ ================ =================
1 id from 'photo' id from 'gallery'
```
I need to select a gallery (with some information about it, but that's not important) and only ONE photo id from the 'photo' table.
This (to my big surprise) returns all photos. (Duplicated information about the gallery)
```
SELECT
photo_gallery.id_photo as id_photo
FROM
gallery
JOIN
photo_gallery ON gallery.id_gallery = photo_gallery.id_gallery
```
**EDIT**
This returns only one gallery, not many. I need to retrieve one photo per gallery ...
```
SELECT
photo_gallery.id_photo as id_photo
FROM
gallery
JOIN
photo_gallery ON gallery.id_gallery = photo_gallery.id_gallery LIMIT 1
``` | Please try the following query:
```
SELECT galery.*, photo.*
FROM galery
LEFT JOIN photo_gallery ON galery.id = photo_gallery.id_gallery
LEFT JOIN photo ON photo_gallery.id_photo = photo.id
GROUP BY galery.id
``` | use LIMIT
```
SELECT
pg.id_photo
FROM gallery g
LEFT OUTER JOIN (
SELECT id_gallery, MAX(id_photo) AS id_photo FROM photo_gallery
GROUP BY id_gallery
) AS pg
ON g.id_gallery = pg.id_gallery
``` | Select first joined value, not all | [
"mysql",
"sql"
] |
I'm trying to implement [this askTom solution](http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:185012348071) to count the business days between two date fields in a table.
```
select count(*)
from ( select rownum rnum
from all_objects
where rownum <= to_date('&1') - to_date('&2')+1 )
where to_char( to_date('&2')+rnum-1, 'DY' ) not in ( 'SAT', 'SUN' )
```
I don't know how to pass values into Tom's code, or how to work around it so that each time the select executes with a different set of dates, obtaining output similar to:
`rowid | business_days`
I guess this could be implemented with a PL/SQL block, but I'd prefer to keep it down to a query if possible. Would it be possible to pass values as &1 parameters from a select above Tom's one? | This is not like the original askTom, but if you're using 11gR2 you can use a Recursive CTE:
```
with rcte(a,b, i, is_wd) as (
select from_dt , to_dt , id, case when (to_char(from_dt, 'DY') in ('SAT','SUN')) then 0 else 1 end from t
union all
select decode(a, null, a,a+1), b, i, case when (to_char(a, 'DY') in ('SAT','SUN')) then 0 else 1 end
from rcte
where a+1 <= b
)
select i id, sum(is_wd)
from rcte
group by i
```
where t is a table containing "from\_dates" and "to\_dates"
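(Editor's note, not part of the original answer) The recursive CTE above generates one row per day and flags non-weekend days; for a quick cross-check outside the database, the same day-by-day weekday count can be sketched in plain Python with hypothetical dates:

```python
from datetime import date, timedelta

def business_days(start, end):
    # Inclusive count of days that are not Saturday/Sunday,
    # mirroring the NOT IN ('SAT','SUN') filter applied day by day.
    days = (end - start).days + 1
    return sum(
        1
        for n in range(days)
        if (start + timedelta(days=n)).weekday() < 5  # Mon=0 .. Fri=4
    )

print(business_days(date(2013, 12, 2), date(2013, 12, 8)))  # Mon..Sun -> 5
```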
[Here is a sqlfiddle demo](http://www.sqlfiddle.com/#!4/99103/9) | Try this one:
```
SELECT COUNT(*)
FROM ( SELECT ROWNUM rnum
FROM all_objects
WHERE ROWNUM <= TO_DATE('2014-02-07','yyyy-mm-dd') - TO_DATE('2014-02-01','yyyy-mm-dd')+1 )
WHERE TO_CHAR( TO_DATE('2014-02-01','yyyy-mm-dd')+rnum-1, 'DY' ) NOT IN ( 'SAT', 'SUN' )
```
However, it has at least two issues:
* It presumes the session `NLS_DATE_LANGUAGE` is set to English
* The query gives a wrong result if the number of rows in ALL\_OBJECTS is smaller than the number of days between your range boundaries.
This version is less error-prone and faster:
```
WITH t AS
(SELECT TO_DATE('2014-02-01','yyyy-mm-dd')+LEVEL-1 the_date
FROM dual
CONNECT BY TO_DATE('2014-02-01','yyyy-mm-dd')+LEVEL-1 <= TO_DATE('2014-02-07','yyyy-mm-dd'))
SELECT COUNT(*)
FROM t
WHERE TO_CHAR(the_date, 'DY', 'NLS_DATE_LANGUAGE = AMERICAN') NOT IN ( 'SAT', 'SUN' )
``` | Counting business days between two dates for each row in a table | [
"",
"sql",
"oracle",
""
] |
I am using `oracle10g`.
I want to remove all occurrences of a particular word from a sentence, but I don't want to remove any other word that contains additional characters from a-z or A-Z.
**For example**, the following is a sentence from which I want to remove `some`:
```
some text, 123 someone, another text some1
```
**Expected output**:
```
text, 123 someone, another text
```
Note that I also want to remove the word `some` if it is combined with any character other than `A-Z` and `a-z` before or after `some`.
This is what I have tried so far:
```
select replace('some text, 123 someone, another text some1','some','')
from dual;
```
I am getting output:
```
text, 123 one, another text 1
```
In the above output I expect `someone` not to be replaced, while `some1` should be removed entirely.
How should I achieve this? Any suggestion will be appreciated.
**Edit:** For clarity this is another example of what I am looking for:
```
some other text someone other text, someB some1 some.
```
output should be:
```
other text someone other text, someB
```
In the above sentence `someB` is not removed because it has characters from `a-z`/`A-Z` adjacent to `some`,
and `some1` and `some.` are removed because they don't have such characters.
**Edit2**
If I use regex:
```
select REGEXP_REPLACE('some text, 123 someone, another text some1','[^a-zA-Z]','')
from dual
```
I am getting output:
```
sometextsomeoneanothertextsome
```
Expected output:
```
sometextsomeoneanothertext
```
Note that I want `some1` to be removed from the string as well, as it contains a character other than `A-Z`.
Answers using `regex` are also appreciated. | Due to the lack of support for [lookbehind/lookahead](http://www.regular-expressions.info/lookaround.html) and [word boundaries (`\b`)](http://www.regular-expressions.info/wordboundaries.html) in Oracle's implementation of regular expressions, it seems impossible to meet all requirements in a single [`REGEXP_REPLACE`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions130.htm) call, especially for the case [pointed out by Egor Skriptunoff](https://stackoverflow.com/questions/21527273/remove-specific-word-from-string/21593820#comment-32504712): pattern matches that follow one another with only one separator between them, like `some some some some ...`.
Without this case it's possible to match all such strings with this call:
```
regexp_replace(
source_string, -- source string
'([^[:alnum:]]|^)((\d)*some(\d)*)([^[:alnum:]]|$)', -- pattern
'\1\5', -- leave separators in place
1, -- start from beginning
0, -- replace all occurences
'im' -- case-insensitive and multiline
);
```
Pattern parts:
```
( -- start of Group #1
[^[:alnum:]] -- any non-alphanumeric character
| -- or
^ -- start of string or start of line
) -- end of Group #1
( -- start of Group #2
( -- start of Group #3
\d -- any digit
) -- end of Group #3
* -- include in previous group zero or more consecutive digits
some -- core string to match
( -- start of group #4
\d -- any digit
) -- end of group #4
* -- include in previous group zero or more consecutive digits
) -- end of Group #2
( -- start of Group #5
[^[:alnum:]] -- any non-alphanumeric character
| -- or
$ -- end of string or end of line
) -- end of Group #5
```
Because the separators used for matching (Group #1 and Group #5) are included in the match pattern, they would be removed from the source string on a successful match, so we restore these parts by referencing them in the third `regexp_replace` parameter.
Based on this solution it's possible to replace all, even repetitive occurrences within a loop.
For example, you can define a function like this:
```
create or replace function delete_str_with_digits(
pSourceString in varchar2,
pReplacePart in varchar2 -- base string (like 'some' in question)
)
return varchar2
is
C_PATTERN_START constant varchar2(100) := '([^[:alnum:]]|^)((\d)*';
C_PATTERN_END constant varchar2(100) := '(\d)*)([^[:alnum:]]|$)';
vPattern varchar2(4000);
vCurValue varchar2(4000);
vPatternPosition binary_integer;
begin
vPattern := C_PATTERN_START || pReplacePart || C_PATTERN_END;
vCurValue := pSourceString;
vPatternPosition := regexp_instr(vCurValue, vPattern);
while(vPatternPosition > 0) loop
vCurValue := regexp_replace(vCurValue, vPattern,'\1\5',1,0,'im');
vPatternPosition := regexp_instr(vCurValue, vPattern);
end loop;
return vCurValue;
end;
```
and use it with SQL or other PL/SQL code:
```
SELECT
delete_str_with_digits(
'some text, -> awesome <- 123 someone, 3some3
line of 7 :> some some some some some some some <
222some another some1? some22 text 0some000',
'some'
) as result_string
FROM
dual
```
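(Editor's note, not part of the original answer) In engines that do support lookarounds, such as Python's `re`, the whole rule fits a single pattern, and the back-to-back `some some` case is handled in one pass because lookarounds consume no separator characters. Punctuation attached to a removed token (like the `.` in `some.`) is left in place by this sketch:

```python
import re

# Token to delete: optional digits + 'some' + optional digits,
# with no letter/digit touching it on either side (lookarounds keep separators).
TOKEN = re.compile(r'(?<![0-9A-Za-z])\d*some\d*(?![0-9A-Za-z])')

def remove_some(text):
    cleaned = TOKEN.sub('', text)
    # tidy the blanks left behind by removed tokens
    return re.sub(r'\s{2,}', ' ', cleaned).strip()

print(remove_some('some text, 123 someone, another text some1'))
# -> text, 123 someone, another text
print(remove_some('a some some some b'))  # -> a b
```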
**[`SQLFiddle example`](http://sqlfiddle.com/#!4/ec122/1)** | Here is an approach that doesn't use regular expressions:
```
select trim(replace(' '||'some text, 123 someone, another text some1'||' ',
' some ',' '
)
)
from dual;
``` | remove specific word from string | [
"",
"sql",
"regex",
"oracle",
"replace",
""
] |
I'm working with a recursive table and I'm trying to build a query to get my child node counts; however, my query fails to return the correct count when I restrict it by the `basic_user_id`.
The idea behind the design is to enable different users to have their own hierarchies of companies; however, I can't have them conflicting with each other, which is what appears to be happening.

**My query**
```
select * , count(c2.company_id )
from company_user c1 left join company_user c2 on c2.parent_id = c1.company_id
where c1.company_id in (1337)
and c1.basic_user_id = 23
group by c1.company_id;
```
`basic_user_id = 23` should return a count of 1 which it does correctly

Now when I change `basic_user_id` to 541, I'm expecting it to return a count of 0 however it still seems to return a count of 1.

How do I get `basic_user_id = 541` to return a count of 0 and `basic_user_id = 23` to return a count of 1? | Could you try this? You can test here. <http://www.sqlfiddle.com/#!2/73f61/2>
I have added `AND child_tab.basic_user_id = parent_tab.basic_user_id` to the ON clause, because you are doing a `LEFT JOIN`.
```
SELECT parent_tab.company_id, COUNT(child_tab.company_id)
FROM company_user parent_tab LEFT JOIN company_user child_tab
ON child_tab.parent_id = parent_tab.company_id
AND child_tab.basic_user_id = parent_tab.basic_user_id
WHERE parent_tab.company_id IN (1337)
AND parent_tab.basic_user_id = 23
GROUP BY parent_tab.company_id;
``` | Resolved, I just needed to add a second join condition.
```
select * , count(c2.company_id )
from deepblue.company_user c1 left join deepblue.company_user c2 on c2.parent_id = c1.company_id
and c1.basic_user_id = c2.basic_user_id
where c1.company_id in (1337)
and c1.basic_user_id= 541
group by c1.company_id;
``` | Getting child node count from mysql query | [
"",
"mysql",
"sql",
"aggregate-functions",
"hierarchical-data",
""
] |
I want to take the first row (postgresql 9.2) :
```
SELECT posts.employee_id, posts.month, posts.year
FROM posts
WHERE ((posts.month = 9 AND posts.year = 2013)
OR
(posts.month IS NULL AND posts.year IS NULL))
ORDER BY posts.employee_id
```
But it returns:

**The goal is to get a result of just one row for each employee\_id**:
IF a record with the month and year is found
THEN return that record
ELSE return the record with month = NULL and year = NULL
There should be just one row (not two as in the picture).
It acts as a default fallback for records that are not present.
[SQL FIDDLE](http://sqlfiddle.com/#!15/32227/1/0) | Not exactly sure how `MAX` behaves in PostgreSQL, but something like this would work in SQL Server:
```
SELECT
posts.employee_id,
max(posts.month) as "month",
max(posts.year) as "year"
FROM posts
WHERE
(posts.month = 9 AND posts.year = 2013)
OR
(posts.month IS NULL AND posts.year IS NULL)
GROUP BY posts.employee_id
```
This assumes that only one or two records will be returned for each id.
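(Editor's note, not part of the original answer) The NULL-skipping behaviour of aggregate `MAX` that this relies on is standard; here is a quick self-contained check of the same query shape with toy data in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (employee_id INT, month INT, year INT)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [(1, 9, 2013), (1, None, None), (2, None, None)],
)
rows = conn.execute(
    """
    SELECT employee_id, MAX(month), MAX(year)
    FROM posts
    WHERE (month = 9 AND year = 2013)
       OR (month IS NULL AND year IS NULL)
    GROUP BY employee_id
    ORDER BY employee_id
    """
).fetchall()
print(rows)  # [(1, 9, 2013), (2, None, None)]
```

Employee 1 has both a matching row and the NULL fallback row, and `MAX` ignores the NULLs; employee 2 only has the fallback row, so NULLs come back.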
---
**EDIT** (Updated Fiddle)
**[SQL Fiddle](http://sqlfiddle.com/#!15/47647/20)** | ```
SELECT posts.employee_id, posts.month, posts.year
FROM posts
WHERE posts.employee_id = 1
AND ((posts.month = 9 AND posts.year = 2013)
OR (posts.month IS NULL AND posts.year IS NULL))
ORDER BY month DESC NULLS LAST
LIMIT 1
```
[**From Documentation**](http://www.postgresql.org/docs/8.3/static/queries-order.html)
> The NULLS FIRST and NULLS LAST options can be used to determine
> whether nulls appear before or after non-null values in the sort
> ordering. By default, null values sort as if larger than any non-null
> value; that is, NULLS FIRST is the default for DESC order, and NULLS
> LAST otherwise.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!15/47647/4) | SQL WHERE (contains or equal) | [
"",
"sql",
"postgresql",
""
] |
Thanks everyone for the input, especially during the closing hours of the bounty; it's been incredibly helpful.
*This is a follow-up question to [Select courses that are completely satisfied by a given list of prerequisites](https://stackoverflow.com/questions/20671624/select-courses-that-are-completely-satisfied-by-a-given-list-of-prerequisites), which further explains the situation. Reading it is definitely recommended to help understand this question.* (Courses and subjects are distinct entities; subjects are only prerequisites for courses and need not be prerequisites for other subjects - think high school subjects leading to possible university courses.)
I have my database laid out as such.
```
Prerequisite:
+---------------+---------------+
| Id | Name | (Junction table)
|---------------|---------------| CoursePrerequisites:
| 1 | Maths | +---------------+---------------+
| 2 | English | | Course_FK | Prerequisite_FK
| 3 | Art | |---------------|---------------|
| 4 | Physics | | 1 | 1 |
| 5 | Psychology | | 1 | 2 |
+-------------------------------+ | 2 | 3 |
| 2 | 5 |
Course: | 5 | 4 |
+---------------+---------------+ +---------------v---------------+
| Id | Name |
|---------------|---------------|
| 1 | Course1 |
| 2 | Course2 |
| 3 | Course3 |
| 4 | Course4 |
| 5 | Course5 |
+---------------v---------------+
```
and I've been making use of the following query:
```
SELECT Course.id, course.Name, GROUP_CONCAT(DISTINCT Prerequisite.Name) AS 'Prerequisite Name(s)'
FROM Course
LEFT JOIN CoursePrerequisites ON Course.id = CoursePrerequisites.Course_FK
LEFT JOIN Prerequisite ON Prerequisite.id = CoursePrerequisites.Prerequisite_FK
WHERE NOT EXISTS
(SELECT 1
FROM CoursePrerequisites
WHERE Course.id = CoursePrerequisites.Course_FK
AND CoursePrerequisites.Prerequisite_FK NOT IN (SELECT Prerequisite.id FROM Prerequisite Where Name = 'Art' OR Name = 'English' OR Name = 'Psychology''))
GROUP BY Course.id;
```
Which works well to select courses that are exactly filled by their prerequisites.
However, I've hit a roadblock trying to organise the database in such a way that it is able to represent courses with compound prerequisites. For example, a course may require English, Maths and either Art or Psychology. Another example may be prerequisites English and two of either Physics, Psychology, Art, etc.
What would be an appropriate way to structure the database to handle these types of prerequisites (I tried doing some searches, but I couldn't find anything (edit: found this, but not helpful: [Modeling courses and pre-requisites in the database](https://stackoverflow.com/questions/4149216/modeling-courses-and-pre-requisites-in-the-database)) and how would I modify the above query to again return only courses that have at least their prerequisites filled?
For clarification:
Given a list of subjects (from the Prerequisite table), I wish to return a list of Courses that would be eligible given those subjects. In the current database schema, given Maths, English, Art and Physics, the returned courses should be Course1 and Course5 (and NOT Course2 - it has prerequisites Art and Psychology, the latter of which is not satisfied by the given input) as stipulated by the junction table.
I wish to extend the complexity of a Course's prerequisites from a simple 'AND' (Course1 requires Maths AND English) to something that can handle 'OR'/One of x from a set of y (e.g. Course1 now requires English, Maths AND One or more of Art or Psychology).
Progress Edit:
I've been thinking of extending the junction table with a few extra columns for 'at least one of the following' and 'at least two of', etc., as well as another column for 'all of', and placing the prerequisites into a structure that way. Is this a sane way to go about this, and what would be an efficient MySQL query to find eligible courses given a list of subjects?
Progress:
Kuba Wyrostek has suggested below to enumerate all prerequisite combinations for each course into distinct sets. While this would work, I need to do this for ~6k rows, each with many enumerations. Is there a more efficient way to accomplish this? | In my opinion, modeling conjunction and disjunction in one table is always awkward and leads either to a violation of normal form or to an inability to predict how many self joins are necessary. What I understand is that your prerequisites can generally always be expressed as alternatives of conjunctions. So the following:
```
Math AND English AND (Physics1 OR Physics2)
```
may be as well expressed as:
```
(Math AND English AND Physics1) OR (Math AND English AND Physics2)
```
This leads to the conclusion that you probably need an intermediate table describing *sets of prerequisites*. A course is available when *any* of its sets is satisfied, while a set is satisfied when *all* subjects in the set are completed.
So the structure may look like this:
```
Prerequisite:
+---------------+---------------+
| Id | Name |
|---------------|---------------| PrerequisiteSets:
| 1 | Maths | +---------------+---------------+
| 2 | English | | SetNumber | Prerequisite_FK
| 3 | Art | |---------------|---------------|
| 4 | Physics | | 1 | 1 |
| 5 | Psychology | | 1 | 2 |
+-------------------------------+ | 1 | 4 |
| 2 | 1 |
| 2 | 2 |
Course: | 2 | 5 |
+---------------+---------------+ +---------------v---------------+
| Id | Name |
|---------------|---------------|
| 1 | Course1 |
| 2 | Course2 |
| 3 | Course3 |
| 4 | Course4 |
| 5 | Course5 |
+---------------v---------------+
CoursePrerequisite:
+---------------+---------------+
| Course_FK | SetNumber |
|---------------|---------------|
| 5 | 1 |
| 5 | 2 |
+---------------v---------------+
```
As an example, Course5 can be satisfied with either SetNumber 1 (Math, English, Physics) or SetNumber 2 (Math, English, Psychology).
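(Editor's note, not part of the original answer) The set semantics are easy to sanity-check outside SQL; a few lines of Python with illustrative names:

```python
# A course is available when ANY of its prerequisite sets is satisfied;
# a set is satisfied when ALL of its subjects are completed.
course_sets = {
    "Course5": [
        {"Math", "English", "Physics"},     # SetNumber 1
        {"Math", "English", "Psychology"},  # SetNumber 2
    ],
}

def eligible(completed, sets):
    return any(s <= completed for s in sets)  # subset test per set

print(eligible({"Math", "English", "Physics"}, course_sets["Course5"]))  # True
print(eligible({"Math", "English", "Art"}, course_sets["Course5"]))      # False
```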
Unfortunately it's too late here for me to help you with exact queries now, but in case you need it I can extend my answer tomorrow. Good luck though! :-)
**EDIT**
To generate the queries I'd start with the observation that a particular set is matched when all prerequisites in the set are a subset of the given prerequisites. This leads to the condition that the number of distinct prerequisites in the set must equal the number of prerequisites in this set that are also in the given set. Basically (assuming SetNumber-Prerequisite_FK is a unique pair in the table):
```
select
SetNumber,
count(Prerequisite_FK) as NumberOfRequired,
sum(case when Prerequisite.Name in ('Math','English','Art') then 1 else 0 end)
as NumberOfMatching
from PrerequisiteSets
inner join Prerequisite on PrerequisiteSets.Prerequisite_FK = Prerequisite.ID
group by SetNumber
having
count(Prerequisite_FK)
=
sum(case when Prerequisite.Name in ('Math','English','Art') then 1 else 0 end)
```
Now getting the final Courses boils down to getting all courses for which at least one set number is found in the results of the query above. Something like this (it can definitely be expressed better and optimized with joins, but the general idea is the same):
```
select Id, Name
from Course
where Id in
(select Course_FK from CoursePrerequisite
where SetNumber in
(
-- insert query from above (but only first column: SetNumber, skip the two latter)
)
)
``` | > Kuba Wyrostek has suggested below to enumerate all prerequisite combinations for each course into distinct sets. While this would work, I need to do this for ~6k rows, each with many enumerations. Is there a more efficient way to accomplish this?
Storing sets is an obvious choice, and I agree with Kuba. But I suggest a slightly different approach:
```
prereqs: courses:
+------+------------+ +------+------------+
| p_id | Name | | c_id | Name |
|------|------------| |------|------------|
| 1 | Math | | 1 | Course1 |
| 2 | English | | 2 | Course2 |
| 3 | Art | | 3 | Course3 |
| 4 | Physics | | 4 | Course4 |
| 5 | Psychology | | 5 | Course5 |
+------+------------+ +------+------------+
compound_sets: compound_sets_prereqs:
+-------+-------+-------+ +-------+-------+
| s_id | c_id | cnt | | s_id | p_id |
|-------|-------|-------| |-------|-------|
| 1 | 1 | 1 | | 1 | 1 |
| 2 | 1 | 2 | | 1 | 2 |
| 3 | 2 | 1 | | 2 | 3 |
| 4 | 2 | null | | 2 | 4 |
| 5 | 3 | null | | 2 | 5 |
+-------+-------+-------+ | 3 | 1 |
| 3 | 4 |
| 4 | 1 |
| 4 | 2 |
| 5 | 2 |
| 5 | 3 |
+-------+-------+
```
The "cnt" column above stores the minimum number of required matches, NULL value means all prerequisites have to match. So in my example we have the following requirements:
Course1: ( Math or English ) and ( at least two out of Art, Physics and Psychology )
Course2: ( Math or Physics ) and ( both Math and English )
Course3: both English and Art
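(Editor's note, not part of the original answer) Each set therefore reduces to a simple count comparison; a Python sketch of the per-set rule using the example above:

```python
def set_satisfied(required, completed, cnt=None):
    # cnt is the minimum number of matches; None means "all required".
    need = len(required) if cnt is None else cnt
    return len(required & completed) >= need

given = {"Physics", "English", "Math", "Psychology"}
print(set_satisfied({"Art", "Physics", "Psychology"}, given, cnt=2))  # True
print(set_satisfied({"English", "Art"}, given))                       # False
```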
Here's the SQL:
```
select t.c_id
, c.name
from ( select c_id
, sets_cnt
-- flag the set if it meets the requirements
, case when matched >= min_cnt then 1 else 0 end flag
from ( select c.c_id
, cs.s_id
-- the number of matched prerequisites
, count(p.p_id) matched
-- if the cnt is null - we need
-- to match all prerequisites
, coalesce( cnt, count(csp.p_id) ) min_cnt
-- the total number of sets the course has
, ( select count(1)
from compound_sets t
where t.c_id = c.c_id
) sets_cnt
from courses c
join compound_sets cs
on cs.c_id = c.c_id
join compound_sets_prereqs csp
on cs.s_id = csp.s_id
left join ( select p_id
from prereqs p
-- this data comes from the outside
where p.name in ( 'Physics',
'English',
'Math',
'Psychology' )
) p
on csp.p_id = p.p_id
group by c.c_id, cs.s_id, cs.cnt
) t
) t
, courses c
where t.c_id = c.c_id
group by t.c_id, c.name, sets_cnt
-- check that all sets of this course meet the requirements
having count( case when flag = 1 then 1 else null end ) = sets_cnt
``` | Compound course prerequisites (One or more of a,b,c and either x or y as well as z style) | [
"",
"mysql",
"sql",
"prerequisites",
"relational-division",
""
] |
I need to display the keyboard players from a list of bands, and I've been able to do so using the following SQL:
```
SELECT BAND.NAME AS Band_Name, KBPLAYER.NAME AS Keyboard_Player
FROM BAND
FULL OUTER JOIN (
SELECT M.NAME, MO.BID
FROM MEMBEROF MO, MEMBER M
WHERE MO.INSTRUMENT='keyboards'
AND M.MID=MO.MID
) KBPLAYER
ON BAND.BID=KBPLAYER.BID
ORDER BY BAND.NAME, KBPLAYER.NAME
```
The above query displays the names of all the bands and the keyboard player (if any) in each band, but I also want to display 'No KeyBoard Players' for those bands that don't have a keyboard player. How can I achieve this? Please let me know if you need me to furnish you with details of the table structure.
**Update:** Please note that I'm not able to use any of the `SQL3 procedures` (`COALESCE, CASE, IF..ELSE`). It needs to conform strictly to the SQL3 standard. | I've decided to do it differently since I wasn't getting anywhere with the above SQL. I'd appreciate it if anyone has suggestions for the above SQL given the set constraints.
```
SELECT
band.name AS Band_Name, 'NULL' AS Keyboard_Player
FROM
memberof
INNER JOIN
member
ON
memberof.mid = member.mid
FULL JOIN
band
ON
memberof.bid = band.bid
AND
instrument = 'keyboards'
WHERE
member.name IS NULL
UNION
SELECT
band.name AS Band_Name, member.name AS Keyboard_Player
FROM
memberof
INNER JOIN
member
ON
memberof.mid = member.mid
FULL JOIN
band
ON
memberof.bid = band.bid
WHERE
instrument = 'keyboards'
``` | Use the [coalesce](http://www.postgresql.org/docs/current/static/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL) function. This function returns the first of its arguments that is not null. E.g.:
```
COALESCE(KBPLAYER.NAME,'No KeyBoard Players') AS Keyboard_Player
``` | SQL Replace NULL Values with Own text | [
"",
"sql",
"postgresql",
""
] |
I have a database and my dataset is really messed up. The column of importance is a "uniqueidentity" number where some records have "&&" or "%%" contained at the end of the value. If a record does, I would like to delete the entire row from the table. uniqueidentity = VARCHAR
Does anybody have any ideas on how to do this using a SQL Query?
Thanks in advance | you could use
```
DELETE FROM yourTable WHERE RIGHT(uniqueidentity, 2) = '&&' OR RIGHT(uniqueidentity, 2) = '%%'
``` | Try like this:
```
DELETE FROM tableName WHERE
uniqueidentity REGEXP '%%$' OR
uniqueidentity REGEXP '&&$';
```
I have created a supporting [SQL FIDDLE](http://sqlfiddle.com/#!2/a2c1d6/5) with a select query, which you can change to a delete as above. | Deleting all rows which contain a unique value | [
"",
"mysql",
"sql",
""
] |
I'm working on SQL Server 2008 R2. I want to join table **ValueTable** and table **DateRangeTable**; data for both tables is shown in the picture. There is no strong relation between them. I need to update a **ValueTable** column based on the range given by the **DateRangeTable** columns **StartDate** and **EndDate**.

Please ask if you have any questions; any type of suggestion is welcome. Thanks in advance. | Try this and let me know your comments/results:
```
UPDATE VT SET IncRange = DRT.IncRange
FROM ValueTable VT INNER JOIN DateRangeTable DRT ON VT.BillDate >= DRT.StartDate AND VT.BillDate <= DRT.EndDate
``` | You can use an `update...select` query like below
```
update ValueTable
set ValueTable.incRange = drt.incRange
from DateRangeTable drt
where ValueTable.BillDate >= drt.StartDate and ValueTable.BillDate <= drt.EndDate
``` | How to update tables with in date range | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
Hi, I wrote this query in SQL Server 2008,
but something goes wrong:
```
select * from News_Table
where (DATEDIFF( DAY ,convert(datetime, NewsDate) , convert(datetime,@Todaydate )) <= @Count)
```
Here `NewsDate` and `@Todaydate` are two nvarchar values that are saved like this: `2014/11/16`.
Running this query gives me an error:
```
Conversion failed when converting date and/or time from character string
``` | Try adding the correct style parameter to your convert function (see MSDN: [link](http://msdn.microsoft.com/en-us/library/ms187928.aspx) )
i.e. `CONVERT(DATETIME, NewsDate, 111)` (111 is the style for YYYY/MM/DD)
Then you get:
```
SELECT *
FROM News_Table
WHERE (DATEDIFF( DAY ,
CONVERT(DATETIME, NewsDate, 111) ,
CONVERT(DATETIME,@Todaydate, 111)
) <= @Count)
``` | To know more [click here](http://networking.ringofsaturn.com/SQL/howtoformatdatetime.php)
```
SELECT convert(datetime, '2014/11/16', 111) as datetime
```
**Output:**

So your query would be like this
```
Select * from News_Table
where (DATEDIFF( DAY ,convert(datetime, '2014/11/16', 111) , convert(datetime,@Todaydate,111 )) <= @Count)
``` | how to convert nvarchar(50) to datetime in sqlserver 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"datetime",
"converters",
""
] |
I want to split the text from the NAME column and insert comma-separated data into the PARCA column for each row, e.g.:
```
name parca
---- -------------
john j,jo,joh,john
```
Code:
```
DECLARE @i int = 0
WHILE @i < 8
BEGIN
SET @i = @i + 1
update export1 set PARCA = cast ( PARCA as nvarchar(max)) + cast (substring(NAME,1,@i) as nvarchar(max) ) +','
FROM export1
end
```
There are two things I can't do:
1. I could not make the @i value match the length of the NAME value in each row
2. I could not check the NAME column to see whether its value is already in the PARCA column | Create this function:
```
create function f_parca
(
@name varchar(100)
) returns varchar(max)
as
begin
declare @rv varchar(max) = ''
if @name is not null
select top (len(@name)) @rv += ','+ left(@name, number + 1)
from master..spt_values v
where type = 'p'
return stuff(@rv, 1,1,'')
end
```
Testing the function
```
select dbo.f_parca('TClausen')
```
Result:
```
T,TC,TCl,TCla,TClau,TClaus,TClause,TClausen
```
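(Editor's note, not part of the original answer) For cross-checking, the same running-prefix construction in a few lines of Python:

```python
def parca(name):
    # 'john' -> 'j,jo,joh,john': comma-separated list of all prefixes
    return ",".join(name[:i] for i in range(1, len(name) + 1))

print(parca("john"))      # j,jo,joh,john
print(parca("TClausen"))  # T,TC,TCl,TCla,TClau,TClaus,TClause,TClausen
```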
Update your table like this:
```
UPDATE export1
SET PARCA = dbo.f_parca(name)
``` | ```
DECLARE @Count INT,@I INT
SET @I = 1
SET @Count = LEN('SURESH')
DECLARE @N VARCHAR(2000)
SET @N = ''
WHILE @Count > 0
BEGIN
SET @N = @N + ','+SUBSTRING('SURESH',1,@I)
SET @I = @I+1
SET @Count = @Count -1
END
SELECT SUBSTRING(@N,2,2000)
```
The above code is only a sample. 'SURESH' is your name field, in place of which you can pass your own name values. Instead of the final SELECT you can put your UPDATE. | Split text value insert another cell | [
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
Using SQL CE, I have a column of DateTime type. I would like to filter just by year. Is that possible, or should I store the year separately, which seems redundant to me?
E.g. get distinct results of 2010,2011,2013.
Thanks | I think you have the `DATEPART` function (but not the `YEAR` function),
so
```
select DatePart(yyyy, <yourDateTime>)
```
or if that's for ordering, of course
```
order by DatePart(yyyy, <yourDatetime>)
```
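(Editor's note, not part of the original answer) For quick experimentation outside SQL CE, the same distinct-year idea can be tried in SQLite, where the analogue of `DATEPART(yyyy, ...)` is `strftime('%Y', ...)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (IssuedDate TEXT)")
conn.executemany(
    "INSERT INTO invoices VALUES (?)",
    [("2010-03-01",), ("2011-07-15",), ("2013-01-02",), ("2013-09-30",)],
)
years = [
    r[0]
    for r in conn.execute(
        "SELECT DISTINCT CAST(strftime('%Y', IssuedDate) AS INTEGER) "
        "FROM invoices ORDER BY 1"
    )
]
print(years)  # [2010, 2011, 2013]
```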
**EDIT**
```
select max(InvoiceID)
from yourTable
where DatePart(yyyy, IssuedDate) = 2013
``` | The usual way to do this is to use a range filter:
```
select *
from table
where datecolumn >= '2012/01/01' and datecolumn < '2013/01/01'
```
This has the benefit that any index you may have on `datecolumn` can be used.
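(Editor's note, not part of the original answer) A quick SQLite check of the range form with toy data; ISO date strings compare correctly as text, so the half-open range keeps the filter index-friendly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (datecolumn TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?)",
    [("2011-12-31",), ("2012-01-01",), ("2012-06-15",), ("2013-01-01",)],
)
n = conn.execute(
    "SELECT COUNT(*) FROM t "
    "WHERE datecolumn >= '2012-01-01' AND datecolumn < '2013-01-01'"
).fetchone()[0]
print(n)  # 2
```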
Since the answer you accepted shows that you only care about one single year, your objection to this answer doesn't really apply.
```
select max(InvoiceID)
from table
where IssuedDate >= '2012/01/01' and IssuedDate < '2013/01/01'
```
will work just fine. | Select "YYYY" component only from DateTime column | [
"",
"sql",
"datetime",
"select",
""
] |
I have a table which records each event in a system such as 'Queued', 'Started', 'Finished', 'Failed' etc... There are a lot more steps in the table but these are all I am interested in.
I select from this only the events I want
```
SELECT [Id]
,[EventTime]
,[Message]
FROM [Log]
WHERE [Message] LIKE '%Queued%'
OR [Message] LIKE '%Started%'
OR [Message] LIKE '%Finished%'
OR [Message] LIKE '%Failed%'
```
Which gives me something similar to
```
Id EventTime Message
5764 2013-12-20 17:52:25.037 Queued
5764 2013-12-20 17:53:09.767 Started
5765 2013-12-20 17:55:50.403 Queued
5764 2013-12-20 17:57:07.503 Finished
5765 2013-12-20 17:57:39.010 Started
5765 2013-12-20 17:58:05.553 Failed
```
What I would like to end up from this query is a recordset in the following format
Id, QueuedTime, StartTime, FinishedTime, Duration, Status
Now the Status column should be 'Queued' if there is a QueuedTime only, 'InProgress' if there is a Start but no Finish Time, 'Success' if there is a Start and Finish Time, and 'Failed' if there is a Failed time.
I know I will need some sort of CASE statement for the Status column, but I am not sure how to get it in the format with everything for one Id on the same row.
Can anyone provide some assistance on how to achieve this? | You can achieve this with multiple LEFT OUTER JOINs, but be aware that performance can be bad if you have a lot of data. Querying the *message* column with LIKE is also not a good idea from a performance viewpoint.
Did you think of using some kind of ETL tool to prepare the data before loading it to the DB (assuming that it comes from an external source)?
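(Editor's note, not part of the original answer) On databases that support conditional aggregation, the pivot can also be done in a single pass over the table; a SQLite sketch with the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INT, eventtime TEXT, message TEXT)")
conn.executemany("INSERT INTO log VALUES (?, ?, ?)", [
    (5764, "2013-12-20 17:52:25", "Queued"),
    (5764, "2013-12-20 17:53:09", "Started"),
    (5765, "2013-12-20 17:55:50", "Queued"),
    (5764, "2013-12-20 17:57:07", "Finished"),
    (5765, "2013-12-20 17:57:39", "Started"),
    (5765, "2013-12-20 17:58:05", "Failed"),
])
rows = conn.execute("""
    SELECT id,
           MAX(CASE WHEN message = 'Queued'   THEN eventtime END) AS queued,
           MAX(CASE WHEN message = 'Started'  THEN eventtime END) AS started,
           MAX(CASE WHEN message = 'Finished' THEN eventtime END) AS finished,
           CASE
             WHEN MAX(CASE WHEN message = 'Failed'   THEN 1 END) = 1 THEN 'Failed'
             WHEN MAX(CASE WHEN message = 'Finished' THEN 1 END) = 1 THEN 'Success'
             WHEN MAX(CASE WHEN message = 'Started'  THEN 1 END) = 1 THEN 'InProgress'
             ELSE 'Queued'
           END AS status
    FROM log
    GROUP BY id
    ORDER BY id
""").fetchall()
for r in rows:
    print(r)
```

Each `MAX(CASE ...)` picks out the one event time per message type (NULL otherwise), so one GROUP BY does the whole pivot.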
Anyway, here's what you can do:
```
select
q.id,
q.eventtime as QueuedTime,
s.eventtime as StartTime,
f.eventtime as FinishTime,
f.eventtime-s.eventtime as Duration,
case
when fail.id is not null then 'Failed'
when (q.eventtime is not null and s.eventtime is null and f.eventtime is null) then 'Queued'
when (q.eventtime is not null and s.eventtime is not null and f.eventtime is null) then 'InProgress'
when (q.eventtime is not null and s.eventtime is not null and f.eventtime is not null) then 'Success'
end as Status
from
(
select
distinct id,
eventtime
from
log
where
message like '%Queued%'
) q
left outer join
(
select
distinct id,
eventtime
from
log
where
message like '%Started%'
) s
on
q.id = s.id
left outer join
(
select
distinct id,
eventtime
from
log
where
message like '%Finished%'
) f
on
q.id = f.id
left outer join
(
select
distinct id,
eventtime
from
log
where
message like '%Failed%'
) fail
on
q.id = fail.id
``` | You have a missing variable for grouping all those logs into one session.
In your query there may be several logs with status "Started", so you need a "requestId" or something similar that will contain one log of each type. | SQL grouping records from a log table | [
"",
"sql",
"sql-server",
""
] |
How do I realize the following in MySQL with triggers:
When the value of some column is null -> set other column values to null
and
when the value of some column is not null -> set another column value to null
table definition:
```
CREATE TABLE variations (
id int(10) NOT NULL,
x1 int(10) NOT NULL,
x2 int(10),
x1_option1 BOOL,
x2_option1 BOOL,
x1_option2 varchar(10),
x2_option2 varchar(10)
);
```
The idea is that we have 2 elements, `x1` and `x2`. While `x1` is mandatory, `x2` is optional and can be null. Both x1 and x2 have two options: `x1_option1`, `x2_option1`, `x1_option2` and `x2_option2`.
The first rule should be that when x2 is null, both options for x2 (`x2_option1`, `x2_option2`) must also be null.
My attempt:
```
CREATE
TRIGGER check_null_x2 BEFORE INSERT
ON variations
FOR EACH ROW BEGIN
IF NEW.x2 IS NULL THEN
SET NEW.x2_option1 = NULL;
SET NEW.x2_option2 = NULL;
END IF;
END$$
```
Throws Error:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 6
```
Can you please help me figure out what's wrong? I just don't understand what '' means.
The second rule should be that only one of the two options can be selected. That means if `x2_option1` is NOT NULL, `x2_option2` must be NULL. In general I think this can be done the same way as the first rule. My question: how can I do multiple 'IF', 'ELSE IF' etc. in one trigger? | This is the syntax for a trigger:
```
delimiter //
CREATE TRIGGER upd_check BEFORE UPDATE ON account
FOR EACH ROW
BEGIN
IF NEW.amount < 0 THEN
SET NEW.amount = 0;
ELSEIF NEW.amount > 100 THEN
SET NEW.amount = 100;
END IF;
END;//
delimiter ;
```
...and your code is here:
```
DELIMITER //
CREATE TRIGGER check_null_x2 BEFORE INSERT ON variations
FOR EACH ROW
BEGIN
IF NEW.x2 IS NULL THEN
SET NEW.x2_option1 = NULL;
SET NEW.x2_option2 = NULL;
END IF;
END$$ -- THIS LINE SHOULD BE: "END;//"
DELIMITER ;
```
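As a runnable cross-check of the rule itself (not of the MySQL delimiter handling), here is a sketch of the same null-propagation in SQLite, which is easy to try anywhere. Note the assumption: SQLite triggers cannot assign to `NEW`, so an `AFTER INSERT` trigger with an `UPDATE` stands in for `SET NEW.…`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE variations (
    id INTEGER, x1 INTEGER, x2 INTEGER,
    x1_option1 INTEGER, x2_option1 INTEGER,
    x1_option2 TEXT, x2_option2 TEXT
);
-- SQLite cannot assign to NEW in a BEFORE trigger, so an AFTER INSERT
-- trigger updates the freshly inserted row instead.
CREATE TRIGGER check_null_x2 AFTER INSERT ON variations
WHEN NEW.x2 IS NULL
BEGIN
    UPDATE variations
    SET x2_option1 = NULL, x2_option2 = NULL
    WHERE rowid = NEW.rowid;
END;
""")
con.execute("INSERT INTO variations VALUES (1, 10, NULL, 1, 1, 'a', 'b')")
row = con.execute(
    "SELECT x2_option1, x2_option2 FROM variations WHERE id = 1"
).fetchone()
print(row)  # both x2 options were nulled by the trigger
```

The same `WHEN`-guarded shape extends to the second rule (one trigger, several conditions).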
**EDIT:**
The official [Documentation](https://dev.mysql.com/doc/refman/5.7/en/stored-programs-defining.html) says the following:
> If you use the mysql client program to define a stored program containing semicolon characters, a problem arises. By default, mysql itself recognizes the semicolon as a statement delimiter, so you must redefine the delimiter temporarily to cause mysql to pass the entire stored program definition to the server.
>
> To redefine the mysql delimiter, use the delimiter command. The following example shows how to do this for the dorepeat() procedure just shown. The delimiter is changed to // to enable the entire definition to be passed to the server as a single statement, and then restored to ; before invoking the procedure. This enables the ; delimiter used in the procedure body to be passed through to the server rather than being interpreted by mysql itself. | You seem to have ";" set as the DELIMITER, which causes the query to execute once it sees a ";". Try changing it first:
```
DELIMITER //
CREATE
TRIGGER check_null_x2 BEFORE INSERT
ON variations
FOR EACH ROW BEGIN
IF NEW.x2 IS NULL THEN
SET NEW.x2_option1 = NULL;
SET NEW.x2_option2 = NULL;
END IF;
END;//
DELIMITER ;
``` | MySQL Trigger usage : if column is null set other column null | [
"",
"mysql",
"sql",
""
] |
I'm doing multiple inserts in a **single** query:
```
INSERT INTO table (c1, c2) VALUES (1,2),
(2,3),
(3,4),......
ON DUPLICATE KEY UPDATE c2 = VALUES(c2)
```
Now suppose that there are tens of thousands of VALUES tuples specified in the query (hence the ellipsis).
Can there ever be a case in which some parts of the VALUES managed to get inserted/updated in the database but the rest did not get inserted/updated possibly due to some sort of db error/failure/memory-running-out etc?
**Are mysql queries ALL or Nothing?**
Is it true that for every mysql query executed, either all values specified in the query will be inserted/updated smoothly, or none of the values will be inserted/updated? | [ACID](https://en.wikipedia.org/wiki/ACID) (Atomicity, Consistency, Isolation, Durability) properties are used to describe such behaviour in databases. Atomicity is only important if we're dealing with [concurrent modifications](https://en.wikipedia.org/wiki/Concurrency_control). To ensure Consistency, a certain level of Isolation must be reached. The more isolated multiple transactions run, however, the less performance the DBMS usually has. So there is the so called "[isolation level](https://en.wikipedia.org/wiki/Isolation_%28computer_science%29#Isolation_levels)", which states what errors can possibly occur in a DBMS and which cannot.
Now, MySQL implements all isolation levels in INNODB databases, and you can choose for each transaction: <https://dev.mysql.com/doc/refman/5.1/en/set-transaction.html>
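The MySQL engines themselves aren't runnable here, but statement-level atomicity — the "all or nothing" per statement — is easy to observe in SQLite (a sketch, not the MySQL implementation):

```python
import sqlite3

# A multi-row INSERT either fully applies or fully rolls back when a later
# tuple fails: with the default ON CONFLICT ABORT resolution, SQLite undoes
# the partial changes of the failing statement.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (c1 INTEGER PRIMARY KEY, c2 INTEGER)")
try:
    con.execute("INSERT INTO t VALUES (1, 2), (2, 3), (1, 9)")  # (1, 9) violates the PK
except sqlite3.IntegrityError:
    pass
count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # no rows survived the aborted statement
```

Even though the first two tuples were valid, the failing third tuple backs out the whole statement.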
MyIsam databases don't support transactions, single operations should however run atomically. (Source: <https://dev.mysql.com/doc/refman/5.0/en/ansi-diff-transactions.html>). Note however, that this does NOT guarantee data isn't changed between the reads and writes in one operation - atomicity in DBMS terms only means that the operation is either completely done or completely skipped. It does NOT guarantee isolation, consistency or durability. | *"Can there ever be a case in which some parts of the VALUES managed to get inserted/updated in the database but the rest did not get inserted/updated possibly due to some sort of db error/failure/memory-running-out etc?"*
Late answer, but perhaps interesting: `[ON DUPLICATE KEY] UPDATE` is not *strictly* atomic for single rows (neither for `MyISAM`, nor for `InnoDB`), but it will be atomic in regards to errors.
What's the difference? Well, this illustrates the potential problem in assuming strict atomicity:
```
CREATE TABLE `updateTest` (
`bar` INT(11) NOT NULL,
`foo` INT(11) NOT NULL,
`baz` INT(11) NOT NULL,
`boom` INT(11) NOT NULL,
PRIMARY KEY (`bar`)
)
COMMENT='Testing'
ENGINE=MyISAM;
INSERT INTO `updateTest` (`bar`, `foo`, `baz`, `boom`) VALUES (47, 1, 450, 2);
INSERT
`updateTest`
(`bar`, `foo`, `baz`, `boom`)
VALUES
(47, 0, 400, 5)
ON DUPLICATE KEY UPDATE
`foo` = IF(`foo` = 1, VALUES(`foo`), `foo`),
`baz` = IF(`foo` = 1, VALUES(`baz`), `baz`),
`boom` = IF(`foo` = 1, VALUES(`boom`), `boom`);
```
`(47, 1, 450, 2)` will have turned into `(47, 0, 450, 2)`, and not into `(47, 0, 400, 5)`. If you assume *strict atomicity* (which is not to say you should; you might prefer this behaviour), that shouldn't happen - `foo` should certainly not change before the other columns' values are even *evaluated*. `foo` should change together with the other columns - *all or nothing*.
If I say *atomic in regards to errors*, I mean that if you remove the `IF()` condition in the above example that's highlighting the stricter situation, like this...
```
INSERT INTO `updateTest` (`bar`, `foo`, `baz`, `boom`) VALUES (48, 1, 450, 2);
INSERT
`updateTest`
(`bar`, `foo`, `baz`, `boom`)
VALUES
(48, 0, 400, 5)
ON DUPLICATE KEY UPDATE
`foo` = VALUES(`foo`),
`baz` = VALUES(`baz`),
`boom` = VALUES(`boom`);
```
...you will always *either* end up with `(48, 1, 450, 2)` *or* `(48, 0, 400, 5)` after your statement has finished/crashed, and *not* some in-between state like `(48, 0, 450, 2)`.
The same is true for the behaviour of `UPDATE`, but there's even less of a reason to juggle `IF()` statements there, since you can just put your conditionals into your `WHERE` clause there.
In conclusion: Outside of edge-cases, you do have atomicity for single-row statements, even using `MyISAM`. See [Johannes H.'s answer for further information](https://stackoverflow.com/a/21584504/245790). | Are mysql multiple inserts within a Single query atomic? | [
"",
"mysql",
"sql",
"insert",
"sql-update",
"atomic",
""
] |
How to get sysdate in following format in SQL Server?
> 1/1/2014
>
> 1/2/2014
>
> 1/3/2014
i.e **`M/D/YYYY`**
I searched on Google and got different formats, but I want the following format. Please help me.
Thanks. | Have a look at the CONVERT documentation in msdn article [here](http://msdn.microsoft.com/en-us/library/ms187928.aspx)
You can see many formats here.
For your case use
```
SELECT CONVERT(NVARCHAR(20),yourdate,101) as formatteddate
```
Based on your question edit I would recommend `datepart` and associated functions like `MONTH, DAY, YEAR`
```
SELECT CAST(MONTH(GETDATE()) AS VARCHAR(2))+'/'+CAST(DAY(GETDATE()) AS VARCHAR(2))+'/'+CAST(YEAR(GETDATE()) AS VARCHAR(4))
```
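The concatenation above intentionally produces the unpadded M/D/YYYY shape. As a quick sanity check of that target format outside T-SQL (a hedged Python sketch, not SQL Server):

```python
from datetime import date

def m_d_yyyy(d: date) -> str:
    # Unpadded month/day, mirroring the CAST(MONTH(...)) + '/' + ... concatenation
    return f"{d.month}/{d.day}/{d.year}"

print(m_d_yyyy(date(2014, 1, 3)))  # 1/3/2014, not 01/03/2014
```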
Another edit for 2 years back
```
SELECT CAST(MONTH(DATEADD(yy,-2,GETDATE())) AS VARCHAR(2))+'/'+CAST(DAY(DATEADD(yy,-2,GETDATE())) AS VARCHAR(2))+'/'+CAST(YEAR(DATEADD(yy,-2,GETDATE())) AS VARCHAR(4))
``` | Use this Code:
```
SELECT CONVERT(VARCHAR(10), GETDATE(), 101) AS [MM/DD/YYYY]
```
OR
```
SELECT CONVERT(VARCHAR(10), GETDATE(), 101) AS [M/D/YYYY]
``` | How to get sysdate with different formats? | [
"",
"sql",
"sql-server",
""
] |
```
select datepart(yyyy,hiredate) as Hire_Date_of_Year,jobTitle,count(jobTitle) as Number_Of_Title
from [AdventureWorks2012].[HumanResources].[Employee]
group by jobTitle,hiredate
having hiredate like '2004%'
order by jobtitle asc
```
## Above is my code.
the Output I am getting is this
```
Hire_Date_of_Year jobTitle number_of_Count
2004 Buyer 1
2004 Buyer 1
2004 Buyer 1
2004 Buyer 1
2004 Buyer 1
2004 Buyer 1
2004 Buyer 1
2004 Janitor 1
2004 Janitor 1
2004 Janitor 1
2004 Janitor 1
```
---
The Output I am looking for
```
Hire_Date_of_Year jobTitle number_of_Count
2004 Buyer 7
2004 Janitor 4
```
Thanks in Advance. | You shouldn't really do date comparisons using `like`. That is best used on strings. Here is a way to write the query you want:
```
select datepart(yyyy, hiredate) as Hire_Date_of_Year,
jobTitle, count(jobTitle) as Number_Of_Title
from [AdventureWorks2012].[HumanResources].[Employee]
where datepart(yyyy, hiredate) = 2004
group by jobTitle, datepart(yyyy, hiredate)
order by jobtitle asc;
```
If you want, the `datepart(yyyy, hiredate)` (or `year(hiredate)` if you like) in the `group by` is optional. If you don't include it, the `select` needs to put the year in an aggregation function, such as `max(datepart(yyyy, hiredate))`.
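A runnable sketch of the same year-grouping on toy data (assumed sample rows; SQLite has no `datepart`, so `strftime('%Y', ...)` plays that role here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (hiredate TEXT, jobTitle TEXT);
INSERT INTO Employee VALUES
    ('2004-02-16', 'Buyer'), ('2004-03-02', 'Buyer'),
    ('2004-04-10', 'Janitor'), ('2003-05-01', 'Buyer');
""")
rows = con.execute("""
    SELECT strftime('%Y', hiredate) AS hire_year, jobTitle, COUNT(*) AS n
    FROM Employee
    WHERE strftime('%Y', hiredate) = '2004'   -- filter before grouping
    GROUP BY jobTitle
    ORDER BY jobTitle
""").fetchall()
print(rows)  # one row per title, counted within 2004 only
```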
I moved the condition to the `where` clause for efficiency purposes. You can do the comparison after the aggregation (i.e. in the `having` clause). But that means the `group by` is grouping by all the years before doing the filtering. | You need to group by the Year, not the full date
```
SELECT DatePart(yy, hiredate) As Hire_Date_of_Year
, jobTitle
, Count(jobTitle) As Number_Of_Title
FROM [AdventureWorks2012].[HumanResources].[Employee]
WHERE DatePart(yy, hiredate) = 2004
GROUP
BY DatePart(yy, hiredate)
, jobTitle
``` | I need a SQL Query which group by employee with year, | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
Sorry for the title, I don't know how to explain this well.

Ok, so I want to see if any protocol (PTC\_ID) is linked to an Audit (AUD\_ID). In the picture you can see there are 3 tables, and each one holds a key value of another.
I thought of using `inner join` on all 3 tables with `ON` clauses, e.g. `ON ADA_PTCID = PTC_ID`, and if an audit is linked with a `PTC` then display the year? | try
```
select
ptc.ptc_name,
aud.aud_year
from
ptc_table ptc
inner join
ada_table ada
on
ada.ada_ptcid=ptc.ptc_id
inner join
aud_table aud
on
aud.aud_id=ada.ada_aud_id
``` | ```
Select AUD_YEAR
From AUD_Table at
Inner Join ADA_TABLE ad
ON at.AUD_ID = ad.ADA_AUD_ID
Inner Join PTC_TABLE pt
ON pt.PTC_ID=ad.ADA_PTCID
``` | Get ID from another table through a table | [
"",
"sql",
"sql-server",
""
] |
Could someone verify my understanding of proc sql union operations? My interpretation of the differences between outer union and union is the following:
1. Union deletes duplicate rows, while outer union does not
2. Union will overlay columns, while outer union, by default, will not.
So, would there be any difference between union all corresponding and outer union corresponding? It seems like "ALL" would remove the first difference, and "CORRESPONDING" would remove the second difference, but I'm concerned there could be an additional difference between the two I'm not seeing. | It turns out there is, actually, a difference: how columns which only exist in one dataset are handled. `Outer Union Corresponding` will display columns that appear only in one dataset, not overlaid by position. `Union All Corresponding` will not display any columns that appear in only one dataset. | My understanding is that `OUTER UNION` and `UNION ALL` are effectively if not actually identical. `CORR` is needed for either one to guarantee the columns line up; with `OUTER UNION` the columns will not stack even if they *are* identical, while with `UNION ALL` the columns *always* stack even if they are not identical (must be same data type or it will error), and pay no attention at all to column name. In both cases adding `CORR` causes them to stack.
Here are some examples:
Not stacking:
```
proc sql;
select height, weight from sashelp.class
union all
select weight,height from sashelp.class;
select height, weight from sashelp.class
outer union
select height, weight from sashelp.class;
quit;
```
Stacking:
```
proc sql;
select height, weight from sashelp.class
union all corr
select weight,height from sashelp.class;
select height, weight from sashelp.class
outer union corr
select height, weight from sashelp.class;
quit;
```
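The SAS-specific `OUTER UNION` / `CORR` variants can't be run outside SAS, but the duplicate-removal half of the question (point 1) behaves the same way in standard SQL; a small SQLite sketch:

```python
import sqlite3

# UNION removes duplicate rows; UNION ALL keeps them.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (x INTEGER);
INSERT INTO a VALUES (1), (2);
""")
dedup = con.execute("SELECT x FROM a UNION SELECT x FROM a").fetchall()
keep_all = con.execute("SELECT x FROM a UNION ALL SELECT x FROM a").fetchall()
print(len(dedup), len(keep_all))  # 2 rows vs 4 rows
```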
[This SAS doc page](http://v8doc.sas.com/sashtml/proc/zueryexp.htm) does a good job of showing the differences. | Any difference between union all corresponding and outer union corresponding? | [
"",
"sql",
"sas",
""
] |
I have a table named
Category\_tbl
```
Id
Categoryname
info
```
subcategory\_tbl
```
id
categoryid
subcatname
info
```
product\_tbl
```
id
subcat
info
```
Here, subcategory\_tbl has the category id and product\_tbl has the subcategory id as subcat. Now, if I want to delete a category, then all the corresponding data with the same key in subcategory\_tbl and product\_tbl also needs to be deleted. How can I do it? I tried joining but it's not working. | First delete the data from the tables which have the foreign key, and then the table with the primary key, using multiple delete statements. | Without [cascading deletes](https://stackoverflow.com/questions/12185811/mysql-on-delete-cascade-test-example) on the foreign keys, you can do this as a sequence of deletes from `subcategory` through `product` and finally `category`. Assuming you wish to delete category 123:
```
delete
from product_tbl
where subcatid in
(
select id
from subcategory_tbl
where categoryid = 123
);
delete
from subcategory_tbl
where categoryid = 123;
delete
from Category_tbl
where id = 123;
```
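A runnable sketch of that delete sequence (schema names taken from the question; note the question's product column is `subcat`, so that name is used here in place of `subcatid`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Category_tbl (id INTEGER, Categoryname TEXT);
CREATE TABLE subcategory_tbl (id INTEGER, categoryid INTEGER, subcatname TEXT);
CREATE TABLE product_tbl (id INTEGER, subcat INTEGER);
INSERT INTO Category_tbl VALUES (123, 'Books');
INSERT INTO subcategory_tbl VALUES (7, 123, 'Fiction');
INSERT INTO product_tbl VALUES (99, 7);

-- children first, parent last
DELETE FROM product_tbl
WHERE subcat IN (SELECT id FROM subcategory_tbl WHERE categoryid = 123);
DELETE FROM subcategory_tbl WHERE categoryid = 123;
DELETE FROM Category_tbl WHERE id = 123;
""")
counts = [con.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in ("product_tbl", "subcategory_tbl", "Category_tbl")]
print(counts)  # all three tables emptied for category 123
```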
If however you do have `ON DELETE CASCADE` defined on both foreign keys, all you would need to do is delete the category and the product and subcategories would be deleted as well. Dangerous, but effective. | how to delete from another 2 table using one key? | [
"",
"mysql",
"asp.net",
"sql",
""
] |
I tried this code -
```
UPDATE Table
SET Name = RTRIM(LTRIM(Name))
```
Data type of Name is `varchar(25)`
None of the leading and trailing spaces get removed. When I copy-paste one such `Name`,
I get this -
```
"big dash" "space symbol" ABC001
```
Why is this happening and how do I trim the spaces?
**EDIT -**
The question has already been answered. I found one more table with this problem. I get
"- value" when i copy a column of a row. When I press the enter key at end of this copy-pasted value, i see more dashes. See image below -
 | I suspect, some non readable(Non-ascii characters) inside the name column, that might not get removed as part of `TRIM` calls.
```
select convert(varbinary, Name) from table
```
Reading the `HEX` output from the above query should reveal them.
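A small sketch of the failure mode (shown in SQLite, where `TRIM` likewise strips only ordinary spaces): a leading `CHAR(160)` non-breaking space survives trimming until it is replaced first.

```python
import sqlite3

con = sqlite3.connect(":memory:")
name = "\u00a0ABC001"  # leading CHAR(160), a non-breaking space
trimmed, fixed = con.execute(
    "SELECT TRIM(?), TRIM(REPLACE(?, CHAR(160), ' '))", (name, name)
).fetchone()
print(repr(trimmed), repr(fixed))  # TRIM alone keeps the \xa0; REPLACE+TRIM removes it
```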
Kindly read [this](http://iso30-sql.blogspot.com/2010/10/remove-non-printable-unicode-characters.html) to learn how to write functions to remove such characters. | Kindly use the below query; it will remove spaces, newlines, etc.
```
select LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(REPLACE(Name, CHAR(10), CHAR(32)),CHAR(13), CHAR(32)),CHAR(160), CHAR(32)),CHAR(9),CHAR(32))))
``` | Trim spaces in string - LTRIM RTRIM not working | [
"",
"sql",
"sql-server",
""
] |
I have two tables, `posts` and `posts_replies`. I queried all posts and ordered them by timestamp (post time).
Now I want to do the following :
Any time a user makes a reply to any post, I want to put that post at the top of the wall,
so I did get the max replies' timestamp of each post and order the posts using the max timestamp of the replies of specific post.
The problem is that some posts do not have any replies, so the max timestamp\_of\_replies for those posts will be NULL. So I want to know: is it possible to order the result by timestamp\_of\_replies when it is not null and by post\_timestamp when it is NULL?
My query :
```
SELECT
posts.*,
u1.id as user_id,
u1.avatar,
u1.username as poster_username,
u2.username as posted_username ,
posts.timestamp,
f1.recent_reply_time
FROM
posts
INNER JOIN
users u1
ON posts.poster_id = u1.id
INNER JOIN
users u2
ON posts.posted_id = u2.id
LEFT JOIN (
SELECT
max(timestamp) as recent_reply_time,
post_id
FROM
posts_replies
GROUP BY post_id) f1
ON f1.post_id = posts.id
order by
f1.recent_reply_time DESC
```
Note : `order by f1.recent_reply_time, posts.timestamp DESC` did not give me right results | MySQL has an [`IF()`](http://dev.mysql.com/doc/refman/5.5/en/control-flow-functions.html#function_if) function, that you may also use in the `ORDER BY` clause:
```
ORDER BY IF(column IS NULL, othercolumn, column)
```
in your case it would be:
```
ORDER BY IF(f1.recent_reply_time IS NULL, posts.timestamp, f1.recent_reply_time)
``` | `SORTING` is easily readable and maintainable if you make it based on what you select. It is better to have some `sortkey` in your select and use it in `ORDER BY`.
I have used `COALESCE` to handle nulls. Make sure sortkey has at least one `NOT NULL` value, by using suitable arguments to COALESCE.
```
SELECT posts.*,
u1.id as user_id,
u1.avatar,
u1.username as poster_username,
u2.username as posted_username,
posts.timestamp,
f1.recent_reply_time,
COALESCE(f1.recent_reply_time,posts.timestamp) as sortkey
FROM posts
INNER JOIN users u1 ON posts.poster_id = u1.id
INNER JOIN users u2 ON posts.posted_id = u2.id
LEFT JOIN (SELECT max(timestamp) as recent_reply_time, post_id FROM posts_replies GROUP BY post_id) f1 ON f1.post_id = posts.id
order by sortkey DESC
``` | How to order a query by column if some of its values are null? | [
"",
"mysql",
"sql",
"query-optimization",
""
] |
I have a table like so:
```
create table t1 (
id_a int
id_b int
dt datetime
)
```
Sample data might be:
```
id_a id_b dt
39838 6 2014-01-21 11:20:29.537
39838 546 2014-01-21 11:20:29.790
39839 4088 2014-01-21 11:20:31.543
39795 6 2014-01-21 11:20:33.117
39795 546 2014-01-21 11:20:34.100
39795 3189 2014-01-21 11:20:35.520
39841 6 2014-01-21 11:20:36.957
39841 7588 2014-01-21 11:20:38.030
```
I want some SQL that will tell me which id\_b follows an id\_b of 6 (by follows I mean by dt) for the most id\_a
For the sample data above, id\_b 546 follows 6 twice for the same id\_a and 7588 follows 6 just once for the same id\_a, so the output I would be looking for in this case is 546.
I hope I've made that clear, can anybody help me with how I'd write sql to do that?
Something to this effect:
```
SELECT most_common(id_b)
FROM t1
WHERE previous_entry(id_b) = 6
AND previous_entry(id_a) = this_entry(id_a)
ORDER BY id_a, dt
``` | You can accomplish this by using the PARTITION BY clause:
```
SELECT IB_B, COUNT(ID_A) NO_OF_TIMES FROM
(SELECT *, ROW_NUMBER() OVER (PARTITION BY ID_A ORDER BY DT) AS ROW_NO
FROM T1)TEMP
WHERE ROW_NO = (SELECT TOP 1 ROW_NO+1 FROM
(SELECT *, ROW_NUMBER() OVER (PARTITION BY ID_A ORDER BY DT) AS ROW_NO
FROM T1)TEMP
WHERE ib_b =6
)
GROUP BY IB_B ORDER BY COUNT(id_a) DESC
``` | This will also work in SQL Server 2008:
```
WITH cte AS
( SELECT id_a, id_b,
ROW_NUMBER() OVER (ORDER BY dt) AS rn
FROM t1
)
SELECT TOP 1 t2.id_b, COUNT(*) AS cnt
FROM cte AS t1 JOIN cte AS t2
ON t2.rn = t1.rn+1
WHERE t1.id_b = 6
GROUP BY t2.id_b
ORDER BY COUNT(*) DESC
``` | how to select which id comes second the most often in sql server | [
"",
"sql",
"sql-server",
""
] |
I have two tables that store email information:
* EMAIL
* EMAIL\_ADDRESS
EMAIL has:
* email ID
* timestamp
* and other info we don't care about
EMAIL\_ADDRESS has:
* ID (foreign key references EMAIL.ID)
* EMAIL\_ADDRESS
* TYPE (to, from)
Say I have 6 rows in EMAIL - the query should return the ID, timestamp, to and from address.
At the moment I have this:
```
SELECT ea.EMAIL_ADDRESS, e.ID, e.sent_date
FROM EMAIL_ADDRESS ea, CHANN_EMAIL e
WHERE e.ID=ea.id
AND ea.TYPE in ('to','from')
```
This returns 12 rows, in the format:
-to, ID, date
-from, ID, date
What would the query be so I would have 6 rows with:
-to, from, ID, date | You must reference the EMAIL\_ADDRESS table twice, as two views of it:
```
SELECT eat.EMAIL_ADDRESS as to ,ea.EMAIL_ADDRESS as from, e.ID, e.sent_date
FROM EMAIL_ADDRESS ea, CHANN_EMAIL e,EMAIL_ADDRESS eat
WHERE e.ID=ea.id and e.ID=eat.id
AND ea.TYPE in ('from') AND eat.TYPE in ('to')
``` | try using `GROUP BY e.ID` or `GROUP BY ea.id` | Select multiple column values in single query - oracle | [
"",
"sql",
"oracle",
""
] |
I downloaded the world database from <http://downloads.mysql.com/docs/world.sql.gz>
However, when I pasted all the content of the sql file into phpMyAdmin to create the tables, this error shows:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'The Valley','AIA','?,595)' at line 1
```
I don't know why this would be a syntax error.
```
INSERT INTO `City` VALUES (60,'Namibe','AGO','Namibe',118200);
INSERT INTO `City` VALUES (61,'South Hill','AIA','?,961);
INSERT INTO `City` VALUES (62,'The Valley','AIA','?,595);
INSERT INTO `City` VALUES (63,'Saint John磗','ATG','St John',24000);
INSERT INTO `City` VALUES (64,'Dubai','ARE','Dubai',669181);
```
Has anyone met this issue before? | Try this zip file for the world database dump:
<http://gyankosh.siplstudio.com/wp-content/uploads/2014/02/worlddump-1.sql_.zip> | check the single quotes on the question mark try '?' instead of '? | MySQL the world database sql file has syntax error when inserting data | [
"",
"mysql",
"sql",
""
] |
Suppose today is Thursday. I have 1 flag for both Wednesday and Friday.
If I create a temporary table then it will look like
```
ID WeekDay XFlag
==================================
1 Mon 0
2 Tue 0
3 Wed 1
4 Thu 0
5 Fri 1
6 Sat 0
7 Sun 0
```
Now, as we assumed today is *Thursday*, the next day whose `XFlag` value is 1 is Friday.
Any or all weekdays can be marked/flagged 1. But, I ultimately want to get the next weekday based on today. So, my result will be `Fri` for this in a varchar variable in a stored procedure.
Here, if today is "Wed" then the result will also be "Fri". But if today is "Fri", the result will be "Wed", wrapping around the week. So, please take care of such cases also.
How can I do it? | So you want the next day but the challenge is that the week days form a cycle. You can handle this using `order by` and some cleverness:
```
select top 1 tt.*
from TemporaryTable tt
where flag = 1
order by (7 + tt.id - datepart(dw, getdate())) % 7
```
[Here](http://www.sqlfiddle.com/#!6/d41d8/14549) is a SQL Fiddle.
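The cyclic distance trick can also be sketched outside SQL (hypothetical names; the wrap is shifted so that today's own flag counts as a full week away, matching the Fri -> Wed example in the question):

```python
# Pick the flagged day with the smallest cyclic distance strictly after today.
days = {1: 'Mon', 2: 'Tue', 3: 'Wed', 4: 'Thu', 5: 'Fri', 6: 'Sat', 7: 'Sun'}
flags = {3, 5}  # Wed and Fri carry XFlag = 1

def next_flagged(today_id: int) -> str:
    # ((i - today - 1) % 7) + 1 maps "today" to distance 7, "tomorrow" to 1
    return days[min(flags, key=lambda i: ((i - today_id - 1) % 7) + 1)]

print(next_flagged(4))  # Thu -> Fri
print(next_flagged(5))  # Fri -> Wed (wraps past the weekend)
```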
EDIT:
If datefirst might be set differently, you can do the join on the date name. Just a bit more complicated with the `order by` condition:
```
select top 1 tt.*
from TemporaryTable tt cross join
(select id from TemporaryTable tt where Weekday = left(datepart(dw, getdate()), 3)
) as startid
where flag = 1
order by (tt.id - startid.id + 7) % 7;
```
This assumes, of course, that the language being returned is English. | I've gone quite procedural here, but the parts can be incorporated into a larger query, rather than using local variables, if required:
```
declare @t table (ID int not null,Weekday char(3) not null,XFlag bit not null)
insert into @t(ID,WeekDay,XFlag) values
(1,'Mon',0),(2,'Tue',0),(3,'Wed',1),
(4,'Thu',0),(5,'Fri',1),(6,'Sat',0),
(7,'Sun',0)
declare @Today int
declare @NextDay int
--Set today, in a DATEFIRST safe manner
set @Today = ((DATEPART(weekday,CURRENT_TIMESTAMP) + 7) --Today
- DATEPART(weekday,'20140106') --Known Monday
) % 7 + 1
set @NextDay = COALESCE((select MIN(ID) from @t where XFlag = 1 and ID > @Today),
(select MIN(ID) from @t where XFlag = 1))
select Weekday from @t where ID = @NextDay
```
Hopefully it's (relatively) easy to see how I'm thinking.
Setting `@Today` is probably the most complex part, and that only because I'm trying to write code that can be run by anyone, anywhere, without having to adjust either it or their `DATEFIRST` setting. We subtract the results of two calls to `DATEPART(weekday,...` where we know that one of them is definitely a Monday, and we've also set it up so that we always produce a positive result. We then use `% 7` to ensure that it's in the range 0-6, corresponding to Monday-Sunday, and then add 1 so that the values produced by this expression match the `ID`s in your table. | Algorithm to get next weekday based on today | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |