| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am using mainframe Db2. It's a patient database. My requirement is to fetch different information from 3 tables: TABACC, TABPAY and TABINS. TABACC will always have a row for a patient, but TABPAY and TABINS may or may not have a row for a patient in the system.
Which will be more efficient: a query on TABACC with LEFT OUTER JOINs to TABPAY and TABINS,
or three different queries, one each for TABACC, TABPAY and TABINS?
|
It depends.
If all you're doing is pulling back one row of data from three tables, then it's hard to beat COBOL's random read. There's simply less overhead; SQL isn't magic.
But you mention writing out to a file. So let's assume you're pulling back hundreds, thousands or even millions of rows from those tables and outputting them to a new file.
Instead of working row by row (COBOL's only option, and an approach all too often carried over into SQL), you could work with the entire set in SQL:
```
insert into newtable
(SELECT TB1.COL1,
TB2.COL4,
TB3.COL5,
TB4.COL6
FROM TB1 JOIN TB2 ON TB1.KEY = TB2.KEY
LEFT OUTER JOIN TB3 ON TB1.KEY = TB3.KEY
LEFT OUTER JOIN TB4 ON TB1.KEY = TB4.KEY)
```
Now the SQL solution should be much, much faster.
The key with SQL is to think in sets. If you are doing something row by row (aka using a cursor) you're probably (but not always) doing something wrong.
You can't simply change from COBOL's native I/O to SQL, keep the row-by-row logic, and expect better performance. It will in fact be worse.
Lastly, consider what the output file is being used for. If you're exporting data to an outside system, then you're pretty much done. But if you're writing a work file for another COBOL program to process...well you've probably got an opportunity for more improvement. Look at the entire process, consider what's being done as a whole and how a set based SQL solution could do it.
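Outside a mainframe, the same set-based pattern can be sketched with SQLite from Python; the table and column names below are invented stand-ins for TABACC, TABPAY and TABINS, not the asker's real schema:

```python
import sqlite3

# Minimal sketch: one set-based INSERT ... SELECT with LEFT OUTER JOINs
# replaces a row-by-row read/write loop. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tabacc (patient_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tabpay (patient_id INTEGER, amount REAL);
CREATE TABLE tabins (patient_id INTEGER, insurer TEXT);
CREATE TABLE extract_file (patient_id INTEGER, name TEXT, amount REAL, insurer TEXT);

INSERT INTO tabacc VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO tabpay VALUES (1, 120.0);   -- Bob has no payment row
INSERT INTO tabins VALUES (2, 'Acme');  -- Ann has no insurance row
""")

conn.execute("""
INSERT INTO extract_file
SELECT a.patient_id, a.name, p.amount, i.insurer
FROM tabacc a
LEFT OUTER JOIN tabpay p ON a.patient_id = p.patient_id
LEFT OUTER JOIN tabins i ON a.patient_id = i.patient_id
""")

rows = conn.execute("SELECT * FROM extract_file ORDER BY patient_id").fetchall()
print(rows)  # every TABACC row survives; missing TABPAY/TABINS columns are NULL
```

The point is that the whole extract happens in one statement instead of one fetch per patient.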
|
If the columns on which you are joining are indexed then that is the more efficient method.
|
Left Outer Join Vs Individual Queries DB2
|
[
"",
"sql",
"db2",
"mainframe",
""
] |
The software I am working on has a requirement to get the first and last records of an ordered dataset. Dataset is ordered by a date column.
The data I have:
```
--table "notes":
-- ordered by this
-- |
-- V
note_id date_created attribute1 attribute2 ... -- I want to get
-----------------------------------------------------
596 2014/01/20 ... ... ... -- <- this
468 2014/02/28 ... ... ...
324 2014/03/01 ... ... ...
532 2014/04/08 ... ... ...
465 2014/05/31 ... ... ... -- <- and this
```
Desired output:
```
596 2014/01/20 ... ... ...
465 2014/05/31 ... ... ...
```
|
You can use window functions:
```
select t.*
from (select t.*, row_number() over (order by date_created) as seqnum,
count(*) over () as cnt
from t
) t
where seqnum = 1 or seqnum = cnt;
```
In Oracle 12c, you can also do the following; each branch needs its own inline view, because `ORDER BY` is not allowed directly inside a `UNION ALL` branch:
```
select *
from (select t.* from t order by date_created fetch first 1 rows only)
union all
select *
from (select t.* from t order by date_created desc fetch first 1 rows only);
```
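A runnable sketch of the window-function approach, using SQLite (3.25+) in place of Oracle, with the question's sample data:

```python
import sqlite3

# row_number() numbers rows by date_created; count(*) over () gives the total,
# so seqnum = 1 is the first row and seqnum = cnt is the last.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE notes (note_id INTEGER, date_created TEXT);
INSERT INTO notes VALUES
  (596, '2014-01-20'), (468, '2014-02-28'), (324, '2014-03-01'),
  (532, '2014-04-08'), (465, '2014-05-31');
""")
rows = conn.execute("""
SELECT note_id, date_created
FROM (SELECT t.*,
             ROW_NUMBER() OVER (ORDER BY date_created) AS seqnum,
             COUNT(*) OVER () AS cnt
      FROM notes t)
WHERE seqnum = 1 OR seqnum = cnt
ORDER BY date_created
""").fetchall()
print(rows)  # first and last note by date_created
```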
|
If I got it right, try this:
```
select t1.*
from YOUR_TABLE t1
, (
select min(note_id) keep(dense_rank first order by date_created) min_val
, max(note_id) keep(dense_rank last order by date_created) max_val
from YOUR_TABLE
) t2
where t1.note_id = t2.min_val
or t1.note_id = t2.max_val
```
|
Oracle SQL get the first and last records from an ordered dataset
|
[
"",
"sql",
"oracle",
""
] |
I have date in my select query in below format
```
Thu Sep 04 23:03:30 GMT 2014
```
and in my Table date is in this format
```
13-NOV-13 03.38.22.000000000 AM
```
How can I convert it to the DB format? I tried using
```
TO_CHAR('Thu Sep 04 23:03:30 GMT 2014','DD-MMM-YY HH:MI:SS')
```
but it is not working. Any help will be appreciated.
|
This:
```
select to_char(timestamp '2014-09-04 23:03:30 GMT','DD-MON-YY HH:MI:SS')
from dual
```
converts the timestamp represented by 'Thu Sep 04 23:03:30 GMT 2014'
into the format 'DD-MON-YY HH:MI:SS' (I think you meant MON, not MMM)
But the input literal has to be changed to '2014-09-04 23:03:30 GMT' (as shown)
**Fiddle:** <http://sqlfiddle.com/#!4/d41d8/34741/0>
**Output:**
```
04-SEP-14 11:03:30
```
However, the format you put in your select list only contains whole seconds, whereas the value in your table shows fractional seconds (likely a timestamp column); to match that, use FF rather than SS. The table value also uses a 12-hour clock with an AM/PM marker, i.e. HH12 rather than HH. To match both, you would want a different format from the one you are currently trying to convert to:
```
select to_char(timestamp '2014-09-04 23:03:30 GMT','DD-MON-YY HH12:MI:FF AM')
from dual
```
**Fiddle:** <http://sqlfiddle.com/#!4/d41d8/34744/0>
Note that if the time were in the latter half of the day it would show PM rather than AM, despite the fact that you see AM in the sql (refer to <http://www.techonthenet.com/oracle/functions/to_char.php>)
**Output:**
```
04-SEP-14 11:03:000000000 PM
```
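For comparison, the same conversion done client-side in Python; the parse format assumes the input string always carries a literal `GMT` token, as in the example:

```python
from datetime import datetime

# Parse the Java-style string the asker has, then render it in a
# DD-MON-YY HH12:MI:SS AM shape similar to the table's display format.
src = "Thu Sep 04 23:03:30 GMT 2014"
dt = datetime.strptime(src, "%a %b %d %H:%M:%S GMT %Y")
out = dt.strftime("%d-%b-%y %I:%M:%S %p").upper()
print(out)  # 04-SEP-14 11:03:30 PM
```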
|
Try This:
```
LEFT(DATENAME(dw,date),3) +' '+ CAST(date AS VARCHAR(20))
```
|
Date in SQL Select Query
|
[
"",
"sql",
""
] |
I have a SQL variable:
```
SET @TSQL = 'this is test string [this text may vary] other text'
```
Now I want to replace the sub-string "[this text may vary]" with my own different text.
Can anybody help me? Remember, the sub-string which I want to replace is not static; it is dynamic and may vary.
This looks similar to my problem, but it only handles text before a specific character. I need it for both before and after a character.
[How do I replace a substring of a string before a specific character?](https://stackoverflow.com/questions/12371701/how-do-i-replace-a-substring-of-a-string-before-a-specific-character).
|
Take the Substring before '[' add your replacement and the Substring right of ']'
```
declare @TSQL varchar(100)
declare @R varchar(100)
SET @TSQL = 'this is test string [this text may vary] other text'
SET @R = 'MySubstitute'
Select Left(@TSQL,Charindex('[',@TSQL)-1) + @R + RIGHT(@TSQL,Charindex(']',REVERSE(@TSQL))-1)
```
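The same slicing logic, transcribed into Python for anyone who wants to verify it quickly; like the T-SQL, it drops the brackets themselves:

```python
# Everything before '[', then the replacement, then everything after the
# last ']' (mirroring LEFT/CHARINDEX and RIGHT/REVERSE in the T-SQL above).
s = 'this is test string [this text may vary] other text'
replacement = 'MySubstitute'
result = s[:s.index('[')] + replacement + s[s.rindex(']') + 1:]
print(result)  # this is test string MySubstitute other text
```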
|
Is it just as simple as this?
```
SET @TSQL = 'this is test string ' + @NewText + ' other text'
```
Or, if the expected-preceding-text is not the only text preceding, maybe:
```
SET @TSQL = 'this is test string [this text may vary] other text'
DECLARE @TSQL_PrefixEnding INT = PATINDEX('%this is test string [[]%[]] other text%', @TSQL) + LEN('this is test string [') - 1
DECLARE @TSQL_SuffixStart INT = CHARINDEX('] other text', @TSQL, @TSQL_PrefixEnding)
SET @TSQL = LEFT(@TSQL, @TSQL_PrefixEnding ) + @NewText + SUBSTRING(@TSQL, @TSQL_SuffixStart, LEN(@TSQL) - @TSQL_SuffixStart + 1)
```
(Note: I'd have to test it, to see if the "+1" is necessary or not... but it's just a common kind of adjustment I've seen, in string-length calculation.)
---
Notes re' answer & edits:
-- My answer was written as if `'this is test string '`, etc. were strings to *recognize*.
-- Patindex (replacing Charindex) means you ONLY recognize the prefix-string, when the suffix-string is also present.
-- I added `[` to the string before, and `]` to the string after, (wherever they occur) based on your later comment, that had it sound like the brackets are actually part of the string to recognize.
-- The `[` is itself enclosed, in `[` and `]`, to "escape" it -- so it will be interpreted literally.
|
How to replace a substring before and after a specific character in SQL Server?
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I'm trying to convert `nvarchar(16)` to `bigint` but receive an error as follows:
```
"Arithmetic overflow error converting expression to data type int."
```
How can I fix it? This is my stored procedure:
```
Create Procedure [dbo].[GenerateAccountNo]
As
Begin
declare @tmp bigint
set @tmp =(CONVERT(bigint,'1111111111111111'))
return @tmp
End
```
And call procedure
```
DECLARE @returnvalue bigint
EXEC @returnvalue = GenerateAccountNo
select @returnvalue
```
|
The return value from a stored proc is INT, not BIGINT, so 1111111111111111 overflows it. Stored procs are designed for data selection. To return a scalar value that is not related to a table, you should use a FUNCTION in this case instead.
|
From [MSDN](http://msdn.microsoft.com/en-us/library/ms187926.aspx)
> Return a status value to a calling procedure or batch to indicate success or failure (and the reason for failure).
You are returning the status(`INT`) from a Stored Procedure. This would be the execution result code (success or failure). If you want to return results from a Stored Procedure just use `SELECT` within the SP.
```
CREATE PROCEDURE [dbo].[GenerateAccountNo]
As
BEGIN
SELECT CONVERT(BIGINT,'1111111111111111')
END
```
|
Converting nvarchar to bigint in sql server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table name video
table video
```
id name upload_date
1 one 1408336348
2 two 1409884215
```
Now I want to select all the data and also calculate whether each video was uploaded within the last 2 days, returning yes or no.
The result should look like
```
id name upload_date new
1 one 1408336348 no
2 two 1409884215 yes
```
I am using this query, but it does not work:
```
SELECT v.*,( if(from_unixtime(v.upload_date) < NOW() - INTERVAL 2 DAY) then 'no' else 'yes')
AS new FROM `video` as v
```
Query return error
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') then 'no' else 'yes')
AS new FROM `video` as v
LIMIT 0, 25' at line 1
```
|
Try using `CASE` (note the condition: a date older than 2 days means the video is *not* new):
```
SELECT v.*
      ,case
          when (from_unixtime(v.upload_date) < NOW() - INTERVAL 2 DAY) then 'no'
          else 'yes'
       end AS new
FROM `video` as v
```
|
That's not the correct syntax for an [`IF`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_if) in MySQL. Try it like this:
```
SELECT v.*,
       IF(from_unixtime(v.upload_date) < NOW() - INTERVAL 2 DAY, 'no', 'yes') AS new
FROM `video` as v
```
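As a portable sketch: SQLite has no `IF()` function, so the equivalent uses `CASE WHEN`; a fixed cutoff timestamp stands in for `NOW() - INTERVAL 2 DAY` to keep the example deterministic:

```python
import sqlite3
import datetime

# Pretend "now" is 2014-09-05 00:00 UTC so the result is reproducible.
now = datetime.datetime(2014, 9, 5, tzinfo=datetime.timezone.utc)
cutoff = int((now - datetime.timedelta(days=2)).timestamp())

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE video (id INTEGER, name TEXT, upload_date INTEGER);
INSERT INTO video VALUES (1, 'one', 1408336348), (2, 'two', 1409884215);
""")
rows = conn.execute("""
SELECT id, name, upload_date,
       CASE WHEN upload_date < :cutoff THEN 'no' ELSE 'yes' END AS new
FROM video
""", {"cutoff": cutoff}).fetchall()
print(rows)  # video 1 is older than 2 days ('no'), video 2 is recent ('yes')
```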
|
Mysql Database query with if condition
|
[
"",
"mysql",
"sql",
""
] |
I have a query like so
```
Select * from tableA A
LEFT JOIN
mydb.tableB B
ON A.pk = B.tableA_pk
LEFT JOIN
mydb.tableC C
ON B.tableC_pk = C.pk
WHERE C.PK IN ('325','305', '322')
ORDER BY A.pk desc;
```
This will return a result like so with potentially multiple rows for an entry where applicable
```
4, xxx, yyy, 325, 325, zzz <<<< most recent for this entry ( 'unique' key is tableC.pk, 325)
3, aaa, bbb, 325, 325, eee <<<< next most recent
3, ccc, ddd, 322, 322, fff
2, eee, fff, 305, 305, rrr
2, ggg, hhhh,322, 322, ttt
1, iii, jjj, 325, 325, uuu <<< oldest
```
Ideally i want the result to look like below, that is for each matching entry in the LIST ..IN ('325', '322') it should only return the most recent entry like below
```
4, xxx, yyy, 325, 325, zzz <<<< most recent
3, ccc, ddd, 322, 322, fff
2, eee, fff, 305, 305, rrr
```
To explain further..
```
Select * from tableA A
LEFT JOIN
mydb.tableB B
ON A.pk = B.tableA_pk
LEFT JOIN
mydb.tableC C
ON B.tableC_pk = C.pk
WHERE C.PK IN ('325')
ORDER BY A.pk desc;
```
I would get this
```
4, xxx, yyy, 325, 325, zzz <<<< most recent for this entry
3, aaa, bbb, 325, 325, eee <<<< next most recent
1, iii, jjj, 325, 325, uuu <<< oldest
```
But I need this:
```
4, xxx, yyy, 325, 325, zzz <<<< most recent for this entry ( 'unique' key is tableC.pk)
```
I have tried LIMIT 1 (I'm using MySQL) like below but this limits to 1 for the total request not 1 per searched entry in my query list.
```
Select * from tableA A
LEFT JOIN
mydb.tableB B
ON A.pk = B.tableA_pk
LEFT JOIN
mydb.tableC C
ON B.tableC_pk = C.pk
WHERE C.pk IN ('325','305', '322')
ORDER BY A.pk desc
LIMIT 1;
```
|
After reading the suggestions and other answers I achieved it like so. Many thanks for all the help!
```
SELECT
DISTINCT C.PK
,B.PK
,B.tableC_pk
,B.tableA_pk
,A.pk
FROM
tableC C
INNER JOIN
tableB B
ON C.PK = B.tableC_pk
INNER JOIN
tableA A
ON A.PK = B.tableA_pk
INNER JOIN ( SELECT tableC_pk,MAX(PK) AS PK FROM tableB
WHERE tableC_pk IN ('325','322','111')
GROUP BY 1) D
ON C.PK = B.tableC_pk
AND B.PK = D.PK
```
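The core of this answer, joining back to a `MAX(pk)` aggregate per group, can be tried out in SQLite; the table below is a simplified, made-up stand-in for tableB:

```python
import sqlite3

# Groupwise max: the derived table picks the highest pk per tableC_pk,
# and the join keeps only that one row per group.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableB (pk INTEGER, tableC_pk TEXT, payload TEXT);
INSERT INTO tableB VALUES
  (1, '325', 'uuu'), (3, '325', 'eee'), (4, '325', 'zzz'),
  (2, '305', 'rrr'), (3, '322', 'fff');
""")
rows = conn.execute("""
SELECT b.*
FROM tableB b
JOIN (SELECT tableC_pk, MAX(pk) AS pk
      FROM tableB
      WHERE tableC_pk IN ('325', '305', '322')
      GROUP BY tableC_pk) d
  ON b.tableC_pk = d.tableC_pk AND b.pk = d.pk
ORDER BY b.pk DESC
""").fetchall()
print(rows)  # one row per key: the most recent entry for each
```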
|
You can use a user variable, incrementing it by 1 per row, like so:
```
Select A.*, @i := @i + 1 AS rownum
from tableA A
LEFT JOIN
   mydb.tableB B
ON A.pk = B.tableA_pk
LEFT JOIN
   mydb.tableC C
ON B.tableC_pk = C.pk
CROSS JOIN (SELECT @i := 0) r
WHERE C.PK IN ('325')
ORDER BY A.pk desc;
```
**Note:** I have not tested above query.
|
SQL Select/Join, LIMIT result to 1st entry per element in query list
|
[
"",
"mysql",
"sql",
"database",
""
] |
Hi, I am trying to make a simple database with 2 tables: the first for user information and the second for their uploads. It's a project for faculty, so I have some assignments, and one of them is to use a foreign key.
```
DROP TABLE IF EXISTS `korisnici`;
CREATE TABLE `korisnici` (
`UserID` INT(11) NOT NULL AUTO_INCREMENT,
`username` VARCHAR(12) NOT NULL,
`password` VARCHAR(32) NOT NULL,
`email` VARCHAR(32) NOT NULL,
`telefon` VARCHAR(16) NOT NULL,
PRIMARY KEY (`UserID`)
);
DROP TABLE IF EXISTS `slike`;
CREATE TABLE `slike` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`name` VARCHAR(200) NOT NULL,
`size` INTEGER(11) NOT NULL,
`type` VARCHAR(200) NULL,
`file_path` VARCHAR(200) NOT NULL,
`username` VARCHAR(12) NOT NULL,
`naslov` VARCHAR(32) NOT NULL,
`adresa` VARCHAR(80) NOT NULL,
`opis` VARCHAR(1200) NOT NULL,
`datum` DATE NOT NULL,
`UserID` INTEGER(11) NOT NULL,
PRIMARY KEY (`id`)
);
ALTER TABLE `slike` ADD FOREIGN KEY (UserID) REFERENCES `korisnici` (`UserID`);
-- ALTER TABLE `korisnici` ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
-- ALTER TABLE `slike` ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
-- ---
-- Test Data
-- ---
INSERT INTO `korisnici` (`UserID`,`username`,`password`,`email`,`telefon`) VALUES('1','dd','d','d','d');
INSERT INTO `slike` (`id`,`name`,`size`,`type`,`file_path`,`username`,`naslov`,`adresa`,`opis`,`datum`,`UserID`) VALUES('2','a','122','png','ksks/sss','dsss','aaaa','sss','ssss','2014/09/04','2');
```
ERROR:
```
ERROR 1217 (23000) at line 14: Cannot delete or update a parent row: a foreign key constraint fails
```
Anybody know where the problem is and how I could fix it? It also doesn't work on SQL Fiddle when I insert some values for testing.
Thanks :)
|
I think the error message is actually misleading. What I see from your code is that it is the insert into SLIKE that fails because UserID=2 does not match the UserID of the previous insert into KORISNICI.
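A minimal reproduction of that failure mode in SQLite (with foreign-key enforcement switched on, since SQLite leaves it off by default):

```python
import sqlite3

# The child insert references UserID = 2 while the only parent row has
# UserID = 1, so the foreign-key check rejects it.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE korisnici (UserID INTEGER PRIMARY KEY, username TEXT);
CREATE TABLE slike (id INTEGER PRIMARY KEY, UserID INTEGER NOT NULL,
                    FOREIGN KEY (UserID) REFERENCES korisnici (UserID));
""")
conn.execute("INSERT INTO korisnici VALUES (1, 'dd')")
try:
    conn.execute("INSERT INTO slike VALUES (2, 2)")  # no parent with UserID 2
    failed = False
except sqlite3.IntegrityError as e:
    failed = True
    print("insert rejected:", e)
```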
|
You have child records, and since the **ON DELETE RESTRICT** and **ON UPDATE RESTRICT** constraints apply (they are the *default*), any change you make to a parent row, i.e. a row in the *korisnici* table with child rows in the *slike* table, will be restricted by ***MySQL***.
For deletion you can do one of the following:
- Either change the **ON DELETE** constraint to **CASCADE**, or
- Delete the child records first, then the parent:
```
DELETE FROM `slike` WHERE `UserID`=`<UserId you want to delete>`;
DELETE FROM `korisnici` WHERE `UserID`=`<UserId you want to delete>`;
```
And for updating:
- Either change the **ON UPDATE** constraint to **CASCADE**, or
- Write an extra database-side program (like PL/SQL) that backs up the child records, updates the parent record, and then re-inserts the child records to match the change made to the parent.
Anyway, the better option is always to specify the appropriate foreign constraints when establishing the foreign key.
To get bit more info you can refer [this link](https://stackoverflow.com/questions/13444859/sql-on-delete-cascade-which-way-does-the-deletion-occur)
|
MySQL: Cannot delete or update a parent row: a foreign key constraint fails
|
[
"",
"mysql",
"sql",
"foreign-keys",
"constraints",
"parent",
""
] |
I'm new to SQL Server, so I think I need a little help here... Since I've rewritten the code below many times, there might be other errors as well; sorry about that. Removed the 'GO' for now.
Problem:
I don't know how to create a foreign key from "model" in the table `Cars` to the table `Model`.
My broken code:
```
create table [Cars]
(
[id] [int] not null primary key identity,
[name] [nvarchar](50) not null,
[weight] [int] not null,
[length] [int] not null,
[model] [nvarchar](50) not null,
[color] [nvarchar](50) not null
);
create table [Colors]
(
[id] [int] not null primary key identity,
[name] [nvarchar](50) not null
);
create table [Model]
(
[id] [int] primary key not null identity,
[name] [nvarchar](50) not null
);
insert into Cars(name, weight, length, model, color)
values('Ferrari', '1500', '4000', '360', 'Red');
insert into Colors values('Red');
insert into Colors values('Blue');
insert into Colors values('Yellow');
insert into Model values('Volvo');
insert into Model values('Fiat');
insert into Model values('Saab');
alter table Model
add foreign key (id)
references Cars(id)
alter table Colors
add foreign key (id)
references Cars(id)
```
|
This code managed to solve the problem I think, several of you people here did contribute, thank you.
```
alter table Cars
add constraint fk_colors foreign key (colorid)
references colors([id])
```
|
I advise against naming all your primary keys the same (`ID`), as this runs into problems when creating foreign keys.
Your table `Cars` has `model` as `nvarchar`, which is unnecessary if you are using surrogate keys. Change the data type to `int`, the name to `model_id` and add a constraint to the table:
```
ALTER TABLE Cars
ADD CONSTRAINT FK$Model$id FOREIGN KEY (model_id)
REFERENCES Model(id)
```
As others have mentioned, your use of a foreign key in the `Colors` table is also incorrect.
|
Problems with foreign keys
|
[
"",
"sql",
"sql-server",
""
] |
I have a function that should change the notation of numbers from 12345.67€ to 12345,67€
```
select
replace(
to_char(round(
123456.789
,2),'999G999G999G990D'||substr('0000000000',1,2),'nls_numeric_characters=''.,'''),'.',','
  ) || ' €'
from dual;
```
The current result is:
123,456,79 €
The result I want to get:
123456,79 €
But I don't know how. My Oracle returns values in the format 123456.79, which is the reason for my calculation. Does anybody know what to do?
|
`ALTER SESSION SET NLS_TERRITORY='SWEDEN'`, for example. Or Estonia.
Numeric formats are derived from the setting of the NLS\_TERRITORY parameter, but they can be overridden by the `NLS_NUMERIC_CHARACTERS` parameter.
|
You should either set the NLS parameters, if they are not already correct, in the session or in the query e.g.
```
select to_char(123456.789
,'9999990D00L'
              ,'nls_numeric_characters='',.'' nls_currency=''€''')
from dual
```
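For comparison, the same reformatting done client-side in Python: round to two decimals, use no grouping separator, and a comma as the decimal mark. This is a sketch of the desired output, not a substitute for proper NLS settings:

```python
# Format 123456.789 as "123456,79 €": two decimals, comma decimal mark,
# no thousands grouping, euro sign appended.
value = 123456.789
formatted = f"{value:.2f}".replace(".", ",") + " \u20ac"
print(formatted)  # 123456,79 €
```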
|
Change number output [.] to [,] in pl/sql
|
[
"",
"sql",
"oracle",
"plsql",
"oracle-apex",
""
] |
Hi, I have a table in which every row has data like:
```
0150566115,"HEALTH 401K","IC,ON","ICON HEALTH 401K",,,1,08/21/2014
```
What I want is to remove every comma (,) that is enclosed between double quotes " ", and then split the rest of the string on commas.
I don't want to do it by checking every single character and setting flags for the start and end of double quotes.
Can I implement some sort of regex?
Is there a simple way?
What I have tried so far just splits the string on commas, but it only works for a single block of double quotes.
```
Declare @Query nvarchar(max)
Set @Query= 'Item1,Item2,"Item,Demo,3",New'
Declare @start int, @len int
SELECT @start = PATINDEX('%"%"%', @Query)+1
print @start
select @len=CHARINDEX('"', SUBSTRING(@Query, @start, LEN(@Query)))-1
select
SUBSTRING(@Query, 1, @start - 2) +
REPLACE((SUBSTRING(@Query, @start, @len)), ',', '') +
SUBSTRING(@Query, @start + @len + 1, LEN(@Query))
```
|
Try the following:
```
DECLARE @str nvarchar(max) = '0150566115,"HEALTH 401K","IC,ON","ICON HEALTH 401K",,,1,08/21/2014'
SELECT
SUBSTRING(@str, 1, CHARINDEX('"', @str, 1) - 1)
+ REPLACE(REPLACE(REPLACE(REPLACE(SUBSTRING(@str, CHARINDEX('"', @str, 1), LEN(@str) - CHARINDEX('"', REVERSE(@str), 1) - CHARINDEX('"', @str, 1) + 2), ',', ' ' + CHAR(7) + ' '), CHAR(7) + ' ', ''), '" "', ','), '"', '')
+ REVERSE(SUBSTRING(REVERSE(@str), 1, CHARINDEX('"', REVERSE(@str), 1) - 1))
--Explaination
--Extracting the portion of the string before the first occurrence of '"'.
DECLARE @part1 nvarchar(max) = SUBSTRING(@str, 1, CHARINDEX('"', @str, 1) - 1)
SELECT
@part1
--String between first and last occurrence of '"' and removing unwanted characters.
DECLARE @part2 nvarchar(max) = SUBSTRING(@str, CHARINDEX('"', @str, 1), LEN(@str) - CHARINDEX('"', REVERSE(@str), 1) - CHARINDEX('"', @str, 1) + 2)
SET @part2 = REPLACE(REPLACE(REPLACE(REPLACE(@part2, ',', ' ' + CHAR(7) + ' '), CHAR(7) + ' ', ''), '" "', ','), '"', '')
SELECT
@part2
--String after the last occurrence of '"'
DECLARE @part3 nvarchar(max) = REVERSE(SUBSTRING(REVERSE(@str), 1, CHARINDEX('"', REVERSE(@str), 1) - 1))
SELECT
@part3
--Concatenation
SELECT
@part1 + @part2 + @part3
```
HTH!!!
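If the value can be processed outside the database, a CSV parser already understands quoted fields, so the whole REPLACE chain collapses to a couple of lines; here is a Python sketch using the standard `csv` module:

```python
import csv

# csv.reader splits on commas outside quotes; a second pass removes the
# commas that were protected by the quotes, as the question asks.
line = '0150566115,"HEALTH 401K","IC,ON","ICON HEALTH 401K",,,1,08/21/2014'
fields = next(csv.reader([line]))
cleaned = [f.replace(",", "") for f in fields]
print(cleaned)  # "IC,ON" becomes "ICON"
```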
|
The format of your data appears to be a delimited CSV format. This is a format frequently used by Excel, and it is unfortunate that SQL Server doesn't seem to have a simple way to read it in. When I am faced with such files, I usually do the following:
* Load them into Excel
* Save them in a *tab* delimited format
* Import them into SQL Server
Fortunately, when I've had to deal with such files, they have been on the small side and fit into Excel.
You seem to already have the data in the database. With a little research, I stumbled across [this](https://stackoverflow.com/questions/2866168/sql-split-function-that-handles-string-with-delimeter-appearing-between-text-qua) reference to split functions that take a string delimiter as well as a separator.
|
Replace comma between quotes with space
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
"sql-server-2012",
""
] |
Trying to set up a query which returns the earliest date in a year across multiple years.
Ex:
```
06-apr-1990
07-may-1991
03-apr-1992
07-jun-1993
```
the earliest would be `03-apr-1992`
any help is appreciated (-:
I'm using Oracle SQL Developer
the dates are in Date format
|
If you are using SQL Server, try :
```
SELECT TOP(1) date FROM table ORDER BY Month(date), Day(date)
```
For MySQL this should do the trick :
```
SELECT date FROM table ORDER BY Month(date), Day(date) LIMIT 1;
```
For Oracle (ROWNUM is assigned before the sort, so the ORDER BY has to go in a subquery, and month/day come from EXTRACT):
```
SELECT * FROM (SELECT date FROM table ORDER BY EXTRACT(MONTH FROM date), EXTRACT(DAY FROM date)) WHERE ROWNUM <= 1;
```
|
Try the following query:
`SELECT column_name,column_name
FROM table_name
ORDER BY column_name,column_name ASC|DESC;`
source [w3school.](http://www.w3schools.com/sql/sql_orderby.asp)
|
Earliest Date in a year in multiple years
|
[
"",
"sql",
"oracle",
""
] |
I have following table
```
+-----+--------+-----------+-----------+-------+
| id | job_id | source_id | target_id | value |
+-----+--------+-----------+-----------+-------+
| 204 | 5283 | 247 | 228 | 1201 |
| 349 | 4006 | 247 | 228 | 100 |
| 350 | 4007 | 247 | 228 | 500 |
| 351 | 4008 | 247 | 228 | 1000 |
| 352 | 4009 | 1 | 100 | 100 |
| 353 | 4010 | 1 | 100 | 500 |
| 354 | 4011 | 1 | 100 | 50 |
+-----+--------+-----------+-----------+-------+
```
I want to create a diff of the column **value**, grouped by source\_id and target\_id. The older row (smaller id) should be compared with the newer one.
I have searched a little bit and found `coalesce`. I have written a small query and it works "in general", but not as expected:
```
SELECT
c.id, c.source_id, c.target_id, c.value, COALESCE(c1.value - c.value, -1) AS diff
FROM
changes c LEFT JOIN changes c1 ON (c1.source_id = c.source_id AND c1.target_id = c.target_id)
GROUP BY c.source_id, c.target_id, c.job_id
ORDER BY c.id
```
I got following result:
```
+-----+-----------+-----------+-------+------+
| id | source_id | target_id | value | diff |
+-----+-----------+-----------+-------+------+
| 204 | 247 | 228 | 1201 | 0 |
| 349 | 247 | 228 | 100 | 1101 |
| 350 | 247 | 228 | 500 | 701 |
| 351 | 247 | 228 | 1000 | 201 |
| 352 | 1 | 100 | 100 | 0 |
| 353 | 1 | 100 | 500 | -400 |
| 354 | 1 | 100 | 50 | 50 |
+-----+-----------+-----------+-------+------+
```
You can see the diff work for id 349 and 353, I want this for all rows like the following expected result:
```
+-----+-----------+-----------+-------+------+
| id | source_id | target_id | value | diff |
+-----+-----------+-----------+-------+------+
| 204 | 247 | 228 | 1201 | 1201 |
| 349 | 247 | 228 | 100 | 1101 |
| 350 | 247 | 228 | 500 | -400 |
| 351 | 247 | 228 | 1000 | -500 |
| 352 | 1 | 100 | 100 | 100 |
| 353 | 1 | 100 | 500 | -400 |
| 354 | 1 | 100 | 50 | 450 |
+-----+-----------+-----------+-------+------+
```
It would be no problem if the diff result is inverted.
What did I miss?
Thanks for any hints.
|
If you use user-defined variables you don't need to join the table to itself; just do a row-by-row comparison like so:
```
SELECT
id,
job_id,
target_id,
if(@a = source_id, @b - value, value) as diff,
@b := value as value,
@a := source_id as source_id
FROM changes
CROSS JOIN (SELECT @a:=0, @b:=0)t
```
[DEMO](http://sqlfiddle.com/#!2/4c17a/2)
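On engines with window functions (MySQL 8+, or SQLite 3.25+ as used in this sketch), `LAG()` does the previous-row lookup directly, with no self-join or user variables. Note the sign of the diff comes out inverted relative to the question's expected table, which the asker said is acceptable:

```python
import sqlite3

# LAG(value, 1, 0) reads the previous row's value within each
# (source_id, target_id) group, ordered by id; the first row diffs against 0.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE changes (id INTEGER, source_id INTEGER, target_id INTEGER, value INTEGER);
INSERT INTO changes VALUES
  (204, 247, 228, 1201), (349, 247, 228, 100), (350, 247, 228, 500),
  (351, 247, 228, 1000), (352, 1, 100, 100), (353, 1, 100, 500),
  (354, 1, 100, 50);
""")
rows = conn.execute("""
SELECT id, value,
       value - LAG(value, 1, 0) OVER
           (PARTITION BY source_id, target_id ORDER BY id) AS diff
FROM changes
ORDER BY id
""").fetchall()
print(rows)
```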
|
I suspect that you're looking for something like this - although the COALESCE bit seems misleading to me...
```
SELECT a.*, COALESCE(b.value-a.value,a.value) diff
FROM
( SELECT x.* , COUNT(*) rank FROM changes x JOIN changes y ON y.id <= x.id GROUP BY x.id ) a
LEFT
JOIN
( SELECT x.* , COUNT(*) rank FROM changes x JOIN changes y ON y.id <= x.id GROUP BY x.id ) b
ON b.source_id = a.source_id
AND b.rank = a.rank - 1;
```
|
Calculate difference between one column over multiple rows using coalesce and join
|
[
"",
"mysql",
"sql",
""
] |
This is probably pretty simple and dumb to ask, but I'm just not getting there right now. I have a DB for cocktails and want to check which cocktails I can make with the available ingredients:
> Get the names of all cocktails where **every** ingredient is in stock
These are my tables:
```
create table cocktails
(
name TEXT PRIMARY KEY
)
create table ingredients
(
name TEXT PRIMARY KEY
)
create table cocktail_ingredients
(
cocktail_name TEXT ,
ingredient_name TEXT ,
amount INTEGER ,
FOREIGN KEY ( cocktail_name ) REFERENCES cocktails( name ) ,
FOREIGN KEY ( ingredient_name ) REFERENCES ingredients( name )
)
create table ingredients_in_stock
(
ingredient_name TEXT ,
FOREIGN KEY ( ingredient_name ) REFERENCES ingredients ( name )
)
```
And this is my code so far:
```
SELECT ci.cocktail_name
FROM cocktail_ingredients ci
WHERE ci.ingredient_name IN ( SELECT iis.ingredient_name
FROM ingredients_in_stock iis
)
GROUP BY ci.cocktail_name
HAVING COUNT(*) = ( SELECT COUNT(*)
FROM ingredients_in_stock
)
;
```
|
A correlated subquery should work:
```
select cocktail_name as all_ingredients_in_stock
from cocktail_ingredients ci
inner join ingredients_in_stock iis
on ci.ingredient_name = iis.ingredient_name
group by cocktail_name
having count(*) =
(select count(*)
from cocktail_ingredients
where cocktail_name = ci.cocktail_name
)
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/b895f/1)
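The same relational-division idea, runnable in SQLite with a tiny made-up stock list (mint is missing, so the Mojito does not qualify):

```python
import sqlite3

# A cocktail qualifies when the count of its in-stock ingredients equals
# its total ingredient count (the correlated subquery in HAVING).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cocktail_ingredients (cocktail_name TEXT, ingredient_name TEXT);
CREATE TABLE ingredients_in_stock (ingredient_name TEXT);
INSERT INTO cocktail_ingredients VALUES
  ('Mojito', 'rum'), ('Mojito', 'mint'),
  ('Screwdriver', 'vodka'), ('Screwdriver', 'orange juice');
INSERT INTO ingredients_in_stock VALUES ('rum'), ('vodka'), ('orange juice');
""")
rows = conn.execute("""
SELECT ci.cocktail_name
FROM cocktail_ingredients ci
JOIN ingredients_in_stock iis ON ci.ingredient_name = iis.ingredient_name
GROUP BY ci.cocktail_name
HAVING COUNT(*) = (SELECT COUNT(*)
                   FROM cocktail_ingredients
                   WHERE cocktail_name = ci.cocktail_name)
""").fetchall()
print(rows)  # only the fully stocked cocktail survives
```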
|
You can use a LEFT JOIN and a NOT IN clause for this. Something like this:
```
SELECT name FROM cocktails WHERE Name NOT IN(
SELECT DISTINCT ci.cocktail_name FROM cocktail_ingredients ci LEFT JOIN ingredients_in_stock istk
ON ci.ingredient_name=istk.ingredient_name WHERE istk.ingredient_name IS NULL)
```
This query inverts the logic: list the cocktails where none of its ingredients are missing from the ingredients\_in\_stock table. Hope the idea helps you.
|
SELECT Statement for cocktail db
|
[
"",
"sql",
""
] |
I want to interrogate a result set for a particular value:
```
DECLARE @validIds;
SET @validIds = (SELECT id FROM client WHERE valid = 1);
IIF(@id IN @validIds, 1, 0)
```
This obviously isn't working, what's the best way to do this?
If I have the following "client" table:
```
id | valid
1 | 0
2 | 1
3 | 1
```
When @id is 1, I would expect the IIF to resolve to 0.
When @id is 2 or 3, I would expect the IIF to resolve to 1.
|
## Test Data
```
DECLARE @validIds table (id int);
INSERT INTO @validIds VALUES (1),(2),(3)
DECLARE @Client table (id int);
INSERT INTO @Client VALUES (1),(2),(3),(4),(5)
```
## Query
```
SELECT c.id AS Client_ID
,CASE WHEN EXISTS (SELECT 1 FROM @validIds WHERE id = c.id)
THEN 1 ELSE 0 END AS [Valid_ID]
FROM @Client c
```
## Result
```
βββββββββββββ¦βββββββββββ
β Client_ID β Valid_ID β
β ββββββββββββ¬βββββββββββ£
β 1 β 1 β
β 2 β 1 β
β 3 β 1 β
β 4 β 0 β
β 5 β 0 β
βββββββββββββ©βββββββββββ
```
|
Create a table variable to hold your valid ids. Then return 1 if your @id variable is in the list of valid ids - if not return 0.
```
DECLARE @validIds TABLE (id INT);
INSERT INTO @validIds (id) SELECT id FROM clients WHERE VALID = 1;
SELECT ISNULL((SELECT 1 from @validIds where @id = id), 0);
```
|
Check for value in table variable in IIF
|
[
"",
"sql",
"sql-server",
""
] |
I need help with query statement to add selected records.
Example of records in table **"tblA"**:
```
Name, Year, Level, Points, Item
John, 2012, 1, 2, bag
John, 2012, 1, 1, book
John, 2013, 1, 1, pen
John, 2013, 1, 1, pencil
John, 2014, 2, 3, hat
John, 2014, 2, 1, ruler
Kent, 2014, 2, 2, bag
Mic, 2014, 2, 2, bag
Dan, 2014, 2, 2, bag
Tim, 2014, 2, 2, bag
```
Is it possible to write a one-statement query to sum the points for John, with the condition that if the same Level appears in more than one year, only the points for the latest year are counted?
**Eg:** in the above case, only the following records should have their points added (the 2012 records should be ignored because a later year, 2013, also has Level 1). Thus John should have 6 points.
```
John, 2013, 1, 1, pen
John, 2013, 1, 1, pencil
John, 2014, 2, 3, hat
John, 2014, 2, 1, ruler
```
Thanks in advance.
|
To get all names and their points (following your logic), you could try:
```
SELECT tblA.Name, SUM(points) AS totalPoints FROM tblA
JOIN (
SELECT MAX(year) year, Name, Level
FROM tblA
GROUP BY Name, Level) tblA2
ON tblA.Name = tblA2.Name
AND tblA.year = tblA2.year
AND tblA.Level = tblA2.Level
GROUP BY Name
```
If you're only interested on 'John' points, then:
```
SELECT tblA.Name, SUM(points) AS totalPoints FROM tblA
JOIN (
SELECT MAX(year) year, Name, Level
FROM tblA
WHERE Name = 'John'
GROUP BY Name, Level) tblA2
ON tblA.Name = tblA2.Name
AND tblA.year = tblA2.year
AND tblA.Level = tblA2.Level
GROUP BY Name
```
SQL Fiddle demo: <http://sqlfiddle.com/#!2/75f478/9>
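Running that logic on the question's own data in SQLite confirms John ends up with 6 points (1 + 1 from 2013, plus 3 + 1 from 2014):

```python
import sqlite3

# Join back to the latest year per (Name, Level), then sum the points of
# the rows that survive; 2012 drops out because 2013 also has Level 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblA (Name TEXT, Year INTEGER, Level INTEGER, Points INTEGER, Item TEXT);
INSERT INTO tblA VALUES
  ('John', 2012, 1, 2, 'bag'),  ('John', 2012, 1, 1, 'book'),
  ('John', 2013, 1, 1, 'pen'),  ('John', 2013, 1, 1, 'pencil'),
  ('John', 2014, 2, 3, 'hat'),  ('John', 2014, 2, 1, 'ruler');
""")
rows = conn.execute("""
SELECT t.Name, SUM(t.Points) AS totalPoints
FROM tblA t
JOIN (SELECT Name, Level, MAX(Year) AS Year
      FROM tblA
      WHERE Name = 'John'
      GROUP BY Name, Level) m
  ON t.Name = m.Name AND t.Year = m.Year AND t.Level = m.Level
GROUP BY t.Name
""").fetchall()
print(rows)  # [('John', 6)]
```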
|
Maybe also try it like this:
```
select tblA.name, tblA.year, sum(tblA.points)
from tblA
inner join (
select name, level, max(year) as yr
from tblA
group by name, level ) as yearTbl
on tblA.name=yearTbl.name and tblA.year=yearTbl.yr and tblA.level=yearTbl.level
group by tblA.name, tblA.year
```
|
MYSQL: sum of latest record
|
[
"",
"mysql",
"sql",
""
] |
SQL Fiddle
<http://sqlfiddle.com/#!2/1c5fc3/1>
I am trying to create a simple messaging system, but I am having trouble getting the desired results from the SQL queries.
Here are the tables I have; I am trying to get the INBOX data.
INBOX definition for this problem:
The inbox should be a threaded display, like Google Mail, but showing only the last message in each thread, together with the user who originally created the thread and the last user who replied. If the last user is the same user that created the thread and there are no replies in between, the message doesn't belong in the inbox.
TABLES:
```
THREAD
id_thread
id_last_message
id_user_inital
id_user_last
THREAD_USERS
id
id_thread
id_user
THREAD_MESSAGES
id_thread_messages
id_user_sender
id_thread
datetime
subject
body
MESSAGE_STATUS
id_messsage_status
id_thread_messages
id_user
status
datetime
```
My logic is:
once a message has been sent
```
THREAD
id_thread id_last_message id_user_inital id_user_last
1 1 1 1
THREAD_USERS
id id_thread id_user
1 1 1
2 1 2
THREAD_MESSAGES
id_thread_messages id_user_sender id_thread datetime subject body
1 1 1 07.09.2014 16:02 'title' 'text message'
MESSAGE_STATUS
id_message_status id_thread_messages id_user status datetime
1 1 1 4 07.09.2014 16:02
2 1 2 1 07.09.2014 16:02
```
Lets say status can be
```
0 = deleted (do not show at all)
1 = new (show only to user that is on the receiving end)
2 = read (this status will be shown to all users in the thread)
3 = replied (show only to user that makes this action)
4 = sent (show only to user that makes this action)
```
Query :
```
SELECT *
FROM thread
JOIN thread_users
ON thread.id_thread = thread_users.id_thread
JOIN thread_messages
ON thread.id_thread = thread_messages.id_thread
JOIN message_status
ON thread_messages.id_thread_messages = message_status.id_thread_messages
WHERE
thread_users.id_user = 2
AND message_status.status != 0
AND message_status.status != 4
AND thread.id_user_last != message_status.id_user
```
sample data
THREAD
```
id_thread id_last_message id_user_inital id_user_last
1 4 1 2
2 2 3 3
3 3 4 4
```
THREAD\_USERS
```
id id_thread id_user
1 1 1
2 1 2
3 2 3
4 2 2
5 3 4
6 3 2
```
THREAD\_MESSAGES
```
id_thread_messages id_user_sender id_thread datetime subject body
1 1 1 07.09.2014 16:02 'title' 'text message'
2 3 2 07.09.2014 16:05 'hey two' 'foo'
3 4 2 07.09.2014 16:07 'hey two' 'bar'
4 2 1 07.09.2014 16:10 'title' 'replay on 1st'
```
MESSAGE\_STATUS
```
id_message_status id_thread_messages id_user status datetime
1 1 1 4 07.09.2014 16:02
2 1 2 1 07.09.2014 16:02
3 2 3 4 07.09.2014 16:05
4 2 2 1 07.09.2014 16:05
5 3 4 4 07.09.2014 16:07
6 3 2 1 07.09.2014 16:07
7 4 2 4 07.09.2014 16:10
8 4 1 1 07.09.2014 16:10
```
How would you extract the INBOX data in this situation? I have been spinning in circles for hours and can't quite see what I am doing wrong.
Thank you.
|
Updated solution after taking into account explanations for message status:
```
SELECT DISTINCT t.*, tm.* , ms.*
FROM thread t
-- tm should be last message
INNER JOIN thread_messages tm ON t.id_thread = tm.id_thread
INNER JOIN message_status ms ON (ms.id_thread_messages = tm.id_thread_messages)AND
(ms.id_user=2)AND
(ms.status!=0)
-- try to find message after tm, and then in WHERE filter only those cases where there is no message after tm
LEFT JOIN thread_messages tm_next
INNER JOIN message_status ms_next ON (ms_next.id_thread_messages = tm_next.id_thread_messages)AND
(ms_next.id_user=2)AND
(ms_next.status!=0)
ON (t.id_thread = tm_next.id_thread)and
(tm_next.datetime>tm.datetime)
LEFT JOIN thread_messages tm_other
INNER JOIN message_status ms_other ON (ms_other.id_thread_messages = tm_other.id_thread_messages)AND
(ms_other.id_user=2)AND
(ms_other.status!=0)
ON (t.id_thread = tm_other.id_thread)and
(tm_other.id_thread_messages!=tm.id_thread_messages)and
(tm_other.id_user_sender!=2)
WHERE
-- ensure tm is last message in thread
(tm_next.id_thread is null)and
(
-- there is a non deleted message from another user in current thread
(tm_other.id_thread_messages is not null)or
-- last message is not from current user
(tm.id_user_sender!=2)
)
```
SqlFiddle is [here](http://sqlfiddle.com/#!2/855309/16/0).
Let me know if this works for you.
|
I think this is the solution you are looking for:
```
SELECT * FROM thread
JOIN thread_users ON thread.id_thread = thread_users.id_thread
JOIN thread_messages ON thread.id_thread = thread_messages.id_thread
JOIN message_status ON thread_messages.id_thread_messages = message_status.id_thread_messages
WHERE thread_users.id_user = 2
AND thread_users.id_user = message_status.id_user
AND message_status.status != 0
AND message_status.status != 4
AND thread.id_user_last != message_status.id_user
```
|
Get the last message for each thread
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I have a requirement to create a Sales report and I have a sql query:
```
SELECT --top 1
t.branch_no as TBranchNo,
t.workstation_no as TWorkstation,
t.tender_ref_no as TSaleRefNo,
t.tender_line_no as TLineNo,
t.tender_code as TCode,
T.contribution as TContribution,
l.sale_line_no as SaleLineNo
FROM TENDER_LINES t
LEFT JOIN SALES_TX_LINES l
on t.branch_no = l.branch_no and t.workstation_no = l.workstation_no and t.tender_ref_no = l.sale_tx_no
where l.sale_tx_no = 2000293 OR l.sale_tx_no = 1005246 --OR sale_tx_no = 1005261
order by t.tender_ref_no asc,
l.sale_line_no desc
```
The results of the query look like the following:

The results I am trying to achieve is:

With only one line for transaction 2 (either SaleLineNo 1 or 2), while still having both lines for transaction 1, because the TCode is different.
Thanks
I am using SQL Server 2012.
|
Appreciate your assistance with my query. After playing around, I have found a solution that works just as I want. It is as below: I did a Group by as hinted by @Yogesh86 on a few fields.
```
SELECT
MAX(t.branch_no) as TBranchNo,
Max(t.workstation_no) as TWorkstation,
t.tender_ref_no as TSaleRefNo,
Max(t.tender_line_no) as TLineNo,
t.tender_code as TCode,
MAx(T.contribution) as TContribution,
MAX(l.sale_line_no) as SaleLineNo
FROM TENDER_LINES t
LEFT JOIN SALES_TX_LINES l
on t.branch_no = l.branch_no and t.workstation_no = l.workstation_no and t.tender_ref_no = l.sale_tx_no
where l.sale_tx_no = 2000293 OR l.sale_tx_no = 1005246 --OR sale_tx_no = 1005261
GROUP BY
t.tender_ref_no,
t.tender_line_no,
t.tender_code
```
|
Not exactly sure on what data you have, but you might want to try
```
GROUP BY TlineNo, TCode ...
```
But you have to keep a look on not to group by something that would result in duplicate contribution values.
|
Assistance with SQL Query (aggregating)
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I would like to ask if there is a way to get the value with the maximum length from a column in MS SQL Server.
For example in the table A we have:
```
id | value
----+--------
1 | 0123
2 | 00034567
3 | 547
```
The desired result for this data set is 00034567.
Wherever I have searched for this problem I get the answer `select max(len(value))`, which is not what I need, because it gives the maximum number of characters of the value and not the value itself.
Any ideas?
|
```
SELECT TOP 1 t.value
FROM table AS t
ORDER BY LEN(t.value) DESC
```
|
Sort by length and take the first row:
```
select top 1 value
from mytable
order by len(value) desc
```
If you want ties:
```
select value
from mytable
where len(value) = (select max(len(value)) from mytable)
```
Or more efficient but SQL Server specific:
```
select value from (
select value,
rank() over (order by len(value) desc) rank
from mytable) x
where rank = 1
```
See [SQLFiddle](http://sqlfiddle.com/#!6/a3ddc/4) for last one.
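The same idea can be sanity-checked with Python's built-in sqlite3 (a sketch, not SQL Server: SQLite spells these as `LENGTH()` and `LIMIT 1` instead of `LEN()` and `TOP 1`, and the table name is illustrative):

```python
import sqlite3

# In-memory table mirroring the example data
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER, value TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(1, "0123"), (2, "00034567"), (3, "547")])

# Sort by string length descending and take the first row
longest = con.execute(
    "SELECT value FROM mytable ORDER BY LENGTH(value) DESC LIMIT 1"
).fetchone()[0]
print(longest)  # -> 00034567
```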
|
How to get the value of max length in sql server
|
[
"",
"sql",
"sql-server",
"max",
"maxlength",
""
] |
Hi,
look at this query and tell me why Oracle always gives me this error:
```
ORA-00932: inconsistent datatypes: expected DATE got NUMBER
00932. 00000 - "inconsistent datatypes: expected %s got %s"
```
note 1: when I remove `TRUNC()` it works, but I want to keep it in the query
note 2: I have tried the date formats `1/07/2014`, `1-07-2014` and `1/JUL/2014`; the error is always the same
This is my query:
```
SELECT
QTYORDERED,
LINENETAMT
FROM C_Orderline
INNER JOIN C_Order
ON (C_Orderline.C_Order_ID = C_Order.C_Order_ID)
WHERE C_Order.ad_org_id = 1401007
AND C_order.DOCSTATUS = 'CO'
AND (TRUNC('1-jul-2014' ) IS NULL
OR C_order.DATEORDERED >= TRUNC('1-jul-2014'))
AND (TRUNC('4-sep-2014') IS NULL
OR C_order.DATEORDERED <= TRUNC('4-sep-2014'))
ORDER BY c_order.ad_org_id,C_order.DATEORDERED DESC
```
|
I suspect the problem is with your column `DATEORDERED`, which is probably a `NUMBER`.
An example here.
```
SQL> select * from dual where trunc(to_DATE('04-SEP-2014','DD-MON-YYYY')) > 100;
select * from dual where trunc(to_DATE('04-SEP-2014','DD-MON-YYYY')) > 100
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected DATE got NUMBER
```
And this won't throw an error, since Oracle recognises `04-SEP-2014` as a date:
```
SQL> select * from dual where trunc(to_DATE('04-SEP-2014','DD-MON-YYYY')) > '04-SEP-2014';
no rows selected
```
So, you have to analyse the current data in the column and find the date format it matches,
something like '`20140409`' -> '`YYYYDDMM`',
and then try `TO_DATE(C_order.DATEORDERED,'YYYYDDMM') <= TRUNC(TO_DATE('4-sep-2014','DD-mon-yyyy'))`
|
There is no reason to use `trunc()` on a constant like that. Instead, just use the `date` keyword and specify the dates in ISO standard format:
```
WHERE C_Order.ad_org_id = 1401007 AND
C_order.DOCSTATUS = 'CO' AND
(DATE '2014-07-01' IS NULL OR C_order.DATEORDERED >= DATE '2014-07-01') AND
    (DATE '2014-09-04' IS NULL OR C_order.DATEORDERED <= DATE '2014-09-04')
```
If you want to keep your date format, use `to_date('1-jul-2014', 'DD-MON-YYYY')` to convert the value to a date.
|
trunc(date) always give an error
|
[
"",
"sql",
"oracle",
"date",
""
] |
I can't seem to find a solution for searching a group_concatenated value.
I have 3 tables that are connected by ids:
* the 1st table shares a value with the 2nd table, but not with the 3rd,
* the 2nd table shares values with both the 1st and the 3rd table.
I want to get the values inside the 3rd table,
concatenate them according to the distinct IDs of the 2nd table, display them, and be able to search them.
This is what my tables look like:

How do I search the concatenated values?
If there's a better way, your help is much appreciated.
The query below is what I have so far:
```
$query = $db->prepare("
SELECT
a.problem_encountered,
GROUP_CONCAT(
DISTINCT
c.full_name)
AS
fnames
FROM
maintenance_sheet_table a
LEFT JOIN
mis_incharge_table b
ON
b.mis_incharge_id = a.mis_incharge_id
INNER JOIN
users_table c
ON
c.mis_id=b.mis_id
WHERE
a.problem_encountered
LIKE
:findMe
HAVING
fnames
LIKE
:findMe
GROUP BY a.id ORDER BY a.id
");
$query->bindValue(':findMe', '%' . $keywordSearch. '%');
```
|
A potential answer is to filter the users_table in a subquery. There are a number of different forms of this option, and it is hard to tell from your data which is required. The one below simply returns the users that match the search criteria.
```
SELECT a.problem_encountered, GROUP_CONCAT(DISTINCT innerc.full_name) AS fnames
FROM maintenance_sheet_table a
LEFT JOIN mis_incharge_table b ON b.mis_incharge_id = a.mis_incharge_id
LEFT JOIN (SELECT c.mis_id, c.full_name
FROM users_table c
WHERE c.full_name LIKE :findMe) innerc ON innerc.mis_id=b.mis_id
WHERE a.problem_encountered LIKE :findMe
GROUP BY a.id
ORDER BY a.id
```
However, you could also do the concatenation within the subquery if required.
```
SELECT a.problem_encountered, innerc.fnames
FROM maintenance_sheet_table a
INNER JOIN (SELECT mit.mis_incharge_id, GROUP_CONCAT(DISTINCT ut.full_name) AS fnames
FROM users_table ut
INNER JOIN mis_incharge_table mit ON ut.user_id = mit.user_id
GROUP BY mit.mis_incharge_id
HAVING fnames LIKE :findMe) innerc ON innerc.mis_incharge_id = a.mis_incharge_id
WHERE a.problem_encountered LIKE :findMe
GROUP BY a.id
ORDER BY a.id
```
Note: I agree with spencer7593 that you shouldn't use the same :findMe variable against 2 separate fields. Even if it works, a maintenance programmer, or even you yourself in a few years' time, will probably look at this and think that the wrong fields are being interrogated.
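The HAVING-on-the-concatenation idea discussed here can be sketched with Python's built-in sqlite3 (an illustration only: the three-table join is collapsed into two hypothetical tables, and MySQL's `GROUP_CONCAT` exists in SQLite as `group_concat`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sheet (id INTEGER, problem TEXT)")
con.execute("CREATE TABLE users (sheet_id INTEGER, full_name TEXT)")
con.execute("INSERT INTO sheet VALUES (1, 'printer jam'), (2, 'dead pixel')")
con.executemany("INSERT INTO users VALUES (?, ?)", [
    (1, 'Alice Cruz'), (1, 'Bob Tan'), (2, 'Carol Lim')])

# Filter on the concatenated member list: only groups whose combined
# names contain the keyword survive the HAVING clause.
keyword = '%Bob%'
rows = con.execute("""
    SELECT s.id, group_concat(DISTINCT u.full_name) AS fnames
    FROM sheet s JOIN users u ON u.sheet_id = s.id
    GROUP BY s.id
    HAVING group_concat(DISTINCT u.full_name) LIKE ?
""", (keyword,)).fetchall()
print(rows)  # one group (id 1) whose name list contains 'Bob'
```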
|
You can "search" the return from the GROUP\_CONCAT() expression in the `HAVING` clause. As a more efficient alternative, I suspect you could use an EXISTS predicate with a subquery.
I suspect part of the problem is that your query is referencing the same bind placeholder more than one time. (In previous releases of PDO, this was a restriction, a named bind placeholder could be referenced only once.)
The workaround to this issue is to use a separate bind placeholder, e.g.
```
HAVING fnames LIKE :findMeToo
```
And then bind a value to each placeholder:
```
$query->bindValue(':findMe', '%' . $keywordSearch. '%');
$query->bindValue(':findMeToo', '%' . $keywordSearch. '%');
```
(With this issue, I don't think PDO issued a warning or error; the effect was as if no value was supplied for the second reference to the named bind placeholder. Not sure if this issue is fixed, either by a code change or a documentation update. The workaround as above, reference a bind placeholder only *once* within a query.)
Beyond that, it's not clear what problem you are observing.
|
mysql JOIN Inside a JOIN and search a group_concatenated values
|
[
"",
"mysql",
"sql",
"pdo",
""
] |
I have a mysql table with a column (code) that contains 6 character strings (varchar(6)). An example record for that column would be:
```
ID code
1 ADHNQS
2 BDHLQS
3 AEGMQS
```
What I need to do is compare one code record at a time to the rest and retrieve "related codes". A related code qualifies if it shares 4 or more of the same characters. Example:
**ADHNQS -> BDHLQS** would be a match since D,H,Q,S are shared and are 4 or greater.
**ADHNQS -> BCHLQR** would NOT be a match since their shared characters are less than 4.
How can I retrieve a set of all records in SQL with their character match count WHERE it is 4 or greater? I have looked into many of the MySQL string functions but have not found anything that jumps out at me as a simple solution. Thank you in advance for your help!
|
If you just have the id of your reference row (here: 1):
```
SELECT b.ID, b.code,
(if(substring(b.code,1,1)=substr(a.code,1,1),1,0) + if(substring(b.code,2,1)=substr(a.code,2,1),1,0) + if(substring(b.code,3,1)=substr(a.code,3,1),1,0) + if(substring(b.code,4,1)=substr(a.code,4,1),1,0) + if(substring(b.code,5,1)=substr(a.code,5,1),1,0) + if(substring(b.code,6,1)=substr(a.code,6,1),1,0)) as matchcount
FROM yourtablename as a, yourtablename as b
WHERE a.ID=1
AND b.ID<>a.ID
GROUP BY 1
HAVING matchcount>=4
ORDER BY matchcount desc
```
Returns:
```
ID code matchcount
2 BDHLQS 4
```
If you just have the code (here: ADHNQS), then you can build your query manually like this (will return your exact code, too, if it exists):
```
SELECT ID, code,
(if(substring(code,1,1)="A",1,0) + if(substring(code,2,1)="D",1,0) + if(substring(code,3,1)="H",1,0) + if(substring(code,4,1)="N",1,0) + if(substring(code,5,1)="Q",1,0) + if(substring(code,6,1)="S",1,0)) as matchcount
FROM yourtablename
GROUP BY 1
HAVING matchcount>=4
ORDER BY matchcount desc
```
Returns:
```
ID code matchcount
1 ADHNQS 6
2 BDHLQS 4
```
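The position-by-position counting trick above can be checked quickly with Python's sqlite3 (a sketch; in SQLite a comparison already evaluates to 0 or 1, so the six `if(...)` terms become a plain sum):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE codes (id INTEGER, code TEXT)")
con.executemany("INSERT INTO codes VALUES (?, ?)",
                [(1, "ADHNQS"), (2, "BDHLQS"), (3, "AEGMQS")])

# Build the six position-by-position comparisons; each yields 0 or 1.
cmp_sum = " + ".join(
    f"(substr(a.code,{i},1) = substr(b.code,{i},1))" for i in range(1, 7))

rows = con.execute(f"""
    SELECT b.id, b.code, {cmp_sum} AS matchcount
    FROM codes a JOIN codes b ON b.id <> a.id
    WHERE a.id = 1
""").fetchall()
related = [r for r in rows if r[2] >= 4]
print(related)  # -> [(2, 'BDHLQS', 4)]
```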
|
In the example "match", the characters that match are in the same positions in both strings. It's not clear if this is the actual specification, or if that's just an anomaly in the example. Also, we note that in the example data, the list of characters is distinct, there aren't two of the same character in any string. Again, not sure if that's part of the specification, or an anomaly in the example.
Also, are the code values always six characters in length? Any special handling for shorter strings, or space characters? Etc.
---
In the simplest case, where we're comparing the strings position by position, and the only requirement is that a character be equal to another character (no special handling for spaces, or non-alphabetic, etc.) then something like this would return the specified result:
```
SELECT c.id
, c.code
, d.id
, d.code
FROM mytable c
JOIN mytable d
ON d.id <> c.id
AND ( IFNULL( NULLIF(SUBSTR(c.code,1,1),'') = NULLIF(SUBSTR(d.code,1,1),'') ,0)
+ IFNULL( NULLIF(SUBSTR(c.code,2,1),'') = NULLIF(SUBSTR(d.code,2,1),'') ,0)
+ IFNULL( NULLIF(SUBSTR(c.code,3,1),'') = NULLIF(SUBSTR(d.code,3,1),'') ,0)
+ IFNULL( NULLIF(SUBSTR(c.code,4,1),'') = NULLIF(SUBSTR(d.code,4,1),'') ,0)
+ IFNULL( NULLIF(SUBSTR(c.code,5,1),'') = NULLIF(SUBSTR(d.code,5,1),'') ,0)
+ IFNULL( NULLIF(SUBSTR(c.code,6,1),'') = NULLIF(SUBSTR(d.code,6,1),'') ,0)
) >= 4
WHERE c.id = 1
ORDER BY c.id, d.id
```
If we need to compare each character in `code` to each of the characters in the other `code`, we'd have something similar; we'd just need to perform a total of 36 comparisons (compare pos 1 to pos 1,2,3,4,5,6; compare pos 2 to pos 1,2,3,4,5,6; and so on).
That could be done exactly the same as the query above, except that the **`AND ( ) >= 4`** predicate would contain a total of 36 comparison tests, rather than just six.
That brings up (again) the issue of multiples of the same character in the string, and how those should be "counted" towards a match. For example, consider:
```
code1: QbQdef
code2: QxyQQz
```
The Q in position 1 of code1 matches three Q's in code2, and the Q in position 2 of code1 also matches three Q's in code2... for a total match count of 6. Do we want to consider these two codes as matching? If not, we could modify the test block in the query a bit, so that a match of the character in position 1 to any character in code2 would result in only 1 being added to the match count.
The specifications need to be fleshed out more, in order to make a determination of the actual SQL statement that achieves the desired result.
|
MySQL get substring match count in each column record
|
[
"",
"mysql",
"sql",
"string",
"substring",
"match",
""
] |
I don't understand how to link two tables together. This is an example:
```
CREATE TABLE IF NOT EXISTS itemStatus (
id int(11) AUTO_INCREMENT PRIMARY KEY,
name varchar(64) NOT NULL UNIQUE KEY
);
CREATE TABLE IF NOT EXISTS itemData (
id int(11) AUTO_INCREMENT PRIMARY KEY,
title varchar(64) NOT NULL,
status_id int(11) DEFAULT NULL,
CONSTRAINT `fk_id` FOREIGN KEY (`id`) REFERENCES `itemStatus` (`id`),
);
```
I'm calling the column "status_id" but I don't reference it anywhere, so it can't link the two. For this example, what should my "CONSTRAINT" line read in order to be correct?
|
```
A FOREIGN KEY in one table points to a PRIMARY KEY in another table.
```
Take a look at:
<http://www.w3schools.com/sql/sql_foreignkey.asp>
So it should be:
```
CREATE TABLE IF NOT EXISTS itemStatus (
id int(11) AUTO_INCREMENT PRIMARY KEY,
name varchar(64) NOT NULL UNIQUE KEY
);
CREATE TABLE IF NOT EXISTS itemData (
id int(11) AUTO_INCREMENT PRIMARY KEY,
title varchar(64) NOT NULL,
status_id int(11) DEFAULT NULL,
CONSTRAINT `fk_id` FOREIGN KEY (`status_id`) REFERENCES `itemStatus` (`id`)
);
```
**FOREIGN KEY (`status_id`)** => Field in the table will **REFERENCES `itemStatus` (`id`)**
A constraint can't be followed by a "," when it is the last item:
(...) REFERENCES `itemStatus` (`id`)**,**
So the structure should be:
```
CONSTRAINT <<CONSTRAINT_NAME>> FOREIGN KEY (<<COLUMN_IN_THE_TABLE>>) REFERENCES `<<ANOTHER_TABLE>>` (`<<ANOTHER_TABLE_COLUMN_ID>>`)
```
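The corrected constraint can be exercised end-to-end with Python's sqlite3 (an illustrative port, not MySQL: SQLite needs `PRAGMA foreign_keys = ON`, and the MySQL-specific column options are dropped):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
con.execute(
    "CREATE TABLE itemStatus (id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE)")
con.execute("""CREATE TABLE itemData (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    status_id INTEGER DEFAULT NULL,
    CONSTRAINT fk_id FOREIGN KEY (status_id) REFERENCES itemStatus (id))""")

con.execute("INSERT INTO itemStatus (name) VALUES ('active')")
con.execute("INSERT INTO itemData (title, status_id) VALUES ('ok row', 1)")

# A status_id with no matching itemStatus.id is rejected
try:
    con.execute("INSERT INTO itemData (title, status_id) VALUES ('bad row', 99)")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)  # -> True
```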
|
Looks like you're very close. Try this instead:
```
CONSTRAINT `fk_id` FOREIGN KEY (`status_id`) REFERENCES `itemStatus` (`id`)
```
|
SQL - CONSTRAINT to link two tables
|
[
"",
"mysql",
"sql",
""
] |
I am concerned about race conditions in my database backed application. My application allows users to reserve a spot in a class. If all the spots are taken, then they are placed on a wait list. In greatly simplified pseudo code, I'm doing something like:
```
1 spots = SELECT total_spots FROM classes WHERE class_id = 200;
2 regs = SELECT COUNT(*) FROM registrations WHERE class_id = 200;
3 wl = spots > regs
4 INSERT INTO registrations ( user_id, class_id, on_waitlist ) VALUES ( 500, 200, wl );
```
What happens if `total_spots` is changed in the `classes` table, or another user does an INSERT on the `registrations` table, just after we read it (line 2)?
Is it adequate to wrap the entire thing in a transaction? I've read a little about `SELECT FOR UPDATE` in MySQL. I'm currently using SQLite, but I can migrate to MySQL if necessary.
|
SQLite's transactions [are fully atomic](http://www.sqlite.org/transactional.html).
If you wrap the entire thing in a transaction, other connections to the same database cannot make any changes at the same time.
|
You can also select within your insert:
```
insert into registrations (user_id, class_id, on_waitlist)
values ( 500, 200,
(SELECT total_spots FROM classes WHERE class_id = 200) <
(SELECT COUNT(*) FROM registrations WHERE class_id = 200)
);
```
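Since the question is about SQLite anyway, the select-inside-insert idea can be run as-is via Python's sqlite3. A sketch (note the comparison direction: the waitlist flag should become true once existing registrations have used up all the spots; the wrapping transaction keeps the count and the insert atomic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE classes (class_id INTEGER PRIMARY KEY, total_spots INTEGER)")
con.execute("CREATE TABLE registrations (user_id INTEGER, class_id INTEGER, on_waitlist INTEGER)")
con.execute("INSERT INTO classes VALUES (200, 2)")

def register(user_id, class_id):
    # One statement inside one transaction: no other writer can slip in
    # between the COUNT and the INSERT.
    with con:
        con.execute("""
            INSERT INTO registrations (user_id, class_id, on_waitlist)
            VALUES (?, ?,
                (SELECT COUNT(*) FROM registrations WHERE class_id = ?) >=
                (SELECT total_spots FROM classes WHERE class_id = ?))
        """, (user_id, class_id, class_id, class_id))

for uid in (500, 501, 502):   # 2 spots, so the third user lands on the waitlist
    register(uid, 200)

flags = [r[0] for r in con.execute(
    "SELECT on_waitlist FROM registrations ORDER BY user_id")]
print(flags)  # -> [0, 0, 1]
```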
|
Avoid race conditions in SQLite
|
[
"",
"sql",
"database",
"sqlite",
"race-condition",
"select-for-update",
""
] |
I have a table with integer ID column. I would like to get the minimum unused value for this column. The query should find the first hole in table IDs and get the minimum value inside it. I'll try to explain it with some examples.
**Example 1: no-holes table**
In this case, I have a table without holes and the query should simply get the minimum unused value: it should get 4.
```
|id|
|1 |
|2 |
|3 |
```
**Example 2: table with hole on top**
In this case, we have a hole on top (missing value: 1). The query finds the hole and gets the minimum value inside it: should get 1.
```
|id|
|2 |
|3 |
|4 |
```
Also in this case, we have a hole on top, but we have more missing values inside it (missing values: 1 and 2). The query finds the hole and gets the minimum value inside it: should get 1.
```
|id|
|3 |
|4 |
|5 |
```
**Example 3: table with hole in the middle**
In this case, we have a hole in the middle (missing values: 2 and 3). The query finds the hole and gets the minimum value inside it: should get 2.
```
|id|
|1 |
|4 |
|5 |
```
**Example 4: table with holes on top and in the middle**
In this case, we have multiple holes: one on top (missing value: 1) and one in the middle (missing value: 3). The query finds the first hole and gets the minimum value inside it: should get 1.
```
|id|
|2 |
|4 |
|6 |
```
I've tried the solution proposed in [**this post**](https://stackoverflow.com/questions/174532/how-to-find-holes-in-a-table), but it doesn't work as expected in my case. Any ideas?
|
```
SELECT min(unused) AS unused
FROM (
SELECT MIN(t1.id)+1 as unused
FROM yourTable AS t1
WHERE NOT EXISTS (SELECT * FROM yourTable AS t2 WHERE t2.id = t1.id+1)
UNION
-- Special case for missing the first row
SELECT 1
FROM DUAL
WHERE NOT EXISTS (SELECT * FROM yourTable WHERE id = 1)
) AS subquery
```
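Here is the same query checked against two of the example tables using Python's sqlite3 (a sketch; SQLite has no `DUAL`, so the special-case branch becomes a bare `SELECT 1 WHERE NOT EXISTS`):

```python
import sqlite3

def min_unused_id(con):
    # First id whose successor is missing, plus the special case where id=1
    # itself is missing; the outer MIN picks the smallest candidate.
    return con.execute("""
        SELECT MIN(unused) FROM (
            SELECT MIN(t1.id) + 1 AS unused FROM t t1
            WHERE NOT EXISTS (SELECT 1 FROM t t2 WHERE t2.id = t1.id + 1)
            UNION
            SELECT 1 WHERE NOT EXISTS (SELECT 1 FROM t WHERE id = 1)
        )
    """).fetchone()[0]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
no_holes = min_unused_id(con)          # example 1 -> 4
con.execute("DELETE FROM t")
con.executemany("INSERT INTO t VALUES (?)", [(2,), (4,), (6,)])
top_and_middle = min_unused_id(con)    # example 4 -> 1
print(no_holes, top_and_middle)
```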
|
A slightly different way to do it using a join rather than EXISTS:-
```
SELECT MIN(t1.id)
FROM
(
SELECT 1 AS id
UNION ALL
SELECT id + 1
FROM yourTable
) t1
LEFT OUTER JOIN yourTable t2
ON t1.id = t2.id
WHERE t2.id IS NULL;
```
The downside of any solution using a subquery like this is that it is unlikely to use any indexes.
|
Get minimum unused value in MySQL column
|
[
"",
"mysql",
"sql",
"minimum",
""
] |
I want to create a stored procedure that's flexible in handling an update on multiple columns for a table, but where all or some of the values may or may not be provided.
Something like this for instance:
```
UPDATE some_table
SET
IF(I_COLUMN_1 is not NULL) THEN
COLUMN_1 = I_COLUMN_1
END IF;
IF(I_COLUMN_2 is not NULL) THEN
COLUMN_2 = I_COLUMN_2
END IF;
WHERE
SOME_KEY = I_SOME_KEY;
```
Obviously not right, but just to give you some pseudo code to portray my idea.
The only way I can think to do this now is to check each variable and run a separate update statement for each column, which feels really lame.
|
Here is a proposition of query that can do the job:
```
UPDATE some_table
SET COLUMN_1 = NVL(I_COLUMN_1, COLUMN_1)
,COLUMN_2 = NVL(I_COLUMN_2, COLUMN_2)
WHERE SOME_KEY = I_SOME_KEY
```
I'm not very familiar with Oracle, but in T-SQL I would use the `ISNULL()` function to do the job; the equivalent in Oracle is the `NVL()` function.
Hope this will help.
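An Oracle instance isn't handy to test against, but the coalesce-new-value-with-old-column pattern is portable; here is a sketch via Python's sqlite3 using the standard `COALESCE()` (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE some_table (some_key INTEGER PRIMARY KEY, column_1 TEXT, column_2 TEXT)")
con.execute("INSERT INTO some_table VALUES (1, 'old1', 'old2')")

def update(key, i_column_1, i_column_2):
    # COALESCE(new, old): a NULL parameter leaves that column untouched
    con.execute("""
        UPDATE some_table
        SET column_1 = COALESCE(?, column_1),
            column_2 = COALESCE(?, column_2)
        WHERE some_key = ?""", (i_column_1, i_column_2, key))

update(1, None, 'new2')   # only the second value is provided
row = con.execute(
    "SELECT column_1, column_2 FROM some_table WHERE some_key = 1").fetchone()
print(row)  # -> ('old1', 'new2')
```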
|
My first thought would be to manipulate the variables in PL/SQL before you execute the UPDATE statement. This is pseudo-code, but something like this:
```
I_COLUMN_1 [Datatype] := NVL2([Value of Incoming Parameter], [Value for Update if Not Null], NULL);
I_COLUMN_2 [Datatype] := NVL2([Value of Incoming Parameter], [Value for Update If Not Null], NULL);
UPDATE [some_table]
SET COLUMN_1 = I_COLUMN_1, COLUMN_2 = I_COLUMN_2
WHERE [some_key] = I_SOME_KEY;
```
Using the NVL2(expression, return-if-not-null, return-if-null) function will evaluate an expression, returning either a value if the expression is not null or a second value if the expression is null.
Evaluate the input, then update the table. :)
|
Oracle Pl SQL, conditional in UPDATE statement
|
[
"",
"sql",
"oracle",
"plsql",
"sql-update",
"conditional-statements",
""
] |
I'm using Firebird 2.1.
There is a table: `IDs, Labels`
There can be multiple labels for the same ID:
```
10 Peach
10 Pear
10 Apple
11 Apple
12 Pear
13 Peach
13 Apple
```
Let's say I have a set of labels, ie.: (Apple, Pear, Peach).
How can I write a single select to return all IDs that have all labels associated in a given set? Preferably I'd like to specify the set in a string separated with commas, like: ('Apple', 'Pear', 'Peach') -> this should return ID = 10.
Thanks!
|
As asked, I'm posting my simpler version of pilcrow's answer. I have tested this on my Firebird, which is version 2.5, but the OP (Steve) has tested it on 2.1, where it works as well.
```
SELECT id
FROM table
WHERE label IN ('Apple', 'Pear', 'Peach')
GROUP BY id
HAVING COUNT(DISTINCT label)=3
```
This solution has the same disadvantage as pilcrow's: you need to know how many values you are looking for, as the HAVING condition must match the WHERE IN condition. In this respect, Ed's answer is more flexible, as it splits the concatenated value string parameter and counts the values, so you just have to change one parameter instead of the two conditions that pilcrow and I use.
OTOH, if efficiency is of concern, I would rather think (but I am absolutely not sure) that Ed's CTE approach might be less optimizable by the Firebird engine than the one I suggest. Firebird is very good at optimizing queries, but I don't really know if it is able to do so when you use a CTE this way. The WHERE + GROUP BY + HAVING approach should be optimizable by simply having an index on (id, label).
In conclusion, if execution times are of concern in your case, then you probably need some explain plans to see what is happening, whichever solution you choose ;)
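Firebird isn't available here, but the WHERE + GROUP BY + HAVING shape is plain SQL; a sketch via Python's sqlite3, deriving the HAVING count from the same parameter list so only one thing needs changing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, label TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (10, 'Peach'), (10, 'Pear'), (10, 'Apple'),
    (11, 'Apple'), (12, 'Pear'), (13, 'Peach'), (13, 'Apple')])

labels = ('Apple', 'Pear', 'Peach')
placeholders = ','.join('?' * len(labels))   # IN list and count from one source
ids = [r[0] for r in con.execute(f"""
    SELECT id FROM t
    WHERE label IN ({placeholders})
    GROUP BY id
    HAVING COUNT(DISTINCT label) = ?""", (*labels, len(labels)))]
print(ids)  # -> [10]
```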
|
It's easiest to split the string in code and then query
```
SQL> select ID
CON> from (select ID, count(DISTINCT LABEL) as N_LABELS
CON> from T
CON> where LABEL in ('Apple', 'Pear', 'Peach')
CON> group by 1) D
CON> where D.N_LABELS >= 3; -- We know a priori we have 3 LABELs
ID
============
10
```
|
Firebird how to select ids that match all items in a set
|
[
"",
"sql",
"firebird",
"firebird2.1",
""
] |
I am not sure how to fix this:
```
dyn-72-33-214-45:python mona$ sudo /usr/local/mysql/bin/mysqld stop
2014-09-06 09:49:04 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2014-09-06 09:49:04 22992 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/mysql-5.6.15-osx10.7-x86_64/data/ is case insensitive
2014-09-06 09:49:04 22992 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
2014-09-06 09:49:04 22992 [ERROR] Aborting
2014-09-06 09:49:04 22992 [Note] Binlog end
2014-09-06 09:49:04 22992 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete
```
|
I'm using OS X (Yosemite) and this error happened to me when I upgraded from Mavericks to Yosemite. It was solved by using this command
```
sudo /usr/local/mysql/support-files/mysql.server start
```
|
You might try this if you are logged in as root:
```
mysqld --user=root
```
|
Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root
|
[
"",
"mysql",
"sql",
"my.cnf",
""
] |
Though I have relatively good exposure to SQL Server, I am still a newbie in SSAS.
We are to create a set of reports in SSRS with an SSAS cube as the data source.
Some of the reports involve data from at least 3 or 4 tables and would also involve grouping and all possible things from the SQL environment (like finding the max record for a day, joining that with 4 more tables and applying filtering logic on top of it).
So the actual question is: should I have this logic implemented in the cubes, or have it processed in the SQL database (using a named query in SSAS) and have the result stored in the cube to be shown in the report? I understand that the latter option would involve creating more cubes depending on each report being developed.
I was told to create cubes with the data from the transaction tables and do the entire logic creation using MDX queries (as the source in SSRS). I am not sure if that is a viable solution.
Any help with this would be much appreciated; thanks for reading my note.
Aru
EDIT: We are using SQL Server 2012 for our development.
|
OLAP cubes are great at performing aggregations of data, effectively grouping over the majority of columns all at once. You should not strive to implement all the grouping at the named query or relational views level as this will prevent you from being able to drill down through the data in the cube and will result in unnecessary overhead on the relational database when processing the cube.
I would start off by planning to pull in the most granular data from your relational database into your cube and only perform filtering or grouping in the named queries or views if data volumes or processing time are a concern. SSAS will perform some default aggregations of the data to allow for fast queries at the most grouped level.
More complex concerns such as max(someColumn) for a particular day can still be achieved in the cube by using different aggregations, but you do get into complex scenarios if you want to aggregate by one function (MAX) only to the day level and then by another function across other dimensions (e.g. summing the max of each day per country). In that case it may well be worth performing the max-per-day calculation in a named query or view and loading that into its own measure group to be aggregated by SUM after that.
It sounds like you're at the beginning of the learning path for OLAP, so I'd encourage you to look at resources from the [Kimball Group](http://www.kimballgroup.com/category/kimball-classics/) (no affiliation) including, if you have time, the excellent book "The Data Warehouse Toolkit". At a minimum, please look into Dimensional Modelling techniques as your cube design will be a good deal easier if you produce a dimensional model (likely a star schema) in either views or named queries.
|
I would look at BISM Tabular if your model is not complicated. It compresses and stores data in memory. As for data processing I would suggest to keep all calculations and grouping in database layer (create views).
|
SSAS Environment or CUBE creation methodology
|
[
"",
"sql",
"sql-server",
"reporting-services",
"ssas",
"olap-cube",
""
] |
I have the following table:
```
ID myDate myTime Value
1 2014-06-01 00:00:00 100
2 2014-06-01 01:00:00 125
3 2014-06-01 02:00:00 132
4 2014-06-01 03:00:00 139
5 2014-06-01 04:00:00 145
6 2014-06-01 05:00:00 148
FF.
24 2014-06-01 23:00:00 205
25 2014-06-02 00:00:00 209
26 2014-06-02 01:00:00 215
27 2014-06-02 02:00:00 223
```
Then I have the following SQL Statement:
```
SELECT * FROM MyTable WHERE myDate = '2014-06-01'
UNION ALL
SELECT * FROM MyTable WHERE myDate = DATEADD(dd, 1, '2014-06-01') AND myTime = '00:00:00'
```
So the result should be from record number 1 to record number 25.
What I am trying to do: the `myTime` value of `00:00:00` on the last record needs to be changed to `24:00:00`.
Does anyone know how to do this? Or is not possible?
Thank you.
|
You can use SQL Server's simple IIF function: <http://msdn.microsoft.com/en-au/library/hh213574.aspx>
So the query will be like the below
```
SELECT
ID,
myDate,
IIF(myTime='00:00:00', '24:00:00', myTime) as myTime
FROM MyTable WHERE myDate = '2014-06-01'
```
*Please mark as answer if it answers your question*
Update:
If it is SQL Server 2008 or earlier you can use `case .. when`:
```
SELECT ID,
myDate,
case when CAST(myTime AS VARCHAR(8))='00:00:00' then '24:00:00'
else CAST(myTime AS VARCHAR(8))
end as myTimeTemp
FROM MyTable WHERE myDate = '2014-06-01'
UNION ALL
SELECT ID,
myDate,
case when CAST(myTime AS VARCHAR(8))='00:00:00' then '24:00:00'
else CAST(myTime AS VARCHAR(8))
end as myTimeTemp
FROM MyTable WHERE myDate = DATEADD(dd, 1, '2014-06-01') AND myTime = '00:00:00'
```
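SQL Server isn't available here, but the CASE-based relabelling can be sketched with Python's sqlite3 (illustrative: the date/time columns are stored as text, and SQLite's `date(..., '+1 day')` stands in for `DATEADD`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER, mydate TEXT, mytime TEXT, value INTEGER)")
con.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?)", [
    (24, '2014-06-01', '23:00:00', 205),
    (25, '2014-06-02', '00:00:00', 209),
    (26, '2014-06-02', '01:00:00', 215)])

# Midnight on the day *after* the requested date is displayed as 24:00:00
rows = con.execute("""
    SELECT id,
           CASE WHEN mydate <> '2014-06-01' AND mytime = '00:00:00'
                THEN '24:00:00' ELSE mytime END AS mytime
    FROM mytable
    WHERE mydate = '2014-06-01'
       OR (mydate = date('2014-06-01', '+1 day') AND mytime = '00:00:00')
    ORDER BY id""").fetchall()
print(rows)  # -> [(24, '23:00:00'), (25, '24:00:00')]
```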
|
You would do this with an `or`:
```
SELECT *
FROM MyTable
WHERE myDate = '2014-06-01' OR
(myDate = DATEADD(dd, 1, '2014-06-01') AND myTime = '00:00:00');
```
EDIT:
```
SELECT date('2014-06-01') as mydate,
(case when mydate <> date('2014-06-01') then '24:00:00' else mytime end) as mytime,
REST OF COLUMNS HERE
FROM MyTable
WHERE myDate = '2014-06-01' OR
(myDate = DATEADD(dd, 1, '2014-06-01') AND myTime = '00:00:00');
```
|
Change Value on SQL View
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Is there a way to work out if a call overlaps another call already in progress.
For example I have a call that comes in at `10:00` and ends at `10:05`.
Second call comes in at `10:02` and ends at `10:06`, so it overlaps the first call. How could I show this in a SQL query? I have a few thousand calls to compare.
Table I have uses `CallID, TimeAns, TimeFin`
Any suggestions welcome.
|
Here is a non-inclusive query:
```
SELECT * FROM myCalls t1
INNER JOIN myCalls t2 on t2.CallID != t1.CallID
AND t1.TimeAns < t2.TimeFin
AND t1.TimeFin > t2.TimeAns
```
This query will return 2 rows for each overlap. If you want just one row per overlap, this will do it:
```
SELECT * FROM myCalls t1
INNER JOIN myCalls t2 on t2.CallID > t1.CallID
AND t1.TimeAns < t2.TimeFin
AND t1.TimeFin > t2.TimeAns
```
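To see the strict `<`/`>` comparison exclude boundary-touching calls, here is a hedged SQLite sketch from Python (data made up; call 3 starts exactly when call 2 ends and is not reported):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myCalls (CallID INTEGER, TimeAns TEXT, TimeFin TEXT);
INSERT INTO myCalls VALUES
  (1, '10:00', '10:05'),
  (2, '10:02', '10:06'),
  (3, '10:06', '10:10');
""")

# One row per overlapping pair; zero-padded HH:MM strings compare correctly as text
rows = conn.execute("""
    SELECT t1.CallID, t2.CallID
    FROM myCalls t1
    INNER JOIN myCalls t2 ON t2.CallID > t1.CallID
        AND t1.TimeAns < t2.TimeFin
        AND t1.TimeFin > t2.TimeAns
""").fetchall()
print(rows)  # [(1, 2)]
```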
|
Two calls overlap when start or end of call 1 is between start and end of call 2. (No matter, which of the two calls you call "call 1" and which "call 2".)
```
WHERE
(
call1.start BETWEEN call2.start AND call2.end
OR
call1.end BETWEEN call2.start AND call2.end
)
```
EDIT: And make your mind up, if you consider two calls where one ends at the same time as the other starts overlapping or not. The BETWEEN clause does. So if you don't want this, replace this by `(call1.start > call2.start AND call1.start < call2.end)`.
EDIT: To find all overlapping call combinations:
```
select *
from calls
join calls other
on other.callid < calls.callid and
(
(calls.timeans > other.timeans and calls.timeans < other.timefin)
or
        (calls.timefin > other.timeans and calls.timefin < other.timefin)
)
;
```
|
Overlapping call durations
|
[
"",
"sql",
"sql-server",
"overlap",
""
] |
How do I get the exact year from an Oracle date if the given data is in this format?
```
02-FEB-84
```
I want to get the year, but when I do something like this
```
case
when dob is not null then to_date(dob,'YYYY')
when dob is null then '""'
end
as yearofbirth
```
I am getting the value of
```
2084
```
when I was expecting
```
1984
```
How do I solve this? My query is not smart enough to determine whether the year is from the 20th century or the 21st century.
|
Assuming `DOB` is `varchar2()` having date in format `DD-MON-YY`,
You might be looking for
```
TO_CHAR(TO_DATE(DOB,'DD-MON-RR'),'YYYY')
```
Conversion using `YY` format! Current century is 20\*\* so, 84 -> 2084
```
SQL> SELECT TO_CHAR(TO_DATE('02-FEB-84','DD-MON-YY'),'YYYY') FROM DUAL;
TO_C
----
2084
```
Conversion using `RR` format! Current century is 20\*\* and year is < 2050 so, 84 -> 1984
```
SQL> SELECT TO_CHAR(TO_DATE('02-FEB-84','DD-MON-RR'),'YYYY') FROM DUAL;
TO_C
----
1984
```
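The same two-digit-year ambiguity exists outside Oracle; Python's `%y` directive applies a comparable sliding-window rule, which makes the pitfall easy to demonstrate (this is an analogy to `RR`, not Oracle itself):

```python
from datetime import datetime

# '%y' maps 69-99 to 1969-1999 and 00-68 to 2000-2068,
# much like Oracle's RR format pivots within the current century
d = datetime.strptime('02-FEB-84', '%d-%b-%y')
print(d.year)  # 1984
```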
|
One more condition to your case statement should do it.
```
case
when to_date(dob) > sysdate then to_date(dob,'YYYY') - 100
when dob is not null then to_date(dob,'YYYY')
when dob is null then '""'
end
as yearofbirth
```
|
how to get the exact year from the given date in oracle if it has a minimal format to base from?
|
[
"",
"sql",
"oracle",
""
] |
I have a table "tblSalesOrder" in Microsoft T-SQL with some sample records:
```
SalesOrderID OrderDate ItemID Quantity PromotionCode
====================================================================
1 2014-09-01 100 5 NULL
2 2014-09-01 120 10 ABC
3 2014-09-05 150 7 NULL
4 2014-09-08 200 15 NULL
```
I need to return a NULL-friendly resultset for records which do not exist.
As an example, I want a monthly query for September 2014:
```
SELECT SalesOrderID, OrderDate, ItemID, Quantity, PromotionCode
FROM tblSalesOrder
WHERE OrderDate BETWEEN '2014-09-01' AND '2014-09-30'
```
I need it to return at least 1 row for each day (i.e. 0 valued row, if the entry for that day is not available)
```
SalesOrderID OrderDate ItemID Quantity PromotionCode
====================================================================
1 2014-09-01 100 5 NULL
2 2014-09-01 120 10 ABC
0 2014-09-02 0 0 0
0 2014-09-03 0 0 0
0 2014-09-04 0 0 0
3 2014-09-05 150 7 NULL
0 2014-09-06 0 0 0
0 2014-09-07 0 0 0
4 2014-09-08 200 15 NULL
0 2014-09-09 0 0 0
...
...
...
0 2014-09-30 0 0 0
```
|
You can `join` a date parameter in an empty `select` and `coalesce` the values:
```
select coalesce(t.SalesOrderID, 0) SalesOrderID
, coalesce(t.OrderDate, d.OrderDate) OrderDate
, coalesce(t.ItemID, 0) ItemID
, coalesce(t.Quantity, 0) Quantity
, coalesce(t.PromotionCode, 0) PromotionCode
from (select @dateParameter OrderDate) d
left
outer
join ( SELECT SalesOrderID, OrderDate, ItemID, Quantity, PromotionCode
FROM tblSalesOrder
) t
on t.OrderDate = d.OrderDate
```
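The same left-join-and-coalesce idea extends to a whole date range. Below is a hedged SQLite sketch (column list trimmed; a recursive CTE generates the date series, standing in for the T-SQL shown here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblSalesOrder (SalesOrderID INT, OrderDate TEXT, Quantity INT);
INSERT INTO tblSalesOrder VALUES
  (1, '2014-09-01', 5), (2, '2014-09-01', 10), (3, '2014-09-05', 7);
""")

# Generate every day in the range, then left join the orders onto it
rows = conn.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT '2014-09-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < '2014-09-04'
    )
    SELECT days.d,
           COALESCE(t.SalesOrderID, 0) AS SalesOrderID,
           COALESCE(t.Quantity, 0) AS Quantity
    FROM days
    LEFT JOIN tblSalesOrder t ON t.OrderDate = days.d
    ORDER BY days.d, t.SalesOrderID
""").fetchall()
print(rows)
# [('2014-09-01', 1, 5), ('2014-09-01', 2, 10),
#  ('2014-09-02', 0, 0), ('2014-09-03', 0, 0), ('2014-09-04', 0, 0)]
```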
|
master..spt\_values is a table present in all Microsoft SQL Server databases containing 2506 rows; by cross joining it with itself, 2506\*2506 rows are available to calculate the dates between from and to. Other tables could be used as well; this one is just used to generate the dates. A calendar table would be even easier to use.
The EXCEPT removes all dates already in use. Then, by combining the rows from tblSalesOrder and the CTE with UNION ALL, empty days are filled with the required hardcoded values:
```
DECLARE @from date = '2014-09-01'
DECLARE @to date = '2014-09-30'
;WITH CTE as
(
SELECT top (case when @to < @from then 0 else datediff(day, @from, @to) + 1 end)
dateadd(day, row_number() over (order by (select 1)) - 1, @from) OrderDate
FROM
master..spt_values t1
CROSS JOIN
master..spt_values t2
EXCEPT
SELECT
OrderDate
FROM
tblSalesOrder
)
SELECT
0 SalesOrderID, OrderDate, 0 ItemID, 0 Quantity, '0' PromotionCode
FROM
CTE
UNION ALL
SELECT
SalesOrderID, OrderDate, ItemID, Quantity, PromotionCode
FROM
tblSalesOrder
ORDER BY
OrderDate, SalesOrderId
```
|
T-SQL NULL-Friendly Query
|
[
"",
"sql",
"sql-server",
"null",
""
] |
I have the following stored procedure:
```
ALTER proc [dbo].[insertperoll] @name nvarchar(50) , @snum int , @gnum int
as
DECLARE @value nvarchar(10)
SET @value = 's'+CONVERT(nvarchar(50),@snum)
DECLARE @sqlText nvarchar(1000);
DECLARE @sqlText2 nvarchar(1000);
DECLARE @sqlText3 nvarchar(1000);
declare @g nvarchar(50) = '''g1'''
SET @sqlText = N'SELECT ' + @value + N' FROM dbo.GrideBtable'
SET @sqlText2 = ' where Gnumber = '+@g --here is the problem it error invalid column name -- the @g is value from the table condition
set @sqlText3 = @sqlText+@sqlText2
Exec (@sqlText3) -- here how can i save the result of the exec into varibale
declare @sal nvarchar(50) = @sqlText3
insert employ (name,Snumber,Gnumber,Salary) values(@name,@snum,@gnum,@sal)
```
QUESTION: How can I use a variable in the condition? When I exec it, it thinks `@g` is a column name, but it is not; it is a value from the table (used here just to test, which is why I display one value after the exec). The other QUESTION is how to save the result of the exec into a variable and then use that value.
I'm using SQL Server 2008 (9.0 RTM)
This will be a stored procedure
Thanks in advance
|
You could do this using `sp_executesql` instead of `exec()` since this will allow you to use parameters, you can use an output parameter to get the value from the query:
```
DECLARE @SQL NVARCHAR(MAX) = N'SELECT @val = ' + CONVERT(NVARCHAR(10),@snum) +
N' FROM dbo.GrideBtable WHERE Gnumber = @G1';
DECLARE @val INT; -- NOT SURE OF DATATYPE REQUIRED
EXECUTE sp_executesql @SQL, N'@G1 VARCHAR(20), @val INT OUT', 'G1', @val OUT;
```
|
Not sure why you would go through all those hoops to insert into the table when you can have a simple insert query like this:
```
ALTER PROC dbo.[insertperoll] @name nvarchar(50) , @snum int , @gnum int
AS
insert employ (name, Snumber, Gnumber, Salary)
select @name
        , @snum
, @gnum
, case when @snum = 1 then s1
when @snum = 2 then s2
when @snum = 3 then s3
when @snum = 4 then s4
end as Salary
from dbo.GrideBtable
where Gnumber = @gnum
```
|
Can I use variable in condition statement as value with where condition that test that value
|
[
"",
"sql",
"sql-server-2008",
"variables",
"stored-procedures",
""
] |
I have three tables I am trying to build a query against in Eloquent to get only the rows from two tables that are absent from a third table.
One is the table `histories` which references the other two tables `schedules` and `templates`. If there is a template schedule history, I don't want to get that schedule.
Normally I would expect this could be done using a `LEFT JOIN` on the third table where the id `IS NULL` but this doesn't work in this instance. Every query I have tried thus far always returns the null columns along with the row I don't want.
Here is a fiddle with the relevant schema and an example query:
<http://sqlfiddle.com/#!2/28aad/5>
How can I structure my query to get the results desired?
|
```
select
schedules . *,
schedules.id as 'schedule_id',
histories . *,
histories.id as 'history_id'
from
schedules
left join
histories ON schedules.id = histories.schedule_id
and schedules.template_id = histories.template_id
where
histories.id IS NOT NULL
;
```
This lists schedules IDs 1 & 2
```
select
schedules . *,
schedules.id as 'schedule_id',
histories . *,
histories.id as 'history_id'
from
schedules
left join
histories ON schedules.id = histories.schedule_id
and schedules.template_id = histories.template_id
where
histories.id IS NULL
;
```
This lists schedule ID 3 , but this does not match other conditions such as
* schedules.schedulable\_id = 16
see: <http://sqlfiddle.com/#!2/28aad/21>
|
Your query should look like this. Note, however, that your query is not returning any results because each schedule/template has a history record.
```
select
histories . *,
histories.id as 'history_id',
schedules . *,
schedules.id as 'schedule_id'
from
schedules
left join
histories ON schedules.id = histories.schedule_id
and schedules.template_id = histories.template_id
where
schedules.schedulable_id = 16
and schedules.schedulable_type = 'Item'
and histories.id IS NULL
```
|
Selecting rows absent from one table against two other tables?
|
[
"",
"mysql",
"sql",
"laravel-4",
"eloquent",
""
] |
I have two tables that store data for my wife's application.
I have no problem loading the profile from the customer table when there is a matching record in the services table.
But the MySQL code doesn't work when I query a customer record that doesn't have a row in the services table.
here is mysql code:
```
SELECT *
FROM customer, services
WHERE customer.cust_id = services.customer_id
AND customer.slug LIKE 'john-doe'
AND services.deleted = '0'
```
It works perfectly for `john-doe`, showing all the records from both the customer and services tables, but
if I change **john-doe** to **jeff-chew** it shows a record count of 0.
What I am trying to achieve is to use the same query for my profile view page.
If there is a services record for this customer, it should display all the records from the services table; if no services record is found, it should show only the profile data from the customer table.
here is [sqlfiddle](http://sqlfiddle.com/#!2/83871b/1)
|
You need a left outer join:
```
SELECT *
FROM customer c left outer join
services s
    ON c.cust_id = s.customer_id and s.deleted = '0'
WHERE c.slug LIKE 'john-doe';
```
A simple rule: Never use commas in the `from` clause. They are not needed. You should always use explicit `join` syntax, which is more expressive anyway (it handles outer joins, for instance).
Also, I added table aliases to the query, which makes it easier to read and write.
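The key point, putting the `deleted` filter in the `ON` clause rather than the `WHERE` clause, can be seen in a small SQLite sketch from Python (the `service` column and the row data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cust_id INT, slug TEXT);
CREATE TABLE services (customer_id INT, deleted TEXT, service TEXT);
INSERT INTO customer VALUES (1, 'john-doe'), (2, 'jeff-chew');
INSERT INTO services VALUES (1, '0', 'hosting');
""")

# The filter on s.deleted lives in the ON clause, so customers
# without services still come back (with NULL service columns)
query = """
    SELECT c.slug, s.service
    FROM customer c
    LEFT OUTER JOIN services s
        ON c.cust_id = s.customer_id AND s.deleted = '0'
    WHERE c.slug LIKE ?
"""
results = {slug: conn.execute(query, (slug,)).fetchall()
           for slug in ('john-doe', 'jeff-chew')}
print(results)
```

`jeff-chew` now comes back as one row with `NULL` service columns instead of an empty result.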
|
Use a `left join`. Remember that columns from the `left join` table are `null` if no match was found:
```
select *
from customer c
left join
services s
on s.customer_id = c.cust_id
and s.deleted = '0'
where c.slug LIKE 'john-doe'
```
|
mysql doesn't show result if the query option has empty record in one of the table
|
[
"",
"mysql",
"sql",
"select",
""
] |
I am learning SQL.
I have these records in my table:
```
STUDENT_NAME STATUS CLASS_ID ORDERING
----------------------------------------------
Adam 'FR' 10 1
Conan 'SE' 3 5
Chris 'SE' 4 8
Louis 'JR' null 10
Dave 'JR' 14 17
Alice 'SO' 14 8
Paul 'SE' 4 6
Tom 'SO' null 3
Eric 'FR' 14 5
Blake 'JR' 11 2
Ryan 'SE' 3 13
Matt 'FR' null 22
```
And I want to **order by ORDERING**, but I want to **group by its STATUS** (like the value which has same status be together).
For example, the result would look like this:
```
STUDENT_NAME STATUS CLASS_ID ORDERING
----------------------------------------------
Adam 'FR' 10 1
Eric 'FR' 14 5
Matt 'FR' null 22
Blake 'JR' 11 2
Louis 'JR' null 10
Dave 'JR' 14 17
Tom 'SO' null 3
Alice 'SO' 14 8
Conan 'SE' 3 5 // SAME CLASS_ID
Ryan 'SE' 3 13
Paul 'SE' 4 6 // SAME CLASS_ID
Chris 'SE' 4 8
```
Adam has lowest ORDERING value, so he goes the top. But his STATUS is 'FR', so the next student will be Eric who has lower ORDERING than Matt.
*EXTRA CONDITION:*
**If CLASS\_ID is not null, I want to group it based on CLASS\_ID**
Is it even possible to get the result with the clause 'SELECT'...'ORDER BY'? (Please say yes!)
I use this query, but it doesn't work well:
```
SELECT ... FROM STUDENTS stu
ORDER BY
(SELECT MIN(S.ORDERING) FROM STUDENTS S WHERE S.STATUS = 'FR' AND stu.STATUS = 'FR'),
(SELECT MIN(S.CLASS_ID) FROM STUDENTS S WHERE S.CLASS_ID = stu.CLASS_ID),
stu.CLASS_ID ASC,
stu.ORDERING ASC
```
I could use the CASE clause too. I need your help!
|
Try this. I wrote this for MySQL; it may work for SQL Server also. It will group NULL CLASS\_IDs together (if there are any). I can't think of another way to do it. It gives the results you are looking for.
```
SELECT stu.*
FROM STUDENTS stu
ORDER BY
(SELECT MIN(S.ORDERING) FROM STUDENTS S WHERE S.STATUS = stu.STATUS),
(SELECT MIN(S.ORDERING) FROM STUDENTS S WHERE S.STATUS = stu.STATUS AND IFNULL(S.CLASS_ID, '') = IFNULL(stu.CLASS_ID, '')),
stu.ORDERING
```
SQL Fiddle: [sqlfiddle.com/#!2/a5a10/10/0](http://sqlfiddle.com/#!2/a5a10/10/0)
|
You can simply do
```
SELECT * FROM students
ORDER BY STATUS, ORDERING, CLASS_ID
```
Here's a [fiddle](http://sqlfiddle.com/#!2/cf792/11).
|
Order By clause on multiple fields with group
|
[
"",
"sql",
"sql-server",
"sql-order-by",
""
] |
I have following 3 tables
```
ITEM table
===========
id desc
===========
IN table
=======================
id id_item number
=======================
OUT table
=========================
id id_item number
=========================
```
and this data on
item:
1 - GECO
in:
1 - 1 - 40
2 - 1 - 2
out:
1 - 1 - 3
2 - 1 - 2
3 - 1 - 3
4 - 1 - 2
This is my query:
```
SELECT item.id,
SUM(in.number) AS Sum_IN,
SUM(out.number) AS Sum_OUT,
(SUM(in.number) - SUM(out.number)) AS Dif
FROM item
LEFT OUTER JOIN IN ON item.id = IN.id_item
LEFT OUTER JOIN OUT ON item.id = OUT.id_item
GROUP BY item.id
```
And this is the result:
id - Sum\_IN - Sum\_OUT
1 - 168 - 20
But I want:
id - Sum\_IN - Sum\_OUT
1 - 42 - 10
Where is the problem in my query?
|
```
SELECT item.id, A.Sum_IN, B.Sum_OUT, (A.Sum_IN - B.Sum_OUT) AS Dif
from item
left outer join(
select id_item as id, SUM(number) AS Sum_IN
FROM in GROUP BY id_item) as A ON item.id = A.id
left outer join(
select id_item as id, SUM(number) AS Sum_OUT
FROM out GROUP BY id_item) as B ON item.id = B.id
```
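A runnable check of the pre-aggregate-then-join approach, using SQLite from Python (`in`/`out` are reserved words, so this sketch renames the tables `t_in`/`t_out`; data is taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (id INT);
CREATE TABLE t_in  (id_item INT, number INT);
CREATE TABLE t_out (id_item INT, number INT);
INSERT INTO item  VALUES (1);
INSERT INTO t_in  VALUES (1, 40), (1, 2);
INSERT INTO t_out VALUES (1, 3), (1, 2), (1, 3), (1, 2);
""")

# Summing inside subqueries avoids the row multiplication a double join causes
rows = conn.execute("""
    SELECT item.id, A.Sum_IN, B.Sum_OUT, A.Sum_IN - B.Sum_OUT AS Dif
    FROM item
    LEFT JOIN (SELECT id_item, SUM(number) AS Sum_IN  FROM t_in  GROUP BY id_item) A
        ON item.id = A.id_item
    LEFT JOIN (SELECT id_item, SUM(number) AS Sum_OUT FROM t_out GROUP BY id_item) B
        ON item.id = B.id_item
""").fetchall()
print(rows)  # [(1, 42, 10, 32)]
```

Joining both detail tables directly would instead pair every IN row with every OUT row (2 x 4 rows), inflating the sums to 168 and 20.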
|
Try with INNER JOIN like
```
SELECT `item`.`id`,
SUM(`in`.`number`) AS Sum_IN,
SUM(`out`.`number`) AS Sum_OUT
FROM item
INNER JOIN `IN` ON item.id = `IN`.id_item
INNER JOIN `OUT` ON item.id = `OUT`.id_item
GROUP BY item.id
```
|
SQL query of Sum from 3 tables
|
[
"",
"mysql",
"sql",
"sum",
""
] |
I'm trying to concatenate lastName(3 letters) + firstName(2 letters) to create a new username. When I used this script, it created new records for the concatenation results.
```
USE [database]
INSERT INTO table (username)
SELECT SUBSTRING(lastName, 1, 3) + SUBSTRING(firstName, 1, 2)
FROM [database].[dbo].table
```
Could someone show me how to update the original records with the new username instead?
|
You want an update instead of an insert. The following will update all records with the new username. If you want a more specific update for just a single record, you will need a WHERE clause to identify the particular rows.
```
UPDATE table
SET username = SubString(lastName, 1, 3) + SubString(firstName, 1, 2);
```
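A minimal sketch of the `UPDATE` in SQLite from Python (SQLite concatenates with `||` and uses `substr()` where T-SQL uses `+` and `SUBSTRING()`; the table and names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (lastName TEXT, firstName TEXT, username TEXT);
INSERT INTO users (lastName, firstName) VALUES ('Smith', 'Jane'), ('Doe', 'John');
""")

# UPDATE writes the derived username back onto the existing rows,
# where INSERT would have created new rows instead
conn.execute("UPDATE users SET username = substr(lastName, 1, 3) || substr(firstName, 1, 2)")
rows = conn.execute("SELECT username FROM users ORDER BY username").fetchall()
print(rows)  # [('DoeJo',), ('SmiJa',)]
```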
|
I'm not sure whether I get your question right, but shouldn't the following work?
```
UPDATE table
SET username = SUBSTRING(lastName, 1, 3) + SUBSTRING(firstName, 1, 2)
```
This updates the `username` of every record in the table.
|
SQL Server query results as INSERT
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I wasted the whole day on one query without success, **SOS**, I need help :) Given a **@CustomerId**, I need to query all the Products that the customer's linked seller can sell but has **not** sold to him before; the Commissions table indicates which products a seller can sell.

**Thanks in advance**
|
```
SELECT sellableProduct
FROM (SELECT Comissions.ProductId AS sellableProduct, Sellers.SellerId AS sellableSeller
      FROM Comissions
      INNER JOIN Sellers ON Comissions.SellerId = Sellers.SellerId
      INNER JOIN Customers ON Sellers.SellerId = Customers.SellerId
      WHERE Customers.CustomerId = @customerid) AS tblSellable
LEFT JOIN (SELECT Sales.ProductId AS soldProduct, Customers.SellerId AS soldSeller
           FROM Customers
           INNER JOIN Sales ON Customers.CustomerId = Sales.CustomerId
           WHERE Customers.CustomerId = @customerid) AS tblSold
    ON tblSellable.sellableProduct = tblSold.soldProduct
   AND tblSellable.sellableSeller = tblSold.soldSeller
WHERE tblSold.soldProduct IS NULL
```
this way you avoid time-consuming IN statements
If a Customer can only have one Seller, then you can omit the seller link:
```
SELECT sellableProduct
FROM (SELECT Comissions.ProductId AS sellableProduct
      FROM Comissions
      INNER JOIN Sellers ON Comissions.SellerId = Sellers.SellerId
      INNER JOIN Customers ON Sellers.SellerId = Customers.SellerId
      WHERE Customers.CustomerId = @customerid) AS tblSellable
LEFT JOIN (SELECT Sales.ProductId AS soldProduct
           FROM Sales
           WHERE Sales.CustomerId = @customerid) AS tblSold
    ON tblSellable.sellableProduct = tblSold.soldProduct
WHERE tblSold.soldProduct IS NULL
```
|
Basically, you're looking for products that have a record in `commissions`, but not in `sales`. Using `:id` to denote the specific ID:
```
SELECT *
FROM products
WHERE productid IN (SELECT productid
FROM commissions
WHERE sellerid = :id) AND
productid NOT IN (SELECT productid
FROM sales
                         JOIN customers ON sales.customerid = customers.customerid
WHERE sellerid = :id)
```
|
SQL nested SELECT with JOIN
|
[
"",
"sql",
"sql-server",
""
] |
I am running a query in order to get student count by cities.
A table (`Address`) holds records of students' address information.
```
AddressID, StudentID, Address, City, Country, Telephone
```
are some fields of the Address table. There is one point I must explain: City column values consist of 7 characters.
Each city has a unique offset number starting with 1, and the first 9 cities lose the 0 at the beginning.
**CASE I**: If the student resides in city 09, in the city center, its city column holds `900000`. As you can see, city codes less than 10 have a 6-character length.
**CASE II**: If the student resides in a city but in a town of it, the city code changes according to the town's code, like `900001`.
Now I am querying the student count for each city with the query below, but I get repetitive rows for some cities: it counts students living in the city center and in each town separately, so I get a result like this:
```
stuCnt cityCode
------ --------
102 01
240 02
21 03
54 03
SELECT COUNT(StudentID) AS stuCnt,
RIGHT('00' + ISNULL(LEFT(City, LEN(City) - 5), ''), 2) AS cityCode
FROM Address
GROUP BY City
ORDER BY City
```
How can I get a result like the following?
```
stuCnt cityCode
------ --------
102 01
240 02
75 03
```
|
You can just group by the calculated value:
```
select
count(StudentID) as stuCnt,
right('00' + isnull(left(City, len(City) - 5), ''), 2) as cityCode
from
address
group by
right('00' + isnull(left(City, len(City) - 5), ''), 2)
order by
right('00' + isnull(left(City, len(City) - 5), ''), 2)
```
In some SQL dialects you can use `cityCode` in the `ORDER BY` rather than rewriting the calculation:
```
select
count(StudentID) as stuCnt,
right('00' + isnull(left(City, len(City) - 5), ''), 2) as cityCode
from
address
group by
right('00' + isnull(left(City, len(City) - 5), ''), 2)
order by
cityCode
```
|
You can use the first query in an Inner `SELECT`, and group those results with a `SUM`:
```
SELECT SUM(stuCnt) As stuCnt,
cityCode
FROM
(
SELECT COUNT(StudentID) AS stuCnt,
RIGHT('00' + ISNULL(LEFT(City, LEN(City) - 5), ''), 2) AS cityCode
FROM Address
GROUP BY City
) X
GROUP BY cityCode
ORDER BY cityCode
```
|
How group again a grouped query in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have a changelog with insert / update / delete operations:
```
change_id | object_id | operation
----------+-----------+----------
1 | 1 | insert
2 | 2 | insert
3 | 1 | delete
4 | 1 | insert
5 | 3 | insert
6 | 2 | delete
7 | 4 | insert
8 | 3 | update
```
I need to select only the last row for each `object_id` and keep the result sorted by `change_id`. The result should look like this:
```
change_id | object_id | operation
----------+-----------+----------
4 | 1 | insert
6 | 2 | delete
7 | 4 | insert
8 | 3 | update
```
How can I do this? Is it possible with a simple query, without stored procedures?
|
[**SQL Fiddle**](http://sqlfiddle.com/#!6/68745/1/0):
```
SELECT c.change_id, c.object_id, c.operation
FROM
(
SELECT MAX(change_id) AS CID
FROM changelog
GROUP BY object_id
) s
INNER JOIN changelog c on c.change_id = s.CID
```
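The MAX-per-group-then-join pattern runs unchanged on SQLite, so it can be checked against the question's data from Python (the `ORDER BY` is added here only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE changelog (change_id INTEGER, object_id INTEGER, operation TEXT);
INSERT INTO changelog VALUES
  (1,1,'insert'), (2,2,'insert'), (3,1,'delete'), (4,1,'insert'),
  (5,3,'insert'), (6,2,'delete'), (7,4,'insert'), (8,3,'update');
""")

# Find the highest change_id per object, then join back for the full row
rows = conn.execute("""
    SELECT c.change_id, c.object_id, c.operation
    FROM (SELECT MAX(change_id) AS CID FROM changelog GROUP BY object_id) s
    INNER JOIN changelog c ON c.change_id = s.CID
    ORDER BY c.change_id
""").fetchall()
print(rows)
# [(4, 1, 'insert'), (6, 2, 'delete'), (7, 4, 'insert'), (8, 3, 'update')]
```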
|
```
;WITH MyCTE AS
(
SELECT change_id,
object_id,
operation,
ROW_NUMBER() OVER(PARTITION BY object_id ORDER BY change_id DESC) AS rn
FROM ChangeLog
)
SELECT change_id,
object_id,
operation
FROM MyCTE
WHERE rn = 1
```
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/e5b87/2)**
|
MS SQL - Select only one row for an ID
|
[
"",
"sql",
"sql-server",
"greatest-n-per-group",
""
] |
I have a UserTypes table like this:
```
UserTypeID | ParentUserTypeID | Name
1 NULL Person
2 NULL Company
3 2 IT
4 3 Accounting Software
```
What I want to do is get all the `ParentUserTypeID` when a UserTypeID is passed to the CTE. I have got this so far but it only returns one `ParentUserTypeID` rather than all of them in the tree.
```
;WITH CTE
AS
(
SELECT
c1.ParentUserTypeID,
c1.UserTypeID,
c1.Name
FROM
dbo.UserTypes c1
WHERE
ParentUserTypeID IS NULL
UNION ALL
SELECT
c2.ParentUserTypeID,
c2.UserTypeID,
c2.Name
FROM
dbo.UserTypes c2
INNER JOIN CTE
ON c2.ParentUserTypeID = CTE.UserTypeID
)
SELECT
ParentUserTypeID,
UserTypeID
FROM
CTE
WHERE
UserTypeID = 4
```
I get a result of:
```
UserTypeID | ParentUserTypeID
4 3
```
## **EDIT/UPDATE**
What I actually need is a list of all the `UserTypeID`s in the hierarchy tree associated with the specified `@UserTypeID`. A bit like this:
```
UserTypeID | ParentUserTypeID
2 NULL
3 2
4 3
```
I can then use my application to loop around this and insert a UserID along with the UserTypeIDs applicable something like this:
```
<cfloop query="rsUserTypeIDs"> // loops around the UserTypeIDs brought back from the CTE
INSERT INTO dbo.User_UserType (UserID, UserTypeID)
VALUES
(
#UserID# // Provided by the web application
#rsUserTypeIDs.UserTypeID# // Each time it loops it gets the UserTypeID from the CTE
)
</cfloop>
```
The above code is not relevant to this question. I put it there so you understand what I'm doing with the results of the CTE.
How can I correct the recursive CTE so that it returns the `UserTypeID`s all the way up to the top of the tree instead of just the one row showing its immediate `ParentUserTypeID`?
|
I don't know how much you are allowed or willing to change the structure, but you probably need to define the UserTypeID you are looking up in the CTE root, e.g.:
```
;WITH CTE
AS
(
SELECT
c1.ParentUserTypeID,
c1.UserTypeID,
c1.Name
FROM
dbo.UserTypes c1
WHERE
UserTypeID = 4
UNION ALL
SELECT
c2.ParentUserTypeID,
c2.UserTypeID,
c2.Name
FROM
dbo.UserTypes c2
INNER JOIN CTE
ON c2.UserTypeID = CTE.ParentUserTypeID
)
SELECT
ParentUserTypeID,
UserTypeID
FROM
CTE
```
[SQL Fiddle results](http://sqlfiddle.com/#!6/01262/6)
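SQLite's `WITH RECURSIVE` behaves the same way, so the walk up the tree can be sketched and checked from Python (data from the question; this is an illustration, not the exact T-SQL above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE UserTypes (UserTypeId INT PRIMARY KEY, ParentUserTypeId INT, Name TEXT);
INSERT INTO UserTypes VALUES
  (1, NULL, 'Person'), (2, NULL, 'Company'),
  (3, 2, 'IT'), (4, 3, 'Accounting Software');
""")

# Anchor on the requested node, then repeatedly join to each parent
rows = conn.execute("""
    WITH RECURSIVE cte AS (
        SELECT ParentUserTypeId, UserTypeId FROM UserTypes WHERE UserTypeId = 4
        UNION ALL
        SELECT u.ParentUserTypeId, u.UserTypeId
        FROM UserTypes u
        INNER JOIN cte ON u.UserTypeId = cte.ParentUserTypeId
    )
    SELECT UserTypeId, ParentUserTypeId FROM cte ORDER BY UserTypeId
""").fetchall()
print(rows)  # [(2, None), (3, 2), (4, 3)]
```

The recursion stops naturally once it reaches a row whose `ParentUserTypeId` is NULL.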
|
[SQL Fiddle](http://sqlfiddle.com/#!6/206af/6)
**MS SQL Server 2012 Schema Setup**:
```
CREATE TABLE UserTypes
(
UserTypeId INT PRIMARY KEY,
ParentUserTypeId INT NULL,
Name VARCHAR(50)
)
INSERT INTO UserTypes
VALUES
(1,NULL, 'Person'),
(2, NULL, 'Company'),
(3,2, 'IT'),
(4,3,'Accounting Software')
```
**Query 1**:
```
WITH CTE
AS
(
SELECT
c1.ParentUserTypeID,
c1.UserTypeID,
c1.Name,
1 as Level
FROM
UserTypes c1
WHERE
UserTypeId =4
UNION ALL
SELECT
c2.ParentUserTypeID,
c2.UserTypeId,
c2.Name,
CTE.Level + 1 As Level
FROM UserTypes c2
INNER JOIN CTE
ON CTE.ParentUserTypeID = C2.UserTypeId
)
SELECT UserTypeId, ParentUserTypeID
FROM CTE
ORDER BY Level Desc
```
**[Results](http://sqlfiddle.com/#!6/206af/6/0)**:
```
| USERTYPEID | PARENTUSERTYPEID |
|------------|------------------|
| 2 | (null) |
| 3 | 2 |
| 4 | 3 |
```
|
Recursive CTE with a Where clause not returning all results
|
[
"",
"sql",
"sql-server",
"recursion",
""
] |
I have a simple table which puzzles me:
```
CREATE TABLE [dbo].[T3]
(
[id] [int] IDENTITY(1,1) NOT NULL,
[S_Id] [int] NULL,
[P_Id] [int] NULL,
[level] [int] NULL,
[Path] [nvarchar](255) NULL
) ON [PRIMARY]
```
In the table is data
```
id S_Id P_Id level Path
------------------------------------
1 218252 218231 1 218231
2 218271 218252 1 218252
3 218271 218252 2 218231-218252
```
**EDIT:**
I am trying to get the
```
ID, S_ID, P_ID, level, Path
```
for the row with the maximum length of the `Path` column.
It should return id 3.
If I try to get max len from path like this:
```
select
b.id, a.p_id, a.s_id,
max(len(a.path)) as Path,
a.path
from
t3 a, t3 b
where
b.id = a.id
group by
a.p_id , a.s_id, b.id , a.path
order by
1
```
I get all the data, not just the row with id 3. Why?
|
If you only want the max path record... Correct me if I'm wrong.
```
;WITH tmp AS (select TOP 1 id from T3 ORDER BY LEN(path) DESC)
select t.*
from T3 t
inner join tmp on tmp.id = t.id
```
---
## Updates
```
;WITH tmp AS (select id, row_number() over (partition by S_Id, P_Id order by len(path) DESC) as rn from T3)
select t.*
from T3 t
inner join tmp on tmp.id = t.id
WHERE tmp.rn = 1
```
|
I tried to keep it simple....There are other methods (mentioned already) but I think you need to start slow... :)
```
declare @maxLen int
select @maxLen = max(len(path))
from t3
select * from t3
where len (path) = @maxLen
```
|
Simple max() function and inner join T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
In SQL Server 2008 R2, I want to display only the rows where certain columns are not equal to '0'.
For example: I want to display only the rows where columns 2 and 3 are not both equal to '0'.
> line 1 : val1 = 2, val2 = 2, val3 = 0
>
> line 2 : val1 = 1, val2 = 0, val3 = 0
>
> line 3 : val1 = 0, val2 = 0, val3 = 5
>
> line 4 : val1 = 10, val2 = 0, val3 = 3
>
> line 5 : val1 = 0, val2 = 0, val3 = 0
In this example, I want to display lines 1, 3 and 4 but not lines 2 and 5, because in those both columns 2 and 3 are '0'.
I tried with `NOT EXISTS` but it didn't work.
EDIT:
Hi, I think it's difficult for me to explain clearly what I want, so here is my full query; perhaps you can work out my need from it:
```
select *
from CONSULTANT
inner join REPORTINGCONSULTANT on CONSULTANT.ID = REPORTINGCONSULTANT.FK_CONSULTANT
where [CONSULTANT].[ISDESACTIVE] = '0'
  and [CONSULTANT].[ISSUPPRIME] = '0'
  and [CONSULTANT].[INITIALES] not like 'IL%'
  and [REPORTINGCONSULTANT].[DATEDEBUT] >= '02/06/2014'
  and [REPORTINGCONSULTANT].[NBCANDIDATSPUSH] <> '0'
  and [REPORTINGCONSULTANT].[NBCVENVOYESURPOSTE] <> '0'
  and [REPORTINGCONSULTANT].[NBRDVPROSPECTS] <> '0'
  and [REPORTINGCONSULTANT].[NBRDVCLIENTS] <> '0'
  and [REPORTINGCONSULTANT].[NBPROSPECTSRENCONTRES] <> '0'
  and [REPORTINGCONSULTANT].[NBPROSPECTSRENCONTRESBINOME] <> '0'
  and [REPORTINGCONSULTANT].[NBCLIENTSRENCONTRES] <> '0'
  and [REPORTINGCONSULTANT].[NBCLIENTSRENCONTRESBINOME] <> '0'
```
When I run only the beginning (stopping after the date condition), it returns 1700 results.
If I run the full query, it returns 0 results,
but I can find rows where all the conditions are true.
|
## Count the number of values not equal to 0 by `sign(abs(x))`.
```
SELECT t.*
FROM dbo.TableName t
WHERE sign(abs(val1))+sign(abs(val2))+sign(abs(val3)) >1
```
It implements the requirement of :
> display only lines where some columns are not equal to '0'
If "some" means val2 and val3, it could be:
```
SELECT t.*
FROM dbo.TableName t
WHERE sign(abs(val2))+sign(abs(val3)) >0
```
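A runnable sketch of the second query using SQLite from Python; note that `SIGN()` is missing from older SQLite builds, so the sketch registers its own (row data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite only ships SIGN() in recent versions, so register one explicitly
conn.create_function("sign", 1, lambda x: (x > 0) - (x < 0))
conn.executescript("""
CREATE TABLE t (line INTEGER, val1 INTEGER, val2 INTEGER, val3 INTEGER);
INSERT INTO t VALUES (1,2,2,0), (2,1,0,0), (3,0,0,5), (4,10,0,3), (5,0,0,0);
""")

# sign(abs(x)) is 1 for any non-zero x, so the sum counts non-zero columns
rows = conn.execute(
    "SELECT line FROM t WHERE sign(abs(val2)) + sign(abs(val3)) > 0"
).fetchall()
print(rows)  # [(1,), (3,), (4,)]
```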
|
> i want to display only lines where columns 2 and 3 are <> '0'
I don't see the particular problem
```
SELECT t.*
FROM dbo.TableName t
WHERE t.val2 <> 0 AND t.val3 <> 0
```
|
SQL Server Display only lines with all values not equal to 0
|
[
"",
"sql",
"sql-server-2008-r2",
"filtering",
""
] |
I have a problem getting my SEQUENCE.
I have created a sequence as administrator and have granted SELECT and ALTER privileges to the other user.
```
CREATE SEQUENCE "OWNER"."TOT_SEQ" MINVALUE 1000 MAXVALUE 1000000000 INCREMENT BY 1 START WITH 1000 CACHE 20 NOORDER NOCYCLE ;
grant select,ALTER on TOT_SEQ to user;
commit;
```
When I do this :
```
select sequence_name from all_sequences;
TOT_SEQ
```
I can see my SEQUENCE in the list.
But I can't access the sequence in my code using:
```
select <sequence_name>.nextval from dual;
```
What am I doing wrong?
|
You will either have to fully qualify your sequence via:
```
SELECT <owner>.<sequence name>.nextval FROM dual;
```
Or create a public synonym for it:
```
CREATE PUBLIC SYNONYM TOT_SEQ for OWNER.TOT_SEQ;
SELECT TOT_SEQ.nextval FROM DUAL;
```
|
Make sure that you create the sequence in uppercase, even if you use lower case in trigger / select statement.
|
sequence doesn't exist ORA-02289
|
[
"",
"sql",
"oracle",
""
] |
I need to query the same column for different values depending on another relational value.
Table is set up like this : athleteID, meetName, eventName, score
Events are all the same but there are Different Meets and my query needs to return: aid, event, score from meetName = 'whatever1', score from meetname = 'whatever2'
I've tried every basic way of completing this but cannot do it. I lastly tried:
```
SELECT distinct athleteID, event,
(select score from performances where meetName='Snowflake') as SnowScore,
       (select score from performances where meetName='Valentine') as ValScore
from performances
where event='high jump'
```
which returns: single-row subquery returns more than one row
My expected result would be like this:
```
aid, event, SnowScore, ValScore
1 , high jump, 6, 8
2 , high jump, 3, 5
3, high jump, 8, 10
```
|
I would like to add that a Natural Inner Join is what should've been done here for basic (non-commercial) SQL.
The syntax would've been: `select * from (subquery1) NIJ (subquery2)`
The subqueries syntax:
```
select athleteID, score as ValScore from performances NIJ athletes where meet = 'Valentin' and event = 'beam'
```
and
```
select athleteID, score as SnowScore from performances NIJ athletes where meet = 'SnowFlake' and event = 'beam'
```
|
The question does not stipulate an RDBMS; my answer is for SQL Server.
If you wanted to use a subquery you would need to reference the athleteID and eventName; also, if there were more than one result (not clear from your question, but I assume athletes compete at multiple meets) you would need to aggregate.
There may be a better way but as a simple one off query I would probably do it like:
```
SELECT athleteID, eventName,
sum(CASE WHEN meetName='Snowflake' THEN score ELSE 0 END) as SnowScore,
sum(CASE WHEN meetName='Valentine' THEN score ELSE 0 END) as ValScore
FROM performances
GROUP BY athleteID, eventName
```
A better longer term solution would be with PIVOT and if the meetNames will change over time you can create dynamic pivot queries, a good example I found is [here](https://stackoverflow.com/questions/10986031/sql-dynamic-view-with-column-names-based-on-column-values-in-source-table)
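The conditional-aggregation query itself ports directly to SQLite, which makes it easy to check against the expected result in the question (row data made up to match it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE performances (athleteID INT, meetName TEXT, eventName TEXT, score INT);
INSERT INTO performances VALUES
  (1, 'Snowflake', 'high jump', 6),  (1, 'Valentine', 'high jump', 8),
  (2, 'Snowflake', 'high jump', 3),  (2, 'Valentine', 'high jump', 5),
  (3, 'Snowflake', 'high jump', 8),  (3, 'Valentine', 'high jump', 10);
""")

# Each CASE picks out one meet's score; SUM collapses the rows per athlete
rows = conn.execute("""
    SELECT athleteID, eventName,
           SUM(CASE WHEN meetName = 'Snowflake' THEN score ELSE 0 END) AS SnowScore,
           SUM(CASE WHEN meetName = 'Valentine' THEN score ELSE 0 END) AS ValScore
    FROM performances
    WHERE eventName = 'high jump'
    GROUP BY athleteID, eventName
    ORDER BY athleteID
""").fetchall()
print(rows)
# [(1, 'high jump', 6, 8), (2, 'high jump', 3, 5), (3, 'high jump', 8, 10)]
```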
|
SQL query same column twice
|
[
"",
"sql",
""
] |
I need to assign the output of `split` to a variable. How can I do this? I tried this:
```
DECLARE @StartMonth nvarchar(5) = select top 1 Value from dbo.split('jun - may', '-')
```
Step 1:
```
select top 1 Value from dbo.split('jun - may', '-')
```
Output is `jun`
Step 2:
I need to assign it to the variable `@StartMonth`
|
Try this
```
DECLARE @StartMonth nvarchar(5)
select top 1 @StartMonth = Value from dbo.split('jun - may', '-')
```
|
Please try the following:
```
DECLARE @StartMonth nvarchar(5)
select top 1 @StartMonth = Value from dbo.split('jun - may', '-')
```
|
how to get the cell value from table and assign to the variable in sql
|
[
"",
"sql",
"sql-server",
""
] |
In T-SQL I am after a constraint that will stop duplicate records being inserted for the flag.
For each [FK\_Something] the flag `[MyFlag]` can only be set once to value 1 but can be set to 0 many times.
Is this possible and is so how without using triggers?
I am using SQL Server 2008 R2.
```
set nocount on
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[MyTable]') AND type in (N'U'))
DROP TABLE [dbo].[MyTable]
GO
CREATE TABLE [dbo].[MyTable]
(
[id] [int] IDENTITY(1,1) NOT NULL,
[FK_Something] [int] NOT NULL,
[MyFlag] [bit] NOT NULL
) ON [PRIMARY]
GO
-- Valid inserts --
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,1);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,1);
---------------------------------------
-- Inserts Should Fail
-- As [MyFlag] set for [FK_Something]
---------------------------------------
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,1)
GO
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,1)
GO
-- Constraint should apply this logic --
Declare @FK_Something INT
Declare @MyFlag INT
set @FK_Something = 2
Set @MyFlag = 0
if( @myflag = 0 OR
not exists(Select 1 from [MyTable] where FK_Something = @FK_Something and MyFlag = @MyFlag))
BEGIN
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (@FK_Something,@MyFlag);
print 'Inserted ';
END
GO
select * from [MyTable]
```
|
Thanks to Mack for his suggestion of using a function. I had to change the function logic because it did not allow the first record to be inserted. I have posted the complete solution below for neatness. TomTom's solution using a filtered index does not involve a function, so many companies would find it more acceptable. Both are in the code below; have a play and choose the right solution for your needs.
```
set nocount on
ALTER TABLE myTable drop CONSTRAINT chkFkPk
go
drop FUNCTION dbo.CheckMyTableFlag;
go
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[MyTable]') AND type in (N'U'))
DROP TABLE [dbo].[MyTable]
GO
CREATE TABLE [dbo].[MyTable]
(
[id] [int] IDENTITY(1,1) NOT NULL,
[FK_Something] [int] NOT NULL,
[MyFlag] [bit] NOT NULL
) ON [PRIMARY]
GO
/*
CREATE FUNCTION dbo.CheckMyTableFlag(@FK_Something INT)
RETURNS INT
AS
BEGIN
Declare @retVal INT = 0
Select @retVal = sum(cast([myflag] as int)) from MyTable WHERE FK_Something = @FK_Something group by FK_Something
RETURN @retVal
END;
GO
ALTER TABLE myTable ADD CONSTRAINT chkFkPk CHECK (dbo.CheckMyTableFlag(FK_Something) <2 );
*/
go
CREATE UNIQUE NONCLUSTERED INDEX IX_Whatever
ON MyTable (FK_Something, myFlag)
WHERE MyFlag = 1
go
---- Valid inserts --
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,1);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,0);
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,1);
-----------------------------------------
---- Inserts Should Fail
---- As [MyFlag] set for [FK_Something]
-----------------------------------------
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (1,1)
GO
INSERT INTO [MyTable] ([FK_Something],[MyFlag]) VALUES (2,1)
GO
select [id],[FK_Something],[MyFlag] from mytable
go
```
|
Indices.
> For each [FK\_Something] the flag [MyFlag] can only be set once to
> value 1 but can be set to 0 many times.
That is one simple index:
* FK\_Something and MyFlag 1 - unique
The trick here is to not apply the unique index on ALL rows, which is why you add a filter.
<http://msdn.microsoft.com/en-us/library/cc280372.aspx>
explains the syntax which would look like:
```
CREATE UNIQUE NONCLUSTERED INDEX IX_Whatever
ON MyTable (FK_Something, myFlag)
WHERE MyFlag = 1
```
Simpler than a constraint and guaranteed to work.
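SQLite supports the same idea as a partial index, which makes the behaviour easy to demonstrate locally. A sketch (table layout simplified from the question; in SQLite indexing `FK_Something` alone with the filter is enough):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (id INTEGER PRIMARY KEY,"
             " FK_Something INT NOT NULL, MyFlag INT NOT NULL)")
# Filtered (partial) unique index: uniqueness is enforced only where MyFlag = 1.
conn.execute("CREATE UNIQUE INDEX IX_Whatever ON MyTable (FK_Something) WHERE MyFlag = 1")

# Many zero-flag rows per FK, plus one flag=1 row each: all allowed.
conn.executemany("INSERT INTO MyTable (FK_Something, MyFlag) VALUES (?, ?)",
                 [(1, 0), (1, 1), (1, 0), (2, 0), (2, 1)])

try:
    # A second MyFlag = 1 for FK_Something = 1 violates the filtered index.
    conn.execute("INSERT INTO MyTable (FK_Something, MyFlag) VALUES (1, 1)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

The failed insert is rejected while every zero-flag duplicate goes through, which is exactly the constraint the question asks for.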
|
Unique grouped constraint on SQL Server table
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I have the following sql:
```
select `items`.`id`, `items`.`item_url`, `items`.`title`, `items`.`description`, `items`.`item_image_name`, `items`.`downloads`, `items`.`likes`, `items`.`date`
from `items` left join `categories` as `c` on `c`.`id` = `items`.`category_id`
left join `item_tags` as `it` on `it`.`item_id` = `items`.`id`
inner join `tags` as `t` on `t`.`id` = `it`.`tag_id`
where (`items`.`title` LIKE '%psd%' or `c`.`name` LIKE '%psd%' or `t`.`name` LIKE '%psd%')
order by `items`.`date` desc limit 12
```
But for some reason some rows from the items table repeat multiple times. Is there a way to take results from items without repeating the same item id?
|
You may try to use **DISTINCT** if you want to avoid duplicate data. Something like this:
```
select DISTINCT `items`.`id`, `items`.`item_url`, `items`.`title`, `items`.`description`, `items`.`item_image_name`, `items`.`downloads`, `items`.`likes`, `items`.`date`
from `items` left join `categories` as `c` on `c`.`id` = `items`.`category_id`
left join `item_tags` as `it` on `it`.`item_id` = `items`.`id`
inner join `tags` as `t` on `t`.`id` = `it`.`tag_id`
where (`items`.`title` LIKE '%psd%' or `c`.`name` LIKE '%psd%' or `t`.`name` LIKE '%psd%')
order by `items`.`date` desc limit 12
```
or you can use **GROUP BY** also
```
select `items`.`id`, `items`.`item_url`, `items`.`title`, `items`.`description`, `items`.`item_image_name`, `items`.`downloads`, `items`.`likes`, `items`.`date`
from `items` left join `categories` as `c` on `c`.`id` = `items`.`category_id`
left join `item_tags` as `it` on `it`.`item_id` = `items`.`id`
inner join `tags` as `t` on `t`.`id` = `it`.`tag_id`
where (`items`.`title` LIKE '%psd%' or `c`.`name` LIKE '%psd%' or `t`.`name` LIKE '%psd%')
group by `items`.`id`, `items`.`item_url`, `items`.`title`, `items`.`description`, `items`.`item_image_name`, `items`.`downloads`, `items`.`likes`, `items`.`date`
order by `items`.`date` desc limit 12
```
|
You need to use DISTINCT keyword in your query:
<http://www.mysqltutorial.org/mysql-distinct.aspx>
|
Sql returns row multiple times
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have a query as below in my Stored Procedure:
I set the @SUBCONDITION FROM MY CODE BEHIND.
```
declare @SUBCONDITION VARCHAR(100)= NULL
if(@SUBCONDITION='DEPT')
BEGIN
Select * from table1 where IDDEPT=1
END
ELSE IF (@SUBCONDITION='GRADE')
BEGIN
SELECT * FROM TABLE1 WHERE IDGRADE=1
END
ELSE IF(@SUBCONDITION='SECTION')
BEGIN
SELECT * FROM TABLE1 WHERE IDSECTION=1
END
```
Everything works just fine.
My question is whether I can do this in just one query, using IF or CASE or something in the WHERE clause?
|
Try this:
```
SELECT *
FROM table1
WHERE 1 = (CASE @SUBCONDITION
WHEN 'DEPT' THEN IDDEPT
WHEN 'GRADE' THEN IDGRADE
WHEN 'SECTION' THEN IDSECTION
ELSE 0
END);
```
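A minimal runnable sketch of this CASE-in-WHERE routing, using SQLite from Python with invented data (each row "belongs" to exactly one condition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (name TEXT, IDDEPT INT, IDGRADE INT, IDSECTION INT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)",
                 [('a', 1, 0, 0), ('b', 0, 1, 0), ('c', 0, 0, 1)])

def pick(subcondition):
    # The single CASE expression routes the comparison to the right column.
    return conn.execute("""
        SELECT name FROM table1
        WHERE 1 = (CASE ?
                     WHEN 'DEPT' THEN IDDEPT
                     WHEN 'GRADE' THEN IDGRADE
                     WHEN 'SECTION' THEN IDSECTION
                     ELSE 0
                   END)
    """, (subcondition,)).fetchall()
```

`pick('DEPT')` returns only the row whose `IDDEPT` is 1, and an unknown condition returns nothing thanks to the `ELSE 0` branch.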
|
Yes, you can do this in one query. It would look like this:
```
Select *
from table1
where (@SUBCONDITION = 'DEPT' and IDDEPT = 1) or
      (@SUBCONDITION = 'GRADE' and IDGRADE = 1) or
(@SUBCONDITION = 'SECTION' and IDSECTION = 1)
```
|
conditional where clause in sql 2012
|
[
"",
"sql",
"sql-server",
"select",
"conditional-statements",
"where-clause",
""
] |
I am trying to return the number of people registered in a particular month, but I would also like to return zeros for months that are not in the database.
The following query returns number of people registered in months in the database:
```
select
extract(year from "DATECREATED") as YEAR,
extract(month from "DATECREATED") as mon,
count("USERID") as Total
from
TBLG2O_USEROFO
where
extract(year from "DATECREATED") = '2014'
group by
extract(year from "DATECREATED"),
extract(month from "DATECREATED")
order by
1, 2
```
|
Unfortunately, to do this, you need a second table that just contains the month number per row.
Assuming you create a table called `MONTHS` with a single column, `MONTH_NUMBER`, you could use this query to create the output you're looking for - it will count the number of rows per month for the year you specify and show zeros for any month with no rows.
Note, this query also has a downside - it will only work over a calendar year, so you wouldn't be able to split it across years (say, July 2013 - June 2014).
```
SELECT COALESCE(YEAR, 2014) AS YEAR,
MONTHS.MONTH_NUMBER AS MONTH,
COALESCE(TOTAL, 0) AS "MONTH COUNT"
FROM (SELECT EXTRACT(YEAR FROM DATECREATED) AS YEAR,
EXTRACT(MONTH FROM DATECREATED) AS MONTH,
COUNT(DATECREATED) AS TOTAL
FROM TBLG2O_USEROFO
WHERE EXTRACT(YEAR FROM DATECREATED) = '2014'
GROUP BY EXTRACT(YEAR FROM DATECREATED),
EXTRACT(MONTH FROM DATECREATED)) LFT
RIGHT OUTER JOIN MONTHS
ON MONTH_NUMBER = LFT.MONTH
ORDER BY YEAR, MONTH
```
[See this query in action at SQL Fiddle](http://sqlfiddle.com/#!4/93f6a/1).
Essentially, this is the Oracle version of [@Eran's MySQL answer](https://stackoverflow.com/a/25736456/42471).
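The same join-against-a-months-table idea can be sketched in SQLite (here with a LEFT JOIN from the months side, since older SQLite lacks RIGHT JOIN; table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE months (month_number INT);
INSERT INTO months VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12);
CREATE TABLE registrations (datecreated TEXT);
INSERT INTO registrations VALUES ('2014-01-15'), ('2014-01-20'), ('2014-03-02');
""")

# LEFT JOIN from the months table so months with no rows survive;
# COALESCE turns their NULL counts into zeros.
rows = conn.execute("""
    SELECT m.month_number, COALESCE(r.total, 0) AS total
    FROM months m
    LEFT JOIN (SELECT CAST(strftime('%m', datecreated) AS INT) AS mon,
                      COUNT(*) AS total
               FROM registrations
               WHERE strftime('%Y', datecreated) = '2014'
               GROUP BY mon) r
      ON r.mon = m.month_number
    ORDER BY m.month_number
""").fetchall()
```

All twelve months come back, with zeros for the ten months that have no registrations.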
|
I tested this on MySQL, so you might require some tinkering for Oracle.
First create a table with the months of the year...
```
create table months(month int);
insert into months values(1);
insert into months values(2);
insert into months values(3);
insert into months values(4);
insert into months values(5);
insert into months values(6);
insert into months values(7);
insert into months values(8);
insert into months values(9);
insert into months values(10);
insert into months values(11);
insert into months values(12);
```
Then use a right join with the months table to achieve the desired effect:
```
select coalesce(YEAR, 2014) year, m.month, coalesce(Total, 0) as total
from
(select year(c.DATECREATED) as YEAR, month(c.DATECREATED) as month, count(c.DATECREATED) as Total
from comments c
where year(c.DATECREATED) = '2014'
group by year, month(c.DATECREATED)) as c
right join months m
on c.month = m.month
order by year, month
```
If you cannot add a temporary table, here's a way around it:
```
select coalesce(YEAR, 2014) year, m.month, coalesce(Total, 0) as total
from
(select year(c.DATECREATED) as YEAR, month(c.DATECREATED) as month, count(c.DATECREATED) as Total
from comments c
where year(c.DATECREATED) = '2014'
group by year, month(c.DATECREATED)) as c
right join
(select 1 as month union select 2 as month union select 3 as month
union select 4 as month union select 5 as month union select 6 as month
union select 7 as month union select 8 as month union select 9 as month
union select 10 as month union select 11 as month union select 12 as month) m
on c.month = m.month
order by year, month
```
|
How to return zeros in months not in the database
|
[
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
I have this problem: I need to check whether the sum of a column matches the final total of the invoice, by invoice number (I am working on a query to do it).
Example
```
Invoice No Line _no Total Line Invoice total Field I will create
----------------------------------------------------------------------
45 1 145 300 145
45 2 165 300 300 Match
46 1 200 200 200 Match
47 1 100 300 100
47 2 100 300 200
47 3 100 300 300 Match
```
|
Is this what you're looking for? I think a subquery is what you're asking about, but I'm guessing you want an end result similar to the entire thing.
```
select t."Invoice No", t."Line _no", t."Invoice total",
calcTotals.lineTotal as calcSum, case when t."Invoice total" = calcTotals.lineTotal then 'matched' else 'not matched' end as status
from [table] t
inner join (
    select "Invoice No" as invoiceNumber,
           sum("Total Line") as lineTotal
    from [table]
    group by "Invoice No"
) calcTotals on t."Invoice No" = calcTotals.invoiceNumber
```
|
You want a cumulative sum. In SQL Server 2012+, just do:
```
select e.*,
       (case when InvoiceTotal = sum(LineTotal) over (partition by invoice_no order by line_no)
then 'Match'
end)
from example e;
```
In earlier versions of SQL Server, I would be inclined to do it with a correlated subquery:
```
select e.*,
       (case when InvoiceTotal = (select sum(LineTotal)
                                  from example e2
                                  where e2.Invoice_no = e.invoice_no and
                                        e2.line_no <= e.line_no
)
then 'Match'
end)
from example e;
```
You can also do this with a `cross apply` as M Ali suggests.
EDIT:
Now that I think about the problem, you don't *need* a cumulative sum. That was just how I originally thought of the problem. So, this will work in SQL Server 2008:
```
select e.*,
       (case when InvoiceTotal = sum(LineTotal) over (partition by invoice_no)
then 'Match'
end)
from example e;
```
You can't get the cumulative sum out (the second to last column) without more manipulation, but the `match` column is not hard.
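The windowed running-sum variant can be sketched in SQLite (3.25+ for window functions) from Python. Note the sample amounts below are adjusted so the line totals actually add up to the invoice total, which the question's own example does not quite do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE example (invoice_no INT, line_no INT, line_total INT, invoice_total INT);
INSERT INTO example VALUES
  (45, 1, 145, 300), (45, 2, 155, 300),
  (46, 1, 200, 200),
  (47, 1, 100, 300), (47, 2, 100, 300), (47, 3, 100, 300);
""")

# Running sum of line totals per invoice; flag the row where it
# reaches the invoice total.
rows = conn.execute("""
    SELECT invoice_no, line_no,
           SUM(line_total) OVER (PARTITION BY invoice_no ORDER BY line_no) AS running,
           CASE WHEN invoice_total =
                     SUM(line_total) OVER (PARTITION BY invoice_no ORDER BY line_no)
                THEN 'Match' ELSE '' END AS status
    FROM example
    ORDER BY invoice_no, line_no
""").fetchall()
```

Only the last line of each complete invoice carries the `Match` flag, matching the "Field I will create" column in the question.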
|
Sql Query Compare and Sum
|
[
"",
"sql",
"sql-server",
"sql-scripts",
""
] |
For sample data as below,
```
Col1 Col2
1 A
1 B
1 C
2 A
2 B
```
the output I am looking for is
```
COL1 COL2
1 A B C
2 A B
```
This can be done using LISTAGG on Oracle or recursive queries on other DBs, but Redshift supports neither.
How do I achieve this on Redshift?
|
They just added `LISTAGG()` to Redshift (2015-07-31). <http://docs.aws.amazon.com/redshift/latest/dg/r_LISTAGG.html>
|
Here is a solution from another similar question -
```
SELECT col1,
LISTAGG(col2,', ')
WITHIN GROUP (ORDER BY col2)
OVER (PARTITION BY col1) AS EMPLOYEE
FROM YOUR_TABLE
ORDER BY col1
```
[This question.](https://stackoverflow.com/questions/24502108/how-to-group-by-and-concatenate-fields-in-redshift)
> Redshift has introduced LISTAGG window function that makes it possible to do so now. Here is a quick solution to your problem - may or may not be useful but putting it here so that people will know!
[Here is the documentation about the function.](http://docs.aws.amazon.com/redshift/latest/dg/r_WF_LISTAGG.html) || [This is the announcement.](https://aws.amazon.com/about-aws/whats-new/2015/08/amazon-redshift-now-supports-dynamic-work-load-management-and-list-aggregate-functions/)
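SQLite's GROUP_CONCAT plays the same role as LISTAGG, which makes the idea easy to try locally. A sketch with the question's sample data (GROUP_CONCAT's ordering within a group is not guaranteed, so the test only checks membership):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 INT, col2 TEXT);
INSERT INTO t VALUES (1,'A'),(1,'B'),(1,'C'),(2,'A'),(2,'B');
""")

# One row per col1 with the col2 values collapsed into a single string.
rows = conn.execute("""
    SELECT col1, GROUP_CONCAT(col2, ' ') AS col2
    FROM t GROUP BY col1 ORDER BY col1
""").fetchall()
```

This yields the `1 → A B C`, `2 → A B` shape asked for in the question.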
|
How to write a listagg on Redshift?
|
[
"",
"sql",
"amazon-redshift",
""
] |
I have a Category table:
```
CategoryID Description
-------------------------
1 Red category
2 Green category
3 Blue category
```
And an Item table:
```
ItemID CategoryID Status
------------------------------
1 1 "New"
2 1 "New"
3 1 "New"
4 2 "Editing"
5 2 "Editing"
6 2 "Editing"
7 3 "Published"
8 3 "Published"
9 3 "Published"
```
This comes from a legacy application which I am porting to a new code base. I suspect the Status column on the Item table should really be on the Category table. How can I verify that there are no Items that have a different status than any other items that belong to the same Category?
I know I want to aggregate the Items that have the same CategoryID, so I tried:
```
select categoryid
from item
group by categoryid
```
This gives me the distinct Categories. Now I want to select the ones with two distinct Statuses:
```
select categoryid, count(status) > 1
from item
group by categoryid
having count(status) > 1
```
But this just gives me the number of Items in each Category. I'm terrible at SQL.
|
```
SELECT *
FROM YourTable A
WHERE EXISTS(SELECT 1 FROM YourTable
WHERE CategoryId = A.CategoryId
AND [Status] <> A.[Status])
```
|
@Koveras, this will give you all the categories with multiple statuses.
```
select categoryid
from (
select distinct categoryid, status
from item) t
group by categoryid
having count(*) > 1
```
Your currently accepted solution will work but will be less efficient. Here is another solution.
[See Fiddle Here](http://sqlfiddle.com/#!6/f39e0/3)
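For reference, the missing piece in the question's attempted query is counting *distinct* statuses rather than rows. A runnable sketch of that fix in SQLite (sample data invented so category 1 has mixed statuses):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (itemID INT, categoryID INT, status TEXT);
INSERT INTO item VALUES
  (1,1,'New'),(2,1,'New'),(3,1,'Editing'),
  (4,2,'Editing'),(5,2,'Editing'),
  (6,3,'Published');
""")

# COUNT(DISTINCT status) collapses repeats, so > 1 flags mixed categories.
rows = conn.execute("""
    SELECT categoryID FROM item
    GROUP BY categoryID
    HAVING COUNT(DISTINCT status) > 1
""").fetchall()
```

Only category 1, the one with two different statuses, is reported.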
|
SQL select categories where items have different values
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to make a search function for my website. When I try it out I get very strange results back.
this query works fine:
```
SELECT * FROM (`products`)
JOIN `products_field` ON `products`.`id` = `products_field`.`prod_id`
WHERE menu_id=569 OR menu_id=952 AND (products.title LIKE '%mcgregor%' OR products_field.value LIKE '%mcgregor%')
GROUP BY `prod_id`
```
But when I change the positions of the menu ids as you see below, it returns results with nothing matching mcgregor in them:
```
SELECT * FROM (`products`)
JOIN `products_field` ON `products`.`id` = `products_field`.`prod_id`
WHERE menu_id=952 OR menu_id=569 AND (products.title LIKE '%mcgregor%' OR products_field.value LIKE '%mcgregor%')
GROUP BY `prod_id`
```
What am I doing wrong?
|
Written this way
```
WHERE menu_id=952 OR menu_id=569 AND ...
```
MySQL treats a row as a match (true) whenever menu\_id = 952, and the rest of the condition is not even consulted for it, because `AND` binds tighter than `OR`: the clause is parsed as `menu_id=952 OR (menu_id=569 AND ...)`. You can wrap the OR like so:
```
WHERE (menu_id=952 OR menu_id=569) AND ...
```
or you can go like so:
```
WHERE menu_id IN (952,569) AND ...
```
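The precedence difference is easy to demonstrate. A sketch in SQLite with two invented rows, one matching the search term and one not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (menu_id INT, title TEXT);
INSERT INTO products VALUES (952, 'plain shirt'), (569, 'mcgregor jacket');
""")

# Without parentheses AND binds first:
# menu_id=952 OR (menu_id=569 AND title LIKE '%mcgregor%')
unparenthesized = conn.execute(
    "SELECT title FROM products "
    "WHERE menu_id=952 OR menu_id=569 AND title LIKE '%mcgregor%'"
).fetchall()

# With parentheses both ids must also satisfy the LIKE.
parenthesized = conn.execute(
    "SELECT title FROM products "
    "WHERE (menu_id=952 OR menu_id=569) AND title LIKE '%mcgregor%'"
).fetchall()
```

The unparenthesized form lets the non-matching 952 row through, which is exactly the "crazy results" from the question.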
|
Try adding parentheses around the menu section.
```
SELECT * FROM (`products`)
JOIN `products_field` ON `products`.`id` = `products_field`.`prod_id`
WHERE (menu_id=952 OR menu_id=569) AND (products.title LIKE '%mcgregor%' OR products_field.value LIKE '%mcgregor%')
GROUP BY `prod_id`
```
|
Crazy mysql behavior getting different results with almost same query
|
[
"",
"mysql",
"sql",
"search",
""
] |
I want to display `dishName` and `dishPrice` for the `DISTINCT` 2nd-highest `dishPrice`.
What is the problem with the following query?
```
SELECT dishName, DISTINCT(dishPrice) FROM `tblDish` ORDER BY dishPrice LIMIT 1,1
```
|
In case there is more than one record for the same price, you need to group them by price and GROUP\_CONCAT all the names:
```
SELECT GROUP_CONCAT(dishName), dishPrice FROM `tblDish` GROUP BY dishPrice ORDER BY dishPrice DESC LIMIT 1,1
```
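A common portable variant is to take the distinct prices ordered descending, skip the first, and then fetch the dishes at that price. A sketch in SQLite with invented data where two dishes share the top price:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblDish (dishName TEXT, dishPrice INT);
INSERT INTO tblDish VALUES ('soup', 5), ('stew', 9), ('pie', 9), ('salad', 7);
""")

# Distinct prices, highest first; LIMIT 1 OFFSET 1 lands on the 2nd highest.
second_price = conn.execute(
    "SELECT DISTINCT dishPrice FROM tblDish ORDER BY dishPrice DESC LIMIT 1 OFFSET 1"
).fetchone()[0]

# Then fetch every dish at that price.
dishes = conn.execute(
    "SELECT dishName FROM tblDish WHERE dishPrice = ?", (second_price,)
).fetchall()
```

Because DISTINCT is applied before the offset, the duplicated top price (9) counts only once and the 2nd-highest distinct price is 7.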
|
Do it like this
```
SELECT DISTINCT(dishPrice), dishName FROM `tblDish` GROUP BY `dishPrice` ORDER BY `dishPrice` LIMIT 1,1
```
Note that DISTINCT must appear before any other columns in the select list.
|
What is wrong with this query with DISTINCT keyword?
|
[
"",
"mysql",
"sql",
"distinct",
""
] |
I have a program that reports its version and IP to the following table
```
CREATE TABLE `Operadores` (
`Num` int(11) NOT NULL AUTO_INCREMENT,
`Usuario` varchar(45) NOT NULL,
`Password` varchar(15) NOT NULL,
`LastLogin` varchar(25) DEFAULT '01/01/1970 00:00:00',
`AdminLevel` int(10) unsigned DEFAULT '1',
`Grupo` varchar(20) NOT NULL DEFAULT 'Operadores',
`Version` varchar(20) DEFAULT 'OLD',
`NombrePC` varchar(20) DEFAULT NULL,
`LastIP` varchar(16) DEFAULT '0.0.0.0',
PRIMARY KEY (`Num`,`Usuario`),
UNIQUE KEY `Num_UNIQUE` (`Num`),
UNIQUE KEY `Usuario_UNIQUE` (`Usuario`)
) ENGINE=InnoDB AUTO_INCREMENT=55 DEFAULT CHARSET=l
```
The version is saved as e.g. 1.5.4.3, and I use only one digit between dots, so a bigger number means a newer version.
I need to write a query, in MySQL Workbench, to find the highest version for each IP.
Example:
If the last version is 1.6.0.2 and I have the following data
```
Usuario Version LastIP
Mike 1.6.0.1 10.0.1.1
Eric 1.6.0.2 10.0.1.1
Adam 1.1.0.1 10.0.1.2
Clark 1.6.0.2 10.0.1.2
Kirk 1.6.0.0 10.0.1.3
```
I should obtain
```
Usuario Version LastIP
Eric 1.6.0.2 10.0.1.1
Clark 1.6.0.2 10.0.1.2
Kirk 1.6.0.0 10.0.1.3
```
So now I can see that Kirk on PC 10.0.1.3 is using an old version of the software.
I already tried with
```
SELECT * FROM gts.Operadores GROUP BY VersionGTS,LastIP ORDER BY LastLogin DESC;
```
But I obtain one row for every record, and I just want one result (the highest version) for each IP.
|
The reason why you are getting multiple rows is that you are using two columns in your `group by` clause: `VersionGTS, LastIP`.
If you want to get only one value then you can use the `max` function to get the max version of the `Version` column try this:
```
SELECT max(versionGTS) AS MaxVersion, LastIP, max(LastLogin) AS LastLogin
FROM gts.Operadores
GROUP BY LastIP
ORDER BY LastLogin DESC;
```
|
To get the maximum version per IP, you can use this simple query:
```
SELECT
MAX(Version) AS MaxVersion,
LastIP
FROM
gts.Operadores
GROUP BY
LastIP
;
```
If you want to additionally return `Usuario` associated with every particular pair of `MaxVersion` and `LastIP`, you can achieve that by joining the results back to `gts.Operadores`:
```
SELECT
o.Usuario,
o.Version,
o.LastIP
FROM
gts.Operadores AS o
INNER JOIN
(
SELECT
MAX(Version) AS MaxVersion,
LastIP
FROM
gts.Operadores
GROUP BY
LastIP
) AS m
ON
o.LastIP = m.LastIP
AND o.Version = m.MaxVersion
;
```
If a user logs in from the same IP and with the same version which happens to be the maximum one for that IP, the above query will return duplicate entries. It is easy to eliminate them with a `DISTINCT`:
```
SELECT DISTINCT
o.Usuario,
o.Version,
o.LastIP
FROM
gts.Operadores AS o
INNER JOIN
(
SELECT
MAX(Version) AS MaxVersion,
LastIP
FROM
gts.Operadores
GROUP BY
LastIP
) AS m
ON
o.LastIP = m.LastIP
AND o.Version = m.MaxVersion
;
```
However, if multiple users share the same IP, the last query might still give you multiple rows per IP. If you insist on getting only one row for every IP and you insist on having a (corresponding) `Usuario` coming with every row, you should specify which row you want. For instance, you could pick the maximum `Usuario` per (maximum) version and IP:
```
SELECT
MAX(o.Usuario) AS Usuario,
o.Version,
o.LastIP
FROM
gts.Operadores AS o
INNER JOIN
(
SELECT
MAX(Version) AS MaxVersion,
LastIP
FROM
gts.Operadores
GROUP BY
LastIP
) AS m
ON
o.LastIP = m.LastIP
AND o.Version = m.MaxVersion
GROUP BY
o.Version,
o.LastIP
;
```
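The join-back-to-the-grouped-maximum pattern from the second query can be sketched in SQLite with the question's sample data. Note the string MAX only works here because every version component is a single digit, as the question states:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Operadores (Usuario TEXT, Version TEXT, LastIP TEXT);
INSERT INTO Operadores VALUES
  ('Mike','1.6.0.1','10.0.1.1'), ('Eric','1.6.0.2','10.0.1.1'),
  ('Adam','1.1.0.1','10.0.1.2'), ('Clark','1.6.0.2','10.0.1.2'),
  ('Kirk','1.6.0.0','10.0.1.3');
""")

# Join each row back to its group's MAX(Version) to recover the user.
rows = conn.execute("""
    SELECT o.Usuario, o.Version, o.LastIP
    FROM Operadores o
    JOIN (SELECT LastIP, MAX(Version) AS MaxVersion
          FROM Operadores GROUP BY LastIP) m
      ON o.LastIP = m.LastIP AND o.Version = m.MaxVersion
    ORDER BY o.LastIP
""").fetchall()
```

This reproduces the expected Eric / Clark / Kirk result from the question.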
|
Should I use DISTINCT?
|
[
"",
"mysql",
"sql",
""
] |
When I try to excecute this code, I'm getting error:
*Conflict of the DELETE statement with the restriction REFERENCE "FK\_Options\_users". The conflict occurred in database "WINTOUR", table "PrintForm.Options", column 'user\_code'*
I can't understand why, or how to fix it.
```
declare
@USER_CODE int;
select
@USER_CODE = 24;
delete from Settings.Items where user_code = @USER_CODE
delete from usnet where code = @USER_CODE
delete from usgroups where usercode = @USER_CODE
delete from users where code = @USER_CODE
```
|
It seems a `Foreign Key constraint` exists between the `user_code` column in `PrintForm.Options` and the `code`/`user_code` columns in the given tables.
If you try to delete the data in the given tables, an error occurs because the `user_code` column in `PrintForm.Options` still references the rows you are deleting.
To resolve the issue you should either drop and recreate the constraint `FK_Options_users`, or first delete the referencing rows from the child table `PrintForm.Options`, i.e. where `user_code = 24`.
|
It looks to me like you are removing user 24, but the PrintForm.Options table has an entry that is still using it, and were it to be deleted, the foreign key would no longer be satisfied.
Have you perhaps missed PrintForm.Options from the list of delete queries?
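The failure mode is easy to reproduce in miniature. A sketch in SQLite (which enforces foreign keys only after the pragma); table layout simplified from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only with this on
conn.executescript("""
CREATE TABLE users (code INTEGER PRIMARY KEY);
CREATE TABLE options (id INTEGER PRIMARY KEY,
                      user_code INT REFERENCES users(code));
INSERT INTO users VALUES (24);
INSERT INTO options VALUES (1, 24);
""")

# Deleting the parent while a child row still points at it raises the conflict.
try:
    conn.execute("DELETE FROM users WHERE code = 24")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True

# Delete the child first and the parent delete succeeds.
conn.execute("DELETE FROM options WHERE user_code = 24")
conn.execute("DELETE FROM users WHERE code = 24")
```

Deleting the child row first (as the answers suggest for `PrintForm.Options`) clears the conflict.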
|
Conflict of the DELETE statement with the restriction REFERENCE
|
[
"",
"sql",
""
] |
Sometimes I need to run a great many update queries, like this:
```
update claim set register_dt = '2014-01-29 09:48' where folio = '0021'
update claim set register_dt = '2014-04-09 11:30' where folio = '0080'
update claim set register_dt = '2014-05-06 13:00' where folio = '0123'
update claim set register_dt = '2014-05-21 12:20' where folio = '0161'
update claim set register_dt = '2014-01-03 10:40' where folio = '0001'
update claim set register_dt = '2014-01-06 09:12' where folio = '0002'
update claim set register_dt = '2014-01-06 10:40' where folio = '0003'
update claim set register_dt = '2014-01-07 13:00' where folio = '0004'
update claim set register_dt = '2014-01-08 11:50' where folio = '0005'
update claim set register_dt = '2014-01-10 12:34' where folio = '0006'
update claim set register_dt = '2014-01-13 09:45' where folio = '0007'
update claim set register_dt = '2014-01-15 09:29' where folio = '0008'
update claim set register_dt = '2014-01-15 10:05' where folio = '0009'
[... more and more]
```
And I take this text and execute the statements one by one from a small script in PHP or Visual Studio.
I want to know if I can **do this in a single query**, to update everything in the MySQL client (and not have to build a query executor every time). Thanks
|
I'd choose to use a temporary table like @GordonLinoff shows. But that technically isn't a single query, because you have to create and populate the temporary table.
One way you can truly do this in a single query is to use a huge CASE expression:
```
update claim set register_dt = case folio
when '0021' then '2014-01-29 09:48'
when '0080' then '2014-04-09 11:30'
when '0123' then '2014-05-06 13:00'
when '0161' then '2014-05-21 12:20'
when '0001' then '2014-01-03 10:40'
when '0002' then '2014-01-06 09:12'
when '0003' then '2014-01-06 10:40'
when '0004' then '2014-01-07 13:00'
when '0005' then '2014-01-08 11:50'
when '0006' then '2014-01-10 12:34'
when '0007' then '2014-01-13 09:45'
when '0008' then '2014-01-15 09:29'
when '0009' then '2014-01-15 10:05'
. . .
end;
```
You can make an SQL statement as long as `max_allowed_packet`.
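A runnable sketch of the CASE-keyed bulk update in SQLite. Note the added WHERE guard, without which rows whose folio is not listed would have `register_dt` set to NULL by the CASE's implicit ELSE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claim (folio TEXT PRIMARY KEY, register_dt TEXT);
INSERT INTO claim (folio) VALUES ('0001'), ('0021'), ('0080');
""")

# One statement; the CASE picks each row's new value by folio key.
conn.execute("""
    UPDATE claim SET register_dt = CASE folio
        WHEN '0021' THEN '2014-01-29 09:48'
        WHEN '0080' THEN '2014-04-09 11:30'
        WHEN '0001' THEN '2014-01-03 10:40'
    END
    WHERE folio IN ('0021', '0080', '0001')
""")
rows = dict(conn.execute("SELECT folio, register_dt FROM claim").fetchall())
```

All three rows are updated in a single round trip instead of one UPDATE per folio.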
|
The easiest way, in my opinion, would be a two step process. First store the data in a temporary table and then use that for the update.
```
create temporary table toupdate as
select '2014-01-29 09:48' as register_date, '0021' as folio union all
. . .
;
update claim c join
toupdate tu
on c.folio = tu.folio
set c.register_date = tu.register_date;
```
You can actually do this in one query if you like:
```
update claim c join
(select '2014-01-29 09:48' as register_date, '0021' as folio union all
. . .
) tu
on c.folio = tu.folio
set c.register_date = tu.register_date;
```
And, if the new values are in the database, you could just reference another source table where they are coming from.
|
improve update in MySQL
|
[
"",
"mysql",
"sql",
"sql-update",
""
] |
Here I need to check a string in the format '`a,c,e`'. This situation occurs when the user selects multiple options from a checkbox, and in the query I need to check each value against the column.
**Example**:
Given string:
```
'a,c,e'
```
I need to check whether each word of the given string is present in the column `columnA`:
```
columnA columnB
-------------------
a 1
b 2
c 3
d 4
e 5
f 6
g 7
h 8
```
If `a,c,e` are present in the column `columnA`, they should be retrieved along with `columnB`:
**Expected Result**:
```
columnA columnB
------------------
a 1
c 3
e 5
```
My try:
```
select columnA,columnB from
test where columnA ='a'
or columnA = 'c'
or columnA = 'e'
```
I don't feel this is the optimal way to do it. It is also not good for dynamic queries where the string values change frequently.
|
Try below code:
```
DECLARE @COMMASEPSTR NVARCHAR(200), @SQL nvarchar(max), @STR nvarchar(100) = 'a,b,c'
SET @COMMASEPSTR= '''' + REPLACE(@STR,',',''',''') + ''''
SET @SQL = 'select columnA,columnB from
test where columnA IN ( ' + @COMMASEPSTR + ')'
EXEC sp_executesql @SQL
```
Tell me if I am wrong somewhere.
|
I am not sure about the format of your dynamic input string or format of columnA but you can try this:
```
select columnA,columnB from
test where CHARINDEX(columnA, 'a,c,e')>0
```
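As an alternative to building the SQL string, the comma-separated input can be split in the host language and bound as parameters, one placeholder per value, which avoids injection risks entirely. A sketch in Python with SQLite (table data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (columnA TEXT, columnB INT);
INSERT INTO test VALUES ('a',1),('b',2),('c',3),('d',4),('e',5);
""")

def lookup(csv):
    # Split in the host language, then bind one '?' per value;
    # no SQL string is built from user input.
    values = [v.strip() for v in csv.split(',')]
    placeholders = ','.join('?' * len(values))
    sql = f"SELECT columnA, columnB FROM test WHERE columnA IN ({placeholders})"
    return conn.execute(sql, values).fetchall()
```

`lookup('a,c,e')` returns exactly the three matching rows from the expected result.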
|
Check string in select statement using SQL Server 2008
|
[
"",
"sql",
"sql-server",
""
] |
I have 3 different tables for a blog I am working with.
1st table: blog\_posts, it has an ID, Title, Content, user\_id (who created post), created\_date and slug.
2nd table: blog\_tags, it has an ID, post\_id and tag\_id
3rd table: tags, it has an ID and tag
I am using table 3 to save all tags once, so that there are no duplicates. Then I am using table 2 to connect a tag to a post (table 1).
The problem I am having is getting all the posts from a specific tag and also returning all the other tags.
My code right now only returns the tag that I want to find posts by, but I still want to output the rest of the tags; only the posts shown have to include that specific tag...
I am usually sharp with SQL, but this time my head is totally still... Please help me :)
I am using PHP and CodeIgniter if that matters.
Thanks in advance.
Mike
**Edit**
I am printing the results out as json, which gives me following:
```
{
"data": [
{
"id": "28",
"title": "blabla",
"content": "<p>hello<\/p>",
"user_id": "1",
"created_date": "2014-08-18 23:57:22",
"slug": "blabla-2014-08-18-235722"
},
{
"id": "34",
"title": "test2",
"content": "<p>test2<\/p>",
"user_id": "1",
"created_date": "2014-08-23 21:41:00",
"slug": "test2-2014-08-23-214100"
}
],
"success": true
}
```
With the help from the answer below. My SQL and code now says:
```
$sql = "SELECT * FROM blog_posts bp
WHERE EXISTS(SELECT * FROM blog_tags bt INNER join
tags t ON t.id = bt.tag_id
WHERE bp.id = bt.post_id
AND t.id = ".$this->db->escape($tag_id).")";
$results = $this->db->query($sql)->result();
return $results;
```
What I want to get is the following:
```
{
"data": [
{
"id": "28",
"title": "blabla",
"content": "<p>hello<\/p>",
"user_id": "1",
"created_date": "2014-08-18 23:57:22",
"slug": "blabla-2014-08-18-235722",
"tags": [
{
"id": 1
"tag": "test",
},
{
"id": 2
"tag": "test2",
}
]
},
{
"id": "34",
"title": "test2",
"content": "<p>test2<\/p>",
"user_id": "1",
"created_date": "2014-08-23 21:41:00",
"slug": "test2-2014-08-23-214100"
"tags": [
{
"id": 3
"tag": "testa",
},
{
"id": 1
"tag": "test",
}
]
}
],
"success": true
}
```
|
I assume you are happy to send two requests to the database.
First, get all the posts for a given tag:
```
SELECT * FROM blog_posts bp
WHERE EXISTS (SELECT * FROM blog_tags bt INNER JOIN
tags t ON t.id = bt.tag_id
WHERE bp.id = bt.post_id
AND t.tag = @SearchTag)
```
Second, you want the tags, I guess, linked to the one you are looking for via posts:
```
SELECT * FROM tags t
WHERE EXISTS ( -- Here we link two tags via blog_tags
SELECT * FROM blog_tags bt1 INNER JOIN
blog_tags bt2 ON bt1.post_id = bt2.post_id
AND bt1.tag_id != bt2.tag_id INNER JOIN
tags t ON t.id = bt1.tag_id
WHERE t.tag = @SearchTag
AND t.id = bt2.tag_id
)
```
|
I did it in one go, continuing on @Bulat's code.
```
SELECT bp.*,
GROUP_CONCAT(DISTINCT bt.tag_id) as tags_id,
GROUP_CONCAT(DISTINCT t.tag) as tags
FROM blog_posts bp
INNER JOIN blog_tags bt
ON bt.post_id = bp.id
INNER JOIN tags t
ON t.id = bt.tag_id
GROUP BY bt.post_id
ORDER BY bp.created_date DESC
Then I return tags and tags\_id as arrays with foreach loops:
```
$results = $this->db->query($sql)->result();
foreach($results as $result) {
$result->tags_comma = $result->tags;
strpos($result->tags, ',') ? $result->tags = explode(',', $result->tags) : $result->tags = array($result->tags);
$result->tags_comma = str_replace(',', ', ', $result->tags_comma);
}
foreach($results as $result) {
$result->tags_id_comma = $result->tags_id;
strpos($result->tags_id, ',') ? $result->tags_id = explode(',', $result->tags_id) : $result->tags_id = array($result->tags_id);
$result->tags_id_comma = str_replace(',', ', ', $result->tags_id_comma);
}
```
|
Get all posts that have a specific tag and keep all other tags on results with SQL
|
[
"",
"mysql",
"sql",
"codeigniter",
""
] |
I have a requirement to split a row into two based on the nullability of two columns in the table used in the query.
```
Sample Data
-------------------------------------------------------------------------------------------------
PrimaryID SecondaryID PassedSubject FailedSubject PSField FSField
-------------------------------------------------------------------------------------------------
1 11 ABC XYZ PSFldData1 FSFldData1
1 12 DEF NULL PSFldData2 FSFldData2
2 21 NULL GHI PSFldData3 FSFldData3
```
So for the above data I am looking for a possible result in the below format, the requirement being that if both the "PassedSubject" and "FailedSubject" columns are NOT NULL, then I need to split the row in two: only PSField (PassedSubject Field) should be populated for the row containing the PassedSubject value, and only FSField for the row containing the FailedSubject value.
```
Sample Result
-------------------------------------------------------------------------------------------------
PrimaryID SecondaryID PassedSubject FailedSubject PSField FSField
-------------------------------------------------------------------------------------------------
1 11 ABC NULL PSFldData1 NULL
1 11 NULL XYZ NULL FSFldData1
```
I need to split the row only for the field being NOT NULL for "PassedSubject" and "FailedSubject" columns.
|
```
SELECT PrimaryID,
SecondaryID,
CR.PassedSubject,
CR.FailedSubject,
CR.PSField,
CR.FSField
FROM TableName
CROSS APPLY (VALUES
( PassedSubject,NULL,PSField,NULL),
( NULL, FailedSubject,NULL,FSField))
CR ( PassedSubject, FailedSubject, PSField, FSField)
WHERE TableName.PassedSubject IS NOT NULL AND TableName.FailedSubject IS NOT NULL
UNION ALL
SELECT * FROM TableName
WHERE TableName.PassedSubject IS NULL OR TableName.FailedSubject IS NULL
```
`SQLFiddle`
```
PRIMARYID SECONDARYID PASSEDSUBJECT FAILEDSUBJECT PSFIELD FSFIELD
1 11 ABC (null) PSFldData1 (null)
1 11 (null) XYZ (null) FSFldData1
1 12 DEF (null) PSFldData2 FSFldData2
2 21 (null) GHI PSFldData3 FSFldData3
```
|
```
;WITH PassedCTE AS
(
SELECT PrimaryID,
SecondaryID,
PassedSubject,
NULL AS FailedSubject,
PSField,
NULL AS FSField
FROM TableName
WHERE PassedSubject IS NOT NULL
),FailedCTE AS
(
SELECT PrimaryID,
SecondaryID,
NULL AS PassedSubject,
FailedSubject,
NULL AS PSField,
FSField
FROM TableName
WHERE FailedSubject IS NOT NULL
)
SELECT *
FROM PassedCTE
UNION
SELECT *
FROM FailedCTE
```
|
Query to split a row into two
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
my query :
```
SELECT USERINFO.USERID [User ID], USERINFO.BADGENUMBER [Employee No.],
USERINFO.NAME [Name] FROM USERINFO where CHECKTIME='2014-09-08'
```
When I write the above query, it does not return the rows where checktime = '2014-09-08'.
But when I write checktime >= '2014-09-08', it gives me the output for both 2014-09-08 and 2014-09-09.
But I want only the output for `checktime = '2014-09-08'`.
My checktime field contains the data below:
```
checktime
2014-09-08 09:19:34.000
2014-09-08 11:19:34.000
2014-09-09 09:19:34.000
2014-09-09 10:19:34.000
```
So how can I get only the output where checktime = '2014-09-08'?
|
Your problem is testing equality between a field that may contain a value like `2014-09-08 09:19:34.000` and a value that is `2014-09-08`. These are in no way equal. Having no time part is equivalent to midnight of the specified date, so `2014-09-08` is equal to `2014-09-08 00:00:00.000`.
You need a way to strip the time portion from your datetime field.
```
SELECT USERINFO.USERID [User ID],
USERINFO.BADGENUMBER [Employee No.],
USERINFO.NAME [Name]
FROM USERINFO
WHERE CONVERT(DATETIME,CONVERT(VARCHAR,CHECKTIME,112))='2014-09-08'
```
I think that in SQL Server 2008R2(could work in 2008) and above you could use:
```
SELECT USERINFO.USERID [User ID],
USERINFO.BADGENUMBER [Employee No.],
USERINFO.NAME [Name]
FROM USERINFO
WHERE CONVERT(DATE,CHECKTIME)='2014-09-08'
```
Here is a **[Fiddle](http://sqlfiddle.com/#!3/dae7f/1)** showing both cases.
The above cases convert the field for every row in the table, so you may want to avoid that by using `>=` and `<` instead, as a performance boost for your query. You will write a bit more, but you do the conversion only once, whereas SQL would otherwise convert the field each time the query runs. So a solution like the one below may be more appropriate:
```
DECLARE @DateFrom AS DATETIME
DECLARE @DateTo AS DATETIME
SET @DateFrom = '20140908'
SET @DateTo = DATEADD(day,1,@DateFrom)
SELECT USERINFO.USERID [User ID],
USERINFO.BADGENUMBER [Employee No.],
USERINFO.NAME [Name]
FROM USERINFO
WHERE CHECKTIME >= @DateFrom
AND CHECKTIME < @DateTo
```
|
It's happening because you are comparing unequal values. Here, you are comparing **2014-09-08 09:19:34.000** with something like **2014-09-08 00:00:00.000**.
So that is the reason why, when the equality operator is used, it is not returning anything.
So you need to discard the time part from the datetime. This can be done as follows:
```
SELECT USERINFO.USERID [User ID], USERINFO.BADGENUMBER [Employee No],USERINFO.NAME [Name]
FROM USERINFO
WHERE CONVERT(DATETIME,CONVERT(VARCHAR,CHECKTIME,112))='2014-09-08'
```
You need to specify an option while converting to varchar(). You can refer to [this](http://www.codeproject.com/Articles/576178/cast-convert-format-try-parse-date-and-time-sql) article for more details.
|
i want data with where condition of date field - compare two dates discarding time part
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
This is my `users` table:
<http://ezinfotec.com/Capture.PNG>
I need to select all rows that do not contain 2 in the except column. How do I write a query for this using PHP and MySQL?
The result I expect from this query is the last row only.
Thank you.
|
Don't store comma separated values in your table, it's very bad practice, nevertheless you can use `FIND_IN_SET`
```
SELECT
*
FROM
users
WHERE
NOT FIND_IN_SET('2', except)
```
|
Try this:
```
SELECT *
FROM users
WHERE CONCAT(',', except, ',') NOT LIKE '%,2,%'
```
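The delimiter-wrapping trick above works because surrounding both the column and the needle with commas prevents partial matches against values like `12` or `22`. A quick sketch using Python's built-in `sqlite3` (SQLite has no `FIND_IN_SET`, and the column is renamed to `except_ids` here since `EXCEPT` is a reserved word):

```python
import sqlite3

# Hypothetical users table with a comma-separated "except" list per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INT, except_ids TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "2"), (2, "1,2,3"), (3, "12,22"), (4, "1,3")])

# Wrapping in commas makes ',2,' match only the exact element "2",
# not "12" or "22".
rows = conn.execute("""
    SELECT user_id FROM users
    WHERE ',' || except_ids || ',' NOT LIKE '%,2,%'
    ORDER BY user_id
""").fetchall()
print(rows)  # [(3,), (4,)]
```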
|
write select query for the below requirement
|
[
"",
"mysql",
"sql",
"select",
"concatenation",
"sql-like",
""
] |
This is the code I have:
```
INSERT INTO HCSERVREC (COMP_CD, JOBTL_CD, emp_no,from_dt,service_type)
values ( '1',( if NOT EXISTS(
SELECT 1
FROM SDOPTTABLE
WHERE OPTTABLENO = '324'
AND OPTTABLEVAL = '00883578'
)
BEGIN
INSERT INTO SDOPTTABLE (
comp_cd
,opttableno
,opttableval
,optname
,dt_stamp
)
VALUES (
'1'
,'324'
,'00883578'
,'STOCKROOM ATTENDANT'
,getdate()
)
END
SELECT *
FROM SDOPTTABLE
WHERE OPTTABLENO = '324'
AND OPTTABLEVAL = '00883578'),'1234567','0','123456')
```
These are the error messages I get:
> Msg 156, Level 15, State 1, Line 2
> Incorrect syntax near the keyword 'if'.
>
> Msg 102, Level 15, State 1, Line 28
> Incorrect syntax near ')'.
All I want to do is verify the data before inserting it. How do I do it correctly?
FYI, when I replace the IF statement with a single value the SQL works fine, and the IF statement also works fine when run independently. The issue only appears when the two are combined.
Thanks.
|
Check first and then insert
```
IF NOT EXISTS(
SELECT 1
FROM SDOPTTABLE
WHERE OPTTABLENO = '324'
AND OPTTABLEVAL = '00883578'
)
BEGIN
INSERT INTO SDOPTTABLE (
comp_cd
,opttableno
,opttableval
,optname
,dt_stamp
)
VALUES (
'1'
,'324'
,'00883578'
,'STOCKROOM ATTENDANT'
,getdate()
)
END
INSERT INTO HCSERVREC (COMP_CD, JOBTL_CD, emp_no,from_dt,service_type)
SELECT '1',
(SELECT * -- select only the required column. You cannot use * here
FROM SDOPTTABLE
WHERE OPTTABLENO = '324'
AND OPTTABLEVAL = '00883578'),
'1234567',
'0',
'123456'
```
|
You need to do your check **first**, before you then either run your `INSERT`, or skip it. You cannot have `IF` statements in the middle of an `INSERT`:
```
IF NOT EXISTS(SELECT * FROM dbo.SDOPTTABLE WHERE OPTTABLENO = '324' AND OPTTABLEVAL = '00883578')
INSERT INTO HCSERVREC (COMP_CD, JOBTL_CD, emp_no,from_dt,service_type)
VALUES ('1', .......)
```
And the `INSERT` statement can take two forms - either you have a fixed list of values as literals (`'1'`) or SQL variables - then you can use the `INSERT INTO Table(columns) VALUES(values)` approach.
Or, if you want to use a `SELECT` statement from another table to insert the values, then you need to use the `INSERT INTO Table(columns) SELECT column FROM OtherTable WHERE condition` style. Again: you can pick one or the other - but you cannot mix them (e.g. you cannot have a sub-`SELECT` in the middle of a `VALUES(., ., ....)` enumeration).
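Both forms can be sketched with Python's built-in `sqlite3` (hypothetical `src`/`dest` tables, not the OP's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE dest (id INTEGER, name TEXT)")
conn.execute("INSERT INTO src VALUES (1, 'a'), (2, 'b')")

# Form 1: INSERT ... VALUES with fixed literals.
conn.execute("INSERT INTO dest (id, name) VALUES (99, 'literal')")

# Form 2: INSERT ... SELECT pulling values from another table.
conn.execute("INSERT INTO dest (id, name) SELECT id, name FROM src WHERE id = 2")

rows = conn.execute("SELECT id, name FROM dest ORDER BY id").fetchall()
print(rows)  # [(2, 'b'), (99, 'literal')]
```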
|
SQL Server: how to do if statement within the insert statement
|
[
"",
"sql",
"sql-server",
""
] |
I want to group my data using SQL or R so that I can get top or bottom 10 `Subarea_codes` for each `Company` and `Area_code`. In essence: the `Subarea_codes` within the `Area_codes` where each `Company` has its largest or smallest result.
```
data.csv
Area_code Subarea_code Company Result
10 101 A 15
10 101 P 10
10 101 C 4
10 102 A 10
10 102 P 8
10 102 C 5
11 111 A 15
11 111 P 20
11 111 C 5
11 112 A 10
11 112 P 5
11 112 C 10
result.csv should be like this
Company Area_code Largest_subarea_code Result Smallest_subarea_code Result
A 10 101 15 102 10
P 10 101 10 102 8
C 10 102 5 101 4
A 11 111 15 112 10
P 11 111 20 112 5
C 11 112 10 111 5
```
Within each `Area_code` there can be hundreds of `Subarea_codes` but I only want the top and bottom 10 for each Company.
Also this doesn't have to be resolved in one query, but can be divided into two queries, meaning smallest is presented in results\_10\_smallest and largest in result\_10\_largest. But I'm hoping I can accomplish this with one query for each result.
What I've tried:
```
SELECT Company, Area_code, Subarea_code MAX(Result)
AS Max_result
FROM data
GROUP BY Subarea_code
ORDER BY Company
;
```
This gives me all the `Companies` with the highest results within each Subarea\_code. Which would mean: A, A, P, A-C for the data above.
|
Using `sqldf` package:
```
df <- read.table(text="Area_code Subarea_code Company Result
10 101 A 15
10 101 P 10
10 101 C 4
10 102 A 10
10 102 P 8
10 102 C 5
11 111 A 15
11 111 P 20
11 111 C 5
11 112 A 10
11 112 P 5
11 112 C 10", header=TRUE)
library(sqldf)
mymax <- sqldf("select Company,
Area_code,
max(Subarea_code) Largest_subarea_code
from df
group by Company,Area_code")
mymaxres <- sqldf("select d.Company,
d.Area_code,
m.Largest_subarea_code,
d.Result
from df d, mymax m
where d.Company=m.Company and
d.Subarea_code=m.Largest_subarea_code")
mymin <- sqldf("select Company,
Area_code,
min(Subarea_code) Smallest_subarea_code
from df
group by Company,Area_code")
myminres <- sqldf("select d.Company,
d.Area_code,
m.Smallest_subarea_code,
d.Result
from df d, mymin m
where d.Company=m.Company and
d.Subarea_code=m.Smallest_subarea_code")
result <- sqldf("select a.*, b.Smallest_subarea_code,b.Result
from mymaxres a, myminres b
where a.Company=b.Company and
a.Area_code=b.Area_code")
```
|
If you are already doing it in R, why not use the much more efficient `data.table` instead of `sqldf` with SQL syntax? Assuming `data` is your data set, simply:
```
library(data.table)
setDT(data)[, list(Largest_subarea_code = Subarea_code[which.max(Result)],
Resultmax = max(Result),
Smallest_subarea_code = Subarea_code[which.min(Result)],
Resultmin = min(Result)), by = list(Company, Area_code)]
# Company Area_code Largest_subarea_code Resultmax Smallest_subarea_code Resultmin
# 1: A 10 101 15 102 10
# 2: P 10 101 10 102 8
# 3: C 10 102 5 101 4
# 4: A 11 111 15 112 10
# 5: P 11 111 20 112 5
# 6: C 11 112 10 111 5
```
|
Grouping query into group and subgroup
|
[
"",
"sql",
"r",
"sqlite",
"greatest-n-per-group",
""
] |
I have been trying this query in different formats yet I don't know why I get this weird error.
```
insert into details ('refno','membershipno','name','dob','year_of_passing_diploma','phone','address','email','mobileno','city','state','pincode','degree','college') values('1001','2001','ragesh','1991-06-27','2012','24423021','D1, silver star apartments sec','rageshcv@gmail.com','9840526989','chennai','tamil nadu','600020','pilot','newzeleand','98455 27/06/14','IOB chennai','Chetta kadai','nandanam','leatrher','MD','3','24405158','chennai')
```
I don't find anything wrong in this query, yet I am still getting the error:
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''refno','membershipno','name','dob','year_of_passing_diploma','phone','address',' at line 1
```
Has anyone experienced the same issue?
|
You cannot use single quotes around your column identifiers. Either use backticks or nothing:
```
insert into details (refno,membershipno,name,dob,year_of_passing_diploma,phone,address,email,mobileno,city,state,pincode,degree,college) values('1001','2001','ragesh','1991-06-27','2012','24423021','D1, silver star apartments sec','rageshcv@gmail.com','9840526989','chennai','tamil nadu','600020','pilot','newzeleand','98455 27/06/14','IOB chennai','Chetta kadai','nandanam','leatrher','MD','3','24405158','chennai')
```
or
```
insert into details (`refno`,`membershipno`,`name`,`dob`,`year_of_passing_diploma`,`phone`,`address`,`email`,`mobileno`,`city`,`state`,`pincode`,`degree`,`college`) values('1001','2001','ragesh','1991-06-27','2012','24423021','D1, silver star apartments sec','rageshcv@gmail.com','9840526989','chennai','tamil nadu','600020','pilot','newzeleand','98455 27/06/14','IOB chennai','Chetta kadai','nandanam','leatrher','MD','3','24405158','chennai')
```
|
Apart from quoting: you have 14 columns and 23 values in your query
|
MySql syntax error is thrown in spite of writting the correct query
|
[
"",
"mysql",
"sql",
"syntax-error",
"mysql-error-1064",
""
] |
Given the following SQL Server schema:
```
CREATE TABLE #TableA (Id int);
CREATE TABLE #TableB (Id int, TableA_Id int, Status_Id int)
CREATE TABLE #Statuses (Id int)
SELECT
*
FROM
#TableA AS A
INNER JOIN #TableB AS B
ON A.Id = B.TableA_Id
INNER JOIN #Statuses AS S
ON B.Status_Id = S.Id
```
How can I get all the rows in `TableA` for which there is at least one entry for each of the rows in `#Statuses`?
For example, in the following set of data only row 2 of `#TableA` should be returned:
```
#TableA
[1]
[2]
[3]
[4]
#Statuses
[1]
[2]
[3]
#TableB
[1][1][1]
[2][1][3]
[3][2][1]
[4][2][2]
[5][2][3]
[6][3][1]
```
|
```
SELECT B.TableA_Id
FROM #Statuses S
LEFT JOIN #TableB B
ON B.Status_Id = S.Id
GROUP BY B.TableA_Id
HAVING COUNT(DISTINCT S.Id) = COUNT(DISTINCT B.Status_Id)
```
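This "relational division" idea can be sanity-checked on the sample data with Python's built-in `sqlite3`. A sketch using an equivalent formulation that compares each group's distinct status count against the total (table and column names adapted, since `#temp` tables are SQL Server syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE statuses (id INT)")
conn.execute("CREATE TABLE table_b (id INT, tablea_id INT, status_id INT)")
conn.executemany("INSERT INTO statuses VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO table_b VALUES (?, ?, ?)", [
    (1, 1, 1), (2, 1, 3),
    (3, 2, 1), (4, 2, 2), (5, 2, 3),
    (6, 3, 1),
])

# Keep only the TableA ids whose distinct statuses cover ALL statuses.
rows = conn.execute("""
    SELECT b.tablea_id
    FROM table_b b
    JOIN statuses s ON b.status_id = s.id
    GROUP BY b.tablea_id
    HAVING COUNT(DISTINCT b.status_id) = (SELECT COUNT(*) FROM statuses)
    ORDER BY b.tablea_id
""").fetchall()
print(rows)  # [(2,)] -- only row 2 has statuses 1, 2 and 3
```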
|
You can do this with a "simple" `where` clause. Assuming that the status column in `#TableB` only has valid status values:
```
select a.*
from #TableA a
where (select count(*) from statuses) =
(select count(distinct status) from #TableB b where b.TableA_Id = a.id);
```
You can relax this assumption. Here is one way:
```
select a.*
from #TableA a join
(select count(*) as numstatuses from #statuses) s
where (select count(distinct b.status)
from #TableB b join
#statuses s
on b.id = s.id
where b.TableA_Id = a.id
) = numstatuses
```
|
Select rows from SQL table where ALL of a set of parameters exist in joined table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Is there any way to combine MySQL queries column-wise? I have the queries below:
```
select count(ticket_id) as numInProgress from ost_ticket where status
= 'in progress' and topic_id not in (select topic_id from ost_org_site_map)
select count(ticket_id) as numAssigned from ost_ticket where status =
'open' and topic_id not in (select topic_id from ost_org_site_map)
```
and I am trying to get the result in the below format
```
---numInProgress | numAssigned---
2 | 8
---------------------------------
```
|
Just use these as subqueries in a select statement, e.g.
```
SELECT ($statement1$),($statement2$)
```
or with your queries:
```
SELECT (select count(ticket_id)
        from ost_ticket
        where status = 'in progress'
          and topic_id not in (select topic_id from ost_org_site_map)) as numInProgress,
       (select count(ticket_id)
        from ost_ticket
        where status = 'open'
          and topic_id not in (select topic_id from ost_org_site_map)) as numAssigned;
```
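The scalar-subquery pattern is easy to check with Python's built-in `sqlite3` (a simplified sketch without the `NOT IN` filter, on made-up ticket data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ost_ticket (ticket_id INT, status TEXT)")
conn.executemany("INSERT INTO ost_ticket VALUES (?, ?)", [
    (1, 'in progress'), (2, 'in progress'),
    (3, 'open'), (4, 'open'), (5, 'open'),
])

# Each parenthesised subquery becomes one column of a single result row.
row = conn.execute("""
    SELECT (SELECT COUNT(*) FROM ost_ticket
            WHERE status = 'in progress') AS numInProgress,
           (SELECT COUNT(*) FROM ost_ticket
            WHERE status = 'open') AS numAssigned
""").fetchone()
print(row)  # (2, 3)
```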
|
You may try this:
```
SELECT (select count(ticket_id) as numInProgress
from ost_ticket where status = 'in progress'
and topic_id not in (select topic_id from ost_org_site_map)),
(select count(ticket_id) as numAssigned
from ost_ticket where status = 'open'
and topic_id not in (select topic_id from ost_org_site_map));
```
ie combine both the queries to make a subquery in the select statement.
|
combine mysql queries columnwise
|
[
"",
"mysql",
"sql",
""
] |
I have an issue when trying to fetch my data from my database.
Here is my table design:
## table1:
```
user_id username
1 test
2 test2
3 test3
```
## table2:
```
id table2_userid key value
1 2 position admin
2 2 name myname
```
### What i want to output is:
```
user_id username key value
1 test NULL NULL
2 test2 position admin
3 test3 NULL NULL
```
### This is my current sql code:
```
SELECT table1.user_id, table1.username, table2.key, table2.value
FROM table1
LEFT JOIN table2 ON table1.user_id = table2.table2_userid WHERE table2.key="position"
```
However, this returns nothing. Please help me with this.
Thanks.
|
Try the following query; it should work for your problem:
```
SELECT table1.user_id, table1.username, table2.key, table2.value FROM
table1 LEFT JOIN table2 ON table1.user_id = table2.table2_userid and
table2.key="position" group by table1.user_id
```
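The key point is that moving the filter into the `ON` clause keeps the unmatched left-side rows, whereas a `WHERE` on the right table turns the LEFT JOIN into an inner join. A sketch of the OP's exact data with Python's built-in `sqlite3` (`key` is quoted since it is a reserved word in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (user_id INT, username TEXT)")
conn.execute('CREATE TABLE table2 (id INT, table2_userid INT, "key" TEXT, value TEXT)')
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(1, 'test'), (2, 'test2'), (3, 'test3')])
conn.executemany("INSERT INTO table2 VALUES (?, ?, ?, ?)",
                 [(1, 2, 'position', 'admin'), (2, 2, 'name', 'myname')])

# Filter in the ON clause: users without a 'position' row still appear, with NULLs.
rows = conn.execute("""
    SELECT t1.user_id, t1.username, t2."key", t2.value
    FROM table1 t1
    LEFT JOIN table2 t2
      ON t1.user_id = t2.table2_userid AND t2."key" = 'position'
    ORDER BY t1.user_id
""").fetchall()
print(rows)
# [(1, 'test', None, None), (2, 'test2', 'position', 'admin'), (3, 'test3', None, None)]
```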
|
Try using single quotes:
```
SELECT table1.user_id, table1.username, table2.key, table2.value
FROM table1
LEFT JOIN table2 ON table1.user_id = table2.table2_userid WHERE table2.key = 'position'
```
Otherwise your query seems fine to me.
I've read this on SO in a comment: `[S]ingle for [S]trings; [D]ouble for [D]atabase`
|
MySQL SELECT all from first table and join matched data from second table with specific where clause for second table filter
|
[
"",
"mysql",
"sql",
"select",
"join",
"where-clause",
""
] |
I have two tables to register `TESTS` and `GRADES`
Table `TEST (id_test, id_person, retry)`
Table `GRADE (id_grade, id_test, id_question, grade)`
It's possible that a person make a second test, in that case, retry would be char 'Y' (for yes)
Now I need to sum all the grades grouped by person.
The problem is that when I group by person, the sum includes both the retry = 'N' and the retry = 'Y' tests.
But if a person has a retry test, I want to sum only that one.
Example
if `TEST (id_test, id_person, retry)` has values
```
(1,1,'N')
(2,1,'Y')
(3,2,'N')
```
and `GRADE (id_grade, id_test, id_question, grade)` has values
```
(1, 1, 1, 4)
(2, 1, 2, 4)
(3, 1, 3, 4)
(4, 2, 1, 5)
(5, 2, 2, 5)
(6, 2, 3, 5)
(7, 3, 1, 7)
(8, 3, 2, 7)
(9, 3, 3, 7)
```
I want the sum for person #1 to be 15 (5+5+5) and not 27 ( (5+5+5) + (4+4+4) ). And for person #2, who does not have a retry, the sum is 21. How can I get a list of persons and their grades with SQL?
|
```
select id_person,sum(b.grade) from
(
select id_person,max(id_test) testid from test
group by id_person
)a
inner join GRADE as b on a.testid = b.id_test
group by id_person
```
[Working Example](http://sqlfiddle.com/#!3/4ed51/13)
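The `MAX(id_test)` subquery above can be checked on the sample data with Python's built-in `sqlite3`. Note this approach relies on the retry row having the larger `id_test`; if that assumption does not hold, pick the row by `retry` instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id_test INT, id_person INT, retry TEXT)")
conn.execute("CREATE TABLE grade (id_grade INT, id_test INT, id_question INT, grade INT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)",
                 [(1, 1, 'N'), (2, 1, 'Y'), (3, 2, 'N')])
conn.executemany("INSERT INTO grade VALUES (?, ?, ?, ?)", [
    (1, 1, 1, 4), (2, 1, 2, 4), (3, 1, 3, 4),
    (4, 2, 1, 5), (5, 2, 2, 5), (6, 2, 3, 5),
    (7, 3, 1, 7), (8, 3, 2, 7), (9, 3, 3, 7),
])

# Per person, keep only the latest test, then sum its grades.
rows = conn.execute("""
    SELECT a.id_person, SUM(g.grade)
    FROM (SELECT id_person, MAX(id_test) AS testid
          FROM test GROUP BY id_person) a
    JOIN grade g ON a.testid = g.id_test
    GROUP BY a.id_person
    ORDER BY a.id_person
""").fetchall()
print(rows)  # [(1, 15), (2, 21)]
```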
|
```
;WITH MyCTE AS
(
SELECT *,ROW_NUMBER()OVER(PARTITION BY id_person ORDER BY retry DESC) AS rn
FROM TEST
)
SELECT SUM(grade) AS Total
FROM MyCTE T
JOIN GRADE G
ON T.id_test = G.id_test
WHERE rn = 1
```
**[SQL Fiddle](http://sqlfiddle.com/#!6/b1ec8/3)**
|
SQL sum with condition field='x'
|
[
"",
"sql",
"sum",
"conditional-statements",
""
] |
```
SELECT
cat.CategoryID as CategoryID,
count(p.ProductID) as CountProducts
FROM
Category as cat
LEFT JOIN Products as p on p.CategoryID IN
(SELECT CategoryID FROM Category as cd
WHERE cd.ParrentCategoryID = '876')
WHERE
CategoryID = '876'
ORDER by Name
```
We get the error "Column 'CategoryID' in where clause is ambiguous".
Please tell me how to write it correctly.
|
The error you get tells you that the column `CategoryID` in your `WHERE` clause is ambiguous, that means that the system has a problem to identify the appropriate column because there are multiple `CategoryID` columns.
In order to fix this problem, use the alias to specify which column you want to use for your `WHERE` clause:
```
SELECT cat2.CategoryID AS CategoryID
,cat2.Name AS CategoryName
,COUNT(p.ProductID) AS CountProducts
FROM Category AS cat
INNER JOIN Category AS cat2 ON cat2.ParrentCategoryID = cat.CategoryID
INNER JOIN Products AS p ON p.CategoryID = cat2.CategoryID
WHERE cat.CategoryID = '876'
GROUP BY cat2.CategoryID, cat2.Name
ORDER BY cat2.Name
```
I also changed a bit the query to get the same result but instead of using a combination of `LEFT JOIN` + `IN` clause + sub query, i used `INNER JOIN` clauses.
With this query you only need to define your desired `CategoryID` once and it will automatically get every child categories.
I'm not sure that your query runs correctly because you're using the `COUNT` function without grouping the results by `CategoryID`...
Hope this will help you.
|
The "ambiguous column" error means that there's a reference to a column identifier, and MySQL has two (or more) possible columns that match the specification.
In this specific case, it's the reference to `CategoryID` column in the `WHERE` clause.
It's "ambiguous" whether `CategoryID` is meant to refer to the `CategoryID` column from `cat`, or refer to the `CategoryID` column from `p`.
The fix is to *qualify* the column reference with the table alias to remove the ambiguity. That is, in place of `CategoryID`, specify either `cat.CategoryID` or `p.CategoryID`.
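The same ambiguity can be reproduced with Python's built-in `sqlite3`, which reports the same kind of error when an unqualified column exists in both joined tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE category (CategoryID INT, Name TEXT)")
conn.execute("CREATE TABLE products (ProductID INT, CategoryID INT)")

err = None
try:
    # Unqualified CategoryID in WHERE: both tables have that column.
    conn.execute("SELECT * FROM category cat JOIN products p "
                 "ON p.CategoryID = cat.CategoryID WHERE CategoryID = 1")
except sqlite3.OperationalError as e:
    err = str(e)
print(err)  # e.g. "ambiguous column name: CategoryID"

# Qualified with the table alias: prepares and runs fine.
ok = conn.execute("SELECT * FROM category cat JOIN products p "
                  "ON p.CategoryID = cat.CategoryID "
                  "WHERE cat.CategoryID = 1").fetchall()
print(ok)  # [] -- the tables are empty
```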
---
Some additional notes:
It's very odd to have an `IN (subquery)` predicate in the `ON` clause. If `CategoryID` is not guaranteed to be unique in the `Category` table, this query would likely generate more rows than would be expected.
The normal pattern for a query like this would be something akin to:
```
SELECT cat.CategoryID AS CategoryID
, COUNT(p.ProductID) AS CountProducts
FROM Category cat
LEFT
JOIN Products p
ON p.CategoryID = cat.CategoryID
WHERE cat.ParrentCategoryID = '876'
ORDER BY cat.Name
```
Normally, the join predicates in the `ON` clause reference columns in two (or more) tables in the FROM clause. (That's not a SQL requirement. That's just the usual pattern.)
The predicate in the `ON` clause of the query above specifies that the value of the `CategoryID` column from the `cat` (Category) table *match* the value in the `CategoryID` column from the `p` (Product) table.
Also, best practice is to qualify **all** columns referenced in a query with the name of the table, or a table alias, even if the column names are not ambiguous. For example, the `Name` column in the `ORDER BY` clause.
One big benefit of this is that if (at some time in the future) a column named `Name` is added to the `Products` table, the query with an unqualified reference to `Name` would (then) begin to throw an "ambiguous column" error for the that reference. So, qualifying all column names avoids a working query "breaking" when new columns are added to existing tables.
Another big benefit is that to the reader of the statement. MySQL can very quickly look in the dictionary, to find which table `Name` is in. But readers of the SQL statement (like myself) who do not "know" which table contains the column named `Name`, would also need to look at the table definitions to find out. If the name were qualified, we would already "know" which table that column was from.
|
mysql ...in where clause is ambiguous
|
[
"",
"mysql",
"sql",
""
] |
I currently have the following query:
```
select title,location,categorie,division,
reference.description as CA
from emploi
left join reference on emploi.categorie = reference.code_ref
where id_type = 'CA' and emploi.status ='1'
```
and I have a second query, which is the same but is taking a different value from the reference table:
```
select title,location,categorie,division,
reference.description as DI
from emploi
left join reference on emploi.division = reference.code_ref
where id_type = 'DI' and emploi.status ='1'
```
Is there a way to execute both these queries?
I was thinking it was something along the lines of the following, but it clearly does not work:
```
select title,location,categorie,division,
reference.description as DI, reference.description as CA
from emploi
(left join reference on emploi.division = reference.code_ref
where id_type = 'DI')
(left join reference on emploi.categorie = reference.code_ref
where id_type = 'CA')
where emploi.status ='1'
```
Thank you!
**NOTE: Each entry from the emploi table has a division and categorie column, which need to both be mapped to separate entries ON THE SAME ROW**
EDIT:
This is the working solution, for those interested:
```
select title,location,categorie,division,
r.description as DI, r1.description as CA
from emploi
left join reference r on emploi.division = r.code_ref
and r.id_type = 'DI'
left join reference r1 on emploi.categorie = r1.code_ref
and r1.id_type = 'CA'
where emploi.status ='1'
```
|
You may try this:
```
select title,location,categorie,division,
r.description as DI, r1.description as CA
from emploi
left join reference r on emploi.division = r.code_ref
left join reference r1 on emploi.categorie = r1.code_ref
where emploi.status ='1'
and id_type in ('CA', 'DI')
```
Presently you have incorrect syntax, as you are using the `where` clause twice with your `left join`. Also, give your tables aliases and use those aliases to qualify the column names.
Something like
```
select tablename.title,tablename.location,tablename.categorie,tablename.division,
r.description as DI, r1.description as CA
from emploi
left join reference r on emploi.division = r.code_ref
left join reference r1 on emploi.categorie = r1.code_ref
where emploi.status ='1'
and id_type in ('CA', 'DI')
```
**Note:** Use the correct table name in place of `tablename` above
**EDIT:-**
If you don't want to use the IN clause, then:
```
select title,location,categorie,division,
r.description as DI, r1.description as CA
from emploi
left join reference r on emploi.division = r.code_ref
and r.id_type = 'DI'
left join reference r1 on emploi.categorie = r1.code_ref
and r1.id_type = 'CA'
where emploi.status ='1'
```
Assuming that `id_type` is a column of `emploi`
|
There are a couple of ways of achieving this:
```
select title,location,categorie,division,
reference.description as DI
from emploi
left join reference on emploi.division = reference.code_ref
where id_type IN ('DI','CA') and emploi.status ='1'
```
or you can change the where clause to an OR
```
where (id_type = 'DI' OR id_type = 'CA') and emploi.status ='1'
```
or if you are looking forward to what you wrote originally then
```
select title,location,categorie,division,
reference.description as DI, reference.description as CA
from emploi
left join reference r1 on emploi.division = r1.code_ref
and r1.id_type = 'DI'
left join reference r2 on emploi.categorie = r2.code_ref
and r2.id_type = 'CA'
```
Or a couple of other ways also...
|
Multiple sql joins from same reference table
|
[
"",
"mysql",
"sql",
""
] |
Note: with help from RhodiumToad on #postgresql, I've arrived at a solution, which I posted as answer. If anyone can improve on this, please chime in!
I have not been able to adapt a [previous recursive query solution](https://stackoverflow.com/questions/20979831/recursive-query-used-for-transitive-closure) to the following directed acyclic graph that includes multiple "root" (ancestor-less) nodes. I'm trying to write a query whose output is what is commonly known as a closure table: a many-to-many table that stores every path from each node to each of its descendants and to itself:
```
1 2 11 8 4 5 7
\/ | | \ | /
3 | \ 6
\ | \ /
9 | 10
\/ /
12 13
\ /
14
CREATE TABLE node (
id SERIAL PRIMARY KEY,
node_name VARCHAR(50) NOT NULL
);
CREATE TABLE node_relations (
id SERIAL PRIMARY KEY,
ancestor_node_id INT REFERENCES node(id),
descendant_node_id INT REFERENCES node(id)
);
INSERT into node (node_name)
SELECT 'node ' || g FROM generate_series(1,14) g;
INSERT INTO node_relations(ancestor_node_id, descendant_node_id) VALUES
(1,3),(2,3),(4,6),(5,6),(7,6),(3,9),(6,10),(8,10),(9,12),(11,12),(10,13),(12,14),(13,14);
```
It's been hard to pinpoint the issue(s) - am I missing `node_relations` rows? Is the query wrong?
```
WITH RECURSIVE node_graph AS (
SELECT ancestor_node_id, ARRAY[descendant_node_id] AS path, 0 AS level
FROM node_relations
UNION ALL
SELECT nr.ancestor_node_id, ng.path || nr.descendant_node_id,ng.level + 1 AS level
FROM node_graph ng
JOIN node_relations nr ON nr.descendant_node_id = ng.ancestor_node_id
)
SELECT path[array_upper(path,1)] AS ancestor,
path[1] AS descendant,
path,
level as depth
FROM node_graph
ORDER BY level, ancestor;
```
Expected Output:
```
ancestor | descendant | path
---------+------------+------------------
1 | 3 | "{1,3}"
1 | 9 | "{1,3,9}"
1 | 12 | "{1,3,9,12}"
1 | 14 | "{1,3,9,12,14}"
2 | 3 | "{2,3}"
2 | 9 | "{2,3,9}"
2 | 12 | "{2,3,9,12}"
2 | 14 | "{2,3,9,12,14}"
3 | 9 | "{3,9}"
3 | 12 | "{3,9,12}"
3 | 14 | "{3,9,12,14}"
4 | 6 | "{4,6}"
4 | 10 | "{4,6,10}"
4 | 13 | "{4,6,10,13}"
4 | 14 | "{4,6,10,13,14}"
5 | 6 | "{5,6}"
5 | 10 | "{5,6,10}"
5 | 13 | "{5,6,10,13}"
5 | 14 | "{5,6,10,13,14}"
6 | 10 | "{6,10}"
6 | 13 | "{6,10,13}"
6 | 14 | "{6,10,13,14}"
7 | 6 | "{7,6}"
7 | 10 | "{7,6,10}"
7 | 13 | "{7,6,10,13}"
7 | 14 | "{7,6,10,13,14}"
8 | 10 | "{8,10}"
8 | 13 | "{8,10,13}"
8 | 14 | "{8,10,13,14}"
9 | 12 | "{9,12}"
9 | 14 | "{9,12,14}"
10 | 13 | "{10,13}"
10 | 14 | "{10,13,14}"
11 | 12 | "{11,12}"
11 | 14 | "{11,12,14}"
12 | 14 | "{12,14}"
13 | 14 | "{13,14}"
```
|
With help from RhodiumToad on #postgresql, I've arrived at this solution:
```
WITH RECURSIVE node_graph AS (
SELECT ancestor_node_id as path_start, descendant_node_id as path_end,
array[ancestor_node_id, descendant_node_id] as path
FROM node_relations
UNION ALL
SELECT ng.path_start, nr.descendant_node_id as path_end,
ng.path || nr.descendant_node_id as path
FROM node_graph ng
JOIN node_relations nr ON ng.path_end = nr.ancestor_node_id
)
SELECT * from node_graph order by path_start, array_length(path,1);
```
The result is exactly as expected.
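SQLite also supports `WITH RECURSIVE`, so the traversal can be sanity-checked from Python. Since SQLite has no array type, this sketch keeps only the path endpoints (the ancestor/descendant pairs of the closure table) rather than the full path, on a small subgraph:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node_relations "
             "(ancestor_node_id INT, descendant_node_id INT)")
# Subgraph: 1 -> 3, 2 -> 3, 3 -> 9
conn.executemany("INSERT INTO node_relations VALUES (?, ?)",
                 [(1, 3), (2, 3), (3, 9)])

# Seed with every edge, then repeatedly extend each path by one edge.
rows = conn.execute("""
    WITH RECURSIVE node_graph AS (
        SELECT ancestor_node_id AS path_start,
               descendant_node_id AS path_end
        FROM node_relations
        UNION ALL
        SELECT ng.path_start, nr.descendant_node_id
        FROM node_graph ng
        JOIN node_relations nr ON ng.path_end = nr.ancestor_node_id
    )
    SELECT path_start, path_end FROM node_graph
    ORDER BY path_start, path_end
""").fetchall()
print(rows)  # [(1, 3), (1, 9), (2, 3), (2, 9), (3, 9)]
```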
|
An alternative approach would be to traverse the graph in reversed order:
```
WITH RECURSIVE cte AS (
SELECT array[r.ancestor_node_id, r.descendant_node_id] AS path
FROM node_relations r
LEFT JOIN node_relations r0 ON r0.ancestor_node_id = r.descendant_node_id
WHERE r0.ancestor_node_id IS NULL -- start at the end
UNION ALL
SELECT r.ancestor_node_id || c.path
FROM cte c
JOIN node_relations r ON r.descendant_node_id = c.path[1]
)
SELECT path
FROM cte
ORDER BY path;
```
This produces a subset with every path from each root node to its ultimate descendant. For deep trees that also spread out a lot this would entail much fewer join operations. To additionally add every sub-path, you could append a `LATERAL` join to the outer `SELECT`:
```
WITH RECURSIVE cte AS (
SELECT array[r.ancestor_node_id, r.descendant_node_id] AS path
FROM node_relations r
LEFT JOIN node_relations r0 ON r0.ancestor_node_id = r.descendant_node_id
WHERE r0.ancestor_node_id IS NULL -- start at the end
UNION ALL
SELECT r.ancestor_node_id || c.path
FROM cte c
JOIN node_relations r ON r.descendant_node_id = c.path[1]
)
SELECT l.path
FROM cte, LATERAL (
SELECT path[1:g] AS path
FROM generate_series(2, array_length(path,1)) g
) l
ORDER BY l.path;
```
I ran a quick test, but it didn't run faster than RhodiumToad's solution. It might still be faster for big or wide tables. Try with your data.
|
Recursive query challenge - simple parent/child example
|
[
"",
"sql",
"postgresql",
"recursive-query",
"transitive-closure-table",
""
] |
I'm using SQL Server 2012 and I need to write a query that will extract data greater than a particular date. The date field is called 'CreatedOn" and dates are recorded in this format "2014-08-18 17:02:57.903".
Currently, the date part of my query stands as follows:
```
WHERE CreatedOn > '2014-08-18'
```
Problem is extracted data includes those of '2014-08-18'. It's like the > (greater than) is acting like >= (greater than or equal)!
How should I write my query if I need all data, say greater than '2014-08-18'?
|
Try the following condition. The problem is that `2014-08-18` is really `2014-08-18 00:00:00` (includes the hour), so any date time in that day will be greater.
```
WHERE CreatedOn >= '2014-08-19'
```
|
'2014-08-18' actually means '2014-08-18 00:00:00'
So if you do not want the 18th, you should either use '2014-08-19' or specify the time of day your dates must exceed.
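The midnight semantics are easy to check outside SQL Server as well; a small Python sketch (not T-SQL, just illustrating the comparison):

```python
from datetime import datetime

cutoff = datetime.fromisoformat("2014-08-18")            # really midnight on the 18th
same_day = datetime.fromisoformat("2014-08-18 17:02:57")

assert same_day > cutoff                                 # '>' still includes rows from the 18th
assert same_day < datetime.fromisoformat("2014-08-19")   # comparing to the 19th excludes them
```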
|
Using Dates in SQL Server 2012 Query
|
[
"",
"sql",
"sql-server",
"date",
"sql-server-2012",
""
] |
I figured one way to solve my poorly formatted table would be to parse the "date" column into three columns, which I can then properly run queries on. The date format is 'ddmmmyyyy' style (31DEC2013).
How can I parse this string into three distinct columns, "day", "month" and "year"?
Thanks in advance
|
The DATEPART command can do that for you. For example:
```
select datepart(day, '31DEC2013')
select datepart(month, '31DEC2013')
select datepart(year, '31DEC2013')
```
I would recommend just leaving your date in one column, and using datepart on the fly in your queries.
|
You are trying to get your data from one WRONG type to another WRONG type. It is a date value and you should use the appropriate data type for it, which is `DATE` or `DATETIME`.
Storing this data with the appropriate data type will give you access to the many datetime functions for data manipulation.
## Test Data
```
DECLARE @TABLE TABLE(String_Month VARCHAR(9))
INSERT INTO @TABLE VALUES
('31DEC2013')
,('26NOV2013')
,('22SEP2013')
,('31DEC2013')
,('31DEC2013')
```
## Query
```
SELECT CAST( CAST(DATEPART(YEAR, String_Month) AS VARCHAR(4)) + '-'
+ CAST(DATEPART(MONTH, String_Month) AS VARCHAR(2))+ '-'
+ CAST(DATEPART(DAY, String_Month) AS VARCHAR(2)) AS DATE) AS Result
FROM @TABLE
```
## Result
```
Result
2013-12-31
2013-11-26
2013-09-22
2013-12-31
2013-12-31
```
|
How can I parse a varchar string into three columns, Year, Month and Day
|
[
"",
"sql",
"sql-server",
"parsing",
""
] |
When I try to debug a stored procedure using Microsoft SQL Server Management Studio, I encounter this error message: Unable to start T-SQL Debugging. Could not connect to computer ("**\***"). The debugger cannot connect to the remote computer. This may be because the remote computer does not exist or a firewall may be preventing communication to the remote computer. Please see Help for assistance.
I really appreciate any help.
|
SSMS needs to be able to find your SQL Server by DNS. When you connect to the DB in SSMS, you can use (local), but (local) does not resolve on your network so the debug program can not find it. Try connecting to localhost or your computer's name on the "Connect to Server" screen.
You should not have to run as administrator.

|
I just wanted to pay this forward, as after searching for quite some time I've yet to see anyone mention the problem that I encountered here.
I ran into this issue while connected to SQL using a SQL Server Authenticated user. Once I tried using a Windows Authenticated user I was able to debug without issue. That user must also be assigned the sysadmin role.
Hope this helps someone.
|
Unable to start T-SQL Debugging. Could not connect to computer ("*****")
|
[
"",
"sql",
"sql-server",
"debugging",
""
] |
I am planning to write a T-SQL function that takes two input strings and returns the percentage of similar words as output. For example:
```
SELECT [dbo].[FN_CalcSimilarWords]('Golden horses hotel','Hotel Golden Horses')
```
Returns :
```
3/3
```
or
```
SELECT [dbo].[FN_CalcSimilarWords]('Golden horses','Golden horses Malaysia')
```
Returns :
```
2/3
```
I was thinking about looping and comparing the words after parsing the strings with [This split function](http://www.codeproject.com/Tips/666620/T-SQL-Most-Practical-Split-Function). Any other ideas for better performance?
|
If you want to do it in SQL, here is an approach I would take.
Create two temporary tables, using a split routine, called Words1 and Words2
Now join the tables and get the count, i.e.
```
select count(*)
from Words1 w1
join Words2 w2 on w1.word=w2.word
```
Let SQL do it the way it is optimized for
Here is how to get the match count and the count from the left-hand table (with a `LEFT JOIN`, `w2.word` is `NULL` for unmatched rows, and `COUNT(DISTINCT ...)` ignores `NULL`s):
```
select count(distinct w2.word) as Matches,
       count(distinct w1.word) as FromW1
from #Words1 w1
left join #Words2 w2 on w1.word = w2.word
```
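The same set-intersection idea, sketched in Python for clarity (the names are mine; this shows the logic, not a drop-in replacement for the SQL):

```python
def word_similarity(a, b):
    # Split into distinct lower-cased words and compare as sets,
    # so word order and duplicates do not matter.
    w1, w2 = set(a.lower().split()), set(b.lower().split())
    return len(w1 & w2) / max(len(w1), len(w2))

assert word_similarity("Golden horses hotel", "Hotel Golden Horses") == 1.0
assert round(word_similarity("Golden horses", "Golden horses Malaysia"), 2) == 0.67
```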
|
With this solution I am assuming you want duplicates to be removed. Switching the first and second parameters makes no difference to the result.
It returns a ratio, not a percentage, since functions can only return one value or a table. I assume you want values between 0 and 1, making 2/3 = 0.67, or 67 percent if you multiply by 100.
```
CREATE function f_functionx
(
@str1 varchar(2000),
@str2 varchar(2000)
)
returns decimal(5,2)
as
BEGIN
DECLARE @returnvalue decimal(5,2)
DECLARE @list1 table(value varchar(50))
INSERT @list1
SELECT t.c.value('.', 'VARCHAR(2000)')
FROM (
SELECT x = CAST('<t>' +
REPLACE(@str1, ' ', '</t><t>') + '</t>' AS XML)
) a
CROSS APPLY x.nodes('/t') t(c)
DECLARE @list2 table(value varchar(50))
INSERT @list2
SELECT t.c.value('.', 'VARCHAR(2000)')
FROM (
SELECT x = CAST('<t>' +
REPLACE(@str2, ' ', '</t><t>') + '</t>' AS XML)
) a
CROSS APPLY x.nodes('/t') t(c)
;WITH isect as
(
SELECT count(*) match FROM
(
SELECT value FROM @list1
INTERSECT
SELECT value FROM @list2
) x
), total as
(
SELECT max(cnt) cnt
FROM
(
SELECT count(distinct value) cnt FROM @list1
UNION ALL
SELECT count(distinct value) FROM @list2
) x
)
SELECT
@returnvalue = cast(isect.match as decimal(9,2)) / total.cnt
FROM total
CROSS JOIN isect
RETURN @returnvalue
END
GO
```
You call the function like this:
```
SELECT dbo.f_functionx('Golden horses', 'Golden horses')
SELECT dbo.f_functionx('Golden horses', 'Golden horses XX')
```
Returns:
```
1
0.67
```
|
Calculating the number of similar words between two strings in SQL
|
[
"",
"sql",
"sql-server",
"string",
"algorithm",
"t-sql",
""
] |
Is there a known standard for translating **SQL logic into common English** for non-programmers to understand? The monthly reports I'm generating need to be verified by the client, and I don't think I have the balls to shove SQL down his richie-rich throat. :)
For example, this:
```
SELECT cust.*
FROM sl_customers cust
LEFT JOIN sl_orders orders ON cust.id = orders.customer_id
WHERE
DATE_FORMAT(orders.delivery_date, '%Y-%m-%d') BETWEEN '2014-08-01' AND '2014-08-31'
AND DATE_FORMAT(cust.created, '%Y-%m-%d') BETWEEN '2014-08-01' AND '2014-08-31'
AND orders.status != 'cancelled'
AND cust.entry = 'New User'
AND cust.infosrc IN ('Ads - Newspaper', 'Ads - TV', 'Ads - Radio', 'Internet - Social Network')
```
|
Here's a mostly line-by-line translation:
```
SELECT cust.*
```
Get the customer informationβ¦
```
FROM sl_customers cust
LEFT JOIN sl_orders orders ON cust.id = orders.customer_id
```
β¦for every order (and the customer associated with the order's customer ID),β¦
```
WHERE
DATE_FORMAT(orders.delivery_date, '%Y-%m-%d') BETWEEN '2014-08-01' AND '2014-08-31'
```
β¦where the order's delivery date is between August 1st and 31stβ¦
(Aside: This part, and the following one, could be written more simply in SQL as `orders.delivery_date BETWEEN '2014-08-01' AND '2014-08-31'`. You don't need the `DATE_FORMAT()` here.)
```
AND DATE_FORMAT(cust.created, '%Y-%m-%d') BETWEEN '2014-08-01' AND '2014-08-31'
```
β¦and the customer's created date is between August 1st and 31stβ¦
```
AND orders.status != 'cancelled'
```
β¦and the order's status is not "cancelled"β¦
```
AND cust.entry = 'New User'
```
β¦and the customer's entry is "New User" (whatever that is?)β¦
```
AND cust.infosrc IN ('Ads - Newspaper', 'Ads - TV', 'Ads - Radio', 'Internet - Social Network')
```
β¦and the customer's infosrc is "Ads - Newspaper", "Ads - TV", "Ads - Radio", or "Internet - Social Network".
|
I don't think there's a standard, but if I were trying to explain this to a non-SQL speaker, I'd say something like:
> This gives me all the new customers created during August
> who have active (non-cancelled) orders
> delivered during August and who came from
> newspaper, TV, or radio ads, or through social media
|
Write SQL code in common english for non-programmers to understand
|
[
"",
"mysql",
"sql",
""
] |
I'm writing a PostgreSQL function to count the number of times a particular text substring occurs in another piece of text. For example, calling count('foobarbaz', 'ba') should return 2.
I understand that to test whether the substring occurs, I use a condition similar to the below:
```
WHERE 'foobarbaz' like '%ba%'
```
However, I need it to return 2 for the number of times 'ba' occurs. How can I proceed?
Thanks in advance for your help.
|
I would highly suggest checking out this answer I posted to [*"How do you count the occurrences of an anchored string using PostgreSQL?"*](https://dba.stackexchange.com/a/166763/2639). The chosen answer was shown to be massively slower than an adapted version of `regexp_replace()`. The overhead of creating the rows, and the running the aggregate is just simply too high.
The fastest way to do this is as follows...
```
SELECT
(length(str) - length(replace(str, replacestr, '')) )::int
/ length(replacestr)
FROM ( VALUES
('foobarbaz', 'ba')
) AS t(str, replacestr);
```
Here we
1. Take the length of the string, `L1`
2. Subtract from `L1` the length of the string with all of the replacements removed `L2` to get `L3` the difference in string length.
3. Divide `L3` by the length of the replacement to get the *occurrences*
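The three steps translate directly into any language; a quick Python check of the arithmetic (the function name is mine):

```python
def count_occurrences(s, sub):
    # (L1 - L2) / len(sub): the length lost by removing every match,
    # divided by the match length, equals the number of occurrences.
    return (len(s) - len(s.replace(sub, ""))) // len(sub)

assert count_occurrences("foobarbaz", "ba") == 2
```

Like the SQL expression, this counts non-overlapping occurrences.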
For comparison that's about **five times faster** than the method of using `regexp_matches()` which looks like this.
```
SELECT count(*)
FROM ( VALUES
('foobarbaz', 'ba')
) AS t(str, replacestr)
CROSS JOIN LATERAL regexp_matches(str, replacestr, 'g');
```
|
How about use a regular expression:
```
SELECT count(*)
FROM regexp_matches('foobarbaz', 'ba', 'g');
```
The `'g'` flag repeats multiple matches on a string (not just the first).
|
PostgreSQL count number of times substring occurs in text
|
[
"",
"sql",
"postgresql",
""
] |
My aim is a SQL statement that identifies differing accounts in a list of assets (assets have a main number and a sub number).
The table looks like this:
```
amainnr | asubnr | account
-----------------------------
10000 | 0 | 123
10000 | 1 | 123
10000 | 2 | 456
10000 | 3 | 789
10001 | 0 | 123
10001 | 1 | 123
10001 | 2 | 123
10002 | 0 | 123
10003 | 0 | 456
10004 | 0 | 123
10005 | 0 | 123
10005 | 1 | 456
```
As a result I need a table with all the lines where a mainnr exists with differing accounts, e.g.:
```
amainnr | asubnr | account
-----------------------------
10000 | 0 | 123
10000 | 1 | 123
10000 | 2 | 456
10000 | 3 | 789
10005 | 0 | 123
10005 | 1 | 456
```
I created a SQL Fiddle with this table: <http://sqlfiddle.com/#!2/c7e6d>
I tried a lot with GROUP BY, HAVING and COUNT, but so far I didn't succeed.
Is this problem solvable with SQL at all?
Any help would be appreciated!
|
```
SELECT b.amainnr,
b.asubnr,
b.account
FROM atable b
WHERE b.amainnr IN
(SELECT a.amainnr
FROM atable a
GROUP BY a.amainnr HAVING count(distinct(a.account)) > 1)
```
|
Try this:
```
select * from atable a
where exists (select 1 from atable b
where a.id!=b.id
and a.amainnr = b.amainnr
and a.account != b.account)
```
|
SQL: Find different accounts in asset list
|
[
"",
"mysql",
"sql",
"select",
""
] |
**SQL Server 2008**
My SQL table is Like below
```
----------------------------------------
**name department fee_paid id**
----------------------------------------
Farooq ECE 10000 NULL
Khan EEE 20000 NULL
Syed Chemistry 4000 NULL
Syed Chemistry 14000 NULL
Yousuf Physics 2000 NULL
Yousuf Physics 18000 NULL
Zubair EEE 4000 NULL
----------------------------------------
```
Now I wish to fill in data for the id fields like below:
```
----------------------------------------
**name department fee_paid id**
----------------------------------------
Farooq ECE 10000 1000
Khan EEE 20000 1001
Syed Chemistry 4000 1002
Syed Chemistry 14000 1003
Yousuf Physics 2000 1004
Yousuf Physics 18000 1005
Zubair EEE 4000 1006
----------------------------------------
```
I tried the code below, but it stores the same value in all id fields. I know this is because I am missing a WHERE condition in the update query below. But how do I use a WHERE condition with the above table, given that it has duplicates?
```
declare @i as int =1000
while @i<=1006
begin
update flatfile set id=@i
set @i+=1
end
```
|
Why don't you use `ALTER TABLE`?
```
alter table flatfile
add ID int identity(1000,1)
```
**EDIT** -
if you want to do it in loop:
```
declare @i as int = 1000
while @i<=1006
begin
update top(1) flatfile set id=@i
where id is null;
set @i+=1
end
```
|
To get the ID it's possible to use the `Row_Number` function.
Windowing functions are not allowed in an update script, but we can write an updatable view, or a CTE, and work with it instead:
```
WITH CTE AS (
SELECT [name], [department], [fee_paid], [id]
, num = Row_Number()
OVER (ORDER BY (SELECT NULL)) + 999
FROM table1
)
UPDATE CTE SET
ID = num;
```
`SQLFiddle Demo`
|
How to update particular column fields with while loop
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
This has been edited - I have tried speeding this up but it is still very slow - can anyone offer any more help please?
```
SELECT DISTINCT d.`group_ID`,
CONCAT(u.fname, ' ', u.lname) AS uname,
u.corp_id,
c.`corp_name`,
u.`user_username`,
u.location,
(
SELECT email
FROM users
WHERE role = 'Level 4'
AND location = d.location
AND email NOT LIKE '%manager.%'
LIMIT 1
) AS trainer,
u.`email`,
GROUP_CONCAT(
DISTINCT d.title
ORDER BY d.title SEPARATOR ', '
) AS docs
FROM regulatory d
LEFT JOIN users u
ON u.user_id = d.`group_ID`
AND u.`role` = 'Level 6'
LEFT JOIN corporations c
ON c.`corp_id` = u.`corp_id`
WHERE d.`spare1` <> 'green.gif'
AND u.`officialjobtitle` <> 'none'
AND d.`date_updated` < NOW()
AND (u.`corp_id` IN (1))
GROUP BY d.`group_ID`
ORDER BY corp_id,
u.`location`,
uname ;
```
|
* You negate the left join from `regulatory` to `users` in the where
clause, so use an `INNER JOIN` for those two tables
* use `INNER JOIN` on corporations if this is possible (I just don't know if it is or not)
* remove DISTINCT. You are already performing a `GROUP BY` and if that is done correctly distinct will be completely redundant (as it doesn't do anything that `GROUP BY` cannot do)
* I would try replacing the correlated subquery for `trainer` with a "derived table" that is already grouped so you don't multiply the rows
* when using `GROUP BY` in MySQL never rely on the "extension". Always specify all the fields needed to produce unique rows (like you would in almost every other SQL compliant rdbms)
Make sure you have indexes on all fields involved in the joins and if possible those used in the where clause too.
Inspect the explain plan.
**Suggested edits:**
```
SELECT
d.`group_ID`
, CONCAT ( u.fname , ' ' , u.lname ) AS uname
, u.corp_id
, c.`corp_name`
, u.`user_username`
, u.location
, MAX(l4.trainer) as trainer
, u.`email`
, GROUP_CONCAT(DISTINCT d.title ORDER BY d.title SEPARATOR ', ') AS docs
FROM regulatory d
INNER JOIN users u ON d.`group_ID` = u.user_id
AND u.`role` = 'Level 6'
LEFT JOIN corporations c ON u.`corp_id` = c.`corp_id`
LEFT JOIN (
SELECT location, MAX(email) AS trainer
FROM users
WHERE ROLE = 'Level 4'
AND email NOT LIKE '%manager.%'
GROUP BY location
) l4 ON d.location = l4.location
WHERE d.`spare1` <> 'green.gif'
AND u.`officialjobtitle` <> 'none'
AND d.`date_updated` < NOW()
AND u.`corp_id` IN (1)
GROUP BY
d.`group_ID`
, u.fname
, u.lname
, u.corp_id
, c.`corp_name`
, u.`user_username`
, u.location
, u.`email`
ORDER BY
corp_id
, u.`location`
, uname
;
```
|
I think that your query has a little problem with this line:
```
...
(
SELECT email
FROM users
WHERE role = 'Level 4'
AND location = d.location
AND email NOT LIKE '%manager.%'
LIMIT 1
) AS trainer,
...
```
For each row you prepare a single SELECT. Maybe you should change that subquery to another join.
I would also recommend using MySQL's EXPLAIN feature:
<http://dev.mysql.com/doc/refman/5.1/en/using-explain.html>
|
Slow MySQL Query - how can it be speed up?
|
[
"",
"mysql",
"sql",
"performance",
""
] |
I'm actually trying to select one record from `amgb` using a limit while doing a join with `amga`.
My tables are like this:
**amga**
```
"id" "itemId" "itemTempId" "itemName" "itemCountry" "userId"
"1" "US1" "T001" "Samsung Galaxy Note 5" "US" "1"
"2" "CA2" "T002" "Samsung Galaxy Note 6" "CA" "2"
"3" "UK3" "T003" "Samsung Galaxy Note 7" "UK" "3"
```
**amgb**
```
"id" "itemId" "itemTempId" "itemImageName" "userId"
"1" "US1" "T001" "front.jpg" "1"
"2" "US1" "T001" "side-left.jpg" "1"
"3" "US1" "T001" "side-right.jpg" "1"
"4" "US1" "T001" "back.jpg" "1"
"5" "CA2" "T002" "front.jpg" "2"
"6" "CA2" "T002" "side-left.jpg" "2"
"7" "CA2" "T002" "side-right.jpg" "2"
"8" "CA2" "T002" "back.jpg" "2"
"9" "UK3" "T003" "front.jpg" "3"
```
In plain language, this amounts to something like this:
`select itemName from amga where itemId = 'US1' and userId = 1 along with 1 itemImageName from amgb where itemId = sameOneAsBefore and userId = sameOneAsBefore`
The end result I'm hoping for is as simple as this:
`Samsung Galaxy Note 5 | front.jpg` - that's itemName from `amga` and itemImageName from `amgb` where userId = 1 and itemId = US1
But I don't see where I'm going wrong, despite the joins and everything else looking correct. What could be causing the error `/* SQL Error (1054): Unknown column 'b.userId' in 'on clause' */`?
```
SELECT a.itemName, b.itemImageName
FROM amga a
LEFT JOIN (
SELECT
itemImageName
FROM
amgb
LIMIT 1
) b
ON a.userId = b.userId and a.itemId = b.itemId
WHERE a.userId = 1 and a.itemId = 'US1';
```
|
If the subquery returns one column and no more than one record, you can include it as a column in the query:
```
SELECT a.itemName,
(SELECT itemImageName
FROM amgb b
WHERE a.userId = b.userId AND a.itemId = b.itemId
LIMIT 1
) AS itemImageName
FROM amga a
WHERE a.userId = 1 AND a.itemId = 'US1';
```
|
Try this - the derived table must also select the columns used in the `ON` clause:
```
SELECT a.itemName, b.itemImageName
FROM amga a
LEFT JOIN (
SELECT itemId, userId, itemImageName
FROM amgb
LIMIT 1
) b
ON a.userId = b.userId and a.itemId = b.itemId
WHERE a.userId = 1 and a.itemId = 'US1';
```
|
MySql select with Limit 1 when using a join with 2 tables
|
[
"",
"mysql",
"sql",
""
] |
Is there any way I can script out all the SQL Server objects (tables, SP, functions etc) under a schema?
In our database we have a table containing the names of all the schemas, and there are more than 500 schemas. Some of them are for dev and some are for prod. I need to script out all the objects under the dev schemas and create a new database.
|
Thanks guys for your replies. I solved this by generating all the scripts through SSMS and then creating a schema-only database. Then I dropped all the tables, views, SPs, functions etc. that were not part of the schemas I needed.
It took me around 20 mins, but in the end the work was done.
|
[ApexSQL Script](https://www.apexsql.com/sql_tools_script.aspx) is the tool which can be very helpful in this situation. It is the tool which creates SQL scripts for database objects, and it can script all objects into one script or all objects separately.
For this situation here is what you should do:
1. Select server and the database which you want to script and load them.
2. Go to the View tab and click the βObject filterβ button, then select the βEdit filterβ button:
[](https://i.stack.imgur.com/7CNyL.png)
3. In the Filter editor for all objects select the βInclude if:β and βClick here to add filter criteriaβ:
[](https://i.stack.imgur.com/HtWX7.png)
4. Select the βSchemaβ, βEqualsβ and Enter the desired schema name, then click OK:
[](https://i.stack.imgur.com/oERcm.png)
5. Click on the Home tab, check all objects and Click the βScriptβ button:
[](https://i.stack.imgur.com/7CroC.png)
6. In the third step of the Synchronization wizard, under the Script file tab, select if you want to create one script for all objects or for each object individually from the Granularity drop down menu:
[](https://i.stack.imgur.com/hWRtI.png)
7. In the last step of the Script wizard click the Create button and check out the results β you will have the script which can be executed in the SQL Server Management Studio.
|
Script out all SQL objects under particular schema
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to store a certain struct into my database that has a JSON field within it.
```
type Comp struct {
CompId int64 `db:"comp_id" json:"comp_id"`
StartDate time.Time `db:"start_date" json:"start_date"`
EndDate time.Time `db:"end_date" json:"end_date"`
WeeklySchedule json.RawMessage `db:"weekly_schedule" json:"weekly_schedule"`
}
```
The schema for the table is:
```
CREATE TABLE IF NOT EXISTS Tr.Comp(
comp_id SERIAL,
start_date timestamp NOT NULL,
end_date timestamp NOT NULL,
weekly_schedule json NOT NULL,
PRIMARY KEY (comp_id)
);
```
I am using sqlx and lib/pq driver in my project and the following will not execute. Instead it panics saying there is a nil pointer. DB is a global `*sqlx.DB` struct
```
tx := DB.MustBegin()
compFixture := Comp{
StartDate: time.Now(),
EndDate: time.Now().AddDate(1, 0, 0),
WeeklySchedule: json.RawMessage([]byte("{}")),
}
_, err = tx.NamedExec(
`INSERT INTO
Tr.Comp(comp_id,
start_date, end_date, weekly_schedule)
VALUES (DEFAULT,
:start_date, :end_date, :weekly_schedule)
RETURNING comp_id;`, compFixture)
if err != nil {
t.Fatal("Error creating fixture.", err)
}
```
When I remove `weekly_schedule` from the schema and fixture, things run fine. But for some reason, when this field is included, the program panics. Any idea how I should define the `weekly_schedule` field in both my DB schema and Go struct?
|
sqlx has a type `JSONText` in `github.com/jmoiron/sqlx/types` that will do what you need
[doc](http://godoc.org/github.com/jmoiron/sqlx/types#JSONText) for JSONText
|
I don't know how clean a solution this is, but I ended up making my own data type `JSONRaw`. The DB driver sees it as a `[]byte`, but it can still be treated like a `json.RawMessage` in the Go code.
```
type JSONRaw json.RawMessage
func (j JSONRaw) Value() (driver.Value, error) {
byteArr := []byte(j)
return driver.Value(byteArr), nil
}
func (j *JSONRaw) Scan(src interface{}) error {
asBytes, ok := src.([]byte)
if !ok {
return error(errors.New("Scan source was not []bytes"))
}
err := json.Unmarshal(asBytes, &j)
if err != nil {
return error(errors.New("Scan could not unmarshal to JSONRaw"))
}
return nil
}
func (m *JSONRaw) MarshalJSON() ([]byte, error) {
return *m, nil
}
func (m *JSONRaw) UnmarshalJSON(data []byte) error {
if m == nil {
return errors.New("json.RawMessage: UnmarshalJSON on nil pointer")
}
*m = append((*m)[0:0], data...)
return nil
}
```
This is copy paste reimplementation of `MarshalJSON` and `UnmarshalJSON` from the `encoding/json` library.
|
Storing Golang JSON into Postgresql
|
[
"",
"sql",
"json",
"postgresql",
"struct",
"go",
""
] |
I am currently trying to export the past 3 months of data from a table on my SQL Server to a new table in a new database (same columns etc. but slightly different primary keys). I get an error when doing this because there is a duplicated primary key in the new table. I was wondering whether there is a way to ignore this error, or to insert only one of the rows involved in the primary key conflict.
So far I have
```
SELECT *
FROM [action]
WHERE systemdatetime < getdate()
AND systemdatetime > DATEADD(month, -3, getdate())
```
And I am currently using the export wizard
|
I don't know what your PK field is called, but let's say it is ID. You can do this to number the records that all have the same ID, then just pull out the first one from each sequence. Notice I ordered by the date, so the earliest date is the one selected.
```
SELECT ID, systemdatetime, etc..
FROM (
SELECT *,ROW_NUMBER() OVER(PARTITION BY ID ORDER BY systemdatetime) Row
FROM [action]
WHERE systemdatetime < getdate()
AND systemdatetime > DATEADD(month, -3, getdate())
) tmp
WHERE Row = 1
```
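If you want to try the dedup logic without touching the real table, here is a minimal sketch with Python's bundled SQLite (requires SQLite ≥ 3.25 for window functions; the table and column names are simplified):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE action(id INTEGER, systemdatetime TEXT)")
con.executemany("INSERT INTO action VALUES (?, ?)", [
    (1, "2014-06-10"), (1, "2014-07-01"), (2, "2014-06-20"),
])

# Number the rows sharing an ID by date, then keep only the first of each.
rows = con.execute("""
    SELECT id, systemdatetime FROM (
        SELECT id, systemdatetime,
               ROW_NUMBER() OVER (PARTITION BY id ORDER BY systemdatetime) AS rn
        FROM action
    ) t
    WHERE rn = 1
    ORDER BY id
""").fetchall()

assert rows == [(1, "2014-06-10"), (2, "2014-06-20")]
```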
|
Create an unkeyed copy of your target table and dump your data into that. From there, you can decide what is really unique and what isn't.
|
Ignoring duplicated primary keys when exporting data from one database to another in SQL
|
[
"",
"sql",
"sql-server",
"export",
"primary-key",
""
] |
Is there any alternative, more elegant expression for 1=2 that can be used in WHERE clauses in SQL?
Something resembling `WHERE false`
Couldn't find an answer to this in google.
**EDIT:**
In languages such as C++ the expressions (1==2) and 0 are equivalent; why doesn't T-SQL accept `false` or `true` as a condition?
|
Well, T-SQL defines the `WHERE` clause as `WHERE <search_condition>`, with a variety of complex syntax that excludes a plain "expression result". Here is the reference for ["search\_condition"](http://msdn.microsoft.com/en-us/library/ms173545.aspx).
`False` and `4` are expression results.
What you expect is perhaps more logical, as `WHERE` acts as an in/out gate, and of course any "search\_condition", once evaluated, represents a boolean. But for T-SQL it must be a "search\_condition" and not a boolean-evaluable expression.
MS Access works the way you mention, where the `WHERE` clause acts purely on the evaluation of the expression.
These are valid in MS Access:
```
where false -- excludes all
where 4 -- includes all, as casts to true
where mycolumn = 'aa' -- depends on mycolumn content
```
|
Firstly, I think `1=2` is itself elegant, so you need to specify exactly in which situation or how you are using it. `1=2` specifies a false condition (*which you are already aware of*). This is used mostly when you want a false condition, for example when you want to create the schema of a table without copying the data. A similar thing is `1=1`, which is used for a true condition.
Also, as far as performance is concerned, I think `1=2` will be the better option (*but I am not 100% sure about that*).
|
An alternative to 1=2 in sql
|
[
"",
"sql",
"t-sql",
""
] |
Lets say I have a table called `census` with the following information:
```
COUNTRY PROVINCE CITY POPULATION
==============================================
USA California Sacramento 1234
USA California SanFran 4321
USA Texas Houston 1111
USA Texas Dallas 2222
Canada Ontario Ottawa 3333
Canada Manitoba Winnipeg 4444
```
I'm building a report at the country/province level, which gives me the following:
```
SELECT country, province, SUM(population)
FROM census
GROUP BY country, province;
COUNTRY PROVINCE SUM(POPULATION)
=======================================
USA California 5555
USA Texas 3333
Canada Ontario 3333
Canada Manitoba 4444
```
I'm looking to have an "overall summary" row included on the report, so that the final result looks like:
```
COUNTRY PROVINCE SUM(POPULATION)
=======================================
USA California 5555
USA Texas 3333
Canada Ontario 3333
Canada Manitoba 4444
TOTAL 16665
```
I'm acquainted with `ROLLUP`s, but I can't seem to find a combination that gets me what I'm looking for. Using `GROUP BY ROLLUP(country, province)` includes the total value I want, but it also includes a large number of extra values which I don't care about. This is also true with `GROUP BY ROLLUP(country), province`
How can I go about making the "total" record?
I'm currently calculating it with a `UNION ALL` and repeating 90% of the first query with a different `GROUP BY`, but because the first query is non-trivial, the result is slow and ugly code.
Here's a SQL Fiddle for those who want to play with this: <http://sqlfiddle.com/#!4/12ad9/5>
|
Ok, I finally came up two approaches that are flexible and don't make me feel like a terrible programmer.
---
The first solution involves [`GROUPING SETS`](http://docs.oracle.com/cd/E11882_01/server.112/e25554/aggreg.htm#DWHSG8631).
What I'm essentially trying to do is group the expression at two different levels: one at the overall level, and one at the `(country, province)` level.
If I were to split the query into two parts and use a `UNION ALL`, one half would have a `GROUP BY country, province` and the other would lack a grouping clause. The un-grouped section can also be represented as `GROUP BY ()` if we feel like it. This will come in handy in a moment.
That gives us something like:
```
SELECT country, province, SUM(population)
FROM census
GROUP BY country, province
UNION ALL
SELECT NULL AS country, NULL AS province, SUM(population)
FROM census
GROUP BY ();
```
The query works, but it doesn't scale well. The more calculations you need to make, the more time you spend repeating yourself.
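The `UNION ALL` fallback, at least, runs on engines without `GROUPING SETS`; a quick check with Python's bundled SQLite (sample data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE census(country TEXT, province TEXT, population INTEGER)")
con.executemany("INSERT INTO census VALUES (?, ?, ?)", [
    ("USA", "California", 1234), ("USA", "California", 4321),
    ("USA", "Texas", 1111), ("USA", "Texas", 2222),
    ("Canada", "Ontario", 3333), ("Canada", "Manitoba", 4444),
])

rows = con.execute("""
    SELECT country, province, SUM(population) FROM census
    GROUP BY country, province
    UNION ALL
    SELECT 'TOTAL', NULL, SUM(population) FROM census
""").fetchall()

assert ("TOTAL", None, 16665) in rows   # 5555 + 3333 + 3333 + 4444
assert len(rows) == 5                   # four grouped rows plus the total
```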
By using a `GROUPING SETS`, I can specify that I want the data grouped in two different ways:
```
SELECT country, province, SUM(population)
FROM census
GROUP BY GROUPING SETS( (country, province), () );
```
Now we're getting somewhere! But what about our result row? How can we detect it and label it accordingly? That's where the [`GROUPING`](http://docs.oracle.com/cd/E11882_01/server.112/e25554/aggreg.htm#DWHSG8622) function comes in. It returns a 1 if the column is NULL because of a GROUP BY statement.
```
SELECT
CASE
WHEN GROUPING(country) = 1 THEN 'TOTAL'
ELSE country
END AS country,
province,
SUM(population),
GROUPING(country) AS grouping_flg
FROM census
GROUP BY GROUPING SETS ( (country, province), () );
```
---
If we don't like the `GROUPING SETS` approach, we can still use a traditional [`ROLLUP`](http://docs.oracle.com/cd/B19306_01/server.102/b14223/aggreg.htm#i1007413) but with a minor change.
Instead of passing each column to the `ROLLUP` individually, we pass the collection of columns as a set by encasing them in parentheses. This makes it so the set of columns is treated as a *single* group instead of *multiple* groups. The following query will give you the same results as the previous:
```
SELECT
CASE
WHEN GROUPING(country) = 1 THEN 'TOTAL'
ELSE country
END AS country,
province,
SUM(population),
GROUPING(country) AS grouping_flg
FROM census
GROUP BY ROLLUP( (country, province) );
```
---
Feel free to try both approaches for yourself!
<http://sqlfiddle.com/#!4/12ad9/102>
|
This is exactly what **[`GROUPING SETS`](http://docs.oracle.com/cd/B19306_01/server.102/b14223/aggreg.htm#i1006842)** expressions was designed to do:
```
SELECT country, province, SUM(population)
FROM census
GROUP BY GROUPING SETS
( (country, province), -- first group by country and province
() -- then by (nothing), i.e. a total grouping
);
```
See the [SQL-Fiddle](http://sqlfiddle.com/#!4/12ad9/56)
|
Overall summary with multiple GROUP BY
|
[
"",
"sql",
"oracle",
"group-by",
"rollup",
"grouping-sets",
""
] |
I have a table which has data in this format:
```
id name cnt1 created_on
'1','uac','116','2014-09-06 17:16:29'
'2','uac','116','2014-09-06 17:17:06'
'3','uac','90','2014-09-06 21:53:34'
'4','uac','100','2014-08-06 21:53:34'
'5','uac','1','2014-07-06 21:53:34'
'6','uac','2','2014-07-26 21:53:34'
'7','uac','3','2014-09-01 21:53:34'
'8','uac','4','2014-09-02 21:53:34'
'9','uac','5','2014-09-03 21:53:34'
'10','uac','6','2014-09-04 21:53:34'
'11','uac','7','2014-09-05 21:53:34'
'12','uac','8','2014-09-07 21:53:34'
'13','uac','9','2014-09-08 21:53:34'
```
I want data for a given date range to be grouped 1. weekly 2. monthly.
Also, for each week or month I should get the row for the last date of that week or month. E.g.
if I am getting monthly data, I should get the following output:
```
'6','uac','1','2014-07-26 21:53:34'
'4','uac','100','2014-08-06 21:53:34'
'13','uac','116','2014-09-08 21:53:34'
```
I tried this query
```
SELECT id,name,cnt1,created_on
FROM qa_dashboard.project_qa_coverage_detail
GROUP BY year(created_on), month(created_on);
```
but this is giving me the following output:
```
'5','uac','1','2014-07-06 21:53:34'
'4','uac','100','2014-08-06 21:53:34'
'1','uac','116','2014-09-06 17:16:29'
```
Please help
|
You don't actually want a `group by` query. You want to get the last row for each set. Here is a method using `not exists`:
```
SELECT cd.*
FROM qa_dashboard.project_qa_coverage_detail cd
WHERE NOT EXISTS (SELECT 1
FROM qa_dashboard.project_qa_coverage_detail cd2
WHERE year(cd2.created_on) = year(cd.created_on) and
month(cd2.created_on) = month(cd.created_on) and
cd2.created_on > cd.created_on
) ;
```
This is saying, in essence: "Get me all rows from the table where there is no other row with the same year and month and a more recent `created_on` date." That is a fancy way of saying "Get me the last row for each month."
EDIT:
If you want the values from the first and last date of the month, then use a `join` method instead:
```
select cd.*, cdsum.minco as first_created_on
from qa_dashboard.project_qa_coverage_detail cd join
(select year(cd2.created_on) as yr, month(cd2.created_on) as mon,
min(cd2.created_on) as minco, max(cd2.created_on) as maxco
from qa_dashboard.project_qa_coverage_detail cd2
group by year(cd2.created_on), month(cd2.created_on)
) cdsum
     on cd.created_on = cdsum.maxco;
```
|
Pretty sure this will get you your expected output:
**Last for month:**
```
select t.*
from tbl t
join (select max(created_on) as last_for_month
from tbl
group by year(created_on), month(created_on)) v
on t.created_on = v.last_for_month
```
Except where you say you expect:
```
'6','uac','1','2014-07-26 21:53:34'
```
I think what you really want is:
```
'6','uac','2','2014-07-26 21:53:34'
```
(based on the sample data you provided)
**Fiddle:**
<http://sqlfiddle.com/#!9/faaa3/4/0>
**Last for week:**
```
select t.*
from tbl t
join (select max(created_on) as last_for_week
from tbl
group by year(created_on), week(created_on)) v
on t.created_on = v.last_for_week
```
Based on your comment, if you want the last value and the last id for the last of the month, but the value of cnt1 for the first of the month, use the following (change month() to week() if you want the same but for week):
```
select v.id, v2.first_created_on, v.cnt1
from (select t.id, t.created_on, t.cnt1
from tbl t
join (select max(created_on) as last_created_on
from tbl
group by year(created_on), month(created_on)) v
on t.created_on = v.last_created_on) v
join (select min(created_on) as first_created_on
from tbl
group by year(created_on), month(created_on)) v2
on year(v.created_on) = year(v2.first_created_on)
and month(v.created_on) = month(v2.first_created_on)
```
**Fiddle:**
<http://sqlfiddle.com/#!9/faaa3/5/0>
Output:
```
| ID | FIRST_CREATED_ON | CNT1 |
|----|----------------------------------|------|
| 4 | August, 06 2014 21:53:34+0000 | 100 |
| 6 | July, 06 2014 21:53:34+0000 | 2 |
| 13 | September, 01 2014 21:53:34+0000 | 9 |
```
|
How to get data from mysql for given date range group by weekly,monthly
|
[
"",
"mysql",
"sql",
""
] |
We have a table which captures the swipe record of each `employee`. I am trying to write a query to fetch the list of distinct employee records by the first swipe for today.
We are saving the swipe date info in a `datetime` column. Here is my query; it's throwing an exception.
```
select distinct
[employee number], [Employee First Name]
,[Employee Last Name]
,min([DateTime])
,[Card Number]
,[Reader Name]
,[Status]
,[Location]
from
[Interface].[dbo].[VwEmpSwipeDetail]
group by
[employee number]
where
[datetime] = CURDATE();
```
Getting error:
> Column 'Interface.dbo.VwEmpSwipeDetail.Employee First Name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Any help please?
Thanks in advance.
|
The error says it all:
> ...Employee First Name' is invalid in the select list because it is not contained
> in either an aggregate function or the GROUP BY clause
That said, there are other columns that need attention too.
Either reduce the columns returned to only those needed, include the remaining columns in your `GROUP BY` clause, or wrap them in aggregate functions (MIN/MAX). Also, your `WHERE` clause must be placed before the `GROUP BY`.
Try:
```
select distinct [employee number]
,[Employee First Name]
,[Employee Last Name]
,min([DateTime])
,[Card Number]
,min([Reader Name])
from [Interface].[dbo].[VwEmpSwipeDetail]
where CAST([datetime] AS DATE)=CAST(GETDATE() AS DATE)
group by [employee number], [Employee First Name], [Employee Last Name], [Card Number]
```
I've removed `status` and `location` as this is likely to return non-distinct values. In order to return this data, you may need a subquery (or CTE) that first gets the unique IDs of the `SwipeDetails` table, and from this list you can join on to the other data, something like:
```
SELECT [employee number],[Employee First Name],[Employee Last Name].. -- other columns
FROM [YOUR_TABLE]
WHERE SwipeDetailID IN (SELECT MIN(SwipeDetailsId) as SwipeId
FROM SwipeDetailTable
WHERE CAST([datetime] AS DATE)=CAST(GETDATE() AS DATE)
GROUP BY [employee number])
```
|
Please try the query below:
```
select distinct [employee number],[Employee First Name]
,[Employee Last Name]
,min([DateTime])
,[Card Number]
,[Reader Name]
,[Status]
,[Location] from [Interface].[dbo].[VwEmpSwipeDetail] group by [employee number],[Employee First Name]
,[Employee Last Name]
,[Card Number]
,[Reader Name]
,[Status]
,[Location] having [datetime]=GetDate();
```
|
Column invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause
|
[
"",
"sql",
"sql-server",
"group-by",
"aggregate",
""
] |
I have only gotten this far:
```
select timestamp, trans, Count(trans)
From(
Select to_char(CREATED_TIMESTAMP) as timestamp,SOURCE_MSISDN||DEST_MSISDN||AMOUNT as trans
From ADMDBMC.TRANSACTION_CASH
WHERE TO_date(CREATED_TIMESTAMP) > = '1-sep-2014' AND TO_date(CREATED_TIMESTAMP) < '2-sep-2014'
and STATUS_DESCRIPTION='SUCCESS'
)
group by timestamp,trans
Having count(trans)>1
order by count(trans) desc
```
|
I think you can achieve that by truncating your timestamp on the nearest multiple of 10 minutes: you could replace `to_char(CREATED_TIMESTAMP) as timestamp` with `regexp_replace(to_char(CREATED_TIMESTAMP, 'dd-Mon-yyyy hh24:mi'), '.$', '0')` and I think your group by would then be ok.
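As an untested sketch of that truncation idea (reusing the table and columns from the question's own query; the `regexp_replace` just zeroes the last minute digit to form a 10-minute bucket):

```
Select regexp_replace(to_char(CREATED_TIMESTAMP, 'dd-Mon-yyyy hh24:mi'), '.$', '0') as ten_min_bucket,
       SOURCE_MSISDN||DEST_MSISDN||AMOUNT as trans,
       count(*)
From ADMDBMC.TRANSACTION_CASH
Where STATUS_DESCRIPTION = 'SUCCESS'
Group by regexp_replace(to_char(CREATED_TIMESTAMP, 'dd-Mon-yyyy hh24:mi'), '.$', '0'),
         SOURCE_MSISDN||DEST_MSISDN||AMOUNT
Having count(*) > 1
```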
**EDIT** : the previous solution was only working if the 2 transactions were part of the same multiple of 10 minutes. Here is a better one:
```
Select *
From
(
Select CREATED_TIMESTAMP,
SOURCE_MSISDN||DEST_MSISDN||AMOUNT as trans,
lag(CREATED_TIMESTAMP, 1, null) over (partition by SOURCE_MSISDN||DEST_MSISDN||AMOUNT order by CREATED_TIMESTAMP) as PREVIOUS_TIMESTAMP
From ADMDBMC.TRANSACTION_CASH
Where TO_date(CREATED_TIMESTAMP) >= '1-sep-2014'
And TO_date(CREATED_TIMESTAMP) < '2-sep-2014'
And STATUS_DESCRIPTION='SUCCESS'
)
where CREATED_TIMESTAMP <= PREVIOUS_TIMESTAMP + INTERVAL '10' MINUTE
```
The `lag` analytic function lets you get, for every row, the timestamp of the preceding row in the same group, identified here by the concatenation of `SOURCE_MSISDN` / `DEST_MSISDN` / `AMOUNT`. Once we have this, we just have to check the difference.
|
Emmanuel's way would return a false negative if, for example, one transaction occurred at 11:49 and a second at 11:52.
You could use the LEAD or LAG analytic function with a window option to get the timestamp of an older row than the current one and calculate the [difference between two timestamps](https://stackoverflow.com/questions/11617962/calculating-difference-between-two-timestamps-in-oracle-in-milliseconds),
or COUNT(\*) with a window, for example:
```
select * FROM (
    Select to_char(CREATED_TIMESTAMP) as timestamp,
           SOURCE_MSISDN||DEST_MSISDN||AMOUNT as trans,
           COUNT(*) OVER (PARTITION BY SOURCE_MSISDN||DEST_MSISDN||AMOUNT ORDER BY CREATED_TIMESTAMP RANGE NUMTODSINTERVAL(600,'SECOND') PRECEDING) L_OF_TRANS
    From ADMDBMC.TRANSACTION_CASH
    WHERE TO_date(CREATED_TIMESTAMP) >= '1-sep-2014' AND TO_date(CREATED_TIMESTAMP) < '2-sep-2014'
      and STATUS_DESCRIPTION='SUCCESS'
) WHERE L_OF_TRANS > 1
```
|
How do I find transactions that are less than 10 minutes apart using timestamps in sql
|
[
"",
"sql",
"oracle",
""
] |
Please help me to write an SQL query in the Oracle database. There is a table called tbl and it has 12 rows. I want to select the first 4 rows first, then the next 4, and finally the last 4 rows.
Can anyone tell me how I can do this in Informix?
|
You can use `rownum`:
```
select * from (select t.*, rownum rn from tbl t) where rn between 1 and 4;
/
select * from (select t.*, rownum rn from tbl t) where rn between 5 and 8;
/
select * from (select t.*, rownum rn from tbl t) where rn between 9 and 12;
/
```
If you're using `order by` clause then use `row_number()` ([documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm))
```
select * from (select t.*, row_number() over (order by column_name) rn from tbl t) where rn between 1 and 4;
/
select * from (select t.*, row_number() over (order by column_name) rn from tbl t) where rn between 5 and 8;
/
select * from (select t.*, row_number() over (order by column_name) rn from tbl t) where rn between 9 and 12;
/
```
|
EDIT: this should now be fixed with a 3-level select:
```
select * from (
select q1.*, rownum as rn from ( --get correct rownum
select * from tbl order by column --get correct order
) q1
) q2
where q2.rn between 1 and 4; -- filter
```
for first part.
For second and third part:
```
where q2.rn between 5 and 8
where q2.rn between 9 and 12
```
|
Fetch a fixed number of rows in SQL query in Oracle
|
[
"",
"sql",
"oracle",
"informix",
""
] |
Sorry if this is a duplicate, but I haven't found one. Why can't I use my column alias defined in the `SELECT` in the `ORDER BY` when I use `CASE`?
Consider this simple query:
```
SELECT NewValue=CASE WHEN Value IS NULL THEN '<Null-Value>' ELSE Value END
FROM dbo.TableA
ORDER BY CASE WHEN NewValue='<Null-Value>' THEN 1 ELSE 0 END
```
The result is an error:
> Invalid column name 'NewValue'
[Here's a sql-fiddle.](http://sqlfiddle.com/#!3/0018ac/4/0) (Replace the `ORDER BY NewValue` with the `CASE WHEN...` that's commented out.)
I know I can use `ORDER BY CASE WHEN Value IS NULL THEN 1 ELSE 0 END` like [**here**](http://sqlfiddle.com/#!3/0018ac/5/0) in this case, but actually the query is more complex and I want to keep it as readable as possible. Do I have to use a sub-query or CTE instead, and if so, why?
**Update**: as Mikael Eriksson has commented, *any* expression in combination with an alias is not allowed. So even this (pointless) query fails for the same reason:
```
SELECT '' As Empty
FROM dbo.TableA
ORDER BY Empty + ''
```
Result:
> Invalid column name 'Empty'.
So an alias is allowed in an `ORDER BY`, and so is an expression, but **not both**. Why? Is it too difficult to implement? Since I'm mainly a programmer, I think of aliases as variables which could simply be used in an expression.
|
**This has to do with how a SQL dbms resolves ambiguous names.**
I haven't yet tracked down this behavior in the SQL standards, but it seems to be consistent across platforms. Here's what's happening.
```
create table test (
col_1 integer,
col_2 integer
);
insert into test (col_1, col_2) values
(1, 3),
(2, 2),
(3, 1);
```
Alias "col\_1" as "col\_2", and use the alias in the ORDER BY clause. The dbms resolves "col\_2" in the ORDER BY as an alias for "col\_1", and sorts by the values in "test"."col\_1".
```
select col_1 as col_2
from test
order by col_2;
```
```
col_2
--
1
2
3
```
Again, alias "col\_1" as "col\_2", but use an expression in the ORDER BY clause. The dbms resolves "col\_2" *not* as an alias for "col\_1", but as the column "test"."col\_2". It sorts by the values in "test"."col\_2".
```
select col_1 as col_2
from test
order by (col_2 || '');
```
```
col_2
--
3
2
1
```
So in your case, your query fails because the dbms wants to resolve "NewValue" in the expression as a column name in a base table. But it's not; it's a column alias.
**PostgreSQL**
This behavior is documented in PostgreSQL in the section [Sorting Rows](http://www.postgresql.org/docs/9.3/static/queries-order.html). Their stated rationale is to reduce ambiguity.
> Note that an output column name has to stand alone, that is, it cannot be used in an expression β for example, this is **not** correct:
```
SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong
```
> This restriction is made to reduce ambiguity. There is still ambiguity if an ORDER BY item is a simple name that could match either an output column name or a column from the table expression. The output column is used in such cases. This would only cause confusion if you use AS to rename an output column to match some other table column's name.
**Documentation error in SQL Server 2008**
A *slightly* different issue with respect to [aliases in the ORDER BY clause](http://technet.microsoft.com/en-us/library/ms188723%28v=sql.105%29.aspx).
> If column names are aliased in the SELECT list, only the alias name can be used in the ORDER BY clause.
Unless I'm insufficiently caffeinated, that's not true at all. This statement sorts by "test"."col\_1" in both SQL Server 2008 and SQL Server 2012.
```
select col_1 as col_2
from test
order by col_1;
```
|
It seems this limitation is related to another limitation: "column aliases can't be referenced in the same `SELECT` list". For example, this query:
```
SELECT Col1 AS ColAlias1 FROM T ORDER BY ColAlias1
```
Can be translated to:
```
SELECT Col1 AS ColAlias1 FROM T ORDER BY 1
```
Which is a legal query. But this query:
```
SELECT Col1 AS ColAlias1 FROM T ORDER BY ColAlias1 + ' '
```
Should be translated to:
```
SELECT Col1 AS ColAlias1, ColAlias1 + ' ' FROM T ORDER BY 2
```
Which will raise the error:
> Unknown column 'ColAlias1' in 'field list'
Finally, it seems these are SQL standard behaviours, not an impossibility of implementation.
More info at: [Here](https://dba.stackexchange.com/questions/96556/reference-column-alias-in-same-select-list)
Note: The last query can be executed by `MS Access` without error but will raise the mentioned error with `SQL Server`.
|
Why can't i refer to a column alias in the ORDER BY using CASE?
|
[
"",
"sql",
"t-sql",
"sql-server-2005",
"sql-order-by",
""
] |
I have a stored procedure that stores values in temp tables.
It all works well, but I can not bcp it with
```
exec master..xp_cmdshell 'bcp "exec sp_test ''2006-07-21'' " queryout c:\test.txt -c '
```
If I change the table to regular, then it all works. Can you not use temp tables this way?
I would not necessarily want to share the code as it contains company stuff, but it is basically like this
```
SELECT
*
INTO #Extractr
FROM
TABLE A
WHERE ID in (4,9,14)
```
The error message is `invalid object #Extractr`
Thanks!
|
I stumbled upon this just a few days ago.
What I've learned from this link:
> <http://www.dbforums.com/microsoft-sql-server/1605565-can-we-have-temporary-table-store-procedure-when-using-bcp.html>
is that bcp won't see local temp tables, as they live in the tempdb database, not the one you are using.
Also, I got mine working by changing the local temp tables to global ones (## instead of #; a simple find-and-replace did it).
As @Kevin has mentioned in the comments, you can alternatively use table variables for the same purpose.
Hope this will work for you.
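For illustration, an untested sketch of the global temp table variant, using the names from the question. A `##` table is visible across sessions, so bcp's separate connection can read it:

```
-- Inside the stored procedure: write to a global temp table instead
SELECT
    *
INTO ##Extractr
FROM
    TABLE A
WHERE ID in (4,9,14)
```

The bcp `queryout` statement could then select from `##Extractr` directly. Remember to drop the global table afterwards, since it persists until every session referencing it disconnects.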
|
Have you tried referencing the temp table like this in your query: `tempdb..#Extractr`
For example:
```
SELECT
*
INTO tempdb..#Extractr
FROM
TABLE A
WHERE ID in (4,9,14)
```
|
bcp won't output temp tables
|
[
"",
"sql",
"bcp",
""
] |
Using Oracle 10g, I have a table that looks like this (syntax shortened for brevity):
```
CREATE TABLE "BUZINESS"."CATALOG"
( "ID" NUMBER PRIMARY KEY,
"ENTRY_ID" VARCHAR2(40 BYTE) NOT NULL,
"MSG_ID" VARCHAR2(40 BYTE) NOT NULL,
"PUBLISH_STATUS" VARCHAR2(30 BYTE) NOT NULL, /* Can be NEW or PUBLISHED */
CONSTRAINT "CATALOG_UN1" UNIQUE (ENTRY_ID, MSG_ID)
)
```
One process, Process A, writes Catalog entries with a PUBLISH\_STATUS of 'NEW'. A second process, Process B, then comes in, grabs all 'NEW' messages, and then changes the PUBLISH\_STATUS to 'PUBLISHED'.
I need to write a query that will grab all PUBLISH\_STATUS='NEW' rows, BUT
I'm trying to prevent an out of order fetch, so that if Process B marks a row as PUBLISH\_STATUS='PUBLISHED' with MSG\_ID '1000', and then Process A writes an out of order row as PUBLISH\_STATUS='NEW' with MSG\_ID '999', the query will never fetch that row when grabbing all 'NEW' rows.
So, if I start with the data:
```
INSERT INTO BUZINESS.CATALOG VALUES (1, '1000', '999', 'NEW');
INSERT INTO BUZINESS.CATALOG VALUES (2, '1000', '1000', 'PUBLISHED');
INSERT INTO BUZINESS.CATALOG VALUES (3, '1000', '1001', 'NEW');
INSERT INTO BUZINESS.CATALOG VALUES (4, '2000', '1999', 'NEW');
INSERT INTO BUZINESS.CATALOG VALUES (5, '2000', '2000', 'PUBLISHED');
INSERT INTO BUZINESS.CATALOG VALUES (6, '2000', '2001', 'NEW');
INSERT INTO BUZINESS.CATALOG VALUES (7, '3000', '3001', 'NEW');
```
Then my query should grab only rows with ID:
3, 6, 7
I then have to join these rows with other data, so the result needs to be JOINable.
So far, I have a very large, ugly query UNIONing two correlated subqueries to do this. Could someone help me write a better query?
|
Requiring non-presence of joinable data is best solved with an outer join that filters out matching joins (leaving just the non-matches).
In your case, the join condition is a "published" row for the same entry with a later (higher) message id.
This query produces your desired output:
```
select t1.*
from buziness_catalog t1
left join buziness_catalog t2
on t2.entry_id = t1.entry_id
and to_number(t2.msg_id) > to_number(t1.msg_id)
and t2.publish_status = 'PUBLISHED'
where t1.publish_status = 'NEW'
and t2.id is null
order by t1.id
```
See a [live demo](http://sqlfiddle.com/#!4/e2a96/11) of this query working with your sample data to produce your desired output. Note that I used a table name of "buziness\_catalog" rather than "buziness.catalog" so the demo would run; you'll have to change the underscore back to a dot.
Being a join, and not based on an `exists` correlated subquery, this will perform quite well.
This query would have been a little simpler had your `msg_id` column been a numeric type (the conversion from character to numeric would not have been needed). If your ID data is actually numeric, consider changing the datatype of entry\_id and msg\_id to a numeric type.
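Since Oracle won't change the datatype of a populated VARCHAR2 column in place, a hypothetical migration sketch for `msg_id` (untested; it only succeeds if every existing value converts cleanly, and the NOT NULL and unique constraints would need recreating afterwards):

```
-- Stage a numeric copy of the column, then swap it in
ALTER TABLE BUZINESS.CATALOG ADD (MSG_ID_NUM NUMBER);
UPDATE BUZINESS.CATALOG SET MSG_ID_NUM = TO_NUMBER(MSG_ID);
ALTER TABLE BUZINESS.CATALOG DROP COLUMN MSG_ID;
ALTER TABLE BUZINESS.CATALOG RENAME COLUMN MSG_ID_NUM TO MSG_ID;
```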
|
Reading between the lines, I think this might work:
```
select
*
from
buziness.catalog b1
where
b1.publish_status = 'NEW' and
not exists (
select
'x'
from
buziness.catalog b2
where
b1.entry_id = b2.entry_id and
b2.publish_status = 'PUBLISHED' and
to_number(b2.msg_id) > to_number(b1.msg_id) -- store numbers as numbers!
);
```
|
Refactoring SELECT with two correlated subqueries
|
[
"",
"sql",
"oracle",
""
] |
I need to create a query that pulls only the customer\_no column (the software restrictions require this, and I can't code it externally), but I need to be able to sort the data by the create\_dt column (in reverse). The code/SQL below fails because, in order to sort by something, that data has to appear in the select statement.
I can't have it appear there; is there any way around this?
```
Select Distinct top 3500 a.customer_no
From T_CUSTOMER a WITH (NOLOCK)
JOIN (Select a1.customer_no From VXS_CUST_TKW a1 WITH (NOLOCK) Where a1.tkw in (141)) as e ON e.customer_no = a.customer_no
Where 1 = 1
order by a.create_dt desc
```
|
Of course you can. Your query looks like SQL Server, where this will likely do what you want:
```
Select top 3500 a.customer_no
From T_CUSTOMER a WITH (NOLOCK) JOIN
(Select a1.customer_no
From VXS_CUST_TKW a1 WITH (NOLOCK)
Where a1.tkw in (141)
) e
ON e.customer_no = a.customer_no
Where 1 = 1
group by a.customer_no
order by max(a.create_dt) desc;
```
The equivalent query in MySQL would look like:
```
Select a.customer_no
From T_CUSTOMER a JOIN
(Select a1.customer_no
From VXS_CUST_TKW a1
Where a1.tkw in (141)
) e
ON e.customer_no = a.customer_no
Where 1 = 1
order by a.create_dt desc
limit 3500;
```
I removed the `distinct` because it may not be necessary. If it is, add it back in.
|
This does not look like MySQL.
Anyway... you could try an ordered sub-select, i.e.
```
From (select * from T_CUSTOMER order by create_dt) a WITH (NOLOCK)
```
I hope this helps.
|
Ordering by a field not in the select statement in SQL
|
[
"",
"sql",
"t-sql",
"join",
"sql-order-by",
""
] |
I have a scenario where I need something similar to .NET's `try-catch-finally` block.
On my try, I will `CREATE` a `#temp` table, `INSERT` data into it, and process other data sets based on `#temp`.
On `CATCH` then `RAISERROR`.
Is it possible to have a `FINALLY` block to `DROP #temp`?
Below is the pseudo code:
```
BEGIN TRY
CREATE TABLE #temp
(
--columns
)
--Process data with other data sets
END TRY
BEGIN CATCH
EXECUTE usp_getErrorMessage
END CATCH
BEGIN FINALLY
DROP TABLE #temp
END FINALLY
```
|
Instead of creating a table you could just declare a table variable (which will automatically go away when the query ends).
```
BEGIN TRY
DECLARE @temp TABLE
(
--columns
)
--do stuff
END TRY
BEGIN CATCH
--do other stuff
END CATCH
```
|
While not exactly the same as FINALLY, the T-SQL version of Try-Catch does allow code that needs to execute after both the Try and Catch blocks to simply follow the END CATCH statement.
Using the question code as an example:
```
BEGIN TRY
CREATE TABLE #temp
(
--columns
)
--Process data with other data sets
END TRY
BEGIN CATCH
EXECUTE usp_getErrorMessage
END CATCH;
IF OBJECT_ID('tempdb..#temp') IS NOT NULL -- Check for table existence
DROP TABLE #temp;
```
The DROP TABLE command will execute whether the Try or Catch execute.
See: [BOL Try...Catch](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&sqi=2&ved=0CB4QFjAA&url=https%3A%2F%2Fmsdn.microsoft.com%2Fen-us%2Flibrary%2Fms175976.aspx&ei=AU2UVeqaB4Lk-AHh5oC4BA&usg=AFQjCNG95BgubG5mT4Fyl9ggH3t_7hkFoA&sig2=OV4OAJVg5Xo3ibM04LLrbA&bvm=bv.96952980,d.cWw&cad=rja)
|
SQL Server TRY CATCH FINALLY
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have 3 tables:
```
Agents
AgentID AgentEmail
Prospects
ProspectId ProspectMail
AgentProspects
AgentID ProspectID
```
The relationship between the agents and the prospects is maintained through the AgentProspects table. I know how to join the tables to get a straightforward list of the emails for the agents and their associated prospects, but what I want to do now is get a subset of agents and a subset of prospects associated with each agent. This would mean that if I wanted 2 agents and 10 prospects, I would be looking for 20 records returned (10 prospects per agent).
I tried :
```
select top 10 a.email, pros.email
from agents a
join agentprospects ap on a.agentid = ap.agentId
and a.email in (select top 4 a.email from agents a
group by a.email)
join prospects pros on ap.prospectId = pros.prospectId
where a.IsDeleted = 0
```
No records were returned. So what is the best approach to tackling this?
|
```
SELECT a.email
, C.email
from agent a
CROSS APPLY (
SELECT TOP 10 pros.email
FROM AgentProspects AS AP
INNER JOIN prospects Pros ON ap.prospectId = pros.prospectId
WHERE AP.agentId = a.agentid
) C(email)
where a.IsDeleted = 0
and a.agentid IN (1,2) --<-- two AgentIDs you want results for
```
|
```
select top 10 a.email, pros.email
from agent a
join agentprospects ap on a.agentid = ap.agentId
join prospects pros on ap.prospectId = pros.prospectId
where a.IsDeleted = 0 and
(select count(*) from profile p where a.agentid=ap.agentid) =4
```
Alternatively, you can create a function that returns a scalar (integer) and use it instead of the subquery on the last line.
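A hypothetical sketch of such a scalar function (the `profile` table and the function name are assumptions taken from the answer's query and may need adjusting):

```
CREATE FUNCTION dbo.ProspectCount (@agentid INT)
RETURNS INT
AS
BEGIN
    -- Count rows in profile for the given agent
    RETURN (SELECT COUNT(*) FROM profile WHERE agentid = @agentid);
END
```

The last line of the query above would then become `and dbo.ProspectCount(ap.agentid) = 4`.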
|
How do I get 10 customers that are tied to 4 agents?
|
[
"",
"sql",
"sql-server",
""
] |