Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I know there are tons of similar questions already, but they don't quite meet my needs. I'm aware that (+) is an Oracle-only join operator used in WHERE clauses, and that its orientation dictates LEFT or RIGHT. However, I'm not entirely sure if my situation matches because the syntax looks different from everything I've found so far. I'm confused because the =+ is in the ON clause instead of the WHERE clause, and it isn't in the same position as examples I've seen. Is this equivalent to (+)? Is it something else entirely? Is it redundant? I don't know what it's doing. I'm just not sure how to deal with this operator in my attempts to simplify this query. Hoping someone can help me out here.
Here's a horrible rendition of the monster I've inherited. This excerpt is then followed by UNION ALL and almost the exact same code block, with some minor differences on a few lines. I'm just tackling the top right now, trying to simplify everything while maintaining the same results. (Yes, it actually runs!) Let me know if I need to clarify. I used the direct code where the meaningful content is.
```
SELECT
tons,
of,
insane,
fields,
from,
way,
too,
many,
different,
tables
FROM
seriously,
these,
tables,
are,
nuts,
and,
someone,
decided,
to
LEFT JOIN ( --Two nested subqueries.
SELECT
R.SKN_NO
,R.REGION_NO
,R.REGION_PRICE_FLAG
,R.TICKET_RETAIL_DOL
,R.CLEARANCE
FROM(
SELECT
P.SKN_NO
,P.REGION_NO
,P.TICKET_RETAIL_DOL
,RI2.REGION_PRICE_FLAG
,CASE P.PRICE_TYPE_CODE
WHEN 'R' THEN 'Y'
ELSE 'N'
END CLEARANCE
,DENSE_RANK() OVER (PARTITION BY P.SKN_NO, P.REGION_NO
ORDER BY EFFECTIVE_DATE DESC) DR
FROM
RF_ITEM_PRICING P
,RF_ITEM RI2
,RF_SYS_PARMS RFID
,RF_MERCHANT_ORG MO2
,V_HB_STOCK_STATUS HSS2
,RF_STORE RS2
WHERE
RI2.SKN_NO = P.SKN_NO
AND RFID.PARM_KEY = 'RFID'
AND MO2.ORG_NO = 100
AND MO2.DMM_NO = TO_NUMBER(RFID.MIN_VAL)
AND RI2.DEPT_NO = MO2.DEPT_NO
AND RI2.SKN_NO = HSS2.SKN_NO
AND RS2.STORE_NO_4_DIGIT = HSS2.STORE_NO_4_DIGIT
AND RI2.REGION_PRICE_FLAG = 'Y'
AND P.REGION_NO = RS2.STORE_NO
AND P.EFFECTIVE_DATE <= SYSDATE
) R
WHERE R.DR=1
) RFPRICE
--The two lines below are where the LEFT JOIN finally ends. Note the =+ here.
ON HSS.SKN_NO =+ RFPRICE.SKN_NO
AND RFPRICE.REGION_NO = HSS.STORE_NO
WHERE
--There are a ton of conditions here
--that link all those crazy tables together
--with the exception of the LEFT JOIN one (alias RFPRICE)
``` | As far as I know, Oracle does not have an `=+` operator ([here](http://docs.oracle.com/cd/E25178_01/fusionapps.1111/e20835/appsql.htm#i1033699) is an example of a reference). It does, however, have an `=` and a unary `+`. So, I think this would parse as:
```
HSS.SKN_NO = (+ RFPRICE.SKN_NO)
```
This, in turn, would be the same as:
```
HSS.SKN_NO = RFPRICE.SKN_NO
```
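This reading can be sanity-checked outside Oracle. A minimal sketch using Python's sqlite3 (an assumption here is that sqlite parses a unary `+` the same way; table and column names are shortened stand-ins for the ones in the query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE hss(skn_no INTEGER);
    CREATE TABLE rfprice(skn_no INTEGER);
    INSERT INTO hss VALUES (1), (2);
    INSERT INTO rfprice VALUES (1), (3);
""")

# plain equality join
plain = con.execute(
    "SELECT h.skn_no FROM hss h JOIN rfprice r ON h.skn_no = r.skn_no"
).fetchall()

# same join, with the '+' read as a unary operator on the right-hand column
unary = con.execute(
    "SELECT h.skn_no FROM hss h JOIN rfprice r ON h.skn_no = + r.skn_no"
).fetchall()

print(plain == unary)  # True: both forms return the same single row
```

Both queries return the same single matching row.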
The unary `+` operator doesn't seem to do anything. | If the "+" were on the other side of that statement I would say it is a right outer join condition. I have never seen it there, though. If you move it to the right of the statement, does it still run? If so, it's a right outer join to HSS.
I would find out what the program is supposed to do and rewrite it using CTE since it looks like it is doing a ranking based on SKN\_NO.
Good luck! | Oracle Equals-Plus Operator (Join On t1.col1 =+ t2.col2) | [
"",
"sql",
"join",
"syntax",
"oracle11g",
"operators",
""
] |
I have the following challenge. I have 2 tables. The first table contains changes in the values of bikes at a certain moment (i.e. a price catalogue). This means a certain price for a product is valid until there is a new price within the table.
```
Product | RowNr | Year | Month | Value
------------------------------------------
Bike1 | 1 | 2009 | 8 | 100
Bike1 | 2 | 2010 | 2 | 400
Bike1 | 3 | 2011 | 4 | 300
Bike1 | 4 | 2012 | 9 | 100
Bike1 | 5 | 2013 | 2 | 500
Bike1 | 6 | 2013 | 5 | 200
Bike2 | 1 | 2013 | 1 | 5000
Bike2 | 2 | 2013 | 2 | 4000
Bike2 | 3 | 2014 | 6 | 2000
Bike2 | 4 | 2014 | 10 | 4000
```
The second table contains dates for which I would like to determine the value of a bike (based on the information in table 1).
```
Product | Date | Value
-------------------------
Bike1 | 3/01/2008 | ?
Bike1 | 04/30/2011 | ?
Bike1 | 5/08/2009 | ?
Bike1 | 10/10/2012 | ?
Bike1 | 7/01/2014 | ?
```
So line 1 and 3 should get value "400", line 2 "300", line 4 "100" and line 5 "200" etc.
Does anyone know how this can be achieved in T-SQL? I've already partitioned the first table, but could use some advice on the next steps.
Many thanks, | You could do something like this, which will retrieve the most recent price catalogue value for the product, using the price that is less than or equal to the product table date.
```
SELECT p.product
, p.date
, valueAsOfDate =
( SELECT TOP 1 c.value
FROM priceCatalogue c
WHERE c.product = p.product
AND convert(date,
convert(varchar(4), c.year) + '-'
+ convert(varchar(2), c.month)
+ '-1'
) <= p.date
--this order by will ensure that the most recent price is used
ORDER BY c.year desc, c.month desc
)
FROM product p
```
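The correlated lookup can be tried out in miniature using Python's sqlite3 (a sketch, not SQL Server: `LIMIT 1` stands in for `TOP 1`, and `printf` builds the comparable date string; table and column names follow the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE priceCatalogue(product TEXT, year INT, month INT, value INT);
    INSERT INTO priceCatalogue VALUES
        ('Bike1', 2009, 8, 100), ('Bike1', 2010, 2, 400), ('Bike1', 2011, 4, 300);
    CREATE TABLE product(product TEXT, date TEXT);
    INSERT INTO product VALUES ('Bike1', '2010-06-01');
""")

row = con.execute("""
    SELECT p.product, p.date,
           (SELECT c.value
            FROM priceCatalogue c
            WHERE c.product = p.product
              AND printf('%04d-%02d-01', c.year, c.month) <= p.date
            ORDER BY c.year DESC, c.month DESC
            LIMIT 1) AS valueAsOfDate
    FROM product p
""").fetchone()
print(row)  # ('Bike1', '2010-06-01', 400): the 2010-02 price is in effect
```

The 2010-02 catalogue row is the most recent one on or before the lookup date, so its value (400) is returned.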
This table structure is not ideal... you would be better off with an "AsOfDate" column in your priceCatalogue table, so that you do not have to cast the values in the priceCatalogue table as a date in order to compare. If this is new development, change the priceCatalogue table to have an asOfDate column that is a date data type. If this is an existing table that is populated from another data source, then you could look at adding a persisted computed column to the table. <http://msdn.microsoft.com/en-us/library/ms188300.aspx>
With asOfDate column on the productCatalogue table, you have a SARG-able query ([What makes a SQL statement sargable?](https://stackoverflow.com/questions/799584/what-makes-a-sql-statement-sargable) ) that can take advantage of indexes.
```
SELECT p.product
, p.date
, valueAsOfDate =
( SELECT TOP 1 c.value
FROM priceCatalogue c
WHERE c.product = p.product
AND c.asOfDate <= p.date
--this order by will ensure that the most recent price is used
ORDER BY c.year desc, c.month desc
)
FROM product p
``` | Just use the `YEAR()` and `MONTH()` functions to take those parts of the date, and join them to your versioned table.
```
select p.product, p.date, pv.value
from product p
inner join productVersion pv
on p.product = pv.product
and Year(p.Date) = pv.Year
and Month(p.Date) = pv.Month
``` | Using t-sql to lookup value based on dates in other table | [
"",
"sql",
"sql-server",
"date",
"case",
"partition",
""
] |
I have a query which outputs a column named `member_name`:
```
select
LEFT(member_name, charindex('/', member_name) - 1) + ' '
+ SUBSTRING(member_name, charindex('/', member_name) + 1, 1) as member_name
from member
```
It outputs the string with the last name and the first letter of the first name.
How can I output the first letter and the last name in different columns?
For example to have it output something like this:
```
first | last name
-----------------
J | DOE
B | DOE-DOE
Z | SMITH
```
I made a fiddle: <http://sqlfiddle.com/#!6/c7a74/7> | ```
select
substring(member_name, charindex('/', member_name) + 1, 1) as first_initial,
left(member_name, charindex('/', member_name) - 1) as last_name
from
member
``` | I don't have the reputation to reply to comments just yet, but as you asked in a comment on the answer, it **can be** done in ColdFusion, though I'd think SQL is your better choice.
```
<cfoutput query="get_names">
<cfset l_name = listfirst(member_name,"/")>
<cfset f_initial = left(listlast(member_name,"/"),1)>
<!--- if you want f_initial capitalized, use the below cfset rather than the above --->
<cfset f_initial = ucase(left(listlast(member_name,"/"),1))>
First initial: #f_initial#, Last name: #l_name#<br>
</cfoutput>
```
That would output, based on your fiddle:
```
First initial: J, Last Name: suarez
First initial: J, Last Name: suarez
First initial: S, Last Name: mejia
First initial: D, Last Name: orozco
```
It's a good exercise to help understand how CF works, and more importantly that there is an overlap between what your queries can do for you and what your code can, but **once again** I recommend the SQL solution. | How to separate string output? | [
"",
"sql",
""
] |
I have a query like the below:
```
SELECT *, (
SELECT `name` FROM `users` WHERE `users`.`id`=``.``
) AS `fullName` FROM `listings` WHERE `fullName` LIKE '%praveen%';
```
But when I execute the above query, I am getting this error:
```
#1054 - Unknown column 'fullName' in 'where clause'
```
I know that this bug has been [documented](http://bugs.mysql.com/bug.php?id=28814). I am not sure what's the workaround for this other than doing something like:
```
SELECT *, (
SELECT `name` FROM `users` WHERE `users`.`id`=`listings`.`user`
) FROM `listings` WHERE (SELECT `name` FROM `users` WHERE `users`.`id`=`listings`.`user`) LIKE '%praveen%';
```
Is there any other way I can do this other than creating a view, or using a query like the above? I have tried referring other questions:
* [Unknown column in subquery where clause](https://stackoverflow.com/questions/13941770/unknown-column-in-subquery-where-clause)
* [Unknown column when using subquery result column in WHERE clause](https://stackoverflow.com/questions/24664991/unknown-column-when-using-subquery-result-column-in-where-clause)
* [Unknown Column In Where Clause](https://stackoverflow.com/questions/153598/unknown-column-in-where-clause)
* [Unknown column '' in 'where clause'](https://stackoverflow.com/questions/7042998/unknown-column-in-where-clause)
* [unknown column in where clause](https://stackoverflow.com/questions/2635445/unknown-column-in-where-clause)
I couldn't find a better solution. What's the best can I do in this case? Thanks in advance. | I think this is the correct syntax of what you are after:
```
select l.*, u.name as fullname
from listings l
join users u
on l.user = u.id
where u.name like '%praveen%'
``` | Try this:
```
SELECT *
FROM
(
SELECT *
, (SELECT `name`
FROM `users` WHERE `users`.`id`=``.``
) AS `fullName`
FROM `listings`
) Q
WHERE `Q`.`fullName` LIKE '%praveen%';
``` | Unknown column in WHERE clause when using subquery | [
"",
"mysql",
"sql",
"subquery",
"where-clause",
""
] |
In SQL Server 2008, I need to query across columns in T-SQL and return the largest number I find (example below using dates, but same idea applies to numbers). "NULL" indicates a null value, not the literal word (sorry if I'm being captain obvious). The column "DesiredResultColumn" shows the final result I want from searching across Column1 through Column3. I'm not finding an example on here which fits this exactly.
---
```
ID Column1 Column2 Column3 DesiredResultColumn
001 1/1/2010 5/7/2011 8/12/2008 5/7/2011
002 7/1/2014 7/3/2012 10/12/2013 7/1/2014
003 9/1/2012 12/7/2012 NULL 12/7/2012
004 11/1/2012 NULL 8/12/2013 8/12/2013
```
---
Unfortunately my tables, due to the source system constraints, aren't normalized, otherwise a max function would solve my problem. Thoughts? I appreciate it! | As per a [similar question](https://stackoverflow.com/a/6871572/1270504):
```
SELECT tbl.ID,
(SELECT MAX(Date)
FROM (VALUES (tbl.Column1), (tbl.Column2), (tbl.Column3)) AS AllDates(Date)) AS DesiredResultColumn
FROM tbl
```
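The core of the trick (aggregating over a `VALUES` row constructor) can also be tried in Python's sqlite3, which accepts `VALUES` in a `FROM` clause and names the constructed column `column1`; ISO date strings are used here so `MAX` compares correctly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# MAX is an aggregate here, so it simply skips the NULL entry
best = con.execute(
    "SELECT MAX(column1) FROM (VALUES ('2012-09-01'), ('2012-12-07'), (NULL))"
).fetchone()[0]
print(best)  # '2012-12-07'
```

In the real query each `VALUES` row pulls from the outer row's columns, exactly as in the statement above.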
Of course, that only works on SQL 2008 and above, but you said you have 2008 so it should be fine.
The nice thing about this over use of a `CASE` or similar expression is, for one, that it's a bit shorter and, in my opinion, easier to read. But also, it handles `NULL` values so you don't really have to think about them. | You can probably use a `case` condition along with `ISNULL()` to get the result, like below (a sample; I didn't include the nullity check using `ISNULL()`, but you can add that):
```
select ID,
Column1,
Column2,
Column3,
case when Column1 > Column2 and Column1 > Column3 then Column1
when Column2 > Column1 and Column2 > Column3 then Column2
else column3 end as DesiredResultColumn
from your_table
``` | Returning largest value across columns in SQL Server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I am trying to create SQL or Pig queries that will yield counts of distinct values based on type.
In other words, given this table:
```
Type: Value:
A x
B y
C y
B y
C z
A x
A z
A z
A x
B x
B z
B x
C x
```
I want to get the following results:
```
Type: x: y: z:
A 3 0 2
B 2 2 1
C 1 1 1
```
Additionally, a table of averages as a result would be helpful too
```
Type: x: y: z:
A 0.60 0.00 0.40
B 0.40 0.40 0.20
C 0.33 0.33 0.33
```
**EDIT 4**
I am a newbie at Pig, but after reading 8 different Stack Overflow posts I came up with this.
When I use this PIG query
```
A = LOAD 'tablex' USING org.apache.hcatalog.pig.HCatLoader();
x = foreach A GENERATE id_orig_h;
xx = distinct x;
y = foreach A GENERATE id_resp_h;
yy = distinct y;
yyy = group yy all;
zz = GROUP A BY (id_orig_h, id_resp_h);
B = CROSS xx, yy;
C = foreach B generate xx::id_orig_h as id_orig_h, yy::id_resp_h as id_resp_h;
D = foreach zz GENERATE flatten (group) as (id_orig_h, id_resp_h), COUNT(A) as count;
E = JOIN C by (id_orig_h, id_resp_h) LEFT OUTER, D BY (id_orig_h, id_resp_h);
F = foreach E generate C::id_orig_h as id_orig_h, C::id_resp_h as id_resp_h, D::count as count;
G = foreach yyy generate 0 as id:chararray, flatten(BagToTuple(yy));
H = group F by id_orig_h;
I = foreach H generate group as id_orig_h, flatten(BagToTuple(F.count)) as count;
dump G;
dump I;
```
It sort of works...
I get this:
```
(0,x,y,z)
(A,3,0,2)
(B,2,2,1)
(C,1,1,1)
```
Which I can import to a text file, strip the "(" and ")", and use as a CSV with the schema being the first line. This sort of works, but it is **SO SLOW**. I would like a nicer, faster, cleaner way of doing this. If anyone out there knows of a way please let me know. | ```
A = LOAD 'tablex' USING org.apache.hcatalog.pig.HCatLoader();
x = foreach A GENERATE id_orig_h;
xx = distinct x;
y = foreach A GENERATE id_resp_h;
yy = distinct y;
yyy = group yy all;
zz = GROUP A BY (id_orig_h, id_resp_h);
B = CROSS xx, yy;
C = foreach B generate xx::id_orig_h as id_orig_h, yy::id_resp_h as id_resp_h;
D = foreach zz GENERATE flatten (group) as (id_orig_h, id_resp_h), COUNT(A) as count;
E = JOIN C by (id_orig_h, id_resp_h) LEFT OUTER, D BY (id_orig_h, id_resp_h);
F = foreach E generate C::id_orig_h as id_orig_h, C::id_resp_h as id_resp_h, D::count as count;
G = foreach yyy generate 0 as id:chararray, flatten(BagToTuple(yy));
H = group F by id_orig_h;
I = foreach H generate group as id_orig_h, flatten(BagToTuple(F.count)) as count;
dump G;
dump I;
``` | Updated code according to Edit#3 in question:
```
A = load '/path/to/input/file' using AvroStorage();
B = group A by (type, value);
C = foreach B generate flatten(group) as (type, value), COUNT(A) as count;
-- Now get all the values.
M = foreach A generate value;
-- Left Outer Join all the values with C, so that every type has exactly same number of values associated
N = join M by value left outer, C by value;
O = foreach N generate
C::type as type,
M::value as value,
(C::count == null ? 0 : C::count) as count; --count = 0 means value was not associated with the type
P = group O by type;
Q = foreach P {
R = order O by value asc; --Ordered by value, so values counts are ordered consistently in all the rows.
generate group as type, flatten(R.count);
}
```
Please note that I did not execute the code above. These are just representative steps. | PIG query to pivot rows and columns with count of like rows | [
"",
"sql",
"hadoop",
"apache-pig",
""
] |
Like many others here I don't have a lot of experience under my belt. I've been directed to convert non-clustered indexes to clustered (no clustered indexes exist). In the attached query I print out the @sql variable to see what my command looks like, and it's only about half the tables.
First question: is there a limit on how long a string can be when executed or printed?
I tried commenting out SET @sql = @sql + ' UNION ALL ' in hopes I could print or execute one command at a time but nothing printed. I really don't want to execute until I'm confident I have the right syntax.
```
SET NOCOUNT ON
IF OBJECT_ID('tempdb..##ListIndex') IS NOT NULL DROP TABLE ##ListIndex
Create Table ##ListIndex (MySchema nvarchar(max), MyTable nvarchar(max), MyIndexName nvarchar(max), MyColumn nvarchar(max), IndexType int)
Insert into ##ListIndex
select s.name, t.name, i.name, c.name, i.type
from sys.tables t
inner join sys.schemas s on t.schema_id = s.schema_id
inner join sys.indexes i on i.object_id = t.object_id
inner join sys.index_columns ic on ic.object_id = t.object_id
inner join sys.columns c on c.object_id = t.object_id and
ic.column_id = c.column_id
where i.index_id =2
and i.type in (1, 2) -- clustered & nonclustered only
and i.is_disabled = 0
and i.is_hypothetical = 0
and ic.key_ordinal > 0
order by s.name asc, t.name desc --ic.key_ordinal
IF OBJECT_ID('tempdb..##ListIndex2') IS NOT NULL DROP TABLE ##ListIndex2
Create Table ##ListIndex2 (MySchema nvarchar(max), MyTable nvarchar(max), MyIndexName nvarchar(max), MyColumn nvarchar(max))
Insert into ##ListIndex2
SELECT DISTINCT
MySchema
, MyTable
, MyIndexName
, STUFF(
(
SELECT ', ' + MyColumn + ' ASC'
FROM ##ListIndex TInner -- replace with your table
WHERE TOuter.MyTable = TInner.MyTable
AND TOuter.MyIndexName = TInner.MyIndexName
AND TOuter.MySchema = TInner.MySchema
FOR XML PATH('')
), 1, 2, ''
) MyColumn
FROM ##ListIndex TOuter
--select * from ##ListIndex2
SET NOCOUNT ON
DECLARE MyCursor CURSOR FOR
SELECT
MySchema
, MyTable
, MyIndexName
,MyColumn
FROM ##ListIndex2 c
OPEN MyCursor
DECLARE @MySchema VARCHAR(100), @MyTable VARCHAR(100), @MyIndexName VARCHAR(100), @MyColumn VARCHAR(300)
DECLARE @sql VARCHAR(MAX)='';
FETCH FROM MyCursor INTO @MySchema, @MyTable, @MyIndexName, @MyColumn
WHILE @@FETCH_STATUS=0
BEGIN
IF LEN(@sql) > 0
SET @sql = @sql + ' UNION ALL '
SET @sql= @sql + 'CREATE UNIQUE CLUSTERED INDEX ' + @MyIndexName + ' on ' + @MySchema + '.' + @MyTable +' (' + @MyColumn +') WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = ON,ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, DATA_COMPRESSION = PAGE) ON [DefFG] GO '
FETCH FROM MyCursor INTO @MySchema, @MyTable, @MyIndexName, @MyColumn
END
CLOSE MyCursor
DEALLOCATE MyCursor
--exec sp_executesql @sql
print @sql
``` | There are several things to improve here. Some of the suggestions will make others obsolete (mainly # 14 about using a CTE would nullify most of the rest of them), but you said you were new to SQL Server so I will mention them all as they might help in future projects. And to be clear I have highlighted the two critical / functional suggestions. The answers to the stated questions regarding string limits (in relation to both printing and executing) are below the list of suggested improvements.
1. There is no need for global temporary tables (`##`) and local temporary tables should instead be used (`#`)
2. The datatypes for the 4 name fields in `##ListIndex` should be `SYSNAME` which equates to `NVARCHAR(128)`
3. The datatypes between the `##ListIndex2` global temp table (all set to `NVARCHAR(MAX)`) and the local variables (i.e. `DECLARE @MySchema VARCHAR(100), @MyTable VARCHAR(100)...`) don't match
4. The datatypes for the first 3 fields in `##ListIndex2` (`MySchema`, `MyTable`, and `MyIndexName`) and all 3 matching variables should be `SYSNAME` which equates to `NVARCHAR(128)`
5. The datatype of the `IndexType` field in `##ListIndex`, coming from `sys.indexes.type`, should be `TINYINT` instead of `INT` (according to the documentation for [sys.indexes](http://msdn.microsoft.com/en-us/library/ms173760.aspx))
6. The WHERE conditions of `i.[index_id] = 2 AND i.[type] IN (1, 2)` don't make sense because:
1. You currently don't have clustered indexes so you don't need to look for them
2. Clustered Indexes always have an `index_id` of 1 so they would get filtered out by the `i.[index_id] = 2` condition anyway
3. How will you handle tables that have more than 1 non-clustered index? Just pick the first one? That is what `i.[index_id] = 2` does. Is that what you want? In either case, you need only one of those conditions:
1. If you want to see all nonclustered indexes for each table so you can pick the one that is best to be the Clustered Index, remove `i.[index_id] = 2 AND` and change `i.[type] IN (1, 2)` to be `i.[type] = 2`
2. If you just want the first (even if not most appropriate) index for each table to be the Clustered Index, remove `AND i.[type] IN (1, 2)`
7. Get rid of the `ORDER BY` on that `Insert into ##ListIndex select...` query; it has no bearing on how the rows will be retrieved from the `##ListIndex` table and is hence a waste of resources
8. **The subquery inside the `STUFF` function that does the `FOR XML PATH` has two issues:
1. You are not ordering the columns by the original order that they are in. You need to add a `TINYINT` field for `KeyOrdinal` in `##ListIndex` and populate it from `ic.key_ordinal`. You can then add an `ORDER BY` clause on this new field in that subquery.
2. You are assuming that all columns are `ASC` when one or more could just as well be `DESC`, right? Unless you are certain that ALL of your index fields are defined as `ASC`, you need to also add a `BIT` field to `##ListIndex` for `IsDescending` and populate it from `ic.is_descending_key`. You would use this value in that subquery with `CASE` (`IIF` started in SQL 2012) in the `SELECT` in place of the hard-coded `' ASC'`.**
9. You don't need the second `SET NOCOUNT ON` (just above the `DECLARE MyCursor CURSOR`) as you already have it at the beginning of the script
10. Your `@sql` variable should probably be defined as `NVARCHAR(MAX)` instead of `VARCHAR(MAX)`
11. Your `FETCH FROM` should probably be `FETCH NEXT FROM`
12. **Remove the `IF LEN` line and the following `SET...'UNION ALL'` line as they are incorrect and useless**
13. In the string at the end of the line where you build the `CREATE UNIQUE...`, put the `GO` on the next line
14. You could accomplish all of this with a CTE and get rid of the two temp tables and the cursor, but I don't have time to show that and the query as it currently is does work
Regarding any possible limitation on how long of a string can be executed: whatever you can fit into either `VARCHAR(MAX)` or `NVARCHAR(MAX)` (according to the documentation for [EXECUTE](http://msdn.microsoft.com/en-us/library/ms188332.aspx)). I have built rather large sets of queries into an `NVARCHAR(MAX)` variable (*well* over 4000 characters) and called EXEC on it without problems.
The reason that you are seeing only about half of the tables when printing `@sql` is that the `PRINT` command will only print up to 8000 characters of `VARCHAR` or 4000 characters of `NVARCHAR`, even if your variable is a `MAX` type. In order to print it all you need to loop through every 4000 or 8000 characters (depending on NVARCHAR vs VARCHAR variable) using `SUBSTRING()`.
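That chunking loop, sketched outside T-SQL (Python here, with a hypothetical 4000-character limit standing in for the `PRINT` cutoff):

```python
def print_in_chunks(sql_text, chunk_size=4000):
    """Emulate looping PRINT over SUBSTRING-sized slices of a long string."""
    pieces = [sql_text[i:i + chunk_size]
              for i in range(0, len(sql_text), chunk_size)]
    for piece in pieces:
        print(piece)  # each slice is short enough to survive the cutoff
    return pieces

pieces = print_in_chunks("CREATE INDEX ..." * 600)  # 9600 characters total
print(len(pieces))  # 3: two full 4000-character chunks plus the remainder
```

Nothing is lost: joining the slices back together reproduces the original string.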
The reason that you saw nothing when commenting out the `SET @sql = @sql + ' UNION ALL '` line is the reason why I always recommend putting BEGIN / END markers around all `IF` / `WHILE` / etc constructs, even for just a single line to execute. Meaning, by commenting out that `SET` line, and most likely *not* commenting out the `IF` along with it, the `IF` statement then applied to the remaining `SET` and would only run that `SET @sql= @sql + 'CREATE UNIQUE...` line **IF** LEN(@sql) > 0, which it never would be ;-). | SQL Server Management Studio does have a limit of about 44k characters (at least for SQL Server 2008, the last time I had this problem) for select statements with a string variable. However, the full SQL still gets executed (in my experience).
Your problem looks different, though. I'm not sure what you are trying to do, but
```
create unique clustered index . . .
union all
create unique clustered index . . .
```
is not a valid statement. Perhaps you should just end each statement with a semicolon and forget the `union all`. You use `union all` for `select` queries. | Build of varchar(max) command variable is truncated | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm using the following query to get information about all tables in a DB:
```
SELECT
t.NAME AS TableName,
i.name as indexName,
sum(p.rows) as RowCounts,
sum(a.total_pages) as TotalPages,
sum(a.used_pages) as UsedPages,
sum(a.data_pages) as DataPages,
(sum(a.total_pages) * 8) / 1024 as TotalSpaceMB,
(sum(a.used_pages) * 8) / 1024 as UsedSpaceMB,
(sum(a.data_pages) * 8) / 1024 as DataSpaceMB
FROM
sys.tables t
INNER JOIN
sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
sys.allocation_units a ON p.partition_id = a.container_id
WHERE
t.NAME NOT LIKE 'dt%' AND
i.OBJECT_ID > 255 AND
i.index_id <= 1
GROUP BY
t.NAME, i.object_id, i.index_id, i.name
ORDER BY
object_name(i.object_id)
```
The problem is that for some tables it reports a different row count than if I do:
```
select count(*) FROM someTable
```
Why is that?
**Edit:**
The first query returns a higher count:
```
First: 1 240 464
Second: 413 496
``` | The problem is that there's more than one allocation\_unit per partition, so the same partition can appear more than once and therefore the sum(p.rows) ends up counting the same partition more than once, so you get a multiple of the right number of rows.
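A toy illustration of the arithmetic (hypothetical numbers; the point is only that summing partition rows once per allocation unit multiplies the count):

```python
# one partition holding 1000 rows...
partition_rows = {"p1": 1000}
# ...but three allocation units pointing at it (e.g. IN_ROW_DATA, LOB_DATA,
# ROW_OVERFLOW_DATA), so a join to allocation units yields the partition
# three times
allocation_units = ["p1", "p1", "p1"]

total = sum(partition_rows[p] for p in allocation_units)
print(total)  # 3000, triple the real row count
```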
Here's how I solved the problem:
(note that my query isn't identical to yours, I have slightly different columns and used Kb rather than Mb, but the idea is the same)
```
SELECT
s.Name + '.' + t.name AS table_name,
(select sum(p2.rows)
from sys.indexes i2 inner join sys.partitions p2 ON i2.object_id = p2.OBJECT_ID AND i2.index_id = p2.index_id
where i2.object_id = t.object_id and i2.object_id > 255 and (i2.index_id = 0 or i2.index_id = 1)
) as total_rows,
SUM(CASE WHEN (i.index_id=0) OR (i.index_id=1) THEN a.total_pages * 8 ELSE 0 END) AS data_size_kb,
SUM(CASE WHEN (i.index_id=0) OR (i.index_id=1) THEN a.used_pages * 8 ELSE 0 END) AS data_used_kb,
SUM(CASE WHEN (i.index_id=0) OR (i.index_id=1) THEN 0 ELSE a.total_pages * 8 END) AS index_size_kb,
SUM(CASE WHEN (i.index_id=0) OR (i.index_id=1) THEN 0 ELSE a.used_pages * 8 END) AS index_used_kb,
SUM(a.total_pages) * 8 AS total_size_kb,
SUM(a.used_pages) * 8 AS total_used_kb,
SUM(a.used_pages) * 100 / CASE WHEN SUM(a.total_pages) = 0 THEN 1 ELSE SUM(a.total_pages) END AS percent_full
FROM
sys.tables t
INNER JOIN
sys.schemas s ON s.schema_id = t.schema_id
INNER JOIN
sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
sys.allocation_units a ON p.partition_id = a.container_id
WHERE
t.is_ms_shipped = 0 AND i.OBJECT_ID > 255
GROUP BY
t.object_id, t.Name, s.Name
ORDER BY SUM(a.total_pages) DESC
``` | From [the sys.partitions documentation](http://msdn.microsoft.com/en-us/library/foo1c19e1b1-c925-4dad-a652-581692f4ab5e.aspx)
> rows bigint **Approximate number of rows in this partition.**
(emphasis mine). The system views aren't going to keep a spot-on number of rows in the table. Think of what that would entail and how much overhead it would add to all insert/delete statements. If I were a betting man, I'd say that it's doing something with the count of the number of pages in the clustered index or heap which is a far cheaper operation. That's purely speculative, though. | Count(*) differs from rows in sys.partitions | [
"",
"sql",
"sql-server",
"database-schema",
""
] |
I want to extract all the rows from a database table, where the rows cross-reference each other.
My table contains 2 columns: `ref1` & `ref2`
Table example:
```
ID ref1 ref2
01 23 83
02 77 55
03 83 23
04 13 45
```
In this case, I want my query to return only rows 01 and 03, because they cross-reference each other.
Is this possible using a single query, or will I need to iterate the entire table manually?
I'm using MySQL. | A simple JOIN can do that in a straightforward manner:
```
SELECT DISTINCT a.*
FROM mytable a
JOIN mytable b
ON a.ref1 = b.ref2 AND a.ref2 = b.ref1;
```
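A runnable miniature of the self-join (Python's sqlite3 here, with the same table and data as the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE mytable(id TEXT, ref1 INT, ref2 INT);
    INSERT INTO mytable VALUES
        ('01', 23, 83), ('02', 77, 55), ('03', 83, 23), ('04', 13, 45);
""")
rows = con.execute("""
    SELECT DISTINCT a.id
    FROM mytable a
    JOIN mytable b ON a.ref1 = b.ref2 AND a.ref2 = b.ref1
    ORDER BY a.id
""").fetchall()
print(rows)  # [('01',), ('03',)]: the two rows that reference each other
```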
[An SQLfiddle to test with](http://sqlfiddle.com/#!2/1980fe/2). | ```
select
*
from
tbl t1
where
exists (
select
'x'
from
tbl t2
where
t1.ref1 = t2.ref2 and
t1.ref2 = t2.ref1
)
``` | Find records which cross-reference each other | [
"",
"mysql",
"sql",
""
] |
I'm working on a small search engine project and I need some help with a SQL query.
My table looks like this (example):
```
word | docNum
---------------------------
1. blue | 1
2. pen | 2
3. pen | 1
4. green | 1
5. key | 2
6. key | 1
7. car | 1
```
I would like to look for: blue, pen, green, key ONLY where their docNum is the same.
So a possible result would be:
```
word | docNum
---------------------------
1. blue | 1
3. pen | 1
4. green | 1
6. key | 1
```
How should the select query look like? | Group by the `docnum` and select only those having all 4 words you look for
```
select docnum
from your_table
where word in ('blue','pen','green','key')
group by docnum
having count(distinct word) = 4
``` | For the above result you can simply use this query:
```
SELECT * FROM `your_table` WHERE docnum=1 and word IN ('blue','pen','green','key')
``` | Selecting records from mysql table | [
"",
"mysql",
"sql",
""
] |
I have one table named 'Vote' and there is a column named status. Status can have true or false values.
I want to find subtract result. Something like this
```
select (select count(*) from vote where status=true) -
(select (*) from vote where status=false)
```
the above sql is giving error: incorrect syntax near "\*"
What would be the correct syntax to achieve the result?
Thanks | Try this one:
```
select sum(
(case when status='true' then 1 else -1 end)
)
from table
``` | You forgot to write `count` in your second select:
```
select count(*)
```
instead of
```
select (*)
```
So your query would become
```
select
(select count(*) from vote where status = 'true') - (select count(*) from vote where status = 'false')
```
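Both the single-pass `SUM(CASE ...)` form and this two-subquery form give the same number; a quick check using Python's sqlite3 (the flag is stored as text here, which is an assumption about the column's datatype):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vote(status TEXT)")
con.executemany("INSERT INTO vote VALUES (?)",
                [("true",)] * 5 + [("false",)] * 2)

# single pass: each 'true' adds 1, anything else subtracts 1
diff_case = con.execute(
    "SELECT SUM(CASE WHEN status='true' THEN 1 ELSE -1 END) FROM vote"
).fetchone()[0]

# two counting subqueries, subtracted
diff_sub = con.execute("""
    SELECT (SELECT COUNT(*) FROM vote WHERE status='true')
         - (SELECT COUNT(*) FROM vote WHERE status='false')
""").fetchone()[0]

print(diff_case, diff_sub)  # 3 3: five true votes minus two false votes
```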
If datatype is bit then try this:
```
select
(select count(*) from vote where status = 1) - (select count(*) from vote where status = 0)
``` | SQL subtracting count(true) from count(false) value | [
"",
"sql",
".net",
"sql-server",
"t-sql",
""
] |
I have a table that holds data for employees and results of a questionnaire they took. The questionnaire has 3 questions, but not all questions had to be answered. I am trying to find out what questions were not answered by the employees.
EmployeeData:
```
EmployeeID | Question | Answer
-------------------------------
12345 | 1 | 100
-------------------------------
12345 | 2 | 85
-------------------------------
11111 | 1 | 100
-------------------------------
11111 | 2 | 90
-------------------------------
11111 | 3 | 25
-------------------------------
```
If using the table above, I am trying to write a query that would return:
```
EmployeeID | Question
----------------------
12345 | 3
----------------------
```
Any help is greatly appreciated! | The method for doing this is to create a list of all questions for all employees and then remove the ones that are already answered.
You can do this with subqueries, a `cross join`, and a `left outer join`:
```
select e.*, q.*
from (select distinct employeeid from employeedata) e cross join
(select distinct question from employeedata) q left join
employeedata ed
on ed.employeeid = e.employeeid and ed.question = q.question
where ed.employeeid is null;
``` | First you need a list of the employees. A better source would be the employee table if it exists. What if an employee didn't fill out the survey?
But based on your request, we'll use the employees in the table.
```
SELECT DISTINCT EmployeeID FROM EmployeeData
```
Now we need to add in the possible questions. For this you could use a temp table, but we'll just use a hard coded sql.
```
SELECT 1 AS Question UNION ALL SELECT 2 UNION ALL SELECT 3
```
Now we join them with no join condition. This produces a table with EmployeeID and each of the 3 questions. Now we can select from this table of all possibilities and find the ones that don't exist in EmployeeData. For this we're going to do a bunch of table valued subqueries.
```
SELECT EQ.EmployeeID, EQ.Question
FROM (SELECT E.EmployeeID, Q.Question
FROM (SELECT DISTINCT EmployeeID FROM EmployeeData) AS E
INNER JOIN (SELECT 1 AS Question UNION ALL SELECT 2 UNION ALL SELECT 3) AS Q ON (1=1) ) AS EQ
WHERE NOT EXISTS (SELECT Answer FROM EmployeeData WHERE EmployeeData.EmployeeID = EQ.EmployeeID AND EmployeeData.Question = EQ.Question)
```
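The `NOT EXISTS` pattern above can be sanity-checked end to end on the sample data. Here is a sketch using SQLite through Python (an illustration only: `CROSS JOIN` stands in for the `INNER JOIN ... ON (1=1)` trick, and the table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE EmployeeData (EmployeeID INTEGER, Question INTEGER, Answer INTEGER);
INSERT INTO EmployeeData VALUES
    (12345, 1, 100), (12345, 2, 85),
    (11111, 1, 100), (11111, 2, 90), (11111, 3, 25);
""")

# Every (employee, question) combination that has no matching answer row.
missing = conn.execute("""
    SELECT EQ.EmployeeID, EQ.Question
    FROM (SELECT E.EmployeeID, Q.Question
          FROM (SELECT DISTINCT EmployeeID FROM EmployeeData) AS E
          CROSS JOIN (SELECT 1 AS Question UNION ALL SELECT 2 UNION ALL SELECT 3) AS Q
         ) AS EQ
    WHERE NOT EXISTS (SELECT 1 FROM EmployeeData AS ed
                      WHERE ed.EmployeeID = EQ.EmployeeID
                        AND ed.Question   = EQ.Question)
""").fetchall()
print(missing)  # [(12345, 3)]
```

Only employee 12345 is missing question 3, which matches the expected output in the question.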
You can also left outer join on employee data and check for null values rather than use a not exists clause.
```
SELECT EQ.EmployeeID, EQ.Question
FROM (SELECT E.EmployeeID, Q.Question
FROM (SELECT DISTINCT EmployeeID FROM EmployeeData) AS E
INNER JOIN (SELECT 1 AS Question UNION ALL SELECT 2 UNION ALL SELECT 3) AS Q ON (1=1) ) AS EQ
LEFT OUTER JOIN EmployeeData ON (EmployeeData.EmployeeID = EQ.EmployeeID AND EmployeeData.Question = EQ.Question)
WHERE EmployeeData.Answer IS NULL
``` | TSQL Get records that don't exist in a table | [
"",
"sql",
"sql-server",
"inner-join",
"exists",
"not-exists",
""
] |
I want to alter a view using a dynamic T-SQL script. Each month a new table appears in my database, and I want to include the new table in my view. My idea is to create a variable inside a T-SQL procedure and then build the SQL statement that creates the code I will use to alter the view. With this I only need to EXEC (@SqlView). The challenge now is to get (@SqlResults) into a string. Any ideas?
```
SQL for the view (@SqlView)
select a, b from table01
union all
select a, b from table02
union all
select a, b from table03
SQL statement for the view code (@SqlResults)
select 'select a,b from ' + so.name + ' union all' from sysobjects so
join sys.schemas s
On so.uid = s.schema_id
where so.xtype = 'U'
and so.name like '%table0%'
``` | Here is another approach that doesn't use a loop.
```
declare @SqlResults nvarchar(max) = ''
select @SqlResults = @SqlResults + 'select a,b from ' + t.name + ' union all '
from sys.tables t
where t.name like '%table0%'
select @SqlResults = 'ALTER VIEW SomeView as ' + left(@SqlResults, LEN(@SqlResults) - 10)
select @SqlResults
--Uncomment the exec line when you are comfortable
--exec sp_executesql @SqlResults
``` | This is the SQL that I use when trying to generate a script for every table, altered to plug in your specific details:
```
DECLARE @SQLTable TABLE
(
ID INT IDENTITY(1,1) NOT NULL,
Sequel VARCHAR(4000) NOT NULL
)
INSERT INTO @SQLTable (Sequel)
SELECT
'SELECT A, B FROM ' + TABLE_NAME + ' UNION ALL'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE '%table0%'
DECLARE @Looper INT = (SELECT MAX(ID) FROM @SQLTable)
DECLARE @Counter INT = 0
DECLARE @ExecSQL VARCHAR(MAX) =
--'' -- use this if you just want to run the SQL
'ALTER VIEW MyView AS
' -- use this if you want to alter a view
WHILE @Counter <= @Looper
BEGIN
SET @Counter = @Counter + 1
SELECT @ExecSQL = @ExecSQL + '
' + Sequel
FROM @SQLTable
WHERE ID = @Counter
END
SET @ExecSQL =
LEFT (@ExecSQL, LEN(@ExecSQL) - 10) -- the LEFT() function is to remove the final UNION ALL
+ '
GO'
PRINT (@ExecSQL)
``` | Dynamic view with sysobjects tables | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
What I am trying to do can be exemplified (incorrectly) by the following query:
```
SELECT * FROM Table1
WHERE User IN
(CASE (SELECT Username From Table2 Where Table2.ID = 1)
WHEN IS NULL THEN NULL
ELSE (SELECT Username From Table2 Where Table2.ID = 1) END)
```
Since `User = NULL` is not the same as `User IS NULL`, something with the above syntax won't work. Is there a way to do this? I don't want to grab `NULLS` if there are records in Table2.
For Example,
Table1
```
ID User
1 Elias
2 NULL
```
Table2
```
ID Username
1 Elias
2 NULL
```
I would want the above select to return the following recordset
```
ID User
1 Elias
```
If I was looking for ID 2 in Table 2 I would want the following recordset
```
ID User
2 NULL
``` | If you want to consider two `NULL` values as matching, you can do:
```
select t1.*
from table1 t1
where exists (select 1
from table2 t2
              where t1.user = t2.username or (t1.user is null and t2.username is null)
);
```
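As a quick check of the query above, here is a sketch in SQLite via Python using the sample tables (note the second predicate compares `t2.username`, the column name used in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (ID INTEGER, user TEXT);
CREATE TABLE table2 (ID INTEGER, username TEXT);
INSERT INTO table1 VALUES (1, 'Elias'), (2, NULL);
INSERT INTO table2 VALUES (1, 'Elias'), (2, NULL);
""")

rows = conn.execute("""
    SELECT t1.*
    FROM table1 t1
    WHERE EXISTS (SELECT 1
                  FROM table2 t2
                  WHERE t1.user = t2.username
                     OR (t1.user IS NULL AND t2.username IS NULL))
""").fetchall()
print(sorted(rows))  # [(1, 'Elias'), (2, None)]
```

Both rows come back because 'Elias' matches 'Elias' and the NULL matches the NULL.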
If you are trying to match values in `table2` and if there are no values in `table2` then return values equal to `NULL` (how I interpret the title):
```
select t1.*
from table1 t1
where exists (select 1
from table2 t2
              where t1.user = t2.username or (t1.user is null and t2.username is null)
) or
(not exists (select 1 from table2) and t1.user is null);
```
EDIT:
For performance, you can do:
```
select t1.*
from table1 t1
where exists (select 1
from table2 t2
where t1.user = t2.username
) or
exists (select 1
              from table2 t2
              where t1.user is null and t2.username is null
) or
(not exists (select 1 from table2) and t1.user is null);
```
These can take advantage of an index on `table2(username)`. | A join that relates two tables on NULL values will get you the wrong number of rows: duplicated rows and the like.
The simplest query I can think of (in order to avoid this) is:
```
select t1.*
from table1 t1
join table2 t2
on t1.user = t2.username
```
This will filter out the nulls.
If you also need to match NULLs here, use Gordon's query; though if you have multiple NULLs in one of the tables, such a query is meaningless.
"",
"sql",
"sql-server",
"t-sql",
"syntax",
"null",
""
] |
I have a local server set up on my computer with a local database. I am trying to connect to a network server with a linked server (I'm having a lot of trouble with this) and then create tables and views in the local database from the data in the network database. I'm using Microsoft SQL Server Management Studio 2012. | Try something like :
```
SELECT * from openquery(LINKED_SERVER_NAME,'SELECT * FROM TABLE')
```
If your linked server is well configured, you should be able to query the tables in the linked server using [openquery](http://msdn.microsoft.com/fr-fr/library/ms188427.aspx). | Try this. First you link the server, then run a query against the linked server.
```
EXEC Sp_addlinkedserver
@server = N'MY-PC\SQLServer1',
@srvproduct='SQL Server'
-- from SQL Server2 you run this query
SELECT *
FROM Openquery([my-pc\sqlserver1], 'select * from Students')
```
Reference: [Running Query against linked Server](http://hammadk.com/running-query-against-linked-server/) | How do you pull data from a database to another one using a linked server? | [
"",
"sql",
"sql-server",
"database",
"sql-server-2012",
"linked-server",
""
] |
I want to calculate the difference between the max(value) returned from a database and each row value returned.
Example Data
1. 400
2. 300
3. 200
4. 100
max(value return) = 400
Returned result should be:
1. 0 (no difference between the max value and row value)
2. 100
3. 200
4. 300
etc
Any ideas how this could be achieved?
This is the current MySQL statement I am using:
```
SELECT userName, totalPoints , (totalPoints) AS gap
FROM userTable
WHERE value='1'
ORDER BY totalPoints DESC
```
| Try the solution below.
```
SELECT A.userName, A.totalPoints, M.MaxTotalPoints - A.totalPoints AS GAP
FROM userTable A
CROSS JOIN
(
    SELECT MAX(totalPoints) AS MaxTotalPoints
    FROM userTable
    WHERE value = '1'
) M
WHERE A.value = '1'
ORDER BY GAP;
``` | Try this:
```
SELECT a.*, b.maxTotalPoints - a.totalPoints as gap
from userTable a
INNER JOIN
( SELECT userName, max(totalPoints) as maxTotalPoints
FROM userTable
) b on (a.userName = b.userName)
WHERE a.value='1'
ORDER BY a.totalPoints DESC
``` | MYSQL Get Difference between two values | [
"",
"mysql",
"sql",
""
] |
I have a list of tables
**Product**
```
{ ProdID (INT), Name(varchar), ...}
```
**Store**
```
{ StoreID(INT), Location(varchar), ...}
```
**Inventory**
```
{StoreID(INT) references Store(StoreID) ,ProdID(INT) references Products(ProdID),...}
```
I want the list of products not available in each store | ```
;WITH CTE
AS (
SELECT P.ProdID
, S.StoreID
FROM Product P CROSS JOIN Store S
)
SELECT C.StoreID, C.ProdID
FROM CTE C
WHERE NOT EXISTS (SELECT 1
FROM Inventory
WHERE C.StoreID = StoreID
AND C.ProdID = ProdID)
``` | Then you want to select all stores that do not have products in inventory.
```
select s.StoreID, p.ProdID, p.Name
from Store s, (select ProdID, Name
from Product
where ProdID not in (select i.ProdID
from inventory i
where i.StoreID = s.StoreID
)
) p
order by s.StoreID, p.Name
``` | Products not available in each store | [
"",
"sql",
"sql-server",
""
] |
I have the following situation.
In a **MySql** database I have a **posts** table. This table contains the following 2 fields: **post\_modified** and **post\_date**
For each record present in the **posts** table I need to set the value of the **post\_modified** field to the value of **post\_date**.
How can I write a query that does this for all the records in the table?
Thanks | That's an easy one:
```
UPDATE posts SET post_modified = post_date;
``` | This query should do the job:
```
UPDATE posts SET post_modified=post_date;
``` | How to change the value of this field in all the records of this MySql table? | [
"",
"mysql",
"sql",
"database",
"rdbms",
""
] |
I have a puzzle which I'm hoping someone can help me solve
I have a table of apartment listings. Each listing has one or more 'unavailable' dates stored individually in another table, let's call it 'off\_days'. e.g. a listing which is unavailable from the 1st until the 4th of Sept would have 4 entries in the 'off\_days' table, one for each day.
I'm looking for the most efficient way to search (preferably on the database level) for listings with at least N consecutive available days between two calendar days ('available' is any day not in the 'off\_days' table for that particular listing). e.g. "Show me all listings with at least 5 consecutive available days in September"
I've been thinking about how I would solve this problem in the real world (by looking at a calendar marked with X's and scanning for free blocks) and started thinking about using binary to represent available/unavailable days, i.e. for a given week, 0111001 (= 57) would tell me that there are at most three consecutive available days in that week.
This [question](https://stackoverflow.com/questions/10911780/looping-through-bits-in-an-integer-ruby/10922528#10922528) seems like a good start once I have the binary number for a given date range, but now I'm stuck on how to calculate that number dynamically for a given date range, again, on the DB level.... any ideas? or thoughts about this approach or another approach? | The apartment is available for gaps in the off days. That means that you want to know how big the gap is for each sequence, and the `lag()` function can give you this information:
```
select od.*,
lag(unavailable) over (partition by apartmentid order by unavailable) as prev_una
from offdays od;
```
The actual number of days is the difference between the unavailable date and the previous one, minus 1. Now, assume the two calendar days are `v_StartDate` and `v_EndDate`. You can basically get what you want as:
```
select od.*,
((case when unavailable is NULL or unavailable > v_EndDate
then v_EndDate + 1 else unavailable
end) -
(case when prev_una is null or prev_una < v_StartDate
then v_StartDate - 1 else prev_una
end) - 1
) as days_available
from (select od.*, lag(unavailable) over (partition by apartmentid order by unavailable) as prev_una
from offdays od
) od
order by days_available desc;
```
The `case` logic is essentially putting in stop dates just before and just after the period.
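The same arithmetic is easy to spot-check outside the database. A small Python sketch with made-up off-days for one apartment (an illustration only: `v_start`/`v_end` bound the period, and the sentinel dates mirror the `case` expressions):

```python
from datetime import date, timedelta

v_start, v_end = date(2014, 9, 1), date(2014, 9, 30)
off_days = [date(2014, 9, 5), date(2014, 9, 6), date(2014, 9, 20)]

# Stop dates just before and just after the period, as the CASE expressions do.
bounds = [v_start - timedelta(days=1)] + sorted(off_days) + [v_end + timedelta(days=1)]
gaps = [(b - a).days - 1 for a, b in zip(bounds, bounds[1:])]
print(gaps, max(gaps))  # [4, 0, 13, 10] 13
```

The longest stretch of available days here is 13 (Sep 7 through Sep 19), exactly what the difference-minus-one arithmetic predicts.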
This isn't quite complete because it has boundary issues: problems when the apartment is not in `offdays` and problems when the unavailable periods are outside the range. Let's fix this with a `union all` and some filtering:
```
select od.*,
((case when unavailable is NULL or unavailable > v_EndDate
then v_EndDate + 1 else unavailable
end) -
(case when prev_una is null or prev_una < v_StartDate
then v_StartDate - 1 else prev_una
end) - 1
) as days_available
from (select od.apartmentId, unavailable,
lag(unavailable) over (partition by apartmentid order by unavailable) as prev_una
from offdays od
where od.unavailable between v_StartDate and v_EndDate
union all
select apartmentid, NULL, NULL
from apartments a
where not exists (select 1
from offdays od
where od.apartmentid = a.apartmentid and
od.unavailable between v_StartDate and v_EndDate
)
) od
order by days_available desc;
``` | ### Candidate apartments
Using [**`generate_series()`**](https://stackoverflow.com/questions/tagged/generate-series) and two nested [`EXISTS`](http://www.postgresql.org/docs/current/interactive/functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS) expressions, you can translate this English sentence into SQL pretty much directly:
```
Find apartments
where at least one day exists in a given time range (September)
where no "off_day" exists in a 5-day range starting that day.
```
```
SELECT *
FROM apt a
WHERE EXISTS (
SELECT 1
FROM generate_series('2014-09-01'::date
, '2014-10-01'::date - 5
, interval '1 day') r(day)
WHERE NOT EXISTS (
SELECT 1
FROM offday o
WHERE o.apt_id = a.apt_id
AND o.day BETWEEN r.day::date AND r.day::date + 4
)
)
ORDER BY a.apt_id; -- optional
```
### Free time slots
You can apply a similar query to get an actual **list of free slots** (the starting day):
```
SELECT *
FROM apt a
-- FROM (SELECT * FROM apt a WHERE apt_id = 1) a -- for just a given apt
CROSS JOIN generate_series('2014-09-01'::date
, '2014-10-01'::date - 5
, interval '1 day') r(day)
WHERE NOT EXISTS (
SELECT 1
FROM offday o
WHERE o.apt_id = a.apt_id
AND o.day BETWEEN r.day::date AND r.day::date + 4
)
ORDER BY a.apt_id, r.day;
```
For just the time slots in a **given apt**:
```
SELECT *
FROM (SELECT * FROM apt a WHERE apt_id = 1) a
...
```
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/1b27d/7)
### Database design
***If*** "off-days" typically consist of *multiple days in a row*, an alternative table layout based on [**date ranges**](http://www.postgresql.org/docs/current/interactive/rangetypes.html) instead of single days would be a (much) more efficient alternative.
[**Range operators**](http://www.postgresql.org/docs/current/interactive/functions-range.html) can make use of a GiST index on the range column. My first draft of the answer was built on ad-hoc ranges ([see answer history](https://stackoverflow.com/posts/25345930/revisions)), but while working with a "one row per day" design the updated solution is simpler and faster. The alternative layout with adapted queries:
[**SQL Fiddle** with range types.](http://sqlfiddle.com/#!15/a4904/1)
Related answer with a similar implementation with essential information:
* [Perform this hours of operation query in PostgreSQL](https://stackoverflow.com/questions/22108477/native-way-of-performing-this-hours-of-operation-query-in-postgresql/22111524#22111524) | Find available apartments given a date range, a duration, and a list of unavailable dates | [
"",
"sql",
"ruby",
"postgresql",
"bit-manipulation",
""
] |
I have a table with 13 columns (one is for the row ID # and the other 12 represent one for each month of the year). Each column will only contain numbers and I want to write several queries to check each row for certain conditions and return the ID of any rows that match.
So far I've been fine with writing all the basic SELECT queries but now I'm getting a bit stuck on writing SELECT queries to check multiple conditions at once without writing a million lines of code each time.
The current query I'm writing needs to check each pair of consecutive months (e.g. Jan/Feb, Feb/Mar, etc) to see how big the difference is between them and it needs to return any rows where 2 lots of consecutive months have a > 20% difference between them and the remaining pairs are all < 10% difference.
For example, if January was 1000 and February was 1300, that's a 30% difference so that's 1 lot. Then if say April was 1500 and May was 2100, that's a 40% difference so there's the 2nd lot. Then as long as every other pair (Feb/Mar, Mar/Apr, ..., Nov/Dec) are all < 10% difference each, then that row needs to be returned.
Unfortunately, the only way I can get this to work is by manually checking every single possibility, which works but isn't very reusable for writing similar queries.
Here is an abbreviated version of what I've got so far:
```
SELECT pkID
FROM dbo.tblMonthData
WHERE
((colFeb > colJan * 1.2 AND colMar > colFeb * 1.2) AND (colApr < colMar * 1.1 AND colMay < colApr * 1.1 AND colJun < colMay * 1.1 AND colJul < colJun * 1.1 AND colAug < colJul * 1.1 AND colSep < colAug * 1.1 AND colOct < colSep * 1.1 AND colNov < colOct * 1.1 AND colDec < colNov * 1.1))
OR ((colFeb > colJan * 1.2 AND colApr > colMar * 1.2) AND (colMar < colFeb * 1.1 AND colMay < colApr * 1.1 AND colJun < colMay * 1.1 AND colJul < colJun * 1.1 AND colAug < colJul * 1.1 AND colSep < colAug * 1.1 AND colOct < colSep * 1.1 AND colNov < colOct * 1.1 AND colDec < colNov * 1.1))
OR ((colFeb > colJan * 1.2 AND colMay > colApr * 1.2) AND (colMar < colFeb * 1.1 AND colApr < colMar * 1.1 AND colJun < colMay * 1.1 AND colJul < colJun * 1.1 AND colAug < colJul * 1.1 AND colSep < colAug * 1.1 AND colOct < colSep * 1.1 AND colNov < colOct * 1.1 AND colDec < colNov * 1.1))
...
OR ((colNov > colOct * 1.2 AND colDec > colNov * 1.2) AND (colFeb < colJan * 1.1 AND colMar < colFeb * 1.1 AND colApr < colMar * 1.1 AND colMay < colApr * 1.1 AND colJun < colMay * 1.1 AND colJul < colJun * 1.1 AND colAug < colJul * 1.1 AND colSep < colAug * 1.1 AND colOct < colSep * 1.1))
```
In total there are about 55 lines of OR statements to check every possible combination of this. If I then wanted to query something similar (e.g. return all Row ID #'s where 2 pairs are greater than 50% and 4 pairs are under 10%), that involves writing another long query from scratch checking every other possible combination.
So I was wondering how I could rewrite this in a shorter version that would also be more reusable for similar queries? | So, as an alternate answer to @Pieter's answer and to illustrate how a simpler data structure may make your task easier, I suggest the following:
```
create view tblEasy as (
select pkID, 1 as colMonth, colJan as colValue from tblMonthData
UNION
select pkID, 2 as colMonth, colFeb as colValue from tblMonthData
UNION
select pkID, 3 as colMonth, colMar as colValue from tblMonthData
UNION
select pkID, 4 as colMonth, colApr as colValue from tblMonthData
UNION
select pkID, 5 as colMonth, colMay as colValue from tblMonthData
UNION
select pkID, 6 as colMonth, colJun as colValue from tblMonthData
UNION
select pkID, 7 as colMonth, colJul as colValue from tblMonthData
UNION
select pkID, 8 as colMonth, colAug as colValue from tblMonthData
UNION
select pkID, 9 as colMonth, colSep as colValue from tblMonthData
UNION
select pkID, 10 as colMonth, colOct as colValue from tblMonthData
UNION
select pkID, 11 as colMonth, colNov as colValue from tblMonthData
UNION
select pkID, 12 as colMonth, colDec as colValue from tblMonthData
);
```
This makes the view look like how I would have structured the table initially. Then it is easy to create the pairs by comparing the value on `colMonth` to that on `colMonth + 1`.
I made a fiddle to illustrate how the comparison could also be done in a view, and then the query itself is fairly obvious.
<http://sqlfiddle.com/#!3/600f6/4>
Note that performance is not great due to the initial table structure.
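To see the unpivot-via-UNION idea in action without SQL Server, here is a compressed sketch (SQLite via Python, three months instead of twelve; an illustration only). Note that `100.0` forces real division, whereas the integer `* 100` in the T-SQL above truncates fractions, which is usually acceptable for the 110/120 thresholds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblMonthData (pkID INTEGER, colJan INTEGER, colFeb INTEGER, colMar INTEGER);
INSERT INTO tblMonthData VALUES (1, 1000, 1300, 1820),  -- +30% then +40%
                                (2, 1000, 1050, 1100);  -- small changes only
CREATE VIEW tblEasy AS
    SELECT pkID, 1 AS colMonth, colJan AS colValue FROM tblMonthData
    UNION SELECT pkID, 2, colFeb FROM tblMonthData
    UNION SELECT pkID, 3, colMar FROM tblMonthData;
CREATE VIEW tblPairs AS
    SELECT t1.pkID, t1.colMonth AS colStart,
           t2.colValue * 100.0 / t1.colValue AS colPercentage
    FROM tblEasy t1
    JOIN tblEasy t2 ON t1.pkID = t2.pkID AND t1.colMonth = t2.colMonth - 1;
""")
rows = conn.execute(
    "SELECT pkID, colStart, colPercentage FROM tblPairs ORDER BY pkID, colStart"
).fetchall()
print(rows)
```

Row 1 produces the two big jumps (130% and 140%), while row 2 stays near 100%, so the count-based query above would flag only pkID 1.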
---
**Update**
Since this is being accepted as the answer, I will embed the extra details from the sqlfiddle.
Extra view to pre-calculate the difference between consecutive months:
```
create view tblPairs as (
select t1.pkId , t1.colMonth as colStart, (t2.colValue * 100 / t1.colValue) as colPercentage
from tblEasy as t1
inner join tblEasy as t2
on t1.pkId = t2.pkId and t1.colMonth = t2.colMonth - 1);
```
Query to find where 2 months have over 20% increase and the other 9 have less than 10%:
```
select distinct pkid
from tblPairs as t1
where 2 = (
select count(*)
from tblPairs as t2
where t2.pkid = t1.pkid
and colPercentage >= 120)
and 9 = (
select count(*)
from tblPairs as t2
where t2.pkid = t1.pkid
and colPercentage <= 110)
;
``` | Unpivot the table to perform the comparison, and then if necessary pivot back to the original format:
```
with
unpvt as (
select
YearNo
,case MonthNo when 1 then colJan
when 2 then colFeb
when 3 then colMar
when 4 then colApr
when 5 then colMay
when 6 then colJun
when 7 then colJul
when 8 then colAug
when 9 then colSep
when 10 then colOct
when 11 then colNov
when 12 then colDec
else 0
end as Value
,YearNo * 12 + MonthNo - 1 as PeriodNo
/* other columns */
-- from dbo.tblMonthData
from tblMonthData
cross join ( values
(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)
)months(MonthNo)
)
select
this.*,
this.Value - isnull(prev.Value,0) as Delta
from unpvt this
left join unpvt prev
on prev.PeriodNo = this.PeriodNo - 1
;
```
etc. Depending on your version of SQL Server you may have access to the [UNPIVOT clause](http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx) to compress the source even further.
If you have a [static NUMBERS table](http://dataeducation.com/you-require-a-numbers-table/) then the first twelve rows from that can be used instead of the VALUE initialization in the CROSS JOIN. | How To Shorten This SQL Query Without A Million AND/OR's? | [
"",
"sql",
"sql-server",
""
] |
I have a stored procedure which is going really slow. I ran the SP and looked at the execution plan and was able to see what was taking so long time.
The part that is slow:
```
DECLARE
@id int
,@date datetime
,@endDate datetime
SELECT @id = 3483
,@date = DATEADD(DAY, -10, GETDATE())
,@endDate = GETDATE()
SET NOCOUNT ON
SELECT *
,prevId = dbo.fnGetPrevId(id)
FROM dbo.table WITH(READUNCOMMITTED)
```
And the part in this query that is slow is where I call the function dbo.fnGetPrevId.
dbo.fnGetPrevId:
```
DECLARE @prevId int
SELECT TOP 1 @prevId = t2.id
FROM dbo.table2 AS t2 WITH(READUNCOMMITTED)
RETURN @prevId
```
Is it possible to rewrite this for better performance without creating an index or making similar changes to the table? | You could use a sub-query instead of the scalar-valued function.
```
// ...
,prevId = (
SELECT TOP 1 x.id
FROM dbo.table AS x WITH(READUNCOMMITTED)
WHERE 1 = 1)
// ...
```
In most cases, it's best to avoid scalar-valued functions that reference tables because they are basically black boxes that need to be run once for every row, and cannot be optimized by the query plan engine. | First, you should cut the function altogether and inline the query, which from what I see would be fairly simple. Or, if you want to preserve a function, use a table-valued function. For both, check:
<http://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx>
Second, the best results in optimizing you will get with building an index (HUGE improvement) | Optimizing SQL query using function | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have 100 records from 3 users. I want to show the most recent record from each user. I have the following query:
```
SELECT *
FROM Mytable
WHERE Dateabc = CURRENT DATE
AND timeabc =
(
SELECT MAX(timeabc)
FROM Mytable
)
```
It returns the most recent record overall, and I need it to return the most recent record for every user. | I assume somewhere in your table you have a userID...
```
select userID, max(timeabc) from mytable group by userID
``` | Should the solution support both DB2 and mysql?
```
SELECT * FROM Mytable as x
WHERE Dateabc = CURRENT_DATE
AND timeabc = (SELECT MAX( timeabc ) FROM Mytable as y where x.user = y.user)
```
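The portable correlated-subquery version above can be exercised on sample data; a sketch using SQLite through Python (an illustration only, with the `Dateabc` filter dropped for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Mytable (user TEXT, timeabc TEXT);
INSERT INTO Mytable VALUES
    ('alice', '09:00'), ('alice', '17:00'), ('bob', '12:00');
""")

# One row per user: the one whose time equals that user's maximum time.
rows = conn.execute("""
    SELECT * FROM Mytable AS x
    WHERE timeabc = (SELECT MAX(timeabc) FROM Mytable AS y WHERE x.user = y.user)
    ORDER BY x.user
""").fetchall()
print(rows)  # [('alice', '17:00'), ('bob', '12:00')]
```

Each user's latest record survives, which is exactly the per-user behaviour the question asks for.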
If it's only DB2, more efficient solutions exist:
```
SELECT * from (
SELECT x.*, row_number() over (partition by user order by timeabc desc) as rn
FROM Mytable as x
)
WHERE rn = 1
``` | DB2 Show the latest row record from users, using the date and time | [
"",
"mysql",
"sql",
"database",
"db2",
"squirrel-sql",
""
] |
I need a query to get the Name of the Clients where there is no Client Contact record or, if there are any Client Contact records, they have all ended. Please see the table structure below.
Table : **Client**
```
Client ID Client Name
-------------------------
1 John
2 Sean
3 Johnson
```
Table : **Client\_Contact**
```
Client Contact ID Client ID Start Date End Date
---------------------------------------------------------
1 1 1/1/1999 2/2/1999
2 1 1/2/1999 2/3/1999
3 1 1/3/1999 2/4/1999
4 2 1/2/2005 1/2/2007
5 2 1/3/2005 NULL
```
The query will return **Johnson and John**.
1. Johnson has no Client Contact, so it is coming up in the query
2. John has Client Contacts, but all of them are ended, so it is coming up in the query
3. Sean has Client Contacts, but one of the records is not ended, so it is not coming up in the query.
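The behaviour itemised above can be reproduced mechanically with a quick sketch (SQLite via Python; an illustration only, and also one way to write the query being asked for):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Client (ClientID INTEGER, ClientName TEXT);
CREATE TABLE Client_Contact (ClientContactID INTEGER, ClientID INTEGER,
                             StartDate TEXT, EndDate TEXT);
INSERT INTO Client VALUES (1, 'John'), (2, 'Sean'), (3, 'Johnson');
INSERT INTO Client_Contact VALUES
    (1, 1, '1999-01-01', '1999-02-02'),
    (2, 1, '1999-01-02', '1999-02-03'),
    (3, 1, '1999-01-03', '1999-02-04'),
    (4, 2, '2005-01-02', '2007-01-02'),
    (5, 2, '2005-01-03', NULL);
""")

# Clients with no open-ended (EndDate IS NULL) contact, including those with none at all.
rows = conn.execute("""
    SELECT c.ClientName
    FROM Client c
    WHERE NOT EXISTS (SELECT 1 FROM Client_Contact cc
                      WHERE cc.ClientID = c.ClientID
                        AND cc.EndDate IS NULL)
""").fetchall()
print(sorted(r[0] for r in rows))  # ['John', 'Johnson']
```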
Thanks in advance for the query. | Here's a query pattern that I use for queries like this that can give substantial performance gains compared to "left join" patterns, when the tables involved get really large:
```
select c.[Client Name]
from Client c
where not exists
(
select *
from Client_Contact cc
where c.[Client ID] = cc.[Client ID]
and cc.[End Date] is null
)
``` | Try:
```
select distinct cl.*
from Client cl
left outer join Client_Contact clcnt
on cl.[Client ID] = clcnt.[Client ID]
where (clcnt.[Client ID] IS NULL OR clcnt.[End Date] IS NULL)
``` | SQL Query , Clients with No Client Contact and No Active Client Contact | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a data structure similar to:
```
Person | Counter
Mary 1
Mary 2
John 5
John 6
```
I am trying to **Group By** on the Person column, and add up the values for the counter, for each person. Meaning, I am trying to return:
```
Person - Mary - Has 3 Counters
Person - John - Has 11 Counters
```
This is my current effort; however, it is not working:
```
SELECT ('Person - '+ ISNULL(NULLIF([Person],''),'NOT ASSIGNED') + 'Has') as Name,
COUNT(*) Match, (SELECT COUNT(*) + 'Counters' FROM [MyTable] ) Total
FROM [MyTable]
``` | Here you go:
```
DECLARE @DATA TABLE (Person VARCHAR(25), [Counter] INT)
INSERT INTO @DATA
SELECT 'Mary',1 UNION
SELECT 'Mary',2 UNION
SELECT 'John',5 UNION
SELECT 'John',6
SELECT 'Person - ' + ISNULL(Person,'Not Assigned') + ' - has ' + CAST(SUM([Counter]) AS VARCHAR) + ' counters'
from @DATA
GROUP BY Person
``` | What you want is a group by clause just as you stated, so something based on this:
```
SELECT Person, SUM(Counter)
FROM MyTable
GROUP BY Person
```
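The grouped sums can be verified quickly; a sketch using SQLite through Python (an illustration only: SQLite's `||` stands in for T-SQL's string concatenation and implicit CAST):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (Person TEXT, Counter INTEGER);
INSERT INTO MyTable VALUES ('Mary', 1), ('Mary', 2), ('John', 5), ('John', 6);
""")

rows = conn.execute("""
    SELECT 'Person - ' || Person || ' - Has ' || SUM(Counter) || ' Counters'
    FROM MyTable
    GROUP BY Person
    ORDER BY Person
""").fetchall()
print(rows)
```

This yields one formatted line per person with the summed counters (3 for Mary, 11 for John), matching the desired output.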
I didn't include the extra text from your original query to make it a bit more compact. | Add Int Column Values Similar To Count | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I get a string as input from an application that is supposed to be a date. For some asinine reason the developers of the application decided to provide precision to the microsecond. Well, actually, to the tenth of a microsecond.
The string is in the format: `2014-08-15T17:38:22.2930000`
Before they changed this input format I was using the following to convert the date.
```
select DATEADD(dd, 30, @Date)
```
I know I could just do a substring on the input and lop off the last 4 characters, however, I'm wondering if there is some way that I can use `CONVERT` to just convert the date, or if SQL just doesn't support dates with this type of precision. | Either of these should work:
```
SELECT CONVERT(DATE, '2014-08-15T17:38:22.2930000', 101)
SELECT CONVERT(DATETIME2, '2014-08-15T17:38:22.2930000', 101)
``` | Whoops, the following assumes that a date *and* time are desired. Substituting in DATE for DATETIME2 will work, just as it does with CONVERT and the time component is simply dropped.
---
A CAST to DATETIME2 works (this is for SQL Server *2008 R2* - YMMV elsewhere),
```
select CAST('2014-08-15T17:38:22.2930001' AS DATETIME2) t;
```
as [DATETIME2](http://technet.microsoft.com/en-us/library/bb677335(v=sql.105).aspx) has a maximum precision of 100ns, or 7 digits after the decimal.
Then it can be converted to a normal DATETIME (either [explicitly or implicitly](http://msdn.microsoft.com/en-us/library/ms187928.aspx)), although this loses precision,
```
select CAST(CAST('2014-08-15T17:38:22.2930001' AS DATETIME2) AS DATETIME) t;
```
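The same trimming problem shows up client-side; if the string is handled in application code instead, one hedged approach is to truncate the 7-digit fraction to the 6 digits Python's `datetime` supports (a sketch, not part of the original answer):

```python
from datetime import datetime

raw = "2014-08-15T17:38:22.2930000"

# Split off the fractional seconds and keep only microsecond precision.
head, frac = raw.split(".")
dt = datetime.strptime(head, "%Y-%m-%dT%H:%M:%S").replace(
    microsecond=int(frac[:6].ljust(6, "0")))
print(dt.isoformat())  # 2014-08-15T17:38:22.293000
```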
However, such loss of precision is not allowed directly in a CHAR -> DATETIME cast, and there are precision tolerances for the other casts.
```
select CAST('2014-08-15T17:38:22.293' AS DATETIME) t; -- OK*
select CAST('2014-08-15T17:38:22.2930' AS DATETIME) t; -- fail
-- although a cast to DATETIME2 is happy to lose some precision
select CAST('2014-08-15T17:38:22.2930001' AS DATETIME2) t; -- OK, no loss
select CAST('2014-08-15T17:38:22.29300014' AS DATETIME2) t; -- OK, loss
select CAST('2014-08-15T17:38:22.293000144' AS DATETIME2) t; -- OK, loss
select CAST('2014-08-15T17:38:22.2930001444' AS DATETIME2) t; -- fail
-- and a cast to DATE works up to 8 digits after the decimal
select CAST('2014-08-15T17:38:22.29300014' AS DATE) t; -- OK, date only
select CAST('2014-08-15T17:38:22.293000144' AS DATE) t; -- fail
```
\*DATETIME has a precision of 3 decimal digits, but it does not have millisecond accuracy.
As far as I am aware, the same rules for CHAR -> DATETIME/DATETIME2 apply to CONVERT, and losing maximum precision still results in an error.
---
The [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) standard provides no limit to precision, but "the number of decimal places needs to be agreed to by the communicating parties". | Converting string with microseconds to DATE | [
"",
"sql",
"sql-server",
"sql-server-2008",
"date",
"iso8601",
""
] |
# Question
How can I keep leading zeros when concatenating two numbers together in an Access query?
# Table
```
FileYear | FileIteration
------------------------
14 | 0001
14 | 0002
14 | 0003
14 | 0004
```
(`FileIteration` has a "0000" format, but is still stored as integer)
# Query
```
SELECT MAX(FileYear & FileIteration)
FROM FileNumber
WHERE FileYear=Format(Date(),"yy");
```
# Current (wrong) output
> 144
It doesn't keep leading zeros.
# Desired output
> 140004
I want it to keep leading zeros. | Using the `Format` function is in my opinion the "correct" way to do it, although there are many ways to achieve what you want in this case.
The output of `Format()` will be a string:
```
SELECT MAX(FileYear & Format(FileIteration,"0000"))
FROM FileNumber
WHERE FileYear=Format(Date(),"yy");
``` | If it's always a 4-digit number, you can simply do this:
```
Fileyear * 10000 + FileIteration
```
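Both the arithmetic form and the zero-padded string form are easy to confirm (a Python sketch with the sample values; an illustration only):

```python
file_year, file_iteration = 14, 4

# Arithmetic version: shift the year over four decimal places.
print(file_year * 10000 + file_iteration)  # 140004

# String version: zero-pad the iteration to four digits, as Format(..., "0000") does.
print(f"{file_year}{file_iteration:04d}")  # 140004
```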
If the number of digits is dynamic, let's assume it's in the column NoOfDigits:
```
Fileyear * 10 ^ NoOfDigits + FileIteration
``` | How can I keep leading zeros when concatenating two numbers together in a query? | [
"",
"sql",
"ms-access",
"ms-access-2007",
""
] |
How do I convert table column data from 'LastName, FirstName' to 'FirstName LastName' in Postgres?
So if the table column contains 'Ona, Lisa' I want it to return 'Lisa Ona'. If the column doesn't contain a ', ' then I want it to return null. I've been trying to use the Postgres functions SUBSTRING and REGEXP\_REPLACE but cannot get anything working. | You need the `strpos` and `substr` functions and a `CASE` to return `NULL`.
```
SELECT CASE WHEN strpos(name,', ') > 0
THEN substr(name,strpos(name,', ') + 2) ||' ' || substr(name,0, strpos(name,', '))
ELSE NULL END
FROM person
```
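The same logic ports to other engines; here is a sketch in SQLite through Python (an illustration only, where SQLite's `instr` plays the role of Postgres's `strpos`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
flipped = conn.execute("""
    SELECT CASE WHEN instr(name, ', ') > 0
                THEN substr(name, instr(name, ', ') + 2)
                     || ' ' || substr(name, 1, instr(name, ', ') - 1)
                ELSE NULL END
    FROM (SELECT 'Ona, Lisa' AS name)
""").fetchone()[0]
print(flipped)  # Lisa Ona
```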
[fiddle](http://sqlfiddle.com/#!15/d72db/1) | Never mind, I worked out a solution using the SPLIT\_PART function:
```
SELECT
t1.sort_name,
split_part(t1.sort_name, ', ', 2)|| ' ' || split_part(t1.sort_name, ', ', 1)
FROM artist t1
WHERE
t1.sort_name LIKE '%, %'
``` | How do I convert table column data from 'LastName, FirstName' to 'FirstName LastName' in Postgres | [
"",
"sql",
"postgresql",
"substring",
""
] |
I have 2 tables (A, B). They each have a different column that is basically an order or a sequence number. Table A has 'Sequence' and the values range from 0 to 5. Table B has 'Index' and the values are 16740, 16744, 16759, 16828, 16838, and 16990. Unfortunately I do not know the significance of these values. But I do believe they will always match in sequential order. I want to join these tables on these numbers where 0 = 16740, 1 = 16744, etc. Any ideas?
Thanks | If the values are in ascending order as per your example, you can use the `ROW_NUMBER()` function to achieve this:
```
;with cte AS (SELECT *, ROW_NUMBER() OVER(ORDER BY [Index])-1 RN
FROM B)
SELECT *
FROM cte
``` | You could use a `case` expression to convert table a's values to table b's values (or vise-versa) and join on that:
```
SELECT *
FROM a
JOIN b ON a.[sequence] = CASE b.[index] WHEN 16740 THEN 0
WHEN 16744 THEN 1
WHEN 16759 THEN 2
WHEN 16828 THEN 3
WHEN 16838 THEN 4
WHEN 16990 THEN 5
ELSE NULL
END;
``` | SQL Join on sequence number | [
"",
"sql",
"sql-server-2008",
"join",
""
] |
There are a lot of questions on this topic, but I still can't figure out a way to make this work.
The query I'm doing is:
```
SELECT `b`.`ads_id` AS `ads_id`,
`b`.`bod_bedrag` AS `bod_bedrag`,
`a`. `ads_naam` AS `ads_naam`,
`a`.`ads_url` AS `ads_url`,
`a`.`ads_prijs` AS `ads_price`,
`i`.`url` AS `img_url`,
`c`.`url` AS `cat_url`
FROM `ads_market_bids` AS `b`
INNER JOIN `ads_market` AS `a`
ON `b`.`ads_id` = `a`.`id`
INNER JOIN `ads_images` AS `i`
ON `b`.`ads_id` = `i`.`ads_id`
INNER JOIN `ads_categories` AS `c`
ON `a`.`cat_id` = `c`.`id`
WHERE `i`.`img_order` = '0'
AND `b`.`u_id` = '285'
GROUP BY `b`.`ads_id`
HAVING MAX(b.bod_bedrag)
ORDER BY `b`.`bod_bedrag` ASC
```
But the problem I keep seeing is that I need b.bod\_bedrag to be sorted before the GROUP BY takes place, or something like that. I don't know how to explain it exactly.
The bod\_bedrag values I'm getting now are the lowest of the bids in the table. I need the highest.
I've tried just about everything, even thought of not grouping but using DISTINCT instead. That didn't work either. I tried ORDER BY MAX, everything I know or could find on the internet.
Image 1 is the situation without the group by. Order By works great (ofc).

Image 2 is with the group by. As you can see, the lowest bid is taken as bod\_bedrag. I need the highest.
 | Judging by your output you want:
```
SELECT amb.ads_id,
MAX(amb.bod_bedrag) max_bod_bedrag,
am.ads_naam,
am.ads_url,
am.ads_prijs ads_price,
ai.url img_url,
ac.url cat_url
FROM ads_market_bids amb
JOIN ads_images ai
ON ai.ads_id = amb.ads_id
AND ai.img_order = 0
JOIN ads_market am
ON am.id = amb.ads_id
JOIN ads_categories ac
ON ac.id = am.cat_id
WHERE amb.u_id = 285
GROUP BY amb.ads_id,
am.ads_naam,
am.ads_url,
am.ads_prijs,
ai.url,
ac.url
ORDER BY max_bod_bedrag ASC
```
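Stripped to its essence, the `MAX(...)`-per-group part of the query behaves like this (a minimal SQLite sketch with made-up bid values):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bids (ads_id INTEGER, bod_bedrag REAL)")
con.executemany("INSERT INTO bids VALUES (?, ?)",
                [(1, 10.0), (1, 25.0), (2, 5.0), (2, 7.5)])

# One row per ads_id, carrying the highest bid, ordered ascending
rows = con.execute("""
    SELECT ads_id, MAX(bod_bedrag) AS max_bod
    FROM bids GROUP BY ads_id ORDER BY max_bod
""").fetchall()
print(rows)  # [(2, 7.5), (1, 25.0)]
```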
I have also removed all the unnecessary backticks and the aliasing of columns to their own names.
Your `HAVING` was doing nothing, as every group 'has' a `MAX(amb.bod_bedrag)`. | One approach is to simulate `ROW_NUMBER()` (which MySQL does not have) with user variables; it allows selection by record rather than by aggregates, which may come from disparate source records. It works by adding two variables to each row (it does **not** increase the number of rows). Then, in an ordered subquery, RN is set to 1 for the highest `b`.`bod_bedrag` of each `b`.`ads_id`; all other rows per `b`.`ads_id` get a higher RN value. At the end we filter where RN = 1, which equates to the record containing the highest bid value.
```
SELECT *
FROM (
SELECT
@row_num :=IF(@prev_value=`b`.`ads_id`, @row_num + 1, 1) AS RN
,`b`.`ads_id` AS `ads_id`
,`b`.`bod_bedrag` AS `bod_bedrag`
,`a`.`ads_naam` AS `ads_naam`
,`a`.`ads_url` AS `ads_url`
,`a`.`ads_prijs` AS `ads_price`
,`i`.`url` AS `img_url`
,`c`.`url` AS `cat_url`
            , @prev_value := `b`.`ads_id`
FROM `ads_market_bids` AS `b`
INNER JOIN `ads_market` AS `a` ON `b`.`ads_id` = `a`.`id`
INNER JOIN `ads_images` AS `i` ON `b`.`ads_id` = `i`.`ads_id`
INNER JOIN `ads_categories` AS `c` ON `a`.`cat_id` = `c`.`id`
CROSS JOIN
( SELECT @row_num :=1
, @prev_value :=''
) vars
WHERE `i`.`img_order` = '0'
AND `b`.`u_id` = '285'
    ORDER BY `b`.`ads_id`, `b`.`bod_bedrag` DESC
    ) ranked
WHERE RN = 1;
```
---
You can even turn off that silly GROUP BY extension, details in the man page:
[MySQL Extensions to GROUP BY](http://dev.mysql.com/doc/refman/5.7/en/group-by-extensions.html) | Order a Group BY | [
"",
"mysql",
"sql",
""
] |
I have table with data something like this
```
Id value1 value2 IsIncluded
----------------------------------
1859 1702 4043 0
1858 1706 4045 0
1858 1703 4046 1
1860 1701 4046 0
1861 1702 4047 0
```
To get the Ids with `min(value1)` and `max(value2)` and filter based on included column I can do something like this
```
select
Id, min(value1), max(value2)
from table
where IsIncluded = 0
group by Id
```
and I get the result
```
Id value1 value2 IsIncluded
-----------------------------------
1859 1702 4043 0
1858 1706 4045 0
1860 1701 4046 0
1861 1702 4047 0
```
but can I filter the data further: if there is a 1 in `IsIncluded` for an Id, then none of that Id's rows should be picked up? | Similar to Gordon Linoff's approach, this worked for me:
```
select Id, min(value1), max(value2)
from table
group by Id
having sum(case when IsIncluded = 1 then 1 else 0 end) = 0
``` | ```
select Id, min(value1), max(value2)
from table t
where t.IsIncluded=0
and not exists (
select 0 from table t2
where t.Id = t2.Id
      and t2.IsIncluded = 1
)
group by t.Id;
``` | Filter the data with GROUP BY in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
Trying to use `REPLACE` to strip the start and end of the string. The string that I have is, for example: `ABCD - [001]`. I want to get just the `001` and count.
Example: [SQLFIDDLE](http://sqlfiddle.com/#!3/99e73/2)
The result should be:
```
Description Total
001 4
002 2
003 3
``` | Your description fields (at least in the example) all have the numbers in the same position. So, the easiest way to get them is `substring()`:
```
SELECT (case when Description LIKE '%/[___/]%' ESCAPE '/' then substring(description, 9, 3)
else Description
end) as Description,
COUNT (*) AS Total
FROM Table1
WHERE Description LIKE '%/[___/]%' ESCAPE '/' OR Description LIKE '___'
GROUP BY (case when Description LIKE '%/[___/]%' ESCAPE '/' then substring(description, 9, 3)
else Description
end)
ORDER BY Description ASC;
``` | [SQL Fiddle](http://sqlfiddle.com/#!3/99e73/25)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE Table1
(
Description varchar(20)
);
INSERT INTO Table1
(
Description
)
VALUES
('ABCD - [001]'),('ABCD - [001]'),('XIo9 - [001]'),('001'),
('XYZW - [002]'),('002'),('XYZW - [003]'),('XYZW - [003]'),('003');
```
**Query 1**:
```
SELECT
RIGHT(REPLACE(REPLACE(RIGHT('0000' + Description,4), '[', ''),']',''),3) Description,
COUNT (*) AS Total
FROM
Table1
GROUP BY
RIGHT(REPLACE(REPLACE(RIGHT('0000' + Description,4), '[', ''),']',''),3)
ORDER BY
Description ASC
```
**[Results](http://sqlfiddle.com/#!3/99e73/25/0)**:
```
| DESCRIPTION | TOTAL |
|-------------|-------|
| 001 | 4 |
| 002 | 2 |
| 003 | 3 |
``` | Using replace in select clause | [
"",
"sql",
"sql-server-2008",
""
] |
Is there a way to fine tune the SQL Server Management Studio Filter Settings results (in the Object Explorer) by using wildcards to find objects like `XYZ_unknown_abc` using a `Contains` operator?
I tried `(XYZ*abc)` or `(XYZ abc)` with no luck. | Coming a bit late, but %XYZ%abc% in the search box shows db object with names containing XYZ and abc, in that order. | Wildcards are still not allowed in the filter setting within Object Explorer in SSMS. BTW .. you can check this new set of Operators: <https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms181166(v=sql.90)> | Use Wildcard / Regex in filter settings of SSMS object explorer to find an object | [
"",
"sql",
"regex",
"sql-server-2008",
"wildcard",
"ssms",
""
] |
I want to get the average for a column:
```
Name Salary
HI 33168,00
Hello 32882,00
```
When I tried the query below
```
select Name, Salary, Avg(Salary) as avg from table group by name, Salary
```
I'm getting the same value for both Salary and the average salary.
expected result:
```
Name Salary
HI 33168,00
Hello 32882,00
avg 33025,00
``` | If you group by name and salary you will get one row for each such combination. In many cases this means that avg(salary) will be the same as salary. If you want to only get the average(salary) per name:
```
select Name, Avg(Salary) as avg from table group by name
```
If you want each individual salary as well you can do this with a union:
```
select Name, Avg(Salary) as salary from table group by name
union
select Name, Salary from table
```
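The `UNION` variant can be sketched against an in-memory SQLite database (the figures are the question's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT, salary REAL)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("HI", 33168.0), ("Hello", 32882.0)])

# Detail rows plus one extra 'avg' row appended via UNION ALL
rows = con.execute("""
    SELECT name, salary FROM emp
    UNION ALL
    SELECT 'avg', AVG(salary) FROM emp
""").fetchall()
print(rows)  # three rows; the aggregate row is ('avg', 33025.0)
```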
A shortcut for the latter is:
```
select Name, Salary, Avg(Salary) as salary from table
GROUP BY GROUPING SETS((Name, salary), (Name))
``` | If you group by [name] and [Salary], assuming your name is unique, you will get one average salary per [name], which will be the same as the salary value
```
select Avg(Salary) as avg from table
```
If you want detail and the average row, you could use a union all
```
select name, Salary from table
UNION ALL
select 'Avg', Avg(Salary) from table
``` | How to get average for a column value in sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have one table in which I would like **only one entry**. So if someone is trying to insert another row it shouldn't be allowed, only after someone deleted the previously existing row.
How do I set a rule for a table like this? | A `UNIQUE` constraint allows multiple rows with `null` values, because two `null` values are not considered to be the same. (Except when using [`NULLS NOT DISTINCT` in Postgres 15](https://stackoverflow.com/a/8289253/939860) or later.)
Similar considerations apply to `CHECK` constraints. They allow the expression to be `true` or `null` (just not `false`). Again, `null` values get past the check.
To rule that out, define the column `NOT NULL`. Or make it the `PRIMARY KEY` since PK columns are defined `NOT NULL` automatically. See:
* [Why can I create a table with PRIMARY KEY on a nullable column?](https://stackoverflow.com/questions/20006374/why-can-i-create-a-table-with-primary-key-on-a-nullable-column/20006502#20006502)
Also, just use `boolean`:
```
CREATE TABLE public.onerow (
onerow_id bool PRIMARY KEY DEFAULT true
, data text
, CONSTRAINT onerow_uni CHECK (onerow_id)
);
```
The `CHECK` constraint can be that simple for a `boolean` column. Only `true` is allowed.
You may want to [`REVOKE`](https://www.postgresql.org/docs/current/sql-revoke.html) (or not `GRANT`) the `DELETE` and `TRUNCATE` privileges from `public` (and all other roles) to prevent the single row from ever being deleted. Like:
```
REVOKE DELETE, TRUNCATE ON public.onerow FROM public;
``` | Add a new column to the table, then add a check constraint and a uniqueness constraint on this column. For example:
```
CREATE TABLE logging (
LogId integer UNIQUE default(1),
MyData text,
OtherStuff numeric,
Constraint CHK_Logging_singlerow CHECK (LogId = 1)
);
```
Now you can only ever have one row with a LogId = 1. If you try to add a new row it will either break the uniqueness or check constraint.
(I might have messed up the syntax, but it gives you an idea?) | How to allow only one row for a table? | [
"",
"sql",
"postgresql",
"postgresql-9.1",
""
] |
The whole question is:
Rewrite the query so that the user is prompted to
enter a letter that the last name starts with. For example, if the user enters “H" (capitalized) when prompted for a letter, then the output should show all employees whose last name starts with the letter “H.”
```
SELECT last_name
FROM employees
WHERE last_name like '%' = &Start_Letter
```
This won't work :( | This query should work:
```
SELECT last_name FROM employees WHERE last_name like '&Start_Letter%';
```
I have tested it in Oracle11g. | ```
SELECT last_name FROM employees WHERE last_name like '&Start_Letter' || '%';
``` | SQL prompt to enter a letter that the last name starts with | [
"",
"sql",
"database",
"oracle",
""
] |
I have the following like table
```
A B
Yes OOS
No No
OOS Yes
OOS No
Yes No
```
I want to do the following
```
Criteria A B
Yes 2 1
No 1 3
OOS 2 1
```
I can get this right with one column like
```
Criteria A
Yes 2
No 1
OOS 2
```
Here is what I have to achieve the above:
```
SELECT A, count(A) FROM temp_db GROUP BY A;
``` | You need to get the values into a single column, so you can use `group by`. Here is one method:
```
select criteria, sum(A) as A, sum(B) as B
from ((select A as criteria, 1 as A, 0 as B
from liketable
) union all
(select B, 0, 1
from liketable
)
) t
group by criteria;
```
Using the `union all` approach is the safest way in MySQL, in case not all `criteria` are in both columns. The following is a slight tweak that might be a bit better performance-wise:
```
select criteria, sum(A) as A, sum(B) as B
from ((select A as criteria, count(*) as A, 0 as B
from liketable
group by A
) union all
(select B, 0, count(*)
from liketable
group by B
)
) t
group by criteria;
```
Often doing two aggregations on half the data is more efficient than a bigger aggregation on all the data. | For this sample data, you could do this with a join of derived tables:
```
SELECT qa.Criteria, qa.A, qb.B FROM
(SELECT A AS Criteria, count(A) AS A FROM temp_db GROUP BY A) qa
FULL OUTER JOIN
(SELECT B AS Criteria, count(B) AS B FROM temp_db GROUP BY B) qb
ON qa.Criteria=qb.Criteria
```
But if there are missing criteria in the A column, they will not appear in the results of this query, and you would need the UNION ALL approach others have suggested. | Mysql Count multiple columns based on the same values | [
"",
"mysql",
"sql",
""
] |
I have a table measurements that has data like below.
```
week hips wrist abs weight
1 26.3 6.3 24.3 100
2 25.2 6.3 23.3 96
```
I am trying to get the difference between week 2 and week 1:
```
hips wrist abs weight
-1.1 0 -1 -4
```
I tried joining the tables on each other and subtracting but I kept getting duplicates. How would I accomplish this? | Have you tried this?
```
select m.hips - mprev.hips as hips,
       m.wrist - mprev.wrist as wrist,
       m.abs - mprev.abs as abs,
       m.weight - mprev.weight as weight
from measurements m join
     measurements mprev
     on m.week = mprev.week + 1;
``` | ```
SELECT
a.[hips]-b.hips as Hips
, a.wrist - b.wrist as Wrist
, a.abs - b.abs as Abs
, a.weight - b.weight as Weight
FROM
tableName a
LEFT JOIN
tableName b
ON
a.week-1=b.week
``` | How do I subtract from the same table sql | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I have a table with columns (id,col\_id,type\_id,value) which looks like below
```
| id | col_id | type_id | value |
------------------------------------
| 1 | 152 | 22 | DGL |
| 2 | 152 | 27 | AD56JK-5 |
| 3 | 152 | 30 | 5708 |
| 4 | 154 | 22 | JVM |
| 5 | 154 | 27 | AD29JL-9 |
| 6 | 154 | 30 | 9584 |
| 7 | 155 | 22 | MVR |
| 8 | 155 | 27 | AD29JM-1 |
```
What I want to do is: for each distinct `col_id`, change the value of the row with type\_id=30 to the value of the row with type\_id=22 concatenated with the col\_id.
For example, the value '5708' should be changed to 'DGL152' and the value '9584' should be changed to 'JVM154'. In the case of col\_id=155 there is no row with type\_id=30, so we should insert a new row with the following values
```
------------------------------------
| 9 | 155 | 30 | MVR155 |
------------------------------------
```
There are many records like this in the table. In this way we should change the value of the rows with type\_id=30, or insert the row if it does not exist, for all the records in the table. How could I do this? Could anybody help me with a solution?
**Edit:**
I know how to update the value using UPDATE and SET, but what I am looking for is to do the entire updating and inserting process in a single query.
Thanks and Regards | To either insert or update, use the INSERT statement with ON DUPLICATE KEY UPDATE clause. There must be a unique index on col\_id + type\_id to make it work.
```
insert into mytable (col_id, type_id, value)
select col_id, 30 as type_id, concat(value, col_id) as new_value
from mytable src
where type_id = 22
on duplicate key update value = src.new_value;
``` | Can you try this. (It is in TSQL, but you will understand how it works.)
```
DECLARE @TAB TABLE (ID INT,COL_ID INT,TYPE_ID INT, VALUE VARCHAR(15))
INSERT INTO @TAB VALUES
(1,152,22,'DGL152'),
(2,152,27,'AD56JK-5'),
(3,152,30,'5708'),
(4,154,22,'JVM154'),
(5,154,27,'AD29JL-9'),
(6,154,30,'9584'),
(7,155,22,'MVR'),
(8,155,27,'AD29JM-1')
UPDATE A
SET A.VALUE = B.VALUE
FROM @TAB A
JOIN @TAB B ON A.COL_ID = B.COL_ID
WHERE A.TYPE_ID = 30
AND B.TYPE_ID = 22
SELECT * FROM @TAB
```
Result:

After the update, Write an INSERT statement for the new record.
```
INSERT INTO @TAB VALUES (9,155,30,'MVR155')
```
---
Edit:
I read the question wrong,. If that is the case we may use a `MERGE`. Like this.
```
DECLARE @TAB TABLE (ID INT IDENTITY(1,1),COL_ID INT,TYPE_ID INT, VALUE VARCHAR(15))
INSERT INTO @TAB VALUES
(152,22,'DGL152'),
(152,27,'AD56JK-5'),
(152,30,'5708'),
(154,22,'JVM154'),
(154,27,'AD29JL-9'),
(154,30,'9584'),
(155,22,'MVR'),
(155,27,'AD29JM-1')
MERGE @TAB A
USING (SELECT * FROM @TAB WHERE TYPE_ID = 22) B
ON A.COL_ID = B.COL_ID AND A.TYPE_ID = 30
WHEN MATCHED AND A.TYPE_ID = 30 THEN UPDATE SET
VALUE = B.VALUE
WHEN NOT MATCHED THEN INSERT
(COL_ID,TYPE_ID,VALUE)
VALUES (B.COL_ID,B.TYPE_ID,B.VALUE);
SELECT * FROM @TAB
```
Result:

It is TSQL by the way. | insert a new row if doesnot exists depending on the colum value in other row of same table | [
"",
"mysql",
"sql",
""
] |
I have two tables with the following data:
```
[Animals].[Males]
DataID HerdNumber HerdID NaabCode
e46fff54-a784-46ed-9a7f-4c81e649e6a0 4 'GOLDA' '7JE1067'
fee3e66b-7248-44dd-8670-791a6daa5d49 1 '35' NULL
[Animals].[Females]
DataID HerdNumber HerdID BangsNumber
987110c6-c938-43a7-a5db-194ce2162a20 1 '9' 'NB3829483909488'
1fc83693-9b8a-4054-9d79-fbd66ee99091 2 'NATTIE' 'ID2314843985499'
```
I want to merge these tables into a view that looks like this:
```
DataID HerdNumber HerdID NaabCode BangsNumber
e46fff54-a784-46ed-9a7f-4c81e649e6a0 4 'GOLDA' '7JE1067' NULL
fee3e66b-7248-44dd-8670-791a6daa5d49 1 '35' NULL NULL
987110c6-c938-43a7-a5db-194ce2162a20 1 '9' NULL 'NB3829483909488'
1fc83693-9b8a-4054-9d79-fbd66ee99091 2 'NATTIE' NULL 'ID2314843985499'`
```
When I used the `UNION` keyword, SQL Server produced a view that merged the `NaabCode` and `BangsNumber` into one column. A book that I have on regular SQL suggested `UNION CORRESPONDING` syntax like so:
```
SELECT *
FROM [Animals].[Males]
UNION CORRESPONDING (DataID, HerdNumber, HerdID)
SELECT *
FROM [Animals].[Females]`
```
But when I type this SQL Server says "Incorrect syntax near 'CORRESPONDING'."
Can anyone tell me how to achieve my desired result and/or how to use `UNION CORRESPONDING` in T-SQL? | You can just do:
```
SELECT DataID, HerdNumber, HerdID, NaabCode, NULL as BangsNumber
FROM [Animals].[Males]
UNION ALL
SELECT DataID, HerdNumber, HerdID, NULL as NaabCode, BangsNumber
FROM [Animals].[Females]
```
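A minimal sketch of the NULL-padding idea, run against SQLite with shortened, made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE males (id INTEGER, herd TEXT, naab TEXT)")
con.execute("CREATE TABLE females (id INTEGER, herd TEXT, bangs TEXT)")
con.execute("INSERT INTO males VALUES (1, 'GOLDA', '7JE1067')")
con.execute("INSERT INTO females VALUES (2, 'NATTIE', 'ID2314843985499')")

# Pad each branch with NULL so both sides of the UNION have the same columns
rows = con.execute("""
    SELECT id, herd, naab, NULL AS bangs FROM males
    UNION ALL
    SELECT id, herd, NULL, bangs FROM females
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'GOLDA', '7JE1067', None), (2, 'NATTIE', None, 'ID2314843985499')]
```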
**[SQL Fiddle](http://sqlfiddle.com/#!6/94929/2)**
I don't remember that SQL Server supports the `corresponding` syntax, but I might be wrong.
Anyway, this query will select `null` for the `BangsNumber` column for the males, and for the `NaabCode` column for the females, while selecting everything else correctly. | Just do the `union` explicitly listing the columns:
```
select DataID, HerdNumber, HerdID, NaabCode, NULL as BangsNumber
from Animals.Males
union all
select DataID, HerdNumber, HerdID, NULL, BangsNumber
from Animals.Females;
```
Note: you should use `union all` instead of `union` (assuming that no single animal is both male and female). `union` incurs a performance overhead to remove duplicates. | T-SQL Union with Dissimilar Columns | [
"",
"sql",
"sql-server",
""
] |
I'm not even sure if this is possible using SQL, but I'm completely stuck on this problem. I have a table like this:
```
Total Code
212 XXX_09_JUN
315 XXX_7_JUN
68 XXX_09_APR
140 XXX_AT_APR
729 XXX_AT_MAY
```
I need to sum the "total" column grouped by the code. The issue is that "XXX\_09\_JUN" and "XXX\_7\_JUN" and "XXX\_09\_APR" need to be the same group.
I was able to accomplish this by creating a new column where I assigned values based on the row's code, but since this needs to be done on multiple tables with millions of entries, I can't use that method.
Is there some way that I could group the rows based on a condition such as:
```
WHERE Code LIKE '%_09_%' OR Code LIKE '%_7_%'
```
This is not the only condition - I need about 10 conditions like this. Sorry if that doesn't make sense, I'm not sure how to explain this...
Also, if this can be accomplished using Visual Studio 2008 and SSRS more easily, that would work as well because that is the final goal of this query.
Edit: To clarify, this would be the ideal result:
```
Total Code
595 had_a_number
869 had_at
``` | One option is to use a `CASE` expression:
```
GROUP BY CASE
WHEN Code LIKE '%!_09!_%' ESCAPE '!'
THEN 'had_a_number'
WHEN Code LIKE '%!_7!_%' ESCAPE '!'
THEN 'had_a_number'
WHEN Code LIKE '%!_AT!_%' ESCAPE '!'
THEN 'had_at'
ELSE 'other'
END
```
Add however many `WHEN` conditions to assign whatever condition to a "group".
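Here is the whole pattern end to end in SQLite, which also supports `ESCAPE` (grouping by the select alias is a convenience SQLite allows; in SQL Server you would repeat the `CASE` expression as above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (total INTEGER, code TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(212, 'XXX_09_JUN'), (315, 'XXX_7_JUN'), (68, 'XXX_09_APR'),
                 (140, 'XXX_AT_APR'), (729, 'XXX_AT_MAY')])

# Bucket codes with a CASE, escaping the literal underscores, then sum per bucket
rows = con.execute("""
    SELECT SUM(total),
           CASE WHEN code LIKE '%!_09!_%' ESCAPE '!'
                  OR code LIKE '%!_7!_%'  ESCAPE '!' THEN 'had_a_number'
                WHEN code LIKE '%!_AT!_%' ESCAPE '!' THEN 'had_at'
                ELSE 'other' END AS grp
    FROM t GROUP BY grp ORDER BY grp
""").fetchall()
print(rows)  # [(595, 'had_a_number'), (869, 'had_at')]
```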
Note that the underscore is a wildcard character for the `LIKE` operator. An underscore will match any single character. To search for a literal underscore, you would need to "escape" the underscore within the string literal.
```
'A_12_E' LIKE '%_12_%' => TRUE
'AB12DE' LIKE '%_12_%' => TRUE
'A_12_E' LIKE '%!_12!_%' ESCAPE '!' => TRUE
'AB12DE' LIKE '%!_12!_%' ESCAPE '!' => FALSE
``` | [SQL Fiddle](http://sqlfiddle.com/#!3/f0a20/1)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE TEST_TABLE(Total INT, Code VARCHAR(20))
GO
INSERT INTO TEST_TABLE VALUES
(212, 'XXX_09_JUN'),
(315, 'XXX_7_JUN'),
(68, 'XXX_09_APR'),
(140, 'XXX_AT_APR'),
(729, 'XXX_AT_MAY')
GO
```
**Query 1**:
```
SELECT SUM(Total) Total
,CASE
WHEN Code LIKE '%_%[0-9]%_%'
THEN 'had a number'
WHEN Code NOT LIKE '%_%[0-9]%_%'
THEN 'had at'
END AS Code
FROM TEST_TABLE
GROUP BY CASE
WHEN Code LIKE '%_%[0-9]%_%'
THEN 'had a number'
WHEN Code NOT LIKE '%_%[0-9]%_%'
THEN 'had at'
END
```
**[Results](http://sqlfiddle.com/#!3/f0a20/1/0)**:
```
| TOTAL | CODE |
|-------|--------------|
| 595 | had a number |
| 869 | had at |
``` | Using conditions to specify groups for GROUP BY | [
"",
"sql",
"sql-server",
"sql-server-2008",
"reporting-services",
""
] |
I am looking for a means of removing leading and trailing spaces based on the following scenario within an Oracle 11gR2 DB using SQL. Not sure if I require REGEXP\_REPLACE.
At the moment I have the following possible scenario:
```
v_entries := '"QAZ "," QWERTY ","BOB "," MARY ","HARRY"," TOM JONES "'
```
where you can see that I have both leading and trailing spaces within some of the data inside the double quotes, i.e. `" QWERTY "`, but not all, like `"HARRY"`, which is fine.
What I am after and unsure how to do in Oracle 11gR2 is this result, i.e.:
```
v_entries := '"QAZ","QWERTY","BOB","MARY","HARRY","TOM JONES"'
```
where all leading/trailing spaces have been removed where they exist within the double-quote groupings, but only from the beginning and end of each grouping.
Spaces within the actual values should not be removed, as in my `"TOM JONES"`; I want to preserve the space between the given name and surname here. | Assuming you don't want to split the string into its components, you can use a regular expression to remove the spaces. If you want to get rid of all spaces - so you don't have any two-word values, for example - then you can use:
```
v_entries := regexp_replace(v_entries, '[[:space:]]', null);
```
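Both the blanket removal and the quote-adjacent variant can be sanity-checked outside the database with Python's `re` module (a rough equivalent only; POSIX classes such as `[[:space:]]` map to `\s`):

```python
import re

s = '"QAZ  ","   QWERTY  ","BOB ","  MARY ","HARRY","  TOM JONES  "'

# Remove every whitespace character -- this also kills the space in "TOM JONES"
print(re.sub(r"\s", "", s))

# Remove whitespace only where it touches a double quote, keeping interior spaces
print(re.sub(r'\s*"\s*', '"', s))  # "QAZ","QWERTY","BOB","MARY","HARRY","TOM JONES"
```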
But if you might have spaces in the middle of a value that you want to preserve you can only remove those either side of a double-quote character:
```
v_entries := '"QAZ "," QWERTY ","BOB "," MARY ","HARRY"';
```
For example:
```
declare
v_entries varchar2(80);
begin
v_entries := '"QAZ "," QWERTY ","BOB "," MARY ","TOM JONES"';
v_entries := regexp_replace(v_entries, '[[:space:]]*"[[:space:]]*', '"');
dbms_output.put_line(v_entries);
end;
/
anonymous block completed
"QAZ","QWERTY","BOB","MARY","TOM JONES"
``` | You should use the `trim()` function to remove leading and trailing spaces. [Here's the documentation](http://docs.oracle.com/javadb/10.6.1.0/ref/rreftrimfunc.html) on how to use `trim()`. | How to remove both leading and trailing spaces within a group of data in Oracle11gR2? | [
"",
"sql",
"regex",
"oracle11g",
""
] |
Is it possible to achieve something like this in SQL?
```
SELECT * FROM `services` WHERE service = ('wifi','pulizia','parking','ombrellone','driver','colazione') AND active = 1
```
For example, I have a table like:
```
service |price |active
wifi €60 0
park €20 1
option €30 1
another €40 1
```
The query must do something like:
```
1) SELECT * FROM service
2) loop through 'service' column and select WHERE service is = to any value on array
3) check if active is = 1
```
What I want:
```
query:
SELECT * FROM services WHERE service = ('wifi','park') AND active = 1
result:
service |price |active
park €20 1
```
Thanks for your help, I am just starting with SQL :D | You need to use the SQL [IN](http://www.w3schools.com/sql/sql_in.asp) operator to filter on multiple values.
So your query should be :
```
SELECT *
FROM `services`
WHERE service IN ('wifi','pulizia','parking','ombrellone','driver','colazione') AND active = 1
``` | Just use `IN`
```
SELECT * FROM `services`
WHERE service_id IN ('wifi','pulizia','parking','ombrellone','driver','colazione') AND active = 1
``` | SQL : how to select different rows where values are equal to an set of values? | [
"",
"mysql",
"sql",
""
] |
My stored procedure:
```
CREATE PROCEDURE TEST_TEMPLATE
@TEMPLATE_TYPE INT=NULL
AS
BEGIN
SELECT
TEMPLATE_ID,
TEMPLATE_NAME
FROM
BDC_TEMPLATES
WHERE
TEMPLATE_CAN_BE_APPLIED_TO IN (
CASE
WHEN @TEMPLATE_TYPE=0 OR @TEMPLATE_TYPE=1 THEN @TEMPLATE_TYPE -- 0:EMAIL & LETTER , 1: EMAIL
WHEN @TEMPLATE_TYPE=2 THEN (SELECT DATA FROM UDF_DP_SPLIT_STRING('0,2', ',')) -- 2: LETTER
ELSE TEMPLATE_CAN_BE_APPLIED_TO
END)
END
```
Above stored procedure returns following error:
> Msg 512, Level 16, State 1, Procedure TEST\_TEMPLATE, Line 6
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
when executed with following inputs:
```
exec TEST_TEMPLATE 2
```
I am using the `IN` clause in the `WHERE` condition because when `@Template_Type` is '2', the `Template_can_be_applied_to` column can be any one of the values 0 and 2. | First of all, you can't use an `INT` variable with the `IN` statement; you need to use a table variable with a `SELECT` statement.
**UPDATE** You can't use a scalar variable in the `IN` statement like a record set. However, if the variable holds a single data value (which I think is what you are trying to do), it is OK.
In this case the solution provided by @ah\_hau, using the equality sign `TEMPLATE_CAN_BE_APPLIED_TO = @TEMPLATE_TYPE`, is better.
Secondly, you can't have multiple records returned in a `CASE` statement. In order to achieve what you are trying to do, you need to use conditional `OR` and `AND` statements such as below:
```
WHERE (
(@TEMPLATE_TYPE=0 OR @TEMPLATE_TYPE=1)
AND TEMPLATE_CAN_BE_APPLIED_TO IN (SELECT col FROM [your table variable])
)
OR (
@TEMPLATE_TYPE=2
AND TEMPLATE_CAN_BE_APPLIED_TO IN (SELECT DATA FROM UDF_DP_SPLIT_STRING('0,2', ','))
)
``` | Let's see if this helps:
```
CREATE PROCEDURE TEST_TEMPLATE
@TEMPLATE_TYPE INT=NULL
AS
BEGIN
SELECT TEMPLATE_ID,
TEMPLATE_NAME
FROM BDC_TEMPLATES
WHERE (TEMPLATE_CAN_BE_APPLIED_TO = @TEMPLATE_TYPE AND (@TEMPLATE_TYPE=0 OR @TEMPLATE_TYPE=1))
   OR (TEMPLATE_CAN_BE_APPLIED_TO IN (SELECT DATA FROM UDF_DP_SPLIT_STRING('0,2', ',')) AND @TEMPLATE_TYPE=2)
OR (@TEMPLATE_TYPE>3)
END
``` | SQL query error from case when statement with subquery | [
"",
"sql",
"sql-server",
"subquery",
""
] |
I've got a problem.
I have a table with these columns:
languageID languageItem
Every row can have only "2" or "3" as languageID, and languageItem is always different, but only within the same languageID. For example I can have:
```
2 | header.title
2 | header.description
3 | header.description
3 | header.title
```
The problem is that there are now fewer rows with languageID "3" than with languageID "2", and I need each languageID to have the same languageItem(s). Like:
```
2 | header.title
2 | header.description
2 | header.button
3 | header.title
3 | header.description
```
header.button is missing for "3".
I want to select the extra rows that languageID 2 has, then "copy" them and insert them with languageID 3.
Thanks
EDIT: The rows don't have only these 2 columns, but also others. | You can do this by just inserting values that don't exist. The query looks like:
```
insert into table(languageId, languageItem)
select 3, languageitem
from table t
where languageid = 2 and
not exists (select 1
from table t3
                      where t3.languageid = 3 and t3.languageitem = t.languageitem
);
``` | You can use INSERT IGNORE here:
```
insert ignore into mytable(languageId, languageItem)
select 3 as languageId, languageitem from mytable where languageid = 2;
```
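SQLite spells this `INSERT OR IGNORE`, which makes the behaviour easy to demonstrate (a sketch using the question's sample rows and an assumed unique index on the pair of columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE items (languageid INTEGER, languageitem TEXT,
                                   UNIQUE (languageid, languageitem))""")
con.executemany("INSERT INTO items VALUES (?, ?)",
                [(2, 'header.title'), (2, 'header.description'), (2, 'header.button'),
                 (3, 'header.title'), (3, 'header.description')])

# Copy language 2's items to language 3; duplicates are silently skipped
con.execute("""
    INSERT OR IGNORE INTO items (languageid, languageitem)
    SELECT 3, languageitem FROM items WHERE languageid = 2
""")
rows = con.execute(
    "SELECT languageitem FROM items WHERE languageid = 3 ORDER BY languageitem"
).fetchall()
print(rows)  # header.button has been filled in for language 3
```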
(Provided of course there is a unique index on languageId + languageItem.) | Sql query for searching values in the same table | [
"",
"mysql",
"sql",
""
] |
Can MySQL search for table rows using two or more indexes?
I have a table with these columns:
```
relation:
- type varchar(32)
- id_foo int
- id_bar int
```
I need to search for rows based on `type` and either `id_foo` or `id_bar`. The queries will look like this:
```
SELECT id_foo FROM relation WHERE type = 'baz' AND id_bar = 128
SELECT id_bar FROM relation WHERE type = 'qux' AND id_foo = 256
```
What is the correct way to define indexes for this table and for these search queries?
Should I put separate indexes on all three columns (`type`, `id_foo`, `id_bar`), or is it better to create two multi-column indexes - `type + id_foo` and `type + id_bar`? | It is better to create two indexes: `relation(type, id_bar)` and `relation(type, id_foo)`.
In your case, the two columns could be in either order. But you need two indexes to optimize both queries.
MySQL actually has pretty good documentation on the use of composite indexes. See [here](http://dev.mysql.com/doc/refman/5.7/en/multiple-column-indexes.html). | Yes MySQL can. You can find out yourself by issuing:
```
EXPLAIN SELECT id_foo FROM relation WHERE type = 'baz' AND id_bar = 128
```
There MySQL prints which indexes were used.
Multi column indexes (type,id\_bar and type,id\_foo; you need both in order to speed up both `select` queries) are a possible way to speed up such queries, however, multi column indexes do not make sense in all cases. E.g., a multi column index on type and id\_bar (in this order) doesn't help if you have queries which do not include type.
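SQLite offers the same kind of introspection via `EXPLAIN QUERY PLAN`; a sketch showing the planner picking the two-column index that matches both equality terms:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE relation (type TEXT, id_foo INTEGER, id_bar INTEGER)")
con.execute("CREATE INDEX ix_type_bar ON relation (type, id_bar)")
con.execute("CREATE INDEX ix_type_foo ON relation (type, id_foo)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id_foo FROM relation WHERE type = 'baz' AND id_bar = 128"
).fetchall()
# The last column of each plan row names the index the planner chose
print(plan[0][-1])
```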
See <http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html> and <http://dev.mysql.com/doc/refman/5.0/en/multiple-column-indexes.html> | MySQL - lookup based on more indexes | [
"",
"mysql",
"sql",
"indexing",
""
] |
One of the columns of my SQL Server table is `mm:ss` of `varchar` type where `mm` = minutes and `ss` = seconds.
I need to get the average of that column.
Should I convert that column to `datetime` format first? If so can you tell me how? If not can you tell me what I should do?
Here is my failed attempt to convert it to datetime :
```
SELECT CONVERT(DATETIME, '2014-01-01 '+Pace+':00', 108)
```
Where pace is a `varchar` like `23:05` | If you want the average, I would convert the column to number of seconds and work with that:
```
select avg(pace_secs) as average_in_seconds
from (select cast(left(pace, 2) as int) * 60 + cast(right(pace, 2) as int) as pace_secs
from t
) t;
```
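With fixed-width `mm:ss` strings, the seconds conversion is easy to verify in SQLite (the `substr` positions assume exactly two digits on each side of the colon):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (pace TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("23:05",), ("21:55",)])

# mm:ss -> total seconds, then average: (1385 + 1315) / 2 = 1350
row = con.execute("""
    SELECT AVG(CAST(substr(pace, 1, 2) AS INTEGER) * 60
             + CAST(substr(pace, 4, 2) AS INTEGER))
    FROM t
""").fetchone()
print(row[0])  # 1350.0 seconds, i.e. 22:30
```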
If you want this back in the format, then you can do:
```
select right('00' + cast(avg(pace_secs) / 60 as varchar(10)), 2) + ':' +
       right('00' + cast(avg(pace_secs) % 60 as varchar(10)), 2)
from (select cast(left(pace, 2) as int) * 60 + cast(right(pace, 2) as int) as pace_secs
from t
) t;
``` | ```
declare @pace varchar(20) = '23:05 ';
SELECT cast( '2014-01-01 '+cast(@pace as varchar(5))+':00' as datetime)
``` | SQL conversion from varchar to datetime | [
"",
"sql",
"sql-server",
""
] |
How can I set the default value of a field to '0000-00-00 00:00:00'? What if we can't use '0000-00-00 00:00:00' as the default? What is the **basic** valid datetime?
For instance, this is the SQL for creating my article table,
```
-- -----------------------------------------------------
-- Table `article`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `article` ;
CREATE TABLE IF NOT EXISTS `article` (
`article_id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`url` VARCHAR(255) NOT NULL,
`title` VARCHAR(255) NOT NULL,
`date_from` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'Set the article as new or featured from a datetime.',
`date_to` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'Set the article as new or featured to a datetime.',
`backdated_on` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'The manual datetime that is modified or input by the user.',
`created_on` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'The permanent datetime when the article is created.',
`updated_on` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'The datetime when the article is updated on.',
PRIMARY KEY (`article_id`, `parent_id`, `template_id`),
UNIQUE INDEX `url_UNIQUE` (`url` ASC))
ENGINE = MyISAM
AUTO_INCREMENT = 66
COMMENT = 'Entity that holds the article with one-to-one properties.';
```
I get this error when I run this query,
```
#1067 - Invalid default value for 'date_from'
``` | **Cause of the error: the SQL mode**
You can set the default value of a `DATE`, `DATETIME` or `TIMESTAMP` field to the special "zero" value '0000-00-00' as a dummy date if the SQL mode permits it. For MySQL versions lower than 5.7.4 this is governed by the NO\_ZERO\_DATE mode; see this excerpt from the [documentation](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-types.html):
> MySQL permits you to store a “zero” value of '0000-00-00' as a
> “dummy date.” This is in some cases more convenient than using NULL
> values, and uses less data and index space. To disallow '0000-00-00',
> enable the NO\_ZERO\_DATE SQL mode.
Additionally, strict mode has to be enabled to disallow "zero" values:
> If this mode and strict mode are enabled, '0000-00-00' is not permitted
> and inserts produce an error, unless IGNORE is given as well.
As of [MySQL 5.7.4](http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sql-mode-strict) this depends only on the strict mode:
> Strict mode affects whether the server permits '0000-00-00' as a
> valid date:
>
> If strict mode is not enabled, '0000-00-00' is permitted and inserts
> produce no warning.
>
> If strict mode is enabled, '0000-00-00' is not permitted and inserts
> produce an error, unless IGNORE is given as well. For INSERT IGNORE
> and UPDATE IGNORE, '0000-00-00' is permitted and inserts produce a
> warning.
**Check version and SQL mode**
So you should check your MySQL version and the [SQL mode](https://dev.mysql.com/doc/refman/5.6/en/sql-mode.html) of your MySQL server with
```
SELECT version();
SELECT @@GLOBAL.sql_mode global, @@SESSION.sql_mode session
```
**Enable the INSERT**
You can set the sql\_mode for your session with `SET sql_mode = '<desired mode>'`
```
SET sql_mode = 'STRICT_TRANS_TABLES';
```
**Valid range for DATETIME**
The supported range for `DATETIME` is
```
'1000-01-01 00:00:00' to '9999-12-31 23:59:59',
```
so the minimal valid DATETIME value is '1000-01-01 00:00:00'.
I wouldn't recommend using this value, though.
**Additional Note**
Since MySQL 5.6.5 all `TIMESTAMP` and `DATETIME` columns can have the magic behavior (initializing and/or updating), not only `TIMESTAMP` and only one column at most, see [Automatic Initialization and Updating for TIMESTAMP and DATETIME](https://dev.mysql.com/doc/refman/5.6/en/timestamp-initialization.html):
> As of MySQL 5.6.5, TIMESTAMP and DATETIME columns can be automatically
> initialized and updated to the current date and time (that is, the
> current timestamp). Before 5.6.5, this is true only for TIMESTAMP, and
> for at most one TIMESTAMP column per table. The following notes first
> describe automatic initialization and updating for MySQL 5.6.5 and up,
> then the differences for versions preceding 5.6.5.
You could change your CREATE TABLE statement in the case of MySQL 5.6.5 or newer to:
```
CREATE TABLE IF NOT EXISTS `article` (
`article_id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`url` VARCHAR(255) NOT NULL,
`title` VARCHAR(255) NOT NULL,
`date_from` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'Set the article as new or featured from a datetime.',
`date_to` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'Set the article as new or featured to a datetime.',
`backdated_on` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'The manual datetime that is modified or input by the user.',
`created_on` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'The permanent datetime when the article is created.',
`updated_on` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'The datetime when the article is updated on.',
PRIMARY KEY (`article_id`, `parent_id`, `template_id`),
UNIQUE INDEX `url_UNIQUE` (`url` ASC))
ENGINE = MyISAM
AUTO_INCREMENT = 66
COMMENT = 'Entity that holds the article with one-to-one properties.';
``` | There is a solution.
I'm not sure about the safety of this, so I recommend running
```
SELECT @@SESSION.sql_mode session
```
first. Save the value in your clipboard, then
```
SET SESSION sql_mode = '';
```
After that you can create or alter the table with default values like '0000-00-00 00:00:00' | How can I set the default value of a field as '0000-00-00 00:00:00'? | [
"",
"mysql",
"sql",
"date",
"datetime",
""
] |
I'm attempting to create a stored procedure that implements an exclusive or to return a specific record from a table (BasisValues).
Here's the table:
```
Basis Value
1 4.25
2 1.25
3 99.00
0 0.00
```
I've designed a stored procedure to return a BasisValue as follows:
```
CREATE PROCEDURE GetBasisValue
@Basis VARCHAR(2)
AS
BEGIN
SELECT Value FROM BasisValues
WHERE ( BASIS = @Basis) OR
((BASIS <> @Basis) AND (Basis = '0'))
END
```
As you can see above, I'm attempting to SELECT the Value field from the BasisValues table according to the following logic:
1. If the Basis is equal to the @Basis passed into the stored procedure, return that value.
2. If the Basis is not equal to the @Basis passed into the stored procedure, then return the value of Basis = 0.
However, given a 'Basis' that exists in the table, I end up getting two records. For example, if I called GET\_BASIS\_VALUES with a @Basis of 2 I get:
```
Basis Value
2 1.25
0 0.00
```
In other words, I always return the '0' record no matter what. What am I missing in terms of my logic? | You want to use `order by` to select the preferred value:
```
SELECT TOP 1 Value
FROM BasisValues
WHERE ( BASIS = @Basis) OR
((BASIS <> @Basis) AND (Basis = '0'))
ORDER BY (case when BASIS = @BASIS then 1 else 0 end) DESC;
```
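The ordering trick above can be verified with an in-memory database (sketch using Python's built-in sqlite3; `TOP 1` becomes `LIMIT 1`, and the WHERE clause is written in the equivalent simplified form `Basis = ? OR Basis = '0'`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BasisValues (Basis TEXT, Value REAL)")
con.executemany("INSERT INTO BasisValues VALUES (?, ?)",
                [("1", 4.25), ("2", 1.25), ("3", 99.00), ("0", 0.00)])

def get_basis_value(basis):
    # LIMIT 1 plays the role of SQL Server's TOP 1 here.
    row = con.execute(
        "SELECT Value FROM BasisValues"
        " WHERE Basis = ? OR Basis = '0'"
        " ORDER BY (CASE WHEN Basis = ? THEN 1 ELSE 0 END) DESC"
        " LIMIT 1", (basis, basis)).fetchone()
    return row[0]

print(get_basis_value("2"))   # matching row wins over the default
print(get_basis_value("9"))   # no match: the '0' default row is returned
```

With a basis of '2' the matching row (1.25) wins; with a basis that doesn't exist in the table, the '0' default row (0.00) comes back.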
This query (without the `top`) will return one or two values, based on what you describe in the question. It will then order the results, so that the non-default is first. By choosing the first row after the `order by`, you get a match first, and then the default if there is no match. | ```
CREATE PROCEDURE GetBasisValue
@Basis VARCHAR(2)
AS
BEGIN
SELECT ISNULL((SELECT Value FROM BasisValues
WHERE (BASIS = @Basis)), '0.00')
END
``` | Implementing 'exclusive or' to return one record | [
"",
"sql",
"sql-server-2008",
"stored-procedures",
"logical-operators",
""
] |
so I've got a student table which has
FirstName, Lastname, MisID and NetworkName. (MisID is username)
There's an issue that students can potentially have two laptops (an older one), which is not allowed. How can I find when, for example, the user ID occurs twice (therefore having two laptops)?
This is my code so far, but it just groups by the entire MisID, making the count useless.
```
Select DISTINCT MisID, networkname FROM dbo.CompiledStudentData
GROUP BY MisID, networkname
HAVING COUNT(MisID) >= 2;
```
Cheers | You need to use:
```
select distinct MisID, networkname
FROM dbo.CompiledStudentData as t1
inner join
(
Select MisID FROM dbo.CompiledStudentData
GROUP BY MisID
HAVING COUNT(distinct networkname) >= 2
) as t2
on (t1.MisID = t2.MisID)
order by 1, 2
```
the main idea here:
1. get the list of students (MisID) who have at least 2 different network names (laptops); this is done by the inner query (which creates a temporary "table")
2. Join between original table (alias: t1) and this temporary one (alias: t2) gives us a list of such students with all network names used | ```
Select count(1) as cnt
     , MisID
     , networkname
FROM dbo.CompiledStudentData
GROUP BY MisID, networkname
``` | Finding multiple (different) values under same ID SQL | [
"",
"sql",
"sql-server",
""
] |
How can I optimize this query and increase its performance?
```
SELECT user.id,user.name,profile.info,score.amount
FROM user
LEFT JOIN profile ON profile.user_id = user.id AND profile.type = 'language'
LEFT JOIN score ON score.user_id = user.id AND score.type = 'total'
WHERE email = 'example@mail.com'
```
return results:
```
[id] => 1060225
[name] => john
[info] => En
[ammount] => 533
```
return results 2:
```
[id] => 1022805
[name] => karin
[info] =>
[ammount] => 11
```
tables:
users table
```
id name email
1 john example@mail.com
2 karin tt@kkk.com
3 Tom kk@yahoo.com
4 kit mm@gmail.com
```
profile table
```
id user_id type info
1 1 is_admin true
2 1 language En
3 1 active true
4 2 is_admin false
1 1 like null
2 2 favorite null
3 3 is_admin false
4 2 experience 4
```
score table
```
id user_id type amount
1 1 daily 33
2 1 total 533
3 2 total 11
4 3 daily 44
```
thanks, | For this query:
```
SELECT u.id, u.name, p.info, s.amount
FROM user u LEFT JOIN
profile p
ON p.user_id = u.id AND p.type = 'language' LEFT JOIN
score s
ON s.user_id = u.id AND s.type = 'total'
WHERE u.email = 'example@mail.com';
```
(I just added table aliases to make the query more readable and to clarify where the columns come from.)
Try the following indexes:
```
create index idx_user_email_id_name on user(email, id, name);
create index idx_profile_userid_type on profile(user_id, type);
create index idx_score_userid_type on score(user_id, type);
``` | You should give the `profile` and `score` tables composite indexes on `(type, user_id)`. I'm not sure whether it will be better to have `type` or `user_id` first in the indexes, you should try each of them and see which is better; you can use `EXPLAIN` to compare the execution plans for this. | optimize SQL query with multiple JOIN | [
"",
"mysql",
"sql",
""
] |
I have defined a stored procedure in Postgresql:
```
create or replace function
test1(arg1 text, opt1 text default 'default value')
returns text as $$ select $1 || ' ' || $2; $$ language sql;
```
I would like to inspect this function definition to find out:
1. Which parameters have a default value
2. (bonus) What the default values are
I tried comparing `information_schema.parameters` records for the example:
```
select * from information_schema.parameters;
```
*(using `diff -y --suppress-common-lines`)*
```
-[ RECORD 4808 ]---------+----------------------- | -[ RECORD 4809 ]---------+-----------------------------------
ordinal_position | 1 | ordinal_position | 2
parameter_name | arg1 | parameter_name | opt1
dtd_identifier | 1 | dtd_identifier | 2
```
The only place I have managed to find reference to the default value is in the `pg_proc` table:
```
select proargdefaults from pg_proc where proname = 'test1' ;
-[ RECORD 1 ]--+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
proargdefaults | ({CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 62 :constvalue 10 [ 40 0 0 0 119 105 98 98 108 101 ]})
```
That bit in square brackets at the end seems to contain the default in some kind of internal format:
*(using Python)*
```
>>> d="68 0 0 0 100 101 102 97 117 108 116 32 118 97 108 117 101"
>>> "".join([chr(int(a)) for a in d.split()])
'D\x00\x00\x00default value'
```
My question: Is there a better way to inspect this function to find out about default parameters?
**Update** I am looking for something that produces output like...
```
-[ RECORD 1 ]----+----------------------------
arg_index | 1
arg_name | arg1
arg_has_default | f
arg_default_text | <null>
-[ RECORD 2 ]----+----------------------------
arg_index | 2
arg_name | opt1
arg_has_default | t
arg_default_text | default value
``` | You can select the optional arguments from `pg_proc`, but you probably won't be able to parse their default values' expressions on PostgreSQL's side (because they can have different types).
```
select proargnames[pronargs-pronargdefaults+1:pronargs] optargnames,
pg_get_expr(proargdefaults, 0) optargdefaults
from pg_proc
where proname = 'test1'
```
[SQLFiddle](http://sqlfiddle.com/#!15/d41d8/2931)
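As a side note, the array slice in that query is 1-based and upper-inclusive, as Postgres slices are; the equivalent Python slice (using hypothetical `pg_proc` values for the `test1` example) would be:

```python
# Hypothetical pg_proc values for: test1(arg1 text, opt1 text default '...')
proargnames = ["arg1", "opt1"]
pronargs = 2           # total number of arguments
pronargdefaults = 1    # how many (trailing) arguments have defaults

# Postgres: proargnames[pronargs-pronargdefaults+1 : pronargs] (1-based, inclusive)
# Python:   0-based, end-exclusive -- i.e. the last pronargdefaults names:
optargnames = proargnames[pronargs - pronargdefaults:]
print(optargnames)
```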
**EDIT**: found a way to easily parse the default values into a `json` value (keys are the argument names, values are their default values' json representation):
```
create or replace function proargdefaultsjson(proc pg_proc) returns json
language plpgsql
as $function$
declare
expr_parsed record;
begin
execute format(
'select * from (values (%s)) v(%s)',
pg_get_expr(proc.proargdefaults, 0),
array_to_string(array(
select quote_ident(n)
from unnest(proc.proargnames[proc.pronargs-proc.pronargdefaults+1:proc.pronargs]) n
), ',')
) into expr_parsed;
return row_to_json(expr_parsed);
end
$function$;
```
This function should work in PostgreSQL 9.2+: [SQLFiddle](http://sqlfiddle.com/#!12/d41d8/2503)
**EDIT 2**: you can achieve something similar with the [`hstore` module](http://www.postgresql.org/docs/current/static/hstore.html), if you return with `hstore(expr_parsed);` (this case you will end up with each default expression's text representation). | You can use `pg_get_function_arguments()` to retrieve the definition of the parameters:
```
SELECT pg_get_function_arguments(p.oid) as parameters
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE p.proname = 'test1'
AND n.nspname = 'public';
```
This will return `arg1 text, opt1 text DEFAULT 'default value'::text` for your example. | Postgresql: How can I inspect which arguments to a procedure have a default value? | [
"",
"sql",
"postgresql",
""
] |
I want to retrieve the maximum invoice number from my invoice table. When I select all records, it returns the following, where you can clearly see the maximum invoice number is "10".
```
select * from invoice
```

But when I query for
```
select MAX(invoice_number) as maxinv from invoice
```
It returns me "9". Why is that?
 | This situation can occur if your `invoice_number` is stored as a text column e.g. `varchar(10)`. In that case, based on alphabetical order, 9 will be the maximum value.
Ideally, you should be storing values on which you want to perform numerical operations as numeric datatypes e.g. `int`. However, if for some reason you cannot change column datatype, you can try casting the column before applying `MAX`, like so:
```
select max (convert(invoice_number, signed integer)) as maxinv from invoice
```
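The underlying behavior is easy to reproduce with an in-memory database (sketch using Python's sqlite3; SQLite spells the conversion `CAST(... AS INTEGER)` rather than MySQL's `CONVERT(..., SIGNED INTEGER)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoice (invoice_number TEXT)")
con.executemany("INSERT INTO invoice VALUES (?)",
                [(str(n),) for n in range(1, 11)])   # '1' .. '10' as text

# MAX over a text column compares alphabetically: '9' beats '10'.
text_max = con.execute("SELECT MAX(invoice_number) FROM invoice").fetchone()[0]
# Casting to a numeric type first gives the expected result.
numeric_max = con.execute(
    "SELECT MAX(CAST(invoice_number AS INTEGER)) FROM invoice").fetchone()[0]
print(text_max, numeric_max)
```

Over the text values '1'..'10', the plain MAX picks '9' alphabetically, while the cast version returns 10.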
NOTE: I specifically mention **"values on which you want to perform numerical operations"** because there are cases where the input text is entirely numeric, such as phone numbers or perhaps credit card numbers, but there is no scenario in which you would want to add 2 phone numbers, or take the square root of a credit card number. Such values should be stored as text. | It happens because your column is a `VARCHAR`, try to cast it to `UNSIGNED`:
```
select MAX(CAST(invoice_number AS UNSIGNED)) as maxinv from invoice
```
As Joshi noticed, if you have negative values you can use `SIGNED`
```
select MAX(CAST(invoice_number AS SIGNED)) as maxinv from invoice
``` | MAX(Column) returns me a wrong value | [
"",
"mysql",
"sql",
""
] |
I'm creating a multi-language website. Here is my sample SQL:
> ```
> select * from admin_real_estate_category_languages as category_languages
> left join admin_real_estate_categories as category on category_languages.real_estate_category_id = category.id
> where language = 'jp' and category = 'Area'
> ```
All details have language = 'en', but not all have language = 'jp'. If language = 'jp' does not exist in the details, I want it to query language = 'en'.
In short, I want the query to have a default where clause if the first where clause does not return anything.
here is a sample of my goal.
> ```
> select * from admin_real_estate_category_languages as category_languages
> left join admin_real_estate_categories as category on category_languages.real_estate_category_id = category.id
> where language = 'jp' and category = 'Area'
> default where language = 'en' and category = 'Area'
> ```
thanks :) | While this might not be the best solution since it introduces an additional join, you can solve your problem in this way.
First off, you probably don't want to do select \*; that's generally a bad idea. Select the columns that you actually want to use instead (even if that is all of them). Once you have the actual columns, you can use COALESCE to solve your problem.
Since you haven't provided the actual schema I'm guessing a bit here, hopefully you'll get the gist of the solution. Also notice that I've switched places on your language table and your category table so I join the language table on the category table, not the other way around as you are doing in the main question.
```
SELECT category.foo,
category.bar,
category.baz,
COALESCE(category_language.text, default_category_language.text)
FROM admin_real_estate_categories AS category
LEFT JOIN admin_real_estate_category_languages AS category_language
ON category_language.real_estate_category_id = category.id
AND category_language.language = 'jp'
JOIN admin_real_estate_category_languages AS default_category_language
ON default_category_language .real_estate_category_id = category.id
AND default_category_language.language = 'en'
WHERE category.category = 'Area'
```
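A stripped-down version of this fallback can be checked with an in-memory database (sketch using Python's sqlite3; the table and columns are simplified stand-ins for the real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cat_lang (category_id INT, language TEXT, text TEXT);
INSERT INTO cat_lang VALUES (1, 'en', 'Area');    -- no 'jp' row for category 1
INSERT INTO cat_lang VALUES (2, 'en', 'Price');
INSERT INTO cat_lang VALUES (2, 'jp', 'Kakaku');
""")

# LEFT JOIN the optional language, inner-join the default, COALESCE the two.
rows = con.execute("""
SELECT c.category_id, COALESCE(jp.text, en.text) AS text
FROM (SELECT DISTINCT category_id FROM cat_lang) c
LEFT JOIN cat_lang jp
       ON jp.category_id = c.category_id AND jp.language = 'jp'
JOIN cat_lang en
       ON en.category_id = c.category_id AND en.language = 'en'
ORDER BY c.category_id
""").fetchall()
print(rows)
```

Category 1 falls back to its 'en' text, while category 2 gets its 'jp' translation.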
Of course, this can quickly escalate and get out of hand if you need too many columns with this type of handling. If you only need this type of handling on one or two columns this strategy usually works fine, but more than that and you really shouldn't use this strategy. In that case I'd probably split the query instead (just make sure not to accidentally create some sort of SELECT N+1 monstrosity if you do that!). | You could select both, order by language and limit to one (to get just the best match).
```
WHERE language in ('jp', 'en') ORDER BY language DESC LIMIT 1
```
This assumes that there is only one translation resource to be returned.
If there are many, you'd have to group by (or just get all and handle it in your application code). | SQL: Default where clause | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I have a table that has 2 duplicate rows (3 rows in total), so I used the code below to get the duplicated value in the column:
```
SELECT CustNo, COUNT(*) TotalCount
FROM Rental
GROUP BY CustNo
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC
```
So once I get the repeated value, I need to get the CustNo derived as duplicate from the customer table. How do I go about taking this value and using it in the select statement, all in the same query?
I also have the select statement prepared like this.
```
Select * from Customer where CustNo = 'T0002';
```
Thanks. | ```
Select * from Customer
where CustNo IN
(
SELECT CustNo
FROM Rental
GROUP BY CustNo
HAVING COUNT(*) > 1
)
``` | You can use `join`:
```
SELECT c.*
FROM (SELECT CustNo, COUNT(*) TotalCount
FROM Rental
GROUP BY CustNo
HAVING COUNT(*) > 1
) cc JOIN
Customer c
on cc.CustNo = c.CustNo;
``` | Row value from another table | [
"",
"sql",
"sql-server",
"t-sql",
"select",
""
] |
I'm trying to make an output array from the next statement:
I have 2 tables. In each table there is "material\_code", which plays the main role. I want to select *material\_code* from *table1* where it is equal to *material\_code* from *table2*, and join them where *status* (from table2) is equal to *0*.
This is what I've got so far.
```
SELECT material_code FROM table1
LEFT JOIN table2 ON material_code WHERE status IS 0
```
Solution:
```
SELECT material_code FROM table1
LEFT JOIN table2 ON table1.material_code = table2.material_code WHERE status = 0
```
Thank everyone. | You should try this one:
```
SELECT
material_code
FROM table1
LEFT JOIN table2
ON table1.material_code = table2.material_code
WHERE status = 0
``` | Sounds like you want an `INNER JOIN`
```
SELECT material_code FROM table1
INNER JOIN table2 USING (material_code) WHERE status = 0
```
If you use `LEFT JOIN`, you will get records that only exist in table1 and not in table2. You also want to use `USING` not `ON` if the column name is the same in both tables (at least in MySQL). | Select join statement is not working | [
"",
"mysql",
"sql",
""
] |
I wrote a SQL function in SQL Server to convert a year-week string into a date, as below:
```
CREATE FUNCTION dbo.f_ConvertStringToDate
(@yearWeek VARCHAR)
RETURNS DATETIME
AS
BEGIN
RETURN DATEADD(WEEK, CONVERT(INT, SUBSTRING(@yearWeek, 6, 2)),
DATEADD(YEAR, CONVERT(INT, SUBSTRING(@yearWeek, 1, 4)) - 1900, 0))
END
```
And then when I execute the below query:
```
select dbo.f_ConvertStringToDate(yearWeek)
from dbo.mytab
where year = 2014 and week = 25;
```
`yearWeek` is a string of format `2014-25`. I get an error:
> SQL Error: Adding a value to 'datetime' column caused an overflow. | I found that this worked when I changed your definition to:
```
CREATE FUNCTION dbo.f_ConvertStringToDate
(@yearWeek VARCHAR(50))
RETURNS DATETIME
AS
BEGIN
RETURN DATEADD(WEEK, CONVERT(INT, SUBSTRING(@yearWeek, 6, 2)), DATEADD(YEAR, CONVERT(INT, SUBSTRING(@yearWeek, 1, 4)) - 1900, 0))
END
```
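For reference, the week arithmetic the function performs can be mirrored outside the database (Python sketch; `2014-25` is the sample format from the question):

```python
from datetime import date, timedelta

def convert_string_to_date(year_week):
    # Mirrors DATEADD(WEEK, ww, DATEADD(YEAR, yyyy - 1900, 0)):
    # start at Jan 1 of the year, then add the given number of weeks.
    year, week = int(year_week[0:4]), int(year_week[5:7])
    return date(year, 1, 1) + timedelta(weeks=week)

result = convert_string_to_date("2014-25")
print(result)
```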
Just to note, the way to debug this was to run the SQL without using a function, if you try
```
DECLARE @yearweek VARCHAR;
SELECT @yearweek = '2014-05';
SELECT @yearweek;
```
...then you will see that @yearweek has been set a value of "2" instead of "2014-05". | Varchar [ ( n | max ) ]
Variable-length, non-Unicode string data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered + 2 bytes.
When n is not specified in a data definition or variable declaration statement, the default length is 1. When n is not specified when using the CAST and CONVERT functions, the default length is 30.
So in your case the overflow will happen. To fix this, change `varchar` to `varchar(20)`.
[Link](http://msdn.microsoft.com/en-IN/library/ms176089.aspx) | Adding a value to 'datetime' causing an overflow in SQL Server select query | [
"",
"sql",
"sql-server",
""
] |
I'm working on a problem with two tables: Charge and ChargeHistory. I want to display a selection of columns from both tables where either the matching row in ChargeHistory has a different value and/or date from Charge, or there is no matching entry in ChargeHistory at all.
I'm using a left outer join declared using the ansi standard and while it does show the rows correctly where there is a difference, it isn't showing the null entries.
I've read that there can sometimes be issues if you are using the WHERE clause as well as the ON clause. However, when I try to put all the conditions in the ON clause, the query takes too long (> 15 minutes, so long that I have just cancelled the runs).
To make things worse both tables use a three part compound key.
Does anyone have any ideas as to why the null values are being left out?
```
SELECT values...
FROM bcharge charge
LEFT OUTER JOIN chgHist history
ON charge.key1 = history.key1 AND charge.key2 = history.key2 AND charge.key3 = history.key3 AND charge.chargeType = history.chargeType
WHERE charge.chargeType = '2'
AND (charge.value <> history.value OR charge.date <> history.date)
ORDER BY key1, key2, key
``` | You probably want to explicitly select the null values:
```
SELECT values...
FROM bcharge charge
LEFT OUTER JOIN chgHist history
ON charge.key1 = history.key1 AND charge.key2 = history.key2 AND charge.key3 = history.key3 AND charge.chargeType = history.chargeType
WHERE charge.chargeType = '2'
AND ((charge.value <> history.value or history.value is null) OR (charge.date <> history.date or history.date is null))
ORDER BY key1, key2, key
``` | You can explicitly look for a match in the `where`. I would recommend looking at one of the keys used for the `join`:
```
SELECT . . .
FROM bcharge charge LEFT OUTER JOIN
chgHist history
ON charge.key1 = history.key1 AND charge.key2 = history.key2 AND
charge.key3 = history.key3 AND charge.chargeType = history.chargeType
WHERE charge.chargeType = '2' AND
(charge.value <> history.value OR charge.date <> history.date OR history.key1 is null)
ORDER BY key1, key2, key;
```
The expressions `charge.value <> history.value` change the `left outer join` to an `inner join` because `NULL` results will be filtered out. | Oracle left outer join, only want the null values | [
"",
"sql",
"oracle",
"join",
"outer-join",
""
] |
Let's say I have a table with two columns:
```
id | value
----------
1 | 101
2 | 356
3 | 28
```
I need to randomly permute the value column so that each id is randomly assigned a new value from the existing set {101,356,28}. How could I do this in Oracle SQL?
It may sound odd but this *is* a real problem, just with more columns. | You can do this by using `row_number()` with a random number generator and then joining back to the original rows:
```
with cte as (
select id, value,
row_number() over (order by id) as i,
row_number() over (order by dbms_random.random) as rand_i
from table t
)
select cte.id, cte1.value
from cte join
cte cte1
     on cte.i = cte1.rand_i;
```
This guarantees a permutation (i.e. no original row has its value used twice).
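The shuffle-and-rejoin idea is the same as shuffling a value list in memory (plain Python sketch with the sample rows from the question; a fixed seed keeps it reproducible):

```python
import random

rows = {1: 101, 2: 356, 3: 28}

random.seed(0)                     # fixed seed: reproducible sketch only
values = list(rows.values())
random.shuffle(values)             # random permutation of the value column
permuted = dict(zip(rows.keys(), values))
print(permuted)
```

Every id still gets exactly one value from the original set, and no value is lost or duplicated, which is the defining property of a permutation.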
EDIT:
By the way, if the original `id`s are sequential from 1 and have no gaps, you could just do:
```
select row_number() over (order by dbms_random.value) as id, value
from table t;
``` | An Option : `select * from x_table where id = round(dbms_random.value() * 3) + 1;` [Here 3 is the number of rows in your random data table and I am assuming that id is incremental and unique?]
I'll think of other options. | Permuting values in SQL | [
"",
"sql",
"oracle",
"permutation",
""
] |
I'm fairly new to SQL, and I'm having some problems converting a data type of Guid to String when creating my View. After having a look into the issue I'm not sure why the following statement is throwing the error:
```
Create View or Function failed because no column name was specified for column 2.
```
Statement:
```
CREATE VIEW CustomerContactView
AS
SELECT [Id]
,CAST([Guid] AS NVARCHAR(100))
,[ContactGuid]
,[MethodId]
,[Details]
,[OutcomeId]
,[ActionRequired]
,[IsValid]
FROM [Main].[dbo].[Customer_ContactLog]
```
Any help is greatly appreciated, Thanks | You just need to give a new name for that column that will be used when selecting from the view:
```
CREATE VIEW CustomerContactView
AS
SELECT [Id]
,CAST([Guid] AS NVARCHAR(100)) AS Guid
,[ContactGuid]
,[MethodId]
,[Details]
,[OutcomeId]
,[ActionRequired]
,[IsValid]
FROM [Main].[dbo].[Customer_ContactLog]
```
This is because the value in that column is calculated dynamically and is not related to the value in table. A computed column is a virtual column that is not physically stored in the table therefore the DBMS cannot use the original column name from a table. | In a view, when you apply a function or otherwise manipulate a column, concatenate multiple columns, anything like that, you need to give that column an alias:
`CAST([Guid] AS NVARCHAR(100)) AS <some clever alias goes here>`. | Converting data type on SQL Server View Creation | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table in an Oracle database with one column of type `varchar2` that stores dates. The problem is that it stores dates with a two-character year, like 19/08/66, which is equivalent to 19/08/1966. But when I convert this value using the `to_date` function, it returns 19/08/2066 instead of the expected 19/08/1966. How can I convert it properly? | Use the to\_date function.
```
TO_DATE('16/06/66','DD/MM/RRRR')
```
Output is
16/06/1966
```
TO_DATE('16/06/16','DD/MM/RRRR')
```
Output is
16/06/2016
`TO_DATE('16/06/16','DD/MM/RRRR')`, had you run this query in the 1940s:
Output is
16/06/1916
The logic Oracle follows is as below: the function accepts a 2-digit year and returns a 4-digit year.
A value between 0-49 will return a 20xx year.
A value between 50-99 will return a 19xx year. | If I remember Oracle correct the `to_date`function accepts a format\_mask and if you use this mask: `'DD/MM/RRRR'` values between 0-49 will return a 20xx year and values between 50-99 will return a 19xx year. So try this:
```
to_date('19/08/66', 'DD/MM/RRRR')
``` | Oracle varchar2 to date conversion | [
"",
"sql",
"oracle",
""
] |
I am having a problem with the following error:
> ORA-01843: not a valid month
The following query works well if I use
```
SELECT mx.work_order_no, mx.work_order_name, mx.comments
FROM max_orders mx
WHERE TO_DATE (wo_dt, 'dd/mm/rr') <= (TO_DATE (SYSDATE, 'dd/mon/rr') - 7)
```
However if I change where condition clause to
```
WHERE TO_DATE (wo_dt, 'dd/mm/rr') >= (TO_DATE (SYSDATE, 'dd/mon/rr') - 7)
```
I am having issue with
> ORA-01843: not a valid month
What has caused this and how can I resolve this error?
**Update 1**
Underlying view
```
SELECT work_order_no,
work_order_name,
wo_dt,
comments
FROM (SELECT mx_master.work_order_no,
mx_master.work_order_name,
SUBSTR (mx_master.uom, 1, 15) wo_dt,
mx_master.remarks
FROM mx_wo_data mx_master)
``` | I propose you create a stored function to identify the bad rows:
```
create function invalid_date(p_d in varchar2) return number as
v_d date;
begin
v_d := TO_DATE(p_d, 'dd/mm/rr');
return 0;
exception
when others then
return 1;
end;
/
select * from mx_orders where invalid_date(wo_dt)=1;
``` | `SYSDATE` is already a date. You should not be passing it into `TO_DATE()`. When you do that you're doing an implicit conversion to a string, and an explicit conversion back. Gordon Linoff already showed a better way to do that.
Based on the view definition you added, `wo_dt` is a string. You're expecting that to be in `dd/mm/rr` format. The error is telling you that you have values in that column which are not actually in that format, so you'll need to examine the data in the view or the underlying table to see which record(s) have incorrect data.
You could use [something like this](https://stackoverflow.com/a/5966390/266304) to either exclude the values that are not in the right format; or more usefully identify the bad values so they can be removed or corrected, e.g. with something like:
```
select * from max_orders
where my_to_date(wo_dt, 'dd/mm/rr') is null;
```
or from the underlying table:
```
select * from mx_wo_data
where my_to_date(substr(uom, 1, 8), 'dd/mm/rr') is null;
```
If you can't create a function then you can use the same logic in an anonymous block.
---
It's odd that changing the condition causes the error though, as your (implicit and explicit) conversions are applied before the condition is evaluated, and using the function means any index on that column can't be used; so (in the absence of any other filters) you should be doing a full table scan for both queries, the conversion should be applied to all values in the column before it's filtered, and you should get the error either way. So this doesn't really answer that aspect of the question. | ORA-01843: not a valid month When where condition is changed | [
"",
"sql",
"oracle",
""
] |
I'm trying to update a column in SQL, so I tried using this:
```
update [BaseHistorica_]
set Cierre=CONVERT(datetime2,cierre,103)
```
But the following error message appears:
```
Msg 241, Level 16, State 1, Line 15
Conversion failed when converting date and/or time from character string.
```
Then I tried this query, and the result was what I was expecting:
```
SELECT top 1000 [ID]
,convert(datetime2,[Cierre],103) as cierre
,[Seg1]
,[Mora1]
,[Saldo_T]
,[Saldo_FD]
,[Seg2]
,[Mora2]
FROM [TARSHOP].[dbo].[BaseHistorica_]
```
Finally, I tried this query to create the table that I want, and got the same error message:
```
SELECT top 1000 [ID]
,convert(datetime2,[Cierre],103) as cierre
,[Seg1]
,[Mora1]
,[Saldo_T]
,[Saldo_FD]
,[Seg2]
,[Mora2]
into [BaseHistoricaF]
FROM [T].[dbo].[BaseHistorica_]
```
Why does this happen? How can I update this column, or create a table with this column? | The error message tells us that attempting to convert to datetime2 fails "from character string"; hence [Cierre] appears to be a string already.
1. Learn what the actual data type of [Cierre] is first (probably varchar or nvarchar).
2. Then learn what format the stored values conform to, possibly (if you are lucky) YYYYMMDD or YYYY-MM-DD.
3. Do NOT set them to a different format, and absolutely NOT 103 (DD/MM/YYYY).
4. Use convert(varchar, [Cierre], 103) for output only!
If you store date/time information as strings, then
* you simply MUST understand how the values are stored, and
* you are denied simple access to many date/time functions
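To illustrate why the stored format matters so much, here is a small Python sketch with made-up values: DD/MM/YYYY strings compare alphabetically rather than chronologically, while ISO-style YYYY-MM-DD strings happen to sort in date order:

```python
# DD/MM/YYYY strings sort alphabetically, so the day field dominates.
dmy = ["02/01/2014", "15/06/2013", "30/12/2012"]
# ISO YYYY-MM-DD strings sort chronologically, because the format is big-endian.
iso = ["2014-01-02", "2013-06-15", "2012-12-30"]

dmy_sorted = sorted(dmy)  # newest date ends up FIRST: wrong order
iso_sorted = sorted(iso)  # oldest date first: correct order
```

This is why point 2 above (knowing the exact stored format) is non-negotiable before doing any comparisons or conversions.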
Put simply, storing date/time information as strings is bad and will make life in your SQL world more difficult. | What's happening is that column Cierre is a string type, such as varchar or nvarchar. When updating BaseHistorica\_ you're trying to convert cierre to datetime2 and store it back into column Cierre, which is a string. You'll have to figure out the format of the strings in Cierre and conform to that standard.
There's a problem with saving your dates as varchars, though: anyone can input a date in a different format. For instance, someone stores '01/08/2014' and you input the date as '01-08-2014'; they're totally different values because they're strings, not dates. The best thing to do is to have your date columns typed as dates, in your case datetime2, and then format the dates when you're querying the data. When your values are dates you can use various SQL features such as date ranges (BETWEEN '01/01/2012' AND '01/01/2013') and DATEDIFF. So the rule of thumb is to store as a date and only format the output. | Convert date in SQL | [
"",
"sql",
"sql-server",
"string",
"sql-update",
"converters",
""
] |
I have an Ingres table with a date field (data type ingresdate) which has a not-null restriction. However, blank (i.e. empty) values are allowed.
How can you check for a blank value?
Of course, testing for null values using ifnull() does not work, as per the example below.
```
INGRES TERMINAL MONITOR Copyright 2008 Ingres Corporation
Ingres SPARC SOLARIS Version II 9.2.3 login
continue
create table test ( id integer not null, date_field ingresdate not null with default )\g
insert into test (id, date_field) values ( 1, '' )\g
insert into test (id, date_field) values ( 2, '31/12/2014' )\g
continue
(1 row)
continue
select id, date_field, ifnull( date_field, '01/01/2014' ) as test_field from test\g
(1 row)
continue
+-----------------------------------------------------------------+
¦id ¦date_field ¦test_field ¦
+-------------+-------------------------+-------------------------¦
¦ 1¦ ¦ ¦
¦ 2¦31/12/14 ¦31/12/14 ¦
+-----------------------------------------------------------------+
(2 rows)
continue
\q
Your SQL statement(s) have been committed.
Ingres Version II 9.2.3 logout
``` | Actually, thanks to PaulM, I figured it out:
```
INGRES TERMINAL MONITOR Copyright 2008 Ingres Corporation
Ingres SPARC SOLARIS Version II 9.2.3 login
select id, date_field,
case when date_field = '' then '01/01/2014' else date_field end as test_field
from test
+-----------------------------------------------------------------+
¦id ¦date_field ¦test_field ¦
+-------------+-------------------------+-------------------------¦
¦ 1¦ ¦01/01/14 ¦
¦ 2¦31/12/14 ¦31/12/14 ¦
+-----------------------------------------------------------------+
(2 rows)
Ingres Version II 9.2.3 logout
``` | You can use
```
select * from test where date_field = ''
``` | How to check for a blank date value in Ingres | [
"",
"sql",
"date",
"string",
"ingres",
""
] |
I have two sets of data with the same fields:
```
+----+---------+-------------+
| PK | myCDKey | DateCreated |
+----+---------+-------------+
| 1 | 131048 | 8/18/2014 |
| 2 | 131049 | 8/18/2014 |
| 3 | 131050 | 8/18/2014 |
| 4 | 131051 | 8/18/2014 |
| 5 | 131052 | 8/18/2014 |
| 6 | 131053 | 8/18/2014 |
| 7 | 131054 | 8/18/2014 |
| 8 | 131055 | 8/18/2014 |
| 9 | 131058 | 8/18/2014 |
| 10 | 131059 | 8/18/2014 |
+----+---------+-------------+
```
and
```
+----+---------+-------------+
| PK | myCDKey | DateCreated |
+----+---------+-------------+
| 11 | 131048 | 8/19/2014 |
| 12 | 131049 | 8/19/2014 |
| 13 | 131053 | 8/19/2014 |
| 14 | 131054 | 8/19/2014 |
| 15 | 131055 | 8/19/2014 |
| 16 | 131058 | 8/19/2014 |
| 17 | 131059 | 8/19/2014 |
| 18 | 111111 | 8/19/2014 |
| 19 | 222222 | 8/19/2014 |
| 20 | 333333 | 8/19/2014 |
+----+---------+-------------+
```
The output that I would like to have is something like this:
```
+----+---------+------------+
| PK | myCDKey | Delete/Add |
+----+---------+------------+
| 3 | 131050 | delete |
| 4 | 131051 | delete |
| 5 | 131052 | delete |
| 18 | 111111 | add |
| 19 | 222222 | add |
| 20 | 333333 | add |
+----+---------+------------+
```
The output shows me that when comparing the two dates, the most recent actions were that 3 of the CDs were deleted and 3 were added.
**Is there already an out-of-the-box way to do this, perhaps with the MERGE function?**
Thank you to @Linger for pointing out that I should explain how we know that they were added/deleted.
> Added: if the myCDKey exists in the most recent date, but not the
> previous date, then it is added.
>
> Deleted: if the myCDKey exists in the previous date, but not in the
> most recent
**Please note that when comparing 2 data sets, we will ONLY have 2 dates (as in the example here we have only 8/18 and 8/19)** | [**SQL Fiddle**](http://sqlfiddle.com/#!2/c3b601/14/0):
```
SELECT m1.PK, m1.myCDKey, 'delete' AS `DELETE/ADD`
FROM MyTable1 m1
WHERE m1.myCDKey NOT IN
(
SELECT t2.myCDKey
FROM MyTable2 t2
)
UNION
SELECT m2.PK, m2.myCDKey, 'add' AS `DELETE/ADD`
FROM MyTable2 m2
WHERE m2.myCDKey NOT IN
(
SELECT t1.myCDKey
FROM MyTable1 t1
);
```
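Here is a cut-down, runnable sketch of the `NOT IN` variant using SQLite from Python, with only a few of the sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MyTable1 (PK INTEGER, myCDKey INTEGER);
    CREATE TABLE MyTable2 (PK INTEGER, myCDKey INTEGER);
    INSERT INTO MyTable1 VALUES (1, 131048), (3, 131050), (5, 131052);
    INSERT INTO MyTable2 VALUES (11, 131048), (18, 111111);
""")
rows = con.execute("""
    SELECT PK, myCDKey, 'delete' AS action FROM MyTable1
    WHERE myCDKey NOT IN (SELECT myCDKey FROM MyTable2)
    UNION
    SELECT PK, myCDKey, 'add' FROM MyTable2
    WHERE myCDKey NOT IN (SELECT myCDKey FROM MyTable1)
    ORDER BY PK
""").fetchall()
# rows -> [(3, 131050, 'delete'), (5, 131052, 'delete'), (18, 111111, 'add')]
```

Keys only in the first table come out as 'delete', keys only in the second as 'add'.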
Or you could do something like ([**SQL Fiddle**](http://sqlfiddle.com/#!2/c3b601/27/0)):
```
SELECT m1.PK, m1.myCDKey, 'delete' AS `DELETE/ADD`
FROM MyTable1 m1
LEFT JOIN MyTable2 m2 ON m1.myCDKey = m2.myCDKey
WHERE m2.myCDKey IS NULL
UNION
SELECT m2.PK, m2.myCDKey, 'add' AS `DELETE/ADD`
FROM MyTable1 m1
RIGHT JOIN MyTable2 m2 ON m1.myCDKey = m2.myCDKey
WHERE m1.myCDKey IS NULL
``` | the version using a FULL JOIN would be:
```
select coalesce(m1.PK,m2.PK) PK
, coalesce(m1.myCDKey, m2.myCDKey) myCDKey
, case
when m1.PK is null then 'add'
when m2.PK is null then 'delete' else 'error'
end as action
from MyTable1 m1
FULL OUTER JOIN MyTable2 m2 ON m1.myCDKey = m2.myCDKey
WHERE m1.PK is null or m2.PK is null
```
The fiddle is here: [SQL Fiddle](http://sqlfiddle.com/#!3/c3b60/6/0) | merging two sets of data and figuring out which is new and which is old | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
How do you sum a column from an inner join?
I have got this far, but the results are not correct. For example:
```
SELECT DISTINCT it.CODE, pl.UNITS
FROM ITEMDETAILS it inner join plant pl ON it.CODE = pl.CODE
WHERE it.LOCNUMBER = '3434';
```
This gives me this result, which is correct:
```
CODE UNITS
GE-ARH 2
GE-ARV 2
GE-EC 0.5
GE-JB 0.5
GE-JT 0.5
GE-VL2 2
GE-VL4 2
```
I then want to sum all the UNITS into a TOTAL, but when I execute the query below it gives me the wrong calculation. Can anyone show me the error of my ways?
```
SELECT DISTINCT SUM(pl.UNITS) as TotalUnits
FROM PLANT pl inner join ITEMDETAILS it on pl.CODE = it.CODE
WHERE it.LOCNUMBER = '3434';
TotalUnits
972
```
The answer obviously should be 9.5. I presume it is calculating against the whole column and not taking the WHERE clause into consideration, but I'm not sure why.
Thanks for your help as always. | You can do something like:
```
select sum(units)
from
(
SELECT DISTINCT it.CODE, pl.UNITS
FROM ITEMDETAILS it inner join plant pl ON it.CODE = pl.CODE
WHERE it.LOCNUMBER = '3434'
) un
```
Or depending on the sql version
```
;with un as (
SELECT DISTINCT it.CODE, pl.UNITS
FROM ITEMDETAILS it inner join plant pl ON it.CODE = pl.CODE
WHERE it.LOCNUMBER = '3434'
)
select sum(units)
from un
``` | I would do it like this
```
SELECT SUM(UNITS) AS TOTAL_UNITS
FROM
(
SELECT DISTINCT it.CODE, pl.UNITS
FROM ITEMDETAILS it inner join plant pl ON it.CODE = pl.CODE
WHERE it.LOCNUMBER = '3434'
) X
``` | Sum column from inner join | [
"",
"sql",
"sql-server",
""
] |
I have an Excel file that exports data into Word. It includes a cover page and grabs the user name ("First.Last") and changes it to "First Last", but I also need to include the user's professional title. This information is housed in an Access table. It has a field called Name and a field called Title. The Name field matches User exactly, with no duplicates.
I have tried about eight different methods I've found online to grab this value from the table. The table will never happen to be open so I can't use "CurrentDB()".
I just need to be able to reach into the table in a database, grab the "title" value given that the value of the field "Name" is equal to the value of User (user name from the environment - the person using the excel file).
If it helps, I can provide examples of the different chunks of code I've used so far.
User is the username from the environment
tblPickName is the table I am trying to open
Path is the directory and file where the table is located
tblPickName has 2 fields, Name and Title
I need to grab the Title from this table and set it to my variable "Title" as long as Name equals User. Then I can export the username and title to Word along with the rest of the data.
```
Dim Path As String
Dim User As String
Dim Title As String
Dim db As DAO.Database
Dim rs As DAO.Recordset
User = Environ("UserName")
User = Replace(User, ".", " ")
User = StrConv(User, vbProperCase)
Path = "Directory\Name.mdb"
Set db = DBEngine.OpenDatabase(Path)
Set rs = db.OpenRecordset("SELECT tblPickAnalyst.Title FROM tblPickAnalyst WHERE [Analyst]='" & User & "'")
Title = rs!Title
rs.Close
db.Close
Set rs = Nothing
Set db = Nothing
docWord.Bookmarks("AnalystName").Range.Text = User
docWord.Bookmarks("AnalystTitle").Range.Text = Title
``` | Here is the final solution. Thank you to everyone who helped!
```
Dim Path As String
Dim User As String
Dim Title As String
Dim db As DAO.Database
Dim rs As DAO.Recordset
User = Environ("UserName")
User = Replace(User, ".", " ")
User = StrConv(User, vbProperCase)
Path = "Directory\FileName"
Set db = DBEngine.OpenDatabase(Path)
Set rs = db.OpenRecordset("SELECT tblPickAnalyst.Title FROM tblPickAnalyst WHERE [Analyst]='" & User & "'")
Title = rs!Title
rs.Close
db.Close
Set rs = Nothing
Set db = Nothing
docWord.Bookmarks("AnalystName").Range.Text = User
docWord.Bookmarks("AnalystTitle").Range.Text = Title
``` | Try this:
```
Public Sub JohnTayloristheBest()
Dim conAccess As ADODB.Connection
Set conAccess = New ADODB.Connection
Dim rdsAccess As ADODB.Recordset
Dim strTitle as String
With conAccess
.ConnectionString = "Data Source= **insert filepath here** "
.Provider = "Microsoft.ACE.OLEDB.12.0"
.Open
End With
With rdsAccess
.Open "SELECT tblPickName.Title FROM tblPickName WHERE tblPickName.Name = '" & Environ("username") & "';", conAccess
If Not .EOF And Not IsNull(.fields(0)) Then
strTitle = .fields(0)
Else
Msgbox "Error: No record in the database for " & Environ("username")
End If
.Close
End With
conAccess.Close: conAccess = Nothing
End Sub
```
Be sure to select the correct references by doing this: <http://msdn.microsoft.com/en-us/library/windows/desktop/ms677497(v=vs.85).aspx>
Also, this is my first answer ever written on this site (or any other for that matter), so be kind. | Retrieve value from Access table in Excel | [
"",
"sql",
"excel",
"vba",
""
] |
In my SQL command, I want to get the sum of the C.HEIGHT values:
```
SELECT SUM(C.HEIGHT)
FROM CIHAZ AS C
INNER JOIN ILISKI AS I ON I.REL_CIHAZ=C.ID
WHERE I.REL_DOLAP=2
```
This query is returning NULL, but I want to give a default value to SUM(C.HEIGHT).
How can I do this?
Thanks. | Use `COALESCE` to provide a default value in case the column value is NULL, like so:
```
SELECT COALESCE(SUM(C.HEIGHT), yourdefaultvalue)
FROM CIHAZ AS C
INNER JOIN ILISKI AS I ON I.REL_CIHAZ=C.ID
WHERE I.REL_DOLAP=2
```
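For instance, here is a runnable sketch against SQLite from Python (tables cut down to the relevant columns, sample heights made up) showing the NULL sum being replaced by 0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE CIHAZ (ID INTEGER, HEIGHT REAL);
    CREATE TABLE ILISKI (REL_CIHAZ INTEGER, REL_DOLAP INTEGER);
    INSERT INTO CIHAZ VALUES (1, 4.0), (2, 2.5);
    INSERT INTO ILISKI VALUES (1, 1), (2, 1);  -- nothing for REL_DOLAP = 2
""")
query = """
    SELECT COALESCE(SUM(C.HEIGHT), 0)
    FROM CIHAZ AS C
    JOIN ILISKI AS I ON I.REL_CIHAZ = C.ID
    WHERE I.REL_DOLAP = ?
"""
no_match = con.execute(query, (2,)).fetchone()[0]    # default 0 instead of NULL
with_match = con.execute(query, (1,)).fetchone()[0]  # normal sum
```

Without the COALESCE, the no-match case would come back as NULL (None in Python).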
`COALESCE` is an ANSI standard function, and you can use it in the way shown above in MySQL, Oracle and SQL Server. | You can use [`COALESCE`](http://msdn.microsoft.com/en-gb/library/ms190349.aspx) but, since you're aggregating over all results, you're probably getting `NULL` because there are no rows. In that case, you need to wrap the *entire* query as a subquery to get `COALESCE` to act the way that we want:
```
SELECT COALESCE(
(SELECT SUM(C.HEIGHT)
FROM CIHAZ AS C
INNER JOIN ILISKI AS I ON I.REL_CIHAZ=C.ID
WHERE I.REL_DOLAP=2),
0 /* <-- default */) as Result
``` | SQL Command Default Value | [
"",
"mysql",
"sql",
"default-value",
""
] |
I have a table 'consumption' in mysql with around 5 million records like:
```
month from | month to | consumption
2012-12-20 2013-01-10 200
2013-01-11 2013-02-13 345
```
Is there a way to get the consumption for each month like:
consumption for January (2013-01-01 to 2013-01-31) = ..., for February = ..., and so on. The value can be an estimated figure; it need not be perfect.
I thought of taking the average consumption per day and multiplying it by the number of days in the month for the different date ranges, but I'm not sure how to go about it.
Update:
@Karolis: using the original Excel formula, I am getting an estimated consumption value that is higher than the value computed using the SQL script. As far as I know, both the SQL script and the Excel formula are doing the same computation. Can you please help me find out why this is happening, and make the SQL script's consumption value the same as the one obtained using Excel?
Original Table:
```
id month_from month_to consumption
121 2009-12-30 2009-01-28 1251 <-First period
121 2010-01-29 2010-02-24 915 <-Second period
993 xxxx-xx-xx xxxx-xx-xx xxx
121 2010-02-25 2010-03-25 741
121 2010-03-26 2010-04-28 1508
```
I used the script you had given with a slight modification: I added a GROUP BY id and ORDER BY id. The script I am using is:
```
SELECT
m.month, id,
SUM(
-- partial consumption = date subrange / date range * consumption
(
DATEDIFF(
IF(c.date_to > m.last_day, m.last_day, c.date_to),
IF(c.date_from < m.first_day, m.first_day, c.date_from)
) + 1
) / (DATEDIFF(c.date_to, c.date_from) + 1) * c.consumption
) consumption
FROM
consumption c
JOIN (
-- series of months
SELECT DISTINCT
DATE_FORMAT(date_from, '%Y %M') month,
DATE_FORMAT(date_from, '%Y-%m-01') first_day,
LAST_DAY(date_from) last_day
FROM consumption
GROUP BY date_from -- redundant, but for speed purposes
) m ON
-- condition indicating a date range belongs to a particular
-- month (fully or partially)
c.date_from <= m.last_day AND c.date_to >= m.first_day
GROUP BY m.month, id
ORDER BY m.month, id
```
Excel formula:
```
if((idInCurrentLine = idInNextLine), ((((month_to - start_date) + 1 )*consumptionPerDayForFirstPeriod/day ) + (start_date - month_from) * consumptionPerDayForsecondPeriod/day), "")
consumptionPerDayForFirstPeriod = consumptionFortheFirstPeriod/((month_to - month_from)+ 1)
consumptionPerDayForSecondPeriod = consumptinoFortheSecondPeriod/((month_to - month_from)+ 1)
```
In the example given
```
idInCurrentLine = 121, idInNextLine = 121
```
Using these two I calculated estimated consumption and result is :
Estimated consumption (as you can see, there is a difference in the estimated values in both cases, with the Excel estimate higher than the SQL one):
```
Month Using Excel Using mysql script
2009 January 1313.4 1269.3
2009 February 950.47 915.5
2009 March 935.78 904.9
xxxx xxxx xxxxx
xxxx xxxx xxxxx
``` | ```
SELECT
m.month,
SUM(
-- partial consumption = date subrange / date range * consumption
(
DATEDIFF(
IF(c.month_to > m.last_day, m.last_day, c.month_to),
IF(c.month_from < m.first_day, m.first_day, c.month_from)
) + 1
) / (DATEDIFF(c.month_to, c.month_from) + 1) * c.consumption
) consumption
FROM
consumption c
JOIN (
-- series of months
SELECT DISTINCT
DATE_FORMAT(month_from, '%Y %M') month,
DATE_FORMAT(month_from, '%Y-%m-01') first_day,
LAST_DAY(month_from) last_day
FROM consumption
GROUP BY month_from -- redundant, but for speed purposes
) m ON
-- condition indicating that a date range belongs
-- to a particular month (fully or partially)
c.month_from <= m.last_day AND c.month_to >= m.first_day
GROUP BY m.first_day
ORDER BY m.first_day
``` | Datediff is your friend - try `consumption * 30 / Datediff(day, monthfrom, monthto)` | MYSQL query to get monthly consumption data from table | [
"",
"mysql",
"sql",
"database",
"time-series",
"energy",
""
] |
I'm having this small issue.
the table basically looks like the following
**Subject**
* SubjectId
* SubjectName
* ParentSubjectId
`ParentSubjectId` references the subject table itself.
and it can go down for many levels (no specific number of levels)
example (using countries just for the sake of this example):
```
1.Europe
2.America
1.1 France
1.1 Italy
2.1 USA
2.2 Canada
1.1.1 Paris
1.2.1 Rome
```
and so on..
SubjectID is a GUID.
ParentSubjectID is a GUID too.
Sample Overview: <https://i.stack.imgur.com/d36EU.png>
it can even keep going down in levels indefinitely (maybe even to the street number level)
My question is:
given a subject (no matter the depth),
I would like to get the top parent of that subject (`Europe/America` in this case).
How can I do this?
Is it possible using a basic SQL query?
*Please note that I cannot modify the database at all (I'm querying data from an already existing database).* | Write it as:
```
declare @Subject as varchar(max)
set @Subject = 'Rome'; -- set subject name here
WITH SubjectCTE AS
(
SELECT SubjectId , SubjectName , ParentSubjectId
FROM Subject
WHERE SubjectName = @Subject
UNION ALL
SELECT C.SubjectId , C.SubjectName , C.ParentSubjectId
FROM SubjectCTE AS P
JOIN Subject AS C
ON P.ParentSubjectId = C.SubjectId
)
,SubjectCTE2 as
(
SELECT SubjectId , SubjectName , ParentSubjectId,
Row_Number() over ( order by SubjectId asc) as rownum
FROM SubjectCTE
)
select SubjectName as RequiredParentName
from SubjectCTE2
where rownum =1
```
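The same walk-up-the-tree idea can be sketched against SQLite, which spells the CTE `WITH RECURSIVE`. Integer ids stand in for the GUIDs here, and this variant assumes top-level rows have a NULL parent (rather than ranking by SubjectId as above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Subject (SubjectId INTEGER, SubjectName TEXT, ParentSubjectId INTEGER);
    INSERT INTO Subject VALUES
        (1, 'Europe', NULL), (2, 'America', NULL),
        (3, 'France', 1), (4, 'Italy', 1), (5, 'Rome', 4);
""")
top_parent = con.execute("""
    WITH RECURSIVE up AS (
        -- anchor: the subject we start from
        SELECT SubjectId, SubjectName, ParentSubjectId
        FROM Subject WHERE SubjectName = ?
        UNION ALL
        -- recursive step: follow ParentSubjectId one level up
        SELECT s.SubjectId, s.SubjectName, s.ParentSubjectId
        FROM up JOIN Subject s ON up.ParentSubjectId = s.SubjectId
    )
    SELECT SubjectName FROM up WHERE ParentSubjectId IS NULL
""", ("Rome",)).fetchone()[0]
# top_parent -> 'Europe'
```

Rome walks up through Italy to Europe, the row whose parent is NULL.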
[Check the demo here.](http://rextester.com/ULQ26758) | This is simple to handle: just add a new column HID (varchar) and fill it.
```
01 Europe
02 America
0101 France
0102 Italy
010101 Paris
```
Selecting parent:
```
DECLARE @childHID varchar(10) = '010101' --Paris
SELECT A.ID, A.Name
FROM Address A
WHERE A.HID = SUBSTRING(@childHID, 1, 2) -- first level
```
Also you can get all your branch:
```
SELECT *
FROM Address
WHERE HID LIKE '01%'
ORDER BY HID
``` | SQL - how to get the top parent of a given value in a self referencing table | [
"",
"sql",
"sql-server",
""
] |
I have a `sales` table in which I store every sales. This table has columns like `year_ordered`, `userId`, `orderId` etc.
I wish to write a SQL query to select rows where a user has ordered every year since 2008.
So I only want those who have been loyal and ordered from 2008 till 2014.
I have tried this query, but it gives me everything where the `year_ordered` is greater than 2007:
```
select COUNT(*) as sales_count, ss.userID, ss.year_ordered
from subscriber_sub ss
where ss.date_deleted is null
and ss.year_ordered > 2007
group by ss.year_ordered, ss.userID
having COUNT(*) > 1
order by ss.year_ordered
``` | What you strive for is called relational division. There are basically two ways to accomplish that:
```
select COUNT(distinct ss.year_ordered) as sales_count, ss.userID
from subscriber_sub ss
where ss.date_deleted is null
and ss.year_ordered > 2007
group by ss.userID
having COUNT(distinct ss.year_ordered) >= 2014 - 2008 + 1 -- 7 distinct years, 2008 through 2014
```
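A small runnable check of the count-the-distinct-years idea, using SQLite from Python with just two target years of made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE subscriber_sub (userID INTEGER, year_ordered INTEGER, date_deleted TEXT);
    -- user 1 ordered in both 2013 and 2014, user 2 only in 2014
    INSERT INTO subscriber_sub VALUES
        (1, 2013, NULL), (1, 2014, NULL), (1, 2014, NULL), (2, 2014, NULL);
""")
loyal = [r[0] for r in con.execute("""
    SELECT userID
    FROM subscriber_sub
    WHERE date_deleted IS NULL AND year_ordered BETWEEN 2013 AND 2014
    GROUP BY userID
    HAVING COUNT(DISTINCT year_ordered) = 2  -- 2 = number of years in the range
""")]
# loyal -> [1]
```

The DISTINCT matters: user 1 has two 2014 rows, but they count as one year.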
The other way is to rewrite FORALL x : p(x) <=> NOT EXISTS x : NOT p(x), i.e. users for whom there does not exist a year in the range without a sale. I'll leave that as an exercise :-) | Try this for your `HAVING` clause:
```
HAVING (SELECT COUNT(DISTINCT ss.year_ordered)) = 7
``` | Select where user bought every year | [
"",
"sql",
"sql-server-2008",
""
] |
I have my database set up to allow a user to "Like" or "Dislike" a post. If it is liked, the column isliked = true, and false otherwise (null if neither).
The problem is, I am trying to create a view that shows all Posts, and also shows a column with how many 'likes' and 'dislikes' each post has. Here is my SQL; I'm not sure where to go from here. It's been a while since I've worked with SQL and everything I've tried so far has not given me what I want.
Perhaps my DB isn't set up properly for this. Here is the SQL:
```
Select trippin.AccountData.username, trippin.PostData.posttext,
trippin.CategoryData.categoryname, Count(trippin.LikesDislikesData.liked)
as TimesLiked from trippin.PostData
inner join trippin.AccountData on trippin.PostData.accountid = trippin.AccountData.id
inner join trippin.CategoryData on trippin.CategoryData.id = trippin.PostData.categoryid
full outer join trippin.LikesDislikesData on trippin.LikesDislikesData.postid =
trippin.PostData.id
full outer join trippin.LikesDislikesData likes2 on trippin.LikesDislikesData.accountid =
trippin.AccountData.id
Group By (trippin.AccountData.username), (trippin.PostData.posttext), (trippin.categorydata.categoryname);
```
Here's my table setup (I've only included relevant columns):
```
LikesDislikesData
isliked(bit) || accountid(string) || postid(string
PostData
id(string) || posttext || accountid(string)
AccountData
id(string) || username(string)
CategoryData
categoryname(string)
Problem 1: FULL OUTER JOIN versus LEFT OUTER JOIN. Full outer joins are seldom what you want: they mean you want all data specified on the "left" and all data specified on the "right", both matched and unmatched. What you want is all the PostData on the "left" and any matching Likes data on the "right". If some right-hand-side rows don't match something on the left, then you *don't care* about them. Almost always work from left to right and join results that are relevant.
Problem 2: table aliases. Wherever you alias a table name, such as Likes2, every reference to that table instance within the query needs to use that alias. Straight after you declare the alias Likes2, your join condition refers back to trippin.LikesDislikesData, which is the first instance of the table. Given that the second instance joins on a different field, I suspect that postid and accountid are meant to be matched on the same row, so the conditions should be ANDed together rather than put on a separate table instance. EDIT: reading your schema more closely, it seems this wouldn't be needed at all.
Problem 3: to solve your counts problem, separate them using CASE expressions. SUM adds up the value returned by each CASE: if likes.liked = 1 it returns 1, otherwise it returns 0. The 0 is returned whether the column contains a 0 or a NULL.
```
SELECT trippin.PostData.Id, trippin.AccountData.username, trippin.PostData.posttext,
trippin.CategoryData.categoryname,
SUM(CASE WHEN likes.liked = 1 THEN 1 ELSE 0 END) as TimesLiked,
SUM(CASE WHEN likes.liked = 0 THEN 1 ELSE 0 END) as TimesDisLiked
FROM trippin.PostData
INNER JOIN trippin.AccountData ON trippin.PostData.accountid = trippin.AccountData.id
INNER JOIN trippin.CategoryData ON trippin.CategoryData.id = trippin.PostData.categoryid
LEFT OUTER JOIN trippin.LikesDislikesData likes ON likes.postid = trippin.PostData.id
-- remove AND likes.accountid = trippin.AccountData.id
GROUP BY trippin.PostData.Id, (trippin.AccountData.username), (trippin.PostData.posttext), (trippin.categorydata.categoryname);
```
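The SUM(CASE ...) pattern on its own is easy to verify; here is a runnable sketch against SQLite from Python, with the likes table cut down to its relevant columns and made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE LikesDislikesData (isliked INTEGER, accountid TEXT, postid TEXT);
    INSERT INTO LikesDislikesData VALUES
        (1, 'a', 'p1'), (1, 'b', 'p1'), (0, 'c', 'p1'), (1, 'a', 'p2');
""")
rows = con.execute("""
    SELECT postid,
           SUM(CASE WHEN isliked = 1 THEN 1 ELSE 0 END) AS TimesLiked,
           SUM(CASE WHEN isliked = 0 THEN 1 ELSE 0 END) AS TimesDisliked
    FROM LikesDislikesData
    GROUP BY postid
    ORDER BY postid
""").fetchall()
# rows -> [('p1', 2, 1), ('p2', 1, 0)]
```

Each CASE contributes 1 only for the rows it matches, so the two sums count likes and dislikes independently in one pass.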
Then "hide" the PostId column in the User Interface. | Instead of selecting `Count(trippin.LikesDislikesData.liked)` you could put in a select statement:
```
Select AccountData.username, PostData.posttext, CategoryData.categoryname,
(select Count(*)
from LikesDislikesData as likes2
where likes2.postid = postdata.id
and likes2.liked = 'like' ) as TimesLiked
from PostData
inner join AccountData on PostData.accountid = AccountData.id
inner join CategoryData on CategoryData.id = PostData.categoryid
``` | How can I perform the Count function with a where clause? | [
"",
"sql",
"sql-server",
""
] |
I have a sql table that is an audit trail. I am trying to grab the most recent time the price was changed.
Using this query, I get the following data:
```
select id as id_1
,item_no, price
,post_dt
,post_tm
,aud_action
from iminvaud_sql
where item_no = '1-ADVENT ENGLISH'
and aud_action in ('B','C')
order by id desc
id_1 item_no price post_dt post_tm aud_action
221 1-ADVENT ENGLISH 2.000000 2014-08-18 00:00:00.000 1900-01-01 10:19:35.113 C
218 1-ADVENT ENGLISH 2.000000 2014-08-18 00:00:00.000 1900-01-01 10:19:35.110 B
217 1-ADVENT ENGLISH 2.000000 2014-08-18 00:00:00.000 1900-01-01 10:01:47.163 C
216 1-ADVENT ENGLISH 3.000000 2014-08-18 00:00:00.000 1900-01-01 10:01:46.757 B
59 1-ADVENT ENGLISH 3.000000 2013-08-19 00:00:00.000 1900-01-01 13:23:32.950 C
58 1-ADVENT ENGLISH 1.000000 2013-08-19 00:00:00.000 1900-01-01 13:23:32.890 B
```
The system writes a B for a before change and a C for the change so in a sense the B and C records are grouped. For this example I wanted to get the 217 record because it was the most recent price change. | Your query is essentially:
```
select s.*
from iminvaud_sql s
where s.item_no = '1-ADVENT ENGLISH' and s.aud_action in ('B','C')
order by id desc;
```
As far as I can tell, the "B" and "C" are not adding any information. Instead, let's look at the first occurrence of the most recent price. I will base this on `id`. The following will work if the prices are monotonic (either always increasing or decreasing):
```
select top 1 s.*
from iminvaud_sql s
where exists (select 1
from iminvaud_sql s2
where s2.item_no = s.item_no and s2.aud_action in ('B','C') and s2.price = s.price
) and
s.aud_action in ('B','C') and s.item_no = '1-ADVENT ENGLISH'
order by s.id asc;
```
If this isn't the case, you can use a trick. The trick is to take the difference between `row_number()` and `row_number()` partitioned by price. The largest values for the difference will be the most recent price.
```
select top 1 s.*
from (select s.*,
             (row_number() over (order by id) -
              row_number() over (partition by price order by id)
             ) as pricegroup
      from iminvaud_sql s
      where s.aud_action in ('B','C') and s.item_no = '1-ADVENT ENGLISH'
     ) s
order by s.pricegroup desc, s.id asc;
``` | Not sure what your requirement is - but if you just want the top row - then use top:
```
select top 1 id as id_1, item_no, price, post_dt, post_tm, aud_action
from iminvaud_sql where item_no = '1-ADVENT ENGLISH'
and aud_action in ('B','C') order by post_dt desc, post_tm desc
``` | How do I grab the most recent price change from table | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I want to list all the rows, alternating publishers, with price ascending; see the example table below.
```
id publisher price
1 ABC 100.00
2 ABC 150.00
3 ABC 105.00
4 XYZ 135.00
5 XYZ 110.00
6 PQR 105.00
7 PQR 125.00
```
The expected result would be:
```
id publisher price
1 ABC 100.00
6 PQR 105.00
5 XYZ 110.00
3 ABC 105.00
7 PQR 125.00
4 XYZ 135.00
2 ABC 150.00
```
What would be the required SQL? | This should do it:
```
select id, publisher, price
from (
select id, publisher, price,
row_number() over (partition by publisher order by price) as rn
from publisher
) t
order by rn, publisher, price
```
The window function assigns numbers to each publisher's prices. Based on that, the outer ORDER BY will first display all rows with rn = 1, which are the rows with the lowest price for each publisher. The second row for each publisher has the second-lowest price, and so on.
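The interleaving effect can be mimicked in plain Python to see what the query does: rank each row within its publisher by price, then sort by that rank. This uses the sample rows from the question:

```python
rows = [
    (1, 'ABC', 100.00), (2, 'ABC', 150.00), (3, 'ABC', 105.00),
    (4, 'XYZ', 135.00), (5, 'XYZ', 110.00),
    (6, 'PQR', 105.00), (7, 'PQR', 125.00),
]

# rank[id] = position of the row within its publisher when sorted by price
# (the row_number() over (partition by publisher order by price) part)
rank = {}
counter = {}
for row in sorted(rows, key=lambda r: (r[1], r[2])):
    counter[row[1]] = counter.get(row[1], 0) + 1
    rank[row[0]] = counter[row[1]]

# the outer ORDER BY rn, publisher, price
ordered = sorted(rows, key=lambda r: (rank[r[0]], r[1], r[2]))
ids = [r[0] for r in ordered]
# ids -> [1, 6, 5, 3, 7, 4, 2], matching the expected result in the question
```

First every publisher's cheapest row appears, then every publisher's second-cheapest, and so on.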
SQLFiddle example: <http://sqlfiddle.com/#!4/06ece/2> | ```
SELECT id, publisher, price
FROM tbl
ORDER BY row_number() OVER (PARTITION BY publisher ORDER BY price), publisher;
```
One cannot use the output of [window functions](http://www.postgresql.org/docs/current/interactive/functions-window.html) in the `WHERE` or `HAVING` clauses, because window functions are applied *after* those. But one can use window functions in the `ORDER BY` clause.
[SQL Fiddle.](http://sqlfiddle.com/#!15/affaf/1) | Select all rows based on alternative publisher | [
"",
"sql",
"postgresql",
"sql-order-by",
"window-functions",
""
] |
Is there any function in `PostgreSQL` that returns a `Boolean` indicating whether a given string is a date or not, just like `ISDATE()` in MSSQL?
```
ISDATE("January 1, 2014")
``` | You can create a function:
```
create or replace function is_date(s varchar) returns boolean as $$
begin
perform s::date;
return true;
exception when others then
return false;
end;
$$ language plpgsql;
```
Then, you can use it like this:
```
postgres=# select is_date('January 1, 2014');
is_date
---------
t
(1 row)
postgres=# select is_date('20140101');
is_date
---------
t
(1 row)
postgres=# select is_date('20140199');
is_date
---------
f
(1 row)
``` | @ntalbs's answer is good except in the case of `NULL` values. I don't want `is_date` to return `true` if I pass it a `NULL` value. This tweak gets around that issue:
```
create or replace function is_date(s varchar) returns boolean as $$
begin
if s is null then
return false;
end if;
perform s::date;
return true;
exception when others then
return false;
end;
$$ language plpgsql;
``` | Check whether string is a date Postgresql | [
"",
"sql",
"postgresql",
"date",
""
] |
I have no idea how to solve the following:
There are three tables, each with a column of names. For example:
* Table 1 - column name 'Name' - values 'A', 'B' and 'C'
* Table 2 - column name 'Name' - values 'A' and 'B'
* Table 3 - column name 'Name' - values 'A' and 'C'
The goal is to UNION the tables; each value from the three tables should be shown only one time. In addition, there should be three new "virtual columns" showing in which table the value is included ('1' when the value is included, '0' if not). So the result should look like this:
```
Value | Table1 | Table2 | Table3
--------------------------------
A | 1 | 1 | 1
B | 1 | 1 | 0
C | 1 | 0 | 1
```
Hope someone can help me, thanks in advance. | Does this do what you want?
```
select Name, max(Table1) as Table1, max(Table2) as Table2, max(Table3) as Table3
from (select Name, 1 as Table1, 0 as Table2, 0 as Table3
from table1
union all
select Name, 0 as Table1, 1 as Table2, 0 as Table3
from table2
union all
select Name, 0 as Table1, 0 as Table2, 1 as Table3
from table3
) t
group by Name;
```
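Here is a runnable sketch of this query against SQLite from Python, using the sample values from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (Name TEXT);
    CREATE TABLE table2 (Name TEXT);
    CREATE TABLE table3 (Name TEXT);
    INSERT INTO table1 VALUES ('A'), ('B'), ('C');
    INSERT INTO table2 VALUES ('A'), ('B');
    INSERT INTO table3 VALUES ('A'), ('C');
""")
rows = con.execute("""
    SELECT Name, MAX(Table1) AS Table1, MAX(Table2) AS Table2, MAX(Table3) AS Table3
    FROM (SELECT Name, 1 AS Table1, 0 AS Table2, 0 AS Table3 FROM table1
          UNION ALL
          SELECT Name, 0, 1, 0 FROM table2
          UNION ALL
          SELECT Name, 0, 0, 1 FROM table3) t
    GROUP BY Name
    ORDER BY Name
""").fetchall()
# rows -> [('A', 1, 1, 1), ('B', 1, 1, 0), ('C', 1, 0, 1)]
```

Each branch of the union tags its rows with a 1 in "its" column, and MAX collapses the tags per name into the presence flags.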
You might want to use `sum()` instead of `max()` to get the number of times the value occurs in each table. | If your db supports full joins you can try the query below.
```
select
coalesce(t1.Name,t2.Name,t3.Name) myValue,
(case when max(t1.Name) is not null then 1 else 0 end) c1,
(case when max(t2.Name) is not null then 1 else 0 end) c2,
(case when max(t3.Name) is not null then 1 else 0 end) c3
from
Table1 t1
full join Table2 t2 on t1.Name = t2.Name
full join Table3 t3 on t1.Name = t3.Name
group by coalesce(t1.Name,t2.Name,t3.Name)
```
If you know that a value will not appear more than once in each table, you can remove the `group by` and `max` parts. | Union three tables and show where data came from | [
"",
"sql",
"union",
""
] |
I have a database table that contains multiple entries for each user during a period of time. The query I'm trying to build will only return the most recent record for each user during that time period.
what I have thus far:
```
SELECT *
FROM (SELECT DISTINCT * FROM picks_recorded)
WHERE date_submitted > '" . $week_start . "'
AND date_submitted < '" . $week_end . "'
ORDER BY date_submitted DESC
LIMIT 1;
```
I've tried several versions of this without success. | Try this:
```
select distinct p.* from
picks_recorded p
inner join
(select userid, max(date_submitted) as maxdate from picks_recorded
WHERE date_submitted > '" . $week_start . "'
AND date_submitted < '" . $week_end . "'
group by userid) s on p.userid = s.userid and p.date_submitted = s.maxdate
```
Basically, we use group by to get the maximum value of `date_submitted` per `userid` i.e. date of latest record for each user. Then we inner join with the source table based on the `userid` and `submitted_date` to get all the required data.
Note: Using `distinct` in the outer query will ensure that only one row is returned for a given user if that user has multiple, identical records for the latest date. However, if you have 2 records with same `userid` and `submitted_date`, but even one different field, then both records will be returned. In that case, you need to provide additional logic to say which of the 2 records should be kept in the final result.
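A runnable sketch of this pattern with sqlite3 (the sample rows and the `pick` column are invented, and date literals stand in for the `$week_start`/`$week_end` variables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE picks_recorded(userid INTEGER, pick TEXT, date_submitted TEXT);
    INSERT INTO picks_recorded VALUES
        (1,'old','2014-08-11'),(1,'new','2014-08-14'),
        (2,'old','2014-08-12'),(2,'new','2014-08-13');
""")
# Groupwise max: find each user's latest date in the window, then join back
# to the source table to recover the full row.
rows = conn.execute("""
    SELECT DISTINCT p.userid, p.pick, p.date_submitted
    FROM picks_recorded p
    INNER JOIN (SELECT userid, MAX(date_submitted) AS maxdate
                FROM picks_recorded
                WHERE date_submitted > '2014-08-10' AND date_submitted < '2014-08-17'
                GROUP BY userid) s
        ON p.userid = s.userid AND p.date_submitted = s.maxdate
    ORDER BY p.userid
""").fetchall()
print(rows)  # [(1, 'new', '2014-08-14'), (2, 'new', '2014-08-13')]
```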
On a side note, you should avoid building queries by concatenating with variables. Instead, you should use prepared statements/parameterised queries. | Try changing `DISTINCT *` to `DISTINCT(userid)`.
For example:
```
SELECT * FROM (SELECT DISTINCT(userid) FROM picks_recorded)
WHERE
date_submitted > '" . $week_start . "'
AND
date_submitted < '" . $week_end . "'
ORDER BY date_submitted DESC
LIMIT 1;
``` | select most recent records with unique user id | [
"",
"mysql",
"sql",
"distinct",
""
] |
Scenario:
I am creating a website for a checklist; it can be edited and accessed by multiple users at the same time, thus it needs certain fields to be editable, saveable and retrievable.
Dilemma:
The checklist has 212 entries, which means 212 rows. I have 5 columns that need entries, thus 212x5. This means I would have to create 1060 columns for the website to do what I want it to do. 1060 columns in a table seems wrong and very tiring to use.
My Sample solution:
I would divide the 5 columns into 5 tables, making the date the checklist was created their primary key. I would then use these 5 tables for their corresponding columns, thus reducing the number of columns per table to 212.
Is there any way that I could reduce this? Sorry for the long post; any help would be appreciated.
\*\*Edit: For some reason, I can't comment on any answers; it says there is an error on the page. Nevertheless, I do appreciate everybody's answers, yet I have one piece of info that may change them. I have thought of making 5 columns instead of the obnoxious 1060, but doing that, the system would need to create 1 table per worksheet and of course, over time, this would cause massive problems for the database.
Although still pretty huge, ah\_hau's answer seems to be the smallest and easiest to handle. By using it, the system would create 217 entries per checklist, but will use 6-8 columns only. Thank you to everyone who shed light on this dilemma; I hope to see you all again on my next question. *Cheers!* | Why not directly add one more column to your checklist table?
your table structure should look like
1. userid
2. entryid (value from 1-212)
3. col1\_entry
4. col2\_entry
5. col3\_entry
6. col4\_entry
7. col5\_entry | Looking through all your requirements, though you've ruled out the common 6-column setup, I'd still suggest a similar setup.
try to have a table like this
```
id [bigInt] (PK, auto_increment)
rowId [int] //checklist row id
columnId [int] //checklist column id
text [varchar/text]
create_date [Date]
create_time [Time]
Index
unique key checklist_cell (create_date, rowId, columnId)
```
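A minimal sqlite3 sketch of using this schema (column names follow the sketch above); note that updating one cell touches a single row:

```python
import sqlite3

# SQLite rendering of the proposed cell table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE checklist_cell(
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        rowId INTEGER, columnId INTEGER, text TEXT, create_date TEXT,
        UNIQUE (create_date, rowId, columnId));
    INSERT INTO checklist_cell(rowId, columnId, text, create_date)
        VALUES (1, 1, 'done', '2014-08-18'), (1, 2, 'n/a', '2014-08-18');
""")
# Updating a single cell is a one-row UPDATE, not a rewrite of the checklist:
conn.execute("""UPDATE checklist_cell SET text = 'pending'
                WHERE create_date = '2014-08-18' AND rowId = 1 AND columnId = 2""")
rows = conn.execute("""SELECT rowId, columnId, text FROM checklist_cell
                       WHERE create_date = '2014-08-18'
                       ORDER BY rowId, columnId""").fetchall()
print(rows)  # [(1, 1, 'done'), (1, 2, 'pending')]
```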
Depending on your preference, you could also split the `columnId` field into 5 columns named `column1~5` to reduce the DB entry count. But I'd suggest using my setup, as it seems users will update your checklist one cell at a time (or multiple cells all around the list), for which my schema makes more sense. This schema is also very expandable: you could easily add new fields. Lastly, you don't have to lock the whole checklist while a user is updating a single cell, which helps with concurrent access. | Dilemma about the number of columns on a table | [
"",
"mysql",
"sql",
"database",
""
] |
I've got some sensory information going into a table. I have figured out the query that will tell me exactly when the value at a particular device changes.
What I need to know is the status of all of the other sensors at that time. The trick is, the timestamps won't be equal. I could get a data point from sensor 1, then 3 minute later, one from sensor 2, and then 30 seconds later, another from sensor 1.
So, here is an example of what I am talking about:
```
--- data_table ---
sensor | state | stime
-------+-------+---------------------
1 | A | 2014-08-17 21:42:00
1 | A | 2014-08-17 21:43:00
2 | B | 2014-08-17 21:44:00
3 | C | 2014-08-17 21:45:00
2 | D | 2014-08-17 21:46:00
3 | C | 2014-08-17 21:47:00
1 | B | 2014-08-17 21:48:00
3 | A | 2014-08-17 21:49:00
2 | D | 2014-08-17 21:50:00
2 | A | 2014-08-17 21:51:00
```
Now, I know the query that will deliver me the state changes. I've got this down, and it's in a view. That table would look like:
```
--- state_changed_view ---
sensor | state | stime
-------+-------+---------------------
2 | D | 2014-08-17 21:46:00
1 | B | 2014-08-17 21:48:00
3 | A | 2014-08-17 21:49:00
2 | A | 2014-08-17 21:51:00
```
What I want is a JOIN, where I can get all of the values of the 'state\_changed\_view', but also the values of the other corresponding sensors at the 'sensor\_timestamp' within the view.
So, ideally, I want my result to look like (or something similar to):
```
sensor | state | stime | sensor | state | stime
-------+-------+---------------------+--------+-------+---------------------
2 | D | 2014-08-17 21:46:00 | 1 | A | 2014-08-17 21:43:00
2 | D | 2014-08-17 21:46:00 | 2 | D | 2014-08-17 21:46:00
2 | D | 2014-08-17 21:46:00 | 3 | C | 2014-08-17 21:45:00
1 | B | 2014-08-17 21:48:00 | 1 | B | 2014-08-17 21:48:00
1 | B | 2014-08-17 21:48:00 | 2 | D | 2014-08-17 21:46:00
1 | B | 2014-08-17 21:48:00 | 3 | C | 2014-08-17 21:47:00
3 | A | 2014-08-17 21:49:00 | 1 | B | 2014-08-17 21:48:00
3 | A | 2014-08-17 21:49:00 | 2 | D | 2014-08-17 21:46:00
3 | A | 2014-08-17 21:49:00 | 3 | A | 2014-08-17 21:49:00
2 | A | 2014-08-17 21:51:00 | 1 | B | 2014-08-17 21:48:00
2 | A | 2014-08-17 21:51:00 | 2 | A | 2014-08-17 21:51:00
2 | A | 2014-08-17 21:51:00 | 3 | A | 2014-08-17 21:49:00
```
As you can see, I need the most recent row in 'data\_table' for each sensor, for every row that exists in `state_changed_view`.
I just don't know how to get the SQL to get me the most recent row according to a particular timestamp.
This is on a PL/pgSQL system, so anything compatible with Postgres is handy. | ## Query
For a **small**, **given** set of sensors to retrieve (this works for Postgres **8.4** or later):
```
SELECT c.sensor AS sensor_change
, d1.state AS state_1, d1.stime AS stime_1
, d2.state AS state_2, d2.stime AS stime_2
, d3.state AS state_3, d3.stime AS stime_3
FROM (
SELECT sensor, stime
, lag(state) OVER (PARTITION BY sensor ORDER BY stime)
<> state AS change
, max(CASE WHEN sensor = 1 THEN stime ELSE NULL END) OVER w AS last_1
, max(CASE WHEN sensor = 2 THEN stime ELSE NULL END) OVER w AS last_2
, max(CASE WHEN sensor = 3 THEN stime ELSE NULL END) OVER w AS last_3
FROM data d
WINDOW w AS (ORDER BY stime)
) c
JOIN data d1 ON d1.sensor = 1 AND d1.stime = c.last_1
JOIN data d2 ON d2.sensor = 2 AND d2.stime = c.last_2
JOIN data d3 ON d3.sensor = 3 AND d3.stime = c.last_3
WHERE c.change
ORDER BY c.stime;
```
Not using the view at all, building on the table directly, that's faster.
This is assuming a UNIQUE INDEX on `(sensor, stime)` to be unambiguous. Performance also heavily depends on such an index.
As opposed to [@Nick's solution](https://stackoverflow.com/a/25388757/939860), building on `JOIN LATERAL` (Postgres 9.3+), this returns a **single row** with all values for every change in state.
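The question's `state_changed_view` itself isn't shown; for completeness, the change-detection building block can be written without window functions at all, via a correlated subquery that looks up the sensor's previous reading (portable SQL, checked here with sqlite3; table and column names follow the question):

```python
import sqlite3

rows = [
    (1, 'A', '2014-08-17 21:42:00'), (1, 'A', '2014-08-17 21:43:00'),
    (2, 'B', '2014-08-17 21:44:00'), (3, 'C', '2014-08-17 21:45:00'),
    (2, 'D', '2014-08-17 21:46:00'), (3, 'C', '2014-08-17 21:47:00'),
    (1, 'B', '2014-08-17 21:48:00'), (3, 'A', '2014-08-17 21:49:00'),
    (2, 'D', '2014-08-17 21:50:00'), (2, 'A', '2014-08-17 21:51:00'),
]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_table(sensor INTEGER, state TEXT, stime TEXT)")
conn.executemany("INSERT INTO data_table VALUES (?,?,?)", rows)
# A row is a "change" when its state differs from the same sensor's previous
# state; a sensor's first reading has no previous row, so the <> against NULL
# filters it out.
changes = conn.execute("""
    SELECT d.sensor, d.state, d.stime
    FROM data_table d
    WHERE d.state <> (SELECT d2.state FROM data_table d2
                      WHERE d2.sensor = d.sensor AND d2.stime < d.stime
                      ORDER BY d2.stime DESC LIMIT 1)
    ORDER BY d.stime
""").fetchall()
for row in changes:
    print(row)  # four rows, matching state_changed_view in the question
```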
## PL/pgSQL function
Since you mentioned PL/pgSQL, I would expect this (highly optimized) plpgsql function to perform better, since it can make do with a single sequential scan of the table:
```
CREATE OR REPLACE FUNCTION f_sensor_change()
RETURNS TABLE (sensor_change int -- adapt to actual data types!
, state_1 "char", stime_1 timestamp
, state_2 "char", stime_2 timestamp
, state_3 "char", stime_3 timestamp) AS
$func$
DECLARE
r data%rowtype;
BEGIN
FOR r IN
TABLE data ORDER BY stime
LOOP
CASE r.sensor
WHEN 1 THEN
IF r.state = state_1 THEN -- just save stime
stime_1 := r.stime;
ELSIF r.state <> state_1 THEN -- save all & RETURN
stime_1 := r.stime; state_1 := r.state;
sensor_change := 1; RETURN NEXT;
ELSE -- still NULL: init
stime_1 := r.stime; state_1 := r.state;
END IF;
WHEN 2 THEN
IF r.state = state_2 THEN
stime_2 := r.stime;
ELSIF r.state <> state_2 THEN
stime_2 := r.stime; state_2 := r.state;
sensor_change := 2; RETURN NEXT;
ELSE
stime_2 := r.stime; state_2 := r.state;
END IF;
WHEN 3 THEN
IF r.state = state_3 THEN
stime_3 := r.stime;
ELSIF r.state <> state_3 THEN
stime_3 := r.stime; state_3 := r.state;
sensor_change := 3; RETURN NEXT;
ELSE
stime_3 := r.stime; state_3 := r.state;
END IF;
ELSE -- do nothing, ignore other sensors
END CASE;
END LOOP;
END
$func$ LANGUAGE plpgsql;
```
Call:
```
SELECT * FROM f_sensor_change();
```
Makes sense for repeated use. Related answer:
* [GROUP BY and aggregate sequential numeric values](https://stackoverflow.com/questions/8014577/group-by-and-aggregate-sequential-numeric-values/8014694#8014694)
[**SQL Fiddle for Postgres 9.3.**](http://sqlfiddle.com/#!15/4e7c8/1)
[**SQL Fiddle for Postgres 8.4.**](http://sqlfiddle.com/#!11/4e7c8/1) | There are a couple of things making this not-so-straightforward:
* You want to do a subquery for each `state_changed_view` row, but the subquery must mention the corresponding `stime` from the view (to restrict it to earlier records). Ordinary subqueries aren't allowed to depend on external fields, but you can accomplish this (as of Postgres 9.3, at least) with a [lateral join](http://www.postgresql.org/docs/9.3/static/queries-table-expressions.html#QUERIES-LATERAL).
* You need not only `MAX(data_table.stime)`, but the corresponding `data_table.state`. You could do this with *another* nested query to retrieve the rest of the row, but [`SELECT DISTINCT ON`](http://www.postgresql.org/docs/9.3/static/sql-select.html) gives you an easy way to fetch the whole thing at once.
The end result is something like this:
```
SELECT *
FROM
state_changed_view,
LATERAL (
SELECT DISTINCT ON (sensor)
sensor,
state,
stime
FROM
data_table
WHERE
data_table.stime <= state_changed_view.stime
ORDER BY
sensor,
stime DESC
) a
``` | SQL Query where I get most recent rows from timestamp from another table | [
"",
"sql",
"postgresql",
"join",
"timestamp",
"plpgsql",
""
] |
I need to add a constraint to one table in my database. The table name is Experience, and there is a column named ToDate. Every time a select statement executes like the following:
```
select ToDate from Experience
```
It should return current date.
So every time the select statement executes, the ToDate column gets updated with the current date.
I know I can do this with some type of SQL trigger, but is there a way to do it with a SQL constraint?
like
```
alter table add constraint...
```
Any help will be appreciated.
Thanks | You can use a [computed column](http://msdn.microsoft.com/en-us/library/ms188300.aspx#TsqlProcedure). That's specified like `colname as <expression>`:
```
create table t1(id int, dt as getdate());
insert t1 values (1);
select * from t1;
``` | You cannot use a constraint, because a constraint is basically a rule on what can go in the table, how the table can relate to others, etc. It has no bearing on the data in the table once it goes into the table. Now if I am understanding you correctly, you want to update the `ToDate` column whenever you select that column. Now you can't use a trigger either as mentioned [here](https://community.oracle.com/thread/1556647?tstart=0) and [here](https://stackoverflow.com/questions/6137935/can-i-launch-a-trigger-on-select-statement-in-mysql). They suggest a stored procedure where you would use an `update` followed by an `insert`. This is probably my preferred SQL method to go with if you have to use it repeated, which you seem to have to do. Though Andomar's answer is probably better. | sql current date constraint | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I have a column in SQL Server called "Ordinal" that is used to indicate the display order of the rows. It starts at 0 and skips 10 for the next row, so we have something like this:
```
Id Ordinal
1 0
2 20
3 10
```
It skips 10 because we wanted to be able to move item in between items (based on ordinal) without having to reassign ordinal number for the entire table.
As you can imagine, eventually the ordinal numbers will need to be reassigned somehow for a move-in-between operation (either on surrounding rows or for the entire table) as the unused ordinal numbers between the target items get used up.
Is there an algorithm that I can use to effectively reorder the ordinal numbers for the move operation, taking into consideration long-term maintainability of the table and minimizing update operations? | You can re-number the sequences using a somewhat complicated `UPDATE` statement:
```
UPDATE u
SET u.sequence = 10 * (c.num_below-1)
FROM test u
JOIN (
SELECT t.id, count(*) AS num_below
FROM test t
JOIN test tr ON tr.sequence <= t.sequence
GROUP BY t.id
) c ON c.id=u.id
```
The idea is to obtain a count of items with the `sequence` *lower* than that of the current row, multiply the count by ten, and assign it as the new count.
The content of `test` before the `UPDATE`:
```
ID Sequence
__ ________
1 0
2 10
3 20
4 12
```
The content of `test` after the `UPDATE`:
```
ID Sequence
__ ________
1 0
2 10
3 30
4 20
```
Now the sequence numbers are evenly spread again, so you can continue inserting in the middle until you run out of new sequence numbers; then you can re-number again.
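The same renumbering can also be driven from application code (read the rows, sort by sequence, write back rank times 10), which sidesteps dialect differences around `UPDATE ... FROM`. A sqlite3 sketch, using the answer's table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test(id INTEGER PRIMARY KEY, sequence INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)", [(1, 0), (2, 10), (3, 20), (4, 12)])
# Rank the rows by their current sequence, then write back rank * 10.
order = sorted(conn.execute("SELECT id, sequence FROM test"), key=lambda r: r[1])
conn.executemany("UPDATE test SET sequence = ? WHERE id = ?",
                 [(10 * i, row_id) for i, (row_id, _) in enumerate(order)])
final = conn.execute("SELECT id, sequence FROM test ORDER BY id").fetchall()
print(final)  # [(1, 0), (2, 10), (3, 30), (4, 20)]
```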
[Demo.](http://sqlfiddle.com/#!3/f00da/1) | These won't answer your question directly--I just thought I might suggest some other approaches:
One possibility--don't try to do it by hand. Have your software manage the numbers. If they need re-writing, just save them with new numbers.
a second--use a "Linked List" instead. In each record store the index of the next record you want displayed, then have your code load that directly into a linked list. | What is the best way to reassign ordinal number of a move operation | [
"",
"sql",
"algorithm",
""
] |
Is there some SQL that will either return a list of table names or (to cut to the chase) that would return a boolean as to whether a tablename with a certain pattern exists?
Specifically, I need to know if there is a table in the database named `INV[Bla]` such as `INVclay`, `INVcherri`, `INVkelvin`, `INVmorgan`, `INVgrandFunk`, `INVgobbledygook`, `INV2468WhoDoWeAppreciate`, etc. (the `INV` part is what I'm looking for; the remainder of the table name could be almost anything).
IOW, can "wildcards" be used in a SQL statement, such as:
```
SELECT * tables
FROM database
WHERE tableName = 'INV*'
```
or how would this be accomplished? | This should get you there:
```
SELECT *
FROM INFORMATION_SCHEMA.TABLES
where table_name LIKE '%INV%'
```
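The same catalog-pattern idea carries over to other engines; SQLite, for instance, exposes `sqlite_master` rather than `INFORMATION_SCHEMA`. A small sketch (note that `'INV%'` anchors the match to the prefix, which is what the question asks for):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE INVclay(id INTEGER);
    CREATE TABLE INVmorgan(id INTEGER);
    CREATE TABLE orders(id INTEGER);
""")
# List matching tables from the catalog.
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'INV%' ORDER BY name")]
print(names)  # ['INVclay', 'INVmorgan']
# Or collapse the check to a boolean.
exists = conn.execute(
    "SELECT EXISTS(SELECT 1 FROM sqlite_master WHERE type='table' AND name LIKE 'INV%')"
).fetchone()[0]
print(bool(exists))  # True
```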
EDIT:
fixed `table_name` | To check for exists:
```
-- note that the sql compiler knows that it just needs to check for existence,
-- so this is a case where "select *" is just fine
if exists
    (select *
     from [sys].[tables]
     where upper([name]) like N'INV%')
select N'do something appropriate because there is a table based on this pattern';
``` | How can I Retrieve a List of All Table Names from a SQL Server CE database? | [
"",
"sql",
"sql-server-ce",
"windows-ce",
"database-schema",
""
] |
I am currently having a problem when trying to select where a job is listed in the tbl\_jobs table and has not been assigned to a delivery item in the tbl\_delivery\_items table by using a NOT IN subquery.
The subquery should return supplier\_job\_job\_id 1 (which it does when you run it as a separate query), with the NOT IN excluding the job with an id of 1. Alas, it is not working and causing me a headache by returning the job with a job\_id of 1 when I was expecting an empty set. Here is the codeigniter code generating the query:
```
$this->db->join("tbl_jobs", "tbl_jobs.job_id = tbl_supplier_jobs.supplier_job_job_id");
$this->db->where_not_in("supplier_job_job_id", "SELECT delivery_item_job_id FROM tbl_delivery_items");
$result = $query->result_array();
echo $this->db->last_query();
return $result;
```
Here is the query it generates:
```
SELECT * FROM (`tbl_supplier_jobs`) JOIN `tbl_jobs` ON `tbl_jobs`.`job_id` = `tbl_supplier_jobs`.`supplier_job_job_id` WHERE `supplier_job_job_id` NOT IN ('SELECT delivery_item_job_id FROM tbl_delivery_items') AND `supplier_job_supplier_id` = '1' ORDER BY `tbl_jobs`.`job_number` DESC
```
And here is the data:
tbl\_supplier\_jobs
```
supplier_job_id | supplier_job_job_id | supplier_job_supplier_id
1 1 1
2 2 2
```
tbl\_jobs
```
job_id | job_number | job_description | job_delivered
1 1024 aaaaa 0
2 2048 bbbbb 0
```
tbl\_delivery\_items
```
delivery_item_id | delivery_item_delivery_id | delivery_item_job_id | delivery_item_toa | delivery_item_pallet_quantity | delivery_item_box_quantity
1 1 1 2014-08-18 16:23:04 2 1
```
Any ideas? | The problem is that the subquery is rendered as a string. You can see this clearly in the generated query that you supplied.
This seems to be a limitation in the `where_not_in` method of CodeIgniter. A possible solution, change the code to call the `where` method and render a slightly larger part of the query yourself:
```
$this->db->where("supplier_job_job_id NOT IN (SELECT delivery_item_job_id FROM tbl_delivery_items)");
``` | The query isn't executing the subquery it is using the string value:
```
`supplier_job_job_id` NOT IN (
'SELECT delivery_item_job_id FROM tbl_delivery_items'
)
```
Will check if `supplier_job_job_id` equals the string `'SELECT delivery_item_job_id FROM tbl_delivery_items'`.
You should consider a `LEFT JOIN` to `tbl_delivery_items` and a `WHERE` condition of `delivery_item_job_id IS NULL`.. which should be fairly easy in your framework. | MySql NOT IN failing to return empty set | [
"",
"mysql",
"sql",
"codeigniter",
"sql-in",
""
] |
I have this sample table on MSSQL Server
```
CREATE TABLE tbl (
value FLOAT,
formula NVARCHAR(50)
);
INSERT INTO tbl VALUES ('45.6452','/0.055');
```
Now I want to select from the table with the combined result of the 2 columns:
```
select <computation> as Result from tbl
```
what is the correct query to generate the result that i want?
I tried "Compute" and "Compute By" but they are not applicable.
Note: I'm NOT allowed to change the schema of the table since all im doing is to generate a report with the result of the 2 columns indicated above | You can try using `CASE WHEN` clause, however the calculations should be known beforehand (and also should be supported in SQL server):
```
select case left(formula,1)
when '/' then value / convert(float,stuff(formula,1,1,''))
when '*' then value * convert(float,stuff(formula,1,1,''))
when '+' then value + convert(float,stuff(formula,1,1,''))
when '-' then value - convert(float,stuff(formula,1,1,''))
when '%' then value % convert(float,stuff(formula,1,1,''))
end as Result
from tbl
```
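The same CASE dispatch can be sanity-checked with sqlite3 (using `substr`/`CAST` in place of SQL Server's `left`/`convert`; the expected value is simply `45.6452 / 0.055`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl(value REAL, formula TEXT)")
conn.execute("INSERT INTO tbl VALUES (45.6452, '/0.055')")
# Dispatch on the operator character, then apply it to the numeric remainder.
result = conn.execute("""
    SELECT CASE substr(formula, 1, 1)
             WHEN '/' THEN value / CAST(substr(formula, 2) AS REAL)
             WHEN '*' THEN value * CAST(substr(formula, 2) AS REAL)
             WHEN '+' THEN value + CAST(substr(formula, 2) AS REAL)
             WHEN '-' THEN value - CAST(substr(formula, 2) AS REAL)
           END AS Result
    FROM tbl
""").fetchone()[0]
print(round(result, 4))  # 829.9127
```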
## [Sql fiddle demo](http://sqlfiddle.com/#!3/53c2d/6) | You can try using this query
```
DECLARE @query VARCHAR(1000)
SET @query = ''
SELECT @query = @query + CASE WHEN @query = '' THEN '' ELSE ' UNION ALL' END + ' SELECT ' + CONVERT(VARCHAR, value) + formula + ' as [RESULT]' FROM tbl
EXEC(@query)
``` | how to compute value on columns with formula | [
"",
"sql",
"sql-server",
""
] |
I came across a SQL statement with an ORDER BY clause that I cannot understand.
```
SELECT ...
FROM ...
WHERE ...
JOIN ...
ORDER BY (
CASE
WHEN versions.effective_date IS NULL THEN
1
ELSE
0
END) DESC,
versions.effective_date DESC,
versions.name DESC,
versions.id DESC
```
Please explain the meaning of the ORDER BY clause to me. Thank you. | **Illustration:**
Supposing original data is ordered like this
```
effective_date, effective_date, name, id
3/1/2010 3/1/2010 ABC 1
1/1/2010 1/1/2010 ABC 2
2/1/2010 2/1/2010 ABC 3
NULL NULL ABC 4
NULL NULL ABC 5
NULL NULL ABC 6
```
**After ordering will be**
```
effective_date, effective_date, name, id
NULL NULL ABC 6
NULL NULL ABC 5
NULL NULL ABC 4
3/1/2010 3/1/2010 ABC 1
2/1/2010 2/1/2010 ABC 3
1/1/2010 1/1/2010 ABC 2
```
**Translation [how the order statement will be translated at run time]:**
```
effective_date, effective_date, name, id
1 NULL ABC 6
1 NULL ABC 5
1 NULL ABC 4
0 3/1/2010 ABC 1
0 2/1/2010 ABC 3
0 1/1/2010 ABC 2
``` | > ORDER BY (CASE WHEN versions.effective\_date IS NULL THEN 1 ELSE 0 END) DESC
If `versions.effective_date` is not available (`NULL`), those records will be **`listed first`**. All other records (`NOT NULL`) will be shown **`after them`** in the list. | SQL order by with case | [
"",
"mysql",
"sql",
"sql-server",
"sql-order-by",
""
] |
In Sql Server 2008
my select query result is like below.
```
Col1 Col2 Col3
------------------------------------------
1001 HO 160.00
1001 HO 40.00
1001 HO 200.00
1002 HO 10.00
1002 HO 130.00
1003 HO 10.00
1003 HO 130.00
1003 HO 130.00
1003 HO 230.00
```
Now, I want the select result as below (please find change in Col2)
```
Col1 Col2 Col3
------------------------------------------
1001 HO1 160.00
1001 HO2 40.00
1001 HO3 200.00
1002 HO1 10.00
1002 HO2 130.00
1003 HO1 10.00
1003 HO2 130.00
1003 HO3 130.00
1003 HO4 230.00
```
Experts out there... please help me in extracting the data like above. | I just created sample data and wrote the query against it, so you can implement it in your original query.
```
declare @t table(id int,Name varchar(10))
insert into @t (id,name)values (1,'HO')
insert into @t (id,name)values (1,'HO')
insert into @t (id,name)values (2,'HO')
insert into @t (id,name)values (2,'HO')
select id,name +CAST(row_number()OVER(PARTITION BY id order by name )AS VARCHAR)rn from @t
``` | Try this
```
select col1,
col2+cast(row_number() over (partition by col1 order by col1) as varchar(10)) as col2,
col3 from your_table
``` | Sequence numbers for individual values in column | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table `test` with a column `names` which has both integer and non-integer values
```
+------------+
| names |
+------------+
| 123 |
| 123abc |
| 89 |
| dkkjdk |
| dkdn |
+------------+
```
I would like to display the count of integer and non integer values in a single row like this
```
integer_count non_integer_count
2 3
```
I tried to print integer values using this query
```
select cast(names as signed) as int_val from test;
```
But I got this result
```
+---------+
| int_val |
+---------+
| 123 |
| 123 |
| 89 |
| 0 |
| 0 |
+---------+
```
The `names` field is a varchar(20). | Here is the solution:
```
SELECT
  COUNT(CASE WHEN NOT names REGEXP '^[0-9]+$' THEN 1 END) AS ALPHA,
  COUNT(CASE WHEN names REGEXP '^[0-9]+$' THEN 1 END) AS DIGIT
FROM test;
```
Output
```
+-------+-------+
| ALPHA | DIGIT |
+-------+-------+
| 3 | 2 |
+-------+-------+
``` | Use a sum of a condition based on regex:
```
select
sum(names rlike '^[0-9]+$') integer_count,
sum(names not rlike '^[0-9]+$') non_integer_count
from test
```
See [SQLFiddle](http://sqlfiddle.com/#!2/c60d2/3)
Some explanation:
* `rlike` is the regex version of `like`
* the regex `^[0-9]+$` matches if the *entire* value is one or more digits. Note the `^` and `$`, without which it would test for digit(s) *anywhere* in the value
* using `sum(condition)` counts how many times the condition was true, because (in mysql) true is 1 and false is 0 | display count of numeric and non numeric column value in a single row in mysql | [
"",
"mysql",
"sql",
""
] |
```
SELECT t.KeyID, t.VaR, t.DB
FROM
(
SELECT t1.[KeyID], t1.[VaR], '383' AS DB FROM Table1 AS t1
UNION ALL
SELECT t2.[KeyID], t2.[VaR], '55' AS DB FROM Table2 AS t2
) AS t
INNER JOIN
(
(
SELECT t3.[KeyID], t3.[VaR] FROM Table1 AS t3
UNION ALL
SELECT t4.[KeyID], t4.[VaR] FROM Table2 AS t4
) AS u
GROUP BY u.KeyID, u.VaR
) ON t.KeyID=u.KeyID AND t.VaR=u.VaR
ORDER BY t.KeyID, t.VaR, t.DB;
```
When I run the above SQL statement in MS Access 2010 I get "Syntax error in JOIN operation". However, if I remove the "GROUP BY" clause the query runs fine.
Any ideas? | This code should work:
```
SELECT t.KeyID, t.VaR, t.DB
FROM
(
SELECT t1.[KeyID], t1.[VaR], '383' AS DB FROM Table1 AS t1
UNION ALL
SELECT t2.[KeyID], t2.[VaR], '55' AS DB FROM Table2 AS t2
) AS t
INNER JOIN
(
select * from
(
SELECT t3.[KeyID], t3.[VaR] FROM Table1 AS t3
UNION ALL
SELECT t4.[KeyID], t4.[VaR] FROM Table2 AS t4
) AS u1
GROUP BY u1.KeyID, u1.VaR
) as u ON t.KeyID=u.KeyID AND t.VaR=u.VaR
ORDER BY t.KeyID, t.VaR, t.DB;
``` | You can only `GROUP BY` the subject of a `SELECT`, so you'd need to add an additional `SELECT` clause to get this to work, I think. Something like:
```
SELECT t.KeyID, t.VaR, t.DB
FROM (
SELECT t1.[KeyID], t1.[VaR], '383' AS DB FROM Table1 AS t1
UNION ALL
SELECT t2.[KeyID], t2.[VaR], '55' AS DB FROM Table2 AS t2
) AS t
INNER JOIN (
SELECT u.[KeyID], u.[VaR]
FROM (
SELECT t3.[KeyID], t3.[VaR] FROM Table1 AS t3
UNION ALL
SELECT t4.[KeyID], t4.[VaR] FROM Table2 AS t4
) AS u
GROUP BY u.KeyID, u.VaR
) ON t.KeyID=u.KeyID AND t.VaR=u.VaR
ORDER BY t.KeyID, t.VaR, t.DB;
``` | SQL Group By and Inner Join syntax | [
"",
"sql",
"ms-access",
"syntax",
"group-by",
"inner-join",
""
] |
I have an awkward SQL Puzzle that has bested me.
I am trying to generate a list of possible configurations of student blocks so I can fit their course choices into a timetable. A list of possible qualifications and blocks for a student could be as the following:
```
Biology A
Biology C
Biology D
Biology E
Chemistry B
Chemistry C
Chemistry D
Chemistry E
Chemistry F
Computing D
Computing F
Tutorial A
Tutorial B
Tutorial E
```
A possible solution of blocks for a student could be
```
Biology D
Chemistry C
Computing F
Tutorial E
```
How would I query the above dataset to produce all possible combinations of lessons and blocks for a student? I could then pare down the list removing the ones that clash and choose one that works. I estimate that in this instance there will be about 120 combinations in total.
I could imagine that it would be some kind of cross join. I have tried all sorts of solutions using window functions and cross apply etc but they have all had some kind of flaw. They all tend to get tripped up because each student has a different number of courses and each course has a different number of blocks.
Cheers for any help you can offer! I can paste in the gnarled mess of a query I have if necessary too!
Alex | For a fixed number of qualifications, the answer is relatively simple - the `CROSS JOIN` option from the previous answers will work perfectly.
However, if the number of qualifications is unknown, or likely to change in the future, hard-coding four `CROSS JOIN` operations won't work. In this case, the answer gets more complicated.
For small numbers of rows, you could use a variation of [this answer on DBA](https://dba.stackexchange.com/a/29666), which uses powers of two and bit comparisons to generate the combinations. However, this will be limited to a very small number of rows.
For larger numbers of rows, you can use a function to generate every combination of 'M' numbers from 'N' rows. You can then join this back to a `ROW_NUMBER` value computed on your source data to get the original row.
The function to generate the combinations *could* be written in TSQL, but it would make more sense to use SQLCLR if possible:
```
[SqlFunction(
DataAccess = DataAccessKind.None,
SystemDataAccess = SystemDataAccessKind.None,
IsDeterministic = true,
IsPrecise = true,
FillRowMethodName = "FillRow",
TableDefinition = "CombinationId bigint, Value int"
)]
public static IEnumerable Combinations(SqlInt32 TotalCount, SqlInt32 ItemsToPick)
{
if (TotalCount.IsNull || ItemsToPick.IsNull) yield break;
int totalCount = TotalCount.Value;
int itemsToPick = ItemsToPick.Value;
if (0 >= totalCount || 0 >= itemsToPick) yield break;
long combinationId = 1;
var result = new int[itemsToPick];
var stack = new Stack<int>();
stack.Push(0);
while (stack.Count > 0)
{
int index = stack.Count - 1;
int value = stack.Pop();
while (value < totalCount)
{
result[index++] = value++;
stack.Push(value);
if (index == itemsToPick)
{
for (int i = 0; i < result.Length; i++)
{
yield return new KeyValuePair<long, int>(
combinationId, result[i]);
}
combinationId++;
break;
}
}
}
}
public static void FillRow(object row, out long CombinationId, out int Value)
{
var pair = (KeyValuePair<long, int>)row;
CombinationId = pair.Key;
Value = pair.Value;
}
```
(Based on [this function](http://rosettacode.org/wiki/Combinations#C.23).)
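For intuition, the contract of this `Combinations` function (for each combination id, emit the k chosen indices out of n, in lexicographic order) matches Python's `itertools.combinations`, which makes a handy cross-check:

```python
from itertools import combinations

# Enumerate all ways to pick k = 2 indices out of n = 4, as the CLR function does.
n, k = 4, 2
combos = list(combinations(range(n), k))
print(combos)   # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(len(combos))  # 6, i.e. C(4, 2)
```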
Once the function is in place, generating the list of valid combinations is fairly easy:
```
DECLARE @Blocks TABLE
(
Qualification varchar(10) NOT NULL,
Block char(1) NOT NULL,
UNIQUE (Qualification, Block)
);
INSERT INTO @Blocks
VALUES
('Biology', 'A'),
('Biology', 'C'),
('Biology', 'D'),
('Biology', 'E'),
('Chemistry', 'B'),
('Chemistry', 'C'),
('Chemistry', 'D'),
('Chemistry', 'E'),
('Chemistry', 'F'),
('Computing', 'D'),
('Computing', 'F'),
('Tutorial', 'A'),
('Tutorial', 'B'),
('Tutorial', 'E')
;
DECLARE @Count int, @QualificationCount int;
SELECT
@Count = Count(1),
@QualificationCount = Count(DISTINCT Qualification)
FROM
@Blocks
;
WITH cteNumberedBlocks As
(
SELECT
ROW_NUMBER() OVER (ORDER BY Qualification, Block) - 1 As RowNumber,
Qualification,
Block
FROM
@Blocks
),
cteAllCombinations As
(
SELECT
C.CombinationId,
B.Qualification,
B.Block
FROM
dbo.Combinations(@Count, @QualificationCount) As C
INNER JOIN cteNumberedBlocks As B
ON B.RowNumber = C.Value
),
cteMatchingCombinations As
(
SELECT
CombinationId
FROM
cteAllCombinations
GROUP BY
CombinationId
HAVING
Count(DISTINCT Qualification) = @QualificationCount
And
Count(DISTINCT Block) = @QualificationCount
)
SELECT
DENSE_RANK() OVER(ORDER BY C.CombinationId) As CombinationNumber,
C.Qualification,
C.Block
FROM
cteAllCombinations As C
INNER JOIN cteMatchingCombinations As MC
ON MC.CombinationId = C.CombinationId
ORDER BY
CombinationNumber,
Qualification
;
```
This query will generate a list of 172 rows representing the 43 valid combinations:
```
1 Biology A
1 Chemistry B
1 Computing D
1 Tutorial E
2 Biology A
2 Chemistry B
2 Computing F
2 Tutorial E
...
```
---
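As a cross-check on the combinatorics, the enumeration can be brute-forced in a few lines of Python against the question's sample data; it confirms the 120 raw candidates and the 43 clash-free combinations:

```python
from itertools import product

# Block options transcribed from the question's sample data.
options = {
    "Biology":   ["A", "C", "D", "E"],
    "Chemistry": ["B", "C", "D", "E", "F"],
    "Computing": ["D", "F"],
    "Tutorial":  ["A", "B", "E"],
}
all_combos = list(product(*options.values()))             # 4*5*2*3 = 120 candidates
valid = [c for c in all_combos if len(set(c)) == len(c)]  # keep distinct-block picks only
print(len(all_combos), len(valid))  # 120 43
```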
In case you need the TSQL version of the `Combinations` function:
```
CREATE FUNCTION dbo.Combinations
(
@TotalCount int,
@ItemsToPick int
)
Returns @Result TABLE
(
CombinationId bigint NOT NULL,
ItemNumber int NOT NULL,
Unique (CombinationId, ItemNumber)
)
As
BEGIN
DECLARE @CombinationId bigint;
DECLARE @StackPointer int, @Index int, @Value int;
DECLARE @Stack TABLE
(
ID int NOT NULL Primary Key,
Value int NOT NULL
);
DECLARE @Temp TABLE
(
ID int NOT NULL Primary Key,
Value int NOT NULL Unique
);
SET @CombinationId = 1;
SET @StackPointer = 1;
INSERT INTO @Stack (ID, Value) VALUES (1, 0);
WHILE @StackPointer > 0
BEGIN
SET @Index = @StackPointer - 1;
DELETE FROM @Temp WHERE ID >= @Index;
-- Pop:
SELECT @Value = Value FROM @Stack WHERE ID = @StackPointer;
DELETE FROM @Stack WHERE ID = @StackPointer;
SET @StackPointer -= 1;
WHILE @Value < @TotalCount
BEGIN
INSERT INTO @Temp (ID, Value) VALUES (@Index, @Value);
SET @Index += 1;
SET @Value += 1;
-- Push:
SET @StackPointer += 1;
INSERT INTO @Stack (ID, Value) VALUES (@StackPointer, @Value);
If @Index = @ItemsToPick
BEGIN
INSERT INTO @Result (CombinationId, ItemNumber)
SELECT @CombinationId, Value
FROM @Temp;
SET @CombinationId += 1;
SET @Value = @TotalCount;
END;
END;
END;
Return;
END
```
It's virtually the same as the SQLCLR version, except for the fact that TSQL doesn't have stacks or arrays, so I've had to fake them with table variables. | One giant cross join?
```
select * from tablea,tableb,tablec,tabled
```
That will actually work for what you need, where tablea is the biology entries, b is chem, c is computing and d is tutorial. You can specify the joins a bit better:
```
select * from tablea cross join tableb cross join tablec cross join tabled
```
Technically both statements are the same. These are all cross joins, so the comma version above is simpler; in more complicated queries, you'll want to use the second form so you can be explicit about where you are cross joining vs. inner/left joining.
You can replace the 'table' entries with a select union statement to give the values you are looking for in query form:
```
select * from
(select 'biology' as 'course','a' as 'class' union all select 'biology','c' union all select 'biology','d' union all select 'biology','e') a cross join
(select 'Chemistry' as 'course','b' as 'class' union all select 'Chemistry','c' union all select 'Chemistry','d' union all select 'Chemistry','e' union all select 'Chemistry','f') b cross join
(select 'Computing' as 'course','a' as 'class' union all select 'Computing','c') c cross join
(select 'Tutorial ' as 'course','a' as 'class' union all select 'Tutorial ','b' union all select 'Tutorial ','e') d
```
There are your 120 results (4\*5\*3\*2). | Generating all possible combinations of a timetable using an SQL Query | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
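The 4 × 5 × 2 × 3 arithmetic behind that answer is easy to check outside the database. Below is a minimal sketch using Python's built-in sqlite3 module; the table names and class values are invented to mirror the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One single-column table per course; row counts mirror the answer: 4, 5, 2, 3.
courses = {
    "biology": ["a", "c", "d", "e"],
    "chemistry": ["b", "c", "d", "e", "f"],
    "computing": ["a", "c"],
    "tutorial": ["a", "b", "e"],
}
for name, classes in courses.items():
    cur.execute(f"CREATE TABLE {name} (class TEXT)")
    cur.executemany(f"INSERT INTO {name} VALUES (?)", [(c,) for c in classes])

# The comma-separated FROM list and the explicit CROSS JOIN form are equivalent.
comma_rows = cur.execute(
    "SELECT * FROM biology, chemistry, computing, tutorial").fetchall()
cross_rows = cur.execute(
    "SELECT * FROM biology CROSS JOIN chemistry "
    "CROSS JOIN computing CROSS JOIN tutorial").fetchall()

print(len(comma_rows))  # 4 * 5 * 2 * 3 = 120
```

Both statements return the same 120-row Cartesian product, which is the answer's point about the two syntaxes being interchangeable here.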
I really need help this time. I've searched everywhere and tried numerous solutions, but I can't seem to solve my problem. I've been having it for a week now, so now I'm asking. Please help.
```
ExecuteSQL("select * from account_database where idnum= @idnum and password= @pass")
'Dim idnum As New SqlParameter("@idnum", SqlDbType.VarChar)
'Dim pass As New SqlParameter("@pass", SqlDbType.VarChar, -1)
'idnum.Value = idnumtxt.Text
'pass.Value = output
'cmd.Parameters.Add(idnum)
'cmd.Parameters.Add(pass)
cmd.Parameters.Add("@idnum", SqlDbType.VarChar).Value = idnumtxt.Text
cmd.Parameters.Add("@pass", SqlDbType.VarChar, -1, "password").Value = output
```
The commented-out lines are the code I have tried; there is other code I implemented that also failed.
The error message reads `"Must declare scalar variable @idnum"`.
I really need help. Please shed some light on this.
This is what the function ExecuteSQL contains:
```
Public Shared Sub ExecuteSQL(ByVal strSQL As String)
Try
If connection.State = 1 Then ' check connection if open
connection.Close()
End If
' connection
connection.ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Users\Jr\documents\visual studio 2010\Projects\VotingSystem\VotingSystem\Resources\Database.mdf;Integrated Security=True;User Instance=True"
connection.Open()
Dim rowAffected As Integer = 0
'cmd = New SqlCommand(strSQL, connection) 'buiding the sql command with the use of strSQL (sql statement) and connection (database connection)
cmd = New SqlCommand(strSQL, connection)
DARec = New SqlDataAdapter(strSQL, connection) 'buiding the adapter
cb = New SqlCommandBuilder(DARec)
rowAffected = cmd.ExecuteNonQuery() 'executing of sql statement
successID = 1
connection.Close()
Catch ex As Exception
successID = 0
MsgBox(ex.Message)
End Try
End Sub
```
Thanks and please help. | The problem is simply that you're doing things in the wrong order: you're attempting to execute your SQL statement before defining the parameters. Don't call ExecuteSQL() until you've defined your parameters. It likely breaks on the following line in ExecuteSQL():
```
' See how many rows the query will impact
' Since @idnum and @pass are not defined until the
' ExecuteSQL() sub is finished, this line breaks.
rowAffected = cmd.ExecuteNonQuery()
```
You need to build your SqlCommand() to first include the select statement, and then use AddWithValue() on the parameters you've defined in the string. Defining the datatypes is also unnecessary because your database already knows, and form validation should handle input.
```
' Define your connection
connection.ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Users\Jr\documents\visual studio 2010\Projects\VotingSystem\VotingSystem\Resources\Database.mdf;Integrated Security=True;User Instance=True"
' Setup your SQL Command.
cmd = New SqlCommand("select * from account_database where idnum = @idnum and password = @pass", connection)
' Define the parameters you've created
cmd.Parameters.AddWithValue("@idnum", idnumtxt.Text)
cmd.Parameters.AddWithValue("@pass", output)
' Now execute your statement
connection.open()
cmd.ExecuteNonQuery()
connection.close()
```
And here is a better version of the above code, now that you understand the order of events. This ensures that the connection is closed in the event of an exception.
```
strConn = "Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Users\Jr\documents\visual studio 2010\Projects\VotingSystem\VotingSystem\Resources\Database.mdf;Integrated Security=True;User Instance=True"
strSQL = "select * from account_database where idnum = @idnum and password = @pass"
Using connection As New SqlConnection(strConn), cmd As New SqlCommand(strSQL, connection)
cmd.Parameters.Add("@idnum", SqlDbType.VarChar).Value = idnumtxt.Text
cmd.Parameters.Add("@pass", SqlDbType.VarChar, -1, "password").Value = output
connection.Open()
cmd.ExecuteNonQuery()
End Using
``` | Try this:
```
cmd.Parameters.AddWithValue("idnum", idnumtxt.Text)
```
Reference:
* [SqlParameterCollection.AddWithValue @ MSDN](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparametercollection.addwithvalue(v=vs.110).aspx). | vb.net sql parameterization | [
"",
"sql",
"vb.net",
"vb.net-2010",
""
] |
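The ordering point in the accepted answer (bind the parameters to the command, then execute) is the same in every client library. Here is a minimal sketch of the pattern using Python's sqlite3 module; the table, columns, and sample values are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_database (idnum TEXT, password TEXT)")
conn.execute("INSERT INTO account_database VALUES ('A001', 'hash123')")

# Parameters are bound in the same execute() call -- the equivalent of adding
# cmd.Parameters BEFORE cmd.ExecuteNonQuery(), never after.
sql = "SELECT * FROM account_database WHERE idnum = ? AND password = ?"
row = conn.execute(sql, ("A001", "hash123")).fetchone()
print(row)  # ('A001', 'hash123')
```

If the statement were executed before the values were supplied, the driver would complain about an undefined placeholder, which is exactly the "Must declare scalar variable" symptom in the question.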
I have this SQL query using UNION:
```
SELECT Sum(scores_ofexpert.score) FROM scores_ofexpert
WHERE scores_ofexpert.user_id = '30' AND scores_ofexpert.score > '0'
UNION
SELECT Sum(scores_ofexpert.score) FROM scores_ofexpert
WHERE scores_ofexpert.user_id = '30' AND scores_ofexpert.score < '0'
UNION
SELECT Sum(scores_ofexpert.score) FROM scores_ofexpert
WHERE scores_ofexpert.user_id = '30'
```
This extracts the sum of negative scores, the sum of positive scores, and the sum of all scores for a user, like this:
```
-------------
|-10 |
-------------
|30 |
-------------
|20 |
-------------
```
But I want the result like this:
```
-------------------------------------
|n_sum |p_sum |sum |
-------------------------------------
|-10 |30 |20 |
-------------------------------------
```
How do I do it? | You can use `CASE`:
```
SELECT
SUM(CASE WHEN s.score < 0 THEN s.score ELSE 0 END) n_sum,
SUM(CASE WHEN s.score > 0 THEN s.score ELSE 0 END) p_sum,
SUM(s.score) `sum`
FROM scores_ofexpert s
WHERE s.user_id = '30'
``` | This is another answer
```
SELECT
( SELECT Sum(scores_ofexpert.score)
FROM scores_ofexpert
WHERE
scores_ofexpert.user_id = '30' AND
scores_ofexpert.score > '0'
) p_sum,
( SELECT
Sum(scores_ofexpert.score)
FROM
scores_ofexpert
WHERE
scores_ofexpert.user_id = '30' AND
scores_ofexpert.score < '0'
) n_sum,
( SELECT
Sum(scores_ofexpert.score)
FROM
scores_ofexpert
WHERE
scores_ofexpert.user_id = '30'
) sum_all
``` | joining 3 field in mysql | [
"",
"mysql",
"sql",
"union",
""
] |
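The conditional-aggregation trick in the accepted answer can be sketched end to end with Python's built-in sqlite3 module; the sample scores below are invented so that the query reproduces the -10 / 30 / 20 result from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores_ofexpert (user_id TEXT, score INTEGER)")
conn.executemany("INSERT INTO scores_ofexpert VALUES (?, ?)",
                 [("30", 10), ("30", 20), ("30", -4), ("30", -6)])

# One pass over the table, three conditional sums -- the CASE approach above.
n_sum, p_sum, total = conn.execute("""
    SELECT SUM(CASE WHEN score < 0 THEN score ELSE 0 END),
           SUM(CASE WHEN score > 0 THEN score ELSE 0 END),
           SUM(score)
    FROM scores_ofexpert
    WHERE user_id = '30'
""").fetchone()
print(n_sum, p_sum, total)  # -10 30 20
```

This scans the table once, whereas the three-subquery version in the rejected answer scans it three times.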
Having difficulty with inner joins when trying to display from 3 tables. They are structured as in the picture below:
<http://pbrd.co/1odLBZy>
What I'm trying to achieve is to select the following from the **task** table:
* ProjectID (as project.Name)
* WorkerID (as user.Username)
* Name
* OrderInProject
* TimeSpent
* Description
* DueDate
How would the SQL query be structured? | ```
SELECT p.Name AS ProjectID, u.Username AS WorkerID, t.Name, t.OrderInProject,
t.TimeSpent, t.Description, t.DueDate
FROM task t
INNER JOIN user u ON t.WorkerID = u.UserID
INNER JOIN project p ON t.ProjectID = p.ProjectID
```
If you also want to get the `Username` of the `ProjectManagerID` then use the following:
```
SELECT p.Name AS ProjectID, u.Username AS WorkerID, t.Name, t.OrderInProject,
t.TimeSpent, t.Description, t.DueDate, u2.Username AS ProjectManager
FROM task t
INNER JOIN user u ON t.WorkerID = u.UserID
INNER JOIN project p ON t.ProjectID = p.ProjectID
INNER JOIN user u2 ON p.ProjectManagerID = u2.UserID
``` | In order to answer something like this, it is best to start with a single table, then add the columns and other tables, testing each addition along the way. That said, the following should be close:
```
SELECT t.Name,
t.OrderInProject,
t.TimeSpent,
t.Description,
t.DueDate,
p.Name,
u.Username
FROM task t
INNER JOIN project p ON p.ProjectID = t.ProjectID
INNER JOIN user u ON u.UserId = t.WorkerID
```
Best of luck! | Inner join from 3 tables | [
"",
"sql",
"join",
"inner-join",
""
] |
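The join shape above can be exercised against an in-memory SQLite database; the sample rows are invented, and only the columns needed for the joins are included:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (UserID INTEGER, Username TEXT);
    CREATE TABLE project (ProjectID INTEGER, Name TEXT);
    CREATE TABLE task (ProjectID INTEGER, WorkerID INTEGER, Name TEXT);
    INSERT INTO user VALUES (1, 'alice');
    INSERT INTO project VALUES (10, 'Website');
    INSERT INTO task VALUES (10, 1, 'Build login page');
""")

# Each join swaps an ID on the task row for the readable name it points to.
row = conn.execute("""
    SELECT p.Name, u.Username, t.Name
    FROM task t
    INNER JOIN user u ON t.WorkerID = u.UserID
    INNER JOIN project p ON t.ProjectID = p.ProjectID
""").fetchone()
print(row)  # ('Website', 'alice', 'Build login page')
```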
I need to create a stored procedure that:
1. Accepts a table name as a parameter
2. Finds its dependencies (FKs)
3. Removes them
4. Truncates the table
I created the following so far, based on <http://www.mssqltips.com/sqlservertip/1376/disable-enable-drop-and-recreate-sql-server-foreign-keys/>. My problem is that the following script successfully does 1 and 2 and generates the queries to alter tables, but does not actually execute them. In other words, how can I execute the resulting "Alter Table ..." queries to actually remove the FKs?
```
CREATE PROCEDURE DropDependencies(@TableName VARCHAR(50))
AS
BEGIN
SELECT 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name
FROM sys.foreign_keys WHERE referenced_object_id=object_id(@TableName)
END
EXEC DropDependencies 'TableName'
```
Any idea is appreciated!
Update:
I added the cursor to the SP but I still get an error:
"Msg 203, Level 16, State 2, Procedure DropRestoreDependencies, Line 75
The name 'ALTER TABLE [dbo].[ChildTable] DROP CONSTRAINT [FK\_\_ChileTable\_\_ParentTable\_\_745C7C5D]' is not a valid identifier."
Here is the updated SP:
```
CREATE PROCEDURE DropRestoreDependencies(@schemaName sysname, @tableName sysname)
AS
BEGIN
SET NOCOUNT ON
DECLARE @operation VARCHAR(10)
SET @operation = 'DROP' --ENABLE, DISABLE, DROP
DECLARE @cmd NVARCHAR(1000)
DECLARE
@FK_NAME sysname,
@FK_OBJECTID INT,
@FK_DISABLED INT,
@FK_NOT_FOR_REPLICATION INT,
@DELETE_RULE smallint,
@UPDATE_RULE smallint,
@FKTABLE_NAME sysname,
@FKTABLE_OWNER sysname,
@PKTABLE_NAME sysname,
@PKTABLE_OWNER sysname,
@FKCOLUMN_NAME sysname,
@PKCOLUMN_NAME sysname,
@CONSTRAINT_COLID INT
DECLARE cursor_fkeys CURSOR FOR
SELECT Fk.name,
Fk.OBJECT_ID,
Fk.is_disabled,
Fk.is_not_for_replication,
Fk.delete_referential_action,
Fk.update_referential_action,
OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name,
schema_name(Fk.schema_id) AS Fk_table_schema,
TbR.name AS Pk_table_name,
schema_name(TbR.schema_id) Pk_table_schema
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id --inner join
WHERE TbR.name = @tableName
AND schema_name(TbR.schema_id) = @schemaName
OPEN cursor_fkeys
FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER
WHILE @@FETCH_STATUS = 0
BEGIN
-- create statement for dropping FK and also for recreating FK
IF @operation = 'DROP'
BEGIN
-- drop statement
SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] DROP CONSTRAINT [' + @FK_NAME + ']'
EXEC @cmd
-- create process
DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT
-- create cursor to get FK columns
DECLARE cursor_fkeyCols CURSOR FOR
SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name,
COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id INNER JOIN
sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID
WHERE TbR.name = @tableName
AND schema_name(TbR.schema_id) = @schemaName
AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008
ORDER BY Fk_Cl.constraint_column_id
OPEN cursor_fkeyCols
FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME
SET @COUNTER = 1
SET @FKCOLUMNS = ''
SET @PKCOLUMNS = ''
WHILE @@FETCH_STATUS = 0
BEGIN
IF @COUNTER > 1
BEGIN
SET @FKCOLUMNS = @FKCOLUMNS + ','
SET @PKCOLUMNS = @PKCOLUMNS + ','
END
SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']'
SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']'
SET @COUNTER = @COUNTER + 1
FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME
END
CLOSE cursor_fkeyCols
DEALLOCATE cursor_fkeyCols
END
FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER
END
CLOSE cursor_fkeys
DEALLOCATE cursor_fkeys
END
```
For running use:
```
EXEC DropRestoreDependencies dbo, ParentTable
``` | Use a cursor to go through your SELECT results, populating a variable with the single column, and executing that query with EXEC(@YourVariable). Be sure to use parens around the variable! | The issue is that you are only preparing the SQL statement and not executing it (I think)
```
CREATE PROCEDURE DropDependencies(@TableName VARCHAR(50))
AS
BEGIN
DECLARE @SQL nvarchar(max)
SELECT @SQL = 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name
FROM sys.foreign_keys WHERE referenced_object_id=object_id(@TableName)
EXEC @SQL
END
EXEC DropDependencies 'TableName'
```
Whenever using EXEC with constructed strings, though, ensure you aren't vulnerable to SQL injection attacks. | Stored procedure to remove FK of a given table | [
"",
"sql",
"sql-server-2012",
""
] |
This is my SQL query that I want to convert into a Rails model query:
```
sql : SELECT "vouchers".* FROM "vouchers" WHERE "vouchers"."business_id" = 31 AND '2014-08-20' between start_date and end_date;
```
I tried this model query, but it's not working. Please help me get it working:
```
Voucher.where(["#{Date.today} BETWEEN ? AND ?",:start_date,:end_date])
Assuming that start\_date and end\_date are columns, try this version:
```
Voucher.where("current_date between start_date and end_date").where(business_id: 31)
``` | Try in this format:
```
data = ModelName.where("today >= from_date AND today <= to_date")
``` | how to write Rails model query between date | [
"",
"sql",
"ruby-on-rails",
"database",
"ruby-on-rails-3",
"activerecord",
""
] |
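The accepted answer's trick is to keep the date test inside SQL (`current_date BETWEEN start_date AND end_date`) rather than interpolating Ruby's `Date.today` into the string. The same predicate can be sketched with Python's sqlite3 module, using SQLite's `date('now')` as a stand-in for `current_date`; the voucher rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vouchers "
             "(business_id INTEGER, start_date TEXT, end_date TEXT)")
conn.executemany("INSERT INTO vouchers VALUES (?, ?, ?)",
                 [(31, "2000-01-01", "2999-12-31"),   # window spans today
                  (31, "2000-01-01", "2000-12-31")])  # expired window

# Ask whether today falls between the row's two dates; today's date never
# touches the SQL string, so there is nothing to quote or format.
rows = conn.execute(
    "SELECT * FROM vouchers WHERE business_id = ? "
    "AND date('now') BETWEEN start_date AND end_date", (31,)).fetchall()
print(len(rows))  # 1
```

ISO `YYYY-MM-DD` strings compare correctly as text, which is why the BETWEEN works here without any casting.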
I am trying to check for dates, but after running the query below, it displays no results. Could someone recommend the correct syntax?
```
SELECT TOP 10 * FROM MY_DATABASE.AGREEMENT
WHERE end_dt=12/31/9999
**12/31/9999** might look like a date to you, but to the database it's a calculation:
**12 divided by 31 divided by 9999**, and because this involves INTEGER division, the result is the INTEGER **0**.
So you end up comparing a DATE to an INT, which results in typecasting the DATE to an INT.
The only reliable way to write a date literal in Teradata is **DATE** followed by a string with a **YYYY-MM-DD** format:
**DATE '9999-12-31'**
Similar for **TIME '12:34:56.1'** and **TIMESTAMP '2014-08-20 12:34:56.1'** | Is it a date column? Then try `where end_dt = '9999-12-31'`. | check for dates syntax - teradata SQL | [
"",
"sql",
"teradata",
""
] |
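The integer-division point in the accepted answer can be reproduced in plain Python, using floor division as a stand-in for the database's INTEGER division:

```python
from datetime import date

# 12/31/9999 read as arithmetic is integer division, evaluated left to right,
# so the WHERE clause silently compares end_dt to 0 instead of to a date.
quotient = 12 // 31 // 9999
print(quotient)  # 0

# The unambiguous form spells the date out in ISO YYYY-MM-DD order, which is
# what Teradata's DATE '9999-12-31' literal does.
end_dt = date.fromisoformat("9999-12-31")
print(end_dt.year, end_dt.month, end_dt.day)  # 9999 12 31
```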
why does this work,
```
select*
from some_table
WHERE some_column_name like '%i%'
```
and not this?
```
select*
from some_table
WHERE
some_column_name like (select ''''+'%' +value +'%' + '''' as val
from [dbo].[fn_Split](' i this is a test testing Chinese undefined',' ')
where idx = 0)
```
I am trying to search for individual words instead of the whole phrase. The split function above splits the string on space characters and puts the results into a table with two columns, idx and value. | The LIKE operator takes a string as its argument. It cannot be used on a table, which I assume your function returns.
I think what you want to do is JOIN to the function, and then check where LIKE fn.Value:
```
select *
from some_table t
INNER JOIN (select value as val
from [dbo].[fn_Split](' i this is a test testing Chinese undefined',' ')
where idx = 0) f
ON t.some_column_name like '%'+f.val+'%'
```
If your subquery is guaranteed to return only one result, you could try putting the percent wildcards around it instead of inside it:
```
LIKE '%' + (YourSubQuery) + '%'
``` | One possible reason is because you are appending single quotes onto the beginning and end of the string, and none of the values actually store single quotes in the string.
Another reason is might not work is because the subquery returns more than one row or zero rows. The function `fn_split()` is your own function, so I don't know what it returns. You have a subquery in a context where it can return at most one row and one column. That is called a scalar subquery. If the subquery returns more than one row, you will get an error. If the subquery returns no rows -- for instance, if `idx` starts counting at 1 rather than 0 -- then it will return `NULL` which fails the test.
If you want to find a match this way, I would recommend `exists`:
```
select t.*
from some_table t
where exists (select 1 as val
from [dbo].[fn_Split](' i this is a test testing Chinese undefined',' ') s
where s.idx = 0 and
t.some_column_name like '%' + value + '%'
);
``` | sql, using a subquery in the like operator | [
"",
"sql",
"sql-server",
""
] |
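The accepted answer's join-on-LIKE pattern can be sketched with sqlite3; here a tiny `words` table stands in for the user's `fn_Split` function, and the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (some_column_name TEXT)")
conn.executemany("INSERT INTO some_table VALUES (?)",
                 [("this is a test",), ("nothing relevant",)])
conn.execute("CREATE TABLE words (idx INTEGER, value TEXT)")
conn.execute("INSERT INTO words VALUES (0, 'test')")  # stands in for fn_Split

# Join on LIKE: the wildcards wrap the joined value, not the other way round,
# and the pattern lives in the ON clause instead of a scalar subquery.
rows = conn.execute("""
    SELECT t.some_column_name
    FROM some_table t
    INNER JOIN words f ON t.some_column_name LIKE '%' || f.value || '%'
    WHERE f.idx = 0
""").fetchall()
print(rows)  # [('this is a test',)]
```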
Is there any SQL statement to replace everything in a string with an 'X'? The strings aren't all the same length, which makes it a bit tricky. I haven't been able to find anything that does this except the function below, but it takes a long time when I pass in 'a-z0-9', since it has to search on all of those characters when I really just want to replace everything, no matter what it is.
```
CREATE FUNCTION [dbo].[cfn_StripCharacters]
(
    @String NVARCHAR(MAX),
    @MatchExpression VARCHAR(255) = 'a-z0-9'
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
    SET @MatchExpression = '%[' + @MatchExpression + ']%'

    WHILE PatIndex(@MatchExpression, @String) > 0
        SET @String = Stuff(@String, PatIndex(@MatchExpression, @String), 1, 'X')

    RETURN @String
END
```
For example, the data column looks like this, and I want to replace the whole string with X's:
975g -> XXXX
ryth5 -> XXXXX
1234vvsdf5 -> XXXXXXXXXX
test1234 -> XXXXXXXX | If this is SQL Server, you can use the `REPLICATE` function and simply replicate `x` the `LEN()` of the string.
```
SELECT REPLICATE('x', LEN(@String))
```
[sqlFiddle](http://sqlfiddle.com/#!3/f084e/1)
*Edit - Looks like this is also available in MySQL via `REPEAT()` but I haven't tested* | You can use Replicate and Len functions to replace all characters.
```
declare @str varchar(20)
set @str = 'abc'
select @str = REPLICATE('X', len(@str))
select @str
``` | SQL Replace all characters in a string | [
"",
"sql",
"string",
"replace",
""
] |
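The accepted answer's one-liner amounts to "repeat one character as many times as the string is long." Below is a sketch of that idea, first in plain Python and then in SQLite, which lacks `REPLICATE` but can fake it with the `zeroblob`/`hex`/`replace` idiom; the sample values come from the question:

```python
import sqlite3

samples = ["975g", "ryth5", "1234vvsdf5", "test1234"]

# The whole trick: repeat one character as many times as the string is long.
masked = ["x" * len(s) for s in samples]
print(masked)  # ['xxxx', 'xxxxx', 'xxxxxxxxxx', 'xxxxxxxx']

# The same idea in a database without REPLICATE (SQLite here):
# hex(zeroblob(n)) yields n '00' pairs, which replace() turns into n 'x's.
conn = sqlite3.connect(":memory:")
row = conn.execute(
    "SELECT replace(hex(zeroblob(length('975g'))), '00', 'x')").fetchone()
print(row[0])  # xxxx
```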
I have a table containing following column
```
APP_ID(VarChar),USER_ID(VarChar), ROLE_ID(Number),....
app1 user1 3
app1 user2 3
app2 user2 4
app2 user3 3
app3 user1 7
```
As you can see, there is no ID field and I cannot add one.
If I use the query below, it gives me a unique number that is the same every time I run it:
```
SELECT owa_opt_lock.checksum(a.APP_ID || a.USER_ID || a.ROLE_ID) as ID
```
I want to know whether it is safe to use this. | No. You might have a collision. For instance, these two rows would have the same id (app1user212):
```
app1 user2 12
app1 user21 2
```
This is easily fixed by using delimiters. So, the following would be fine with most tables:
```
(a.APP_ID || '|' || a.USER_ID || '|' || a.ROLE_ID) as ID
``` | Can you use the `rownum`?
```
UPDATE table
SET table_key = rownum;
``` | Generating unique number in Oracle | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
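The collision in the accepted answer is pure string concatenation, so it can be shown without a database:

```python
# Without delimiters, two different rows can concatenate to the same key.
row_a = ("app1", "user2", 12)
row_b = ("app1", "user21", 2)

plain_a = "".join(str(v) for v in row_a)
plain_b = "".join(str(v) for v in row_b)
print(plain_a == plain_b)  # True -- both rows become 'app1user212'

# A delimiter that cannot appear inside the values removes the ambiguity.
delim_a = "|".join(str(v) for v in row_a)
delim_b = "|".join(str(v) for v in row_b)
print(delim_a == delim_b)  # False
```

Any checksum computed over the undelimited strings inherits the same collision, which is why the delimiter matters more than the hashing function.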
I am reviewing a query which should pull in a list of transactions within a month, either based on its **created\_at** date OR its **returned\_at** date.
```
SELECT *
FROM (`transaction`)
WHERE `created_at` >= '2014-08-01'
AND `created_at` <= '2014-08-31'
OR `returned_at` >= '2014-08-01'
AND `returned_at` <= '2014-08-31'
```
My initial thought was that this would not work as intended since we do not have parentheses around the grouped `WHERE` logic. In other words, it would evaluate each condition one after another.
This is how I would have written the statement to ensure order of operations (I added parenthesis):
```
SELECT *
FROM (`transaction`)
WHERE
(`created_at` >= '2014-08-01' AND `created_at` <= '2014-08-31')
OR
(`returned_at` >= '2014-08-01' AND `returned_at` <= '2014-08-31')
```
Oddly enough, the initial query appears to be working, though I am not sure whether that's because of my data set. Could someone confirm that both SQL statements will always return the same results? | Writing it explicitly is always a good idea, but yeah, [`AND` comes before `OR`](http://dev.mysql.com/doc/refman/5.5/en/operator-precedence.html)
For your pleasure, checked them all explicitly for you:
```
$ php -r '$d = 0; while($d < pow(2,4)){$bin = sprintf("%04b",$d); echo $bin." => ".@mysql_result(mysql_query("SELECT ".$bin[0]." AND ".$bin[1]." OR ".$bin[2]." AND ".$bin[3]),0,0).PHP_EOL;$d++;}'
0000 => 0
0001 => 0
0010 => 0
0011 => 1
0100 => 0
0101 => 0
0110 => 0
0111 => 1
1000 => 0
1001 => 0
1010 => 0
1011 => 1
1100 => 1
1101 => 1
1110 => 1
1111 => 1
```
So yeah, operator precedence and all. I would still throw that query back, if only because it's hard for the next guy to see at a glance what it does. | It's operator precedence. AND is guaranteed to be evaluated before OR.
"",
"mysql",
"sql",
"database",
""
] |
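Python happens to share SQL's precedence here (AND binds tighter than OR), so the sixteen-row truth table from the accepted answer can be reproduced in a few lines:

```python
import itertools

# AND binds tighter than OR, so  a AND b OR c AND d  groups as
# (a AND b) OR (c AND d) for every one of the 16 truth assignments.
all_agree = all(
    (a and b or c and d) == ((a and b) or (c and d))
    for a, b, c, d in itertools.product([False, True], repeat=4)
)
print(all_agree)  # True
```

The equivalence holds for every input, which is why the original query happened to return correct results; the parenthesized form is still preferable for readability.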
Got a table in SQL Server which contains a `varchar` column with date data. Unfortunately the dates are in a whole slew of different formats.
```
2012-05-01
27/05/2012
07MAY2014
19/07/13
```
There may be others, but that's all I've encountered so far.
I need to squeeze these into a `datetime` column into another table, so I've been trying to select them as standard date-time values. At first, I thought that'd be easy:
```
UPDATE myTable
SET myDateColumn = CONVERT(DATETIME, myDateColumn, 103)
WHERE ISDATE(myDateColumn) = 0
```
But the trouble is that SQL Server treats `dd/mm/yy` and `dd/mm/yyyy` as separate formats. The former is code 3, and the latter is code 103. So whichever way I run that update, it chokes on the opposite format.
Is there any way I can select/update based on the date format, and get all these dates converted to a single valid `DateTime` format? | My guess is that you just have to try to differentiate between the different classes and handle each case in the appropriate way. Something like this:
```
declare @tab table (d varchar(20))
insert @tab values ('2012-05-01'),('27/05/2012'),('07MAY2014'),('19/07/13')
select
case
when isnumeric(left(d,4)) = 1 then cast(d as date)
when len(d) = 10 then convert(date, d, 103)
when len(d) = 8 then convert(date, d, 3)
when charindex('/',d) = 0 and isnumeric(d) = 0 then convert(date, d, 106)
end as [date]
from @tab
```
Output:
```
date
----------
2012-05-01
2012-05-27
2014-05-07
2013-07-19
```
It might not be that efficient, but I presume this is a one-off operation. I didn't write it as an update statement, but the query should be easy to adapt, and you should consider adding the converted date as a new proper datetime column if possible in my opinion.
Edit: here's the corresponding update statement:
```
update @tab
set d =
case
when isnumeric(left(d,4)) = 1 then cast(d as date)
when len(d) = 10 then convert(date, d, 103)
when len(d) = 8 then convert(date, d, 3)
when charindex('/',d) = 0 and isnumeric(d) = 0 then convert(date, d, 106)
end
from @tab
``` | In SQL Server 2012, you could use `try_convert()`. Otherwise, you could multiple updates:
```
UPDATE myTable
SET myDateColumn = CONVERT(DATETIME, myDateColumn, 103)
WHERE ISDATE(myDateColumn) = 0 AND MyDateColumn like '[0-9][0-9]/[0-9][0-9]/[0-9][0-9][0-9][0-9]';
UPDATE myTable
SET myDateColumn = CONVERT(DATETIME, myDateColumn, 3)
WHERE ISDATE(myDateColumn) = 0 AND MyDateColumn like '[0-9][0-9]/[0-9][0-9]/[0-9][0-9]';
```
Note: the `where` clause will probably work here for the `update`. It does not work for a `select`. You may need to use a `case` as well:
```
UPDATE myTable
SET myDateColumn = (CASE WHEN ISDATE(myDateColumn) = 0 AND MyDateColumn like '[0-9][0-9]/[0-9][0-9]/[0-9][0-9][0-9][0-9]'
THEN CONVERT(DATETIME, myDateColumn, 103)
ELSE myDateColumn
END)
WHERE ISDATE(myDateColumn) = 0 AND MyDateColumn like '[0-9][0-9]/[0-9][0-9]/[0-9][0-9][0-9][0-0]'
```
Also, you are putting the values back in the same column so you are overwriting the original data -- and you have another implicit conversion back to a string I would strongly recommend that you add another column to the table with a `datetime` data type and put the correctly-typed value there. | How to standardise a column of mixed date formats in T-SQL | [
"",
"sql",
"sql-server",
"t-sql",
"date",
"datetime",
""
] |
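The same try-each-format strategy translates directly to other tools. Here is a sketch using Python's `datetime.strptime`, with one format per class of date seen in the question (the format list is an assumption based on those four samples):

```python
from datetime import datetime

# Try each known layout in turn -- the analogue of branching on the string's
# shape before calling CONVERT with the matching style code.
FORMATS = ["%Y-%m-%d",   # 2012-05-01
           "%d/%m/%Y",   # 27/05/2012  (style 103)
           "%d%b%Y",     # 07MAY2014   (style 106-like)
           "%d/%m/%y"]   # 19/07/13    (style 3)

def parse_mixed(s):
    for fmt in FORMATS:
        try:
            return datetime.strptime(s, fmt).date()
        except ValueError:
            pass
    return None  # leave unparseable values alone

parsed = [parse_mixed(s)
          for s in ["2012-05-01", "27/05/2012", "07MAY2014", "19/07/13"]]
print([d.isoformat() for d in parsed])
```

Note the four-digit `%d/%m/%Y` must be tried before the two-digit `%d/%m/%y`, mirroring the need to distinguish style 103 from style 3 in the T-SQL answers.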
I'm basically trying to reverse what this question is asking...
[SQL Server query xml attribute for an element value](https://stackoverflow.com/questions/12913724/sql-server-query-xml-attribute-for-an-element-value)
I need to produce a result set of "row" elements that contain a group of "field" elements with an attribute that defines the key.
```
<resultset statement="" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<row>
<field name="id">1</field>
<field name="version">0</field>
<field name="property">My Movie</field>
<field name="release_date">2012-01-01</field>
<field name="territory_code">FR</field>
<field name="territory_description">FRANCE</field>
<field name="currency_code">EUR</field>
</row>
<row>
<field name="id">2</field>
<field name="version">0</field>
<field name="property">My Sequel</field>
<field name="release_date">2014-03-01</field>
<field name="territory_code">UK</field>
<field name="territory_description">United Kingdom</field>
<field name="currency_code">GBP</field>
</row>
</resultset>
```
I've got a query that returns this...
```
<resultset statement="" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<row>
<id>1</id>
<version>0</version>
<property>My Movie</property>
<release_date>2012-01-01</release_date>
<territory_code>FR</territory_code>
<territory_description>FRANCE</territory_description>
<currency_code>EUR</currency_code>
</row>
<row>
<id>2</id>
<version>0</version>
<property>My Sequel</property>
<release_date>2014-03-01</release_date>
<territory_code>UK</territory_code>
<territory_description>UNITED KINGDOM</territory_description>
<currency_code>GBP</currency_code>
</row>
</resultset>
```
Using `FOR XML PATH ('row'), ROOT ('resultset')` in my SQL statement.
What am I missing? Thanks. | It's a bit involved in SQL Server - the normal behavior is what you're seeing - the column names will be used as XML element names.
If you *really* want all XML elements to be named the same, you'll have to use code something like this:
```
SELECT
'id' AS 'field/@name',
id AS 'field',
'',
'version' AS 'field/@name',
version AS 'field',
'',
'property' AS 'field/@name',
property AS 'field',
'',
... and so on ....
FROM Person.Person
FOR XML PATH('row'),ROOT('resultset')
```
This is necessary to make sure the column name is used as the `name` attribute on the `<field>` element, and the empty string are necessary so that the SQL XML parser doesn't get confused about which `name` attribute belongs to what element...... | You can do this without having to specify the columns as constants and that will allow you to also use `select *`. It is a bit more complicated than the answer provided by marc\_s and it will be quite a lot slower to execute.
```
select (
select T.X.value('local-name(.)', 'nvarchar(128)') as '@name',
T.X.value('text()[1]', 'nvarchar(max)') as '*'
from C.X.nodes('/X/*') as T(X)
for xml path('field'), type
)
from (
select (
select T.*
for xml path('X'), type
) as X
from dbo.YourTable as T
) as C
for xml path('row'), root('resultset')
```
[SQL Fiddle](http://sqlfiddle.com/#!6/95f2f/1)
The query creates a derived table where each row has a XML that looks something like this:
```
<X>
<ID>1</ID>
<Col1>1</Col1>
<Col2>2014-08-21</Col2>
</X>
```
That XML is then shredded using `nodes()` and `local-name(.)` to create the shape you want. | SQL Server generating XML with generic field elements | [
"",
"sql",
"sql-server",
"xml",
""
] |
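The element-plus-name-attribute shape can also be produced outside SQL Server. Here is a sketch with Python's `xml.etree.ElementTree`, using a couple of invented rows:

```python
import xml.etree.ElementTree as ET

rows = [{"id": "1", "property": "My Movie", "currency_code": "EUR"},
        {"id": "2", "property": "My Sequel", "currency_code": "GBP"}]

# Same shape as the desired FOR XML output: every column becomes a <field>
# element whose name attribute carries the column name.
resultset = ET.Element("resultset", statement="")
for r in rows:
    row_el = ET.SubElement(resultset, "row")
    for name, value in r.items():
        field = ET.SubElement(row_el, "field", name=name)
        field.text = value

xml_text = ET.tostring(resultset, encoding="unicode")
print(xml_text)
```

Because the column name travels in an attribute rather than in the element tag, consumers can parse every row with one generic `<field>` rule, which is the whole motivation for this layout.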