| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am trying to execute my whole stored procedure as a string since I have to make the condition dynamic.
Here is my code:
```
CREATE PROCEDURE SP1
(@VoucherType varchar(10),
@ProductID bigint,
@BrandID bigint)
AS
BEGIN
DECLARE @Condition as varchar(300)
SET @Condition=' WHERE VoucherType=@VoucherType '
IF (@ProductID<>-1)
BEGIN
SET @Condition=@Condition+' AND ProductID='+cast(@ProductID as varchar)
END
IF (@BrandID<>-1)
BEGIN
SET @Condition=@Condition+' AND BrandID='+cast(@BrandID as varchar)
END
EXEC('SELECT * FROM Products '+@Condition)
END
```
Filtering by `ProductID` and `BrandID` is optional (if their values are not -1, those conditions are added to the WHERE clause), while filtering by `VoucherType` is compulsory. The problem is that I can't get the value of the parameter `@VoucherType` in this line:
```
SET @Condition=' WHERE VoucherType=@VoucherType '
```
The error says there is no column named 'SI' (which was my input for `@VoucherType`).
How can I get the value of that parameter? | Just do it without dynamic SQL:
```
SELECT * FROM Products WHERE VoucherType=@VoucherType AND
(@ProductID=-1 OR ProductID=@ProductID)
AND
(@BrandID=-1 OR BrandID=@BrandID)
``` | If the logic is as you describe, you can do this without dynamic SQL - just using the parameters directly in the where clause by using a comparison with your default values for each parameter (essentially making that part of the where clause a no-op if the values are at the value you use to signal for "don't filter").
```
CREATE PROCEDURE Sp1
(
@VoucherType VARCHAR(10),
@ProductID BIGINT,
@BrandID BIGINT
)
AS
BEGIN
SELECT *
FROM Products
WHERE (@VoucherType IS NULL OR VoucherType = @VoucherType)
AND (@ProductID = -1 OR ProductID = @ProductID)
AND (@BrandID = -1 OR BrandID = @BrandID)
END
```
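As a sanity check that this sentinel-value pattern behaves as described, here is a minimal sketch using Python's built-in `sqlite3` with an in-memory database; the table name and rows are made up for illustration:

```python
import sqlite3

# Hypothetical Products table mirroring the question's columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (VoucherType TEXT, ProductID INTEGER, BrandID INTEGER)")
conn.executemany("INSERT INTO Products VALUES (?, ?, ?)",
                 [("SI", 1, 10), ("SI", 2, 10), ("PO", 1, 20)])

def find_products(voucher_type, product_id=-1, brand_id=-1):
    # -1 acts as the "don't filter" sentinel, exactly as in the procedure.
    return conn.execute(
        """SELECT * FROM Products
           WHERE VoucherType = ?
             AND (? = -1 OR ProductID = ?)
             AND (? = -1 OR BrandID = ?)""",
        (voucher_type, product_id, product_id, brand_id, brand_id),
    ).fetchall()

print(len(find_products("SI")))                 # 2: both SI rows match
print(len(find_products("SI", product_id=2)))   # 1: optional filter applied
```

No dynamic SQL is needed, and the voucher type is passed as a real parameter rather than concatenated into the statement.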
Note you should probably only select the columns you need rather than `*` though.
(If you don't want VoucherType to be optional as I've made it here, just remove the NULL comparison) | Using varchar parameter in exec() | [
"",
"sql",
"sql-server",
"stored-procedures",
"parameter-passing",
""
] |
I have a database that is used to record patient information for a small clinic. We use MS SQL Server 2008 as the backend. The patient table contains the following columns:
```
Id int identity(1,1),
FamilyName varchar(30),
FirstName varchar (20),
DOB datetime,
AddressLine1 varchar (50),
AddressLine2 varchar (50),
State varchar (20),
Postcode varchar (4),
NextOfKin varchar (20),
Homephone varchar (20),
Mobile varchar (20)
```
Occasionally the staff register a new patient, unaware that the patient already has a record in the system. We end up with several thousand duplicate records.
What I would like to do is present a list of patients who have duplicate records for the staff to merge during quiet time. We consider 2 records to be duplicates if they have exactly the same FamilyName, FirstName and DOB. What I am doing at the moment is using a subquery to return the records as follows:
```
SELECT FamilyName,
FirstName,
DOB,
AddressLine1,
AddressLine2,
State,
Postcode,
NextOfKin,
HomePhone,
Mobile
FROM
Patients AS p1
WHERE Id IN
(
SELECT Max(Id)
FROM Patients AS p2
GROUP BY
FamilyName,
FirstName,
DOB HAVING COUNT(Id) > 1
)
```
This produces the result but the performance is terrible. Is there any better way to do it? The only requirements is I need to show all the fields in the Patients table as the user of the system wants to view all the details before making the decision whether to merge the records or not. | This will output every row which has a duplicate, based on firstname and lastname
```
SELECT DISTINCT t1.*
FROM Table AS t1
INNER JOIN Table AS t2
ON t1.firstname = t2.firstname
AND t1.lastname = t2.lastname
AND t1.id <> t2.id
``` | ```
WITH CTE
AS
(
SELECT Id, FamilyName, FirstName, DOB,
ROW_NUMBER() OVER(PARTITION BY FamilyName, FirstName ,DOB ORDER BY Id) AS DuplicateCount
FROM PatientTable
)
select * from CTE where DuplicateCount > 1
``` | Listing duplicated records using T SQL | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
Let's say I have a User which has a status and the user's status can be 'active', 'suspended' or 'inactive'.
Now, when creating the database, I was wondering: would it be better to have a column with the string value (with an enum type, or a rule applied), so it's easier both to query and to know the current user status, or are joins better, meaning I should join to a UserStatuses table that contains the possible user statuses?
Assuming, of course statuses can not be created by the application user.
**Edit:** Some clarification
1. I would **NOT** use string joins; it would be an int join to the UserStatuses PK
2. My primary concern is performance
3. The possible statuses **ARE STATIC** and will **NEVER** change | On most systems it makes little or no difference to performance. Personally I'd use a short string for clarity and join that to a table with more detail, as you suggest.
```
create table intLookup
(
pk integer primary key,
value varchar(20) not null
)
insert into intLookup (pk, value) values
(1,'value 1'),
(2,'value 2'),
(3,'value 3'),
(4,'value 4')
create table stringLookup
(
pk varchar(4) primary key,
value varchar(20) not null
)
insert into stringLookup (pk, value) values
(1,'value 1'),
(2,'value 2'),
(3,'value 3'),
(4,'value 4')
create table masterData
(
stuff varchar(50),
fkInt integer references intLookup(pk),
fkString varchar(4) references stringLookup(pk)
)
create index i on masterData(fkInt)
create index s on masterData(fkString)
insert into masterData
(stuff, fkInt, fkString)
select COLUMN_NAME, (ORDINAL_POSITION %4)+1,(ORDINAL_POSITION %4)+1 from INFORMATION_SCHEMA.COLUMNS
go 1000
```
This results in 300K rows.
```
select
*
from masterData m inner join intLookup i on m.fkInt=i.pk
select
*
from masterData m inner join stringLookup s on m.fkString=s.pk
```
On my system (SQL Server)
- the query plans, I/O and CPU are identical
- execution times are identical.
- The lookup table is read and processed once (in either query)
There is ***NO*** difference using an int or a string. | I think, as a whole, everyone has hit on important components of the answer to your question. However, they all have good points which should be taken together, rather than separately.
1. As logixologist mentioned, a healthy amount of Normalization is generally considered to increase performance. However, in contrast to logixologist, I think your situation is the perfect time for normalization. Your problem seems to be one of normalization. In this case, using a numeric key as Santhosh suggested which then leads back to a code table containing the decodes for the statuses will result in less data being stored per record. This difference wouldn't show in a small Access database, but it would likely show in a table with millions of records, each with a status.
2. As David Aldridge suggested, you might find that normalizing this particular data point will result in a more controlled end-user experience. Normalizing the status field will also allow you to edit the status flag at a later date in one location and have that change perpetuated throughout the database. If your boss is like mine, then you might have to change the Status of Inactive to Closed (and then back again next week!), which would be more work if the status field was not normalized. By normalizing, it's also easier to enforce referential integrity. If a status key is not in the Status code table, then it can't be added to your main table.
3. If you're concerned about the performance when querying in the future, then there are some different things to consider. To pull back status, if it's normalized, you'll be adding a join to your query. That join will probably not hurt you in any sized recordset but I believe it will help in larger recordsets by limiting the amount of raw text that must be handled. If your primary concern is performance when querying the data, here's a great resource on how to optimize queries: <http://www.sql-server-performance.com/2007/t-sql-where/> and I think you'll find that a lot of the rules discussed here will also apply to any inclusion criteria you enforce in the join itself.
Hope this helps!
Christopher | Is it better to have int joins instead of string columns? | [
"",
"sql",
"database-design",
"database-normalization",
""
] |
I am getting an error when attempting to execute a dynamic SQL string in MS Access (I am using VBA to write the code).
Error:
> Run-time error '3075':
> Syntax error (missing operator) in query expression "11/8/2013' FROM tbl\_sample'.
Here is my code:
```
Sub UpdateAsOfDate()
Dim AsOfDate As String
AsOfDate = Form_DateForm.txt_AsOfDate.Value
AsOfDate = Format(CDate(AsOfDate))
Dim dbs As Database
Set dbs = OpenDatabase("C:\database.mdb")
Dim strSQL As String
strSQL = " UPDATE tbl_sample " _
& "SET tbl_sample.As_of_Date = '" _
& AsOfDate _
& "' " _
& "FROM tbl_sample " _
& "WHERE tbl_sample.As_of_Date IS NULL ;"
dbs.Execute strSQL
dbs.Close
End Sub
```
I piped the strSQL to a MsgBox so I could see the finished SQL string, and it looks like it would run without error. What's going on? | Get rid of `& "FROM tbl_sample " _`. A from clause isn't valid in your update statement. | You really should be using a parameterized query because
* they're safer,
* you don't have to mess with delimiters for date and text values,
* you don't have to worry about escaping quotes within text values, and
* they handle dates properly so your code doesn't mangle dates on machines set to `dd-mm-yyyy` format.
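The same points apply outside Access: with a parameterized query the driver binds the value, so there is no string concatenation and no delimiter trouble. A minimal illustration with Python's built-in `sqlite3` (hypothetical table mirroring the question's names):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_sample (As_of_Date TEXT)")
conn.executemany("INSERT INTO tbl_sample VALUES (?)",
                 [(None,), (None,), ("2013-01-01",)])

# The driver binds the value; no quoting or date-format mangling to worry about.
as_of = date(2013, 11, 8).isoformat()
cur = conn.execute(
    "UPDATE tbl_sample SET As_of_Date = ? WHERE As_of_Date IS NULL",
    (as_of,),
)
print(cur.rowcount)  # 2: only the NULL rows were touched
```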
In your case you would use something like this:
```
Sub UpdateAsOfDate()
Dim db As DAO.Database, qdf As DAO.QueryDef
Dim AsOfDate As Date
AsOfDate = DateSerial(1991, 1, 1) ' test data
Set db = OpenDatabase("C:\Users\Public\Database1.accdb")
Set qdf = db.CreateQueryDef("", _
"PARAMETERS paramAsOfDate DateTime; " & _
"UPDATE tbl_sample SET As_of_Date = [paramAsOfDate] " & _
"WHERE As_of_Date IS NULL")
qdf!paramAsOfDate = AsOfDate
qdf.Execute
Set qdf = Nothing
db.Close
Set db = Nothing
End Sub
``` | error when executing a dynamic sql string | [
"",
"sql",
"ms-access",
"vba",
""
] |
Suppose I have a dataset as
```
----------------------
col1 | col2 | col3 |
----------------------
a b 3
c d 2
----------------------
```
Now Is there a way in SQL to select
```
----------------------
colx | coly | colz |
----------------------
a b 1
a b 2
a b 3
c d 1
c d 2
----------------------
```
i.e. col1 and col2 repeated col3 number of times. | ```
SELECT DISTINCT t.col1 AS colx, t.col2 AS coly, level AS colz
FROM tablee t
CONNECT BY level <= t.col3
ORDER BY t.col1, t.col2, level
```
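Note that `CONNECT BY LEVEL` is Oracle-specific. The same row multiplication can be written portably with a recursive CTE; as a sketch, here it is against SQLite via Python, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, col3 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [("a", "b", 3), ("c", "d", 2)])

# Each base row seeds the recursion; rows repeat until colz reaches col3.
rows = conn.execute("""
    WITH RECURSIVE expand(colx, coly, colz, n) AS (
        SELECT col1, col2, 1, col3 FROM t
        UNION ALL
        SELECT colx, coly, colz + 1, n FROM expand WHERE colz < n
    )
    SELECT colx, coly, colz FROM expand ORDER BY colx, coly, colz
""").fetchall()
for r in rows:
    print(r)  # five rows in total: a/b repeated 3 times, c/d repeated twice
```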
fiddle: <http://sqlfiddle.com/#!4/01f5b/12> | Please try:
```
with T (colx , coly , colz , mxcol3) as
(
select col1, col2, 1 col3, max(col3) over (partition by col1, col2) mxcol3
from YourTab
union all
select colx , coly , colz +1 colz , mxcol3
from T
where colz +1<=mxcol3
)
select
colx, coly, colz
From T
order by colx, coly, colz;
``` | Select repeated data based on column value | [
"",
"sql",
"oracle",
"iteration",
"calculated-columns",
"repeat",
""
] |
Is there any way to lock only a single row of a table (in SQL Server 2008)?
I am trying something like this:
```
BEGIN TRAN
select TOP 1 * from [TableName] with (ROWLOCK,UPDLOCK)
COMMIT
```
But it seems like SQL Server is ignoring ROWLOCK and acquiring an IU lock on the page and an IX lock on the table.
Your help will be appreciated :) | With the UPDLOCK hint, you are requesting an update lock; that's why your SELECT is stuck waiting for the previous transaction to complete.
You should specify HOLDLOCK but not UPDLOCK:
```
select TOP 1 * from t1 with (ROWLOCK, HOLDLOCK)
```
This will keep a shared lock and let other transactions select the data, but not update it: if another session tries to update a row that is locked with this shared lock, it will have to wait until you release the lock (i.e., end your transaction).
You can view locks with this query:
```
SELECT request_session_id ,
resource_type ,
DB_NAME(resource_database_id) AS DatabaseName ,
OBJECT_NAME(resource_associated_entity_id) AS TableName ,
request_mode ,
request_type ,
request_status
FROM sys.dm_tran_locks AS L
JOIN sys.all_objects AS A ON L.resource_associated_entity_id = A.object_id
``` | ```
update Production.Location with (ROWLOCK)
set CostRate = 100.00
where LocationID = 1
```
Use rowlock when you want to update that records. | How to lock single row only for update | [
"",
"sql",
"sql-server",
""
] |
I am looking to use a regular expression to capture the last value in a string. I have copied an example of the data I am looking to parse. I am using oracle syntax.
Example Data:
```
||CULTURE|D0799|D0799HTT|
||CULTURE|D0799|D0799HTT||
```
I am looking to strip out the last value before the last set of pipes:
```
D0799HTT
D0799HTT
```
I am able to create a regexp\_substr that returns the CULTURE:
```
REGEXP_SUBSTR(c.field_name, '|[^|]+|')
```
but I have not been able to figure out how to start at the end look for either one or two pipes, and return the values I'm looking for. Let me know if you need more information. | Consider the following Regex...
```
(?<=\|)[\w\d]*?(?=\|*$)
```
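One caveat: Oracle's `REGEXP_SUBSTR` does not support lookbehind or lookahead, so this pattern only works in PCRE-style engines. A quick check with Python's `re` against the sample strings from the question:

```python
import re

# Lookbehind requires a preceding pipe; the lazy body stops at the first
# position where only pipes remain before end-of-string.
pattern = r"(?<=\|)[\w\d]*?(?=\|*$)"

for s in ("||CULTURE|D0799|D0799HTT|", "||CULTURE|D0799|D0799HTT||"):
    print(re.search(pattern, s).group(0))  # D0799HTT both times
```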
Good Luck! | You can try this:
```
select rtrim(regexp_substr('||CULTURE|D0799|D0799HTT||',
'[[:alnum:]]+\|+$'), '|')
from dual;
```
Or like this:
```
select regexp_replace('||CULTURE|D0799|D0799HTT||',
'(^|.*\|)([[:alnum:]]+)\|+$', '\2')
from dual;
```
[Here is a sqlfiddle demo](http://www.sqlfiddle.com/#!4/d41d8/20673) | Using Regular Expression Oracle | [
"",
"sql",
"regex",
"oracle",
""
] |
I am trying to get this query to return a record for each year instead of returning a single record for everything in the table.
```
SELECT
JAN
, FEB
, MAR
, APR
, MAY
, JUN
, JUL
, AUG
, SEP
, OCT
, NOV
, [DEC]
,JAN + FEB + MAR + APR + MAY + JUN + JUL + AUG + SEP + OCT + NOV + [DEC] AS TOTAL
FROM
(SELECT DISTINCT
(SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 1) AS JAN
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 2) AS FEB
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 3) AS MAR
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 4) AS APR
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 5) AS MAY
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 6) AS JUN
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 7) AS JUL
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 8) AS AUG
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 9) AS SEP
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 10) AS OCT
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 11) AS NOV
, (SELECT COUNT(*) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 12) AS [DEC]
FROM RET.tbl_Record)x
```
This returns 1 record (I know that is what it is supposed to do), but I would like it to return a record for each year. I'm just not sure how to accomplish this.
**EDIT:**
```
SELECT
JAN
, FEB
, MAR
, APR
, MAY
, JUN
, JUL
, AUG
, SEP
, OCT
, NOV
, [DEC]
,JAN + FEB + MAR + APR + MAY + JUN + JUL + AUG + SEP + OCT + NOV + [DEC] AS TOTAL
FROM
(SELECT
(SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 1) AS JAN
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 2) AS FEB
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 3) AS MAR
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 4) AS APR
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 5) AS MAY
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 6) AS JUN
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 7) AS JUL
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 8) AS AUG
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 9) AS SEP
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 10) AS OCT
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 11) AS NOV
, (SELECT COUNT(dt_updated) FROM RET.tbl_Record WHERE MONTH(dt_updated) = 12) AS [DEC]
FROM RET.tbl_Record
GROUP BY YEAR(dt_updated))x
```
This now returns 3 records, which is the correct number, but each record contains the same values (it counts all three years in every record). | This will solve, for example, all of the months in 2011, 2012 and 2013, with far less repetitive code.
```
DECLARE @years TABLE(y INT);
INSERT @years SELECT 2011 UNION ALL SELECT 2012 UNION ALL SELECT 2013;
;WITH m(m) AS
(
SELECT TOP (12) ROW_NUMBER() OVER (ORDER BY [object_id]) - 1
FROM sys.all_objects ORDER BY [object_id]
),
dates(y,m) AS
(
SELECT y.y, DATEADD(MONTH, m.m, DATEADD(YEAR, y.y - 1900, 0)) FROM m
CROSS JOIN @years AS y
),
s([YEAR],m,c) AS
(
SELECT d.y, LEFT(UPPER(DATENAME(MONTH, d.m)),3), COUNT(r.dt_updated)
FROM dates AS d LEFT OUTER JOIN RET.tbl_Record AS r
ON r.dt_updated >= d.m AND r.dt_updated < DATEADD(MONTH, 1, d.m)
GROUP BY d.y, DATENAME(MONTH, d.m)
),
n AS
(
SELECT * FROM s PIVOT (MAX(c) FOR m IN
(JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,[DEC])) AS p
)
SELECT *,Total = JAN+FEB+MAR+APR+MAY+JUN+JUL+AUG+SEP+OCT+NOV+[DEC] FROM n
ORDER BY [YEAR];
```
Need to solve for different years? No problem, just change the hard-coded insert into `@years`.
Need it to be dynamic? Again, no problem; this will solve for every year found in the table:
```
INSERT @years SELECT DISTINCT YEAR(dt_updated) FROM RET.tbl_Record;
```
Need it to be ever more dynamic (e.g. the most recent three years in the table):
```
INSERT @years SELECT DISTINCT TOP (3) YEAR(dt_updated)
FROM RET.tbl_Record ORDER BY YEAR(dt_updated) DESC;
```
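The underlying fix, grouping by the year extracted from the date, is easy to sanity-check in any engine. For example, with SQLite's `strftime` and made-up dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_Record (dt_updated TEXT)")
conn.executemany("INSERT INTO tbl_Record VALUES (?)",
                 [("2011-01-05",), ("2011-03-09",), ("2012-01-02",), ("2013-07-15",)])

# One output row per distinct year, each with its own count.
rows = conn.execute("""
    SELECT strftime('%Y', dt_updated) AS yr, COUNT(*)
    FROM tbl_Record
    GROUP BY yr
    ORDER BY yr
""").fetchall()
print(rows)  # [('2011', 2), ('2012', 1), ('2013', 1)]
```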
For average, sorry, you're on your own (you're changing requirements on me way too late in the game). My suggestion: do that in your reporting tool and/or presentation tier. | perhaps :
GROUP BY YEAR(dt\_updated) | Return a record for each year | [
"",
"sql",
"t-sql",
"date",
"sql-server-2005",
""
] |
Say I have a query with several joins across three tables:
```
SELECT
main_data.id,
main_data.dt,
main_data.seq_num,
main_data.sale_amt,
main_data.sale_cd,
promo.promo_cd,
payment.card,
payment.priority
FROM
main_data
INNER JOIN promo
ON promo.id = main_data.id
AND main_data.dt >= promo.start_dt
AND main_data.dt <= promo.end_dt
INNER JOIN payment
ON payment.sale_cd = main_data.sale_cd
AND payment.card = main_data.card
WHERE
main_data.dt BETWEEN '2013-10-12' AND '2013-10-12'
```
Basically, sales are tied to a form of payment (`payment`) and a promotion (`promo`). There are a few problems with mapping promo codes to eligible payments (one-to-many relationships).
At this point, there are possible duplicate records from `main-data`. Therefore, I need to use the `payment.priority` that has the lowest value. How can I extract only the line with the lowest value for that field? I tried nesting this as a sub-query but couldn't make it work properly. The database itself is totally static and I'm unable to change the schema in any way. | You could try this. The row\_number function groups the items in the PAYMENT table by sale\_cd, then orders the entries by priority asc. Thus, row\_num = '1' should give you the lowest value for priority grouped by sale\_cd.
```
WITH CTE (CARD, PRIORITY, SALE_CD, ROW_NUM)
AS
(
SELECT CARD
, PRIORITY
, SALE_CD
, ROW_NUMBER() OVER(PARTITION BY SALE_CD ORDER BY PRIORITY ASC) AS ROW_NUM
FROM PAYMENT
)
SELECT
main_data.id,
main_data.dt,
main_data.seq_num,
main_data.sale_amt,
main_data.sale_cd,
promo.promo_cd,
CTE.card,
CTE.priority
FROM
main_data
INNER JOIN promo
ON promo.id = main_data.id
AND main_data.dt >= promo.start_dt
AND main_data.dt <= promo.end_dt
INNER JOIN CTE CTE
ON CTE.sale_cd = main_data.sale_cd
AND CTE.card = main_data.card
WHERE
main_data.dt BETWEEN '2013-10-12' AND '2013-10-12'
AND CTE.ROW_NUM = '1'
``` | You have mentioned as main attributes give duplicates, so I have assumed these to be GROUP BY columns
```
WITH A AS
(
SELECT
main_data.id,
main_data.dt,
main_data.seq_num,
main_data.sale_amt,
main_data.sale_cd,
promo.promo_cd,
payment.card,
payment.priority
, ROW_NUMBER() OVER(PARTITION BY main_data.id, main_data.dt, main_data.seq_num, main_data.sale_amt, main_data.sale_cd ORDER BY payment.priority) AS RN
FROM
main_data
INNER JOIN promo
ON promo.id = main_data.id
AND main_data.dt >= promo.start_dt
AND main_data.dt <= promo.end_dt
INNER JOIN payment
ON payment.sale_cd = main_data.sale_cd
AND payment.card = main_data.card
WHERE
main_data.dt BETWEEN '2013-10-12' AND '2013-10-12'
)
SELECT * FROM A
WHERE RN = 1
``` | SQL to retrieve only the minimum value in query with multiple joins | [
"",
"sql",
"sql-server",
""
] |
I want to select a set of records from a SQL Server table.
In SQL Server, I tried this:
```
select TOP 40,60 *
from sometable
order by name
```
I want to get the 40th to the 60th rows.
Thank you. | I did it with this query:
```
WITH temp AS
( SELECT *, ROW_NUMBER() OVER (ORDER BY name) AS 'RowNumber'
FROM sometable
)
SELECT * FROM temp WHERE RowNumber BETWEEN 40 AND 60
```
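Engines that support `LIMIT ... OFFSET` can express the same window more directly. A sketch in SQLite through Python, with 100 made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (name TEXT)")
conn.executemany("INSERT INTO sometable VALUES (?)",
                 [(f"name{i:03d}",) for i in range(1, 101)])

# Rows 40 through 60 inclusive: skip 39 rows, then take 21.
rows = conn.execute(
    "SELECT name FROM sometable ORDER BY name LIMIT 21 OFFSET 39"
).fetchall()
print(rows[0][0], rows[-1][0], len(rows))  # name040 name060 21
```

The `ROW_NUMBER()` CTE above is the standard way to get the same result on SQL Server versions without `OFFSET ... FETCH`.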
Thank you for all the comments. | LIMIT exists in Oracle but with different arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
```
SELECT * FROM sometbl Order by name LIMIT 40,100; # Retrieve rows 41-100
```
Or you can proceed as;
```
SELECT * FROM
(
SELECT st.*, Row_Number() OVER (ORDER BY name) rn FROM sometable st
)
WHERE rn BETWEEN 40 AND 60;
``` | Get some set of records in table in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have one field which contains data like `4563******3245`. When I execute my SQL query, the row is inserted successfully, but in the MySQL database it shows only `4563`.
My SQL query is:
```
insert into mytable ('myfield') values ('4563******3245');
```
Can anyone tell me where the problem is?
Thank you. | ```
insert into mytable ('myfield') values ('4563******3245');
```
It is working fine.
Make sure your column type is VARCHAR. | You can't store text in a column of a number data type. You have to change your data type to `char(14)`.
```
ALTER TABLE your_table MODIFY myfield CHAR(14);
``` | How to insert * symbol in mysql using php | [
"",
"mysql",
"sql",
""
] |
I'm trying to create a script for basic database and login/user creation for SQL Server.
When I run my script, I get many errors like:
> Must declare the scalar variable "@varname"
when the variable is used in an `EXEC` statement for the second time during the script execution.
```
declare
@dbname nvarchar(50) ,
@userName nvarchar(50) ,
@password nvarchar(50);
SET @dbname = N'DBNAME';
SET @userName = N'DBUSERNAME';
SET @password= N'PASSWORD';
Begin transaction
IF EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = @dbname)
BEGIN
exec ('
DROP login '+ @userName + '
DROP USER '+ @userName + '
alter database '+ @dbname + '
SET OFFLINE
WITH ROLLBACK IMMEDIATE;
ALTER DATABASE '+ @dbname + '
SET ONLINE;
DROP DATABASE '+ @dbname)
END
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
EXEC ('CREATE DATABASE '+ @dbname)
GO
EXEC (@dbname+ '.[dbo].[sp_fulltext_database] @action = ''disable''')
GO
EXEC ('ALTER DATABASE '+ @dbname + ' SET COMPATIBILITY_LEVEL = 100')
GO
...
EXEC ('ALTER DATABASE '+ @dbname + ' SET DB_CHAINING OFF')
GO
EXEC('create login '+ @userName + 'with password = '+ @password);
EXEC('use '+ @dbname+ '
create user '+ @userName + ' for login '+ @userName);
EXEC ('EXEC sp_addrolemember ''db_owner'', '+ @userName)
commit transaction
```
Have I to use [sp\_executesql](http://msdn.microsoft.com/en-us/library/ms188001%28v=SQL.90%29.aspx) for every query? Is there a shorter solution? | Thanks to [Raj](https://stackoverflow.com/questions/19950982/must-declare-the-scalar-variable-varname-in-database-creation-script#comment-29693253) and [Vaibhav Parmar](https://stackoverflow.com/questions/19950982/must-declare-the-scalar-variable-varname-in-database-creation-script#comment-29693778) comments I changed my script and now it works fine.
> The main error was the GO statement that should be removed because it
> is used to group SQL commands into batches which are sent to the
> server together. The commands included in the batch, that is, the set
> of commands since the last GO command or the start of the session,
> must be logically consistent. You can't define a variable in one batch
> and then use it in another since the scope of the variable is limited
> to the batch in which it's defined.
>
> The second error was that I tried to use transaction with statement
> that don't allow them
I leave the corrected script there as reference:
```
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
SET ANSI_PADDING ON
declare
@dbname nvarchar(50) ,
@userName nvarchar(50) ,
@password nvarchar(50);
SET @dbname = N'DATABASENAME';
SET @userName = N'DBUSERNAME';
SET @password= N'DBUSERPASSWORD';
IF EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = @dbname)
BEGIN
exec ('
DROP USER '+ @userName + '
DROP login '+ @userName + '
alter database '+ @dbname + '
SET OFFLINE
WITH ROLLBACK IMMEDIATE;
ALTER DATABASE '+ @dbname + '
SET ONLINE;
DROP DATABASE '+ @dbname)
END
EXEC ('CREATE DATABASE '+ @dbname);
EXEC (@dbname+ '.[dbo].[sp_fulltext_database] @action = ''disable''');
EXEC ('ALTER DATABASE '+ @dbname + ' SET COMPATIBILITY_LEVEL = 100');
EXEC ('ALTER DATABASE '+ @dbname + ' SET ANSI_NULL_DEFAULT OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET ANSI_NULLS ON');
EXEC ('ALTER DATABASE '+ @dbname + ' SET ANSI_PADDING ON');
EXEC ('ALTER DATABASE '+ @dbname + ' SET AUTO_CLOSE OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET AUTO_SHRINK OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET QUOTED_IDENTIFIER ON');
EXEC ('ALTER DATABASE '+ @dbname + ' SET RECOVERY FULL');
EXEC ('ALTER DATABASE '+ @dbname + ' SET PAGE_VERIFY CHECKSUM');
EXEC ('ALTER DATABASE '+ @dbname + ' SET ANSI_WARNINGS ON');
EXEC ('ALTER DATABASE '+ @dbname + ' SET ARITHABORT ON');
EXEC ('ALTER DATABASE '+ @dbname + ' SET AUTO_CREATE_STATISTICS ON');
EXEC ('ALTER DATABASE '+ @dbname + ' SET AUTO_UPDATE_STATISTICS ON');
EXEC ('ALTER DATABASE '+ @dbname + ' SET CURSOR_CLOSE_ON_COMMIT OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET CURSOR_DEFAULT GLOBAL');
EXEC ('ALTER DATABASE '+ @dbname + ' SET CONCAT_NULL_YIELDS_NULL OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET NUMERIC_ROUNDABORT OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET RECURSIVE_TRIGGERS OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET ENABLE_BROKER');
EXEC ('ALTER DATABASE '+ @dbname + ' SET AUTO_UPDATE_STATISTICS_ASYNC OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET DATE_CORRELATION_OPTIMIZATION OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET TRUSTWORTHY OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET ALLOW_SNAPSHOT_ISOLATION OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET PARAMETERIZATION SIMPLE');
EXEC ('ALTER DATABASE '+ @dbname + ' SET READ_COMMITTED_SNAPSHOT OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET HONOR_BROKER_PRIORITY OFF');
EXEC ('ALTER DATABASE '+ @dbname + ' SET READ_WRITE');
EXEC ('ALTER DATABASE '+ @dbname + ' SET MULTI_USER');
EXEC ('ALTER DATABASE '+ @dbname + ' SET DB_CHAINING OFF');
EXEC ('create login '+ @userName + ' with password = '''+ @password+ ''', default_database = ' + @dbname);
EXEC ('use '+ @dbname+ ' create user '+ @userName + ' for login '+ @userName);
EXEC ('use '+ @dbname+ ' EXEC sp_addrolemember ''db_owner'', '+ @userName);
``` | The GO Statement tells the query analyzer that a batch is complete.
<http://technet.microsoft.com/en-us/library/ms188037.aspx>
Therefore, the declared variable that are set are out of scope by the time the dynamic code is executed.
If you truly want this in a transaction, then wrap it with **BEGIN TRY/END TRY**, and in the **BEGIN CATCH/END CATCH** block perform a **ROLLBACK**.
<http://craftydba.com/?p=5930>
I never tried this with the CREATE DATABASE statement. That might be a fun exercise. Does it undo the database creation? Something to add to my bucket list to try.
Also, you need to use a **semicolon (;)** when combining multiple commands. Otherwise, you will get a syntax error. | Must declare the scalar variable "@varname" in database creation script | [
"",
"sql",
"sql-server",
"t-sql",
"execute",
"variable-declaration",
""
] |
```
SELECT CASE WHEN age IS NULL THEN 'Unspecified'
WHEN age < 18 THEN '<18'
WHEN age >= 18 AND age <= 24 THEN '18-24'
WHEN age >= 25 AND age <= 30 THEN '25-30'
WHEN age >= 31 AND age <= 40 THEN '31-40'
WHEN age > 40 THEN '>40'
END AS ageband,
COUNT(*)
FROM (SELECT age
FROM table) t
GROUP BY ageband
```
This is my query. These are the results:

However, if `table.age` doesn't have at least one age in a category, that case is simply missing from the result, like so:

This data set didn't have any records for age < 18, so the ageband "<18" doesn't show up. How can I make it show up with a value of 0? | You need a table of agebands to populate the result for entries that have no matching rows. This can be done through an actual table, or generated dynamically with a subquery like this:
```
SELECT a.ageband, IFNULL(t.agecount, 0)
FROM (
-- ORIGINAL QUERY
SELECT
CASE
WHEN age IS NULL THEN 'Unspecified'
WHEN age < 18 THEN '<18'
WHEN age >= 18 AND age <= 24 THEN '18-24'
WHEN age >= 25 AND age <= 30 THEN '25-30'
WHEN age >= 31 AND age <= 40 THEN '31-40'
WHEN age > 40 THEN '>40'
END AS ageband,
COUNT(*) as agecount
FROM (SELECT age FROM Table1) t
GROUP BY ageband
) t
right join (
-- TABLE OF POSSIBLE AGEBANDS
SELECT 'Unspecified' as ageband union
SELECT '<18' union
SELECT '18-24' union
SELECT '25-30' union
SELECT '31-40' union
SELECT '>40'
) a on t.ageband = a.ageband
```
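The same left-join-to-a-band-table trick can be tried quickly in SQLite through Python (made-up ages, deliberately none under 18):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [(22,), (27,), (35,), (50,), (None,)])

# Left-joining the full band list back onto the counts fills in the
# missing bands; IFNULL turns the absent matches into zeros.
rows = conn.execute("""
    WITH bands(ageband) AS (
        VALUES ('Unspecified'), ('<18'), ('18-24'), ('25-30'), ('31-40'), ('>40')
    ),
    counts AS (
        SELECT CASE
                 WHEN age IS NULL THEN 'Unspecified'
                 WHEN age < 18 THEN '<18'
                 WHEN age <= 24 THEN '18-24'
                 WHEN age <= 30 THEN '25-30'
                 WHEN age <= 40 THEN '31-40'
                 ELSE '>40'
               END AS ageband, COUNT(*) AS n
        FROM people GROUP BY ageband
    )
    SELECT b.ageband, IFNULL(c.n, 0)
    FROM bands b LEFT JOIN counts c ON b.ageband = c.ageband
""").fetchall()
print(dict(rows))  # the '<18' band appears with a count of 0
```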
Demo: <http://www.sqlfiddle.com/#!2/7e2a9/10> | I haven't tested it, but this should work.
```
SELECT ageband, cnt FROM (
SELECT '<18' as ageband, COUNT(*) as cnt FROM table WHERE age < 18
UNION ALL
SELECT '18-24' as ageband, COUNT(*) as cnt FROM table WHERE age >= 18 AND age <= 24
UNION ALL
SELECT '25-30' as ageband, COUNT(*) as cnt FROM table WHERE age >= 25 AND age <= 30
UNION ALL
SELECT '31-40' as ageband, COUNT(*) as cnt FROM table WHERE age >= 31 AND age <= 40
UNION ALL
SELECT '>40' as ageband, COUNT(*) as cnt FROM table WHERE age > 40
) as A
``` | MySQL CASE WHEN THEN empty case values | [
"",
"mysql",
"sql",
"case",
""
] |
I have a .CSV import that I will be performing a series of Transformations on.
The first Transformation that I need to do is to merge two City Columns into 1 column.
The data that I have looks like this.
```
| City1 | City2 |
|Wichita| |
| |Houston|
| |Chicago|
|Denver | |
```
The required output should be,
```
| City |
|Wichita|
|Houston|
|Chicago|
|Denver |
```
I want to keep this as an SSIS Derived Column Expression so that I can tie it to the rest of the transformation that I need to perform.
I already went back to the vendor and asked them to correct the data, they denied it. Now it's up to me to correct the dirty data so that we can use it in a series of reports.
Thank you in advance for any support. | Use a derived column to replace city 1. The formula would look something like City1 == "" ? City2 : City 1 | You might try the unpivot transformation. But that may leave you with some blank rows to clean up.
Another possibility would be simulating a coalesce function. [[reference](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/50a9955d-07c4-4ab5-b2eb-ed69f7feaa28/how-to-use-coalesce-function-in-derived-column-component)] | SSIS Expression to Merge Two Columns | [
"",
"sql",
"sql-server",
"ssis",
""
] |
I need to add a new column to a table in my database. The table contains around 140 million rows and I'm not sure how to proceed without locking the database.
The database is in production and that's why this has to be as smooth as it can get.
I have read a lot but never really got an answer as to whether this is a risky operation or not.
The new column is nullable and the default can be NULL. As I understand it, there is a bigger issue if the new column needs a default value.
I'd really appreciate some straightforward answers on this matter. Is this doable or not? | Yes, it is eminently doable.
Adding a column where NULL is acceptable and has no default value does not require a long-running lock to add data to the table.
If you supply a default value, then SQL Server has to go and update each record in order to write that new column value into the row.
**How it works in general:**
```
+---------------------+------------------------+-----------------------+
| Column is Nullable? | Default Value Supplied | Result |
+---------------------+------------------------+-----------------------+
| Yes | No | Quick Add (caveat) |
| Yes | Yes | Long running lock |
| No | No | Error |
| No | Yes | Long running lock |
+---------------------+------------------------+-----------------------+
```
**The caveat bit:**
I can't remember off the top of my head what happens when you add a column that causes the size of the NULL bitmap to be expanded. I'd like to say that the NULL bitmap represents the nullability of all the columns *currently in the row*, but I can't put my hand on my heart and say that's definitely true.
Edit -> @MartinSmith pointed out that the NULL bitmap will only expand when the row is changed, many thanks. However, as he also points out, if the size of the row expands past the 8060 byte limit in SQL Server 2012 then [a long running lock may still be required](http://rusanu.com/2012/02/16/adding-a-nullable-column-can-update-the-entire-table/). Many thanks \* 2.
**Second caveat:**
Test it.
**Third and final caveat:**
No really, test it. | My example shows how to add a new column to a table with tens of millions of rows and fill it with a default value without a long-running lock:
```
USE [MyDB]
GO
ALTER TABLE [dbo].[Customer] ADD [CustomerTypeId] TINYINT NULL
GO
ALTER TABLE [dbo].[Customer] ADD CONSTRAINT [DF_Customer_CustomerTypeId] DEFAULT 1 FOR [CustomerTypeId]
GO
DECLARE @batchSize bigint = 5000
,@rowcount int
,@MaxID int;
SET @rowcount = 1
SET @MaxID = 0
WHILE @rowcount > 0
BEGIN
;WITH upd as (
SELECT TOP (@batchSize)
[ID]
,[CustomerTypeId]
FROM [dbo].[Customer] (NOLOCK)
WHERE [CustomerTypeId] IS NULL
AND [ID] > @MaxID
ORDER BY [ID])
UPDATE upd
SET [CustomerTypeId] = 1
,@MaxID = CASE WHEN [ID] > @MaxID THEN [ID] ELSE @MaxID END
SET @rowcount = @@ROWCOUNT
WAITFOR DELAY '00:00:01'
END;
ALTER TABLE [dbo].[Customer] ALTER COLUMN [CustomerTypeId] TINYINT NOT NULL;
GO
```
`ALTER TABLE [dbo].[Customer] ADD [CustomerTypeId] TINYINT NULL` changes only the metadata (Sch-M lock), and the lock time does not depend on the number of rows in the table.
After that, I fill the new column with the default value in small batches (5,000 rows). I wait one second after each cycle so as not to block the table too aggressively. I have an int column "ID" as the primary clustered key.
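The batching idea itself is engine-agnostic. Purely as an illustration (not part of the original procedure), here is the same loop sketched in Python against an in-memory SQLite database, with made-up table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, customer_type_id INTEGER)")
con.executemany("INSERT INTO customer (customer_type_id) VALUES (?)", [(None,)] * 25)

batch_size = 10
while True:
    # Update at most one batch of rows that still have the NULL placeholder.
    cur = con.execute(
        """UPDATE customer SET customer_type_id = 1
           WHERE id IN (SELECT id FROM customer
                        WHERE customer_type_id IS NULL LIMIT ?)""",
        (batch_size,),
    )
    if cur.rowcount == 0:  # same role as checking @@ROWCOUNT in T-SQL
        break

remaining = con.execute(
    "SELECT COUNT(*) FROM customer WHERE customer_type_id IS NULL").fetchone()[0]
print(remaining)  # 0
```

The loop stops as soon as an update touches zero rows, which is exactly the `@@ROWCOUNT` check in the T-SQL batch loop.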
Finally, once the new column is completely filled, I change it to NOT NULL | Add a new column to big database table | [
"",
"sql",
"sql-server",
"database",
""
] |
I'm trying to understand the purpose of the join in this query.
```
SELECT
DISTINCT o.order_id
FROM
`order` o,
`order_product` as op
LEFT JOIN `provider_order_product_status_history` as popsh
on op.order_product_id = popsh.order_product_id
LEFT JOIN `provider_order_product_status_history` as popsh2
ON popsh.order_product_id = popsh2.order_product_id
AND popsh.provider_order_product_status_history_id <
popsh2.provider_order_product_status_history_id
WHERE
o.order_id = op.order_id
AND popsh2.last_updated IS NULL
LIMIT 10
```
What's bothering me is that provider\_order\_product\_status\_history is joined twice and I'm not sure of the purpose. I'd highly appreciate it if someone can help. | It's a technique to retrieve the latest order status.
Because of
```
AND popsh.provider_order_product_status_history_id < popsh2.provider_order_product_status_history_id
```
and
```
AND popsh2.last_updated IS NULL
```
Only those order statuses that don't have any newer status are returned.
For a minimum set example, consider the following status history table:
```
id status order_id last_updated
--------------------------------
1 A X 1:00
2 B X 2:00
```
The self join will result in:
```
id status order_id last_updated id status order_id last_updated
-------------------------------- --------------------------------
1 A X 1:00 2 B X 2:00
2 B X 2:00 NULL NULL NULL
```
The first row will be filtered out by the `IS NULL` condition, leaving only the second row, which is the latest one.
For a 3-row case the self join result will be:
```
id status order_id last_updated id status order_id last_updated
-------------------------------- --------------------------------
1 A X 1:00 2 B X 2:00
1 A X 1:00 3 C X 3:00
2 B X 2:00 3 C X 3:00
3 C X 3:00 NULL NULL NULL
```
And only the last one will pass the `IS NULL` condition, leaving the latest one again.
It looks like an unnecessarily complicated way to do the job, but it actually works quite well as RDBMS engines do joins very efficiently.
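If you want to convince yourself of the pattern without the full schema, here is a minimal, self-contained check (SQLite via Python; the table is reduced to the columns from the example above, and the ordering column is just `id`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE h (id INTEGER, status TEXT, order_id TEXT, last_updated TEXT);
INSERT INTO h VALUES (1, 'A', 'X', '1:00'), (2, 'B', 'X', '2:00'), (3, 'C', 'X', '3:00');
""")
rows = con.execute("""
SELECT h1.id, h1.status
FROM h AS h1
LEFT JOIN h AS h2
  ON h1.order_id = h2.order_id AND h1.id < h2.id
WHERE h2.last_updated IS NULL
""").fetchall()
print(rows)  # [(3, 'C')] -- only the newest status survives the IS NULL filter
```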
BTW, as the query retrieves only order\_id, the query is not useful as it is. I guess the OP omitted other fields in the select clause. It should be something like `SELECT o.order_id, popsh.* FROM ...` | Wait, you have an error:
```
SELECT
DISTINCT o.order_id
FROM
`order` o,
`order_product` as op
LEFT JOIN `provider_order_product_status_history` as popsh
on op.order_product_id = popshs.order_product_id
** YOU HAVE EXCESS 's' HERE ^
LEFT JOIN `provider_order_product_status_history` as popsh2
ON popsh.order_product_id = popsh2.order_product_id
AND popsh.provider_order_product_status_history_id < popsh2.provider_order_product_status_history_id
WHERE
o.order_id = op.order_id
AND popsh2.last_updated IS NULL
LIMIT 10
```
Based on my analysis, the query is *trying* to extract the **first o.order\_id or first entry** (based on `provider_order_product_status_history.provider_order_product_status_history_id`) of the `provider_order_product_status_history`. However, *the join semantics used in this query are not recommendable.*
"",
"mysql",
"sql",
""
] |
I'm struggling to figure this one out.
What I want to do is like so:
```
select [fields],
((select <criteria>) return 0 if no rows returned, return 1 if any rows returned) as SubqueryResult
where a=b
```
Is this possible? | In T-SQL you can use the EXISTS clause for this requirement:
```
select [fields],
case when exists (select <criteria> from <tablename> ) then 1
else 0
end as SubqueryResult
from <tablename>
where a=b
``` | Please try:
```
select [fields],
case when (select COUNT(*) from YourTable with criteria)>0 then
1
else
0
end
as SubqueryResult
where a=b
``` | Return 1 or 0 as a subquery field with EXISTS? | [
"",
"sql",
""
] |
So, I have two tables in SQL Sever 2008 R2:
```
Table A:
patient_id first_name last_name external_id
000001 John Smith 4753-23314.0
000002 Mike Davis 4753-12548.0
Table B:
guarantor_id visit_date first_name last_name
23314 01/01/2013 John Smith
12548 02/02/2013 Mike Davis
```
Notice that the guarantor\_id from Table B matches the middle section of the external\_id from Table A. Would someone please help me strip the 4753- from the front and the .0 from the back of the external\_id so I can join these tables?
Any help/examples are greatly appreciated. | Assuming the prefix and suffix are always the same length, just do this:
```
SUBSTRING(external_id, 6, 5)
```
The documentation for `SUBSTRING` is [here](http://technet.microsoft.com/en-us/library/ms187748.aspx) if you want to look at that.
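To see why `(6, 5)` is right for this format, here is the same extraction done in Python (illustrative only; note T-SQL's `SUBSTRING` is 1-based while Python slicing is 0-based):

```python
s = "4753-23314.0"

# Fixed positions: T-SQL SUBSTRING(s, 6, 5) means start at character 6 and
# take 5 characters, which is s[5:10] in Python's 0-based slicing.
print(s[5:10])  # 23314

# Position-independent version, mirroring CHARINDEX on '-' and '.':
start = s.index("-") + 1
end = s.index(".")
print(s[start:end])  # 23314
```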
If the prefix and suffix lengths can change, also use [CHARINDEX](http://technet.microsoft.com/en-us/library/ms186323.aspx) and [LEN](http://msdn.microsoft.com/en-us/library/ms190329.aspx).
```
SUBSTRING(external_id, CHARINDEX('-', external_id) + 1, CHARINDEX('.', external_id) - CHARINDEX('-', external_id) - 1)
``` | Try this one
```
SELECT *
FROM TABLE_A inner join TABLE_B on TABLE_A.external_id like '%'+TABLE_B.guarantor_id+'%'
``` | SQL - Join tables with modified data | [
"",
"sql",
"sql-server-2008",
""
] |
I have two tables, say A and B.
Table : A
```
ID_Sender | Date
________________________
1 | 11-13-2013
1 | 11-12-2013
2 | 11-12-2013
2 | 11-11-2013
3 | 11-13-2013
4 | 11-11-2013
```
Table : B
```
ID | Tags
_______________________
1 | Company A
2 | Company A
3 | Company C
4 | Company D
```
result table:
```
Tags | Date
____________________________
Company A | 11-13-2013
Company C | 11-13-2013
Company D | 11-11-2013
```
I have already tried [GROUP BY with MAX(DATE)](https://stackoverflow.com/questions/3491329/group-by-with-maxdate) but had no luck; I did some inner joins and subqueries but failed to produce the output.
Here is my code so far, and an image for the output attached.
```
SELECT E.Tags, D.[Date] FROM
(SELECT A.ID_Sender AS Sendah, MAX(A.[Date]) AS Datee
FROM tblA A
LEFT JOIN tblB B ON A.ID_Sender = B.ID
GROUP BY A.ID_Sender) C
INNER JOIN tblA D ON D.ID_Sender = C.Sendah AND D.[Date] = C.Datee
INNER JOIN tblB E ON E.ID = D.ID_Sender
```
Any suggestions? I'm already pulling my hair out!
(maybe you guys can just give me some sql concepts that can be helpful, the answer is not that necessary cos I really really wanted to solve it on my own :) )
Thanks! | ```
SELECT Tags, MAX(Date) AS [Date]
FROM dbo.B INNER JOIN dbo.A
ON B.ID = A.ID_Sender
GROUP BY B.Tags
```
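The join + `GROUP BY` + `MAX` pattern is portable; as a quick sanity check outside SQL Server, the same query runs in SQLite (Python sketch, with ISO date strings so `MAX` compares correctly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (ID_Sender INTEGER, Date TEXT);
CREATE TABLE B (ID INTEGER, Tags TEXT);
INSERT INTO A VALUES (1,'2013-11-13'),(1,'2013-11-12'),(2,'2013-11-12'),
                     (2,'2013-11-11'),(3,'2013-11-13'),(4,'2013-11-11');
INSERT INTO B VALUES (1,'Company A'),(2,'Company A'),(3,'Company C'),(4,'Company D');
""")
rows = con.execute("""
SELECT Tags, MAX(Date) FROM B INNER JOIN A ON B.ID = A.ID_Sender
GROUP BY B.Tags ORDER BY Tags
""").fetchall()
print(rows)
# [('Company A', '2013-11-13'), ('Company C', '2013-11-13'), ('Company D', '2013-11-11')]
```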
`Demo`
The result
```
Company A November, 13 2013 00:00:00+0000
Company C November, 13 2013 00:00:00+0000
Company D November, 11 2013 00:00:00+0000
``` | Try this; please correct me if I'm wrong. In table B, I am assuming Id = 2 is Company B. If that is right, go ahead with this code.
```
declare @table1 table(ID_Sender int, Dates varchar(20))
insert into @table1 values
( 1 , '11-13-2013'),
(1 , '11-12-2013'),
(2 ,'11-12-2013'),
(2 ,'11-11-2013'),
(3 ,'11-13-2013'),
(4 ,'11-11-2013')
declare @table2 table ( id int, tags varchar(20))
insert into @table2 values
(1 ,'Company A'),
(2 , 'Company B'),
(3 , 'Company C'),
(4 , 'Company D')
;with cte as
(
select
t1.ID_Sender, t1.Dates, t2.tags
from @table1 t1
join
@table2 t2 on t1.ID_Sender = t2.id
)
select tags, MAX(dates) as dates from cte group by tags
``` | SQL Query MAX date and some fields from other table | [
"",
"sql",
"date",
"max",
""
] |
I have this simple SQL Update
```
IF(@MyID IS NOT NULL)
BEGIN
BEGIN TRY
UPDATE DATATABLE
SET Param1=@Param1, Data2=@Data2,...
WHERE MyID=@MyID
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE() AS 'Message'
RETURN -1
END CATCH
SELECT * FROM DATATABLE WHERE MyID= @@IDENTITY
SET @ResultMessage = 'Successfully Inserted'
SELECT @ResultMessage AS 'Message'
RETURN 0
END
```
The problem is that when I provide an invalid ID (one that does not exist), it does not throw an error; I still get a return code of 0 with the Successfully Inserted message. I also added this after the catch. Still nothing. Am I missing something fundamental?
```
END CATCH
IF(@@ERROR != 0)
BEGIN
SET @ResultMessage = 'Not Successful Inserted'
SELECT @ResultMessage AS 'Message'
RETURN -1
END
SELECT * FROM DATATABLE WHERE MyID= @@IDENTITY
SET @ResultMessage = 'Successfully Inserted'
SELECT @ResultMessage AS 'Message'
RETURN 0
```
Is there something special I am supposed to look for? | SQL will catch errors, but an UPDATE statement that does not update any rows is a valid SQL statement and should not return an error. You can check **@@RowCount** to see how many rows the update statement actually updated
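As an aside, the zero-rows-is-not-an-error behavior is easy to demonstrate with any engine; for example with SQLite from Python (illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, v INTEGER)")
con.execute("INSERT INTO t VALUES (1, 10)")

cur = con.execute("UPDATE t SET v = 99 WHERE id = 42")  # no such row
print(cur.rowcount)  # 0 -- a valid statement, no error, zero rows affected
```

In T-SQL the equivalent check is `@@ROWCOUNT`, as used in the procedure that follows.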
```
IF(@MyID IS NOT NULL)
BEGIN
BEGIN TRY
UPDATE DATATABLE
SET Param1=@Param1, Data2=@Data2,...
WHERE MyID=@MyID
IF @@RowCount = 0
BEGIN
SELECT 'No record found...' AS Message
RETURN -1
END
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE() AS 'Message'
RETURN -1
END CATCH
SELECT * FROM DATATABLE WHERE MyID= @@IDENTITY
SET @ResultMessage = 'Successfully Inserted'
SELECT @ResultMessage AS 'Message'
RETURN 0
END
``` | **IF @@RowCount = 0 BEGIN Select 1/0; END**
I don't know if this is a good way of doing it, but this is what I just did. The user gets notified with an error :) if there are 0 rows affected.
```
Update Table1 Set Table1_Sold = 1 Where table1_ID = '043f258B-8A0B-4CA1-87EC-CDCBD38EE9E1';
IF @@RowCount = 0 BEGIN Select 1/0; END
```
You get an error if there were 0 rows affected. Probably not the best way, but it works for me. I may get dislikes for this.
In C# you can catch the error and display your own message.
```
catch (SqlException e)
{
switch(e.Number)
{
case 8134:// Divide by zero error
MessageBox.Show("ERROR Divide by ZERO", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
break;
default:
MessageBox.Show($"{e.Number.ToString()} \t {e.Message} \t {e.InnerException} \t {e.Data} \t {e.ToString()}", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
break;
    }
}
``` | Error not thrown for SQL update | [
"",
"sql",
"sql-server",
"sql-update",
""
] |
I have a Web Application (Java backend) that processes a large amount of raw data that is uploaded from a hardware platform containing a number of sensors.
Currently the raw data is uploaded and the data is decompressed and stored as a 'text' field in a Postgresql database to allow the users to log in and generate various graphs / charts of the data (using a JS charting library clientside).
Example string...
[45,23,45,32,56,75,34....]
The arrays will typically contain ~300,000 values but this could be up to 1,000,000 depending on how long the sensors are recording so the size of the string being stored could be a few hundred kilobytes
This seems to work fine for now as there are only ~200 uploads per day, but considering the scalability of the application and the ability to back up the data, I am looking at alternatives for storing this data.
DynamoDB looked like a great option as I could carry on storing the upload details in my SQL table and just save a URL endpoint to be called to retrieve the arrays... but then I noticed the item size is limited to 64kb.
As I am sure there are a million and one ways to do this I would like to put this out to the SO community to hear what others would recommend, either web services or locally stored....considering performance, scalability, maintainability etc etc...
Thanks in advance!
UPDATE:
Just to clarify: the data shown above is just the 'Y' values. As it is time-sampled, the X values are taken from the position in the array, so I don't think storing as a tuple would have any benefits. | If you are looking to store such strings, you probably want to use [**S3**](http://aws.amazon.com/s3/) (one object containing the array string); in this case you get "backup" out of the box by enabling bucket versioning.
"",
"sql",
"design-patterns",
"database-design",
"nosql",
"amazon-dynamodb",
""
] |
How could I create a view for a table with a subset of the table's records, all of the table's columns, plus an additional "flag" column whose value is set to 'X' if the table contains a certain type of record? For example, consider the following relations table `Relations`, where the values for type stand for 'H'-human, 'D'-dog:
```
id | type | relation | related
--------------------------------
H1 | H | knows | D2
H1 | H | owns | D2
H2 | H | knows | D1
H2 | H | owns | D1
H3 | H | knows | D1
H3 | H | knows | D2
H3 | H | treats | D1
H3 | H | treats | D2
D1 | D | bites | H3
D2 | D | bites | H3
```
There may not be any particular order of records in this table.
I seek to create a view `Humans` which will contain all human-to-dog `knows` relations from `Relations` and all of the `Relations`'s columns and additional column `isOwner` storing `'X'` if a human in a given relation owns someone:
```
id | type | relation | related | isOwner
------------------------------------------
H1 | H | knows | D2 | X
H2 | H | knows | D1 | X
H3 | H | knows | D1 |
```
but struggling quite a bit with this. Do you know of a way to do it, preferably in one `CREATE VIEW` call, or any way really? | ```
CREATE VIEW vHumanDogRelations
AS
SELECT
id,
type,
relation,
related,
-- Consider using a bit 0/1 instead
CASE
WHEN EXISTS (
SELECT 1
FROM Relations
WHERE
id = r.id
AND related = r.related -- Owns someone or this related only?
AND relation = 'owns'
) THEN 'X'
ELSE ''
END AS isOwner
FROM Relations r
WHERE
relation = 'knows'
AND type = 'H'
        AND related LIKE 'D%'; -- related holds ids like D1/D2, not a bare type code
``` | You should be able to put the following `select` into the view definition
```
select *, case when exists
(select * from Relations where id = r.id and relation= 'owns') then 'X'
else '' end as isOwner
from Relations r
``` | Set column value if certain record exists in same table | [
"",
"sql",
"sql-server",
""
] |
I have an Oracle Tree hierarchy structure that is basically similar to the following table called MY\_TABLE
```
(LINK_ID,
PARENT_LINK_ID,
STEP_ID )
```
with the following sample data within MY\_TABLE:
```
LINK_ID PARENT_LINK_ID STEP_ID
-----------------------------------------------
A NULL 0
B NULL 0
AA A 1
AB A 1
AAA AA 2
BB B 1
BBB BB 2
BBBA BBB 3
BBBB BBB 3
```
Based on the above sample data, I need to produce a report that basically returns the total count of rows for all children
of both parent link IDs (top level only required), that is, I need to produce a SQL query that returns the following information, i.e.:
```
PARENT RESULT COUNT
----------------------------
A 3
B 4
```
So I need to rollup total children that belong to all (parent) link ids, where the LINK\_IDs have a PARENT\_LINK\_ID of NULL | I think something like this:
```
select link, count(*)-1 as "RESULT COUNT"
from (
select connect_by_root(link_id) link
from my_table
connect by nocycle parent_link_id = prior link_id
start with parent_link_id is null)
group by link
order by 1 asc
``` | Please try:
```
WITH parent(LINK_ID1, LINK_ID, asCount) AS
(
SELECT LINK_ID LINK_ID1, LINK_ID, 1 as asCount
from MY_TABLE WHERE PARENT_LINK_ID is null
UNION ALL
SELECT LINK_ID1, t.LINK_ID, asCount+1 as asCount FROM parent
INNER JOIN MY_TABLE t ON t.PARENT_LINK_ID = parent.LINK_ID
)
select
LINK_ID1 "Parent",
count(asCount)-1 "Result Count"
From parent
group by LINK_ID1;
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!4/d0ce2/5) | How to obtain total children count for all parents in Oracle tree hierarchy? | [
"",
"sql",
"tree",
"oracle11g",
""
] |
I want to keep track of opening hours of various shops, but I can't figure out what is the best way to store that.
An intuitive solution would be to have starting and ending time for each day. But that means two attributes per day, which doesn't look nice.
Another approach would be to have a starting time and a `day to second` interval for each day. But still, that means two attributes.
What is the most common and easiest way to represent this? I'm working with Oracle. | I think it makes total sense to have two columns: one for the open DateTime and one for the close DateTime. Once a shop is open, it will have to be closed someday/sometime.
**My Suggestion**
I would create a separate table for shop opening/closing times. Every time a shop is opened it will have a close time value as well, so you won't have any unwanted NULLs in your second column. To me it makes total sense to have a separate table altogether for shop opening/closing times. | The lowest granularity you need is probably minutes (actually, probably 15-minute intervals, but call it minutes).
Possibly you also want to consider day of the week.
If you use a table such as:
```
create table day_of_week_opening_hours(
id integer primary key,
day_of_week integer not null,
store_id integer not null,
opening_minutes_past_midnight integer default 0 not null,
closing_minutes_past_midnight integer default (24*60) not null)
```
Pop a unique constraint on store\_id and day\_of\_week, and for a given store and day of the week you can find the opening time with:
```
the_date + (opening_minutes_past_midnight * interval '1 minute')
```
or ...
```
the_date + (opening_minutes_past_midnight / (24*60))
```
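In application code the conversion is one line; for example in Python (values made up for illustration):

```python
from datetime import datetime, timedelta

opening_minutes_past_midnight = 9 * 60 + 30  # 09:30 stored as an integer
the_date = datetime(2013, 11, 13)

print(the_date + timedelta(minutes=opening_minutes_past_midnight))
# 2013-11-13 09:30:00
```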
Shops that open 24 hours a day could be represented with a special code instead of opening times, in a separate table instead of with special opening and closing times, or maybe you could just leave the opening/closing times null.
```
create table day_of_week_24_hour_opening(
id integer primary key,
day_of_week integer not null,
store_id integer not null)
```
Think about shops that do not open at all on a given day as well, and how to represent that.
Probably you could do with a date-based override also, to indicate different (or no) opening hours on certain dates (Xmas, etc).
Quite an interesting problem, this. | What data type to use for opening hours in a database | [
"",
"sql",
"oracle",
"time",
"intervals",
""
] |
Here is my query:
```
select year(p.datetimeentered) as Year, month(p.datetimeentered), datename(month, p.datetimeentered) as Month, st.type as SubType, sum(p.totalpaid) as TotalPaid from master m
inner join jm_subpoena s on s.number = m.number
inner join payhistory p on p.number = m.number
inner join jm_subpoenatypes st on st.id = s.typeid
where p.batchtype in ('PU','PUR','PA','PAR')
and p.datetimeentered > s.completeDate
group by year(p.datetimeentered), month(p.datetimeentered), datename(month, p.datetimeentered), st.type, p.batchtype
order by year(p.datetimeentered), month(p.datetimeentered), datename(month, p.datetimeentered), st.type, p.batchtype
```
Here is the criteria for the `TotalPaid` column.
When `p.batchtype` is `PU` or `PA` I need to sum those totals
When `p.batchtype` is `PUR` or `PAR` I need to sum those totals
then I need to subtract the two numbers from each other.
Is there an easy way to do this? | You could do the addition/subtraction as part of the summation:
```
SUM(CASE
WHEN p.batchtype IN ('PU','PA') THEN p.totalpaid
WHEN p.batchtype IN ('PUR','PAR') THEN -p.totalpaid
ELSE 0
END) as TotalPaid
``` | Just use a case statement:
```
Case when p.batch in ('PU','PA') then x + y
else when p.batch in ('PUR, PAR') then x-y
end as YourColumnNameHere
```
If my original interpretation was incorrect, then pobrelkey's suggestion is spot on:
```
sum (
when p.batch in ('PU','PA') then totalpaid
else when p.batch in ('PUR, PAR') then -totalpaid)
end as YourColumnNameHere
``` | Sum and subtraction to a single column | [
"",
"sql",
"t-sql",
""
] |
I have a table of users and a table of groups and a table linking users with groups. The default group is called "All Users". All new users are placed in this group. They are then linked to more specific groups like sales or purchasing, etc but they stay in "All Users" as well.
I would like to query all the users that are in the "All Users" group but are not members of any other group.
I hope I explained it well enough.
Thanks,
Bob | ```
SELECT userid, count(*) group_count
FROM user_groups
GROUP BY userid
HAVING group_count = 1
```
If a user is only in one group, it must be the All Users group, and this lists just those users. | This is assuming your table called AllUsers has a unique id for each user called id and your Group table has a foreign key userId linking to AllUsers.id.
```
SELECT * FROM AllUsers WHERE AllUsers.id != userId;
```
That should do it if I understood your question. | Find a user that is not a member of a specific group | [
"",
"mysql",
"sql",
"ms-access",
""
] |
I have a Zip file created and I am unable to delete it using the below command.
`xp_cmdshell 'rm "F:\EXIS\Reports\Individual.zip"'`
It gives an error saying File not found, when I can actually see the file.
I tried using `xp_cmdshell 'del "F:\EXIS\Reports\Individual.zip"'`
But, this asks for a confirmation, which I actually cannot input.
Please suggest if anything,
Thanks. | The message is more generic, in the sense that the file is not found with the current credentials of the SQL Server process while accessing the indicated location.
I suspect it is a problem of rights, so please ensure the SQL Server process has rights to delete files in that location. An alternative suggestion is to perform a "dir" on that location. | Try executing `del` in silent mode like:
```
xp_cmdshell 'del /Q "F:\EXIS\Reports\Individual.zip"'
```
And also: if SQL Server is running on a different machine the path must of course be valid for that machine. | Unable to delete zip files from DB server location using xp_cmdshell | [
"",
"sql",
"sql-server",
"sql-server-2005",
"xp-cmdshell",
""
] |
I am responsible for an old time recording system which was written in ASP.net Web Forms using ADO.Net 2.0 for persistence.
Basically the system allows users to add details about a piece of work they are doing, the amount of hours they have been assigned to complete the work as well as the amount of hours they have spent on the work to date.
The system also has a reporting facility with the reports based on SQL queries. Recently I have noticed that many reports being run from the system have become very slow to execute. The database has around 11 tables, and it doesn’t store too much data. 27,000 records is the most any one table holds, with the majority of tables well below even 1,500 records.
I don’t think the issue is therefore related to large volumes of data, I think it is more to do with poorly constructed sql queries and possibly even the same applying to the database design.
For example, there are queries similar to this
```
@start_date datetime,
@end_date datetime,
@org_id int
select distinct t1.timesheet_id,
t1.proposal_job_ref,
t1.work_date AS [Work Date],
consultant.consultant_fname + ' ' + consultant.consultant_lname AS [Person],
proposal.proposal_title AS [Work Title],
t1.timesheet_time AS [Hours],
--GET TOTAL DAYS ASSIGNED TO PROPOSAL
(select sum(proposal_time_assigned.days_assigned)-- * 8.0)
from proposal_time_assigned
where proposal_time_assigned.proposal_ref_code = t1.proposal_job_ref )
as [Total Days Assigned],
--GET TOTAL DAYS SPENT ON THE PROPOSAL SINCE 1ST APRIL 2013
(select isnull(sum(t2.timesheet_time / 8.0), '0')
from timesheet_entries t2
where t2.proposal_job_ref = t1.proposal_job_ref
and t2.work_date <= t1.work_date
and t2.work_date >= '01/04/2013' )
as [Days Spent Since 1st April 2013],
--GET TOTAL DAYS REMAINING ON THE PROPOSAL
(select sum(proposal_time_assigned.days_assigned)
from proposal_time_assigned
where proposal_time_assigned.proposal_ref_code = t1.proposal_job_ref )
-
(select sum(t2.timesheet_time / 8.0)
from timesheet_entries t2
where t2.proposal_job_ref = t1.proposal_job_ref
and t2.work_date <= t1.work_date
) as [Total Days Remaining]
from timesheet_entries t1,
consultant,
proposal,
proposal_time_assigned
where (proposal_time_assigned.consultant_id = consultant.consultant_id)
and (t1.proposal_job_ref = proposal.proposal_ref_code)
and (proposal_time_assigned.proposal_ref_code = t1.proposal_job_ref)
and (t1.code_id = @org_id) and (t1.work_date >= @start_date) and (t1.work_date <= @end_date)
and (t1.proposal_job_ref <> '0')
order by 2, 3
```
Which are expected to return data for reports. I am not even sure if anyone can follow what is happening in the query above, but basically there are quite a few calculations happening, i.e., dividing, multiplying, subtracting. I am guessing this is what is slowing down the SQL queries.
I suppose my question is, can anyone make enough sense of the query above to suggest how to speed it up?
Also, should calculations like the ones mentioned above ever be carried out in an SQL query? Or should this be done within code?
Any help would be really appreciated with this one.
Thanks. | Based on the information given, I had to make an educated guess about certain table relationships. If you post the table structures, indexes, etc., we can complete the remaining columns in this query.
As of right now this query calculates "Days Assigned", "Days Spent" and "Days Remaining"
for the KEY "timesheet\_id and proposal\_job\_ref"
What we have to see is how "work\_date", "timesheet\_time", "[Person]", and "proposal\_title" are associated with that.
Are these calculations by person and proposal\_title as well?
You can use [sqlfiddle](http://sqlfiddle.com/) to provide us the sample data and output, so we can work off the meaningful data instead of guessing.
```
SELECT
q1.timesheet_id
,q1.proposal_job_ref
,q1.[Total Days Assigned]
,q2.[Days Spent Since 1st April 2013]
,(
q1.[Total Days Assigned]
-
q2.[Days Spent Since 1st April 2013]
) AS [Total Days Remaining]
FROM
(
select
t1.timesheet_id
,t1.proposal_job_ref
,sum(t4.days_assigned) as [Total Days Assigned]
from tbl1.timesheet_entries t1
JOIN tbl1.proposal t2
ON t1.proposal_job_ref=t2.proposal_ref_code
JOIN tbl1.proposal_time_assigned t4
ON t4.proposal_ref_code = t1.proposal_job_ref
JOIN tbl1.consultant t3
ON t3.consultant_id=t4.consultant_id
WHERE t1.code_id = @org_id
AND t1.work_date BETWEEN @start_date AND @end_date
AND t1.proposal_job_ref <> '0'
GROUP BY t1.timesheet_id,t1.proposal_job_ref
)q1
JOIN
(
select
tbl1.timesheet_id,tbl1.proposal_job_ref
,isnull(sum(tbl1.timesheet_time / 8.0), '0') AS [Days Spent Since 1st April 2013]
from tbl1.timesheet_entries tbl1
JOIN tbl1.timesheet_entries tbl2
ON tbl1.proposal_job_ref=tbl2.proposal_job_ref
AND tbl2.work_date <= tbl1.work_date
AND tbl2.work_date >= '01/04/2013'
WHERE tbl1.code_id = @org_id
AND tbl1.work_date BETWEEN @start_date AND @end_date
AND tbl1.proposal_job_ref <> '0'
GROUP BY tbl1.timesheet_id,tbl1.proposal_job_ref
)q2
ON q1.timesheet_id=q2.timesheet_id
AND q1.proposal_job_ref=q2.proposal_job_ref
``` | The problems I see in your query are:
1> Alias names are not provided for the tables.
2> Subqueries are used (which increase execution cost) instead of a WITH clause.
If I were to write your query, it would look like this:
```
select distinct t1.timesheet_id,
t1.proposal_job_ref,
t1.work_date AS [Work Date],
c1.consultant_fname + ' ' + c1.consultant_lname AS [Person],
p1.proposal_title AS [Work Title],
t1.timesheet_time AS [Hours],
--GET TOTAL DAYS ASSIGNED TO PROPOSAL
(select sum(pta2.days_assigned)-- * 8.0)
from proposal_time_assigned pta2
where pta2.proposal_ref_code = t1.proposal_job_ref )
as [Total Days Assigned],
--GET TOTAL DAYS SPENT ON THE PROPOSAL SINCE 1ST APRIL 2013
(select isnull(sum(t2.timesheet_time / 8.0), 0)
from timesheet_entries t2
where t2.proposal_job_ref = t1.proposal_job_ref
and t2.work_date <= t1.work_date
and t2.work_date >= '01/04/2013' )
as [Days Spent Since 1st April 2013],
--GET TOTAL DAYS REMAINING ON THE PROPOSAL
(select sum(pta2.days_assigned)
from proposal_time_assigned pta2
where pta2.proposal_ref_code = t1.proposal_job_ref )
-
(select sum(t2.timesheet_time / 8.0)
from timesheet_entries t2
where t2.proposal_job_ref = t1.proposal_job_ref
and t2.work_date <= t1.work_date
) as [Total Days Remaining]
from timesheet_entries t1,
consultant c1,
proposal p1,
proposal_time_assigned pta1
where (pta1.consultant_id = c1.consultant_id)
and (t1.proposal_job_ref = p1.proposal_ref_code)
and (pta1.proposal_ref_code = t1.proposal_job_ref)
and (t1.code_id = @org_id) and (t1.work_date >= @start_date) and (t1.work_date <= @end_date)
and (t1.proposal_job_ref <> '0')
order by 2, 3
```
Check the above query for indexing options and the number of records to be processed from each table.
"",
"asp.net",
"sql",
"sql-server",
"timeout",
""
] |
I am trying to formulate a query which, given two tables: (1) Salespersons, (2) Sales; displays the id, name, and sum of sales brought in by the salesperson. The issue is that I can get the id and sum of brought in money but I don't know how to add their names. Furthermore, my attempts omit the salespersons which did not sell anything, which is unfortunate.
In detail:
There are two naive tables:
```
create table Salespersons (
id integer,
name varchar(100)
);
create table Sales (
sale_id integer,
spsn_id integer,
cstm_id integer,
paid_amt double
);
```
I want to make a query that for each Salesperson displays
their total sum of sales brought in.
This query comes to mind:
```
select spsn_id, sum(paid_amt) from Sales group by spsn_id
```
This query only returns a list of ids and the total amount brought
in, but not the names of the salespersons, and it omits
salespersons that sold nothing.
How can I make a query that, for each salesperson in the Salespersons
table, prints their id, name, and sum of their sales, with 0 if they
have sold nothing at all?
I appreciate any help!
Thanks ahead of time! | Try the following
```
select sp.id, sp.name, sum(s.paid_amt)
from salespersons sp
left join sales s
on sp.id = s.spsn_id
group by sp.id, sp.name
``` | Try this:
```
SELECT sp.id,sp.name,SUM(NVL(s.paid_amt,0))
FROM salespersons sp
LEFT JOIN sales s ON sp.id = s.spsn_id
GROUP BY sp.id, sp.name
```
The LEFT JOIN will return the salespersons even when they have no sales.
THE [NVL](http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/index.jsp?topic=/com.ibm.db2.luw.sql.ref.doc/doc/r0052627.html) will give you 0 whenever there is no sale for that user. | trouble formulating an sql query | [
"",
"sql",
"db2",
""
] |
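One detail worth noting about the answers above: with a plain LEFT JOIN, `SUM(s.paid_amt)` comes back NULL for salespersons with no sales, while the question asked for 0; wrapping the sum in a null-replacing function (COALESCE in standard SQL/SQLite, NVL in DB2) closes that gap. A runnable sketch with made-up rows, using an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE salespersons (id INTEGER, name TEXT);
CREATE TABLE sales (sale_id INTEGER, spsn_id INTEGER, cstm_id INTEGER, paid_amt REAL);
INSERT INTO salespersons VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO sales VALUES (10, 1, 7, 100.0), (11, 1, 8, 50.0);  -- Bob has no sales
""")
rows = conn.execute("""
SELECT sp.id, sp.name, COALESCE(SUM(s.paid_amt), 0) AS total
FROM salespersons sp
LEFT JOIN sales s ON sp.id = s.spsn_id
GROUP BY sp.id, sp.name
ORDER BY sp.id
""").fetchall()
print(rows)  # [(1, 'Ann', 150.0), (2, 'Bob', 0)]
```

The LEFT JOIN keeps Bob in the result even though no sales row matches him, and COALESCE turns his NULL sum into the requested 0.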
Is there a way to get to the root of the hierarchy with a single SQL statement?
The significant columns of the table would be: EMP\_ID, MANAGER\_ID.
MANAGER\_ID is self joined to EMP\_ID, as manager is also an employee. Given an EMP\_ID is there a way to get to the employee (manager) (walking up the chain) where EMP\_ID is null?
In other words the top guy in the org?
I'm using SQL Server 2008
Thanks. | You want a [Common Table Expression](http://msdn.microsoft.com/en-us/library/ms175972%28v=sql.105%29.aspx). Among other things, they can do recursive queries just like what you're looking for. | It's hard to find a single SQL query that will bring the result with the current structure you have for your table. Like Brett said, you can try with a stored function.
But what I think is best worth looking at is [nested sets](http://en.wikipedia.org/wiki/Nested_set_model), which is one well-confirmed design for trees implemented in relational databases. | Manager-Managed classic self-join table | [
"",
"sql",
"sql-server-2008-r2",
"self-join",
""
] |
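The recursive CTE that the accepted answer points to can be sketched as below. The syntax shown is SQLite's `WITH RECURSIVE` (used here only because it is easy to run inline); SQL Server 2008 writes the same query as plain `WITH chain AS (...)`, since T-SQL has no RECURSIVE keyword. Table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (emp_id INTEGER, manager_id INTEGER);
INSERT INTO employees VALUES (1, NULL), (2, 1), (3, 2), (4, 2);
""")

# Walk up the manager chain from a given employee until manager_id IS NULL.
root = conn.execute("""
WITH RECURSIVE chain(emp_id, manager_id) AS (
    SELECT emp_id, manager_id FROM employees WHERE emp_id = ?
    UNION ALL
    SELECT e.emp_id, e.manager_id
    FROM employees e
    JOIN chain c ON e.emp_id = c.manager_id
)
SELECT emp_id FROM chain WHERE manager_id IS NULL
""", (4,)).fetchone()[0]
print(root)  # 1 -- the top of the org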
I've created a simple mysql table like [this](https://lh5.googleusercontent.com/-6lsK0AZGV54/UoG5UyVXtSI/AAAAAAAAAXE/ZTpiIjZPYJA/w556-h125-no/table.jpg)
models column takes VARCHAR(30)
But when I execute this query
```
SELECT *
FROM `Vehicle_Duty_Chart`
WHERE models = "SE3P"
LIMIT 0 , 30
```
It returns this
```
MySQL returned an empty result set (i.e. zero rows). (Query took 0.0004 sec)
```
What ever the 'models' column value I give result is the same. For queries like
```
WHERE ins =7000
```
returns desired outputs.
I've no idea what I'm doing wrong here.
Thank you. | try this:
```
SELECT *
FROM `Vehicle_Duty_Chart`
WHERE TRIM(models) = "SE3P"
LIMIT 0 , 30
``` | Try with below query
```
SELECT * FROM Vehicle_Duty_Chart WHERE models LIKE '%SE3P%' LIMIT 0 , 30
``` | Take string value in where clause - MySQL | [
"",
"mysql",
"sql",
"database",
""
] |
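A likely culprit, given that the numeric comparison works while the string one does not, is padding around the stored value, which is exactly what the TRIM answer addresses. A quick reproduction in SQLite (illustrative data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vehicle_duty_chart (models TEXT, ins INTEGER)")
conn.execute("INSERT INTO vehicle_duty_chart VALUES ('SE3P  ', 7000)")  # trailing spaces

exact = conn.execute(
    "SELECT * FROM vehicle_duty_chart WHERE models = 'SE3P'").fetchall()
trimmed = conn.execute(
    "SELECT * FROM vehicle_duty_chart WHERE TRIM(models) = 'SE3P'").fetchall()
print(exact)    # [] -- hidden padding defeats the equality test
print(trimmed)  # [('SE3P  ', 7000)]
```

If padding turns out to be the cause, cleaning the data once (`UPDATE ... SET models = TRIM(models)`) is usually better long-term than trimming in every query.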
I'm trying to design a new database structure and was wondering of any advantages/disadvantages of using one-to-many vs many-to-many relationships between tables for the following example.
Let's say we need to store information about clients, products, and their addresses (each client and product may have numerous different addresses (such as "shipping" and "billing")).
One way of achieving this is:

This is a rather straightforward approach where each relationship is of the one-to-many type, but it involves creating additional tables for each address type.
On the other hand we can create something like this:

This time we simply store an additional "type" field indicating what kind of address it is (Client-Billing (0), Client-Shipping (1), Product-Contact (2)) and "source\_id" which is either (Clients.ID or Products.ID depending on the "type" field's value).
This way "Addresses" table doesn't have a "direct" link to any other tables, but the structure seems to be a lot simpler.
My question is if either of those approaches have any significant advantages or is it just a matter of preference? Which one would you choose? Are there any challenges I should be aware of in the future while extending the database? Are there any significant performance differences?
Thank you. | I think that you have pretty much covered the answer yourself.
To me it really depends on what you are modelling and how you think it will change in the future.
Personally I'd not recommend over engineering solutions, so the one-to-many solution sound great.
However if you expect that your needs will change in say 1 month then select the many-to-many.
It's really what you need.
You could change the database at a later stage to have a many-to-many relationship, but that will have a cost and time impact.
Personally I'd keep the database structure as simple as needed, however understanding how you'd change it later.
Personally I only used MS-SQL Server, so perhaps others have a better understanding of other database technologies.
I think it'd be interesting to see what you are using to access your database, eg Sprocs, or something like NHibernate or Entity Framework.
If you believe that the changing the database structure could cause you big issues (it always has for me), experiment and find out how you'd do it.
That'll give you the experience you need to make more informed decisions. | There seems to be redundancy in both of the designs, using junction tables with an "address type" field with a unique constraint accross all three columns would minimize this.
* client : `id | name`
* client\_address : `client_id | address_id | address_type`
* address : `id | line_one | line_two | line_three | line_four | line_five`
* product\_address : `product_id | address_id | address_type`
* product : `id | name`
either that or make the address type an attribute of product and client
* client : `id | name | billing_address | contact_address`
* address : `id | line_one | line_two | line_three | line_four | line_five`
* product : `id | name | billing_address | contact_address` | Database design advantages for using one-to-many vs many-to-many relationships | [
"",
"sql",
"database",
"foreign-keys",
"schema",
""
] |
In my server I'm adding values to database using a command like
```
INSERT INTO votes VALUES ('1',0),('2',0),('3',0)
```
The primary key is the first element of each value tuple. Sometimes, when adding, the primary key may already exist in the table.
The issue is that, if that happens, I get an error, and it stops the whole process of adding them all.
Is there a way such that, if the current key already exists, then it just skips the current value set, and moves on to the next one?
Thanks | `INSERT IGNORE INTO votes VALUES ('1',0),('2',0),('3',0);` | No need to explicitly assign primary key if the primary key is `AUTO_INCREMENT`.
Assuming your table `votes` has 2 columns: `id` and `data`:
The query is:
```
INSERT INTO votes (`data`) VALUES (0),(0),(0)
``` | How to add all list of values to mysql database and keep adding even if there are duplicate primary keys? | [
"",
"mysql",
"sql",
"database",
"sql-insert",
""
] |
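The same behavior can be reproduced in SQLite, which spells MySQL's `INSERT IGNORE` as `INSERT OR IGNORE` (a sketch using the question's values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (id TEXT PRIMARY KEY, tally INTEGER)")
conn.execute("INSERT INTO votes VALUES ('1', 0)")  # key '1' already present

# MySQL: INSERT IGNORE INTO votes VALUES ...;  SQLite: INSERT OR IGNORE
conn.execute("INSERT OR IGNORE INTO votes VALUES ('1', 0), ('2', 0), ('3', 0)")
rows = conn.execute("SELECT id FROM votes ORDER BY id").fetchall()
print(rows)  # [('1',), ('2',), ('3',)] -- the duplicate was skipped, the rest inserted
```

The duplicate key does not abort the statement; the offending row is silently dropped and the remaining rows go in.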
Good morning Stack, I am in need of some major help.
I have to convert an EXCEL Formula to a SQL case.
The whole thing keeps falling apart on me when I get to the last section. I can't for the life of me figure out how to set this up properly, and I am tired of getting kicked in the nads over it.
My main problem is how Excel handles multiple "Else" portions of its If/Thens.
Here is the Excel :
```
=IF(AND(W7+H7<=0,IF(E7-H7-S7>0,E7-H7-S7,0)<=0),0,IF(W7+H7<IF(E7-H7-S7>0,E7-H7-S7,0),IF(W7+H7<0,0,W7+H7),IF(E7-H7-S7>0,IF(W7>H7,E7-S7,H7+AF7),H7)))
```
Here is the relevant sections and how they have been converted to SQL so far.
```
E4 = TotInChar.TotInChar
H4 = CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2))
W4 = CAST(tmpdc.[14]AS decimal(10,2))
S4 = T4MathBlock.TTIV
AF4 = [AmountIncluded].[Amount]
```
Here is the SQL Case I have done so far:
(Its designed to be part of an Outer Apply, as I am handling all math portions in their own Outer apply. That way anyone that comes behind me will find the code segmented and documented properly)
```
SELECT (CASE
WHEN CAST(tmpdc1.[14]AS decimal(10,2))+(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2))) <= 0
AND (CASE WHEN(TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV) >0
THEN TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV
ELSE '0.00' END) <=0 THEN '0.00'
/* part 2*/
WHEN CAST(tmpdc1.[14] AS decimal(10,2))+(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))
<
(CASE WHEN (TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV)
> 0
THEN TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV
ELSE '0.00'
END)
THEN (TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV)
/*Part 3*/
WHEN CAST(tmpdc.[14]AS decimal(10,2))+CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2))
< 0 THEN '0.00'
ELSE (CAST(tmpdc.[14]AS decimal(10,2))+CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2)))
/*Part 4*/
WHEN TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV
> 0 THEN
(CASE(CAST(tmpdc.[14]AS decimal(10,2)) > CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2))
THEN TotInChar.TotInChar - T4MathBlock.TTIV ELSE '0.00')
ELSE CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2)))
End
End) AS[ANT4]
From #TempDisclosure tmpdc1
WHERE tmpdc1.[Number] = Tmpdc.[Number]
```
Thank you for any help you can give on this, I have been working at it for about 10 hours now, and my brain just hurts | a quick paren match shows there may be an error in the last section...
you have this:
```
WHEN TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV
> 0 THEN
(CASE(CAST(tmpdc.[14]AS decimal(10,2)) > CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2))
THEN TotInChar.TotInChar - T4MathBlock.TTIV ELSE '0.00')
ELSE CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2)))
End
```
Try this:
```
WHEN TotInChar.TotInChar -(CAST(tmpdc1.[3]AS Decimal (10,2)) + CAST(tmpdc1.[5]AS Decimal (10,2)))- T4MathBlock.TTIV
> 0 THEN
(CASE WHEN CAST(tmpdc.[14]AS decimal(10,2)) > CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2))
THEN TotInChar.TotInChar - T4MathBlock.TTIV ELSE '0.00' END)
ELSE CAST(tmpdc.[3]AS Decimal (10,2)) + CAST(tmpdc.[5]AS Decimal (10,2))
End
``` | Simplify it.
Formulate the CASE assuming that all are integers. The details of casting should come later.
At present the solution you have is so cluttered that it's impossible to verify whether it matches the Excel formula or not.
"",
"sql",
"sql-server",
"excel",
""
] |
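A general mapping that helps with translations like the one above: `=IF(cond, a, b)` becomes `CASE WHEN cond THEN a ELSE b END`, and each nested IF sitting in an "else" slot becomes an additional WHEN branch of the same CASE (evaluated in order) rather than a new nested expression. A small, self-contained illustration with simplified, made-up operands named after the spreadsheet cells, run through SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Excel shape: =IF(AND(w+h<=0, e-h-s<=0), 0, IF(w+h<0, 0, w+h))
# The nested IF in the else-slot flattens into a second WHEN branch.
row = conn.execute("""
SELECT CASE
         WHEN w + h <= 0 AND e - h - s <= 0 THEN 0
         WHEN w + h < 0 THEN 0
         ELSE w + h
       END
FROM (SELECT 5 AS w, 3 AS h, 10 AS e, 2 AS s) AS v
""").fetchone()[0]
print(row)  # 8 -- neither WHEN fires, so the ELSE returns w + h
```

Only an IF nested inside a *condition* or a *then* slot needs an inner `CASE ... END`; everything chained through "else" flattens into one CASE.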
I get this error when I try to add the foreign key :
> "ORA-00904: "BR\_ID": invalid identifier"
```
create table Branch9
(br_id number NOT NULL,br_name varchar2(25) NOT NULL ,br_address varchar2(30),PRIMARY KEY(br_id))
create table Employee9
(emp_id number NOT NULL,emp_name varchar2(25) NOT NULL UNIQUE,emp_address varchar2(30),emp_age number,emp_dob date,emp_salary number,PRIMARY KEY(emp_id))
ALTER TABLE Employee9
ADD FOREIGN KEY (br_id) REFERENCES Branch9 (br_id);
```
Is it anything related to the `NOT NULL` constraint added to br\_id? | `br_id` has to be a column in the `Employee9` table:
This is how you add it:
```
ALTER TABLE Employee9 ADD br_id number NOT NULL;
```
Then you can do:
```
ALTER TABLE Employee9
ADD CONSTRAINT fk_br_id FOREIGN KEY (br_id) REFERENCES Branch9 (br_id);
``` | br\_id needs to be a column in Employee9. | SQL - foreign key error | [
"",
"sql",
"oracle",
"foreign-keys",
""
] |
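The rule behind the accepted answer, that the referencing column must exist in the child table before the foreign key can be declared, can be demonstrated with SQLite (which declares the constraint inline at CREATE time, since it lacks Oracle's `ALTER TABLE ... ADD CONSTRAINT` form; names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE branch9 (br_id INTEGER PRIMARY KEY, br_name TEXT NOT NULL);
-- the referencing column br_id must exist in the child table
CREATE TABLE employee9 (
    emp_id INTEGER PRIMARY KEY,
    emp_name TEXT NOT NULL,
    br_id INTEGER REFERENCES branch9 (br_id)
);
INSERT INTO branch9 VALUES (1, 'Main');
""")
conn.execute("INSERT INTO employee9 VALUES (10, 'Ann', 1)")       # OK
try:
    conn.execute("INSERT INTO employee9 VALUES (11, 'Bob', 99)")  # no branch 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed
```

Once the column exists, the constraint both permits valid references and rejects orphaned ones.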
I have a composite primary key on 2 columns in the table I am INSERTing into. I come from working with SQL Server, and I know that if I attempted to insert a duplicate key value into a PK table, it would throw an error.
My problem is, my code is not throwing this kind of error. Can you look at it and see if it's a problem with the code? Or does Access not throw errors for this kind of violation?
[Edit]
I guess I'm looking for a way to just acknowledge that duplicate records are attempted to be inserted. I want the current functionality to remain (dups are tossed; valid records are inserted). I don't want the entire INSERT to get rolled back.
My code is below.
```
Function InsertData(Ignore As String)
' define file path of CSV to be imported
Dim CurrentDate As String
Dim CurrentYear As String
CurrentDate = Format(Date, "yyyymmdd")
CurrentYear = Format(Date, "yyyy")
Dim Exfile As String
Exfile = iPath + "\" + CurrentYear + "\" + "FileName" + CurrentDate + ".txt"
'this calls a saved import routine
DoCmd.RunSavedImportExport "tbl_TEMP"
'merge data with that already existing in tbl_Perm.
'the clustered PK on product_ID and As_of_Date prevents dup insertion
Dim dbs As Database
Dim errLoop As Error
Set dbs = OpenDatabase(iPath + "\ExDatabase.mdb")
dbs.Execute " INSERT INTO tbl_Perm (Col1,Col2,Date_Created) " _
& "SELECT ColA + ColB, ColC, Format$(Now(),'Short Date')" _
& "FROM tbl_TEMP;"
' Trap for errors, checking the Errors collection if necessary.
On Error GoTo Err_Execute
'delete temp table
dbs.Execute "DROP TABLE tbl_TEMP;"
dbs.Close
Err_Execute:
' Notify user of any errors that result from
' executing the query.
If DBEngine.Errors.Count > 0 Then
For Each errLoop In DBEngine.Errors
MsgBox "Error number: " & errLoop.Number & vbCr & _
errLoop.Description
Next errLoop
End If
Resume Next
End Function
``` | If you want to allow the INSERT to proceed and determine whether any duplicates were rejected then you could do something like this
```
Dim cdb As DAO.Database, qdf As DAO.QueryDef, rst As DAO.Recordset
Dim sqlSelect As String, sourceRecords As Long
Set cdb = CurrentDb
sqlSelect = _
"SELECT ColA + ColB, ColC, Format$(Now(),'Short Date') " & _
"FROM tbl_TEMP"
Set rst = cdb.OpenRecordset("SELECT COUNT(*) AS n FROM (" & sqlSelect & ")", dbOpenSnapshot)
sourceRecords = rst!n
rst.Close
Set rst = Nothing
Set qdf = cdb.CreateQueryDef("", _
"INSERT INTO tbl_Perm (Col1,Col2,Date_Created) " & sqlSelect)
qdf.Execute
If qdf.RecordsAffected < sourceRecords Then
Debug.Print sourceRecords - qdf.RecordsAffected & " record(s) not inserted"
End If
Set qdf = Nothing
Set cdb = Nothing
``` | From the Microsoft DAO Doc ([here](http://msdn.microsoft.com/en-us/library/office/ff197654.aspx)):
> In a Microsoft Access workspace, if you provide a syntactically
> correct SQL statement and have the appropriate permissions, the
> Execute method won't fail — even if not a single row can be modified
> or deleted. Therefore, always use the dbFailOnError option when using
> the Execute method to run an update or delete query. This option
> generates a run-time error and rolls back all successful changes if
> any of the records affected are locked and can't be updated or
> deleted.
So add the `dbFailOnError` option to your call. | detecting whether some records are excluded from an INSERT INTO operation | [
"",
"sql",
"ms-access",
"error-handling",
"vba",
""
] |
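The accepted answer's counting approach (compare the source row count with the records affected by the INSERT) can be sketched outside Access as well. Here is the same idea in SQLite, whose `INSERT OR IGNORE` plays the role of letting duplicate-key rows be tossed while valid rows are inserted (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_temp (product_id INTEGER, as_of_date TEXT);
CREATE TABLE tbl_perm (product_id INTEGER, as_of_date TEXT,
                       PRIMARY KEY (product_id, as_of_date));
INSERT INTO tbl_temp VALUES (1, '2013-11-01'), (2, '2013-11-01'), (3, '2013-11-01');
INSERT INTO tbl_perm VALUES (1, '2013-11-01');  -- pre-existing duplicate
""")
source = conn.execute("SELECT COUNT(*) FROM tbl_temp").fetchone()[0]
cur = conn.execute("INSERT OR IGNORE INTO tbl_perm SELECT * FROM tbl_temp")
skipped = source - cur.rowcount
print(skipped)  # 1 -- one record was rejected as a duplicate, the rest went in
```

The statement still succeeds as a whole; the difference between source rows and rows affected tells you how many duplicates were silently dropped.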
First, I'm not sure the title represents the issue well; any better suggestion is welcome. My problem is that I have the following table:
```
+----+----------+-------+-----------------+
| ID | SUPPLIER | BUYER | VALIDATION_CODE |
+----+----------+-------+-----------------+
| 1 | A | Z | 937886521 |
| 2 | A | X | 937886521 |
| 3 | B | Z | 145410916 |
| 4 | C | V | 775709785 |
+----+----------+-------+-----------------+
```
I need to show SUPPLIERS A and B which have BUYERS Z and X. However, I want this condition to be a **one-to-one** relationship rather than **one-to-many**. That is, for supplier A, I want to show the rows with IDs 1 and 2. For supplier B, I want to show row 3 only. The following script will show supplier A with all possible buyers (which I do not want):
```
SELECT *
FROM validation
WHERE supplier IN ( 'A', 'B' )
AND buyer IN ( 'X', 'Z');
```
This will show the following pairs: (A,Z), (A,X), (B,Z). I need to show only (A,X) and (B,Z) in **one statement**.
The desired result should be like this:
```
+----+----------+-------+-----------------+
| ID | SUPPLIER | BUYER | VALIDATION_CODE |
+----+----------+-------+-----------------+
| 2 | A | X | 937886521 |
| 3 | B | Z | 145410916 |
+----+----------+-------+-----------------+
``` | You can update the WHERE clause to filter on the desired pairs:
```
select *
from validation
where (upper(supplier),upper(buyer))
in (('A','X'),('A','Y'),('A','Z'),('B','X'),('B','Y'),('B','Z'));
```
I used the UPPER function based on your mixed case examples. | try this query:
```
select ID,SUPPLIER,BUYER,VALIDATION_CODE from
(select
t2.*,t1.counter
from
validation t2,
(select supplier,count(supplier) as counter from validation group by supplier)t1
where
t1.supplier = t2.supplier)t3
where t3.supplier in('A','B') and
id = case when t3.counter > 1 then
(select max(id) from validation t4 where t4.supplier = t3.supplier) else t3.id end;
``` | Choose rows based on two connected column values in one statement - ORACLE | [
"",
"sql",
"oracle",
""
] |
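For engines without Oracle's tuple-IN syntax, the accepted answer's pair filter expands into an OR of ANDed pairs; the two forms are equivalent. A runnable check against the question's data, using SQLite (which does not need the UPPER calls since the sample values are already uppercase):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE validation (id INTEGER, supplier TEXT, buyer TEXT, validation_code TEXT);
INSERT INTO validation VALUES
  (1, 'A', 'Z', '937886521'),
  (2, 'A', 'X', '937886521'),
  (3, 'B', 'Z', '145410916'),
  (4, 'C', 'V', '775709785');
""")
rows = conn.execute("""
SELECT id, supplier, buyer FROM validation
WHERE (supplier = 'A' AND buyer = 'X')
   OR (supplier = 'B' AND buyer = 'Z')
ORDER BY id
""").fetchall()
print(rows)  # [(2, 'A', 'X'), (3, 'B', 'Z')]
```

Listing the exact (supplier, buyer) pairs is what turns the unwanted cross-product of `supplier IN (...) AND buyer IN (...)` into the intended one-to-one matching.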
I have the following table:
```
RowId, UserId, Date
1, 1, 1/1/01
2, 1, 2/1/01
3, 2, 5/1/01
4, 1, 3/1/01
5, 2, 9/1/01
```
I want to get the latest records based on date and per `UserId`, but as part of the following query (for reasons I cannot change this query, as it is auto-generated by a tool, but I can pass in anything starting with AND...):
```
SELECT RowId, UserId, Date
FROM MyTable
WHERE 1 = 1
AND (
// everything which needs to be done goes here . . .
)
```
I have tried a [similar query](https://stackoverflow.com/a/2411690/1158845), but I get an error:
> Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
EDIT: Database is Sql Server 2008 | You could use a `NOT EXISTS` condition:
```
SELECT RowId, UserId, Date
FROM MyTable
WHERE 1 = 1
AND NOT EXISTS (
SELECT *
FROM MyTable AS t
WHERE t.UserId = MyTable.UserId
AND t.Date > MyTable.Date
)
;
```
Note that if a user has more than one row with the same latest `Date` value, the query will return all such entries. If necessary, you can modify the subquery's condition slightly to make sure only one row is returned:
```
WHERE t.UserId = MyTable.UserId
AND (t.Date > MyTable.Date
OR t.Date = MyTable.Date AND t.RowId > MyTable.RowId
)
```
With the above condition, if two or more rows with the same `Date` exist for the same user, the one with the greater `RowId` value will be returned. | Try:
```
SELECT RowId, UserId, Date
FROM MyTable
WHERE 1 = 1
AND EXISTS
(SELECT 1
FROM (SELECT UserId, MAX(Date) MaxDate
FROM MyTable
GROUP BY UserId) m
WHERE m.UserId = MyTable.UserId and m.MaxDate = MyTable.Date)
```
SQLFiddle [here](http://sqlfiddle.com/#!3/9c3a4/1). | Get latest sql rows based on latest date and per user | [
"",
"sql",
""
] |
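The accepted NOT EXISTS pattern, run end-to-end against the sample table from the question (dates rewritten in ISO form so that plain string comparison orders them correctly in SQLite; SQL Server would compare real datetime values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (row_id INTEGER, user_id INTEGER, dt TEXT);
INSERT INTO mytable VALUES
  (1, 1, '2001-01-01'), (2, 1, '2001-01-02'), (3, 2, '2001-01-05'),
  (4, 1, '2001-01-03'), (5, 2, '2001-01-09');
""")

# "Latest per user": keep a row only if no later row exists for the same user.
rows = conn.execute("""
SELECT row_id, user_id, dt
FROM mytable m
WHERE 1 = 1
  AND NOT EXISTS (SELECT 1 FROM mytable t
                  WHERE t.user_id = m.user_id AND t.dt > m.dt)
ORDER BY user_id
""").fetchall()
print(rows)  # [(4, 1, '2001-01-03'), (5, 2, '2001-01-09')]
```

Everything after `WHERE 1 = 1` starts with AND, which is what the tool-generated query requires.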
Database indexes are meant to make lookups faster, but what about the performance of a query that uses a regular expression in the WHERE clause on an indexed column?
Say we have a table `FILES` which has a field `FILE_NAME`, and we have an index created on FILE\_NAME.
Then we have a query for searching files with matching name patterns
```
SELECT * FROM FILES WHERE FILE_NAME RLIKE regexp
```
Will creating an index on FILE\_NAME help improve the performance of the above query? | Yes, it *may* improve the performance - but not the way you think.
In normal usage an index allows the DBMS to find rows in a table without scanning through each entry - it can skip over some (for range comparisons, specific values and partial matches such as `LIKE 'ABC%'`, but *not* for a query with `LIKE '%XYZ'`). When you try to find rows using some transformation of the data (despite the syntax, RLIKE is a function based on the operands rather than an operator) the DBMS must apply the transformation to each row of the table. Some DBMSs (e.g. Oracle) support function-based indexes, so as long as your regex is constant you could define an index based on a regex match - MariaDB supports virtual columns which can be indexed, which amounts to the same thing.
Hence using an index here won't reduce the number of rows the DBMS has to fetch in order to filter the query.
However, if the number of matches is low relative to the number of rows in the underlying data, and the width of the index is relatively small to the width of the table rows, then the DBMS can identify matching rows by reading from the index - which will be faster and require fewer I/O operations than reading the table rows. OTOH if the index is not massively more compact than the table it represents and the index is not covering (i.e. all terms in the query can be satisfied from the index) you will get worse performance than without the index - since the DBMS must perfom an additional seek and read operation after each match to get to the data - indeed, it's likely the DBMS will never use such an index without an explicit hint. | Nope. A where clause with a regexp won't use an index for the column. However, a index will work for a `LIKE 'foo%'`, so you may use that for narrowing the results. | impact of database Index on regular expression queries in MySQL | [
"",
"mysql",
"sql",
"regex",
"database",
"indexing",
""
] |
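To see the sargability point from the accepted answer in practice: SQLite parses a REGEXP operator but leaves its implementation to a user-defined `regexp()` function, which makes it easy to contrast a regex predicate (evaluated against every row, so a full scan) with an anchored prefix LIKE (which an index on the column can satisfy). A sketch with illustrative file names:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite translates "x REGEXP y" into a call to regexp(y, x); supply one.
conn.create_function("regexp", 2, lambda pattern, s: re.search(pattern, s) is not None)
conn.execute("CREATE TABLE files (file_name TEXT)")
conn.executemany("INSERT INTO files VALUES (?)",
                 [("report_2013.txt",), ("img_001.png",), ("img_002.png",)])

# Regex predicate: must run the matcher on every row.
by_regex = conn.execute(
    "SELECT file_name FROM files WHERE file_name REGEXP '^img_[0-9]+' "
    "ORDER BY file_name").fetchall()
# Anchored prefix LIKE: sargable, so it can use an index on file_name.
# (Note that _ is itself a single-character wildcard in LIKE.)
by_like = conn.execute(
    "SELECT file_name FROM files WHERE file_name LIKE 'img_%' "
    "ORDER BY file_name").fetchall()
print(by_regex)  # [('img_001.png',), ('img_002.png',)]
```

When a regex is anchored at the start, pre-filtering with a prefix LIKE and applying the regex to the survivors is a common way to let the index do most of the work.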
I have a table with two temporal columns. First (name is DATE) is storing the date (not including the time part) and therefor the datatype is DATE. Second column (name is TIME) is for storing the time in seconds and therefor the datatype is NUMBER.
I need to compare these two columns with a timestamp from another table. How can I combine the two columns (DATE and TIME) into a single date value and compare it to the timestamp of the other table?
I have tried to calculate the hours from the time column and add them to the date column, but the output does not seem correct:
```
SELECT to_date(date + (time/3600), 'dd-mm-yy hh24:mi:ss') FROM mytable;
```
The output is just the date, but not the time component. | `date + (time/3600)` [is already a `DATE`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements001.htm#sthref172), so you don't need to do `to_date()`. It does have the time part you added though, you just aren't displaying it. If you want to output that as a string in the format you've shown, use `to_char()` instead:
```
SELECT to_char(date + (time/3600), 'dd-mm-yy hh24:mi:ss') FROM mytable;
```
... except that if `time` is actually in seconds, you need to divide by 86400 (24x60x60), not 3600. At the moment you're relying on your client's default date format, probably NLS\_DATE\_FORMAT, which doesn't include the time portion from what you've said. That doesn't mean the time isn't there, it just isn't displayed.
But that is just for display. Leave it as a date, by just adding the two values, when comparing against you timestamp, e.g.
```
WHERE date + (time/86400) < systimestamp
``` | You can use the `INTERVAL DAY TO SECOND` type:
```
SELECT your_date + NUMTODSINTERVAL(your_time_in_seconds, 'SECOND') FROM dual;
```
Example:
```
SELECT TRUNC(SYSDATE) + NUMTODSINTERVAL(39687, 'SECOND') FROM dual;
```
The calculated date with time is: `10-11-2013 11:01:27`
This is a better idea than dividing your value by 3600 in my opinion, as you have an interval in seconds, so it feels natural to use an interval to represent your time, which can then be easily added to a column of DATE datatype.
[Oracle Interval in Documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements003.htm#i38598)
[NUMTODSINTERVAL Function in documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions103.htm) | Compare date + time with timestamp | [
"",
"sql",
"oracle",
""
] |
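The 86400-vs-3600 point from the answers above is easy to check outside the database: in Python terms, Oracle's `date_col + seconds/86400` corresponds to adding `timedelta(seconds=...)` to a datetime. The sample value is the one used in the second answer:

```python
from datetime import datetime, timedelta

# Oracle DATE arithmetic works in days, so a seconds column must be scaled
# by 86400 (seconds per day), not 3600: date_col + time_col/86400.
seconds = 39687
moment = datetime(2013, 11, 10) + timedelta(seconds=seconds)
print(moment)  # 2013-11-10 11:01:27

# "seconds/86400" is exactly the fraction of a day that timedelta represents:
assert seconds / 86400 == timedelta(seconds=seconds) / timedelta(days=1)
```

Dividing by 3600 instead would silently add 39687/3600 = 11 *days* rather than 11 hours, which is why the original attempt looked wrong.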
I have two columns in my database, like this:
```
First Name | Last Name | Full Name
------------------------------------------------
Fayyaz | Jaan |
Danny | Rock |
Tom | Jerry |
```
I have another column "Full Name" that is empty, and I want to merge these two columns into the third column, like:
```
First Name | Last Name | Full Name
----------------------------------------------------
Fayyaz | Jaan | Fayyaz Jaan
Danny | Rock | Danny Rock
Tom | Jerry | Tom Jerry
```
Any help with the SQL query is appreciated.
Thanks
Hi everyone, I found the right notation for concatenating two columns within the query:
```
UPDATE contact SET fullName=CONCAT(firstName,LastName)
```
But the problem is, the resulting column has no space between the first and last name, like "TomJerry" and "DannyRock".
Any help? | Perhaps this will work:
```
UPDATE <table> SET FullName = FirstName || ' ' || LastName
```
Or, seeing your edit, this will probably work:
```
UPDATE <table> SET FullName = CONCAT(FirstName,' ',LastName);
```
And, if that doesn't work, then this will:
```
UPDATE <table> SET FullName = CONCAT(CONCAT(FirstName,' '),LastName);
``` | try
```
update <table> set FullName = FirstName + ' ' + LastName
```
Otherwise, add the 3rd result to select, something like
```
select firstname, lastname, firstname + ' ' + lastname as FullName from <table>
``` | How to two Column in another column using SQL Query? | [
"",
"mysql",
"sql",
""
] |
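Putting the thread's conclusion into one runnable snippet: the fix is simply to place a space between the two operands. MySQL spells it `CONCAT(firstName, ' ', LastName)` (or `CONCAT_WS(' ', firstName, LastName)`); SQLite and standard SQL use the `||` operator, as below (illustrative column names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contact (first_name TEXT, last_name TEXT, full_name TEXT);
INSERT INTO contact (first_name, last_name) VALUES ('Tom', 'Jerry'), ('Danny', 'Rock');
""")

# Standard-SQL concatenation with an explicit space in the middle.
conn.execute("UPDATE contact SET full_name = first_name || ' ' || last_name")
rows = conn.execute("SELECT full_name FROM contact ORDER BY full_name").fetchall()
print(rows)  # [('Danny Rock',), ('Tom Jerry',)]
```

`CONCAT_WS` ("with separator") is often the tidiest MySQL form, since the separator is stated once no matter how many columns are joined.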
I want to implement this type of logic, using DB2:
```
Create Procedure sp_MyProcedure (
IN Var1 Decimal(6, 0),
IN Var2 Decimal(6, 0)
) Language SQL
Dynamic Result Sets 1
Begin
If Exists(Select * from MyTable where CustomerNbr = Var1) Then
return (Select * from MyTable where CustomerNbr = Var1)
Else If Exists(Select * from MyTable where CustomerNbr = Var2) Then
return (Select * from MyTable where CustomerNbr = Var2)
Else
return (Select * from MyTable where CustomerNbr = 0)
End If
End
```
But I can't figure out the syntax to implement this type of logic. Plus, I am running each query twice: once to check that it returns values and, if it does, once to create the returned set of data. So I know there has to be a better way. I have been looking at the "With" statement to create temporary tables, but so far it has essentially the same issues of inefficiency and syntax limits.
(I apologize for the formatting of the code. I can't seem to get it to work right in this text editor)
Can someone suggest the best way to accomplish this?
Thanks, in advance, for your advice. | Here is an ugly select statement that will work:
```
With allrecords as
(
Select 1 as qnum,*
from MyTable
where CustomerNbr = Var1
union all
Select 2 as qnum, *
from MyTable
where CustomerNbr = Var2
union all
Select 3 as qnum, *
from MyTable
where CustomerNbr = 0
)
select *
from allrecords
where qnum = (select min(qnum) from allrecords)
```
Note, if you don't want qnum returned in the result then you have to give a select list for the final select statement that does not include qnum. | Best I can think of is as such:
```
SELECT * FROM MyTable
WHERE custNo = Var1
OR (custNo != Var1
AND custNo = Var2)
OR (custNo != Var1
AND custNo != Var2
AND custNo = 0);
``` | what is the best way to select the first query that returns a result in DB2 | [
"",
"sql",
"db2",
"exists",
"with-statement",
""
] |
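The UNION ALL / MIN(qnum) pattern from the first answer is portable; here it is exercised end-to-end in SQLite with a tiny illustrative table. Customer 5 has no rows, so the Var2 branch (qnum 2) is the first that returns anything:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (customer_nbr INTEGER, amount REAL);
INSERT INTO mytable VALUES (0, 1.0), (7, 2.0);   -- nothing for customer 5
""")
var1, var2 = 5, 7
rows = conn.execute("""
WITH allrecords AS (
    SELECT 1 AS qnum, * FROM mytable WHERE customer_nbr = :v1
    UNION ALL
    SELECT 2 AS qnum, * FROM mytable WHERE customer_nbr = :v2
    UNION ALL
    SELECT 3 AS qnum, * FROM mytable WHERE customer_nbr = 0
)
SELECT customer_nbr, amount
FROM allrecords
WHERE qnum = (SELECT MIN(qnum) FROM allrecords)
""", {"v1": var1, "v2": var2}).fetchall()
print(rows)  # [(7, 2.0)] -- the Var2 query is the first with results
```

Each candidate query runs once inside the CTE, and MIN(qnum) picks the highest-priority branch that produced rows, which is exactly the if/else-if cascade from the question collapsed into a single statement.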
I have a table user\_address and it has some fields like
```
attributes: {
user_id: 'integer',
address: 'string' //etc.
}
```
currently I'm doing this to insert a new record, but if one exists for this user, update it:
```
UserAddress
.query(
'INSERT INTO user_address (user_id, address) VALUES (?, ?) ' +
'ON DUPLICATE KEY UPDATE address=VALUES(address);',
params,
function(err) {
//error handling logic if err exists
}
```
Is there any way to use the Waterline ORM instead of straight SQL queries to achieve the same thing? I don't want to do two queries because it's inefficient and hard to maintain. | Make a custom model method that does what you want using Waterline queries instead of raw SQL. You will be doing two queries, but with Waterline syntax.
Example below (if you don't know about deferred objects then just use callback syntax, but the logic is the same):
```
var Q = require('q');
module.exports = {
attributes: {
user_id: 'integer',
address: 'string',
updateOrCreate: function (user_id, address) {
var deferred = Q.defer();
UserAddress.findOne().where({user_id: user_id}).then(function (ua) {
if (ua) {
// UserAddress exists. Update.
ua.address = address;
ua.save(function (err) {deferred.resolve();});
} else {
// UserAddress does not exist. Create.
UserAddress.create({user_id: user_id, address: address}).done(function (e, ua) {deferred.resolve();});
}
}).fail(function (err) {deferred.reject()});
return deferred.promise;
}
};
``` | The answer above is less than ideal. It also has the method as part of the attributes for the model, which is not correct behavior.
Here is what the ideal native solution looks like that returns a promise just like any other waterline model function would:
```
module.exports = {
attributes: {
user_id: 'integer',
address: 'string'
},
updateOrCreate: function (user_id, address) {
return UserAddress.findOne().where({user_id: user_id}).then(function (ua) {
if (ua) {
return UserAddress.update({user_id: user_id}, {address: address});
} else {
// UserAddress does not exist. Create.
return UserAddress.create({user_id: user_id, address: address});
}
});
}
}
```
Then you can just use it like:
```
UserAddress.updateOrCreate(id, address).then(function(ua) {
// ... success logic here
}).catch(function(e) {
// ... error handling here
});
``` | Waterline ORM equivalent of insert on duplicate key update | [
"",
"sql",
"node.js",
"orm",
"sails.js",
"waterline",
""
] |
This query suggests friendship based on how many words users have in common. in\_common sets this threshold.
I was wondering if it was possible to make this query completely % based.
What I want to do is have user suggested to current user, if **30%** of their words match.
current\_user total words 100
in\_common threshold 30
some\_other\_user total words 10
3 out of these match current\_users list.
Since 3 is 30% of 10, this is a match for the current user.
**Possible?**
```
SELECT users.name_surname, users.avatar, t1.qty, GROUP_CONCAT(words_en.word) AS in_common, (users.id) AS friend_request_id
FROM (
SELECT c2.user_id, COUNT(*) AS qty
FROM `connections` c1
JOIN `connections` c2
ON c1.user_id <> c2.user_id
AND c1.word_id = c2.word_id
WHERE c1.user_id = :user_id
GROUP BY c2.user_id
HAVING count(*) >= :in_common) as t1
JOIN users
ON t1.user_id = users.id
JOIN connections
ON connections.user_id = t1.user_id
JOIN words_en
ON words_en.id = connections.word_id
WHERE EXISTS(SELECT *
FROM connections
WHERE connections.user_id = :user_id
AND connections.word_id = words_en.id)
GROUP BY users.id, users.name_surname, users.avatar, t1.qty
ORDER BY t1.qty DESC, users.name_surname ASC
```
SQL fiddle: <http://www.sqlfiddle.com/#!2/c79a6/9> | May I suggest a different way to look at your problem?
You might look into a similarity metric, such as [Cosine Similarity](http://en.wikipedia.org/wiki/Cosine_similarity) which will give you a much better measure of similarity between your users based on words. To understand it for your case, consider the following example. You have a vector of words `A = {house, car, burger, sun}` for a user `u1` and another vector `B = {flat, car, pizza, burger, cloud}` for user `u2`.
Given these individual vectors, you first construct a combined one that lines them up, so you can record for each user whether each word appears in their vector or not. Like so:
```
| -- | house | car | burger | sun | flat | pizza | cloud |
----------------------------------------------------------
| A | 1 | 1 | 1 | 1 | 0 | 0 | 0 |
----------------------------------------------------------
| B | 0 | 1 | 1 | 0 | 1 | 1 | 1 |
----------------------------------------------------------
```
Now you have a vector for each user where each position corresponds to the value of each word to each user. Here it represents a simple count but you can improve it using different metrics based on word frequency if that applies to your case. Take a look at the most common one, called [tf-idf](http://en.wikipedia.org/wiki/Tf%E2%80%93idf).
Having these two vectors, you can compute the `cosine similarity` between them as follows:

This basically computes the sum of the products of each position of the vectors above, divided by the product of their magnitudes. In our example that is roughly 0.45, in a range that can vary between 0 and 1; the higher the value, the more similar the two vectors are.
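As a quick check of the example (a standalone Python sketch; the vectors are the ones from the text, and the exact value works out to 1/√5 ≈ 0.447):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two sets treated as 0/1 word vectors."""
    dot = len(a & b)                    # each shared word contributes 1*1
    norm = sqrt(len(a)) * sqrt(len(b))  # magnitudes of the binary vectors
    return dot / norm if norm else 0.0

A = {"house", "car", "burger", "sun"}
B = {"flat", "car", "pizza", "burger", "cloud"}

sim = cosine_similarity(A, B)
print(round(sim, 3))  # 0.447
```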
If you choose to go this way, you don't need to do this calculation in the database. You compute the similarity in your code and just save the result in the database. There are several libraries that can do that for you. In Python, take a look at the [numpy library](http://www.numpy.org/). In Java, look at [Weka](http://www.cs.waikato.ac.nz/ml/weka/) and/or [Apache Lucene](http://lucene.apache.org/core/). | OK, so the issue is "users in common" being defined as an asymmetric relation. To fix it, let's assume the in\_common percentage threshold is checked against the user with the fewest words.
Try this query ([fiddle](http://www.sqlfiddle.com/#!2/1556b/25)); it gives you the full list of users with at least 1 word in common, marking friendship suggestions:
```
SELECT user1_id, user2_id, user1_wc, user2_wc,
count(*) AS common_wc, count(*) / least(user1_wc, user2_wc) AS common_wc_pct,
CASE WHEN count(*) / least(user1_wc, user2_wc) > 0.7 THEN 1 ELSE 0 END AS frienship_suggestion
FROM (
SELECT u1.user_id AS user1_id, u2.user_id AS user2_id,
u1.word_count AS user1_wc, u2.word_count AS user2_wc,
c1.word_id AS word1_id, c2.word_id AS word2_id
FROM connections c1
JOIN connections c2 ON (c1.user_id < c2.user_id AND c1.word_id = c2.word_id)
JOIN (SELECT user_id, count(*) AS word_count
FROM connections
GROUP BY user_id) u1 ON (c1.user_id = u1.user_id)
JOIN (SELECT user_id, count(*) AS word_count
FROM connections
GROUP BY user_id) u2 ON (c2.user_id = u2.user_id)
) AS shared_words
GROUP BY user1_id, user2_id, user1_wc, user2_wc;
```
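The least-words idea above can be sanity-checked with an in-memory SQLite database (a reduced sketch with made-up data: only the connections table, and only the percentage column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE connections (user_id INTEGER, word_id INTEGER);
    INSERT INTO connections VALUES
        (1,1),(1,2),(1,3),(1,4),   -- user 1 knows 4 words
        (2,1),(2,2),(2,3),         -- user 2 shares 3 of them
        (3,5);                     -- user 3 shares none
""")

rows = conn.execute("""
    SELECT user1_id, user2_id,
           COUNT(*) * 1.0 / MIN(user1_wc, user2_wc) AS common_pct
    FROM (
        SELECT c1.user_id AS user1_id, c2.user_id AS user2_id,
               u1.wc AS user1_wc, u2.wc AS user2_wc
        FROM connections c1
        JOIN connections c2
          ON c1.user_id < c2.user_id AND c1.word_id = c2.word_id
        JOIN (SELECT user_id, COUNT(*) AS wc
              FROM connections GROUP BY user_id) u1 ON c1.user_id = u1.user_id
        JOIN (SELECT user_id, COUNT(*) AS wc
              FROM connections GROUP BY user_id) u2 ON c2.user_id = u2.user_id
    ) AS pairs
    GROUP BY user1_id, user2_id
""").fetchall()

print(rows)  # users 1 and 2 share 100% of the smaller word list
```

Pairs with no common word (user 3 here) simply produce no row, so only genuine candidates are ranked.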
Friendship\_suggestion is in the SELECT for clarity; you probably need to filter by it, so you may just move it to the HAVING clause. | Make HAVING count(\*) percentage based - complicated query with percentage calculations | [
"",
"mysql",
"sql",
""
] |
I have two tables of results: a mid-term result table and an annual result table.
**Mid-term result table:**

**Annual result table:**

I want to join them and find the total marks of each student for each subject.
**Wanted: a result table with the total marks after combining the mid-term and annual results.**
**my code so far:**
```
mysql_query("drop view if exists result_view");
mysql_query("create view result_view as
select * from mid_term
UNION ALL
select * from annual") or die(mysql_error());
//now find the total marks of students for each subject
mysql_query("select id,student_id,subject_id,result_id,year,sum(mark) as mark
from result_view ");
```
but this does not work. Any help or advice, please! | I don't think the id field makes much sense in the result, so I omitted it.
```
SELECT Student_id,
Subject_id,
Result_id,
year,
SUM(mark)
FROM result_view
GROUP BY Student_id,
Subject_id,
Result_id,
year
``` | You need GROUP BY:
```
select id, student_id, subject_id,result_id,year,sum(mark) as TotalMark
from result_view
GROUP BY id, student_id, subject_id, result_id, year
```
See my [SqlFiddle](http://sqlfiddle.com/#!2/d89cf/1) | joining two tables and finding the sum of marks for each student | [
"",
"mysql",
"sql",
"database-design",
""
] |
I have a problem with a SQL query. The problem is with the `IN ....`, but I don't know how to correct it. This is the query:
```
SELECT *
FROM Reports, Games, Developers
WHERE Game = SpelID
AND Developer = IDDvl
AND Land IN[‘Japan’,‘USA’,‘UK’,‘Indië’,‘Duitsland’,‘Zweden’]
```
This is the error message:
```
An unhandled exception of type 'System.Data.OleDb.OleDbException'
occurred in System.Data.dll
In operator without () in query expression 'Game = SpelID AND Developer = IDDvl
AND Land IN [‘Japan’, ‘USA’, ‘UK’, ‘Indië’, ‘Duitsland’, ‘Zweden’]'.
``` | ```
dagoederen = New OleDb.OleDbDataAdapter("SELECT * FROM Reports , Games,
Developers WHERE Game = SpelID AND Developer = IDDvl AND Land IN (‘Japan’,
‘USA’, ‘UK’, ‘Indië’, ‘Duitsland’, ‘Zweden’)", connectie)
```
You need to replace `[ ]` with `()` after `IN`. | The answer is specified in the error provided: "In operator without () in query expression". Replace the "[]" with "()" for the IN clause. | Visual Studio SQL query not working | [
"",
"sql",
"visual-studio",
""
] |
I am building a leaderboard for some of my online games. Here is what I need to do with the data:
* Get rank of a player for a given game across multiple time frame (today, last week, all time, etc.)
* Get paginated ranking (e.g. top score for last 24 hrs., get players between rank 25 and 50, get rank or a single user)
I came up with the following table definition and index, and I have a couple of questions.
**Considering my scenarios, do I have a good primary key?** The reason why I have a clustered key across gameId, playerName and score is simply that I want to make sure that all data for a given game is in the same area and that score is already sorted. Most of the time I will display the data in descending order of score (+ updatedDateTime for ties) for a given gameId. Is this the right strategy? In other words, I want to make sure that I can run my queries to get the rank of my players as fast as possible.
```
CREATE TABLE score (
[gameId] [smallint] NOT NULL,
[playerName] [nvarchar](50) NOT NULL,
[score] [int] NOT NULL,
[createdDateTime] [datetime2](3) NOT NULL,
[updatedDateTime] [datetime2](3) NOT NULL,
PRIMARY KEY CLUSTERED ([gameId] ASC, [playerName] ASC, [score] DESC, [updatedDateTime] ASC)
)
CREATE NONCLUSTERED INDEX [Score_Idx] ON score ([gameId] ASC, [score] DESC, [updatedDateTime] ASC) INCLUDE ([playerName])
```
Below is the first iteration of the query I will be using to get the rank of my players. However, I am a bit disappointed by the execution plan (see below). **Why does SQL need to sort?** The additional sort seems to come from the RANK function. But isn't my data already sorted in descending order (based on the clustered key of the score table)? I am also wondering if I should normalize my table a bit more and move the PlayerName column out into a Player table. I originally decided to keep everything in the same table to minimize the number of joins.
```
DECLARE @GameId AS INT = 0
DECLARE @From AS DATETIME2(3) = '2013-10-01'
SELECT DENSE_RANK() OVER (ORDER BY Score DESC), s.PlayerName, s.Score, s.CountryCode, s.updatedDateTime
FROM [mrgleaderboard].[score] s
WHERE s.GameId = @GameId
AND (s.UpdatedDateTime >= @From OR @From IS NULL)
```

Thank you for the help! | [Updated]
**Primary key is not good**
Your unique entity is [GameID] + [PlayerName], and the composite clustered index is more than 120 bytes wide because of the nvarchar column. See the answer by @marc\_s in the related topic [SQL Server - Clustered index design for dictionary](https://stackoverflow.com/questions/3849068/sql-server-clustered-index-design-for-dictionary?rq=1)
**Your table schema does not match of your requirements to time periods**
Ex.: I earn a score of 300 on Wednesday and this score is stored on the leaderboard. The next day I earn a score of 250; it is not recorded on the leaderboard, so you get no results if you query that day's leaderboard.
For complete information you can use a historical table of games-played scores, but it can be very expensive.
```
CREATE TABLE GameLog (
[id] int NOT NULL IDENTITY
CONSTRAINT [PK_GameLog] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[score] int NOT NULL,
[createdDateTime] datetime2(3) NOT NULL)
```
Here are solutions to accelerate it related with the aggregation:
* Indexed view on historical table (see [post](https://stackoverflow.com/a/19980350/911976) by @Twinkles).
You need 3 indexed views for the 3 time periods. Cons: potentially huge historical tables plus 3 indexed views, no way to remove the "old" periods from the table, and a performance cost when saving a score.
* Asynchronous leaderboard
Scores are saved in the historical table. An SQL job/"worker" (or several) runs on a schedule (once per minute?), sorts the historical table and populates the leaderboard table (3 tables for the 3 time periods, or one table with a time-period key) with the precalculated rank of each user. This table can also be denormalized (holding score, datetime, PlayerName and so on). Pros: fast reading (without sorting), fast score saves, any time periods, flexible logic and flexible schedules. Cons: a user who has just finished a game does not immediately find himself on the leaderboard.
* Preaggregated leaderboard
When recording the results of a game session, do the pre-aggregation. In your case something like `UPDATE [Leaderboard] SET score = @CurrentScore WHERE @CurrentScore > MAX (score) AND ...` for the player / game id, but that alone only maintains the "all time" leaderboard. The scheme might look like this:
```
CREATE TABLE [Leaderboard] (
[id] int NOT NULL IDENTITY
CONSTRAINT [PK_Leaderboard] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[timePeriod] tinyint NOT NULL, -- 0 -all time, 1-monthly, 2 -weekly, 3 -daily
[timePeriodFrom] date NOT NULL, -- '1900-01-01' for all time, '2013-11-01' for monthly, etc.
[score] int NOT NULL,
[createdDateTime] datetime2(3) NOT NULL
)
```
```
playerId timePeriod timePeriodFrom Score
----------------------------------------------
1 0 1900-01-01 300
...
1 1 2013-10-01 150
1 1 2013-11-01 300
...
1 2 2013-10-07 150
1 2 2013-11-18 300
...
1 3 2013-11-19 300
1 3 2013-11-20 250
...
```
So you have to update the score for all 3 time periods. Also, as you can see, the leaderboard will contain "old" periods, such as the October monthly period; you may have to delete them if you do not need those statistics. Pros: does not need a historical table. Cons: a complicated procedure for storing the result, the leaderboard needs maintenance, and the query requires sorting and a JOIN.
```
CREATE TABLE [Player] (
[id] int NOT NULL IDENTITY CONSTRAINT [PK_Player] PRIMARY KEY CLUSTERED,
[playerName] nvarchar(50) NOT NULL CONSTRAINT [UQ_Player_playerName] UNIQUE NONCLUSTERED)
CREATE TABLE [Leaderboard] (
[id] int NOT NULL IDENTITY CONSTRAINT [PK_Leaderboard] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[timePeriod] tinyint NOT NULL, -- 0 -all time, 1-monthly, 2 -weekly, 3 -daily
[timePeriodFrom] date NOT NULL, -- '1900-01-01' for all time, '2013-11-01' for monthly, etc.
[score] int NOT NULL,
[createdDateTime] datetime2(3)
)
CREATE UNIQUE NONCLUSTERED INDEX [UQ_Leaderboard_gameId_playerId_timePeriod_timePeriodFrom] ON [Leaderboard] ([gameId] ASC, [playerId] ASC, [timePeriod] ASC, [timePeriodFrom] ASC)
CREATE NONCLUSTERED INDEX [IX_Leaderboard_gameId_timePeriod_timePeriodFrom_Score] ON [Leaderboard] ([gameId] ASC, [timePeriod] ASC, [timePeriodFrom] ASC, [score] ASC)
GO
-- Generate test data
-- Generate 500K unique players
;WITH digits (d) AS (SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION
SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 0)
INSERT INTO Player (playerName)
SELECT TOP (500000) LEFT(CAST(NEWID() as nvarchar(50)), 20 + (ABS(CHECKSUM(NEWID())) & 15)) as Name
FROM digits CROSS JOIN digits ii CROSS JOIN digits iii CROSS JOIN digits iv CROSS JOIN digits v CROSS JOIN digits vi
-- Random score 500K players * 4 games = 2M rows
INSERT INTO [Leaderboard] (
[gameId],[playerId],[timePeriod],[timePeriodFrom],[score],[createdDateTime])
SELECT GameID, Player.id,ABS(CHECKSUM(NEWID())) & 3 as [timePeriod], DATEADD(MILLISECOND, CHECKSUM(NEWID()),GETDATE()) as Updated, ABS(CHECKSUM(NEWID())) & 65535 as score
, DATEADD(MILLISECOND, CHECKSUM(NEWID()),GETDATE()) as Created
FROM ( SELECT 1 as GameID UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4) as Game
CROSS JOIN Player
ORDER BY NEWID()
UPDATE [Leaderboard] SET [timePeriodFrom]='19000101' WHERE [timePeriod] = 0
GO
DECLARE @From date = '19000101'--'20131108'
,@GameID int = 3
,@timePeriod tinyint = 0
-- Get paginated ranking
;With Lb as (
SELECT
DENSE_RANK() OVER (ORDER BY Score DESC) as Rnk
,Score, createdDateTime, playerId
FROM [Leaderboard]
WHERE GameId = @GameId
AND [timePeriod] = @timePeriod
AND [timePeriodFrom] = @From)
SELECT lb.rnk,lb.Score, lb.createdDateTime, lb.playerId, Player.playerName
FROM Lb INNER JOIN Player ON lb.playerId = Player.id
ORDER BY rnk OFFSET 75 ROWS FETCH NEXT 25 ROWS ONLY;
-- Get rank of a player for a given game
SELECT (SELECT COUNT(DISTINCT rnk.score)
FROM [Leaderboard] as rnk
WHERE rnk.GameId = @GameId
AND rnk.[timePeriod] = @timePeriod
AND rnk.[timePeriodFrom] = @From
AND rnk.score >= [Leaderboard].score) as rnk
,[Leaderboard].Score, [Leaderboard].createdDateTime, [Leaderboard].playerId, Player.playerName
FROM [Leaderboard] INNER JOIN Player ON [Leaderboard].playerId = Player.id
where [Leaderboard].GameId = @GameId
AND [Leaderboard].[timePeriod] = @timePeriod
AND [Leaderboard].[timePeriodFrom] = @From
and Player.playerName = N'785DDBBB-3000-4730-B'
GO
```
This is only an example to present the ideas; it can be optimized. For example, combining the columns GameID, TimePeriod and TimePeriodFrom into one column through a dictionary table would make the index more effective.
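The paginated-ranking pattern used above can be illustrated outside SQL Server with SQLite's window functions (assuming SQLite ≥ 3.25; the table and data here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE leaderboard (player TEXT, score INT);
    INSERT INTO leaderboard VALUES
        ('alice', 300), ('bob', 250), ('carol', 300), ('dave', 100);
""")

rows = conn.execute("""
    SELECT DENSE_RANK() OVER (ORDER BY score DESC) AS rnk, player, score
    FROM leaderboard
    ORDER BY rnk, player
    LIMIT 3 OFFSET 0      -- stands in for OFFSET ... FETCH NEXT ... ONLY
""").fetchall()

print(rows)  # tied scores share a rank, and pagination is just LIMIT/OFFSET
```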
P.S. Sorry for my English. Feel free to fix grammatical or spelling errors | You could look into [indexed views](http://msdn.microsoft.com/en-us/library/dd171921%28SQL.100%29.aspx) to create scoreboards for common time ranges (today, this week/month/year, all-time). | Leaderboard design using SQL Server | [
"",
"sql",
"sql-server",
"database",
"database-design",
"azure-sql-database",
""
] |
I have exported my database to a CSV file and the timestamp now looks like this:
`1384204028`
How can I convert it to the typical format, for example `2013-01-19 03:14:07` ? | Use [`FROM_UNIXTIME()`](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_from-unixtime)
```
SELECT FROM_UNIXTIME(1384204028);
```
or (equivalent but with parameter to control the format):
```
SELECT FROM_UNIXTIME(1384204028, '%Y-%m-%d %H:%i:%s');
``` | ```
SELECT DATE_FORMAT(NOW(), '%Y-%m-%d %H:%i:%s')
``` | MySQL Timestamp format | [
"",
"mysql",
"sql",
"date",
"timestamp",
"date-format",
""
] |
I have been practicing SQL queries on [W3schools](http://goo.gl/pL6rcn). I'm practicing LIKE in the WHERE clause. The following query works on the site but not on the SQL Server 2008 R2 instance installed on my machine.
```
SELECT * FROM Customers
WHERE CustomerName LIKE '[!uvwxyz]%';
```
This should return customer names starting with anything other than U, V, W, X, Y or Z. But when I run the same query on my local machine, the " [! ] " syntax does not seem to work: customer names starting with those letters also end up in the result.
What mistake I am doing? | Try this:-
In `SQL-SERVER`, to list all customers whose names don't start with `[uvwxyz]`:
```
SELECT * FROM Customers
WHERE CustomerName NOT LIKE '[u-z]%';
``` | Please try this query instead.
This will list out all the records with `CustomerName` not starting with `UVWXYZ`:
```
SELECT *
FROM Customers
WHERE CustomerName NOT LIKE 'uvwxyz%';
``` | SQL query showing 2 different results | [
"",
"sql",
"sql-server-2008",
""
] |
In an SQL statement, I am trying to divide two integers (integer 1 is "abc" in my code below, integer 2 is "xyz" in my code), and get a result as a decimal (def in my code below). The decimal result should have a leading 1 or 0 only, followed by a decimal and 3 numbers after the decimal. However my code keeps returning a straight 0 with no decimals.
```
SELECT CONVERT(DECIMAL(4,3), abc/xyz) AS def
```
This code results in "0", when what I want is something like "0.001" or "0.963". I believe that it is still looking at "def" as an integer, and not as a decimal.
I have also tried using CAST on abc and xyz but it returns the same thing. I have also tried the following code:
```
SELECT CONVERT(DECIMAL(4,3), abc/xyz) AS CONVERT(DECIMAL(4,3)def)
```
But this gives me an error, saying there is a syntax error near the word "CONVERT". | Convert to decimal before the divide, not after. The outer convert is just for the answer's format.
```
SELECT
CONVERT( DECIMAL(4,3)
, ( CONVERT(DECIMAL(10,3), abc) / CONVERT(DECIMAL(10,3), xyz) )
) AS def
``` | This should do it. Also add divide by zero check.
```
SELECT CONVERT(DECIMAL(4,3), abc) / NULLIF(xyz,0)
``` | SQL divide two integers and get a decimal value error | [
"",
"sql",
"sql-server",
"asp-classic",
""
] |
I have a database in Microsoft SQL Server and I am using Microsoft SQL Server Management Studio.
I have an option to insert the query result to the file, although the results of the query are not separated by any special signs.
It looks like this
```
select * from table_name
```
Output:
```
18 182 3386 NULL
18 790 12191 NULL
```
In File:
```
18 182 3386 NULL
18 790 12191 NULL
```
Is there any possibility to modify query so after every record it will put a special char, like this:
In File:
```
18; 182; 3386; NULL;
18; 790; 12191; NULL;
```
It will then be easier for me to copy this data into other programs. | In SSMS:
`Tools` > `Options` > `Query Results` > `Results To Text`
The top option there is `Output Format`; you can set a custom delimiter there. | 
Click here on your SSMS before you execute your query. | Microsoft SQL Server Management Studio - query result as text | [
"",
"sql",
"sql-server",
""
] |
I am coding a dashboard, and I need to pull some data out of Microsoft SQL Server.
For a simple example, I have three tables, one master Category table, and two tables containing values linked to the Category table via a primary/foreign key relationship (Blue and Green value tables).
Using Microsoft SQL Server (T-SQL), I wish to total (sum) the values in the two value tables, grouped by the common category found in the category table.
Category Table
```
CategoryID (PK) | CategoryName
1 | Square
2 | Circle
```
Blue Table
```
BlueID (PK) | CategoryID (FK) | BlueValue | BlueMonth | BlueYear
1 | 1 | 10 | 6 | 2012
2 | 1 | 20 | 12 | 2012
3 | 2 | 5 | 6 | 2012
4 | 2 | 9 | 12 | 2012
5 | 1 | 12 | 6 | 2013
6 | 1 | 21 | 12 | 2013
7 | 2 | 4 | 6 | 2013
8 | 2 | 8 | 12 | 2013
```
Green Table
```
GreenID (PK)| CategoryID (FK) | GreenValue| GreenMonth| GreenYear
1 | 1 | 3 | 6 | 2012
2 | 1 | 6 | 12 | 2012
3 | 2 | 2 | 6 | 2012
4 | 2 | 7 | 12 | 2012
5 | 1 | 2 | 6 | 2013
6 | 1 | 5 | 12 | 2013
7 | 2 | 4 | 6 | 2013
8 | 2 | 8 | 12 | 2013
```
If I use the following SQL, I get the results I expect.
```
SELECT
[Category].[CategoryName],
SUM([Green].[GreenValue]) AS [GreenTotal]
FROM
[Category]
LEFT JOIN
[Green] ON [Category].[CategoryID] = [Green].[CategoryID]
GROUP BY
[Category].[CategoryName]
```
Results:
```
CategoryName | GreenTotal
Square | 16
Circle | 21
```
However, if I add the Blue table, to try and fetch a total for BlueValue as well, my obviously incorrect T-SQL gives me unexpected results.
```
SELECT
[Category].[CategoryName],
SUM([Green].[GreenValue]) AS [GreenTotal],
SUM([Blue].[BlueValue]) AS [BlueTotal]
FROM
[Category]
LEFT JOIN
[Green] ON [Category].[CategoryID] = [Green].[CategoryID]
LEFT JOIN
[Blue] ON [Category].[CategoryID] = [Blue].[CategoryID]
GROUP BY
[Category].[CategoryName]
```
Incorrect Results:
```
CategoryName | GreenTotal | BlueTotal
Square | 64 | 252
Circle | 84 | 104
```
The results all seem to be out by a factor of 4, which is the total number of rows in each value table for each category.
I am aiming to see the following results:
```
CategoryName | GreenTotal | BlueTotal
Square | 16 | 63
Circle | 21 | 26
```
I would be over the moon if someone could tell me what on earth I am doing wrong?
Thanks,
Mark. | What you're getting is a Cartesian product. You can see the effects of this by removing the grouping and looking through the data.
For example, if your green table contained 2 rows and your blue table contained 4, your join would return a total of 8 records.
To resolve the problem, well, you're nearly there. You've got all the right pieces, just not put them together quite right.
Assuming the following query returns the correct results for green:
```
SELECT CategoryID
, Sum(GreenValue) As GreenTotal
FROM Green
GROUP
BY CategoryID
```
The results for blue can be retrieved by following the same method:
```
SELECT CategoryID
, Sum(BlueValue) As BlueTotal
FROM Blue
GROUP
BY CategoryID
```
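Both summary queries can be sanity-checked against the question's data with an in-memory SQLite database (a sketch, not the exact T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE green (CategoryID INT, GreenValue INT);
    CREATE TABLE blue  (CategoryID INT, BlueValue INT);
    -- the rows from the question's Green and Blue tables
    INSERT INTO green VALUES (1,3),(1,6),(2,2),(2,7),(1,2),(1,5),(2,4),(2,8);
    INSERT INTO blue  VALUES (1,10),(1,20),(2,5),(2,9),(1,12),(1,21),(2,4),(2,8);
""")

green_totals = conn.execute("""
    SELECT CategoryID, SUM(GreenValue)
    FROM green GROUP BY CategoryID ORDER BY CategoryID
""").fetchall()
blue_totals = conn.execute("""
    SELECT CategoryID, SUM(BlueValue)
    FROM blue GROUP BY CategoryID ORDER BY CategoryID
""").fetchall()

print(green_totals)  # matches the expected 16 and 21
print(blue_totals)   # matches the expected 63 and 26
```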
Now that we have two distinct results that are correct, we should join *these* results to our category table:
```
SELECT Category.CategoryName
, GreenSummary.GreenTotal
, BlueSummary.BlueTotal
FROM Category
LEFT
JOIN (
SELECT CategoryID
, Sum(GreenValue) As GreenTotal
FROM Green
GROUP
BY CategoryID
) As GreenSummary
ON GreenSummary.CategoryID = Category.CategoryID
LEFT
JOIN (
SELECT CategoryID
, Sum(BlueValue) As BlueTotal
FROM Blue
GROUP
BY CategoryID
) As BlueSummary
ON BlueSummary.CategoryID = Category.CategoryID
``` | Something like this would be best done with an APPLY in my opinion. Fast performance-wise, simple to use, and easy to control in case of variations in the query.
IE:
```
SELECT C.[CategoryName], G.[GreenTotal], B.[BlueTotal]
FROM [Category] C
OUTER APPLY (SELECT SUM([GreenValue]) AS [GreenTotal] FROM [Green] WHERE [CategoryID] = C.CategoryID) G
OUTER APPLY (SELECT SUM([BlueValue]) AS [BlueTotal] FROM [Blue] WHERE [CategoryID] = C.CategoryID) B
``` | Using SQL, what is the correct way to total columns from multiple tables into common groups? | [
"",
"sql",
"sql-server",
"group-by",
"sum",
"multiple-tables",
""
] |
I have a column `store_name` (`varchar`). In that column I have entries like `prime sport`, `best buy`... with a space. But when a user types a concatenated string like `primesport` without the space, I need to show the result `prime sport`. How can I achieve this? Please help me | ```
SELECT *
FROM TABLE
WHERE replace(store_name, ' ', '') LIKE '%'+@SEARCH+'%' OR STORE_NAME LIKE '%'+@SEARCH +'%'
``` | Have you tried using `replace()`
You can replace the white space in the query then use `like`
`SELECT * FROM table WHERE replace(store_name, ' ', '') LIKE '%primesport%'`
It will work for entries like 'prime sport' when querying with 'primesport'.
Or you can use regex. | SQL fetch results by concatenating words in column | [
"",
"sql",
"string-search",
""
] |
I have the following table in SQL with lines of an order as follows:
```
RowId OrderId Type Text
----------------------------------------
1 1 5 "Sometext"
2 1 5 "Sometext"
3 2 4 "Sometext"
4 3 5 "Sometext"
5 2 4 "Sometext"
6 1 3 "Sometext"
```
**Each order cannot have a duplicate type, but can have multiple different types.**
Rows 1 and 2 are duplicates for Order **1**, but row 6 is fine.
Rows 3 and 5 are duplicates for Order **2**.
I need to delete all of the duplicated data, so in this case I need to delete **row 2 and row 5**.
What is the best query to delete the data? Or even just return a list of **RowID**'s that contain duplicates to be deleted (or the opposite, a list of **RowID**'s to be kept)?
Thanks. | Try a simple approach:
```
DELETE FROM t
WHERE rowid NOT IN (
SELECT min(rowid) FROM t
GROUP BY orderid, type
)
```
Fiddle [here](http://sqlfiddle.com/#!6/1c2c5/1).
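The behaviour on the question's six rows can be verified with SQLite (the Text column is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (RowId INT, OrderId INT, Type INT);
    INSERT INTO t VALUES (1,1,5),(2,1,5),(3,2,4),(4,3,5),(5,2,4),(6,1,3);
""")

# keep only the lowest RowId per (OrderId, Type) group
conn.execute("""
    DELETE FROM t
    WHERE RowId NOT IN (SELECT MIN(RowId) FROM t GROUP BY OrderId, Type)
""")

kept = [r[0] for r in conn.execute("SELECT RowId FROM t ORDER BY RowId")]
print(kept)  # rows 2 and 5 were the duplicates and are gone
```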
Note that it seems you want to keep the lowest `rowid` when it is repeated. That's why I'm keeping the `min`. | ```
;with cte as
(
Select Row_Number() Over(Partition BY ORDERID,TYPE ORDER BY RowId) as Rows,
RowId , OrderId , Type , Text from TableName
)
Select RowId , OrderId , Type , Text from cte where Rows>1
```
[**Sql Fiddle Demo**](http://www.sqlfiddle.com/#!3/e49d1/7) | How can I remove duplicates in SQL but keep one copy? | [
"",
"sql",
"sql-server",
""
] |
I have a column which has data in this way,
**Table1**
**FullName**
```
Lastname1, Firstname1
Lastname2, Firstname2
Lastname3, Firstname3
```
I want the result to be,
```
Firstname1 Lastname1
Firstname2 Lastname2
Firstname3 Lastname3
```
When I use this query
```
SELECT (substring(Name,charindex(',',Name)+1 ,250)+ SUBSTRING(Name,0, CHARINDEX(',',Name))) from @table1
```
the result I am getting is
Firstname1 ...
Firstname2 ...
Firstname3 ...
Please help | Working example: <http://sqlfiddle.com/#!3/0fc0f/16/0>
```
Select
right(name,len(name)-charindex(', ',name)) + ' ' +
left(name,charindex(', ',name)-1)
from @table1;
``` | Provided the name doesn't contain a '.', you can use
```
parsename ( replace ( name, ',','.'),1) + ' ' + parsename ( replace ( name, ',','.'),2)
``` | SQL Divide a string when a comma appears and join it back in reverse order | [
"",
"sql",
"sql-server",
""
] |
I have two tables:
**x\_community\_invites** :
```
id community_id from to registered message seen accepted updated
```
and
**x\_communities** :
```
id name tags registered updated
```
With the query:
```
$query = sprintf("SELECT x_commmunities.* FROM x_commmunities, x_community_invites WHERE x_community_invites.to = '%s' AND x_community_invites.accepted = '%d'", $id, 1);
```
My problem is that the query I run returns all the rows from the x\_communities table.
> Example scenario:
>
> There are 2 communities in the x\_communities table:
>
> * id's - 1 and 2 name
> * 1stCommunity and 2ndCommunity
>
> There are 3 community invites in the x\_community\_invites table:
>
> * All different id's
> * 2 with the same community id as 1st Community, both to fields accepted
> * 1 with the same community id as 2nd Community, to = profile id and accepted = 1
>
> But with the query, it grabs all of the communities ids and names, for
> some reason unknown to me.
I want to return the community id and name where the x\_communities\_invites.to field is the user id and the x\_communities\_invites.accepted field is 1.
Also, what sort of query is the above query? Some sort of join, I can't find a similar query online with similar syntax.
Can you help me out here?
What am I doing wrong here? | You haven't linked tables. You should use JOIN:
```
SELECT x_commmunities.*
FROM x_commmunities
JOIN x_community_invites on x_commmunities.id=x_community_invites.community_id
WHERE x_community_invites.to = '%s' AND x_community_invites.accepted = '%d'
``` | It is an implicit inner join, but the condition that connects the two tables is missing.
```
SELECT x_commmunities.id, x_commmunities.name, COUNT(x_community_invites.*) AS invites
FROM x_commmunities, x_community_invites
WHERE x_commmunities.id = x_community_invites.community_id
AND x_community_invites.to = 'some_id_value'
AND x_community_invites.accepted = '1'
GROUP BY x_commmunities.id, x_commmunities.name
```
This could result in duplicates (multiple invites for the same community). GROUP BY aggregates the records by the provided fields. | Why isn't this mysql query working? | [
"",
"mysql",
"sql",
"left-join",
""
] |
I want to add a descriptive column at the end of my query results. For example
I have
```
SELECT sum(amount) as Balance FROM tbDebits WHERE CustomerID =@CustomerID;
```
Now if the Balance is positive I want to add another column called 'Description'
in my query results describing each result as positive or negative.
Any ideas?
Here is my original query:
```
SELECT t.CustomerID, c.name, c.Surname, (SELECT (
(SELECT ISNULL(SUM(cashout),0)-
((select ISNULL(sum(Buyin),0) from [Transaction] where TYPE='Credit' and CustomerID=t.CustomerID )
+ (select ISNULL(sum(Paid),0) from [Transaction] where TYPE='Credit' and CustomerID=t.CustomerID ))
FROM [transaction]
WHERE TYPE='Credit'
AND CustomerID=t.CustomerID
)
-------------------
+
(
(SELECT ISNULL(SUM(cashout),0)
- (select ISNULL(sum(Paid),0) from [Transaction] where TYPE='Debit' AND Cashout>buyin and CustomerID=t.CustomerID )
+ (select ISNULL(sum(Cashout),0)- (select ISNULL(sum(PAID),0) from [Transaction] where TYPE='Debit' AND Cashout<buyin and CustomerID=t.CustomerID )
from [Transaction] where TYPE='Debit' AND Cashout<Buyin and CustomerID=t.CustomerID )
+ (select ISNULL(sum(Cashout),0)- (select ISNULL(sum(PAID),0) from [Transaction] where TYPE='Debit' AND Cashout=buyin and CustomerID=t.CustomerID )
from [Transaction] where TYPE='Debit' AND Cashout=Buyin and CustomerID=t.CustomerID )
FROM [Transaction]
WHERE CustomerID=t.CustomerID
AND TYPE='Debit'
AND Cashout>buyin )
)
--------------
-
(
select ISNULL(sum(Paid),0)
from [Transaction]
where type='Debit Settlement'
AND CustomerID =t.CustomerID
)
--------------
+
(
select ISNULL(sum(Paid),0)
from [Transaction]
where type='Credit Settlement'
AND CustomerID =t.CustomerID
)
)) as Balance FROM [Transaction] as t
inner JOIN Customer AS c
on t.CustomerID = c.CustomerID
GROUP BY t.CustomerID, c.name, c.Surname
``` | I like to use OUTER APPLY when I need to manipulate the results of a SELECT.
I hope it is useful.
```
SELECT t.CustomerID,
c.name,
c.Surname,
Balance.Custom,
case
when sum(Balance.Custom)>0 then 'positive'
when sum(Balance.Custom)<0 then 'negative'
else 'zero'
end as [description]
FROM [Transaction] AS t
INNER JOIN Customer AS c
ON t.CustomerID = c.CustomerID
OUTER APPLY (SELECT ( (SELECT Isnull(Sum(cashout), 0)
- ( (SELECT Isnull(Sum(Buyin), 0)
FROM [Transaction]
WHERE TYPE = 'Credit'
AND CustomerID = t.CustomerID)
+ (SELECT Isnull(Sum(Paid), 0)
FROM [Transaction]
WHERE TYPE = 'Credit'
AND CustomerID = t.CustomerID) )
FROM [transaction]
WHERE TYPE = 'Credit'
AND CustomerID = t.CustomerID)
-------------------
+ ((SELECT Isnull(Sum(cashout), 0) - (SELECT Isnull(Sum(Paid), 0)
FROM [Transaction]
WHERE TYPE = 'Debit'
AND Cashout > buyin
AND CustomerID = t.CustomerID) + (SELECT Isnull(Sum(Cashout), 0)
- (SELECT Isnull(Sum(PAID), 0)
FROM [Transaction]
WHERE TYPE = 'Debit'
AND Cashout < buyin
AND CustomerID = t.CustomerID)
FROM [Transaction]
WHERE TYPE = 'Debit'
AND Cashout < Buyin
AND CustomerID = t.CustomerID) + (SELECT Isnull(Sum(Cashout), 0)
- (SELECT Isnull(Sum(PAID), 0)
FROM [Transaction]
WHERE TYPE = 'Debit'
AND Cashout = buyin
AND CustomerID = t.CustomerID)
FROM [Transaction]
WHERE TYPE = 'Debit'
AND Cashout = Buyin
AND CustomerID = t.CustomerID)
FROM [Transaction]
WHERE CustomerID = t.CustomerID
AND TYPE = 'Debit'
AND Cashout > buyin))
- (SELECT Isnull(Sum(Paid), 0)
FROM [Transaction]
WHERE type = 'Debit Settlement'
AND CustomerID = t.CustomerID)
+ (SELECT Isnull(Sum(Paid), 0)
FROM [Transaction]
WHERE type = 'Credit Settlement'
AND CustomerID = t.CustomerID) ) AS Custom) AS Balance
GROUP BY t.CustomerID,
c.name,
c.Surname
``` | ```
SELECT
sum(amount) as balance,
case
when sum(amount)>0 then 'positive'
when sum(amount)<0 then 'negative'
else 'zero'
end as description
FROM tbDebits
WHERE CustomerID=@CustomerID;
``` | Adding a descriptive/commenting column on query results | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have 2 tables, named A and B.
**Table A has**
```
+------+---------------+
|MANUID|LMD |
+------+---------------+
|1,001 |10/18/2013 7:13|
+------+---------------+
|1,001 |10/18/2013 7:13|
+------+---------------+
|1,001 |10/18/2013 7:13|
+------+---------------+
|1,001 |10/18/2013 7:13|
+------+---------------+
|1,001 |10/18/2013 7:13|
+------+---------------+
|1,001 |10/18/2013 7:13|
+------+---------------+
|1,001 |10/18/2013 7:13|
+------+---------------+
```
**Table B has**
```
+------+---------------------------------+
|MANUID|MANUFACTURERDESC |
+------+---------------------------------+
|1,001 |ACTION CONSTRUCTION EQUIPMENT LTD|
+------+---------------------------------+
|1,002 |AJAX FIORI ENGG (INDIA) PVT LTD |
+------+---------------------------------+
|1,003 |APOLLO CONST. EQUIPMENT PVT LTD |
+------+---------------------------------+
|1,004 |APOLLO INDUSTRIAL EQUIP. PVT LTD|
+------+---------------------------------+
|1,006 |ASHOK LEYLAND |
+------+---------------------------------+
|1,007 |ATUL AUTO LTD |
+------+---------------------------------+
|1,009 |ATLAS COPCO INDIA LTD |
+------+---------------------------------+
```
I want to remove from Table B the MANUID values which are not available in Table A.
So from above my final **Table B** will be has only 1,001 in MANUID | ```
SELECT * FROM Table_B
WHERE MANUID IN
(SELECT MANUID FROM Table_A)
```
OR
```
SELECT * FROM Table_B
JOIN Table_A ON Table_B.MANUID = Table_A.MANUID
```
Or if you want to delete records:
```
DELETE FROM Table_B
WHERE MANUID NOT IN
(SELECT MANUID FROM Table_A)
``` | If you are using SQL query Try this:
```
select tb.*
from Table_B as tb, Table_A as ta
where tb.MANUID = ta.MANUID
``` | How to compare and remove not used id's in excel | [
"",
"mysql",
"sql",
"excel",
""
] |
I am having some trouble with my Stored Procedure in Oracle 12.1
The below code is getting an error, and I think it has to do with using two EXCEPTION statements in a row?
I am creating a stored procedure for a fake small online DVD company that updates the queue position in the rental queue table. I use an IF statement that says: if the member already has this DVD in the queue, then update the queue position. In order to check if the member already has a DVD in the queue, I write a SQL SELECT statement that checks for the DVD in the member's queue. If the statement finds a DVD, it stores it in "dvd\_check". If no DVD is found, then I set DVD\_check to 0.
I also have a 2nd SQL statement to determine the original queue order of the row that is about to be updated. Both statements run fine on their own within the stored procedure, but when I put them together in the stored procedure before my IF statement, I get an error:
Error(17,5): PLS-00103: Encountered the symbol "EXCEPTION" when expecting one of the following: ( begin case declare end exit for goto if loop mod null...
AND
Error(29,4): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: end not pragma final instantiable order overriding static...
Here is my Code:
```
CREATE OR REPLACE PROCEDURE sp_queueorder (
dvdid_sp number,
memberid_sp number,
queueposition_sp number)
IS
dvd_check number;
old_queueposition number;
BEGIN
SELECT dvdid INTO dvd_check FROM rentalqueue
WHERE DVDid = dvdid_sp and memberid = memberid_sp;
EXCEPTION
WHEN no_data_found THEN dvd_check := 0;
SELECT queueposition INTO old_queueposition FROM rentalqueue
WHERE DVDid = dvdid_sp and memberid = memberid_sp;
EXCEPTION
WHEN no_data_found THEN old_queueposition := 0;
IF dvd_check > 0 THEN
UPDATE rentalqueue SET queueposition = queueposition + 1
WHERE memberid = memberid_sp and queueposition >= queueposition_sp AND queueposition <= old_queueposition;
UPDATE rentalqueue SET queueposition = queueposition_sp
WHERE dvdid = dvdid_sp AND memberid = memberid_sp;
END IF;
END;
```
Thanks so much for helping me solve this issue. | ```
CREATE OR REPLACE PROCEDURE sp_queueorder (
dvdid_sp number,
memberid_sp number,
queueposition_sp number)
IS
dvd_check number;
old_queueposition number;
BEGIN
-- 1st select
BEGIN
SELECT dvdid INTO dvd_check FROM rentalqueue
WHERE DVDid = dvdid_sp and memberid = memberid_sp;
EXCEPTION
WHEN no_data_found THEN dvd_check := 0;
END;
-- 2nd select
BEGIN
SELECT queueposition INTO old_queueposition FROM rentalqueue
WHERE DVDid = dvdid_sp and memberid = memberid_sp;
EXCEPTION
WHEN no_data_found THEN old_queueposition := 0;
END;
IF dvd_check > 0 THEN
UPDATE rentalqueue SET queueposition = queueposition + 1
WHERE memberid = memberid_sp and queueposition >= queueposition_sp AND queueposition <= old_queueposition;
UPDATE rentalqueue SET queueposition = queueposition_sp
WHERE dvdid = dvdid_sp AND memberid = memberid_sp;
COMMIT; -- if is not needed remove
END IF;
END;
``` | There is a missing END; in your code. Depending on how control is supposed to flow it might be supposed to look like this:
```
CREATE OR REPLACE PROCEDURE sp_queueorder (
dvdid_sp number,
memberid_sp number,
queueposition_sp number)
IS
dvd_check number;
old_queueposition number;
BEGIN
SELECT dvdid INTO dvd_check FROM rentalqueue
WHERE DVDid = dvdid_sp and memberid = memberid_sp;
EXCEPTION
WHEN no_data_found THEN dvd_check := 0;
---add an END; to the block here?
END;
SELECT queueposition INTO old_queueposition FROM rentalqueue
WHERE DVDid = dvdid_sp and memberid = memberid_sp;
EXCEPTION
WHEN no_data_found THEN old_queueposition := 0;
----or add an end here?
END;
IF dvd_check > 0 THEN
UPDATE rentalqueue SET queueposition = queueposition + 1
WHERE memberid = memberid_sp and queueposition >= queueposition_sp AND queueposition <= old_queueposition;
UPDATE rentalqueue SET queueposition = queueposition_sp
WHERE dvdid = dvdid_sp AND memberid = memberid_sp;
END IF;
END;
--or add it here?
END;
``` | Stored Procedure two exception select statements | [
"",
"sql",
"oracle",
"stored-procedures",
""
] |
I have a table with few columns, one of these columns called \_bandwidth and the value inside it is decimal.
So i wanna type a sql query that adds values to existing value.
Say the value of \_bandwidth for user id 1 is 150.000 and I want to add 200 to this value, so the sum would be 350.000.
This is the query I typed, but it didn't work.
```
update users set _bandwidth = (select _bandiwdth from users where _ID = 1) + 150 where _ID = 1
```
Also did something like:
```
update users set _bandwidth += 200 where _ID = 1
```
Of course they are wrong, but I hope you understand what I want to achieve.
Thanks a lot in advance.
**EDIT**
Found the solution and the answer would be:
```
update users set _bandwidth = _bandwidth + 200 where _ID = 1
``` | ```
UPDATE Users
SET _bandwidth = _bandwidth + 200
WHERE _ID =1
```
would work | ```
update users
set _bandwidth = _bandwidth + 200
where _ID = 1
``` | sql server query to add value to existing value | [
"",
"sql",
"sql-server",
""
] |
I am new to T-SQL so have zero knowledge & am using SQL Server 2012. Currently I have a table called `dbo.Securities` which contains two columns. One column is called `PairName` (`nchar(15)`) & the other column is called `RunPair` (`bit`). This table contains 30 rows.
What I would like to do is create 30 new tables that all have the same structure. The structure is one column of type `datetime` which will be the primary key & must go down to the second. There are 12 other columns all of type `decimal(5,5)`.
The 30 tables would be named after the `PairName` column in the table `dbo.Securities`. I have been trying to find out how to loop through the `dbo.Securities` table and use the `PairName` to create a new table based on the structure mentioned above.
However after reading some guides it appears looping through a table is not the best way to go about trying to complete this task. | You can do this with a cursor to loop through the securities table, and then dynamic sql to create the tables. Looping is a perfectly acceptable way to perform a number of create tables. There's no real way to do that in a set oriented way, which is the usual reason people say to avoid cursors.
```
declare
@sql nvarchar(max),
@pair sysname;
declare securities_cursor cursor local fast_forward for
select PairName from Securities;
open securities_cursor;
fetch next from securities_cursor into @pair;
while @@fetch_status = 0
begin
set @sql = '
Create Table dbo.' + quotename(@pair) + ' (
datekey datetime2(0) not null constraint ' + quotename('PK_' + @pair) + ' primary key,
column1 decimal(10, 5),
column2 decimal(10, 5),
column3 decimal(10, 5),
column4 decimal(10, 5),
column5 decimal(10, 5) -- etc
);'
exec sp_executesql @sql;
fetch next from securities_cursor into @pair;
end
close securities_cursor;
deallocate securities_cursor;
``` | Note, that you'll want to edit the table schema part in the below code. This loop will build thirty tables. As far as the best and fastest approach? Using `TSQL` this would be faster than manually building each table, but I don't know if it's the fastest method overall.
```
DECLARE @Build TABLE(
ID SMALLINT IDENTITY(1,1),
TableName VARCHAR(250)
)
INSERT INTO @Build
SELECT PairName
FROM dbo.Securities
DECLARE @begin SMALLINT = 1, @max SMALLINT, @sql NVARCHAR(MAX), @table NVARCHAR(250)
SELECT @max = MAX(TableID) FROM @Build
WHILE @begin <= @max
BEGIN
SELECT @table = TableName FROM @Build WHERE ID = @begin
SET @sql = 'CREATE TABLE ' + @table + '(
-- EDIT TABLE COLUMNS HERE
)'
EXECUTE sp_executesql @sql
PRINT 'TABLE ' + @table + ' has been built.'
SET @begin = @begin + 1
END
``` | T-SQL Create new tables based on another table | [
"",
"sql",
"sql-server",
""
] |
I am trying to update an enroll\_date row to null and it is telling me `"cannot update (%s) to NULL"`, so I tried putting `TO_CHAR` and it still doesn't help...
enroll\_date currently shows this, which I want to set to null:
```
ENROLL_DATE
07-FEB-07
```
This is what I have
```
UPDATE ENROLLMENT
SET TO_CHAR(ENROLL_DATE) = NULL
WHERE STUDENT_ID ='125'
AND SECTION_ID ='61';
```
how can I set enroll\_date to null ? | The error appears to be telling you that your data model defines `enroll_date` as a `NOT NULL` column. You cannot, therefore, set it to be NULL.
You could modify the table definition to allow NULL values
```
ALTER TABLE enrollment
MODIFY( enroll_date DATE NULL )
```
It seems likely, however, that this was an intentional choice made when defining the data model that should not be altered. I don't see how it would make sense to have an `enrollment` without having an `enroll_date`. | Remove the TO\_CHAR function. You're updating a column, not the function'd value
```
UPDATE ENROLLMENT
SET ENROLL_DATE = NULL
WHERE STUDENT_ID ='125'
AND SECTION_ID ='61';
``` | updating date to null | [
"",
"sql",
"oracle",
"null",
"sql-update",
""
] |
I know that writing :
```
SELECT * FROM <ANY TABLE>
```
in a stored procedure will output a result set... so why do we have a return value separately in a stored procedure? Where do we use it?
If any error comes, then the result set will be null, right? | First of all you have two distinct ways to return something. You may return a result *set* (i.e. a table) as the result of the operation, as well as a return *value* indicating either some sort of error or status of the result set.
Also, a return value is limited to a single 32-bit integer, whereas a result set can have as many rows and columns as the RDBMS allows.
My personal opinion is to use a stored procedure mainly to execute a task, and not to create a result set. But that is a matter of taste. However, using this paradigm, an action should inform the caller about the success and - in case of a failure - about the reason. Some RDBMS allow using exceptions, but often there is nothing to throw, i.e. the procedure just returns a status (e.g. 0, 1, 2 for 'data was new and had to be inserted', 'data existed and was updated', 'data could not be updated', etc.).
There is a third way to pass information back to the caller: By using `output` parameter. So you have three different possibilities of passing information back to the caller.
This is one more than with a 'normal' programming language. They usually offer the choice of either returning a value (e.g. `int Foo()`) or using an output/ref parameter (`void Foo(ref int bar)`). But SQL introduces a new and very powerful way of returning data (i.e. tables).
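As an illustration, here is a minimal T-SQL sketch (the procedure and table names are hypothetical) showing all three channels - output parameter, result set, and return value - side by side:

```sql
-- Hypothetical procedure using all three ways of passing information back
CREATE PROCEDURE dbo.UpsertItem
    @Name  varchar(50),
    @NewID int OUTPUT              -- 1) output parameter
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.Items WHERE Name = @Name)
    BEGIN
        SELECT @NewID = ID FROM dbo.Items WHERE Name = @Name;
        SELECT * FROM dbo.Items WHERE ID = @NewID;   -- 2) result set
        RETURN 1;                  -- 3) return value: "row already existed"
    END

    INSERT INTO dbo.Items (Name) VALUES (@Name);
    SET @NewID = SCOPE_IDENTITY();
    SELECT * FROM dbo.Items WHERE ID = @NewID;       -- 2) result set again
    RETURN 0;                      -- 3) return value: "row was inserted"
END
```

The caller can consume all three at once: `DECLARE @id int, @rc int; EXEC @rc = dbo.UpsertItem 'foo', @NewID = @id OUTPUT;` - the result set arrives independently of `@rc` and `@id`.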
In fact, you may return more than one table which makes this feature even more powerful. | Because if you use return values you can have a more fine grained control over the execution status and what the error (if any) were and you can return different error codes for malformed or invalid parameters etc and hence add error control/checking on the calling side to.
If you just check for an empty result set you really don't know why the set might be empty (maybe you called the procedure with an invalid parameter).
The main difference between a result set and a return value is that the result set stores the data returned (if any) and the return code holds some kind of status information about the execution itself. | What is the difference between a Result Set and Return value in a SQL procedure? what do they signify? | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I am using a contained database, after creating the database I attempted to create a user but I got the error:
> You can only create a user with a password in a contained database
My code is:
```
sp_configure 'show advanced options',1
GO
RECONFIGURE WITH OVERRIDE
GO
sp_configure 'contained database authentication', 1
GO
RECONFIGURE WITH OVERRIDE
GO
```
```
CREATE DATABASE [MyDb]
CONTAINMENT = PARTIAL
ON PRIMARY
( NAME = N'My', FILENAME = N'C:\My.mdf')
LOG ON
( NAME = N'My_log', FILENAME =N'C:\My_log.ldf')
```
```
CREATE USER MyUser
WITH PASSWORD = 'pass@123';
GO
``` | In **SQL Server 2017**, I found that my initial setup did not configure the ["contained databases"](https://learn.microsoft.com/en-us/sql/relational-databases/security/contained-database-users-making-your-database-portable) feature to help this along.
so, at server level you can check the server properties in the UI or run:
```
EXEC sp_configure 'CONTAINED DATABASE AUTHENTICATION'
```
if the running value isn't 1, then:
```
EXEC sp_configure 'CONTAINED DATABASE AUTHENTICATION', 1
GO
RECONFIGURE
GO
```
At the database level, it also might not have the "contained database" feature fully enabled. The option sits in the Database Properties panel on the Options section, the fourth dropdown at the top...
> Containment type == None or Partial
You can set it via SQL too. eg:
```
USE [master]
GO
ALTER DATABASE [MyDb] SET CONTAINMENT = PARTIAL
GO
```
thereafter, you can create the contained user as suggested by [@aleksandr](https://stackoverflow.com/users/1085940/aleksandr-fedorenko)
```
USE [MyDb]
GO
CREATE USER MyUser WITH PASSWORD = 'pass@123';
GO
``` | Create the login and the user separately:
```
CREATE LOGIN MyUser WITH PASSWORD = 'pass@123';
CREATE USER MyUser FOR LOGIN MyUser;
```
The names can be the same but of course they can also be different. | You can only create a user with a password in a contained database | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I've bloody well done this before, but I plumb forget how.
I have a table of Schools, each with an ID and name.
I have another table of Students, each with a school\_id foreign key.
I want to write a query that'll return a list of schools and the count of their students I have on record; something like this:
```
School1 - 319
School2 - 166
School3 - 120
```
Bonus if it can be done elegantly in Rails without resorting to raw SQL! | In SQL you can just do:
```
SELECT s.ID, COUNT(*)
FROM schools s
INNER JOIN students ss ON ss.schoolID = s.id
GROUP BY s.ID
```
(about the rails part i can't help you, sorry). | No rails points for me:
```
SELECT
school_name, count(*)
FROM
schools
LEFT JOIN
students
USING(school_id)
GROUP BY
school_id
``` | SELECT name and rowcount? | [
"",
"mysql",
"sql",
"ruby-on-rails",
""
] |
I have never used Bitewise AND in my life. I have researched this operator but it still eludes me as to what it exactly does. So, I will ask with some code I just came across, what is the Bitwise And doing here:
```
CASE
WHEN (ft.Receiver_Status & 2) = 2 THEN '3D'
WHEN (ft.Receiver_Status & 1) = 1 THEN '2D'
WHEN (ft.Receiver_Status & 32) = 32 THEN 'Invalid' -- AR 220312
ELSE 'None'
```
Is it enforcing the same datatype, such as smallint converts to int before comparing the value of Receiver_Status? | `ft.Receiver_Status & 1`: 1 is 2^0, so it is pulling out the value of the bit at position 0.
`ft.Receiver_Status & 2`: 2 is 2^1, so it is pulling out the value of the bit at position 1.
`ft.Receiver_Status & 32`: 32 is 2^5, so it is pulling out the value of the bit at position 5.
Note that, for example, the `= 32` in `(ft.Receiver_Status & 32) = 32` is actually redundant. This could instead be `(ft.Receiver_Status & 32) != 0` because all you're interested in is whether that bit is a 0 or a 1. | The bitwise AND checks to see whether a particular bit is set. It appears `ft.Receiver_Status` is an integer which stores various flags in different bits.
* `1` in binary is `00001` so `ft.Receiver_Status & 1` is checking to see if the first bit is set.
* `2` in binary is `00010` so `ft.Receiver_Status & 2` is checking to see if the second bit is set.
* `32` in binary is `100000` so `ft.Receiver_Status & 32` is checking to see if the sixth bit is set.
To see precisely how this works: in the result of the AND operation, the bit at position *n* will be `1` if and only if the bit at position *n* in both the first and the second number is `1`. Consider the following binary numbers:
```
011110001 (241)
000100000 ( 32)
---------------
000100000 ( 32)
```
And alternatively,
```
011010001 (209)
000100000 ( 32)
---------------
000000000 (  0)
``` | What is the Bitewise AND doing here | [
"",
"sql",
"bitwise-operators",
"boolean-operations",
""
] |
I have a query and display that gives me employees grouped by departments. It works fine but there is one additional thing that I need that I cannot figure out. By the department name I need to add text that says: "(x employees)", where x is the number of employees in that department. Example:
```
MARKETING (2 employees)
John Doe
Jane Smith
```
My code is below:
```
<cfquery name="getEmpsByDept" datasource="#application.DSN#" dbtype="ODBC">
SELECT DISTINCT First, Last, Department
FROM SuccessFactorsPeople
ORDER BY Department
</cfquery>
<table border="0" width="70%" cellpadding="0" cellspacing="0">
<cfoutput query="getEmpsByDept" group="Department">
<tr>
<td><b>#Ucase(Department)#</b></td>
</tr>
<cfoutput>
<tr>
<td> #TRIM(First)# #TRIM(Last)#</td>
</tr>
</cfoutput>
<tr>
<td height="0"> </td>
</tr>
</cfoutput>
</table>
``` | There's not a built in counter you can use. You'll have to loop through each record inside your group by to get a counter for each department.
Also, make sure you scope your query variables
```
<cfoutput query="getEmpsByDept" group="Department">
<cfset empCount = 0>
<cfoutput>
<cfset empCount++>
</cfoutput>
<tr>
<td><b>#Ucase(getEmpsByDept.Department)# #empCount# Employees</b></td>
</tr>
<cfoutput>
<tr>
<td> #TRIM(getEmpsByDept.First)# #TRIM(getEmpsByDept.Last)#</td>
</tr>
</cfoutput>
<tr>
<td height="0"> </td>
</tr>
</cfoutput>
``` | You need to use a group by in your SQL
If, for any given department, the count is supposed to be different when there is a different name, then you want this:
```
<cfquery name="getEmpsByDept" datasource="#application.DSN#" dbtype="ODBC">
SELECT First, Last, Department, COUNT(Department) AS DepartmentCount
FROM SuccessFactorsPeople
GROUP BY First, Last, Department
Order by Department
</cfquery>
```
This will guarantee that you will get one row per department no matter what, but `MAX(first)` and `MAX(last)` may have other problems
```
<cfquery name="getEmpsByDept" datasource="#application.DSN#" dbtype="ODBC">
SELECT MAX(First) AS First, MAX(Last) AS Last, Department, COUNT(Department) AS DepartmentCount
FROM SuccessFactorsPeople
GROUP BY Department
Order by Department
</cfquery>
```
If you do not want to collapse on first and last.
```
<cfquery name="getEmpsByDept" datasource="#application.DSN#" dbtype="ODBC">
SELECT DISTINCT First, Last, A.Department, DepartmentCount
FROM SuccessFactorsPeople A
INNER JOIN (
SELECT Department, COUNT(Department) AS DepartmentCount
FROM SuccessFactorsPeople
GROUP BY Department
) B
ON A.Department = B.Department
ORDER BY A.Department
</cfquery>
``` | How to count the number of employees and add number to title | [
"",
"sql",
"coldfusion",
""
] |
I am trying to write a query (PostgreSQL) to get "Movies with highest number of awards in year 2012."
I have following tables:
```
CREATE TABLE Award(
ID_AWARD bigserial CONSTRAINT Award_pk PRIMARY KEY,
award_name VARCHAR(90),
category VARCHAR(90),
award_year integer,
CONSTRAINT award_unique UNIQUE (award_name, category, award_year));
CREATE TABLE AwardWinner(
ID_AWARD integer,
ID_ACTOR integer,
ID_MOVIE integer,
CONSTRAINT AwardWinner_pk PRIMARY KEY (ID_AWARD));
```
And I have written the following query, which gives correct results, but there is quite a lot of code duplication, I think.
```
select * from
(select id_movie, count(id_movie) as awards
from Award natural join awardwinner
where award_year = 2012 group by id_movie) as SUB
where awards = (select max(count) from
(select id_movie, count(id_movie)
from Award natural join awardwinner
where award_year = 2012 group by id_movie) as SUB2);
```
So `SUB` and `SUB2` are exactly the same subquery. Is there a better way to do this? | ### Get all winning movies
```
SELECT id_movie, awards
FROM (
SELECT aw.id_movie, count(*) AS awards
,rank() OVER (ORDER BY count(aw.id_movie) DESC) AS rnk
FROM award a
JOIN awardwinner aw USING (id_award)
WHERE a.award_year = 2012
GROUP BY aw.id_movie
) sub
WHERE rnk = 1;
```
### Major points
* This should be simpler and faster than suggestions so far. Test with `EXPLAIN ANALYZE`.
* There are cases where CTEs are instrumental to avoid code duplication. But not in this time: a subquery does the job just fine and is usually faster.
* You can run a window function OVER an aggregate function on the same query level. That's why this works:
```
rank() OVER (ORDER BY count(aw.id_movie) DESC) AS rnk
```
* I'd suggest to use explicit column names in the JOIN condition instead of `NATURAL JOIN`, which is prone to breakage if you later change / add columns to the underlying tables.
The JOIN condition with `USING` is almost as short, but doesn't break as easily.
* Since `id_movie` cannot be NULL (ruled out by the JOIN condition and also part of the pk) it is shorter and slightly faster to use `count(*)` instead. Same result.
### Just one movie
Shorter and faster, yet, if you only need *one* winner:
```
SELECT aw.id_movie, count(*) AS awards
FROM award a
JOIN awardwinner aw USING (id_award)
WHERE a.award_year = 2012
GROUP BY 1
ORDER BY 2 DESC, 1 -- as tie breaker
LIMIT 1
```
Using positional references (`1`, `2`) here as shorthand.
I added `id_movie` to `ORDER BY` as tie breaker in case multiple movies should qualify for the win. | Well you can use [common table expression](http://www.postgresql.org/docs/current/static/queries-with.html) to avoid code duplication:
```
with cte_s as (
select id_movie, count(id_movie) as awards
from Award natural join awardwinner
where award_year = 2012
group by id_movie
)
select
sub.id_movie, sub.awards
from cte_s as sub
where sub.awards = (select max(sub2.awards) from cte_s as sub2)
```
or you can do something like this with [window function](http://www.postgresql.org/docs/current/static/functions-window.html) (untested, but I think PostgreSQL allows this):
```
with cte_s as (
select
id_movie,
count(id_movie) as awards,
max(count(id_movie)) over() as max_awards
from Award natural join awardwinner
where award_year = 2012
group by id_movie
)
select id_movie
from cte_s
where max_awards = awards
```
Another way to do this could be to use [rank()](http://www.postgresql.org/docs/current/static/functions-window.html) function (untested, may be you have to use two cte instead of one):
```
with cte_s as (
select
id_movie,
count(id_movie) as awards,
rank() over(order by count(id_movie) desc) as rnk
from Award natural join awardwinner
where award_year = 2012
group by id_movie
)
select id_movie
from cte_s
where rnk = 1
```
**update** When I created this answer, my main goal was to show how to use cte to avoid code duplication. In general, it's better to avoid referencing a cte more than one time in a query if possible - the first query uses 2 table scans (or index seeks) while the second and third use only one, so I should point out that it's better to go with those queries. Anyway, @Erwin made these tests in his answer. Just to add to his great major points:
* I also advise against `natural join` because of its error-prone nature. Actually, my main RDBMS is SQL Server, which does not support it, so I'm more used to explicit `outer/inner join`.
* It's good habit to always use aliases in your queries, so you can avoid [strange results](https://stackoverflow.com/questions/18577622/sql-in-query-produces-strange-result/18579128#18579128).
* This could be a totally subjective thing, but usually if I'm using some table only to filter out rows from the main table of the query (like in this query, we just want to get `awards` for year 2012 and just filter rows from `awardwinner`), I prefer not to use `join`, but to use `exists` or `in` instead; it seems more logical to me.
So final query could be:
```
with cte_s as (
select
aw.id_movie,
count(*) as awards,
rank() over(order by count(*) desc) as rnk
from awardwinner as aw
where
exists (
select *
from award as a
where a.id_award = aw.id_award and a.award_year = 2012
)
group by aw.id_movie
)
select id_movie
from cte_s
where rnk = 1
``` | Find movies with highest number of awards in certain year - code duplication | [
"",
"sql",
"postgresql",
"aggregate-functions",
"window-functions",
""
] |
I wrote the following SQL statement to get data from two tables `gendata` & `TrainingMatrix`:
```
SELECT * FROM (SELECT DISTINCT ON ("TrainingMatrix".payroll, "TrainingName", "Institute")"gendata"."Employee Name","gendata"."Position", "gendata"."Department", "TrainingMatrix".*
FROM "TrainingMatrix" JOIN "gendata" ON "TrainingMatrix".payroll = "gendata".payroll
ORDER BY payroll, "TrainingName", "Institute" ,"TrainingDate" DESC NULLS LAST) AS foo;
```
It works fine, but I need to filter the records more by:
```
WHERE "TrainingMatrix"."ExpiryDate" - current_date <= 0
AND EXTRACT(YEAR FROM "TrainingMatrix"."ExpiryDate") = EXTRACT(YEAR FROM current_date);
```
So, the orginal SQL statement will be:
```
SELECT * FROM (SELECT DISTINCT ON ("TrainingMatrix".payroll, "TrainingName", "Institute")"gendata"."Employee Name","gendata"."Position", "gendata"."Department", "TrainingMatrix".*
FROM "TrainingMatrix" JOIN "gendata" ON "TrainingMatrix".payroll = "gendata".payroll
ORDER BY payroll, "TrainingName", "Institute" ,"TrainingDate" DESC NULLS LAST) AS foo WHERE "TrainingMatrix"."ExpiryDate" - current_date <= 0
AND EXTRACT(YEAR FROM "TrainingMatrix"."ExpiryDate") = EXTRACT(YEAR FROM current_date);
```
But I got this error:
> ERROR: missing FROM-clause entry for table "TrainingMatrix" LINE 3:
> ...te" ,"TrainingDate" DESC NULLS LAST) AS foo WHERE "TrainingM...
I am using PostgreSQL. Any advise guys? | As you have wrapped your actual query into a derived table (the `select .. from (...) as foo`) your "table" isn't called `TrainingMatrix` any longer. You need to reference it using the alias you use for the derived table:
```
select *
from (
... you original query ..
) as foo
where foo."ExpiryDate" - current_date <= 0
and extract(year from foo."ExpiryDate") = extract(year from current_date)
```
---
Btw: I would recommend you stop using quoted identifiers `"ExpiryDate"` using case-sensitive names usually gives you more trouble than it's worth. | 100% what [@a\_horse already said](https://stackoverflow.com/a/19976236/939860). Plus a couple more things:
**Format** your query so it's easy to read and understand for humans before you try to debug. Even more so, before you post in a public forum.
Use [**table aliases**](https://www.postgresql.org/docs/current/sql-select.html#SQL-FROM), especially with your unfortunate CaMeL-case names to make it easier to read.
Provide your table definitions or at least **table-qualify** column names in your query, so we have a chance to parse it. Your immediate problem is already fixed in the query below. You would also replace `?.` accordingly:
* `t` .. alias for `"TrainingMatrix"`
* `g` .. alias for `gendata`
```
SELECT *
FROM (
SELECT DISTINCT ON (t.payroll, ?."TrainingName", ?."Institute")
g."Employee Name", g."Position", g."Department", t.*
FROM "TrainingMatrix" t
JOIN gendata g ON g.payroll = t.payroll
ORDER BY t.payroll, ?."TrainingName", ?."Institute"
, ?."TrainingDate" DESC NULLS LAST
) AS foo
WHERE foo."ExpiryDate" - current_date <= 0
AND EXTRACT(YEAR FROM foo."ExpiryDate") = EXTRACT(YEAR FROM current_date);
```
Like @a\_horse wrote, it's a bad idea to use illegal identifiers that have to be double-quoted all the time. But an **identifier with enclosed space** character is even worse: `"Employee Name"`. That's one step away from home-made SQL-injection.
The way your additional filters are phrased is **bad for performance**:
```
WHERE "ExpiryDate" - current_date <= 0
```
Is not [sargable](https://en.wiktionary.org/wiki/sargable) and therefore can't use a plain index. Leaving that aside, it is also more expensive than it needs to be. Use instead:
```
WHERE "ExpiryDate" >= current_date
```
Similar for your 2nd expression, which should be rewritten to:
```
WHERE "ExpiryDate" >= date_trunc('year', current_date)
AND "ExpiryDate" < date_trunc('year', current_date) + interval '1 year'
```
Combining both, we can strip a redundant expression:
```
WHERE "ExpiryDate" >= current_date
AND "ExpiryDate" < date_trunc('year', current_date) + interval '1 year'
```
Your question is **ambiguous**. Do you want to apply the additional filter before `DISTINCT` or after? Different result.
Assuming *before* `DISTINCT`, you don't need a subquery - which removes the cause for your immediate problem: No different alias for the subquery.
All together:
```
SELECT DISTINCT ON (t.payroll, "TrainingName", "Institute")
g."Employee Name", g."Position", g."Department", t.*
FROM "TrainingMatrix" t
JOIN gendata g USING (payroll)
WHERE t."ExpiryDate" >= current_date
AND t."ExpiryDate" < date_trunc('year', current_date) + interval '1 year'
ORDER BY t.payroll, "TrainingName", "Institute", "TrainingDate" DESC NULLS LAST;
``` | Missing FROM-clause entry for a table | [
"",
"sql",
"database",
"postgresql",
"select",
"greatest-n-per-group",
""
] |
I'm trying to join table Group on my link table UserGroup, to get the groups where the user with id 30 is not a member, but I can't seem to get it right.
Joining with no conditions returns this:
```
SELECT Groups.ID, Groups.Name, UserGroup.groupID, UserGroup.userID
FROM Groups LEFT OUTER JOIN UserGroup ON Groups.ID = UserGroup.groupID
```
```
+----------+------------+-------------------+------------------+
| Group.ID | Group.Name | UserGroup.GroupID | userGroup.UserID |
+----------+------------+-------------------+------------------+
| 1 | g1 | 1 | 30 |
| 2 | g2 | NULL | NULL |
+----------+------------+-------------------+------------------+
```
However, when I try to sort out the rows where userID = 30, I'm getting two equally wrong results.
Adding the condition in the join part returns both rows, but claims that userGroup.userID is null. I suspect some duplicate rows due to the join, but haven't been able to figure out what exactly is happening or how to fix it.
```
SELECT Groups.ID, Groups.Name, UserGroup.groupID, UserGroup.userID
FROM Groups LEFT OUTER JOIN UserGroup ON Groups.ID = UserGroup.groupID AND UserGroup.userID <> 30
```
```
+----------+------------+-------------------+------------------+
| Group.ID | Group.Name | UserGroup.GroupID | userGroup.UserID |
+----------+------------+-------------------+------------------+
| 1 | g1 | NULL | NULL |
| 2 | g2 | NULL | NULL |
+----------+------------+-------------------+------------------+
```
Adding the condition to the very end of my statement returns 0 rows.
```
SELECT Groups.ID, Groups.Name, UserGroup.groupID, UserGroup.userID
FROM Groups LEFT OUTER JOIN UserGroup ON Groups.ID = UserGroup.groupID
WHERE UserGroup.userID <> 30
```
i'm running on an mssql server and using c# SqlCommand to execute the query, but i don't think that's connected to the issue | You actually want to join trying to find where usergroup.userID = 30, not <> 30. From that join you want to return the records where there *is no* such usergroup record.
Try something like this:
```
SELECT Groups.ID, Groups.Name, UserGroup.groupID, UserGroup.userID
FROM Groups
LEFT OUTER JOIN UserGroup ON UserGroup.groupID = Groups.ID
AND UserGroup.userID = 30
WHERE UserGroup.UserID IS NULL
``` | " to get the groups where user with id 30 is not a member"
```
SELECT *
FROM Groups g
WHERE NOT EXISTS (SELECT 1 FROM UserGroup WHERE groupID = g.iD and userID = 30);
```
seems more natural to me in your case. | sql conditional join getting wrong results | [
"",
"sql",
"sql-server-2008",
"join",
""
] |
Not sure why the Query and View Designer is not appearing as per
<http://msdn.microsoft.com/en-us/library/vstudio/ms172013.aspx>
It's an ASP.net project with database in the AppData folder, and a connection using SQL 2008 Express.
If it helps the connection string is:
Data Source=.\SQLEXPRESS;AttachDbFilename="C:\folders...\App\_Data\database.mdf";Integrated Security=True;Connect Timeout=30;User Instance=True
If I open the project in VS2012 (where I've worked on this project for some time), find a table, right click, new query, I get the nice interactive visual design tool.
If I open the project in VS2013 (just installed) and do the same, blank page named SQLQuery1.sql and a basic connection to SQL Express (ie. master, model, etc.)
MS page tells me Tools / Options / Visual Database tools - but this doesn't exist on my installation!
Am I missing something?
Also tried with blank project, added App\_Data and a new database (so using LocalDB), added a table, right click, new query - exactly the same blank file. | I have been having the same problem and think I have found the solution.
When you are adding the connection to the database, in the Add Connection dialog:
Click 'Change...' next to the Data source
Dialog opens showing 'Change Data Source'
You probably have selected 'Microsoft SQL Server', and below that there is a drop-down where you can select:
.NET Framework Data Provider for OLE DB
.NET Framework Data Provider for SQL Server (This is probably selected by default)
Try changing to the OLE DB connection.
Click OK, and then complete the rest of the connection on the 'Add Connection' dialog.
I have then found that using the database works as it did in VS2012, and as you are describing above. | I was searching for a solution to a similar issue: unable to use the Query Designer for a local mdf file database. This is what I found... hope it helps someone.
Reading the comments here (<http://social.msdn.microsoft.com/Forums/sqlserver/en-US/ed4675d3-aa84-47db-bdf5-f852355409e6/query-designer-not-displaying-in-visual-studio-2013?forum=ssdt>).
According to the moderator of that forum question (Kevin Cunnane of Microsoft), "in Visual Studio 2013 the query designer and database diagram features are no longer available from the Server Explorer. If you wish to keep using the query designer, the only other workaround for now is to access it from SSMS. "
My current workaround is to use Visual Studio 2012 Server Explorer to view data and write queries for a local MDF file. And to use Visual Studio 2013 for everything else. | Visual Studio 2013 SQL Query and View Designer not appearing | [
"",
"sql",
"designer",
"visual-studio-2013",
""
] |
I am using Sybase and I am doing a select which returns a column called "iftype", but its type is int and I need to convert it into varchar. When I try to do the select without the convert function I get this error:
> Error code 257, SQL state 37000: Implicit conversion from datatype 'VARCHAR' to 'INT' is not allowed. Use the CONVERT function to run this query.
I don't know how to use the `CONVERT` function. Can anyone help me, please? | Use the `CONVERT` function.
```
SELECT CONVERT(varchar(10), field_name) FROM table_name
``` | Use the `STR` function:
```
SELECT STR(field_name) FROM table_name
```
---
## **Arguments**
**float\_expression**
Is an expression of approximate numeric (float) data type with a decimal point.
**length**
Is the total length. This includes decimal point, sign, digits, and spaces. The default is 10.
**decimal**
Is the number of places to the right of the decimal point. decimal must be less than or equal to 16. If decimal is more than 16 then the result is truncated to sixteen places to the right of the decimal point.
source: <https://msdn.microsoft.com/en-us/library/ms189527.aspx> | Convert INT to VARCHAR SQL | [
"",
"sql",
"select",
"type-conversion",
"sybase",
""
] |
I have the following problem: in my sqlite database there are values stored, and for some purposes I need to remove entries.
But first I need to get the id of the rows I would like to remove. To get the id, I tried to use the following query:
```
select commonid from TestData where coordinate_x=5.707523 and coordinate_y=6.693941;
```
The result is empty, because the stored value for coordinate\_y is 6.693940. Therefore I was thinking of using LIKE to get the right result. The exact value does not matter, because the values only differ after the first few digits following the decimal point. I tried:
```
select commonid from TestData where coordinate_x=5.707523 and coordinate_y LIKE "6.69394%";
```
This failed as well, and the result was empty.
I found out on the internet that I could use \* instead of %.
```
select commonid from TestData where coordinate_x=5.707523 and coordinate_y LIKE "6.69394*";
```
Unfortunately, this did not work either.
So how can I use like or any other command to solve this problem? | This works for me:
```
sqlite> .schema t
CREATE TABLE t (f double);
sqlite> select * from t;
1.23456789
2.23456789
3.23456789
sqlite> select * from t where f = 1.23456789;
1.23456789
sqlite> select * from t where f like '1.23456789';
1.23456789
sqlite> select * from t where f like '1.234%';
1.23456789
sqlite> select * from t where f like '_.23456789';
1.23456789
2.23456789
3.23456789
``` | You can, but it would be smarter (and probably faster) to do
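The same behaviour can be reproduced from Python's built-in sqlite3 module (a quick sanity check; the table and values here just mirror the session above):

```python
import sqlite3

# In-memory sqlite database mirroring the interactive session above
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (f double)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(1.23456789,), (2.23456789,), (3.23456789,)])

# LIKE works on a float column because sqlite compares its text form
like_match = conn.execute("SELECT f FROM t WHERE f LIKE '1.234%'").fetchall()
print(like_match)  # [(1.23456789,)]
```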
```
SELECT commonid FROM TestData WHERE coordinate_x=5.707523 AND
ABS(coordinate_y-6.693945)<0.000005;
```
or
```
SELECT commonid FROM TestData WHERE coordinate_x=5.707523 AND
coordinate_y BETWEEN 6.69394 AND 6.69395;
``` | How can I use like with float numbers in sqlite? | [
"",
"sql",
"sqlite",
"floating-point",
"numbers",
"precision",
""
] |
Here's the scenario:
I'm migrating data from an old system to a new system.
The old system had 2 tables that represented comments and their replies.
The new system has a single table for comments that allows nested comments. So, it has a self-referencing foreign key.
I need to move data from the 2 tables into 1.
Here's the problem:
While I know which sub-comments are related to which parent comments, during the insert into the new table I do not know the new ID for the parent comment.
I have considered using a while loop to loop through each of the parent comments then perform the 2 inserts inside the loop.
Is this an appropriate time to use a cursor? Per the recommendation of nearly everyone, I avoid them like the plague.
Can you think of a different approach to move the data from 2 tables into 1?
All of this is happening inside of another while loop. I'm also wondering if I should try to break this loop out into a separate loop instead of nesting them. | So it looks like if I were using SQL 2008 or above, I could use the `MERGE` statement with the `OUTPUT` keyword. Unfortunately, I need to support SQL 2005 which does not have the `MERGE` statement. I ended up using nested loops. | Without a test database in front of me, you can do it using the `OUTPUT` keyword in MSSQL. Should be enough to get you started:
```
DECLARE @NewIDs Table
(
NewID INT,
OldID INT
)
INSERT INTO NewTable
OUTPUT NewTable.ID,
OldTable.ID
INTO @NewIDs
SELECT NULL As ParentCommentID, Othercolumns
FROM OldParentTable
INSERT INTO NewTable
SELECT NewID As ParentCommentID, OtherColumns
FROM OldChildTable
JOIN @NewIDs NewIDs
ON NewIDs.OldID = OldChildTable.OldParentTableID
``` | SQL Migration Script - nested loops | [
"",
"sql",
"sql-server",
"data-migration",
""
] |
I have the following SQL statement: `SELECT sys_context('userenv','db_name') FROM dual;`. I stored it inside a SQL file called `db.sql` in C:
Now, I can run this file:
> @C:\db.sql
I know we can invoke the file just as @db, but when I try it, I get this error:
```
SQL> @db
SP2-0310: unable to open file "db.sql"
```
So, where should I place the file? I read here <http://docs.oracle.com/html/B12033_01/sqlplus.htm> that I have to edit some variable names like SQLPATH. Where should I edit this? | Found the answer myself :) here sharing this...
Always these files will be looked in Bin of your Oracle Client. Most user create ORACLE\_HOME environmental variable at time of installation and the files will be looked in ORACLE\_HOME\Bin
1. So create a environmental variable if not present already pointing
to [driver/folder\_installed]/product/{version}/client
2. Place the files in bin
that's all enjoy! | SQLPATH seems like an environment variable.
Right-click on My Computer. Go to "Advanced System Settings". Under the "Advanced" tab click on "Environment Variables...", and there you can set the SQLPATH under your system variables, if you have admin rights.
"",
"sql",
"oracle",
"stored-procedures",
""
] |
It's easy to understand why left outer joins are not commutative, but I'm having some trouble understanding whether they are associative. Several online sources suggest that they are not, but I haven't managed to convince myself that this is the case.
Suppose we have three tables: A, B, and C.
Let A contain two columns, ID and B\_ID, where ID is the primary key of table A and B\_ID is a foreign key corresponding to the primary key of table B.
Let B contain two columns, ID and C\_ID, where ID is the primary key of table B and C\_ID is a foreign key corresponding to the primary key of table C.
Let C contain two columns, ID and VALUE, where ID is the primary key of table C and VALUE just contains some arbitrary values.
Then shouldn't `(A left outer join B) left outer join C` be equal to `A left outer join (B left outer join C)`? | If you're assuming that you're JOINing on a foreign key, as your question seems to imply, then yes, I think OUTER JOIN is guaranteed to be associative, as covered by [Przemyslaw Kruglej's answer](https://stackoverflow.com/a/20022519/1709587).
However, given that you haven't actually specified the JOIN condition, the pedantically correct answer is that no, they're not guaranteed to be associative. There are two easy ways to violate associativity with perverse `ON` clauses.
## 1. One of the JOIN conditions involves columns from all 3 tables
This is a pretty cheap way to violate associativity, but strictly speaking nothing in your question forbade it. Using the column names suggested in your question, consider the following two queries:
```
-- This is legal
SELECT * FROM (A JOIN B ON A.b_id = B.id)
JOIN C ON (A.id = B.id) AND (B.id = C.id)
-- This is not legal
SELECT * FROM A
JOIN (B JOIN C ON (A.id = B.id) AND (B.id = C.id))
ON A.b_id = B.id
```
The bottom query isn't even a valid query, but the top one is. Clearly this violates associativity.
## 2. One of the JOIN conditions can be satisfied despite all fields from one table being NULL
This way, we can even have different numbers of rows in our result set depending upon the order of the JOINs. For example, let the condition for JOINing A on B be `A.b_id = B.id`, but the condition for JOINing B on C be `B.id IS NULL`.
Thus we get these two queries, with very different output:
```
SELECT * FROM (A LEFT OUTER JOIN B ON A.b_id = B.id)
LEFT OUTER JOIN C ON B.id IS NULL;
SELECT * FROM A
LEFT OUTER JOIN (B LEFT OUTER JOIN C ON B.id IS NULL)
ON A.b_id = B.id;
```
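The different row counts are easy to reproduce with sqlite via Python's built-in sqlite3 module (the table contents below are made up, and the right-hand grouping is written as a subquery to force the join order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER, b_id INTEGER);
    CREATE TABLE B (id INTEGER);
    CREATE TABLE C (id INTEGER);
    INSERT INTO A VALUES (1, 1), (2, 99);  -- b_id 99 matches nothing in B
    INSERT INTO B VALUES (1);
    INSERT INTO C VALUES (10), (20);
""")

# (A LEFT JOIN B) LEFT JOIN C -- the default left-to-right grouping
n_left = conn.execute("""
    SELECT COUNT(*) FROM A
    LEFT OUTER JOIN B ON A.b_id = B.id
    LEFT OUTER JOIN C ON B.id IS NULL
""").fetchone()[0]

# A LEFT JOIN (B LEFT JOIN C), grouping forced via a subquery
n_right = conn.execute("""
    SELECT COUNT(*) FROM A
    LEFT OUTER JOIN (SELECT B.id AS bid FROM B
                     LEFT OUTER JOIN C ON B.id IS NULL) BC
    ON A.b_id = BC.bid
""").fetchone()[0]

print(n_left, n_right)  # 3 2 -- different row counts, so not associative
```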
You can see this in action here: <http://sqlfiddle.com/#!9/d59139/1> | In this thread, it is said that they are not associative: [Is LEFT OUTER JOIN associative?](https://stackoverflow.com/questions/9614922/does-the-join-order-matters-in-sql)
However, I've found a book online where it is stated that OUTER JOINs are associative when the tables on the far left side and far right side have no attributes in common ([here](http://books.google.pl/books?id=rZNFwWub_4IC&pg=PA34&lpg=PA34&dq=Are+left+outer+joins+associative&source=bl&ots=r1a6H56BqX&sig=I6YUodAVqWZ5zk5reFuE0iDKr6U&hl=pl&sa=X&ei=7b2HUoaaHqSn0QW4l4CYDQ&ved=0CGUQ6AEwBw#v=onepage&q=Are%20left%20outer%20joins%20associative&f=false)).
Here is a graphical presentation (MSPaint ftw):

Another way to look at it:
Since you said that table A joins with B, and B joins with C, then:
* When you first join A and B, you are left with all records from A. Some of them have values from B. Now, for *some* of those rows for which you got value from B, you get values from C.
* When you first join B and C, you and up with the whole table B, where some of the records have values from C. Now, you take all records from A and join some of them with all rows from B joined with C. Here, again, you get all rows from A, but some of them have values from B, some of which have values from C.
I don't see any possibility where, in conditons described by you, there would be a data loss depending on the sequence of LEFT joins.
Based on the data provided by Tilak in his answer (which is now deleted), I've built a simple test case:
```
CREATE TABLE atab (id NUMBER, val VARCHAR2(10));
CREATE TABLE btab (id NUMBER, val VARCHAR2(10));
CREATE TABLE ctab (id NUMBER, val VARCHAR2(10));
INSERT INTO atab VALUES (1, 'A1');
INSERT INTO atab VALUES (2, 'A2');
INSERT INTO atab VALUES (3, 'A3');
INSERT INTO btab VALUES (1, 'B1');
INSERT INTO btab VALUES (2, 'B2');
INSERT INTO btab VALUES (4, 'B4');
INSERT INTO ctab VALUES (1, 'C1');
INSERT INTO ctab VALUES (3, 'C3');
INSERT INTO ctab VALUES (5, 'C5');
SELECT ab.aid, ab.aval, ab.bval, c.val AS cval
FROM (
SELECT a.id AS aid, a.val AS aval, b.id AS bid, b.val AS bval
FROM atab a LEFT OUTER JOIN btab b ON (a.id = b.id)
) ab
LEFT OUTER JOIN ctab c ON (ab.bid = c.id)
ORDER BY ab.aid
;
```
```
AID AVAL BVAL CVAL
---------- ---------- ---------- ----------
1 A1 B1 C1
2 A2 B2
3 A3
```
```
SELECT a.id, a.val AS aval, bc.bval, bc.cval
FROM
atab a
LEFT OUTER JOIN (
SELECT b.id AS bid, b.val AS bval, c.id AS cid, c.val AS cval
FROM btab b LEFT OUTER JOIN ctab c ON (b.id = c.id)
) bc
ON (a.id = bc.bid)
ORDER BY a.id
;
```
```
ID AVAL BVAL CVAL
---------- ---------- ---------- ----------
1 A1 B1 C1
2 A2 B2
3 A3
```
It seems in this particular example, that both solutions give the same result. I can't think of any other dataset that would make those queries return different results.
Check at SQLFiddle:
* [MySQL](http://sqlfiddle.com/#!2/edca3b/1)
* [Oracle](http://sqlfiddle.com/#!4/edca3)
* [PostgreSQL](http://sqlfiddle.com/#!15/edca3/1)
* [SQLServer](http://sqlfiddle.com/#!3/edca3/1) | Are left outer joins associative? | [
"",
"sql",
"join",
"outer-join",
""
] |
I'm currently working on an interesting problem and thought I'd share it to solicit ideas.
I have a table, lets call it *Table One*. Each *ID* has an **effective date range** implied by **datetimes** (these are actually DATETIME types, even though I've presented them strangely below) *Date A* and *Date B*. So for example, ID 1 was in effect from 1-Nov until 3-Nov.

From this table, I'd like to produce a table similar to *Target Table* (pictured below), where each ID/Effective Date combination is listed row-by-row:

To accomplish this, I also have a table containing sequential dates, with the string format for the date, along with a start datetime and an end datetime:

My experience is mostly with C#, and while I could whip up a script to accomplish this, I worry that I would be hacking around what can be accomplished with pure SQL. | You can do it with a join, but considering your data types, it's a bit convoluted. The best thing to do is to use columns with the DATE datatype in your Table One, instead of strings. Then you can do something like this:
```
SELECT t2.DateStr, t1.Id
FROM TableOne t1
JOIN TableTwo t2 ON t2.StartDatetime BETWEEN t1.DateA AND t1.DateB
ORDER BY t2.DateStr, t1.Id;
``` | What you need is a calendar table - like you have, but using actual `DATETIME` values - then just join your data onto it:
```
SELECT c.ShortDate, t.Id, t.StartDate, t.EndDate
FROM dbo.Calendar c
JOIN tbl t ON c.ShortDate BETWEEN t.StartDate AND t.EndDate
ORDER BY c.ShortDate, t.Id
```
Sidenote: keep all dates as `DATETIME` (or related data type) and let the client format it. | SQL - Creating a historical fact table from latest snapshot data | [
"",
"sql",
"sql-server",
"sql-server-2008",
"datetime",
"business-intelligence",
""
] |
I need the LAST\_NAME, JOB\_ID, and SALARY columns from the workers table; the job id needs to be SH\_CLERK or SA\_REP, and the SALARY column CANNOT be equal to 1000, 2600, 3000, 7000, 8000, or 9000.
```
SELECT `LAST_NAME`, `JOB_ID`, `SALARY` FROM `workers`
WHERE `JOB_ID` = SH_CLERK OR SA_REP AND `SALARY` != 1000 and 2600 and 3000 and 7000 and 8000 and 9000;
```
I don't know what's wrong; I get the #1054 - Unknown column 'SH\_CLERK' in 'where clause' error.
Thanks ! | Put quotes around strings.
Switch multiple `OR`'s to using `IN` instead.
```
SELECT LAST_NAME, JOB_ID, SALARY
FROM workers
WHERE JOB_ID IN ('SH_CLERK', 'SA_REP')
AND SALARY NOT IN (1000, 2600, 3000, 7000, 8000, 9000);
``` | ```
SELECT LAST_NAME, JOB_ID, SALARY
FROM workers
WHERE JOB_ID IN ('SH_CLERK', 'SA_REP') AND
SALARY NOT IN (1000, 2600, 3000, 7000,8000, 9000)
``` | SQL table where clause error | [
"",
"mysql",
"sql",
""
] |
I have a table with CUSTOMERNAME and REDIRECTNAME columns in my REDIRECTS table as follow
```
ID NAME REDIRECTLINK
1 Gregory XYUS_555
2 Sam VYU_787
3 Smith XYUS_555
4 John PPIU_987
```
So basically I want to update the duplicate and append a number to it; for example, for Smith it should be:
```
Smith XYUS_555_01
```
This will take care of the duplicate. I am just not sure how to go about updating only the duplicate. I have the following to find the duplicates:
```
SELECT
REDIRECTLINK, COUNT(*) dupcount
FROM
REDIRECTS
WHERE
REDIRECTLINK IS NOT NULL
GROUP BY
REDIRECTLINK
HAVING
COUNT(*) > 1
```
This tells me how many dups there are per redirect link, but how can I go about updating the dups?
```
UPDATE REDIRECTS
SET REDIRECTLINK = REDIRECTLINK + '01' --NOT SURE HOW TO APPROACH THIS
WHERE REDIRECTLINK IN (
SELECT REDIRECTLINK
FROM REDIRECTS
GROUP BY REDIRECTLINK
HAVING ( COUNT(REDIRECTLINK) > 1 )
)
```
I am having issues with the code above because I am not sure how to link it to the duplicate found, and not update both rows. | First determine the subnumber of the RedirectLink by using ROW\_NUMBER() (and subtract 1 to make it start at zero). Then use it in an update to update all records that have a subnumber above zero.
```
with NT as (
select
ID,
Row_Number() over (PARTITION BY RedirectLink ORDER BY ID)-1 as Nr
from Table1
)
update T
set T.RedirectLink = T.RedirectLink + '_' + cast(NT.Nr as varchar)
FROM Table1 T
JOIN NT ON (NT.ID = T.ID)
where Nr>0
``` | You can `UPDATE` using `JOIN`:
```
;WITH cte AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY Id DESC) AS rn
FROM REDIRECTS
)
UPDATE REDIRECTS
SET REDIRECTLINK = REDIRECTLINK + '01'
FROM cte
INNER JOIN REDIRECTS
ON REDIRECTS.Id = cte.Id
WHERE cte.rn = 1;
```
You can see [this post](https://stackoverflow.com/questions/17931760/updating-single-row-per-group) for more details | Updating duplicate column with value | [
"",
"sql",
"sql-server-2008",
""
] |
This pertains to Oracle 11g development. Can a column be defined in its view definition to transform its data?
For example,
```
CREATE VIEW TestView AS SELECT Col1, Col2, TestColumn FROM TestTable;
```
For TestColumn, the originating table holds the values (True or False), but they should be displayed as (Y, N). How could I make this transformation happen in the view?
1. Could a user-defined function included in the view definition be of any help?
2. using virtual transformation using $dpconf. ([Reference here](http://docs.oracle.com/cd/E20295_01/html/821-1220/virtual-trans-examples.html))
Which approach would you recommend, or is there a better solution? I appreciate your helpful input in advance. | ```
CREATE VIEW TestView AS
SELECT Col1,
Col2,
DECODE( TestColumn, 0, 'N', 1, 'Y', null ) AS TestColumn
FROM TestTable;
```
[SQLFIDDLE](http://sqlfiddle.com/#!4/32564/1/0) | You can do that in the query itself:
```
SELECT CASE WHEN COL='TRUE' THEN 'Y' WHEN COL='FALSE' THEN 'N' END AS NEW_COL_NAME ...
``` | Oracle View Data Transformation | [
"",
"sql",
"oracle",
"view",
""
] |
I am trying to insert some data into SQL Server 2008 R2 by using JPA and Hibernate. Everything "works", except that it's very slow. Inserting 20000 rows takes about 45 seconds, while a C# script takes less than 1 second.
Can any veteran in this domain offer some help? I would appreciate it a lot.
Update: I got some great advice from the answers below, but it still doesn't work as expected. The speed is the same.
Here is the updated persistence.xml:
```
<persistence version="2.0"
xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="ClusterPersist"
transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<class>cluster.data.persist.sqlserver.EventResult</class>
<exclude-unlisted-classes>true</exclude-unlisted-classes>
<properties>
<property name="javax.persistence.jdbc.url"
value="jdbc:sqlserver://MYSERVER:1433;databaseName=MYTABLE" />
<property name="javax.persistence.jdbc.user" value="USER" />
<property name="javax.persistence.jdbc.password" value="PASSWORD" />
<property name="javax.persistence.jdbc.driver"
value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
<property name="hibernate.show_sql" value="false" />
<property name="hibernate.hbm2ddl.auto" value="update" />
<property name="hibernate.connection.provider_class"
value="org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider" />
<property name="hibernate.c3p0.max_size" value="100" />
<property name="hibernate.c3p0.min_size" value="0" />
<property name="hibernate.c3p0.acquire_increment" value="1" />
<property name="hibernate.c3p0.idle_test_period" value="300" />
<property name="hibernate.c3p0.max_statements" value="0" />
<property name="hibernate.c3p0.timeout" value="100" />
<property name="hibernate.jdbc.batch_size" value="50" />
<property name="hibernate.cache.use_second_level_cache" value="false" />
</properties>
</persistence-unit>
```
And here is the updated code part:
```
public static void writeToDB(String filePath) throws IOException {
EntityManager entityManager = entityManagerFactory.createEntityManager();
Session session = (Session) entityManager.getDelegate();
Transaction tx = session.beginTransaction();
int i = 0;
URL filePathUrl = null;
try {
filePathUrl = new URL(filePath);
} catch (MalformedURLException e) {
filePathUrl = (new File(filePath)).toURI().toURL();
}
String line = null;
BufferedReader stream = null;
try {
InputStream in = filePathUrl.openStream();
stream = new BufferedReader(new InputStreamReader(in));
// Read each line in the file
MyRow myRow = new MyRow();
while ((line = stream.readLine()) != null) {
String[] splitted = line.split(",");
int num1 = Integer.valueOf(splitted[1]);
float num2= Float.valueOf(splitted[6]).intValue();
myRow.setNum1(num1);
myRow.setNum2(num2);
session.save(myRow);
if (i % 50 == 0) {
session.flush();
session.clear();
}
i++;
}
tx.commit();
} finally {
if (stream != null)
stream.close();
}
session.close();
}
```
Updated, here is the source for MyRow:
```
@Entity
@Table(name="MYTABLE")
public class MyRow {
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private Long id;
@Basic
@Column(name = "Num1")
private int Num1;
@Basic
@Column(name = "Num2")
private float Num2;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public int getNum1() {
return Num1;
}
public void setNum1(int num1) {
Num1 = num1;
}
public float getNum2() {
return Num2;
}
public void setNum2(float num2) {
Num2 = num2;
}
}
``` | To enable [JDBC batching](http://docs.jboss.org/hibernate/orm/4.0/devguide/en-US/html/ch04.html) you should initialize the property hibernate.jdbc.batch\_size to between 10 and 50 (int only)
```
hibernate.jdbc.batch_size=50
```
If it's still not as fast as expected, then I'd review the document above paying attention to NOTE(s) and section 4.1. Especially the NOTE that says, "Hibernate disables insert batching at the JDBC level transparently if you use an identity identifier generator." | ## The problem
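Since the mapped id in the question uses GenerationType.IDENTITY, one hypothetical workaround in that direction is to switch MyRow to a table-based generator so Hibernate can pre-allocate ids and keep JDBC batching enabled. The generator, table, and column names below are made up for illustration, not taken from the original code:

```java
// Sketch only: a table-based id generator instead of IDENTITY, so that
// Hibernate can batch inserts (IDENTITY forces a round trip per row).
// Requires the javax.persistence annotations already used by MyRow.
@Id
@GeneratedValue(strategy = GenerationType.TABLE, generator = "myrow_gen")
@TableGenerator(name = "myrow_gen", table = "ID_GEN",
        pkColumnName = "GEN_NAME", valueColumnName = "GEN_VALUE",
        allocationSize = 50)
private Long id;
```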
One of the major performance hits if you use Hibernate as your ORM is the way its "dirty check" is implemented (because without Byte Code Enhancement, which is standard in all JDO based ORMs and some others, dirty checking will always be an inefficient hack).
When flushing, a dirty check needs to be carried out on every object in the session to see if it is "dirty" i.e. one of its attributes has changed since it was loaded from the database. For all "dirty" (changed) objects Hibernate has to generate SQL updates to update the records that represent the dirty objects.
The Hibernate dirty check is notoriously slow on anything but a small number of objects because it needs to perform a "field by field" comparison between objects in memory with a snapshot taken when the object was first loaded from the database. The more objects, say, a HTTP request loads to display a page, then the more dirty checks will be required when commit is called.
## Technical details of Hibernate's dirty checking mechanism
You can read more about Hibernate's dirty check mechanism implemented as a "field by field" comparison here:
[How does Hibernate detect dirty state of an entity object?](https://stackoverflow.com/questions/5268466/how-does-hibernate-detect-dirty-state-of-an-entity-object)
## How the problem is solved in other ORMs
A much more efficient mechanism used by some other ORMs is to use an automatically generated "dirty flag" attribute instead of the "field by field" comparison but this has traditionally only been available in ORMs (typically JDO based ORMs) that use and promote byte code enhancement or byte code 'weaving' as it is sometimes called eg., <http://datanucleus.org> and others
During byte code enhancement, by DataNucleus or any of the other ORMs supporting this feature, each entity class is enhanced to:
* add an implicit dirty flag attribute
* add the code to each of the setter methods in the class to automatically set the dirty flag when called
Then during a flush, only the dirty flag needs to be checked instead of performing a field by field comparison - which, as you can imagine, is orders of magnitude faster.
## Other negative consequences of "field by field" dirty checking
The other inefficiency of the Hibernate dirty checking is the need to keep a snapshot of every loaded object in memory, to avoid having to reload and check against the database during dirty checking.
Each object snapshot is a collection of all its fields.
In addition to the performance hit of the Hibernate dirty checking mechanism at flush time, this mechanism also burdens your app with the extra memory consumption and CPU usage associated with instantiating and initializing these snapshots of every single object that is loaded from the database - which can run into the thousands or millions depending on your application.
Hibernate has introduced byte code enhancement to address this but I have worked on many ORM persisted projects (both Hibernate and non Hibernate) and I am yet to see a Hibernate persisted project that uses that feature, possibly due to a number of reasons:
* Hibernate has traditionally promoted its "no requirement for byte code enhancement" as a feature when people evaluate ORM technologies
* Historical reliability issues with Hibernate's byte code enhancement implementation which is possibly not as mature as ORMs that have used and promoted byte code enhancement from the start
* Some people are still scared of using byte code enhancement due to the promotion of an anti 'byte code enhancement' stance and the fear certain groups instilled in people regarding the use of byte code enhancement in the early days of ORMs
These days byte code enhancement is used for many different things - not just persistence. It has almost become mainstream. | JPA with HIBERNATE insert very slow | [
"",
"sql",
"sql-server",
"database",
"hibernate",
"jpa",
""
] |
I'm having trouble getting a sproc to give me this type of output:
```
+-----------+------------------+
| Month | NoOfModifcations |
+-----------+------------------+
| Jan 2008 | 5 |
| Nov 2008 | 6 |
| Feb 2010 | 20 |
| Jul 2013 | 7 |
+-----------+------------------+
```
So far I managed to get the output to look like this, but I haven't yet managed to get the output sorted by year, then month - it's going alphabetically. Here's what I have so far:
```
SELECT convert(varchar(4), YEAR(LastModifedDate)) + ' ' + convert(varchar(3),datename(month, LastModifedDate)) as Dates,
count(*) as Number
FROM aims.Modification
WHERE CompanyID = @companyID
AND LastModifedDate >= DATEADD(month, @numberOfMonths * -1, GETDATE())
GROUP BY convert(varchar(4), YEAR(LastModifedDate)) + ' ' + convert(varchar(3),datename(month, LastModifedDate))
```
I did try an ORDER BY on the datetime field but got:
```
Column "aims.Modification.LastModifedDate" is invalid in the ORDER BY clause because it is not contained in either an aggregate function or the GROUP BY clause.
```
It's also very messy converting/selecting/grouping by the datetime as a string. Any advice there would also be very helpful!
Thanks in advance! | This should do:
```
SELECT CONVERT(VARCHAR(4),YEAR(LastModifedDate)) + ' ' +
CONVERT(VARCHAR(3),DATENAME(MONTH,LastModifedDate)) AS Dates,
COUNT(*) as Number
FROM aims.Modification
WHERE CompanyID = @companyID
AND LastModifedDate >= DATEADD(MONTH,@numberOfMonths * -1,GETDATE())
GROUP BY CONVERT(VARCHAR(4),YEAR(LastModifedDate)) + ' ' +
CONVERT(VARCHAR(3),DATENAME(MONTH,LastModifedDate)),
CONVERT(VARCHAR(6),LastModifedDate,112)
ORDER BY CONVERT(VARCHAR(6),LastModifedDate,112)
``` | ```
SELECT YEAR(LastModifedDate), MONTH(LastModifedDate) as Dates, count(*) as Number
FROM aims.Modification
WHERE CompanyID = @companyID
AND LastModifedDate >= DATEADD(month, @numberOfMonths * -1, GETDATE())
GROUP BY YEAR(LastModifedDate), MONTH(LastModifedDate)
ORDER BY YEAR(LastModifedDate), MONTH(LastModifedDate)
```
Keep it simple and make the concatenation in your C# code or report... | Count number of rows within each month of a datetime, group by year month | [
"",
"sql",
"sql-server",
"datetime",
"count",
"group-by",
""
] |
Is there a function that is similar to `COALESCE` but for values that are not `NULL`?
I need to replace a return value of a scalar-valued-function with another value if it returns the string literal `N/A`. If it would return `NULL` i could use:
```
SELECT COALESCE(dbo.GetFirstSsnInFromSsnOut(RMA.IMEI), RMA.IMEI) AS [IMEI to credit]
, OtherColumns
FROM dbo.TableName
```
But since it returns `N/A` this doesn't work. I could also use:
```
SELECT CASE WHEN dbo.GetFirstSsnInFromSsnOut(RMA.IMEI)='N/A'
THEN RMA.IMEI
ELSE dbo.GetFirstSsnInFromSsnOut(RMA.IMEI)
END AS [IMEI To Credit]
, OtherColumns
FROM dbo.TableName
```
But that would be inefficient since it needs to execute the function twice per record.
Note that this query is in a table-valued function. | Perhaps you can use the [NULLIF](http://technet.microsoft.com/en-us/library/ms177562.aspx) function there, for example:
```
SELECT ISNULL(NULLIF(dbo.GetFirstSsnInFromSsnOut(RMA.IMEI), 'N/A'), RMA.IMEI) AS [IMEI to credit]
, OtherColumns
FROM dbo.TableName;
``` | If you can't change `GetFirstSsnInFromSsnOut` to return NULL values, then try this:
```
SELECT COALESCE(dbo.GetFirstSsnInFromSsnOut(NULLIF(RMA.IMEI, 'N/A')), RMA.IMEI) AS [IMEI to credit]
     , OtherColumns
FROM dbo.TableName
``` | ISNULL/COALESCE counterpart for values that are not NULL | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
"sql-function",
""
] |
I have an id range (defined by startId and stopId). I have a table with some records (every record has an id - the primary key). Now I need to select all ids from the specified range that don't exist in the table. I am using a Postgres database. Please suggest what options I have to perform such a query. | You may look at the [generate\_series](http://www.postgresql.org/docs/9.3/interactive/functions-srf.html)() function.
Then use an except clause to get the difference.
```
select s.a from generate_series(<start>, <stop>) as s(a)
except
select id from <myTable>
where <yourClause>
--order by a
```
See [SqlFiddle](http://sqlfiddle.com/#!15/5129b/10) | You can use the: <http://www.postgresql.org/docs/9.1/static/functions-srf.html>
```
SELECT all_ids FROM generate_series(startId, stopId) AS all_ids
WHERE all_ids NOT IN (SELECT id FROM table WHERE id >= startId AND id <= stopId);
``` | Select ids that don't exist in a table | [
"",
"sql",
"database",
"postgresql",
""
] |
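The accepted generate_series/EXCEPT idea above, sketched with Python's sqlite3 (SQLite has no built-in `generate_series`, so a recursive CTE stands in; the range 1..6 and ids are invented):

```python
import sqlite3

# Build the full id range with a recursive CTE, then subtract the
# existing ids with EXCEPT - same shape as the Postgres answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (5,)])
missing = conn.execute("""
    WITH RECURSIVE s(a) AS (
        SELECT 1 UNION ALL SELECT a + 1 FROM s WHERE a < 6)
    SELECT a FROM s EXCEPT SELECT id FROM t ORDER BY a
""").fetchall()
print(missing)  # [(3,), (4,), (6,)]
```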
I've 2 columns which I want to use a condition on, but I get the error message below even though my query is correct - I'll come to that soon.
> Msg 8114, Level 16, State 5, Line 1
> Error converting data type varchar to float.
So this is the problem: I have a temp table in which the ID number looks like 9001011234 (we can call it A); in the other table that I want to check against, it looks like 900101-1234 (call it B) - this is the Swedish format for ID numbers.
So in my condition I want to check this to get the right amount and the correct result.
```
where A = B
```
The rest of the query is fine; when I remove this condition, it gives me a result. It's just this one bit that is incorrect. | You have a `VARCHAR` format that can't be trivially transformed to a number. I'd use `REPLACE(b,'-','') = a` to fix the format, and let SQL Server take care of the rest. | Say:
```
where A = CAST(REPLACE(B, '-', '') AS float)
``` | SQL converting float to varchar | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
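The accepted fix above normalizes B before comparing; `REPLACE` behaves the same in SQLite, SQL Server and MySQL, so a one-line check with Python's sqlite3 (sample value from the question):

```python
import sqlite3

# Strip the hyphen from the Swedish-format id so both columns compare equal.
conn = sqlite3.connect(":memory:")
val = conn.execute("SELECT REPLACE('900101-1234', '-', '')").fetchone()[0]
print(val)  # 9001011234
```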
I'm using a `CASE` statement to help populate a temp table, but for some reason I'm getting an Operand type clash error:
```
CREATE TABLE #users
(
id INT IDENTITY (1, 1),
userid UNIQUEIDENTIFIER,
username VARCHAR(50),
sysrole VARCHAR(50)
)
--Insert all loadtest users into temp table
INSERT INTO #users
(userid,
username,
sysrole)
SELECT id,
username,
sysrole =
CASE
WHEN id between 0 and 100 THEN '63F7971B-45FF-4E3C-86E7-9D0507AF60C0'
WHEN id between 101 and 200 THEN '26478736-7937-4E52-891B-E80A8D609693'
WHEN id between 201 and 300 THEN 'CEF5F44F-0961-414C-8578-22BEB0C4B9FF'
WHEN id between 301 and 400 THEN '63F7971B-45FF-4E3C-86E7-9D0507AF60C0'
WHEN id between 401 and 500 THEN '7FD13C9E-731B-43AE-95F0-E9AEDC65759C'
END
FROM user
WHERE username LIKE '%Test%'
```
There are 500 users, so the CASE does handle all those possibilities. However, the operand type clash error (reported on the line where the INSERT INTO begins) reads `uniqueidentifier is incompatible with tinyint`.
Is the `IDENTITY` column not autopopulating during the `INSERT`?
EDIT: I just realized it would be useful to know what the `user` table looks like. It has about a dozen columns, but the ones I'm wanting to pull from it are:
`id` which is the primary key
`username` which is a string (uniqueidentifier on user)
`sysrole` is getting defined by the `CASE` as you can see. | If just you want to assign five string values to 500 users arbitrarily, one to each hundred, you should consider a different approach:
```
WITH enumerated AS (
SELECT
id,
username,
rn = ROW_NUMBER() OVER (ORDER BY username)
FROM user
WHERE username LIKE '%test%'
)
SELECT
rn,
id,
username,
sysrole = CASE
WHEN rn between 0 and 100 THEN '63F7971B-45FF-4E3C-86E7-9D0507AF60C0'
WHEN rn between 101 and 200 THEN '26478736-7937-4E52-891B-E80A8D609693'
WHEN rn between 201 and 300 THEN 'CEF5F44F-0961-414C-8578-22BEB0C4B9FF'
WHEN rn between 301 and 400 THEN '63F7971B-45FF-4E3C-86E7-9D0507AF60C0'
WHEN rn between 401 and 500 THEN '7FD13C9E-731B-43AE-95F0-E9AEDC65759C'
END
FROM enumerated
;
```
The `enumerated` subquery assigns row numbers to the selected rows and the main query uses the numbers to assign `sysrole` values.
And you don't need a temporary table for that specific task. | I have vague recollections that tinyint only goes up to 127. You can look it up. The solution is to change the datatype to smallint.
**Edit**
I looked it up. I had the right value, but for the wrong database engine. Solution still applies though. | Temp Table giving Operand Clash error | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
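The ROW_NUMBER bucketing from the accepted answer above, scaled down to six users and two buckets as a sketch in Python's sqlite3 (needs SQLite 3.25+ for window functions; user names and role strings are invented):

```python
import sqlite3

# Enumerate the matching rows, then CASE on the row number to assign a role.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, "Test%03d" % i) for i in range(1, 7)])
rows = conn.execute("""
    WITH enumerated AS (
        SELECT id, ROW_NUMBER() OVER (ORDER BY username) AS rn FROM users)
    SELECT id, CASE WHEN rn BETWEEN 1 AND 3 THEN 'role-A' ELSE 'role-B' END
    FROM enumerated ORDER BY id""").fetchall()
print(rows)
```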
I have a database with the following tables:
```
student(sid, name,address)
course(cid,type,department)
takes(sid,cid,score)
```
The query which i need to solve is
Find the students such that, for each course they take, their score on the course is higher than any other student who has taken that same course.
I know I have to take a self join in such cases but still confused how to solve such queries ! | ```
SELECT
s.sid,
s.name
FROM student s
WHERE NOT EXISTS (
SELECT *
FROM takes t1
JOIN takes t2 ON (t1.cid = t2.cid AND t1.sid <> t2.sid AND t2.score > t1.score)
WHERE t1.sid = s.sid
);
```
We're selecting all students, for whom we can't find any course they took where someone else had a higher score.
[Sqlfiddle](http://sqlfiddle.com/#!4/7ccf1/1/0) | Try this:
```
SELECT s.*
FROM student s
INNER JOIN (
SELECT t1.*
FROM takes t1
LEFT JOIN takes t2
ON (t1.cid = t2.cid AND t1.score < t2.score)
WHERE t2.cid IS NULL
) b ON b.sid = s.sid
```
`sqlfiddle demo`
You can see in the fiddle that if two users have the same score that is the best, both will be reurned.
You find the row with the best score in table `takes`, and you JOIN `Student` with it
(I started using this way to get the max value from a table based on [this](https://stackoverflow.com/a/123481/1385896) answer from Bill karwin. You can check his detailed explanation on that answer on how it works). | Comparing each value with rest in oracle | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
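A minimal, runnable check of the accepted NOT EXISTS pattern above, using Python's sqlite3 with invented scores (only the `takes` table is needed to see the filtering):

```python
import sqlite3

# Student 3 is the only one never out-scored on any course they take.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE takes (sid INT, cid INT, score INT);
    INSERT INTO takes VALUES (1,10,90),(2,10,80),(1,20,70),(3,20,95);
""")
top = conn.execute("""
    SELECT DISTINCT t.sid FROM takes t
    WHERE NOT EXISTS (
        SELECT 1 FROM takes t1
        JOIN takes t2 ON t1.cid = t2.cid AND t1.sid <> t2.sid
                     AND t2.score > t1.score
        WHERE t1.sid = t.sid)
    ORDER BY t.sid""").fetchall()
print(top)  # [(3,)]
```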
I have created the following view in MySQL:
```
CREATE VIEW MYVIEW AS (
SELECT A.FNAME
, A.LNAME
, B.EMAIL
FROM EMPLOYEE A, EMPEMAIL B
WHERE A.EID = :empId
AND A.EID = B.EID
AND B.EMAILTYP = :emailType)
```
I have to make `empId` and `emailType` dynamic i.e. get the result for the desired values of `empId` and `emailType` at runtime.
What changes should be made in the code?
Thanks in advance. | Just create the view without the parameters (i.e., to take care of the join only):
```
CREATE VIEW MYVIEW AS (
SELECT A.FNAME
, A.LNAME
, B.EMAIL
, A.EID AS EID -- added to be used in the WHERE
, B.EMAILTYP AS EMAILTYP -- added to be used in the WHERE
FROM EMPLOYEE A, EMPEMAIL B
WHERE A.EID = B.EID)
```
And apply the dynamic parameters when you query:
```
SELECT FNAME, LNAME, EMAIL
FROM my_view
WHERE eid = 'your_empId' AND emailtyp = 'your_emailType'
```
Note the `WHERE` shown above, it uses the two extra fields declared in the `VIEW` | You can use this solution with a function -
```
CREATE FUNCTION func() RETURNS int(11)
RETURN @var;
CREATE VIEW view1 AS
SELECT * FROM table1 WHERE id = func();
```
Using example:
```
SET @var = 1;
SELECT * FROM view1;
``` | How to pass dynamic parameters to a MySQL view | [
"",
"mysql",
"sql",
"select",
"view",
"mysql-workbench",
""
] |
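The accepted "parameter-free view, filter at query time" approach above, sketched with Python's sqlite3 (tables, names and emails invented):

```python
import sqlite3

# The view carries no parameters; eid/emailtyp are filtered at query
# time with ordinary placeholders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (eid INT, fname TEXT);
    CREATE TABLE empemail (eid INT, emailtyp TEXT, email TEXT);
    INSERT INTO employee VALUES (1,'Ann'),(2,'Bob');
    INSERT INTO empemail VALUES (1,'work','ann@x'),(2,'work','bob@x');
    CREATE VIEW myview AS
        SELECT a.fname, b.email, a.eid, b.emailtyp
        FROM employee a JOIN empemail b ON a.eid = b.eid;
""")
row = conn.execute(
    "SELECT fname, email FROM myview WHERE eid = ? AND emailtyp = ?",
    (2, "work")).fetchone()
print(row)  # ('Bob', 'bob@x')
```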
I have a MySQL table that looks like the following:
```
player_rankings | CREATE TABLE `player_rankings` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`award_id` int(11) NOT NULL,
`rank_1` int(11) DEFAULT NULL,
`rank_2` int(11) DEFAULT NULL,
`rank_3` int(11) DEFAULT NULL,
`rank_4` int(11) DEFAULT NULL,
`rank_5` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `user_id` (`user_id`,`award_id`)
)
```
`rank_1` through `rank_5` contain ids of players that I have in a different table, but for the purpose of this query, it isn't relevant. What I am trying to do is count how many times a value shows up in each `rank_*` column and then give it the following amount of points:
* `rank_1` - 5 points
* `rank_2` - 4 points
* `rank_3` - 3 points
* `rank_4` - 2 points
* `rank_5` - 1 point
The end result would sort from highest to lowest (descending) the player ids with their total point count. | To get the sum of points per rank:
```
SELECT user_id,
sum(rank1Points) rank1Points,
sum(rank2Points) rank2Points,
sum(rank3Points) rank3Points,
sum(rank4Points) rank4Points,
sum(rank5Points) rank5Points
FROM (
SELECT pr.rank_1 user_id, count(*) * 5 rank1Points, 0 rank2Points, 0 rank3Points, 0 rank4Points, 0 rank5Points
FROM player_rankings pr
GROUP BY pr.rank_1
UNION ALL
SELECT pr.rank_2, 0, count(*) * 4, 0, 0, 0
FROM player_rankings pr
GROUP BY pr.rank_2
UNION ALL
SELECT pr.rank_3, 0, 0, count(*) * 3, 0, 0
FROM player_rankings pr
GROUP BY pr.rank_3
UNION ALL
SELECT pr.rank_4, 0, 0, 0, count(*) * 2, 0
FROM player_rankings pr
GROUP BY pr.rank_4
UNION ALL
SELECT pr.rank_5, 0, 0, 0, 0, count(*)
FROM player_rankings pr
GROUP BY pr.rank_5
) s
GROUP BY user_id
```
Fiddle [here](http://sqlfiddle.com/#!2/0af63/1).
To get the total number of points together in one column:
```
SELECT user_id,
sum(rankPoints) rankPoints
FROM (
SELECT pr.rank_1 user_id, count(*) * 5 rankPoints
FROM player_rankings pr
GROUP BY pr.rank_1
UNION ALL
SELECT pr.rank_2, count(*) * 4
FROM player_rankings pr
GROUP BY pr.rank_2
UNION ALL
SELECT pr.rank_3, count(*) * 3
FROM player_rankings pr
GROUP BY pr.rank_3
UNION ALL
SELECT pr.rank_4, count(*) * 2
FROM player_rankings pr
GROUP BY pr.rank_4
UNION ALL
SELECT pr.rank_5, count(*)
FROM player_rankings pr
GROUP BY pr.rank_5
) s
GROUP BY user_id
```
Fiddle [here](http://sqlfiddle.com/#!2/0af63/3). | For your table structure, maybe a good solution is to calculate everything in a script: just fetch all the rows and then calculate.
If you want to do it all in one SQL query, then you can try using UNION with 5 subqueries
```
SELECT user_id, rank FROM(
SELECT rank_1 AS user_id, 5 AS rank FROM player_rankings
UNION
SELECT rank_2 AS user_id, 4 AS rank FROM player_rankings
....
) AS u;
```
This is not a working script, but I hope you understand my solution | MySQL query to count and score multiple columns | [
"",
"mysql",
"sql",
""
] |
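The accepted UNION ALL scoring approach above, scaled down to two rank columns as a runnable sketch in Python's sqlite3 (player ids invented):

```python
import sqlite3

# Each branch counts appearances in one rank column and weights them;
# the outer query sums the weighted counts per player.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pr (rank_1 INT, rank_2 INT)")
conn.executemany("INSERT INTO pr VALUES (?, ?)", [(7, 8), (7, 9)])
pts = conn.execute("""
    SELECT user_id, SUM(p) FROM (
        SELECT rank_1 AS user_id, COUNT(*) * 5 AS p FROM pr GROUP BY rank_1
        UNION ALL
        SELECT rank_2, COUNT(*) * 4 FROM pr GROUP BY rank_2)
    GROUP BY user_id ORDER BY SUM(p) DESC, user_id""").fetchall()
print(pts)  # [(7, 10), (8, 4), (9, 4)]
```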
I currently use If Not RS.EOF And Not RS.BOF with a DAO Recordset, but I cannot use DAO in the new SQL backend environment. The code is as follows:
```
Function CloseSession()
'This closes the open session
Dim Rs As DAO.Recordset
Set Rs = CurrentDb.OpenRecordset("SELECT * FROM Tbl_LoginSessions WHERE fldLoginKey =" & LngLoginId)
If Not Rs.EOF And Not Rs.BOF Then
Rs.Edit
Rs.Fields("fldLogoutEvent").Value = Now()
Rs.Update
Rs.Close
End If
Set Rs = CurrentDb.OpenRecordset("SELECT * FROM [Tbl_Users] WHERE PKUserID =" & LngUserID)
'Flag user as being logged out
If Not Rs.EOF And Not Rs.BOF Then
Rs.Edit
Rs.Fields("fldLoggedIn").Value = 0
Rs.Fields("FldComputer").Value = ""
Rs.Update
Rs.Close
End If
Set Rs = Nothing
End Function
```
Essentially, I have started to rewrite the code in ADODB. However, researching the If Not RS.EOF topic on the internet for ADODB was utterly unsuccessful. Does anyone have knowledge of how RS.EOF and RS.BOF are used in ADODB that could help with the rewrite?
```
Function CloseSession()
'/This closes the open session
'/Define the OLE DB connection string.
StrConnectionString = "DRIVER=SQL Server;SERVER=dbswd0027;UID=Mickey01;PWD=Mouse02;DATABASE=Regulatory;"
'/Instantiate the Connection object and open a database connection.
Set cnn = CreateObject("ADODB.Connection")
cnn.Open StrConnectionString
Dim strSQL1 As String
Dim strSQL2 As String
Dim StrLoginName As String
Dim StrComputerName As String
'/passing variables
StrComputerName = FindComputerName
strLoggedIn = "False"
'/Declaring what table you are passing the variables to
strSQL1 = "Update tTbl_LoginSessions SET fldLogoutEvent = '" & Now() & "'" & _
" WHERE fldLoginKey = " & LngLoginId
'/Declaring what table you are passing the variables to
strSQL2 = "Update tTbl_LoginUsers SET fldLoggedIn = '" & strLoggedIn & "', fldComputer = '" & StrComputerName & "'" & _
" WHERE intCPIIUserID = " & LngUserID
cnn.Execute strSQL1, , adCmdText + adExecuteNoRecords
cnn.Execute strSQL2, , adCmdText + adExecuteNoRecords
'/close connections and clean up
cnn.Close
Set cnn = Nothing
End Function
``` | This was my answer:
```
Function LogMeIn(sUser As Long)
' was Function LogMeIn()
'/Go to the users table and record that the user has logged in
'/and which computer they have logged in from
Dim con As ADODB.Connection
Dim cmd As ADODB.Command
Dim strSQL As String
'/Dim sUser As Long
Dim strComputerName As String, strLoggedIn As String
'passing variables
strLoggedIn = "True"
strComputerName = FindComputerName()
'Declaring what table you are passing the variables to
strSQL = "Update dbo.tTbl_LoginUsers SET fldLoggedIn = '" & strLoggedIn & "', fldComputer = '" & strComputerName & "'" & _
" WHERE intCPIIUserID = " & sUser
Debug.Print strSQL
'connect to SQL Server
Set con = New ADODB.Connection
With con
.ConnectionString = cSQLConn
.Open
End With
'write back
Set cmd = New ADODB.Command
With cmd
.ActiveConnection = con
.CommandText = strSQL
.CommandType = adCmdText
.Execute
'Debug.Print strSQL
End With
'close connections
con.Close
Set cmd = Nothing
Set con = Nothing
End Function
``` | In the DAO case your code is using (DAO) Recordset objects and avoiding explicit SQL statements; in the ADO case you're using explicit SQL statements and avoiding using (ADO) Recordset objects. As such, the short answer to your question is: use ADO Recordset objects, and you will have your `BOF` and `EOF` properties like with DAO (<http://msdn.microsoft.com/en-us/library/windows/desktop/ms675787%28v=vs.85%29.aspx>). As an aside, you're also using late binding in the ADO case, and I'd suggest you use early binding instead (i.e., add a reference to the ADO type library and use strongly not weakly typed object variables).
That said, in the DAO Recordset case you usually get an instance of it by calling a `Database` object's `OpenRecordset` method; in contrast, you instantiate the ADO version directly before calling its `Open` method, usually passing in an ADO `Connection` object (an ADO `Connection` roughly corresponds to a DAO `Database`). There's also no explicit `Edit` method:
```
Dim Connection As New ADODB.Connection, RS As New ADODB.Recordset
Connection.Open StrConnectionString
RS.Open "SELECT * FROM Tbl_LoginSessions WHERE fldLoginKey =" & LngLoginId, _
Connection, adOpenForwardOnly, adLockPessimistic
If Not Rs.EOF And Not Rs.BOF Then
Rs.Fields("fldLogoutEvent").Value = Now() '!!!though see jacouh's comment
Rs.Update
Rs.Close
End If
RS.Open "SELECT * FROM [Tbl_Users] WHERE PKUserID =" & LngUserID, _
Connection, adOpenForwardOnly, adLockPessimistic
'Flag user as being logged out
If Not Rs.EOF And Not Rs.BOF Then
Rs.Fields("fldLoggedIn").Value = 0
Rs.Fields("FldComputer").Value = ""
Rs.Update
Rs.Close
End If
``` | If Not RS.EOF and Not RS.BOF then written for ADODB versus current DAO format | [
"",
"sql",
"ms-access",
"vba",
""
] |
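Both rewrites above splice values into the SQL string. As a side note, ADO Command parameters (or any driver's placeholders) do the same job more safely; a sketch of the idea with Python's sqlite3, using an invented users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INT, logged_in INT, computer TEXT)")
conn.execute("INSERT INTO users VALUES (1, 1, 'PC01')")
# Placeholders replace the string concatenation; the driver handles
# quoting, which also avoids SQL injection.
conn.execute("UPDATE users SET logged_in = ?, computer = ? WHERE id = ?",
             (0, "", 1))
state = conn.execute(
    "SELECT logged_in, computer FROM users WHERE id = 1").fetchone()
print(state)  # (0, '')
```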
Having a problem with one of my SQL queries. This is my query:
```
explain
SELECT DISTINCT profiles.hoofdrubriek, profiles.plaats, profiles.bedrijfsnaam, profiles.gemeente, profiles.bedrijfsslogan, profiles.straatnaam, profiles.huisnummer, profiles.postcode, profiles.telefoonnummer, profiles.fax, profiles.email, profiles.website, profiles.bedrijfslogo
FROM profiles
LEFT JOIN profile_subrubriek ON profiles.ID=profile_subrubriek.profile_id
LEFT JOIN rubrieken ON profile_subrubriek.subrubriek_id=rubrieken.ID
WHERE (
rubrieken.rubriek = 'Pedicurepraktijken' OR
profiles.hoofdrubriek = 'Pedicurepraktijken'
)
ORDER BY profiles.grade DESC, profiles.bedrijfsnaam
```
The 'OR' operator in this piece of the query is causing troubles:
```
rubrieken.rubriek = 'Pedicurepraktijken' OR profiles.hoofdrubriek = 'Pedicurepraktijken'
```
I have indexes applied on all my tables which function properly if I take out one of the two pieces of the above line of code. Combining them with an OR operator causes it to break down and it refuses to use the index I have applied on the 'hoofdrubriek' column in my profiles table. Below the layouts of my relevant tables:
```
CREATE TABLE `profiles` (
`ID` varchar(255) NOT NULL DEFAULT '',
......
`hoofdrubriek` varchar(255) DEFAULT NULL,
...
`timestamp` datetime DEFAULT NULL,
`meerderevestigingen` varchar(255) NOT NULL,
`grade` int(5) NOT NULL,
PRIMARY KEY (`ID`),
KEY `IDX_TIMESTAMP` (`timestamp`),
KEY `IDX_NIEUW` (`nieuw`),
KEY `IDX_HOOFDRUBRIEK` (`hoofdrubriek`),
KEY `bedrijfsnaam` (`bedrijfsnaam`),
KEY `grade` (`grade`),
KEY `gemeente` (`gemeente`),
KEY `plaats` (`plaats`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
CREATE TABLE `rubrieken` (
`ID` mediumint(9) NOT NULL AUTO_INCREMENT,
`rubriek` varchar(255) NOT NULL,
PRIMARY KEY (`ID`),
UNIQUE KEY `rubriek` (`rubriek`)
) ENGINE=MyISAM AUTO_INCREMENT=1905 DEFAULT CHARSET=utf8
CREATE TABLE `profile_subrubriek` (
`profile_id` varchar(20) NOT NULL,
`subrubriek_id` mediumint(9) NOT NULL,
PRIMARY KEY (`subrubriek_id`,`profile_id`),
KEY `profile_id` (`profile_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
```
Of course I could solve the problem with a UNION DISTICT, thus combining the two different queries, but I don't think that should be the way to go.. | Well, if an `or` is causing a problem, then the simplest solution is to break the query into two pieces and put them together using `union` (in your case, because of the `distinct`). Rectifying the `where` clause using an index is going to probably be impossible, because it references two different columns:
```
SELECT p.hoofdrubriek, p.plaats, p.bedrijfsnaam, p.gemeente, p.bedrijfsslogan, profiles.straatnaam,
p.huisnummer, profiles.postcode, p.telefoonnummer, p.fax, p.email, p.website, p.bedrijfslogo,
p.grade
FROM profiles p
LEFT JOIN profile_subrubriek ON p.ID=profile_subrubriek.profile_id
LEFT JOIN rubrieken ON profile_subrubriek.subrubriek_id=rubrieken.ID
WHERE rubrieken.rubriek = 'Pedicurepraktijken'
union
SELECT p.hoofdrubriek, p.plaats, p.bedrijfsnaam, p.gemeente, p.bedrijfsslogan, profiles.straatnaam,
p.huisnummer, profiles.postcode, p.telefoonnummer, p.fax, p.email, p.website, p.bedrijfslogo,
p.grade
FROM profiles p
LEFT JOIN profile_subrubriek ON p.ID=profile_subrubriek.profile_id
LEFT JOIN rubrieken ON profile_subrubriek.subrubriek_id=rubrieken.ID
WHERE p.hoofdrubriek = 'Pedicurepraktijken'
ORDER BY grade DESC, bedrijfsnaam;
```
I added `grade` into the `select` clause so it could be used by the `order by`. | I think Gordon is right about using UNION, but you can make the UNION much more efficient:
In the first query below, since you are only referring to the `profiles` table, you can remove the joins; they would only cause duplicates that subsequently need to be removed. Then in the second, you can change the JOINs from OUTER to INNER: since you refer to a field of the outer-joined table in the WHERE clause, you are effectively stating there has to be a match. Finally, by adding a clause to exclude values already picked up by the first part of the union, you will have fewer records to sort and deduplicate.
```
SELECT profiles.hoofdrubriek,
profiles.plaats,
profiles.bedrijfsnaam,
profiles.gemeente,
profiles.bedrijfsslogan,
profiles.straatnaam,
profiles.huisnummer,
profiles.postcode,
profiles.telefoonnummer,
profiles.fax,
profiles.email,
profiles.website,
profiles.bedrijfslogo,
profiles.grade
FROM profiles
WHERE profiles.hoofdrubriek = 'Pedicurepraktijken'
UNION
SELECT profiles.hoofdrubriek,
profiles.plaats,
profiles.bedrijfsnaam,
profiles.gemeente,
profiles.bedrijfsslogan,
profiles.straatnaam,
profiles.huisnummer,
profiles.postcode,
profiles.telefoonnummer,
profiles.fax,
profiles.email,
profiles.website,
profiles.bedrijfslogo,
profiles.grade
FROM profiles
INNER JOIN profile_subrubriek
ON profiles.ID=profile_subrubriek.profile_id
INNER JOIN rubrieken
ON profile_subrubriek.subrubriek_id=rubrieken.ID
WHERE rubrieken.rubriek = 'Pedicurepraktijken'
AND profiles.hoofdrubriek != 'Pedicurepraktijken'
ORDER BY grade DESC, bedrijfsnaam;
```
My knowledge of the inner workings of the MySQL optimiser is hazy to say the least, but my understanding is that the underlying cause is that MySQL does not use the index because it needs to scan the whole table anyway to check for the other predicate (`rubrieken.rubriek = 'Pedicurepraktijken'`). I think what you are expecting the optimiser to do implicitly is what the UNION does explicitly. (I think) because of the OUTER JOINs and the OR, the optimiser is unable to accurately determine how many rows of `profiles` it will have to read to find a match in `rubrieken` OR in `profiles`; it cannot accurately determine whether an index seek will be more efficient than a table scan, and opts for the table scan.
This is not unique to MySQL; it is not uncommon across all DBMSs for it to be more efficient to use `UNION` instead of `OR`.
Rearranging the queries as I have gives the optimiser a much better chance of using the right indexes (I suspect without the `USE INDEX` hint too, but I haven't tested). | SQL query no using indexes properly | [
"",
"mysql",
"sql",
"join",
"indexing",
""
] |
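Both forms return the same rows; the point of the answers above is only about which plan the optimiser can pick. A small equivalence check of the OR form versus the UNION rewrite, in Python's sqlite3 with invented two-table data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE p (id INT, cat TEXT);
    CREATE TABLE r (pid INT, rub TEXT);
    INSERT INTO p VALUES (1,'A'),(2,'B'),(3,'A');
    INSERT INTO r VALUES (2,'A'),(3,'C');
""")
q_or = conn.execute("""
    SELECT DISTINCT p.id FROM p LEFT JOIN r ON p.id = r.pid
    WHERE p.cat = 'A' OR r.rub = 'A' ORDER BY p.id""").fetchall()
q_union = conn.execute("""
    SELECT p.id FROM p WHERE p.cat = 'A'
    UNION
    SELECT p.id FROM p JOIN r ON p.id = r.pid WHERE r.rub = 'A'
    ORDER BY id""").fetchall()
print(q_or, q_union)  # both [(1,), (2,), (3,)]
```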
I have a column which contains data, but the problem is that this column has a data type of varchar(50), and it has to stay that way for certain reasons. Now what I want to do is, while selecting data from the table, treat this column as a date so that I can use it in the WHERE clause. I am using the code below for converting it to a date, but it converts some values and then gives an error.
this is my sample data
```
8/1/2002
6/9/2001
14/9/2001
26/7/2001
14/12/2001
21/1/2002
29/4/2001
7/5/2001
9/11/2001
16/7/2001
select CONVERT(date,sowingDate,103) from tblAgriculture_staging
```
I have tried different date format styles, e.g. 103, 105, etc.,
but it still converts some values and errors on others, and query execution stops | Try this:
```
SET DATEFORMAT dmy;
select case when isdate(sowingDate) = 1 then CONVERT(date,sowingDate,103) end [date] from tblAgriculture_staging
```
or (if you are using sql 2012)
```
SET DATEFORMAT dmy;
select case when TRY_CONVERT(date, sowingDate) IS NOT NULL then CONVERT(date,sowingDate,103) end [date] from tblAgriculture_staging
```
but this solution hides (converts to NULL) all dates that are wrong. You can reverse the condition first to find/fix all rows with an incorrect date (e.g. 31/02/2013) and then use these queries to show only valid dates
[**`SQLFiddle`**](http://sqlfiddle.com/#!6/bfb94/1) | > but it converts some values and then gives an error, this is my sample
> data
This happens because some of the data is in an invalid format or contains incorrect symbols.
Try this:
```
select CONVERT(date,ltrim(rtrim(sowingDate)), 103) from tblAgriculture_staging
```
or examine your values:
```
select ISDATE(sowingDate) as IsDate, sowingDate, CASE WHEN ISDATE(sowingDate)=1 THEN CONVERT(date,ltrim(rtrim(sowingDate)), 103) ELSE NULL END from tblAgriculture_staging
``` | Convert String to date in select statement | [
"",
"sql",
"sql-server",
"date",
""
] |
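The TRY_CONVERT idea from the accepted answer - return NULL instead of raising on a bad value - mirrored in plain Python with `strptime` (the helper name and dd/mm/yyyy format are assumptions for illustration):

```python
from datetime import datetime

def try_convert(s, fmt="%d/%m/%Y"):
    # Mirrors TRY_CONVERT: None instead of an error for invalid input.
    try:
        return datetime.strptime(s.strip(), fmt).date().isoformat()
    except ValueError:
        return None

print([try_convert(s) for s in ["8/1/2002", "31/02/2013"]])
# ['2002-01-08', None]
```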
I've come from a Postgres/MySQL world, so am wondering if there's a difference between these 2 queries in SQL server:
```
SELECT e.employee_id, s.salary
FROM employees e INNER JOIN salaries s
ON e.employee_id = s.employee_id
SELECT e.employee_id, s.salary
FROM employees e, salaries s
WHERE e.employee_id = s.employee_id
``` | The new join syntax was introduced with [SQL-92](http://en.wikipedia.org/wiki/SQL-92) and is supported by SQL Server (at least as far back as SQL Server 2000) and most other DBMSs these days. Performance wise, they are equivalent, but generally I recommend the new SQL-92 syntax for new code. | There is no functional difference between the two. The latter (known as implicit join) is considered outdated, and is usually frowned upon. | Difference in SQL server INNER JOIN syntax? | [
"",
"sql",
"sql-server",
""
] |
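A quick demonstration that the two join syntaxes above produce identical results, using Python's sqlite3 with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE e (employee_id INT);
    CREATE TABLE s (employee_id INT, salary INT);
    INSERT INTO e VALUES (1),(2);
    INSERT INTO s VALUES (1,100),(3,300);
""")
# Explicit SQL-92 join vs. implicit comma join with a WHERE clause.
a = conn.execute("SELECT e.employee_id, s.salary FROM e JOIN s "
                 "ON e.employee_id = s.employee_id").fetchall()
b = conn.execute("SELECT e.employee_id, s.salary FROM e, s "
                 "WHERE e.employee_id = s.employee_id").fetchall()
print(a == b, a)  # True [(1, 100)]
```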
This query is working fine:
```
SELECT posts.titulo as value,
posts.id as id,
posts.img_src as img,
posts.id_familia,
posts.tiempo,
posts.votos,
familias.clave,
familias.id as fm,
textos.clave,
textos.texto as familia
FROM posts,familias,textos
WHERE posts.id_familia = familias.id AND familias.clave = textos.clave AND textos.lengua = ".detectarIdioma()."
and posts.id_usuario = $term
ORDER BY posts.id DESC
```
But now I would like to add how many comments has a post, which is in the comentarios table.
```
SELECT posts.titulo as value,
posts.id as id,
posts.img_src as img,
posts.id_familia,
posts.tiempo,
posts.votos,
familias.clave,
familias.id as fm,
textos.clave,
textos.texto as familia,
count(comentarios.id)
FROM posts,familias,textos
JOIN comentarios ON comentarios.id_post = posts.id
WHERE posts.id_familia = familias.id AND familias.clave = textos.clave AND textos.lengua = ".detectarIdioma()."
and posts.id_usuario = $term
ORDER BY posts.id DESC
```
The thing is that the mysql error is
`You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '*) FROM posts,familias,textos JOIN comentarios ON ' at line 12`
Any idea what am I missing here? | Try something like this:
```
SELECT posts.titulo AS value,
posts.id AS id,
posts.img_src AS img,
posts.id_familia,
posts.tiempo,
posts.votos,
familias.clave,
familias.id AS fm,
textos.clave,
textos.texto AS familia,
COALESCE(COM_COUNT.NUM_COMMENTS,0) AS num_comments
FROM posts
INNER JOIN familias ON posts.id_familia = familias.id
INNER JOIN textos ON familias.clave = textos.clave
LEFT JOIN
( SELECT id_post, COUNT(*) AS NUM_COMMENTS
FROM comentarios
GROUP BY id_post
) COM_COUNT ON COM_COUNT.id_post = posts.id
WHERE textos.lengua = ".detectarIdioma()."
AND posts.id_usuario = $term
ORDER BY posts.id DESC
```
This will left join with the number of comments per post, and if the JOIN is not matched will show 0. | Try this:
```
SELECT posts.titulo as value,
posts.id as id,
posts.img_src as img,
posts.id_familia,
posts.tiempo,
posts.votos,
familias.clave,
familias.id as fm,
textos.clave,
textos.texto as familia,
count(comentarios.id)
FROM posts INNER JOIN familias ON posts.id_familia = familias.id
INNER JOIN textos ON familias.clave = textos.clave
LEFT OUTER JOIN comentarios ON comentarios.id_post = posts.id
WHERE textos.lengua = ".detectarIdioma()."
AND posts.id_usuario = $term
GROUP BY posts.titulo,
posts.id,
posts.img_src,
posts.id_familia,
posts.tiempo,
posts.votos,
familias.clave,
familias.id,
textos.clave,
textos.texto
ORDER BY posts.id DESC
```
You were just mixing two types of `JOIN` syntax.
...and you'll probably need to group by every column except the one you're counting.
**EDIT:** To not restrict your results to only those with comments, you'll need to do a `LEFT OUTER JOIN` on that table. | Adding a count from another table to existing query | [
"",
"mysql",
"sql",
"count",
""
] |
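The accepted answer's LEFT JOIN against a pre-aggregated comment count, with `COALESCE` supplying 0 for posts without comments, as a runnable sketch in Python's sqlite3 (tables trimmed to the relevant columns, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INT, titulo TEXT);
    CREATE TABLE comentarios (id INT, id_post INT);
    INSERT INTO posts VALUES (1,'a'),(2,'b');
    INSERT INTO comentarios VALUES (1,1),(2,1);
""")
rows = conn.execute("""
    SELECT p.id, COALESCE(c.n, 0) FROM posts p
    LEFT JOIN (SELECT id_post, COUNT(*) AS n FROM comentarios
               GROUP BY id_post) c ON c.id_post = p.id
    ORDER BY p.id""").fetchall()
print(rows)  # [(1, 2), (2, 0)]
```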
I have a db with 3 tables, as following
```
CREATE TABLE Person
(id_pers number (3),
name varchar(20) NOT NULL,
phone number (15),
PRIMARY KEY(id_pers)
);
CREATE table Book(
id_book number(3),
title varchar(30),
about varchar (200),
nr_of_books number (3),
type varchar (11),
PRIMARY KEY(id_book)
);
CREATE table Author(
id_book number(3),
id_aut number(3),
FOREIGN KEY (id_book) REFERENCES Book(id_book),
FOREIGN KEY (id_aut) REFERENCES Person(id_pers)
);
```
I want to display the title of the book that has exactly 2 authors and the name of the authors which wrote the book. Example "Book1 - Author1, Author2"
All i managed to do is to take the book id and the number of authors but i want them more precisely.
The code that I wrote is this:
```
SELECT au.id_book, count(au.id_aut)
FROM author au join book bk ON au.id_book = bk.id_book
JOIN person p ON p.id_pers = au.id_aut
GROUP BY au.id_book
HAVING COUNT(au.id_aut) = 2;
```
Everything I tried resulted either in a "no group by expression" error or in some syntax error.
Should I make a nested query? If so, what should I put in the WHERE clause to be equal to this?
I'm so confused. Any help would be appreciated. | This will include your title in the result, and it won't give errors about missing group by. I am not sure this is all you want, though...
```
SELECT au.id_book, bk.title, count(au.id_aut)
FROM author au
JOIN book bk
ON au.id_book = bk.id_book
JOIN person p
ON p.id_pers = au.id_aut
GROUP BY au.id_book, bk.title
HAVING COUNT(au.id_aut) = 2;
```
In order to include the authors names, I resorted to something a bit more elaborate:
```
DECLARE @result varchar(500)
DECLARE @numAut int
SET @result = ''
SET @numAut = 2
SELECT @result = @result + [Name] + ', '
FROM person WHERE id_pers in
(SELECT id_aut FROM author WHERE id_book in
( SELECT id_book FROM author
GROUP BY id_book HAVING COUNT(*) = @numAut)
);
SELECT bk.title, @result
FROM author au JOIN book bk ON au.id_book = bk.id_book
GROUP BY au.id_book, bk.title
HAVING COUNT(*) = @numAut;
```
First, we enumerate the names of the authors that appear in the "books with N authors"-list. This result Is then included in th (now a bit simpler) actual query, where I basically only select the titles of the books in that same list.
This works, but i am wondering if there is not a more elegant way... | ```
SELECT au.id_book, count(au.id_aut)as count
FROM author au join book bk ON au.id_book = bk.id_book
JOIN person p ON p.id_pers = au.id_aut
GROUP BY au.id_book
HAVING (count=2)
``` | How do i select datas from two different tables when count() = 2 with join? | [
"",
"sql",
"oracle-sqldeveloper",
""
] |
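For the "more elegant way" the accepted answer wonders about: a string-aggregation function can build the author list in the same grouped query - `GROUP_CONCAT` in SQLite/MySQL, `LISTAGG` in Oracle 11gR2+. A sketch with Python's sqlite3 (ids and names invented):

```python
import sqlite3

# One grouped query: HAVING keeps books with exactly 2 authors,
# GROUP_CONCAT assembles the author names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INT, name TEXT);
    CREATE TABLE book (id INT, title TEXT);
    CREATE TABLE author (id_book INT, id_aut INT);
    INSERT INTO person VALUES (1,'A1'),(2,'A2'),(3,'A3');
    INSERT INTO book VALUES (10,'Book1'),(20,'Book2');
    INSERT INTO author VALUES (10,1),(10,2),(20,3);
""")
rows = conn.execute("""
    SELECT b.title, GROUP_CONCAT(p.name, ', ')
    FROM author a JOIN book b ON a.id_book = b.id
    JOIN person p ON p.id = a.id_aut
    GROUP BY b.id, b.title HAVING COUNT(*) = 2""").fetchall()
print(rows)
```

(The concatenation order inside `GROUP_CONCAT` is not guaranteed, which is why the sketch does not promise a specific string.)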
I have a classic ASP page with a simple HTML table, and I want to loop over the table rows based on an unknown number of records pulled from the database. However, when I am looping over the records with a do/while loop, I am getting an error saying that Either BOF or EOF is True. I want every other row of the table to alternate background colors (the colors I have set in CSS).
```
<% do while not rsTest.eof %>
<tr class="odd">
<td colspan="5"><%=(rsTest.Fields.Item("field").Value)%></td>
</tr>
<% rsTest.moveNext
if not rsTest.eof then
count = count + 1 %>
<tr class="even">
<td colspan="5"><%=(rsTest.Fields.Item("field").Value)%></td>
</tr>
<% end if %>
<% count = count + 1
rsTest.moveNext
loop %>
```
The error, according to the browser, is occurring on the last "rsRoster.moveNext" right before the loop. The loop doesn't error out if there is an even number of records being pulled from the database, but it errors if there is an odd number of records being pulled. I have tried inserting some "if EOF then nothing, else execute code", but the code checking if EOF just seems to be getting ignored when I do that. Any suggestions would be appreciated. | I know I am rusty on this one but try this:
```
<%
Dim oddFlag
oddFlag = 1
do while not rsTest.eof
if oddFlag=1 Then
oddFlag=0
Response.write("<tr class='odd'>")
Response.write("<td colspan='5'>")
Response.write(rsTest.Fields.Item("field").Value)
Response.write("</td></tr>")
else
oddFlag=1
Response.write("<tr class='even'>")
Response.write("<td colspan='5'>")
Response.write(rsTest.Fields.Item("field").Value)
Response.write("</td></tr>")
end if
rsTest.moveNext
loop
%>
``` | Since the other answers don't mention this: the problem with your code is that you're doing MoveNext twice, and the second one doesn't test if the first one already reached the EOF.
In any case, that's a needlessly complicated way to do alternating colors.
```
dim i, rs
'... database stuff, table header, etc.
i = 0
Do Until rs.EOF
i = i + 1
Response.Write "<tr class='"
If i Mod 2 = 0 Then Response.Write "even" Else Response.Write "odd" End If
Response.Write "'>"
'... write out the actual content of the table
Response.Write "</tr>"
rs.Movenext
Loop
'... clean up database, close table
```
With this method, your counter variable (`i`) is available as an actual, well, *counter* - so for example if you want to write out a "number of rows returned" message at the end, you can. | Classic ASP do while loop showing error | [
"",
"sql",
"asp-classic",
"do-while",
""
] |
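The counter idiom from the answers above - parity of a single row counter picks the CSS class - in plain Python for illustration (row values invented):

```python
# Odd counter values get class "odd", even values get "even".
rows = ["r1", "r2", "r3"]
html = "".join(
    '<tr class="%s"><td>%s</td></tr>' % ("even" if i % 2 == 0 else "odd", v)
    for i, v in enumerate(rows, start=1))
print(html)
```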
The cells in an N-dimensional space are modelled with the 3 tables below. A script is needed that takes a single cell (by CellID) and returns all the other cells that are "inline" with the given cell along each axis (including itself).
For example, suppose the space has 3 dimensions (X, Y, Z) and X has 2 positions, Y has 2 and Z has 3. If the cell with coordinates {1,1,1} is given, the result should be:
```
+----------+---------+
| AxisCode | Cell |
+----------+---------+
| X | {1,1,1} | <- showing coordinates for clarity, but should be CellID
| X | {2,1,1} |
| Y | {1,1,1} |
| Y | {1,2,1} |
| Z | {1,1,1} |
| Z | {1,1,2} |
| Z | {1,1,3} |
+----------+---------+
```
I have spent hours on this and have only come up with queries that are hard-coded for a specific number of dimensions.
Please note:
**Changing the schema of the 3 tables is not an option!
The script has to work for N dimensions, and should not involve loops or cursors.**
Compatibility must be MS SQL 2008 R2
Any ideas gratefully received!
```
create table dbo.Cells(
CellID int not null,
CellValue int not null,
constraint PK_Cells primary key (CellID)
)
create table dbo.AxisPositions(
AxisCode char(1) not null, -- X, Y, Z etc
PositionOnAxis int not null, -- 1, 2, 3, 4 etc
constraint PK_AxisPositions primary key (AxisCode, PositionOnAxis)
)
create table dbo.CellAxes(
CellID int not null,
AxisCode char(1) not null, -- X, Y, Z etc
PositionOnAxis int not null, -- 1, 2, 3, 4 etc
constraint PK_CellAxes primary key (CellID, AxisCode),
constraint FK_CellAxes_Cells foreign key (CellID) references Cells(CellID),
constraint FK_CellAxes_AxisPositions foreign key (AxisCode, PositionOnAxis) references AxisPositions(AxisCode, PositionOnAxis)
)
-- Example data
insert Cells (CellID, CellValue)
values (1, 67), (2, 45), (3, 0), (4, 4), (5, 78), (6, 213), (7, 546), (8, 455), (9, 12), (10, 67), (11, 4), (12, 5)
insert AxisPositions (AxisCode, PositionOnAxis)
values ('X', 1), ('X', 2), ('Y', 1), ('Y', 2), ('Z', 1), ('Z', 2), ('Z', 3)
insert CellAxes (CellID, AxisCode, PositionOnAxis)
values (1, 'X', 1), (1, 'Y', 1), (1, 'Z', 1),
(2, 'X', 2), (2, 'Y', 1), (2, 'Z', 1),
(3, 'X', 1), (3, 'Y', 2), (3, 'Z', 1),
(4, 'X', 2), (4, 'Y', 2), (4, 'Z', 1),
(5, 'X', 1), (5, 'Y', 1), (5, 'Z', 2),
(6, 'X', 2), (6, 'Y', 1), (6, 'Z', 2),
(7, 'X', 1), (7, 'Y', 2), (7, 'Z', 2),
(8, 'X', 2), (8, 'Y', 2), (8, 'Z', 2),
(9, 'X', 1), (9, 'Y', 1), (9, 'Z', 3),
(10, 'X', 2), (10, 'Y', 1), (10, 'Z', 3),
(11, 'X', 1), (11, 'Y', 2), (11, 'Z', 3),
(12, 'X', 2), (12, 'Y', 2), (12, 'Z', 3)
``` | ```
select x.AxisCode, a2.CellID
from CellAxes a1
inner join CellAxes a2 on a2.AxisCode = a1.AxisCode
inner join CellAxes x on x.CellID = a1.CellID
where (a1.AxisCode = x.AxisCode or a1.PositionOnAxis = a2.PositionOnAxis)
and a1.CellID = @CellID -- Cell to match against
group by x.AxisCode, a2.CellID
having count(*) = (select count(distinct AxisCode) from CellAxes where CellID = @CellID)
``` | Based on your original two tables in your question, you can write dynamic SQL to create columns that you can then use to compare with whatever you are using to define "inline".
```
-- Build list of column values to pivot
DECLARE @cols NVARCHAR(1000);
SELECT @cols =
STUFF((SELECT N'],[' + axiscode
FROM (SELECT DISTINCT axiscode FROM CellAxes) AS O(axiscode)
ORDER BY axiscode
FOR XML PATH('')
), 1, 2, '') + N']';
SELECT @cols;
-- Build dynamic SQL query for pivoting
DECLARE @sql NVARCHAR(2000);
SET @sql =
N'WITH pivotedData AS (SELECT CellID, ' + @cols +
N'FROM CellAxes ' +
N'PIVOT ' +
N'(MAX(PositionOnAxis) FOR AxisCode IN (' + @cols + N')) AS P)' +
N'SELECT * from pivotedData'
-- Modify this query with a generated WHERE clause that defines what "inline" means.
;
EXEC(@sql);
```
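As a sanity check, the accepted join/group/having query can be exercised against the sample data in SQLite (an assumption-laden sketch, not MS SQL 2008 R2; the coordinates are rebuilt programmatically and `@CellID` becomes a `?` parameter). For cell 1 it should return the cells inline with {1,1,1} along each axis:

```python
import sqlite3

# Rebuild the sample schema and data in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.executescript("""
create table Cells(CellID int primary key, CellValue int not null);
create table CellAxes(CellID int, AxisCode char(1), PositionOnAxis int,
    primary key (CellID, AxisCode));
""")
con.executemany("insert into Cells values (?, ?)",
                [(i, 0) for i in range(1, 13)])
# Same coordinate layout as the question's insert: X cycles 1-2,
# Y cycles 1-2 every two cells, Z increments every four cells.
coords = [(c, a, p)
          for c in range(1, 13)
          for a, p in zip("XYZ", ((c - 1) % 2 + 1,
                                  (c - 1) // 2 % 2 + 1,
                                  (c - 1) // 4 + 1))]
con.executemany("insert into CellAxes values (?, ?, ?)", coords)

rows = con.execute("""
select x.AxisCode, a2.CellID
from CellAxes a1
inner join CellAxes a2 on a2.AxisCode = a1.AxisCode
inner join CellAxes x on x.CellID = a1.CellID
where (a1.AxisCode = x.AxisCode or a1.PositionOnAxis = a2.PositionOnAxis)
  and a1.CellID = ?
group by x.AxisCode, a2.CellID
having count(*) = (select count(distinct AxisCode) from CellAxes where CellID = ?)
order by x.AxisCode, a2.CellID
""", (1, 1)).fetchall()
print(rows)
```

For cell 1 this yields cells 1 and 2 along X, 1 and 3 along Y, and 1, 5, 9 along Z, matching the expected result table in the question.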
Note that @BillKarwin is correct - there is not a safe recommended way to do this with straight SQL, since you are *storing your schema in your data*. | T-SQL puzzler - given a cell in an N-dimensional space, return the cells that are inline with the given cell along each axis | [
"sql",
"sql-server",
"t-sql"
] |
From the following data, how can I select all the children a parent has, ordered by those children's perCent in descending order?
Parents are type 3 only.
I tried the below, but that orders the entire result set. I need to order by each parent's children's perCent. You'll see in the expected results:
```
select b.id, b.type, b.perCent from scores a
INNER JOIN scores b on a.id = b.parent and a.type=3 order by b.perCent desc;
```
**Main Table**
```
"id" "type" "parent" "country" "votes" "perCent"
"24" "1" "1" "US" "30" "0"
"25" "3" "24" "US" "30" "0"
"26" "10" "25" "US" "15" "50.00"
"27" "10" "25" "US" "10" "33.33"
"28" "10" "25" "US" "5" "16.66"
"29" "1" "1" "US" "50" "0"
"30" "3" "29" "US" "50" "0"
"31" "10" "29" "US" "20" "40.00"
"32" "10" "29" "US" "15" "25.00"
"33" "10" "29" "US" "15" "35.00"
```
**Expected results:**
```
"id" "perCent" //Ordered by each parent's children's perCent desc
"26" "50.00"
"27" "33.33"
"28" "16.66"
"31" "40.00"
"33" "35.00"
"32" "25.00"
``` | Keep the same general join, but apply an order by across multiple columns:
```
select child.id as childId, child.perCent
from scores parent
join scores child
on parent.id = child.parent
where parent.type = 3
order by parent.id, child.perCent desc
```
This says, first order by the parent (such that all children for the parent will appear together) and then order by each child's (per said parent) percent descending.
Note that even though the `parent.id` column wasn't selected, it is still eligible for ordering. | ```
select id,perCent from scores
where parent in (select id from scores where type=3)
order by parent, percent desc
``` | Select records and order by scores desc | [
"mysql",
"sql"
] |
I'm creating a View in MySQL that will group the titles of movies by the number of times they have been rented, in descending order. So far, I have this:
```
CREATE VIEW number_of_rentals as
SELECT title
FROM film
JOIN inventory ON film.film_id=inventory.film_id
JOIN rental ON inventory.inventory_id=rental.inventory_id;
```
Which gives me a result like this:
Movie1
Movie1
Movie1
Movie1
Movie2
Movie2
Movie3
Movie3
Movie3
What I need is to display a list that looks like this (the actual count does not need to be displayed):
Movie1 (because it appears 4 times in the list)
Movie3 (because it appears 3 times in the list)
Movie2 (because it appears 2 times in the list)
That is, each individual movie in descending order of the number of times it appears in the initial list. I'm having trouble figuring out how to count the number of times each individual movie appears. Any tips on how to do this? | ```
SELECT f.title
FROM film f
JOIN inventory i ON f.film_id = i.film_id
JOIN rental r ON i.inventory_id = r.inventory_id
group by f.title
order by count(*) desc
``` | Your SELECT in your view should probably look very similar to this:
```
SELECT title, count(title) as rentals
FROM film
JOIN inventory ON film.film_id=inventory.film_id
JOIN rental ON inventory.inventory_id=rental.inventory_id
GROUP BY title
ORDER BY count(title) DESC;
```
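The grouping idea is easy to verify in miniature with SQLite; a simplified single table stands in here for the film/inventory/rental join, since only the per-title counts matter for the ordering:

```python
import sqlite3

# One row per rental; group by title and order by how often each appears.
con = sqlite3.connect(":memory:")
con.execute("create table rentals(title text)")
con.executemany("insert into rentals values (?)",
                [("Movie1",)] * 4 + [("Movie2",)] * 2 + [("Movie3",)] * 3)
rows = con.execute("""
select title
from rentals
group by title
order by count(*) desc
""").fetchall()
print([t for (t,) in rows])  # most-rented title first
```

Note that the count can drive the ORDER BY without appearing in the select list, which satisfies the requirement that the actual count need not be displayed.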
Here's a basic [sqlfiddle example](http://www.sqlfiddle.com/#!2/8ba61/9) that may show similar results. I hope this helps. | In MySQL, how do I order by the number of times a specific instance appears? | [
"mysql",
"sql",
"sql-order-by"
] |
I have 3 tables:
members(nic,name)
awards(nic,aw_name)
championships(nic,ch_name)
A person can have multiple awards or championships. For example, Richard has 2 awards and 3 championships, and Steve does not have any award or championship.
Now I'd like a list of persons with their awards and championships. For example:
```
| nic | name | aw_name | ch_name |
|----------|------------|-------------|-------------------|
| 1 | Richard | award 1 | championship 1 |
| 1 | Richard | award 2 | championship 2 |
| 1 | Richard | | championship 3 |
| 2 | Steve | | |
```
Can anyone help me out? Thanks! | Try this:
```
SELECT
members.nic,
members.name,
awards.aw_name,
championships.ch_name
FROM
(
SELECT
CASE WHEN @curNic = members.nic THEN @curRow := @curRow + 1 ELSE @curRow := 1 END num,
@curNic := members.nic nic,
name
FROM
members LEFT JOIN
(
SELECT nic FROM awards UNION ALL
SELECT nic FROM championships
) tmp ON tmp.nic = members.nic JOIN
(SELECT @curNic := null, @curRow := 0) r
) members LEFT JOIN
(
SELECT
CASE WHEN @curNic1 = nic THEN @curRow1 := @curRow1 + 1 ELSE @curRow1 := 1 END num,
@curNic1 := nic nic,
aw_name
FROM
awards JOIN
(SELECT @curNic1 := null, @curRow1 := 0) r
) awards ON awards.nic = members.nic AND awards.num = members.num LEFT JOIN
(
SELECT
CASE WHEN @curNic2 = nic THEN @curRow2 := @curRow2 + 1 ELSE @curRow2 := 1 END num,
@curNic2 := nic nic,
ch_name
FROM
championships JOIN
(SELECT @curNic2 := null, @curRow2 := 0) r
) championships ON championships.nic = members.nic AND championships.num = members.num
WHERE
awards.nic IS NOT NULL OR championships.nic IS NOT NULL OR members.num = 1
```
It would be easier if there was FULL OUTER JOIN in mysql, but there isn't.
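On engines that do have window functions (SQLite 3.25+, MySQL 8), the same row-by-row pairing can be expressed with ROW_NUMBER() instead of user variables. This is a sketch of the idea, not a drop-in replacement for the MySQL versions the answer targets:

```python
import sqlite3  # window functions require SQLite >= 3.25

# Number each member's awards and championships, build the union of row
# numbers per member (plus 1 so memberless Steve still shows), then join
# the two numbered lists side by side.
con = sqlite3.connect(":memory:")
con.executescript("""
create table members(nic int, name text);
create table awards(nic int, aw_name text);
create table championships(nic int, ch_name text);
insert into members values (1, 'Richard'), (2, 'Steve');
insert into awards values (1, 'award 1'), (1, 'award 2');
insert into championships values
  (1, 'championship 1'), (1, 'championship 2'), (1, 'championship 3');
""")
rows = con.execute("""
with a as (select nic, aw_name,
                  row_number() over (partition by nic order by aw_name) rn
           from awards),
     c as (select nic, ch_name,
                  row_number() over (partition by nic order by ch_name) rn
           from championships),
     n as (select nic, rn from a union select nic, rn from c
           union select nic, 1 from members)
select m.nic, m.name, a.aw_name, c.ch_name
from members m
join n on n.nic = m.nic
left join a on a.nic = n.nic and a.rn = n.rn
left join c on c.nic = n.nic and c.rn = n.rn
order by m.nic, n.rn
""").fetchall()
for r in rows:
    print(r)
```

This produces the same shape as the expected output: Richard's awards and championships zipped row by row, with NULL gaps, and Steve on a single empty row.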
Fiddle: <http://www.sqlfiddle.com/#!2/31d47/43> | Try this out.
```
SELECT m.nic,
m.name,
a.aw_name,
c.ch_name
FROM members m
LEFT JOIN awards AS a ON m.nic = a.nic
LEFT JOIN championships AS c ON m.nic = c.nic
GROUP BY m.nic, m.name
``` | how to left join 3 tables in mysql | [
"mysql",
"sql"
] |