| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm in the process of creating unique customer IDs to serve as alternative IDs for external use.
I'm adding a new column "cust\_uid" with datatype INT for my unique IDs.
When I do an INSERT into this new column:
```
Insert Into Customers(cust_uid)
Select ABS(CHECKSUM(NEWID()))
```
I get an error:
Could not create an acceptable cursor. OLE DB provider "SQLNCLI" for linked server "SHQ2IIS1" returned message "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.
I've checked all the data types on both tables, and the only thing that has changed is the new column in both tables.
The update is being done on one Big @$$ table...and for reasons above my pay grade, we would like to have new uids that are different from the ones we currently have, "so users don't know how many accounts we actually have."
* Is INT the correct datatype for `ABS(CHECKSUM(NEWID()))` ? | For a moment, forget your issue with what must be an attempt to insert into a linked server (though it is not obvious from your code that `Customers` must either be a synonym or you dumbed the statement down).
Ask yourself: why would you use random numbers for uniqueness? Random and unique may seem like similar concepts, but they're not.
I also see a lack of error handling (again, this may just be that you dumbed down your code to "help" us). Eventually you will get duplicates. You may want to read [this tip](http://www.mssqltips.com/sqlservertip/3055/generating-random-numbers-in-sql-server-without-collisions/?utm_source=AaronBertrand) and [this blog post](http://www.sqlperformance.com/2013/09/t-sql-queries/random-collisions). Essentially, as you insert more and more "unique" values, the likelihood that you will get a collision increases. So rather than solve the issue with your solution, I think you should step back and re-consider the problem.
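To put rough numbers on that collision risk, here is an illustrative Python sketch using the standard birthday-problem approximation. The 2^31 pool size assumes `ABS(CHECKSUM(NEWID()))` is roughly uniform over the non-negative INT range, which is an assumption for illustration, not a verified property:

```python
import math

# ABS(CHECKSUM(NEWID())) yields a value in roughly [0, 2**31), so the
# pool of distinct IDs is at most about 2.1 billion.
POOL = 2 ** 31

def collision_probability(n, pool=POOL):
    """Birthday-problem approximation: P(at least one duplicate)
    after n independent, uniform draws from `pool` values."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * pool))

# Draws needed before a duplicate becomes more likely than not:
fifty_fifty = math.sqrt(2 * math.log(2) * POOL)
```

The takeaway: a duplicate becomes more likely than not after only a few tens of thousands of rows, tiny compared to the pool size, which is why collision handling (or a non-random scheme) matters.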
Why are you using random numbers instead of simpler concepts that - at least by default - help assure uniqueness in a much more predictable way, like `IDENTITY` or `SEQUENCE`? Is it to prevent people from guessing the next value, or being able to determine how many values you generate in a time period? If so, then pre-populate a table with a bunch of random values, and pull one off the stack when you need one, [as I described here](http://www.sqlperformance.com/2013/09/t-sql-queries/random-collisions). If this isn't the crucial issue, then stop breaking your back and just use an existing methodology for generating unique - and not random - numbers. | Again bad choice for generating a unique ID
But with that said, this does not throw an error, so I think something else is going on:
```
declare @id int
set @id = ABS(CHECKSUM(NEWID()))
print @id
```
Your update (that you don't want users to know how many accounts you have, and that custID is an identity column) should have been in the original problem statement. | Is INT the correct datatype for ABS(CHECKSUM(NEWID()))? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to export the results of my query to an excel file by clicking on a button in a form.
For this I used this code and it works well:
```
Private Sub Command9_Click()
On Error GoTo ProcError
DoCmd.OutputTo _
ObjectType:=acOutputQuery, _
ObjectName:="Contract Type Billing", _
OutputFormat:=acFormatXLSX, _
Autostart:=True
ExitProc:
Exit Sub
ProcError:
Select Case Err.Number
Case 2501 'User clicked on Cancel
Case Else
MsgBox "Error " & Err.Number & ": " & Err.Description, vbCritical, _
"Error in cmdExportQuery_Click event procedure..."
End Select
Resume ExitProc
End Sub
```
But my query uses 2 parameters, **sdate** and **edate**. I don't want Access to ask me for these values; I want the user to enter them in the form via the appropriate textboxes.
So I added this bit to the code before DoCmd.OutputTo:
```
Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.QueryDefs("Contract Type Billing")
qdf.Parameters("sdate") = sdate.Value
qdf.Parameters("edate") = edate.Value
```
But unfortunately it doesn't work. How can I put the parameters into my query before I export it? | If you wanted to keep your original parameter query intact, you could create a temporary QueryDef to dump the data into a temporary table, and then output the temporary table to Excel:
```
Dim cdb As DAO.Database, qdf As DAO.QueryDef
Const tempTableName = "_tempTbl"
Set cdb = CurrentDb
On Error Resume Next
DoCmd.DeleteObject acTable, tempTableName
On Error GoTo 0
Set qdf = cdb.CreateQueryDef("")
qdf.SQL = "SELECT * INTO [" & tempTableName & "] FROM [Contract Type Billing]"
qdf.Parameters("sdate").Value = DateSerial(2013, 1, 3) ' test data
qdf.Parameters("edate").Value = DateSerial(2013, 1, 5)
qdf.Execute
Set qdf = Nothing
Set cdb = Nothing
DoCmd.OutputTo acOutputTable, tempTableName, acFormatXLSX, "C:\__tmp\foo.xlsx", True
``` | I've bumped into the same problem, and instead of using parameters I'd rather insert the WHERE criteria into the SQL script and export the query result into Excel directly (of course, you'll have to define a target file). Assuming that the date field in Contract Type Billing is named dDate:
```
Set qdf = CurrentDb.CreateQueryDef("qTempQuery")
qdf.SQL = "SELECT * FROM [Contract Type Billing] WHERE ((([Contract Type Billing].dDate)>#" _
& cdate(sdate.value) & "# And ([Contract Type Billing].dDate)<#" & cdate(edate.value) & "#));"
DoCmd.OutputTo acOutputQuery, "qTempQuery", "ExcelWorkbook(*.xlsx)", OutputFileName, False
DoCmd.DeleteObject acQuery, "qTempQuery"
Set qdf = Nothing
``` | Exporting a parameterized query to excel using vba | [
"",
"sql",
"excel",
"vba",
"ms-access",
""
] |
I've come across this problem numerous times but haven't found a "MySQL way" to solve the issue as such - I have a database that contains **users** and **reports**. Each **report** has an **id** which I display as a **report number** to my users.
The main complaint is that users are confused as to why reports have gone missing from their system. This is not actually the case. They are seeing a gap between their IDs and assume these are missing reports, when in fact it is simply because another user has filled in this auto-incrementing gap.
I need to know if there is a way to do this in MySQL:
Is it possible to have a **second** auto-increment field called **report\_number**, **based on the user\_id** field, with a different set of auto-increments per user?
e.g.
```
|------|---------|---------------|
| id | user_id | report_number |
|------|---------|---------------|
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 1 | 3 |
| 4 | 2 | 1 |
| 5 | 1 | 4 |
| 6 | 1 | 5 |
| 7 | 2 | 2 |
| 8 | 3 | 1 |
| 9 | 3 | 2 |
|------|---------|---------------|
```
I am using **InnoDB** for this, as the schema is quite heavily weighted with foreign keys. It complains when I add a second auto-increment field, but I wasn't sure if there was a different way to do this. | `MyISAM` supports a second auto-increment column, but **`InnoDB` doesn't**.
For InnoDB you might create a `BEFORE INSERT` trigger that gets the max value of `report_id` for the user and adds one to it.
```
DELIMITER $$
CREATE TRIGGER report_trigger
BEFORE INSERT ON reports
FOR EACH ROW BEGIN
SET NEW.`report_id` = (SELECT MAX(report_id) + 1 FROM reports WHERE user_id = NEW.user_id);
END $$
DELIMITER ;
```
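As an illustration of what the trigger computes, here is a small Python model of per-user numbering over an invented insert order. Note that in real SQL, `MAX()` over zero rows yields NULL, so the model's fallback to 1 is something an actual trigger must handle explicitly:

```python
# Model of the BEFORE INSERT trigger: each new row gets
# MAX(report_id) + 1 among the existing rows for the same user.
rows = []  # (user_id, report_number)

def next_report_number(user_id):
    existing = [r for (u, r) in rows if u == user_id]
    # Fallback to 1 for a user's first row, where SQL's MAX() is NULL.
    return max(existing) + 1 if existing else 1

# Same insert order as the question's sample table.
for uid in [1, 1, 1, 2, 1, 1, 2, 3, 3]:
    rows.append((uid, next_report_number(uid)))
```

Running this reproduces the per-user sequences 1..5 for user 1, 1..2 for users 2 and 3, matching the question's desired table.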
If you can use MyISAM instead, in the documentation of MySQL page there is an example:
<http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html>
```
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
```
Which returns:
```
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
``` | Here is the right one, with IFNULL:
```
DELIMITER $$
CREATE TRIGGER salons_trigger
BEFORE INSERT ON salon
FOR EACH ROW BEGIN
SET NEW.salon_id = IFNULL((SELECT MAX(salon_id) + 1 FROM salon WHERE owner = NEW.owner), 1);
END $$
DELIMITER ;
``` | MySQL second auto increment field based on foreign key | [
"",
"mysql",
"sql",
"database-design",
"relational-database",
"innodb",
""
] |
I have a relation Presidents(firstName,lastName,beginTerm,endTerm)
that gives information about US Presidents. Attribute firstName is a string
with the first name, and in some cases, one or more
middle initials.
Attribute lastName is a string with the last name of the president. For example,
the previous president has firstName = 'George W.' and his father has firstName = 'George H.W.'; both have lastName = 'Bush'. The last 2 attributes, beginTerm and endTerm,
are the years the president entered and left office, respectively.
One subtlety is that Grover Cleveland served 2 noncontiguous
terms. He appears in 2 tuples, one with the beginning and ending years of his first term and the other for the second term.
The question I have is below:
There are 2 pairs of presidents that were father and son. But there are
a number of other pairs of presidents that shared a last name. Find all the last names belonging to 2 or more Presidents. Do not repeat a last name, and remember that the same person serving 2 different terms (e.g., Grover Cleveland) does not constitute a case of 2 presidents with the same last name.
I first thought the answer might be:
```
SELECT lastName
FROM Presidents
WHERE COUNT(lastName) > 2
EXCEPT lastName = 'Cleveland';
```
I'm not too sure if the COUNT() function can be used in the WHERE clause though.
Is this possible?
Thanks! | Use HAVING instead of WHERE when checking against Group functions.
```
SELECT lastName
FROM Presidents
WHERE lastName != 'Cleveland'
GROUP BY lastName
HAVING COUNT(lastName) >= 2;
```
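To make the intent concrete, here is a Python sketch over a few invented sample tuples, counting *distinct* people per last name so that repeated terms (like Cleveland's) don't inflate the count:

```python
# Sample tuples (firstName, lastName); the duplicate Cleveland row
# models his two noncontiguous terms. Illustrative data only.
presidents = [
    ("John", "Adams"), ("John Quincy", "Adams"),
    ("Grover", "Cleveland"), ("Grover", "Cleveland"),
    ("George H.W.", "Bush"), ("George W.", "Bush"),
    ("Abraham", "Lincoln"),
]

# A last name qualifies when it belongs to >= 2 *distinct* people,
# so Cleveland's repeated terms do not count.
shared = sorted(
    last
    for last in {l for _, l in presidents}
    if len({f for f, l2 in presidents if l2 == last}) >= 2
)
```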
However, when solving SQL puzzles like this, you should never take the actual data into account. It should work for all consistent data sets! I believe this is an actual solution to your problem:
```
SELECT DISTINCT p1.lastName
FROM Presidents p1, Presidents p2
WHERE p1.lastName = p2.LastName
AND p1.firstName != p2.firstName;
``` | You constrain on aggregates using `HAVING`, and you are also missing a group by.
```
SELECT lastName
FROM Presidents
where lastName <> 'Cleveland'
group by lastname
having COUNT(lastName) >= 2
``` | Can I use COUNT() function in the WHERE clause? Query about Presidents | [
"",
"sql",
"count",
"where-clause",
""
] |
I have a mysql table with columns `userid` and `createdon` where `createdon` column has `DATE` datatype.
```
---------------------------------------------------------------------------------
ID UserId CreatedOn
1 65 2013-06-13
2 54 2013-07-03
3 34 2013-08-23
4 65 2013-09-13
5 89 2013-09-13
```
Now I want the userids whose **last** createdon was before 2013-09-08. The correct answer will be userids 54 and 34.
Using
```
select userid from table where createdon <'2013-09-08'
```
returns 65,54,34, and using
```
select userid from table where userid not in (select userid from table where createdon > '2013-09-07')
```
takes a lot of time.
How do I get the rows where the last createdon < 2013-09-08? | Try
```
SELECT UserID, MAX(CreatedOn) CreatedOn
FROM table1
GROUP BY UserId
HAVING CreatedOn < '2013-09-08'
```
Output:
```
| USERID | CREATEDON |
|--------|-------------------------------|
| 34 | August, 23 2013 00:00:00+0000 |
| 54 | July, 03 2013 00:00:00+0000 |
```
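The group-then-filter logic can be sketched in Python using the question's sample rows. The sketch relies on the fact that ISO-8601 date strings compare correctly as plain strings:

```python
# The sample rows from the question: (id, user_id, created_on).
rows = [
    (1, 65, "2013-06-13"),
    (2, 54, "2013-07-03"),
    (3, 34, "2013-08-23"),
    (4, 65, "2013-09-13"),
    (5, 89, "2013-09-13"),
]

# GROUP BY user_id taking MAX(created_on), then keep only the groups
# whose maximum falls before the cutoff (the HAVING clause).
cutoff = "2013-09-08"
latest = {}
for _id, uid, created in rows:
    latest[uid] = max(latest.get(uid, ""), created)

stale_users = sorted(uid for uid, last in latest.items() if last < cutoff)
```

This yields users 34 and 54, as the question expects: users 65 and 89 are excluded because each has at least one row on or after the cutoff.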
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/0ec9a4/1)** demo | Your expectation is wrong. The first row of the table you posted would meet your criteria (`'2013-06-13' < '2013-09-08'`). So a resultset with `65,54,34` is as expected.
Misunderstood the question.
What you want is:
```
SELECT DISTINCT
UserId
FROM
tbl outerTbl
WHERE NOT EXISTS (
SELECT *
FROM tbl innerTbl
WHERE
outerTbl.UserId = innerTbl.UserId
AND innerTbl.createdOn >= '2013-09-08'
)
``` | 'NOT GREATER THAN' condition Mysql | [
"",
"mysql",
"sql",
""
] |
I've created a View with a SQL query:
```
CREATE VIEW rightview
AS
SELECT data.accounts.login,data.accounts.password,data.customers.right_
FROM data.accounts join data.customers using (id);
```
When I click Execute, it indicates that the view was created successfully. But when I try to display its columns with this query
```
select * from data.rightview;
```
I got this error:
> Error code 1356, SQL state HY000: View 'data.rightview' references
> invalid table(s) or column(s) or function(s) or definer/invoker of
> view lack rights to use them | IMHO, better would be:
```
CREATE VIEW rightview
AS
SELECT acc.login, acc.password, cust.right_
FROM data.accounts acc
LEFT JOIN data.customers cust on acc.id=cust.id
```
if the customers table includes an ID column that references the accounts table... | > **Try this one..**
```
CREATE VIEW rightview
AS SELECT a.login,a.password,b.right_
FROM accounts a INNER JOIN customers b
on a.id = b.id;
```
> **then,**
```
select * from rightview;
```
> **will work..** | Error while displaying a view sql | [
"",
"mysql",
"sql",
"view",
""
] |
I have an SQL statement from a colleague overseas, but when I try to run it, a box pops up that says "Enter Substitution Variable". What do I put in it? My colleague says that they do not get the same box. What causes this message? I am providing the statement below. Any help is greatly appreciated.
Also, line three gives a message saying missing right parenthesis.
```
select SUBSTR(created_ts, 1, 7) as created_month,
count(t1.id) as mops_created,
count(completed_ts <> '') as num_complete,
round(AVG(DATEDIFF(planning_complete_ts, created_ts)), 1) as average_created_to_planning_complete,
round(AVG(DATEDIFF(review_complete_ts, planning_complete_ts)), 1) as average_planning_complete_to_review_complete,
round(AVG(DATEDIFF(scheduling_complete_ts, planning_complete_ts)), 1) as average_planning_complete_to_scheduling_complete,
round(AVG(DATEDIFF(scheduled_ts, scheduling_complete_ts)), 1) as average_scheduled_for_x_days_out,
round(AVG(DATEDIFF(completed_ts, planning_complete_ts)), 1) as average_planning_complete_to_mop_complete,
round(AVG(planning_complete_num), 1) as average_num_times_planning_complete,
max(planning_complete_num) as max_planning_complete,
round(AVG(scheduling_complete_num), 1) as average_num_time_scheduled,
max(scheduling_complete_num) as max_scheduled_num
FROM
(select a.id, a.created_ts, a.scheduled_ts, work_start_ts, completed_ts
from TRAFFIC_ENG.DSIS_CC_MASTER as a
WHERE
a.created_ts >= '2013-01-01 00:00:00'
AND
a.status_cc_options_id NOT IN (810, 820)) as t1
LEFT JOIN
(select a.id, max(c.orig_date) as review_complete_ts
from
TRAFFIC_ENG.DSIS_CC_MASTER as a
INNER JOIN
TRAFFIC_ENG.DSIS_CC_LOG_NOTE as c
ON c.cc_master_id = a.id
AND (c.cc_log_note_type_id = 8 OR c.note = 'Passed Review & submitted to Scheduling')
WHERE
a.created_ts >= '2013-01-01 00:00:00'
AND
a.status_cc_options_id NOT IN (810, 820)
GROUP BY a.id) as review_complete
ON review_complete.id = t1.id
LEFT JOIN
(
select a.id, min(c.orig_date) as planning_complete_ts
from
TRAFFIC_ENG.DSIS_CC_MASTER as a
INNER JOIN
TRAFFIC_ENG.DSIS_CC_LOG_NOTE as c
ON c.cc_master_id = a.id
AND (c.cc_log_note_type_id = 5 OR c.note = 'Submitted for Review')
WHERE
a.created_ts >= '2013-01-01 00:00:00'
AND
a.status_cc_options_id NOT IN (810, 820)
GROUP BY a.id
) as planning_complete
ON planning_complete.id = t1.id
LEFT JOIN
(
select a.id, (c.orig_date) as scheduling_complete_ts
from
TRAFFIC_ENG.DSIS_CC_MASTER as a
INNER JOIN
TRAFFIC_ENG.DSIS_CC_LOG_NOTE as c
ON c.cc_master_id = a.id
AND (c.cc_log_note_type_id = 7 OR c.note='Scheduled')
WHERE
a.created_ts >= '2013-01-01 00:00:00'
AND
a.status_cc_options_id NOT IN (810, 820)
GROUP BY a.id
) as scheduling_complete
ON scheduling_complete.id = t1.id
LEFT JOIN
(
select a.id, count(c.orig_date) as planning_complete_num
from
TRAFFIC_ENG.DSIS_CC_MASTER as a
INNER JOIN
TRAFFIC_ENG.DSIS_CC_LOG_NOTE as c
ON c.cc_master_id = a.id
AND (c.cc_log_note_type_id = 5 OR c.note = 'Submitted for Review')
WHERE
a.created_ts >= '2013-01-01 00:00:00'
AND
a.status_cc_options_id NOT IN (810, 820)
GROUP BY a.id
) as planning_complete_num
ON planning_complete_num.id = t1.id
LEFT JOIN
(
select a.id, count(c.orig_date) as scheduling_complete_num
from
TRAFFIC_ENG.DSIS_CC_MASTER as a
INNER JOIN
TRAFFIC_ENG.DSIS_CC_LOG_NOTE as c
ON c.cc_master_id = a.id
AND (c.cc_log_note_type_id = 7 OR c.note='Scheduled')
WHERE
a.created_ts >= '2013-01-01 00:00:00'
AND
a.status_cc_options_id NOT IN (810, 820)
GROUP BY a.id
) as scheduling_complete_num
ON scheduling_complete_num.id = t1.id
GROUP BY created_month
As @the\_slk said, you can turn off [substitution variables](http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_five.htm#CACIFHGB) with [`set define off`](http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_twelve040.htm#SQPUG073). That will stop the client from interpreting the `&` in the string `'Passed Review & submitted to Scheduling'` as user input. It sounds like you might be using SQL Developer, but much of the SQL\*Plus documentation is still relevant. If you're using a different client, it might have a preference to turn this off instead.
The second part of the question, about 'missing right parenthesis', looks like it might be because you have blank lines in your code. I think your client is treating each section of the code, separated by blank lines, as separate statements - I make that four the way you have it formatted, but I'm not sure that quite aligns with the error. SQL\*Plus does that by default. Anyway, only the final block is actually being executed, but if it was really just running `GROUP BY created_month` you'd get a different error, so whatever your client is, it seems to be behaving a bit differently, or you have more code after this that you haven't shown.
To make SQL\*Plus ignore blank lines, and treat code separated by blank lines as one block, [`set sqlblanklines on`](http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_twelve040.htm#i2678904). SQL Developer doesn't use that setting and lets you use blank lines anyway, so maybe your client is something else, and will either use the same command or have its own preference. If you are using the same client as your colleagues, ask them how they have their preferences set. If your client doesn't have an obvious way to change the behaviour, then removing the blank lines from the script might solve this.
```
SQL> SET SERVEROUTPUT ON
SQL> SET DEFINE OFF
SQL> BEGIN
2 DBMS_OUTPUT.PUT_LINE('A&B');
3 END;
4 /
A&B
PL/SQL procedure successfully completed.
``` | Enter Substitution Variable | [
"",
"sql",
"oracle",
""
] |
I have very limited knowledge of SQL and have tried 4 different similar solutions offered here and I'm still getting error messages.
I'm using SQL2005.
I want to run a query on a database table (named `plans`) where we store information about students' learning plans. I'd like the query to only grab the most recent plan for a student. There are many columns, but the columns I'd like to get at for now are...
```
PlanID
PlanStatus
PlanDate
PlanEndDate
StudentID
Firstname
Lastname
```
I'm hoping to pull just one plan per `StudentID` (the most recent plan). I was planning on using the `PlanDate` to determine what is the most recent plan. I've tried various `JOINS` and `MAX` statements that I've seen in similar questions, and each one returns an error message for me. | ```
SELECT A.*
FROM [plans] A
LEFT OUTER JOIN [plans] B ON B.PlanDate > A.PlanDate
AND B.StudentID = A.StudentID
WHERE B.PlanID IS NULL
```
This is saying, basically, give me all rows for which there isn't a row with a higher plan date and the same student ID.
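The anti-join condition (keep a row only if no row with the same student and a later date exists) can be sketched in Python with invented sample plans:

```python
# plans: (plan_id, student_id, plan_date) -- illustrative sample data.
plans = [
    (1, 100, "2013-01-10"),
    (2, 100, "2013-05-02"),
    (3, 200, "2013-03-15"),
    (4, 200, "2013-02-01"),
]

# Keep a row only if no other row exists with the same student and a
# later date: the condition the LEFT JOIN ... IS NULL pattern expresses.
most_recent = [
    (pid, sid, d)
    for (pid, sid, d) in plans
    if not any(sid2 == sid and d2 > d for (_, sid2, d2) in plans)
]
```

Each student survives with exactly one row: the one whose date no other row of theirs exceeds.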
Another, equivalent way of writing this query that is perhaps easier for a human to understand but runs much, much slower in Sql Server would be
```
SELECT A.*
FROM [plans] A
WHERE NOT EXISTS (SELECT *
FROM [plans] B
WHERE B.PlanDate > A.PlanDate
AND B.StudentID = A.StudentID)
``` | This one assumes that there can be PlanDates in the future and 'most recent' means 'the latest, earlier than today'.
```
WITH CTE AS
(
SELECT
P.StudentID,
MAX(P.PlanDate) AS MostRecentPlanDate
FROM plans P
WHERE P.PlanDate < GETDATE()
GROUP BY P.StudentID
)
SELECT P.*
FROM
plans P
INNER JOIN CTE ON CTE.StudentID = P.StudentID
AND CTE.MostRecentPlanDate = p.PlanDate
```
* I made a correction and here is the fiddle:
<http://sqlfiddle.com/#!3/6eb38/3>
The CTE (Common Table Expression) selects each student-ID and the corresponding maximum date, but only from those before today. In the main query, this is used by an inner join to filter the wanted rows. | Selecting the most current plan for a student | [
"",
"sql",
"sql-server-2005",
""
] |
I have been racking my brain searching the internet for a solution, but to no avail; I have been unsuccessful.
```
strSQL = "Update tTbl_LoginPermissions SET LoginName = '" & StrUserName & "', PWD = '" & StrPWD & "', fldPWDDate = '" & Now() & "'" & _
"WHERE intLoginPermUserID = " & MyMSIDColumn0
```
Once I get the error out, I would like to actually use this where clause:
```
'WHERE intLoginPermUserID IN (SELECT intCPIIUserID From vw_ADMIN_Frm_LoginBuilder)
```
Here is the entire code:
```
Dim con As ADODB.Connection
Dim cmd As ADODB.Command
Dim strSQL As String
Const cSQLConn = "DRIVER=SQL Server;SERVER=dbswd0027;UID=Mickey01;PWD=Mouse02;DATABASE=Regulatory;"
Dim StrUserName As String, StrPWD As String
'passing variables
StrUserName = FindUserName()
StrPWD = EncryptKey(Me.TxtConPWD)
'Declaring the SQL expression to be executed by the server
strSQL = "Update tTbl_LoginPermissions SET LoginName = '" & StrUserName & "', PWD = '" & StrPWD & "', fldPWDDate = '" & Now() & "#" & _
"WHERE intLoginPermUserID = " & MyMSIDColumn0
'WHERE intLoginPermUserID = ANY (SELECT intCPIIUserID From vw_ADMIN_Frm_LoginBuilder)
Debug.Print strSQL
'connect to SQL Server
Set con = New ADODB.Connection
With con
.ConnectionString = cSQLConn
.Open
End With
'write back
Set cmd = New ADODB.Command
With cmd
.ActiveConnection = con
.CommandText = strSQL
.CommandType = adCmdText
.Execute
Debug.Print strSQL
End With
'close connections
con.Close
Set cmd = Nothing
Set con = Nothing
MsgBox "You password has been set", vbInformation + vbOKOnly, "New Password"
```
NEWEST CODE Producing Error:
```
'/Declaring the SQL expression to be executed by the server
strSQL = "Update dbo_tTbl_LoginPermissions " _
& "SET LoginName = '" & StrUserName & "' " _
& "SET PWD = '" & StrPWD & "' " _
& "SET fldPWDDate = '" & Now() & "' " _
& "WHERE intLoginPermUserID = 3;"
```
I have gone to this site to try to figure out my mistake, but I still cannot figure it out: | After much dilberation and help, it turns out the `FindUserName` that utilizes a Win32API function was not trimming the Username appropriately.
I changed it to the following:
```
Public Function FindUserName() As String
' This procedure uses the Win32API function GetUserName
' to return the name of the user currently logged on to
' this machine. The Declare statement for the API function
' is located in the Declarations section of this module.
Dim strBuffer As String
Dim lngSize As Long
strBuffer = Space$(255)
lngSize = Len(strBuffer)
If GetUserName(strBuffer, lngSize) = 1 Then
FindUserName = Left$(strBuffer, lngSize - 1)
Else
FindUserName = "User Name not available"
End If
End Function
Public Declare Function GetUserName Lib "advapi32.dll" Alias "GetUserNameA" (ByVal lpBuffer As String, nSize As Long) As Long
``` | Try this:
```
strSQL = "Update tTbl_LoginPermissions SET LoginName = '" & replace(StrUserName, "'", "''") & "', PWD = '" & replace(StrPWD, "'", "''") & "', fldPWDDate = '" & Now() & "'" & _
"WHERE intLoginPermUserID = " & MyMSIDColumn0
``` | Unclosed Quotation mark after the character string 'test' | [
"",
"sql",
"ms-access",
"vba",
""
] |
I do not know how to phrase this question so it makes sense but the problem is probably best understood through the example below.
My table is structured in such a way that an ID can have different row values:
```
PK ID VALUE
1 160487 10122
2 160487 MF
3 166980 10147
4 166980 MF
5 166986 10147
6 166986 MF
7 166695 10121
```
I need to select a list of the *numeric* values and the corresponding ID number for every ID that has the value "MF" attributed to it:
```
PK ID VALUE
1 160487 10122
3 166980 10147
5 166986 10147
```
How do I approach this problem? I use SQL Server 2005. | Here is one way with `IN`:
```
select *
from yourtable
where isnumeric(value) = 1
and id in (select id from yourtable where value = 'mf')
```
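Modelled in Python with the question's sample rows; note that `str.isdigit()` here is only a rough stand-in for T-SQL's looser `ISNUMERIC()` check:

```python
# Sample rows from the question: (pk, id, value).
rows = [
    (1, 160487, "10122"), (2, 160487, "MF"),
    (3, 166980, "10147"), (4, 166980, "MF"),
    (5, 166986, "10147"), (6, 166986, "MF"),
    (7, 166695, "10121"),
]

mf_ids = {i for (_, i, v) in rows if v == "MF"}  # the IN subquery

# Keep numeric-valued rows whose id also appears with an 'MF' row.
result = [(pk, i, v) for (pk, i, v) in rows if v.isdigit() and i in mf_ids]
```

Row 7 (id 166695) drops out because that id never carries an "MF" value, reproducing the question's expected output.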
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/848b0/1) | If you always have two records where one has `MF` as value and the other is numeric you can use:
```
SELECT PK, ID, VALUE
FROM dbo.TableName t
WHERE EXISTS(SELECT 1 FROM dbo.TableName t2
WHERE t2.ID=t.ID AND t2.VALUE='MF')
AND ISNUMERIC(t.VALUE) = 1
```
[**Sql-Fiddle demo**](http://sqlfiddle.com/#!6/b82fc/2/0)
If you have multiple records with the same ID and with numeric values, and you want to see only one, you need to `GROUP BY ID` or use `ROW_NUMBER` in a CTE. However, if that's not the case, the above is easier. | Filter a table with column values represented as rows | [
"",
"sql",
"sql-server",
""
] |
I have a table with:
Name Surname and Outputname ( = Name Surname )
Name and Surname should always have a value, but sometimes they are empty.
I need to select from the table by surname, and I can't use the Surname field.
I can't even change the field values with a script, because I get this table from an outside source and it can change at any time.
Database is MySQL 4.x.
Can I select by the second word in Outputname starting with some letter?
something like
```
SELECT Outputname FROM USERS
WHERE seconword(Outputname) LIKE 'A%' SORT BY seconword(Outputname) ASC
``` | try this
```
SELECT
SUBSTRING_INDEX(Outputname, ' ', -1) as SecondWord
FROM USERS
WHERE SUBSTRING_INDEX(Outputname, ' ', -1) LIKE 'A%'
ORDER BY SecondWord ASC
```
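A Python sketch of the same idea, with invented names: `rsplit` with a maxsplit of 1 mirrors `SUBSTRING_INDEX(..., ' ', -1)` taking the last word:

```python
# SUBSTRING_INDEX(Outputname, ' ', -1) returns everything after the
# last space; rsplit(maxsplit=1) models the same behaviour.
users = ["Bob Adams", "Carol Smith", "Dave Anderson", "Eve"]

def surname(outputname):
    # Last word of the name, or the whole string if there is no space.
    return outputname.rsplit(" ", 1)[-1]

# Filter by surname initial, then sort by surname, as in the query.
matches = sorted((n for n in users if surname(n).startswith("A")),
                 key=surname)
```

Note the edge case "Eve": with no space, the whole string acts as the surname, which matches how `SUBSTRING_INDEX` behaves when the delimiter is absent.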
[**demo**](http://sqlfiddle.com/#!2/051fb/1) | One possible approach:
```
SELECT Surname
FROM (
SELECT SUBSTRING_INDEX(Outputname, ' ', -1)
AS Surname
FROM Users) AS S
WHERE Surname LIKE 'A%'
ORDER BY Surname;
```
[**SQL Fiddle**](http://sqlfiddle.com/#!2/e1d7b/1). This method is based on assumption that Outputname's format is always `'FirstName LastName'` (i.e., `' '` symbol is used as a delimiter, and used only once each time). | mysql query where second words starts with a letter | [
"",
"mysql",
"sql",
""
] |
I have three tables and I'm trying to return details from two of them; however, they do not have a direct link, so I have to use a common third table to join them. I'm using SQL Developer - Oracle 11g.
Here is a very simple view of the tables:
```
Country
CID | CName
-------------
ENG | England
FRA | France
Branch
BID | Branch Name | RegionID | CID
------------------------------
B1 | ABC | R1 | ENG
B2 | DEF | R1 | ENG
B3 | GHI | R2 | FRA
Region
RegionID| RegionName
------------------------------
R1 | UK
R2 | CEurope
```
This is just a very basic sample to illustrate. I want the query to return:
```
RegionID| RegionName | CID | CName
------------------------------
R1 | UK | ENG | England
R2 | CEurope | FRA | France
```
So I want to return data from the Region table and the Country table, using Branch as a common link.
Here is my current code, which doesn't seem to be working:
```
Select
c.CID,
c.CName,
r.RegionID,
r.RegionName
FROM
Regions r inner join
(
Branch b inner join Countries c on c.CID = b.BID
)
on b.RegionID = r.RegionID;
``` | ```
SELECT
c.CID,
c.CName,
r.RegionID,
r.RegionName
FROM
Regions r
INNER JOIN Branch b ON b.RegionID = r.RegionID
INNER JOIN Countries c ON c.CID = b.CID;
``` | This is not how we do join in SQL. You may consider having a look at this simple [reference](http://www.techonthenet.com/sql/joins.php). For your script, whether you need to use mid-table or not; you need to specify which column you are going to link using `ON` phrase. the following could work:
```
SELECT c.cid,
c.cname,
r.regionid,
r.regionname
FROM regions r
INNER JOIN branch b
ON b.regionid = r.regionid
INNER JOIN countries c
ON c.cid = b.cid;
``` | Joining 2 tables in SQL using a common third | [
"",
"sql",
"join",
"oracle11g",
"inner-join",
""
] |
If I were to have these three tables (just an example in order to learn UNION, these are not real tables):
**Tables with their columns:**
```
Customer:
id | name | order_status
Order_Web:
id | customer_id | order_filled
Order:
id | customer_id | order_filled
```
And I wanted to update order\_status in the Customer table when there is an order filled in either the Order\_Web table or the Order table for that customer, using UNION:
```
UPDATE c
SET c.order_status = 1
FROM Customer AS c
INNER JOIN Order_Web As ow
ON c.id = ow.customer_id
WHERE ow.order_filled = 1
UPDATE c
SET c.order_status = 1
FROM Customer AS c
INNER JOIN Order As o
ON c.id = o.customer_id
WHERE o.order_filled = 1
```
How can I combine these two updates using a Union on order\_web and order?
Using Microsoft SQL Server Management Studio | You do not need a `UNION` for that - replacing an inner join with a pair of outer ones should do it:
```
UPDATE c
SET c.order_status = 1
FROM Customer AS c
LEFT OUTER JOIN Order_Web As ow ON c.id = ow.customer_id
LEFT OUTER JOIN Order As o ON c.id = o.customer_id
WHERE ow.order_filled = 1 OR o.order_filled = 1
```
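The customers selected by this OR condition are just the union of the two per-table sets of customer ids; here is an illustrative Python sketch with invented sample orders:

```python
# (order_id, customer_id, order_filled) -- illustrative sample data.
web_orders = [(1, 10, 1), (2, 11, 0)]
orders = [(1, 12, 1), (2, 10, 0)]

# UNION of the two filtered customer-id sets: the same customers the
# OR across the two outer joins selects.
filled = ({c for (_, c, f) in web_orders if f == 1}
          | {c for (_, c, f) in orders if f == 1})

order_status = {10: 0, 11: 0, 12: 0, 13: 0}  # customer id -> status
for cid in filled:
    if cid in order_status:
        order_status[cid] = 1
```

Customer 10 qualifies through the web table, customer 12 through the other one; customers 11 and 13 stay at status 0 because neither table has a filled order for them.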
You could also use a `WHERE EXISTS`, like this:
```
UPDATE c
SET c.order_status = 1
FROM Customer AS c
WHERE EXISTS (
SELECT 1 FROM Order_Web As ow WHERE c.id = ow.customer_id AND ow.order_filled = 1
) OR EXISTS (
SELECT 1 FROM Order As o WHERE c.id = o.customer_id AND o.order_filled = 1
)
```
If you must use `UNION`, you can do it as follows:
```
UPDATE c
SET c.order_status = 1
FROM Customer AS c
WHERE c.id in (
SELECT ow.customer_id FROM Order_Web As ow WHERE ow.order_filled = 1
UNION
SELECT o.customer_id FROM Order As o WHERE o.order_filled = 1
)
```
or the same one with a `JOIN`:
```
UPDATE c
SET c.order_status = 1
FROM Customer AS c
JOIN (
SELECT ow.customer_id AS id FROM Order_Web As ow WHERE ow.order_filled = 1
UNION
SELECT o.customer_id AS id FROM Order As o WHERE o.order_filled = 1
) AS ids ON ids.id = c.id
``` | ```
UPDATE c
SET c.order_status = 1
FROM (
SELECT customer_id
FROM order_web
WHERE order_filled = 1
UNION
SELECT customer_id
FROM order
WHERE order_filled = 1
) o
JOIN customer c
ON c.id = o.customer_id
``` | Performing an UPDATE with Union in SQL | [
"",
"sql",
"sql-update",
"union",
""
] |
I have a batch file which runs a SQL script in sqlplus and sends the output to a log file:
> sqlplus user/pw < RowCount.sql > RowCount.log
My log file contains this:
```
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
SQL> SQL>
COUNT(*)
----------
0
SQL>
COUNT(*)
----------
0
```
etc., but it's several thousand lines of output and therefore hard to determine which results belong to which statement.
I would like to add some formatting to the output, so that I may discern what happened. Either an echo of the executed statement or manually inserting some "echo" statements into the script would be fine. Ideally it would look something like this:
```
SQL> select count(*) from TableA;
COUNT(*)
----------
0
SQL> select count(*) from TableB;
COUNT(*)
----------
0
``` | You can use `SET ECHO ON` in the beginning of your script to achieve that, however, you have to specify your script using `@` instead of `<` (also had to add `EXIT` at the end):
**test.sql**
```
SET ECHO ON
SELECT COUNT(1) FROM dual;
SELECT COUNT(1) FROM (SELECT 1 FROM dual UNION SELECT 2 FROM dual);
EXIT
```
**terminal**
```
sqlplus hr/oracle@orcl @/tmp/test.sql > /tmp/test.log
```
**test.log**
```
SQL>
SQL> SELECT COUNT(1) FROM dual;
COUNT(1)
----------
1
SQL>
SQL> SELECT COUNT(1) FROM (SELECT 1 FROM dual UNION SELECT 2 FROM dual);
COUNT(1)
----------
2
SQL>
SQL> EXIT
``` | The **prompt** command will echo text to the output:
```
prompt A useful comment.
select count(*) from TableA;
```
Will be displayed as:
```
SQL> A useful comment.
SQL>
COUNT(*)
----------
0
``` | How to echo text during SQL script execution in SQLPLUS | [
"sql",
"oracle11g",
"sqlplus"
] |
I have a query that takes 20 minutes to execute... I remember in one project we used `/*+ PARALLEL(T,8) */`, or we would use the WITH clause and `/*+ materialize */`, and it would make the query response time really fast, within seconds. How can I apply this to this query?
```
select count(*) from (
select hdr.ACCESS_IND,
hdr.SID,
hdr.CLLI,
hdr.DA,
hdr.TAPER_CODE,
hdr.CFG_TYPE as CFG_TYPE,
hdr.IP_ADDR,
hdr.IOS_VERSION,
hdr.ADMIN_STATE,
hdr.WIRE_CENTER,
substr(hdr.SID_IO_PRI, 1, 8) PRI_IO_CLLI,
substr(hdr.SID_IO_SEC, 1, 8) SEC_IO_CLLI,
hdr.VHO_CLLI ,
hdr.CFG_TYPE ,
-- dtl.MULTIPURPOSE_IND,
lkup.code3 as shelf_type
from RPT_7330_HDR hdr
INNER JOIN RPT_7330_DTL dtl on hdr.EID = dtl.EID
INNER JOIN CODE_LKUP2 lkup ON LKUP.CODE1 = hdr.ACCESS_IND
where LKUP.CATEGORY='ACCESS_MAPPING' and hdr.DT_MODIFIED = (select DT_MODIFIED
from LS_DT_MODIFIED
where NAME = 'RPT_7330_HDR')) n;
```
 | If you want data then
```
SELECT /*+ PARALLEL(DTL,4) */
HDR.ACCESS_IND,
HDR.SID,
HDR.CLLI,
HDR.DA,
HDR.TAPER_CODE,
HDR.CFG_TYPE AS CFG_TYPE,
HDR.IP_ADDR,
HDR.IOS_VERSION,
HDR.ADMIN_STATE,
HDR.WIRE_CENTER,
SUBSTR ( HDR.SID_IO_PRI, 1, 8 ) PRI_IO_CLLI,
SUBSTR ( HDR.SID_IO_SEC, 1, 8 ) SEC_IO_CLLI,
HDR.VHO_CLLI,
HDR.CFG_TYPE,
LKUP.CODE3 AS SHELF_TYPE
FROM
RPT_7330_HDR HDR INNER JOIN RPT_7330_DTL DTL ON HDR.EID = DTL.EID
INNER JOIN CODE_LKUP2 LKUP ON LKUP.CODE1 = HDR.ACCESS_IND
INNER JOIN LS_DT_MODIFIED ON HDR.DT_MODIFIED = DT_MODIFIED
WHERE
LKUP.CATEGORY = 'ACCESS_MAPPING'
AND NAME = 'RPT_7330_HDR';
```
If you want count then
```
SELECT /*+ PARALLEL(DTL,4) */
COUNT (*)
FROM
RPT_7330_HDR HDR INNER JOIN RPT_7330_DTL DTL ON HDR.EID = DTL.EID
INNER JOIN CODE_LKUP2 LKUP ON LKUP.CODE1 = HDR.ACCESS_IND
INNER JOIN LS_DT_MODIFIED ON HDR.DT_MODIFIED = DT_MODIFIED
WHERE
LKUP.CATEGORY = 'ACCESS_MAPPING'
AND NAME = 'RPT_7330_HDR';
```
**Note:** The hint is used on the DTL table, which carries the highest cost due to a full table scan (FTS). The number 4 means the query runs on 4 parallel execution servers. Identify your pain points from the query plan and decide on your hints, parallel or otherwise. You can also use the parallel hint on multiple tables: `/*+ PARALLEL(table1 4) PARALLEL(table2 4) PARALLEL(table3 4) PARALLEL(table4 4)*/`. Also, this works only on Enterprise Edition, not on Standard Edition. | Try this, might be faster:
```
select count(*)
from RPT_7330_HDR hdr
JOIN LS_DT_MODIFIED LS ON LS.NAME = 'RPT_7330_HDR' AND hdr.DT_MODIFIED = LS.DT_MODIFIED
JOIN RPT_7330_DTL dtl on hdr.EID = dtl.EID
JOIN CODE_LKUP2 lkup ON LKUP.CODE1 = hdr.ACCESS_IND AND LKUP.CATEGORY='ACCESS_MAPPING'
```
The SQL engine can optimize JOINs to be parallel if you have the right indexes and such. It is often able to optimize joins when it can't optimize sub-queries. | SQL shorten response time using parallel or with clause | [
"sql",
"performance",
"oracle"
] |
I am trying to have the result 'none' every time it gives me a null result. Right now it is giving me a 0 for a null result. How could I have the row show 'none' instead of a 0 for a null result?
I have tried TO\_CHAR and TO\_NUMBER for the sum and I can't get it to display 'none'...
```
CASE WHEN SUM(ENROLLED) = 0 THEN 'none' ELSE SUM(ENROLLED) END AS ENROLLED
```
so when try the above I get SQL Error: ORA-00932: inconsistent datatypes: expected CHAR got NUMBER
this is what I have
```
SELECT lt.STUDENT_ID,lt.FIRST_NAME, lt.LAST_NAME, CASE WHEN SUM(ENROLLED) = 0 THEN 'none' ELSE SUM(ENROLLED) END AS ENROLLED
FROM STUDENT lt
LEFT OUTER JOIN
(SELECT s.STUDENT_ID, e.ENROLL_DATE,COUNT(z.COURSE_NO)AS ENROLLED
FROM STUDENT s
LEFT JOIN ENROLLMENT e ON s.STUDENT_ID = e.STUDENT_ID
LEFT JOIN SECTION z ON e.SECTION_ID = z.SECTION_ID
WHERE s.PHONE LIKE '702%'
GROUP BY s.STUDENT_ID, e.ENROLL_DATE) rt
ON lt.STUDENT_ID = rt.STUDENT_ID
WHERE lt.PHONE LIKE '702%'
GROUP BY lt.STUDENT_ID,lt.FIRST_NAME, lt.LAST_NAME,ENROLLMENTS;
```
instead of having
```
STUDENT_ID FIRST_NAME LAST_NAME ENROLLED
---------- ------------------------- ------------------------- -----------
253 Walter Boremmann 1
396 James E. Norman 0
etc
```
I'd like to have it like this
```
STUDENT_ID FIRST_NAME LAST_NAME ENROLLED
---------- ------------------------- ------------------------- -----------
253 Walter Boremmann 1
396 James E. Norman none
``` | Try using the function: COALESCE (cast(sum(expr1) as varchar), 'none')
As a side note, I question the use of DISTINCT in your query. | ```
CASE WHEN SUM(ENROLLED) = 0 THEN 'none' ELSE SUM(ENROLLED) END AS ENROLLED
```
this returns different types. Make it the same (cast SUM to string) | SQL Error: ORA-00932: inconsistent datatypes: expected CHAR got NUMBER | [
"sql",
"oracle",
"sum",
"to-char"
] |
I am often confronted with such db queries:
> Get all entries (e.g. comments) of userX and also all entries of the friends of userX
What is the best way to do this in SQL (MySQL), assuming userX is not a friend of himself?
**1. Make two queries and merge them later with PHP**
```
a = SELECT *
FROM comments
WHERE user = X
b = SELECT c.*
FROM comments c
INNER JOIN relation r ON r.user2 = c.user
WHERE r.user1 = X
merge(a, b)
```
That is what I have usually done. It is rather performant, but I cannot use things such as ORDER BY or LIMIT.
**2. Subqueries with IN and UNION**
```
SELECT c.*
FROM comments
WHERE user IN (
SELECT "X"
UNION
SELECT user2 FROM relation WHERE user1 = X
)
```
This seems to be very slow, and therefore a bad idea, isn't it?
**3. Other solutions? Conditional Joins or something...** | One way to do this is to start the query from the user table. Join it with the relation table. Then you can join comments on this with a conditional 'ON' statement. This way MySQL can use the indexes.
```
select c.* from users a
left outer join relation friend on a.id = friend.user1_id
join comments c on (c.user_id = a.id or c.user_id = friend.user2_id)
where a.id = 1
group by c.id;
```
Here's a working example: <http://sqlfiddle.com/#!2/da298/1> | Why not:
```
SELECT c.*
FROM comments
WHERE user = X OR user IN (
SELECT user2 FROM relation WHERE user1 = X
)
```
If that's slow, you should look at the execution plan. You might be missing an index. | How to select own entries and entries of friends in one query | [
"mysql",
"sql",
"join",
"union"
] |
I am trying to use MySQL command line client to execute a procedure. The procedure `helloworld()` is executing fine in MySQL query browser.
DB schema selected in the query browser:
```
DELIMITER $$
DROP PROCEDURE IF EXISTS helloworld$$
CREATE PROCEDURE helloworld()
BEGIN
SELECT 'helloworld';
END$$
```
When I `call helloworld()` it returns helloworld. I am saving the procedure as `helloworld.sql` (an ANSI-encoded SQL script file) on the desktop.
Now I am trying to access the .sql file saved on the desktop from the command-line client; entering the password connects successfully.
Now when I type
```
ENTER PASSWORD:******
Your Mysql connection id is 43
Server version 5.5.24
mysql> SOURCE helloworld.sql
ERROR failed to open file helloworld.sql Error 2
```
Should I give the path of the file? | Try:
/path/to/file/helloworld.sql:
```
USE `yourdb`;
DELIMITER $$
DROP PROCEDURE IF EXISTS `helloworld`$$
CREATE PROCEDURE `helloworld`()
BEGIN
SELECT 'helloworld';
END$$
DELIMITER ;
```
Then from the command line try:
```
Your Mysql connection id is 43
Server version 5.5.24
mysql> SOURCE /path/to/file/helloworld.sql
``` | In my case, the solution was in fact to change the ownership of the source file to be the username of the user who was logging into the machine and running the command via the `mysql` terminal. I am using CentOS Linux with MariaDB/MySQL5.6.
The command that worked to remove the error was: `sudo chown username:username sourcefile.sql`
I also tried changing it to `mysql:mysql` instead of `username:username`, but the error persisted until I used `sudo chown username:username sourcefile.sql` | MySQL Server 5.0 command line client failed to open file .sql | [
"mysql",
"sql",
"stored-procedures"
] |
Can someone please let me know the issue with the below query? I am running on MS Access and it's giving
> Syntax error in query expression 'id = @@IDENTITY'
Code:
```
public DosageBO SaveDosage(DosageBO dosage)
{
try
{
using (IDbConnection connection = OpenConnection())
{
StringBuilder sql = new StringBuilder();
sql.AppendLine("INSERT INTO dosage_master ( medicine_type, dosage, remarks, updateby, updatedate )");
sql.AppendLine("VALUES (@type, @dose, @remarks, @updateby, NOW());");
var parameters = new
{
type = dosage.MedicineType,
dose = dosage.Dosage,
remarks = dosage.Remarks,
updateby = Environment.UserName
};
connection.Execute(sql.ToString(), parameters);
return connection.Query<DosageBO>("SELECT medicine_type as MedicineType, dosage, remarks FROM dosage_master WHERE id = @@IDENTITY").FirstOrDefault();
}
}
catch
{
throw;
}
}
``` | `SELECT @@Identity` is a specialized query. And `@@Identity` is only valid in that context. If you attempt to use `@@Identity` elsewhere, as in a `WHERE` clause, the db engine will throw an error.
You will have to retrieve the value from `SELECT @@Identity`, save it, and then use that saved value in your other query. | Remove the `)` at the end
```
WHERE id = @@IDENTITY)
^---here
``` | MS Access @@identity query issue | [
"sql",
"ms-access"
] |
Little by little I am moving on and understanding SQL. Below I have 4 tables. I am trying to work through the "many to many" and "one to many" relationships. Earlier, on a similar question, I was trying to figure out academies to courses and vice versa. Now it was explained to me that one table was for the course and the other table (academy\_course) was the relationship table where I can store the many-to-many relationships. Do I need to apply the same logic for instructors affiliated with an academy? A unique instructor per academy.
```
CREATE TABLE academy
(
academy_id int(11) not null auto_increment,
name varchar(25) not null,
primary key (academy_id)
);
CREATE TABLE course
(
course_id int(11) not null auto_increment,
course_name VARCHAR(50) NOT NULL ,
primary key (course_id)
);
CREATE TABLE academy_course
(
academy_id int(11) not null,
course_id int(11) not null ,
primary key (academy_id, course_id),
foreign key (academy_id) REFERENCES academy (academy_id) on delete cascade,
foreign key (course_id) REFERENCES course (course_id) on delete cascade
);
CREATE TABLE instructors
(
instructor_id VARCHAR(10) NOT NULL ,
academy_id INT NOT NULL ,
instructor_name VARCHAR(50) NOT NULL ,
primary key (instructor_id),
foreign key (academy_id) REFERENCES academy (academy_id) on delete cascade
);
``` | If the instructor only teaches at one academy, then it's easy to just have an academy\_id like you have there.
If the instructor can teach at different academies, then yes. You should create a table for that:
```
CREATE TABLE academy_instructor
(
academy_id int(11) not null,
instructor_id int(11) not null ,
primary key (academy_id, instructor_id),
foreign key (academy_id) REFERENCES academy (academy_id) on delete cascade,
foreign key (instructor_id) REFERENCES instructor (instructor_id) on delete cascade
);
``` | If each academy has only one instructor, then I would consider adding
`instructor_id` to the `academy_course` table. | Understanding the use of one-to-many or many-to-many | [
"mysql",
"sql"
] |
I am inserting data into a database using millions of insert statements stored in a file. Is it better to insert this row by row or in bulk? I am not sure what the implications are.
Any suggestions on the approach? Right now, I am executing 50K of these statements at a time. | Generally speaking, you're much better off inserting in bulk, provided you know that the inserts won't fail for some reason (i.e. invalid data, etc). If you're going row by row, what you're doing is opening the data connection, adding the row, closing the data connection. Rinse, wash, repeat; in your case tens of thousands of times (or more?). It's a huge performance hit as opposed to opening the connection once, dumping all the data in one shot, then closing the connection once. If your data ISN'T a clean set of data, you might be better off going row by row, as a bulk insert will fail outright if it hits data that needs to be cleaned up. | If you are using SSIS, I would suggest a data flow task as another possible avenue. This will allow you to move data from a flat text file, SQL table or other source and map it into your new table. Performance, I have found, is always pretty good and I use it regularly.
If your table is not created before the insert, what I do is drag an Execute SQL Task function into my process with the table creation query (CREATE TABLE....etc.) and update the properties on the data flow function to delay validation.
As long as my data structure is consistent, this works. Here are a couple screenshots.

 | Inserting rows - bulk or row by row? | [
"sql",
"sql-server",
"bulkinsert",
"sql-insert"
] |
Okay, this might have been asked somewhere, but I am unable to find it on Google or SO.
My query was
```
INSERT INTO Tracking (OrderNum, PickupDate, ...) VALUES (95370,10/01/2013, .....)
```
The platform was a PHP script updating an MS Access DB, and the target column `pickupDate` was of the Date/Time data type in the Access DB.
When I ran this query, the information inserted into the DB **wasn't** `10/01/2013`; rather, it was a long number like 34444444299384 (something like that, I didn't note it exactly).
But when I used the following query, the date was inserted correctly:
```
INSERT INTO Tracking (OrderNum, PickupDate, ...) VALUES (95370,'10/01/2013', .....)
```
So as far as I know, **anything between the quotes is a STRING**.
* So, is the Date a string?
* Why are the results different for `10/01/2013` and `'10/01/2013'`?
Can somebody please explain it to me??
Thanks for your guidance. | When you use:
```
10/01/2013
```
it is an expression: 10 divided by 1 divided by 2013, giving as a result: `0.0049677`... which is then converted to a date, as dates are really numbers.
When you use:
```
'10/01/2013'
```
It's a string, which is converted into a date.
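Since the question targets an MS Access database, it's worth noting that Access SQL also has a dedicated date-literal syntax using `#` delimiters, which avoids the ambiguity entirely. A sketch based on the question's INSERT (column list shortened for illustration):

```
INSERT INTO Tracking (OrderNum, PickupDate)
VALUES (95370, #10/01/2013#);
```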
Internally, a date is stored as a number, not a string. | My guess is that `10/01/2013` was interpreted as a division operation, so the resulting number was a small floating point number:
```
0.00496770988
```
From there, MySQL probably assumed you gave it a UNIX timestamp and it mangled the number to try to convert it into a storeable date.
What you want to do instead is give MySQL the format it prefers `YYYY-MM-DD` and quote it:
```
'2013-10-01'
```
[More on MySQL Dates](http://dev.mysql.com/doc/refman/5.0/en/datetime.html) | What is Date? a string? or Double? | [
"mysql",
"sql",
"string",
"date"
] |
```
SELECT
dealing_record.*
,shares.*
,transaction_type.*
FROM
shares
INNER JOIN shares ON shares.share_ID = dealing_record.share_id
INNER JOIN transaction_type ON transaction_type.transaction_type_id = dealing_record.transaction_type_id;
```
The above SQL code produces the desired output, but with a couple of duplicate columns. Also, the column headers are displayed incompletely. When I change the
```
linesize 100
```
the headers show, but the displayed data overlaps.
I have checked similar questions but I don't see how to solve this. | You have duplicate columns because you're asking the SQL engine for columns that show you the same data (with `SELECT dealing_record.*` and so on), which produces the duplicates.
For example, the `transaction_type.transaction_type_id` column and the `dealing_record.transaction_type_id` column will have matching rows (otherwise you won't see anything with an `INNER JOIN`) and you will see those duplicates.
If you want to avoid this problem or, at least, to reduce the risk of having duplicates in your results, improve your query, using only the columns you really need, as @ConradFrix already said. An example would be this:
```
SELECT
dealing_record.Name
,shares.ID
,shares.Name
,transaction_type.Name
,transaction_type.ID
FROM
shares
INNER JOIN shares ON shares.share_ID = dealing_record.share_id
INNER JOIN transaction_type ON transaction_type.transaction_type_id = dealing_record.transaction_type_id;
``` | Try to join shares with dealing\_record, not shares again:
```
select dealing_record.*,
shares.*,
transaction_type.*
FROM shares inner join dealing_record on shares.share_ID = dealing_record.share_id
inner join transaction_type on transaction_type.transaction_type_id=
dealing_record.transaction_type_id;
``` | Duplicate columns with inner Join | [
"sql",
"sqlplus"
] |
We are about to move our website to Windows Azure and faced a problem with SQL Azure: it does not support the identity increment property, due to a performance optimization.
Hence we have to design our own logic to support the existing feature in generating sequential identity.
So to generate unique sequential value we get the max(value) from the ID field of the table and increment it by 1 to insert a new record.
We now have to manage concurrent transactions, hence we checked the isolation levels.
However, none of the isolation levels lock the table to avoid reading the max value under concurrency.
Any help on locking a table, or comments about this method, would be very helpful. If some of you have already overcome this problem in a nice way, can you please share?
Thanks for your help. | Use the following procedure to increment a value stored in a separate table. Putting an exclusive lock on the main table will result in horrible concurrency issues.
```
CREATE PROCEDURE [dbo].[GetNextID](
@IDName nvarchar(255)
)
AS
BEGIN
/*
Description: Increments and returns the LastID value from tblIDs for a given IDName
Author: Max Vernon / Mike Defehr
Date: 2012-07-19
*/
DECLARE @Retry int;
DECLARE @EN int, @ES int, @ET int;
SET @Retry = 5;
DECLARE @NewID int;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET NOCOUNT ON;
WHILE @Retry > 0
BEGIN
BEGIN TRY
UPDATE dbo.tblIDs
SET @NewID = LastID = LastID + 1
WHERE IDName = @IDName;
IF @NewID IS NULL
BEGIN
SET @NewID = 1;
INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
END
SET @Retry = -2; /* no need to retry since the operation completed */
END TRY
BEGIN CATCH
IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
SET @Retry = @Retry - 1;
ELSE
BEGIN
SET @Retry = -1;
SET @EN = ERROR_NUMBER();
SET @ES = ERROR_SEVERITY();
SET @ET = ERROR_STATE()
RAISERROR (@EN,@ES,@ET);
END
END CATCH
END
IF @Retry = 0 /* must have deadlock'd 5 times. */
BEGIN
SET @EN = 1205;
SET @ES = 13;
SET @ET = 1
RAISERROR (@EN,@ES,@ET);
END
ELSE
SELECT @NewID AS NewID;
END
GO
```
(For completeness, here is the table associated with the stored proc)
```
CREATE TABLE [dbo].[tblIDs]
(
IDName nvarchar(255) NOT NULL,
LastID int NULL,
CONSTRAINT [PK_tblIDs] PRIMARY KEY CLUSTERED
(
[IDName] ASC
) WITH
(
PAD_INDEX = OFF
, STATISTICS_NORECOMPUTE = OFF
, IGNORE_DUP_KEY = OFF
, ALLOW_ROW_LOCKS = ON
, ALLOW_PAGE_LOCKS = ON
, FILLFACTOR = 100
)
);
GO
```
Every time you want to obtain a new ID to use in the main table, you simply `EXEC GetNextID 'TableIDField';` | You can achieve something similar this way
1. Start a transaction (in read committed transaction isolation level).
2. Select the max existing value with exclusive lock in `SELECT` query (`(XLOCK)` hint).
3. Update the value to be increased by 1.
4. Commit transaction.
Putting exclusive lock into the select statement will lock all other processes that will want to read the ID at the same time. They will have to wait until the transaction is finalised and will read the new value if the transaction was committed.
---
As pointed out by Max Vernon below, this approach *may* not be suitable for high volume, highly concurrent systems. It may result in deadlocks as well (though possibility of deadlocks is not limited to this solution). | SQL simulating Auto increment field | [
"sql",
"sql-server",
"transactions",
"azure-sql-database"
] |
I have this MySQL request:
```
SELECT *
FROM candy
RIGHT JOIN candytype ON candy.user_id = 1 AND candytype.candy_id = candy.id;
```
In my DB everything shows up but I see the same candy row showing twice because that one candy has two types. Is there a way that if it shows up once, that MySQL does not show it again in the result?
It's just useless, and I assume it makes my DB do more work. I am just looking for a way to filter it out... | I just needed to use DISTINCT.
```
SELECT DISTINCT candy.id, candy.name
FROM candy
RIGHT JOIN candytype
ON candy.user_id = 1
AND candytype.candy_id = candy.id;
``` | If you want just any old type, you can do this (here I'm guessing at column names):
```
SELECT candy.id, candy.name, min(candytype.type)
FROM candy
RIGHT JOIN candytype
ON candy.user_id = 1
AND candytype.candy_id = candy.id GROUP BY candy.id;
```
If you want one row/candy, but you want to see all the types you can do:
```
SELECT candy.id, candy.name, GROUP_CONCAT(candytype.type)
FROM candy
RIGHT JOIN candytype
ON candy.user_id = 1
AND candytype.candy_id = candy.id GROUP BY candy.id;
``` | How to remove repetitive rows in MySQL when using JOIN | [
"mysql",
"sql"
] |
I am trying to build a view that should look like this:
```
CREATE OR REPLACE VIEW TestView AS
SELECT
p.name,
?? (select count(*) from Test t where t.currentstate in ('Running', 'Ended') and t.ref = p.key) as HasValues
from ParentTable p
```
I want the column HasValues to be either 1 or 0: 1 if the count for the current state is > 0.
Can someone tell me how to do this?
Thanks | If you potentially have a great many rows in the Test table for each row in the parenttable and the join key on the test table is indexed then it may be most efficient to construct the query as:
```
CREATE OR REPLACE VIEW TestView AS
SELECT
p.name,
(select count(*)
from Test t
where t.currentstate in ('Running', 'Ended') and
t.ref = p.key and
rownum = 1) as HasValues
from ParentTable p;
```
HasValues will always be 0 or 1 for this query.
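If you prefer to state the 0-or-1 intent explicitly, an equivalent sketch uses `CASE` with `EXISTS` (same semantics as the scalar subquery above):

```
CREATE OR REPLACE VIEW TestView AS
SELECT p.name,
       CASE
         WHEN EXISTS (SELECT 1
                      FROM Test t
                      WHERE t.currentstate IN ('Running', 'Ended')
                        AND t.ref = p.key)
         THEN 1
         ELSE 0
       END AS HasValues
FROM ParentTable p;
```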
If the ratio of rows between test and the parent table was less than about 10:1 *and* I wanted to run this for all the rows in the parenttable then I'd just join the two tables together as in StevieG's answer | ```
CREATE OR REPLACE VIEW TestView AS
SELECT
p.name,
case
when nvl(t.mycount,0) > 0 then '1'
else '0'
end HasValues
from ParentTable p
left outer join (select ref, count(*) mycount from Test group by ref) t on t.ref = p.key
``` | Oracle SQL View Column value 1 if count > n | [
"sql",
"oracle"
] |
I'm designing a database. I have a user table with users, and a group table with the users' groups.
These groups will have an **owner** (the user who created the group), and a set of users that are part of the group (like a WhatsApp group).
To represent this I have this design:

**Do you think the `owner` column on the `Group` table is necessary?** Maybe if I add an `owner` column to the `Group` table I can easily know the group's owner. | If you don't add the `owner` in the `group` then where are you going to add it? The only way I see apart from this is adding a boolean `isowner` to the `usergroup`. Anyway, this would not make sense if there will only be 1 owner. If there can be `N` owners then that would be the way to go. | You are on the right track, but you'll need one more step to ensure an owner must actually belong to the group she owns:

There is a FOREIGN KEY in Group {groupID, owner} that references UserGroup {groupID, userID}.
If your DBMS supports deferred foreign keys, you can make owner NOT NULL, to ensure a group cannot be owner-less. Otherwise you can leave it NULL-able (and you will still be able to break the "chicken-and-egg" problem with circular references if your DBMS supports [MATCH SIMPLE](http://wiki.burnsoft.net/KB58.ashx?NoRedirect=1) FKs - almost all do). | Relationship redundant? | [
"sql",
"database",
"database-design"
] |
I have some SQL code which generates random numbers using the following technique:
```
DECLARE @Random1 INT, @Random2 INT, @Random3 INT, @Random4 INT, @Random5 INT, @Random6 INT, @Upper INT, @Lower INT
---- This will create a random number between 1 and 49
SET @Lower = 1 ---- The lowest random number
SET @Upper = 49; ---- The highest random number
with nums as (
select @lower as n
union all
select nums.n+1
from nums
where nums.n < @Upper
),
randnums as
(select nums.n, ROW_NUMBER() over (order by newid()) as seqnum
from nums
)
select @Random1 = MAX(case when rn.seqnum = 1 then rn.n end),
@Random2 = MAX(case when rn.seqnum = 2 then rn.n end),
@Random3 = MAX(case when rn.seqnum = 3 then rn.n end),
@Random4 = MAX(case when rn.seqnum = 4 then rn.n end),
@Random5 = MAX(case when rn.seqnum = 5 then rn.n end),
@Random6 = MAX(case when rn.seqnum = 6 then rn.n end)
from randnums rn;
select @Random1, @Random2, @Random3, @Random4, @Random5, @Random6
```
My question is: how random is this number generation? And is there another way to do this which is more "random"?
I am using:
```
Microsoft SQL Server 2008 (SP3) - 10.0.5512.0 (X64) Aug 22 2012 19:25:47 Copyright (c) 1988-2008 Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
```
The problem with most solutions is you'll end up with values like this: `14,29,8,14,27,27`. I cannot have duplicate numbers! | I guess you could do something like this, which is much simpler and easier:
```
DECLARE @Upper INT;
DECLARE @Lower INT;
SET @Lower = 1; /* -- The lowest random number */
SET @Upper = 49; /* -- The highest random number */
SELECT @Lower + CONVERT(INT, (@Upper-@Lower+1)*RAND());
```
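Since the question rules out duplicates, another common SQL Server pattern is to generate the whole 1-49 range and shuffle it with `NEWID()`; the top rows are then guaranteed distinct. A sketch:

```
-- 6 distinct numbers between 1 and 49, in random order
SELECT TOP (6) n
FROM (SELECT TOP (49) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS nums
ORDER BY NEWID();
```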
For getting a random number without repetition, this will do the job
```
WITH CTE
AS
(
SELECT randomNumber, COUNT(1) countOfRandomNumber
FROM (
SELECT ABS(CAST(NEWID() AS binary(6)) %49) + 1 randomNumber
FROM sysobjects
) sample
GROUP BY randomNumber
)
SELECT TOP 5 randomNumber
FROM CTE
ORDER BY newid()
```
To set the highest limit, you can replace 49 with your highest limit number. | For **Laravel**:
```
public function generatUniqueId()
{
$rand = rand(10000, 99999);
$itemId = $rand;
while (true) {
if (!BookItem::whereBookItemId($itemId)->exists()) {
break;
}
$itemId = rand(10000, 99999);
}
return $itemId;
}
``` | Generate unique random numbers using SQL | [
"sql",
"sql-server",
"t-sql"
] |
I have a varchar column which may contain a format like this:
> 123,124,125,126
Now I want to get all the numbers and put them in a single column like this in a SELECT command:
> 123
> 124
> 125
> 126
Any idea? | **Answering umpteenth time...**
```
WITH CTE
AS (SELECT
'123,124,125,126' AS COL1
FROM
DUAL)
SELECT
REGEXP_SUBSTR ( COL1,
'[^,]+',
1,
RN )
COL1
FROM
CTE
CROSS JOIN
(SELECT
ROWNUM RN
FROM
(SELECT
MAX ( LENGTH ( REGEXP_REPLACE ( COL1,
'[^,]+' ) ) )
+ 1
MAX_L
FROM
CTE)
CONNECT BY
LEVEL <= MAX_L)
WHERE
REGEXP_SUBSTR ( COL1,
'[^,]+',
1,
RN )
IS NOT NULL
ORDER BY
COL1;
``` | Try this too,
```
with test as
(
SELECT '123,124,125,126' str FROM dual
)
SELECT regexp_substr (str, '[^,]+', 1, ROWNUM) SPLIT
FROM TEST
CONNECT BY LEVEL <= LENGTH (regexp_replace (str, '[^,]+')) + 1;
```
Try this if you have an additional comma at the end,
```
with test as
(
SELECT '123,124,125,126,' str FROM dual
)
SELECT regexp_substr(str,'[^,]+', 1, LEVEL) FROM test
connect by regexp_substr(str, '[^,]+', 1, level) is not null;
``` | get numbers from string delimited by ',' in oracle sql | [
"sql",
"regex",
"oracle"
] |
I got the following tables:
```
pictures
------------------------------------
id
name
views
votes
------------------------------------
id
user_id
pic_id
```
I want to get a result from a query that will give me each picture id, with the views of the picture, and the total votes from the table `votes` for the specific pic\_id
example:
pictures.id, pictures.views, total votes
1 ------------ 78------------------ 123
2 ------------ 23------------------- 69
and so on...
The code I tried:
```
SELECT `pictures`.`id`,`pictures`.`views`, COUNT(`votes`.`pic_id`) as votes
FROM `pictures`
JOIN `votes`
ON `pictures`.`id` = `votes`.`pic_id`
```
But it doesn't give me the result I desire. | You need to have a `GROUP BY` clause.
The use of `LEFT JOIN` will display all records in the table `pictures`, with or without a matching record in the table `votes`.
```
SELECT a.id, a.name, a.views,
COUNT(b.pic_id) TotalVotes
FROM pictures a
LEFT JOIN votes b
ON a.id = b.pic_id
GROUP BY a.id, a.name, a.views
``` | Use a LEFT JOIN with a group function; the plain JOIN keyword means an inner join, and you need a left join for a one-to-many relation:
```
SELECT `pictures`.`id`,`pictures`.`views`, COUNT(`votes`.`pic_id`) as votes
FROM `pictures`
LEFT JOIN `votes`
ON `pictures`.`id` = `votes`.`pic_id`
GROUP BY `pictures`.`id`
``` | SQL query with count() from another table | [
"mysql",
"sql"
] |
I have a basic SQL question. To explain it correctly, I'll give the schema:
`Sailors(sid:integer,sname:string,rating:integer)` (sid is a primary key.)
Now, the query is *"Find sailors whose rating is greater than some sailor called Horatio"* (there will be more than one Horatio, since sname is not a primary key).
The answer is :
```
SELECT *
FROM Sailors S
WHERE S.rating >ANY (SELECT S2.rating
FROM Sailors S2
WHERE S2.sname="HORATIO")
```
I wonder, could I use IN instead of ANY? If I don't, what differs? Is there anyone to help me? Thanks in advance. | You couldn't use IN with this subquery, because:
1. When you say `> ANY`, it returns those rows from the outer query where the rating is higher than at least one of the ratings returned by the subquery.
2. If you used `IN`, it would return only those sailors who have the same rating as some sailor named Horatio.
Notice that the first query may return sailors with name 'Horatio', because one sailor with a name 'Horatio' may have higher rating than another sailor with name 'Horatio'.
Also, `ANY` returns `FALSE` if subquery returns no rows.
**Edit** Ok, I didn't understand your question. You can't use `< > =` etc. operators with `IN`, they can only be used with `ANY`, `SOME` and `ALL`.
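One way to see what `> ANY` computes: assuming the subquery returns at least one row, it is equivalent to comparing against the smallest Horatio rating. A sketch:

```
-- '> ANY (subquery)' is true when the rating beats at least one Horatio,
-- i.e. when it beats the lowest-rated Horatio
SELECT *
FROM Sailors S
WHERE S.rating > (SELECT MIN(S2.rating)
                  FROM Sailors S2
                  WHERE S2.sname = 'HORATIO');
```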
Take a look here for more information: [Tim Hall about ANY, SOME and ALL](http://www.oracle-base.com/articles/misc/all-any-some-comparison-conditions-in-sql.php) | I don't think you can use `in` for this, but you can use `exists`:
```
select *
from Sailors S
where
exists (
select *
from Sailors S2
where S2.sname = 'HORATIO' and S.rating > S2.Rating
)
``` | SQL Set-Comparision Operators ANY and IN difference | [
"sql",
"database",
"predicate"
] |
I have the following two tables, Diagnose & Exercise.
I would like to extract the Exercise date closest to the Diagnose_Date, and it should be 1 row from the Exercise table.
I have tried a left join with the DATEDIFF function in the where condition:
```
SELECT D.ID,D.Diagnose_Date,D.Type1,D.Type2,E.[Exercise_Date],E.Field1,E.Field2,E.Field3
FROM Diagnose D
LEFT JOIN Exercise E
ON D.ID=E.ID
WHERE DATEDIFF(DAY,[Diagnose_Date],[Exercise_Date]) BETWEEN -30 AND 30
```
Any help would be very helpful.
Thanks in advance.
---
Diagnose Table
```
------------------------------------------
ID Diagnose_Date Type1 SubType1
------------------------------------------
1 10/01/2010 01 1.1
2 20/02/2012 02 2.2
3 30/03/2013 01 1.2
------------------------------------------
```
Exercise Table
```
------------------------------------------
ID Exercise_Date Field1 Field2 Field3
------------------------------------------
1 01/01/2010 x y z
2 10/02/2012 a b c
2 01/04/2012 e f f
3 01/03/2013 x y z
3 05/04/2013 a b c
3 01/06/2013 x y z
------------------------------------------
```
Expected Result should be :
```
------------------------------------------------------------------------
ID Diagnose_Date Exercise_Date Type1 SubType2 Field1 Field2 Field3
------------------------------------------------------------------------
1 10/01/2010 01/01/2010 01 1.1 x y z
2 20/02/2012 10/02/2012 02 2.2 a b c
3 30/03/2013 05/04/2013 01 1.2 a b c
-------------------------------------------------------------------------
``` | First, in a CTE, for each diagnose get the smallest time interval between the diagnose date and all the exercise dates associated with that diagnose.
```
WITH MIN_DATES_CTE(ID, DATE_DIFF)
AS (
SELECT E.ID, MIN(ABS(DATEDIFF(DAY,[Diagnose_Date],[Exercise_Date])))
FROM Exercise E
INNER JOIN Diagnose D ON D.ID = E.ID
GROUP BY E.ID
)
```
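The intent of that CTE, the smallest absolute day gap per ID, can be sanity-checked outside SQL Server with a few lines of plain Python, using the question's sample rows (dates read here as dd/mm/yyyy):

```python
from datetime import date

# Sample rows from the question (dates read as dd/mm/yyyy)
diagnose = {1: date(2010, 1, 10), 2: date(2012, 2, 20), 3: date(2013, 3, 30)}
exercise = [
    (1, date(2010, 1, 1)),
    (2, date(2012, 2, 10)), (2, date(2012, 4, 1)),
    (3, date(2013, 3, 1)), (3, date(2013, 4, 5)), (3, date(2013, 6, 1)),
]

# Smallest absolute gap in days per ID -- this is what the CTE computes
min_gap = {}
for pid, ex_date in exercise:
    gap = abs((ex_date - diagnose[pid]).days)
    min_gap[pid] = min(gap, min_gap.get(pid, gap))
```

For ID 3 the smallest gap is 6 days (05/04/2013), which matches the expected result in the question.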
Then, join Diagnose and Exercise by ID and the smallest time interval
```
SELECT D.ID,D.Diagnose_Date,D.Type1,D.Type2,E.Exercise_Date,E.Field1,E.Field2,E.Field3
FROM Diagnose D
LEFT JOIN Exercise E ON D.ID = E.ID
INNER JOIN MIN_DATES_CTE ON MIN_DATES_CTE.ID = E.ID
WHERE ABS(DATEDIFF(DAY,[Diagnose_Date],[Exercise_Date])) = MIN_DATES_CTE.DATE_DIFF
``` | I'm assuming you're just matching ANY single diagnose entry with ANY single exercise entry based on their dates being closest to each other.
Here's my line of thinking:
Do a full `JOIN` on diagnoses and exercises, order by absolute date difference, ascending.
```
SELECT
D.ID,
D.Date,
E.ID,
E.Date,
ABS(DATEDIFF(day, D.Date, E.Date)) Diff
FROM Diagnosis D, Exercise E
ORDER BY Diff
```
You'll get a result like this:
```
ID Date ID Date Diff
3 2013-03-30 5 2013-03-25 5
2 2012-02-20 2 2012-02-10 10
3 2013-03-30 4 2013-03-01 29
2 2012-02-20 3 2012-04-01 41
3 2013-03-30 6 2013-06-01 63
1 2010-10-01 1 2010-01-01 273
3 2013-03-30 3 2012-04-01 363
2 2012-02-20 4 2013-03-01 375
2 2012-02-20 5 2013-03-25 399
3 2013-03-30 2 2012-02-10 414
2 2012-02-20 6 2013-06-01 467
1 2010-10-01 2 2012-02-10 497
1 2010-10-01 3 2012-04-01 548
2 2012-02-20 1 2010-01-01 780
1 2010-10-01 4 2013-03-01 882
1 2010-10-01 5 2013-03-25 906
1 2010-10-01 6 2013-06-01 974
3 2013-03-30 1 2010-01-01 1184
```
Now you can see the dates that are closest to each other, with the number of days they are far.
Of course, you won't use this, but from this list, you can select the first one:
```
SELECT TOP 1
D.ID,
D.Date,
E.ID,
E.Date,
ABS(DATEDIFF(day, D.Date, E.Date)) Diff
FROM Diagnosis D, Exercise E
ORDER BY Diff
```
Now you can plug this statement into a `LEFT` join, so you can select a single matching date for each row.
Like this:
```
SELECT
fD.ID,
fD.Date,
fE.ID,
fE.Date
FROM
Diagnosis fD
LEFT JOIN Exercise fE
ON fE.ID = (SELECT TOP 1 E.ID
FROM Diagnosis D, Exercise E
WHERE D.ID = fD.ID
ORDER BY ABS(DATEDIFF(day, D.Date, E.Date)))
```
Which gives the result:
```
ID Date ID Date
1 2010-10-01 1 2010-01-01
2 2012-02-20 2 2012-02-10
3 2013-03-30 5 2013-03-25
``` | How to extract data from table2 which is closest to the DATE FIELD of table1? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
data will be like this:
```
id | data
----|---------
1 | AABBCC
2 | FFDD
3 | TTHHJJKKLL
```
what the result I want is
```
id | data
----|---------
1 | AA
1 | BB
1 | CC
2 | FF
2 | DD
3 | TT
3 | HH
3 | JJ
3 | KK
3 | LL
```
I have a sql code using cursor
```
DECLARE @table2 TABLE ( id INTEGER, data VARCHAR(500))
DECLARE @LoopNum INTEGER
DECLARE @tempID INTEGER
DECLARE @tempDATA VARCHAR(255)

DECLARE Cur CURSOR FOR
SELECT id FROM table1

OPEN Cur
FETCH NEXT FROM Cur INTO @tempID
WHILE ( @@FETCH_STATUS = 0 )
BEGIN
    -- Look up the data for the current id
    SELECT @tempDATA = data FROM table1 WHERE id = @tempID

    SET @LoopNum = 0
    WHILE @LoopNum < LEN(@tempDATA) / 2
    BEGIN
        INSERT INTO @table2 (id, data)
        VALUES( @tempID, SUBSTRING(@tempDATA, @LoopNum * 2 + 1, 2))

        SET @LoopNum = @LoopNum + 1
    END

    FETCH NEXT FROM Cur INTO @tempID
END
CLOSE Cur
DEALLOCATE Cur

SELECT * FROM @table2
```
I don't want to use a CURSOR because it's too slow; I have a lot of data.
Does anyone have a good idea for it? Is it possible? | 1. Create a function that parses the incoming string (say "AABBCC") into a table of strings (in particular "AA", "BB", "CC").
2. Select IDs from your table and [CROSS APPLY](http://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx) the function with *data* as the argument, so you'll have as many rows as values contained in the current row's *data*. No need for cursors or stored procs. | You could do something like this:
**Your Table**
```
CREATE TABLE TestTable
(
ID INT,
Data NVARCHAR(50)
)
GO
INSERT INTO TestTable
VALUES (1,'AABBCC'),
(2,'FFDD'),
(3,'TTHHJJKKLL')
GO
SELECT * FROM TestTable
```
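As a quick sanity check of what the split should produce, here is the same two-character chunking in a few lines of plain Python (using the question's sample rows):

```python
def split_pairs(s):
    """Split a string into consecutive two-character chunks."""
    return [s[i:i + 2] for i in range(0, len(s), 2)]

rows = [(1, 'AABBCC'), (2, 'FFDD'), (3, 'TTHHJJKKLL')]
result = [(rid, chunk) for rid, data in rows for chunk in split_pairs(data)]
```

This yields the ten (id, pair) rows shown in the expected output of the question.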
**My Suggestion**
```
CREATE TABLE #DestinationTable
(
ID INT,
Data NVARCHAR(50)
)
GO
SELECT * INTO #Temp FROM TestTable
DECLARE @String NVARCHAR(2)
DECLARE @Data NVARCHAR(50)
DECLARE @ID INT
WHILE EXISTS (SELECT * FROM #Temp)
BEGIN
SELECT TOP 1 @Data = DATA, @ID = ID FROM #Temp
WHILE LEN(@Data) > 0
BEGIN
SET @String = LEFT(@Data, 2)
INSERT INTO #DestinationTable (ID, Data)
VALUES (@ID, @String)
SET @Data = RIGHT(@Data, LEN(@Data) -2)
END
DELETE FROM #Temp WHERE ID = @ID
END
SELECT * FROM #DestinationTable
```
**Result Set**
```
ID Data
1 AA
1 BB
1 CC
2 FF
2 DD
3 TT
3 HH
3 JJ
3 KK
3 LL
```
**DROP Temp Tables**
```
DROP TABLE #Temp
DROP TABLE #DestinationTable
``` | In SQL Server, how to create while loop in select | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I am getting a correct result, but now, instead of showing results that are 0, I want it to show NULL. How could I get NULL results instead of 0?
```
SELECT w.FIRST_NAME,w.LAST_NAME,COUNT(s.SECTION_ID) AS COUNTED_SECTIONS
FROM INSTRUCTOR w LEFT OUTER JOIN SECTION s
ON w.INSTRUCTOR_ID = s.INSTRUCTOR_ID
GROUP BY w.FIRST_NAME,w.LAST_NAME
ORDER BY w.LAST_NAME;
```
currently showing
```
FIRST_NAME LAST_NAME COUNTED_SECTIONS
------------------------- ------------------------- ----------------
Rick Chow 0
Marilyn Frantzen 10
Fernand Hanks 9
Charles Lowry 0
etc
```
but I want
```
FIRST_NAME LAST_NAME COUNTED_SECTIONS
------------------------- ------------------------- ----------------
Rick Chow
Marilyn Frantzen 10
Fernand Hanks 9
Charles Lowry
etc
```
I've tried it with NVL and it doesn't work
```
NVL(COUNT(s.SECTION_ID),NULL) AS COUNTED_SECTIONS
``` | I think `NULLIF()` is available in Oracle:
```
SELECT w.FIRST_NAME,w.LAST_NAME,NULLIF(COUNT(s.SECTION_ID),0) AS COUNTED_SECTIONS
FROM INSTRUCTOR w LEFT OUTER JOIN SECTION s
ON w.INSTRUCTOR_ID = s.INSTRUCTOR_ID
GROUP BY w.FIRST_NAME,w.LAST_NAME
ORDER BY w.LAST_NAME;
``` | Try nullif:
```
SELECT w.FIRST_NAME,w.LAST_NAME, NULLIF(COUNT(s.SECTION_ID), 0) AS COUNTED_SECTIONS
FROM INSTRUCTOR w LEFT OUTER JOIN SECTION s
ON w.INSTRUCTOR_ID = s.INSTRUCTOR_ID
GROUP BY w.FIRST_NAME,w.LAST_NAME
ORDER BY w.LAST_NAME;
``` | how to display a row that is null instead of 0 | [
"",
"sql",
"oracle",
"null",
""
] |
Could really use your help!
I have a query:
```
Select *
from Customers
Where Customer_id in (001,002,003)
```
...you get the idea.
My problem is, if there is no record for customer\_id 003 for example, no record is displayed.
How can I display "003" and state that no record was found? I would prefer this to having no record displayed at all.
Thanking you in advance! | Usually for this sort of thing I have a function which takes a csv and returns a table for me.
Then I do something like the following.
```
-- this is creating the temporary table which would normally be created by a function.
DECLARE @Temp TABLE (Customer_id int)
INSERT INTO @Temp(Customer_id)
SELECT 1
INSERT INTO @Temp(Customer_id)
SELECT 2
INSERT INTO @Temp(Customer_id)
SELECT 3
-- now do the select statement.
SELECT
T.Customer_id,
C.*
FROM
Customers C
RIGHT OUTER JOIN
@Temp T
ON
T.Customer_id = C.Customer_id
```
This will then give you results whereby if the `C.Customer_id` is `NULL` then it is not an `Id` that exists in your `Customer` `table`.
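The same behaviour is easy to demonstrate with Python's built-in sqlite3 (hypothetical table names and data; driving the join from the id list, which is what the `RIGHT OUTER JOIN` above achieves):

```python
import sqlite3

# Hypothetical data: customers 1 and 2 exist, 3 does not
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Customers (Customer_id INTEGER, Name TEXT);
    INSERT INTO Customers VALUES (1, 'Alice'), (2, 'Bob');
    CREATE TABLE Wanted (Customer_id INTEGER);
    INSERT INTO Wanted VALUES (1), (2), (3);
""")

# Driving the join from the id list makes missing customers show up as NULL
rows = conn.execute("""
    SELECT W.Customer_id, C.Name
    FROM Wanted W
    LEFT OUTER JOIN Customers C ON C.Customer_id = W.Customer_id
    ORDER BY W.Customer_id
""").fetchall()
```

Customer 3 comes back with a NULL name instead of disappearing from the result set.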
This does the same job as @MartinSmith did, but is supported by most SQL server versions. | In SQL Server 2008+ you can use
```
SELECT V.Customer_id,
C.*
FROM (VALUES(1),
(2),
(3)) V(Customer_id)
LEFT JOIN Customers C
ON C.Customer_id = V.Customer_id
```
Non matching values will have `NULL` in the `C.Customer_id` column. | SQL where customer_id in (xx) | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a [job Name] column in the JobDetails table, like this:
```
job Name
A_1
A_2
B_1
B_2
B_3
```
I search for the max number like this:
```
SELECT MAX(JobDetails.[Job Name])
FROM Monara.JobDetails
WHERE
( [Job Name] like 'B_%' )
```
Result is B\_3
But I want to get only 3.
How can i do it? | I refuse to make a column containing space(s)
Underscore is a wildchar that you should handle carefully when using *Like*
```
create table JobDetails ([JobName] varchar(10))
insert JobDetails values('A_1'),('A_2'),('B_1'),('B_2'),('B_3')
SELECT max(cast(stuff(JobName, 1, patindex('%[_]%', JobName), '') as int))
FROM JobDetails
WHERE JobName like 'B[_]%'
-- added check to prevent most bad data
and IsNumeric(stuff(JobName, 1, patindex('%[_]%', JobName), '')) > 0
``` | Try this code:
```
SELECT SUBSTRING(JobDetails.[Job Name],
( SELECT CHARINDEX('_', JobDetails.[Job Name])
) + 1,
LEN(JobDetails.[Job Name]) -
CHARINDEX('_', JobDetails.[Job Name])
+ 1)
FROM Monara.JobDetails
WHERE [Job Name] LIKE 'B_%'
``` | How to get some value in column in sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm using SQL Developer, and my usual work is largely querying database tables with very long names. I don't have admin access to the database, so I can't change the table names to short names. Plus, it's a centralized database on which many people in many offices are working. To come to the point, I want to have short reference names for these long table names, local to my SQL Developer. For example, if the table name is \*I\_am\_a\_long\_name\*, I want a short reference name like **IAALN** (the short name will be given by me) so I can write:
```
select * from IAALN;
```
How can I do this? | If you have enough privileges, you can create [synonyms](http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_7001.htm):
```
CREATE SYNONYM IAALN
FOR somedb.I_am_a_long_name;
``` | The simplest solution is to use synynoms (as suggested by @Smileek). If you can't or don't want to do that, you could create a view for every table you want to query:
```
CREATE VIEW IAALN
AS
SELECT * FROM I_am_a_long_name;
```
and use these views in your ad-hoc queries. | How to create a simple shortcut name for long table names? | [
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I am trying to make a php script to update a table column structure.
```
Name Type Collation Attributes Null Default
promotion_dt date No 0000-00-00
```
I want to update the structure to change the Default to NULL, and second, I want to replace all the 0000-00-00 data with NULL.
What would the sql\_query be? | One way would be to first alter the table to allow NULL and change the default, and then update the existing values that are equal to the old default:
```
-- Set nullable and update the default
ALTER TABLE myTable CHANGE promotion_dt promotion_dt DATE NULL DEFAULT NULL;
-- Update the old values to the new default
UPDATE myTable SET promotion_dt=NULL WHERE promotion_dt='0000-00-00';
``` | ```
ALTER TABLE tbl MODIFY promotion_dt DATE NULL DEFAULT NULL;
UPDATE tbl SET promotion_dt = NULL WHERE promotion_dt = '0000-00-00';
``` | SQL UPDATE TABLE COLUMN DEFAULT NULL QUERY | [
"",
"mysql",
"sql",
"sql-server",
"alter-table",
""
] |
I need to fetch distinct values of a table, which has an auto-incremented id field. I need the id to identify, but I don't care which id when all values are the same except the id.
I came up with the following, which seems to work. Is there a better way to do this though?
```
SELECT id, date_f, date_t, num_n, num_d, mn, is
FROM t
GROUP BY date_f, date_t, num_n, num_d, mn, is
```
Another concern is, would this always return the same ids if the query is executed more than once?
EDIT:
Sample db:
```
id date_f date_t num_n num_d mn is
1 10 10 10 10 10 10
2 10 10 10 10 10 10
3 10 10 10 10 10 10
```
I want to store all of the columns of **one** row out of these. I don't care if the id is 1, 2 or 3 as long as it's the same the next time I execute the query (without adding/deleting any rows between). So far, two answers suggested using `min(id)` which seems like a good idea. | If you're not using the id in the result set at all, then the answer above will do the job. If you need at least one Id in the result set for any purpose, then you should alter your statement slightly:
```
SELECT min(id), date_f, date_t, num_n, num_d, mn, is
FROM t
GROUP BY date_f, date_t, num_n, num_d, mn, is
```
which will return the first Id that matches the distinct set (or use (Max) if you want the last - this will likely change over time though as data is added to the table. | What you have is basically right - I would add that if you want the id (identity) then you need to add an aggregate such as -
```
SELECT min(id), date_f, date_t, num_n, num_d, mn, is
FROM t
GROUP BY date_f, date_t, num_n, num_d, mn, is
```
this will make sure that you get the first/same one each time (unless of course it is deleted) | Selecting distinct rows on all but one column | [
"",
"mysql",
"sql",
"database",
""
] |
How can I calculate the number of work days between two dates from table (from the 1st row to the end) in SQL Server 2008?
I tried something like this, but it does not work
```
DECLARE @StartDate as DATETIME, @EndDate as DATETIME
Select @StartDate = date2 from testtable ;
select @EndDate = date1 from testtable ;
SELECT
(DATEDIFF(dd, @StartDate, @EndDate) + 1)
-(DATEDIFF(wk, @StartDate, @EndDate) * 2)
-(CASE WHEN DATENAME(dw, @StartDate) = 'Sunday' THEN 1 ELSE 0 END)
-(CASE WHEN DATENAME(dw, @EndDate) = 'Saturday' THEN 1 ELSE 0 END)
``` | I would always recommend a [Calendar table](http://blog.jontav.com/post/9380766884/calendar-tables-are-incredibly-useful-in-sql), then you can simply use:
```
SELECT COUNT(*)
FROM dbo.CalendarTable
WHERE IsWorkingDay = 1
AND [Date] > @StartDate
AND [Date] <= @EndDate;
```
Since SQL has no knowledge of national holidays, for example, the number of weekdays between two dates does not always represent the number of working days. This is why a calendar table is a must for most databases. They do not take a lot of memory and simplify a lot of queries.
But if this is not an option then you can generate a table of dates relatively easily on the fly and use this
```
SET DATEFIRST 1;
DECLARE @StartDate DATETIME = '20131103',
@EndDate DATETIME = '20131104';
-- GENERATE A LIST OF ALL DATES BETWEEN THE START DATE AND THE END DATE
WITH AllDates AS
( SELECT TOP (DATEDIFF(DAY, @StartDate, @EndDate))
D = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.Object_ID), @StartDate)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
)
SELECT WeekDays = COUNT(*)
FROM AllDates
WHERE DATEPART(WEEKDAY, D) NOT IN (6, 7);
```
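As a cross-check, the same count (days strictly after the start date, up to and including the end date, Mondays to Fridays only, matching the query's date range) is easy to compute in plain Python:

```python
from datetime import date, timedelta

def weekdays_between(start, end):
    """Count Mon-Fri days strictly after `start`, up to and including `end`."""
    count = 0
    d = start + timedelta(days=1)
    while d <= end:
        if d.weekday() < 5:  # 0 = Monday .. 4 = Friday
            count += 1
        d += timedelta(days=1)
    return count
```

For the sample range 2013-11-03 (Sunday) to 2013-11-04 (Monday) this counts exactly one weekday, matching the query above. Holidays, as noted, still need a calendar table.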
---
**EDIT**
If you need to calculate the difference between two date columns you can still use your calendar table as so:
```
SELECT t.ID,
t.Date1,
t.Date2,
WorkingDays = COUNT(c.DateKey)
FROM TestTable t
LEFT JOIN dbo.Calendar c
ON c.DateKey >= t.Date1
AND c.DateKey < t.Date2
AND c.IsWorkingDay = 1
GROUP BY t.ID, t.Date1, t.Date2;
```
**[Example on SQL-Fiddle](http://sqlfiddle.com/#!3/019d5/5)** | This does it excluding the days out but date part rather than description. You can substitute the parameters used as an example for the values in your query.
```
Declare
@startdate datetime = '2013-11-01',
@enddate datetime = '2013-11-11'
SELECT
(DATEDIFF(dd, @StartDate, @EndDate) + 1)
-(DATEDIFF(wk, @StartDate, @EndDate) * 2)
-(case datepart(dw, @StartDate)+@@datefirst when 8 then 1 else 0 end)
-(case datepart(dw, @EndDate)+@@datefirst when 7 then 1 when 14 then 1 else 0 end)
-- Returns 7
``` | Calculating days excluding weekends (Monday to Friday) in SQL Server | [
"",
"sql",
"sql-server-2008",
""
] |
I have a stored procedure in which I am trying to execute a @cmd variable which is a BACKUP DATABASE command. When I run the stored procedure (which is in a job through the reporting agent) I get an error with the 'C:' in my path, I believe because the path is not enclosed in quotes. How can I wrap the @Filename variable in quotes so the database backup command can work?
Here is the code for the stored procedure:
```
CREATE PROCEDURE pAgentBackup
AS
--1. Declare Variables.
DECLARE @MaxID INT
DECLARE @MinID INT
DECLARE @CurrentID INT
DECLARE @Path VARCHAR(100)
DECLARE @DBName VARCHAR(50)
DECLARE @Type CHAR(1)
DECLARE @cmd VARCHAR(500)
DECLARE @Recovery VARCHAR(300)
DECLARE @Filename VARCHAR(100)
SELECT @CurrentID = 0, @MaxID = MAX(BKID) FROM DBBackups
--2. Loop through the columns in the table and execute the backups.
WHILE @CurrentID < @MaxID
BEGIN
SELECT @CurrentID = MIN(BKID) FROM DBBackups WHERE BKID > @CurrentID
SELECT @DBName = DbName FROM DBBackups WHERE BKID = @CurrentID
SELECT @Path = Path FROM DBBackups WHERE BKID = @CurrentID
SELECT @Type = Type FROM DBBackups WHERE BKID = @CurrentID
IF @Type = 'F'
BEGIN
SELECT @Recovery = CAST(DATABASEPROPERTYEX(name, 'Recovery') AS VARCHAR(25))
FROM master.dbo.sysdatabases WHERE name = @DBName
--Set recovery to full if it is not already turned on.
IF @Recovery <> 'FULL'
BEGIN
SELECT @cmd = 'ALTER DATABASE ' + @DBName + ' SET RECOVERY FULL'
EXEC (@cmd)
END
--BEGIN Backup
SET @Filename = @Path + @DBName + 'Full' + CONVERT(VARCHAR(8), GETDATE(), 112) + '.bak'
SET @cmd = 'BACKUP DATABASE ' + @DBName + ' TO DISK = ' + @Filename
EXEC (@cmd)
SET @Filename = @Path + '\Log' + @DBName + 'Log' + CONVERT(VARCHAR(8), GETDATE(), 112) + '.bak'
SET @cmd = 'BACKUP LOG ' + @DBName + ' TO DISK = ' + @Filename
EXEC (@cmd)
END
IF @Type = 'L'
BEGIN
SELECT @Recovery = CAST(DATABASEPROPERTYEX(name, 'Recovery') AS VARCHAR(25))
FROM master.dbo.sysdatabases WHERE name = @DBName
--Set recovery to full if it is not already turned on.
IF @Recovery <> 'FULL'
BEGIN
SELECT @cmd = 'ALTER DATABASE ' + @DBName + ' SET RECOVERY FULL'
EXEC (@cmd)
END
--Begin Backup
SET @Filename = @Path + '\Log' + @DBName + 'Log' + CONVERT(VARCHAR(8), GETDATE(), 112) + '.bak'
SET @cmd = 'BACKUP LOG ' + @DBName + ' TO DISK = ' + @Filename
EXEC (@cmd)
END
END
GO
``` | Change your query like
```
SET @cmd = 'BACKUP DATABASE ' + @DBName + ' TO DISK = ''' + @Filename + ''''
```
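To make the target visible, here is a small Python sketch (hypothetical names) of the exact string `EXEC` should receive once the doubled quotes (`''`) in the T-SQL above are resolved:

```python
db_name = 'MyDb'                      # hypothetical database name
filename = r'C:\Backup\MyDbFull.bak'  # hypothetical backup path

# This is the string SQL Server must ultimately execute: the path wrapped
# in single quotes. In the T-SQL above, each embedded quote is written as
# two single quotes inside the string literal.
cmd = "BACKUP DATABASE " + db_name + " TO DISK = '" + filename + "'"
```

Printing `cmd` shows the path enclosed in exactly one pair of single quotes, which is what the `'''` sequences produce.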
Also change all queries where @Filename variable used in your SP. | I think `REPLACE` + `QUOTENAME` makes the query much easier to read and maintain
```
SET @cmd = '
BACKUP DATABASE @DBName
TO DISK = @Filename
';
SET @cmd = REPLACE(@cmd, '@DBName', QUOTENAME(@DBName));
SET @cmd = REPLACE(@cmd, '@Filename', QUOTENAME(@Filename, ''''));
``` | Adding Single Quotes Around A Scalar Variable | [
"",
"sql",
"sql-server",
""
] |
Suppose I have a table named EMPLOYEE containing the following attributes
```
(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, MIDDLE_NAME, JOB_ID, MANAGER_ID, Salary)
```
Can I
display the details of the employee drawing the Nth highest salary?
Please help | **Test Table**
```
CREATE TABLE Test
(ID INT IDENTITY(1,1),
Salary INT)
INSERT INTO Test
VALUES (100), (200), (300), (400), (500)
SELECT * FROM Test
```
**Query**
```
SELECT TOP 1 Salary
FROM
(SELECT TOP 3 Salary FROM Test ORDER BY Salary DESC)q
ORDER BY Salary ASC
```
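The same idea (skip N - 1 rows, take one) can be sanity-checked with Python's built-in sqlite3, where `LIMIT 1 OFFSET N-1` plays the role of the nested `TOP`s:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Test (ID INTEGER PRIMARY KEY, Salary INTEGER)')
conn.executemany('INSERT INTO Test (Salary) VALUES (?)',
                 [(100,), (200,), (300,), (400,), (500,)])

n = 3  # Nth highest
row = conn.execute(
    'SELECT Salary FROM Test ORDER BY Salary DESC LIMIT 1 OFFSET ?', (n - 1,)
).fetchone()
```

With the sample data above, the 3rd highest salary is 300, matching the nested `TOP` query.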
In your sub-query, use SELECT TOP N; the rest remains the same, and it will get you the desired results. | `ORDER BY` and `LIMIT`, where `10` is `n + 1`:
```
SELECT
*
FROM
employees
ORDER BY
Salary DESC
LIMIT
10, 1
```
(If you want the first record, use `LIMIT 0, 1`. For the tenth, use `LIMIT 9, 1`, etc.) | MySQL retrieving the nth record | [
"",
"mysql",
"sql",
""
] |
I am given a 12-digit MySQL timestamp (253402214400) to format as `mm/dd/yyyy` without using any language. The database field is of type longtext, and here is what I am doing.
`SELECT time_stamp;` returns `253402214400`
`SELECT DATE_FORMAT(time_stamp,'%m/%d/%Y');` returns `NULL`
`SELECT FROM_UNIXTIME(time_stamp);` also returns `NULL`
`SELECT UNIX_TIMESTAMP(time_stamp);` returns `0`
I don't know what type of `Timestamp` this is, but please help me format it as `mm/dd/yyyy`.
`Thanks` | I am sorry everybody, I think the time was givin in `milliseconds`, so by some research I could reach here <https://stackoverflow.com/a/9483289/750302> and resolved it with `DATE_FORMAT(FROM_UNIXTIME(time_stamp/1000), '%m/%d/%Y');` [SQLFIDDLE](http://sqlfiddle.com/#!2/93a575/2 "SqlFiddle")! ...
> `Thank You Everyone for your Support!` | Try this:
```
DATE_FORMAT(FROM_UNIXTIME(time_stamp), '%d/%m/%Y')
```
Example:
```
SELECT DATE_FORMAT('2016-05-01 09:23:00', '%d/%m/%Y') // result: 01/05/2016
``` | Format MySQL timestamp to mm/dd/yyyy | [
"",
"mysql",
"sql",
"timestamp",
"unix-timestamp",
""
] |
I have a table\_1:
```
id custno
1 1
2 2
3 3
```
and a table\_2:
```
id custno qty descr
1 1 10 a
2 1 7 b
3 2 4 c
4 3 7 d
5 1 5 e
6 1 5 f
```
When I run this query to show the minimum order quantities from every customer:
```
SELECT DISTINCT table_1.custno,table_2.qty,table_2.descr
FROM table_1
LEFT OUTER JOIN table_2
ON table_1.custno = table_2.custno AND qty = (SELECT MIN(qty) FROM table_2
WHERE table_2.custno = table_1.custno )
```
Then I get this result:
```
custno qty descr
1 5 e
1 5 f
2 4 c
3 7 d
```
Customer 1 appears twice each time with the same minimum qty (& a different description) but I only want to see customer 1 appear once. I don't care if that is the record with 'e' as a description or 'f' as a description. | "Generic" SQL way:
```
SELECT table_1.custno,table_2.qty,table_2.descr
FROM table_1, table_2
WHERE table_2.id = (SELECT TOP 1 id
FROM table_2
WHERE custno = table_1.custno
ORDER BY qty )
```
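Both the query above and the `ROW_NUMBER()` version below should pick one row per custno: the one with the smallest qty, with ties broken by whichever row comes first. A quick plain-Python check of that selection on the question's sample rows:

```python
rows = [  # (id, custno, qty, descr) from the question
    (1, 1, 10, 'a'), (2, 1, 7, 'b'), (3, 2, 4, 'c'),
    (4, 3, 7, 'd'), (5, 1, 5, 'e'), (6, 1, 5, 'f'),
]

# Keep the first row seen with the minimum qty per custno
# (mirrors ROW_NUMBER() ... ORDER BY qty picking RowNum = 1)
best = {}
for row in rows:
    _, custno, qty, _ = row
    if custno not in best or qty < best[custno][2]:
        best[custno] = row

result = sorted(best.values())
```

Customer 1 resolves to the row with descr 'e' (the first of the two tied rows), matching what the OP asked for.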
SQL 2008 way (probably faster):
```
SELECT custno, qty, descr
FROM
(SELECT
custno,
qty,
descr,
ROW_NUMBER() OVER (PARTITION BY custno ORDER BY qty) RowNum
FROM table_2
) A
WHERE RowNum = 1
``` | First of all... I'm not sure why you need to include `table_1` in the queries to begin with:
```
select custno, min(qty) as min_qty
from table_2
group by custno;
```
But just in case there is other information that you need that wasn't included in the question:
```
select table_1.custno, ifnull(min(qty),0) as min_qty
from table_1
left outer join table_2
on table_1.custno = table_2.custno
group by table_1.custno;
``` | left join without duplicate values using MIN() | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"join",
""
] |
Suppose i have a long query string for eg.
```
SELECT id from users where collegeid='1' or collegeid='2' . . . collegeid='1000'
```
will it affect the speed or output in any way?
```
SELECT m.id,m.message,m.postby,m.tstamp,m.type,m.category,u.name,u.img
from messages m
join users u on m.postby=u.uid
where m.cid = '1' or m.cid = '1' . . . . . .
or m.cid = '1000'. . . .
I would prefer to use `IN` in this case, as it is cleaner and typically faster. To check the performance, look at the execution plans of the queries you are running; that will show you the difference between the two approaches.
Something like this:
```
SELECT id from users where collegeid IN ('1','2','3'....,'1000')
```
According to the [MYSQL](http://dev.mysql.com/doc/refman/5.5/en/comparison-operators.html#function_in)
> If all values are constants, they are evaluated according to the type
> of expr and sorted. The search for the item then is done using a
> binary search. **This means IN is very quick if the IN value list
> consists entirely of constants.**
>
> The number of values in the IN list is only limited by the
> **[max\_allowed\_packet](http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_max_allowed_packet)** value.
You may also check [IN vs OR in the SQL WHERE Clause](https://stackoverflow.com/questions/3074713/in-vs-or-in-the-sql-where-clause) and **[MYSQL OR vs IN performance](https://stackoverflow.com/questions/782915/mysql-or-vs-in-performance)**
The answer given by Ergec is very useful:
```
SELECT * FROM item WHERE id = 1 OR id = 2 ... id = 10000
```
This query took **0.1239 seconds**
```
SELECT * FROM item WHERE id IN (1,2,3,...10000)
```
This query took **0.0433 seconds**
# IN is 3 times faster than OR
> **will it affect the speed or output in any way?**
So the answer is **Yes**, the performance will be affected. | Obviously, there is no direct correlation between the length of a query string and its processing time (a very short query can be tremendously complex and vice versa). For your specific example: it depends on how the query is processed. This is something you can check by looking at the query execution plan (syntax depends on your DBMS, something like EXPLAIN PLAN). If the DBMS has to perform a full table scan, performance will only be affected slightly, since the DBMS has to visit all pages that make up the table anyhow. If there is an index on collegeid, performance will likely suffer more the more entries you put into your disjunction, since there will be several (though very fast) index lookups. At some point, there will be a full index scan instead of individual lookups, at which point performance will not degrade significantly anymore.
However, the details depend on your DBMS and its execution planner. | Does long query string affect the speed? | [
"",
"mysql",
"sql",
"performance",
"optimization",
""
] |
I have the following table
```
col1 col2 col3 col4
==== ==== ==== ====
1233 4566 ABCD CDEF
1233 4566 ACD1 CDEF
1233 4566 D1AF CDEF
```
I need to count the characters in col3, so from the data in the previous table it would be:
```
char count
==== =====
A 3
B 1
C 2
D 3
F 1
1 2
```
Is this possible to achieve by using SQL only?
At the moment I am thinking of passing a parameter in to SQL query and count the characters one by one and then sum, however I did not start the VBA part yet, and frankly wouldn't want to do that.
This is my query at the moment:
```
PARAMETERS X Long;
SELECT First(Mid(TABLE.col3,X,1)) AS [col3 Field], Count(Mid(TABLE.col3,X,1)) AS Dcount
FROM TEST
GROUP BY Mid(TABLE.col3,X,1)
HAVING (((Count(Mid([TABLE].[col3],[X],1)))>=1));
```
Ideas and help are much appreciated, as I don't usually work with Access and SQL. | Knowing that column 3 has a fixed length of 4, this problem is quite easy.
Assume there is a view V with four columns, each for one character in column 3.
```
V(c1, c2, c3, c4)
```
Unfortunately, I'm not familiar with Access-specific SQL, but this is the general SQL statement you would need:
```
SELECT c, COUNT(*) FROM
(
SELECT c1 AS c FROM V
UNION ALL
SELECT c2 FROM V
UNION ALL
SELECT c3 FROM V
UNION ALL
SELECT c4 FROM V
)
GROUP BY c
``` | You can accomplish your task in pure Access SQL by using a Numbers table. In this case, the Numbers table must contain integer values from 1 to some number larger than the longest string of characters in your source data. In this example, the strings of characters to be processed are in [CharacterData]:
```
CharacterList
-------------
GORD
WAS
HERE
```
and the [Numbers] table is simply
```
n
--
1
2
3
4
5
```
If we use a cross join to extract the characters (eliminating any empty strings that result from `n` exceeding `Len(CharacterList)`)...
```
SELECT
Mid(cd.CharacterList, nb.n, 1) AS c
FROM
CharacterData cd,
Numbers nb
WHERE Mid(cd.CharacterList, nb.n, 1) <> ""
```
...we get ...
```
c
--
G
W
H
O
A
E
R
S
R
D
E
```
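Before the final aggregation, the expected tally can be cross-checked with Python's `collections.Counter` on the same sample strings:

```python
from collections import Counter

# Sample strings from the CharacterData table above
character_data = ['GORD', 'WAS', 'HERE']
counts = Counter(''.join(character_data))
```

The per-character counts agree with the aggregation query's result (E and R twice, everything else once).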
Now we can just wrap that in an aggregation query
```
SELECT c AS Character, COUNT(*) AS CountOfCharacter
FROM
(
SELECT
Mid(cd.CharacterList, nb.n, 1) AS c
FROM
CharacterData cd,
Numbers nb
WHERE Mid(cd.CharacterList, nb.n, 1) <> ""
)
GROUP BY c
```
which gives us
```
Character CountOfCharacter
--------- ----------------
A 1
D 1
E 2
G 1
H 1
O 1
R 2
S 1
W 1
``` | Counting characters in an Access database column using SQL | [
"",
"sql",
"ms-access",
"vba",
""
] |
How can I display data from table1 where its id (Table1 Id)is not contained in table2
```
string Query = "SELECT * FROM Table1 WHERE Table1ID!=" Table2_Table1ID;
``` | ```
SELECT * FROM TABLE1
WHERE NOT EXISTS(SELECT Table1ID from Table2 where Table1ID=Table1.Id)
``` | You can do:
```
SELECT * FROM table1
WHERE table1ID NOT IN (SELECT table1ID FROM table2);
``` | display data from table1 where its id (Table1 Id)is not contained in table2 | [
"",
"sql",
""
] |
We had an issue with our stock whereby we had a balance column that would be added to or subtracted from whenever a transaction occurred.
There were some issues that we could not trace and hence have made changes whereby the stock would be calculated by adding and subtracting (where appropriate) the quantity moving and come up to today's stock value.
However now, the structure is such that one product has multiple stocks therefore product A with expiry 01/2012, 02/2013 etc.
I have currently created a query whereby for one stock of a product, it would calculate its current stock as follows:
```
select
(select ISNULL(sum(gd.qty),0) as grnadd from grns as g
INNER JOIN grndetails as gd ON g.id = gd.grnid
INNER JOIN stocks as s ON s.id = gd.stockid
where g.locationid = 10 and s.prodid =2653)
-
(select ISNULL(sum(cod.qty), 0) as salesub from salesorders as co
INNER JOIN salesorddetails as cod ON co.id = cod.cusordid
INNER JOIN stocks as s ON s.id = cod.stockid
where co.status != 'cancel' and co.locid = 10 and s.prodid =2653)
-
(select ISNULL(sum(cod.qty), 0) as cussub from customerorders as co
INNER JOIN customerorddetails as cod ON co.id = cod.cusordid
INNER JOIN stocks as s ON s.id = cod.stockid
where co.status != 'cancel' and co.locid = 10 and s.prodid =2653)
```
Therefore in this case the stock is calculated for one product however can I make a query that would list all products with their totals (as above) in the second column?
Thank you and hope the structure is understood from the above query
EDIT:
Stocks Table: id, prodid, expiry
Products Table: id, tradename
GRN Details Table: id, grnid, qty, stockid (since its affecting the stock not product)
Sales & Customer Order Details Table: id, cusordid, qty, stockid
GRN & Sales & Cus Ord Table: id, locid
Locations Table: id, locname | try to include all product-ids and group by it. here's some proposed code
```
select prodid, sum(total) as total from (
(select s.prodid
, ISNULL(sum(gd.qty),0) as total
from grns as g
INNER JOIN grndetails as gd
ON g.id = gd.grnid
INNER JOIN stocks as s
ON s.id = gd.stockid
where g.locationid = 10 group by s.prodid)
union all
(select s.prodid
, ISNULL(sum(cod.qty), 0)*(-1) as total
from salesorders as co
INNER JOIN salesorddetails as cod
ON co.id = cod.cusordid
INNER JOIN stocks as s
ON s.id = cod.stockid
where co.status != 'cancel'
and co.locid = 10 group by s.prodid)
union all
(select s.prod_id
, ISNULL(sum(cod.qty), 0)*(-1) as total
from customerorders as co
INNER JOIN customerorddetails as cod
ON co.id = cod.cusordid
INNER JOIN stocks as s
ON s.id = cod.stockid
where co.status != 'cancel'
and co.locid = 10 group by s.prodid)
) as x
group by prodid
```
i have changed all the minuses to union-alls so that the product-ids will not get subtracted from each other and the grand total will be calculated respective to your product-ids. | Try this:
```
WITH
S AS (
SELECT DISTINCT prodid FROM stocks
),
L AS (
/* SELECT 10 AS locationid */
SELECT DISTINCT locationid FROM grns
),
T1 AS (
select
g.locationid,
s.prodid,
sum(gd.qty) as grnadd
from grns as g
INNER JOIN grndetails as gd ON g.id = gd.grnid
INNER JOIN stocks as s ON s.id = gd.stockid
GROUP BY g.locationid, s.prodid
),
T2 AS (
select
co.locid as locationid,
s.prodid,
sum(cod.qty) as salesub
from salesorders as co
INNER JOIN salesorddetails as cod ON co.id = cod.cusordid
INNER JOIN stocks as s ON s.id = cod.stockid
where co.status != 'cancel'
GROUP BY co.locid, s.prodid
),
T3 AS (
select
co.locid as locationid,
s.prodid,
sum(cod.qty) as cussub
from customerorders as co
INNER JOIN customerorddetails as cod ON co.id = cod.cusordid
INNER JOIN stocks as s ON s.id = cod.stockid
where co.status != 'cancel'
GROUP BY co.locid, s.prodid
)
SELECT
S.prodid, L.locationid, ISNULL(grnadd, 0) - ISNULL(salesub, 0) - ISNULL(cussub, 0)
FROM
S CROSS JOIN
L LEFT OUTER JOIN
T1 ON T1.locationid = L.locationid AND T1.prodid = S.prodid LEFT OUTER JOIN
T2 ON T2.locationid = L.locationid AND T2.prodid = S.prodid LEFT OUTER JOIN
T3 ON T3.locationid = L.locationid AND T3.prodid = S.prodid
```
I assumed that location might also be important, so it returns 3 columns. If you want results only for the location with id 10, then in the `L` CTE uncomment the first line and comment out the second. | Calculating stock quantity from multiple table transactions with SQL | [
"",
"sql",
"sql-server",
""
] |
suppose i have this table:
```
group_id | image | image_id |
-----------------------------
23 blob 1
23 blob 2
23 blob 3
21 blob 4
21 blob 5
25 blob 6
25 blob 7
```
How can I get only one result for each group id? In this case, there may be multiple images for one group id; I just want one result for each **group\_id**.
I tried DISTINCT, but that only gives me group\_id; MAX on the image column also would not work. | There are **no** standard aggregate functions in Oracle that would work with `BLOB`s, so `GROUP BY` solutions won't work.
Try this one based on `ROW_NUMBER()` in a sub-query.
```
SELECT inn.group_id, inn.image, inn.image_id
FROM
(
SELECT t.group_id, t.image, t.image_id,
ROW_NUMBER() OVER (PARTITION BY t.group_id ORDER BY t.image_id) num
FROM theTable t
) inn
WHERE inn.num = 1;
```
The above should return the first (based on `image_id`) row for each group.
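For a quick local check without Oracle, essentially the same `ROW_NUMBER()` query runs on SQLite 3.25+ (a text placeholder stands in for the BLOB); a sketch in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # needs SQLite 3.25+ for window functions
conn.executescript("""
CREATE TABLE theTable (group_id INTEGER, image TEXT, image_id INTEGER);
INSERT INTO theTable VALUES
  (23, 'blob', 1), (23, 'blob', 2), (23, 'blob', 3),
  (21, 'blob', 4), (21, 'blob', 5),
  (25, 'blob', 6), (25, 'blob', 7);
""")

# One row per group: the row with the smallest image_id wins.
rows = conn.execute("""
SELECT group_id, image, image_id
FROM (
    SELECT group_id, image, image_id,
           ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY image_id) AS num
    FROM theTable
) AS inn
WHERE num = 1
ORDER BY group_id
""").fetchall()
print(rows)  # [(21, 'blob', 4), (23, 'blob', 1), (25, 'blob', 6)]
```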
[**SQL Fiddle**](http://sqlfiddle.com/#!4/ff076/1) | ```
SELECT group_id, image, image_id
FROM a_table
WHERE (group_id, image_id) IN
(
SELECT group_id, MIN(image_id)
FROM a_table
GROUP BY
group_id
)
;
``` | How to get only one record for each duplicate rows of the id in oracle? | [
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
I have MyTable with a Column Message NVARCHAR(MAX).
Record with ID 1 contains the Message '0123456789333444 Test'
When I run the following query
```
DECLARE @Keyword NVARCHAR(100)
SET @Keyword = '0123456789000001*'
SELECT *
FROM MyTable
WHERE CONTAINS(Message, @Keyword)
```
Record ID 1 is showing up in the results and in my opinion it should not, because 0123456789333444 does not contain 0123456789000001.
Can someone explain why the record is showing up anyway?
**EDIT**
```
select * from sys.dm_fts_parser('"0123456789333444 Test"',1033,0,0)
```
returns the following:
```
group_id phrase_id occurrence special_term display_term expansion_type source_term
1 0 1 Exact Match 0123456789333444 0 0123456789333444 Test
1 0 1 Exact Match nn0123456789333444 0 0123456789333444 Test
1 0 2 Exact Match test 0 0123456789333444 Test
``` | Found a solution that works. I've added `language 1033` as an additional parameter.
```
SELECT * FROM MyTable WHERE CONTAINS(Message, @Keyword, LANGUAGE 1033)
``` | This is because the @Keyword is not wrapped in double quotes, so the asterisk is not treated as a prefix wildcard that matches zero, one, or more characters.
> Specifies a match of words or phrases beginning with
> the specified text. Enclose a prefix term in double quotation marks
> ("") and add an asterisk (*) before the ending quotation mark, so that
> all text starting with the simple term specified before the asterisk
> is matched. The clause should be specified this way: CONTAINS (column,
> '"text*"'). The asterisk matches zero, one, or more characters (of the
> root word or words in the word or phrase). If the text and asterisk
> are not delimited by double quotation marks, so the predicate reads
> CONTAINS (column, 'text\*'), full-text search considers the asterisk as
> a character and searches for exact matches to text\*. The full-text
> engine will not find words with the asterisk (\*) character because
> word breakers typically ignore such characters.
>
> When the prefix term is a phrase, each word contained in the phrase is
> considered to be a separate prefix. Therefore, a query specifying a
> prefix term of "local wine\*" matches any rows with the text of "local
> winery", "locally wined and dined", and so on.
Have a look at the MSDN on the topic. [MSDN](http://technet.microsoft.com/en-us/library/ms187787.aspx) | Strange behaviour with Fulltext search in SQL Server | [
"",
"sql",
"sql-server",
"t-sql",
"full-text-search",
"contains",
""
] |
I have a table looks like given below query, I add products price in this table daily, with different sellers name :
```
create table Product_Price
(
id int,
dt date,
SellerName varchar(20),
Product varchar(10),
Price money
)
insert into Product_Price values (1, '2012-01-16','Sears','AA', 32)
insert into Product_Price values (2, '2012-01-16','Amazon', 'AA', 40)
insert into Product_Price values (3, '2012-01-16','eBay','AA', 27)
insert into Product_Price values (4, '2012-01-17','Sears','BC', 33.2)
insert into Product_Price values (5, '2012-01-17','Amazon', 'BC',30)
insert into Product_Price values (6, '2012-01-17','eBay', 'BC',51.4)
insert into Product_Price values (7, '2012-01-18','Sears','DE', 13.5)
insert into Product_Price values (8, '2012-01-18','Amazon','DE', 11.1)
insert into Product_Price values (9, '2012-01-18', 'eBay','DE', 9.4)
```
I want a result like this for any number of sellers (as more sellers are added to the table):
```
DT PRODUCT Sears[My Site] Amazon Ebay Lowest Price
1/16/2012 AA 32 40 27 Ebay
1/17/2012 BC 33.2 30 51.4 Amazon
1/18/2012 DE 7.5 11.1 9.4 Sears
``` | I think this is what you're looking for.
[SQLFiddle](http://sqlfiddle.com/#!3/e4bb4/3)
It's kind of ugly, but here's a little breakdown.
This block allows you to get a dynamic list of your values. (Can't remember who I stole this from, but it's awesome. Without this, pivot really isn't any better than a big giant case statement approach to this.)
```
DECLARE @cols AS VARCHAR(MAX)
DECLARE @query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' +
QUOTENAME(SellerName)
FROM Product_Price
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
, 1, 1, '')
```
Your @cols variable comes out like so:
```
[Amazon],[eBay],[Sears]
```
Then you need to build a string of your entire query:
```
select @query =
'select piv1.*, tt.sellername from (
select *
from
(select dt, product, SellerName, sum(price) as price from product_price group by dt, product, SellerName) t1
pivot (sum(price) for SellerName in (' + @cols + '))as bob
) piv1
inner join
(select t2.dt,t2.sellername,t1.min_price from
(select dt, min(price) as min_price from product_price group by dt) t1
inner join (select dt,sellername, sum(price) as price from product_price group by dt,sellername) t2 on t1.min_price = t2.price) tt
on piv1.dt = tt.dt
'
```
The piv1 derived table gets you the pivoted values. The cleverly named tt derived table gets you the seller who has the minimum sales for each day.
(Told you it was kind of ugly.)
And finally, you run your query:
```
execute(@query)
```
And you get:
```
DT PRODUCT AMAZON EBAY SEARS SELLERNAME
2012-01-16 AA 40 27 32 eBay
2012-01-17 BC 30 51.4 33.2 Amazon
2012-01-18 DE 11.1 9.4 13.5 eBay
```
(sorry, can't make that bit line up).
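If a client language is available, the pivot can also be done there rather than in dynamic SQL; here is a hedged sketch in Python, with the tuples hard-coded to stand in for the flat `dt, product, SellerName, price` result set:

```python
from collections import defaultdict

# Stand-in for the rows the flat (un-pivoted) query would return.
rows = [
    ("2012-01-16", "AA", "Sears", 32.0), ("2012-01-16", "AA", "Amazon", 40.0),
    ("2012-01-16", "AA", "eBay", 27.0),
    ("2012-01-17", "BC", "Sears", 33.2), ("2012-01-17", "BC", "Amazon", 30.0),
    ("2012-01-17", "BC", "eBay", 51.4),
]

sellers = sorted({seller for _, _, seller, _ in rows})  # columns found at runtime
pivot = defaultdict(dict)                               # (dt, product) -> {seller: price}
for dt, product, seller, price in rows:
    pivot[(dt, product)][seller] = price

report = []
for (dt, product), prices in sorted(pivot.items()):
    cheapest = min(prices, key=prices.get)              # seller with the lowest price
    report.append((dt, product, *(prices.get(s) for s in sellers), cheapest))

print(sellers)    # ['Amazon', 'Sears', 'eBay']
print(report[0])  # ('2012-01-16', 'AA', 40.0, 32.0, 27.0, 'eBay')
```

New sellers simply become new columns, with no SQL string-building required.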
I would think that if you have a reporting tool that can do crosstabs, this would be a heck of a lot easier to do there. | The problem is this requirement:
> I want result like this for n number of sellers
If you have a fixed, known number of columns for your results, there are several techniques to [PIVOT](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) your data. But if the number of columns is not known, you're in trouble. The SQL language really wants you to be able to describe the exact nature of the result set for the select list in terms of the number and types of columns up front.
It sounds like you can't do that. This leaves you with two options:
1. Query the data to know how many stores you have and their names, and then use that information to build a dynamic sql statement.
2. (Preferred option) Perform the pivot in client code. | Product price comparison in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a table which lists a number of cases and the assigned primary and secondary technicians. What I am trying to accomplish is to aggregate the number of cases each technician has worked as primary and as secondary tech. It should look something like this:
```
Technician Primary Secondary
John 4 3
Stacy 3 1
Michael 5 3
```
The table that I am pulling that data from looks like this:
```
CaseID, PrimaryTech, SecondaryTech, DOS
```
In the past I have used something like this, but now my superiors are asking for the number of secondary cases as well...
```
SELECT PrimaryTech, COUNT(CaseID) as Total
GROUP BY PrimaryTech
```
I've done a bit of searching but can't seem to find the answer to my problem. | You can group two subqueries together with a FULL JOIN, as demonstrated in this [SQLFiddle](http://www.sqlfiddle.com/#!6/0db6a/1).
```
SELECT Technician = COALESCE(pri.Technician, sec.Technician)
, PrimaryTech
, SecondaryTech
FROM
(SELECT Technician = PrimaryTech
, PrimaryTech = COUNT(*)
FROM Cases
WHERE PrimaryTech IS NOT NULL
GROUP BY PrimaryTech) pri
FULL JOIN
(SELECT Technician = SecondaryTech
, SecondaryTech = COUNT(*)
FROM Cases
WHERE SecondaryTech IS NOT NULL
GROUP BY SecondaryTech) sec
ON pri.Technician = sec.Technician
ORDER By Technician;
``` | ```
Select Tech,
sum(case when IsPrimary = 1 then 1 else 0 end) as PrimaryCount,
sum(case when IsPrimary = 0 then 1 else 0 end) as SecondaryCount
from
(
SELECT SecondaryTech as Tech, 0 as IsPrimary
FROM your_table
union all
SELECT PrimaryTech as Tech, 1 as IsPrimary
FROM your_table
) x
GROUP BY Tech
``` | SQL Aggreate Functions | [
"",
"sql",
"sql-server",
"sql-server-2008",
"aggregate",
""
] |
I have come across many websites while trying to find out which one is better, ANSI or non-ANSI syntax. What is the difference between these two queries?
```
select a.name,a.empno,b.loc
from tab a, tab b
where a.deptno=b.deptno(+);
```
and:
```
select a.name,a.empno,b.loc
from tab a
left outer join tab b on a.deptno=b.deptno;
```
The result is the same in both cases. The second query is also longer. Which one is better?
Suppose we add another table, Salgrade, to the above query: on what conditions would we need to join it?
Can anyone assume an example table and give me an explanation? | Both syntaxes usually work without problems, but if you try to add a WHERE condition you will see that with the second one it is much simpler to understand which is the join condition and which is the WHERE clause.
1)
```
SELECT a.name,
a.empno,
b.loc
FROM tab a,
tab b
WHERE a.deptno = b.deptno(+)
AND a.empno = 190;
```
2)
```
SELECT a.name,
a.empno,
b.loc
FROM tab a
LEFT OUTER JOIN tab b
ON a.deptno = b.deptno
WHERE a.empno = 190;
```
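The "which is the join condition and which is the where clause" point is more than readability: on an outer join, moving a predicate between `ON` and `WHERE` changes the result. A small demonstration in Python with SQLite, using made-up `emp`/`dept` tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp  (empno INTEGER, name TEXT, deptno INTEGER);
CREATE TABLE dept (deptno INTEGER, loc TEXT);
INSERT INTO emp  VALUES (190, 'Ann', 10), (191, 'Bob', NULL);
INSERT INTO dept VALUES (10, 'NY');
""")

# Predicate in the ON clause: dept rows are filtered BEFORE the outer join,
# so unmatched employees are still kept (with a NULL loc).
on_clause = conn.execute("""
SELECT e.empno, d.loc FROM emp e
LEFT OUTER JOIN dept d ON e.deptno = d.deptno AND d.loc = 'NY'
ORDER BY e.empno""").fetchall()

# Same predicate in WHERE: applied AFTER the join, discarding the NULL rows
# and silently turning the outer join into an inner one.
where_clause = conn.execute("""
SELECT e.empno, d.loc FROM emp e
LEFT OUTER JOIN dept d ON e.deptno = d.deptno
WHERE d.loc = 'NY'
ORDER BY e.empno""").fetchall()

print(on_clause)     # [(190, 'NY'), (191, None)]
print(where_clause)  # [(190, 'NY')]
```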
Also, with the ANSI syntax it's much easier to recognize an outer join, and you cannot forget to include the (+). Overall you can say it's just a question of taste, but the truth is that the second syntax is much more readable and less prone to errors. | The first is a legacy Oracle-specific way of writing joins; the second is the ANSI SQL-92+ standard and is the preferred one. | What is difference between ANSI and non-ANSI joins, and which do you recommend? | [
"",
"sql",
"oracle",
"join",
""
] |
I have a table with some values:
```
role_number action_number status
4 2 1
4 5 0
4 8 1
4 7 0
4 10 1
4 3 0
```
Now I want a new role\_number with the same action\_number and status values. How do I insert it? For example, the result must look like this:
```
role_number action_number status
4 2 1
4 5 0
4 8 1
4 7 0
4 10 1
4 3 0
5 2 1
5 5 0
5 8 1
5 7 0
5 10 1
5 3 0
``` | ```
INSERT INTO YourTable
(role_number, action_number, status)
SELECT role_number + 1, action_number, status
FROM YourTable
WHERE role_number = 4
``` | Here is a possible solution using INSERT/SELECT:
```
INSERT INTO YourTable(role_number, action_number, status)
SELECT @NewRoleNumber, action_number, status
FROM YourTable
WHERE role_number = @RoleNumberToBeCopied
``` | How to copy values from table for another column | [
"",
"sql",
"sql-server",
""
] |
I am trying to get this table to display book num, book title, and the number of times each book was checked out (even if it was never checked out.)
The current output I get is correct, but I want to sort it by book num and "Times Checked Out".
```
SELECT book.book_num
,book_title
,Count(checkout.book_num) AS "Times Checked Out"
FROM checkout right join book ON checkout.book_num = book.book_num
GROUP BY book.book_num, book_title;
``` | You can also use it as inner query:
```
SELECT * FROM (
SELECT book.book_num,book_title,Count(checkout.book_num) AS "Times Checked Out"
FROM checkout right join book ON checkout.book_num = book.book_num
GROUP BY book.book_num, book_title
) AS foo
ORDER BY "Times Checked Out";
``` | ```
SELECT
book.book_num,
book_title,
COUNT(checkout.book_num) AS "Times Checked Out"
FROM checkout
RIGHT JOIN book ON checkout.book_num = book.book_num
GROUP BY
book.book_num,
book_title
ORDER BY
book.book_num,
COUNT(checkout.book_num);
``` | Need help sorting table with an aggregate function | [
"",
"sql",
""
] |
I am using SQL Server 2008 and I have two separate Count queries. I want to divide the two queries and multiply by 100 to get a percentage.
The two Queries
```
SELECT COUNT(*) FROM
(
SELECT * FROM admin WHERE principal LIKE '%Mrs%'
UNION
SELECT * FROM admin where deputy_principal LIKE '%Mrs%'
) as Count1
SELECT COUNT(*) * 2 FROM admin as Count2
```
I have tried something like this:
```
SELECT(
SELECT COUNT(*) FROM
(
SELECT * FROM admin WHERE principal LIKE '%Mrs%'
UNION
SELECT * FROM admin where deputy_principal LIKE '%Mrs%'
)
/
(
SELECT COUNT(*) * 2 FROM admin as Count2
)
)
```
But I get syntax errors which I am unable to debug. What is the correct way of going about this?
It should divide Count1 and Count2 then multiply by 100 and display it as such:
```
37%
``` | ```
SELECT
((SELECT CAST(COUNT(*) AS DECIMAL(18,2)) FROM admin WHERE principal LIKE '%Mrs%'
OR deputy_principal LIKE '%Mrs%') /
(SELECT CAST(COUNT(*) * 2 AS DECIMAL(18,2)) FROM admin)) * 100
```
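The `CAST(... AS DECIMAL(18,2))` calls are what prevent integer division from truncating the result to 0. The same pitfall is easy to reproduce on SQLite, which also truncates integer division (here `REAL` stands in for `DECIMAL`); a sketch in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Integer division truncates before the multiply, so the fraction is lost.
truncated = conn.execute("SELECT (3 / 8) * 100").fetchone()[0]

# Casting one operand to a non-integer type first keeps the fraction.
exact = conn.execute("SELECT (CAST(3 AS REAL) / 8) * 100").fetchone()[0]

print(truncated, exact)  # 0 37.5
```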
**UPDATE: Comparison vs original approach with dummy data...**
```
DECLARE @admin TABLE(Id INTEGER IDENTITY(1,1), principal VARCHAR(50),
deputy_principal VARCHAR(50))
INSERT @admin (principal, deputy_principal)
VALUES ('Mr Person 1', 'Mr Person 2'),
('Mrs Person 3', 'Mr Person 4'),
('Mr Person 5', 'Mrs Person 6'),
('Mrs Person 7', 'Mrs Person 8')
-- Original attempted way (syntax corrected)
SELECT ((
SELECT CAST(COUNT(*) AS DECIMAL(18,2))
FROM
(
SELECT * FROM @admin WHERE principal LIKE '%Mrs%'
UNION
SELECT * FROM @admin where deputy_principal LIKE '%Mrs%'
) x)
/
(SELECT CAST(COUNT(*) * 2 AS DECIMAL(18,2)) FROM @admin)) * 100
-- Shortened query
SELECT
((SELECT CAST(COUNT(*) AS DECIMAL(18,2)) FROM @admin WHERE principal LIKE '%Mrs%'
OR deputy_principal LIKE '%Mrs%') /
(SELECT CAST(COUNT(*) * 2 AS DECIMAL(18,2)) FROM @admin)) * 100
```
Both output the same result: 37.5 in this example | You could try something like below using `CASE Expression` and `SUM() Function`:
```
SELECT SUM(CASE WHEN principal LIKE '%Mrs%' OR
deputy_principal LIKE '%Mrs%' THEN 100.0 END) / SUM(2) Pecentage
FROM admin
```
**[Fiddle demo](http://sqlfiddle.com/#!3/d132d/18)** using @AdaTheDev 's data | Add two different SQL Count Queries | [
"",
"sql",
"sql-server",
""
] |
I need to run a SQL query from VBA. The input values for the query come from a column in the Excel sheet. I need to take all the values present in that column and pass them as input to a query against SQL Server, but I couldn't get it to work; I am getting a "type mismatch" error. Could anyone help me out? Thanks in advance.
For example, column J contains J1=25, J2=26, and so on.
```
stSQL = "SELECT * FROM prod..status where state in"
stSQL = stSQL & wsSheet.Range("J:J").Value
```
My full code is below
```
Sub Add_Results_Of_ADO_Recordset()
'This was set up using Microsoft ActiveX Data Components version 2.8
Dim cnt As ADODB.Connection
Dim rst As ADODB.Recordset
Dim stSQL As Variant
Dim wbBook As Workbook
Dim wsSheet As Worksheet
Dim rnStart As Range
Const stADO As String = "Provider=SQLOLEDB.1;Integrated Security=SSPI;" & _
"Persist Security Info=False;" & _
"Initial Catalog=prod;" & _
"Data Source=777777777V009D\YTR_MAIN4_T"
'where BI is SQL Database & AURDWDEV01 is SQL Server
Set wbBook = ActiveWorkbook
Set wsSheet = wbBook.Worksheets("sheet1")
With wsSheet
Set rnStart = .Range("A2")
End With
' My SQL Query
stSQL = "SELECT * FROM prod..status where state in"
stSQL = stSQL + wsSheet.Range("J:J").Value
Set cnt = New ADODB.Connection
With cnt
.CursorLocation = adUseClient
.Open stADO
.CommandTimeout = 0
Set rst = .Execute(stSQL)
End With
'Here we add the Recordset to the sheet from A1
rnStart.CopyFromRecordset rst
'Cleaning up.
rst.Close
cnt.Close
Set rst = Nothing
Set cnt = Nothing
End Sub
``` | change to
```
stSQL = stSQL + " (" + Replace(wsSheet.Range("J:J").Value, "'", "") + ")"
```
but a SQL IN statement is usually used like this:
```
state IN (25,26,28)
```
But if you are only using one integer value you might want to go this way.
```
stSQL = "SELECT * FROM prod..status where state = "
stSQL = stSQL & Val(wsSheet.Range("J:J").Value)
```
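A third option worth considering: instead of concatenating cell values into the SQL text at all, bind them as parameters. The sketch below shows the idea in Python with SQLite (in VBA/ADO the rough analogue is an `ADODB.Command` with parameters); the sheet values are faked as a plain list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE status (state INTEGER, name TEXT);
INSERT INTO status VALUES (25, 'open'), (26, 'closed'), (99, 'other');
""")

column_values = [25, 26, 30]  # values as read from the sheet column
placeholders = ",".join("?" for _ in column_values)  # "?,?,?"
sql = f"SELECT state, name FROM status WHERE state IN ({placeholders}) ORDER BY state"
rows = conn.execute(sql, column_values).fetchall()
print(rows)  # [(25, 'open'), (26, 'closed')]
```

Binding parameters sidesteps both quoting bugs and SQL injection, though very long IN lists still argue for the temp-table approach mentioned above.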
There is, though, one thing that is dangerous about using an IN statement:
if the IN list is very long, the query will become slow, and with even larger lists it can fail altogether.
The solution for that kind of situation is creating a temp table with the IN values and doing a WHERE IN (temp table) or an inner join based on the temp table. | I used a For loop and the Mid command to collect the values in the column into a single variable. Below is the code I used to perform the required function:
```
' Getting the last row
With wsSheet
lastrow = .Range("J" & .Rows.Count).End(xlUp).Row
End With
' Appending the values to a single variable
For i = 1 To lastrow
s1 = s1 & "'" & Val(wsSheet.Cells(i, 10)) & "'" & ","
Next
' Variable which could be used in IN command
If lastrow > 0 Then
s1 = Mid(s1, 1, Len(s1) - 1)
s1 = "(" & s1 & ")"
Else
Exit Sub
End If
``` | Sql query taking values from the column in the excelsheet using VBA(macro) | [
"",
"sql",
"vba",
""
] |
given this Schema:
```
table tblSET
SetID int PK
SetName nvarchar(100)
Table tblSetItem
SetID int PK
ItemID int PK
```
tblSetItem.SetID is a FK into the tblSet Table.
Some data:
**tblSet**
```
SetID SetName
1 Red
2 Blue
3 Maroon
4 Yellow
5 Sky
```
**tblSetItem**
```
SetID ItemID
1 100
1 101
2 100
2 108
2 109
3 100
3 101
4 101
4 108
4 109
4 110
5 100
5 108
5 109
```
I'd like a way to identify which sets contain the same items. In the example above **Red** and **Maroon** contain the same items (100,101) and **Blue** and **Sky** contain the same values (100,108,109)
Is there a sql query which would provide this answer? | You can use xml support to create a comma separated list (cf this answer: <https://stackoverflow.com/a/1785923/215752>). For this case I don't care about the form so I leave the starting comma in.
Note, I couldn't test this right now so I might have a typo...
```
select * from
(
    select SetID, setitemuniquestring,
           count(*) OVER (PARTITION BY setitemuniquestring) as cnt
    from
    (
        select S.SetID,
               (select ',' + CAST(I.ItemID AS varchar(10))
                from tblSetItem I
                where S.SetID = I.SetID
                order by I.ItemID ASC
                for xml path('')
               ) as setitemuniquestring
        from tblSet S
        group by S.SetID
    ) sub
) sub2
where cnt > 1
``` | I do assume that you need to identify sets whose contents are exactly the same.
So I would go for a temp table in which to store a "hash" of the contained items.
The hash may be as simple as a comma-separated list of item ids.
Eg.
```
Set Hash
1 100,101
2 100,108,109
3 100,101
4 101,108,109,110
5 100,108,109
```
Then you simply need to select from such a temp table, grouping by the hash value.
Eg.
Duplicate sets only:
```
Count Hash
2 100,101
2 100,108,109
```
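The same grouping can be prototyped entirely client-side; a sketch in Python that uses `frozenset` as the order-insensitive "hash", with the question's sample data hard-coded:

```python
from collections import defaultdict

# setid -> itemids, as in the sample data.
set_items = {
    1: [100, 101], 2: [100, 108, 109], 3: [100, 101],
    4: [101, 108, 109, 110], 5: [100, 108, 109],
}

# Group set ids by their order-insensitive item signature -- the "hash"
# described above, built client-side instead of with string concatenation.
groups = defaultdict(list)
for setid, items in set_items.items():
    groups[frozenset(items)].append(setid)

duplicates = sorted(ids for ids in groups.values() if len(ids) > 1)
print(duplicates)  # [[1, 3], [2, 5]]
```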
So, resuming:
* populate temp table using an xml path function to join item ids (remember to get an ordered list of item ids)
* select duplicate sets on temp table by counting rows grouping by hash
* apply any subsequent form of logic on your duplicate sets | SQL Find duplicate sets | [
"",
"sql",
"sql-server-2008-r2",
""
] |
The image shows my proposed layout for part of a database. My concern is with the price bands and the way these attach to [shows] and [bookings]. There needs to be a list of price bands (as in titles), but the same band can have multiple values depending on which show it is attached to (a standard ticket for Friday could be £10, whereas a standard ticket on Saturday could be £11).
It just seems to me that with this approach there will be a lot of almost identical data: lots of entries for £5 tickets in [showpriceband] with the only difference being the showid.
Is there a better approach to this?
 | I think that your approach is correct. You have
* different ticket types
* different shows
And their relation is n:n. The correct solution for resolving a n:n relation is a separate table (in your case ShowPriceBand) to enlist all the combinations. | As the relationship between `Show` and `PriceBand` is many-to-many, it is a standard approach to define an intermediary table to define this relationship. In your case, apart from the linking columns (foreign keys to `Show` and `PriceBand`) you defined additional properties of the link.
This is a valid approach and there is no need to reduce possible duplication of those additional fields. | Database layout/design inefficient | [
"",
"sql",
"sql-server",
"database",
"database-design",
""
] |
I am using Spring Framwork, `SimpleJdbcTemplate`, to call a stored procedure (SQL-Server) from my Java code.
For the stored procedures that do some update/insert, I do call `simpleJdbcTemplate.update(...)` and for the ones with select only, I call `simpleJdbcTemplate.query(...)`. Both works fine.
Now, I have a stored procedure that do some updates first and run a select query at the end.
I was wondering how can I call this stored procedure?
I have tried the `simpleJdbcTemplate.query(...)` and got error code 0. I am not sure if having the update in my stored procedure is the problem or not.
**UPDATE**
Here is the stack when I am calling from java code:
```
PreparedStatementCallback; uncategorized SQLException for SQL [EXEC NotificationAlertHourlyReport ?, ?, ?, ?];
SQL state [null]; error code [0];
The statement did not return a result set.; nested exception is
com.microsoft.sqlserver.jdbc.SQLServerException: The statement did not return a result set.
org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback;
uncategorized SQLException for SQL [EXEC NotificationAlertHourlyReport ?, ?, ?, ?];
SQL state [null]; error code [0]; The statement did not return a result set.;
nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: The statement did not return a result set.
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslat
or.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslat
or.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslat
or.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:602)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:636)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:665)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:673)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:713)
at org.springframework.jdbc.core.simple.SimpleJdbcTemplate.query(SimpleJdbcTemplate.java:200)
at com.test.MYCLASS.myMethod(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The statement did no
t return a result set.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(S
QLServerException.java:171)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePrep
aredStatement(SQLServerPreparedStatement.java:394)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecC
md.doExecute(SQLServerPreparedStatement.java:340)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4575)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLSe
rverConnection.java:1400)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLSer
verStatement.java:179)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLS
erverStatement.java:154)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(
SQLServerPreparedStatement.java:283)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewPr
oxyPreparedStatement.java:76)
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(Jd
bcTemplate.java:643)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:
586)
... 13 more
``` | So long as the SQL Server sproc actually works without error and returns data, it should be of no consequence that it might also perform an update as well (I'd make sure that the sproc did its update first and ended with the select). If this is the case then simpleJdbcTemplate.query(...)
should do the trick. I don't know if it's checking the rowcount or not, but you might play with the SET NOCOUNT ON / OFF statement at the beginning of your sproc to see if that makes the java call happy. | In case your SQL executed by a `Statement` or any of its subclasses (viz. `PreparedStatement`, `CallableStatement`) has multiple SQL statements inside it (can be INSERT/UPDATE/DELETE/SELECT), you should use the `execute()` method on the statement to execute such SQL, then call [getMoreResults](http://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html#getMoreResults%28%29) to browse through the responses for executing each of the statements in the SQL.
E.g.
Let's say your SQL has the following statements inside it:
```
UPDATE ....
SELECT ....
```
Then your code would appear like this:
```
// Create statement or prepare the call.
boolean result = stmt.execute();
// result will be false here indicating that the first result is an update count.
int updateCount = stmt.getUpdateCount();
result = stmt.getMoreResults();
// result will be true here indicating that the next result is a ResultSet.
ResultSet rs = stmt.getResultSet();
```
BTW, even though the documentation says that the call to `getMoreResults()` closes the previously opened ResultSet objects, this appears to be driver-dependent, and it is safe to close those ResultSets explicitly to avoid resource leaks.
```
// JDK 7 provides try-with-resources that takes care of closing the resources.
try (ResultSet rs = stmt.getResultSet()) {
}
``` | Call a stored procedure that do both select and update | [
"",
"sql",
"sql-server",
"spring",
"stored-procedures",
"spring-jdbc",
""
] |
I have been using the following SQL:
```
SELECT DISTINCT NAME
FROM Events t1
LEFT JOIN UserHistory t2 ON t1.Name = t2.Event
WHERE t2.Event IS NULL
```
To select all rows from table 1 where table 2 is Null. This effectively filters out all my Table 1 data where Table 2 has data. However, I want to apply this only when a column in table 2 equals a certain value. Therefore I am looking to do a `SELECT * FROM t2 WHERE t2.ID = 1` but am unsure how this fits into this query. | ```
SELECT DISTINCT NAME
FROM Events t1
LEFT JOIN UserHistory t2 ON t1.Name = t2.Event and t2.certain_column = 1234
WHERE t2.Event IS NULL
``` | Also you can try query with NOT EXISTS:
```
SELECT DISTINCT NAME
FROM Events t1
WHERE NOT EXISTS(SELECT * FROM UserHistory t2
WHERE t1.Name = t2.Event AND t2.ID = 1)
``` | SQL Select where NOT matching specific selection | [
"",
"mysql",
"sql",
""
] |
How can I get all columns except one from a table in SQL Server?
I have a temp table. It has columns:
```
ID, Status, Code, Name, Location, Address, Title, Category, Line, date, time, UserName
```
I want all the data without the `id` column.
I want an alternative to this SQL code:
```
SELECT Status, Code, Name, Location, Address, Title, Category, Line, date, time, UserName
FROM TEMP
``` | Please try below query to select all column data without a column. Variable `@ColList` gives the column names except column `ID`:
```
DECLARE @ColList nvarchar(4000), @SQLStatment nvarchar(4000)
SET @ColList = ''
select @ColList = @ColList + Name + ' , ' from syscolumns where id = object_id('TableName') AND Name != 'ID'
SELECT @SQLStatment = 'SELECT ' + Substring(@ColList,1,len(@ColList)-1) + ' From TableName'
EXEC(@SQLStatment)
``` | Unfortunately there is no "SELECT Everything except some columns" in SQL. You have to list out the ones you need.
If this is a temp table I guess you could try to drop the ID column before selecting the result. This is not appropriate if you need it again though...
```
ALTER TABLE Temp DROP COLUMN Id
-- then
SELECT * FROM Temp
``` | How to get all column names without one column in sql table in SQL Server | [
"",
"sql",
"sql-server",
"database",
""
] |
Suppose I have a table
```
id value
------ ---------
A 123
A 422
B 441
B 986
B 674
C 648
```
I need a query which will return only those id's which have 3 or more values associated with them. So, in that case it will only return B.
Thank you. | Use the `Group By` clause with [`Having`](http://technet.microsoft.com/en-us/library/ms180199.aspx):
```
SELECT id
FROM dbo.TableName
GROUP BY ID
HAVING COUNT(*) >= 3
```
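`GROUP BY ... HAVING` behaves the same on other engines; a quick check in Python with SQLite using the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id TEXT, value INTEGER);
INSERT INTO t VALUES
  ('A', 123), ('A', 422), ('B', 441), ('B', 986), ('B', 674), ('C', 648);
""")

# Only ids with three or more associated rows survive the HAVING filter.
rows = conn.execute(
    "SELECT id FROM t GROUP BY id HAVING COUNT(*) >= 3").fetchall()
print(rows)  # [('B',)]
```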
`Demo` | In case you want to include the value, you can use window functions to find the ones with three or more rows, e.g.:
```
DECLARE @x TABLE(id CHAR(1), value INT);
INSERT @x SELECT 'A', 123;
INSERT @x SELECT 'A', 422;
INSERT @x SELECT 'B', 441;
INSERT @x SELECT 'B', 986;
INSERT @x SELECT 'B', 674;
INSERT @x SELECT 'C', 648;
;WITH x AS
(
SELECT id, value, rn = ROW_NUMBER() OVER (PARTITION BY id ORDER BY id)
FROM @x
)
SELECT id, value FROM x WHERE rn = 3;
```
And you can change the `ORDER BY` to help better determine which `value` is included. | Only return rows where the value appears more than n times | [
"",
"sql",
"sql-server",
"count",
""
] |
In my table I have `myDate` column of type `nvarchar(50)`.
The result I need is to select this date/time: `07/11/2013 11:22:07`
And I need to get `07/11/2013 11:22:07 am` from it (add `am/pm` to the original date&time).
I tried everything but get only the original data without am/pm.
This is an example from my query :
```
select convert(dateTime,myDate,100) as Date from Info
```
or
```
select convert(dateTime,myDate,0) as Date from Info
```
What am I missing? | Try this:
```
declare @date datetime
set @date='07/11/2013 11:22:07'
SELECT substring(convert(nvarchar(30), @date, 9), 0, 21)
     + ' ' + substring(convert(nvarchar(30), @date, 9), 25, 2)
``` | You can get AM/PM data using following query
```
declare @date datetime
select @date= CAST('07/11/2013 11:22:07' AS datetime)
select RIGHT ( CONVERT(VARCHAR,@date,9),2)
``` | Convert from nvarchar to DateTime with am pm? | [
"",
"sql",
"sql-server",
"datetime",
"type-conversion",
""
] |
I have a table called "contas" and another table called "cartoes". I need to find which "IDCARTAO" values do not exist in the table contas. For example: if I have one conta with cartoes.IDCARTAO = 1, the result needs to be 2 and 3.
```
SELECT cartoes.IDCARTAO
from cartoes
WHERE NOT EXISTS(SELECT *
from cartoes
LEFT OUTER JOIN contas ON (cartoes.IDCARTAO = contas.IDCARTAO)
WHERE contas.IDCARTAO = cartoes.IDCARTAO)
```
Why doesn't this SQL code work? | Are you looking for this?
```
SELECT IDCARTAO
FROM cartoes c
WHERE NOT EXISTS
(
SELECT *
FROM contas
WHERE IDCARTAO = c.IDCARTAO
)
``` | Tiny tweak: Instead of using `not exists`, try `not in`. As in...
```
SELECT ct.IDCARTAO
from cartoes ct
WHERE ct.idcartao not in
(SELECT c.idcartao from contas c)
``` | NOT EXISTS Clause | [
"",
"mysql",
"sql",
""
] |
I have a basic database with a few columns, which I will show below.
I have a program that allows the user to select Male, Female, or NULL if they want to.
Then they must select a Category such as Love or Cooking.
I tried this query but it didn't work: it displayed other genders, ignoring the WHERE Gender = "M" condition.
It can only be one gender, but different categories.
Any help would be appreciated
```
SELECT *
FROM products
WHERE Gender = "M"
AND Category = "Love"
OR Category = "Cooking"
```
Database
```
Id Name Gender Category
1 AB M Love
2 AC F Love
3 AS M Cooking
4 SF Null Cooking
``` | You need to add brackets around the OR
Something like
```
SELECT *
FROM products
WHERE Gender = "M"
AND (Category = "Love"
OR Category = "Cooking")
```
The way you have it, it would be the same as
```
SELECT *
FROM products
WHERE (Gender = "M"
AND Category = "Love")
OR Category = "Cooking"
```
Have a look at [Operator precedence](http://www.w3resource.com/sql/sql-syntax.php#PRECEDENC) | You have to wrap the categories in brackets in order to tell the query that the gender has to be "M" :
```
SELECT *
FROM products
WHERE Gender = "M"
AND (Category = "Love"
OR Category = "Cooking")
``` | Trouble getting SQL query to work | [
"",
"mysql",
"sql",
"database",
""
] |
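The precedence issue is easy to reproduce. Below is a minimal sketch using SQLite from Python with the sample rows from the question (single quotes for string literals here), showing how the unparenthesized and parenthesized versions differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (Id INTEGER, Name TEXT, Gender TEXT, Category TEXT);
INSERT INTO products VALUES
  (1, 'AB', 'M',  'Love'),
  (2, 'AC', 'F',  'Love'),
  (3, 'AS', 'M',  'Cooking'),
  (4, 'SF', NULL, 'Cooking');
""")

# Without parentheses: AND binds tighter, so the condition is
# (Gender = 'M' AND Category = 'Love') OR Category = 'Cooking'.
no_parens = conn.execute("""
    SELECT Id FROM products
    WHERE Gender = 'M' AND Category = 'Love' OR Category = 'Cooking'
    ORDER BY Id
""").fetchall()

# With parentheses: only male rows in either category.
parens = conn.execute("""
    SELECT Id FROM products
    WHERE Gender = 'M' AND (Category = 'Love' OR Category = 'Cooking')
    ORDER BY Id
""").fetchall()
print(no_parens, parens)
```

Row 4 (Gender NULL) leaks into the first result because `AND` binds tighter than `OR`; the parenthesized version returns only the male rows (1 and 3).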
I'm using SQL Server 2008 R2. I have table called EmployeeHistory with the following structure and sample data:
```
EmployeeID Date DepartmentID SupervisorID
10001 20130101 001 10009
10001 20130909 001 10019
10001 20131201 002 10018
10001 20140501 002 10017
10001 20141001 001 10015
10001 20141201 001 10014
```
Notice that Employee 10001 has changed departments twice and supervisors several times over time. What I am trying to do is list the start and end dates of this employee's employment in each department, ordered by the Date field. So the output will look like this:
```
EmployeeID DateStart DateEnd DepartmentID
10001 20130101 20131201 001
10001 20131201 20141001 002
10001 20141001 NULL 001
```
I intended to partition the data using the following query, but it failed. The Department changes from 001 to 002 and then back to 001, so obviously I cannot partition by DepartmentID... I'm sure I'm overlooking the obvious. Any help? Thank you in advance.
```
SELECT * ,ROW_NUMBER() OVER (PARTITION BY EmployeeID, DepartmentID
ORDER BY [Date]) RN FROM EmployeeHistory
``` | A bit involved. Easiest would be to refer to [this SQL Fiddle](http://sqlfiddle.com/#!6/8879fb/4) I created for you that produces the exact result. There are ways you can improve it for performance or other considerations, but this should hopefully at least be clearer than some alternatives.
The gist is, you get a canonical ranking of your data first, then use that to segment the data into groups, then find an end date for each group, then eliminate any intermediate rows. ROW\_NUMBER() and CROSS APPLY help a lot in doing it readably.
---
EDIT 2019:
The SQL Fiddle does in fact seem to be broken, for some reason, but it appears to be a problem on the SQL Fiddle site. Here's a complete version, tested just now on SQL Server 2016:
```
CREATE TABLE Source
(
EmployeeID int,
DateStarted date,
DepartmentID int
)
INSERT INTO Source
VALUES
(10001,'2013-01-01',001),
(10001,'2013-09-09',001),
(10001,'2013-12-01',002),
(10001,'2014-05-01',002),
(10001,'2014-10-01',001),
(10001,'2014-12-01',001)
SELECT *,
ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY DateStarted) AS EntryRank,
newid() as GroupKey,
CAST(NULL AS date) AS EndDate
INTO #RankedData
FROM Source
;
UPDATE #RankedData
SET GroupKey = beginDate.GroupKey
FROM #RankedData sup
CROSS APPLY
(
SELECT TOP 1 GroupKey
FROM #RankedData sub
WHERE sub.EmployeeID = sup.EmployeeID AND
sub.DepartmentID = sup.DepartmentID AND
NOT EXISTS
(
SELECT *
FROM #RankedData bot
WHERE bot.EmployeeID = sup.EmployeeID AND
bot.EntryRank BETWEEN sub.EntryRank AND sup.EntryRank AND
bot.DepartmentID <> sup.DepartmentID
)
ORDER BY DateStarted ASC
) beginDate (GroupKey);
UPDATE #RankedData
SET EndDate = nextGroup.DateStarted
FROM #RankedData sup
CROSS APPLY
(
SELECT TOP 1 DateStarted
FROM #RankedData sub
WHERE sub.EmployeeID = sup.EmployeeID AND
sub.DepartmentID <> sup.DepartmentID AND
sub.EntryRank > sup.EntryRank
ORDER BY EntryRank ASC
) nextGroup (DateStarted);
SELECT * FROM
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY GroupKey ORDER BY EntryRank ASC) AS GroupRank FROM #RankedData
) FinalRanking
WHERE GroupRank = 1
ORDER BY EntryRank;
DROP TABLE #RankedData
DROP TABLE Source
``` | I would do something like this:
```
;WITH x
AS (SELECT *,
Row_number()
OVER(
partition BY employeeid
ORDER BY datestart) rn
FROM employeehistory)
SELECT *
FROM x x1
LEFT OUTER JOIN x x2
ON x1.rn = x2.rn + 1
```
Or maybe it would be x2.rn - 1. You'll have to see. In any case, you get the idea. Once you have the table joined on itself, you can filter, group, sort, etc. to get what you need. | Trouble using ROW_NUMBER() OVER (PARTITION BY ...) | [
"",
"sql",
"sql-server",
"sql-server-2008",
"row-number",
"gaps-and-islands",
""
] |
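This is a classic gaps-and-islands problem, and the difference-of-row-numbers trick is a common alternative to the answers above. Below is a minimal sketch using SQLite from Python (window functions require SQLite 3.25+), with the sample data from the question, recovering the start date of each department stint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE EmployeeHistory (EmployeeID INT, Date TEXT, DepartmentID INT);
INSERT INTO EmployeeHistory VALUES
  (10001, '20130101', 1), (10001, '20130909', 1),
  (10001, '20131201', 2), (10001, '20140501', 2),
  (10001, '20141001', 1), (10001, '20141201', 1);
""")

# The difference of the two row numbers is constant within each
# consecutive run (island) of the same department.
rows = conn.execute("""
    SELECT EmployeeID, MIN(Date) AS DateStart, DepartmentID
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY Date)
             - ROW_NUMBER() OVER (PARTITION BY EmployeeID, DepartmentID ORDER BY Date) AS grp
        FROM EmployeeHistory
    ) x
    GROUP BY EmployeeID, DepartmentID, grp
    ORDER BY DateStart
""").fetchall()
print(rows)
```

This yields the three stints (001, 002, 001 again) with their start dates. The `DateEnd` column from the question can then be derived with `LEAD(DateStart) OVER (PARTITION BY EmployeeID ORDER BY DateStart)` over this result.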
I am a newbie and I am using **mysql**. I want to create a table **product** from an existing table **customer**, but I want only selected columns of the **customer** table in the new table, and with the same structure. E.g. in the **customer** table there is a column named **DATE** which has **default as CURRENT\_TIMESTAMP** and **ON UPDATE CURRENT\_TIMESTAMP**, so I want that definition to be carried over to the new table as well.
I looked at other answers but they do not copy the structure. Please help me.
1. use [`CREATE TABLE new_table LIKE old_table`](http://dev.mysql.com/doc/refman/5.5/en/create-table.html) syntax to create a new table with the same structure
2. use [`ALTER TABLE new_table DROP COLUMN unnecessary_column`](http://dev.mysql.com/doc/refman/5.5/en/alter-table.html) syntax to drop unnecessary columns in the new table
3. use [`INSERT INTO new_table ... SELECT ... FROM old_table`](http://dev.mysql.com/doc/refman/5.5/en/insert-select.html) syntax to copy data
Let's say your `customer` table looks like
```
CREATE TABLE customer
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(5),
created DATETIME,
modified TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
other_column INT
);
```
And you want to create a new table `customer_new` but without `other_column`
```
CREATE TABLE customer_new LIKE customer;
ALTER TABLE customer_new DROP COLUMN other_column;
INSERT INTO customer_new (id, name, created, modified)
SELECT id, name, created, modified
FROM customer;
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/3f164/2)** demo | USE THIS
```
CREATE TABLE new_tab SELECT NAME, DATE FROM old_tab;
```
Please change the column/table names as per your requirements. | Create a table in Mysql from selected columns of existing table and with same structure | [
"",
"mysql",
"sql",
""
] |
I need to figure out how to find a certain group of records using T-SQL and I'm having trouble figuring out how I would need to create the `WHERE` clause to do this.
I have a SQL 2008 R2 system that I'm working with, and in this database there are a couple of tables. One contains personnel records, and another contains addresses. The addresses relate to the personnel records by a foreign key relationship. So for example, to get a list of all personnel and all of their associated addresses (a single person could have multiple addresses) I could write something like this:
```
SELECT id, name FROM personnel p
INNER JOIN address a
ON p.id = a.personnelid
```
However, each address has a column called `isprimary`, that is either 0 or 1. What I need to do is figure out how to find all personnel who do not have an associated address with `isprimary` set to 1. Or records that have no primary address.
Currently my thought is to build a temporary table with personnel who have addresses that aren't marked as primary. Then cycle through those and build a subset that have a primary address.
Then subtract the Personnel With Primary table from the results of Personnel With Non-Primary and I should have my list. However, I'm thinking that there has to be a more elegant way of doing this. Any ideas? | Try this, it should get all Personnel rows with no matching primary address:
```
SELECT *
FROM Personnel p
WHERE NOT EXISTS
(SELECT * FROM Address a WHERE a.personnelId = p.id AND a.isprimary = 1)
``` | This ends up being a [left anti semi join](http://sqlity.net/en/1360/a-join-a-day-the-left-anti-semi-join/) pattern
and can be written like this:
```
SELECT id, name FROM personnel p
LEFT OUTER JOIN address a
ON p.id = a.personnelid
AND a.isprimary = 1
WHERE a.personnelId IS NULL
```
It can be interesting to test different approaches, because the query plans are often not the same.
"",
"sql",
"sql-server",
"t-sql",
""
] |
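Both answers are equivalent ways of writing a left anti semi join, and it is worth checking that they agree on the tricky case of a person whose only address is non-primary. Below is a minimal sketch using SQLite from Python with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE personnel (id INT, name TEXT);
CREATE TABLE address (personnelid INT, isprimary INT);
INSERT INTO personnel VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cam');
INSERT INTO address VALUES (1, 1), (2, 0);  -- Cam has no address at all
""")

# NOT EXISTS form, as in the accepted answer.
not_exists = conn.execute("""
    SELECT p.id FROM personnel p
    WHERE NOT EXISTS (SELECT * FROM address a
                      WHERE a.personnelid = p.id AND a.isprimary = 1)
    ORDER BY p.id
""").fetchall()

# LEFT JOIN / IS NULL form, as in the other answer.
anti_join = conn.execute("""
    SELECT p.id FROM personnel p
    LEFT JOIN address a ON a.personnelid = p.id AND a.isprimary = 1
    WHERE a.personnelid IS NULL
    ORDER BY p.id
""").fetchall()
print(not_exists, anti_join)
```

Bob (only a non-primary address) and Cam (no address at all) appear in both results. Note that in the join form the `isprimary = 1` condition must live in the `ON` clause, not in the outer `WHERE`, or the left join degenerates into an inner join.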
I am trying to get the latest `PatientData.DateVal` and `PatientData.DecVal` before `PatientTreatment.Startdate` for each row in `PatientTreatment`. E.g. I want the latest date and value for a specific type of patient data before each treatment, for every patient.
I have used the script below. It works OK, but it is no good for me because it only lists each patient once. I want to get the latest value for each treatment.
```
SELECT
PatientTreatment.PatientId
,PatientTreatment.StartDate
,PatientData.FK_PatientId
,PatientData.DateVal
,PatientData.DecVal
FROM
PatientTreatment
,PatientData
WHERE
PatientTreatment.PatientId = PatientData.FK_PatientId
AND PatientData.FK_ParamSettingId = 68
AND PatientData.DateVal =
(
SELECT
MAX(DateVal)
FROM
PatientData
WHERE
PatientData.FK_PatientId = PatientTreatment.PatientId
AND DateVal < PatientTreatment.StartDate
)
```
My table `PatientData` has the following columns (simplified):
```
---------------------------------------------------------------
| Id | FK_PatientID | FK_ParamsettingId | DateVal | DecVal |
---------------------------------------------------------------
| 1 | 247 | 69 | 2010-09-11 | 1 |
| 2 | 514 | 68 | 2011-11-21 | 0 |
| 3 | 20291 | 69 | 2012-11-21 | 2.4 |
| 4 | 20291 | 69 | 2013-12-21 | 3 |
| 5 | 20291 | 69 | 2011-03-03 | 0 |
| 6 | 20221 | 68 | 2012-03-04 | 3 |
| 7 | 20291 | 68 | 2011-06-06 | 2 |
| 10 | 234 | 69 | 2011-03-07 | 4 |
| 11 | 444 | 69 | 2012-04-05 | 1.1 |
| 12 | 212 | 69 | 2012-12-04 | 4.2 |
| 13 | 21342 | 69 | 2011-11-03 | 5.5 |
| 14 | 223 | 69 | 2013-11-01 | 3.3 |
---------------------------------------------------------------
```
And my table `PatientTreatment` has the following columns (simplified):
```
--------------------------
| PatientID | StartDate |
--------------------------
| 247 | 2010-09-11 |
| 514 | 2011-11-21 |
| 20291 | 2012-11-21 |
| 201 | 2013-12-21 |
| 2291 | 2011-03-03 |
| 221 | 2012-03-04 |
| 20291 | 2011-06-06 |
| 234 | 2011-03-07 |
| 80998 | 2012-04-05 |
| 212 | 2012-12-04 |
| 21342 | 2011-11-03 |
| 223 | 2013-11-01 |
--------------------------
```
I hope you guys can help me out.
Kind regards
Doggabyte
EDIT: I want a output which contains the following columns: PatientId, Startdate, LastDateValBeforeStartdate, LastDecValBeforeStartdate | You can use window functions to do this:
```
select
x.PatentId,
x.StartDate,
x.DateVal,
x.DecVal
from (
select
t.PatientId,
t.StartDate,
p.DateVal,
p.DecVal,
row_number() over (
partition by t.patientid, t.StartDate
order by p.DateVal Desc
) rn
from
PatientTreatment t
inner join
PatientData p
on t.PatientId = p.FK_PatientId
where
p.FK_ParamSettingId = 68 And
t.StartDate > p.DateVal
) x
where
x.rn = 1
``` | Similar to the windowed example above but with a CTE:
```
;WITH OrderedPatientData
AS
(
SELECT PatientID,
StartDate,
DateVal AS LastDateValBeforeStartdate,
           DecVal AS LastDecValBeforeStartdate,
ROW_NUMBER() OVER (Partition BY PatientID ORDER BY DateVal DESC) RowNum
FROM PatientData PD
INNER JOIN PatientTreatment PT
ON PT.PatientID = PD.FK_PatientID AND PD.DateVal < PT.StartDate
WHERE PD.FK_ParamSettingId = 68
)
SELECT PatientID, StartDate, LastDateValBeforeStartdate, LastDecValBeforeStartdate
FROM OrderedPatientData OPD
WHERE RowNum = 1
``` | SQL Server get latest value in table x before date for not distinct rows in table y | [
"",
"sql",
"sql-server",
"max",
"row-number",
""
] |
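The accepted `ROW_NUMBER` approach can be checked on a small example. Below is a minimal sketch using SQLite from Python with invented sample rows (one patient, two treatments, and one row with the wrong parameter id that must be ignored):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PatientTreatment (PatientId INT, StartDate TEXT);
CREATE TABLE PatientData (FK_PatientId INT, FK_ParamSettingId INT,
                          DateVal TEXT, DecVal REAL);
INSERT INTO PatientTreatment VALUES (1, '2012-01-01'), (1, '2013-01-01');
INSERT INTO PatientData VALUES
  (1, 68, '2011-06-01', 2.0),
  (1, 68, '2011-11-01', 3.0),   -- latest before the first treatment
  (1, 68, '2012-06-01', 4.0),   -- latest before the second treatment
  (1, 69, '2012-12-01', 9.9);   -- wrong parameter, must be ignored
""")

# Rank candidate rows per (patient, treatment) and keep the newest one.
rows = conn.execute("""
    SELECT PatientId, StartDate, DateVal, DecVal
    FROM (
        SELECT t.PatientId, t.StartDate, d.DateVal, d.DecVal,
               ROW_NUMBER() OVER (PARTITION BY t.PatientId, t.StartDate
                                  ORDER BY d.DateVal DESC) AS rn
        FROM PatientTreatment t
        JOIN PatientData d
          ON d.FK_PatientId = t.PatientId
         AND d.FK_ParamSettingId = 68
         AND d.DateVal < t.StartDate
    ) x
    WHERE rn = 1
    ORDER BY StartDate
""").fetchall()
print(rows)
```

Partitioning by both `PatientId` and `StartDate` is what gives one "latest before" row per treatment instead of one per patient, which is exactly the problem with the original correlated-MAX query.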
I need to count the number of duplicates in a column, which works when done like this:
```
SELECT CorrelationId, Count(CorrelationId) as total
FROM VoipDetailsView
WHERE ResponseCode = 200
GROUP BY CorrelationId
```
However, when I try to add other columns to the result
```
SELECT SessionIdTime, CorrelationId, Count(CorrelationId) as total
FROM VoipDetailsView
WHERE ResponseCode = 200
GROUP BY CorrelationId, SessionIdTime
ORDER BY SessionIdTime
```
All the counts are now 0 or 1 even though (or possibly because) the duplicate values are being displayed now. I don't mind the repetition but I need the last column to contain the total number of duplicate CorrelationIds for every one selected. (So for instance if the value exists twice, the count would be 2 for both of those rows)
What kind of query do I need to do this? | You need to separate the aggregation of CorrelationId into a separate query and join that with your table to fetch other columns, something like:
```
SELECT v.SessionIdTime, v.CorrelationId, total
From VoipDetailsView v inner join
(SELECT CorrelationId, Count(CorrelationId) as total
FROM VoipDetailsView
WHERE ResponseCode = 200
GROUP BY CorrelationId) agg
on v.CorrelationId = agg.CorrelationId
``` | How about using an aggregate function on the `SessionIdTime` column like `min()`
```
SELECT min(SessionIdTime), CorrelationId, Count(CorrelationId) as total
FROM VoipDetailsView
WHERE ResponseCode = 200
GROUP BY CorrelationId
ORDER BY min(SessionIdTime)
``` | Adding column to group by clause messes up counting | [
"",
"sql",
"sql-server",
""
] |
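The accepted answer's shape (join the detail rows back onto a grouped subquery) can be sketched in SQLite from Python with invented rows. Note the outer `WHERE ResponseCode = 200`, which the joined aggregate alone does not enforce:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE VoipDetailsView (SessionIdTime TEXT, CorrelationId TEXT, ResponseCode INT);
INSERT INTO VoipDetailsView VALUES
  ('t1', 'A', 200), ('t2', 'A', 200), ('t3', 'B', 200), ('t4', 'B', 404);
""")

# Each detail row carries the duplicate count computed by the subquery.
rows = conn.execute("""
    SELECT v.SessionIdTime, v.CorrelationId, agg.total
    FROM VoipDetailsView v
    JOIN (SELECT CorrelationId, COUNT(*) AS total
          FROM VoipDetailsView
          WHERE ResponseCode = 200
          GROUP BY CorrelationId) agg
      ON v.CorrelationId = agg.CorrelationId
    WHERE v.ResponseCode = 200
    ORDER BY v.SessionIdTime
""").fetchall()
print(rows)
```

Each qualifying row carries the full duplicate count for its `CorrelationId`, so the two rows for `A` both show 2, which is the behavior the asker wanted.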
As the title suggests, I have a problem with GETDATE in a DEFAULT constraint on some tables in SQL Server 2012.
I have got two tables like below (A and B):
```
CREATE TABLE [dbo].[TABLE_A_OR_B] (
[TABLE_A_OR_B_PK] BIGINT IDENTITY (1, 1) NOT NULL,
[CREATE_DATETIME] DATETIME2 (7) CONSTRAINT [DF_TABLE_A_OR_B_CREATE_DATETIME] DEFAULT (getdate()) NOT NULL,
[CREATE_USER] VARCHAR (100) CONSTRAINT [DF_TABLE_A_OR_B_CREATE_USER] DEFAULT (suser_sname()) NOT NULL,
...
CONSTRAINT [PK_TABLE_A_OR_B] PRIMARY KEY CLUSTERED ([TABLE_A_OR_B_PK] ASC)
);
```
And I have got a procedure where I'm doing two inserts - first into table A and second into table B - without the CREATE\_DATETIME column. Between them is a lot of other work.
Now guess what ends up in the CREATE\_DATETIME column in tables A and B?
Two times - maybe after 1,000,000 records, never before - the datetime in table A was greater than in table B for records from the same stored procedure execution (verified), like:
```
row in A: 2013-11-07 00:02:22.7000000
row in B: 2013-11-07 00:02:22.6970000
```
Can you give me some clues why?
Answers to comments:
1. No triggers.
2. Not 1,000,000 records at a time - it is the total count of records in the table at the moment the error first appeared. This information is for statistical analysis - today the error occurred xx thousands of records after the last error - so it's very random.
3. Yes, statements are 100% executed in this order.
4. No transaction or single - two different processes - same error.
5. Sure DATETIME2.
Important! Someone told me that GETDATE has an accuracy of about 3.33 milliseconds, so maybe GETDATE rounds milliseconds in such a way that two readings less than 3 ms apart can yield two different approximations? | Changing GETDATE to SYSDATETIME in the constraints seems to solve the problem.
What could cause such changes? The obvious are a manual adjustment, or if the server is set to automatically synchronize its clock with an external source - such as another machine on your domain or via NTP. There may be other possible causes also. | GETDATE in DEFAULT constraint on some tables in SQL Server 2012 | [
"",
"sql",
"sql-server",
"constraints",
"getdate",
""
] |
Is it possible to use an array as the source for a MERGE statement?
E.g. I have the MERGE statement below, and when I compile it, I get an error.
How can I use MERGE with an array as the source table?
```
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
table of records
```
Procedure code
```
CREATE or REPLACE PROCEDURE my_proc (varray IN my_array)
AS
BEGIN
FORALL i IN varray.first..varray.last
MERGE INTO tab t
USING dual
ON ( t.proj_id = varray(i).proj_id)
WHEN MATCHED THEN
UPDATE set
proj_title = varray (i).proj_title
WHERE proj_id = varray (i).proj_id
WHEN NOT MATCHED THEN
insert (proj_id,proj_title)
values (varray (i).proj_id,varray (i).proj_title);
``` | There is a restriction in Oracle 10g - you can't access individual fields of records in `FORALL` statement. You could do it if you were using Oracle 11g.
There are workarounds, however, and I recommend the following article that proposes a few of them: [PLS-00436 in 10g - Workaround](http://www.oracle-developer.net/display.php?id=410). | The problem here is that you are referring to the same collection in your `SET` clause and `WHERE` clause. See [Oracle Documentation for Forall statement](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/forall_statement.htm), go to the Restrictions section, second bullet point.
I would suggest you rename your `varray` collection as something different, as it is a keyword. I would also suggest you separate this collection into multiple scalar collections (varrays or nested tables having only one column) for each column and then use these collections in your forall statement. | Merge statement array as source table | [
"",
"sql",
"oracle",
"plsql",
"merge",
"oracle10g",
""
] |
Tables:
```
Country
-------
PK CountryID
Name
City
-------
PK CityID
FK CountryID
Name
Airport
--------
PK AirportID
FK CityID
Name
```
My task is to select names of countries that have no airport.
I can imagine only one solution with EXCEPT (or MINUS)
```
SELECT Country.Name
FROM Country EXCEPT (SELECT DISTINCT Country.Name FROM Country, City, Airport
WHERE City.CountryID = Country.CountryID AND Airport.CityID = City.CityID);
```
But is it possible to avoid EXCEPT and use something like IS NULL instead? | If you need to write this query with `IS NULL`, try the following query:
[SQLFiddle demo](http://sqlfiddle.com/#!2/c56dc/4)
```
select ct.CountryId,max(ct.Name) from Country ct
left join City c on ct.CountryId=c.CountryId
left join Airport a on a.CityId=c.CityID
group by ct.CountryId
HAVING max(a.AirportID) IS NULL
``` | ```
SELECT cn.CountryID
FROM Country cn
LEFT JOIN City ct ON cn.CountryID = ct.CountryID
LEFT JOIN Airport ar on ar.CityID=ct.CityID
WHERE ar.AirportID is null
``` | SQL query - IS NULL | [
"",
"sql",
""
] |
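Both answers hinge on the same idea: after the chain of left joins, a country with no airport has NULL in the airport columns. A minimal sketch using SQLite from Python with invented rows (one country with an airport, one with only a city, one with nothing):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Country (CountryID INT, Name TEXT);
CREATE TABLE City    (CityID INT, CountryID INT, Name TEXT);
CREATE TABLE Airport (AirportID INT, CityID INT, Name TEXT);
INSERT INTO Country VALUES (1, 'A'), (2, 'B'), (3, 'C');
INSERT INTO City VALUES (10, 1, 'a1'), (20, 2, 'b1');
INSERT INTO Airport VALUES (100, 10, 'A-Intl');  -- only country A has an airport
""")

# Countries whose join chain ends in NULL have no airport.
rows = conn.execute("""
    SELECT cn.Name
    FROM Country cn
    LEFT JOIN City ct    ON cn.CountryID = ct.CountryID
    LEFT JOIN Airport ar ON ar.CityID = ct.CityID
    WHERE ar.AirportID IS NULL
    ORDER BY cn.Name
""").fetchall()
print(rows)  # [('B',), ('C',)]
```

Country B (city but no airport) and C (no city at all) both come back, matching what the EXCEPT version would return.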
I have the following queries
1. DELETE FROM A WHERE Id IN (SELECT Id FROM B)
2. DELETE FROM B
I want to make sure that the first statement completes before the second is executed, since 1) depends on 2)
I want to know if I execute these queries just like this, does SQL take care of completing the first transaction before starting the second?
Or is there a way I can make sure that the second transaction starts only after the first is complete?
Any help would be appreciated | You can execute queries in succession by separating them with a semicolon `;`. More details are in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.0/en/entering-queries.html).
Simply do:
```
DELETE FROM A WHERE Id IN (SELECT Id FROM B); DELETE FROM B;
```
Based on your requirement; this does exactly what you asked for based on the below example:
```
mysql> select sleep(5); show databases;
+----------+
| sleep(5) |
+----------+
| 0 |
+----------+
1 row in set (5.00 sec)
+--------------------+
| Database |
+--------------------+
| ... |
+--------------------+
9 rows in set (0.01 sec)
```
You can do this with `mysql -e` command and virtually any mysql library (such as the one with php). | If the statements are in the same batch it's guaranteed.
But you might want to wrap them in a transaction to ensure that either both happen or neither happen (get rolled back). | How to execute two DELETE queries one after another | [
"",
"mysql",
"sql",
""
] |
I am new to databases and I am really stuck! Please give me a hand! I have no idea where I made a mistake...
I have 2 tables, patient and caretaker.
They both have lastname and firstname columns.
I need to retrieve lastname and firstname from both of them, and I wrote the following query:
```
SELECT firstname
FROM `mortenu8`.`patient`, `caretaker`
where caretaker.firstname = patient.firstname;
```
But it says:
Error Code: 1052. Column 'firstname' in field list is ambiguous 0.034 sec
Do you have any idea why? I will really appreciate your help...
Thanks! | Use Database Objects.
When you specify the firstname in the columns list where both the tables have the same column name, data base engine cannot recognize the first name of which table exactly are you trying to retrieve!!
Many of the above answers says the same thing.
Just to reiterate the same,
```
SELECT patient.firstname, ctaker.firstname
FROM mortenu8.patient patient, caretaker ctaker
WHERE ctaker.firstname = patient.firstname;
``` | Assuming your query of join is working you can try this
```
SELECT patient.firstname,caretaker.firstname
FROM mortenu8.patient, caretaker
where caretaker.firstname = patient.firstname
```
OR
```
SELECT caretaker.firstname
FROM mortenu8.patient, caretaker
where caretaker.firstname = patient.firstname
``` | Retrieving names from 2 tables with same column names | [
"",
"mysql",
"sql",
""
] |
I recently set up a MYSQL database connected to a form filled with checkboxes. If the checkbox was selected, it would insert into the associated column a value of '1'; otherwise, it would receive a value of '0'.
I'd like to eventually look at aggregate data from this form, and was wondering if there was any way I could use MYSQL to get a number for each column which would be equal to the number of rows that had a value of '1'.
I've tried variations of:
```
select count(*) from POLLDATA group by column_name
```
which was unsuccessful, and nothing else I can think of seems to make sense (admittedly, I'm not all too experienced in SQL).
I'd really like to avoid:
```
select count(*) from POLLDATA where column_1='1'
```
for each column (there close to 100 of them).
Is there any way to do this besides typing out a select count(\*) statement for each column?
EDIT:
If it helps, the columns are 'artist1', 'artist2', ....'artist88', 'gender', 'age', 'city', 'state'. As I tried to explain below, I was hoping that I'd be able to do something like:
```
select sum(EACH_COLUMN) from POLLDATA where gender='Male', city='New York City';
```
(obviously EACH\_COLUMN is bogus) | If you are just looking for the sheer number of 1's in the columns, you could try…
```
select sum(col1), sum(col2), sum(col3) from POLLDATA
``` | ```
SELECT SUM(CASE
WHEN t.your_column = '1' THEN 1
ELSE 0
END) AS OneCount,
SUM(CASE
WHEN t.your_column='0' THEN 1
ELSE 0
END) AS ZeroCount
FROM YOUR_TABLE t
``` | MYSQL get count of each column where it equals a specific value | [
"",
"mysql",
"sql",
"count",
""
] |
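The filtered per-column sums the asker describes in the edit can be sketched directly. A minimal SQLite-from-Python example with a couple of invented artist columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE POLLDATA (artist1 INT, artist2 INT, gender TEXT, city TEXT);
INSERT INTO POLLDATA VALUES
  (1, 0, 'Male',   'New York City'),
  (1, 1, 'Male',   'New York City'),
  (0, 1, 'Female', 'Boston');
""")

# SUM over a 0/1 column counts the rows holding 1, within the WHERE filter.
totals = conn.execute("""
    SELECT SUM(artist1), SUM(artist2) FROM POLLDATA
    WHERE gender = 'Male' AND city = 'New York City'
""").fetchone()
print(totals)  # (2, 1)
```

Because the columns hold only 0 and 1, `SUM(column)` is exactly the count of rows with a 1, and the `WHERE` clause restricts it to the chosen demographic. The column list still has to be typed out once, but it can be generated from the schema rather than written by hand.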
I wrote a SELECT query to find out how many records will be impacted with my UPDATE query.
The SELECT and UPDATE returned different record counts.
Here is my SELECT query:
```
SELECT *
FROM T1
JOIN T2 on T1.ID = T2.ID
WHERE T1.Name IS NULL
AND T2.Status = 'happy'
```
Here is my UPDATE query:
```
UPDATE T1
SET T1.Name = T2.Name
FROM T1
JOIN T2 on T1.ID = T2.ID
WHERE T1.Name IS NULL
AND T2.Status = 'happy'
```
My SELECT returns 19K records, and my UPDATE affects 12K records. Please note that the WHERE clause is exactly the same for both the SELECT and UPDATE.
What is causing the discrepancy in records counts between the SELECT and UPDATE queries?
Can you please help me understand what is happening here?
Thanks in advance!! | That can happen on a one to many join when you're updating the one side. In your case, it looks like there are more than one `T2` row for some of your `T1` rows, and the server will return the `T1` row as many times as there is matches in the `T2` table.
Check if this matches your update count:
```
SELECT count(distinct T1.ID)
FROM T1
JOIN T2 on T1.ID = T2.ID
WHERE T1.Name IS NULL
AND T2.Status = 'happy'
``` | One possible reason: You have instances of `ID` where more than one `T2` record exists for the corresponding `T1` record | Why SELECT and UPDATE Return Different Record Counts/Affected | [
"",
"sql",
"sql-update",
""
] |
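The fan-out the accepted answer describes is easy to demonstrate. Below is a minimal sketch using SQLite from Python, where two `T2` rows share one `T1.ID`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T1 (ID INT, Name TEXT);
CREATE TABLE T2 (ID INT, Name TEXT, Status TEXT);
INSERT INTO T1 VALUES (1, NULL), (2, NULL);
-- two 'happy' T2 rows share ID 1, so the join fans out
INSERT INTO T2 VALUES (1, 'x', 'happy'), (1, 'y', 'happy'), (2, 'z', 'happy');
""")

# The SELECT counts join rows, which multiply on the one-to-many match.
join_rows = conn.execute("""
    SELECT COUNT(*) FROM T1 JOIN T2 ON T1.ID = T2.ID
    WHERE T1.Name IS NULL AND T2.Status = 'happy'
""").fetchone()[0]

# An UPDATE touches each T1 row at most once, so this is the UPDATE's count.
distinct_ids = conn.execute("""
    SELECT COUNT(DISTINCT T1.ID) FROM T1 JOIN T2 ON T1.ID = T2.ID
    WHERE T1.Name IS NULL AND T2.Status = 'happy'
""").fetchone()[0]
print(join_rows, distinct_ids)  # 3 2
```

The join returns 3 rows but only 2 distinct `T1` rows exist to be updated, which is exactly the kind of discrepancy the question reports (19K vs 12K).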
In Excel I make an Analysis Services connection to a data cube. I would like to be able to show a user how current the data is by showing them when the last cube processing time occurred. Making an analysis services connection to the cube in SQL Server Management Studio (SSMS), I can right click on the cube and see the property of the last cube processing time exists. I can also create an MDX query as follows to return the last process time:
```
SELECT LAST_DATA_UPDATE FROM $system.mdschema_cubes
```
I would like to be able to retrieve this same information in Excel whether it is via VBA or some other method as long as it can be done in Excel without some external tool. | I actually found a way to do it in Excel without having to create any views or new measures. In Excel 2013, **PowerPivot** allows you to create your own custom MDX queries against a cube. You can open PowerPivot, make the connection to your cube, paste in the MDX query I used in SSMS to return the cube process time,
```
SELECT LAST_DATA_UPDATE FROM $system.mdschema_cubes
```
and then export this to a pivot table. I did not need to modify anything outside of Excel. Here is a [document](http://office.microsoft.com/en-us/excel-help/analysis-services-mdx-query-designer-power-pivot-HA102836091.aspx) with step by step procedures. | I had the same need on a project, to show cube last processed date/time in Excel. This may be a little hokie but it definitely works. I added a query against my database in my DSV (technically I made a view since all of my source data came from views rather than named queries or tables) that was just
```
Select CURRENT_TIMESTAMP as CubeLastRefreshed
```
I made it a dimension that is related to nothing. Then users can pull it into Excel. You can make a pivot table with just that in it. Or you can write a cube function in Excel to show it at the bottom of the report. It would look something like
```
=cubemember("Cube","[Cube Process Date].[Cube Last Processed].firstchild")
```
Just make sure to pay attention to when this dimension gets processed. If you only process certain dimensions or measures on certain days, make sure processing of this dimension is included in the correct places. | Get SSAS cube last process time | [
"",
"sql",
"excel",
"vba",
"ssas",
"olap-cube",
""
] |
I'm trying to figure out how to determine the total sales for an employee using MySQL. The DB has 4 tables in it that will help determine the total sales. I was able to create a Query that selects all the necessary tables to calculate the Sales Total.
Query:
```
SELECT employees.eno, employees.ename, orders.ono, orders.eno,
parts.pno, parts.price,odetails.ono, odetails.pno, odetails.qty
FROM test.employees, test.parts, test.orders, test.odetails
WHERE employees.eno = orders.eno AND parts.pno = odetails.pno
```
This produces a table that displays the employee's name, the item, and the price it sold at. I'm not sure where to go from here. Perhaps a stored procedure would help; then I could call it from a Java program to print out the results. Just really confused here. Any help would be appreciated. Thank you!
```
SELECT employees.eno, employees.ename, SUM(parts.price * odetails.qty) as TotalSales
FROM test.employees
INNER JOIN test.orders
    ON employees.eno = orders.eno
INNER JOIN test.odetails
ON orders.ono = odetails.ono
INNER JOIN test.parts
ON odetails.pno = parts.pno
GROUP BY employees.eno, employees.ename
``` | The following *should* work.
```
SELECT employees.ename, COUNT(employees.ename), (COUNT(employees.ename) * parts.price) FROM test.employees INNER JOIN (orders, parts) ON (employees.eno = orders.eno AND parts.pno = odetails.pno)
``` | Calculating Total Sales from Multiple MySQL Tables | [
"",
"mysql",
"sql",
""
] |
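The accepted query can be exercised end to end on a tiny dataset. Below is a minimal sketch using SQLite from Python with invented rows (note that `odetails` joins on the order number `ono`, which the question's original WHERE clause was missing):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (eno INT, ename TEXT);
CREATE TABLE orders    (ono INT, eno INT);
CREATE TABLE odetails  (ono INT, pno INT, qty INT);
CREATE TABLE parts     (pno INT, price REAL);
INSERT INTO employees VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO orders VALUES (10, 1), (11, 2);
INSERT INTO odetails VALUES (10, 100, 2), (10, 101, 1), (11, 100, 3);
INSERT INTO parts VALUES (100, 5.0), (101, 7.0);
""")

# Chain employee -> order -> order line -> part, then sum price * quantity.
rows = conn.execute("""
    SELECT e.eno, e.ename, SUM(p.price * d.qty) AS TotalSales
    FROM employees e
    JOIN orders   o ON o.eno = e.eno
    JOIN odetails d ON d.ono = o.ono
    JOIN parts    p ON p.pno = d.pno
    GROUP BY e.eno, e.ename
    ORDER BY e.eno
""").fetchall()
print(rows)
```

Ann's total is 2*5 + 1*7 = 17 and Bob's is 3*5 = 15, one row per employee, which is what the GROUP BY delivers.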
I have a table with dates that all happened in the month November.
I wrote this query
```
select id,numbers_from,created_date,amount_numbers,SMS_text
from Test_Table
where
created_date <= '2013-04-12'
```
This query should return everything that happened in month 11 (November) because it happened **before** the date '2013-04-12' (in December)
But it's only returning available dates that happened in days lesser than 04 (2013-**04**-12)
Could it be that it's only comparing the day part? and not the whole date?
How to fix this?
Created\_date is of type **date**
Date format is by default **yyyy-dd-MM** | Instead of '2013-04-12' whose meaning depends on the local culture, use '20130412' which is recognized as the culture invariant format.
If you want to compare with December 4th, you should write '20131204'. If you want to compare with April 12th, you should write '20130412'.
The article [Write International Transact-SQL Statements](http://technet.microsoft.com/en-us/library/ms191307.aspx) from SQL Server's documentation explains how to write statements that are culture invariant:
> Applications that use other APIs, or Transact-SQL scripts, stored procedures, and triggers, should use the unseparated numeric strings. For example, yyyymmdd as 19980924.
**EDIT**
Since you are using ADO, the best option is to parameterize the query and pass the date value as a date parameter. This way you avoid the format issue entirely and gain the performance benefits of parameterized queries as well.
**UPDATE**
To use the ISO 8601 format in a literal, all elements must be specified. To quote [from the ISO 8601 section of datetime's documentation](https://msdn.microsoft.com/en-us/library/ms187819.aspx):
> To use the ISO 8601 format, you must specify each element in the format. This also includes the T, the colons (:), and the period (.) that are shown in the format.
>
> ... the fraction of second component is optional. The time component is specified in the 24-hour format. | Try like this
```
select id,numbers_from,created_date,amount_numbers,SMS_text
from Test_Table
where
created_date <= '2013-12-04'
``` | Query comparing dates in SQL | [
"",
"sql",
"sql-server",
"date",
"compare",
""
] |
I have a SELECT query with JOIN statements, which I use to get values by joining tables.
Here's my SELECT query:
```
select
case when p1.Value2 is not null then p1.Value3 end as fix1,
case when p1.Value2 is not null then p2.Value3 end as fix2,
case when p1.Value2 is not null then p3.Value3 end as fix3,
case when p1.Value2 is not null then p4.Value3 end as fix4,
case when p1.Value2 is not null then p5.Value3 end as fix5,
case when p1.Value2 is not null then p6.Value3 end as fix6
from tc t
left join (select * from Parameter where paramtype='fix') as p1 on p1.Value2 = t.FIX
left join (select * from Parameter where paramtype='fix') as p2 on p2.Value2 = t.FIX1
left join (select * from Parameter where paramtype='fix') as p3 on p3.Value2 = t.FIX2
left join (select * from Parameter where paramtype='fix') as p4 on p4.Value2 = t.FIX3
left join (select * from Parameter where paramtype='fix') as p5 on p5.Value2 = t.FIX4
left join (select * from Parameter where paramtype='fix') as p6 on p6.Value2 = t.FIX5
where CUST_ACCT ='10053996'
```
I thought it was too complicated, so I changed my code to this:
```
select
fix +' '+(select Value3 from Parameter where ParamType = 'fix' and Value2 = fix) as fix,
FIX1 +' '+(select Value3 from Parameter where ParamType = 'fix' and Value2 = FIX1) as FIX1,
FIX2 +' '+(select Value3 from Parameter where ParamType = 'fix' and Value2 = FIX2) as FIX2,
FIX3 +' '+(select Value3 from Parameter where ParamType = 'fix' and Value2 = FIX3) as FIX3,
FIX4 +' '+(select Value3 from Parameter where ParamType = 'fix' and Value2 = FIX4) as FIX4,
FIX5 +' '+(select Value3 from Parameter where ParamType = 'fix' and Value2 = FIX5) as FIX5
from tc where CUST_ACCT ='10053996'
```
Is it possible to simplify my code? Or, to do something like this, should I create a SELECT statement like that? | How about a solution like this?
```
SELECT /*extra fields from tc ==>>*/ x, y /*<<==*/
, FIX, FIX1, FIX2, FIX3, FIX4, FIX5
FROM (
SELECT fields, P.Value3, /*extra fields from tc ==>>*/ x, y /*<<==*/
FROM (
SELECT *
FROM tc
WHERE CUST_ACCT ='10053996'
) T
UNPIVOT (
unpvt FOR fields in (FIX,FIX1,FIX2,FIX3,FIX4,FIX5)
) UPV
LEFT JOIN Parameter P ON P.paramtype='fix' AND P.Value2 = UPV.unpvt
) T
PIVOT (
MIN(Value3) FOR fields in (FIX,FIX1,FIX2,FIX3,FIX4,FIX5)
) AS PV
``` | You can replace the `CASE` statement with `IsNull(expression, valueIfNull)` in your first statement.
```
select
IsNull(p1.Value2 ,p1.Value3) as fix1,
IsNull(p1.Value2 ,p1.Value3) as fix2,
IsNull(p1.Value2 ,p1.Value3) as fix3,
IsNull(p1.Value2 ,p1.Value3) as fix4,
IsNull(p1.Value2 ,p1.Value3) as fix5,
IsNull(p1.Value2 ,p1.Value3) as fix6
from tc t
left join (select * from Parameter where paramtype='fix') as p1 on p1.Value2 = t.FIX
left join (select * from Parameter where paramtype='fix') as p2 on p2.Value2 = t.FIX1
left join (select * from Parameter where paramtype='fix') as p3 on p3.Value2 = t.FIX2
left join (select * from Parameter where paramtype='fix') as p4 on p4.Value2 = t.FIX3
left join (select * from Parameter where paramtype='fix') as p5 on p5.Value2 = t.FIX4
left join (select * from Parameter where paramtype='fix') as p6 on p6.Value2 = t.FIX5
where CUST_ACCT ='10053996'
``` | Simplify Multiple Join SQL Column | [
"",
"sql",
"sql-server",
"join",
""
] |
I have three dates in a table. For each record I need to find the max of the three columns, ignoring any column that is null. Could you please help with this?
I am using Oracle 10g.
```
Table-1
---------
SL NO date1 date2 date3 age
``` | Since `GREATEST` will return `NULL` if any of the values that you pass it is `NULL`, you need a combination of `GREATEST` and `COALESCE`:
```
SELECT GREATEST(
COALESCE(date1, date2, date3)
, COALESCE(date2, date1, date3)
, COALESCE(date3, date1, date2)
)
FROM my_test_table
``` | You can use the `GREATEST` function, which returns the largest of the given n arguments:
```
SELECT GREATEST(date1, date2, date3)
FROM table1
``` | Query to Find max of three columns in a row | [
"",
"sql",
"oracle",
"plsql",
""
] |
Using the below as a basis, how would I work out a person's age in months? I'm not too sure how to go about changing this so that months are calculated instead of years.
```
IF cast(datepart(m, GETDATE()) as int) > cast(datepart(m,@in_DOB) as int)
SET @age = cast(datediff(yyyy,@in_DOB,GETDATE()) as int)
else
IF cast(datepart(m,GETDATE()) as int) = cast(datepart(m,@in_DOB) as int)
IF datepart(d,GETDATE()) >= datepart(d,@in_DOB)
SET @age = cast(datediff(yyyy,@in_DOB,GETDATE()) as int)
ELSE
SET @age = cast(datediff(yyyy,@in_DOB,GETDATE()) as int) -1
ELSE
SET @age = cast(datediff(yyyy,@in_DOB,GETDATE()) as int) - 1
RETURN @age
``` | It's my birthday today :) and I am `SELECT DATEDIFF(MONTH, '13 Nov 1963', GetDate())` months old. Whoops, too many beers already.
**Update**
```
DECLARE @birthdate datetime
DECLARE @months INT
SELECT @birthdate = '13 Nov 1963'
SELECT @months = DATEDIFF(MONTH, @birthdate, GETDATE()) - CASE WHEN DAY(@birthdate) > DAY( GETDATE()) THEN 1 ELSE 0 END
SELECT @months
``` | It's a bit of a mouthful, but we get the crude estimate just using [`DATEDIFF`](http://technet.microsoft.com/en-us/library/ms189794.aspx)1 - and then we adjust it if it's wrong:
```
select @age = DATEDIFF(month,@in_DOB,CURRENT_TIMESTAMP) -
CASE WHEN DATEADD(month,DATEDIFF(month,@in_DOB,CURRENT_TIMESTAMP),@in_DOB)
> CURRENT_TIMESTAMP
THEN 1 ELSE 0 END
```
1 It's crude because `DATEDIFF` tells you the number of *transitions* that have occurred between two dates, rather than what some people might intuit. This means that the difference in months between 30th September and 1st October is 1, per this function.
So it can end up reporting a value 1 higher than the intuitive "difference in months" between two dates. | Work out person age in months sql | [
"",
"sql",
"stored-procedures",
""
] |
In ORACLE I am trying to get values from `PS_EMP_REVIEW_GOAL` with a `REVIEW_DT` between **01-01-YYYY** and **12-31-YYYY** from last year.
I get the following error msg:
```
ORA-01843: not a valid month
01843. 00000 - "not a valid month"
```
\*Cause:
\*Action:
```
SELECT
ERG.REVIEW_DT,
ERG.CAREER_GOAL
from PS_EMP_REVIEW_GOAL ERG, PS_PERSONNEL P
where ERG.EMPLID = P.EMPLID
and ERG.REVIEW_DT = (Select max(ERG1.REVIEW_DT) from PS_EMP_REVIEW_GOAL ERG1
where ERG1.EMPLID = ERG.EMPLID
and ERG1.REVIEW_DT BETWEEN to_date('01-01-' || trunc(sysdate, 'YYYY'))-1
AND to_date('12-31-' || trunc(sysdate, 'YYYY'))-1
);
The problem is that `TRUNC(SYSDATE, 'YYYY')` returns a whole date, which you then try to concatenate with the day and month:
```
SELECT TRUNC(SYSDATE, 'YYYY') FROM dual;
```
```
TRUNC(SYSDATE,'YYYY')
---------------------
01-01-2013
```
What you should do instead is `EXTRACT` the year from `SYSDATE` and use the date format model to convert the string into a `DATE`:
```
SELECT TO_DATE('01-01-' || EXTRACT(YEAR FROM SYSDATE), 'MM-DD-YYYY') - 1 AS val
FROM dual;
```
```
VAL
----------
31-12-2012
```
So your code should look like this:
```
SELECT
ERG.REVIEW_DT,
ERG.CAREER_GOAL
from PS_EMP_REVIEW_GOAL ERG, PS_PERSONNEL P
where ERG.EMPLID = P.EMPLID
and ERG.REVIEW_DT = (Select max(ERG1.REVIEW_DT) from PS_EMP_REVIEW_GOAL ERG1
where ERG1.EMPLID = ERG.EMPLID
and ERG1.REVIEW_DT BETWEEN TO_DATE('01-01-' || EXTRACT(YEAR FROM SYSDATE), 'MM-DD-YYYY') - 1
AND TO_DATE('12-31-' || EXTRACT(YEAR FROM SYSDATE), 'MM-DD-YYYY') - 1
);
```
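As an aside, a sketch that avoids building date strings altogether (assuming the same Oracle setup): derive the year boundaries arithmetically with `TRUNC` and `ADD_MONTHS`:

```
-- TRUNC(SYSDATE, 'YYYY')                  -> Jan 1 of the current year
-- ADD_MONTHS(TRUNC(SYSDATE, 'YYYY'), -12) -> Jan 1 of the previous year
and ERG1.REVIEW_DT >= ADD_MONTHS(TRUNC(SYSDATE, 'YYYY'), -12)
and ERG1.REVIEW_DT <  TRUNC(SYSDATE, 'YYYY')
```

A half-open range like this also keeps rows that carry a time-of-day component on 31 December, which a `BETWEEN ... AND TO_DATE('12-31-...')` comparison would miss.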
**Edit** If you want to compare the date with the dates from the previous year, you should move the `- 1` inside the `TO_DATE` like this:
```
and ERG1.REVIEW_DT BETWEEN TO_DATE('01-01-' || (EXTRACT(YEAR FROM SYSDATE) - 1), 'MM-DD-YYYY')
AND TO_DATE('12-31-' || (EXTRACT(YEAR FROM SYSDATE) - 1), 'MM-DD-YYYY')
``` | Another shorter option would be:
```
extract(year from ERG1.REVIEW_DT) = extract(year from current_date) - 1
```
**BUT**: this would **not** use an index on `ERG1.REVIEW_DT`, so it might be too slow for your purposes. In that case Przemyslaw's answer is much better, because that expression *can* use an index on that column. | In ORACLE get values from 01-01-YYYY to 12-31-YYYY previous year | [
"",
"sql",
"oracle",
"date",
""
] |
I'm trying to better understand set-based logic and simplify a piece of code I'm working on. Here's an example of what I am currently working with (it doesn't work currently for reasons that will quickly become apparent):
```
SELECT
userid,
rn = ROW_NUMBER() OVER (ORDER BY username)
FROM user
WHERE username like 'test%'
WHILE rn between 1 and 100
<RUN SP USING INFORMATION>
WHILE rn between 101 and 200
<RUN SP WITH DIFFERENT INFORMATION>
```
For the purposes of example, assume there are 200 rows that meet the criteria in the `SELECT` statement. Also assume that we can't make changes to the `user` table. My question is, without using a temp table and without (hopefully) using a `WHILE` loop, how else could I handle this?
Note that the SP referenced is a rather complex stored procedure that needs to use the information within that `SELECT` statement. I could handle this by using a temp table and going row by row, but I'm trying to understand other approaches to this where I could handle it in batches. | You can't call a stored procedure as part of another query, so it would be impossible to do what you're asking as a set-based operation without rewriting the stored procedures. | Several years ago I wrote a SQL-based utility specifically to address needs like this. It is detailed in this article at another site here: <http://www.sqlservercentral.com/scripts/Administration/69737/>. (If you want, I can include it here as well, but it is fairly long)
Please note the following about it:
1. It is written entirely in T-SQL, and
2. It uses no Cursors or While loops whatsoever.
You would probably address your problem with this tool something like this:
```
EXECUTE OVER_SET '
IF {rn} between 1 and 100
<RUN SP USING INFORMATION>
IF {rn} between 101 and 200
<RUN SP WITH DIFFERENT INFORMATION>
',
@from = '
(SELECT
userid,
rn = ROW_NUMBER() OVER (ORDER BY username)
FROM user
  WHERE username like ''test%'') aa ',
@subs1 = '{userid}=userid',
@subs2 = '{rn}=rn',
@quote = '"' -- Allows you to use (") for quotes inside the 1st string
;
```
Since that is a sign-up site, I am posting the code below (long):
```
CREATE PROC
OVER_SET (
@command AS NVARCHAR(MAX), -- Template SQL command
@from AS NVARCHAR(MAX), -- FROM..WHERE clause string
@subs1 AS NVARCHAR(MAX) = N'', -- Substitution parameters, these are
@subs2 AS NVARCHAR(MAX) = N'', -- of the form "<find>=<repl>" where:
@subs3 AS NVARCHAR(MAX) = N'', -- <find> will be searched for in @command, and
-- <repl> will replace it, if it was found
-- (typically, <repl> should be a column name
-- returned by the FROM clause)
@print AS BIT = 1, -- 0 = suppress PRINT of the SQL before executing
@catch AS VARCHAR(12) = 'continue',
-- TRY/CATCH option parameters. Choices are:
-- 'continue' on an error, print a message & continue
-- 'ignore' attempt to suppress all errors
-- 'fail' try to re-raise the error
-- 'none' no TRY/CATCH blocks
@use_db AS NVARCHAR(255) = N'', -- DB to switch to before execution of the SQL text
@quote AS NVARCHAR(8) = N'' -- search for this character & replace with (').
)
AS
--
DECLARE @qt AS NVARCHAR(1), @cr AS NVARCHAR(1);
SELECT @qt = N'''', @cr = N'
';
DECLARE @find1 AS NVARCHAR(MAX), @prfx1 AS NVARCHAR(MAX), @sufx1 AS NVARCHAR(MAX)
DECLARE @find2 AS NVARCHAR(MAX), @prfx2 AS NVARCHAR(MAX), @sufx2 AS NVARCHAR(MAX)
DECLARE @find3 AS NVARCHAR(MAX), @prfx3 AS NVARCHAR(MAX), @sufx3 AS NVARCHAR(MAX)
DECLARE @prtst AS NVARCHAR(MAX), @prfxC AS NVARCHAR(MAX), @sufxC AS NVARCHAR(MAX)
DECLARE @newdb AS NVARCHAR(MAX), @declr AS NVARCHAR(MAX)
DECLARE @NewCmd AS NVARCHAR(MAX), @GenCmd AS NVARCHAR(MAX)
;
SELECT
@find1 = CASE WHEN @subs1 = N'' THEN N'' ELSE LEFT(@subs1,CHARINDEX(N'=',@subs1)-1) END,
@prfx1 = CASE WHEN @subs1 = N'' THEN N'' ELSE N'REPLACE(' END,
@sufx1 = CASE WHEN @subs1 = N'' THEN N'' ELSE N',@find1,'+RIGHT(@subs1,LEN(@subs1)-CHARINDEX(N'=',@subs1))+N')' END,
@find2 = CASE WHEN @subs2 = N'' THEN N'' ELSE LEFT(@subs2,CHARINDEX(N'=',@subs2)-1) END,
@prfx2 = CASE WHEN @subs2 = N'' THEN N'' ELSE N'REPLACE(' END,
@sufx2 = CASE WHEN @subs2 = N'' THEN N'' ELSE N',@find2,'+RIGHT(@subs2,LEN(@subs2)-CHARINDEX(N'=',@subs2))+N')' END,
@find3 = CASE WHEN @subs3 = N'' THEN N'' ELSE LEFT(@subs3,CHARINDEX(N'=',@subs3)-1) END,
@prfx3 = CASE WHEN @subs3 = N'' THEN N'' ELSE N'REPLACE(' END,
@sufx3 = CASE WHEN @subs3 = N'' THEN N'' ELSE N',@find3,'+RIGHT(@subs3,LEN(@subs3)-CHARINDEX(N'=',@subs3))+N')' END,
@newdb = CASE WHEN @use_db= N'' THEN N'' ELSE N'USE [' + @use_db + N'];' + @cr END,
@declr = N'DECLARE @_Num AS INT, @_Lin AS INT, @_Err AS NVARCHAR(MAX), @_Msg AS NVARCHAR(MAX);'+@cr
;
;WITH
[base] AS (SELECT cmd = @command),
[quot] AS (SELECT cmd = CASE @quote WHEN N'' THEN cmd ELSE REPLACE(cmd, @quote, @qt) END FROM [base]),
[dble] AS (SELECT cmd = N'N'+@qt+REPLACE(cmd, @qt, @qt+@qt)+@qt FROM [quot]),
[prnt] AS (SELECT cmd = CASE @print WHEN 1 THEN N' PRINT '+cmd+';'+@cr ELSE N'' END
+ N' EXEC('+cmd+N');' FROM [dble]),
[ctch] AS (SELECT cmd =
CASE @catch WHEN N'none' THEN cmd
ELSE N'BEGIN TRY'+@cr+cmd+@cr+N'END TRY'+@cr+N'BEGIN CATCH'+@cr
+ N' SELECT @_Num=ERROR_NUMBER(), @_Lin=ERROR_LINE(), @_Err=ERROR_MESSAGE()'+@cr
+ CASE @catch
WHEN N'continue' THEN
N' SELECT @_msg=''Continuing after Error(''+CAST(@_Num AS NVARCHAR)+'') at Line ''+CAST(@_Lin AS NVARCHAR)+'''
+@cr+' ''+@_Err;'+@cr
+N' PRINT @_msg; '+@cr
+N' PRINT '' ''; '+@cr
WHEN N'ignore' THEN N' -- ignore = do nothing'+@cr
WHEN N'fail' THEN
N' SELECT @_msg=''Failing after Error(''+CAST(@_Num AS NVARCHAR)+'') at Line ''+CAST(@_Lin AS NVARCHAR)+'''
+@cr+' ''+@_Err;'+@cr
+N' RAISERROR(@_Num, 16, 1);'+@cr
+N' PRINT '' ''; '+@cr
ELSE N' --BAD else branch, shouldnt get here' END
+ N'END CATCH;' END FROM [prnt])
SELECT
@NewCmd = @prfx1+@prfx2+@prfx3+ N'@command' +@sufx1+@sufx2+@sufx3,
@command = cmd + @cr
FROM [ctch]
;
SELECT @GenCmd = '
DECLARE @sql AS NVARCHAR(MAX); SET @sql = '''+@newdb+ +@declr+ '''
;WITH
[-@from] AS ( SELECT * FROM ' +@from+ ' )
, [-@subs] AS ( SELECT [-NewCmd] = ' +@NewCmd+ ' FROM [-@from] )
, [-@print] AS ( SELECT [-NewCmd] = [-NewCmd] FROM [-@subs] )
SELECT
@sql = @sql + ''
'' + [-NewCmd]
FROM [-@subs]
;
EXEC sp_executesql @sql;
'
;
EXEC sp_executesql @GenCmd
, N'@command NVARCHAR(MAX), @from NVARCHAR(MAX), @find1 NVARCHAR(MAX), @find2 NVARCHAR(MAX), @find3 NVARCHAR(MAX)'
, @command, @from, @find1, @find2, @find3
;
```
Here are the post-comments which include some examples:
```
Example Usages
1) INSERT..EXECute:
Demonstrates capturing the SELECT output from an EXECUTE OVER_SET that
searches every database in the SQL Server Instance for routines with
the work "cursor" in them.
--
CREATE TABLE #temp (DB sysname, [Schema] sysname, Routine sysname);
INSERT INTO #temp
EXECUTE OVER_SET '
SELECT ROUTINE_CATALOG, ROUTINE_SCHEMA, ROUTINE_NAME
FROM [{db}].INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION like "%cursor%"',
@from = 'sys.sysdatabases WHERE dbid > 4',
@subs1 = '{db}=name',
@quote = '"'
;
SELECT * from #temp;
DROP table #temp;
--
The @from argument returns the list of non-system databases in the server,
and the @susbs1 argument "{db}=name" tells it to replace every instance
of "{db}" in the command strings with the value of the [name] column (from
sys.sysdatabases). Note also the @quote argument's value (") allows us to
use a single quotation mark in the quoted command text instead of having
to use double apostrophes (ie, ' "%cursor%" ', instead of ' ''%cursor%'' ').
--======
2) Nested Example:
Demonstrates, nesting OVER_SET execution to operate against the combination
of to different sets, the second dependent on the first. Specifically,
it searches every non-system database for every user that is a windows
user or group, and then attempts to map them back to a server Login of
the same name.
--
EXECUTE OVER_SET '
EXECUTE OVER_SET "
ALTER USER [{name}] WITH LOGIN = [{name}];
PRINT `USER {name} has been mapped to its Login.`;",
@from = "sys.database_principals
WHERE ( type_desc = ""WINDOWS_GROUP"" OR type_desc = ""WINDOWS_USER"" )
AND name NOT like ""%dbo%"" AND name NOT LIKE ""%#%"" ",
@use_db = "{db}",
@subs1 = "{name}=name",
@catch = "continue",
@print = 1,
@quote = "`";
',
@from = 'sys.sysdatabases
WHERE dbid > 4',
@subs1 = '{db}=name',
@catch = 'continue',
@print = 0,
@quote = '"';
--
The outer OVER_SET uses the @from argument to return the list of all databases
which the @subs1 argument "{db}=name", uses to modify the inner OVER_SET
commands @use_db argument, causing the inner execution to USE [{db}] to each
database in turn. The inner execution's @from argument returns the list
of database users that are WINDOWS_* user or group, and the @subs1 ({name}=name)
cause the "{name}" token to be replaced with the value of the [name] column
from the database_principals table.
Note that two different @quote characters are used ( ("), then (`) ), removing
the need for double or even quadruple apostrophes in the inner command text.
(also note, that the @from argument text does not benefit from this, and can
only use the outer command quote (") because it is part of the outer command
text argument.
``` | Handling set-based operations in SQL | [
"",
"sql",
"sql-server",
"t-sql",
"set-based",
""
] |
Recently I migrated my Postgres database from 8.2 to 8.4. When I run my application and try to
log in, I get this error:
```
ERROR [JDBCExceptionReporter] ERROR: function to_date(timestamp without time zone, unknown) does not exist
```
I checked in Postgres by executing this to\_date function:
```
SELECT to_date(createddate,'YYYY-MM-DD') FROM product_trainings;
```
It gives me the error "function to\_date does not exist".
When I execute the same query in Postgres 8.2 I do not get the error.
Please help me resolve this issue. | It seems like all it needs is a conversion from timestamp to text, as the function definition is: to\_date(text, text).
Perhaps in 8.2 this conversion from timestamp to text was already predefined.
<http://www.postgresql.org/docs/8.4/static/functions-formatting.html> | Three years later. You can cast:
```
SELECT
to_date(cast(createddate as TEXT),'YYYY-MM-DD')
FROM
product_trainings;
``` | Getting error function to_date(timestamp without time zone, unknown) does not exist | [
"",
"sql",
"database",
"postgresql",
"jdbc",
""
] |


I have a problem where I need to select the Project Number, Controlling Department Number, Department Manager's Lname, address, and birthdate for each project located in Stafford. I am having trouble getting the results I want.
I tried:
```
SELECT PROJECT.PNUMBER, PROJECT.DNUM, EMPLOYEE.LNAME, EMPLOYEE.ADDRESS, EMPLOYEE.BDATE
FROM PROJECT, EMPLOYEE, DEPARTMENT
WHERE PLOCATION = 'STAFFORD' AND DEPARTMENT.MGRSSN = EMPLOYEE.SSN;
```
And got:
```
+---------+------+---------+-------------------------+-----------+
| PNUMBER | DNUM | LNAME | ADDRESS | BDATE |
| 30 | 4 | WONG | 683 VOSS, HOUSTON, TX | 08-DEC-55 |
| 10 | 4 | WONG | 683 VOSS, HOUSTON, TX | 08-DEC-55 |
| 30 | 4 | WALLACE | 291 BERRY, BELLAIRE, TX | 20-JUN-41 |
+---------+------+---------+-------------------------+-----------+
```
But what I should have gotten is (or what I wanted):
```
+---------+------+---------+-------------------------+-----------+
| PNUMBER | DNUM | LNAME | ADDRESS | BDATE |
| 10 | 4 | WALLACE | 391 BERRY, BELLAIRE, TX | 20-JUN-41 |
| 30 | 4 | WALLACE | 291 BERRY, BELLAIRE, TX | 20-JUN-41 |
+---------+------+---------+-------------------------+-----------+
```
Can anyone help me figure out what is wrong with my SQL statement? *Sorry, I wasn't able to figure out how to format this.* | Basically, you're missing the join between `DEPARTMENT` and `PROJECT`.
I'd use explicit joins rather than the outdated `where` syntax:
```
select
PROJECT.PNUMBER,
PROJECT.DNUM,
EMPLOYEE.LNAME,
EMPLOYEE.ADDRESS
-- and so on with the EMPLOYEE fields
from
PROJECT
inner join
DEPARTMENT
on DEPARTMENT.DNUMBER = PROJECT.DNUM
inner join
EMPLOYEE
on EMPLOYEE.SSN = DEPARTMENT.MGRSSN
where
PROJECT.PLOCATION = 'Stafford'
```
But with the old syntax:
```
select
PROJECT.PNUMBER,
PROJECT.DNUM,
EMPLOYEE.LNAME,
EMPLOYEE.ADDRESS
-- and so on with the EMPLOYEE fields
from
PROJECT, DEPARTMENT, EMPLOYEE
where
PROJECT.PLOCATION = 'Stafford'
and DEPARTMENT.DNUMBER = PROJECT.DNUM -- This was the missing bit
and EMPLOYEE.SSN = DEPARTMENT.MGRSSN
``` | Try something like this
```
select p.pnumber, d.dnumber,e.lname, e.[address], e.bdate
from department D
inner join project P on D.dnumber = P.dnum
inner join employee E on e.ssn = D.mgrssn
where p.location = 'STAFFORD'
``` | I need to select information form a database based on criteria that crosses multiple tables | [
"",
"sql",
"sql-server",
""
] |
I'm trying to add a column to one of my database tables, but there is a syntax error
and I can't seem to find the problem...
My current database table looks like this:
```
component + tag_id + item_id
------------|----------|-----------
com_content | 23 | 2642
com_content | 26 | 3481
com_content | 35 | 1868
com_content | 85 | 5827
com_content | 89 | 7882
```
I want it to look like this, where 'id' is auto-increment and all columns are part of the primary key:
```
id + component + tag_id + item_id
-----|--------------|----------|-----------
1 | com_content | 23 | 2642
2 | com_content | 26 | 3481
3 | com_content | 35 | 1868
4 | com_content | 85 | 5827
5 | com_content | 89 | 7882
```
This is my query:
```
DROP PRIMARY KEY
ALTER TABLE gitags_items
ADD COLUMN id INT NOT NULL AUTO_INCREMENT FIRST
PRIMARY KEY (id,component,tag_id,item_id)
```
However I'm getting this error message:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'PRIMARY KEY ALTER TABLE gitags_items ADD COLUMN id INT NOT NULL AUTO_INC' at line 1
```
Any help/pointers would be much appreciated | The 'ALTER TABLE' bit must come first, and then each part must be seperated by a comma:
```
ALTER TABLE gitags_items
DROP PRIMARY KEY,
ADD COLUMN id INT NOT NULL AUTO_INCREMENT FIRST,
ADD PRIMARY KEY (id,component,tag_id,item_id);
```
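If your MySQL version rejects dropping and re-adding the primary key in a single statement, a fallback sketch (assuming the same `gitags_items` table) is to split the work into two statements:

```
ALTER TABLE gitags_items
  DROP PRIMARY KEY;

ALTER TABLE gitags_items
  ADD COLUMN id INT NOT NULL AUTO_INCREMENT FIRST,
  ADD PRIMARY KEY (id, component, tag_id, item_id);
```

Note that MySQL requires an `AUTO_INCREMENT` column to be part of a key, so the new column and the new primary key must be added together in the second statement.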
but I'm not sure if you can drop and create a primary key in the same statement. | This one works fine; the problem was with the commas. There's no need to drop the primary key, since you're going to set one yourself:
```
ALTER TABLE gitags_items ADD id INT NOT NULL AUTO_INCREMENT FIRST, ADD PRIMARY KEY(id,component,tag_id,item_id);
``` | MySQL alter table add column with primary key syntax error | [
"",
"mysql",
"sql",
""
] |
First, I hope the title expresses the issue; otherwise, any suggestion is welcome. My issue is that I have the following table structure:
```
+----+------+------------------+-------------+
| ID | Name | recipient_sender | user |
+----+------+------------------+-------------+
| 1 | A | 1 | X |
| 2 | B | 2 | Y |
| 3 | A | 2 | Z |
| 4 | B | 1 | U |
| | | | |
+----+------+------------------+-------------+
```
Whereby in the column `recipient_sender` the value 1 means the user is recipient, the value 2 means the user is sender.
I need to present data in the following way:
```
+----+------+-----------+---------+
| ID | Name | recipient | sender |
+----+------+-----------+---------+
| 1 | A | X | Z |
| 2 | B | U | Y |
+----+------+-----------+---------+
```
I've tried self-join but it did not work. I cannot use `MAX` with `CASE WHEN`, as the number of records is too big.
**Note:** Please ignore the bad table design as it's just a simplified example of the real one | Please try:
```
SELECT
    MIN(ID) ID,
    Name,
    max(case when recipient_sender=1 then user else null end) recipient,
    max(case when recipient_sender=2 then user else null end) sender
From yourTable
group by Name
``` | maybe you can try this:
```
select min(id) id,
name,
       max(decode(recipient_sender, 1, user, '')) recipient,
       max(decode(recipient_sender, 2, user, '')) sender
from t
group by name
```
You can check a demo [here on SQLFiddle](http://sqlfiddle.com/#!4/c9a41/1). | divide a column into two based on another column value - ORACLE | [
"",
"sql",
"oracle",
""
] |
I have two string columns `a` and `b` in a table `foo`.
`select a, b from foo` returns values `a` and `b`. However, concatenation of `a` and `b` does not work. I tried :
```
select a || b from foo
```
and
```
select a||', '||b from foo
```
Update from comments: both columns are type `character(2)`. | The problem was nulls in the values; concatenation involving a null yields null. The solution is as follows:
```
SELECT coalesce(a, '') || coalesce(b, '') FROM foo;
``` | With [string types](https://www.postgresql.org/docs/current/datatype-character.html) (including `character(2)`), the displayed concatenation just works because, [quoting the manual:](https://www.postgresql.org/docs/current/functions-string.html)
> [...] the string concatenation operator (`||`) accepts non-string
> input, so long as **at least one input is of a string type**, as shown in
> [Table 9.8](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-SQL). For other cases, insert an explicit coercion to `text` [...]
Bold emphasis mine. The 2nd example `select a||', '||b from foo` works for *any* data types because the untyped string literal `', '` defaults to type `text` making the whole expression valid.
For non-string data types, you can "fix" the 1st statement by [casting](https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS) at least one argument to `text`. *Any* type can be cast to `text`.
```
SELECT a::text || b AS ab FROM foo;
```
Judging from [your own answer](https://stackoverflow.com/a/19958775/939860), "*does not work*" was supposed to mean *"returns null"*. The result of *anything* concatenated to null is null. If **`null`** values can be involved and the result shall not be null, use [**`concat_ws()`**](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) to concatenate any number of values:
```
SELECT concat_ws(', ', a, b) AS ab FROM foo;
```
Separators are only added between non-null values, i.e. only where necessary.
Or **[`concat()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER)** if you don't need separators:
```
SELECT concat(a, b) AS ab FROM foo;
```
No need for type casts since both functions take [`"any"`](https://www.postgresql.org/docs/current/datatype-pseudo.html#DATATYPE-PSEUDOTYPES-TABLE) input and work with text representations. **However**, that's also why the [function volatility](https://www.postgresql.org/docs/current/xfunc-volatility.html) of both `concat()` and `concat_ws()` is only `STABLE`, not `IMMUTABLE`. If you need an **immutable function** (like for an index, a generated column, or for partitioning), see:
* [Create an immutable clone of concat\_ws](https://stackoverflow.com/questions/54372666/create-an-immutable-clone-of-concat-ws/54384767#54384767)
* [PostgreSQL full text search on many columns](https://dba.stackexchange.com/a/164081/3684)
More details (and why `COALESCE` is a poor substitute) in this related answer:
* [Combine two columns and add into one new column](https://stackoverflow.com/questions/12310986/postgresql-combine-two-columns-and-add-into-one-new-column/12320369#12320369)
### Asides
`+` (as mentioned in comments) is not a valid operator for string concatenation in Postgres (or standard SQL). It's a private idea of Microsoft to add this to their products.
There is hardly any good reason to use `character(n)` (synonym: `char(n)`). Use [`text` or `varchar`](https://www.postgresql.org/docs/current/datatype-character.html). Details:
* [Any downsides of using data type "text" for storing strings?](https://stackoverflow.com/questions/20326892/any-downsides-of-using-data-type-text-for-storing-strings/20334221#20334221)
* [Best way to check for "empty or null value"](https://stackoverflow.com/questions/23766084/best-way-to-check-for-empty-or-null-value/23767625#23767625) | How to concatenate columns in a Postgres SELECT? | [
"",
"sql",
"postgresql",
"types",
"concatenation",
"coalesce",
""
] |
I have a SQL Server 2012 Enterprise setup issue and have so far been unable to find a solution for my specific use case:
I have two SQL servers, one in the United States and one in Germany. Both are used for reading and writing, and the task is to keep them synchronized. The good news is that while reading happens a lot, writing happens only about once every minute (to various tables, however). Basically I am looking for a replication setup in which both servers are masters and can send changes to the other.
Is that possible?
Thanks, Christoph | Either [Merge Replication](http://technet.microsoft.com/en-us/library/ms152746.aspx), [Bidirectional Transactional Replication](http://technet.microsoft.com/en-us/library/ms151855.aspx), or [Peer-to-Peer Replication](http://technet.microsoft.com/en-us/library/ms151196.aspx) is the best fit here.
Since writes can occur at both servers, you will need to consider what to do in the event of a conflict. A conflict will occur when the same row/column is changed on 2 different servers between a sync. If possible, it is best to avoid conflicts altogether by partitioning the write operations. One way this can be acheived is by adding a location-specific identifier column to the writable tables and ensure that write operations for a particular row are performed at only one location.
Merge Replication provides bidirectional synchronization and the ability to define static and parameterized row filters to provide a subset of data to be published to subscribers. Merge Replication also provides built-in conflict resolvers along with the ability to implement custom conflict resolvers.
Bidirectional Transactional Replication provides bidirectional synchronization but does not offer any type of conflict detection or resolution.
Peer-to-Peer Replication provides bidirectional synchronization, however, it requires all nodes to be Enterprise Edition and does not support row or column filtering. Peer-to-Peer Replication has built-in conflict detection but does not offer automatic conflict resolution.
I would recommend setting each one up in your test environment to see which replication type best fits your needs. | Yes, for this you can use Merge Replication or Transactional Replication.
They are both described in detail on the following page:
<http://technet.microsoft.com/en-us/library/ms152531.aspx>
Each table must have one field set up as a rowguid; the requirement is that it is of type uniqueidentifier.
You should be aware of the fragmentation issues that come with the default value "newid". Instead, you should make sure that the default value "newsequentialid" is used.
When setting up this kind of replication with the help of SQL Server Management Studio it should do this automatically.
In the following post some details of newsequentialid and newid are mentioned:
<http://www.mssqltips.com/sqlservertip/1600/auto-generated-sql-server-keys-with-the-uniqueidentifier-or-identity/> | SQL Server 2012: synchronize two servers with writes to both | [
"",
"sql",
"sql-server-2012",
"replication",
"mirroring",
""
] |
I'm trying to get a percent-complete column in SQL for the following data. These are the results from my query.
```
work_order_no status orderqty complete percentcomplete
WO-000076 Approved 20.0000 9 5201725
WO-000076 Approved 20.0000 10 15605175
WO-000078 Approved 12000.0000 200 91258.3333333333
WO-000078 Approved 12000.0000 500 228145.833333333
```
What I need is a result with 2 rows. Row 1 will be WO-00076 and the percent complete. Row 2 will be WO-000078 and the percent complete. Below is the query that I am working with.
```
Select distinct wo.work_order_no,
wos.status_description,
wo.order_qty as [ORDERQTY],
p.good_tot_diff as [COMPLETE],
(sum(p.good_tot_diff)/wo.order_qty) * 100 as [PERCENTCOMPLETE]
from wo_master as wo,
process as p,
wo_statuses as wos,
so_children as soc,
so_sales_orders as sos,
cs_customers as csc
where wo.work_order_no = p.entry_18_data_txt
and wo.work_order_no = soc.work_order_no
and wo.wo_status_id = wos.wo_status_id
and wo.mfg_building_id = @buildid
and wos.wo_status_id = @statusid
group by wo.work_order_no, wos.status_description, wo.order_qty, p.good_tot_diff
```
So I would like the results to look like this:
```
work_order_no status orderqty complete percentcomplete
WO-000076 Approved 20.0000 19 0.95
WO-000078 Approved 12000.0000 700 0.005
``` | This should be what you want. The complete column should not be included in the GROUP BY, and should be summed.
```
SELECT DISTINCT wo.work_order_no, wos.status_description, wo.order_qty AS [ORDERQTY], SUM(p.good_tot_diff) AS [COMPLETE], (sum(p.good_tot_diff)/wo.order_qty) * 100 AS [PERCENTCOMPLETE]
FROM wo_master AS wo,
process AS p,
wo_statuses AS wos,
so_children AS soc,
so_sales_orders AS sos,
cs_customers AS csc
WHERE wo.work_order_no = p.entry_18_data_txt
AND wo.work_order_no = soc.work_order_no
AND wo.wo_status_id = wos.wo_status_id
AND wo.mfg_building_id = @buildid
AND wos.wo_status_id = @statusid
GROUP BY wo.work_order_no,
wos.status_description,
wo.order_qty;
``` | Try this:
```
Select wo.work_order_no,
wos.status_description,
wo.order_qty as [ORDERQTY],
SUM(p.good_tot_diff) as [COMPLETE],
SUM(p.good_tot_diff)/wo.order_qty as [PERCENTCOMPLETE]
from wo_master as wo,
process as p,
wo_statuses as wos,
so_children as soc,
so_sales_orders as sos,
cs_customers as csc
where wo.work_order_no = p.entry_18_data_txt
and wo.work_order_no = soc.work_order_no
and wo.wo_status_id = wos.wo_status_id
and wo.mfg_building_id = @buildid
and wos.wo_status_id = @statusid
group by wo.work_order_no, wos.status_description, wo.order_qty
``` | SQL Sum of 2 rows when 4 rows are returned | [
"",
"sql",
"sql-server-2008",
"sum",
""
] |
I'm getting the error "Operand should contain 1 column(s)".
Yes, I am trying to compare 1 column with 3, so is what I want to achieve even possible?
A clear example: I have a table test
```
id profile_id result_number
10 1232 3
10 3263 5
10 2222 4
10 2321 1
```
Actually, I have 3 queries and I want to get it all in one query. First, I get the result\_number of profile\_id 2222 (returns 4):
```
SELECT `result_number` FROM test WHERE id=10 AND `profile_id`=2222
```
Next, I get the profile\_id whose result\_number is 4 - 1
```
SELECT `profile_id` FROM test
WHERE id=10 AND `result_number` = 4 - 1
```
(returns 1232)
Finally, I get the profile\_id whose result\_number is 4 + 1
```
SELECT `profile_id` FROM test WHERE id=10 AND `result_number` = 4 + 1
```
(returns 3263)
Expected result:
```
profile_id
3263
2222
1232
```
Is it possible to achieve it? | You can `JOIN` a subquery if it contains multiple rows:
```
SELECT profile_id
FROM test t, (
SELECT result_number
FROM test
WHERE id = 10
AND profile_id = 2222
) q
WHERE t.result_number BETWEEN q.result_number-1 AND q.result_number+1
AND id = 10
ORDER BY t.result_number DESC
```
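The same join can be verified against the sample rows using SQLite from Python (SQLite accepts this query unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INT, profile_id INT, result_number INT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)",
                 [(10, 1232, 3), (10, 3263, 5), (10, 2222, 4), (10, 2321, 1)])

# Join against the one-row subquery holding profile 2222's result_number,
# then keep every row within +/- 1 of it.
rows = conn.execute("""
    SELECT t.profile_id
    FROM test t
    JOIN (SELECT result_number FROM test
          WHERE id = 10 AND profile_id = 2222) q
    WHERE t.result_number BETWEEN q.result_number - 1 AND q.result_number + 1
      AND t.id = 10
    ORDER BY t.result_number DESC
""").fetchall()

print([r[0] for r in rows])  # [3263, 2222, 1232]
```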
DEMO: <http://sqlfiddle.com/#!2/6b922/1> | ```
with a as
(
SELECT `result_number`, profile_id FROM test WHERE id=10 AND `profile_id`=2222
)
SELECT PROFILE_ID FROM A
UNION
SELECT B.PROFILE_ID FROM TEST AS B, A WHERE B.RESULT_NUMBER = (A.RESULT_NUMBER - 1)
UNION
SELECT C.PROFILE_ID FROM TEST AS C, A WHERE C.RESULT_NUMBER = (A.RESULT_NUMBER + 1)
``` | Operand should have 1 column in mysql | [
"",
"mysql",
"sql",
"subquery",
""
] |
Say I have a table where `Col2` is varchar
```
Col1 Col2
1 001,002
2 003,004,005
```
I need to count the number of elements in Col2 and return it. If I do:
```
select --do something here with column-- from table
```
it'll give me:
```
2
3
``` | So by counting the number of `,`s you have in Col2 and adding 1 to it would give you your answer. Below I get the length of Col2. Then I replace the `,`s with nothing and get that length. I take the first length and subtract the second length to get the total number of commas. Then simply add 1 to the result to get the total you are looking for:
```
SELECT (LENGTH(Col2) - LENGTH(REPLACE(Col2, ',', '')) + 1) AS MyCol2Count
FROM MyTable
``` | If it's always formatted like that simply [count the number of commas](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions147.htm#sthref1418) and then add 1:
```
select regexp_count(col, ',') + 1
from table
``` | Count the number of elements in a comma separated string in Oracle | [
"",
"sql",
"oracle",
""
] |
Hey guys, I have the below sample data which I want to query.
```
MemberID AGEQ1 AGEQ2 AGEQ3
-----------------------------------------------------------------
1217 2 null null
58458 3 2 null
58459 null null null
58457 null 5 null
299576 6 5 7
```
What I need to do is look at the table and, wherever an AGEx column contains data, count how many of the columns in that row are filled in.
Results example:
for memberID 1217 the count would be 1
for memberID 58458 the count would be 2
for memberID 58459 the count would be 0 or null
for memberID 58457 the count would be 1
for memberID 299576 the count would be 3
This is how it should look in SQL if I query the entire table:
1 Children - 2
2 Children - 1
3 Children - 1
0 Children - 1
So far I have been doing it using the following query, which isn't very efficient and gives incorrect tallies, since there are multiple combinations of answers people can give to the AGE questions. I also have to write multiple queries and change the `is null` to `is not null` depending on how many children I am looking to count.
```
select COUNT (*) as '1 Children' from Member
where AGEQ1 is not null
and AGEQ2 is null
and AGEQ3 is null
```
The above query only gives me an answer of 1, but I want to be able to count the other columns for data as well.
Hope this is nice and clear and thank you in advance | If all of the columns are integers, you can take advantage of integer math - dividing the column by itself will yield 1, unless the value is NULL, in which case COALESCE can convert the resulting NULL to 0.
```
SELECT
MemberID,
COALESCE(AGEQ1 / AGEQ1, 0)
+ COALESCE(AGEQ2 / AGEQ2, 0)
+ COALESCE(AGEQ3 / AGEQ3, 0)
+ COALESCE(AGEQ4 / AGEQ4, 0)
+ COALESCE(AGEQ5 / AGEQ5, 0)
+ COALESCE(AGEQ6 / AGEQ6, 0)
FROM dbo.table_name;
```
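The division trick can be checked quickly with SQLite from Python (the table name and the three-column cut-down are assumptions; also note that SQLite yields NULL on division by zero where SQL Server raises an error, so a stored value of 0 would need extra handling in either engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member (MemberID INT, AGEQ1 INT, AGEQ2 INT, AGEQ3 INT)")
conn.executemany("INSERT INTO member VALUES (?, ?, ?, ?)",
                 [(1217, 2, None, None), (58458, 3, 2, None),
                  (58459, None, None, None), (58457, None, 5, None),
                  (299576, 6, 5, 7)])

# col/col is 1 for a non-NULL value and NULL for NULL; COALESCE maps NULL to 0.
rows = conn.execute("""
    SELECT MemberID,
           COALESCE(AGEQ1 / AGEQ1, 0)
         + COALESCE(AGEQ2 / AGEQ2, 0)
         + COALESCE(AGEQ3 / AGEQ3, 0) AS children
    FROM member
""").fetchall()

print(dict(rows))  # {1217: 1, 58458: 2, 58459: 0, 58457: 1, 299576: 3}
```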
To get the number of people with each count of children, then:
```
;WITH y(y) AS
(
SELECT TOP (7) rn = ROW_NUMBER() OVER
(ORDER BY [object_id]) - 1 FROM sys.objects
),
x AS
(
SELECT
MemberID,
x = COALESCE(AGEQ1 / AGEQ1, 0)
+ COALESCE(AGEQ2 / AGEQ2, 0)
+ COALESCE(AGEQ3 / AGEQ3, 0)
+ COALESCE(AGEQ4 / AGEQ4, 0)
+ COALESCE(AGEQ5 / AGEQ5, 0)
+ COALESCE(AGEQ6 / AGEQ6, 0)
FROM dbo.table_name
)
SELECT
NumberOfChildren = y.y,
NumberOfPeopleWithThatMany = COUNT(x.x)
FROM y LEFT OUTER JOIN x ON y.y = x.x
GROUP BY y.y ORDER BY y.y;
``` | Try this:
```
select id, a+b+c+d+e+f
from ( select id,
case when age1 is null then 0 else 1 end a,
case when age2 is null then 0 else 1 end b,
case when age3 is null then 0 else 1 end c,
case when age4 is null then 0 else 1 end d,
case when age5 is null then 0 else 1 end e,
case when age6 is null then 0 else 1 end f
from ages
) as t
```
See here in fiddle <http://sqlfiddle.com/#!3/88020/1>
To get the quantity of persons with childs
```
select childs, count(*) as ct
from (
select id, a+b+c+d+e+f childs
from
(
select
id,
case when age1 is null then 0 else 1 end a,
case when age2 is null then 0 else 1 end b,
case when age3 is null then 0 else 1 end c,
case when age4 is null then 0 else 1 end d,
case when age5 is null then 0 else 1 end e,
case when age6 is null then 0 else 1 end f
from ages ) as t
) ct
group by childs
order by 1
```
See it here at fiddle <http://sqlfiddle.com/#!3/88020/24> | Counting if data exists in a row | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I've already looked at these two questions:
* [Grouping by Column with Dependence on another Column](https://stackoverflow.com/questions/13410052/grouping-by-column-with-dependence-on-another-column)
* [MySQL GROUP BY with preference](https://stackoverflow.com/questions/1741866/mysql-group-by-with-preference)
However both of them use an aggregate function MAX in order to obtain the highest or filled in value, which doesn't work for my case.
For the purposes of this question, I've simplified my situation. My data is a `test` table with columns route, direction, and operatorName (the original post showed it as an image; the same rows appear as INSERT statements in the second answer below).
I'd like to obtain the operator name for each route, but with respect to direction of travel (i.e. ordering or "preferring" values). This is my pseudo-code:
```
if(`direction` = 'west' AND `operatorName` != '') then select `operatorName`
else if(`direction` = 'north' AND `operatorName` != '') then select `operatorName`
else if(`direction` = 'south' AND `operatorName` != '') then select `operatorName`
else if(`direction` = 'east' AND `operatorName` != '') then select `operatorName`
```
My current SQL query is:
```
SELECT route, operatorName
FROM test
GROUP BY route
```
This gives me the grouping, but the wrong operator for my purposes:
```
route | operatorName
--------------------
95 | James
96 | Mark
97 | Justin
```
I have tried applying an `ORDER BY` clause, but `GROUP BY` takes precedence. My desired result is:
```
route | operatorName
--------------------
95 | Richard
96 | Andrew
97 | Justin
```
I cannot do `MAX()` here as "north" comes before "south" in alphabetical order. How do I explicitly state my preference/ordering before the `GROUP BY` clause is applied?
Also keep in mind that empty strings are not preferred.
Please note that this is a simplified example. The actual query selects a lot more fields and joins with three other tables, but there are no aggregate functions in the query. | You can use that MAX example, you just needed to "fake it". See here: <http://sqlfiddle.com/#!2/58688/5>
```
SELECT *
FROM test
JOIN (SELECT 'west' AS direction, 4 AS weight
UNION
SELECT 'north',3
UNION
SELECT 'south',2
UNION
SELECT 'east',1) AS priority
ON priority.direction = test.direction
JOIN (
SELECT route, MAX(weight) AS weight
FROM test
JOIN (SELECT 'west' AS direction, 4 AS weight
UNION
SELECT 'north',3
UNION
SELECT 'south',2
UNION
SELECT 'east',1) AS priority
ON priority.direction = test.direction
GROUP BY route
) AS t1
ON t1.route = test.route
AND t1.weight = priority.weight
``` | I came up with this solution, however, it is ugly. Anyway, you may try it:
```
CREATE TABLE test (
route INT,
direction VARCHAR(20),
operatorName VARCHAR(20)
);
INSERT INTO test VALUES(95, 'east', 'James');
INSERT INTO test VALUES(95, 'west', 'Richard');
INSERT INTO test VALUES(95, 'north', 'Dave');
INSERT INTO test VALUES(95, 'south', 'Devon');
INSERT INTO test VALUES(96, 'east', 'Mark');
INSERT INTO test VALUES(96, 'west', 'Andrew');
INSERT INTO test VALUES(96, 'south', 'Alex');
INSERT INTO test VALUES(96, 'north', 'Ryan');
INSERT INTO test VALUES(97, 'north', 'Justin');
INSERT INTO test VALUES(97, 'south', 'Tyler');
SELECT
route,
(SELECT operatorName
FROM test
WHERE route = t2.route
AND direction =
CASE
WHEN direction_priority = 1 THEN 'west'
WHEN direction_priority = 2 THEN 'north'
WHEN direction_priority = 3 THEN 'south'
WHEN direction_priority = 4 THEN 'east'
END) AS operator_name
FROM (
SELECT
route,
MIN(direction_priority) AS direction_priority
FROM (
SELECT
route,
operatorName,
CASE
WHEN direction = 'west' THEN 1
WHEN direction = 'north' THEN 2
WHEN direction = 'south' THEN 3
WHEN direction = 'east' THEN 4
END AS direction_priority
FROM test
) t
GROUP BY route
) t2
;
```
Firstly, we select all records with `direction` changed to a number so it is in required order. Then, we `GROUP` by each route and get the minimal direction. What is left remains in the outermost query - select the operator name based on the lowest found direction.
Output:
```
ROUTE OPERATOR_NAME
95 Richard
96 Andrew
97 Justin
```
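Either answer can be checked against the sample rows. Here is the accepted MAX-over-weights approach run under SQLite from Python, using the data from the inserts above (the empty-operator-name wrinkle is ignored):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (route INT, direction TEXT, operatorName TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)",
    [(95, 'east', 'James'), (95, 'west', 'Richard'), (95, 'north', 'Dave'),
     (95, 'south', 'Devon'), (96, 'east', 'Mark'), (96, 'west', 'Andrew'),
     (96, 'south', 'Alex'), (96, 'north', 'Ryan'),
     (97, 'north', 'Justin'), (97, 'south', 'Tyler')])

# Map each direction to a weight expressing the preference order.
conn.execute("CREATE TABLE priority (direction TEXT, weight INT)")
conn.executemany("INSERT INTO priority VALUES (?, ?)",
    [('west', 4), ('north', 3), ('south', 2), ('east', 1)])

# Per route, keep only the row whose weight equals that route's MAX weight.
rows = conn.execute("""
    SELECT t.route, t.operatorName
    FROM test t
    JOIN priority p ON p.direction = t.direction
    JOIN (SELECT t2.route, MAX(p2.weight) AS weight
          FROM test t2 JOIN priority p2 ON p2.direction = t2.direction
          GROUP BY t2.route) m
      ON m.route = t.route AND m.weight = p.weight
    ORDER BY t.route
""").fetchall()

print(rows)  # [(95, 'Richard'), (96, 'Andrew'), (97, 'Justin')]
```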
Please, next time attach sample data not as a picture, but either as a plain text, or as inserts (best at [SQLFiddle](http://sqlfiddle.com)).
Check this solution at [SQLFiddle](http://sqlfiddle.com/#!2/dfa42e/4) | MySQL group by with ordering/priority of another column | [
"",
"mysql",
"sql",
"group-by",
"sql-order-by",
""
] |
I am trying to create a database that holds staff information such as their names, timesheets, holidays booked, etc., as well as information about the projects they are carrying out and which companies the projects are for. My code is below:
```
CREATE TABLE IF NOT EXISTS tblcompany (
companyid INT(11) UNSIGNED NOT NULL,
custfirst VARCHAR(50),
custlast VARCHAR(50),
company VARCHAR(50),
custphone VARCHAR(50),
custemail VARCHAR(50),
PRIMARY KEY (companyid),
INDEX (companyid),
CONSTRAINT FOREIGN KEY (companyid)
REFERENCES tblproject (companyid)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS tblemployee (
employeeid INT(11) UNSIGNED NOT NULL,
employeefirst VARCHAR(50),
employeelast VARCHAR(50),
employeephone VARCHAR(50),
employeeemail VARCHAR(50),
PRIMARY KEY (employeeid),
INDEX (employeeid),
CONSTRAINT FOREIGN KEY (employeeid)
REFERENCES tbltimesheet (employeeid),
CONSTRAINT FOREIGN KEY (employeeid)
REFERENCES tblholiday (employeeid),
CONSTRAINT FOREIGN KEY (employeeid)
REFERENCES tblannualleave (employeeid)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS tblholiday (
holidayid INT(11) UNSIGNED NOT NULL,
employeeid INT(11) UNSIGNED NOT NULL,
holidayfrom DATE,
holidayto DATE,
holidayhalfday BOOLEAN,
holidayreason VARCHAR(50),
INDEX (employeeid),
PRIMARY KEY (holidayid)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS tblannualleave (
annualleaveid INT(11) UNSIGNED NOT NULL,
employeeid INT(11) UNSIGNED NOT NULL,
annualleavetaken INT(11),
annualleaveremain INT(11),
anuualleavetotal INT(11),
INDEX (employeeid),
PRIMARY KEY (annualleaveid)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS tblproject (
projectid INT(11) UNSIGNED NOT NULL,
projectname VARCHAR(50),
projecttype VARCHAR(50),
companyid INT(11) UNSIGNED NOT NULL,
projectnotes VARCHAR(50),
PRIMARY KEY (projectid),
INDEX (projectid),
CONSTRAINT FOREIGN KEY (projectid)
REFERENCES tbltimesheet (projectid)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS tbltimesheet (
timesheetid INT(11) UNSIGNED NOT NULL,
employeeid INT(11) UNSIGNED NOT NULL,
projectid INT(11) UNSIGNED NOT NULL,
timesheetdate DATE,
timesheethours INT(11),
timesheetnotes VARCHAR(50),
INDEX (employeeid),
PRIMARY KEY (timesheetid)
) ENGINE=InnoDB;
```
I have been looking around and have tried everything; it is probably something simple. I have changed all the datatypes to similar ones to see if this would solve the problem, but no luck. The error code I get is:
> Error Code: 1215. Cannot add foreign key constraint
> 0.063 sec
>
> CREATE TABLE IF NOT EXISTS tblcompany ( companyid INT(11) UNSIGNED
> NOT NULL, custfirst VARCHAR(50), custlast VARCHAR(50),
> company VARCHAR(50), custphone VARCHAR(50), custemail
> VARCHAR(50), PRIMARY KEY (companyid), INDEX (companyid),
> CONSTRAINT FOREIGN KEY (companyid) REFERENCES tblproject
> (companyid) ) ENGINE=InnoDB
>
> 11:15:57 CREATE TABLE IF NOT EXISTS tblcompany ( companyid INT(11)
> UNSIGNED NOT NULL, custfirst VARCHAR(50), custlast
> VARCHAR(50), company VARCHAR(50), custphone VARCHAR(50),
> custemail VARCHAR(50), PRIMARY KEY (companyid), INDEX
> (companyid), CONSTRAINT FOREIGN KEY (companyid) REFERENCES
> tblproject (companyid) ) ENGINE=InnoDB Error Code: 1215. Cannot add
> foreign key constraint 0.063 sec
Thank you for looking.. | Create the table tblproject first before you reference it.
Besides the wrong table order, you also need a primary or unique key on the referenced columns.
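The ordering requirement can be illustrated with SQLite from Python. SQLite defers the check until a write, while MySQL/InnoDB fails at `CREATE TABLE` time with error 1215, but the cure is the same: create the referenced table, with a key on the referenced column, first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Child table created before its parent. SQLite accepts the DDL and only
# complains at write time; MySQL/InnoDB rejects the CREATE itself.
conn.execute("CREATE TABLE tblcompany (companyid INTEGER PRIMARY KEY "
             "REFERENCES tblproject (projectid))")
try:
    conn.execute("INSERT INTO tblcompany VALUES (1)")
    error = None
except sqlite3.Error as e:
    error = str(e)
print(error)  # complains that tblproject does not exist

# Creating the parent first, with a key on the referenced column, fixes it.
conn.execute("CREATE TABLE tblproject (projectid INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO tblproject VALUES (1)")
conn.execute("INSERT INTO tblcompany VALUES (1)")
print(conn.execute("SELECT companyid FROM tblcompany").fetchall())  # [(1,)]
```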
[SQL fiddle](http://sqlfiddle.com/#!2/7c43d) | I think you will solve your problem by creating your tables in the opposite order; otherwise you will have the same issue with tblemployee.
A Foreign Key requires that the referenced table already exists. | MySQL error 1215, what am I doing wrong? | [
"",
"mysql",
"sql",
"database",
""
] |
I don't understand why this query isn't working as expected. I have a horse table and file table. The file table has columns "fk\_object" and "fk\_id" so I can get records like fk\_object="horse\_photo" and fk\_id=725. The file table also has is\_default and position columns so I can grab the file (photo) that should be its profile picture. However, I'm not getting the default photo or even first position file. Can someone explain to me why this query doesn't work as expected and what the proper solution would be? Thanks!
```
SELECT `horse`.`id`,
`horse`.`name`,
`file`.`id` AS `fid`,
`file`.`is_default`,
`file`.`position`
FROM `horse`
LEFT OUTER JOIN `file` ON (`file`.`fk_id`=`horse`.`id`
AND `file`.`fk_object`="horse_photo")
GROUP BY `horse`.`id`
ORDER BY `horse`.`id` ASC,
`file`.`is_default` DESC,
`file`.`position` ASC;
```
To be clear, I want to retrieve all horses and their default photo (if there is one).
**More Details:**
File.is\_default is a boolean with either 0 or 1. File.position is UNSIGNED INT starting at 0. The file joined should first be is\_default=1, and then resolve to File.position=0 (or the smallest int).
The results I'm getting:
A list sorted by horse.id ASC, however the File joined appears to just be the first File (ordered by the primary id column). | To answer your question in the comment to my first answer, no, you cannot order anything used in a join. You can't order a view, inline function, derived table, CTE, or a subquery without using restrictive commands.
So, let me give this one more try. You want a query that returns one record per horse. If there is a default photo, return that photo. If there is no default, return the one with the lowest position number. Now, I'm a big fan of CTEs, as you'll see below. If you're new to CTEs, this may be confusing, but they can make your life a lot easier once you master them!
The first CTE (i.e., Default\_Photo) will bring back, for each horse, the data on any default photo that exists. If there is no default photo, then there will be no record for that horse.
The second CTE (i.e., Next\_Best\_Photos) will bring back one row for each horse with the minimum position number for photos that are not the default. This will contain non-default photos even for horses that do have default photos. That's the way it's supposed to be.
The third CTE (i.e., Final\_Dataset) will bring back one distinct row for each horse. If that horse is missing a corresponding row in the Default\_Photos CTE (i.e., it does not have a default photo), then it will display the information associated with the photo with the minimum position.
The reason you need the third query is that, in T-SQL (which is what I use), when you use the DISTINCT command, it will not let you sort by derived columns.
```
With Default_Photos as (
Select H.ID
, F.ID as 'FID'
, F.Position
From Horse H
Inner Join File F
            on H.ID = F.fk_id
Where F.is_default = 1
)
, Next_Best_Photo as (
Select H.ID
, F.ID as 'FID'
, Min(F.Position) as Min_Position
From Horse H
Inner Join File F
            on H.ID = F.fk_id
Where F.is_default = 0
Group by H.ID
, F.ID
)
, Final_Dataset as (
Select Distinct H.ID
, H.name
, F.ID as 'FID'
, Case
When DP.ID is null
Then 1
Else 0
End as is_default
, Case
When DP.ID is null
Then F2.Position
Else DP.Position
End as position
From Horse H
Inner Join File F
            on H.ID = F.fk_id
Left Outer Join Default_Photos DP
on H.ID = DP.ID
Left Outer Join Next_Best_Photo NBP
on H.ID = NBP.ID
Left Outer Join File F2
on NBP.FID = F2.ID
and NBP.Min_Position = F2.Position
)
Select *
From Final_Dataset
Order by id asc
, is_default desc
, position asc
```
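For comparison, the same "default photo, else lowest position" requirement can be expressed more compactly with a `ROW_NUMBER()` window function (a different technique from the CTE chain above; it needs SQLite 3.25+, or MySQL 8.0+ for the same SQL). A sketch in Python with SQLite, with made-up horse names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE horse (id INT, name TEXT);
CREATE TABLE file  (id INT, fk_object TEXT, fk_id INT, is_default INT, position INT);
INSERT INTO horse VALUES (1, 'Star'), (2, 'Comet'), (3, 'Dash');
INSERT INTO file VALUES
  (10, 'horse_photo', 1, 0, 0),
  (11, 'horse_photo', 1, 1, 2),   -- horse 1's default, despite higher position
  (20, 'horse_photo', 2, 0, 1),
  (21, 'horse_photo', 2, 0, 0);   -- horse 2 has no default: lowest position wins
""")

# Rank each horse's photos (default first, then position) and keep rank 1.
rows = conn.execute("""
    WITH ranked AS (
        SELECT fk_id, id,
               ROW_NUMBER() OVER (PARTITION BY fk_id
                                  ORDER BY is_default DESC, position ASC) AS rn
        FROM file
        WHERE fk_object = 'horse_photo'
    )
    SELECT h.id, h.name, r.id AS fid
    FROM horse h
    LEFT JOIN ranked r ON r.fk_id = h.id AND r.rn = 1
    ORDER BY h.id
""").fetchall()

print(rows)  # [(1, 'Star', 11), (2, 'Comet', 21), (3, 'Dash', None)]
```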
There may be an easier way to do this, but this is how I construct datasets for my dashboards where I need to be absolutely sure I only get one row per ID. | Your issue is that you are grouping by horse id without using aggregate functions on the other columns you are selecting. If you group on one column, every other selected column has to be wrapped in an aggregate (SUM, MAX, MIN, etc.) or added to the GROUP BY. This should work:
```
SELECT `horse`.`id`, `horse`.`name`, `file`.`id` AS `fid`, `file`.`is_default`, `file`.`position`
FROM `horse`
LEFT OUTER JOIN `file` ON (`file`.`fk_id`=`horse`.`id` AND `file`.`fk_object`="horse_photo")
GROUP BY `horse`.`id`, `horse`.`name`, `file`.`id`, `file`.`is_default`, `file`.`position`
ORDER BY `horse`.`id` ASC, `file`.`is_default` DESC, `file`.`position` ASC;
``` | MySQL ordering join results | [
"",
"mysql",
"sql",
"join",
"group-by",
""
] |
I am new to SQL. I have a syntax error and cannot seem to get the SQL Query System to agree with it:
```
select t.tracktitle from tracks t
inner join titles ti
inner join artists ar
if (ar.artistname = "The Bullets", 'yes', 'no')
on ti.titleid = t.titleid;
```
I am trying to find all tracks by the artist name, "The Bullets". My tables resemble the following:
**Tracks**
```
TitleID, TrackNum, TrackTitle
```
**Titles**
```
TitleID, ArtistID, Title
```
**Artists**
```
ArtistID, ArtistName, Region
```
My task is to find all tracks by the artist name "The Bullets"; here is my attempt at the query:
```
select t.tracktitle from tracks t
inner join titles ti
inner join artists ar
if (ar.artistname = "The Bullets", 'yes', 'no')
on ti.titleid = t.titleid;
```
The problem is that I need a 'yes' (if it matches the artist name) or 'no' if it does not. | The `if` statement is the issue; in SQL that kind of conditional is written as a `CASE` expression. That said, based on what you want, you should move that up to the `JOIN`:
```
SELECT Tracks.tracktitle
FROM tracks Tracks
INNER JOIN titles Titles ON Titles.titleid = Tracks.titleid
INNER JOIN artists Artists ON Artists.artistid = Titles.artistid
AND Artists.artistname = 'The Bullets';
```
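A quick check of the join-filter version with SQLite from Python; the album and track titles are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE artists (ArtistID INT, ArtistName TEXT, Region TEXT);
CREATE TABLE titles  (TitleID INT, ArtistID INT, Title TEXT);
CREATE TABLE tracks  (TitleID INT, TrackNum INT, TrackTitle TEXT);
INSERT INTO artists VALUES (1, 'The Bullets', 'US'), (2, 'Other Band', 'UK');
INSERT INTO titles  VALUES (10, 1, 'Album A'), (20, 2, 'Album B');
INSERT INTO tracks  VALUES (10, 1, 'Track One'), (10, 2, 'Track Two'),
                           (20, 1, 'Track Three');
""")

# Filtering in the JOIN keeps only tracks on titles by The Bullets.
rows = conn.execute("""
    SELECT t.TrackTitle
    FROM tracks t
    INNER JOIN titles  ti ON ti.TitleID = t.TitleID
    INNER JOIN artists a  ON a.ArtistID = ti.ArtistID
                         AND a.ArtistName = 'The Bullets'
    ORDER BY t.TrackNum
""").fetchall()

print([r[0] for r in rows])  # ['Track One', 'Track Two']
```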
If you wanted to bring over all artists and have something like an identifier row (your "yes" or "no"):
```
SELECT Tracks.tracktitle
,CASE Artists.artistname
WHEN 'The Bullets' THEN 'yes'
ELSE 'no'
END AS isTheBullets
FROM tracks Tracks
INNER JOIN titles Titles ON Titles.titleid = Tracks.titleid
INNER JOIN artists Artists ON Artists.artistid = Titles.artistid;
```
This is how you do an `if` statement like you were attempting with SQL syntax. Notice that I no longer include `ar.artistname` in the JOIN, because you want to bring back all artists, and just identify those that are 'The Bullets'. | Try this:
```
select t.tracktitle
from tracks t inner join titles ti
on ti.titleid = t.titleid
inner join artists ar
on ar.artistid = ti.artistid
and ar.ArtistName = 'The Bullets'
```
If you need to check if record exists:
```
select t.tracktitle
from tracks t inner join titles ti
on ti.titleid = t.titleid
inner join artists ar
on ar.artistid = ti.artistid
and ar.ArtistName = 'The Bullets'
limit 1
```
Empty result (no rows) - 'no', one row - 'yes' | SQL Query INNER JOIN Syntax | [
"",
"sql",
""
] |