Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am trying to run a series of checks on TextBox1 and display error messages when conditions are not met. Ultimately, once all conditions are met, I then do a table insert and open a file dialog.
The first statement is to check if there is a value
The second checks that no numbers have been used
The third checks a SQL database to make sure the value is not already in use; it is here that I am having the issue.
If the value exists in the dB it flags the correct error:
```
MessageBox.Show(TextBox1.Text & " exists already.")
```
My issue is that when that condition is not met, I get a system error:
"Object reference not set to an instance of an object."
for this line of code:
```
Dim tarrif As String = cmd.ExecuteScalar().ToString()
```
I am not sure why, because the rest of the code is below the Else statement.
Below is a longer section of my code, to give context.
```
Private Sub Button3_Click(sender As Object, e As EventArgs) Handles Button3.Click
'Checks something is in Textbox1
If TextBox1.Text.Trim.Length = 0 Then
MessageBox.Show("Please specify new tarrif name!", "Indigo Billing", _
MessageBoxButtons.OK, MessageBoxIcon.Exclamation)
Else
'Check textbox only has characters
If System.Text.RegularExpressions.Regex.IsMatch(TextBox1.Text, "^[0-9]*$") Then
'Error if box contains numbers
MessageBox.Show("Please enter Characters only!", "Indigo Billing", _
MessageBoxButtons.OK, MessageBoxIcon.Exclamation)
Else
'Check name is not already in use
Using conn As New SqlClient.SqlConnection("server=barry-laptop\SQLEXPRESS; database=Test; integrated security=yes")
Using cmd As SqlClient.SqlCommand = conn.CreateCommand()
cmd.CommandText = "SELECT 1 FROM Tarrifs WHERE Tarrif = @Tarrif"
cmd.Parameters.AddWithValue("@Tarrif", TextBox1.Text)
conn.Open()
Dim tarrif As String = cmd.ExecuteScalar().ToString()
If tarrif = "1" Then
MessageBox.Show(TextBox1.Text & " exists already.")
Else
'Create new table based on specified Tarrif name
Using con = New SqlConnection("server=barry-laptop\SQLEXPRESS; database=Test; integrated security=yes")
Using cmda = New SqlCommand("CREATE TABLE " & TextBox1.Text & " (CallType VarChar(30),ChargeCode VarChar(30),Destination VarChar(30),TariffUsed VarChar(30),Peak Float,OffPeak Float,Weekend Float,Setup Float,MinimumCharge Float,ChargeCap INT,InitialUnits INT,InitialCharge INT,InitialPeak INT,InitialOffPeak INT,InitialWeekend INT,BillingUnit INT,MinimumUnits INT,RateType VarChar(30));", con)
con.Open()
cmda.ExecuteNonQuery()
con.Close()
End Using
'import name into Tarrif table
Using cmdb = New SqlCommand("INSERT INTO Tarrifs (Tarrif) VALUES (@tarrif2)", con)
con.Open()
cmdb.Parameters.AddWithValue("@tarrif2", TextBox1.Text)
cmdb.ExecuteNonQuery()
con.Close()
End Using
End Using
'--First create a datatable with the same cols as CSV file, the cols order in both should be same
Dim table As New DataTable()
table.Columns.Add("CallType", GetType(String))
table.Columns.Add("ChargeCode", GetType(String))
table.Columns.Add("Destination", GetType(String))
table.Columns.Add("TariffUsed", GetType(String))
table.Columns.Add("Peak", GetType(Decimal))
table.Columns.Add("OffPeak", GetType(Decimal))
table.Columns.Add("Weekend", GetType(Decimal))
table.Columns.Add("Setup", GetType(Decimal))
table.Columns.Add("MinimumCharge", GetType(Decimal))
table.Columns.Add("ChargeCap", GetType(Integer))
table.Columns.Add("InitialUnits", GetType(Integer))
table.Columns.Add("InitialCharge", GetType(Integer))
table.Columns.Add("InitialPeak", GetType(Integer))
table.Columns.Add("InitialOffPeak", GetType(Integer))
table.Columns.Add("InitialWeekend", GetType(Integer))
table.Columns.Add("BillingUnit", GetType(Integer))
table.Columns.Add("MinimumUnits", GetType(Integer))
table.Columns.Add("RateType", GetType(String))
'open file dialog and store filename'
Dim openFileDialog1 As New OpenFileDialog
Dim strFileName As String
```
If anyone can help with this issue, I would be very grateful | ```
Dim tarrif As String = cmd.ExecuteScalar().ToString()
```
The problem is if no result is returned, then `cmd.ExecuteScalar()` will return `Nothing` and `cmd.ExecuteScalar().ToString()` will throw an exception.
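An alternative fix on the SQL side (a sketch of the same check) is to select a count instead of a constant, since `COUNT(*)` always returns a value, so `ExecuteScalar()` can never return `Nothing`:

```sql
-- Returns 0 when the tariff is unused, so ExecuteScalar() never yields Nothing
SELECT COUNT(*) FROM Tarrifs WHERE Tarrif = @Tarrif
```

The VB side would then compare the returned scalar to zero rather than checking for `Nothing`.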
A better option is:
```
cmd.CommandText = "SELECT 1 FROM Tarrifs WHERE Tarrif = @Tarrif"
cmd.Parameters.AddWithValue("@Tarrif", TextBox1.Text)
conn.Open()
If cmd.ExecuteScalar() IsNot Nothing Then
``` | I think the query will return NULL if @Tarrif does not match any values in the table. This might be the problem. Just a guess - haven't tested. Maybe just change the query to
```
SELECT isNull(TarrifId,0) FROM Tarrifs WHERE Tarrif = @Tarrif
``` | Object reference error when checking if value exists in SQL dB | [
"",
"sql",
"vb.net",
""
] |
Table 1
```
Column1 Column2 Column 3
A Null 12
B Null 15
C 0 15
```
Table 2
```
Column2 Column3
0 15
0 12
```
I have Table 1 and Table 2. Here I'm passing Table 2's values as parameters to Table 1, which should return Column1; NULL values should also be treated as matches, as in the scenario below.
If I pass (0, 15) to Table 1 then it should return 'C', not 'B'.
If I pass (0, 12) to Table 1 then it should return 'A'.
It should always return one value, not multiple values.
Could you please help me with this logic ? | You could do this as a union:
```
select Column1 from
(
select Table1.Column1 as Column1,Table1.Column2 as Column2
from Table1
join Table2 on
Table1.Column2=Table2.Column2 and Table1.Column3=Table2.Column3
UNION
select Table1.Column1 as Column1,Table1.Column2 as Column2
from Table1
join Table2 on
Table1.Column2 is NULL and Table1.Column3=Table2.Column3
) as Unioned
ORDER BY Column2 NULLS LAST
LIMIT 1;
```
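Note that `NULLS LAST` and `LIMIT` are PostgreSQL/Oracle-style syntax. If the target engine is SQL Server, the same "prefer the exact match" ordering can be sketched with `TOP` and a `CASE` sort key:

```sql
SELECT TOP 1 Column1
FROM (
    SELECT Table1.Column1, Table1.Column2
    FROM Table1
    JOIN Table2
      ON Table1.Column2 = Table2.Column2 AND Table1.Column3 = Table2.Column3
    UNION
    SELECT Table1.Column1, Table1.Column2
    FROM Table1
    JOIN Table2
      ON Table1.Column2 IS NULL AND Table1.Column3 = Table2.Column3
) AS Unioned
ORDER BY CASE WHEN Column2 IS NULL THEN 1 ELSE 0 END;  -- non-NULL (exact) matches first
```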
The UNION discards duplicates, but you could always wrap the entire statement in another SELECT if you explicitly need to tell it to return at most one value. | ```
SELECT t1.* FROM table1 t1
INNER JOIN table2 t2
ON NVL(t1.column2,0) = t2.column2 and t1.column3 = t2.column3
``` | SQL - Null value should match with not null value with another table | [
"",
"sql",
""
] |
Help me understand what this question means.
"Find all combination of employee names and department names."
For my table
```
//EMPLOYEE
E# ENAME DNAME
-----------------------------
1 JOHN IT
2 MAY SCIENCE
3 SITI SCIENCE
//DEPARTMENT
DNAME
------------
RESEARCH
IT
SCIENCE
```
Just for my understanding, what does the question want me to do?
I used the following query:
```
SELECT ENAME,DNAME FROM EMPLOYEE;
``` | It's a bit unclear, but to find all combinations you would create a [Cartesian product](http://en.wikipedia.org/wiki/Cartesian_product).
```
select
e.ename, d.dname
from
employee e,
department d
```
Oracle supports the above SQL, and also supports [`CROSS JOIN`](http://docs.oracle.com/javadb/10.8.2.2/ref/rrefsqljcrossjoin.html) to mean the same thing.
```
select
e.ename, d.dname
from
employee e cross join department d
```
This joins each row in the `employee` table to each row in the `department` table; with 3 employees and 3 departments, that yields 3 × 3 = 9 rows.
This would create:
```
ENAME DNAME
---------------------
JOHN IT
JOHN SCIENCE
JOHN RESEARCH
MAY SCIENCE
MAY RESEARCH
MAY IT
SITI SCIENCE
SITI IT
SITI RESEARCH
``` | My understanding is they want you to create a Cartesian product:
```
select
EMPLOYEE.ENAME
,DEPARTMENT.DNAME
FROM EMPLOYEE, DEPARTMENT
``` | SELECT clause with condition | [
"",
"sql",
"oracle",
""
] |
The database is updated by executing a list of queries that are located in a folder.
I need to be able to detect *any* errors that would also result in "Query completed with errors" in SQL Server Management Studio.
The following works to detect the "Invalid Object" error:
```
PS SQLSERVER:\> $ErrorActionPreference
Stop
PS SQLSERVER:\> $Error.Clear()
PS SQLSERVER:\> $Error
PS SQLSERVER:\> Invoke-Sqlcmd -ServerInstance .\SQLEXPRESS -Database Test -Query "select * from doesnotexist" -ErrorAction SilentlyContinue
PS SQLSERVER:\> $Error.Exception
Invalid object name 'doesnotexist'.
PS SQLSERVER:\>
```
Doing the same for select 1/0 does *not* work:
```
PS SQLSERVER:\> $ErrorActionPreference
Stop
PS SQLSERVER:\> $Error.Clear()
PS SQLSERVER:\> $Error
PS SQLSERVER:\> Invoke-Sqlcmd -ServerInstance .\SQLEXPRESS -Database Test -Query "select 1/0" -ErrorAction SilentlyContinue
PS SQLSERVER:\> $Error.Exception
PS SQLSERVER:\>
```
I would expect this to result in a "Divide by zero error encountered" error just like in SSMS.
Not detecting this particular error makes me wonder if other errors will also remain undetected.
Any idea why this is happening and how I can make sure that *all* errors will be detected?
**UPDATE**
It turns out that I do not have Invoke-Sqlcmd available on the server I am installing to, so on second thought I have to use sqlcmd.exe.
I think this is working for me:
```
$tempfile = [io.path]::GetTempFileName()
$cmd = [string]::Format("sqlcmd -S {0} -U {1} -P {2} -d {3} -i {4} -b > $tempfile",
$g_connectionstring."Data Source",
$g_connectionstring."User ID",
$g_connectionstring."Password",
$g_connectionstring."Initial Catalog",
$path)
Invoke-Expression -Command $cmd
if ($LASTEXITCODE)
{
$err = Get-Content $tempfile | Out-String
Corax-Message "SQL" "Error" $err
exit
}
Remove-Item $tempfile
``` | Regardless of the ErrorAction setting, the Invoke-Sqlcmd cmdlet has a bug, present in the SQL Server 2008, 2008 R2, and 2012 versions of the cmdlet, where T-SQL errors like divide by zero do not cause an error. I logged a Connect item on this and you can see the details here:
<https://connect.microsoft.com/SQLServer/feedback/details/779320/invoke-sqlcmd-does-not-return-t-sql-errors>
Note: the issue is fixed in SQL 2014; however, it does not appear a fix has been or will be provided for previous versions. | You are ignoring errors by using `-ErrorAction SilentlyContinue`; change that to `-ErrorAction Stop`.
**Edit:**
It turns out that if you want error handling, you shouldn't use `Invoke-SqlCmd` (<http://blogs.technet.com/b/heyscriptingguy/archive/2013/05/06/10-tips-for-the-sql-server-powershell-scripter.aspx>)
You can use `GenericSqlQuery` function from this answer:[Powershell SQL SELECT output to variable](https://stackoverflow.com/questions/22714531/powershell-sql-select-output-to-variable/22715645#22715645)
which throws errors correctly. | Error detection from Powershell Invoke-Sqlcmd not always working? | [
"",
"sql",
"powershell",
"error-handling",
""
] |
I'm using a mysql database, and I have the next query:
```
SELECT DISTINCT OP1.meta_value AS meta_value
FROM wp_postmeta OP1 WHERE OP1.meta_key like 'size'
and OP1.meta_value is not null and OP1.meta_value <> 'null'
ORDER BY OP1.meta_value DESC
```
The results are: small, medium, huge, big.
I need to order this way: small, medium, big, huge...
Is there any way to order by a "literal" list of values?
I think something like this, but it doesn't work:
```
SELECT DISTINCT OP1.meta_value AS meta_value FROM
wp_postmeta OP1 WHERE OP1.meta_key like 'size'
and OP1.meta_value is not null
and OP1.meta_value <> 'null'
ORDER BY ('small', 'medium', 'big', 'huge')
```
Thanks in advance | You can do it with the [FIELD](http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_field) function
```
SELECT DISTINCT
OP1.meta_value AS meta_value
FROM
wp_postmeta OP1
WHERE OP1.meta_key like 'size'
and OP1.meta_value is not null
and OP1.meta_value <> 'null'
ORDER BY FIELD (meta_value, 'small', 'medium', 'big', 'huge')
``` | ```
SELECT DISTINCT OP1.meta_value AS meta_value FROM
wp_postmeta OP1 WHERE OP1.meta_key like 'size'
and OP1.meta_value is not null
and OP1.meta_value <> 'null'
ORDER BY (CASE
WHEN meta_value='small' THEN 1
WHEN meta_value='medium' THEN 2
WHEN meta_value='big' THEN 3
WHEN meta_value='huge' THEN 4
END)
``` | Order an SQL Query by literal | [
"",
"mysql",
"sql",
""
] |
I've inherited a poorly set-up data source. Given the current data structure of:
```
PreAddress City County
12312 Osprey Drive NW Gig Harbor NULL NULL
12312 Osprey Drive NW Gig Harbor NULL NULL
3022 SW Bradford St Seattle NULL NULL
3022 SW Bradford St Seattle NULL NULL
4605 Prestwick Lane SE Olympia NULL NULL
921 129th Street Court East Tacoma Auburn/Pierce NULL NULL
```
I need to tear the City names out of the PreAddress column, and dump it in the City column, so it looks like:
```
PreAddress City County
12312 Osprey Drive NW Gig Harbor NULL
12312 Osprey Drive NW Gig Harbor NULL
3022 SW Bradford St Seattle NULL
3022 SW Bradford St Seattle NULL
4605 Prestwick Lane SE Olympia NULL
921 129th Street Court East Tacoma Auburn/Pierce NULL
```
Any SQL Gurus out there have any idea how to script that?
**UPDATE**
First Pass SQL:
```
USE [SMS]
GO
IF OBJECT_ID('tempdb..#tmpCitiesCounties') IS NOT NULL
DROP TABLE #tmpCitiesCounties
GO
IF OBJECT_ID('tempdb..#tmpCityCleanup') IS NOT NULL
DROP TABLE #tmpCityCleanup
GO
CREATE TABLE #tmpCitiesCounties
( [ccId] INT IDENTITY(1, 1)
PRIMARY KEY
, [City] VARCHAR(50) NOT NULL
, [County] VARCHAR(50) NOT NULL );
CREATE TABLE #tmpCityCleanup
( [Id] INT NULL
, [Address] VARCHAR(64) NULL
, [City] VARCHAR(50) NULL
, [County] VARCHAR(50) NULL );
INSERT INTO [#tmpCitiesCounties]
( [City], [County] )
VALUES ( 'Battle Ground', 'Clark' ),
( 'Camas', 'Clark' ),
( 'La Center', 'Clark' ),
( 'Ridgefield', 'Clark' ),
( 'Vancouver', 'Clark' ),
( 'Washougal', 'Clark' ),
( 'Yacolt', 'Clark' ),
( 'Fircrest', 'Pierce' ),
( 'Gig Harbor', 'Pierce' ),
( 'Unincorporated', 'Skagit' ),
( 'Arlington', 'Snohomish' ),
( 'Bothell/Snohomish', 'Snohomish' );
INSERT INTO [#tmpCityCleanup]
SELECT [SNPR].[Id]
, REPLACE(LOWER([SNPR].[PreAddress]), LOWER([TCC].[City]), '') AS [Address After]
, [TCC].[City]
, [TCC].[County]
FROM [dbo].[SellerNetProceedsResult] AS SNPR
LEFT JOIN [#tmpCitiesCounties] AS TCC
ON [TCC].[City] = RIGHT(LOWER([SNPR].[PreAddress]), LEN(LOWER([TCC].[City])))
ORDER BY [SNPR].[Id] DESC
SELECT [TCC1].[Id]
, [TCC1].[Address]
, [TCC1].[City]
, [TCC1].[County]
FROM [#tmpCityCleanup] AS TCC1
```
So, this chunk of SQL correctly tears things out (that temp table of Cities and Counties is truncated, as there are a lot more than I wanted to put in this post), but as in the case of the row above where it has "Tacoma Auburn/Pierce", the SQL above leaves "Tacoma" after removing "Auburn/Pierce".
If I then run the following SQL code, I get no matches between the two tables on the address:
```
SELECT [TCC1].*
, REPLACE([TCC1].[Address], [TCC2].City, '') AS [Address After]
--, [TCC2].[City]
--, RIGHT([TCC1].[Address], LEN([TCC2].[City]))
FROM [#tmpCityCleanup] AS TCC1
left JOIN [#tmpCitiesCounties] AS TCC2
ON [TCC2].[City] = RIGHT([TCC1].[Address], LEN([TCC2].[City]))
```
Instead, the "Address After" column is just null.
```
Id PreAddress City County Address After
151 12312 osprey drive nw Gig Harbor Algona King NULL
150 12312 osprey drive nw Gig Harbor Pierce NULL
```
Perhaps I'm missing something. | You'd need to have a lookup table of the possible city names. Off the top of my head, maybe something like:
```
UPDATE Address SET
PreAddress=Replace(a.PreAddress,b.City,''),
City=b.City
FROM Address a INNER JOIN Cities b ON b.City=RIGHT(a.PreAddress,LEN(b.city))
``` | If this is a one-off update as opposed to a regular fix-up, an alternative to SQL is a batch geocoding service that will format the addresses, and you could load them back into the table. You'd get coordinates that way too :). | SQL Copy part of a column into a new column | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Using Sequel Pro, I have these two tables:
```
Table1
Name Year x y
John Smith 2010 10 12
Adam Jones 2010 8 13
John Smith 2011 7 15
Adam Jones 2011 9 14
etc.
```
and
```
Table2
Name Year z
Smith John Smith John 2010 27
Jones Adam Jones Adam 2010 25
Smith John Smith John 2011 29
Jones Adam Jones Adam 2011 21
etc.
```
Basically, the names in Table2 are the same only with the last name and first name switched, then repeated once. So the Names in Table1 are found in the names of Table2 ("John Smith" is found in "Smith **John Smith** John"). I want to perform an inner join and connect the z value of Table2 to the other values of Table1 and get something like this:
```
Name x y z
John Smith 10 12 27
Adam Jones 8 13 25
```
So to do that, I ran this query:
```
Select Table1.*, Table2.z
From Table1
Inner join Table2
On Table1.Name like "%Table2.Name%" and Table1.Year=Table2.Year
```
But I got this as the output:
```
Name Year x y z
```
And that's it. I got the headings, but no rows. I don't know what I'm doing wrong... I suspect it probably has to do with the way I'm using the like operator but I don't know. Any help would be much appreciated. | A bit of an odd data model aside, you've turned the tables around in the `LIKE` part (table1.name should be a part of table2.name, not the other way around), and you need to add the percents to the *value*, not the *name* of the field, that means not quoting the name;
```
SELECT table1.*, table2.z
FROM table1
INNER JOIN table2
ON table2.name LIKE CONCAT('%', table1.name, '%')
AND table1.year = table2.year
```
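With the sample rows from the question, this join should return something like (one row per name/year pair, in no particular order):

```
Name        Year  x   y   z
John Smith  2010  10  12  27
Adam Jones  2010  8   13  25
John Smith  2011  7   15  29
Adam Jones  2011  9   14  21
```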
[An SQLfiddle to test with](http://sqlfiddle.com/#!2/24102/4). | Your query is incorrect: you are saying that the content of the column should be like `abcdTable2.Nameefgh`. This would be correct:
```
Select Table1.*, Table2.z
From Table1
Inner join Table2
On Table1.Name like "%" + Table2.Name+ "%" and Table1.Year=Table2.Year
```
This query will be quite slow for bigger tables, but I'm afraid that if you are joining on a name only, the tables can't really get big anyway, as you'll run into duplicates quite soon. | "Like" operator in inner join in SQL | [
"",
"sql",
"join",
"sql-like",
""
] |
I'm trying to write a query to produce a dataset from two or more tables, and I'm having trouble writing the query. I apologize in advance for my lack of knowledge of SQL.
Table 1 consists of basic customer account info and Table 2 consists of customer contract details where one customer account can have multiple contracts, both inactive and active
Table 1 and Table 2 can be joined with the values contained under a column named acct\_id.
I would like the query to show only acct\_ids where account status (acct\_status) is "active" from Table 1, and that do not have an "active" contract from Table 2.
The problem is that in Table 2, there are more than one contract associated to an acct\_id and are in different statuses.
If my where clause just focuses on the contract status values from table 2, my dataset won't be accurate. It will only return acct\_ids that have contracts with those values.
for example:
```
acct_iD 123 has 6 contracts: 1 active contract, 4 cancelled contracts, 1 cancel in progress contract
acct_ID 456 has 3 contracts: 3 cancelled contracts
acct_ID 789 has 4 contracts: 2 active contracts, 2 cancelled contracts
acct_ID 012 has 1 contract: 1 cancelled contract
```
I would like my query result to show only acct\_IDs: 456 and 012 as it truly represents that they do not have "active" contracts
I'm using SQL Management Studio 2008 R2. | ```
SELECT acct_id
FROM table1
WHERE acct_status = 'active'
  AND acct_id NOT IN (SELECT acct_id FROM table2 WHERE contract_status = 'active')
``` | ```
SELECT A.*
FROM Table1 A
WHERE A.acct_status = 'active'
AND NOT A.acct_id in (SELECT acct_id FROM Table2 WHERE contract_status = 'active')
``` | Query Results from Two Different Tables | [
"",
"sql",
"select",
"join",
""
] |
Hi, I have a column with a NUMBER datatype.
The data is like **1310112000**. This is a date, but I don't know how to convert it into an understandable format:
ex: 10-mar-2013 12:00:00 pm
Can any one please help me. | That is EPOCH time: number of seconds since Epoch(1970-01-01). Use this:
```
SELECT CAST(DATE '1970-01-01' + ( 1 / 24 / 60 / 60 ) * '1310112003' AS TIMESTAMP) FROM DUAL;
```
Result:
```
08-JUL-11 08.00.03.000000000 AM
``` | Please try
```
select from_unixtime(floor(EPOCH_TIMESTAMP/1000)) from table;
```
This will give a result like: 2018-03-22 07:10:45
See the reference from [MYSQL](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_from-unixtime). | How to convert Epoch time to date? | [
"",
"sql",
"oracle11g",
"unix-timestamp",
""
] |
I'm currently trying to teach myself SQL in order to write better reports with our Orion system and I'm running into a small issue. I want to generate a report with a count of the number of Windows machines and Linux machines. This is my current code.
```
SELECT OperatingSystem, Count(OperatingSystem) AS TotalMachines
FROM Machines
Where
(
(OperatingSystem LIKE '%Windows%') OR
(OperatingSystem LIKE '%Linux%')
)
GROUP BY OperatingSystem
```
And the result I get is this:
```
Red Hat Enterprise Linux 20
Novell SUSE Linux Enterprise 17
Debian Linux 5
Windows Server 2008 (32-bit) 11
Windows Server 2008 R2 (32-bit) 49
Windows Server 2008 (64-bit) 33
Windows Server 2008 R2 (64-bits) 16
Windows Server 2003 (32-bit) 35
```
Is it possible to combine all of the different Linux Operating Systems into a single row called Linux and combine all of the Windows Operating Systems into a single row called Windows in an SQL query? | To solve such problems I prefer `union` over `case`, as it lets you easily extend the query in the future:
```
select OSType, count(*) as TotalMachines
from (
SELECT 'Linux' as OSType FROM Machines WHERE OperatingSystem LIKE '%Linux%'
UNION ALL
SELECT 'Windows' as OSType FROM Machines WHERE OperatingSystem LIKE '%Windows%'
) as subquery
GROUP BY OSType
```
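With the counts listed in the question, either approach should collapse the result to two rows:

```
OSType   TotalMachines
Linux    42
Windows  144
```

(Linux: 20 + 17 + 5 = 42; Windows: 11 + 49 + 33 + 16 + 35 = 144.)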
In any case, check both variants and select the fastest. | Yes. You want to use `case` in the `group by` clause itself:
```
SELECT (case when OperatingSystem LIKE '%Windows%' then 'Windows'
when OperatingSystem LIKE '%Linux%' then 'Linux'
end) as WhichOs, Count(*) AS TotalMachines
FROM Machines
Where (OperatingSystem LIKE '%Windows%') OR
(OperatingSystem LIKE '%Linux%')
GROUP BY (case when OperatingSystem LIKE '%Windows%' then 'Windows'
when OperatingSystem LIKE '%Linux%' then 'Linux'
end);
```
EDIT:
The above should work (note the same expression is in the `select` and `group by`. Perhaps this will work:
```
SELECT WhichOs, Count(*) AS TotalMachines
FROM (SELECT m.*,
(case when OperatingSystem LIKE '%Windows%' then 'Windows'
when OperatingSystem LIKE '%Linux%' then 'Linux'
end) as WhichOs
FROM Machines m
) m
Where (OperatingSystem LIKE '%Windows%') OR
(OperatingSystem LIKE '%Linux%')
GROUP BY WhichOs;
``` | Combine SQL Count Results | [
"",
"sql",
""
] |
Could anyone give me an idea how to fetch the last 50 (or last n) records when the table is in ascending order? | I know this post is old, but I had to share this solution.
I struggled with this for some time as well but at midnight today, it hit me like a rock. The below works for SQL Server and has a very fast return time. It returns the records in ASC order.
```
SELECT * FROM table1
WHERE column1 in (
SELECT TOP(50) column1 FROM Table1
ORDER BY Column1 DESC)
ORDER BY Column1
```
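An equivalent pattern that avoids the `IN` lookup (a sketch of the same idea, assuming `Column1` is the sort key) re-sorts a `TOP` derived table:

```sql
SELECT *
FROM (
    SELECT TOP(50) *
    FROM table1
    ORDER BY Column1 DESC   -- grab the last 50 rows
) AS LastRows
ORDER BY Column1 ASC;       -- put them back in ascending order
```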
Note: changing `TOP(50)` to `TOP(5000)` on a 20-column table with approximately 300,000 rows takes SQL Server 3 seconds. | You can use LIMIT or TOP to solve it.
With limit -
```
SELECT col1,col2,.. from table_name ORDER BY col_name DESC limit 50
```
with top -
```
SELECT TOP 50 col1,col2,.. from table_name ORDER BY col_name DESC
``` | SQL Table in Ascending Order + Get Last 50 Records | [
"",
"sql",
""
] |
I have a not-that-large database that I'm trying to migrate to SQL Azure, about 1.6 gigs. I have the BACPAC file in blob storage, start the import, and then... nothing. It gets to 5% as far as the status goes, but otherwise goes nowhere fast. After 10 hours, the database size appears to be 380 MB, and the monitoring shows on average around 53 successful connections per hour, with no failures. It appears to still be going.
I took the same BACPAC to my local machine at home, and I can import the same database in just under three minutes, so I'm making the assumption that the integrity of the file is all good.
What could possibly be the issue here? I can't just keep the app I'm migrating offline for days while this goes on. There must be something fundamentally different that it's choking on. | Ugh, I hate answering my own question, but I figured out what the issue is. It appears to have to do with their new basic-standard-premium tiers (announced April 2014). The database imports to the old web-business levels no problem. Because the new tiers are built around throttling for "predictable performance," they make the high volume transactions of importing crazy slow. | I've yet to try this myself (spent 5 hours so far today waiting for a ~11GB Test database to import to an S3 sized Azure SQL database)...
But MS themselves mention that this is due to them not assigning enough hamsters to run the wheel, and suggest using SQLPackage.exe directly from the command line as a workaround for slow imports. (see here: [Azure SQL Database and Managed Instance Import/Export service takes a long time to import or export a database](https://support.microsoft.com/en-nz/help/2965554/azure-sql-database-import-export-service-takes-a-long-time-to-import-o))
Interested to see what will happen, I can't wait this long to migrate the Prod DB in a few days' time so will give this tool a try!
UPDATE:
After waiting 4+ hours for my BACPAC file to import, it crashed.
Interestingly, the database from the crashed import was present in the Azure Portal, and I could see the DTU usage (S3 database) had been railed at 100% for almost the whole 4 hours...
So I wondered what happens if I set the DB performance ALL THE WAY UP in the import process, so I deleted the DB from crashed import and started it again, this time with the performance set to P15 (4000DTUs vs. 40 from an S2).
Guess what...
The import took just a little over **3 MINUTES**
Turned the DB back down to S3 (what the client has spec'ed and is happy to pay for).
That's 4 hours of my life I'll never get back... | SQL Azure import slow, hangs, but local import takes 3 minutes | [
"",
"sql",
"sql-server",
"azure",
""
] |
I have the following rows in the employee table:
```
ID | Name | Number
---------------------
1 | Dani | 15
2 | Dani | 20
3 | Yara | 10
```
I need to write a SQL query to get the following result:
```
ID | Name | Number
------------------
2 | Dani | 20
3 | Yara | 10
```
As you can see, if the result contains a duplicated employee name, the query must return the row with the max value (Number).
How can I do that? | Since you also want the correct ID pointing to the maximum value, this is the easiest way:
```
WITH TT AS (
SELECT ID, Name, Number, rank() OVER(PARTITION BY Name ORDER BY Number DESC) R
FROM employee
)
SELECT ID, Name, Number
FROM TT
WHERE R = 1;
```
It should work correctly on Oracle and SQL Server as they both support window functions.
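Note that `rank()` returns every tied row: if two rows for the same name shared the maximum `Number`, both would get `R = 1` and both would be returned. To force exactly one row per name even on ties, the same query can be sketched with `row_number()` instead:

```sql
WITH TT AS (
  SELECT ID, Name, Number,
         row_number() OVER (PARTITION BY Name ORDER BY Number DESC) R
  FROM employee
)
SELECT ID, Name, Number
FROM TT
WHERE R = 1;
```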
[SQLFiddle example using Oracle](http://sqlfiddle.com/#!4/1a88d/6) | From the data set you provided, you could simply use a `GROUP BY` clause. The syntax in the various engines you mentioned is pretty similar.
In MySQL:
```
SELECT MAX(`ID`), `Name`, MAX(`Number`)
FROM `Employee`
GROUP BY `Name`
```
In T-SQL (SQL Server):
```
SELECT MAX([ID]), [Name], MAX([Number])
FROM [Employee]
GROUP BY [Name]
```
In Oracle:
```
SELECT MAX("ID"), "Name", MAX("Number")
FROM "Employee"
GROUP BY "Name"
```
Note that this independently calculates the maximum values of `ID` and `Number`. For example, if you had this data set:
```
ID - Name - Number
1 - Dani - 20
2 - Dani - 15
3 - Yara - 10
```
You'd get this result:
```
ID - Name - Number
2 - Dani - 20
3 - Yara - 10
```
Notice that the `ID` for `Dani` is 2, since that's the maximum value of the `ID` column for `Dani`. If you'd like to get an `ID` of 1, you'd probably be better off using [*Vincent's* solution](https://stackoverflow.com/a/23417358/1715579). | Get the max values from list result | [
"",
"sql",
"sql-server",
"oracle",
""
] |
I have a similar issue to this [No duplicates in SQL query](https://stackoverflow.com/questions/1543435/no-duplicates-in-sql-query)
Please [find here the sqlFiddle](http://sqlfiddle.com/#!3/99ea1/1)
I have this:
```
+----------+-----------------+----------+----------+-----------+----------+
| TAFIELDA | DESCRIPTION | TBFIELDA | TBFIELDB | DOCNUMBER | TCFIELDB |
+----------+-----------------+----------+----------+-----------+----------+
| 1000 | some data | 2000 | 1000 | 525 | 2000 |
| 1001 | some other data | 2000 | 1001 | 525 | 2000 |
+----------+-----------------+----------+----------+-----------+----------+
```
Expected result:
```
+----------+-----------------+----------+----------+-----------+----------+
| TAFIELDA | DESCRIPTION | TBFIELDA | TBFIELDB | DOCNUMBER | TCFIELDB |
+----------+-----------------+----------+----------+-----------+----------+
| 1001 | some other data | 2000 | 1001 | 525 | 2000 |
+----------+-----------------+----------+----------+-----------+----------+
```
I need only the highest `TAFIELDA` value with the `DocNumber = 525`, so I did this:
```
SELECT max(tAFieldA) tAFieldA,DocNumber
FROM TABLEA A
INNER JOIN TABLEB B ON A.TAFIELDA = B.TBFIELDB
INNER JOIN TABLEC C ON B.tBFieldA = C.tCFieldB
where DocNumber = 525
group by (DocNumber)
```
That query returns only the row I'm looking for; the problem is that if I add another field that cannot be aggregated, for example `Description`, I get several records again.
How could I obtain only one record per `DocNumber` with all the fields of the [sample DB](http://sqlfiddle.com/#!3/99ea1/1)? | **Using Sub-Query**
```
SELECT * FROM
(
SELECT *
,ROW_NUMBER() OVER (PARTITION BY DOCNUMBER ORDER BY tAFieldA DESC) rn
FROM TABLEA A
INNER JOIN TABLEB B ON A.TAFIELDA = B.TBFIELDB
INNER JOIN TABLEC C ON B.tBFieldA = C.tCFieldB
) Sub
WHERE rn = 1
```
**Using CTE**
```
;WITH CTE
AS
(
SELECT *
,ROW_NUMBER() OVER (PARTITION BY DOCNUMBER ORDER BY tAFieldA DESC) rn
FROM TABLEA A
INNER JOIN TABLEB B ON A.TAFIELDA = B.TBFIELDB
INNER JOIN TABLEC C ON B.tBFieldA = C.tCFieldB
)
SELECT * FROM CTE
WHERE rn = 1
```
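Both variants rely on the same idea: `ROW_NUMBER() OVER (PARTITION BY DOCNUMBER ORDER BY tAFieldA DESC)` numbers the joined rows within each `DocNumber`, so `rn = 1` is the row with the highest `tAFieldA`. If all tied maximum rows should be kept instead of an arbitrary one, swap in `RANK()` (a sketch):

```sql
SELECT * FROM
(
    SELECT *
          ,RANK() OVER (PARTITION BY DOCNUMBER ORDER BY tAFieldA DESC) rn
    FROM TABLEA A
    INNER JOIN TABLEB B ON A.TAFIELDA = B.TBFIELDB
    INNER JOIN TABLEC C ON B.tBFieldA = C.tCFieldB
) Sub
WHERE rn = 1   -- rn = 1 for every row sharing the max tAFieldA
```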
## [`Working SQL FIDDLE`](http://sqlfiddle.com/#!3/99ea1/16) | Use a sub query:
```
SELECT *
FROM TABLEA A
INNER JOIN TABLEB B ON A.TAFIELDA = B.TBFIELDB
INNER JOIN TABLEC C ON B.tBFieldA = C.tCFieldB
WHERE TAFIELDA IN (
SELECT max(tAFieldA)
FROM TABLEA A
INNER JOIN TABLEB B ON A.TAFIELDA = B.TBFIELDB
INNER JOIN TABLEC C ON B.tBFieldA = C.tCFieldB
WHERE DocNumber = 525
GROUP BY (DocNumber)
)
``` | Removing duplicates with Group By | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Table `Movie(Title,year,director,length)`
Find movies whose titles appear more than once (in different years).
```
Select Distinct title
From Movie as x
where year <> any
(Select year
from movie
where title = x.title);
```
I don't understand this SQL code; can anyone explain? | [From the docs](http://www.postgresql.org/docs/current/interactive/functions-comparisons.html#AEN18030):
**9.23.3. ANY/SOME (array)**
> expression operator ANY (array expression)
> expression operator SOME (array expression)
>
> The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expression is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. The result of ANY is "true" if any true result is obtained. The result is "false" if no true result is found (including the case where the array has zero elements).
>
> If the array expression yields a null array, the result of ANY will be null. If the left-hand expression yields null, the result of ANY is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no true comparison result is obtained, the result of ANY will be null, not false (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
>
> SOME is a synonym for ANY.
So basically if any of the predicates formed by using the array, then the whole expression evaluates to true:
With some example data
```
Title | Year
----------------+--------
Point Break | 1991
The Italian Job | 1969
The Italian Job | 2003
```
For the Italian Job your where clause will be
```
WHERE Year <> 1969
OR Year <> 2003
```
For each record this will evaluate to true since one of the years will be different each time. For Point Break the where clause will be
```
WHERE Year <> 1991
```
This will not return true so the record is not returned.
So your query is asking for a distinct list of film titles where another film exists with the same title and a different year.
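The explanation can also be sanity-checked outside the Fiddle. SQLite has no `<> ANY` operator, so this sketch (table and column names taken from the question; the `EXISTS` rewrite is the assumed equivalent of the correlated `ANY`) uses the equivalent form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movie (title TEXT, year INT, director TEXT, length INT)")
conn.executemany(
    "INSERT INTO movie (title, year) VALUES (?, ?)",
    [("Point Break", 1991), ("The Italian Job", 1969), ("The Italian Job", 2003)],
)

# "year <> ANY (correlated subquery)" is equivalent to asking whether
# a row with the same title but a different year EXISTS.
rows = conn.execute(
    """SELECT DISTINCT title
       FROM movie AS x
       WHERE EXISTS (SELECT 1 FROM movie
                     WHERE title = x.title AND year <> x.year)"""
).fetchall()
titles = [r[0] for r in rows]
print(titles)  # ['The Italian Job']
```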
**[Examples on SQL Fiddle](http://sqlfiddle.com/#!15/d083a/4)** | Let's talk about `where year <> any`:
The subquery gets you all the years in which a title appears. For every row (`title`), check whether there is **any other** year for that title that is different (`<>`) from the current year.
1. If TRUE, that title appears more than once (in different years). Return that title.
2. If FALSE, that title appears only once. | Correlated sub query of movie | [
"",
"sql",
""
] |
Generally speaking, I need to search a `varchar(100)` column for the pattern `'%bA%'`, where
* `A` - an uppercase non-ASCII character, and
* `b` - a lowercase non-ASCII character.
From a high-level perspective, I need to find all strings where the **[space]** character is missing before an uppercase character, for example as a result of concatenating the **firstname** and **lastname** columns without a space between them.
[SQLFiddle environment](http://sqlfiddle.com/#!3/a05bf/1) for reproducing
```
-- WORKING (ASCII) - thanx to @Serpiton
create table #tmp (value varchar(100));
insert into #tmp (value) values ('JohnnyBravo'); -- expected output
insert into #tmp (value) values ('Johnny Bravo');
insert into #tmp (value) values ('Johnnybravo');
insert into #tmp (value) values ('johnnybravo');
select * from #tmp WHERE value collate Latin1_General_CS_AI like '%[a-z][^ abcdefghijklmnopqrstuvwxyz]%'
drop table #tmp;
-- NOT WORKING (NON-ASCII, Cyrillic, win-1251)
create table #tmp (value varchar(100));
insert into #tmp (value) values ('АндрейМорозов'); -- expected output
insert into #tmp (value) values ('Андрей Морозов');
insert into #tmp (value) values ('Андрейморозов');
insert into #tmp (value) values ('андрейморозов');
select * from #tmp WHERE value collate Cyrillic_General_CS_AI like '%[а-я][^ абвгдежзиклмнопрстуфхцчшщъыьэюя]%'
drop table #tmp;
-- UNICODE - NOT WORKING TOO
create table #tmp (value nvarchar(100) collate Cyrillic_General_CS_AI);
insert into #tmp (value) values (N'АндрейМорозов'); -- expected output
insert into #tmp (value) values (N'Андрей Морозов');
insert into #tmp (value) values (N'Андрейморозов');
insert into #tmp (value) values (N'андрейморозов');
select * from #tmp WHERE value like '%[а-я][^ абвгдежзиклмнопрстуфхцчшщъыьэюя]%'
drop table #tmp;
``` | If it really is just "firstname and lastname columns concatenation without a space between" then this may suffice:
```
select
*
from mytable
where LEN(ConcatenatedName) <> LEN(REPLACE(ConcatenatedName, ' ', ''));
```
You can substitute whatever passes for a space in your local environment, of course. If there are, say, middle names which should be space-delimited, this will pick them up too. However, you may get false positives if there are intended spaces, e.g. two-part surnames.
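If you want to prototype the detection logic before settling on a collation-based `LIKE` pattern, a regular expression makes the lowercase-then-uppercase transition explicit. This Python sketch is purely illustrative; the character ranges are assumptions matching the question's Latin and Cyrillic examples:

```python
import re

# A lowercase letter (Latin or Cyrillic) immediately followed by an
# uppercase letter, i.e. a missing space at the name boundary.
missing_space = re.compile(r"[a-zа-я][A-ZА-Я]")

names = ["JohnnyBravo", "Johnny Bravo", "johnnybravo",
         "АндрейМорозов", "Андрей Морозов", "андрейморозов"]
flagged = [n for n in names if missing_space.search(n)]
print(flagged)  # only the concatenated-without-space values
```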
**Second go**
Fair point. I hadn't accounted for case. Here's some magic code which works with your test data:
```
with digits as
(
SELECT
*
FROM
(
VALUES (0),(1), (2), (3), (4), (5), (6), (7), (8), (9)
) AS MyTable(i)
)
, Number as
(
select (a.i * 10) + b.i as number
from digits as a
cross join digits as b
)
, LetterCase as
(
select
n.number
,t.value
,SUBSTRING(t.value, n.number, 1) as Letter
,ASCII(SUBSTRING(t.value, n.number, 1)) LetterASCII
,CASE
when ASCII(SUBSTRING(t.value, n.number, 1)) between 65 and 90
then 'True'
else 'False'
end as IsUpper
from Number as n
cross join #tmp as t
where n.number between 1 and LEN(t.value)
)
select
lc.value
from LetterCase as lc
where lc.IsUpper = 'True'
and lc.number > 1
and SUBSTRING(lc.value, lc.number - 1, 1) <> ' '
drop table #tmp;
```
It draws on my answer to this other question - [Split words with a capital letter in sql](https://stackoverflow.com/questions/23470794/split-words-with-a-capital-letter-in-sql/23471538#23471538).
**Third go**
Here's some magic code which works with your *revised* test data. You have to take care if your instance's (or database's or column's) default collation is not the same as the one with which you want to work.
```
;with digits as
(
SELECT
*
FROM
(
VALUES (0),(1), (2), (3), (4), (5), (6), (7), (8), (9)
) AS MyTable(i)
)
, Number as
(
select (a.i * 10) + b.i as number
from digits as a
cross join digits as b
)
, UpperCaseCharacters as
(
select NCHAR(1040) collate Cyrillic_General_CS_AI as CodePoint --А
UNION ALL
select NCHAR(1052) --М
-- Extend this list with all the upper case character in your chosen glyph list.
)
, LetterCase as
(
select
n.number
,t.value
,CASE
when SUBSTRING(t.value, n.number, 1) IN (select Codepoint from UpperCaseCharacters)
then 'True'
else ''
end as IsUpper
from Number as n
cross join #tmp as t
where n.number between 1 and LEN(t.value)
)
select
lc.value
from LetterCase as lc
where lc.IsUpper = 'True'
and lc.number > 1
and SUBSTRING(lc.value, lc.number - 1, 1) <> ' ';
``` | You can use the basic pattern recognition of the `LIKE` operator.
The most-used capability is `%`, but it also recognizes set searches with `[set]`.
The pattern you search is
```
WHERE column LIKE '%[^ ][A-Z]%'
```
Where the character `^` is used to negate the following pattern.
**EDIT**
Checking the comment from the OP, I found out that SQLFiddle uses a case-insensitive collation, so the check will never work there. I also had to change the logic to "a lowercase letter followed by something that is not a lowercase letter or a space", and to do this the second range has to be expanded to include the space. For English letters it is:
```
WHERE column LIKE '%[a-z][^ abcdefghijklmnopqrstuvwxyz]%'
```
[SQLFiddle](http://sqlfiddle.com/#!3/9a386/1) demo | SQL Server: How to find position of the first uppercase non-ascii character in the string? | [
"",
"sql",
"sql-server",
"t-sql",
"non-ascii-characters",
""
] |
I have a MySQL dump called `dbBACKUP.sql`, about 152 MB, with thousands of records. I use the following command to import it:
```
mysql -u root -p --default-character-set=utf8 phpbb3 < dbBACKUP.sql
```
After supplying the root password, I go to phpMyAdmin to check the data imported into the database. I notice a slow increase in the amount of imported data, visible in both the total row count and the total size in MB, until it reaches a point where there seems to be no change at all with each refresh of phpMyAdmin.
I believe this may be a memory issue for the server. Are there any configuration settings that allow increasing the memory used by MySQL, or any other way to improve the performance of this task?
This occurred on my own desktop running Windows 7 64-bit. The MySQL server is 5.6.16. | I did a Google search and found this blog that might help you: <http://cmanios.wordpress.com/2013/03/19/import-a-large-sql-dump-file-to-a-mysql-database-from-command-line/>
```
-- connect to database
mysql -u root -p
-- set network buffer length to a large byte number.
set global net_buffer_length=1048576;
-- set maximum allowed packet size to a large byte number.
set global max_allowed_packet=1073741824;
-- set read buffer size to a large byte number.
set global read_buffer_size=2147479552;
-- disable foreign key checking to avoid delays,errors and unwanted behaviour
SET foreign_key_checks = 0;
-- import your sql dump file
source C:\[your_path]\dbBACKUP.sql
-- enable foreign key checking after import
SET foreign_key_checks = 1;
``` | Try using the `SOURCE` command instead of redirecting your SQL file.
```
mysql -u root -p --default-character-set=utf8
use phpbb3
source /full/path/to/your/dbBACKUP.sql
``` | Large mysql dump failed to be imported | [
"",
"mysql",
"sql",
"windows",
"import",
""
] |
I have the following query:
```
UPDATE THINGS
SET Col 1 = CASE when 'A' then 'APPLE'
when 'B' then 'BALL'
when 'C' then 'CARROT'
else NULL
end,
Col 2 = Case Col 1 when 'APPLE' then 'FRUIT'
when 'BALL' then 'TOY'
when 'CARROT' then 'SALAD'
else NULL
end
```
My question is: will the update happen on a column-by-column basis, so that I get the updated values in Col 2 successfully? If it is not possible with the above query, is there any other way to do it within a single query? I cannot write two separate queries. | You can use this approach: store the value in a variable and then use it for the next column.
```
DECLARE @val VARCHAR(20) -- type of Col1
UPDATE THINGS
SET @val = [Col 1] = CASE [Col 1] WHEN 'A' THEN 'APPLE'
WHEN 'B' THEN 'BALL'
WHEN 'C' THEN 'CARROT'
END,
[Col 2] = CASE @val WHEN 'APPLE' THEN 'FRUIT'
WHEN 'BALL' THEN 'TOY'
WHEN 'CARROT' THEN 'SALAD'
END
``` | You can't rely upon the order of execution of assignments in a query. Most likely, the DBMS will pull the row, check the values, then write the changes. In SQL Server, I'd write it like this:
```
UPDATE THINGS
SET [Col 1] = CASE WHEN [Col 1] = 'A' then 'APPLE'
WHEN [Col 1] = 'B' then 'BALL'
WHEN [Col 1] = 'C' then 'CARROT'
ELSE NULL
END,
[Col 2] = CASE WHEN [Col 1] IN ('A','APPLE') then 'FRUIT'
WHEN [Col 1] IN ('B', 'BALL') then 'TOY'
WHEN [Col 1] IN ('C','CARROT') then 'SALAD'
ELSE NULL
END
``` | Does UPDATE happen column by column in SQL Server | [
"",
"sql",
"sql-server",
"sql-update",
"case",
""
] |
I want to select the children of a category and the count of their children.
Structure of **Category**:
```
Id Parent Name
0 0 main
1 0 Games
2 0 Books
3 2 Drama
4 2 Comedy
```
For id '0' it should return games 0, books 2. For id '2' it should return drama 0, comedy 0
How to do it? | ```
Id Parent Name
1 0 main
2 1 Games
3 1 Books
4 3 Drama
5 3 Comedy
```
If you renumber the rows as above, so that '0' is only used as the parent value of the Main row, you can do this using a simple query. Main is the parent of Books and Games, so Main itself needs an id value other than 0.
After that you just need to write a simple query:
```
Select c1.name, (select count(id) from category c2 where c2.parent = c1.id)
as count from category c1 where c1.parent=1;
```
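Keeping the question's original numbering instead (where `main` has id 0 and is its own parent), the same correlated-count idea works if self-parented rows are excluded explicitly. A small SQLite sketch of that variant (the self-parent exclusion is an assumption about how the root is modelled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE category (id INT, parent INT, name TEXT)")
conn.executemany("INSERT INTO category VALUES (?, ?, ?)", [
    (0, 0, "main"), (1, 0, "Games"), (2, 0, "Books"),
    (3, 2, "Drama"), (4, 2, "Comedy"),
])

def children_with_counts(parent_id):
    # Exclude self-parented rows so 'main' (0/0) is never its own child.
    return conn.execute(
        """SELECT c1.name,
                  (SELECT COUNT(*) FROM category c2
                   WHERE c2.parent = c1.id AND c2.id <> c2.parent)
           FROM category c1
           WHERE c1.parent = ? AND c1.id <> c1.parent
           ORDER BY c1.id""",
        (parent_id,),
    ).fetchall()

print(children_with_counts(0))  # [('Games', 0), ('Books', 2)]
print(children_with_counts(2))  # [('Drama', 0), ('Comedy', 0)]
```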
Thank you. | Use an inner join to fetch the data:
```
SELECT c.Name, c.Parent
FROM Category c
INNER JOIN Category p on c.id = p.parent
``` | SQL: Select children and number of their children | [
"",
"mysql",
"sql",
""
] |
I have a SQL Server table with a nvarchar(50) column. This column must be unique and can't be the PK, since another column is the PK. So I have set a non-clustered unique index on this column.
During a large amount of insert statements in a serializable transaction, I want to perform select queries based on this column only, in different transaction. But these inserts seem to lock the table. If I change the datatype of the unique column to bigint for example, no locking occurs.
Why isn't nvarchar working, whereas bigint does? How can I achieve the same, using nvarchar(50) as the datatype? | After all, mystery solved! Rather stupid situation I guess..
The problem was in the select statement. The where clause was missing the quotes, but by a devilish coincidence the existing data were only numbers, so the select wasn't failing; it just wasn't executing until the inserts committed. When the first alphanumeric data were inserted, the select statement began failing with 'Error converting data type nvarchar to numeric'.
e.g
Instead of
```
SELECT [my_nvarchar_column]
FROM [dbo].[my_table]
WHERE [my_nvarchar_column] = '12345'
```
the select statement was
```
SELECT [my_nvarchar_column]
FROM [dbo].[my_table]
WHERE [my_nvarchar_column] = 12345
```
I guess a silent cast was performed and the unique index was not being used, which resulted in the block.
Fixed the statement and everything works as expected now.
Thanks everyone for their help, and sorry for the rather stupid issue! | First, you can change the PK to be a non-clustered index, then you you could create a clustered index on this field. Of course, that may be a bad idea based on your usage, or just simply not help.
You might have a use case for a covering index, see previous [question re: covering index](https://stackoverflow.com/questions/62137/what-is-a-covered-index)
You might be able to change your "other queries" to non-blocking by changing the isolation level of those queries.
It is relatively uncommon for it to be a necessity to insert a large number of rows in a single transaction. You may be able to simply not use a transaction, or split up into a smaller set of transactions to avoid locking large sections of the table. E.g., you can insert the records into a pending table (that is not otherwise used in normal activity) in a transaction, then migrate these records in smaller transactions to the main table if real-time posting to the main table is not required.
ADDED
Perhaps the most obvious question: are you sure you have to use a serializable transaction to insert a large number of records? Serializable transactions are rarely necessary outside of financial systems, and they impose a high concurrency cost compared to the other isolation levels.
ADDED
Based on your comment about "all or none", you are describing atomicity, not serializability. I.e., you might be able to use a different isolation level for your large insert transaction, and still get atomicity.
Second thing, I notice you specify a large amount of insert statements. This just sounds like you should be able to push these inserts into a pending/staging table, then perform a single insert or batches of inserts from the staging table into the production table. Yes, it is more work, but you may just have an existing problem that requires the extra effort. | SQL Server unique index on nvarchar issue | [
"",
"sql",
"sql-server",
"nvarchar",
"unique-index",
""
] |
I need to run a SQL Server query that either "gets" or "creates" a record. Pretty simple, but I am not sure if this will create any race conditions.
I have read a few articles regarding locking during insert/updates e.g. <http://weblogs.sqlteam.com/dang/archive/2007/10/28/Conditional-INSERTUPDATE-Race-Condition.aspx>, but I'm not sure if this is relevant to me given I'm using SQL Server 2012, and I don't have contention over modification per se, only over ensuring 'one off creation' of my record.
Also, I will have a primary key constraint over the `Id` column (not an identity column), so I know the race condition at worst could only ever create an error, and not invalid data, but I also don't want the command to throw an error.
Can someone please shed some light on how I need to solve this? Or am I over thinking this and can I simply do something like:
```
IF EXISTS(SELECT * FROM Table WHERE Id = @Id)
BEGIN
SELECT Id, X, Y, Z FROM Table WHERE Id = @Id
END
ELSE
BEGIN
INSERT INTO Table (Id, X, Y, Z)
VALUES (@Id, @X, @Y, @Z);
SELECT @Id, @x, @Y, @Z;
END
```
I've been in document database land for a few years and my SQL is very rusty. | GET occurs far more frequently than CREATE, so it makes sense to optimize for that.
```
IF EXISTS (SELECT * FROM MyTable WHERE Id=@Id)
SELECT Id, X, Y, Z FROM MyTable WHERE Id=@Id
ELSE
BEGIN
BEGIN TRAN
IF NOT EXISTS (SELECT * FROM MyTable WITH (HOLDLOCK, UPDLOCK) WHERE Id=@Id )
BEGIN
-- WAITFOR DELAY '00:00:10'
INSERT INTO MyTable (Id, X, Y, Z) VALUES (@Id, @X, @Y, @Z)
END
SELECT Id, X, Y, Z FROM MyTable WHERE Id=@Id
COMMIT TRAN
END
```
If your record does not exist, begin an explicit transaction. *Both HOLDLOCK and UPDLOCK are required (thanks @MikaelEriksson).* The table hints hold the shared lock and update lock for the duration of the transaction. This properly serializes the INSERT and reliably avoids the race condition.
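The lock hints themselves are SQL Server specific, but the overall shape of the pattern — an atomic insert-if-absent followed by an unconditional select — can be sketched in SQLite for illustration. This does not reproduce the HOLDLOCK/UPDLOCK semantics; `INSERT OR IGNORE` is simply SQLite's way of making the insert a no-op on a duplicate key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, x TEXT, y TEXT, z TEXT)")

def get_or_create(id_, x, y, z):
    # INSERT OR IGNORE is a no-op when the primary key already exists,
    # so a second caller cannot create a duplicate row.
    with conn:
        conn.execute("INSERT OR IGNORE INTO mytable VALUES (?, ?, ?, ?)",
                     (id_, x, y, z))
    return conn.execute("SELECT id, x, y, z FROM mytable WHERE id = ?",
                        (id_,)).fetchone()

first = get_or_create(111, "a", "b", "c")
second = get_or_create(111, "other", "values", "ignored")
print(first == second)  # True: the second call returned the existing row
```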
To verify, uncomment `WAITFOR` and run two or more queries at the same time. | Your code definitely has race conditions.
One approach is to merge the conditions into one `insert`; I'm not 100% sure that this protects against all race conditions, but this might work:
```
INSERT INTO Table(Id, X, Y, Z)
SELECT @Id, @X, @Y, @Z
WHERE NOT EXISTS (SELECT 1 FROM Table WHERE ID = @ID);
SELECT Id, X, Y, Z FROM Table WHERE ID = @Id;
```
The issue is about the underlying locking mechanism during the duplicate detection versus the insert. You can try to use more explicit locking, but a table lock will surely slow down processing (to learn about locks, you can turn to the documentation or a blog post such as [this](http://www.sqlservercentral.com/blogs/basits-sql-server-tips/2012/10/21/sql-server-locking-overview/)).
The simplest way that I can think to make this work consistently is to use `try`/`catch` blocks:
```
BEGIN TRY
INSERT INTO Table(Id, X, Y, Z)
SELECT @Id, @X, @Y, @Z;
END TRY
BEGIN CATCH
END CATCH;
SELECT Id, X, Y, Z FROM Table WHERE ID = @Id;
```
That is, try the `insert`. If it fails (presumably because of a duplicate key, but you can explicitly check for that), then ignore the failure. Return the values for the id afterwards. | SQL Server 2012 potential race condition on select or insert | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a SQL query (in MySQL) that selects a total row count and a count of the incomplete tasks in the table:
```
SELECT
count(*) as total, IF(SUM(NOT `completed`) IS NULL,0,SUM(NOT `completed`)) as incomplete
FROM
tasks
```
The table can be made something like:
```
CREATE TABLE `tasks` (
`clave` int(11) NOT NULL AUTO_INCREMENT,
`completed` tinyint(1) NOT NULL DEFAULT '0' COMMENT 'If it is 0 isn\'t completed, otherwise is completed',
PRIMARY KEY (`clave`)
) ENGINE=InnoDB;
```
You can observe that in the query I have used `SUM(NOT completed)` two times: once to check whether it will return null (and return 0 instead) and once to return the count (when it isn't null). The `SUM()` function returns `null` if the table (or query result) is empty, i.e. the table has no rows at all.
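That NULL-on-empty behaviour is easy to confirm; the sketch below uses SQLite, whose `SUM` behaves the same way as MySQL's on an empty set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (clave INTEGER PRIMARY KEY,"
    " completed INTEGER NOT NULL DEFAULT 0)"
)

# SUM over zero rows yields NULL, not 0 ...
raw = conn.execute("SELECT SUM(NOT completed) FROM tasks").fetchone()[0]
# ... which is why the query wraps it; COALESCE is the usual shorthand.
wrapped = conn.execute(
    "SELECT COALESCE(SUM(NOT completed), 0) FROM tasks"
).fetchone()[0]
print(raw, wrapped)  # None 0
```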
As the query uses SUM two times, I think that MySQL will calculate the sum two times.
I tried to see whether I can set an alias for the column and then use it in an IF, so that MySQL doesn't need to recalculate it.
And as a requirement, incomplete column must not be null in any circumstance (for example if the table tasks is empty).
And my question: **is this efficient? Will MySQL need to recalculate the sum each time, or will it remember it?** | this may be a better way to do it because you do fewer calculations, which helps **performance**
using the schema richard posted - see [FIDDLE](http://sqlfiddle.com/#!2/c2e74b/5)
```
SELECT
total,
num_complete,
total - num_complete as num_incomplete
FROM(
SELECT
COUNT(*) as total,
SUM(IF(t.completed > 0, 1, 0)) as num_complete
FROM status_log t
) as t
```
---
Checking for an empty table, per the OP's request:
```
SELECT
total,
num_complete,
total - num_complete as num_incomplete
FROM(
SELECT
COUNT(*) as total,
IF(COUNT(*) > 0, SUM(IF(t.completed > 0, 1, 0)), 0) as num_complete
FROM status_log t
) as t
```
**You should do the check for a null result or an empty table in the calling programming language.** Generally speaking, SQL should be used to query the table, not to special-case an empty one; if the table is completely empty, check for a null response in the calling code. That will improve **performance** a lot. | The efficiency (or lack thereof) will have more to do with the indices present on the table, as well as the storage engine itself.
If I were writing this on an INNODB-based storage engine, I would do the following:
```
SELECT
count(*) as total,
SUM(CASE WHEN completed = 0 OR completed IS NULL THEN 1 ELSE 0 END) AS incomplete
FROM tasks;
```
And I would index my "completed" column in order to do this.
The reason I would change from "IF" to CASE WHEN is largely just code portability: CASE WHEN is easier to move to other databases, if need be.
Also, an index on completed will allow this query to simply evaluate the index, not the table itself. That, with LRU, should give you plenty in the way of efficiency.
"",
"mysql",
"sql",
"performance",
""
] |
I would like to know if there is any way in SQL to distinguish which of the default parameters was set when the function was called.
F.e. if there is function:
```
CREATE OR REPLACE FUNCTION someFunction (
ID NUMBER,
showNames VARCHAR2 DEFAULT 'N',
showAddress VARCHAR2 DEFAULT 'N')
```
and it will be called like that:
```
someFunction(123, 'Y')
```
is there any way to distinguish which parameter was set by 'Y'? Or if I need to set showAddress to 'Y' do I have to call function like that:
```
someFunction(123, 'N', 'Y')
If you use an Oracle database, you can specify the exact parameter name:
```
someFunction(123, showAddress =>'Y')
Since this is tagged as `Oracle`, you can use named parameters like `formal => actual`. So in your case, you can say `someFunction(123, shownames => 'Y')`, in which case ID and shownames will be set, whereas showaddress will keep its default value.
Otherwise, the parameters are all positional, and hence calling `someFunction(123, 'Y')` will set ID and then shownames, based on the position of the parameters.
See more here
<http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/subprograms.htm#LNPLS00825> | SQL function with multiple default parameters | [
"",
"sql",
"oracle",
""
] |
I have tried a lot already, but I'm not getting the result that I want.
Let's say I have the following data in the first table:
```
ColA ColB ColC ColD
AA 11 1 50€
AA 11 1 60€
AA 11 1 10€
BB 22 1 15#
```
And in Table B I have this:
```
ColA ColB ColC ColD
AA 11 1 Delta
AA 11 1 Delta
AA 11 1 Delta
BB 22 1 Gamma
BB 22 1 Gamma
```
I need to create a table that looks like this - note only the 4 rows, not the 11 that the inner join I was using produces.
```
ColA ColB ColC ColD ColE
AA 11 1 50€ Delta
AA 11 1 60€ Delta
AA 11 1 10€ Delta
BB 22 1 15€ Gamma
```
If anyone has an idea on how to make this work that would be greatly appreciated. | Try this:
```
SELECT A.ColA,
A.ColB,
A.ColC,
A.ColD,
B.ColD AS ColE
FROM Table_A AS A
JOIN Table_B AS B
ON B.ColA = A.ColA
AND B.ColB = A.ColB
GROUP BY A.ColA,
A.ColB,
A.ColC,
A.ColD,
B.ColD
``` | Try this;
```
SELECT A.ColA,A.ColB,A.ColC,A.ColD,B.ColD as BCol
FROM TableA A JOIN
TableB B ON A.ColA=B.ColA AND A.ColB=B.ColB AND A.ColC=B.ColC
GROUP BY A.ColA,A.ColB,A.ColC,A.ColD,B.ColD
```
Result:
```
COLA COLB COLC COLD BCOL
AA 11 1 10€ Delta
AA 11 1 50€ Delta
AA 11 1 60€ Delta
BB 22 1 15# Gamma
```
See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/07d4d/7). | Join two tables without extra rowes | [
"",
"sql",
"join",
"sql-server-2012",
""
] |
I am trying to understand the difference between the 2 queries below.
**Query 1:**
In which I have put the condition **'Orders.OrderID is not null'** in the ON clause:
```
SELECT Customers.CustomerName, Orders.OrderID
FROM Customers
LEFT JOIN Orders
ON (Customers.CustomerID=Orders.CustomerID and Orders.OrderID is not null)
```
**Query 2:**
In which I have put the condition **'Orders.OrderID is not null'** in the WHERE clause:
```
SELECT Customers.CustomerName, Orders.OrderID
FROM Customers
LEFT JOIN Orders
ON Customers.CustomerID=Orders.CustomerID where Orders.OrderID is not null
```
These 2 queries generate different results. Why?
*What is the execution flow for where clause and on condition clause?*
P.S. I am using the following link for testing the queries: <http://www.w3schools.com/sql/trysql.asp?filename=trysql_select_join> | The difference is that in query 1, `Orders.OrderID IS NOT NULL` is used to filter the rows of the Orders table, while in query 2 it is used to filter the rows of the result set. | Change the `!= NULL` to `IS NOT NULL` | SQL Query: difference in condition in On clause and where clause | [
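A minimal in-memory example (SQLite here, with made-up customer and order rows) makes the difference concrete: the ON-clause condition only limits which rows can match, while the WHERE condition filters the joined result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INT, CustomerName TEXT)")
conn.execute("CREATE TABLE Orders (OrderID INT, CustomerID INT)")
conn.executemany("INSERT INTO Customers VALUES (?, ?)",
                 [(1, "Alfreds"), (2, "Berglunds")])
conn.execute("INSERT INTO Orders VALUES (10, 1)")  # only Alfreds has an order

# Condition in ON: unmatched customers are still returned, with a NULL OrderID.
q1 = conn.execute(
    """SELECT c.CustomerName, o.OrderID
       FROM Customers c LEFT JOIN Orders o
         ON c.CustomerID = o.CustomerID AND o.OrderID IS NOT NULL"""
).fetchall()

# Condition in WHERE: the NULL rows produced by the outer join are discarded.
q2 = conn.execute(
    """SELECT c.CustomerName, o.OrderID
       FROM Customers c LEFT JOIN Orders o
         ON c.CustomerID = o.CustomerID
       WHERE o.OrderID IS NOT NULL"""
).fetchall()

print(len(q1), len(q2))  # 2 1
```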
"",
"mysql",
"sql",
"left-join",
"where-clause",
""
] |
***UPDATE NO 2***
With very important input from Solution 1 and Solution 2, I finally have a SQL query that does exactly what it needs to. Thanks to both users; I appreciate their assistance. I wish I could mark both solutions as answers, but considering that the base of my query is from @papo's solution, and also that it is very short compared to the other solution, I marked that one as my answer.
Final working sql
```
SELECT CNumber, Item, Status
FROM WEBS t1
WHERE ( ((t1.Item IN ('SHIPEXP', 'SHIPPING', 'SHIPINT', 'SHIPBOND', 'GIFT WRAP'))
AND EXISTS (SELECT * FROM WEBS
WHERE CNumber = t1.CNumber
AND ( Status NOT IN (1, 11, 12, 13, 14, 15, 16, 17)
)
)
)
OR
( Status NOT IN (1, 11, 12, 13, 14, 15, 16, 17)
)
)
AND CNumber = 12140836
```
---
***UPDATE***
I have a final query that looks like the one below. This is the solution that I accepted as the answer, modified to suit my table structure and other clauses.
The issue is that the query fails on SCENARIO 2 as described before, i.e. it should display no results when all items are completed. Scenario 1 succeeds.
```
SELECT t.ItemCode
, t.ItemStatus
FROM WEBORDER_STATUS t
WHERE t.CustOrderNumber = 12140799
AND ( ( ( t.ItemCode <> 'SHIPEXP' OR t.ItemCode <> 'SHIPPING' OR t.ItemCode <> 'SHIPINT' OR t.ItemCode <> 'SHIPBOND' OR t.ItemCode <> 'GIFT WRAP'
)
AND ( t.ItemStatus <> 11 AND t.ItemStatus <> 12 AND
t.ItemStatus <> 1 AND t.ItemStatus <> 13 AND
t.ItemStatus <> 14 AND t.ItemStatus <> 15 AND
t.ItemStatus <> 16 AND t.ItemStatus <> 17
)
)
OR
( (t.ItemCode = 'SHIPEXP' OR t.ItemCode = 'SHIPPING' OR t.ItemCode = 'SHIPINT' OR t.ItemCode = 'SHIPBOND' OR t.ItemCode = 'GIFT WRAP')
AND EXISTS
( SELECT 1
FROM WEBORDER_STATUS s
WHERE s.CustOrderNumber = t.CustOrderNumber
AND (t.ItemCode <> 'SHIPEXP' OR t.ItemCode <> 'SHIPPING' OR t.ItemCode <> 'SHIPINT' OR t.ItemCode <> 'SHIPBOND' OR t.ItemCode <> 'GIFT WRAP')
AND ( t.ItemStatus <> 11 OR t.ItemStatus <> 12 OR
t.ItemStatus <> 1 OR t.ItemStatus <> 13 OR
t.ItemStatus <> 14 OR t.ItemStatus <> 15 OR
t.ItemStatus <> 16 OR t.ItemStatus <> 17
)
)
)
)
```
---
I need assistance writing a SQL query using standard T-SQL on SQL Server.
I have a situation where I need to select items from a table based on their status.
Lines for order number 111 are as follows; the item code SHIPPING will always be Completed.
**Scenario 1:**
> ItemCode---- Status
>
> 123 -----------Inprogress
>
> 456 -----------Inprogress
>
> 789 -----------Completed
>
> SHIPPING ----Completed
**Expected Result** (Exclude completed items, but include shipping item as all orders will have SHIPPING)
> 123 -----------Inprogress
>
> 456 -----------Inprogress
>
> SHIPPING ----Completed
---
**Scenario 2:**
> ItemCode---- Status
>
> 123 -----------Completed
>
> 456 -----------Completed
>
> 789 -----------Completed
>
> SHIPPING ----Completed
**Expected Result** (Exclude completed items, but as all items are completed Exclude SHIPPING as well)
> No results (as all items are completed)
Any help or direction is appreciated. Thanks | Only show the SHIPPING row when there are uncompleted items for this order. Show the other items only when they are not completed.
```
SELECT OrderNo, ItemCode, Status
FROM table1 t1
WHERE ((ItemCode = 'SHIPPING'
AND EXISTS (SELECT * FROM table1
WHERE OrderNo = t1.OrderNo
AND Status != 'Completed'))
OR Status != 'Completed')
AND OrderNo = 111;
``` | One option is to use an EXISTS predicate to test whether there are any rows (other than 'SHIPPING') that aren't status 'COMPLETED' to conditionally return the 'SHIPPING' row.
For example:
```
SELECT t.ItemCode
, t.Status
FROM mytable t
WHERE t.OrderNumber = 111
AND ( ( t.ItemCode <> 'SHIPPING' AND t.Status <> 'Completed'
)
OR ( t.ItemCode = 'SHIPPING' AND EXISTS
( SELECT 1
FROM mytable s
WHERE s.OrderNumber = t.OrderNumber
AND s.ItemCode <> 'SHIPPING'
AND s.Status <> 'Completed'
)
)
)
```
NOTES
The predicate `t.OrderNumber = 111` isn't necessary. This could be omitted, or be whatever other criteria. (I assumed your query would have that as a predicate because the examples you showed omitted an OrderNumber column. If you omit that predicate, it's likely you'd want to return OrderNumber in the select list.)
This:
```
( t.ItemCode <> 'SHIPPING' AND t.Status <> 'Completed'
)
```
gets all the rows that aren't SHIPPING that have a status other than completed. (This might not match any rows, as in the second scenario.)
This:
```
( t.ItemCode = 'SHIPPING' AND EXISTS
( SELECT 1
FROM mytable s
WHERE s.OrderNumber = t.OrderNumber
AND s.ItemCode <> 'SHIPPING'
AND s.Status <> 'Completed'
)
)
```
gets the SHIPPING row only if there's another row in the table, for the same OrderNumber and a status that's not 'Completed'. If there isn't a row returned by the subquery, the `EXISTS` predicate returns FALSE, so the entire expression evaluates to FALSE, so the SHIPPING row will not be returned.
Add an `ORDER BY` clause to get the rows returned in a specific sequence. To get the SHIPPING row last, e.g
```
ORDER BY CASE t.ItemCode WHEN 'SHIPPING' THEN 2 ELSE 1 END, t.ItemCode
```
---
**FOLLOWUP**
Based on your updated query, note that the "negation" of this:
```
(a = 'b' OR a = 'c')
```
would be:
```
(a <> 'b' AND a <> 'c')
```
Note that the `OR` needs to be replaced with an `AND`. If you work through a test case, for example, consider a row with `itemCode = 'SHIPEXP'`. We know that if this is true, then `itemCode <> 'SHIPPING'` will also be true.
Similarly, the expression:
```
( ItemStatus <> 11 OR ItemStatus <> 12 )
```
Will return TRUE for any non-null value of `ItemStatus`. (If the value is 11, then `ItemStatus <> 12` will be TRUE. That would result in ( FALSE OR TRUE ), which would return TRUE. The same goes for a value of `12`... ( TRUE OR FALSE ) will return TRUE. I think what you want is to return a FALSE. To get that, the predicates would need to be AND'd together.
I believe that's the crux of the behavior you are seeing with your query. You've got predicates returning TRUE when you expect them to return FALSE.
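The truth table behind that is easy to verify in any language; this quick check mirrors the two predicates above:

```python
def ored(status):
    # status != 11 OR status != 12: always True for any non-null input,
    # because at least one of the two inequalities must hold.
    return status != 11 or status != 12

def anded(status):
    # status != 11 AND status != 12: what the filter actually needs.
    return status != 11 and status != 12

print([ored(s) for s in (11, 12, 99)])   # [True, True, True]
print([anded(s) for s in (11, 12, 99)])  # [False, False, True]
```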
---
An equivalent (and more concise) way to express the predicate on `ItemCode` (to see if the value matches one of a list of values) would be:
```
t.ItemCode IN ('SHIPEXP','SHIPPING','SHIPINT','SHIPBOND','GIFT WRAP')
```
Similarly, a check of `ItemStatus` could be expressed as:
```
t.ItemStatus NOT IN (11,12,1,13,14,15,16,17)
```
Note that with a `NOT IN`, the inequality tests are AND'ed together rather than OR'd together. So that's equivalent to:
```
t.ItemStatus <> 11 AND t.itemStatus <> 12 AND ...
``` | SQL query to include/exclude an item line based on the status of other lines in the order | [
"",
"sql",
"sql-server",
""
] |
In SQL Server, I basically have a query to find duplicates in one table. I want to filter this list down to the duplicates that appear in the first table and are not contained in the second table.
to get duplicates:
```
select [OBId], COUNT(*) AS dupes
FROM [Broker] b
GROUP BY [OBId]
HAVING (COUNT(*) > 1)
```
Broker has Id, OBId
and I want all the duplicates that don't have a BrokerId in this table:
second table

I tried to do a subquery but I couldn't figure it out. | What if you simply use a subquery, like this:
```
select [OBId],
COUNT(*) AS dupes
FROM [Broker] b
WHERE [OBId] NOT IN (select other_table_id from other_table)
GROUP BY [OBId]
HAVING (COUNT(*) > 1)
``` | You could do it with a NOT EXISTS in a subquery:
```
SELECT sub.*
FROM
(SELECT [OBId]
, COUNT(*) AS dupes
FROM [Broker] b
GROUP BY [OBId]
HAVING COUNT(*) > 1) sub
WHERE NOT EXISTS
(SELECT 1
FROM SomeSecondTable sst
WHERE sub.OBId = BrokerId);
```
If you are trying to return the full set of records from the Broker table for the duplicates and you want to suppress the records that don't exist in the second table, then you could use this code:
```
SELECT sub.*
FROM [Broker] b
WHERE EXISTS
(SELECT [OBId]
, COUNT(*) AS dupes
FROM [Broker] sb
WHERE b.OBId = sb.OBId
GROUP BY [OBId]
HAVING COUNT(*) > 1)
AND NOT EXISTS
(SELECT 1
FROM SomeSecondTable sst
WHERE b.OBId = BrokerId);
``` | sql show duplicate rows missing from another table | [
"",
"sql",
"sql-server-2008",
""
] |
Does anyone know how to split words starting with capital letters from a string?
Example:
```
DECLARE @var1 varchar(100) = 'OneTwoThreeFour'
DECLARE @var2 varchar(100) = 'OneTwoThreeFourFive'
DECLARE @var3 varchar(100) = 'One'
SELECT @var1 as Col1, <?> as Col2
SELECT @var2 as Col1, <?> as Col2
SELECT @var3 as Col1, <?> as Col2
```
expected result:
```
Col1 Col2
OneTwoThreeFour     One Two Three Four
OneTwoThreeFourFive One Two Three Four Five
One One
```
If this is not possible (or if too long) a scalar function would be okay as well. | Here is a function I created that is similar to the one for removing non-alphabetic characters: [How to strip all non-alphabetic characters from string in SQL Server?](https://stackoverflow.com/questions/1007697/how-to-strip-all-non-alphabetic-characters-from-string-in-sql-server/1008566#1008566)
This one uses a case sensitive collation which actively seeks out a non-space/capital letter combination and then uses the STUFF function to insert the space. This IS a scalar UDF, so some folks will immediately say that it will be slower than other solutions. To that notion, I say, please test it. This function does not use any table data and only loops as many times as necessary, so it will likely give you very good performance.
```
Create Function dbo.Split_On_Upper_Case(@Temp VarChar(1000))
Returns VarChar(1000)
AS
Begin
Declare @KeepValues as varchar(50)
Set @KeepValues = '%[^ ][A-Z]%'
While PatIndex(@KeepValues collate Latin1_General_Bin, @Temp) > 0
Set @Temp = Stuff(@Temp, PatIndex(@KeepValues collate Latin1_General_Bin, @Temp) + 1, 0, ' ')
Return @Temp
End
```
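As a quick cross-check outside T-SQL, the same insert-a-space idea can be expressed with a regular expression; the Python function below only illustrates the logic of the PATINDEX/STUFF loop and is not part of the SQL solution:

```python
import re

def split_on_upper_case(s: str) -> str:
    # Insert a space wherever a non-space character is immediately
    # followed by a capital letter, like the %[^ ][A-Z]% pattern above.
    return re.sub(r'(?<=[^ ])(?=[A-Z])', ' ', s)

print(split_on_upper_case('OneTwoThreeFour'))  # One Two Three Four
print(split_on_upper_case('stackOverFlow'))    # stack Over Flow
print(split_on_upper_case('One'))              # One
```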
Call it like this:
```
Select dbo.Split_On_Upper_Case('OneTwoThreeFour')
Select dbo.Split_On_Upper_Case('OneTwoThreeFour')
Select dbo.Split_On_Upper_Case('One')
Select dbo.Split_On_Upper_Case('OneTwoThree')
Select dbo.Split_On_Upper_Case('stackOverFlow')
Select dbo.Split_On_Upper_Case('StackOverFlow')
``` | Here is a function I have just created.
**FUNCTION**
```
CREATE FUNCTION dbo.Split_On_Upper_Case
(
@String VARCHAR(4000)
)
RETURNS VARCHAR(4000)
AS
BEGIN
DECLARE @Char CHAR(1);
DECLARE @i INT = 0;
DECLARE @OutString VARCHAR(4000) = '';
WHILE (@i <= LEN(@String))
BEGIN
SELECT @Char = SUBSTRING(@String, @i,1)
IF (@Char = UPPER(@Char) Collate Latin1_General_CS_AI)
SET @OutString = @OutString + ' ' + @Char;
ELSE
SET @OutString = @OutString + @Char;
SET @i += 1;
END
SET @OutString = LTRIM(@OutString);
RETURN @OutString;
END
```
**Test Data**
```
DECLARE @TABLE TABLE (Strings VARCHAR(1000))
INSERT INTO @TABLE
VALUES ('OneTwoThree') ,
('FourFiveSix') ,
('SevenEightNine')
```
**Query**
```
SELECT dbo.Split_On_Upper_Case(Strings) AS Vals
FROM @TABLE
```
**Result Set**
```
╔══════════════════╗
║ Vals ║
╠══════════════════╣
║ One Two Three ║
║ Four Five Six ║
║ Seven Eight Nine ║
╚══════════════════╝
``` | Split words with a capital letter in sql | [
"sql",
"sql-server",
"t-sql"
] |
I have 5 tables and columns pertinent to this query:
* Dogs (ID, CallName, Color, Sex, Chipnumber, BreedID)
* Breed (ID, Name)
* Status (ID, Status, OwnedOnPremises)
* DogsStatus (ID, DogsID, StatusID, StatusDate, Note, ContactsID)
* Contacts (ID, Name)
I am wanting a result of all dogs and their LATEST status. For a test I am using the following records:
* Dogs (251, Tank, Fawn, M, 14410784, 23) (266, Bonnie, Brindle, 14964070, 23)
* Breed (23, Mastiff)
* Status (3, Sold) (4, Given Away) (7, Purchased) (9, Returned)
* DogsStatus (29, 251, 3, 2013-10-12, 5) (39, 251, 9, 2013-11-10, 17) (146, 251, 4, 2014-01-10, 7) (40, 266, 7, 2013-10-30, 1)
* Contacts (1, Person1) (5, Person5) (7, Person7) (17, Person17)
So far I have:
```
SELECT
d.CallName, b.Name AS 'Breed', d.Color, d.Sex, d.ChipNumber
FROM
Dogs d
JOIN
(SELECT
DogsID, MAX(StatusDate) as MaxStatusDate
FROM DogsStatus
GROUP BY DogsID) mds ON mds.DogsID = d.ID
JOIN
Breeds b ON b.ID = d.BreedID
```
This will return 2 unique records (1 for Tank and 1 for Bonnie), but whenever I try to get any other of the DogsStatus and/or Status info, I either return only one dog record, or all 3 of Tanks DogsStatus records.
Thanks in advance. | You'll need to join your `MaxStatusDate` to the `DogsStatus` table. That way you will only get the most recent status, in the case where you have multiple statuses.
Something like
```
SELECT
d.CallName, b.Name AS 'Breed', d.Color, d.Sex, d.ChipNumber
FROM
Dogs d
    inner join DogsStatus ds
        ON d.ID = ds.DogsID
JOIN
(SELECT
DogsID, MAX(StatusDate) as MaxStatusDate
FROM DogsStatus
     GROUP BY DogsID) mds ON mds.DogsID = d.ID
                         AND mds.MaxStatusDate = ds.StatusDate
JOIN
    Breeds b ON b.ID = d.BreedID
```
Something along those lines. | You're close. You just need to go back to the DogsStatus table to get that full record. Note that I prefer CTEs for this, but your existing derived table (subquery) approach works just fine, too:
```
With StatusDates As
(
SELECT
DogsID, MAX(StatusDate) as StatusDate
FROM DogsStatus
GROUP BY DogsID
), CurrentStatus As
(
SELECT ds.*
  FROM DogsStatus ds
INNER JOIN StatusDates sd ON sd.DogsID = ds.DogsID AND ds.StatusDate = sd.StatusDate
)
SELECT d.CallName, b.Name As Breed, d.Color, d.Sex, d.ChipNumber
, s.Status, cs.StatusDate, c.Name As ContactName
FROM Dogs d
INNER JOIN CurrentStatus cs ON cs.DogsID = d.ID
INNER JOIN Breed b on b.ID = d.BreedID
INNER JOIN Status s on s.ID = cs.StatusID
INNER JOIN Contacts c on c.ID = cs.ContactsID
```
You may want to use a LEFT join for some of those, and then change the select list to use coalesce() expressions to clean up the NULLs. | SQL Query return latest date with additional information from tables | [
"sql",
"sql-server"
] |
Given a column containing ngrams in a `VARCHAR` with `utf8mb4_unicode_ci` collation:
```
+---------------------------+
| ngram |
+---------------------------+
| stack overflow |
| stack |
| overflow |
| stack overflow protection |
| overflow protection |
| protection |
+---------------------------+
```
And a query:
```
SELECT * FROM ngrams WHERE ngram IN ('stack', 'stack overflow', 'protection', 'overflow')
```
Given the rows returned by this query, how can I keep only the rows with the longest ngrams **from the returned rows**?
In this example, I get 4 rows: `stack`, `stack overflow`, `protection`, and `overflow`.
Then, I need to filter rows like this:
* I filter out `stack`, because `stack overflow` exists in the returned rows
* I keep `stack overflow`, because no other returned row is a ngram containing `stack overflow` (there is `stack overflow protection` in the table, but it's not in the returned rows)
* I keep `protection` too
* I filter out `overflow`, because `stack overflow` exists in the returned rows
---
It must be done in MySQL because of collations (comparisons outside of MySQL wouldn't give the same results as in MySQL). (Unless I'm not aware of some MySQL function that exposes the *collated* version of a string.)
---
I can think of the following solution: ([sql fiddle](http://www.sqlfiddle.com/#!2/f8be79/33))
```
SELECT ngram
FROM ngrams n1
WHERE n1.ngram IN ('stack', 'stack overflow', 'protection')
AND NOT EXISTS (
SELECT 1
FROM ngrams n2
WHERE n2.ngram IN ('stack', 'stack overflow', 'protection')
AND LENGTH(n2.ngram) > LENGTH(n1.ngram)
AND CONCAT(' ', n2.ngram, ' ') LIKE CONCAT('% ', n1.ngram, ' %')
)
```
It's inefficient, though, since the sub-query will be executed for every matched ngram.
---
So I'm searching for
* either a way to make this query efficient
* or a way to do this reliably outside of MySQL (taking collations into account) | If I understand your logic correctly, this query should give you the correct result:
```
SELECT n1.ngram
FROM
ngrams n1 LEFT JOIN ngrams n2
ON
n2.ngram IN ('stack', 'stack overflow', 'protection')
AND n2.ngram LIKE CONCAT('%', n1.ngram, '%')
AND CHAR_LENGTH(n1.ngram) < CHAR_LENGTH(n2.ngram)
WHERE
n1.ngram IN ('stack', 'stack overflow', 'protection')
AND n2.ngram IS NULL;
```
Please see fiddle [here](http://www.sqlfiddle.com/#!2/20d2a7/2). But since I expect that your table could have a lot of records, while your list of words is certainly much more limited, why not remove the shortest ngrams from this list before executing the actual query? My idea is to reduce the list
```
('stack', 'stack overflow', 'protection')
```
to
```
('stack overflow', 'protection')
```
and this query should do the trick:
```
SELECT *
FROM
ngrams
WHERE
ngram IN (
SELECT s1.ngram
FROM (
SELECT DISTINCT ngram
FROM ngrams
WHERE ngram IN ('stack','stack overflow','protection')
) s1 LEFT JOIN (
SELECT DISTINCT ngram
FROM ngrams
WHERE ngram IN ('stack','stack overflow','protection')
) s2
ON s2.ngram LIKE CONCAT('%', s1.ngram, '%')
AND CHAR_LENGTH(s1.ngram) < CHAR_LENGTH(s2.ngram)
WHERE
s2.ngram IS NULL
);
```
Yes I'm querying the table `ngrams` twice before joining the result back to `ngrams` again, because we have to make sure that the longest value actually exists in the table, but if you have a proper index on the ngram column the two derived queries that use DISTINCT should be very efficient:
```
ALTER TABLE ngrams ADD INDEX idx_ngram (ngram);
```
Fiddle is [here](http://www.sqlfiddle.com/#!2/a208b0/2).
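For the second option in the question (doing the filtering outside of MySQL), the keep-longest logic itself is short. The Python sketch below uses plain codepoint comparison, so it deliberately ignores the utf8mb4_unicode_ci collation caveat:

```python
def keep_longest(ngrams):
    # Drop any ngram that occurs as a whole-word substring of a longer
    # ngram in the same result set (word boundaries via padded spaces).
    return [a for a in ngrams
            if not any(f' {a} ' in f' {b} ' and len(b) > len(a)
                       for b in ngrams)]

matched = ['stack', 'stack overflow', 'protection', 'overflow']
print(keep_longest(matched))  # ['stack overflow', 'protection']
```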
**Edit:**
As samuil correctly noted, if you just need to find the longest ngrams and not the whole rows associated with them, then you don't need the outer query, and you can just execute the inner query. With the proper index, the two SELECT DISTINCT queries will be very efficient, and even if the JOIN cannot be optimized (`n2.ngram LIKE CONCAT('%', n1.ngram, '%')` can't take advantage of an index) it will be executed only on a few already filtered records and should be quite fast. | You are trying to filter the ngrams in the query itself.
It is probably more efficient to do it in two steps.
Start with a table with all possible ngrams:
```
CREATE TABLE original (ngram varchar(100) NOT NULL)
GO
CREATE TABLE refined (ngram varchar(100) NOT NULL PRIMARY KEY)
GO
INSERT INTO original (ngram)
SELECT DISTINCT ngram
FROM ngrams
WHERE ngram IN ('stack', 'stack overflow', 'protection')
GO
INSERT INTO refined (ngram)
SELECT ngram
FROM original
```
Then delete the ones you do not want.
For each ngram, generate all possible substrings. For each substring, delete that entry (if any) from the list.
It takes a couple of nested loops, but unless your ngrams contain an extremely large number of words, it should not take much time.
```
CREATE PROCEDURE refine()
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE words varchar(100);
DECLARE posFrom, posTo int;
DECLARE cur CURSOR FOR SELECT ngram FROM original;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN cur;
read_loop: LOOP
FETCH cur INTO words;
IF done THEN
LEAVE read_loop;
END IF;
SET posFrom = 1;
REPEAT
SET posTo = LOCATE(' ', words, posFrom);
WHILE posTo > 0 DO
DELETE FROM refined WHERE ngram = SUBSTRING(words, posFrom, posTo - posFrom);
SET posTo = LOCATE(' ', words, posTo + 1);
END WHILE;
IF posFrom > 1 THEN
DELETE FROM refined WHERE ngram = SUBSTRING(words, posFrom);
END IF;
SET posFrom = LOCATE(' ', words, posFrom) + 1;
UNTIL posFrom = 1 END REPEAT;
END LOOP;
CLOSE cur;
END
```
What's left, is a table with only the longest ngrams:
```
CALL refine;
SELECT ngram FROM refined;
```
SQL Fiddle: <http://sqlfiddle.com/#!2/029dc/1/1>
---
**EDIT:** I added an index on table `refined`; now it should run in *O(n)* time. | Find longest matching ngrams in MySQL | [
"mysql",
"sql",
"rdbms"
] |
I am having to find and replace a substring over all columns in all tables in a given database.
I tried this code from [Find and Replace string Values in All Tables and column in SQL Server](http://www.dbtalks.com/uploadfile/anjudidi/find-and-replace-string-values-in-all-tables-and-column-in-s/) in SQL Server 2012 SSMS, but it results in errors.
I think it's for an older version; it has problems with some table names that start with a number, for example dbo.123myTable.
Appreciate all the help in advance
Error Print:
> Msg 102, Level 15, State 1, Line 1
> Incorrect syntax near '.153'.
> UPDATE dbo.153Test2dev SET [ALCDescription] = REPLACE(convert(nvarchar(max),[ALCDescription]),'TestsMT','Glan') WHERE [ALCDescription] LIKE '%SherlinMT%'
> Updated: 1
>
> Msg 102, Level 15, State 1, Line 1
> Incorrect syntax near '.153'.
> UPDATE dbo.153TypeTest2 SET [FormTypeDescription] = REPLACE(convert(nvarchar(max),[FormTypeDescription]),'TestsMT','Glan') WHERE [FormTypeDescription] LIKE '%SherlinMT%'
> Updated: 1 | Just as a guess, to add delimiters to your table names, modify the script you linked to by editing this line:
```
SET @sqlCommand = 'UPDATE ' + @schema + '.' + @table + ' SET [' + @columnName + '] = REPLACE(convert(nvarchar(max),[' + @columnName + ']),''' + @stringToFind + ''',''' + @stringToReplace + ''')'
```
and change it to
```
SET @sqlCommand = 'UPDATE [' + @schema + '].[' + @table + '] SET [' + @columnName + '] = REPLACE(convert(nvarchar(max),[' + @columnName + ']),''' + @stringToFind + ''',''' + @stringToReplace + ''')'
``` | Are you sure table names may begin with a digit? If so, include them in '[' ']', like
```
UPDATE [dbo].[153TypeTest2].....
``` | how to do a search and replace for a string in mssql 2012 | [
"sql",
"sql-server",
"sql-server-2012",
"ssms-2012"
] |
I have the following table
```
Type SubType value
A 1 1
A 2 2
A 3 3
A 4 4
B 1 1
B 2 2
B 3 3
C 1 1
C 2 2
C 3 3
C 4 4
```
I want to group all rows except where Type=A, and the output should look like below:
```
Type Sum
A1 1
A2 2
A3 3
A4 4
B 6
C 10
```
Is it possible to group some rows on one condition and others on a different condition? | Yes, you have to write an expression that creates the group definition:
```
Select case When Type = 'A' then type + ltrim(str(subtype, 9))
Else Type End Type, Sum(Value) Sum
From table
Group By case When Type = 'A' then type + ltrim(str(subtype, 9))
Else Type End
``` | Yes, you can `GROUP BY` a `CASE` expression;
```
SELECT CASE WHEN type='A'
THEN type+CAST(subtype AS VARCHAR(MAX))
ELSE type END [Type],
SUM(value) [Sum]
FROM mytable
GROUP BY CASE WHEN type='A'
THEN type+CAST(subtype AS VARCHAR(MAX))
ELSE type END
ORDER BY [Type]
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!3/6c4ab/5).
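The same GROUP BY CASE shape can be sanity-checked quickly with an in-memory SQLite database (note SQLite concatenates with `||` instead of `+`, and allows the alias in GROUP BY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Type TEXT, SubType INTEGER, value INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ('A', 1, 1), ('A', 2, 2), ('A', 3, 3), ('A', 4, 4),
    ('B', 1, 1), ('B', 2, 2), ('B', 3, 3),
    ('C', 1, 1), ('C', 2, 2), ('C', 3, 3), ('C', 4, 4),
])
rows = conn.execute("""
SELECT CASE WHEN Type = 'A' THEN Type || SubType ELSE Type END AS grp,
       SUM(value)
FROM t
GROUP BY grp
ORDER BY grp
""").fetchall()
print(rows)  # A1..A4 individually, B and C summed
```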
In SQL Server 2012, you can use `CONCAT` without the cast, which simplifies the query somewhat. | How to group by using multiple conditions | [
"sql",
"sql-server",
"sql-server-2008"
] |
I have a program that retrieves data and stores it in a table each day, and then another program that queries that data to produce reports. The reports need to say when the data was last updated, so we know how old the information is.
It seems wasteful to add a column with the last update date to the table, since all the rows will have the same value. It also seems wasteful to create a table just to store one value.
What is the best solution for keeping track of the last time a table was updated? | My preferred way is to create a new "report" table to store the last time the target table is updated, and create a trigger to update the "report" table whenever there is change on the target table.
See this for more information on creating such trigger:
<http://www.techonthenet.com/oracle/triggers/after_update.php> | You probably should add a "LastModified" column and save the datetime when the row gets updated. This should be the best way to identify when your table was last updated. | Best way to keep track of when a table was last updated? | [
"sql",
"oracle",
"database-design",
"oracle11g"
] |
I have 2 tables 1st table contains following columns,
```
id code Name
1 c1 chk1
2 c2 chk2
3 c3 chk3
```
2nd table contains following columns,
```
id,Name,Chk1,chk2,Chk3
```
I have to add the column 'Chk4' to table2 dynamically if table1 is updated with the values 4, 'c4', 'ch4'. How do I write a procedure to perform this?
I've tried the following procedure but it's not working:
```
create proc Add_Check
as
begin
declare @Column varchar(50)
declare @query varchar(255)
declare @query1 varchar(255)
set @Column= (select top 1 QUOTENAME(Name)
from table1 where id=(Select MAX id) from table1))
if exists(select 1 from table1
where Name=@Column)
begin
set @query = 'alter table table2 add ' + @Column + ' Varchar (50)'
set @query1 = 'alter table table2 add ' + @Column + '_CompletedDate Varchar (50)'
exec(@query)
end
end
``` | Use this query as a procedure:
```
CREATE PROC ADD_CHECK
AS
BEGIN
DECLARE @COLUMN VARCHAR(50)
DECLARE @QUERY VARCHAR(255)
DECLARE @QUERY1 VARCHAR(255)
SET @COLUMN= (SELECT TOP 1 NAME FROM TABLE1 WHERE ID=(SELECT MAX (ID) FROM TABLE1))
IF EXISTS(SELECT 1 FROM TABLE1 WHERE NAME=@COLUMN)
BEGIN
SET @QUERY = 'ALTER TABLE TABLE2 ADD ' + @COLUMN + ' VARCHAR (50)'
SET @QUERY1 = 'ALTER TABLE TABLE2 ADD ' + @COLUMN + '_COMPLETEDDATE VARCHAR (50)'
EXEC(@QUERY)
END
END
``` | Try this:
```
CREATE PROCEDURE <procedurename>
AS
BEGIN
DECLARE @COLUMN varchar(10), @SQL Varchar(100);
SELECT @COLUMN = Name FROM Table1
WHERE id = (SELECT MAX(id) FROM Table1)
IF NOT EXISTS(
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Table2'
AND COLUMN_NAME = @COLUMN
)
BEGIN
SELECT @SQL =
'ALTER TABLE Table2 ADD '
+@COLUMN+' varchar(10),'
+@COLUMN+'_CompletedDate varchar(50)'
EXECUTE (@SQL)
END
END
``` | How to add dynamic column to an existing table | [
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
"sql-server-2012"
] |
We are receiving rather large files, of which we have no control over the format of, that are being bulk-loaded into a SQL Server table via SSIS to be later imported into our internal structure. These files can contain over 800 columns, and often the column names are not immediately recognizable.
As a result, we have a large table that represents the contents of the file with over 800 `Varchar` columns.
The problem is: I know what specific values I'm looking for in this data, but I do not know what column contains it. And eyeballing the data to find said column is neither efficient nor ideal.
My question is: is it at all possible to search a table by some value `N` and return the column names that have that value? I'd post some code that I've tried, but I really don't know where to start on this one... or if it's even possible.
For example:
```
A B C D E F G H I J K L M N ...
------------------------------------------------------------
'a' 'a' 'a' 'a' 'a' 'b' 'a' 'a' 'a' 'b' 'b' 'a' 'a' 'c' ...
```
If I were to search this table for the value 'b', I would want to get back the following results:
```
Columns
---------
F
J
K
```
Is something like this possible to do? | This script will search all tables and all string columns for a specific string. You might be able to adapt this for your needs:
```
DECLARE @tableName sysname
DECLARE @columnName sysname
DECLARE @value varchar(100)
DECLARE @sql varchar(2000)
DECLARE @sqlPreamble varchar(100)
SET @value = 'EDUQ4' -- *** Set this to the value you're searching for *** --
SET @sqlPreamble = 'IF EXISTS (SELECT 1 FROM '
DECLARE theTableCursor CURSOR FAST_FORWARD FOR
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_TYPE = 'BASE TABLE'
AND TABLE_NAME NOT LIKE '%temp%' AND TABLE_NAME != 'dtproperties' AND TABLE_NAME != 'sysdiagrams'
ORDER BY TABLE_NAME
OPEN theTableCursor
FETCH NEXT FROM theTableCursor INTO @tableName
WHILE @@FETCH_STATUS = 0 -- spin through Table entries
BEGIN
DECLARE theColumnCursor CURSOR FAST_FORWARD FOR
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @tableName AND (DATA_TYPE = 'nvarchar' OR DATA_TYPE = 'varchar')
ORDER BY ORDINAL_POSITION
OPEN theColumnCursor
FETCH NEXT FROM theColumnCursor INTO @columnName
WHILE @@FETCH_STATUS = 0 -- spin through Column entries
BEGIN
SET @sql = @tableName + ' WHERE ' + @columnName + ' LIKE ''' + @value +
''') PRINT ''Value found in Table: ' + @tableName + ', Column: ' + @columnName + ''''
EXEC (@sqlPreamble + @sql)
FETCH NEXT FROM theColumnCursor INTO @columnName
END
CLOSE theColumnCursor
DEALLOCATE theColumnCursor
FETCH NEXT FROM theTableCursor INTO @tableName
END
CLOSE theTableCursor
DEALLOCATE theTableCursor
``` | One option you have is to be a little creative using XML in SQL Server.
Turn one row at a time into XML using CROSS APPLY, and query for the nodes that have a certain value in a second CROSS APPLY.
Finally you output the distinct list of node names.
```
declare @Value nvarchar(max)
set @Value= 'b'
select distinct T3.X.value('local-name(.)', 'nvarchar(128)') as ColName
from YourTable as T1
cross apply (select T1.* for xml path(''), type) as T2(X)
cross apply T2.X.nodes('*[text() = sql:variable("@Value")]') as T3(X)
```
[SQL Fiddle](http://sqlfiddle.com/#!3/5fea0/1) | Selecting column names that have specified value | [
"sql",
"sql-server"
] |
In my database, I created a table named DEPTLOC:
```
//DEPTLOC
DNAME CITY
----------------------------
RESEARCH BOSTON
IT PARIS
SCIENCE LONDON
RESEARCH LONDON
SALES NEWYORK
RESEARCH PARIS
RESEARCH NEWYORK
MARKETING NEWYORK
```
So I used the following query:
```
SELECT CITY FROM DEPTLOC
INTERSECT
(
SELECT CITY FROM DEPTLOC WHERE DNAME='SALES'
UNION
SELECT CITY FROM DEPTLOC WHERE DNAME='RESEARCH'
);
```
But my output displays all the cities. My question is: I want to find which of DNAME='SALES' or DNAME='RESEARCH' has a location in all cities.
So from the table above, all distinct cities are
```
CITY
-------
BOSTON
PARIS
LONDON
NEWYORK
```
Since 'RESEARCH' has all the locations but 'SALES' has only some, my output should be displayed like this:
```
DNAME
---------
RESEARCH
```
What should I change in my query in order to get the correct output? | Here's a shorter query (than my previous answer) that checks only between DNAMEs SALES and RESEARCH to see which DNAME has all CITIES. The set operators UNION ALL and MINUS are used.
```
SELECT DNAME
FROM
(
SELECT 'SALES' DNAME, COUNT(*) MISSING_CITIES
FROM
(
SELECT DISTINCT CITY FROM DEPTLOC
MINUS
SELECT CITY FROM DEPTLOC WHERE DNAME = 'SALES'
)
UNION ALL
SELECT 'RESEARCH', COUNT(*) FROM
(
SELECT DISTINCT CITY FROM DEPTLOC
MINUS
SELECT CITY FROM DEPTLOC WHERE DNAME = 'RESEARCH'
)
)
WHERE MISSING_CITIES = 0;
``` | One way of doing this is to count the distinct locations, and join it on the departments query:
```
SELECT dname
FROM (SELECT dname, COUNT(*) AS dept_city_count
FROM deptloc
GROUP BY dname) d
JOIN (SELECT COUNT (DISTINCT city) AS city_count
          FROM deptloc) ON city_count = dept_city_count
``` | SELECT clause with MINUS INTERSECT and UNION | [
"sql",
"oracle"
] |
I want to drop and add a constraint
```
ALTER TABLE [dbo].[Entity] DROP CONSTRAINT [x_UpdateDate1]
-- default value
ALTER TABLE [dbo].[Entity] ADD CONSTRAINT [x_UpdateDate1]
DEFAULT ([dbo].[GETSYSTEMDATE]())
FOR [CreateDate]
```
I want to find an alternative for this, as dropping and adding a constraint is not advisable. Is there any better way to handle it? | No. You cannot alter a constraint, you need to drop it and then recreate it just as you did. | No, there's no other way than to drop and re-create constraints ([check also sqlauthority.com blog entry](http://blog.sqlauthority.com/2011/04/24/sql-server-how-to-alter-constraint/)) | Dropping and adding a constraint in sql | [
"sql",
"sql-server"
] |
I have a table as follows
```
fab_id x y z m
12 14 10 3 5
12 10 10 3 4
```
Here I'm using a GROUP BY clause on id. Now I want to subtract the column values which have the same id.
E.g. grouping on id (12), I want to subtract (14-10)X, (10-10)Y, (3-3)z, (5-4)m.
I know there is an aggregate function SUM for addition, but is there any function which I can use to subtract these values?
Or is there any other method to achieve the results?
Note: there is a chance that the value may come out negative. Can any function handle this?
One more example (ordered by correction_date desc, so the result shows the most recent correction first):
```
fab_id x y z m correction_date
14 20 12 4 4 2014-05-05 09:03
14 24 12 4 3 2014-05-05 08:05
14 26 12 4 6 2014-05-05 07:12
```
So the result to achieve, grouping on id (14), is to subtract (26-20)X, (12-12)Y, (4-4)z, (6-4)m.
```
select
fab_info.fab_id,
earliest_fab.x - latest_fab.x,
earliest_fab.y - latest_fab.y,
earliest_fab.z - latest_fab.z,
earliest_fab.m - latest_fab.m
from
(
select
fab_id,
min(correction_date) as min_correction_date,
max(correction_date) as max_correction_date
from fab
group by fab_id
) as fab_info
inner join fab as earliest_fab on
    earliest_fab.fab_id = fab_info.fab_id and
    earliest_fab.correction_date = fab_info.min_correction_date
inner join fab as latest_fab on
    latest_fab.fab_id = fab_info.fab_id and
    latest_fab.correction_date = fab_info.max_correction_date;
``` | Seeing as you say there will always be two rows, you can simply do a 'self join' and subtract the values from each other:
```
SELECT t1.fab_id, t1.x - t2.x as diffx, t1.y - t2.y as diffy, <remainder columns here>
from <table> t1
inner join <table> t2 on t1.fab_id = t2.fab_id and t1.correctiondate > t2.correctiondate
```
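Under the two-rows-per-id assumption, the self-join subtraction can be verified with a small SQLite example using the question's sample values (table and column names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fab (fab_id INT, x INT, y INT, z INT, m INT, correction_date TEXT)")
conn.executemany("INSERT INTO fab VALUES (?,?,?,?,?,?)", [
    (12, 14, 10, 3, 5, '2014-05-05 09:00'),  # newer row
    (12, 10, 10, 3, 4, '2014-05-05 08:00'),  # older row
])
rows = conn.execute("""
SELECT t1.fab_id, t1.x - t2.x, t1.y - t2.y, t1.z - t2.z, t1.m - t2.m
FROM fab t1
JOIN fab t2 ON t1.fab_id = t2.fab_id
           AND t1.correction_date > t2.correction_date
""").fetchall()
print(rows)  # newer minus older per column
```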
If you have more than two rows, then you'll need to make subqueries or use window ranking functions to figure out the largest and smallest correctiondate for each fab\_id and then you can do the very same as above by joining those two subqueries together instead of | SQL - subtract value from same column | [
"sql",
"sql-server-2008"
] |
I have a few related questions about MySQL indexes:
1. Does MySQL update the index every time something is inserted?
2. When MySQL updates the index due to an insert, does it rebuild the entire index?
3. Is there a way to make MySQL update the index after every x inserts?
I have a lot of inserts in my application and I'm afraid MySQL is rebuilding the index after every insert. The data does not have to be real-time, so I can update the index after a specific number of inserts (if it's possible). | MySQL is probably already doing what you describe, as much as it can.
In the case of InnoDB (which should be your default storage engine with MySQL), inserts and updates and deletes change primary key or unique key indexes immediately. But they never rebuild the *whole* index, they add new values into (or take values out of) these indexes.
For non-unique indexes, InnoDB performs [change buffering](https://dev.mysql.com/doc/refman/en/innodb-change-buffer.html). That is, it queues the changes, which will be merged into the index later in the background. It will even consolidate changes so that the physical updates to the index are done more efficiently.
You don't have to do anything to enable this feature, because it's enabled by default. MySQL 5.1 does change buffering only for `INSERT`. MySQL 5.5 and later additionally does change buffering for `UPDATE` and `DELETE`.
You can *disable* this feature if you want (for instance if you use SSD, avoiding random I/O is not so important, and you might want to ensure that queued changes don't accumulate). Normally, you should keep the feature enabled. | MySQL doesn't "rebuild" the index after every insert. MySQL inserts one or more rows into an existing index.
MySQL has lots of unusual options, and I don't know all of them. I would be surprised if there were an option that said: "Oh, let the index on the table be out of synch with the data in the table." Doesn't sound reasonable.
If you have lots of inserts, the best strategy is to do the inserts in one statement. Instead of:
```
insert into t(...)
select . . .
from t2
where id = id1;
```
Do:
```
insert into t(...)
select . . .
from t2
where id in (id1, id2, . . .)
```
An extension on this is to insert into a temporary table. Then just load the temporary table into the big table all at once:
```
insert into t(...)
select ...
from temptable;
```
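As a rough illustration of the batching idea, here is the batched form with Python's sqlite3 — one `executemany` call inside a single transaction, rather than a thousand autocommitted single-row statements (the same principle applies to MySQL's multi-row `VALUES` syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
rows = [(i, f"row{i}") for i in range(1000)]

# One batched statement inside one transaction instead of 1000 round trips
with conn:
    conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```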
Finally, it is sometimes faster to drop indexes, do a big insert (in one or more steps), and then re-create the indexes.
One caution: if you drop unique indexes you are also dropping the unique constraint. This is important if you are using `on duplicate key update`, because it needs a secondary index to find the duplicate key (except for the primary key). | Does MySQL update the index on all inserts? Can I make it update after every x inserts? | [
"mysql",
"sql",
"indexing"
] |
I am trying to delete all records from a table called user_enrole. I am using this query:
```
DELETE * FROM user_enrole
```
I think syntax of my query is not wrong but it is giving me error saying
> #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use
> near '\* FROM user\_enrole' at line 1
I have double-checked my syntax; I am not able to figure out what is going wrong. Can someone point it out please?
Is it occurring because of the relationship this table has with the user table, or what? | You don't need to use the asterisk in a delete. Just do `DELETE FROM user_enrole` to delete all records.
If you want to delete specific records filtered by one or more conditions, you will specify those conditions in the `WHERE` clause, like so:
```
DELETE FROM user_enrole
WHERE somecolumn > 1
AND anothercolumn = 'Username'
``` | When you write `SELECT * FROM...` the `*` means *everything*, i.e. all fields. All parts of the row(s). It makes no sense to use the same syntax when deleting, because you can only delete entire rows. This is why the syntax is `DELETE FROM...` | Delete query not working in mysql | [
"mysql",
"sql"
] |
Is it possible to split a value(X) over several months if it exceeds value(Y), without using a cursor?
For example, I have value X = 505 and I want to split it over as many months as possible, with each month having a maximum value of 100 (value Y = 100).
So my expected output would be:
```
JAN 100
FEB 100
MAR 100
APR 100
MAY 100
JUN 5
```
I'm not concerned about the leftover partial month (the 5 in June); if it were possible without this, that is OK.
```
create table T(
id int,
dat datetime,
val int,
primary key(id, dat)
);
insert into T values (1, '20140101', 100);
insert into T values (2, '20140101', 99);
insert into T values (3, '20140201', 274);
insert into T values (4, '20140301', 300);
declare @chunk int = 100;
select
id,
dateadd(month,n-1,dat) as dat,
case when n=max(n) over (partition by id) then (val-1)%@chunk+1 else @chunk end as val
from T
cross apply (
select n from Nums
where n <= ceiling((val+@chunk-1)/@chunk)
) as N(n);
```
Result:
```
id dat val
1 2014-01-01 100
2 2014-01-01 99
3 2014-02-01 100
3 2014-03-01 100
3 2014-04-01 74
4 2014-03-01 100
4 2014-04-01 100
4 2014-05-01 100
```
(You’ll need to adjust things slightly if your values aren’t integers.)
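The chunk arithmetic used above — `ceiling((val+@chunk-1)/@chunk)` rows, with `(val-1)%@chunk+1` in the last one — can be checked independently of SQL; a small Python sketch with the question's X = 505 and Y = 100:

```python
def split_value(total, chunk):
    # Number of chunks: ceiling((total + chunk - 1) / chunk) via integer math.
    n = (total + chunk - 1) // chunk
    # Full chunks, then the last (possibly partial) chunk: (total - 1) % chunk + 1.
    return [chunk] * (n - 1) + [(total - 1) % chunk + 1]

print(split_value(505, 100))  # [100, 100, 100, 100, 100, 5]
print(split_value(200, 100))  # [100, 100]
```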
Here’s some SQL to create a table of numbers from 1 to 1024 if you don’t have one handy.
```
create table Nums(
n int primary key
)
insert into Nums values (1);
declare @i int = 10;
while @i>0 begin
insert into Nums
select max(n) over () + n
from Nums;
set @i -= 1;
end;
``` | A little math can help: bit math.
Using the table from Steve Kass's answer:
```
declare @chunk int = 100;
WITH Base(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1
), Base10(N) AS (
SELECT (ROW_NUMBER() OVER (ORDER BY (SELECT NULL))) - 1
FROM Base
), Counter(N) AS (
SELECT u.N + 10*t.N
FROM Base10 u
CROSS JOIN Base10 t
)
SELECT id
, DateAdd(MONTH, N, dat) Month
, Cast((val - (@chunk * N)) / @chunk as BIT) * @chunk
+ (1 - Cast((val - (@chunk * N)) / @chunk as BIT)) * (val % @chunk) Value
FROM T
LEFT JOIN Counter ON N < CEILING(Cast(val as Float) / @chunk)
```
The CTEs are here only to get a counter; the bit conversion is used as an IF, and can be translated to:
```
IF (val - (100 * N) / 100) > 0 THEN
RETURN 100
ELSE
RETURN VAL % 100
END IF
```
The val column is cast to Float to prevent integer division. I used a `LEFT JOIN` instead of a `CROSS JOIN` to be able to add the condition there instead of in a `WHERE` clause.
[SQLFiddle](http://www.sqlfiddle.com/#!6/57f6c/2) demo (in the demo the chunk is static) | SQL Server - splitting a value over several months without a cursor | [
"",
"sql",
"sql-server",
""
] |
I have the weirdest problem and my extremely basic knowledge of SQL must be terribly wrong, but I cannot make sense of the behaviour illustrated below.
I have this file `test.csv`
```
id,field
A,0
B,1
C,2
D,"0"
E,"1"
F,"2"
G,
H,""
I," "
```
And this test code:
```
#! /usr/bin/perl
use strict;
use warnings;
use DBI;
use Devel::VersionDump qw(dump_versions);
my $dbh = DBI->connect ("dbi:CSV:");
$dbh->{RaiseError} = 1;
$dbh->{TraceLevel} = 0;
my $i = 0;
foreach my $cond ("TRUE",
"field <> 0 AND field <> 1",
"field = 0 OR field = 1",
"NOT (field = 0 OR field = 1)",
"NOT field = 0 OR field = 1",
"field <> 0",
"NOT field <> 0",
) {
print "Condition #" . $i++ . " is $cond:\n";
my $sth = $dbh->prepare("SELECT * FROM test.csv WHERE $cond");
$sth->execute();
$sth->dump_results();
};
print "\n\n";
dump_versions();
```
When run, this is the output:
```
Condition #0 is TRUE:
'A', '0'
'B', '1'
'C', '2'
'D', '0'
'E', '1'
'F', '2'
'G', ''
'H', ''
'I', ' '
9 rows
Condition #1 is field <> 0 AND field <> 1:
'C', '2'
'F', '2'
'G', ''
'H', ''
'I', ' '
5 rows
Condition #2 is field = 0 OR field = 1:
'A', '0'
'B', '1'
'D', '0'
'E', '1'
4 rows
Condition #3 is NOT (field = 0 OR field = 1):
'A', '0'
'B', '1'
'D', '0'
'E', '1'
4 rows
Condition #4 is NOT field = 0 OR field = 1:
'B', '1'
'C', '2'
'E', '1'
'F', '2'
'G', ''
'H', ''
'I', ' '
7 rows
Condition #5 is field <> 0:
'B', '1'
'C', '2'
'E', '1'
'F', '2'
'G', ''
'H', ''
'I', ' '
7 rows
Condition #6 is NOT field <> 0:
'A', '0'
'D', '0'
2 rows
Perl version: v5.16.3 on MSWin32 (C:\Program Files\Perl64\bin\perl.exe)
ActivePerl::Config - Unknown
ActiveState::Path - 1.01
AutoLoader - 5.73
C:::Program Files::Perl64::site::lib::sitecustomize.pl - Unknown
Carp - 1.26
Class::Struct - 0.63
Clone - 0.34
Config - Unknown
Config_git.pl - Unknown
Config_heavy.pl - Unknown
Cwd - 3.40
DBD::CSV - 0.41
DBD::File - 0.42
DBI - 1.631
DBI::DBD::SqlEngine - 0.06
DBI::SQL::Nano - 1.015544
Data::Dumper - 2.139
Devel::VersionDump - 0.02
DynaLoader - 1.14
Encode - 2.49
Encode::Alias - 2.16
Encode::Config - 2.05
Encode::Encoding - 2.05
Errno - 1.15
Exporter - 5.67
Exporter::Heavy - 5.67
Fcntl - 1.11
File::Basename - 2.84
File::Spec - 3.40
File::Spec::Unix - 3.40
File::Spec::Win32 - 3.40
File::stat - 1.05
IO - 1.25_06
IO::Dir - 1.1
IO::File - 1.16
IO::Handle - 1.33
IO::Seekable - 1.1
List::Util - 1.27
Math::BigFloat - 1.997
Math::BigInt - 1.998
Math::BigInt::Calc - 1.997
Math::Complex - 1.59
Math::Trig - 1.23
Params::Util - 1.07
SQL::Dialects::AnyData - 1.405
SQL::Dialects::Role - 1.405
SQL::Eval - 1.405
SQL::Parser - 1.405
SQL::Statement - 1.405
SQL::Statement::Function - 1.405
SQL::Statement::Functions - 1.405
SQL::Statement::Operation - 1.405
SQL::Statement::Placeholder - 1.405
SQL::Statement::RAM - 1.405
SQL::Statement::Term - 1.405
SQL::Statement::TermFactory - 1.405
SQL::Statement::Util - 1.405
Scalar::Util - 1.27
SelectSaver - 1.02
Symbol - 1.07
Text::CSV_XS - 1.07
Tie::Hash - 1.04
Time::HiRes - 1.9725
Win32 - 0.47
XSLoader - 0.16
base - 2.18
bytes - 1.04
constant - 1.25
integer - 1.00
overload - 1.18
overloading - 0.02
sort - 2.01
strict - 1.07
unicore::Heavy.pl - Unknown
unicore::lib::Perl::Word.pl - Unknown
unicore::lib::Perl::_PerlIDS.pl - Unknown
utf8 - 1.09
utf8_heavy.pl - Unknown
vars - 1.02
warnings - 1.13
warnings::register - 1.02
```
Condition #0 shows the complete dataset and is fine.
Condition #1 is just some compound condition and works fine.
Condition #2 is the opposite condition (basic logic rules used to invert it), and works fine too.
Yet, condition #3 should be the opposite of #2 and thus equal to #1, but the result is the same as #2: **I cannot make any sense of this**.
Condition #4 shows that, omitting the parentheses, NOT does work fine, but of course this query is different from any of the previous ones.
Conditions #5 and #6 show a situation where NOT acts exactly as one would expect.
So, why does NOT on a compound condition act as if the NOT were not specified at all?!
By the way, I read this scary post [Perl DBD::CSV - SQL Syntax - "AND" clause is not working properly](https://stackoverflow.com/questions/12164087/perl-dbdcsv-sql-syntax-and-clause-is-not-working-properly) and added [Devel::VersionDump](http://search.cpan.org/perldoc?Devel%3a%3aVersionDump) to check whether I have a similar issue but it seems to me that all relevant packages are the newest available. Hence, I really have no clue about this. | I confirm it's a bug of SQL::Parser:
```
'where_clause' => HASH(0x7f9686737480)
'arg1' => HASH(0x7f9686808248)
'arg1' => HASH(0x7f96866b50f8)
'fullorg' => 'field'
'type' => 'column'
'value' => 'field'
'arg2' => HASH(0x7f968588dfe0)
'fullorg' => 0
'type' => 'number'
'value' => 0
'neg' => 0
'nots' => HASH(0x7f96866b55d8)
empty hash
'op' => '='
'arg2' => HASH(0x7f9684498ce0)
'arg1' => HASH(0x7f96845fb798)
'fullorg' => 'field'
'type' => 'column'
'value' => 'field'
'arg2' => HASH(0x7f96866b5158)
'fullorg' => 1
'type' => 'number'
'value' => 1
'neg' => 0
'nots' => HASH(0x7f96866b55a8)
empty hash
'op' => '='
'neg' => 0
'nots' => HASH(0x7f9686808320)
empty hash
'op' => 'OR'
```
The top-most "neg" should be 1. Please open a ticket at <https://rt.cpan.org/Dist/Display.html?Name=SQL-Statement> - when you refer to this thread, the test case is proven :)
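For comparison, a conforming SQL engine does negate the compound condition; here checked with sqlite3 driven from Python (integer data of my own, to keep CSV typing subtleties out of the picture):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id TEXT, field INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [("A", 0), ("B", 1), ("C", 2), ("F", 2)])

# NOT (field = 0 OR field = 1) must be the complement of (field = 0 OR field = 1)
rows = conn.execute(
    "SELECT id FROM test WHERE NOT (field = 0 OR field = 1) ORDER BY id"
).fetchall()
print(rows)  # [('C',), ('F',)]
```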
Cheers,
Jens | SQL logic for DBD::CSV is NOT contained in DBD::CSV, which is just a thin glue layer between Text::CSV\_XS and DBI.
All SQL knowledge is dealt with by SQL::Statement. If you think you found a real bug, please try to dig in that module and find the cause, create a patch and post the issue with the patch on RT :) | Perl, DBI, and SQL: Why NOT does not work in some SQL queries? | [
"",
"sql",
"perl",
"csv",
"dbi",
""
] |
I have two simple SQL tables defined below:
* friend(id1, id2)
* Person(id, name)
The friend table is like:
```
Id1 Id2
1 2
1 3
2 3
3 4
```
How can I query the database for the pairs of names of friends, omitting the duplicates?
(That means if the pair of 'john' and 'david' is in the answer, I do not need the 'david' and 'john' pair.) | Use a projection to ensure that the friend pairs are ordered lowest first, which makes the duplicate elimination trivial using `DISTINCT`:
```
WITH OrderedFriends AS
(
SELECT
CASE WHEN ID1 < ID2 THEN ID1 ELSE ID2 END AS ID1,
CASE WHEN ID1 < ID2 THEN ID2 ELSE ID1 END AS ID2
FROM Friends
)
SELECT DISTINCT
ID1, ID2
FROM OrderedFriends ;
```
[SqlFiddle Here](http://sqlfiddle.com/#!6/d41d8/17266)
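The same projection can be joined back to `Person` to get the name pairs the question asks for; a runnable sketch using sqlite3 from Python (the sample names are my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (id INTEGER, name TEXT);
CREATE TABLE Friend (id1 INTEGER, id2 INTEGER);
INSERT INTO Person VALUES (1,'ann'),(2,'bob'),(3,'carl'),(4,'dana');
INSERT INTO Friend VALUES (1,2),(1,3),(2,3),(3,4),(2,1);  -- (2,1) duplicates (1,2)
""")

# Same idea as the CTE above: order each pair lowest-id first, then DISTINCT,
# then join to Person twice for the names.
rows = conn.execute("""
    SELECT DISTINCT p1.name, p2.name
    FROM (SELECT CASE WHEN id1 < id2 THEN id1 ELSE id2 END AS a,
                 CASE WHEN id1 < id2 THEN id2 ELSE id1 END AS b
          FROM Friend) f
    JOIN Person p1 ON p1.id = f.a
    JOIN Person p2 ON p2.id = f.b
    ORDER BY p1.name, p2.name
""").fetchall()
print(rows)
```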
Note, however, that the original ordering of the Friend pairs is lost, if that matters. | ```
SELECT p1.name,
p2.name
FROM Friend AS f1
JOIN Person AS p1 ON p1.id = f1.id1
JOIN Person AS p2 ON f1.id2 = p2.id
WHERE f1.id1 < f1.id2
```
Maybe this will help. | removing duplicate results from friendship table query | [
"",
"sql",
""
] |
I designed following skill database with three tables:
1. employees with columns name, personnelnumber
2. skills with columns skillname, skillid
3. skillmapping with columns skillid, personnelnumber, skilllevel
1) and 2) just assign numbers to persons and skills. 3) assigns skill levels to employees.
For example, row 1 3 6 in table 3) means person with personal number 3 has knowledge of skill number 1 and the knowledge level is 6.
What I want is to retrieve persons who at the same time are skilled in
1. skill 2 with knowledge level 3
2. skill 4 with knowledge level 5 and
3. skill 8 with knowledge level 2.
What would be the best way to do that? I thought about first selecting persons who meet condition 1), then select from them persons who meet condition 2) and then select from the result people meeting condition 3). But for that approach I have to create temporary tables and go beyond SQL using some procedural programming language. Isn't there a better approach? | I'd use a subquery to count the number of skill matches a person has, and make sure it comes up as three:
```
SELECT name
FROM person
WHERE personnelnumber IN (SELECT personnelnumber
FROM skillmapping
WHERE (skillid = 2 AND skilllevel = 3) OR
(skillid = 4 AND skilllevel = 5) OR
(skillid = 8 AND skilllevel = 2)
GROUP BY personnelnumber
HAVING COUNT(*) = 3)
``` | Should be a fairly straightforward inner join:
```
SELECT e.*
FROM employees e
JOIN skillmapping s1 ON s1.personnelnumber = e.personnelnumber
AND s1.skillid = 2
AND s1.skilllevel = 3
JOIN skillmapping s2 ON s2.personnelnumber = e.personnelnumber
AND s2.skillid = 4
AND s2.skilllevel = 5
JOIN skillmapping s3 ON s3.personnelnumber = e.personnelnumber
AND s3.skillid = 8
AND s3.skilllevel = 2
``` | Query for a skill database | [
"",
"sql",
""
] |
**Hi, please help me. How do I join 3 columns with null values?**
```
SELECT [item],[Prox],[z], [item]+[Prox]+[z] as result FROM [FIELD$];
```
**Result.**
 | As concatenating multiple strings with at least one null value results in `NULL` you may use `coalesce` to solve this:
```
SELECT
[item],
[Prox],
[z],
coalesce([item], '') + coalesce([Prox], '') + coalesce([z], '') as result
FROM
[FIELD$];
```
`coalesce` is ANSI standard and available in almost all reasonable databases. | Try this:
```
SELECT [item],[Prox],[z], COALESCE([item],'')+COALESCE([Prox],'')+COALESCE([z],'') as result
FROM [FIELD$];
```
**Explanation:**
`COALESCE` evaluates the arguments in order and returns the current value of the first expression that initially does not evaluate to NULL.
i.e., If `[item]` is `NULL`, then `COALESCE([item],'')` will return an empty string.
**Other alternatives:**
Instead of `COALESCE(ColName,'')`, you can use:
1. `ISNULL(ColName,'')` for `SQL Server`.
2. `IFNULL(ColName,'')` for `MySQL`.
3. `NVL(ColName,'')` for `Oracle`. | How join columns with null value? | [
"",
"sql",
"sql-server",
"database",
"join",
""
] |
I want to create a barcode by merge and combine two column.
here is my table :
```
ID | Items1 | Items2 | BArcode
001 | Shirt | CPU |
002 | Shirt | CPU |
001 | Shoes | Monitor |
002 | Jacket | Monitor |
001 | Shoes | CPU |
002 | | Keyboard |
002 | | Keyboard |
001 | Shirt | Keyboard |
002 | Shirt | |
```
The barcode is created based on ID+Items1/Items2+Count. The count comes from counting how many times one ID has ordered the same item. I want it to display only the data that hasn't had a barcode generated yet, i.e. when the Barcode column value is null. From the above table, the result should be:
```
ID | Barcode |
001 | 001Shirt1 |
002 | 002Shirt1 |
001 | 001Shoes1 |
002 | 002Jacket1 |
001 | 001Shoes2 |
001 | 001Shirt2 |
002 | 002Shirt2 |
001 | 001CPU1 |
002 | 002CPU1 |
001 | 001Monitor1 |
002 | 002Monitor1 |
001 | 001CPU2 |
002 | 002Keyboard1 |
002 | 002Keyboard2 |
001 | 001Keyboard1 |
```
here is my first code :
```
Dim strcommand As String = "select [ID], ([ID] + [Items1])
as Barcode from tbl_Request where [Items1] != 'null'
and Barcode = 'null' union select ([ID] + [Items2])
from tbl_Request where [Items2] != 'Null' and Barcode = 'null'"
```
It doesn't work; it says "All queries combined using a UNION, INTERSECT or EXCEPT operator must have an equal number of expressions in their target lists". In my code above I haven't implemented the counting yet. Does anyone know how to do it?
Thanks in advance. | You can use this query to generate the `barcode` values:
```
;with cte as
(select id, item1 item, row_number() over (partition by id, item1 order by getdate()) rn
from items
where item1 is not null and barcode is null
union
select id, item2 item, row_number() over (partition by id, item2 order by getdate()) rn
from items
where item2 is not null and barcode is null)
select id, cast(id as varchar) + item + cast(rn as varchar) barcode
from cte
```
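As a sanity check on the counting logic (each repeat of the same ID/item pair gets the next counter value), here is the same rule in plain Python, applied to the question's Items1 column in input order (my own sketch, not part of the T-SQL answer):

```python
from collections import Counter

rows = [("001", "Shirt"), ("002", "Shirt"), ("001", "Shoes"),
        ("002", "Jacket"), ("001", "Shoes"), ("001", "Shirt"),
        ("002", "Shirt")]  # Items1 column, null rows skipped

seen = Counter()
barcodes = []
for emp_id, item in rows:
    seen[(emp_id, item)] += 1  # running count per (ID, item), like ROW_NUMBER per partition
    barcodes.append(f"{emp_id}{item}{seen[(emp_id, item)]}")
print(barcodes)
```

Note that `row_number() ... order by getdate()` imposes no deterministic order, so the T-SQL counter may assign the duplicates in a different order than this input-order sketch.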
If you wanted to add this to a new table, say `tbl_barcode` with columns `id` and `barcode`, you would do this:
```
;with cte as
(select id, item1 item, row_number() over (partition by id, item1 order by getdate()) rn
from items
where item1 is not null and barcode is null
union
select id, item2 item, row_number() over (partition by id, item2 order by getdate()) rn
from items
where item2 is not null and barcode is null)
insert into tbl_barcode (id, barcode)
select id, cast(id as varchar) + item + cast(rn as varchar) barcode
from cte
``` | The error that you are getting is that the union join that you have created does not contain the same fields in the second select statement, as your original select statement. [SQL UNION Operator](http://www.w3schools.com/sql/sql_union.asp), Notice that each SELECT statement within the UNION **must have the same number of columns**.
So therefore you will need to change
```
select ([ID] + [Items2])
```
to
```
select [ID], ([ID] + [Items2])
``` | SQL SERVER merge two column, combine it, and count the same | [
"",
"asp.net",
"sql",
"sql-server",
"vb.net",
""
] |
I'm trying to generate a random time between 8:00 AM and 8:00 PM *for each row* that is selected from a data set, however, I always get the ***same*** random value for each row – I want it to be ***different*** for each row.
*Table schema & data:*
```
╔══════╦════════════════╗
║ ID ║ CREATED_DATE ║
╠══════╬════════════════╣
║ ID/1 ║ 26/04/2014 ║
║ ID/2 ║ 26/04/2014 ║
║ ID/3 ║ 26/04/2014 ║
║ ID/4 ║ 26/04/2014 ║
║ ID/5 ║ 26/04/2014 ║
╚══════╩════════════════╝
```
*Сurrent SQL statement:*
```
SELECT [ID]
, MyFunction.dbo.AddWorkDays(14, [CREATED_DATE]) AS [New Date]
, CONVERT(VARCHAR, DATEADD(MILLISECOND, CAST(43200000 * RAND() AS INT), CONVERT(TIME, '08:00')), 114) AS [New Time]
FROM [RandomTable]
```
*Current results (**same** time for each row in the `[New Time]` column):*
```
╔══════╦════════════════╦════════════════╗
║ ID ║ New Date ║ New Time ║
╠══════╬════════════════╬════════════════╣
║ ID/1 ║ 10/05/2014 ║ 09:41:43 ║
║ ID/2 ║ 10/05/2014 ║ 09:41:43 ║
║ ID/3 ║ 10/05/2014 ║ 09:41:43 ║
║ ID/4 ║ 10/05/2014 ║ 09:41:43 ║
║ ID/5 ║ 10/05/2014 ║ 09:41:43 ║
╚══════╩════════════════╩════════════════╝
```
*Desired results (**different** time for each row in the `[New Time]` column):*
```
╔══════╦════════════════╦════════════════╗
║ ID ║ New Date ║ New Time ║
╠══════╬════════════════╬════════════════╣
║ ID/1 ║ 10/05/2014 ║ 09:41:43 ║
║ ID/2 ║ 10/05/2014 ║ 15:05:23 ║
║ ID/3 ║ 10/05/2014 ║ 10:01:05 ║
║ ID/4 ║ 10/05/2014 ║ 19:32:45 ║
║ ID/5 ║ 10/05/2014 ║ 08:43:15 ║
╚══════╩════════════════╩════════════════╝
```
Any ideas on how to fix this? All of the above is just sample data – my real table has around 2800 records (*not sure if that will make a difference to anyone's suggestions*). | ## Interpretation of Original Question:
The question states:
* Generate a *random* time between 8:00 AM and 8:00 PM (i.e. a 12-hour window)
* It should be different *for each row* (i.e. unique across all rows)
* The real table has around 2800 records
Now factor in the following points:
* Sample data shows only a single date
* There are 86,400 seconds in 24 hours, hence 43,200 seconds in 12 hours
There is some ambiguity in the following areas:
* What exactly is random within the context of "different *for every row*", given that truly random values cannot be guaranteed to be different for every row. In fact, *truly* random numbers *could* theoretically be the *same* for every row. So is the emphasis on "random" or "different"? Or are we really talking about different but not sequentially ordered (to give the appearance of randomness without actually being random)?
* What if there are ever more than 2800 rows? What if there are 1 million rows?
* If there can be more than 43,200 rows, how to handle "different *for each row*" (since it is not possible to have unique across all rows)?
* Will the date ever vary? If so, are we really talking about "different for each row *per date*"?
* If "different for each row *per date*":
+ Can the times for each date follow the same, non-sequential pattern? Or does the pattern need to differ per each date?
+ Will there ever be more than 43,200 rows for any particular date? If so, the times can only be unique *per each set of 43,200 rows*.
Given the information above, there are a few ways to interpret the request:
1. **Emphasis on "random":** Dates and number of rows don't matter. Generate truly random times that are highly likely, but not *guaranteed*, to be unique using one of the three methods shown in the other answers:
* @notulysses: `RAND(CAST(NEWID() AS VARBINARY)) * 43200`
* @Steve Ford: `ABS(CHECKSUM(NewId()) % 43201)`
* @Vladimir Baranov : `CAST(43200000 * (CAST(CRYPT_GEN_RANDOM(4) as int) / 4294967295.0 + 0.5) as int)`
2. **Emphasis on "different for each row", always <= 43,200 rows:** If the number of rows never exceeds the number of available seconds, it is easy to guarantee unique times across all rows, regardless of same or different dates, and appear to be randomly ordered.
3. **Emphasis on "different for each row", could be > 43,200 rows:** If the number of rows can exceed the number of available seconds, then it is not possible to guarantee uniqueness across *all* rows, but it would be possible to still guarantee uniqueness across rows of any particular date, provided that no particular date has > 43,200 rows.
Hence, I based my answer on the idea that:
* Even if the number of rows for the O.P. never exceeds 2800, it is more likely that most others who are encountering a similar need for randomness would have a larger data set to work with (i.e. there could easily be 1 million rows, for any number of dates: 1, 5000, etc.)
* Either the sample data is overly simplistic in using the same date for all 5 rows, or even if the date is the same for all rows in this particular case, in most other cases that is less likely to happen
* Uniqueness is to be favored over Randomness
* If there is a pattern to the "seemingly random" ordering of the seconds for each date, there should at least be a varying offset to the start of the sequence across the dates (when the dates are ordered sequentially) to give the appearance of randomness between any small grouping of dates.
---
## Answer:
If the situation requires unique times, that cannot be guaranteed with any method of generating truly random values. I really like the use of `CRYPT_GEN_RANDOM` by @Vladimir Baranov, but it is nearly impossible to get a unique set of values generated:
```
DECLARE @Table TABLE (Col1 BIGINT NOT NULL UNIQUE);
INSERT INTO @Table (Col1)
SELECT CONVERT(BIGINT, CRYPT_GEN_RANDOM(4))
FROM [master].sys.objects so
CROSS JOIN [master].sys.objects so2
CROSS JOIN [master].sys.objects so3;
-- 753,571 rows
```
Increasing the random value to 8 bytes does seem to work:
```
DECLARE @Table TABLE (Col1 BIGINT NOT NULL UNIQUE);
INSERT INTO @Table (Col1)
SELECT CONVERT(BIGINT, CRYPT_GEN_RANDOM(8))
FROM [master].sys.objects so
CROSS JOIN [master].sys.objects so2
CROSS JOIN [master].sys.objects so3;
-- 753,571 rows
```
Of course, if we are generating down to the second, then there are only 86,400 of those. Reducing the scope seems to help as the following does occasionally work:
```
DECLARE @Table TABLE (Col1 BIGINT NOT NULL UNIQUE);
INSERT INTO @Table (Col1)
SELECT TOP (86400) CONVERT(BIGINT, CRYPT_GEN_RANDOM(4))
FROM [master].sys.objects so
CROSS JOIN [master].sys.objects so2
CROSS JOIN [master].sys.objects so3;
```
However, things get a bit trickier if the uniqueness needs *per each day* (which seems like a reasonable requirement of this type of project, as opposed to unique across all days). But a random number generator isn't going to know to reset at each new day.
If it is acceptable to merely have the appearance of being random, then we can guarantee uniqueness per each date without:
* looping / cursor constructs
* saving already used values in a table
* using `RAND()`, `NEWID()`, or `CRYPT_GEN_RANDOM()`
The following solution uses the concept of [Modular Multiplicative Inverses](http://ericlippert.com/2013/11/12/math-from-scratch-part-thirteen-multiplicative-inverses/) (MMI) which I learned about in this answer: [generate seemingly random unique numeric ID in SQL Server](https://stackoverflow.com/questions/26967215/generate-seemingly-random-unique-numeric-id-in-sql-server) . Of course, that question did not have a tightly-defined range of values like we have here with only 86,400 of them per day. So, I used a range of 86400 (as "Modulo") and tried a few "coprime" values (as "Integer") in an [online calculator](http://planetcalc.com/3311/) to get their MMIs:
* 13 (MMI = 39877)
* 37 (MMI = 51373)
* 59 (MMI = 39539)
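The MMIs need not come from an online calculator; Python 3.8+ can compute them directly with three-argument `pow`, as a quick check:

```python
# Modular multiplicative inverse: find mmi such that (k * mmi) % modulo == 1.
# Python 3.8+ supports this directly via pow with exponent -1.
modulo = 86400  # seconds per day

for k in (13, 37, 59):  # coprime "Integer" values from the answer
    mmi = pow(k, -1, modulo)
    assert (k * mmi) % modulo == 1
    print(k, mmi)  # 13 39877, 37 51373, 59 39539
```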
I use `ROW_NUMBER()` in a CTE, partitioned (i.e. grouped) by `CREATED_DATE` as a means of assigning each second of the day a value.
But, while the values generated for seconds 0, 1, 2, ... and so on sequentially will appear random, across different days that particular second will map to the same value. So, the second CTE (named "WhichSecond") shifts the starting point for each date by converting the date to an INT (which converts dates to a sequential offset from 1900-01-01) and then multiply by 101.
```
DECLARE @Data TABLE
(
ID INT NOT NULL IDENTITY(1, 1),
CREATED_DATE DATE NOT NULL
);
INSERT INTO @Data (CREATED_DATE) VALUES ('2014-10-05');
INSERT INTO @Data (CREATED_DATE) VALUES ('2014-10-05');
INSERT INTO @Data (CREATED_DATE) VALUES ('2014-10-05');
INSERT INTO @Data (CREATED_DATE) VALUES ('2014-10-05');
INSERT INTO @Data (CREATED_DATE) VALUES ('2014-10-05');
INSERT INTO @Data (CREATED_DATE) VALUES ('2015-03-15');
INSERT INTO @Data (CREATED_DATE) VALUES ('2016-10-22');
INSERT INTO @Data (CREATED_DATE) VALUES ('2015-03-15');
;WITH cte AS
(
SELECT tmp.ID,
CONVERT(DATETIME, tmp.CREATED_DATE) AS [CREATED_DATE],
ROW_NUMBER() OVER (PARTITION BY tmp.CREATED_DATE ORDER BY (SELECT NULL))
AS [RowNum]
FROM @Data tmp
), WhichSecond AS
(
SELECT cte.ID,
cte.CREATED_DATE,
((CONVERT(INT, cte.[CREATED_DATE]) - 29219) * 101) + cte.[RowNum]
AS [ThisSecond]
FROM cte
)
SELECT parts.*,
(parts.ThisSecond % 86400) AS [NormalizedSecond], -- wrap around to 0 when
-- value goes above 86,400
((parts.ThisSecond % 86400) * 39539) % 86400 AS [ActualSecond],
DATEADD(
SECOND,
(((parts.ThisSecond % 86400) * 39539) % 86400),
parts.CREATED_DATE
) AS [DateWithUniqueTime]
FROM WhichSecond parts
ORDER BY parts.ID;
```
Returns:
```
ID CREATED_DATE ThisSecond NormalizedSecond ActualSecond DateWithUniqueTime
1 2014-10-05 1282297 72697 11483 2014-10-05 03:11:23.000
2 2014-10-05 1282298 72698 51022 2014-10-05 14:10:22.000
3 2014-10-05 1282299 72699 4161 2014-10-05 01:09:21.000
4 2014-10-05 1282300 72700 43700 2014-10-05 12:08:20.000
5 2014-10-05 1282301 72701 83239 2014-10-05 23:07:19.000
6 2015-03-15 1298558 2558 52762 2015-03-15 14:39:22.000
7 2016-10-22 1357845 61845 83055 2016-10-22 23:04:15.000
8 2015-03-15 1298559 2559 5901 2015-03-15 01:38:21.000
```
---
If we want to only generate times between 8:00 AM and 8:00 PM, we only need to make a few minor adjustments:
1. Change the range (as "Modulo") from 86400 to half of it: 43200
2. Recalculate the MMI (can use the same "coprime" values as "Integer"): 39539 (same as before)
3. Add `28800` to the second parameter of the `DATEADD` as an 8 hour offset
The result will be a change to just one line (since the others are diagnostic):
```
-- second parameter of the DATEADD() call
28800 + (((parts.ThisSecond % 43200) * 39539) % 43200)
```
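To convince yourself that the `(n * 39539) % 43200` step never collides — i.e. it permutes the 43,200 seconds of the 12-hour window — you can enumerate it (a quick Python check of my own; the coprimality condition is the whole trick):

```python
# (n * k) % m is a bijection on range(m) whenever gcd(k, m) == 1,
# so every second in the 12-hour window is hit exactly once.
from math import gcd

k, m = 39539, 43200
assert gcd(k, m) == 1
mapped = {(n * k) % m for n in range(m)}
print("bijective:", len(mapped) == m)  # bijective: True
```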
---
Another means of shifting each day in a less predictable fashion would be to make use of `RAND()` by passing in the INT form of `CREATED_DATE` in the "WhichSecond" CTE. This would give a stable offset per each date since `RAND(x)` will return the same value `y` for the same value of `x` passed in, but will return a different value `y` for a different value of `x` passed in. Meaning:
```
RAND(1) = y1
RAND(2) = y2
RAND(3) = y3
RAND(2) = y2
```
The second time `RAND(2)` was called, it still returned the same value of `y2` that it returned the first time it was called.
Hence, the "WhichSecond" CTE could be:
```
(
SELECT cte.ID,
cte.CREATED_DATE,
(RAND(CONVERT(INT, cte.[CREATED_DATE])) * {some number}) + cte.[RowNum]
AS [ThisSecond]
FROM cte
)
``` | The issue *OP* had while using just `rand()` is due to its evaluation *once per query*.
*From the [**documentation**](https://msdn.microsoft.com/en-us/library/ms177610.aspx):*
> If *seed* is not specified, the SQL Server Database Engine assigns a *seed* value at random. For a specified *seed* value, the result returned is always the same.
The approach described below defeats that optimization and suppresses this behavior, so `rand()` is evaluated *once per row*:
```
dateadd( second
, rand(cast(newid() as varbinary)) * 43200
, cast('08:00:00' as time) )
```
* [`newid()`](https://msdn.microsoft.com/en-us//library/ms190348.aspx) generates unique value of type [`uniqueidentifier`](https://msdn.microsoft.com/en-us//library/ms187942.aspx);
* the value is converted with `cast` to be used as the *seed* in the [`rand([seed])`](https://msdn.microsoft.com/en-us//library/ms177610.aspx) function to generate a pseudo-random `float` value from *0 through 1*; as the *seed* is always unique, the returned value is unique too.
[**SQLFiddle**](http://sqlfiddle.com/#!3/84dc3/13) | Distinct random time generation in the fixed interval | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"random-time-generation",
""
] |
I want to count all goals in a soccer match in the first and second half, but I only have the data for the first half and the final score.
```
SELECT SUM( scorehome_ht + scoreguest_ht ) AS halftime,
(SUM( scorehome_end + scoreguest_end ) - halftime) AS end
FROM matches;
```
I'm new at stackoverflow, not familiar with MySQL and yes, I used google before :) | An alias assigned to a column can't be referenced by another expression in the SELECT list.
One workaround is to repeat the expression, and that's usually the most efficient, in terms of performance, e.g.
```
SELECT SUM( scorehome_ht + scoreguest_ht ) AS halftime,
SUM( scorehome_end + scoreguest_end ) - SUM( scorehome_ht + scoreguest_ht ) AS end
FROM matches;
```
---
I said that was one workaround, because there are a couple of other workarounds available. It's also possible to use an inline view, since a query has to reference the columns from the inline view by the name assigned to each column. This approach requires MySQL to run the inline view query and materialize it as a temporary MyISAM table, and then the outer query runs against the MyISAM table, so this approach is less efficient.
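The inline-view (derived table) workaround is easy to demonstrate end-to-end; a minimal sketch using sqlite3 from Python (I aliased the outer column `second_half` rather than the question's `end`, since `END` is a reserved word in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE matches (scorehome_ht INT, scoreguest_ht INT,
                      scorehome_end INT, scoreguest_end INT);
INSERT INTO matches VALUES (1, 0, 2, 1), (0, 0, 1, 3);
""")

# The outer query may reference `halftime` because it is a column
# of the derived table, not an alias in the same SELECT list.
row = conn.execute("""
    SELECT halftime, fulltime - halftime AS second_half
    FROM (SELECT SUM(scorehome_ht + scoreguest_ht) AS halftime,
                 SUM(scorehome_end + scoreguest_end) AS fulltime
          FROM matches) t
""").fetchone()
print(row)  # (1, 6)
```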
Another option is to make use of MySQL user variables to hold the result of an expression:
```
SELECT @halftime := SUM( scorehome_ht + scoreguest_ht ) AS halftime,
SUM( scorehome_end + scoreguest_end ) - @halftime AS end
FROM matches;
```
But this relies on undocumented behavior: MySQL processes the expressions in the SELECT list in the order they are listed, so the value assigned to `@halftime` is available in expressions following the assignment. (References to `@halftime` BEFORE the assignment will get whatever value is left there by a previous assignment, like from the previous row.) | You can't use aliases in the same select query, but what you can do is wrap them in a derived table:
```
SELECT halftime, scorehome_end + scoreguest_end - halftime AS end
FROM
(
SELECT scorehome_end,
scoreguest_end,
SUM( scorehome_ht + scoreguest_ht ) AS halftime
FROM matches
) x;
``` | SQL alias doesn't work | [
"",
"mysql",
"sql",
""
] |
We have two tables:
```
HUSBANDS
---------------
id
name
surname
age
WIFES
---------------
id
name
surname
age
husbandId --(with constrain -> HUSBANDS.ID)
```
and suppose that we need to write a procedure removing wives older than 60 years together with their husbands ;)
It would be ideal to do this with a single SQL statement, but it is impossible in Oracle to delete rows from two tables in one DELETE statement, right? So...
We can't do this like this:
```
PROCEDURE remove_old_wifes() IS
BEGIN
DELETE FROM husbands WHERE id IN (SELECT husbandId FROM wifes WHERE age >= 60);
DELETE FROM wifes WHERE age >= 60;
END;
```
because of the constraint.
On the other hand, the solution below is wrong as well:
```
PROCEDURE remove_old_wifes() IS
BEGIN
DELETE FROM wifes WHERE age >= 60;
DELETE FROM husbands WHERE id IN (SELECT husbandId FROM wifes WHERE age >= 60);
END;
```
because if we remove the wives first, none of the husbands will be removed...
What is the typical solution in this case?
**IMPORTANT:** I can't set cascade. | You can update the husbands first with an indicator that they have no wife. For your data, let's use `age = -1`. Then delete from wifes [sic] and then delete from husbands.
```
update husbands
set age = -1
where id in (select husbandId from wifes where age >= 60);
delete from wifes where age >= 60;
delete from husbands where age = -1;
``` | If no "unmarried husbands" are allowed in the table, you could just first remove the wives, and then all husbands that no longer have a wife. If temporary inconsistency is a problem, you may want to wrap the deletes inside a transaction.
```
PROCEDURE remove_old_wifes() IS
BEGIN
DELETE FROM wifes WHERE age >= 60;
DELETE FROM husbands WHERE id NOT IN (SELECT husbandId FROM wifes);
END;
``` | Deleting rows from two tables related by constraint | [
"",
"sql",
"oracle",
"plsql",
""
] |
Following is the table:
```
emp_id salary salary_date
Emp1 1000 Feb 01
Emp1 2000 Feb 15
Emp1 3000 Feb 28
Emp1 4000 Mar 01
Emp2 5000 Jan 01
Emp2 6000 Jan 15
Emp2 2000 Mar 01
Emp2 5000 Apr 01
Emp3 1000 Jan 01
Emp4 3000 Dec 31
Emp4 5000 Dec 01
```
And I want the following result:
```
Emp1 Feb 3000
Emp2 Jan 6000
Emp4 Dec 5000
Emp2 Apr 5000
Emp1 Mar 4000
``` | I have found my answer!!!
```
select emp_id, substr(sal_date,1,3) monthly, salary
from (select emp_id, sal_date, salary,
             max(salary) over (partition by substr(sal_date,1,3)) max_sal
      from emp_salary order by emp_id)
where salary = max_sal;
```
Result-set:
```
EMP_ID  MONTHLY  SALARY
------  -------  ------
Emp1    Mar      4000
Emp1    Feb      3000
Emp2    Jan      6000
Emp2    Apr      5000
Emp4    Dec      5000
``` | ```
SELECT e.Emp_ID,MaxSalary,MonthName
FROM employeeTable e
INNER JOIN
(
SELECT MAX(salary) as MaxSalary,
LEFT(salary_date,3) as MonthName
FROM employeeTable
GROUP BY LEFT(salary_date,3)
)t
ON e.Salary=t.MaxSalary
``` | I have an emp_salary table with emp_id, salary and salary_date. I want to write a query to find out which employee was paid highest in every month | [
"",
"sql",
"date",
"select",
"oracle11g",
"highest",
""
] |
This question was asked for [MySQL](https://stackoverflow.com/questions/4418776/what-is-the-default-mysql-join-standalone-inner-or) already, but for Transact-SQL, what is the default `JOIN` behaviour?
That is, is simply writing `JOIN` in a query synonymous with writing `INNER JOIN` (as is the case with MySQL), or something else, like perhaps `FULL OUTER JOIN`? | `JOIN` defaults to `INNER JOIN` behaviour.
To verify this, I ran the following code:
```
DECLARE @A TABLE (x INT)
INSERT INTO @A
SELECT 1 UNION ALL
SELECT 2
DECLARE @B TABLE (x INT)
INSERT INTO @B
SELECT 2 UNION ALL
SELECT 3
SELECT
A.x AS 'A.x',
B.x AS 'B.x'
FROM @A A
JOIN @B B
ON A.x = B.x
```
This produces just one row, consistent with `INNER JOIN` behaviour:
```
A.x | B.x
-----+-----
2 | 2
```
Contrast this with a `FULL OUTER JOIN`:
```
...
SELECT
A.x AS 'A.x',
B.x AS 'B.x'
FROM @A A
FULL OUTER JOIN @B B
ON A.x = B.x
```
This of course shows all three rows:
```
A.x | B.x
-----+-----
1 | NULL
2 | 2
NULL | 3
``` | In T-SQL, `JOIN` without an explicit type is an `INNER JOIN`, as specified by [the documentation on the `FROM` clause](https://msdn.microsoft.com/library/ms177634) (excerpt):
```
[ FROM { <table_source> } [ ,...n ] ]
<table_source> ::=
{
...
| <joined_table>
...
}
<joined_table> ::=
{
<table_source> <join_type> <table_source> ON <search_condition>
...
}
<join_type> ::=
[ { INNER | { { LEFT | RIGHT | FULL } [ OUTER ] } } [ <join_hint> ] ]
JOIN
```
> **INNER**
>
> Specifies all matching pairs of rows are returned. Discards
> unmatched rows from both tables. When no join type is specified, this
> is the default. | What is the default T-SQL JOIN behaviour, INNER or OUTER? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have one table (in MySQL) like this:
```
subject | predicate | object
A | P1 | X
X | P2 | B
```
I want to find the transitive relation using this table, so the result should look like this:
```
element1 | predicate1 | pivot | predicate2 | element2
A | P1 | X | P2 | B
```
I have tried to construct the query using a nested query, but I end up with a syntax error (I think `p2` in the nested query cannot be referenced from the main query):
```
select p1.subject, p1.predicate, p1.object, p2.predicate, p2.object
from some_relation p1
where p1.subject = 'A'
and p1.object = (select p2.subject from some_relation p2 where p2.object = 'B');
```
Anyone know how to do this kind of query? Is it possible? | Use a [self join](https://stackoverflow.com/questions/1284441/how-does-a-mysql-self-join-work):
```
SELECT
    a.subject   AS element1,
    a.predicate AS predicate1,
    a.object    AS pivot,
    b.predicate AS predicate2,
    b.object    AS element2
FROM some_relation a
JOIN some_relation b ON a.object = b.subject;
``` | I would write a simple `JOIN`:
```
select p1.subject, p1.predicate, p1.object, p2.predicate, p2.object
from some_relation p1
inner join some_relation p2 on p1.object = p2.subject and p2.object = 'B'
where p1.subject = 'A'
```
Or do you mean something different? | Select Transitive Relation on One Table | [
"",
"mysql",
"sql",
""
] |
I have a MS SQL Server Express 2012 table that is constantly being populated with data to let us know which of our products are available. Under each product are one or many sub\_products. When the process that updates the product does its update, it also updates all sub\_products at the same time (thus all will have the same founddate stamp, because they were found in the same run).

Because this data is constantly being added to the table, only the most recent data matters. We don't want to delete the old data, as it is used for other purposes, but I need help creating a statement to view only the newest data for each product (including sub\_products). The challenge is that we have thousands of products and sub\_products, and they will all have different "most recent found" times.
This sql fiddle is a very simplified version of what my data looks like: <http://sqlfiddle.com/#!3/0531b/1>
I would like help creating a query that returns only the most recent founddate (and corresponding data) for each product. I would like the result of this query (using the data set in the fiddle) to look like this:
```
product sub_product founddate
1 1 5/3/2014
1 2 5/3/2014
2 7 5/4/2014
2 8 5/4/2014
2 9 5/4/2014
3 10 4/15/2014
```
Any help is very appreciated. | Probably the most efficient way to do this is with a `not exists` clause:
```
select *
from project_data pd
where not exists (select 1
from project_data pd2
where pd2.product = pd.product and
pd2.founddate > pd.founddate
);
```
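On a large table, this correlated subquery benefits from a supporting composite index; a minimal sketch (the index name is illustrative):

```
CREATE INDEX ix_project_data_product_founddate
    ON project_data (product, founddate);
```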
The logic is: "Get me all rows from `project_data` where the same product does not have a larger date." This will perform best with an index on `project_data(product, founddate)`. | Here's another option joining the table to itself using `Max`:
```
select pd.product, pd.sub_product, pd.founddate
from project_data pd
join (select product, max(founddate) maxdt
from project_data
group by product
) t on pd.product = t.product and pd.founddate = t.maxdt
```
* [Updated Fiddle](http://sqlfiddle.com/#!3/c5d4e/1) | Need help creating SQL query to most recent data | [
"",
"sql",
"sql-server",
""
] |
If I create a table with a column that is supposed to be `DATE`, and when I `Insert` I leave that column blank, shouldn't it display the current date? Same with time?
For example...
```
CREATE TABLE Register
(
Name CHAR(20) NOT NULL,
Date DATE,
Time TIME
);
```
Then I Insert:
```
INSERT INTO Register (Name)
VALUES ('Howard');
```
I want it to display on the table:
```
Howard | 5/6/2014 | 8:30 PM
```
But instead it displays:
```
Howard | NULL | NULL
```
Is this incorrect, and if so, what am I supposed to `Insert` to make the current date and time of the insert display? | ***Firstly***, you should have a `PRIMARY KEY` in your table.
***Secondly***, you have not set default values for columns `Date` and `Time`. Also, you can't set them separately for the `DATE` and `TIME` types – you should use `TIMESTAMP` type and `DEFAULT CURRENT_TIMESTAMP` like :
```
CREATE TABLE Register (
Name CHAR(20) PRIMARY KEY NOT NULL,
Date TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```
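With that definition, an insert that omits the column picks up the current timestamp automatically; a quick sketch:

```
INSERT INTO Register (Name) VALUES ('Howard');
-- the Date column is filled in by DEFAULT CURRENT_TIMESTAMP
SELECT Name, Date FROM Register;
```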
***Thirdly***, if you want to use exactly two columns for date storage, you can set a trigger on the `INSERT` event for this table, as shown below (note the `DELIMITER` change so the trigger body can contain semicolons):
```
DELIMITER $$
CREATE TRIGGER default_date_time
BEFORE INSERT ON my_table_name
FOR EACH ROW
BEGIN
    SET NEW.Date = CURDATE();
    SET NEW.Time = CURTIME();
END$$
DELIMITER ;
``` | Here are two options:
1. Get rid of `Date` and `Time` columns and add `time stamp`
`INSERT INTO Register (Name,Ctime) VALUES ('Howard',CURRENT_TIMESTAMP);`
2. If you want to continue with your table structure
   ```
   INSERT INTO Register (Name, `Date`, `Time`) VALUES ('Howard', CURDATE(), CURTIME());
   ```
Also Note that `date` and `time` are reserved words of MySQL and hence should be quoted with backticks to avoid conflicting with reserved words. Or just rename it according to a `table name` format. | INSERT current date or time into MySQL | [
"",
"mysql",
"sql",
""
] |
I have a table
```
id | name | ts
------+------------+---------------------
3812 | name1 | 2014-05-01 00:10:02
3900 | name1 | 2014-05-02 00:10:03
3838 | name2 | 2014-05-01 00:10:08
3893 | name3 | 2014-05-02 00:10:02
3933 | name2 | 2014-05-02 00:10:14
3977 | name3 | 2014-05-03 00:10:01
3985 | name1 | 2014-05-03 00:10:02
4006 | name2 | 2014-05-03 00:10:10
3815 | name3 | 2014-05-01 00:10:02
```
I need to perform a select which returns only the latest (by 'table.ts') and only ONE entry for every DISTINCT value in field 'table.name':
```
id | name | ts
------+------------+---------------------
3977 | name3 | 2014-05-03 00:10:01
3985 | name1 | 2014-05-03 00:10:02
4006 | name2 | 2014-05-03 00:10:10
```
Help me for this task, please. | Assuming ID is incremented sequentially...
naming a table and column in the table with the same name may cause trouble later...
```
Select max(n.ID), n.name, max(n.TS)
From Name n
group by n.name
```
However if ID is not sequential -- and assuming name and ts are UNIQUE...
```
Select a.id, a.name, a.ts
from name a
inner join (Select n.name, max(n.ts) mts from name n group by n.name) b
on A.Name = B.Name and A.TS = b.mts
``` | ```
select id, name, ts
from (
select id,
name,
ts,
max(ts) over (partition by name) as max_ts
from the_table
) t
where ts = max_ts;
``` | PgSQL select latest ts for every distinct value in column | [
"",
"sql",
"postgresql",
"select",
"group-by",
"distinct",
""
] |
I am getting an issue and I feel like I fixed everything. MySQL workbench is still giving me this error. If you can please explain what is wrong I would be greatly appreciate it. I did check for key constraints in the Account table. It is giving the error in the account table.
```
CREATE TABLE Account(
AcctNum int AUTO_INCREMENT,
MemberID int,
Balance double,
PIN int,
creationDate date,
InitialBalance double,
CreatedByEmployee int,
type VARCHAR(20),
PRIMARY KEY(AcctNum),
FOREIGN KEY(MemberID) REFERENCES Member(MemNum),
FOREIGN KEY(CreatedByEmployee) REFERENCES Employee(EmpId)
);
CREATE TABLE Member(
MemNum int AUTO_INCREMENT,
DOB date,
CreditScore int,
AcctOpened date,
SSN VARCHAR(11),
Address VARCHAR(255),
PRIMARY KEY(MemNum)
);
CREATE TABLE Employee(
EmpId int AUTO_INCREMENT,
DOB date,
SSN VARCHAR(11),
HireDate date,
Salary double,
EmpLevel VARCHAR(50),
PRIMARY KEY(EmpId)
);
You need to create the referenced tables first, before creating the table that refers to them with a FOREIGN KEY.
The order of table creation should be as
```
CREATE TABLE Member(
MemNum int AUTO_INCREMENT,
DOB date,
CreditScore int,
AcctOpened date,
SSN VARCHAR(11),
Address VARCHAR(255),
PRIMARY KEY(MemNum)
);
CREATE TABLE Employee(
EmpId int AUTO_INCREMENT,
DOB date,
SSN VARCHAR(11),
HireDate date,
Salary double,
EmpLevel VARCHAR(50),
PRIMARY KEY(EmpId)
);
CREATE TABLE Account(
AcctNum int AUTO_INCREMENT,
MemberID int,
Balance double,
PIN int,
creationDate date,
InitialBalance double,
CreatedByEmployee int,
type VARCHAR(20),
PRIMARY KEY(AcctNum),
FOREIGN KEY(MemberID) REFERENCES Member(MemNum),
FOREIGN KEY(CreatedByEmployee) REFERENCES Employee(EmpId)
);
``` | Of course, the order of table creation is important here.
Create the Member and Employee tables first, followed by the Account. It should work fine. | Error Code: 1215 in mysql cannot add foreign key constraint | [
"",
"mysql",
"sql",
""
] |
Hello everyone. I have spent a few days looking at ways to connect to SQL Server using VBA, and found an interesting post by Microsoft on how to set up a DSN-less connection. They have provided the code; this is what it looks like.
```
'//Name :AttachDSNLessTable
'//Purpose:Create a linked table to SQL Server without using a DSN
'//stLocalTableName: Name of the table that you are creating in the current database
'//stRemoteTableName: Name of the table that you are linking to on the SQL Server
'//database
'//stServer: Name of the SQL Server that you are linking to
'//stDatabase: Name of the SQL Server database that you are linking to
'//stUsername: Name of the SQL Server user who can connect to SQL Server, leave blank
'//to use a Trusted Connection
'//stPassword: SQL Server user password
Function AttachDSNLessTable(stLocalTableName As String, stRemoteTableName As String, stServer As String, stDatabase As String, Optional stUsername As String, Optional stPassword As String)
On Error GoTo AttachDSNLessTable_Err
Dim td As TableDef
Dim stConnect As String
For Each td In CurrentDb.TableDefs
If td.Name = stLocalTableName Then
CurrentDb.TableDefs.Delete stLocalTableName
End If
Next
If Len(stUsername) = 0 Then
'//Use trusted authentication if stUsername is not supplied.
stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & ";DATABASE=" & stDatabase & ";Trusted_Connection=Yes"
Else
'//WARNING: This will save the username and the password with the linked table information.
stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & ";DATABASE=" & stDatabase & ";UID=" & stUsername & ";PWD=" & stPassword
End If
Set td = CurrentDb.CreateTableDef(stLocalTableName, dbAttachSavePWD, stRemoteTableName, stConnect)
CurrentDb.TableDefs.Append td
AttachDSNLessTable = True
Exit Function
AttachDSNLessTable_Err:
AttachDSNLessTable = False
MsgBox "AttachDSNLessTable encountered an unexpected error: " & Err.Description
End Function
Private Sub Form_Open(Cancel As Integer)
If AttachDSNLessTable("authors", "authors", "(local)", "pubs", "", "") Then
'// All is okay.
Else
'// Not okay.
End If
End Sub
```
My problem here is that this code works, but it does not work properly for me: if I were to hand this form to someone else, they would not be able to open it, because the event is set in Form\_Open and you can't open the form unless you already have the data table available. Is there a better event I can use to create the DSN-less connection before I open the form?
This is the error I receive if I try to open the form without a set DSN "The record source specified on this form or report does not exist."
Here is the link to the source: <http://support.microsoft.com/kb/892490> (I used method 1). | If this code works for you, but you need a place to execute it that is not tied to that form's definition, you can run it in the AutoExec macro.
The `AutoExec` macro is a macro named `AutoExec` which runs when you start your Access app. You can use the `RunCode` macro function to call your VBA function (it does have to be a function and not a sub).
This way you can (re)link your table before the form is even opened. | Open a blank form with a fixed data record as its source. Run your code, then change the source form to your real form (VBA code in the OPEN FORM action) according to what you want to do. This is smoke-and-mirrors code. You will have one hell of a time debugging it.
"",
"sql",
"sql-server",
"ms-access",
"vba",
""
] |
Consider the following query, which generates customerid and the days on which they bought a particular product; clearly each customer will have different dates on which he/she bought an item. What I want to do is get the total purchases made on those days that the customer bought that product.
I have the following query:
```
Select customerid, eventdate
into #days
from table1
where product='chocolate'
```
Now I want to sum all purchases made on just those days the customer bought 'chocolate', so I have:
```
select customerid, sum(purchases) purchases
into #pur
from table1 a
where eventdate in (select eventdate from #days where customerid=a.customerid)
group by customerid
```
but the above is taking too long to run, so I cancelled it.
Please assist with a better query. | After thinking it through carefully, the following worked for me, and it is faster:
```
--drop table #days
select customerid, eventdate
into #days
from table1
with(nolock, index(ix_eventdate))
WHERE EVENTDATE between 20140401 and 20140430
and product='chocolate'

--drop table #pur
select customerid, eventdate, purchases
into #pur
from table1
with(nolock, index(ix_eventdate))
where eventdate between 20140401 and 20140430

--drop table #first
select a.*, b.purchases
into #first from #days a
left join #pur b
on a.customerid=b.customerid
and a.EventDate =b.EventDate
--select * from #first

--drop table #purdays
select customerid, sum(purchases) revenue into #purdays from #first
group by customerid
order by customerid

select * from #purdays
``` | This gives sum of purchases for each customer **AND** for each day on which chocolate was purchased.
```
select customerid, eventdate ,sum(purchases) purchases
into #pur
from table1
where product='chocolate'
group by customerid,eventdate
```
If you want total purchase when chocolate was brought then do this
```
select customerid,sum(purchases) purchases
into #pur
from table1
where product='chocolate'
group by customerid
```
As per your clarification
```
select customerid, sum(purchases) purchases
into #pur
from table1 a
where eventdate in (select eventdate from table1 where product='chocolate')
group by customerid
```
I suggest you to apply indexing on eventdate column to improve query performance. | How to sum over selected dates from table | [
"",
"sql",
"sql-server",
""
] |
There are three tables
Students, Courses and Registration
```
Students has id, name columns
Courses has also course.id, course.name
```
and there is third table joining the Students and Courses table
```
Registration : stu_id, course_id
```
One Student can take one or many courses.
I would like to find the name of Students registered in only one course. | Try with INNER JOIN
```
SELECT S.id, S.name
FROM students S
INNER JOIN registration R ON S.id = R.stu_id
GROUP BY S.id, S.name
HAVING COUNT(*) = 1
``` | Like below:
```
SELECT s.id, s.name
FROM students s
LEFT JOIN registration r ON s.id = r.stu_id
GROUP BY s.id, s.name
HAVING COUNT(r.course_id) = 1
``` | SQL query to find the students in one course | [
"",
"mysql",
"sql",
""
] |
I am using the following query to do this:
```
select (Col_A + Col_B) Foo,
(Col_C + Col_D) Bar,
(Col_A + Col_B + Col_C + Col_D) FooBar from Table1
```
But is there a way to do it like this:
```
select (Col_A + Col_B) Foo,
(Col_C + Col_D) Bar,
(Foo + Bar) FooBar from Table1
```
it gives Error
```
'Invalid Column Foo'
'Invalid Column Bar'
```
How to solve this? | Partial solution is to use `with` construction:
```
with query as (
select (Col_A + Col_B) Foo,
(Col_C + Col_D) Bar
from Table1)
select Foo,
Bar,
Foo + Bar
from query
``` | Afaik those columns you're referring to don't exist by those names yet.
You can of course try the following instead:
```
SELECT *, (Foo + Bar) FooBar
FROM (select (Col_A + Col_B) Foo,
(Col_C + Col_D) Bar
from Table1) SRC
``` | How to add dynamic Columns in SQL? | [
"",
"sql",
"sql-server",
""
] |
I have a stored procedure that fetches records based on dates matching a date input which works fine so far.
Both the dates in the table and my input date are formatted as datetime.
Instead of comparing the full dates I would like to change this so that it only compares month and day so that it works with any year for the input.
**Example:**
A date in the table is saved as `2013-04-30` and my input date is `2014-04-30`.
What I want is that the stored procedure still returns that record independent of the year as long as month and day match.
**My stored procedure:**
```
ALTER PROCEDURE [dbo].[FetchDays]
@inputDate datetime
AS
BEGIN
SET NOCOUNT ON;
SELECT dateID,
dayDT,
countries,
regions
FROM DaysDT
WHERE dayDT = @inputDate
FOR XML PATH('daysFixed'), ELEMENTS, TYPE, ROOT('root')
END
```
Many thanks for any help with this, Mike. | You can do something like this ;)
```
ALTER PROCEDURE [dbo].[FetchDays]
@inputDate datetime
AS
BEGIN
SET NOCOUNT ON;
SELECT dateID,
dayDT,
countries,
regions
FROM DaysDT
WHERE
DAY(dayDT) = DAY(@inputDate) --Extract and compare day
AND MONTH(dayDT) = MONTH(@inputDate) --Extract and compare month
FOR XML PATH('daysFixed'), ELEMENTS, TYPE, ROOT('root')
END
``` | Try this:
```
WHERE datepart(day, dayDT) = datepart(day,@inputDate)
AND datepart(month, dayDT) = datepart(month,@inputDate)
```
This will compare the date and month parts of your overall date, without checking the year. | SQL Server: compare dates by only matching month and day | [
"",
"sql",
"sql-server",
"date",
"stored-procedures",
""
] |
I am writing an SSIS Expression that creates a Date Key to be used with a DimDate table.
I need the Expression to return the previous Month not the Current month.
```
(DT_STR, 4,1252)YEAR(GETDATE()) + RIGHT("0" + (DT_STR, 2, 1252) MONTH(GETDATE()),2) + RIGHT("0" + (DT_STR, 2, 1252) DAY( GETDATE()), 2)
```
I tried putting a -1 in the following, but it did not work:
```
RIGHT("0" + (DT_STR, 2, 1252) MONTH( GETDATE()-1 ),2)
RIGHT("0" + (DT_STR, 2, 1252) MONTH( GETDATE() )-1,2)
RIGHT("0" + (DT_STR, 2, 1252) MONTH( GETDATE() ),2)-1
```
What am I doing wrong? | Try this when you are retrieving the Month part -
RIGHT("0" + (DT\_STR, 2, 1252) MONTH( DATEADD("MONTH",-1,GETDATE()) ),2) | ```
DATEADD("Month", -1,GETDATE())
```
Use the code above; it makes sure the Year and Date come with it as well. Then you can present the datetime in any format. | Return Previous Month in SSIS Expression | [
"",
"sql",
"sql-server",
"ssis",
""
] |
What is the purely MySQL way of getting yesterday, but only for business days?
Right now I'm using `SUBDATE(CURRENT_DATE, 1)` for yesterday but on Monday it's returning Sunday.
I would like it to return results for the previous Friday's date, or in other words, the previous business day.
I should have clarified I'm trying to use this in the `WHERE` part of the query (and also the subquery).
```
SELECT ... WHERE DATE(timestamp) = SUBDATE(CURRENT_DATE, 1)
```
Here is the whole query:
```
SELECT r.drivername,
       l.branchcity AS location,
       COUNT(o.ordernum) AS deliveries,
       COUNT(x.pictures) AS pics,
       CONCAT(ROUND((COUNT(x.pictures) / COUNT(o.ordernum)) * 100, 2), '%') AS percentage_having_images
FROM deliveries d, drivers r, locations l, staging s, stations t, orders o
LEFT OUTER JOIN
    (SELECT a.ordernum AS pictures
     FROM orders a
     WHERE a.stationID = '16' AND DATE(a.scantime) = SUBDATE(CURRENT_DATE, 1)) x
    ON x.pictures = o.ordernum
WHERE o.deliveryID = d.ID AND d.driverID = r.ID AND s.locationID = l.ID
  AND o.stationID = t.ID AND o.stagingID = s.ID
  AND t.ID IN ('11','12','13')
  AND DATE(o.scantime) = SUBDATE(CURRENT_DATE, 1)
GROUP BY s.locationID, r.drivername
ORDER BY s.locationID, percentage_having_images DESC
``` | ```
SELECT ..
.
.
AND DATE(a.scantime) = (CASE WEEKDAY(CURRENT_DATE)
WHEN 0 THEN SUBDATE(CURRENT_DATE,3)
WHEN 6 THEN SUBDATE(CURRENT_DATE,2)
WHEN 5 THEN SUBDATE(CURRENT_DATE,1)
ELSE SUBDATE(CURRENT_DATE,1)
END)
..
..
``` | try this to get the day:
```
SELECT
CASE DAYOFWEEK(SUBDATE(CURRENT_DATE, 1))
WHEN 1 THEN SELECT SUBDATE(CURRENT_DATE, 3);
WHEN 7 THEN SELECT SUBDATE(CURRENT_DATE, 2);
ELSE
BEGIN
SUBDATE(CURRENT_DATE, 1);
END;
END CASE;
```
THEN use a subquery
```
SELECT ... WHERE DATE(timestamp) = (SELECT
CASE DAYOFWEEK(SUBDATE(CURRENT_DATE, 1))
WHEN 1 THEN SELECT SUBDATE(CURRENT_DATE, 3);
WHEN 7 THEN SELECT SUBDATE(CURRENT_DATE, 2);
ELSE
BEGIN
SUBDATE(CURRENT_DATE, 1);
END;
END CASE);
``` | Select the previous business day by using SQL | [
"",
"mysql",
"sql",
"date",
""
] |
I have two tables. Cat and Data.
```
Cat
Cat_Serno
Cat_Name
Data
Data_Serno
Data_Name
Data_Cat_ID
Data_Project_ID
```
When I am doing a regular join I am getting
```
SELECT t1.*,t2.*
FROM Cat t1
LEFT JOIN Data t2 ON t1.Cat_Serno = t2.Data_Cat_Id
```
*(screenshot of the joined result set)*
but when I apply a where condition on Project\_Id it gives me only the matching rows. I want to display all the categories, and Null if there is no related data in the Data table, along with the where clause on the Project\_Id. It should also contain Null if I am using a where clause with a Project\_Id that has no value in the Data table (e.g. where Project\_Id=2), even if 2 is not present in the Data table.
When I do it with Project\_Id=2 which is not existing in Data Table I only get one Record with Null Values.
 | If you include a column of table `Data` in the `where` clause, your `join` will effectively act as an `inner join`. So if you want all records of the `Cat` table, you should not include any column of the Data table in the `where` clause; if you still want to apply the condition, you can include it in the `on` clause of the `join`. Try this,
```
SELECT t1.*,t2.*
FROM Cat t1
LEFT JOIN Data t2 ON t1.Cat_Serno = t2.Data_Cat_Id
and t2.Data_Project_ID = 2
``` | You have to add the where condition with the required values and add an OR condition with IS NULL
Example:
```
WHERE (DataName = 'Data_One' OR DataName IS NULL)
```
Please note, that NULL is not equals to any values (including NULL), so you have to handle it.
Another way:
```
WHERE COALESCE(DataName, 'n/a') IN ('Data_One', 'n/a')
``` | Joining Two Tables with Where Clause | [
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
""
] |
I have this sql query
```
SELECT `price` FROM `used_cars` ORDER BY `price` DESC
```
So I obviously want to order by price from high to low. However, it seems to be taking the first digit and sorting by that. My theory is that it is treating it like a string, and as the column is a varchar it makes sense. However, this is third party data, so I am stuck with it. How can I order so that the larger numbers come first?
So this is an example of how they are ordered
```
9698
8999
8988
8900
5983
4988
4984
42441
40949
3995
3995
38995
37685
36999
35983
34990
34785
32999
30594
29999
29999
2862
28000
27995
``` | CAST should work:
```
SELECT CAST(price AS UNSIGNED) AS NumPrice
FROM used_cars
ORDER BY NumPrice DESC
``` | You should convert the column to a numeric data type. You can do that in the table definition, or in the query itself, for example with:
```
... ORDER BY `price`+0 DESC
``` | ORDER BY DESC is not working right | [
"",
"mysql",
"sql",
""
] |
In the code below, I need to check, for each line, whether the table exists. Please tell me how to do it.
```
Select * from Table 1 union all
select * from Table 2 union all
Select * from Table 3
```
I tried this but it didn't work:
```
if objectid ('Table1') is not null
Select * from Table 1 union all
if objectid ('Table2') is not null
select * from Table 2 union all
if objectid ('Table3') is not null
Select * from Table 3
``` | A possible solution would be this:
```
USE MyDB;
DECLARE @recordsExistingTables TABLE(Column1 nvarchar(50), Column2 nvarchar(50));
IF (EXISTS (SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'Table1'))
BEGIN
INSERT INTO @recordsExistingTables
SELECT *
FROM Table1;
END
IF (EXISTS (SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'Table2'))
BEGIN
INSERT INTO @recordsExistingTables
SELECT *
FROM Table2;
END
IF (EXISTS (SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'Table3'))
BEGIN
INSERT INTO @recordsExistingTables
SELECT *
FROM Table3;
END
SELECT * FROM @recordsExistingTables;
```
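As an aside, the existence test the question attempted can be written with `OBJECT_ID` (note the underscore; the question's `objectid` is a typo); a sketch of one branch:

```
IF OBJECT_ID(N'dbo.Table1', N'U') IS NOT NULL
BEGIN
    INSERT INTO @recordsExistingTables
    SELECT * FROM Table1;
END
```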
Using a table variable, you insert only the rows of the tables that exist in your database.
At the end of the checks, selecting the records of the table variable, you have the rows of each existing table. | Tried to create a sort of generic solution. Created a procedure, where you have to pass all table names in comma separated values and it will return the complete data list of all tables.
```
Create Procedure spGetTableData
@tableNames varchar(4000) --expecting all table names in csv format here
As
Begin
declare @sql nvarchar(max);
declare @tablelist table(tablename varchar(50));
--getting all existing table names in one table
set @sql = 'select name from sys.objects where name in (''' + REPLACE(@tableNames, ',',''',''') + ''')';
insert into @tablelist
exec (@sql);
--creating query with union all
set @sql = '';
select @sql = @sql + 'Select * from ' + tablename + ' Union All ' From @tablelist;
set @sql = left(@sql, len(@sql) - 9);
exec sp_executesql @sql;
End
```
You can execute this as :
```
Exec spGetTableData 'existing1,nonexisting1,existing2,existing3'
```
Hope it helps. | Union all if the table exists in SQL server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need some help with refreshing my memory on how to query correctly.
```
CREATE TABLE Editor
(eid CHAR(4) PRIMARY KEY NOT NULL,
fname VARCHAR(15),
lname VARCHAR(15));
CREATE TABLE Edited_by
(eid CHAR(4) NOT NULL,
isbn CHAR(10) NOT NULL);
```
Now I need to query an ISBN #3489374345 and get the eid, fname, lname.
From what I know it's...
```
SELECT eid, fname, lname
FROM Editor, Edited_by
WHERE isbn='3489374345'
```
But I think my `WHERE` is incorrect as I know it's not pulling the ISBN from the Edited\_by table. | ```
SELECT e.eid, e.fname, e.lname
FROM Editor e inner join Edited_by b
on e.eid=b.eid
WHERE b.isbn='3489374345'
``` | Here, you are trying to retrieve data from the Editor table (at least fname and lname) and your 'where' condition is applied on the 'Edited\_by' table. So, you need to join the tables. And before joining, as 'eid' of the 'Edited\_by' table would definitely be a subset of 'eid' of the 'Editor' table, you should add a foreign key relation in which 'eid' ('Edited\_by') refers to 'eid' (Editor).
Related Queries are given in above answers. | Queries MySQL tables | [
"",
"mysql",
"sql",
""
] |
I am pretty new to SQL and hope someone here can help me with this.
I have a stored procedure where I would like to pass a different value depending on whether a column contains a certain country or not.
So far I only used `CASE` when checking for the match with a specific number or value so I am not sure about this one.
Can someone tell me if the following is valid and correct or let me know how to write this properly (just regarding the part in brackets) ?
```
(CASE countries
WHEN LIKE '%'+@selCountry+'%' THEN 'national'
ELSE 'regional') AS validity
```
**Notes:** @selCountry is the variable name of a country, countries can either be empty, one country or several countries separated with comma and space.
Basically I just want to check if countries contains @selCountry and if yes, set validity to 'national'. | This is the syntax you need:
```
CASE WHEN countries LIKE '%'+@selCountry+'%' THEN 'national' ELSE 'regional' END
```
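One caveat with the bare `LIKE '%'+@selCountry+'%'` test: it also matches substrings (for example, 'Oman' matches inside 'Romania'). Since the column is comma-plus-space separated, padding both sides with the separator is a common guard; a sketch:

```
CASE WHEN ', ' + countries + ', ' LIKE '%, ' + @selCountry + ', %'
     THEN 'national'
     ELSE 'regional'
END AS validity
```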
Although, as per your original problem, I'd solve it differently: splitting the content of @selCountry into a table and joining to it. | Add an `END` before the alias name.
```
CASE WHEN countries LIKE '%'+@selCountry+'%' THEN 'national'
ELSE 'regional'
END
AS validity
``` | SQL Server: use CASE with LIKE | [
"",
"sql",
"sql-server",
"stored-procedures",
"case",
"sql-like",
""
] |
Does anybody have any suggestions to optimize this query:
```
SELECT COUNT(client_id) FROM (
SELECT client_id FROM `cdr` WHERE
(DATE(start) BETWEEN '2014-04-21' AND '2014-04-25') AND
`service` = 'test'
GROUP BY client_id
HAVING SUM(duration) > 300
)as t1
```
The problem is , inner query scans millions of rows and returns thousands of rows and it makes main query lazy.
Thanks. | Try below query, avoid functions in searching as it does not use index and kill the performance. "start" columns must be indexed.
```
SELECT COUNT(client_id) FROM (
SELECT client_id FROM `cdr` WHERE
(start BETWEEN '2014-04-21 00:00:00' AND '2014-04-25 23:59:59') AND
`service` = 'test'
GROUP BY client_id
HAVING SUM(duration) > 300
)as t1
``` | Why not this? I have read somewhere that comparing dates directly with < or > works faster than Between.
```
SELECT Count(client_id) FROM `cdr`
WHERE DATE(start) >= '2014-04-21'
AND DATE(start) <= '2014-04-25'
AND `service` = 'test'
GROUP BY client_id
HAVING SUM(duration) > 300
```
What was the logic behind having a subquery in your SQL? | Let me know if i can optimize this query? | [
"",
"mysql",
"sql",
"optimization",
""
] |
I'm trying to append certain rows from a table to another table. I used a SQL command to get the rows I wanted and Microsoft SQL Server Management Studio gave the output in the format of .RPT. So I want to append this RPT to my SQL table.
Is there a way to convert RPT to SQL so I can import it? If not, how do I download the database table from SQL on a Windows server, so I can put it on my Linux SQL server and manager it all with PHPMyAdmin there? I'm new to Windows hosting, and moving everything back to Linux. | I was able to open the RPT file in Excel, manipulate it, then save as CSV to import into PHPMyAdmin. | Here are the [steps](https://stackoverflow.com/a/56188149/5070440) to import the .rpt file to a SQL Server table. I will link to it instead of repeating it here. To summarize the steps, you can use the Import Export wizard and load it as a Ragged right file of format UTF-8. | Import RPT (Output from Select command) to SQL Table | [
"",
"sql",
""
] |
I want to convert a dateTime value to Month-Year (note the **(-)** between them).
i want them like Jan-2014, Feb-2014, Mar-2014, Apr-2014.
i have tried
```
SELECT convert(varchar(7), getdate(), 126)
SELECT right(convert(varchar, getdate(), 106), 8)
```
But the first line gives me 2014-05 (I need it like May-2014) while the second line gives May 2014 (it's missing the -, as in May-2014). | You were almost there:
```
SELECT replace(right(convert(varchar, getdate(), 106), 8), ' ', '-')
``` | Write as:
```
SELECT REPLACE(RIGHT(CONVERT(VARCHAR(11),
GETDATE(), 106), 8),
' ', '-') AS [Mon-YYYY]
``` | Converting datetime to month-year only - T-SQL | [
"",
"sql",
"sql-server",
"t-sql",
"datetime",
""
] |
i am trying to fill a variable in PL/SQL with an if-condition value
like this:
`v_variable := if(4 > 3) THEN 'A' ELSE 'B' END;`
but this doesn't work.
Are there any options to do this? | Instead of doing this with an `if`, I'd go with `case`:
```
v_variable := case when 4>3 then 'A' else 'B' end;
``` | Try this:
```
IF (4 > 3) THEN
v_variable :='A';
ELSE
v_variable :='B';
END IF;
``` | PL/SQL fill variable with if-condition value | [
"",
"sql",
"variables",
"plsql",
""
] |
I have two tables, **likes** and **comments** both of which refer to topic posts - like a topic of a forum for example.
Both of them have a column that refers to a specific topic\_id.
Now here's the deal: I want to create a top 5 chart of the most liked + commented topics, i.e. ranked by the total of both summed up.
For example, I wrote this query to select the most liked topics from the topics table; I want to make the same chart with the total of likes + comments. Here are my top 5 topics by total of likes only.
```
SELECT topics.* ,
COUNT(q_id)
AS post_count
FROM topics
LEFT JOIN likes
ON topics.id = likes.q_id
WHERE topics.to_user = 'someuser'
GROUP BY likes.q_id
ORDER BY post_count DESC
LIMIT 0, 5
```
Tnx in advance! | ```
SELECT posts.id, count(comments.id) + count(likes.id) AS score
FROM posts
LEFT JOIN comments ON posts.id = comments.post_id
LEFT JOIN likes ON posts.id = likes.post_id
GROUP BY posts.id
ORDER BY score desc;
```
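One caveat worth adding (not from the original answer): if a post can have several comments *and* several likes, the two LEFT JOINs form a cross product per post, which inflates both counts. A sketch of a guard using DISTINCT:

```
SELECT posts.id,
       count(DISTINCT comments.id) + count(DISTINCT likes.id) AS score
FROM posts
LEFT JOIN comments ON posts.id = comments.post_id
LEFT JOIN likes ON posts.id = likes.post_id
GROUP BY posts.id
ORDER BY score DESC;
```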
For those interested, here's the solution I ended up with. | This works like a charm!!
```
SELECT topics.id
,count(DISTINCT(comments.id)) + count(DISTINCT(likes.id)) AS score
FROM topics
LEFT JOIN comments ON topics.id = comments.post_id
LEFT JOIN likes ON topics.id = likes.post_id
WHERE topics.to_user = 'someuser'
GROUP BY topics.id
ORDER BY score desc
LIMIT 0, 5;
``` | Counting likes and comments between two different table of a specific topic | [
"",
"mysql",
"sql",
"count",
"sum",
""
] |
I get the error code `ORA-12704` with the following query:
```
SELECT COALESCE(BankDetails.description,'') as description FROM BankDetails
```
The datatype of the description column is nvarchar2. I'm assuming the `''` is the cause of the issue, as it does not match the datatype.
```
SELECT COALESCE(BankDetails.description,n'') as description FROM BankDetails
```
Reference
[ORA-12704: character set mismatch](https://stackoverflow.com/questions/15967201/ora-12704-character-set-mismatch) | You should use the `n` variant, to cast the `''` to a `nvarchar`:
```
SELECT COALESCE(BankDetails.description,n'') as description FROM BankDetails
``` | ORA-12704: character set mismatch with nvarchar2 datatype in Oracle | [
"",
"sql",
"oracle",
""
] |
In the table below, I want to update each row's opening with the previous row's closing, and so on.
The Closing column is calculated as (opening + Total).
```
ID opening Total Closing
--- -------- ------------ -------------
1 0 3015591.25 3015591.25
2 0 2146798.4 NULL
3 0 3015591.25 NULL
4 0 2146798.4 NULL
5 0 3015591.25 NULL
6 0 2146798.4 NULL
7 0 3015591.25 NULL
8 0 2146798.4 NULL
```
And the **Output** should be as:
```
ID opening Total Closing
--- -------- ------------ -------------
1     0             3015591.25     3015591.25
2     3015591.25    2146798.4      5162389.65
3     5162389.65    3015591.25     8177980.9
4     8177980.9     2146798.4      10324779.3
5     10324779.3    3015591.25     13340370.55
6     13340370.55   2146798.4      15487168.95
7     15487168.95   3015591.25     18502760.2
8     18502760.2    2146798.4      20649558.6
```
Any solution for this? | An answer involving a `WHILE` loop.
```
--Simulated your table
DECLARE @tbl TABLE
(
ID INT,
opening FLOAT,
Total FLOAT,
Closing FLOAT
)
--Testing values
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(1,0,3015591.25,3015591.25)
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(2,0,2146798.4,NULL)
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(3,0,3015591.25,NULL)
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(4,0,2146798.4,NULL)
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(5,0,3015591.25,NULL)
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(6,0,2146798.4,NULL)
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(7,0,3015591.25,NULL)
INSERT INTO @tbl(ID, opening , Total ,Closing) VALUES(8,0,2146798.4,NULL)
--Solution starts from here
DECLARE @StartCount INT, @TotalCount INT, @OPENING FLOAT,@CLOSING FLOAT
SELECT @TotalCount = MAX(ID) FROM @tbl;
SET @StartCount = 2;
WHILE(@StartCount <= @TotalCount)
BEGIN
SELECT @OPENING = ISNULL(Closing, 0) FROM @tbl WHERE ID = @StartCount - 1
SELECT @CLOSING = (@OPENING + Total) FROM @tbl WHERE ID = @StartCount
UPDATE @tbl
SET opening = @OPENING,
Closing = @CLOSING
WHERE ID = @StartCount
SELECT @StartCount = @StartCount + 1
END
SELECT * FROM @tbl
```
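A set-based alternative (an addition, assuming SQL Server 2012 or later and a permanent table named dbo.YourTable) replaces the loop with a windowed running total:

```
-- Running total per row, then derive opening/closing from it
;WITH t AS
(
    SELECT opening, Total, Closing,
           SUM(Total) OVER (ORDER BY ID) AS running
    FROM dbo.YourTable
)
UPDATE t
SET Closing = running,
    opening = running - Total;
```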
Hope this helps | You can use this query which is based on the fact that your closing value is the sum of totals from all previous rows (current row included) and opening value is the same not including the current row.
```
update table1
set
Opening = isnull((select sum(total) from table1 t where t.ID < table1.ID), 0),
Closing = (select sum(total) from table1 t where t.ID <= table1.ID)
```
Here's a [SQL Fiddle](http://sqlfiddle.com/#!3/a6a0b7/2) to show that it works.
As this solution recalculates all rows, getting the totals from the beginning, this is more suited to smaller number of records. | How to update running records in SQL Server | [
"",
"sql",
"sql-server",
""
] |
I have 2 Tables Company and Car.
```
Company:
CompanyId,
CarId,
Inative: 1 or 0.
Car:
CarId,
Plate.
```
The problem is:
In the same month a Car could be rented by more than one Company, and I have to bring back only the last company that rented the car.
```
SELECT CompanyId
from Company
where
-- Some Business Rules
GroupBy CompanyId
-- I Need to bring only the CompanyId on this query.
-- The company may be inative, but if more than one company used the same car I must return only the last company.
```
I can't figure out how to write this GROUP BY clause, since if I put CarId in the GROUP BY, I still return all companies.
Thanks in advance. | Your case:
```
SELECT CompanyId
from Company
WHERE
-- Some Business Rules
ORDER BY CompanyId DESC
LIMIT 1
```
But as I understood it, you should normalize your DB first. There should be 3 tables: Company, Car, and Car\_to\_company (id Serial PK, id\_car integer FK, id\_company integer FK, date\_from date, etc.). | You don't necessarily need a group by in your main query for this.
Assuming that a date value is unique (i.e. there cannot be two rentals of the same car for the same date) you could do this with a clause like this:
```
...
WHERE date = (SELECT MAX(date) FROM ... )
``` | Can't figure out how to make this GroupBy clause | [
"",
"sql",
"sql-server",
""
] |
In SQL Server I am trying to add a column to my query that will sum the values of colC wherever colA and colB have a duplicate entry.
Example: I am trying to figure out how much money each person has paid in total. Each record represents one payment that a person has made. How do I find the total amount that each person has paid?
My table would look like this:
```
FName | LName | $Paid
Bob | Dole | 1
Bob | Dole | 2.2
Bob | Barker | 6
Bob | Barker | 2
Bob | Barker | 2
Herbie| Hancock| 14
```
My desired result would be this:
```
FName | LName | $Paid | sum ColC where FName and LName are duplicates
Bob | Dole | 1 | 3.2
Bob | Dole | 2.2 | 3.2
Bob | Barker | 6 | 10
Bob | Barker | 2 | 10
Bob | Barker | 2 | 10
Herbie | Hancock| 14 | 14
```
The fourth column's repetitive output is not necessary. This table would also achieve the desired result:
```
FName | LName | sum ColC where FName and LName are duplicates
Bob | Dole | 3.2
Bob | Barker | 10
Herbie | Hancock | 14
```
Thanks in advance for any help! | You can join back to the original table to attach the group totals to the original transactions.
```
SELECT
    M.FName,
    M.LName,
    M.ColC,
    X.total_paid
FROM
    MyTable M
INNER JOIN
(
    SELECT
        FName,
        LName,
        SUM(ColC) AS total_paid,
        COUNT(ColC) AS num
    FROM
        MyTable
    GROUP BY
        FName,
        LName
    HAVING
        COUNT(ColC) > 1 -- only include people with duplicates?
) AS X ON X.FName = M.FName AND X.LName = M.LName
``` | An alternate that may not be terribly efficient given the distinct keyword, but try this.
```
CREATE TABLE PAID ( FName VARCHAR(12), LName VARCHAR(12), [$Paid] NUMERIC(10,2))
INSERT INTO PAID (FName, LName, [$Paid]) VALUES ('Bob','Dole',1);
INSERT INTO PAID (FName, LName, [$Paid]) VALUES ('Bob','Dole',2.2);
INSERT INTO PAID (FName, LName, [$Paid]) VALUES ('Bob','Barker',6);
INSERT INTO PAID (FName, LName, [$Paid]) VALUES ('Bob','Barker',2);
INSERT INTO PAID (FName, LName, [$Paid]) VALUES ('Bob','Barker',2);
INSERT INTO PAID (FName, LName, [$Paid]) VALUES ('Herbie','Hancock',14);
```
```
SELECT DISTINCT
FName,
LName,
SUM([$Paid]) OVER (PARTITION BY FName, LName) AS total_paid
FROM
PAID
;
```
Check [SQLFiddle](http://www.sqlfiddle.com/#!3/f2260/4) | SQL: How do you sum he fields in one column, when a separate 2 columns fields are equal | [
"",
"sql",
"sql-server",
""
] |
Alright, I have those columns on MySQL :
```
id
id_conv
associated_statut
```
The **associated\_statut** is a number between `1` and `7`.
What I want to do is to count only the `id_conv` if the **LAST** associated\_statut for this `id_conv` is `2` for example.
Example :
```
-----------------------------------------------
| id | id_conv | associated_statut |
-----------------------------------------------
| 1 | 15 | 1 |
| 2 | 15 | 2 |
| 3 | 15 | 2 |
| 4 | 15 | 4 |
| 5 | 15 | 2 |
| 6 | 15 | 3 |
```
The `id_conv` would NOT be counted if I want the `associated_statut` = 2, because the last associated\_statut for this `id_conv` is `3`.
I already tried this query :
```
SELECT COUNT(DISTINCT id_conv) FROM MyTable WHERE associated_statut = 2
```
But this doesn't return what I want.
Is there a way to do this in SQL?
Thanks. | Maybe, this will work for you:
```
SELECT count(t1.id) FROM mytable t1
INNER JOIN (SELECT id_conv, MAX(id) id FROM mytable GROUP BY id_conv) t2
ON t1.id = t2.id
WHERE t1.associated_statut = 2
We can do the same thing without a subquery. It will take less time when you have more data.
```
SELECT count(t1.id) FROM
mytable t1
LEFT JOIN
mytable t2
ON t1.id_conv = t2.id_conv
AND t1.id < t2.id
WHERE t2.id IS NULL
AND t1.associated_statut = 2;
``` | MySQL - Count only if the last associated_statut = 2 | [
"",
"mysql",
"sql",
"count",
""
] |
I'm learning about RoR/databases and this topic particularly confused me. In the book *Agile Development with Rails 4*, they give an example of finding a list of all orders for an entry with the name Dave:
```
pos = Order.where("name = 'Dave' and pay_type = 'po'")
```
The book goes on to say that you would never want to do something like this:
```
name = params[:name]
pos = Order.where("name = '#{name}'and pay_type = 'po'")
```
Instead you should do this:
```
name = params[:name]
pos = Order.where(["name = ? and pay_type = 'po'",name])
```
I understand what SQL injection is as a concept, but there are a few particulars that confuse me. For starters, how exactly does SQL injection work at the syntax level?
I get that the danger is that someone can drop a table/database if you interpolate an outside form parameter like in the first example, but how?
Let's say you had this:
```
name = params[:name] #DROP DATABASE database_name
pos = Order.where("name = '#{DROP DATABASE database_name}'and pay_type = 'po'")
```
Is this how SQL injection works? SQL is a syntax; there should be no field in the database where "name = DROP DATABASE database\_name", so wouldn't this return an error instead of dropping the database?
Also, how would the question mark version protect against this? Again, let's say you have this scenario.
```
name = params[:name] #DROP DATABASE database_name
pos = Order.where(["name = ? and pay_type = 'po'", DROP DATABASE database_name])
```
Won't this replace the question mark with the DROP DATABASE database\_name syntax, and then wouldn't we have the same problem that we had in the first example? How exactly is this protecting an application against SQL injection? I searched for a few tutorials on <http://hub.tutsplus.com/> and searched on Google, but I don't get the concept behind it. Any help? | **The simplest explanation I can give for what SQL injection is:**
The code in your question builds a SQL query like the following:
```
SELECT * FROM Order WHERE name = 'Dan' AND pay_type = 'po'
```
Now a nice user would supply the name Dan like above.
But an evil user (let's call him Bobby), would supply the name:
`Bobby Tables'; DROP DATABASE master; --`
That creates a query like:
```
SELECT * FROM Order WHERE name = 'Bobby Tables'; DROP DATABASE master; --' AND pay_type = 'po'
```
which effectively executes the two queries:
```
SELECT *
FROM Order
WHERE name = 'Bobby Tables';
DROP DATABASE master;
```
And now the database is gone. Worse damage comes from when they pull private information out of the database instead (like username/passwords or credit card info)
---
**As for why the question mark magically now protects you:**
Using the question mark in RoR, makes use of a pattern called parameterization. When you parameterize a SQL query, you write it in such a way that it prevents anyone from entering a successful SQL injection. Everywhere a question mark is used, it is replaced by a parameter. That parameter is then safely set to a value at the top of the query by escaping any quotations.
If you now supply the name Dan to:
```
Order.where(["name = ? and pay_type = 'po'", params[:name]])
```
the query would look something like: (RoR may parameterize slightly differently internally, but the effect is the same)
```
DECLARE @p0 nvarchar(4000) = N'po',
@p1 nvarchar(4000) = N'Dan';
SELECT [t0].[ID], [t0].[name], [t0].[pay_type]
FROM Order AS [t0]
WHERE ([t0].[name] = @p1) AND ([t0].[pay_type] = @p0)
```
And now if evil Bobby comes along with his name of:
`Bobby Tables'; DROP DATABASE master; --`
it would parameterize (and escape quotations) the query like:
```
DECLARE @p0 nvarchar(4000) = N'po',
@p1 nvarchar(4000) = N'Bobby Tables''; DROP DATABASE master; --';
SELECT [t0].[ID], [t0].[name], [t0].[pay_type]
FROM Order AS [t0]
WHERE ([t0].[name] = @p1) AND ([t0].[pay_type] = @p0)
```
That is now a perfectly safe query
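For completeness (an addition beyond the original answer): client libraries typically send such a parameterized statement to SQL Server via `sp_executesql`, so the malicious text only ever arrives as a value, never as executable SQL:

```
EXEC sp_executesql
    N'SELECT [t0].[ID], [t0].[name], [t0].[pay_type]
      FROM [Order] AS [t0]
      WHERE [t0].[name] = @p1 AND [t0].[pay_type] = @p0',
    N'@p0 nvarchar(4000), @p1 nvarchar(4000)',
    @p0 = N'po',
    @p1 = N'Bobby Tables''; DROP DATABASE master; --';
```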
Hope that helps you understand | It has to do with how the code interpreter works.
In the first example, the parameter is simply inserted as text, and then the entire command is processed. Hence, problems.
In the second example, the command is interpreted first, and then the parameter is inserted afterwards. (I.e., it interprets "do statement where name=[some parameter]", and then after it does that, adds the parameter.) So all you'd get would be a very weird equality where name = "); drop table blah;", which of course wouldn't match unless you have some weird names in your data.
Note, the injection has to actually properly end your command and start a new one - otherwise it would just cause an error. | How this SQL injection works? Explanation needed | [
"",
"sql",
"ruby-on-rails",
"ruby",
"sql-injection",
""
] |
I have a value with data type Numeric(28,10) (e.g. 128000.0000000000). I want to round it to 2 decimal places and convert it into a string. What is wrong with this?
```
convert(varchar,round(isnull(td2.Qty,0),2))
```
where td2.Qty is that value. It coverts it to string, but doesn't round it. Thanks in advance | It does round, but it keeps displaying the zeros because this is how `numeric`s are always displayed.
If you need to stop displaying zeros, convert the value to a different type after the rounding, e.g. `float` or `numeric(28,2)`:
```
convert(varchar, cast(round(isnull(td2.Qty,0),2) as numeric(28,2)))
``` | ```
SELECT CAST(ROUND(ISNULL(td2.Qty, 0), 2) AS FLOAT)
SELECT CAST(ROUND(ISNULL(td2.Qty, 0), 2) AS NUMERIC(28,2))
``` | rounding and converting value | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm currently collecting data from individual tables. I'm supposed to get the sum (amount\*weight) for each shipment. However, it gives me the same overall sum for every shipment, i.e. I get the same result all the way down.
Here is the query I ran:
```
SELECT SUM(Weight*Amount), Arrival_Date
FROM Specifications, Shipment, `Product shipment`
WHERE Specifications.Product_Code = `Product shipment`.Product_Code
AND `Product shipment`.Shipment_ID = `product Shipment`.shipment_ID
GROUP BY Arrival_Date
```
What am I missing? | Extend `GROUP BY` of your query:
```
SELECT SUM(Weight * Amount),
Arrival_Date
FROM Specifications,
Shipment,
`Product shipment`
WHERE Specifications.Product_Code = `Product shipment`.Product_Code
AND `Product shipment`.Shipment_ID = `product Shipment`.shipment_ID
GROUP BY `Product shipment`.Shipment_ID, -- <- try add this
Arrival_Date
``` | You should group by shipment\_ID, not Arrival\_Date.
I'm not sure but maybe shipment\_ID needs to appear in the SELECT clause for this to work. | Sql individual SUM | [
"",
"sql",
""
] |
I have a dynamic SQL query with different column names and tables at runtime.
I am looking to get the SQL query to skip a row if any cell in that row contains a NULL value.
```
SELECT rsv_intension_rsvt
FROM resolve_table_kb
where rsv_intension_rsvt is not null;
```
I am aware of using `IS NOT NULL`.
But the problem is that I wouldn't know the query syntax in advance (i.e. the column names, so that `IS NOT NULL` can be used).
Is there a dynamic solution that can be used for the SQL query to read/collect rows only when all cells in a selected row are NOT NULL?
Thanks | No, there's no `select where * is not null`-type query. You have to test every field individually:
```
SELECT ...
FROM ...
WHERE field1 is not null AND field2 is not null AND .... AND fieldN is not null
```
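If dynamic SQL is acceptable (a sketch, assuming SQL Server; the table name is taken from your example), the predicate can be generated from the catalog instead of being hard-coded:

```
-- Build "col1 IS NOT NULL AND col2 IS NOT NULL AND ..." from the metadata
DECLARE @sql nvarchar(max);
SELECT @sql = STUFF((
    SELECT ' AND ' + QUOTENAME(COLUMN_NAME) + ' IS NOT NULL'
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'resolve_table_kb'
    FOR XML PATH('')
), 1, 5, '');
SET @sql = N'SELECT * FROM resolve_table_kb WHERE ' + @sql;
EXEC sp_executesql @sql;
```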
You could try a `COALESCE()` operation, perhaps, but that's still ugly:
```
WHERE COALESCE(field1, field2, ..., fieldN, 'allarenull') <> 'allarenull'
```
but then you STILL have to list all of the fields in the table. | I believe you will need to use a stored procedure or multiple joins (which might not be the healthiest solution) to solve this, as [Marc B](https://stackoverflow.com/users/118068/marc-b) indicated. You can also check the following [question](https://stackoverflow.com/questions/5285448/mysql-select-only-not-null-values), which addresses the same issue you are asking about. | Dynamic SQL Query to ignore null values based on null value in cell | [
"",
"mysql",
"sql",
"sql-server",
"oracle",
""
] |
Could anybody tell me whether it is possible to have a column within a table in MYSQL that automatically performs the SUM function for a given number of columns.
As a comparative example in Microsoft Excel, it's possible to have a cell that performs the SUM function for a given range of cells and automatically updates `i.e. (=SUM E4:E55)`
Is it possible to have a column which achieves the same function in MYSQL?
To further elaborate -
I have numerous columns relating to the quantity of different sizes of our products i.e. `quantity_size_*` and wanted a column that would SUM the value of the quantity columns and update automatically if any of the values are changed.
Any advice would be great. Thanks | Normally you would do that in your select query on the fly and not store the calculation.
```
select some_column,
col1 * col2 as some_calculation_result
from your_table
```
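As a side note (an addition beyond the original answer): newer MySQL versions (5.7+) also support generated columns, which keep such a value up to date declaratively:

```
-- STORED means the value is computed on write and kept on disk
ALTER TABLE your_table
    ADD sum_column INT AS (column1 * column2) STORED;
```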
But if you have a really good reason not to do it that way, then you can use a trigger to calculate that data.
You need an update trigger to catch changes in the data and an insert trigger to calculate on insertion.
An example of an insert trigger goes like this
```
delimiter |
CREATE TRIGGER sum_trigger BEFORE INSERT ON your_table
FOR EACH ROW BEGIN
SET NEW.sum_column = NEW.column1 * NEW.column2;
END
|
delimiter ;
``` | I think you will have to do this with a trigger. A column by itself just stores data; it can't do things programmatically.
"",
"mysql",
"sql",
""
] |
I'm developing my first ever application with PostgreSQL.
# The Scenario
**This is what my table "person" looks like:**
```
Column | Type | Modifiers
------------+-----------------------------+-----------------------------------------------------
id | bigint | not null default nextval('person_id_seq'::regclass)
first_name | character varying(255) | not null
last_name | character varying(255) | not null
email | character varying(255) | not null
password | character varying(255) | not null
created_at | timestamp without time zone |
updated_at | timestamp without time zone |
Indexes:
"person_pkey" PRIMARY KEY, btree (id)
"person_email_unique" UNIQUE CONSTRAINT, btree (email)
"person_id_unique" UNIQUE CONSTRAINT, btree (id)
Referenced by:
TABLE "access" CONSTRAINT "access_person_id_foreign" FOREIGN KEY (person_id) REFERENCES person(id)
```
This was created using migrations in [knex.schema](http://knexjs.org/#Schema).
If I run the following query in psql...
`insert into person (first_name, last_name, email, password) values ('Max', 'Mustermann', '', '123123123');`
I get back `INSERT 0 1` and the row is successfully inserted:
```
id | first_name | last_name | email | password | created_at | updated_at
----+------------+------------+-----------------------------------+----------------+-------------------------+-------------------------
12 | Max | Mustermann | | 123123123 | |
```
# My Question:
**I expect the operation to fail, because no e-mail (NOT NULL) was specified. Why does it not fail?**
Thank you very much for your help!
Max | Some DBMS (like Oracle) treats empty string (`''`) as `NULL`. Others (like MySQL, PostgreSQL, etc) treat empty string and `NULL` as different.
PostgreSQL treats `''` as empty string, not `NULL`, so your `insert` statement executed successfully. | `null` and an empty string are not the same values, by passing in an empty string you have satisfied the requirement.
If your query was
```
insert into person (first_name, last_name, password)
values ('Max', 'Mustermann', '123123123');
```
then an error would be thrown because you are not passing in a value for email | PostgreSQL: Why is NOT NULL not working here? | [
"",
"sql",
"postgresql",
"psql",
"postgresql-9.3",
""
] |
On my database table `stocks`, I have the columns:
```
stock_id
make
model
buy_rate
tax
other_charges
```
After inserting data into the above table, I have added a new column named `sell_rate` which is at the moment `NULL`.
* **How do I update data in `sell_rate` using `buy_rate` + `tax` + `other_charges`?** | Try something like this:
```
UPDATE stocks
SET sell_rate = buy_rate + tax + other_charges
``` | You can use an UPDATE statement, as others suggested, to fill the column for records that already exist.
To fill the column automatically in the future, you should define the column as a computed column. Set:
Table Design -> Computed Column Specification -> Formula to: buy\_rate + tax + other\_charges
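In T-SQL that boils down to a single statement (a sketch; it assumes the `sell_rate` column does not exist yet, so drop the existing one first if needed):

```
ALTER TABLE stocks
    ADD sell_rate AS (buy_rate + tax + other_charges);
```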
But you should be aware of some considerations:
- Computed columns have some restrictions on indexing (in some cases)
- You can't insert or update computed columns directly
Check this [Article](http://technet.microsoft.com/en-us/library/ms191250%28v=sql.105%29.aspx) for more information.
Note: other ways are using an update trigger or defining a job, which are useful in specific situations | How to update new column based on the value of existent columns? | [
"",
"sql",
"sql-server",
""
] |
I need to get a report that shows distinct users per week to show user growth per week, but I need it to show cumulative distinct users.
So if I have 5 weeks of data, I want to show:
Distinct users from week 0 through week 1
Distinct users from week 0 through week 2
Distinct users from week 0 through week 3
Distinct users from week 0 through week 4
Distinct users from week 0 through week 5
I have a whole year's worth of data. The only way I know how to do this is to literally query the time ranges adjusting a week out at a time and this is very tedious. I just can't figure out how I could query everything from week 0 through week 1 all the way to week 0 through week 52.
EDIT - What I have so far:
```
select count(distinct user_id) as count
from tracking
where datepart(wk,login_dt_tm) >= 0 and datepart(wk,login_dt_tm) <= 1
```
Then I take that number, record it, and update it to -- datepart(wk,login\_dt\_tm) <= 2. And so on until I have all the weeks. That way I can chart a nice growth chart by week.
This is tedious and there has to be another way.
UPDATE-
I used the solution provided by @siyual but updated it to use a table variable so I could get all the results in one output.
```
Declare @Week Int = 0
Declare @Totals Table
(
WeekNum int,
UserCount int
)
While @Week < 52
Begin
insert into @Totals (WeekNum,UserCount)
select @Week,count(distinct user_id) as count
from tracking
where datepart(wk,login_dt_tm) >= @Week and datepart(wk,login_dt_tm) <= (@Week + 1)
Set @Week += 1
End
Select * from @Totals
``` | You could try something like this:
```
Declare @Week Int = 1
While @Week <= 52
Begin
select count(distinct user_id) as count
from tracking
where datepart(wk,login_dt_tm) >= 0 and datepart(wk,login_dt_tm) <= @Week
Set @Week += 1
End
``` | Why not something like:
```
select count(distinct user_id) as count, datepart(wk, login_dt_tm) as week
from tracking
group by datepart(wk,login_dt_tm)
order by week
``` | Growth Of Distinct Users Per Week | [
"",
"sql",
"sql-server",
""
] |
I have the following table for Customer:
```
+------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| first | varchar(45) | YES | | NULL | |
| last | varchar(45) | YES | | NULL | |
| password | varchar(45) | NO | | NULL | |
| contact_id | int(11) | NO | PRI | NULL | |
| address_id | int(11) | NO | PRI | NULL | |
+------------+-------------+------+-----+---------+----------------+
```
And the following structure for Appointment:
```
+-------------+------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| time | datetime | NO | | NULL | |
| cancelled | tinyint(1) | YES | | 0 | |
| confirmed | tinyint(1) | YES | | 0 | |
| customer_id | int(11) | NO | PRI | NULL | |
+-------------+------------+------+-----+---------+----------------+
```
I want to use a single query to get the customer information; if they have appointment information it should be included, otherwise it shouldn't.
I am trying to use the following:
```
CASE
WHEN (SELECT count(a.id) FROM appointment a
INNER JOIN customer c ON a.customer_id = c.id)
THEN (SELECT c.first, c.last, c.id, a.id FROM appointment a
INNER JOIN customer c ON a.customer_id = c.id)
ELSE
(SELECT c.first, c.last, c.id FROM customer)
END;
```
Do you have any suggestions? | How about
```
SELECT * FROM Customer c LEFT JOIN Appointment a ON a.CustomerId = c.Id
``` | You could make two queries and `UNION` them.
```
SELECT c.first, c.last, c.id, a.id FROM appointment a
INNER JOIN customer c ON a.customer_id = c.id
UNION
SELECT c.first, c.last, c.id, null FROM customer c
```
Or an outer join, where the `a.id` would be populated with null if there was no match during the join.
```
SELECT c.first, c.last, c.id, a.id FROM customer c
LEFT OUTER JOIN appointment a ON a.customer_id = c.id
``` | SELECT statement with a case statement | [
"",
"mysql",
"sql",
""
] |
Is there a way to use a where clause to check if there were zero matches between tables for a record from the first table, and produce one row of results reflecting that?
I'm trying to get results that look like this:
```
+----------+----------+-----------+----------+-------------+
| Results |
+----------+----------+-----------+----------+-------------+
| Date | Queue ID | From Date | To Date | Campaign ID |
| 3/1/2014 | 1 | 2/24/2014 | 3/2/2014 | 1 |
| 3/1/2014 | 2 | (NULL) | (NULL) | (NULL) |
+----------+----------+-----------+----------+-------------+
```
From a combination of tables that look like this:
```
+----------+-------+ +-------+----+ +----+-----------+-----------+----------+
| Table 1 | | Table 2 | | Table 3 |
+----------+-------+ +-------+----+ +----+-----------+-----------+----------+
| Date | Queue | | Queue | SP | | SP | From Date | To Date | Campaign |
| | ID | | ID | ID | | ID | | | ID |
+----------+-------+ +-------+----+ +----+-----------+-----------+----------+
| 3/1/2014 | 1 | | 1 | 1 | | 1 | 2/24/2014 | 3/2/2014 | 1 |
| 3/1/2014 | 2 | | 1 | 2 | | 2 | 3/3/2014 | 3/9/2014 | 5 |
| | | | 1 | 3 | | 3 | 3/10/2014 | 3/16/2014 | 1 |
| | | | 1 | 4 | | 4 | 3/17/2014 | 3/23/2014 | 1 |
| | | | 1 | 5 | | 5 | 3/24/2014 | 3/30/2014 | 4 |
| | | | 2 | 6 | | 6 | 3/3/2014 | 3/9/2014 | 5 |
| | | | 2 | 7 | | 7 | 3/10/2014 | 3/16/2014 | 5 |
| | | | 2 | 8 | | 8 | 3/17/2014 | 3/23/2014 | 5 |
| | | | 2 | 9 | | 9 | 3/24/2014 | 3/30/2014 | 5 |
+----------+-------+ +-------+----+ +----+-----------+-----------+----------+
```
I'm joining Table 1 to Table 2 on QUEUE ID,
and Table 2 to Table 3 on SP ID,
and DATE from Table 1 should fall between Table 3's FROM DATE and TO DATE.
I want a single record returned for each queue, even if there were no date matches.
Unfortunately, any combination of joins or where clauses I've tried so far results in either one record for Queue ID 1 only, or multiple records for each Queue ID. | I would suggest this:
```
SELECT
t1.Date,
t1.QueueID,
s.FromDate,
s.ToDate,
s.CampaignID
FROM
Table1 t1
LEFT JOIN
(
SELECT
t2.QueueID,
t3.FromDate,
t3.ToDate,
t3.CampaignID
FROM
Table2 t2
INNER JOIN
Table3 t3 ON
t2.SPID = t3.SPID
) s ON
t1.QueueID = s.QueueID AND
t1.Date BETWEEN s.FromDate AND s.ToDate
```
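One detail worth spelling out (an addition, not from the original answer): the date test must stay in the join's ON clause. Moved into a WHERE clause, it would reject the NULLs produced for unmatched queues and silently turn the LEFT JOIN into an inner join:

```
-- A sketch of the broken variant: Queue 2 would disappear,
-- because NULL FromDate/ToDate fails the BETWEEN test
SELECT t1.Date, t1.QueueID, s.FromDate, s.ToDate, s.CampaignID
FROM Table1 t1
LEFT JOIN
(
    SELECT t2.QueueID, t3.FromDate, t3.ToDate, t3.CampaignID
    FROM Table2 t2
    INNER JOIN Table3 t3 ON t2.SPID = t3.SPID
) s ON t1.QueueID = s.QueueID
WHERE t1.Date BETWEEN s.FromDate AND s.ToDate
```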
[SQL Fiddle here with an abbreviated dataset](http://sqlfiddle.com/#!3/724f9/1) | A trivial amendment to AHiggins' code; using a CTE makes it a little easier to read perhaps.
```
WITH AllDates AS
(
    SELECT
        t2.QueueID,
        t3.FromDate,
        t3.ToDate,
        t3.CampaignID
    FROM Table2 t2
    INNER JOIN Table3 t3 ON
        t2.SPID = t3.SPID
)
SELECT
    t1.Date,
    t1.QueueID,
    s.FromDate,
    s.ToDate,
    s.CampaignID
FROM Table1 t1
LEFT JOIN AllDates s ON
    t1.QueueID = s.QueueID AND
    t1.Date BETWEEN s.FromDate AND s.ToDate
``` | SQL Query to get results that match between three tables, or a single result for no match | [
"",
"sql",
""
] |
I have some very simple code for a cron job that makes a date entry into an SQL DB:
```
$qry_cron_test = "INSERT INTO ".$tblprefix."cron_test SET
create_datetime = '".date("Y-d-m H:i:s")."'";
$rs_cron_test = $db -> Execute($qry_cron_test);
```
The problem is the following:
Between the 1st and 12th of every month the date entry is like this - 2014-10-03 07:30:39, which is what I want.
However, when the current date is between the 13th and the end of the month, the date entry looks like this - 0000-00-00 00:00:00. Then when the 1st comes, the entries are all OK again.
I tested this on a couple of servers and also locally on XAMPP, always with the same result.
Any suggestions? What could possibly be wrong?
```
$qry_cron_test = "INSERT INTO ".$tblprefix."cron_test SET
create_datetime = '".date("Y-m-d H:i:s")."'";
$rs_cron_test = $db -> Execute($qry_cron_test);
```
date("Y-**m**-**d** H:i:s") | I recommend that, unless you need millisecond information, you always store date information as a [Unix Timestamp](http://www.unixtimestamp.com/). It is lighter to store, since it is only an integer value, is faster to retrieve, and is universal, since it is always based on UTC.
Especially in PHP, converting date information to ([`time()`](http://php.net/time) and [`strtotime`](http://php.net/strtotime)) and from ([`date()`](http://php.net/date)) a Unix timestamp is pretty easy. This way, no matter where your user is, you can always show correct information in local time with almost no effort. | Cron job making date database entry | [
"",
"mysql",
"sql",
"datetime",
"cron",
""
] |
I have the following problem:
I have a data table that is fed by data from a SQL query.
The query works just fine, but not all the data is displayed. I deleted one of the columns earlier and now wanted to re-add it, but it does not show.
Is there a way to get this to work?
Basically, I have those columns:
```
Name, First name, birthday, gender
```
Now I deleted `gender`:
```
Name, First name, birthday
```
After a while, I wanted to re-add `gender`, but the data table shows the following:
```
Name, first name, birthday
```
It does work if I change the column name from `gender` to `sex` in the SQL query, but that is not a solution I can live with.
If I change the name, then rename the column header, on the next refresh, the name is reinstated. If I rename the column header, then change the column name in the SQL query, the column disappears on the next refresh.
Anyone with a solution? | I'm guessing you have Preserve column/sort/filter/layout checked in the External Data Properties dialog (right-click> Table> External Data Properties). Try unchecking it, refreshing, and then checking it again. Save first! | I had the same issue, and finally found an easy solution for adding columns. Click on the table, then Query>Edit>Advanced Editor (under the home tab).
You should see the source code for the query. In the first line of code, you will see Columns= (followed by your number of columns).
You need to change this number to reflect the correct number of columns in the new CSV file. I originally had 17 columns. I added two data columns, so I changed this number to 19.
Close the editor and refresh, and you should be all set. | Excel data table (SQL query): Once deleted column no longer shows up | [
"sql",
"excel",
"datatable"
] |
I am trying to split a single string containing multiple email address data into three variables. The strings mark the start/end of an email address with the ; character.
An example string would be:
```
'joebloggs@gmailcom;jimbowen@aol.com;dannybaker@msn.com'
```
The code I currently have for this is as follows:
```
DECLARE @Email VARCHAR(100),
@Email2 VARCHAR(100),
@Email3 VARCHAR(100)
SET @Email = 'joebloggs@gmailcom;jimbowen@aol.com;dannybaker@msn.com'
SET @Email2 = SUBSTRING(@Email, CHARINDEX(';', @Email)+1, LEN(@Email))
SET @Email3 = SUBSTRING(@Email, CHARINDEX(';', @Email)+1, LEN(@Email))
SET @Email = SUBSTRING(@Email, 1, CHARINDEX(';', @Email)-1)
```
Unfortunately this doesn't seem to work. Could someone please point out where I am going wrong and what I should do to fix my problem?
Thanks in advance. | Assuming that there will always be 3 email addresses, the following seems to work:
```
DECLARE @Email VARCHAR(100),
@Email2 VARCHAR(100),
@Email3 VARCHAR(100)
SET @Email = 'joebloggs@gmailcom;jimbowen@aol.com;dannybaker@msn.com'
SELECT @Email = LEFT(@Email, CHARINDEX(';', @Email) - 1)
,@Email2 = SUBSTRING (
@Email,
CHARINDEX(';', @Email) + 1,
CHARINDEX(';', @Email, CHARINDEX(';', @Email) + 1) - LEN(LEFT(@Email, CHARINDEX(';', @Email) )) - 1
)
,@Email3 = RIGHT(@Email, CHARINDEX(';', @Email)-1)
``` | This solution:
```
create function dbo.SplitString
(
@str nvarchar(max),
@separator char(1)
)
returns table
AS
return (
with tokens(p, a, b) AS (
select
cast(1 as bigint),
cast(1 as bigint),
charindex(@separator, @str)
union all
select
p + 1,
b + 1,
charindex(@separator, @str, b + 1)
from tokens
where b > 0
)
select
p-1 ItemIndex,
substring(
@str,
a,
case when b > 0 then b-a ELSE LEN(@str) end)
AS Item
from tokens
);
GO
```
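As a sanity check of what the split should produce, here is the same tokenising sketched in plain Python (a stand-in added for illustration, not part of either answer):

```python
# Split the example string from the question the same way the SQL
# approaches above do, using ';' as the separator.
emails = "joebloggs@gmailcom;jimbowen@aol.com;dannybaker@msn.com"

# Equivalent of SplitString's output: (ItemIndex, Item) pairs, 0-based.
tokens = list(enumerate(emails.split(";")))
for index, item in tokens:
    print(index, item)
```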
Taken from [How do I split a string so I can access item x](https://stackoverflow.com/questions/2647/how-do-i-split-a-string-so-i-can-access-item-x) | Splitting an SQL string into multiple strings | [
"sql",
"sql-server",
"string",
"substring"
] |
What I am trying to do is output (order\_id, payment\_type) together with a column title of 'Order & Payment Type Used' and (order\_date, order\_time) together with a column title of 'Date/Time'. I've tried the below query in a number of different ways now, but I always get errors.
**Query I am trying to execute**
```
SELECT CONCAT('order_id', ' ', 'payment_type') AS 'Order & Payment Type Used',
CONCAT('order_date', ' ', 'order_time') AS 'Date/Time'
FROM 'ORDER'
ORDER BY order_id;
```
**Error Result**
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds
to your MySQL server version for the right syntax to use near ''ORDER' ORDER BY
order_id LIMIT 0, 30' at line 1
```
Table . . .
**'ORDER'**
```
order_id
order_date
order_time
payment_type
``` | Escaping of special names is with ` not with '
```
SELECT CONCAT(`order_id`, ' ', `payment_type`) AS `Order & Payment Type Used`,
CONCAT(`order_date`, ' ', `order_time`) AS `Date/Time`
FROM `ORDER` ORDER BY order_id;
```
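For a quick experiment outside MySQL, sqlite3 also accepts backtick-quoted identifiers, so the quoting fix can be tried directly; note sqlite has no `CONCAT()`, so the standard `||` operator stands in for it here:

```python
# sqlite3 stand-in: backticks let us use the reserved word ORDER as a
# table name, just as in the MySQL answer above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE `ORDER` (order_id INT, order_date TEXT, "
             "order_time TEXT, payment_type TEXT)")
conn.execute("INSERT INTO `ORDER` VALUES (1, '2014-01-01', '10:00', 'card')")

# || concatenates (sqlite's replacement for MySQL's CONCAT()).
row = conn.execute(
    "SELECT `order_id` || ' ' || `payment_type`, "
    "       `order_date` || ' ' || `order_time` "
    "FROM `ORDER` ORDER BY order_id").fetchone()
print(row)
```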
' is for strings. | Use backticks for column and table names; quotes are for strings and dates:
```
`order_id`, ' ', `payment_type`
``` | MySQL - SELECT CONCAT AS ' ' & ORDER BY - Error/Not working? | [
"mysql",
"sql"
] |
I'm learning the way ADO.NET models work in Entity Framework with MySQL. I generate a new test model and then select **"Generate Database from model"**.
It produces a new file, "model\_name.edms.sql" - the actual MySQL script for database creation.
However, to execute it I have to "Connect to Server", which by default assumes SQL Server 2012. But in my case I use MySQL and not MS SQL Server. I don't have a SQL Server 2012 instance; I'm working with MySQL.
How do I change it to connect to MySQL?
P.S. I know I can use "New Query" directly on the database and copy/paste the content of the file and execute it. I can also use MySQL Workbench and tons of other applications. However, I'm working in VS2013, where most of the tools are already integrated; I can't believe that SQL files in VS2013 can be executed only through MS SQL Server 2012. | I've just run into the same problem, and here's how to solve it.
I'm using VS 2010 Ultimate but I guess it's the same in VS 2013.
First, when you connect (and execute the sql) from the toolbar you actually request to do it on the **'Transact-SQL Editor' toolbar**, which means that the toolbar handles the MS databases. That's why you ALWAYS get a connection dialog to MS databases.
If MySQL package is properly installed in VS (and apparently that's the case for you) you should:
1. *Right click any existing toolbar (or go to View->Toolbars) and select MySQL*. This should add the MySql Toolbar.
2. Click on *the first button from the left* (in **'MySQL Toolbar'**) to either *connect to an existing Data Connection* or *create a new connection* to your MySql DB. A MySql script tab will be added.
3. **Copy all** the sql generated from the EDMX file to the **'MySql script tab'**.
4. **Run the script from the 'MySql script tab'**.
That should do it.
I know there's still a little copy-paste involved but at least you don't have to leave VS.
Hope that helps
cheerio | First, you have to be sure that you have downloaded MySQL for Visual Studio. This is NOT Connector/Net (though you should probably have that, too).
In VS, when you open Server Explorer, you should be able to add a database. Input your server name, user name, password, and don't forget to click the Advanced button and add in your port (usually 3306). All of this information can be obtained from your MySQL Workbench. Now you should be able to deploy your EDMX to your MySQL database using the same steps you would use for SQL Server.
Full steps from Oracle can be found at <http://dev.mysql.com/doc/connector-net/en/connector-net-visual-studio-making-a-connection.html>.
EDIT: Once you've performed the steps above, right-click in a blank space of your EDMX and choose SSDLToMySQL.tt in the dropdown on the DDL Generation Template. SSDLToSQL[version].tt is the default choice. | Visual Studio 2013 - execute Model.edmx.sql trough MySQL? | [
"mysql",
"sql",
"entity-framework",
"ado.net",
"visual-studio-2013"
] |
I have an order management system in which I have a table. Each request can have a corresponding response. For example:
There are 2 columns, UniqueNumber and Type, as below:
```
A Request
A Response
B Request
C Request
D Request
E Request
E Response
C Response
```
I want to query those unique numbers in the table which have a request but do not have a response - for example, in the above case, B and D. | Use a left self join with a condition that filters out matches:
```
select t1.UniqueNumber
from mytable t1
left join mytable t2 on t2.UniqueNumber = t1.UniqueNumber
and t2.type = 'Response'
where t1.type = 'Request'
and t2.type is null
```
This query works because the test for type is placed in the join condition, so missed joins return nulls for the response columns, and the where clause then selects exactly those rows.
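A quick way to verify this against the sample data, using sqlite3 as a stand-in (the query text itself is unchanged):

```python
# Reproduce the example data and run the left-join anti-join above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (UniqueNumber TEXT, type TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [
    ("A", "Request"), ("A", "Response"),
    ("B", "Request"), ("C", "Request"),
    ("D", "Request"), ("E", "Request"),
    ("E", "Response"), ("C", "Response"),
])

rows = conn.execute("""
    SELECT t1.UniqueNumber
    FROM mytable t1
    LEFT JOIN mytable t2 ON t2.UniqueNumber = t1.UniqueNumber
                        AND t2.type = 'Response'
    WHERE t1.type = 'Request'
      AND t2.type IS NULL
    ORDER BY t1.UniqueNumber
""").fetchall()
print(rows)  # [('B',), ('D',)]
```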
As long as there's an index on UniqueNumber, this query will outperform all other forms due to the efficiency of joins. | You can select all Requests and then remove those that have a Response:
```
SELECT t1.UniqueNumber
FROM your_table t1
WHERE t1.Type = 'Request'
AND NOT EXISTS
(
SELECT 1
FROM your_table t2
WHERE t2.UniqueNumber = t1.UniqueNumber
AND t2.Type = 'Response'
)
```
Performance could be improved with a composite index on `(Type, UniqueNumber)`. | How to query for request records that do not have response | [
"sql",
"oracle",
"oracle11g",
"oracle10g"
] |
I have a table called `table1` in SQL Server 2008
It has the following data:
```
id refId Date IsActive
=====================================
1 2 2014-03-01 1
2 2 2014-03-01 1
3 2 2014-04-15 0 <
4 2 2014-04-15 0 <
5 2 2014-05-20 1
6 2 2014-05-20 1
7 4 2014-03-01 1
8 4 2014-03-01 1
9 4 2014-04-15 1 <
10 4 2014-05-20 1
```
**EDIT**
`refId` refers to a person in another table. So I want the persons whose records do not have `Date = 2014-04-15`, OR they have `Date = 2014-04-15` but `IsActive = 0`.
So according to the top, the output should be:
```
refId
=====
2
```
I can do this via `MySQL` using this query (**EDIT 2**):
```
SELECT refId
FROM table1
GROUP BY refId
/*Check if there is no value with this date*/
HAVING MAX(Date='2014-04-15') = 0
/*Check if the date exists but the IsActive flag is off*/
OR MAX(Date='2014-04-15' AND IsActive=0) = 1
```
but the problem is, SQL Server does not accept a condition inside the `MAX()` function. | If you need to include only the refIds that meet your requirements, then this should work:
```
select refId
from table1 as t
group by refId
having exists(
select refId
from table1 as t2
where [Date]<>'2014-04-15'
or ([Date]='2014-04-15' and IsActive=0)
group by refId
having t.refId=t2.refId
-- this next line is where we make sure we are only matching
-- where the total number of records of refId is equal to the total match count
and count(t.refId)=count(t2.refId) )
``` | If this is the query in MySQL:
```
SELECT refId
FROM table1
GROUP BY refId
HAVING MAX(Date='2014-04-15') = 0 OR
MAX(Date='2014-04-15' AND IsActive=0) = 1;
```
You can readily translate this to SQL Server/ANSI SQL syntax by using the `case` statement:
```
SELECT refId
FROM table1
GROUP BY refId
HAVING MAX(CASE WHEN Date = '2014-04-15' THEN 1 ELSE 0 END) = 0 OR
MAX(CASE WHEN Date = '2014-04-15' AND IsActive = 0 THEN 1 ELSE 0 END) = 1;
```
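The translated query can be checked against the sample data with sqlite3 as a stand-in (the `HAVING` clause is exactly the one above):

```python
# Load the question's sample rows and run the CASE-based translation;
# only refId 2 should come back.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INT, refId INT, Date TEXT, IsActive INT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)", [
    (1, 2, '2014-03-01', 1), (2, 2, '2014-03-01', 1),
    (3, 2, '2014-04-15', 0), (4, 2, '2014-04-15', 0),
    (5, 2, '2014-05-20', 1), (6, 2, '2014-05-20', 1),
    (7, 4, '2014-03-01', 1), (8, 4, '2014-03-01', 1),
    (9, 4, '2014-04-15', 1), (10, 4, '2014-05-20', 1),
])

rows = conn.execute("""
    SELECT refId
    FROM table1
    GROUP BY refId
    HAVING MAX(CASE WHEN Date = '2014-04-15' THEN 1 ELSE 0 END) = 0 OR
           MAX(CASE WHEN Date = '2014-04-15' AND IsActive = 0 THEN 1 ELSE 0 END) = 1
""").fetchall()
print(rows)  # [(2,)]
```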
This query will also work in MySQL. | Filter SQL Server records | [
"sql",
"sql-server"
] |
I need a single query to fetch the counts based on some conditions (Oracle).
```
select count(*) AS TOTAL_ROWS from table
select count(*) AS TOTAL_ERROR_ROWS from table where ERROR_CODE is not null
select count(*) AS TOTAL_SUCCESS_ROWS from table where ERROR_CODE is null
```
I want to fetch all three results in one query. I have tried the following after googling, but it is not working:
```
select
count(*) as TotalCount,
count(case when { where ERROR_CODE is not null} then 1 else 0 end) as QualifiedCount
from
wli_qs_report_attribute
select count(*) as TOTAL,
count(case when ERROR_CODE is not null then 1 else 0 end) as ExecCount,
from wli_qs_report_attribute
```
it doesn't work. | ```
SELECT TOTAL_ROWS, TOTAL_ERROR_ROWS, TOTAL_ROWS - TOTAL_ERROR_ROWS AS TOTAL_SUCCESS_ROWS
FROM (
    SELECT COUNT(*) AS TOTAL_ROWS
         , COUNT(ERROR_CODE) AS TOTAL_ERROR_ROWS
    FROM table
) T
``` | You can use `UNION` :
```
select count(*) AS TOTAL_ROWS from table
UNION
select count(*) AS TOTAL_ROWS from table where ERROR_CODE is not null
UNION
select count(*) AS TOTAL_ROWS from table where ERROR_CODE is null
```
But it will display all three `COUNT`s in one column. To display it in three different columns try the following :
```
SELECT COUNT(*) AS TOTAL_ROWS
, COUNT(CASE WHEN ERROR_CODE IS NOT NULL THEN * END) AS TOTAL_ERROR_ROWS
, COUNT(CASE WHEN ERROR_CODE IS NULL THEN * END) AS TOTAL_SUCCESS_ROWS
FROM table
``` | Multiple count(*) in single query | [
"sql",
"oracle",
"oracle11g"
] |
Hi all, here is a MySQL problem that takes results from a 2-table join, conditionally assesses them, and outputs 2 values.
Here is the database structure.
The 1st table, `gtpro`, contains:
* a user ID (column name `id`)
* a samples-per-year number, i.e. 2, 4 or 12 times/year (column name `labSamples__yr`)
The 2nd table, `labresults`, contains:
* that same user ID (column name `idgtpro`)
* a date column for the sample dates, i.e. when the samples were provided (column name `date`)
This query returns an overview of all ids and when the last samples were submitted for each id:
```
SELECT a.id, a.labSamples__yr, max(b.date) as ndate from gtpro as a
join labresults as b on a.id = b.idgtpro group by a.id
```
the conditions I want to evaluate looks like this.
```
a.labSamples__yr = 2 and ndate >= DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
a.labSamples__yr = 4 and ndate >= DATE_SUB(CURDATE(), INTERVAL 3 MONTH)
a.labSamples__yr = 12 and ndate >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH)
```
So if the number of samples per year is 2 and the last sample date was more than 6 months ago, I want to know the id and the latest sample date for that id.
I tried using CASE and IF statements but can't quite get it right. This was my latest attempt.
```
select id, ndate,
case when (labSamples__yr = 2 and ndate <= DATE_SUB(CURDATE(), INTERVAL 6 MONTH))is true
then
(SELECT id from gtpro as a join labresults as b on a.id = b.idgtpro where
labSamples__yr = 2 and max(b.date) <= DATE_SUB(CURDATE(), INTERVAL 6 MONTH)) end as id
from (SELECT a.id, a.labSamples__yr, max(b.date) as ndate from gtpro as a
join labresults as b on a.id = b.idgtpro group by a.id) d
```
This tells me "invalid use of group function".
Desperate for a bit of help.
**EDIT** I messed up some of the names in the code above, which I have now fixed. | If I understand your question correctly, you should be able to put the conditions in the `where` clause:
```
SELECT a.id, a.labSamples__yr, max(b.date) as ndate
from gtpro a join
labresults b
on a.id = b.idgtpro
where (a.labSamples__yr = 2 and b.date >= DATE_SUB(CURDATE(), INTERVAL 6 MONTH)) or
(a.labSamples__yr = 4 and b.date >= DATE_SUB(CURDATE(), INTERVAL 3 MONTH)) or
(a.labSamples__yr = 12 and b.date >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH))
group by a.id;
```
That fixes your syntax problem. But, if you want the `id` with the maximum date, try doing this:
```
select a.labSamples__yr, max(b.date) as ndate,
substring_index(group_concat(a.id order by b.date desc), ',', 1) as maxid
from gtpro a join
labresults b
on a.id = b.idgtpro
where (a.labSamples__yr = 2 and b.date >= DATE_SUB(CURDATE(), INTERVAL 6 MONTH)) or
(a.labSamples__yr = 4 and b.date >= DATE_SUB(CURDATE(), INTERVAL 3 MONTH)) or
(a.labSamples__yr = 12 and b.date >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH))
group by a.labSamples__yr;
```
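A sketch of the same `WHERE`-clause structure in sqlite3, as a stand-in for MySQL: sqlite spells `DATE_SUB(CURDATE(), INTERVAL 6 MONTH)` as `date('now', '-6 months')`, and a fixed reference date replaces `CURDATE()` here (with an added `ORDER BY`) so the result is reproducible. The table contents are illustrative only:

```python
# Hypothetical sample data: user 1 submits 2/year, user 2 submits 12/year.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gtpro (id INT, labSamples__yr INT)")
conn.execute("CREATE TABLE labresults (idgtpro INT, date TEXT)")
conn.executemany("INSERT INTO gtpro VALUES (?, ?)", [(1, 2), (2, 12)])
conn.executemany("INSERT INTO labresults VALUES (?, ?)",
                 [(1, '2024-01-15'), (2, '2024-05-20')])

# OR'd per-frequency cutoffs, relative to the fixed date 2024-06-01.
rows = conn.execute("""
    SELECT a.id, a.labSamples__yr, MAX(b.date) AS ndate
    FROM gtpro a
    JOIN labresults b ON a.id = b.idgtpro
    WHERE (a.labSamples__yr = 2  AND b.date >= date('2024-06-01', '-6 months')) OR
          (a.labSamples__yr = 12 AND b.date >= date('2024-06-01', '-1 months'))
    GROUP BY a.id
    ORDER BY a.id
""").fetchall()
print(rows)
```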
Putting `a.id` in the `group by` is not going to give you the maximum id of anything. | The answer partly inspired by Tomas (sql clarification and syntax clarification) I got rid of the CASE all together. It seems nice and clean to me but I would like to hear any other suggestions
```
select id, labSamples__yr, ndate from
(SELECT a.id, a.labSamples__yr, max(b.date) as ndate from gtpro as a
join labresults as b on a.id = b.idgtpro group by a.id)d
where (ndate <= DATE_SUB(CURDATE(), INTERVAL 6 MONTH) and labSamples__yr = 2)
or (ndate <= DATE_SUB(CURDATE(), INTERVAL 3 MONTH) and labSamples__yr = 4)
or (ndate <= DATE_SUB(CURDATE(), INTERVAL 1 MONTH) and labSamples__yr = 12)
```
Thanks for looking but it would still be nice to see a solution using a CASE statement for future reference??? | MySQL using IF or CASE statement across joined tables | [
"mysql",
"sql",
"if-statement",
"case"
] |
Let say I have a table:
```
ColumnA ColumnB
---------------------------------
1 10.75
4 1234.30
6 2000.99
```
How can I write a SELECT query that will result in the following:
```
ColumnA ColumnB
---------------------------------
1 10.75
2 0.00
3 0.00
4 1234.30
5 0.00
6 2000.99
``` | You can use a CTE to create a list of numbers from 1 to the maximum value in your table:
```
; with numbers as
(
select max(ColumnA) as nr
from YourTable
union all
select nr - 1
from numbers
where nr > 1
)
select nr.nr as ColumnA
, yt.ColumnB
from numbers nr
left join
YourTable yt
on nr.nr = yt.ColumnA
order by
nr.nr
option (maxrecursion 0)
```
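The same idea can be tried in sqlite3, which spells the CTE `WITH RECURSIVE` and has no `MAXRECURSION` option; otherwise the shape of the query is identical:

```python
# Count down from MAX(ColumnA) to 1 in a recursive CTE, then left-join
# the real rows back in, defaulting missing values to 0.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (ColumnA INT, ColumnB REAL)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?)",
                 [(1, 10.75), (4, 1234.30), (6, 2000.99)])

rows = conn.execute("""
    WITH RECURSIVE numbers(nr) AS (
        SELECT MAX(ColumnA) FROM YourTable
        UNION ALL
        SELECT nr - 1 FROM numbers WHERE nr > 1
    )
    SELECT nr.nr AS ColumnA, COALESCE(yt.ColumnB, 0.0) AS ColumnB
    FROM numbers nr
    LEFT JOIN YourTable yt ON nr.nr = yt.ColumnA
    ORDER BY nr.nr
""").fetchall()
for row in rows:
    print(row)
```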
[See it working at SQL Fiddle.](http://sqlfiddle.com/#!6/8b347/1/0) | Please try:
```
declare @min int, @max int
select @min=MIN(ColumnA), @max=MAX(ColumnA) from tbl
select
distinct number ColumnA,
isnull(b.ColumnB, 0) ColumnB
from
master.dbo.spt_values a left join tbl b on a.number=b.ColumnA
where number between @min and @max
``` | Select non-existing rows | [
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
"sql-server-2008-r2"
] |
I'm not sure I phrased the question correctly, so feel free to correct me. Here are the tables with their data:
```
product category category_product
------- -------- ----------------
id_product id_category active id_category id_product
1 1 1 1 1
2 2 1 2 1
3 3 0 1 2
4 0 2 2
3 2
3 3
4 3
```
I need to select only those products, which have **all** categories as inactive.
For example:
* Product `1` is good, since it belongs to active categories (`1`, `2`).
* Product `2` is good, since it has at least one active category (`1`, `2`; `3` - inactive)
* Product `3` must be selected, since all its categories are inactive (`3`, `4`).
I have the following query, which is obviously incorrect, since it selects both products: `2` and `3`:
```
SELECT p.id_product
FROM product p
JOIN category_product cp
ON p.id_product = cp.id_product
JOIN category c
ON c.id_category = cp.id_category
WHERE
c.active = 0;
```
Here is the SQL Fiddle: <http://sqlfiddle.com/#!2/909dd/2/0>
How can I solve this? | Consider the following:
```
SELECT p.*
, COUNT(*)
, SUM(c.active = 1) active
, SUM(c.active = 0) inactive
FROM product p
JOIN category_product cp
ON cp.id_product = p.id_product
JOIN category c
ON c.id_category = cp.id_category
GROUP
BY p.id_product;
+------------+----------+--------+----------+
| id_product | COUNT(*) | active | inactive |
+------------+----------+--------+----------+
| 1 | 2 | 2 | 0 |
| 2 | 3 | 2 | 1 |
| 3 | 2 | 0 | 2 |
+------------+----------+--------+----------+
```
<http://sqlfiddle.com/#!2/909dd/55>
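Reproducing those counts with sqlite3 as a stand-in (like MySQL, sqlite evaluates `c.active = 1` as 0/1, so `SUM()` counts the matches; an `ORDER BY` is added for a stable result):

```python
# Build the question's three tables and run the conditional aggregation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id_product INT);
    CREATE TABLE category (id_category INT, active INT);
    CREATE TABLE category_product (id_category INT, id_product INT);
    INSERT INTO product VALUES (1), (2), (3);
    INSERT INTO category VALUES (1, 1), (2, 1), (3, 0), (4, 0);
    INSERT INTO category_product VALUES
        (1, 1), (2, 1), (1, 2), (2, 2), (3, 2), (3, 3), (4, 3);
""")

rows = conn.execute("""
    SELECT p.id_product,
           SUM(c.active = 1) AS active,
           SUM(c.active = 0) AS inactive
    FROM product p
    JOIN category_product cp ON cp.id_product = p.id_product
    JOIN category c ON c.id_category = cp.id_category
    GROUP BY p.id_product
    ORDER BY p.id_product
""").fetchall()
print(rows)  # [(1, 2, 0), (2, 2, 1), (3, 0, 2)]
```

From there, the exercise reduces to appending `HAVING SUM(c.active = 1) = 0`, which keeps only product 3.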
The last part of this problem has been left as an exercise for the reader. | This way you can select products without an active category:
```
SELECT p.id_product
FROM product p
WHERE NOT EXISTS
(SELECT * FROM
category_product cp
INNER JOIN category c ON c.id_category = cp.id_category
WHERE p.id_product = cp.id_product AND c.active = 1);
```
[SQL Fiddle](http://sqlfiddle.com/#!2/909dd/7) | How to select rows, which have all or none corresponding values in another table? | [
"mysql",
"sql"
] |
I have a problem removing duplicates in a select query, while still considering the order of the rows.
I have the following example data:
```
myDate myValue
---------------------------
2014-01-01 100
2014-01-02 100
2014-01-03 200
2014-01-04 100
2014-01-05 100
2014-01-06 100
2014-01-07 300
```
I need a query that removes duplicates occurring on consecutive dates, thereby producing the following result. Note that the value 100 is returned more than once in the result, which is not the case with my current query.
```
myDate myValue
---------------------------
2014-01-01 100
2014-01-03 200
2014-01-04 100
2014-01-07 300
```
What I have so far which does not work is:
```
SELECT * FROM (
SELECT myDate, myValue
FROM testtable
ORDER BY myDate
) AS t_temp GROUP BY myValue;
```
Any ideas on how I could improve the query to produce the desired result? | I have not verified this, but I think this will give you what you're looking for. The inner query grabs each row, where the current value does not match the previous one. It uses `@previous` to keep track of previous row. Otherwise, it generates a `NULL` row. And finally, the outer query eliminates the `NULL` rows.
For example, when it looks at the first row, it sees that `myValue` doesn't match `@previous`, because it is empty, and it grabs the whole row. When it looks at the second row, it sees that `myValue` equals `@previous`, so in this case, it generates NULLs. When it looks at the third row, it sees that `myValue` does not equal `100`, so it grabs the whole row. And it does this to the end. Then the outer query eliminates all the `NULL` rows.
```
SET @previous := '';
SELECT
myDate,
myValue
FROM (
SELECT
IF( myValue != @previous, myDate, NULL ) AS myDate,
IF( myValue != @previous, myValue, NULL ) AS myValue,
@previous := myValue
FROM testtable
) temp
WHERE myDate IS NOT NULL;
```
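The same keep-first-of-each-run logic, cross-checked in plain Python (`itertools.groupby` groups consecutive equal values, mirroring what the `@previous` comparison does in the query above):

```python
# Keep only the first row of each run of equal myValue.
from itertools import groupby

rows = [
    ("2014-01-01", 100), ("2014-01-02", 100), ("2014-01-03", 200),
    ("2014-01-04", 100), ("2014-01-05", 100), ("2014-01-06", 100),
    ("2014-01-07", 300),
]

# groupby yields one group per run of consecutive equal values;
# next(group) takes the earliest row of each run.
deduped = [next(group) for _, group in groupby(rows, key=lambda r: r[1])]
for date, value in deduped:
    print(date, value)
```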
This can also be written as follows:
```
SELECT
myDate,
myValue
FROM (
SELECT
IF( myValue != @previous, myDate, NULL ) AS myDate,
IF( myValue != @previous, myValue, NULL ) AS myValue,
@previous := myValue
FROM my_table
, (SELECT @previous := '') val
ORDER
BY myDate
) temp
WHERE myDate IS NOT NULL;
``` | In SQL you would use LAG or LEAD to look into the previous or next record, but MySQL doesn't support them.
So provided there is an entry for every day, you can just select the day before and compare with the current value:
```
select
mytable.mydate,
mytable.myvalue
from mytable
left outer join mytable prev on adddate(prev.mydate, interval 1 day) = mytable.mydate
where prev.myvalue is null or prev.myvalue != mytable.myvalue
order by mydate;
```
If there are gaps however, you would have to select all earlier records and find the minimum date within to get the previous one. | MySQL "group by" maintaining the sorting of data | [
"mysql",
"sql",
"group-by"
] |