Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have two tables (using PostgreSQL) which look like the following:
Table 1 (ppoint runs from 1 to 450, incrementing by 1)
```
--------+-------+--------+---------+---------+-------+
ppoint | tom08 | tom920 | tom2135 | tom3650 | tom51 |
--------+-------+--------+---------+---------+-------+
1 | 2.5 | 125 | 52.5 | 15 | 2.5 |
... | ... | ... | ... | ... | ... |
450 | 0 | 7.5 | 87.5 | 0 | 0 |
--------+-------+--------+---------+---------+-------+
```
Table 2
```
--------+-------+
ppoint | tom |
--------+-------+
1 | 197.5 |
... | ... |
450 | 95 |
--------+-------+
```
Table 2's "tom" column is the sum of the Table 1 "tom..." values.
The thing is, I want to divide every "tom..." cell from Table 1 by the corresponding summarized value from Table 2.
I've got one value using the following:
```
SELECT
(SELECT tom08 FROM Table1 WHERE ppoint = 1)/
(SELECT tom FROM Table2 WHERE ppoint = 1)
FROM Table1 WHERE ppoint = 1
```
But it's still only one value, and I want to divide every cell by the corresponding Table2 summarized value and put the results into a new table (overwriting Table1 is also an option).
This new table's row 1 (with column 1 being the corresponding "ppoint") should contain:
```
tom08 (FROM Table1 WHERE ppoint=1)/tom (FROM Table2 WHERE ppoint=1)
tom920 (FROM Table1 WHERE ppoint=1)/tom (FROM Table2 WHERE ppoint=1)
tom2135 (FROM Table1 WHERE ppoint=1)/tom (FROM Table2 WHERE ppoint=1)
tom3650 (FROM Table1 WHERE ppoint=1)/tom (FROM Table2 WHERE ppoint=1)
tom51 (FROM Table1 WHERE ppoint=1)/tom (FROM Table2 WHERE ppoint=1)
```
This new table's row 2 should contain:
```
tom08 (FROM Table1 WHERE ppoint=2)/tom (FROM Table2 WHERE ppoint=2)
tom920 (FROM Table1 WHERE ppoint=2)/tom (FROM Table2 WHERE ppoint=2)
tom2135 (FROM Table1 WHERE ppoint=2)/tom (FROM Table2 WHERE ppoint=2)
tom3650 (FROM Table1 WHERE ppoint=2)/tom (FROM Table2 WHERE ppoint=2)
tom51 (FROM Table1 WHERE ppoint=2)/tom (FROM Table2 WHERE ppoint=2)
```
And so on until 450, so the final table should look something like this:
```
+--------+-------------+-------------+-------------+-------------+-------------+
| ppoint | tom08 | tom920 | tom2135 | tom3650 | tom51 |
+--------+-------------+-------------+-------------+-------------+-------------+
| 1 | 0.012658228 | 0.632911392 | 0.265822785 | 0.075949367 | 0.012658228 |
| ... | ... | ... | ... | ... | ... |
| 450 | 0 | 0.078947368 | 0.921052632 | 0 | 0 |
+--------+-------------+-------------+-------------+-------------+-------------+
```
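As a quick arithmetic check on that expected first row: each value is just the corresponding tomXX cell divided by the ppoint-1 sum of 197.5. A small Python sketch (names mirror the columns above) confirms the figures:

```python
# ppoint = 1 values from Table 1; their sum is Table 2's "tom" for ppoint 1.
tom = {"tom08": 2.5, "tom920": 125, "tom2135": 52.5, "tom3650": 15, "tom51": 2.5}
total = sum(tom.values())  # 197.5
shares = {k: round(v / total, 9) for k, v in tom.items()}
print(total, shares)
```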
Is there some sort of solution for this? Do you have any suggestions?
Sorry for my English - this was probably not the best way to explain my problem :)
Thanks! | If I understand you correctly, I think this is what you are looking for.
```
SELECT tom08/tom, tom920/tom, tom2135/tom, tom3650/tom, tom51/tom
FROM table1 t1, table2 t2
WHERE t1.ppoint = t2.ppoint;
``` | You can do this using a join:
```
SELECT tom08/tom as tom08, tom920/tom as tom920, tom2135/tom as tom2135, tom3650/tom as tom3650, tom51/tom as tom51
FROM table1
JOIN table2 ON table1.ppoint = table2.ppoint;
``` | SQL Arithmetics with unique cells of tables | [
"",
"sql",
"postgresql",
""
] |
I'm writing a `SQL` query to check `Gross Profit Percentage` for a group of sales items in a database.
```
SELECT T0.[itmsgrpcod],
T0.sell,
T0.cost,
( T0.sell / T0.cost ) AS "GP"
FROM (SELECT T0.[itmsgrpcod],
Sum(T1.[price]) AS "Sell",
Sum(T1.[stockprice]) AS "Cost"
FROM inv1 T1
LEFT OUTER JOIN oitm T0
ON T1.[itemcode] = T0.[itemcode]
GROUP BY T0.[itmsgrpcod]) T0
```
I'm having an odd problem in that when I have the `SELECT` statement as:
```
SELECT T0.[ItmsGrpCod], T0.Sell, T0.Cost
```
It returns 96 rows - the correct amount, with Sell and Cost data filled.
When I add the column:
```
(T0.Sell / T0.Cost) as "GP"
```
It returns only the first row of the query, with GP calculated properly. | It turns out that SAP Business One's Query Generator does not report Division by Zero. When I tried the query in SQL Server Management Studio, it provided a proper Division by Zero error, and I fixed the problem.
Complete query, for those interested:
```
SELECT T0.ItmsGrpCod,
SUM(T1.Price) As "Sell",
ISNULL(SUM(T1.StockPrice), 0) As "Cost",
CASE WHEN SUM(T1.StockPrice) = 0 THEN 100
ELSE (SUM(T1.Price) - SUM(T1.StockPrice)) / SUM(T1.Price) * 100
END As "GP"
FROM INV1 T1 LEFT OUTER JOIN OITM T0 ON T1.ItemCode = T0.ItemCode
GROUP BY T0.ItmsGrpCod
ORDER BY T0.ItmsGrpCod
``` | Like everyone else in this thread, I am puzzled and intrigued. I have added some test data to see what's going on; however, I am unable to replicate your scenario.
```
IF OBJECT_ID(N'TempINV1') > 0
BEGIN
DROP TABLE TempINV1
END
IF OBJECT_ID(N'TempOITM') > 0
BEGIN
DROP TABLE TempOITM
END
CREATE TABLE TempINV1 (ItemCode INT,
Price DECIMAL(8, 2),
StockPrice DECIMAL(8, 2))
CREATE TABLE TempOITM (ItemCode INT,
ItmsGrpCod VARCHAR(10))
INSERT INTO TempINV1
VALUES
(1, '3.21', '2.34'),
(2, '4.32', '3.45'),
(3, '5.43', '4.56')
INSERT INTO TempOITM
VALUES
(1, 'Product1'),
(2, 'Product2'),
(3, 'Product3')
```
**Both your scripts (table names prefixed with Temp) output the same number of rows**
```
SELECT T0.[ItmsGrpCod], T0.Sell, T0.Cost, (T0.Sell / T0.Cost) as "GP"
FROM (
SELECT T0.[ItmsGrpCod], SUM(T1.[Price]) as "Sell", SUM(T1.[StockPrice]) as "Cost"
FROM TempINV1 T1
LEFT OUTER JOIN TempOITM T0
ON T1.[ItemCode] = T0.[ItemCode]
GROUP BY T0.[ItmsGrpCod]) T0
--Or
SELECT T0.[ItmsGrpCod], T0.Sell, T0.Cost
FROM (
SELECT T0.[ItmsGrpCod], SUM(T1.[Price]) as "Sell", SUM(T1.[StockPrice]) as "Cost"
FROM TempINV1 T1
LEFT OUTER JOIN TempOITM T0
ON T1.[ItemCode] = T0.[ItemCode]
GROUP BY T0.[ItmsGrpCod]) T0
```
Outputs for both queries are the same (same number of rows returned).
Please can you run the scripts on your machine and confirm whether that is true? Also, to help us better understand, it would be great if you could provide some sample data.
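For what it's worth, the guard pattern itself is easy to demonstrate; here is a NULLIF variant of the same idea, sketched in SQLite via Python (table and column names are illustrative; note that SQLite yields NULL on division by zero rather than raising an error as SQL Server does):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (sell REAL, cost REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(10.0, 5.0), (10.0, 0.0)])
# NULLIF turns a zero divisor into NULL, so the division yields NULL
# instead of failing on engines that raise a divide-by-zero error.
rows = conn.execute("SELECT sell / NULLIF(cost, 0) FROM t").fetchall()
print(rows)  # [(2.0,), (None,)]
```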
**Compensate for divide by zero:**
```
SELECT T0.ItmsGrpCod,
T0.Sell,
T0.Cost,
T0.Sell / CASE
WHEN ISNULL(T0.Cost, '0') = 0 THEN T0.Sell
ELSE T0.Cost
END AS "GP"
FROM(SELECT T0.ItmsGrpCod,
SUM(T1.Price)AS "Sell",
SUM(T1.StockPrice)AS "Cost"
FROM TempINV1 AS T1
LEFT OUTER JOIN TempOITM AS T0
ON T1.ItemCode = T0.ItemCode
GROUP BY T0.ItmsGrpCod)AS T0
``` | Division column only returns one row | [
"",
"sql",
"sql-server",
"t-sql",
"sapb1",
""
] |
I'm trying to use the patindex() function, where I'm matching for the `-` character.
```
select PATINDEX('-', table1.col1 )
from table1
```
Problem is it always returns 0.
The following also didn't work:
```
PATINDEX('\-', table1.col1 )
from table1
PATINDEX('/-', table1.col1 )
from table1
``` | The `-` character in a `PATINDEX` or `LIKE` pattern string outside of a character class has no special meaning and does not need escaping. The problem isn't that `-` can't be used to match the character literally, but that you are using `PatIndex` instead of `CharIndex` and are providing no wildcard characters. Try this:
```
SELECT CharIndex('-', table1.col1 )
FROM Table1;
```
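As a quick sanity check of the 1-based position a literal search returns (sketched with SQLite's `instr()`, which behaves like `CharIndex` for a plain substring; 0 means not found):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (col1 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?)", [("abc-def",), ("nodash",)])
# instr() returns the 1-based position of the first '-' (0 if absent),
# mirroring CharIndex('-', col1) in T-SQL.
rows = conn.execute("SELECT instr(col1, '-') FROM table1").fetchall()
print(rows)  # [(4,), (0,)]
```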
If you want to match a pattern, it has to use wildcards:
```
SELECT PatIndex('%-%', table1.col1 )
FROM Table1;
```
Even inside a character class, if first or last, the dash also needs no escaping:
```
SELECT PatIndex('%[a-]%', table1.col1 )
FROM Table1;
SELECT PatIndex('%[-a]%', table1.col1 )
FROM Table1;
```
Both of the above will match the characters `a` or `-` anywhere in the column. Only if the pattern has characters on either side of the `-` inside a character class will it be interpreted as a range. | Please make sure to use the '-' as the first or last character within the wildcard and it will work.
You can even use the below function to replace any special characters.
```
CREATE Function [dbo].[ReplaceSpecialCharacters](@Temp VarChar(200))
Returns VarChar(200)
AS
Begin
Declare @KeepValues as varchar(200)
Set @KeepValues = '%[-,~,@,#,$,%,&,*,(,),!,?,.,,,+,\,/,?,`,=,;,:,{,},^,_,|]%'
While PatIndex(@KeepValues, @Temp) > 0
SET @Temp =REPLACE(REPLACE(REPLACE( REPLACE (REPLACE(REPLACE( @Temp, SUBSTRING( @Temp, PATINDEX( @KeepValues, @Temp ), 1 ),'') ,' ',''),Char(10),''),char(13),''),' ',''), ' ','')
Return REPLACE (RTRIM(LTRIM(@Temp)),' ','')
End
```
I am using it in my project and it works fine | SQL need to match the '-' char in patindex() function | [
"",
"sql",
"sql-server",
""
] |
Please advise me on the following question:
I have two tables in an Oracle db, one that contains full numbers and the other that contains parts of them.
Table 1:
```
12323543451123
66542123345345
16654232423423
12534456353451
64565463345231
34534512312312
43534534534533
```
Table 2:
```
1232
6654212
166
1253445635
6456546
34534
435345
```
Could you please suggest a query that joins these two tables and shows the relation between 6456546 and 64565463345231, for example. The main thing is that Table 2 contains a lot more data than Table 1, and I need to find all the substrings from Table 2 that are not present in Table 1.
Thanks in advance! | Try this:
```
with t as (
select 123 id from dual union all
select 567 id from dual union all
select 891 id from dual
), t2 as (
select 1112323 id from dual union all
select 32567321 id from dual union all
select 44891555 id from dual
)
select t.id, t2.id
from t, t2
where t2.id||'' like '%'||t.id||'%'
``` | You first need to say whether the numbers in Table 1 and Table 2 are repeated; if not, then I think this query would help you:
```
SELECT *
FROM Table_1
JOIN Table_2 ON Table_1.ID = Table_2.ID
WHERE Table_2.DATA LIKE Table_1.DATA
``` | Matching a substring to a string | [
"",
"sql",
"oracle-sqldeveloper",
""
] |
I'm creating a query that currently gets data based on one day. I would like it to get data for all dates in a date range, so I'm just after ideas and a basic structure for how best to go about this.
Example query:
```
Select date
from table
where date between 01/01/2014 and 05/01/2014
```
Ideally I'd like the results returned as follows:
> 01/01/2014
>
> 02/01/2014
>
> 03/01/2014
>
> 04/01/2014
>
> 05/01/2014 | You mentioned that you wanted a stored procedure for this, so this should work for you:
```
Create Procedure spGenerateDateRange(@FromDate Date, @ToDate Date)
As Begin
;With Date (Date) As
(
Select @FromDate Union All
Select DateAdd(Day, 1, Date)
From Date
Where Date < @ToDate
)
Select Date
From Date
Option (MaxRecursion 0)
End
``` | Here is how you could do this with a tally table instead of a triangular join.
```
create Procedure GenerateDateRange(@FromDate Date, @ToDate Date)
As
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
E6(N) AS (SELECT 1 FROM E4 a, E2 b),
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E6
)
select DATEADD(day, t.N - 1, @FromDate) as MyDate
from cteTally t
where N <= DATEDIFF(day, @FromDate, @ToDate) + 1
go
```
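Either version yields one row per day, endpoints inclusive. The recursive idea is quick to sanity-check outside SQL Server (sketched here in SQLite via Python; SQLite spells it `WITH RECURSIVE` and uses its own `date()` function, so this is the concept rather than the T-SQL above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per day from the start date to the end date, inclusive.
days = [r[0] for r in conn.execute("""
    WITH RECURSIVE dates(d) AS (
        SELECT '2014-01-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM dates WHERE d < '2014-01-05'
    )
    SELECT d FROM dates
""")]
print(days)
```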
Now let's compare the performance of this versus the triangular join method that was chosen as the answer.
```
declare @FromDate date = '1000-01-01', @ToDate date = '3000-01-01'
exec spGenerateDateRange @FromDate, @ToDate
exec GenerateDateRange @FromDate, @ToDate
``` | Get all dates in date range | [
"",
"sql",
"sql-server",
""
] |
I'm inserting new records into a Person table, and if there's already a record with the same SSN, I want to backup this old record to another table (let's call it PersonsBackup) and update the row with my new values. There is an identity column in Person table that serves as my primary key, which has to be the same.
Source table structure:
```
Name | Addr | SSN
```
Person table structure:
```
PrimaryKeyID | Name | Addr | SSN
```
PersonBackup table structure:
```
BackupKeyID | Name | Addr | SSN | OriginalPrimaryKeyID
```
where OriginalPrimaryKeyID = PrimaryKeyID for the record that was backed up. How can this be done? I was thinking of using a cursor to check if the SSN matches, then insert each record accordingly, but I've been told that using cursors like this is very inefficient. Thanks for your help! | You can do it like this: back up the matching rows first, then combine the insert/update using MERGE
```
INSERT INTO PersonBackup
SELECT P.Name, P.Addr, P.SSN, P.PrimaryKeyID
FROM Person P
INNER JOIN source s ON P.SSN = s.SSN
MERGE Person AS target
USING (SELECT Name, Addr, SSN FROM SOURCE) AS source (NAME, Addr, SSN)
ON (target.SSN = source.SSN)
WHEN MATCHED THEN
UPDATE SET name = source.name, Addr = source.Addr
WHEN NOT MATCHED THEN
INSERT(Name, Addr, SSN)
VALUES(source.name, source.addr, source.SSN)
``` | Assuming that `BackupKeyID` is identity in the `PersonBackup` table, you may try `update` statement with the `output` clause followed by `insert` of the records not existing in the target table:
```
update p
set p.Name = s.Name, p.Addr = s.Addr
output deleted.Name, deleted.Addr,
deleted.SSN, deleted.PrimaryKeyID into PersonBackup
from Source s
join Person p on p.SSN = s.SSN;
insert into Person (Name, Addr, SSN)
select s.Name, s.Addr, s.SSN
from Source s
where not exists (select 1 from Person where SSN = s.SSN);
```
or using `insert into ... from (merge ... output)` construct in a single statement:
```
insert into PersonBackup
select Name, Addr, SSN, PrimaryKeyID
from
(
merge Person p
using (select Name, Addr, SSN from Source) s
on p.SSN = s.SSN
when matched then
update set p.Name = s.Name, p.Addr = s.Addr
when not matched then
insert (Name, Addr, SSN) values (s.Name, s.Addr, s.SSN)
output $action, deleted.Name, deleted.Addr, deleted.SSN, deleted.PrimaryKeyID)
as U(Action, Name, Addr, SSN, PrimaryKeyID)
where U.Action = 'UPDATE';
``` | SQL Server: Insert if doesn't exist, else update and insert in another table | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am attempting to `convert` the `DATETIME` Column titled `CREAT_DTTM` to a simple "1/1/2014" format.
I have looked at the `CAST`, `CONVERT` and `FORMAT` functions but I just can't get it to work.
Any guidance would be greatly appreciated! I am running SQL Server 2012
Some sample data
```
CREAT_DTTM
------------------------
2014-01-01 00:33:58.000
2014-01-01 00:33:58.000
2014-01-01 07:40:01.000
2014-01-01 09:50:27.000
2014-01-01 10:40:04.000
2014-01-01 10:40:04.000
```
By convert I mean: this data is being pulled from another table by a stored proc our developer created. It is sales data that shows when an order has been entered into the system. I created a PowerPivot data slicer in Excel that is linked to this table, but they do not like the format the date is displayed in. So I was attempting to convert it from the aforementioned format to one more acceptable to the stakeholders. The only thing is that I do not have much experience writing queries. | Try the following.
```
select convert(varchar(10), creat_dttm, 101)
from yourTable
``` | You can change the format of the dates in a PowerPivot table through the PowerPivot Window. The advantage here is that you do not need to do any modification of your stored procedure and your datatype is still a date when it comes into your pivot table.
Open your `PowerPivot Window` again

Select your data column, then select Format from the Formatting section on the Home tab.
 | SQL Server 2012 Date Change | [
"",
"sql",
"sql-server",
"excel",
"powerpivot",
""
] |
I have two tables joined by a field called `user_id`. The first table, called `sessions`, can have multiple rows for the same day. I'm trying to find a way of selecting the total of those sessions without repeating the days (sort of).
Example:
```
Table sessions
ID | user_id | datestart
1 1 2014-08-05
2 1 2014-08-05
3 2 2014-08-05
```
As you can see there are two lines that are repeated (the first and second). If I query `SELECT COUNT(sess.id) AS total` this will retrieve `3`, but I want it to retrieve `2` because the first two lines have the same `user_id` so it must count as one.
Using the clause `Group By` will retrieve two different lines: `2` and `1`, which is also incorrect.
You can view a full example working at [SQLFiddle](http://sqlfiddle.com/#!2/89471/2).
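For reference, both observations above reproduce on the sample rows (a minimal SQLite sketch via Python; only the `sessions` table is modeled, without the `users` join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INT, user_id INT, datestart TEXT)")
conn.executemany("INSERT INTO sessions VALUES (?, ?, ?)", [
    (1, 1, "2014-08-05"),
    (2, 1, "2014-08-05"),
    (3, 2, "2014-08-05"),
])
# A plain COUNT counts every session row...
total, = conn.execute("SELECT COUNT(id) FROM sessions").fetchone()
# ...while GROUP BY returns one count per user instead of a single total.
per_user = [r[0] for r in conn.execute(
    "SELECT COUNT(id) FROM sessions GROUP BY user_id ORDER BY user_id")]
print(total, per_user)  # 3 [2, 1]
```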
Is there any way of solving this with the query alone, or do I need to do it in application code?
```
SELECT COUNT(distinct sess.user_id) AS total
FROM sessions AS sess
INNER JOIN users AS user ON user.id = sess.user_id
WHERE user.equipment_id = 1 AND sess.datestart = CURDATE()
```
**[Sql Fiddle](http://sqlfiddle.com/#!2/89471/11/0)** | I think you are looking for `count(distinct)`:
```
SELECT COUNT(distinct user_id) AS total
FROM sessions sess INNER JOIN
users user
ON user.id = sess.user_id
WHERE user.equipment_id = 1 AND
sess.datestart = CURDATE();
``` | SQL Query to count sessions without repeating lines | [
"",
"mysql",
"sql",
""
] |
Hi I have created a Table in which I am storing first name, last name and email ID of some persons.
The table is as follows:
```
create table frnds
(
id int Primary Key,
firstName varchar (20) not null,
lastName varchar (20) not null,
emailID varchar (30)not null
)
```
I have to write a Stored Procedure to return the "first name" values from the table wherever there is a match with the input "last name" value.
I have some of the names with common last names. So I want to have all the first names as the output of the Stored Procedure.
I am using the following code to do that but have been able to get only one matching first name as output.
```
Create Procedure GetFirstName
(
@lastName varchar(20),
@firstName varchar(20) OUT
)
AS
Begin
Select @firstName = firstName from frnds where lastName = @lastName
End
```
Also this is how I am displaying the result:
```
declare @LastName varchar (20)
Exec GetFirstName Daniels, @LastName output
select @LastName
```
I know that this is because I have the output parameter such that it can hold only one matching first name. Since I am new to SQL, I am not able to figure out how to achieve this. Please help with any suggestion or some link which can guide me in the right direction.
Thanks!!! | You can use a temporary table to store the varchars.
First create your procedure like this:
```
Create table #temp
(
firstName varchar(20) not null
)
go
Create Procedure GetFirstName
(
@lastName varchar(20)
-- , @firstName varchar(20) OUT with my solution you don't need it
)
AS
Begin
insert into #temp
Select firstName from frnds where lastName = @lastName
End
go
drop table #temp
```
Then you can use it as below
```
Create table #temp
(
firstName varchar(20) not null
)
exec GetFirstName 'Daniels'
select * from #temp
``` | Try this
```
Create Procedure GetFirstName
(
@lastName varchar(20),
@firstName varchar(20) OUT
)
AS
Begin
Select firstName from frnds where lastName = @lastName
End
Create table #lastname(lastname varchar (20))
Insert into #lastname
Exec GetFirstName 'Daniels'
select * from #lastname
``` | How to return multiple varchar from a stored procedure where condition is met | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have an app that displays ads from campaigns. I limit display of each campaign's set of ads to a specific date range per campaign. My "campaigns" table looks like this:
```
campaigns
id : integer
start_date : date
end_date : date
```
I now need to be able to optionally limit the display of campaign ads to a specific time range each day. So now my table looks like
```
campaigns
id : integer
start_date : date
end_date : date
start_time : time, default: null
end_time : time, default: null
```
And so my [MySQL] query looks like this:
```
SELECT
ads.*
FROM
ads
INNER JOIN campaigns ON campaigns.id = ads.campaign_id
WHERE
campaigns.start_date <= "2014-08-05" AND
campaigns.end_date >= "2014-08-05" AND
campaigns.start_time <= "13:30"
campaigns.end_time >= "13:30";
```
(Dates and times are actually injected using the current date/time.)
This works fine. However, because I store start\_time and end\_time in UTC time, sometimes end\_time is earlier than start\_time; for example, in the database:
```
start_time = 13:00 (08:00 CDT)
end_time = 01:00 (20:00 CDT)
```
How can I adjust the query (or even use code) to account for this? | The problem is harder than it looks because either the campaigns *or* the comparison time span can go over the date boundary. Then, there is an additional complication. If the campaign says that it is running until 1:00 a.m., is the end date on the current date or the next date? In other words, for your example, 1:00 a.m. on 2014-08-06 should really be counted as 2014-08-05.
My recommendation, then, is to switch to local time. If your campaigns don't span midnight, then this should solve your problem.
If you only care about the campaigns themselves spanning midnight, you can do something like:
```
WHERE campaigns.start_date <= '2014-08-05' AND
campaigns.end_date >= '2014-08-05' AND
((campaigns.start_time <= campaigns.end_time and
campaigns.start_time >= '13:00' and
campaigns.end_time <= '18:00'
) or
(campaigns.start_time >= campaigns.end_time and
(campaigns.start_time >= '13:00' or
campaigns.end_time <= '18:00'
)
)
```
Note that when the end\_time is greater than the start time, then you want times greater than the start time *or* less than the end time. In the normal case, you want times greater than the start time *and* less than the end time.
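The wrap-around logic is easy to get backwards, so here is the same boolean condition sketched in Python (times as 'HH:MM' strings, which compare correctly as text; the function and its names are illustrative, not part of the query):

```python
def in_window(t, start, end):
    # Normal window (e.g. 08:00-16:00): t must lie between the bounds.
    if start <= end:
        return start <= t <= end
    # Wrapped window crossing midnight (e.g. 16:00-08:00):
    # t is after the start OR before the end.
    return t >= start or t <= end

print(in_window("13:30", "13:00", "01:00"))  # True  (wrapped window)
print(in_window("02:00", "13:00", "01:00"))  # False
print(in_window("13:30", "08:00", "16:00"))  # True  (normal window)
```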
You can do something similar if you only care about the comparison time period. Combining the two seems quite complicated. | You can just check both ways.
```
SELECT
ads.*
FROM
ads
INNER JOIN campaigns ON campaigns.id = ads.campaign_id
WHERE "2014-08-05" BETWEEN campaigns.start_date AND campaigns.end_date
AND ("13:30" BETWEEN campaigns.start_time AND campaigns.end_time
OR "13:30" BETWEEN campaigns.end_time AND campaigns.start_time)
```
If your range is `08:00` to `16:00` the first part will find your results and the second one none because the range is wrong.
If your range is `16:00` to `08:00` the first part won't find any results and the second one will return them. | How can I limit query results to a certain time range each day? | [
"",
"mysql",
"sql",
"utc",
""
] |
I... don't quite know if I have the right idea about Access here.
I wrote the following, to grab some data that existed in two places:-
```
Select TableOne.*
from TableOne inner join TableTwo
on TableOne.[LINK] = TableTwo.[LINK]
```
Now, my interpretation of this is:
* Find the table "TableOne"
* Match the `LINK` field to the corresponding field in the table "TableTwo"
* Show only records from TableOne *that have a matching record in TableTwo*
Just to make sure, I ran the query with some sample tables in SSMS, and it worked as expected.
So why, when I deleted the rows from within that query, did it delete the rows from TableTwo, and *NOT* from TableOne as expected? I've just lost ~3 days of work.
Edit: For clarity, I manually selected the rows in the query window and deleted them. I did not use a delete query - I've been stung by that a couple of times lately. | Since you have deleted the records manually, your query has to be updateable. This means that your query couldn't have been solely a cartesian join or a join without referential integrity, since these queries are non-updateable in ms access.
When I recreate your query based on two fields without indexes or primary keys, I am not even able to manually delete records. This leads me to believe a relationship was unknowingly established which deleted the records in table two. Perhaps you should take a look at the design view of your queries and the Relationships window, since the query itself should indeed select only records from table one. | Not sure why it got deleted, but I suggest rewriting your query:
```
delete TableOne
where LINK in (select LINK from TableTwo)
``` | That was not the right table: Access wiped the wrong data | [
"",
"sql",
"ms-access",
"ms-access-2010",
"inner-join",
""
] |
Is it possible (using only T-SQL no C# code) to write a stored procedure to execute a series of other stored procedures without passing them any parameters?
What I mean is that, for example, when I want to update a row in a table and that table has a lot of columns which are all required, I want to run the first stored procedure to check if the `ID` exists or not; if yes, then I want to call the update stored procedure, pass the `ID`, but (using the window that SQL Server manager shows after executing each stored procedure) get the rest of the values from the user.
When I'm using the `EXEC` command, I need to pass all the parameters, but is there any other way to call the stored procedure without passing those parameters? (easy to do in C# or VB, I mean just using SQL syntax) | Right after reading your comment, I can now understand what you are trying to do. You want to make a call to the procedure and then ask the end user to pass values for the parameters.
This is a very very badddddddddddddddddddd approach, especially since you have mentioned you will be making changes to the database with this SP.
You should get all the values from your end users before you make a call to your database (execute the procedure). Only then make a call to the database: open a transaction, commit or roll back as soon as possible, and get out of there, as it will be holding locks on your resources.
Imagine you make a call to the database (execute the SP), the SP goes ahead and opens a transaction and now waits for the end user to pass values, and your end user decides to go away for a cig. This will leave your resources locked, and you will have to go in and kill the process yourself in order to let other users go and use the database/rows.
## Solution
At the application level (C#, VB), get all the values from end users, and only when you have all the required information pass these values to the SP, execute it, and get out of there ASAP. | I think you are asking "can you prompt for user input in a sql script?". No, not really.
You could actually do it with seriously hack-laden calls to the Windows API. And it would almost certainly have serious security problems.
But just **don't do this**. Write a program in C#, VB, Access, Powerscript, Python or whatever makes you happy. Use a tool appropriate to the task.
-- ADDED
Just so you know how ugly this would be. Imagine using the Flash component as an ActiveX object and using Flash to collect input from the user -- now you are talking about the kind of hacking it would be. Writing CLR procs, etc. would be just as big of a hack.
You should be cringing right now. But it gets worse: if the T-SQL is running on the SQL server, it would likely prompt or crash on the server console instead of running on your workstation. You should definitely be cringing by now.
If you are coming from Oracle's ACCEPT, the equivalent is just not available in T-SQL -- nor should it be, and may it never be. | Calling a series of stored procedures sequentially SQL | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
Basically I have a table named hiscores where I want to search for the nickname of one user and get his current rank. A rank column doesn't exist, because the ranks are derived by ordering on lvl DESC and then Experience. So I want a SQL query where I search the name
"player1" and it returns rank 2, or input healdeal and get rank 1.
```
Table = hiscores
id - nickname- lvl - experience
1 - healdeal - 99 - 1000
2 - philip - 98 - 595
3 - Player1 - 98 - 620
4 - Mindblow - 52 - 35
```
I have tried the following
```
SELECT (COUNT(*) + 1) AS rank FROM hiscores WHERE lvl >(SELECT lvl FROM hiscores WHERE nickname="player1")
``` | Assuming this is MySQL, this will work:
```
select @rownum:=@rownum+1 Rank,
h.*
from hiscores h,
(SELECT @rownum:=0) r
order by lvl desc, experience desc
```
[SQLFiddle](http://sqlfiddle.com/#!2/ace02/3)
If this is MS SQL Server 2005 onwards, you can directly use window functions, like so:
```
select *, rank() over (order by lvl desc, experience desc) Rank
from hiscores
```
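That ranking is easy to sanity-check, since modern SQLite (3.25+) also supports `rank()`; a quick Python sketch using the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hiscores (id INT, nickname TEXT, lvl INT, experience INT)")
conn.executemany("INSERT INTO hiscores VALUES (?, ?, ?, ?)", [
    (1, "healdeal", 99, 1000),
    (2, "philip",   98, 595),
    (3, "Player1",  98, 620),
    (4, "Mindblow", 52, 35),
])
# rank() orders by level first, then experience breaks the tie at lvl 98.
ranks = dict(conn.execute("""
    SELECT nickname, rank() OVER (ORDER BY lvl DESC, experience DESC)
    FROM hiscores
""").fetchall())
print(ranks)  # {'healdeal': 1, 'Player1': 2, 'philip': 3, 'Mindblow': 4}
```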
In either case, if you want to filter by the nickname, you can put the above expression into a subquery and filter that by the nickname i.e.
```
select * from
(<ranking expression from above>) rankedresults
where nickname = <input>
``` | I see. You are trying to calculate the rank. I think this might do it:
```
select count(*) as rank
from hiscores hs cross join
(select hs.*
from hiscores
where nickname = 'player1'
) hs1
where hs.lvl > hs1.lvl or
hs.lvl = hs1.lvl and hs.experience >= hs1.experience;
```
Actually, if you have ties on both `experience` and `lvl`, then this might be a better rank:
```
select 1 + count(*) as rank
from hiscores hs cross join
(select hs.*
from hiscores
where nickname = 'player1'
) hs1
where hs.lvl > hs1.lvl or
hs.lvl = hs1.lvl and hs.experience > hs1.experience;
``` | Mysql calculate rank based on level DESC and experience DESC | [
"",
"mysql",
"sql",
""
] |
I am looking for a method to detect differences between two versions of the same table.
Let's say I create copies of a live table at two different days:
Day 1:
```
CREATE TABLE table_1 AS SELECT * FROM table
```
Day 2:
```
CREATE TABLE table_2 AS SELECT * FROM table
```
The method should identify all rows added, deleted or updated between day 1 and day 2;
if possible the method should not use an RDBMS-specific feature;
Note: Exporting the content of the table to text files and comparing text files is fine, but I would like a SQL specific method.
Example:
```
create table table_1
(
col1 integer,
col2 char(10)
);
create table table_2
(
col1 integer,
col2 char(10)
);
insert into table_1 values ( 1, 'One' );
insert into table_1 values ( 2, 'Two' );
insert into table_1 values ( 3, 'Three' );
insert into table_2 values ( 1, 'One' );
insert into table_2 values ( 2, 'TWO' );
insert into table_2 values ( 4, 'Four' );
```
Differences between table\_1 and table\_2:
* Added: Row ( 4, 'Four' )
* Deleted: Row ( 3, 'Three' )
* Updated: Row ( 2, 'Two' ) updated to ( 2, 'TWO' ) | I think I found the answer - one can use this SQL statement to build a list of differences:
Note: "col1, col2" list must include all columns in the table
```
SELECT
MIN(table_name) as table_name, col1, col2
FROM
(
SELECT
'Table_1' as table_name, col1, col2
FROM Table_1 A
UNION ALL
SELECT
'Table_2' as table_name, col1, col2
FROM Table_2 B
)
tmp
GROUP BY col1, col2
HAVING COUNT(*) = 1
+------------+------+------------+
| table_name | col1 | col2 |
+------------+------+------------+
| Table_2 | 2 | TWO |
| Table_1 | 2 | Two |
| Table_1 | 3 | Three |
| Table_2 | 4 | Four |
+------------+------+------------+
```
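The statement runs essentially unchanged on the sample schema (a quick SQLite check via Python; note that SQLite's default BINARY collation keeps 'Two' and 'TWO' distinct, which is exactly what this comparison needs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_1 (col1 INTEGER, col2 TEXT);
    CREATE TABLE table_2 (col1 INTEGER, col2 TEXT);
""")
conn.executemany("INSERT INTO table_1 VALUES (?, ?)",
                 [(1, "One"), (2, "Two"), (3, "Three")])
conn.executemany("INSERT INTO table_2 VALUES (?, ?)",
                 [(1, "One"), (2, "TWO"), (4, "Four")])
# Rows present in both tables occur twice and are filtered out by HAVING;
# what remains is tagged with the table it came from.
diff = conn.execute("""
    SELECT MIN(table_name) AS table_name, col1, col2
    FROM (SELECT 'Table_1' AS table_name, col1, col2 FROM table_1
          UNION ALL
          SELECT 'Table_2', col1, col2 FROM table_2) tmp
    GROUP BY col1, col2
    HAVING COUNT(*) = 1
""").fetchall()
print(sorted(diff))
```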
In the example quoted in the question,
* Row ( 4, 'Four' ) present in table\_2 ; verdict row "**Added**"
* Row ( 3, 'Three' ) present in table\_1; verdict row "**Deleted**"
* Row ( 2, 'Two' ) present in table\_1 only; Row ( 2, 'TWO' ) present in table\_2 only; if col1 is primary key then verdict "**Updated**" | If you want differences in both directions, here is a `union all` approach. I am assuming you have an `id`, because you mention "updates" and you need a way to identify the same row:
```
select t.id,
(case when sum(case when which = 'table2' then 1 else 0 end) = 0
then 'InTable1-only'
when sum(case when which = 'table1' then 1 else 0 end) = 0
then 'InTable2-only'
when max(col1) <> min(col1) or max(col2) <> min(col2) or . . .
then 'Different'
else 'Same'
end)
from ((select 'table1' as which, t1.*
from table_1 t1
) union all
(select 'table2', t2.*
from table_2 t2
)
) t;
```
This is standard SQL. You can filter out the "same" records if you want to.
This assumes that all the columns have non-NULL values and that rows with a given id appear at most once in each table. | Detect differences between two versions of the same table | [
"",
"sql",
"rows",
"difference",
""
] |
I have a Postgres function:
```
CREATE OR REPLACE FUNCTION get_stats(
_start_date timestamp with time zone,
_stop_date timestamp with time zone,
id_clients integer[],
OUT date timestamp with time zone,
OUT profit,
OUT cost
)
RETURNS SETOF record
LANGUAGE plpgsql
AS $$
DECLARE
query varchar := '';
BEGIN
... -- lot of code
IF id_clients IS NOT NULL THEN
query := query||' AND id = ANY ('||quote_nullable(id_clients)||')';
END IF;
... -- other code
END;
$$;
```
So if I run a query something like this:
```
SELECT * FROM get_stats('2014-07-01 00:00:00Etc/GMT-3'
, '2014-08-06 23:59:59Etc/GMT-3', '{}');
```
Generated query has this condition:
```
"... AND id = ANY('{}')..."
```
But if the array is empty, this condition should not appear in the query.
How can I check if the array of clients is not empty?
I've also tried two variants:
```
IF ARRAY_UPPER(id_clients) IS NOT NULL THEN
query := query||' AND id = ANY ('||quote_nullable(id_clients)||')';
END IF;
```
And:
```
IF ARRAY_LENGTH(id_clients) THEN
query := query||' AND id = ANY ('||quote_nullable(id_clients)||')';
END IF;
```
In both cases I got this error: `ARRAY_UPPER(ARRAY_LENGTH) doesn't exists`; | [`array_length()`](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE) requires *two* parameters, the second being the dimension of the array:
```
array_length(id_clients, 1) > 0
```
So:
```
IF array_length(id_clients, 1) > 0 THEN
    query := query || format(' AND id = ANY(%L)', id_clients);
END IF;
```
This excludes both empty array *and* NULL.
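As an aside, when the filter is assembled in application code rather than in PL/pgSQL, the same empty-vs-NULL guard is a single truthiness check. A hypothetical Python sketch (the helper and column names are illustrative, not part of the question's function):

```python
# Hypothetical helper: append the client filter only when the list is usable.
# An empty list and None are both skipped, mirroring array_length(ids, 1) > 0.
def build_query(base, id_clients):
    if id_clients:  # falsy for both None and []
        placeholders = ", ".join("?" for _ in id_clients)
        return base + f" AND id IN ({placeholders})", list(id_clients)
    return base, []

q1, p1 = build_query("SELECT * FROM stats WHERE 1=1", [])
q2, p2 = build_query("SELECT * FROM stats WHERE 1=1", [3, 5])
print(q1)  # SELECT * FROM stats WHERE 1=1
print(q2)  # SELECT * FROM stats WHERE 1=1 AND id IN (?, ?)
```

Passing the values as bound parameters also sidesteps the quoting concerns of string-concatenated SQL, which is the same motivation as `EXECUTE ... USING`.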
Use [`cardinality()`](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE) in Postgres 9.4 or later. [See added answer by @bronzenose.](https://stackoverflow.com/a/36924295/939860)
---
But if you're concatenating a query to run with `EXECUTE`, it would be smarter to pass values with a `USING` clause. Examples:
* [Multirow subselect as parameter to `execute using`](https://stackoverflow.com/questions/9809339/multirow-subselect-as-parameter-to-execute-using/9810832#9810832)
* [How to use EXECUTE FORMAT ... USING in Postgres function](https://stackoverflow.com/questions/14065271/how-to-use-execute-format-using-in-postgres-function/14066715#14066715)
---
To explicitly check whether an **array is empty** like your title says (but that's *not* what you need here) just compare it to an empty array:
```
id_clients = '{}'
```
That's all. You get:
`true` .. array is empty
`null` .. array is NULL
`false` .. any other case (array has elements - even if just `null` elements) | If for some reason you don't want to supply the dimension of the array, [`cardinality`](http://www.postgresql.org/docs/current/interactive/functions-array.html#ARRAY-FUNCTIONS-TABLE) will return 0 for an empty array:
From the docs:
> cardinality(anyarray) returns the total number of elements in the
> array, or 0 if the array is empty | How to check if an array is empty in Postgres | [
"",
"sql",
"arrays",
"postgresql",
"stored-procedures",
"null",
""
] |
I have an Oracle SELECT query like this:
```
Select * From Customer_Rooms CuRo
Where CuRo.Date_Enter Between 'TODAY 12:00:00 PM' And 'TODAY 11:59:59 PM'
```
I mean, I want to select all where the field "date\_enter" is today. I already tried things like `Trunc(Sysdate) || ' 12:00:00'` in the between, but it didn't work.
**Advice:** I can't use TO\_CHAR because it gets too slow. | Assuming `date_enter` is a `DATE` field:
```
Select * From Customer_Rooms CuRo
Where CuRo.Date_Enter >= trunc(sysdate)
And CuRo.Date_Enter < trunc(sysdate) + 1;
```
The `trunc()` function strips out the time portion by default, so `trunc(sysdate)` gives you midnight this morning.
If you particularly want to stick with `between`, and you have a `DATE` not a `TIMESTAMP`, you could do:
```
Select * From Customer_Rooms CuRo
Where CuRo.Date_Enter between trunc(sysdate)
And trunc(sysdate) + interval '1' day - interval '1' second;
```
`between` is inclusive, so if you don't take a second off then you'd potentially pick up records from exactly midnight tonight; so this generates the 23:59:59 time you were looking for in your original query. But using `>=` and `<` is a bit clearer and more explicit, in my opinion anyway.
If you're sure you can't have dates later than today anyway, the upper bound isn't really adding anything, and you'd get the same result with just:
```
Select * From Customer_Rooms CuRo
Where CuRo.Date_Enter >= trunc(sysdate);
```
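The half-open-range idea is portable and easy to verify. Here is a small sketch with Python's built-in sqlite3, where `date(:ref)` plays the role of `trunc(sysdate)`; the reference day is fixed so the example is reproducible (live code would use the current date), and the table and data are illustrative:

```python
import sqlite3

# ISO-8601 text timestamps compare correctly as strings, so the half-open
# range [date, date + 1 day) works the same way as in Oracle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_rooms (id INTEGER, date_enter TEXT)")
conn.executemany("INSERT INTO customer_rooms VALUES (?, ?)", [
    (1, "2014-06-09 12:45:00"),   # yesterday: excluded
    (2, "2014-06-10 20:28:00"),   # the reference day: included
    (3, "2014-06-11 00:00:00"),   # exactly midnight tomorrow: excluded by <
])
rows = conn.execute("""
    SELECT id FROM customer_rooms
    WHERE date_enter >= date(:ref)
      AND date_enter <  date(:ref, '+1 day')
""", {"ref": "2014-06-10"}).fetchall()
print(rows)  # [(2,)]
```

Row 3 shows why the upper bound is exclusive: a record stamped exactly at midnight tomorrow is not picked up.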
You don't want to use `trunc` or `to_char` on the `date_enter` column though; using any function prevents an index on that column being used, which is why your query with `to_char` was too slow. | In my case, I was searching through some log files and wanted to find only the ones that happened TODAY.
For me, it didn't matter what time it happened, just had to be today, so:
```
/*...*/
where
trunc(_DATETIMEFROMSISTEM_) = trunc(sysdate)
```
It works perfectly for this scenario. | Oracle Select Where Date Between Today | [
"",
"sql",
"oracle",
"select",
""
] |
Let's say I wanted to make a database that could be used to keep track of bank accounts and transactions for a user. A database that can be used in a Checkbook application.
If i have a user table, with the following properties:
1. user\_id
2. email
3. password
And then I create an account table, which can be linked to a certain user:
1. account\_id
2. account\_description
3. account\_balance
4. user\_id
And to go the next step, I create a transaction table:
1. transaction\_id
2. transaction\_description
3. is\_withdrawal
4. account\_id // The account to which this transaction belongs
5. user\_id // The user to which this transaction belongs
Is having the user\_id in the transaction table a good option? It would make the query cleaner if I wanted to get all the transactions for each user, such as:
```
SELECT * FROM transactions
JOIN users ON users.user_id = transactions.user_id
```
Or, I could just trace back to the users table from the account table
```
SELECT * FROM transactions
JOIN accounts ON accounts.account_id = transactions.account_id
JOIN users ON users.user_id = accounts.user_id
```
I know the first query is much cleaner, but is that the best way to go?
My concern is that by having this extra (redundant) column in the transaction table, I'm wasting space, when I can achieve the same result without said column. | Let's look at it from a different angle. From where will the query or series of queries start? If you have customer info, you can get account info and then transaction info or just transactions-per-customer. You need all three tables for meaningful information. If you have account info, you can get transaction info and a pointer to customer. But to get any customer info, you need to go to the customer table so you still need all three tables. If you have transaction info, you could get account info but that is meaningless without customer info or you could get customer info without account info but transactions-per-customer is useless noise without account data.
Either way you slice it, the information you need for any conceivable use is split up between three tables and you will have to access all three to get meaningful information instead of just a data dump.
Having the customer FK in the transaction table may provide you with a way to make a "clean" query, but the result of that query is of doubtful usefulness. So you've really gained nothing. I've worked writing Anti-Money Laundering (AML) scanners for an international credit card company, so I'm not being hypothetical. You're always going to need all three tables anyway.
Btw, the fact that there are FKs in the first place tells me the question concerns an OLTP environment. An OLAP environment (data warehouse) doesn't need FKs or any other data integrity checks as warehouse data is static. The data originates from an OLTP environment where the data integrity checks have already been made. So there you can denormalize to your heart's content. So let's not be giving answers applicable to an OLAP environment to a question concerning an OLTP environment. | You should not use two foreign keys in the same table. This is not a good database design.
A user makes transactions through an account. That is how it is logically done; therefore, this is how the DB should be designed.
Using joins is how this should be done. You should not use the `user_id` key as it is already in the account table.
The wasted space is unnecessary and is a bad database design. | Is it good or bad practice to have multiple foreign keys in a single table, when the other tables can be connected using joins? | [
"",
"mysql",
"sql",
"sql-server",
"database",
"join",
""
] |
I need an SQL query that checks whether a person is active for two consecutive weeks in the year.
For example,
```
Table1:
Name | Activity | Date
Name1|Basketball| 08-08-2014
Name2|Volleyball| 08-09-2014
Name3|None | 08-10-2014
Name1|Tennis | 08-14-2014
```
I want to retrieve Name1 because that person has been active for two consecutive weeks in the year.
This is my query so far:
```
SELECT DISTINCT Name
FROM Table1
Where YEAR(Date) = 2014 AND
Activity NOT 'None' AND
```
This is where I would need the logic that checked for an activity in two consecutive weeks. A week can be described as 7 to 14 days later. I am working with MYSQL. | You can do the logic using an `exists` subquery:
```
select t.*
from table1 t
where exists (select 1
from table1 t2
where t2.name = t.name and
t2.date between t.date + 7 and t.date + 14
);
``` | I have avoided using YEAR(`Date`) in the where clause deliberately, and recommend you do too. Using functions on multiple rows of data to suit a single criteria (2014) never makes sense to me, plus it destroys the effectiveness of indexes (see "sargable" at wikipedia). Way easier to just define a filter by a date range IMHO.
I've used a correlated subquery to derive `nxt_date` which might not scale very well, but overall the performance will depend on your indexes most probably.
```
select distinct
name
from (
select
t.name
, t.Activity
, t.`Date`
, (
select min(table1.`Date`) from table1
where t.name = table1.name
and table1.Activity <> 'None'
and table1.`Date` > t.`Date`
) as nxt_date
from table1 as t
where ( t.`Date` >= '2014-01-01' and t.`Date` < '2015-01-01' )
and t.Activity <> 'None'
) as sq
where datediff(sq.nxt_date, sq.`Date`) <= 14
;
```
see: <http://sqlfiddle.com/#!9/cbbb3/9> | SQL query that checks by week | [
"",
"mysql",
"sql",
""
] |
Which way do you reset magento orders?
For example, now the current order - # 100000051
I need:
```
1) completely remove all orders from the Database
2) new order will have to start with a number # 100000001 (this is important)
```
Please let us know your trusted and tested way! | Use a third-party extension to delete all orders from the Magento system:
<http://www.magentocommerce.com/magento-connect/orders-eraser.html>
Or
<http://www.magentocommerce.com/magento-connect/delete-orders-6.html>
Then truncate sales\_flat\_order and sales\_flat\_order\_grid, or reset the increment field value using a MySQL query.
**Alternative Solution:**
It is also a good idea to truncate the eav\_entity\_store table and the other sales tables:
```
SET FOREIGN_KEY_CHECKS=0;
##############################
# SALES RELATED TABLES
##############################
TRUNCATE `sales_flat_creditmemo`;
TRUNCATE `sales_flat_creditmemo_comment`;
TRUNCATE `sales_flat_creditmemo_grid`;
TRUNCATE `sales_flat_creditmemo_item`;
TRUNCATE `sales_flat_invoice`;
TRUNCATE `sales_flat_invoice_comment`;
TRUNCATE `sales_flat_invoice_grid`;
TRUNCATE `sales_flat_invoice_item`;
TRUNCATE `sales_flat_order`;
TRUNCATE `sales_flat_order_address`;
TRUNCATE `sales_flat_order_grid`;
TRUNCATE `sales_flat_order_item`;
TRUNCATE `sales_flat_order_payment`;
TRUNCATE `sales_flat_order_status_history`;
TRUNCATE `sales_flat_quote`;
TRUNCATE `sales_flat_quote_address`;
TRUNCATE `sales_flat_quote_address_item`;
TRUNCATE `sales_flat_quote_item`;
TRUNCATE `sales_flat_quote_item_option`;
TRUNCATE `sales_flat_quote_payment`;
TRUNCATE `sales_flat_quote_shipping_rate`;
TRUNCATE `sales_flat_shipment`;
TRUNCATE `sales_flat_shipment_comment`;
TRUNCATE `sales_flat_shipment_grid`;
TRUNCATE `sales_flat_shipment_item`;
TRUNCATE `sales_flat_shipment_track`;
TRUNCATE `sales_invoiced_aggregated`;
TRUNCATE `sales_invoiced_aggregated_order`;
TRUNCATE `log_quote`;
ALTER TABLE `sales_flat_creditmemo_comment` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_creditmemo_grid` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_creditmemo_item` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_invoice` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_invoice_comment` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_invoice_grid` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_invoice_item` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_order` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_order_address` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_order_grid` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_order_item` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_order_payment` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_order_status_history` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_quote` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_quote_address` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_quote_address_item` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_quote_item` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_quote_item_option` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_quote_payment` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_quote_shipping_rate` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_shipment` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_shipment_comment` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_shipment_grid` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_shipment_item` AUTO_INCREMENT=1;
ALTER TABLE `sales_flat_shipment_track` AUTO_INCREMENT=1;
ALTER TABLE `sales_invoiced_aggregated` AUTO_INCREMENT=1;
ALTER TABLE `sales_invoiced_aggregated_order` AUTO_INCREMENT=1;
ALTER TABLE `log_quote` AUTO_INCREMENT=1;
#########################################
# DOWNLOADABLE PURCHASED
#########################################
TRUNCATE `downloadable_link_purchased`;
TRUNCATE `downloadable_link_purchased_item`;
ALTER TABLE `downloadable_link_purchased` AUTO_INCREMENT=1;
ALTER TABLE `downloadable_link_purchased_item` AUTO_INCREMENT=1;
#########################################
# RESET ID COUNTERS
#########################################
TRUNCATE `eav_entity_store`;
ALTER TABLE `eav_entity_store` AUTO_INCREMENT=1;
``` | Here is some trick.
You can remove all orders using the admin Sales Order grid in the user interface,
and then change the increment ID in the `eav_entity_store` table, in the column named
`increment_last_id`, to the value you need: **# 100000001**
So the next orders in your store will start as fresh Magento orders.
**EDIT**
Follow below steps for using mysql query
**Step 1.** Find the Store ID.
Even if you have only one store, you don’t want to guess at your store ID. Log into mysql and run this query.
```
SELECT store_id, name FROM core_store;
```
This will return your store name and its ID.
**Step 2.** Update the order increment ID for your store.
Run this query.
```
UPDATE eav_entity_store SET increment_last_id = [new order value] WHERE store_id =[your store id] and entity_type_id =5;
```
Hope this will surely help you. | Which way do you reset magento orders? | [
"",
"sql",
"magento",
""
] |
I'm trying to add a table to my newly created database through SQL Server Management Studio.
However I get the error:
> *the backend version is not supported to design database diagrams or tables*
To see my currently installed versions I clicked about in SSMS and this is what came up:

What's wrong here? | This is commonly reported as an error due to using the wrong version of SSMS(Sql Server Management Studio). Use the version designed for your database version. You can use the command `select @@version` to check which version of sql server you are actually using. This version is reported in a way that is easier to interpret than that shown in the Help About in SSMS.
---
Using a newer version of SSMS than your database is generally error-free, i.e. backward compatible. | I found the solution. The SSMS version was older. I uninstalled SSMS from the server, went to the Microsoft website, downloaded a more current version, and now Database Diagrams works OK.
 | The backend version is not supported to design database diagrams or tables | [
"",
"sql",
"sql-server",
"database",
"ssms",
""
] |
I am new to T-SQL. I want to use `substring` and `replace` together, but it doesn't work as expected,
or maybe there is some extra function that I do not know.
I have a column that stores date time, and the format is like this : '1393/03/03'.
But I want to show it like this: '930303', i.e. I want the characters '13' and '/' to be omitted.
I tried `substring` and `replace` but it does not work.
Here is my code :
```
SELECT SUBSTRING(CreateDate,2,REPLACE(CreateDate,'/',''),8)
```
Can you help me ? | Try this:
```
SELECT REPLACE(SUBSTRING(CreateDate,3,8),'/','')
``` | You could use the `CONVERT` function.
<http://msdn.microsoft.com/en-us/library/ms187928.aspx>
Allows converting dates from one format to another, in this case, 111- JAPAN yyyy/mm/dd to 12 - ISO yymmdd
```
SELECT convert(NVARCHAR(50), GETDATE(), 12)
SELECT convert(NVARCHAR(50), CAST('1393/03/03' AS DATE), 12)
``` | Substring and replace together? | [
"",
"sql",
"sql-server",
"t-sql",
"substring",
""
] |
I have a taxi database with two datetime fields 'BookedDateTime' and 'PickupDateTime'. The customer needs to know the average waiting time from the time the taxi was booked to the time the driver actually 'picked up' the customer.
There are a heap of rows in the database covering a couple of month's data.
The goal is to craft a query that shows me the daily average.
So a super simple example would be:
```
BookedDateTime | PickupDateTime
2014-06-09 12:48:00.000 2014-06-09 12:45:00.000
2014-06-09 12:52:00.000 2014-06-09 12:58:00.000
2014-06-10 20:23:00.000 2014-06-10 20:28:00.000
2014-06-10 22:13:00.000 2014-06-10 22:13:00.000
```
2014-06-09 ((-3 + 6) / 2) = average is 00:03:00.000 (3 mins)
2014-06-10 ((5 + 0) / 2) = average is 00:02:30.000 (2.5 mins)
Is this possible or do I need to do some number crunching in code (i.e. C#)?
Any pointers would be greatly appreciated. | I think this will do:
```
select Convert(date, BookedDateTime) as Date, AVG(datediff(minute, BookedDateTime, PickupDateTime)) as AverageTime
from tablename
group by Convert(date, BookedDateTime)
order by Convert(date, BookedDateTime)
``` | Using the day of the booked time as the day for reporting:
```
select
convert(date, BookedDateTime) as day,
    AVG(DATEDIFF(minute, BookedDateTime, PickupDateTime)) as avg_minutes
from bookings
group by convert(date, BookedDateTime)
``` | Calculate average time difference between two datetime fields per day | [
"",
"sql",
"t-sql",
"date",
"datetime",
""
] |
Can anyone tell me how I would go about checking if a database and tables exist in sql server from a vb.net project? What I want to do is check if a database exists (preferably in an 'If' statement, unless someone has a better way of doing it) and if it does exist I do one thing and if it doesn't exist I create the database with the tables and columns. Any help on this matter would be greatly appreciated.
Edit:
The application has a connection to a server. When the application runs on a PC I want it to check that a database exists, if one exists then it goes and does what it's supposed to do, but if a database does NOT exist then it creates the database first and then goes on to do what it's supposed to do. So basically I want it to create the database the first time it runs on a PC and then go about it's business, then each time it runs on the PC after that I want it to see that the database exists and then go about it's business. The reason I want it like this is because this application will be on more than one PC, I only want the database and tables created once, (the first time it runs on a PC) and then when it runs on a different PC, it sees that the database already exists and then run the application using the existing database created on the other PC. | You can query SQL Server to check for the existence of objects.
To check for database existence you can use this query:
```
SELECT * FROM master.dbo.sysdatabases WHERE name = 'YourDatabase'
```
To check for table existence you can use this query against your target database:
```
SELECT * FROM sys.tables WHERE name = 'YourTable' AND type = 'U'
```
This below link shows you how to check for database existence is SQL Server using VB.NET code:
## [Check if SQL Database Exists on a Server with vb.net](http://kellyschronicles.wordpress.com/2009/02/16/check-if-sql-database-exists-on-a-server-with-vb-net/)
Referenced code from above link:
> ```
> Public Shared Function CheckDatabaseExists(ByVal server As String, _
> ByVal database As String) As Boolean
> Dim connString As String = ("Data Source=" _
> + (server + ";Initial Catalog=master;Integrated Security=True;"))
>
> Dim cmdText As String = _
> ("select * from master.dbo.sysdatabases where name='" + (database + "'"))
>
> Dim bRet As Boolean = false
>
> Using sqlConnection As SqlConnection = New SqlConnection(connString)
> sqlConnection.Open
> Using sqlCmd As SqlCommand = New SqlCommand(cmdText, sqlConnection)
> Using reader As SqlDataReader = sqlCmd.ExecuteReader
> bRet = reader.HasRows
> End Using
> End Using
> End Using
>
> Return bRet
>
> End Function
> ```
You could perform the check in another way, so it's done in a single call by using an `EXISTS` check for both the database and a table:
```
IF NOT EXISTS (SELECT * FROM master.dbo.sysdatabases WHERE name = 'YourDatabase')
BEGIN
-- Database creation SQL goes here and is only called if it doesn't exist
END
-- You know at this point the database exists, so check if table exists
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'YourTable' AND type = 'U')
BEGIN
-- Table creation SQL goes here and is only called if it doesn't exist
END
```
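The same check-then-create pattern can be sketched with Python's built-in sqlite3, where the `sqlite_master` catalog plays the role of `sysdatabases`/`sys.tables` (the table name is illustrative):

```python
import sqlite3

# Check the catalog first; create the table only when the check finds nothing.
def ensure_table(conn):
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = 'accounts'"
    ).fetchone() is not None
    if not exists:
        conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
    return exists  # False the first time, True afterwards

conn = sqlite3.connect(":memory:")
first = ensure_table(conn)   # creates the table
second = ensure_table(conn)  # finds it and skips creation
print(first, second)  # False True
```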
By calling the above code once with parameters for database and table name, you will know that both exist. | Connect to the master database and select
```
SELECT 1 FROM master..sysdatabases WHERE name = 'yourDB'
```
and then on the database
```
SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'yourTable'
```
I don't know the exact VB syntax, but you only have to check the record count on the result | How to check if a database and tables exist in sql server in a vb .net project? | [
"",
"sql",
"sql-server",
"vb.net",
"sql-server-2008",
""
] |
I have a table structure like this one:

I need to get the latest values of each column != NULL. My current approach is to use a `UNION` statement like this one:
```
SELECT TOP 1 testlong1_min, testlong1_max, NULL, NULL, NULL, NULL, NULL, NULL FROM Grenzwerte WHERE testlong1_min IS NOT NULL AND testlong1_max IS NOT NULL UNION
SELECT TOP 1 NULL, NULL, testlong2_min, testlong2_max, NULL, NULL, NULL, NULL FROM Grenzwerte WHERE testlong2_min IS NOT NULL AND testlong2_max IS NOT NULL UNION
SELECT TOP 1 NULL, NULL, NULL, NULL, testlong3_min, testlong3_max, NULL, NULL FROM Grenzwerte WHERE testlong3_min IS NOT NULL AND testlong3_max IS NOT NULL UNION
SELECT TOP 1 NULL, NULL, NULL, NULL, NULL, NULL, testlong4_min, testlong4_max FROM Grenzwerte WHERE testlong4_min IS NOT NULL AND testlong4_max IS NOT NULL
```
This doesn't seem to work, my result is empty. I also thought about doing 1 query per field, but I guess that'll be too much of an overhead compared to the UNION-statement.
**Question:**
Is there a way to concenate the columns and return the latest values in one row?
**EDIT**
Using Parodo's suggestion I now get this result, now I need to combine the rows into one.
 | In the bellow statement, each CTE will have all the respective non null couples only and will also get a row number where the latest id will have number 1. So then i only get the latest id based on rownumber for each CTE, so each will have only one row. That way CROSS JOINING will result in a single row having the needed results.
```
;WITH testLong1CTE AS
(
SELECT testlong1_min, testlong1_max, ROW_NUMBER()OVER(ORDER BY ID DESC) AS rn
FROM Table
WHERE testlong1_min IS NOT NULL AND testlong1_max IS NOT NULL
),testLong2CTE AS
(
SELECT testlong2_min, testlong2_max, ROW_NUMBER()OVER(ORDER BY ID DESC) AS rn
FROM Table
WHERE testlong2_min IS NOT NULL AND testlong2_max IS NOT NULL
),testLong3CTE AS
(
SELECT testlong3_min, testlong3_max, ROW_NUMBER()OVER(ORDER BY ID DESC) AS rn
FROM Table
WHERE testlong3_min IS NOT NULL AND testlong3_max IS NOT NULL
),testLong4CTE AS
(
SELECT testlong4_min, testlong4_max, ROW_NUMBER()OVER(ORDER BY ID DESC) AS rn
FROM Table
WHERE testlong4_min IS NOT NULL AND testlong4_max IS NOT NULL
)
SELECT testlong1_min,
testlong1_max,
testlong2_min,
testlong2_max,
testlong3_min,
testlong3_max,
testlong4_min,
testlong4_max
FROM (SELECT testlong1_min, testlong1_max FROM testLong1CTE WHERE rn = 1) AS T1
CROSS JOIN (SELECT testlong2_min, testlong2_max FROM testLong2CTE WHERE rn = 1) AS T2
CROSS JOIN (SELECT testlong3_min, testlong3_max FROM testLong3CTE WHERE rn = 1) AS T3
CROSS JOIN (SELECT testlong4_min, testlong4_max FROM testLong4CTE WHERE rn = 1) AS T4
``` | You should use `is not null` instead of `<> NULL` as below
```
SELECT TOP 1 testlong1_min, testlong1_max, NULL, NULL, NULL, NULL, NULL, NULL FROM Grenzwerte WHERE testlong1_min IS NOT NULL AND testlong1_max IS NOT NULL
UNION
SELECT TOP 1 testlong1_min, testlong1_max, NULL, NULL, NULL, NULL, NULL, NULL FROM Grenzwerte WHERE testlong1_min IS NOT NULL AND testlong1_max IS NOT NULL
UNION
SELECT TOP 1 NULL, NULL, testlong2_min, testlong2_max, NULL, NULL, NULL, NULL FROM Grenzwerte WHERE testlong2_min IS NOT NULL AND testlong2_max IS NOT NULL
UNION
SELECT TOP 1 NULL, NULL, NULL, NULL, testlong3_min, testlong3_max, NULL, NULL FROM Grenzwerte WHERE testlong3_min IS NOT NULL AND testlong3_max IS NOT NULL
UNION
SELECT TOP 1 NULL, NULL, NULL, NULL, NULL, NULL, testlong4_min, testlong4_max FROM Grenzwerte WHERE testlong4_min IS NOT NULL AND testlong4_max IS NOT NULL
```
Be careful with `UNION` because it eliminates duplicates. If you want to see duplicates use `UNION ALL` instead. | Combining multiple rows into one single row | [
"",
"sql",
"sql-server",
""
] |
I have an issue. I have a table with almost 2 billion rows (yeah I know...) and has a lot of duplicate data in it which I'd like to delete from it. I was wondering how to do that exactly?
The columns are: first, last, dob, address, city, state, zip, telephone and are in a table called `PF_main`. Each record does have a unique ID thankfully, and it's in a column called `ID`.
How can I dedupe this and leave one unique entry (row) within the `pf_main` table for each person?
Thank you all in advance for your responses... | A 2 billion row table is quite big. Let me assume that `first`, `last`, and `dob` constitute a "person". My suggestion is to build an index on the "person" and then do the `truncate`/re-insert approach.
In practice, this looks like:
```
create index idx_pf_main_first_last_dob on pf_main(first, last, dob);
select m.*
into temp_pf_main
from pf_main m
where not exists (select 1
from pf_main m2
where m2.first = m.first and m2.last = m.last and m2.dob = m.dob and
m2.id < m.id
);
truncate table pf_main;
insert into pf_main
select *
from temp_pf_main;
``` | ```
SELECT
ID, first, last, dob, address, city, state, zip, telephone,
ROW_NUMBER() OVER (PARTITION BY first, last, dob, address, city, state, zip, telephone ORDER BY ID) AS RecordInstance
FROM PF_main
```
will give you the "number" of each unique entry (sorted by Id)
so if you have the following records:
> id, last, first, dob, address, city, state, zip, telephone
> 006, trevelyan, alec, '1954-05-15', '85 Albert Embankment', 'London',
> 'UK', '1SE1 7TP', 0064
> 007, bond, james, '1957-02-08', '85 Albert Embankment', 'London',
> 'UK', '1SE1 7TP', 0074
> 008, bond, james, '1957-02-08', '85 Albert Embankment', 'London',
> 'UK', 'SE1 7TP', 0074
> 009, bond, james, '1957-02-08', '85 Albert Embankment', 'London',
> 'UK', 'SE1 7TP', 0074
you will get the following results (note last column)
> 006, trevelyan, alec, '1954-05-15', '85 Albert Embankment', 'London',
> 'UK', '1SE1 7TP', 0064, **1**
> 007, bond, james, '1957-02-08', '85 Albert Embankment', 'London',
> 'UK', '1SE1 7TP', 0074, **1**
> 008, bond, james, '1957-02-08', '85 Albert Embankment', 'London',
> 'UK', 'SE1 7TP', 0074, **2**
> 009, bond, james, '1957-02-08', '85 Albert Embankment', 'London',
> 'UK', 'SE1 7TP', 0074, **3**
So you can just delete records with RecordInstance > 1:
```
WITH Records AS
(
SELECT
ID, first, last, dob, address, city, state, zip, telephone,
ROW_NUMBER() OVER (PARTITION BY first, last, dob, address, city, state, zip, telephone ORDER BY ID) AS RecordInstance
FROM PF_main
)
DELETE FROM Records
WHERE RecordInstance > 1
``` | Deduping SQL Server table | [
"",
"sql",
"sql-server",
"deduplication",
""
] |
I am implementing an auditing system on my database. It uses triggers on each table to log changes.
I need to make modifications to these triggers and so am producing ALTER scripts for each one.
What I'd like to do is only have these triggers be altered if they exist, ideally like so:
```
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'TR' AND name = 'MyTable_Audit_Update')
BEGIN
ALTER TRIGGER [dbo].[MyTable_Audit_Update] ON [dbo].[MyTable]
AFTER Update
...
END
```
However when I do this I get an error saying "Invalid syntax near keyword TRIGGER"
The reason that these triggers may not exist is that auditing can be enabled/disabled on tables which the end user can specify. This involves either creating or dropping the triggers. I am unable to make the changes to the triggers upon creation as they are dynamically created, so I must still provide a way of altering the triggers should they exist. | The ALTER statement has to be the first statement in the batch. So for SQL Server it would be:
```
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'TR' AND name = 'MyTable_Audit_Update')
BEGIN
EXEC('ALTER TRIGGER [dbo].[MyTable_Audit_Update] ON [dbo].[MyTable]
AFTER Update
...')
END
``` | Unlike [CREATE TRIGGER](http://msdn.microsoft.com/en-us/library/ms189799.aspx), I failed to find a reference that explicitly states that
> CREATE TRIGGER must be the first statement in the batch
but it seems that this restriction applies to `ALTER TRIGGER` too.
The simple way to do this would be to DROP the TRIGGER and re-create it:
```
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'TR' AND name = 'MyTable_Audit_Update')
DROP TRIGGER MyTable_Audit_Update
GO
CREATE TRIGGER [dbo].[MyTable_Audit_Update] ON [dbo].[MyTable]
AFTER Update
...
END
``` | Only ALTER a TRIGGER if is exists | [
"",
"sql",
"sql-server",
"triggers",
""
] |
I am stuck on inserting data from one table into another table and exporting data into a CSV file using `bcp`. The problem is that the CSV text file contains `null` instead of an empty string ''.
When I insert data into a table from another table, the empty string column is treated as `NULL`. Running the `bcp` command generates a correct file.
I am using this command to export with bcp:
```
bcp ClientReportNewOrder out "D:\Temp\Neeraj\TestResults\oOR.txt" -c -t"," -r"\n" -S"." -U"sa" -P"123"
```
and this to insert data into the table:
```
insert into ClientReportNewOrder
select * from ClientReportNewOrder_import
``` | After lots of brainstorming, I have found the solution.
1. Execute the `INSERT` statement:
```
insert into ClientReportNewOrder select * from ClientReportNewOrder_import
```
2. After inserting the records, update them with a dynamic query like the one below.
```
DECLARE @qry NVARCHAR(MAX)
SELECT @qry = COALESCE( @qry + ';', '') +
'UPDATE ClientReportNewOrder SET [' + COLUMN_NAME + '] = NULL
WHERE [' + COLUMN_NAME + '] = '''''
FROM INFORMATION_SCHEMA.columns
WHERE DATA_TYPE IN ('char','nchar','varchar','nvarchar') and TABLE_NAME='ClientReportNewOrder '
EXECUTE sp_executesql @qry
```
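The same catalog-driven idea can be sketched with Python's built-in sqlite3, where `PRAGMA table_info` plays the role of `INFORMATION_SCHEMA.columns` (table and column names are illustrative):

```python
import sqlite3

# Read column names and types from the catalog, then generate one UPDATE
# per text column turning '' into NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_report (id INTEGER, col_a TEXT, col_b TEXT)")
conn.execute("INSERT INTO client_report VALUES (1, '', 'kept'), (2, 'x', '')")
# table_info rows are (cid, name, type, notnull, dflt_value, pk)
text_cols = [r[1] for r in conn.execute("PRAGMA table_info(client_report)")
             if r[2].upper() == "TEXT"]
for col in text_cols:
    # Column names come from the catalog, not user input, so quoting them is safe here.
    conn.execute(f"UPDATE client_report SET \"{col}\" = NULL WHERE \"{col}\" = ''")
rows = conn.execute("SELECT * FROM client_report ORDER BY id").fetchall()
print(rows)  # [(1, None, 'kept'), (2, 'x', None)]
```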
By following the steps above, I was able to resolve the issue.
If anyone has a better technique to achieve the same, please post it. | Not sure about `bcp`, but if you are inserting using a SQL query (`INSERT .. SELECT` construct) directly like below
```
insert into ClientReportNewOrder
select * from ClientReportNewOrder_import
```
Then you can use either the `ISNULL()` or the `COALESCE()` function to get around this, as below (a sample, assuming that `col3` contains `NULL`)
```
insert into ClientReportNewOrder(col1,col2,col3)
select col1,col2,ISNULL(col3,'') from ClientReportNewOrder_import
``` | Convert Empty string into NULL when data insert into table from another table | [
"",
"sql",
"sql-server",
"sql-server-2008",
"bcp",
""
] |
Just studying some code , and came across this line:
```
v_VLDT_TOKEN_VLU := v_onl_acctID || '|' || p_onl_external_id || '|' || p_validation_target
```
It's a "validation token value", but why would you concatenate the pipe symbol? I understand this is for dynamic SQL. | Here the pipe symbol is used as a delimiter/separator between the fields:
Assume,
```
v_onl_acctID = 123
p_onl_external_id = abc
p_validation_target = xyz
```
then
```
v_VLDT_TOKEN_VLU := v_onl_acctID || '|' || p_onl_external_id || '|' || p_validation_target
```
will evaluate to
```
v_VLDT_TOKEN_VLU = 123|abc|xyz
```
It is just a character chosen as the delimiter and can be replaced with any other delimiter. For reference, if the `|` is replaced by `*`, say
```
v_VLDT_TOKEN_VLU := v_onl_acctID || '*' || p_onl_external_id || '*' || p_validation_target
```
then the expression's value would be `123*abc*xyz`
Note: `||` is used for concatenation | I have actually seen similar code, but it was used to generate a unix statement that piped (|) the output of one command to another. If I remember correctly, they had a table with all of our database hosts, and oracle data directories. They used code similar to this to shell over to the specific database host, get a directory of the datafiles and write the output to a logfile back on the parent server which they then read in to update disk usage for reporting. This was years ago so I'm sure there is a better way to do it now. | In Oracle SQL, what would code like this be used for? | [
"",
"sql",
"oracle",
""
] |
I have downloaded and installed SQL Server 2014 Express
(from this site: <http://www.microsoft.com/en-us/server-cloud/products/sql-server-editions/sql-server-express.aspx#Installation_Options>).
The problem is that I can't connect/find my local DB server, and I can't develop DB on my local PC. How can I reach my local server?
My system is Windows 8.1 (not Pro or Enterprise edition), 64-bit.
Checking the configuration of SQL Server with the `SQL Server 2014 Configuration Manager` tool, I see an empty list when selecting "SQL Server Services" from the tree on the left. Below you can find a screenshot.

In the Windows Services list, there is only one service: **"SQL Server VSS Writer"**
**EDIT**
My installation window of SQL Server 2014 is the following:
 | Most probably, you didn't install any SQL Server Engine service. If no SQL Server engine is installed, no service will appear in the SQL Server Configuration Manager tool. Consider that the packages `SQLManagementStudio_Architecture_Language.exe` and `SQLEXPR_Architecture_Language.exe`, available in the [Microsoft site](http://www.microsoft.com/en-US/download/details.aspx?id=42299) contain, respectively only the Management Studio GUI Tools and the SQL Server engine.
If you want to have a full featured SQL Server installation, with the database engine and Management Studio, [download the installer file of **SQL Server with Advanced Services**](http://www.microsoft.com/en-US/download/details.aspx?id=42299).
Moreover, to have a sample database in order to perform some local tests, use the [Adventure Works database](https://msftdbprodsamples.codeplex.com/releases/view/125550).
Considering the package of **SQL Server with Advanced Services**, at the beginning of the installation you should see something like this (the screenshot below is of SQL Server 2008 Express, but the feature selection is very similar). The checkbox next to "Database Engine Services" must be checked. In the next steps, you will be able to configure the instance settings and other options.
Execute again the installation process and select the database engine services in the feature selection step. At the end of the installation, you should be able to see the SQL Server services in the SQL Server Configuration Manager.
| I downloaded a different installer, "SQL Server 2014 Express with Advanced Services", and found the Instance Features in it. Thanks to Alberto Solano's answer; it was really helpful.
My first installer was "SQL Server 2014 Express". It installed only SQL Server Management Studio and tools, without the instance features. After installing "SQL Server 2014 Express with Advanced Services", my LocalDB is now alive!
"",
"sql",
"sql-server",
"sql-server-2014-express",
""
] |
I have two tables from a site similar to SO: one with posts, and one with up/down votes for each post. I would like to select all votes cast on the day that a post was modified.
**My tables layout is as seen below:**
Posts:
```
-----------------------------------------------
| post_id | post_author | modification_date |
-----------------------------------------------
| 0 | David | 2012-02-25 05:37:34 |
| 1 | David | 2012-02-20 10:13:24 |
| 2 | Matt | 2012-03-27 09:34:33 |
| 3 | Peter | 2012-04-11 19:56:17 |
| ... | ... | ... |
-----------------------------------------------
```
Votes (each vote is only counted at the end of the day for anonymity):
```
-------------------------------------------
| vote_id | post_id | vote_date |
-------------------------------------------
| 0 | 0 | 2012-01-13 00:00:00 |
| 1 | 0 | 2012-02-26 00:00:00 |
| 2 | 0 | 2012-02-26 00:00:00 |
| 3 | 0 | 2012-04-12 00:00:00 |
| 4 | 1 | 2012-02-21 00:00:00 |
| ... | ... | ... |
-------------------------------------------
```
**What I want to achieve**:
```
-----------------------------------
| post_id | post_author | vote_id |
-----------------------------------
| 0 | David | 1 |
| 0 | David | 2 |
| 1 | David | 4 |
| ... | ... | ... |
-----------------------------------
```
I have been able to write the following, but it selects all votes on the day **before** the post modification, not on the same day (so, in this example, an empty table):
```
SELECT Posts.post_id, Posts.post_author, Votes.vote_id
FROM Posts
LEFT JOIN Votes ON Posts.post_id = Votes.post_id
WHERE CAST(Posts.modification_date AS DATE) = Votes.vote_date;
```
How can I fix it so the `WHERE` clause takes the day before `Votes.vote_date`? Or, if not possible, is there another way? | Depending on which type of database you are using (SQL Server, Oracle, etc.), to get the previous day you can usually just subtract 1 from the date, and it will subtract exactly one day:
```
Where Cast(Posts.modification_date as Date) = Votes.vote_date - 1
```
or if modification\_date is already in date format just:
```
Where Posts.modification_date = Votes.vote_date - 1
``` | If you have a site similar to Stack Overflow, then perhaps you also use SQL Server:
```
SELECT p.post_id, p.post_author, v.vote_id
FROM Posts p LEFT JOIN
Votes v
ON p.post_id = v.post_id
WHERE CAST(DATEADD(day, 1, p.modification_date) AS DATE) = v.vote_date;
```
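As a quick sanity check of the one-day offset, here is the same join reproduced in SQLite (whose date arithmetic differs from the engines above: `date(col, '+1 day')` shifts a date string). The rows mirror the sample data in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Posts (post_id INT, post_author TEXT, modification_date TEXT);
CREATE TABLE Votes (vote_id INT, post_id INT, vote_date TEXT);
INSERT INTO Posts VALUES (0, 'David', '2012-02-25 05:37:34');
INSERT INTO Votes VALUES (1, 0, '2012-02-26 00:00:00');
""")
# Votes counted at midnight the day AFTER the modification should match.
rows = conn.execute("""
SELECT p.post_id, p.post_author, v.vote_id
FROM Posts p JOIN Votes v ON p.post_id = v.post_id
WHERE date(p.modification_date, '+1 day') = date(v.vote_date)
""").fetchall()
```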
Different databases have different ways of subtracting one day. If this doesn't work, then your database has something similar. | Select rows where one column is within a day of another column | [
"",
"sql",
"join",
""
] |
I have a problem with this code.
I have a tree in my database, and I need to select all nodes except a given node and its children, grandchildren, etc.
I wrote a WITH query and it works: I can select the whole family of the node.
But when I try to do SELECT ... WHERE id NOT IN (the whole family), I get errors...
**CODE:**
```
Select * from TALBE
where id NOT IN
(
WITH TempTable
AS
(
SELECT ...
UNION ALL
SELECT ...
)
Select id from TempTable;
);
```
or another version
```
Select * from TALBE
where id NOT IN
(
select id from
(
WITH TempTable
AS
(
SELECT ...
UNION ALL
SELECT ...
)
Select id from TempTable
)
);
```
In this code I need to keep the first 2 lines. It's not my decision; they just have to be there.
**Errors:**
```
Msg 156, Level 15, State 1, Line 4
```
Incorrect syntax near the keyword 'WITH'.
```
Msg 319, Level 15, State 1, Line 4
```
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
```
Msg 102, Level 15, State 1, Line 14
```
Incorrect syntax near ')'. | As the comments have suggested, you need to move the CTE to the beginning of your query.
<http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx>
```
WITH TempTable
AS
(
SELECT ...
UNION ALL
SELECT ...
)
Select * from TALBE
where id NOT IN
(
select id from TempTable
);
``` | Something like this. You should probably read the documentation about CTEs so you understand what they are and how they are used.
```
Select * from TALBE
where id NOT IN
(
SELECT ID from someTable
UNION ALL
SELECT ID from SomeOtherTable
);
```
Now that it sounds like you want this as a recursive CTE, you will need to adjust this a little bit.
```
WITH TempTable
AS
(
SELECT ...
UNION ALL
SELECT ...
)
Select * from TALBE
where id NOT IN
(
SELECT ID from TempTable
);
``` | SQL Server : With in Select in NOT IN | [
"",
"sql",
"sql-server",
"with-statement",
""
] |
I have a table with a column that contains numbers 1 - 5, it looks something like the following
```
Column
1
2
4
3
2
1
5
2
```
How do I count the number of times each number shows up so that my final table looks like this:
```
Number | Count
one | 2
two | 3
three | 1
four | 1
five | 1
``` | Try this:
```
select
case
when number = 1 then 'one'
when number = 2 then 'two'
when number = 3 then 'three'
when number = 4 then 'four'
when number = 5 then 'five'
end number,
count
from
(select number,count(*) count
from yourtable
group by number) s
```
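The derived-table version can be exercised end-to-end with the sample data. Here is a sketch in SQLite via Python (the count column is aliased `cnt` here rather than `count`, an assumption made just to avoid the keyword in this dialect):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (number INT)")
conn.executemany("INSERT INTO yourtable VALUES (?)",
                 [(1,), (2,), (4,), (3,), (2,), (1,), (5,), (2,)])
# CASE maps each number to its name; the inner query does the counting.
rows = conn.execute("""
select
  case when number = 1 then 'one'  when number = 2 then 'two'
       when number = 3 then 'three' when number = 4 then 'four'
       when number = 5 then 'five' end AS number,
  cnt
from (select number, count(*) AS cnt from yourtable group by number) s
order by s.number
""").fetchall()
```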
In this case you have only 5 values, so `case` is feasible for the replacement. However, good practice would be to use a table with the number-to-name mapping and then join it to your counting query. So, assuming you have a table `tblMap` with 2 columns, `number` and `name`, you would do:
```
select name, count(*)
from
yourtable t
inner join tblMap m on t.number = m.number
group by t.number, name
order by t.number -- GROUP BY alone does not guarantee ordering; order explicitly
``` | ```
select
case
when [Column] = '1' then 'One'
when [Column] = '2' then 'Two'
when [Column] = '3' then 'Three'
when [Column] = '4' then 'Four'
when [Column] = '5' then 'Five'
end [Number]
, COUNT (*) as [Count] from Table_Name
group by [Column]
``` | How do I count and replace elements in an sql column | [
"",
"sql",
""
] |
I have a table with below contents :
```
SQL> select RULE_IDS ||' | '|| ID ||' | '|| ALERT_COUNT from alarms ;
RULE_IDS||'|'||ID||'|'||ALERT_COUNT
--------------------------------------------------------------------------------
3714,3715,3703 | 1031 | 3
3703,3714,3722,3721 | 1032 | 4
3715 | 1033 | 1
3721,3722 | 1034 | 2
3714,3715 | 1035 | 2
3706 | 1030 | 1
3723,3714 | 1036 | 2
3703 | 1025 | 1
```
My requirement is to find the count of each rule id among the comma-separated RULE\_IDS values. E.g., the output should be like this:
```
SQL> select RULE_IDS ||' | '|| ID ||' | '|| ALERT_COUNT from alarms ;
RULE_IDS||'|'||COUNT
--------------------------------------------------------------------------------
3714 | 4
3715 | 3
3703 | 3
3721 | 2
3722 | 2
3723 | 1
3706 | 1
```
How can I achieve that? Please help me out. Thanks in advance.
Regards | Let me assume that you have a table of rule ids. If so, you can do this using a `join`:
```
select r.rule_id, count(*)
from alarms a join
rules r
on ',' || a.rule_ids || ',' like '%,' || r.rule_id || ',%'
group by r.rule_id;
```
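Outside the database, the same count is only a couple of lines. Here is a sketch with Python's `collections.Counter`, with the rows mirroring the RULE\_IDS column above:

```python
from collections import Counter

# Each string is one RULE_IDS cell; split on commas and tally every id.
rule_ids = ["3714,3715,3703", "3703,3714,3722,3721", "3715", "3721,3722",
            "3714,3715", "3706", "3723,3714", "3703"]
counts = Counter(r for row in rule_ids for r in row.split(","))
```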
Storing lists of ids as a comma-delimited string is a bad idea. For one thing, you are storing integer values as a string. More importantly, SQL has a very nice structure for storing lists. It is called a table. In this case, you would want a junction table `AlertRules` with one row per alert and rule. | Found here: <https://community.oracle.com/thread/2348338>
Use REGEXP\_SUBSTR:
```
SELECT REGEXP_SUBSTR (str, '[^,]+', 1, 1) AS part_1
, REGEXP_SUBSTR (str, '[^,]+', 1, 2) AS part_2
, REGEXP_SUBSTR (str, '[^,]+', 1, 3) AS part_3
, REGEXP_SUBSTR (str, '[^,]+', 1, 4) AS part_4
FROM table_x
;
```
Can str contain empty items? For example, can you have a string like 'foo,,,bar', where you'd want part\_2 and part\_3 to be NULL, and 'bar' to be part\_4? If so:
```
SELECT RTRIM (REGEXP_SUBSTR (str, '[^,]*,', 1, 1), ',') AS part_1
, RTRIM (REGEXP_SUBSTR (str, '[^,]*,', 1, 2), ',') AS part_2
, RTRIM (REGEXP_SUBSTR (str, '[^,]*,', 1, 3), ',') AS part_3
, LTRIM (REGEXP_SUBSTR (str, ',[^,]*', 1, 3), ',') AS part_4
FROM table_x
;
``` | how to split a comma seperated field in oracle? | [
"",
"sql",
"oracle",
""
] |
How does one implement SQL joins without using the JOIN keyword?
This is not really necessary, but I thought that by doing this I could better understand what joins actually do. | The basic INNER JOIN is easy to implement.
The following:
```
SELECT L.XCol, R.YCol
FROM LeftTable AS L
INNER JOIN RightTable AS R
ON L.IDCol=R.IDCol;
```
is equivalent to:
```
SELECT L.XCol, R.YCol
FROM LeftTable AS L, RightTable AS R
WHERE L.IDCol=R.IDCol;
```
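The equivalence of the two forms is easy to verify with a small in-memory check; here it is in SQLite via Python (the sample rows are made up for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE LeftTable (IDCol INT, XCol TEXT);
CREATE TABLE RightTable (IDCol INT, YCol TEXT);
INSERT INTO LeftTable VALUES (1,'a'),(2,'b'),(3,'c');
INSERT INTO RightTable VALUES (1,'x'),(3,'y');
""")
# Explicit JOIN ... ON syntax.
with_join = conn.execute("""
SELECT L.XCol, R.YCol FROM LeftTable AS L
JOIN RightTable AS R ON L.IDCol = R.IDCol ORDER BY L.IDCol
""").fetchall()
# Comma-list FROM with the join condition in WHERE.
without_join = conn.execute("""
SELECT L.XCol, R.YCol FROM LeftTable AS L, RightTable AS R
WHERE L.IDCol = R.IDCol ORDER BY L.IDCol
""").fetchall()
```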
In order to extend this to a LEFT/RIGHT/FULL OUTER JOIN, you only need to UNION the rows with no match, along with NULL in the correct columns, to the previous INNER JOIN.
For a LEFT OUTER JOIN, add:
```
UNION ALL
SELECT L.XCol, NULL /* cast the NULL as needed */
FROM LeftTable AS L
WHERE NOT EXISTS (
SELECT * FROM RightTable AS R
WHERE L.IDCol=R.IDCol)
```
For a RIGHT OUTER JOIN, add:
```
UNION ALL
SELECT NULL, R.YCol /* cast the NULL as needed */
FROM RightTable AS R
WHERE NOT EXISTS (
SELECT * FROM LeftTable AS L
WHERE L.IDCol=R.IDCol)
```
For a FULL OUTER JOIN, add both of the above. | There *is* an older deprecated SQL syntax that allows you to join without using the `JOIN` keyword.. but I personally find it more confusing than any permutation of the `JOIN` operator I've ever seen. Here's an example:
```
SELECT A.CustomerName, B.Address1, B.City, B.State, B.Zip
FROM dbo.Customers A, dbo.Addresses B
WHERE A.CustomerId = B.CustomerId
```
In the older way of doing it, you join by separating the tables with a comma and specifying the JOIN conditions in the WHERE clause. Personally, I would prefer the `JOIN` syntax:
```
SELECT A.CustomerName, B.Address1, B.City, B.State, B.Zip
FROM dbo.Customers A
JOIN dbo.Addresses B
ON A.CustomerId = B.CustomerId
```
The reason you should shy away from this old style of join is clarity and readability. When you are simply joining one table to another, it's pretty easy to figure out what's going on. When you're combining multiple types of joins across a half dozen (or more) tables, this older syntax becomes very challenging to manage.
The best way to get a handle on the JOIN operator is working with it. Here's a decent visual example of what the different JOINs do:
<http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/>
Some more info:
<https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins>
<http://www.sqlservercentral.com/blogs/brian_kelley/2009/09/30/the-old-inner-join-syntax-vs-the-new-inner-join-syntax/> | How to implement SQL joins without using JOIN? | [
"",
"sql",
"join",
""
] |
I use the following sql to update a table called `userStats`, and it runs per user who's `userId` I supply.
```
update userStats us
join (select
sum(postStatus = 0) published,
sum(postStatus = 1) inactive,
sum(postStatus = 5) recalled,
sum(postStatus = 6) deleted
from userData where userId = @x) d on d.userId = us.userId
set
us.published = coalesce(d.published,0),
us.inactive = coalesce(d.inactive,0),
us.recalled = coalesce(d.recalled,0),
us.deleted = coalesce(d.deleted,0),
us.total = coalesce(d.published+d.inactive+d.recalled+d.deleted,0);
```
I keep running into the error `Unknown column 'd.userId' in 'on clause'` even when the column exists in the table. I used to use this very same thing without the error all this time.
Also, concerning the use of `where`: is another `where` needed after the set columns, or is the inner `where` sufficient to update only the `userId` I supply? It does update only the `userId` I supply, but I was quite uncomfortable not using one more `where` outside. Do I need another `where` outside, like I've done below?
```
us.deleted = coalesce(d.deleted,0),
us.total = coalesce(d.published+d.inactive+d.recalled+d.deleted,0) where userId =@x;
``` | The table aliased as `d` does not explicitly select the column `userID`, even though your `userData` table has that column.
So you should probably do something like:
```
(select
userID, --> This field has to be selected to use in the join condition
sum(postStatus = 0) published,
sum(postStatus = 1) inactive,
sum(postStatus = 5) recalled,
sum(postStatus = 6) deleted
from userData where userId = @x) d on d.userId = us.userId
^
|___ You were getting error here.
``` | Change your subquery:
```
select
sum(postStatus = 0) published,
sum(postStatus = 1) inactive,
sum(postStatus = 5) recalled,
sum(postStatus = 6) deleted,userId
from userData where userId = @x
```
Then use `userid` in the where clause and it works. | Unknown column when using update and join even when the column exists | [
"",
"mysql",
"sql",
""
] |
```
SELECT
b.User_Id
,(CONVERT(varchar, DATEADD(hh, - 7, b.callstartdt), 101))as 'Dt'
,(COUNT(distinct b.SeqNum ) + Count(distinct c.SeqNum) + count(distinct d.seqnum)) as 'TotalCalls'
,COUNT(distinct b.SeqNum )as 'ACD'
,COUNT(distinct c.SeqNum)as 'AOD'
,COUNT(distinct d.seqnum) as 'Manual'
,COUNT(distinct e.SeqNum)as 'Contacts'
,COUNT (distinct es.seqnum) as 'Success'
FROM
[detail_epro].[dbo].[ACDCallDetail]as b
LEFT JOIN
[detail_epro].[dbo].[AODCallDetail]as c on c.User_Id = b.User_Id
LEFT JOIN
[detail_epro].[dbo].[manualCallDetail]as d on d.User_Id = b.User_Id
LEFT JOIN
(SELECT
USER_ID, CallStartDt, SeqNum
FROM
[detail_epro].[dbo].[AgentDispoDetail]
WHERE
Disp_Id IN
(100000150, 100000126, 100000137, 100000093, 100000133,
100000123, 100000094, 100000161, 100000162, 100000085,
100000084, 100000086, 100000096, 100000087, 100000157,
100000088, 100000097, 100000154, 100000148, 100000134,
100000131, 100000160, 100000156, 100000165, 100000166,
100000122, 100000121, 100000138, 100000130, 100000144,
100000132, 100000158, 100000098, 100000147, 100000100,
100000153, 100000139, 100000145, 100000101, 100000140,
100000102, 100000103, 100000104, 100000105, 100000106,
100000159, 100000112, 100000135, 100000090, 100000113,
100000141, 100000146, 100000115, 100000108, 100000092,
100000155, 100000125, 100000151, 100000136, 100000107,
100000142)
) AS e ON e.User_Id = b.User_Id
LEFT JOIN
(SELECT
USER_ID, CallStartDt, SeqNum
FROM
[detail_epro].[dbo].[AgentDispoDetail]
WHERE Disp_Id IN
(100000150, 100000137, 100000093, 100000133, 100000123,
100000094, 100000161, 100000085, 100000086, 100000157,
100000088, 100000131, 100000160, 100000156, 100000165,
100000166, 100000122, 100000121, 100000138, 100000144,
100000132, 100000098, 100000100, 100000153, 100000139,
100000145, 100000101, 100000140, 100000102, 100000103,
100000105, 100000106, 100000159, 100000112, 100000135,
100000141, 100000146, 100000115, 100000108, 100000092,
100000155, 100000125, 100000151, 100000136, 100000107)
) AS es ON es.User_Id = b.User_Id
WHERE
(CONVERT(varchar, DATEADD(hh, - 7, b.CallStartDt), 101)) = (CONVERT(varchar, DATEADD(hh, - 7, c.CallStartDt), 101))
AND (CONVERT(varchar, DATEADD(hh, - 7, b.CallStartDt), 101))= (CONVERT(varchar, DATEADD(hh, - 7, d.CallStartDt), 101))
AND (CONVERT(varchar, DATEADD(hh, - 7, b.CallStartDt), 101))= (CONVERT(varchar, DATEADD(hh, - 7, e.CallStartDt), 101))
AND (CONVERT(varchar, DATEADD(hh, - 7, b.CallStartDt), 101))= (CONVERT(varchar, DATEADD(hh, - 7, es.CallStartDt), 101))
AND (CONVERT(varchar, DATEADD(hh, - 7, b.CallStartDt), 101)) >= '08/01/2014'
GROUP BY
b.User_Id, b.CallStartDt
```
This query takes a lot longer to run than I would like (over a minute). I'm guessing it has a lot to do with the server, but I figured I would ask whether anyone has any thoughts on making it faster.
The query gets some phone agent data that isn't summarized in any tables:
* acd = inbound calls
* aod = dialer calls
* manual = manual calls
* contacts = based on the disposition codes an agent would use
* success = a subset of contacts | There are numerous issues with the query, but the first thing I notice is the inefficient datetime conversions. So I'd start with that part first, before examining the indexing and the execution plan.
I suppose you want to check whether the various datetimes fall on the same date (minus the 7 hours, which is probably your timezone offset while the data are stored in UTC). So, let's try this instead of that (horrible) `WHERE`:
```
CROSS APPLY
( SELECT dt = DATEADD(hour, -7, b.CallStartDt) ) AS x
CROSS APPLY
( SELECT dt = DATEADD(day, +1, x.dt) ) AS y
WHERE
b.CallStartDt >= DATEADD(hour, +7, '20140801')
AND c.CallStartDt >= x.dt AND c.CallStartDt < y.dt
AND d.CallStartDt >= x.dt AND d.CallStartDt < y.dt
AND e.CallStartDt >= x.dt AND e.CallStartDt < y.dt
AND es.CallStartDt >= x.dt AND es.CallStartDt < y.dt
```
Explanation/notes:
* the `(CONVERT(varchar, DATEADD(hh, - 7, b.CallStartDt), 101)) >= '08/01/2014'` is utterly wrong. Not only does it use inefficient conversions, it will also return wrong results, because the comparison is done on strings. The date `'08/03/2014'` compares after `'08/01/2014'` both as a string and as a date, but for other examples it's the other way around: `'09/09/2011' > '08/01/2014'` as strings, yet obviously 2011 is before 2014.
* All unnecessary calls to `DATEADD()` and `CONVERT()` have been removed. This way, not only are several thousand (or million, depending on your table sizes) function calls avoided, but the optimizer will be able to use indexes for the various conditions, if there are indexes on these columns.
* Only the (-7 hours) of `b.CallStartDt` has been kept, as there is no way to avoid that without altering the table (though adding an indexed computed column would help.)
* Use sane formats for dates and datetimes, like `'20140801'` (`'YYYYMMDD'`), which as Aaron Bertrand's blog explains is the only 100% secure format for using with dates in SQL-Server. See: **[Bad habits to kick : mis-handling date / range queries](https://sqlblog.org/2009/10/16/bad-habits-to-kick-mis-handling-date-range-queries)**
* With the `DATEADD()` function, use the long forms: `hour` instead of `hh`, `day` instead of `dd`, `minute` instead of `mm` (or is it `mi`, or `min`? I keep forgetting). The long forms are less error-prone.
More to do:
* The 4 above conditions (the ones regarding `c`, `d`, `e` and `es` tables) should probably be moved to the respective `LEFT` joins (as DRapp commented.)
* Check the execution plan and whether indexes are available and used.
Minor details:
* Avoid single quotes aliases. use either unquoted aliases, or double-quoted if needed. See (another Aaron Bertrand's blog post): **[Bad Habits to Kick : Using AS instead of = for column aliases](https://sqlblog.org/2012/01/23/bad-habits-to-kick-using-as-instead-of-for-column-aliases)** | You need to remove your subqueries and put
```
WHERE Disp_Id IN
(100000150, 100000137, 100000093, 100000133, 100000123,
100000094, 100000161, 100000085, 100000086, 100000157,
100000088, 100000131, 100000160, 100000156, 100000165,
100000166, 100000122, 100000121, 100000138, 100000144,
100000132, 100000098, 100000100, 100000153, 100000139,
100000145, 100000101, 100000140, 100000102, 100000103,
100000105, 100000106, 100000159, 100000112, 100000135,
100000141, 100000146, 100000115, 100000108, 100000092,
100000155, 100000125, 100000151, 100000136, 100000107)
```
in a temp table, and join that temp table to [detail\_epro].[dbo].[AgentDispoDetail] on Disp\_ID.
And as others have already posted, why are you using converts to compare the dates? Are they in a different format in each table? If they aren't, remove the converts.
Also, remove the DATEADDs. You are using them in a comparison but adding -7 to each half of the equation. That is like X + 7 = 8 + 7: remove the 7s and the value of X hasn't changed.
"",
"sql",
"sql-server",
"optimization",
"lag",
""
] |
I have a Microsoft SQL database of many customers, the columns are `CustomerID,CustomerName,Locations,SQFT,EditDate, DateRecorded`
Every day, we record the square footage used by the customer. So the database should have a value for every day of the month. I want to select the day with the highest square footage use for every customer. The code below only returns a single customer, and not all customers. How can I return the highest SQFT for the month for each customer?
My code:
```
// $db_month_start = the first of the current month.
// $db_month_end = the end of the current month.
$query = "SELECT CustomerID,CustomerName,SQFT,EditDate
FROM SquareFootage
WHERE DateRecorded >= '{$db_month_start}'
AND DateRecorded <= '{$db_month_end}'
AND SQFT = (Select Max(SQFT) From SquareFootage WHERE DateRecorded >= '{$db_month_start}' AND DateRecorded <= '{$db_month_end}') ";
``` | The max SQFT per customer per month:
```
select CustomerID
, CustomerName
, convert(varchar(7), DateRecorded, 120) as [Month]
, max(SQFT) as MaxSQFTThisMonth
from SquareFootage
group by
CustomerID
, CustomerName
, convert(varchar(7), DateRecorded, 120)
```
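The per-customer, per-month grouping can be checked quickly in memory. Here is a sketch in SQLite via Python, where `strftime('%Y-%m', ...)` plays the role of `convert(varchar(7), ..., 120)`; the customer names and figures are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SquareFootage (CustomerID INT, CustomerName TEXT,
                            SQFT INT, DateRecorded TEXT);
INSERT INTO SquareFootage VALUES
 (1,'Acme',100,'2014-08-01'),(1,'Acme',140,'2014-08-15'),
 (1,'Acme',120,'2014-08-30'),(2,'Zenith',90,'2014-08-10');
""")
# One row per customer per month, carrying that month's maximum SQFT.
rows = conn.execute("""
SELECT CustomerID, CustomerName,
       strftime('%Y-%m', DateRecorded) AS Month, MAX(SQFT)
FROM SquareFootage
GROUP BY CustomerID, CustomerName, strftime('%Y-%m', DateRecorded)
ORDER BY CustomerID
""").fetchall()
```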
This will work for any number of months, and customers that have their top SQFT for multiple days are only listed on one row. | You can follow down the same path. You just need a correlated subquery:
```
SELECT CustomerID, CustomerName, SQFT, EditDate
FROM SquareFootage sf
WHERE DateRecorded >= '{$db_month_start}' AND DateRecorded <= '{$db_month_end}' AND
SQFT = (Select Max(sf2.SQFT)
From SquareFootage sf2
WHERE sf2.DateRecorded >= '{$db_month_start}' AND
sf2.DateRecorded <= '{$db_month_end}' AND
sf2.CustomerId = sf.CustomerId
);
``` | Selecting the highest values from database, unique to customer name? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Gender is a Bool value, but I don't know how to insert its value.
Below is what I have written:
> INSERT INTO users
> (user\_id, user\_name, joined, password, first\_name, last\_name, date\_of\_birth, gender)
> VALUES
> ('150/08', 'Otollo', NOW(), SHA('WUODBABA'), 'Joshua', 'Abuto', '1998-08-23', | Just put `TRUE` or `FALSE` as below
```
INSERT INTO users
(user_id, user_name, joined, password, first_name, last_name, date_of_birth, gender)
VALUES
('150/08', 'Otollo', NOW(), SHA('WUODBABA'), 'Joshua', 'Abuto', '1998-08-23',TRUE)
```
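MySQL's `BOOL` is an alias for `TINYINT(1)`, so `TRUE`/`FALSE` end up stored as 1/0. A quick way to see the same behaviour from client code (SQLite shown here, which handles Python booleans the same way; the table is a trimmed-down stand-in for the real one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_name TEXT, gender BOOLEAN)")
# The Python boolean True is adapted to the integer 1 on insert.
conn.execute("INSERT INTO users VALUES (?, ?)", ("Otollo", True))
stored = conn.execute("SELECT gender FROM users").fetchone()[0]
```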
[**Here you can find more information**](http://wiki.ispirer.com/sqlways/mysql/data-types/bool) | use true and false such as
```
INSERT INTO users
(user_id, user_name, joined, password, first_name, last_name, date_of_birth, gender)
VALUES
('150/08', 'Otollo', NOW(), SHA('WUODBABA'), 'Joshua', 'Abuto', '1998-08-23',TRUE)
```
Also use this link to check all the possible ways to insert a bool value:
<http://www.postgresql.org/docs/8.1/static/datatype-boolean.html> | How do i insert values for Bool in my insert values command? | [
"",
"mysql",
"sql",
"database",
"command-line",
""
] |
I have a database table that contains multiple `url` values and a date (`created_at`) associated with each of these `url`s.
I would like to have something like:
```
Select DISTINCT url, "the first date of this url"
from database
where blabala
```
My problem is when a add the date to the select I get this:
```
/url/sdfsd | 2014-07-19
/url/sdfsd | 2014-07-20
/url/sdfsd | 2014-07-25
```
And what I want is only:
```
/url/sdfsd | 2014-07-19
```
---
I realise that I oversimplified my problem, but thanks to you guys I managed to find a solution:
```
select req2.date, COUNT(DATE(req2.date)) as count
from (
select hash_request -> 'PATH_INFO', min(DATE(created_at)) as date
from (
select *
from request_statistics
where LOWER(hash_request -> 'HTTP_USER_AGENT') LIKE '%google%'
) req1
group by hash_request -> 'PATH_INFO'
) req2
group by req2.date
order by req2.date asc
```
I had difficulty grouping the dates across all the unique urls. Now I have, for each day, the number of unique urls that first appeared on that day. | Are the records sorted?
```
select url, min(created_at)
from databaseTable
where blabala
group by url
``` | If there are other fields that you want as well, then the `distinct on` syntax may be what you need:
```
Select DISTINCT ON (url) d.*
from database d
where blabala
order by url, created_at asc;
``` | SQL return a Distinct column and the first date of the distinct column | [
"",
"sql",
"postgresql",
"aggregate-functions",
"greatest-n-per-group",
""
] |
I have a large Stored Procedure and there is a subquery within it that looks like the following.
```
SELECT COUNT(userID) AS totalLikes FROM apsLikes WHERE likeID = 'empowerment_'+A.[submissionID]
FOR XML PATH ('likes'), TYPE, ELEMENTS
```
Within the column `likeID` I have values of the form `prefix_id`.
When I try to run the query I get this error:
```
Conversion failed when converting the varchar value 'empowerment_' to data type int.
```
The data type of likeID is varchar(60), so I'm not sure what this means. | Just do an explicit `cast()`:
```
SELECT COUNT(userID) AS totalLikes
FROM apsLikes
WHERE likeID = 'empowerment_' + cast(A.[submissionID] as varchar(255))
FOR XML PATH ('likes'), TYPE, ELEMENTS;
```
The problem is that `+` can mean either addition or string concatenation. Because the second argument is an integer, it assumes the `+` is addition, and `'empowerment_'` cannot be converted to an integer. Hence, the error. | SubmissionID is probably of the data type int. Try this:
```
SELECT COUNT(userID) AS totalLikes
FROM apsLikes WHERE likeID = 'empowerment_'+ CAST(A.[submissionID] AS NVARCHAR(255))
FOR XML PATH ('likes'), TYPE, ELEMENTS
``` | SQL Sub Query with column prefix | [
"",
"sql",
"t-sql",
"stored-procedures",
"sql-server-2012",
""
] |
I have Windows 7 Home edition SP1 installed on my system, and I downloaded and installed SQL Server 2008 R2 Management Studio. But when I try to connect via .\SQLEXPRESS, SSMS shows this error:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1) For help, click: <http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476>.
When I go back to check the local services, I don't find any SQL Server service running.
What do I have to do? Do I need to install a SQL Server service on my system, or how should I proceed? | > downloaded Sql server management 2008 r2 and got it installed. Its
> getting installed but when I try to connect it via .\SQLEXPRESS it
> shows error. DO I need to install any SQL service on my system?
You installed management studio which is just a management interface to SQL Server. If you didn't (which is what it seems like) already have SQL Server installed, you'll need to install it in order to have it on your system and use it.
<http://www.microsoft.com/en-us/download/details.aspx?id=1695> | Here is an updated link to install the SQL Server database engine on a local machine. Once it is installed you can access the local database, and you will see the SQL Server Windows service instance up and running in your services list window.
<https://www.microsoft.com/en-us/sql-server/sql-server-downloads> | SQL Server Service not available in service list after installation of SQL Server Management Studio | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am working on a FIFO implementation in SQL.
I have a batch number concept in my application.
When I sell from inventory, my application should tell me which inventory came in first.
Let's say I purchased inventory 'A' on 4-Aug, 5-Aug & 6-Aug:
```
On 4th Aug - A Inventory has batch number BT002 - 10 (Qty)
On 5th Aug - A's Inventory has batch number BT003 - 15 (Qty)
On 6th Aug - A's Inventory has batch number BT001 - 10 (Qty)
```
So, Now I am having stock Now in my hand as following :
```
A Inventory
BT002 - 10 - 4-Aug
BT003 - 15 - 5-Aug
BT001 - 10 - 6-Aug
```
Now if I want to sell that inventory to anybody, my application should tell me to sell the
BT002 (batch number) inventory first, because it came in first.
That is the concept I am using in my application.
Now I want to sell 15 Qty from 'A' (Inventory).
The output should then be like this:
```
BT002 - 10
BT003 - 5
```
Here's My Query :
```
SELECT ISNULL(SUM(qty),0) AS Qty,batch_no,accept_date FROM RS_GIN_Master
GROUP BY batch_no,accept_date
HAVING ISNULL(SUM(qty),0) <= 15
ORDER BY accept_date asc
```
Output of the given query:

How can I get output like this:
```
BT002 - 10
BT003 - 5
```
Any help will be appreciated.
Thank you in advance. | This should work for you:
[Working sample on Fiddle](http://sqlfiddle.com/#!3/b9887/1)
```
CREATE FUNCTION [dbo].[GetBatchAmounts]
(
@requestedAmount int
)
RETURNS
@tBatchResults TABLE
(
Batch nvarchar(50),
Amount int
)
AS
BEGIN
/*This is just a mock of the results of your query*/
DECLARE @RS_GIN_Master TABLE(
Qty int,
batch_no NVARCHAR(max),
accept_date DATETIME
)
insert into @RS_GIN_Master(Qty,batch_no,accept_date)
SELECT 10,'BT002', CAST(CAST(2014 AS varchar) + '-' + CAST(8 AS varchar) + '-' + CAST(4 AS varchar) AS DATETIME)
insert into @RS_GIN_Master(Qty,batch_no,accept_date)
SELECT 10,'BT003', CAST(CAST(2014 AS varchar) + '-' + CAST(8 AS varchar) + '-' + CAST(5 AS varchar) AS DATETIME)
insert into @RS_GIN_Master(Qty,batch_no,accept_date)
SELECT 10,'BT001', CAST(CAST(2014 AS varchar) + '-' + CAST(8 AS varchar) + '-' + CAST(6 AS varchar) AS DATETIME)
/*---------------------------*/
DECLARE @Qty int
DECLARE @batch_no NVARCHAR(max)
DECLARE @accept_date DATETIME
DECLARE myCursor CURSOR FOR
SELECT Qty, batch_no, accept_date FROM @RS_GIN_Master ORDER BY accept_date ASC
OPEN myCursor
FETCH NEXT FROM myCursor INTO @Qty, @batch_no,@accept_date
WHILE (@@FETCH_STATUS = 0 AND @requestedAmount > 0 )
BEGIN
Declare @actualQty int
IF @requestedAmount > @Qty
SET @actualQty = @Qty
ELSE
SET @actualQty = @requestedAmount
INSERT INTO @tBatchResults (batch, Amount)
SELECT @batch_no, @actualQty
set @requestedAmount = @requestedAmount - @actualQty
FETCH NEXT FROM myCursor INTO @Qty, @batch_no,@accept_date
END /*WHILE*/
CLOSE myCursor
DEALLOCATE myCursor
RETURN
END
```
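As an aside: on SQL Server 2012 or later, the same FIFO allocation can be done set-based, without a cursor, using a running total. This is only a sketch against the `RS_GIN_Master` table from the question, not tested against your real schema:

```sql
DECLARE @requestedAmount int = 15;

WITH running AS (
    SELECT batch_no, qty,
           -- cumulative quantity in FIFO (oldest-first) order
           SUM(qty) OVER (ORDER BY accept_date
                          ROWS UNBOUNDED PRECEDING) AS running_total
    FROM RS_GIN_Master
)
SELECT batch_no,
       CASE WHEN running_total <= @requestedAmount
            THEN qty                                      -- whole batch is consumed
            ELSE @requestedAmount - (running_total - qty) -- last batch is only partially used
       END AS Amount
FROM running
WHERE running_total - qty < @requestedAmount              -- only batches the sale touches
ORDER BY running_total;
```

With the sample data (BT002 - 10, BT003 - 15, BT001 - 10) and a request of 15, this yields BT002 - 10 and BT003 - 5.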
Just make sure to replace the marked part of the function with your query... | You need to create a stored procedure in your database that takes the quantity from your stock table. You should also have the ID of each record so you can update the records from which you have taken that quantity.
```
Alter PROCEDURE sp_UpdateStockForSale
@batchNO varchar(10),
@qty decimal(9,3)
AS
BEGIN
Create Table #tmpOutput(ID int identity(1,1), StockID int, batchNo varchar(10), qty decimal(9,3));
SET NOCOUNT ON;
DECLARE @ID int;
DECLARE @Stock Decimal(9,3);
DECLARE @TEMPID int;
Select @TEMPID=(Max(ID)+1) From RS_GIN_Master Where qty > 0 And batch_no = @batchNO;
While (@qty > 0) BEGIN
Select @ID=ID, @Stock=qty From RS_GIN_Master Where qty > 0 And batch_no = @batchNO AND ID < @TEMPID Order By accept_date Desc;
--If Outward Qty is more than Stock
IF (@Stock < @qty) BEGIN
SET @qty = @qty - @Stock;
SET @Stock = 0;
END
--If Outward Qty is less than Stock
ELSE BEGIN
SET @Stock = @Stock - @qty;
SET @qty = 0;
END
Insert Into #tmpOutput(StockID,batchNo,qty)Values(@ID,@batchNO,@Stock);
SET @TEMPID = @ID;
--This will update that record don't need it now.
--Update RS_GIN_Master Set qty = @Stock Where ID=@ID
END
Select StockID, batchNo, qty From #tmpOutput;
END
GO
```
The above example is not compiled, but it shows the logic for retrieving records from your stock table according to the FIFO method. You can use `accept_date` instead of `ID` in the RS\_GIN\_Master table, but I would prefer to make it unique so that a specific record can be fetched when needed.
"",
"sql",
"sql-server",
""
] |
I have inherited a MySQL InnoDB table with around 500 million rows. The table has IP numbers and the name of the ISP to which that number belongs, both as strings.
Sometimes, I need to update the name of an ISP to a new value, after company changes such as mergers or rebranding. But, because the table is so big, a simple UPDATE...WHERE statement doesn't work - The query usually times out, or the box runs out of memory.
So, I have written a stored procedure which uses a cursor to try and make the change one record at a time. When I run the procedure on a small sample table, it works perfectly. But, when I try to run it against the whole 500 million row table in production, I can see a temporary table gets created (because a /tmp/xxx.MYI and /tmp/xxx.MYD file appear). The temporary table file keeps growing in size until it uses all available disk space on the box (around 40 GB).
I'm not sure why this temporary table is necessary. Is the server trying to maintain some kind of rollback state? My real question is, can I change the stored procedure such that the temporary table is not created? I don't really care if some, but not all of the records get updated - I can easily add some reporting and just keep running the proc until no records are altered.
At this time, architecture changes are not really an option – I can't change the structure of the table, for example.
Thanks in advance for any help.
David
This is my stored proc;
```
DELIMITER $$
DROP PROCEDURE IF EXISTS update_isp;
CREATE PROCEDURE update_isp()
BEGIN
DECLARE v_finished INT DEFAULT 0;
DECLARE v_num VARCHAR(255) DEFAULT "";
DECLARE v_isp VARCHAR(255) DEFAULT "";
DECLARE ip_cursor CURSOR FOR
SELECT ip_number, isp FROM ips;
DECLARE CONTINUE HANDLER
FOR NOT FOUND SET v_finished = 1;
OPEN ip_cursor;
get_ip: LOOP
IF v_finished = 1 THEN
LEAVE get_ip;
END IF;
FETCH ip_cursor INTO v_num, v_isp;
IF v_isp = 'old name' THEN
UPDATE ips SET isp = 'new name' WHERE ip_number = v_num;
END IF;
END LOOP get_ip;
CLOSE ip_cursor;
END$$
DELIMITER ;
CALL update_isp();
```
I have also tried wrapping the update statement in a transaction. It didn't make any difference.
[EDIT] My assumption below, that a simple counting procedure does not create a temporary table, was wrong. The temporary table is still created, but it grows more slowly and the box does not run out of disk space before the procedure completes.
So the problem seems to be that any use of a cursor in a stored procedure results in a temporary table being created. I have no idea why, or if there is any way to prevent this. | I don't think there is a solution to this problem.
From this page: <http://spec-zone.ru/mysql/5.7/restrictions_cursor-restrictions.html>
> In MySQL, a server-side cursor is materialized into an internal
> temporary table. Initially, this is a MEMORY table, but is converted
> to a MyISAM table when its size exceeds the minimum value of the
> max\_heap\_table\_size and tmp\_table\_size system variables.
I misunderstood how cursors work. I assumed that my cursor functioned as a pointer to the underlying table. But, it seems MySQL must build the full result set first, and then give you a pointer to that. So, I don't really understand the benefits of cursors in MySQL. Thanks to everyone who tried to help.
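For the record, a common cursor-free workaround is to update in small chunks, so no single statement has to scan and lock the whole 500-million-row table at once. This is a sketch; MySQL permits `LIMIT` on a single-table `UPDATE`, and the batch size of 10,000 is just an example:

```sql
-- Run repeatedly (e.g. from a shell loop or a scheduled event) until it
-- reports 0 affected rows. An index on isp makes each pass cheap.
UPDATE ips
SET isp = 'new name'
WHERE isp = 'old name'
LIMIT 10000;
```

Each statement commits a small transaction, so rollback/undo space stays bounded instead of growing with the full result set.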
David | If your update is essentially:
```
UPDATE ips
SET isp = 'new name'
WHERE isp = OLDNAME;
```
I am guessing that this `update` -- without the cursor -- will work better if you have an index on `isp(isp)`:
```
create index idx_isp_isp on isp(isp);
```
Your original query should be fine once this index is created. There should be no performance issue updating a single row even in a very large table. The issue is in all likelihood *finding* the row, not *updating* it. | MySQL stored procedure on big table eats server disk space | [
"",
"mysql",
"sql",
"stored-procedures",
""
] |
I'm trying to order the output order of some distinct aggregated text based on the value of another column with something like:
```
string_agg(DISTINCT sometext, ' ' ORDER BY numval)
```
However, that results in the error:
> ERROR: in an aggregate with DISTINCT, ORDER BY expressions must appear in argument list
I do understand why this is, since the ordering would be "ill-defined" if the `numval` of two repeated values differs, with that of another lying in-between.
Ideally, I would like to order them by first appearance / lowest order-by value, but the ill-defined cases are actually rare enough in my data (it's mostly sequentially repeated values that I want to get rid of with the `DISTINCT`) that I ultimately don't particularly care about their ordering and would be happy with something like MySQL's `GROUP_CONCAT(DISTINCT sometext ORDER BY numval SEPARATOR ' ')` that simply works despite its sloppiness.
I expect some Postgres contortionism will be necessary, but I don't really know what the most efficient/concise way of going about this would be. | If this is part of a larger expression, it might be inconvenient to do a `select distinct` in a subquery. In this case, you can take advantage of the fact that `string_agg()` ignores `NULL` input values and do something like:
```
select string_agg( (case when seqnum = 1 then sometext end) order by numval)
from (select sometext, row_number() over (partition by <whatever>, sometext order by numval) as seqnum
from t
) t
group by <whatever>
```
The subquery adds a column but does not require aggregating the data. | ### Building on `DISTINCT ON`
```
SELECT string_agg(sometext, ' ' ORDER BY numval) AS no_dupe
FROM (
SELECT DISTINCT ON (1,2) <whatever>, sometext, numval
FROM tbl
ORDER BY 1,2,3
) sub;
```
This is the simpler equivalent of [@Gordon's query](https://stackoverflow.com/a/25181698/939860).
From your description alone I would have suggested [@Clodoaldo's simpler variant](https://stackoverflow.com/a/25181026/939860).
### `uniq()` for integer
For `integer` values instead of `text`, the additional module [`intarray`](https://www.postgresql.org/docs/current/intarray.html) has **just the thing** for you:
```
uniq(int[]) int[] remove adjacent duplicates
```
Install it once per database with:
```
CREATE EXTENSION intarray;
```
Then the query is simply:
```
SELECT uniq(array_agg(some_int ORDER BY <whatever>, numval)) AS no_dupe
FROM tbl;
```
Result is an array, wrap it in [`array_to_string()`](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE) if you need a string.
Related:
* [How to create an index for elements of an array in PostgreSQL?](https://stackoverflow.com/questions/10867577/in-postgresql-how-do-you-create-an-index-by-each-element-of-an-array/10868144#10868144)
* [Compare arrays for equality, ignoring order of elements](https://stackoverflow.com/questions/12870105/compare-arrays-for-equality-ignoring-order-of-elements/12870508#12870508)
In fact, it wouldn't be hard to create a custom aggregate function to do the same with `text` ...
## Custom aggregate function for any data type
Function that only adds next element to array if it is different from the previous. (`NULL` values are removed!):
```
CREATE OR REPLACE FUNCTION f_array_append_uniq (anyarray, anyelement)
RETURNS anyarray
LANGUAGE sql STRICT IMMUTABLE AS
'SELECT CASE WHEN $1[array_upper($1, 1)] = $2 THEN $1 ELSE $1 || $2 END';
```
Using [polymorphic types](https://stackoverflow.com/a/14712776/939860) to make it work for *any* scalar data-type.
Custom aggregate function:
```
CREATE AGGREGATE array_agg_uniq(anyelement) (
SFUNC = f_array_append_uniq
, STYPE = anyarray
, INITCOND = '{}'
);
```
Call:
```
SELECT array_to_string(
array_agg_uniq(sometext ORDER BY <whatever>, numval)
, ' ') AS no_dupe
FROM tbl;
```
Note that the aggregate is `PARALLEL UNSAFE` (default) by nature, even though the transition function could be marked `PARALLEL SAFE`.
Related answer:
* [Custom PostgreSQL aggregate for circular average](https://stackoverflow.com/questions/10225357/custom-postgresql-aggregate-for-circular-average/10226197#10226197) | Ordering distinct column values by (first value of) other column in aggregate function | [
"",
"sql",
"postgresql",
"sql-order-by",
"distinct",
"aggregate-functions",
""
] |
I need language to set a year in an SQL query. If the current date is between 10/1 and 12/31, I need the year value to be CurrentYear-5, and if the current date is between 1/1 and 9/30, I need the year value to be CurrentYear-6. This is how I would state it, but I know this isn't quite SQL yet. [END] is the date field that is being evaluated and it contains a full date (e.g., dd/mm/yyyy).
```
WHERE
if CurrentDay BETWEEN 10/1 AND 12/31
Year([END]) = CurrentYear-5
else
Year([END]) = CurrentYear-6
endif
``` | You can do this in Access with the `DATEPART()` function using two sets of criteria with `OR` rather than a `CASE` statement:
```
WHERE (DATEPART("yyyy",[END]) = DATEPART("yyyy",DATE())-5 AND DATEPART("q",DATE()) = 4)
OR (DATEPART("yyyy",[END]) = DATEPART("yyyy",DATE())-6 AND DATEPART("q",DATE()) <> 4)
```
This requires that either the `[END]` field is 5 years back and it's currently the 4th quarter, or that the `[END]` field is 6 years back and it's not the 4th quarter. | This might work
```
WHERE
YEAR([END]) =
CASE WHEN DATEPART(Q,CURRENT_TIMESTAMP) = 4
THEN YEAR(CURRENT_TIMESTAMP) - 5
ELSE YEAR(CURRENT_TIMESTAMP) - 6
END
``` | SQL Language to set a year from the current year | [
"",
"sql",
"date",
"ms-access",
""
] |
Alright I'm brand new to SQL Server and I have a script to rename constraints in a database to abide by naming standards. I would just like it explained because I'm very confused.
```
IF @PrimaryKeys = 1
BEGIN
SELECT @sql = @sql + @cr + @cr + '/* Primary Keys */' + @cr;
SELECT @sql = @sql + @cr + 'EXEC sp_rename @objname = '''
+ REPLACE(name, '''', '''''') + ''', @newname = ''PK_'
+ LEFT(REPLACE(object_name(parent_object_id), '''', ''), @TableLimit) + ''';'
FROM sys.key_constraints
WHERE type = 'PK'
AND is_ms_shipped = 0;
END
```
The part I'm most confused about is the `replace(name, '''', '''''')`. I know that it's taking a string, finding something in it, and replacing that something, but I don't understand all of the single quotes. | Your script is generating something called 'Dynamic SQL' and storing it in a variable. It's concatenating together several strings, columns, and other things into a single long text value.
SQL Server, though, understands a text value as something that starts and ends with a single quote (apostrophe). If you need to pass a single quote into that text value (like here), you actually have to type out two quotes together - SQL will understand then that you're not trying to indicate the start or end of a text string, but instead want to keep the apostrophe.
So, your REPLACE() function is stuffing in extra apostrophes so that your column's name (e.g., "O'Laughlin") will be read by the concatenation as "O''Laughlin" and then be understood to actually represent the value "O'Laughlin". | It replaces single quotes with double quotes:
```
before after
------ -----
it's it''s
``` | Confused about SQL Server procedure to rename constraints | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a certain day range, let's say 8-01 to 8-08. I also have a database that has historical data from 1975-2014. Dates are formatted in yyyy-mm-dd form. So for example, today is 2014-08-06. How can I get historical records from 8-01 to 8-08 for all years? Keep in mind my date range might not span nicely across the same month. Another example would be 7/31 to 8/07.
Basically, I want to be able to do BETWEEN(%-8-01 AND %-8-08) where % is a wildcard. However, wildcards seem incompatible with date objects in MS SQL. Do I need to convert the dates to strings? What is the most efficient way of getting a generic month-day range independent of year?
Thanks. | This is a different approach than my other answer. It will make all dates have a year of `1900`. Then you only need to take whatever is between your selected dates.
```
SELECT *
FROM Table1
WHERE DATEADD
(
year,
-DATEDIFF(year,'19000101',dateField),
dateField
) BETWEEN '19000801' AND '19000818'
``` | You can extract the month and date part and compare those:
```
where
month(<yourdate>) = 8
and day(<yourdate>) between 1 and 8
```
Be aware that if you have an index on this column, you won't be using it this way. | How to format a wildcard query for dates in MS SQL? | [
"",
"sql",
"sql-server",
"date",
"formatting",
""
] |
If I have dates/times like these:
```
8/5/2014 12:00:01 AM
8/5/2014 12:00:16 AM
8/5/2014 12:00:18 AM
8/5/2014 12:17:18 AM
8/5/2014 12:19:18 AM
```
I want these dates/times rounded down as follows:
* if the minutes are at least 00 and less than 15, I want the minutes to be 00
* if the minutes are at least 15 and less than 30, I want the minutes to be 15
* if the minutes are at least 30 and less than 45, I want the minutes to be 30
* if the minutes are at least 45 and less than 60, I want the minutes to be 45
```
8/5/2014 12:00:00 AM
...
...
8/5/2014 12:15:00AM
...
...
8/5/2014 12:30:00AM
...
...
8/5/2014 12:45:00AM
```
I need to do that for my report.
How can I apply this in Oracle? | Here's another method, which is really a variation of Gordon Linoff's approach:
```
trunc(dt, 'HH24') + ((15/1440) * (floor(to_number(to_char(dt, 'MI'))/15)))
```
The `trunc(dt, 'HH24')` gives you the value truncated to hour precision, so for your sample that's always midnight. Then `floor(to_number(to_char(dt, 'MI'))/15)` gives you the number of complete 15-minute periods represented by the minute value; with your data that's either zero or 1. As Gordon mentioned, when you add a numeric value to a date it's treated as a fraction of a day, so that needs to be multiplied by '15 minutes' (15/1440).
```
with t as (
select to_date('8/5/2014 12:00:01 AM') as dt from dual
union all select to_date('8/5/2014 12:00:16 AM') as dt from dual
union all select to_date('8/5/2014 12:00:18 AM') as dt from dual
union all select to_date('8/5/2014 12:17:18 AM') as dt from dual
union all select to_date('8/5/2014 12:19:18 AM') as dt from dual
union all select to_date('8/5/2014 12:37:37 AM') as dt from dual
union all select to_date('8/5/2014 12:51:51 AM') as dt from dual
)
select dt, trunc(dt, 'HH24')
+ ((15/1440) * (floor(to_number(to_char(dt, 'MI'))/15))) as new_dt
from t;
DT NEW_DT
---------------------- ----------------------
08/05/2014 12:00:01 AM 08/05/2014 12:00:00 AM
08/05/2014 12:00:16 AM 08/05/2014 12:00:00 AM
08/05/2014 12:00:18 AM 08/05/2014 12:00:00 AM
08/05/2014 12:17:18 AM 08/05/2014 12:15:00 AM
08/05/2014 12:19:18 AM 08/05/2014 12:15:00 AM
08/05/2014 12:37:37 AM 08/05/2014 12:30:00 AM
08/05/2014 12:51:51 AM 08/05/2014 12:45:00 AM
``` | To add yet another method this looks like a situation where the function [`NUMTODSINTERVAL()`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions117.htm#SQLRF00682) could be useful - it makes it slightly more obvious what's happening:
```
select trunc(dt)
+ numtodsinterval(trunc(to_char(dt, 'sssss') / 900) * 15, 'MINUTE')
from ...
```
[`TRUNC()`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions220.htm#SQLRF06151) truncates the date to the beginning of that day. The [format model](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements004.htm#SQLRF51078) `sssss` calculates the number of seconds since midnight. The number of complete quarter-hours since midnight is the number of seconds divided by 900 (as there are 900 seconds in each quarter-hour). This is truncated again to remove any part-completed quarter-hours, then multiplied by 15 to give the number of minutes (there are 15 minutes in a quarter-hour). Lastly, convert this to an INTERVAL DAY TO SECOND and add it to the original date.
```
SQL> alter session set nls_date_format = 'dd/mm/yyyy hh24:mi:ss';
Session altered.
SQL> with t as (
2 select to_date('05/08/2014 12:00:01') as dt from dual union all
3 select to_date('05/08/2014 12:00:16') as dt from dual union all
4 select to_date('05/08/2014 12:00:18') as dt from dual union all
5 select to_date('05/08/2014 12:17:18') as dt from dual union all
6 select to_date('05/08/2014 12:19:18') as dt from dual union all
7 select to_date('05/08/2014 12:37:37') as dt from dual union all
8 select to_date('05/08/2014 12:51:51') as dt from dual
9 )
10 select trunc(dt)
11 + numtodsinterval(trunc(to_char(dt, 'sssss') / 900) * 15, 'MINUTE')
12 from t
13 ;
TRUNC(DT)+NUMTODSIN
-------------------
05/08/2014 12:00:00
05/08/2014 12:00:00
05/08/2014 12:00:00
05/08/2014 12:15:00
05/08/2014 12:15:00
05/08/2014 12:30:00
05/08/2014 12:45:00
7 rows selected.
```
I've explicitly set my NLS\_DATE\_FORMAT so I can rely on implicit conversion in `TO_DATE()` so that it fits on the page without scrolling. It is *not* recommended to use implicit conversion normally. | how to convert date format in sql | [
"",
"sql",
"oracle",
""
] |
Hey, I have a problem with Laravel on my site [Phones number lookup](http://phoneslookup.com/). I am trying to select places that have the city I choose, through the contacts table.
My model classes:
Places class:
```
class Places extends Eloquent {
public function contacts()
{
return $this->hasOne('Contacts');
}
public function clubs()
{
return $this->hasOne('Clubs');
}
}
```
Contacts class:
```
class Contacts extends Eloquent {
public function cities()
{
return $this->hasOne('Cities');
}
}
```
Cities class:
```
class Cities extends Eloquent {
}
```
My query:
```
$view->clubs = Places::whereHas('contacts',function ($q) use($city_id){
$q->where('contacts', function ($q) use($city_id){
$q->where('id', $city_id);
});
})->get();
```
Error msg:
> MySQL server version for the right syntax to use near 'where `id` = ?)) >= 1' at line 1 (SQL: select \* from `places` where (select count(\*) from `contacts` where `contacts`.`places_id` = `places`.`id` and `contacts` = (select \* where `id` = 2223)) >= 1)
I know it is missing "from `cities`", but I do not know how to achieve it.
 | You have 3 options using relations:
1 most straightforward solution:
```
Places::whereHas('contacts',function ($q) use ($city_id){
$q->whereHas('cities', function ($q) use ($city_id){
$q->where('id', $city_id);
});
})->get();
```
2 the same as above but using this PR: <https://github.com/laravel/framework/pull/4954>
```
Places::whereHas('contacts.cities', function ($q) use ($city_id){
$q->where('id', $city_id);
})->get();
```
3 Using `hasManyThrough` relation:
```
// Place model
public function cities()
{
return $this->hasManyThrough('City', 'Contact');
}
// then
Places::whereHas('cities',function ($q) use ($city_id){
$q->where('cities.id', $city_id);
})->get();
```
---
## edit
Having seen your schema, it's obvious that neither the suggestions above nor your original setup can work.
This is a many-to-many relation which in Eloquent is `belongsToMany`:
```
// Places model
public function cities()
{
return $this->belongsToMany('Cities', 'contacts', 'places_id', 'cities_id')
->withPivot( .. contacts table fields that you need go here.. );
}
// Cities model
public function places()
{
return $this->belongsToMany('Places', 'contacts', 'cities_id', 'places_id')
->withPivot( .. contacts table fields that you need go here.. );
}
```
Then you can call relations like this:
```
$city = Cities::first();
$city->places; // collection of Places models
// contacts data for a single city-place pair
$city->places->first()->pivot->open_hours; // or whatever you include in withPivot above
```
Now, there's another way to setup this, in case you need also `Contacts` model itself:
```
// Places model
public function contact()
{
return $this->hasOne('Contacts', 'places_id');
}
// Contacts model
public function city()
{
return $this->belongsTo('Cities', 'cities_id');
}
public function place()
{
return $this->belongsTo('Places', 'places_id');
}
// Cities model
public function contact()
{
return $this->hasOne('Contacts', 'cities_id');
}
```
then:
```
$city = Cities::first();
$city->contact; // Contacts model
$city->contact->place; // Places model
```
`hasManyThrough` won't work here at all | Since you know the city ID and want to find the corresponding place from it, you can start at the city and work back to the place. To do this you will need to define the inverse of your relationships.
```
// Add this function to your Cities Model
public function contact()
{
return $this->belongsTo('Contact');
}
// Add this function to your Contacts Model
public function place()
{
return $this->belongsTo('Places');
}
```
Now you can query the City and find Place.
```
$place = Cities::find($city_id)->contact->place;
```
**EDIT:**
Added missing return in functions | Laravel Eloquent ORM "whereHas" through table | [
"",
"sql",
"laravel",
"orm",
"eloquent",
""
] |
I have a table with a varying amount of columns of varying names.
All columns are one of the following:
* identity columns,
* NULL columns,
* or have a default value
Now I want to insert into this table a new row and read its content.
I tried all the following:
```
INSERT INTO globalsettings() VALUES()
INSERT INTO globalsettings VALUES()
INSERT INTO globalsettings VALUES
INSERT INTO globalsettings
```
Did I miss the correct syntax, or won't I be able to insert an all-defaults row? | ```
INSERT INTO globalsettings DEFAULT VALUES;
```
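Since you also wanted to read the new row's content, you can combine this with the `OUTPUT` clause (available since SQL Server 2005); a minimal sketch:

```sql
INSERT INTO globalsettings
OUTPUT inserted.*   -- returns the inserted row: identity, NULLs and defaults included
DEFAULT VALUES;
```

This inserts the all-defaults row and returns it in a single statement, so you don't need a separate `SELECT` afterwards.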
you can find the description here: <http://msdn.microsoft.com/en-us/library/ms174335.aspx> | You could do something like this:
```
INSERT INTO globalsettings (Column1) VALUES (DEFAULT)
```
It will use the default value for `Column1` and all other columns. | INSERT a row only with default or null values | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm facing a SQL problem that deals with selecting rows based on certain conditions. Example table below called 'users' as u:
```
id category label date rank count
111 weak FFF 2014-06-01 1 4
111 strong DDD 2014-06-02 2 4
111 strong BBB 2014-06-03 3 4
111 weak RRR 2014-06-04 4 4
222 weak WWW 2014-07-01 1 3
222 weak YYY 2014-07-02 2 3
222 weak ZZZ 2014-07-03 3 3
```
There are two unique IDs (`111` and `222`). I want to retrieve the very last `u.label` for each `distinct ID` assuming all values under `users.category == 'weak'` (as in `ID 222`).
However, if there exists `'strong'` under `users.category`, return the `u.label` corresponding to the **latest** date where `users.category == 'strong'`.
Essentially, I'm looking for the following:
```
id category label date
111 strong BBB 2014-06-03
222 weak ZZZ 2014-07-03
```
What I've tried so far was a query with `"where rank=count"` to get the last record but I'm drawing a blank as to how I can select `u.label where category=='strong'` given its last record is `u.category=='weak'`, as in `ID 111`.
Thank you, and please let me know if I can clarify any further! | Something like this, maybe:
```
select * from (
select t.*,
row_number() over (
partition by id
order by category asc, dt desc) rn
from t
) tt where rn = 1;
```
SQLFiddle: <http://sqlfiddle.com/#!1/475d49/1/0> | No need for window functions
```
select distinct on (id) id, category, dt
from t
order by id, category = 'weak', dt desc
```
[SQL Fiddle](http://sqlfiddle.com/#!1/475d49/3)
`false` sorts before `true` so when `category = 'weak'` it will sort last
You can add other columns to the select list. I just reused @mustaccio's Fiddle which has no `label` column.
Check `distinct on`:
<http://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT> | SQL: How do I select fields based on varying existing conditions? | [
"",
"sql",
"postgresql",
"select",
"conditional-statements",
""
] |
Hi, I am trying to replicate an Excel SUMPRODUCT in SQL but am struggling. I have some values for some asset types and another table with weightings.
```
R1 TBL1
R2 Sim Type A Type B Type C
R3 1 1.836 1.794 1.153
R4 2 1.629 1.128 1.928
R5 3 1.616 1.956 1.411
R6 4 1.350 1.590 1.958
R7
R8
R9 TBL2
R10 Asset ID Type A Type B Type C
R11 BA Der 12% 2% 5%
R12 BSL ENH 0% 20% 1%
R13 BSL Der 42% 6% 7%
```
In Excel I use the following formulas to create my output:
```
Output (formulas)
Sim BA Der BSL ENH
1 =SUMPRODUCT(B3:D3,$B$11:$D$11) =SUMPRODUCT(B3:D3,$B$12:$D$12)
2 =SUMPRODUCT(B4:D4,$B$11:$D$11) =SUMPRODUCT(B4:D4,$B$12:$D$12)
3 =SUMPRODUCT(B5:D5,$B$11:$D$11) =SUMPRODUCT(B5:D5,$B$12:$D$12)
4 =SUMPRODUCT(B6:D6,$B$11:$D$11) =SUMPRODUCT(B6:D6,$B$12:$D$12)
Output (values)
Sim BA Der BSL ENH
1 0.313824843 0.37037487
2 0.314473553 0.244925331
3 0.303555238 0.405301715
4 0.291739471 0.33764572
```
So essentially I am using SUMPRODUCT to apply different weighting categories to the simulations
I am looking to do this in Access or SQL Server, any suggestions? | Try:
```
SELECT Sim, ((A.TypeA * B.TypeA) + (A.TypeB * B.TypeB) + (A.TypeC * B.TypeC)) 'BA Der',
((A.TypeA * C.TypeA) + (A.TypeB * C.TypeB) + (A.TypeC * C.TypeC)) 'BSL ENH'
FROM tbl1 A, tbl2 B, tbl2 C
WHERE B.AssetID = 'BA Der'
AND C.AssetID = 'BSL ENH'
```
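The same calculation can also be written with explicit join syntax, which some find easier to read (equivalent result; a sketch using the same table and column names as above):

```sql
SELECT A.Sim,
       A.TypeA * B.TypeA + A.TypeB * B.TypeB + A.TypeC * B.TypeC AS [BA Der],
       A.TypeA * C.TypeA + A.TypeB * C.TypeB + A.TypeC * C.TypeC AS [BSL ENH]
FROM tbl1 A
CROSS JOIN tbl2 B   -- weights row for 'BA Der'
CROSS JOIN tbl2 C   -- weights row for 'BSL ENH'
WHERE B.AssetID = 'BA Der'
  AND C.AssetID = 'BSL ENH';
```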
See `Demo` | This should be the basic implementation of a sumproduct equivalent:
```
SELECT id, SUM(type * weight) AS sum_product FROM tbl_TableName GROUP BY id
```
Try adjusting this to your case. | SUMPRODUCT In SQL | [
"",
"sql",
"sql-server",
""
] |
I'm generating a data set that looks like this
```
category user total
1 jonesa 0
2 jonesa 0
3 jonesa 0
1 smithb 0
2 smithb 0
3 smithb 5
1 brownc 2
2 brownc 3
3 brownc 4
```
Where a particular user has 0 records in **all** categories, is it possible to remove their rows from the set? If a user has some activity, like smithb does, I'd like to keep all of their records, even the zero rows. I'm not sure how to go about that; I thought a CASE statement might help, but this is pretty complicated for me. Here is my query:
```
SELECT DISTINCT c.category,
u.user_name,
CASE WHEN (
SELECT COUNT(e.entry_id)
FROM category c1
INNER JOIN entry e1
ON c1.category_id = e1.category_id
WHERE c1.category_id = c.category_id
AND e.user_name = u.user_name
AND e1.entered_date >= TO_DATE ('20140625','YYYYMMDD')
AND e1.entered_date <= TO_DATE ('20140731', 'YYYYMMDD')) > 0 -- I know this won't work
THEN 'Yes'
ELSE NULL
END AS TOTAL
FROM user u
INNER JOIN role r
ON u.id = r.user_id
AND r.id IN (1,2),
category c
LEFT JOIN entry e
ON c.category_id = e.category_id
WHERE c.category_id NOT IN (19,20)
```
I realise the case statement won't work, but it was an attempt on how this might be possible. I'm really not sure if it's possible or the best direction. Appreciate any guidance. | There are a number of ways to do this, but the less verbose the SQL is, the harder it may be for you to follow along with the logic. For that reason, I think that using multiple Common Table Expressions will avoid the need to use redundant joins, while being the most readable.
```
-- assuming user_name and category_name are unique on [user] and [category] respectively.
WITH valid_categories (category_id, category_name) AS
(
-- get set of valid categories
SELECT c.category_id, c.category AS category_name
FROM category c
WHERE c.category_id NOT IN (19,20)
),
valid_users ([user_name]) AS
(
-- get set of users who belong to valid roles
SELECT u.[user_name]
FROM [user] u
WHERE EXISTS (
SELECT *
FROM [role] r
WHERE u.id = r.[user_id] AND r.id IN (1,2)
)
),
valid_entries (entry_id, [user_name], category_id, entry_count) AS
(
-- provides a flag of 1 for easier aggregation
SELECT e.[entry_id], e.[user_name], e.category_id, CAST( 1 AS INT) AS entry_count
FROM [entry] e
WHERE e.entered_date BETWEEN TO_DATE('20140625','YYYYMMDD') AND TO_DATE('20140731', 'YYYYMMDD')
-- determines if entry is within date range
),
user_categories ([user_name], category_id, category_name) AS
( SELECT u.[user_name], c.category_id, c.category_name
FROM valid_users u
-- get the cartesian product of users and categories
CROSS JOIN valid_categories c
-- get only users with a valid entry
WHERE EXISTS (
SELECT *
FROM valid_entries e
WHERE e.[user_name] = u.[user_name]
)
)
/*
You can use these for testing.
SELECT COUNT(*) AS valid_categories_count
FROM valid_categories
SELECT COUNT(*) AS valid_users_count
FROM valid_users
SELECT COUNT(*) AS valid_entries_count
FROM valid_entries
SELECT COUNT(*) AS users_with_entries_count
FROM valid_users u
WHERE EXISTS (
SELECT *
FROM user_categories uc
WHERE uc.user_name = u.user_name
)
SELECT COUNT(*) AS users_without_entries_count
FROM valid_users u
WHERE NOT EXISTS (
SELECT *
FROM user_categories uc
WHERE uc.user_name = u.user_name
)
SELECT uc.[user_name], uc.[category_name], e.[entry_count]
FROM user_categories uc
INNER JOIN valid_entries e ON (uc.[user_name] = e.[user_name] AND uc.[category_id] = e.[category_id])
*/
-- Finally, the results:
SELECT uc.[user_name], uc.[category_name], SUM(NVL(e.[entry_count],0)) AS [entry_count]
FROM user_categories uc
LEFT OUTER JOIN valid_entries e ON (uc.[user_name] = e.[user_name] AND uc.[category_id] = e.[category_id])
``` | Try this:
```
delete from t1
where user in (
select user
from t1
group by user
having count(distinct category) = sum(case when total=0 then 1 else 0 end) )
```
The subquery finds all the users that fit your removal requirement.
`count(distinct category)` gets how many categories a user has.
`sum(case when total=0 then 1 else 0 end)` counts how many of that user's rows have a total of 0 (no activity); when the two counts are equal, every one of the user's categories has a zero total. | Remove grouped data set when total of count is zero with subquery | [
"",
"sql",
""
] |
I know that this question has been asked (and answered) many times, but none of them appear to be the same problem that I am seeing...
The table that is giving me problems only has two columns: the first field is an integer, the second field is longtext. Here is a portion of a dump file from MySQL 5.5.30:
```
1 - MySQL dump 10.13 Distrib 5.5.30, for Linux (x86_64)
2 --
3 -- Host: localhost Database: mydatabasename
4 -- ------------------------------------------------------
5 -- Server version 5.5.30-log
32 DROP TABLE IF EXISTS `large_file`;
33 /*!40101 SET @saved_cs_client = @@character_set_client */;
34 /*!40101 SET character_set_client = utf8 */;
35 CREATE TABLE `large_file` (
36 `id` int(11) NOT NULL AUTO_INCREMENT,
37 `data` longtext,
38 PRIMARY KEY (`id`)
39 ) ENGINE=InnoDB AUTO_INCREMENT=59 DEFAULT CHARSET=latin1;
40 /*!40101 SET character_set_client = @saved_cs_client */;
43 -- Dumping data for table `large_file`
44 --
45
46 LOCK TABLES `large_file` WRITE;
47 /*!40000 ALTER TABLE `large_file` DISABLE KEYS */;
48 INSERT INTO `large_file` VALUES(38,'GyUtMTIzNDVYQ... <large data> ...);
49 /*!40000 ALTER TABLE `large_file` ENABLE KEYS */;
50 UNLOCK TABLES;
```
As you can see this dump file came from MySQL 5.5.30, and I can import this data into 5.5.30. But, when I try to import into 5.6.x, I get the **ERROR 1118 (42000) Row size too large** error.
The data going into the large\_file table is (relatively) large; values range in size from 15 MB to about 25 MB. The data is all ASCII (base64 encoded).
Other posters have had issues with very large number of columns, but I only have two columns in this table.
The longtext type should be capable of storing approx 4 GB, and this has been the case with 5.5.30, but I am finding migration to 5.6.x to be difficult.
Can anyone offer insight into why this is happening? Or, how I can work around it?
Thanks in advance! | Check that the innodb\_log\_file\_size setting is sufficiently large -- 10 times the largest BLOB data size found in the rows in the table plus the length of other variable length fields.
The following is from [MySQL 5.6 Release Notes](http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html)
*InnoDB Notes*
* **Important Change**: Redo log writes for large, externally stored BLOB fields could overwrite the most recent checkpoint. The 5.6.20 patch limits the size of redo log BLOB writes to 10% of the redo log file size. The 5.7.5 patch addresses the bug without imposing a limitation. For MySQL 5.5, the bug remains a known limitation.
As a result of the redo log BLOB write limit introduced for MySQL 5.6, innodb\_log\_file\_size should be set to a value greater than 10 times the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields). ***Failing to do so could result in “Row size too large” errors***. No action is required if your innodb\_log\_file\_size setting is already sufficiently large or your tables contain no BLOB data. (Bug #16963396, Bug #19030353, Bug #69477) | I had this issue with MYSQL 5.7 (OSX 10.11).
The following worked although it may not be ideal.
In `my.cnf` add:
```
innodb_strict_mode = 0
``` | ERROR 1118 (42000) Row size too large | [
"",
"mysql",
"sql",
""
] |
```
SELECT DISTINCT(journey.id)
FROM journey
JOIN journey_day ON journey_day.journey = journey.id
JOIN pattern ON pattern.id = journey.pattern
JOIN pattern_link ON pattern_link.section = pattern.section
WHERE pattern.service = :service AND pattern.direction = :direction AND journey_day.day = :day
```
Above I have a MySQL query that should pull out a list of journeys based on some variables.
What I want to do now is add another two variables which can adjust the query. I need to convert what I'm about to say into SQL format and then append it to my current query but I'm not sure how to do so.
I need to ensure that the `pattern_link` table contains **both** a `:departure` and `:arrival` value, essentially making it only pull journeys that contain these two variables.
In this example we're talking about bus timetables, so, in theory I only want to select journeys that travel through both the `:departure` and `:arrival` bus stops. The `pattern_link` table is essentially a table that connects bus stop to bus stop on a journey. Also, it's worth saying that I don't mean just these two stops, I mean that these two stops are somewhere in the table.
How can I do this? | I don't fully understand how you have phrased the question. I think, though, that you are looking for journeys that contain both a departure and arrival value in corresponding `pattern_link`s. If so, you can do this with `group by` and `having`:
```
SELECT j.id
FROM journey j JOIN
journey_day jd
ON jd.journey = j.id JOIN
pattern p
ON p.id = j.pattern JOIN
pattern_link pl
ON pl.section = p.section
WHERE p.service = :service AND p.direction = :direction AND jd.day = :day
GROUP BY j.id
HAVING sum(p.busstop = :arrival) > 0 and
sum(p.busstop = :departure) > 0;
```
I'm not sure if the fields you want are in the `pattern` table or the `pattern_link`, but the same idea would hold. | The easiest way to add the additional condition would be to create a self-join, as below:
```
SELECT DISTINCT(journey.id)
FROM journey
JOIN journey_day ON journey_day.journey = journey.id
JOIN pattern ON pattern.id = journey.pattern
JOIN pattern_link pattern_link_dep ON pattern_link_dep.section = pattern.section AND pattern_link_dep.<stop_column> = :departure
JOIN pattern_link pattern_link_arr ON pattern_link_arr.section = pattern.section AND pattern_link_arr.<stop_column> = :arrival
WHERE pattern.service = :service AND pattern.direction = :direction AND journey_day.day = :day;
```
Replace `<stop_column>` with the name of the column in `pattern_link` that holds the bus stop. | Only select items where a table contains two specific values | [
"",
"mysql",
"sql",
""
] |
I have a SQL Table with recurring tasks. For instance:
```
+------------------+----+--------------+
| Task | ID | RecurranceID |
+------------------+----+--------------+
| Take Out Garbage | 1 | 0 |
| Order Pizza | 2 | 0 |
| Eat Breakfast | 3 | 1 |
| Eat Breakfast | 4 | 1 |
| Eat Breakfast | 5 | 1 |
| Order Pizza | 6 | 0 |
+------------------+----+--------------+
```
Anything with a RecurranceID of 0 is not a recurring task, but otherwise it is a recurring task.
How can I show all tasks with a limit of one row on a recurring task?
I would just like the resulting set to show:
```
+------------------+----+
| Task | ID |
+------------------+----+
| Take Out Garbage | 1 |
| Order Pizza | 2 |
| Eat Breakfast | 3 |
| Order Pizza | 6 |
+------------------+----+
```
Using SQL Server 2012
Thank you! | ```
;WITH MyCTE AS
(
SELECT Task,
ID,
ROW_NUMBER() OVER (PARTITION BY TASK ORDER BY ID) AS rn
FROM Tasks
)
SELECT *
FROM MyCTE
WHERE rn = 1
```
It is not clear from your sample data, but you may need to also include `RecurranceID` in the `PARTITION BY` clause, as below:
```
;WITH MyCTE AS
(
SELECT Task,
ID,
RecurranceID,
ROW_NUMBER() OVER (PARTITION BY TASK,RecurranceID ORDER BY ID) AS rn
FROM Tasks
)
SELECT *
FROM MyCTE
WHERE rn = 1
OR RecurranceID = 0
``` | You seem to want all non-recurring tasks returned, along with a single row for each recurring task (whether or not it shares names with a non-recurring task). Are you looking for:
```
SELECT Task, ID
FROM RecurringTaskTable
WHERE RecurrenceID = 0
UNION ALL
SELECT Task, MIN(ID) AS ID
FROM RecurringTaskTable
WHERE RecurrenceID <> 0
GROUP BY Task
``` | How to limit SQL result with recurrance | [
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I want to alter a table called `person` and add a foreign key to it referencing the `office` table.
the query I am using is
```
ALTER TABLE person
ADD CONSTRAINT person_Office_FK
FOREIGN KEY ( Office_id )
REFERENCES Office ( Office_id ) ;
```
Table `office` has around 500,000 rows and table `person` has around 5 million
This query is taking forever; I am not sure what is happening. | Before adding your constraint, make sure that there's a clustered index on office\_id on the office table and a non-clustered index on office\_id on the person table.
Remember that every occurrence of office\_id on the person table needs to check against every office\_id record. This will also speed things up if you ever have to delete an office record.
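For example, a minimal sketch of those two indexes (assuming MySQL/InnoDB; the secondary index name here is made up, so adjust it to your schema):

```
-- InnoDB's primary key is its clustered index, so make Office_id the PK
-- on Office if it isn't already:
ALTER TABLE Office ADD PRIMARY KEY (Office_id); -- skip if it is already the PK

-- A secondary index on the child column lets the constraint validate each
-- person row with an index lookup instead of a scan:
CREATE INDEX ix_person_office_id ON person (Office_id);
```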
You don't want to disable the checks, since your constraint will be untrusted and you won't get the performance benefit a foreign key gives you in the query optimizer. | If `Office_id` is the primary key of `Office`, make sure it has a (primary key) index. This will definitely speed up the adding of the constraint.
Also, according to [How to temporarily disable a foreign key constraint in MySQL?](https://stackoverflow.com/questions/15501673/how-to-temporarily-disable-a-foreign-key-constraint-in-mysql), you can use
```
SET FOREIGN_KEY_CHECKS=0;
```
To disable ALL foreign key constraint checks, possibly this works too when adding them. | alter table query taking too long for adding constraint | [
"",
"mysql",
"sql",
""
] |
I have a MySQL table called `tblad_clicks` which tracks the number of clicks for ads on a website. I also have a table called `tblprofiles` that has user data stored.
I need to query the database to select the distinct `CLIENTID` (the identifier that says whom an ad belongs to). I then also need to select `profileid` where it is greater than 0, and I think that's all for the `tblad_clicks` table. Then I need to join the `profileid` from `tblad_clicks` with the `profileid` from `tblprofiles` to get the `ZONE_ID`, and then I need to join that with `tblzones` to get the `zone_name`.
Here is a list of the tables and columns:
(1) **tblad\_clicks**
* clickid
* clientid
* profileid
(2) **tblprofiles**
* profileid
* zone\_id
(3) **tblzones**
* zoneid
* zone\_name
There are more columns in these tables, but I have shared only the required column names.
I am not sure if I am being clear, but in essence I need to see how many people from each region have clicked on an ad for each client.
Thank you in advance | Try this:-
```
SELECT A.clientid, A.profileid,
B.zone_ID, C.zone_name
FROM tblad_clicks A
INNER JOIN tblprofiles B ON A.profileid = B.profileid
INNER JOIN tblzones C ON B.zone_id = C.zoneid
WHERE A.profileid > 0;
``` | Try this :
```
Select c.clientid, c.clickid, z.zoneid, z.zone_name
from tblad_clicks as c
inner join tblprofiles as p on p.profileid = c.profileid
inner join tblzones as z on z.zoneid = p.zone_id
group by c.clientid, c.clickid, z.zoneid
``` | MYSQL Database Query with Join and Count | [
"",
"mysql",
"sql",
""
] |
How to create this query (exercise included in Stanford Database MOOC):
> For all cases where the same reviewer rated the same movie twice and gave it a higher rating the second time, return the reviewer's name and the title of the movie.
The system used is SQLite.
Table `movie`:
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| mID | int(11) | YES | | NULL | |
| title | text | YES | | NULL | |
| year | int(11) | YES | | NULL | |
| director | text | YES | | NULL | |
Table `rating`:
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| rID | int(11) | YES | | NULL | |
| mID | int(11) | YES | | NULL | |
| stars | int(11) | YES | | NULL | |
| ratingDate | date | YES | | NULL | |
Table `reviewer`:
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| rID | int(11) | YES | | NULL | |
| name | text | YES | | NULL | |
Expected result:
| name | title |
| --- | --- |
| Sarah Martinez | Gone with the Wind |
[The data](https://class.stanford.edu/c4x/DB/SQL/asset/moviedata.html). | ```
SELECT
R.name,
M.title
FROM
reviewer AS R
INNER JOIN movie AS M
ON EXISTS ( -- there is at least one rating
SELECT *
FROM rating AS G
WHERE
-- by the reviewer and movie in question
R.rID = G.rID
AND M.mID = G.mID
AND EXISTS ( -- for which another rating exists
SELECT *
FROM rating AS G2
WHERE
-- for the same reviewer and movie
R.rID = G2.rID
AND M.mID = G2.mID
AND G.stars < G2.stars -- but rated higher
AND G.ratingDate < G2.ratingDate -- and later
)
)
;
```
I am not 100% sure if SQLite allows `ON` clauses to have `EXISTS` expressions. If not, you can just move the `EXISTS` expression to the `WHERE` clause and perform a cross join between `reviewer` and `movie`.
If SQLite doesn't support `EXISTS`, then put the `EXISTS` queries as a derived table in the `FROM` clause, with the two tables `INNER JOIN`ed to each other, and then `GROUP BY` the `mId` and the `rID`, then `INNER JOIN` to the main tables. That might look like this:
```
SELECT
R.name,
M.title
FROM
(
SELECT
G.rID,
G.mID
FROM
rating AS G
INNER JOIN rating AS G2
ON G.rID = G2.rID
AND G.mID = G2.mID
AND G.stars < G2.stars
AND G.ratingDate < G2.ratingDate
GROUP BY
G.rID,
G.mID
) C
INNER JOIN reviewer AS R
ON C.rID = R.rID
INNER JOIN movie AS M
ON C.mID = M.mID
;
```
I hope you can see how these two queries express the same semantics. In a very large database where people have rated the same movies many times, there could be a performance difference (the `EXISTS` version I first showed could perform better as it can stop as soon as it finds one result).
Note: You could just join the whole mess into a single query and `GROUP BY` the `name`, `title`, `rID`, `mID`, but while "simpler", that would be more wrong, as there is no need to duplicate the name and title for many rows, only to throw away that information by grouping. The grouping should happen as early as possible. | I've managed to solve this specific exercise with this query:
```
SELECT R.name, M.title
FROM
Rating AS RatingLatest
JOIN Rating AS R2
ON RatingLatest.rID = R2.rID AND RatingLatest.mID = R2.mID
JOIN Reviewer AS R USING (rID)
JOIN Movie AS M USING (mID)
-- Check if there is a newer rating with more stars than the previous one
WHERE RatingLatest.ratingDate > R2.ratingDate
AND RatingLatest.stars > R2.stars
```
It returns reviewers who have rated the same movie more than once, where the later rating (not specifically the second) was greater. | How to create this SQL query? | [
"",
"sql",
"sqlite",
""
] |
I have a query like this:
```
select *, (CAST (ie_usage_count as float)/total_count)*100 as percent_ie from(
SELECT DISTINCT CAST (account_id AS bigint),
count(case when
user_agent LIKE '%MSIE 7%'
AND user_agent NOT LIKE '%Trident%'
then 1 end) as ie_usage_count,
count(*) as total_usage
FROM acc_logs
WHERE account_id NOT LIKE 'Account ID '
group by account_id
ORDER BY account_id )
where not ie_usage_count = 0
```
That gives me a table with account\_ids, and the ie\_usage\_count, total\_usage, and percent\_ie associated with each account ID
```
account_id | ie_usage_count | total_usage | percent_ie
```
I have another query
```
select name, account_id
from accounts
```
That gives me the name of the person associated with each account.
```
name | account_id |
```
I'd like to have a single query that includes name, account\_id, ie\_usage\_count, total\_usage, and percent\_ie.
```
name | account_id | ie_usage_count | total_usage | percent_ie
```
Any ideas? | Your first query is more easily written as:
```
select CAST(account_id AS bigint),
SUM(case when user_agent LIKE '%MSIE 7%' AND user_agent NOT LIKE '%Trident%'
then 1 else 0
end) as ie_usage_count,
count(*) as total_usage,
AVG(case when user_agent LIKE '%MSIE 7%' AND user_agent NOT LIKE '%Trident%'
then 100.0 else 0
end) as percent_ie
from acc_logs
where account_id NOT LIKE 'Account ID '
group by account_id
having SUM(case when user_agent LIKE '%MSIE 7%' AND user_agent NOT LIKE '%Trident%'
then 1 else 0
end) <> 0;
```
You can get the name just by joining it in:
```
select CAST(al.account_id AS bigint), a.name,
SUM(case when user_agent LIKE '%MSIE 7%' AND user_agent NOT LIKE '%Trident%'
then 1 else 0
end) as ie_usage_count,
count(*) as total_usage,
AVG(case when user_agent LIKE '%MSIE 7%' AND user_agent NOT LIKE '%Trident%'
then 100.0 else 0
end) as percent_ie
from acc_logs al join
accounts a
on al.account_id = a.account_id
where al.account_id NOT LIKE 'Account ID '
group by al.account_id, a.name
having SUM(case when user_agent LIKE '%MSIE 7%' AND user_agent NOT LIKE '%Trident%'
then 1 else 0
end) <> 0;
``` | Just join to it:
```
SELECT a.name, l.*, (l.ie_usage_count * 100)::float / l.total_usage AS percent_ie
FROM (
SELECT account_id::bigint -- Why cast to bigint?
, count(user_agent LIKE '%MSIE 7%'
AND user_agent NOT LIKE '%Trident%'
OR NULL) AS ie_usage_count
, count(*) AS total_usage
FROM acc_logs
WHERE account_id NOT LIKE 'Account ID ' -- trailing blank? typo?
GROUP BY account_id
ORDER BY account_id
) l
JOIN accounts a USING (account_id)
WHERE ie_usage_count <> 0;
```
* Your original query was invalid, because the subquery had no alias.
Otherwise, the subquery is just fine, don't flatten it. Short syntax, percentage calculation and join to `accounts` only for qualifying rows, division by 0 avoided elegantly.
* Why do you cast `account_id` to `bigint`? There's something off here. Are you storing numbers as text? I would need to see your table definitions ...
* Reformulated the calculation. It's cheaper and more precise to multiply the `integer` by 100 *before* you cast to `float`.
You may want to cast to `numeric` instead and wrap that into [`round(expression, 2)`](http://www.postgresql.org/docs/current/interactive/functions-math.html#FUNCTIONS-MATH-FUNC-TABLE) to get pretty output.
* Using a shorter (equivalent) expression for the conditional count. [Details here.](https://stackoverflow.com/questions/15644713/compute-percents-from-sum-in-the-same-select-sql-query/15650897#15650897)
* Simplified your `WHERE` condition. | How can I join these tables in postgresql? | [
"",
"sql",
"postgresql",
""
] |
There is my plain sql query:
```
SELECT id, title,
IF(o.fixed_on_top = 1 AND o.fixing_on_top_day_counter > 5, 1 ,0) actual_fixed_on_top
FROM orders o
ORDER BY actual_fixed_on_top DESC, publication_date DESC;
```
How can I perform sorting like this on JpaRepository?
Maybe using Criteria API?
But I didn't find any examples.
Thanks!
EDIT:
OK, I studied different ways and was convinced that it can't be implemented through the Criteria API (which is a pity).
One working variant, I am sure: a native SQL query. | I found the solution:
```
public class MyEntitySpecifications {
public static Specification<MyEntity> GetByPageSpecification() {
return new Specification<MyEntity>() {
@Override
public Predicate toPredicate(Root<MyEntity> root, CriteriaQuery<?> cq, CriteriaBuilder cb) {
Expression fixingExpr = cb.greaterThan(root.get(MyEntity_.fixingDueDate), new DateTime(DateTimeZone.UTC));
cq.orderBy(new OrderImpl(cb.selectCase().when(fixingExpr, 1).otherwise(0), false));
return cb...;
}
};
}
}
```
The main idea is to use a case expression instead of a simple logical one.
The plain SQL equivalent is:
```
select ...
from my_entity e
where ...
order by case when e.fixing_due_date > now() then 1 else 0 end desc
```
So we can build dynamic queries with Criteria API specifications as well.
No direct access to EntityManager, no plain SQL. | You can do it by using the [QueryDslPredicateExecutor](http://docs.spring.io/spring-data/data-commons/docs/current/api/org/springframework/data/querydsl/QueryDslPredicateExecutor.html) which provides the following two methods:
```
Iterable<T> findAll(Predicate predicate, OrderSpecifier<?> ... orderSpecifiers);
Page<T> findAll(Predicate predicate, Pageable pageable);
```
The [PageRequest](http://docs.spring.io/spring-data/data-commons/docs/current/api/org/springframework/data/domain/PageRequest.html) class implements `Pageable` and allows you to specify the sorting you want.
You could also add a `Pageable` parameter in a `@Query` annotated method on a regular `JpaRepository` and Spring Data Jpa will do the rest, for example:
```
@Query("select e from SomeEntity e where e.param1 = :param1")
public Page<SomeEntity> findSome(@Param("param1") String param1, Pageable pageable);
``` | JpaRepository: Spring Sort for runtime query variabels | [
"",
"sql",
"hibernate",
"sorting",
"jpa",
"spring-data-jpa",
""
] |
I am new to coding in SQL Server stored procedures. I am having difficulty deciding between a temp table, a table variable, and a global temp table to store data in my stored procedure.
My situation is that I need to store data locally in a table to calculate values while retrieving data from various SQL Servers. Common sense tells me to use a temp table, but why can't I use a table variable or a global temp table?
Any insights will be helpful. | The rule is pretty much, choose a table variable as the default. Then re-assess the situation in the following circumstances:
* If you want to use transaction rollback
* If you want to pass the result sets of your tables from one stored procedure to another
* If you want the query optimizer to be able to work out how best to run a complex query
* If you're getting really clever with dynamic SQL

In any of these cases, you need a temporary table.
So, table variables ought to be your first choice. For small result sets and the everyday type of data manipulation, they are faster and more flexible than temporary tables, and the server cleans them up when you’ve finished.
A table variable does not persist in tempdb the way a temp table does; it is cleared automatically as soon as the stored procedure or function ends. A table variable can have only one index (the primary key), while a temp table can have more. Use table variables for small volumes of data and temp tables for larger volumes. There is no data logging or rollback for table variables, but there is for temp tables, and you get fewer stored procedure recompilations with table variables than with temp tables.
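A minimal T-SQL sketch of the two forms (the table and column names here are only illustrative):

```
-- Table variable: only the primary key index, cleaned up automatically:
DECLARE @t TABLE (id int PRIMARY KEY, val varchar(50));
INSERT INTO @t VALUES (1, 'a');

-- Temp table: lives in tempdb and can take extra indexes:
CREATE TABLE #t (id int PRIMARY KEY, val varchar(50));
CREATE INDEX ix_val ON #t (val);
INSERT INTO #t VALUES (1, 'a');
DROP TABLE #t; -- or let the session drop it when it ends
```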
Global temp tables are just like temp tables, except that they are visible to all sessions for as long as the creating session is valid. Hope this gives you a good idea of which way to go in your scenario. | A Table Variable is functionally almost identical to a temp table -- in fact, SQL Server actually implements a table variable as a temp table at least some of the time. There are some functional differences in terms of limitations on what you can do with them that make one more convenient than the other on some occasions: inserting the result of an `EXEC` is a common one, as is using `CREATE INDEX` to make secondary keys.
A temp var is not guaranteed to be written to tempdb. In fact, the Microsoft documentation seems to suggest that it will not be written to tempdb, so in theory temp vars can be expected to be more performant than, or at least no slower than, temp tables.
A global temp table is a temp table that can be shared across multiple database connections. These are comparatively rare compared to temp vars and temp tables.
ADDED
The differences have been discussed at length in a number of articles [article#1](http://www.sql-server-performance.com/2007/temp-tables-vs-variables/), [article#2](http://databases.aspfaq.com/database/should-i-use-a-temp-table-or-a-table-variable.html), [article#3](http://sqlserverplanet.com/tsql/yet-another-temp-tables-vs-table-variables-article) and on [Stack Overflow](https://stackoverflow.com/questions/6991511/sql-server-temp-table-vs-table-variable)
ADDED
From some of the comments on this answer: I've read those articles in the past, and yes, to the best of my knowledge table vars are always implemented via tempdb today. But Microsoft could change the implementation in the future, or even in the 2014 version (since those articles predate 2014); that's why I used hedge words like "in theory". The Stack Overflow article I referenced mentioned this briefly, but the additional article references are also quite good.
I was trying to explain that they were basically the same. You can choose based on how you need to use them. And if either one could be used equally, you might prefer a temp var since MS could choose to make this more efficient in the future. Seemed like an awful lot to dump on a newbie question though. | Difference between temp table, table variable, global temp table to use in stored procedure in SQL Server 2008 | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I've got a table with 2 columns,
```
GROUP PROJECTS
10001 1
10001 2
```
The first column (GROUP) stays at the same value, 10001.
The second column (PROJECTS) takes changing values: 3, 5, 9, 100, etc. (I have 400 project IDs.)
What would be the correct (loop?) statement to insert all 400 PROJECTS?
I have used `INSERT ... VALUES` for smaller lists.
```
INSERT INTO table (GROUP_ID, PROJECTS) VALUES (10001, 1, 10001, 2, 10001, etc, 10001, etc);
```
I have the list in Excel (if needed I can create a Temp table with all 400 project ID's)
Thanks. | I typically write such inserts as:
```
INSERT INTO table(GROUP_ID, PROJECTS)
select 10001, 1 from dual union all
select 10001, 2 from dual union all
. . . ;
```
You should be able to generate the `select` statement pretty easily in Excel. | If the project IDs exist in their own table (or you can create one from your Excel data), then you can get the list of values from there and cross-join those with all the group IDs:
```
insert into group_projects (group_id, project_id)
select g.group_id, p.project_id
from groups g
cross join projects p
where not exists (
select 1 from group_projects gp
where gp.group_id = g.group_id and gp.project_id = p.project_id
);
```
The `where not exists()` excludes all the existing pairs so you don't insert duplicates.
[SQL Fiddle](http://sqlfiddle.com/#!4/122653/1)
If the groups don't have their own table then you can use the existing values from a subquery:
```
insert into group_projects (group_id, project_id)
select g.group_id, p.project_id
from (select distinct group_id from group_projects) g
cross join projects p
where not exists (
select 1 from group_projects gp
where gp.group_id = g.group_id and gp.project_id = p.project_id
);
```
You could use Gordon's approach to generate the project ID list as a subquery as well, if you didn't want to create a table for those. | Insert values in table only one column changes value | [
"",
"sql",
"plsql",
"oracle11g",
""
] |
I have two tables `[dbo].Notifications` and `[dbo].ClientsNotifications` where :
`[dbo].Notifications`:
```
ID : Notification
1 : 'Notification1'
2 : 'Notification2'
3 : 'Notification3'
and so on...
```
and
`[dbo].ClientsNotifications`:
```
Id : ClientId : NotificationId
1 : 1 : 1
2 : 2 : 2
3 : 1 : 3
4 : 5 : 2
and so on..
```
What I want is, given a `ClientId`, to fetch all the string values from `[dbo].Notifications`.
I tried something like this:
```
SELECT *
FROM [dbo].Notifications c, [dbo].ClientsNotifications a
WHERE a.ClientId IN (
SELECT c.Notification
FROM [dbo].ClientsNotifications a2, [dbo].Notifications c2
WHERE a2.ClientId = 1
)
```
but it gives me this error :
> Msg 245, Level 16, State 1, Line 1
> Conversion failed when converting the nvarchar value 'Notification1' to data type int.
I'm not very experienced in SQL so I'm not certain that this is even the right way (query) to fetch this data. At the end I want to get all string value notifications for user so I can show them in the view. | Use this:
```
select n.Notification
from Notifications n
where n.Id in
( select cn.NotificationId
from ClientsNotifications cn
where cn.ClientId = 1
)
```
It selects all notifications for which there is a matching record in `ClientsNotifications`. There is no need for a `join`, since you only need the `Notification` field. If you *do* need values from that other table, use this:
```
select n.Notification
, cn.ClientId /* some fields from ClientNotifications */
from Notifications n
join ClientsNotifications cn
on n.Id = cn.NotificationId
where cn.ClientId = 1
``` | Your code is trying to grab the ClientID from a list of Notification messages. Try:
```
SELECT Notification
FROM Notifications
JOIN ClientsNotifications
ON Notifications.ID = ClientsNotifications.NotificationId
WHERE ClientsNotifications.ClientId = 1
``` | How to get the actual values from a SQL join table | [
"",
"sql",
"sql-server",
""
] |
I am trying to pivot a table and display the results in one record. This all works if there is no PK field, but there is one, and I am not getting the desired result. You can use the statement below to test.
```
Create table #Test
(id int identity(1,1), item_name varchar(50),item_value varchar(50), decode float )
insert into #Test (item_name, item_value, decode)
values ('Threshold', 'GROWTH', 0.6),
('Threshold', 'LEVERAGE', 0.4)
Select pvt.* from #Test d
Pivot (
max(d.decode)
for d.item_value in (Growth,Leverage)
) as pvt
where item_name = 'Threshold'
drop table #test
```
The Results of this Query are
```
| id | item_name | Growth | Leverage
1 Threshold 0.6 NULL
2 Threshold NULL 0.4
```
My Desired Results are
```
item_name | Growth | Leverage
Threshold 0.6 0.4
```
If you remove the Identity column from the create statement I can achieve the result I am looking for, but I can't do this because I am using real tables. How can I make the query 'bypass' the id column and achieve my desired result | Since the PK is unique (an identity) for each row, you'll need to exclude it from the query or it will be including in the grouping aspect of PIVOT. To get around this, you'll want to use a subquery to select only the columns needed for the PIVOT.
Change your query to the following:
```
Select item_name, Growth,Leverage
from
(
select item_name, item_value, decode
from #Test
where item_name = 'Threshold'
) d
Pivot
(
max(d.decode)
for d.item_value in (Growth,Leverage)
) as pvt;
```
See [Demo](http://sqlfiddle.com/#!3/d41d8/37602). | Another option is to use a crosstab for this instead of PIVOT.
```
select item_name
,MAX(case when item_value = 'Growth' then decode end) as Growth
, MAX(case when item_value = 'Leverage' then decode end) as Leverage
from #Test
where item_name = 'Threshold'
group by item_name
``` | PK Field Does Not Allow Pivot to Display as 1 Record | [
"",
"sql",
"sql-server",
"t-sql",
"pivot",
""
] |
PostgreSQL gives an error on one of my select statements. I searched the web for an answer and came up empty-handed. The answer given in another question did not suit my problem.
> ```
> ERROR: failed to find conversion function from unknown to text
> ********** Error **********
> ERROR: failed to find conversion function from unknown to text
> SQL state: XX000
> ```
My query looks something like this:
```
SELECT *
FROM (SELECT 'string' AS Rowname, Data FROM table)
UNION ALL (SELECT 'string2' AS Rowname, Data FROM table);
```
The point of doing this is to label what each row is; the string is the name of the row. Here is my desired output:
```
Rowname Data
string 53
string2 87
```
Any possible way to fix this error? | **Update:** Type resolution in later versions of Postgres became smarter and this rule for [*UNION, CASE, and Related Constructs*](https://www.postgresql.org/docs/current/typeconv-union-case.html) resolves it to text without explicit cast:
> 3. If all inputs are of type unknown, resolve as type `text` (the preferred type of the string category). [...]
```
SELECT 'string' AS rowname, data FROM tbl1
UNION ALL
SELECT 'string2', data FROM tbl2;
```
In older versions before Postgres 9.4 (?), or for non-default types you may still need to add an explicit cast like below.
---
Your statement has a couple of problems. But the error message implies that you need an **[explicit cast](https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS)** to declare the (yet unknown) data type of the string literal `'string'`:
```
SELECT text 'string' AS rowname, data FROM tbl1
UNION ALL
SELECT 'string2', data FROM tbl2;
```
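A runnable sketch of the labelled-UNION pattern (SQLite via Python; SQLite resolves bare string literals to text on its own, so the explicit CAST here is purely illustrative, and the table names are hypothetical):

```python
import sqlite3

# Two hypothetical source tables, each contributing one labelled row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (data INTEGER)")
conn.execute("CREATE TABLE tbl2 (data INTEGER)")
conn.execute("INSERT INTO tbl1 VALUES (53)")
conn.execute("INSERT INTO tbl2 VALUES (87)")

# Casting in the first SELECT fixes the column's type; the second branch follows.
rows = conn.execute(
    """
    SELECT CAST('string' AS TEXT) AS rowname, data FROM tbl1
    UNION ALL
    SELECT 'string2', data FROM tbl2
    """
).fetchall()
print(sorted(rows))  # [('string', 53), ('string2', 87)]
```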
It's enough to cast in one `SELECT` of a `UNION` query. Typically the first one, where column names are also decided. Subsequent `SELECT` lists with unknown types will fall in line.
In other contexts (like the `VALUES` clause attached to an `INSERT`) Postgres derives data types from target columns and tries to coerce to the right type automatically. | ```
Select * from (select CAST('string' AS text) as Rowname, Data
From table) Union all
(select CAST('string2' AS text) as Rowname, Data
From table)
```
[Reference](https://www.postgresql.org/docs/current/static/sql-createcast.html) | ERROR: failed to find conversion function from unknown to text | [
"",
"sql",
"database",
"postgresql",
"type-conversion",
"union",
""
] |
I'd like to perform a conditional `where` in a sql statement, and use two different criteria, e.g. in pseudo code:
```
procedure(..., bool_value IN boolean default false) is
....
begin
select * from mytable mt
where
if bool_value = true then mt.criterion_1 = value
else
mt.criterion_2 = value; -- if the boolean is set, use the criterion_1 column, otherwise use the criterion_2 column
end
```
Suppose it's possible, What's the best way to do it?
Thanks | Try this:
```
bool_value_string varchar2(5);
bool_value_string := case when bool_value then 'true' else 'false' end;
select * from mytable mt
where
(bool_value_string = 'true' and mt.criterion_1 = value)
or
(bool_value_string = 'false' and mt.criterion_2 = value)
```
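The either/or rewrite can be sanity-checked outside Oracle too. A small sketch with SQLite via Python, using a bound 1/0 flag in place of the PL/SQL boolean (the table and values are made up):

```python
import sqlite3

# Hypothetical table with two candidate filter columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (criterion_1 INTEGER, criterion_2 INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, 9), (9, 1), (1, 1)])

def count_matches(bool_value, value):
    # Same either/or rewrite: the flag decides which column is compared.
    flag = 1 if bool_value else 0
    return conn.execute(
        """
        SELECT COUNT(*) FROM mytable
        WHERE (? = 1 AND criterion_1 = ?)
           OR (? = 0 AND criterion_2 = ?)
        """,
        (flag, value, flag, value),
    ).fetchone()[0]

print(count_matches(True, 1), count_matches(False, 1))  # 2 2
```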
Basically, convert your when...then idiom to an either...or one. Either the boolean field is non-null and true, meaning filter has to be by the first criterion, or it isn't, meaning filter by the second one. | Basically, your condition gets translated as:
```
if bool_value = true
then mt.criterion_1 = value
else if bool_value = false
then mt_.criterion_2 = value;
```
Since you cannot directly use a boolean value in SQL statements (refer to the comments), use the version below (change bool\_value from boolean to varchar2 or a number):
```
procedure(..., bool_value IN varchar2 default 'FALSE') is
....
begin
select * from mytable mt
where
(case
when (bool_value = 'TRUE' and mt.criterion_1 = value) then 1
when (bool_value = 'FALSE' and mt.criterion_2 = value) then 1
else 0
end) = 1;
OR
select * from mytable mt
where
(bool_value = 'TRUE' and mt.criterion_1 = value)
or
(bool_value = 'FALSE' and mt.criterion_2 = value)
end
```
---
**ORIGINAL ANSWER**
You can also use `case statement` in `where` clause as below:
```
select * from mytable mt
where
(case
when (bool_value = true and mt.criterion_1 = value) then 1
when (bool_value = false and mt.criterion_2 = value) then 1
else 0
end) = 1;
```
In oracle, you can use below Query also.
```
select * from mytable mt
where
(bool_value = true and mt.criterion_1 = value)
or
(bool_value = false and mt.criterion_2 = value)
```
Note: Since default of `bool_value = false`, `is null` condition need not be checked. | plsql conditional where clause | [
"",
"sql",
"oracle",
"where-clause",
""
] |
I am trying to search through a column of freetext for the following - an 8 character code, starting with an `F` and followed by seven digits - e.g. `F1234567`. These can be located anywhere in the string, which is of variable length. I have attempted this:
```
case when [SHORT-DESCRIPTION] like '%[0-9]%' then 1 end --finds integers
case when [SHORT-DESCRIPTION] like '%F%' then 1 end --finds the letter F
```
This seemed like the next logical step, but my syntax is wrong
```
case when [SHORT-DESCRIPTION] like '%F' & '%[0-9]%' then 1 end
```
It returned the following error:
```
The data types varchar and varchar are incompatible in the '&' operator.
```
Can anyone point me in the right direction? I'm using SQL Server 2008 R2.
Thanks, | I first thought of Regular Expressions, but those are not fully supported in SQL 2008 yet. However, you can achieve your goal with normal pattern matching (<http://msdn.microsoft.com/en-us/library/ms187489(SQL.90).aspx>).
I've tested this and this works:
```
like '%F[0-9][0-9][0-9][0-9][0-9][0-9][0-9]%'
``` | Use `LIKE '%[a-z0-9]%'` instead of `'%[0-9]%'` or `'%F%'` | Searching for an Alpha-Numeric code within a string | [
"",
"sql",
"sql-server",
""
] |
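The accepted pattern can be exercised quickly. SQLite's LIKE has no bracket classes, so in this sketch GLOB stands in for SQL Server's `LIKE '%F[0-9][0-9][0-9][0-9][0-9][0-9][0-9]%'` (sample strings are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (short_description TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("ref F1234567 ok",), ("F123 too short",), ("no code here",)])

# GLOB supports [0-9] classes and is case-sensitive, matching the uppercase F.
hits = conn.execute(
    "SELECT short_description FROM t "
    "WHERE short_description GLOB '*F[0-9][0-9][0-9][0-9][0-9][0-9][0-9]*'"
).fetchall()
print(hits)  # [('ref F1234567 ok',)]
```

Like the T-SQL original, this also matches an F followed by more than seven digits; a word-boundary check would need an extra condition.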
So I'm trying to correct some SQL that's attempting to output the last July 1 that is less than two years away from a certain date. So 06/30/2014 would return 07/01/2015 since 07/01/2016 is more than two years in the future. The date 07/02/2014 should return 07/01/2016, but the SQL below is returning 07/01/2015 incorrectly.
```
DECLARE @Year VARCHAR (10)
DECLARE @Date VARCHAR(10)
DECLARE @OrigDate VARCHAR (10) = /*substitution for relevant date here*/
SET @Year=(DATEPART (YYYY,DATEADD(YYYY,2,GETDATE())))
SET @Date = (SELECT '07/01/'+@Year)
IF @OrigDate > DATEADD(YEAR,-2,GETDATE())
BEGIN
SET @Year=(DATEPART (YYYY,DATEADD(YYYY,1,GETDATE())))
SET @Date = (SELECT '07/01/'+@Year)
END
SELECT @Date AS RetValue
```
Now that we're in August, it's firing correctly, but I need to fix it before next July rolls around. (The date isn't always the current date, but it's usually close) | And the SQL server version of my postgres answer. Credit to <https://stackoverflow.com/a/7149386/13822>
```
WITH cSequence AS
(
SELECT
cast('2014-01-01 00:00' as datetime ) AS StartRange,
DATEADD(DAY, 1, cast('2014-01-01 00:00' as datetime )) AS EndRange
UNION ALL
SELECT
EndRange,
DATEADD(DAY, 1, EndRange)
FROM cSequence
WHERE DATEADD(DAY, 1, EndRange) < DATEADD(year, 2, '2014-01-01 00:00')
)
SELECT top 1 StartRange FROM cSequence where datepart(month, startRange) = 7 and datepart(day, StartRange) = 1
ORDER BY StartRange DESC
OPTION (MAXRECURSION 0) ;
``` | As far as I am aware, this cannot be done in a standard way and is dependent on the SQL flavour you are using. Your example looks like MS SQL Server, in which case my answer won't be overly helpful. But in Postgres:
```
SELECT s FROM generate_series('2014-07-02 00:00'::timestamp,
('2014-07-02 00:00'::timestamp +
(interval '2 years' - interval '1 day')) , '1 day') as s
where date_part('day', s) = 1 and date_part('month', s) = 7 order by s DESC LIMIT 1;
``` | SQL that outputs the last July 1 that is less than two years away | [
"",
"sql",
"date",
""
] |
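The underlying date logic is easy to check outside SQL as well. A small Python sketch, assuming "two years away" means the input date plus two calendar years (the function name is made up, and Feb 29 inputs are not handled):

```python
from datetime import date

def last_july_first(d):
    # Latest July 1 that is still strictly less than two years after d.
    two_years = date(d.year + 2, d.month, d.day)
    candidate = date(d.year + 2, 7, 1)
    while candidate >= two_years:
        candidate = date(candidate.year - 1, 7, 1)
    return candidate

print(last_july_first(date(2014, 6, 30)))  # 2015-07-01
print(last_july_first(date(2014, 7, 2)))   # 2016-07-01
```

Both test cases from the question come out as expected, which is a quick way to validate whichever SQL translation is adopted.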
I'm a little confused on the code that is being suggested for checking if a table exists.
Can someone explain whether the code that I have written will work for checking if a table exists?
I want it to do nothing if the table does exist.
Here is my code:
```
BEGIN
SET NOCOUNT ON;
DECLARE @SQL NVARCHAR(MAX);
IF NOT EXISTS (SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(N' + @TABLENAME + ') AND type in (N'U'))
BEGIN
SET @SQL =
N'CREATE TABLE ' + @TABLENAME + '
('
+ '[ID] [int] IDENTITY(1,1) NOT NULL,
[intID] [int] NULL,
[varID] [varchar](50) NULL,
[FormName] [varchar](250) NULL,
[UID] [varchar](3) NOT NULL,
CONSTRAINT [PK_Selections' + @TABLENAME + '_1] PRIMARY KEY CLUSTERED(
[ID]));';
EXEC sp_executesql @sql;
END
END
```
I forgot to mention that currently the procedure **DOES** create the table. But when I run the procedure again it tries to create the table again and then fails.
How do I make it exit the procedure if the table exists? | Just do this simple check. No need to query sys.objects
```
...
IF OBJECT_ID(@TABLENAME, 'U') IS NULL
BEGIN
```
Your check failed because you were actually looking for a table called "`+ @TABLENAME +`" | Here is how to check whether the table exists:
```
IF NOT EXISTS (SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(@TABLENAME) AND type in (N'U'))
```
The table sys.objects contains a description of every object in a database.
The function OBJECT\_ID() returns the id of an object given its name.
type in (N'U') checks that the object is a user-created table.
To check that the table exists, use:
```
IF EXISTS (SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(@TABLENAME) AND type in (N'U'))
BEGIN
...
END
``` | If table exists with dynamic query in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
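The same existence-check idea, sketched against SQLite, whose catalog is sqlite_master rather than sys.objects. Note the table name is bound as a parameter instead of being pasted into the query string the way the broken `N' + @TABLENAME + '` literal was:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branch (id INTEGER)")

def table_exists(conn, name):
    # Catalog lookup with the name as a bound parameter, never string-pasted.
    return conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (name,),
    ).fetchone() is not None

print(table_exists(conn, "branch"), table_exists(conn, "missing"))  # True False
```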
I am having a problem when trying to perform an SQLite query to get the records which do not exist in another table. Basically I have two database tables:

My exercise table stores all the available exercises, whereas the bookedExercise table stores the exercises booked by each user. What I am trying to do is: if an exercise already exists in bookedExercise, it should be filtered out.
Here is the SQLite query which I used:
```
SELECT exercise.exerciseID, exercise.exerciseType, exercise.amout FROM exercise LEFT JOIN bookedExercise WHERE exercise.exerciseID = bookedExercise.exerciseID AND bookedExercise.exerciseID IS NULL
```
However, it returned me empty records. Any ideas?
Thanks in advance. | If you're fine with not using joins you could use
```
SELECT * FROM exercise WHERE exerciseID not in (SELECT exerciseID FROM bookedExercise)
``` | When you are using LEFT JOIN, you must put the join condition into the ON clause:
```
SELECT exercise.exerciseID,
exercise.exerciseType,
exercise.amout
FROM exercise /* !! */
LEFT JOIN bookedExercise ON exercise.exerciseID = bookedExercise.exerciseID
WHERE bookedExercise.exerciseID IS NULL
``` | SQLite query select all records that does not exist in another table | [
"",
"sql",
"sqlite",
""
] |
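Both anti-join spellings from the answers can be compared directly. A sketch with SQLite via Python (sample rows are made up; note that NOT IN is only safe when the subquery cannot return NULLs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE exercise (exerciseID INTEGER, exerciseType TEXT);
CREATE TABLE bookedExercise (exerciseID INTEGER);
INSERT INTO exercise VALUES (1, 'run'), (2, 'swim'), (3, 'row');
INSERT INTO bookedExercise VALUES (2);
""")

not_in = conn.execute(
    "SELECT exerciseID FROM exercise "
    "WHERE exerciseID NOT IN (SELECT exerciseID FROM bookedExercise)"
).fetchall()

left_join = conn.execute(
    "SELECT e.exerciseID FROM exercise e "
    "LEFT JOIN bookedExercise b ON e.exerciseID = b.exerciseID "
    "WHERE b.exerciseID IS NULL"
).fetchall()

print(sorted(not_in), sorted(left_join))  # [(1,), (3,)] [(1,), (3,)]
```

Both forms return the unbooked exercises; the LEFT JOIN version puts the join condition in ON and the NULL test in WHERE, which is exactly what the original query got backwards.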
I have this:
```
SELECT
firstname, lastname, Sexe, Weight,
sum(E.Weight) OVER (ORDER BY E.Height
RANGE BETWEEN 10 PRECEDING AND 5 FOLLOWING ) AS SumHeight
FROM HR.Employees AS E
ORDER BY E.Height
```
But I get this error:
> RANGE is only supported with UNBOUNDED and CURRENT ROW window frame delimiters. | This [link](http://msdn.microsoft.com/en-us/library/ms189461%28v=sql.110%29.aspx) (T-SQL OVER Clause) shows that:
> <unsigned value specification> PRECEDING:
> Specified with <unsigned value specification> to indicate the number of rows or values to precede the current row. This
> specification is not allowed for RANGE.
So your <window frame preceding> and <window frame following> in the RANGE specification can only use UNBOUNDED PRECEDING|CURRENT ROW, respectively UNBOUNDED FOLLOWING|CURRENT ROW.
You are probably looking for ROWS instead of RANGE? | Pretty much what it says - you've got to have either UNBOUNDED or CURRENT ROW at the ends of a RANGE frame.
You can't do what you're trying to do. | SQL, Range between | [
"",
"sql",
"database",
"sql-server-2012",
""
] |
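The ROWS frame that the accepted answer points to can be tried in SQLite (window functions need SQLite >= 3.25, which ships with recent Python builds); the heights and weights below are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE e (height INTEGER, weight INTEGER)")
conn.executemany("INSERT INTO e VALUES (?, ?)",
                 [(150, 50), (160, 60), (170, 70), (180, 80)])

# ROWS counts physical neighbours, which is what offset frames like
# "10 PRECEDING" usually intend; RANGE with offsets needs value semantics.
rows = conn.execute(
    """
    SELECT height,
           SUM(weight) OVER (ORDER BY height
                             ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS s
    FROM e ORDER BY height
    """
).fetchall()
print(rows)  # [(150, 110), (160, 180), (170, 210), (180, 150)]
```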
I have one column containing both the first name and the last name. I want to write a function which removes the extra spaces between the first name and last name and capitalizes the first letter of each word, with the rest in lowercase.
For example, if the user types:
```
muKesH AmBanI
```
then i will get the Output as a
```
Mukesh Ambani
```
with only one space between them and the first letter of each name capitalized, using MS SQL Server. | The best solution is to build a function. After some inspiration from string manipulations available online, I adapted it to your query:
```
CREATE FUNCTION FormatString(@text varchar(100))
RETURNS varchar(100)
AS
BEGIN
declare @counter int,
 @length int,
 @char char(1),
 @textnew varchar(4000)
-- example input: @text = 'muKesH AmBanI'
set @text = rtrim(@text)
set @text = lower(@text)
set @length = len(@text)
set @counter = 1
set @text = upper(left(@text, 1) ) + right(@text, @length - 1)
while @counter <> @length
begin
 select @char = substring(@text, @counter, 1)
 IF @char = space(1) or @char = '_' or @char = ',' or @char = '.' or @char = '\'
 or @char = '/' or @char = '(' or @char = ')'
 begin
 set @textnew = left(@text, @counter) + upper(substring(@text,
 @counter+1, 1)) + right(@text, (@length - @counter) - 1)
 set @text = @textnew
 end
 set @counter = @counter + 1
end
return replace(replace(replace(@text,' ','<>'),'><',''),'<>',' ')
END
``` | Please try:
```
declare @var nvarchar(500)='muKesH AmBanI'
select
STUFF(FN, 1, 1, UPPER(LEFT(FN, 1)))+' '+
STUFF(LN, 1, 1, UPPER(LEFT(LN, 1)))
FROM(
select
LOWER(SUBSTRING(@var, 1, charindex(' ', @var)-1)) FN,
LOWER(LTRIM(SUBSTRING(@var, charindex(' ', @var), 500))) LN
)x
```
AS function
```
CREATE FUNCTION FormatString(@text varchar(100))
RETURNS varchar(100)
AS
BEGIN
SET @text=@text+' '
SELECT @text=
STUFF(FN, 1, 1, UPPER(LEFT(FN, 1)))+' '+
ISNULL(STUFF(LN, 1, 1, UPPER(LEFT(LN, 1))), '')
FROM(
SELECT
LOWER(SUBSTRING(@text, 1, charindex(' ', @text)-1)) FN,
LOWER(RTRIM(LTRIM(SUBSTRING(@text, charindex(' ', @text), 500)))) LN
)x
RETURN @text
END
``` | Remove spacing between 2 words | [
"",
"sql",
"sql-server",
"replace",
"trim",
""
] |
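When the clean-up can happen in the calling application instead of T-SQL, the whole character loop collapses to a couple of lines. A Python sketch of the same collapse-spaces-and-capitalize behaviour (the function name is made up):

```python
def format_name(raw):
    # split() with no argument collapses runs of whitespace;
    # capitalize() uppercases the first letter and lowercases the rest.
    return " ".join(w.capitalize() for w in raw.split())

print(format_name("muKesH   AmBanI"))  # Mukesh Ambani
```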
Good morning everyone! I am trying to write a Query that accepts a list of zip codes as a variable, and then queries for values in that set. Everything seems to be working fine, except the variable which I have placed my list values into.
I have never tried using a `varchar` value to represent a list of values, so if it's not possible or if there is a better way, please let me know.
here is the code so far:
```
DECLARE @trade_firm VARCHAR(20)
DECLARE @zip_list VARCHAR(255)
---------------------------------
SET @trade_firm = '0016986'
SET @zip_list = '44654,15219,15219,15219' --this is my list of zip codes
---------------------------------
SELECT oad.POSTAL_CODE_1,
o.CHANNEL
FROM dbo.OFFICE AS o
INNER JOIN dbo.OFFICE_ALIAS AS oa
ON o.OFFICE_ID = oa.OFFICE_ID
INNER JOIN dbo.OFFICE_ADDRESS AS oad
ON o.ADDRESS_1_ID = oad.OFFICE_ADDRESS_ID
WHERE oa.TRADE_FIRM = @trade_firm
AND o.CHANNEL != ( SELECT f.channel
FROM dbo.FIRM AS f
INNER JOIN dbo.FIRM_ALIAS AS fa
ON f.FIRM_ID = fa.FIRM_ID
WHERE fa.trade_firm = @trade_firm)--will return the channel of the main firm
AND oad.POSTAL_CODE_1 NOT IN @zip_list --this is the variable as I currently have it, broken
``` | This should work. Hope it helps.
```
DECLARE @trade_firm VARCHAR(20)
DECLARE @zip_list VARCHAR(255)
DECLARE @tbl_zip_list TABLE (zip_list VARCHAR(10))
---------------------------------
SET @trade_firm = '0016986'
SET @zip_list = '44654,15219,15219,15219' --this is my list of zip codes
---------------------------------
SET @zip_list = @zip_list + ',' --Add trailing ',' for closing the loop
WHILE CHARINDEX(',', @zip_list) > 0
BEGIN
INSERT INTO @tbl_zip_list
SELECT SUBSTRING(@zip_list, 1, CHARINDEX(',', @zip_list) - 1)
SET @zip_list = SUBSTRING(@zip_list, CHARINDEX(',', @zip_list) + 1, LEN(@zip_list))
END
SELECT *
FROM @tbl_zip_list
SELECT oad.POSTAL_CODE_1,
o.CHANNEL
FROM dbo.OFFICE AS o
INNER JOIN dbo.OFFICE_ALIAS AS oa
ON o.OFFICE_ID = oa.OFFICE_ID
INNER JOIN dbo.OFFICE_ADDRESS AS oad
ON o.ADDRESS_1_ID = oad.OFFICE_ADDRESS_ID
WHERE oa.TRADE_FIRM = @trade_firm
AND o.CHANNEL != ( SELECT f.channel
FROM dbo.FIRM AS f
INNER JOIN dbo.FIRM_ALIAS AS fa
ON f.FIRM_ID = fa.FIRM_ID
WHERE fa.trade_firm = @trade_firm)--will return the channel of the main firm
AND NOT EXISTS (SELECT 1
FROM @tbl_zip_list AS ni
WHERE ni.zip_list = oad.POSTAL_CODE_1)
This is not possible; you need to convert your string variable into a table variable in order to use it with the `IN` keyword.
One solution would be to pass a table-valued parameter from your application logic. That parameter could be an array-like object. Refer to [Using Table-Valued Parameters Arrays and Lists in SQL Server 2008](http://www.sommarskog.se/arrays-in-sql-2008.html).
If the above is not possible, one of the techniques I use, also described in the above article, is to split the comma-separated list of values into a table variable.
```
declare @trade_firm varchar(20)
declare @zip_list varchar(255)
---------------------------------
set @trade_firm = '0016986'
set @zip_list = '44654,15219,15219,15219' --this is my list of zip codes
--DECLARE TABLE VARIABLE
DECLARE @ziplist_table table(
zipcode varchar(10)
)
--TAKES COMMA SEPARATED LIST OF CODES AND INSERTS INTO TABLE VARIABLE FOR LATER USE
DECLARE @pos int, @nextpos int, @valuelen int
SELECT @pos = 0, @nextpos = 1
WHILE @nextpos > 0
BEGIN
SELECT @nextpos = charindex(',', @zip_list, @pos + 1)
SELECT @valuelen = CASE WHEN @nextpos > 0
THEN @nextpos
ELSE len(@zip_list) + 1
END - @pos - 1
INSERT @ziplist_table (zipcode)
VALUES (substring(@zip_list, @pos + 1, @valuelen))
SELECT @pos = @nextpos
END
```
Now you can use the table variable in the `WHERE` clause like this
```
WHERE oad.POSTAL_CODE_1 not in (SELECT zipcode FROM @ziplist_table)
``` | How to declare and set a variable equal to a list of values? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
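When the list is built in application code, a plain split plus a parameterized IN list avoids the manual CHARINDEX loop entirely. A sketch with SQLite via Python (table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE office_address (postal_code TEXT)")
conn.executemany("INSERT INTO office_address VALUES (?)",
                 [("44654",), ("15219",), ("99999",)])

zip_list = "44654,15219,15219,15219"
zips = zip_list.split(",")  # the split the T-SQL loop performs by hand

# One placeholder per value keeps everything parameterized.
placeholders = ",".join("?" for _ in zips)
rows = conn.execute(
    f"SELECT postal_code FROM office_address "
    f"WHERE postal_code NOT IN ({placeholders})",
    zips,
).fetchall()
print(rows)  # [('99999',)]
```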
I want to execute something like this:
```
select * from table1
inner join table2 on table1.numero=table2.numero
CASE WHEN table2.test = 'test'
THEN and table2.test = table1.test END
```
I mean: if table2.test = 'test', then apply the condition table2.test = table1.test.
thanks | Try this ...
```
SELECT *
FROM
table1
INNER JOIN
table2 ON
table1.numero=table2.numero AND
((Table2.Test = 'test' AND Table1.test = Table2.test) OR Table2.Test != 'test')
``` | When i execute this query
```
select * from table1
inner join table2 on table1.numero=table2.numero
and CASE WHEN table2.test = 'test' then table2.test else 1 end
= CASE WHEN table2.test = 'test' then table1.test else 1 end
```
I get an error like:
"Syntaxe incorrecte vers '='." (i.e., "Incorrect syntax near '='.")
And for the second query, why do we put an OR statement?
I think the correct form is:
```
SELECT *
FROM
table1
INNER JOIN
table2 ON
table1.numero=table2.numero AND
((Table2.Test = 'test' AND Table1.test = Table2.test))
```
thenk you | SQL - AND STATEMENT INSIDE RESULT CASE STATEMENT | [
"",
"sql",
"select",
"case",
"inner-join",
"conditional-statements",
""
] |
I am trying to remove double quotes `"` from a column in my SQL export and I get an error, after researching the proper way... this is one of the ways I have tried....
```
SELECT
'293453' as custnum,
REPLACE(Orders.Order_Comments, '"', '') as FULFILL1,
OrderDetails.OrderID as OrderID2, etc.
```
The resulting error is:
> Your SQL is invalid: Argument data type text is invalid for argument 1 of replace function. | Take a look at this answer: [SQL Server find and replace in TEXT field](https://stackoverflow.com/questions/2641068/sql-server-find-and-replace-in-text-field/2641280#2641280)
The `text` data type is not valid for use with `replace`, so you need to cast it as VARCHAR(MAX)
```
REPLACE(CAST(Orders.Order_Comments AS VARCHAR(MAX)), '"', '')
``` | Your `Orders.Order_Comments` column is of type [`text`](http://msdn.microsoft.com/en-us/library/ms187993.aspx). You can't use the REPLACE() function with that data type. It only works with char/varchar/nchar/nvarchar.
To fix this, the best thing to do is ALTER the table to use a `varchar(max)` column. The `text` type is deprecated, anyway. But, if that's not an option, you'll have to cast it in the query:
```
REPLACE(CAST(Orders.Order_Comments as varchar(max)), '"', '')
```
Note that this is potentially *very* slow. | SQL wont let me replace a quote in SELECT statement | [
"",
"sql",
""
] |
I had created a query to find if the value I am passing is alphanumeric or number
by using this query:
```
SELECT COUNT (*)
INTO l_col
FROM DUAL
WHERE REGEXP_LIKE (l_columnvalue, '^\d+(\.\d+)?$', '');
IF l_col > 0 THEN
NULL;
ELSE
l_columnvalue := '"' || l_columnvalue || '"';
END IF;
```
This code checks if the `l_columnvalue` is number and if it is then `l_col` will become greater than 0 otherwise the `l_columnvalue` will be enclosed in quotes `" "` .
Now I want to check if the `columnvalue` is a number and its length equals 7. If so, then it should not be in quotes `""`; otherwise, if it is a number and its length is less than or more than 7, it should be in quotes.
How can I use regex for this ?
**Example:**
If the `l_columnvalue` is `15679` then
query1 will return 1 and the `if` statement will be executed accordingly.
If it is `'abdg12'`, then query1 will return `null`, the else statement will be executed, and the output will be `"abdg12"`.
Now, if `l_columnvalue` is a number and its length is not 7, e.g. `123`,
then query1 should not return 1; instead the output should be `"123"`.
How do I go about it? | If you are planning on implementing this logic in PL/SQL (it looks like it), then a function (whether it's a standalone function or part of a package) could look something like this:
```
create or replace function nameTheFunction(
p_expr in varchar2 -- based on information you've provided
) return varchar2 -- an actual parameter may be either numeric
is -- or alphanumeric, thus varchar2
l_result varchar2(101); -- adjust the size as needed
l_isDigit boolean;
begin
l_isDigit := regexp_like(p_expr, '^[[:digit:]]+$');
return case
when (l_isDigit and length(p_expr) < 7) or (not l_isDigit)
then dbms_assert.enquote_name(p_expr)
else p_expr
end;
end;
```
There is no need to use `select into` statement, no need to do an extra switch between PL/SQL and SQL engines. `regexp_like()` function can be directly used inside a PL/SQL block:
Test case:
```
column res format a11
clear screen;
with t1(col) as(
select '1234567' from dual union all
select 'ah123' from dual union all
select '1234' from dual
)
select nameTheFunction(col) as res
from t1
```
Result:
```
RES
-----------
1234567
"AH123"
"1234"
``` | While you can use a regular expression to do this there are other options. For example, you can attempt to convert the string to NUMBER - if that succeeds, it's a number and you can check the length - if the conversion fails, it's not a number. For example:
```
DECLARE
strNumeric_value VARCHAR2(100) := '1234567890';
strNonnumeric_value VARCHAR2(100) := '123A56789o';
nValue NUMBER;
BEGIN
BEGIN
nValue := TO_NUMBER(strNumeric_value);
EXCEPTION
WHEN OTHERS THEN
nValue := NULL; -- not numeric
END;
DBMS_OUTPUT.PUT_LINE('strNumeric_value ' ||
CASE
WHEN nValue IS NOT NULL THEN
'IS NOT '
ELSE
'is '
END || 'numeric');
BEGIN
nValue := TO_NUMBER(strNonnumeric_value);
EXCEPTION
WHEN OTHERS THEN
nValue := NULL; -- not numeric
END;
DBMS_OUTPUT.PUT_LINE('strNonnumeric_value ' ||
CASE
WHEN nValue IS NOT NULL THEN
'IS NOT '
ELSE
'is '
END || 'numeric');
END;
```
Share and enjoy. | Regex to find length of the column value in sql | [
"",
"sql",
"regex",
"oracle",
"plsql",
""
] |
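The seven-digit rule itself is simple to state as a regular expression. A Python sketch of the quoting decision described in the question (the function name is made up):

```python
import re

def quote_unless_7_digit_number(value):
    # A full match of exactly seven digits stays bare; anything else is quoted.
    if re.fullmatch(r"\d{7}", value):
        return value
    return f'"{value}"'

print(quote_unless_7_digit_number("1234567"))  # 1234567
print(quote_unless_7_digit_number("ah123"))    # "ah123"
print(quote_unless_7_digit_number("123"))      # "123"
```

These are the same three cases exercised by the accepted answer's test query.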
I'm new to SPARK-SQL. Is there an equivalent to "CASE WHEN 'CONDITION' THEN 0 ELSE 1 END" in SPARK SQL ?
`select case when 1=1 then 1 else 0 end from table`
Thanks
Sridhar | **Before Spark 1.2.0**
The supported syntax (which I just tried out on Spark 1.0.2) seems to be
```
SELECT IF(1=1, 1, 0) FROM table
```
This recent thread <http://apache-spark-user-list.1001560.n3.nabble.com/Supported-SQL-syntax-in-Spark-SQL-td9538.html> links to the SQL parser source, which may or may not help depending on your comfort with Scala. At the very least the list of keywords starting (at time of writing) on line 70 should help.
Here's the direct link to the source for convenience: <https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala>.
**Update for Spark 1.2.0 and beyond**
As of Spark 1.2.0, the more traditional syntax is supported, in response to [SPARK-3813](https://issues.apache.org/jira/browse/SPARK-3813): search for "CASE WHEN" in the [test source](https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala). For example:
```
SELECT CASE WHEN key = 1 THEN 1 ELSE 2 END FROM testData
```
**Update for most recent place to figure out syntax from the SQL Parser**
The parser source can now be found [here](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4).
**Update for more complex examples**
In response to a question below, the modern syntax supports complex Boolean conditions.
```
SELECT
CASE WHEN id = 1 OR id = 2 THEN "OneOrTwo" ELSE "NotOneOrTwo" END AS IdRedux
FROM customer
```
You can involve multiple columns in the condition.
```
SELECT
CASE WHEN id = 1 OR state = 'MA'
THEN "OneOrMA"
ELSE "NotOneOrMA" END AS IdRedux
FROM customer
```
You can also nest CASE WHEN THEN expression.
```
SELECT
CASE WHEN id = 1
THEN "OneOrMA"
ELSE
CASE WHEN state = 'MA' THEN "OneOrMA" ELSE "NotOneOrMA" END
END AS IdRedux
FROM customer
``` | **For Spark 2.+**
[Spark when function](https://spark.apache.org/docs/2.0.2/api/java/org/apache/spark/sql/functions.html#when)
From documentation:
> Evaluates a list of conditions and returns one of multiple possible result expressions. If otherwise is not defined at the end, null is returned for unmatched conditions.
```
// Example: encoding gender string column into integer.
// Scala:
people.select(when(col("gender") === "male", 0)
.when(col("gender") === "female", 1)
.otherwise(2))
// Java:
people.select(when(col("gender").equalTo("male"), 0)
.when(col("gender").equalTo("female"), 1)
.otherwise(2))
``` | SPARK SQL - case when then | [
"",
"sql",
"apache-spark",
""
] |
Please help me in creating a stored procedure which accepts comma separated values and inserts as multiple rows.
So one parameter `@Name` will contain values A,B,C and the other parameter `@Id` will contain values as 1,2,3
The table values after insertion should be as below:
```
Name Id
------------
A 1
A 2
A 3
B 1
B 2
B 3
C 1
C 2
C 3
```
How can I write a stored procedure that can insert the comma-separated values as shown above. Also, If the table already consists of a Name,id pair for example, if A,2 is already there in the table, then it should not insert.
I am using SQL Server 2005. Thanks in advance. | Something like this?
```
DECLARE @var1 VARCHAR(100)='A,B,C';
DECLARE @var2 VARCHAR(100)='1,2,3';
WITH rep1(name, delim) AS
(
SELECT @var1 name, ',' delim
UNION ALL
SELECT LEFT(name, CHARINDEX(delim, name, 1) - 1) name, delim
FROM rep1
WHERE (CHARINDEX(delim, name, 1) > 0)
UNION ALL
SELECT RIGHT(name, LEN(name) - CHARINDEX(delim, name, 1)) name, delim
FROM rep1
WHERE (CHARINDEX(delim, name, 1) > 0)
)
,rep2(id, delim) AS
(
SELECT @var2 id, ',' delim
UNION ALL
SELECT LEFT(id, CHARINDEX(delim, id, 1) - 1) id, delim
FROM rep2
WHERE (CHARINDEX(delim, id, 1) > 0)
UNION ALL
SELECT RIGHT(id, LEN(id) - CHARINDEX(delim, id, 1)) id, delim
FROM rep2
WHERE (CHARINDEX(delim, id, 1) > 0)
)
INSERT #table
(Name
,ID)
SELECT
r1.name
,r2.id
FROM rep1 r1
CROSS JOIN rep2 r2
LEFT JOIN #table t
ON r2.id=t.id
AND t.name=r1.name
WHERE (CHARINDEX(r1.delim, r1.name, 1) = 0)
AND (CHARINDEX(r2.delim, r2.id, 1) = 0)
AND t.name IS NULL
ORDER BY r1.name
,r2.id
OPTION (MAXRECURSION 0);
``` | **Here we are separating the comma-separated values into rows**
```
IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
DROP TABLE #Temp
IF OBJECT_ID('tempdb..#NewTemp') IS NOT NULL
DROP TABLE #NewTemp
Declare @Testdata table ( name Varchar(max), Data varchar(max))
insert @Testdata select 'A', '1,2,3'
insert @Testdata select 'B', '1,2,3'
insert @Testdata select 'C', '1,2'
insert @Testdata select 'A', '1,2,3,4'
insert @Testdata select 'C', '1,2,3,4,5'
;with tmp(name, DataItem, Data) as (
select name, LEFT(Data, CHARINDEX(',',Data+',')-1),
STUFF(Data, 1, CHARINDEX(',',Data+','), '')
from @Testdata
union all
select name, LEFT(Data, CHARINDEX(',',Data+',')-1),
STUFF(Data, 1, CHARINDEX(',',Data+','), '')
from tmp
where Data > ''
)
```
**Then Inserting into Temp Table**
```
select DISTINCT name, DataItem INTO #Temp
from tmp WHERE EXISTS (Select DISTINCT name,DataItem from tmp)
order by name
```
**Here we control duplicate entries; observe that combinations such as (A,1), (B,1) won't repeat even though they occur multiple times**
```
CREATE TABLE #NewTemp(name Varchar(max), Data varchar(max))
INSERT INTO #NewTemp (name,Data)
Select name,DataItem from #Temp
Select * FROM #NewTemp
``` | Stored Procedure to Insert comma seperated values as multiple records | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
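The core of the task is a cartesian product of the two split lists with already-present pairs skipped. A sketch with SQLite via Python, where INSERT OR IGNORE plus a UNIQUE constraint plays the role of the LEFT JOIN duplicate check (SQL Server itself has no OR IGNORE; the table name is made up):

```python
import sqlite3
from itertools import product

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pairs (name TEXT, id INTEGER, UNIQUE(name, id))")
conn.execute("INSERT INTO pairs VALUES ('A', 2)")  # pre-existing pair

names = "A,B,C".split(",")
ids = [int(x) for x in "1,2,3".split(",")]

# product() generates all 9 (name, id) combinations; the existing ('A', 2)
# row is silently skipped thanks to the UNIQUE constraint.
conn.executemany("INSERT OR IGNORE INTO pairs VALUES (?, ?)",
                 product(names, ids))
count = conn.execute("SELECT COUNT(*) FROM pairs").fetchone()[0]
print(count)  # 9
```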
My requirements are to add a line to the chart below which represents averages per 8 weeks, which will be used multiple times throughout the report. At this point I am pulling out my hair trying to figure this one out. So far I have been using a temp table, looping through 8 weeks of data at a time, calculating the averages based on those 8 weeks, inserting those values into a lookup table, joining the table in my proc, and then dishing the report out via SSRS. It's not very maintainable, a lot of ugly code, and I am sure there are some performance hits along the way, though that is not my biggest concern.
Below is some sample data:
```
region yearnbr weeknbr value
A 2014 1 32
A 2014 2 77
A 2014 3 102
A 2014 4 84
A 2014 5 90
A 2014 6 90
A 2014 7 79
A 2014 8 103
A 2014 9 107
A 2014 10 110
A 2014 11 85
A 2014 12 120
A 2014 13 83
A 2014 14 79
A 2014 15 91
A 2014 16 101
A 2014 17 102
A 2014 18 103
A 2014 19 106
A 2014 20 95
...
```
Basically, what I am trying to do is add another field which would account for the averages on an 8-week rotation for each region, then use that field in the SSRS chart to generate an average with the respective values. Below is an example of what I am after. If there is a better solution, I would love to hear it... In this example, let's just assume we are on week 20.
```
region yearnbr weeknbr value average
A 2014 1 32 82
A 2014 2 77 82
A 2014 3 102 82
A 2014 4 84 82
A 2014 5 90 82
A 2014 6 90 82
A 2014 7 79 82
A 2014 8 103 82
A 2014 9 107 97
A 2014 10 110 97
A 2014 11 85 97
A 2014 12 120 97
A 2014 13 83 97
A 2014 14 79 97
A 2014 15 91 97
A 2014 16 101 97
A 2014 17 102 102
A 2014 18 103 102
A 2014 19 106 102
A 2014 20 95 102
...
```
Any suggestions or help would be greatly appreciated. Thanks.
Details:
SQL Server 2008 R2
SSRS | It's this simple in SQL Server 2005 and up:
```
SELECT
D.*,
Average8weeks =
Avg(D.value * 1.0) -- multiply by 1.0 if you don't want only integers
OVER (PARTITION BY D.region, D.yearnbr, (D.weeknbr - 1) / 8)
FROM
#Data D -- or whatever your table or query is
;
```
If you're using SQL Server 2000 or earlier, please let me know.
Here's a setup script for sample data to try this out:
```
SELECT *
INTO #Data
FROM
(VALUES
('A', 2014, 1, 15),
('A', 2014, 2, 30),
('A', 2014, 3, 25),
('A', 2014, 4, 45),
('A', 2014, 5, 15),
('A', 2014, 6, 30),
('A', 2014, 7, 25),
('A', 2014, 8, 45),
('A', 2014, 9, 10),
('A', 2014, 10, 54),
('B', 2014, 1, 63),
('B', 2014, 2, 10),
('B', 2014, 3, 54),
('B', 2014, 4, 63),
('B', 2014, 5, 54),
('B', 2014, 6, 63),
('B', 2014, 7, 54),
('B', 2014, 8, 63),
('B', 2014, 9, 54),
('B', 2014, 10, 63)
) D (region, yearnbr, weeknbr, value)
;
``` | If I understand your question correctly, your output would plot two line graphs on the same Y-axis, one with weekly data, the other a step function where for the first 8 weeks the value is the avg of weeks 1-8, and so on, correct?
If that is the case, then create a second set of records for the second series, and combine both series into one data set, then plot them in SSRS as a line graph with two series sharing the same axis. | Generate Rotating Average | [
"",
"sql",
"sql-server-2008",
"reporting-services",
"ssrs-2008",
""
] |
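The `(weeknbr - 1) / 8` bucketing from the accepted answer can be verified in SQLite, which also applies integer division to integer operands (the sample weeks and values are made up; window functions need SQLite >= 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE d (region TEXT, yearnbr INTEGER, weeknbr INTEGER, value INTEGER)"
)
conn.executemany("INSERT INTO d VALUES ('A', 2014, ?, ?)",
                 [(1, 10), (2, 30), (8, 20), (9, 40), (16, 60)])

# Integer division of (weeknbr - 1) by 8 puts weeks 1-8 in bucket 0,
# weeks 9-16 in bucket 1, and so on; AVG is then taken per bucket.
rows = conn.execute(
    """
    SELECT weeknbr,
           AVG(value * 1.0) OVER
             (PARTITION BY region, yearnbr, (weeknbr - 1) / 8) AS avg8
    FROM d ORDER BY weeknbr
    """
).fetchall()
print(rows)  # [(1, 20.0), (2, 20.0), (8, 20.0), (9, 50.0), (16, 50.0)]
```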
How can I use an IF statement in my stored procedure to check if my temporary table has rows?
I want to check if #TempDataTable has rows then I would do either of the two statements, insert or update depending on the output.
I did the following:
```
BEGIN
SELECT *
INTO #TempDataTable
FROM
(SELECT *
FROM BranchNameTable BT
LEFT JOIN BranchLocationActivate BLA ON BT.loc_id = BLA.loc_id
WHERE BT.Branchloc = 1;) as Q
if(//TempDataTable has 0 rows)
INSERT INTO BranchLocationActivate
VALUES(//my values)
ELSE
UPDATE BranchLocationActivate
SET //My values where //my values
END
```
How do I put it after my select statement in my procedure? | You could use `not exists`:
```
if (not exists (select 1 from #TempDataTable))
``` | Assuming that there aren't any SQL statements between `SELECT * INTO #TempDataTable` and `if(//TempDataTable has 0 rows)`, you don't even need the temp table in the first place. Instead, you should simplify the test to be just:
```
IF (NOT EXISTS(
SELECT *
FROM BranchNameTable BT
LEFT JOIN BranchLocationActivate BLA
ON BT.loc_id = BLA.loc_id
WHERE BT.Branchloc = 1
)
)
BEGIN
INSERT INTO BranchLocationActivate VALUES(//my values)
END;
ELSE
BEGIN
UPDATE BranchLocationActivate SET //My values where //my values
END;
```
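The same insert-or-update branching can be sketched end-to-end with Python's built-in sqlite3 (table and column names here are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE branch (loc_id INTEGER, active INTEGER)")

def save(loc_id, active):
    # the NOT EXISTS test: insert when no matching row exists, otherwise update
    (missing,) = con.execute(
        "SELECT NOT EXISTS (SELECT 1 FROM branch WHERE loc_id = ?)", (loc_id,)
    ).fetchone()
    if missing:
        con.execute("INSERT INTO branch VALUES (?, ?)", (loc_id, active))
    else:
        con.execute("UPDATE branch SET active = ? WHERE loc_id = ?", (active, loc_id))

save(1, 0)   # no row yet  -> INSERT
save(1, 1)   # row exists  -> UPDATE
rows = con.execute("SELECT loc_id, active FROM branch").fetchall()
print(rows)
```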
If there are statements between those parts that make use of the temp table, then you can simplify by using the information SQL Server already gives you after the DML statement via the `@@ROWCOUNT` variable:
```
DECLARE @RowsInserted INT;
SELECT *
INTO #TempDataTable
FROM BranchNameTable BT
LEFT JOIN BranchLocationActivate BLA
ON BT.loc_id = BLA.loc_id
WHERE BT.Branchloc = 1;
SET @RowsInserted = @@ROWCOUNT;
-- other statements
IF (@RowsInserted = 0)
BEGIN
INSERT INTO BranchLocationActivate VALUES(//my values)
END;
ELSE
BEGIN
UPDATE BranchLocationActivate SET //My values where //my values
END;
``` | Using IF statement to check if temporary table has rows in a stored procedure | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I've got a DB structure as is (simplified to maximum for understanding concern):
```
Table "entry" ("id" integer primary key)
Table "fields" ("name" varchar primary key, and others)
Table "entry_fields" ("entryid" integer primary key, "name" varchar primary key, "value")
```
I would like to get, for a given "entry.id", the detail of this entry, ie. all the "entry\_fields" linked to this entry, in a single SQL query.
An example would be better perhaps:
"fields":
```
"result"
"output"
"code"
"command"
```
"entry" contains:
```
id : 842
id : 850
```
"entry\_fields" contains:
```
entryid : 842, name : "result", value : "ok"
entryid : 842, name : "output", value : "this is an output"
entryid : 842, name : "code", value : "42"
entryid : 850, name : "result", value : "ko"
entryid : 850, name : "command", value : "print ko"
```
The wanted output would be:
```
| id | command | output | code | result |
| 842 | NULL | "this is an output" | 42 | ok |
| 850 | "print ko" | NULL | NULL | ko |
```
The aim is to be able to add a "field" without changing anything to "entry" table structure
I tried something like:
```
SELECT e.*, (SELECT name FROM fields) FROM entry AS e
```
but Postgres complains:
> ERROR: more than one row returned by a subquery used as an expression
Hope someone can help me! | ### Solution as requested
While stuck with this unfortunate design, the fastest query would be with `crosstab()`, provided by the additional module `tablefunc`. Ample details in this related answer:
* [PostgreSQL Crosstab Query](https://stackoverflow.com/questions/3002499/postgresql-crosstab-query/11751905#11751905)
For the question asked:
```
SELECT * FROM crosstab(
$$SELECT e.id, ef.name, ef.value
FROM entry e
LEFT JOIN entry_fields ef
ON ef.entryid = e.id
AND ef.name = ANY ('{result,output,code,command}'::text[])
ORDER BY 1, 2$$
,$$SELECT unnest('{result,output,code,command}'::text[])$$
) AS ct (id int, result text, output text, code text, command text);
```
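`crosstab()` is Postgres-specific (from the `tablefunc` module). Where it isn't available, the portable fallback is conditional aggregation, one `MAX(CASE WHEN ...)` per pivoted field. A sketch of that fallback on the question's sample data, using Python's built-in sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE entry (id INT);
    CREATE TABLE entry_fields (entryid INT, name TEXT, value TEXT);
    INSERT INTO entry VALUES (842), (850);
    INSERT INTO entry_fields VALUES
        (842, 'result', 'ok'),
        (842, 'output', 'this is an output'),
        (842, 'code', '42'),
        (850, 'result', 'ko'),
        (850, 'command', 'print ko');
""")

# one MAX(CASE ...) per pivoted field; missing fields come back as NULL
rows = con.execute("""
    SELECT e.id,
           MAX(CASE WHEN ef.name = 'command' THEN ef.value END) AS command,
           MAX(CASE WHEN ef.name = 'output'  THEN ef.value END) AS output,
           MAX(CASE WHEN ef.name = 'code'    THEN ef.value END) AS code,
           MAX(CASE WHEN ef.name = 'result'  THEN ef.value END) AS result
    FROM entry e
    LEFT JOIN entry_fields ef ON ef.entryid = e.id
    GROUP BY e.id
    ORDER BY e.id
""").fetchall()
print(rows)
```

The column list still has to be hard-coded; truly dynamic columns need `crosstab()` or dynamic SQL.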
### Database design
If you don't have a *huge* number of different fields, it will be **much simpler and more efficient** to merge all three tables into one simple table:
```
CREATE TABLE entry (
entry_id serial PRIMARY KEY
,field1 text
,field2 text
, ... more fields
);
```
Fields without values can be `NULL`. `NULL` storage is **very cheap** (basically 1 bit per column in the NULL bitmap):
* [How much disk-space is needed to store a NULL value using postgresql DB?](https://stackoverflow.com/questions/4229805/how-much-disk-space-is-needed-to-store-a-null-value-using-postgresql-db/7654497#7654497)
* [Do nullable columns occupy additional space in PostgreSQL?](https://stackoverflow.com/questions/12145772/do-nullable-columns-occupy-additional-space-in-postgresql/12147130#12147130)
Even if you have hundreds of different columns, and only few are filled per entry, this will still use much less disk space.
Your query becomes trivial:
```
SELECT entry_id, result, output, code, command
FROM entry;
```
If you have too many columns¹, and that's not just a misguided design (often, this can be folded into much fewer columns), consider the data types [**`hstore`**](http://www.postgresql.org/docs/current/interactive/hstore.html) or [`json`](http://www.postgresql.org/docs/current/interactive/datatype-json.html) / [`jsonb`](http://www.postgresql.org/docs/9.4/interactive/datatype-json.html) (in Postgres 9.4) for [**EAV**](https://stackoverflow.com/tags/entity-attribute-value/info) storage.
¹ [Per Postgres "About" page](http://www.postgresql.org/about/):
```
Maximum Columns per Table 250 - 1600 depending on column types
```
Consider this related answer with alternatives:
* [Use case for hstore against multiple columns](https://stackoverflow.com/questions/21560508/use-case-for-hstore-against-multiple-columns/21561917#21561917)
And this question about typical use cases / problems of EAV structures on dba.SE:
* [Is there a name for this database structure?](https://dba.stackexchange.com/questions/20759/is-there-a-name-for-this-database-structure) | Dynamic SQL:
```
CREATE TABLE fields (name varchar(100) PRIMARY KEY)
INSERT INTO FIELDS VALUES ('RESULT')
INSERT INTO FIELDS VALUES ('OUTPUT')
INSERT INTO FIELDS VALUES ('CODE')
INSERT INTO FIELDS VALUES ('COMMAND')
CREATE TABLE ENTRY_fields (ENTRYID INT, name varchar(100), VALUE VARCHAR(100), CONSTRAINT PK PRIMARY KEY(ENTRYID, name))
INSERT INTO ENTRY_fields VALUES(842, 'RESULT', 'OK')
INSERT INTO ENTRY_fields VALUES(842, 'OUTPUT', 'THIS IS AN OUTPUT')
INSERT INTO ENTRY_fields VALUES(842, 'CODE', '42')
INSERT INTO ENTRY_fields VALUES(850, 'RESULT', 'KO')
INSERT INTO ENTRY_fields VALUES(850, 'COMMAND', 'PRINT KO')
CREATE TABLE ENTRY (ID INT PRIMARY KEY)
INSERT INTO ENTRY VALUES(842)
INSERT INTO ENTRY VALUES(850)
DECLARE @COLS NVARCHAR(MAX), @SQL NVARCHAR(MAX)
select @Cols = stuff((select ', ' + quotename(dt)
from (select DISTINCT name as dt
from fields) X
FOR XML PATH('')),1,2,'')
PRINT @COLS
SET @SQL = 'SELECT * FROM (SELECT id, f.name, value
from fields F CROSS join ENTRY LEFT JOIN entry_fields ef on ef.name = f.name AND ID = ef.ENTRYID
) Y PIVOT (max(value) for name in ('+ @Cols +'))PVT '
--print @SQL
exec (@SQL)
```
If you think your values are going to be constant in the fields table:
```
SELECT * FROM (SELECT id, f.name ,value
from fields F CROSS join ENTRY LEFT JOIN entry_fields ef on ef.name = f.name AND ID = ef.ENTRYID
) Y PIVOT (max(value) for name in ([CODE], [COMMAND], [OUTPUT], [RESULT]))PVT
```
Query that may work with postgresql:
```
SELECT ID, MAX(CODE) as CODE, MAX(COMMAND) as COMMAND, MAX(OUTPUT) as OUTPUT, MAX(RESULT) as RESULT
FROM (SELECT ID,
CASE WHEN f.name = 'CODE' THEN VALUE END AS CODE,
CASE WHEN f.name = 'COMMAND' THEN VALUE END AS COMMAND,
CASE WHEN f.name = 'OUTPUT' THEN VALUE END AS OUTPUT,
CASE WHEN f.name = 'RESULT' THEN VALUE END AS RESULT
from fields F CROSS join ENTRY LEFT JOIN entry_fields ef on ef.name = f.name AND ID = ENTRYID
) Y
GROUP BY ID
``` | SQL : Create a full record from 2 tables | [
"",
"sql",
"database",
"postgresql",
"database-design",
"entity-attribute-value",
""
] |
I have a report that needs a duplicate of the information in order to generate 2 sheets (original and copy).
So I would like to know how to do it considering the following SQL:
```
SELECT
emp.name,
emp.lastname,
emp.birthdate,
emp.gender
FROM employee emp
WHERE employeeid = 1
```
> The above query will return just 1 result (1 row) and I need the same information to be duplicated so I have 2 rows.
One way to do that is to do a `UNION ALL` with the same query like:
```
SELECT
emp.name,
emp.lastname,
emp.birthdate,
emp.gender
FROM employee emp
WHERE employeeid = 1
UNION ALL
SELECT
emp.name,
emp.lastname,
emp.birthdate,
emp.gender
FROM employee emp
WHERE employeeid = 1
```
But I don't like to repeat the query.
> Is there any other way to achieve this? | Similiar logic to Jaaz Cole, but it shouldn't matter what SQL flavour you have. Simple logic, join the table to a 2 row set...it'll bring back 2 of each row.
```
SELECT
emp.name,
emp.lastname,
emp.birthdate,
emp.gender
FROM employee emp
inner join (select 1 as a union all select 2) a on 1 = 1
WHERE employeeid = 1
```
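To see the doubling in action, here is the same join-to-a-two-row-set trick as a runnable sketch in Python's built-in sqlite3 (sample row invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (employeeid INT, name TEXT)")
con.execute("INSERT INTO employee VALUES (1, 'Maria')")

# joining a one-row result to a two-row derived table yields two copies
rows = con.execute("""
    SELECT emp.name, copies.a
    FROM employee emp
    JOIN (SELECT 1 AS a UNION ALL SELECT 2) copies ON 1 = 1
    WHERE emp.employeeid = 1
""").fetchall()
print(rows)
```

The extra `copies.a` column even distinguishes the "original" row from the "copy", which can be handy when rendering the two sheets.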
Honestly I think it'd be easier to replicate two sheets on the sheet level instead of the data level like this...seems a bit odd in practice to me at any rate. | This is only going to work for SQL Server 2005+ (you didn't tag a version on your question), but should duplicate your results quickly with a constant scan.
```
SELECT
emp.name,
emp.lastname,
emp.birthdate,
emp.gender
FROM employee emp
CROSS APPLY ( SELECT 1 AS R UNION ALL SELECT 2 AS R) N
WHERE employeeid = 1
``` | SQL Repeate or Duplicate results | [
"",
"sql",
"sql-server",
""
] |
Oracle DB
Table1 looks like so, it is strictly one to one, no duplicates...
```
Row Primary Secondary
--------------------------------
1 1 2
2 3 4
3 5 6
```
Table2 has the corresponding IDs with multiple names for each.
```
ID Name
------------------------
1 Server1
2 Server2
3 Server3
3 Server4
3 Server5
4 Server6
4 Server7
5 Server8
6 Server9
6 Server10
```
Now, this is what I am doing right now, simplified...
```
Select t1.row row,
t2.name p_name,
t2a.name s_name
From table1 t1
left join table2 t2 on t1.primary = t2.id
left join table2 t2a on t1.secondary = t2a.id
```
This gives me a result like this, which I know is correct for the code I used:
```
Row p_name s_name
-------------------------------
1 Server1 Server2
2 Server3 Server6
2 Server3 Server7
2 Server4 Server6
2 Server4 Server7
2 Server5 Server6
2 Server5 Server7
3 Server8 Server9
3 Server8 Server10
```
What I want, and cannot figure out how to do... is get this result:
```
Row p_name s_name
-------------------------------
1 Server1 Server2
2 Server3 Server6
2 Server4 Server7
2 Server5 null
3 Server8 Server9
3 null Server10
```
In essence... I want to be able to pair up values from a 1 to 1 relationship, where there could be 1, 2, 3 or more for each value, but I don't want all the combinations, just the 1, 2 or 3, and then the other set of 1, 2 or 3, and a null value for the numbers are no equal. I am obviously new to SQL and have searched around, I just can't figure out what to try next. | You need another join conditions that is a sequence number. Fortunately, you can get this using `row_number()`. This is quite close to what you want:
```
Select t1.row as row,
t2.name as p_name,
t2a.name as s_name
From table1 t1 left join
(select t2.*, row_number() over (partition by t2.id order by t2.id) as seqnum
from table2 t2
) t2
on t1.primary = t2.id left join
(select t2.*, row_number() over (partition by t2.id order by t2.id) as seqnum
from table2 t2
) t2a
on t1.secondary = t2a.id and t2.seqnum = t2a.seqnum;
```
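The pairing trick hinges on `seqnum`: equal sequence numbers join one-to-one instead of producing a cross product. Here is a runnable sketch of the same idea using Python's built-in sqlite3, with `seqnum` emulated by a correlated count (and ordered by name, for determinism) so it does not depend on window-function support — on Oracle you would use `row_number()` exactly as above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (rownum INT, prim INT, sec INT);
    CREATE TABLE t2 (id INT, name TEXT);
    INSERT INTO t1 VALUES (2, 3, 4);
    INSERT INTO t2 VALUES (3, 'Server3'), (3, 'Server4'), (3, 'Server5'),
                          (4, 'Server6'), (4, 'Server7');
""")

# seqnum numbers each id's names 1, 2, 3...; matching seqnums pair rows up
# one-to-one instead of producing every combination
rows = con.execute("""
    WITH numbered AS (
        SELECT id, name,
               (SELECT COUNT(*) FROM t2 x
                 WHERE x.id = t2.id AND x.name <= t2.name) AS seqnum
        FROM t2
    )
    SELECT t1.rownum, p.name, s.name
    FROM t1
    LEFT JOIN numbered p ON t1.prim = p.id
    LEFT JOIN numbered s ON t1.sec = s.id AND s.seqnum = p.seqnum
    ORDER BY p.seqnum
""").fetchall()
print(rows)
```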
Unfortunately, it doesn't handle the situation where the second list is longer than the first. I think this should work:
```
Select t1.row as row,
t2.name as p_name,
t2a.name as s_name
From table1 t1 left join
(select t2.*, row_number() over (partition by t2.id order by t2.id) as seqnum
from table2 t2
) t2
on t1.primary = t2.id full outer join
(select t2.*, row_number() over (partition by t2.id order by t2.id) as seqnum
from table2 t2
) t2a
on t1.secondary = t2a.id and t2.seqnum = t2a.seqnum;
``` | You can use the [`rank()`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions123.htm) function to create an "order" in the duplicate names, and then use it as part of your join condition:
```
SELECT t1.row,
primary_names.name AS p_name,
secondary_names.name AS s_name
FROM table1 t1
JOIN (SELECT id, name, RANK() OVER (PARTITION BY name ORDER BY id ASC) AS rk
FROM table2) primary_names
ON t1.primary = primary_names.id
LEFT JOIN (SELECT id, name, RANK() OVER (PARTITION BY name ORDER BY id ASC) AS rk
FROM table2) secondary_names
ON t1.secondary = secondary_names.id AND
primary_names.rk = secondary_names.rk
``` | SQL joining a table containing multiple/duplicate values, to both fields on a table that is 1 to 1 | [
"",
"sql",
"oracle",
"join",
""
] |
*NB: Not being a savvy SQL coder I admit that my googling prior to asking might be flawed. Please have patience if I missed something obvious to others.*
I have a *select* statement, which gives a single row or a multiple rows as a reply. Then, I'm executing a different *select* statement with a parameter that is fetched from the first row. Then, I execute the very same *select* statement again but with the parameter fetched from the second row and so on.
I'd like to put that into a single statement in SQL but, regrettably enough, I lack the skills for that. How do I do that?
At the moment, I'm fetching the first set of data separately. Then, I do some C# magic and execute each of the following, parametrized, statements individually. Waaay not neat...
The issue can be restated as follows. Of course, that's not how I'll solve it in the end (there'll be an SP doing that) but it shows the approach we use today, which we're quite uncomfortable with.
```
String mainCommand = "select Index from SomeTable";
...
foreach(String parameter in ...)
{
String subCommand =
"select RealValue from AnotherTable where Index = " + parameter;
...
}
``` | Let's say
```
table1 = {Id1, name, etc...}
table2 = {Id2, name, Id1, etc...}
```
What I understand you want is to, for example, do a select from table 1 to get some rows, and then do a different select with an item from those rows in table 2.
That can be achieved with:
```
SELECT * FROM TABLE2
WHERE ID1 IN (SELECT ID1 FROM TABLE1 WHERE <whatever condition you are looking for>)
```
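The same pattern as a runnable sketch with Python's built-in sqlite3 (table and column names are invented; `Index` is renamed because it is a reserved word in many dialects):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE SomeTable (idx INT);
    CREATE TABLE AnotherTable (idx INT, RealValue TEXT);
    INSERT INTO SomeTable VALUES (1), (3);
    INSERT INTO AnotherTable VALUES (1, 'one'), (2, 'two'), (3, 'three');
""")

# one statement replaces the per-row C# loop: the inner SELECT feeds the IN clause
rows = con.execute("""
    SELECT RealValue
    FROM AnotherTable
    WHERE idx IN (SELECT idx FROM SomeTable)
""").fetchall()
print(rows)
```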
This performs a select in Table 2 where the column Id1 is in a list from the second select (which only brings a list of Id's from Table1 that fulfill a condition) | You want to use a join operation and a subquery to perform your operation all at the same time.
Something like this:
```
SELECT [fields] from tbl_mainCommand
JOIN
(SELECT [fields] from tbl_subCommand)
on tbl_mainCommand.fieldKey = tbl_subCommand.fieldKey
```
Depending on how your tables are set up, the subquery may not be the most efficient way of doing this as it could result in a [correlated subquery](http://en.wikipedia.org/wiki/Correlated_subquery). But ultimately the nice thing about SQL and Set Theory is that you can perform a massive operation like this all at once as opposed to iterating over a number of parameters and running multiple SQL statements.
Note that the `on tbl...` line is your link between the tables. This is essentially where you are passing your parameter into your second table or subquery. If you can look at it more as a link between two different data sets as opposed to passing a parameter, you are more likely to avoid the correlated subquery. | How to SELECT based on another SELECT with multiple rows as response? | [
"",
"sql",
"sql-server",
""
] |
Sorry for the basicness of this query, but I need a bit of help learning here :)
This is a database for a Hotel.
Here are my tables:
* Hotel (**hotelNo**, hotelName, city)
* Room (**roomNo**, **hotelNo**, type, price)
* Booking (**hotelNo**, **guestNo**, **dateFrom**, dateTo, roomNo)
* Guest (**guestNo**, guestName, guestAddress)
Where the bolded are the primary keys. I realise it isn't the best design to have a concatenated primary key for Bookings, but that is the way it is here.
I need a query for the following:
How many different guests have made a booking for August? | ```
SELECT COUNT (DISTINCT guestno)
FROM booking
WHERE MONTH(datefrom) = 8
``` | You don't actually need a JOIN:
```
SELECT COUNT(DISTINCT guestNo)
FROM booking
WHERE dateTo >= CONCAT(YEAR(CURDATE()),'-08')
AND dateFrom < CONCAT(YEAR(CURDATE()),'-09');
```
This will include bookings that cover August entirely, or start or finish in August, which I don't think the other answers cover.
I also assumed that you only wanted this for the current year. | Basic SQL query | [
"",
"mysql",
"sql",
""
] |
This question is surprisingly hard to describe using English, so I'll give some examples.
I have a table named Games. Each game consists of game numbers and players, represented in two columns: GameNum, PlayerNum.
My question is, I want to select pairs of players who have **only** played in games with each other, and **no one else**.
Here is some sample data:
```
GameNum PlayerNum
1 100
1 101
2 102
2 103
3 102
3 104
4 105
4 106
5 106
5 107
6 100
6 101
```
I'm looking to return the results:
```
PlayerNum1 PlayerNum2
100 101
```
This is because we can see that players 100 and 101 are the only players to have played games with one another, and no one else. 102 has also played with 104, so we exclude 102 and 104. And while 105 has only played a game with 106, 106 has also played a game with 107, so we exclude both players 105 and 106 (and hence 107) from the results. This leaves us with only players 100 and 101.
Each GameNum will only occur twice in the table (i.e. each game will always have exactly two players). Also note that we don't care if players have played multiple games together (e.g. GameNum 1 and 6) - provided that they have only played these games with each other.
I have tried something like the following query using min/max, but I can't figure out how to exclude players who have played games with other players.
```
SELECT *
FROM
(
SELECT AU1.PlayerNum AS PlayerNum1, AU2.PlayerNum AS PlayerNum2
FROM
(
SELECT GameNum, MIN(PlayerNum) AS PlayerNum
FROM GAMES
GROUP BY GameNum
HAVING count(GameNum) = 2
) AU1
INNER JOIN
(
SELECT GameNum, MAX(PlayerNum) AS PlayerNum
FROM GAMES
GROUP BY GameNum
HAVING count(GameNum) = 2
) AU2
ON AU2.GameNum = AU1.GameNum
) T2
GROUP BY T2.PlayerNum1, T2.PlayerNum2
ORDER BY T2.PlayerNum1, T2.PlayerNum2;
```
Many thanks! :)
EDIT: Here is the query to create a basic version of the table with the above data:
```
CREATE TABLE Games
(
GameNum int,
PlayerNum int
);
INSERT INTO Games (GameNum, PlayerNum) VALUES (1, 100);
INSERT INTO Games (GameNum, PlayerNum) VALUES (1, 101);
INSERT INTO Games (GameNum, PlayerNum) VALUES (2, 102);
INSERT INTO Games (GameNum, PlayerNum) VALUES (2, 103);
INSERT INTO Games (GameNum, PlayerNum) VALUES (3, 102);
INSERT INTO Games (GameNum, PlayerNum) VALUES (3, 104);
INSERT INTO Games (GameNum, PlayerNum) VALUES (4, 105);
INSERT INTO Games (GameNum, PlayerNum) VALUES (4, 106);
INSERT INTO Games (GameNum, PlayerNum) VALUES (5, 106);
INSERT INTO Games (GameNum, PlayerNum) VALUES (5, 107);
INSERT INTO Games (GameNum, PlayerNum) VALUES (6, 100);
INSERT INTO Games (GameNum, PlayerNum) VALUES (6, 101);
``` | try something like this (mysql syntax):
```
select distinct least(t1.p1, t2.p2), greatest(t1.p1, t2.p2)
from
(
select p1, max(p2) as p2
from (select min(PlayerNum) as p1, max(PlayerNum) as p2 from GAMES group by GameNum union select max(PlayerNum) as p1, min(PlayerNum) as p2 from GAMES group by GameNum) as q1
group by q1.p1
having count(distinct p2)=1
) as t1
,
(
select min(p1) as p1, p2
from (select min(PlayerNum) as p1, max(PlayerNum) as p2 from GAMES group by GameNum union select max(PlayerNum) as p1, min(PlayerNum) as p2 from GAMES group by GameNum) as q2
group by q2.p2
having count(distinct p1)=1
) as t2
where t1.p1=t2.p1 and t1.p2=t2.p2
```
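The same "played with exactly one partner" idea can be restated in portable SQL (SQLite, for instance, spells `least`/`greatest` as the two-argument scalar `min()`/`max()`, and per-game `MIN`/`MAX` aggregates give the same ordered pairs). A runnable sketch using Python's built-in sqlite3 against the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Games (GameNum INT, PlayerNum INT)")
con.executemany("INSERT INTO Games VALUES (?, ?)",
                [(1, 100), (1, 101), (2, 102), (2, 103), (3, 102), (3, 104),
                 (4, 105), (4, 106), (5, 106), (5, 107), (6, 100), (6, 101)])

# each game becomes an ordered pair (p1 < p2); a pair survives only if
# neither member ever appears in any other pairing
rows = con.execute("""
    WITH pairs AS (
        SELECT MIN(PlayerNum) AS p1, MAX(PlayerNum) AS p2
        FROM Games GROUP BY GameNum
    )
    SELECT DISTINCT p1, p2 FROM pairs
    WHERE p1 NOT IN (SELECT p1 FROM pairs GROUP BY p1 HAVING COUNT(DISTINCT p2) > 1)
      AND p2 NOT IN (SELECT p2 FROM pairs GROUP BY p2 HAVING COUNT(DISTINCT p1) > 1)
      AND p1 NOT IN (SELECT p2 FROM pairs)  -- p1 never the larger member elsewhere
      AND p2 NOT IN (SELECT p1 FROM pairs)  -- p2 never the smaller member elsewhere
""").fetchall()
print(rows)
```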
<http://sqlfiddle.com/#!2/4d9c4/9>
main idea:
* select all player1 who had played only with 1 another player
* select all player2 who had played only with 1 another player
join both sets to find out players who played only with each other | ```
with t as (
select distinct min(playernum) a, max(playernum) b
from games group by gamenum
)
select x.a PlayerNum1, x.b PlayerNum2
from t x left join t y
on not (x.a=y.a and x.b=y.b) and (y.a in (x.a,x.b) or y.b in (x.a,x.b))
where y.a is null
```
[fiddle](http://sqlfiddle.com/#!4/362fc/1) | Excluding pairs of values occurring non-exclusively | [
"",
"sql",
"oracle",
""
] |
I currently have a minutes column (`bigint`) with values like
```
390
```
I want to convert these minutes to hours. Is there any function to convert them?
```
Eg: 390 as 6.30
``` | You can have a PostgreSQL function for that:
```
CREATE OR REPLACE FUNCTION fn_min_to_hrs(mins int)
RETURNS numeric AS
$BODY$
select cast(date_part('hours',interval '1 minute' * mins) * 1.0 +
(date_part('minutes',interval '1 minute' * mins) * .01) as numeric(18,2));
$BODY$
LANGUAGE sql VOLATILE
```
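Note the convention here: the function packs hours into the integer part and minutes into the two decimal places, so `6.30` means 6 hours 30 minutes, not 6.5 hours. If you ever need the same conversion outside the database, it is a one-liner; an illustrative sketch in Python:

```python
def minutes_to_hhmm(minutes: int) -> str:
    # hours in the integer part, minutes in the two decimal places
    hours, mins = divmod(minutes, 60)
    return f"{hours}.{mins:02d}"

print(minutes_to_hhmm(390))  # 6.30
print(minutes_to_hhmm(61))   # 1.01
```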
---
```
select fn_min_to_hrs(390)
Result
+-------------+
|fn_min_to_hrs|
|numeric(18,2)|
+-------------+
| 6.30 |
+-------------+
``` | Something like this should work:
```
SELECT TO_CHAR('390 minute'::interval, 'HH24:MI')...
```
From an answer to a related question: [PostgreSQL - How to convert seconds in a numeric field to HH:MM:SS](https://stackoverflow.com/questions/2905692/postgresql-how-to-convert-seconds-in-a-numeric-field-to-hhmmss) | Convert Minutes to Hours in PostgreSQL | [
"",
"sql",
"postgresql",
""
] |
This code returns a row count of zero, but there are 2 rows in the `appointment` table.
The msgbox I commented out was there to check that the date and its format are correct; it shows the date as 2014/08/09. The appointment date in the database is 2014/08/09 for 2 records (the only 2 records). The record count variable shows 0.
The table name (copied directly via cut and paste) is `Appointments` and the column is `AppointmentDate`.
The `ConnectDatabase` subroutine connects to the database successfully; I use it whenever I connect to the database, and it works when I connect to other tables before running this code, so it's correct.
`Command.text` contains
```
SELECT * FROM Appointments WHERE AppointmentDate = 2014/08/09
```
Don't know what other details to specify.
```
Private Sub frmAppointments_Load(sender As Object, e As EventArgs) Handles MyBase.Load
'load appointments
LoadAppointments(dtpAppointmentDate.Value.Date)
End Sub
Public Sub LoadAppointments(whichdate As Date)
Dim sqlcmd As New OleDb.OleDbCommand
'set connection
ConnectDatabase()
With frmAppointments
'MsgBox(whichdate)
M_connDB.Open()
'fetch records from database
sqlcmd.Connection = M_connDB
sqlcmd.CommandText = "SELECT * FROM Appointments WHERE AppointmentDate = " & whichdate
.dataAdapterAppointments = New OleDb.OleDbDataAdapter(sqlcmd.CommandText, M_connDB)
'first clear data table to prevent duplicates
.dataTableAppointments.Rows.Clear()
.dataAdapterAppointments.Fill(.dataTableAppointments)
M_connDB.Close()
Dim rowindex As String
Dim iy As Long
Dim recordcount As Long
'check if any records exist
recordcount = .dataTableAppointments.Rows.Count
If Not recordcount = 0 Then
For iy = 0 To .dataTableAppointments.Rows.Count
For Each row As DataGridViewRow In .dtgrdAppointments.Rows
If row.Cells(0).Value = .dataTableAppointments.Rows(iy).Item(6) Then
rowindex = row.Index.ToString()
MsgBox(.dtgrdAppointments.Rows(rowindex).Cells(0).Value, vbInformation + vbOKOnly, "MSG")
Exit For
End If
Next
Next iy
Else
MsgBox("No Appointments for selected date.", vbInformation + vbOKOnly, "No Appoinments")
End If
End With
``` | Use SQL parameters instead of string concatenation. This should work in MS Access:
```
sqlcmd.CommandText = "SELECT * FROM Appointments WHERE AppointmentDate = ?"
sqlcmd.Parameters.AddWithValue("AppointmentDate", whichdate)
```
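For comparison, here is the same broken-vs-parameterized pair as a runnable sketch in Python's built-in sqlite3 (the schema is invented to mirror the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Appointments (AppointmentDate TEXT)")
con.execute("INSERT INTO Appointments VALUES ('2014/08/09')")

whichdate = "2014/08/09"

# concatenation yields WHERE AppointmentDate = 2014/08/09, which the engine
# evaluates as integer division (2014 / 8 / 9), so nothing matches
broken = con.execute(
    "SELECT * FROM Appointments WHERE AppointmentDate = " + whichdate
).fetchall()

# a placeholder passes the value as data; quoting is the driver's job
fixed = con.execute(
    "SELECT * FROM Appointments WHERE AppointmentDate = ?", (whichdate,)
).fetchall()
print(len(broken), len(fixed))
```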
This protects you from conversion and localization issues and, even more important, SQL injection. | `2014/08/09` doesn't have quotes around it, making it a math expression (2014 divided by 8 divided by 9). Of course, your table has no rows with a date matching the result of that expression. But don't put in the quotes. Instead, add a parameter.
"",
"sql",
"vb.net",
"ms-access",
""
] |
This is my tables (many to many relationship).
Their goal - to keep a history of status changes for a different requests.
(I drop from illustration all the fields what dont make sense for a question)
1.**Requests**
```
id (primary, ai) | name (varchar) | phone (varchar) | text (varchar)
1 | Maria | 9232342323 | "fist text"
2 | Petr | 2342342323 | "second text"
3 | Lenny | 2342342323 | "third text"
```
2.**Statuses**
```
id (primary, ai) | title (varchar)
1 | New
2 | In progress
3 | Abort
```
3.**Requests\_has\_Statuses**
```
id (primary, ai) | requests_id (fk) | statuses_id (fk) | comment (varchar) | date (timestamp)
1 | 1 | 1 | "Fist comment about a Maria's request" | 2014-08-08 12:24
2 | 1 | 2 | "Second comment about Maria" | 2014-08-08 12:26
3 | 1 | 2 | "Third comment about Maria" | 2014-08-08 12: 57
4 | 2 | 1 | "First comment about Petr" | 2014-08-08 13:23
5 | 3 | 1 | "First comment about Lenny" | 2014-08-08 13:45
6 | 2 | 3 | "Second comment about Petr" | 2014-08-08 14:00
```
My Goal is to have a select output like this:
```
1 | Maria | 9232342323 | "In progress" | "Third comment about Maria" | 2014-08-08 12: 57
2 | Petr | 2342342323 | "Abort" | "Second comment about Petr" | 2014-08-08 14:00
3 | Lenny | 2342342323 | "New" | "First comment about Lenny" | 2014-08-08 13:45
```
In simple words I need to output a current actual status and info about each request.
I was trying to do it like so:
```
SELECT r.name, r.phone, st.title, rhs.comment, rhs.date
FROM requests AS r
INNER JOIN Requests_has_Statuses AS rhs ON rhs.requests_id = r.id
INNER JOIN Statuses AS st ON rhs.statuses_id = st.id;
```
But I got an unnecessary duplication like:
```
1 | Maria | 9232342323 | "New" | "Fist comment about a Maria's request" | 2014-08-08 12:24
1 | Maria | 9232342323 | "In progress" | "Second comment about Maria" | 2014-08-08 12: 26
1 | Maria | 9232342323 | "In progress" | "Third comment about Maria" | 2014-08-08 12: 57
2 | Petr | 2342342323 | "New" | "First comment about Petr" |2014-08-08 13:23
3 | Lenny | 2342342323 | "New" | "First comment about Lenny" | 2014-08-08 13:45
2 | Petr | 2342342323 | "Abort" | "Second comment about Petr" | 2014-08-08 14:00
```
Could you help me with an advice or solution?
Thanks | Try this:
```
SELECT r.id, r.phone, st.title, rhs.comment, rhs.date
FROM (
    SELECT r.id, max(rhs.date) AS date
    FROM Requests_has_Statuses AS rhs
    INNER JOIN Requests r ON r.id = rhs.requests_id
    GROUP BY r.id) AS temp
INNER JOIN Requests AS r ON temp.id = r.id
INNER JOIN Requests_has_Statuses AS rhs ON rhs.requests_id = temp.id AND rhs.date = temp.date
INNER JOIN Statuses AS st ON rhs.statuses_id = st.id
``` | Add a filter to grab only the most recent status for each requestor:
```
SELECT r.name, r.phone, st.title, rhs.comment, rhs.date
FROM requests AS r
INNER JOIN Requests_has_Statuses AS rhs ON rhs.requests_id = r.id
INNER JOIN Statuses AS st ON rhs.statuses_id = st.id
where rhs.id in
(select max(id)
from Requests_has_Statuses AS rhs2
where rhs.requests_id = rhs2.requests_id
group by rhs2.requests_id
)
``` | How to avoid a duplicate row situation in mysql query below? | [
"",
"mysql",
"sql",
""
] |
How can this query produce 20 results instead of 10?
```
SELECT TOP 10 colA, colB, colC
FROM table
WHERE id in (1, 2)
ORDER BY colA, colB
```
I would like to see 10 results for id equal to 1 and another 10 for id equal to 2. Do I have to use a cursor? | This way you do not need to care about how many ids are in the `IN` clause.
```
;WITH MyCTE AS
(
SELECT colA,
colB,
colC,
ROW_NUMBER() OVER(PARTITION BY id ORDER BY colA, colB) AS rn
FROM table
WHERE id in (1, 2)
)
SELECT *
FROM MyCTE
WHERE rn<= 10
ORDER BY colA, colB
``` | Maybe `union all` will help you
```
SELECT TOP 10 colA, colB, colC FROM table WHERE id = 1
union all
SELECT TOP 10 colA, colB, colC FROM table WHERE id = 2
ORDER BY colA, colB
``` | T-SQL: retrieve n times a specific number of rows where n is defined in an "IN" clause | [
"",
"sql",
"sql-server",
"loops",
"cursor",
""
] |
I am working on a query which compares two tables and returns one or zero rows for each type of product. It returns zero rows if, after comparing the values of the two tables, the second table has at least one better value than the first, and it returns one row if the first table has the best values.
For Example I have two tables T1 and T2 with following rows
Table T1
```
Product Price Tax
Pen 10 2.25
Pen 5 1.25
Pen 15 1.5
Board 25 5.26
Board 2 NULL
Water 5 10
```
Table T2
```
Product Price Tax
Pen 8 2.5
Pen 12 4.2
Board NULL 4.26
Book 10 5
```
Upon running the SQL query on the above two tables the result should be like
```
Product Price Tax
Pen 15 NULL
Board 25 5.26
Water 5 10
```
The above results can be explained as
`Price` of `Pen` in `T1` is `15` which is the highest of two tables but `tax` for `pen` is highest in `T2` hence we return `NULL` for `tax` and `15` for `Price`
Similarly, `Board` has both `price` and `tax` highest in `T1`, so `25` and `5.26` are returned. `Water` has no entry in `T2`, so the highest values in `T1` are returned. For `Book`, no values are returned, as `T1` has no entry for `Book`.
I am using the below query, but it isn't yielding the expected results:
```
select t1.product,
(case when t1.price >= t2.price then t1.price
when t2.price > t1.price then NULL
else coalesce(t1.price, t2.price)
end) as price,
(case when t1.tax >= t2.tax then t1.tax
when t2.tax > t1.tax then NULL
else coalesce(t1.tax, t2.tax)
end) as tax
from table1 t1 left join
table2 t2
on t1.product = t2.product;
``` | You need to do grouping and aggregations. To be safe, I would use derived tables:
```
select t1.product,
(case when t1.price >= COALESCE(t2.price,0) then t1.price
else NULL
end) as price,
(case when t1.tax >= COALESCE(t2.tax,0) then t1.tax
else NULL
end) as tax
from (SELECT product, MAX(price) as price, MAX(tax) as tax FROM table1 GROUP BY product) t1 left join
(SELECT product, MAX(price) as price, MAX(tax) as tax FROM table2 GROUP BY product) t2
on t1.product = t2.product;
``` | I believe what you are looking for is
```
SELECT t1.Product,
CASE
WHEN MAX(t1.Price) >= MAX(t2.Price) THEN MAX(t1.Price)
WHEN MAX(t2.Price) > MAX(t1.Price) THEN NULL
ELSE COALESCE(MAX(t1.Price), MAX(t2.Price))
END AS Price,
CASE
WHEN MAX(t1.Price) >= MAX(t2.Price) AND MAX(t1.Tax) >= MAX(t2.Tax) THEN MAX(t1.Tax)
WHEN MAX(t1.Price) >= MAX(t2.Price) AND MAX(t1.Tax) < MAX(t2.Tax) THEN NULL
ELSE COALESCE(MAX(t1.Tax), MAX(t2.Tax))
END AS Tax
FROM #Temp t1
LEFT JOIN #Temp1 t2
ON t1.Product = t2.Product
GROUP BY t1.Product, t2.Product
``` | Unable Compare the values of two columns of two tables and returning the best row if it is in the first table | [
"",
"sql",
"sql-server",
""
] |
My problem is that I'm trying to store a image .tif in a sqlite database, I convert the image in base64String, but in the moment to execute the query give an exception
"SQL logic error or missing database
unrecognized token: "4eL2""
HERE is my code:
```
Bitmap bitmap =(Bitmap)System.Drawing.Image.FromFile(@"C:\Users\myuser\Desktop\Images\myimage.TIF");
ImageConverter converter = new ImageConverter();
byte [] b = (byte[])converter.ConvertTo(bitmap, typeof(byte[]));
string base64String = Convert.ToBase64String(b);
SaveImage(base64String);
void SaveImage(string pic)
{
string query = "insert into mytable (date, imagen) values (2014, "+pic+");";
string conString = @" Data Source = C:\Users\myuser\Documents\Visual Studio 2012\Projects\images.sqlite3";
SQLiteConnection con = new SQLiteConnection(conString);
SQLiteCommand cmd = new SQLiteCommand(query, con);
con.Open();
try
{
cmd.ExecuteNonQuery();
}
catch (SQLiteException exc1)
{
MessageBox.Show(exc1.Message);
}
con.Close();
}
``` | I'm use this way:
```
void SaveImage(byte[] imagen)
{
string conStringDatosUsuarios = @" Data Source = C:\Users\myuser\Documents\Visual Studio 2012\Projects\images.sqlite3";
SQLiteConnection con = new SQLiteConnection(conStringDatosUsuarios);
SQLiteCommand cmd = con.CreateCommand();
cmd.CommandText = "INSERT INTO imagenes (date, imagen) VALUES ('01/01/2014', @0);";
SQLiteParameter param = new SQLiteParameter("@0", System.Data.DbType.Binary);
param.Value = imagen;
cmd.Parameters.Add(param);
con.Open();
try
{
cmd.ExecuteNonQuery();
}
catch (Exception exc1)
{
MessageBox.Show(exc1.Message);
}
con.Close();
}
```
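The same parameterized-insert idea can be sketched with Python's built-in `sqlite3` module (the table and column names here just mirror the example above and are illustrative):

```python
import sqlite3

# An in-memory database stands in for the real .sqlite3 file.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE imagenes (date TEXT, imagen BLOB)")

image_bytes = b"\x49\x49\x2a\x00"  # first bytes of a little-endian TIFF header

# The ? placeholder keeps the binary data out of the SQL text entirely,
# so no token like "4eL2" ever reaches the parser.
con.execute("INSERT INTO imagenes (date, imagen) VALUES (?, ?)",
            ("01/01/2014", image_bytes))

stored = con.execute("SELECT imagen FROM imagenes").fetchone()[0]
print(stored == image_bytes)  # True
```

Whatever the language, the key point is the same: bind the bytes as a parameter rather than concatenating them into the SQL string.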
In the .sqlite3 database, the imagen column uses the BLOB datatype. | Since `pic` is string data, you need to wrap it in quotes in your query:
```
string query = "insert into mytable (date, imagen) values (2014, '"+pic+"');";
``` | unrecognized token: "4eL2" c# SQLITE | [
"",
"sql",
"sqlite",
"c#-4.0",
"exception",
""
] |
I have 2 tables. The 2nd table's 2 PK columns are referenced as 2 FK columns in the 1st table, where they are also part of the PK. I want to pull out the records that do not match.
The following query matches the records based on these 2 keys and returns 3828 rows. How do I do the opposite?
```
SELECT *
FROM hdb.addressesconsultants a
join orderconsultants o where a.CONSULTANT = o.CONSULTANT and a.ORDERNO = o.ORDERNO
```
below returns 3837 rows.
```
SELECT * FROM hdb.addressesconsultants a
```
@munguea05
This did what I was after: it returned 9 rows, which was the discrepancy. However, I have 4044 rows in `orderconsultants`. How do I resolve that, or grab the rows from `orderconsultants` that do not match `addressesconsultants`?
How can I delete these rows?
```
SELECT o.*
FROM hdb.orderconsultants o
LEFT OUTER JOIN addressesconsultants a
ON a.CONSULTANT = o.CONSULTANT AND a.OrderNo = o.OrderNo
WHERE a.CONSULTANT is NULL and a.OrderNo is NULL
``` | It depends if you want results from just one table that don't match, or results from both tables that don't match.
This gives you one direction; if you need the results from both, just do the same from the opposite direction.
```
SELECT *
FROM hdb.addressesconsultants a
LEFT OUTER JOIN orderconsultants o
ON A.consultant = o.Consultant AND a.OrderNo = o.OrderNo
WHERE o.Consultant is NULL and O.OrderNo is NULL
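-- To answer the follow-up ("How can I delete these rows?"): the orphaned rows
-- in orderconsultants can be removed with MySQL's multi-table DELETE syntax.
-- Shown commented out as a sketch -- back up the table before running it:
-- DELETE o
-- FROM hdb.orderconsultants o
-- LEFT OUTER JOIN addressesconsultants a
--   ON a.CONSULTANT = o.CONSULTANT AND a.OrderNo = o.OrderNo
-- WHERE a.CONSULTANT IS NULL AND a.OrderNo IS NULL;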
``` | You want those without an order?
Use a left join and consider only those that have NULL on the left joined table
```
SELECT *
FROM hdb.addressesconsultants a
LEFT JOIN orderconsultants o
ON a.CONSULTANT = o.CONSULTANT and a.ORDERNO = o.ORDERNO
WHERE
o.CONSULTANT IS NULL
``` | mysql opposite query of a join | [
"",
"mysql",
"sql",
""
] |
I am currently learning `postgresql` and have a noob question. If I have a simple table like this:
```
CREATE TABLE rectangle
(
width int,
length int
);
```
And a function like this:
```
CREATE OR REPLACE FUNCTION Area()
RETURNS int AS $area$
declare
area int;
BEGIN
SELECT (width*length) into area FROM rectangle;
return area ;
END;
$area$ LANGUAGE plpgsql;
```
And I call it like this:
```
select width, length, area() from rectangle;
```
I get results like this:
```
width | length | area
-------------------------
2 | 3 | 6
4 | 3 | 6
5 | 2 | 6
```
This shows that the area function is working, but it only uses the first row in the table instead of the corresponding row. What am I doing wrong? Thanks in advance! | ```
CREATE OR REPLACE FUNCTION Area(integer, integer)
RETURNS int AS $area$
declare
area int;
BEGIN
SELECT ($1 * $2) into area ;
return area ;
END;
$area$ LANGUAGE plpgsql;
```
And then on your query:
```
select width, length, area(width, length) from rectangle;
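-- With the sample data above, this now computes the area per row:
-- width | length | area
--   2   |   3    |  6
--   4   |   3    |  12
--   5   |   2    |  10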
``` | In your function you are getting the values from one undetermined row of the table. You need to pass the parameters to the function:
```
create or replace function area (
width int, length int
) returns int as $area$
select width * length;
$area$ language sql;
```
It can be plain SQL. | Postgresql function per row | [
"",
"sql",
"function",
"postgresql",
"parameters",
""
] |
Hey, I have a column called `DATE_Hour` in my table which is `varchar` and has values like
```
Tue, 29 Jul 2014 14:00
```
How can I convert that date into `2014-07-29 14:00:00.000`?
Thanks | If it consistently has the day of the week followed by a comma, you can use:
```
SELECT CAST(STUFF(DATE_Hour,1,CHARINDEX(',',DATE_Hour),'')AS DATETIME)
FROM YourTable
```
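As a quick sanity check of the target format outside the database, the same conversion in Python (here `strptime` consumes the weekday prefix directly, so no stripping is needed):

```python
from datetime import datetime

raw = "Tue, 29 Jul 2014 14:00"
# %a matches the abbreviated weekday name ("Tue").
parsed = datetime.strptime(raw, "%a, %d %b %Y %H:%M")
formatted = parsed.strftime("%Y-%m-%d %H:%M:%S.000")
print(formatted)  # 2014-07-29 14:00:00.000
```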
The format: '29 Jul 2014 14:00' will cast without issue, but the day of the week preceding that is not an accepted format, so you just use `STUFF()` to strip off the day then `CAST()` what remains. | ```
DECLARE @str VARCHAR(100) = 'Tue, 29 Jul 2014 14:00'
PRINT CONVERT(VARCHAR(100),CAST(RIGHT(@str,LEN(@str)-CHARINDEX(',',@str)) AS DATETIME),121)
``` | How to cast Date from varchar to DATETIME | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm having some problems with this query below. I have all my users in a table named `ItemInstances` and all my data for the users in tables named `ItemPropertyValuesInt`, `ItemPropertyValuesBool` etc.
This query works fine until I add the `IsBanned` join. Not all users have an entry in that table, so only 5 users are returned instead of 300. My understanding was that a `LEFT JOIN` would be the solution, but maybe I've misunderstood how to write it in this query.
Any additional optimisation tips for this query would also be welcome, as the database is HUGE.
```
SELECT ItemInstances.Id,
Relics.PropertyValue AS Hugo_Relics,
AllianceID.PropertyValue AS Hugo_AllianceID,
Faction.PropertyValue AS Hugo_Faction,
LevelXP.PropertyValue AS Hugo_Level,
WeeklyRelics.PropertyValue AS Hugo_WeeklyRelics,
UserName.PropertyValue AS Hugo_UserName,
ItemInstances.CreatorId,
IsBanned.PropertyValue
FROM dbo.ItemInstances
LEFT OUTER JOIN dbo.ItemPropertyValuesInt Faction
ON Faction.RecordId = ItemInstances.Id
LEFT OUTER JOIN dbo.ItemPropertyValuesInt LevelXP
ON LevelXP.RecordId = ItemInstances.Id
LEFT OUTER JOIN dbo.ItemPropertyValuesInt Relics
ON ItemInstances.Id = Relics.RecordId
LEFT OUTER JOIN dbo.ItemPropertyValuesInt AllianceID
ON ItemInstances.Id = AllianceID.RecordId
LEFT OUTER JOIN dbo.ItemPropertyValuesInt WeeklyRelics
ON WeeklyRelics.RecordId = ItemInstances.Id
LEFT OUTER JOIN dbo.ItemPropertyValuesString UserName
ON UserName.RecordId = ItemInstances.Id
LEFT OUTER JOIN dbo.ItemPropertyValuesBool IsBanned
ON IsBanned.RecordId = ItemInstances.Id
WHERE Relics.PropertyId = 541
AND AllianceID.PropertyId = 504
AND Faction.PropertyId = 520
AND LevelXP.PropertyId = 529
AND WeeklyRelics.PropertyId = 730
AND UserName.PropertyId = 554
AND IsBanned.PropertyId = 728
ORDER BY Hugo_Relics DESC
If there may be no matching records in the left-joined table, you need to account for that in any WHERE clause conditions. For example:
```
AND (IsBanned.PropertyId = 728 OR IsBanned.PropertyId IS NULL)
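-- Alternatively, keep the WHERE clause untouched and move the predicate into
-- the join condition, which preserves the outer join:
-- LEFT OUTER JOIN dbo.ItemPropertyValuesBool IsBanned
--     ON IsBanned.RecordId = ItemInstances.Id
--    AND IsBanned.PropertyId = 728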
``` | Even though you are left joining you are also selecting on
`AND IsBanned.PropertyId = 728`
The users that are not in the banned table have an `IsBanned.PropertyId` of NULL,
so you aren't selecting them. | sql JOIN is exists | [
"",
"sql",
"sql-server",
""
] |
I'm writing my first SQL CASE statement and I have done some research on them. Obviously the actual practice is going to be a little different from what I read because of context and things of that nature. I understand HOW they work; I am just having trouble forming mine correctly. Below is my draft of the SQL statement, where I am trying to return two values (either a code value from version A and its title, or a code value from version B and its title). I've been told that you can't return two values in one CASE statement, but I can't figure out how to rewrite this SQL statement to give me all the values that I need. Is there a way to use a CASE within a CASE (as in a CASE statement for each column)?
P.S. When pasting the code I removed the aliases just to make it more concise for the post
```
SELECT
CASE
WHEN codeVersion = A THEN ACode, Title
ELSE BCode, Title
END
FROM Code.CodeRef
WHERE ACode=@useCode OR BCode=@useCode
``` | A `case` statement can only return one value. You can easily write what you want as:
```
SELECT (CASE WHEN codeVersion = 'A' THEN ACode
ELSE BCode
END) as Code, Title
FROM Code.CodeRef
WHERE @useCode in (ACode, BCode);
``` | A `case` statement can only return a single column. In your scenario, that's all that is needed, as title is used in either outcome:
```
SELECT
CASE
WHEN codeVersion = 'A' THEN ACode
ELSE BCode
END as Code,
Title
FROM Code.CodeRef
WHERE ACode=@useCode OR BCode=@useCode
```
If you actually did need to apply the case logic to more than one column, then you'd need to repeat it. | SQL CASE returning two values | [
"",
"sql",
"case",
""
] |
I have a SQL question. I am relatively new to SQL, so please don't mind if this is too naive. I am working on updating values from one table to another in Oracle.
Table `Person`
```
PersonID BirthPlace
---------------------------
1 Madison,WI,USA
2 Chicago,IL
3 Houston,TX,USA
4 Madison,WI,USA
5 Madison,WI,USA
6 Houston,TX,USA
7 NULL
```
Table `PersonProfile`
```
PersonID CITY STATE COUNTRY
-------------------------------------------------
1 Madison WI USA
2 Chicago IL NULL
3 NULL NULL NULL
4 NULL WI NULL
5 NULL NULL USA
6 HOUSTON NULL NULL
7 NULL NULL NULL
```
I need to update table PersonProfile with values from table Person, but only the columns in PersonProfile that have NULL values; if both tables have a NULL value, I need to put 'Unknown'.
I can write separate update statements to update the values in each columns like for updating city :
```
Update PersonProfile PF
SET PF.CITY= (SELECT
CASE
WHEN PP.CITY LIKE '%MADISON%'
THEN 'MADISON'
WHEN PP.CITY LIKE '% Houston%'
THEN 'HOUSTON'
WHEN PP.CITY LIKE '% CHICAGO%'
THEN 'CHICAGO'
ELSE
'UNKNOWN'
END AS CITY
FROM PERSON PP
WHERE PF.PERSONID=PP.PERSONID
AND PF.CITY IS NULL)
```
and similar queries for updating the state,and country.
What my question is that is there any way I can write a single update statement for updating all three columns instead of updating them one by one? and also instead of using CASE statements if I can use the like operator in a decode function? | Use a multi-column update and regular expressions to parse the birthplace data on a comma.
```
update PERSONPROFILE pp
set (city, state, country) = (
select nvl(REGEXP_SUBSTR (BIRTHPLACE, '[^,]+', 1, 1), 'UNKNOWN'), -- city
nvl(REGEXP_SUBSTR (BIRTHPLACE, '[^,]+', 1, 2), 'UNKNOWN'), -- state
nvl(REGEXP_SUBSTR (BIRTHPLACE, '[^,]+', 1, 3), 'UNKNOWN') -- country
from PERSON p
where p.id = pp.id);
```
The regex matches 1 or more characters that are not a comma: the first occurrence for city, the 2nd occurrence for state, etc. Wrap it in NVL to set the value to "UNKNOWN" if the occurrence is NULL.
**UPDATE**: I have since learned the regex expression `'[^,]+'` should be avoided as it fails when there are NULL elements in the list and you are selecting that element or one after it. Use this instead as it allows for NULLs:
```
update PERSONPROFILE pp
set (city, state, country) = (
select nvl(REGEXP_SUBSTR(BIRTHPLACE, '([^,]*)(,|$)', 1, 1, NULL, 1), 'UNKNOWN'), -- city
       nvl(REGEXP_SUBSTR(BIRTHPLACE, '([^,]*)(,|$)', 1, 2, NULL, 1), 'UNKNOWN'), -- state
       nvl(REGEXP_SUBSTR(BIRTHPLACE, '([^,]*)(,|$)', 1, 3, NULL, 1), 'UNKNOWN')  -- country
from PERSON p
where p.id = pp.id);
```
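The difference between the two patterns can be seen outside the database with Python's `re` module (illustrative only; Oracle's regex engine behaves analogously for these patterns):

```python
import re

birthplace = "Houston,,USA"  # NULL (empty) state element

# '[^,]+' skips empty elements, so the 2nd match is 'USA' -- the country
# would land in the state slot.
print(re.findall(r"[^,]+", birthplace))  # ['Houston', 'USA']

# '([^,]*)(,|$)' allows empty elements, keeping each value in position.
parts = [m.group(1) for m in re.finditer(r"([^,]*)(,|$)", birthplace)]
print(parts[:3])  # ['Houston', '', 'USA']
```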
For more info see this post: [Split comma seperated values to columns](https://stackoverflow.com/questions/31464275/split-comma-seperated-values-to-columns/31464699#31464699) | Yes, either update a tuple like:
```
set (city, state, country) = (select case when pp.city like ... end as city
, case when ... end as state
, case ... end as country)
```
but I suspect that it would be better to create a table function (I assume these exist in Oracle) that splits the string into three columns using ',' as a separator, and then use MERGE (assuming this also exists in Oracle) to update. Something like:
```
merge into t2
using t1 (
select personid, table(split(birthplace))
from t1
) x (PersonID, CITY, STATE, COUNTRY)
on t2.person_id = x.personid
when matched
set t2.city = case when t2.city is null then x.city else t2.city end
, t2.state = ...
, ...
``` | SQl to Update columns when the fields are null | [
"",
"sql",
"oracle",
"decoding",
""
] |
I'm building the backoffice of my application and I want to get the 20 last users ordered in descending id.
I implemented an infinite scroll in my page, so each time the admin hits the bottom of the page, a new request is sent via Ajax to retrieve the 20 following users, etc, etc... until we get the first user.
To sum up, here are the several queries :
```
select * from users order by id desc limit 0, 20
select * from users order by id desc limit 20, 20
select * from users order by id desc limit 40, 20
...
```
But the problem is that I order users in descending order. So if a new user registers while I'm scrolling, I will have duplicate result each time I send a request.
Let's say my user list is the following and my limit is 3 (instead of 20)
```
1. Pierre
2. Paul
3. Jack
4. Jean
5. Mike
```
* The first query will give me: Mike, Jean, Jack
* The second one: Paul, Pierre
But if a user "Carla" registers between the 2 requests :
* The first query will still give me: Mike, Jean, Jack
* >> Carla registers and gets ID 6
* The second one: Jack, Paul, Pierre
The conclusion is that I will have the record "Jack" twice in my table.
This problem looks very ridiculous, but do you know how to avoid it?
PS : I don't know if you noticed it, but Facebook seems to have a similar problem as when you scroll down your news feed, you can see several times the same item. It must be a more complex problem but the idea is the same. | If you don't want to see the newly created user, the solution would be relatively simple. If ids is a auto sequence - you potentially capture the `max(id)` before running the retrieval queries, and update your query as -
```
select * from users where id <= max_id order by id desc limit 0, 20
```
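A runnable sketch of this max-id snapshot with Python's `sqlite3`, using the 5-user example from the question (the table layout is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO users (name) VALUES (?)",
                [("Pierre",), ("Paul",), ("Jack",), ("Jean",), ("Mike",)])

# Capture the snapshot boundary once, before the first page is served.
(max_id,) = con.execute("SELECT MAX(id) FROM users").fetchone()

def page(offset, limit=3):
    rows = con.execute(
        "SELECT name FROM users WHERE id <= ? ORDER BY id DESC LIMIT ?, ?",
        (max_id, offset, limit)).fetchall()
    return [name for (name,) in rows]

first = page(0)                                           # ['Mike', 'Jean', 'Jack']
con.execute("INSERT INTO users (name) VALUES ('Carla')")  # registers mid-scroll
second = page(3)                                          # ['Paul', 'Pierre'] -- no duplicate Jack
print(first, second)
```

Carla (id 6) is simply excluded until the admin reloads the page and a new snapshot boundary is captured.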
You may be able to do something similar with a timestamp, if there is a `last_created` column on the table.
If you somehow want to include the newly created user in your output without repeating results, it would require additional logic.
Hope this helps. | I think you basically have two realistic approaches to solving this. An unrealistic approach is to lock the table while browsing -- then new users can't go in. Another approach is to create a temporary table for each user when they start the page. Yuck!
The second is to put a timestamp in when users register. Then fetch this timestamp and use it for scrolling. You need to get the timestamp from the database, to avoid slight differences in times on the application side and database side. Something like:
```
select now() as as_of_date
```
Then put this in a variable and change the query to:
```
select * from users where CreationTime <= @as_of_date order by id desc limit 0, 20;
select * from users where CreationTime <= @as_of_date order by id desc limit 20, 20;
select * from users where CreationTime <= @as_of_date order by id desc limit 40, 20;
```
Note that in more recent versions of MySQL you can set the current time as a default for a `datetime` column (instead of having to use a `timestamp` for this purpose).
"",
"mysql",
"sql",
"pagination",
"ebean",
""
] |