| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a table with a column that contains a few null values. I want to add a `NOT NULL` constraint on that column without updating the existing nulls to a non-null value. I want to keep the existing null values and have future rows checked so that they contain a non-null value in this column. Is this possible? How? | You can add an unvalidated constraint - it will not look at existing rows, but it will be checked for any new or updated rows.
```
ALTER TABLE mytable MODIFY mycolumn NOT NULL NOVALIDATE;
```
Just be aware that you won't be able to update an existing row unless it satisfies the constraint.
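For readers without Oracle at hand, the new-rows-only behavior can be imitated in SQLite (which has no `NOVALIDATE`) with a `BEFORE INSERT` trigger — a rough sketch of the concept, not the Oracle feature itself; the table and trigger names are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, mycolumn TEXT)")
conn.execute("INSERT INTO mytable VALUES (1, NULL)")  # pre-existing NULL stays

# Reject NULLs only in newly inserted rows, like an unvalidated constraint
conn.execute("""
    CREATE TRIGGER mycolumn_not_null BEFORE INSERT ON mytable
    WHEN NEW.mycolumn IS NULL
    BEGIN SELECT RAISE(ABORT, 'mycolumn must not be null'); END
""")

conn.execute("INSERT INTO mytable VALUES (2, 'ok')")      # accepted
try:
    conn.execute("INSERT INTO mytable VALUES (3, NULL)")  # rejected
except sqlite3.DatabaseError as e:
    print("rejected:", e)
```

The old NULL row survives while the new one is refused, which is exactly the semantics the answer describes.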
Also, be aware of the downside that the optimizer will not be able to take advantage of this constraint in making its plans - it has to assume that some rows may still have a null. | UPDATE table_name
SET column_name = '0'
WHERE column_name IS NULL;
ALTER TABLE table_name
MODIFY COLUMN(column_name NUMBER CONSTRAINT constraint_identifier NOT NULL);
This is of course assuming that your column is a number but it's the same thing really, you would just change the '0' to a default value that isn't null. | How to add a not null constraint on column containing null values | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"constraints",
""
] |
I need to calculate the number of specific days (e.g. Wednesday & Thursday) between two dates.
I know about time function and all related ones, but I do not know how to use them in this context.
---
The problem was solved in two ways, but it seems I can't accept both solutions.
Thank you [Andrius Naruševičius](/users/1167953/andrius-narusevicius) and [Kickstart](/users/1723545/kickstart) for your enlightenment :D | Method taken and adapted from [here](https://stackoverflow.com/questions/9757919/count-days-between-two-dates-excluding-weekends-mysql-only)
```
@S = start date
@E = end date, not inclusive
@full_weeks = floor( ( @E-@S ) / 7)
@days = (@E-@S) - @full_weeks*7 OR (@E-@S) % 7
SELECT
@full_weeks*1 -- saturday
+IF( @days >= 1 AND weekday( @S+0 )=5, 1, 0 )
+IF( @days >= 2 AND weekday( @S+1 )=5, 1, 0 )
+IF( @days >= 3 AND weekday( @S+2 )=5, 1, 0 )
+IF( @days >= 4 AND weekday( @S+3 )=5, 1, 0 )
+IF( @days >= 5 AND weekday( @S+4 )=5, 1, 0 )
+IF( @days >= 6 AND weekday( @S+5 )=5, 1, 0 )
```
Done.
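The full-weeks-plus-remainder idea above can be sanity-checked outside SQL; here is a small Python sketch of the same counting logic (the function name and sample dates are just for the demo):

```python
from datetime import date, timedelta

def count_weekday(start, end, weekday):
    """Count dates d with start <= d < end falling on `weekday`
    (Monday == 0 ... Sunday == 6, matching datetime.weekday())."""
    total_days = (end - start).days
    full_weeks, extra = divmod(total_days, 7)
    count = full_weeks  # each full week contains the weekday exactly once
    # Check the leftover partial week day by day
    for i in range(extra):
        if (start + timedelta(days=full_weeks * 7 + i)).weekday() == weekday:
            count += 1
    return count

# Wednesdays in January 2013 (2013-01-02 is a Wednesday)
print(count_weekday(date(2013, 1, 1), date(2013, 2, 1), 2))  # 5
```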
[Working SQL Fiddle](http://sqlfiddle.com/#!2/d41d8/19076) | An alternative way of doing it with a SELECT statement:-
```
SELECT DAYNAME(DATE_ADD(@StartDate, INTERVAL (Units.i + Tens.i * 10 + Hundreds.i * 100) DAY)) AS aDayOfWeek, COUNT(*)
FROM (SELECT 0 AS i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) AS Units
CROSS JOIN (SELECT 0 AS i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) AS Tens
CROSS JOIN (SELECT 0 AS i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) AS Hundreds
WHERE DATE_ADD(@StartDate, INTERVAL (Units.i + Tens.i * 10 + Hundreds.i * 100) DAY) <= @EndDate
GROUP BY aDayOfWeek
```
This will work with dates up to 1000 days apart, but it is easy to expand for larger date ranges.
What it is doing is generating a range of numbers (starting at 0) and adding them to the start date where the result is <= the end date, then getting the day name of each of those and counting each one. | Count specific days of week between two dates | [
"",
"mysql",
"sql",
""
] |
There is a table:
```
+---------+--------+---------------------+----------+
| user_id | marker | date | trans_id |
+---------+--------+---------------------+----------+
| 6 | M | 2013-08-27 11:45:24 | 5 |
| 6 | MA | 2013-08-27 11:45:42 | 6 |
| 6 | A | 2013-08-27 11:45:59 | 7 |
+---------+--------+---------------------+----------+
```
I tested this query:
`SELECT marker , MAX(date) AS maxdate
FROM mytable
WHERE user_id =6`
but it isn't correct.
How would you write the query?
Thanks in advance. | This will give you the latest record for every `user_id`
```
SELECT a.*
FROM tableName a
INNER JOIN
(
SELECT user_id , MAX(date) date
FROM tableName
GROUP BY user_ID
) b ON a.user_id = b.user_id AND
a.date = b.date
-- WHERE a.user_id = 6 ==> if you want for specific user_id only
```
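As a quick way to see this join-on-the-group-max pattern in action, here is a hedged sketch using Python's built-in sqlite3 module with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (user_id INT, marker TEXT, date TEXT, trans_id INT)")
conn.executemany("INSERT INTO mytable VALUES (?,?,?,?)", [
    (6, "M",  "2013-08-27 11:45:24", 5),
    (6, "MA", "2013-08-27 11:45:42", 6),
    (6, "A",  "2013-08-27 11:45:59", 7),
])

# Join each row back to its group's MAX(date) to keep only the latest row
row = conn.execute("""
    SELECT a.* FROM mytable a
    JOIN (SELECT user_id, MAX(date) AS date FROM mytable GROUP BY user_id) b
      ON a.user_id = b.user_id AND a.date = b.date
""").fetchone()
print(row)  # (6, 'A', '2013-08-27 11:45:59', 7)
```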
* [SQLFiddle Demo](http://sqlfiddle.com/#!2/1351d9/3) | This will give you the latest record for every `user_id`
```
SELECT
a.user_id,
b.marker,
MAX(a.date) AS maxdate,
b.trans_id
FROM TableName a
JOIN TableName b
ON b.date = a.date AND a.user_id = b.user_id
GROUP BY a.user_id
ORDER BY a.date DESC;
```
`MAX()` is an aggregate function, just like `SUM()` and `COUNT()`. Those functions should be used in combination with `GROUP BY`. | How to retrieve row data by MAX(date)? | [
"",
"mysql",
"sql",
"date",
"select",
""
] |
I am writing the following query: I want to display the car registration, car group name, model name, cost, and the number of bookings for each car. I have to use an explicit cursor, and an implicit cursor to calculate the number of bookings that belong to each car.
My query is as follows:
```
SET SERVEROUTPUT ON FORMAT WRAP SIZE 12000
Declare
v_count number;
cursor carcur IS
SELECT * FROM i_car;
v_car carcur%ROWTYPE;
Begin
Select COUNT (registration)
INTO v_count
from i_booking
group by registration;
FOR v_car IN carcur LOOP
DBMS_OUTPUT.PUT_LINE('Registration:'|| ' '|| v_car.registration);
DBMS_OUTPUT.PUT_LINE('Car Group:'|| ' ' ||v_car.car_group_name);
DBMS_OUTPUT.PUT_LINE('Model Name:'|| ' '||v_car.model_name);
DBMS_OUTPUT.PUT_LINE('Cost:'|| ' '||v_car.cost);
DBMS_OUTPUT.PUT_LINE('Total Bookings:'|| ' '||v_count);
DBMS_OUTPUT.NEW_LINE;
END LOOP;
End;
```
The output I am getting is as follows:
Declare
\*
ERROR at line 1:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at line 7
I am sure it has something to do with the return values being put into the variable, but I have no idea how to rectify this.
Any advice would be greatly appreciated.
Many thanks. | This did the trick for me. I moved my implicit SELECT statement into the cursor for loop and added a WHERE clause saying WHERE registration = v_car.registration;
```
SET SERVEROUTPUT ON FORMAT WRAP SIZE 12000
Declare
v_count number;
cursor carcur IS
SELECT * FROM i_car;
v_car carcur%ROWTYPE;
Begin
FOR v_car IN carcur LOOP
Select COUNT (registration)
INTO v_count
from i_booking
WHERE registration = v_car.registration;
DBMS_OUTPUT.PUT_LINE('Registration:'|| ' '|| v_car.registration);
DBMS_OUTPUT.PUT_LINE('Car Group:'|| ' ' ||v_car.car_group_name);
DBMS_OUTPUT.PUT_LINE('Model Name:'|| ' '||v_car.model_name);
DBMS_OUTPUT.PUT_LINE('Cost:'|| ' '||v_car.cost);
DBMS_OUTPUT.PUT_LINE('Total Bookings:'|| ' '||v_count);
DBMS_OUTPUT.NEW_LINE;
END LOOP;
End;
```
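The fixed pattern — one correlated `COUNT` per outer row — can be illustrated outside PL/SQL too; a minimal sketch with Python's sqlite3, using made-up registrations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE i_car (registration TEXT)")
conn.execute("CREATE TABLE i_booking (registration TEXT)")
conn.executemany("INSERT INTO i_car VALUES (?)", [("AAA",), ("BBB",)])
conn.executemany("INSERT INTO i_booking VALUES (?)",
                 [("AAA",), ("AAA",), ("BBB",)])

# One scalar COUNT per car, correlated on registration --
# the same shape as the SELECT ... INTO inside the cursor loop
rows = conn.execute("""
    SELECT c.registration,
           (SELECT COUNT(*) FROM i_booking b
             WHERE b.registration = c.registration) AS total_bookings
    FROM i_car c ORDER BY c.registration
""").fetchall()
print(rows)  # [('AAA', 2), ('BBB', 1)]
```

Because the subquery returns exactly one value per outer row, there is nothing for an ORA-01422-style "too many rows" error to trip over.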
Many thanks to everyone for your assistance. | The error you are facing is because you are trying to assign multiple values to a single-valued variable.
Your query is returning multiple values primarily because you are using a `group by`. What a `group by` does, in your case, is find the total count of values in column `registration` for every distinct value in that column.
Let's see an example:
```
registration | other columns ...
1 | ...
1 | ...
1 | ...
2 | ...
3 | ...
3 | ...
```
Hence the output of your query with the `group by` would be
```
registration | count(registration)
1 | 3
2 | 1
3 | 2
```
Note that the column `registration` is not selected in your `select` list, only `count(registration)` is selected, which according to the example can contain multiple values.
Hence there are now three cases:
1. If the above is your desired output, clieu's answer above is useful.
2. If you want to get just the count of all the non-null values in the column `registration`, just remove the `group by` clause and you'll be fine.
3. If you want the count of all the `DISTINCT` non-null values in the column `registration`, you can use `count(distinct registration)` and remove `group by` as follows:
```
SET SERVEROUTPUT ON FORMAT WRAP SIZE 12000
Declare
v_count number;
cursor carcur IS
SELECT * FROM i_car;
v_car carcur%ROWTYPE;
Begin
Select COUNT (DISTINCT registration)
INTO v_count
from i_booking;
FOR v_car IN carcur LOOP
DBMS_OUTPUT.PUT_LINE('Registration:'|| ' '|| v_car.registration);
DBMS_OUTPUT.PUT_LINE('Car Group:'|| ' ' ||v_car.car_group_name);
DBMS_OUTPUT.PUT_LINE('Model Name:'|| ' '||v_car.model_name);
DBMS_OUTPUT.PUT_LINE('Cost:'|| ' '||v_car.cost);
DBMS_OUTPUT.PUT_LINE('Total Bookings:'|| ' '||v_count);
DBMS_OUTPUT.NEW_LINE;
END LOOP;
End;
/
``` | PL/SQL Group By - ORA-01422: exact fetch returns more than requested number of rows | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"oracle10g",
""
] |
I have a table as shown
```
Amount Debit Credit Description
------ ----- ------ -----------
275.00 275.00 Payroll
500.00 500.00 Payroll
288.00 288.00 Payroll
500.00 500.00 Payroll
600.00 600.00 Payroll
988.00 988.00 Payroll
600.00 600.00 Payroll
```
and I want to display two distinct numbers as shown below from the above mentioned table
```
Amount Debit Credit Description
------ ----- ------ -----------
500.00 500.00 Payroll
500.00 500.00 Payroll
```
and
```
Amount Debit Credit Description
------ ----- ------ -----------
600.00 600.00 Payroll
600.00 600.00 Payroll
```
Now, the question is: what would be the Oracle SQL for this query? | Like this
```
select amount,debit,credit,Description
from tablename
where amount in (500, 600)
group by amount,debit,credit,description;
``` | try it using `EXISTS`.
```
SELECT a.*
FROM tableName a
WHERE EXISTS
(
SELECT 1
FROM tableName b
WHERE a.Amount = b.Amount
GROUP BY Amount
HAVING COUNT(AMOUNT) = 2 AND
COUNT(Debit) = 1 AND
COUNT(Credit) = 1
)
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!4/4c69f/5) | Oracle SQL Query for displaying values? | [
"",
"sql",
"oracle",
""
] |
I need a query which retrieves data from one table based on three tables:
## Table1
UID | GID
1 | 0
2 | 1
3 | 1
4 | 2
## Table2
CID | UID
1 | 2
2 | 3
3 | 4
4 | 5
## Table3
LID | CID
1 | 2
2 | 2
3 | 3
4 | 1
Now I need to retrieve data from table3 where table1.GID=1 and table2.UID=table1.UID and table3.CID=table2.CID | You can use this:
```
SELECT Table3.* FROM Table1 LEFT JOIN (Table2, Table3)
ON (table1.GID=1 and table2.UID=table1.UID and table3.CID=table2.CID)
``` | Unless I made a typo
```
SELECT
t3.*
FROM
t3
LEFT JOIN
t2 ON t3.cid = t2.cid
LEFT JOIN
t1 ON t2.uid = t1.uid
WHERE
t1.gid = 1;
```
With tables:
```
CREATE TABLE IF NOT EXISTS `t1` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`gid` int(11) NOT NULL,
PRIMARY KEY (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;
INSERT INTO `t1` (`uid`, `gid`) VALUES
(1, 0),
(2, 1),
(3, 1),
(4, 2);
CREATE TABLE IF NOT EXISTS `t2` (
`cid` int(11) NOT NULL AUTO_INCREMENT,
`uid` int(11) NOT NULL,
PRIMARY KEY (`cid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;
INSERT INTO `t2` (`cid`, `uid`) VALUES
(1, 2),
(2, 3),
(3, 4),
(4, 5);
CREATE TABLE IF NOT EXISTS `t3` (
`lid` int(11) NOT NULL AUTO_INCREMENT,
`cid` int(11) NOT NULL,
PRIMARY KEY (`lid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;
INSERT INTO `t3` (`lid`, `cid`) VALUES
(1, 2),
(2, 2),
(3, 3),
(4, 1);
``` | MySQL- how to get data from 3 table? | [
"",
"mysql",
"sql",
""
] |
I have a Google fusion table with 3 row layouts as shown below:

We can query the fusion table as,
```
var query = new google.visualization.Query("https://www.google.com/fusiontables/gvizdata?tq=select * from *******************");
```
which selects the data from the first row layout, i.e. **Rows 1**, by default. Is there any way we can query the second or third row layout of a fusion table? | API queries apply to the actual table data. The row layout tabs are just different views onto that data. You can get the actual query being executed for a tab with Tools > Publish; the HTML/JavaScript contains the FusionTablesLayer request.
I would recommend using the regular [Fusion Tables APi](https://developers.google.com/fusiontables/) rather than the gvizdata API because it's much more flexible and not limited to 500 response rows. | A couple of things:
1. Querying Fusion Tables with SQL is [deprecated](https://groups.google.com/forum/?hl=en-US&fromgroups=#!topic/fusion-tables-api-announce/_U1U4x2yD9g). Please see the [porting guide](https://developers.google.com/fusiontables/docs/v1/migration_guide).
2. Check out the [Working With Rows](https://developers.google.com/fusiontables/docs/v1/using#WorkingRows) part of the documentation. I believe this has your answers. | querying google fusion table | [
"",
"sql",
"select",
"google-visualization",
"google-fusion-tables",
""
] |
My workmate and I would like to fetch data from a squid server, via access.log, into a MySQL server on another Debian machine.
We found a rough guide to this kind of thing on the internet and made a script that should insert data into MySQL from the lines in the access.log file, but it doesn't seem to be working; something might be wrong with the insert. Please help us find out what we need to sort out.
Here's the script:
```
#!/bin/bash
cp /www/logs/squid/access.log /tmp/squidforparse.log
>/www/logs/squid/access.log
awk '{print "INSERT INTO squid (ip,bytes,link,trans,time) \
VALUES(\""$3"\","$5",\""$7"\",\""$9"\",from_unixtime("$1"));"};' \
< /tmp/squidforparse.log | mysql -D traffics -u root --password=my_sql_passwd
rm -f /tmp/squidforparse.log
```
I am not really that great at SQL; though I do know most of the operators and functions at a basic level, I still can't figure out what's stopping it from working. | If your log file looks like below,
```
"apple","red",1230,"Newton","Physics","Da vinci code"
"iphone","black",44500,"Jobs","Mobiles","telephone booth"
"shoe","brown",9900,"Elizabeth","Fashion","shoe shop"
```
My table structure looks like below,
```
table_name = t1
columns = topic, price, department
```
then I would do something like below:
```
mysql -D traffics -u root --password=my_sql_passwd
LOAD DATA LOCAL INFILE 'file.log' INTO TABLE t1
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
(@col1,@col2,@col3,@col4,@col5,@col6) set topic=@col1,price=@col3,department=@col5;
```
**Note:**
1. Assuming the log file is comma separated; if it's pipe delimited, use
FIELDS TERMINATED BY '|'.
2. Also, note how @col1, @col2,.. are being used to store the values of
the log file. | I created the table needed in mysql.
Then created the fields required.
Then I used the file import and pointed to where my log file was.
Otherwise known as: File to import
Where the Format of imported file was CSV using LOAD DATA
I was able to specify the file delimiter as well as the fields.
The log file with about 160000 records was successfully loaded.
This was accomplished using the XAMPP control panel. | insert data strings from log file to mysql database | [
"",
"mysql",
"sql",
"unix",
"logging",
"debian",
""
] |
I have this sample table:
```
ColA ColB ColC
CBCP 25 popx
ABC1 10 ppp1
ABC1 25 ppxx
CBCP 30 xm2x
```
from there I would like to get these columns
```
ColA ColB ColC
ABC1 25 ppxx
CBCP 30 xm2x
```
What I want is to get the row with the max ColB, but I am getting an error when I try to include ColC in my query:
```
select ColA, max(ColB), ColC
from tblCaseDev
where ColB > getdate() group by ColA
```
This is my error:
```
Msg 8120, Level 16, State 1, Line 1
Column 'tblCaseDev.ColC' is invalid in the
select list because it is not contained in either
an aggregate function or the GROUP BY clause.
```
Hope someone can help me. Thanks in advance. | ```
SELECT
CaseNo,Date,Remarks,
(SELECT max(cast(Date as datetime)) FROM tblCaseDev subc WHERE subc.CaseNo=c.CaseNo Group by c.CaseNo) AS MaxEntryDate
FROM tblCaseDev c
order by CaseNo
``` | You want to use the `row_number()` window function:
```
select CaseNo, "Date", Remarks
from (select t.*, row_number() over (partition by caseno order by date desc) as seqnum
from tblCaseDev t
where date > getdate()
) t
where seqnum = 1;
```
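The `row_number()` approach is easy to try locally; here is a sketch with Python's sqlite3 (window functions need SQLite 3.25+; the column is renamed `dt` to sidestep the `date` name, and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblCaseDev (caseno INT, dt TEXT, remarks TEXT)")
conn.executemany("INSERT INTO tblCaseDev VALUES (?,?,?)", [
    (1, "2013-01-01", "old"),
    (1, "2013-02-01", "new"),
    (2, "2013-01-15", "only"),
])

# seqnum = 1 picks the latest row per caseno
rows = conn.execute("""
    SELECT caseno, dt, remarks FROM (
        SELECT t.*, ROW_NUMBER() OVER
               (PARTITION BY caseno ORDER BY dt DESC) AS seqnum
        FROM tblCaseDev t
    ) WHERE seqnum = 1 ORDER BY caseno
""").fetchall()
print(rows)  # [(1, '2013-02-01', 'new'), (2, '2013-01-15', 'only')]
```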
EDIT:
You can do this the old fashioned way if you don't have `row_number()`:
```
select t.*
from tblCaseDev t join
(select caseno, max(date) as maxdate
from tblCaseDev
group by caseno
) tsum
on t.caseno = tsum.caseno and t.date = tsum.maxdate
``` | How can I get the value of non-aggregated column with an aggregated column? | [
"",
"sql",
"sql-server",
""
] |
I have three tables.
* tax\_master
* item\_master
* item\_tax
The values in it are like this.
```
*tax_master*
tax_id tax_name tax_value
--------------------------
1 Vat 5
2 LBT 8
*item_master*
item_id Prise
---------------
1 30
2 100
*item_tax*
item_tax_id item_id tax_id
------------------------------
1 1 1
2 2 2
3 1 2
```
Now i want output like this.
```
item_id prise VAT LBT Total_prise
---------------------------------------
1 30 1.5 2.4 33.9
2 100 - 8 108
```
The VAT value is calculated as `5%` of `30`, i.e. `30*5/100 = 1.5`. | ```
select item_id, price,
(min(case when tax_name = 'VAT' then tax end)) vat,
(min(case when tax_name = 'LBT' then tax end)) lbt,
coalesce(min(case when tax_name = 'VAT' then tax end),0) +
coalesce(min(case when tax_name = 'LBT' then tax end),0) +
price total
from
(select a.item_id item_id,
c.tax_name tax_name,
(c.tax_value * b.price / 100) tax,
b.price price
from item_tax a inner join item_master b on a.item_id = b.item_id
inner join tax_master c on a.tax_id = c.tax_id) as calc
group by item_id, price;
```
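The same conditional-aggregation pivot runs essentially unchanged on SQLite; a self-contained sketch with the question's sample data (the column `Prise` renamed to `price`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tax_master (tax_id INT, tax_name TEXT, tax_value REAL);
    CREATE TABLE item_master (item_id INT, price REAL);
    CREATE TABLE item_tax (item_tax_id INT, item_id INT, tax_id INT);
    INSERT INTO tax_master VALUES (1,'Vat',5),(2,'LBT',8);
    INSERT INTO item_master VALUES (1,30),(2,100);
    INSERT INTO item_tax VALUES (1,1,1),(2,2,2),(3,1,2);
""")

# Pivot each tax into its own column with CASE inside an aggregate
rows = conn.execute("""
    SELECT item_id, price,
           MIN(CASE WHEN tax_name='Vat' THEN tax END) AS vat,
           MIN(CASE WHEN tax_name='LBT' THEN tax END) AS lbt,
           COALESCE(MIN(CASE WHEN tax_name='Vat' THEN tax END),0)
         + COALESCE(MIN(CASE WHEN tax_name='LBT' THEN tax END),0)
         + price AS total
    FROM (SELECT a.item_id, c.tax_name,
                 c.tax_value * b.price / 100 AS tax, b.price
          FROM item_tax a
          JOIN item_master b ON a.item_id = b.item_id
          JOIN tax_master c ON a.tax_id = c.tax_id)
    GROUP BY item_id, price ORDER BY item_id
""").fetchall()
print(rows)
```

Item 1 comes out as 30 + 1.5 (VAT) + 2.4 (LBT) = 33.9, and item 2 as 100 + 8 = 108, matching the desired output.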
Demo [here](http://sqlfiddle.com/#!2/b6e45/4). | The following statement should return the data the way you want it:
```
SELECT DISTINCT
item_master.item_id,
(SELECT
tax_master.tax_value * item_master.prise / 100
FROM
item_tax, tax_master
WHERE
item_tax.item_id = item_master.item_id and
tax_master.tax_id = item_tax.tax_id and
tax_master.tax_name = 'VAT') as VAT,
(SELECT
tax_master.tax_value * item_master.prise / 100
FROM
item_tax, tax_master
WHERE
item_tax.item_id = item_master.item_id and
tax_master.tax_id = item_tax.tax_id and
tax_master.tax_name = 'LBT') as LBT
FROM
item_tax, item_master, tax_master
WHERE
item_master.item_id = item_tax.item_id and
tax_master.tax_id = item_tax.tax_id
```
btw: I think **prise** is meant to be **price** ;) | How to select column in table by creating row in another table in MySQL | [
"",
"mysql",
"sql",
""
] |
I am trying to wrap a few actions in a transaction so I can determine if I should delete a table in the last step. Here's what I have so far:
```
--select the DB
use DB1
--if the table exists, we want to delete it first
IF (EXISTS (SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'VV'))
BEGIN
drop table dbo.VV
END
BEGIN TRAN
SELECT field1
,field2
,field3
INTO dbo.vv
FROM vvr A
WHERE A.field1 <> 'GEN'
AND A.field2 <> 'NO DATA'
AND A.field3 <> '(BLANK) NO'
PRINT 'ROW1:' + CAST(@@ROWCOUNT as varchar(11))
IF @@ROWCOUNT = 0
ROLLBACK TRAN
ELSE
COMMIT TRAN
UPDATE dbo.vv
SET dbo.field1 = vvr.field1
FROM dbo.vv
PRINT 'ROW2:' + CAST(@@ROWCOUNT as varchar(11))
IF @@ROWCOUNT = 0
ROLLBACK TRAN
ELSE
COMMIT TRAN
```
when I run this without the transaction statements, it runs just fine, so I know the SQL works but when I add the transaction statements, it fails telling me the table VV doesn't exist. When I do a select on VV, it is definitely gone.
Once I get the above to run fine I will add one more statement at the end to drop table vvr but I haven't got that far yet. | If you want to perform multiple actions based on the number of rows that a single statement affected, then you need to capture that value into your own variable immediately:
```
DECLARE @rc int
SELECT field1
,field2
,field3
INTO dbo.vv
FROM vvr A
WHERE A.field1 <> 'GEN'
AND A.field2 <> 'NO DATA'
AND A.field3 <> '(BLANK) NO'
SET @rc = @@ROWCOUNT
PRINT 'ROW1:' + CAST(@rc as varchar(11))
IF @rc = 0
ROLLBACK TRAN
ELSE
COMMIT TRAN
```
Even simple statements like `PRINT`s cause [`@@ROWCOUNT`](http://technet.microsoft.com/en-us/library/ms187316.aspx) to be assigned a new value (in this case, 0) | `PRINT 'ROW1:' + CAST(@@ROWCOUNT as varchar(11))`
This line resets the @@ROWCOUNT. If you inserted 50 records into the table, the print statement would return 50, but then when you reference @@ROWCOUNT in your next line, the value will return 0, so therefore the table will never exist since you perform a rollback operation
This then causes the next line (the UPDATE statement) to always fail. | Commit tran on @@ROWCOUNT | [
"",
"sql",
"transactions",
""
] |
I am trying to retrieve the last 5 rows of the `Order` table based on `OrderDate`, along with the `firstname` column from the `Customer` table.
The below query displays all the values from the `Order` table instead of last 5 rows.
```
SELECT
A.[FirstName], B.[OrderId], B.[OrderDate], B.[TotalAmount], B.[OrderStatusId]
FROM
[schema].[Order] B
OUTER APPLY
(SELECT TOP 5 *
FROM [schema].[Customer] A
WHERE B.[CustomerId] = 1
AND A.[CustomerId] = B.[CustomerId]
ORDER BY
B.[OrderDate] DESC) A
```
Any mistake in my logic of using `TOP` and `DESC` ? | If you want to get the last 5 rows of `Order` table, why do you apply `TOP` to `Customer` table?
```
SELECT TOP 5 A.[FirstName],B.[OrderId],B.[OrderDate],B.[TotalAmount],B.[OrderStatusId]
FROM [schema].[Order] B
LEFT JOIN [schema].[Customer] A ON A.[CustomerId]=B.[CustomerId]
WHERE B.[CustomerId]=1
ORDER BY B.[OrderDate] DESC
``` | ```
;WITH MyCTE AS
(
SELECT A.[FirstName],
B.[OrderId],
B.[OrderDate],
B.[TotalAmount],
B.[OrderStatusId],
ROW_NUMBER() OVER (ORDER BY B.[OrderDate] DESC) AS RowNum
FROM [schema].[Order] B
OUTER APPLY
(
SELECT TOP 5 *
FROM [schema].[Customer] A
WHERE B.[CustomerId]=1
AND A.[CustomerId]=B.[CustomerId]
ORDER BY
B.[OrderDate] DESC
) A
)
SELECT [FirstName],
[OrderId],
[OrderDate],
[TotalAmount],
[OrderStatusId]
FROM MyCTE
WHERE RowNum <= 5
``` | Select last 5 rows in join query SQL Server 2008 | [
"",
"sql",
"select",
"join",
""
] |
I have the following SQL query:
```
SELECT t.trans_id, t.business_process_id, tsp.status, tsp.timestamp
FROM tran_stat_p tsp, tran t
WHERE t.trans_id = tsp.trans_id
AND tsp.timestamp BETWEEN '1-jan-2008' AND SYSDATE
AND t.business_process_id = 'ABC01'
```
It outputs data like this:
`trans_ID` `business_process_id` `status` `timestamp`
`14444400` `ABC01` `F` `6/5/2008 12:37:36 PM`
`14444400` `ABC01` `W` `6/6/2008 1:37:36 PM`
`14444400` `ABC01` `S` `6/7/2008 2:37:36 PM`
`14444400` `ABC01` `P` `6/8/2008 3:37:36 PM`
`14444401` `ABC01` `F` `6/5/2008 12:37:36 PM`
`14444401` `ABC01` `W` `6/6/2008 1:37:36 PM`
`14444401` `ABC01` `S` `6/7/2008 2:37:36 PM`
`14444401` `ABC01` `P` `6/8/2008 3:37:36 PM`
In addition to the above, I'd like to add a column which calculates the time difference (in days) between statuses W&F, S&W, P&S for every unique `trans_id`.
The idea is to figure out how long transactions are sitting in the various statuses before they are finally processed to status "P". The life cycle of a transaction is in the following order -> F -> W -> S -> P. Where F is the first status, and P is the final status.
Can anyone help? Thanks in advance. | You can use `LEAD` to retrieve the next timestamp value and calculate the time spent in every status (F, W and S), and `TRUNC` to calculate the days between as an integer:
```
SELECT t."trans_ID", t."business_process_id", tsp."status", tsp."timestamp",
LEAD("timestamp", 1) OVER (
PARTITION BY tsp."trans_ID"
ORDER BY "timestamp") AS "next_timestamp",
trunc(LEAD("timestamp", 1) OVER (
PARTITION BY tsp."trans_ID"
ORDER BY "timestamp")) - trunc(tsp."timestamp") as "Days"
FROM tran t
INNER JOIN tran_stat_p tsp ON t."trans_ID" = tsp."trans_ID"
AND tsp."timestamp" BETWEEN '01-jan-2008 12:00:00 AM' AND SYSDATE
WHERE t."business_process_id" = 'ABC01'
```
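`LEAD`/`LAG` are available in SQLite as well (3.25+), so the gap-between-statuses idea can be tested locally; a sketch using `julianday()` for the day difference, with simplified table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tran_stat (trans_id INT, status TEXT, ts TEXT)")
conn.executemany("INSERT INTO tran_stat VALUES (?,?,?)", [
    (14444400, "F", "2008-06-05"),
    (14444400, "W", "2008-06-06"),
    (14444400, "S", "2008-06-07"),
    (14444400, "P", "2008-06-08"),
])

# LAG pulls the previous status's timestamp within each transaction;
# julianday() turns the difference into days
rows = conn.execute("""
    SELECT status,
           julianday(ts) - julianday(LAG(ts) OVER
               (PARTITION BY trans_id ORDER BY ts)) AS days_in_prev_status
    FROM tran_stat ORDER BY ts
""").fetchall()
print(rows)  # [('F', None), ('W', 1.0), ('S', 1.0), ('P', 1.0)]
```

The first status has no predecessor, so its gap is NULL, exactly as `LAG` behaves in Oracle.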
See SQLFIDDLE : <http://www.sqlfiddle.com/#!4/04633/49/0> | The actual query would use [`LAG`](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions075.htm), which will give you a value from a prior row.
Your status codes won't sort as `F` -> `W` -> `S` -> `P`, which is why the query below has the big `CASE` statement for the `LAG` function's `ORDER BY` - it translates the status codes into a value that follows your transaction life cycle.
```
SELECT
t.trans_id,
t.business_process_id,
tsp.status,
tsp.timestamp,
tsp.timestamp - LAG(timestamp) OVER (
PARTITION BY tsp.trans_id
ORDER BY
CASE tsp.Status
WHEN 'F' THEN 1
WHEN 'W' THEN 2
WHEN 'S' THEN 3
WHEN 'P' THEN 4
END) AS DaysBetween
FROM tran t
INNER JOIN tran_stat_p tsp ON t.trans_id = tsp.trans_id
WHERE tsp.timestamp BETWEEN DATE '2008-01-01' AND SYSDATE
AND t.business_process_id = 'ABC01';
```
A couple more notes:
* The query is untested. If you have trouble please post some sample data and I'll test it.
* I used `DATE '2008-01-01'` to define January 1, 2008 because that's how Oracle (and ANSI) likes a date constant to look. When you use `1-jan-2008` you're relying on Oracle's default date format, and that's a session value which can be changed. If it's changed, your query will stop working. | Date difference between rows | [
"",
"sql",
"oracle",
"date-difference",
""
] |
Here is the problematic SQL query and the results that it yields:

What I need, however, is to have the data formatted in the following way:
```
Emp No Sign In Sign out
022195 2013-01-29 09:18:00 2013-01-29 19:18:00
043770 2013-01-29 10:07:00 2013-01-29 17:07:00
``` | You can use the date() function to aggregate the day in MySQL. The trick is to use it in the group by clause.
```
select name,
min(date) as Sign_In,
max(date) as Sign_Out,
from text_based_attendance
group by date(date), name
order by name
```
This will give a result grouping the employees by their names and the dates that they worked on, showing only the sign-in/out times.
Here is my SQLFiddle: <http://sqlfiddle.com/#!2/740e2/1>
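The same grouping can be reproduced with Python's sqlite3, whose `date()` function also truncates a timestamp to the day — a quick check with the two employees from the question (the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendance (name TEXT, ts TEXT)")
conn.executemany("INSERT INTO attendance VALUES (?,?)", [
    ("022195", "2013-01-29 09:18:00"),
    ("022195", "2013-01-29 19:18:00"),
    ("043770", "2013-01-29 10:07:00"),
    ("043770", "2013-01-29 17:07:00"),
])

# Grouping on date(ts) keeps one row per employee per calendar day
rows = conn.execute("""
    SELECT name, MIN(ts) AS sign_in, MAX(ts) AS sign_out
    FROM attendance GROUP BY date(ts), name ORDER BY name
""").fetchall()
print(rows)
```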
In Oracle, you would do the same with the EXTRACT function:
```
select name,
min(mdate) as Sign_In,
max(mdate) as Sign_Out
from text_based_attendance
group by EXTRACT(day from mdate),name
```
<http://sqlfiddle.com/#!4/96b6c/1>
For Postgres, it's more or less the same as in Oracle.
<http://www.postgresql.org/docs/7.4/static/functions-datetime.html> | you don't have to do any subqueries for that, you can do it with basic [aggregate functions](http://www.postgresql.org/docs/9.2/static/functions-aggregate.html) over the table and group by name
```
select
t.name as EmpNo,
min(t.date) as SignIn,
max(t.date) as SignOut
from text_based_attendance as t
group by t.name
order by t.name
```
there's no need for an alias (`as t`) in this query, but I think it's good practice to add one so you can easily modify the query and add joins in the future.
For PostgreSQL, if you want to group by date and get min and max for each date, you can do:
```
select
t.name as EmpNo,
t::date as Date,
min(t.date) as SignIn,
max(t.date) as SignOut
from text_based_attendance as t
group by t.name, t::date
order by t.name
``` | How to query data with the GROUP BY keyword within a date | [
"",
"sql",
"oracle",
"postgresql",
""
] |
Is there any performance difference in querying a database for a character column starting with something versus ending with something?
```
SELECT * FROM table WHERE column like '%something'
SELECT * FROM table WHERE column like 'something%'
```
Any downsides to choose one approach instead of another? Disregarding the user need for matching the start or end of a word.
Perhaps the algorithms differ in some way, depending on how it is implemented? | The SQL optimizer will look for an index on the column in the second example, and will narrow it down to the records beginning with the characters before the wildcard. In the first example, it can't narrow it down, so you'll likely scan either the index or the table (depending on index structure).
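The intuition — a leading wildcard defeats an ordered index while a trailing one does not — can be sketched with a sorted Python list standing in for a B-tree index (the sample values are invented):

```python
from bisect import bisect_left, bisect_right

values = sorted(["daniel", "dante", "jordan", "sudan", "zebra"])

def prefix_matches(prefix):
    """LIKE 'prefix%': a range scan over the sorted values (index-style)."""
    lo = bisect_left(values, prefix)
    hi = bisect_right(values, prefix + "\uffff")  # "\uffff" as a crude upper bound
    return values[lo:hi]

def suffix_matches(suffix):
    """LIKE '%suffix': no order to exploit, so every value is scanned."""
    return [v for v in values if v.endswith(suffix)]

print(prefix_matches("dan"))   # ['daniel', 'dante']
print(suffix_matches("dan"))   # ['jordan', 'sudan']
```

The prefix search touches only a contiguous slice; the suffix search has to visit all rows, which is the cost the answer describes.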
Which SQL product are you using? | Instead of creating a new column and index to deal with the
```
SELECT * FROM table WHERE column like '%something'
```
query, you could create an index
```
CREATE INDEX rev_col_idx ON table (reverse(column) text_pattern_ops);
```
and then rewrite the above query as
```
SELECT * FROM table WHERE reverse(column) like reverse('%something')
```
which will achieve the same thing. | Performance of querying for a string that starts and ends with something | [
"",
"sql",
"performance",
"algorithm",
""
] |
Let's say I have three tables - Orders, OrderDetails, and ProductType - and the Orders table includes a column for Customer. What I need to do is write a query that will show me a list of customers and how many orders each customer has placed, as well as displaying and grouping by another column, which is a boolean based on whether a particular type of product - say, telephones - is in the order.
For example, we might have:
> Customer | NumOrders | IncludesPhone
> ---------------------------------
> Jameson | 3 | Yes
> Smith | 5 | Yes
> Weber | 1 | Yes
> Adams | 2 | No
> Jameson | 1 | No
> Smith | 7 | No
> Weber | 2 | No
However, when I try to write the query for this, I'm getting multiple rows with the same values for Customer and IncludesPhone, each with a different value for NumOrders. Why is this happening? My query is below:
`SELECT Customer, COUNT(Customer) AS NumOrders, CASE WHEN (ProductType.Type = 'Phone') THEN 'Yes' ELSE 'No' END AS IncludesPhone
FROM Orders INNER JOIN OrderDetails INNER JOIN ProductType
GROUP BY Customer, Type
Order By IncludesPhone, Customer` | This query should work
```
SELECT Customer, COUNT(Customer) AS NumOrders,
CASE WHEN (ProductType.Type = 'Phone') THEN 'Yes' ELSE 'No' END AS IncludesPhone
FROM Orders INNER JOIN OrderDetails INNER JOIN ProductType
GROUP BY Customer,
CASE WHEN (ProductType.Type = 'Phone') THEN 'Yes' ELSE 'No' END
Order By IncludesPhone, Customer
``` | Change the group by to
```
GROUP BY Customer,
CASE WHEN (ProductType.Type = 'Phone') THEN 'Yes' ELSE 'No' END
``` | Group By not functioning as desired | [
"",
"sql",
"sql-server-2008",
"group-by",
""
] |
Is there any way to avoid writing the same criterion multiple times?
```
SELECT * FROM tblEmployees E
WHERE E.CurrentAddress LIKE '%dan%' OR
E.Email1 LIKE '%dan%' OR
E.Email2 LIKE '%dan%' OR
E.LatinName LIKE '%dan%'
``` | There are other ways, but yours is probably the most efficient already. You could always do something like:
```
SELECT *
FROM tblEmployees
WHERE CurrentAddress + Email1 + Email2 + LatinName LIKE '%dan%'
```
If some of the columns are `NULL`, you could use `ISNULL([field], '')`.
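One caveat with concatenation is a match that spans the field boundary; a quick sqlite3 check (which uses `||` where SQL Server uses `+`) shows it, with invented field values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (a TEXT, b TEXT)")
conn.execute("INSERT INTO emp VALUES ('bad', 'answer')")  # neither contains 'dan'

hit = conn.execute(
    "SELECT COUNT(*) FROM emp WHERE a || b LIKE '%dan%'").fetchone()[0]
separate = conn.execute(
    "SELECT COUNT(*) FROM emp WHERE a LIKE '%dan%' OR b LIKE '%dan%'").fetchone()[0]
print(hit, separate)  # 1 0  -- the concatenation matches across the field boundary
```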
However, as @MitchWheat pointed out, it's not exactly the same query, since one field could end
with `d` and the next field could start with `an`. | If none of the values are `NULL`, you could do:
```
SELECT *
FROM tblEmployees E
WHERE concat(E.CurrentAddress, ' ', E.email1, ' ', E.email2, ' ', E.LatinName) LIKE '%dan%';
``` | How to select rows with the same criteria in multiple columns? | [
"",
"sql",
""
] |
I am trying to write a query to identify my subscribers who have abandoned a shopping cart in the last day, but I also need a calculated field that represents whether or not they have received an incentive in the last 7 days.
I have the following tables
AbandonCart\_Subscribers
Sendlog
The first part of the query is easy: get abandoners in the last day.
```
select a.* from AbandonCart_Subscribers a
where DATEDIFF(day,a.DateAbandoned,GETDATE()) <= 1
```
Here is my attempt to calculate the incentive, but I am fairly certain it is not correct, as `IncentiveRecieved` is always 0 even when I know it should not be...
```
select a.*,
CASE
WHEN DATEDIFF(D,s.SENDDATE,GETDATE()) >= 7
THEN 1
ELSE 0
END As IncentiveRecieved
from AbandonCart_Subscribers a
left join SendLog s on a.EmailAddress = s.EmailAddress and s.CampaignID IS NULL
where
DATEDIFF(day,a.DateAbandoned,GETDATE()) <= 1
```
Here is a SQL fiddle with the objects and some data. I would really appreciate some help.
Thanks
<http://sqlfiddle.com/#!3/f481f/1> | Kishore is right in saying the main problem is that it should be <=7, not >=7. However, there is another problem.
As it stands, you could get multiple results. You don't want to do a left join on SendLog in case the same email address is in there more than once. Instead you should be getting a unique result from that table. There are a couple of ways of doing that; here is one such way, which uses a derived table. The table I have called s will give you a unique list of emails that have been sent an incentive in the last week.
```
select a.*,
CASE
WHEN s.EmailAddress is not null
THEN 1
ELSE 0
END As IncentiveRecieved
from AbandonCart_Subscribers a
left join (select distinct EmailAddress
from SendLog s
where s.CampaignID IS NULL
and DATEDIFF(D,s.SENDDATE,GETDATE()) <= 7
) s on a.EmailAddress = s.EmailAddress
where DATEDIFF(day,a.DateAbandoned,GETDATE()) <= 1
``` | should it not be less than 7 instead of greater than 7?
select a.\*,
CASE
WHEN DATEDIFF(D,s.SENDDATE,GETDATE()) <= 7 AND CampaignID is not null
THEN 1
ELSE 0
END As IncentiveRecieved
from AbandonCart\_Subscribers a
left join SendLog s on a.EmailAddress = s.EmailAddress --and s.CampaignID IS NULL
where
DATEDIFF(day,a.DateAbandoned,GETDATE()) <= 1
Hope this satisfies your need. | Join Table on one record but calculate field based on other rows in the join | [
"",
"sql",
"join",
""
] |
I am trying to use 3 local variables in a PL/SQL function, but when I run it, the log shows that 2 are invalid.
Following is the code:
```
create or replace function valor_parcela(p_num_venda number, p_juros number)
return number is
tmp_valor_parcela number(7,2);
f_num_parcelas number(7,2);
f_valor number(7,2);
begin
Select num_parcelas
into :f_num_parcela
from VENDS
where numero=p_num_venda;
Select valor_total
into :f_valor
from VENDS
where numero=p_num_venda;
tmp_valor_parcela := (f_valor/f_num_parcela)*p_juros;
return (tmp_valor_parcela);
end;
``` | 1. As it already has been said by @REW, variable's name in the `into clause` of a query should not be preceded by a colon.
```
into f_num_parcela -- without colon
```
and
```
into f_valor
```
2. Those two queries could be simply replaced by one:
```
Select num_parcelas
, valor_total
into f_num_parcela
, f_valor
from VENDS
where numero=p_num_venda;
```
3. You probably could do calculation in the `select` section of the query
```
Select (valor_total / num_parcelas) *p_juros
into tmp_valor_parcela
from VENDS
where numero=p_num_venda;
```
But you should guarantee that your query returns exactly one record, otherwise a `too_many_rows` exception will be raised. Conversely, if the query returns no rows, a `no_data_found` exception will be raised. So it would be a good idea to include an `exception` section in your stored function.
```
CREATE OR REPLACE
FUNCTION valor_parcela(
p_num_venda NUMBER,
p_juros NUMBER)
RETURN NUMBER
IS
tmp_valor_parcela NUMBER(7,2);
f_num_parcelas NUMBER(7,2);
f_valor NUMBER(7,2);
BEGIN
SELECT num_parcelas, valor_total
INTO f_num_parcela, f_valor
FROM vends
WHERE numero = p_num_venda;
tmp_valor_parcela := (f_valor/f_num_parcela)*p_juros;
RETURN (tmp_valor_parcela);
END;
/
``` | how to use 3 variables in PL/SQL function | [
"",
"sql",
"oracle",
"plsql",
"oracle-sqldeveloper",
""
] |
I need help...
For example, I have this table:
```
acc_num open close activate date
------- ---- ----- -------- ----------
200 1 0 0 2013/01/01
200 0 1 0 2013/01/12
200 0 0 1 2013/01/10
```
The output should be:
```
acc_num open_date act_date close_date
200 2013/01/01 2013/01/10 2013/01/12
```
Thanks for your help | Try this (assuming there is only one row per state per acc\_num):
```
select acc_num,
max(decode(open,1,date)) open_date,
max(decode(close,1,date)) close_date,
max(decode(activate,1,date)) activate_date
from table
group by acc_num
``` | You can use a series of `CASE` statements to create the needed columns.
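Oracle's `DECODE(open, 1, date)` is equivalent to `CASE WHEN open = 1 THEN date END`, so the same conditional-aggregation pivot can be sketched on any engine. A quick illustration with Python's `sqlite3` and the sample data (the date column is renamed `d` for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (acc_num TEXT, open INT, close INT, activate INT, d TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?,?)", [
    ("200", 1, 0, 0, "2013/01/01"),
    ("200", 0, 1, 0, "2013/01/12"),
    ("200", 0, 0, 1, "2013/01/10"),
])

# MAX ignores NULLs, so each CASE picks out the one matching date per group
row = conn.execute("""
    SELECT acc_num,
           MAX(CASE WHEN open = 1 THEN d END)     AS open_date,
           MAX(CASE WHEN activate = 1 THEN d END) AS act_date,
           MAX(CASE WHEN close = 1 THEN d END)    AS close_date
    FROM t
    GROUP BY acc_num""").fetchone()

print(row)  # ('200', '2013/01/01', '2013/01/10', '2013/01/12')
```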
here is a sample [fiddle](http://sqlfiddle.com/#!4/e5a88/5)
```
SELECT t1.acc_num
,MAX(CASE WHEN t1.open = 1 THEN t1.date ELSE TO_DATE('1901-01-01', 'YYYY-DD-MM') END) AS Open_Date
,MAX(CASE WHEN t1.activate = 1 THEN t1.date ELSE TO_DATE('1901-01-01', 'YYYY-DD-MM') END) AS Activate_Date
,MAX(CASE WHEN t1.close = 1 THEN t1.date ELSE TO_DATE('1901-01-01', 'YYYY-DD-MM') END) AS Close_Date
FROM YourTable t1
GROUP BY
t1.acc_num
``` | oracle 11g SQL :help for a complicated sql query | [
"",
"sql",
"oracle11g",
"oracle10g",
""
] |
I have a table called `Issues` that I need to get data from. I am joining the Issues table with another table called `IssueActivities`. So for each instance of an IssueID, there could be 1 to many IssueActivities. In the `IssueActivities` table is a field called `Notes`, of datatype `text`. I'm trying to select a DISTINCT list of IssueID's where the Notes field does NOT contain 2 particular strings.
Here's my SQL:
```
SELECT DISTINCT i.IssueID
FROM Issues i
INNER JOIN IssueActivities ia ON i.IssueID = ia.IssueID
WHERE i.IssueStatusID = 2 --Closed issues only
AND (PATINDEX('%Pending DR%', ia.Notes) < 1 AND PATINDEX('%Pending E%', ia.Notes) < 1)
```
The problem with this sql is that it returns IssueID's for issues that have that criteria because of the fact an Issue can have **many** IssueActivities, so not all the rows contain that criteria. Does that make sense? Here's a quick example:
**Issues table**
```
IssueID | IssueStatusID
-----------------------
1700 2
1701 2
```
**IssueActivities table**
```
IssueActivityID | IssueID | Notes
---------------------------------
1 1700 Issue Entered
2 1700 Sub Status changed from New to In Progress
3 1700 Sub Status changed from In Progress to Pending DR
4 1701 Issue Entered
5 1701 Issue Assigned
6 1701 Sub Status changed from New to Closed
```
So from the above table, I would like to get only issue 1701 because of all the IssueActivities that belong to it, none of them contain the criteria that I am using.
Any help is greatly appreciated. | You should find the rows you want to exclude, then eliminate them in the where clause. This example uses `APPLY` to find any IssueActivity row with "Pending DR" or "Pending E" in it for each Issues row. Here is a [SQLFiddle](http://sqlfiddle.com/#!3/acfcd/1)
```
SELECT i.IssueID
FROM Issues i
OUTER APPLY(
SELECT TOP 1 ia.IssueActivityID
FROM IssueActivities ia
WHERE ia.IssueID = i.IssueID
AND (PATINDEX('%Pending DR%', ia.Notes) > 0
OR PATINDEX('%Pending E%', ia.Notes) > 0)
) pending
WHERE i.IssueStatusID = 2
AND pending.IssueActivityID IS NULL
``` | If I understand correctly, you need to first select what you want to ignore:
```
SELECT DISTINCT IssueID
FROM IssuesActivities
WHERE PATINDEX('%Pending DR%', ia.Notes) > 0 OR PATINDEX('%Pending E%', ia.Notes) > 0
```
Then you can ignore it:
```
SELECT DISTINCT i.IssueID
FROM Issues i
LEFT JOIN (SELECT DISTINCT IssueID
FROM IssuesActivities
WHERE PATINDEX('%Pending DR%', ia.Notes) > 0 OR PATINDEX('%Pending E%', ia.Notes) > 0
)ia
ON i.IssueID = ia.IssueID
WHERE i.IssueStatusID = 2 --Closed issues only
AND ia.IssueID IS NULL
``` | Select DISTINCT field when using PATINDEX | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I made this procedure..
```
ALTER PROCEDURE [dbo].[MyProcedure]
@pSelect nvarchar
AS
BEGIN
SET NOCOUNT ON;
select @pSelect from tabel1
END
```
I want to pass a column list from C# code to this stored procedure, like:
```
MyProcedure("column1,column2");
```
How could I do this? The stored procedure treats my parameter as a string, so it behaves like
```
select N'column1,column2' from tabel1
```
Please help me, or suggest a better option for this.
```
ALTER PROCEDURE [dbo].[MyProcedure]
@pSelect nvarchar(max)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SQL nvarchar(max)
SET @SQL = 'select ' + @pSelect + ' from tabel1';
EXEC (@SQL)
END
```
Here's a script to test the above stored procedure:
```
CREATE TABLE tabel1 (id int, data varchar(50))
INSERT INTO tabel1 VALUES(1,'aaa'),(2,'bbb'),(3,'ccc')
EXEC [dbo].[MyProcedure] 'id'
EXEC [dbo].[MyProcedure] 'data'
EXEC [dbo].[MyProcedure] 'id,data'
``` | You can use dynamic SQL for the purpose, but it is not recommended. (More info here - [The Curse and Blessings of Dynamic SQL](http://www.sommarskog.se/dynamic_sql.html))
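The reason dynamic SQL is needed at all is that identifiers (column names) can't be passed as bind parameters; only values can. The same limitation exists in every client library. A small illustration with Python's `sqlite3` (names invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tabel1 (id INTEGER, data TEXT)")
conn.executemany("INSERT INTO tabel1 VALUES (?, ?)", [(1, "aaa"), (2, "bbb")])

cols = "id,data"
# A column list cannot be bound with '?'; it has to be spliced into the
# SQL text, exactly like 'select ' + @pSelect + ' from tabel1' above.
rows = conn.execute(f"SELECT {cols} FROM tabel1 ORDER BY id").fetchall()
print(rows)  # [(1, 'aaa'), (2, 'bbb')]
```

As with `EXEC(@SQL)`, splicing strings like this is an injection risk, so the column list should be validated against a whitelist of known column names.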
```
create procedure MyProcedure
@pSelect nvarchar
AS
begin
declare @sql nvarchar(4000);
set @sql='select ['+ @pSelect +'] from Table_1';
exec sp_executesql @sql
end
go
exec MyProcedure 'column1,column2'
go
``` | i want to pass a select query in a stored procedure as an argumnet | [
"",
"sql",
"sql-server",
""
] |
I need to convert data of a table and do some manipulation.
One of the column datatypes is `Varchar`, but it stores **decimal** numbers.
I am struggling to convert the `varchar` into `decimal`.
I have tried `CAST( @TempPercent1 AS DECIMAL(28, 16))`
Problem is that data also has some values in exponential notation, for example: `1.61022e-016`.
The sql query is throwing error on encountering such value.
The error is `Error converting data type varchar to numeric.`
How should I handle exponential notation values during varchar to decimal conversion? | You may try following, but you may loose accuracy:
```
select cast(cast('1.61022e-016' AS float) as DECIMAL(28, 16))
``` | You could just do
```
CAST(CAST(value AS float) AS decimal(36, 20))
```
**but...**
If you do that, the cast-to-float will also change all values that have no E inside - because floats can only be as exact as the machine-epsilon ...
You can mitigate that, by only casting to float if the number contains an E (+/-).
If it's a normal decimal number, just cast to decimal directly.
This will minimize float-round-off errors.
```
SELECT
CASE
WHEN factor LIKE '%E-%' THEN CAST(CAST(factor AS FLOAT) AS DECIMAL(36, 20))
WHEN factor LIKE '%E+%' THEN CAST(CAST(factor AS FLOAT) AS DECIMAL(36, 20))
ELSE CAST(factor AS DECIMAL(36, 20))
END AS factor
FROM T_IFC_Import_Currency
```
Or more compact:
```
SELECT
CASE
WHEN factor LIKE '%E[+-]%' THEN CAST(CAST(factor AS FLOAT) AS DECIMAL(36, 20))
ELSE CAST(factor AS DECIMAL(36, 20))
END AS factor
FROM T_IFC_Import_Currency
```
**Note:**
Don't simplify this to:
```
SELECT
CAST
(
CASE
WHEN factor LIKE '%E[+-]%' THEN CAST(factor AS FLOAT)
ELSE factor
END
AS decimal(36,20)
) AS factor
FROM T_IFC_Import_Currency
```
because that means the case expression is first implicitly cast to float before it is being cast to decimal... | SQL Server: Convert varchar to decimal (with considering exponential notation as well) | [
"",
"sql",
"sql-server",
"decimal",
"type-conversion",
""
] |
I'm trying to update a stored procedure that determines a response time from when a ticket is received. In the table I have the timestamp when the ticket was received (ref\_dttm TIMESTAMP WITHOUT TIME ZONE) and the timestamp when the ticket was first responded to (first\_action\_dttm TIMESTAMP WITHOUT TIME ZONE). When calculating the response time, I need to account for operating hours, weekends, and their holiday closures.
Currently the function calculates the interval and can subtract the hours they business is closed but I can't seem to figure out a way to exclude the weekends and holidays. Basically I'll need to subtract 15 hrs per week day (open 0900-1800) and 24 hrs for each weekend day and holiday.
Given the day of the week that the ticket is received and the time span:
```
Select
extract(dow from ref_dttm) as dow,
extract(days from (ref_dttm - first_action_dttm) as days
```
Is there a simple way to determine how many weekends have passed?
This is what I have so far - it subtracts 15 hrs per day and doesn't account for weekends:
```
CREATE TEMP TABLE tmp_ticket_delta ON COMMIT DROP AS
SELECT id,ticket_id,ticket_num
,(ticket_dttm - first_action_dttm) as delta
,extract(days from (ticket_dttm - first_action_dttm)) as days
,ticket_descr
FROM t_tickets
WHERE ticket_action_by > 0
SELECT id,ticket_id,ticket_num,delta,days,ticket_descr,
CASE WHEN days = 0 THEN
CASE WHEN extract(hour from delta) > 15 THEN
--less than one day but outside of business hours so subtract 15 hrs
delta - INTERVAL '15:00:00.000'
ELSE
delta
END
ELSE
CASE WHEN extract(hour from delta) > 15 THEN
--take the total number of hours - closing hours + delta - closed hours
(((days * 24) - (days * 15)) * '1 hour'::INTERVAL) + delta - INTERVAL '15:00:00.000' - (days * '1 day'::INTERVAL)
ELSE
(((days * 24) - (days * 15)) * '1 hour'::INTERVAL) + delta - (days * '1 day'::INTERVAL)
END
END AS adj_diff
FROM tmp_ticket_delta
``` | I like to store important business data in tables. Queries like this
```
select min(cal_date),
max(cal_date),
sum(hours_open) total_time_open,
sum(hours_closed) total_time_closed
from daily_hours_open_and_closed
where cal_date between '2013-08-28' and '2013-09-03';
```
are easy to understand, maintain, and debug when they're based on data stored in simple tables.
I'd start with a [calendar table](https://stackoverflow.com/a/5030686/562459), and add a table for the time your place is open. This table, "open\_times", is the simplest way to start, but it might be too simple for your business. For example, you might want tighter CHECK constraints. Also, I've made no attempt to make this efficient, although the final query runs in only 12 ms on my development box.
```
create table open_times (
bus_open timestamp primary key,
bus_closed timestamp not null
check (bus_closed > bus_open)
);
```
Quick and dirty way to populate that table with weekday hours for 2013.
```
with openings as (
select generate_series(timestamp '2013-01-01 09:00',
timestamp '2013-12-31 18:00', '1 day') bus_open
)
insert into open_times
select bus_open, bus_open + interval '9 hours' bus_closed
from openings
where extract(dow from bus_open) between 1 and 5
order by bus_open;
```
Labor day is a holiday here, so Monday, Sep 2, is a holiday. Delete 2013-09-02.
```
delete from open_times
where bus_open = '2013-09-02 09:00';
```
That's the only holiday I'm interested in for the purpose of showing how this works. You'll have to do better than I did, of course.
To make things simpler still, create a view that shows the daily hours of operation as intervals.
```
create view daily_hours_open_and_closed as
select c.cal_date,
ot.bus_open,
ot.bus_closed,
coalesce(bus_closed - bus_open, interval '0 hours') as hours_open,
interval '24 hours' - (coalesce(bus_closed - bus_open, interval '0 hours')) as hours_closed
from calendar c
left join open_times as ot
on c.cal_date = cast(ot.bus_open as date);
```
Now, how many hours are we open and how many hours are we closed for the 7 days between 2013-08-28 and 2013-09-03? The query for raw data is dead simple now.
```
select *
from daily_hours_open_and_closed
where cal_date between '2013-08-28' and '2013-09-03'
order by cal_date;
cal_date bus_open bus_closed hours_open hours_closed
--
2013-08-28 2013-08-28 09:00:00 2013-08-28 18:00:00 09:00:00 15:00:00
2013-08-29 2013-08-29 09:00:00 2013-08-29 18:00:00 09:00:00 15:00:00
2013-08-30 2013-08-30 09:00:00 2013-08-30 18:00:00 09:00:00 15:00:00
2013-08-31 00:00:00 24:00:00
2013-09-01 00:00:00 24:00:00
2013-09-02 00:00:00 24:00:00
2013-09-03 2013-09-03 09:00:00 2013-09-03 18:00:00 09:00:00 15:00:00
```
Use aggregate functions to do the arithmetic.
```
select min(cal_date),
max(cal_date),
sum(hours_open) total_time_open,
sum(hours_closed) total_time_closed
from daily_hours_open_and_closed
where cal_date between '2013-08-28' and '2013-09-03'
min max total_time_open total_time_closed
--
2013-08-28 2013-09-03 36:00:00 132:00:00
``` | You can count weekends by something like this query:
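The aggregate result can be cross-checked procedurally. Here is a plain Python sketch encoding the same 2013 rules (Mon-Fri open 09:00-18:00, Labor Day 2013-09-02 closed), which agrees with the 36:00:00 / 132:00:00 totals above:

```python
from datetime import date, timedelta

start, end = date(2013, 8, 28), date(2013, 9, 3)
holidays = {date(2013, 9, 2)}  # Labor Day

open_hours = closed_hours = 0
d = start
while d <= end:
    if d.weekday() < 5 and d not in holidays:  # Mon-Fri and not a holiday
        open_hours += 9
        closed_hours += 15
    else:
        closed_hours += 24
    d += timedelta(days=1)

print(open_hours, closed_hours)  # 36 132
```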
```
select
*,
(extract(week from ref_dttm) - extract(week from first_action_dttm)) * 2 -
case extract(dow from first_action_dttm) when 0 then 1 else 0 end +
case extract(dow from ref_dttm) when 0 then 2 when 6 then 1 else 0 end
from t_tickets
```
try it on [**`sql fiddle demo`**](http://sqlfiddle.com/#!12/33990/7)
or if you dates could have different years:
```
select
*,
trunc(date_part('day', ref_dttm - first_action_dttm) / 7) * 2 +
case extract(dow from first_action_dttm) when 0 then 1 when 6 then 2 else 0 end +
case extract(dow from ref_dttm) when 6 then 1 when 0 then 2 else 0 end -
case when extract(dow from ref_dttm) = extract(dow from first_action_dttm) then 2 else 0 end as weekends
from t_tickets
```
try it on [**`sql fiddle demo`**](http://sqlfiddle.com/#!12/33990/24) | Postgres function to determine number of weekends in an interval | [
"",
"sql",
"postgresql",
""
] |
table1:
```
Id Customer No_
7044900804 Z0172132
7044900804 Z0194585
7044907735 Z0172222
7044907735 Z0172222
7044910337 Z0172216
7044911903 Z0117392
```
I would like to get only the values with the **same Id AND different Customer No\_** from table1.
```
Id Customer No_
7044900804 Z0172132
7044900804 Z0194585
```
I've already tried using a query for finding duplicates, but it doesn't filter out rows with the **same Id AND same Customer No\_** from table1.
```
SELECT Id
, [Customer No_]
FROM table1
GROUP BY Id, [Customer No_]
HAVING COUNT(*) > 1
```
Could you help me with **T-SQL** query solution?
Thanks for all the advices. | Try this
```
SELECT * FROM table1 WHERE Id IN
(
SELECT Id
FROM table1
GROUP BY Id
HAVING COUNT(DISTINCT [Customer No_]) > 1
)
```
[**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!3/e92fe/1) | I think the easiest way is just to join the table together twice, where the IDs are joined, but the Customer No\_s do not match.
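The logic is easy to sanity-check against the sample data. A sketch using Python's `sqlite3` (bracketed SQL Server identifiers become double-quoted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE table1 (Id TEXT, "Customer No_" TEXT)')
conn.executemany("INSERT INTO table1 VALUES (?, ?)", [
    ("7044900804", "Z0172132"),
    ("7044900804", "Z0194585"),  # same Id, different customer -> keep
    ("7044907735", "Z0172222"),
    ("7044907735", "Z0172222"),  # same Id, same customer -> drop
    ("7044910337", "Z0172216"),
])

rows = conn.execute('''
    SELECT * FROM table1 WHERE Id IN (
        SELECT Id FROM table1
        GROUP BY Id
        HAVING COUNT(DISTINCT "Customer No_") > 1
    ) ORDER BY "Customer No_"''').fetchall()

print(rows)  # [('7044900804', 'Z0172132'), ('7044900804', 'Z0194585')]
```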
```
select t1.id, t1.[customer No_], t2.[customer No_]
from table1 t1
inner join table1 t2
on t1.id = t2.id
and t1.[customer No_] != t2.[customer No_]
``` | T-SQL - query filtering | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I wrote a query to obtain First of month,
```
SELECT DATEADD(DAY, -(DATEPART(DAY,GETDATE())-1), GETDATE()) as First_Of_Month;
```
for which I do get the appropriate date, but the timestamp shows the current time.
Here's what I am doing in the query; I hope it makes sense:
Using `DATEPART` I calculated the number of days (int) between the 1st and today (27-1 = 26).
Then, using the `DATEADD` function, I added the negative of that value to get the first of the month.
This only changes the date; what should I look at or read about in order to also reset the time to 00:00:00? I am assuming it has something to do with 19000101.
```
DECLARE @now DATETIME = CURRENT_TIMESTAMP;
SELECT DATEADD(DAY, 1, EOMONTH(@now, -1));
-- or
SELECT DATEFROMPARTS(YEAR(@now), MONTH(@now), 1);
```
**For SQL Server 2008+**
```
DECLARE @now DATETIME = CURRENT_TIMESTAMP;
-- This shorthand works:
SELECT CONVERT(DATE, @now+1-DAY(@now));
-- But I prefer to be more explicit, instead of relying on
-- shorthand date math (which doesn't work in all scenarios):
SELECT CONVERT(DATE, DATEADD(DAY, 1-DAY(@now), @now));
```
**For SQL Server 2005**
```
SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()),0);
```
A caveat if you're using this SQL Server 2005 method: you need to be careful about using expressions involving `DATEDIFF` in queries. [SQL Server can transpose the arguments and lead to horrible estimates](http://connect.microsoft.com/SQLServer/feedback/details/630583/incorrect-estimate-with-condition-that-includes-datediff) - [as seen here](https://stackoverflow.com/questions/18241977/query-runs-slow-with-date-expression-but-fast-with-string-literal/). It might actually be safer to take the slightly less efficient string approach:
```
SELECT CONVERT(DATETIME, CONVERT(CHAR(6), GETDATE(), 112) + '01');
``` | ```
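The 2005 trick works by counting whole months since the epoch (1900-01-01, i.e. `0` as a datetime) and adding them back, which necessarily lands on the first of the month at midnight. The same arithmetic in a plain Python sketch (the sample timestamp is invented):

```python
from datetime import datetime

now = datetime(2013, 8, 27, 14, 33, 5)

# DATEDIFF(MONTH, 0, now): whole months since 1900-01-01
months = (now.year - 1900) * 12 + (now.month - 1)

# DATEADD(MONTH, months, 0): add them back onto the epoch
first_of_month = datetime(1900 + months // 12, months % 12 + 1, 1)

print(first_of_month)  # 2013-08-01 00:00:00
```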
SELECT convert(varchar(10),DATEADD(DAY, - (DATEPART(DAY,GETDATE())-1), GETDATE()),120)+' 00:00:00' as First_Of_Month;
``` | How to get 00:00:00 in datetime, for First of Month? | [
"",
"sql",
"sql-server",
"datetime",
"sql-server-2008-r2",
""
] |
Does anybody have an idea why this gives me error 102, incorrect syntax?
```
declare @i int=20
while @i<=50
begin
try
if convert(char,getdate(),@i) is not null
begin
select convert(char,getdate(),@i)
end
set @i=@i+1
end try
begin catch
set @i=@i+1
end catch;
end
``` | Judging by your indentation, I believe you wanted `BEGIN TRY` for line 5. | Remove the semi colon after end catch, and add begin before the try.
Something like
```
declare @i int=20
while @i<=50
begin
begin try
if convert(char,getdate(),@i) is not null
begin
select convert(char,getdate(),@i)
end
set @i=@i+1
end try
begin catch
set @i=@i+1
end catch
end
```
## [SQL Fiddle DEMO](http://www.sqlfiddle.com/#!3/d41d8/19495)
Here is the [corrected Syntax](http://technet.microsoft.com/en-us/library/ms175976.aspx)
```
BEGIN TRY
{ sql_statement | statement_block }
END TRY
BEGIN CATCH
[ { sql_statement | statement_block } ]
END CATCH
``` | Why does this code sequence give me an error 102 | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I have an application where I find a Sum() of a database column for a set of records and later use that sum in a separate query, similar to the following (made up tables, but the idea is the same):
```
SELECT Sum(cost)
INTO v_cost_total
FROM materials
WHERE material_id >=0
AND material_id <= 10;
[a little bit of interim work]
SELECT material_id, cost/v_cost_total
INTO v_material_id_collection, v_pct_collection
FROM materials
WHERE material_id >=0
AND material_id <= 10
FOR UPDATE;
```
However, in theory someone could update the cost column on the materials table between the two queries, in which case the calculated percents will be off.
Ideally, I would just use a FOR UPDATE clause on the first query, but when I try that, I get an error:
```
ORA-01786: FOR UPDATE of this query expression is not allowed
```
Now, the work-around isn't the problem - just do an extra query to lock the rows before finding the Sum(), but that query would serve no other purpose than locking the tables. While this particular example is not time consuming, the extra query could cause a performance hit in certain situations, and it's not as clean, so I'd like to avoid having to do that.
Does anyone know of a particular reason why this is not allowed? In my head, the FOR UPDATE clause should just lock the rows that match the WHERE clause - I don't see why it matters what we are doing with those rows.
EDIT: It looks like SELECT ... FOR UPDATE can be used with analytic functions, as suggested by David Aldridge below. Here's the test script I used to prove this works.
```
SET serveroutput ON;
CREATE TABLE materials (
material_id NUMBER(10,0),
cost NUMBER(10,2)
);
ALTER TABLE materials ADD PRIMARY KEY (material_id);
INSERT INTO materials VALUES (1,10);
INSERT INTO materials VALUES (2,30);
INSERT INTO materials VALUES (3,90);
<<LOCAL>>
DECLARE
l_material_id materials.material_id%TYPE;
l_cost materials.cost%TYPE;
l_total_cost materials.cost%TYPE;
CURSOR test IS
SELECT material_id,
cost,
Sum(cost) OVER () total_cost
FROM materials
WHERE material_id BETWEEN 1 AND 3
FOR UPDATE OF cost;
BEGIN
OPEN test;
FETCH test INTO l_material_id, l_cost, l_total_cost;
Dbms_Output.put_line(l_material_id||' '||l_cost||' '||l_total_cost);
FETCH test INTO l_material_id, l_cost, l_total_cost;
Dbms_Output.put_line(l_material_id||' '||l_cost||' '||l_total_cost);
FETCH test INTO l_material_id, l_cost, l_total_cost;
Dbms_Output.put_line(l_material_id||' '||l_cost||' '||l_total_cost);
END LOCAL;
/
```
Which gives the output:
```
1 10 130
2 30 130
3 90 130
``` | The syntax `select . . . for update` locks records in a table to prepare for an update. When you do an aggregation, the result set no longer refers to the original rows.
In other words, there are no records in the database to update. There is just a temporary result set. | You might try something like:
```
<<LOCAL>>
declare
material_id materials.material_id%Type;
cost materials.cost%Type;
total_cost materials.cost%Type;
begin
select material_id,
cost,
sum(cost) over () total_cost
into local.material_id,
local.cost,
local.total_cost
from materials
where material_id between 1 and 3
for update of cost;
...
end local;
```
The first row gives you the total cost, but it selects all the rows and in theory they could be locked.
I don't know if this is allowed, mind you -- be interesting to hear whether it is. | Why can't I use SELECT ... FOR UPDATE with aggregate functions? | [
"",
"sql",
"database",
"oracle",
"aggregate-functions",
"select-for-update",
""
] |
I have the following tables:

and:

And I formulated this query:
```
Update Table 1
SET DY_H_ID = (
SELECT MAX(ID)
FROM Table 2
WHERE H_DateTime <= DY_Date
AND H_IDX = DY_IDX
AND H_HA_ID = 7
AND H_HSA_ID = 19
AND H_Description LIKE 'Diary item added for :%'
)
WHERE DY_H_ID IS NULL AND DY_IDX IS NOT NULL
```
which results in this:

However, this query updates all 6 rows. I need to update only the two rows with the latest date, that would be `'2013-08-29 15:00:00.000'`. That would mean only 2 of the 6 records would be updated and the other 4 would remain NULL.
How can I do this by adding to the above query? I know this might not be ideal but there is no option but to do something like this. What I don't understand is how do you select only the latest dates without hardcoding it. This data can change and it won't always be the same dates etc. | Try this:
```
UPDATE TABLE 1
SET DY_H_ID = (SELECT Max(ID)
FROM TABLE 2
WHERE H_DATETIME <= DY_DATE
AND H_IDX = DY_IDX
AND H_HA_ID = 7
AND H_HSA_ID = 19
AND H_DESCRIPTION LIKE 'Diary item added for :%')
WHERE DY_H_ID IS NULL
AND DY_IDX IS NOT NULL
AND DY_DATE = (SELECT Max(DY_DATE)
FROM TABLE1)
``` | Just add another condition to where:
```
Update Table 1
SET DY_H_ID = (
SELECT MAX(ID)
FROM Table 2
WHERE H_DateTime <= DY_Date
AND H_IDX = DY_IDX
AND H_HA_ID = 7
AND H_HSA_ID = 19
AND H_Description LIKE 'Diary item added for :%'
)
WHERE DY_H_ID IS NULL AND DY_IDX IS NOT NULL
and DY_Date = (select max(DY_Date) from Table 1)
``` | Updating only ID's with the latest date SQL (2 of 6) | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
i have 3 tables
1)main\_table
2)Flag\_table
3)emp\_flagdetails
main\_table structure
```
emp_no hod_no emp_name flgType
E51397 E44417 Asha V
E42342 E44417 Shaikh Faiz Ahmed
E06636 E44417 Balu K U
```
In the above table I kept the flgType column blank, to update it later.
now i have Flag\_table structure as follow
```
FlagId FlagCategory FlagType
1 BM BRML12B
2 BM BRMM12B
3 BM BRMRMB
4 BM BRMCMB
5 BM BRMZM
6 VH BRML12V
7 VH BRMM12V
8 VH BRMRMV
9 VH BRMCMV
```
emp\_flagdetails structure is a follow
```
ecode flag
E44417 BRML12B
E42342 BRMRMB
E06636 BRMZM
E51397 BRML12B
```
That is my table structure. Now I want to update the flgType column of main\_table with the FlagCategory column of Flag\_table, in such a way that if an emp\_no from main\_table is present in emp\_flagdetails, we look up the flag column of emp\_flagdetails for that employee, find that value in the FlagType column of Flag\_table, and, if it is present there, update the flgType column of main\_table with the corresponding FlagCategory value. So the output will be as follows:
```
emp_no hod_no emp_name flgType
E51397 E44417 Asha V BM
E42342 E44417 Shaikh Faiz Ahmed BM
E06636 E44417 Balu K U BM
```
please help me to write the query | Query:
**[SQLFIDDLEExample](http://sqlfiddle.com/#!6/fdb50/1)**
```
UPDATE m
SET m.flgType = f.FlagCategory
FROM main_table m
JOIN emp_flagdetails fd
ON fd.ecode = m.emp_no
JOIN flag_table f
ON f.FlagType = fd.flag;
``` | ```
update main_table m, (select a.emp_no, b.flagcategory from emp_flagdetails a, flag_table b
where a.flag = b.flagtype) s set m.flgtype = s.flagcategory where m.emp_no = s.emp_no
``` | complex sql query to update main table | [
"",
"sql",
"sql-server",
"sql-update",
"join",
""
] |
I want to know if a date (Year-Month-Day) is between a day of the month (Month-Day).
For example, I want to know if 'April 2, 2013' is between 'April 2' and 'April 19'.
I can easily do this if the range has a year value. However, without a year, I need ideas how to do this.
Some examples of what I'm trying to achieve:
```
SELECT 1 WHERE '2013-04-02' BETWEEN '04-01' AND '04-19'; -- result is 1
SELECT 1 WHERE '2014-04-02' BETWEEN '04-01' AND '04-19'; -- result is 1
SELECT 1 WHERE '2014-03-02' BETWEEN '04-01' AND '04-19'; -- result is 1
```
Thanks | You can make use of `DATE_FORMAT()`.
**Example**
```
SELECT
*
FROM
table
WHERE
DATE_FORMAT(date,'%m-%d') BETWEEN '04-01' AND '04-19'
```
**Edit**
```
SELECT
*
FROM
table
WHERE
STR_TO_DATE(date, '%m-%d') BETWEEN '04-01' AND '04-19'
``` | Following is work with oracle
```
select * from table_name where to_char(date_column, 'MM-DD') BETWEEN '08-01' AND '08-14'
``` | Query to check if date is between two month-day values | [
"",
"mysql",
"sql",
"date",
""
] |
I have two tables like this:
```
Table1
________
StudentNumbers ExamType
1234 1
2343 2
3345 5
3454 1
5465 2
...
Table2
________
StudentNumbers ExamType ExamDate School Area
1234 1 0825 warren ny
1234 1 0829 north nj
1233 2 0921 north nj
2343 1 0922 warren ny
2343 1 0925 north ny
...
```
I need to find out each students maximum ExamDate from the Table2 by using data from Table1 for particular ExamType. I have come up with this so far but this seems incorrect:
```
Select t2.StudentNumbers, t2.ExamType, max(t2.ExamDate), t2.School, t2.Area
from Table2 as t2
Join Table1 as t1 on
t1.StudentNumbers = t2.StudentNumbers
where t2.ExamType = 1
```
I am getting an error saying a column "is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause".
It should basically return back:
```
StudentNumbers ExamType ExamDate School Area
1234 1 0829 north nj
2343 1 0925 north ny
``` | When using `max()` or another aggregate function, the other fields in the result set need to be grouped.
```
Select t2.StudentNumbers, t2.ExamType, max(t2.ExamDate), t2.School, t2.Area
from Table2 as t2
Join Table1 as t1 on
t1.StudentNumbers = t2.StudentNumbers
where t2.ExamType = 1
group by t2.StudentNumbers, t2.ExamType, t2.School, t2.Area
```
This will give you the latest exam date per student, exam type, school and area. | You need to use either `GROUP BY` or change the aggregate to a window function (eg. `MAX(t2.ExamDate) over (partition by t2.StudentNumbers, t2.ExamType)`.
BTW, as mentioned in some of the comments, considering the data in the first table is included in the second , the whole idea of the `JOIN` is not really useful. | Sql script to get values for two tables combined | [
"",
"sql",
""
] |
**UPDATE**
With the variable $id I know the ID of a row in table\_1. table\_1 and table\_2 have a matching column (with the same content). I want to select and show the result column from table\_2.
**TABLE 1**
```
| ID | color |
-----------------------------------
1 | data1 |
2 | data2 |
3 | data3 |
4 | data4 |
5 | data5 |
```
**TABLE 2**
```
| ID | flower | result |
------------------------------------------------------
11 | data1 | result1 |
12 | data2 | result2 |
13 | data3 | result3 |
14 | data4 | result4 |
15 | data5 | result5 |
```
ex
ID = 5
result = result5 | ```
Select t2.*, t1.color from table_2 t2 inner join table_1 t1 on t1.color = t2.flower and t1.id = '$id'
``` | A simple join will handle this.
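A quick end-to-end check of the join, sketched with Python's `sqlite3` (table and column names taken from the question, only a subset of the rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_1 (id INTEGER, color TEXT)")
conn.execute("CREATE TABLE table_2 (id INTEGER, flower TEXT, result TEXT)")
conn.executemany("INSERT INTO table_1 VALUES (?, ?)",
                 [(4, "data4"), (5, "data5")])
conn.executemany("INSERT INTO table_2 VALUES (?, ?, ?)",
                 [(14, "data4", "result4"), (15, "data5", "result5")])

row = conn.execute("""
    SELECT t2.result
    FROM table_1 t1
    JOIN table_2 t2 ON t2.flower = t1.color
    WHERE t1.id = ?""", (5,)).fetchone()

print(row)  # ('result5',)
```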
```
select t2.result
from table1 t1
join table2 t2
on t1.color = t2.flower
where t1.id = 5
``` | MYSQL select in different table with condition | [
"",
"mysql",
"sql",
"select",
""
] |
I have a column of type bigint (ProductSerial) in my table. I need to filter the table by the product serial using the `LIKE` operator, but I found that `LIKE` can't be used on an integer type.
Is there any other method for this (I don't want to use the `=` operator). | If you must use `LIKE`, you can cast your number to `char`/`varchar`, and perform the `LIKE` on the result. This is quite inefficient, but since `LIKE` has a high potential of killing indexes anyway, it may work in your scenario:
```
... AND CAST(phone AS VARCHAR(9)) LIKE '%0203'
```
If you are looking to use `LIKE` to match the beginning or the end of the number, you could use integer division and modulus operators to extract the digits. For example, if you want all nine-digit numbers starting in `407`, search for
```
phone / 1000000 = 407
``` | Although I'm a bit late to the party, I'd like to add the method I'm using to match the first N given numbers (in the example, 123) in any numeric-type column:
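Both approaches are easy to verify. A sketch with Python's `sqlite3` (SQLite spells the cast `CAST(... AS TEXT)`; the numbers are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (phone INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(407550203,), (305110203,), (407999999,)])

# Suffix match via cast-to-string
tail = [r[0] for r in conn.execute(
    "SELECT phone FROM t WHERE CAST(phone AS TEXT) LIKE '%0203' ORDER BY phone")]

# Prefix match on nine-digit numbers via integer division
head = [r[0] for r in conn.execute(
    "SELECT phone FROM t WHERE phone / 1000000 = 407 ORDER BY phone")]

print(tail)  # [305110203, 407550203]
print(head)  # [407550203, 407999999]
```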
```
SELECT * FROM MyTable WHERE MyColumn / POWER(10, LEN(MyColumn) - LEN(123)) = 123
```
The technique is similar to @dasblinkenlight's one, but it works regardless of the number of digits of the target column values. This is a viable workaround if your column contain numbers with different length and you don't want to use the CAST+LIKE method (or a calculated column).
For additional details on that (and other LIKE workarounds) check out [this blog post](https://www.ryadel.com/en/like-operator-equivalent-integer-numeric-columns-sql-t-sql-database/) that I wrote on this topic. | Like operator for integer | [
"",
"sql",
"sql-like",
""
] |
I am doing quite a large query to a database and for some reason it is returning many results that do not match any of my search terms. It also seems to duplicate my results so I get the same SQL item 16 times. Any ideas why?
```
SELECT a.*
FROM
j20e8_datsogallery AS a,
j20e8_datsogallery_tags AS t
WHERE
(a.id LIKE "%bear%" OR
a.imgtitle LIKE "%bear%" OR
a.imgtext LIKE "%bear%" OR
a.imgauthor LIKE "%bear%" OR
t.tag LIKE "%bear%")
ORDER BY a.id DESC
LIMIT 0, 16
```
I think it may be something to do with the LIKE %term% section, but I cannot get it working at all. | I'd make sure you qualify your join. Otherwise you'll end up with a full join, or worse, a Cartesian product from a cross join. Something along these lines:
```
SELECT a.*
FROM
j20e8_datsogallery AS a
JOIN j20e8_datsogallery_tags AS t ON a.ID = t.GalleryID
WHERE
...
ORDER BY a.id DESC
LIMIT 0, 16
```
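The row-multiplication effect of an unqualified comma join is easy to demonstrate with Python's sqlite3 module (the schema and data below are invented for illustration):

```python
import sqlite3

# Invented gallery/tags schema to show why the join must be qualified.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gallery (id INTEGER, title TEXT);
    CREATE TABLE tags (gallery_id INTEGER, tag TEXT);
    INSERT INTO gallery VALUES (1, 'bear photo'), (2, 'cat photo');
    INSERT INTO tags VALUES (1, 'bear'), (1, 'wildlife'), (2, 'cat');
""")

# Unqualified comma join: every row pairs with every row (2 x 3 = 6).
cartesian = conn.execute("SELECT COUNT(*) FROM gallery, tags").fetchone()[0]

# Qualified join: only matching rows pair up.
joined = conn.execute(
    "SELECT COUNT(*) FROM gallery g JOIN tags t ON t.gallery_id = g.id"
).fetchone()[0]
print(cartesian, joined)
```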
Also, consider using a `FULLTEXT INDEX` ... it could combine all those columns into a single index, and would make searching all of them quite functional.
A `FULLTEXT INDEX` in MySql can be used to 'combine' several different columns into one big pile of text, which you can then `MATCH()` columns `AGAINST` search terms.
To create a `FULLTEXT INDEX`, you can simply use the `CREATE INDEX` syntax documented [here](http://dev.mysql.com/doc/refman/5.7/en/create-index.html).
```
CREATE FULLTEXT INDEX FDX_datsogallery
ON j20e3_datsogallery ( id, imgtitle, imgtext, imgauthor )
```
You can then use it in a query with the `MATCH() ... AGAINST` statements, which are documented [here](http://dev.mysql.com/doc/refman/5.7/en/fulltext-search.html):
```
SELECT a.*
FROM j20e8_datsogallery AS a
WHERE MATCH( id, imgtitle, imgtext, imgauthor ) AGAINST( 'bear' )
``` | It's bringing back multiples because:
```
SELECT a.*
FROM j20e8_datsogallery AS a, j20e8_datsogallery_tags AS t
```
brings back ***every*** combination of records from the two tables on it's own. So `bear` in one table joins to every record in the other table.
You need to specify a relationship between the tables, preferably using an explicit `JOIN` | SQL Specific Statement Query | [
"",
"mysql",
"sql",
""
] |
In my table I have a field **user\_ids**. The **user\_ids** field contains comma-separated values like **12,45,78,95,21,78,98**.
What I need is a **MySQL** query that searches for a specific ID (for example 45) in that field.
I used the **LIKE** operator but it is not working as I expected; for example, when I search for **5** it returns true, even though the **ID 5** is not in the list.
I would like to know whether there is a built-in function available in **MySQL** for this.
Could you please help me?
My query:
```
SELECT * FROM `FRIENDSLIST` WHERE `USERS_ID` LIKE '%'.$ID.'%';
```
NB: I don't know whether this question meets the standard; please don't down-vote. Please help me... | This also works:
```
select * from FRIENDSLIST where find_in_set( $ID, USERS_ID ) <> 0
``` | Try
```
Where ',' + MyField + ',' Like '%,' + MySearchString + ',%'
```
The problem is that you're thinking of it as IDs, but it's just a string. So when you search for '5' in '12,45,78,95,21,78,98' it finds it in the 5 of the 45. If you surround it with commas then it searches for ',45,' in ',12,45,78,95,21,78,98,' and finds it, but if you look for ',5,' it won't match, as desired.
You also need to add commas at the beginning and end to be able to match the first and last IDs. | how to search for an id from a group of id using mysql? | [
"",
"mysql",
"sql",
""
] |
I would like to know how many rows would be returned without rownum condition in a given query:
```
SELECT columns
, [number of rows before applying rownum] AS numberofrows
FROM tables
WHERE conditions
AND rownum < 100
```
Is it possible without using a subquery? | You can use the analytic version of `COUNT()` in a nested query, e.g.:
```
SELECT * FROM
(
SELECT table_name,
COUNT(*) OVER() AS numberofrows
FROM all_tables
WHERE owner = 'SYS'
ORDER BY table_name
)
WHERE rownum < 10;
```
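A small runnable sketch of the same COUNT(*) OVER () trick, using Python's sqlite3 module (needs SQLite 3.25+ for window functions; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [(c,) for c in "abcdef"])

# COUNT(*) OVER () reports the full row count; LIMIT trims afterwards.
rows = conn.execute(
    "SELECT name, COUNT(*) OVER () AS total FROM t ORDER BY name LIMIT 3"
).fetchall()
print(rows)
```

Each of the three returned rows carries the pre-limit total of 6.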
You need to nest it anyway to apply an order-by before the `rownum` filter to get consistent results, otherwise you get a random(ish) set of rows.
You can also replace `rownum` with the analytic `ROW_NUMBER()` function:
```
SELECT table_name, cnt FROM
(
SELECT table_name,
COUNT(*) OVER () AS numberofrows,
ROW_NUMBER() OVER (ORDER BY table_name) AS rn
FROM all_tables
WHERE owner = 'SYS'
)
WHERE rn < 10;
``` | You need to run two queries. First to find the size of set and second with data.
`select count(*) from tables where conditions;`
then the second one.
Note that you should order your set first, and wrap it in another select:
`select * from (select columns from tables where conditions order by columns) where rownum < 100`.
[Lecture about limiting resources](http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html) | Oracle - calculate number of rows before some condition is applied | [
"",
"sql",
"oracle",
""
] |
I would like to create a table that represents the list of contacts of a user, but each user may have a different number of contacts...
I have a table **user** with the columns **id\_user, nome, info**.
Each group needs to contain the ID of the user that owns the group, the IDs of the other users, and an integer value for each of those users.
So, the table **group** is something like
**id\_dono\_do\_grupo, id\_user\_1, id\_user\_2, .., id\_user\_N, valor\_1, valor\_2, .., valor\_N**
but this **N** may change.
Any idea on how to model this table?
Thank you very much. | Something like this:
* (P) means primary key
* (F table.column) means foreign key to table.column
A table for all the groups.
```
groups
id unsigned int(P)
owner_id unsigned int(F users.id)
name varchar(50) // Vitor's Group, Dan's Group, etc.
```
A table for all your users.
```
users
id unsigned int(P)
name varchar(50)
info varchar(50)
```
And finally a table tying users to groups. The owner of the group is specified in the `groups` table so this `users_groups` table lets you have as many user|value pairs associated with a group as you want.
```
users_groups
id unsigned int(P)
user_id unsigned int(F users.id)
group_id unsigned int(F groups.id)
value unsigned int // Or whatever data type the "value" is...
``` | You need to have two tables to model this correctly.
The first table should store info about the group, including an FK to the owning user.
The second table should store the membership of the group: basically two columns, `group_id` and `user_id`. | About modeling a DB | [
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I have a list of players. The players are sorted by points. What I'd like to know is how do I get the ranking number of a CERTAIN player?
This is my code so far (which doesn't work because it seems to have some bugs):
```
$rank = mysql_query (SET @rank := 0;
SELECT *, @rank := @rank + 1
FROM ava_users
WHERE id = '".$id."'
ORDER BY points DESC);
$rank_res = mysql_fetch_array($rank);
```
When I try to use my query I get an error message:
```
mysql_fetch_array() expects parameter 1 to be resource, boolean given in /Users/***/Documents/Arcades/Arc_development/arc_projects/***/arc_dev_website/arc_offline/includes/profile/profile_main.inc.php
``` | ```
$rank = mysql_query (
"SELECT a.*,
(
SELECT COUNT(1)
FROM ava_users b
WHERE (b.points, b.id) >= (a.points, a.id)
) AS rank
FROM ava_users a
WHERE a.`user` = '$id'"
);
``` | Try this:
```
SELECT `user`, rank
FROM (
SELECT `user`, ( @rank := @rank + 1 ) as rank
FROM ava_users, ( select (@rank := 0 ) ) rnk
ORDER BY points DESC
) ranks
WHERE `user` = '".$id."'
``` | How to get ranking of specific player? | [
"",
"mysql",
"sql",
"database",
""
] |
@ConfListTable is a table valued parameter (TVP) which has a list of confirmation codes.
I want to select all the records from PmtHist table where the confirmation code is also in @ConfListTable. The following code works well for this; no problems.
```
Select * from PmtHist
Where Confirmation in(
Select Str1 as ConfirmationCode
From @ConfListTable
)
```
My problem is this: The confirmation code in PmtHist occasionally has "voided" following the actual confirmation code, like "ab321voided". But I really want those records too. How do I modify the above query to get the records that match either a record in @ConfListTable or @ConfListTable + 'voided'? | A quick & easy way is to simply use REPLACE:
```
Select *
from PmtHist
Where REPLACE(Confirmation, 'voided', '') in(
Select Str1 as ConfirmationCode
From @ConfListTable
)
``` | ```
select *
from PmtHist ph
where exists (
select 1 from @ConfListTable
where ph.Confirmation in (Str1, Str1 + 'voided')
)
``` | How to Select where in(Other table) but with modification | [
"",
"sql",
"t-sql",
""
] |
I tested each part of the SQL and it works fine.
Something is Wrong with my `WHERE` clause however.. not getting much detail setting breakpoints in VS 2012 however...
How should this be code for an `INSERT` statement with a subquery using MS SQL SERVER?
```
INSERT INTO Emails (email, insertDate)
VALUES (@Email, @DateToday)
WHERE NOT EXISTS (SELECT Emails.email
FROM Emails WHERE Emails.email = @Email);
``` | ```
INSERT INTO Emails (email, insertDate)
SELECT @Email, @DateToday
WHERE NOT EXISTS (SELECT Emails.email
FROM Emails WHERE Emails.email = @Email);
``` | This will work
```
INSERT INTO Emails (email, insertDate)
SELECT @Email, @DateToday
WHERE NOT EXISTS (SELECT Emails.email
FROM Emails WHERE Emails.email = @Email)
```
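Both answers rely on the same INSERT ... SELECT ... WHERE NOT EXISTS pattern; here is a hedged, runnable sketch of it with Python's sqlite3 module (the table and helper names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (email TEXT, insert_date TEXT)")

def add_email(email, day):
    # Insert only when no row with this email exists yet.
    conn.execute("""
        INSERT INTO emails (email, insert_date)
        SELECT ?, ?
        WHERE NOT EXISTS (SELECT 1 FROM emails WHERE email = ?)
    """, (email, day, email))

add_email("a@example.com", "2013-08-29")
add_email("a@example.com", "2013-08-30")  # duplicate, skipped
add_email("b@example.com", "2013-08-30")
count = conn.execute("SELECT COUNT(*) FROM emails").fetchone()[0]
print(count)
```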
But a **better solution** may be a unique index on the email column | How to code INSERT Subquery correctly? | [
"",
"sql",
"sql-server",
""
] |
I have a table with a column containing strings that look like this:
```
static-text-here/1abcdefg1abcdefgpxq
```
In this string `1abcdefg` is repeated twice, so I want to remove the repeated part and return:
```
static-text-here/1abcdefgpxq
```
I can make no guarantees about the length of the repeat string. In pure SQL, how can this operation be performed? | If you can guarantee a minimum length of the repeated string, something like this would work:
```
select REGEXP_REPLACE
(input,
'(.{10,})(.*?)\1+',
'\1') "Less one repetition"
from tablename tn where ...;
```
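The same back-reference trick can be tried outside the database with Python's re module (this sketch uses a plain (.+?)\1 pattern rather than the length-guarded version above, purely for illustration):

```python
import re

# (.+?)\1 matches an immediately repeated substring; the replacement
# keeps a single copy of the repeated group.
s = "static-text-here/1abcdefg1abcdefgpxq"
result = re.sub(r"(.+?)\1", r"\1", s)
print(result)  # static-text-here/1abcdefgpxq
```

Without a minimum-length guard, a pattern like this can also collapse accidental short repeats, which is exactly why the answer above anchors the group with `{10,}`.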
I believe this can be expanded to meet your case with some cleverness. | ```
regexp_replace('static-text-here/1abcdefg1abcdefgpxq', '/(.*)\1', '/\1')
```
[fiddle](http://www.sqlfiddle.com/#!4/8e58e/2) | Oracle SQL -- remove partial duplicate from string | [
"",
"sql",
"regex",
"oracle",
""
] |
I'm really struggling with a case statement in MSSQL 2012. I've looked around for other answers, but although I've got some help, none seem to fix the problem.
```
case firstname
when len(ltrim(rtrim(firstname))) > 11 then 'blah'
else 'blahblah'
end as test
```
I am getting a syntax error, on the '>' character.
Originally, this was
```
case firstname
when ltrim(rtrim(firstname)) like '% %' then 'blah'
else 'blahblah'
end as test
```
but I thought there may have been some sensitivity on the like keyword, so I changed it to '>', but I get the same thing.
Probably a dumb question, but I've been banging my head for a couple of hours and some insight would be much appreciated. | Try this version:
```
(case when len(ltrim(rtrim(firstname))) > 11 then 'blah'
else 'blahblah'
end) as test
```
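A runnable sketch of the searched CASE form, using Python's sqlite3 module (names and data are invented; SQLite's LENGTH/TRIM stand in for SQL Server's LEN/LTRIM/RTRIM):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (firstname TEXT)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [("Bartholomew-Jones",), ("Ann",)])

# Searched CASE: each WHEN carries its own boolean condition.
rows = conn.execute("""
    SELECT firstname,
           CASE WHEN LENGTH(TRIM(firstname)) > 11 THEN 'blah'
                ELSE 'blahblah' END AS test
    FROM people
""").fetchall()
print(rows)
```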
The `case` statement has two versions. The one with the variable is used for constant expressions:
```
(case firstname
when 'Arnold' then . . .
when 'Betty' then . . .
end)
```
The second version (which is really the only one I use) takes a conditional for each part:
```
(case when firstname = 'Arnold' then . . .
``` | Change it to
```
case
when len(ltrim(rtrim(firstname))) > 11 then 'blah'
else 'blahblah'
end as test
``` | Case statement with aggregate and like clause - MSSQL 2012 | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have the below two tables ONE & TWO
```
ONE TWO
ID ID CODE
----- ---------
1 1 1
2 1 2
3 2 1
4 3 1
5 3 2
3 3
```
## Output required
```
ID CODE
----------
1 2
2 1
3 3
4 NULL
5 NULL
```
I used the below query but it is throwing the error "**An ON clause associated with a JOIN operator or in a MERGE statement is not valid.**"
```
SELECT A.ID
,B.CODE
FROM ONE A
LEFT JOIN
TWO B
ON A.ID = B.ID
AND B.CODE = (
SELECT
MAX(Z.CODE)
FROM TWO Z
WHERE Z.ID = A.ID
)
``` | ```
SELECT A.ID
,B.CODE
FROM ONE A
LEFT OUTER JOIN
(select id,max(code) CODE from two group by id) B
ON A.ID = B.ID
```
I believe this is what you are looking for.... | You can do this with a join and aggregation (if I understand the logic correctly):
```
select ONE.id, max(TWO.CODE)
from ONE left outer join
TWO
on ONE.id = TWO.id
group by ONE.id;
``` | SQL Using inner query in left join | [
"",
"mysql",
"sql",
"sql-server",
"oracle",
"db2",
""
] |
Got two tables: "saldos_one" and "saldos_two"; both tables have the following fields: CLIENTID, DATE, VALUE.
I have to get the TOTAL arithmetic mean computed from the arithmetic mean for each CLIENTID over some periods of time.
Let's take some examples:
```
> SELECT * FROM saldos_one;
+----------+------------+---------+
| CLIENTID | DATE | VALUE |
+----------+------------+---------+
| 1 | 2009-08-01 | 1000.00 |
| 1 | 2009-09-01 | 2000.00 |
| 1 | 2009-10-01 | 3000.00 |
| 2 | 2009-08-01 | 1000.00 |
| 2 | 2009-09-01 | 2000.00 |
| 2 | 2009-10-01 | 3000.00 |
| 3 | 2009-08-01 | 1000.00 |
| 3 | 2009-09-01 | 2000.00 |
| 3 | 2009-10-01 | 3000.00 |
| 4 | 2009-08-01 | 1000.00 |
| 4 | 2009-09-01 | 2000.00 |
| 4 | 2009-10-01 | 3000.00 |
+----------+------------+---------+
> SELECT * FROM saldos_two;
+----------+------------+---------+
| CLIENTID | DATE | VALUE |
+----------+------------+---------+
| 1 | 2009-08-01 | 10.00 |
| 1 | 2009-09-01 | 20.00 |
| 1 | 2009-10-01 | 30.00 |
| 2 | 2009-08-01 | 100.00 |
| 2 | 2009-09-01 | 200.00 |
| 2 | 2009-10-01 | 300.00 |
| 3 | 2009-08-01 | 1000.00 |
| 3 | 2009-09-01 | 2000.00 |
| 3 | 2009-10-01 | 3000.00 |
| 5 | 2009-08-01 | 1.00 |
| 5 | 2009-09-01 | 2.00 |
| 5 | 2009-10-01 | 3.00 |
+----------+------------+---------+
```
After querying the arithmetic means for each table:
```
> SELECT CLIENTID, TRUNCATE(SUM(VALUE)/COUNT(VALUE), 2)
FROM saldos_one
WHERE (DATE BETWEEN '2009-08-01' AND '2009-10-01')
GROUP BY CLIENTID;
+----------+---------+
| CLIENTID | VALUE |
+----------+---------+
| 1 | 2000.00 |
| 2 | 2000.00 |
| 3 | 2000.00 |
| 4 | 2000.00 |
+----------+---------+
> SELECT CLIENTID, TRUNCATE(SUM(VALUE)/COUNT(VALUE), 2)
FROM saldos_two
WHERE (DATE BETWEEN '2009-08-01' AND '2009-10-01')
GROUP BY CLIENTID;
+----------+---------+
| CLIENTID | VALUE |
+----------+---------+
| 1 | 20.00 |
| 2 | 200.00 |
| 3 | 2000.00 |
| 5 | 2.00 |
+----------+---------+
```
What I would like to get is the arithmetic means for each client from the arithmetic means of different tables, that will be:
```
+----------+---------+
| CLIENTID | VALUE |
+----------+---------+
| 1 | 1010.00 | = (2000.00 + 20.00) / 2
| 2 | 200.00 | = (200.00 + 200.00) / 2
| 3 | 2000.00 | = (2000.00 + 2000.00) / 2
| 4 | 1000.00 | = (2000.00 + 0) / 2
| 5 | 1.00 | = (2.00 + 0) / 2
+----------+---------+
```
SOLUTION: See @bvr's response | Try this
```
SELECT CLIENTID, SUM(VALUE)/2 VALUE FROM
(
SELECT CLIENTID, TRUNCATE(SUM(VALUE)/COUNT(VALUE), 2) VALUE
FROM saldos_one
WHERE (DATE BETWEEN '2009-08-01' AND '2009-10-01')
GROUP BY CLIENTID
UNION ALL
SELECT CLIENTID, TRUNCATE(SUM(VALUE)/COUNT(VALUE), 2) VALUE
FROM saldos_two
WHERE (DATE BETWEEN '2009-08-01' AND '2009-10-01')
GROUP BY CLIENTID
) t
GROUP BY CLIENTID
``` | You could use a `UNION` of the tables, and then perform your analysis on them.
E.g.
```
SELECT CLIENTID, AVG(VALUE) FROM (
SELECT CLIENTID, VALUE FROM saldos_one WHERE
(DATE BETWEEN '2009-08-01' AND '2009-10-01')
UNION
SELECT CLIENTID, VALUE FROM saldos_two WHERE
(DATE BETWEEN '2009-08-01' AND '2009-10-01')
) t
GROUP BY CLIENTID
``` | join results and perform arithmetic mean | [
"",
"mysql",
"sql",
"mean",
""
] |
I have a query and when I run it I get the error message **ORA-00907: missing parenthesis**. When I replace the **CASE** statement with either **x = g and** or **y = g and** it runs as expected.
```
SELECT *
FROM
table1,
table2,
table3,
table4,
table5,
table6,
table7,
table8
WHERE
a = b and
c = d and
e = d and
CASE strfldvar
WHEN 'BROKEN_ARROW' THEN (x = g)
WHEN 'BROKEN_BOX' THEN (y = g)
ELSE -1
end
and
f = h and
i = j
```
What am I doing wrong here? | `case` is an expression, not a predicate (i.e. a condition): it 'returns' a typed value and cannot contain a predicate as its result (in the `then` parts). In your case (assuming `else -1` means 'no match'):
```
AND g = CASE strfldvar
WHEN 'BROKEN_ARROW' THEN x
WHEN 'BROKEN_BOX' THEN y
ELSE NULL -- never match, even if g is null
END
```
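Here is a small runnable illustration of the value-returning CASE compared against g, using Python's sqlite3 module (the table and sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE shapes
                (kind TEXT, x INTEGER, y INTEGER, g INTEGER)""")
conn.executemany("INSERT INTO shapes VALUES (?,?,?,?)", [
    ("BROKEN_ARROW", 5, 0, 5),   # matches via x
    ("BROKEN_BOX",   0, 7, 7),   # matches via y
    ("BROKEN_BOX",   0, 7, 9),   # no match
])

# CASE returns a value; comparing it with g forms the predicate.
rows = conn.execute("""
    SELECT kind, g FROM shapes
    WHERE g = CASE kind
                WHEN 'BROKEN_ARROW' THEN x
                WHEN 'BROKEN_BOX'   THEN y
                ELSE NULL
              END
""").fetchall()
print(rows)  # [('BROKEN_ARROW', 5), ('BROKEN_BOX', 7)]
```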
although I think it would be simpler to replace it with:
```
AND (
(strfldvar = 'BROKEN_ARROW' AND x = g)
OR (strfldvar = 'BROKEN_BOX' AND y = g)
)
``` | Instead of
```
CASE strfldvar
WHEN 'BROKEN_ARROW' THEN (x = g)
WHEN 'BROKEN_BOX' THEN (y = g)
ELSE -1
```
have this:
```
x=CASE strfldvar WHEN 'BROKEN_ARROW' THEN g ELSE x END
y=CASE strfldvar WHEN 'BROKEN_ARROW' THEN g ELSE y END
``` | Where Clause Case Statement; ORA-00907: missing parenthesis | [
"",
"sql",
"oracle",
""
] |
I just created a table within a huge database and saved it. I've tried to refresh the database to see if my new table appears, and I closed & reopened Management Studio, but I don't see my new table. I was wondering if there is a way to search for my newly created table? I'm using SQL Server 2005.
Thanks a lot. | You can try this:-
```
SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[table_name_here]')
```
or try this:-
```
USE YourDBName
GO
SELECT *
FROM sys.Tables
where name like '%tablename%'
GO
``` | ```
Select * from sys.tables where name like '%tablename%'
``` | How do I search for a table in sql 2005? | [
"",
"sql",
"sql-server",
"sql-server-2005",
"search",
"qsqlquery",
""
] |
I want to store partial dates in a relational database (MySQL, PostgreSQL, etc.). For example, input may just be the year (2013); the year and month (2013-08); or the year, month, and day (2013-08-29). I cannot just use a normal DATE type, as the year will be expanded to 2013-01-01, and this is indistinguishable from having the year, month, and day.
I have thought of separating the date into three separate fields (year, month, and day as integers), but then I lose all the date niceties in the DBMS and have to manage more indices.
My other thought is to store it as a DATE and have another column saying how precise the date is. For example, '2013-08-01' and 'month' would mean the date is only precise up to the month (2013-08). '2013-08-01' and 'day' would mean the date is fully 2013-08-01.
What's the best way to do this? | I think there are two possible ways to go:
**(1)** Store **substrings** of date, like:
```
'2013' -- year 2013
'2013-01' -- year 2013, January
'2013-01-01' -- year 2013, January 1
```
**(2)** Store **3 different columns**, Year, Month, Day (and you can build an index on Year + Month + Day with no problem)
```
2013 null null -- year 2013
2013 1 null -- year 2013, January
2013 1 1 -- year 2013, January 1st
```
Which one is best depends on how you want to query the data. Suppose you have a stored procedure and you want to pass a parameter to get all rows falling under the condition.
In case **(1)**, you pass string `@Date = '2013-01'` as a parameter and you want to get all rows where year = 2013 and month = 01. So the `where` clause would be like:
```
where left(Date, len(@Date)) = @Date
```
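Option (1) can be sketched end-to-end with Python's sqlite3 module (SUBSTR stands in for LEFT, which SQLite lacks; the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (d TEXT)")
conn.executemany("INSERT INTO events VALUES (?)",
                 [("2013",), ("2013-01",), ("2013-01-01",), ("2012-12",)])

def matching(prefix):
    # Compare the leading substring of the stored date with the prefix.
    return conn.execute(
        "SELECT d FROM events WHERE SUBSTR(d, 1, ?) = ?",
        (len(prefix), prefix)).fetchall()

print(matching("2013-01"))  # the month and the day rows, not the bare year
```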
In case **(2)**, you pass three parameters - `@Year = 2013, @Month = 1, @Day = null` and the `where` clause would be something like:
```
where
Year = @Year and -- Supposing @Year is always not null
(@Month is null or @Month is not null and Month = @Month) and
(@Day is null or @Day is not null and Day = @Day)
```
It could be more complex depending on how you want to process rows. For example, if you give a parameter like `2013-01`, do you want to get rows where month = null or not?
On the other hand, if you want to pass a date and check whether it falls into a date range, then Gordon Linoff's suggestion is a good one to use. | Perhaps the best way is to treat these as spans of time and to have an `effdate` and `enddate`. Then you can represent any time span you want. A year would be like '2012-01-01' and '2012-12-31'. A single date would be like '2013-08-28' and '2013-08-28'.
This would also give you the flexibility to expand the representation to handle quarters or other groups of time. | Storing partial dates in a database | [
"",
"sql",
"database-design",
"schema",
""
] |
I want something like this, but I don't know the proper syntax.
```
select id from table1
where name = (insert into table1 (name) values ('value_abcd'))
``` | So you have a table with an autoincrementing primary key, and you want to know its value after you have inserted a record.
In SQL, you can use the [last\_insert\_rowid](http://www.sqlite.org/lang_corefunc.html#last_insert_rowid) function:
```
INSERT INTO table1(name) VALUES('value_abcd');
SELECT last_insert_rowid();
```
However, in most cases you do not need to execute a separate `SELECT` query because most libraries have some function to return that value.
For example, SQLite's C API has the [sqlite3\_last\_insert\_rowid](http://www.sqlite.org/c3ref/last_insert_rowid.html) function, and Android's [insert](http://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.html#insert%28java.lang.String,%20java.lang.String,%20android.content.ContentValues%29) function returns the ID directly. | When you say:
```
WHERE something = somethingElse
```
`somethingElse` is an expression (so is `something`, but anyway). You are trying to use an `INSERT` statement as your `somethingElse`, but an `INSERT` statement doesn't return anything, so it cannot be evaluated as an expression. Therefore, not only is whatever you're trying to do invalid syntax, it also doesn't make any sense in a pseudo-code kind of way.
Perhaps if you can try to explain more clearly what it is you're trying to do (and why), someone will be able to suggest something for you. | How can i use INSERT as subquery in SELECT statement in sqlite? | [
"",
"sql",
"sqlite",
"select",
"insert",
""
] |
I've been trying to write a SQL query to return all rows in a table without a matching value.
I have company, job, subjob, costcode, and costtype (among other fields). I need to return all rows which have a 'J' costtype, but no 'L' costtype.
This is probably better explained with data:
```
Company Job Subjob Costcode Costtype
------- -------- ------- --------- ----------
1 1234 0132 J
1 2345 01 9394 E
1 2345 02 9233 L
1 2345 02 9992 J
1 2345 02 9992 L
1 2345 03 1112 J
1 3384 3928 J
1 3384 03 3928 J
1 3384 11 2838 L
```
So I would expect the following:
```
Company Job Subjob Costcode Costtype
------- -------- ------- --------- ----------
1 1234 0132 J
1 2345 03 1112 J
1 3384 3928 J
1 3384 03 3928 J
```
I know it's something simple I'm missing, but cannot get the right combination of JOIN, ON, and WHERE clauses to make it work. | No need to use `JOIN`s:
```
SELECT *
FROM YourTable A
WHERE EXISTS(SELECT 1 FROM YourTable
WHERE Company = A.Company
AND Job = A.Job
AND Costtype = 'J')
AND NOT EXISTS(SELECT 1 FROM YourTable
WHERE Company = A.Company
AND Job = A.Job
AND Costtype = 'L')
``` | SELECT company, job, subjob, costcode
FROM Table
WHERE Company = A.Company
AND Job = A.Job
AND Costtype = 'J'
MINUS
SELECT company, job, subjob, costcode
FROM Table
WHERE Company = A.Company
AND Job = A.Job
AND Costtype = 'L'
I hope this satisfies, unless you want to display cost type in your result set. | SQL inner join without a match (same table) | [
"",
"sql",
"self-join",
""
] |
I'm trying to create a search function.
If the search input field is "foo bar", I split it into two keywords then do this query:
```
SELECT p.* FROM p_extra_fields as x INNER JOIN products as p ON x.product = p.id
WHERE x.type = "1"
AND
(
(x.key = "model" AND x.value LIKE "%foo%")
OR (x.key = "model" AND x.value LIKE "%bar%")
OR (x.key = "color" AND x.value LIKE "%foo%")
OR (x.key = "color" AND x.value LIKE "%bar%")
OR (x.key = "make" AND x.value LIKE "%foo%")
OR (x.key = "make" AND x.value LIKE "%bar%")
)
GROUP BY x.product LIMIT 0, 50
```
The number of keywords may be higher, so I might need more LIKEs. The number of "key" values can also increase :)
Is there any way I could simplify this query? Can I do something like `LIKE("%foo%", "%bar%")` ? | If you have [**SQLite FTS3 and FTS4 Extensions**](http://www.sqlite.org/fts3.html) enabled then you can take advantage of Full Text Search (FTS) capabilities. You will need to recreate the `p_extra_fields` table as a [`VIRTUAL`](http://www.sqlite.org/vtab.html) table. Then you can insert `OR` between your search terms and use the `MATCH` operator...
```
SELECT p.*
FROM p_extra_fields x
JOIN products p ON p.id = x.product
WHERE x.key IN ('model', 'color', 'make')
AND x.type = '1'
AND x.value MATCH 'foo OR bar'
GROUP BY x.product LIMIT 0, 50;
```
Good info [**here**](http://www.phparch.com/2011/11/full-text-search-with-sqlite/) also.
Click [**here**](http://sqlfiddle.com/#!7/8d4e9/1) to see it in action at SQL Fiddle. | I think this `where` clause is simpler:
```
WHERE x.type = "1" and
x.key in ('model', 'color', 'make') and
(x.value like '%foo%' or x.value like '%bar%')
``` | Multiple LIKE in sqlite | [
"",
"sql",
"sqlite",
"search",
"select",
"sql-like",
""
] |
If I define a column in SQL with a `DEFAULT` constraint like
```
[Publicbit] BIT DEFAULT ((0)),
```
Do I also need to set a `NULL`/`NOT NULL` constraint?
**Edit:** I am using a boolean but please expand your answer to other datatypes. | You never *need* the NULL constraint, but the default value won't guard your table against explicit NULLs. So you should use a NOT NULL constraint if you want to enforce this condition.
```
use tempdb
go
CREATE TABLE example
(
id BIT DEFAULT (0)
)
INSERT example (id) VALUES (null)
SELECT * FROM example
``` | You should specify NOT NULL to prevent a NULL from getting into this column. It sounds like the only valid values you want are 0 and 1. If someone (or you in the future) writes a record (or code) passing in a NULL for this column, then it will be allowed unless you specified NOT NULL.
By specifying a DEFAULT, you are only protecting against a NULL value if the SQL INSERT statement doesn't include that column. | NULL NOT NULL on Default constraint in sql | [
"",
"sql",
"sql-server",
""
] |
Join multiple tables and get the data in one result.
tbl\_2012-08 (structure)
```
id | data_log | data_name
1 | 0001 | first
2 | 0002 | second
```
tbl\_2012-09 (structure)
```
id | data_log | data_name
1 | 0003 | third
```
Output:
```
data_log
0001
0002
0003
```
How could I join these 2 tables so that I can extract the data at once?
Anything would help,
for example:
create another table or something | I don't know why you have separate tables for each month but you should be able to use a `UNION` query to return the data from both tables:
```
select id, data_log, data_name
from `tbl_2012-08`
union all
select id, data_log, data_name
from `tbl_2012-09`
```
I used a `UNION ALL` to return all rows from both tables which will include duplicates (if any). You cannot `JOIN` the tables unless you have some common value in both tables and if you have separate tables for each month then I would guess that you don't have a common value in both tables.
As a side note, I might include a column so you can easily identify which table the data is coming from:
```
select id, data_log, data_name, '2012-08' mth
from `tbl_2012-08`
union all
select id, data_log, data_name, '2012-09' mth
from `tbl_2012-09`
```
My suggestion would be to look at changing this data structure, having a separate table for each month will get very cumbersome to manage.
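As a cross-check, the UNION ALL approach can be reproduced against the sample data with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE "tbl_2012-08" (id INTEGER, data_log TEXT, data_name TEXT);
    CREATE TABLE "tbl_2012-09" (id INTEGER, data_log TEXT, data_name TEXT);
    INSERT INTO "tbl_2012-08" VALUES (1,'0001','first'), (2,'0002','second');
    INSERT INTO "tbl_2012-09" VALUES (1,'0003','third');
""")

# Stack rows from both month tables into one result set.
logs = [r[0] for r in conn.execute("""
    SELECT data_log FROM "tbl_2012-08"
    UNION ALL
    SELECT data_log FROM "tbl_2012-09"
""")]
print(logs)  # all three data_log values
```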
If you want to just return the `data_log`, then you just use:
```
select data_log
from `tbl_2012-08`
union all
select data_log
from `tbl_2012-09`
``` | You should use [UNION ALL](http://technet.microsoft.com/en-us/library/ms180026.aspx)
```
SELECT id, data_log, data_name
FROM `tbl_2012-08`
UNION ALL
SELECT id, data_log, data_name
FROM `tbl_2012-09`
```
In this case the UNION will join the result from one select statement with the other to make one result set. It's important to note that both select statements must return the same number of columns. | How to join and get data with multiple table but the same structure? | [
"",
"mysql",
"sql",
""
] |
So... I want a query that creates multiple rows at once. Say I want something like this:
```
Row 1: col1 = 'val1', col2 = 'val2', col3 = 'val3'
Row 2: col1 = 'val1', col2 = 'val2', col3 = 'val4'
Row 3: col1 = 'val1', col2 = 'val2', col3 = 'val5'
```
where
```
val3,val4,val5
```
are returned by a sub-query. I was thinking something like
```
insert into table_name (col1, col2, col3) values ('val1', val2, (select column_name from table_two where condition));
```
Any ideas how I can do this with one query? | Yes, it's possible: if your `val1` and `val2` are constant, then:
```
insert into table_name (col1, col2, col3) select 'val1', 'val2', column_name from table_two where condition;
``` | Try this:
```
INSERT INTO table_name
(col1, col2, col3)
SELECT
'val1', 'val2', column_name
FROM table_two
WHERE condition;
``` | inserting multiple rows with varying data with one query | [
"",
"sql",
""
] |
I have created the following script to set up my MySQL database:
```
CREATE DATABASE IF NOT EXISTS magicc_hat;
USE magicc_hat;
CREATE TABLE people (
personID INT NOT NULL AUTO_INCREMENT,
firstName VARCHAR(45) NOT NULL,
lastName VARCHAR(45),
archived BOOL NOT NULL DEFAULT '0',
PRIMARY KEY (personID)
);
CREATE TABLE categories (
categoryID INT NOT NULL AUTO_INCREMENT,
categoryName VARCHAR(45) NOT NULL,
description TEXT,
archived BOOL NOT NULL DEFAULT '0',
PRIMARY KEY (categoryID)
);
CREATE TABLE homes (
homeID INT NOT NULL AUTO_INCREMENT,
homeName VARCHAR(45) NOT NULL,
notes TEXT,
archived BOOL NOT NULL DEFAULT '0',
PRIMARY KEY (homeID)
};
CREATE TABLE items (
itemID INT NOT NULL AUTO_INCREMENT,
itemName VARCHAR(100) NOT NULL,
identifier VARCHAR(100),
quantity INT NOT NULL,
categoryID INT NOT NULL,
homeID INT NOT NULL,
itemStatus ENUM('normal', 'broken', 'missing') NOT NULL DEFAULT 'normal',
description TEXT,
image VARCHAR(45),
notes TEXT,
archived BOOL NOT NULL DEFAULT '0',
PRIMARY KEY (itemID),
FOREIGN KEY (categoryID) REFERENCES categories(categoryID),
FOREIGN KEY (homeID) REFERENCES homes(homeID)
};
CREATE TABLE projects (
projectID INT NOT NULL AUTO_INCREMENT,
projectName VARCHAR(45) NOT NULL,
description TEXT,
archived BOOL NOT NULL DEFAULT '0',
PRIMARY KEY (projectID)
);
CREATE TABLE checkouts (
checkoutID INT NOT NULL AUTO_INCREMENT,
itemID INT NOT NULL,
personID INT NOT NULL,
projectID INT,
quantity INT NOT NULL,
outDateTime DATETIME,
inDateTime DATETIME,
outNotes TEXT,
inNotes TEXT,
checkedIn BOOL,
archived BOOL NOT NULL DEFAULT '0',
PRIMARY KEY (checkoutID),
FOREIGN KEY (itemID) REFERENCES items(itemID),
FOREIGN KEY (personID) REFERENCES people(personID),
FOREIGN KEY (projectID) REFERENCES projects(projectID)
);
```
However, when I run this script I get the following error messages:
```
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '}' at line 8
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '}' at line 17
ERROR 1215 (HY000): Cannot add foreign key constraint
```
I suspect it has something to do with using the bool datatype since that is what is happening at lines 8 and 17; however I have not been able to find any solutions to this issue online. Is someone able to see what might be causing the problem? I'm running MySQL 5.6.11 on my local Windows machine.
Thanks! | The opening and closing characters need to match. Use parentheses for both:
```
CREATE TABLE homes (
};
```
In this case, the error message even points you at the incorrect character:
```
check the manual that corresponds to your MySQL server version for the right
syntax to use near '}' at line 8
``` | Get rid of the curly braces and replace them with `)`
Works great : <http://sqlfiddle.com/#!8/4ad37> | MySQL error on CREATE TABLE with BOOL column | [
"",
"mysql",
"sql",
""
] |
I've been working all day on a problem that I expected to be simple, but is proving incredibly elusive. I suspect I may not be asking the right questions, so please bear with me.
I have a table with a bunch of newspaper articles for a research project. The idea is that researchers can tag individual articles. These tags are stored in a second table. An article can have any number of tags.
I can then select articles with a certain tag by using:
```
SELECT *,
GROUP_CONCAT(`TAGS`.`TAG`) AS tags
FROM ARTICLES
LEFT JOIN TAGS
ON TAGS.ID = ARTICLES.ID
WHERE TAGS.TAG = 'search term'
GROUP BY ARTICLES.ID;
```
My problems start when I want to select articles based on the absence of a particular tag. If an article has only one tag, the result is as expected, but if there is more than one tag associated with an article, the tag is simply omitted.
```
SELECT *,
GROUP_CONCAT(`TAGS`.`TAG`) AS tags
FROM ARTICLES
LEFT JOIN TAGS
ON TAGS.ID = ARTICLES.ID
WHERE TAGS.TAG != 'search term'
OR TAGS.TAG IS NULL
GROUP BY ARTICLES.ID;
```
If the original tables were as follows:
```
ID Name
1 Article #1
2 Article #2
```
and:
```
ArticleID Tag
1 New
1 Long
1 Boring
2 Old
2 Long
2 Interesting
```
Then if I use the above query to select articles where tag != Boring, the results would be:
```
ArticleID Name Tags
1 Article #1 New, Long
2 Article #2 Old, Long, Interesting
```
How can I make it exclude the first article altogether, rather than just excluding that tag? Keeping in mind that there are over a hundred thousand articles in the database, what is the most efficient way to do this? I've looked at dozens of other questions and Google searches, but selecting for the absence of a tag like this is something I could not find advice on.
On a side note, I am currently using a one-to-many table, as each tag appears once for each article it is linked to. I noticed that a lot of people in similar scenarios use a many-to-many design. Is this that much faster than having just a foreign key in the tags table referencing the article table?
Thank you for helping out an SQL noob :). | Try this:
```
SELECT A.ID,
A.`Name`,
GROUP_CONCAT(`TAGS`.`TAG`) AS tags
FROM ARTICLES A
LEFT JOIN TAGS
ON TAGS.ID = A.ID
WHERE NOT EXISTS( SELECT 1 FROM TAGS
WHERE ID = A.ID
AND Tag = 'search term')
GROUP BY A.ID, A.`Name`;
``` | ```
SELECT A.ID,
A.`Name`,
GROUP_CONCAT(`TAGS`.`TAG`) AS tags
FROM ARTICLES A
LEFT JOIN TAGS
ON TAGS.ID = A.ID
LEFT JOIN (
SELECT ArticleID FROM TAGS LKP
WHERE Tag = 'search term'
) SRCH
ON SRCH.ArticleID = A.ID
WHERE SRCH.ArticleID IS NULL
``` | Selecting for the absence of value in a one-to-many table design | [
"",
"mysql",
"sql",
""
] |
I have a table that has data that looks something like this:
```
data_type, value
World of Warcraft, 500
Quake 3, 1500
Quake 3, 1400
World of Warcraft, 1200
Final Fantasy, 100
Final Fantasy, 500
```
What I want to do is select the maximum of each of these values in a single statement. I know I can easily do something like
```
select data_type, max(value)
from table
where data_type = [insert each data type here for separate queries]
group by data_type
```
But what I want it to display is
```
select data_type,
max(value) as 'World of Warcraft',
max(value) as 'Quake 3',
max(value) as 'Final Fantasy'
```
So I get the max value of each of these in a single statement. How would I go about doing this? | If you want to return the max value for each data\_type in a separate column, then you should be able to use an aggregate function with a CASE expression:
```
select
max(case when data_type='World of Warcraft' then value end) WorldofWarcraft,
max(case when data_type='Quake 3' then value end) Quake3,
max(case when data_type='Final Fantasy' then value end) FinalFantasy
from yourtable;
```
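As a quick local sanity check of the conditional aggregation, here is a sketch using Python's built-in sqlite3 as a stand-in engine (the data comes from the question; sqlite3 and the table name are assumptions for illustration only):

```python
import sqlite3

# In-memory table mirroring the question's sample data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourtable (data_type TEXT, value INT)")
con.executemany("INSERT INTO yourtable VALUES (?, ?)", [
    ("World of Warcraft", 500), ("Quake 3", 1500), ("Quake 3", 1400),
    ("World of Warcraft", 1200), ("Final Fantasy", 100), ("Final Fantasy", 500),
])

# MAX over a CASE expression: rows that don't match yield NULL, which MAX ignores.
row = con.execute("""
    SELECT
      MAX(CASE WHEN data_type = 'World of Warcraft' THEN value END),
      MAX(CASE WHEN data_type = 'Quake 3' THEN value END),
      MAX(CASE WHEN data_type = 'Final Fantasy' THEN value END)
    FROM yourtable""").fetchone()
print(row)  # (1200, 1500, 500)
```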
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/096e7/2) | Once again, for more than just a few "data types", I suggest using `crosstab()`:
```
SELECT * FROM crosstab(
$$SELECT DISTINCT ON (1, 2)
'max' AS "type", data_type, val
FROM tbl
ORDER BY 1, 2, val DESC$$
,$$VALUES ('Final Fantasy'), ('Quake 3'), ('World of Warcraft')$$)
AS x ("type" text, "Final Fantasy" int, "Quake 3" int, "World of Warcraft" int)
```
Returns:
```
type | Final Fantasy | Quake 3 | World of Warcraft
-----+---------------+---------+-------------------
max | 500 | 1500 | 1200
```
More explanation for the basics:
[PostgreSQL Crosstab Query](https://stackoverflow.com/questions/3002499/postgresql-crosstab-query/11751905#11751905)
## Dynamic solution
The tricky thing is to make this *completely dynamic*: to make it work for
* an **unknown number** of columns (data\_types in this case)
* with **unknown names** (data\_types again)
At least the *type* is well known: `integer` in this case.
In short: that's not possible with current PostgreSQL (including 9.3). There are [approximations with polymorphic types and ways to circumvent the restrictions with arrays or hstore types](https://stackoverflow.com/questions/15415446/pivot-on-multiple-columns-using-tablefunc). May be good enough for you. But it's *strictly not possible* to get the result with individual columns in a single SQL query. SQL is very rigid about types and wants to know what to expect back.
**However**, it can be done with *two* queries. The first one builds the actual query to use. Building on the above simple case:
```
SELECT $f$SELECT * FROM crosstab(
$$SELECT DISTINCT ON (1, 2)
'max' AS "type", data_type, val
FROM tbl
ORDER BY 1, 2, val DESC$$
,$$VALUES ($f$ || string_agg(quote_literal(data_type), '), (') || $f$)$$)
AS x ("type" text, $f$ || string_agg(quote_ident(data_type), ' int, ') || ' int)'
FROM (SELECT DISTINCT data_type FROM tbl) x
```
This generates the query you actually need. Run the second one inside the *same transaction* to avoid concurrency issues.
Note the strategic use of [`quote_literal()` and `quote_ident()`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER) to sanitize all kinds of illegal (for columns) names and prevent *SQL injection*.
Don't get confused by multiple layers of [dollar-quoting](http://www.postgresql.org/docs/current/interactive/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING). That's necessary for building dynamic queries. I put it as simple as possible. | Selecting multiple max() values using a single SQL statement | [
"",
"sql",
"postgresql",
"pivot",
"crosstab",
""
] |
I am trying to formulate a seemingly simple SQL query where I join two tables and order the results by the date difference from one of the columns and `NOW()`.
I am trying the query:
```
SELECT advertise_id,
qr_startdate,
qr_enddate,
DATEDIFF(day, NOW(), t1.qr_enddate) AS d
FROM `adv_qr` t1
INNER JOIN advertise_table t2
ON t1.advertise_id = t2.lid
ORDER BY t1.d ASC
```
Which seems like it should be right, but clearly something in the syntax is incorrect. I've been trying various combinations of things, but can't seem to get the `DATEDIFF` to return in a way that I can order the results with it. | ```
SELECT advertise_id,
qr_startdate,
qr_enddate,
DATEDIFF(NOW(), t1.qr_enddate) AS d
FROM `adv_qr` t1
INNER JOIN advertise_table t2
ON t1.advertise_id = t2.lid
ORDER BY d ASC
``` | Check out [this SQLFiddle](http://sqlfiddle.com/#!2/754de/8):
```
SELECT advertise_id,
qr_startdate,
qr_enddate,
DATEDIFF(NOW(), t1.qr_enddate) AS d
FROM `adv_qr` t1
INNER JOIN advertise_table t2
ON t1.advertise_id = t2.lid
ORDER BY d ASC
```
You can find the MySQL [`DATEDIFF` documentation](http://dev.mysql.com/doc/refman/5.1/de/date-and-time-functions.html) here. | SQL - Order by ascending date difference from NOW() | [
"",
"mysql",
"sql",
""
] |
I have a database structure in the below format,
subjects table
```
subject_id subject_name
1 HTML
2 Java
```
chapters table
```
chapter_id chapter_name subject_id
1 Doctype 1
2 Intro to Java 2
```
tutorials table
```
tutorial_id tutorial_name chapter_id subject_id
1 Intro to doctype 1 1
2 Details of doctype 1 1
3 Intro to JVM 2 2
```
Should subject\_id be in the tutorials table? | No, you can get it indirectly from the chapters table. It's redundant in the tutorials table. | No need to use it, as the chapter ID is available in the chapters table. | Database structure | [
"",
"sql",
"database-design",
""
] |
I need to select records from an Oracle table for the current calendar week, based on a date datatype field. Currently I am doing it like this:
```
select * from my_table where enter_date > sysdate -7
```
If today is Thursday, this query selects records from the past seven days, which reaches back into last week's Thursday. Is there a way to select records for the current calendar week dynamically? | `to_char(sysdate, 'd')` returns day of week in [1..7] range; so try using
```
select *
from my_table
where enter_date >= trunc(sysdate) - to_char(sysdate, 'd') + 1
``` | If your week starts on a Monday then:
```
select ...
from ...
where dates >= trunc(sysdate,'IW')
```
For alternative definitions of the first day of the week, trunc(sysdate,'W') truncates to the day of the week with which the current month began, and trunc(sysdate,'WW') truncates to the day of the week with which the current year began.
See other available truncations here: <http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions255.htm#i1002084> | select records weekly from Oracle table | [
"",
"sql",
"oracle11g",
""
] |
The XML column returns no rows.
Below, a table variable is created and an XML value is inserted into it. After executing the two queries below, no value is returned.
The XML data is valid.
```
DECLARE @Test TABLE (Id INT IDENTITY (1,1), XMLDATA XML)
INSERT INTO @test
SELECT '
<TXLife xmlns="http://ACORD.org/Standards/Life/2" xmlns:tx="http://ACORD.org/Standards/Life/2" Version="2.20.00">
<TXLifeRequest PrimaryObjectID="Holding_1">
<CorrelationGUID>4b30545a-158b-441a-a37a-0b259f757059</CorrelationGUID>
</TXLifeRequest>
</TXLife>'
SELECT
Id
, XMLDATA.query('//CorrelationGUID') AS 'TransType'
, XMLDATA
FROM @test
SELECT C.value('./CorrelationGUID[1]', 'varchar(50)') AS 'TransType'
FROM @test
CROSS APPLY XMLDATA.nodes('/TXLife/TXLifeRequest') n (C)
``` | Well, if I test your code with `CREATE TABLE`, [here](http://sqlfiddle.com/#!6/4d160/1) and [here](http://sqlfiddle.com/#!6/4d160/3), the unfiltered select works just fine. However, the second filtered select returns no results.
If I alter the query to specify the right namespace, like in [@Devart](https://stackoverflow.com/a/18461103/659190)'s answer,
```
WITH XMLNAMESPACES (DEFAULT 'http://ACORD.org/Standards/Life/2')
SELECT
C.value('./CorrelationGUID[1]', 'varchar(50)') AS 'TransType'
FROM
Test
CROSS APPLY
XMLDATA.nodes('/TXLife/TXLifeRequest') n (C)
```
You can see, [it works as expected](http://sqlfiddle.com/#!6/4d160/9). | Try this one -
```
DECLARE @XML XML
SELECT @XML = '
<TXLife xmlns="http://ACORD.org/Standards/Life/2" xmlns:tx="http://ACORD.org/Standards/Life/2" Version="2.20.00">
<TXLifeRequest PrimaryObjectID="Holding_1">
<CorrelationGUID>4b30545a-158b-441a-a37a-0b259f757059</CorrelationGUID>
</TXLifeRequest>
</TXLife>'
;WITH XMLNAMESPACES (DEFAULT 'http://ACORD.org/Standards/Life/2')
SELECT t.c.value('CorrelationGUID[1]', 'UNIQUEIDENTIFIER')
FROM @XML.nodes('//TXLifeRequest') t(c)
```
Output -
```
------------------------------------
4B30545A-158B-441A-A37A-0B259F757059
``` | Selecting XML Data from SQL query - No values returned | [
"",
"sql",
"sql-server",
""
] |
I'm working with Microsoft SQL Server.
```
IF EXISTS (
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'ServerSettings'
AND COLUMN_NAME = 'MapIsAlwayCalcLenByWebServices'
)
IF (
SELECT MapIsAlwayCalcLenByWebServices
FROM ServerSettings
) = 0
UPDATE ServerSettings
SET MapCalculateDistanceSource = 0
```
Does anybody know why this code throws the error "Invalid column name"? I thought the second select would execute only when the first if is true. | No, the entire *batch* is first compiled, and then execution starts. Since it can't compile the batch because you reference an invalid column, it never executes the `if` statement.
You would need to protect the code that references a possibly absent column in an `EXEC`, something like:
```
if exists(select * from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'ServerSettings' and
COLUMN_NAME = 'MapIsAlwayCalcLenByWebServices')
begin
exec('UPDATE ServerSettings SET MapCalculateDistanceSource = 0
WHERE MapIsAlwayCalcLenByWebServices = 0')
end
```
--- | Try this one -
```
IF EXISTS(
SELECT 1
FROM sys.columns c
WHERE c.name = 'MapIsAlwayCalcLenByWebServices'
AND c.[object_id] = OBJECT_ID('dbo.ServerSettings')
) BEGIN
EXEC sys.sp_executesql N'
UPDATE dbo.ServerSettings
SET MapCalculateDistanceSource = 0
WHERE MapIsAlwayCalcLenByWebServices = 0'
END
``` | Sql if statement after another if check | [
"",
"sql",
"sql-server",
""
] |
Here are my two tables:
Songs:
```
+--------+---------------------+
| SongID | SongDeviceID |
+--------+---------------------+
| 3278 | 1079287588763212246 |
| 3279 | 1079287588763212221 |
| 3280 | 1079287588763212230 |
+--------+---------------------+
```
Votes:
```
+--------+--------+
| SongID | UserID |
+--------+--------+
| 3278 | 71 |
| 3278 | 72 |
| 3279 | 71 |
+--------+--------+
```
I am trying to count the number of entries in the `votes` table for each unique `SongID`, and return the `SongDeviceID` of the entry with the most votes.
This is what I have so far:
```
SELECT Songs.SongDeviceID, COUNT(*) FROM Votes INNER JOIN Songs ON Votes.SongID = Songs.SongId GROUP BY Votes.SongID, Songs.SongDeviceID ORDER BY count(*) DESC
```
Which returns:
```
+---------------------+----------+
| SongDeviceID | Count(*) |
+---------------------+----------+
| 1079287588763212246 | 2 |
| 1079287588763212221 | 1 |
+---------------------+----------+
```
**UPDATE:**
The query has been updated to take care of getting the SongDeviceID but I still need help returning only the first row. | Try this
```
SELECT Songs.SongDeviceID, COUNT(*)
FROM Votes
INNER JOIN Songs ON Votes.SongID = Songs.SongID
GROUP BY Votes.SongID, Songs.SongDeviceID
ORDER BY COUNT(*) DESC
LIMIT 1
``` | For the first part:
```
SELECT SongID,COUNT(*) from votes GROUP BY SongID order by count(*) DESC LIMIT 1
```
and if you want the song device name as well:
```
SELECT SongID,COUNT(*),SongDeviceID
from
votes
left join
Songs on votes.songid = Songs.songid
GROUP BY
SongID
order by
count(*) DESC LIMIT 1
``` | Selecting only top result from two table query | [
"",
"mysql",
"sql",
"relational-database",
""
] |
I have combined 4 SQL statements using UNION ALL.
As the WHERE condition is the same in all of them, is it possible to combine it into one?
```
select 'Transfer In' as MovementType, * from vHRIS_StaffMovement_TransferIn
where cur_deptid in (1,2,3,4,5)
and cast(EffectiveDate as date) <='2013-08-02'
and cast(EffectiveDate as date) >= '2012-08-01'
and StaffType in (1,2,3,4,5)
union all
select 'Terminate' as MovementType, * from vHRIS_StaffMovement_Terminate
where cur_deptid in (1,2,3,4,5)
and cast(EffectiveDate as date) <='2013-08-02'
and cast(EffectiveDate as date) >= '2012-08-01'
and StaffType in (1,2,3,4,5)
union all
select 'New Hire' as MovementType, * from vHRIS_StaffMovement_NewHire
where cur_deptid in (1,2,3,4,5)
and cast(EffectiveDate as date) <='2013-08-02'
and cast(EffectiveDate as date) >= '2012-08-01'
and StaffType in (1,2,3,4,5)
union all
select 'Transfer Out' as MovementType, * from vHRIS_StaffMovement_TransferOut
where cur_deptid in (1,2,3,4,5)
and cast(EffectiveDate as date) <='2013-08-02'
and cast(EffectiveDate as date) >= '2012-08-01'
and StaffType in (1,2,3,4,5)
``` | You can do it this way:
```
select * from (
select 'Transfer In' as MovementType, * from vHRIS_StaffMovement_TransferIn
union all
select 'Terminate' as MovementType, * from vHRIS_StaffMovement_Terminate
union all
select 'New Hire' as MovementType, * from vHRIS_StaffMovement_NewHire
union all
select 'Transfer Out' as MovementType, * from vHRIS_StaffMovement_TransferOut ) as a
where cur_deptid in (1,2,3,4,5)
and cast(EffectiveDate as date) <='2013-08-02'
and cast(EffectiveDate as date) >= '2012-08-01'
and StaffType in (1,2,3,4,5)
``` | You can use a sub query:
```
SELECT X.*
FROM (SELECT 'Transfer In' AS MovementType, *
FROM vhris_staffmovement_transferin
UNION ALL
SELECT 'Terminate' AS MovementType, *
FROM vhris_staffmovement_terminate
UNION ALL
SELECT 'New Hire' AS MovementType, *
FROM vhris_staffmovement_newhire
UNION ALL
SELECT 'Transfer Out' AS MovementType, *
FROM vhris_staffmovement_transferout ) X
WHERE cur_deptid IN ( 1, 2, 3, 4, 5 )
AND Cast(effectivedate AS DATE) <= '2013-08-02'
AND Cast(effectivedate AS DATE) >= '2012-08-01'
AND stafftype IN ( 1, 2, 3, 4, 5 )
``` | SQL Union all statement | [
"",
"sql",
"union",
"where-clause",
"union-all",
""
] |
I have the following table
```
table x
labref name repeat_status
111 L 1
111 L 1
111 K 1
111 K 1
111 L 2
111 L 2
111 M 1
111 M 1
```
The result that I need is:
```
labref name repeat_status
111 L 1
111 L 2
111 K 1
111 M 1
```
I have tried this query, but it does not produce the result; it needs tweaking:
```
SELECT name, repeat_status
FROM `x`
WHERE labref = '111'
GROUP BY repeat_status;
```
suggestions! | ```
SELECT name, repeat_status
FROM `x`
WHERE labref = '111'
GROUP BY name, repeat_status;
``` | ```
select DISTINCT labref, name, repeat_status
FROM x
WHERE labref = '111'
```
This query would get the result you are looking for. | Group returned result by mysql | [
"",
"mysql",
"sql",
""
] |
I have a problem I'm working on with Oracle SQL that goes something like this.
TABLE
```
PurchaseID CustID Location
----1------------1-----------A
----2------------1-----------A
----3------------2-----------A
----4------------2-----------B
----5------------2-----------A
----6------------3-----------B
----7------------3-----------B
```
I'm interested in querying the Table to return all instances where the same customer makes a purchase in different locations. So, for the table above, I would want:
OUTPUT
```
PurchaseID CustID Location
----3------------2-----------A
----4------------2-----------B
----5------------2-----------A
```
Any ideas on how to accomplish this? I haven't been able to think of how to do it, and most of my ideas seem like they would be pretty clunky. The database I'm using has 1MM+ records, so I don't want it to run too slowly.
Any help would be appreciated. Thanks! | ```
SELECT *
FROM YourTable T
WHERE CustId IN (SELECT CustId
FROM YourTable
GROUP BY CustId
HAVING MIN(Location) <> MAX(Location))
``` | You should be able to use something similar to the following:
```
select purchaseid, custid, location
from yourtable
where custid in (select custid
from yourtable
group by custid
having count(distinct location) >1);
```
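To verify the `COUNT(DISTINCT ...)` idea against the sample data, here is a small sketch with Python's sqlite3 (the engine and the table name `purchases` are assumptions for illustration; the data is from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE purchases (PurchaseID INT, CustID INT, Location TEXT)")
con.executemany("INSERT INTO purchases VALUES (?, ?, ?)", [
    (1, 1, "A"), (2, 1, "A"), (3, 2, "A"), (4, 2, "B"),
    (5, 2, "A"), (6, 3, "B"), (7, 3, "B"),
])

# Customers with more than one distinct location, then all their purchases.
rows = con.execute("""
    SELECT PurchaseID, CustID, Location
    FROM purchases
    WHERE CustID IN (SELECT CustID
                     FROM purchases
                     GROUP BY CustID
                     HAVING COUNT(DISTINCT Location) > 1)
    ORDER BY PurchaseID""").fetchall()
print(rows)  # [(3, 2, 'A'), (4, 2, 'B'), (5, 2, 'A')]
```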
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/559e9/2).
The subquery in the WHERE clause returns all `custids` that have more than one distinct location. | Oracle SQL - Comparing Rows | [
"",
"sql",
"oracle",
""
] |
I'm looking for a SQL Server UDF that will be equivalent to `DATEPART(WEEK, @date)`, but will allow the caller to specify the first day of the week. Somewhat similar to MySql's `WEEK` function. E.g.:
```
CREATE FUNCTION Week (@date date, @firstdayofweek int)
RETURNS int
BEGIN
-- return result would be the same as:
-- SET DATEFIRST @firstdayofweek
-- DATEPART(WEEK, @date)
END
```
My application does not have the opportunity to call `SET DATEFIRST`.
Examples:
```
SELECT Week('2013-08-28', 2) -- returns 35
SELECT Week('2013-08-28', 3) -- returns 36
```
The above results would always be the same, regardless of SQL Server's value for `@@DATEFIRST`. | I've found a couple of articles that helped me derive an answer to this question:
1. [Deterministic scalar function to get week of year for a date](https://stackoverflow.com/questions/11432719/deterministic-scalar-function-to-get-week-of-year-for-a-date/11433441)
2. <http://sqlmag.com/t-sql/datetime-calculations-part-3>
It may be possible to simplify this UDF, but it gives me exactly what I was looking for:
```
CREATE FUNCTION Week (@date DATETIME, @dateFirst INT)
RETURNS INT
BEGIN
DECLARE @normalizedWeekOfYear INT = DATEDIFF(WEEK, DATEADD(YEAR, DATEDIFF(YEAR, 0, @date), 0), @date) + 1
DECLARE @jan1DayOfWeek INT = DATEPART(WEEKDAY, DATEADD(YEAR, DATEDIFF(YEAR, 0, @date), 0) + @@DATEFIRST - 7) - 1
DECLARE @dateDayOfWeek INT = DATEPART(WEEKDAY, DATEADD(DAY, @@DATEFIRST - 7, @date)) - 1
RETURN @normalizedWeekOfYear +
CASE
WHEN @jan1DayOfWeek < @dateFirst AND @dateDayOfWeek >= @dateFirst THEN 1
WHEN @jan1DayOfWeek >= @dateFirst AND @dateDayOfWeek < @dateFirst THEN -1
ELSE 0
END
END
GO
```
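For intuition, the semantics the UDF implements — week 1 contains January 1, and a new week begins at each occurrence of the chosen first weekday — can be sketched in plain Python (illustrative only, not part of the UDF; `isoweekday()` uses the same 1 = Monday … 7 = Sunday numbering as `SET DATEFIRST`):

```python
from datetime import date, timedelta

def week(d, first_day):
    """Week of year: week 1 contains Jan 1; a new week starts on each
    occurrence of first_day (1 = Monday ... 7 = Sunday)."""
    day = date(d.year, 1, 1) + timedelta(days=1)
    count = 1
    while day <= d:
        if day.isoweekday() == first_day:
            count += 1
        day += timedelta(days=1)
    return count

print(week(date(2013, 8, 28), 2), week(date(2013, 8, 28), 3))  # 35 36
```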
Then, executing the following statements would return 35 and 36 respectively:
```
SELECT dbo.Week('2013-08-28', 2)
SELECT dbo.Week('2013-08-28', 3)
``` | You could use something like this:
```
DATEPART(WEEK, DATEADD(DAY, 8 - @firstdayofweek, @date))
```
Instead of moving the first day of the week you are moving the actual date. Using this formula the first day of the week would be set using the same number values for days that MS SQL Server uses. (Sunday = 1, Saturday = 7) | SQL Server UDF for getting week of year, with first day of week argument | [
"",
"sql",
"sql-server",
"datepart",
""
] |
I'm trying to do an `UPDATE` on a double `INNER JOIN`, and getting the following error:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM pro_users AS u INNER JOIN cAlerts AS c ON c.user_id = u.user_id I' at line 3
```
Here's my mysql code:
```
UPDATE u
SET u.active_member = 0
FROM pro_users AS u
INNER JOIN cAlerts AS c
ON c.user_id = u.user_id
INNER JOIN srAlerts AS s
ON s.user_id = c.user_id
WHERE c.status=0
AND s.active=0
AND u.active_member = 1
```
Can you spot my error? | This would be the ISO/ANSI SQL way of doing the update:
```
update pro_users u set active_member = 0
where u.active_member = 1
and exists ( select *
from cAlerts c
where c.user_id = u.user_id
and c.status = 0
)
and exists ( select *
from srAlerts sr
where sr.user_id = u.user_id
and sr.active = 0
)
```
As far as I know, a `from` clause, with or without joins, in an `update` statement is a Microsoft/Sybase Sql Server aberration.
Edited to note: A little rummaging in the mySql manual at <http://dev.mysql.com/doc/refman/5.0/en/update.html> and some SQLFiddling says this will work:
```
update pro_users u
join cAlerts c on c.user_id = u.user_id and c.status = 0
join srAlerts sr on sr.user_id = u.user_id and sr.active = 0
set active_member = 0
where u.active_member = 1
``` | Can you try this :
```
UPDATE
pro_users AS u
SET
u.active_member = 0
WHERE
u.active_member = 1
AND
(
SELECT c.user_id
FROM cAlerts AS c
WHERE c.user_id = u.user_id AND c.status = 0
LIMIT 1
) IS NOT NULL
AND
(
SELECT s.user_id
FROM srAlerts AS s
WHERE s.user_id = u.user_id AND s.active = 0
LIMIT 1
) IS NOT NULL
``` | UPDATE and double INNER JOIN kicking up mysql error | [
"",
"mysql",
"sql",
""
] |
I have two tables: Table1 looks like this:
```
id type
1 bike
2 car
3 cycle
4 bike
```
And Table2 looks like this:
```
id type
1 bike
2 car
```
I want my final output to look like the following:
```
type count_table1 count_table2
bike 2 1
car 1 1
cycle 1 0
```
What is the most efficient way to do this in SQL? | Simple solution, no need for complicated table joins & functions:
```
SELECT type, MAX(count_table1) as count_table1, MAX(count_table2) as count_table2 FROM (
(
SELECT type, COUNT(*) AS count_table1, 0 AS count_table2
FROM Table1
GROUP BY type
) UNION (
SELECT type, 0 AS count_table1, COUNT(*) AS count_table2
FROM Table2
GROUP BY type)
) AS tmp
GROUP BY type
```
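The UNION-plus-MAX trick is easy to verify locally; here is a sketch with Python's sqlite3 as a neutral test bench (an assumption for illustration — table names shortened to `t1`/`t2`; data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INT, type TEXT)")
con.execute("CREATE TABLE t2 (id INT, type TEXT)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(1, "bike"), (2, "car"), (3, "cycle"), (4, "bike")])
con.executemany("INSERT INTO t2 VALUES (?, ?)", [(1, "bike"), (2, "car")])

# Each branch counts one table and pads the other count with 0;
# MAX then collapses the two rows per type into one.
rows = con.execute("""
    SELECT type, MAX(c1) AS count_table1, MAX(c2) AS count_table2 FROM (
        SELECT type, COUNT(*) AS c1, 0 AS c2 FROM t1 GROUP BY type
        UNION
        SELECT type, 0 AS c1, COUNT(*) AS c2 FROM t2 GROUP BY type
    ) AS tmp
    GROUP BY type
    ORDER BY type""").fetchall()
print(rows)  # [('bike', 2, 1), ('car', 1, 1), ('cycle', 1, 0)]
```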
[SQL Fiddle](http://sqlfiddle.com/#!2/ad44e/4/0) | You can try this:
```
SELECT t1.TYPE,
ifnull(t1.COUNT1,0) CountTable1,
ifnull(t2.COUNT2,0) CountTable2
FROM (SELECT TYPE,
COUNT(*) count1
FROM TABLE1
GROUP BY TYPE)T1
LEFT JOIN (SELECT TYPE,
COUNT(*) count2
FROM TABLE2
GROUP BY TYPE)T2
ON t1.TYPE = t2.TYPE
UNION
SELECT t1.TYPE,
t1.COUNT1,
t2.COUNT2
FROM (SELECT TYPE,
COUNT(*) count1
FROM TABLE1
GROUP BY TYPE)T1
RIGHT JOIN (SELECT TYPE,
COUNT(*) count2
FROM TABLE2
GROUP BY TYPE)T2
ON t1.TYPE = t2.TYPE
```
See my working example on [SQL Fiddle](http://sqlfiddle.com/#!2/fc9ea/14). | Group by column and display count from all tables in mysql | [
"",
"mysql",
"sql",
""
] |
I have a key string like
```
Empl:9998 Earn Code:7704 Seq:1
```
I need to take the employee number 9998 out of the string.
The employee number will always start at position 6 and end before the second `E`.
I have been playing around with all the string functions but with no success. I use MS SQL. | The following statement will do this:
```
select substring(empno, 6,
charindex('E', empno, 6) - 6)
from (select 'Empl:9998 Earn Code:7704 Seq:1' as empno) t;
```
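The same slicing can be reproduced outside SQL Server; in the sketch below, SQLite's `substr`/`instr` play the roles of `SUBSTRING`/`CHARINDEX` (run through Python's sqlite3 purely as a test bench; note `instr` has no start-position argument, so the search runs over the tail starting at position 6):

```python
import sqlite3

con = sqlite3.connect(":memory:")
row = con.execute("""
    SELECT substr(empno, 6, instr(substr(empno, 6), 'E') - 1)
    FROM (SELECT 'Empl:9998 Earn Code:7704 Seq:1' AS empno)""").fetchone()
print(repr(row[0]))  # '9998 ' -- note the trailing space
```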
You might really want `-7` if you don't want the space in the "number". | the [substring](http://technet.microsoft.com/en-us/library/ms187748.aspx) function will get you started, but you'll also need [charindex](http://technet.microsoft.com/en-us/library/ms186323.aspx). (I recommend searching for the index of the space character) | Take substring out of string | [
"",
"sql",
"sql-server",
""
] |
Consider the below table
```
CREATE TABLE `temp` (
`id` int(11) NOT NULL,
`lang` char(2) COLLATE utf8_unicode_ci NOT NULL,
`channel` char(2) COLLATE utf8_unicode_ci NOT NULL,
`name` varchar(20) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`id`,`lang`,`channel`)
)
insert into `temp` (`id`, `lang`, `channel`, `name`) values('1','fr','ds','Jacket');
insert into `temp` (`id`, `lang`, `channel`, `name`) values('1','en','ds','Jacket');
insert into `temp` (`id`, `lang`, `channel`, `name`) values('2','en','ds','Jeans');
insert into `temp` (`id`, `lang`, `channel`, `name`) values('3','en','ds','Sweater');
insert into `temp` (`id`, `lang`, `channel`, `name`) values('1','de','ds','Jacket');
```
The question is: how can I find which entries with lang `en` do not exist for `fr`?
My head is stuck and I believe this to be a trivial query but I am having one of these days. | There are several ways of achieving this. One way is to use a sub-select with `not exists` clause:
```
select id, channel
from temp t1
where t1.lang = 'en'
and not exists (
select 1
from temp t2
where t2.lang = 'fr'
and t1.id = t2.id
)
```
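Here is a quick check of the `not exists` version against the sample data, using Python's sqlite3 as a stand-in for MySQL (an illustrative assumption only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE temp (id INT, lang TEXT, channel TEXT, name TEXT)")
con.executemany("INSERT INTO temp VALUES (?, ?, ?, ?)", [
    (1, "fr", "ds", "Jacket"), (1, "en", "ds", "Jacket"),
    (2, "en", "ds", "Jeans"), (3, "en", "ds", "Sweater"),
    (1, "de", "ds", "Jacket"),
])

# 'en' rows whose id has no corresponding 'fr' row.
rows = con.execute("""
    SELECT id, channel
    FROM temp t1
    WHERE t1.lang = 'en'
      AND NOT EXISTS (SELECT 1 FROM temp t2
                      WHERE t2.lang = 'fr' AND t1.id = t2.id)""").fetchall()
print(sorted(rows))  # [(2, 'ds'), (3, 'ds')]
```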
Alternatively, you can use an outer join:
```
select t1.id, t1.channel
from temp t1
left outer join temp t2 on t1.id = t2.id
where t1.lang = 'en'
and t2.id is null
``` | based on @AleksG
```
SELECT t1.id, t1.channel, t1.name
FROM temp t1
LEFT JOIN temp t2 ON t1.id = t2.id AND t2.lang = 'fr'
WHERE t1.lang = 'en' AND t2.lang IS NULL
``` | SQL join table to self to find difference | [
"",
"mysql",
"sql",
"join",
""
] |
I have the following example code
```
DECLARE
myRow table%rowtype
myVar table2.column%type
BEGIN
SELECT table.col1, table.col3, table.col4, table2.column
INTO myRow
FROM table
JOIN table2
On table.col6 = table2.col1;
END;
```
How can I refactor so that it is a valid statement? Can I somehow store the joined column onto myRow or myVar? | Your PL/SQL is valid and acceptable provided:
1. Table TABLE contains exactly 4 columns, corresponding to the 4 values you are selecting.
2. The query will return exactly 1 row.
If table TABLE does not contain exactly 4 columns then you need to select into something else, perhaps just 4 variables:
```
DECLARE
v_col1 table.col1%type;
v_col3 table.col3%type;
v_col4 table.col4%type;
v_column table2.column%type;
BEGIN
SELECT table.col1, table.col3, table.col4, table2.column
INTO v_col1, v_col3, v_col4, v_column
FROM table
JOIN table2
On table.col6 = table2.col1;
END;
```
If your query returns more than 1 row you will get a TOO\_MANY\_ROWS exception; and if it returns no rows you will get a NO\_DATA\_FOUND exception. | You can use a cursor for this. That way you don't have to worry about the TOO\_MANY\_ROWS or NO\_DATA\_FOUND exceptions.
You also gain flexibility: each time you add a column to your query, it is automatically reflected in your `%ROWTYPE` variable.
You have two options with the cursor: using just the first row returned, or using all the rows.
Option #1
```
DECLARE
CURSOR C_DATA IS
SELECT
table.col1, -- Column 2 intentionally left out
table.col3,
table.col4,
table2.column --Column from joined table
FROM table
JOIN table2
On table.col6 = table2.col1;
myRow C_DATA%rowtype;
BEGIN
OPEN C_DATA;
FETCH c_data INTO myRow;
CLOSE C_DATA;
-- Use your columns anywhere inside this scope, e.g. myRow.col4.
END;
```
Option #2
```
DECLARE
CURSOR C_DATA IS
SELECT
table.col1, -- Column 2 intentionally left out
table.col3,
table.col4,
table2.column --Column from joined table
FROM table
JOIN table2
On table.col6 = table2.col1;
BEGIN
FOR myRow IN C_DATA LOOP
-- Use your columns inside here, e.g. myRow.col4.
END LOOP;
END;
``` | How to use SELECT... INTO with a JOIN? | [
"",
"sql",
"oracle",
"join",
"plsql",
"select-into",
""
] |
I am trying to understand how the single quotes work in SQL.
All I want to achieve is
```
INSERT INTO LOGTABLE
(ID,
ROLLNO)
VALUES ('E8645F55-A18C-43EA-9D68-1F9068F8A9FB',
28)
```
Here `ID` is a uniqueidentifier field and `rollNo` is an int.
So I have this sample test code:
```
set @query = '
insert into fileLog
(
id,
rollNo
)
values
('+
'''' + NEWID() + '''' + ',' + 28 +
')'
print @query
```
I have tried several combinations of single quotes on the left and right but nothing works. I would really appreciate it if someone could solve this. In particular, I wanted to know how many single quotes are required on both sides of a string to get something like 'SQL'.
Thanks | *(I'm going to assume you need dynamic SQL for reasons not obvious in the question, since this doesn't seem to require dynamic SQL at all.)*
As @Gidil suggested, the problem here is trying to treat a uniqueidentifier as a string. In this case, there really isn't any reason to declare `NEWID()` in the outer scope, since you can simply say:
```
SET @query = 'INSERT ... VALUES(NEWID(), 28);';
PRINT @query;
```
*Now, [you should be using `NVARCHAR(MAX)` as your parameter, because ultimately you should be executing this using `sp_executesql`, not `EXEC()`](https://sqlblog.org/2011/09/17/bad-habits-to-kick-using-exec-instead-of-sp_executesql).*
If you need to have a literal you can double up the quotes:
```
DECLARE @string VARCHAR(32);
SET @string = 'foo';
SET @query = N'INSERT ... VALUES(''' + @string + ''', 28);';
```
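The doubling rule itself is engine-independent and easy to verify; for example with Python's sqlite3, used here only as a convenient test bench:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Inside a SQL string literal, each embedded single quote is written twice:
row = con.execute("SELECT 'like ''SQL'''").fetchone()
print(row[0])  # like 'SQL'
```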
However I find it more readable to use `CHAR(39)`:
```
SET @query = N'INSERT ... VALUES(' + CHAR(39) + @string + CHAR(39) + ', 28);';
```
And even better is to not append these variables to a string anyway. You should be using properly typed parameters where possible.
```
DECLARE @query NVARCHAR(MAX);
DECLARE @string VARCHAR(32), @newid UNIQUEIDENTIFIER, @id INT;
SELECT @string = 'foo', @newid = NEWID(), @id = 28;
SET @query = N'INSERT ... VALUES(@string, @newid, @id);';
EXEC sp_executesql @query,
N'@string VARCHAR(32), @newid UNIQUEIDENTIFIER, @id INT',
@string, @newid, @id;
```
It's bulkier, sure, but it's much safer from SQL injection and it lets you stop trying to figure out and deal with the hassle of embedding single quotes into the string... | My question is: Why are you using dynamic SQL? It's one of those techniques that is useful in some situations, but can be abused easily.
As for the answer to your question, I use a technique to help minimize the flipping in and out of SQL construction:
```
DECLARE @query VARCHAR(MAX)
SET @query = '
insert into fileLog
(
id,
rollNo
)
values
(''|NEWID|'', |INT|)'
SET @query = REPLACE(@query, '|NEWID|', NEWID())
SET @query = REPLACE(@query, '|INT|', 28)
PRINT @query
``` | Building insert statement with quotes in SQL | [
"",
"sql",
"sql-server",
""
] |
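The doubled-quotes-versus-parameters trade-off discussed in the answers above can be sketched outside T-SQL. A minimal, runnable illustration using Python's `sqlite3` as a stand-in (the `fileLog` table mirrors the question, but the `O'Brien` value is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fileLog (id TEXT, rollNo INTEGER)")

value = "O'Brien"

# Embedding a literal means doubling every single quote yourself...
escaped = value.replace("'", "''")
conn.execute("INSERT INTO fileLog VALUES ('%s', 28)" % escaped)

# ...whereas a parameterized statement handles quoting (and injection) for you.
conn.execute("INSERT INTO fileLog VALUES (?, ?)", (value, 28))

stored = [row[0] for row in conn.execute("SELECT id FROM fileLog")]
```

Both inserts store the same `O'Brien` value; only the parameterized form stays safe when the input is untrusted.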
I am trying to select rows that have columns with incomplete data. In this case, incomplete columns have a value less than 1. I'm trying to find rows that have at least one incomplete column, but I'm not sure how to combine this into a SQL statement. Here's what I've been trying unsuccessfully:
```
SELECT
id
FROM
eval
WHERE
(month = :month) and (uid=:uid)
OR (rotation < 1)
OR (mr < 1)
OR (nc < 1)
OR (clinic_days < 1)
OR (clinic_pts < 1)
OR (mksap < 1)
```
This statement is returning multiple rows of data so the semantics must be incorrect. I get the same thing without the parentheses. | In SQL `AND` takes precedence over `OR` in the same way as multiplication takes precedence over addition, so you need to use parentheses to force the precedence that you want:
```
SELECT id
FROM eval
WHERE (month = :month)
AND
(uid=:uid)
AND (
(rotation < 1)
OR (mr < 1)
OR (nc < 1)
OR (clinic_days < 1)
OR (clinic_pts < 1)
OR (mksap < 1)
)
```
Your original statement will take the long chain of `OR`s and return everything matched by any of its individual conditions, along with the rows that have the desired `:uid` and `:month`. | You need to fix the logic. I think you want an `and` before the `or`, along with parentheses:
```
SELECT id
FROM eval
WHERE (month = :month) and (uid=:uid) AND
( (rotation < 1) OR (mr < 1) OR (nc < 1) OR (clinic_days < 1) OR
(clinic_pts < 1) OR (mksap < 1) )
``` | SQL Statement with OR and AND < 1 | [
"",
"mysql",
"sql",
""
] |
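The precedence pitfall in the Q&A above is easy to reproduce. A small sketch using SQLite through Python (the table and values are cut down from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eval (id INTEGER, month INTEGER, uid INTEGER, rotation INTEGER)")
conn.executemany("INSERT INTO eval VALUES (?, ?, ?, ?)",
                 [(1, 5, 10, 0),   # matching month/uid, incomplete
                  (2, 5, 10, 2),   # matching month/uid, complete
                  (3, 9, 99, 0)])  # non-matching month/uid, incomplete

# AND binds tighter than OR, so row 3 leaks in through the bare OR branch.
loose = [r[0] for r in conn.execute(
    "SELECT id FROM eval WHERE month = 5 AND uid = 10 OR rotation < 1")]

# Parenthesising the OR block keeps the month/uid filter applied to everything.
strict = [r[0] for r in conn.execute(
    "SELECT id FROM eval WHERE month = 5 AND uid = 10 AND (rotation < 1)")]
```

The unparenthesised form returns all three rows; the parenthesised form returns only the intended incomplete row.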
I have a table like this
```
CREATE TABLE [dbo].[tbl_LandRigs](
[ID] [int] IDENTITY(700000,1) NOT NULL,
[Company] [nvarchar](500) NULL,
[Rig] [nvarchar](500) NULL,
[RigType] [nvarchar](200) NULL,
[DrawWorks] [nvarchar](500) NULL,
[TopDrive] [nvarchar](200) NULL,
[RotaryTable] [nvarchar](500) NULL,
[MudPump] [nvarchar](500) NULL,
[MaxDD] [nvarchar](50) NULL,
[Operator] [nvarchar](500) NULL,
[Country] [nvarchar](200) NULL,
[Location] [nvarchar](500) NULL,
[OPStatus] [nvarchar](200) NULL,
[CreatedDate] [datetime] NULL,
[CreatedByID] [int] NULL,
[CreatedByName] [nvarchar](50) NULL,
CONSTRAINT [PK_tbl_LandRigs] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```
And I am trying to get data from the MaxDD column in descending order:
```
SELECT distinct "MaxDD" FROM [tbl_LandRigs] ORDER BY "MaxDD" Desc
```
But this returns the data in the following order:

According to my calculation, 4000 should be the first value, followed by the others, but this result astonished me. Can anyone help me out with this? | You are storing them as text (`nvarchar`), that's why you get [lexicographical order](http://en.wikipedia.org/wiki/Lexicographical_order). That means the values are compared character by character from left to right. Hence `4000` is "higher" than `30000` (the last zero doesn't matter since the first 4 is already higher than the 3).
So the correct way is to store it as a numeric value. However, that seems to be impossible since you also use values like `16.000 with 4.1/2"DP`. Then I would add another column: one for the numeric value you want to order by and the other for the textual representation. | As `MaxDD` is a `varchar`, not a number, it is sorted in lexicographical order (i.e. ordered by the first character, then the second, ...), not numerically. You should convert it to a numerical value | SQL Order By not working properly | [
"",
"sql",
"sql-server-2008",
"sql-order-by",
""
] |
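The lexicographic-versus-numeric ordering described above can be shown in a couple of lines of plain Python (the values are modeled on the question's screenshot):

```python
# '4000' outranks '30000' character by character; numeric ordering needs a cast.
values = ["30000", "4000", "16000"]

as_text = sorted(values, reverse=True)              # lexicographic, like NVARCHAR
as_numbers = sorted(values, key=int, reverse=True)  # numeric, like a CAST
```

The text sort puts `4000` first because `'4' > '3'`, exactly the surprise reported in the question.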
Imagine the following MySQL table of orders:
```
id | name
1 | Mike
2 | Steve
3 | Janet
4 | Juliet
5 | Mike
6 | Jane
```
This is my current query:
```
SELECT * FROM table ORDER BY id DESC
```
However, I'd like to "group" those by `name`, so that I have orders from the same person listed after one another, however, I cannot do ORDER BY `name`.
This is my desired output:
```
id | name
6 | Jane
5 | Mike
1 | Mike
4 | Juliet
3 | Janet
2 | Steve
```
What's the query for this output? | E.g.:
```
SELECT y.id
, y.name
FROM my_table x
JOIN my_table y
ON y.name = x.name
GROUP
BY name
, id
ORDER
BY MAX(x.id) DESC
, id DESC;
``` | You need a special calculation to get each row's position.
```
SELECT a.*
FROM tableName a
INNER JOIN
(
SELECT Name,
@ord := @ord + 1 ord
FROM
(
SELECT MAX(ID) ID, NAME
FROM TableName
GROUP BY Name
) a, (SELECT @ord := 0) b
ORDER BY ID DESC
) b ON a.Name = b.Name
ORDER BY b.ord, a.ID DESC
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!2/d6eb9/7) | Order by two columns | [
"",
"mysql",
"sql",
""
] |
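A runnable check of the accepted self-join approach above, using SQLite through Python (the `top_id` alias is added here for clarity; it is not in the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "Mike"), (2, "Steve"), (3, "Janet"),
                  (4, "Juliet"), (5, "Mike"), (6, "Jane")])

# The self-join lets each (name, id) group sort by the highest id for its name,
# so all of a person's orders stay together, newest group first.
rows = conn.execute("""
    SELECT y.id, y.name, MAX(x.id) AS top_id
    FROM orders x
    JOIN orders y ON y.name = x.name
    GROUP BY y.name, y.id
    ORDER BY top_id DESC, y.id DESC
""").fetchall()

pairs = [(row_id, name) for row_id, name, _ in rows]
```

This reproduces the desired output from the question: Jane, then both Mike rows, then Juliet, Janet, Steve.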
SQL Server - I have 3 simple tables (Fname, Lname and Exceptions) with one column each called Name. I want my end result to look like: (Everybody in Fname + Everybody in LName) - (Everybody in Exceptions).
FName:
```
Name
A
B
```
LName:
```
Name
Y
Z
```
Exceptions:
```
Name
A
Z
```
Expected Query Result Set:
```
B
Y
```
Current SQL Query:
```
Select Name from Fname
UNION ALL
Select Name from Lname
WHERE Name NOT IN
(Select Name from Exceptions)
```
The SQL query only works on removing data which appears in LName but not in Fname. Can somebody please help. | The parts of a `UNION` are handled as separate queries, so you can group them in a subquery:
```
SELECT Name
FROM (Select Name from Fname
UNION ALL
Select Name from Lname)sub
WHERE Name NOT IN (Select Name from Exceptions)
```
You can keep that as `UNION ALL` if you don't care about duplicates. | You need a subquery, i prefer `NOT EXISTS`:
```
SELECT X.name
FROM (SELECT name
FROM fname
UNION ALL
SELECT name
FROM lname) X
WHERE NOT EXISTS (SELECT 1
FROM exceptions E
WHERE x.name = E.name)
```
[**Demo**](http://sqlfiddle.com/#!6/dfebd/7/0)
[Should I use NOT IN, OUTER APPLY, LEFT OUTER JOIN, EXCEPT, or NOT EXISTS?](http://www.sqlperformance.com/2012/12/t-sql-queries/left-anti-semi-join)
However, `NOT IN` works the same way:
```
SELECT X.name
FROM (SELECT name
FROM fname
UNION ALL
SELECT name
FROM lname) X
WHERE X.name NOT IN (SELECT name
FROM exceptions)
``` | UNION ALL and NOT IN together | [
"",
"sql",
"sql-server",
"sql-server-2008",
"union",
""
] |
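The subquery-wrapped UNION from the accepted answer above can be verified with SQLite through Python (same tables and row values as the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fname (name TEXT);
    INSERT INTO fname VALUES ('A'), ('B');
    CREATE TABLE lname (name TEXT);
    INSERT INTO lname VALUES ('Y'), ('Z');
    CREATE TABLE exceptions (name TEXT);
    INSERT INTO exceptions VALUES ('A'), ('Z');
""")

# Wrapping the UNION in a subquery lets a single WHERE filter both halves.
result = [r[0] for r in conn.execute("""
    SELECT name
    FROM (SELECT name FROM fname UNION ALL SELECT name FROM lname) sub
    WHERE name NOT IN (SELECT name FROM exceptions)
""")]
```

Only `B` and `Y` survive, matching the expected result set in the question.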
The result of the two queries should be identical. Same data. Same formula. Same cast. One result is calculated in a query against a table variable, while the second is calculated against variables. I have replaced the table variable with temp table and permanent table with identical results.
Why are my results different?
```
DECLARE
@comm DECIMAL(20 , 6)
, @quantity INT
, @multiplier INT
, @price DECIMAL(38 , 10)
SET @comm = 210519.749988;
SET @quantity = 360000;
SET @multiplier = 1;
SET @price = 167.0791666666;
DECLARE @t AS TABLE
(
[comm] [decimal](38 , 6)
, [multiplier] [int]
, [Quantity] [int]
, [Price] [decimal](38 , 10)
)
INSERT INTO @t
VALUES
( @comm , @quantity , @multiplier , @price )
SELECT
@comm = comm
, @quantity = quantity
, @multiplier = multiplier
, @price = price
FROM
@t
SELECT
CAST(comm / quantity / multiplier / price AS DECIMAL(32 , 10))
FROM
@t
UNION ALL
SELECT
CAST(@comm / @quantity / @multiplier / @price AS DECIMAL(32 , 10));
```
Result
```
1. 0.0034990000
2. 0.0035000000
```
Same results against different servers. SQL Server 2008 R2 Web Edition, Standard and Express and SQL Server 2012 Standard. | The difference is due to the difference in precision of your two `DECIMAL` fields:
Changing @comm to `(38,6)`:
```
DECLARE
@comm DECIMAL(38 , 6)
, @quantity INT
, @multiplier INT
, @price DECIMAL(38 , 10)
```
I get:
```
---------------------------------------
0.0034990000
0.0034990000
```
Likewise changing `comm` in `@t` to `[comm] [decimal](20 , 6)` gets me:
```
---------------------------------------
0.0035000000
0.0035000000
```
If the fields are consistent, the results will be consistent. | `@comm` is defined as decimal(20,6) while the `comm` column is decimal(38,6). You also assign a value with 7 decimal points to @comm, which only accepts up to 6 decimals
According to the [docs](http://msdn.microsoft.com/en-us/library/ms187746.aspx), decimals with a precision between 20-28 take 13 bytes while larger decimals use 17 bytes. When you `SELECT` the larger value stored in `comm` back into the smaller `@comm` variable, some rounding is bound to happen. | SQL Formula Returns Inconsistent Precision | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
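The effect of intermediate precision discussed above can be mimicked with Python's `decimal` module. This is only an analogy for the general principle, not a reproduction of SQL Server's exact precision-inference rules; rounding the intermediate to 6 places is an assumption made for illustration:

```python
from decimal import Decimal

comm = Decimal("210519.749988")
price = Decimal("167.0791666666")

# Full-precision chain of divisions, rounded only at the end.
exact = (comm / 360000 / price).quantize(Decimal("0.0000000001"))

# Rounding the intermediate result to 6 places (as a narrower decimal
# type would) nudges the final rounded answer off by one step.
mid = (comm / 360000).quantize(Decimal("0.000001"))
coarse = (mid / price).quantize(Decimal("0.0000000001"))
```

The two final values differ in the last retained digits even though the inputs and formula are identical, which is the same symptom the question reports.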
How to insert string with super script characters in PostgreSQL?
I want to insert "TM" as power of string "RACH"? I tried the following query:
```
update contact SET name=E'RACH$^'TM where id='10782'
``` | Use the dedicated characters '®' or '™' as trademark signs.
```
UPDATE contact SET name = 'RACH™' -- '™' not 'ᵀᴹ'
WHERE id = '10782';
```
Some other characters (but not all!) have superscript variants in Unicode, but many fonts don't support these exotic code points and don't even include glyphs to represent them. Except for the common '¹ ² ³', I would rather use formatting to achieve a superscript effect.
Here on SO, you could use: demo superscript ABC
That's the output of `<sup>demo superscript ABC</sup>`
More info on the [Wikipedia page on superscript characters](https://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts).
**If** you need a mapping function use **[`translate()`](https://www.postgresql.org/docs/current/functions-string.html)**. `replace()` in a loop would be *very* inefficient.
```
translate('TM', 'ABDEGHIJKLMNOPRTU', 'ᴬᴮᴰᴱᴳᴴᴵᴶᴷᴸᴹᴺᴼᴾᴿᵀᵁ');
``` | I don't know if it possible to convert symbols to supersripts without creating mapping function for that, but you can just write it manually:
```
update contact SET name='RACHᵀᴹ' where id='10782'
```
[**`sql fiddle demo`**](http://sqlfiddle.com/#!1/a2222/1)
Mapping function could be something like this:
```
create or replace function superscript(data text)
returns text
as
$$
declare
ss text[];
lt text[];
begin
ss := '{ᴬ,ᴮ,ᴰ,ᴱ,ᴳ,ᴴ,ᴵ,ᴶ,ᴷ,ᴸ,ᴹ,ᴺ,ᴼ,ᴾ,ᴿ,ᵀ,ᵁ}';
lt := '{A,B,D,E,G,H,I,J,K,L,M,N,O,P,R,T,U}';
for i in 1..array_length(ss, 1)
loop
data := replace(data, lt[i], ss[i]);
end loop;
return data;
end;
$$
language plpgsql;
```
[**`sql fiddle demo`**](http://sqlfiddle.com/#!1/47cbe/1) | Special superscript characters | [
"",
"sql",
"postgresql",
"postgresql-9.1",
"superscript",
""
] |
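The `translate()` mapping suggested in the accepted answer above has a direct Python counterpart, `str.translate`, using the same two character lists:

```python
# str.translate performs the same one-pass character mapping as SQL translate().
superscripts = str.maketrans("ABDEGHIJKLMNOPRTU", "ᴬᴮᴰᴱᴳᴴᴵᴶᴷᴸᴹᴺᴼᴾᴿᵀᵁ")

marked = "RACH" + "TM".translate(superscripts)
```

As the answer notes, whether these superscript code points render depends on font support; the dedicated '™' glyph is the safer choice.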
I need help building a SQL query that returns a flattened result of the top 2 items in an order.
The tables and relevant fields are as follows:
```
Order OrderItem
------- -----------
orderId orderId
productCode
quantity
```
I'm looking for the desired result set:
```
[orderId] [productCode1] [quantity1] [productCode2] [quantity2]
---------- -------------- ----------- -------------- -----------
o123 p134 3 p947 1
o456 p384 2 p576 1
```
The results would be grouped by `orderId` from `Order`, with the TOP 2 `productCode` from `quantity` from `OrderItem`. I don't care which TOP 2 get returned, just need any two.
Any help would be greatly appreciated. | ```
select
o.orderId,
max(case when row_num = 1 then oi.ProductCode end) as ProductCode1,
max(case when row_num = 1 then oi.Quantity end) as Quantity1,
max(case when row_num = 2 then oi.ProductCode end) as ProductCode2,
max(case when row_num = 2 then oi.Quantity end) as Quantity2
from Order as o
outer apply (
select top 2
oi.*, row_number() over (order by oi.productCode) as row_num
from OrderItem as oi
where oi.orderId = o.orderId
) as oi
group by o.orderId
``` | You can do this with conditional aggregation and the window function `row_number()`:
```
select orderId,
max(case when seqnum = 1 then ProductCode end) as ProductCode1,
max(case when seqnum = 1 then Quantity end) as Quantity1,
max(case when seqnum = 2 then ProductCode end) as ProductCode2,
max(case when seqnum = 2 then Quantity end) as Quantity2
from (select oi.*,
row_number() over (partition by orderId order by quantity desc) as seqnum
from OrderItem oi
) oi
group by orderId;
``` | SQL Query Flattens Order and Top Order Items | [
"",
"sql",
"sql-server",
""
] |
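Outside SQL, the "any two items per order, flattened to one row" shape from the Q&A above can be sketched in a few lines of Python (data values from the question; this assumes every order has at least two items, as in the sample):

```python
from collections import defaultdict

order_items = [("o123", "p134", 3), ("o123", "p947", 1),
               ("o456", "p384", 2), ("o456", "p576", 1)]

# Bucket items per order, then take any two and flatten them into one row.
by_order = defaultdict(list)
for order_id, code, qty in order_items:
    by_order[order_id].append((code, qty))

flat = []
for order_id in sorted(by_order):
    (c1, q1), (c2, q2) = by_order[order_id][:2]
    flat.append((order_id, c1, q1, c2, q2))
```

Each tuple carries `(orderId, productCode1, quantity1, productCode2, quantity2)`, mirroring the desired result set.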
I need to write a query that returns all the items in a group in one record, separated by commas, from two tables. Example result below.
Items table:
```
--------------------
Name | Group_ID
--------------------
item1 | 1
item2 | 1
item3 | 3
```
Group table:
```
--------------------
ID | Name
--------------------
1 | Group1
3 | Group3
```
Result I'm looking for:
```
------------------------------
GId | Items
------------------------------
1 | item1, item2
3 | item3
``` | Use [GROUP_CONCAT](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat):
```
SELECT group_concat(Name) FROM table
``` | You need to use GROUP_CONCAT and a GROUP BY.
It would be something like this:
```
SELECT gr.id, GROUP_CONCAT(item.name SEPARATOR ',')
FROM `group` gr LEFT JOIN item
ON(gr.id=item.group_id)
GROUP BY gr.id
```
This query will display groups that don't have items associated. If you don't need those groups, then the best option is @peterms | sql query to list all the items in a group in one record | [
"",
"mysql",
"sql",
""
] |
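SQLite also ships `group_concat`, so the grouped-list idea above can be run directly from Python (the group table is named `grp` here because `GROUP` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grp (id INTEGER, name TEXT);
    INSERT INTO grp VALUES (1, 'Group1'), (3, 'Group3');
    CREATE TABLE item (name TEXT, group_id INTEGER);
    INSERT INTO item VALUES ('item1', 1), ('item2', 1), ('item3', 3);
""")

# group_concat folds each group's item names into one comma-separated field.
rows = conn.execute("""
    SELECT g.id, group_concat(i.name, ',')
    FROM grp g LEFT JOIN item i ON g.id = i.group_id
    GROUP BY g.id
    ORDER BY g.id
""").fetchall()
```

Note that `group_concat` does not guarantee the order of elements inside each list unless you force one.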
```
SELECT school WHERE school LIKE '%suny at albany%';
```
This will result in Suny at Albany.
If I were to type in:
```
SELECT school WHERE school LIKE '%suny albany%';
```
Returns nothing
Is there a way to separate the whitespace characters so that there are LIKE statements between them?
so that when I enter 'suny albany', it will return 'SUNY at Albany'. Like below :
```
SELECT school WHERE school LIKE '%suny%' AND school LIKE '%albany%';
``` | Use a `%` wildcard in the middle as well.
```
SELECT school WHERE school LIKE '%suny%albany%';
```
You could also leave a space around the `%` like
```
SELECT school WHERE school LIKE 'suny % albany';
```
to prevent matching super strings like `sunyys palbanys` or `sunyalbany`. | Use another percentage sign instead of the space like so:
```
SELECT school WHERE school LIKE '%suny%albany%';
``` | Is there a way to separate a MySql statement by space with a LIKE statement? | [
"",
"mysql",
"sql",
""
] |
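Both LIKE patterns from the accepted answer above behave as described; here is a quick check with SQLite through Python (the extra rows are invented to show what each pattern does and does not match):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schools (school TEXT)")
conn.executemany("INSERT INTO schools VALUES (?)",
                 [("SUNY at Albany",), ("sunyalbany",), ("Albany High",)])

# A bare % between the words matches any run of characters, even none at all.
loose = [r[0] for r in conn.execute(
    "SELECT school FROM schools WHERE school LIKE '%suny%albany%'")]

# Spacing the % requires real word boundaries around the gap.
strict = [r[0] for r in conn.execute(
    "SELECT school FROM schools WHERE school LIKE 'suny % albany'")]
```

The loose pattern also matches the fused `sunyalbany`; the spaced pattern accepts only the genuine multi-word name.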
I have created a query to generate some data in a SQL database, but generating 1 GB of data takes about 45 minutes. How can I improve the performance of the data generation?
```
DECLARE @RowCount INT
DECLARE @RowString VARCHAR(10)
DECLARE @Random INT
DECLARE @Upper INT
DECLARE @Lower INT
DECLARE @InsertDate DATETIME
SET @Lower = -730
SET @Upper = -1
SET @RowCount = 0
WHILE @RowCount < 3000000
BEGIN
SET @RowString = CAST(@RowCount AS VARCHAR(10))
SELECT @Random = ROUND(((@Upper - @Lower -1) * RAND() + @Lower), 0)
SET @InsertDate = DATEADD(dd, @Random, GETDATE())
INSERT INTO Table_1
(q
,w
,e
,r
,t
,y)
VALUES
(REPLICATE('0', 10 - DATALENGTH(@RowString)) + @RowString
, @InsertDate
,DATEADD(dd, 1, @InsertDate)
,DATEADD(dd, 2, @InsertDate)
,DATEADD(dd, 3, @InsertDate)
,DATEADD(dd, 4, @InsertDate))
SET @RowCount = @RowCount + 1
END
``` | You may try following also:
```
;with seq as (
select top (3000000) N = row_number() over (order by @@spid) - 1 from sys.all_columns c1, sys.all_columns c2
)
INSERT INTO Table_1 (q, w, e, r, t, y)
select
right('0000000000' + cast(N as varchar(10)), 10)
,p.InsertDate
,DATEADD(dd, 1, p.InsertDate)
,DATEADD(dd, 2, p.InsertDate)
,DATEADD(dd, 3, p.InsertDate)
,DATEADD(dd, 4, p.InsertDate)
from seq
cross apply (select DATEADD(dd, ROUND(((@Upper - @Lower -1) * RAND(checksum(newid())) + @Lower), 0), GETDATE())) p(InsertDate)
``` | The problem is you are generating and inserting the data one row at a time. SQL Server is not designed to work that way. You need to find a set-based solution. This worked for me in under 30 seconds:
```
CREATE TABLE #Table_1 (
Id INT IDENTITY(1,1)
, RowString AS REPLICATE('0', 10 - LEN(CAST(Id AS VARCHAR))) + CAST(Id AS VARCHAR)
, Date1 DATETIME
);
DECLARE @Upper INT = -1;
DECLARE @Lower INT = -730;
INSERT #Table_1 (Date1)
SELECT TOP 3000000 DATEADD(dd, ROUND(((@Upper - @Lower -1) * RAND(checksum(newid())) + @Lower), 0), GETDATE())
FROM ( SELECT number
FROM master..spt_values
WHERE TYPE = 'P' AND number <= 2000
) a (Number)
,( SELECT number
FROM master..spt_values
WHERE TYPE = 'P' AND number <= 2000
) b (Number);
```
Once you have the above data in the #Table\_1 temp table, it is a simple matter to insert it into Table\_1:
```
INSERT Table_1 (q,w,e,r,t,y)
SELECT RowString, Date1, Date1 + 1, Date1 + 2, Date1 + 3, Date1 + 4
FROM #Table_1;
``` | SQL query to fast data generation | [
"",
"sql",
"sql-server",
"performance",
"data-generation",
""
] |
I'm converting old code to use LINQ. The old code looked like this:
```
// Get Courses
sqlQuery = @"SELECT Comment.Comment, Status.StatusId, Comment.DiscussionBoardId, DiscussionBoard.CourseId, Comment.CommentID
FROM Status INNER JOIN Comment ON Status.StatusId = Comment.StatusId INNER JOIN
DiscussionBoard ON Comment.DiscussionBoardId = DiscussionBoard.DiscussionBoardId
WHERE (DiscussionBoard.CourseID = 'CourseID')";
var comments = new List<Comment>(dataContext.ExecuteQuery<Comment>(sqlQuery));
```
I've converted the above SQL to LINQ:
```
var db = new CMSDataContext();
var query = from c in db.Comments
join s in db.Status on c.StatusId equals s.StatusId
join d in db.DiscussionBoards on c.DiscussionBoardId equals d.DiscussionBoardId
where d.CourseId == "CourseID"
select new
{
d.ItemType,
c.Comment1,
s.Status1,
c.DiscussionBoardId,
d.CourseId,
c.CommentID
};
```
The problem I'm having, though, is with trying to get the results of the query into the List. Can someone offer me some pointers?
Thanks! | Enclose the whole query in parentheses and add `.ToList()` at the end.
Or add another line:
```
var list = query.ToList();
``` | Try adding the `ToList()` method at the end of the query. | Getting LINQ query result into a list? | [
"",
"asp.net",
"sql",
"linq",
""
] |
I have two tables. One of them is named `files` and contains a list of all files. The second table, called `payments`, contains a list of payments for some of the files.
Payments:
```
id | fileid | {...}
1 2
2 3
3 2
```
Files:
```
id | {...}
1
2
3
```
I want to select all files and join the `payments` table, ordering by the number of matching rows in that table.
In this case, the first row will be file #2, because it appears the most in the `payments` table.
I tried to do it, but **not all of the rows are shown!**
I think it happens because not all of the files are in the `payments` table, so in this case it won't display file #1.
Thanks, and sorry for my English
P.S: I use mysql engine
\*\* UPDATE \*\*
My Code:
```
SELECT `id`,`name`,`size`,`downloads`,`upload_date`,`server_ip`,COUNT(`uploadid`) AS numProfits
FROM `uploads`
JOIN `profits`
ON `uploads`.`id` = `profits`.`uploadid`
WHERE `uploads`.`userid` = 1
AND `removed` = 0
ORDER BY numProfits
``` | As others have noted, you need to use LEFT JOIN. This tells MySQL that entries from the table on the left should be included even if no corresponding entries exist in the table on the right.
Also, you should use GROUP BY to indicate how the COUNT should be grouped.
So the SQL should be something like:
```
SELECT Files.ID, count(Payments.FileID) as numpays FROM
Files
LEFT OUTER JOIN
Payments
ON Files.id=Payments.FileID
GROUP BY files.ID
ORDER BY numpays desc
```
[SQL Fiddle](http://www.sqlfiddle.com/#!2/571b32/4/0) | Try `LEFT JOIN` - in MySQL, the default `JOIN` is actually an `INNER JOIN`. In an `INNER JOIN`, you will only get results back that are in both sides of the join.
See: [Difference in MySQL JOIN vs LEFT JOIN](https://stackoverflow.com/questions/9770366/difference-in-mysql-join-vs-left-join)
And, as noted in the comments, you may need a `GROUP BY` with your `COUNT` as well, to prevent it from just counting all the rows that come back. | SQL Select - Some Rows Won't Display | [
"",
"mysql",
"sql",
"join",
""
] |
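The LEFT JOIN plus COUNT fix above can be demonstrated with SQLite through Python, using the sample `files`/`payments` data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE files (id INTEGER);
    INSERT INTO files VALUES (1), (2), (3);
    CREATE TABLE payments (id INTEGER, fileid INTEGER);
    INSERT INTO payments VALUES (1, 2), (2, 3), (3, 2);
""")

# LEFT JOIN keeps file 1 (which has no payments) with a count of zero;
# COUNT(p.fileid) ignores the NULLs produced for unmatched files.
rows = conn.execute("""
    SELECT f.id, COUNT(p.fileid) AS numpays
    FROM files f LEFT JOIN payments p ON f.id = p.fileid
    GROUP BY f.id
    ORDER BY numpays DESC, f.id
""").fetchall()
```

An INNER JOIN on the same data would silently drop file 1, which is exactly the missing-rows symptom the asker saw.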
I'm using PostgreSQL 9.1. I have the column name of a table. Is it possible to find the table(s) that has/have this column? If so, how? | you can query [system catalogs](http://www.postgresql.org/docs/current/static/catalogs-overview.html):
```
select c.relname
from pg_class as c
inner join pg_attribute as a on a.attrelid = c.oid
where a.attname = <column name> and c.relkind = 'r'
```
[**`sql fiddle demo`**](http://sqlfiddle.com/#!1/f5c03/5) | You can also do
```
select table_name from information_schema.columns where column_name = 'your_column_name'
``` | How to find a table having a specific column in postgresql | [
"",
"sql",
"database",
"postgresql",
"postgresql-9.1",
""
] |
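The same catalog-scan idea works in other engines too. A sketch with SQLite through Python (SQLite has no `information_schema`, so `sqlite_master` plus `PRAGMA table_info` plays that role; the tables here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, email TEXT);
    CREATE TABLE logs (id INTEGER, message TEXT);
""")

# List every table, then probe each one's column list for the target name.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
with_email = [t for t in tables
              if any(col[1] == "email"
                     for col in conn.execute(f"PRAGMA table_info({t})"))]
```

Only `users` carries an `email` column, so only it is reported, mirroring what the `information_schema.columns` query does in PostgreSQL.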
I have a datetime column that I need to display in the following format:
```
YYYY/MM/DD 00:00:00:000
```
With `CONVERT`, I can't find any suitable styles.
The `/` in the format is really important as it needs to be compared with a `VARCHAR` column that has text `YYYY/MM/DD 00:00` as part of the description like below:
If I can find a way/style then I can use a SUBSTRING function to compare it to the value below:
```
Diary item added for : 2013/08/20 14:12
```
I have had a look at:
<http://databases.aspfaq.com/database/what-are-the-valid-styles-for-converting-datetime-to-string.html>
But, I can't find any suitable styles that have /'s, only -'s. | You can use the style that most closely resembles what you want to compare on and use `REPLACE` to replace the `-` with `/`.
```
SELECT REPLACE(CONVERT(<yourstyle>),'-','/');
```
---
**Edit** *Kudos to @bluefeet*
```
SELECT REPLACE(CONVERT(varchar(23), yourdate, 121),'-','/');
```
From [Technet](http://technet.microsoft.com/en-us/library/ms187928.aspx): Cast and Convert
> 21 or 121 (2) ODBC canonical (with milliseconds) yyyy-mm-dd
> hh:mi:ss.mmm(24h) | Well, you can convert both dates to any format before you compare the two. Just use any of these; run this and you can select from them. Make sure to convert both dates before comparing:
```
Select
convert(varchar, GetDate(), 100) as '100 Conversion',
convert(varchar, GetDate(), 101) as '101 Conversion',
convert(varchar, GetDate(), 102) as '102 Conversion',
convert(varchar, GetDate(), 103) as '103 Conversion',
convert(varchar, GetDate(), 104) as '104 Conversion',
convert(varchar, GetDate(), 105) as '105 Conversion',
convert(varchar, GetDate(), 106) as '106 Conversion',
convert(varchar, GetDate(), 107) as '107 Conversion',
convert(varchar, GetDate(), 108) as '108 Conversion',
convert(varchar, GetDate(), 109) as '109 Conversion',
convert(varchar, GetDate(), 110) as '110 Conversion',
convert(varchar, GetDate(), 111) as '111 Conversion',
convert(varchar, GetDate(), 112) as '112 Conversion',
convert(varchar, GetDate(), 113) as '113 Conversion',
convert(varchar, GetDate(), 114) as '114 Conversion',
convert(varchar, GetDate(), 120) as '120 Conversion',
convert(varchar, GetDate(), 121) as '121 Conversion',
convert(varchar, GetDate(), 126) as '126 Conversion',
convert(varchar, GetDate(), 130) as '130 Conversion',
convert(varchar, GetDate(), 131) as '131 Conversion'
```
Or Use this
```
select REPLACE(convert(varchar, GetDate(), 121),'-','/')
``` | Convert Datetime format to varchar format style workaround | [
"",
"sql",
"sql-server",
"t-sql",
"datetime",
""
] |
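For comparison, the target format from the question above maps directly onto a `strftime` pattern in Python:

```python
from datetime import datetime

dt = datetime(2013, 8, 20, 14, 12)

# Same shape as CONVERT(varchar, ..., 121) followed by REPLACE('-', '/').
formatted = dt.strftime("%Y/%m/%d %H:%M")
```

This yields the `YYYY/MM/DD hh:mm` text that the diary-item descriptions embed.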
DBMS : MS SQL 2005
Consider the following table as an example
```
[CurrencyID] ---- [Rate] ---- [ExchangeDate]
USD --------------- 1 ------ 08/27/2012 11:52 AM
USD -------------- 1.1 ----- 08/27/2012 11:58 AM
USD -------------- 1.2 ----- 08/28/2012 01:30 PM
USD --------------- 1 ------ 08/28/2012 01:35 PM
```
How can I get the rate of the **latest [ExchangeDate] per day** for each currency?
The output would be :
```
[CurrencyID] ---- [Rate] ---- [ExchangeDate]
USD ----------- 1.1 ------- 08/27/2012
USD ------------ 1 -------- 08/28/2012
``` | For SQL 2008, the following does the trick:
```
SELECT CurrencyID, cast(ExchangeDate As Date) as ExchangeDate , (
SELECT TOP 1 Rate
FROM Table T2
WHERE cast(T2.ExchangeDate As Date) = cast(T1.ExchangeDate As Date)
AND T2.CurrencyID = T1.CurrencyID
ORDER BY ExchangeDate DESC) As LatestRate
FROM Table T1
GROUP BY CurrencyID, cast(T1.ExchangeDate As Date)
```
For anything below 2008, take a look [here](https://stackoverflow.com/questions/923295/how-can-i-truncate-a-datetime-in-sql-server). | You didn't specify which DBMS, following is Standard SQL:
```
select CurrencyID, Rate, ExchangeDate
from
(
select CurrencyID, Rate, ExchangeDate,
row_number()
over (partition by CurrencyID, cast(ExchangeDate as date)
order by ExchangeDate desc) as rn
from tab
) as dt
where rn = 1;
``` | SQL select value by Maximum time per day | [
"",
"sql",
""
] |
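The "latest row per (currency, day)" partitioning above can also be expressed procedurally; a small Python sketch using the sample data from the question:

```python
from datetime import datetime

rates = [("USD", 1.0, datetime(2012, 8, 27, 11, 52)),
         ("USD", 1.1, datetime(2012, 8, 27, 11, 58)),
         ("USD", 1.2, datetime(2012, 8, 28, 13, 30)),
         ("USD", 1.0, datetime(2012, 8, 28, 13, 35))]

# Keep, per (currency, calendar day), the rate with the greatest timestamp:
# the same grouping that row_number() OVER (PARTITION BY ...) performs.
latest = {}
for currency, rate, ts in rates:
    key = (currency, ts.date())
    if key not in latest or ts > latest[key][1]:
        latest[key] = (rate, ts)

per_day = {key: rate for key, (rate, ts) in latest.items()}
```

The 11:58 and 13:35 rows win their respective days, matching the expected output.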
I am trying to visualize tables and their relations using pgAdmin. I have understood that there is a [query visualizer tool](http://link.sheidaei.com/byqyv) available for pgAdmin. However, that only is useful if you are dealing with queries. My main goal is to generate a graphical representation of all the tables available in database. | I have found [this webpage](https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools) on postgresql wiki, with various tools on utilizing a postgresql database. I have used DbWrench on Mac to generate the ERD. | In pgAdmin 4 right click on the database and then "Generate ERD (Beta)"
[](https://i.stack.imgur.com/WbVVv.png) | How to visualize database tables in postgresql using pgAdmin? | [
"",
"sql",
"database",
"postgresql",
"pgadmin",
""
] |
I have a table like the following:
```
ID Type ParentID
001 C 007
002 C 005
003 B 007
004 C 008
005 R NULL
006 C 004
007 B 004
008 R 009
009 X NULL
```
The type hierarchy is X>R>C=B. I need to find all the records' R parent for B and C.
The challenge is some of the B or C records' parents are B or C and X needs to be excluded.
Results would be:
```
ID Type ParentID MasterParentID
001 C 007 008
002 C 005 005
003 B 007 008
004 C 008 008
005 R NULL NULL
006 C 004 008
007 B 004 008
008 R 009 NULL
009 X NULL NULL
```
Any suggestions? Much appreciated. | you need recursive CTE:
```
with cte1 as (
select
T.ID, T.Type, T.ParentID,
case
when T2.Type = 'X' then cast(null as varchar(3))
else T.ParentID
end as MasterParentID
from Table1 as T
left outer join Table1 as T2 on T2.ID = T.ParentID
), cte2 as (
select
T.ID, T.Type, T.ParentID,
T.MasterParentID, T.ID as MasterParentID2
from cte1 as T
where T.MasterParentID is null
union all
select
T.ID, T.Type, T.ParentID,
c.MasterParentID2 as MAsterParentID,
c.MasterParentID2 as MAsterParentID2
from cte1 as T
inner join cte2 as c on c.ID = T.MasterParentID
)
select
T.ID, T.Type, T.ParentID, T.MasterParentID
from cte2 as T
order by ID asc
```
[**`sql fiddle demo`**](http://sqlfiddle.com/#!6/bb619/24) | In SQL Server 2005 and above, a "[common table expression](http://technet.microsoft.com/en-us/library/ms175972%28v=sql.105%29.aspx)" can do this...
```
-- Assuming this table structure
--CREATE TABLE dbo.Test ( ID char(3), Type char(1), ParentID char(3))
;
WITH Tree(StartID, StartType, Parent, Child) AS
(
SELECT ID, Type, cast(NULL as char(3)), ID
FROM Test
WHERE ParentID IS NULL
UNION ALL
SELECT
-- Skip over the "X" rows...
CASE WHEN Tree.StartType = 'X'
THEN Test.ID
ELSE Tree.StartID
END,
CASE WHEN Tree.StartType = 'X'
THEN Test.Type
ELSE Tree.StartType
END,
Test.ParentID,
Test.ID
FROM Test
INNER JOIN Tree
ON Test.ParentID = Tree.Child
)
SELECT Test.ID, Test.Type, Test.ParentID,
CASE WHEN Tree.StartID = Test.ID
THEN NULL
ELSE Tree.StartID
END AS MasterParentID
FROM Test
LEFT OUTER JOIN Tree
ON Test.ID = Tree.Child
ORDER BY Test.ID
``` | SQL Find Parent ID for Certain Record Type | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
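The parent-chain walk above can be reproduced with SQLite's `WITH RECURSIVE` from Python. This sketch only reports each row's type-'R' ancestor; rows whose chain reaches no `R` (or that are themselves `R`/`X`) are simply absent, matching the non-NULL cells of the expected output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT, type TEXT, parent TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("001", "C", "007"), ("002", "C", "005"), ("003", "B", "007"),
    ("004", "C", "008"), ("005", "R", None), ("006", "C", "004"),
    ("007", "B", "004"), ("008", "R", "009"), ("009", "X", None)])

# Walk every row's parent chain upward; report the ancestor of type 'R'.
rows = conn.execute("""
    WITH RECURSIVE up(start_id, cur_id, cur_type) AS (
        SELECT id, id, type FROM t
        UNION ALL
        SELECT up.start_id, p.id, p.type
        FROM up
        JOIN t c ON c.id = up.cur_id
        JOIN t p ON p.id = c.parent
    )
    SELECT start_id, cur_id FROM up
    WHERE cur_type = 'R' AND cur_id <> start_id
    ORDER BY start_id
""").fetchall()
```

Recursion stops naturally when a row's parent is NULL, so no explicit depth limit is needed for this acyclic data.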
Little backstory: We have two scheduling databases that we import from, which feed data into our SQL Server. Sometimes that creates duplicate records for the same classroom during the same academic period, but it doesn't happen very often. What I need to do is merge them into one row WITHOUT altering the database (as I do not have the rights to do that just yet). I just want it to take effect in the query I am running.
So in the following data you can see that the all of my filtering criteria (AcademicPeriod, BuildingNumber, RoomNumber and DayOfWeek) are all the same but I have two rows that show it is occupied at separate times.
0 = Vacant 1 = Occupied
```
AcademicPeriod | BuildingNumber | RoomNumber | DayOfWeek | 9:30a | 9:45a | 12:30p | 12:45p
201401         | 0001           | 00015      | R         | 0     | 0     | 1      | 1
201401         | 0001           | 00015      | R         | 1     | 1     | 0      | 0
```
Is there a way to join these two rows within my query (again WITHOUT altering the tables) so that it simply has one row with all '1's in it?
I am on a deadline to figure this out, and I could use any suggestions that you may have.
Thank you!!! | ```
SELECT AcademicPeriod, BuildingNumber, RoomNumber, DayOfWeek,
MAX(930A) AS 930A,
MAX(945A) AS 945A,
MAX(1230P) AS 1230P,
MAX(1245P) AS 1245P
FROM {YourResultSet}
Group BY AcademicPeriod,
BuildingNumber,
RoomNumber,
DayOfWeek
``` | Try this query:
```
SELECT x.AcademicPeriod, x.BuildingNumber, x.RoomNumber, x.DayOfWeek,
MAX(x.[9:30a]) AS [9:30a],
MAX(x.[9:45a]) AS [9:45a],
MAX(x.[12:30p]) AS [12:30p],
MAX(x.[12:45p]) AS [12:45p]
FROM MySchema.MyTable x
GROUP BY x.AcademicPeriod, x.BuildingNumber, x.RoomNumber, x.DayOfWeek;
``` | Merging two Schedule rows into one in SQL | [
"",
"mysql",
"sql",
"sql-server",
""
] |
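The MAX-per-slot merge from the answers above, run against a cut-down copy of the sample data with SQLite through Python (`s0930`/`s1230` are stand-ins for the time-slot columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sched (period TEXT, room TEXT, s0930 INTEGER, s1230 INTEGER)")
conn.executemany("INSERT INTO sched VALUES (?, ?, ?, ?)",
                 [("201401", "00015", 0, 1),
                  ("201401", "00015", 1, 0)])

# MAX per time slot collapses the duplicates: any 1 (occupied) wins over 0.
row = conn.execute("""
    SELECT period, room, MAX(s0930), MAX(s1230)
    FROM sched
    GROUP BY period, room
""").fetchone()
```

The two conflicting rows collapse into one all-occupied row, with no changes to the underlying tables.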
**Context:**
I have the following table (example):
```
| ID | name | COUNTRY |
---------------------------------------------
| 1 | cristian | FRANCIA,HOLANDA,ALEMANIA |
| 2 | Andrea | FRANCIA,ESPAÑA,BELGICA |
| 3 | Fabian | BELGICA,ALEMANIA |
```
I need to put all the countries in one field, but without repeated values.
So, I'm trying the following query:
```
select GROUP_CONCAT(DISTINCT(COUNTRY)) FROM Usuario;
```
or using a regular expression, something like:
```
select GROUP_CONCAT(DISTINCT(COUNTRY)) FROM Usuario
WHERE GROUP_CONCAT(COUNTRY) REGEXP 'somepattern'
```
The wrong result is the following:
```
FRANCIA,HOLANDA,ALEMANIA,FRANCIA,ESPAÑA,BELGICA,BELGICA,ALEMANIA
```
The expected answer is:
```
FRANCIA,HOLANDA,ALEMANIA,ESPAÑA,BELGICA
```
Or should I make a **stored procedure**?
How can I get the expected answer, **for N values** with **no duplicates**?
Thanks for your knowledge and time! | There's no builtin function to do this in MySQL.
It's possible to do a boatload of string processing in MySQL, but it's ugly, and there has to be a known finite limit on the number of string values in the comma-separated list.
Here's one way to get a distinct list of field values from all those comma separated lists:
```
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(CONCAT(country,','),',',1),',',-1) AS fld
FROM mytable
HAVING fld <> ''
UNION
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(CONCAT(country,','),',',2),',',-1) AS fld
FROM mytable
HAVING fld <> ''
UNION
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(CONCAT(country,','),',',3),',',-1) AS fld
FROM mytable
HAVING fld <> ''
UNION
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(CONCAT(country,','),',',4),',',-1) AS fld
FROM mytable
HAVING fld <> ''
UNION
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(CONCAT(country,','),',',5),',',-1) AS fld
FROM mytable
HAVING fld <> ''
UNION
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(CONCAT(country,','),',',6),',',-1) AS fld
FROM mytable
HAVING fld <> ''
ORDER BY 1
```
I'll leave it as an exercise for you to figure out what that's doing and how it works.
Now that each value is in a separate row, you might want to leave it that way.
It's easy to wrap that query in another query, use a GROUP_CONCAT function, and return a single row with a string value containing a comma-separated list. | There's no way in MySQL to get unique values out of lists stored like this.
First, it's bad practice to store values this way.
You have two solutions:
1- Normalize your database.
2- Get the values from the table and use PHP `explode()`, then `array_unique()` to remove duplicate values. | Distinct,REGEXP apply to Field and CONCAT_GROUP in MYSQL to remove repeated words to stored procedure | [
"",
"mysql",
"sql",
"regex",
"stored-procedures",
"group-concat",
""
] |
Is there a way to determine the first day of a year in Teradata SQL?
For example, I use this to find the first day of the month:
```
SELECT dt + (1 - EXTRACT(DAY FROM dt)) AS dt
```
I know I can extract the year directly using `YEAR()` but I want to output the results as a date so it will work in some external charts.
I tried this, but it added a bunch of spaces at the start of the date:
```
CONCAT(YEAR(dt),'-01-01')
``` | Figured it out:
```
ADD_MONTHS(dt, -(EXTRACT(MONTH FROM dt) - 1)) + (1 - EXTRACT(DAY FROM dt))
``` | dnoeth's approach is cool for long term optimization of jobs, but, in my opinion, the simple answer is either Jeffrey's or the following. Not sure which one uses less resources:
```
select cast(extract(year from current_date)||'0101' as date format 'yyyymmdd');
``` | Teradata Determine First Day of Year | [
"",
"sql",
"teradata",
""
] |
Hi I am using Oracle SQL Developer and am trying to get percentages on my table.
My Current Query is:
```
select
decode(grouping(life), 1, 'Total', life) as "LifeName",
sum(case
when task is not null and to_char(datee, 'MM') = '08'
and to_char(datee, 'YYYY') = '2013'
then 1
else 0
end) as "MonthStatus"
from life_lk, task_c
where life_lk.life_id = task_c.life_id
group by rollup(life)
```
The current output I get is:
```
LifeName MonthStatus
dog 5
bear 20
cat 1
Fish 4
Total 30
```
However, I want the output to be:
```
LifeName MonthStatus Percent
dog 5 16
bear 20 66
cat 1 3
Fish 4 13
Total 30 100
```
So for each cell under MonthStatus I want the number to be divided by the Total, which in this case is 30. The total will change dynamically over time, so I cannot simply hard-code a division by 30.
Sorry for the sloppy looking tables. I am not sure how to make them look neat and lined up.
Thanks for the help | ```
SELECT t1.lifename, t1.monthstatus, (t1.monthstatus / t2.total * 100) as prcent FROM
(
select
decode(grouping(life), 1, 'Total', life) as "LifeName",
sum(case
when task is not null and to_char(datee, 'MM') = '08'
and to_char(datee, 'YYYY') = '2013'
then 1
else 0
end) as "MonthStatus"
from life_lk, task_c
where life_lk.life_id = task_c.life_id
group by rollup(life) ) t1
,
(select sum(monthstatus) as total FROM life_lk) t2 ;
```
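The cross-join-with-total pattern is easy to sanity-check; a sketch using SQLite via Python, with the question's sample numbers (the `counts` table here is a stand-in for the rollup result):

```python
import sqlite3

# Hypothetical data mirroring the question's (LifeName, MonthStatus) output.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE counts (lifename TEXT, monthstatus INTEGER)")
con.executemany("INSERT INTO counts VALUES (?, ?)",
                [("dog", 5), ("bear", 20), ("cat", 1), ("Fish", 4)])

# Same idea as above: cross join each row with a one-row total subquery.
rows = con.execute("""
    SELECT c.lifename,
           c.monthstatus,
           100 * c.monthstatus / t.total AS percent
    FROM counts c, (SELECT SUM(monthstatus) AS total FROM counts) t
""").fetchall()
print(rows)  # [('dog', 5, 16), ('bear', 20, 66), ('cat', 1, 3), ('Fish', 4, 13)]
```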
This should work, though I may have got your tables and columns wrong. | The analytic function ratio\_to\_report() would probably work for you. It gives the ratio of a value to the sum of all of those values over a specified window.
In your case:
```
100* ratio_to_report(sum(...)) over ()
```
I'm not sure how it would work in the presence of a rollup, though.
<http://docs.oracle.com/cd/E18283_01/server.112/e17118/functions142.htm> | Oracle SQL Trying to get Percentage of columns | [
"",
"sql",
"oracle",
""
] |
I have a question regarding SSRS.
I am working with MS SQL Server Management Studio 2012 and BIDS (Visual Studio 2008) for the report design.
I have a report with some multi-value parameters and a stored procedure behind it which returns the records.
I have tried to find the problem in the parameter values passed to the stored procedure and in a string-split function. I checked in SQL Server Profiler whether the strings are passed in an unexpected form, but that is not the case. I ran the exact execution code on the server as a query and got results back, but if I run the report in the preview pane of the report designer I get no results.
If you need any additional info, just let me know. I just thought there may be someone who has faced the same issue and knows the answer. | I will take a guess and say it is 'how' you are passing the multi-value parameter. Personally, when dealing with SSRS, I use views, table functions, or plain selects, as SSRS natively understands that this:
```
Where thing in (@Thing)
```
Actually means this in SSMS:
```
Where thing in (@Thing.Value1, @Thing.Value2, @Thing.Value3, etc...)
```
I am guessing your proc takes a string that is actually a comma-separated array. When you have a multi-value 'Text' parameter (whether its values are specified directly or come from a query), you essentially need to 'Join' the selected values if your procedure expects a single string containing the array. E.g., a proc called dbo.test that returns rows for the ids 1,2,4 is executed like:
```
exec dbo.test @ids = '1,2,4'
```
If I wanted to run the proc in SSRS with this value and I had a multi value parameter called 'IDS' I would have to assemble the array manually in a function in SSRS like:
```
=JOIN(Parameters!IDS.Value, ",")
```
Essentially you are telling the proc to run with a parameter 'IDS' built by joining the multiple values into a chain of comma-separated values. You do this in the dataset on the left pane where it lists 'Parameters': instead of specifying the parameter as [@IDS], you click the 'Fx' button and put in the function above.
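The joining and the later splitting are mirror operations; a tiny sketch (Python used purely to illustrate the string handling, assuming the proc's split function behaves like a plain split):

```python
# SSRS joins the selected values into one string (what
# =JOIN(Parameters!IDS.Value, ",") does), and the proc's string-split
# logic turns it back into individual values.
selected = ["1", "2", "4"]   # what the user ticked in the report
param = ",".join(selected)   # "1,2,4" -- the single string the proc receives
ids = param.split(",")       # what a split function recovers server-side
print(param, ids)            # 1,2,4 ['1', '2', '4']
```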
For this reason I am a big proponent of views, selects, and table functions, as you can use the predicate logic to take care of this. | This answer might look outdated, but in case someone needs it I will reproduce my experience, since I had the same problem just yesterday and found the solution on a webpage.
The problem: My report has 2 date parameters to get data in that range, but I first noticed it did not work for a single day; then I noticed that using a range did not bring back the data for the last day.
Workaround: I worked on the query, making some adjustments, but when I made the query return a whole month and nothing appeared in the report, I knew the problem was in SSRS; of course I checked the parameters I was sending to the report.
Solution: My date parameters were OK (no problem with the time part) and I read somewhere to check for the filters in the "Data" panel, that is, the other tab besides the parameters. I had put those filters there and totally forgotten about them. | SSRS shows no records in report but query returns results | [
"",
"sql",
"t-sql",
"reporting-services",
"sql-server-2012",
""
] |
I want to do a query. I use Microsoft SQL Server.
If product equals FAST and tree\_level is 0,1,2,3,4,5, count those rows and multiply by 2; if tree\_level equals -1, count those rows and multiply by 2.
and
If product equals MOBIL and tree\_level is 0,1,2,3,4,5, count those rows and multiply by 3.
and
If product equals FACE and tree\_level is 0,1,2,3,4,5, count those rows and multiply by 3; if tree\_level equals -1, count those rows and multiply by 2.
I want to do it all in one query but I cannot work out the algorithm.
**Joined table**
```
perstel| AD|SOYAD|RefPhoner|Product |Tree_level
_______________________________________________
7857887|AS |DFDSF|5645545 |FAST |0
6566464|SD |DFDDS|4578857 |MOBİL |1
7487887|SD |FSDFD|8787878 |FACE |2
7487887|SD |FSDFD|8788278 |FACE |2
7487887|SD |FPOFD|8933878 |MOBIL |5
7445887|WE |FSPLD|8771878 |FACE |3
7387887|SD |LBDFD|8712878 |FAST |4
0487887|WE |FSPLD|8771878 |FACE |-1
4487887|WE |FOLLD|8771878 |MOBIL |-1
```
**I want this output** (updated to make it understandable)
```
perstel| AD(name at eng)|SOYAD|RefPhoner|Product |Tree_level | POint
_________________________________________________________________
7857887|AS |DFDSF|5645545 |FAST |-1 | 2 (because it is -1 and it is face so it is point 2)
6566464|EM |DFDDS|4578857 |FACE |2 | 3 (because it is 2 and it is face so it is point 3)
7487887|MM |FSDFD|8787878 |FAST |2 | 2 .....
7487887|AS |DFDSF|8788278 |MOBIL |0 | 3 ...
7487887|EM |DFDDS|8933878 |FAST |-1 | 2 ...
7445887|HL |FSPLD|8771878 |FACE |3 | 3 ...
```
So after that I will sum all of each person's points.
I could only get this far :(
```
select
DS.PersTel ,
DW.AD ,
DW.SOYAD ,
DS.RefPhoner ,
DS.Product ,
DS.Tree_level
from dw_prod.FRTN.DIG_SEFER AS DS
inner join dw_prod.dbo.DW_MUST AS DW
ON DW.CEP_TEL = DS.PersTel
```
**UPDATE: I tried it, but it still has a mistake. What is my mistake?**
```
select
DS.PersTel ,
DW.AD ,
DW.SOYAD ,
DS.RefPhoner ,
DS.Product ,
DS.Tree_level
CASE DS.Tree_level
WHEN DS.Tree_level IN (0,1,2,3,4,5) THEN count(DS.Tree_level) * 3
WHEN DS.Tree_level IN (-1) THEN count(DS.Tree_level) * 2
WHERE DS.Product like '%FACE%' END AS Answer1
CASE DS.Tree_level
WHEN DS.Tree_level IN (0,1,2,3,4,5) THEN count(DS.Tree_level) * 3
WHERE DS.Product like '%MOBIL%' END AS Answer2
CASE DS.Tree_level
WHEN DS.Tree_level IN (0,1,2,3,4,5) THEN count(DS.Tree_level) * 2
WHEN DS.Tree_level IN (-1) THEN count(DS.Tree_level) * 2
WHERE DS.Product like '%FAST%' END AS Answer3
from dw_prod.FRTN.DIG_SEFER AS DS
inner join dw_prod.dbo.DW_MUST AS DW
ON DW.CEP_TEL = DS.PersTel
```
**updated case part**
```
select
DS.PersTel ,
DW.AD ,
DW.SOYAD ,
DS.RefPhoner ,
DS.Product ,
DS.Tree_level
CASE
WHEN DS.Tree_level IN (0,1,2,3,4,5)AND DS.Product LIKE '%FACE%' THEN count(DS.Tree_level) * 3
WHEN DS.Tree_level IN (-1) THEN count(DS.Tree_level) * 2
END AS Answer1
CASE DS.Tree_level
WHEN DS.Tree_level IN (0,1,2,3,4,5) AND DS.Product LIKE '%MOBIL%' THEN count(DS.Tree_level) * 3
END AS Answer2
CASE DS.Tree_level
WHEN DS.Tree_level IN (0,1,2,3,4,5) AND DS.Product LIKE '%FAST%' THEN count(DS.Tree_level) * 2
WHEN DS.Tree_level IN (-1) THEN count(DS.Tree_level) * 2
END AS Answer3
from dw_prod.FRTN.DIG_SEFER AS DS
inner join dw_prod.dbo.DW_MUST AS DW
ON DW.CEP_TEL = DS.PersTel
``` | Try this query and let me know if you are still facing the problem:
```
select
DS.PersTel ,
DW.AD ,
DW.SOYAD ,
DS.RefPhoner ,
DS.Product ,
DS.Tree_level ,
CASE
WHEN DS.Tree_level IN (-1) And DS.Product LIKE '%FACE%' THEN count(DS.Tree_level) * 2
WHEN DS.Tree_level IN (-1) And DS.Product LIKE '%FAST%' THEN count(DS.Tree_level) * 2
WHEN DS.Tree_level IN (0,1,2,3,4,5) AND DS.Product LIKE '%FACE%' THEN count(DS.Tree_level) * 3
WHEN DS.Tree_level IN (0,1,2,3,4,5) AND DS.Product LIKE '%MOBIL%' THEN count(DS.Tree_level) * 3
WHEN DS.Tree_level IN (0,1,2,3,4,5) AND DS.Product LIKE '%FAST%' THEN count(DS.Tree_level) * 2
Else DS.Tree_level
END AS Answer1
from dw_prod.FRTN.DIG_SEFER AS DS
inner join dw_prod.dbo.DW_MUST AS DW
ON DW.CEP_TEL = DS.PersTel
Group by DS.PersTel , DW.AD , DW.SOYAD , DS.RefPhoner , DS.Product , DS.Tree_level
``` | You can't do this:
```
CASE DS.Tree_level
WHEN DS.Tree_level IN (0,1,2,3,4,5) THEN count(DS.Tree_level) * 3
WHEN DS.Tree_level IN (-1) THEN count(DS.Tree_level) * 2
WHERE DS.Product like '%FACE%' END AS Answer1
```
But you can do:
```
CASE
WHEN DS.Tree_level IN (0,1,2,3,4,5) AND DS.Product like '%FACE%' THEN count(DS.Tree_level) * 3
WHEN DS.Tree_level IN (-1) AND DS.Product like '%FACE%' THEN count(DS.Tree_level) * 2
END AS Answer1
```
(or you could nest case statements but that might be uglier!)
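Applied to the question's data, that searched-CASE pattern (extended here to all three products) produces the expected points per row; a quick smoke test, sketched with SQLite via Python using a few sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ref (product TEXT, tree_level INTEGER)")
con.executemany("INSERT INTO ref VALUES (?, ?)",
                [("FACE", 2), ("FACE", -1), ("MOBIL", 0), ("FAST", 4), ("FAST", -1)])

# One searched CASE per row: each WHEN combines the level range and product test.
rows = con.execute("""
    SELECT product, tree_level,
           CASE
             WHEN tree_level BETWEEN 0 AND 5 AND product LIKE '%FACE%'  THEN 3
             WHEN tree_level BETWEEN 0 AND 5 AND product LIKE '%MOBIL%' THEN 3
             WHEN tree_level BETWEEN 0 AND 5 AND product LIKE '%FAST%'  THEN 2
             WHEN tree_level = -1 THEN 2
           END AS point
    FROM ref
""").fetchall()
print(rows)
```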
Update:
Just make sure you separate each 'case' with a comma:
```
CASE
WHEN DS.Tree_level IN (0,1,2,3,4,5)AND DS.Product LIKE '%FACE%' THEN count(DS.Tree_level) * 3
WHEN DS.Tree_level IN (-1) THEN count(DS.Tree_level) * 2
END AS Answer1, -- COMMA HERE IMPORTANT
CASE DS.Tree_level
WHEN DS.Tree_level IN (0,1,2,3,4,5) AND DS.Product LIKE '%MOBIL%' THEN count(DS.Tree_level) * 3
END AS Answer2
```
Because you need to separate the fields as you would with a normal query:
```
SELECT Field1, Field2, Field3 FROM X
```
or
```
SELECT CASE X WHEN Y THEN Z END as Field1, Field2, Field3 FROM X
```
The comma is important (you don't need a comma before the FROM, just between each field) | Multiply numbers using SQL and if else statement | [
"",
"sql",
"sql-server",
""
] |
The query below gives the result, i.e. wherever value1 appears it is changed into changeValue1:
```
select case 1 when person ="value1" then "changeValue1"
when person="value2" then "changeValue2"
else person end as nickname
from person;
```
The query below is not changing value1 and value2 to the corresponding "then" values:
```
select case 2 when person ="value1" then "changeValue1"
when person="value2" then "changeValue2"
else person end as nickname
from person;
```
I know it is only because of `case 1` and `case 2`. Can anybody explain how this function works? | ## General notes
There are 2 types of cases: with a value after CASE and without it.
1. `CASE value WHEN [compare_value] THEN result [WHEN [compare_value] THEN result ...] [ELSE result] END`
It compares the value after case with a list of possible values:
```
mysql> SELECT CASE 1 WHEN 1 THEN 'one'
-> WHEN 2 THEN 'two' ELSE 'more' END;
-> 'one'
```
This version compares the value after case with values, given after `WHEN`.
2. `CASE WHEN [condition] THEN result [WHEN [condition] THEN result ...] [ELSE result] END`
```
mysql> SELECT CASE WHEN 1>0 THEN 'true' ELSE 'false' END;
-> 'true'
```
This version will return the value after the first true condition.
As you are using the 2-nd version, you don't need to put 1 or 2.
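The same two behaviours, and the trap of mixing them, can be reproduced on any engine where booleans evaluate to 1/0; a sketch with SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
q = lambda sql: con.execute(sql).fetchone()[0]

# Simple form: the value after CASE is compared with each WHEN value.
print(q("SELECT CASE 1 WHEN 1 THEN 'one' WHEN 2 THEN 'two' ELSE 'more' END"))  # one
# Searched form: no value after CASE; the first true condition wins.
print(q("SELECT CASE WHEN 1 > 0 THEN 'true' ELSE 'false' END"))                # true
# Mixing them: 1 > 0 evaluates to 1, so CASE 1 matches but CASE 2 never can.
print(q("SELECT CASE 1 WHEN 1 > 0 THEN 'matched' ELSE 'no match' END"))        # matched
print(q("SELECT CASE 2 WHEN 1 > 0 THEN 'matched' ELSE 'no match' END"))        # no match
```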
## Investigating how your code works
In your case you get the correct result for `case 1` because MySQL treats your searched-style query as the first form: it evaluates the conditions and compares each result with 1. Since 1 and TRUE are the same value, it works.
When you type 2, it always goes to the ELSE branch, because 2 is neither TRUE nor FALSE. If you use 0 instead of 2, it will give you the result of the first false condition:
```
mysql> SELECT CASE 0 WHEN 2<0 THEN 'true' ELSE 'false' END;
+----------------------------------------------+
| CASE 0 WHEN 2<0 THEN 'true' ELSE 'false' END |
+----------------------------------------------+
| true |
+----------------------------------------------+
``` | The reason it's not working is that you are comparing an integer with a boolean, which in mysql is either `1` or `0` - never `2`.
There are two forms of the case statement. The first form is like this:
```
case when condition then value1
when condition1 then value2
else value3 end
```
the second is like this, more like a `switch` in C/java:
```
case col
when `foo` then value1
when `bar` then value2
else value3 end
```
You're trying to use the second form, comparing `2` with the boolean result of `person="value1"`, which in MySQL is `1` for `true` and `0` for `false`.
Try this instead:
```
case person
when "value1" then "changeValue1"
when "value2" then "changeValue2"
else person
end
```
If you're trying to use the first form, just remove the `2`:
```
case when person ="value1" then "changeValue1"
when person="value2" then "changeValue2"
else person end as nickname
``` | Case Value Statement | [
"",
"mysql",
"sql",
""
] |
I want to display the names which start with 0 to 9. I do not want to use the regexp and LIKE functions.
My data:
```
select * from emp2;
15326
25371
35371
48615
59718
69718
79718
89718
99718
05326
15326
25371
35371
48615
59718
69718
79718
89718
99718
a19716
b09414
d%5034
!5033
**5031
89718
39718
05326
a19716
b09414
d%5034
!5033
**5031
89718
39718
```
I used below query to fetch the records.
```
select * from emp2 where name between '0' and '9';
15326
25371
35371
48615
59718
69718
79718
89718
05326
15326
25371
35371
48615
59718
69718
79718
89718
89718
39718
05326
89718
39718
```
Result: I am getting values starting with 0 to 8, but names starting with 9 are not fetched. Why?
Some more queries I tried:
```
select * from emp2 where name between '+1' and '10';
05326
05326
select * from emp2 where name between '9' and '0';
no rows selected
```
Why is this statement not fetching any of the records?
My expected result: I want to fetch the records whose name starts with 0 to 9. | ```
select * from emp2 where name >= '0' and name < ':'
```
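Why `< ':'` works: `:` is the character immediately after `9` in ASCII, so the half-open range keeps every name whose first character is a digit, including those starting with `9`. A quick check, sketched with SQLite via Python using a few of the question's values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp2 (name TEXT)")
con.executemany("INSERT INTO emp2 VALUES (?)",
                [("15326",), ("99718",), ("05326",), ("a19716",), ("!5033",), ("9",)])

# '!' sorts before '0' and 'a' sorts after ':', so both are excluded,
# while every digit-leading name (including a bare '9') is kept.
rows = con.execute(
    "SELECT name FROM emp2 WHERE name >= '0' AND name < ':' ORDER BY name"
).fetchall()
print([r[0] for r in rows])  # ['05326', '15326', '9', '99718']
```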
This statement can use an index on the `name` column if one exists. | The query you want is:
```
select *
from emp2
where substr(name, 1, 1) between '0' and '9';
```
You are misunderstanding `between` when applied to strings. The expression:
```
name between '+1' and '10'
```
is looking for names that start with a character ranging from a `'+'` to a `'1'`. For digits, that would be those starting with `0` and `1`.
The expression:
```
name between '9' and '0'
```
is backwards, because the values for a between need to be in order.
The related expression:
```
name between '0' and '9'
```
comes close. But it will miss something starting with `9` when there is another digit. | I want to display the name which is start from 0 to 9 | [
"",
"sql",
"database",
"oracle",
""
] |
Is this possible in SQLite?
Like:
```
UNIQUE(col1, col2 = "something", col3),
```
What I mean by unique:
```
#1 col1=Foo | col2=Foo | col3=Foo > OK
#2 col1=Foo | col2=Foo | col3=Foo > OK
#3 col1=Foo | col2=something | col3=Foo > OK
#4 col1=Foo | col2=something | col3=Foo > Should fail because of #3
#5 col1=Foo | col2=something | col3=Bar > OK
#6 col1=Foo | col2=something | col3=Bar > Should fail because of #5
``` | This is not supported directly; you have to implement it with a trigger:
```
CREATE TRIGGER something_unique_check
BEFORE INSERT ON MyTable
FOR EACH ROW
WHEN NEW.col2 = 'something'
BEGIN
SELECT RAISE(FAIL, '"something" record is not unique')
FROM MyTable
WHERE col1 = NEW.col1
AND col2 = NEW.col2
AND col3 = NEW.col3;
END;
``` | I haven't tested it but I think you can do that with [partial indexes in SQLite 3.8.0](http://www.sqlite.org/partialindex.html) (released 2013-08-26):
```
CREATE UNIQUE INDEX
"partial_index" ON "table" ("col1", "col2", "col3")
WHERE ("col2" = 'something');
```
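A sketch of how one might check this (partial indexes require SQLite ≥ 3.8.0; exercised here through Python's sqlite3, replaying cases #1 to #5 from the question; the table name `t` is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, col3 TEXT)")
con.execute("""CREATE UNIQUE INDEX partial_index
               ON t (col1, col2, col3) WHERE col2 = 'something'""")

# Cases #1, #2, #3 and #5 all insert fine (duplicates are allowed when
# col2 <> 'something')...
for row in [("Foo", "Foo", "Foo"), ("Foo", "Foo", "Foo"),
            ("Foo", "something", "Foo"), ("Foo", "something", "Bar")]:
    con.execute("INSERT INTO t VALUES (?, ?, ?)", row)

# ...while case #4 (a duplicate of #3 with col2 = 'something') is rejected.
try:
    con.execute("INSERT INTO t VALUES ('Foo', 'something', 'Foo')")
    print("inserted")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```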
I might be wrong though. | UNIQUE constraint but only when column = something | [
"",
"sql",
"sqlite",
"pdo",
"constraints",
"unique",
""
] |
For example I have this stored procedure
```
Create Procedure sampleProcedure
AS
BEGIN
Select EmpID, Name, Address, PhoneNumber From EmpTable
END
```
Now, how will I do this in my application: for example, if a user checks checkBox1 then the query above should be `ORDER BY EmpID`, else if checkBox2 is checked then the query should be `ORDER BY Name`. Do I have to create two procedures?
```
Create Procedure sampleProcedure1
AS
BEGIN
Select EmpID, Name, Address, PhoneNumber From EmpTable Order By EmpID
END
Create Procedure sampleProcedure2
AS
BEGIN
Select EmpID, Name, Address, PhoneNumber From EmpTable Order By Name
END
```
If checkbox1 is checked then sampleProcedure1 would be executed, else if checkbox2 is checked then sampleProcedure2 would be executed? | Introduce a parameter for the procedure to dictate the sort order:
```
Create Procedure sampleProcedure1
@orderByEmpId bit
AS
BEGIN
if (@orderByEmpId = 1)
Select EmpID, Name, Address, PhoneNumber From EmpTable Order By EmpID
else
Select EmpID, Name, Address, PhoneNumber From EmpTable Order By Name
END
```
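Behaviourally, that branching looks like this (sketched with SQLite via Python, since SQLite has no stored procedures; the flag simply selects which ORDER BY runs, and all table data here is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EmpTable (EmpID INTEGER, Name TEXT)")
con.executemany("INSERT INTO EmpTable VALUES (?, ?)",
                [(2, "Alice"), (1, "Bob")])

def sample_procedure(order_by_emp_id):
    """Branch on the flag, as the procedure above does with IF/ELSE."""
    order = "EmpID" if order_by_emp_id else "Name"
    return con.execute(f"SELECT EmpID, Name FROM EmpTable ORDER BY {order}").fetchall()

print(sample_procedure(True))   # [(1, 'Bob'), (2, 'Alice')]
print(sample_procedure(False))  # [(2, 'Alice'), (1, 'Bob')]
```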
You can further refine this to:
```
CREATE PROCEDURE sampleProcedure1
@orderByEmpId Bit
AS BEGIN
SELECT
EmpID, Name, Address, PhoneNumber
FROM EmpTable
ORDER BY CASE WHEN @orderByEmpId = 1 THEN EmpID ELSE Name END
END
``` | I would suggest ordering your collection in your application based on your checkbox selection.
E.g.
```
if (checkbox1.Checked)
employees = employees.OrderBy(x => x.EmpID).ToList();
else
employees = employees.OrderBy(x => x.Name).ToList();
```
But if you need to do this SQL side for whatever reason, I would suggest a parameterized stored procedure:
```
Create Procedure sampleProcedure1
(
@OrderByEmpID BIT = 1
)
AS
BEGIN
IF (@OrderByEmpID = 1)
Select EmpID, Name, Address, PhoneNumber From EmpTable Order By EmpID
ELSE
Select EmpID, Name, Address, PhoneNumber From EmpTable Order By Name
END
``` | Conditional Order By In Stored Procedure | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
"sql-server-2008-r2",
""
] |