| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I need help with this error: *Conversion failed when converting date and/or time from character string*.
I'm trying to run a simple query on an `@as_of_date`, 10 days ago and 30 days ago. The query works fine if I run it on 3-5 accounts, but with any more than that it fails. How do I change my query so that I don't get the conversion error?
Here's my sample query:
[](https://i.stack.imgur.com/lZvMk.jpg)
**Edit**
Also tried removing the quotes from the select statement. Still doesn't work.
[](https://i.stack.imgur.com/AdPAM.jpg)
|
You can try using the `ISDATE` function to check whether the input data is a valid date. Also, I can't see from the screenshot what the input actually is.
|
You are converting the `@as_of_date` twice.
You can use `SELECT convert(date, getdate(), 110)`
Correct me if I am wrong.
|
tsql - Conversion failed when converting date and/or time from character string
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Why do we need to have two tables (master and transaction table) of any topic like `sales`,`purchase`,etc.. What should be the relationship between the two tables and what should be the difference between them. Why do we really need them.
|
Master and Transaction tables are needed in the database schema, especially in verticals like sales.
**Master Data**: Data which seldom changes.
For example, if a company has a list of 5 customers, it will maintain a customer master table holding each customer's name and address, along with other data that is permanent and unlikely to change.
**Transaction Data**: Data which changes frequently. For example, when the company sells some materials to one of its customers, it prepares a sales order for that customer. Generating a sales order is a sales transaction, and that transactional data is stored in the transaction table.
This separation is really required to maintain database normalization.
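To make the split concrete, here is a minimal sketch in Python with an in-memory SQLite database (all table and column names are illustrative, not from the question): a customer master table holds the stable reference data, and a sales-order transaction table references it via a foreign key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Master table: slowly-changing reference data
    CREATE TABLE customer_master (
        customer_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        address TEXT
    );
    -- Transaction table: frequently-growing event data,
    -- linked to the master via a foreign key (a 1:N relationship)
    CREATE TABLE sales_order (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer_master(customer_id),
        order_date TEXT NOT NULL,
        amount REAL NOT NULL
    );
""")
conn.execute("INSERT INTO customer_master VALUES (1, 'Acme', 'Main St')")
conn.executemany("INSERT INTO sales_order VALUES (?, 1, ?, ?)",
                 [(1, '2016-01-01', 100.0), (2, '2016-01-02', 50.0)])
# One master row, many transaction rows; reporting joins them back together
total = conn.execute("""
    SELECT m.name, SUM(t.amount)
    FROM customer_master m JOIN sales_order t USING (customer_id)
    GROUP BY m.customer_id
""").fetchone()
print(total)  # ('Acme', 150.0)
```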
|
In the end, it really depends on the type of data you are working with. If you have a specific example, that might give us a better indication on what you are trying to do. However, in general, a master table would theoretically be constant in relationship to habitual changes seen in your transaction table.
|
difference between master and transaction table
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I want to get the latest highscore, per user, per game. My current query isn't working.
I have a SQL DB like the following:
```
player(string) game(string) score(int) Date(Date) time(Time)
jake soccer 20 2016/02/26 10:00:00
jake chess 50 2016/02/26 10:00:00
jake soccer 40 2016/02/26 13:00:00
jake chess 30 2016/02/26 13:00:00
jake soccer 20 2016/02/26 15:00:00
jake chess 60 2016/02/26 15:00:00
jake soccer 80 2016/02/26 18:00:00
jake chess 10 2016/02/26 18:00:00
mike chess 30 2016/02/26 13:00:00
mike soccer 20 2016/02/26 15:00:00
mike chess 60 2016/02/26 15:00:00
mike soccer 80 2016/02/26 18:00:00
mike chess 10 2016/02/26 18:00:00
```
What I want to get out of it is:
```
jake soccer 80 2016/02/26 18:00:00
jake chess 10 2016/02/26 18:00:00
mike soccer 80 2016/02/26 18:00:00
mike chess 10 2016/02/26 18:00:00
```
I found out the Time column also has the date, so this should work.
This is my current Query:
```
SELECT t1.*
FROM db t1
INNER JOIN (
SELECT player, MAX(time) TS
FROM db
GROUP BY player
) t2 ON t2.player = t1.player and t2.TS = t1.time
ORDER BY score DESC
```
EDIT: I'm getting lots of wrong rows. Basically, I'm getting them sorted by time, but not by date.
I now need to sort not only by MAX(Time) but by MAX(Date) as well, or merge Date and Time into a new value.
|
To get the latest highscore, per user, per game, try this:
```
;WITH cte as (
SELECT player, game, MAX(convert(datetime,cast([date] as nvarchar(10)) + ' '+ cast([time] as nvarchar(10)))) TS
FROM db
GROUP BY player, game)
SELECT db.*
FROM cte
LEFT JOIN db ON cte.player = db.player and cte.game = db.game and cte.TS = convert(datetime,cast(db.[date] as nvarchar(10)) + ' '+ cast(db.[time] as nvarchar(10)))
ORDER BY score DESC
```
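The core trick above (concatenating the date and time into one sortable key, taking the MAX per group, and joining back) can be sketched with Python's sqlite3; the data below is a shortened, illustrative version of the question's table:

```python
import sqlite3

rows = [
    ("jake", "soccer", 20, "2016-02-26", "10:00:00"),
    ("jake", "soccer", 80, "2016-02-26", "18:00:00"),
    ("jake", "chess",  60, "2016-02-26", "15:00:00"),
    ("jake", "chess",  10, "2016-02-26", "18:00:00"),
]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db (player TEXT, game TEXT, score INT, date TEXT, time TEXT)")
conn.executemany("INSERT INTO db VALUES (?,?,?,?,?)", rows)
# Combine date and time into one sortable key, take the MAX per
# (player, game), then join back to fetch the full row
latest = conn.execute("""
    SELECT d.player, d.game, d.score
    FROM (SELECT player, game, MAX(date || ' ' || time) AS ts
          FROM db GROUP BY player, game) m
    JOIN db d ON d.player = m.player AND d.game = m.game
             AND d.date || ' ' || d.time = m.ts
    ORDER BY d.game
""").fetchall()
print(latest)  # [('jake', 'chess', 10), ('jake', 'soccer', 80)]
```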
|
Try using ROW\_NUMBER()
```
SELECT
t1.*
FROM (
SELECT
*
, ROW_NUMBER() OVER (PARTITION BY player, game ORDER BY [date] DESC, [time] DESC) AS rn
FROM db
) AS t1
WHERE rn = 1
;
```
|
Using MAX(Time) and MAX(Date) to get only the latest per group
|
[
"",
"sql",
"sql-server",
""
] |
I have an XML column that was generated without a namespace, meaning no xmlns attribute. Unfortunately, I cannot fix the actual problem, i.e. where the XML is created.
For example:
```
<root>Our Content</root>
```
I *can* modify the XML data before it's returned to a particular client that expects a namespace. What I want is pretty simple:
```
<root xmlns="http://OurNamespace">Our Content</root>
```
I tried something like:
```
.modify('insert attribute xmlns {"ournamespace"}...
```
But that errors with
> Cannot use 'xmlns' in the name expression.
My questions are:
1. Is there a technique around this particular error?
2. Is there an alternative or better way to add/change a namespace on a SQL XML type?
This is in a SQL Server 2012 stored procedure.
|
Maybe as simple as this?
```
DECLARE @xml XML='<root>Our Content</root>';
SELECT CAST( REPLACE(CAST(@xml AS NVARCHAR(MAX)),'<root>','<root xmlns="http://OurNamespace">') AS XML)
```
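The same patch-the-serialized-text idea, sketched in Python to show that the result reparses cleanly as namespaced XML:

```python
import xml.etree.ElementTree as ET

xml_text = "<root>Our Content</root>"
# Same idea as the CAST/REPLACE trick above: patch the serialized
# text, then reparse to confirm it is still valid XML
patched = xml_text.replace("<root>", '<root xmlns="http://OurNamespace">', 1)
elem = ET.fromstring(patched)
print(elem.tag)  # '{http://OurNamespace}root'
```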
|
The best alternative I could come up with: select the root's subnodes and place them between `<root xmlns="http://OurNameSpace">...</root>`.
```
DECLARE @t TABLE(e XML);
INSERT INTO @t(e)VALUES('<root><el1>Our Content</el1></root>');
INSERT INTO @t(e)VALUES('<root><el2>Our Content</el2></root>');
SELECT
'<root xmlns="http://OurNameSpace">'+
CAST(e.query('/root/*') AS NVARCHAR(MAX))+
'</root>'
FROM @t;
```
Closest I got with XQuery is this:
```
SELECT e.query('<root xmlns="http://OurNameSpace">{*:root/*}</root>')
FROM @t;
```
But that selects the first subelement (e.g. `<el1>`) with `xmlns=""` (`<el1 xmlns="">`). I didn't find a way to remove that, but maybe it is good enough for you?
|
How to add or change an Xml namespace in SQL Server / Cannot use 'xmlns' in the name expression
|
[
"",
"sql",
"sql-server",
"xml",
"xml-namespaces",
""
] |
I need to return `true` or `false` rather than `1` and `0`, using the following query:
```
select if(u.id is null,false,true) status
from user u
limit 10
```
The above query returns `status` with the value 0 or 1 rather than `true` and `false`.
Is there any way to fix this?
|
If you want, you can return the values as *strings*:
```
SELECT IF(u.id IS NULL, 'false', 'true') as status
FROM user u
LIMIT 10
```
|
TRUE/FALSE is equivalent to 1/0; it's just a matter of how your front end displays it.
If you need to return the strings "true" and "false" (which I don't suggest; handle that in the display) then you'll have to account for that as well:
`IF(IF(u.id IS NULL, false, true) = 1, 'TRUE', 'FALSE')`
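A portable sketch of the idea in Python with SQLite, which has no `IF()`, so `CASE` is used instead (the table and data are illustrative); the string literals are what actually makes the result read 'true'/'false':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER)")
conn.executemany("INSERT INTO user VALUES (?)", [(1,), (None,)])
# CASE is the portable spelling of MySQL's IF(); quoting the words
# is what turns the 0/1 result into the strings 'true'/'false'
status = [r[0] for r in conn.execute(
    "SELECT CASE WHEN id IS NULL THEN 'false' ELSE 'true' END "
    "FROM user ORDER BY rowid")]
print(status)  # ['true', 'false']
```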
|
MySQL: return "0" & "1" rather than "false" & "true"
|
[
"",
"sql",
""
] |
I am curious about the logical query processing phase of SQL queries.
For `SELECT` queries, the logical query processing phase order is:
1. FROM
2. ON
3. OUTER
4. WHERE
5. GROUP BY
6. CUBE | ROLLUP
7. HAVING
8. SELECT
9. DISTINCT
10. ORDER BY
11. TOP
What is the order for `INSERT`, for `UPDATE` and for `DELETE`?
|
If you would like to know what the actual query processing order is, take a look at the execution plan. That will tell you step by step exactly what SQL Server is doing.
<https://technet.microsoft.com/en-us/library/ms178071(v=sql.105).aspx>
|
**SQL Server: [Source](https://msdn.microsoft.com/en-us/library/ms189499.aspx "Source")**
1. FROM
2. ON
3. JOIN
4. WHERE
5. GROUP BY
6. WITH CUBE or WITH ROLLUP
7. HAVING
8. SELECT
9. DISTINCT
10. ORDER BY
11. TOP
|
Logical query processing phase of INSERT, DELETE, and UPDATE in SQL queries
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to get all of my articles out of my database where the date of the last update is greater than the publishing date plus 1 week.
Do you have an idea how that should work?
In my table there are the `publish_date` and the `update_date` columns. Both contain a datetime in the following format: `Y-m-d H:i:s`.
It should be something like the following (which does not work!)
```
SELECT * FROM foo WHERE update_date > publish_date+1week
```
|
```
SELECT * FROM foo WHERE update_date > DATE_ADD(publish_date, INTERVAL 1 WEEK)
```
See [MySQL DATE\_ADD](http://www.w3schools.com/sql/func_date_add.asp).
|
If you want to retain your structure (i.e. not use a DATE\_ADD function), I would use:
```
SELECT * FROM foo WHERE update_date > publish_date + INTERVAL 1 WEEK
```
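The same comparison can be sketched in Python with SQLite, whose date modifiers play the role of MySQL's `INTERVAL 1 WEEK` (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INT, publish_date TEXT, update_date TEXT)")
conn.executemany("INSERT INTO foo VALUES (?,?,?)", [
    (1, "2016-01-01 00:00:00", "2016-01-10 00:00:00"),  # updated 9 days later
    (2, "2016-01-01 00:00:00", "2016-01-03 00:00:00"),  # updated 2 days later
])
# SQLite's '+7 days' modifier stands in for MySQL's INTERVAL 1 WEEK
late = [r[0] for r in conn.execute("""
    SELECT id FROM foo
    WHERE update_date > datetime(publish_date, '+7 days')
""")]
print(late)  # [1]
```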
|
How to get data out of a database where the saved date is greater than another date + 1 week
|
[
"",
"mysql",
"sql",
""
] |
How do I retrieve results for unused positions in my database? Here is an example:
```
TABLE POSITION
PosID PosName
101 President
102 Vice President
103 Secretary
104 Treasurer
105 Auditor
106 Srgt of Arms
TABLE OFFICER
OfficerID OrganizationID Name PosID (FK to TABLE POSITION)
1001 2016-02081-0 Kris 101
1002 2016-02081-0 Nitche 102
1003 2016-02081-0 Russel 103
```
Now I want my query to retrieve positions from the TABLE POSITION that are not being used by a certain organization. This is what I did, which returns too many results:
```
SELECT *
FROM POSITION, OFFICER
WHERE OrganizationID = '2016-02081-0' AND OFFICER.PosID != POSITION.PosID;
```
Take note that I want to retrieve only the following result
```
TABLE POSITION
PosID PosName
104 Treasurer
105 Auditor
106 Srgt of Arms
```
It should retrieve positions not being used by the organizationID of '2016-02081-0'
|
You can use `NOT EXISTS`:
```
SELECT *
FROM POSITION
WHERE NOT EXISTS (SELECT 1
FROM OFFICER
WHERE OrganizationID = '2016-02081-0' AND
OFFICER.PosID = POSITION.PosID);
```
The above query returns all `POSITION` records not being related to an `OFFICER` record having `OrganizationID = '2016-02081-0'`.
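A runnable sketch of this `NOT EXISTS` approach, using the question's sample data in Python with an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE position (PosID INT, PosName TEXT);
    INSERT INTO position VALUES
        (101,'President'),(102,'Vice President'),(103,'Secretary'),
        (104,'Treasurer'),(105,'Auditor'),(106,'Srgt of Arms');
    CREATE TABLE officer (OfficerID INT, OrganizationID TEXT, Name TEXT, PosID INT);
    INSERT INTO officer VALUES
        (1001,'2016-02081-0','Kris',101),
        (1002,'2016-02081-0','Nitche',102),
        (1003,'2016-02081-0','Russel',103);
""")
# NOT EXISTS keeps only positions with no matching officer row
# for the given organization
unused = [r[0] for r in conn.execute("""
    SELECT PosID FROM position p
    WHERE NOT EXISTS (SELECT 1 FROM officer o
                      WHERE o.OrganizationID = '2016-02081-0'
                        AND o.PosID = p.PosID)
    ORDER BY PosID
""")]
print(unused)  # [104, 105, 106]
```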
|
You could use the `not in` operator:
```
SELECT PosID
FROM position
WHERE PosID NOT IN (SELECT PosID
FROM officer)
```
Or the `not exists` operator:
```
SELECT PosID
FROM position p
WHERE NOT EXISTS (SELECT *
FROM officer o
WHERE p.PosId = o.PosId)
```
|
database - retrieve results for unused positions
|
[
"",
"mysql",
"sql",
"database",
"select",
"database-design",
""
] |
My database looks like:
**Table: dept\_emp**:
```
+--------------------------+-----------------------+------------------------+
| emp_no (Employee Number) | from_date (Hire date) | to_date (Worked up to) |
+--------------------------+-----------------------+------------------------+
| 5 | 1995-02-27 | 2001-01-19 |
| 500 | 1968-01-01 | 9999-01-01 |
+--------------------------+-----------------------+------------------------+
```
Note: If the employee is still currently working for the company their to\_date will show `9999-01-01`.
What I want to do is display the `emp_no` of the longest-working employee. I'm not sure how to do that with the `9999-01-01` sentinel values in the database.
Here's what I've come up with so far:
```
SELECT emp_no
FROM (SELECT max(datediff( (SELECT to_date
FROM dept_emp),
(SELECT from_date
FROM dept_emp)
)
)
);
```
This doesn't work, and it also doesn't take `9999-01-01` into account.
I'm thinking I should use `CURDATE()` in there somewhere?
|
You could try something like this:
```
select
d.*,
datediff(
case when to_date = '9999-01-01' then current_date else to_date end,
from_date) as how_long
from dept_emp d
where
datediff(
case when to_date = '9999-01-01' then current_date else to_date end,
from_date) = (
-- find the longest tenure
select max(datediff(
case when to_date = '9999-01-01' then current_date else to_date end,
from_date))
from dept_emp
)
```
If this is the kind of information you have in your table:
```
create table dept_emp (
emp_no int,
from_date date,
to_date date
);
insert into dept_emp values
(1, '2000-01-01', '2000-01-02'),
(2, '2000-01-01', '2005-02-01'),
(3, '2000-01-01', '9999-01-01');
```
Your result will be:
```
| emp_no | from_date | to_date | how_long |
|--------|---------------------------|---------------------------|----------|
| 3 | January, 01 2000 00:00:00 | January, 01 9999 00:00:00 | 5902 |
```
Example SQLFiddle: <http://sqlfiddle.com/#!9/55886/11>
|
Firstly, I would suggest making `to_date` DEFAULT NULL.
You want to have NULL there if the employee is still working; there is no need for the 9999- sentinel.
Now, to your question about the longest-working employee. You could calculate the date difference like this, substituting today's date for NULL:
```
SELECT emp_no, MAX(DATEDIFF( IFNULL(to_date,CURDATE()) ,from_date)) FROM dept_emp;
```
What we did here: if `to_date` is NULL, meaning the person is still employed, we treat their `to_date` as today's date, which is true.
**EDIT:** I am sorry, I forgot to return the employee number; just add `emp_no` to the query.
**EDIT 2:** Since you are not allowed to use NULL, this is what you should do:
```
SELECT emp_no, MAX(DATEDIFF( IF(to_date='9999-01-01',CURDATE(), to_date) ,from_date)) FROM dept_emp;
```
So basically, we are saying: if the 9999- sentinel is set, use today's date instead. Hope this helps. I assume that no one's `to_date` is going to be later than today's date, other than the 9999- sentinel of course.
**EDIT 3:** You are right about emp\_no, so here it goes:
```
SELECT emp_no, DATEDIFF( IF(to_date='9999-01-01',CURDATE(), to_date)
,from_date) as longest_date FROM dept_emp ORDER BY longest_date DESC LIMIT 0,1;
```
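The sentinel-replacement idea can be sketched in Python with SQLite, where a `julianday()` difference stands in for MySQL's `DATEDIFF()` (the data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept_emp (emp_no INT, from_date TEXT, to_date TEXT)")
conn.executemany("INSERT INTO dept_emp VALUES (?,?,?)", [
    (1, "2000-01-01", "2000-01-02"),
    (2, "2000-01-01", "2005-02-01"),
    (3, "2014-01-01", "9999-01-01"),  # still employed (sentinel date)
])
# Replace the sentinel with today's date before computing the tenure;
# julianday() differences stand in for MySQL's DATEDIFF()
row = conn.execute("""
    SELECT emp_no,
           julianday(CASE WHEN to_date = '9999-01-01' THEN date('now')
                          ELSE to_date END) - julianday(from_date) AS tenure
    FROM dept_emp
    ORDER BY tenure DESC
    LIMIT 1
""").fetchone()
print(row[0])
```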
|
Select longest working employee in SQL
|
[
"",
"mysql",
"sql",
""
] |
For the following tables:
**ROOM**
```
+----+--------+
| ID | NAME |
+----+--------+
| 1 | ROOM_1 |
| 2 | ROOM_2 |
+----+--------+
```
**ROOM\_STATE**
```
+----+---------+------+------------------------+
| ID | ROOM_ID | OPEN | DATE |
+----+---------+------+------------------------+
| 1 | 1 | 1 | 2000-01-01 00:00:00 |
| 2 | 2 | 1 | 2000-01-01 00:00:00 |
| 3 | 2 | 0 | 2000-01-06 00:00:00 |
+----+---------+------+------------------------+
```
Stored data is room with last changed state:
* ROOM\_1 opened at 2000-01-01 00:00:00
* ROOM\_2 opened at 2000-01-01 00:00:00
* ROOM\_2 closed at 2000-01-06 00:00:00
ROOM\_1 is still open, ROOM\_2 is closed (not opened again since 2000-01-06). How do I select the names of the currently open rooms with a join? If I write:
```
SELECT ROOM.NAME
FROM ROOM
INNER JOIN ROOM_STATE ON ROOM.ID = ROOM_STATE.ROOM_ID
WHERE ROOM_STATE.OPEN = 1
```
ROOM\_1 and ROOM\_2 are selected because `ROOM_STATE` with `ID` `2` is `OPEN`.
SQL Fiddle: <http://sqlfiddle.com/#!9/68e8bf/3/0>
|
In Postgres, I would recommend `distinct on`:
```
select distinct on (rs.room_id) r.name, rs.*
from room_state rs join
room r
on rs.room_id = r.id
order by rs.room_id, rs.date desc;
```
`distinct on` is specific to Postgres. It guarantees that the results have only one row for each room (which is what you want). The chosen row is the first row encountered, so this chooses the row with the largest date.
Another fun method is to use a lateral join:
```
select r.*, rs.*
from room r left join lateral
(select rs.*
from room_state rs
where rs.room_id = r.id
order by rs.date desc
fetch first 1 row only
       ) rs on true;
```
|
You can use the following query:
```
SELECT R.ID, R.NAME
FROM ROOM AS R
INNER JOIN ROOM_STATE AS RS ON R.ID = RS.ROOM_ID AND RS.OPEN = 1
LEFT JOIN ROOM_STATE AS RS2 ON R.ID = RS2.ROOM_ID AND RS2.OPEN = 0 AND RS2.DATE > RS.date
WHERE RS2.ID IS NULL
```
[**Demo here**](http://sqlfiddle.com/#!9/68e8bf/21)
This will return all rooms that are related to an 'open' state **and** have no relation to a 'closed' state with a later date than the 'open' state.
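For comparison, a sketch of an equivalent "latest state per room" query using a correlated subquery, in Python with SQLite and the question's sample data (the OPEN column is renamed `is_open` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE room (id INT, name TEXT);
    INSERT INTO room VALUES (1,'ROOM_1'),(2,'ROOM_2');
    CREATE TABLE room_state (id INT, room_id INT, is_open INT, date TEXT);
    INSERT INTO room_state VALUES
        (1,1,1,'2000-01-01 00:00:00'),
        (2,2,1,'2000-01-01 00:00:00'),
        (3,2,0,'2000-01-06 00:00:00');
""")
# Only the most recent state per room counts; a correlated subquery
# picks that row, then the WHERE keeps rooms whose latest state is open
open_rooms = [r[0] for r in conn.execute("""
    SELECT r.name
    FROM room r JOIN room_state rs ON rs.room_id = r.id
    WHERE rs.date = (SELECT MAX(date) FROM room_state
                     WHERE room_id = r.id)
      AND rs.is_open = 1
""")]
print(open_rooms)  # ['ROOM_1']
```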
|
Where clause on joined table
|
[
"",
"sql",
"postgresql",
""
] |
I am trying to do a calculation in a SQL statement: the total amount in the `invoice.total` column per customer. I created the following statement:
```
SELECT customers.firstname, customers.lastname, customers.status, SUM(invoice.total) AS total
FROM customers
INNER JOIN invoice
ON customers.id=invoice.id;
```
When I run this, I get a total amount in the result. I have 15 different customers in this table, but I get only the name of the first customer and the total amount across all customers. What am I doing wrong?
|
First, you need to Group the data when you want to have aggregate results:
```
SELECT customers.firstname, customers.lastname, customers.status, SUM(invoice.total) AS total
FROM customers
INNER JOIN invoice
ON customers.id=invoice.id
GROUP BY customers.firstname, customers.lastname, customers.status;
```
Second, are you sure you are joining the tables by the correct fields? Is `invoice.id` the correct column? I would expect `invoice.id` to be the primary key of that table, and a separate column to be the foreign key, `invoice.customerid` for example. Please double check that it is correct.
UPDATE: As it was mentioned in comments, if you have two customers with same first name, last name and status, the data will be grouped incorrectly. In that case you need to add unique field (e.g. `customers.id`) to `SELECT` and `GROUP BY` statements.
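A minimal sketch of the fix in Python with SQLite, assuming the hypothetical `invoice.customerid` foreign key suggested above (the data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INT, firstname TEXT, lastname TEXT, status TEXT);
    INSERT INTO customers VALUES (1,'Ann','Lee','active'),(2,'Bob','Ray','active');
    CREATE TABLE invoice (customerid INT, total REAL);
    INSERT INTO invoice VALUES (1,10.0),(1,5.0),(2,7.0);
""")
# Without GROUP BY the SUM collapses everything into one row;
# grouping by the customer key gives one total per customer
totals = conn.execute("""
    SELECT c.firstname, SUM(i.total)
    FROM customers c JOIN invoice i ON c.id = i.customerid
    GROUP BY c.id, c.firstname, c.lastname, c.status
    ORDER BY c.id
""").fetchall()
print(totals)  # [('Ann', 15.0), ('Bob', 7.0)]
```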
|
You have to add a GROUP BY on the customer (the id column, for example). If you want first and last name in the SELECT, then you will have to group by them as well.
|
Calculate in SQL with Inner Join
|
[
"",
"mysql",
"sql",
""
] |
In my project I need to query the DB with pagination and give the user the ability to refine the query based on the current search result. I am looking for something like LIMIT, but I am not able to find anything to use with Node.js. My backend is MySQL and I am writing a REST API.
|
You could try something like this (assuming you use [Express](http://expressjs.com/) 4.x).
Use GET parameters (here `page` is the page number you want, and `npp` is the number of results per page).
In this example, query results are set in the `results` field of the response payload, while pagination metadata is set in the `pagination` field.
As for the possibility to query based on current search result, you would have to expand a little, because your question is a bit unclear.
```
var express = require('express');
var mysql = require('mysql');
var Promise = require('bluebird');
var bodyParser = require('body-parser');
var app = express();
var connection = mysql.createConnection({
host : 'localhost',
user : 'myuser',
password : 'mypassword',
database : 'wordpress_test'
});
var queryAsync = Promise.promisify(connection.query.bind(connection));
connection.connect();
// do something when app is closing
// see http://stackoverflow.com/questions/14031763/doing-a-cleanup-action-just-before-node-js-exits
process.stdin.resume()
process.on('exit', exitHandler.bind(null, { shutdownDb: true } ));
app.use(bodyParser.urlencoded({ extended: true }));
app.get('/', function (req, res) {
var numRows;
var queryPagination;
var numPerPage = parseInt(req.query.npp, 10) || 1;
var page = parseInt(req.query.page, 10) || 0;
var numPages;
var skip = page * numPerPage;
// Here we compute the LIMIT parameter for MySQL query
var limit = skip + ',' + numPerPage;
queryAsync('SELECT count(*) as numRows FROM wp_posts')
.then(function(results) {
numRows = results[0].numRows;
numPages = Math.ceil(numRows / numPerPage);
console.log('number of pages:', numPages);
})
.then(() => queryAsync('SELECT * FROM wp_posts ORDER BY ID DESC LIMIT ' + limit))
.then(function(results) {
var responsePayload = {
results: results
};
if (page < numPages) {
responsePayload.pagination = {
current: page,
perPage: numPerPage,
previous: page > 0 ? page - 1 : undefined,
next: page < numPages - 1 ? page + 1 : undefined
}
}
else responsePayload.pagination = {
err: 'queried page ' + page + ' is >= to maximum page number ' + numPages
}
res.json(responsePayload);
})
.catch(function(err) {
console.error(err);
res.json({ err: err });
});
});
app.listen(3000, function () {
console.log('Example app listening on port 3000!');
});
function exitHandler(options, err) {
if (options.shutdownDb) {
console.log('shutdown mysql connection');
connection.end();
}
if (err) console.log(err.stack);
if (options.exit) process.exit();
}
```
Here is the `package.json` file for this example:
```
{
"name": "stackoverflow-pagination",
"dependencies": {
"bluebird": "^3.3.3",
"body-parser": "^1.15.0",
"express": "^4.13.4",
"mysql": "^2.10.2"
}
}
```
|
I took @Benito's solution and tried to make it clearer:
```
var numPerPage = 20;
var skip = (page-1) * numPerPage;
var limit = skip + ',' + numPerPage; // Here we compute the LIMIT parameter for MySQL query
sql.query('SELECT count(*) as numRows FROM users',function (err, rows, fields) {
if(err) {
console.log("error: ", err);
result(err, null);
}else{
var numRows = rows[0].numRows;
var numPages = Math.ceil(numRows / numPerPage);
sql.query('SELECT * FROM users LIMIT ' + limit,function (err, rows, fields) {
if(err) {
console.log("error: ", err);
result(err, null);
}else{
console.log(rows)
result(null, rows,numPages);
}
});
}
});
```
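Stripped of the Express plumbing, the pagination arithmetic itself is just LIMIT/OFFSET plus a ceiling division; a sketch in Python with SQLite (table and values illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO users VALUES (?)", [(i,) for i in range(1, 11)])
num_per_page, page = 3, 2  # zero-based page index
# LIMIT/OFFSET with bound parameters avoids string-building the query
rows = conn.execute("SELECT id FROM users ORDER BY id LIMIT ? OFFSET ?",
                    (num_per_page, page * num_per_page)).fetchall()
num_rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
num_pages = -(-num_rows // num_per_page)  # ceiling division
print([r[0] for r in rows], num_pages)  # [7, 8, 9] 4
```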
|
Pagination in nodejs with mysql
|
[
"",
"mysql",
"sql",
"node.js",
"pagination",
"limit",
""
] |
I need to alter the physical structure of a table.
Combine 2 columns into a single column for the entire table.
E.g
```
ID Code Extension
1 012 8067978
```
Should be
```
ID Num
1 0128067978
```
|
You can just add them together in the select statement:
```
SELECT Column1 + Column2 AS 'CombinedColumn' FROM TABLE
```
To Permanently Add them together:
Step 1. [Add Column](https://msdn.microsoft.com/en-us/library/ms190238.aspx):
```
ALTER TABLE YOUR_TABLE
ADD Combined_Column_Name VARCHAR(15) NULL
```
Step 2. Combine fields
```
UPDATE YOUR_TABLE
SET Combined_Column_Name = Column1 + Column2
```
If you wanted to keep the table intact you could just access the table information through a [view](https://msdn.microsoft.com/en-us/library/ms187956.aspx).
```
CREATE VIEW View_To_Access_Table
AS
SELECT t.Column1, t.Column2, etc....
t.Column1 + t.Column2 AS 'CombinedColumnName'
FROM YOUR_TABLE t
```
You could also create a [computed column](https://msdn.microsoft.com/en-us/library/ms188300.aspx) if you didn't want to create a view:
```
ALTER TABLE YOUR_TABLE
ADD CombinedColumn AS Column1 + Column2
```
|
```
ALTER TABLE DataTable ADD FullNumber VARCHAR(15) NULL
GO
UPDATE DataTable SET FullNumber = ISNULL(Column1, '') + ISNULL(Column2, '')
GO
-- you may have FullNumber as NOT NULL, if the number is mandatory and not null for every record
ALTER TABLE DataTable ALTER COLUMN FullNumber VARCHAR(15) NOT NULL
```
The first step creates the column and the second concatenates the strings, also taking care of NULL values, if any.
Before dropping the old columns, you should consider their usage. If you need either of the numbers in some report, it is harder to split the string than to keep the values stored separately. However, keeping them also implies redundancy (more space, possible consistency problems).
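The same concatenation, sketched in Python with SQLite, where `COALESCE` plays the role of T-SQL's `ISNULL` and `||` replaces `+` (the data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INT, code TEXT, extension TEXT)")
conn.executemany("INSERT INTO data VALUES (?,?,?)",
                 [(1, "012", "8067978"), (2, None, "5551234")])
# COALESCE plays the role of T-SQL's ISNULL: a NULL part would
# otherwise make the whole concatenation NULL
merged = conn.execute("""
    SELECT id, COALESCE(code,'') || COALESCE(extension,'') AS num
    FROM data ORDER BY id
""").fetchall()
print(merged)  # [(1, '0128067978'), (2, '5551234')]
```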
|
Alter table merge columns T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table in MySQL that I need to join with a couple of tables in a different server. The catch is that these other tables are in Informix.
I could make it work by selecting the content of a MySQL table and creating a temp table in Informix with the selected data, but I think in this case it would be too costly.
Is there an optimal way to join MySQL tables with Informix tables?
|
What I ended up doing is manually (that is, from the PHP app) keeping the MySQL tables in sync with their equivalents in Informix, so I didn't need to change older code. This is a temporary solution, given that the older system, which uses Informix, is going to be replaced.
|
I faced a similar problem a number of years ago while developing a Rails app that needed to draw data from both an Informix and a MySQL database. What I ended up doing was using an ORM library that could connect to both databases, thereby abstracting away the fact that the data was coming from two different databases. Not sure if this will end up as a better technique than your proposed temp table solution. A quick google search also brought up [this](https://www.mysql.com/products/connector/), which might be promising.
|
Joining MySQL and Informix tables
|
[
"",
"mysql",
"sql",
"database",
"join",
"informix",
""
] |
I have a saved SELECT query. I need to create an UPDATE query to update a table field with a value from the saved SELECT query.
I'm getting the error "Operation must use an updatable query".
The problem is that the saved SELECT query's result does not contain the primary key.
```
UPDATE [table] INNER JOIN
[saved_select_query]
ON [table].id_field = [saved_select_query].[my_field]
SET [table].[target_field] = [saved_select_query]![source_field];
```
I also tried a SELECT subquery instead of the INNER JOIN, but I get the same error.
|
Perhaps a [DLookUp()](https://support.office.com/en-us/article/DLookup-Function-8896cb03-e31f-45d1-86db-bed10dca5937) will do the trick:
```
UPDATE [table] SET
[target_field] = DLookUp("source_field", "saved_select_query", "my_field=" & id_field)
```
... or, if the joined field is text ...
```
UPDATE [table] SET
[target_field] = DLookUp("source_field", "saved_select_query", "my_field='" & id_field & "'")
```
|
I'm not sure I completely understand what you are asking.
If you are asking what syntax to use when performing an update with an inner join:
```
UPDATE tableAlias
SET Column = Value
FROM table1 as tableAlias
INNER JOIN table2 as table2Alias on tableAlias.key = table2Alias.key
```
|
UPDATE query with inner joined query
|
[
"",
"sql",
"ms-access",
"inner-join",
""
] |
I'm working on a products filter (faceted search) like Amazon's. I have a table with properties (color, RAM, screen) like this:
```
ArticleID PropertyID Value
--------- ---------- ------------
1 1 Black
1 2 8 GB
1 3 15"
2 1 White
2 2 8 GB
3 3 13"
```
I have to select articles depending on what properties are selected. You can select multiple values for one property (for example RAM: 4 GB and 8 GB) and you can select multiple properties (for example RAM and screen size).
I need functionality like this:
```
SELECT ArticleID
FROM ArticlesProperties
WHERE (PropertyID = 2 AND Value IN ('4 GB', '8 GB'))
AND (PropertyID = 3 AND Value IN ('13"'))
```
I used to do that by creating a dynamic query and then executing that query:
```
SELECT ArticleID
FROM ArticlesProperties
WHERE PropertyID = 2 AND Value IN ('4 GB', '8 GB')
INTERSECT
SELECT ArticleID
FROM ArticlesProperties
WHERE PropertyID = 3 AND Value IN ('13"')
```
But I don't think that is a good way; there must be a better solution. There are millions of properties in the table, so optimization is necessary.
A solution should work on SQL Server 2014 Standard Edition, without add-ons or search engines like `solr`.
I am in a pickle, so if someone has an idea or solution, I would really appreciate it. Thanks!
|
I made a snippet showing the lines along which I would work. A good choice of indexes is important to speed up queries; always check the execution plan when tweaking indexes.
Notes:
* The script uses temporary tables, but in essence they're not different from regular tables. Except for `#select_properties`, the temporary tables should become regular tables if you plan to use the way of working as outlined in the script.
* Store the article properties with ID's for property choice values, instead of the actual choice values. This saves you disk space, and memory when these tables are cached by SQL Server. SQL Server will cache tables in memory as much as it can to service select statements faster.
If the article properties table is too big, it's possible that SQL Server will have to do disk IO to execute the select statement and that will surely slow the statement down.
An added benefit is that lookups are on IDs (integers) rather than text (`VARCHAR`s), and integer lookups are a lot faster than string lookups.
* Provide suitable indices on tables to speed up queries. To that end it is a good practice to analyze queries by inspecting the [Actual Execution Plan](https://msdn.microsoft.com/en-us/library/ms189562.aspx).
I've included several such indices in the snippet below. Depending on the number of rows in the article properties table and statistics, SQL Server will choose the best index to speed up the query.
If SQL Server thinks the query is missing a proper index for a SQL statement, the actual execution plan will have an indication saying that you are missing an index. It is good practice that when your queries become slow, to analyze these queries by inspecting the actual execution plan in SQL Server Management Studio.
* The snippet uses a temporary table to specify what properties you are looking for: `#select_properties`. Supply the criteria in that table by inserting the property ID's and property choice value ID's. The final selection query selects articles where at minimum one of the property choice values applies for each property.
You would create this temporary table in the session in which you want to select articles. Then insert the search criteria, fire the select statement and finally drop the temporary table.
---
```
CREATE TABLE #articles(
article_id INT NOT NULL,
article_desc VARCHAR(128) NOT NULL,
CONSTRAINT PK_articles PRIMARY KEY CLUSTERED(article_id)
);
CREATE TABLE #properties(
property_id INT NOT NULL, -- color, size, capacity
property_desc VARCHAR(128) NOT NULL,
CONSTRAINT PK_properties PRIMARY KEY CLUSTERED(property_id)
);
CREATE TABLE #property_values(
property_id INT NOT NULL,
property_choice_id INT NOT NULL, -- eg color -> black, white, red
property_choice_val VARCHAR(128) NOT NULL,
CONSTRAINT PK_property_values PRIMARY KEY CLUSTERED(property_id,property_choice_id),
CONSTRAINT FK_values_to_properties FOREIGN KEY (property_id) REFERENCES #properties(property_id)
);
CREATE TABLE #article_properties(
article_id INT NOT NULL,
property_id INT NOT NULL,
property_choice_id INT NOT NULL
CONSTRAINT PK_article_properties PRIMARY KEY CLUSTERED(article_id,property_id,property_choice_id),
CONSTRAINT FK_ap_to_articles FOREIGN KEY (article_id) REFERENCES #articles(article_id),
CONSTRAINT FK_ap_to_property_values FOREIGN KEY (property_id,property_choice_id) REFERENCES #property_values(property_id,property_choice_id)
);
CREATE NONCLUSTERED INDEX IX_article_properties ON #article_properties(property_id,property_choice_id) INCLUDE(article_id);
INSERT INTO #properties(property_id,property_desc)VALUES
(1,'color'),(2,'capacity'),(3,'size');
INSERT INTO #property_values(property_id,property_choice_id,property_choice_val)VALUES
(1,1,'black'),(1,2,'white'),(1,3,'red'),
(2,1,'4 Gb') ,(2,2,'8 Gb') ,(2,3,'16 Gb'),
(3,1,'13"') ,(3,2,'15"') ,(3,3,'17"');
INSERT INTO #articles(article_id,article_desc)VALUES
(1,'First article'),(2,'Second article'),(3,'Third article');
-- the table you have in your question, slightly modified
INSERT INTO #article_properties(article_id,property_id,property_choice_id)VALUES
(1,1,1),(1,2,2),(1,3,2), -- article 1: color=black, capacity=8gb, size=15"
(2,1,2),(2,2,2),(2,3,1), -- article 2: color=white, capacity=8Gb, size=13"
(3,1,3), (3,3,3); -- article 3: color=red, size=17"
-- The table with the criteria you are selecting on
CREATE TABLE #select_properties(
property_id INT NOT NULL,
property_choice_id INT NOT NULL,
CONSTRAINT PK_select_properties PRIMARY KEY CLUSTERED(property_id,property_choice_id)
);
INSERT INTO #select_properties(property_id,property_choice_id)VALUES
(2,1),(2,2),(3,1); -- looking for '4Gb' or '8Gb', and size 13"
;WITH aid AS (
SELECT ap.article_id
FROM #select_properties AS sp
INNER JOIN #article_properties AS ap ON
ap.property_id=sp.property_id AND
ap.property_choice_id=sp.property_choice_id
GROUP BY ap.article_id
HAVING COUNT(DISTINCT ap.property_id)=(SELECT COUNT(DISTINCT property_id) FROM #select_properties)
-- criteria met when article has a number of properties matching, equal to the distinct number of properties in the selection set
)
SELECT a.article_id,a.article_desc
FROM aid
INNER JOIN #articles AS a ON
a.article_id=aid.article_id
ORDER BY a.article_id;
-- result is the 'Second article' with id 2
DROP TABLE #select_properties;
DROP TABLE #article_properties;
DROP TABLE #property_values;
DROP TABLE #properties;
DROP TABLE #articles;
```
|
`intersect` is likely to work very well.
An alternative approach is to construct a `where` clause and use aggregation and `having`:
```
SELECT ArticleID
FROM ArticlesProperties
WHERE ( PropertyID = 2 AND Value IN ('4 GB', '8 GB') ) OR
( PropertyID = 3 AND Value IN ('13"') )
GROUP BY ArticleId
HAVING COUNT(DISTINCT PropertyId) = 2;
```
However, the `INTERSECT` method might make better use of an index on `ArticlesProperties(PropertyId, Value)`, so try that first to see what performance an alternative would have to beat.
|
SQL Server query by column pair
|
[
"",
"sql",
"sql-server",
"faceted-search",
""
] |
1) Table1, say `table1`, with structure:
```
moduleID | moduleName
10       | XYZ
20       | PQR
30       | ABC
```
2) Table2, say `table2`, with structure:
```
moduleID | Level | Value
10       | 1     | 20
10       | 2     | 30
30       | 3     | 40
10       | 3     | 50
20       | 2     | 30
```
`moduleID` is the primary key in `table1`, and the column `Level` can have values 1 to 3.
Now it is required to display the data as follows :
```
moduleID | moduleName | Level1 | Level2 | Level3
10       | XYZ        | 20     | 30     | 50
20       | PQR        | NULL   | 30     | NULL
30       | ABC        | NULL   | NULL   | 40
```
In simpler terms, the values of column `Level` in `table2` are displayed as Level1, Level2 and Level3, and the value corresponding to each level is populated in the corresponding `moduleID` row.
Any help on this? I'm a beginner in SQL. Does it have something to do with views?
|
You can use *conditional aggregation*:
```
select t1.moduleID, t1.moduleName,
MAX(CASE WHEN Level = 1 THEN Value END) Level1,
MAX(CASE WHEN Level = 2 THEN Value END) Level2,
MAX(CASE WHEN Level = 3 THEN Value END) Level3
from table1 as t1
left join table2 as t2 on t1.moduleID = t2.moduleID
group by t1.moduleID, t1.moduleName
```
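The conditional-aggregation query above can be verified end-to-end; here is a small runnable check using SQLite through Python with the sample data from the question:

```python
import sqlite3

# Verify the conditional-aggregation pivot in SQLite via Python,
# using the sample rows from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1(moduleID INT PRIMARY KEY, moduleName TEXT);
CREATE TABLE table2(moduleID INT, Level INT, Value INT);
INSERT INTO table1 VALUES (10,'XYZ'), (20,'PQR'), (30,'ABC');
INSERT INTO table2 VALUES (10,1,20), (10,2,30), (30,3,40), (10,3,50), (20,2,30);
""")
rows = con.execute("""
SELECT t1.moduleID, t1.moduleName,
       MAX(CASE WHEN Level = 1 THEN Value END) AS Level1,
       MAX(CASE WHEN Level = 2 THEN Value END) AS Level2,
       MAX(CASE WHEN Level = 3 THEN Value END) AS Level3
FROM table1 AS t1
LEFT JOIN table2 AS t2 ON t1.moduleID = t2.moduleID
GROUP BY t1.moduleID, t1.moduleName
ORDER BY t1.moduleID
""").fetchall()
for row in rows:
    print(row)
```

Levels with no matching row come back as NULL, exactly as in the expected output.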
|
Follow this whole process; it will produce your expected result.
```
CREATE TABLE Table1
(moduleName VARCHAR(50),moduleID INT)
GO
--Populate Sample records
INSERT INTO Table1 VALUES('.NET',10)
INSERT INTO Table1 VALUES('Java',20)
INSERT INTO Table1 VALUES('SQL',30)
CREATE TABLE Table2
(moduleID INT,[Level] INT,Value INT)
GO
--Populate Sample records
INSERT INTO Table2 VALUES(10,1,20)
INSERT INTO Table2 VALUES(10,2,30)
INSERT INTO Table2 VALUES(30,3,40)
INSERT INTO Table2 VALUES(10,3,50)
INSERT INTO Table2 VALUES(20,2,30)
INSERT INTO Table2 VALUES(20,4,60)
GO
CREATE VIEW [dbo].[vw_tabledata]
AS
SELECT t1.[moduleID],[moduleName]
,[Level]
,[Value]
FROM [db_Sample].[dbo].[Table2] t2 inner join [db_Sample].[dbo].[Table1] t1 on t1.[moduleID] = t2.[moduleID]
GO
DECLARE @DynamicPivotQuery AS NVARCHAR(MAX)
DECLARE @ColumnName AS NVARCHAR(MAX)
--Get distinct values of the PIVOT Column
SELECT @ColumnName= ISNULL(@ColumnName + ',','')
+ QUOTENAME([Level])
FROM (SELECT DISTINCT [Level] FROM Table2) AS [Level]
--Prepare the PIVOT query using the dynamic column list
SET @DynamicPivotQuery =
N'SELECT moduleID,moduleName, ' + @ColumnName + '
FROM [vw_tabledata]
PIVOT(MAX(Value)
FOR [Level] IN (' + @ColumnName + ')) AS PVTTable'
--Execute the Dynamic Pivot Query
EXEC sp_executesql @DynamicPivotQuery
```
|
Select from two tables based on values of one of the table
|
[
"",
"mysql",
"sql",
""
] |
[](https://i.stack.imgur.com/4NXhl.png)
The problem is: find the employee last names for employees who do not work on any projects.
My solution is:
```
SELECT E.Lname
FROM EMPLOYEE E
WHERE E.Ssn NOT IN (SELECT *
FROM EMPLOYEE E,WORKS_ON W
WHERE E.Ssn = W.Essn);
```
This should subtract the essns in works\_on from the ssns in employee. However, I keep getting the error "Operand should contain 1 column". What does this mean and how can I correct this code?
|
Try this
```
SELECT EMPLOYEE.Lname FROM EMPLOYEE WHERE EMPLOYEE.SSN NOT IN (SELECT DISTINCT WORKS_ON.Essn FROM WORKS_ON);
```
|
The subquery in a NOT IN clause must return exactly one column. Your subquery returns all the columns in the `EMPLOYEE` and `WORKS_ON` tables. You can use NOT EXISTS instead:
```
SELECT E.Lname
FROM EMPLOYEE E
WHERE NOT EXISTS (SELECT 1
FROM WORKS_ON W
WHERE E.Ssn = W.Essn);
```
The `1` could be any scalar, or it could even be NULL.
|
MySQL: using "NOT IN" clause correctly
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have a long query which throws an exception when I execute it.
Query:
```
SELECT HostID,HostName,RackID,HostTypeID,DomainName,RackNumberOfHeightUnits,RackStartHeightUnits
FROM tHosts, tDomains
WHERE tHosts.DomainID=tDomains.DomainID AND (RackID IN ( SELECT tRacks.Name,tRacks.RackID,tRacks.SiteID,tRacks.Description,NumberOfHeightUnits
FROM tDomains, tSites, tRacks
WHERE tDomains.AccountID= tSites.AccountID
AND tSites.SiteID = tRacks.SiteID
AND tSites.SiteID = 2
AND tDomains.AccountID=1 )
AND SiteID IN (SELECT SiteID FROM tSites WHERE SiteID IN (SELECT SiteID FROM tSites WHERE AccountID=1)))AND AccountID=1
```
It is complaining about this subquery:
```
SELECT tRacks.Name,tRacks.RackID,tRacks.SiteID,tRacks.Description,NumberOfHeightUnits
FROM tDomains, tSites, tRacks
WHERE tDomains.AccountID= tSites.AccountID
AND tSites.SiteID = tRacks.SiteID
AND tSites.SiteID = 2
AND tDomains.AccountID=1
```
**The error:** Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
Thanks in advance.
|
With `IN` you must return exactly one column: the column you want to compare against.
Change this
```
...AND (RackID IN ( SELECT tRacks.Name,tRacks.RackID,tRacks.SiteID,tRacks.Description,NumberOfHeightUnits
FROM tDomains, tSites, tRacks ...
```
To this:
```
... AND (RackID IN ( SELECT tRacks.RackID FROM tDomains, tSites, tRacks ...
```
No other column from the subquery can be used "outside" of it.
But - to be honest - the whole query looks - uhm - improvable ...
|
I think you should re-look at your SQL and determine exactly why you think you need to write the query in the way you have, not only for your own sanity when debugging it, but because it seems that you could simplify this query a great deal if there was a little more understanding of what was going on.
From your SQL it seems like you want all host, domain and rack details for a given account (with ID 1) and site (with ID 2)
When you write your query with a comma-separated list of tables in your select, it's a) straight away more difficult to read and b) more likely to confuse another developer later down the line who has to come and amend your query. Your first select would be re-written as:
```
SELECT (columns)
FROM tHosts
INNER JOIN tDomains ON tDomains.DomainID = tHosts.DomainID
```
You then want to join to find the rack details for the site with ID 2 and account with ID 1. Your tDomains and tSites have common AccountID columns, so you can join on those:
```
INNER JOIN tSites ON tSites.AccountID = tDomains.AccountID
```
and your tRacks and tSites have a common SiteID column so you can join on those:
```
INNER JOIN tRacks ON tRacks.SiteID = tSites.SiteID
```
you can then apply your where clause to filter the results down to your required criteria:
```
WHERE tDomains.AccountID = 1
AND tSites.SiteID = 2
```
You now have the following query:
```
SELECT HostID
, HostName
, RackID
, HostTypeID
, DomainName
, RackNumberOfHeightUnits
, RackStartHeightUnits
FROM tHosts
INNER JOIN tDomains ON tDomains.DomainID = tHosts.DomainID
INNER JOIN tSites ON tSites.AccountID = tDomains.AccountID
INNER JOIN tRacks ON tRacks.SiteID = tSites.SiteID
WHERE tDomains.AccountID = 1
AND tSites.SiteID = 2
```
The final line in your SQL seems unnecessary, as you are selecting the site ids for the account with ID 1 again (and you've already filtered to those racks anyway in your inner select).
There may be something missing from this, as it's hard to understand your exact domain without seeing the table definitions, but it seems likely you can improve the readability and, more importantly, the performance of your query with a few changes.
|
Introduce sub query with EXISTS
|
[
"",
"sql",
"sql-server",
"exists",
""
] |
I have a table with FirstName and LastName.
```
FirstName LastName
John Smith
John Taylor
Steve White
Adam Scott
Jane Smith
Jane Brown
```
I want to select rows whose LastName is not "Smith". If a FirstName matches a "Smith" row, don't use any rows with that same FirstName.
Output Result
```
FirstName LastName
Steve White
Adam Scott
```
Notice "John Taylor" and "Jane Brown" aren't in the result, because the other John and Jane name contain Smith.
My current query (includes John Taylor and Jane Brown):
```
Select FirstName, LastName
From tablPerson
where LastName != "Smith"
```
|
You can do this with `not exists`:
```
select p.*
from tablPerson p
where not exists (select 1
from tablPerson p2
where p2.LastName = 'Smith' and p2.FirstName = p.FirstName
);
```
I prefer `not exists` for this type of query because it has more intuitive behavior if any first names are `NULL`. If any of the first names for `'Smith'` are `NULL`, then `not in` returns the empty set.
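The NULL behaviour mentioned above is easy to demonstrate; here is a small sketch in SQLite via Python, with one extra Smith row (a NULL first name) added to the question's sample data:

```python
import sqlite3

# Once the subquery returns a NULL first name, NOT IN filters out every
# row, while NOT EXISTS still behaves as expected.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tablPerson(FirstName TEXT, LastName TEXT)")
con.executemany("INSERT INTO tablPerson VALUES (?, ?)",
                [("John", "Smith"), ("John", "Taylor"), ("Steve", "White"),
                 ("Adam", "Scott"), ("Jane", "Smith"), ("Jane", "Brown"),
                 (None, "Smith")])  # the extra Smith row with a NULL first name
not_in = con.execute("""
SELECT FirstName FROM tablPerson
WHERE FirstName NOT IN (SELECT FirstName FROM tablPerson WHERE LastName = 'Smith')
""").fetchall()
not_exists = con.execute("""
SELECT p.FirstName FROM tablPerson p
WHERE NOT EXISTS (SELECT 1 FROM tablPerson p2
                  WHERE p2.LastName = 'Smith' AND p2.FirstName = p.FirstName)
""").fetchall()
print(not_in)      # [] -- the NULL poisons NOT IN
print(not_exists)  # Steve and Adam (plus the NULL-name row itself)
```

Note that `NOT EXISTS` also returns the NULL-name row itself, since its first name equals nothing.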
|
You need to use a sub query to filter out all the persons whose last name contains Smith, as mentioned below:
```
select first_name, last_name from person
where first_name not in (
select first_name from person where last_name like '%Smith%');
```
Here's the example [SQL Fiddle](http://sqlfiddle.com/#!9/2b250/2/0).
|
SQL Select First Name and Doesn't Contain Last Name Matching
|
[
"",
"mysql",
"sql",
""
] |
It seems I can't figure out how to swap pairs of digits (bytes) within a varchar string. Example:
`string : 6806642004683587 (varchar)`
`end : 8660460240865378`
It should work like this: split the string into pairs, 68 06 64 20 04 68 35 87, and flip each pair, giving 86 60 46 ...
Is there a function in SQL that does BCD string manipulation?
Thank you
|
You could wrap this in a function, but here's the basic code:
```
DECLARE @VAL NVARCHAR(16) = N'6806642004683587';
DECLARE @OUT NVARCHAR(16);
;WITH A(N, S) AS (
SELECT 1 N, SUBSTRING(@VAL, 1, 2) S
UNION ALL
SELECT N+2 N, SUBSTRING(@VAL, N+2, 2) S FROM A WHERE N+2 < LEN(@VAL)
)
SELECT @OUT = COALESCE(@OUT + '', '') + REVERSE(S) FROM A;
SELECT @VAL, @OUT;
---------------- ----------------
6806642004683587 8660460240865378
```
|
Can use a `while` loop.
**Query**
```
declare @str as varchar(max);
declare @len as int;
declare @i as int;
declare @ii as int;
declare @res as varchar(max);
declare @res1 as varchar(max);
set @str = '6806642004683587';
set @len = len(@str) / 2;
set @i = 1;
set @ii = 1;
set @res = '';
while @len >= @i
begin
set @res1 = substring(@str, @ii, 2)
set @ii = @ii + 2;
set @i = @i + 1;
set @res += reverse(@res1)
end
select @str as [actual string], @res as [updated string];
```
**Result**
```
+------------------+------------------+
| actual string | updated string |
+------------------+------------------+
| 6806642004683587 | 8660460240865378 |
+------------------+------------------+
```
If the string len is an odd number, and if you need to concatenate the last single character also. Then change `while @len >= @i` to `while @len >= @i - 1`.
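For comparison, the same pair-swap logic fits in a few lines of client-side Python, which is handy for verifying the T-SQL output:

```python
# Same pair-swap logic as the T-SQL answers above, in plain Python.
def swap_pairs(s: str) -> str:
    # reverse every 2-character chunk; a trailing odd character stays as-is
    return "".join(s[i:i + 2][::-1] for i in range(0, len(s), 2))

print(swap_pairs("6806642004683587"))  # 8660460240865378
print(swap_pairs("123"))               # 213 (odd length: last digit kept)
```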
|
mssql bcd string manipulation - swap 2 digits in a string
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I've got a **date** field.
**For example :**
```
01/03/2016 09:40:35
```
I would like to know if this date is from **Today**.
|
`01/03/2016 09:40:35` is not a date, it is displayed in a format you want to see. It will be a date if you convert it using **TO\_DATE**.
To know if the date part is current date, you need to compare it with **SYSDATE**.
For example,
```
SQL> SELECT
2 CASE
3 WHEN TRUNC(to_date('01/03/2016 09:40:35', 'dd/mm/yyyy hh24:mi:ss')) = TRUNC(SYSDATE)
4 THEN 'Today'
5 ELSE 'Not Today'
6 END date_check
7 FROM dual;
DATE_CHECK
----------
Today
SQL>
```
|
You can compare your date value with `TRUNC(SYSDATE)` or `TRUNC(SYSTIMESTAMP)`, for example.
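The same truncate-then-compare idea carries over to other databases; here is a quick sketch in SQLite via Python, where `date(...)` plays the role of Oracle's `TRUNC`:

```python
import sqlite3

# "Is this timestamp from today?" -- truncate both sides to the day
# before comparing, as the Oracle TRUNC(...) = TRUNC(SYSDATE) answer does.
con = sqlite3.connect(":memory:")
today = con.execute(
    "SELECT CASE WHEN date(datetime('now')) = date('now') "
    "THEN 'Today' ELSE 'Not Today' END").fetchone()[0]
past = con.execute(
    "SELECT CASE WHEN date('2016-03-01 09:40:35') = date('now') "
    "THEN 'Today' ELSE 'Not Today' END").fetchone()[0]
print(today, past)  # Today Not Today
```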
|
Know if a date is from today in Oracle
|
[
"",
"sql",
"oracle",
"date",
""
] |
I have a 24-hour time stored as `last_time` in the database and I need to get the difference between the current time and `last_time` in minutes. I have searched a lot, but in all cases the time difference is taken between two dates. Please tell me how to use the `DateDiff` function correctly with a 24-hour formatted time.
|
For time difference is minutes
```
SELECT DATEDIFF(mi, last_time, CAST(getdate() as time)) from <TABLE_NAME>
```
For time difference in HH:MM:SS
```
declare @null time
SET @null = '00:00:00';
SELECT DATEADD(SECOND, - DATEDIFF(SECOND, last_time, CAST(getdate() as time)), @null) from <TABLE_NAME>
```
For MYSQL
```
SELECT TIMEDIFF(last_time, cast( now() as time)) as diff
```
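For completeness, the same minute calculation in SQLite (which has no `DATEDIFF`), run here through Python. `strftime('%s', ...)` anchors a bare `'HH:MM:SS'` value on 2000-01-01, so two times of day can be subtracted as epoch seconds; the literals below are examples:

```python
import sqlite3

# Minutes between two 24h times using strftime('%s', ...) in SQLite.
con = sqlite3.connect(":memory:")
mins = con.execute(
    "SELECT (strftime('%s', '10:30:00') - strftime('%s', '09:40:35')) / 60"
).fetchone()[0]
print(mins)  # 49 whole minutes between 09:40:35 and 10:30:00
# against the current time, use time('now') in place of one of the literals
```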
|
Try this:
```
SELECT DATEDIFF(mi, last_time, CONVERT(varchar(10), GETDATE(), 108))
```
|
How to take time difference in minutes from SQL Server using 24 hour time?
|
[
"",
"mysql",
"sql",
""
] |
I have an XML file that I am passing into a stored procedure.
I also have a table. The table has the columns VehicleReg | XML | ProcessedDate
My XML comes in like so:
```
<vehicles>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
</vehicles>
```
What I need to do is read the xml and insert the vehiclereg and the full vehicle xml string into each row (the dateprocessed is a getdate() so not a problem).
I was working on something like below but had no luck:
```
DECLARE @XmlData XML
Set @XmlData = EXAMPLE XML
SELECT T.Vehicle.value('(vehiclereg)[1]', 'NVARCHAR(10)') AS vehiclereg,
T.Vehicle.value('.', 'NVARCHAR(MAX)'),
GETDATE()
FROM @XmlData.nodes('Vehicles/Vehicle') AS T(Vehicle)
```
I was wondering if someone could point me in the right direction?
Regards
|
Full query as you want:
```
DECLARE @XmlData XML
Set @XmlData = '<vehicles>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
</vehicles>'
SELECT T.Vehicle.value('./vehiclereg[1]', 'NVARCHAR(10)') AS vehiclereg,
T.Vehicle.query('.'),
GETDATE()
FROM @XmlData.nodes('/vehicles/vehicle') AS T(Vehicle)
```
[](https://i.stack.imgur.com/MrrqF.jpg)
|
Just need to remember XML is case sensitive. You had:
```
FROM @XmlData.nodes('Vehicles/Vehicle') AS T(Vehicle)
```
but you should have had:
```
FROM @XmlData.nodes('/vehicles/vehicle') AS T(Vehicle)
```
Also as TT pointed out there was no column named `Registration`
This should do it:
```
DECLARE @XmlData XML
Set @XmlData = '<vehicles>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
</vehicles>'
SELECT Vehicle.value('(vehiclereg)[1]', 'NVARCHAR(10)') AS vehiclereg,
Vehicle.value('.', 'NVARCHAR(MAX)'),
GETDATE()
FROM @XmlData.nodes('/vehicles/vehicle') AS T(Vehicle)
```
Result:
[](https://i.stack.imgur.com/7CLTq.png)
This would return XML:
```
DECLARE @XmlData XML
Set @XmlData = '<vehicles>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
</vehicles>'
SELECT T.Vehicle.value('(vehiclereg)[1]', 'NVARCHAR(10)') AS vehiclereg,
T.Vehicle.query('.'),
GETDATE()
FROM @XmlData.nodes('vehicles/vehicle') AS T(Vehicle)
```
Result:
[](https://i.stack.imgur.com/uRnOl.png)
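To see what the T-SQL `.nodes()`/`.value()`/`.query()` calls extract, the same shredding can be done client-side with Python's ElementTree (same sample XML as above):

```python
import xml.etree.ElementTree as ET

# Client-side equivalent of shredding the <vehicle> nodes:
# findtext() plays the role of .value(), tostring() of .query('.').
xml = """<vehicles>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
<vehicle>
<vehiclereg>AB12CBE</vehiclereg>
<anotherprop>BLAH</anotherprop>
</vehicle>
</vehicles>"""
rows = [(v.findtext("vehiclereg"), ET.tostring(v, encoding="unicode").strip())
        for v in ET.fromstring(xml).iter("vehicle")]
for reg, fragment in rows:
    print(reg)       # the registration value
    print(fragment)  # the whole <vehicle> node as a string
```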
|
Insert XML Node into a SQL column in a table
|
[
"",
"sql",
"sql-server",
"xml",
""
] |
Is it possible to select a numerical series or date series in SQL? For example, to produce N rows like 1 to 10:
```
1
2
3
...
10
```
or
```
2010-01-01
2010-02-01
...
2010-12-01
```
|
If you install [common\_schema](https://common-schema.googlecode.com/svn/trunk/common_schema/doc/html/introduction.html), you can use the `numbers` table to easily create queries to output those types of ranges.
For example, these 2 queries will produce the output from your examples:
```
select n
from common_schema.numbers
where n between 1 and 10
order by n
select ('2010-01-01' + interval n month)
from common_schema.numbers
where n between 0 and 11
order by n
```
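If installing a helper schema is not an option, a recursive CTE generates the same series (supported by SQLite 3.8.3+ and MySQL 8+); a runnable sketch in SQLite via Python:

```python
import sqlite3

# Generate 1..10 and the monthly dates of 2010 with recursive CTEs.
con = sqlite3.connect(":memory:")
nums = con.execute("""
WITH RECURSIVE numbers(n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM numbers WHERE n < 10
)
SELECT n FROM numbers
""").fetchall()
months = con.execute("""
WITH RECURSIVE months(d) AS (
    SELECT date('2010-01-01')
    UNION ALL
    SELECT date(d, '+1 month') FROM months WHERE d < '2010-12-01'
)
SELECT d FROM months
""").fetchall()
print(nums[0], nums[-1])      # (1,) (10,)
print(months[0], months[-1])  # ('2010-01-01',) ('2010-12-01',)
```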
|
An SQL solution:
```
SELECT *
FROM (
SELECT 1 as id
UNION SELECT 2
UNION SELECT 3
UNION SELECT 4
UNION SELECT 5
) AS t
```
|
Is somehow possible to create select a series in mysql?
|
[
"",
"mysql",
"sql",
"date",
"range",
""
] |
I have a query based on a date which gets me the data I need for a given day (let's say sysdate-1):
```
SELECT TO_CHAR(START_DATE, 'YYYY-MM-DD') "DAY",
TO_CHAR(TRUNC(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),3600)/60),'FM00') || ':'
|| TO_CHAR(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),60),'FM00') "DURATION (mm:ss)"
FROM UI.UIS_T_DIFFUSION
WHERE APPID IN ('INT', 'OUT', 'XMD','ARPUX')
AND PSTATE = 'OK'
AND TO_CHAR(START_DATE, 'DD-MM-YYYY') = TO_CHAR( sysdate-1, 'DD-MM-YYYY')
AND ROWNUM <= 22
GROUP BY TO_CHAR(START_DATE, 'YYYY-MM-DD');
```
Gives me this (as expected):
```
╔════════════╦══════════╗
║ DAY ║ DURATION ║
╠════════════╬══════════╣
║ 2016-02-28 ║ 303║
╚════════════╩══════════╝
```
Now I'm trying to add a loop to get the results for each day since 10-10-2015. Something like this:
```
╔═══════════╦══════════╗
║ DAY ║ DURATION ║
╠═══════════╬══════════╣
║ 2016-02-28║ 303║
╠═══════════╬══════════╣
║ 2016-02-27║ 294║
╠═══════════╬══════════╣
║ ...║ ...║
╠═══════════╬══════════╣
║ 2015-10-10║ 99║
╚═══════════╩══════════╝
```
I've tried to put the query inside a loop:
```
DECLARE
i NUMBER := 0;
BEGIN
WHILE i <= 142
LOOP
i := i+1;
SELECT TO_CHAR(START_DATE, 'YYYY-MM-DD') "DAY",
TO_CHAR(TRUNC(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),3600)/60),'FM00') || ':'
|| TO_CHAR(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),60),'FM00') "DURATION (mm:ss)"
FROM UI.UIS_T_DIFFUSION
WHERE APPID IN ('INT', 'OUT', 'XMD','ARPUX')
AND PSTATE = 'OK'
AND TO_CHAR(START_DATE, 'DD-MM-YYYY') = TO_CHAR(sysdate - i, 'DD-MM-YYYY')
AND ROWNUM <= 22
GROUP BY TO_CHAR(START_DATE, 'YYYY-MM-DD');
END LOOP;
END;
```
but I'm getting this error:
```
Error report -
ORA-06550: line 7, column 5:
PLS-00428: an INTO clause is expected in this SELECT statement
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
```
Can anyone tell me how to accomplish this?
|
While bastihermann gave you the query to get all of those values in a single result set, if you want to understand the issue with your PL/SQL block, the following should clarify it for you. The error relates to the fact that, in PL/SQL, you need to SELECT INTO local variables to hold the data for reference within the code.
To correct (and simplify with a FOR LOOP) your block:
```
DECLARE
l_day varchar2(12);
  l_duration varchar2(30);
BEGIN
-- don't need to declare a variable for an integer counter in a for loop
For i in 1..142
LOOP
SELECT TO_CHAR(START_DATE, 'YYYY-MM-DD'),
TO_CHAR(TRUNC(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),3600)/60),'FM00') || ':'
|| TO_CHAR(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),60),'FM00')
INTO l_Day, l_duration
FROM UI.UIS_T_DIFFUSION
WHERE APPID IN ('INT', 'OUT', 'XMD','ARPUX')
AND PSTATE = 'OK'
AND TO_CHAR(START_DATE, 'DD-MM-YYYY') = TO_CHAR(sysdate - i, 'DD-MM-YYYY')
AND ROWNUM <= 22
GROUP BY TO_CHAR(START_DATE, 'YYYY-MM-DD');
-- and here you would do something with those returned values, or there isn't much point to this loop.
END LOOP;
END;
```
Assuming you need to do something with those values and want something even more efficient, you could simplify further with a cursor loop:
```
BEGIN
-- don't need to declare a variable for an integer counter in a for loop
For i_record IN
(SELECT TO_CHAR(START_DATE, 'YYYY-MM-DD') the_Day,
TO_CHAR(TRUNC(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),3600)/60),'FM00') || ':'
|| TO_CHAR(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),60),'FM00') the_duration
FROM UI.UIS_T_DIFFUSION
WHERE APPID IN ('INT', 'OUT', 'XMD','ARPUX')
AND PSTATE = 'OK'
    AND START_DATE >= TRUNC(sysdate - 142)
    AND START_DATE < TRUNC(sysdate) + 1
AND ROWNUM <= 22
GROUP BY TO_CHAR(START_DATE, 'YYYY-MM-DD')
ORDER BY to_char(start_date,'dd-mm-yyyy')
)
LOOP
-- and here you would do something with those returned values, but reference them by record_name.field_value.
-- For now I will put in the NULL; command to let this compile as a loop must have at least one command inside.
NULL;
END LOOP;
END;
```
Hope that helps
|
First you need a "date generator"
```
select trunc(sysdate - level) as my_date
from dual
connect by level <= sysdate - to_date('10-10-2015','dd-mm-yyyy')
MY_DATE
----------
2016/02/28
2016/02/27
2016/02/26
....
....
2015/10/12
2015/10/11
2015/10/10
142 rows selected
```
and then you need to plug this generator into your query.
If you are using Oracle 12c this is very easy with the help of a **lateral inline view**:
```
SELECT *
FROM (
select trunc(sysdate - level) as my_date
from dual
connect by level <= sysdate - to_date('10-10-2015','dd-mm-yyyy')
) date_generator,
LATERAL (
/* your query goes here */
SELECT TO_CHAR(START_DATE, 'YYYY-MM-DD') "DAY",
....
AND START_DATE >= date_generator.my_date
AND START_DATE < date_generator.my_date + 1
AND ROWNUM <= 22
    GROUP BY TO_CHAR(START_DATE, 'YYYY-MM-DD')
)
```
If you are on Oracle 11 or 10, it is still possible, but more complicated:
```
SELECT TO_CHAR(START_DATE, 'YYYY-MM-DD') "DAY",
TO_CHAR(TRUNC(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),3600)/60),'FM00') || ':'
|| TO_CHAR(MOD(ROUND(AVG((END_DATE - START_DATE)*86400),0),60),'FM00') "DURATION (mm:ss)"
FROM (
SELECT t.* ,
       row_number() over (partition by date_generator.my_date order by t.START_DATE) rn
FROM UI.UIS_T_DIFFUSION t
JOIN (
select trunc(sysdate - level) as my_date
from dual
connect by level <= sysdate - to_date('10-10-2015','dd-mm-yyyy')
) date_generator
     ON ( t.START_DATE >= date_generator.my_date
      AND t.START_DATE < date_generator.my_date + 1 )
WHERE APPID IN ('INT', 'OUT', 'XMD','ARPUX')
AND PSTATE = 'OK'
)
WHERE rn <= 22
GROUP BY TO_CHAR(START_DATE, 'YYYY-MM-DD');
```
---
First remark - since you are not using an ORDER BY clause, your query with `WHERE rownum <= 22` is not deterministic - it may return different results on each run, because it picks 22 rows from the table according to their physical order. The physical order of rows can change over time, and Oracle doesn't guarantee any order unless an `ORDER BY` clause is used, so your query returns effectively random results.
---
Second remark - never use this:
```
AND TO_CHAR(START_DATE, 'DD-MM-YYYY') = TO_CHAR(sysdate - i, 'DD-MM-YYYY')
```
this prevents the database from using indices, and this may cause performance problems.
Use this instead:
```
START_DATE >= trunc(sysdate - i) AND START_DATE < trunc(sysdate - i + 1)
```
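The reason the range predicate above is preferred is easy to demonstrate: `'DD-MM-YYYY'` strings do not sort chronologically, whereas a half-open range on real (here ISO-formatted) date values does. A small check in SQLite via Python with invented sample dates:

```python
import sqlite3

# String order of 'DD-MM-YYYY' values is chronologically wrong:
print('02-01-2016' < '31-12-2015')  # True -- '0' sorts before '3'

# A half-open range on proper date values filters correctly:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(start_date TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("2015-12-31",), ("2016-01-02",), ("2016-02-28",)])
rows = con.execute("""
SELECT start_date FROM t
WHERE start_date >= '2016-01-01' AND start_date < '2016-02-01'
""").fetchall()
print(rows)  # [('2016-01-02',)] -- only the January row
```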
|
Loop query with a variable in Oracle
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I retrieve data by joining multiple tables as indicated in the image below. As there is no data in the FK column (EmployeeID) of the Event table, I have to use the CardNo (nvarchar) fields in order to join the two tables. However, since the numbers of digits in the CardNo fields of the Event and Employee tables differ, I also have to use the RIGHT function of SQL Server, and this makes the query execute approximately 10 times slower. So, in this situation, what should I do? Can I use the CardNo field without changing its data type to int, etc.? (Other problems might appear after changing it, so it would be better to find a solution without changing the data type.) The execution plan of the query is also shown below.
**Query:**
```
; WITH a AS (SELECT emp.EmployeeName, emp.Status, dep.DeptName, job.JobName, emp.CardNo
FROM TEmployee emp
LEFT JOIN TDeptA AS dep ON emp.DeptAID = dep.DeptID
LEFT JOIN TJob AS job ON emp.JobID = job.JobID),
b AS (SELECT eve.EventID, eve.EventTime, eve.CardNo, evt.EventCH, dor.DoorName
FROM TEvent eve LEFT JOIN TEventType AS evt ON eve.EventType = evt.EventID
LEFT JOIN TDoor AS dor ON eve.DoorID = dor.DoorID)
SELECT * FROM b LEFT JOIN a ON RIGHT(a.CardNo, 8) = RIGHT(b.CardNo, 8)
ORDER BY b.EventID ASC
```
[](https://i.stack.imgur.com/GuPJg.png)
[](https://i.stack.imgur.com/MFjqJ.png)
|
Many thanks for all of your help. With the help of your answers, I managed to reduce the query execution time from 2 minutes to 1 in the first step, by using computed columns. After that, by creating an index on these columns, I managed to reduce the execution time to 3 seconds. Wow, it is really perfect :)
Here are the steps, posted for those who suffer from a similar problem:
**Step I:** Adding computed columns to the tables (As CardNo fields are nvarchar data type, I specify data type of computed columns as int):
```
ALTER TABLE TEvent ADD CardNoRightEight AS RIGHT(CAST(CardNo AS int), 8)
ALTER TABLE TEmployee ADD CardNoRightEight AS RIGHT(CAST(CardNo AS int), 8)
```
**Step II:** Create index for the computed columns in order to execute the query faster:
```
CREATE INDEX TEmployee_CardNoRightEight_IDX ON TEmployee (CardNoRightEight)
CREATE INDEX TEvent_CardNoRightEight_IDX ON TEvent (CardNoRightEight)
```
**Step 3:** Update the query by using the computed columns in it:
```
; WITH a AS (
SELECT emp.EmployeeName, emp.Status, dep.DeptName, job.JobName, emp.CardNoRightEight --emp.CardNo
FROM TEmployee emp
LEFT JOIN TDeptA AS dep ON emp.DeptAID = dep.DeptID
LEFT JOIN TJob AS job ON emp.JobID = job.JobID
),
b AS (
SELECT eve.EventID, eve.EventTime, evt.EventCH, dor.DoorName, eve.CardNoRightEight --eve.CardNo
FROM TEvent eve
LEFT JOIN TEventType AS evt ON eve.EventType = evt.EventID
LEFT JOIN TDoor AS dor ON eve.DoorID = dor.DoorID)
SELECT * FROM b LEFT JOIN a ON a.CardNoRightEight = b.CardNoRightEight --ON RIGHT(a.CardNo, 8) = RIGHT(b.CardNo, 8)
ORDER BY b.EventID ASC
```
|
You can add a computed column to your table like this:
```
ALTER TABLE TEmployee -- Don't start your table names with prefixes, you already know they're tables
ADD CardNoRight8 AS RIGHT(CardNo, 8) PERSISTED
ALTER TABLE TEvent
ADD CardNoRight8 AS RIGHT(CardNo, 8) PERSISTED
CREATE INDEX TEmployee_CardNoRight8_IDX ON TEmployee (CardNoRight8)
CREATE INDEX TEvent_CardNoRight8_IDX ON TEvent (CardNoRight8)
```
You don't need to persist the column since it already matches the criteria for a computed column to be indexed, but adding the `PERSISTED` keyword shouldn't hurt and might help the performance of other queries. It will cause a minor performance hit on updates and inserts, but that's probably fine in your case unless you're importing a lot of data (millions of rows) at a time.
The better solution though is to make sure that your columns that are supposed to match actually match. If the right 8 characters of the card number are something meaningful, then they shouldn't be part of the card number, they should be another column. If this is an issue where one table uses leading zeroes and the other doesn't then you should fix that data to be consistent instead of putting together work arounds like this.
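The computed-column-plus-index idea sketched above also exists in other engines; here is a rough equivalent using SQLite's expression indexes (SQLite 3.9+) through Python, where `substr(CardNo, -8)` plays the role of `RIGHT(CardNo, 8)`. Table names and sample rows are invented:

```python
import sqlite3

# Index the RIGHT-8 substring of the card number and join on it.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee(EmployeeName TEXT, CardNo TEXT);
CREATE TABLE Event(EventID INT, CardNo TEXT);
INSERT INTO Employee VALUES ('Alice', '0012345678'), ('Bob', '0087654321');
INSERT INTO Event VALUES (1, '12345678'), (2, '87654321');
-- substr(CardNo, -8) is SQLite's spelling of RIGHT(CardNo, 8)
CREATE INDEX IX_Employee_Card8 ON Employee(substr(CardNo, -8));
CREATE INDEX IX_Event_Card8    ON Event(substr(CardNo, -8));
""")
rows = con.execute("""
SELECT e.EventID, emp.EmployeeName
FROM Event e
JOIN Employee emp ON substr(emp.CardNo, -8) = substr(e.CardNo, -8)
ORDER BY e.EventID
""").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob')]
```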
|
What if the column to be indexed is nvarchar data type in SQL Server?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
"indexing",
""
] |
I am currently learning sqlite and I've been working with sqlite manager so far.
I have several tables and want to select the names of all projects that 3 or more people have worked on.
I have my project table which looks like this:
```
CREATE TABLE "Project"
("Project-ID" INTEGER PRIMARY KEY NOT NULL , "Name" TEXT, "Year" INTEGER)
```
And I have the relation that specifies which people work on which project:
```
CREATE TABLE "Works_on"
("User" TEXT, "Project-ID" INTEGER, FOREIGN KEY(User) REFERENCES People(User),
FOREIGN KEY("Project-ID") REFERENCES Project("Project-ID"), PRIMARY KEY(User, "Project-ID"))
```
So in the simple view (sadly I can not upload Images) you have something like this in the relation "Works\_on":
```
User | Project-ID
-------+-----------
Greg | 1
Daniel | 1
Daniel | 2
Daniel | 3
Jeny | 3
Mark | 3
Mark | 1
```
Now I need to select the names of the projects that 3 or more people are working on; this means I need the names of projects 3 and 1.
So far I have tried to use count(), but I cannot figure out how to get the names:
```
SELECT Project-ID, count(Project-ID)
FROM Works_on
WHERE Project-ID >= 3
```
|
Try this:
```
SELECT t1."Project-ID", t1.Name
FROM Project AS t1
JOIN (
    SELECT "Project-ID"
    FROM Works_on
    GROUP BY "Project-ID"
    HAVING COUNT(*) >= 3
) AS t2 ON t1."Project-ID" = t2."Project-ID"
```
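Here is the same approach run against the question's sample data, in SQLite via Python (the project names are invented, and the hyphenated column name must stay double-quoted):

```python
import sqlite3

# Projects with 3 or more workers: GROUP BY + HAVING COUNT(*), then join.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Project("Project-ID" INTEGER PRIMARY KEY, Name TEXT, Year INTEGER);
CREATE TABLE Works_on(User TEXT, "Project-ID" INTEGER);
INSERT INTO Project VALUES (1, 'Alpha', 2015), (2, 'Beta', 2015), (3, 'Gamma', 2016);
INSERT INTO Works_on VALUES ('Greg',1), ('Daniel',1), ('Daniel',2),
                            ('Daniel',3), ('Jeny',3), ('Mark',3), ('Mark',1);
""")
rows = con.execute("""
SELECT p."Project-ID", p.Name
FROM Project AS p
JOIN (SELECT "Project-ID"
      FROM Works_on
      GROUP BY "Project-ID"
      HAVING COUNT(*) >= 3) AS w
  ON p."Project-ID" = w."Project-ID"
ORDER BY p."Project-ID"
""").fetchall()
print(rows)  # projects 1 and 3 have three workers each
```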
|
You need join and group by with a having clause like this:
```
SELECT t."Project-ID", t.Name
FROM Project t
INNER JOIN Works_on s
  ON (t."Project-ID" = s."Project-ID")
GROUP BY t."Project-ID", t.Name
HAVING COUNT(*) > 2
```
|
SQLite Query Display Name with WHERE condition - Multiple Tables
|
[
"",
"sql",
"sqlite",
""
] |
I'm practicing some SQL and I have to retrieve from the database all the information about employees whose (sal+comm) > 1700. I know this can be done with a simple WHERE expression, but we are practicing the CASE statement, so we need to do it that way.
I already know how a regular CASE works to assign new row values depending on other row values, but I don't understand how to select rows depending on a condition using a CASE expression.
This is what I have achieved so far:
```
SELECT
CASE
WHEN (sal+comm) > 1700 THEN empno
END AS empno
FROM emp;
```
I know it's wrong, but I'm stuck here. Any help or any resources to search and read would be appreciated, thanks!
|
```
SELECT *
FROM emp
WHERE (CASE WHEN (sal+comm) > 1700 THEN 1
ELSE 0
END) = 1
```
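A quick runnable check of this pattern, using SQLite through Python with a few invented emp rows:

```python
import sqlite3

# CASE in the WHERE clause: keep rows where the CASE evaluates to 1.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp(empno INT, sal INT, comm INT)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(1, 1500, 300), (2, 1000, 200), (3, 2000, 0)])
rows = con.execute("""
SELECT empno
FROM emp
WHERE (CASE WHEN sal + comm > 1700 THEN 1 ELSE 0 END) = 1
ORDER BY empno
""").fetchall()
print(rows)  # empno 1 (1800) and 3 (2000) exceed 1700
```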
|
The `CASE` expression produces a value, and you can compare it or use it in other operations.
In this example the salary used in the comparison depends on emp.job:
```
SELECT *
FROM emp
WHERE CASE WHEN emp.job = 'DEVELOPER' THEN salary*1.5
WHEN emp.job = 'DB' THEN salary*2
END > 1700
```
This is overkill for your simple case, but it shows how you can include other conditions.
```
SELECT *
FROM emp
WHERE CASE WHEN sal+comm > 1700 THEN 1
ELSE 0
END > 0
```
|
SQL select rows conditionally using CASE expression
|
[
"",
"sql",
"oracle",
"case",
""
] |
I have a complex SQL query that can be simplified to the below:
```
Select ColA,ColB,ColC,ColD
From MyTable
Where (ColA In (Select ItemID From Items Where ItemName like '%xxx%')
or ColB In (Select ItemID From Items Where ItemName like '%xxx%'))
```
As you can see, the sub-query appears twice. Is the compiler intelligent enough to detect this and get the result of the sub-query only once? Or does the sub-query run twice?
FYI, table Items has about 20,000 rows and MyTable has about 200,000 rows.
Is there another way to re-write this SQL statement so that the sub-query appears/runs only once?
Update: The Where clause in the main query is dynamic and added only when needed (i.e. only when a user searches for 'xxx'). Hence changes to the main select statement or re-structuring of the query are not possible.
|
### UPDATE: per your request not to change the query, just the `WHERE`
You can pack the CTE directly in the place where it is called (untested):
```
Select ColA,ColB,ColC,ColD
From MyTable
Where EXISTS (SELECT 1 FROM (Select i.ItemID
From Items AS i
                             Where i.ItemName like '%xxx%') AS itm
WHERE itm.ItemID=MyTable.ColA OR itm.ItemID=MyTable.ColB)
```
### previous
I think this should be the same...
```
WITH MyCTE AS
(
Select ItemID From Items Where ItemName like '%xxx%'
)
Select ColA,ColB,ColC,ColD
From MyTable
Where EXISTS (SELECT 1 FROM MyCTE WHERE ItemID=ColA OR ItemID=ColB)
```
A substring `LIKE` search is - for sure - not performant.
If you can reduce your "Items" to just a few rows with your `LIKE` filter, you must test which is fastest.
|
You can also write the query like this:
```
SELECT ColA, ColB, ColC, ColD
FROM MyTable
WHERE EXISTS(
(SELECT ItemID FROM Items WHERE ItemName LIKE '%xxx%')
INTERSECT
SELECT t.v FROM (VALUES (ColA), (ColB)) AS t(v) )
```
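Both rewrites keep a single copy of the Items subquery; here is the `EXISTS` form evaluated in SQLite via Python with invented sample rows, matching on either `ColA` or `ColB`:

```python
import sqlite3

# One subquery, matched against ColA or ColB via a correlated EXISTS.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Items(ItemID INT, ItemName TEXT);
CREATE TABLE MyTable(ColA INT, ColB INT, ColC TEXT, ColD TEXT);
INSERT INTO Items VALUES (1, 'abcxxxdef'), (2, 'plain');
INSERT INTO MyTable VALUES (1,9,'a','b'), (9,1,'c','d'), (9,9,'e','f');
""")
rows = con.execute("""
SELECT ColA, ColB, ColC, ColD
FROM MyTable
WHERE EXISTS (SELECT 1
              FROM (SELECT ItemID FROM Items WHERE ItemName LIKE '%xxx%') itm
              WHERE itm.ItemID = MyTable.ColA OR itm.ItemID = MyTable.ColB)
""").fetchall()
print(rows)  # the first two rows match (on ColA, then on ColB)
```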
|
Remove duplicate sub-query
|
[
"",
"sql",
"sql-server",
"t-sql",
"common-table-expression",
"dynamic-sql",
""
] |
I'm using the below query to replace the value 2 with 5. My input string will be in the format shown below. Each value is delimited with the caret (^) symbol. It works fine when there are no duplicate values, but it fails with consecutive duplicate values. Please advise.
```
select regexp_replace('1^2^2222^2','(^|\^)2(\^|$)','\15\2') OUTPUT from dual;
```
Output:
```
1^5^2222^5 (working fine, as there are no consecutive duplicates at the start or end)
```
.
```
select regexp_replace('2^2^2222^2^2','(^|\^)2(\^|$)','\15\2') OUTPUT from dual;
```
Output:
```
5^2^^5^2222^5^2 (not working, as there are consecutive duplicates at the start/end)
```
Please let me know how to correct this?
|
As others have said, the problem is the terminating delimiter caret being consumed matching the first occurrence, so it isn't seen as the opening delimiter for the next instance.
If you don't want to use nested regex calls, you could use a simple replace to double up the delimiters, then strip them afterwards:
```
replace(
regexp_replace(
replace(<value>, '^', '^^'), '(^|\^)2(\^|$)','\15\2'), '^^', '^')
```
The inner replace turns your value into `2^^2^^2222^^2^^2`, so after the first occurrence is matched there is still a caret to act as the opening delimiter for the second instance, etc. The outer replace just strips those doubled-up delimiters back to single ones.
With some sample strings:
```
with t (input) as (
select '1^2^2222^2' from dual
union all select '2^2^2222^2^2' from dual
union all select '2^2^2222^2^^2^2' from dual
)
select input,
replace(
regexp_replace(
replace(input, '^', '^^'), '(^|\^)2(\^|$)','\15\2'), '^^', '^') as output
from t;
INPUT OUTPUT
--------------- --------------------
1^2^2222^2 1^5^2222^5
2^2^2222^2^2 5^5^2222^5^5
2^2^2222^2^^2^2 5^5^2222^5^^5^5
```
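The same double-then-strip idea can be checked outside the database. This Python sketch (the helper name `replace_token` is made up) mirrors the nested `replace`/`regexp_replace`/`replace` calls:

```python
import re

def replace_token(value, old, new):
    # Double every delimiter so adjacent occurrences each still see an
    # opening caret after the previous match has been consumed.
    doubled = value.replace("^", "^^")
    pattern = r"(^|\^)" + re.escape(old) + r"(\^|$)"
    swapped = re.sub(pattern, lambda m: m.group(1) + new + m.group(2), doubled)
    # Strip the doubled delimiters back to single ones.
    return swapped.replace("^^", "^")

print(replace_token("1^2^2222^2", "2", "5"))    # 1^5^2222^5
print(replace_token("2^2^2222^2^2", "2", "5"))  # 5^5^2222^5^5
```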
|
### problem
The problem is that the second adjacent occurrences of the searched string is not matched. This is because of the first portion of the regex:
```
(^|\^)2(\^|$)
^
-- this is not matched when the text preceding "2" is a replaced string
```
### solution
One way to solve your problem is to run the regex twice in a row:
```
SELECT REGEXP_REPLACE (tmpRes, '(^|\^)2(\^|$)', '\15\2') OUTPUT
FROM (
-- first pass of replacement
SELECT REGEXP_REPLACE ('2^2^2222^2^2', '(^|\^)2(\^|$)', '\15\2') tmpRes
FROM DUAL
)
-- OUTPUT: 5^5^2222^5^5
```
|
Regarding Regexp_replace - Oracle SQL
|
[
"",
"sql",
"oracle",
"oracle11g",
"regexp-replace",
""
] |
I have below table with 2 columns, DATE & FACTOR. I would like to compute cumulative product, something like CUMFACTOR in SQL Server 2008.
Can someone please suggest me some alternative.[](https://i.stack.imgur.com/NNYMj.jpg)
|
Unfortunately, there's no `PROD()` aggregate or window function in SQL Server (or in most other SQL databases). But you can emulate it as such:
```
SELECT Date, Factor, exp(sum(log(Factor)) OVER (ORDER BY Date)) CumFactor
FROM MyTable
```
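The identity behind the trick is `EXP(SUM(LN(x))) = PRODUCT(x)`, which only holds for strictly positive factors (`LOG` of zero or a negative value fails). A quick Python check of the running version, with invented sample factors:

```python
import math

factors = [1.02, 0.98, 1.05, 1.10]

# running EXP(SUM(LOG(f))), as the window expression computes it
log_sum, cum_via_logs = 0.0, []
for f in factors:
    log_sum += math.log(f)
    cum_via_logs.append(math.exp(log_sum))

# direct running product for comparison
prod, cum_direct = 1.0, []
for f in factors:
    prod *= f
    cum_direct.append(prod)

print(all(abs(a - b) < 1e-9 for a, b in zip(cum_via_logs, cum_direct)))  # True
```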
|
You can do it by:
```
SELECT A.ROW
, A.DATE
, A.RATE
, A.RATE * B.RATE AS [CUM RATE]
FROM (
SELECT ROW_NUMBER() OVER(ORDER BY DATE) as ROW, DATE, RATE
FROM TABLE
) A
LEFT JOIN (
SELECT ROW_NUMBER() OVER(ORDER BY DATE) as ROW, DATE, RATE
FROM TABLE
) B
ON A.ROW + 1 = B.ROW
```
|
How to compute cumulative product in SQL Server 2008?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Long time reader, first time poster.
I'm trying to consolidate a table I have, to get the rate of sold goods getting lost in transit. In this table, we have four kinds of products, three countries of origin, three transit countries (where the goods are first shipped to before being passed to customers) and three destination countries. The table is as follows.
```
Status Product Count Origin Transit Destination
--------------------------------------------------------------------
Delivered Shoes 100 Germany France USA
Delivered Books 50 Germany France USA
Delivered Jackets 75 Germany France USA
Delivered DVDS 30 Germany France USA
Not Delivered Shoes 7 Germany France USA
Not Delivered Books 3 Germany France USA
Not Delivered Jackets 5 Germany France USA
Not Delivered DVDS 1 Germany France USA
Delivered Shoes 300 Poland Netherlands Canada
Delivered Books 80 Poland Netherlands Canada
Delivered Jackets 25 Poland Netherlands Canada
Delivered DVDS 90 Poland Netherlands Canada
Not Delivered Shoes 17 Poland Netherlands Canada
Not Delivered Books 13 Poland Netherlands Canada
Not Delivered Jackets 1 Poland Netherlands Canada
Delivered Shoes 250 Spain Ireland UK
Delivered Books 20 Spain Ireland UK
Delivered Jackets 150 Spain Ireland UK
Delivered DVDS 60 Spain Ireland UK
Not Delivered Shoes 19 Spain Ireland UK
Not Delivered Books 8 Spain Ireland UK
Not Delivered Jackets 8 Spain Ireland UK
Not Delivered DVDS 10 Spain Ireland UK
```
I would like to create a new table that shows the count of goods delivered and not delivered in one row, like this.
```
Product Delivered Not_Delivered Origin Transit Destination
Shoes 100 7 Germany France USA
Books 50 3 Germany France USA
Jackets 75 5 Germany France USA
DVDS 30 1 Germany France USA
Shoes 300 17 Poland Netherlands Canada
Books 80 13 Poland Netherlands Canada
Jackets 25 1 Poland Netherlands Canada
DVDS 90 0 Poland Netherlands Canada
Shoes 250 19 Spain Ireland UK
Books 20 8 Spain Ireland UK
Jackets 150 8 Spain Ireland UK
DVDS 60 10 Spain Ireland UK
```
I've had a look at some other posts and so far I haven't found exactly what I'm looking for. Perhaps the issue here is that there will be multiple WHERE statements in the code to ensure that I don't group all shoes together, or all country groups.
Is this possible with SQL?
|
Something like this?
```
select
product
,sum(case when status = 'Delivered' then count else 0 end) as delivered
,sum(case when status = 'Not Delivered' then count else 0 end) as not_delivered
,origin
,transit
,destination
from table
group by
product
,origin
,transit
,destination
```
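Here is the same conditional aggregation run against a hypothetical SQLite copy of a few of the rows (the column is renamed `cnt` because `count` is a keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (status TEXT, product TEXT, cnt INTEGER,
                        origin TEXT, transit TEXT, destination TEXT);
INSERT INTO shipments VALUES
  ('Delivered',     'Shoes', 100, 'Germany', 'France',      'USA'),
  ('Not Delivered', 'Shoes',   7, 'Germany', 'France',      'USA'),
  ('Delivered',     'DVDS',   90, 'Poland',  'Netherlands', 'Canada');
""")

rows = conn.execute("""
    SELECT product,
           SUM(CASE WHEN status = 'Delivered'     THEN cnt ELSE 0 END) AS delivered,
           SUM(CASE WHEN status = 'Not Delivered' THEN cnt ELSE 0 END) AS not_delivered,
           origin, transit, destination
    FROM shipments
    GROUP BY product, origin, transit, destination
    ORDER BY product
""").fetchall()
print(rows)
# DVDS has no 'Not Delivered' row, so its not_delivered total comes out as 0
```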
|
This is rather easy; instead of one line per Product, Origin, Transit, Destination and Status, you want one result line per Product, Origin, Transit and Destination only. So group by these four columns and aggregate conditionally:
```
select
product, origin, transit, destination,
sum(case when status = 'Delivered' then "count" else 0 end) as delivered,
sum(case when status = 'Not Delivered' then "count" else 0 end) as not_delivered
from mytable
group by product, origin, transit, destination;
```
BTW: It is not a good idea to use a keyword for a column name. I used double quotes for your column `count`, which is standard SQL, but I don't know if it works in Google BigQuery. (Maybe it must be `"Count"` rather than `"count"`, or something else entirely.)
|
Joining a Table with Itself with multiple WHERE statemetns
|
[
"",
"sql",
"google-bigquery",
""
] |
I have two tables of service providers, `providers` and `providers_clean`. `providers` contains many thousands of providers with very poorly formatted data, `providers_clean` only has a few providers which still exist in the 'dirty' table as well.
I want the system using this data to remain functional while the user is 'cleaning' the data up, so I'd like to be able to select all of the rows that have already been 'cleaned' and the rows that are still 'dirty' while excluding any 'dirty' results that have the same id as the 'clean' ones.
How can I select all of the providers from the `providers_clean` table merged with all of the providers from the `providers` table, and EXCLUDE the ones that have already been 'cleaned'?
I've tried:
```
SELECT * FROM providers WHERE NOT EXISTS (SELECT * FROM providers_clean WHERE providers_clean.id = providers.id)
```
which gives me all of the 'dirty' results from `providers` EXCLUDING the 'clean' ones, but how can I rewrite the query to now merge all of the 'clean' ones from `providers_clean`?
Here's a visual representation of what I'm trying to do:
```
Clean Table
+----+-------------------+
| ID | Name |
+----+-------------------+
| 1 | Clean Provider 1 |
| 4 | Clean Provider 4 |
| 5 | Clean Provider 5 |
+----+-------------------+
Dirty Table
+----+------------------+
| ID | Name |
+----+------------------+
| 1 | Dirty Provider 1 |
| 2 | Dirty Provider 2 |
| 3 | Dirty Provider 3 |
| 4 | Dirty Provider 4 |
| 5 | Dirty Provider 5 |
+----+------------------+
Desired Result
+----+------------------+
| ID | Name |
+----+------------------+
| 1 | Clean Provider 1 |
| 2 | Dirty Provider 2 |
| 3 | Dirty Provider 3 |
| 4 | Clean Provider 4 |
| 5 | Clean Provider 5 |
+----+------------------+
```
Thanks
**UPDATE**
This is working, however, Is there a more efficient way to write this query?
```
SELECT providers.id AS id,
CASE
WHEN
providers_clean.id IS NOT NULL
THEN
providers_clean.provider_name
ELSE
providers.provider_name
END AS pname,
CASE
WHEN
providers_clean.id IS NOT NULL
THEN
providers_clean.phone
ELSE
providers.phone
END AS pphone,
CASE
WHEN
providers_clean.id IS NOT NULL
THEN
providers_clean.website
ELSE
providers.website
END AS pwebsite
FROM providers
LEFT JOIN providers_clean ON providers_clean.id = providers.id
ORDER BY providers.id asc
```
|
You need to do an outer join from Dirty to Clean (since Dirty has all rows Clean has, but not vice versa)
```
SELECT dirty.id AS id,
CASE
WHEN clean.id IS NOT NULL THEN clean.name
ELSE dirty.name
END AS new_name
FROM dirty
LEFT JOIN clean ON clean.id = dirty.id
ORDER BY dirty.id asc
```
[Example](http://sqlfiddle.com/#!9/2d3305/1)
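To see the join in action without a server, here is a hypothetical SQLite version of the two tables; `COALESCE(clean.name, dirty.name)` is just a compact way of writing the `CASE` expression:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dirty (id INTEGER, name TEXT);
CREATE TABLE clean (id INTEGER, name TEXT);
INSERT INTO dirty VALUES (1,'Dirty Provider 1'),(2,'Dirty Provider 2'),
                         (3,'Dirty Provider 3'),(4,'Dirty Provider 4'),
                         (5,'Dirty Provider 5');
INSERT INTO clean VALUES (1,'Clean Provider 1'),(4,'Clean Provider 4'),
                         (5,'Clean Provider 5');
""")

# LEFT JOIN keeps every dirty row; where a clean row exists, its name wins
rows = conn.execute("""
    SELECT dirty.id, COALESCE(clean.name, dirty.name)
    FROM dirty
    LEFT JOIN clean ON clean.id = dirty.id
    ORDER BY dirty.id
""").fetchall()
print(rows)
```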
|
Seems like a `LEFT JOIN` is what you need:
```
SELECT COALESCE(pc.ID, p.ID), COALESCE(pc.Name, p.Name)
FROM providers AS p
LEFT JOIN providers_clean AS pc ON p.ID = pc.ID
```
What this query essentially does: if the record exists in the 'clean' table then select this one, otherwise select the one from the 'dirty' table.
|
Select rows from two tables and exclude primary keys that exist in both tables
|
[
"",
"mysql",
"sql",
"database",
""
] |
Here is the situation. I have two parameters to query:
Item\_code
Item\_type
The first\_table contains:
Item\_Code,Item\_Characteristics
The second\_table contains:
Item\_Type,Item\_Characteristics
My goal is to get the item\_characteristics. If the specific item is not found in the first table, I would want to use the Item\_type to get them from the second table.
Any way this can be done in a single query?
I am using SQL Server 2012.
|
One way of doing this uses `not exists`:
```
select t1.Item_Characteristics
from t1
where t1.item_code = @Item_Code
union all
select t2.Item_Characteristics
from t2
where t2.item_type = @Item_Type and
not exists (select 1 from t1 where t1.item_code = @Item_Code);
```
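A sketch of the fallback behaviour, using invented table and column names (`t1`/`t2`, `chars`) in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (item_code TEXT, chars TEXT);
CREATE TABLE t2 (item_type TEXT, chars TEXT);
INSERT INTO t1 VALUES ('A1', 'from first table');
INSERT INTO t2 VALUES ('TYPE_X', 'from second table');
""")

def characteristics(code, itype):
    # second branch only fires when the code is absent from the first table
    return conn.execute("""
        SELECT chars FROM t1 WHERE item_code = :code
        UNION ALL
        SELECT chars FROM t2
        WHERE item_type = :itype
          AND NOT EXISTS (SELECT 1 FROM t1 WHERE item_code = :code)
    """, {"code": code, "itype": itype}).fetchall()

print(characteristics("A1", "TYPE_X"))   # hit in the first table
print(characteristics("B9", "TYPE_X"))   # falls back to the second table
```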
|
You can try with a `FULL JOIN` in case `Item_Code` and `Item_Type` are of the same type:
```
SELECT COALESCE(t1.Item_Characteristics, t2.Item_Characteristics) AS Item_Characteristics
FROM table1 AS t1
FULL JOIN table2 AS t2
ON t1.Item_Code = t2.Item_Type
WHERE COALESCE(t1.Item_Code, t2.Item_Type) = @param
```
|
Select from another table if record not found in the first table - SQL Server 2012
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I create an index `createdate_idx` on field `createdate`, then make this query:
> select \* from tablename where createdate>=to\_date('2016-02-29','yyyy-mm-dd');
But I am not sure whether the index `createdate_idx` has been used or not. So how can I confirm this?
|
[`EXPLAIN PLAN`](https://docs.oracle.com/cd/B19306_01/server.102/b14211/ex_plan.htm#g42231) will show you what index is used and other information.
For example:
```
explain plan for
select * from tablename where createdate>=to_date('2016-02-29','yyyy-mm-dd');
select * from table(dbms_xplan.display);
Plan hash value: 3955337982
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 9 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TABLENAME | 1 | 9 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("CREATEDATE">=TO_DATE(' 2016-02-29 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
```
|
Index status can be checked with query to Oracle system view:
```
select TABLE_NAME, INDEX_NAME, STATUS from USER_INDEXES where TABLE_NAME like '%';
```
Value `N/A` for the `STATUS` field means that index is partitioned. Individual index partitions' status can be checked with this query:
```
select INDEX_NAME, PARTITION_NAME, STATUS from USER_IND_PARTITIONS where INDEX_NAME like '%';
```
|
Oracle:How to check index status?
|
[
"",
"sql",
"oracle",
"indexing",
""
] |
I have columns such as pagecount, convertedpages and changedpages in a table along with many other columns.
pagecount is the sum of convertedpages and changedpages.
I need to select all rows along with pagecount, and I can't group them. I am wondering if there is any way to do it?
This select is part of a view, so can I use another SQL statement to bring just the sum and then somehow make it part of the main SQL query?
Thank you.
|
```
SELECT
*,
(ConvertedPages + ChangedPages) as PageCount
FROM Table
```
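For example, against a hypothetical SQLite table (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (convertedpages INTEGER, changedpages INTEGER)")
conn.executemany("INSERT INTO docs VALUES (?, ?)", [(10, 5), (3, 4)])

# the computed column needs no GROUP BY: it is evaluated per row
rows = conn.execute(
    "SELECT convertedpages, changedpages,"
    "       convertedpages + changedpages AS pagecount FROM docs"
).fetchall()
print(rows)  # each row carries its own pagecount
```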
|
If I'm understanding your question correctly, while I'm not sure why you can't use `group by`, another option would be to use a `correlated subquery`:
```
select distinct id,
(select sum(field) from yourtable y2 where y.id = y2.id) summedresult
from yourtable y
```
---
This assumes you have data such as:
```
id | field
1 | 10
1 | 15
2 | 10
```
And would be equivalent to:
```
select id, sum(field)
from yourtable
group by id
```
|
How to sum two columns in sql without group by
|
[
"",
"sql",
"sql-server",
""
] |
We have a script for updating a specific column. In this script we are using a `FOR UPDATE` cursor. In the first version of the script we did not use the `OF` part of the `FOR UPDATE` clause. As we found [here](https://stackoverflow.com/questions/16081582/difference-between-for-update-of-and-for-update) and [here](http://www.fast-track.cc/___aaa_pl_sql_cursor_for_update.htm) this should not affect the script, as all rows of all joined tables should be locked and can therefore be updated.
But when we ran the script, although the log messages were printed, no update was made on the column (column\_a).
When we changed the script so that the cursor has a `FOR UPDATE OF t1.column_a`, the same log messages appear but the updates are made correctly!
Can anyone explain why the script does not work without the `OF` clause?
The Oracle database version is 'Oracle Database 11g Enterprise Edition Release 11.2.0.3.0' also tested with 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit'.
Here's a simple version of the executed script:
```
BEGIN
-- anonymous procedure
DECLARE PROCEDURE update_column_a IS
c_to_find CONSTANT NUMBER := -42;
c_constant_value CONSTANT VARCHAR2(32) := 'value';
CURSOR c_my_cursor IS
SELECT t1.*
FROM table_1 t1, table_2 t2, table_3 t3
WHERE t1.t2_id = t2.id
AND t2.t3_id = t3.id
AND t3.column_b = c_to_find
-- FOR UPDATE with OF clause works
-- FOR UPDATE OF t1.column_a;
-- FOR UPDATE without OF clause does not
FOR UPDATE;
BEGIN
FOR cursor_rec IN c_my_cursor LOOP
IF cursor_rec.column_a IS NULL OR cursor_rec.column_a = '' THEN
dbms_output.put_line('Updating column...');
UPDATE table_1 SET column_a = c_constant_value WHERE CURRENT OF c_my_cursor;
ELSE
dbms_output.put_line('Column already set...');
END IF;
END LOOP;
END update_column_a;
-- anonymous execution
BEGIN
update_column_a;
END;
END;
/
```
|
According to the Oracle 11G PL/SQL documentation [here](https://docs.oracle.com/cd/E11882_01/appdev.112/e25519/static.htm#LNPLS602):
> When SELECT FOR UPDATE queries multiple tables, it locks only rows
> whose columns appear in the FOR UPDATE clause.
So it might appear that in your example, no rows are locked and `current of` might not work.
However, when I try this:
```
declare
cursor c is
select ename, dname
from emp join dept on dept.deptno = emp.deptno
for update;
begin
for r in c loop
null;
end loop;
end;
```
I find that the rows of EMP and DEPT **are** locked (an update to either from another session hangs).
If I change the code to try to update one of the tables, it works fine for EMP:
```
declare
cursor c is
select ename, dname
from emp join dept on dept.deptno = emp.deptno
for update;
begin
for r in c loop
update emp
set ename = upper(ename)
where current of c;
end loop;
end;
```
But if I try to update DEPT instead I get the exception:
> ORA-01410: invalid ROWID
This doesn't surprise me, because I have a foreign key from EMP to DEPT, and EMP will be "key-preserved" by the cursor's query, but DEPT will not be (i.e the same DEPT row can appear more than once in the results).
This suggests to me that the documentation is wrong, or at least misleading. However, I cannot see how your code could just not update the row, without raising an error as mine did.
|
"Can anyone explain why the script does not work without the OF clause?"
You are almost there :)
What does a plain FOR UPDATE do? --> it locks the whole result set
```
SELECT t1.*
FROM table_1 t1, table_2 t2, table_3 t3
WHERE t1.t2_id = t2.id
AND t2.t3_id = t3.id
AND t3.column_b = c_to_find
FOR UPDATE;
```
So, written that way, you cannot UPDATE any of these tables in the result set.
But if you specify the FOR UPDATE OF clause, you are telling Oracle that you want to make an exception in the lock, naming the specific column.
```
SELECT t1.*
FROM table_1 t1, table_2 t2, table_3 t3
WHERE t1.t2_id = t2.id
AND t2.t3_id = t3.id
AND t3.column_b = c_to_find
FOR UPDATE OF t1.column_a;
```
|
Oracle FOR UPDATE (OF) Cursor behaviour
|
[
"",
"sql",
"oracle",
"plsql",
"sql-update",
"cursor",
""
] |
I am trying to search for some duplicate records in a database; these duplicates were entered by end users. As they were entered by end users (presumably different people) the records are slightly different. The only thing I can possibly search on would be the company name, which as you may have guessed could potentially be worded differently. Example:
```
Id Code Name
1 A001 Company A LTD
2 A002 Company A Limited
3 A003 Co. A LTD
```
All 3 records are for the same company; different people may have entered these records, so they have different wordings for the company name. So what I am trying to do is make it easier from a support standpoint. I want to find these records for a company with a duplicate and just remove it, but as you can imagine it's difficult to search on this as the names could be different.
I was thinking of getting the first letter and the second word of the company name using a Like statement.
```
Where Name Like 'C%A%'
```
But surely there must be a better way to do this.
|
Take a look at `SOUNDEX()` and `DIFFERENCE()`:
[Using SOUNDEX and DIFFERENCE to Standardize Data in SQL Server](https://www.mssqltips.com/sqlservertip/2159/using-soundex-and-difference-to-standardize-data-in-sql-server/)
> SOUNDEX converts an alphanumeric string to a four-character code that
> is based on how the string sounds when spoken. The first character of
> the code is the first character of character\_expression, converted to
> upper case. The second through fourth characters of the code are
> numbers that represent the letters in the expression. The letters A,
> E, I, O, U, H, W, and Y are ignored unless they are the first letter
> of the string. Zeroes are added at the end if necessary to produce a
> four-character code. For more information about the SOUNDEX code, see
> The Soundex Indexing System.
>
> SOUNDEX codes from different strings can be compared to see how
> similar the strings sound when spoken. The DIFFERENCE function
> performs a SOUNDEX on two strings, and returns an integer that
> represents how similar the SOUNDEX codes are for those strings.
>
> SOUNDEX is collation sensitive. String functions can be nested.
Taken from [MSDN - SOUNDEX (Transact-SQL)](https://msdn.microsoft.com/en-GB/library/ms187384.aspx)
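For intuition, here is a minimal Python sketch of the classic Soundex encoding (a simplified approximation; SQL Server's `SOUNDEX()` may differ on some edge cases):

```python
def soundex(word):
    # letter-to-digit map of the classic algorithm
    codes = {}
    for letters, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")):
        for ch in letters:
            codes[ch] = digit
    word = word.lower()
    first = word[0].upper()
    encoded = []
    for ch in word:
        if ch in codes:
            encoded.append(codes[ch])
        elif ch in "aeiouy":
            encoded.append("*")   # vowels separate otherwise-equal codes
        # h, w and anything else are skipped entirely
    # collapse adjacent duplicate digits, then drop the vowel markers
    collapsed = [c for i, c in enumerate(encoded) if i == 0 or c != encoded[i - 1]]
    digits = [c for c in collapsed if c != "*"]
    if digits and codes.get(word[0]) == digits[0]:
        digits = digits[1:]       # the first letter's own code is not repeated
    return (first + "".join(digits) + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # R163 R163 -- they "sound alike"
```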
|
This is not an SQL question actually. You are looking for a *rule* to apply instead, an algorithm.
From your example you could work with a list of synonyms ('Company' = 'Co.', 'Limited' = 'LTD'). Then you'd replace all 'Company' with 'Co.', all 'Limited' with 'LTD', etc. You would then compare the result strings, possibly case insensitive.
But that still wouldn't find 'A Limited' or 'A'. So maybe better remove all that is not the actual name (A in your example)? But that may cause false positives.
And then there may be typos: 'Sdidas' instead of 'Adidas' because the S is next to the A on the keyboard, or 'didas' because the A key was not pressed hard enough :-(
It is up to you how hard or soft an algorithm you apply. One rule may get you too many "duplicates" that are none, and another not all of the actually existing duplicates.
Make up your mind what you actually want to consider a duplicate. There probably is no "perfect" solution for this.
|
SQL: Search for duplicate rows even though they might not be exact duplicates
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to run a single SQL command which will update table 'name', where the column 'id' matches 'eg1', 'eg2' and 'eg3'. The column to be updated is 'status' and should be changed to 'new\_status' for the previously specified ids only.
Unfortunately I'm new with SQL, therefore I only got as far as this which doesn't seem to be working:
```
SELECT * FROM `tblhosting` WHERE 'id' IN (eg1,eg2,eg3) UPDATE 'status' SET new_status
```
|
```
Update tblhosting set status = 'new_status' where id in ('eg1','eg2','eg3')
```
This assumes you want to update the tblhosting column status to 'new\_status' where the ID is either eg1, eg2 or eg3.
|
String literals are enclosed in single quotes.
Identifiers can be optionally enclosed in backtick characters.
The syntax for an `UPDATE` statement is like this:
```
UPDATE `tblhosting`
SET `status` = 'new_status'
WHERE `id` IN ('eg1','eg2','eg3')
```
The specification is a little ambiguous. The example above searches the table named `tblhosting` for rows to be updated, and assigns a value to the `status` column. This assumes that the value to be assigned is a string literal `"new_status"`, and that `"eg1"`, `"eg2"` and `"eg3"` are string values found in the column named `id`.
|
Updating column for more than one row
|
[
"",
"mysql",
"sql",
""
] |
I have 4 tables in my Database
**Bookings**
```
Booking_Id : int, Primary Key
Hotel_No : int, foreign key
Guest_No : int, foreign key
Date_From : date
Date_To : date
Room_No : int, foreign key
```
**Guest**
```
Guest_No : int, primary key
Name : varchar(30)
Address : varchar(50)
```
**Hotel**
```
Hotel_No : int, primary key,
Name : varchar(30)
Address : varchar(50)
```
**Room**
```
Room_No : int, primary key
Hotel_No : int primary key, foreign key
Types : char(1)
Price : float
```
**Question:**
I want to display all information from the room table for a given hotel. If a guest is staying in one of the rooms today, I want to display his name, else null.
I've tried several queries, but none of them solves my problem.
Thanks in advance
edit:
I've tried something similar to
```
select Room.*,Guest.Name
from Room
join Booking on Room.Room_No = Booking.Room_No
join Guest on Guest.Guest_No = Booking.Guest_No
where Booking.Hotel_No = 6 AND cast(Date_From AS DATE) <= cast(CURRENT_TIMESTAMP AS DATE)
```
When I use this query:

```
Select * From Room where Room.Hotel_No = 6
```
I get this result
[](https://i.stack.imgur.com/ZUERa.png)
But the result i want is:
[](https://i.stack.imgur.com/Mqyq8.png)
|
I Found the solution:
```
select Room.*, Guest.Name from Room
left join(select * from Booking where Booking.Date_From <= GetDate()
And Booking.Date_To >= GetDate()) as book on room.Room_No = book.Room_No
left join Guest on book.Guest_No = Guest.Guest_No where Room.Hotel_No = 6
```
|
try this:
```
Select r.*, g.name
from Room as r
left join Booking as b on r.Room_No = b.Room_No
left join Guest as g on g.Guest_No = b.Guest_No
left join Hotel as h on h.Hotel_No = b.Hotel_No
where h.Hotel_No = 6 AND b.Date_From <= cast(GETDATE() as date)
```
|
SQL - Join and Subqueries
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I have the following statement which is going to run automatically several times a month :
```
select
case
when to_char(sysdate, 'MM') <> '03' or (to_char(sysdate, 'MM')= '03'
and count(a.cust_id) >'1000000') then '0'
when to_char(sysdate, 'MM')= '03' and count(a.cust_id) <'1000000' then '1'
end
from table a
where a.template_id = '99'
and a.start_date >= sysdate-1
```
Since the count statement runs for a long time, I would like to add a condition
that only allows it to run when the current month is March. When it's not March it should just display '0'.
Any ideas how to do it? Can ELSE be used here?
thanks,
Assaf.
|
Oracle [uses short-circuit evaluation](http://docs.oracle.com/cd/E11882_01/server.112/e41084/expressions004.htm):
> Oracle Database uses short-circuit evaluation. For a simple CASE expression, the database evaluates each comparison\_expr value only before comparing it to expr, rather than evaluating all comparison\_expr values before comparing any of them with expr. Consequently, Oracle never evaluates a comparison\_expr if a previous comparison\_expr is equal to expr. For a searched CASE expression, the database evaluates each condition to determine whether it is true, and never evaluates a condition if the previous condition was true.
So if you had three conditions then if the first would match the count would not be done:
```
case
when to_char(sysdate, 'MM') <> '03' then 0
  when count(a.cust_id) > 1000000 then 0
else 1
end
```
However it's still going to hit all of the rows from the table that match the filter values, so you won't gain any performance just by doing this.
You could move the count and table access into a subquery and make the whole thing one of the conditions:
```
select
case
when extract(month from sysdate) <> 3 then 0
when (
select count(a.cust_id)
from tablea a
where a.template_id = 99
and a.start_date >= sysdate-1) > 1000000 then 0
else 1
end
from dual;
```
I've also changed some string literals to numbers to avoid extra implicit conversions - it's possible your IDs are actually strings. You might want to investigate why the count itself is slow anyway; perhaps `start_date` isn't indexed, or it's choosing a different index. Look at the execution plan for that on its own to see if it can be tuned.
|
Check this script :
```
select
case
when to_char(sysdate, 'MM') <> '03' or (to_char(sysdate, 'MM')= '03'
and count(a.cust_id) >'1000000') then '0'
when to_char(sysdate, 'MM')= '03' and count(a.cust_id) <'1000000' then '1'
else 'Your Text'
end
from table a
where a.template_id = '99'
and a.start_date >= sysdate-1
```
|
use of condition with CASE on oracle sql
|
[
"",
"sql",
"oracle",
"case",
""
] |
I want to search through a SQL table and find two consecutive missing dates.
For example, person 1 inserts 'diary' entry on day 1 and day 2, misses day 3 and day 4, and enters an entry on day 5.
I am not posting code because I am not sure of how to do this at all.
Thanks!
|
My high-level approach to this problem would be to select from a dynamic table of dates, using an integer counter to add or subtract from the current DateTime to get as many dates as you require into the future or past; then LEFT join your data table to this, order by date, and select the first row (or N many rows) which have a NULL join.
So your data ends up being
```
DATE ENTRY_ID
---- -----
2016-01-01 1
2016-01-02 2
2016-01-03 NULL
2016-01-04 3
2016-01-05 4
2016-01-06 NULL
2016-01-07 NULL
2016-01-08 NULL
```
And you can pick all of the values you need from this dataset
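The approach can be prototyped in plain Python first. This sketch (dates invented) builds the calendar, does the LEFT-join-style pairing, and picks the two-day NULL runs:

```python
from datetime import date, timedelta

# days on which a diary entry exists; the 3rd, 5th and 6th are missing
entries = {date(2016, 1, 1), date(2016, 1, 2), date(2016, 1, 4),
           date(2016, 1, 7), date(2016, 1, 8)}
start, end = min(entries), max(entries)

calendar = [start + timedelta(days=i) for i in range((end - start).days + 1)]
# LEFT JOIN analogue: pair each calendar day with its entry (or None)
joined = [(d, d if d in entries else None) for d in calendar]

# two consecutive days that both have no entry
gaps = [(a[0], b[0]) for a, b in zip(joined, joined[1:])
        if a[1] is None and b[1] is None]
print(gaps)  # only the 5th/6th form a two-day gap; the 3rd stands alone
```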
|
This uses a `CONNECT BY LEVEL` row generator to build the list of calendar dates from the first entry to the last, then uses LAG() to compare a given date with the previous date, and then checks that neither of those dates had an associated entry to find the two-day gaps:
```
With diary as (
select to_date('01/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('02/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('04/01/2016','dd/mm/yyyy') entry_dt from dual union all
--leave two day gap of 5th and 6th
select to_date('07/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('08/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('10/01/2016','dd/mm/yyyy') entry_dt from dual )
select calendar_dt -1, calendar_dt
FROM (
select calendar_dt, entry_dt, lag(entry_dt) over (order by calendar_dt) prev_entry_dt
from diary
RIGHT OUTER JOIN (select min(entry_dt) + lvl as calendar_dt
FROM diary
,(select level lvl
from dual connect by level < (select max(entry_dt) - min(entry_dt)+1 from diary))
group by lvl) ON calendar_dt = entry_dt
order by calendar_dt
)
where entry_dt is null and prev_entry_dt is null
```
returns:
```
CALENDAR_DT-1, CALENDAR_DT
05/01/2016, 06/01/2016
```
I am only doing the calendar building to simplify building all 2-day gaps, as if a person took three days off that would be two overlapping two-day gaps (day 1-2, and days 2-3). If you want a far simpler query that outputs the start and end point of any gap of two or more days, then the following works:
```
With diary as (
select to_date('01/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('02/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('04/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('07/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('08/01/2016','dd/mm/yyyy') entry_dt from dual union all
select to_date('10/01/2016','dd/mm/yyyy') entry_dt from dual )
select prev_entry_dt +1 gap_start, entry_dt -1 gap_end
FROM (
select entry_dt, lag(entry_dt) over (order by entry_dt) prev_entry_dt
from diary
order by entry_dt
) where entry_dt - prev_entry_dt > 2
```
|
Searching SQL table for two consecutive missing dates
|
[
"",
"sql",
"oracle",
"search",
""
] |
I'm going through a SQL tutorial, and came across this question.
I've been stuck for sometime.
**Customers**
```
id INTEGER PRIMARY KEY
lastname VARCHAR
firstname VARCHAR
```
**Purchases**
```
id INTEGER PRIMARY KEY
customers_id INTEGER FOREIGN KEY customers(id)
purchasedate DATETIME
purchaseamount REAL
```
`Write a statement that gives a list of all customers who purchases something this month.`
I know I want to inner-join the tables on customer\_id and then get the customer names where the month is February, does this look right?
```
SELECT * from Purchases
inner join Customers
on Purchases.customers_id=Customers.id
WHERE MONTH(purchasedate) = 2
```
|
Yeah, or to avoid the whole `distinct` business you could write
```
SELECT id, LastName, firstname FROM Customers
WHERE EXISTS ( SELECT 1 FROM Purchases
WHERE customers_id=Customers.id
AND MONTH(purchasedate)=2 AND YEAR(purchasedate)=2016 )
```
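A runnable illustration with SQLite (which has no `MONTH()`; `strftime('%m', …)` plays that role here), showing that `EXISTS` returns each customer once even with several February purchases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, lastname TEXT, firstname TEXT);
CREATE TABLE purchases (id INTEGER PRIMARY KEY, customers_id INTEGER,
                        purchasedate TEXT);
INSERT INTO customers VALUES (1, 'Smith', 'Ann'), (2, 'Jones', 'Bob');
INSERT INTO purchases VALUES (1, 1, '2016-02-10'), (2, 1, '2016-02-11'),
                             (3, 2, '2016-01-05');
""")

rows = conn.execute("""
    SELECT id, lastname, firstname FROM customers c
    WHERE EXISTS (SELECT 1 FROM purchases p
                  WHERE p.customers_id = c.id
                    AND strftime('%m', p.purchasedate) = '02'
                    AND strftime('%Y', p.purchasedate) = '2016')
""").fetchall()
print(rows)  # Ann appears once despite two February purchases
```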
|
I will go with `EXISTS` which will avoid duplicate
```
SELECT Customers.id,
Customers.LastName,
Customers.firstname
FROM Customers
WHERE EXISTS (SELECT 1
FROM Purchases
WHERE Purchases.customers_id = Customers.id
AND Month(purchasedate) = 2)
```
This assumes you want to pull information for customers who purchased in month `2` of any year.
|
SQL Query for all purchases in a given month
|
[
"",
"sql",
""
] |
I'm currently working on a query that should return a *subset* of a CartoDB table (i.e. a new table) sorted by proximity to a given point. I want to display labels on the map corresponding to the closest, second closest, etc. and thought to capture that by using the PostgreSQL row\_number() window function in a new column:
```
SELECT
*,
ST_Distance(
ST_GeomFromText('Point(-73.95623080000001 40.6738101)', 4326)::geography,
the_geom::geography
) / 1609 AS dist,
row_number() OVER (ORDER BY dist) as rownum
FROM locations
WHERE ST_Intersects(
ST_GeomFromText(
'Point(-73.95623080000001 40.6738101)', 4326
),
the_geom
)
ORDER BY dist ASC
```
However, when I try this, CartoDB/PostgreSQL returns the following error:
```
Error: column "dist" does not exist
```
Any suggestions on a better approach or something I'm missing?
|
You **CAN'T** use a field calculated at the same level.
```
SELECT (x1-x2)^2 + (y1-y2)^2 as dist, dist * 1.6 as miles
^^^^^
undefined
```
So you create a subquery.
```
SELECT dist * 1.6 as miles
FROM ( SELECT (x1-x2)^2 + (y1-y2)^2 as dist
FROM table
) T
```
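As a quick illustration of the subquery pattern (using SQLite via Python; the point values are made up, and `*` replaces `^` since SQLite has no power operator):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (x1 REAL, y1 REAL, x2 REAL, y2 REAL)")
conn.execute("INSERT INTO points VALUES (0, 0, 3, 4)")

# The alias 'dist' is defined in the inner query, so the outer
# query may freely reuse it -- the pattern from the answer above.
row = conn.execute("""
    SELECT dist, dist * 1.6 AS miles
    FROM (SELECT (x1-x2)*(x1-x2) + (y1-y2)*(y1-y2) AS dist FROM points) T
""").fetchone()
print(row)  # (25.0, 40.0)
```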
|
You cannot use a column alias in another column. Here you are defining `dist` for result list and also use it in the `row_number`'s `ORDER BY`. You have to write the same expression that you used for `order` instead.
|
PostgreSQL "column does not exist" error in CartoDB
|
[
"",
"sql",
"postgresql",
"postgis",
"cartodb",
""
] |
I have a column which contains many records with one word in common (e.g Channel names):
```
| channel |
--------------------------
| Fox |
| Fox Life |
| Fox News |
| Fox Action |
| Fox Movies |
```
And the search criteria is: "Channel Fox News"
Now I'd like to select only the row that has "Fox News".
I've tried so many things, like splitting the string by spaces, or using LIKE, with no success, because the word "Fox" always selects all five records.
Note that it has to be a word-level match.
Thanks
|
This screams for fulltext index
```
ALTER TABLE T add FULLTEXT index(channel);
```
You search with
```
SELECT * FROM t WHERE MATCH (channel)
AGAINST ('+Fox +News' IN BOOLEAN MODE);
```
plus sign means that word HAS to exist in the column
There are a couple of things you need to pay attention to:
[innodb\_ft\_min\_token\_size](http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_ft_min_token_size)
the minimum size of the word searched
and [stop words](http://dev.mysql.com/doc/refman/5.6/en/fulltext-stopwords.html)
|
How about using `like`?
```
where lower(channel) like '%fox news%'
```
|
Select records with spaces in mysql
|
[
"",
"mysql",
"sql",
""
] |
I have to select a bunch of rows from a database (DB2) based on an array of IDs. However, as the number of rows can become rather large (up to 7,000), the multi-row query (below) will hit the 30-second time-out in VBA. I am instead considering a looped single-row approach, but I have no clue how much stress this will put on the database.
Is the looped single-row query a reckless approach? And are there any better solutions, without writing to the database?
Multi-row:
```
SELECT * FROM my_table WHERE my_table.id IN ('0001', '0002', '0003', ...)
```
Single-row:
```
DECLARE @cnt INT = 0;
WHILE @cnt < @cnt_total
BEGIN
    SELECT * FROM my_table WHERE my_table.id = @cnt;
    SET @cnt = @cnt + 1;
END;
```
|
7000 calls will be very slow, because of the number of round-trips.
The best approach to solving this problem is creating a temporary table, populating it with 7000 rows, and joining to it in your query.
If this approach is not acceptable, you could limit the number of round-trips by querying, say, a 1000 rows at a time in a loop. This would give you seven round-trips instead of 7000, which should be a lot faster.
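The chunking idea can be sketched in plain Python (the table and column names are just placeholders from the question):

```python
def chunked(ids, size):
    """Yield successive slices of at most `size` elements."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def build_queries(ids, size=1000):
    # One parameterized IN (...) query per chunk of IDs.
    queries = []
    for chunk in chunked(ids, size):
        placeholders = ", ".join("?" for _ in chunk)
        queries.append(("SELECT * FROM my_table WHERE id IN (%s)" % placeholders, chunk))
    return queries

queries = build_queries([str(i).zfill(4) for i in range(1, 7001)])
print(len(queries))  # 7 round-trips instead of 7000
```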
|
The looped single-row query is more reckless. Well, at least it doesn't perform as well.
In general, executing a query incurs overhead. At the very minimum, the database engine needs to parse the query, determine the query plan (which may be cached), execute the query, and return the results. Repeating these steps requires extra effort.
From a performance perspective, the `IN` is a better approach.
That said, it might be easier to process the code row-by-row at the application layer. And if you only have a handful of values, the performance difference might not be very important. But, the better approach is a single query.
|
Multiple queries vs. huge WHERE IN
|
[
"",
"sql",
"vba",
"db2",
""
] |
I have Ip2LocationLite database with ip\_from, ip\_to, country\_code columns.
All IP's are in decimal format (converted using a function).
So I have to detect those users with IP addresses from certain country.
For example, Russia has over 3 thousands IP ranges and I want to search through all of them.
Script looks like this:
```
SELECT * FROM [dbo].[USERINFO] as UserInfo
WHERE (SELECT gaminator.dbo.IPAddressToInteger (UserInfo.LastIp)) BETWEEN
(
SELECT [ip2location].[dbo].[ip2location_db1].ip_from FROM [ip2location].[dbo].[ip2location_db1]
WHERE [ip2location].[dbo].[ip2location_db1].country_code = 'RU'
)
AND
(
SELECT [ip2location].[dbo].[ip2location_db1].ip_to FROM [ip2location].[dbo].[ip2location_db1]
WHERE [ip2location].[dbo].[ip2location_db1].country_code = 'RU'
)
```
The last thing gives error:
```
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
```
This is because BETWEEN expects one range instead of 3 thousand ranges.
How can I write this so that it effectively checks 3 thousand BETWEENs instead (according to the number of ranges to be checked)?
|
It can be a little tricky to answer these questions without a few sample records and the expected output. But if I've understood your schema correctly you can use a join instead of the sub queries.
**Example**
```
SELECT
*
FROM
[dbo].[USERINFO] as UserInfo
    INNER JOIN [ip2location].[dbo].[ip2location_db1] AS ipRange ON gaminator.dbo.IPAddressToInteger(UserInfo.LastIp) BETWEEN ipRange.ip_from AND ipRange.ip_to
WHERE
ipRange.country_code = 'RU'
;
```
|
Get the `min/lowest` value from your first sub-query and the `max/highest` value from your second sub-query to avoid the error, if you want to use them as parameters for your `BETWEEN` clause.
If your IP addresses are canonical or can be ordered numerically, you may get the lowest IP value by sorting them in ASC order and selecting the first row (i.e. `SELECT TOP 1`), and get the highest IP value by sorting them in DESC order and selecting the first row.
|
How to make multiple BETWEEN statements not knowing the quantity of them beforehand? (detecting country by IP range)
|
[
"",
"sql",
"sql-server",
"geolocation",
"ip",
"between",
""
] |
I need to show serial numbers for each row of an invoice. That means that one position can have several serial numbers. Along with the serial number there needs to be a quantity, which is obviously always one. Unfortunately, there could be rows with more items than serial numbers. This happens when serial numbers are not scanned in the shipping process. In my output I need an extra row for these positions where I show the number of REMAINING items. So let's say that there is a position with 10 items in it and only four are scanned in the shipping process. That would mean I print four rows with the serials and quantity one, and a fifth row with no serial and the quantity six.
I work with SQL Server 2008 and would prefer a solution without temp tables or CTEs.
Here is an example of what I mean:
```
CREATE TABLE #data (doc int, pos int, qty int)
CREATE TABLE #serial (doc int, pos int, serial varchar(10))
INSERT INTO #data
SELECT 1,1,6
UNION ALL
SELECT 1,2,3
UNION ALL
SELECT 2,1,4
INSERT INTO #serial
select 1,1,'aaaaaaaaaa'
UNION ALL
select 1,1,'bbbbbbbbbb'
UNION ALL
select 1,1,'cccccccccc'
UNION ALL
select 1,1,'dddddddddd'
UNION ALL
select 1,2,'eeeeeeeeee'
UNION ALL
select 1,2,'ffffffffff'
UNION ALL
select 1,2,'gggggggggg'
UNION ALL
select 2,1,'hhhhhhhhhh'
SELECT d.doc, d.pos, s.serial, CASE WHEN s.serial IS NOT NULL THEN 1 ELSE d.qty END qty
FROM #data d
INNER JOIN #serial s ON s.doc = d.doc and s.pos = d.pos
```
This is the desired output:
```
doc | pos | serial | qty
1 | 1 |'aaaaaaaaaa'| 1
1 | 1 |'bbbbbbbbbb'| 1
1 | 1 |'cccccccccc'| 1
1 | 1 |'dddddddddd'| 1
1 | 1 | NULL | 2
1 | 2 |'eeeeeeeeee'| 1
1 | 2 |'ffffffffff'| 1
1 | 2 |'gggggggggg'| 1
2 | 1 |'hhhhhhhhhh'| 1
2 | 1 | NULL | 3
```
|
```
select
s.doc, s.pos, s.serial, d.qty - s.cnt qty
from
( select
s.doc, s.pos, s.serial, count(*) cnt,
case when grouping(s.doc) = 0 and grouping(s.pos) = 0 and grouping(s.serial) = 1 then 1 else 0 end grp
from
#serial s
group by
s.doc, s.pos, s.serial with cube
having
grouping(s.doc) = 0 and grouping(s.pos) = 0 and grouping(s.serial) = 1
or grouping(s.doc) = 0 and grouping(s.pos) = 0 and grouping(s.serial) = 0
) s
left join #data d on s.doc = d.doc and s.pos = d.pos and s.grp = 1
where
s.grp = 0 or d.qty - s.cnt > 0
order by
s.doc, s.pos, s.grp
```
|
Dynamic approach
```
SELECT d.doc, d.pos, s.serial, 1 qty
FROM #data d
INNER JOIN #serial s ON s.doc = d.doc and s.pos = d.pos
UNION
select t1.doc,t1.pos,null,t1.qty-ss from
(
SELECT d.doc,d.pos, SUM(1) SS , d.qty
FROM #data d
INNER JOIN #serial s ON s.doc = d.doc and s.pos = d.pos
group by d.doc,d.pos,d.qty
)t1 where SS<>qty
Order by doc, pos, serial
```
|
Add summarizing column with calculation
|
[
"",
"sql",
"sql-server",
"summarization",
""
] |
I have created a table
```
CREATE TABLE `region_details` (
`id` int(5) NOT NULL,
`file_id` varchar(20) NOT NULL,
`region` varchar(30) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
```
Lets assume I have 10 records as follows
```
id file_id region
1 aaa a
2 bbb b
3 ccc a
4 ddd c
5 abc d
6 a01 a
7 j05 b
8 005 c
9 a1021 a
10 111j b
```
I need id and region for distinct region values
```
id region
1 a
2 b
4 c
5 d
```
|
Simply do a `GROUP BY`, use `MIN(id)` to pick each region's first id.
```
select min(id) as id, region
from region_details
group by region
```
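A quick check of this against the sample data, using SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE region_details (id INTEGER PRIMARY KEY, file_id TEXT, region TEXT)")
conn.executemany("INSERT INTO region_details VALUES (?, ?, ?)", [
    (1, 'aaa', 'a'), (2, 'bbb', 'b'), (3, 'ccc', 'a'), (4, 'ddd', 'c'),
    (5, 'abc', 'd'), (6, 'a01', 'a'), (7, 'j05', 'b'), (8, '005', 'c'),
    (9, 'a1021', 'a'), (10, '111j', 'b'),
])

# MIN(id) per region gives the first id for each distinct region.
rows = conn.execute("""
    SELECT MIN(id) AS id, region
    FROM region_details
    GROUP BY region
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'a'), (2, 'b'), (4, 'c'), (5, 'd')]
```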
|
You can do it like this:
```
SELECT min(id),region
FROM YourTable
GROUP BY region
```
|
How to select only distinct values for two columns in mysql
|
[
"",
"mysql",
"sql",
""
] |
This is my SQL clause
```
ISNULL((LTRIM(RTRIM(Masters.comment1))+';'+LTRIM(RTRIM(masters.comment2))),'')Note1 ,
```
When there is no value in either column, I get only a semicolon. If there is no value in `comment1`, I get `;xyz`.
I want:
* when neither `comment1` nor `comment2` has a value, then `note1 = ''`
* when `comment1` has no value but `comment2` does, then `note1 = 'xyz'`
|
This will add the semicolon only when there is data in both columns
```
SELECT
COALESCE(LTRIM(RTRIM(comment1)),'') +
     CASE WHEN NULLIF(comment1, '') + NULLIF(comment2, '') IS NULL THEN '' ELSE ';' END +
COALESCE(LTRIM(RTRIM(comment2)),'')
FROM yourtable
```
|
You can use `NULLIF` to turn empty string to `NULL`. Then concatenation of `NULL` with ';' would still be `NULL` and that can be turned into and empty string with `ISNULL`:
```
WITH masters (comment1, comment2) AS (
SELECT NULL, NULL UNION ALL
SELECT ' 1', NULL UNION ALL
SELECT NULL, '2 ' UNION ALL
SELECT '3 ', ' 4' UNION ALL
SELECT '' , '' UNION ALL
SELECT ' 5', '' UNION ALL
SELECT '' , '6 ' UNION ALL
SELECT '7 ', ' 8'
)
SELECT
ISNULL(
(
ISNULL(NULLIF(LTRIM(RTRIM(masters.comment1)), '') + ';', '')
+ NULLIF(LTRIM(RTRIM(masters.comment2)), '')
)
, ISNULL(LTRIM(RTRIM(masters.comment1)), '')) Note1
FROM masters;
```
*Update*: [Jorge Campos](https://stackoverflow.com/users/460557/jorge-campos) has a nice and very easy to read solution using `CASE`:
```
WITH masters (comment1, comment2) AS (
SELECT NULL, NULL UNION ALL
SELECT ' 1', NULL UNION ALL
SELECT NULL, '2 ' UNION ALL
SELECT '3 ', ' 4' UNION ALL
SELECT '' , '' UNION ALL
SELECT ' 5', '' UNION ALL
SELECT '' , '6 ' UNION ALL
SELECT '7 ', ' 8'
)
SELECT ISNULL(LTRIM(RTRIM(masters.comment1)), '') +
CASE WHEN ISNULL(LTRIM(RTRIM(masters.comment1)), '') <> ''
AND ISNULL(LTRIM(RTRIM(masters.comment2)), '') <> ''
THEN ';'
ELSE ''
END +
ISNULL(LTRIM(RTRIM(masters.comment2)),'') AS Note1
FROM masters;
```
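For what it's worth, the same `NULLIF` trick can be checked in SQLite (via Python), where `IFNULL` replaces `ISNULL` and the NULL-propagating `||` replaces `+`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE masters (comment1 TEXT, comment2 TEXT)")
conn.executemany("INSERT INTO masters VALUES (?, ?)", [
    (None, None), (' 1', None), (None, '2 '), ('3 ', ' 4'), ('', ''),
])

# Same structure as the T-SQL expression above: the semicolon only
# survives when both trimmed comments are non-empty.
rows = conn.execute("""
    SELECT IFNULL(
             IFNULL(NULLIF(TRIM(comment1), '') || ';', '')
               || NULLIF(TRIM(comment2), ''),
             IFNULL(TRIM(comment1), '')) AS Note1
    FROM masters
""").fetchall()
print([r[0] for r in rows])  # ['', '1', '2', '3;4', '']
```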
|
Inserting an optional semicolon between two selected fields
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to find only records that exist in one table, that don't exist in another table. The part I'm struggling with is that one item can have multiple variations.
Example Table one
```
ItemNumber | VendorName
1 | Frito Lay
1 | Joe's Chips
1 | Chips Galore
```
Example Table two
```
Item Number | Vendor Name
1 | Frito Lay
1 | Joe's Chips
```
I'm looking to return only the records that exist in table one that don't exist in table two. The tables are identical in schema btw.
The record I'm looking to return is
1, Chips Galore
I am using SQL Server 2008.
|
```
SELECT ItemNumber, VendorName
from Table1
except select ItemNumber, VendorName
from Table2
```
This will select everything in the "first" set that is not also found in the "second" set... and it compares every specified column in every row while doing it. (Duplicates are removed.)
|
You can use `NOT EXISTS`:
```
SELECT *
FROM dbo.TableOne T1
WHERE NOT EXISTS(SELECT 1 FROM dbo.TableTwo
WHERE ItemNumber = T1.ItemNumber
AND VendorName = T1.VendorName);
```
Another option could be using `EXCEPT`:
```
SELECT *
FROM dbo.TableOne
EXCEPT
SELECT *
FROM dbo.TableTwo;
```
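Both approaches are easy to verify against the sample data; here is the `EXCEPT` variant run in SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableOne (ItemNumber INTEGER, VendorName TEXT);
CREATE TABLE TableTwo (ItemNumber INTEGER, VendorName TEXT);
INSERT INTO TableOne VALUES (1, 'Frito Lay'), (1, 'Joe''s Chips'), (1, 'Chips Galore');
INSERT INTO TableTwo VALUES (1, 'Frito Lay'), (1, 'Joe''s Chips');
""")

# EXCEPT keeps rows of TableOne that have no exact match in TableTwo.
rows = conn.execute("SELECT * FROM TableOne EXCEPT SELECT * FROM TableTwo").fetchall()
print(rows)  # [(1, 'Chips Galore')]
```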
|
How to find records that exist in one table but not the other when each record has variations
|
[
"",
"sql",
"sql-server-2008",
""
] |
Here is my database structure, it contains 3 tables : School, Student, Course
Each student must belong to a school (so School ID will be a foreign key in the student table referring to the School table).
Each course also must belong to a school (so School ID will be a foreign key in the course table referring to the School table).
Finally, each student may sign up for one or more courses (pivot table with a foreign key for student and a foreign key for school), but with the restriction that the student and the course must both belong to the same school. A student cannot be allowed to sign up for a course which belongs to a different school from the student.
Is there a way to add this constraint in the pivot table, or some other way model it?
Or should the constraint be enforced in the code before each insert on the pivot table?
|
You can add SchoolID as part of the primary key in both Student and Course.
This forces foreign keys to those tables to also specify the SchoolID.
We use this fact in our StudentInCourse table to force both the student and the course to belong to the same school.
```
CREATE TABLE School(id INT PRIMARY KEY);
CREATE TABLE Course(id int, schoolID INT
FOREIGN KEY REFERENCES dbo.School(id),
PRIMARY KEY(id, schoolID));
CREATE TABLE Student(id INT, schoolID INT
FOREIGN KEY REFERENCES dbo.School(id),
PRIMARY KEY(id, SchoolID));
CREATE TABLE StudentInCourse(StudentId INT, schoolId INT, CourseID INT,
CONSTRAINT [fk_student]
FOREIGN KEY (StudentId,schoolID) REFERENCES student(id, SchoolID),
CONSTRAINT [fk_course]
FOREIGN KEY (CourseID,schoolID) REFERENCES Course(id, SchoolID),
);
INSERT dbo.School ( id )
VALUES ( 1 ), ( 2 );
INSERT dbo.Student
( id, schoolID )
VALUES ( 19950516, 1 );
INSERT dbo.Course
( id, schoolID )
VALUES ( 77666, 1 ),
( 99988, 2 );
-- Student in same school as course is OK
INSERT dbo.StudentInCourse
( StudentId, schoolId, CourseID )
VALUES ( 19950516, 1, 77666 );
-- Student in other school as course is FAIL
INSERT dbo.StudentInCourse
( StudentId, schoolId, CourseID )
VALUES ( 19950516, 2, 99988 );
INSERT dbo.StudentInCourse
( StudentId, schoolId, CourseID )
VALUES ( 19950516, 1, 99988 );
```
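The same composite-key design can be checked in SQLite via Python (note that SQLite only enforces foreign keys after `PRAGMA foreign_keys = ON`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs
conn.executescript("""
CREATE TABLE School (id INTEGER PRIMARY KEY);
CREATE TABLE Course (id INTEGER, schoolID INTEGER REFERENCES School(id),
                     PRIMARY KEY (id, schoolID));
CREATE TABLE Student (id INTEGER, schoolID INTEGER REFERENCES School(id),
                      PRIMARY KEY (id, schoolID));
CREATE TABLE StudentInCourse (
    StudentId INTEGER, schoolId INTEGER, CourseID INTEGER,
    FOREIGN KEY (StudentId, schoolId) REFERENCES Student(id, schoolID),
    FOREIGN KEY (CourseID, schoolId) REFERENCES Course(id, schoolID));
INSERT INTO School VALUES (1), (2);
INSERT INTO Student VALUES (19950516, 1);
INSERT INTO Course VALUES (77666, 1), (99988, 2);
""")

# Same school as the course: accepted.
conn.execute("INSERT INTO StudentInCourse VALUES (19950516, 1, 77666)")

# Course belongs to school 2, student to school 1: rejected.
try:
    conn.execute("INSERT INTO StudentInCourse VALUES (19950516, 1, 99988)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```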
|
I recommend making:
- The three fields Pivot`(StudentID,CourseID,SchoolID)` a `Primary Key`
- The two fields Pivot`(StudentID,SchoolID)` a double `Foreign Key` to the `Student` table
- The two fields Pivot`(CourseID,SchoolID)` a double `Foreign Key` to the `Course` table
So the model will be in this way:
School Table:
```
+----------+
| SchoolID |
+----------+
| 1 |
| 2 |
| 3 |
+----------+
```
where `SchoolID` is a `Primary Key`
Course Table:
```
+----------+----------+
| CourseID | SchoolID |
+----------+----------+
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
+----------+----------+
```
where `SchoolID` references School`(SchoolID)` and (CourseID,SchoolID) a `Primary Key`
Student Table:
```
+-----------+----------+
| StudentID | SchoolID |
+-----------+----------+
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
+-----------+----------+
```
where `SchoolID` references School`(SchoolID)` and (StudentID ,SchoolID) a `Primary Key`
Pivot Table:
```
+-----------+----------+----------+
| StudentID | CourseID | SchoolID |
+-----------+----------+----------+
| 1 | 1 | 1 |
| 2 | 2 | 1 |
| 3 | 3 | 1 |
+-----------+----------+----------+
```
where `Pivot` (StudentID,SchoolID) references `Student` (StudentID,SchoolID)
and `Pivot` (CourseID,SchoolID) references `Course` (CourseID,SchoolID)
|
SQL db design constraints for co-belonging
|
[
"",
"sql",
"database",
"constraints",
"foreign-key-relationship",
""
] |
I have a dataset like the one below, and I would like to remove the date component from it. One challenge is that the date can be in different formats as shown below.
**Existing output**
```
Event A 05-25-2015
Event B 25-05-2015
Event C April 2015
Event D 2016
```
**Desired Output**
```
Event A
Event B
Event C
Event D
```
|
Something to get you started. Depending on the number of formats that you might face, you might want to put them all into a table as patterns, join to that table, and use `LEN` to calculate values for the `STUFF` command.
```
DECLARE @test TABLE (my_string VARCHAR(50) NOT NULL)
INSERT INTO @test (my_string)
VALUES
('Event A 05-25-2015'),
('Event B 25-05-2015'),
('Event C April 2015'),
('Event D 2016')
SELECT
CASE
WHEN PATINDEX('%[0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]%', my_string) > 0
THEN STUFF(my_string, PATINDEX('%[0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]%', my_string), 10, '')
WHEN PATINDEX('%April [0-9][0-9][0-9][0-9]%', my_string) > 0
THEN STUFF(my_string, PATINDEX('%April [0-9][0-9][0-9][0-9]%', my_string), 10, '')
WHEN PATINDEX('%[0-9][0-9][0-9][0-9]%', my_string) > 0
THEN STUFF(my_string, PATINDEX('%[0-9][0-9][0-9][0-9]%', my_string), 4, '')
ELSE my_string
END AS my_string
FROM
@test
```
My guess is that this is **very** error prone and might find false positives if someone has an event named "Event 6421", for example.
This also only handles the formats that you had in your sample data. I would figure that you might need to handle more, but this should point in the right direction.
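The same idea is easier to express as a regular expression outside of SQL; here is a rough Python sketch of the three patterns (it shares the same false-positive caveat for names ending in four digits):

```python
import re

# Patterns roughly mirroring the PATINDEX cases above: a numeric
# dd-mm-yyyy / mm-dd-yyyy date, a "MonthName yyyy" date, or a bare year.
DATE_RE = re.compile(
    r"\s*(\d{2}-\d{2}-\d{4}"
    r"|(?:January|February|March|April|May|June|July|August"
    r"|September|October|November|December)\s+\d{4}"
    r"|\d{4})\s*$"
)

def strip_date(s):
    # Remove a trailing date component from the string, if present.
    return DATE_RE.sub("", s)

data = ["Event A 05-25-2015", "Event B 25-05-2015",
        "Event C April 2015", "Event D 2016"]
print([strip_date(s) for s in data])  # ['Event A', 'Event B', 'Event C', 'Event D']
```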
|
Here is another way using `CROSS APPLY`; it should handle any format of data.
This assumes that you have **two spaces** in your data and extracts the part that comes before the second space.
```
CREATE TABLE #strings
(
col VARCHAR(50)
)
INSERT INTO #strings
VALUES ('Event A 05-25-2015'),
('Event B 25-05-2015'),
('Event C April 2015'),
('Event D 2016'),
('Event 4736765 2016')
SELECT Extracted_string = LEFT(col, pos2 - 1)
FROM #strings
CROSS apply (VALUES (Charindex(' ', col)))a(pos1)
CROSS apply (VALUES(Charindex(' ', col, pos1 + 1)))b(pos2)
```
**Result:**
```
Extracted_string
----------------
Event A
Event B
Event C
Event D
Event 4736765
```
|
Remove date from text in SQL
|
[
"",
"sql",
"sql-server",
"date",
""
] |
We're supposed to make a function that adds +7 days to the current `SYSDATE` and also shows the hour and minute; however, my code only displays the date, not the time. What am I doing wrong? This is probably very easy, but I just can't figure it out, and there's not much help on the Internet either.
So far, I've tried:
```
CREATE OR REPLACE FUNCTION get_date(n IN NUMBER)
RETURN DATE AS
BEGIN
RETURN to_date(to_char(SYSDATE, 'DD.MM.YYYY HH24:MI'),'DD.MM.YYYY HH24:MI') + n;
END get_date;
/
```
So when you write (the 7 is the amount of days to advance):
```
SELECT get_date(7)
FROM dual;
```
Its result is this:
```
GET_DATE(7)
----------------
09.03.2016
```
However, as you can see, the time isn't included in the result, and that's what I need in this case. Any help at all would be appreciated. I'm sure I'm just too blind to see it, but I've stared at this piece of code for too long now; I'll admit my defeat.
|
You have to format the result to your specifications, as in
```
--create the function
CREATE OR REPLACE FUNCTION get_date(n IN NUMBER)
RETURN DATE AS
BEGIN
RETURN SYSDATE + n;
END get_date;
--call the function
SELECT TO_CHAR(get_date(7), 'DD.MM.YYYY HH24:MI')
FROM dual;
```
Or your new requirement of no formatting outside the function
```
--create the function
CREATE OR REPLACE FUNCTION get_date(n IN NUMBER)
RETURN VARCHAR2 AS
BEGIN
RETURN TO_CHAR(SYSDATE + n,'DD.MM.YYYY HH24:MI');
END get_date;
--call the function
SELECT get_date(7)
FROM dual;
```
|
You can decide if you want your function to return a `date` or a `varchar`; you may choose one of the following ways, depending on your need:
```
CREATE OR REPLACE FUNCTION get_date(n IN NUMBER)
RETURN varchar2 AS
BEGIN
RETURN to_char(SYSDATE + n, 'DD.MM.YYYY HH24:MI');
END get_date;
/
CREATE OR REPLACE FUNCTION get_date2(n IN NUMBER)
RETURN date AS
BEGIN
RETURN to_date(to_char(SYSDATE + n, 'DD.MM.YYYY HH24:MI'), 'DD.MM.YYYY HH24:MI');
END get_date2;
/
select to_char(get_date2(1), 'DD.MM.YYYY HH24:MI') from dual;
select get_date(1) from dual;
```
|
PL/SQL function to add +7 days to SYSDATE
|
[
"",
"sql",
"oracle",
"function",
"sysdate",
""
] |
I am trying to create a chart in SSRS which should show me only the values for Sunday from 10 pm until 11 pm.
My query:
```
select intervaldateweek as Week,
SUM(GoodUnits) As GoodUnits,
SUM(NetUnits) As NetUnits,
SUM(GoodUnits) / NULLIF(SUM(NetUnits) , 0.0)* 100 As Value
from Count
inner join tsystem ON Count.systemid = tsystem.ID
where IntervalDate >= getdate()-300
and tsystem.ID = 2
group by intervaldate
```
I tried it with this expression to get only Sunday:
```
DATEPART(dw, IntervalDate) = 1
```
But I don't know how I can determine a specific time interval, in this case 10 pm - 11 pm.
I need the values in this time period: between 22:00:00 and 23:00:00.
```
IntervalDate >= DATEADD(HOUR, 1,CAST(DATEADD(DAY,0, CAST(GETDATE() AS DATE)) AS DATETIME))
AND IntervalDate <= DATEADD(HOUR, 22, CAST(CAST(GETDATE() AS DATE) AS DATETIME))
```
I tried it with this expression, but I think I set the wrong numbers in it.
|
You can use the `DATEPART` function ([documentation here](https://msdn.microsoft.com/en-us/library/ms174420.aspx)) to look at the relevant parts, like the hour of a date time value. For example:
```
SELECT
intervaldateweek AS [Week],
SUM(GoodUnits) AS GoodUnits,
SUM(NetUnits) AS NetUnits,
SUM(GoodUnits) / NULLIF(SUM(NetUnits), 0.0)* 100 AS Value
FROM
[Count]
INNER JOIN tsystem ON [Count].systemid = tsystem.ID
WHERE
DATEPART(WEEKDAY,intervaldate) = 1 /* See note about SET DATEFIRST */
AND DATEPART(HOUR, intervaldate) = 22
AND tsystem.ID = 2
GROUP BY
intervaldateweek
```
NOTE: When using the `DATEPART(WEEKDAY, ...)` function, be sure to know your server configuration because sometimes Sunday will be 1 and other times Sunday will be 7. You can force it using for example `SET DATEFIRST 1` to make it predictable. Refer to this [MSDN link](https://msdn.microsoft.com/en-us/library/ms181598.aspx) for details.
*PS: Your select statement groups by `intervaldate`, yet your column list uses `intervaldateweek` which won't work. I'm going to assume that the group by is intended to be `intervaldateweek`.*
|
try this
```
select intervaldateweek as Week,
SUM(GoodUnits) As GoodUnits,
SUM(NetUnits) As NetUnits,
SUM(GoodUnits) / NULLIF(SUM(NetUnits) , 0.0)* 100 As Value
from Count
inner join tsystem ON Count.systemid = tsystem.ID
where DATEPART(dw,intervaldate) = 1 AND CAST(intervaldate AS TIME(0))
BETWEEN CAST('22:00:00' AS TIME(0)) AND CAST('23:00:00' AS TIME(0))
and tsystem.ID = 2
group by intervaldate
```
|
Determine a time interval
|
[
"",
"sql",
"sql-server",
"reporting-services",
"ssrs-2008-r2",
""
] |
Hi, I need to merge 2 tables with information about customers. Table 2 tells us whether we have customer information about email, address and phone number, but the data is structured so that each customer has 3 rows. Is there a way to merge these two tables so that I only get one row per customer, but with all the contact information?
Table 1:
```
id customerID ... ...
1 11
2 99
```
and Table 2:
```
id customerID Channel Y_N
1 11 Email Y
2 11 Address Y
3 11 Phone N
4 99 Email N
5 99 Address Y
6 99 Phone N
```
My code is this
```
TABLE 1
left join(
select customerID,
case when Y_N='Y' and Channel='Email' then 1 else 0 end as Email
FROM table2
where Channel='Email')a
on table1.customerID=a.customerID
Left join(
select customerID,
case when Y_N='Y' and Channel='Address' then 1 else 0 end as Address
FROM table2
where Channel='Address') b
on table1.customerID=b.customerID
Left join(
select customerID,
case when Y_N='Y' and Channel='Phone' then 1 else 0 end as Phone
FROM table2
where Channel='SMS') c
on table1.customerID=c.customerID
```
which actually does the job, but if I have to do it again in the future, is there a smarter way?
Thank you
|
You can replace those three joins with a single one using conditional aggregation (i.e. a pivot):
```
TABLE1
left join(
select customerID,
       max(case when Y_N='Y' and Channel='Email' then 1 else 0 end) as Email,
       max(case when Y_N='Y' and Channel='Address' then 1 else 0 end) as Address,
max(case when Y_N='Y' and Channel='Phone' then 1 else 0 end) as Phone
FROM table2
GROUP BY 1) a
on table1.customerID=a.customerID
```
This might be more efficient, check Explain...
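A quick check of the conditional-aggregation idea in SQLite via Python (`GROUP BY customerID` is spelled out instead of Teradata's `GROUP BY 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, customerID INTEGER);
CREATE TABLE table2 (id INTEGER, customerID INTEGER, Channel TEXT, Y_N TEXT);
INSERT INTO table1 VALUES (1, 11), (2, 99);
INSERT INTO table2 VALUES
  (1, 11, 'Email', 'Y'), (2, 11, 'Address', 'Y'), (3, 11, 'Phone', 'N'),
  (4, 99, 'Email', 'N'), (5, 99, 'Address', 'Y'), (6, 99, 'Phone', 'N');
""")

# MAX(CASE ...) collapses the three channel rows into three columns.
rows = conn.execute("""
    SELECT t1.customerID, a.Email, a.Address, a.Phone
    FROM table1 t1
    LEFT JOIN (
        SELECT customerID,
               MAX(CASE WHEN Y_N='Y' AND Channel='Email'   THEN 1 ELSE 0 END) AS Email,
               MAX(CASE WHEN Y_N='Y' AND Channel='Address' THEN 1 ELSE 0 END) AS Address,
               MAX(CASE WHEN Y_N='Y' AND Channel='Phone'   THEN 1 ELSE 0 END) AS Phone
        FROM table2
        GROUP BY customerID
    ) a ON t1.customerID = a.customerID
    ORDER BY t1.customerID
""").fetchall()
print(rows)  # [(11, 1, 1, 0), (99, 0, 1, 0)]
```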
|
Well, if it works, why change it? But if you have to, maybe something like:
```
SELECT
c.*,
IF(e.Y_N='Y',1,0) AS Email,
IF(a.Y_N='Y',1,0) AS Address,
IF(p.Y_N='Y',1,0) AS Phone
FROM table1 AS c
LEFT JOIN table2 AS e ON(c.customerID=e.customerID AND Channel='Email')
LEFT JOIN table2 AS a ON(c.customerID=a.customerID AND Channel='Address')
LEFT JOIN table2 AS p ON(c.customerID=p.customerID AND Channel='Phone')
```
Also, I can't really see a good reason to have three rows per customer in table2. If at all possible, you'd be better off changing it to
```
customerID|Email|Address|Phone
11 | 1 | 1 | 0
99 | 0 | 1 | 0
```
Such that you can just do
```
SELECT * FROM table1 AS a LEFT JOIN table2 AS b ON a.customerID=b.customerID
```
|
Several rows with info to a single row
|
[
"",
"sql",
"teradata",
""
] |
SQL MINUS is used as:
```
SELECT expression1, expression2, ... expression_n
FROM tables
[WHERE conditions]
MINUS
SELECT expression1, expression2, ... expression_n
FROM tables
[WHERE conditions];
```
In my case I see no difference for first minus second but do see a difference for second minus first. What does this signify? Is there any real difference? If so, then why, and how may I get first minus second as no difference?
Please help.
|
If I understand you correctly, there are two cases of the problem, as follows:
1. Table1 and Table2 have the same number of rows, but different values: in this case Table1 MINUS Table2 will return the same number of rows as Table2 MINUS Table1, but the rows themselves will differ.
2. Different numbers of rows: in this case, Table1 MINUS Table2 will only return those rows that exist in Table1 and do not exist in Table2.
If you want to return rows that exist in Table2 and do not exist in Table1, you have to write Table2 MINUS Table1.
If you want to return all the differences, then you can use UNION ALL:
```
(Table1 MINUS Table2)
UNION ALL
(Table2 MINUS Table1)
```
|
You can refer the docs to understand the [MINUS operator](http://www.techonthenet.com/sql/minus.php):
> The Oracle MINUS operator is used to return all rows in the first
> SELECT statement that are not returned by the second SELECT statement.
> Each SELECT statement will define a dataset. The MINUS operator will
> retrieve all records from the first dataset and then remove from the
> results all records from the second dataset.
So if all the records of table1 exist in table2, then no records will be shown in the output. But when you reverse the tables, any records that exist only in table2 will be shown as the difference.
[](https://i.stack.imgur.com/6gBUO.png)
|
SQL MINUS showing no difference between first and second while shows difference between second and first
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I need to see only the current year rows from a table.
Would it be possible filter a timestamp column only by current year parameter, is there some function that can return this value?
```
SELECT * FROM mytable WHERE "MYDATE" LIKE CURRENT_YEAR
```
|
In PostgreSQL you can use this:
```
SELECT * FROM mytable WHERE date_part('year', mydate) = date_part('year', CURRENT_DATE);
```
The date\_part function is available in all PostgreSQL releases from current down to 7.1 (at least).
|
This is pretty old, but now you can do:
```
SELECT date_part('year', now()); -> 2020
```
In the place of `now` you can pass any timestamp. More on [Documentation](https://www.postgresql.org/docs/8.2/functions-datetime.html)
|
Postgresql query current year
|
[
"",
"sql",
"postgresql",
""
] |
Consider this data:
```
---+-----------+-------+--+
| id | name | grade | |
+----+-----------+-------+--+
| 13 | Maria | 10 | |
| 18 | Lorenzo | 10 | |
| 2 | Cissa | 10 | |
| 3 | Neto | 9 | |
| 15 | Gabriel | 9 | |
| 10 | Laura | 9 | |
| 12 | Joãozinho | 8 | |
| 16 | Sergio | 8 | |
| 8 | Adriele | 8 | |
| 6 | Jorgito | 8 | |
| 5 | Aline | 8 | |
| 1 | Cintia | 8 | |
| 19 | Fabiana | 7 | |
| 11 | Vinicius | 7 | |
| 9 | Tatiane | 7 | |
| 7 | Chico | 7 | |
| 4 | Marcos | 7 | |
| 14 | Pedro | 6 | |
| 17 | Mauricio | 6 | |
| 20 | Leonardo | 6 | |
+----+-----------+-------+--+
```
I need the students with the three biggest grades. I think I need to group the data by grade and limit to the top 3 groups.
`SELECT * FROM student GROUP BY grade LIMIT 3` only gives me 3 rows; that's not what I want.
I've tried to use HAVING to filter the groups, but without success. I don't want to set the filter grade > MAX(grade)-2, because theoretically I will not know the grades. But this filter didn't work anyway.
I'm using MySQL. Please help!
|
You can do this using a `join`:
```
select s.*
from student s join
(select grade
from student
group by grade
order by grade desc
limit 3
) g3
on s.grade = g3.grade;
```
In most databases, you can do this using `in`:
```
select s.*
from student s
where s.grade in (select grade
from student
group by grade
order by grade desc
limit 3
);
```
However, MySQL seems to reject this syntax.
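The `join` version can be verified in SQLite via Python (a subset of the sample data; accents dropped from the names for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER, name TEXT, grade INTEGER)")
conn.executemany("INSERT INTO student VALUES (?, ?, ?)", [
    (13, 'Maria', 10), (2, 'Cissa', 10), (3, 'Neto', 9), (10, 'Laura', 9),
    (12, 'Joaozinho', 8), (1, 'Cintia', 8), (19, 'Fabiana', 7), (14, 'Pedro', 6),
])

# The derived table g3 holds the three highest distinct grades;
# joining back keeps every student with one of those grades.
rows = conn.execute("""
    SELECT s.name, s.grade
    FROM student s JOIN
         (SELECT grade FROM student
          GROUP BY grade ORDER BY grade DESC LIMIT 3) g3
         ON s.grade = g3.grade
    ORDER BY s.grade DESC, s.id
""").fetchall()
print(rows)
```

Only the students with grades 10, 9 and 8 come back; Fabiana (7) and Pedro (6) are excluded.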
|
```
select s1.*
from students s1
join (select distinct grade from students
order by grade desc limit 3) s2 on s1.grade = s2.grade
```
Alternatively:
```
select *
from students
where grade >= (select distinct grade from students
order by grade desc limit 2,1)
```
|
Select groups with the three biggest values
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"limit",
""
] |
So this is table A
```
+------------+--------+----------+
| LineNumber | Pallet | Location |
+------------+--------+----------+
| 1 | a | X |
+------------+--------+----------+
| 2 | a | X |
+------------+--------+----------+
| 3 | b | Y |
+------------+--------+----------+
| 4 | b | Y |
+------------+--------+----------+
| 5 | b | Y |
+------------+--------+----------+
| 6 | c | Z |
+------------+--------+----------+
| 7 | c | Z |
+------------+--------+----------+
| 8 | c | Z |
+------------+--------+----------+
| 9 | d | Q |
+------------+--------+----------+
| 10 | d | Q |
+------------+--------+----------+
```
and this is table b
```
+-------------+----------+
| MaxPalCount | Location |
+-------------+----------+
| 2 | X |
+-------------+----------+
| 2 | Y |
+-------------+----------+
| 2 | Z |
+-------------+----------+
| 2 | Q |
+-------------+----------+
```
What I want to show is the line numbers that will exceed the MaxPalCount per location. For example, lines 3, 4 and 5 will go to location Y, but as you see the MaxPalCount of the location is only 2. The same goes for lines 6, 7 and 8.
So the query result that I want would be something like this.
```
+------------+--------+----------+
| LineNumber | Pallet | Location |
+------------+--------+----------+
| 5 | b | Y |
+------------+--------+----------+
| 8 | c | Z |
+------------+--------+----------+
```
The lines that exceed the limit will be shown. I know it is possible to do this by fetching row by row, but is it possible without using any looping method?
|
My dummy data are the same as @Backs'.
Also, I think it should partition on Location.
I have used an EXISTS clause instead of an inner join.
```
;WITH Cte
AS (
SELECT LineNumber
,Pallet
,Location
,ROW_NUMBER() OVER (
PARTITION BY t.Location ORDER BY t.LineNumber
) AS RowId
FROM @Test1 t
)
SELECT *
FROM CTE C
WHERE EXISTS (
SELECT MaxPalCount
FROM @Tsst2 t1
WHERE t1.location = c.Location
AND c.rowid > t1.MaxPalCount
)
```
|
```
DECLARE @Test1 TABLE (
LineNumber int,
Pallet nvarchar(10),
Location nvarchar(10)
)
DECLARE @Tsst2 TABLE (
MaxPalCount int,
Location nvarchar(10)
)
INSERT INTO @Test1
(LineNumber,Pallet,Location)
VALUES
(1, 'a', 'X'),
(2, 'a', 'X'),
(3, 'b', 'Y'),
(4, 'b', 'Y'),
(5, 'b', 'Y'),
(6, 'c', 'Z'),
(7, 'c', 'Z'),
(8, 'c', 'Z'),
(9, 'd', 'Q'),
(10, 'd', 'Q')
INSERT INTO @Tsst2
(MaxPalCount, Location)
VALUES
(2, 'X'),
(2, 'Y'),
(2, 'Z'),
(2, 'Q')
SELECT *
FROM
(
SELECT
*,
ROW_NUMBER()OVER (PARTITION BY t.Pallet ORDER BY t.LineNumber) AS RowId
FROM @Test1 t
) AS t
INNER JOIN @Tsst2 t2 ON t.Location = t2.Location
WHERE t.RowId > t2.MaxPalCount
```
|
How to query only rows that will exceed to the max quantity
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table with duplicates like this:
```
ClassId Name Days SpecHours
1 Jack 250 130
1 Jack 250 130
1 Jack 250 120
2 Mike 300 130
…
```
What I want is to sum totalSpecHours for non-duplicates. My expected output is:
```
ClassId Name Days totalSpecHours
1 Jack 250 250
2 Mike 300 130
```
I am not able to calculate the totalSpecHours. I am trying to use something like the code below, but it doesn't work correctly.
```
totalSpecHours = sum(distinct SpecHours)
```
I tried the expression below, which also doesn't work, but I guess I need something like it. I'd appreciate it if someone could help. Thanks.
```
totalSpecHours= (sum(X.SpecHours in (select mt.ClassId, mt.SpecHours from myTable mt group by mt.ClassId, mt.SpecHours) as X))
```
Cheers.
Edit: Thank you for the answers; most were correct. The first answer has been selected as the accepted one. I appreciate them all!
|
Try this query:
```
select
classID,
Name,
Days,
sum(specHours) as TotalSpecHours
from
(
select distinct ClassId, Name, Days, SpecHours from myTable )t
group by classID, Name, Days
```
**Logic:**
We take distinct values in the inner query, and then over that result set we do a GROUP BY and compute the sum.
Please note that the inner query is evaluated only once, so there is no performance issue.
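As a quick sanity check, the same distinct-then-sum pattern run in Python against SQLite, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE myTable (ClassId INT, Name TEXT, Days INT, SpecHours INT);
INSERT INTO myTable VALUES
 (1,'Jack',250,130),(1,'Jack',250,130),(1,'Jack',250,120),(2,'Mike',300,130);
""")
# DISTINCT collapses the exact-duplicate row first; SUM then adds the rest.
rows = con.execute("""
SELECT ClassId, Name, Days, SUM(SpecHours) AS totalSpecHours
FROM (SELECT DISTINCT ClassId, Name, Days, SpecHours FROM myTable) t
GROUP BY ClassId, Name, Days
ORDER BY ClassId
""").fetchall()
print(rows)  # [(1, 'Jack', 250, 250), (2, 'Mike', 300, 130)]
```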
|
Shouldn't something like this work?
```
SELECT ClassId, Name, Days, SUM(SpecHours) AS totalSpecHours
FROM (
SELECT DISTINCT ClassId, Name, Days, SpecHours
FROM Table
    ) t -- note: "outer" is a reserved word in T-SQL, so use a different alias
GROUP BY ClassId, Name, Days
```
|
Summing a column value in a table including duplicates
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I tried the following example in Oracle 11g: <http://joshualande.com/filters-joins-aggregations/>
```
SELECT c.recipe_name,
COUNT(a.ingredient_id),
SUM(a.amount*b.ingredient_price)
FROM recipe_ingredients a
JOIN ingredients b
ON a.ingredient_id = b.ingredient_id
JOIN recipes c
ON a.recipe_id = c.recipe_id
GROUP BY a.recipe_id;
```
I get the SQL Error: `ORA-00979: not a GROUP BY expression...`
The tables used in the query are the following:
```
CREATE TABLE recipes (
recipe_id INT NOT NULL,
recipe_name VARCHAR(30) NOT NULL,
PRIMARY KEY (recipe_id),
UNIQUE (recipe_name)
);
INSERT INTO RECIPES (RECIPE_ID, RECIPE_NAME) VALUES (1, 'Tacos');
INSERT INTO RECIPES (recipe_id, recipe_name) VALUES (2, 'Tomato Soup');
INSERT INTO RECIPES (recipe_id, recipe_name) VALUES (3, 'Grilled Cheese');
CREATE TABLE ingredients (
ingredient_id INT NOT NULL,
ingredient_name VARCHAR(30) NOT NULL,
ingredient_price INT NOT NULL,
PRIMARY KEY (ingredient_id),
UNIQUE (ingredient_name)
);
INSERT INTO ingredients (ingredient_id, ingredient_name, ingredient_price) VALUES (1, 'Beef', 5);
INSERT INTO ingredients (ingredient_id, ingredient_name, ingredient_price) VALUES (2, 'Lettuce', 1);
INSERT INTO ingredients (ingredient_id, ingredient_name, ingredient_price) VALUES (3, 'Tomatoes', 2);
INSERT INTO ingredients (ingredient_id, ingredient_name, ingredient_price) VALUES (4, 'Taco Shell', 2);
INSERT INTO ingredients (ingredient_id, ingredient_name, ingredient_price) VALUES (5, 'Cheese', 3);
INSERT INTO ingredients (ingredient_id, ingredient_name, ingredient_price) VALUES (6, 'Milk', 1);
INSERT INTO ingredients (ingredient_id, ingredient_name, ingredient_price) VALUES (7, 'Bread', 2);
CREATE TABLE recipe_ingredients (
recipe_id int NOT NULL,
ingredient_id INT NOT NULL,
amount INT NOT NULL,
PRIMARY KEY (recipe_id,ingredient_id)
);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (1,1,1);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (1,2,2);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (1,3,2);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (1,4,3);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (1,5,1);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (2,3,2);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (2,6,1);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (3,5,1);
INSERT INTO recipe_ingredients (recipe_id, ingredient_id, amount) VALUES (3,7,2);
```
I know this question has been asked several times already, but please explain it to me through this example.
|
In the SELECT clause, you can only refer to expression that either appear in the GROUP BY clause or are aggregations (such as SUM). `c.recipe_name` does not qualify as such.
You might know that grouping by `a.recipe_id` will lead to a unique result for `c.recipe_name` (within each group). And Oracle might even be able to derive this information as well. But SQL is more strict and requires you to put the expression in the GROUP BY clause.
So, just write:
```
SELECT c.recipe_name,
COUNT(a.ingredient_id),
SUM(a.amount*b.ingredient_price)
FROM recipe_ingredients a
JOIN ingredients b
ON a.ingredient_id = b.ingredient_id
JOIN recipes c
ON a.recipe_id = c.recipe_id
GROUP BY a.recipe_id, c.recipe_name;
```
|
You have to also put field `c.recipe_name` in the `GROUP BY` clause:
```
SELECT c.recipe_name,
COUNT(a.ingredient_id) AS cnt,
SUM(a.amount*b.ingredient_price) AS sum
FROM recipe_ingredients a
JOIN ingredients b
ON a.ingredient_id = b.ingredient_id
JOIN recipes c
ON a.recipe_id = c.recipe_id
GROUP BY c.recipe_name, a.recipe_id;
```
The problem with your query is that a non-aggregated column, like `c.recipe_name`, appears in the `SELECT` clause.
**Output:**
```
recipe_name cnt sum
------------------------
Grilled Cheese 2 7
Tacos 5 20
Tomato Soup 2 5
```
|
ORA-00979: not a GROUP BY expression with a simple example
|
[
"",
"sql",
"oracle",
"oracle11g",
"group-by",
"ora-00979",
""
] |
I have sequences named `table_name_sq` in PostgreSQL for all tables.
For example;
```
seqtest-> seqtest_sq
seqtest2-> seqtest2_sq
```
I need to change all sequences in the database.
(I cannot run a query for every table manually.)
I can get the table names and build the sequence name string:
```
select table_name || '_sq' as sequence_name from information_schema.tables where table_catalog='test' and table_schema='public'
```
I can change the sequence value for a specified table:
```
select setval('seqtest_sq',(select max(id) from seqtest)+1)
```
But I cannot merge these two.
I think I should use a FOR loop, but I could not make that work.
**CODE IS:**
```
DO $$
DECLARE
i RECORD;
BEGIN
FOR i IN (select table_name from information_schema.tables) LOOP
EXECUTE 'SELECT count(*) FROM ' || i;
END LOOP;
END$$;
```
**OUTPUT is:**
```
ERROR: syntax error at or near ")"
LINE 1: SELECT count(*) FROM (seqtest)
^
QUERY: SELECT count(*) FROM (seqtest)
CONTEXT: PL/pgSQL function inline_code_block line 6 at EXECUTE
********** Error **********
ERROR: syntax error at or near ")"
SQL state: 42601
Context: PL/pgSQL function inline_code_block line 6 at EXECUTE
```
I also printed the table names with a FOR loop, but the table names come wrapped in parentheses.
**HERE is CODE**
```
DO $$
DECLARE
i RECORD;
BEGIN
FOR i IN (select table_name from information_schema.tables where table_catalog='test' and table_schema='public') LOOP
raise notice 'Value: %',i;
END LOOP;
END$$;
```
**HERE is OUTPUT:**
```
NOTICE: Value: (seqtest)
NOTICE: Value: (seqtest2)
```
I think it would be enough for me to get rid of these parentheses.
Would you help me build a proper loop, or find an easy way to achieve this?
|
Here is the solution, with help from @Nick Barnes and @a\_horse\_with\_no\_name.
If someone needs an idea of how to fix sequences, they can use this script.
```
DO $$
DECLARE
i TEXT;
BEGIN
FOR i IN (select table_name from information_schema.tables where table_catalog='YOUR_DATABASE_NAME' and table_schema='public') LOOP
EXECUTE 'Select setval('''||i||'_sq'', (SELECT max(id) as a FROM ' || i ||')+1);';
END LOOP;
END$$;
```
|
Here is the script I use.
```
DO $$
DECLARE
i TEXT;
BEGIN
FOR i IN (
SELECT 'SELECT SETVAL('
|| quote_literal(quote_ident(PGT.schemaname) || '.' || quote_ident(S.relname))
|| ', COALESCE(MAX(' ||quote_ident(C.attname)|| '), 1) ) FROM '
|| quote_ident(PGT.schemaname)|| '.'||quote_ident(T.relname)|| ';'
FROM pg_class AS S,
pg_depend AS D,
pg_class AS T,
pg_attribute AS C,
pg_tables AS PGT
WHERE S.relkind = 'S'
AND S.oid = D.objid
AND D.refobjid = T.oid
AND D.refobjid = C.attrelid
AND D.refobjsubid = C.attnum
AND T.relname = PGT.tablename
) LOOP
EXECUTE i;
END LOOP;
END $$;
```
|
postgresql change all sequences with for loop
|
[
"",
"sql",
"postgresql",
"for-loop",
""
] |
I'm writing a stored procedure.
This procedure takes 4 arguments which will be used in the `WHERE` clause. The problem is that these parameters can be empty.
I'm trying to write something like this:
```
select * from Books
If(@param1 <> "")
add where title =@param1
to the clause
```
But I have no idea how to write it. Is it even possible?
|
This type of query is called a [catch-all query](http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/). There are several ways to do this, one of which is using combinations of `AND` and `OR` conditions:
```
SELECT *
FROM Books
WHERE
(@param1 = '' OR title = @param1)
```
Another way is to use dynamic sql:
```
DECLARE @sql NVARCHAR(MAX) = ''
SELECT @sql = 'SELECT * FROM Books WHERE 1 = 1'
IF @param1 <> '' BEGIN
SELECT @sql = @sql + ' AND title = @param1'
END
EXEC sp_executesql
@sql,
N'@param1 VARCHAR(10)',
@param1 = @param1
```
Additional reading:
* Erland Sommarskog's article on [dynamic search conditions.](http://www.sommarskog.se/dyn-search-2008.html)
|
You can try like this
```
Select * from Books where (Title=@title or @title='')
```
|
Conditionally add where to the statement
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with an id and a name.
I'm getting a list of ids and I need their names.
To my knowledge I have two options.
Create a for loop in my code which executes:
```
SELECT name from table where id=x
```
where x is always a number.
or write a single query like this:
```
SELECT name from table where id=1 OR id=2 OR id=3
```
The list of ids and names is enormous, so I think you wouldn't want that.
The problem with the ids is that the **id is not always a number** but a randomly generated id containing numbers and characters, so **ranges** are not a solution.
I'm asking this from a performance point of view.
What's a nice solution for this problem?
|
SQLite has [limits](http://www.sqlite.org/c3ref/limit.html) on the size of a query, so if there is no known upper limit on the number of IDs, you cannot use a single query.
When you are reading multiple rows (note: `IN (1, 2, 3)` is easier than many ORs), you don't know to which ID a name belongs unless you also SELECT that, or sort the results by the ID.
There should be no noticeable difference in performance; SQLite is an embedded database without client/server communication overhead, and the query does not need to be parsed again if you use a prepared statement.
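To make that concrete, here is a sketch in Python with SQLite of the single-query approach: `IN` with `?` placeholders works as a prepared statement even when the ids are arbitrary strings, and selecting the id alongside the name keeps the mapping. (Table and data are invented for illustration.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id TEXT PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("a1", "Alice"), ("b2", "Bob"), ("c3", "Carol")])

ids = ["a1", "c3"]                       # ids need not be numeric
placeholders = ",".join("?" * len(ids))  # "?,?"
rows = con.execute(
    f"SELECT id, name FROM t WHERE id IN ({placeholders}) ORDER BY id",
    ids).fetchall()
print(dict(rows))  # {'a1': 'Alice', 'c3': 'Carol'}
```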
|
A "nice" solution is using the `IN`operator:
```
SELECT name from table where id in (1,2,3)
```
|
SQL or statement vs multiple select queries
|
[
"",
"sql",
"sqlite",
""
] |
This is my query:
```
SELECT d.DeptName, CASE WHEN e.WorkCity is NULL THEN 'Mobile'
ELSE 'Stationary'
END AS EmpType,
AVG(e.MonthlyPayScale) AS AvgMnthPay
FROM Department d, Employee e
WHERE d.DeptId = e.DeptId
GROUP BY d.DeptName, EmpType
ORDER BY d.DeptName, EmpType
```
This is my output which seems to be right:
```
deptname | emptype | avgmnthpay
----------+------------+------------------------
EvanDept | Mobile | 7500.0000000000000000
MaxDept | Stationary | 11250.0000000000000000
PaulDept | Mobile | 5000.0000000000000000
PaulDept | Stationary | 12500.0000000000000000
(4 rows)
```
It groups by department name and mobile/stationary employees, and calculates the average monthly pay of the employees of each group.
From what I understand though, this is the ordering of SQL queries:
```
FROM
WHERE
GROUP BY
SELECT
ORDER BY
```
So the GROUP BY statement happens **before** the SELECT statement. So why does:
GROUP BY d.DeptName, **EmpType**
know anything about EmpType, which is declared in the CASE expression inside the SELECT statement?
So basically, **why** does my code work when GROUP BY is evaluated before SELECT?
|
The processing that you describe is often used to explain the *compilation* of a query and what identifiers are known when. SQL Server is very explicit about this, and their rules are documented [here](https://msdn.microsoft.com/en-us/library/ms189499.aspx).
Just because SQL Server does it that way, does not mean that all databases do it that way.
So, different databases are better or worse at allowing column aliases in the various clauses. MySQL and Postgres allow column aliases in the `GROUP BY` and `HAVING` clauses; Oracle and SQL Server do not. All databases allow column aliases in the `ORDER BY`. In fact some, such as Hive, require a column alias and don't allow aggregation functions.
I'm sure that the ANSI standard has something to say on this topic. Whether it is actually unambiguous is another question, but different databases have different personalities.
And, this discussion refers to the *lexical* analysis of the query. The actual execution order has nothing to do with the original statements. Most databases engines use a dataflow engine and there is not a one-to-one correspondence between the dataflow operators and the SQL constructs.
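To illustrate the "personality" difference, SQLite (like MySQL and Postgres) accepts the column alias in `GROUP BY`. A runnable sketch with a reduced, assumed version of the question's Employee data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (DeptName TEXT, WorkCity TEXT, MonthlyPayScale REAL);
INSERT INTO Employee VALUES
 ('PaulDept', NULL, 5000), ('PaulDept', 'Oslo', 12500),
 ('EvanDept', NULL, 7000), ('EvanDept', NULL, 8000);
""")
# EmpType is defined in SELECT, yet GROUP BY may reference it here.
rows = con.execute("""
SELECT DeptName,
       CASE WHEN WorkCity IS NULL THEN 'Mobile' ELSE 'Stationary' END AS EmpType,
       AVG(MonthlyPayScale) AS AvgMnthPay
FROM Employee
GROUP BY DeptName, EmpType
ORDER BY DeptName, EmpType
""").fetchall()
print(rows)
# [('EvanDept', 'Mobile', 7500.0),
#  ('PaulDept', 'Mobile', 5000.0), ('PaulDept', 'Stationary', 12500.0)]
```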
|
Do the `CASE` expression in a derived table:
```
select DeptName, EmpType, AVG(MonthlyPayScale) AS AvgMnthPay
FROM
(
SELECT d.DeptName as DeptName,
CASE WHEN e.WorkCity is NULL THEN 'Mobile'
ELSE 'Stationary'
END AS EmpType,
e.MonthlyPayScale as MonthlyPayScale
FROM Department d
JOIN Employee e ON d.DeptId = e.DeptId
) AS sub
GROUP BY DeptName, EmpType
ORDER BY DeptName, EmpType
```
I also switched to modern, explicit `JOIN` syntax. It is easier to write (without errors), easier to read and maintain, and also easier to convert to an outer join if needed!
|
Using SELECT'd Case Statement's columnName in GROUP BY
|
[
"",
"sql",
"postgresql",
""
] |
I'm working on a system to manage students' attendance in class. I created two tables: student (student\_id, name, mail, ...) and course (course\_id, name, lecturer, ...). Now my question is: how should I keep a record of which students are taking which courses?
Should I create another table for each course with this structure:
course\_id, lecturer, student\_1, student\_2, student\_3, ...
or maybe there is a better solution for creating this relation?
Thanks
**UPDATE:** I should have mentioned that a student can take several courses.
|
Since there is a many-to-many relationship between your tables (every student can take many courses, and each course can be taken by multiple students), you need an intermediary table which holds the primary keys of both tables.
```
coursestudent(course_id,student_id)
```
with FOREIGN KEYs to the respective tables.
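A minimal sketch of that design, runnable in Python with SQLite (the table and column names follow the suggestion above; the sample rows are assumptions for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (student_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE course  (course_id  INTEGER PRIMARY KEY, name TEXT);
-- Junction table: one row per (course, student) enrolment.
CREATE TABLE coursestudent (
    course_id  INTEGER REFERENCES course(course_id),
    student_id INTEGER REFERENCES student(student_id),
    PRIMARY KEY (course_id, student_id)
);
INSERT INTO student VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO course  VALUES (10, 'Math'), (20, 'History');
INSERT INTO coursestudent VALUES (10, 1), (10, 2), (20, 1);
""")
rows = con.execute("""
SELECT c.name, s.name
FROM coursestudent cs
JOIN course  c ON c.course_id  = cs.course_id
JOIN student s ON s.student_id = cs.student_id
ORDER BY c.name, s.name
""").fetchall()
print(rows)  # [('History', 'Ann'), ('Math', 'Ann'), ('Math', 'Bob')]
```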
|
It depends. If a student can have multiple courses and a course belongs to multiple students, you want to make a table that contains id, course\_id (FOREIGN KEY), and student\_id (FOREIGN KEY).
If a student can have only one course, but a course can be followed by multiple students, you probably want to add course\_id to student as a foreign key.
|
good practice in MySQL DB structure
|
[
"",
"mysql",
"sql",
""
] |
I'm using Apache Derby 10.10.
I have a list of participants and would like to calculate their rank in their country, like this:
```
| Country | Participant | Points | country_rank |
|----------------|---------------------|--------|--------------|
| Australia | Bridget Ciriac | 1 | 1 |
| Australia | Austin Bjorklun | 4 | 2 |
| Australia | Carrol Motto | 7 | 3 |
| Australia | Valeria Seligma | 8 | 4 |
| Australia | Desmond Miyamot | 27 | 5 |
| Australia | Maryjane Digma | 33 | 6 |
| Australia | Kena Elmendor | 38 | 7 |
| Australia | Emmie Hicke | 39 | 8 |
| Australia | Kaitlyn Mund | 50 | 9 |
| Australia | Alisia Vitaglian | 65 | 10 |
| Australia | Anika Bulo | 65 | 11 |
| UK | Angle Ifil | 2 | 1 |
| UK | Demetrius Buelo | 12 | 2 |
| UK | Ermelinda Mell | 12 | 3 |
| UK | Adeline Pee | 21 | 4 |
| UK | Alvera Cangelos | 23 | 5 |
| UK | Keshia Mccalliste | 23 | 6 |
| UK | Alayna Rashi | 24 | 7 |
| UK | Malinda Mcfarlan | 25 | 8 |
| United States | Gricelda Quirog | 3 | 1 |
| United States | Carmina Britto | 5 | 2 |
| United States | Noemi Blase | 6 | 3 |
| United States | Britta Swayn | 8 | 4 |
| United States | An Heidelber | 12 | 5 |
| United States | Maris Padill | 21 | 6 |
| United States | Rachele Italian | 21 | 7 |
| United States | Jacquiline Speake | 28 | 8 |
| United States | Hipolito Elami | 45 | 9 |
| United States | Earl Sayle | 65 | 10 |
| United States | Georgeann Ves | 66 | 11 |
| United States | Conchit Salli | 77 | 12 |
```
The schema looks like this ([sqlfiddle](http://sqlfiddle.com/#!5/f48f8/4)):
```
create table Country(
id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,
name varchar(255),
PRIMARY KEY (id)
);
create table Team(
id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,
country_id int not null,
PRIMARY KEY (id),
FOREIGN KEY (country_id) REFERENCES Country(id)
);
create table Participant(
id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,
team_id int not null,
name varchar(100),
points int,
PRIMARY KEY (id),
FOREIGN KEY (team_id) REFERENCES Team(id)
);
```
This is what I have tried:
```
select
Country.name,
Participant.name,
Participant.points,
ROW_NUMBER() OVER(order by Country.name, Participant.points) as country_rank
from Country
join Team
on Country.id = Team.country_id
join Participant
on Team.id = Participant.team_id;
```
But according to the [Apache Derby documentation](https://db.apache.org/derby/docs/10.10/ref/index.html), the OVER() clause doesn't take any arguments.
Does anyone have a way to achieve the country rank?
|
**SQL**
```
SELECT c.name AS Country,
p.name AS Participant,
p.points AS Points,
(SELECT COUNT(*)
FROM Participant p2
JOIN Team t2 ON p2.team_id = t2.id
WHERE t2.country_id = t.country_id
AND (p2.points < p.points
OR p2.points = p.points AND p2.name <= p.name)) AS country_rank
FROM Country c
JOIN Team t ON c.id = t.country_id
JOIN Participant p ON t.id = p.team_id
ORDER BY c.name, p.points, p.name;
```
**Online Demo**
SQL Fiddle demo: <http://sqlfiddle.com/#!5/f48f8/14>
**Explanation**
A simple ANSI-SQL subselect can be used to do the same job, counting the number of records for participants in the same country with a lower score or with the same score and a name that is alphabetically no higher.
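The counting idea can be checked in isolation; below is a sketch in Python with SQLite on a tiny, assumed dataset (collapsed to one table, since the joins don't change the ranking logic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE p (country TEXT, name TEXT, points INT);
INSERT INTO p VALUES
 ('UK','Demetrius',12),('UK','Ermelinda',12),('UK','Angle',2),
 ('AU','Carrol',7),('AU','Bridget',1);
""")
# Rank = number of same-country rows with fewer points, or equal points
# and a name that sorts no later (the tie-breaker).
rows = con.execute("""
SELECT country, name, points,
       (SELECT COUNT(*) FROM p p2
        WHERE p2.country = p.country
          AND (p2.points < p.points
               OR (p2.points = p.points AND p2.name <= p.name))) AS country_rank
FROM p
ORDER BY country, points, name
""").fetchall()
print(rows)
# [('AU', 'Bridget', 1, 1), ('AU', 'Carrol', 7, 2),
#  ('UK', 'Angle', 2, 1), ('UK', 'Demetrius', 12, 2), ('UK', 'Ermelinda', 12, 3)]
```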
|
Consider a non-window-function SQL query that uses a correlated aggregate COUNT subquery. Because the grouping column (`Country.name`) is not in the same table as the ranking criterion (`Participant.points`), we need to run the same joins in the subquery, renaming the table aliases so that the inner and outer queries can be compared properly.
Now of course, in a perfect world that would be it, but we must also account for tied points. Therefore, another very similar subquery (the tie breaker) is added to the first one. This second nested query matches the inner and outer query's `Country.name` and `Participant.points` but ranks by alphabetical order of `Participant.name`.
```
SELECT
Country.name AS Country,
Participant.name AS Participant,
Participant.points,
(SELECT Count(*) + 1
FROM Country subC
INNER JOIN Team subT
ON subC.id = subT.country_id
INNER JOIN Participant subP
ON subT.id = subP.team_id
WHERE subC.name = Country.name
AND subP.points < Participant.points)
+
(SELECT Count(*)
FROM Country subC
INNER JOIN Team subT
ON subC.id = subT.country_id
INNER JOIN Participant subP
ON subT.id = subP.team_id
WHERE subC.name = Country.name
AND subP.points = Participant.points
AND subP.name < Participant.name) As country_rank
FROM Country
INNER JOIN Team
ON Country.id = Team.country_id
INNER JOIN Participant
ON Team.id = Participant.team_id
ORDER BY Country.name, Participant.points;
```
|
Query to rank rows in groups
|
[
"",
"sql",
"derby",
"rank",
"row-number",
"groupwise-maximum",
""
] |
When I try to run this query:
```
select branch_no, max (avg_salary)
from (select allocatedto, avg (salary)
from staff, worker
where staff.staff_no = worker.staff_no
group by allocatedto)
as branch_avg (branch_no, avg_salary);
```
I get this error:
```
Error: near "(": syntax error
```
|
```
select my_alias1,my_alias2 from (select col1,col2,...) as A (my_alias1,my_alias2)
```
The above syntax is valid in `SQL Server`.
To alias the columns in a derived table, you need to use `AS` inside the derived table. Try this:
```
SELECT Max (avg_salary)
FROM (SELECT allocatedto AS branch_no,
Avg (salary) AS avg_salary
FROM staff
INNER JOIN worker
ON staff.staff_no = worker.staff_no
GROUP BY allocatedto) AS branch_avg;
```
Also, start using `INNER JOIN` instead of the old-style comma-separated join.
|
In SQLite, an AS clause on a subquery cannot assign column names (and it is not needed in the first place).
To rename the output columns of a (sub)query, you must use AS in the SELECT clause:
```
SELECT branch_no,
max(avg_salary) AS avg_salary
FROM (...);
```
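To make the difference concrete, here is the SQLite-friendly version run end to end in Python (the table contents are assumptions; note that SQLite does not even require an alias on the derived table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE staff  (staff_no INT, allocatedto INT);
CREATE TABLE worker (staff_no INT, salary REAL);
INSERT INTO staff  VALUES (1, 100), (2, 100), (3, 200);
INSERT INTO worker VALUES (1, 1000), (2, 3000), (3, 1500);
""")
# Column names are assigned with AS inside the subquery's SELECT clause.
# With a bare MAX(), SQLite takes branch_no from the row holding the max.
row = con.execute("""
SELECT branch_no, MAX(avg_salary)
FROM (SELECT allocatedto AS branch_no, AVG(salary) AS avg_salary
      FROM staff JOIN worker ON staff.staff_no = worker.staff_no
      GROUP BY allocatedto)
""").fetchone()
print(row)  # (100, 2000.0)
```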
|
SQL syntax error: near "("
|
[
"",
"sql",
"sqlite",
"subquery",
"derived-table",
""
] |
I'd like to find a way to drop multiple indexes from various tables with a single query in SQL Server. I can find the index name and table by using the query below, but I'm a bit lost on how I should go about dropping the indexes it finds.
```
SELECT
so.name AS TableName
, si.name AS IndexName
, si.type_desc AS IndexType
FROM
sys.indexes si
JOIN sys.objects so ON si.[object_id] = so.[object_id]
WHERE
so.type = 'U' --Only get indexes for User Created Tables
AND si.name IS NOT NULL
and si.name like 'NCI%'
ORDER BY
so.name, si.type
```
|
You can construct your drop script starting from your `SELECT`:
```
DECLARE @SQL NVARCHAR(MAX) = N''
SELECT @SQL = @SQL + 'DROP INDEX ' + so.name + '.' + si.name + ';' + CHAR(13) + CHAR(10)
FROM
sys.indexes si
JOIN sys.objects so ON si.[object_id] = so.[object_id]
WHERE
so.type = 'U' --Only get indexes for User Created Tables
AND si.name IS NOT NULL
and si.name like '%%'
ORDER BY
so.name, si.type
PRINT @SQL
EXEC (@SQL)
```
`@SQL` is built incrementally and will contain all `DROP` statements separated by `;` so that they execute in a single batch. `DROP`ing tables can be done in a similar manner.
NOTE: your query also catches indexes associated with `PK`s, and these cannot be dropped directly, as they are doubled by a constraint (PK = constraint + index).
|
This is one of those times in life you generally will want to use a cursor, because your statement is dynamic.
```
DECLARE drop_cur cursor for
SELECT
QUOTENAME(so.name) AS TableName,
QUOTENAME(si.name) AS IndexName,
si.type_desc AS IndexType
FROM
sys.indexes si
JOIN sys.objects so ON si.[object_id] = so.[object_id]
WHERE
so.type = 'U' --Only get indexes for User Created Tables
AND si.name IS NOT NULL
and si.name like 'NCI%'
ORDER BY
so.name, si.type
OPEN drop_cur
DECLARE @tablename NVARCHAR(MAX), @indexname NVARCHAR(MAX), @indextype NVARCHAR(MAX);
FETCH NEXT FROM drop_cur INTO @tableName, @indexName, @indextype
WHILE @@fetch_status = 0 BEGIN
DECLARE @dropSql NVARCHAR(MAX) = 'drop index ' + @tablename + '.' + @indexname
PRINT @dropSql
--EXEC sp_executesql @dropSql --Uncomment to perform the drop.
FETCH NEXT FROM drop_cur INTO @tableName, @indexName, @indextype
END
CLOSE drop_cur
DEALLOCATE drop_cur
```
|
Drop index if name like
|
[
"",
"sql",
"sql-server",
""
] |
I have the following table
```
Date Value promo item
01/01/2011 626 0 1230
01/02/2011 231 1 1230
01/03/2011 572 1 1230
01/04/2011 775 1 1230
01/05/2011 660 1 1230
01/06/2011 662 1 1230
01/07/2011 541 1 1230
01/08/2011 849 1 1230
01/09/2011 632 1 1230
01/10/2011 906 1 1230
01/11/2011 961 1 1230
01/12/2011 361 0 1230
01/01/2012 461 0 1230
01/02/2012 928 1 1230
01/03/2012 855 0 1230
01/04/2012 605 0 1230
01/05/2012 83 0 1230
01/06/2012 44 0 1230
01/07/2012 382 0 1230
01/08/2012 862 0 1230
01/09/2012 549 0 1230
01/10/2012 632 0 1230
01/11/2012 2 0 1230
01/12/2012 26 0 1230
```
I am trying to calculate the average, sum(SoldAmt) / number of days between the min date and max date, rolling back over the first 28 rows (4 weeks) in which promo = 1, by article.
The smoothing period is 4 weeks back, regardless of the discount day.
That is to say, if an item is on promotion for a week during the last 4 weeks, the smoothing is over 5 weeks, without regard to the promotion of the sales week.
How do I calculate over the first 4 weeks / 28 rows of data, ordered by time, for promo = 1?
I tried:
```
CREATE TABLE #RollingTotalsExample
(
[Date] DATE
,[Value] INT
,promo float
,item int
);
INSERT INTO #RollingTotalsExample
SELECT '2011-01-01',626,1,1230
UNION ALL SELECT '2011-02-01',231,1,1230 UNION ALL SELECT '2011-03-01',572,1,1230
UNION ALL SELECT '2011-04-01',775,1,1230 UNION ALL SELECT '2011-05-01',660,1,1230
UNION ALL SELECT '2011-06-01',662,1,1230 UNION ALL SELECT '2011-07-01',541,1,1230
UNION ALL SELECT '2016-08-01',849,1,1230 UNION ALL SELECT '2016-09-01',632,1,1230
UNION ALL SELECT '2016-10-01',906,1,1230 UNION ALL SELECT '2016-11-01',961,1,1230
UNION ALL SELECT '2016-04-01',775,1,1230 UNION ALL SELECT '2016-05-01',660,1,1230
UNION ALL SELECT '2016-06-01',662,1,1230 UNION ALL SELECT '2016-07-01',541,1,1230
UNION ALL SELECT '2016-08-01',849,1,1230 UNION ALL SELECT '2016-09-01',632,1,1230
UNION ALL SELECT '2016-10-01',906,1,1230 UNION ALL SELECT '2016-11-01',961,1,1230
UNION ALL SELECT '2016-12-01',361,0,1230 UNION ALL SELECT '2012-01-01',461,0,1230
UNION ALL SELECT '2012-02-01',928,0,1230 UNION ALL SELECT '2012-03-01',855,0,1230
UNION ALL SELECT '2012-04-01',605,0,1230 UNION ALL SELECT '2012-05-01',83,0,1230
UNION ALL SELECT '2012-06-01',44,0,1230 UNION ALL SELECT '2012-07-01',382,0,1230
UNION ALL SELECT '2012-08-01',862,0,1230 UNION ALL SELECT '2012-09-01',549,0,1230
UNION ALL SELECT '2012-10-01',632,0,1230 UNION ALL SELECT '2012-11-01',2,0,1230
UNION ALL SELECT '2012-12-01',26,0,1230;
SELECT * FROM #RollingTotalsExample;
-- Rolling twelve month total by using INNER JOIN
SELECT a.[Date]
,Value=MAX(CASE WHEN a.[Date] = b.[Date] THEN a.Value END)
,Rolling12Months=CASE
WHEN ROW_NUMBER() OVER (ORDER BY a.[Date]) < 12
THEN NULL
ELSE SUM(b.Value)
END
FROM #RollingTotalsExample a
JOIN #RollingTotalsExample b ON b.[Date] BETWEEN DATEADD(month, -11, a.[Date]) AND a.[Date]
GROUP BY a.[Date]
ORDER BY a.[Date];
```
Now, how do I modify the query in order to calculate the average, sum(SoldAmt) / number of days between the min date and max date, rolling back over the first 28 rows of data ordered by time, for promo = 1, by article?
|
Here is an alternate approach that requires LAG(), which is available from SQL Server 2012. Note that the sample data does not contain "28 distinct days" prior to each date. Also, the actual data type being used isn't known (date/smalldatetime/datetime/datetime2), nor is it known whether truncating the time from the date is needed. So, with some caveats, this approach creates a series of date ranges for the 28 distinct dates (but if the data doesn't provide these, then they are 28 elapsed days). Here is a [sqlfiddle demo](http://sqlfiddle.com/#!15/0aa5f/1)
**PostgreSQL 9.3 Schema Setup**:
(as SQL Server was not available at SQL Fiddle)
```
CREATE TABLE Table1
(theDate timestamp, Value int, promo int, item int)
;
INSERT INTO Table1
(theDate, Value, promo, item)
VALUES
('2011-01-01 00:00:00', 626, 0, 1230),
('2011-01-02 00:00:00', 231, 1, 1230),
('2011-01-03 00:00:00', 572, 1, 1230),
('2011-01-04 00:00:00', 775, 1, 1230),
('2011-01-05 00:00:00', 660, 1, 1230),
('2011-01-06 00:00:00', 662, 1, 1230),
('2011-01-07 00:00:00', 541, 1, 1230),
('2011-01-08 00:00:00', 849, 1, 1230),
('2011-01-09 00:00:00', 632, 1, 1230),
('2011-01-10 00:00:00', 906, 1, 1230),
('2011-01-11 00:00:00', 961, 1, 1230),
('2011-01-12 00:00:00', 361, 0, 1230),
('2012-01-01 00:00:00', 461, 0, 1230),
('2012-01-02 00:00:00', 928, 1, 1230),
('2012-01-03 00:00:00', 855, 0, 1230),
('2012-01-04 00:00:00', 605, 0, 1230),
('2012-01-05 00:00:00', 83, 0, 1230),
('2012-01-06 00:00:00', 44, 0, 1230),
('2012-01-07 00:00:00', 382, 0, 1230),
('2012-01-08 00:00:00', 862, 0, 1230),
('2012-01-09 00:00:00', 549, 0, 1230),
('2012-01-10 00:00:00', 632, 0, 1230),
('2012-01-11 00:00:00', 2, 0, 1230),
('2012-01-12 00:00:00', 26, 0, 1230)
;
```
**Query 1**:
```
select
t1.item
, ranges.theStart
, ranges.theEnd
, sum(t1.value)
, sum(t1.value) / 28 avg
from (
select
coalesce(lag(theDay,28) over(order by theDay) , theDay - INTERVAL '28 DAYS') as theStart
, theDay as theEnd
from (
select distinct cast(thedate as date) theDay from Table1
) days
) ranges
inner join table1 t1 on theDate between ranges.theStart and ranges.theEnd
group by
t1.item
, ranges.theStart
, ranges.theEnd
```
**[Results](http://sqlfiddle.com/#!15/0aa5f/1/0)**:
```
| item | thestart | theend | sum | avg |
|------|----------------------------|---------------------------|------|-----|
| 1230 | December, 04 2010 00:00:00 | January, 01 2011 00:00:00 | 626 | 22 |
| 1230 | December, 05 2010 00:00:00 | January, 02 2011 00:00:00 | 857 | 30 |
| 1230 | December, 06 2010 00:00:00 | January, 03 2011 00:00:00 | 1429 | 51 |
| 1230 | December, 07 2010 00:00:00 | January, 04 2011 00:00:00 | 2204 | 78 |
| 1230 | December, 08 2010 00:00:00 | January, 05 2011 00:00:00 | 2864 | 102 |
| 1230 | December, 09 2010 00:00:00 | January, 06 2011 00:00:00 | 3526 | 125 |
| 1230 | December, 10 2010 00:00:00 | January, 07 2011 00:00:00 | 4067 | 145 |
| 1230 | December, 11 2010 00:00:00 | January, 08 2011 00:00:00 | 4916 | 175 |
| 1230 | December, 12 2010 00:00:00 | January, 09 2011 00:00:00 | 5548 | 198 |
| 1230 | December, 13 2010 00:00:00 | January, 10 2011 00:00:00 | 6454 | 230 |
| 1230 | December, 14 2010 00:00:00 | January, 11 2011 00:00:00 | 7415 | 264 |
| 1230 | December, 15 2010 00:00:00 | January, 12 2011 00:00:00 | 7776 | 277 |
| 1230 | December, 04 2011 00:00:00 | January, 01 2012 00:00:00 | 461 | 16 |
| 1230 | December, 05 2011 00:00:00 | January, 02 2012 00:00:00 | 1389 | 49 |
| 1230 | December, 06 2011 00:00:00 | January, 03 2012 00:00:00 | 2244 | 80 |
| 1230 | December, 07 2011 00:00:00 | January, 04 2012 00:00:00 | 2849 | 101 |
| 1230 | December, 08 2011 00:00:00 | January, 05 2012 00:00:00 | 2932 | 104 |
| 1230 | December, 09 2011 00:00:00 | January, 06 2012 00:00:00 | 2976 | 106 |
| 1230 | December, 10 2011 00:00:00 | January, 07 2012 00:00:00 | 3358 | 119 |
| 1230 | December, 11 2011 00:00:00 | January, 08 2012 00:00:00 | 4220 | 150 |
| 1230 | December, 12 2011 00:00:00 | January, 09 2012 00:00:00 | 4769 | 170 |
| 1230 | December, 13 2011 00:00:00 | January, 10 2012 00:00:00 | 5401 | 192 |
| 1230 | December, 14 2011 00:00:00 | January, 11 2012 00:00:00 | 5403 | 192 |
| 1230 | December, 15 2011 00:00:00 | January, 12 2012 00:00:00 | 5429 | 193 |
```
NB: For SQL Server
* instead of `theDay - INTERVAL '28 DAYS'` use **dateadd(day,-28,theDay)**
|
If you are looking for a cumulative sum for all days, excluding the first 12 days from the min date:
```
with cte
as
(
select dateadd(day,12,min(date)) as mindate,max(date) as maxdate,
datediff(day,dateadd(day,12,min(date)),max(date)) as n
from #RollingTotalsExample
)
select
date,
(select sum(value) from #RollingTotalsExample t2 where t2.date<=t1.date) as rollingsum,
promo,
item,
mindate,
maxdate
from
#RollingTotalsExample t1
join
cte c
on t1.date >=c.mindate and t1.date<= c.maxdate
order by date
```
--this matches your exact output
```
with cte
as
(
select dateadd(day,12,min(date)) as mindate,max(date) as maxdate,
datediff(day,dateadd(day,12,min(date)),max(date)) as n
from #RollingTotalsExample
)
select
date,
sum(value) as avgsum,
promo,
item
from
#RollingTotalsExample t1
join
cte c
on t1.date >=c.mindate and t1.date<= c.maxdate
where promo=1
group by date,promo,item
order by date
```
---if you are looking for avg: sum/no of days
```
with cte
as
(
select dateadd(day,12,min(date)) as mindate,max(date) as maxdate,
datediff(day,dateadd(day,12,min(date)),max(date)) as n
from #RollingTotalsExample
)
select
date,
sum(value)*1.0/n as avgsum,
promo,
item
from
#RollingTotalsExample t1
join
cte c
on t1.date >=c.mindate and t1.date<= c.maxdate
where promo=1
group by date,promo,item,n
order by date
```
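As a quick sanity check of the correlated-subquery running total, here is a minimal sketch run against SQLite from Python; the table name `sales` and the three sample rows (their cumulative values match the December 2011 figures shown above) are illustrative, not the asker's real schema.

```python
import sqlite3

# Running total via a correlated subquery: for each row, sum every
# row with an equal-or-earlier day. Names and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (theDay TEXT, value INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("2011-12-04", 461), ("2011-12-05", 928), ("2011-12-06", 855)])

rows = conn.execute("""
    SELECT theDay,
           (SELECT SUM(value) FROM sales s2
             WHERE s2.theDay <= s1.theDay) AS rollingsum
    FROM sales s1
    ORDER BY theDay
""").fetchall()
print(rows)  # [('2011-12-04', 461), ('2011-12-05', 1389), ('2011-12-06', 2244)]
```

The correlated subquery is O(n²) over the table, which is exactly why the window-function form (`SUM(...) OVER (ORDER BY theDay)`) is preferred on engines that support it.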
|
Custom calculation for amount
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
[](https://i.stack.imgur.com/CBhQp.png)
This is my table.
I need to write a query that shows all the records but when a project reaches 100% It shows the record for that project only once.
so the result should be
[](https://i.stack.imgur.com/bwkqg.png)
USING SQL SERVER 2005
|
Use `NOT EXISTS` to return a row if no other row for the same project already (lower date) has reached 100%:
```
SELECT t1.*
FROM table_name t1
WHERE NOT EXISTS (select 1 from table_name t2
where t2.project = t1.project
and t2.report_date < t1.report_date
and t2.percentage_complete = 100)
```
Do `SELECT DISTINCT` to remove duplicate rows (no matter of percentage.)
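As an illustration, here is the `NOT EXISTS` filter run against SQLite with a few invented rows; note how the later duplicate of the finished project is dropped while the first 100% row survives.

```python
import sqlite3

# NOT EXISTS keeps a row only if no EARLIER row of the same project
# already reached 100%. Table and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE progress
                (project TEXT, report_date TEXT, percentage_complete INTEGER)""")
conn.executemany("INSERT INTO progress VALUES (?, ?, ?)", [
    ("XYZ", "2016-02-08", 87),
    ("XYZ", "2016-02-09", 100),
    ("XYZ", "2016-02-10", 100),   # later duplicate of the finished project
])
rows = conn.execute("""
    SELECT t1.* FROM progress t1
    WHERE NOT EXISTS (SELECT 1 FROM progress t2
                       WHERE t2.project = t1.project
                         AND t2.report_date < t1.report_date
                         AND t2.percentage_complete = 100)
    ORDER BY t1.report_date
""").fetchall()
print(rows)  # [('XYZ', '2016-02-08', 87), ('XYZ', '2016-02-09', 100)]
```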
|
You could use `GROUP BY`:
```
SELECT project, MIN(report_date) AS report_date, percentage
FROM table_name
GROUP BY project, percentage
ORDER BY project, report_date;
```
`MIN` will select first date when percentage is the same.
`LiveDemo`
Output:
```
╔═════════╦═════════════════════╦════════════╗
║ project ║ report_date ║ percentage ║
╠═════════╬═════════════════════╬════════════╣
║ ABC ║ 2016-02-03 00:00:00 ║ 45 ║
║ DEF ║ 2016-02-04 00:00:00 ║ 55 ║
║ EFG ║ 2016-02-05 00:00:00 ║ 66 ║
║ EFG ║ 2016-02-15 00:00:00 ║ 100 ║
║ XYZ ║ 2016-02-06 00:00:00 ║ 55 ║
║ XYZ ║ 2016-02-07 00:00:00 ║ 76 ║
║ XYZ ║ 2016-02-08 00:00:00 ║ 87 ║
║ XYZ ║ 2016-02-09 00:00:00 ║ 100 ║
╚═════════╩═════════════════════╩════════════╝
```
|
sql how to show row with same column value only once
|
[
"",
"sql",
""
] |
I have 2 sprocs for an assignment, what I'm trying to do is pass the xml output from one sproc to another and put it in a variable, I know ex1.xml\_sp1 is returning an int while calling it with `EXEC` and obviously when trying to select this it returns null because `@x` is xml data type.
What I want to do is retrieve and store the xml data from sproc 1 in to `@x` in sproc 2.
Is there any way to do this?
sproc 1:
```
ALTER PROC [ex1].[xml_sp1]
@careteamid int
as
select CareTeams.CareTeamID, Doctors.DoctorID, Doctors.DoctorName,
CareTeamDoctors.DateJoined, CareTeamDoctors.CurrentMember
from dbo.CareTeamTbl as CareTeams
inner join dbo.CareTeamDoctorTbl as CareTeamDoctors on
CareTeams.CareTeamID = CareTeamDoctors.CareTeamID
inner join dbo.DoctorTbl as Doctors on
CareTeamDoctors.DoctorID=CareTeamDoctors.DoctorID
where CareTeamDoctors.CareTeamID = @careteamid
and CareTeamDoctors.DoctorID = Doctors.DoctorID
for xml auto, root('thedata')
```
sproc 2:
```
ALTER PROC [ex1].[xml_sp2]
@careteamid int
as
declare @x xml
exec @x = ex1.xml_sp1
@careteamid = @careteamid
select @x as XMLData
```
|
> I want to do is retrieve and store the xml data from sproc 1 in to @x in sproc 2.
You could achieve it very easily using [`OUTPUT` parameters](https://msdn.microsoft.com/en-us/library/ms378108%28v=sql.110%29.aspx):
```
CREATE PROCEDURE [xml_sp1]
@careteamid INT,
@xml_output XML OUTPUT
AS
BEGIN
SET @xml_output = (SELECT * FROM ... FOR XML AUTO, root('thedata'));
END;
GO
CREATE PROCEDURE [xml_sp2]
@careteamid INT
AS
BEGIN
DECLARE @x XML;
EXEC [xml_sp1]
@careteamid,
@x OUTPUT;
SELECT @x AS XMLData;
END;
GO
```
And final call:
```
EXEC [xml_sp2] @careteamid = 1;
```
`LiveDemo`
Consider using `BEGIN/END` block and end each statement with `;` to avoid possible nasty problems.
The full list of possible sharing data methods **[How to Share Data between Stored Procedures by Erland Sommarskog](http://www.sommarskog.se/share_data.html)**
|
For the return value (i.e. `EXEC @ReturnValue = StoredProcName...;`), `INT` is the only datatype allowed. If this needs to really stay as a Stored Procedure then you can either use an `OUTPUT` variable or create a temp table or table variable in the second Stored Procedure and do `INSERT INTO ... EXEC StoredProc1;`.
However, given that the first Stored Procedure is only doing a simple SELECT statement, you would be far better off converting this to be an Inline Table-Valued Function (iTVF) as follows:
```
CREATE FUNCTION dbo.GetData (@CareTeamID INT)
RETURNS TABLE
AS RETURN
SELECT tab.col AS [CareData]
FROM (
SELECT CareTeams.CareTeamID, Doctors.DoctorID, Doctors.DoctorName,
CareTeamDoctors.DateJoined, CareTeamDoctors.CurrentMember
FROM dbo.CareTeamTbl as CareTeams
INNER JOIN dbo.CareTeamDoctorTbl as CareTeamDoctors
ON CareTeams.CareTeamID = CareTeamDoctors.CareTeamID
INNER JOIN dbo.DoctorTbl as Doctors
                ON CareTeamDoctors.DoctorID = Doctors.DoctorID
WHERE CareTeamDoctors.CareTeamID = @CareTeamID
AND CareTeamDoctors.DoctorID = Doctors.DoctorID
FOR XML AUTO, ROOT('thedata'), TYPE -- "TYPE" causes result to be XML and not NVARCHAR
) tab(col);
```
Then just change the second stored procedure to be:
```
SELECT care.CareData AS [XMLData]
FROM dbo.GetData(@CareTeamID);
```
Or, if you actually need to make use of that XML data in the second Stored Procedure, do the following:
```
DECLARE @TempData XML;
SELECT @TempData = care.CareData
FROM dbo.GetData(@CareTeamID);
```
Finally, if you need the XML data to actually be of the XML datatype coming out of that iTVF (or even if you keep it as a Stored Procedure), then you need to add the `TYPE` option to the `FOR XML` clause, else what is returned is a string / `NVARCHAR(MAX)`:
```
FOR XML AUTO, ROOT ('thedata'), TYPE
```
|
How can I pass xml data from one sproc to another?
|
[
"",
"sql",
"sql-server",
"xml",
"sql-server-2008",
"t-sql",
""
] |
In a DB I have a table `tca` with a `status` column. This can be `new`, `process`, `completed`, etc. When I query this table with `SELECT status, count(*) FROM tca GROUP BY status` to find out the count of a certain status I get the following multidimensional array back
```
Array
(
[0] => Array
(
[status] => new
[0] => new
[count(*)] => 6
[1] => 6
)
[1] => Array
(
[status] => process
[0] => process
[count(*)] => 1
[1] => 1
)
[2] => Array
(
[status] => completed
[0] => completed
[count(*)] => 1
[1] => 1
)
)
```
Why are there redundant subarrays `[status]` , `[0]` and `[count(*)]` , `[1]`?
|
Since you selected `status` and `count(*)`, those keys are present in your result array.
The same values also appear under numeric indexes because the rows were fetched in "both" mode (the default for `mysqli_fetch_array()` and for PDO's `FETCH_BOTH`), which returns every column twice: once keyed by column name and once by position.
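The same "by name and by index" behaviour exists outside PHP too; here is a hedged sketch using Python's `sqlite3.Row` (the table name and data are invented), where each fetched row is addressable both ways.

```python
import sqlite3

# sqlite3.Row lets you read a column by name or by numeric position,
# analogous to PHP's MYSQL_BOTH / FETCH_BOTH result arrays.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE tca (status TEXT)")
conn.executemany("INSERT INTO tca VALUES (?)", [("new",), ("new",), ("process",)])
row = conn.execute(
    "SELECT status, COUNT(*) AS cnt FROM tca GROUP BY status ORDER BY status"
).fetchone()
print(row["status"], row[0], row["cnt"], row[1])  # new new 2 2
```

Aliasing `COUNT(*)` to `cnt` also gives the result a cleaner key than the literal `count(*)` string seen in the question.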
|
Hope this solve your query.
```
Use mysqli_fetch_assoc($dbcon,$countQuery);
//$dbcon = your db connection variable // $countQuery = your query
```
|
mySQL count(*) meaning of arrays
|
[
"",
"mysql",
"sql",
""
] |
There are 2 tables, a Component table and a Log table. The component table holds the actual(current) value description and a timestamp when it was last updated.
The Log table contains a component ID that references to wich component it belongs:
```
Component:
Id
Actual
LastUpdated
Log:
Id
ComponentId
Value
Timestamp
```
The query that used to work but currently locks the table looks like this:
```
update Component set Actual =
(select top 1 Value from Log
where Component.Id = ComponentId
order by Id desc),
LastUpdated =
(select top 1 TimeStamp from Log
where Component.Id = ComponentId
order by Id desc)
```
Both the log and component tables are growing and this query can't keep up like it used to be able to do. there are around 80 components now and a couple of million records.
Is it possible to work in a way like this and just improve the query or is the entire approach wrong?
PS: the devices that send the data don't have a reliable system time and therefore having them update the component table leads to inconsistency. When inserting a log I take the system time on the SQL server (default value).
EDIT:
Taking a suggestion from the answers, I'm trying to create a trigger on Log to automatically update Component when a log is created.
```
CREATE TRIGGER trg_log_ins
ON Log
AFTER INSERT
AS
BEGIN
update Component
SET Actual = (SELECT i.value FROM inserted as i),
LastUpdated = (SELECT i.Timestamp FROM inserted as i);
END;
```
but for some reason the query doesn't finish and keeps executing.
|
I think you're going about this all wrong. A better solution would be a trigger on the Component table, that inserts into the Log table whenever a Component is inserted or updated.
```
CREATE TRIGGER trg_component_biu
ON Component
AFTER INSERT, UPDATE
AS
BEGIN
INSERT INTO Log(
ComponentId,
Value,
Timestamp
)
SELECT
Id,
Actual,
LastUpdated
FROM inserted;
END;
```
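The same trigger idea can be sketched in SQLite (the syntax differs slightly from SQL Server: SQLite uses `NEW.col` inside the trigger body instead of the `inserted` pseudo-table); table and column names follow the question, the data is invented.

```python
import sqlite3

# AFTER INSERT trigger: every insert into Component writes a Log row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Component (Id INTEGER PRIMARY KEY, Actual TEXT, LastUpdated TEXT);
CREATE TABLE Log (Id INTEGER PRIMARY KEY, ComponentId INTEGER,
                  Value TEXT, Timestamp TEXT);
CREATE TRIGGER trg_component_ai AFTER INSERT ON Component
BEGIN
    INSERT INTO Log (ComponentId, Value, Timestamp)
    VALUES (NEW.Id, NEW.Actual, NEW.LastUpdated);
END;
""")
conn.execute("INSERT INTO Component VALUES (1, '42.5', '2016-02-01')")
log_rows = conn.execute("SELECT ComponentId, Value FROM Log").fetchall()
print(log_rows)  # [(1, '42.5')]
```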
|
All data in your Component table is coming from the Log table. Instead of making Component an actual table, you can make it a view, indexed if necessary.
```
CREATE VIEW Component
WITH SCHEMABINDING
AS
SELECT
ComponentId AS Id,
FIRST_VALUE(Value)
OVER(PARTITION BY ComponentId
ORDER BY Timestamp DESC)
AS Actual,
MAX(Timestamp) AS LastUpdated
FROM Log
GROUP BY ComponentId;
```
|
Looking for an alternative method to this query
|
[
"",
"sql",
"sql-server",
""
] |
I need to select all free rooms from `hotel` DB and I think I can do it with two steps:
1. `bookings = select * from booking where booking.startDate>=selectedStartDate and booking.endDate=<selectedEndDate`.
2. pseudo query: `select * from room where room.room_id not includes bookings.room_id`.
I wrote my second query as pseudo-SQL because I can't find how to do what I want. How do I check that `bookings` doesn't include a given `room_id`?
My booking table looks like:
```
+-----------+------------+------------+---------+---------+
| bookingId | startDate | endDate | room_id | guestId |
+-----------+------------+------------+---------+---------+
| 1 | 2016-03-12 | 2016-03-22 | 1 | 1 |
| 2 | 2016-03-12 | 2016-03-22 | 2 | 2 |
+-----------+------------+------------+---------+---------+
```
|
You could transform the first query to a subquery of the second query by using the `not in` operator:
```
SELECT *
FROM room
WHERE room.room_id NOT IN (SELECT room_id
FROM booking
                                  WHERE startDate >= selectedStartDate AND
                                        endDate <= selectedEndDate)
```
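For illustration, here is the same `NOT IN` shape run in SQLite, using the booking rows from the question plus a third, unbooked room; the date window below uses a simple overlap test, and all literals are invented.

```python
import sqlite3

# Rooms whose id does NOT appear among bookings overlapping the window.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE room (room_id INTEGER);
INSERT INTO room VALUES (1), (2), (3);
CREATE TABLE booking (room_id INTEGER, startDate TEXT, endDate TEXT);
INSERT INTO booking VALUES (1, '2016-03-12', '2016-03-22'),
                           (2, '2016-03-12', '2016-03-22');
""")
free = conn.execute("""
    SELECT room_id FROM room
    WHERE room_id NOT IN (SELECT room_id FROM booking
                           WHERE startDate <= '2016-03-20'
                             AND endDate   >= '2016-03-15')
""").fetchall()
print(free)  # [(3,)]
```

One caveat worth knowing: if the subquery can return a NULL `room_id`, `NOT IN` matches nothing at all, so `NOT EXISTS` is the safer form in that case.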
|
If you want rooms free during a period of time, use `not exists`. The correct logic is:
```
select r.*
from room r
where not exists (select 1
from booking b
where $startdate <= b.enddate and $enddate >= b.startdate
);
```
Two periods overlap when one starts before the second ends and the first ends after the second starts.
Note that the `<=` and `>=` might be strict inequalities, depending on whether the first and last dates are included in the period.
|
SQL query to check if column includes some data
|
[
"",
"mysql",
"sql",
"database",
"select",
""
] |
I have a complex query which requires fields from a total of 4 tables. The inner joins are causing the query to take much longer than it should. I have run an EXPLAIN statement, whose visual result is attached below:
[](https://i.stack.imgur.com/RJ8Ze.png)
This is my query:
```
SELECT
pending_corrections.corrected_plate , pending_corrections.seenDate
FROM
(pending_corrections
INNER JOIN cameras ON pending_corrections.camerauid = cameras.camera_id)
INNER JOIN
vehicle_vrn ON (pending_corrections.corrected_plate = vehicle_vrn.vrn500
OR pending_corrections.corrected_plate = vehicle_vrn.vrnno)
INNER JOIN
vehicle_ownership ON vehicle_vrn.fk_sysno = vehicle_ownership.fk_sysno
WHERE
pending_corrections.seenDate >= '2015-01-01 00:00:00'
AND pending_corrections.seenDate <= '2015-01-31 23:59:59'
ORDER BY pending_corrections.corrected_plate , pending_corrections.seenDate ASC;
```
How can I achieve the same effect but without the `OR` in one of the joins?
|
Rewriting as a `UNION` is simple, copy the source and remove one of the `OR`ed conditions in each:
```
SELECT
pending_corrections.corrected_plate , pending_corrections.seenDate
FROM
(pending_corrections
INNER JOIN cameras ON pending_corrections.camerauid = cameras.camera_id)
INNER JOIN
vehicle_vrn ON (pending_corrections.corrected_plate = vehicle_vrn.vrn500)
INNER JOIN
vehicle_ownership ON vehicle_vrn.fk_sysno = vehicle_ownership.fk_sysno
WHERE
pending_corrections.seenDate >= '2015-01-01 00:00:00'
AND pending_corrections.seenDate <= '2015-01-31 23:59:59'
union
SELECT
pending_corrections.corrected_plate , pending_corrections.seenDate
FROM
(pending_corrections
INNER JOIN cameras ON pending_corrections.camerauid = cameras.camera_id)
INNER JOIN
    vehicle_vrn ON (pending_corrections.corrected_plate = vehicle_vrn.vrnno)
INNER JOIN
vehicle_ownership ON vehicle_vrn.fk_sysno = vehicle_ownership.fk_sysno
WHERE
pending_corrections.seenDate >= '2015-01-01 00:00:00'
AND pending_corrections.seenDate <= '2015-01-31 23:59:59'
ORDER BY 1,2;
```
Is there an index on `pending_corrections.seenDate`?
|
you may try the following:
```
select
pending_corrections.corrected_plate , pending_corrections.seenDate
from pending_corrections
where pending_corrections.seenDate >= '2015-01-01 00:00:00'
and pending_corrections.seenDate <= '2015-01-31 23:59:59'
and exists(select 1 from cameras where pending_corrections.camerauid = cameras.camera_id)
and exists(select 1 from vehicle_ownership where vehicle_vrn.fk_sysno = vehicle_ownership.fk_sysno)
and exists(select 1 from vehicle_vrn
where pending_corrections.corrected_plate in (vehicle_vrn.vrnno, vehicle_vrn.vrn500))
order by 1,2;
```
or alternatively as dnoeth already mentioned:
```
select * from (
select
pending_corrections.corrected_plate , pending_corrections.seenDate
from pending_corrections
where pending_corrections.seenDate >= '2015-01-01 00:00:00'
and pending_corrections.seenDate <= '2015-01-31 23:59:59'
and exists(select 1 from cameras where pending_corrections.camerauid = cameras.camera_id)
and exists(select 1 from vehicle_ownership where vehicle_vrn.fk_sysno = vehicle_ownership.fk_sysno)
and exists(select 1 from vehicle_vrn where pending_corrections.corrected_plate = vehicle_vrn.vrnno)
union
select
pending_corrections.corrected_plate , pending_corrections.seenDate
from pending_corrections
where pending_corrections.seenDate >= '2015-01-01 00:00:00'
and pending_corrections.seenDate <= '2015-01-31 23:59:59'
and exists(select 1 from cameras where pending_corrections.camerauid = cameras.camera_id)
and exists(select 1 from vehicle_ownership where vehicle_vrn.fk_sysno = vehicle_ownership.fk_sysno)
and exists(select 1 from vehicle_vrn where pending_corrections.corrected_plate = vehicle_vrn.vrn500)
) t order by 1,2;
```
PS of course i couldn't test it myself without having the data and knowing all your indexes
|
How to handle multiple joins
|
[
"",
"mysql",
"sql",
"database",
"performance",
"relational-database",
""
] |
I want to get all the possible `type` from a table and then `count` the rows under a `group` who has this `type`.
To better illustrate consider the following table.
### Object (o)
```
id name group_id type
-----------------------------------------------------------
1 Computer 100 A
2 Monitor 100 A
3 Chair 100 B
4 Table 100 B
5 Telephone 100 C
6 Notebook 200 D
7 Pen 200 D
8 Wire 100 E
```
What I want to get is the following result:
```
group_id type total
-------------------------------
100 A 2
100 B 2
100 C 1
100 D 0
100 E 1
```
The `group_id` doesn't have to be included in the result, I just wanted to let you see here that I'm using a `WHERE` clause in my query. Basically I only want the results for `group_id = 100` but I want the counts for all `type`.
Using my query below
```
SELECT o.group_id, o.type, COUNT(*) AS total
FROM object o
WHERE o.group_id = 100
GROUP BY o.type
```
I only get the following result:
```
group_id type total
-------------------------------
100 A 2
100 B 2
100 C 1
100 E 1
```
You will notice that I am missing the row for `type | D`. I also want to include that.
|
You can use the following query:
```
SELECT 100 AS group_id,
o.type,
COUNT(CASE WHEN o.group_id = 100 THEN 1 END) AS total
FROM object o
GROUP BY o.type
```
This query groups by `type` and uses *conditional aggregation* so as to count the rows under each group who have `type = 100`.
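Here is a sketch of the conditional-aggregation trick run in SQLite with the sample rows from the question; `CASE` without an `ELSE` yields NULL, which `COUNT` skips, so type D comes out as 0.

```python
import sqlite3

# Conditional aggregation: group over ALL types, count only group 100.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE object (id INT, name TEXT, group_id INT, type TEXT)")
conn.executemany("INSERT INTO object VALUES (?,?,?,?)", [
    (1, "Computer", 100, "A"), (2, "Monitor", 100, "A"),
    (3, "Chair", 100, "B"), (4, "Table", 100, "B"),
    (5, "Telephone", 100, "C"), (6, "Notebook", 200, "D"),
    (7, "Pen", 200, "D"), (8, "Wire", 100, "E"),
])
rows = conn.execute("""
    SELECT type,
           COUNT(CASE WHEN group_id = 100 THEN 1 END) AS total
    FROM object
    GROUP BY type
    ORDER BY type
""").fetchall()
print(rows)  # [('A', 2), ('B', 2), ('C', 1), ('D', 0), ('E', 1)]
```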
|
The problem is that Type D is not returned in the query for that ID, so it is not included in the groupings. To do this, we need to get the list of ALL Types in the table, then do the counts for your ID in the table. Something like this:
```
SELECT o_list.type, COUNT(o.id) AS total
FROM object o
RIGHT OUTER JOIN ( SELECT DISTINCT type from object ) o_list on o_list.type = o.type
WHERE o.group_id = 100
GROUP BY o_list.type
```
Giorgos's version is, arguably, cleaner to read and will always run in one full scan of the table. Mine is more complex, but if there are indexes on group\_id and type, will run on index scans and so may be significantly faster on a large data set.
And you get to see that there are always options on how to solve a problem. :)
|
SQL Complex Counting
|
[
"",
"sql",
"oracle",
"count",
"group-by",
""
] |
I have 2 tables - TC and T, with columns specified below. TC maps to T on column T\_ID.
```
TC
----
T_ID,
TC_ID
T
-----
T_ID,
V_ID,
Datetime,
Count
```
My current result set is:
```
V_ID TC_ID Datetime Count
----|-----|------------|--------|
2 | 1 | 2013-09-26 | 450600 |
2 | 1 | 2013-12-09 | 14700 |
2 | 1 | 2014-01-22 | 15000 |
2 | 1 | 2014-01-22 | 15000 |
2 | 1 | 2014-01-22 | 7500 |
4 | 1 | 2014-01-22 | 1000 |
4 | 1 | 2013-12-05 | 0 |
4 | 2 | 2013-12-05 | 0 |
```
Using the following query:
```
select T.V_ID,
TC.TC_ID,
T.Datetime,
T.Count
from T
inner join TC
on TC.T_ID = T.T_ID
```
Result set I want:
```
V_ID TC_ID Datetime Count
----|-----|------------|--------|
2 | 1 | 2014-01-22 | 15000 |
4 | 1 | 2014-01-22 | 1000 |
4 | 2 | 2013-12-05 | 0 |
```
I want to write a query to select each distinct `V_ID + TC_ID` combination, but only with the maximum datetime, and for that datetime the maximum count. E.g. for the distinct combination of `V_ID = 2` and `TC_ID = 1`, `'2014-01-22'` is the maximum datetime, and for that datetime, `15000` is the maximum count, so select this record for the new table. Any ideas? I don't know if this is too ambitious for a query and I should just handle the result set in code instead.
|
One method uses `row_number()`:
```
select v_id, tc_id, datetime, count
from (select T.V_ID, TC.TC_ID, T.Datetime, T.Count,
row_number() over (partition by t.V_ID, tc.tc_id
order by datetime desc, count desc
) as seqnum
from t join
tc
                      on tc.t_id = t.t_id
) tt
where seqnum = 1;
```
The only issue is that some rows have the same maximum `datetime` value. SQL tables represent *unordered* sets, so there is no way to determine which is really the maximum -- unless the `datetime` really has a time component or another column specifies the ordering within a day.
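For illustration, the `row_number()` approach run in SQLite (window functions need SQLite 3.25+, bundled with recent Python builds); a single denormalised table stands in for the T/TC join so the ranking itself is what is demonstrated, and all names and data are invented.

```python
import sqlite3

# Keep one row per (v_id, tc_id): latest date, then highest count.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tv (v_id INT, tc_id INT, dt TEXT, cnt INT)")
conn.executemany("INSERT INTO tv VALUES (?,?,?,?)", [
    (2, 1, "2013-12-09", 14700),
    (2, 1, "2014-01-22", 15000),
    (2, 1, "2014-01-22", 7500),
])
rows = conn.execute("""
    SELECT v_id, tc_id, dt, cnt FROM (
        SELECT *, row_number() OVER (PARTITION BY v_id, tc_id
                                     ORDER BY dt DESC, cnt DESC) AS seqnum
        FROM tv) t
    WHERE seqnum = 1
""").fetchall()
print(rows)  # [(2, 1, '2014-01-22', 15000)]
```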
|
It is possible to solve this using CTEs. First, extracting the data from your query. Second, get the maxdates. Third, get the highest count for each maxdate.:
```
;WITH Dataset AS
(
select T.V_ID,
TC.TC_ID,
T.[Datetime],
T.[Count]
from T
inner join TC
        on TC.T_ID = T.T_ID
),
MaxDates AS
(
SELECT V_ID, TC_ID, MAX(t.[Datetime]) AS MaxDate
FROM Dataset t
GROUP BY t.V_ID, t.TC_ID
)
SELECT t.V_ID, t.TC_ID, t.[Datetime], MAX(t.[Count]) AS [Count]
FROM Dataset t
INNER JOIN MaxDates m ON t.V_ID = m.V_ID AND t.TC_ID = m.TC_ID AND m.MaxDate = t.[Datetime]
GROUP BY t.V_ID, t.TC_ID, t.[Datetime]
```
|
SQL Server - Select Distinct of two columns, where the distinct column selected has a maximum value based on two other columns
|
[
"",
"sql",
"sql-server",
"distinct",
"multiple-columns",
""
] |
I have a table which contains geometry lines (ways). There are lines that have a unique geometry (not repeating) and lines which have the same geometry (2, 3, 4 and more). I want to list only the unique ones. If there are, for example, 2 lines with the same geometry I want to drop both. I tried DISTINCT but it also shows the first result from duplicated lines. I only want to see the unique ones.
I tried a window function but I get a similar result (I get a counter on the first line of the duplicating ones). Sorry for a newbie question but I'm learning :) Thanks!
Example:
```
way|
1 |
1 |
2 |
3 |
3 |
4 |
```
Result should be:
```
way|
2 |
4 |
```
That actually worked. Thanks a lot. I also have other tags in this table for every way (name, ref and few other tags) When I add them to the query I loose the segregation.
```
select count(way), way, name
from planet_osm_line
group by way, name
having count(way) = 1;
```
Without "name" in the query I get all unique values listed but I want to keep "name" for every line. With this example I stilll get all the lines in the table listed.
|
You first calculate the rows you want, and then search for the rest of the fields. So the aggregation doesn't cause you problems.
```
WITH singleRow as (
select count(way), way
from planet_osm_line
group by way
having count(way) = 1
)
SELECT P.*
FROM planet_osm_line P
JOIN singleRow S
ON P.way = S.way
```
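The CTE-plus-join pattern above can be checked end-to-end in SQLite; the sample values come from the question, and a made-up `name` column shows that the extra fields come back without breaking the uniqueness filter.

```python
import sqlite3

# CTE selects ways that occur exactly once; the join pulls full rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE planet_osm_line (way INT, name TEXT)")
conn.executemany("INSERT INTO planet_osm_line VALUES (?,?)", [
    (1, "a"), (1, "a"), (2, "b"), (3, "c"), (3, "c"), (4, "d"),
])
rows = conn.execute("""
    WITH singleRow AS (
        SELECT way FROM planet_osm_line
        GROUP BY way HAVING COUNT(way) = 1
    )
    SELECT p.way, p.name
    FROM planet_osm_line p JOIN singleRow s ON p.way = s.way
    ORDER BY p.way
""").fetchall()
print(rows)  # [(2, 'b'), (4, 'd')]
```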
|
To expound on @Nithila answer:
```
select count(way), way
from your_table
group by way
having count(way) = 1;
```
|
SQL list only unique / distinct values
|
[
"",
"sql",
"postgresql",
"postgis",
""
] |
Is there a way to use `MIN` within the `WHERE` clause?
I'm trying to insert a subquery into one of my scripts and the subquery I set up was originally its own query that used something like this:
```
SELECT
person, MIN(date)
FROM
orders
WHERE
date > "1/1/2015" and date < "1/31/2015"
GROUP BY
person
```
This was to ensure the FIRST record, the earliest record, was the return value. Using `ORDER BY ASC LIMIT 1` didn't work. I want to replicate this so that I can say:
```
WHERE
otherPersons IN (
SELECT
person, MIN(date)
FROM
orders
WHERE
date > "1/1/2015" and date < "1/31/2015"
)
GROUP BY
person
```
Meaning, otherPersons has to be in Person, but only those persons whose earliest date falls within my date spread. I understand this is impossible as a subquery can only have 1 column selected.
I'm new to SQL, sorry for the simplicity of the question.
|
You don't need only one
```
WHERE
(otherPersons, date) IN (
SELECT
person, MIN(date)
FROM
orders
WHERE
date > "1/1/2015" and date < "1/31/2015"
GROUP BY
person
)
GROUP BY
person
```
This is the same as a join with two clauses
```
JOIN (
SELECT
person, MIN(date) as mindate
FROM
orders
WHERE
date > "1/1/2015" and date < "1/31/2015"
GROUP BY
person
) sub ON otherPersons = sub.person and date = sub.mindate
```
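The join-on-two-clauses form above, run in SQLite for illustration (the tuple-`IN` form also works on MySQL and on SQLite 3.15+, but the join is the most portable); the `orders` data is invented.

```python
import sqlite3

# Groupwise minimum: join each row back to its person's earliest date
# within the window.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (person TEXT, odate TEXT)")
conn.executemany("INSERT INTO orders VALUES (?,?)", [
    ("alice", "2015-01-05"), ("alice", "2015-01-20"), ("bob", "2015-01-10"),
])
rows = conn.execute("""
    SELECT o.person, o.odate
    FROM orders o
    JOIN (SELECT person, MIN(odate) AS mindate
          FROM orders
          WHERE odate > '2015-01-01' AND odate < '2015-01-31'
          GROUP BY person) sub
      ON o.person = sub.person AND o.odate = sub.mindate
    ORDER BY o.person
""").fetchall()
print(rows)  # [('alice', '2015-01-05'), ('bob', '2015-01-10')]
```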
|
try this code
```
SELECT
person, MIN(date) as min_date -- field alias
FROM
orders
WHERE date > "1/1/2015" and date < "1/31/2015"
HAVING Min(Date) > "xx/xx/xxxx"
```
you can field alias instead of `MIN(DATE)` in `MySQL`
|
using MIN within WHERE
|
[
"",
"mysql",
"sql",
""
] |
We're currently working on a query for a report that returns a series of data. The customer has specified that they want to receive 5 rows total, with the data from the previous 5 days (as defined by a start date and an end date variable). For each day, they want the data from the row that's closest to 4am.
I managed to get it to work for a single day, but I certainly don't want to union 5 separate select statements simply to fetch these values. Is there any way to accomplish this via CTEs?
```
select top 1
'W' as [RecordType]
, [WellIdentifier] as [ProductionPtID]
, t.Name as [Device Name]
, t.RecordDate --convert(varchar, t.RecordDate, 112) as [RecordDate]
, TubingPressure as [Tubing Pressure]
, CasingPressure as [Casing Pressure]
from #tTempData t
Where cast (t.recorddate as time) = '04:00:00.000'
or datediff (hh,'04:00:00.000',cast (t.recorddate as time)) < -1.2
order by Name, RecordDate desc
```
|
assuming that #tTempData only contains the previous 5 days' records
```
SELECT *
FROM
(
    SELECT *, rn = row_number() over
         (
           partition by convert(date, recorddate)
           order by ABS ( datediff(minute, convert(time, recorddate) , '04:00' ) )
         )
    FROM #tTempData
) t
WHERE rn = 1
```
|
You can use row\_number() like this to get, for each day, the row closest to 04:00, then take the last 5 days:
```
SELECT TOP 5 * FROM (
select t.* ,
ROW_NUMBER() OVER(PARTITION BY t.recorddate
ORDER BY abs(datediff (minute,'04:00:00.000',cast (t.recorddate as time))) rnk
from #tTempData t)
WHERE rnk = 1
ORDER BY recorddate DESC
```
|
Selecting 1 row per day closest to 4am?
|
[
"",
"sql",
"sql-server",
"common-table-expression",
""
] |
I have the following table:
```
create table Likes(ID1 number(5), ID2 number(5));
insert into Likes values(1689, 1709);
insert into Likes values(1709, 1689);
insert into Likes values(1782, 1709);
insert into Likes values(1911, 1247);
insert into Likes values(1247, 1468);
insert into Likes values(1641, 1468);
insert into Likes values(1316, 1304);
insert into Likes values(1501, 1934);
insert into Likes values(1934, 1501);
insert into Likes values(1025, 1101);
```
The table contains the 'likes' of users identified by their IDs. Liking is a one-way connection (if ID1 likes ID2, that does not mean that ID2 likes ID1).
I want to find those IDs, where is a two way connection (where the 'liker' is 'liked back' by user he likes).
I am beginner with Oracle SQL, I hope my question isn't that banal...
|
You can do it with a join:
```
SELECT t.id1,t.id2
FROM Likes t
INNER JOIN Likes s
ON(t.id1 = s.id2 and t.id2 = s.id1)
```
Or with EXISTS()
```
SELECT t.*
FROM Likes t
WHERE EXISTS(select 1 FROM Likes s
WHERE t.id1 = s.id2
AND t.id2 = s.id1)
```
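The self-join form above can be run as-is against the rows from the question's `INSERT` statements; here is a sketch in SQLite. Note that each mutual pair appears twice (once per direction), which matches the join's semantics.

```python
import sqlite3

# Mutual likes: a row survives when the reversed pair also exists.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Likes (ID1 INT, ID2 INT)")
conn.executemany("INSERT INTO Likes VALUES (?,?)", [
    (1689, 1709), (1709, 1689), (1782, 1709), (1911, 1247),
    (1247, 1468), (1641, 1468), (1316, 1304), (1501, 1934),
    (1934, 1501), (1025, 1101),
])
rows = conn.execute("""
    SELECT t.ID1, t.ID2
    FROM Likes t JOIN Likes s ON t.ID1 = s.ID2 AND t.ID2 = s.ID1
    ORDER BY t.ID1
""").fetchall()
print(rows)
# [(1501, 1934), (1689, 1709), (1709, 1689), (1934, 1501)]
```

Adding `WHERE t.ID1 < t.ID2` would keep only one row per mutual pair, if that is preferred.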
|
You need to pick a given record id1=X, id2=Y only in a case when another record exists in the table,
which has id1=Y, id2=X.
A condition like this can be expressed in SQL with the help of EXISTS operator and a dependent subquery:
```
SELECT *
FROM likes t
WHERE EXISTS (
SELECT 1 FROM likes t1
WHERE t.id1 = t1.id2 AND t.id2 = t1.id1
)
```
|
How to list multiple field matches from a single table in oracle SQL
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a simple query as the following:
```
Select SUBSTRING(Email, Charindex('@', Email) + 1,
Len(Email) - CharIndex('@', Email)) as EmailDomain,
Count(Email) as Total
from tblPerson
Group by SUBSTRING(Email, Charindex('@', Email) + 1,
Len(Email) - CharIndex('@', Email))
```
Which is working fine except that there is a subquery duplicate:
```
SUBSTRING(Email, Charindex('@', Email) + 1,
Len(Email) - CharIndex('@', Email))
```
I have put an alias for the sub-query there `as EmailDomain`, but when I query like this:
```
Select SUBSTRING(Email, Charindex('@', Email) + 1,
Len(Email) - CharIndex('@', Email)) as EmailDomain,
Count(Email) as Total
from tblPerson
Group by EmailDomain
```
I got an error:
> Invalid column name 'EmailDomain'
How to fix this but also at the same time not using duplicate query?
|
The answer is simple: you cannot use a column alias in the WHERE clause or the GROUP BY clause.
You can try like this:
```
SELECT EmailDomain,
Total
FROM (
Select SUBSTRING(Email, Charindex('@', Email) + 1,
Len(Email) - CharIndex('@', Email)) as EmailDomain,
Count(Email) as Total
from tblPerson
) a
Group by EmailDomain
```
|
You cannot use the alias in the `GROUP BY`. To fix this, you can use a subquery:
```
SELECT
t.EmailDomain,
COUNT(t.Email) AS Total
FROM (
SELECT
SUBSTRING(Email, CHARINDEX('@', Email) + 1, LEN(Email) - CHARINDEX('@', Email)) AS EmailDomain,
Email
FROM tblPerson
) t
GROUP BY t.EmailDomain
```
|
SQL Server, Duplicate subquery, and Alias
|
[
"",
"sql",
"sql-server",
""
] |
I'd like to use postgresql window functions like `rank()` and `dense_rank` in some queries in need to do in Django. I have it working in raw SQL but I'm not sure how to do this in the ORM.
Simplified it looks like this:
```
SELECT
id,
user_id,
score,
RANK() OVER(ORDER BY score DESC) AS rank
FROM
game_score
WHERE
...
```
How would you do this in the ORM?
At some point I might need to add partitioning too :|
(we're using Django 1.9 on Python 3 and already depend on django.contrib.postgres features)
|
Since Django 2.0 it is built-in into the ORM. See [window-functions](https://docs.djangoproject.com/en/2.0/ref/models/database-functions/#window-functions)
```
# models.py
class GameScore(models.Model):
user_id = models.IntegerField()
score = models.IntegerField()
# window function usage
from django.db.models.expressions import Window
from django.db.models.functions import Rank
GameScore.objects.annotate(rank=Window(
expression=Rank(),
order_by=F('score').desc(),
partition_by=[F('user_id')]))
# generated sql
SELECT "myapp_gamescore"."id",
"myapp_gamescore"."user_id",
"myapp_gamescore"."score",
RANK() OVER (
PARTITION BY "myapp_gamescore"."user_id"
ORDER BY "myapp_gamescore"."score" DESC
) AS "rank"
FROM "myapp_gamescore"
```
|
There are a few ways of doing this:
1) Using annotate and RawSQL(). Preferred method. Example:
```
from django.db.models.expressions import RawSQL
GameScore.objects.filter().annotate(rank=RawSQL("RANK() OVER(ORDER BY score DESC)", []) )
```
2) Using GameScore.objects.filter(...).extra() function. As this is an old API that it is aimed to be deprecated at some point in the future, it should be used it only if you cannot express your query using other queryset methods... but it still works. Example:
```
GameScore.objects.filter().extra(select={'rank': 'RANK() OVER(ORDER BY score DESC)'})
```
In this way, you are able to add partitioning, dense rank, ... without any problem:
```
RawSQL("RANK() OVER(PARTITION BY user_id ORDER BY score DESC")
```
And you can access to the data as:
```
game_scores = GameScore.objects.filter().extra(select={'rank': 'RANK() OVER(ORDER BY score DESC)'})
for game_score in game_scores:
print game_score.id, game_score.rank, game_score.score
```
1) <https://docs.djangoproject.com/es/1.9/ref/models/querysets/#annotate>
2) <https://docs.djangoproject.com/es/1.9/ref/models/querysets/#extra>
|
Clean way to use postgresql window functions in django ORM?
|
[
"",
"sql",
"django",
"postgresql",
"orm",
""
] |
I have a table with all word positions in a book
the table is like this
```
Word PageNo Position
----------------------
A 1 10
A 1 15
B 1 13
B 2 18
C 2 20
```
I want to find the pages that has word A and word B
the result is page 1
I can get the answer by this query
```
select PageNo from
(select * from t where word = 'A') a inner join
(select * from t where word = 'B') b on a.PageNo = b.PageNo
```
but what I really want is this
```
Word PageNo Position
----------------------
A 1 10
A 1 15
B 1 13
```
I want all rows that have pageno = (pages that have both A and B)
If i use this query
```
select * from
(select * from t where word = 'A') a inner join
(select * from t where word = 'B') b on a.PageNo = b.PageNo
```
here is the result
```
Word PageNo Position Word PageNo Position
------------------------------------------------------
A 1 10 B 1 13
A 1 15 B 1 13
```
When I join the two lists and select all columns, six columns are shown
but I want three columns that union the two lists.
Is there a way I can join two lists and union the lists?
I can do it with two selects but i want one.
|
```
select * from tablename
where PageNo in
(select PageNo
from tablename
where Word in ('A','B')
group by PageNo
having count(distinct Word) >= 2)
```
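Here is the subquery-with-HAVING approach run in SQLite with the sample rows from the question, as a quick check that it returns exactly the expected three rows for page 1.

```python
import sqlite3

# Pages containing BOTH words: HAVING COUNT(DISTINCT Word) >= 2.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Word TEXT, PageNo INT, Position INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("A", 1, 10), ("A", 1, 15), ("B", 1, 13),
    ("B", 2, 18), ("C", 2, 20),
])
rows = conn.execute("""
    SELECT * FROM t
    WHERE PageNo IN (SELECT PageNo FROM t
                      WHERE Word IN ('A','B')
                      GROUP BY PageNo
                      HAVING COUNT(DISTINCT Word) >= 2)
    ORDER BY Word, Position
""").fetchall()
print(rows)  # [('A', 1, 10), ('A', 1, 15), ('B', 1, 13)]
```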
|
Try this:
```
SELECT Word, PageNo, Position
FROM t
WHERE PageNo IN (SELECT PageNo
FROM t
WHERE word IN ('A', 'B')
GROUP BY PageNo
HAVING COUNT(DISTINCT word) = 2)
```
|
SQL query for book search
|
[
"",
"sql",
""
] |
I am using PostgreSQL version 8.1. I have a table as follows:
```
datetime | usage
-----------------------+----------
2015-12-16 02:01:45+00 | 71.615
2015-12-16 03:14:42+00 | 43.000
2015-12-16 01:51:43+00 | 25.111
2015-12-17 02:05:26+00 | 94.087
```
I would like to add the integer values in the `usage` column based on the date in the `datetime` column.
Simply, I would like the output to look as below:
```
datetime | usage
-----------------------+----------
2015-12-16 | 139.726
2015-12-17 | 94.087
```
I have tried `SELECT dateTime::DATE, usage, SUM(usage) FROM tableName GROUP BY dateTime::DATE, lngusage;` which does not perform as expected. Any assistance would be appreciated. Thanks in advance.
|
Below query should give you the desired result:
```
select to_char(datetime, 'YYYY-MM-DD') as day, sum(usage)
from tableName
group by day
```
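The same day-level grouping can be sketched in SQLite from Python; here `date(dt)` plays the role of Postgres `to_char(dt, 'YYYY-MM-DD')` or `dt::date` (table name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE usage_log (dt TEXT, usage REAL)")
con.executemany("INSERT INTO usage_log VALUES (?, ?)",
                [("2015-12-16 02:01:45", 71.615),
                 ("2015-12-16 03:14:42", 43.000),
                 ("2015-12-16 01:51:43", 25.111),
                 ("2015-12-17 02:05:26", 94.087)])

# Truncate the timestamp to its date, then sum per day
rows = con.execute("""
    SELECT date(dt) AS day, SUM(usage) AS total
    FROM usage_log
    GROUP BY day
    ORDER BY day
""").fetchall()
```

This yields one row per calendar day, matching the desired output in the question.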
|
This one is for postgreSQL, I see you added MySQL also.
```
SELECT
  dt,
  SUM(usage)
FROM (
SELECT
DATE_TRUNC('day', datetime) dt,
usage
FROM
tableName
) t
GROUP BY
dt
```
|
Sum Column of Integers Based on Timestamp in PostgreSQL
|
[
"",
"sql",
"postgresql",
"datetime",
""
] |
I recently upgraded our SQL Server from 2005 to 2014 (linked server) and I am noticing that one of the stored procedures which calls the exec command to execute a stored procedure on the upgraded linked server is failing with the error
> Could not find server 'server name' in sys.servers.Verify that the correct server name was specified. If necessary, execute the stored procedure sp\_addlinkedserver to add the server to sys.servers.
The issue is that the linked server exists and I have done tests to ensure I can query the tables from the linked server. Here are the checks I did to see if the linked server is configured correctly.
```
- select name from sys.servers -- > Lists the linked server
- select top 10 * from linkedserver.database.dbo.table --> Gets top 10 records
- exec linkedserver.database.dbo.storedproc --> Executes the stored procedure (I created a test stored procedure on the linked server and I can execute it)
```
However the one that is failing with the error is below
```
exec linkedserver.database.dbo.failing_storedprocedure @id,'load ','v2',@file_name, @list_id = @listid output;
```
I've recreated the linked server and RPC is enabled. I've granted execute permission on the stored procedure. I can select records and execute other stored procedures on the linked server, but the above exec is failing (it worked before the upgrade). Is there a syntax difference between SQL Server 2005 and SQL Server 2014 that is causing this to fail?
|
I figured out the issue. The linked server was created correctly. However, after the server was upgraded and switched over, `sys.servers` still contained the old server name.
I had to drop the old server name and add the new server name to `sys.servers` on the new server
```
sp_dropserver 'Server_A'
GO
sp_addserver 'Server',local
GO
```
|
At first check out that your linked server is in the list by this query
```
select name from sys.servers
```
If it not exists then try to add to the linked server
```
EXEC sp_addlinkedserver @server = 'SERVER_NAME' --or may be server ip address
```
After that login to that linked server by
```
EXEC sp_addlinkedsrvlogin 'SERVER_NAME'
,'false'
,NULL
,'USER_NAME'
,'PASSWORD'
```
Then you can do whatever you want ,treat it like your local server
```
exec [SERVER_NAME].[DATABASE_NAME].dbo.SP_NAME @sample_parameter
```
Finally you can drop that server from linked server list by
```
sp_dropserver 'SERVER_NAME', 'droplogins'
```
If it will help you then please upvote.
|
Could not find server 'server name' in sys.servers. SQL Server 2014
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"sql-server-2014",
"linked-server",
""
] |
Is there a way to return result with its order?
Sample: Run the following query on the `quiz` table:
```
select q.category_id, q.quiz_id, concat('Quiz ',q.name) name
from quiz q
where q.category_id = 11
order by q.category_id ASC
limit 2
offset 2;
```
Table `quiz` (structure):
```
+-------------+---------+-------+
| category_id | quiz_id | name |
+-------------+---------+-------+
| 10 | 10 | math |
| 11 | 10 | sport | => Quiz Sport 1
| 11 | 11 | sport | => Quiz Sport 2
| 12 | 10 | Geo. |
| 11 | 15 | sport | => Quiz Sport 3
| 11 | 12 | sport | => Quiz Sport 4
| 10 | 17 | math |
| 11 | 20 | sport | => Quiz Sport 5
| 11 | 22 | sport | => Quiz Sport 6
| 10 | 19 | math |
+-------------+---------+-------+
```
returns:
```
+-------------+---------+------------+
| category_id | quiz_id | name |
+-------------+---------+------------+
| 11 | 15 | Quiz sport |
| 11 | 12 | Quiz sport |
+-------------+---------+------------+
```
<http://sqlfiddle.com/#!9/110752/2>
Is there a way to return a result with the Quiz number order, like this:
```
+-------------+---------+--------------+
| category_id | quiz_id | name |
+-------------+---------+--------------+
| 11 | 15 | Quiz sport 3 |
| 11 | 12 | Quiz sport 4 |
+-------------+---------+--------------+
```
|
You need to establish a row number to do this. Doing so means you need a unique field to `order by` to ensure the order of the results. You can get the row number using `user-defined variables` in mysql. Here's an example reordering by `quiz_id`:
```
select *
from (select q.category_id, q.quiz_id, @rn := @rn + 1, concat('Quiz ',q.name, @rn) name
from quiz q, (select @rn := 0) t
where q.category_id = 11
order by q.category_id ASC, q.quiz_id) t
order by category_id ASC, quiz_id
limit 2
offset 2;
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!9/71f62/5)
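On MySQL 8.0+ (and other engines) the user-variable trick above can be replaced with the `ROW_NUMBER()` window function; here is a sketch of that variant in SQLite, where `rowid` stands in for the table's insertion order (a real schema would use its own auto-increment column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quiz (category_id INT, quiz_id INT, name TEXT)")
con.executemany("INSERT INTO quiz VALUES (?, ?, ?)",
                [(10, 10, 'math'), (11, 10, 'sport'), (11, 11, 'sport'),
                 (12, 10, 'Geo.'), (11, 15, 'sport'), (11, 12, 'sport'),
                 (10, 17, 'math'), (11, 20, 'sport'), (11, 22, 'sport'),
                 (10, 19, 'math')])

# Number the category-11 quizzes, then append the number to the name
rows = con.execute("""
    SELECT category_id, quiz_id, 'Quiz ' || name || ' ' || rn AS name
    FROM (SELECT category_id, quiz_id, name,
                 ROW_NUMBER() OVER (ORDER BY rowid) AS rn
          FROM quiz
          WHERE category_id = 11)
    ORDER BY rn
    LIMIT 2 OFFSET 2
""").fetchall()
```

The paging then returns "Quiz sport 3" and "Quiz sport 4" as in the question's desired output.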
|
You can either order by your primary key if you have one such as
```
select q.category_id, q.quiz_id, concat('Quiz ',q.name) name
from quiz q
where q.category_id = 11
order by q.id, q.category_id ASC
limit 2
offset 2;
```
If you don't I suggest you add a primary with [something like this](https://stackoverflow.com/questions/9070764/insert-auto-increment-primary-key-to-existing-mysql-database):
```
ALTER TABLE quiz ADD id INT PRIMARY KEY AUTO_INCREMENT;
```
It does not *have* to be `PRIMARY` if for some reasons that doesn't work with what you do. You could also use the aforementioned statement to add an `rank` column which is also `auto_increment`
Something like:
```
ALTER TABLE quiz ADD rank INT AUTO_INCREMENT;
```
(Then run the first query.)
|
MySQL: return result with its order
|
[
"",
"mysql",
"sql",
""
] |
I have a room inventory table where each room has one record per day,
but some rooms have two records for the same day. I want a query that pulls out those IDs.
Inventory\_table => id, roomid, inv\_date....
|
The following will get you a list of the rooms with duplicates. Group by `roomid` and `inv_date` only; including the unique `id` in the `GROUP BY` would hide the duplicates.
```
select roomid, inv_date, count(*)
from inventory_table
group by roomid, inv_date
having count(*) > 1;
```
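The duplicate-finding `GROUP BY` / `HAVING` idea can be sanity-checked in SQLite from Python (table name and rows are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (id INT, roomid INT, inv_date TEXT)")
con.executemany("INSERT INTO inventory VALUES (?, ?, ?)",
                [(1, 101, '2016-01-01'),
                 (2, 101, '2016-01-01'),   # duplicate day for room 101
                 (3, 101, '2016-01-02'),
                 (4, 102, '2016-01-01')])

# Any (roomid, inv_date) pair appearing more than once is a duplicate
dups = con.execute("""
    SELECT roomid, inv_date, COUNT(*) AS cnt
    FROM inventory
    GROUP BY roomid, inv_date
    HAVING COUNT(*) > 1
""").fetchall()
```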
|
```
Select Id, count(*) from inventory_table group by roomid, inv_date having count(*)>1
```
|
How to find duplicate record for same date
|
[
"",
"sql",
""
] |
I am really having problem trying to sum up a column of one table and then group it according to the data from another table. the only common key that they have is account number.
here is the scenario
```
table 1.
account_no volume read_date
1 32 12016
2 22 12016
3 20 12016
4 21 12016
5 54 12016
3 65 12016
7 84 12016
8 21 12016
9 20 12016
10 30 12016
```
=========================================================
```
table 2
account_no Zone
1 A
2 A
3 B
4 B
5 A
3 A
7 B
8 B
9 C
10 C
```
Result
```
Zone Total
A 173
B 146
C 50
```
so far this is how my query goes
```
Select sum(volume) as Total, read_date as bDate
FROM table1
GROUP BY bDate
ORDER By bDate
```
it was able to sum all the volumes based on the read\_date
any help would be much appreciated...
|
Try this, This will sum up based on read\_dates and Zones
```
SELECT A.read_date
,B.Zone
,sum(A.volume) SumOfVolume
FROM @Table1 A
INNER JOIN @Table2 B ON A.account_no = B.account_no
GROUP BY A.read_date
,B.Zone
```
|
You just have to `JOIN` the two tables together and do a `GROUP BY Zone`:
```
SELECT t2.Zone, SUM(volume) AS Total
FROM table1 AS t1
INNER JOIN table2 AS t2
ON t1.account_no = t2.account_no
GROUP BY t2.Zone
```
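The join-then-group approach can be verified in SQLite from Python. Note that the question's sample data lists account 3 twice in both tables; a plain join would pair each of its `table1` rows with each of its `table2` rows and double-count, so this sketch drops the duplicate (totals therefore differ slightly from the question's expected output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (account_no INT, volume INT, read_date INT)")
con.execute("CREATE TABLE table2 (account_no INT, zone TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?, 12016)",
                [(1, 32), (2, 22), (4, 21), (5, 54),
                 (7, 84), (8, 21), (9, 20), (10, 30)])
con.executemany("INSERT INTO table2 VALUES (?, ?)",
                [(1, 'A'), (2, 'A'), (4, 'B'), (5, 'A'),
                 (7, 'B'), (8, 'B'), (9, 'C'), (10, 'C')])

# Join on the shared key, then sum volume per zone
rows = con.execute("""
    SELECT t2.zone, SUM(t1.volume) AS total
    FROM table1 t1
    JOIN table2 t2 ON t1.account_no = t2.account_no
    GROUP BY t2.zone
    ORDER BY t2.zone
""").fetchall()
```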
|
how to sum a column from one table and group the sum using data from column of another table
|
[
"",
"sql",
"sql-server",
""
] |
Need a solution for the following situation:
I need to update T1, column Email
with the following data:
id from T1 + @test.com
Result that I need should look like this = 1234@test.com,
That number must be an ID from T1 + @test.com
For example:
```
table 1 = customer
COLUMNS = ID, Email,
```
Need to update the Email column.
Results should look like this:
```
ID Email
1 1@test.com
2 2@test.com
3 3@test.com
```
I'm doing this on an existing tables, not creating them.
Thanks in advance.
|
Are you looking for something like this?
```
CREATE TABLE T1 (ID INT, OtherData VARCHAR(100),eMail VARCHAR(100));
INSERT INTO T1(id,OtherData) VALUES(1,'Row 1'),(2,'Row 2'),(3, 'Row 3');
SELECT * FROM T1;
UPDATE T1 SET eMail= CAST(ID AS VARCHAR(10)) + '@test.com'
SELECT * FROM T1;
--Clean Up
DROP TABLE T1;
```
The result:
```
ID OtherData eMail
1 Row 1 1@test.com
2 Row 2 2@test.com
3 Row 3 3@test.com
```
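The same id-to-email update can be demonstrated in SQLite from Python; SQLite uses `||` for string concatenation where SQL Server uses `+`, but the `CAST` idea is identical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER, email TEXT)")
con.executemany("INSERT INTO customer (id) VALUES (?)", [(1,), (2,), (3,)])

# Cast the numeric id to text, then append the domain
con.execute("UPDATE customer SET email = CAST(id AS TEXT) || '@test.com'")
rows = con.execute("SELECT id, email FROM customer ORDER BY id").fetchall()
```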
|
```
UPDATE T1 SET
[Email] = T2.ID + '@test.com'
FROM T2 INNER JOIN T1
ON T2.[key_column] = T1.[foreign_key_column];
```
Is this what you want ?
|
Update SQL table usign data from another table + custom text
|
[
"",
"sql",
"sql-server",
"database",
"sql-update",
""
] |
In the query below, I'm using Subquery 1 and Subquery 2 to get the account number and account name. However, the first and second subqueries join the same tables, except that Subquery 2 joins an additional table, account\_nameinfo\_t, to get the account name. Is there a way to avoid selecting from the other tables and just use the value of Subquery 1 (i.e. the account number) to get the account name in Subquery 2?
```
SELECT
(
SELECT acct.account_no
FROM group_t grp1, account_t acct
WHERE grp1.poid_id0 = grpbm.obj_id0
AND acct.poid_id0 = grp1.ACCOUNT_OBJ_ID0
) PARENT_ACCOUNT, -- (#Subquery 1 to get the parent account)
(
SELECT ant.Firstname || ' ' || ant.LastName
FROM group_t grp1, account_t acct,account_nameinfo_t ant
WHERE grp1.poid_id0 = grpbm.obj_id0
AND acct.poid_id0 = grp1.ACCOUNT_OBJ_ID0
AND ant.obj_id0 = acct.poid_id0
) "ACCOUNT NAME", -- (#Subquery 2 to get the parent account name which is in a different table)
bgs.REC_ID2 RECORD_TYPE,
bgs.current_bal VALUE
FROM group_t grp,
group_billing_members_t grpbm,
BAL_GRP_SUB_BALS_T bgs
WHERE poid_type = '/group/sharing/discounts'
AND grpbm.OBJECT_ID0 = grp.ACCOUNT_OBJ_ID0
AND bgs.obj_id0 = grp.BAL_GRP_OBJ_ID0
AND bgs.rec_id2 NOT IN (1000203,
1030001,
1000303,
1000306)
ORDER BY PARENT_ACCOUNT;
```
|
It looks like you can simplify this using simple joins rather than subqueries, either in the select list or as inline views:
```
SELECT acct.account_no AS "PARENT ACCOUNT",
ant.first_name||' '||ant.last_name AS "ACCOUNT NAME",
bgs.rec_id2 AS record_type,
bgs.current_bal
FROM group_t grp
JOIN group_billing_members_t grpbm ON grpbm.obj_id0 = grp.account_obj_id0
JOIN group_t grp1 ON grp1.poid_id0 = grpbm.obj_id0
JOIN bal_grp_sub_bals_t bgs ON bgs.obj_id0 = grp.bal_grp_obj_id0
JOIN account_t acct ON acct.poid_id0 = grp1.account_obj_id0
JOIN account_nameinfo_t ant ON ant.obj_id0 = acct.poid_id0
WHERE grp.poid_type='/group/sharing/discounts'
AND bgs.rec_id2 not in (1000203, 1030001, 1000303, 1000306)
AND ant.rec_id = 1
ORDER BY "PARENT ACCOUNT";
```
You only seem to be using `group_billing_members_t` between two references to `group_t`, and it isn't clear if they both point to the same record, or if that expands to multiple rows. The column names seem a bit inconsistent, which may be from your retyping the code rather than copying and pasting it. If it is the same record then you seem to be able to remove that table and the rejoin:
```
SELECT acct.account_no AS "PARENT ACCOUNT",
ant.first_name||' '||ant.last_name AS "ACCOUNT NAME",
bgs.rec_id2 AS record_type,
bgs.current_bal
FROM group_t grp
JOIN bal_grp_sub_bals_t bgs ON bgs.obj_id0 = grp.bal_grp_obj_id0
JOIN account_t acct ON acct.poid_id0 = grp.account_obj_id0
JOIN account_nameinfo_t ant ON ant.obj_id0 = acct.poid_id0
WHERE grp.poid_type='/group/sharing/discounts'
AND bgs.rec_id2 not in (1000203, 1030001, 1000303, 1000306)
AND ant.rec_id = 1
ORDER BY "PARENT ACCOUNT";
```
Without table structures, relationships, sample data and expected results that's rather speculative though.
|
Oracle supports a `WITH` clause which you may find useful: <http://psoug.org/reference/with.html>
Essentially, it allows you to create a temporary view within a query that can be accessed multiple times. In your case, the result of your common join can be "factored out" and the result can be reused.
|
Oracle SQL subquery scalar as input to other subquery
|
[
"",
"sql",
"oracle",
""
] |
I want to query for values such as:
```
AVOCADOS, GRANDPAS-HASS 70CT 23#
```
...which may be stored without a comma, such as:
```
AVOCADOS GRANDPAS-HASS 70CT 23#
```
I want to disregard whether the comma exists or not, as long as the rest of the varchar value is identical. How can I query for a commaized value and get back both records containing that identical value, but also records that are identical except for the lack of the comma?
IOW, with the following query:
```
select bla from unitproducts WHERE Description = 'AVOCADOS ABUELOS-HASS 70CT 23#'
```
...I want to get 'bla' for records with both of these descriptions:
```
AVOCADOS GRANDPAS-HASS 70CT 23#
AVOCADOS, GRANDPAS-HASS 70CT 23#
```
I could run the query twice, if the first one returns no records, inserting a comma before the first space in the string, but I'm hoping there's a more 'elegant' approach that wouldn't require such a workaround.
Can this be done with "LIKE"? If so, how?
## UPDATE
It ended up that in my case I couldn't change the SQL itself (it is in a shared StoredProc), so I bandaided the code with this:
Former code:
```
bool isContractItem = IsContractItem(desc);
```
Current code:
```
bool isContractItem = IsContractItem(desc);
if (!isContractItem)
{
desc = Decommaize(desc);
isContractItem = IsContractItem(desc);
}
. . .
private string Decommaize(string stringWithComma)
{
return stringWithComma.Replace(",", string.Empty);
}
```
|
Maybe a bit overkill since it may make just as much sense to just clean the data in the first place, but to make the search hit indexes, you can add the stripped field as a computed column and add an index on it;
```
ALTER TABLE unitproducts ADD desc_search AS REPLACE(description, ',', '');
CREATE INDEX ix_desc_search ON unitproducts(desc_search);
```
Then you should be able to just efficiently do;
```
SELECT bla
FROM unitproducts
WHERE desc_search = REPLACE('AVOCADOS ABUELOS-HASS 70CT 23#', ',', '');
```
Since the replace is just performed once on your string instead of also once per row to find a match, the search can use the index on the computed column to find your row.
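The comma-stripped comparison itself can be sanity-checked in SQLite from Python (table contents are made up; SQLite's `REPLACE` behaves the same way here as SQL Server's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE unitproducts (bla TEXT, description TEXT)")
con.executemany("INSERT INTO unitproducts VALUES (?, ?)",
                [("row1", "AVOCADOS GRANDPAS-HASS 70CT 23#"),
                 ("row2", "AVOCADOS, GRANDPAS-HASS 70CT 23#"),
                 ("row3", "BANANAS 40#")])

# Strip commas from both sides before comparing, so the match
# ignores whether the stored value has the comma or not
target = "AVOCADOS, GRANDPAS-HASS 70CT 23#"
rows = con.execute(
    "SELECT bla FROM unitproducts "
    "WHERE REPLACE(description, ',', '') = REPLACE(?, ',', '')",
    (target,)).fetchall()
```

Both the comma and no-comma variants match; the unrelated row does not.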
|
How about
```
WHERE REPLACE(string1, ',', '') = REPLACE(string2, ',', '')
```
or for your specific example
```
WHERE REPLACE(Description,',','') = 'AVOCADOS ABUELOS-HASS 70CT 23#'
```
If you want to put "identical" records first in the result list you can also do
```
ORDER BY (CASE WHEN Description='AVOCADOS ABUELOS-HASS 70CT 23#' THEN 0 ELSE 1 END)
```
Note that neither method is sargable, so any query that does not limit data in some other way will likely do a full table scan.
|
How can I disregard whether commas exist in a string with SQL?
|
[
"",
"sql",
"sql-server",
""
] |
My goal is to get SQL to check if the row exists, if it does update, if not insert.
`FTP_num` is the name of the first column. The SQL Statement is
```
using (SqlCommand cmd = new SqlCommand("IF NOT EXISTS(SELECT ftp_num from Distributor WHERE fpt_num = FTP_num)" +
" insert FTP_Info set IP=@IP, Port=@Port, UN=@UN, PW=@PW, Folder=@Folder where FTP_num = @ftp_num" +
" else" +
" update FTP_Info set IP=@IP, Port=@Port, UN=@UN, PW=@PW, Folder=@Folder where FTP_num = @ftp_num", con))
```
I don't have much SQL experience, any help is appreciated.
```
Line 120:
Line 121: con.Open();
Line 122: cmd.ExecuteNonQuery();
Line 123: con.Close();
Line 124: }
```
|
Your insert syntax is wrong.
It should be something like this:
```
using (SqlCommand cmd = new SqlCommand("IF NOT EXISTS(SELECT 1 FROM Distributor WHERE fpt_num = @FTP_num)" +
" insert into FTP_Info (IP, Port, UN, PW, Folder, FTP_num) VALUES(@IP, @Port, @UN, @PW, @Folder, @ftp_num)" +
" else" +
" update FTP_Info set IP=@IP, Port=@Port, UN=@UN, PW=@PW, Folder=@Folder where FTP_num = @ftp_num", con))
```
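The exists-check-then-insert-or-update pattern can also be sketched client-side; this SQLite example (table and column names are simplified stand-ins) shows the same logic the T-SQL batch performs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ftp_info (ftp_num INT PRIMARY KEY, ip TEXT, port INT)")

def upsert(ftp_num, ip, port):
    # IF NOT EXISTS ... INSERT ELSE UPDATE, done as two statements
    exists = con.execute("SELECT 1 FROM ftp_info WHERE ftp_num = ?",
                         (ftp_num,)).fetchone()
    if exists:
        con.execute("UPDATE ftp_info SET ip = ?, port = ? WHERE ftp_num = ?",
                    (ip, port, ftp_num))
    else:
        con.execute("INSERT INTO ftp_info (ftp_num, ip, port) VALUES (?, ?, ?)",
                    (ftp_num, ip, port))

upsert(1, "10.0.0.1", 21)
upsert(1, "10.0.0.2", 2121)   # second call updates instead of inserting
rows = con.execute("SELECT ftp_num, ip, port FROM ftp_info").fetchall()
```

After both calls there is exactly one row, holding the updated values.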
|
Check out the merge statement: <https://technet.microsoft.com/en-us/library/bb522522(v=sql.105).aspx> for SQL Server, <http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9016.htm> for Oracle
|
SQL IF NOT EXISTS INSERT Else Update, based on column ID
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I have a table `Employee` with following Sample Data
```
ID Name Gender
1 Mary F
2 John M
3 Smith M
```
I want to write an Update query that would set Gender to `F` where Gender is `M` and set Gender to `M` where Gender is `F`. How can I do this in single `update` query?
|
You would simply use `case`:
```
update t
set Gender = (case when Gender = 'F' then 'M' else 'F' end)
where Gender in ('F', 'M');
```
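A quick SQLite check of this `CASE`-based swap, using the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (id INT, name TEXT, gender TEXT)")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, 'Mary', 'F'), (2, 'John', 'M'), (3, 'Smith', 'M')])

# Swap F<->M in a single UPDATE via CASE
con.execute("""
    UPDATE employee
    SET gender = CASE WHEN gender = 'F' THEN 'M' ELSE 'F' END
    WHERE gender IN ('F', 'M')
""")
rows = con.execute("SELECT id, gender FROM employee ORDER BY id").fetchall()
```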
|
We can update by using `CASE` expression.
**Query**
```
update Employee
set Gender = (
case Gender when 'M' then 'F'
when 'F' then 'M'
else Gender end
);
```
|
Update multiple rows of same column
|
[
"",
"sql",
"sql-server",
"sql-update",
""
] |
My data-set inside table `object_type_t` looks something like the following:
```
OBJ_ID PARENT_OBJ OBJECT_TYPE OBJECT_DESC
--------- ------------ ------------- -----------------------
ES01 <null> ESTATE Bucks Estate
BUI01 ES01 BUILDING Leisure Centre
BUI02 ES01 BUILDING Fire Station
BUI03 <null> BUILDING Housing Block
SQ01 BUI01 ROOM Squash Court
BTR01 BUI02 ROOM Bathroom
AP01 BUI03 APARTMENT Flat No. 1
AP02 BUI03 APARTMENT Flat No. 2
BTR02 AP01 ROOM Bathroom
BDR01 AP01 ROOM Bedroom
BTR03 AP02 ROOM Bathroom
SHR01 BTR01 OBJECT Shower
SHR02 BTR02 OBJECT Shower
SHR03 BTR03 OBJECT Shower
```
Which in practical, hierarchical terms, looks something like this:
```
ES01
|--> BUI01
| |--> SQ01
|--> BUI02
| |--> BTR01
|--> SHR01
=======
BUI03
|--> AP01
| |--> BTR02
| | |--> SHR02
| |--> BDR01
|--> AP02
|--> BTR03
|--> SHR03
```
I know how to use hierarchical queries, such as `CONNECT BY PRIOR`. I'm also aware of how to find the root of the tree via `connect_by_root`. But what I am looking to do is find a given "level" of a tree (i.e. not the root level, but rather the "BUILDING" level of a given object).
So for example, I would like to be able to query out every object in the hierarchy, which belongs to `BUI01`.
And then in reverse, given an object ID, I would like to be able to query out the associated (say) ROOM `object_id` for that object.
Things would be much easier if I could associate each `OBJECT_TYPE` with a given `level`. But as you see from the above example, BUILDING does not always appear at level 1 in the hierarchy.
My initial idea is to fetch the data into an intermediate tabular format (perhaps a materialized view) which would look like the following. This would allow me to find the data I want by simple SQL queries on the materialized view:
```
OBJ_ID OBJECT_DESC ESTATE_OBJ BUILDING_OBJ ROOM_OBJ
--------- ---------------- ---------- ------------ ----------
ES01 Bucks Estate ES01
BUI01 Leisure Centre ES01 BUI01
BUI02 Fire Station ES01 BUI02
BUI03 Housing Block BUI03
SQ01 Squash Court ES01 BUI01 SQ01
BTR01 Bathroom ES01 BUI02 BTR01
AP01 Flat No. 1 BUI03
AP02 Flat No. 2 BUI03
BTR02 Bathroom BUI03 BTR02
BDR01 Bedroom BUI03 BDR01
BTR03 Bathroom BUI03 BTR03
SHR01 Shower ES01 BUI02 BTR01
SHR02 Shower BUI03 BTR02
SHR03 Shower BUI03 BTR03
```
But (short of writing PL/SLQ, which I would like to avoid), I haven't been able to concisely structure a query which would achieve this tabular format.
Does anyone know how I can do this? Can it be done?
Solutions must be executable in Oracle 12c.
**Additionally:** Performance is important, since my underlying data structure contains several hundred-thousand lines, and structures can be quite deep. So faster solutions will be preferred over slower ones :-)
Thanks for your help, in advance.
|
The desired output has 3 columns which are determined by object types. In general this could be extended with more columns, one for each possible value for the field `object_type`. Even with the given example data, one could imagine an additional column `apartment_obj`.
To make this generic without the need to self-join the table as many times as there are object type values, one could use a combination of `CONNECT BY` and `PIVOT` clauses:
```
SELECT *
FROM (
SELECT obj_id,
object_desc,
CONNECT_BY_ROOT obj_id AS pivot_col_value,
CONNECT_BY_ROOT object_type AS pivot_col_name
FROM object_type_t
-- skip the STARTS WITH clause to get all connected pairs
CONNECT BY parent_obj = PRIOR obj_id
)
PIVOT (
MAX(pivot_col_value) AS obj
FOR (pivot_col_name) IN (
'ESTATE' AS estate,
'BUILDING' AS building,
'ROOM' AS room
)
);
```
The `FOR ... IN` clause has a hard-coded list of names of the desired columns -- without the `_obj` suffix, as that gets added during the pivot transformation.
Oracle does not allow this list to be dynamically retrieved. NB: there is an exception to this rule when using the [`PIVOT XML` syntax](https://oracle-base.com/articles/11g/pivot-and-unpivot-operators-11gr1), but there you get the XML output in one column, which you would then need to parse. That would be rather inefficient.
The sub-query with the `CONNECT BY` clause does not have a `STARTS WITH` clause, which makes that query take any record as starting point and produce the descendants from there. Together with the `CONNECT_BY_ROOT` selection, this allows to produce a full list of all *connected* pairs, where the distance between the two in the hierarchy can be anything. The `JOIN` then matches the deeper of the two, so you get all ancestors of that node (including the node itself). And those ancestors are then pivoted into columns.
The `CONNECT BY` sub-query could also be written in way that the hierarchy is traversed backwards. The output is the same, but maybe there is a performance difference. If so, I think that variation could have better performance, but I did not test this on large datasets:
```
SELECT *
FROM (
SELECT CONNECT_BY_ROOT obj_id AS obj_id,
CONNECT_BY_ROOT object_desc AS object_desc,
obj_id AS pivot_col_value,
object_type AS pivot_col_name
FROM object_type_t
-- Connect in backward direction:
CONNECT BY obj_id = PRIOR parent_obj
)
PIVOT (
MAX(pivot_col_value) AS obj
FOR (pivot_col_name) IN (
'ESTATE' AS estate,
'BUILDING' AS building,
'ROOM' AS room
)
);
```
Note that in this variant the `CONNECT_BY_ROOT` returns the deeper node of the pair, because of the opposite traversal.
### Alternative based on self-joins (previous answer)
You could use this query:
```
SELECT t1.obj_id,
t1.object_desc,
CASE 'ESTATE'
WHEN t1.object_type THEN t1.obj_id
WHEN t2.object_type THEN t2.obj_id
WHEN t3.object_type THEN t3.obj_id
END estate_obj,
CASE 'BUILDING'
WHEN t1.object_type THEN t1.obj_id
WHEN t2.object_type THEN t2.obj_id
WHEN t3.object_type THEN t3.obj_id
END building_obj,
CASE 'ROOM'
WHEN t1.object_type THEN t1.obj_id
WHEN t2.object_type THEN t2.obj_id
WHEN t3.object_type THEN t3.obj_id
END room_obj
FROM object_type_t t1
LEFT JOIN object_type_t t2 ON t2.obj_id = t1.parent_obj
LEFT JOIN object_type_t t3 ON t3.obj_id = t2.parent_obj
```
|
If I correctly understand your need, maybe you can avoid the tabular view, directly querying your table;
Say you want to find all the objects belonging to `BUI01`, you can try:
```
with test(OBJ_ID, PARENT_OBJ, OBJECT_TYPE, OBJECT_DESC) as
(
select 'ES01','','ESTATE','Bucks Estate' from dual union all
select 'BUI01','ES01','BUILDING','Leisure Centre' from dual union all
select 'BUI02','ES01','BUILDING','Fire Station' from dual union all
select 'BUI03','','BUILDING','Housing Block' from dual union all
select 'SQ01','BUI01','ROOM','Squash Court' from dual union all
select 'BTR01','BUI02','ROOM','Bathroom' from dual union all
select 'AP01','BUI03','APARTMENT','Flat No. 1' from dual union all
select 'AP02','BUI03','APARTMENT','Flat No. 2' from dual union all
select 'BTR02','AP01','ROOM','Bathroom' from dual union all
select 'BDR01','AP01','ROOM','Bedroom' from dual union all
select 'BTR03','AP02','ROOM','Bathroom' from dual union all
select 'SHR01','BTR01','OBJECT','Shower' from dual union all
select 'SHR02','BTR02','OBJECT','Shower' from dual union all
select 'SHR03','BTR03','OBJECT','Shower' from dual
)
select OBJECT_TYPE, OBJ_ID, OBJECT_DESC
from test
connect by prior obj_id = parent_obj
start with obj_ID = 'BUI01'
```
This considers `BUI01` belonging to itself; if you don't want this, you can modify the query in quite simple way to cut off your starting value.
On the opposite way, say you're looking for the room in which `SHR01` is, you can try with the following; it's basically the same recursive idea, but in ascending order, instead of descending the tree:
```
with test(OBJ_ID, PARENT_OBJ, OBJECT_TYPE, OBJECT_DESC) as
(...
)
SELECT *
FROM (
select OBJECT_TYPE, OBJ_ID, OBJECT_DESC
from test
connect by obj_id = PRIOR parent_obj
start with obj_ID = 'SHR01'
)
WHERE object_type = 'ROOM'
```
In both cases, you only scan your table once, without any other structure; this way, this has a chance to be fast enough.
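The ascending `CONNECT BY` walk in the second query can equivalently be written as an ANSI recursive CTE (which Oracle 11gR2+ also supports); here is a sketch of the "find the ROOM containing SHR01" case in SQLite, with a trimmed-down copy of the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE object_type_t
               (obj_id TEXT, parent_obj TEXT, object_type TEXT, object_desc TEXT)""")
con.executemany("INSERT INTO object_type_t VALUES (?, ?, ?, ?)",
                [('ES01', None, 'ESTATE', 'Bucks Estate'),
                 ('BUI02', 'ES01', 'BUILDING', 'Fire Station'),
                 ('BTR01', 'BUI02', 'ROOM', 'Bathroom'),
                 ('SHR01', 'BTR01', 'OBJECT', 'Shower')])

# Walk upward from SHR01 through its ancestors, then keep the ROOM row
room = con.execute("""
    WITH RECURSIVE anc(obj_id, parent_obj, object_type) AS (
        SELECT obj_id, parent_obj, object_type
        FROM object_type_t WHERE obj_id = 'SHR01'
        UNION ALL
        SELECT t.obj_id, t.parent_obj, t.object_type
        FROM object_type_t t JOIN anc ON t.obj_id = anc.parent_obj
    )
    SELECT obj_id FROM anc WHERE object_type = 'ROOM'
""").fetchone()
```

As with the `CONNECT BY` version, each node is visited at most once on the way up.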
|
Fetch object of specified level-type from data hierarchy (Oracle 12 SQL)
|
[
"",
"sql",
"oracle",
"oracle12c",
""
] |
Is there a way to stack/group string/text per user ?
data I have
```
USER STATES
1 CA
1 AR
1 IN
2 CA
3 CA
3 NY
4 CA
4 AL
4 SD
4 TX
```
What I need is
```
USER STATES
1 CA / AR / IN
2 CA
3 CA / NY
4 CA / AL / SD / TX
```
I tried a cross join and then another cross join, but the data spools out. Thanks!
|
I am not an expert but this should work. You may need to modify it a bit per your exact requirement. Hope this helps!
```
CREATE VOLATILE TABLE temp AS (
SELECT
USER
,STATES
,ROW_NUMBER() OVER (PARTITION BY USER ORDER BY STATES) AS rn
FROM yourtable
) WITH DATA PRIMARY INDEX(USER) ON COMMIT PRESERVE ROWS;
WITH RECURSIVE rec_test(US,ST, LVL)
AS
(
SELECT USER, STATES (VARCHAR(100)), 1
FROM temp
WHERE rn = 1
UNION ALL
SELECT USER, TRIM(STATES) || ', ' || ST,LVL+1
FROM temp INNER JOIN rec_test
ON USER = US
AND temp.rn = rec_test.lvl+1
)
SELECT US,ST, LVL
FROM rec_test
QUALIFY RANK() OVER(PARTITION BY US ORDER BY LVL DESC) = 1;
```
|
If Teradata's XML-services are installed there's a function named XMLAGG, which returns a similar result: `CA, AR, IN`
```
SELECT user,
TRIM(TRAILING ',' FROM (XMLAGG(TRIM(states)|| ',' /* optionally ORDER BY ...*/) (VARCHAR(10000))))
FROM tab
GROUP BY 1
```
Btw, using recursion will result in huge spool usage, because you keep all the intermediate rows in spool before returning the final row.
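For comparison, SQLite (and MySQL) expose this kind of string aggregation directly as `GROUP_CONCAT`; a minimal sketch, with `usr` standing in for the `USER` column name and no guarantee on within-group order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (usr INT, states TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?)",
                [(1, 'CA'), (1, 'AR'), (1, 'IN'),
                 (2, 'CA'), (3, 'CA'), (3, 'NY'),
                 (4, 'CA'), (4, 'AL'), (4, 'SD'), (4, 'TX')])

# One row per user, states joined with ' / '
rows = con.execute("""
    SELECT usr, GROUP_CONCAT(states, ' / ')
    FROM tab
    GROUP BY usr
    ORDER BY usr
""").fetchall()
```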
|
Teradata SQL stack rows per user
|
[
"",
"sql",
"teradata",
""
] |
I've been building apps for some time now, and I now want to develop an app to help users manage events. Here's the problem: I have very little knowledge of servers/databases and such.
To allow users to log in (and import friends) I count on Facebook API. But I have to manage events and users attending events (and other things but here are the main points).
Is a SQL server sufficient (and secure) for such things? In the DB I plan to store only events and the users attending those events.
If yes, what kind of offer from web hosting companies would allow me to set up such databases (Dedicated SQL server ? Web hosting with output in XML/JSON ?)
If no, what method would be better.
Thank you very much for the answers and time.
|
**Choice of database.**
Any database can do it, but you need to know that some of them are really expensive, such as SQL Server. You can see the SQL Server pricing [here](https://www.microsoft.com/en/server-cloud/products/sql-server/purchasing.aspx). I'd recommend you [MySQL](https://www.mysql.com/) database if your business is not really big. It is 100% free and you can find it at most of the web hosting provider.
**JSON/XML output**. This is managed by your web service. The web service is the server-side application; it provides information to your app, which is the client side. You can use many technologies to build it, such as Java, PHP, Node.js, Python ... The implementation is up to you. But normally, apps don't communicate with the database directly. You need a web service.
**Hosting**. This is hard to say, it depends on the technology chosen of your web-service.
|
It's entirely possible to use a SQL server to do these things, but you might be more comfortable using something like [Firebase](https://www.firebase.com/) or other NoSQL database providers.
|
Create an App that interact with SQL
|
[
"",
"android",
"ios",
"sql",
"sql-server",
""
] |