| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
While running this query
```
update p_leave_allocation_14122015
set 'month' + @actualMonth = @actualleavedays
where year = @actualYear
and emp_card_no = @emp_card_no
```
I get an error:
> Incorrect syntax near 'month'.
The table `p_leave_allocation_14122015` has columns like `month1`, `month2`, `month3`, and so on.
In my update query, the `month` prefix is hardcoded while the month number is dynamic, based on some conditions I have written elsewhere.
But when I tried the above query, I got the mentioned error.
I am using SQL Server 2005.
Kindly suggest a solution.
|
*You should try something like the query below:*
```
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'update p_leave_allocation_14122015
SET month' + CAST(@actualMonth AS VARCHAR(2)) + ' = ' + CAST(@actualleavedays AS VARCHAR(10)) +
' WHERE year = ' + CAST(@actualYear AS VARCHAR(4)) +
' AND emp_card_no = ' + CAST(@emp_card_no AS VARCHAR(100));
PRINT @sql; -- Note: check the generated statement before executing it.
EXEC sp_executesql @sql;
```
|
You need to use dynamic SQL to do this:
```
DECLARE @sql NVARCHAR(max) = ''
SET @sql = 'update p_leave_allocation_14122015 set month'
+ Cast(@actualMonth AS VARCHAR(10)) + ' = '
+ Cast(@actualleavedays AS VARCHAR(10))
+ ' where year = @actualYear and emp_card_no = @emp_card_no '
EXEC sp_executesql
@sql,
N'@actualYear int,@emp_card_no int',
@actualYear,
@emp_card_no
```
Note: if `@actualMonth` or `@actualleavedays` is of an integer type, you need to cast it to `varchar` for the string concatenation in the dynamic query to work.
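The same idea can be sketched outside T-SQL: only the identifier is built dynamically (after validating it), while the values travel as bound parameters. A minimal illustration in Python with SQLite; the table and variable names are hypothetical stand-ins for the ones above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leave_allocation (emp_card_no INTEGER, year INTEGER, "
             "month1 REAL, month2 REAL, month3 REAL)")
conn.execute("INSERT INTO leave_allocation VALUES (7, 2015, 0, 0, 0)")

actual_month, actual_leave_days, actual_year, emp_card_no = 2, 1.5, 2015, 7

# Whitelist the dynamic part: only the column suffix is concatenated,
# and only after checking it against a known-safe range.
if not 1 <= actual_month <= 12:
    raise ValueError("invalid month")
sql = f"UPDATE leave_allocation SET month{actual_month} = ? WHERE year = ? AND emp_card_no = ?"

# Values are bound parameters, never concatenated strings.
conn.execute(sql, (actual_leave_days, actual_year, emp_card_no))
row = conn.execute("SELECT month2 FROM leave_allocation").fetchone()
print(row[0])  # 1.5
```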
|
Incorrect syntax near 'month' with dynamic number
|
[
"sql",
"sql-server-2005"
] |
I have a mysql table with this structure:
```
CREATE TABLE `item_rate` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`item` int(11) NOT NULL,
`ip` int(32) NOT NULL,
`rate` int(1) NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `item` (`item`),
CONSTRAINT `item_rate_ibfk_1` FOREIGN KEY (`item`) REFERENCES `items` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
```
I need to get the following calculated information from this table.
Here `rate` may be 1, 2, 3, 4 or 5, so I need to select every `item` and, for each `item`, calculate
```
if rate = 5
5 * COUNT(id)
if rate = 4
4 * COUNT(id)
```
and so on...
Example:
On item 1 I have
```
rate 5 count 100
rate 4 count 311
rate 3 count 188
rate 2 count 83
rate 1 count 10
```
select must calculate like this
```
(5 * 100 + 4 * 311 + 3 * 188 + 2 * 83 + 1 * 10) / 692 (sum of counts) = 3.58
```
then round this 3.58
and return info like this
```
item count
1 3
2 4
3 5
4 3
5 5
```
How can I do this with one query?
P.S. I'm using a MySQL DB.
|
Try this:
```
SELECT item, FLOOR(SUM(rate * ratecount) / SUM(ratecount)) AS cnt
FROM (SELECT item, rate, COUNT(id) ratecount
FROM item_rate
GROUP BY item, rate
) AS A
GROUP BY item;
```
To also fetch IP-related details, check the query below:
```
SELECT item, FLOOR(SUM(rate * ratecount) / SUM(ratecount)) AS cnt,
(CASE WHEN SUM(ipExists) > 0 THEN 1 ELSE 0 END) ipExists
FROM (SELECT item, rate, COUNT(id) ratecount,
SUM(CASE WHEN ip > 0 THEN 1 ELSE 0 END) ipExists
FROM item_rate
GROUP BY item, rate
) AS A
GROUP BY item;
```
|
It looks like you simply want the average rating for an item. This is most easily written as:
```
select item, avg(rate)
from item_rate
group by item;
```
If you want this to two decimal places, cast as a `decimal` or use `format()`:
```
select item, cast(avg(rate) as decimal(8, 2)) as avg_rate
from item_rate
group by item;
```
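As a sanity check, the weighted-sum formula from the question and a plain `AVG(rate)` agree. A quick sketch in Python with SQLite (table name from the question, sample data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item_rate (id INTEGER PRIMARY KEY, item INTEGER, rate INTEGER)")
# Made-up sample: item 1 gets rates 5, 4, 4, 3 -> average 4.0
conn.executemany("INSERT INTO item_rate (item, rate) VALUES (?, ?)",
                 [(1, 5), (1, 4), (1, 4), (1, 3)])

# Weighted form: sum(rate * count) / sum(count), as in the question.
weighted = conn.execute(
    """SELECT item, SUM(rate * ratecount) * 1.0 / SUM(ratecount)
       FROM (SELECT item, rate, COUNT(id) AS ratecount
             FROM item_rate GROUP BY item, rate) AS a
       GROUP BY item"""
).fetchone()
# Plain average over the raw rows gives the same number.
plain = conn.execute("SELECT item, AVG(rate) FROM item_rate GROUP BY item").fetchone()
print(weighted[1], plain[1])  # 4.0 4.0
```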
|
How do I get the average 'rate' from my below MySQL table?
|
[
"mysql",
"sql",
"select",
"group-by",
"sum"
] |
I have a query that returns results grouped by the hour, like so.
```
SELECT COUNT(*) FROM (`shipped_products`) WHERE HOUR(timestamp) = 8
```
The issue is, I get different results on localhost than on the live server. The data is exactly the same on both (UTC). I believe that my localhost database is New York while the live Database is UTC, and because of this, HOUR() is working differently on the two environments. I don't have permission to change the live server timezone, and would like to avoid changing the local database timezone. Is there a way inside the query to ensure the HOUR() function is returning the same results regardless of database timezone?
|
You could just set the session timezone (or the global time zone) to UTC, [as described in the docs](https://dev.mysql.com/doc/refman/5.7/en/time-zone-support.html).
```
set time_zone = '+00:00';
```
Then `HOUR` and other functions will interpret `timestamp` types in UTC. See also the differences between `timestamp` and `datetime` [in this part of the docs](http://dev.mysql.com/doc/refman/5.7/en/datetime.html).
However, another concern is that `WHERE HOUR(timestamp) = 8` is going to make your query [non-sargable](https://en.wikipedia.org/wiki/Sargable), so it will have to scan the entire table, and won't be able to take advantage of any indexes you may have. The more records in the table, the slower this will get. (Especially if you have thousands or millions of records).
A better approach is to rewrite this as a *range query*. Get the starting and stopping times you want to filter by, and use a half-open interval.
```
WHERE timestamp >= @start AND timestamp < @end
```
Of course, you will lock down the value to a specific date when you calculate the start and end values. If instead you really do want the 8:00 UTC hour of *every* day, then you should create a separate integer column on the table with that information, so you can use it in a sargable query.
```
WHERE your_utc_hour_column = 8
```
This could be a computed column in some databases, but I don't think those are supported in MySQL, so you'd have to create and populate it manually.
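The helper-column idea can be sketched like this (SQLite for illustration, where `strftime('%H', …)` stands in for MySQL's `HOUR()`; the column and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipped_products (id INTEGER PRIMARY KEY, ts TEXT)")
conn.executemany("INSERT INTO shipped_products (ts) VALUES (?)",
                 [("2015-12-08 08:15:00",), ("2015-12-08 08:45:00",),
                  ("2015-12-08 09:05:00",)])

# One-time backfill of a plain integer hour column (UTC by construction here).
conn.execute("ALTER TABLE shipped_products ADD COLUMN utc_hour INTEGER")
conn.execute("UPDATE shipped_products SET utc_hour = CAST(strftime('%H', ts) AS INTEGER)")
conn.execute("CREATE INDEX idx_utc_hour ON shipped_products (utc_hour)")

# The filter is now a sargable equality on an indexed column.
count = conn.execute("SELECT COUNT(*) FROM shipped_products WHERE utc_hour = 8").fetchone()[0]
print(count)  # 2
```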
|
Try the [`CONVERT_TZ()` function](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_convert-tz):
```
SELECT COUNT(*) FROM (`shipped_products`)
WHERE HOUR(CONVERT_TZ(timestamp, 'UTC', 'America/New_York')) = 8
```
The manual warns:
> ## Note
>
> To use named time zones such as `'MET'` or `'Europe/Moscow'`, the time zone tables must be properly set up. [See Section 10.6, “MySQL Server Time Zone Support”](https://dev.mysql.com/doc/refman/5.5/en/time-zone-support.html), for instructions.
You can also use hour values instead, but will lose any timezone knowledge such as historic timezone changes, DST, etc.
```
SELECT COUNT(*) FROM (`shipped_products`)
WHERE HOUR(CONVERT_TZ(timestamp, '+00:00', '-05:00')) = 8
```
|
mysql HOUR() accounting for timezones
|
[
"mysql",
"sql",
"timezone"
] |
Is there a way to correct this query so that it works with `ONLY_FULL_GROUP_BY` enabled?
```
SELECT LOWER(s) AS lower_s, SUM(i) AS sum_i
FROM t
GROUP BY 1
HAVING LENGTH(lower_s) < 5
```
It gives the error message
> Non-grouping field 'lower\_s' is used in HAVING clause
[Fiddle](http://sqlfiddle.com/#!9/749fb9/1)
|
Why don't you just use `WHERE LENGTH(LOWER(s)) < 5`?
Using `HAVING` is not right here.
According to the [Wikipedia article on HAVING](https://en.wikipedia.org/wiki/Having_(SQL)):
> A HAVING clause in SQL specifies that an SQL SELECT statement should only return rows where aggregate values meet the specified conditions. It was added to the SQL language because the WHERE keyword could not be used with aggregate functions.[1]
|
There is no need to use a **HAVING** clause in your query. The query below will satisfy your requirement.
Try this:
```
SELECT LOWER(s) AS lower_s, SUM(i) AS sum_i
FROM t
WHERE LENGTH(LOWER(s)) < 5
GROUP BY 1;
```
|
MySQL error: Non-grouping field is used in HAVING clause
|
[
"mysql",
"sql",
"select",
"group-by",
"having"
] |
I have a project where I need to record answers to a questionnaire that has 50+ questions. Each question is associated with numbers 1 to 5. So the data for each questionnaire filled out is something like
```
Username : "Bob",
"Question 1" : 1,
"Question 2" : 1,
"Question 3" : 5,
.
.
.
"Question N" : 2
```
where the same questions are asked for each person. I'm wondering what is the correct database design for this type of situation. Do I make a table like
```
Questionaire_Answers
---------------------------------------------------------------------------
userid | Question_1_Answer | .... | Question_N_Answer
```
or what is the better way of doing this? Do I serialize the information with a giant varchar column like
```
Questionaire_Answers
-------------------------------------------------------------------------
userid | Question_Answers
--------------------------------------------------------------------------
1238 | "Question1->1,Question2->1,Question3->5,...QuestionN->2"
```
or what should I do?
|
The direct answer to your question is: "No, you do not serialize the information into a giant `varchar()` column." That is easy, because such serialization is almost never the right thing to do in SQL. The right thing is a junction table.
However, apart from that, a data structure for surveys is quite hard to implement because:
* Questions can be added and removed from a survey, particularly on-line surveys.
* Individual questions can have different versions and the nuances are important.
* Answers can be added and removed for a given question.
* Individual answers can have different versions and the nuances are important.
This is particularly true of on-line surveys, where data is gathered over multiple days. Early responses can lead to changes in the survey -- and for good reasons.
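A minimal junction-table sketch of the idea, in Python with SQLite (the table and column names are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users     (user_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE questions (question_id INTEGER PRIMARY KEY, text TEXT);
-- One row per (user, question) pair instead of 50+ answer columns.
CREATE TABLE answers (
    user_id     INTEGER REFERENCES users(user_id),
    question_id INTEGER REFERENCES questions(question_id),
    answer      INTEGER CHECK (answer BETWEEN 1 AND 5),
    PRIMARY KEY (user_id, question_id)
);
""")
conn.execute("INSERT INTO users VALUES (1238, 'Bob')")
conn.executemany("INSERT INTO questions VALUES (?, ?)", [(1, 'Q1'), (2, 'Q2'), (3, 'Q3')])
conn.executemany("INSERT INTO answers VALUES (?, ?, ?)",
                 [(1238, 1, 1), (1238, 2, 1), (1238, 3, 5)])

# Adding or removing a question is a row change, not a schema change.
rows = conn.execute("SELECT question_id, answer FROM answers WHERE user_id = 1238 "
                    "ORDER BY question_id").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 5)]
```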
|
Use a junction table, with one row per user per question:
```
UserID QuestionID Answer
--------------------------------
1238 1 2
1238 2 5
...
1238 50 3
```
|
What type of database design do I use for a questionnaire that has a lot of questions?
|
[
"sql",
"sql-server",
"t-sql",
"database-design"
] |
The following is the "pc" table which contains details about pc.
```
user=> SELECT * FROM pc;
code | model | speed | ram | hd | cd | price
------+----------+-------+------+------+-----+---------
101 | Imac | 2000 | 4096 | 500 | 16x | ₹550.00
102 | G450 | 1500 | 2048 | 500 | 8x | ₹450.00
(2 rows)
user=>
```
Now I want to take the average value of price, so I tried the following, but it produces an error.
```
user=> SELECT AVG(price) FROM pc;
ERROR: function avg(money) does not exist
LINE 1: SELECT AVG(price) FROM pc;
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
user=>
```
So, what is the way to get the average value of the price column, which is of the `money` datatype?
|
```
SELECT AVG(price::numeric) FROM pc;
```
or
As per PostgreSQL 8.4's [Monetary Types](http://www.postgresql.org/docs/8.4/static/datatype-money.html):
```
select avg(regexp_replace(price::text, '[₹$,]', '', 'g')::numeric) from pc
```
|
You can do:
```
SELECT AVG(price::money::numeric::float8) FROM pc;
```
|
Average value for Money Datatype
|
[
"sql",
"regex",
"postgresql",
"average",
"postgresql-8.4"
] |
I have a column of type `BOOLEAN`.
At first (on `INSERT`) the value will always be `FALSE`; only later may the column be updated to `TRUE`.
So the question is: should I make this column `NOT NULL DEFAULT FALSE`, or make it `DEFAULT NULL` and then update it to `TRUE` (and, if that update is ever reversed, set the column back to `NULL` rather than `FALSE`, so the `FALSE` value is never actually used)?
Which choice is better from the point of view of performance and storage savings?
|
It makes no meaningful difference to storage. Depending on other nullable columns, we're talking at most a single byte if this column happens to overflow the nullable bitmap to the next byte, but in all likelihood it won't add any storage at all.
For performance (as well as programmer productivity/maintenance), it depends a lot on how you plan to use this. There's nowhere near enough information here, nor would it be possible to get enough info without inviting anyone who actually reads this question to all of your project meetings.
Personally, when I have a boolean column that is false by default (however you choose to represent it) and will at some point become true and *stay true*, I like to use a nullable **DateTime** column for this value, where any date at all means the value is true, and NULL means false. My experience is that you'll almost always eventually want to know the date and time on which the value became true.
As an example of this, I work in Higher Ed. One thing that happens in this business is sending financial aid award letters to students. Students then sign and return the letters. Accepted awards may later be declined or superseded, but that won't change the fact that the student had signed this letter and accepted this award. Regulations require us to keep these letters on file, but from a database standpoint it was good enough just to know the boolean question of whether awards in the letter are accepted. It quickly became apparent that we also need to know *when* the awards are accepted. Thus, a datetime column is used for that field.
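A sketch of that pattern in Python with SQLite (the table and column names are hypothetical, chosen to match the award-letter example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# accepted_at doubles as the boolean: NULL means "not accepted",
# any timestamp means "accepted, and here is when".
conn.execute("CREATE TABLE award_letters (id INTEGER PRIMARY KEY, "
             "student TEXT, accepted_at TEXT)")
conn.executemany("INSERT INTO award_letters (student, accepted_at) VALUES (?, ?)",
                 [("Ann", "2016-03-01 10:00:00"), ("Bob", None)])

# "Is it true?" becomes an IS NOT NULL test.
accepted = conn.execute(
    "SELECT student FROM award_letters WHERE accepted_at IS NOT NULL").fetchall()
print(accepted)  # [('Ann',)]
```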
|
If you want to assume FALSE unless specifically set to TRUE, then use `NOT NULL DEFAULT FALSE`. It also depends on how you are using this column: if you only ever test `WHERE column = TRUE`, then I don't think it will matter. The point is that NULL is not equal to FALSE.
|
FALSE and TRUE vs NULL and TRUE
|
[
"sql",
"postgresql",
"plpgsql",
"postgresql-9.5"
] |
I was reading some Oracle SQL resources and I found this SQL code:
```
SELECT e.ename AS "NAME",
e.sal AS "Salary",
e.deptno,
AVG(a.sal) dept_avg
FROM emp e, emp a
WHERE e.deptno = a.deptno
AND e.sal > ( SELECT AVG(sal)
FROM emp
WHERE deptno = e.deptno )
GROUP BY e.ename, e.sal, e.deptno;
```
This SQL code is supposed to return every employee that gets more than the average salary of his department, and display his name, his salary, his department's ID, and then the average salary in his department.
In order to return the dept\_avg, we have to `group by` `deptno`, but the grouping columns are weird. My guess is that the grouping column is the column that is used as a join condition, `a.deptno`. Is that true? If not, can someone please clarify?
|
If you remove `GROUP BY` and use `SELECT *`, you'll see what's happening.
`emp` is joined on itself, every employee with salary higher than average is joined with every other employee in his department, making an awful lot of rows. Then, from that data, average salary (from every other worker in dept) is counted again, using `GROUP BY`. It's impressively inefficient, look at other answers to see how it should have been done.
|
Maybe re-writing with more modern conventions makes it clearer?
```
WITH avgbydept as
(
SELECT deptno, avg(sal) as avgsal
FROM emp
GROUP BY deptno
)
SELECT e.ename AS "NAME",
e.sal AS "Salary",
e.deptno,
AVG(a.sal) dept_avg
FROM emp e
JOIN emp a ON e.deptno = a.deptno
JOIN avgbydept abd ON e.deptno = abd.deptno
WHERE e.sal > abd.avgsal
GROUP BY e.ename, e.sal, e.deptno;
```
One thing this makes clear is that it has a "bug" of an extra join and group by -- To do as you say:
> This SQL code is supposed to return every employee that gets more than
> the average salary of his department and display his name, his salary
> his department's ID and then the average salary in his department.
I believe you want this
```
WITH avgbydept as
(
SELECT deptno, avg(sal) as avgsal
FROM emp
GROUP BY deptno
)
SELECT e.ename AS "NAME",
e.sal AS "Salary",
e.deptno,
abd.avgsal as dept_avg
FROM emp e
JOIN avgbydept abd ON e.deptno = abd.deptno
WHERE e.sal > abd.avgsal
```
|
the grouping column for an aggregate function is as a join condition?
|
[
"sql",
"oracle11g"
] |
I know similar questions had been asked before, but none of them had this same conditions and their answers didn't work for this case.
The table containing the messages looks like this:
```
id | owner_id | recipient_id | content | created
1 | 1 | 2 | Hello | 2015-12-08 20:00
2 | 2 | 1 | Hey | 2015-12-08 20:10
3 | 3 | 1 | You there? | 2015-12-08 21:00
4 | 1 | 3 | Yes | 2015-12-08 21:15
5 | 4 | 1 | Hey buddy | 2015-12-08 22:00
```
And let's say I query for the last message from each one of the conversations for User ID 1, the expected result is:
```
id | owner_id | recipient_id | content | created
5 | 4 | 1 | Hey buddy | 2015-12-08 22:00
4 | 1 | 3 | Yes | 2015-12-08 21:15
2 | 2 | 1 | Hey | 2015-12-08 20:10
```
I tried many combinations, using JOINs and sub-queries but none of them gave the expected results.
Here is one of the queries I tried, but it's not working. I believe it is not even near what I need.
```
SELECT
IF ( owner_id = 1, recipient_id, owner_id ) AS Recipient,
(
SELECT
content
FROM
messages
WHERE
( owner_id = 1 AND recipient_id = Recipient )
OR
( owner_id = Recipient AND recipient_id = 1 )
ORDER BY
created DESC
LIMIT 1
)
FROM
messages
WHERE
owner_id = 1
OR
recipient_id = 1
GROUP BY
Recipient;
```
|
```
select t.*
from
t
join
(select user, max(created) m
from
(
(select id, recipient_id user, created
from t
where owner_id=1 )
union
(select id, owner_id user, created
from t
where recipient_id=1)
) t1
group by user) t2
on ((owner_id=1 and recipient_id=user) or
(owner_id=user and recipient_id=1)) and
(created = m)
order by created desc
```
[example on sqlfiddle](http://sqlfiddle.com/#!9/df9024/1)
|
This should do the trick:
```
$joins = array(
array('table' => 'conversations',
'alias' => 'Conversation2',
'type' => 'LEFT',
'conditions' => array(
'Conversation.id < Conversation2.id',
'Conversation.owner_id = Conversation2.owner_id',
)
),
array('table' => 'conversations',
'alias' => 'Conversation3',
'type' => 'LEFT',
'conditions' => array(
'Conversation.id < Conversation3.id',
'Conversation.recepient_id = Conversation3.recepient_id',
)
)
);
$conditions = array(
'OR' => array(
array(
'Conversation2.id'=>null,
'Conversation.owner_id' => $ownerId
),
array(
'Conversation3.id'=>null,
'Conversation.recipient_id' => $ownerId
),
)
);
$order = array('Conversation.created'=>'DESC');
$lastConversations=$this->Conversation->find('all',compact('conditions','joins','order'));
```
Provided that the name of the table is `conversations` and the name of your model is `Conversation`. It's based on the technique described in the accepted answer of [Retrieving the last record in each group](https://stackoverflow.com/q/1313120/3144159).
|
Get last message from each conversation
|
[
"mysql",
"sql",
"database",
"relational-database"
] |
I want to see only grant privileges on a particular procedure in MySQL. [This documentation](http://dev.mysql.com/doc/refman/5.7/en/show-grants.html) shows all grants.
Is there a way to see grants only for a particular procedure in a database?
The command below returns all grants, including SELECT, UPDATE, etc.:
```
SHOW GRANTS FOR 'root'@'localhost';
```
|
Try this:
```
SELECT *
FROM mysql.procs_priv A
WHERE A.Db = 'DatabaseName' AND A.User = 'root' AND
A.host = 'localhost' AND A.Routine_name = 'ProcedureName' AND
FIND_IN_SET('Grant', A.proc_priv);
```
|
Suppose you want to create a procedure called MyFirstProc and you want to grant all users the ability to EXECUTE it. You would run the following GRANT statement:
```
GRANT EXECUTE ON PROCEDURE MyFirstProc TO '*'@'localhost';
```
|
How to view only procedure related grants in MySQL?
|
[
"mysql",
"sql",
"select",
"privileges",
"sql-grant"
] |
What I'm trying to do is get the name of the city with biggest number of orders. Seems simple enough, yet when I run this code (I had to use CTEs), I get only the number of orders, no city name.
```
WITH CityOrder AS
(
SELECT City
FROM Person.Address JOIN Sales.SalesOrderHeader
ON ShipToAddressID = AddressID
),
CitiesOrders AS
(
SELECT City, COUNT(City) AS "NoOfOrders"
FROM CityOrder
GROUP BY City
)
SELECT MAX("NoOfOrders") FROM CitiesOrders;
```
I tried to change the last line to
```
SELECT City, MAX("NoOfOrders") FROM CitiesOrders;
```
But then it shows all the cities with their orders. I'm pretty sure I'm overlooking some simple mistake. Any advice?
|
Order your results and take the top 1, but `WITH TIES` in case several cities have the same number of orders:
```
...
SELECT TOP 1 WITH TIES *
FROM CitiesOrders
ORDER BY NoOfOrders DESC
```
|
Try this!
```
select top 1 * from
(
    SELECT City, count(1) as x
    FROM Person.Address JOIN Sales.SalesOrderHeader
        ON ShipToAddressID = AddressID
    group by City
) t
order by x desc
```
|
SQL Select with CTE
|
[
"sql",
"sql-server"
] |
I've got me this nice table.
```
create table Records (
Id uniqueidentifier primary key,
Occasion datetime not null,
Information varchar(999) default ' '
)
```
When I execute the first line below, I get an error. When I execute the second, I see *NULL* in the cell. How should I insert stuff into the table so that the default white space kicks in?
```
insert into Records values (newid(), convert(varchar(21), getdate(), 121))
insert into Records values (newid(), convert(varchar(21), getdate(), 121), null)
```
The error message is the default one, but I'll put it in here anyway, in case someone asks.
> Column name or number of supplied values does not match table definition.
|
By using the `DEFAULT` keyword:
```
INSERT INTO Records VALUES (newid(), CONVERT(varchar(21), getdate(), 121), DEFAULT)
```
Or, better, by specifying the exact columns with the `INSERT` statement
```
INSERT INTO Records (Id, Occasion)
VALUES (newid(), CONVERT(varchar(21), getdate(), 121))
```
Or:
```
INSERT INTO Records (Id, Occasion, Information)
VALUES (newid(), CONVERT(varchar(21), getdate(), 121), DEFAULT)
```
Why is it better?
1. Because you shouldn't rely on the order of columns in your table.
2. Because you shouldn't rely on the fact that the table will never have any new or removed columns (which is when your query will break again).
Also interesting: [SQL Column definition : default value and not null redundant?](https://stackoverflow.com/questions/11862188/sql-column-definition-default-value-and-not-null-redundant)
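The point generalizes beyond SQL Server. A quick sketch in Python with SQLite of letting the default kick in by naming only the columns you supply (table shape borrowed from the question, with TEXT standing in for the SQL Server types):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Records (Id TEXT PRIMARY KEY, Occasion TEXT NOT NULL, "
             "Information TEXT DEFAULT ' ')")

# Omitting Information from the column list lets the declared default apply.
conn.execute("INSERT INTO Records (Id, Occasion) VALUES ('a1', '2015-12-08 20:00')")
value = conn.execute("SELECT Information FROM Records WHERE Id = 'a1'").fetchone()[0]
print(repr(value))  # ' '
```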
|
```
insert into Records (
    Id,
    Occasion
)
values (newid(), convert(varchar(21), getdate(), 121))
```
Try this.
When you don't specify column names in an insert query, SQL Server expects values for all the columns. That is why your first query throws an error: you supplied values for only two columns. Your second query runs correctly.
|
How not to store a value in a column?
|
[
"sql",
"sql-server"
] |
I am not very familiar with SQL, and I need to write a SQL update/delete query on a table. The requirement is as below.
Table A:
```
Col1(PK)  Col2   Col3    Quantity
1         Code1  Value1  5
2         Code2  Value2  2
3         Code1  Value1  3
4         Code3  Value3  8
```
Considering the above table, in which there are multiple rows with the same values for Col2 and Col3, I want to write a query which will delete duplicate combinations of Col2 and Col3 and sum the Quantity for the resulting record.
The result should be like this:
```
Col1(PK)  Col2   Col3    Quantity
1         Code1  Value1  8
2         Code2  Value2  2
4         Code3  Value3  8
```
|
You will need to do this in two parts, and if you want to ensure the integrity of the data, the two parts should be wrapped in a transaction.
The first part updates the required rows' `Quantity`, the second deletes the now duplicated rows.
```
BEGIN TRANSACTION
UPDATE TableA
SET Quantity=upd.Quantity
FROM TableA a
INNER JOIN (
SELECT MIN(Col1) AS Col1, SUM(Quantity) AS Quantity
FROM TableA
GROUP BY Col2, Col3
) upd
ON a.Col1 = upd.col1
;WITH DELETES
AS
(
SELECT Col1,ROW_NUMBER() OVER (PARTITION BY Col2,Col3 ORDER BY Col1) rn
FROM TableA
)
DELETE FROM TableA WHERE Col1 IN (
SELECT Col1 FROM DELETES WHERE rn>1
)
COMMIT TRANSACTION
```
Live example: <http://www.sqlfiddle.com/#!3/9efa9/7>
(EDIT: Updated to fix issue noted in comments)
|
Use this (note the alias must differ from the original `Quantity` column, otherwise the derived table has a duplicate column name):
```
select *
from (select *, sum(Quantity) over (partition by Col2, Col3) as TotalQuantity
      from TableA
     ) t
```
|
Writing SQL Query to query and update same table
|
[
"sql",
"sql-server",
"sql-server-2008"
] |
I am trying to create a report to show me the last date a customer filed a ticket.
Customers can file dozens of tickets. I want to know when the last ticket was filed and show how many days it's been since they have done so.
The fields I have are:
Customer,
Ticket\_id,
Date\_Closed
All from the same table, "Tickets".
I'm thinking I want to do a ranking of tickets by min date? I tried this query to grab something, but it's giving me all the tickets from the customer. (I'm using SQL in a product called Domo.)
```
select * from (select *, rank() over (partition by "Ticket_id"
order by "Date_Closed" desc) as date_order
from tickets ) zd
where date_order = 1
```
|
```
select zd.customer, datediff(day, zd.date_closed, current_date) as days_since_last_tkt
from
    (select *, rank() over (partition by customer order by date_closed desc) as date_order
     from tickets) zd
where zd.date_order = 1
```
Or you can simply do
```
select customer, datediff(day, max(Date_closed), current_date) as days_since_last_tkt
from tickets
group by customer
```
To select other fields
```
select t.*
from tickets t
join (select customer, max(Date_closed) as mxdate,
datediff(day, max(Date_closed), current_date) as days_since_last_tkt
from tickets
group by customer) tt
on t.customer = tt.customer and tt.mxdate = t.date_closed
```
|
This should be simple enough,
```
SELECT customer,
MAX (date_closed) last_date,
ROUND((SYSDATE - MAX (date_closed)),0) days_since_last_ticket_logged
FROM tickets
GROUP BY customer
```
|
Days Since Last Help Ticket was Filed
|
[
"sql",
"amazon-redshift",
"domo"
] |
I'm trying to sort using the month's name. When I have it in numbers everything works, but when I try a select like this
```
SELECT DISTINCT
CONVERT(nvarchar(50), DATENAME(m, date) + ', ' + DATENAME(yyyy,date)) as date
From MyTable
```
In my output I have something like this
```
August, 2015
July, 2015
September, 2015
```
How can I sort by month number, not in alphabetical order?
I tried to add `order by date` but without any result.
And the output format should be exactly as shown.
|
Just embed the query into subquery and select month number along with your string:
```
DECLARE @t TABLE ( [date] DATE )
INSERT INTO @t
VALUES ( '20150701' ),
( '20150701' ),
( '20150801' ),
( '20150901' )
SELECT [date]
FROM ( SELECT DISTINCT
CONVERT(NVARCHAR(50), DATENAME(m, [date]) + ', '
+ DATENAME(yyyy, [date])) AS [date] ,
YEAR([date]) AS y ,
MONTH([date]) AS m
FROM @t
) t
ORDER BY t.y, t.m
```
Output:
```
date
July, 2015
August, 2015
September, 2015
```
|
You should add an order criterion, which must appear in the select list.
Try something like this:
```
SELECT DISTINCT
CONVERT(nvarchar(50), DATENAME(m, date) + ', ' + DATENAME(yyyy,date)) as date,
year(date) * 100 + month(date) as ordercrit
From
MyTable
order by
ordercrit
```
The `ordercrit` should also contain the `year` if you want it to be considered in the ordering, like,
```
January, 2014
February, 2014
January, 2015
May, 2015
January, 2016
```
|
Sort by distinct date in MSSQL
|
[
"sql",
"sql-server",
"datetime"
] |
I have a query, and I want different conditions to be used depending on the value in a column.
```
SELECT *
FROM com140 cm
WHERE cm.csnstar IN (90)
```
But the problem is that I want the following to happen:
when `cm.csnrsncde IN (135,177)`, I want this condition to apply:
```
AND cm.duedate > (SELECT MAX(e.dte1)
FROM events e
WHERE e.ispec ='FI560'
AND e.inrtyp ='RVP'
AND e.trntyp1 = 'RP'
AND e.staevt = 55
AND e.dte2 > DATE_1)
```
but when `cm.csnrsncde IN (91,98,172)`, I want only the condition below to apply, not the previous one:
```
AND cm.duedate> DATE1
```
Please help me.
Thanks.
|
The code that you added should work:
```
SELECT *
FROM com140 cm
WHERE cm.csnstar = 90
AND ((cm.csnrsncde IN ( 135, 177 )
AND cm.duedate > (SELECT Max(e.dte1)
FROM events e
WHERE e.ispec = 'FI560'
AND e.inrtyp = 'RVP'
AND e.trntyp1 = 'RP'
AND e.staevt = 55
AND e.dte2 > DATE_1) )
OR (cm.csnrsncde IN (91,98,172)
AND cm.duedate > DATE1))
```
|
Try this `WHERE` clause:
```
SELECT *
FROM com140 cm
WHERE cm.csnstar = 90
OR ( cm.csnrsncde IN ( 135, 177 )
AND cm.duedate > (SELECT Max(e.dte1)
FROM events e
WHERE e.ispec = 'FI560'
AND e.inrtyp = 'RVP'
AND e.trntyp1 = 'RP'
AND e.staevt = 55
AND e.dte2 > DATE_1) )
OR ( cm.csnrsncde IN ( 91, 98, 172 )
AND cm.duedate > DATE1 )
```
|
using different condition based on column value possible by switch case
|
[
"sql",
"oracle",
"plsql",
"oracle11g",
"oracle10g"
] |
The following is the "printer" table which contains details about printer.
```
user=> SELECT * FROM printer ;
code | model | color | type | price
------+-----------+-------+--------+------------
1002 | ProXPress | y | laser | ₹20,000.00
1003 | M1136 | y | jet | ₹100.00
1004 | Air1000 | y | laser | ₹1,000.00
1001 | ML 2161 | n | matrix | ₹90.00
(4 rows)
user=>
```
Now I want to find the cheapest color printer, so I tried the following.
```
user=> SELECT min(price) from printer where color='y';
min
---------
₹100.00
(1 row)
user=>
```
It gives the expected price as output, but I want the result set to contain code, model and price. So I tried the following.
```
user=> SELECT code,model,min(price) from printer where color='y';
ERROR: column "printer.code" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT code,model,min(price) from printer where color='y';
user=>
```
It shows the error above. So what is meant by the GROUP BY clause, and what is the way to achieve the expected output?
|
*column "printer.code" must appear in the GROUP BY clause or be used in an aggregate* means that columns in the select list that aren't inside an aggregate function (MIN is an aggregate function) must appear in a GROUP BY.
So you would need to add `GROUP BY code, model` to the end, since those columns are in the select list but not in an aggregate function.
This won't give you what you're after though.
Here's one way to get what you're after:
```
SELECT * FROM Printer WHERE color='y'
AND price = (
SELECT min(price)
FROM printer where color='y'
);
```
You could also do this with windowing functions but I don't know if they are supported by postgresql
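For what it's worth, PostgreSQL has had window functions since 8.4. The windowed variant of the same query can be sketched in SQLite (3.25+ shares the syntax), using the printer data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE printer (code INTEGER, model TEXT, color TEXT, price REAL)")
conn.executemany("INSERT INTO printer VALUES (?, ?, ?, ?)",
                 [(1002, 'ProXPress', 'y', 20000), (1003, 'M1136', 'y', 100),
                  (1004, 'Air1000', 'y', 1000), (1001, 'ML 2161', 'n', 90)])

# MIN(price) OVER () attaches the overall minimum to every row,
# so the outer query can keep the row(s) matching it.
rows = conn.execute(
    """SELECT code, model, price
       FROM (SELECT *, MIN(price) OVER () AS min_price
             FROM printer WHERE color = 'y') t
       WHERE price = min_price"""
).fetchall()
print(rows)  # [(1003, 'M1136', 100.0)]
```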
|
You can `INNER JOIN` the `printers` table to a subquery table containing the minimum price whose record(s) you want to display.
```
SELECT p1.code, p1.model, p1.price
FROM printer p1
INNER JOIN
(
SELECT MIN(price) AS minPrice
FROM printer
WHERE color='y'
) p2
ON p1.price = p2.minPrice
WHERE p1.color='y'
```
|
What is the use of Group by clause
|
[
"sql",
"postgresql",
"group-by"
] |
I can't find a solution. I'm trying to fetch from my database the events whose date is 10 days from now.
I tried :
```
SELECT * FROM XXX.Vente WHERE date > (now()+40);
```
and :
```
SELECT * FROM LeVigneau.Vente WHERE date > now()+INTERVAL 10 DAY;
```
But it doesn't work.
Do you have an idea?
Thanks a lot.
|
You have to use backticks around `date`, because `DATE` is a [reserved keyword](https://dev.mysql.com/doc/refman/5.5/en/keywords.html), and use the `DATE_ADD` function as follows:
**Syntax**
```
DATE_ADD(date,INTERVAL expr type)
```
**Query**
```
SELECT * FROM LeVigneau.Vente WHERE `date` > DATE_ADD(now(), INTERVAL 10 DAY);
```
Also, use `>=` or `=` depending on what exactly you need: records only for the 10th day from now, or from 10 days onward.
|
For exactly 10 days:
```
SELECT * FROM LeVigneau.Vente WHERE `date` = DATE_ADD(now(), INTERVAL 10 DAY);
```
All other solutions give 10 days or more, not exactly 10 days.
For 10 days or more:
```
SELECT * FROM LeVigneau.Vente WHERE `date` >= DATE_ADD(now(), INTERVAL 10 DAY);
```
|
SQL - NOW() + 10 days
|
[
"mysql",
"sql",
"database"
] |
I have tables with
T1(user)
```
Id Name CourseIds
1 Joel 1,2,3
2 Jeff 2,3,4
```
T2(courses)
```
CourseId CourseName
1 C#
2 Javascript
3 SQL
4 VB
```
I have to join the two tables and find out, for each course, how many users have learned it (a group by), like this
(result table):
```
Courses Count
C# 1
Javascript 2
SQL 2
VB 1
```
I have tried Stack Overflow answers related to the question, but they didn't help. Please help me.
|
First, you should fix your data structure. Storing lists of ids in a comma delimited list is bad for many reasons:
* Storing numbers as strings is bad.
* Storing multiple values in a single field is bad.
* Not being able to declare foreign key relationships is bad.
* Such lists cannot take advantage of indexes.
* Junction tables are a much better alternative.
Sometimes we are stuck with other people's really bad decisions. If this is the case, you can do what you want, although not efficiently:
```
select c.coursename,
(select count(*)
from user u
where ',' + u.courseids + ',' like '%,' + cast(c.id as varchar(255)) + ',%'
) as cnt
from courses c;
```
But really, rather than putting together arcane string operations, it is better to fix the data structure.
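SQLite's concatenation operator is `||` rather than `+`, but the same delimiter trick is easy to sanity-check from Python with the question's sample tables:

```python
import sqlite3

# The comma-delimiter trick from the answer, ported to SQLite
# ('||' instead of '+' for string concatenation).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, courseids TEXT)")
conn.execute("CREATE TABLE courses (id INTEGER, coursename TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "Joel", "1,2,3"), (2, "Jeff", "2,3,4")])
conn.executemany("INSERT INTO courses VALUES (?, ?)",
                 [(1, "C#"), (2, "Javascript"), (3, "SQL"), (4, "VB")])

rows = conn.execute("""
    SELECT c.coursename,
           (SELECT COUNT(*)
            FROM users u
            WHERE ',' || u.courseids || ',' LIKE '%,' || c.id || ',%') AS cnt
    FROM courses c
    ORDER BY c.id
""").fetchall()
print(rows)
```

Wrapping both sides in commas is what prevents `id = 1` from falsely matching inside `"12,13"`.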
|
```
WITH cte(courseids, si, ei) AS (
    SELECT courseids, 1, CHARINDEX(',', courseids)
    FROM [user]
    UNION ALL
    SELECT courseids, CAST(1 + ei AS INT), CHARINDEX(',', courseids, 1 + ei)
    FROM cte
    WHERE ei > 0
)
SELECT SUBSTRING(courseids, si,
                 CASE WHEN ei > 0 THEN ei - si ELSE LEN(courseids) END) AS courseid
INTO #t
FROM cte;

SELECT COUNT(1) AS [count],
       (SELECT coursename FROM courses WHERE courseid = t.courseid) AS courses
FROM #t t
GROUP BY courseid;

DROP TABLE #t;
```
|
How to group by comma delimited column in mssql?
|
[
"",
"sql",
"sql-server",
""
] |
I am using a cursor on the below temporary result set:
```
SELECT price FROM flight where source1='delhi' and dest='Bangkok';
```
I want to find the minimum ticket price of a flight on the above route.
How can I do this?
|
Try this, using **ORDER BY** and **LIMIT**:
```
SELECT `price` FROM flight
WHERE `source1` = 'delhi'
AND `dest` = 'Bangkok'
ORDER BY `price` ASC
LIMIT 1
```
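The same `ORDER BY` + `LIMIT 1` idea can be demonstrated with SQLite from Python (invented sample fares):

```python
import sqlite3

# ORDER BY price ASC LIMIT 1 returns the single cheapest matching row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flight (source1 TEXT, dest TEXT, price REAL)")
conn.executemany("INSERT INTO flight VALUES (?, ?, ?)",
                 [("delhi", "Bangkok", 320.0),
                  ("delhi", "Bangkok", 275.0),
                  ("delhi", "Singapore", 410.0)])

cheapest = conn.execute("""
    SELECT price FROM flight
    WHERE source1 = 'delhi' AND dest = 'Bangkok'
    ORDER BY price ASC
    LIMIT 1
""").fetchone()[0]
print(cheapest)
```

An aggregate `SELECT MIN(price) ...` would give the same value; `LIMIT 1` is handy when you also want the rest of that row's columns.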
|
Use **ORDER BY** clause to order your desired result.
Try this:
```
SELECT price
FROM flight
WHERE source1='delhi' AND dest='Bangkok'
ORDER BY price ASC;
```
**OR**
If you want only minimum price then Use **LIMIT** also.
```
SELECT price
FROM flight
WHERE source1='delhi' AND dest='Bangkok'
ORDER BY price ASC
LIMIT 1;
```
|
how to find minimum value from result set using cursor in mysql?
|
[
"",
"mysql",
"sql",
"select",
"cursor",
"sql-order-by",
""
] |
I am having a bit of trouble dynamically creating a table and inserting into it. I have tried this multiple ways and cannot produce a desirable outcome. See below for a simplified version of what I am trying to do. Note that this is only one attempt - I have tried others as well (such as `INSERT INTO [table] EXEC(@exe)`) with no success either.
Every time I attempt below, I get a message
> Command(s) completed successfully
but no table has even been created.
I am using SQL Server 2008 R2.
```
DECLARE @sqlText nvarchar
SET @sqlText =
N'
IF OBJECT_ID(''[BudgetProcedures].dbo.UnitOccupancy'', ''U'') IS NOT NULL
DROP TABLE [BudgetProcedures].dbo.UnitOccupancy;
CREATE TABLE [BudgetProcedures].dbo.UnitOccupancy (Property varchar(15)
,Unit varchar(15)
,YearLength varchar(15)
,Lease varchar(15));
INSERT INTO [BudgetProcedures].[dbo].[UnitOccupancy] (Property, Unit, YearLength, Lease)
(SELECT ''ExProp''
,''ExUnit''
,''ExYrlen''
,''ExLease''
)
'
EXEC(@sqlText)
```
|
You need to declare as:
```
DECLARE @sqlText nvarchar(1000)
```
Otherwise it has length 1 by default. And since you have a new line at the beginning of the statement it doesn't throw a syntax exception (your query contains only the new-line symbol, which is legal). Delete the new line and you will get an error.
<https://msdn.microsoft.com/en-us/library/ms176089.aspx>
> When n is not specified in a data definition or variable declaration
> statement, the default length is 1.
You can use:
```
print @sqlText
```
to see what your statement looks like.
|
I was able to successfully test your script when I changed this:
```
DECLARE @sqlText nvarchar
```
To this:
```
DECLARE @sqlText nvarchar(max)
```
|
TSQL - Dynamic Create Table and Insert
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a set of data I want to report on (in Oracle 12c). I want to show the sum of certain groupings, based on the value of the 'ITEM' column. If ITEM = ('A', 'B' or 'C'), then they get "grouped" together and the values for columns Cost1 and Cost2 are summed up. This is called "Group1". Group2 contains ITEMs ('D, E, F'). There are only 6 groupings, made up of around 13 static/fixed items. I know exactly what the items are in each grouping.
Here is a set of sample data, with borders showing what should be grouped together. The 2nd listing shows what the output should look like. I think I want to sum within a CASE statement, but I just can't seem to wrap my head around it...
[](https://i.stack.imgur.com/F7CXK.jpg)
|
The `CASE` statement is good for this type of one-off value transformation:
```
SELECT
Year,
CASE
WHEN Item IN ('A', 'B', 'C') THEN 'Group1'
WHEN Item IN ('D', 'E', 'F') THEN 'Group2'
ELSE 'Others' END AS Item,
SUM(Cost1),
SUM(Cost2)
FROM myTable
GROUP BY
Year,
CASE
WHEN Item IN ('A', 'B', 'C') THEN 'Group1'
WHEN Item IN ('D', 'E', 'F') THEN 'Group2'
ELSE 'Others' END;
```
Note that the `GROUP BY` expressions must be stated *exactly* as they are in the column selection list, except you have to drop the alias (`AS Item`).
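A runnable sketch of this CASE-in-GROUP-BY pattern, using SQLite from Python (sample rows are invented; SQLite lacks Oracle's NVL, so plain SUM is used):

```python
import sqlite3

# Runnable sketch of grouping via a repeated CASE expression
# (invented sample rows; SQLite lacks NVL, so plain SUM is used).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (year INTEGER, item TEXT, cost1 REAL, cost2 REAL)")
conn.executemany("INSERT INTO myTable VALUES (?, ?, ?, ?)",
                 [(2015, "A", 10, 1), (2015, "B", 20, 2), (2015, "D", 5, 3),
                  (2015, "E", 5, 4), (2015, "Z", 1, 1)])

rows = conn.execute("""
    SELECT year,
           CASE WHEN item IN ('A', 'B', 'C') THEN 'Group1'
                WHEN item IN ('D', 'E', 'F') THEN 'Group2'
                ELSE 'Others' END AS grp,
           SUM(cost1), SUM(cost2)
    FROM myTable
    GROUP BY year,
             CASE WHEN item IN ('A', 'B', 'C') THEN 'Group1'
                  WHEN item IN ('D', 'E', 'F') THEN 'Group2'
                  ELSE 'Others' END
    ORDER BY grp
""").fetchall()
print(rows)
```

Items A and B collapse into Group1, D and E into Group2, and anything else falls into Others.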
|
Looks like you want a `GROUP BY` query:
```
select Year,
       case
         when item = 'A' or item = 'B' or item = 'C' then
          'Group1'
         else
          'Group2'
       end as Item,
       Sum(nvl(Cost1, 0)) as Cost1, -- note NVL to prevent nulls from summing
       Sum(nvl(Cost2, 0)) as Cost2
  from MyTable
 group by Year,
       case
         when item = 'A' or item = 'B' or item = 'C' then
          'Group1'
         else
          'Group2'
       end
```
|
Oracle - How to sum related items?
|
[
"",
"sql",
"oracle",
""
] |
I am having following data in a table
```
**TEAM NAME**
Germany
Holland
Scotland
Brazil
```
I am expecting the data to be like below, in the same order. Please help.
```
**ScheduledMatches**
Germany VS Holland
Germany VS Scotland
Germany VS Brazil
Holland VS Scotland
Holland VS Brazil
Scotland VS Brazil
```
Thanks
Vijay Sagar
|
```
DECLARE @t TABLE (Team VARCHAR(100))
INSERT INTO @t (Team)
VALUES
('Germany'),
('Holland'),
('Scotland'),
('Brazil')
;WITH cte AS
(
SELECT *, RowNum = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM @t
)
SELECT t2.Team + ' vs ' + t1.Team
FROM cte t1
JOIN cte t2 ON t1.RowNum > t2.RowNum
```
output -
```
---------------------------
Germany vs Holland
Germany vs Scotland
Germany vs Brazil
Holland vs Scotland
Holland vs Brazil
Scotland vs Brazil
```
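The pairing logic is easy to verify with SQLite from Python; SQLite's implicit `rowid` stands in for the `ROW_NUMBER()` trick here (teams as in the question):

```python
import sqlite3

# Each unordered pair appears exactly once because the self-join
# condition t1.rowid > t2.rowid keeps only one orientation per pair.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE teams (name TEXT)")
conn.executemany("INSERT INTO teams VALUES (?)",
                 [("Germany",), ("Holland",), ("Scotland",), ("Brazil",)])

matches = [r[0] for r in conn.execute("""
    SELECT t2.name || ' vs ' || t1.name
    FROM teams t1
    JOIN teams t2 ON t1.rowid > t2.rowid
    ORDER BY t2.rowid, t1.rowid
""")]
print(matches)
```

Four teams yield 4 × 3 / 2 = 6 fixtures, with no team playing itself and no reversed duplicates.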
|
You can first calculate `ROW_NUMBER` for each team and use self join with `<` based on calculated `rn`:
```
CREATE TABLE #teams(name VARCHAR(100));
INSERT INTO #teams(name)
VALUES('Germany'), ('Holland'), ('Scotland'), ('Brazil');
;WITH cte AS
(
SELECT *, rn = ROW_NUMBER() OVER (ORDER BY (SELECT 1))
FROM #teams
)
SELECT CONCAT(c1.name, ' vs ', c2.name) AS result
FROM cte c1
JOIN cte c2
ON c1.rn < c2.rn
-- ORDER BY result;
```
`LiveDemo`
`CONCAT` is available from `SQL Server 2012+` if you have lower version use `+` to concatenate string.
|
With a single column I need to get the below mentioned output
|
[
"",
"sql",
"sql-server",
""
] |
I want to get ID value from mySQL table.
For example :Get id from Second row or Third row. How can I do this?
```
SELECT Id From myTable --------- ;
```
What is the second row syntax?
|
If you want to fetch the second row then use `LIMIT 1, 1` (the first number is the zero-based offset), and if you want the 3rd row then use `LIMIT 2, 1`.
Try this:
```
SELECT Id
FROM myTable
ORDER BY Id
LIMIT 1, 1;
```
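The offset is zero-based, which is easy to verify with SQLite from Python (SQLite accepts the same MySQL-style `LIMIT offset, count` shorthand):

```python
import sqlite3

# LIMIT's offset is zero-based: offset 1 returns the second row,
# offset 2 the third.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?)", [(10,), (20,), (30,)])

second = conn.execute(
    "SELECT id FROM myTable ORDER BY id LIMIT 1 OFFSET 1").fetchone()[0]
third = conn.execute(
    "SELECT id FROM myTable ORDER BY id LIMIT 2, 1").fetchone()[0]
print(second, third)  # 20 30
```

Always pair `LIMIT` with an `ORDER BY`; without one, "second row" is not well defined.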
|
2 ways to do that...
Return 1 record, starting from the second record (the offset is zero-based):
```
select id from table_name limit 1, 1
```
Fetch 10 records, leave first 15.
```
SELECT * FROM Orders LIMIT 10 OFFSET 15
```
|
How can I get ID from MYSQL
|
[
"",
"mysql",
"sql",
"select",
"row",
""
] |
Let's say I have a table like this
```
id col1 col2 col3 col4 col5 col6
1 1 2 3 4 5 6
2 2 1 4 3 6 5
3 1 1 2 3 4 5
```
I would want to select the rows where every field has a different value. The out put would be
```
id col1 col2 col3 col4 col5 col6
1 1 2 3 4 5 6
2 2 1 4 3 6 5
```
I know you can do this
```
SELECT * FROM table
WHERE col1 <> col2 AND col2 <> col3...
```
but that would take forever with this many columns. Is there a specific function for determining if all columns are unique?
|
You can unpivot your table using `UNION ALL`:
```
SELECT id
FROM (
SELECT id, col1 AS col
FROM mytable
UNION ALL
SELECT id, col2
FROM mytable
UNION ALL
SELECT id, col3
FROM mytable
UNION ALL
SELECT id, col4
FROM mytable
UNION ALL
SELECT id, col5
FROM mytable
UNION ALL
SELECT id, col6
FROM mytable) AS t
GROUP BY id
HAVING COUNT(DISTINCT col) = 6
```
If you want all columns selected then you do something like:
```
SELECT *
FROM mytable
WHERE id IN ( .... above query here ...)
```
Unfortunately MySQL does not have an `UNPIVOT` operator that would make the above query less verbose.
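The unpivot-and-count approach is straightforward to check against the question's sample data, here with SQLite from Python:

```python
import sqlite3

# The question's sample rows: ids 1 and 2 have six distinct values,
# id 3 repeats a value and must be filtered out.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mytable
    (id INTEGER, col1 INTEGER, col2 INTEGER, col3 INTEGER,
     col4 INTEGER, col5 INTEGER, col6 INTEGER)""")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?, ?, ?, ?)",
                 [(1, 1, 2, 3, 4, 5, 6),
                  (2, 2, 1, 4, 3, 6, 5),
                  (3, 1, 1, 2, 3, 4, 5)])

ids = [r[0] for r in conn.execute("""
    SELECT id FROM (
        SELECT id, col1 AS col FROM mytable
        UNION ALL SELECT id, col2 FROM mytable
        UNION ALL SELECT id, col3 FROM mytable
        UNION ALL SELECT id, col4 FROM mytable
        UNION ALL SELECT id, col5 FROM mytable
        UNION ALL SELECT id, col6 FROM mytable) t
    GROUP BY id
    HAVING COUNT(DISTINCT col) = 6
    ORDER BY id
""")]
print(ids)
```

Row 3 drops out because its six unpivoted values contain a duplicate, so `COUNT(DISTINCT col)` is only 5.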
|
A little less verbose than your solution, although it doesn't scale perfectly if the number of columns is excessively large:
```
SELECT
* -- In actual live code you would of course never use '*'
FROM
MyTable
WHERE
col1 NOT IN (col2, col3, col4, col5, col6) AND
col2 NOT IN (col3, col4, col5, col6) AND
col3 NOT IN (col4, col5, col6) AND
col4 NOT IN (col5, col6) AND
col5 NOT IN (col6) -- Done as an IN just to be consistent
```
|
mySQL - Selecting all records where all columns are unique
|
[
"",
"mysql",
"sql",
""
] |
I need to create a MySQL table with a default value of CURRENT\_DATE() on a column.
I tried:
```
DROP TABLE IF EXISTS `visitors`;
CREATE TABLE `visitors` (
`id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`ip` VARCHAR(32) NOT NULL,
`browser` VARCHAR(500) NOT NULL,
`version` VARCHAR(500) NOT NULL,
`platform` ENUM('w','l','m') NOT NULL,
`date` TIMESTAMP NOT NULL DEFAULT CURRENT_DATE(),
PRIMARY KEY (`id`),
UNIQUE KEY `person` (`ip`,`date`)
) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
```
but it raises an error:
```
Query: CREATE TABLE `visitors` ( `id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT, `ip` VARCHAR(32) NOT NULL, `browser` VARCHAR(500) NO...
Error Code: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CURRENT_DATE() NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `person` (`ip`,`date`)
) ENGINE=INNODB AUTO_INCREMENT=1' at line 7
Execution Time : 0 sec
Transfer Time : 0 sec
Total Time : 0.063 sec
```
What is the problem?
I need uniqueness only on the date,
not with the full time info.
Can you help me?
|
Use **CURRENT\_TIMESTAMP** function instead of **CURRENT\_DATE()** function
Try this:
```
DROP TABLE IF EXISTS `visitors`;
CREATE TABLE `visitors` (
`id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`ip` VARCHAR(32) NOT NULL,
`browser` VARCHAR(500) NOT NULL,
`version` VARCHAR(500) NOT NULL,
`platform` ENUM('w','l','m') NOT NULL,
`date` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `person` (`ip`,`date`)
) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
```
|
You just have to replace CURRENT\_DATE() with NOW() in your query.
I tried it and it looks OK.
|
How to create mysql table with column timestamp default current_date?
|
[
"",
"mysql",
"sql",
"datetime",
"timestamp",
"create-table",
""
] |
I am using a cursor for some operations on the same table that I used in the SELECT statement of the cursor, as below:
```
CREATE TABLE #nsn (ID INT IDENTITY(1,1)
,Val1 Varchar(MAx)
,Val2 Varchar(MAx)
)
INSERT INTO #nsn(Val1,VAl2) select 'A1','A2'
INSERT INTO #nsn(Val1,VAl2) select 'B1','B2'
INSERT INTO #nsn(Val1,VAl2) select 'C1','C2'
INSERT INTO #nsn(Val1,VAl2) select 'D1','D2'
INSERT INTO #nsn(Val1,VAl2) select 'E1','F2'
SELECT * From #nsn
ID Val1 VAl2
1| A1 | A2
2| B1 | B2
3| C1 | C2
4| D1 | D2
5| E1 | F2
```
use cursor to update Val2 column,
```
DECLARE @ID INT
,@Val1 Varchar(MAx)
,@Val2 Varchar(MAx)
DECLARE cursor_Latest CURSOR
FOR SELECT ID,Val1,Val2
FROM #nsn
ORder by 1 asc
OPEN cursor_Latest
FETCH NEXT FROM cursor_Latest INTO @ID , @Val1 ,@Val2
WHILE (@@FETCH_STATUS <> -1)
BEGIN
UPDATE #nsn
SET Val2 = Val2 +'-'+ @Val1 +'-'+@Val2
WHERE ID = @ID+1
FETCH NEXT FROM cursor_Latest INTO @ID , @Val1 ,@Val2
END
close cursor_Latest
DEALLOCATE cursor_Latest
```
and its result is
```
SELECT * From #nsn
ID Val1 VAl2
1| A1 | A2
2| B1 | B2-A1-A2
3| C1 | C2-B1-B2
4| D1 | D2-C1-C2
5| E1 | F2-D1-D2
```
But I am expecting the result below:
```
ID Val1 VAl2
1| A1 | A2
2| B1 | B2-A1-A2
3| C1 | C2-B1-B2-A1-A2
4| D1 | D2-C1-C2-B1-B2-A1-A2
5| E1 | F2-D1-D2-C1-C2-B1-B2-A1-A2
```
So is there anything wrong in this code? Or does the cursor keep a snapshot of the records, since it is not getting the latest updated values of the rows?
Thanks in advance for the help.
|
To achieve your expected result you should try something like below,
```
DECLARE @ID INT
,@Val1 VARCHAR(MAX)
,@Val2 VARCHAR(MAX)
DECLARE cursor_Latest CURSOR
FOR
SELECT ID,Val1,Val2 FROM #nsn
ORDER BY 1 ASC
OPEN cursor_Latest
FETCH NEXT FROM cursor_Latest INTO @ID , @Val1 ,@Val2
WHILE (@@FETCH_STATUS<>-1)
BEGIN
IF @ID=1
BEGIN
UPDATE #nsn
SET Val2 = Val2+'-'+@Val1+'-'+@Val2
WHERE ID = @ID+1
END
ELSE
BEGIN
UPDATE #nsn
SET Val2 = Val2+'-'+@Val1+'-'+@Val2+'-'+( SELECT Val1+'-'+Val2 FROM #nsn WHERE ID = @ID-1)
WHERE ID = @ID+1
END
FETCH NEXT FROM cursor_Latest INTO @ID , @Val1 ,@Val2
END
CLOSE cursor_Latest
DEALLOCATE cursor_Latest
SELECT * FROM #nsn
DROP TABLE #nsn
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!3/004b4/1)
|
```
DECLARE @ID INT, @Val1 VARCHAR(MAX), @Val2 VARCHAR(MAX), @v1 VARCHAR(MAX),
        @v2 VARCHAR(MAX), @t VARCHAR(MAX), @val VARCHAR(MAX)
SET @ID = 1
WHILE @ID <= (SELECT COUNT(*) FROM #nsn)
BEGIN
    SELECT @v1 = val1, @v2 = val2 FROM #nsn WHERE ID = @ID
    SELECT @Val1 = val1, @Val2 = val2 FROM #nsn WHERE ID = @ID + 1
    UPDATE #nsn
    SET val2 = CASE WHEN @val IS NOT NULL
                    THEN @Val2 + '-' + @v1 + '-' + @val
                    ELSE @Val2 + '-' + @v1 + '-' + @v2 END
    WHERE ID = @ID + 1
    SELECT @val = @Val2 + '-' + @v1 + '-' + @v2
    SET @ID = @ID + 1
END
SELECT * FROM #nsn
```
|
How to get Latest value of row in Cursor result set in SQL
|
[
"",
"sql",
"sql-server",
"cursor",
""
] |
When I executed the query below:
```
DECLARE @I INT=1
while(@I<13)
BEGIN
PRINT DATENAME(MONTH,@I)
PRINT @I
SET @I=@I+1
END
```
I got output like:
```
January
1
January
2
January
3
January
4
January
5
January
6
January
7
January
8
January
9
January
10
January
11
January
12
```
Why aren't all the month names printed in the above result? Can anyone explain the reason?
|
Because when you use `DATENAME(MONTH,@I)`
the values 1,2,3,4... are implicitly converted to `DATETIME`:
```
SELECT CAST(1 AS DATETIME)
-- 1900-01-02 00:00:00
1 - Jan 2 1900 12:00AM
2 - Jan 3 1900 12:00AM
3 - Jan 4 1900 12:00AM
...
12 - Jan 13 1900 12:00AM
```
and `DATENAME` for month part is always `January`
You can use `DATEADD` to get all months:
```
DECLARE @I INT=1
while(@I<13)
BEGIN
PRINT DATENAME(MONTH, DATEADD(MONTH, @i-1, '2000-01-01'))
PRINT @I
SET @I=@I+1
END
```
`LiveDemo`
|
For [DATENAME](https://msdn.microsoft.com/de-de/library/ms174395(v=sql.120).aspx) to work the second parameter needs to be a date and not a number. Using an `int` will automatically convert this number to a day starting from January 1st in 1900 (`@I = 0`):
```
SELECT CONVERT(DATETIME, 0) -- returns 1900-01-01 00:00:00.000
SELECT CONVERT(DATETIME, 1) -- returns 1900-01-02 00:00:00.000
```
Therefore, the month will always be January for values from 0 to 30 for `@I` (and a lot of other integer values, of course).
|
Why all months are not printing in it?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I am in the process of producing a report in SSRS and trying to add a check that says: when this parameter equals this value, display these values. But the parameter is an integer, and I can't pass multiple integer values into the one parameter.
How can I do this?
Here is an example of what I am trying to do:
```
DECLARE @EntityGroupID INT
SET @EntityGroupID = 741
IF @EntityGroupID = 741
BEGIN
SET @EntityGroupID= 3097,3098,3099,3100,3101,3125
END
SELECT *
FROM tEntityGroup
WHERE ID in (@EntityGroupID)
```
|
Why not just use two `SELECT` statements. If you already have your control flow, just use it to do the `SELECT`, e.g.
```
DECLARE @EntityGroupID INT
SET @EntityGroupID = 741
IF @EntityGroupID = 741
BEGIN
SELECT *
FROM tEntityGroup
WHERE ID IN (3097,3098,3099,3100,3101,3125);
END
ELSE
BEGIN
SELECT *
FROM tEntityGroup
WHERE ID = @EntityGroupID;
END
```
Or if you really want to do it with a single select, then use a table variable:
```
DECLARE @EntityGroupID INT
SET @EntityGroupID = 741
DECLARE @Entities TABLE (ID INT NOT NULL);
INSERT @Entities (ID)
SELECT @EntityGroupID
WHERE @EntityGroupID != 741
UNION ALL
SELECT ID
FROM (VALUES (3097),(3098),(3099),(3100),(3101),(3125)) t (ID)
WHERE @EntityGroupID = 741;
SELECT *
FROM tEntityGroup
WHERE ID in (SELECT ID FROM @Entities);
```
|
Pass it as a table variable, like:
```
declare @tbl table(EntityGroupID int);
insert into @tbl
select 3097
union
select 3098
union
select 3099
```
Then you can just say
```
SELECT *
FROM tEntityGroup
WHERE ID in (select EntityGroupID from @tbl);
```
|
How to pass multiple integer values in one parameter in SQL Server
|
[
"",
"sql",
"t-sql",
"sql-server-2014",
""
] |
I'm trying to grab some data out of a table. The variable is VARCHAR 30000, and when I use COBOL EXEC SQL to retrieve it, the string is returned but without the necessary (hex) Line Feeds and Carriage Returns. I inspect the string one character at a time looking for the hex values of 0A or 0D but they never come up.
The LF and CR seem to be lost as soon as I fill the string into my COBOL variable.
Ideas?
|
I have seen situations where CR and LF were in data in the database. These are valid characters so it's possible for them to be stored there.
Have you tried to confirm that there really are CR and LF characters in the database using some other tool or method? My Z-Series experience is quite limited, so I'm unable to suggest options. However, there must be some equivalent of SSMS and SQL Server on the Z-Series to query the DB2 database.
Check out this SO link on querying DB2 and cleaning up CR and LF characters.
[DB2/iSeries SQL clean up CR/LF, tabs etc](https://stackoverflow.com/questions/7573321/db2-iseries-sql-clean-up-cr-lf-tabs-etc)
|
If the data is stored / **converted** to ebcdic when **retrieved** on the mainframe, you should get the EBCDIC New-Line characters x'15' decimal=21 rather than 0A or 0D.
It is only if you are retrieving the data in ASCII / UTF-8 that you would get 0A or 0D.
---
Most java editors can edit EBCDIC (with the EBCDIC New-Line Character x'15') just as easily as ASCII (with \n), not sure about Eclipse though.
|
DB2 to COBOL String Losing Line Feed and Carriage Returns
|
[
"",
"sql",
"db2",
"cobol",
""
] |
I have following tables
```
tbl1 (empID, Name, ctNo)
tbl2 (salID, empID, salry)
tbl3 (histryID, salDate, empID, salID)
```
Expected Output: `(empID,Name,salary,Salary_Date)`
For Salary date, I only want the last salary\_date to be displayed.
(Maybe by using Partition By or self-join methods.)
Thank You.
|
Below query can answer your question
```
Create table #tbl1 (empID int , Name varchar(11), ctNo int)
Create table #tbl2 (salID int, empID int, salry int)
Create table #tbl3 (histryID int, salDate DATE, empID int, salID int)
INSERT INTO #tbl1
values(1,'Dinesh',23),(2,'Raj',11)
INSERT INTO #tbl2
values(1,1,1000),(2,1,2000),(3,2,100),(4,2,500)
INSERT INTO #tbl3
values(1,'20020101',1,1),(2,'20020201',1,2),(3,'20020101',2,3),(4,'20020201',2,4)
SELECT M1.* FROM(
SELECT T1.empID,T1.Name,T2.salry,T3.salDate FROM
#tbl1 AS T1
INNER join #tbl2 AS T2
ON T1.empID=T2.empID
INNER join #tbl3 AS T3
ON T1.empID=T3.empID AND T2.salID =T3.salID
) AS M1
INNER JOIN
(
SELECT empID,MAX(salDate) AS SALDATE FROM #tbl3 GROUP BY empID) AS M2
ON M1.SALDATE=M2.salDate AND M1.empID=M2.empID
```
|
You could try something like this:
```
select t1.empID, t1.Name, t2.salry as salary, max(t3.salDate) as Salary_Date
from tbl1 t1
join tbl2 t2 on t1.empID = t2.empID
join tbl3 t3 on t1.empID = t3.empID and t2.salID = t3.salID
group by t1.empID, t1.Name, t2.salry
```
Note: Pointing in correct direction. Not tested.
|
Join 3 tables based on Partition by or Self join
|
[
"",
"sql",
"sql-server-2008",
""
] |
For example, if a column has a value like `200806` and I want to know whether this variable is a date variable, a numeric variable, or a character variable, where can I see that? Thanks!
|
If you have SSMS (SQL Server Management Studio) then you can expand Database -> Tables -> Columns and find the type of that column.
Else, use `sp_help` or `sp_helptext` along with your table name, like below:
```
sp_help your_table_name
```
|
You can pull the column information for your table by querying the `INFORMATION_SCHEMA.COLUMNS` table:
```
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_CATALOG = 'YourDatabase'
AND TABLE_SCHEMA = 'YourSchema'
AND TABLE_NAME = 'YourTable'
AND COLUMN_NAME = 'YourColumn'
```
|
How can I view the property of a variable (column) in SQL Server
|
[
"",
"sql",
"sql-server",
"types",
""
] |
I have the query below that counts the amount of occurrences for each group
```
SELECT ss.year, COUNT(*)
FROM stackexchange_question seq
INNER JOIN secondary_study ss ON seq.secondary_study_code = ss.code
WHERE (seq.evaluation LIKE 'PARTIALLY' OR seq.evaluation LIKE 'TOTALLY')
GROUP BY ss.year
ORDER BY ss.year;
```
The other query below counts part of each group
```
SELECT ss.year, COUNT(*) FROM stackexchange_question seq
INNER JOIN secondary_study ss ON seq.secondary_study_code = ss.code
WHERE (seq.evaluation LIKE 'PARTIALLY' OR seq.evaluation LIKE 'TOTALLY') AND (seq.percentile = 90)
GROUP BY ss.year
ORDER BY ss.year;
```
The first query returns:
```
2008|36
2009|21
```
The second returns:
```
2008|17
2009|11
```
How can I divide each count from each group from the second query by the counts returned by the first query in order to obtain the percentage?
|
You can use the `where` clause from both the queries as a `case` expression. Then count the occurrences and divide one by the other.
```
SELECT ss.year,
1.0 *
sum(case when (seq.evaluation LIKE 'PARTIALLY' OR seq.evaluation LIKE 'TOTALLY')
AND seq.percentile = 90 then 1 else 0 end)
/ sum(case when seq.evaluation LIKE 'PARTIALLY' OR seq.evaluation LIKE 'TOTALLY' then 1 else 0 end)
as pct
FROM stackexchange_question seq
INNER JOIN secondary_study ss ON seq.secondary_study_code = ss.code
GROUP BY ss.year
ORDER BY ss.year;
```
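A simplified, runnable check of the ratio trick, collapsed to a single table since the join is incidental to the percentage calculation (the row counts mirror the question's figures):

```python
import sqlite3

# 2008: 17 of 36 qualifying rows have percentile 90; 2009: 11 of 21.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE q (year INTEGER, evaluation TEXT, percentile INTEGER)")
rows = ([(2008, "PARTIALLY", 90)] * 17 + [(2008, "TOTALLY", 50)] * 19 +
        [(2009, "TOTALLY", 90)] * 11 + [(2009, "PARTIALLY", 50)] * 10)
conn.executemany("INSERT INTO q VALUES (?, ?, ?)", rows)

pct = conn.execute("""
    SELECT year,
           1.0 * SUM(CASE WHEN percentile = 90 THEN 1 ELSE 0 END) / COUNT(*) AS ratio
    FROM q
    WHERE evaluation IN ('PARTIALLY', 'TOTALLY')
    GROUP BY year
    ORDER BY year
""").fetchall()
print(pct)
```

The `1.0 *` factor forces floating-point division; without it, the integer counts would divide to 0.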
|
Use conditional aggregation. Here is a simple way:
```
SELECT ss.year, COUNT(*),
AVG(CASE WHEN (seq.percentile = 90) THEN 1.0 ELSE 0.0 END) as ProportionAT90
FROM stackexchange_question seq INNER JOIN
secondary_study ss
ON seq.secondary_study_code = ss.code
WHERE seq.evaluation IN ('PARTIALLY', 'TOTALLY')
GROUP BY ss.year
ORDER BY ss.year;
```
|
How to calculate percentage of each group on a select query
|
[
"",
"sql",
"percentage",
""
] |
I am not sure how to word this, so I wasn't sure what to google before posting this.
This will be done with SQLite as well.
Basically what I am trying to do is like this:
`SELECT name FROM users WHERE Language = 'en'`
If `Language = 'en'` returns no rows, change `'en'` to `'fr'`.
I know I can achieve this by running the first query, checking the result, and then trying the second one, but I thought there was probably a way to do this in a single query, as this would be a lot better in my situation.
|
Please forgive me if any of this syntax isn't available in SQLite, but this should work if it is:
```
SELECT
name
FROM
users
WHERE
language = 'en' OR
(
language = 'fr' AND
NOT EXISTS (SELECT * FROM users WHERE language = 'en')
)
```
|
If you are only looking for one row, then use prioritzation:
```
select name
from users
where language in ('en', 'fr')
order by (case when language = 'en' then 1 else 2 end)
fetch first 1 row only;
```
If you are looking for multiple rows, then an `exists` clause is the way to go.
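The prioritization idea works in SQLite too, with `LIMIT 1` in place of `FETCH FIRST 1 ROW ONLY`; here is a small check from Python (invented rows):

```python
import sqlite3

# Prioritized fallback: prefer 'en' rows, fall back to 'fr'.
QUERY = """
    SELECT name, language FROM users
    WHERE language IN ('en', 'fr')
    ORDER BY CASE WHEN language = 'en' THEN 1 ELSE 2 END
    LIMIT 1
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, language TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Alice", "fr"), ("Bob", "de")])

# No 'en' rows exist yet, so the query falls back to the 'fr' row.
fallback = conn.execute(QUERY).fetchone()

# Once an 'en' row exists, it wins the prioritization.
conn.execute("INSERT INTO users VALUES ('Carol', 'en')")
preferred = conn.execute(QUERY).fetchone()
print(fallback, preferred)
```

A single query covers both cases: the `CASE` expression in `ORDER BY` does the prioritizing, and `LIMIT 1` picks the winner.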
|
SQL When where x = y returns no rows, use x = z
|
[
"",
"sql",
"sqlite",
""
] |
I have two tables with the same columns, but the second table has one column more than the first table (`InfoDate`).
**Table 1**
```
KID ParentID KLevel
----------------------------------------
1 0 1
2 1 2
3 1 2
```
**Table 2**
```
KID ParentID KLevel InfoDate
------------------------------------------------------------
2 1 2 2015-12-31 00:00:00.000
```
I want to insert all records from the first table into the second table that do not exist in the second table, with the condition `InfoDate = '2015-12-31 00:00:00.000'`.
**For example**, in this sample example records with `KID=1` and `KID=3` must be inserted into table 2.
I do:
```
insert into table2 (KID , ParentID , KLevel)
select KID , ParentID , KLevel from table1
where not EXISTS(
SELECT KID , ParentID , KLevel FROM table2 WHERE InfoDate = '2015-12-31 00:00:00.000'
);
```
but `(0 row(s) affected)`
could you help me, please?
|
Your `WHERE NOT EXISTS...` condition does not really make sense. Right now you're only checking whether there is any row at all with `InfoDate = '2015-12-31 00:00:00.000'`. Since there obviously is such a row, no records are affected.
Try this:
```
INSERT INTO [table2] (
[KID]
, [ParentID]
, [KLevel])
SELECT
[KID]
, [ParentID]
, [KLevel]
FROM [table1]
WHERE NOT EXISTS (
SELECT
[KID]
, [ParentID]
, [KLevel]
FROM [table2]
WHERE [InfoDate] = '2015-12-31 00:00:00.000'
AND [table1].[KID] = [table2].[KID]
AND [table1].[ParentID] = [table2].[ParentID]
AND [table1].[KLevel] = [table2].[KLevel])
```
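A runnable version of the corrected correlated `NOT EXISTS`, using SQLite from Python (correlating only on `KID` for brevity, since it identifies a row in this sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (kid INTEGER, parentid INTEGER, klevel INTEGER)")
conn.execute("CREATE TABLE table2 (kid INTEGER, parentid INTEGER, "
             "klevel INTEGER, infodate TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                 [(1, 0, 1), (2, 1, 2), (3, 1, 2)])
conn.execute("INSERT INTO table2 VALUES (2, 1, 2, '2015-12-31 00:00:00.000')")

# Only rows missing from table2 (per the correlated subquery) are copied.
conn.execute("""
    INSERT INTO table2 (kid, parentid, klevel)
    SELECT kid, parentid, klevel FROM table1
    WHERE NOT EXISTS (
        SELECT 1 FROM table2
        WHERE table2.infodate = '2015-12-31 00:00:00.000'
          AND table2.kid = table1.kid)
""")
kids = [r[0] for r in conn.execute("SELECT kid FROM table2 ORDER BY kid")]
print(kids)  # KID 1 and KID 3 were inserted; KID 2 already existed
```

The correlation (`table2.kid = table1.kid`) is what makes the subquery a per-row check instead of a single table-wide one.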
|
Use any key value to get the non-existing records:
```
insert into table2 (KID , ParentID , KLevel)
select KID , ParentID , KLevel from table1
where KID NOT IN (
SELECT KID FROM table2 WHERE InfoDate = '2015-12-31 00:00:00.000'
);
```
|
SQL: Insert from first table into second table when records not exist in second table
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have to build a database as follows:
We have one table, called Organization. An Organization can have one or more products, which they sell. Then, I have a table called Products, which lists the Product title, cost, and amount sold.
How can I create a relationship where an Organization's product can contain a reference to multiple rows from the Products table?
I would imagine I would need to potentially create another table?
Thanks!
|
Assuming you have something like this for your Organization and Product tables.
```
Organization
Id (Primary Key)
Name
....
Product
Id (Primary Key)
Name
Cost
....
```
Then you would create a join table like this that would link your Organization to Products.
```
OrganizationProduct
Id (Primary Key)
OrganizationId (Foreign Key to Organization Table)
ProductId (Foreign Key to Product Table)
```
|
What you want to do here is have a table representing your products and in that table one of the fields should represent the Organisation which has the product, called something like org\_id.
Then in your Organisation table you will have an id column to join with org\_id from the product table.
Eg:
```
Products:
id int
org_id int
name varchar
colour varchar
.....
any other information about products
Organisations:
id int
name varchar
type varchar
...
other organisation details
```
Now when you want to list all the products for each company you do:
```
SELECT products.name, organisations.name
FROM products
JOIN organisations
ON products.org_id = organisations.id;
```
New information: products can belong to multiple organisations. Now you need a table called OrganisationProducts to act as an intermediary between the two tables and create a many-to-many relationship:
```
Products:
id int
name varchar
Organisations:
id int
name varchar
OrganisationProducts:
id int
Org_id
Prod_id
```
You join prod\_id to products.id and org\_id to organisations.id
|
How do I fix the following many to many relationship?
|
[
"",
"sql",
"oracle",
""
] |
First I have table `users`
```
+---------+----------+------------------+
| user_id | username | email |
+---------+----------+------------------+
| 1 | User 1 | email1@gmail.com |
| 2 | User 2 | email2@gmail.com |
| 3 | User 3 | email3@gmail.com |
| 4 | User 4 | email4@gmail.com |
+---------+----------+------------------+
```
Next I have table `user_announcement`
```
+---------+----------+---------+-----------+
| user_id | annou_id | is_read | read_time |
+---------+----------+---------+-----------+
| 1 | 1 | 0 | Time |
| 2 | 1 | 1 | Time |
| 1 | 2 | 0 | Time |
| 2 | 3 | 1 | Time |
+---------+----------+---------+-----------+
```
I am trying to figure out how to get all users and their read status for announcement id 1.
I want output have something like this
```
+---------+----------+------------------+----------+---------+-----------+
| user_id | username | email | annou_id | is_read | read_time |
+---------+----------+------------------+----------+---------+-----------+
| 1 | User 1 | email1@gmail.com | 1 | 0 | Time |
| 2 | User 2 | email2@gmail.com | 1 | 1 | Time |
| 3 | User 3 | email3@gmail.com | NULL | NULL | NULL |
| 4 | User 4 | email4@gmail.com | NULL | NULL | NULL |
+---------+----------+------------------+----------+---------+-----------+
```
I tried all kinds of join but it didn't give me the result I want.
|
When doing a `LEFT JOIN`, if you need to filter on a column in the second table, you should do it in the `ON` clause.
```
SELECT *
FROM user u
LEFT JOIN user_announcement ua
ON u.user_id=ua.user_id AND ua.annou_id = 1
```
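This is easy to verify with SQLite from Python using the question's sample rows; note how users 3 and 4 survive with NULLs precisely because the `annou_id = 1` filter lives in the `ON` clause rather than the `WHERE` clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, username TEXT)")
conn.execute("CREATE TABLE user_announcement "
             "(user_id INTEGER, annou_id INTEGER, is_read INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "User 1"), (2, "User 2"), (3, "User 3"), (4, "User 4")])
conn.executemany("INSERT INTO user_announcement VALUES (?, ?, ?)",
                 [(1, 1, 0), (2, 1, 1), (1, 2, 0), (2, 3, 1)])

# Filtering in the ON clause keeps unmatched users as NULL-padded rows;
# the same filter in the WHERE clause would drop users 3 and 4 entirely.
rows = conn.execute("""
    SELECT u.user_id, u.username, ua.annou_id, ua.is_read
    FROM users u
    LEFT JOIN user_announcement ua
        ON u.user_id = ua.user_id AND ua.annou_id = 1
    ORDER BY u.user_id
""").fetchall()
print(rows)
```

Moving `ua.annou_id = 1` into a `WHERE` clause would turn the `LEFT JOIN` into an effective inner join, because NULL rows never satisfy the predicate.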
|
Edit:
Updated answer as per user requirement
This is what you are looking for. Here I am first getting `min(annou_id)` for each user and then getting the other columns for that row. Then, treating this derived table as the second table, I do a left join as earlier to get the required records.
```
select u.user_id,u.username,u.email,ua_derived.annou_id,ua_derived.is_read,ua_derived.read_time
from user u
left join
(
select ua1.user_id,ua1.annou_id,ua1.is_read,ua1.read_time from user_announcement ua1
inner join
(select user_id,min(annou_id) as annou_id from user_announcement
group by user_id
) ua2
on ua1.user_id=ua2.user_id
and ua1.annou_id=ua2.annou_id
) ua_derived
on u.user_id=ua_derived.user_id;
```
**SQL Fiddle demo here**
<http://sqlfiddle.com/#!9/57a74/4>
========================================================================
**Prev Answer:**
Answer is same as Yeldar, but you just need to use column alias to display.
```
select u.user_id,u.username,u.email,ua.annou_id,ua.is_read,ua.read_time
from user u
left join user_announcement ua
on u.user_id=ua.user_id
```
**SQL Fiddle demo here**
<http://sqlfiddle.com/#!9/db28e/1>
|
Join two tables but still get full data of first table
|
[
"",
"mysql",
"sql",
""
] |
I keep getting the below error when trying to do a Group By on Multiple Inner Joins.
> Msg 8120, Level 16, State 1, Line 1 Column 'address.Address1' is
> invalid in the select list because it is not contained in either an
> aggregate function or the GROUP BY clause.
Below is my code, can't work out what I am doing wrong.
```
select Ad.Address1, Ad.Address2, Ad.PostalCode, Tw.Town, Co.County, CI.InstallationName,C.CompanyName
FROM address as AD
Inner Join
(Select TownId, Town From
Town as Tw)TW
ON Tw.TownID=Ad.TownID
Inner Join
(select County, CountyID From
County as CO) CO
ON CO.countyid=Ad.CountyID
Inner Join
(select ClientID, InstallationName, AddressID From
ClientInstallations as CI) CI
ON CI.AddressID=Ad.AddressID
Inner Join
(select ClientID, CompanyName From
Client as C) C
ON C.ClientID=CI.ClientID
Where Address1 <> '' and Address2 <> '' and PostalCode <> '' and Town <> '' and County is not null
Group By CompanyName
```
Apologies for the messy code, just started learning this stuff so would appreciate some tips.
Many Thanks
Sam
|
What I understand from your comment is that all you want is how many records with the same company name exist. If that's the case, then you can rewrite the query as:
```
select COUNT(*),
C.CompanyName
FROM [address] as AD
Inner Join
(Select TownId, Town FROM Town as Tw)TW
ON Tw.TownID=Ad.TownID
Inner Join
(select County, CountyID FROM County as CO) CO
ON CO.countyid=Ad.CountyID
Inner Join
(select ClientID, InstallationName, AddressID From
ClientInstallations as CI) CI
ON CI.AddressID=Ad.AddressID
Inner Join
(select ClientID, CompanyName FROM
Client as C) C
ON C.ClientID=CI.ClientID
Where Address1 <> '' and Address2 <> '' and PostalCode <> '' and Town <> '' and County is not null
Group By CompanyName
```
|
What you're doing wrong is using a `mysql`-like script for `sql server`. In SQL Server (and every other sane DBMS), every selected attribute which is not in an aggregate function has to be in the `group by`. In your case, I don't understand why you are using a `group by` at all, since you don't use aggregate functions. So your solution should be to remove the `group by`.
**edit based on comment**
If you only want to count the records for each `companyname`, you should do `select companyname, count(1) cnt from (your selection) group by companyname`. If you want the rest of the attributes as well, you'll have to include them in the `group by`.
|
Multiple Inner Joins Group By 'not contained in either an aggregate function or the GROUP BY clause'
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have 3 tables:
```
CREATE table materials
(id serial primary key not null,
name varchar(50) not null,
unit varchar(10) not null default 'шт',
price decimal(12, 2) not null check (price>0));
CREATE table warehouses
(id serial primary key not null,
lastname varchar(25) not null);
CREATE table materials_in_warehouses
(id_warehouses integer references warehouses(id) on update cascade on delete cascade,
id_materials integer references materials(id),
unit varchar(15) default 'шт',
count integer not null CHECK (count>0),
lastdate date not null,
primary key (id_warehouses, id_materials));
```
And I need to select, for each material: its name and the count of warehouses where the material is present in an amount > 100.
I tried to do something like this:
```
select materials.name, count(select * from materials_in_warehouses where
materials.id = materials_in_warehouses.id_materials AND material_in_warehouses.count > 100) as sount from materials;
```
But of course, this nonsense can't work.
Thank you in advance.
|
Pretty straightforward.
```
SELECT m.name, count(miw.id_warehouses)
FROM materials m
LEFT JOIN materials_in_warehouses miw ON m.id=miw.id_materials AND miw."count">100
GROUP BY m.id, m.name
```
Your mistake might have just been the fact that you're using `count` as a column name, when it's an aggregate function. Use double quotes in PostgreSQL for that:
[Escaping keyword-like column names in Postgres](https://stackoverflow.com/questions/7651417/escaping-keyword-like-column-names-in-postgres)
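A quick way to sanity-check this pattern locally (SQLite follows the same `LEFT JOIN` semantics here; the data below is hypothetical):

```python
import sqlite3

# In-memory sketch of the schema above, with hypothetical rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE materials (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE materials_in_warehouses (
    id_warehouses INTEGER, id_materials INTEGER, "count" INTEGER);
INSERT INTO materials VALUES (1, 'steel'), (2, 'glass');
INSERT INTO materials_in_warehouses VALUES
    (10, 1, 150), (11, 1, 50), (12, 2, 80);
""")
# LEFT JOIN keeps every material; the >100 filter lives in the ON clause,
# so materials with no qualifying warehouse still appear with a count of 0.
rows = conn.execute("""
    SELECT m.name, COUNT(miw.id_warehouses)
    FROM materials m
    LEFT JOIN materials_in_warehouses miw
      ON m.id = miw.id_materials AND miw."count" > 100
    GROUP BY m.id, m.name
    ORDER BY m.id
""").fetchall()
print(rows)  # [('steel', 1), ('glass', 0)]
```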
|
Try like this
```
select materials.name, count(
select id_warehouses from materials_in_warehouses join materials
on materials.id = materials_in_warehouses.id_materials
where material_in_warehouses.count > 100
) as sount from materials;
```
|
PostgreSQL select query doesn't work
|
[
"",
"sql",
"postgresql",
"select",
""
] |
I have two tables:
Target table: `Specialisation (id , name , description)`
Source table `TempSpecialisation(id , name , description)`
I want to copy `TempSpecialisation.description` to `Specialisation.Description` if the name matches, or insert a new record into `Specialisation` with all the columns.
I get duplicate entries because of white space at the end of `Specialisation.name`.
My procedure is:
```
USE [TempDatabase]
GO
/****** Object: StoredProcedure [dbo].[TempDatabase2] Script Date: 23/12/2015 3:46:49 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[TempDatabase2]
WITH EXECUTE AS owner
as
BEGIN
MERGE Specialisation as T
USING TempSpecialisation as S
ON s.Name = T.Name
WHEN NOT MATCHED BY Target
THEN INSERT(id ,Name, Description1) VALUES(S.id ,S.Name, S.Description1)
WHEN MATCHED and t.name =s.name
THEN UPDATE SET T.Description1 = S.Description1
OUTPUT $action, inserted.*;
End
```
Table Specialisation
```
id Name Descriptions
545454 "Allergist " null
```
Table TempSpecialisation
```
id Name Descriptions
1 "Allergist" This is a doctor who helps with allergies.
```
I need to update the Specialisation description from TempSpecialisation,
but it gives output like:
```
id Name Descriptions
1 "Allergist" This is a doctor who helps with allergies.
545454 "Allergist " null
```
|
Use `LTRIM(RTRIM(s.Name))` to remove white space from the beginning and the end of the field, so your merge statement becomes:
```
MERGE Specialisation as T
USING TempSpecialisation as S
ON LTRIM(RTRIM(s.Name)) = LTRIM(RTRIM(T.Name))
WHEN NOT MATCHED BY Target
THEN INSERT(id ,Name, Description1) VALUES(S.id ,S.Name, S.Description1)
WHEN MATCHED
THEN UPDATE SET T.Description1 = S.Description1
OUTPUT $action, inserted.*;
End
```
[LTRIM Documentation](https://msdn.microsoft.com/en-us/library/ms177827.aspx)
[RTRIM Documentation](https://msdn.microsoft.com/en-us/library/ms178660.aspx)
|
```
ALTER PROCEDURE [dbo].[TempDatabase2]
WITH EXECUTE AS OWNER
AS
BEGIN
MERGE dbo.Specialisation T
USING dbo.TempSpecialisation S ON LTRIM(S.Name) = LTRIM(T.Name)
WHEN NOT MATCHED BY TARGET
THEN
INSERT (id, Name, Description1)
VALUES (S.id, S.Name, S.Description1)
WHEN MATCHED AND ISNULL(T.Description1, '') != ISNULL(S.Description1, '') -- remove ISNULL if Description1 NOT NULL
THEN
UPDATE SET T.Description1 = S.Description1
OUTPUT $ACTION, INSERTED.*;
END
```
|
How to Trim white spaces of string in stored procedure?
|
[
"",
"sql",
"sql-server",
"trim",
""
] |
I have the table Asset (see below). For all Asset\_Type values that are not "Automobile" I need to display the row and round the Asset\_Value to the nearest hundred. I need to **omit Asset\_Types with only one asset** and sequence the results by asset type.
```
+-------------------+----------------------------------+---------------+
| Asset_Type | Asset_Description | Asset_Value |
+-------------------+----------------------------------+---------------+
| Automobile | Model T | $100,923.99 |
| Automobile | 1967 Ford Mustang Convertible | $60,000.00 |
| Automobile | 1975 MGB | $52,000.00 |
| Automobile | 1962 Avanti | $88,000.00 |
| Wine | 2000 LaGrange Cabernet Sauvignon | $235.25 |
| Wine | 1999 LaGrange Cabernet Sauvignon | $400.88 |
| Fine Art | Hula-Hoop Girl Painting | $1,000.00 |
| Antique Furniture | 1860 Setee | $1,200.00 |
| Antique Furniture | 1860 4-Post Bed | $1,450.00 |
| Antique Furniture | Art Deco Dresser | $869.99 |
| Antique Furniture | 1830 Empire Chairs (4) | $2,200.00 |
| Structure | Historic Register Barn | $335,000.00 |
| Structure | Historic Register Silo | $335,000.00 |
| Fine Art | The Thinker | $1,200,000.00 |
| Fine Art | The Scream | $3,350,000.00 |
| Coins | 1880 2-Headed Penny | $500.00 |
| Coins | 1932 A-Series Dime | $750.00 |
| Coins | Buffalo Nickel | $469.99 |
| Stamps | Moon Landing | $175.00 |
| Stamps | American Centennial | $3,000.50 |
| Commodities | 175 Acre Cotton Field | $750,500.00 |
| Memorabilia | UofF Football Trophy | $16,555.55 |
| Memorabilia | UofF Football Jersey | $16,555.55 |
+-------------------+----------------------------------+---------------+
```
I'm able to get most criteria except omitting asset with only ONE count.
```
SELECT Asset_Type, ROUND(Asset_Value/100, 0)*100 AS [Rounded Asset Value]
FROM Asset
WHERE Asset_Type <> "Automobile"
ORDER BY Asset_Type;
```
I've tried using `GROUP BY Asset_Type, Asset_Value HAVING COUNT(*) > 1`, but that results in 2 rows with matching asset type and value. Since I need to still list all asset type, GROUP BY will not list all rows.
Any help is appreciated!
|
Try this
```
SELECT Asset_Type,
Round(Asset_Value / 100, 0) * 100 AS [Rounded Asset Value]
FROM Asset
WHERE Asset_Type IN (SELECT Asset_Type
FROM Asset
WHERE Asset_Type <> 'Automobile'
GROUP BY Asset_Type
HAVING Count(*) > 1)
ORDER BY Asset_Type
```
Or use `EXISTS`
```
SELECT Asset_Type,
Round(Asset_Value / 100, 0) * 100 AS [Rounded Asset Value]
FROM #test t
WHERE EXISTS (SELECT Asset_Type
FROM #test f
WHERE Asset_Type <> 'Automobile'
AND f.Asset_Type = t.Asset_Type
GROUP BY Asset_Type
HAVING Count(*) > 1)
ORDER BY Asset_Type
```
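A quick check of the `IN`-subquery filter on hypothetical data (SQLite stands in for Access here; the logic is identical):

```python
import sqlite3

# Toy version of the Asset table: one single-member type, one multi-member
# type, plus the excluded 'Automobile' rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Asset (Asset_Type TEXT, Asset_Value REAL);
INSERT INTO Asset VALUES
    ('Wine', 235.25), ('Wine', 400.88),
    ('Commodities', 750500.00),
    ('Automobile', 60000.00), ('Automobile', 52000.00);
""")
rows = conn.execute("""
    SELECT Asset_Type, ROUND(Asset_Value / 100, 0) * 100 AS rounded
    FROM Asset
    WHERE Asset_Type IN (SELECT Asset_Type
                         FROM Asset
                         WHERE Asset_Type <> 'Automobile'
                         GROUP BY Asset_Type
                         HAVING COUNT(*) > 1)
    ORDER BY rounded
""").fetchall()
# 'Commodities' (one row) and 'Automobile' are filtered out; the values
# round to the nearest hundred.
print(rows)  # [('Wine', 200.0), ('Wine', 400.0)]
```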
|
Here is an alternative to Shep's answer using joins instead of a subquery:
```
SELECT a1.Asset_Type, ROUND(a1.Asset_Value/100, 0)*100 AS [Rounded Asset Value]
FROM Asset a1
INNER JOIN
(
SELECT Asset_Type, COUNT(*) AS frequency
FROM Asset
GROUP BY Asset_Type
HAVING COUNT(*) > 1
) a2
ON a1.Asset_Type = a2.Asset_Type
WHERE a1.Asset_Type <> "Automobile"
ORDER BY a1.Asset_Type;
```
|
Filter by COUNT(*) > 1 and display all relevant rows
|
[
"",
"sql",
"ms-access",
""
] |
I have a table with records as below image
[](https://i.stack.imgur.com/djMy6.png)
I have to maintain replicas of student ABC as shown below; each student in each process can contain only one active row. What is the best way to stop duplicates for every student entry from the database end?
How do I manage the uniqueness of the active row?
How do I check new INSERTs to prevent future duplicates?
|
You can maintain the uniqueness by using a unique filtered index:
```
create unique index unq_table_name_processid_filtered
on table(name, processid) where IsActive = 'true';
```
[Here](https://msdn.microsoft.com/en-us/library/cc280372.aspx) is the documentation on the topic.
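SQLite supports the same idea (a partial unique index), so the behaviour is easy to sketch locally; the table and data below are hypothetical:

```python
import sqlite3

# A filtered (partial) unique index: uniqueness is enforced only for
# rows where IsActive = 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student_process (name TEXT, processid INTEGER, IsActive INTEGER);
CREATE UNIQUE INDEX unq_active ON student_process(name, processid)
    WHERE IsActive = 1;
INSERT INTO student_process VALUES ('ABC', 7, 1);
INSERT INTO student_process VALUES ('ABC', 7, 0);  -- inactive duplicate: allowed
""")
try:
    conn.execute("INSERT INTO student_process VALUES ('ABC', 7, 1)")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True  # a second ACTIVE row for the same (name, process) is rejected
print(blocked)  # True
```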
|
```
create unique index unq_table_name_processid_filtered on table(name, processid) where IsActive = 1;
```
Use 1 for a `bit` column's true value.
|
How to deal with duplicate records in SQL Server database
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"sql-server-2012",
""
] |
I have a MySQL table which has a column containing VARCHAR data. The data could be of the following type:
* Simple integer (1,2,3, etc.)
* Integer appended with the letter 'L', for example (L1, L2, etc.)
* Integer appended with the letter 'Q', for example (Q1, Q2, etc.)
* Integer appended with the letter 'W', for example (W1, W2, etc.)
I need to pull the data from the table and sort the data matching this column, let's call it `type`. The sorting should be as follows:
First everything starting with 'Q%' keeping in mind that the integer after Q also plays a role (Q5 should be before Q6) etc.
Following is everything starting with 'L%', same logic as above. The 'W%' follow next and last are just the simple integers.
So what's the query and especially the ORDER BY part that I need to write? I tried the following which gives no errors but does not order the items in the order that I want and I can't figure out what's the logic behind it:
```
SELECT * FROM table
ORDER BY
CASE type
WHEN type LIKE 'Q%' THEN 1
WHEN type LIKE 'L%' THEN 2
WHEN type LIKE 'W%' THEN 3
ELSE 4
END,
type ASC
```
I also tried with `FIELD(...)` but I don't think it takes into account something like 'Q%' at all although it doesn't give an error either nor it sorts them properly.
Any tips or ideas?
|
This should allow desired ordering.
```
ORDER BY FIELD(LEFT(type,1), 'Q', 'L', 'W')
, CAST(type AS UNSIGNED)
, CAST(SUBSTRING(type, 2) AS UNSIGNED)
```
The [first one](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_field) ensures those letter prefixes sort in that order; anything beginning otherwise gets `FIELD() = 0` and is ordered *earlier*. Note that you asked for the plain integers *last*: to push non-matching values to the end, prepend the term `FIELD(LEFT(type,1), 'Q', 'L', 'W') = 0` to the ORDER BY. Any other prefixes not accounted for would then land in that trailing group as well.
The second makes sure the numeric strings order themselves numerically (so you won't get things like `1, 11, 12, 100, 2`). For strings beginning with letters this usually results in 0 (no sort), though it would position unaccounted-for prefixes (if they exist) BEFORE most pure numbers.
The third part performs nearly identically to the second ordering, but is effective for the prefixed strings' numeric portions.
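As a sanity check, the desired ordering (Q, then L, then W, with the plain integers last, and each group sorted numerically) can be sketched as a Python sort key on hypothetical values:

```python
# Mirrors the prefix-rank + numeric-suffix idea of the ORDER BY above.
def sort_key(value):
    ranks = {'Q': 0, 'L': 1, 'W': 2}
    if value[0] in ranks:
        # Prefixed string: rank by letter, then by the numeric suffix.
        return (ranks[value[0]], int(value[1:]))
    # Plain integer string: sorts after all prefixed groups, numerically.
    return (3, int(value))

data = ['2', 'L2', 'Q10', 'W1', 'Q2', '11', 'L1']
ordered = sorted(data, key=sort_key)
print(ordered)  # ['Q2', 'Q10', 'L1', 'L2', 'W1', '2', '11']
```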
|
The second column should be sorted based on the integer value of the column:
```
SELECT `type`
FROM `table`
ORDER BY
CASE
WHEN `type` LIKE 'Q%' THEN 1
WHEN `type` LIKE 'L%' THEN 2
WHEN `type` LIKE 'W%' THEN 3
ELSE 4
END,
CASE
WHEN `type` LIKE 'Q%' THEN CONVERT(SUBSTRING(`type`, 2), SIGNED)
WHEN `type` LIKE 'L%' THEN CONVERT(SUBSTRING(`type`, 2), SIGNED)
WHEN `type` LIKE 'W%' THEN CONVERT(SUBSTRING(`type`, 2), SIGNED)
ELSE CONVERT(`type`, SIGNED)
END
```
[SQL Fiddle](http://www.sqlfiddle.com/#!9/254d04/1)
|
MySQL ORDER BY using a CASE statement?
|
[
"",
"mysql",
"sql",
"sql-order-by",
""
] |
I have Two tables in my access database
* table1(ID,productname,quantity,remainder)
* table2(ID,productname,sales)
These tables are related using "productname". How can I update "remainder" in table1 with the value of "quantity from the first table - Sum(sales) from the second table"?
|
Because update queries in MS Access require [updateable](http://allenbrowne.com/ser-61.html) status, you cannot use a direct inner join on an aggregate query. Consider using the MS Access [DSum()](http://www.techonthenet.com/access/functions/domain/dsum.php) function:
```
UPDATE table1
SET table1.remainder = table1.quantity -
DSum("Sales", "table2", "ProductName='" & table1.ProductName & "'")
```
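`DSum()` here is essentially a correlated aggregate subquery. In engines that allow correlated subqueries inside `UPDATE`, the same computation looks like this (a hypothetical SQLite sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (ID INTEGER, productname TEXT,
                     quantity INTEGER, remainder INTEGER);
CREATE TABLE table2 (ID INTEGER, productname TEXT, sales INTEGER);
INSERT INTO table1 VALUES (1, 'widget', 100, NULL);
INSERT INTO table2 VALUES (1, 'widget', 30), (2, 'widget', 25);
""")
# remainder = quantity - SUM(sales) for the matching product name,
# computed per row by the correlated subquery.
conn.execute("""
    UPDATE table1
    SET remainder = quantity - (
        SELECT SUM(s.sales) FROM table2 s
        WHERE s.productname = table1.productname)
""")
remainder = conn.execute("SELECT remainder FROM table1").fetchone()[0]
print(remainder)  # 45  (100 - 55)
```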
|
Perform an update join by getting the `SUM()` first like
```
UPDATE a
SET a.remainder = x.SaleTotal
FROM table1 a
INNER JOIN (SELECT productname, SUM(sales) AS SaleTotal
FROM TABLE2 GROUP BY productname) x
ON a.productname = x.productname;
```
|
How to update ms access database table using update and sum() function?
|
[
"",
"sql",
""
] |
I have data in a table which looks like:
[](https://i.stack.imgur.com/y5HCa.png)
I want to split its data and make it look like the following through a sql query in Oracle (without using pivot):
[](https://i.stack.imgur.com/anlJQ.png)
How can it be done? Is there any other way of doing so without using pivot?
|
You need to use a pivot query here to get the output you want:
```
SELECT Name,
MIN(CASE WHEN ID_Type = 'PAN' THEN ID_No ELSE NULL END) AS PAN,
MIN(CASE WHEN ID_Type = 'DL' THEN ID_No ELSE NULL END) AS DL,
MIN(CASE WHEN ID_Type = 'Passport' THEN ID_No ELSE NULL END) AS Passport
FROM yourTable
GROUP BY Name
```
You could also try using Oracle's built in `PIVOT()` function if you are running version 11g or later.
|
Since you mention not using the PIVOT function, you can try to
**use WITHIN GROUP to move rows onto one line and LISTAGG to display multiple row values in a single column.**
In Oracle 11g, we can use the *listagg* built-in function :
```
select
deptno,
listagg (ename, ',')
WITHIN GROUP
(ORDER BY ename) enames
FROM
emp
GROUP BY
deptno
```
Which should give you the below result:
```
DEPTNO ENAMES
------ --------------------------------------------------
10 CLARK,KING,MILLER
20 ADAMS,FORD,JONES,SCOTT,SMITH
30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD
```
You can find all the solution(s) to this problem here:
<http://www.dba-oracle.com/t_converting_rows_columns.htm>
|
Splitting rows to columns in oracle
|
[
"",
"sql",
"oracle",
"pivot",
""
] |
Please help. I have 2 tables. First one is called Reservation and second one is called LastOrders.
There are the columns DATE\_FROM and DATE\_TO, both of DATE type. And I want: if the date in table Reservation in column DATE\_TO = SYSDATE, then delete this row and insert a new row into table LastOrders. I think a trigger can do that, but I don't know how. Any idea? Thank you very much and have a nice Christmas :)
```
TABLE RESERVATION
Name Null Type
-------------------------- -------- -------------
ID NOT NULL VARCHAR2(25)
DESCRIPTION NOT NULL VARCHAR2(100)
DATE_FROM NOT NULL DATE
DATE_TO NOT NULL DATE
TABLE LASTORDERS
Name Null Type
-------------------------- -------- -------------
ID NOT NULL VARCHAR2(25)
DESCRIPTION NOT NULL VARCHAR2(100)
DATE_FROM NOT NULL DATE
DATE_TO NOT NULL DATE
```
|
[SQL Fiddle](http://sqlfiddle.com/#!4/86d27/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE RESERVATION (
ID VARCHAR2(25) CONSTRAINT RESERVATION__ID__NN NOT NULL,
DESCRIPTION VARCHAR2(100) CONSTRAINT RESERVATION__DE__NN NOT NULL,
DATE_FROM DATE CONSTRAINT RESERVATION__DF__NN NOT NULL,
DATE_TO DATE CONSTRAINT RESERVATION__DT__NN NOT NULL
)
/
CREATE TABLE LASTORDERS (
ID VARCHAR2(25) CONSTRAINT LASTORDERS__ID__NN NOT NULL,
DESCRIPTION VARCHAR2(100) CONSTRAINT LASTORDERS__DE__NN NOT NULL,
DATE_FROM DATE CONSTRAINT LASTORDERS__DF__NN NOT NULL,
DATE_TO DATE CONSTRAINT LASTORDERS__DT__NN NOT NULL
)
/
CREATE PROCEDURE fulfilReservation(
I_ID IN RESERVATION.ID%TYPE,
O_SUCCESS OUT NUMBER
)
AS
r_reservation RESERVATION%ROWTYPE;
BEGIN
DELETE FROM RESERVATION
WHERE ID = I_ID
AND SYSDATE BETWEEN DATE_FROM AND DATE_TO
RETURNING ID, DESCRIPTION, DATE_FROM, DATE_TO INTO r_reservation;
INSERT INTO lastorders VALUES r_reservation;
o_success := 1;
EXCEPTION
WHEN NO_DATA_FOUND THEN
o_success := 0;
END;
/
INSERT INTO RESERVATION VALUES ( 1, 'Test', SYSDATE - 1, SYSDATE + 1 )
/
DECLARE
success NUMBER(1,0);
BEGIN
fulfilReservation( 1, success );
END;
/
```
**Query 1**:
```
SELECT * FROM RESERVATION
```
**[Results](http://sqlfiddle.com/#!4/86d27/1/0)**:
```
No Results
```
**Query 2**:
```
SELECT * FROM LASTORDERS
```
**[Results](http://sqlfiddle.com/#!4/86d27/1/1)**:
```
| ID | DESCRIPTION | DATE_FROM | DATE_TO |
|----|-------------|----------------------------|----------------------------|
| 1 | Test | December, 24 2015 18:59:07 | December, 26 2015 18:59:07 |
```
|
A trigger can do such a job, but it fires only on changes to the table data. If you need to move data between tables based on time, you have to use jobs.
```
begin
dbms_scheduler.create_job(
job_name => 'my_job',
job_type => 'PLSQL_BLOCK',
enabled => true,
repeat_interval => 'FREQ=DAILY; INTERVAL=1',
start_date => round(sysdate,'DD'),
job_action => 'begin
insert into LastOrders
select *
from Reservation
where DATE_TO = round(sysdate,''DD'');
delete Reservation where DATE_TO = round(sysdate,''DD'');
commit;
end;');
end;
```
This job will move data from Reservation to LastOrders every day at 00:00.
|
How to delete row with past date and add this row to an other table?
|
[
"",
"sql",
"oracle",
"plsql",
"triggers",
""
] |
In SQL I am trying to filter results based on an ID and wondering if there is any logical difference between
```
SELECT value
FROM table1
JOIN table2 ON table1.id = table2.id
WHERE table1.id = 1
```
and
```
SELECT value
FROM table1
JOIN table2 ON table1.id = table2.id AND table1.id = 1
```
To me, it seems as if the logic is different though you will always get the same set of results but I wondered if there were any conditions under which you would get two different result sets (or would they always return the exact same two result sets)
|
The answer is **NO** difference, but:
I will always prefer to do the following.
* Always keep the *Join Conditions* in `ON` clause
* Always put the *filter's* in `where` clause
This makes the query more **readable**.
So I will use this query:
```
SELECT value
FROM table1
INNER JOIN table2
ON table1.id = table2.id
WHERE table1.id = 1
```
However when you are using `OUTER JOIN'S` there is a big difference in keeping the filter in the `ON` condition and `Where` condition.
**Logical Query Processing**
The following list contains a general form of a query, along with step numbers assigned according to the order in which the different clauses are logically processed.
```
(5) SELECT (5-2) DISTINCT (5-3) TOP(<top_specification>) (5-1) <select_list>
(1) FROM (1-J) <left_table> <join_type> JOIN <right_table> ON <on_predicate>
| (1-A) <left_table> <apply_type> APPLY <right_table_expression> AS <alias>
| (1-P) <left_table> PIVOT(<pivot_specification>) AS <alias>
| (1-U) <left_table> UNPIVOT(<unpivot_specification>) AS <alias>
(2) WHERE <where_predicate>
(3) GROUP BY <group_by_specification>
(4) HAVING <having_predicate>
(6) ORDER BY <order_by_list>;
```
Flow diagram logical query processing
[](https://i.stack.imgur.com/GdLq0.png)
* (1) FROM: The FROM phase identifies the query’s source tables and
processes table operators. Each table operator applies a series of
sub phases. For example, the phases involved in a join are (1-J1)
Cartesian product, (1-J2) ON Filter, (1-J3) Add Outer Rows. The FROM
phase generates virtual table VT1.
* (1-J1) Cartesian Product: This phase performs a Cartesian product
(cross join) between the two tables involved in the table operator,
generating VT1-J1.
* (1-J2) **ON Filter**: This phase filters the rows from VT1-J1 based on
the predicate that appears in the ON clause (<on\_predicate>). Only
rows for which the predicate evaluates to TRUE are inserted into
VT1-J2.
* (1-J3) **Add Outer Rows**: If OUTER JOIN is specified (as opposed to
CROSS JOIN or INNER JOIN), rows from the preserved table or tables
for which a match was not found are added to the rows from VT1-J2 as
outer rows, generating VT1-J3.
* (2) **WHERE**: This phase filters the rows from VT1 based on the
predicate that appears in the WHERE clause (<where\_predicate>). Only
rows for which the predicate evaluates to TRUE are inserted into VT2.
* (3) GROUP BY: This phase arranges the rows from VT2 in groups based
on the column list specified in the GROUP BY clause, generating VT3.
Ultimately, there will be one result row per group.
* (4) HAVING: This phase filters the groups from VT3 based on the
predicate that appears in the HAVING clause (<having\_predicate>).
Only groups for which the predicate evaluates to TRUE are inserted
into VT4.
* (5) SELECT: This phase processes the elements in the SELECT clause,
generating VT5.
* (5-1) Evaluate Expressions: This phase evaluates the expressions in
the SELECT list, generating VT5-1.
* (5-2) DISTINCT: This phase removes duplicate rows from VT5-1,
generating VT5-2.
* (5-3) TOP: This phase filters the specified top number or percentage
of rows from VT5-2 based on the logical ordering defined by the ORDER
BY clause, generating the table VT5-3.
* (6) ORDER BY: This phase sorts the rows from VT5-3 according to the
column list specified in the ORDER BY clause, generating the cursor
VC6.
it is referred from book "[T-SQL Querying (Developer Reference)](https://rads.stackoverflow.com/amzn/click/com/B00TPRWVHY)"
|
While there is no difference when using **INNER JOINS**, as VR46 pointed out, there is a significant difference when using **OUTER JOINS** and evaluating a value in the second table (for left joins - first table for right joins). Consider the following setup:
```
DECLARE @Table1 TABLE ([ID] int)
DECLARE @Table2 TABLE ([Table1ID] int, [Value] varchar(50))
INSERT INTO @Table1
VALUES
(1),
(2),
(3)
INSERT INTO @Table2
VALUES
(1, 'test'),
(1, 'hello'),
(2, 'goodbye')
```
If we select from it using a left outer join and put a condition in the `WHERE` clause:
```
SELECT * FROM @Table1 T1
LEFT OUTER JOIN @Table2 T2
ON T1.ID = T2.Table1ID
WHERE T2.Table1ID = 1
```
We get the following results:
```
ID Table1ID Value
----------- ----------- --------------------------------------------------
1 1 test
1 1 hello
```
This is because the where clause limits the result set, so we are only including records from `Table1` that have an ID of `1`. However, if we move the condition to the `ON` clause:
```
SELECT * FROM @Table1 T1
LEFT OUTER JOIN @Table2 T2
ON T1.ID = T2.Table1ID
AND T2.Table1ID = 1
```
We get the following results:
```
ID Table1ID Value
----------- ----------- --------------------------------------------------
1 1 test
1 1 hello
2 NULL NULL
3 NULL NULL
```
This is because we are no longer filtering the result-set by the `Table1`'s ID of `1` - rather we are filtering the JOIN. So, even though `Table1`'s ID of `2` DOES have a match in the second table, it's excluded from the join - but NOT the result-set (hence the `null` values).
So, for inner joins it doesn't matter, but you should keep it in the where clause for readability and consistency. However, for outer joins, you need to be aware that it DOES matter where you put the condition as it will impact your result-set.
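The difference is easy to reproduce outside SQL Server as well; a minimal sketch in SQLite (same semantics, simplified hypothetical tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER);
CREATE TABLE t2 (t1id INTEGER, value TEXT);
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES (1, 'test'), (1, 'hello'), (2, 'goodbye');
""")
# Filter in WHERE: applied after the join, so unmatched rows are removed.
in_where = conn.execute("""
    SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.t1id
    WHERE t2.t1id = 1""").fetchall()
# Filter in ON: restricts the join itself; every t1 row survives,
# non-matching ones padded with NULLs.
in_on = conn.execute("""
    SELECT * FROM t1 LEFT JOIN t2
    ON t1.id = t2.t1id AND t2.t1id = 1""").fetchall()
print(len(in_where), len(in_on))  # 2 4
```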
|
Difference between filtering queries in JOIN and WHERE?
|
[
"",
"sql",
"join",
"where-clause",
"resultset",
""
] |
I am using `REPLACE` statement to replace certain string in my column.
```
update USER_DEFINED_DATA_SETS
set DefaultValue = replace(DefaultValue, 'Area', 'Area_123')
```
It works fine most of the time. But if I have `Area` and `Area_Description` data in my `DefaultValue` column, it replaces both, giving `Area_123` and `Area_123_Description`.
`Area` and `Area_Description` are different. I want to replace just `Area` and not `Area_Description`.
Sample Data
```
Sno DefaultValue
1 Area
2 Area_Description
3 Area123
4 Equipment
5 Equipment_Name
```
Of all the above values, only the first row, i.e. DefaultValue = Area, should change to Area_123. All the rest should not change.
Also I can't put a WHERE clause like WHERE DefaultValue='Area' in my REPLACE statement, as I need this to work for all other scenarios like Equipment or any other.
|
You can add a `WHERE` condition to your `UPDATE` query to restrict the replacement to only occurrences where the `DefaultValue` column begins or ends with `Area`, or contains `Area` as a standalone word.
```
UPDATE USER_DEFINED_DATA_SETS
SET DefaultValue = REPLACE(DefaultValue, 'Area', 'Area_123')
WHERE DefaultValue LIKE 'Area' OR
      DefaultValue LIKE 'Area %' OR
      DefaultValue LIKE '% Area %' OR
      DefaultValue LIKE '% Area'
```
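One thing to watch: `LIKE` needs `%` wildcards to express "starts with", "ends with", or "contains as a word". A quick SQLite sketch of the word-boundary patterns (hypothetical data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (DefaultValue TEXT);
INSERT INTO t VALUES ('Area'), ('Area_Description'), ('Area123'),
                     ('Equipment'), ('Big Area');
""")
# Four patterns: exact value, word at the start, word in the middle,
# word at the end. 'Area_Description' and 'Area123' do NOT match.
rows = conn.execute("""
    SELECT DefaultValue FROM t
    WHERE DefaultValue LIKE 'Area'
       OR DefaultValue LIKE 'Area %'
       OR DefaultValue LIKE '% Area %'
       OR DefaultValue LIKE '% Area'
    ORDER BY DefaultValue
""").fetchall()
print([r[0] for r in rows])  # ['Area', 'Big Area']
```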
|
On `oracle` 11g and above.
```
SELECT REGEXP_REPLACE('Area Area_Description',
'(^|[^[:alnum:]_])Area($|[^[:alnum:]_])',
'\1Area_123\2') "Area"
FROM DUAL;
```
Results in:
```
Area
-------------------------
Area_123 Area_Description
```
|
Alternate To REPLACE
|
[
"",
"sql",
""
] |
I have this table below:
[](https://i.stack.imgur.com/zq5lk.png)
And when I run this query:
```
Select Gender, City, SUM(Salary) as TotalSalary, COUNT(ID) as [Total Employees]
from tblEmployee
Group by Gender, City
```
The output table is:
[](https://i.stack.imgur.com/grOmk.png)
I am confused about how the Total Employees column is being worked out in this query. I have `Count(ID) as [Total Employees]` in the query, but how is the Group By clause dividing up the total employees? I thought `Count(ID) as [Total Employees]` would return 10, since I have 10 records in my table, but I am not sure how this Group By clause is dividing the total up. Is it by City or by Gender?
|
# General info about grouping
Taken from Postgres [documentation](http://www.postgresql.org/docs/current/static/sql-select.html). `GROUP BY` clause behavior is described below.
**Edit**. After reading @lad2025 comment I'm putting some clarification. Yes, explanation of `GROUP BY` clause is quoted from specific DBMS manual, but those things are implemented likewise among all different database vendors I'm familiar with, so it shouldn't matter from which vendor you get the information as long as it's understandable for you.
> The optional **GROUP BY** clause has the general form:
>
> ```
> GROUP BY expression [, ...]
> ```
>
> GROUP BY will condense into a single row all selected rows that share
> the same values for the grouped expressions.
>
> [...]
>
> Aggregate functions, if any are used, are computed across all rows making up each group, producing a separate value for each group.
# Response to your question
**Question** :
> I am confused about how the Total Employees column is being worked out in this query. I have Count(ID) as [Total Employees] in the query, but how is the Group By clause dividing up the total employees? I thought Count(ID) as [Total Employees] would return 10, since I have 10 records in my table, but I am not sure how this Group By clause is dividing the total up. Is it by City or by Gender?
**Answer** :
Remember that what name you assign to a column (as alias) doesn't actually impact the behavior and data presented in this column. Your column `TotalEmployees` counts every occurence of column `id` for every group that consists of those columns from the `GROUP BY` clause.
In your particular case each different pair of values from columns `(Gender, City)` is being condensed into exactly one row. Without your `TotalEmployees` column this would behave the same as applying `DISTINCT` to your query, so that **both below presented queries would yield the same result**:
```
1. Select Gender, City from tblEmployee Group by Gender, City;
2. Select Distinct Gender, City from tblEmployee;
```
Going back to your questions, aggregate function `COUNT(*)` (and any aggregate function for that matter) is being applied on **entire grouped expression**. This means that both below queries would yield the same number of `id` column occurence:
```
1. SELECT Count(ID) as [Total Employees] from tblEmployee;
2. SELECT SUM([Total Employees])
FROM (
SELECT
Gender, City, SUM(Salary) as TotalSalary, COUNT(ID) as [Total Employees]
FROM
tblEmployee
GROUP BY Gender, City
) foo
```
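A small sketch may make this concrete: each distinct `(Gender, City)` pair becomes one output row, and summing the per-group counts gets you back to the total row count (hypothetical data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (gender TEXT, city TEXT, salary INTEGER);
INSERT INTO emp VALUES
    ('M','London',10), ('M','London',20), ('F','Sydney',30),
    ('M','Sydney',40), ('F','New York',50);
""")
# COUNT(*) is evaluated once per (gender, city) group, not once overall.
groups = conn.execute("""
    SELECT gender, city, COUNT(*) FROM emp
    GROUP BY gender, city ORDER BY gender, city""").fetchall()
# Summing the per-group counts recovers the plain COUNT(*) of the table.
total = conn.execute("""
    SELECT SUM(n) FROM
        (SELECT COUNT(*) AS n FROM emp GROUP BY gender, city)
""").fetchone()[0]
print(len(groups), total)  # 4 5
```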
|
First SQL separates your table into all the different genders (2), then it separates those groups into all the different cities (3).
2 \* 3 = 6, i.e. there are 6 output rows. However, there are no London females so you get 5 output rows.
The select part of the query - including count(whatevers) - are done per each of the 6 groups. You can't get total employee count in this query by itself; you have to make a separate query.
|
How does SQL Group By clause divides query result
|
[
"",
"sql",
"oracle",
""
] |
Due to some strange circumstances, random long text fields in my Microsoft Access database appear to be getting corrupted, their values being replaced with "################". I want to be able to count the number of corrupted fields with an SQL query so that I can quickly check if the number has changed.
I have written a query that can count the number of records with corrupted values, but not the total fields (e.g if 5 records have 13 corrupted values, I can get the number 5 but I want the total number of 13). How can I adjust my query?
```
SELECT Count(*) AS [Number of Errors]
FROM GPInformation
WHERE Profile="################"
OR Notes="################"
OR CriminalConvictionsNotes="################"
OR HealthIssueNotes="################"
OR NextOfKinAddress="################"
```
Output:
[](https://i.stack.imgur.com/t9P6k.png)
|
You can add conditional logic to the `select`:
```
SELECT (sum(iif(Profile = "################", 1, 0)) +
sum(iif(Notes = "################", 1, 0)) +
sum(iif(CriminalConvictionsNotes = "################", 1, 0)) +
sum(iif(HealthIssueNotes = "################", 1, 0)) +
sum(iif(NextOfKinAddress = "################", 1, 0))
) AS NumBadValues
FROM GPInformation
WHERE Profile = "################" OR
Notes = "################" OR
CriminalConvictionsNotes = "################" OR
HealthIssueNotes = "################" OR
NextOfKinAddress = "################";
```
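The conditional-sum idea is easy to verify on a toy table (SQLite's `CASE` plays the role of Access's `IIf`; data is hypothetical):

```python
import sqlite3

BAD = '################'
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gp (Profile TEXT, Notes TEXT)")
conn.execute("INSERT INTO gp VALUES (?, ?)", (BAD, BAD))   # 2 bad fields
conn.execute("INSERT INTO gp VALUES ('ok', ?)", (BAD,))    # 1 bad field
# One conditional SUM per column, added together: counts fields, not rows.
n = conn.execute("""
    SELECT SUM(CASE WHEN Profile = ? THEN 1 ELSE 0 END)
         + SUM(CASE WHEN Notes   = ? THEN 1 ELSE 0 END)
    FROM gp""", (BAD, BAD)).fetchone()[0]
print(n)  # 3 corrupted fields across 2 records
```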
|
Here is one way by `unpivoting` the data
```
SELECT Count(error_data) AS [Number of Errors]
FROM (SELECT 1 AS error_Data
FROM gpinformation
WHERE profile = "################"
UNION ALL
SELECT 1
FROM gpinformation
WHERE notes = "################"
UNION ALL
SELECT 1
FROM gpinformation
WHERE criminalconvictionsnotes = "################"
UNION ALL
SELECT 1
FROM gpinformation
WHERE healthissuenotes = "################"
UNION ALL
SELECT 1
FROM gpinformation
WHERE nextofkinaddress = "################") A
```
|
Count number of times value appears in columns
|
[
"",
"sql",
"database",
"ms-access",
"oledb",
"corruption",
""
] |
In Microsoft SQL Server 2012 or above, is it possible to convert a datetime value to Unix time stamp in a single select statement? If so, how can it be done?
|
As Peter Halasz mentions in [T-SQL DateTime to Unix Timestamp](https://web.archive.org/web/20141216081938/http://skinn3r.wordpress.com/2009/01/26/t-sql-datetime-to-unix-timestamp/):
> Converting a datetime to unix timestamp is easy, but involves error
> prone typing the following:
>
> ```
> @timestamp=DATEDIFF(second,{d '1970-01-01'},@datetime)
> ```
>
> Where @datetime is the datetime value you want to convert. The {d
> ‘yyyy-mm-dd’} notation is an ODBC escape sequence.
>
> The function:
>
> ```
> CREATE FUNCTION UNIX_TIMESTAMP (
> @ctimestamp datetime
> )
> RETURNS integer
> AS
> BEGIN
> /* Function body */
> declare @return integer
>
> SELECT @return = DATEDIFF(SECOND,{d '1970-01-01'}, @ctimestamp)
>
> return @return
> END
> ```
Try it out like below:
> ```
> SELECT UNIX_TIMESTAMP(GETDATE());
> ```
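The `DATEDIFF(second, '1970-01-01', @dt)` expression is just "seconds since the Unix epoch" for the given datetime, which is easy to cross-check:

```python
from datetime import datetime, timezone

# Seconds between the Unix epoch and a UTC datetime -- the same arithmetic
# DATEDIFF(second, {d '1970-01-01'}, @datetime) performs.
dt = datetime(2015, 12, 25, 0, 0, 0, tzinfo=timezone.utc)
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
unix_ts = int((dt - epoch).total_seconds())
print(unix_ts)  # 1451001600
```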
|
Maybe this answer will help someone... If you get an overflow when trying to convert a datetime to a number of seconds with datediff (MSSQL message: "The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart."), then use:
```
select cast(datediff(d,'1970-01-01',@d1) as bigint)*86400+datediff(s,dateadd(day, datediff(day, 0, @d1), 0),@d1)
```
If you have MS SQL Server 2016+, use the `DATEDIFF_BIG` function.
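The arithmetic behind `DATEDIFF(SECOND, {d '1970-01-01'}, @datetime)` is easy to sanity-check outside the database; a minimal Python sketch, assuming UTC datetimes:

```python
from datetime import datetime, timezone

def unix_timestamp(dt):
    # Seconds since the Unix epoch (1970-01-01 00:00:00 UTC), mirroring
    # DATEDIFF(SECOND, {d '1970-01-01'}, @datetime) for UTC inputs.
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch).total_seconds())

print(unix_timestamp(datetime(2012, 1, 1, tzinfo=timezone.utc)))  # 1325376000
```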
|
Convert Datetime to Unix timestamp
|
[
"",
"sql",
"sql-server",
"datetime",
"sql-server-2012",
"unix-timestamp",
""
] |
I have a problem: I'm trying to use PIVOT in my query without any result. Right now I have a table that looks like this:
```
Category Month Value
A August 10
B August 19
C August 15
A September 20
B September 23
C September 25
A October 24
B October 87
C October 44
```
I want to make it look this way:
```
Category August September October
A 10 20 24
B 19 23 87
C 15 25 47
```
My select is something like:
```
Select cat_name, CAST(month AS VARCHAR(20)), value from dbo.table1.
```
Here is my full attempt:
```
select * from (
select ft.categoryData as [category], CAST(fft.date AS VARCHAR(20)) as [month], tt.value as [value] from firstt ft
join secondt st on ft.id = st.id
join thirdt tt on ft.id = tt.type_id
join fourtht fft on ft.id = fft.category_id
where ft.date between '2015-07-01' and '2015-09-01' and ft.country = 'EUR'
group by fft.date, ft.categoryData, tt.value
) as t
PIVOT (
max(value)
for [date] in ([jul], [aug], [sept])
) as pvt
```
|
Try `Conditional Aggregate`.
```
select Category,
max(case when Month = 'August' then Value END) as August,
max(case when Month = 'September' then Value END) as September,
max(case when Month = 'October' then Value END) as October
from Yourtable
Group by Category
```
* [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/df53c/1)
Or use `Pivot`
```
select *
from Yourtable
pivot (max(Value) For Month in ([August],[September],[October]))pv
```
* [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/df53c/2)
When the values in the month column are unknown, use dynamic SQL:
```
declare @sql nvarchar(max)= '',
@col_list varchar(max)=''
set @col_list =(select distinct quotename(Month)+',' from yourtable for xml path (''))
set @col_list = left(@col_list,len(@col_list)-1)
set @sql = '
select *
from yourtable
pivot (max(Value) For Month in ('+@col_list+'))pv'
exec sp_executesql @sql
```
* [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/df53c/5)
|
Using PIVOT we can write the query below; for the syntax, follow the link
as mentioned:
> <https://www.simple-talk.com/sql/t-sql-programming/questions-about-pivoting-data-in-sql-server-you-were-too-shy-to-ask/>
```
SELECT *
FROM TABLE_NAME
PIVOT(MAX(VALUE) FOR MONTH IN (
[AUGUST]
,[SEPTEMBER]
,[OCTOBER]
)) AS
PIVOT_SALES
```
The output is:
[](https://i.stack.imgur.com/cfyYG.png)
|
SQL using PIVOT right
|
[
"",
"sql",
"sql-server",
"pivot",
""
] |
I am trying to do the following query in SQL Server Manager:
```
SELECT TOP 100 *
FROM [myDB].[dbo].[content]
where content_html like %<images><img src='/' alt=''></img></images>%'
```
I need to search for that exact string that exists in numerous content items and is breaking an XML feed I am creating.
```
<images><img src='/' alt=''></img></images>
```
The problem here are the single quotes while performing the query.
Is there any way to do this?
|
If you need to search for `'` you need to double it (`''` inside `'...'` is treated as single `'`):
```
CREATE TABLE #content(content_html VARCHAR(100));
INSERT INTO #content(content_html)
VALUES ('<images><img src=''/'' alt=''''></img></images>')
SELECT TOP 100 *
FROM #content
where content_html like '%<images>%<img src=''/'' alt=''''></img>%</images>%'
```
`LiveDemo`
Output:
```
<images><img src='/' alt=''></img></images>
```
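The doubling rule is easy to verify outside SQL Server; a small sketch using Python's `sqlite3` as a stand-in, since SQLite uses the same `''` escape inside string literals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Inside a '...' literal, each '' collapses to one embedded single quote.
row = conn.execute("SELECT '<img src=''/'' alt=''''></img>'").fetchone()
print(row[0])  # <img src='/' alt=''></img>
```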
|
You need to escape `single quotes` by doubling them. Try this:
```
declare @var varchar(4000) = '<images><img src=''/'' alt=''''></img></images>'
SELECT TOP 100 *
FROM [myDB].[dbo].[content]
where content_html like '%'+@var+'%'
```
Also, using `TOP` without `ORDER BY` does not guarantee the same `TOP 100` result every time.
|
SQL Select Statement for text within a field that includes double and or single quotes
|
[
"",
"sql",
"sql-server",
""
] |
I have 15 tables which are normalized to contain records, and many users may be inserting/updating data in these tables (no two users can update the same record(s) simultaneously; this is restricted by the client implementation). These 15 tables are updated one by one in a stored procedure. WITH(NOLOCK) is used on select statements to avoid read locks, as some users may be viewing the data at the same time. But sometimes there is locking in these select statements in these procedures. In my past experience I have added NOLOCK hints to avoid blocking.
I am not sure if a WITH(NOLOCK) is missing somewhere or if this is some other problem. **`Should I search for missing WITH(NOLOCK)?`**
***`or what other ways are there to avoid this locking?`***
> ***What are the restrictions with NOLOCK? I'm just reading the data and I don't care if I read committed or uncommitted data. What else
> steps can be taken to remove this blocking?***
|
**WITH(NOLOCK) :**
WITH (NOLOCK) is the equivalent of using READ UNCOMMITTED as a transaction isolation level.
Basically it allows dirty reads: if another transaction holds modified data and we try to read it, we get the output without waiting for that transaction to commit, so we get a dirty read with no wait.
So, you stand the risk of reading an uncommitted row that is subsequently rolled back, i.e. data that never made it into the database. So, while it can prevent reads being deadlocked by other operations, it comes with a risk. In a banking application with high transaction rates, it's probably not going to be the right solution to whatever problem you're trying to solve with it.
<https://social.msdn.microsoft.com/Forums/sqlserver/en-US/a3b1d05b-5479-485b-94c5-4f271adf5802/where-to-and-where-not-to-use-with-nolock?forum=transactsql>
|
I think what you want to do, based on the comments that you left on your original post, is set the isolation level to READ UNCOMMITTED. This is the same as NOLOCK except that it applies to the entire transaction and not to an individual select statement.
Code Example:
```
USE DatabaseName;
GO
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
GO
BEGIN TRANSACTION;
GO
...
COMMIT TRANSACTION;
GO
```
This should prevent any locks to the database for the reading. However this will also then apply to inserts, which could present a possible concurrency issue I would imagine. Data could change while being inserted.
I would reconsider this if you plan to do updates to the database under this isolation level.
Here is more detail of the ms sql isolation levels: <https://msdn.microsoft.com/en-za/library/ms173763.aspx>
|
WITH(NOLOCK) : where & when to use
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
when I use DataFrame groupby like this:
```
df.groupBy(df("age")).agg(Map("id"->"count"))
```
I will only get a DataFrame with columns "age" and "count(id)", but in df there are many other columns like "name".
In all, I want to get the result as in MySQL:
> "select name,age,count(id) from df group by age"
What should I do when using groupby in Spark?
|
Long story short in general you have to join aggregated results with the original table. Spark SQL follows the same pre-SQL:1999 convention as most of the major databases (PostgreSQL, Oracle, MS SQL Server) which doesn't allow additional columns in aggregation queries.
Since for aggregations like count the extra column values are not well defined, and behavior tends to vary between systems that support this type of query, you can just include additional columns using an arbitrary aggregate such as `first` or `last`.
In some cases you can replace `agg` using `select` with window functions and subsequent `where` but depending on the context it can be quite expensive.
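Since Spark SQL follows the same convention as the databases named above, the join-back pattern can be sketched in plain SQL; a minimal illustration with Python's `sqlite3` as a stand-in (the table and column names here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE df (name TEXT, age INTEGER, id INTEGER)")
conn.executemany("INSERT INTO df VALUES (?, ?, ?)",
                 [("ann", 30, 1), ("bob", 30, 2), ("cid", 40, 3)])
# Aggregate per age, then join the counts back to recover the other columns.
rows = conn.execute("""
    SELECT df.name, df.age, c.cnt
    FROM df
    JOIN (SELECT age, COUNT(id) AS cnt FROM df GROUP BY age) AS c
      ON df.age = c.age
    ORDER BY df.name
""").fetchall()
print(rows)  # [('ann', 30, 2), ('bob', 30, 2), ('cid', 40, 1)]
```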
|
One way to get all columns after doing a groupBy is to use join function.
```
feature_group = ['name', 'age']
data_counts = df.groupBy(feature_group).count().alias("counts")
data_joined = df.join(data_counts, feature_group)
```
data\_joined will now have all columns including the count values.
|
How to get other columns when using Spark DataFrame groupby?
|
[
"",
"sql",
"apache-spark",
"dataframe",
"apache-spark-sql",
""
] |
```
SELECT
Siren,
CASE WHEN Code_Juridique LIKE 'M%' AND Enseigne IS NOT NULL AND Enseigne <> '' --ok
THEN 'Enseigne : ' + Enseigne
WHEN (Sigle IS NULL OR Sigle ='')
AND (Enseigne IS NULL OR Enseigne ='')
THEN '' -- ok
WHEN
(Sigle IS NOT NULL OR Sigle <> '' ) THEN 'Sigle : ' + Sigle
ELSE 'Sigle / Enseigne : ' + Sigle + ' / ' + Enseigne
END as SigleEnseigne1,
Sigle,
Enseigne,
Code_Juridique
FROM #JohnJack
```
The code is straightforward.
The issue lies with the third `when`, as you can see below:
[](https://i.stack.imgur.com/YswuU.png)
I should have nothing on my 4th and 5th lines, yet it is giving me `Sigle :`.
What I want is for the column `SigleEnseigne1` on the 4th and 5th lines to be empty.
Thanks for your insights
|
Try this:
```
SELECT
Siren,
CASE WHEN ( Code_Juridique LIKE 'M%' ) AND ( IsNull( Enseigne, '' ) <> '' )
THEN 'Enseigne : ' + Enseigne
WHEN ( IsNull( RTrim(LTrim(Sigle)), '') = '') AND ( IsNull( Enseigne, '' ) = '')
THEN '' -- ok
WHEN ( IsNull( RTrim(LTrim(Sigle)), '' ) <> '' )
THEN 'Sigle : ' + RTrim(LTrim(Sigle))
ELSE
'Sigle / Enseigne : ' + IsNull( RTrim(LTrim(Sigle)), '' ) + ' / ' + Enseigne
END as SigleEnseigne1,
Sigle,
Enseigne,
Code_Juridique
FROM #JohnJack
```
|
Besides stating the obvious, that `(TRUE OR FALSE) = TRUE`,
I would simplify and bullet-proof the code by using `ISNULL()` and `LEN()`.
```
SELECT
Siren,
CASE WHEN Code_Juridique LIKE 'M%' AND LEN(ISNULL(Enseigne,'')) > 0 --ok
THEN 'Enseigne : ' + Enseigne
WHEN (LEN(ISNULL(Sigle, '')) = 0)
AND (LEN(ISNULL(Enseigne, '')) = 0)
THEN '' -- ok
WHEN
LEN(ISNULL(Sigle, '')) > 0 THEN 'Sigle : ' + Sigle
ELSE 'Sigle / Enseigne : ' + ISNULL(Sigle, '') + ' / ' + ISNULL(Enseigne, '')
END as SigleEnseigne1,
Sigle,
Enseigne,
Code_Juridique
FROM #JohnJack
```
How would your code react if those fields contained whitespace? `LEN` automatically ignores trailing spaces.
|
issue with CASE combined with an OR
|
[
"",
"sql",
"sql-server",
"t-sql",
"if-statement",
"case",
""
] |
I have a text of elements separated by `,` as follows:
```
'{4,56,7,3,2}'
```
The number of elements is **unknown**.
How do I get the last element?
In the above example: `2`
|
Try this:
```
SELECT regexp_replace('4,56,7,3,2', '^.*,', '')
```
**[FIDDLE DEMO](http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/5160)**
**EDIT:**
Try
```
SELECT replace(replace(regexp_replace('4,56,7,3,2', '^.*,', ''),'{',''),'}','')
```
**[DEMO](http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/5164)**
|
If the value is indeed stored with the curly braces you can simply cast it into an array and pick the last array element:
```
select (elements::text[])[array_length(elements::text[],1)]
from the_table;
```
As you didn't include the table definition, I assumed the column is named `elements`.
If the column does **not** contain the curly braces, but just a comma separated list of values, you can still use this approach:
```
select (string_to_array(elements,','))[array_length(string_to_array(elements,','), 1)]
```
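Outside the database, the same "take whatever follows the last comma" idea is a one-liner; a Python sketch:

```python
def last_element(text):
    # Strip the optional curly braces, then take the item after the last comma.
    return text.strip("{}").split(",")[-1]

print(last_element("{4,56,7,3,2}"))  # 2
```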
|
How to get last element in postgresql Text?
|
[
"",
"sql",
"postgresql",
""
] |
So I'm selecting a name and a number of sports that name is related to, and I need to only select them when that number of sports is greater than 1.
```
SELECT DISTINCT name AS FullName,
(SELECT COUNT(id) FROM coaches WHERE coaches.name=FullName) AS NrOfSports
FROM coaches WHERE NrOfSports>1
```
If `WHERE` is removed the query works just fine and displays all rows of which some have only "1" as NrOfSports. When I add it to the `WHERE` clause I get an error because it's not recognized. This baffles me since if I were to use it in another `SELECT` column it would work fine.
Is there a way to do this? It can't be software dependant.
|
Use `Group By` and `Having` instead:
```
SELECT name AS FullName,
COUNT(id) AS NrOfSports
FROM coaches
GROUP BY name
HAVING COUNT(id) > 1
```
Your correlated query can work, you just need to move it to a subquery and then you can add the `where` criteria. However, I believe the `group by` would be faster.
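The `GROUP BY` / `HAVING` rewrite behaves the same in any engine; a quick sketch with Python's `sqlite3` and invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coaches (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO coaches VALUES (?, ?)",
                 [(1, "Ann"), (2, "Ann"), (3, "Bob")])
# Only names related to more than one sport survive the HAVING filter.
rows = conn.execute("""
    SELECT name AS FullName, COUNT(id) AS NrOfSports
    FROM coaches
    GROUP BY name
    HAVING COUNT(id) > 1
""").fetchall()
print(rows)  # [('Ann', 2)]
```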
|
## Error explanation
You're getting an error because the `WHERE` clause cannot access columns by their alias at the same level. There are two solutions, which **I don't recommend**, but for the sake of your question I'm attaching them:
**(1)** Move your `WHERE NrOfSports > 1` to another level
```
SELECT *
FROM (
SELECT DISTINCT
name AS FullName,
( SELECT COUNT(id) FROM coaches c2 WHERE c1.name = c2.name ) AS NrOfSports
FROM coaches c1
) foo
WHERE NrOfSports > 1
```
**(2)** Use your condition in the same level.
```
SELECT DISTINCT
name AS FullName,
( SELECT COUNT(id) FROM coaches c2 WHERE c1.name = c2.name ) AS NrOfSports
FROM
coaches c1
WHERE
( SELECT COUNT(id) FROM coaches c2 WHERE c1.name = c2.name ) > 1
```
That would fix the error, **but it still wouldn't be efficient (!)**
## Better approach
Instead, you could combine a power of `GROUP BY` with it's own WHERE clause called `HAVING` to limit your output only to those `name`s that have more than 1 `id`.
```
SELECT DISTINCT
name AS FullName,
count(id) AS NrOfSports
FROM
coaches
GROUP BY
name
HAVING
count(id) > 1
```
|
Use a value from SELECT in WHERE
|
[
"",
"mysql",
"sql",
""
] |
Suppose I have a table named `process_states`:
```
+--------+--------+
|Process | State |
-------------------
| A | 0 |
| A | 0 |
| B | 0 |
| B | -1 |
| C | -99 |
-------------------
Note: State can have many more negative and positive state values
```
> I want to find all processes having all rows with state 0. In the above case I want to get A.
I am trying to do it using `group by`, is there a way to do something like this:
```
select process from process_states
group by process
having <all state for that process is 0>
```
Is it possible to do it using `group by`?
|
You could use a `group by` clause and filter the processes with a `having` clause:
```
SELECT process
FROM process_states
GROUP BY process
HAVING COUNT(CASE state WHEN -1 THEN 1 END) = 0
```
EDIT:
Given the clarifying comments on other answers, if the requirement is to find only processes that only have `0` states, you could count the total number of rows and the number of rows with a `0` state and compare them:
```
SELECT process
FROM process_states
GROUP BY process
HAVING COUNT(CASE state WHEN 0 THEN 1 END) = COUNT(*)
```
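The count-comparison trick can be checked against the sample data with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process_states (process TEXT, state INTEGER)")
conn.executemany("INSERT INTO process_states VALUES (?, ?)",
                 [("A", 0), ("A", 0), ("B", 0), ("B", -1), ("C", -99)])
# A process qualifies only when its zero-state row count equals its total row count.
rows = conn.execute("""
    SELECT process
    FROM process_states
    GROUP BY process
    HAVING COUNT(CASE state WHEN 0 THEN 1 END) = COUNT(*)
""").fetchall()
print(rows)  # [('A',)]
```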
|
You can `group by` the process column and check that the max and min of the state column are both `0`.
```
select process
from process_states
group by process
having max(state) = min(state) and min(state) = 0
```
|
Group by query based on other column value
|
[
"",
"sql",
"oracle",
"select",
"group-by",
""
] |
I have a table:
```
ID | Name | Date
1 | ABC | 2015-01-01
2 | XYZ | 2015-01-02
3 | ABC | 2015-01-03
4 | ABC | 2015-01-04
```
I want to query this table such a way that the result becomes:
```
ID | Name | Date | NextDate
1 | ABC | 2015-01-01 | 2015-01-03
2 | XYZ | 2015-01-02 | null
3 | ABC | 2015-01-03 | 2015-01-04
4 | ABC | 2015-01-04 | null
```
One Solution is:
```
select t1.*,
       (select min(t2.Date) from TAB t2 where t2.ID > t1.ID and t2.Name = t1.Name) NextDate
from TAB t1
```
But this is very slow as we are doing aggregation on each row. Is there any alternative solution to this which is more efficient than above ?
|
Since `SQL SERVER 2008` does not support `LEAD` window function we have to simulate it using `row_number` and `self join`. Try this
```
;WITH cte
AS (SELECT t1.*,
Row_number()
OVER(
partition BY Name
ORDER BY [Date]) AS rn
FROM TAB t1)
SELECT a.ID,a.Name,a.Date,b.Date as nextdate
FROM cte a
LEFT OUTER JOIN cte b
ON a.Name = b.Name
AND a.rn + 1 = b.rn ORDER BY a.ID
```
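The pairing the `rn + 1` self-join produces, each row matched with the next row of the same name, can be sketched in plain Python to see the intended output:

```python
from collections import defaultdict

rows = [(1, "ABC", "2015-01-01"), (2, "XYZ", "2015-01-02"),
        (3, "ABC", "2015-01-03"), (4, "ABC", "2015-01-04")]

# Collect each name's dates in order (the role of ROW_NUMBER ... PARTITION BY Name).
by_name = defaultdict(list)
for _id, name, date in rows:
    by_name[name].append(date)

# For each row, the "next" date is the following entry in its name's list, if any.
result = []
for _id, name, date in rows:
    dates = by_name[name]
    i = dates.index(date)
    result.append((_id, name, date, dates[i + 1] if i + 1 < len(dates) else None))

print(result)
# [(1, 'ABC', '2015-01-01', '2015-01-03'), (2, 'XYZ', '2015-01-02', None),
#  (3, 'ABC', '2015-01-03', '2015-01-04'), (4, 'ABC', '2015-01-04', None)]
```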
|
Another way to write it would be.
```
Select a.*, c.nextDate from table a
outer apply (
Select top 1 b.Date nextDate from table b
where b.Id > a.ID and a.Name = b.Name order by b.id
) c
```
|
How to find the next value within a table based on certain criteria
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm trying to check if a username was already chosen. Before showing the query, this is my table structure:
**USER**
```
ID | data
5 0
6 1
```
**USERS\_SETTINGS**
```
ID | Username
5 Dillinger
6 John
```
As you can see, in the `user` table I have a list of all users available in the system, and in the `users_settings` table I have the settings of each user from the `user` table. Now if a customer who is registering in the system chooses the username `Dillinger`, an error message appears because the username has already been chosen; instead, if the customer chooses `John`, the system doesn't show any error because the user who registered with that username was deleted (**`data: 1`**).
What I tried:
```
query = "SELECT Count(username) from users_settings u
INNER JOIN user id d ON
u.id = d.id
WHERE u.username = @usernamep AND
d.data = @datap"
```
But when I do `.ExecuteScalar` I get:
> You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'd ON u.id = d.id' at line 2
What is wrong exactly?
|
You seem to assign two different aliases (or have a typo) for one table, which is not valid; it should be changed to:
```
query = "SELECT Count(username)
FROM users_settings u INNER JOIN user d
ON u.id = d.id
WHERE u.username = @usernamep
AND d.data = @datap"
```
|
This fixes the problem with the syntax:
```
select count(*)
from user u join
user_settings us
on u.id = us.id -- I am guessing about the right column name here
where u.username = @usernamep and us.data = @datap;
```
However, this is more expensive than it might need to be. A cheaper method uses `exists` rather than `count()`:
```
select 1
from user u
where u.username = @usernamep and
exists (select 1
from user_settings us
where u.id = us.id and us.data = @datap
)
limit 1;
```
A user already exists if this returns any rows. For best performance, you want an index on `user(username)` and `user_settings(user_id, data)`.
|
Error in MySql syntax on INNER JOIN use
|
[
"",
"mysql",
"sql",
"vb.net",
""
] |
Well, I am new to SQL and I have just read that it's good practice for stored procedures to always return a value indicating whether the query inside the procedure executed successfully.
So I have a simple stored procedure with a select statement in it, and I want it to return 1 if it executes successfully and -1 if it does not.
Can you please tell me how to write that condition in SQL?
If it matters my database is MS SQL Server.
Thank you.
```
CREATE PROCEDURE myTest
AS
BEGIN
SELECT FirstName,LastName FROM Employees
  RETURN 1 (If execution is successful)
  RETURN -1 (If execution is not successful)
END;
```
|
Use an output parameter to return the success status along with Try..Catch blocks
```
CREATE PROCEDURE myTest
@Result INT OUTPUT
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRY
SELECT FirstName,LastName FROM Employees
SET @Result = 1;
END TRY
BEGIN CATCH
SET @Result = 0;
END CATCH
END;
```
|
`@@ROWCOUNT` is helpful here; it returns the number of affected rows.
```
return @@ROWCOUNT
```
You just need to do this: if the result is > 0, success; if not, failure.
|
How to check whether a SELECT query is Successful?
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have 4 tables, as below.
Table: **Class**
```
ClassID | ClassSTD
--------------------------------
1       | STD-1
2 | STD-2
```
Table: **Section**
```
SectionId | SectionName | ClassId
--------------------------------------------
1 | sec-A | 1
2 | sec-B | 1
3 | sec-C | 1
4 | sec-A | 2
5 | sec-B | 2
6 | sec-C | 2
```
Table: **Subject**
```
subjectId | subjectName
------------------------------------
1 | Art
2 | Music
3 | Play
```
Table **SubjectAllocationToClass**
```
classId | sectionID | subjectId | type
-----------------------------------------------------------------------
1(STD-1) | 1(sec-A) | 1(Art) | main
1(STD-1) | 2(sec-B) | 1(Art) | main
1(STD-1) | 3(sec-C) | 1(Art) | optional
1(STD-1) | 1(sec-A) | 2(Music) | main
1(STD-1) | 2(sec-B) | 2(Music) | optional
```
Above table "SubjectAllocationToClass" shows distribution of two type of subject (Main and optional) to section for class.
How I can achieve below result from SELECT statement?
```
classSTD | sectionName | Main subjectName | Optional subjectName
-----------------------------------------------------------------------------
STD-1 | sec-A | Art, Music |
STD-1 | sec-B | Art | Music
STD-1 | sec-C | | Art
```
|
Use the **GROUP_CONCAT()** function.
Try this:
```
SELECT D.classSTD,
C.sectionName,
GROUP_CONCAT(B.subjectName SEPARATOR ', ') AS subjectName
FROM SubjectAllocationToClass A
INNER JOIN Subject B ON A.subjectId = B.SubjectId
INNER JOIN Section C ON A.sectionId = C.SectionId
INNER JOIN Class D ON A.classID = D.ClassID
GROUP BY D.ClassID, C.SectionId;
```
**For your other question:**
```
SELECT D.classSTD,
C.sectionName,
GROUP_CONCAT(B.subjectName SEPARATOR ', ') AS subjectName,
GROUP_CONCAT(CASE WHEN B.type = 'main' THEN B.subjectName ELSE NULL END SEPARATOR ', ') AS mainsubjectName,
GROUP_CONCAT(CASE WHEN B.type = 'optional' THEN B.subjectName ELSE NULL END SEPARATOR ', ') AS optionalSubjectName
FROM SubjectAllocationToClass A
INNER JOIN SUBJECT B ON A.subjectId = B.SubjectId
INNER JOIN Section C ON A.sectionId = C.SectionId
INNER JOIN Class D ON A.classID = D.ClassID
GROUP BY D.ClassID, C.SectionId;
```
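SQLite also ships a `GROUP_CONCAT` aggregate, so the CASE-inside-aggregate trick can be sketched with Python's `sqlite3` (note SQLite passes the separator as a second argument, with no `SEPARATOR` keyword, and the order of items within each concatenated list is not guaranteed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alloc (section TEXT, subject TEXT, type TEXT)")
conn.executemany("INSERT INTO alloc VALUES (?, ?, ?)",
                 [("sec-A", "Art", "main"), ("sec-A", "Music", "main"),
                  ("sec-B", "Art", "main"), ("sec-B", "Music", "optional")])
# NULLs produced by the CASE are skipped by GROUP_CONCAT, splitting main/optional.
rows = conn.execute("""
    SELECT section,
           GROUP_CONCAT(CASE WHEN type = 'main' THEN subject END, ', ') AS main_subjects,
           GROUP_CONCAT(CASE WHEN type = 'optional' THEN subject END, ', ') AS optional_subjects
    FROM alloc
    GROUP BY section
    ORDER BY section
""").fetchall()
print(rows)
```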
|
you can do this by `group_concat` like this:
```
select classSTD , SectionName , group_concat(`subjectName` separator ',') as `subjectName`
FROM SubjectAllocationToClass
INNER JOIN Class ON SubjectAllocationToClass.classId = Class.ClassID
INNER JOIN Section ON SubjectAllocationToClass.sectionID = Section.sectionId
INNER JOIN Subject ON SubjectAllocationToClass.subjectID = Subject.subjectId
GROUP BY classSTD, SectionName;
```
|
MYSQL comma base separated select result
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"group-concat",
""
] |
I want to get the last 8 weeks, starting from today (`GETDATE()`).
So the format must be dd/mm for all 8 weeks.
I tried something like this
```
select "start_of_week" = cast(datepart(dd,dateadd(week, datediff(week, 0, getdate()), 0)) as CHAR(2))+'/'+cast(datepart(mm,dateadd(week, datediff(week, 0, getdate()), 0)) as CHAR(2));
```
and this is good only for the current week, but how do I put this in a query and return curr-1, curr-2, ... curr-7 weeks as well? The final result must be a table with some amounts for one player and each week in the format dd/mm.
|
Here you go:
```
DECLARE @DateTable TABLE ( ADate DATETIME )
DECLARE @CurrentDate DATETIME
SET @CurrentDate = GETDATE()
WHILE (SELECT COUNT(*) FROM @DateTable WHERE DATEPART( dw, ADate ) = 2) <= 7
BEGIN
INSERT INTO @DateTable
SELECT @CurrentDate
SET @CurrentDate = DATEADD( dd, -1, @CurrentDate )
END
SELECT "start_of_week" = cast(datepart(dd,dateadd(week, datediff(week, 0, ADate), 0)) as CHAR(2))
+'/'+cast(datepart(mm,dateadd(week, datediff(week, 0, ADate), 0)) as CHAR(2))
FROM @DateTable
WHERE DATEPART( dw, ADate ) = 2
DELETE @DateTable
```
**OUTPUT**
```
start_of_week
28/12
21/12
14/12
7 /12
30/11
23/11
16/11
9 /11
```
|
Maybe as easy as this?
```
WITH EightNumbers(Nmbr) AS
(
SELECT 0
UNION SELECT -1
UNION SELECT -2
UNION SELECT -3
UNION SELECT -4
UNION SELECT -5
UNION SELECT -6
UNION SELECT -7
UNION SELECT -8
)
SELECT CONVERT(VARCHAR(5),GETDATE()+(Nmbr*7),103)
FROM EightNumbers
ORDER BY Nmbr DESC
```
If you need (as the title suggests) the "first day" of the week, you might change the select to:
```
SELECT CONVERT(VARCHAR(5),GETDATE()+(Nmbr*7)-DATEPART(dw,GETDATE())+@@DATEFIRST,103)
FROM EightNumbers
ORDER BY Nmbr DESC
```
Be aware that the "first day of week" depends on your system's culture. Have a look on `@@DATEFIRST`!
The result:
```
28/12
21/12
14/12
07/12
30/11
23/11
16/11
09/11
02/11
```
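The same walk-back-by-weeks idea outside SQL; a Python sketch that emits dd/mm strings for the current and previous seven week starts, assuming Monday as the first day of the week (which `@@DATEFIRST` controls in T-SQL):

```python
from datetime import date, timedelta

def last_week_starts(today, n=8):
    # Monday of the current week, then step back seven days at a time.
    monday = today - timedelta(days=today.weekday())
    return [(monday - timedelta(weeks=i)).strftime("%d/%m") for i in range(n)]

print(last_week_starts(date(2015, 12, 28)))
# ['28/12', '21/12', '14/12', '07/12', '30/11', '23/11', '16/11', '09/11']
```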
|
Get first day and month for last x calendar weeks
|
[
"",
"sql",
"sql-server",
""
] |
I want to write such a Query :
```
SELECT * FROM db WHERE ID = id AND Text != text OR Text = text
```
I mean `ID = id` clause and `Text != text` clause should be combined with and, and they should be combined with **`OR`** with the last clause. How can I do that? Thanks.
|
```
SELECT * FROM db WHERE (ID = id AND Text != text) OR Text = text
```
You should always use parentheses when combining `and` and `or`
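A quick way to see why: `AND` binds tighter than `OR`, so moving the parentheses changes which rows qualify. A sketch with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INTEGER, Text TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'x')")  # ID fails the AND branch; Text matches the OR branch

a = conn.execute(
    "SELECT COUNT(*) FROM t WHERE (ID = 2 AND Text != 'x') OR Text = 'x'").fetchone()[0]
b = conn.execute(
    "SELECT COUNT(*) FROM t WHERE ID = 2 AND (Text != 'x' OR Text = 'x')").fetchone()[0]
print(a, b)  # 1 0
```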
|
```
SELECT * FROM db WHERE (ID = id AND Text != text) OR Text = text
```
Use brackets to separate two clauses
|
AND OR Clauses in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have some data that is hard coded in my select query.
The SQL is as follows:
```
SELECT
'ZZ0027674',
'ZZ0027704',
'ZZ0027707',
'ZZ0027709',
'ZZ0027729',
'ZZ0027742',
'ZZ0027750'
```
Unfortunately it does not display the data the way I want. It just returns 7 columns, each containing one value. I just want 1 column with the different values.
Please provide me different solutions to display the data.
|
You can use [`VALUES`](https://msdn.microsoft.com/en-us/library/dd776382.aspx), aka *Table Value Constructor*, clause for hardcoded values:
```
SELECT *
FROM (VALUES('ZZ0027674'),('ZZ0027704'),('ZZ0027707'),
('ZZ0027709'),('ZZ0027729'),('ZZ0027742'),
('ZZ0027750')
) AS sub(c)
```
`LiveDemo`
*Warning:* This is limited to 1000 rows and applies to `SQL Server 2008+`. For lower versions you could use `UNION ALL` instead.
**EDIT:**
> Extra points if someone can show me unpivot ?
```
SELECT col
FROM (SELECT 'ZZ0027674','ZZ0027704','ZZ0027707',
'ZZ0027709','ZZ0027729','ZZ0027742','ZZ0027750'
) AS sub(v1,v2,v3,v4,v5,v6,v7)
UNPIVOT
(
col for c in (v1,v2,v3,v4,v5,v6,v7)
) AS unpv;
```
`LiveDemo2`
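The `VALUES` table constructor also exists in SQLite (where the generated column is named `column1`), so the one-column shape is easy to try with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A derived table built entirely from hardcoded rows, one value per row.
rows = conn.execute(
    "SELECT * FROM (VALUES ('ZZ0027674'), ('ZZ0027704'), ('ZZ0027707'))").fetchall()
print(rows)  # [('ZZ0027674',), ('ZZ0027704',), ('ZZ0027707',)]
```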
|
Use union:
```
SELECT
'ZZ0027674' union all
SELECT 'ZZ0027704' union all
SELECT 'ZZ0027707' union all
SELECT 'ZZ0027709' union all
SELECT 'ZZ0027729' union all
SELECT 'ZZ0027742' union all
SELECT 'ZZ0027750'
```
|
Different ways of returning hard coded values via SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
This is my first question here so I apologize if I do something wrong. I am also an SQL amateur so sorry if i don't make any sense. I am working with SQL Server 2014.
I have dates in a date range (11/01/15 to 11/30/15) and I have 3 location names: NY, LA, SF. Can I get each of those names to show up in a separate row for each date in the range?
Something like this:
Something like this:

```
11/01/15 | NY
11/01/15 | LA
11/01/15 | SF
11/02/15 | NY
11/02/15 | LA
11/02/15 | SF
11/03/15 | NY
11/03/15 | LA
11/03/15 | SF
...
11/30/15 | NY
11/30/15 | LA
11/30/15 | SF
```
|
Building on your CTE:
```
WITH mycte
AS
(
SELECT CAST('20151101' AS DATETIME
) DateValue
UNION ALL
SELECT DateValue + 1
FROM mycte
WHERE DateValue + 1 < '20151201'
),
myNames
(myName
)
AS
(
SELECT 'NY'
UNION
SELECT 'LA'
UNION
SELECT 'SF'
)
SELECT *
FROM mycte m
CROSS JOIN myNames mn;
```
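The same shape works in SQLite, whose recursive CTEs let us sketch it with Python's `sqlite3` (three dates kept short for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE dates(d) AS (
        SELECT '2015-11-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM dates WHERE d < '2015-11-03'
    ),
    names(n) AS (VALUES ('NY'), ('LA'), ('SF'))
    SELECT d, n FROM dates CROSS JOIN names
""").fetchall()
print(len(rows), rows[:3])
# 9 [('2015-11-01', 'NY'), ('2015-11-01', 'LA'), ('2015-11-01', 'SF')]
```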
|
Assuming dates are in a table called myDates and names are in a table called myNames with fieldnames myDate and myName respectively:
```
select d.myDate, n.myName
from myDates d
cross join myNames n
where d.MyDate >= '20151101' and d.myDate < '20151201'
```
|
How can I get 3 names to repeat for every date in a date range?
|
[
"",
"sql",
"sql-server",
""
] |
Does the order of JOIN conditions affect performance? I have two tables A and B. I'm trying to join them like this:
```
SELECT * FROM A
INNER JOIN B on B.ID_A = A.ID
```
In this case firebird use `NATURAL` plan instead using foreign key.
```
SELECT * FROM A
INNER JOIN B on A.ID = B.ID_A
```
works good.
Is that normal?
|
I guess you use a Firebird version older than 2.5.4 (probably 2.5.3), which had the bug <http://tracker.firebirdsql.org/browse/CORE-4530>, fixed in 2.5.4. Please upgrade to Firebird 2.5.5 and check whether your problem disappears.
|
I'm not familiar with Firebird, but usually the order doesn't matter for an inner join.
In recent versions of Oracle, SQL Server, MySQL, or PostgreSQL it won't impact performance. You can check the explain plan to verify that the database chooses the right way to join.
|
SQL join conditions order performance
|
[
"",
"sql",
"performance",
"join",
"firebird",
""
] |
I am trying to do an update in SQL Server. I have a temporary table with 50 users. The table contains a `personalid` and a date. I want to update the main tables date, with the date from the temporary table based on the `personalid` of the temp table. The main table has 3000+ users.
The logic is to update the users date (which is different for each user) in the main table using the date from the temp table when the `personalid` exists in the temp table and the main table.
The query always returns a subquery error for too many records and it tries to update all of the records in the main table.
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
I have tried using cursors, where and for loops and FETCH etc:
```
BEGIN TRAN
UPDATE MainTable
SET MainTable_Date = (SELECT t.Date FROM TempTable t
INNER JOIN MainTable m
ON t.PersonalId = m.personalid
WHERE t.PersonalId = m.personalid)
FROM
MainTable AS m
INNER JOIN
TempTable t ON m.personal_id = t.PersonalId
WHERE
m.personal_id IN (SELECT personalId FROM TempTable)
ROLLBACK TRAN
```
|
This query will satisfy your need, I think:
```
CREATE TABLE MainTable
(
Personal_id int,
[MainTable_Date] date
)
CREATE TABLE #TempTable
(
Personalid int,
[Date] date
)
INSERT INTO MainTable
VALUES(1,'20120202'),(2,'20130202'),(3,'20150202'),(4,'20150206')
INSERT INTO #TempTable
VALUES(1,'20120203'),(2,'20130311'),(3,'20100202')
SELECT * FROM MainTable
SELECT * FROM #TempTable
UPDATE MainTable
SET MainTable_Date=(SELECT [DATE] FROM #TempTable AS T WHERE T.Personalid=M.Personal_id)
FROM MainTable AS M
WHERE M.Personal_id IN (SELECT Personalid FROM #TempTable)
SELECT * FROM MainTable
DROP TABLE MainTable
DROP TABLE #TempTable
```
|
If this fails:
```
UPDATE m
SET MainTable_Date = t.[Date]
FROM MainTable AS m
INNER JOIN TempTable AS t ON m.personal_id = t.PersonalId
```
it means that there is not a one-to-one relationship between the tables.
For checking run:
```
SELECT
m.personal_id,
t.PersonalId
FROM MainTable AS m
LEFT JOIN TempTable AS t ON m.personal_id = t.PersonalId
ORDER BY M.personal_id, t.PersonalId
```
|
Update a column from multiple columns in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-update",
""
] |
I have a stored procedure defined as below. When I execute it, I get an error and couldn't come up with a solution.
```
ALTER PROCEDURE [dbo].[GetShippmentList]
@shipmentNumber VARCHAR(20),
@requestType VARCHAR(1),
@shipmentNames VARCHAR(100),
@assigneeDateFrom DATETIME,
@assignedDateTo DATETIME,
@completedDateFrom DATETIME,
@completedDateto DATETIME,
@status VARCHAR(20),
@userId VARCHAR(20),
@pageNo int,
@pageSize int,
@sortField VARCHAR(20),
@sortOrder VARCHAR(4)
AS
BEGIN
SET NOCOUNT ON
IF OBJECT_ID('tempdb..##ApTemp') IS NULL
BEGIN
CREATE TABLE ##ApTemp(INST_ID VARCHAR(32),
STATUS NVARCHAR(16),
NAME NVARCHAR(64),
ASSIGNED_DATE DATETIME NULL,
COMPLETED_DATE DATETIME NULL,
USER_ID NVARCHAR(64) )
INSERT INTO ##ApTemp(INST_ID, STATUS, NAME, ASSIGNED_DATE, COMPLETED_DATE, USER_ID)
SELECT
w.INST_ID, w.STATUS, w.NAME, w.ASSIGNED_DATE,
w.COMPLETED_DATE, w.USER_ID
FROM
WestShipment.Shipmentt_Prod.dbo.WShipments w
WHERE
w.NAME IN ('T1', 'T2', 'T5', 'T51', 'T3', 'T31')
AND w.STATUS NOT IN ('Removed', 'Cancelled')
AND w.APP IN ('East', 'West')
END
    ;WITH workflows
    AS (
SELECT w.INST_ID, w.NAME, w.ASSIGNED_DATE AS AssignedDate, w.COMPLETED_DATE AS CompletedDate,w.STATUS AS WfStatus, w.USER_ID AS User_Id,
STUFF(
(SELECT '','' + t.USER_ID
FROM ##ApTemp t
WHERE t.INST_ID = w.Inst_ID AND t.NAME = w.Name
FOR XML PATH ('')), 1, 1, '') AS Assignees
FROM ##ApTemp AS w
WHERE (w.NAME IN( @shipmentNames) OR LEN(@shipmentNames) > 4)
AND ( w.STATUS IN (@status) OR LEN(@status) > 4)
AND ((w.ASSIGNED_DATE BETWEEN @assigneeDateFrom AND @assignedDateTo) OR LEN(@assigneeDateFrom) >4 OR LEN(@assignedDateTo) > 4)
AND ((w.COMPLETED_DATE BETWEEN @completedDateFrom AND @completedDateto) OR LEN(@completedDateFrom) >4 OR LEN(@completedDateto) >4)
AND (w.User_ID LIKE @userId OR LEN(@userId) > 4 )
        GROUP BY w.INST_ID, w.NAME , w.ASSIGNED_DATE, w.COMPLETED_DATE,w.STATUS, User_Id
)
INSERT INTO #tempTable(INST_ID, NAME, AssignedDate, CompletedDate, WfStatus, Assignees, ShipmentNumber, ShipmentName, RequestType)
SELECT w.INST_ID, w.NAME, w.AssignedDate, w.CompletedDate, w.WfStatus, w.Assignees, m.DocumentNumber, m.ShipmentName, m.RequestType
FROM dbo.DncMain m INNER JOIN workflows w ON m.InstanceId = w.INST_ID
WHERE (m.ShipmentNumber = @shipmentNumber OR LEN(@shipmentNumber) > 4)
AND(m.RequestType = @requestType OR @requestType NOT LIKE '0')
IF @sortOrder = 'DESC'
begin
SELECT INST_ID, NAME, AssignedDate, CompletedDate, WfStatus, Assignees, ShipmentNumber, ShipmentName FROM #tempTable
ORDER BY CASE @sortField
WHEN 'ShipmentNumber' THEN ShipmentNumber
WHEN 'TaskName' THEN NAME
WHEN 'ShipmentName' THEN ShipmentName
WHEN 'AssignedDate' THEN AssignedDate
WHEN 'CompletedDate' THEN CompletedDate
WHEN 'Assignees' THEN Assignees
WHEN 'Status' THEN WfStatus
END DESC OFFSET (@pageNo) ROWS FETCH NEXT(@pageSize) ROW ONLY
end
ELSE
begin
SELECT TOP(@pageNo) INST_ID, NAME, AssignedDate, CompletedDate, WfStatus, Assignees, ShipmentNumber, ShipmentName FROM #tempTable
ORDER BY CASE @sortField
WHEN 'TaskName' THEN NAME
WHEN 'ShipmentName' THEN ShipmentName
WHEN 'ShipmentNumber' THEN ShipmentNumber
WHEN 'AssignedDate' THEN AssignedDate
WHEN 'CompletedDate' THEN CompletedDate
WHEN 'Assignees' THEN Assignees
WHEN 'Status' THEN WfStatus
END ASC OFFSET (@pageNo) ROWS FETCH NEXT(@pageSize) ROW ONLY
end
SELECT COUNT(1) AS TotalRows FROM #tempTable
END
GO
```
The error I got is :
> Msg 8115, Level 16, State 2, Procedure GetShipmentList, Line 60
> Arithmetic overflow error converting expression to data type datetime.
The problem appears to be in the `ORDER BY` clause. If I remove the `CASE` expression from the `ORDER BY` clause, it works fine.
|
This is an `order by`:
```
ORDER BY CASE @sortField
WHEN 'TaskName' THEN NAME
WHEN 'ShipmentName' THEN ShipmentName
WHEN 'ShipmentNumber' THEN ShipmentNumber
WHEN 'AssignedDate' THEN AssignedDate
WHEN 'CompletedDate' THEN CompletedDate
WHEN 'Assignees' THEN Assignees
WHEN 'Status' THEN WfStatus
END ASC
```
This is a single expression in SQL. It returns one specific type -- determined by data type precedence across *all* branches, regardless of which `WHEN` clause is being executed. Because this `CASE` mixes string columns with `datetime` columns, the strings are converted to `datetime`, which causes the error.
So, another way of writing the logic is to split this to different `case` statements:
```
ORDER BY (CASE @sortField WHEN 'TaskName' THEN NAME END) ASC,
(CASE @sortField WHEN 'ShipmentName' THEN ShipmentName END) ASC,
(CASE @sortField WHEN 'ShipmentNumber' THEN ShipmentNumber END) ASC,
(CASE @sortField WHEN 'AssignedDate' THEN AssignedDate END) ASC,
(CASE @sortField WHEN 'CompletedDate' THEN CompletedDate END) ASC,
(CASE @sortField WHEN 'Assignees' THEN Assignees END) ASC,
(CASE @sortField WHEN 'Status' THEN WfStatus END) ASC
```
This puts each column in a separate sort key, so the types are never mixed and you cannot get conversion conflicts.
|
Another approach is to use dynamic SQL to generate different SQL for the different choices, allowing the optimizer to choose the best plan for each.
```
DECLARE @ComputedSortField SYSNAME = CASE @sortField
WHEN 'ShipmentNumber' THEN N'ShipmentNumber'
WHEN 'TaskName' THEN N'NAME'
WHEN 'ShipmentName' THEN N'ShipmentName'
WHEN 'AssignedDate' THEN N'AssignedDate'
WHEN 'CompletedDate' THEN N'CompletedDate'
WHEN 'Assignees' THEN N'Assignees'
WHEN 'Status' THEN N'WfStatus'
END
DECLARE @TopClause NVARCHAR(13) = CASE @SortOrder
WHEN 'DESC' THEN N''
ELSE N'TOP(@pageNo)'
END
DECLARE @ComputedSortOrder NVARCHAR(4) = CASE @SortOrder
WHEN 'DESC' THEN N'DESC'
ELSE N'ASC'
END
DECLARE @Sql NVARCHAR(MAX) = N'SELECT ' + @TopClause + N' INST_ID, NAME, AssignedDate,
CompletedDate, WfStatus, Assignees, ShipmentNumber, ShipmentName
FROM #tempTable
ORDER BY ' + @ComputedSortField + N' ' + @ComputedSortOrder + N'
OFFSET (@pageNo) ROWS FETCH NEXT(@pageSize) ROW ONLY'
PRINT @Sql
EXECUTE sp_executesql @Statement = @Sql
, @Params = N'@pageNo int, @pageSize int'
, @PageNo = @PageNo
, @PageSize = @PageSize
```
Note that all the pieces being concatenated together are defined within the code. There is nothing from a parameter that is being concatenated into the string to avoid SQL injection attacks.
If `@SortField` does not match one of the defined sorts, `@ComputedSortField` will be `null`, causing `@Sql` to be `null`. You may want to define a default `@ComputedSortField`, or build an `@OrderByClause` that can be set to `''` (removing sorting) when no valid `@SortField` value is passed.
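For illustration only, the same whitelist idea can be sketched in application code; this is a hypothetical Python sketch (names invented, not part of the stored procedure) showing how a lookup table keeps user input out of the SQL string:

```python
# Hypothetical whitelist: map user-supplied sort keys to known column names,
# so no user text is ever concatenated into the SQL string.
SORT_COLUMNS = {
    "ShipmentNumber": "ShipmentNumber",
    "TaskName": "NAME",
    "ShipmentName": "ShipmentName",
    "AssignedDate": "AssignedDate",
}

def build_order_by(sort_field, sort_order):
    column = SORT_COLUMNS.get(sort_field, "ShipmentNumber")  # safe default
    direction = "DESC" if sort_order == "DESC" else "ASC"
    return f"ORDER BY {column} {direction}"

print(build_order_by("TaskName", "desc"))      # ORDER BY NAME ASC
print(build_order_by("DROP TABLE x", "DESC"))  # ORDER BY ShipmentNumber DESC
```

Anything not in the whitelist falls back to a safe default instead of reaching the query text.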
|
Error in Order by clause when using case statement in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two tables: a table PERSON and a table CAR. There is a one-to-many relationship between the two tables: one person can own multiple cars, and one car can be owned by one and only one person.
The table PERSON has the following columns:
```
PERSON
PERSON_ID
PERSON_NAME
```
The table CAR has the following columns:
```
CAR
CAR_ID
CAR_DESCRIPTION
PERSON_ID (the owner)
```
The content of the two tables is as follows:
```
PERSON
PERSON_ID PERSON_NAME
-------------------------
1 John
2 Karl
3 Sarah
4 Kevin
--------------------------
CAR
CAR_ID CAR_DESCRIPTION PERSON_ID (owner)
-----------------------------------------
1 Mercedes 3
2 Honda 3
3 Hundai 1
```
I want to use the above tables to get a table result similar to the following:
```
Result:
john
Hundai
karl
sarah
mercedes
honda
kevin
```
In other words, I want to get a result table composed of one column, showing each owner followed by all the cars he owns, then the next person followed by the cars he owns (if any), and so on.
Can this be done at the database level?
|
or simply
```
select descr from (
select -1 as car_id, person_id, person_name as descr from person
union all
select car_id, person_id, car_description as descr from car
order by 2,1)
```
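To verify the interleaving logic outside Oracle, here is a quick sketch with Python's built-in `sqlite3` module, using the question's sample data; the `ORDER BY` is moved to the outer query, since SQLite does not guarantee that a subquery's ordering is preserved:

```python
import sqlite3

# Reproduce the union-with-sort-key trick in SQLite.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (person_id INT, person_name TEXT);
CREATE TABLE car (car_id INT, car_description TEXT, person_id INT);
INSERT INTO person VALUES (1,'John'),(2,'Karl'),(3,'Sarah'),(4,'Kevin');
INSERT INTO car VALUES (1,'Mercedes',3),(2,'Honda',3),(3,'Hundai',1);
""")
rows = con.execute("""
    SELECT descr FROM (
      SELECT -1 AS car_id, person_id, person_name AS descr FROM person
      UNION ALL
      SELECT car_id, person_id, car_description AS descr FROM car
    ) t
    ORDER BY person_id, car_id
""").fetchall()
descriptions = [r[0] for r in rows]
print(descriptions)
# ['John', 'Hundai', 'Karl', 'Sarah', 'Mercedes', 'Honda', 'Kevin']
```

The `-1` sort key makes each person sort ahead of his cars within his `person_id` group.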
|
Possible solution -
```
with person as
(
select 1 as PERSON_ID, 'John' as PERSON_NAME from dual union all
select 2 as PERSON_ID, 'Karl' as PERSON_NAME from dual union all
select 3 as PERSON_ID, 'Sarah' as PERSON_NAME from dual union all
select 4 as PERSON_ID, 'Kevin' as PERSON_NAME from dual
),
car as
(
select 1 as CAR_ID, 'Mercedes' as CAR_DESCRIPTION, 3 as PERSON_ID from dual union all
select 2 as CAR_ID, 'Honda' as CAR_DESCRIPTION, 3 as PERSON_ID from dual union all
select 3 as CAR_ID, 'Hundai' as CAR_DESCRIPTION, 1 as PERSON_ID from dual
)
select lower(nm) as nm
from
(select PERSON_ID as ID, null as PRN_ID, PERSON_NAME as NM
from person
union all
select CAR_ID as ID, PERSON_ID as PRN_ID, CAR_DESCRIPTION as NM
from car)
order by nvl(prn_id, id)
```
and using the hierarchical query -
```
select lower(nm) as nm
from
(select 1 as TP, PERSON_ID as ID, null as PRN_ID, PERSON_NAME as NM
from person
union all
select 2 as TP, CAR_ID as ID, PERSON_ID as PRN_ID, CAR_DESCRIPTION as NM
from car)
connect by (prior id = prn_id) and ((prior tp = 1) and (tp = 2))
start with prn_id is null
order siblings by id
```
|
select columns from two tables and put the result in one colum in a specific order
|
[
"",
"sql",
"oracle",
""
] |
I have a table like this:
```
SaleID Region Customer OrderAmt
1 North Keesha 10
2 West Mary 10
3 North Winston 10
4 North John 10
5 North Keesha 10
6 West John 10
7 West Mary 10
8 South John 10
```
Using SQL Server 2012, what is the best way to select each region's highest-ordering customer, alongside the totals for the customer and for the region, i.e.:
```
Region Customer CustAmt RegAmt
North Keesha 20 40
West Mary 20 30
South John 10 10
```
Although multiple regions may contain the same names, we want `CustAmt` to be the total for that name only within each region, not across regions (i.e., in last line, John's total in South region is 10, not 30).
|
With a CTE:
```
WITH cte AS (
SELECT Region, Customer
, sum(OrderAmt) AS CustAmt
, sum(sum(OrderAmt)) OVER (PARTITION BY Region) AS RegAmt
, row_number() OVER (PARTITION BY Region ORDER BY sum(OrderAmt) DESC) AS rn
FROM tbl
GROUP BY Region, Customer
)
SELECT Region, Customer, CustAmt, RegAmt
FROM cte
WHERE rn = 1;
```
Or, the same with a subquery:
```
SELECT Region, Customer, CustAmt, RegAmt
FROM (
SELECT Region, Customer
, sum(OrderAmt) AS CustAmt
, sum(sum(OrderAmt)) OVER (PARTITION BY Region) AS RegAmt
, row_number() OVER (PARTITION BY Region ORDER BY sum(OrderAmt) DESC) AS rn
FROM tbl
GROUP BY Region, Customer
) sub
WHERE rn = 1;
```
The key features are window functions over an aggregate, so we only need a *single* CTE / subquery.
[**SQL Fiddle.**](http://sqlfiddle.com/#!6/7861c/4)
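As a plain-Python sanity check of the expected output, this sketch computes the same per-customer and per-region totals and picks the top customer per region, mirroring the `row_number() ... ORDER BY sum(OrderAmt) DESC` filter:

```python
from collections import defaultdict

sales = [("North", "Keesha", 10), ("West", "Mary", 10), ("North", "Winston", 10),
         ("North", "John", 10), ("North", "Keesha", 10), ("West", "John", 10),
         ("West", "Mary", 10), ("South", "John", 10)]

cust_amt = defaultdict(int)   # (region, customer) -> customer total within region
reg_amt = defaultdict(int)    # region -> region total
for region, customer, amt in sales:
    cust_amt[(region, customer)] += amt
    reg_amt[region] += amt

# Highest-ordering customer per region (equivalent to keeping rn = 1).
top = {}
for (region, customer), amt in cust_amt.items():
    if region not in top or amt > top[region][1]:
        top[region] = (customer, amt)

for region, (customer, amt) in top.items():
    print(region, customer, amt, reg_amt[region])
```

It reproduces the `Keesha 20 40`, `Mary 20 30`, `John 10 10` rows from the question.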
|
In one select:
```
select top 1 with ties
Region ,
Customer ,
CustAmt = sum(OrderAmt),
RegAmt = sum(sum(OrderAmt)) over (partition by Region)
from
your_table
group by
Region,
Customer
order by
row_number() over(partition by Region order by sum(OrderAmt) desc);
```
**[SQL Fiddle](http://sqlfiddle.com/#!3/ea4fc/2)**
|
select customer with the highest total orders per region, alongside both totals
|
[
"",
"sql",
"sql-server",
"greatest-n-per-group",
"window-functions",
""
] |
I need some help with the SUM feature in android app. I have a table that looks something like the following :
[](https://i.stack.imgur.com/0jOKm.png)
I need to `SUM` the quantities of the rows between the last two records that have a note, including the last noted record itself. In this example I need to sum the Quantity of rows 31, 32 and 33, which would return 90. I've tried
```
SELECT Sum(QUANTITY) FROM fuel_table WHERE NOTE!='' ORDER BY ID DESC
```
but it returns the SUM of the quantities of all rows with a note.
|
I am inclined to phrase the question as: sum the quantity from all rows that have exactly one note "ahead" of them (at or after their id). This suggests:
```
select sum(quantity)
from (select ft.*,
(select count(*)
from fuel_table ft2
where ft2.note = 'Yes' and ft2.id >= ft.id
) as numNotesAhead
from fuel_table ft
) ft
where numNotesAhead = 1;
```
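To sanity-check this logic, here is a sketch using Python's built-in `sqlite3` module with hypothetical sample data (notes on ids 30 and 33, quantities chosen so rows 31-33 sum to 90):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fuel_table (id INT, quantity INT, note TEXT)")
con.executemany("INSERT INTO fuel_table VALUES (?, ?, ?)",
                [(29, 20, ''), (30, 25, 'Yes'),
                 (31, 30, ''), (32, 30, ''), (33, 30, 'Yes')])
total = con.execute("""
    SELECT SUM(quantity) FROM (
      SELECT ft.quantity,
             (SELECT COUNT(*) FROM fuel_table ft2
              WHERE ft2.note = 'Yes' AND ft2.id >= ft.id) AS numNotesAhead
      FROM fuel_table ft) t
    WHERE numNotesAhead = 1
""").fetchone()[0]
print(total)  # 90
```

Rows 31-33 each see exactly one note at or after their id, so only they are summed.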
|
```
WITH max_id_with_note AS
(
SELECT MAX(ID) AS max_id
FROM YourTable
WHERE IFNULL(note, '') <> ''
)
, previous_max_id_with_note AS
(
SELECT max(ID) as max_id
FROM YourTable
WHERE IFNULL(note, '') <> ''
AND ID < (SELECT max_id FROM max_id_with_note)
)
SELECT SUM(Quantity)
FROM YourTable
WHERE (SELECT max_id FROM previous_max_id_with_note)
< ID and ID <=
(SELECT max_id FROM max_id_with_note)
```
[Example at SQL Fiddle.](http://sqlfiddle.com/#!7/07d49/8/0)
|
SQLite SUM() between several rows
|
[
"",
"android",
"sql",
"sqlite",
""
] |
I have a table `software` and columns in it as `dev_cost`, `sell_cost`. If `dev_cost` is 16000 and `sell_cost` is 7500, how do I find the quantity of software to be sold in order to recover the `dev_cost`?
I have queried as below:
```
select dev_cost / sell_cost from software ;
```
It is returning 2 as the answer. But we need to get 3, right?
What would be the query for that?
|
Your columns have integer types, and [integer division truncates the result towards zero](http://www.postgresql.org/docs/current/static/functions-math.html). To get an accurate result, you'll need to [cast at least one of the values to float or decimal](https://stackoverflow.com/questions/28736227/postgresql-9-3-convert-to-float):
```
select cast(dev_cost as decimal) / sell_cost from software ;
```
or just:
```
select dev_cost::decimal / sell_cost from software ;
```
You can then round the result up to the nearest integer using the `ceil()` function:
```
select ceil(dev_cost::decimal / sell_cost) from software ;
```
(See [demo on SQLFiddle](http://sqlfiddle.com/#!15/dd9ec/1/0).)
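The truncation behaviour is easy to reproduce outside the database; for example, in Python:

```python
import math

dev_cost, sell_cost = 16000, 7500
print(dev_cost // sell_cost)             # 2 -- truncated, like SQL integer division
needed = math.ceil(dev_cost / sell_cost) # true division first, then round up
print(needed)                            # 3 -- units to sell to recover the cost
```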
|
You can cast integer type to `numeric` and use [`ceil()`](http://www.postgresql.org/docs/9.4/static/functions-math.html#FUNCTIONS-MATH-FUNC-TABLE) function to get the desired output
> The PostgreSQL ceil function returns the smallest integer value that
> is greater than or equal to a number.
```
SELECT 16000::NUMERIC / 7500 col
,ceil(16000::NUMERIC / 7500)
```
Result:
```
col ceil
------------------ ----
2.1333333333333333 3
```
So your query should be
```
select ceil(dev_cost::numeric/sell_cost)
from software
```
|
Division ( / ) not giving my answer in postgresql
|
[
"",
"sql",
"postgresql",
"division",
"modulo",
""
] |
Is it possible to set the transaction isolation level to read uncommitted for users by default in MS SQL Server Management Studio 2012?
I was thinking it could either be done through editing a config file or changing a registry key, but I haven't been able to locate anything that would change this yet.
|
As far as I know you can't change the default isolation level.
For workloads with a lot of reads and fewer writes, you can avoid blocking queries with [multiversion concurrency control](https://en.wikipedia.org/wiki/Multiversion_concurrency_control). That's the default for Postgres and Oracle. In SQL Server, MVCC is called ["read committed snapshot"](https://msdn.microsoft.com/en-us/library/bb522682.aspx), and you can enable it with:
```
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
```
|
You can’t. The default isolation level for all SQL Server databases is Read Committed, and your only option is to set the isolation level within a session, if you want to use a level other than the default.
You could also set *SET TRANSACTION ISOLATION LEVEL* within stored procedure body.
|
How to set transaction isolation level as read uncommitted as default for users
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a simple query:
```
Select qty from X where id=....;
```
This query always returns 0 or 1 rows.
When it returns 1 row, everything works.
But if it returns 0 rows, my query fails, as `qty` is used in calculations. (This is actually a subquery in a Select statement.)
I somehow need to make sure the query always returns 1 row.
I tried:
```
Select coalesce(qty,0) from X where id=....;
```
but it doesn't help: if there are no rows, the coalesce is never applied.
If no row is found, the query should give `0`.
How can I fix it?
|
You can do this:
```
SELECT COALESCE( (SELECT qty from X where id=....), 0)
```
if nothing is returned from the inner `SELECT` statement, `COALESCE` will give you `0` in the outer `SELECT` statement.
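You can verify this pattern with SQLite via Python's built-in `sqlite3` module (hypothetical table `x`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE x (id INT, qty INT)")
con.execute("INSERT INTO x VALUES (1, 5)")

def qty_or_zero(item_id):
    # COALESCE turns the NULL from an empty scalar subquery into 0.
    return con.execute(
        "SELECT COALESCE((SELECT qty FROM x WHERE id = ?), 0)",
        (item_id,)).fetchone()[0]

print(qty_or_zero(1))   # 5 (row exists)
print(qty_or_zero(99))  # 0 (no row; scalar subquery yields NULL)
```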
|
```
Select a.* from (
Select qty, 0 as priority from X where id=....
Union
Select 0, 1 as priority
) a
Order By a.priority Asc
Limit 1
```
This select basically ensures that at least one row is returned by adding an additional row to the end and by adding the limit statement we just return the first row.
So now there is the case where at least one row is found and the additional row is added to the end. In this case the first row found (see `->`) will be returned due to the ascending order by priority:
```
qty priority
-> 1 0
3 0
4 0
. .
0 1
```
And then there is the case where no row is found but the additional row is returned:
```
qty priority
-> 0 1
```
|
Is it possible to force a return value from a query when no rows are found?
|
[
"",
"sql",
"postgresql",
""
] |
I'm working on Library Database and it contains a table called book\_transaction.
There are 2 columns called `issued_date` and `due_date`. `due_date` should be 7 days after `issued_date`. Can I specify this condition using the `default` keyword while creating the table?
If it is not possible please leave an alternative for the same.
Thank you,
|
Thank you for the useful comment by "fabulaspb". I came up with this:
```
create table book_transaction
(
transaction_number int primary key,
book_isbn int REFERENCES book_master(book_isbn),
student_code int references student_master(student_code),
issued_date date default sysdate,
due_date date as (issued_date+7),
submited_date date,
fine int
);
```
The table is created without an error and it is working fine.
Thank you all.
|
Oracle `default` constraints cannot refer to other columns. You can get the same functionality using a trigger (see [this answer](https://stackoverflow.com/a/3787229/50552)):
```
CREATE TRIGGER book_transaction_trigger
BEFORE INSERT ON book_transaction
FOR EACH ROW
BEGIN
IF :new.due_date IS NULL THEN
:new.due_date := :new.issued_date + 7;
END IF;
END book_transaction_trigger;
```
You can add days by adding a number to a `date`.
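The same 7-day arithmetic, sketched in Python for illustration (the date value here is invented):

```python
from datetime import date, timedelta

issued_date = date(2015, 12, 22)            # hypothetical example value
due_date = issued_date + timedelta(days=7)  # same as issued_date + 7 in Oracle
print(due_date)  # 2015-12-29
```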
|
updating column value based on another column value
|
[
"",
"sql",
"oracle",
""
] |
For example, if I have:
```
SELECT 'A@G.com' AS Email, 2 AS Somenumber, 3 AS Number
UNION ALL
SELECT 'A@G.com' AS Email, 2 AS Somenumber, 5 AS Number
UNION ALL
SELECT 'z@y.com' AS Email, 1 AS Somenumber, 6 AS Number
```
instead of:
[](https://i.stack.imgur.com/LLiYr.jpg)
I want to get:
[](https://i.stack.imgur.com/n3jOK.jpg)
|
```
SELECT Email, Somenumber, Number
FROM (
SELECT *, RowNum = ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Number DESC)
FROM (
VALUES
('A@G.com', 2, 3),
('A@G.com', 2, 5),
('z@y.com', 1, 6)
) t(Email, Somenumber, Number)
) t
WHERE RowNum = 1
```
output -
```
Email Somenumber Number
------- ----------- -----------
A@G.com 2 5
z@y.com 1 6
```
|
It looks like you're after one row per email. You can do that like:
```
; with all_rows as
(
... your union query here ...
)
, numbered_rows as
(
select row_number() over (partition by email order by somenumber) as rn
, *
from all_rows
)
select email
, somenumber
, number
from numbered_rows
WHERE rn = 1
```
|
How to handle duplicates in sql on union all select statements
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I will need your help with this one. I want to normalize numbers within groups. Say I have this dataset:
```
A B C
-------
0 1 3
1 1 0
1 1 2
1 1 4
1 2 1
1 2 1
```
I want to group these rows by A and B, then normalize the values of C *within its group*, i.e. summing up all C's of the group and then dividing each C by that sum. In the above example, I would expect this result:
```
A B C
---------
0 1 1 // sum of C's in group is 1, 1/1=1
1 1 0 // sum of C's in group is 6, 0/6=0
1 1 1/3 // ...
1 1 2/3 // ...
1 2 1/2 // sum of C's in group is 2, 1/2=1/2
1 2 1/2 // ...
```
Division by zero can be handled separately. How to do this using SQL (or PSQL, if that helps)? I can think of ways to do this in principle, but I always end up with deeply nested SELECTs, which I want to avoid.
Thanks in advance!
|
You could use windowed functions for it:
```
SELECT a,b,
1.0 * c / CASE WHEN SUM(c) OVER(PARTITION BY a,b) = 0 THEN 1
ELSE SUM(c) OVER(PARTITION BY a,b) END AS c
FROM tab
```
`SqlFiddleDemo`
Output:
```
╔════╦════╦════════════════════╗
║ a ║ b ║ c ║
╠════╬════╬════════════════════╣
║ 0 ║ 1 ║ 1 ║
║ 1 ║ 1 ║ 0 ║
║ 1 ║ 1 ║ 0.3333333333333333 ║
║ 1 ║ 1 ║ 0.6666666666666666 ║
║ 1 ║ 2 ║ 0.5 ║
║ 1 ║ 2 ║ 0.5 ║
║ 2 ║ 2 ║ 0 ║ -- added for check division by 0
╚════╩════╩════════════════════╝
```
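The same normalization can be cross-checked in plain Python with the sample data:

```python
from collections import defaultdict

rows = [(0, 1, 3), (1, 1, 0), (1, 1, 2), (1, 1, 4), (1, 2, 1), (1, 2, 1)]

group_sum = defaultdict(int)  # (A, B) -> sum of C within the group
for a, b, c in rows:
    group_sum[(a, b)] += c

# Divide each C by its group's sum; a zero sum maps to 0 to avoid division by zero.
normalized = [(a, b, c / group_sum[(a, b)] if group_sum[(a, b)] else 0)
              for a, b, c in rows]
for row in normalized:
    print(row)
```

It reproduces the `1, 0, 1/3, 2/3, 1/2, 1/2` values from the question.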
|
You can use a derived table to aggregate the values and then `join` the results back to the original table. To avoid division by `0`, a `where` clause has been included. For these `0` sum cases, you might have to include a special condition to select them.
```
select t.a, t.b, 1.0 * t.c/t1.total_c
from tablename t
join (select a, b, sum(c) as total_c
from tablename
group by a, b) t1
on t.a = t1.a and t.b = t1.b
where t1.total_c > 0
```
|
Normalize values within a group using SQL
|
[
"",
"sql",
"postgresql",
""
] |
I have a table with 3 columns.
```
ID AuditDateTime UpdateType
12 12-15-2015 18:09 1
45 12-04-2015 17:41 0
75 12-21-2015 04:26 0
12 12-17-2015 07:43 0
35 12-01-2015 05:36 1
45 12-15-2015 04:35 0
```
I'm trying to return only records where the `UpdateType` has changed from `AuditDateTime` based on the IDs. So in this example, ID 12 changes from the 12-15 entry to the 12-17 entry. I would want that record returned. There will be multiple instances of ID 12, and I need all records returned where an ID's `UpdateType` has changed from its previous entry. I tried adding a `row_number` but it didn't insert sequentially because the records are not in the table in order. I've done a ton of searching with no luck. Any help would be greatly appreciated.
|
By using a CTE it is possible to find the previous record based upon the order of the AuditDateTime
```
WITH CTEData AS
(SELECT ROW_NUMBER() OVER (PARTITION BY ID ORDER BY AuditDateTime) [ROWNUM], *
FROM @tmpTable)
SELECT A.ID, A.AuditDateTime, A.UpdateType
FROM CTEData A INNER JOIN CTEData B
ON (A.ROWNUM - 1) = B.ROWNUM AND
A.ID = B.ID
WHERE A.UpdateType <> B.UpdateType
```
The Inner Join back onto the CTE will give in one query both the current record (Table Alias A) and previous row (Table Alias B).
|
This should do what you're trying to do I believe
```
SELECT
T1.ID,
T1.AuditDateTime,
T1.UpdateType
FROM
dbo.My_Table T1
INNER JOIN dbo.My_Table T2 ON
T2.ID = T1.ID AND
T2.UpdateType <> T1.UpdateType AND
T2.AuditDateTime < T1.AuditDateTime
LEFT OUTER JOIN dbo.My_Table T3 ON
T3.ID = T1.ID AND
T3.AuditDateTime < T1.AuditDateTime AND
T3.AuditDateTime > T2.AuditDateTime
WHERE
T3.ID IS NULL
```
Alternatively:
```
SELECT
T1.ID,
T1.AuditDateTime,
T1.UpdateType
FROM
dbo.My_Table T1
INNER JOIN dbo.My_Table T2 ON
T2.ID = T1.ID AND
T2.UpdateType <> T1.UpdateType AND
T2.AuditDateTime < T1.AuditDateTime
WHERE
NOT EXISTS
(
SELECT *
FROM
dbo.My_Table T3
WHERE
T3.ID = T1.ID AND
T3.AuditDateTime < T1.AuditDateTime AND
T3.AuditDateTime > T2.AuditDateTime
)
```
The basic gist of both queries is that you're looking for rows where an earlier row had a different type and no other rows exist between the two rows (hence, they're sequential). Both queries are logically identical, but might have differing performance.
Also, these queries assume that no two rows will have identical audit times. If that's not the case then you'll need to define what you expect to get when that happens.
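As a quick cross-check of the expected result, here is the same "changed from the previous row" logic in plain Python using the sample data:

```python
from collections import defaultdict

rows = [(12, "12-15-2015 18:09", 1), (45, "12-04-2015 17:41", 0),
        (75, "12-21-2015 04:26", 0), (12, "12-17-2015 07:43", 0),
        (35, "12-01-2015 05:36", 1), (45, "12-15-2015 04:35", 0)]

by_id = defaultdict(list)
for rid, when, utype in rows:
    by_id[rid].append((when, utype))

changed = []
for rid, history in by_id.items():
    # Lexicographic sort is fine for this sample (same MM-DD-YYYY pattern,
    # one year); use real datetime values in practice.
    history.sort()
    for prev, cur in zip(history, history[1:]):
        if prev[1] != cur[1]:  # UpdateType changed from the previous entry
            changed.append((rid, cur[0], cur[1]))
print(changed)  # [(12, '12-17-2015 07:43', 0)]
```

Only ID 12's 12-17 entry differs from its immediately preceding entry, matching the row the question wants returned.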
|
SQL Server Return Rows Where Field Changed
|
[
"",
"sql",
"sql-server",
"record",
""
] |
I have this query segment below where I'm trying to build a string of "month-year" from a date field in this table. It's very important that it comes in the right order starting from current month going forward 12 months.
```
DECLARE @cols AS NVARCHAR(MAX)
SELECT @cols = STUFF(
(SELECT N',' + QUOTENAME(y) AS [text()]
FROM (SELECT DISTINCT CONVERT(char(3), StartDate, 0) + '-' +
RIGHT(CONVERT(varchar, YEAR(StartDate)), 2) AS y
FROM Products2
) AS Y
--ORDER BY y desc
FOR XML PATH('')),
1, 1, N'')
```
This query isn't pulling the dates in the right order and I wanted to see if you guys know of any neat tricks to pull the dates in the correct order. I can bring in the startDate column and sort it by that but it brings in duplicates as it may have several entries for the same month. I've created a sample table here <http://sqlfiddle.com/#!6/3a500/5>
|
If you are using `SQL Server 2012+` you could use [`FORMAT`](https://msdn.microsoft.com/en-us/library/hh213505.aspx) function:
```
DECLARE @cols AS NVARCHAR(MAX);
;WITH cte AS -- get only one date per month/year
(
SELECT MIN(StartDate) AS StartDate
FROM #Products2
GROUP BY YEAR(StartDate),MONTH(StartDate)
)
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(FORMAT(StartDate, 'MMM-yy'))
FROM cte
ORDER BY StartDate
FOR XML PATH('')),
1, 1, N'');
SELECT @cols;
```
`LiveDemo`
Output:
```
╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
║ result ║
╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
║ [Dec-15],[Jan-16],[Feb-16],[Mar-16],[Apr-16],[May-16],[Jun-16],[Jul-16],[Aug-16],[Sep-16],[Oct-16],[Nov-16],[Dec-16] ║
╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
```
|
You could use
```
DECLARE @cols AS NVARCHAR(MAX);
SELECT @cols = STUFF((SELECT N',' + QUOTENAME(y) AS [text()]
FROM (SELECT CONVERT(CHAR(3), StartDate, 0) + '-'
+ RIGHT(CONVERT(VARCHAR, YEAR(StartDate)), 2) AS y,
MIN(StartDate) AS z
FROM Products2
GROUP BY CONVERT(CHAR(3), StartDate, 0) + '-'
+ RIGHT(CONVERT(VARCHAR, YEAR(StartDate)), 2)) AS Y
ORDER BY z
FOR XML PATH('')), 1, 1, N'');
SELECT @cols;
```
[SQL Fiddle](http://sqlfiddle.com/#!6/3a500/25)
|
MSSQL Order by date with distinct
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-order-by",
""
] |
I need to insert custom date time in the oracle database where the `ITEM_TIME_DATE` column has the `TIMESTAMP` data type.
Here is my query:
```
INSERT INTO items
(ITEM_AUTHOR_ID, ITEM_TITLE, ITEM_DESCRIPTION, ITEM_TIME_DATE, ITEM_STATUS)
VALUES
('0', 'test title', 'test des', TO_TIMESTAMP('12-23-2015 8:00 PM', 'MM-DD-YYYY HH24:MI:SS'), 0);
```
The custom date should convert to timestamp format, but I am getting this error:
> ORA-01858: a non-numeric character was found where a numeric was expected
I am using Oracle 11.2.0.2.0 (Windows 10) 64-bit
|
The date format in the `TO_TIMESTAMP` call doesn't match the data. Use:
```
INSERT INTO items
(ITEM_AUTHOR_ID, ITEM_TITLE, ITEM_DESCRIPTION, ITEM_TIME_DATE, ITEM_STATUS)
VALUES
('0', 'test title', 'test des', TO_TIMESTAMP('12-23-2015 8:00 PM', 'MM-DD-YYYY HH:MI AM'), 0);
```
Best of luck.
|
You need to remove the `:SS` or add seconds `:00` to your time, and add `AM` to the format.
This works:
```
INSERT INTO items
(ITEM_AUTHOR_ID, ITEM_TITLE, ITEM_DESCRIPTION, ITEM_TIME_DATE, ITEM_STATUS)
VALUES
('0', 'test title', 'test des', TO_TIMESTAMP('12-23-2015 8:00:00 PM', 'MM-DD-YYYY HH:MI:SS AM'), 0);
```
|
Inserting custom TIMESTAMP value in the Oracle SQL
|
[
"",
"sql",
"oracle",
""
] |
I have the following query
```
SELECT R.ServiceID,ItemName, Price
FROM RequisitionDetail RD, Requisition R
WHERE RD.ItemStatus='Used' AND RD.RequisitionID=R.RequisitionID AND R.ServiceID=2264
```
which yields this result:
```
ServiceID ItemName Price
2264 LCD 1500.00
2264 Touch Panel 1200.00
2264 Service Charge 250.00
```
I want the following result, with three columns, without using a cursor:
```
2264 LCD,Touch Panel,Service Charge 2950
```
I have tried GROUP BY:
```
SELECT R.ServiceID,SUM(Price) as Price--,ItemName, Price
FROM RequisitionDetail RD, Requisition R
WHERE RD.ItemStatus='Used' AND RD.RequisitionID=R.RequisitionID AND R.ServiceID=2264
GROUP BY R.ServiceID
```
results
```
ServiceID Price
2264 2950.00
```
but I cannot get the item names as a comma-separated string.
Help me out, guys.
|
Try this simple sql.
```
SELECT t.ServiceID, STUFF(
(
SELECT ',' + s.ItemName FROM (SELECT R.ServiceID,ItemName, Price
FROM RequisitionDetail RD, Requisition R
WHERE RD.ItemStatus='Used' AND RD.RequisitionID=R.RequisitionID AND R.ServiceID=2264) s
WHERE s.ServiceID = t.ServiceID
FOR XML PATH('')
),1,1,'') AS Items, sum(t.Price) as Price
FROM (SELECT R.ServiceID,ItemName, Price
FROM RequisitionDetail RD, Requisition R
WHERE RD.ItemStatus='Used' AND RD.RequisitionID=R.RequisitionID AND R.ServiceID=2264) AS t
GROUP BY t.ServiceID
```
output is :
`2264 LCD,Touch Panel,Service Charge 2950.00`
|
Aggregate string concatenation is a bit of a pain in SQL Server. You can do this as:
```
WITH r as (
SELECT R.ServiceID, ItemName, Price
FROM RequisitionDetail RD JOIN
Requisition R
ON RD.RequisitionID = R.RequisitionID
WHERE RD.ItemStatus = 'Used' AND R.ServiceID = 2264
)
SELECT r.ServiceID, SUM(r.Price) as Price,
       STUFF((SELECT ', ' + r2.ItemName
              FROM r r2
              WHERE r2.ServiceId = r.ServiceId
              FOR XML PATH (''), TYPE
             ).value('.', 'varchar(max)'
), 1, 2, ''
) as items
FROM r
GROUP BY r.ServiceID;
```
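The grouping-and-concatenation result is easy to sanity-check in plain Python with the sample rows:

```python
from collections import defaultdict

rows = [(2264, "LCD", 1500.00), (2264, "Touch Panel", 1200.00),
        (2264, "Service Charge", 250.00)]

items = defaultdict(list)     # ServiceID -> item names, in row order
totals = defaultdict(float)   # ServiceID -> summed price
for service_id, name, price in rows:
    items[service_id].append(name)
    totals[service_id] += price

for service_id in items:
    print(service_id, ", ".join(items[service_id]), totals[service_id])
```

It yields one row per ServiceID with the comma-separated names and the summed price, matching the `LCD, Touch Panel, Service Charge / 2950` output the question asks for.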
|
Showing multiple data in select query in Sql server 2014
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I keep getting this error in Oracle when i try to run this statement. I am not sure where the formatting error is coming from. maybe someone with fresh eyes can assist me with this problem.
```
INSERT INTO Faculty
(FacNo, FacFirstName, FacLastName, FacCity, FacState,
FacDept, FacRank, FacSalary, FacSupervisor, FacHireDate, FacZipCode)
VALUES ('543-21-0987','VICTORIA','EMMANUEL','BOTHELL','WA','MS','PROF',120000.0,'','2001-04-15','98011-2242');
```
Here is the error message i keep getting:
> Error starting at line : 1 in command - Error report -
> SQL Error: ORA-01861: literal does not match format string
> 01861. 00000 - "literal does not match format string"
> \*Cause: Literals in the input must be the same length as literals in
> the format string (with the exception of leading whitespace). If the
> "FX" modifier has been toggled on, the literal must match exactly,
> with no extra whitespace.
> \*Action: Correct the format string to match the literal.
Here are the specs on the table i am trying to INSERT this data into:
> FACNO CHAR(11 BYTE)
> FACFIRSTNAME VARCHAR2(30 BYTE)
> FACLASTNAME VARCHAR2(30 BYTE)
> FACCITY VARCHAR2(30 BYTE)
> FACSTATE CHAR(2 BYTE)
> FACZIPCODE CHAR(10 BYTE)
> FACRANK CHAR(4 BYTE)
> FACHIREDATE DATE
> FACSALARY NUMBER(10,2)
> FACSUPERVISOR CHAR(11 BYTE)
> FACDEPT CHAR(6 BYTE)
|
Most likely, your [`NLS_DATE_FORMAT`](https://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams122.htm#REFRN10119), the default date format for literals, does not match your string. Never assume dates are formatted one way or another; use the `TO_DATE` function to specify the format explicitly, so convert to:
`Insert (... to_date('2001-04-15','YYYY-MM-DD')`...
|
OldProgrammer's answer is the correct one: explicitly converting the string to a date is the safest. MS SQL Server will generally auto-convert anything recognizable as a date, and Oracle will do it only if your formatting matches the default formatting for the system. All the Oracle systems I have worked with used 'DD/MON/YY' (the two-digit day, the three-letter month abbreviation and the two-digit year) as their default and would automatically convert that. It is not the proper way to do it, but everybody likes to be lazy sometimes.
|
SQL Error: ORA-01861: literal does not match format string
|
[
"",
"sql",
"string",
"oracle",
""
] |
I have 2 tables, recipes and recipe\_ratings,
and I don't know how to get the AVG(rating\_value) from recipe\_ratings
and join them:
select everything ('\*') from the Recipes table and join it with the AVG(rating\_value),
ordered by date\_posted.
|
here is my approach:
**Table & Data :**
```
CREATE TABLE recipes
(`id` int, `recipe_name` varchar(7))
;
CREATE TABLE recipe_ratings
(`id` int, `recipeId` int,`rating_value` int,`date_posted` datetime)
;
INSERT INTO recipes
(`id`, `recipe_name`)
VALUES
(1, 'Cake'),
(2, 'Bun'),
(3, 'Hot Dog'),
(4, 'Tea')
;
INSERT INTO recipe_ratings
(`id`, `recipeId`,`rating_value`,`date_posted`)
VALUES
(1,1,3,'2015-03-20 10:30:00.000'),
(2,2,4,'2015-06-20 10:30:00.000'),
(3,3,5,'2014-03-20 10:30:00.000')
;
```
**Query:**
```
SELECT R.*, AVG(RR.rating_value) rating_value
FROM recipes R
LEFT JOIN recipe_ratings RR ON R.id = RR.recipeId
GROUP BY RR.date_posted,R.id
ORDER BY RR.date_posted DESC
```
**SqlFiddle** [**Demo**](http://sqlfiddle.com/#!9/27711/3)
|
Execute the following query:
```
SELECT t1.*, AVG(t2.rating) AS average_rating
FROM table1 t1
INNER JOIN table2 t2 ON t1.id = t2.id
GROUP BY t1.id
```
|
Join 2 tables for viewing
|
[
"",
"mysql",
"sql",
"select",
"join",
"group-by",
""
] |
I'm looking to summarise data by week from a given starting date into the indefinite future. Typically I would just dump a result set into an Excel dash and group there via pivots etc., but I'm writing this query to hand off to a set of users and allow them access through a front-end BI application, so I want to give it to them as complete as possible.
My current method is obviously inefficient, so I'd love to find a simple formula to do this (or a way to create a variable for the week to cut back on all of the repetitive scripting). I've thought about using DATEADD but haven't been able to make it work yet.
Example of my current solution is below:
```
SELECT
'Week Ending' = (CASE
WHEN YEAR(duedate) = '2015'
THEN (CASE
WHEN datepart(wk, duedate) = 32 THEN '2015/08/07'
WHEN datepart(wk, duedate) = 33 THEN '2015/08/14'
WHEN datepart(wk, duedate) = 34 THEN '2015/08/21'
WHEN datepart(wk, duedate) = 35 THEN '2015/08/28'
WHEN datepart(wk, duedate) = 36 THEN '2015/09/04'
WHEN datepart(wk, duedate) = 37 THEN '2015/09/11'
WHEN datepart(wk, duedate) = 38 THEN '2015/09/18'
WHEN datepart(wk, duedate) = 39 THEN '2015/09/25'
WHEN datepart(wk, duedate) = 40 THEN '2015/10/02'
WHEN datepart(wk, duedate) = 41 THEN '2015/10/09'
WHEN datepart(wk, duedate) = 42 THEN '2015/10/16'
WHEN datepart(wk, duedate) = 43 THEN '2015/10/23'
WHEN datepart(wk, duedate) = 44 THEN '2015/10/30'
WHEN datepart(wk, duedate) = 45 THEN '2015/11/06'
WHEN datepart(wk, duedate) = 46 THEN '2015/11/13'
WHEN datepart(wk, duedate) = 47 THEN '2015/11/20'
WHEN datepart(wk, duedate) = 48 THEN '2015/11/27'
WHEN datepart(wk, duedate) = 49 THEN '2015/12/04'
WHEN datepart(wk, duedate) = 50 THEN '2015/12/11'
WHEN datepart(wk, duedate) = 51 THEN '2015/12/18'
WHEN datepart(wk, duedate) = 52 THEN '2015/12/25'
WHEN datepart(wk, duedate) = 53 THEN '2016/01/01'
END)
WHEN YEAR(duedate) = '2016'
THEN (CASE
WHEN datepart(wk, duedate) = 1 THEN '2016/01/01'
WHEN datepart(wk, duedate) = 2 THEN '2016/01/08'
WHEN datepart(wk, duedate) = 3 THEN '2016/01/15'
WHEN datepart(wk, duedate) = 4 THEN '2016/01/22'
END)
END),
SUM(linetotal)
FROM
PurchaseOrderDetail
WHERE
YEAR(duedate) > '2014'
GROUP BY
YEAR(duedate), DATEPART(wk, duedate)
```
I imagine the answer is a simple one, but I haven't been able to find quite what I'm looking for yet.
|
Something like this will give you the same output without having to manually specify every week ending date:
```
SELECT
convert(varchar, duedate - datepart(dw, duedate) + 6, 111) as [Week Ending],
sum(linetotal)
FROM
PurchaseOrderDetail
WHERE
year(duedate) > '2014'
GROUP BY
year(duedate), convert(varchar, duedate - datepart(dw, duedate) + 6, 111)
```
Converting to varchar (style 111) is not absolutely necessary, but it will give the same 'YYYY/MM/DD' format that you had in your original case statement. You could just as easily use `duedate - datepart(dw, duedate) + 6` without the convert function, which will output the same value, but use the same datetime/datetime2/smalldatetime format as your input (probably 'YYYY-MM-DD HH:MM:SS').
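The same date arithmetic can be checked as a runnable sketch in Python (an illustration only: the weekday numbering below mirrors SQL Server's default `DATEFIRST` Sunday=1 convention from the answer above, and the sample data is invented):

```python
import datetime

def week_ending(d):
    # SQL Server's datepart(dw, ...) with the default DATEFIRST numbers
    # Sunday=1 .. Saturday=7; the expression duedate - datepart(dw, duedate) + 6
    # labels each Sun-Sat week by its Friday.  Same arithmetic with
    # Python's Mon=0 weekday():
    dw = (d.weekday() + 1) % 7 + 1   # Sunday=1 .. Saturday=7
    return d + datetime.timedelta(days=6 - dw)

# Group some invented line totals by their week-ending Friday
buckets = {}
for d, total in [(datetime.date(2015, 8, 3), 100),   # Monday   -> 2015/08/07
                 (datetime.date(2015, 8, 7), 50),    # Friday   -> 2015/08/07
                 (datetime.date(2015, 8, 8), 25)]:   # Saturday -> 2015/08/07
    key = week_ending(d).strftime('%Y/%m/%d')
    buckets[key] = buckets.get(key, 0) + total
```

All three sample dates fall into the same Sun-Sat week, so they collapse into the single `2015/08/07` bucket, matching what the CASE ladder in the question would have produced.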
|
Consider a solution along these lines (please note - the code is untested as I do not have access to a SQL Server instance right now). The idea is to replace the (hard-coded) date with year and week numbers. This suggested solution will allow your BI users to view the SUM of `linetotal` by year and week number. As you get more data, users can even compare the weekly `linetotal` by year (`linetotal` for 2014, 2015, 2016, ..).
```
SELECT
YEAR(duedate) AS Year1, datepart(wk, duedate), SUM(linetotal)
FROM
PurchaseOrderDetail
WHERE
YEAR(duedate) > '2014'
GROUP BY
YEAR(duedate), DATEPART(wk, duedate)
```
|
tSQL summarise data by week based on date field?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table that contains information about calls; each call has a `start date` and `end date` of DATE type in **YYYY:MM:DD HH:MI:SS** format.
How do I get the following:
1- The number of seconds in the range from **00:00:00** to **07:30:00** between the `start date` and the `end date`, and the number of seconds outside the given range (**00:00:00** to **07:30:00**).
2- The number of seconds falling on Fridays between the `start date` and the `end date`.
|
My try with hierarchical subquery generating days for each call and functions `greatest`, `least`:
[SQLFiddle](http://sqlfiddle.com/#!4/a68e3/2)
```
with t as (
select id, sd, ed, trunc(sd)+level-1 dt
from calls
connect by trunc(sd)+level-1<=trunc(ed)
and prior dbms_random.value is not null and prior id = id)
select id, sum(sec) sec, sum(fri) fri, sum(mrn) mrn, sum(sec)-sum(mrn) rest
from (
select id, (least(ed, dt+1)-greatest(sd, dt))*24*60*60 sec,
case when trunc(dt) - trunc(dt, 'iw') = 4
then (least(dt+1, ed) - greatest(dt, sd)) * 24*60*60 end fri,
(least(dt+7.5/24, ed) - greatest(dt, sd)) * 24*60*60 mrn
from t )
group by id
```
---
Query version for "Fridays" - "non Fridays mornings" - "non Fridays rest of days" output (as clarified in the comments):
```
with cte as (
select id, sd, ed, trunc(sd)+level-1 dt from calls
connect by level <= trunc(ed)-trunc(sd) + 1
and prior dbms_random.value is not null and prior id = id )
select id, max(sd) start_time, max(ed) end_time,
sum(sec) all_seconds, sum(fri) fridays, sum(mrn) mornings,
sum(sec) - sum(fri) - sum(mrn) rest
from (
select id, sd, ed, dt, (least(ed, dt+1) - greatest(sd, dt))*24*60*60 sec,
case when dt - trunc(dt, 'iw') = 4
then (least(ed, dt+1) - greatest(sd, dt))*24*60*60 else 0 end fri,
case when dt - trunc(dt, 'iw') <> 4 and dt+7.5/24 > sd
then (least(dt+7.5/24, ed) - greatest(sd, dt))*24*60*60
else 0 end mrn
from cte )
group by id order by id
```
Sample data and output:
```
create table calls (id number(3), sd date, ed date);
insert into calls values (1, timestamp '2015-12-25 07:29:00', timestamp '2015-12-25 07:31:00');
insert into calls values (2, timestamp '2015-12-24 01:00:00', timestamp '2015-12-26 23:12:42');
insert into calls values (3, timestamp '2015-12-24 23:58:00', timestamp '2015-12-25 00:01:00');
insert into calls values (4, timestamp '2015-12-24 07:00:00', timestamp '2015-12-25 00:01:00');
ID START_TIME END_TIME ALL_SECONDS FRIDAYS MORNINGS REST
---- ------------------- ------------------- ----------- ---------- ---------- ----------
1 2015-12-25 07:29:00 2015-12-25 07:31:00 120 120 0 0
2 2015-12-24 01:00:00 2015-12-26 23:12:42 252762 86400 50400 115962
3 2015-12-24 23:58:00 2015-12-25 00:01:00 180 60 0 120
4 2015-12-24 07:00:00 2015-12-25 00:01:00 61260 60 1800 59400
```
|
With :
```
SELECT (end_date - start_date), ... FROM ...
```
you obtain the number of days...
With :
```
SELECT (end_date - start_date)*24, ... FROM ...
```
you obtain the number of hours...
And, with :
```
SELECT (end_date - start_date)*24*60*60, ... FROM ...
```
you obtain the number of seconds...
|
Get number of seconds in specific duration
|
[
"",
"sql",
"oracle",
"datetime",
""
] |
I am working on a SQL query which returns all records between two dates from a table as follows
```
select convert(varchar(2),TestDate,108) from dbo.Table
where TestDate between convert(datetime,convert(varchar,GETDATE(),101))
and dateadd(day,1,convert(datetime,convert(varchar,GETDATE(),101)))
```
The above query works fine and gives me the desired results, but when I tried to use a plain date string instead of getdate(), the query returns an empty result, as follows
```
select convert(varchar(2),TestDate,108) from dbo.Table
where TestDate between convert(datetime,convert(varchar,'2015-12-27 00:00:00.000',101)) and dateadd(day,1,convert(datetime,convert(varchar,'2015-12-27 00:00:00.000',101)))
```
The above query returns an empty result set which is not what I wanted.
I tried passing date string in different formats but that didn't work.
**May I know a correct way to do it?**
|
Why would you convert dates to a string for comparisons? Just do the comparisons as dates.
In addition, you can use `datepart()` to extract the hour, rather than using some esoteric format to `convert()`:
```
select datepart(hour, TestDate)
from dbo.Table
where TestDate between cast(GETDATE() as date) and
cast(dateadd(day, 1, getdate()) as date)
```
If you want the hour as a string instead of a number, then use `datename()` rather than `datepart()`.
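A minimal sketch of the same idea against SQLite (standing in for SQL Server here; the table name and sample rows are made up): half-open date bounds plus an hour extraction, with no string/date round-tripping:

```python
import sqlite3
import datetime

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (TestDate TEXT)")
today = datetime.date(2015, 12, 27)
con.executemany("INSERT INTO t VALUES (?)",
                [("2015-12-27 09:15:00",),
                 ("2015-12-27 18:40:00",),
                 ("2015-12-28 01:00:00",)])   # next day: must be excluded

# SQLite analogue of datepart(hour, ...); the half-open range
# (>= today, < tomorrow) avoids any varchar conversion of the dates.
hours = [r[0] for r in con.execute(
    "SELECT CAST(strftime('%H', TestDate) AS INTEGER) FROM t "
    "WHERE TestDate >= ? AND TestDate < ? ORDER BY TestDate",
    (today.isoformat(),
     (today + datetime.timedelta(days=1)).isoformat()))]
```

Only the two rows from 2015-12-27 survive the range filter, yielding hours 9 and 18.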
|
I guess that you are having an extra CONVERT.
Wherever you have this
```
convert(varchar,GETDATE(),101)
```
just replace with your date:
```
'2015-12-27 00:00:00.000'
```
because the purpose of the CONVERT function is to translate a Date into a Varchar
|
incorrect results while passing a datetime instead of getdate()
|
[
"",
"sql",
"datetime",
""
] |
How can I change the `*` [asterisk symbol] into the list of column names? I can view the list of column names after placing the mouse cursor over the `*`. Is it possible to click on something to change the `*` into the names without running the script and inserting the results into some dbo.temp table?
[](https://i.stack.imgur.com/wr4G4.png)
So the desired results would be:
```
with A as (select
MyColumn1=1
,MyColumn2=2
,MyColumn3=3)
select
MyColumn1
,MyColumn2
,MyColumn3
from A
```
|
There is an option in SQL Server Management Studio that lets you execute a stored procedure from the keyboard; you can configure that option to execute a procedure that lists the columns of a table. This is how you can do it:
[](https://i.stack.imgur.com/Ed9B8.png)
Click over "Options"
[](https://i.stack.imgur.com/mlzOM.png)
As you can see, there are many keyboard shortcuts for executing a stored procedure; e.g., when you highlight the name of a table and press the shortcut alt+f1, you can see the metadata of the table. I wrote a stored procedure that shows the list of the columns of a table separated with ","; this is the procedure:
```
Create Procedure [dbo].[NS_rs_columnas]
@Tabla Sysname,
@Alias Char(3)=null
AS
Begin
Declare @Colums Nvarchar(Max)='';
Select
@Colums+=','+isnull(Ltrim(Rtrim(@Alias))+'.','')+'['+b.name+']' + CHAR(13)+CHAR(10)
from sys.tables a
Inner join sys.all_columns b
on (a.object_id=b.object_id)
Where a.name=ltrim(rtrim(@Tabla));
Select ' '+Substring(@Colums,2,len(@Colums)-2);
End
```
So what you can do is configure a shortcut to execute that procedure.
this is the result when I press the shortcut ctrl+f1 over a table name:
[](https://i.stack.imgur.com/qPc9S.png)
As you can see the procedure has two parameters, the second parameter is to send an alias, this is an example:
[](https://i.stack.imgur.com/3ChiT.png)
|
In SQL Server Management Studio, you can do what you want.
On the left side of the screen, expand the database to get the table name. Then, drag the table name over to the query pane and it will list all the columns.
|
Edit asterisk symbol into list of column names in SSMS by wildcard expansion
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2017",
"sql-server-2019",
""
] |
I have this query that's taking forever to run. The table contains about 7 million rows. Everything else I'm doing with it (it's a "temporary" permanent table) is going relatively quickly (an hour or so) while this one UPDATE alone took 7 hours! We have SQL Server 2014.
`DOI` is an `NVARCHAR(72)` and has a non-unique `CLUSTERED` index on it. `Affiliations` is a `VARCHAR(8000)`. I'm not really allowed to change these data types. `Affiliations` has an index on it as an include. We couldn't do a "regular" index since the field is so big.
```
CREATE NONCLUSTERED INDEX IX_Affiliations
ON TempSourceTable (DOI) INCLUDE (Affiliations);
```
What the statement below does is set a bit field to 1 if all the records for a `DOI` have the same value in their `Affiliations` column. This table has multiple records per `DOI` value, and we want to know if the `Affiliations` column is the same for all of the records with that same `DOI` or not.
Is there any way I can speed this up, by writing a different query, a different index or am I going about this all wrong?
```
UPDATE S
SET AffiliationsSameForAllDOI = 1
FROM TempSourceTable S
WHERE NOT EXISTS (SELECT 1
FROM TempSourceTable S2
WHERE S2.DOI = S.DOI
AND S2.Affiliations <> S.Affiliations)
```
|
Here is another way
`SUB-QUERY` version
```
UPDATE TempSourceTable
SET AffiliationsSameForAllDOI = 1
WHERE doi IN (SELECT doi
FROM TempSourceTable S
GROUP BY DOI
HAVING COUNT(DISTINCT Affiliations) = 1)
```
`EXISTS` Version
```
UPDATE TempSourceTable S
SET AffiliationsSameForAllDOI = 1
WHERE EXISTS (SELECT 1
FROM TempSourceTable S1
Where s1.DOI = s.DOI
HAVING COUNT(DISTINCT Affiliations) = 1)
```
`INNER JOIN` Version
```
UPDATE S
SET AffiliationsSameForAllDOI = 1
FROM TempSourceTable S
INNER JOIN (SELECT doi
FROM TempSourceTable
GROUP BY DOI
HAVING COUNT(DISTINCT Affiliations) = 1) S1
ON S.DOI = S1.DOI
```
* [FIDDLE DEMO](http://www.sqlfiddle.com/#!3/d7cfc/1)
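The `COUNT(DISTINCT ...) = 1` sub-query version can also be tried end to end on SQLite (used here purely as a stand-in for SQL Server 2014; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("""CREATE TABLE TempSourceTable
               (DOI TEXT, Affiliations TEXT,
                AffiliationsSameForAllDOI INTEGER DEFAULT 0)""")
con.executemany(
    "INSERT INTO TempSourceTable (DOI, Affiliations) VALUES (?, ?)",
    [("a", "x"), ("a", "x"),        # one distinct value -> flag set
     ("b", "x"), ("b", "y")])       # two distinct values -> flag stays 0

# Sub-query version from above, run against SQLite
con.execute("""UPDATE TempSourceTable
               SET AffiliationsSameForAllDOI = 1
               WHERE DOI IN (SELECT DOI FROM TempSourceTable
                             GROUP BY DOI
                             HAVING COUNT(DISTINCT Affiliations) = 1)""")
flags = dict(con.execute(
    "SELECT DOI, MAX(AffiliationsSameForAllDOI) "
    "FROM TempSourceTable GROUP BY DOI"))
```

DOI "a" gets flagged (all its rows share one Affiliations value) while "b" does not.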
|
```
update TempSourceTable
set AffiliationsSameForAllDOI = 1
where DOI in (
select DOI
from TempSourceTable
group by DOI
having count(distinct Affiliations) = 1
)
```
Depending on what your data looks like maybe you'd have some luck with performance by creating a computed column that strips out say the first 16 characters from `Affiliations` or just using the `checksum()` and then indexing on that column instead. Perhaps it would look something like this:
```
update TempSourceTable
set AffiliationsSameForAllDOI = 1
where DOI in (
select DOI
from TempSourceTable
where DOI in (
select DOI
from TempSourceTable
group by DOI
having count(distinct AffiliationsChecksum) = 1
)
group by DOI
having count(distinct Affiliations) = 1
)
```
|
How to speed up SQL query with JOIN on large varchar field and a NOT EXISTS
|
[
"",
"sql",
"sql-server",
"performance",
"t-sql",
""
] |
I need to `convert` a `Time` value into an `integer` value.
For e.g.: if the `time` value is `'07:00 PM'`, I need to `convert` it into `19`.
I have tried below query to do this.
```
select CAST(substring(CONVERT(varchar,convert(smalldatetime,'07:00 PM'),114), 1,2) as int) as VALUE
```
It returns the desired output, but it's taking too much time in the case of `big transactions`. Can anyone suggest an alternative solution (a simpler one, perhaps)?
I can't use `datepart(hour, 'timevalue')`, because there are rows with time values of `'00:00 AM'` and `'00:00 PM'` which throw a conversion error. In those cases I need to return the value as `0`.
|
You seem to want to extract the hour from the time. Use `datepart()`:
```
select datepart(hour, timecol)
```
|
By using DATEPART() ...
```
DECLARE @DATE DATETIME = N'07:00 PM'
SELECT DATEPART(hh, @DATE)
```
|
Convert Time value into Integer in sql server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a bit of a brain lock on this one.
**TLDR;**
How do I implement "mark as read" for million items efficiently?
I have a use case of marking all articles as read. Tens of thousands.
The solution I have now works fine for a small number of articles, but it's very, very slow with big numbers.
It's essentially because for every read article I am populating one table with a new row.
It doesn't matter whether I do 100 thousand inserts in one transaction or use an insert into select ( bla bla bla ); it still takes ages.
I was wondering about switching to postgresql's arrays, but I'm not sure how this will work with hundreds of thousands items.
Any suggestions what will be the best approach here?
I have 3 tables:
```
articles
- id
- title
users
- id
- name
```
and a table with mapping which article given user has read.
```
read_articles
- article_id - foreign key
- user_id - foreign key
```
|
Why don't you remove the foreign keys from the `read_articles` table and create a unique index on both of the columns? This should speed up the search and insert queries. To check whether an article has been read, you can use a subquery in the main query, and when inserting the row into the table you could use an upsert statement.
I think it is a waste of time and CPU to link the articles and the user id with foreign keys: yes, you do need consistency, but it is not that crucial here.
|
```
insert into read_articles (article_id, user_id)
select a.id, u.id
from articles a cross join users u
```
|
Mark all as read
|
[
"",
"sql",
"postgresql",
""
] |
I have a table something like:
```
stuff type price
first_stuff 1 43
second_stuff 2 46
third_stuff 3 24
fourth_stuff 2 12
fifth_stuff NULL 90
```
And every `type` of stuff is assigned a description, which is not stored in the DB
```
1 = Bad
2 = Good
3 = Excellent
NULL = Not_Assigned
```
All I want is to return a table which counts each `type` separately, something like:
```
Description Count
Bad 1
Good 2
Excellent 1
Not_Assigned 1
```
|
```
DECLARE @t TABLE ([type] INT)
INSERT INTO @t ([type])
VALUES (1),(2),(3),(2),(NULL)
SELECT
[Description] =
CASE t.[type]
WHEN 1 THEN 'Bad'
WHEN 2 THEN 'Good'
WHEN 3 THEN 'Excellent'
ELSE 'Not_Assigned'
END, t.[Count]
FROM (
SELECT [type], [Count] = COUNT(*)
FROM @t
GROUP BY [type]
) t
ORDER BY ISNULL(t.[type], 999)
```
output -
```
Description Count
------------ -----------
Bad 1
Good 2
Excellent 1
Not_Assigned 1
```
|
```
;WITH CTE_TYPE
AS (SELECT DESCRIPTION,
VALUE
FROM (VALUES ('BAD',
1),
('GOOD',
2),
('EXCELLENT',
3))V( DESCRIPTION, VALUE )),
CTE_COUNT
AS (SELECT C.DESCRIPTION,
Count(T.TYPE) TYPE_COUNT
FROM YOUR_TABLE T
JOIN CTE_TYPE C
ON T.TYPE = C.VALUE
GROUP BY TYPE,
DESCRIPTION
UNION ALL
SELECT 'NOT_ASSIGNED' AS DESCRIPTION,
Count(*) TYPE_COUNT
FROM YOUR_TABLE
WHERE TYPE IS NULL)
SELECT *
FROM CTE_COUNT
```
|
SQL Server case when or enum
|
[
"",
"sql",
"sql-server",
"enums",
"case-when",
""
] |
How can I find the total count of rows across different tables?
I am using MySQL and I want a combined count of the rows in both tables.
This is what I have tried so far:
```
SELECT COUNT(*) FROM student_it;
UNION ALL
SELECT COUNT(*) FROM student_cs;
SUM(COUNT(*));
```
|
You could also try the following sql:
```
SELECT SUM(tot) FROM
(
SELECT COUNT(*) tot FROM `student_it`
UNION ALL
SELECT COUNT(*) tot FROM `student_cs`
) sub
```
But I think you should consider storing all students in one `student` table.
|
You can use `sp_msforeachtable` like this:
```
Use <your_db>;
EXEC sp_MSforeachtable N'select count(*) from ?;';
```
Also if you need to filter the data you can use the `@whereand` parameter.
|
count number of rows different table database
|
[
"",
"mysql",
"sql",
"count",
""
] |
I have a list, for instance: 1,2,5,6,8,12,15
I'm trying to come up with an SQL query that returns to me a list of numbers, from the previous list, not present in another list.
So, suppose I'm getting all id's from a table and those are: 1,3,7,8,15
The resultset should be: 2,5,6,12
Because those were the numbers not present in the second list, but present in the first.
I thought this one would be easy, but I'm stumped. Googling it has yielded no results I can use, just listing things about lists and left joins.
|
```
with a (id) as (values
(1),(2),(5),(6),(8),(12),(15)
), b (id) as (values
(1),(3),(7),(8),(15)
)
select id from a
except all
select id from b
;
id
----
6
5
12
2
```
<http://www.postgresql.org/docs/current/static/sql-select.html#SQL-EXCEPT>
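For a quick runnable check, essentially the same query works on SQLite (which supports `EXCEPT` but not `EXCEPT ALL`; the difference only matters when a list contains duplicates). Shown next to the equivalent anti-join formulation, both on invented tables:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE a (id INTEGER)")
con.execute("CREATE TABLE b (id INTEGER)")
con.executemany("INSERT INTO a VALUES (?)",
                [(1,), (2,), (5,), (6,), (8,), (12,), (15,)])
con.executemany("INSERT INTO b VALUES (?)",
                [(1,), (3,), (7,), (8,), (15,)])

# Set-difference directly
except_ids = sorted(r[0] for r in con.execute(
    "SELECT id FROM a EXCEPT SELECT id FROM b"))

# Anti-join: keep a's rows with no match in b
antijoin_ids = sorted(r[0] for r in con.execute(
    "SELECT a.id FROM a LEFT JOIN b ON a.id = b.id WHERE b.id IS NULL"))
```

Both formulations return the ids present in the first list but missing from the second: 2, 5, 6 and 12.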
|
I would recommend using a left join and checking for nulls.
```
with a (id) as (values
(1),(2),(5),(6),(8),(12),(15)
), b (id) as (values
(1),(3),(7),(8),(15)
)
select a.id from a
left join b on a.id=b.id
where b.id is null;
```
|
PostgreSQL: Get all numbers in a list that were not present in another list
|
[
"",
"sql",
"list",
"postgresql",
""
] |
I have a query which return a table something like:
```
Value Description
--------------------
12 Decription1
43 Decription2
78 Decription3
3 Decription4
6 Decription5
```
My `select` looks like
```
select
sum(value), description
from
table
group by
description
```
There are 5 rows, and the sum of all 5 rows is 142, which is 100%. Now I want to find out what percentage the first row represents, the second, and so on. I know the formula: e.g., for the first row it is 12\*100/142. How do I do this in my query?
Thanks!
|
You can use windowed functions:
```
SELECT *,
[percentage] = ROUND(100 * 1.0 * value / SUM(value) OVER(), 0)
FROM #tab;
```
`LiveDemo`
Output:
```
╔═══════╦═════════════╦════════════╗
║ value ║ Description ║ Percentage ║
╠═══════╬═════════════╬════════════╣
║ 12 ║ Decription1 ║ 8 ║
║ 43 ║ Decription2 ║ 30 ║
║ 78 ║ Decription3 ║ 55 ║
║ 3 ║ Decription4 ║ 2 ║
║ 6 ║ Decription5 ║ 4 ║
╚═══════╩═════════════╩════════════╝
```
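The same calculation can be sketched on SQLite using a scalar subquery for the total; `SUM(value) OVER ()` is the window-function equivalent on engines that support it (the table and data below mirror the question):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE tab (value INTEGER, description TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?)",
                [(12, 'Decription1'), (43, 'Decription2'),
                 (78, 'Decription3'), (3, 'Decription4'),
                 (6, 'Decription5')])

# Each row's share of the grand total, rounded to a whole percent.
# The multiplication by 100.0 forces floating-point division.
pct = {d: p for d, p in con.execute(
    """SELECT description,
              CAST(ROUND(100.0 * value / (SELECT SUM(value) FROM tab), 0)
                   AS INTEGER)
       FROM tab""")}
```

With a total of 142 this reproduces the 8/30/55/2/4 percentages shown in the output table above.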
|
```
declare @total int;
select @total = Sum(Value) from table
select (sum(value) * 100.0)/@total as percentage , description
from table
group by description
```
|
SQL Server calculate percents of values for each row
|
[
"",
"sql",
"sql-server",
"select",
""
] |
Below is my table called 'datapoints'. I am trying to retrieve instances where there are different instances of 'sensorValue' for the same 'timeOfReading' and 'sensorNumber'.
For example:
```
sensorNumber sensorValue timeOfReading
5 5 6
5 5 6
5 6 10 <----same time/sensor diff value!
5 7 10 <----same time/sensor diff value!
```
Should output: sensorNumber:5, timeOfReading: 10 as a result.
[](https://i.stack.imgur.com/RrKZU.png)
I understand this is a duplicate question; in fact, one of the links is provided below for reference. However, none of the solutions are working, as my query simply never ends.
Below is my SQL code:
```
SELECT table1.sensorNumber, table1.timeOfReading
FROM datapoints table1
WHERE (SELECT COUNT(*)
FROM datapoints table2
WHERE table1.sensorNumber = table2.sensorNumber
AND table1.timeOfReading = table1.timeOfReading
AND table1.sensorValue != table2.sensorValue) > 1
AND table1.timeOfReading < 20;
```
Notice I have placed a bound for timeOfReading as low as 20. I also tried setting a bound for both table1 and table 2 as well but the query just runs until timeout without displaying results no matter what I put...
The database contains about 700 MB of data, so I do not think I can just run this on the entire DB in a reasonable amount of time; I am wondering if this is the culprit?
If so, how could I properly limit my query to run the search efficiently? If not, what am I doing wrong that this is not working?
[Select rows having 2 columns equal value](https://stackoverflow.com/questions/7076766/select-rows-having-2-columns-equal-value)
EDIT:
Error Code: 2013. Lost connection to MySQL server during query 600.000 sec
When I try to run the query again I get this error unless I restart
Error Code: 2006. MySQL server has gone away 0.000 sec
|
You can use a self-JOIN to match related rows in the same table.
```
SELECT DISTINCT t1.sensorNumber, t1.timeOfReading
FROM datapoints AS t1
JOIN datapoints AS t2
ON t1.sensorNumber = t2.sensorNumber
AND t1.timeOfReading = t2.timeOfReading
AND t1.sensorValue != t2.sensorValue
WHERE t1.timeOfReading < 20
```
[DEMO](http://www.sqlfiddle.com/#!9/beeb0/1)
To improve performance, make sure you have a composite index on `sensorNumber` and `timeOfReading`:
```
CREATE INDEX ix_sn_tr on datapoints (sensorNumber, timeOfReading);
```
|
I think you have missed a condition. Add a not condition also to retrieve only instances with different values.
```
SELECT *
FROM new_table a
WHERE EXISTS (SELECT * FROM new_table b
WHERE a.num = b.num
AND a.timeRead = b.timeRead
AND a.value != b.value);
```
|
Retrieving rows that have 2 columns matching and 1 different
|
[
"",
"mysql",
"sql",
""
] |
Assuming I have a column named `creation_timestamp` on a table named `bank_payments`, I would like to break today into five-minute intervals, and then query the database for the count in each of those intervals.
I'm going to read this manually (i.e. this is not for consumption by an application), so the output format does not matter as long as I can use it to get the five-minute time period, and the count of records in that period.
Is it possible to do this entirely on the side of the database?
|
If you want to group the records in your table into 5-minute intervals, you can try this:
```
SELECT col1, count(col1), creation_timestamp
FROM bank_payments
WHERE DATE(`creation_timestamp`) = CURDATE()
GROUP BY UNIX_TIMESTAMP(creation_timestamp) DIV 300, col1
```
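The `DIV 300` bucketing is plain integer arithmetic, which a short Python sketch makes explicit (the sample timestamps are invented; seconds since midnight are used here to stay timezone-neutral, whereas `UNIX_TIMESTAMP` uses seconds since the epoch):

```python
import datetime
from collections import Counter

def five_min_bucket(ts):
    # Same arithmetic as UNIX_TIMESTAMP(creation_timestamp) DIV 300:
    # integer-divide by 300 seconds, i.e. floor to the interval start.
    secs = ts.hour * 3600 + ts.minute * 60 + ts.second
    start = secs - secs % 300
    return '%02d:%02d' % (start // 3600, start % 3600 // 60)

payments = [datetime.datetime(2015, 12, 27, 10, 1, 5),
            datetime.datetime(2015, 12, 27, 10, 4, 59),
            datetime.datetime(2015, 12, 27, 10, 5, 0),
            datetime.datetime(2015, 12, 27, 10, 12, 30)]
counts = Counter(five_min_bucket(p) for p in payments)
```

The first two timestamps land in the 10:00 bucket, the boundary value 10:05:00 starts a new bucket, and 10:12:30 falls into 10:10.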
|
Yes. Here is one method:
```
select sec_to_time(floor(time_to_sec(time(datetimecol)*5/60))), count(*)
from t
where t.datetimecol >= curdate() and
t.dattimeecol < curdate() + interval 1 day
group by 1
order by 1;
```
|
How can I get the number of records for today in five-minute intervals?
|
[
"",
"mysql",
"sql",
""
] |
So, code is very simple:
```
var result = dbContext.Skip(x).Take(y).ToList();
```
When x is big (~1.000.000), the query is very slow. y is small: 10 or 20.
SQL code for this is: (from sql profiler)
```
SELECT ...
FROM ...
ORDER BY ...
OFFSET x ROWS FETCH NEXT y ROWS ONLY
```
The question is if anybody knows how to speed up such paging?
|
I think `OFFSET` .. `FETCH` is very useful when browsing the first pages from your large data (which is happening very often in most applications) and have a performance drawback when querying high order pages from large data.
Check this [article](https://web.archive.org/web/20190925145935/http://www.mssqlgirl.com/paging-function-performance-in-sql-server-2012.html) for more details regarding performance and alternatives to `OFFSET` .. `FETCH`.
Try to apply as many filters as possible to your data before paging, so that paging runs against a smaller data volume. It is hard to imagine that the user wants to navigate through 1M rows.
|
You are right on that, Skip().Take() approach is slow on SQL server. When I noticed that I used another approach and it worked good. Instead of using Linq Skip().Take() - which writes the code you showed - , I explicitly write the SQL as:
```
select top NTake ... from ... order by ... where orderedByValue > lastRetrievedValue
```
this one works fast (considering I have index on the ordered by column(s)).
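This "seek" (keyset) idea can be compared against OFFSET paging in a small SQLite sketch (the table and data are invented; both queries return the same page, but the keyset form lets an index jump straight to the start of the page instead of walking past every skipped row):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, 'row %d' % i) for i in range(1, 101)])

page_size = 10

# OFFSET paging: the engine still scans over the 50 skipped rows
offset_page = [r[0] for r in con.execute(
    "SELECT id FROM t ORDER BY id LIMIT ? OFFSET ?", (page_size, 50))]

# Keyset paging: seek past the last id seen on the previous page
last_seen = 50
keyset_page = [r[0] for r in con.execute(
    "SELECT id FROM t WHERE id > ? ORDER BY id LIMIT ?",
    (last_seen, page_size))]
```

Both produce ids 51 through 60; the difference is the work the engine does to get there, which is what dominates when the skip count reaches the millions.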
|
Entity Framework Skip/Take is very slow when number to skip is big
|
[
"",
"sql",
"performance",
"entity-framework",
"linq",
""
] |
I have a table named PublishedData, see image below
[](https://i.stack.imgur.com/jMfjy.png)
I'm trying to get the output like, below image
[](https://i.stack.imgur.com/Znevm.png)
|
I think you can use a query like this:
```
SELECT dt.DistrictName, ISNULL(dt.Content, 'N/A') Content, dt.UpdatedDate, mt.LastPublished, mt.Unpublished
FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY DistrictName ORDER BY UpdatedDate DESC, ISNULL(Content, 'zzzzz')) seq
FROM PublishedData) dt
INNER JOIN (
SELECT DistrictName, MAX(LastPublished) LastPublished, COUNT(CASE WHEN IsPublished = 0 THEN 1 END) Unpublished
FROM PublishedData
GROUP BY DistrictName) mt
ON dt.DistrictName = mt.DistrictName
WHERE
dt.seq = 1;
```
Because I think you need an ordering over `UpdatedDate, Content` to obtain your first two columns.
|
Check out something like this (I don't have your tables, but you will get the idea where to follow with your query):
```
SELECT DirectName,
MAX(UpdatedDate),
MAX(LastPublished),
(
SELECT COUNT(*)
FROM PublishedData inr
WHERE inr.DirectName = outr.DirectName
AND inr.IsPublished = 0
) AS Unpublished
FROM PublishedData outr
GROUP BY DirectName
```
|
Group by not working to get count of a column with other max record in sql
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |