Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I need to remove the sixth character from an `nvarchar` string in SQL. I need to do this on every row for the column.
For example:
```
111116111111
```
The length of every record is `12` characters, so I would need to make it `11` characters and remove the `6`th.
Can anyone assist me? Thank you. | You could just do
```
UPDATE table
set column = LEFT(column,5) + RIGHT (column,6)
```
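If the database is SQL Server (the `nvarchar` type suggests it), the `STUFF` function can also delete the character in place — a sketch, untested against your schema:

```
UPDATE table
SET column = STUFF(column, 6, 1, '')
```

`STUFF(str, 6, 1, '')` replaces 1 character starting at position 6 with an empty string, so it works even if the strings are not all 12 characters long.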
But maybe someone has a fancier way of achieving the same result. | ```
select left([ColumnName],5)+right([ColumnName],6)
```
such as below:
```
declare @Something as nvarchar(12)
set @Something = '111116111111'
select @Something, left(@Something,5)+right(@Something,6)
```
If it gives conversion errors (such as "cannot convert datatype nvarchar to numeric"), you may have to cast both sides of the expression. | removing the sixth character from a string | [
"",
"sql",
"sql-server",
""
] |
I am trying to show some stats for a fictitious game.
There's a Team model, a Player model, and a Run model.
I am able to get runs for a particular month in the player model:
```
def count_runs(date)
self.runs.count(:conditions => {:created_at => (date.beginning_of_month..date.end_of_month)})
end
```
I am able to get them in the correct order in the Team controller and model:
```
@players = @team.players_by_count(Date.today)
def players_by_count(date)
@date = date
self.players.all.sort_by{|p| [-p.count_runs(@date)]}
end
```
I display that in a table to show their position:
```
<table>
<% @players.each_with_index do |player, index| %>
<tr>
<td><%= (index+1).ordinalize %></td>
<td><%= player.name %></td>
<td><%= player.count_runs(Date.today) %></td>
</tr>
<% end %>
</table>
```
My schema is as follows:
```
create_table "teams", :force => true do |t|
t.string "name"
t.datetime "created_at", :null => false
t.datetime "updated_at", :null => false
end
create_table "players", :force => true do |t|
t.string "name"
t.integer "team_id"
t.datetime "created_at", :null => false
t.datetime "updated_at", :null => false
end
create_table "runs", :force => true do |t|
t.integer "player_id"
t.datetime "created_at", :null => false
t.datetime "updated_at", :null => false
end
```
I want to be able to work out what their end-of-the-month average position has been. So at the end of every month they were in positions (1st, 3rd, 1st, 5th / 4 months) = Avg position = 2.5
I'm also trying to figure out how I'd get the winner (top placed player) for each month.
Any ideas? | I would suggest creating a model that just stored these monthly aggregations:
```
create_table "placements", :force => true do |t|
t.integer "player_id"
t.date "month"
t.integer "runs_count", :default => 0
end
```
So after each run is created, perhaps add a callback like so (you will also want to handle the case where a run is deleted):
```
def aggregate!
# Many different ways to accomplish this, this is just the first I thought of.
  # Note: first_or_create's block only runs when a NEW record is created,
  # so increment the counter after fetching or creating the row.
  placement = Placement.where(player_id: self.player_id, month: self.created_at.beginning_of_month).first_or_create
  placement.increment!(:runs_count)
end
```
This approach will allow you to then do the following:
```
class Player < ActiveRecord::Base
has_many :placements
belongs_to :team
end
class Team < ActiveRecord::Base
has_many :players
  has_many :placements, through: :players
end
class Placement < ActiveRecord::Base
belongs_to :player
end
```
Which will allow you to then do the following:
```
@players = @team.players_by_count(Date.today)
```
becomes
```
@placements = @team.placements.includes(:player).where(month: Date.today.beginning_of_month).order('runs_count DESC')
```
That way you get the Player, their runs\_count AND their positions.
So in your view you could display them like so:
```
<table>
<% @placements.each_with_index do |placement, index| %>
<tr>
<td><%= (index+1).ordinalize %></td>
<td><%= placement.player.name %></td>
<td><%= placement.runs_count %></td>
</tr>
<% end %>
</table>
```
If you want to get their average placements, I'd recommend you add a new column to placements, which will store their position for that given month, so you can do the following:
```
positions = player.placements.pluck(:position)
average = positions.sum / positions.size.to_f
```
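In raw SQL terms (a sketch assuming the `position` column suggested above, with hypothetical ids and dates), the average and the monthly winner look like this:

```
-- average end-of-month position for one player
SELECT AVG(position) AS average_position
FROM placements
WHERE player_id = 1;

-- winner (top-placed player) for a given month
SELECT player_id, runs_count
FROM placements
WHERE month = '2013-08-01'
ORDER BY runs_count DESC
LIMIT 1;
```

That second query also answers the "who won each month" part of the question.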
The reason I suggested this path is that I saw how query-intensive your current implementation is. If you had 10 players on a team, your `@team.players_by_count(Date.today)` would create 11 queries (1 + players\_count), whereas `@team.placements.includes(:player).where(month: Date.today.beginning_of_month).order('runs_count DESC')` is 2 queries regardless of player count.
Hope this helps! | If you wanted to add another column to your table, I'd create an instance method to handle the calculation:
```
def avg
positions = runs.map(&:position)
  avg = positions.inject(:+) / positions.size.to_f # to_f avoids integer division
return avg
end
```
You could also perform an [SQL query to calculate the average](http://guides.rubyonrails.org/active_record_querying.html):
```
def avg
  runs.average(:position)
end
``` | How to work out average position over a certain period | [
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
So, a bit of an amateur question. I haven't worked much with `MySQL`, but I need to write a *query* against a table and I am not sure where I am going.
The table `core_user` has the following fields:
```
user_id
user_display_name
user_name
user_password
user_email
user_group
user_role
date_registered
language
timezone
region
```
I need to write a query to pull the following `Column`s:
```
user_display_name
user_name
user_password
user_email
```
Thing is there are sometimes up to 20 users that can have the same `user_display_name`. I only want the query to return one result per `user_display_name`. It doesn't matter who the user is but I can't have 20 people from the same company logging into my new module.
Can someone please give me a hand in writing this query? I have read about doing a `SELF JOIN` to do this, but I have no idea where to begin. | You can `GROUP BY` the **user\_display\_name** field as follows:
```
SELECT user_display_name, user_name, user_password, user_email
FROM core_user
GROUP BY user_display_name
```
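One caveat, depending on your MySQL version: with the `ONLY_FULL_GROUP_BY` SQL mode (on by default from MySQL 5.7), selecting nonaggregated columns like this is rejected; wrapping them in an aggregate keeps the query valid:

```
SELECT user_display_name,
       MIN(user_name)     AS user_name,
       MIN(user_password) AS user_password,
       MIN(user_email)    AS user_email
FROM core_user
GROUP BY user_display_name
```

Since you don't care which user is returned per display name, `MIN()` is an acceptable way to pick one deterministically.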
This works well in `MySQL`, listing *unique* display names as you suggested in the question description:
> I only want the query to return one result per user\_display\_name | You can use `DISTINCT` to get only one row for each distinct combination of the selected columns.
`SELECT DISTINCT user_display_name, user_name, user_password, user_email
FROM core_user` | SELECT from single table one user per company | [
"",
"mysql",
"sql",
"self-join",
""
] |
I have a table of user data in my SQL Server database and I am attempting to summarize the data. Basically, I need some min, max, and sum values and to group by some columns
Here is a sample table:
```
Member ID | Name | DateJoined | DateQuit | PointsEarned | Address
00001 | Leyth | 1/1/2013 | 9/30/2013 | 57 | 123 FirstAddress Way
00002 | James | 2/1/2013 | 7/21/2013 | 34 | 4 street road
00001 | Leyth | 2/1/2013 | 10/15/2013| 32 | 456 LastAddress Way
00003 | Eric | 2/23/2013 | 4/14/2013 | 15 | 5 street road
```
I'd like the summarized table to show the results like this:
```
Member ID | Name | DateJoined | DateQuit | PointsEarned | Address
00001 | Leyth | 1/1/2013 | 10/15/2013 | 89 | 123 FirstAddress Way
00002 | James | 2/1/2013 | 7/21/2013 | 34 | 4 street road
00003 | Eric | 2/23/2013 | 4/14/2013 | 15 | 5 street road
```
Here is my query so far:
```
Select MemberID, Name, Min(DateJoined), Max(DateQuit), SUM(PointsEarned), Min(Address)
From Table
Group By MemberID
```
The `Min(Address)` works this time; it retrieves the address that corresponds to the earliest DateJoined. However, if we swapped the two addresses in the original table, we would retrieve "123 FirstAddress Way", which would not correspond to the 1/1/2013 date joined. | For almost everything you can use a simple GROUP BY, but needing "the same address as the row with the minimum DateJoined" is a little trickier. You can solve it in several ways; one is a subquery that looks up the address each time:
```
SELECT
X.*,
(select Address
from #tmp t2
where t2.MemberID = X.memberID and
t2.DateJoined = (select MIN(DateJoined)
from #tmp t3
where t3.memberID = X.MemberID))
FROM
(select MemberID,
Name,
MIN(DateJoined) as DateJoined,
MAX(DateQuit) as DateQuit,
SUM(PointsEarned) as PointEarned
from #tmp t1
group by MemberID,Name
) AS X
```
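Another route, assuming SQL Server 2012 or later (an untested sketch): window functions can produce the same result in a single pass over the table:

```
SELECT DISTINCT
       MemberID,
       Name,
       MIN(DateJoined)      OVER (PARTITION BY MemberID) AS DateJoined,
       MAX(DateQuit)        OVER (PARTITION BY MemberID) AS DateQuit,
       SUM(PointsEarned)    OVER (PARTITION BY MemberID) AS PointsEarned,
       FIRST_VALUE(Address) OVER (PARTITION BY MemberID ORDER BY DateJoined) AS Address
FROM #tmp
```

`FIRST_VALUE` picks the address of the earliest DateJoined row per member, and `DISTINCT` collapses the duplicated rows the window functions produce.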
Another option is a subquery with a join:
```
SELECT
X.*,
J.Address
FROM
(select
MemberID,
Name,
MIN(DateJoined) as DateJoined,
MAX(DateQuit) as DateQuit,
SUM(PointsEarned) as PointEarned
from #tmp t1
group by MemberID,Name
) AS X
JOIN #tmp J ON J.MemberID = X.MemberID AND J.DateJoined = X.DateJoined
``` | You could `rank` your rows according to the date, and select the minimal one:
```
SELECT t.member_id,
       t.name,
       t.date_joined,
       t.date_quit,
       t.points_earned,
       addr.address
FROM (SELECT member_id,
             name,
             MIN(date_joined) AS date_joined,
             MAX(date_quit) AS date_quit,
             SUM(points_earned) AS points_earned
      FROM my_table
      GROUP BY member_id, name) t
JOIN (SELECT member_id,
             address,
             RANK() OVER (PARTITION BY member_id ORDER BY date_joined) AS rk
      FROM my_table) addr ON addr.member_id = t.member_id AND addr.rk = 1
``` | Find Min Value and value of a corresponding column for that result | [
"",
"sql",
"select",
""
] |
I am trying to select items from a table that contains the followings columns: `productName`, `ProductPrice` and `ProductCategory`. The products in this table have been categorised into Alcoholic and Soft Drinks.
I am trying to write a SQL statement that will select the `productName` and `ProductPrice`, but only select the products that have been categorized as soft drinks and leave out the the products that have been categorised as Alcoholic.
The SQL that I am trying to write so far is:
```
SELECT *
FROM Products
WHERE ProductPrice BETWEEN =''100' AND '300'" ...
```
This is where I am stuck: how do I select only the products that have been categorised as 'Soft Drinks'? | This is good,
```
SELECT * FROM Products
WHERE ProductPrice BETWEEN '100' AND '300'
```
Here's what you need to add -
```
AND ProductCategory = 'Soft Drinks'
```
To get the final query -
```
SELECT * FROM Products
WHERE ProductPrice BETWEEN '100' AND '300'
AND ProductCategory = 'Soft Drinks'
``` | ```
select
productname,
productprice
from
productsTable
where
  productprice between 100 and 300
  and productcategory = 'soft drinks';
``` | How to write this SQL Command | [
"",
"sql",
""
] |
I have a couple of tables. One table with Groups:
```
[ID] - [ParentGroupID]
1 - NULL
2 1
3 1
4 2
```
And another with settings
```
[Setting] - [GroupId] - [Value]
Title 1 Hello
Title 2 World
```
Now I'd like to get **"Hello"** back if I'd query the Title for Group 3
And I'd like to get **"World"** back if I'd query the Title for Group 4 (And not "Hello" as well)
Is there any way to efficiently do this in MSSQL? At the moment I am resolving this recursively in code, but I was hoping that SQL could solve this problem for me. | This is a problem we've encountered multiple times in our company. This would work for any case, including when the settings can be set only at some levels and not others (see the SQL Fiddle at <http://sqlfiddle.com/#!3/16af0/1/0>):
```
With GroupSettings(group_id, parent_group_id, value, current_level)
As
(
Select g.id as group_id, g.parent_id, s.value, 0 As current_Level
From Groups As g
Join Settings As s On s.group_id = g.id
Where g.parent_id Is Null
Union All
Select g.id, g.parent_id, Coalesce((Select value From Settings s Where s.group_id=g.id), gs.value), current_level+1
From GroupSettings as gs
Join Groups As g On g.parent_id = gs.group_id
)
Select *
From GroupSettings
Where group_id=4
``` | Don't know the SQL Server syntax, but something like the following?
```
SELECT settings.value
FROM settings
JOIN groups ON settings.groupid = groups.parentgroupid
WHERE settings.setting = 'Title'
AND groups.id = 3
``` | How to get first entry with a value from an hierarchical setting structure? | [
"",
"sql",
"sql-server",
""
] |
I am currently creating a full install script for Magento and Node.js/Grunt
I'd like to skip installation steps.
One of my goals is to configure skin and package directly without accessing the admin (I know how i can do that in the magento admin)
So, does someone know which table(s) contain the package and skin name?
Thanks a lot | It is in **core\_config\_data**; just check it against your **scope** and **scope\_id** if you have a multistore setup.
Below are the **path** values you can find in the table:
**design/theme/skin**
**design/theme/template**
**design/theme/layout**
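For example, assuming a stock Magento 1 install (where the package name itself lives under **design/package/name**), a query along these lines shows the current values:

```
SELECT scope, scope_id, path, value
FROM core_config_data
WHERE path IN ('design/package/name',
               'design/theme/template',
               'design/theme/skin',
               'design/theme/layout');
```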
Hope this helps. | You can find it in the table core\_config\_data,
with path:
```
- design/theme/locale
- design/theme/template
- design/theme/skin
- design/theme/layout
``` | Magento - Change Skin and Package in DataBase | [
"",
"sql",
"database",
"magento",
"package",
"skin",
""
] |
What I'm trying to do is show all the information about the countries from Asia and Europe, but not the other continents. Then order them by continent. I'm just missing something obvious, but what is it?
If I put `AND` between Asia and Europe then it will try to show countries that are from both Asia and Europe, so that won't do but if I put OR then it will only display Asia.
What my code looks like;
```
SELECT *
FROM country
WHERE Continent = 'Asia' AND 'Europe'
ORDER BY Continent DESC;
``` | ```
SELECT * FROM country
WHERE Continent = 'Asia' OR Continent = 'Europe'
ORDER BY Continent DESC;
``` | Try following
```
SELECT * FROM country
WHERE Continent in ('Asia','Europe')
ORDER BY Continent DESC;
``` | SQL Show multiple columns | [
"",
"mysql",
"sql",
""
] |
I need to optimize/implement a MYSQL query wich do the following:
I need to store whether a user marked an item as seen, and create a SEEN/UNSEEN button that deletes the corresponding row in the database if it already exists (marked as seen) OR inserts a new row if it does not (not marked).
It means, I need a mysql query that do that:
```
DELETE FROM table WHERE userid = ? AND itemid = ?
```
or that
```
INSERT INTO table (userid, itemid) VALUES (?,?)
```
depending on if the row already exists or not.
I know it's very easy to do with PHP by checking the number of rows of a SELECT before the INSERT or DELETE, but I would like to do it as optimized as I can. | You can use a [stored procedure](http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html) to SELECT, then DELETE if the row exists or INSERT otherwise. But it would be much easier to just leave the data there and update seen/unseen with [ON DUPLICATE KEY UPDATE](http://dev.mysql.com/doc/refman/5.5/en/insert-on-duplicate.html).
Something like:
```
INSERT INTO table (userid, itemid, seen) VALUES (?,?,0) ON DUPLICATE KEY UPDATE seen = ((seen+1) MOD 2);
```
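For reference, a table definition along these lines would support that query (column names and types assumed from the question):

```
CREATE TABLE seen_items (
  userid INT NOT NULL,
  itemid INT NOT NULL,
  seen   TINYINT NOT NULL DEFAULT 0,
  -- the composite primary key is what makes ON DUPLICATE KEY UPDATE fire
  PRIMARY KEY (userid, itemid)
);
```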
The primary key should be formed by `userid` and `itemid`. The `MOD` operator will make it easy and efficient to implement the seen/unseen behaviour you want. | INSERT new row or DELETE old row IF it already exists | [
"",
"mysql",
"sql",
"insert",
"sql-delete",
"exists",
""
] |
I have a table like this in Firebird:
```
Tablename: USERNAMES
+--------+-----+
|username|code |
+--------+-----+
|a |1 |
+--------+-----+
|b |2 |
+--------+-----+
|c |3 |
+--------+-----+
|d |4 |
+--------+-----+
|e |5 |
+--------+-----+
```
and this table
```
Tablename: SERVICES
+-----------+-----+-----+-----+
|serviceno |user1|user2|user3|
+-----------+-----+-----+-----+
|v1 | |1 |2 |
+-----------+-----+-----+-----+
|v2 |3 |2 | |
+-----------+-----+-----+-----+
|v3 |5 |4 | |
+-----------+-----+-----+-----+
|v4 |3 |2 |1 |
+-----------+-----+-----+-----+
```
I want that table as
```
+-----------+-----+-----+-----+
|serviceno |user1|user2|user3|
+-----------+-----+-----+-----+
|v1 | |a |b |
+-----------+-----+-----+-----+
|v2 |c |b | |
+-----------+-----+-----+-----+
|v3 |e |d | |
+-----------+-----+-----+-----+
|v4 |c |b |a |
+-----------+-----+-----+-----+
```
I've tried that
```
SELECT IFF(A.USER1 = NULL,NULL,B.NAME),
IFF(A.USER2 = NULL,NULL,C.NAME),
IFF(A.USER3 = NULL,NULL,D.NAME)
FROM SERVICES A INNER JOIN USERNAMES B ON (A.USER1 =B.CODE)
INNER JOIN USERNAMES C ON (A.USER2 =C.CODE)
INNER JOIN USERNAMES D ON (A.USER3 =D.CODE)
```
And several attempts too,but allways the result is only row v4. why? and how can i get all rows? | I'm not familiar with Firebird, but here's a generic SQL solution:
```
select
serviceno,
(select username
from usernames
where services.user1 = usernames.code) as user1,
(select username
from usernames
where services.user2 = usernames.code) as user2,
(select username
from usernames
where services.user3 = usernames.code) as user3
from
services
``` | You should use `OUTER JOINS`:
```
SELECT serviceno,
IFF(A.USER1 = NULL,NULL,B.NAME),
IFF(A.USER2 = NULL,NULL,C.NAME),
IFF(A.USER3 = NULL,NULL,D.NAME)
FROM SERVICES A
LEFT OUTER JOIN USERNAMES B ON (A.USER1 =B.CODE)
LEFT OUTER JOIN USERNAMES C ON (A.USER2 =C.CODE)
LEFT OUTER JOIN USERNAMES D ON (A.USER3 =D.CODE)
```
An inner join only returns results if there is a match, and the result you want to have needs the rows even if `user1`, `user2`, or `user3` is null, and hence there is no match between e. g. `A` and `B`.
And you can simplify that to
```
SELECT serviceno,
B.NAME,
C.NAME,
D.NAME
FROM SERVICES A
LEFT OUTER JOIN USERNAMES B ON (A.USER1 =B.CODE)
LEFT OUTER JOIN USERNAMES C ON (A.USER2 =C.CODE)
LEFT OUTER JOIN USERNAMES D ON (A.USER3 =D.CODE)
``` | Which SQL clause should be used? | [
"",
"sql",
"firebird",
"clause",
""
] |
I am trying to get the highest-points row along with ranks, using a select query without rownum=rownum+1. I have tried the query below but I am missing something; see also this link: <http://sqlfiddle.com/#!2/fd7897/7>.
I am looking for a result where, for each receiver, I get the last entry, which would also be the highest-points entry rank-wise. I really appreciate any help. Thanks in advance.
Something like this:
('2', '4', 'test ...','2011-08-21 14:13:19','40','009')---rank1
('4', '2', 'test ...','2011-08-21 14:13:19','30','003')----rank2
('1', '3', 'test ...','2011-08-21 14:12:19','20','005')---- rank3
query so far:
```
SELECT * ,
(select count(*)
from tblA u2
where u2.points > u.points or
u2.points = u.points and u2.id <= u.id
) as rank
FROM (SELECT u.receiver, MAX(u.id) AS id
FROM tblA u
GROUP BY u.receiver
) subset JOIN
tblA u
ON subset.receiver = u.receiver AND subset.id = u.id order by rank;
```
tables:
```
CREATE TABLE if not exists tblA
(
id int(11) NOT NULL auto_increment ,
sender varchar(255),
receiver varchar(255),
msg varchar(255),
date timestamp,
points int(255),
refno varchar(255),
PRIMARY KEY (id)
);
CREATE TABLE if not exists tblB
(
id int(11) NOT NULL auto_increment ,
sno varchar(255),
name varchar(255),
PRIMARY KEY (id)
);
CREATE TABLE if not exists tblC
(
id int(11) NOT NULL auto_increment ,
data varchar(255),
refno varchar(255),
extrarefno varchar(255),
PRIMARY KEY (id)
);
INSERT INTO tblA (sender, receiver,msg,date,points,refno ) VALUES
('1', '2', 'buzz ...','2011-08-21 14:11:09','10','001'),
('1', '2', 'test ...','2011-08-21 14:12:19','20','002'),
('4', '2', 'test ...','2011-08-21 14:13:19','30','003'),
('1', '3', 'buzz ...','2011-08-21 14:11:09','10','004'),
('1', '3', 'test ...','2011-08-21 14:12:19','20','005'),
('1', '4', 'buzz ...','2011-08-21 14:11:09','10','006'),
('1', '4', 'test ...','2011-08-21 14:12:19','20','007'),
('3', '4', 'test ...','2011-08-21 14:13:19','30','008'),
('2', '4', 'test ...','2011-08-21 14:13:19','40','009');
INSERT INTO tblB (sno, name ) VALUES
('1', 'Aa'),
('2', 'Bb'),
('3', 'Cc'),
('4', 'Dd'),
('5', 'Ee'),
('6', 'Ff'),
('7', 'Gg'),
('8', 'Hh');
INSERT INTO tblC (data,refno,extrarefno ) VALUES
('data1', '001', '101'),
('data2', '002', '102'),
('data3', '003', '103'),
('data4', '004', '101'),
('data5', '005', '102'),
('data6', '006', '103'),
('data7', '007', '101'),
('data8', '008', '101'),
('data9', '009', '101');
``` | The problem is the `count(*)` in the subquery. Change it to `count(distinct receiver)`:
```
SELECT * ,
(select count(distinct receiver)
from tblA u2
where u2.points > u.points or
u2.points = u.points and u2.id <= u.id
) as rank
FROM (SELECT u.receiver, MAX(u.id) AS id
FROM tblA u
GROUP BY u.receiver
) subset JOIN
tblA u
ON subset.receiver = u.receiver AND subset.id = u.id
order by rank;
```
EDIT:
To create this as a MySQL view, you have to get right of the aggregation in the `from` clause:
```
SELECT * ,
(select count(distinct receiver)
from tblA u2
where u2.points > u.points or
u2.points = u.points and u2.id <= u.id
) as rank
FROM tblA u
WHERE u.id = (select max(u2.id) from tblA u2 where u2.receiver = u.receiver)
order by rank;
``` | how about that?
```
SELECT a.*
FROM tbla a,
(SELECT receiver,
Max(points) AS m
FROM tbla
GROUP BY receiver) AS b
WHERE a.receiver = b.receiver
AND a.points = b.m
ORDER BY m DESC
``` | rank users by max id and points but (rankings are wrong) | [
"",
"mysql",
"sql",
""
] |
I want to delete rows from multiple tables, How do I do this?
I have tried
```
DELETE a.*, b.*, c.*
FROM table1 as a, table2 as b, table3 as c
WHERE a.colName = 'value'
AND b.colName = 'value'
AND a.c.colName = 'value';
```
`'value'` is same for all tables as well as `colName`.
For this query records must exist in all tables but in my case records may or may not exist in the tables.
When we run a delete query, it deletes the existing records, returns `'0 row(s) affected'` otherwise. So simply I want to run 3 queries like this
```
DELETE FROM table1 WHERE colName = 'value';
DELETE FROM table2 WHERE colName = 'value';
DELETE FROM table3 WHERE colName = 'value';
```
in one query.
Thanks | Delete with joins is a bit strange in mysql. You need something like this:
EDIT:
To allow for rows not existing on all tables
```
DELETE FROM
table1, table2, table3
USING
table1
LEFT JOIN
table2 ON table2.colName = table1.colName
LEFT JOIN
table3 ON table3.colName = table1.colName
WHERE
table1.colName = 'value'
``` | Since your actual problem is executing multiple queries at once, the best solution for you would be to integrate the mysqli extension and use [mysqli\_multi\_query()](http://www.php.net/manual/en/mysqli.multi-query.php) with these queries:
```
DELETE FROM table1 WHERE colName = 'value';
DELETE FROM table2 WHERE colName = 'value';
DELETE FROM table3 WHERE colName = 'value';
``` | Delete row from multiple tables | [
"",
"mysql",
"sql",
""
] |
I have the following table:
```
-----------------------------------
PK integer date
-----------------------------------
1 2 03/01/01
2 1 04/01/01
3 3 02/01/01
4 3 05/01/01
5 2 01/01/01
6 1 06/01/01
```
What I want to do is to order by the date column, BUT have the dates with integer 2 higher up the order than the other integers. My output would be like this.
```
-----------------------------------
PK integer date
-----------------------------------
1 2 01/01/01
5 2 03/01/01
3 3 02/01/01
2 1 04/01/01
4 3 05/01/01
6 1 06/01/01
```
At the moment I'm totally clueless as to how to achieve this in MySQL, or even if its possible. I haven't tried anything yet as I have no idea where to even start.
I should say that the order of integers that aren't 2 is not a concern, so the following table is equally good.
```
-----------------------------------
PK integer date
-----------------------------------
1 2 01/01/01
5 2 03/01/01
2 1 04/01/01
6 1 06/01/01
3 3 02/01/01
4 3 05/01/01
``` | You can order the query by a calculated expression, e.g., `case`:
```
SELECT *
FROM `my_table`
ORDER BY CASE `integer` WHEN 2 THEN 1 ELSE 0 END DESC, `date` ASC
``` | The easiest way to do this is to subtract 2 from your `integer`, take the absolute value of the result, and sort on that. The absolute value of 2 - 2 is always zero and any other result is greater than zero, therefore rows with an `integer` of 2 are forced to the top of the list ([**SQL Fiddle**](http://sqlfiddle.com/#!2/a53128/2/0)):
```
SELECT * FROM myTable
ORDER BY ABS(`integer` - 2), `date`
``` | MySQL Order By date column and integer column, but specify ordering rules of integer column? | [
"",
"mysql",
"sql",
"database",
"sql-order-by",
"case",
""
] |
I use a Microsoft Access 2010 front end with linked tables on an SQL Server 2012 installation. Local network.
When Access Starts, a VBA script runs which connects to the SQL server and performs some checks.
I recently upgraded from SQL Server 2008 to 2012, that's when the connection between client and Server started to fail intermittently.
When the connection between my client and the server fails, I see a generic message "SQL Server does not exist or access denied". This is covered in a Microsoft support article <http://support.microsoft.com/kb/328306>. The potential causes detailed in that article do not match the trouble I am encountering.
This connection issue is intermittent. The trouble arises about 3 times a week and lasts for about 30 minutes at a time. Between these 30 minute failures, the the system works perfectly.
My Current VBA Connection String: (have tried several, trouble persists with all of them):
```
Dim conn As ADODB.Connection
Set conn = New ADODB.Connection
conn.Open "Provider=SQLOLEDB;Server=Server3.companydomain.local;Database=My_Database;Trusted_Connection=Yes"
```
I hope that I can find something in the SQL Server Logs (which I do have access to) but I do not know which Log file to investigate. | Well it's been 7 months. Here is the solution for this problem (in my case):
My secondary DNS Server was an external Server. It could not look up my database (192.168.x.x) because this is an internal address.
When my application called the primary DNS server to look up the SQL Server address, it worked fine. When my primary DNS was busy and the application failed over to the secondary DNS, the request would time out. There are many reasons someone may see this error; this was the reason I was seeing it. | I do not have a direct answer to your question, but I believe you could start by exploring the IP and ports the SQL server is listening on... Is it possible that the machine is using DHCP to assign the IP for the DB server as well? In that case I guess it could happen when the IPs are refreshed.
We had a similar issue where multiple IPs were getting assigned on the same machine (it had multiple NICs connected), which created such intermittent disruptions. | SQL Server does not exist or access denied (intermittently) | [
"",
"sql",
"ms-access",
"sql-server-2012",
"connection-string",
""
] |
I am working on exporting a table of about a few thousand rows from my server DB, and phpMyAdmin is unable to handle it, so I switched to the **command line option**.
But I am running into this error after executing the `mysqldump` command. The error is:
`Couldn't execute 'SET OPTION SQL_QUOTE_SHOW_CREATE=1': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_QUOTE_SHOW_CREATE=1' at line 1 (1064)`
After doing some search on the same I found this as a [bug](http://bugs.mysql.com/bug.php?id=67507) in the mysql version 5.5 not supporting the `SET OPTION` command.
I am running a EC2 instance with centos on it.My mysql version is `5.5.31`(from my phpinfo).
**I would like to know if there is a fix for this as it wont be possible to upgrade the entire database for this error**.
Or if there is any other alternative to do a export or dump,please suggest. | An alternative to mysqldump is the SELECT ... INTO form of SELECT, which allows results to be written to a file (<http://dev.mysql.com/doc/refman/5.5/en/select-into.html>).
Some example syntax from the above help page is:
```
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
```
Data can then be loaded back in using LOAD DATA INFILE (<http://dev.mysql.com/doc/refman/5.5/en/load-data.html>).
Again the page gives an example:
```
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
```
And with a complete worked example pair:
> When you use SELECT ... INTO OUTFILE in tandem with LOAD DATA INFILE
> to write data from a database into a file and then read the file back
> into the database later, the field- and line-handling options for both
> statements must match. Otherwise, LOAD DATA INFILE will not interpret
> the contents of the file properly. Suppose that you use SELECT ...
> INTO OUTFILE to write a file with fields delimited by commas:
>
> ```
> SELECT * INTO OUTFILE 'data.txt' FIELDS TERMINATED BY ','
> FROM table2;
> ```
>
> To read the comma-delimited file back in, the correct statement would
> be:
>
> ```
> LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
> ``` | Not tested, but something like this:
```
cat yourdumpfile.sql | grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" | mysql -u user -p -h host databasename
```
This inserts the dump into your database, but removes the lines containing "SET OPTION SQL\_QUOTE\_SHOW\_CREATE". The `-v` flag inverts the match, so `grep` passes through only the lines that do not contain that pattern.
Couldn't find the English manual entry for SQL\_QUOTE\_SHOW\_CREATE to link here, but you don't need this option at all when your table and database names don't include special characters (meaning they don't need to be quoted).
UPDATE:
```
mysqldump -u user -p -h host database | grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" > yourdumpfile.sql
```
Then when you insert the dump into database you have to do nothing special.
```
mysql -u user -p -h host database < yourdumpfile.sql
``` | MySQL mysqldump command error(bug in mysql 5.5) | [
"",
"sql",
"database",
"mysql",
""
] |
I have a table similar to,
```
Class City
01 xx
02 xx
03 yy
04 zz
```
And the result I want is like, (select Class and Count of City where city = xx or city = yy )
```
Class Count
01 2
02 2
03 1
04 0
```
The one I can't understand how to do is getting the mismatching rows into the result as well. In this example the last row of Class 04.
Any help appreciated. | Try the SQL below:
```
SELECT t1.class, count(t2.class) AS COUNT FROM test t1
LEFT JOIN test t2 ON t1.city=t2.city AND t1.city in ('xx', 'yy')
GROUP BY t1.class, t1.city;
```
**[SQL Fiddle](http://sqlfiddle.com/#!2/2ddf07/1/0)** | This should help you:
```
create table #cc (class int ,city varchar(2))
insert into #cc values (1,'xx')
insert into #cc values (2,'xx')
insert into #cc values (3,'yy')
insert into #cc values (4,'zz')
--insert into #cc values (1,'xx')
select class,isnull(c.cnt,0)
from #cc cc
left join (select city ,count(city) as cnt
from #cc c where city in ('xx','yy')
group by city ) c on c.city = cc.city
``` | Join two sql statements in same table | [
"",
"mysql",
"sql",
""
] |
Following query runs:
```
select foo.g from (select 'hello' as g) as foo
```
and this query runs aswell:
```
select bla.x from (select 'world' as x) as bla
```
but this one does not:
```
select * from (select foo.g from (select 'hello' as g) as foo) join (select bla.x from (select 'world' as x) as bla)
```
I get the following error message:
```
Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'join'.
```
Why do I get this error? Is it possible to join these tables somehow? | Give names to your tables using `As <temp_table_name>`, and specify the *column* the two tables are joined `ON`. Since there is no column to join `ON`, I used a *tautology* (which always evaluates to `True`), assuming that the result you expect is *hello*, *world* in two columns.
Here is the rearranged query:
```
select *
from
(
select foo.g
from
(
select 'hello' as g
) as foo
) As t1
inner join
(
select bla.x
from (select 'world' as x) as bla
) As t2 ON 1 = 1
``` | You need to give alias names to your tables. The below code works
```
select * from (select foo.g from (select 'hello' as g) as foo) T1
join (select bla.x from (select 'world' as x) as bla) T2 on t1.g=t2.x
``` | Join two anonymous tables with each other | [
"",
"sql",
"sql-server",
""
] |
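The accepted fix above, aliasing every derived table and joining `ON 1 = 1`, happens to run unchanged on sqlite, so it is easy to verify locally with Python. A hypothetical sketch:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Every derived table needs an alias, and the JOIN needs an ON clause;
# ON 1 = 1 is always true, pairing each row of t1 with each row of t2.
row = db.execute("""
    SELECT *
    FROM (SELECT 'hello' AS g) AS t1
    JOIN (SELECT 'world' AS x) AS t2 ON 1 = 1
""").fetchone()

print(row)  # ('hello', 'world')
```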
I'm a web programmer transitioning to vb.net programming and I am in need of proper guidance. What I am trying to do is perform an SQL table join as shown in the reference [here](http://www.w3schools.com/sql/sql_join.asp). It's my understanding that it's a good practice to for OOP to create a Class Object for each "type" of table in SQL.
For example, In the link in Reference I would have the following classes in .net.
```
Public Class Orders
Private _OrderID As GUID
Private _CustomerID As GUID
Private _OrderDate As Date
Public Property OrderID As Guid
Get
Return _OrderID
End Get
Set(ByVal value As Guid)
_OrderID = value
End Set
End Property
Public Property CustomerID As Guid
Get
Return _CustomerID
End Get
Set(ByVal value As Guid)
_CustomerID = value
End Set
End Property
Public Property OrderDate As Date
Get
Return _OrderDate
End Get
Set(ByVal value As Date)
_OrderDate = value
End Set
End Property
Public Sub New(ByVal obj As Object)
End Sub
End Class
```
and
```
Public Class Customers
Private _CustomerID As GUID
Private _CustomerName As String
Private _ContactName As String
Private _Country As String
Public Property CustomerID As Guid
Get
Return _CustomerID
End Get
Set(ByVal value As Guid)
_CustomerID = value
End Set
End Property
Public Property CustomerName As String
Get
Return _CustomerName
End Get
Set(ByVal value As String)
_CustomerName = value
End Set
End Property
Public Property ContactName As String
Get
Return _ContactName
End Get
Set(ByVal value As String)
_ContactName = value
End Set
End Property
Public Property Country As String
Get
Return _Country
End Get
Set(ByVal value As String)
_Country = value
End Set
End Property
Public Sub New(ByVal obj As Object)
End Sub
End Class
```
So given the two classes in .NET, how can I query my SQL with a table join and still maintain a proper OOP approach? Should I create a 3rd class which combines both tables?
Like This...
```
Public Class CustomerOrdersCombined
Private _OrderID As GUID
Private _CustomerID As GUID
Private _OrderDate As Date
Private _CustomerName As String
Private _ContactName As String
Private _Country As String
Public Property OrderID As Guid
Get
Return _OrderID
End Get
Set(ByVal value As Guid)
_OrderID = value
End Set
End Property
Public Property CustomerID As Guid
Get
Return _CustomerID
End Get
Set(ByVal value As Guid)
_CustomerID = value
End Set
End Property
Public Property OrderDate As Date
Get
Return _OrderDate
End Get
Set(ByVal value As Date)
_OrderDate = value
End Set
End Property
Public Property CustomerName As String
Get
Return _CustomerName
End Get
Set(ByVal value As String)
_CustomerName = value
End Set
End Property
Public Property ContactName As String
Get
Return _ContactName
End Get
Set(ByVal value As String)
_ContactName = value
End Set
End Property
Public Property Country As String
Get
Return _Country
End Get
Set(ByVal value As String)
_Country = value
End Set
End Property
Public Sub New(ByVal obj As Object)
End Sub
End Class
```
What is the best way to do this as an object? Let me know if this post is kind of confusing. I'm not sure if I made my question very clear. Thanks in advance. | First of all, yes this question is a bit confusing, mainly because it seems you're showing objects and asking about an INNER JOIN SQL statement into an object.
That being said, I think you have 2 tables in your database: Customers and Orders. I am assuming that a customer can have many orders, but an order can have only one customer, making Customers and Orders a 1-N (one-to-many) relationship.
Now that I think I understand your question, I will explain a bit about why you shouldn't have a third object, and the proper way to solve the problem. The main reason for normalizing your database and having 2 separate tables is that you don't want your Orders table to also contain all the information for your customers. Now, I'm assuming that the SQL query for your INNER JOIN would look something like this:
```
SELECT
*
FROM
Customers INNER JOIN Orders
ON Customers.CustomerID = Orders.CustomerID
WHERE
Customers.CustomerID = @CustomerID
```
This would mean you are pulling all the data back for the same customer for each order. So if a customer has 100 orders, you would in essence be creating 100 instances of your CustomerOrdersCombined class, and this would be in addition to your instance of the Customers class and 100 instances of your Orders class. Hopefully you can see why this is a bad idea.
Now for how I would solve this problem.
Way one: make a method.
One way to solve the issue would be to create a method in your Customers class that would return all the Orders that are related to the customer
```
Public Function GetCustomerOrders() As List(Of Orders)
Return (New Orders).GetOrdersByCustomerID(Me.CustomerID)
End Function
```
The function "GetOrdersByCustomerID" would be a constructor for the Orders Class that returns a list of Orders by a CustomerID The SQL for that would be something like
```
SELECT * FROM Orders WHERE CustomerID = @CustomerID
```
Way 2 of solving your problem:
Make Orders a property of your Customer class. This may sound confusing, but suppose that before your instance of the Customer class dies you want to look at the customer's orders several times. In the above solution, every time you want to look at the orders you have to hit the database; this is inefficient and can be handled better. By making the customer's orders a property of the Customer class, the list lives in memory until the customer object dies, so you only hit the database once. Now you might be thinking: what if I want information about a customer but not their 100 orders, why should I load those too? Well, you shouldn't! There is a technique called "Lazy Loading". This means that you only load the orders if/when they're requested. So this is how I would design the Customer class:
```
Public Class Customers
Private m_CustomerID As GUID
Private m_CustomerName As String
Private m_ContactName As String
Private m_Country As String
Private m_Orders As List(Of Orders)
Public Property CustomerID As Guid
Get
Return m_CustomerID
End Get
Set(ByVal value As Guid)
m_CustomerID = value
End Set
End Property
Public Property CustomerName As String
Get
Return m_CustomerName
End Get
Set(ByVal value As String)
m_CustomerName = value
End Set
End Property
Public Property ContactName As String
Get
Return m_ContactName
End Get
Set(ByVal value As String)
m_ContactName = value
End Set
End Property
Public Property Country As String
Get
Return m_Country
End Get
Set(ByVal value As String)
m_Country = value
End Set
End Property
Public Property Orders AS List(Of Orders)
Get
If m_Orders Is Nothing Then
'Only get the orders if requested'
m_Orders = (New Orders).GetOrdersByCustomerID(Me.CustomerID)
End If
Return m_Orders
End Get
Set(ByVal value As List(Of Orders))
m_Orders = value
End Set
End Property
Public Sub New(ByVal obj As Object)
End Sub
End Class
```
Notice that the getter is where the value gets loaded; otherwise Orders would be an empty value. I hope this long answer is helpful to you. | You're falling into the OO purity trap.
Don't go for that trap where you query for a collection, then iterate over the collection to do the JOIN. That's the n+1 latency trap. You'll die a slow, non-performance death doing things like that. Objects aren't buying you anything.
Databases have been optimized to do JOINs very well; let them. Do the JOIN in SQL and sort out the mapping into objects that way.
An `Order` object might have a collection of `LineItem` objects underneath. Bring them all back in one query and map them once you've got the data. | SQL table join to VB.net class object | [
"",
"sql",
"sql-server",
"vb.net",
""
] |
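The lazy-loading idea from the accepted answer is language-agnostic; here is a minimal Python sketch of the same pattern (the `_fetch_orders` body is a stand-in for a real database query):

```python
class Customer:
    def __init__(self, customer_id):
        self.customer_id = customer_id
        self._orders = None  # nothing loaded yet

    @property
    def orders(self):
        # Hit the database only on first access, then reuse the cached list.
        if self._orders is None:
            self._orders = self._fetch_orders()
        return self._orders

    def _fetch_orders(self):
        # Stand-in for "SELECT * FROM Orders WHERE CustomerID = ?"
        return ["order-1", "order-2"]


c = Customer(1)
print(c._orders)  # None, because no query has run yet
print(c.orders)   # ['order-1', 'order-2'], loaded on first access
```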
Need a little help solving a problem; apologies if this is vague, which is why I need help. Show the title ID, title, publisher ID, publisher name, and terms for all books that have never been sold, meaning there are no records in the Sales table. Use OUTER JOIN or NOT IN against the SALES table. In this case, the terms refer to the items on the titles table involving money (price, advance, royalty).
I've come up with the following code and there are no records returned. I've looked at the tables and no records should be returned. Not sure my code is correct based on using the OuterJoin or NOT IN. Thanks!
```
SELECT titles.title_id,
titles.title,
publishers.pub_id,
sales.payterms,
titles.price,
titles.advance,
titles.royalty
FROM publishers
INNER JOIN titles
ON publishers.pub_id = titles.pub_id
INNER JOIN sales
ON titles.title_id = sales.title_id
WHERE (sales.qty = 0)
``` | I believe all you need is to change the `INNER JOIN` to a `LEFT JOIN` for your sales table. Then check for `NULL` on any of the columns from the sales table.
```
SELECT titles.title_id, titles.title, publishers.pub_id, sales.payterms,
titles.price, titles.advance, titles.royalty
FROM publishers INNER JOIN
titles ON publishers.pub_id = titles.pub_id LEFT JOIN
sales ON titles.title_id = sales.title_id
WHERE sales.qty IS NULL
``` | Well, you are using `INNER JOIN` so it will only return records that match the join condition which is not what you want. You should use `LEFT OUTER JOIN` (or `LEFT JOIN`) and check that the `sales` side is null (meaning there's no record matching).
```
SELECT titles.title_id, titles.title, publishers.pub_id, sales.payterms,
titles.price, titles.advance, titles.royalty
FROM publishers INNER JOIN
titles ON publishers.pub_id = titles.pub_id LEFT OUTER JOIN
sales ON titles.title_id = sales.title_id
WHERE sales.title_id IS NULL
``` | Outer Join or NOT IN | [
"",
"sql",
"sql-server-2008-r2",
""
] |
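Both answers above use the same anti-join pattern (a LEFT JOIN plus an IS NULL filter); here is a minimal sketch of it with made-up data, using Python's sqlite3:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE titles (title_id INTEGER, title TEXT);
    CREATE TABLE sales  (title_id INTEGER, qty INTEGER);
    INSERT INTO titles VALUES (1, 'sold book'), (2, 'unsold book');
    INSERT INTO sales  VALUES (1, 5);
""")

# LEFT JOIN keeps every title; titles with no matching sales row get NULLs,
# so filtering on an IS NULL sales column isolates the never-sold titles.
rows = db.execute("""
    SELECT titles.title_id, titles.title
    FROM titles
    LEFT JOIN sales ON titles.title_id = sales.title_id
    WHERE sales.title_id IS NULL
""").fetchall()

print(rows)  # [(2, 'unsold book')]
```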
I wanted to see if there is a better way to do this. The statement takes all rows that happened further out than 6 months and puts them into an archive table. Then it takes all the ids present in both tables and removes them from the main table. Any Ideas?
```
INSERT INTO ArchiveTable
SELECT *
FROM MainTable
WHERE DateItHappened < (SYSDATE - 180);
DELETE FROM MainTable a
WHERE a.ID IN (
SELECT b.ID
FROM ArchiveTable b
JOIN MainTable a
ON b.ID =
a.ID);
```
Update:
I went ahead and implemented Joe's suggestion below to make this the final code. If anyone has any changes that should be made please let me know.
```
INSERT INTO ArchiveTable
SELECT *
FROM MainTable
WHERE DateItHappened < (SYSDATE - 180);
DELETE FROM maintable a
WHERE EXISTS (SELECT 1
FROM archivetable b
WHERE a.id = b.id)
``` | The DELETE can be simplified:
```
DELETE FROM maintable a
WHERE EXISTS (SELECT 1
FROM archivetable b
WHERE a.id = b.id)
``` | My usual approach is to use a global temporary table to store the rowid's of the rows that I'm moving from the source table to the target table, then using that set to perform the deletes. It makes it 100% safe for such issues as an unstable initial data set.
```
insert all
into archivetable (col1, col2 ...)
values (col1, col2 ...)
into maintable_deletes (delete_rowid)
values (rowid)
select rowid, *
from maintable;
delete from maintable
where rowid in (select delete_rowid from maintable_deletes);
commit;
```
The GTT is useful because it can delete its own rows on commit. | Insert then delete rows out of an oracle table | [
"",
"sql",
"oracle",
""
] |
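The archive-then-delete flow can be rehearsed locally. A sqlite3 sketch with a made-up filter standing in for the date condition (note that sqlite does not allow an alias on the DELETE target, so the correlated subquery names the table directly):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE maintable    (id INTEGER, note TEXT);
    CREATE TABLE archivetable (id INTEGER, note TEXT);
    INSERT INTO maintable VALUES (1, 'old'), (2, 'old'), (3, 'recent');

    -- step 1: copy the rows to be archived (stand-in for the date filter)
    INSERT INTO archivetable SELECT * FROM maintable WHERE note = 'old';

    -- step 2: delete everything that now exists in the archive
    DELETE FROM maintable
     WHERE EXISTS (SELECT 1 FROM archivetable b WHERE maintable.id = b.id);
""")

remaining = db.execute("SELECT id, note FROM maintable").fetchall()
print(remaining)  # [(3, 'recent')]
```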
I know this is a quite a basic query, but I can't find exactly the solution for my needs, so here goes...
I have a table of customers INT\_AUX\_LISTING, a table of folders in which they're contained INT\_AUX\_DIRECTORY and a joining table INT\_AUX\_DIR\_LIST.
I need to return a list of all customers who are in folder 40017, and also not in folder 2. Any other folder in which they're contained is irrelevant.
So far I've come up with
```
SELECT *
FROM INT_AUX_LISTING l
LEFT JOIN INT_AUX_DIR_LIST dl
ON l.LISTING_ID=dl.LISTING_ID
WHERE dl.CONTAIN_DIR_ID=40017
AND dl.CONTAIN_DIR_ID <> 2
```
However this (obviously) isn't correct, and is returning far too many values. I think I need to introduce a subquery but things go awry when I try.
Apologies for the entry level nature of the question, but it's only my 3rd day working with SQL! | First, clarification on left-join and where. If you have a query that has a left-join to a table (per your example where "dl" was the right-side of the join), and then add the "dl" alias to a where clause, that in essence converts it to an INNER JOIN (unless you explicitly tested with an OR dl.listing\_ID IS NULL).
Now, back to your primary objective. Without the table structures, and not exactly clear on your column intent, it APPEARS that the "listing\_id" column is your client as found in the int\_aux\_listing table. the "contain\_dir\_id" is the directory (one of many that may be possible) as found in the int\_aux\_dir\_list.
That said, you want a list of all people who are in 40017 AND NOT in 2. For this, you can do a left-join using the int_aux_dir_list table twice, with different aliases. Once you get the list of clients, you can then join that to your client table. Since you are new, take the sample one step at a time.
Just get clients within 40017
```
select
dl.listing_id
from
int_aux_dir_list dl
where
dl.contain_dir_id = 40017
```
Now, exclude those if they are found in directory 2
```
select
dl.listing_id
from
int_aux_dir_list dl
left join int_aux_dir_list dl2
on dl.listing_id = dl2.listing_id
AND dl2.contain_dir_id = 2
where
dl.contain_dir_id = 40017
AND dl2.listing_id IS NULL
```
By doing a left-join to the directory listing table a second time (via alias "dl2"), but specifically for its directory id = 2 (via AND dl2.contain\_dir\_id = 2), you are getting all clients from dl that are within directory 40017 and looking for that same client ID in the second instance ("dl2").
Here's the kicker. Notice the where clause is stating "AND dl2.listing\_id IS NULL".
This is basically saying WHEN Looking at the DL2 instance, I only want records that DO NOT have a corresponding listing ID matched.
Now, wrap this up to get the client information you need. In this case, getting all columns from the directory listing (in case other columns of importance), AND getting all columns from the "l"isting table.
```
select
dl.*,
l.*
from
int_aux_dir_list dl
left join int_aux_dir_list dl2
on dl.listing_id = dl2.listing_id
AND dl2.contain_dir_id = 2
join INT_AUX_LISTING l
on dl.listing_id = l.listing_id
where
dl.contain_dir_id = 40017
AND dl2.listing_id IS NULL
```
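To make the exclusion step concrete, here is the dl/dl2 self-join run against tiny hypothetical data with Python's sqlite3:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE int_aux_dir_list (listing_id INTEGER, contain_dir_id INTEGER);
    -- listing 1: folder 40017 only; listing 2: both folders; listing 3: folder 2 only
    INSERT INTO int_aux_dir_list VALUES (1, 40017), (2, 40017), (2, 2), (3, 2);
""")

rows = db.execute("""
    SELECT dl.listing_id
    FROM int_aux_dir_list dl
    LEFT JOIN int_aux_dir_list dl2
           ON dl.listing_id = dl2.listing_id AND dl2.contain_dir_id = 2
    WHERE dl.contain_dir_id = 40017
      AND dl2.listing_id IS NULL
""").fetchall()

print(rows)  # [(1,)], listing 2 is excluded because it is also in folder 2
```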
Hopefully this clears things up for you some. | Your WHERE clause is the problem: the `<> 2` check is applied to the same row's value, so any row whose id is 40017 passes it (40017 is not equal to 2), and it never excludes people who are also in folder 2. I think you should revisit it. | Beginner SQL trouble with simple Join/Subquery | [
"",
"sql",
"sql-server",
""
] |
I am trying to extract an ID from a URL and running into some issues. The URL's will look something like this:
> <http://www.website.com/news/view.aspx?id=95>
>
> <http://www.website.com/news/view.aspx?id=20&ReturnURL=%2fnews%2fview.aspx%3fid%3d20>
I am trying to return back the number following "?id=" and nothing after the number. I will then convert it to an INT in reference to another table. Any suggestions as how to do this properly? | Use `charindex` to find the position of `?id` and `stuff` to remove the characters that is before `?id`. Then you use `left` to return the characters to the next `&`
```
declare @T table
(
URL varchar(100)
);
insert into @T values
('http://www.website.com/news/view.aspx?id=95'),
('http://www.website.com/news/view.aspx?id=20&ReturnURL=%2fnews%2fview.aspx%3fid%3d20');
select left(T2.URL, charindex('&', T2.URL) - 1) as ID
from @T as T
cross apply (select stuff(T.URL, 1, charindex('?id', T.URL) + 3, '')+'&') as T2(URL);
``` | Here is an option that you can use when you want to find the value of any parameter value within a URL, it also supports parsing text values that contain a URL
```
DECLARE @Param varchar(50) = 'event'
DECLARE @Data varchar(8000) = 'User Logged into https://my.website.org/default.aspx?id=3066&event=49de&event=true from ip'
DECLARE @ParamIndex int = (select PatIndex('%'+@param+'=%', @Data)+LEN(@param)+1)
-- @ParamValueSubstring chops off everthing before the first instance of the parameter value
DECLARE @ParamValueSubstring varchar(8000) = SubString(@Data, @ParamIndex, 8000)
SELECT @ParamValueSubstring as ParamSubstring
DECLARE @SpaceIndex int = (SELECT CHARINDEX(' ', @ParamValueSubstring))-1
DECLARE @AmpIndex int = (SELECT CHARINDEX('&', @ParamValueSubstring))-1
DECLARE @valueLength int = -1
-- find first instance of ' ' or & after parameter value
IF @SpaceIndex = -1
SET @valueLength = @AmpIndex
ELSE IF @AmpIndex = -1
SET @valueLength = @SpaceIndex
ELSE IF @SpaceIndex < @AmpIndex
SET @valueLength = @SpaceIndex
ELSE
SET @valueLength = @AmpIndex
IF(@valueLength = -1) -- check to see if there was no space or '&' following the parameter value
BEGIN
SET @valueLength = 8000
END
select Left(@ParamValueSubstring, @valueLength) as ParamValue
-- approach similar to idea function found here http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/extracting-numbers-with-sql-server/
``` | How to extract an ID number from a URL string in SQL? | [
"",
"sql",
"sql-server",
""
] |
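As a quick sanity check of the parsing logic itself, outside SQL, the same extraction can be sketched in plain Python (the function name is just for illustration; in SQL you would still convert the result to INT afterwards):

```python
def extract_id(url):
    """Return the text after '?id=' and before the next '&', if any."""
    marker = "?id="
    start = url.index(marker) + len(marker)
    rest = url[start:]
    amp = rest.find("&")
    return rest if amp == -1 else rest[:amp]


print(extract_id("http://www.website.com/news/view.aspx?id=95"))  # 95
print(extract_id("http://www.website.com/news/view.aspx"
                 "?id=20&ReturnURL=%2fnews%2fview.aspx%3fid%3d20"))  # 20
```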
While trying to insert multiple values into the table, I get a syntax error.I created table named `arts` by the following query :
```
CREATE TABLE arts( id INT NOT NULL AUTO_INCREMENT , subdir VARCHAR (255), PRIMARY KEY(id));
```
and the insert query I am using is :
```
INSERT INTO TABLE arts (id,subdir) VALUES('Aesthetics','Literature');
```
What could be the reason for the error ? | ```
INSERT INTO arts (subdir) VALUES('Aesthetics'),('Literature');
```
Multiple records are separated with `( )`. Without it, like in your case, it is trying to insert one record with two fields (not counting the auto increment et cetera).
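A quick way to confirm the corrected statement is sqlite, which uses the same multi-row `VALUES` syntax (spelling the auto-increment column `INTEGER PRIMARY KEY AUTOINCREMENT`). A hypothetical sketch via Python:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE arts (id INTEGER PRIMARY KEY AUTOINCREMENT, subdir TEXT)"
)
# Two records, each wrapped in its own parentheses; id fills itself in.
db.execute("INSERT INTO arts (subdir) VALUES ('Aesthetics'), ('Literature')")

rows = db.execute("SELECT id, subdir FROM arts ORDER BY id").fetchall()
print(rows)  # [(1, 'Aesthetics'), (2, 'Literature')]
```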
Also notice that I've removed the `id`, since you don't wanna insert it (it's `AUTO_INCREMENT` after all, so it does that for you). | **INSERT** multiple values in table.
Create table
```
CREATE TABLE table1(ID int IDENTITY(1,1) NOT NULL, field1 varchar(50),field2 varchar(50),field3 varchar(50));
```
method 1:
```
INSERT INTO table1 (field1,field2,field3)
VALUES('AAA','1','AAA1'),('BBB','2','BBB2'),('CCC','3','CCC3')
```
method 2:
```
INSERT INTO table1 (field1,field2,field3)
SELECT 'AAA','1','AAA1' UNION
SELECT 'BBB','2','BBB2' UNION
SELECT 'CCC','3','CCC3'
``` | Getting an error while trying to insert multiple values at once. Why is that? | [
"",
"mysql",
"sql",
"database",
""
] |
How Can i get Microsoft Excel 2010 to Connect to MYSQL Server (5.6.12)
I installed MYSQL as a WAMP installation
Windows 7 Ultimate 64bit
MYSQL 5.6.12
Apache 2.4.4
PHP 5.4.16
this should be straight forward, but i've obviously missed something
Now.. When i open Excel 2010
i then go to "Data"
i then go to "From Other Sources"
i then select "From SQL Server"
I'm then presented with the Data Connection Wizard
I have the Log on Credentials (that's not an issue)
I'm Logging on with "Use the following User Name and Password"
i require the SERVER NAME
but i keep getting this error
[DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied
I understand the concept of what needs to happen, that being, i need to specify the connection path of MYSQL to Excel
Now.. so far i have done this...
1. Ensured that the WAMP Server was ONLINE and the Service was STARTED
2. obviously the default server is `localhost` that does not work, but then again on my machine that only brings up the WAMP page (so that's Understandable)
3. i usually access MYSQL via PhpMyAdmin the URL for the Login page is
http://`localhost`/phpmyadmin/
i tried entering that (that doesn't work either, but then again, i understand that is only the front end)
so..
1. i tried typing in MYSQL , mysql , MYSQL5.6.12 , mysql/mysql5.6.12
none of that worked
1. the MYSQL install path is
C:\WAMP\bin\mysql\mysql5.6.12
so.. Logically if i gave that to Excel, it should find it, But it didn't
1. I researched the Net and everything Either says Use the Wizard , but doesn't go into detail on the Server Name, one place said to install Microsoft Connector, I have done so.. it didnt' help
2. I have tried the Server Name as `localhost` , http://`localhost` , 127.0.0.1 didn't work
3. Using MYSQL CLI i entered "status;" and used the connection name `localhost` via TCP/IP didn't work
4. i went into MYSQL buddy and i found this
You are connected to MySQL 5.6.12 with the user Martin.Kuliza@`localhost`.
i tried this also, didn't work
1. i noticed the URL on the command line title as being
C:\Windows\System32\cmd.exe - C:\wamp\bin\mysql\mysql5.6.12.bin\mysql.exe -u
i entered all this... didn't work
i entered the part Starting from C:\wamp.... Didn't work
1. I've tried entering wamp, WAMP, Apache, APACHE, apache
I've tried dozens of combinations
I've tried finding the config file for connector in case that had something to do with it (couldn't find it)
i've tried looking at the config.inc.php file for clues,
I'm stuck and need some help (for this seemingly simple problem)
it's just a Connection Wizard and i'm honestly gobsmacked that i actually need to ask the forum for help, but, Sadly i do | I FOUND A SOLUTION...
Kudos to HongTAT for keeping me on the right path....
(Even though i did do all the initial stuff that you posted, before i actually posted the question, The Act of you Posting it, Got me Looking at it a little deeper...
One thing led to another and i found the solution... Thank you
i suspect some sort of incompatibility issue must have been at fault here
i noticed that Power Query 64 bit would not install on my 64bit system
instead Power Query installer provided an Error that simply stated to the effect of:
since office is 32bit you can't install 64bit Power Query
but, i did install Connector/ODBC and Connector/Net as 64bit
perhaps there was something in that configuration that didn't mesh well with each other.
Also, apparently "bit-ness" is a confusing term that is supposed to mean: ensure that the bit instruction set of the ODBC Connector matches the bit instruction set of Power Query in Excel.
In actual fact you don't need Power Query or Visual Studio for that matter
However i can see the benefit of power Query, at this stage
WHAT I DID...
I uninstalled all traces of Microsoft Visual Studio from Add/Remove Programs
I Uninstalled all Traces of Visual C++ Redistributable from Add/Remove Programs
I Uninstalled Power Query from Add/Remove Programs
I Uninstalled MYSQL Connector/ODBC from Add/Remove Programs
I Uninstalled MYSQL Connector/Net from Add/Remove Programs
i left the WAMP Installation and MYSQL Intact
i conducted a Windows Update
i restarted the system
i then downloaded the following and installed it
<http://dev.mysql.com/downloads/file.php?id=450946>
Now.. Even though this Package installation is the same thing as installing it 1 by 1
i found that by doing a "CUSTOM INSTALLATION"
and installing all products EXCEPT FOR MYSQL SERVER
i was able to connect via the connector/net to the MYSQL Database right away
LASTLY i installed Power Query for Excel (since that was the whole point to begin with)
and this time it connected right away and Worked Perfectly
the Server Name was `localhost`
as i suspected
and the credentials were correct as i suspected
the only thing was that i had to set it up without Encryption of password
i got a message saying something like My Version of Office did not support encryption
or something to that effect
but, i connected without encryption and it worked perfectly
PROBLEM SOLVED
I NOW DEEM THIS QUESTION TO BE RESOLVED
THANKYOU TO HongTat | Have you done this?
<http://office.microsoft.com/en-001/excel-help/connect-to-a-mysql-database-HA104019820.aspx>
> Before you can connect to a MySQL database, you need the MySQL
> Connector on your computer. To install the MySQL Connector
> (Connector/Net), go to <http://go.microsoft.com/fwlink/?LinkId=278885>
> and download Connector/Net 6.6.5 for Microsoft Windows. The driver
> bit-ness needs to match the Microsoft Power Query for Excel add-in
> installation (32-bit or 64-bit). | How do i get MS EXCEL to Connect to MYSQL Database | [
"",
"mysql",
"sql",
"excel",
""
] |
I saw this in some ASP code and didn't understand the last line, specifically all the apostrophes and quotation marks between Name= and AND. What is being appended? Why do we need both?
```
uName = getRequestString("UserName");
uPass = getRequestString("UserPass");
sql = "SELECT * FROM Users WHERE Name ='" + uName + "' AND Pass ='" + uPass + "'"
``` | That is very simple, you have the start of a string sentence with double quotes. Double quotes indicate the start and the end or part of a string.
for example, if you have
```
sql ="SELECT * FROM USERS"
```
your variable holds the whole sentence; if you have:
```
sql = "SELCT * FROM USERS"
whereSentence = " WHERE id = 1"
wholeSql = sql + whereSentence
```
with the + (plus symbol) you are concatenating the strings.
With the single quotes you are embedding literal quote characters in the string while concatenating the other parts of the sentence.
For example if
> uName = 'John' and uPass = 'McDonals'
```
sql = "SELECT * FROM Users WHERE Name ='" + uName + "' AND Pass ='" + uPass + "'"
```
your final sentence should be
```
SELECT * FROM Users WHERE Name = 'John' And Pass = 'McDonals'.
```
It is a simple way to say that the name is John McDonals, as a string, but the parameters are variables depending on the request. | The code is building a query that looks like this:
```
SELECT * FROM Users WHERE Name = 'foo' AND Pass = 'bar'
```
It passes in the text from the `uName` and `uPass` variables into the query string.
This is very dangerous though - it's an open door for [SQL Injection](http://en.wikipedia.org/wiki/SQL_injection). | Can someone clarify this SQL command | [
"",
"sql",
""
] |
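The concatenation is easy to reproduce in any language. A Python sketch, which also hints at why this pattern invites SQL injection:

```python
u_name, u_pass = "John", "McDonals"
sql = ("SELECT * FROM Users WHERE Name ='" + u_name +
       "' AND Pass ='" + u_pass + "'")
print(sql)  # SELECT * FROM Users WHERE Name ='John' AND Pass ='McDonals'

# A malicious "password" changes the query's meaning; real code should use
# parameterized queries instead of string concatenation.
u_pass = "x' OR '1'='1"
sql = ("SELECT * FROM Users WHERE Name ='" + u_name +
       "' AND Pass ='" + u_pass + "'")
print(sql)  # ... AND Pass ='x' OR '1'='1'
```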
I want to calculate the difference between the results of 2 `count(*)`-type SELECT queries executed on 2 separate tables of my PostgreSQL database.
This what I'm currently using (however I should be able to wrap it all up into a single SELECT statement):
```
SELECT "count"(*) AS val1 FROM tab1;
SELECT "count"(*) AS val2 FROM tab2;
SELECT val2-val1;
```
Thanks in advance | Try this way:
```
select
(
SELECT
"count"(*) as val1
from
tab1
) - (
SELECT
"count"(*) as val2
from
tab2
) as total_count
``` | for same table you can do this.
`select (count(abc)-count(DISTINCT xyz)) from tab;` | Calculate the difference between results of two count(*) queries based on 2 tables in PostgreSQL | [
"",
"sql",
"postgresql",
""
] |
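The accepted pattern, two scalar subqueries subtracted inside one SELECT, ports directly to sqlite, so it can be checked with Python's sqlite3:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE tab1 (x INTEGER);
    CREATE TABLE tab2 (x INTEGER);
    INSERT INTO tab1 VALUES (1), (2);
    INSERT INTO tab2 VALUES (1), (2), (3), (4), (5);
""")

diff = db.execute("""
    SELECT (SELECT count(*) FROM tab2) - (SELECT count(*) FROM tab1)
""").fetchone()[0]

print(diff)  # 3
```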
So I am doing a homework project where I have to display the current date, and then the date for three months from now, which I have done with this:
```
SELECT CURRENT_DATE AS "Today's Date", LOWER(ADD_MONTHS(CURRENT_DATE, 3)) AS "Today, Three Months Hence" FROM dual;
```
and again with this:
```
SELECT CURRENT_DATE AS "Today's Date", LOWER(CURRENT_DATE + INTERVAL '3' MONTH)AS "Today, Three Months Hence" FROM dual;
```
results:
`Today's Date`
`05-FEB-14`
`Today, Three Months Hence`
`05-may-14`
What I am wondering is if there is a way to display the results of the future date where it only shows the month and nothing else, (i.e. no day or year). Is this possible in Oracle, or am I just looking to try the impossible? | You just need to specify the display format, which you should really do anyway instead of relying on your session defaults:
```
SELECT TO_CHAR(CURRENT_DATE, 'DD/MM/YYYY') AS "Today's Date",
TO_CHAR(ADD_MONTHS(CURRENT_DATE, 3), 'YYYY-MM-DD')
AS "Today, Three Months Hence",
TO_CHAR(ADD_MONTHS(CURRENT_DATE, 3), 'Month') AS "Three Months Hence"
FROM dual;
| TODAY'S DATE | TODAY, THREE MONTHS HENCE | THREE MONTHS HENCE |
|--------------|---------------------------|--------------------|
| 06/02/2014 | 2014-05-06 | May |
```
Simple [SQL Fiddle](http://sqlfiddle.com/#!4/d41d8/24849).
The available date format model elements are shown [in the documentation](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements004.htm#i34924). | A `date` value is actually stored as a number in the database, not a character string. What you see in your result is the default date format interpretation of your session.
To change the way your date fields are displayed, you need to use the `to_char` function.
```
select to_char(sysdate, 'MM') from dual;
```
The `'MM'` parameter here stands for the month, which is what you asked for. It is a format string used to convert a date into a character string. For a list of format string options, a simple Google search will help you. | How would you truncate or limit the results of a date result you are adding time to? (Oracle SQL) | [
"",
"sql",
"oracle",
"date",
""
] |
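Outside the database, the same two pieces, shifting by three months and then formatting only the part you want, look like this in Python. This is a rough sketch: the helper does not clamp day-of-month overflow, which Oracle's ADD_MONTHS handles for you:

```python
from datetime import date


def add_months(d, n):
    # Naive month arithmetic; days past the target month's end are not clamped.
    m = d.month - 1 + n
    return date(d.year + m // 12, m % 12 + 1, d.day)


today = date(2014, 2, 5)
future = add_months(today, 3)
print(future.isoformat())     # 2014-05-05
print(future.strftime("%m"))  # 05, the month only and nothing else
```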
I have searched and searched for an answer to this, but the only examples I come across are for ones using the preset `date('Y-m-d')` command.
I have three fields in my database - month, day, year. These fields have values entered into them by a person, not the computer. Now, how do I figure out how to NOT show the entries that are less than 6 months old? | convert and cast your individual columns into a datetime, then compare it to today ... something like this??
```
SELECT *
FROM table
WHERE cast(CONCAT(year, '-', month, '-', day) as datetime) < NOW() - INTERVAL 6 MONTH
``` | You can still use the `date` function ... just combine it with the `mktime` function.
```
// DATE IN IDF
// WILL RETURN 2013-08-12
$date_less_6_months = date("Y-m-d", mktime(0, 0, 0, date("m") - 6, date("d"), date("Y")));
// DATE BROKEN APART
$year_less_6_months = date("Y", mktime(0, 0, 0, date("m") - 6, date("d"), date("Y")));
$month_less_6_months = date("m", mktime(0, 0, 0, date("m") - 6, date("d"), date("Y")));
$date_less_6_months = date("d", mktime(0, 0, 0, date("m") - 6, date("d"), date("Y")));
``` | interval 6 month but with user added dates | [
"",
"mysql",
"sql",
""
] |
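The same idea can be tried locally with sqlite, which lacks CONCAT and a datetime CAST, so `printf` and `date()` stand in; the comparison keeps only rows strictly older than six months:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE entries (id INTEGER, year INTEGER, month INTEGER, day INTEGER);
    INSERT INTO entries VALUES (1, 2000, 1, 15), (2, 2999, 1, 15);
""")

# Assemble a sortable ISO date from the parts, then keep only rows that are
# at least six months old (strictly before "now minus six months").
rows = db.execute("""
    SELECT id FROM entries
    WHERE date(printf('%04d-%02d-%02d', year, month, day))
        < date('now', '-6 months')
""").fetchall()

print(rows)  # [(1,)], the year-2999 entry is newer than six months ago
```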
I have three tables: `A`, `B`, `C`. `B` has a many-to-many relationship with `A` and `C`.
```
A >-< B >-< C
```
Assume every table has a primary key column called id, and the join tables have two columns [a\_id, b\_id] and [b\_id, c\_id].
The same row in `B` can be linked to both rows in `A` and `C`. It's fairly straightforward to find rows in `C` that share a specific row in `B` (a series of inner joins).
Given a row id of `A`, I would like to query for all rows in `C` that share ALL rows of `B` associated with that row in `A`:
```
select id,
(select count(*) from c
inner join b_c on c.id = b_c.c_id
) as c_group,
(select count(*) from c
inner join b_c on c.id = b_c.c_id
inner join b on b.id = b_c.b_id
inner join a_b on b.id = a_b.b_id
where a_id = ?
) as a_c_group
from c
where c_group <= a_c_group;
```
Can this be done via SQL? I'm working in MySQL, so a MySQL-specific solution would be fine. | Solution was to compare the count of an inner join of B and C with an inner join of all three tables.
```
SELECT ID
FROM C
WHERE (
SELECT COUNT(*)
FROM B_C
WHERE C_ID = C.ID
) <= (
SELECT COUNT(*)
FROM B_C INNER JOIN A_B ON B_C.B_ID = A_B.B_ID
WHERE C_ID = C.ID AND A_ID = ?
);
```
This covers:
1. If C has no associated B's, it should appear in the result set.
2. If C has associated B's not associated with A, it does not appear in the result set.
3. If A has associated B's not associated with C, but all B's C is associated with, C appears in the result set. | This would yield all the id's for B that are associated with the selected A:
```
SELECT b_id FROM ab WHERE a_id = ?
```
So you need to find any C's that are related to only these B id's and not others. This can be done by excluding all C's that match other B id's:
```
SELECT c.id
FROM c
LEFT JOIN bc ON c.id = bc.c_id
AND bc.b_id NOT IN (SELECT b_id FROM ab WHERE a_id = ?)
WHERE bc.c_id IS NULL
``` | Query for rows that share a set of associations | [
"",
"mysql",
"sql",
""
] |
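The accepted counting approach runs as-is on sqlite, so it can be verified with a tiny Python sketch (hypothetical data: C1 uses only a B linked to A1, C2 uses an unlinked B, C3 uses none):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE c   (id INTEGER);
    CREATE TABLE a_b (a_id INTEGER, b_id INTEGER);
    CREATE TABLE b_c (b_id INTEGER, c_id INTEGER);
    INSERT INTO c   VALUES (1), (2), (3);
    INSERT INTO a_b VALUES (1, 1), (1, 2);  -- A1 is linked to B1 and B2
    INSERT INTO b_c VALUES (1, 1), (3, 2);  -- C1 uses B1; C2 uses B3
""")

rows = db.execute("""
    SELECT id FROM c
    WHERE (SELECT COUNT(*) FROM b_c WHERE c_id = c.id)
       <= (SELECT COUNT(*) FROM b_c
           JOIN a_b ON b_c.b_id = a_b.b_id
           WHERE c_id = c.id AND a_id = ?)
    ORDER BY id
""", (1,)).fetchall()

print(rows)  # [(1,), (3,)], C2 is excluded because B3 is not linked to A1
```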
I have a table with a list of IDs that I need to exclude rows from. For each row, an ID can have one value in the comment\_code column. I need to exclude IDs from my results that have a combination of two specific values and keep IDs that have only one of those two values. Sorry if this isn't clear.
Table example:
```
ID SEQUENCE FUNCTION COMMENT_CODE
---------------------------------------------------------------
1 1 tree apple
1 2 bush orange
2 1 tree apple
2 2 tree banana
3 1 bush strawberry
```
Output needed:
```
ID SEQUENCE FUNCTION COMMENT_CODE
---------------------------------------------------------------
2 1 tree apple
```
Basically, if an ID has a row with 'apple' as the comment\_code and a row with 'orange' as the comment code, I don't want to see that ID in my results. If an ID has a row with 'apple' as a comment code, I want to see them in my results regardless of any other comment\_code they may have as long as they don't have the orange comment code.
Any help would be greatly appreciated. I've tried using a combination of exists and not exists clauses as well as listagg to group the comment\_code values together for each ID, but I'm not getting the expected result. | This approach uses aggregation to find all `id`s that have both values. This is used as an exclusion list using a `left outer join`:
```
select *
from table t left outer join
(select id
from table t
group by t.id
having sum(case when comment = 'apple' then 1 else 0 end) > 0 and
sum(case when comment = 'orange' then 1 else 0 end) > 0
) toexclude
on t.id = toexclude.id
where toexclude.id is null;
``` | Something like this should work.
```
select *
from yourtable
where sequence = 1
and whatever
``` | Oracle SQL: Trying to exclude rows based on a combination of column values | [
"",
"sql",
"oracle",
"oracle-sqldeveloper",
"oracle11gr2",
""
] |
I am learning triggers in PostgreSQL.
I have created a trigger function `update_name()`:
```
CREATE OR REPLACE FUNCTION update_name()
RETURNS trigger AS
$BODY$
BEGIN
NEW.name := "ankit";
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION update_name()
OWNER TO postgres;
```
My table `user_table` is:
```
CREATE TABLE user_table (
name character varying(20) NOT NULL,
password character varying(20),
email character varying(20),
gender character varying(20),
phone bigint,
CONSTRAINT user_table_pkey PRIMARY KEY (name)
);
```
and the trigger for the table is :
```
CREATE TRIGGER "change-name"
BEFORE INSERT OR UPDATE
ON user_table
FOR EACH ROW
EXECUTE PROCEDURE update_name();
```
when I am inserting data to my table using query:
```
INSERT INTO user_table(name, password, email, gender, phone)
VALUES ('aa', '9874', 'poi@ka.in', 'male', 8978987896);
```
I've got the error:
> ```
> ERROR: column "ankit" does not exist
> LINE 1: SELECT "ankit"
> ^
> QUERY: SELECT "ankit"
> CONTEXT: PL/pgSQL function update_name() line 3 at assignment
>
> ********** Error **********
>
> ERROR: column "ankit" does not exist
> SQL state: 42703
> Context: PL/pgSQL function update_name() line 3 at assignment
> ```
What am I doing wrong? | This is about PostgreSQL syntax. I removed irrelevant references to pgAdmin from the question.
In Postgres, literal values (constants) are enclosed in single quotes: `'value'`.
Double quotes are reserved for identifiers, and are optional as long as the identifier consists of legal, lower-case letters: `"Odd Name"` vs. `odd_name`.
This is also the SQL standard. Start by reading the chapter ["Lexical Structure" in the manual](http://www.postgresql.org/docs/current/interactive/sql-syntax-lexical.html). | The ERROR has gone by changing just:
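As a minimal illustration of the literal/identifier distinction (applied to the assignment from the question):

```sql
NEW.name := 'ankit';  -- single quotes: the string literal ankit
NEW.name := "ankit";  -- double quotes: an identifier, i.e. a column named ankit
```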
```
NEW.name := "ankit";
```
to
```
NEW.name := 'ankit';
```
Why is it so??? | Assign a constant value in a trigger function | [
"",
"sql",
"postgresql",
"syntax",
"triggers",
"quotes",
""
] |
I am facing a problem :
table1
```
C1 C2 C3 tempId
1 4 5 ab
2 6 7 fc
3 8 9 vb
```
table2
```
ids val
1 a
2 b
3 c
4 d
5 e
6 f
7 g
8 h
9 i
```
I want to pass the value of tempId, i.e. ab, and want an output like
```
valofc1 valofc2 valofc3
a d e
```
Please help, I don't know how to achieve that.
```
select t2.val valofc1,t3.val valofc2,t4.val valofc3 from table1 t1
inner join table2 t2 on t1.C1 = t2.ids
inner join table2 t3 on t1.C2 = t3.ids
inner join table2 t4 on t1.C3 = t4.ids
where tempId = 'ab'
```
[**DEMO HERE**](http://sqlfiddle.com/#!2/32097/3) | Try this way:
```
select t21.val as valofc1, t22.val as valofc2, t23.val as valofc3
from table1 as t
join table2 as t21 on t21.ids = t.C1
join table2 as t22 on t22.ids = t.C2
join table2 as t23 on t23.ids = t.C3
where t.tempId = 'ab'
``` | Sql query with multiple column join | [
"",
"mysql",
"sql",
"sql-server",
"sqlite",
""
] |
I'm getting the first 50 records using this query. However, there is a flag named `read` in the same table, and I only want to return the first 50 records that are not `read` (i.e. `isread=false`). How can I achieve this? Can anybody give me ideas for the required query? I have tried using sub queries.
```
SELECT * FROM notification WHERE toUserId = 'email' ORDER BY id DESC LIMIT 50;
``` | Try adding an `AND` condition to your WHERE clause: `userID = 'email' AND flag = true`. This will only return users with true value for flag of which you can get the top 50 by your limit condition. | Since you want to get the first 50 values regardless of the IsRead condition and then filter it, you can try this but I populated the query in Sql Server.
```
SELECT * FROM (SELECT TOP 50 * FROM notification) S WHERE toUserId = 'email' and isread=false
```
This should help. You can try the same technique in MySql. | Fetching data from database and Filter Results | [
"",
"mysql",
"sql",
"database",
"database-design",
"database-schema",
""
] |
I am trying to run the following piece of code to update table summary from data in click\_summ table.
```
data temp(index=(comp=(card_number package_sk)));
set click_summ(where=(^missing(e_1st_click_dt)));
keep card_number package_sk e_1st_click_dt;
run;
data summary(drop=new_date) ;
set summary;
set temp(rename=(e_1st_click_dt= new_date) in=a) key=comp;
if a then do;
e_1st_click_dt = min(e_1st_click_dt,new_date);
end;
else
_ERROR_ = 0; /*No need for IORC errors*/
run;
```
This particular piece of code is throwing an error saying:
> ERROR: The ORACLE table SUMMARY has been opened for OUTPUT. This table already exists, or there is a name conflict
> with an existing object. This table will not be replaced. This engine does not support the REPLACE option.
What is the work around for the same? This question is related to a question I raised earlier ([Summerizing a table in SAS](https://stackoverflow.com/questions/20943101/summerizing-a-table-in-sas)) | ```
data temp;
set click_summ(where=(^missing(e_1st_click_dt)));
keep card_number package_sk e_1st_click_dt;
run;
data summary(drop=new_date) ;
modify summary temp(rename=(e_1st_click_dt= new_date) in=a);
by card_number package_sk;
if a then do;
e_1st_click_dt = min(e_1st_click_dt,new_date);
end;
else
_ERROR_ = 0; /*No need for IORC errors*/
run;
```
Modify was the key. This will work fine with Oracle tables as well. And pretty fast too. | Just change the table name from summary to something else and try..am not sure whether we can use summary as a table name since proc summary is there...not sure but try and see | Update an oracle table with data from another table in SAS | [
"",
"sql",
"sas",
""
] |
Reading about `Triggers`, `Equi-Joins` and `cross-joins` has my head spinning, and I'm unable to figure out what I need.
Basically, I have 3 tables:
```
Table 1: Users
id
username
score
Table 2: Acts
act_id
act
user_id
act_score
Table 3: Votes
vote_id
act_id
user_voter
score_given
```
When a vote is cast, I want MySQL to automatically add the `score_given` to the user's `score` and to the `act_score`.
Is this possible and what type of `JOIN` and/or `TRIGGER` would I need. And if someone is feeling really generous, could they provide me with the sql code? I am really struggling to get my head around this... | I would suggest to use `VIEW` to calculate scores for each `User` or `Act` as needed. That would guarantee that scores will always be correct and you would not need a lot of triggers.
**UPDATED:**
1) Remove scores from `Users` and `Acts` tables;
2) Create a view `ActsWithScore`:
```
CREATE VIEW ActsWithScore AS
SELECT
*,
(SELECT SUM(Votes.score) FROM Votes WHERE Votes.act_id = Acts.act_id) AS act_score
FROM Acts;
```
3) Create a view `UsersWithScore`:
```
CREATE VIEW UsersWithScore AS
SELECT
*,
(SELECT SUM(ActsWithScore.act_score) FROM ActsWithScore WHERE ActsWithScore.user_id = Users.id) AS user_score
FROM Users;
```
4) Now you can query data by following commands:
```
SELECT *
FROM UsersWithScore
```
and
```
SELECT *
FROM ActsWithScore
```
After you have done all these changes, don't forget to change `Users` to `UsersWithScore` and `Acts` to `ActsWithScore` in all the places where you are querying the score.
PS. Sorry, I don't have MYSQL at hand right now, so bear with me if there are some syntax errors. | A trigger is the most obvious solution but it can add a maintenance issue. If you app is not that big/complex you ill be fine.
Also I prefer to do both inserts at business layer. If you are doing a insert, why not do two inserts? It can add overhead to your application but if your app is not a doing that thousands times/sec you ill be fine. | Inserting data into table automatically based on another table | [
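A minimal sketch of the trigger alternative (MySQL syntax; column names follow the question's schema, and this is untested):

```sql
DELIMITER //
CREATE TRIGGER after_vote_insert
AFTER INSERT ON Votes
FOR EACH ROW
BEGIN
  -- add the vote to the act's running score
  UPDATE Acts
     SET act_score = act_score + NEW.score_given
   WHERE act_id = NEW.act_id;
  -- add the same vote to the act owner's total score
  UPDATE Users
     SET score = score + NEW.score_given
   WHERE id = (SELECT user_id FROM Acts WHERE act_id = NEW.act_id);
END//
DELIMITER ;
```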
"",
"mysql",
"sql",
"join",
"triggers",
""
] |
I would like to know if I can get in a single query, All the objects of certain model where its date's year equals the year of the Max('date') of the model. For example, using the models from the [Aggregation Django Docs](https://docs.djangoproject.com/en/dev/topics/db/aggregation/), how can I get All the `Book`s published in the year of the most recently published `Book`?
All the examples in the docs filter by immediate values (`pubdate__year=2006`), but I need to use a calculated value over the same object in the same query.
Of course, I could do this by performing two queries: one for getting the max year, and a second one to filter by that year, but I think it should be possible to do it in a single query. It's just I haven't figured it out yet.
Thanks for your help!
## EDIT:
Since some of you have given similar answers, I'm writing this update so my problem can be better understood.
This is my model:
```
class Expo(models.Model):
class Meta:
verbose_name= _('Expo')
verbose_name_plural = _('Expos')
name = models.CharField(max_length=255)
place = models.CharField(max_length=255, null=True, blank=True)
date = models.DateField()
bio = models.ForeignKey(Bio, related_name='expos')
```
I need **"All the `Expo`s that happened in the latest year of the list of `Expo`s stored in my database"**
To resolve this, I'm doing this:
```
from django.db.models import Max
max_year = Expo.objects.all().aggregate(Max('date'))['date__max'].year
expos = Expo.objects.filter(date__year=max_year)
```
But this, I understand that performs two queries on the database. I would like an expression that let me get the same result, but performing a single query.
I've tried as suggested:
```
Expo.objects.annotate(max_year=Max('date__year')).filter(date__year=F('max_year'))
```
But get the error:
```
FieldError: Join on field 'date' not permitted. Did you misspell 'year' for the lookup type?
```
I also have tried:
```
Expo.objects.annotate(max_date=Max('date')).filter(date__year__gte=F('max_date__year'))
```
but I get the error:
```
FieldError: Cannot resolve keyword 'max_date' into field. Choices are: bio, date, id, items, name, place, max_date
```
Notice that it says that it can't resolve `'max_date'`, but it appears listed among the choices. Weird.
Again, Thanks a lot for your help! :) | Performing statement in **a single query is no guarantee to improve performance**, this is easy to understand if you try to write an agnostic RDBMS brand SQL single sentence for yours requirements. Also, you **lost in readability**.
In my opinion, you can see and elegant solution in this approach:
1. Get last `Expo` by date .
2. Do a simple filter query.
**For your code:**
```
max_year = Expo.objects.latest('date').date.year
expos = Expo.objects.filter(date__year=max_year)
```
Remember you can **[cache](https://stackoverflow.com/questions/4631865/caching-query-results-in-django)** max\_year, also create a `DESC` index over `date` can helps. | Here is how you can do something using a combination of [Annotation](https://docs.djangoproject.com/en/dev/topics/db/aggregation/#filtering-on-annotations) and [F](https://docs.djangoproject.com/en/dev/topics/db/queries/#filters-can-reference-fields-on-the-model) object
To filter on Max date:
```
ModelClass.objects.annotate(max_date=Max('pubdate')).filter(pubdate=F('max_date'))
```
To filter on the year of Max date:
```
max_date = ModelClass.objects.latest("pubdate").pubdate
published_objs = ModelClass.objects.filter(pubdate__year=max_date.year)
```
There does not seem to be a way to filter on *Max(date.year)* in a single query. And as mentioned by @danihp, even a single query is not a guarantee of performance. | Django - Filter a queryset by Max(date) year | [
"",
"sql",
"django",
"django-queryset",
""
] |
I have two tables, related by a foreign key. I want to select a column from the child table based on some information about a row in the parent table.
The definitions of the tables are:
```
CREATE TABLE Runs(
id INTEGER,
name TEXT UNIQUE NOT NULL,
rundate TEXT NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE Location(
lid INTEGER,
rid INTEGER NOT NULL,
direc TEXT NOT NULL,
PRIMARY KEY (lid),
FOREIGN KEY (rid) REFERENCES Runs(id)
);
```
I am new to SQL so I have not quite figured out how to do this. This is what I have so far (suppose I want to get the directory for a run named 012114):
```
SELECT direc FROM Location INNER JOIN Runs WHERE Runs.name = '012114';
```
I've also tried
```
SELECT direc FROM Location INNER JOIN (SELECT * FROM Runs WHERE Name = '012114');
```
Both of these attempts lists all entries from the direc column in the Location table. I am using sqlite. | You are probably missing part of the join command where you are telling sqllite how to merge the tables, so you are getting a cartesian join.
```
SELECT direc FROM Location
INNER JOIN Runs
ON location.rid = Runs.id WHERE Runs.name = '012114';
```
This should work, assuming that location.rid is referring to the run.
For example:
```
SELECT direc
FROM Location
INNER JOIN Runs ON Runs.id = Location.rid
WHERE Runs.name = '012114';
```
Without this, you can return quite unexpected results. | Inner join of child table with parent table | [
"",
"sql",
"select",
"sqlite",
"inner-join",
""
] |
I created a table `scores` as follows.
```
student_id course score
1 math 90
1 cs 70
2 math 60
2 cs 50
```
I want to categorise all scores of a student by a scale:
```
>60 pass
<60 fail
=60 bottomline
```
Then, I wrote a select statement with a case expression as
```
select student_id, course, score =
(case score
when score > 60
then 'pass'
when score < 60
then 'fail'
else 'bottomline'
end) as 'result'
from scores
order by student_id
```
However, the result set looks like below, and the `result` column did not show categories as `pass`,
`fail`, or `bottomline` according to the scale. Instead, all the `result` values are `0`.
```
student_id course result
1 math 0
1 cs 0
2 math 0
2 cs 0
```
Where was I wrong with the select statement that caused all the `result` values to be `0`? I am using a MySQL database server.
select student_id, course, score,
case --<-- you don't need to mention the column name here
when score < 60 then 'fail'
when score = 60 then 'bottomline'
when score > 60 then 'pass'
end as 'result'
from scores
order by student_id
```
# `Sql Fiddle` | why `score = ...` ?
just:
```
select student_id, course, score,
(case score
when score > 60
then 'pass'
when score < 60
then 'fail'
else 'bottomline'
end) as 'result'
from scores
order by student_id
``` | Using a SELECT statement with a searched CASE expression | [
"",
"mysql",
"sql",
"database",
""
] |
I currently have two tables that have the same column names;
```
APPLICATION_Reference.firstName, APPLICATION_Reference.lastName
APPLICATION_Recommendation.firstName, APPLICATION_Recommendation.lastName
```
```
$all_references = mysqli_query($con, "SELECT *
FROM APPLICATION_Reference
INNER JOIN APPLICATION_Recommendation
ON APPLICATION_Reference.referenceId = APPLICATION_Recommendation.referenceId
INNER JOIN Applicant ON APPLICATION_Reference.firstName = Applicant.givenName
ORDER BY APPLICATION_Reference.referenceId ASC");
```
And then I have an echo statement where I echo out my data: `$applicant_reference['firstName']`
My question is: **How do I differentiate between my different tables when echoing out the column names?** | Use table aliases and column aliases.
In your query you can alias a whole table to make things more readable, e.g.
```
APPLICATION_Reference ref
```
Your query then becomes
```
SELECT *
FROM APPLICATION_Reference ref
INNER JOIN APPLICATION_Recommendation rec ON ref.referenceId = rec.referenceId
INNER JOIN Applicant app ON ref.firstName = app.givenName
ORDER BY ref.referenceId ASC
```
Instead of then using `SELECT *`, list the fields you are interested in and rename them to something that makes sense in your result set, e.g.
```
SELECT ref.firstName reference_first_name,
rec.firstname recommendation_first_name,
...
```
You can then access the new column names in your PHP code. You certainly don't have to use such long references (e.g. `reference_first_name`) - those are just examples. | you could use
```
SELECT *, table1.field AS field1, table2.field AS field2 ...
```
and then reference the fields as field1 and field2 in your output | Differentiating Between Table Names | [
"",
"mysql",
"sql",
""
] |
I have two select statements such as below:
```
SELECT t.name, SUM(t.number)
FROM table1 t
WHERE t.something = '1'
GROUP BY t.name
SELECT v.name, SUM(v.number2)
FROM table2 v
WHERE v.somethingElse = '2'
GROUP BY v.name
```
The results from both of these queries have the common column 'name'. I want to combine the two SELECT statements so I have one name column and both sums shown next to their corresponding name.
I have tried a FULL OUTER JOIN but I cannot seems to get it to work.
```
SELECT t.name, SUM(t.number)
FROM table1 t
FULL OUTER JOIN (SELECT v.name, SUM(v.number2) FROM table2 v
WHERE v.somethingElse = '2' GROUP BY v.name)
ON t.name = v.name
WHERE t.something = '1'
GROUP BY t.name
```
Hope someone can point out my silly mistake or how I should go about doing this.
Thank you.
UPDATE: Keep getting error 'Missing right parenthesis'. Have a look on sqlfiddle <http://sqlfiddle.com/#!4/b04c4/4> | Create the virtual column on subqueries with values 0 and them sum them all by names using union all. Something like this
```
select name, sum(number1),sum(number2) from (
SELECT t.name, SUM(t.value) as number1 , 0 as number2
FROM table1 t
WHERE t.value = '1'
GROUP BY t.name
UNION
SELECT v.name, 0 as number1 ,SUM(v.value) as number2
FROM table2 v
WHERE v.value = '2'
GROUP BY v.name
) group by name;
```
here it is on [sql fiddle](http://sqlfiddle.com/#!4/8cf62/1) | Try this way:
```
select A.Name, A.cnt1, B.cnt2
from(
SELECT t.name, SUM(t.number) cnt1
FROM table1 t
WHERE t.something = '1'
GROUP BY t.name
) A
left join (
SELECT v.name, SUM(v.number2) cnt2
FROM table2 v
WHERE v.somethingElse = '2'
GROUP BY v.name
) B on B.name= A.name
``` | Join 2 SELECT statements on common column | [
"",
"sql",
"oracle",
""
] |
I have two tables with the same composite key.
Following is the table
table T1
```
No | Date | S_hour
----+---------------+-------
1 | 2012-01-01 | 8
1 | 2012-01-02 | 8
1 | 2012-01-03 | 8
2 | 2012-01-01 | 8
2 | 2012-01-02 | 8
2 | 2012-01-03 | 8
```
table T2
```
No | Date | L_hour
----+---------------+-------
1 | 2012-01-01 | 8
1 | 2012-01-02 | 3
1 | 2012-01-03 | 8
2 | 2012-01-01 | 2
2 | 2012-01-02 | 8
2 | 2012-01-03 | 8
```
S\_hour is scheduled hour and L\_hour is leave hour.
In my query output I want S\_hour - L\_hour. The following should be the query output:
Query output
```
No | Date | S_hour - L_hour
----+---------------+-------
1 | 2012-01-01 | 0
1 | 2012-01-02 | 5
1 | 2012-01-03 | 0
2 | 2012-01-01 | 6
2 | 2012-01-02 | 0
2 | 2012-01-03 | 0
```
Thanks in advance | ```
select t1.No
, t1.Date
, t1.S_hour - t2.L_hour
from Table1 t1
join Table2 t2
on t1.No = t2.No
and t1.Date = t2.Date
``` | something like this
```
SELECT T1.No, T1.Date, s_hour - L_hour AS result
FROM T1 INNER JOIN
T2 ON t1.No = t2.No AND
T1.Date = T2.Date
``` | Two tables having same composite key | [
"",
"mysql",
"sql",
"t-sql",
"plsql",
""
] |
I want to loop through several stored procs and extract one string from each of them to use in another procedure (basically the 4-part remote server string).
So I can get the stored procs list from `SysObjects` (with Type = P) into a table, then I can loop or cursor through that table variable calling `sp_helptext` on each one.
But how can I get the text result of `sp_helptext` into a variable so I can do a CharIndex on the word '**BOCTEST**' etc?
Is there a table like sys.procedures that stores the text. | The portable way is to use the ANSI/ISO view `INFORMATION_SCHEMA.ROUTINES`, but you'll only get the first 4000 characters of the stored procedure definition:
```
declare @source_code varchar(max)
select @source_code = t.ROUTINE_DEFINITION
from information_schema.routines t
where specific_schema = 'owner-schema' -- e.g., dbo
and specific_name = 'my_stored_procedure_name' -- your stored procedure name here
```
Or you can use the system view `sys.sql_modules` in the same vein:
```
declare @source_code varchar(max)
select @source_code = definition
from sys.sql_modules
where object_id = object_id('dbo.my_stored_procedure_name')
```
Or, the simplest way:
```
declare @source_code varchar(max)
set @source_code = object_definition( 'dbo.my_stored_procedure_name' )
``` | Nicholas Carey's third option needs some changes,
object\_definition function needs object\_id as parameter,
so the code should be
```
declare @source_code varchar(max)
declare @objectid int
select @objectid=object_id('dbo.my_stored_procedure_name')
select @source_code = object_definition(@objectid )
```
Check to this link more details
<https://msdn.microsoft.com/en-IN/library/ms176090.aspx> | get the text of a stored procedure into a variable in SQL Server | [
"",
"sql",
"stored-procedures",
"sql-server-2008-r2",
""
] |
Is it possible to insert into a table based on a count condition (using just 1 statement)?
I want to do something like this:
```
insert into [table] (...) values (...) if select count(*) from [table] < 5
```
(insert into the table only if it has less than 5 entries) | try something like this:
```
insert into [table] (#FIELDS#)
select (#VALUES#) from [table]
where (select count(*) from [table]) < 5
FETCH FIRST ROW ONLY
``` | You can try like this
```
insert into [table] (...) values Select (...) from [table] Where count(*) < 5
``` | SQL conditional insert into | [
"",
"sql",
"derby",
""
] |
I have a rails/backbone single page application processing a lot of SQL queries.
Is this request:
```
SELECT * FROM `posts` WHERE `thread_id` = 1
```
faster than this one:
```
SELECT `id` FROM `posts` WHERE `thread_id` = 1
```
How big is the impact of selecting unused columns on the query execution time? | For all practical purposes, when looking for a single row, the difference is negligible. As the number of result rows increases, the difference can become more and more important, but as long as **you have an index on thread\_id** and you are not more than 10-20% of all the rows in the table, here is still not a big issue. FYI the differentiation factor comes from the fact that selecting `*` will force, for each row, an additional lookup in the primary index. Selecting only `id` can be satisfied just by looking up the secondary index on `thread_id`.
There is also the obvious cost associated with any *large field*, like BLOB documents or big test fields. If the `posts` fields have values measuring tens of KBs, then obviously retrieving them adds extra transfer cost.
All these assume a normal execution engine, with B-Tree or ISAM row-mode storage. Almost all 'tables' and engines would fall into this category. The difference would be significant if you would be talking about a columnar storage, because columnar storage only reads the columns of interests and reading extra columns unnecessary impacts more visible such storage engines.
Having or not having an index on `thread_id` will have a hugely more visible impact. Make sure you have it. | Selecting fewer columns is generally faster. Unfortunately, it is hard to say exactly how much the time difference will be. It may depend on things like how many columns there are and what data is in them (for example, large CLOBS can take longer to fetch than simple integers), what indexes have been set up, and the network latency between you and the database server.
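As a rough illustration (MySQL; the index name and the exact plan output are assumptions and will vary):

```sql
CREATE INDEX idx_posts_thread ON posts (thread_id);

-- Can typically be answered from the secondary index alone
-- (EXPLAIN's Extra column usually shows "Using index"):
EXPLAIN SELECT id FROM posts WHERE thread_id = 1;

-- Needs an additional lookup into the primary index per matching row
-- to fetch the remaining columns:
EXPLAIN SELECT * FROM posts WHERE thread_id = 1;
```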
For a definitive answer on the time difference, the best I can say is do both queries and see how long each takes. | Is selecting fewer SQL columns making the request faster? | [
"",
"sql",
""
] |
I have a simple property table with an id and name field. I also have a propertyPrices table whose schema and data is shown below;

So, it specifies prices for given date ranges. The price value indicates the cost for a day within that period.
When given a start and end date, I'd like to be able to select the sum of the prices based on the number of days which fall between each date range.
I have attempted the following;
```
DECLARE @propertyID int = 1;
DECLARE @startDate date = '2014/02/13';
DECLARE @endDate date = '2014/02/18';
/* get number of days between start and end date */
DECLARE @duration int = DATEDIFF(DAY, @startDate, @endDate)
/* select properties with holiday cost */
SELECT
property.name, SUM(dbo.propertyPrices.price) * @duration AS totalCost
FROM
dbo.property INNER JOIN dbo.propertyPrices ON dbo.property.id = dbo.propertyPrices.propertyID
WHERE
property.id = @propertyID
GROUP BY
dbo.property.name
```
Which returns 2000 as the total cost. This is for 5 days, which is then multiplied by 400 (which is the sum of all the prices). I only want to select the appropriate price for each day in the range. So it should be;
(2 days at 100) + (3 days at 300) = 1100.
I'm not sure how I can aggregate the individual day values. My priority is performance.
Thanks in advance for any help.
Nick | I can imagine writing a stored procedure, or the other solution would be to build up an auxiliary table just containing a list of days.
If you had a table `calendar(date)` with all dates in the affected range, you could do
```
SELECT
property.name, SUM(dbo.propertyPrices.price) AS totalCost
FROM
dbo.property
INNER JOIN dbo.propertyPrices
ON dbo.property.id = dbo.propertyPrices.propertyID
JOIN calendar
ON calendar.date between propertyPrices.startDate and propertyPrices.endDate
WHERE
property.id = @propertyID
GROUP BY
dbo.property.name
``` | I would do it using CTEs. First get the prices with actual date ranges, then multiply price by number of days in the range and sum it.
```
with cte as
(
select propertyID, price,
case when startDate < @startDate then @startDate else startDate end as startDate,
case when endDate > @endDate then @endDate else endDate end as endDate
from propertyPrices
where endDate >= @startDate
and startDate <= @endDate
)
, cte2 as
(
select propertyID, sum((datediff(day, startDate, endDate)+1) * price) as totalCost
from cte
group by propertyID
)
select name, totalCost
from cte2
inner join property on id = propertyID
where id = @propertyID
```
You can view the whole solution on [SQLFiddle](http://www.sqlfiddle.com/#!3/af939/5/0) | SQL Sum of aggregated values within SELECT | [
"",
"sql",
"sql-server",
""
] |
Find the sid who have reserved a red and a green boat.
```
Table reserves
sid bid
22 101
22 102
22 103
31 103
32 104
Table Boats
Bid Color
101 blue
102 red
103 green
104 red
```
Is it possible to use the AND operator with a subquery, something like this?
```
select sid from reserves where bid in
(select bid from boats where color='red') and
(select bid from boats where color='green') ;
```
Here I need to check whether "bid" is present in the results of both the 1st and 2nd subqueries, then select the sid. It doesn't consider the result of the second subquery for me, though.
```
select sid from reserves where
bid in (select bid from boats where color='red')
OR
bid in (select bid from boats where color='green') ;
```
But a more efficient way to do this is not with two subqueries but with a join:
```
SELECT sid FROM reserves AS r
LEFT JOIN Boats as b ON r.bid = b.bid
WHERE b.color IN ('red', 'green')
```
If you only want the list to contain unduplicated SIDs you can use either of the following:
```
SELECT distinct(sid) FROM reserves AS r
LEFT JOIN Boats as b ON r.bid = b.bid
WHERE b.color IN ('red', 'green')
```
or
```
SELECT sid FROM reserves AS r
LEFT JOIN Boats as b ON r.bid = b.bid
WHERE b.color IN ('red', 'green')
GROUP BY sid
```
I have had mixed experience with the efficiency of GROUP BY vs DISTINCT on really big tables (40GB+)
UPDATE: As I miss understood your previous question this maybe a more suitable solution:
```
SELECT sid FROM reserves AS r
LEFT JOIN Boats as b ON r.bid = b.bid
GROUP BY sid
HAVING sum(b.color = 'red') and sum(b.color= 'green')
```
Here we are joining the tables, then grouping the rows by the SID. Using the HAVING clause we sum the boolean checks on b.color = 'red' (and 'green'); the sum will be zero if the sid has no boats that are red (or green), and by ANDing the two sums together you require both to be non-zero.
And a sqlfiddle for you to play with: <http://sqlfiddle.com/#!2/b5ec1/8> | It works for something like this
```
select sid from reserves where bid in(select bid from boats where color='red')
and sid in
(select sid from reserves where bid in(select bid from boats where color='green'));
``` | Using And operator with subquery | [
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
The following SQL is not exactly what I need. In a query I am summing Line totals according to
Part# and Date.
This is an invoice history file. The user has a report that shows QTY sold by part#, but the file used there does not hold $amt. So I made a query that creates a temp file that breaks on Part# and date, because the report selects by date. This SQL is created by RTVQMQRY. It doesn't seem to produce summaries.
SELECT
ALL (((T02.IDSHP#*IDNTU$))), T02.IDDOCD, T02.IDINV#, T02.IDPRT#
FROM DTALIB/OEIND1 T02 INNER JOIN
OKLIB/ICDET76OK T01
ON T02.IDCOM# = T01.ITCOM#
AND T02.IDIDC# = T01.ITTRN#
WHERE IDDOCD >= 20130101
ORDER BY 004 ASC, 003 ASC, 002 ASC, 001 ASC
``` | SQL is not a reporting tool. SQL is a language that asks for sets of data; typically either details or totals (GROUP BY). SQL will do totals if you specify GROUP BY WITH ROLLUP, but that doesn't apply here.
There are several approaches you can take.
1) Create a view or logical file that does the JOIN you've specified above. Then use Query/400 to generate the report you need, with totals.
2) Since SQL wants to return a set, return 2 sets and do a UNION. The first set is the details and the second set is the total. There's a bit of a trick, in that each column in the two sets needs to be the same data type, but something like this should work:
```
SELECT
ALL 'D', T02.IDSHP#*IDNTU$, T02.IDDOCD, T02.IDINV#, T02.IDPRT#
FROM DTALIB/OEIND1 T02 INNER JOIN
OKLIB/ICDET76OK T01
ON T02.IDCOM# = T01.ITCOM#
AND T02.IDIDC# = T01.ITTRN#
WHERE IDDOCD >= 20130101
union
SELECT
ALL 'T', sum(T02.IDSHP#*IDNTU$), ' ', 0, 0
FROM DTALIB/OEIND1 T02 INNER JOIN
OKLIB/ICDET76OK T01
ON T02.IDCOM# = T01.ITCOM#
AND T02.IDIDC# = T01.ITTRN#
WHERE IDDOCD >= 20130101
ORDER BY 1, 5, 4, 3, 2
```
See the second SELECT? It has the same columns as the first, and hopefully, the same data types. If your invoice number or part numbers are character columns, then use characters (' ') rather than numbers (0). Just to make sure the total comes AFTER the detail, I added another column that will sort the total last.
From a style perspective, I'd use meaningful correlation names. T01 and T02 aren't as easy to read (and debug in the future) as, say, INV and DTL. Yes, that's what RTVQMQRY generated, but you don't need to keep them that way. | Nooooooooo.... *Please* don't use Query/38 (oh yeah, sorry, they renamed it to Query/400), which has been around and probably unimproved since 1982.
QM Query is a reporting tool that will help you format your reports, have multi-level breaks, allows a prompted interface similar to Query/400. You have found how to retrieve the old query into QM Query source. With STRQM you can work with the query itself in prompted mode (as Query/400 users will be most comfortable with) initially if you wish, or in SQL mode, which I recommend for becoming more comfortable with SQL. This helped me learn quite a few years ago.
You can define a QM Query Form, which will give you the level breaks and other formatting that you may want. In this mode, you can let the Form do the breaks and summarizing, while your SQL need only supply the detail lines of the report. | Need SQL To do summary like Query/400 | [
"",
"sql",
"ibm-midrange",
"db2-400",
""
] |
I have several fields in a table, but wish to remove some records. I'll call them duplicates, but they aren't in the true sense.
The table and some example data
```
Id1 Id2 Name1 Name2 DOB1 DOB2
123, abc, jones, smith, 19740901, 19820101
abc, 123, smith, Jones, 19820101, 19740901
def, 456, davis, short, 19720101, 20011010
456, def, short, davis, 20011010, 19720101
```
What I want to do is remove one of each pair of "duplicate" records, as it's just the same as the other but with the "1" columns transposed with the "2" columns. Any help would be greatly appreciated. | Here is a standard SQL way of doing this:
```
delete from t
where Id1 > Id2 and
exists (select 1
from t t2
where t2.Id1 = t.Id2 and
t2.Id2 = t.Id1 and
t2.Name1 = t.Name2 and
t2.Name2 = t.Name1 and
t2.DOB1 = t.DOB2 and
t2.DOB2 = t.DOB1
);
``` | You could use an `INNER JOIN` to pair up any rows with transposed `Id1` and `Id2` values, filter down to those that are actually duplicates, select all but one of the duplicate rows, and then send the results to a `DELETE`.
```
DELETE T1
FROM [TableName] T1
-- Pair up the duplicate rows
INNER JOIN [TableName] T2
ON T1.Id1 = T2.Id2
AND T1.Id2 = T2.Id1
WHERE T1.Id1 > T1.Id2 -- Only delete the row with the higher "Id1" value
-- Filter out any rows that are not actually duplicates
    AND T1.Name1 = T2.Name2
    AND T1.Name2 = T2.Name1
    AND T1.DOB1 = T2.DOB2
    AND T1.DOB2 = T2.DOB1
``` | SQL Duplication across 2 fields | [
"",
"sql",
""
] |
Using Oracle PL/SQL, how can I populate the bind variable `:b3` with more than one value for the `IN` clause? (This code is for demo purposes only -- it may not compile, but it clarifies the question.)
```
declare
type work_rec is record (
work_status varchar2(50),
work_cd varchar2(50));
type work_tab is table of work_rec index by pls_integer;
t_work_tab work_tab;
sql_stmt varchar2(400);
begin
select case
when status_desc like '%Employed%' then 'Employed'
else 'Unknown'
end as work_status
,case
when status_cd between '1' and '9' then '1,2,3,4'
else '0'
end as work_cd
bulk collect into t_work_tab
from employee_table;
for i in t_work_tab.first..t_work_tab.last
loop
sql_stmt := 'insert into employee_hist
select name,
employer
from tax_table
where employment_cd in (:b3)'; --< how to populate this with '1','2','3','4'
execute immediate sql_stmt using t_work_tab(i).work_cd;
commit;
end loop;
end;
/
``` | Better solution (at least to my issue) - don't use a bind variable but instead concatenate the string of values (code segment shown only):
```
for i in t_work_tab.first..t_work_tab.last
loop
sql_stmt := 'insert into employee_hist
select name,
employer
from tax_table
where employment_cd in (' || t_work_tab(i).work_cd || ')';
execute immediate sql_stmt;
...
```
You get the idea. Thanks for all your input. | When you loop through the values, keep appending the quote and comma characters as required to build up the `IN` list. Then you can use that string as part of your SQL statement.
Example
```
temp = "'"
Loop through values a,b,c,d as str
  temp = temp + str + "','"
End Loop
temp = substr(temp, 1, length(temp) - 2) // trim the trailing ,' left by the loop
```
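A runnable sketch of that string-building idea, shown here in Python purely for illustration (and note that concatenating user-supplied values like this invites SQL injection — prefer bind variables where you can):

```python
# Build a quoted, comma-separated IN-list from a sequence of values.
def make_in_list(values):
    return ",".join("'" + str(v) + "'" for v in values)

sql = "select name, employer from tax_table where employment_cd in (%s)" % make_in_list([1, 2, 3, 4])
print(sql)
```

This prints the clause with `in ('1','2','3','4')`, matching the shape the pseudocode loop above produces.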
Hope it helps! | How to put string of values with IN clause using dynamic PL/SQL? | [
"",
"sql",
"oracle",
"plsql",
""
] |
This is my table
```
create table #t(id int, amt int)
insert into #t values(1, 40), (1, 20), (1, 10), (2, 100), (2, 120), (2, 400)
```
I need output like this!
```
id amt
1 70
1 70
1 70
2 620
2 620
2 620
```
I tried
```
SELECT
id, SUM(amt)
FROM
#t
GROUP BY
id
``` | Try this:
```
select id,sum(amt) over(partition by id) as amt from #t
``` | ```
select id
, sum(amt) over (partition by id)
from #t
```
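Both answers rely on `SUM(...) OVER (PARTITION BY ...)` repeating the group total on every row. A quick way to convince yourself, sketched with SQLite (3.25+ for window functions) via Python rather than SQL Server, so a plain table stands in for the temp table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, amt INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 40), (1, 20), (1, 10), (2, 100), (2, 120), (2, 400)])

# The windowed SUM repeats each id's total on every one of its rows.
rows = con.execute(
    "SELECT id, SUM(amt) OVER (PARTITION BY id) AS amt FROM t ORDER BY id"
).fetchall()
print(rows)  # [(1, 70), (1, 70), (1, 70), (2, 620), (2, 620), (2, 620)]
```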
[Example at SQL Fiddle.](http://sqlfiddle.com/#!6/0ffd3/1/0) | Error querying SQL Server | [
"",
"sql",
"sql-server",
""
] |
My apologies in advance, this is probably a basic question asked and answered but I don't know how to word the search to find the right results.
I have a table that (among other columns) contains program names for a customer number. I need to identify customers that have only one specific program and no others. A simplified example:
Col1 = Customer\_Number, Col2 = Program\_Name
Customer 1 has three records because they are enrolled in 2013BA1111, 2013BO1161 and 2013BO1163. Customer 2 has just one record because they are only enrolled in 2013BA1111.
Using Teradata SQL Assistant, if I select WHERE Program\_Name = '2013BA1111', both Customer 1 and Customer 2 will be returned since they are both enrolled in program 2013BA1111. I want to select only Customer 2 since they have ONLY 2013BA1111.
Thanks! | In standard (ANSI/ISO) SQL, a *derived table* is your friend. Here, we join the customer table against a derived table that produces the list of customers enrolled in only one program:
```
select *
from customer c
join ( select customer_id
from customer
group by customer_id
having count(program_name) = 1
) t on t.customer_id = c.customer_id
where ... -- any further winnowing of the result set occurs here
``` | Perhaps something like this:
```
select Customer_Number, Program_Name
from YourTable t1
left join (
select Customer_Number
from YourTable t2
where t2.Program_Name <> '2013BA1111'
) t3 on t1.Customer_Number = t3.Customer_Number
where
t1.Program_Name = '2013BA1111'
and t3.Customer_Number is null
```
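A quick runnable check of this exclusion pattern (SQLite via Python, with a hypothetical `enrollment` table standing in for yours):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enrollment (customer_number INTEGER, program_name TEXT)")
con.executemany("INSERT INTO enrollment VALUES (?, ?)",
                [(1, "2013BA1111"), (1, "2013BO1161"), (1, "2013BO1163"),
                 (2, "2013BA1111")])

# Keep customers enrolled in 2013BA1111 who have no row for any other program.
rows = con.execute("""
    SELECT DISTINCT t1.customer_number
    FROM enrollment t1
    LEFT JOIN (SELECT customer_number FROM enrollment
               WHERE program_name <> '2013BA1111') t3
      ON t1.customer_number = t3.customer_number
    WHERE t1.program_name = '2013BA1111'
      AND t3.customer_number IS NULL
""").fetchall()
print(rows)  # only customer 2 qualifies
```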
The outer query is selecting all records that have the given `Program_Name`, then it is joined with an inner query of everyone who has a record that does not equal the given `Program_Name`, and the outer query checks to make sure that the joined inner query doesn't have a match. | Select Records With Only One Value In A Column Where Multiple Are Possible | [
"",
"sql",
"teradata",
""
] |
How do I accomplish the following in the most efficient manner in SQL?
```
SELECT FIELD1, FIELD2, FIELDN ... FROM TABLE1 WHERE FIELD1 =
(the row with the max value from an aggregate query)
```
For example
```
SELECT * FROM PS_DEPT_TBL WHERE DEPTID =
...
SELECT DEPTID, COUNT(*) FROM PS_JOB GROUP BY DEPTID ORDER BY 2 DESC
(i'd want the row with the largest count above)
``` | As a literal answer to your question, see below.
However, having knowledge of peoplesoft, I question what you're trying to do with that query, because ps\_job contains effective dated rows. It will contain one row for every single change to an employee's job over time. The same employee might have 50 rows on it. So if you're trying to find the department with the most employees, this query is not correct. But as a literal answer to your question I think it does what you say you want.
```
select *
from ps_dept_tbl
where deptid in
(select deptid
from ps_job
group by deptid
having count(*) = (select max(num_recs)
from (select deptid, count(*) as num_recs
from ps_job
group by deptid)))
``` | Order the query by the value you want to min/max by, and use `LIMIT 1` or `TOP 1` (depending on your DB). | How do I do a simple "select x where it's value of y is the max in z"? | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have two tables, `room` and `message`, in a chat database :
```
CREATE TABLE room (
id serial primary key,
name varchar(50) UNIQUE NOT NULL,
private boolean NOT NULL default false,
description text NOT NULL
);
CREATE TABLE message (
id bigserial primary key,
room integer references room(id),
author integer references player(id),
created integer NOT NULL
);
```
Let's say I want to get the rooms with the number of messages from a user and the date of the most recent message:
```
id | number | last_created | description | name | private
----+--------+--------------+-------------+------------------+---------
2 | 1149 | 1391703964 | | Dragons & co | t
8 | 136 | 1391699600 | | Javascript | f
10 | 71 | 1391684998 | | WBT | t
1 | 86 | 1391682712 | | Miaou | f
3 | 423 | 1391681764 | | Code & Baguettes | f
...
```
I see two solutions:
**1)** selecting/grouping on the messages and using subqueries to get the room columns :
```
select m.room as id, count(*) number, max(created) last_created,
(select name from room where room.id=m.room),
(select description from room where room.id=m.room),
(select private from room where room.id=m.room)
from message m where author=$1 group by room order by last_created desc limit 10
```
This makes 3 almost identical subqueries. This looks very dirty. I could reverse it to do only 2 subqueries on message columns, but it wouldn't be much better.
**2)** selecting on both tables and using aggregate functions for all columns :
```
select room.id, count(*) number, max(created) last_created,
max(name) as name, max(description) as description, bool_or(private) as private
from message, room
where message.room=room.id and author=$1
group by room.id order by last_created desc limit 10
```
All those aggregate functions look messy and useless.
Is there a clean solution here?
It looks like a general problem to me. Theoretically, those aggregate functions are useless as, by construct, all the joined rows are the same row. I'd like to know if there's a general solution. | Try performing the grouping in a subquery:
```
select m.id, m.number, m.last_created, r.name, r.description, r.private
from (
select m.room as id, count(*) number, max(created) last_created
from message m
where author=$1
group by room
) m
join room r
on r.id = m.id
order by m.last_created desc limit 10
```
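The shape above — aggregate once in a derived table, then join for the lookup columns — can be sketched end-to-end with SQLite via Python (schema simplified; no `private`/`description` columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE room (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE message (id INTEGER PRIMARY KEY, room INTEGER,
                          author INTEGER, created INTEGER);
    INSERT INTO room VALUES (1, 'Miaou'), (2, 'Dragons & co');
    INSERT INTO message (room, author, created) VALUES
        (1, 7, 100), (1, 7, 200), (2, 7, 300), (2, 9, 999);
""")

# Aggregate once in a derived table, then join to pick up the room columns.
rows = con.execute("""
    SELECT r.name, m.number, m.last_created
    FROM (SELECT room AS id, COUNT(*) AS number, MAX(created) AS last_created
          FROM message WHERE author = ? GROUP BY room) m
    JOIN room r ON r.id = m.id
    ORDER BY m.last_created DESC
""", (7,)).fetchall()
print(rows)  # [('Dragons & co', 1, 300), ('Miaou', 2, 200)]
```

No per-column subqueries and no dummy aggregates — each room's columns come straight from the join.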
**Edit:** Another option (likely with similar performance) is to move that aggregation into a view, something like:
```
create view MessagesByRoom
as
select m.author, m.room, count(*) number, max(created) last_created,
from message m
group by author, room
```
And then use it like:
```
select m.room, m.number, m.last_created, r.name, r.description, r.private
from MessagesByRoom m
join room r
on r.id = m.room
where m.author = $1
order by m.last_created desc limit 10
``` | Maybe use a join?
```
SELECT
r.id, count(*) number_of_posts,
max(m.created) last_created,
r.name, r.description, r.private
FROM room r
JOIN message m on r.id = m.room
WHERE m.author = $1
GROUP BY r.id
ORDER BY last_created desc
``` | Avoid useless subqueries or aggregations when joining and grouping | [
"",
"sql",
"postgresql",
"postgresql-9.2",
""
] |
I have a big DB. It's about 1 million rows. I need to do something like this:
```
select * from t1 WHERE id1 NOT IN (SELECT id2 FROM t2)
```
But it runs very slowly. I know that I can do it using "JOIN" syntax, but I can't understand how. | Try this way:
```
select *
from t1
left join t2 on t1.id1 = t2.id2
where t2.id2 is null
``` | First of all you should optimize your indexes in both tables, and after that you should use join | How to make SQL query faster? | [
"",
"mysql",
"sql",
"performance",
"cpu-load",
""
] |
I use the following
```
select TotalCredits - TotalDebits as Difference
from
(
select
(select sum(TOTALAMOUNT) from journal where memberid=48 and CREDIT =1) as TotalCredits,
(select SUM(totalamount) from Journal where MEMBERID=48 and DEBIT =1) As TotalDebits
) temp
```
This returns one field with my difference. The problem I am encountering is that if the table has no credits but has debits, the temp table contains a NULL value in the TotalCredits field, which prohibits the math being done (and vice versa when there are credits but no debits). I have tried COALESCE but can't seem to make it work.
Rationally, I need to check if:
```
sum(TOTALAMOUNT) from journal where memberid=48 and CREDIT =1 as TotalCredits is
null then totalcredits = 0 and visa versa
```
sql server 2008 | ```
select ISNULL(TotalCredits,0) - ISNULL(TotalDebits,0) as Difference
from
(
select
(select sum(TOTALAMOUNT) from journal where memberid=48 and CREDIT =1) as TotalCredits,
(select SUM(totalamount) from Journal where MEMBERID=48 and DEBIT =1) As TotalDebits
) temp
``` | Change your query to conditional aggregation and it fixes the problem:
```
select sum(case when credit = 1 then TotalAmount else -TotalAmount end) as Difference
from Journal
where memberid = 48 and (credit = 1 or debit = 1);
```
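The NULL-safety of the conditional-aggregation version is easy to verify with a quick SQLite sketch (debits only, no credits at all):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE journal (memberid INT, credit INT, debit INT, totalamount REAL)")
con.executemany("INSERT INTO journal VALUES (?, ?, ?, ?)",
                [(48, 0, 1, 30.0), (48, 0, 1, 20.0)])   # debits only

# One SUM over signed amounts -- no NULL-minus-number, even with no credits.
diff = con.execute("""
    SELECT SUM(CASE WHEN credit = 1 THEN totalamount ELSE -totalamount END)
    FROM journal WHERE memberid = 48 AND (credit = 1 OR debit = 1)
""").fetchone()[0]
print(diff)  # -50.0
```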
EDIT:
If you have the case where credit and debit could both be 1, then use:
```
select (sum(case when credit = 1 then TotalAmount else 0 end) -
sum(case when debit = 1 then TotalAmount else 0 end)
) as Difference
from Journal
where memberid = 48 and (credit = 1 or debit = 1);
``` | Subtracting Two Column With Null | [
"",
"sql",
"sql-server",
""
] |
As the title already explains, I'm struggling with the `WHERE` clause in my SQL stored procedure.
I have a `SELECT` query which joins multiple tables well, and at the end it has a `WHERE` clause that gives specific values to search for.
My problem is that I want to expand this stored procedure for 2 different `WHERE` clauses, but I can't get my `IF ELSE` correct for the query to parse.
For example:
```
SELECT .......
FROM TABLE_X
INNER JOIN TABLE_Y.....
WHERE
man.Klant_ID = @Klant
AND (@ManID = 0 OR man.ID = @ManID)
AND .... (which continues like the rule above)
```
Here I want to get something like this:
```
SELECT .......
FROM TABLE_X
INNER JOIN TABLE_Y.....
IF @TEMPVAR = ''
WHERE man.Klant_ID=@Klant
AND (@ManID = 0 OR man.ID = @ManID)
AND...
ELSE
WHERE TABLE_X.ID IN (@TEMPVAR)
```
(and `@tempvar` should contain comma separated id's like `10001,10002,10003`)
I'm struggling with the syntax and searched for some while but can't seem to find a right solution.
Thanks in advance! | You can do that with a CASE:
```
SELECT ....... FROM TABLE_X
INNER JOIN TABLE_Y.....
WHERE TABLE_X.FIRSTCOLUMN =
CASE WHEN @TEMPVAR = '' THEN 123
ELSE 456
END
```
or directly in the `WHERE` clause with an `OR`:
```
WHERE (@TEMPVAR = ''
AND man.Klant_ID=@Klant
AND (@ManID = 0 OR man.ID = @ManID)
AND...
)
OR
(@TEMPVAR <> ''
AND TABLE_X.ID IN (@TEMPVAR)
)
``` | You can do this with logic directly in the `where`:
```
SELECT ....... FROM TABLE_X
INNER JOIN TABLE_Y.....
WHERE TABLE_X.FIRSTCOLUMN = 123 and @TEMPVAR = '' or
TABLE_X.FIRSTCOLUMN = 456 and @TEMPVAR <> ''
```
If `TEMPVAR` can be `NULL`, then:
```
SELECT ....... FROM TABLE_X
INNER JOIN TABLE_Y.....
WHERE TABLE_X.FIRSTCOLUMN = 123 and @TEMPVAR = '' or
TABLE_X.FIRSTCOLUMN = 456 and (@TEMPVAR <> '' or @TEMPVAR is null)
``` | SQL Server : Where clause inside IF .. ELSE | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a Database with two tables:
```
subscriber_mm
uid_local | uid_foreign | uid_partner
7 |2 |0
7 |4 |0
2 |1 |0
2 |2 |0
5 |1 |0
5 |3 |0
partner_mm
uid_local | uid_foreign | uid_partner
7 |1 |1
```
My goal is to count the total number of rows by uid\_local from both tables
example:
```
count both tables by uid_local = 7
```
result: 3
example:
```
count both tables by uid_local = 2
```
result: 2
This is my solution (not the best) without the WHERE statement
```
SELECT sum(
ROWS ) AS total_rows
FROM (
SELECT count( * ) AS ROWS
FROM partner_mm
UNION ALL
SELECT count( * ) AS ROWS
FROM subscriber_mm
) AS u
```
How can I implement the WHERE statement? | Pass your value in place of the hard-coded 2 in this query:
```
select sum(total_count ) as total_count from
(select count(*) as total_count from subscriber_mm s where s.uid_local=2
union all
select count(*) as total_count from partner_mm m where m.uid_local=2) as a
```
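That count-then-sum shape can be sanity-checked with SQLite via Python (table contents taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE subscriber_mm (uid_local INT, uid_foreign INT, uid_partner INT);
    CREATE TABLE partner_mm (uid_local INT, uid_foreign INT, uid_partner INT);
    INSERT INTO subscriber_mm VALUES (7,2,0),(7,4,0),(2,1,0),(2,2,0),(5,1,0),(5,3,0);
    INSERT INTO partner_mm VALUES (7,1,1);
""")

def total_rows(uid):
    # Count each table separately, then add the two counts together.
    return con.execute("""
        SELECT SUM(c) FROM (
            SELECT COUNT(*) AS c FROM subscriber_mm WHERE uid_local = ?
            UNION ALL
            SELECT COUNT(*) FROM partner_mm WHERE uid_local = ?
        ) AS t
    """, (uid, uid)).fetchone()[0]

print(total_rows(7), total_rows(2))  # 3 2
```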
or
```
select a.uid_local,sum(total_count ) as total_count from
(select s.uid_local as uid_local, count(*) as total_count from subscriber_mm s group by s.uid_local
union all
select m.uid_local as uid_local, count(*) as total_count from partner_mm m group by m.uid_local) as a
group by a.uid_local
``` | No need to do a SUM, since you only need to count the total number of rows returned from both tables for a particular `uid_local`. The total rows can be obtained by using the `UNION ALL` operator, which combines the result sets returned from both tables **WITHOUT** removing repeated records:
```
Select count(*) as Result From
(
Select * from subscriber_mm
where uid_local=7
union all
Select * from partner_mm
where uid_local=7
)as tmp
``` | Sum rows of two tables | [
"",
"sql",
"sum",
""
] |
I've been using Entity Framework 6 code first to do some simple CRUD operations on my domain model and it's performed admirably so far.
I've now come across a situation where I'm performing a reasonably complicated query which involves filtering and paging of the results. The query generated by EF 6 is pretty bad and is horribly inefficient when compared to something I can hand-crank myself. Here's the generated SQL, which executes in ~17 seconds:
```
SELECT TOP ( 15 )
[Project1].[Branch] AS [Branch] ,
[Project1].[Salesman] AS [Salesman] ,
[Project1].[Status] AS [Status] ,
[Project1].[OrderID] AS [OrderID] ,
[Project1].[DateCreated] AS [DateCreated] ,
[Project1].[DateCompleted] AS [DateCompleted] ,
[Project1].[RegNumber] AS [RegNumber] ,
[Project1].[Make] AS [Make] ,
[Project1].[Model] AS [Model] ,
[Project1].[Spec] AS [Spec] ,
[Project1].[Title] AS [Title] ,
[Project1].[Firstname] AS [Firstname] ,
[Project1].[Surname] AS [Surname] ,
[Project1].[Address1] AS [Address1] ,
[Project1].[Address2] AS [Address2] ,
[Project1].[Address3] AS [Address3] ,
[Project1].[Town] AS [Town] ,
[Project1].[County] AS [County] ,
[Project1].[Postcode] AS [Postcode] ,
[Project1].[HomePhone] AS [HomePhone] ,
[Project1].[WorkPhone] AS [WorkPhone] ,
[Project1].[MobilePhone] AS [MobilePhone] ,
[Project1].[EMailAddress] AS [EMailAddress] ,
[Project1].[AllowMarketing] AS [AllowMarketing] ,
[Project1].[Manager] AS [Manager] ,
[Project1].[FK_BranchID] AS [FK_BranchID]
FROM ( SELECT [Project1].[Branch] AS [Branch] ,
[Project1].[Salesman] AS [Salesman] ,
[Project1].[Status] AS [Status] ,
[Project1].[OrderID] AS [OrderID] ,
[Project1].[DateCreated] AS [DateCreated] ,
[Project1].[DateCompleted] AS [DateCompleted] ,
[Project1].[RegNumber] AS [RegNumber] ,
[Project1].[Make] AS [Make] ,
[Project1].[Model] AS [Model] ,
[Project1].[Spec] AS [Spec] ,
[Project1].[Title] AS [Title] ,
[Project1].[Firstname] AS [Firstname] ,
[Project1].[Surname] AS [Surname] ,
[Project1].[Address1] AS [Address1] ,
[Project1].[Address2] AS [Address2] ,
[Project1].[Address3] AS [Address3] ,
[Project1].[Town] AS [Town] ,
[Project1].[County] AS [County] ,
[Project1].[Postcode] AS [Postcode] ,
[Project1].[HomePhone] AS [HomePhone] ,
[Project1].[WorkPhone] AS [WorkPhone] ,
[Project1].[MobilePhone] AS [MobilePhone] ,
[Project1].[EMailAddress] AS [EMailAddress] ,
[Project1].[AllowMarketing] AS [AllowMarketing] ,
[Project1].[Manager] AS [Manager] ,
[Project1].[FK_BranchID] AS [FK_BranchID] ,
ROW_NUMBER() OVER ( ORDER BY [Project1].[DateCreated] DESC ) AS [row_number]
FROM ( SELECT [Extent1].[Branch] AS [Branch] ,
[Extent1].[Salesman] AS [Salesman] ,
[Extent1].[Status] AS [Status] ,
[Extent1].[OrderID] AS [OrderID] ,
[Extent1].[DateCreated] AS [DateCreated] ,
[Extent1].[DateCompleted] AS [DateCompleted] ,
[Extent1].[RegNumber] AS [RegNumber] ,
[Extent1].[Make] AS [Make] ,
[Extent1].[Model] AS [Model] ,
[Extent1].[Spec] AS [Spec] ,
[Extent1].[Title] AS [Title] ,
[Extent1].[Firstname] AS [Firstname] ,
[Extent1].[Surname] AS [Surname] ,
[Extent1].[Address1] AS [Address1] ,
[Extent1].[Address2] AS [Address2] ,
[Extent1].[Address3] AS [Address3] ,
[Extent1].[Town] AS [Town] ,
[Extent1].[County] AS [County] ,
[Extent1].[Postcode] AS [Postcode] ,
[Extent1].[HomePhone] AS [HomePhone] ,
[Extent1].[WorkPhone] AS [WorkPhone] ,
[Extent1].[MobilePhone] AS [MobilePhone] ,
[Extent1].[EMailAddress] AS [EMailAddress] ,
[Extent1].[AllowMarketing] AS [AllowMarketing] ,
[Extent1].[Manager] AS [Manager] ,
[Extent1].[FK_BranchID] AS [FK_BranchID]
FROM ( SELECT [vw_CS_OrderDetails].[Branch] AS [Branch] ,
[vw_CS_OrderDetails].[Salesman] AS [Salesman] ,
[vw_CS_OrderDetails].[Status] AS [Status] ,
[vw_CS_OrderDetails].[OrderID] AS [OrderID] ,
[vw_CS_OrderDetails].[DateCreated] AS [DateCreated] ,
[vw_CS_OrderDetails].[DateCompleted] AS [DateCompleted] ,
[vw_CS_OrderDetails].[RegNumber] AS [RegNumber] ,
[vw_CS_OrderDetails].[Make] AS [Make] ,
[vw_CS_OrderDetails].[Model] AS [Model] ,
[vw_CS_OrderDetails].[Spec] AS [Spec] ,
[vw_CS_OrderDetails].[Title] AS [Title] ,
[vw_CS_OrderDetails].[Firstname] AS [Firstname] ,
[vw_CS_OrderDetails].[Surname] AS [Surname] ,
[vw_CS_OrderDetails].[Address1] AS [Address1] ,
[vw_CS_OrderDetails].[Address2] AS [Address2] ,
[vw_CS_OrderDetails].[Address3] AS [Address3] ,
[vw_CS_OrderDetails].[Town] AS [Town] ,
[vw_CS_OrderDetails].[County] AS [County] ,
[vw_CS_OrderDetails].[Postcode] AS [Postcode] ,
[vw_CS_OrderDetails].[HomePhone] AS [HomePhone] ,
[vw_CS_OrderDetails].[WorkPhone] AS [WorkPhone] ,
[vw_CS_OrderDetails].[MobilePhone] AS [MobilePhone] ,
[vw_CS_OrderDetails].[EMailAddress] AS [EMailAddress] ,
[vw_CS_OrderDetails].[AllowMarketing] AS [AllowMarketing] ,
[vw_CS_OrderDetails].[Manager] AS [Manager] ,
[vw_CS_OrderDetails].[FK_BranchID] AS [FK_BranchID]
FROM [dbo].[vw_CS_OrderDetails] AS [vw_CS_OrderDetails]
) AS [Extent1]
WHERE UPPER([Extent1].[RegNumber]) LIKE '%SD59BBO%'
ESCAPE N'~'
) AS [Project1]
) AS [Project1]
WHERE [Project1].[row_number] > 0
ORDER BY [Project1].[DateCreated] DESC
```
The hand-cranked version of this is way smaller and completes in less than a second.
Given the horrible inefficiency of the first query, is there any way I can influence EF 6 in the query it creates?
I may have to resort to a stored procedure, are there any good patterns out there for integrating EF code first with stored procedures?
**Edit**: As per the request from Wahid Bitar, here is the LINQ I use to create the above SQL.
```
var query = _dbSet
.Where(o => o.RegNumber.ToUpper().Contains(searchTerm))
.OrderByDescending(c => c.DateCreated)
.Skip(skip)
.Take(pageSize);
```
In truth it's a little spread out among some helper methods, but that's mainly it. `ToList()` is eventually called on `query`, enumerating the result set.
**Edit**: As requested the hand-cranked SQL:
```
SELECT *
FROM
(
SELECT ROW_NUMBER() OVER ( ORDER BY DateCreated DESC ) AS RowNum
,[Branch]
,[Salesman]
,[Status]
,[OrderID]
,[DateCreated]
,[DateCompleted]
,[RegNumber]
,[Make]
,[Model]
,[Spec]
,[Title]
,[Firstname]
,[Surname]
,[Address1]
,[Address2]
,[Address3]
,[Town]
,[County]
,[Postcode]
,[HomePhone]
,[WorkPhone]
,[MobilePhone]
,[EMailAddress]
,[AllowMarketing]
,[Manager]
,[FK_BranchID]
FROM [SalesmanOffice2].[dbo].[vw_CS_OrderDetails]
WHERE RegNumber LIKE '%SD59BBO%'
) AS NumberedRows
WHERE NumberedRows.RowNum BETWEEN 1 AND 15 ORDER BY RowNum
``` | As @Jim Wooley highlighted, I do not have a full text index on the columns that I am attempting to do partial matches on. Given that the view is very large this means that every row is being compared without the aid of an index. I have not yet tested these queries on a view with text indexing enabled so cannot say for certain, but I have tested that when the search term exactly matches the column and I'm using an equality operator, e.g.:
```
WHERE RegNumber = 'SD59BBO'
```
With all other things being the same, the query executes in far more desirable times. | I noticed in your EF query you call `ToUpper`, however you are not doing this in your manual query. If `RegNumber` is an indexed field, passing it to SQL's `UPPER` function before doing your comparison will make it unable to use that index. This is probably the reason the performance is worse.
You probably have the `ToUpper` in there because .Net's String comparisons are case sensitive. However, when performing an EF query, it just translates that to the SQL comparison. So since SQL is not case sensitive (by default), EF is not case sensitive.
Take out that call to `ToUpper` and see if that improves performance.
**Edit**
It also looks like EF is treating `SD57WBO` as unicode. This can throw off your indexes as well.
What is the datatype of `RegNumber`? If it is `varchar`/`char` and not `nvarchar`/`nchar`, you may need to specify the column is not unicode. In EF Code First, Fluent API, you need use the command [`IsUnicode(false)`](http://msdn.microsoft.com/en-us/library/gg696416%28v=vs.113%29.aspx) on your property. | Options to replace slow SQL generated by Entity Framework 6 | [
"",
"sql",
"sql-server",
"performance",
"entity-framework",
""
] |
Is it possible to **"nest"** qualify statements in Teradata?
I have some data that looks like this:
```
event_id = 1:
user_id action_timestamp
971,134,265 17mar2010 20:16:56
739,071,748 17mar2010 22:19:59
919,853,934 18mar2010 15:47:49
919,853,934 18mar2010 15:55:21
919,853,934 18mar2010 16:01:20
919,853,934 18mar2010 16:01:48
919,853,934 18mar2010 16:04:52
472,665,603 20mar2010 18:23:58
472,665,603 20mar2010 18:24:07
472,665,603 20mar2010 18:24:26
....
event_id = 2:
971,134,265 17mar2069 20:16:56
739,071,748 17mar2069 22:19:59
919,853,934 18mar2069 15:47:49
919,853,934 18mar2069 15:55:21
919,853,934 18mar2069 16:01:20
919,853,934 18mar2069 16:01:48
919,853,934 18mar2069 16:04:52
472,665,603 20mar2069 18:23:58
472,665,603 20mar2069 18:24:07
472,665,603 20mar2069 18:24:26
```
For user 919,853,934, I would like to grab the "18mar2010 16:04:52" action (the last one in the first cluster of events).
I tried this, which does not grab the right date:
```
SELECT action_timestamp
,user_id
,event_id
FROM table
WHERE ...
QUALIFY (
MAX(action_timestamp) OVER (PARTITION BY user_id, event_id) = action_timestamp
AND MIN(action_timestamp) OVER (PARTITION BY user_id) = action_timestamp
)
```
This actually makes sense since the MAX and MIN apply to the whole data, rather than sequentially.
I also tried 2 separate qualify statements to get the MIN() part to apply to the subset of the data created by the MAX() part, but that errors. | This seems to accomplish what I want:
```
SELECT *
FROM
(SELECT *
FROM table
WHERE ...
QUALIFY (MAX(action_date) OVER (PARTITION BY user_id, event_id) = action_date)
) AS a
QUALIFY (
MIN(a.action_date) OVER (PARTITION BY a.user_id) = a.action_date
)
``` | How is this query failing?
Of course you can use multiple conditions in a QUALIFY, your query is syntactically correct.
But its logic will not return a row for the given data :-)
You probably need to rewrite it, maybe
```
QUALIFY (
RANK() OVER (PARTITION BY user_id ORDER BY event_id) = 1
)
``` | Nesting qualify statements in Teradata | [
"",
"sql",
"teradata",
""
] |
The query which I have written:
```
SELECT Date_current,
COUNT(*) as'Total'
FROM Call_Register
WHERE (DATEDIFF(dd,'02/1/2014',Date_current) >=0)
AND (DATEDIFF(dd,'02/12/2014',Date_current) <=0)
GROUP BY Date_current
HAVING COUNT(*) >= (convert(int,'02/12/2014'))
ORDER BY Date_current
```
But this query gives me an error:
```
Conversion failed when converting the varchar value '02/12/2014' to data type int.
```
```
Date Total
---------- -------
Feb 3 2014 2:58PM 10
Feb 4 2014 2:59PM 10
```
Please help me. I want to get output like this:
Date Total
---------- -------
Feb 3 2014 2:58PM 1
Feb 3 2014 2:59PM 1
Feb 3 2014 3:00PM 1
Feb 3 2014 3:08PM 1
Feb 3 2014 3:20PM 1
Feb 3 2014 4:05PM 1
Feb 3 2014 4:17PM 1
Feb 3 2014 4:19PM 1
Feb 3 2014 4:21PM 1
Feb 3 2014 4:24PM 1
Feb 4 2014 1:11PM 1
Feb 4 2014 2:35PM 1
Feb 4 2014 2:37PM 1
Feb 4 2014 5:19PM 1 | Firstly, you should either use the culture invariant date format `yyyyMMdd`, or explicitly set the date format using `SET DATEFORMAT DMY`, or prepare to get inconsistent results.
Secondly, the following is potentially very inefficient:
```
WHERE (DATEDIFF(dd,'02/1/2014',Date_current) >=0)
AND (DATEDIFF(dd,'02/12/2014',Date_current) <=0)
```
If you have an index on `Date_Current` it will not be used because you are performing a function on it. You should instead use:
```
WHERE Date_Current >= '20140102'
AND Date_Current <= '20141202'
```
You then have a [sargable](http://en.wikipedia.org/wiki/Sargable) query. I have had to guess at whether '02/1/2014' meant 1st February 2014, or 2nd January 2014 as it is not clear (hence the importance of my first point).
Finally (this part has already been answered but including it for completeness as I couldn't not point out the first two errors) you cannot convert to int here:
```
convert(int,'02/12/2014')
```
You presumably need to convert to date time first:
```
CONVERT(INT, CONVERT(DATETIME, '20141202'))
```
Although I suspect this is still not what you want; you are just filtering the days down to those that have 41973 records or more, which seems like a fairly arbitrary filter. | You need to cast your string to a datetime first; only then can you proceed with the int CAST:
```
CAST('02/12/2014' AS Datetime)
```
Try this
```
SELECT Date_current,COUNT(*) AS 'Total'
From Call_Register
WHERE (DATEDIFF(dd,'02/1/2014',Date_current)>=0) AND
(DATEDIFF(dd,'02/12/2014',Date_current)<=0)
Group By Date_current
having COUNT(*) >= convert(int, CAST('02/12/2014' AS Datetime)) order By Date_current
``` | I want output of sql Query and i already done some query part but it fails | [
"",
"sql",
"sql-server-2008",
""
] |
I created the procedure listed below:
```
CREATE procedure getdata
(
@ID int,
@frm varchar(250),
@to varchar(250)
)
AS
BEGIN
DECLARE @SQL nvarchar(500)
set @SQL = 'select'
set @SQL = @SQL + ' EmpName, Address, Salary from Emp_Tb where 1=1 '
IF (@ID <> '' and @ID is not null)
Begin
SET @sql=@sql+' AND Emp_Id_Pk=' +@ID
End
END
print @sql
--execute (@sql)
```
I try to execute it using:
```
execute getdata 3, '', '';
```
But I'm getting the following error:
> Conversion failed when converting the nvarchar value 'select EmpName,
> Address, Salary from Emp\_Tb where 1=1 AND Emp\_Id\_Pk=' to data type int
Please help. | You are trying to concatenate a string and an integer.
You need to [cast](http://msdn.microsoft.com/en-us/library/ms187928.aspx) `@ID` as a string.
try:
```
SET @sql=@sql+' AND Emp_Id_Pk=' + CAST(@ID AS NVARCHAR(10))
``` | Try Using
```
CONVERT(nvarchar(10),@ID)
```
This is similar to CAST but is less expensive (in terms of time consumed). | Conversion failed when converting the nvarchar value ... to data type int | [
"",
"sql",
""
] |
I tried to change precision like this:
```
ALTER Table account_invoice ALTER amount_total SET NUMERIC(5);
```
But I get a syntax error, so I'm clearly doing something wrong. What is the right syntax to change the precision of a numeric in PostgreSQL? | Try this:
```
ALTER Table account_invoice ALTER COLUMN amount_total TYPE DECIMAL(10,5);
```
`DECIMAL(X, Y)` -> X is the precision (the total number of digits) and Y is the scale (digits after the decimal point). | You can use this:
```
ALTER Table account_invoice ALTER amount_total SET DATA TYPE NUMERIC(5,Y);
```
where Y is your required scale (digits after the decimal point). | PostgreSQL - change precision of numeric? | [
"",
"sql",
"postgresql",
"postgresql-9.2",
"alter-table",
""
] |
I need some help.
Let's say I have a table:
```
ID Mark Transmition
1 Ford A
2 Ford A
3 Ford M
4 BMW M
5 BMW M
6 Ford A
```
And now I need to do a CASE WHEN.
```
CASE WHEN mark = 'Ford' then 'Ford'
WHEN mark = 'Ford' and Transmition = 'A' then ' including Fords with automatic transmitions'
```
And I have to do this using case when, not case when exists. Since I need to use it in an OBIEE report.
The result i need is something like this:
```
Mark Count
Ford 4
inc Ford with automatic transmition 3
```
But the results evaluate to TRUE in both of the cases...
Looking forward to hearing from you. | Query:
```
SELECT CASE WHEN mark = 'Ford' THEN 'Ford' END AS Mark,
COUNT(*)
FROM Table1 t
WHERE mark = 'Ford'
GROUP BY mark
UNION ALL
SELECT CASE WHEN mark = 'Ford' AND Transmition = 'A'
THEN 'including Fords with automatic transmitions' END AS Mark,
COUNT(*)
FROM Table1 t
WHERE mark = 'Ford'
AND Transmition = 'A'
GROUP BY CASE WHEN mark = 'Ford' AND Transmition = 'A'
THEN 'including Fords with automatic transmitions' END
```
Result:
```
| MARK | COUNT(*) |
|---------------------------------------------|----------|
| Ford | 4 |
| including Fords with automatic transmitions | 3 |
``` | You can do it without `CASE` like this :
```
SELECT Mark, count(1) FROM car GROUP By Mark
UNION
SELECT 'Including '||Mark||' with automatic transmitions' as MM, count(1) FROM car WHERE Transmition = 'A' GROUP By MM;
```
Result:
```
| MARK | COUNT(*) |
|---------------------------------------------|----------|
| BMW | 2 |
| Ford | 4 |
| including Fords with automatic transmitions | 3 |
``` | SQL CASE WHEN, when i want an "including" row | [
"",
"sql",
"case",
"obiee",
""
] |
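The two-branch idea from the accepted answer above can be verified end to end with SQLite via Python's `sqlite3` module (a simplified sketch of the same `UNION ALL` shape; table and column names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cars (mark TEXT, transmission TEXT);
INSERT INTO cars VALUES ('Ford','A'),('Ford','A'),('Ford','M'),
                        ('BMW','M'),('BMW','M'),('Ford','A');
""")
# One branch counts all Fords, the other only the automatic ones.
rows = conn.execute("""
    SELECT 'Ford' AS mark, COUNT(*) AS cnt
    FROM cars WHERE mark = 'Ford'
    UNION ALL
    SELECT 'inc Ford with automatic transmission', COUNT(*)
    FROM cars WHERE mark = 'Ford' AND transmission = 'A'
""").fetchall()
print(rows)
```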
The table is a big table, millions of records.
Column X represents an item, e.g. table, chair, etc.
Column Y has values which I would like to sum up.
Column Z flags whether the row's value should be included in the sum.
```
Table 1
ID Column X Column Y Column Z
1 X1 5 1
2 x1 4 1
3 x1 Null 0
4 x1 5 1
5 x1 Null 0
6 x2 5 1
7 x2 5 1
8 x2 Null 0
9 x3 2 1
10 x3 Null 0
11 x3 2 1
12 x4 Null 0
13 x4 Null 0
14 x5 Null 0
... ...
the list goes on
Wanted Result
Table 1
ID Column X Column YY Column Z
1 X1 14 1
2 x1 14 1
3 x1 14 0
4 x1 14 1
5 x1 14 0
6 x2 10 1
7 x2 10 1
8 x2 10 0
9 x3 4 1
10 x3 4 0
11 x3 4 1
12 x4 0 0
13 x4 0 0
14 x5 0 0
```
I would require a select statement to get the intended result. | Try This
```
Select ID,X,Sum(Y)Over(Partition By X) AS YY,Z
From Table
``` | Try:
```
SELECT
T1.ID,
T1.X,
(
SELECT
SUM(T2.Y*T2.Z)
FROM
Table AS T2
WHERE
T2.ID = T1.ID
) AS YY,
T1.Z
FROM Table AS T1
``` | Summing record in table based on key in row | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
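The accepted window-function answer above can be checked with SQLite (3.25 or newer, as bundled with recent Python builds) through the `sqlite3` module; note that `SUM` simply ignores the NULL values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, x TEXT, y INTEGER, z INTEGER);
INSERT INTO t VALUES
    (1,'x1',5,1),(2,'x1',4,1),(3,'x1',NULL,0),(4,'x1',5,1),(5,'x1',NULL,0),
    (6,'x2',5,1),(7,'x2',5,1),(8,'x2',NULL,0),
    (9,'x3',2,1),(10,'x3',NULL,0),(11,'x3',2,1);
""")
# Every row receives the total of its own group, without
# collapsing the rows the way a plain GROUP BY would.
rows = conn.execute(
    "SELECT id, x, SUM(y) OVER (PARTITION BY x) AS yy FROM t ORDER BY id"
).fetchall()
print(rows)
```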
I am writing a shell script to scp a project and part of it involves transferring some important tables in my database.
I have the following stored in the file `SQLTableTransfer`:
```
.o MyTable1.sql;
.dump MyTable1;
.o MyTable2.sql;
.dump MyTable2;
.o MyTable3.sql;
.dump MyTable3;
```
And took a chance on
```
$ sqlite3 SQLTableTransfer
```
But this just opened the Sqlite3 shell. What is the correct way to run a script like this from the command line? | The parameter you give to the `sqlite3` program is the database file name.
To execute commands from a file, redirect the program's input from that file:
```
$ sqlite3 mydatabase.db < SQLTableTransfer
```
or tell it to read from that file:
```
$ sqlite3 mydatabase.db ".read SQLTableTransfer"
``` | For Windows CLI, assuming your database is loaded:
```
sqlite> .read C:\\somesubdir\\some.sql
``` | Running a Sqlite3 Script from Command Line | [
"",
"sql",
"bash",
"sqlite",
"shell",
""
] |
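If the interactive shell is not a requirement, the same run-a-batch-of-statements idea can be driven from Python's built-in `sqlite3` module with `executescript` (a sketch; the table is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # or a real .db file path
# executescript runs a whole ;-separated SQL script at once,
# much like feeding a script file to the sqlite3 shell.
conn.executescript("""
CREATE TABLE my_table (a INTEGER);
INSERT INTO my_table VALUES (1),(2),(3);
""")
count = conn.execute("SELECT COUNT(*) FROM my_table").fetchone()[0]
print(count)  # 3
```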
This request is extracted from a complex one.
In this example I have two data tables: user\_table, ref, and one association table: user\_ref\_asso.
The schema, and test query are here : <http://sqlfiddle.com/#!4/0a302/18>
I try to limit the number of USER\_TABLE results using "where rownum < X" but it limits the total results (user + ref).
My current query is:
```
select * from
(SELECT u.user_id, r.ref_id, u.name, r.ref
FROM user_table u
INNER JOIN user_ref_asso ur
ON ur.user_id = u.user_id
INNER JOIN REF r
ON r.ref_id = ur.ref_id
order by u.user_id, r.ref_id)
WHERE rownum <= 2;
```
For example, if the result without row limits is:
```
USER REF
1 1
1 2
2 1
2 2
3 1
3 2
```
If I set the row number limit to 2, the expected result would be (2 distinct users):
```
USER REF
1 1
1 2
2 1
2 2
```
But in my case the result is (2 results):
```
USER REF
1 1
1 2
```
How to limit row numbers on distinct user\_id column ? | Use an analytic function to achieve this:
```
select user_id, ref_id, name, ref from
(SELECT u.user_id, r.ref_id, u.name, r.ref, dense_rank() over (order by u.user_id) rn
FROM user_table u
INNER JOIN user_ref_asso ur
ON ur.user_id = u.user_id
INNER JOIN REF r
ON r.ref_id = ur.ref_id
order by u.user_id, r.ref_id)
WHERE rn <= 2;
```
Output:
```
| USER_ID | REF_ID | NAME | REF | RN |
|---------|--------|-------|------|----|
| 1 | 1 | Name1 | Ref1 | 1 |
| 1 | 2 | Name1 | Ref2 | 1 |
| 2 | 1 | Name2 | Ref1 | 2 |
| 2 | 2 | Name2 | Ref2 | 2 |
```
[sql Fiddle](http://sqlfiddle.com/#!4/0a302/27) | The simplest solution, however not nice:) is:
```
select * from
(SELECT u.user_id, r.ref_id, u.name, r.ref
FROM (select * from user_table WHERE rownum <= 2) u
INNER JOIN user_ref_asso ur
ON ur.user_id = u.user_id
INNER JOIN REF r
ON r.ref_id = ur.ref_id
order by u.user_id, r.ref_id);
```
or in case of Oracle you may use some analytical query to calculate the number of returned users and limit it using that...
**/*edit*/** Here is the query using analytical functions. Had to try it as I was not sure whether it's RANK or DENSE\_RANK:)
```
select * from
(SELECT u.user_id, r.ref_id, u.name, r.ref, dense_rank() over (order by u.user_id) cnt
FROM user_table u
INNER JOIN user_ref_asso ur
ON ur.user_id = u.user_id
INNER JOIN REF r
ON r.ref_id = ur.ref_id
order by u.user_id, r.ref_id)
WHERE cnt <= 2;
``` | Oracle SQL: Limit row numbers in a query with association table | [
"",
"sql",
"oracle",
"rownum",
""
] |
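The `DENSE_RANK` trick from the accepted answer above can be reproduced with SQLite (3.25+) from Python's `sqlite3` module, reduced to just the association pairs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_ref (user_id INTEGER, ref_id INTEGER);
INSERT INTO user_ref VALUES (1,1),(1,2),(2,1),(2,2),(3,1),(3,2);
""")
# DENSE_RANK gives every row of the same user the same rank,
# so "rn <= 2" keeps all rows of the first two users.
rows = conn.execute("""
    SELECT user_id, ref_id FROM (
        SELECT user_id, ref_id,
               DENSE_RANK() OVER (ORDER BY user_id) AS rn
        FROM user_ref)
    WHERE rn <= 2
    ORDER BY user_id, ref_id
""").fetchall()
print(rows)  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```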
```
select * from table (Return 10 columns)
```
or
```
select fild_a, fild_b from table
```
Does that make any difference, talking about performance? | The difference in performance Marcelo refers to is afaik true, but minuscule at best.
More to the point, using asterisks results in an unreliable result set, since any changes to the data model will be reflected in the result of the query, which may potentially break any existing code that's using that query.
Also, there's the question of indexing. If **fild\_a** and **fild\_b** are indexed, the latter query will be MUCH faster than the former, since the former effectively forces a full table scan.
Other than that, it depends on the situation: whether your selection is a part of a larger query, etc. Naturally fetching all columns and their data, and transferring it over to the application requires more resources than just getting the data you need. But as part of a sub-query, the query execution planner may often realize you're not actually returning all the values. But it will still fail to use the proper indexes unless your query matches their criteria. | The first query (`select fild_a, fild_b from table`) is faster than
the second (`select * from table`, returning 10 columns) because
there are two basic rules of querying a database table:
1. Query only the rows that you need.
2. Query only the columns that you need.
By doing "SELECT \* FROM table ", you are retrieving all the rows and all the columns all the time. Is that really a necessity? If you are doing this on a "very big" table (as you say) it is bound to take time. I suggest you should revisit the design of querying the entire table every time.
<http://www.quest.com/whitepapers/10_SQL_Tips.pdf> | SQL Performance on Select | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
here is my situation:
I have 3 tables:
```
A: (A_id, Name)
B: (B_id, A_id, Name)
C: (C_id, B_id, State)
```
What i want is to have the following resultset:
```
A.A_id,A.Name, C.State
```
The complication is that I need State to have a default value when there is no B data to link to.
In that case, I want
```
A.A_id, A.Name, 'Default_Value'
```
I don't know much advanced SQL, so any pointers are greatly appreciated. | ```
select
coalesce(c.State, 'default value')
from
a
left join b on a.id = b.A_id
left join c on b.B_id = c.B_id
```
* the best visual explanation of joins I've ever seen: [A Visual Explanation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
* `COALESCE()` returns the first of its parameters which isn't `NULL` | ```
SELECT A.A_id, A.Name, COALESCE(C.State, 'Default_Value')
FROM
A LEFT JOIN
(B INNER JOIN C ON C.B_id = B.B_id)
ON B.A_id = A.A_id
```
Some information on joins: [What is the difference between "INNER JOIN" and "OUTER JOIN"?](https://stackoverflow.com/questions/38549/difference-between-inner-and-outer-join)
What's happening here is that we are joining table B and C with an `INNER JOIN` where the respective `B_id` column is equal. The `INNER` specifies that results will be returned only when records exist in both tables that match the `C.B_id = B.B_id` condition.
The `LEFT JOIN` will join those combined values to table A if the matching condition exists, while still returning the records from table A if no match exists. That is, if nothing exists for the condition `B.A_id = A.A_id`, `NULL` values are returned for the columns from the right side of the join (the B and C join). We perform the `COALESCE`, so that if the queried column returns with `NULL`, it can default to some specified value.
`COALESCE` has some added benefits when performing this function: <http://msdn.microsoft.com/en-us/library/ms190349.aspx>
One last thing, table B in your example is commonly known as a junction table (or join table, or bridge table)... <http://en.wikipedia.org/wiki/Junction_table> | How to get values from tables A and C, joined by table B with default values from C when C has no key from A | [
"",
"sql",
""
] |
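A quick way to see the accepted LEFT JOIN plus `COALESCE` fallback working is SQLite via Python's `sqlite3` module (same three-table shape, made-up rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (a_id INTEGER, name TEXT);
CREATE TABLE b (b_id INTEGER, a_id INTEGER, name TEXT);
CREATE TABLE c (c_id INTEGER, b_id INTEGER, state TEXT);
INSERT INTO a VALUES (1,'one'),(2,'two');
INSERT INTO b VALUES (10,1,'b1');      -- no B row for a_id = 2
INSERT INTO c VALUES (100,10,'OK');
""")
# LEFT JOINs keep every A row; COALESCE supplies the default
# where the chain of joins produced NULL.
rows = conn.execute("""
    SELECT a.a_id, a.name, COALESCE(c.state, 'default value') AS state
    FROM a
    LEFT JOIN b ON b.a_id = a.a_id
    LEFT JOIN c ON c.b_id = b.b_id
    ORDER BY a.a_id
""").fetchall()
print(rows)  # [(1, 'one', 'OK'), (2, 'two', 'default value')]
```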
I have some lookup tables that I don't want to do a join for, but I still want the field's value.
TABLES
Table1
```
ID Value LookupID
1 1 1
2 2 2
```
Table2
```
LookupID Result
1 Yes
2 No
```
I'd like to do something like
```
SELECT ID, Value, (if LookupID = 1, yes; if = 2, no)
FROM Table1
```
Rather than doing a join. I've done this before, but I cannot remember what command/syntax I used to achieve this. I'm trying to avoid using a join as it makes the query take significantly longer to run. There are only 3-4 values that need to be put in the string so it would not be difficult to hard code into the query. | Try this:
```
SELECT ID, Value, case when LookupID = 1 then 'yes' else 'no' end as result
FROM Table1
``` | Just adding a little more information here...
The case syntax will work for three or more values, as well.
Also, you can name the resultant column with an "as"
and that name will go after the "end" in the case clause.
Like this:
```
case
when LookupID = 1 then 'Yes'
when LookupID = 2 then 'Maybe'
else 'No'
end as YesNoMaybeColumn1
```
Note that you probably don't want to do any subqueries inside the values for the "then"s because that likely wouldn't perform well. It might get optimized though - you could try it out and see what you get as a result.
The reason your Joins aren't fast to begin with though is probably due to the tables not having convenient indices setup correctly. Lookup table joins like this should normally be blazingly fast if the indices are right. | SQL selecting specific field with defined values | [
"",
"sql",
"syntax",
""
] |
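For completeness, the inline `CASE` from the accepted answer above can be checked with SQLite through Python's `sqlite3` module (column names as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (ID INTEGER, Value INTEGER, LookupID INTEGER);
INSERT INTO Table1 VALUES (1,1,1),(2,2,2);
""")
# The CASE expression replaces a join against the lookup table.
rows = conn.execute("""
    SELECT ID, Value,
           CASE WHEN LookupID = 1 THEN 'yes' ELSE 'no' END AS result
    FROM Table1 ORDER BY ID
""").fetchall()
print(rows)  # [(1, 1, 'yes'), (2, 2, 'no')]
```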
I need help getting my syntax correct in a SQL query: I want to use `Substring` and `Convert`, and in the `Convert` I also convert the date using format 112.
```
where Substring(Convert(varchar(100),Datum,12,16,112)) = '8:00'
```
My code above.
EDIT Explanation
I've written a stored procedure that stores data in one place. This data is later retrieved by another stored procedure to produce statistics. That procedure is executed from an ERP system when the user chooses this particular report; the purpose is to sample the data twice a day to see whether it grows or shrinks.
Cheers | Do your `CONVERT` first, then wrap that in the `SUBSTRING`
```
WHERE Substring(Convert(varchar(100),Datum,112),12,16) = '8:00'
``` | Firstly you must do Convert(varchar(100),Datum,112) and then put it in Substring. Try like this;
```
WHERE Substring(Convert(varchar(100),Datum,112),12,16) = '8:00'
``` | Convert and substring in SQL server | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am writing an analytic query on a user activity log table in Postgres 9.3. It has a sign-up date, a data field (that can be summed) and a user type. I've constructed some sample data/sql for this problem and I'm hoping to get some help figuring out the last part. The SQL required to test is below - it will drop/create a table called facts - so be sure to work in a sandbox.
I aggregate the data by week and user type - so you get a count of the data field for each user type every week. The problem I have is that I get results that are missing a week for user type = 'x'. Because there is no user data in the week of 9-9-13 for user type 'x', no row appears (see sample results below). I'd like there to be a row for that user type and week. I'd like to accomplish this, if possible, with a single select statement, with no temp or dimension tables (this is because I will pass this sql off to a business manager, and a single self-contained SQL select statement is hopefully more foolproof; criticism on this approach is welcome but not an answer). Thank you all for any assistance!
Here's the results I get:
```
Sum test_week user_type
4 "2013-09-02" "x"
5 "2013-09-02" "y"
10 "2013-09-09" "y"
2 "2013-09-16" "x"
1 "2013-09-16" "y"
```
Here's the results I want:
```
Sum test_week user_type
4 "2013-09-02" "x"
5 "2013-09-02" "y"
0 "2013-09-09" "x"
10 "2013-09-09" "y"
2 "2013-09-16" "x"
1 "2013-09-16" "y"
```
Here is the test data and SQL select statement:
```
drop table if exists facts;
create temp table facts (signup_date date, data integer, record_type varchar, alt varchar);
insert into facts (signup_date, data, record_type) values
('9/3/2013',1,'x'),
('9/4/2013',1,'y'),
('9/5/2013',2,'x'),
('9/6/2013',3,'y'),
('9/7/2013',1,'x'),
('9/8/2013',1,'y'),
-- note the week of 9/9 to 9/16 has no 'x' records
('9/9/2013',2,'y'),
('9/10/2013', 3, 'y'),
('9/11/2013', 4, 'y'),
('9/12/2013', 1, 'y'),
('9/17/2013', 2, 'x'),
('9/18/2013', 1, 'y');
select coalesce(data, 0), test_week, record_type
from
(select sum(data) as data, record_type, to_timestamp(EXTRACT(YEAR FROM signup_date) || ' ' || EXTRACT(WEEK FROM signup_date),'IYYY IW')::date as test_week
from facts
group by record_type, test_week
) as facts
order by test_week, record_type
``` | ```
select
coalesce(sum(data), 0) as "Sum",
to_char(date_trunc('week', c.signup_date), 'YYYY-MM-DD') as test_week,
c.record_type as user_type
from
facts f
right join
(
(
select distinct record_type
from facts
) f1
cross join
(
select distinct signup_date
from facts
) f2
) c on f.record_type = c.record_type and f.signup_date = c.signup_date
group by 2, 3
order by 2, 3
;
Sum | test_week | user_type
-----+------------+-----------
4 | 2013-09-02 | x
5 | 2013-09-02 | y
0 | 2013-09-09 | x
10 | 2013-09-09 | y
2 | 2013-09-16 | x
1 | 2013-09-16 | y
``` | To solve this problem, create a list of all combinations of all `record_type`s and all test weeks. The left join from those combinations to the actual fact table. This will give all the records, so you should be able to get the rows where there is no data:
```
select coalesce(sum(f.data), 0) as data, rt.record_type, w.test_week
from (select distinct record_type from facts) rt cross join
(select distinct to_timestamp(EXTRACT(YEAR FROM signup_date) || ' ' || EXTRACT(WEEK FROM signup_date),'IYYY IW')::date as test_week
from facts
) w left outer join
facts f
on f.record_type = rt.record_type and
w.test_week = to_timestamp(EXTRACT(YEAR FROM f.signup_date) || ' ' || EXTRACT(WEEK FROM f.signup_date),'IYYY IW')::date
group by rt.record_type, w.test_week
order by w.test_week, rt.record_type;
``` | How to get all dates in sequence for a group by query? | [
"",
"sql",
"postgresql",
"group-by",
""
] |
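The fill-the-gaps idea above (cross join all type/week combinations, then outer join the facts) can be sketched with SQLite from Python's `sqlite3` module. SQLite (before 3.39) has no RIGHT JOIN, so the join direction is flipped to a LEFT JOIN, which is the same thing read the other way round:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE facts (wk TEXT, typ TEXT, data INTEGER);
INSERT INTO facts VALUES ('w1','x',4),('w1','y',5),('w2','y',10);
""")
# All (week, type) combinations, outer-joined back to the facts,
# so the missing ('w2','x') group shows up with a 0 total.
rows = conn.execute("""
    SELECT w.wk, t.typ, COALESCE(SUM(f.data), 0) AS total
    FROM (SELECT DISTINCT wk FROM facts) w
    CROSS JOIN (SELECT DISTINCT typ FROM facts) t
    LEFT JOIN facts f ON f.wk = w.wk AND f.typ = t.typ
    GROUP BY w.wk, t.typ
    ORDER BY w.wk, t.typ
""").fetchall()
print(rows)  # [('w1', 'x', 4), ('w1', 'y', 5), ('w2', 'x', 0), ('w2', 'y', 10)]
```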
I have some data that I would like to split based on a delimiter that may or may not exist.
Example data:
```
John/Smith
Jane/Doe
Steve
Bob/Johnson
```
I am using the following code to split this data into First and Last names:
```
SELECT SUBSTRING(myColumn, 1, CHARINDEX('/', myColumn)-1) AS FirstName,
SUBSTRING(myColumn, CHARINDEX('/', myColumn) + 1, 1000) AS LastName
FROM MyTable
```
The results I would like:
```
FirstName---LastName
John--------Smith
Jane--------Doe
Steve-------NULL
Bob---------Johnson
```
This code works just fine as long as all the rows have the anticipated delimiter, but errors out when a row does not:
```
"Invalid length parameter passed to the LEFT or SUBSTRING function."
```
How can I re-write this to work properly? | Maybe this will help you.
```
SELECT SUBSTRING(myColumn, 1, CASE CHARINDEX('/', myColumn)
WHEN 0
THEN LEN(myColumn)
ELSE CHARINDEX('/', myColumn) - 1
END) AS FirstName
,SUBSTRING(myColumn, CASE CHARINDEX('/', myColumn)
WHEN 0
THEN LEN(myColumn) + 1
ELSE CHARINDEX('/', myColumn) + 1
END, 1000) AS LastName
FROM MyTable
``` | For those looking for answers for SQL Server 2016+. Use the built-in STRING\_SPLIT function
Eg:
```
DECLARE @tags NVARCHAR(400) = 'clothing,road,,touring,bike'
SELECT value
FROM STRING_SPLIT(@tags, ',')
WHERE RTRIM(value) <> '';
```
Reference: <https://msdn.microsoft.com/en-nz/library/mt684588.aspx> | T-SQL split string based on delimiter | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
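The missing-delimiter guard from the accepted answer above translates directly to SQLite's `instr`/`substr`, which makes it easy to test from Python's `sqlite3` module (a sketch of the same CASE logic, not T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE names (full_name TEXT);
INSERT INTO names VALUES ('John/Smith'),('Steve');
""")
# instr() returns 0 when '/' is absent, so the CASE branches
# play the role of the CHARINDEX checks above.
rows = conn.execute("""
    SELECT CASE WHEN instr(full_name, '/') = 0 THEN full_name
                ELSE substr(full_name, 1, instr(full_name, '/') - 1) END,
           CASE WHEN instr(full_name, '/') = 0 THEN NULL
                ELSE substr(full_name, instr(full_name, '/') + 1) END
    FROM names ORDER BY rowid
""").fetchall()
print(rows)  # [('John', 'Smith'), ('Steve', None)]
```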
I have a table like this:
Table: `activity`
```
Date Time task_name type
10-02-2014 07:00 Reporting Scheduled
10-02-2014 12:00 mailing Failed
10-02-2014 16:00 Checking Scheduled
11-02-2014 10:00 DDDD Done
```
I want to update a column named `type` from `Scheduled` to `Done` for a particular date where time is minimum on that date.
Below is the query:
```
UPDATE activity
SET type = "Done"
WHERE
type = "Scheduled"
AND Date = "10-02-2014"
AND time = (SELECT MAX(time) FROM activity WHERE Date = '10-02-2014');
```
But it is throwing an error:
> ERROR 1093 (HY000): You can't specify target table 'activity' for update in FROM clause | People don't half talk some nonsense around here. Consider the following...
```
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
, type VARCHAR(12) NOT NULL
);
INSERT INTO my_table (type) VALUES
('scheduled'),('failed'),('scheduled'),('done');
UPDATE my_table x
LEFT
JOIN my_table y
ON y.id < x.id
SET x.type = 'done'
WHERE y.id IS NULL;
SELECT * FROM my_table;
+----+-----------+
| id | type |
+----+-----------+
| 1 | done |
| 2 | failed |
| 3 | scheduled |
| 4 | done |
+----+-----------+
```
Just by way of example, here's some code that uses a subquery - albeit an uncorrelated one...
```
UPDATE my_table x
JOIN (SELECT MIN(id) minid FROM my_table) y
ON y.minid = x.id
SET x.type = 'done';
``` | Because of [documented limitation](http://dev.mysql.com/doc/refman/5.5/en/subquery-errors.html) of MySQL, you cannot use the same table for both the subquery FROM clause and the update target table.
You should then do something like this:
```
DECLARE @minTime time
SELECT @minTime = MIN(time) FROM activity WHERE date='10-2-2014'
UPDATE activity
SET type='Done'
WHERE date='10-2-2014' AND type='scheduled' AND time = @minTime
``` | Update data in table setting a particular date's minimum timing activity | [
"",
"mysql",
"sql",
""
] |
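The same minimum-row update is easy to try out with SQLite via Python's `sqlite3` module; SQLite accepts a subquery against the update target, so the plain form works (a sketch with the accepted answer's toy data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (id INTEGER PRIMARY KEY, type TEXT);
INSERT INTO my_table (type) VALUES
    ('scheduled'),('failed'),('scheduled'),('done');
""")
# The subquery pinpoints the minimum id; only that row changes.
conn.execute("""
    UPDATE my_table SET type = 'done'
    WHERE id = (SELECT MIN(id) FROM my_table)
""")
rows = conn.execute("SELECT id, type FROM my_table ORDER BY id").fetchall()
print(rows)  # [(1, 'done'), (2, 'failed'), (3, 'scheduled'), (4, 'done')]
```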
We have a very old piece of software, created around 10 years ago, and we don't have the source code.
The software uses two databases, `DB01` and `DB02` on the same SQL Server 2012 instance.
There are SQL statements such as `db01..table1 join db02..table2`, but the main issue is that our processes don't allow us to use `db02` as a database name.
The question is: how can we create an alias for a database?
I was trying to use `CREATE SYNONYM`
```
CREATE SYNONYM [db02] FOR [db02_new_name];
```
but it doesn't work for database names.
Please suggest how it can be solved without patching binary files to correct the SQL statements. | Create a database with the name you want to impersonate. Re-jigg the DDL code generator to create a view for every table in the database that has the tables I need to access via the hardcoded name. Basically, each view will have a statement that looks like this..
```
CREATE VIEW schemaname.tablename as SELECT * FROM targetdbname.schemaname.tablename
```
Example:
The target database name that is hardcoded is called `ProdDBV1` and the Source DB you have is named `ProductDatabaseDatabaseV1`, schema is `dbo` and table name is `customer`
1. Create the database called `ProdDBV1` using SSMS or script.
2. `CREATE VIEW dbo.customer as SELECT * FROM ProductDatabaseDatabaseV1.dbo.customer`
If you can enumerate each table in your "source" database and then create the DDL as above. If you want I can update this posting with a code example. (using the `sp_msforeachtable` procedure if possible) | I had a similar issue.
Solved with this [workaround](http://www.baud.cz/blog/database-alias-in-microsoft-sql-server), using synonyms.
**Short version:** You flood your database with a synonym of every object you'll ever need to reference. Later you re-create every synonym with the other database name. | How to create an alias of database in SQL Server | [
"",
"sql",
"sql-server",
"database",
"alias",
"synonym",
""
] |
I want a select that naturally joins 2 tables. After joining table A with table B, the new temporary table C doesn't contain the row of table A if the primary key of this row is not used in any rows of table B. I understand why that happens, but I want the row also in table C.
Example
```
select deptno, dname, loc, count(deptno) empcount, round(avg(sal),2) avgsal
from dept natural join emp
group by deptno, dname, loc
```
Result:
```
DEPTNO DNAME LOC EMPCOUNT AVGSAL
---------- -------------- ------------- ---------- ----------
20 RESEARCH DALLAS 5 2175
10 ACCOUNTING NEW YORK 3 2916.67
30 SALES CHICAGO 6 1566.67
```
What I want:
```
DEPTNO DNAME LOC EMPCOUNT AVGSAL
---------- -------------- ------------- ---------- ----------
20 RESEARCH DALLAS 5 2175
10 ACCOUNTING NEW YORK 3 2916.67
30 SALES CHICAGO 6 1566.67
40 OPERATIONS BOSTON 0 0
```
Table dept (A):
```
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
```
Table emp (B):
```
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
``` | Thanks a lot, it was the outer join I was looking for.
So here's the sql command that delivers the result I was looking for:
SQL99-Syntax:
```
select deptno, dname, loc, nvl(ecount,0) empcount, nvl(round(avg(sal),2),0) avgsal
from dept left join emp using(deptno)
left join (select deptno, count(deptno) ecount
from emp
group by deptno) using(deptno)
group by deptno, dname, loc, ecount
```
Standard SQL-Syntax:
```
select d.deptno, d.dname, d.loc, nvl(n.ecount,0) empcount, nvl(round(avg(e.sal),2),0) avgsal
from dept d, emp e, (select deptno, count(deptno) ecount
from emp
group by deptno) n
where d.deptno = e.deptno (+)
and n.deptno (+) = e.deptno
group by d.deptno, d.dname, d.loc, n.ecount
```
The column empcount is 0 if there are no employees in the department. In the other solutions it was 1 because there was one row for each department after the outer join in the temporary table. | Use `left join` instead of `natural join`.
```
select deptno, dname, loc, count(deptno) empcount, round(coalesce(avg(sal), 0),2) avgsal
from dept left join emp using (deptno)
group by deptno, dname, loc
``` | SQL join without data loss | [
"",
"sql",
"oracle",
""
] |
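The outer-join approach the record above settles on can be checked with SQLite via Python's `sqlite3` module; counting a column from the joined side (`COUNT(e.empno)`) rather than `COUNT(*)` is what makes the empty department come out as 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (deptno INTEGER, dname TEXT);
CREATE TABLE emp (empno INTEGER, deptno INTEGER, sal INTEGER);
INSERT INTO dept VALUES (10,'ACCOUNTING'),(40,'OPERATIONS');
INSERT INTO emp VALUES (1,10,1000),(2,10,2000);
""")
# COUNT(e.empno) skips the NULLs produced by the outer join,
# and COALESCE turns the NULL average into 0.
rows = conn.execute("""
    SELECT d.deptno, d.dname,
           COUNT(e.empno) AS empcount,
           COALESCE(AVG(e.sal), 0) AS avgsal
    FROM dept d LEFT JOIN emp e ON e.deptno = d.deptno
    GROUP BY d.deptno, d.dname
    ORDER BY d.deptno
""").fetchall()
print(rows)  # [(10, 'ACCOUNTING', 2, 1500.0), (40, 'OPERATIONS', 0, 0)]
```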
I have been assigned with the task of getting part numbers out of a string in our database. I know that the part will be between the first and second comma in the string but I have no idea how to go about doing this. Any one have any suggestions? This is an example of the string I will be working with LEL,SPEC2078,14 CUT LEADS,#18 1/32 | assuming the partno is in a column called p and the name of your table is products:
```
select SUBSTRING(p,
                 charindex(',', p, 1) + 1,
                 case when charindex(',', p, charindex(',', p, 1) + 1) = 0
                      then len(p) - charindex(',', p, 1)
                      else charindex(',', p, charindex(',', p, 1) + 1) - charindex(',', p, 1) - 1
                 end)
from products
``` | No problem! Might be best to write a function if you have to do this for several different items inside a CSV string.
```
SELECT 'LEL,SPEC2078,14 CUT LEADS,#18 1/32'
SELECT SUBSTRING('LEL,SPEC2078,14 CUT LEADS,#18 1/32',1,CHARINDEX(',','LEL,SPEC2078,14 CUT LEADS,#18 1/32',1)-1) -- Get's the first value
SELECT SUBSTRING('LEL,SPEC2078,14 CUT LEADS,#18 1/32',CHARINDEX(',','LEL,SPEC2078,14 CUT LEADS,#18 1/32',2) + 1, CHARINDEX(',','LEL,SPEC2078,14 CUT LEADS,#18 1/32',2) + CHARINDEX(',','LEL,SPEC2078,14 CUT LEADS,#18 1/32',1)) -- Get's the next value
``` | Pulling text out of a string in SQL | [
"",
"sql",
"sql-server",
""
] |
This is the code I have to optimize. I managed to cut the starting cost in half using only indexes and by changing the LIKE to a SUBSTRING expression. My problem now is with the sub-query in the last line and the SUM lines in the SELECT; I believe I have to get rid of those by creating a new table or column, but I can't get it done.
```
SELECT
C.LastName as Customer , e.LastName as SalesPerson, s.ProductID,
p.Name as ProductName, SUM( s.Quantity ) as quantity,
SUM ( p.Price * s.Quantity ) as amount
FROM dbo.Sales s, dbo.Customers c, dbo.Employees e, dbo.Products p
WHERE
s.CustomerID = c.CustomerID and
s.ProductID = p.ProductID and
s.SalesPersonID = e.EmployeeID and
p.Name like 'Paint%'
GROUP BY C.LastName , e.LastName , s.ProductID, p.Name
HAVING sum ( s.Quantity ) <
(select AVG(s2.Quantity) from dbo.Sales s2 where s2.ProductID=s.ProductID )
```
Any help is welcome, thanks in advance. | Can't test it right now, but I think this should work:
```
SELECT c.LastName as Customer ,
e.LastName as SalesPerson,
s.ProductID,
p.Name as ProductName,
SUM(s.Quantity) as quantity,
SUM(p.Price * s.Quantity) as amount
FROM dbo.Sales s
JOIN dbo.Customers c
ON c.CustomerID = s.CustomerID
JOIN dbo.Employees e
ON e.EmployeeID = s.SalesPersonID
JOIN dbo.Products p
ON p.ProductID = s.ProductID
AND p.Name like 'Paint%'
JOIN (SELECT ProductID,
AVG(Quantity) as avg_Quantity
FROM dbo.Sales
GROUP BY ProductID) s2
ON s2.ProductID = s.ProductID
GROUP BY c.LastName , e.LastName , s.ProductID, p.Name
HAVING sum(s.Quantity) < AVG(s2.avg_Quantity)
``` | If using SQL Server you can streamline this a little with window functions:
```
WITH cte AS ( SELECT C.LastName AS Customer
, e.LastName AS SalesPerson
, s.ProductID
, p.Name AS ProductName
, SUM(s.Quantity) AS quantity
, SUM(p.Price * s.Quantity) AS amount
, SUM(SUM(s.Quantity)) OVER (PARTITION BY s.ProductID)*1.0/SUM(COUNT(*)) OVER (PARTITION BY s.ProductID) AS Avg_Qty
FROM dbo.Sales s
JOIN dbo.Customers c ON s.CustomerID = c.CustomerID
JOIN dbo.Employees e ON s.SalesPersonID = e.EmployeeID
JOIN dbo.Products p ON s.ProductID = p.ProductID
WHERE p.Name LIKE 'Paint%'
GROUP BY C.LastName
, e.LastName
, s.ProductID
, p.Name
)
SELECT *
FROM cte
WHERE quantity < Avg_Qty
```
Adding a `Name_Category` field or similar to avoid using `LIKE` would help. | Query Optimization, (subquery) (sql-transact) | [
"",
"sql",
"query-optimization",
"having",
"subquery",
""
] |
TABLE1:
```
ARTIKEL SUPPLIERID SALE_SUM_PIECES
TV SONY 7
```
TABLE2:
```
ROW_ID ARTIKEL SUPPLIERID PIECES
1 TV SONY 6
2 TV SONY 10
3 TV SONY 6
4 TV SONY 14
5 TV SONY 18
6 TV SONY 4
```
I need to subtract the value `X=23` from `TABLE2."PIECES"`, but only once the running SUM of "PIECES" in TABLE2 exceeds the value of TABLE1."SALE\_SUM\_PIECES". For example: the value of `TABLE1."SALE_SUM_PIECES"` is `7`. Now I need to check at which row the value 7 becomes less than the running SUM of `TABLE2."PIECES"`. In the example below, the first row in TABLE2 is not valid because 7 is greater than 6, but the second row is valid since the SUM of "PIECES" from row1 and row2 (i.e. 6+10=16) is greater than 7. So I need to subtract the value `X=23` starting from the second row and continuing through the following rows in `TABLE2`.
The query I have is as follows:
```
SELECT "SUPPLIERID", "ARTIKEL",
(case when ( cumulativesum - (select "SALE_SUM_PIECES" from TABLE1 T1 where T1."SUPPLIERID" = T2."SUPPLIERID" and T1."ARTIKEL" = T2."ARTIKEL" )) <= 0
then NULL
else
(case when @x - cumulativesum <= 0 and @x - (cumulativesum - PIECES) > 0
then 0
when @x - cumulativesum <= 0
then NULL
else @x - cumulativesum
end)
end) as "VALUE_DRILL_DOWN"
from (SELECT T1."ARTIKEL", T1."SUPPLIERID", T1.PIECES,
(select sum("PIECES")
from EXAMPLE_TABLE T2
where T2."ROW_ID" <= T1."ROW_ID" and T2."SUPPLIERID" = T1."SUPPLIERID" and T2."ARTIKEL" = T1."ARTIKEL"
) as "cumulativesum"
from EXAMPLE_TABLE T1
) T2
```
When I execute the above query I get the result as follows:
```
ROW_ID ARTIKEL SUPPLIERID PIECES VALUE_DRILL_DOWN
1 TV SONY 6 NULL
2 TV SONY 10 7
3 TV SONY 6 1
4 TV SONY 14 0
5 TV SONY 18 Null
6 TV SONY 4 Null
```
But I expect a result to be as follows:
```
ROW_ID ARTIKEL SUPPLIERID PIECES VALUE_DRILL_DOWN
1 TV SONY 6 NULL
2 TV SONY 10 13
3 TV SONY 6 7
4 TV SONY 14 0
5 TV SONY 18 Null
6 TV SONY 4 Null
```
I want the subtraction of ´X=23´ to start from the row in TABLE2 where the condition `TABLE1."SALE_SUM_PIECES" < TABLE2."PIECES"` i.e from row2. Any suggestions?
Thanks in advance. | The below solution gives the desired results. See [SqlFiddle](http://sqlfiddle.com/#!6/1454c/1)
```
1 TV SONY 922 6 110 2.50 NULL
2 TV SONY 922 10 80 1.00 13
3 TV SONY 922 6 65 1.50 7
4 TV SONY 922 14 95 1.50 0
5 TV SONY 922 18 95 1.50 NULL
6 TV SONY 922 4 95 1.50 NULL
DECLARE @x INT = 23
; WITH cte AS
(
SELECT t2.*, t1.SALE_SUM_PIECES,
CASE
WHEN SUM(t2.PIECES) OVER (PARTITION BY t2.ARTIKEL, t2.SUPPLIERID ORDER BY ROW_ID) < t1.SALE_SUM_PIECES THEN 'None'
ELSE t2.ARTIKEL + t2.SUPPLIERID
END AS GroupId
FROM @Table2 t2
JOIN @Table1 t1 ON t2.ARTIKEL = t1.ARTIKEL AND t2.SUPPLIERID = t1.SUPPLIERID
),
cumulative AS
(
SELECT *, SUM(PIECES) OVER (PARTITION BY GroupId ORDER BY ROW_ID) AS CumulativeSum
FROM cte
)
SELECT ROW_ID, ARTIKEL, SUPPLIERID, ORGID, PIECES, COSTPRICE, DISCOUNT,
CASE
WHEN CumulativeSum < SALE_SUM_PIECES THEN NULL
WHEN @x - CumulativeSum <= 0 AND @x - (CumulativeSum - PIECES) > 0 THEN 0
WHEN @x - CumulativeSum <= 0 THEN NULL
ELSE @x - CumulativeSum
END AS VALUE_DRILL_DOWN
FROM cumulative
``` | I'm not 100% sure what the expected calculation for VALUE\_DRILL\_DOWN is supposed to be.
You've said that for a valid row, it should be (sum of all rows at or above row) - 23
In this case the following code should work, but it gives different numbers from your expected results.
I've used a self-joining common table expression to work out the cumulative sums of the rows as you count up .
The (total of all rows at or above) is equal to the (total) - (cumulative sum at row before)
Therefore the following code *should* be correct...
```
DECLARE @table1 TABLE(ARTIKEL NVARCHAR(50), SUPPLIERID NVARCHAR(50), ORGID INT,
STORE INT, SALE_SUM_PIECES INT, [DATE] DATETIME)
DECLARE @table2 TABLE(ROW_ID INT, ARTIKEL NVARCHAR(50), SUPPLIERID NVARCHAR(50),
ORGID INT, PIECES INT, COSTPRICE MONEY, DISCOUNT DECIMAL(9,5))
DECLARE @x INT = 23
DECLARE @SumOfAll INT
INSERT @table1
( ARTIKEL , SUPPLIERID , ORGID , STORE , SALE_SUM_PIECES , [DATE] )
VALUES ( 'TV', 'SONY', 922,100,7 ,'2014-01-01' )
INSERT @table2
(ROW_ID, ARTIKEL , SUPPLIERID , ORGID , PIECES , COSTPRICE , DISCOUNT )
VALUES (1, 'TV', 'SONY', 922, 6, 110, 2.5 ),
(2, 'TV', 'SONY', 922, 10 , 80, 1 ) ,
(3, 'TV', 'SONY', 922, 6 , 65 , 1.5 ) ,
(4, 'TV', 'SONY', 922, 14 , 95 , 1.5 ),
(5, 'TV', 'SONY', 922, 18 , 95 , 1.5 ),
(6, 'TV', 'SONY', 922, 4 , 95 , 1.5 )
SELECT @SumOfAll = SUM(PIECES) FROM @table2 AS t
;WITH t2C AS (
SELECT ROW_ID, PIECES AS [cumulativeSum]
FROM @table2 AS t
WHERE ROW_ID = 1 -- assume starting at 1
UNION ALL
SELECT t.ROW_ID, t.PIECES + cumulativeSum AS [cumulativeSum]
FROM @table2 AS t
JOIN t2C ON t2C.ROW_ID + 1 = t.ROW_ID --assume rowIDs are sequential
)
SELECT
t2.ROW_ID,
t1.SUPPLIERID,
t1.ORGID,
t2.PIECES,
t2.COSTPRICE,
t2.DISCOUNT,
CASE WHEN t2C.cumulativeSum - t1.SALE_SUM_PIECES <= 0 THEN NULL
ELSE
CASE
WHEN @x - t2C.cumulativeSum <= 0
AND @x - (t2C.cumulativeSum - t2.PIECES) > 0 THEN 0
WHEN @x - t2C.cumulativeSum <= 0 THEN NULL
ELSE (@SumOfAll - t2cPrev.cumulativeSum) - @x
END
END AS [VALUE_DRILL_DOWN]
FROM t2C
LEFT JOIN t2C AS t2cPrev ON t2cPrev.ROW_ID = t2C.ROW_ID - 1
JOIN @table2 AS t2 ON t2.ROW_ID = t2C.ROW_ID
JOIN @table1 AS t1 ON t1.SUPPLIERID = t2.SUPPLIERID AND t1.ARTIKEL = t2.ARTIKEL
```
However this gives:
```
ROW_ID SUPPLIERID ORGID PIECES COSTPRICE DISCOUNT VALUE_DRILL_DOWN cumulativeSum sumOfRowsOnOrAfter
1 SONY 922 6 110.00 2.50000 NULL 6 NULL
2 SONY 922 10 80.00 1.00000 29 16 52
3 SONY 922 6 65.00 1.50000 19 22 42
4 SONY 922 14 95.00 1.50000 0 36 36
5 SONY 922 18 95.00 1.50000 NULL 54 22
6 SONY 922 4 95.00 1.50000 NULL 58 4
```
**EDIT:**
[SQL Fiddle](http://sqlfiddle.com/#!6/a857d/20) by WiiMaxx | Nested CASE in SQL | [
"",
"sql",
"sql-server",
"select",
"case",
""
] |
I'm spinning in circles trying to figure out what is likely a very simple SQL structure. My task seems simple - within the same table I need to update 3 related records with data from one master record. The master coordinates are in the record with a class of 'T', and I want to insert that record's coordinates into the rx\_latitude/longitude columns of the related records with class code 'R'
The table structure is: callsign, class, tx\_latitude, tx\_longitude, rx\_latitude, rx\_longitude. Sample data looks like this:
```
J877, T, 40.01, -75.01, 0, 0
J877, R, 39.51, -75.21, 0, 0
J877, R, 40.25, -75.41, 0, 0
J877, R, 39.77, -75.61, 0, 0
```
Within that same table, I want to populate all of the rx\_latitude and rx\_longitude fields where the class is 'R' with the tx\_latitude and tx\_longitude coordinates where the class is 'T' and the callsign matches.
I've tried several insert and update statements, but I can only seem to operate on the master record, not the related records. I would appreciate any guidance that you might offer. | ## Updated
Try :
```
update yourTable t1, yourTable t2 set
t1.rx_latitude = t2.tx_latitude,
t1.rx_longitude = t2.tx_longitude
where t1.class = 'R' and t2.class = 'T' and t1.callsign = t2.callsign
```
[Example](http://sqlfiddle.com/#!2/0937d/1) | You can use UPDATE...FROM statement:
```
UPDATE theTable
SET
  rx_latitude = masterRecord.tx_latitude,
  rx_longitude = masterRecord.tx_longitude
FROM
(SELECT tx_latitude,tx_longitude,callsign FROM theTable WHERE class='T') masterRecord
WHERE
class='R' AND callsign = masterRecord.callsign
``` | Update SQL records with values from another record | [
"",
"mysql",
"sql",
"insert",
""
] |
I have a query to calculate a sum over last 12 months like:
```
select part_no,
count(part_no) r12
from t1
where (t1.created<=sysdate and t1.created>=add_months(sysdate,-12))
group by part_no
```
Is it possible to create a query that also shows rolling 6 and rolling 3 in the same query like:
```
part_no r12 r6 r3
-----------------
100 8 2 1
200 12 1 0
300 10 4 4
``` | If you just want to know `COUNT` of all items for last 12, 6 and 3 you can change your query as follows.
```
SELECT part_no
,COUNT(CASE WHEN t1.created <= sysdate
AND t1.created >= add_months(sysdate, -12) THEN 1
ELSE NULL
END) r12
,COUNT(CASE WHEN t1.created <= sysdate
AND t1.created >= add_months(sysdate, -6) THEN 1
ELSE NULL
END) r6
,COUNT(CASE WHEN t1.created <= sysdate
AND t1.created >= add_months(sysdate, -3) THEN 1
ELSE NULL
END) r3
FROM t1
GROUP BY part_no
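-- (added note) COUNT ignores NULLs, so each CASE expression counts only the
-- rows whose created date falls inside its own window; one pass over t1
-- replaces three separately filtered queries.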
``` | You probably can try something like this:
```
SELECT part_no,
SUM( IF(t1.created>=add_months(sysdate,-12), 1, 0) ) r12,
SUM( IF(t1.created>=add_months(sysdate,-6), 1, 0) ) r6,
SUM( IF(t1.created>=add_months(sysdate,-3), 1, 0) ) r3
FROM t1
WHERE t1.created<=sysdate
GROUP BY part_no
``` | Same query with different parameters | [
"",
"sql",
"oracle",
"sum",
""
] |
EDIT:
Thanks to the responses, I have further cleaned up the code to the following:
```
SELECT
AllAlerts.AlertID as AlertID,
Queues.QueueID as QueueID,
Queues.QueueName as QueueName,
AllAlerts.ConnectorID as ConnectorID,
cast( ISNULL(STUFF ( (
select cast(',' as varchar(max)) + Services.Label
from (
SELECT distinct Services.Label
from [ISG_SOI ].[dbo].[SecureServiceCI] as Services
inner join [ISG_SOI ].[dbo].[CIRelationship] as Relationship
on Relationship.BNodeCIID = AllAlerts.CIID
where Services.CIID = Relationship.ServiceCIID
) as Services
for xml path ('')
), 1, 1, ''), '') as CHAR(1000)) as OwnedServices,
right(AllAlerts.DeviceID, len(AllAlerts.DeviceID)-charindex(',', AllAlerts.DeviceID)) as CIName,
AllAlerts.DeviceID as DeviceID,
AllAlerts.SituationMessage as Summary,
AllAlerts.AlertDetail as Detail,
AllAlerts.Acknowledged as Acknowledged,
AllAlerts.AssignedTo as AssignedTo,
AllAlerts.ReportedTime as CreatedTime,
AllAlerts.ClearedTime as ClearedTime,
AllAlerts.Severity as Severity,
AllAlerts.SvcDeskTicket as TicketID,
ISNULL(STUFF ( (
select cast('# ' as varchar(max)) + Notes.AnnotationText + '[' + Notes.CreatedBy + ', ' + cast(Notes.CreatedTime as varchar(max)) + ']'
from [ISG_SOI ].[dbo].[AlertAnnotation] as Notes
where Notes.AlertID = AllAlerts.AlertID
for xml path('')
), 1, 1, ''), '') as Notes
from
[ISG_SOI ].[dbo].[Alerts] as AllAlerts
inner join [ISG_SOI ].[dbo].[AlertQueueAssignments] as QA
on QA.[AlertID] = AllAlerts.[AlertID]
inner join [ISG_SOI ].[dbo].[AlertQueues] AS Queues
on Queues.[QueueID] = QA.[QueueID]
where Queues.QueueName = 'OCC'
```
-- ORIGINAL POST --
I have been working on a T-SQL search for a project that I am working on at work and finally got the search parameters down to get back all the results that I need. I was curious though, is there any way to improve on this command? You will have to forgive me as I am not a SQL expert.
```
SELECT AllAlerts.AlertID AS AlertID
,Queues.QueueID AS QueueID
,Queues.QueueName AS QueueName
,AllAlerts.ConnectorID AS ConnectorID
,CAST(ISNULL(STUFF(( SELECT CAST(',' AS VARCHAR(MAX)) + Services.Label
FROM ( SELECT DISTINCT Services.Label
FROM [ISG_SOI ].[dbo].[SecureServiceCI] AS Services
WHERE Services.CIID IN (
SELECT Relationship.ServiceCIID
FROM [ISG_SOI ].[dbo].[CIRelationship] AS Relationship
WHERE Relationship.BNodeCIID = AllAlerts.CIID ) ) AS Services
FOR
XML PATH('') ), 1, 1, ''), '') AS CHAR(1000)) AS OwnedServices
,RIGHT(AllAlerts.DeviceID, LEN(AllAlerts.DeviceID) - CHARINDEX(',', AllAlerts.DeviceID)) AS CIName
,AllAlerts.DeviceID AS DeviceID
,AllAlerts.SituationMessage AS Summary
,AllAlerts.AlertDetail AS Detail
,AllAlerts.Acknowledged AS Acknowledged
,AllAlerts.AssignedTo AS AssignedTo
,AllAlerts.ReportedTime AS CreatedTime
,AllAlerts.ClearedTime AS ClearedTime
,AllAlerts.Severity AS Severity
,AllAlerts.SvcDeskTicket AS TicketID
,ISNULL(STUFF(( SELECT CAST('# ' AS VARCHAR(MAX)) + Notes.AnnotationText + '[' + Notes.CreatedBy + ', '
+ CAST(Notes.CreatedTime AS VARCHAR(MAX)) + ']'
FROM [ISG_SOI ].[dbo].[AlertAnnotation] AS Notes
WHERE Notes.AlertID = AllAlerts.AlertID
FOR
XML PATH('') ), 1, 1, ''), '') AS Notes
FROM [ISG_SOI ].[dbo].[Alerts] AS AllAlerts
,[ISG_SOI ].[dbo].[AlertQueues] AS Queues
WHERE AllAlerts.AlertID IN ( SELECT QueueAssignment.AlertID
FROM [ISG_SOI ].[dbo].[AlertQueueAssignments] AS QueueAssignment
WHERE QueueAssignment.QueueID IN ( SELECT Queues.QueueID
WHERE Queues.QueueName = 'OCC' ) )
``` | To illustrate a partial solution based on my comment:
```
from [ISG_SOI ].[dbo].[Alerts] AS AllAlerts
inner join [ISG_SOI ].[dbo].[AlertQueueAssignments] as QA on QA.[AlertID] = AllAlerts.[AlertID]
inner join [ISG_SOI ].[dbo].[AlertQueues] AS Queues on Queues.[QueueID] = QA.[QueueID]
where Queues.QueueName = 'OCC'
```
Instead of three nested subqueries, you're left with just one.
The same improvement can be made to the first `for xml` subquery - again, you're doing something joins are made for using subqueries. | You are using a lot of nested SELECTS, which are slow because you may be evaluating the SELECTS for each upper level row. Rather use Common Table Expressions (CTE) which enable you to do your SELECT upfront.
```
WITH MyCTE AS (
SELECT distinct Services.Label
from [ISG_SOI ].[dbo].[SecureServiceCI] as Services
where Services.CIID in (
select Relationship.ServiceCIID
from [ISG_SOI ].[dbo].[CIRelationship] as Relationship
where Relationship.BNodeCIID = AllAlerts.CIID
    )
)
SELECT
AllAlerts.AlertID as AlertID,
Queues.QueueID as QueueID,
Queues.QueueName as QueueName,
AllAlerts.ConnectorID as ConnectorID,
cast( ISNULL(STUFF ( (
select cast(',' as varchar(max)) + Services.Label
    from MyCTE as Services
```
You can define multiple CTE for each of your nested queries | Can I Make This T-SQL Search Better? | [
"",
"sql",
"sql-server",
""
] |
I'm struggling with this piece of SQL and I was wondering if someone could help me out.
```
INSERT INTO table_1(
rec_1,
rec_2,
rec_3
)
VALUES (
val_1,
val_2,
val_3
)
```
Now, rec\_2 and rec\_3 are clear and have absolute values.
Rec\_1 is filled with values from another table. Now I want to insert the values from the other table which do not exist already in this table. I was guessing I should use WHERE NOT IN?
So it would be something like this:
```
INSERT INTO table_1(
rec_1,
rec_2,
rec_3
)
VALUES (
val_1,
val_2,
val_3
)
WHERE NOT IN (
SELECT rec FROM table_2
)
```
But how can I insert those values into rec\_1 in my query? | How about a simple `INSERT/SELECT` if `rec_2` and `rec_3` are absolute values:
```
INSERT INTO table_1 (rec_1, rec_2, rec_3)
SELECT val_1, 'val_2', 'val_3'
FROM other_table
WHERE val_1 NOT IN (SELECT rec_1 FROM table_1)
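-- (added sketch) If table_1.rec_1 can contain NULLs, the NOT IN test matches
-- no rows at all, because comparing against a NULL yields UNKNOWN. An
-- equivalent NOT EXISTS form avoids that pitfall:
--   INSERT INTO table_1 (rec_1, rec_2, rec_3)
--   SELECT o.val_1, 'val_2', 'val_3'
--   FROM other_table o
--   WHERE NOT EXISTS (SELECT 1 FROM table_1 t WHERE t.rec_1 = o.val_1)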
``` | ```
INSERT INTO table_1(rec_1, rec_2, rec_3)
SELECT val_1, val_2, val_3 FROM dual WHERE NOT EXISTS (SELECT rec FROM table_2)
```
You might want to check this [answer](https://stackoverflow.com/a/2513187/2450409) for further usage
Further details [here](http://explainextended.com/2009/09/15/not-in-vs-not-exists-vs-left-join-is-null-sql-server/) | SQL insert query | [
"",
"mysql",
"sql",
""
] |
I have a list of values in column 1; now I want to get the sum of the next 5 (or 6) row values, as shown below, and populate the value in the appropriate column.

For example, if you look at the 1st row, the value of the column 'next-5row-value' would be the sum of values from the current row to the next 5 rows, which would be 9, and the next column would be the sum of the next 5 row values from that reference point.
I am trying to write functions to loop through to arrive at the sum. Is there an efficient way? Can someone help me out? I am using postgres, greenplum. Thanks! | For example, if you have this simple table:
```
sebpa=# \d numbers
Table "public.numbers"
Column | Type | Modifiers
--------+---------+------------------------------------------------------
id | integer | not null default nextval('numbers_id_seq'::regclass)
i | integer |
sebpa=# select * from numbers limit 15;
id | i
------+---
3001 | 3
3002 | 0
3003 | 5
3004 | 1
3005 | 1
3006 | 4
3007 | 1
3008 | 1
3009 | 4
3010 | 0
3011 | 4
3012 | 0
3013 | 3
3014 | 2
3015 | 1
(15 rows)
```
you can use this sql:
```
sebpa=# select id, i, sum(i) over( order by id rows between 0 preceding and 4 following) from numbers;
id | i | sum
------+---+-----
3001 | 3 | 10
3002 | 0 | 11
3003 | 5 | 12
3004 | 1 | 8
3005 | 1 | 11
3006 | 4 | 10
3007 | 1 | 10
3008 | 1 | 9
3009 | 4 | 11
3010 | 0 | 9
3011 | 4 | 10
3012 | 0 | 10
3013 | 3 | 13
3014 | 2 | 15
3015 | 1 | 17
3016 | 4 | 20
3017 | 3 | 17
--cutted output
``` | You can try something like this:
```
SELECT V,
SUM(V) OVER(ORDER BY YourOrderingField
ROWS BETWEEN 1 FOLLOWING AND 5 FOLLOWING) AS next_5row_value,
SUM(V) OVER(ORDER BY YourOrderingField
ROWS BETWEEN 1 FOLLOWING AND 6 FOLLOWING) AS next_6row_value
FROM YourTable;
```
If you don't want NULLs in your "next\_5row\_value" column (the same for the 6-row column), you can use the COALESCE function (supported in PostgreSQL too), which returns the first non-null expression.
Something like this:
```
SELECT V,
COALESCE(SUM(V) OVER(ORDER BY YourOrderingField
ROWS BETWEEN 1 FOLLOWING AND 5 FOLLOWING), 0) AS next_5row_value,
COALESCE(SUM(V) OVER(ORDER BY YourOrderingField
ROWS BETWEEN 1 FOLLOWING AND 6 FOLLOWING), 0) AS next_6row_value
FROM YourTable;
``` | Grouping by a given number of row values | [
"",
"sql",
"postgresql",
"group-by",
"postgresql-8.4",
"greenplum",
""
] |
Why does the right outer join (of the example below) not give the full dataset of the table like the left outer join does, since outer joins always retain the rows of the corresponding table?
Create Table
```
create table join_test
(id number unique not null, name varchar2(10), department varchar2(10));
```
Populate Table
```
select * from join_test;
ID NAME DEPARTMENT
1 stelios sa
2 andros sa
3 stav ba
4 mary ba
5 antonia la
6 lambros ka
```
Inner Join
```
select j1.name, j1.department
from join_test j1 join join_test j2
on (j1.department=j2.department and j1.name<>j2.name);
NAME DEPARTMENT
andros sa
stelios sa
mary ba
stav ba
```
Left Outer Join
```
select j1.name, j1.department
from join_test j1 left join join_test j2
on (j1.department=j2.department and j1.name<>j2.name)
NAME DEPARTMENT
andros sa
stelios sa
mary ba
stav ba
antonia la
lambros ka
```
Right Outer Join
```
select j1.name, j1.department
from join_test j1 right join join_test j2
on (j1.department=j2.department and j1.name<>j2.name)
NAME DEPARTMENT
stelios sa
andros sa
stav ba
mary ba
```
Changing select list to j2
```
select j2.name, j2.department
from join_test j1 right join join_test j2
on (j1.department=j2.department and j1.name<>j2.name)
NAME DEPARTMENT
andros sa
stelios sa
mary ba
stav ba
antonia la
lambros ka
``` | Right and Left joins perform the same function. What's different in your examples is the tables that you are SELECTing from.
These two queries:
```
select j2.name,j2.department
from join_test j1 left join join_test j2
on(j1.department=j2.department and j1.name<>j2.name)
```
and
```
select j1.name,j1.department
from join_test j1 right join join_test j2
on(j1.department=j2.department and j1.name<>j2.name)
```
Produce the same result:
```
NAME DEPARTMENT
stelios sa
andros sa
stav ba
mary ba
(null) (null)
(null) (null)
```
The reason for the difference in results that you see between your left and right queries is that in the **left** example, you are SELECTing from the "driving" table (the left table, J1). The join is showing all rows from the driving table (J1), and matching rows (which are not displayed) from the right-hand, or non-driving table (J2).
In your **right** example, you are changing the join but still selecting from J1. Since J1 is now the **non-driving** table, you are only seeing the matched results from J1. If you add J2 columns to the select, you will see all of its rows:
```
NAME DEPARTMENT NAME DEPT
stelios sa andros sa
andros sa stelios sa
stav ba mary ba
mary ba stav ba
(null) (null) antonia la
(null) (null) lambros ka
```
You will see this same result with a LEFT join, but the nulls would be on the other side.
In both cases, the (null) rows represent the rows from the driving table that are not matched in the non-driving table. | This produces the correct results in SQL Fiddle (sqlfiddle.com/#!4/1e7075/2). However, I have a suspicion. The returned results are:
```
NAME DEPARTMENT
stelios sa
andros sa
stav ba
mary ba
(null) (null)
(null) (null)
```
I suspect that however you are returning the results (or looking at them), the rows with all `NULL` values are being ignored. Try choosing the columns from the `j2` table and see if you get more results. | Self join in SQL Oracle: Why does the right outer join not retain all the values of the table | [
"",
"sql",
"oracle",
"left-join",
"self-join",
"right-join",
""
] |
I have SQL table Policy that contains column as PolicyNumber which has value 'CCL-9997-10497'
and another table PolicyImages which also has column PolicyNumber which has value 'CCL-9997-000010497'
I wanted to inner join both these tables on PolicyNumber.
How can I achieve it? | Your two tables have different formats of PolicyNumber, so you need some kind of computation.
I think the query below will help you:
```
SELECT a.*
FROM Policy a INNER JOIN PolicyImages b ON a.PolicyNumber =
Replace(b.PolicyNumber,'-' + right(b.PolicyNumber,charindex('-',REverse(b.PolicyNumber))-1),
'-' + convert(varchar,Convert(Decimal,right(b.PolicyNumber,charindex('-',REverse(b.PolicyNumber))-1)))
)
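-- (added walk-through of the expression, using the sample value from the question)
--   right('CCL-9997-000010497', charindex('-', reverse(...)) - 1) -> '000010497'
--   convert(decimal, '000010497')                                 -> 10497
--   replace(..., '-000010497', '-10497')                          -> 'CCL-9997-10497'
-- i.e. the leading zeros in the segment after the last hyphen are stripped,
-- so the value can match Policy.PolicyNumber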
``` | This should do it:
```
SELECT *
FROM Policy
INNER JOIN PolicyImages ON Policy.PolicyNumber = PolicyImages.PolicyNumber
``` | Inner join tables | [
"",
"sql",
"inner-join",
""
] |
In this SQL, it is only returning rows that had data in the column from the joined table
(subst\_Instructions). But we want it to display the row from the main table even if there is no data in the join. Presumably that is because of the `where SUBST_INSTRUCTIONS is not null` clause,
but without that, it will not show the data from that column. Is there a way to keep rows where SUBST_INSTRUCTIONS is not null and null alike?
```
SELECT * FROM
(
SELECT ID_KEY, [BATCH] AS column1, [IMPORTDATE], [DATEBILLED], [RX],
[DATEDISPENSED], [DAYSUPPLY], [PAYTYPE], [NPI],
[PHYSICIAN], [COST], [QUANTITY], [MEDICATION], A.[NDC], [PATIENTNAME],
[ROUTEOFADMIN], [INVOICECAT], [COPAY], [BRAND], [TIER], [SKILLLEVEL],
[STAT] STATUS, [LASTTASKDATE],SEQNO,B.[SUBST_INSTRUCTIONS],
ROW_NUMBER() OVER(PARTITION BY ID_KEY ORDER BY ID_KEY) rn
FROM [PBM].[T_CHARGES] A
LEFT OUTER JOIN [OGEN].[NDC_M_FORMULARY] B
ON A.[NDC] = B.[NDC]
WHERE [STAT] NOT IN (3, 4)
AND [TIER] <> 'T1'
) a
WHERE SUBST_INSTRUCTIONS IS NOT NULL -- rn = 1
``` | Make sure not to have any fields of the outer joined table in the WHERE clause. Do [STAT] and [TIER] both belong to the main table? If they belong to the outer joined table, then put the criteria in the ON clause, not in the WHERE clause. | I would use full outer join instead of left outer join. | sql query only displays rows that have a match in the join | [
"",
"sql",
""
] |
I have a table that is time and milemarkers:
```
08:00 101.2
08:45 109.8
09:15 109.8
09:30 111.0
10:00 114.6
```
I need output that looks like this:
```
08:00-08:45 101.2-109.8
08:45-09:15 109.8-109.8
09:15-09:30 109.8-111.0
09:30-10:00 111.0-114.6
```
I figure I need 2 identical recordsets and somehow tie the first record of one to the second record of the other, but am clueless on how to accomplish that (or how to ask the question). Any help would be greatly appreciated.
Thanks in advance,
Ginny | Other way to do it is:
[SQLFiddle](http://sqlfiddle.com/#!3/20f9d/5/0)
```
select cast(A.TIME_COL as varchar) + ' - ' + cast(B.TIME_COL as varchar),
cast(A.MILES as varchar) + ' - ' + cast(B.MILES as varchar)
from (select row_number() OVER (order by time_col) ID, * from TABLE_A) A
inner join (select row_number() OVER (order by time_col) ID, * from TABLE_A) B
on A.ID = B.ID - 1
```
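To see why this pairs consecutive readings (a note added here, using the values from the question's table): `row_number()` assigns IDs 1 to 5 in time order, and the join condition `A.ID = B.ID - 1` matches each row with its successor:

```
A.ID  B.ID  output
1     2     08:00 - 08:45   101.2 - 109.8
2     3     08:45 - 09:15   109.8 - 109.8
3     4     09:15 - 09:30   109.8 - 111.0
4     5     09:30 - 10:00   111.0 - 114.6
```

The last reading (10:00) has no successor, so it produces no row.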
**UPDATE**: this query only works for SQL Server 2008 and upwards and obviously does not answer your question. I will not erase the answer because it can be helpful for other people.
**UPDATE2**: It works on [SQL Server 2005](http://technet.microsoft.com/en-us/library/ms186734%28v=sql.90%29.aspx). | The following query will get the next values:
```
select tm.*,
(select top 1 time
from timemilemarkers tm2
where tm2.time > tm.time
order by 1 desc
) as nexttime,
(select top 1 milemarker
from timemilemarkers tm2
where tm2.time > tm.time
order by 1 desc
) as nextmilemarker
from timemilemarkers tm;
```
You can put them into the form you want with something like:
```
select concat_ws('-', milemarker, nextmilemarker), concat_ws('-', time, nexttime)
from (select tm.*,
(select top 1 time
from timemilemarkers tm2
where tm2.time > tm.time
order by 1 desc
) as nexttime,
(select top 1 milemarker
from timemilemarkers tm2
where tm2.time > tm.time
order by 1 desc
) as nextmilemarker
from timemilemarkers tm
) tm
where nextmilemarker is not null;
``` | querying SQL Server 2005 | [
"",
"sql",
"sql-server",
""
] |
I want to display one unique row from a table which has repeated rows with the same values. I have tried this sql query,
```
select
table1.activityhp, table2.id, table1.sequenceno
from
ServiceDataElementsNew as table1,
VerticalsumNew as table2
where
table1.verticalsumid = table2.id
```
and get this result:
```
activityhp id sequenceno
Total new and repeat acceptors 1 5
Total new and repeat acceptors 1 5
Total new and repeat acceptors 1 5
Total new and repeat acceptors 1 5
Total new and repeat acceptors 1 5
Total new and repeat acceptors 1 5
New acceptors 2 6
New acceptors 2 6
New acceptors 2 6
Repeat acceptors 3 10
Repeat acceptors 3 10
Repeat acceptors 3 10
```
But I want this:
```
activityhp id sequenceno
Total new and repeat acceptors 1 5
New acceptors 2 6
Repeat acceptors 3 10
```
Thanks for your help! | Add this to the query:
```
group by table1.activityhp, table2.id, table1.sequenceno
``` | Please try using [DISTINCT](http://www.w3schools.com/sql/sql_distinct.asp) keyword, which eliminates the duplicate entries.
```
select DISTINCT
table1.activityhp,
table2.id,
table1.sequenceno
from
EthioHIMS_ServiceDataElementsNew as table1,
EthioHIMS_VerticalsumNew as table2
where table1.verticalsumid=table2.id
```
Refer: [Eliminating Duplicates with DISTINCT (MSDN)](http://technet.microsoft.com/en-us/library/ms187831%28v=sql.105%29.aspx) | Display one unique row from sql database | [
"",
"sql",
"unique",
""
] |
So the short end of it is this is what I want to do, but I don't know the proper syntax.
Table 1, Table 2
Name, SSN, DOB, List, Date
I want to do a left join using the SSN, but when the SSN field IS NULL I want it to join on the DOB field where the Name matches.
I can't join on name due to the file being 19k records and most of them are common names. | I believe that you can do what you want with a simple `LEFT JOIN` and the correct JOIN conditions. The trick in Access is that **IT DOES NOT LIKE** left joins with too few parentheses. When in doubt, add more parentheses. You will be surprised at what it makes possible in Access.
```
SELECT
T1.*,
T2.*
FROM
(Table1 AS T1
LEFT JOIN Table2 AS T2 ON (
(T1.SSN = T2.SSN)
OR (
(T1.SSN IS NULL)
AND (T1.DOB = T2.DOB)
AND (T1.Name = T2.Name)
)
));
```
If this doesn't work, please let me know and I'll see what I can do.
Note that you may or may not be able to go to the design view of this in the GUI query editor.
Also note that since NULL does not equal NULL, you don't need additional conditions before the `OR` about `X.SSN IS NOT NULL`. | Perhaps:
```
SELECT *
FROM Table7
INNER JOIN Table8
ON Table7.SSN = Table8.SSN
WHERE Table8.SSN Is Not Null
UNION
SELECT *
FROM Table7
INNER JOIN Table8
ON Table7.DOB = Table8.DOB
WHERE Table8.SSN Is Null
```
Note that the result will not be editable. | When field "IS NULL" join on an alternative field - Access 2007 | [
"",
"sql",
"if-statement",
"ms-access-2007",
"left-join",
""
] |
Suppose I have 3 tables
A:
```
---------------
id | name
---------------
1 | A
---------------
2 | B
---------------
```
B:
```
-----------------------
id | val | A_id
-----------------------
1 | 10 | 1
-----------------------
2 | 20 | 2
-----------------------
3 | 30 | 2
-----------------------
```
C:
```
-----------------------
id | val | B_id
-----------------------
1 | 40 | 2
-----------------------
2 | 50 | 2
-----------------------
3 | 60 | 2
-----------------------
```
How do I get this result:
```
----------------------------
A_name | B_val | C_val
----------------------------
A | 10 | 0
----------------------------
B | 50 | 150
----------------------------
```
I've tried to do this:
```
SELECT A.name, SUM(COALESCE(B.val,0)), SUM(COALESCE(C.val,0))
FROM A
LEFT JOIN B ON A.id = B.A_id
LEFT JOIN C ON B.id = C.B_id
GROUP BY A
```
But it returned this instead :
```
----------------------------
A_name | B_val | C_val
----------------------------
A | 10 | 0
----------------------------
B | 90 | 150
----------------------------
```
I suppose it's because C has 3 records that are related to B, so the second record of B is multiplied by three. What's the best way to get the result that I want? | Move the calculations into subselects:
```
select tablea.id, tablea.name, B.B_val, C.C_val FROM tablea
LEFT JOIN
(select tablea.id AS id, SUM(COALESCE(tableb.val,0)) as B_val
from tablea left join tableb on tablea.id = tableb.A_id
group by tablea.id) AS B ON tablea.id = B.id
LEFT JOIN
(select tablea.id AS id, SUM(COALESCE(tablec.valc,0)) as C_val
from tablea left join tableb on tablea.id = tableb.A_id
left join tablec on tablec.B_id = tableb.id
group by tablea.id) AS C on tablea.id = C.id
```
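Why this avoids the double counting (a note added here): each SUM runs inside its own derived table, so the row multiplication caused by joining `tableb` to `tablec` never reaches the other aggregate. With the sample data the two subqueries return:

```
B:  tablea.id 1 -> 10              tablea.id 2 -> 20 + 30 = 50
C:  tablea.id 1 -> 0 (no C rows)   tablea.id 2 -> 40 + 50 + 60 = 150
```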
<http://sqlfiddle.com/#!2/4c268/14>
EDIT: I had SQL Fiddle set to MySQL. Nothing changes in the code, but here's the Postgres version:
<http://sqlfiddle.com/#!15/4c268/1> | Split the query in two and then use those subqueries as data source for a third one:
```
-- The first query (it returns the sum of b.val)
select a.id, a.name, sum(b.val) as sum_b_val
from a left join b on a.id = b.a_id
group by a.id;
-- The second query (it returns the sum of c.val)
select b.a_id, sum(c.val) as sum_c_val
from b left join c on b.id = c.b_id
group by b.a_id;
-- Put it all together
select
q1.name,
coalesce(q1.sum_b_val, 0) as sum_b_val,
coalesce(q2.sum_c_val, 0) as sum_c_val
from
(
select a.id, a.name, sum(b.val) as sum_b_val
from a left join b on a.id = b.a_id
group by a.id
) as q1
left join (
select b.a_id, sum(c.val) as sum_c_val
from b left join c on b.id = c.b_id
group by b.a_id
) as q2 on q1.id = q2.a_id;
```
Check [this example on SQL Fiddle](http://sqlfiddle.com/#!2/ed0357/6).
Hope this helps | I want to join 3 tables and aggregate records of two columns from 2 of those tables while avoiding record duplication | [
"",
"sql",
"postgresql",
""
] |
I have 2 tables that have identical structure:
`| term (varchar(50) utf8_bin) | count (bigint(11)) |`
One table is called "big\_table" and the other one "small\_table". Big table has ~10M rows and small\_table has 75k.
I want to update small\_table, so the count column will be filled from the big\_table.
I tried this:
```
UPDATE small_table b SET counter = (SELECT c.counter
FROM big_table c
WHERE c.term = b.term)
WHERE term = (SELECT c.term
FROM big_table c
WHERE c.term = b.term);
```
But it is only updating one row... | I think you only need a `JOIN`:
```
UPDATE small_table b
JOIN big_table c
ON c.term = b.term
SET b.counter = c.counter ;
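-- (added note) the inner JOIN above only updates matching rows; small_table
-- rows whose term is absent from big_table keep their old counter. To zero
-- those out instead, a LEFT JOIN variant works in MySQL:
--   UPDATE small_table b
--   LEFT JOIN big_table c ON c.term = b.term
--   SET b.counter = COALESCE(c.counter, 0);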
``` | You don't need `WHERE` part at all:
```
UPDATE
small_table b
SET
counter =
ISNULL
(
(
SELECT c.counter
FROM big_table c
WHERE c.term = b.term
),
0
)
```
Think about this way:
`UPDATE` every row in `small_table` and `SET` column `counter` to the `counter` value found in `big_table` record `WHERE` value of `term` is equals to `term` in `small_table`. If nothing is found in `big_table` then just set `counter` to `0`. | Using JOIN in UPDATE query | [
"",
"mysql",
"sql",
""
] |
I have some mobile numbers stored in an Oracle DB. I have a stored procedure that carries out some checks around a variety of validations. The stored procedure is launched by a front-end Windows application. One basic validation example is ensuring the mobile field is not null.
I'd like to add validation for: check the mobile number is at least 10 characters long, and do NOT count white spaces (leading, trailing or in the middle). If not, then ignore these mobile numbers.
Example: '188 123 4567'
'1881234567'
' 1881234 567 '
All three numbers above should be taken as valid, whereas:
'188 123 456'
'188123456'
' 1881234 567 '
Should fail. Can you assist with syntax? I'm still learning PL/SQL.
Here is an extract of the stored procedure I already have in place:
```
AND NOT EXISTS
( SELECT *
FROM person_a p
WHERE p.person_id = sa.person_id
AND p.mobile_ph_no = sa.mobile_ph_no
     AND pu.a_mobile_ph_no IS NOT NULL
   )
```
If the mobile number already stored on the DB is wrong, I just want to ignore it. Not correct it or modify it in any way. Simply if it doesn't meet the criteria, skip over it. | I don't think the other answers properly answer what s/he is asking. S/He says "*If the mobile number already stored on the DB is wrong, I just want to ignore it. Not correct it or modify it in any way. Simply if it doesn't meet the criteria, skip over it.*"
The other answers **do not** ignore invalid numbers, they alter them by removing the whitespace, and then select them.
So, to only query phone numbers in the DB that are 10 digits with no spaces and no other characters then s/he needs to do this...
```
SELECT * FROM person_a
WHERE REGEXP_LIKE (mobile_ph_no, '^\d{10}$');
```
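On older Oracle releases where the Perl-style `\d` class may not be available, the POSIX character class is an equivalent form (a hedged alternative, not a change to the approach):

```
SELECT * FROM person_a
WHERE REGEXP_LIKE (mobile_ph_no, '^[[:digit:]]{10}$');
```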
So for your query...
```
SELECT *
FROM person_a p
WHERE p.person_id = sa.person_id
AND p.mobile_ph_no = sa.mobile_ph_no
AND REGEXP_LIKE (pu.a_mobile_ph_no, '^\d{10}$');
```
No need for the `IS NOT NULL` part anymore.
Your complete query isn't supplied, looks like you're querying a couple different numbers from different tables, so the select statement above may not be *EXACTLY* what you need, but the main point is, use `REGEXP_LIKE (pu.a_mobile_ph_no, '^\d{10}$')` | You can use `replace` function. E.g.,
```
select replace('188 123 4567',' ','') from dual;
```
result:
```
1881234567
```
Oracle Database SQL Reference for `replace` <http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions134.htm>
See also "Using Regular Expressions in Oracle Database" <http://docs.oracle.com/cd/B19306_01/appdev.102/b14251/adfns_regexp.htm>
Regexp-es are very useful for tasks like this one. E.g. `regexp_count('188 123 4567','[[:digit:]]')` returns the number of digits in `188 123 4567` ignoring all non-digit characters. | Check string length & ignore whitespace | [
"",
"sql",
"oracle",
"stored-procedures",
"plsql",
""
] |
I am trying to get a list of IDs and their corresponding names out of a table. Using
```
select distinct cast([ctID] as int), [ctName]
FROM [tbl_CTResults]
```
I find that the name has changed over time. I have an additional column (`[statdate]`) that will allow me to know which is the latest name.
I tried using something like
```
SELECT cast([ctID] as int) ,max([statdate])
FROM [InSightETL].[dbo].[tblPhWrk_WFM_iEX_CTResults] b
group by cast([ctID] as int)
join
SELECT * from (select distinct cast([ctID] as int)
,[ctName]
FROM [InSightETL].[dbo].[tblPhWrk_WFM_iEX_CTResults] ) a
on a.ctid=b.ctid
```
but apart from the fact the syntax is incorrect, the logic is also faulty. I know that I need to be able to select the correct record based on the date, but I can't figure out how to tie those two pieces of information together | You can do this using `row_number()`:
```
select ctid, ctName
from (select cast([ctID] as int) as ctid, [ctName],
             row_number() over (partition by ctID order by statdate desc) as seqnum
FROM [tbl_CTResults]
) t
where seqnum = 1;
``` | ```
select *
from
(
select cast([ctID] as int), [ctName],
row_number() over (partition by ctID order by statdate desc) as rn
FROM [tbl_CTResults]
) as dt
where rn = 1
``` | selecting last used name | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
We have a table ta\_service with the following data:
```
Ref_id  seq  desc
2340    1    Service 1
2340    2    Offer 1
2340    3    Service 2
2340    4    Offer 2
2340    5    Service 3
2340    5    Service 4
```
We have a requirement of fetching the services which have offers, and we don't have any reference for getting this data apart from the Ref\_id (foreign key) and seq (primary key) columns.
We tried to fetch with the following query but it failed:
```
SELECT * FROM ta_service
WHERE Ref_id = 2340 AND seq IN (
SELECT seq-1 AS seq
FROM ta_service
WHERE Ref_id = 2340
AND desc LIKE '%Offer%'
UNION
SELECT seq AS seq
FROM ta_service
WHERE Ref_id = 2340
AND desc LIKE '%Offer%'
ORDER BY seq)
```
We are using the Sybase database. Any help is appreciated. | ASE doesn't support UNION in any subqueries.
You can refer to the [Sybase Documentation](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc38151.1510/html/iqrefbb/Convert.htm "example") | Here is the link to the Sybase manual, which says UNION is not supported in subqueries:
<http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc36272.1570/html/commands/X56548.htm>
However, I noticed that if the UNION clause is placed within a derived table expression, it works fine. Here is the link to the Sybase manual which gives the syntax for the query:
<http://infocenter.sybase.com/archive/index.jsp?topic=/com.sybase.help.ase_15.0.sqlug/html/sqlug/sqlug418.htm>
Example of failing and working query given below:
Mocked use case: Get employees from either department D1 or D2.
Failing query:
```
SELECT empname from emp WHERE deptid IN
 (SELECT deptid from dept WHERE deptid = "D1"
 UNION
 SELECT deptid from dept WHERE deptid = "D2")
```
Working query, with an additional select and a derived table based on the union:
```
SELECT empname from emp WHERE deptid IN (
SELECT deptid FROM
 (SELECT deptid from dept WHERE deptid = "D1"
 UNION
 SELECT deptid from dept WHERE deptid = "D2") derivedTable
)
``` | Getting syntax error with Union sub query in Sybase DB | [
"",
"sql",
"union",
"sybase",
""
] |
I'm using MySQL 5.1 and I have two tables, projects and employee. There is a foreign key in employee (number\_project) that is the primary key of projects(code\_project).
---> Exactly: [SQL Fiddle](http://sqlfiddle.com/#!2/f54cd/3/0)
I'm trying to get the SUM of the projects by department with this query:
```
SELECT emp.department_emp AS Department, SUM( pro.price ) AS total_department
FROM employee AS emp, projects AS pro
WHERE emp.number_project = pro.code_project
GROUP BY emp.department_emp
```
It returns:
```
DEPARTMENT TOTAL_DEPARTMENT
Accounting 2600
IT 4200
```
But it should return:
```
DEPARTMENT TOTAL_DEPARTMENT
Accounting 1300
IT 4200
```
The problem is that the query sums the same project multiple times: once for each employee of the department who works on that project.
Thanks! | Use a subquery to create a list of unique `(department, project)` combinations:
```
select e.department_emp
, sum(p.price)
from (
select distinct department_emp
, number_project
from Employee
) e
join Projects p
on p.code_project = e.number_project
group by
e.department_emp
```
[Working example at SQL Fiddle.](http://sqlfiddle.com/#!2/f54cd/8/0)
Your database could use some serious work. Employee to project should be a many-to-many relation, not a many-to-one relation. And a field called "number" should not be called "code" in another table. | ```
SELECT P.department_emp AS Department, SUM( P.price ) AS total_department
FROM (SELECT DISTINCT *
      FROM employee AS emp, projects AS pro
      WHERE emp.number_project = pro.code_project
     ) AS P
GROUP BY P.department_emp
``` | MySQL - Group By SUM with two tables | [
"",
"mysql",
"sql",
"group-by",
"sum",
""
] |
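The accepted answer's trick above — de-duplicating (department, project) pairs before joining and summing — can be verified with a quick SQLite sketch (data mirrors the question's expected totals; SQLite is just a stand-in for MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (department_emp TEXT, number_project INT);
CREATE TABLE Projects (code_project INT, price INT);
INSERT INTO Employee VALUES
  ('Accounting', 1), ('Accounting', 1),  -- two employees, same project
  ('IT', 2), ('IT', 3);
INSERT INTO Projects VALUES (1, 1300), (2, 2000), (3, 2200);
""")
rows = conn.execute("""
SELECT e.department_emp, SUM(p.price)
FROM (SELECT DISTINCT department_emp, number_project FROM Employee) e
JOIN Projects p ON p.code_project = e.number_project
GROUP BY e.department_emp
ORDER BY e.department_emp
""").fetchall()
print(rows)  # [('Accounting', 1300), ('IT', 4200)]
```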
I have a table named `A` that contains a column named `x`. What I'm trying to do is to count the number of items that belong to a certain subset of A (more precisely, the ones that satisfy the `x > 4` condition) via a single SELECT query, for example:
```
SELECT COUNT(*)
FROM A
WHERE x > 4;
```
From thereon, I'd like to calculate the ratio between the size of this particular subset of `A` and `A` as a whole, i.e. perform the following division:
```
size_subset / size_A
```
My question is: how would I combine all of these pieces into a single SQL SELECT query? | My server is down, so I'm not able to verify the answer below:
```
SELECT count(case when x > 4 then x else null end) / COUNT(*) FROM A;
```
This is slightly better because it's just a count, not a sum (NULLs are not counted),
but I prefer to do:
```
select (SELECT count(*) FROM A where x > 4)/(SELECT count(*) FROM A);
```
As I guess it can be faster. | You want conditional aggregation:
```
SELECT sum(case when x > 4 then 1 else 0 end) / COUNT(*)
FROM A;
``` | Get ratio between the length of a table and one of its subsets via SQL | [
"",
"sql",
"postgresql",
""
] |
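One caveat worth noting with either answer above: in PostgreSQL both operands are integers, so the division truncates unless one side is cast (e.g. `::numeric`). A runnable SQLite sketch of the conditional-count version, multiplying by 1.0 to force real division (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (x INT)")
conn.executemany("INSERT INTO A VALUES (?)", [(1,), (3,), (5,), (9,)])
# CASE without ELSE yields NULL, and COUNT ignores NULLs,
# so only rows with x > 4 are counted in the numerator.
(ratio,) = conn.execute("""
SELECT COUNT(CASE WHEN x > 4 THEN 1 END) * 1.0 / COUNT(*) FROM A
""").fetchone()
print(ratio)  # 0.5 (2 of 4 rows have x > 4)
```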
I have 2 tables - actual and proposed - as follows:
```
ACTUAL
Id|desc|Allocation|changeBit
1|X|20||
2|y|30||
PROPOSED
Id|desc|Allocation|changeBit
1|X|30|U|
3|z|40|I|
```
The sought result is as follows:
```
1|X|30|U|
2|y|30||
3|z|40|I|
```
A 'U' bit causes the record in 'proposed' to override 'actual'.
An 'I' bit in proposed just indicates that the 'new' record needs to be added-on.
What would be the most elegant way to achieve this?
Ideally I would like to avoid creating temp tables followed by insert-update.
I am using sql-server 2008. | I would be inclined to approach this with `union all` and `not exists`. I find the `coalesce()` or `isnull()` on the columns in the `full outer join` approach to be inelegant.
```
select Id, desc, Allocation, changeBit
from proposed
union all
select Id, desc, Allocation, changeBit
from actual a
where not exists (select 1 from proposed p where a.id = p.id);
```
EDIT:
If you want the validation, it is essentially:
```
select Id, desc, Allocation, changeBit
from proposed
where changebit in ('U', 'I')
union all
select Id, desc, Allocation, changeBit
from actual a
where not exists (select 1 from proposed p where a.id = p.id);
```
Really testing for a valid "U" versus "I" is more intensive and seems unnecessary (and isn't really part of the original question). You should probably ask another question, and explain exactly what you want done when there is an update to a non-actual record, or an insert for a record that doesn't exist. | You should join both tables using a `full outer join`, and then use `coalesce` to get the actual value only when a proposed value does not exist:
```
SELECT COALESCE (p.id, a.id) AS id,
COALESCE (p.desc, a.desc) AS desc,
COALESCE (p.allocation, a.allocation) AS allocation,
COALESCE (p.changebit, a.changebit) AS changebit
FROM actual a
FULL OUTER JOIN proposed p ON a.id = p.id
``` | Table join with overrides | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
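The `union all` + `not exists` pattern from the accepted answer above can be checked with a small SQLite script (the `desc` column is renamed `descr` here because `DESC` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actual   (Id INT, descr TEXT, Allocation INT, changeBit TEXT);
CREATE TABLE proposed (Id INT, descr TEXT, Allocation INT, changeBit TEXT);
INSERT INTO actual   VALUES (1, 'X', 20, NULL), (2, 'y', 30, NULL);
INSERT INTO proposed VALUES (1, 'X', 30, 'U'), (3, 'z', 40, 'I');
""")
rows = conn.execute("""
SELECT Id, descr, Allocation, changeBit FROM proposed
UNION ALL
SELECT Id, descr, Allocation, changeBit FROM actual a
WHERE NOT EXISTS (SELECT 1 FROM proposed p WHERE a.Id = p.Id)
ORDER BY Id
""").fetchall()
print(rows)  # [(1, 'X', 30, 'U'), (2, 'y', 30, None), (3, 'z', 40, 'I')]
```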
I have 2 tables A and B with the same structure. I need to get each row from table A that are not present in table B, then insert them into a 3rd #temp table C.
Each table has 2 columns that need to be compared, Type and Step; the remaining columns, RowID, CreatedDate, CreatedUserID, ModifiedDate and ModifiedUserID, do not need to be compared.
Is there a single statement I can use to INSERT INTO #tempC that compares A and B and inserts the values from A that are not present in table B, using TSQL (SQL Server 2012)? | SQL 2012 has the EXCEPT statement. Here's a trivial example.
```
CREATE TABLE #TmpA (
Col Varchar(10),
Irrelevant Int );
CREATE TABLE #TmpB (
Col Varchar(10) );
CREATE TABLE #TmpC (
Col Varchar(10) );
INSERT INTO #TmpA
SELECT 'A', 0 UNION ALL
SELECT 'B', 7 UNION ALL
SELECT 'C', 5 ;
INSERT INTO #TmpB
SELECT 'A' UNION ALL
SELECT 'C' UNION ALL
SELECT 'D';
INSERT INTO #TmpC
SELECT Col FROM #TmpA
EXCEPT
SELECT Col FROM #TmpB
SELECT * FROM #TmpC;
DROP TABLE #TmpA, #TmpB, #TmpC;
```
--- And a second scenario following Blam's critique:
```
CREATE TABLE #TmpA (
Col Varchar(10),
Irrelevant Int );
CREATE TABLE #TmpB (
Col Varchar(10) );
CREATE TABLE #TmpC (
Col Varchar(10),
Irrelevant Int );
INSERT INTO #TmpA
SELECT 'A', 0 UNION ALL
SELECT 'B', 7 UNION ALL
SELECT 'C', 5 ;
INSERT INTO #TmpB
SELECT 'A' UNION ALL
SELECT 'C' UNION ALL
SELECT 'D';
;WITH Exceptions AS (
SELECT Col FROM #TmpA
EXCEPT
SELECT Col FROM #TmpB
)
INSERT INTO #TmpC
SELECT A.Col, A.Irrelevant
FROM #TmpA A
JOIN Exceptions E ON A.Col = E.Col
SELECT * FROM #TmpC;
DROP TABLE #TmpA, #TmpB, #TmpC;
``` | ```
insert into #temp3 (Type, Step, RowID ..)
select #temp1.Type, #temp1.Step, #temp1.RowID
from #temp1
left outer join #temp2
on #temp1.Type = #temp2.Type
and #temp1.Step = #temp2.Step
where #temp2.Type is null
``` | SQL Compare 2 #temp tables and insert the differences into a 3rd #temp table | [
"",
"sql",
"t-sql",
"compare",
"sql-server-2012",
""
] |
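The second scenario above — `EXCEPT` on the compared columns, then a join back to recover the full rows — also works in SQLite, which makes it easy to sanity-check (hypothetical data; temp-table syntax replaced by ordinary tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (Type TEXT, Step INT, RowID INT);
CREATE TABLE B (Type TEXT, Step INT, RowID INT);
INSERT INTO A VALUES ('T1', 1, 10), ('T1', 2, 11), ('T2', 1, 12);
INSERT INTO B VALUES ('T1', 1, 99), ('T2', 1, 98);
""")
# Keep full rows from A whose (Type, Step) pair is absent from B.
rows = conn.execute("""
SELECT a.Type, a.Step, a.RowID
FROM A a
JOIN (SELECT Type, Step FROM A
      EXCEPT
      SELECT Type, Step FROM B) e
  ON a.Type = e.Type AND a.Step = e.Step
""").fetchall()
print(rows)  # [('T1', 2, 11)]
```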
For example I have 2 tables:
users
```
+----+
| id |
+----+
| 0 |
| 1 |
| 2 |
| 3 |
+----+
```
payments
```
+---------+--------+------+
| user_id | amount | case |
+---------+--------+------+
| 0 | 10 | 1 |
| 0 | 1 | 2 |
| 2 | 5 | 1 |
| 2 | 4 | 1 |
| 2 | 5 | 2 |
| 3 | 26 | 2 |
+---------+--------+------+
```
And I'm trying to get output:
```
+---------+---------------------+---------------------+
| user_id | total_amount_case_1 | total_amount_case_2 |
+---------+---------------------+---------------------+
| 0 | 10 | 1 |
| 2 | 9 | 5 |
| 3 | 0 | 26 |
+---------+---------------------+---------------------+
```
Explaining in words: I'm trying to select each user who has at least one payment and SUM their payments.amount separately per case.
I could only get to the point where I select each user's total payments, without depending on which case it was:
```
SELECT users.id AS user_id, SUM(payments.amount) AS total_amount FROM users LEFT JOIN payments ON users.id = payments.user_id GROUP BY user_id ORDER BY user_id;
```
Which gave me this output:
```
+---------+--------------+
| user_id | total_amount |
+---------+--------------+
| 0 | 11 |
| 1 | 0 |
| 2 | 14 |
| 3 | 26 |
+---------+--------------+
``` | Try this
```
SELECT
u.id AS user_id,
SUM(CASE WHEN p.`case` = 1 THEN p.amount ELSE 0 END) AS total_amount_case_1,
SUM(CASE WHEN p.`case` = 2 THEN p.amount ELSE 0 END) AS total_amount_case_2
FROM
users u
LEFT JOIN payments p ON u.id = p.user_id
GROUP BY u.id
ORDER BY u.id;
``` | You can do this with conditional aggregation -- using the `case` statement in an aggregation function:
```
SELECT u.id AS user_id,
SUM(case when p.`case` = 1 then p.amount else 0 end) AS total_amount_case1,
SUM(case when p.`case` = 2 then p.amount else 0 end) AS total_amount_case2
FROM users u LEFT JOIN
payments p
on u.id = p.user_id
GROUP BY user_id
ORDER BY user_id;
```
I also added table aliases to make the query a bit less cumbersome. | SQL SUMed up data with clause | [
"",
"mysql",
"sql",
""
] |
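Both answers above rely on conditional aggregation; here is a runnable SQLite check using the question's exact data. Note that a plain `JOIN` (rather than `LEFT JOIN`) drops user 1, who has no payments, matching the desired output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INT);
CREATE TABLE payments (user_id INT, amount INT, "case" INT);
INSERT INTO users VALUES (0), (1), (2), (3);
INSERT INTO payments VALUES
  (0, 10, 1), (0, 1, 2), (2, 5, 1), (2, 4, 1), (2, 5, 2), (3, 26, 2);
""")
rows = conn.execute("""
SELECT u.id,
       SUM(CASE WHEN p."case" = 1 THEN p.amount ELSE 0 END),
       SUM(CASE WHEN p."case" = 2 THEN p.amount ELSE 0 END)
FROM users u
JOIN payments p ON u.id = p.user_id
GROUP BY u.id
ORDER BY u.id
""").fetchall()
print(rows)  # [(0, 10, 1), (2, 9, 5), (3, 0, 26)]
```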
I am working with visitor log data and need to summarize it by IP address. The data looks like this:
```
id | ip_address | type | message | ...
----------+----------------+----------+----------------
1 | 1.2.3.4 | purchase | ...
2 | 1.2.3.4 | visit | ...
3 | 3.3.3.3 | visit | ...
4 | 3.3.3.3 | purchase | ...
5 | 4.4.4.4 | visit | ...
6 | 4.4.4.4 | visit | ...
```
And should summarize with:
```
type="purchase" DESC, type="visit" DESC, id DESC
```
The yield:
```
chosenid | ip_address | type | message | ...
----------+----------------+----------+----------------
1 | 1.2.3.4 | purchase | ...
4 | 3.3.3.3 | purchase | ...
6 | 4.4.4.4 | visit | ...
```
**Is there an elegant way to get this data?**
---
An ugly approach follows:
```
set @row_num = 0;
CREATE TEMPORARY TABLE IF NOT EXISTS tt AS
SELECT *,@row_num:=@row_num+1 as row_index FROM log ORDER BY type="purchase" DESC, type="visit" DESC, id DESC;
```
Then get the minimum row\_index and id for each ip\_address (<https://stackoverflow.com/questions/121387/fetch-the-row-which-has-the-max-value-for-a-column>)
Then join those id's back to the original table | I think this should be what you need:
```
SELECT yourtable.*
FROM
yourtable INNER JOIN (
SELECT ip_address,
MAX(CASE WHEN type='purchase' THEN id END) max_purchase,
MAX(CASE WHEN type='visit' THEN id END) max_visit
FROM yourtable
GROUP BY ip_address) m
ON yourtable.id = COALESCE(max_purchase, max_visit)
```
Please see fiddle [here](http://sqlfiddle.com/#!2/2a637/1).
My subquery will return the maximum purchase id (or null if there's no purchase) and the maximum visit id. Then I'm joining the table with COALESCE, if max\_purchase is not null the join will be on max\_purchase, otherwise it will be on max\_visit. | The following query gets the most recent `id` based on your rules by using a correlated subquery:
```
select t.ip_address,
(select t2.id
from table t2
where t2.ip_address = t.ip_address
order by type = 'purchase' desc, id desc
limit 1
) as mostrecent
from (select distinct t.ip_address
from table t
) t;
```
The idea is to sort the data first by purchases (with id descending too) and then by visits and choose the first one in the list. If you have a table of ipaddresses, then you don't need the `distinct` subquery. Just use that table instead.
To get the final results, we can `join` to this or use `in` or `exists`. This uses `in`.
```
select t.*
from table t join
(select id, (select t2.id
from table t2
where t2.ip_address = t.ip_address
order by type = 'purchase' desc, id desc
limit 1
) as mostrecent
from (select distinct t.ip_address
from table t
) t
) ids
on t.id = ids.mostrecent;
```
This query will work best if there is an index on `table(ip_address, type, id)`. | SQL group by with multiple conditions | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
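The accepted `MAX(CASE …)` + `COALESCE` approach above can be sanity-checked in SQLite with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log (id INT, ip_address TEXT, type TEXT);
INSERT INTO log VALUES
  (1, '1.2.3.4', 'purchase'), (2, '1.2.3.4', 'visit'),
  (3, '3.3.3.3', 'visit'),    (4, '3.3.3.3', 'purchase'),
  (5, '4.4.4.4', 'visit'),    (6, '4.4.4.4', 'visit');
""")
# max_purchase is NULL for IPs with no purchase, so COALESCE
# falls back to the latest visit for those IPs.
rows = conn.execute("""
SELECT l.id, l.ip_address, l.type
FROM log l
JOIN (SELECT ip_address,
             MAX(CASE WHEN type = 'purchase' THEN id END) AS max_purchase,
             MAX(CASE WHEN type = 'visit' THEN id END) AS max_visit
      FROM log
      GROUP BY ip_address) m
  ON l.id = COALESCE(m.max_purchase, m.max_visit)
ORDER BY l.id
""").fetchall()
print(rows)
# [(1, '1.2.3.4', 'purchase'), (4, '3.3.3.3', 'purchase'), (6, '4.4.4.4', 'visit')]
```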