Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I'm trying to write a query of the `Top 10` occurrences of a column in my database. But I don't want to see it in a group by mode.
I have a database with 40000 rows and with 20 columns, and I want to write a query that returns x rows with the top 10 occurrences in one specific column.
When I use this:
```
Select top 10 column_name
from table_name
```
what I get is the top 10 rows in the database.
When I use this:
```
select top 10 column_name
from table_name
group by column_name
order by column_name DESC
```
I get my top 10 occurrences, but grouped by my column, and what I want is to see that same top 10 but with all the rows displayed.
I don't know if this is a dumb question, but I'm losing all my cool with this one!
So thank you for your time in advance. | You can use something like this:
```
;WITH Top10Distinct AS
(
SELECT DISTINCT TOP 10 YourColumn
FROM dbo.YourTable
ORDER BY YourColumn DESC
)
SELECT *
FROM dbo.YourTable tbl
INNER JOIN Top10Distinct cte ON tbl.YourColumn = cte.YourColumn
```
The CTE first fetches the `TOP 10 DISTINCT` values for `YourColumn` and then you join those top 10 values against the actual "base" table `dbo.YourTable`, thus retrieving **all rows** in full from the base table that have one of the top 10 distinct column values. | ```
you could use row_number()/rank()/dense_rank() depending on the requirement
```
e.g.:
```
with cte as
(
select name, ROW_NUMBER() over (order by id) as rn
from your_table
)
select * from cte where rn <= 10
```
this outputs the first 10 rows | SQL Server : TOP 10 query by occurrence without using group by | [
"",
"sql",
"sql-server",
""
] |
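The accepted pattern above (fetch the top-N distinct column values, then join back to the base table for every matching row) can be sanity-checked with SQLite through Python's `sqlite3` module as a stand-in for SQL Server; the table and column names below are made up for the demo.

```python
import sqlite3

# Top-N distinct values, joined back to the base table so every row
# carrying one of those values is returned in full.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo (id INTEGER, val INTEGER)")
con.executemany("INSERT INTO demo VALUES (?, ?)",
                [(1, 9), (2, 9), (3, 7), (4, 5), (5, 5), (6, 5)])

rows = con.execute("""
    WITH TopDistinct AS (
        SELECT DISTINCT val FROM demo ORDER BY val DESC LIMIT 2
    )
    SELECT d.id, d.val
    FROM demo d
    JOIN TopDistinct t ON d.val = t.val
    ORDER BY d.id
""").fetchall()
print(rows)  # all rows whose val is among the top 2 distinct values
```

SQLite uses `LIMIT 2` where SQL Server would use `TOP 2`; the join-back step is identical.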
I want to use Group By for the rows of one column only, let me explain how.
**My Query:-**
```
SELECT
m.name AS brand,
opv.name AS model,
opv.product_condition AS condition,
(AVG(opv.final_price + opv.overhead_cost)) AS cost,
opv.product_color AS color,
COUNT(m.name) as quantity
FROM `order` o
JOIN order_veri AS ov
ON o.order_id = ov.order_fk
JOIN order_prod_veri AS opv
ON ov.order_fk = opv.order_id
JOIN product AS p
ON opv.product_id = p.product_id
JOIN manufacturer AS m
ON p.manufacturer_id = m.manufacturer_id
GROUP BY model,
product_condition
```
This is the data I get:-
```
Brand | Model | Cost | Condition | Color | Quantity
-------------------------------------------------------
Apple | iPhone 5 | $50.95 | Used | Black | 2
Blackberry | Blackberry 9900 | $22.98 | Used | Black | 2
Samsung | Galaxy S | $16.92 | Used | White | 2
HTC | Rhyme | $60.42 | New | Red | 2
Google | Google Nexus | $72.24 | New | Blue | 2
Motorola | Razr | $9.68 | Used | Silver | 2
Apple | iPad Air | $2.74 | New | Silver | 1
```
**This is how I want to get the data (or how I want it to display):-**
```
Brand | Model | Cost | Condition | Color | Quantity
-------------------------------------------------------
Apple | iPhone 5 | $50.95 | Used | Black | 2
Apple | iPhone 5 | $50.95 | Used | White |
Blackberry | Blackberry 9900 | $22.98 | Used | Black | 3
Blackberry | Blackberry 9900 | $22.98 | Used | White |
Blackberry | Blackberry 9900 | $22.98 | Used | Black |
HTC | Rhyme | $60.42 | New | Red | 2
HTC | Rhyme | $60.42 | New | Black |
Google | Google Nexus | $72.24 | New | Blue | 2
Google | Google Nexus | $72.24 | New | Black |
Motorola | Razr | $9.68 | Used | Silver | 2
Motorola | Razr | $9.68 | Used | White | 2
Apple | iPad Air | $2.74 | New | Silver | 1
```
Note how the price remains the same, but there are now additional rows, depending on the quantity, which display the attributes of the same **Model**.
**What I want to achieve here:-**
Now in the color column, I am not getting the different color variations for each model due to grouping. For example, the quantity of **iPhone 5** in the first row is 5, but it's showing the color for just the first of those 5.
Similarly, **Model** in the second row is **Blackberry 9900**, and the quantity is 2 but it's showing the color for 1 of them only due to grouping when they both are in two different colors (white and black).
So what I would like to ask is: is it possible to group just the cost using GROUP BY?
Please let me know if you would like to clarify something or provide a better explanation of what I want to achieve here. | It is not clear from your question what you are trying to do. It is clear that you are using a confusing nonstandard MySQL extension to `GROUP BY`. You should read this part of the MySQL manual.
<http://dev.mysql.com/doc/refman/5.5/en/group-by-extensions.html>
The thing to keep in mind about SQL is that the hard part of any problem is precisely specifying the result set you want.
You've said, in your comments, that you want to show a detail listing of your inventory, that is, one row for each distinct item in inventory. But in that detail listing you want to show a cost that's averaged over all items in inventory of that make, model, and condition. This is tricky in SQL, because you're mixing apples (details) and orchards (averages).
To do that you need a subquery and a join. The subquery works out your average costs. It is this:
```
SELECT m.name AS brand,
opv.name AS model,
opv.product_condition AS product_condition,
AVG(opv.final_price + opv.overhead_cost) AS cost
FROM ... the tables in your original ...
JOIN ... query in your question.
GROUP BY brand, model, product_condition
```
Then you need to treat this subquery as a virtual table, and join it to your main query like so.
```
SELECT m.name AS brand,
opv.name AS model,
opv.product_condition AS product_condition,
avg.cost,
opv.product_color AS color
FROM ... the tables in your original ...
JOIN ... query in your question.
JOIN ... in your question
JOIN (
SELECT m.name AS brand,
opv.name AS model,
opv.product_condition AS product_condition,
(AVG(opv.final_price + opv.overhead_cost)) AS cost
FROM ... the tables in your original
JOIN ... query in your question.
GROUP BY brand, model, product_condition
) AS avg ON (    m.name = avg.brand
AND opv.name = avg.model
AND opv.product_condition = avg.product_condition)
```
Now, this looks like a horrible nasty query. In a sense it is, because your specification sometimes calls for detail records and sometimes calls for averages. But you should not worry about performance; it will be tolerably decent. Notice that there's a `GROUP BY` clause in the subquery, because it is computing averages. But there is no such clause in the main query, because it is presenting detail records (individual items).
If you were just looking for aggregate results, you would have a couple of choices. You might want to show the average cost separately for items of different colors and conditions. In that case you want
```
SELECT m.name AS brand,
opv.name AS model,
opv.product_condition AS product_condition,
AVG(opv.final_price + opv.overhead_cost) AS cost,
       opv.product_color AS color
FROM ...
JOIN ...
GROUP BY brand, model, product_condition, color
```
But maybe you want to show average prices for items, regardless of color. In that case you want
```
SELECT m.name AS brand,
opv.name AS model,
opv.product_condition AS product_condition,
AVG(opv.final_price + opv.overhead_cost) AS cost,
       GROUP_CONCAT(opv.product_color) AS color
FROM ...
JOIN ...
GROUP BY brand, model, product_condition
```
Do you see how each item in the `SELECT ... GROUP BY` is mentioned both in `SELECT` and `GROUP BY`, or is subjected to a summary function (`SUM`, `GROUP_CONCAT`) in `SELECT`? That's the standard way `GROUP BY` works. | If you want a row for each color (as well as your other group conditions) but an avg that ignores color, have to do it in 2 steps: get the average, then list the average for each color. You can do it with 2 queries, or by making the AVG a subquery.
I don't recommend the subquery, as it is very inefficient. For any significantly sized dataset or high-volume application it will be slow. But if you prefer, here it is.
```
SELECT
m.name AS brand,
opv.name AS model,
opv.product_condition AS condition,
(
SELECT AVG(opv.final_price + opv.overhead_cost)
FROM order_prod_veri as opv1
JOIN product AS p1 ON opv1.product_id = p1.product_id
JOIN manufacturer AS m1 ON p1.manufacturer_id = m1.manufacturer_id
        WHERE opv1.name = opv.name
          AND opv1.product_condition = opv.product_condition
          AND m1.name = m.name
    ) as cost,
opv.product_color AS color,
COUNT(m.name) as quantity
FROM `order` o
JOIN order_veri AS ov ON o.order_id = ov.order_fk
JOIN order_prod_veri AS opv ON ov.order_fk = opv.order_id
JOIN product AS p ON opv.product_id = p.product_id
JOIN manufacturer AS m ON p.manufacturer_id = m.manufacturer_id
GROUP BY brand, model, product_condition, color
``` | Group By for rows of one column only | [
"",
"mysql",
"sql",
""
] |
I have the following simplified tables:
**tblOrders**
```
orderID date
---------------------
1 2013-10-04
2 2013-10-05
3 2013-10-06
```
**tblOrderLines**
```
lineID orderID ProductCategory
--------------------------------------
1 1 10
2 1 3
3 1 10
4 2 3
5 3 3
6 3 10
7 3 10
```
I want to select records from tblOrders ONLY if any order line has ProductCategory = 10. So, if none of the lines of a particular order has ProductCategory = 10, then do not return that order.
How would I do that? | You can use exists for this
```
Select o.*
From tblOrders o
Where exists (
Select 1
From tblOrderLines ol
Where ol.ProductCategory = 10
And ol.OrderId = o.OrderId
)
``` | This should do:
```
SELECT *
FROM tblOrders O
WHERE EXISTS(SELECT 1 FROM tblOrderLines
WHERE ProductCategory = 10
AND OrderID = O.OrderID)
``` | Get orders where order lines meet certain requirements | [
"",
"sql",
"sql-server",
""
] |
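Both answers above use the same `EXISTS` pattern; it can be verified against the question's sample data with SQLite via Python.

```python
import sqlite3

# Orders are returned only if at least one of their lines has
# ProductCategory = 10, via a correlated EXISTS subquery.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblOrders (orderID INTEGER)")
con.execute("CREATE TABLE tblOrderLines "
            "(lineID INTEGER, orderID INTEGER, ProductCategory INTEGER)")
con.executemany("INSERT INTO tblOrders VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO tblOrderLines VALUES (?, ?, ?)", [
    (1, 1, 10), (2, 1, 3), (3, 1, 10),
    (4, 2, 3),
    (5, 3, 3), (6, 3, 10), (7, 3, 10),
])

rows = con.execute("""
    SELECT o.orderID
    FROM tblOrders o
    WHERE EXISTS (SELECT 1 FROM tblOrderLines ol
                  WHERE ol.ProductCategory = 10
                    AND ol.orderID = o.orderID)
    ORDER BY o.orderID
""").fetchall()
print(rows)  # order 2 has no category-10 line, so it is excluded
```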
I have two tables A and B
```
date Fruit_id amount
date_1 1 5
date_1 2 1
date_2 1 2
....
date Fruit_id amount
date_1 1 2
date_1 3 2
date_2 5 1
....
```
and an id_table C:
```
fruit_id  fruit
1         apple
....
```
And I try to get a table that shows the amount of both tables next to each other for each fruit for a certain day. I tried
```
SELECT a.date, f.fruit, a.amount as amount_A, b.amount as amount_B
from table_A a
JOIN table_C f USING(fruit_id)
LEFT JOIN table_B b ON a.date = b.date AND a.fruit_id = b.fruit_id
WHERE a.date ='myDate'
```
This now creates several rows per fruit instead of 1, and the values seem to be fairly random combinations of the amounts.
How can I get a neat table
```
date fruit A B
myDate apple 1 5
myDate cherry 2 2
....
``` | Tested this with your data and it seems to work. You want to use a full outer join in order to include all fruits and dates from A and B :)
```
SELECT C.Name
, COALESCE(A.Date, B.Date)
, COALESCE(A.Amount, 0)
, COALESCE(B.Amount, 0)
FROM C
JOIN (A FULL OUTER JOIN B ON B.FruitID = A.FruitID AND B.Date = A.Date)
ON COALESCE(A.FruitID, B.FruitID) = C.FruitID
ORDER BY 1, 2
``` | Try
```
select A.date, C.fruitname, A.amount, B.amount
from A,B,C
where
A.date = B.date
and A.fruitid = B.fruitid
and A.fruitid = C.fruitid
order by A.fruitid
``` | join on several conditions | [
"",
"sql",
"oracle",
"join",
""
] |
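The accepted answer's FULL OUTER JOIN idea can be checked with SQLite via Python. Older SQLite builds lack `FULL OUTER JOIN`, so this sketch emulates it: collect every fruit_id seen in A or B for the day, then LEFT JOIN both amount tables and `COALESCE` missing amounts to 0. Table names and sample data are stand-ins based on the question.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_A (day TEXT, fruit_id INTEGER, amount INTEGER);
    CREATE TABLE table_B (day TEXT, fruit_id INTEGER, amount INTEGER);
    CREATE TABLE table_C (fruit_id INTEGER, fruit TEXT);
    INSERT INTO table_A VALUES ('d1', 1, 5), ('d1', 2, 1);
    INSERT INTO table_B VALUES ('d1', 1, 2), ('d1', 3, 2);
    INSERT INTO table_C VALUES (1, 'apple'), (2, 'pear'), (3, 'cherry');
""")

rows = con.execute("""
    WITH ids AS (
        SELECT fruit_id FROM table_A WHERE day = 'd1'
        UNION
        SELECT fruit_id FROM table_B WHERE day = 'd1'
    )
    SELECT c.fruit, COALESCE(a.amount, 0), COALESCE(b.amount, 0)
    FROM ids
    JOIN table_C c ON c.fruit_id = ids.fruit_id
    LEFT JOIN table_A a ON a.fruit_id = ids.fruit_id AND a.day = 'd1'
    LEFT JOIN table_B b ON b.fruit_id = ids.fruit_id AND b.day = 'd1'
    ORDER BY c.fruit
""").fetchall()
print(rows)  # one row per fruit, amounts from A and B side by side
```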
I'm using Oracle 11g. I need help figuring out how to match a string of 0s of any length; it should contain only 0s.
```
i.e.
0 - Valid
00 - Valid
0000 - Valid
0010000 - Invalid
000000000 - Valid
```
Been trying to find some help on this, but to no avail. | It looks like you can use [regular expressions](http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_regexp.htm#i1007663) in your query, like so:
```
SELECT ID FROM table WHERE REGEXP_LIKE(zeroes, '^0+$')
``` | Assuming that the column only has numbers, then you might want to try and `CAST` it to an `INT`:
```
SELECT *
FROM YourTable
WHERE CAST(YourColumn AS INT) = 0
``` | How to match one and/or more than one 0s in SQL? | [
"",
"sql",
"oracle11g",
""
] |
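The `'^0+$'` pattern from the accepted answer can be tested with SQLite via Python. SQLite has no built-in `REGEXP` implementation, so one is registered here; in Oracle, `REGEXP_LIKE` works out of the box.

```python
import re
import sqlite3

# Register a REGEXP function: SQLite rewrites `x REGEXP y` as regexp(y, x),
# i.e. the callback receives (pattern, value).
con = sqlite3.connect(":memory:")
con.create_function(
    "regexp", 2,
    lambda pattern, value: re.search(pattern, value) is not None)
con.execute("CREATE TABLE t (zeroes TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("0",), ("00",), ("0000",), ("0010000",), ("000000000",)])

valid = [r[0] for r in
         con.execute("SELECT zeroes FROM t WHERE zeroes REGEXP '^0+$'")]
print(valid)  # every all-zero string; '0010000' is rejected
```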
Just need some advice here, because I have a table that has 45,000 rows. What I did is export the table as multiple INSERT commands, but the estimated time is too long. Which is faster to load into SQL: rows exported as CSV or rows exported as multiple INSERT commands? | `LOAD DATA INFILE` is the fastest.
Hello, so you are dumping the table with your own program. If loading speed is important, please consider the following:
* use multi-row `INSERT INTO .. VALUES (...), (...)` statements
* Disable indexes before loading and enable them after loading; this is faster.
* `LOAD DATA INFILE` is much faster than multiple INSERTs, but has trade-offs: maintenance and handling escaping.
* BTW, I think `mysqldump` is better than the others.
How long does it take to load 45,000 rows? | `INSERT` is faster, because you skip parsing the CSV.
If your table uses `MyISAM`, you can copy the files `*.frm`, `*.myi`, `*.myd` and this migration will be faster.
"",
"mysql",
"sql",
""
] |
I have a column `ID` which is auto increment and another column `Request_Number`
In `Request_Number` I want to insert something like `"ISD0000" + ID` value...
e.g. for the first record, `ID` should be 1 and Request_Number should be "`ISD000001`".
How to achieve this? | You can use a computed column:
```
create table T
(
ID int identity primary key check (ID < 1000000),
Request_Number as 'ISD'+right('000000'+cast(ID as varchar(10)), 6)
)
```
Perhaps you also need a check constraint so you don't overflow. | If you have your table definition already in place you can alter the column and add Computed column marked as persisted as:
```
ALTER TABLE tablename drop column Request_Number;
ALTER TABLE tablename add Request_Number as 'ISD00000' + CAST(id AS VARCHAR(10)) PERSISTED ;
```
If computed column is not marked as persisted, it is not created when the column is created, in fact it is still computed at run time. Once you mark column as persisted, it is computed right away and stored in the data table.
[Edit]:
```
ALTER TABLE tablename drop column Request_Number;
ALTER TABLE tablename add Request_Number as 'ISD'
+right('000000'+cast(ID as varchar(10)), 6) PERSISTED ;
``` | Getting identity value while inserting | [
"",
"sql",
"sql-server",
""
] |
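The accepted answer's computed-column expression, `'ISD' + right('000000' + cast(ID as varchar(10)), 6)`, translates directly to SQLite string functions and can be checked via Python; `substr(x, -6)` plays the role of `right(x, 6)`.

```python
import sqlite3

# Zero-pad the id to 6 digits and prefix with 'ISD', mirroring the
# T-SQL computed-column expression from the answer.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (12,), (345,)])

rows = con.execute("""
    SELECT id, 'ISD' || substr('000000' || id, -6) AS request_number
    FROM t ORDER BY id
""").fetchall()
print(rows)  # id 1 -> ISD000001, id 12 -> ISD000012, id 345 -> ISD000345
```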
I have a Microsoft Access table that has only one column that contains URLs. I need to check and see if a URL DOES NOT EXIST in that table. What is the best way to do this? | A simple select with `COUNT` should do the trick:
```
SELECT COUNT(*)
FROM myTableWithUrl
WHERE URL='http://www.stackoverflow.com'
```
This query will always return a single row with one integer column. Zero indicates that the said URL is not present; any other number indicates that the URL is there.
The query may go faster if you define an index on the URL column. | I would go with
```
select count(*) records
from yourtable
where url = something
``` | Determine if Value Is Not In Database Column | [
"",
"sql",
"vb.net",
"ms-access",
""
] |
I have a database containing tables using foreign keys to indicate relationships among them. In one table I have 2 columns:
one is id (a foreign key) and the other contains friendsids (a foreign key from the friends table).
Either I can put the friends on separate rows:
```
| id | friendsids |
| 1 | 3 |
| 1 | 4 |
| 1 | 5 |
```
Or pack them into a single string field:
```
| id | friendsids |
| 1 | 3,4,5 |
```
In the second way, I will later split friendsids using PHP `explode()`. Also, do remember I have lots of records. Which way is more efficient and why?
Thanks in advance. | Forget about splitting in PHP. What you are doing is a so-called mapping table, and it really should be one row per relationship. That allows you to
* easily INSERT IGNORE to add a mapping w/o checking if it already exists,
* easily DELETE a mapping w/o checking if it already exists,
* easily COUNT(*) the number of friends,
* easily JOIN data for complex queries,
* search your data really fast with a unique index spanning both columns and a non-unique index on the latter,
* store numbers as numbers instead of strings, saving lots of RAM and disk I/O,
and propably many more. | The rules of database normalization dictates that you only have `1` value in `1` field. This is so you can access any single value with the correct primary key, rather than have to do some ugly code to split up the string returned.
Whenever you're designing a database, you should always keep the normalization rules in mind.
[Here](http://en.wikipedia.org/wiki/Database_normalization) is a link to the wikipedia article.
And in case you forget:
> The Key, the whole key and nothing but the key - so help me Codd. | How to model one-to-many relationship in database | [
"",
"sql",
"database-design",
""
] |
My question is: How do I show the name of each artist which has recorded at least one title at a studio where the band: "The Bullets" have recorded?
I have formulated the following query:
```
select ar.artistname, ti.titleid from artists ar
where exists
(
select 0
from titles ti where ti.artistid = ar.artistid
and exists
(
select 0
from studios s
where s.studioid = ti.studioid
and ar.artistname = 'The Bullets'
)
);
```
However, I need to include HAVING COUNT(ti.titleid) > 0 to satisfy this part, "each artist which has recorded at least one title" in the question.
I am also unsure how to match the artist name "The Bullets", who have recorded at least one title at a studio.
The Artists table resembles the following:
```
Artists
-------
ArtistID, ArtistName, City
```
The Tracks table resembles the following:
```
Tracks
------
TitleID, ArtistID, StudioID
```
The Studios table resembles the following:
```
Studios
-------
StudioID, StudioName, Address
```
I also **must specify that I cannot use joins**, due to a performance preference. | Maybe like this?
```
select ArtistName from Artists where ArtistID in (
select ArtistID from Tracks where StudioID in (
select StudioID from Tracks where ArtistID in (
select ArtistId from Artists where ArtistName='The Bullets'
)
)
)
```
I don't see why you think `having` is needed. | The studio(s) where The Bullets recorded
```
SELECT S.StudioID
FROM Studios S
JOIN Tracks T ON T.StudioID = S.StudioID
JOIN Artists A ON T.ArtistID = A.ArtistID AND A.ArtistName = 'The Bullets'
```
Every Artist who recorded there
```
SELECT A1.ArtistName, A1.City
FROM Artists A1
JOIN Tracks T1 ON T1.ArtistID = A1.ArtistID
WHERE T1.StudioID IN
(
    SELECT S.StudioID
    FROM Studios S
    JOIN Tracks T ON T.StudioID = S.StudioID
    JOIN Artists A ON T.ArtistID = A.ArtistID AND A.ArtistName = 'The Bullets'
)
``` | SQL Subquery using HAVING COUNT | [
"",
"mysql",
"sql",
""
] |
I have a dataset I need to build for number of visits per month for particular user. I have a SQL table which contains these fields:
* **User** nvarchar(30)
* **DateVisit** datetime
What I want to achieve now is to get all the visits grouped by month for each user, something like at the picture:

I started the query; I am able to get the months and the total sum of visits for each month (not split by user) with this query:
```
select [1] AS January,
[2] AS February,
[3] AS March,
[4] AS April,
[5] AS May,
[6] AS June,
[7] AS July,
[8] AS August,
[9] AS September,
[10] AS October,
[11] AS November,
[12] AS December
from
(
SELECT MONTH(DateVisit) AS month, [User] FROM UserVisit
) AS t
PIVOT (
COUNT([User])
FOR month IN([1], [2], [3], [4], [5],[6],[7],[8],[9],[10],[11],[12])
) p
```
With the query above I am getting this result:

Now I want to know how I can add one more column for user and split the values by user. | Okay, both solutions look good. The answer by Ali works but I would use a SUM() function instead, I hate NULLS. Let's try both and see the query plans versus execution times.
I always create a test table with data so that I do not give the user, Aziale, bad answers.
The code below is not the prettiest but it does set up a test case. I made a database in tempdb called user\_visits. For each month, I used a for loop to add the users and give them the create start date for the month.
Now that we have data, we can play.
```
-- Drop the table
drop table tempdb.dbo.user_visits
go
-- Create the table
create table tempdb.dbo.user_visits
(
uv_id int identity(1, 1),
uv_visit_date smalldatetime,
uv_user_name varchar(30)
);
go
-- January data
declare @cnt int = 1;
while @cnt <= 103
begin
if (@cnt <= 21)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130101', 'Patrick');
if (@cnt <= 44)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130101', 'Barbara');
if (@cnt <= 65)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130101', 'Danielle');
if (@cnt <= 103)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130101', 'John');
set @cnt = @cnt + 1
end
go
-- February data
declare @cnt int = 1;
while @cnt <= 99
begin
if (@cnt <= 29)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130201', 'Patrick');
if (@cnt <= 42)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130201', 'Barbara');
if (@cnt <= 55)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130201', 'Danielle');
if (@cnt <= 99)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130201', 'John');
set @cnt = @cnt + 1
end
go
-- March data
declare @cnt int = 1;
while @cnt <= 98
begin
if (@cnt <= 25)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130301', 'Patrick');
if (@cnt <= 46)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130301', 'Barbara');
if (@cnt <= 75)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130301', 'Danielle');
if (@cnt <= 98)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130301', 'John');
set @cnt = @cnt + 1
end
go
-- April data
declare @cnt int = 1;
while @cnt <= 91
begin
if (@cnt <= 32)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130401', 'Patrick');
if (@cnt <= 48)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130401', 'Barbara');
if (@cnt <= 60)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130401', 'Danielle');
if (@cnt <= 91)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130401', 'John');
set @cnt = @cnt + 1
end
go
-- May data
declare @cnt int = 1;
while @cnt <= 120
begin
if (@cnt <= 40)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130501', 'Patrick');
if (@cnt <= 41)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130501', 'Barbara');
if (@cnt <= 70)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130501', 'Danielle');
if (@cnt <= 120)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130501', 'John');
set @cnt = @cnt + 1
end
go
-- June data
declare @cnt int = 1;
while @cnt <= 103
begin
if (@cnt <= 17)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130601', 'Patrick');
if (@cnt <= 45)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130601', 'Barbara');
if (@cnt <= 62)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130601', 'Danielle');
if (@cnt <= 103)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130601', 'John');
set @cnt = @cnt + 1
end
go
-- July data
declare @cnt int = 1;
while @cnt <= 99
begin
if (@cnt <= 20)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130701', 'Patrick');
if (@cnt <= 43)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130701', 'Barbara');
if (@cnt <= 66)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130701', 'Danielle');
if (@cnt <= 99)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130701', 'John');
set @cnt = @cnt + 1
end
go
-- August data
declare @cnt int = 1;
while @cnt <= 98
begin
if (@cnt <= 26)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130801', 'Patrick');
if (@cnt <= 47)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130801', 'Barbara');
if (@cnt <= 71)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130801', 'Danielle');
if (@cnt <= 98)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130801', 'John');
set @cnt = @cnt + 1
end
go
-- September data
declare @cnt int = 1;
while @cnt <= 91
begin
if (@cnt <= 25)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130901', 'Patrick');
if (@cnt <= 49)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130901', 'Barbara');
if (@cnt <= 59)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130901', 'Danielle');
if (@cnt <= 91)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20130901', 'John');
set @cnt = @cnt + 1
end
go
-- October data
declare @cnt int = 1;
while @cnt <= 120
begin
if (@cnt <= 25)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131001', 'Patrick');
if (@cnt <= 40)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131001', 'Barbara');
if (@cnt <= 73)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131001', 'Danielle');
if (@cnt <= 120)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131001', 'John');
set @cnt = @cnt + 1
end
go
-- November data
declare @cnt int = 1;
while @cnt <= 101
begin
if (@cnt <= 32)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131101', 'Patrick');
if (@cnt <= 50)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131101', 'Barbara');
if (@cnt <= 65)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131101', 'Danielle');
if (@cnt <= 101)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131101', 'John');
set @cnt = @cnt + 1
end
go
-- December data
declare @cnt int = 1;
while @cnt <= 90
begin
if (@cnt <= 40)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131201', 'Patrick');
if (@cnt <= 52)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131201', 'Barbara');
if (@cnt <= 61)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131201', 'Danielle');
if (@cnt <= 90)
insert into tempdb.dbo.user_visits
(uv_visit_date, uv_user_name)
values ('20131201', 'John');
set @cnt = @cnt + 1
end
go
```
Please do not use reserved words as column names in code; e.g., month is a reserved word.
The code below gives you the correct answer.
```
-- Grab the data (1)
select
my_user,
[1] AS January,
    [2] AS February,
[3] AS March,
[4] AS April,
[5] AS May,
[6] AS June,
[7] AS July,
[8] AS August,
[9] AS September,
[10] AS October,
[11] AS November,
[12] AS December
from
(
SELECT MONTH(uv_visit_date) AS my_month, uv_user_name as my_user FROM tempdb.dbo.user_visits
) AS t
PIVOT (
COUNT(my_month)
FOR my_month IN([1], [2], [3], [4], [5],[6],[7],[8],[9],[10],[11],[12])
) as p
```

```
-- Grab the data (2)
SELECT uv_user_name
, SUM(CASE WHEN MONTH(uv_visit_date) = 1 THEN 1 ELSE 0 END) January
     , SUM(CASE WHEN MONTH(uv_visit_date) = 2 THEN 1 ELSE 0 END) February
, SUM(CASE WHEN MONTH(uv_visit_date) = 3 THEN 1 ELSE 0 END) March
, SUM(CASE WHEN MONTH(uv_visit_date) = 4 THEN 1 ELSE 0 END) April
, SUM(CASE WHEN MONTH(uv_visit_date) = 5 THEN 1 ELSE 0 END) May
, SUM(CASE WHEN MONTH(uv_visit_date) = 6 THEN 1 ELSE 0 END) June
, SUM(CASE WHEN MONTH(uv_visit_date) = 7 THEN 1 ELSE 0 END) July
, SUM(CASE WHEN MONTH(uv_visit_date) = 8 THEN 1 ELSE 0 END) August
, SUM(CASE WHEN MONTH(uv_visit_date) = 9 THEN 1 ELSE 0 END) September
, SUM(CASE WHEN MONTH(uv_visit_date) = 10 THEN 1 ELSE 0 END) October
, SUM(CASE WHEN MONTH(uv_visit_date) = 11 THEN 1 ELSE 0 END) November
, SUM(CASE WHEN MONTH(uv_visit_date) = 12 THEN 1 ELSE 0 END) December
FROM tempdb.dbo.user_visits
GROUP BY uv_user_name
```
When doing this type of analysis, always clear the cache/buffers and get the I/O.
```
-- Show time & i/o
SET STATISTICS TIME ON
SET STATISTICS IO ON
GO
-- Remove clean buffers & clear plan cache
CHECKPOINT
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO
-- Solution 1
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 42 ms.
(4 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'user_visits'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 5 ms.
```

```
-- Solution 2
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
(4 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'user_visits'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 5 ms.
```

Both solutions have the same number of reads, work table, etc. However, the SUM() solution has one less operator.
I am going to give both people who answered a thumbs up +1!! | You were nearly there: Just add the user to the select list:
```
select [Usr],
[1] AS January,
[2] AS February,
[3] AS March,
[4] AS April,
[5] AS May,
[6] AS June,
[7] AS July,
[8] AS August,
[9] AS September,
[10] AS October,
[11] AS November,
[12] AS December
from
(
SELECT MONTH(DateVisit) AS month, [User], [User] as [Usr] FROM UserVisit
) AS t
PIVOT (
COUNT([User])
FOR month IN([1], [2], [3], [4], [5],[6],[7],[8],[9],[10],[11],[12])
) p
``` | SQL query for counting records per month | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
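The `SUM(CASE ...)` conditional-aggregation alternative from the accepted answer (solution 2) can be sketched with SQLite via Python: one row per user, one column per month. Only two months and invented users are shown to keep the demo short.

```python
import sqlite3

# Conditional aggregation: SUM over a CASE that emits 1 for the target
# month and 0 otherwise, grouped by user.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (uname TEXT, m INTEGER)")
con.executemany("INSERT INTO visits VALUES (?, ?)",
                [("Ann", 1), ("Ann", 1), ("Ann", 2), ("Bob", 2)])

rows = con.execute("""
    SELECT uname,
           SUM(CASE WHEN m = 1 THEN 1 ELSE 0 END) AS January,
           SUM(CASE WHEN m = 2 THEN 1 ELSE 0 END) AS February
    FROM visits
    GROUP BY uname
    ORDER BY uname
""").fetchall()
print(rows)  # per-user visit counts split by month
```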
We have this table for products on a store with values like this:
```
Id Name PartNumber Param1 Param2 Param3 Stock Active
-- --------- ---------- ------ ------ ------ ----- ------
1 BoxA1 10000 20 A B 4 1
2 BoxA1 10000.a 20 A B 309 1
3 CabinetZ2 30000 40 B C 0 0
4 CabinetZ2 30000.b 40 B C 1098 1
5 BoxA1 10000.c 20 A B 15 1
```
As you can see there are Products with identical name and params but different Id and part number.
Products with Id's 1, 2 and 5 have identical name and param values.
We need to disable identical-param products based on stock so that, among those with identical params, only the product with the most stock remains active.
The result should be like this:
```
Id Name PartNumber Param1 Param2 Param3 Stock Active
-- --------- ---------- ------ ------ ------ ----- ------
1 BoxA1 10000 20 A B 4 0 <- Not active
2 BoxA1 10000.a 20 A B 309 1 <- Active
3 CabinetZ2 30000 40 B C 0 0
4 CabinetZ2 30000.b 40 B C 1098 1
5 BoxA1 10000.c 20 A B 15 0 <- Not active
```
This process is required because we are receiving stock quantities from an external source (webservice) several times per day and after each stock update we need to evaluate which should remain active.
What we do at the moment works OK but does not perform well: a stored procedure that does the following:
```
DECLARE product_list CURSOR READ_ONLY FORWARD_ONLY LOCAL FOR
SELECT Id, Name, PartNumber, Param1, Param2, Param3, Stock
FROM Products
ORDER BY Name, Param1, Param2, Param3, Stock DESC
OPEN product_list
FETCH NEXT FROM product_list INTO @OldId, @OldName, @OldPartNumber, @OldParam1, @OldParam2, @OldParam3, @OldStock
WHILE @@FETCH_STATUS <> -1
BEGIN
(Compare all rows and perform updates to disable the ones with less stock)
FETCH NEXT FROM product_list INTO @OldId, @OldName, @OldPartNumber, @OldParam1, @OldParam2, @OldParam3, @OldStock
END
CLOSE product_list
```
We found this type of query using OVER (PARTITION BY), and we are very close to our objective of making this more efficient:
```
SELECT Id, Name, PartNumber, Param1, Param2, Param3, Stock, Active,
ROW_NUMBER() OVER (PARTITION BY Name, Param1, Param2, Param3 ORDER BY stock DESC) AS Items
FROM Products
```
With the following result:
```
Id Name PartNumber Param1 Param2 Param3 Stock Items
-- --------- ---------- ------ ------ ------ ----- ------
1 BoxA1 10000 20 A B 4 3
3 CabinetZ2 30000 40 B C 0 2
```
The problem is that we are getting the first Id found and not the Id of the row with the most stock.
We are expecting a result like this but can't find a way to fix this query or a workaround:
```
Id Name PartNumber Param1 Param2 Param3 Stock Items
-- --------- ---------- ------ ------ ------ ----- ------
2 BoxA1 10000.a 20 A B 309 3
4 CabinetZ2 30000.b 40 B C 1098 2
``` | I use the RANK function in SQL Server, partitioned by the identifying columns and ordered by stock in descending order; see the code below:
```
select Id,
       name,
       partnumber,
       param1,
       param2,
       param3,
       stock,
       active
from (
      select *,
             RANK() over (partition by name, param1, param2, param3 order by stock desc) as max_stock
      from product) x
where max_stock = 1
``` | ```
WITH t AS (SELECT ROW_NUMBER() OVER (PARTITION BY Name, Param1, Param2, Param3 ORDER BY stock DESC) i,* FROM Products)
UPDATE t
SET Active = CASE i WHEN 1 THEN 1 ELSE 0 END
```
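As a quick sanity check of the ranked-update idea above, here is a small Python/sqlite3 sketch with the sample rows from the question (SQLite cannot update through a CTE, so the ranked ids are selected in a subquery instead):

```python
import sqlite3

# In-memory table with the sample rows from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (Id INTEGER, Name TEXT, PartNumber TEXT,
                       Param1 TEXT, Param2 TEXT, Param3 TEXT,
                       Stock INTEGER, Active INTEGER);
INSERT INTO Products VALUES
 (1,'BoxA1','10000','20','A','B',4,1),
 (2,'BoxA1','10000.a','20','A','B',309,1),
 (3,'CabinetZ2','30000','40','B','C',0,0),
 (4,'CabinetZ2','30000.b','40','B','C',1098,1),
 (5,'BoxA1','10000.c','20','A','B',15,1);
""")

# Keep only the top-ranked row per (Name, Param1, Param2, Param3) active.
conn.execute("""
UPDATE Products SET Active = (Id IN (
    SELECT Id FROM (
        SELECT Id, ROW_NUMBER() OVER (
                   PARTITION BY Name, Param1, Param2, Param3
                   ORDER BY Stock DESC) AS i
        FROM Products)
    WHERE i = 1))
""")
active = dict(conn.execute("SELECT Id, Active FROM Products"))
print(active)  # {1: 0, 2: 1, 3: 0, 4: 1, 5: 0}
```

Only ids 2 and 4 (the most stock in their groups) remain active.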
There was one ambiguity in your question: if two ids have the same # in stock, are they both active, or just one? If only one, what determines the priority?
If you want both to be active:
```
WITH t AS (SELECT MAX(stock) OVER (PARTITION BY Name, Param1, Param2, Param3) max_stock,* FROM Products)
UPDATE t
SET Active = CASE WHEN stock = max_stock THEN 1 ELSE 0 END
``` | SQL select row count and Id value of multiple rows with equal values | [
"",
"sql",
"sql-server",
"select",
"group-by",
""
] |
I would like to select only unique rows (one per message_id) from the table; can someone help me out?
```
SELECT * FROM table
where to_user = ?
and deleted != ?
and del2 != ?
and is_read = '0'
order by id desc
+----+-----+------+
| id | message_id |
+----+-----+------+
| 1 | 23 |
| 2 | 23 |
| 3 | 23 |
| 4 | 24 |
| 5 | 25 |
+----+-----+------+
```
I need something like
```
+----+-----+------+
| id | message_id |
+----+-----+------+
| 3 | 23 |
| 4 | 24 |
| 5 | 25 |
+----+-----+------+
``` | Try this:
```
SELECT MAX(id), message_id
FROM tablename
GROUP BY message_id
```
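For example, a quick Python/sqlite3 sketch of what this first query returns with the rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (id INTEGER, message_id INTEGER)")
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [(1, 23), (2, 23), (3, 23), (4, 24), (5, 25)])

# One row per message_id, keeping the largest id in each group.
rows = conn.execute("""
    SELECT MAX(id), message_id
    FROM tablename
    GROUP BY message_id
""").fetchall()
print(sorted(rows))  # [(3, 23), (4, 24), (5, 25)]
```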
and if you have other fields then:
```
SELECT MAX(id), message_id
FROM tablename
WHERE to_user = ?
AND deleted != ?
AND del2 != ?
AND is_read = '0'
GROUP BY message_id
ORDER BY MAX(id) DESC
``` | If you only need the largest id for a particular message\_id
```
SELECT max(id), message_id FROM table
where to_user = ?
and deleted != ?
and del2 != ?
and is_read = '0'
group by message_id
order by id desc
``` | mysql select only unique row | [
"",
"mysql",
"sql",
""
] |
Here's a simplified version of the two tables:
```
Invoice
========
InvoiceID
CustomerID
InvoiceDate
TransactionDate
InvoiceTotal
Customer
=========
CustomerID
CustomerName
```
What I want is a listing of all invoices where there is more than one per customer. I don't want to group or count the invoices, I actually need to see all invoices. The output would look something like this:
```
CustomerName TransactionDate InvoiceTotal
-------------------------------------------------
Ted Tester 2012-12-14 335.49
Ted Tester 2013-02-02 602.00
Bob Beta 2013-05-04 779.50
Bob Beta 2013-07-07 69.00
Bob Beta 2013-09-10 849.79
```
What's the best way to write a query for SQL Server to accomplish this? | This should do:
```
SELECT C.CustomerName,
I.TransactionDate,
I.InvoiceTotal
FROM dbo.Invoice I
INNER JOIN dbo.Customer C
ON I.CustomerID = C.CustomerID
WHERE EXISTS(SELECT 1 FROM Invoice
WHERE CustomerID = I.CustomerID
GROUP BY CustomerID
HAVING COUNT(*) > 1)
```
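As a quick illustration, here is a Python/sqlite3 sketch of the EXISTS approach above (sample data modelled on the question, plus an extra single-invoice customer to show the filtering):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (CustomerID INTEGER, CustomerName TEXT);
CREATE TABLE Invoice  (CustomerID INTEGER, TransactionDate TEXT, InvoiceTotal REAL);
INSERT INTO Customer VALUES (1,'Ted Tester'),(2,'Bob Beta'),(3,'One Timer');
INSERT INTO Invoice VALUES
 (1,'2012-12-14',335.49),(1,'2013-02-02',602.00),
 (2,'2013-05-04',779.50),(2,'2013-07-07',69.00),(2,'2013-09-10',849.79),
 (3,'2013-01-01',10.00);
""")

# All invoices of customers that have more than one invoice.
rows = conn.execute("""
    SELECT C.CustomerName, I.TransactionDate, I.InvoiceTotal
    FROM Invoice I
    JOIN Customer C ON I.CustomerID = C.CustomerID
    WHERE EXISTS (SELECT 1 FROM Invoice
                  WHERE CustomerID = I.CustomerID
                  GROUP BY CustomerID
                  HAVING COUNT(*) > 1)
    ORDER BY C.CustomerID, I.TransactionDate
""").fetchall()
names = {r[0] for r in rows}
print(len(rows), names)  # the single-invoice customer is filtered out
```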
And another way for SQL Server 2005+:
```
;WITH CTE AS
(
SELECT C.CustomerName,
I.TransactionDate,
I.InvoiceTotal,
N = COUNT(*) OVER(PARTITION BY I.CustomerID)
FROM dbo.Invoice I
INNER JOIN dbo.Customer C
ON I.CustomerID = C.CustomerID
)
SELECT *
FROM CTE
WHERE N > 1
``` | Using a window function will make this very clean to do - this will be supported by SQL Server 2005 and greater:
```
SELECT CustomerName, TransactionDate, InvoiceTotal
FROM (
SELECT c.CustomerName, i.TransactionDate, i.InvoiceTotal,
COUNT(*) OVER (PARTITION BY i.CustomerId) as InvoiceCount
FROM Invoice i
JOIN Customer c ON i.CustomerId = c.CustomerId
) t
WHERE InvoiceCount > 1
``` | Query to list all invoices for customers with more than one | [
"",
"sql",
"sql-server",
""
] |
I have a date field in the database called cutoffdate. The table name is Paydate. The cutoffdate is as shown below:
```
CutoffDate
-------------------------
2013-01-11 00:00:00.000
2013-02-11 00:00:00.000
2013-03-11 00:00:00.000
2013-04-11 00:00:00.000
2013-05-11 00:00:00.000
2013-06-11 00:00:00.000
2013-07-11 00:00:00.000
2013-08-11 00:00:00.000
2013-09-11 00:00:00.000
2013-10-11 00:00:00.000
2013-11-11 00:00:00.000
2013-12-11 00:00:00.000
```
I want to compare the current date with the cutoffdate (any of the 12 dates above) and then if the difference is 2 days, I need to proceed further.
This cutoffdate will remain the same next year and the year after, so I need to compare the dates ignoring the year part. For example, if the system date is 2013-11-09, then it should come up as 2 days. Also, if the system date is 2014-11-09, then it should show as 2 days. How can this be achieved? Please help | Try the following. It uses a Common Table Expression that returns your cutoff dates, but with the current year. Once that is done, finding the number of days between is a simple DATEDIFF operation.
```
;WITH normalizedCutoffs AS
(
    SELECT CAST(STUFF(CONVERT(varchar, CutoffDate, 102), 1, 4, CAST(YEAR(GETDATE()) AS varchar)) AS datetime) AS CutoffDate
    FROM Paydate
)
SELECT CutoffDate, DATEDIFF(day, GETDATE(), CutoffDate) FROM normalizedCutoffs
``` | ```
declare @difference int
set @difference = (SELECT ABS((MONTH(GETDATE())-MONTH(@date))*30+DAY(GETDATE())-DAY(@date)))
if(@difference>2)
begin
--Code goes here
end
```
In a few words, the function below
```
DAY(date)
```
returns the day of the date as an int. So, for example, if today is the 26th, DAY(GETDATE()) returns 26. | Difference between two dates ignoring the year | [
"",
"sql",
"date",
"sql-server-2008-r2",
""
] |
I have 2 tables that have a common column Material:
Table1
```
MaterialGroup | Material | MaterialDescription | Revenue | Customer | Month
MG1 | DEF | Desc1 | 12 | Customer A| Nov
MG2 | ABC | Desc2 | 13 | Customer A| Nov
MG3 | XYZ | Desc3 | 9 | Customer B| Dec
MG3 | LMN | Desc3 | 9 | Customer B| Jan
MG4 | IJK | Desc4 | 5 | Customer C| Jan
```
Table2
```
Vendor | VendorSubgroup| Material| Category
KM1 | DPPF | ABC | Cat1
KM2 | DPPL | XYZ | Cat2
```
There are two parts of the problem:
Part1 is fairly straight forward
I want to select all records from table1 where Material in table1 matches Material in table2
In the above scenario, I would want this result because the Material "ABC" and "XYZ" are present in table2:
```
MG2| ABC| Desc2| 13 | Customer A| Nov
MG3| XYZ| Desc3| 9 | Customer B| Dec
```
I used the following query to get the result and it worked:
```
SELECT T1.*
FROM TABLE1 AS T1
INNER JOIN TABLE2 AS T2
ON T1.MATERIAL = T2.MATERIAL
```
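To make Part 1 reproducible, here is a small Python/sqlite3 sketch with the sample rows (table names shortened to T1 and T2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T1 (MaterialGroup TEXT, Material TEXT, Descr TEXT,
                 Revenue INTEGER, Customer TEXT, Month TEXT);
CREATE TABLE T2 (Vendor TEXT, VendorSubgroup TEXT, Material TEXT, Category TEXT);
INSERT INTO T1 VALUES
 ('MG1','DEF','Desc1',12,'Customer A','Nov'),
 ('MG2','ABC','Desc2',13,'Customer A','Nov'),
 ('MG3','XYZ','Desc3', 9,'Customer B','Dec'),
 ('MG3','LMN','Desc3', 9,'Customer B','Jan'),
 ('MG4','IJK','Desc4', 5,'Customer C','Jan');
INSERT INTO T2 VALUES ('KM1','DPPF','ABC','Cat1'),('KM2','DPPL','XYZ','Cat2');
""")

# Part 1: rows of T1 whose Material also appears in T2.
part1 = conn.execute("""
    SELECT T1.Material FROM T1
    INNER JOIN T2 ON T1.Material = T2.Material
""").fetchall()
print(sorted(part1))  # [('ABC',), ('XYZ',)]
```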
Part 2 is a bit complicated and I need help for this now:
After fetching all records from Table1 where the material in Table1 matches the material in Table2, I need to go to the Customer in Table1 who purchased material from Table2 and find out what else that customer bought (which materials) in that same Month.
So, in this example: I would want the following result-
```
MG1| DEF| Desc1| 12 | Customer A| Nov
```
because Customer A purchased Material from table2 - they also purchased some other material in the same month.
Any help is appreciated. | You should really consider duplicates. What if there are multiple rows in table1 for the same customer and month? Most solutions posted here will give incorrect results.
To prevent duplicates in the result, use EXISTS instead of JOIN:
```
WITH Part1Customers AS (
SELECT TABLE1.Customer, TABLE1.[Month]
FROM TABLE1
WHERE EXISTS (
SELECT 1
FROM TABLE2
WHERE TABLE1.MATERIAL = TABLE2.MATERIAL
)
)
SELECT TABLE1.*
FROM TABLE1
WHERE EXISTS (
SELECT 1
FROM Part1Customers
WHERE Part1Customers.Customer = TABLE1.Customer
AND Part1Customers.[Month] = TABLE1.[Month]
)
``` | Try this:
```
SELECT T.*
FROM TABLE1 AS T
INNER JOIN
(
SELECT T1.Customer, T1.Month
FROM TABLE1 AS T1
INNER JOIN TABLE2 AS T2
ON T1.MATERIAL = T2.MATERIAL
) T1
ON T1.Customer = T.Customer AND T1.Month = T.Month
``` | SQL: Select Customer from Table1 who purchased material in table2 and find out what else did that customer purchase in the same month? | [
"",
"sql",
"sql-server",
""
] |
I have a Person table with multiple columns indicating the person's stated ethnicity (e.g. African American, Hispanic, Asian, White, etc). Multiple selections (e.g. White and Asian) are allowed. If a particular ethnicity is selected the column value is 1, if it is not selected it is 0, and if the person skipped the ethnicity question entirely it is NULL.
I wish to formulate a SELECT query that will examine the multiple Ethnicity columns and return a single text value that is a string concatenation based on the columns whose values is 1. That is, if the column White is 1 and the column Asian is 1, and the other columns are 0 or NULL, the output would be 'White / Asian'.
One approach would be to build a series of IF statements that cover all combinations of conditions. However, there are 8 possible ethnicity responses, so the IF option seems very unwieldy.
Is there an elegant solution to this problem? | Assuming SQL Server it would be this.
```
select case AfricanAmerican when 1 then 'African American/' else '' end
+ case White when 1 then 'White/' else '' end
+ case Hispanic when 1 then 'Hispanic/' else '' end
from PersonTable
``` | ```
-- Some sample data.
declare @Persons as Table ( PersonId Int Identity,
AfricanAmerican Bit Null, Asian Bit Null, Hispanic Bit Null, NativeAmerican Bit Null, White Bit Null );
insert into @Persons ( AfricanAmerican, Asian, Hispanic, NativeAmerican, White ) values
( NULL, NULL, NULL, NULL, NULL ),
( 0, 0, 0, 0, 0 ),
( 1, 0, 0, 0, 0 ),
( 0, 1, 0, 0, 0 ),
( 0, 0, 1, 0, 0 ),
( 0, 0, 0, 1, 0 ),
( 0, 0, 0, 0, 1 ),
( 0, 1, 1, 1, NULL );
-- Display the results.
select PersonId, AfricanAmerican, Asian, Hispanic, NativeAmerican, White,
Substring( Ethnicity, case when Len( Ethnicity ) > 3 then 3 else 1 end,
case when Len( Ethnicity ) > 3 then Len( Ethnicity ) - 2 else 1 end ) as Ethnicity
from (
select PersonId, AfricanAmerican, Asian, Hispanic, NativeAmerican, White,
case when AfricanAmerican = 1 then ' / African American' else '' end +
case when Asian = 1 then ' / Asian' else '' end +
case when Hispanic = 1 then ' / Hispanic' else '' end +
case when NativeAmerican = 1 then ' / Native American' else '' end +
case when White = 1 then ' / White' else '' end as Ethnicity
from @Persons
) as Simone;
``` | Build select result based on multiple column values | [
"",
"sql",
"t-sql",
""
] |
Could someone give me the correct syntax as to how I can insert and update data using a subquery?
```
SELECT PersonID
FROM Authors
WHERE PersonID IN (select personID
from person
where last_name = 'Smith' AND first_name = 'Barry'
);
```
and update certain columns that meet this example person's criteria. | ```
Update Authors
set Field1='x', Field2='Y', Field3='Z'
FROM Authors
WHERE PersonID IN (select personID
from person
where last_name = 'Smith' AND first_name = 'Barry'
);
``` | Regarding the Insert, I'm not 100% clear on what table you want to insert into. If you want to put 25 and Y into the Fee and Published Columns on the Authors table, then it's an update, not an insert, along the lines of what madtrubocow did, though this is how I would do it:
```
UPDATE a
SET Fee=25,Published='Y'
FROM
Authors a
WHERE EXISTS
(SELECT 1 FROM Person p WHERE a.PersonID=p.PersonID
AND p.last_name = 'Smith' AND p.first_name = 'Barry')
```
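For what it's worth, the UPDATE ... WHERE EXISTS shape above can be tried quickly in a Python/sqlite3 sketch (minimal assumed columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person  (PersonID INTEGER, first_name TEXT, last_name TEXT);
CREATE TABLE Authors (PersonID INTEGER, Fee INTEGER, Published TEXT);
INSERT INTO Person  VALUES (1,'Barry','Smith'),(2,'Ann','Jones');
INSERT INTO Authors VALUES (1,0,'N'),(2,0,'N');
""")

# Update only the author whose person record matches the name filter.
conn.execute("""
    UPDATE Authors SET Fee = 25, Published = 'Y'
    WHERE EXISTS (SELECT 1 FROM Person p
                  WHERE Authors.PersonID = p.PersonID
                    AND p.last_name = 'Smith' AND p.first_name = 'Barry')
""")
rows = conn.execute(
    "SELECT PersonID, Fee, Published FROM Authors ORDER BY PersonID").fetchall()
print(rows)  # [(1, 25, 'Y'), (2, 0, 'N')]
```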
If you want to insert a row into a different table that has an AuthorID,Fee and Published columns, it would be something like this:
```
INSERT INTO NewTable(AuthorID,Fee,Published)
SELECT AuthorID,25,'Y'
FROM Authors a
WHERE EXISTS
(SELECT 1 FROM Person p WHERE a.PersonID=p.PersonID
AND p.last_name = 'Smith' AND p.first_name = 'Barry')
```
I should note that I would only write these queries this way if PersonID is a unique column on the Authors table and the combo of First and Last Name is unique on the Person table. If either of those are not enforced by the DB, you'll need to give some thought as to how you ensure that you don't insert or update more rows than you intend. | Inserting into SQL with a subquery | [
"",
"sql",
"database",
""
] |
I have a query where I want to find rows where both ActivityDate and TaskId have multiple entries at the same time:
```
SELECT
ActivityDate, taskId
FROM
[DailyTaskHours]
GROUP BY
ActivityDate, taskId
HAVING
COUNT(*) > 1
```
The above query appears to work. However, I want all of the columns to be returned, not just the two (ActivityDate, taskId). This doesn't work:
```
SELECT *
FROM
[DailyTaskHours]
GROUP BY
ActivityDate, taskId
HAVING
COUNT(*) > 1
```
because many of the columns are not in the GROUP BY clause. I don't want any columns to be affected by the HAVING COUNT(*) > 1 other than ActivityDate and taskId.
How do I achieve this? | ```
SELECT t1.*
FROM
[DailyTaskHours] t1
INNER JOIN (
SELECT
ActivityDate, taskId
FROM
[DailyTaskHours]
GROUP BY
ActivityDate, taskId
HAVING
COUNT(*) > 1
) t2 ON (
t1.ActivityDate = t2.ActivityDate AND
t1.taskId = t2.taskId
)
``` | ```
WITH sel as(
SELECT
ActivityDate, taskId
FROM
[DailyTaskHours]
GROUP BY
ActivityDate, taskId
HAVING
COUNT(*) > 1
)
SELECT *
FROM [DailyTaskHours] d
INNER JOIN sel ON d.ActivityDate = sel.ActivityDate AND d.taskId = sel.taskId
``` | Return all rows/columns where multiple columns are non distinct | [
"",
"sql",
"sql-server",
""
] |
```
UPDATE Products
SET TotalQuantity = (CASE WHEN P.DateOfLastUpdate < PurchaseDate
THEN (SUM(P.TotalQuantity) + SUM(M.Quantity))
ELSE P.TotalQuantity END)
FROM Products AS P ,Products_Purchased as M
WHERE P.ProductName = M.ProductName;
```
I want to update the Products table with respect to another table, Products_Purchased. If I add a new item to Products_Purchased, it must be entered into the Products table too, and the TotalQuantity field of Products should also be updated. What's wrong with this query?
I got error message:
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM Products AS P ,Products_Sold as S WHERE P.ProductName = S.ProductName' at line 1
Can anybody help me to sort out this one..? | I have solved your problem. Just follow what I did.
I created two tables
```
create table products(product_name text,total_quantity int,date_of_lastupdate date);
create table products_purchased(product_name text,quantity int,date_of_purchase date);
insert into products values('food',10,'2013-5-6'),('dress',20,'2012-4-7');
insert into products_purchased values('food',15,'2012-2-1'),('dress',12,'2013-6-8');
```
Now I tried to update in the same condition which you were trying using the following code and it worked perfectly.
Just go through the code,you'll understand what I did and it'll solve your problem..
```
update products as a, products_purchased as b
set a.total_quantity =
    case
        when a.date_of_lastupdate < b.date_of_purchase
        then (select * from (select sum(products.total_quantity) + sum(products_purchased.quantity)
                             from products natural join products_purchased) as c)
        else a.total_quantity
    end
where a.product_name = b.product_name;
``` | Try this
```
UPDATE Products
SET TotalQuantity =
(SELECT CASE WHEN P.DateOfLastUpdate < PurchaseDate
THEN (SUM(P.TotalQuantity) + SUM(M.Quantity))
ELSE P.TotalQuantity END
FROM Products AS P ,Products_Purchased as M
WHERE P.ProductName = M.ProductName
AND P.ProductName = :p1)
WHERE ProductName = :p1
```
:p1 is a parameter which holds the name of the part to be updated. | Update a table using another table | [
"",
"mysql",
"sql",
""
] |
I have a table with numbers stored as `varchar2` with '.' as decimal separator (e.g. '5.92843').
I want to calculate with these numbers using ',' as that is the system default and have used the following `to_number` to do this:
```
TO_NUMBER(number,'99999D9999','NLS_NUMERIC_CHARACTERS = ''.,''')
```
My problem is that some numbers can be very long, as the field is `VARCHAR2(100)`, and when it is longer than my defined format, my `to_number` fails with a `ORA-01722`.
Is there any way I can define a dynamic number format?
I do not really care about the format as long as I can set my decimal character. | > Is there any way I can define an unlimited number format?
The only way is to set the appropriate value for the `nls_numeric_characters` parameter session-wide and use the `to_number()` function without specifying a format mask.
Here is a simple example. The decimal separator character is comma `","`, while the numeric literals contain period `"."` as the decimal separator character:
```
SQL> show parameter nls_numeric_characters;
NAME TYPE VALUE
------------------------------------ ----------- ------
nls_numeric_characters string ,.
SQL> with t1(col) as(
2 select '12345.567' from dual union all
3 select '12.45' from dual
4 )
5 select to_number(col) as res
6 from t1;
select to_number(col)
*
ERROR at line 5:
ORA-01722: invalid number
SQL> alter session set nls_numeric_characters='.,';
Session altered.
SQL> with t1(col) as(
2 select '12345.567' from dual union all
3 select '12.45' from dual
4 )
5 select to_number(col) as res
6 from t1;
res
--------------
12345.567
12.45
``` | You can't have "unlimited" number. Maximum precision is 38 significant digits. From the [documentation](http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT313). | Dynamic length on number format in to_number Oracle SQL | [
"",
"sql",
"oracle",
"nls",
""
] |
How to do UPDATE and INSERT in two different table with a field with the same name/value with one line only? | I'm not aware of a way to do so for inserts or standard SQL, but in MySQL, you can update two tables at once using a `JOIN`;
```
UPDATE table_a a
JOIN table_b b
ON a.id=b.id
SET a.value = a.value+1, b.value = b.value-1
WHERE a.id=1;
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!2/43858/1). | You just can't do that.
You could maybe use a trigger on the table targeted by the `INSERT` statement to `UPDATE` the second table, or the other way around.
Other than that you have to use two different statements. | UPDATE and INSERT in two different table with a field with the same name/value | [
"",
"mysql",
"sql",
""
] |
Sometimes I need to alter the number of filters in WHERE clause and I'm looking for suggestions to do it best.
Here is a scenario:
Table1 (Col1, Col2, Col3)
Col1 contains unique id.
Col2 contains numbers 0 thru 1000, non-unique.
Col3 contains letters of alphabet A thru Z, non-unique.
I have a StoredProc1 that takes only one argument but based on argument's value it should search either Col2 only or both Col2 and Col3. The decision to look at 1 or 2 columns would be arbitrary and the stored procedure needs to be optimized for performance.
Code below is doing the job but is extremely hard to manage. I have a stored procedure that contains 128 different branches this way and if I add one more condition it will constitute another 128 branches and total of 6000 lines of code. There must be a better way.
I was thinking about declaring another variable and setting it to a default value that would always be a no-match. Then based on the value passed in the StoredProc1 parameter, set the second variable to a relevant value. The problem with this solution is that it would decrease performance of searches where the second filter is not applicable.
I can't alter StoredProc1 definition because it is called by countless other processes.
So far the only thing that comes to my mind is to create another SP and call it from the main one if the condition is true and keep current proc as else branch.
```
StoredProc1 ( @filter ) as
begin
if (@filter = 1)
begin
select col1, col2, col3
from Table1
where Col2 = @filter or Col2 = 'A'
end
else
begin
select Col1, Col2, Col3
from Table1
where Col2 = @Filter
end
end
``` | ```
StoredProc1
@filter DataType
as
begin
DECLARE @Sql NVARCHAR(MAX);
SET @Sql = N'select Col1, Col2, Col3
from Table1 WHERE 1 = 1 '
+ CASE WHEN @filter = 1 THEN N' AND Col2 = @Param or Col2 = ''A'''
ELSE N' AND Col2 = @Filter' END
EXECUTE sp_executesql @Sql
                   , N'@Param DataType'
, @Param = @filter
end
```
You can have multiple CASE statements to check against multiple parameters and build your SQL string.
Much more flexible and secure way of doing this kind of operations. | ```
StoredProc1 ( @filter ) as
begin
select col1, col2, col3
from Table1
where Col2 = @filter or (@filter = '1' and Col2 = 'A')
end
```
You're going to lose performance no matter what solution you're choosing. Dynamic SQL will potentially need to be compiled every time. Your "branch" method works fine. | SQL alter number of filters in where clause | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
"where-clause",
""
] |
I have following sample data:
```
ID Name Street Number Code
100 John Street1 1 1234
130 Peter Street1 2 1234
135 Bob Street2 1 5678
141 Alice Street5 3 5678
160 Sara Street1 3 3456
```
Now I need a Query to return only the last record because its Code is unique. | You can identify which codes are unique with a query which uses `GROUP BY` and `HAVING`.
```
SELECT [Code]
FROM YourTable
GROUP BY [Code]
HAVING Count(*) = 1;
```
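A quick check of that query in a Python/sqlite3 sketch with the sample rows (only code 3456 appears once):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (ID INTEGER, Name TEXT, Code INTEGER)")
conn.executemany("INSERT INTO YourTable VALUES (?,?,?)", [
    (100, 'John',  1234), (130, 'Peter', 1234),
    (135, 'Bob',   5678), (141, 'Alice', 5678),
    (160, 'Sara',  3456),
])

# Codes that occur exactly once in the table.
unique_codes = [r[0] for r in conn.execute("""
    SELECT Code FROM YourTable
    GROUP BY Code
    HAVING COUNT(*) = 1
""")]
print(unique_codes)  # [3456]
```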
To get full rows which match those unique `[Code]` values, join that query back to your table.
```
SELECT y.*
FROM
YourTable AS y
INNER JOIN
(
SELECT [Code]
FROM YourTable
GROUP BY [Code]
HAVING Count(*) = 1
) AS sub
ON y.Code = sub.Code;
``` | Thanks to HansUp, this is my final query now:
```
SELECT
A.*
FROM
(T_NEEDED AS A
INNER JOIN
(
SELECT
CODE
FROM
T_NEEDED
GROUP BY
CODE
HAVING
Count(*) = 1
) AS B
ON
A.CODE = B.CODE)
LEFT OUTER JOIN
T_UNNEEDED AS C
ON
A.ID = C.ID
WHERE
C.ID Is Null
ORDER BY
A.NAME,
A.STREET,
A.NUMBER
```
Explanation: I have two tables, one with records with IDs that are needed and one with those unneeded. The unneeded IDs might be in the needed table, and if they are I want them to be excluded, hence the LEFT OUTER JOIN. Then comes the second part, for which I opened the question. I want to exclude those records from the needed IDs that have Codes that are not unique or that also belong to other IDs.
The result is a table that contains only needed IDs and in this table every Code is unique. | Select only records with distinct values in a certain row | [
"",
"sql",
"ms-access",
"duplicates",
"distinct",
"ms-access-2010",
""
] |
```
row | P_NO | B_NAME
1 | 123 | ABC ELEC
2 | 123 | ABC ELEC
3 | 123 | ABC ELEC
4 | 123 | ABC TRANSPORT
5 | 123 | ABC CONTRACTORS
6 | 124 | ABC STATIONARY
7 | 125 | ABC ELEC
8 | 126 | ABC ELEC
```
I'm very new to SQL.
How can I select only the P_NO and B_NAME where one P_NO appears with more than one B_NAME?
Output should be only one of the first three rows and row 4 and 5
SQL SERVER 2012 | [updated]
What you are looking for is this:
```
select p_no
, b_name
from Uhura.dbo.test
where p_no in (select p_no
from Uhura.dbo.test
group by
p_no
having count(distinct b_name) > 1)
group by
p_no
, b_name
```
It counts the number of distinct `b_name` values for each `p_no` and uses the ones that have more than one as a filter for the outer select. It then eliminates the duplicates by grouping:
 | try this ...
```
SELECT
P_NO,
B_NAME
FROM
table_name1
WHERE P_NO IN
(SELECT
P_NO
FROM
table_name1
GROUP BY P_NO
HAVING COUNT(P_NO) > 1)
``` | SQL - Group By Distinct | [
"",
"sql",
"sql-server",
"group-by",
"sql-server-2012",
""
] |
I have a table that stores costs for consumables.
```
consumable_cost_id consumable_type_id from_date cost
1 1 01/01/2000 £10.95
2 2 01/01/2000 £5.95
3 3 01/01/2000 £1.98
24 3 01/11/2013 £2.98
27 3 22/11/2013 £3.98
33 3 22/11/2013 £4.98
34 3 22/11/2013 £5.98
35 3 22/11/2013 £6.98
```
If the same consumable is updated more than once on the same day, I would like to select only the row where the consumable_cost_id is largest for that day. The desired output would be:
```
consumable_cost_id consumable_type_id from_date cost
1 1 01/01/2000 £10.95
2 2 01/01/2000 £5.95
3 3 01/01/2000 £1.98
24 3 01/11/2013 £2.98
35 3 22/11/2013 £6.98
```
Edit:
Here is my attempt (adapted from another post I found on here):
```
SELECT cc.*
FROM
consumable_costs cc
INNER JOIN
(
SELECT
from_date,
MAX(consumable_cost_id) AS MaxCcId
FROM consumable_costs
GROUP BY from_date
) groupedcc
ON cc.from_date = groupedcc.from_date
AND cc.consumable_cost_id = groupedcc.MaxCcId
``` | You were very close. This seems to work for me:
```
SELECT cc.*
FROM
consumable_cost AS cc
INNER JOIN
(
SELECT
Max(consumable_cost_id) AS max_id,
consumable_type_id,
from_date
FROM consumable_cost
GROUP BY consumable_type_id, from_date
) AS m
ON cc.consumable_cost_id = m.max_id
``` | ```
SELECT * FROM consumable_cost
GROUP by consumable_type_id, from_date
ORDER BY cost DESC;
``` | How do I select records with max from id column if two of three other fields are identical | [
"",
"sql",
"ms-access-2007",
""
] |
Is it possible to drop all NOT NULL constraints from a table in one go?
I have a big table with a lot of NOT NULL constraints and I'm searching for a solution that is faster than dropping them separately. | You can group them all in the same alter statement:
```
alter table tbl alter col1 drop not null,
alter col2 drop not null,
…
```
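If the table has many such columns, the grouped statement can be generated instead of typed out; a small Python sketch (the column names here are assumed, e.g. as returned by the catalog query further below):

```python
# Build one grouped ALTER TABLE statement from a list of NOT NULL
# column names. The names are placeholders; in practice they would
# come from a catalog query against pg_catalog.pg_attribute.
cols = ["col1", "col2", "col3"]
stmt = ("alter table tbl "
        + ",\n    ".join(f"alter {c} drop not null" for c in cols)
        + ";")
print(stmt)
```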
---
You can also retrieve the list of relevant columns from the catalog, if you feel like writing a [do block](http://www.postgresql.org/docs/current/static/sql-do.html) to generate the needed sql. For instance, something like:
```
select a.attname
from pg_catalog.pg_attribute a
where attrelid = 'tbl'::regclass
and a.attnum > 0
and not a.attisdropped
and a.attnotnull;
```
(Note that this will include the primary key-related fields too, so you'll want to filter those out.)
If you do this, don't forget to use `quote_ident()` in the event you ever need to deal with potentially weird characters in column names. | ALTER TABLE table\_name ALTER COLUMN column\_name [SET NOT NULL| DROP NOT NULL] | How to drop all NOT NULL constraints from a PostgreSQL table in one go | [
"",
"sql",
"postgresql",
"constraints",
"sql-drop",
"notnull",
""
] |
I'm in the process of designing a database structure for an application that I wish to develop, and I'm stuck wondering how I can design an Entity (I'm using Chen's notation) such that it can be extended by the end user through the program interface.
For example, the software I plan to write is a recipe book/nutritional information manager and I have designated a separate table for the nutritional information of an ingredient. As it stands, I have outlined a few basic attributes, namely Sodium, Carbs, Calories, and Fat. Without going into massive amounts of detail and trying to add every single possible relevant measurement, I'd like the user to be able to add their own things of importance to the database, such as maybe Vitamin A or Iron. I don't know much about database modeling yet (I'm only recently learning how to do it in school) so I presume that I wouldn't want the program to alter the table so that I add new attributes to this entity. So how should I go about doing it?
My (rather incomplete) model thus far follows. There's obviously much more that needs to go in here yet (not to mention the relationships between these entities).
 | For this particular application I would model it as below:
Ingredients are the basic things you need to make a recipe.
```
ingredients
id unsigned int(P)
name varchar(15)
...
+----+-----------+-----+
| id | name | ... |
+----+-----------+-----+
| 1 | Flour | ... |
| 2 | Olive oil | ... |
| .. | ......... | ... |
+----+-----------+-----+
```
Now you have to define what nutrients are found in each of your ingredients.
```
ingredients_nutrients
id unsigned int(P)
ingredient_id unsigned int(F ingredients.id)
nutrient_id unsigned int(F nutrients.id)
grams double
+----+---------------+-------------+-------+
| id | ingredient_id | nutrient_id | grams |
+----+---------------+-------------+-------+
| 1 | 1 | 1 | 3.0 |
| 2 | 1 | 2 | 15.3 |
| 3 | 2 | 3 | 20.0 |
| .. | ............. | ........... | ..... |
+----+---------------+-------------+-------+
```
Define all the possible nutrients (do some searching on the USDA website and you can find a complete list). It's trivial to add a records to this table.
```
nutrients
id unsigned int(P)
name varchar(15)
...
+----+--------+-----+
| id | name | ... |
+----+--------+-----+
| 1 | Sodium | ... |
| 2 | Iron | ... |
| 3 | Fat | ... |
| .. | ...... | ... |
+----+--------+-----+
```
Define your recipes.
```
recipes
id unsigned int(P)
name varchar(50)
...
+----+-------+-----+
| id | name | ... |
+----+-------+-----+
| 1 | Pizza | ... |
| .. | ..... | ... |
+----+-------+-----+
```
Indicate what ingredients go into each recipe.
```
recipes_ingredients
id unsigned int(P)
recipe_id unsigned int(F recipes.id)
ingredient_id unsigned int(F ingredients.id)
+----+-----------+---------------+
| id | recipe_id | ingredient_id |
+----+-----------+---------------+
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| .. | ......... | ............. |
+----+-----------+---------------+
``` | Assuming that your additional properties are in the same units, you can do it with a couple of extra tables.
```
table custom_field
-----------------
id
name
table ingredient_custom_field
----------------------------
id
ingredient_id
custom_field_id
custom_value
```
This assumes that the custom fields belong to ingredients. If all your values are in milligrams, then it makes it relatively simple to roll them up for a given recipe- well, simple until you start trying to convert fluid ounces to weight but that's another issue.
If you were using postgresql, you could use json, hstore, or another data type to store this complex information in the table itself.
Now, if you wind up having some of the types that are numeric, others that are flags (gluten free,etc.), then you will have to change the design somewhat or make everything store as text, but it'll be more difficult to sum the numeric results. | How can I model a database such that a given entity is extensible? | [
"",
"sql",
"database-design",
"entity-relationship",
""
] |
I am having a strange issue with a simple SELECT query and am unable to find out what is happening. I am using the SQL below to fetch some records from a customer table. My query is as follows.
```
SELECT `C`.* FROM `customers` AS `C` WHERE `C`.`typeId`!=9 ORDER BY C.name ASC
```
My table structure is as follows
```
|id|name|typeId|
|1 |abc |1 |
|2 |abcd|9 |
|3 |xxxx|NULL |
|4 |qwe |NULL |
|5 |xvf |10 |
```
in the above table, typeId is an integer field and default NULL. Database is MySql
When I execute the SQL above, both the typeId=9 rows and the typeId = NULL rows are excluded from the result. My question is: why are the typeId=NULL rows omitted? I have googled a lot and am unable to find an answer. Am I doing anything wrong here? | try out this...
```
SELECT `C`.*
FROM `customers` AS `C`
WHERE `C`.`typeId`!=9
OR `C`.`typeId` IS NULL
ORDER BY C.name ASC
``` | Try this
> SELECT `C`.* FROM `customers` AS `C` WHERE `C`.`typeId`!=9 OR `C`.`typeId` IS NULL ORDER BY C.name ASC | MySql rows with NULL values are not retriving in select query | [
"",
"mysql",
"sql",
""
] |
I have a table "person", an associative table "person_vaccination" and a table "vaccination".
I want to get the person who has missing vaccinations but so far I only got it to work when I have the id.
```
SELECT vac.VACCINATION_Name
FROM VACCINATION vac
WHERE vac.VACCINATION_NUMBER NOT IN
(SELECT v.VACCINATION_NUMBER
FROM PERSON per
Join PERSON_VACCINATION pv ON per.PERSON_NUMBER = pv.PERSON_NUMBER
JOIN VACCINATION v ON pv.VACCINATION_NUMBER = v.VACCINATION_NUMBER
WHERE per.PERSON_NUMBER = 6)
```
It works fine, but how do I get all the people missing their vaccinations? (ex:
555, Vaccination 1
555, Vaccination 2
666, Vaccination 1) | [SQL Fiddle](http://sqlfiddle.com/#!4/c6ae5f/7)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE VACCINATION ( VACCINATION_NUMBER, VACCINATION_NAME ) AS
SELECT 1, 'Vac 1' FROM DUAL
UNION ALL SELECT 2, 'Vac 2' FROM DUAL
UNION ALL SELECT 3, 'Vac 3' FROM DUAL
UNION ALL SELECT 4, 'Vac 4' FROM DUAL;
CREATE TABLE PERSON_VACCINATION ( VACCINATION_NUMBER, PERSON_NUMBER ) AS
SELECT 1, 1 FROM DUAL
UNION ALL SELECT 2, 1 FROM DUAL
UNION ALL SELECT 3, 1 FROM DUAL
UNION ALL SELECT 4, 1 FROM DUAL
UNION ALL SELECT 1, 2 FROM DUAL
UNION ALL SELECT 2, 2 FROM DUAL
UNION ALL SELECT 3, 2 FROM DUAL;
CREATE TABLE PERSON ( PERSON_NUMBER, PERSON_NAME ) AS
SELECT 1, 'P1' FROM DUAL
UNION ALL SELECT 2, 'P2' FROM DUAL
UNION ALL SELECT 3, 'P3' FROM DUAL;
```
**Query 1**:
```
SELECT p.PERSON_NAME,
v.VACCINATION_NAME
FROM VACCINATION v
CROSS JOIN
PERSON p
WHERE NOT EXISTS ( SELECT 1
FROM PERSON_VACCINATION pv
WHERE pv.VACCINATION_NUMBER = v.VACCINATION_NUMBER
AND pv.PERSON_NUMBER = p.PERSON_NUMBER )
ORDER BY p.PERSON_NAME,
p.PERSON_NUMBER,
v.VACCINATION_NAME,
v.VACCINATION_NUMBER
```
**[Results](http://sqlfiddle.com/#!4/c6ae5f/7/0)**:
```
| PERSON_NAME | VACCINATION_NAME |
|-------------|------------------|
| P2 | Vac 4 |
| P3 | Vac 1 |
| P3 | Vac 2 |
| P3 | Vac 3 |
| P3 | Vac 4 |
``` | Instead of an INNER JOIN, you should use LEFT JOIN.
Take a look at this link: <http://www.w3schools.com/sql/sql_join_left.asp> | SQL - Select what is not in second table from assocciative | [
"",
"sql",
"oracle",
""
] |
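The `CROSS JOIN` + `NOT EXISTS` pattern from the accepted answer is portable; here is a minimal sketch in Python's `sqlite3` with invented sample data (P1 has both shots, P2 is missing one):

```python
import sqlite3

# Small schema mirroring the question: people, vaccinations, and a link table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vaccination (vaccination_number INTEGER, vaccination_name TEXT);
CREATE TABLE person (person_number INTEGER, person_name TEXT);
CREATE TABLE person_vaccination (vaccination_number INTEGER, person_number INTEGER);
INSERT INTO vaccination VALUES (1,'Vac 1'),(2,'Vac 2');
INSERT INTO person VALUES (1,'P1'),(2,'P2');
-- P1 has both shots, P2 only the first one.
INSERT INTO person_vaccination VALUES (1,1),(2,1),(1,2);
""")

# Every (person, vaccination) pair that has no matching link row is "missing".
missing = conn.execute("""
    SELECT p.person_name, v.vaccination_name
    FROM vaccination v CROSS JOIN person p
    WHERE NOT EXISTS (SELECT 1 FROM person_vaccination pv
                      WHERE pv.vaccination_number = v.vaccination_number
                        AND pv.person_number = p.person_number)
    ORDER BY p.person_name, v.vaccination_name
""").fetchall()
print(missing)
```

The cross join builds the full set of expected combinations, and `NOT EXISTS` filters it down to the ones not yet recorded.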
I was wondering if there is a way to select a column by using a SQL variable. Eg. Table is -
ID, Name, Address
```
DECLARE @Column varchar(25)
SET @Column = 'Name' -- This can be another column also
SELECT @Column
FROM MyTable
```
This shows me 'Name' as many times as there are rows in my table.
Is it even possible to do what I want?
Thanks. | SQL is currently interpreting your variable as a string.
[From a previous answer on stack overflow:](https://stackoverflow.com/a/12896225/2171102)
```
DECLARE @Column varchar(25)
DECLARE @sqlText nvarchar(200)
SET @Column = 'Name' -- This can be another column also
SET @sqlText = N'SELECT ' + @Column + ' FROM MyTable'
EXEC (@sqlText)
``` | Can do this with dynamic SQL:
```
DECLARE @Column varchar(25)
,@sql VARCHAR(MAX)
SET @Column = 'Name' -- This can be another column also
SET @sql = 'SELECT '+@Column+'
FROM MyTable
'
EXEC (@sql)
```
You can test your dynamic sql queries by changing `EXEC` to `PRINT` to make sure each of the resulting queries is what you'd expect. | Select a column using a variable? | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
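The same idea applies on the client side: a bound parameter can only carry a value, never a column name, so the identifier has to be spliced into the SQL text. A hedged Python/sqlite3 sketch (table, data, and the `allowed` whitelist are all hypothetical) showing how to do that safely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (id INTEGER, name TEXT, address TEXT)")
conn.execute("INSERT INTO MyTable VALUES (1, 'Ann', 'Oslo')")

def select_column(column):
    # Whitelist the identifier before string-building to avoid SQL injection;
    # placeholders (?) cannot stand in for column names.
    allowed = {"id", "name", "address"}   # hypothetical whitelist
    if column not in allowed:
        raise ValueError("unknown column: " + column)
    return conn.execute(f"SELECT {column} FROM MyTable").fetchall()

print(select_column("name"))
```

This mirrors the T-SQL `EXEC (@sqlText)` approach: the query text is assembled dynamically, with validation taking the place of parameterization for the identifier.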
Here is my serial table. It has more than 1000 records. Each row stores a start number and an end number, but the numbers in between do not exist as rows.
I need to add records for all the numbers [start, in-between, and end numbers] to another temp table **number by number**
like below
**EXIST TABLE**
```
select concat(CARD_BULK_CODE,start_serial) startserial,concat(CARD_BULK_CODE,end_serial) endserial
from TSR_BULK_CARD_SERIALS
```
```
---------------------------
STARTSERIAL ENDSERIAL |
---------------------------
18126944 18126946 |
18141101 18141122 |
15150722 15150729 |
19069303 19069317 |
---------------------------
```
**REQUIRED TABLE**
```
-----------
SERIAL_NO |
-----------
18126944
18126945
18141101
18141102
....
-----------
```
It seems PL/SQL is needed to implement this.
Please help me to sort out this issue.
I tried the query below with the help of dual,
but it is very slow and has not returned results yet :-) it has been running for more than 1 hour.
```
select distinct concat(t.CARD_BULK_CODE,t.START_SERIAL)+level-1 SERIAL
from TSR_BULK_CARD_SERIALS t, dual
connect by level-1<=(concat(t.CARD_BULK_CODE,t.END_SERIAL ))-concat(t.CARD_BULK_CODE,t.START_SERIAL)
order by 1
```
**EDIT :**
Dear Alen & Dba, I tried your suggestions and the error below occurred.
```
DECLARE
l_st NUMBER;
l_en NUMBER;
BEGIN
FOR rec IN (select concat(card_bulk_code, start_serial) startserial,concat(card_bulk_code, end_serial) endserial from tsr_bulk_card_serials)
LOOP
l_st := rec.startserial;
l_en := rec.endserial;
FOR rec1 IN l_st..l_en
LOOP
INSERT INTO temp(serial_no) values(rec1);
END LOOP;
END LOOP;
COMMIT;
END;
Error at line 1
ORA-01426: numeric overflow
ORA-06512: at line 9
Script Terminated on line 1.
``` | Just write some PL/SQL - iterate through your table and insert rows in the temp table.
```
declare
l_start number;
l_end number;
begin
for r_rec in (select to_number(concat(card_bulk_code, start_serial)) startserial
, to_number(concat(card_bulk_code, end_serial)) endserial
from tsr_bulk_card_serials )
loop
l_start := r_rec.startserial;
l_end := r_rec.endserial;
for l_i in l_start..l_end loop
insert into your_temp_table (serial_no) values (l_i);
end loop;
end loop;
end;
``` | One way to do it without resorting to plsql
```
WITH ranges AS
(
SELECT CONCAT(CARD_BULK_CODE, start_serial) startserial,
CONCAT(CARD_BULK_CODE, end_serial) endserial
FROM TSR_BULK_CARD_SERIALS
),
numbers(n) AS (
SELECT 0 n
FROM dual
UNION ALL
SELECT n + 1
FROM numbers
WHERE n <=
(
SELECT MAX(endserial - startserial)
FROM ranges
)
)
SELECT t.startserial + n.n SERIAL_NO
FROM ranges t JOIN numbers n
ON n.n <= t.endserial - t.startserial
ORDER BY SERIAL_NO
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!4/9e8db/3)** demo | How to get Numbers in number range by PL-Sql .? | [
"",
"sql",
"oracle",
"plsql",
""
] |
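A set-based alternative to the PL/SQL loop above can be sketched with a recursive CTE; here it is in Python's `sqlite3` (sample ranges invented, shortened from the question's data), expanding each (start, end) pair into one row per serial number:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE serial_ranges (start_serial INTEGER, end_serial INTEGER)")
conn.executemany("INSERT INTO serial_ranges VALUES (?, ?)",
                 [(18126944, 18126946), (15150722, 15150724)])

# Anchor: every range's start. Recursive step: add 1 until the end is reached.
serials = conn.execute("""
    WITH RECURSIVE expanded(serial_no, end_serial) AS (
        SELECT start_serial, end_serial FROM serial_ranges
        UNION ALL
        SELECT serial_no + 1, end_serial FROM expanded
        WHERE serial_no < end_serial
    )
    SELECT serial_no FROM expanded ORDER BY serial_no
""").fetchall()
serial_list = [s[0] for s in serials]
print(serial_list)
```

Oracle's `CONNECT BY LEVEL` and `INSERT ... SELECT` can express the same expansion in a single statement, avoiding the row-by-row loop (and its numeric-overflow pitfalls) entirely.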
I have a table of students.
and a table of teachers.
SOME of the students (not all) will have a teacher assigned to them.
This is controlled in a 3rd table, matching those students with their teachers via the studentID and teacherID.
What I need the SQL to do is `LEFT OUTER JOIN` onto the 3rd table, which is then `INNER JOINED` onto the teachers table (because not all students will appear in the 3rd table, but any teacher that appears in the 3rd table WILL appear in the teachers table).
I am looking to get a result of all student names and teacher names where they are assigned (and null if not).
What I have so far looks like this, and it basically operates as a full INNER JOIN, and does not give me students who do not have teachers assigned:
```
SELECT firstname, teacherlastName
FROM tblstudents
left outer join [tblStudentRakazot]
ON tblstudents.studentid = [tblStudentRakazot].studentID
INNER JOIN tblteachers
ON [tblStudentRakazot].teacherid = tblteachers.teacherID
```
Can someone please give me a pointer here? I tried placing brackets, but that didn't seem to help.
thanks! | You don't use an `INNER JOIN` but only another `LEFT JOIN`.
Think of tblStudents as your base. You want to get all of them, not filter anything out, and only attach optional info.
With the first left join, you attach a first info
Student -> TeacherAssignment
The TeacherAssignment can be null or not null.
Now you only attach another info - the teacher's full name pulled from `tblTeachers`.
Student -> TeacherAssignnent -> TeacherName
Do this with another `LEFT JOIN`. That attaches the info, where possible, i.e. where TeacherAssignment is not null.
This ignores rows where TeacherAssignment is null anyway. | ```
SELECT firstname, teacherlastName
FROM tblstudents
left outer join
( select * from
[tblStudentRakazot] A INNER JOIN tblteachers B
ON A.teacherid = B.teacherID)AS C
ON tblstudents.studentid = C.studentID
``` | LEFT OUTER JOIN with INNER JOIN | [
"",
"sql",
"sql-server",
""
] |
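The accepted advice — chain a second `LEFT JOIN` instead of an `INNER JOIN` — can be demonstrated in a few lines with Python's `sqlite3` (invented sample data; only one of the two students has a teacher assigned):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblstudents (studentid INTEGER, firstname TEXT);
CREATE TABLE tblStudentRakazot (studentid INTEGER, teacherid INTEGER);
CREATE TABLE tblteachers (teacherid INTEGER, teacherlastname TEXT);
INSERT INTO tblstudents VALUES (1,'Avi'),(2,'Dana');
INSERT INTO tblStudentRakazot VALUES (1, 10);  -- only Avi has a teacher
INSERT INTO tblteachers VALUES (10,'Levi');
""")

# Two LEFT JOINs: students without an assignment survive with a NULL teacher.
rows = conn.execute("""
    SELECT s.firstname, t.teacherlastname
    FROM tblstudents s
    LEFT JOIN tblStudentRakazot r ON r.studentid = s.studentid
    LEFT JOIN tblteachers t ON t.teacherid = r.teacherid
    ORDER BY s.firstname
""").fetchall()
print(rows)
```

With an `INNER JOIN` to the teachers table, Dana's row (NULL teacherid) would be filtered out; the second `LEFT JOIN` merely attaches the teacher name where one exists.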
I am trying to calculate the average number of pests per town among rows that use the same variety of crops.
```
DistrictName | TownName | CropName |Pests
Rawalpindi | pindi | CIM 506 | 3.3
Rawalpindi | pindi | CIM 506 | 3.8
Rawalpindi | Muree | CIM 506 | 3.5
```
But this query keeps giving me a syntax error, i.e. Invalid column name `PestPopulation1`, whereas I have thoroughly checked the spelling of my column name... `PestPopulation1` is of `varchar(max)` type.
Following is my query
```
SELECT T.DistrictName, T.TownName, T.VarietyOfCrop, avg(T.PestPopulation1)
FROM (
SELECT DistrictName, TownName,VarietyOfCrop, cast(PestPopulation1 as float) as Pests
FROM FinalTable2
) as T
group by T.DistrictName, T.TownName, T.VarietyOfCrop
``` | As mentioned, the problem is that you refer to a column that's not available in that scope.
Altogether there's not even a good reason to use such a contrived query. This should work just as well, and is more readable:
```
SELECT T.DistrictName, T.TownName, T.VarietyOfCrop, avg(cast(T.Pests as float))
FROM FinalTable2 T
group by T.DistrictName, T.TownName, T.VarietyOfCrop
``` | The reason this gives you an error is because you gave the expression `cast(PestPopulation1 as float)` an alias `Pests`.
You should use that alias in your outer query, like this:
```
SELECT T.DistrictName, T.TownName, T.VarietyOfCrop, avg(T.Pests)
FROM (
SELECT DistrictName, TownName,VarietyOfCrop, cast(PestPopulation1 as float) as Pests
FROM FinalTable2
) as T
group by T.DistrictName, T.TownName, T.VarietyOfCrop
```
You should be able to simplify this query by pushing the `cast` into `avg`, like this:
```
SELECT
DistrictName
, TownName
, VarietyOfCrop
, avg(convert(float, PestPopulation1)) as avg_pests
FROM FinalTable2
group by T.DistrictName, T.TownName, T.VarietyOfCrop
```
> Error converting data type `nvarchar` to `float`. My `PestPopulation1` column is of `varchar(max)`.
You need to use `CONVERT` instead of `CAST` (see above). | Sql Calculate Average | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
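The simplified query from the accepted answer — cast the text column inside `AVG` and group by the descriptive columns — can be sketched in Python's `sqlite3` with the question's sample data (SQLite's `CAST(... AS REAL)` stands in for SQL Server's `CAST(... AS float)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE FinalTable2
    (DistrictName TEXT, TownName TEXT, VarietyOfCrop TEXT, PestPopulation1 TEXT)""")
conn.executemany("INSERT INTO FinalTable2 VALUES (?,?,?,?)",
                 [("Rawalpindi", "pindi", "CIM 506", "3.3"),
                  ("Rawalpindi", "pindi", "CIM 506", "3.8"),
                  ("Rawalpindi", "Muree", "CIM 506", "3.5")])

# The cast happens inside AVG, so no derived-table alias scoping is involved.
rows = conn.execute("""
    SELECT DistrictName, TownName, VarietyOfCrop,
           AVG(CAST(PestPopulation1 AS REAL)) AS avg_pests
    FROM FinalTable2
    GROUP BY DistrictName, TownName, VarietyOfCrop
    ORDER BY TownName
""").fetchall()
print(rows)
```

Because the cast is applied directly in the aggregate, there is no inner alias (`Pests`) for the outer query to mis-reference, which was the cause of the original error.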
I wonder if it is possible to write a query like the following one.
I have a table that contains `id` and `value`.
Exapmle:
```
table EXAMPLE
|id | value |
| 1 | 65|
| 2 | 13|
| 3 | 22|
```
What I want is to make query that returns the set like this:
```
|id | value | average | difference|
| 2 | 13| 33.3| 20.3|
| 3 | 22| 33.3| 11.3|
```
The question is how to deal with the extra computed columns.
And one more question:
How do I include in the set only values that are less than or greater than the average?
```
SELECT id,
value,
(SELECT AVG(value) FROM EXAMPLE ) as average
having
value < average.
```
The reason I am using `having` instead of `where` is that `where` is executed before `select`, and therefore I can't use `average` in it (`average` is computed in the select phase of query execution).
I also suspect that I am calculating AVG(value) not just once, but for every row in the table.
Am I right? If I am, that is pretty bad and it is surely not what I wanted.
Please shed some light on the MySQL execution order for subqueries, or share some links on this topic.
Thank you in advance.
Thank you in advance. | You'll have to calculate the overall average separately and force a cross join:
[SQL Fiddle](http://sqlfiddle.com/#!2/6f51e/4)
```
select
id,
value,
avgValue,
avgValue - value as Diff
from
table1
cross join
(
select
avg(value) as AvgValue
from
table1
) t2
where
value < avgValue
``` | I think this would do it. You need to group in order to do you aggregate functions.
```
SELECT id, value, (SELECT AVG(value) FROM example GROUP BY id) AS average,
average - value AS diff
FROM example
GROUP BY id
HAVING value < average;
``` | Difference between value and avg(value) | [
"",
"mysql",
"sql",
""
] |
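The accepted `CROSS JOIN` approach — compute the overall average once in a derived table, then compare every row against it — runs unchanged in SQLite; a sketch with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (id INTEGER, value REAL)")
conn.executemany("INSERT INTO example VALUES (?,?)", [(1, 65), (2, 13), (3, 22)])

# The derived table 'a' holds a single row with the table-wide average,
# so AVG is evaluated once, not per row.
rows = conn.execute("""
    SELECT e.id, e.value, a.avg_value, a.avg_value - e.value AS diff
    FROM example e
    CROSS JOIN (SELECT AVG(value) AS avg_value FROM example) a
    WHERE e.value < a.avg_value
    ORDER BY e.id
""").fetchall()
print(rows)
```

Only the rows below the average (ids 2 and 3) survive the `WHERE` clause, each carrying the shared average and its difference.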
I have 3 text fields where the user types a table name and 2 column names that need to be merged.
How should I merge (combine) 2 column values into 1?
I use oracle 11g enterprise | concatenate?
```
select col1 || ' ' || col2 from tablex
``` | This is a very vague requirement. Concatenate the values maybe?
```
insert into sometable( Column1 )
values ( Column1 || Column2 );
```
If you need to specify the table name to INSERT into, then you will need to use dynamic SQL to achieve this. Would you need to specify the target column name as well? This example assumes you would use PL/SQL, which may not be appropriate in your case.
```
sql_stmt := 'INSERT INTO '|| specified_table || '(' || merge_column || ') VALUES ( :1 )';
EXECUTE IMMEDIATE sql_stmt USING column1 || column2;
```
<http://docs.oracle.com/cd/B13789_01/appdev.101/b10807/13_elems017.htm> | How to merge (combine) 2 columns into 1 in oracle? | [
"",
"sql",
"oracle",
"merge",
""
] |
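The `||` concatenation operator from the accepted answer is standard SQL and also works in SQLite; a minimal sketch (sample names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablex (col1 TEXT, col2 TEXT)")
conn.execute("INSERT INTO tablex VALUES ('John', 'Smith')")

# || joins the two column values, with a literal space in between.
merged = conn.execute("SELECT col1 || ' ' || col2 FROM tablex").fetchone()[0]
print(merged)
```

In Oracle the same expression can be used directly in an `INSERT ... SELECT` or an `UPDATE` to persist the merged value.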
I have this SQL query
```
1 : show tables
2 : desc tablename
```
But this doesn't seem to be valid syntax in Derby.
How do I write these queries in Derby?
I want to check the schema of a table, e.g. whether a column is a primary key or not. How do I check that in WebSphere?
How to check that in websphere | Strictly speaking, this isn't SQL. Rather, these are IJ commands, and must be processed by the IJ tool.
Here's the documentation for "describe": <http://db.apache.org/derby/docs/10.10/tools/rtoolsijcomrefdescribe.html>
And here's the documentation for "show tables": <http://db.apache.org/derby/docs/10.10/tools/rtoolsijcomrefshow.html>
You don't run these commands in Websphere, you run them in IJ. | Show tables via a query (**no IJ**):
```
select st.tablename from sys.systables st LEFT OUTER join sys.sysschemas ss on (st.schemaid = ss.schemaid) where ss.schemaname ='APP'
```
Show columns via a query (**no IJ**):
```
select * from sys.syscolumns where referenceid = (select tableid from sys.systables where tablename = 'THE_TABLE') order by columnnumber
``` | how to describe and show table in DERBY DB? | [
"",
"sql",
"database",
"derby",
""
] |
I am trying to search two columns in a table, i.e. title and description, but I want to search title first and then description, so all rows that match the title come first and all rows that match the description come second.
Can I implement this using a single SQL query? | You can also use a case statement so it doesn't have to go through the whole table twice.
```
SELECT col1,col2,col3,
Case
WHEN title LIKE '%searchQuery%' THEN 0
WHEN description LIKE '%searchQuery%' THEN 1
END
AS ord
FROM table1
HAVING ord IS NOT NULL
ORDER BY ord;
``` | I suppose this should be efficient
```
SELECT COLA
(
SELECT TITLE AS COLA, 'T' AS IND
FROM TABLE
UNION ALL
SELECT DESCRIPTION AS COLA, 'D' AS IND
FROM TABLE
)
ORDER BY COLA, IND DESC
``` | mysql : search two columns : one column first then second column | [
"",
"mysql",
"sql",
"database",
""
] |
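The `CASE`-ranking idea from the accepted answer can be verified with Python's `sqlite3` (invented sample rows; the alias is referenced from a wrapping query, since many engines do not allow select-list aliases in `WHERE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (title TEXT, description TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?,?)",
                 [("sql tips", "intro"), ("misc", "sql tricks"), ("misc", "cooking")])

# ord = 0 for title matches, 1 for description-only matches, NULL otherwise.
rows = conn.execute("""
    SELECT title, description, ord FROM (
        SELECT title, description,
               CASE WHEN title LIKE '%sql%' THEN 0
                    WHEN description LIKE '%sql%' THEN 1
               END AS ord
        FROM table1)
    WHERE ord IS NOT NULL
    ORDER BY ord
""").fetchall()
print(rows)
```

The non-matching row falls through the `CASE` to NULL and is filtered out; the rest sort with title matches first, scanning the table only once.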
I have the following connection string(get from a property of sql server):
```
Data Source=(LocalDB)\v11.0;AttachDbFilename=C:\Users\myUser\Desktop\adoBanche\Banche\bin\Debug\banche.mdf;Integrated Security=True;Connect Timeout=30
```
I don't understand what mean `Timeout=30`. Someone could explain what means? | That is the timeout to **create** the connection, NOT a timeout for commands executed **over** that connection.
See for instance <http://www.connectionstrings.com/all-sql-server-connection-string-keywords/>
(note that the property is "Connect Timeout" (or "Connection Timeout"), not just "Timeout")
---
From the comments:
It is not possible to set the command timeout through the connection string. However, the SqlCommand has a [CommandTimeout](https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand.commandtimeout) property (derived from DbCommand) where you can set a timeout (in seconds) per command.
Do note that when you loop over query results with `Read()`, the timeout is reset on every read. The timeout is for each network request, not for the total connection. | > **Connection Timeout=30** means that the database server has 30 seconds to establish a connection.
**Connection Timeout** specifies the time limit (in seconds), within which the connection to the specified server must be made, otherwise an exception is thrown i.e. It specifies how long you will allow your program to be held up while it establishes a database connection.
```
DataSource=server;
InitialCatalog=database;
UserId=username;
Password=password;
Connection Timeout=30
```
***SqlConnection.ConnectionTimeout.*** specifies how many seconds the SQL Server service has to respond to a connection attempt. This is always set as part of the connection string.
Notes:
* The value is expressed in seconds, not milliseconds.
* The default value is 30 seconds.
* A value of 0 means to wait indefinitely and never time out.
In addition, ***SqlCommand.CommandTimeout*** specifies the timeout value of a specific query running on SQL Server, however this is set via the SqlConnection object/setting (depending on your programming language), and not in the connection string i.e. It specifies how long you will allow your program to be held up while the command is run. | What is "Connect Timeout" in sql server connection string? | [
"",
"sql",
"sql-server",
""
] |
I came across below code today.
```
SELECT StaffGroup.*
FROM StaffGroup
LEFT OUTER JOIN StaffByGroup
ON StaffByGroup.StaffGroupId = StaffGroup.StaffGroupId
INNER JOIN StaffMember
ON StaffMember.StaffMemberId = StaffByGroup.StaffMemberId
WHERE StaffByGroup.StaffGroupId IS NULL
```
The main table `StaffGroup` is being LEFT JOINed with `StaffByGroup` table and then `StaffByGroup` table is being INNER JOINed with `StaffMember` table.
I thought the INNER JOIN is trying to filter out the records which exist in `StaffGroup` and `StaffByGroup` but do not exist in `StaffMember`.
But this is not how it is working. The query does not return any records.
Am I missing something in understanding the logic of the query? Have you ever used an INNER JOIN with a table that has been used with a LEFT JOIN in an earlier part of the query? | This query looks fundamentally flawed - I guess what was originally intended is
```
SELECT StaffGroup.*
FROM StaffGroup
LEFT OUTER JOIN
(SELECT * FROM StaffByGroup
INNER JOIN StaffMember
ON StaffMember.StaffMemberId = StaffByGroup.StaffMemberId) StaffByGroup
ON StaffByGroup.StaffGroupId = StaffGroup.StaffGroupId
WHERE StaffByGroup.StaffGroupId IS NULL
```
which returns all groups from StaffGroup that don't have existing staff members assigned to them (the INNER JOIN with StaffMember filters out those rows from StaffByGroup that don't have a matching row in StaffMember - probably because no foreign key exists between them)
The main table `StaffGroup` is being `LEFT Joined` with `StaffByGroup` table and then this creates a `virtual table` say `VT1` with all records from StaffGroup and matching records from StaffByGroup based on your match/filter condition in `ON` predicate.Then not StaffByGroup table but this `VT1` is being `INNER Joined` with `StaffMember` table based on match/filter condition in `ON` predicate.
So basically the inner join is trying to filter out those records from StaffGroup and hence StaffByGroup which do not have a StaffMemberId.
Adding your where condition adds a final filter like from the final virtual table created by all the above joins remove all such records which don't have a StaffGroupId which in turn might be removing all rows collected in `VT1` as all of them will be having some value for StaffGroupId.
To get all records from StaffGroup which have no StaffGroupId along with details from StaffMember for all such records you can add condition in ON predicate as:
```
SELECT StaffGroup.*
FROM StaffGroup
LEFT OUTER JOIN StaffByGroup
ON StaffByGroup.StaffGroupId = StaffGroup.StaffGroupId and StaffByGroup.StaffGroupId IS NULL
INNER JOIN StaffMember
ON StaffMember.StaffMemberId = StaffByGroup.StaffMemberId
``` | When would you INNER JOIN a LEFT JOINed table / | [
"",
"sql",
"sql-server",
""
] |
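This is straightforward to demonstrate with Python's `sqlite3` (invented sample data, same join semantics): the `INNER JOIN` after the `LEFT JOIN` discards every NULL-extended row, so the `IS NULL` filter can never match — replacing it with a second `LEFT JOIN` restores the anti-join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE StaffGroup (StaffGroupId INTEGER);
CREATE TABLE StaffByGroup (StaffGroupId INTEGER, StaffMemberId INTEGER);
CREATE TABLE StaffMember (StaffMemberId INTEGER);
INSERT INTO StaffGroup VALUES (1),(2);   -- group 2 has no members
INSERT INTO StaffByGroup VALUES (1, 100);
INSERT INTO StaffMember VALUES (100);
""")

# Original shape: the INNER JOIN drops the NULL-extended row for group 2.
broken = conn.execute("""
    SELECT g.StaffGroupId FROM StaffGroup g
    LEFT JOIN StaffByGroup bg ON bg.StaffGroupId = g.StaffGroupId
    INNER JOIN StaffMember m ON m.StaffMemberId = bg.StaffMemberId
    WHERE bg.StaffGroupId IS NULL
""").fetchall()

# With a second LEFT JOIN, group 2's NULL-extended row survives the filter.
fixed = conn.execute("""
    SELECT g.StaffGroupId FROM StaffGroup g
    LEFT JOIN StaffByGroup bg ON bg.StaffGroupId = g.StaffGroupId
    LEFT JOIN StaffMember m ON m.StaffMemberId = bg.StaffMemberId
    WHERE bg.StaffGroupId IS NULL
""").fetchall()
print(broken, fixed)
```

Joins compose left to right: each join operates on the composite result of everything before it, which is why the later `INNER JOIN` effectively turns the earlier outer join back into an inner one.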
What are the general storage and performance differences between the below two tables if their only difference is `nvarchar(50)` vs. `nvarchar(max)` and the strings in each field range from 1 to 50 characters? This is in SQL Server 2005.
**TABLE 1**
```
firstname nvarchar (50)
lastname nvarchar (50)
username nvarchar (50)
```
**TABLE 2**
```
firstname nvarchar (max)
lastname nvarchar (max)
username nvarchar (max)
``` | If you are guaranteed to have strings between 1 and 50 characters, then the same query run across strings of up-to-length X will run faster using varchar(X) vs. varchar(MAX). Additionally, you can't create an index on a varchar(MAX) field.
Once your rows have values above 8000 characters in length, then there are additional performance considerations to contend with (the rows are basically treated as TEXT instead of varchar(n)). Though this isn't terribly relevant as a comparison since there is no varchar(N) option for strings of length over 8000. | First and foremost, you won't be able to create indexes on the (max) length columns. So if you are going to store searchable data, ie., if these columns are going to be part of the WHERE predicate, you may not be able to improve the query performance. You may need to consider FullText Search and indexes on these columns in such a case. | Implications of nvarchar (50) vs nvarchar (max) | [
"",
"sql",
"sql-server",
"database",
"sql-server-2005",
""
] |
*edit*
I've realised that my question really has two parts:
* How do I group by time periods, and
* [How do I select a particular row in a group?](https://stackoverflow.com/q/3800551/1084416).
[One of the answers](https://stackoverflow.com/a/7630564/1084416) to the second question uses Postgres' [SELECT DISTINCT ON](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-DISTINCT), which means I don't need a group at all. I've posted my solution below.
I have data which is normally queried to get the most recent value. However I need to be able to reproduce what results would have been received if I'd queried every minute, going back to some timestamp.
I don't really know where to start. I have very little experience with SQL.
```
CREATE TABLE history
(
detected timestamp with time zone NOT NULL,
stat integer NOT NULL
)
```
I select like:
```
SELECT
detected,
stat
FROM history
WHERE
detected > '2013-11-26 20:19:58+00'::timestamp
```
Obviously this gives me every result since the given timestamp. I want every `stat` closest to minutes going back from now to the timestamp. By closest I mean 'less than'.
Sorry I haven't made a very good effort of getting anywhere near the answer. I'm so unfamiliar with SQL I don't know where to begin.
*edit*
This question, [How to group time by hour or by 10 minutes](https://stackoverflow.com/q/5002661/1084416), seems helpful:
```
SELECT timeslot, MAX(detected)
FROM
(
SELECT to_char(detected, 'YYYY-MM-DD hh24:MI') timeslot, detected
FROM
(
SELECT detected
FROM history
where
detected > '2013-11-28 13:09:58+00'::timestamp
) as foo
) as foo GROUP BY timeslot
```
This gives me the most recent `detected` timestamp on one minute intervals.
How do I get the `stat`? The `MAX` is run on all the `detected` grouped by minutes, but the `stat` is inaccessible.
*2nd edit*
I have:
```
timeslot;max
"2013-11-28 14:04";"2013-11-28 14:04:05+00"
"2013-11-28 14:17";"2013-11-28 14:17:22+00"
"2013-11-28 14:16";"2013-11-28 14:16:40+00"
"2013-11-28 14:13";"2013-11-28 14:13:31+00"
"2013-11-28 14:10";"2013-11-28 14:10:02+00"
"2013-11-28 14:09";"2013-11-28 14:09:51+00"
```
I would like:
```
detected;stat
"2013-11-28 14:04:05+00";123
"2013-11-28 14:17:22+00";125
"2013-11-28 14:16:40+00";121
"2013-11-28 14:13:31+00";118
"2013-11-28 14:10:02+00";119
"2013-11-28 14:09:51+00";121
```
`max` and `detected` are the same | I can offer you this solution:
```
with t (tstamp, stat) as(
values
( current_timestamp, 'stat1'),
( current_timestamp - interval '50' second, 'stat2'),
( current_timestamp - interval '100' second, 'stat3'),
( current_timestamp - interval '150' second, 'stat4'),
( current_timestamp - interval '200' second, 'stat5'),
( current_timestamp - interval '250' second, 'stat6')
)
select stat, tstamp
from t
where tstamp in (
select max(tstamp)
from t
group by date_trunc('minute', tstamp)
);
```
But it is in Oracle... maybe it helps you anyway | Okay another try :)
I tried it with my AdventureWorks DB from Microsoft. I took some other datatypes but it should work with datetimeoffset or similar datetimes too.
So i tried it with a loop. While your timestamp is less than NOW, select for me the data between your timestamp and the timestamp plus the interval size. With that i get the data in one interval, and then i set the timestamp plus the interval to get the next, till the while loop arrives at today.
Maybe that is a way, if not sorry for that :)
```
DECLARE @today date
DECLARE @yourTimestamp date
DECLARE @intervalVariable date
SET @intervalVariable = '2005-01-07' -- start at your timestamp
SET @today = '2010-12-31'
WHILE @intervalVariable < @today -- your Timestamp on the left side
BEGIN
SELECT FullDateAlternateKey FROM dbo.DimDate
WHERE FullDateAlternateKey BETWEEN @intervalVariable AND DATEADD(dd,3, @intervalVariable)
SET @intervalVariable = DATEADD(dd,3, @intervalVariable) -- the three is your intervale
print 'interval'
END
print 'Nothing or finished'
``` | Get most recent data to a periodic timestamp | [
"",
"sql",
"postgresql",
"datetime",
"select",
""
] |
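The per-minute bucketing from the question's edit — truncate each timestamp to the minute, keep the latest detection per bucket, and join back to recover the matching stat — can be sketched in Python's `sqlite3` (invented sample data, `substr` used in place of Postgres's `to_char`/`date_trunc`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (detected TEXT, stat INTEGER)")
conn.executemany("INSERT INTO history VALUES (?,?)",
                 [("2013-11-28 14:04:01", 120),
                  ("2013-11-28 14:04:05", 123),
                  ("2013-11-28 14:09:51", 121)])

# substr(detected, 1, 16) truncates 'YYYY-MM-DD hh24:MI:SS' to the minute;
# joining on the bucket's MAX(detected) recovers the row's stat.
rows = conn.execute("""
    SELECT h.detected, h.stat
    FROM history h
    JOIN (SELECT substr(detected, 1, 16) AS timeslot, MAX(detected) AS latest
          FROM history GROUP BY timeslot) t
      ON h.detected = t.latest
    ORDER BY h.detected
""").fetchall()
print(rows)
```

This is exactly the join-back step the asker was missing: the grouped subquery yields only the winning timestamp per minute, and the self-join retrieves the rest of that row's columns.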
I have three tables: `userProfile`, `loginTimes`, `orders`.
I am trying to get each user's profile row, their last login time, and their last order row.
Here's my query:
```
Select u.*, t.loginTime, orders.* From userProfiles u
Inner Join
(Select userId, MAX(time) loginTime From loginTimes Group By userID) t
On u.userId = t.userID
Inner Join
(Select userId, MAX(enterDate) orderDate From orders Group By userId) o
On u.userID = o.userID
Inner Join
orders On orders.userId = u.userId And orders.enterDate = o.orderDate
```
Is there any way to rewrite this without so many subqueries? | OP, I think this is the query you are going for. It still requires 2 subqueries, but I don't believe your original query functioned as intended.
You could remove the `loginTimes` subquery, and use `MAX(loginTime)` in the outer `SELECT` list, but then you'd need to `GROUP BY` every field in the `order` table, which is arguably just as unclean.
The following query retrieves the `UserId`, latest `LoginTime` and the entire `order` record for the user's most recent order:
```
SELECT u.userId,
u.userName,
l.loginTime,
o.*
FROM userProfiles u
INNER JOIN ( SELECT userId,
loginTime = MAX(time)
FROM loginTimes
GROUP BY userID) l ON u.userId = l.userId
INNER JOIN ( SELECT *,
rowNum = ROW_NUMBER() OVER (PARTITION BY userId
ORDER BY enterDate DESC)
FROM orders) o ON u.userId = o.userId AND o.rowNum = 1
```
[Working on SQLFiddle](http://sqlfiddle.com/#!3/82bf2/1/0) | I believe this will do:
```
SELECT u.userID
,u.otherColumn
,MAX(t.time) AS loginTime
,MAX(o.enterDate) AS orderDate
FROM userProfiles u
JOIN loginTimes t ON t.userID = u.userID
JOIN orders o ON o.userID = u.userID
GROUP BY u.userID, u.otherColumn
```
For every other column in userProfiles you add to the SELECT clause, you need to add it to the GROUP BY clause as well..
**Update:**
Just because it can be done.. I tried it without any subquery :)
```
SELECT u.userID
,MAX(t.time) AS loginTime
,o.*
FROM userProfiles u
JOIN loginTimes t ON t.userID = u.userID
JOIN orders o ON o.userID = u.userID
LEFT JOIN orders o1 ON o.userID = o1.userID AND o.enterDate < o1.enterDate
WHERE o1.orderID IS NULL
GROUP BY u.userID
,o.* --write out the fields here
```
You'll have to write down the fields of the orders table you want in the `select` clause in your `GROUP BY` clause also. | Transform sub-queries to joins | [
"",
"mysql",
"sql",
"sql-server",
""
] |
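Where window functions like `ROW_NUMBER()` are unavailable, the "latest order per user" step can also be written with a correlated `NOT EXISTS`; a hedged sketch in Python's `sqlite3` with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE userProfiles (userId INTEGER, userName TEXT);
CREATE TABLE orders (orderId INTEGER, userId INTEGER, enterDate TEXT);
INSERT INTO userProfiles VALUES (1,'ann'),(2,'bob');
INSERT INTO orders VALUES (10,1,'2013-01-01'),(11,1,'2013-02-01'),(12,2,'2013-01-15');
""")

# A row survives only if no later order exists for the same user.
rows = conn.execute("""
    SELECT u.userName, o.orderId, o.enterDate
    FROM userProfiles u
    JOIN orders o ON o.userId = u.userId
    WHERE NOT EXISTS (SELECT 1 FROM orders o2
                      WHERE o2.userId = o.userId AND o2.enterDate > o.enterDate)
    ORDER BY u.userName
""").fetchall()
print(rows)
```

This is the same greatest-per-group problem the `ROW_NUMBER() ... rowNum = 1` subquery solves; `ROW_NUMBER` additionally breaks ties deterministically, which `NOT EXISTS` does not.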
I want to import an `sql` file of over 1GB in size into a `MySQL` database on a localhost `WAMP/phpmyadmin` install, but the `phpmyadmin` UI doesn't allow importing such a big file.
What are the possible ways to do that, such as an `SQL` query to import the .sql file?
Thanks | I suspect you will be able to import a 1 GB file through phpMyAdmin. You can try increasing the following values in php.ini and restarting WAMP.
```
post_max_size=1280M
upload_max_filesize=1280M
max_execution_time = 300 //increase time as per your server requirement.
```
You can also try below command from command prompt, your path may be different as per your MySQL installation.
```
C:\wamp\bin\mysql\mysql5.5.24\bin\mysql.exe -u root -p db_name < C:\some_path\your_sql_file.sql
```
You should increase the `max_allowed_packet` of mysql in `my.ini` to avoid `MySQL server gone away` error, something like this
```
max_allowed_packet = 100M
``` | **Step 1:**
Find the `config.inc.php` file located in the phpmyadmin directory. In my case it is located here:
```
C:\wamp\apps\phpmyadmin3.4.5\config.inc.php
```
Note: phymyadmin3.4.5 folder name is different in different version of wamp
**Step 2:**
Find the line with `$cfg['UploadDir']` on it and update it to:
```
$cfg['UploadDir'] = 'upload';
```
**Step 3:**
Create a directory called ‘upload’ within the phpmyadmin directory.
```
C:\wamp\apps\phpmyadmin3.2.0.1\upload\
```
**Step 4:**
Copy and paste the large sql file into upload directory which you want importing to phymyadmin
**Step 5:**
Select sql file from drop down list from phymyadmin to import. | How to Import 1GB .sql file to WAMP/phpmyadmin | [
"",
"mysql",
"sql",
"phpmyadmin",
"wamp",
""
] |
Not sure about the correct words to ask this question, so I will break it down.
I have a table as follows:
```
date_time | a | b | c
```
Last 4 rows:
```
15/10/2013 11:45:00 | null | 'timtim' | 'fred'
15/10/2013 13:00:00 | 'tune' | 'reco' | null
16/10/2013 12:00:00 | 'abc' | null | null
16/10/2013 13:00:00 | null | 'died' | null
```
How would I get the last record, but with each null value replaced by the value from the most recent previous record that has one?
In my provided example the row returned would be
```
16/10/2013 13:00:00 | 'abc' | 'died' | 'fred'
```
As you can see, if the value for a column is null then it falls back to the last record which has a value for that column and uses that value.
This should be possible, I just can't figure it out. So far I have only come up with:
```
select
last_value(a) over w a
from test
WINDOW w AS (
partition by a
ORDER BY ts asc
range between current row and unbounded following
);
```
But this only caters for a single column ... | This should work, but keep in mind **it is an ugly solution**
```
select * from
(select dt from
(select rank() over (order by ctid desc) idx, dt
from sometable ) cx
where idx = 1) dtz,
(
select a from
(select rank() over (order by ctid desc) idx, a
from sometable where a is not null ) ax
where idx = 1) az,
(
select b from
(select rank() over (order by ctid desc) idx, b
from sometable where b is not null ) bx
where idx = 1) bz,
(
select c from
(select rank() over (order by ctid desc) idx, c
from sometable where c is not null ) cx
where idx = 1) cz
```
See it here at fiddle: <http://sqlfiddle.com/#!15/d5940/40>
The result will be
```
DT A B C
October, 16 2013 00:00:00+0000 abc died fred
``` | ### Order of rows
The "last row" and the sort order would need to be defined unambiguously. There is no natural order in a set (or a table). I am assuming `ORDER BY ts`, where ts is the timestamp column.
Like [@Jorge pointed out in his comment](https://stackoverflow.com/users/460557/jorge-campos): If `ts` is not `UNIQUE`, one needs to define tiebreakers for the sort order to make it unambiguous (add more items to `ORDER BY`). A primary key would be the ultimate solution.
## General solution with window functions
To get a result for *every* row:
```
SELECT ts
, max(a) OVER (PARTITION BY grp_a) AS a
, max(b) OVER (PARTITION BY grp_b) AS b
, max(c) OVER (PARTITION BY grp_c) AS c
FROM (
SELECT *
, count(a) OVER (ORDER BY ts) AS grp_a
, count(b) OVER (ORDER BY ts) AS grp_b
, count(c) OVER (ORDER BY ts) AS grp_c
FROM t
) sub;
```
### How?
The aggregate function `count()` ignores NULL values when counting. Used as aggregate-window function, it computes the running count of a column according to the [default window definition, which is `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`](https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS). NULL values don't increase the count, so these rows fall into the same peer group as the last non-null value.
In a second window function, the only non-null value per group is easily extracted with `max()` or min().
### Just the last row
```
WITH cte AS (
SELECT *
, count(a) OVER w AS grp_a
, count(b) OVER w AS grp_b
, count(c) OVER w AS grp_c
FROM t
WINDOW w AS (ORDER BY ts)
)
SELECT ts
, max(a) OVER (PARTITION BY grp_a) AS a
, max(b) OVER (PARTITION BY grp_b) AS b
, max(c) OVER (PARTITION BY grp_c) AS c
FROM cte
ORDER BY ts DESC
LIMIT 1;
```
### Simple alternatives for just the last row
```
SELECT ts
,COALESCE(a, (SELECT a FROM t WHERE a IS NOT NULL ORDER BY ts DESC LIMIT 1)) AS a
,COALESCE(b, (SELECT b FROM t WHERE b IS NOT NULL ORDER BY ts DESC LIMIT 1)) AS b
,COALESCE(c, (SELECT c FROM t WHERE c IS NOT NULL ORDER BY ts DESC LIMIT 1)) AS c
FROM t
ORDER BY ts DESC
LIMIT 1;
```
Or:
```
SELECT (SELECT ts FROM t ORDER BY ts DESC LIMIT 1) AS ts
,(SELECT a FROM t WHERE a IS NOT NULL ORDER BY ts DESC LIMIT 1) AS a
,(SELECT b FROM t WHERE b IS NOT NULL ORDER BY ts DESC LIMIT 1) AS b
,(SELECT c FROM t WHERE c IS NOT NULL ORDER BY ts DESC LIMIT 1) AS c
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=66d145ef875e36a6e1fdede4a5d4a1f0)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/f4f73/1)
### Performance
While this should be decently fast, if performance is your paramount requirement, consider a plpgsql function. Start with the last row and loop descending until you have a non-null value for every column required. Along these lines:
* [GROUP BY and aggregate sequential numeric values](https://stackoverflow.com/questions/8014577/group-by-and-aggregate-sequential-numeric-values/8014694#8014694) | Retrieve last known value for each column of a row | [
"",
"sql",
"postgresql",
"null",
"postgresql-9.2",
"window-functions",
""
] |
In SQL, is there any way to run multiple SELECT statements simultaneously, then join the results? Most examples that I've seen on the internet seem to involve the SELECT statements being run in sequential fashion, followed by JOINs and other statements.
An example of the code would look something like this:
```
SELECT x, y FROM Table apples WHERE ... as t1
left join
(SELECT x, y FROM Table oranges WHERE ... as t2)
on
t1.x = t2.x
```
Now imagine that `SELECT x, y FROM Table apples WHERE ... as t1` takes a long time. How can I run both SELECT statements above simultaneously (similar to threaded calls in say Java or C#, for example) to save time? Or does Oracle already do this?
Cheers | No, Oracle does not perform them at the same time. The closest that it comes to that is parallel query, in which stages of a single query can be apportioned across multiple parallel query sessions automatically and the results combined by a query coordinator session.
You might try combining the logic of multiple queries into one (as zerkms suggests) and optimising the query to use covering indexes (for example) -- in other words, standard approaches for a slow-performing query.
One of the potentially fatal problems with combining the results of separately executed queries is that they are not consistent. Each one would generally have a slightly different point of time for which it is looking for consistent data, unless you used flashback query to get slightly old results.
If the data is static then you could use DBMS\_Scheduler to run multiple queries at the same time that load to a common table, and then select from there. DBMS\_Scheduler can define chains of processes which can execute in series, or in parallel, and offers very sophisticated end condition checking on what steps should execute when. It's a big hammer to apply though, and is normally used for batch processing (eg data warehouse ELT processes). | It can be all done in a single query without joins or subqueries:
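The "combine the logic into one query" route zerkms suggested is conditional aggregation; as a sanity check, here is a sketch of that pattern against SQLite from Python (table and column names invented to match the shape of the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tasks (ks TEXT, Age INTEGER, Palt INTEGER);
INSERT INTO tasks VALUES
  ('A', 5, 3),   -- late: Age > Palt
  ('A', 2, 3),   -- on time
  ('B', 9, 1);   -- late
""")

rows = conn.execute("""
SELECT ks,
       COUNT(*)                                    AS n_tasks,
       SUM(CASE WHEN Age > Palt THEN 1 ELSE 0 END) AS n_late
FROM tasks
GROUP BY ks
ORDER BY ks
""").fetchall()

print(rows)  # [('A', 2, 1), ('B', 1, 1)]
```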
```
SELECT ks,
COUNT(*) AS '# Tasks',
SUM(CASE WHEN Age > Palt THEN 1 ELSE 0 END) AS '# Late'
FROM Table
GROUP BY ks
```
As a possible performance improvement it worth trying to create a composite `(ks, Age, Palt)` index | How to Run Two Slow SQL SELECT Statements Simultaneously | [
"",
"sql",
"oracle",
"plsql",
""
] |
Suppose I have a table like this
```
| id_grupo | nombre |
|:---------|----------------:|
| 1 | Emprendedores 1 |
| 2 | Emprendedores 2 |
| 3 | Emprendedoras 1 |
| 4 | Emprendedoras 2 |
| 5 | Los amigos 1 |
| 6 | Los amigos 2 |
| 7 | Los amigos no 1 |
```
I want to group names that are equal but **end** in a different number. If you look closely, there are names which consist of two or more words; however, the difference is the ending. Also there are names which look similar but are not the same, like "Los amigos" and "Los amigos no"; these belong to different groups. Also, "Emprendedores" and "Emprendedoras" are different.
This is the query I have:
```
SELECT *, GROUP_CONCAT(id_grupo)
FROM creabien_sacredi_dev.grupos
GROUP BY SUBSTRING(nombre,1,5)
```
It works fine with most of the records; however, the problem comes with strings like those in the example, which are very similar. I chose a substring of 5 characters, but in fact the names don't all have the same length, so some strings are not working as expected.
How can I group these strings in the following form?
```
| id_grupo | nombre | GROUP_CONCAT(id_grupo) |
|:---------|----------------:|-----------------------:|
| 1 | Emprendedores 1 | 1,2 |
| 3 | Emprendedoras 1 | 3,4 |
| 5 | Los amigos 1 | 5,6 |
| 7 | Los amigos no 1 | 7 |
```
I think the key is the last number: the string preceding it must be exactly the same, but I don't know how to do it. Could you help me, please?
**Edit:**
There are also records like 'Emprendedores' without any number at the end, and these should also be grouped with 'Emprendedores 1' and 'Emprendedores 2'. So I think the number isn't the key anymore; in fact, I doubt there exists a way to group these records.
```
SELECT CASE
WHEN RIGHT(nombre, 1) BETWEEN '0' AND '9' THEN
LEFT(nombre, Length(nombre) - 2)
ELSE nombre
END AS nombrechecked,
Group_concat(id_grupo)
FROM grupos
GROUP BY 1
```
Here is the [**SQL Fiddle**](http://sqlfiddle.com/#!9/89fb9/2/0) that shows it works. | If items to cut are numbers only and they are always separated by a space:
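Since `RIGHT()`/`LEFT()` are MySQL-specific, here is the same idea sketched in SQLite from Python (`substr()` takes their place); the ids are normalized to a sorted list because `group_concat()` order is not guaranteed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grupos (id_grupo INTEGER, nombre TEXT);
INSERT INTO grupos VALUES
  (1,'Emprendedores 1'), (2,'Emprendedores 2'),
  (3,'Emprendedoras 1'), (4,'Emprendedoras 2'),
  (5,'Los amigos 1'),    (6,'Los amigos 2'),
  (7,'Los amigos no 1');
""")

rows = conn.execute("""
SELECT CASE
         WHEN substr(nombre, -1) BETWEEN '0' AND '9'
         THEN substr(nombre, 1, length(nombre) - 2)   -- drop trailing ' <digit>'
         ELSE nombre
       END AS nombre_base,
       group_concat(id_grupo) AS ids
FROM grupos
GROUP BY nombre_base
ORDER BY min(id_grupo)
""").fetchall()

result = {name: sorted(ids.split(",")) for name, ids in rows}
print(result)  # e.g. {'Emprendedores': ['1', '2'], ..., 'Los amigos no': ['7']}
```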
```
SELECT CASE nombre REGEXP '[0-9]$'
WHEN 1 THEN REVERSE (SUBSTR(REVERSE(nombre),
INSTR(REVERSE(nombre),' '))) ELSE nombre END grupo,
GROUP_CONCAT(id_grupo)
FROM grupos
GROUP BY grupo;
```
Just a proposal ... :) probably not the most performant. Advantage is, that it works with larger numbers at the end as well.
Check out [this Fiddle](http://sqlfiddle.com/#!2/8d3cc/2). | Group by similar string | [
"",
"mysql",
"sql",
"group-by",
"case",
""
] |
I have the following SQL:
```
select <misc things>
from pluspbillline
left outer join workorder
on workorder.siteid=pluspbillline.siteid
and workorder.wonum = pluspbillline.refwo
and workorder.orgid = pluspbillline.orgid
left outer join ticket
on ticket.ticketid = pluspbillline.ticketid
and ticket.class=pluspbillline.ticketclass
left outer join pluspsalesorder
on pluspsalesorder.salesordernum=pluspbillline.salesordernum
and pluspsalesorder.siteid=pluspbillline.siteid
```
In Oracle SQL Developer 4.0.0.13 (connected to a DB2 database), I get a squiggly line underneath the following italics: "from *pluspbillline*" and "left outer join *workorder*".
The warning says "pluspbillline is disconnected from the rest of the join graph". What does this mean? | I'm not sure what's causing Oracle SQL Developer to give the error. But I'm putting this comment here to format it properly.
A join graph might look something like this
```
pluspbillline ------+----< workorder
|
+----< ticket
|
+----< pluspsalesorder
```
The lines on the graph might be labeled with the join fields. But this gives you a basic idea.
I don't see any reason why you are getting this warning. A column name typo in your SQL perhaps? Or some quirk in Oracle's interface that it doesn't understand the DB2 metadata properly? I suggested trying IBM's tool to see if it's merely their program. | I got this as well. I'm not exactly sure how to articulate it but the error seems to be based on the logical flow of the code.
Essentially because you mention the table `pluspbillline` before `workorder` I think it expects the join to be `on pluspbillline.siteid=workorder.siteid` etc.
It seems that the order of the conditions for joins should flow from the first identified tables to the latest ones. So the following should make it happy:
```
plusbillline to workorder on pluspbillline.siteid=workorder.siteid...
"" to ticket on pluspbillline.ticketid = ticket.ticketid...
"" to pluspsalesorder on pluspbillline.salesordernum = pluspsalesorder.salesordernum...
```
I don't believe this would change the work oracle does (assuming you don't use optimizer hints) so I'd only bother to change if you hate the squiggly lines. | SQL Developer "disconnected from the rest of the join graph" | [
"",
"sql",
"db2",
"oracle-sqldeveloper",
""
] |
I have a table name `Companies` with `372370` records.
And there is only one row which has `CustomerNo = 'YP20324'`.
I am running the following query and it's taking a very long time; I waited 5 minutes and it was still running. I couldn't figure out where the problem is.
```
UPDATE Companies SET UserDefined3 = 'Unzustellbar 13.08.2012' WHERE CustomerNo = 'YP20324'
``` | You don't have triggers on update on that table?
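Before anything else, it is worth checking whether the lookup on `CustomerNo` can use an index at all. A quick way to see the difference an index makes to the access path, sketched with SQLite from Python as a neutral stand-in for SQL Server's execution plan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Companies (CustomerNo TEXT, UserDefined3 TEXT)")
conn.executemany("INSERT INTO Companies VALUES (?, NULL)",
                 [("YP%05d" % i,) for i in range(40000)])

def plan(sql):
    # the last column of EXPLAIN QUERY PLAN output is the human-readable detail
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM Companies WHERE CustomerNo = 'YP20324'"
before = plan(q)  # full table scan
conn.execute("CREATE INDEX ix_CompaniesCustomerNo ON Companies(CustomerNo)")
after = plan(q)   # index search

print(before)
print(after)
```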
Do you have a cascade foreign key based on that column?
Are you sure of the performance of your server? Try taking a look at memory and CPU first when you execute the query (for example, on a 386 with 640 MB I could understand it being slow :p)
As for locks, you can right-click the database and in the report you can see the blocking transactions. Sometimes it helps for concurrent access.
```
CREATE INDEX ix_CompaniesCustomerNo ON Companies(CustomerNo);
```
Also check if there are other active queries which might block the update. | Sql Server 2008 Update query is taking so much time | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to update every row in order\_item. Status is a newly created column, and must have the latest value from the order\_update table. One item can have several updates.
I am using PostgreSQL 9.1
I have this update sql.
The table `order_item` has 800K records.
The table `order_update` has 5Mil records.
```
update order_item
set status = (
select production_stage
from order_update
where id = (
select max(id)
from order_update
where order_item_id = order_item.id
)
);
```
How can I make this SQL perform the best way possible? I know the update will take some time; I just want to have it as fast as possible.
I found that when doing just this sql on 5Mil records.
```
select max(id) from order_update where order_item_id = 100;
```
Explain:
```
Result (cost=784.10..784.11 rows=1 width=0)
  InitPlan 1 (returns $0)
-> Limit (cost=0.00..784.10 rows=1 width=8)
-> Index Scan Backward using order_update_pkey on order_update (cost=0.00..104694554.13 rows=133522 width=8)
Index Cond: (id IS NOT NULL)
Filter: (order_item_id = 100)
```
it takes about 6 seconds.
When I do the same sql in 1Mil records:
Explain:
```
Aggregate (cost=13.43..13.44 rows=1 width=8)
  -> Index Scan using order_update_order_item_id_idx on order_update (cost=0.00..13.40 rows=11 width=8)
Index Cond: (order_item_id = 100)
```
it takes around 11 ms.
11 ms vs. 6 sec. Why the HUGE diff?
To narrow it down a little I try this :
```
select id from order_update where order_item_id = 100 order by id asc
limit 1
Total query runtime: 41 ms.
```
and then this :
```
select id from order_update where order_item_id = 100 order by id desc
limit 1
Total query runtime: 5310 ms.
```
so a huge diff in asc and desc.
Solution :
Create index :
```
CREATE INDEX order_update_mult_idx ON order_update (order_item_id, id DESC);
```
Update :
```
UPDATE order_item i
SET test_print_provider_id = u.test_print_provider_id
FROM (
SELECT DISTINCT ON (1)
test_print_provider_id
FROM orders
ORDER BY 1, id DESC
) u
WHERE i.order_id = u.id
AND i.test_print_provider_id IS DISTINCT FROM u.test_print_provider_id;
``` | My educated guess: this will be *substantially* faster.
```
UPDATE order_item i
SET status = u.production_stage
FROM (
SELECT DISTINCT ON (1)
order_item_id, production_stage
FROM order_update
ORDER BY 1, id DESC
) u
WHERE i.id = u.order_item_id
AND i.status IS DISTINCT FROM u.production_stage; -- avoid empty updates
```
* There is a subtle **difference** to the query in the question. The original one updates *every* row of `order_item`. If no matching rows in `order_update` are found, this result in `status` being set to `NULL`. This query leaves those rows alone (original value kept, no update).
* Detailed explanation for the subquery with `DISTINCT ON` in this closely related answer:
[Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
* Generally, a *single* subquery should outperform your approach with correlated subqueries easily. Even more so with an optimized query.
* If `order_item.status` should be defined `NOT NULL`, the last line can be simplified with `<>`.
* A [multicolumn index](http://www.postgresql.org/docs/current/interactive/indexes-multicolumn.html) like this might help:
```
CREATE INDEX order_update_mult_idx ON order_update(order_item_id, id DESC);
```
The descending order on the second column is essential.
However, since you are using all or most of both tables in a single scan, an index is probably not going to help. Except for a [*covering* index](https://wiki.postgresql.org/wiki/Index-only_scans), maybe, in Postgres 9.2 or later:
```
CREATE INDEX order_update_mult_idx
ON order_update(order_item_id, id DESC, production_stage);
```
`EXPLAIN` only gives you the plan Postgres came up with. These numbers can be way off if the planner estimates and cost parameters are not set accurately. To get actual performance data, you would have to run `EXPLAIN ANALYZE` - which will take a long time for big tables, of course, since it test-executes the query. | It will help if you have an index in order\_update on the Id that includes order\_item\_id and production\_stage.
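For engines without `DISTINCT ON`, the same "latest update per item" derived table can be written with `row_number()`; a sketch in SQLite from Python (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE order_update (id INTEGER PRIMARY KEY,
                           order_item_id INTEGER,
                           production_stage TEXT);
INSERT INTO order_update VALUES
  (1, 10, 'cutting'), (2, 10, 'packing'),
  (3, 20, 'printing');
""")

rows = conn.execute("""
SELECT order_item_id, production_stage
FROM (
  SELECT order_item_id, production_stage,
         row_number() OVER (PARTITION BY order_item_id
                            ORDER BY id DESC) AS rn
  FROM order_update
)
WHERE rn = 1
ORDER BY order_item_id
""").fetchall()

print(rows)  # [(10, 'packing'), (20, 'printing')] — latest stage per item
```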
Other than that, this is fairly straightforward. Using a temporary table instead of a subquery might be an option, but I don't see much else that can be improved. | Update query on big table slow | [
"",
"sql",
"postgresql",
"sql-update",
"greatest-n-per-group",
"postgresql-performance",
""
] |
How to drop columns from a SQL Server temp table if exists from a list?
Something like:
```
DECLARE @cols AS NVARCHAR(MAX)
@cols will have value like [first_name], [last_name], .....
So @cols = '[first_name], [last_name], .....'
```
Now
```
ALTER TABLE #my_tables
DROP COLUMN @cols
```
How can I check whether `#my_tables` contains `last_name` or not? If it doesn't, ignore it instead of throwing an error. I still need to drop the other columns that do exist in @cols.
I think we can do an IF EXISTS condition on individual columns.
Thanks | I agree with CRAFTY DBA: you should be passing these values in as a table-valued parameter, not as a comma-separated list. But that doesn't really change the problem: you still to generate all of the `DROP COLUMN` commands dynamically, whether the column names are parsed from a CSV using a table-valued function, or pulled from a TVP.
I'll continue with your current requirement of a comma-separated list.
First, create a string splitting function. This is the one I use:
```
CREATE FUNCTION [dbo].[SplitString]
(
@List NVARCHAR(MAX),
@Delim VARCHAR(255)
)
RETURNS TABLE
AS
RETURN ( SELECT [Value] FROM
(
SELECT
[Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number],
CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number])))
FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name)
FROM sys.all_objects) AS x
WHERE Number <= LEN(@List)
AND SUBSTRING(@Delim + @List, [Number], LEN(@Delim)) = @Delim
) AS y
);
GO
```
Now, create a silly #temp table:
```
CREATE TABLE #foo(id INT, x INT);
```
Now, you can declare the list of columns you want to try to delete, create a dynamic SQL statement that checks that they exist before dropping them, and then drops the ones it can:
```
DECLARE @cols VARCHAR(MAX);
SET @cols = '[first_name], [last_name], [id]';
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';
SELECT @sql = @sql + N'IF EXISTS (SELECT 1 FROM tempdb.sys.columns AS c
WHERE [object_id] = OBJECT_ID(''tempdb..#foo'')
AND name = ' + REPLACE(REPLACE(QUOTENAME(s.Value, ''''),']',''),'[','') + ')
ALTER TABLE #foo DROP COLUMN ' + s.Value + ';'
FROM dbo.SplitString(@cols, ',') AS s;
EXEC sp_executesql @sql;
SELECT * FROM #foo; -- this only returns the column x
```
If you run this part of the code again, against the same table, it will run without error. Some errors you might come across are:
* if you try to drop a column that participates in any constraint (default, PK, FK, check, etc).
* if you try to drop the last column in the table.
* if you have columns with really poorly chosen names that somehow bypass all of the protection I've tried to include here. | You can just use dynamic SQL; however, watch out for SQL Injection.
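The core of the approach, catalog check first, then drop only what exists, can be sketched in a dialect-neutral way with SQLite from Python (`PRAGMA table_info` plays the role of `tempdb.sys.columns`; `ALTER TABLE ... DROP COLUMN` needs SQLite ≥ 3.35, hence the guard):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER, x INTEGER)")

cols = "[first_name], [last_name], [id]"
wanted = [c.strip(" []") for c in cols.split(",")]      # poor man's string split
existing = {r[1] for r in conn.execute("PRAGMA table_info(foo)")}
to_drop = [c for c in wanted if c in existing]
print(to_drop)  # ['id'] — the only listed column that actually exists

for c in to_drop:
    try:
        conn.execute(f'ALTER TABLE foo DROP COLUMN "{c}"')
    except sqlite3.OperationalError:
        pass  # SQLite < 3.35 has no DROP COLUMN; real code would surface this
```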
I would pass @cols as a parameter to a stored procedure.
The body of the procedure would be below.
```
-- Declare local variables
DECLARE @STMT NVARCHAR(MAX);
-- Create the stmt
SET @STMT = 'ALTER TABLE #my_tables DROP COLUMN ' + @cols;
-- Execute the stmt
EXEC sp_executesql @STMT;
```
I would rather pass a Table Value Parameter, but that is just me. | Drop columns from a #temp table based on CSV input parameter | [
"",
"sql",
"sql-server",
""
] |
Maybe I am missing something. I am trying to do a pretty simple select statement. The logic I want is that if two columns (ActivityDate and CreateDate) equal each other, and not both of the Hours column and the DoubleBookedHours column are zero, the row should be returned:
```
SELECT *
FROM DailyTaskHours
WHERE ActivityDate = CreateDate
AND (
Hours != 0
AND DoubleBookedHours != 0
)
```
The above is returning nothing.

The first row in the above picture is what it SHOULD return, because not both of the columns Hours and DoubleBookedHours are zero but the dates are equivalent. The second row is what I don't want it to return, because both Hours and DoubleBookedHours are zero.
What am I doing wrong?
Edit:
```
SELECT *
FROM DailyTaskHours
WHERE ActivityDate = CreateDate AND Hours != 0 AND DoubleBookedHours != 0
```
The above Returns nothing.
```
SELECT *
FROM DailyTaskHours
WHERE ActivityDate = CreateDate
AND Hours = 0
AND DoubleBookedHours = 0
```
The above returns the opposite of what I want:
You were performing just the opposite. By using `!=` you are saying anything but 0. Also, there is no need to use the parentheses in your where clause.
```
SELECT *
FROM DailyTaskHours
WHERE ActivityDate = CreateDate
AND Hours = 0
AND DoubleBookedHours = 0
```
If you want those returned that don't have both `Hours = 0` and `DoubleBookedHours = 0`, then:
```
SELECT *
FROM DailyTaskHours
WHERE ActivityDate = CreateDate
AND NOT
(Hours = 0 AND DoubleBookedHours = 0)
``` | It should be:
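By De Morgan's law, `NOT (Hours = 0 AND DoubleBookedHours = 0)` is the same as `Hours <> 0 OR DoubleBookedHours <> 0`. A quick check with the two rows from the screenshots, using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DailyTaskHours (ActivityDate TEXT, CreateDate TEXT,
                             Hours REAL, DoubleBookedHours REAL);
INSERT INTO DailyTaskHours VALUES
  ('2012-01-01', '2012-01-01', 0, 1),   -- should be returned
  ('2012-01-01', '2012-01-01', 0, 0);   -- should be filtered out
""")

rows = conn.execute("""
SELECT * FROM DailyTaskHours
WHERE ActivityDate = CreateDate
  AND NOT (Hours = 0 AND DoubleBookedHours = 0)
""").fetchall()

print(rows)  # only the row with a non-zero DoubleBookedHours survives
```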
```
--test data
DECLARE @DailyTaskHours AS TABLE (ActivityDate datetime, CreateDate datetime, [Hours] int, DoubleBookedHours int)
INSERT INTO @DailyTaskHours VALUES ('01 Jan 2012','01 Jan 2012',0,1)
INSERT INTO @DailyTaskHours VALUES ('01 Jan 2012','01 Jan 2012',0,0)
--query
SELECT *
FROM @DailyTaskHours
WHERE ActivityDate = CreateDate
AND ([Hours] <> 0 OR DoubleBookedHours <> 0)
--result
ActivityDate CreateDate Hours DoubleBookedHours
2012-01-01 00:00:00.000 2012-01-01 00:00:00.000 0 1
``` | WHERE clause not grouping as expected | [
"",
"sql",
"sql-server",
""
] |
I need to sum up the amount in 2 tables (c1, c2) linked n:1 to table a. The problem is: it would be great if I could do it in just 1 query, because the real situation is a bit more complicated ;-) I brought it down to this test case:
```
create table a (
`id` int(10) unsigned NOT NULL, KEY(id)
) ENGINE=InnoDB;
create table c1 (
`id` int(10) unsigned NOT NULL, KEY(id),
`a` int(10),
`amount` decimal(15,2) NOT NULL
) ENGINE=InnoDB;
create table c2 (
`id` int(10) unsigned NOT NULL, KEY(id),
`a` int(10),
`amount` decimal(15,2) NOT NULL
) ENGINE=InnoDB;
INSERT INTO a SET id=1;
INSERT INTO c1 SET a=1, amount = 2;
INSERT INTO c1 SET a=1, amount = 3;
INSERT INTO c2 SET a=1, amount = 1;
SELECT SUM(c1.amount), SUM(c2.amount)
FROM a
LEFT JOIN c1 ON c1.a = a.id
LEFT JOIN c2 ON c2.a = a.id
WHERE a.id = 1;
```
The result of course is:
```
+----------------+----------------+
| SUM(c1.amount) | SUM(c2.amount) |
+----------------+----------------+
| 5.00 | 2.00 |
+----------------+----------------+
```
because c1 is joined twice and doubles the amount in c2. But I need to get:
```
+----------------+----------------+
| SUM(c1.amount) | SUM(c2.amount) |
+----------------+----------------+
| 5.00 | 1.00 |
+----------------+----------------+
```
Any idea how to get to this? | One possible answer is:
```
SELECT (select SUM(c1.amount) from c1 where c1.a = a.id) as c1_amount,
(select SUM(c2.amount) from c2 where c2.a = a.id) as c2_amount
FROM a
WHERE a.id = 1;
```
[Link to SQL Fiddle](http://www.sqlfiddle.com/#!2/51431/1)
BTW - Thanks for putting in the data and create scripts. That helped a lot. | ```
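Both the fan-out and the fix are easy to reproduce on the test case from the question; this SQLite/Python sketch shows the plain double join doubling c2's sum while the scalar subqueries do not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a  (id INTEGER);
CREATE TABLE c1 (a INTEGER, amount REAL);
CREATE TABLE c2 (a INTEGER, amount REAL);
INSERT INTO a  VALUES (1);
INSERT INTO c1 VALUES (1, 2), (1, 3);
INSERT INTO c2 VALUES (1, 1);
""")

joined = conn.execute("""
SELECT SUM(c1.amount), SUM(c2.amount)
FROM a LEFT JOIN c1 ON c1.a = a.id
       LEFT JOIN c2 ON c2.a = a.id
WHERE a.id = 1
""").fetchone()

scalar = conn.execute("""
SELECT (SELECT SUM(amount) FROM c1 WHERE c1.a = a.id),
       (SELECT SUM(amount) FROM c2 WHERE c2.a = a.id)
FROM a WHERE a.id = 1
""").fetchone()

print(joined)  # (5.0, 2.0)  <- c2's single row duplicated by the c1 join
print(scalar)  # (5.0, 1.0)  <- correct
```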
SELECT a.*
, SUM(CASE WHEN b.source = 'c1' THEN amount END) c1_ttl
, SUM(CASE WHEN b.source = 'c2' THEN amount END) c2_ttl
FROM a
JOIN
(
SELECT *,'c1' source FROM c1 UNION SELECT *,'c2' FROM c2
) b
ON b.a = a.id;
+----+--------+--------+
| id | c1_ttl | c2_ttl |
+----+--------+--------+
| 1 | 5.00 | 1.00 |
+----+--------+--------+
``` | MySql: JOIN and SUM from different tables | [
"",
"mysql",
"sql",
"distinct",
""
] |
How can I export a MySQL table structure as a text-version table?
I mean something like this:
```
+-----------+-----------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-----------+------+-----+-------------------+-----------------------------+
| EID | int(11) | NO | PRI | 0 | |
| MOD_EID | int(11) | YES | | NULL | |
| EXIT_TIME | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+-----------+-----------+------+-----+-------------------+-----------------------------+
```
I'm sure there is some tool which will export a table like this. Does anyone know how to do this from MySQL?
* DESC : ease of use
* SHOW CREATE TABLE : ease of create new table with another table's same schema
* information\_schema : difficult to use, but powerful.
**1. using DESCRIBE**
```
DESC $DB_NAME.$TBL_NAME;
sample output
mysql> DESC jsheo_test.test;
+--------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------+-------------+------+-----+---------+-------+
| name | varchar(10) | YES | | NULL | |
| age | int(11) | YES | | NULL | |
| spent | int(11) | YES | | NULL | |
| gender | char(1) | YES | | NULL | |
+--------+-------------+------+-----+---------+-------+
```
**2. using SHOW CREATE TABLE**
```
SHOW CREATE TABLE $DB_NAME.$TBL_NAME;
sample output
mysql> SHOW CREATE TABLE jsheo_test.test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`name` varchar(10) DEFAULT NULL,
`age` int(11) DEFAULT NULL,
`spent` int(11) DEFAULT NULL,
`gender` char(1) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8
```
**3. using information schema**
```
SELECT TABLE_NAME
, COLUMN_NAME
, ORDINAL_POSITION
, DATA_TYPE
, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = '$DB_NAME'
AND TABLE_NAME = '$TBL_NAME'
ORDER BY TABLE_NAME, ORDINAL_POSITION;
sample output
+------------+-------------+------------------+-----------+-------------+
| TABLE_NAME | COLUMN_NAME | ORDINAL_POSITION | DATA_TYPE | IS_NULLABLE |
+------------+-------------+------------------+-----------+-------------+
| test | name | 1 | varchar | YES |
| test | age | 2 | int | YES |
| test | spent | 3 | int | YES |
| test | gender | 4 | char | YES |
+------------+-------------+------------------+-----------+-------------+
```
**4. from shell**
*run use `-e` option*
```
$ mysql -uusername -S ~/tmp/mysql.sock -e "DESC jsheo_test.test"
+--------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------+-------------+------+-----+---------+-------+
| name | varchar(10) | YES | | NULL | |
| age | int(11) | YES | | NULL | |
| spent | int(11) | YES | | NULL | |
| gender | char(1) | YES | | NULL | |
+--------+-------------+------+-----+---------+-------+
```
Otherwise the output format is strange, something like below:
```
$ echo "desc jsheo_test.test;" | mysql -uusername -S /tmp/mysql.sock
Field Type Null Key Default Extra
name varchar(10) YES NULL
age int(11) YES NULL
spent int(11) YES NULL
gender char(1) YES NULL
``` | you can SELECT some fields and store them in a OUTFILE with this command:
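If you need the same `+---+` text table outside the mysql client, it can be rendered from any catalog query; a small Python sketch over SQLite's `PRAGMA table_info` (column set reduced to Field/Type/Null for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test (name VARCHAR(10), age INT,
                                   spent INT, gender CHAR(1))""")

header = ("Field", "Type", "Null")
# table_info rows are (cid, name, type, notnull, dflt_value, pk)
rows = [(r[1], r[2], "NO" if r[3] else "YES")
        for r in conn.execute("PRAGMA table_info(test)")]

widths = [max(len(str(v)) for v in col) for col in zip(header, *rows)]
sep = "+" + "+".join("-" * (w + 2) for w in widths) + "+"

def line(vals):
    return "| " + " | ".join(str(v).ljust(w) for v, w in zip(vals, widths)) + " |"

print(sep); print(line(header)); print(sep)
for r in rows:
    print(line(r))
print(sep)
```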
> SELECT \* FROM table\_name INTO OUTFILE 'textile.txt' | How to generate SQL table structure as a text table? | [
"",
"mysql",
"sql",
"database",
"structure",
""
] |
I am trying to find which customer has the most transactions. Transaction table has an foreign key that identifies each transaction with a customer. What I currently is the following code:
```
WITH Customers as (
SELECT
[CustName] as 'Customer',
[TRANSACTION].[CustID] as 'Total # of Transactions'
FROM [dbo].[CUSTOMER]
INNER JOIN [dbo].[TRANSACTION]
ON [CUSTOMER].[CustID] = [TRANSACTION].[CustID]
)
SELECT *
FROM Customers
WHERE 'Total # of Transactions' = (SELECT MAX('Total # of Transactions') FROM Customers);
```
Two things are wrong:
1) The latter part of the code doesn't accept 'Total # of Transactions'. If I were to rename it to a single word, I could treat it kind of like a variable.
2) My last SELECT statement gives me a result of the customer and all their transactions, but doesn't give me a COUNT of those transactions. I'm not sure how to use COUNT in conjunction with MAX. | Your inner table just returns `CustID` as a total number of transactions? You need to start by finding the total count for each customer. Also for a column you can use `[Name]`, when you use apostrophes it thinks you are comparing a string. If you want to return all customers with the highest count, you could use this:
```
WITH TransactionCounts as (
SELECT
CustID,
COUNT(*) AS TransactionCount
FROM [dbo].[TRANSACTION]
GROUP BY CustID
)
SELECT TOP 1 CUSTOMER.*, TransactionCount
FROM TransactionCounts
INNER JOIN CUSTOMER ON CUSTOMER.CustID = TransactionCounts.CustId
ORDER BY TransactionCount DESC
-- alternate to select all if multiple customers are tied for highest count
--WHERE TransactionCount = (SELECT MAX(TransactionCount) FROM TransactionCounts)
``` | First select customers and transaction count.
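Outside T-SQL, `TOP 1` becomes `LIMIT 1`; here is the same count-then-take-the-max shape sketched in SQLite from Python (tiny invented data set, and the table is named `Trans` because `TRANSACTION` is a keyword in most dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (CustID INTEGER PRIMARY KEY, CustName TEXT);
CREATE TABLE Trans    (TransID INTEGER PRIMARY KEY, CustID INTEGER);
INSERT INTO Customer VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO Trans (CustID) VALUES (1), (2), (2), (2);
""")

row = conn.execute("""
SELECT c.CustName, COUNT(*) AS TransactionCount
FROM Trans t JOIN Customer c ON c.CustID = t.CustID
GROUP BY t.CustID
ORDER BY TransactionCount DESC
LIMIT 1
""").fetchone()

print(row)  # ('Bob', 3) — the customer with the most transactions
```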
Then select the largest one.
Them limit your select to that item.
Work you way from the inside out.
```
SELECT *
FROM Customers
WHERE CustID =
(
SELECT TOP 1 CustID
FROM (SELECT CustID, COUNT(*) AS TCOUNT
FROM TRANSACTIONS
GROUP BY CustID) T
ORDER BY T.TCOUNT DESC
) TT
``` | Using COUNT with MAX in SQL | [
"",
"sql",
"sql-server-2012",
""
] |
I want to get the most recent value that is at least one hour old in my data, but when I execute this query
```
SELECT locid, value
FROM table2
WHERE date_sub(t_stamp, interval 1 hour)
and locid = '2815'
order by t_stamp desc
```
I get all the values. How can I fix this?
The output should be
```
locid | value
2815 | 13.0
```
Here's the demo: <http://sqlfiddle.com/#!2/b3c89/5> | The Following query will get the current hour and subtract one from it and then compare it to the hour of `t_stamp`.
```
SELECT locid, value
FROM table2
WHERE DATE(t_stamp) = DATE(CURRENT_DATE)
AND HOUR(t_stamp) = HOUR(CURRENT_TIME) -1
AND locid = '2815'
ORDER BY t_stamp desc
LIMIT 1
```
Or the following query might be what you are looking for:
```
SELECT locid, value
FROM table2
WHERE t_stamp <= DATE_SUB(NOW(), INTERVAL 1 HOUR)
AND locid = '2815'
ORDER BY t_stamp desc
LIMIT 1
``` | Simply add a LIMIT to your query to return only the first row.
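The second query's shape is easy to verify; in SQLite, `DATE_SUB(NOW(), INTERVAL 1 HOUR)` becomes `datetime('now', '-1 hour')`. A sketch from Python with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table2 (locid INTEGER, value REAL, t_stamp TEXT);
INSERT INTO table2 VALUES
  (2815, 13.0, datetime('now', '-2 hours')),   -- most recent value >= 1h old
  (2815, 11.0, datetime('now', '-3 hours')),
  (2815, 99.0, datetime('now', '-5 minutes')); -- too recent, must be skipped
""")

row = conn.execute("""
SELECT locid, value
FROM table2
WHERE t_stamp <= datetime('now', '-1 hour')
  AND locid = 2815
ORDER BY t_stamp DESC
LIMIT 1
""").fetchone()

print(row)  # (2815, 13.0)
```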
```
SELECT locid, value
FROM table2
WHERE date_sub(t_stamp, interval 1 hour)
and locid = '2815'
order by t_stamp desc limit 1
``` | Get most recent older value from one hour ago in mysql | [
"",
"mysql",
"sql",
"time",
""
] |
The query I really need to execute is follows:
```
SELECT u.points
(SELECT COUNT(1) FROM (SELECT 1 FROM checkin c INNER JOIN wineries w ON w.id = c.winery_id WHERE c.user_id = u.id GROUP BY region_id) b) as states_visited
FROM users u
GROUP BY u.id
ORDER BY points DESC
```
However, this causes the following error:
Unknown column 'u.id' in 'where clause'
I've tried with user-defined variables, no errors, but it's not actually referencing the user-defined variable value for some reason:
```
SELECT @uid := u.id, u.points
(SELECT COUNT(1) FROM (SELECT 1 FROM checkin c INNER JOIN wineries w ON w.id = c.winery_id WHERE c.user_id = @uid GROUP BY region_id) b) as states_visited
FROM users u
GROUP BY u.id
ORDER BY points DESC
```
Any thoughts how I can make this work? Without the obvious resorting to doing two separate queries? | Why do you need the double nesting? And if the `id` is the primary key of table `user`, you don't need the `GROUP BY` either:
```
SELECT u.points,
(SELECT COUNT(1) FROM reviews WHERE user_id = u.id) AS review_count
FROM users AS u
-- GROUP BY u.id
ORDER BY points DESC ;
```
You could also `GROUP BY` in a derived table - and then join:
```
SELECT u.points,
COALESCE(r.review_count,0) AS review_count
FROM users AS u
LEFT JOIN
(SELECT user_id, COUNT(1) AS review_count
FROM reviews
GROUP BY user_id
) AS r
ON r.user_id = u.id
ORDER BY points DESC ;
```
or join and then `GROUP BY`:
```
SELECT u.points,
COUNT(r.user_id) AS review_count
FROM users AS u
LEFT JOIN
reviews AS r
ON r.user_id = u.id
GROUP BY u.id, u.points
ORDER BY points DESC ;
```
---
The edited version is harder but can be done without double nesting, too:
```
SELECT u.points,
(SELECT COUNT(DISTINCT region_id)
FROM checkin c INNER JOIN wineries w ON w.id = c.winery_id
WHERE c.user_id = u.id
) AS states_visited
FROM users u
ORDER BY points DESC ;
``` | It seems like you can do what you want with a simpe table join
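The corrected correlated subquery runs essentially unchanged on other engines; a toy SQLite/Python version with invented check-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users    (id INTEGER PRIMARY KEY, points INTEGER);
CREATE TABLE wineries (id INTEGER PRIMARY KEY, region_id INTEGER);
CREATE TABLE checkin  (user_id INTEGER, winery_id INTEGER);
INSERT INTO users    VALUES (1, 50), (2, 80);
INSERT INTO wineries VALUES (10, 100), (11, 100), (12, 200);
INSERT INTO checkin  VALUES (1, 10), (1, 11), (1, 12), (2, 10);
""")

rows = conn.execute("""
SELECT u.points,
       (SELECT COUNT(DISTINCT w.region_id)
        FROM checkin c JOIN wineries w ON w.id = c.winery_id
        WHERE c.user_id = u.id) AS states_visited
FROM users u
ORDER BY u.points DESC
""").fetchall()

print(rows)  # [(80, 1), (50, 2)] — user 1 visited 2 distinct regions
```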
```
SELECT u.id AS `id`, u.points AS `points`, COUNT(r.review_id) AS `review_count` /* or whatever the id record for reviews table is */
FROM users AS u
INNER JOIN reviews AS r
ON u.id = r.user_id
GROUP BY `id`
ORDER BY `points` DESC
``` | MySQL - Referencing outer table value from double nested subquery | [
"",
"mysql",
"sql",
"subquery",
""
] |
I have 4 tables
```
1. the first table(d_cities) for cities // related with the next table by country_id
CityId |CountryID |RegionID |City |Latitude|Longitude
2. the second table(d_country) for countries
CountryId|Country
3. the third table(ip2location_db11) for ip
ip_from|ip_to |country_code|country_name| city_name
4 the fourth table (ip_relation) would be like this
CountryID |CityId|ip_from |ip_to
```
I created the fourth table to collect custom data from the three tables and put it in one place.
This is done by:
joining (d_country, d_cities) by id,
then comparing these names with the IP table; if they match,
it will fetch the ids for the names & IPs that matched and put them in the fourth table.
So I wrote my code like this, and I need help modifying it:
```
INSERT ip_relations (CountryID, CityId,ip_from,ip_to)
SELECT *
FROM d_cities
INNER JOIN d_country ON d_cities.CountryID = d_country.CountryId
INNER JOIN ip2location_db11 ON ip2location_db11.country_name = d_country.Country
AND ip2location_db11.city_name = d_cities.City
```
/// this sql statement not work | ```
INSERT INTO ip_relations (CityId,CountryID,ip_from,ip_to)
SELECT
d_cities.CityId,
d_cities.CountryID,
ip2location_db11.ip_from,
ip2location_db11.ip_to
FROM d_cities
INNER JOIN d_country ON d_cities.CountryID = d_country.CountryId
INNER JOIN ip2location_db11 ON ip2location_db11.country_name = d_country.Country
AND ip2location_db11.city_name = d_cities.City
``` | first, i am not sure why do you design table like this.maybe you could change them like below:
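A sketch of the accepted `INSERT ... SELECT` on a minimal invented data set, using SQLite from Python, to show that only matching city/country pairs land in `ip_relations`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE d_country (CountryId INTEGER, Country TEXT);
CREATE TABLE d_cities  (CityId INTEGER, CountryID INTEGER, City TEXT);
CREATE TABLE ip2location_db11 (ip_from INTEGER, ip_to INTEGER,
                               country_name TEXT, city_name TEXT);
CREATE TABLE ip_relations (CityId INTEGER, CountryID INTEGER,
                           ip_from INTEGER, ip_to INTEGER);
INSERT INTO d_country VALUES (1, 'Egypt');
INSERT INTO d_cities  VALUES (7, 1, 'Cairo');
INSERT INTO ip2location_db11 VALUES (100, 200, 'Egypt', 'Cairo'),
                                    (300, 400, 'Egypt', 'Nowhere');
""")

conn.execute("""
INSERT INTO ip_relations (CityId, CountryID, ip_from, ip_to)
SELECT d_cities.CityId, d_cities.CountryID,
       ip2location_db11.ip_from, ip2location_db11.ip_to
FROM d_cities
JOIN d_country ON d_cities.CountryID = d_country.CountryId
JOIN ip2location_db11
  ON ip2location_db11.country_name = d_country.Country
 AND ip2location_db11.city_name = d_cities.City
""")

rows = conn.execute("SELECT * FROM ip_relations").fetchall()
print(rows)  # [(7, 1, 100, 200)] — only the matching city/country pair
```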
```
d_cities: city_id | country_id | region_id |city | latitude| longitude
d_country: country_id | country
ip2location_db11: ip_from | ip_to | country_code | city_id
```
PS: I am not very sure what country\_code means, so I kept it. Based on the table structure above, it is mostly like this: country to city is one-to-many, city\_id must be unique, and the IPs only relate to city\_id.
I think this would be a better design...
Then, if you have to solve the problem based on your current tables:
you could make a unique key "UNIQUE KEY `uniq_from_to` (`ip_from`,`ip_to`)", and use this SQL:
```
INSERT IGNORE ip_relation
(SELECT cA.country_id,cA.city_id,ip.ip_from,ip.ip_to FROM ip2location_db11 ip
LEFT JOIN (SELECT cy.country,ct.city,ct.country_id,ct.city_id FROM d_country cy,d_cities ct WHERE cy.country_id = ct.country_id) as cA ON ip.city_name = cA.city AND ip.country_name = cA.country);
```
This means: 1. find all city-country groups; then, based on those groups, 2. insert into your fourth table; when an ip\_from/ip\_to pair is a duplicate, the new row is skipped (which is what INSERT IGNORE does).
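To see the join-then-insert step actually run, here is a small self-contained sketch using Python's sqlite3 (column names follow the question, but the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE d_country (CountryId INTEGER, Country TEXT);
CREATE TABLE d_cities  (CityId INTEGER, CountryID INTEGER, City TEXT);
CREATE TABLE ip2location_db11 (ip_from INTEGER, ip_to INTEGER,
                               country_name TEXT, city_name TEXT);
CREATE TABLE ip_relation (CountryID INTEGER, CityId INTEGER,
                          ip_from INTEGER, ip_to INTEGER);
INSERT INTO d_country VALUES (1, 'Egypt');
INSERT INTO d_cities VALUES (10, 1, 'Cairo');
INSERT INTO ip2location_db11 VALUES (100, 200, 'Egypt', 'Cairo'),
                                    (300, 400, 'France', 'Paris');
""")
# Explicit column list in both INSERT and SELECT, so positions line up
conn.execute("""
    INSERT INTO ip_relation (CountryID, CityId, ip_from, ip_to)
    SELECT c.CountryID, c.CityId, ip.ip_from, ip.ip_to
    FROM d_cities c
    JOIN d_country co ON c.CountryID = co.CountryId
    JOIN ip2location_db11 ip ON ip.country_name = co.Country
                            AND ip.city_name = c.City
""")
rows = conn.execute("SELECT * FROM ip_relation").fetchall()
print(rows)  # -> [(1, 10, 100, 200)]
```

Only the matched city/country pair makes it into the relation table; the unmatched Paris row is dropped by the inner joins.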
Hope this can give you some help. | How to populate a table with data returned from 3 joined tables | [
"",
"mysql",
"sql",
""
] |
I'm designing a database and can't figure out how to model referential integrity.
I have the following tables
```
CREATE TABLE Groups
(
GroupId INT PRIMARY KEY,
GroupName VARCHAR(50)
)
CREATE TABLE GroupMembers
(
GroupId INT NOT NULL,
MemberId INT NOT NULL,
MemberName VARCHAR(50),
CONSTRAINT pk_GroupMember PRIMARY KEY (GroupId, MemberId)
)
CREATE TABLE Missions
(
MissionId INT PRIMARY KEY,
GroupId INT NOT NULL,
MissionName VARCHAR(50)
)
CREATE TABLE MissionRollAssignments
(
MissionId INT NOT NULL,
MemberId INT NOT NULL,
MemberRoll VARCHAR(50) --This will probably become RollId and move details to another table
)
```
Every mission will have assignments for some/all members of the corresponding group. There will be several missions associated with each group, but only one mission per group is active at a given time.
My question is:
Is it possible to enforce referential integrity for roll assignments such that only members
of the corresponding group (given by the MissionId) are selected? I know I can filter this from the GUI, but I'd feel more comfortable if I could create a FK constraint from MissionRollAssignments to GroupMembers while considering the GroupId indicated in the Mission.
A second question would be if you guys think this is a good way to model my domain, or maybe I should try a different approach.
Thanks in advance for any help on this.
Best regards,
Awer | You could put GroupId into MissionRollAssignments and then add two constraints as follows:
```
ALTER TABLE MissionRollAssignments
ADD CONSTRAINT fk1 FOREIGN KEY (GroupId, Memberid)
REFERENCES GroupMembers (GroupId, Memberid);
ALTER TABLE MissionRollAssignments
ADD CONSTRAINT fk2 FOREIGN KEY (GroupId, MissionId)
REFERENCES Missions (GroupId, MissionId);
```
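The two composite foreign keys can also be exercised quickly outside SQL Server. Below is a minimal sketch using Python's sqlite3 (tables trimmed to the key columns, sample rows invented; the `UNIQUE (GroupId, MissionId)` plays the role of the redundant uniqueness constraint the answer mentions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this enabled per connection
conn.executescript("""
CREATE TABLE GroupMembers (
    GroupId  INTEGER NOT NULL,
    MemberId INTEGER NOT NULL,
    PRIMARY KEY (GroupId, MemberId)
);
CREATE TABLE Missions (
    MissionId INTEGER PRIMARY KEY,
    GroupId   INTEGER NOT NULL,
    UNIQUE (GroupId, MissionId)   -- the "redundant" uniqueness constraint
);
CREATE TABLE MissionRollAssignments (
    MissionId INTEGER NOT NULL,
    GroupId   INTEGER NOT NULL,
    MemberId  INTEGER NOT NULL,
    FOREIGN KEY (GroupId, MemberId)  REFERENCES GroupMembers (GroupId, MemberId),
    FOREIGN KEY (GroupId, MissionId) REFERENCES Missions (GroupId, MissionId)
);
INSERT INTO GroupMembers VALUES (1, 10);
INSERT INTO Missions (MissionId, GroupId) VALUES (100, 1), (200, 2);
""")
# Member 10 is in group 1 and mission 100 belongs to group 1: accepted.
conn.execute("INSERT INTO MissionRollAssignments VALUES (100, 1, 10)")
# Member 10 is NOT in group 2, so assigning him to group 2's mission fails.
try:
    conn.execute("INSERT INTO MissionRollAssignments VALUES (200, 2, 10)")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # -> True
```

The cross-group assignment is rejected with a FOREIGN KEY constraint failure, which is exactly the filtering the question asks the schema to enforce.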
To achieve this SQL Server first requires a (redundant) UNIQUE constraint on (GroupId, MissionId) in the Missions table. Other DBMSs are not so strict but SQL Server requires a FOREIGN KEY constraint to match exactly the columns of a uniqueness constraint. | You should use Foreign Keys to reinforce this, eg Mission.GroupId should refer to Group.GroupId. | Referential integrity over table portion | [
"",
"sql",
"sql-server",
"database",
"database-design",
""
] |
```
INSERT INTO kid_progress_Backup (userid)
SELECT id
FROM users1
WHERE id>770
```
I am a newbie, but I copied and pasted this and then tried it on my SQLite tables.
id is the autoincrement column, so maybe there is a problem there. I also tried changing id and using another column, but that doesn't work either.
An error didn't even pop up, which is really weird.
Basically I have two tables. I have added records to the first, and now I want to add that number of records to the other table. The relationship of keys is users1.id to kid\_progress\_Backup.userid. Then I need to add/change values in the kid\_progress\_Backup table according to some values in the users1 table. First I am just trying to add the same number of new records; then I was going to do an UPDATE and SET. | Your problem may occur because there are **some other fields** in **kid_progress_Backup** **which can't be NULL and do not have default values**.
If you use SQLite, then you should use the INSERT OR REPLACE statement for your case. So you will not need an UPDATE. But you should define a PRIMARY KEY or UNIQUE INDEX on the userid column. | Try this SQL query:
```
SET IDENTITY_INSERT kid_progress_Backup ON
INSERT INTO kid_progress_Backup (userid)
SELECT id
FROM users1
WHERE id>770
SET IDENTITY_INSERT kid_progress_Backup OFF
``` | INSERT INTO SELECT FROM doesn't work. Not even an error | [
"",
"sql",
"sqlite",
"insert",
""
] |
What does Go want for the second param in this SQL query?
I am trying to use the `IN` lookup in postgres.
```
stmt, err := db.Prepare("SELECT * FROM awesome_table WHERE id= $1 AND other_field IN $2")
rows, err := stmt.Query(10, ???)
```
What I really want:
```
SELECT * FROM awesome_table WHERE id=10 AND other_field IN (this, that);
``` | Query just takes varargs to replace the params in your sql
so, in your example, you would just do
```
rows, err := stmt.Query(10)
```
say, this and that of your second example were dynamic, then you'd do
```
stmt, err := db.Prepare("SELECT * FROM awesome_table WHERE id=$1 AND other_field IN ($2, $3)")
rows, err := stmt.Query(10,"this","that")
```
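The same placeholder mechanics can be sanity-checked with Python's sqlite3 (a sketch; SQLite uses `?` where the Postgres driver uses `$1`, `$2`, and the table and rows here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE awesome_table (id INTEGER, other_field TEXT)")
conn.executemany("INSERT INTO awesome_table VALUES (?, ?)",
                 [(10, "this"), (10, "that"), (10, "other"), (11, "this")])

# One positional parameter per placeholder, exactly as in stmt.Query(10, "this", "that")
sql = ("SELECT other_field FROM awesome_table "
       "WHERE id = ? AND other_field IN (?, ?) ORDER BY other_field")
rows = conn.execute(sql, (10, "this", "that")).fetchall()
print(rows)  # -> [('that',), ('this',)]
```

Each value in the IN list needs its own placeholder; the driver never expands a single parameter into a list.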
If you have variable args for the "IN" part, you can do ([play](http://play.golang.org/p/oagPvpxP7-))
```
package main
import "fmt"
import "strings"
func main() {
stuff := []interface{}{"this", "that", "otherthing"}
sql := "select * from foo where id=? and name in (?" + strings.Repeat(",?", len(stuff)-1) + ")"
fmt.Println("SQL:", sql)
args := []interface{}{10}
args = append(args, stuff...)
fakeExec(args...)
// This also works, but I think it's harder for folks to read
//fakeExec(append([]interface{}{10},stuff...)...)
}
func fakeExec(args ...interface{}) {
fmt.Println("Got:", args)
}
``` | It looks like you may be using the [pq driver](https://github.com/lib/pq). `pq` recently added Postgres-specific Array support via [pq.Array](https://godoc.org/github.com/lib/pq#Array) (see [pull request 466](https://github.com/lib/pq/pull/466)). You can get what you want via:
```
stmt, err := db.Prepare("SELECT * FROM awesome_table WHERE id= $1 AND other_field = ANY($2)")
rows, err := stmt.Query(10, pq.Array([]string{'this','that'})
```
I think this generates the SQL:
```
SELECT * FROM awesome_table WHERE id=10 AND other_field = ANY('{"this", "that"}');
```
Note this utilizes prepared statements, so the inputs should be sanitized. | How to execute an IN lookup in SQL using Golang? | [
"",
"sql",
"go",
""
] |
Invoices Table
```
invoice_id invoice_date
------------ --------------
1 2013-11-27
2 2013-10-09
3 2013-09-12
```
Orders Table
```
order_id invoice_id product quantity total
--------- ---------- --------- --------- -------
1 1 Product 1 100 1000
2 1 Product 2 50 200
3 2 Product 1 40 400
4 3 Product 2 50 200
```
And I want a single SQL query that produces the following result:
```
products Month 9 Total Month 10 Total Month 11 Total
-------- ------------- -------------- -------------
Product 1 0 400 100
Product 2 200 0 200
```
I have tried the following SQL query:
```
SELECT orders.products, DATEPART(Year, invoices.invoice_date) Year, DATEPART(Month, invoices.invoice_date) Month, SUM(orders.total) [Total],
FROM invoices INNER JOIN orders ON invoices.invoice_id=orders.invoice_id
GROUP BY orders.products, DATEPART(Year, invoices.invoice_date), DATEPART(Month, invoices.invoice_date)
```
But it returns nothing. Is it possible to get this result with a single query, and what should I do for that? Thanks | I think you want to use PIVOT here ...
Try this:
```
WITH tmp
AS
(
SELECT orders.products,
DATEPART(Year, invoices.invoice_date) Year,
DATEPART(Month, invoices.invoice_date) Month,
SUM(orders.total) [Total]
FROM invoices INNER JOIN orders ON invoices.invoice_id = orders.invoice_id
GROUP BY
orders.products,
DATEPART(Year, invoices.invoice_date),
DATEPART(Month, invoices.invoice_date)
)
SELECT products,
ISNULL([9],0) AS Nine, ISNULL([10],0) AS Ten, ISNULL([11],0) as Eleven
FROM tmp
PIVOT
(
SUM([Total])
FOR Month IN
( [9], [10], [11])
) as PVT;
```
You can edit it here: <http://sqlfiddle.com/#!6/6f80f/6> | One way to do it is with Cases:
```
Select
O.product,
Sum(Case
When DATEPART(M, I.invoice_Date) = 9 Then O.total
Else 0
End) as Month9,
Sum(Case
When DATEPART(M, I.invoice_Date) = 10 Then O.total
Else 0
End) as Month10,
Sum(Case
When DATEPART(M, I.invoice_Date) = 11 Then O.total
Else 0
End) as Month11
From Invoice I
Left join Orders O on I.invoice_id = O.invoice_id
Group by O.product
```
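That conditional-aggregation version is portable enough to check quickly with Python's sqlite3 (a sketch using the sample data from the question; `strftime` stands in for `DATEPART`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoices (invoice_id INTEGER, invoice_date TEXT);
CREATE TABLE orders (invoice_id INTEGER, product TEXT, total INTEGER);
INSERT INTO invoices VALUES (1,'2013-11-27'),(2,'2013-10-09'),(3,'2013-09-12');
INSERT INTO orders VALUES (1,'Product 1',1000),(1,'Product 2',200),
                          (2,'Product 1',400),(3,'Product 2',200);
""")
# One SUM(CASE ...) column per month bucket
rows = conn.execute("""
    SELECT o.product,
           SUM(CASE WHEN strftime('%m', i.invoice_date) = '09' THEN o.total ELSE 0 END) AS m9,
           SUM(CASE WHEN strftime('%m', i.invoice_date) = '10' THEN o.total ELSE 0 END) AS m10,
           SUM(CASE WHEN strftime('%m', i.invoice_date) = '11' THEN o.total ELSE 0 END) AS m11
    FROM invoices i JOIN orders o ON o.invoice_id = i.invoice_id
    GROUP BY o.product
    ORDER BY o.product
""").fetchall()
print(rows)
# -> [('Product 1', 0, 400, 1000), ('Product 2', 200, 0, 200)]
```

The CASE expressions route each joined row's total into exactly one month column before the GROUP BY sums them.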
There is another way using Pivots (depending on your version of SQL).
```
Select product, [9], [10], [11]
From
(
Select O.Product, O.total, DatePart(M, I.Invoice_Date) as [MonthNum]
From Invoice I
Left join Orders O on I.invoice_id = O.invoice_id
) P
PIVOT
(
Sum(P.total)
For [MonthNum] in ([9], [10], [11])
) as O
``` | JOIN, GROUP BY AND SUM in single Query | [
"",
"sql",
"sql-server",
""
] |
```
SELECT first_name, last_name
FROM employees
WHERE REGEXP_LIKE (first_name, '^Ste(v|ph)en$');
```
The following query returns the first and last names for those employees with a first name of Steven or Stephen (where first\_name begins with Ste and ends with en and in between is either v or ph)
Is there an opposite call, where the query will return everything that does `not have` (v or ph) between Ste and en?
so that it would return things like:
Stezen
Stellen
is it as simple as putting `NOT` in front of `REGEXP_LIKE`? | How about `MINUS`
```
SELECT *
FROM employees
WHERE REGEXP_LIKE( first_name , '^Ste([[:alpha:]])+en$')
MINUS
SELECT *
FROM employees
WHERE REGEXP_LIKE( first_name , '^Ste(v|ph)en$');
```
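The match-then-exclude idea is easy to check outside the database too; here is a small sketch with Python's re module (the names are invented test data):

```python
import re

names = ["Steven", "Stephen", "Stezen", "Stellen", "Smith"]

# Match the generic Ste...en shape but reject the v/ph variants,
# mirroring the REGEXP_LIKE ... AND NOT REGEXP_LIKE pair
generic = re.compile(r"^Ste([a-z]+)en$")
excluded = re.compile(r"^Ste(v|ph)en$")

hits = [n for n in names if generic.match(n) and not excluded.match(n)]
print(hits)  # -> ['Stezen', 'Stellen']
```

Steven and Stephen are filtered out by the exclusion pattern while the other Ste...en names survive.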
and this too:
```
WITH t AS
( SELECT 'Stezen' first_name FROM dual
UNION ALL
SELECT 'Steven' FROM dual
UNION ALL
SELECT 'Stephen' FROM dual
)
SELECT *
FROM t
WHERE REGEXP_LIKE( first_name , '^Ste([[:alpha:]])+en$')
AND NOT REGEXP_LIKE( first_name , '^Ste(v|ph)en$');
``` | Two options:
1. The first query uses two `REGEXP_LIKE` tests: one regular expression to generically match; and one for excluding the invalid matches.
2. The second query uses `REGEXP_SUBSTR` to testfor a generic match and extract the sub-group of the match and then tests to see whether it should be exluded.
The third query then looks at how you can extend the query by having another table containing the match criteria and allows you to build and test multiple name variants.
[SQL Fiddle](http://sqlfiddle.com/#!4/7000b/3)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE tbl ( str ) AS
SELECT 'Stephen' FROM DUAL
UNION ALL SELECT 'Steven' FROM DUAL
UNION ALL SELECT 'Stepen' FROM DUAL
UNION ALL SELECT 'Steephen' FROM DUAL
UNION ALL SELECT 'Steeven' FROM DUAL
UNION ALL SELECT 'Steeven' FROM DUAL
UNION ALL SELECT 'Smith' FROM DUAL
UNION ALL SELECT 'Smithe' FROM DUAL
UNION ALL SELECT 'Smythe' FROM DUAL
UNION ALL SELECT 'Smythee' FROM DUAL;
CREATE TABLE exclusions ( prefix, exclusion, suffix ) AS
SELECT 'Ste', 'v|ph', 'en' FROM DUAL
UNION ALL SELECT 'Sm', 'ithe?|ythe', '' FROM DUAL;
```
**Query 1**:
```
SELECT str
FROM tbl
WHERE REGEXP_LIKE( str, '^Ste(\w+)en$' )
AND NOT REGEXP_LIKE( str, '^Ste(v|ph)en$' )
```
**[Results](http://sqlfiddle.com/#!4/7000b/3/0)**:
```
| STR |
|----------|
| Stepen |
| Steephen |
| Steeven |
| Steeven |
```
**Query 2**:
```
SELECT str
FROM (SELECT str,
REGEXP_SUBSTR( str, '^Ste(\w+)en$', 1, 1, NULL, 1 ) AS match
FROM tbl)
WHERE match IS NOT NULL
AND NOT REGEXP_LIKE( match, '^(v|ph)$' )
```
**[Results](http://sqlfiddle.com/#!4/7000b/3/1)**:
```
| STR |
|----------|
| Stepen |
| Steephen |
| Steeven |
| Steeven |
```
**Query 3**:
```
SELECT str
FROM tbl t
WHERE EXISTS ( SELECT 1
FROM exclusions e
WHERE REGEXP_LIKE( t.str, '^' || e.prefix || '(\w+)' || e.suffix || '$' )
AND NOT REGEXP_LIKE( t.str, '^' || e.prefix || '(' || e.exclusion || ')' || e.suffix || '$' )
)
```
**[Results](http://sqlfiddle.com/#!4/7000b/3/2)**:
```
| STR |
|----------|
| Stepen |
| Steephen |
| Steeven |
| Steeven |
| Smythee |
``` | Oracle SQL REGEX_LIKE | [
"",
"sql",
"oracle",
""
] |
So let's say I have the following tables Person and Wage. It's a 1-N relation, where a person can have more than one wage.
```
**Person**
id
name
**Wage**
id
person_id
amount
effective_date
```
Now, I want to query a list of all persons and their latest wages. I can get the results by doing the following query:
```
SELECT
p.*,
( SELECT w.amount
FROM wages AS w
WHERE w.person_id = p.id
ORDER BY w.effective_date
LIMIT 1
) as wage_amount,
( SELECT w.effective_date
FROM wages AS w
WHERE w.person_id = p.id
ORDER BY w.effective_date
LIMIT 1
) as effective_date
FROM person as p
```
The problem is, my query will have multiple sub-queries from different tables. I want to make it as efficient as possible. Is there an alternative to using sub-queries that would be faster and give me the same results? | Proper indexing would probably make your version work efficiently (that is, an index on `wages(person_id, effective_date)`).
The following produces the same results with a single subquery:
```
SELECT p.*, w.amount, w.effective_date
from person p left outer join
(select person_id, max(effective_date) as maxdate
from wages
group by person_id
) maxw
on maxw.person_id = p.id left outer join
wages w
on w.person_id = p.id and w.effective_date = maxw.maxdate;
```
And this version might make better use of indexes than the above version:
```
SELECT p.*, w.amount, w.effective_date
from person p left outer join
wages w
on w.person_id = p.id
where not exists (select * from wages w2 where w2.person_id = w.person_id and w2.effective_date > w.effective_date);
```
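A quick executable check of the derived-table version (a sketch with Python's sqlite3 using inner joins for brevity; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE wages (person_id INTEGER, amount INTEGER, effective_date TEXT);
INSERT INTO person VALUES (1,'Ann'),(2,'Bob');
INSERT INTO wages VALUES (1,100,'2013-01-01'),(1,120,'2013-06-01'),
                         (2,90,'2013-03-01');
""")
# Derived table finds each person's latest date; join back to pick that row
rows = conn.execute("""
    SELECT p.name, w.amount, w.effective_date
    FROM person p
    JOIN (SELECT person_id, MAX(effective_date) AS maxdate
          FROM wages GROUP BY person_id) m
      ON m.person_id = p.id
    JOIN wages w
      ON w.person_id = p.id AND w.effective_date = m.maxdate
    ORDER BY p.name
""").fetchall()
print(rows)  # -> [('Ann', 120, '2013-06-01'), ('Bob', 90, '2013-03-01')]
```

Each person comes back exactly once with their most recent wage row.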
Note that these versions will return multiple rows for a single person, when there are two "wages" with the same maximum effective date. | Subqueries can be a good solution like Sam S mentioned in his answer but it really depends on the subquery, the dbms you are using, and your indexes. See this question and answers for a good discussion on the performance of subqueries vs. joins: [Join vs. sub-query](https://stackoverflow.com/questions/2577174/join-vs-sub-query)
If performance is an issue for you, you must consider using the `EXPLAIN` command of your dbms. It will show you how the query is being built and where the bottlenecks are. Based on its results, you might consider rewriting your query some other way.
For instance, it was usually the case that a `join` would yield better performance, so you could rewrite your query according to this answer: <https://stackoverflow.com/a/2111420/362298> and compare their performance.
Note that creating the right indexes will also make a big difference.
Hope it helps. | MySQL alternative to using a subquery | [
"",
"mysql",
"sql",
""
] |
I am trying to use a stored procedure that has many possible where clauses. What I have for the stored procedure now returns "Incorrect syntax near the keyword 'from'" as an error.
```
Select @SQL = 'SELECT Table1.Col1, Table2.Col2
FROM Table1
INNER JOIN Table2 on Table1.Col1 = Table2.Col1
WHERE ' + @where
Exec(@SQL)
```
@Where would be something like
```
'Table1.Col1 = 'Apples' OR Table1.Col1 = 'BANNANAS' OR Table2.Col2 = 'CHOCOLATE''
```
Edit:
After messing around with it using all the suggestions, I was able to get it to run without an error. Now it won't return any results though. | Could your issue be that you need to double quote all of your literals?
```
'Table1.Col1 = ''Apples'' OR Table1.Col1 = ''BANNANAS'' OR Table2.Col2 = ''CHOCOLATE'''
``` | Here is an alternative solution.
It's not "dynamic sql", but it is flexible and "cleaner" IMHO.
```
Use Northwind
GO
declare @holder table (ProductName nvarchar(40))
Insert into @holder (ProductName )
select
'Chai' union all select 'Chang' union all select 'Aniseed Syrup'
SELECT
[ProductID]
,[ProductName]
,[SupplierID]
,[CategoryID]
,[QuantityPerUnit]
,[UnitPrice]
,[UnitsInStock]
,[UnitsOnOrder]
,[ReorderLevel]
,[Discontinued]
FROM
[dbo].[Products] prod
where
exists (select null from @holder innerH where innerH.ProductName = prod.ProductName)
declare @holderCOUNT int
select @holderCOUNT = count(*) from @holder
SELECT
[ProductID]
,[ProductName]
,[SupplierID]
,[CategoryID]
,[QuantityPerUnit]
,[UnitPrice]
,[UnitsInStock]
,[UnitsOnOrder]
,[ReorderLevel]
,[Discontinued]
FROM
[dbo].[Products] prod
where
( @holderCOUNT = 0 OR ( exists (select null from @holder innerH where innerH.ProductName = prod.ProductName) ))
delete from @holder
select @holderCOUNT = count(*) from @holder
SELECT
[ProductID]
,[ProductName]
,[SupplierID]
,[CategoryID]
,[QuantityPerUnit]
,[UnitPrice]
,[UnitsInStock]
,[UnitsOnOrder]
,[ReorderLevel]
,[Discontinued]
FROM
[dbo].[Products] prod
where
( @holderCOUNT = 0 OR ( exists (select null from @holder innerH where innerH.ProductName = prod.ProductName) ))
``` | Variable as Where clause in stored procedure | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have run the below query:
```
DECLARE @temp TABLE(Id int identity(1,1), name varchar(20))
INSERT INTO @temp (name)
select name from sys.tables
```
The above query works fine without any issues on one machine, but when I run it on another machine it throws an error. It's very weird to me. I have attached a screenshot.
Both machines have the same SQL Server edition: SQL Server 2008 R2 Express (Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Express Edition with Advanced Services on Windows NT 5.1 (Build 2600: Service Pack 3)). | This means one machine has table names that are all less than 20 characters, but the machine it doesn't work on has table names that are longer than 20 characters.
Change the size of your `name` column to `nvarchar(255)`; you can go all the way up to 4000 if you still have trouble.
```
DECLARE @temp TABLE(Id int identity(1,1), name nvarchar(255))
INSERT INTO @temp (name)
select name from sys.tables
```
EDIT: based on @Raj's response and my subsequent research, I have modified the 'varchar' column to be 'nvarchar' because that is what table names can hold. For quick queries, I personally don't care if I use 255 instead of the actual potential length of a column's name (128). But per @Raj and the T-SQL documentation, the max column name length is 128.
<http://technet.microsoft.com/en-us/library/ms188348.aspx> | As Raj states the datatype for a column name in SQL Server is SYSNAME, therefore just use this as it is both backward and forward compatible:
```
DECLARE @temp TABLE(Id int identity(1,1), name sysname)
INSERT INTO @temp (name)
select name from sys.tables
```
See [What is SYSNAME data type in SQL Server?](https://stackoverflow.com/questions/5720212/what-is-sysname-data-type-in-sql-server) | Need resolution for this below code issue | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am creating a table with `name` as a column such that there are multiple rows with the same name, and corresponding to every name there are other values (columns).
I have to design the table such that there are two sets of values, **default** and **latest**. So the user may query to get either the default or the latest set, and may edit the table to make a particular set of values the default, replacing the previous default set.
I am thinking of implementing this by keeping a **flag** column where:
0 means this row is default set and 1 means the latest set and 2 means just a general row.
TABLE:
```
NAME ADDRESS CITY FLAG
Hansen Timoteivn 10 Sandnes 1 (LATEST)
Hansen street 13 texas 0 (DEFAULT)
Svendson Borgvn 23 Sandnes 1 (LATEST)
Svendson street 14 colaba 2 (GENERAL)
Svendson street 15 Houston 0 (DEFAULT)
Pettersen Storgt 20 Stavanger 0 (DEFAULT)
Pettersen Storgt 21 texas 1 (LATEST)
```
I am thinking of implementing it like this:
```
alter table TABLE_NAME add flag number CONSTRAINT <constraintname>
CHECK (flag IN (0, 1, 2));
```
Is there a better approach to do this apart from keeping a **flag** ??
Am i doing it wrong?? | > Is there a better approach to do this apart from keeping a flag ??
Typically a default setting will be shown by a flag column, but not information about the latest.
To store latest information a table will include 2 additional timestamp columns: **Created** and **Modified** (and you might have two additional columns showing who eg CreatedBy, ModifiedBy).
If you have these auditing columns then figuring out the latest is just a matter of looking at these timestamps depending on if latest means most recent created or most recent modified. | > Is there a better approach to do this apart from keeping a flag ?? Am i doing it wrong??
If you can't distinguish "default" data from "latest" data any other way, you have to either add a column or add a table.
But I don't understand why you'd choose to use an integer the way you're using one. You're representing two distinct values with *three* distinct integers. | Database Table design issue? | [
"",
"mysql",
"sql",
"database",
""
] |
The following is an example of the text I want to create a function to query against:
```
"!PRINTSERVER.PAR
$MODE=QUIET
$DEBUG=N
$LOG_FILE=[file path]
$PRINTER_LIST=
-ACCOUNTS_LASER,\\print02\Accounts_Laser,winspool,Ne34:
-BE_PRINTER01,\\print01\BE_Printer01,winspool,Ne03:
-CUSTSERV_PRINTER,\\print01\CS_Laser,winspool,Ne06:
```
As an 'in' parameter of my function I want to search for the logical printer name, e.g. `ACCOUNTS_LASER` and I would like it to return the physical path e.g. `\\print02\Accounts_Laser`.
Also, the field in question that contains the aforementioned text has a data type of `long`, therefore I believe it requires converting to a string before any Oracle functions can be applied to it.
I would guess that I need a combination of `substr` and `instr` or maybe `regexp` however any assistance would be much appreciated. | Thanks to everyone who posted suggestions. I have used some of the sql in the answers below to create a function that solves my issue.
```
create or replace function get_printer_path (l_printer_name in varchar2)
return varchar2
is
p_printer_path varchar2(5000);
cursor temp is
select
regexp_replace(dbms_xmlgen.getxmltype('select info from print_server where
server_name = ''STAGING'''), '.*-'|| l_printer_name ||',([^,]*),.*', '\1', 1, 1, 'n')
from dual;
begin
open temp;
fetch temp into p_printer_path;
if (p_printer_path not like '\\%') then
p_printer_path := null;
end if;
close temp;
return p_printer_path;
end get_printer_path;
```
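The extract-the-field-after-the-name idea can also be checked outside the database; here is a sketch with Python's re module (the config text is abridged from the question):

```python
import re

config = r"""
$PRINTER_LIST=
-ACCOUNTS_LASER,\\print02\Accounts_Laser,winspool,Ne34:
-BE_PRINTER01,\\print01\BE_Printer01,winspool,Ne03:
"""

def get_printer_path(name, text):
    # The path is the first comma-separated field after "-<name>,"
    m = re.search(r"-%s,([^,]+)," % re.escape(name), text)
    return m.group(1) if m else None

print(get_printer_path("ACCOUNTS_LASER", config))  # -> \\print02\Accounts_Laser
print(get_printer_path("MISSING", config))         # -> None
```

Because `[^,]+` cannot cross a comma, the capture stops at the next field, which is the same behavior the backreference-based REGEXP_REPLACE achieves in the function above.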
If there are any further enhancements, or if I am breaking any standard practices, then please continue to comment. | ### Regex
```
'^(.*?' || 'ACCOUNTS_LASER' || ',)([^,]+)(.*)$'
```
### Flags
```
n
```
### Description
```
^(.*?<Printer name goes here>',)([^,]+)(.*)$
```

### Recipe
```
-- INIT
create table test (input clob);
insert into test(input) values('
"!PRINTSERVER.PAR
$MODE=QUIET
$DEBUG=N
$LOG_FILE=[file path]
$PRINTER_LIST=
-ACCOUNTS_LASER,\\print02\Accounts_Laser,winspool,Ne34:
-BE_PRINTER01,\\print01\BE_Printer01,winspool,Ne03:
-CUSTSERV_PRINTER,\\print01\CS_Laser,winspool,Ne06:
');
-- SELECT
select
regexp_replace(input, '^(.*?' || 'ACCOUNTS_LASER' || ',)([^,]+)(.*)$','\2', 1, 0, 'n') printer_path
from
test
union all
select
regexp_replace(input, '^(.*?' || 'BE_PRINTER01' || ',)([^,]+)(.*)$','\2', 1, 0, 'n') printer_path
from
test
union all
select
regexp_replace(input, '^(.*?' || 'CUSTSERV_PRINTER' || ',)([^,]+)(.*)$','\2', 1, 0, 'n') printer_path
from
test
```
outputs
```
|PRINTER_PATH |
|--------------------------|
| \\print02\Accounts_Laser |
| \\print01\BE_Printer01 |
| \\print01\CS_Laser |
``` | Oracle SQL: Return text in string based on search of string | [
"",
"sql",
"regex",
"oracle",
"plsql",
"substring",
""
] |
First, I would like to mention that I already checked all other asked questions, and none of them is similar to mine, so I don't think it's a duplicate.
I have two tables: "Article_tbl" with more than 300,000 rows so far, and "ArticleZone_tbl" with almost the same row count.
"Article\_tbl" Contains a Identity primary key, "ArticleID".
"ArticleZone\_tbl" contains a primary key consisting of three columns, "ArticleID", "ChannelID", "ZoneID"; Where "ArticleID" is a foreign key from "Article\_tbl"
Non clustered indexes were created on the columns to order by.
SQL Query:
```
WITH OrderedOrders AS(
Select ROW_NUMBER() Over(Order by LastEditDate desc, ArticleOrder Asc, LastEditDateTime desc) as RowNum, dbo.Article_tbl.*, ArticleZone_tbl.ChannelID, ArticleZone_tbl.ZoneID, ArticleZone_tbl.ArticleOrder
From Article_tbl INNER JOIN ArticleZone_tbl
ON dbo.Article_tbl.ArticleID = dbo.ArticleZone_tbl.ArticleID
Where ChannelID=1 And ZoneID=0)
SELECT * FROM OrderedOrders Where RowNum Between 1 And 10
```
The above query is taking about 2 seconds to complete; is there any way to optimize this query?
More info:
OS: Windows WebServer 2008R2
SQL Sever: 2008R2
RAM: 32GB
HDD: 160GB SSD
Thanks in advance.
Best regards,
McHaimech | You could try creating an [Indexed View](http://msdn.microsoft.com/en-us/library/ms191432.aspx) on the two tables:
```
CREATE VIEW dbo.YourIndexedView
WITH SCHEMABINDING
AS
SELECT az.ArticleID,
az.ChannelID,
az.ZoneID,
a.LastEditDate,
a.LastEditDateTime,
az.ArticleOrder
FROM dbo.Article_tbl a
INNER JOIN dbo.ArticleZone_tbl az
ON a.ArticleID = az.ArticleID;
GO
CREATE UNIQUE CLUSTERED INDEX UQ_YourIndexView_ArticleID_ChannelID_ZoneID
ON dbo.YourIndexedView (ArticleID, ChannelID, ZoneID);
```
Once you have your clustered index in place you can create a nonclustered index that would assist in the sorting:
```
CREATE NONCLUSTERED INDEX IX_YourIndexedView_LastEditDate_ArticleOrder_LastEditDateTime
ON dbo.YourIndexedView (LastEditDate DESC, ArticleOrder ASC, LastEditDateTime DESC);
```
You can then reference this in your query:
```
WITH OrderedOrders AS
( SELECT RowNum = ROW_NUMBER() OVER(ORDER BY LastEditDate DESC, ArticleOrder ASC, LastEditDateTime DESC),
ArticleID,
ChannelID,
ZoneID,
LastEditDateTime,
ArticleOrder
FROM dbo.YourIndexedView WITH (NOEXPAND)
WHERE ChannelID = 1
AND ZoneID = 0
)
SELECT *
FROM OrderedOrders
WHERE RowNum BETWEEN 1 AND 10;
```
*N.B. I may have missed some columns from your article table, but I couldn't infer them from the question*
Furthermore, if your query is always going to have the same zone and channel, you could filter the view, then your clustered index column simply becomes `ArticleID`:
```
CREATE VIEW dbo.YourIndexedView
WITH SCHEMABINDING
AS
SELECT az.ArticleID,
az.ChannelID,
az.ZoneID,
a.LastEditDate,
a.LastEditDateTime,
az.ArticleOrder
FROM Article_tbl a
INNER JOIN ArticleZone_tbl az
ON a.ArticleID = az.ArticleID
WHERE az.ChannelID = 1
AND az.ZoneID = 0;
GO
CREATE UNIQUE CLUSTERED INDEX UQ_YourIndexView_ArticleID
ON dbo.YourIndexedView (ArticleID);
```
Which means your indexes will be smaller, and faster to use. | As you say "Same query with "Over(Order by Article\_tbl.ArticleID asc)" is taking 40ms", no doubt that you have an index missing. You should study the query plan (include actual execution plan button in SSMS)
One index with all fields covering your OVER(ORDER BY..) may give you good results.
ArticleId is implicitly included here because it is your clustered key; respect the ASC/DESC order of your OVER clause.
try:
```
CREATE INDEX xxx on Article_tbl(LastEditDate desc, ArticleOrder asc, LastEditDateTime desc)
```
ASC is the default and does not need to be specified; it is written out here for clarity. | ROW_NUMBER() Performance optimization | [
"",
"sql",
"sql-server",
""
] |
I have a procedure which returns the error:
> Must declare the table variable "@PropIDs".
But it is followed with the message:
> (123 row(s) affected)
The error appears when I execute it with
```
EXEC [dbo].[GetNeededProperties] '1,3,5,7,2,12', '06/28/2013', 'TT'
```
But works fine when
```
EXEC [dbo].[GetNeededProperties] NULL, '06/28/2013', 'TT'
```
Can anyone help me with that?
The procedure:
```
CREATE PROCEDURE [dbo].[GetNeededProperties]
@NotNeededWPRNs nvarchar(max), --string like '1,2,3,4,5'
@LastSynch datetime,
@TechCode varchar(5)
AS
BEGIN
DECLARE @PropIDs TABLE
(ID bigint)
Declare @ProductsSQL nvarchar(max);
SET @ProductsSQL = 'Insert into @PropIDs (ID)
SELECT [WPRN] FROM [dbo].[Properties] WHERE(WPRN in (' + @NotNeededWPRNs + '))'
exec sp_executesql @ProductsSQL
SELECT p.WPRN AS ID,
p.Address AS Address,
p.Address AS Street
FROM [dbo].[Properties] AS p
WHERE
p.WPRN NOT IN( SELECT ID FROM @PropIDs)
```
---
I've found a kind of solution by declaring the table like this:
```
IF OBJECT_ID('#PropIDs', 'U') IS NOT NULL
DROP TABLE #PropIDs
CREATE TABLE #PropIDs
```
But when I execute the procedure from C# (LINQ to SQL) it returns an error. | The issue is that you're mixing up dynamic SQL with non-dynamic SQL.
Firstly - the reason it works when you put NULL into @NotNeededWPRNs is because when that variable is NULL, your @ProductsSQL becomes NULL.
What you need to do is make your @PropIDs table either a temporary table or a physical table, rather than a table variable.
OR
you need to wrap everything in dynamic SQL and execute it.
So the easy way is to do something like this:
```
Declare @ProductsSQL nvarchar(max);
SET @ProductsSQL = '
DECLARE @PropIDs TABLE
(ID bigint)
Insert into @PropIDs (ID)
SELECT [WPRN] FROM [dbo].[Properties] WHERE(WPRN in (' + @NotNeededWPRNs + '))
SELECT p.WPRN AS ID,
p.Address AS Address,
p.Address AS Street
FROM [dbo].[Properties] AS p
WHERE
p.WPRN NOT IN( SELECT ID FROM @PropIDs)
'
```
and execute that.
OR as mentioned - change @PropIDs to a temporary table. (That is the route you're approaching with the CREATE #PropIDs, but then you need to use #PropIDs instead of @PropIDs everywhere in the sproc). | Change your code to:
```
Declare @ProductsSQL nvarchar(max);
SET @ProductsSQL = 'DECLARE @PropIDs TABLE
(ID bigint);
Insert into @PropIDs (ID)
SELECT [WPRN] FROM [dbo].[Properties] WHERE(WPRN in (' + @NotNeededWPRNs + '))'
exec sp_executesql @ProductsSQL
```
A table variable declared outside the dynamic SQL will not be available to the dynamic SQL. | "Must declare the table variable "@name"" in stored procedure | [
"",
"sql",
"sql-server",
""
] |
I have a question regarding fetching data in the form of a table. I have a table with a single row; now I just want the amount increased based on the given logic:
like
```
1st row amount=1200,
2nd row amount=1320(1200+120),
3rd row amount=1452(1320+132)
```
The logic is: add 10% to the previous amount.
My table is
```
Sno - Name- Amount
1 - A - 1200
```
Now I want the result like this:
```
Sno - Name- Amount
1 - A - 1200
2 - A - 1320
3 - A - 1452
```
Can anybody help me? I can't find any logic for that. | This has exactly the same limitation as WadimX's answer, but it'll do 100 rows. To produce your example output given your input table (which I'll refer to as `example`):
```
;WITH nums AS
(SELECT 1 AS RowNum, Name, Amount
FROM (SELECT Name, Amount FROM example) s
UNION ALL
SELECT RowNum + 1 As RowNum, Name, CAST(1.1*Amount AS INT) AS Amount
FROM nums
WHERE RowNum < 5)
SELECT RowNum AS SNo, Name, Amount
FROM nums
ORDER BY Name
```
[SQLFiddle](http://sqlfiddle.com/#!3/524ae/1/0)
That returns 5 rows for every record in `example`, you can increase that count by changing the `RowNum < 5` to `100` or however many you want.
**Output**
```
SNo Name Amount
-----------------------
1 A 1200
2 A 1320
3 A 1452
... ... ...
``` | ```
WITH COUNTER(RN)
AS
(
SELECT ROW_NUMBER() OVER(ORDER BY object_id)
FROM sys.objects
),
A(RN, value)
AS
(
SELECT CAST(1 as bigint),
CAST(1200 as decimal(20, 0))
UNION ALL
SELECT COUNTER.RN,
CAST(A.value*1.1 as decimal(20, 0))
FROM COUNTER JOIN A ON A.RN=COUNTER.RN-1
)
SELECT TOP 1000 *
FROM A
OPTION(MAXRECURSION 1000)
```
This example selects the first 1000 rows from sys.objects. You should replace **sys.objects** with your table name and **object_id** with your primary/unique key column(s). Also you should change **TOP 1000** and **MAXRECURSION 1000**. Notice MAXRECURSION may be between 0 and 32767; 0 means no limit.
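SQLite's WITH RECURSIVE accepts the same counter-style CTE, so the compounding can be checked quickly via Python's sqlite3 (a sketch using the question's 1200 starting amount):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE seq(sno, amount) AS (
        SELECT 1, 1200
        UNION ALL
        SELECT sno + 1, CAST(amount * 1.1 AS INTEGER)
        FROM seq
        WHERE sno < 3            -- raise the bound for more rows
    )
    SELECT sno, amount FROM seq
""").fetchall()
print(rows)  # -> [(1, 1200), (2, 1320), (3, 1452)]
```

Each recursive step multiplies the previous amount by 1.1, reproducing the 1200 / 1320 / 1452 sequence from the question.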
Be aware of large tables, because they can cause arithmetic overflow of the value. | Increased amount of each row in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a stored procedure which is supposed to return a result set from a table filtered according to parameters provided.
**UPDATE**
```
alter procedure Proc_CheckExchange
@Flag varchar(3),
@symbol varchar(13)=null,
@exchange char(3)=null,
@limit money=null,
@chargerate numeric(18,4)=null,
@ChgType char(2)=null,
@IsActive int=null,
@Mkrid varchar(11)=null,
@statecode varchar(4) =null
as
declare @sql nvarchar(max)
set @sql = N'select * from Tbl_StampDutyException1 where 1 = 1'
if(@Flag='CHK')
begin
if len(isnull(@exchange, '')) > 0
set @sql = @sql + N'and exchange=@exchange'+ cast(@exchange as nvarchar(100))
if len(isnull(@symbol, '')) > 0
set @sql = @sql + N'and symbol=@symbol'+ cast(@symbol as nvarchar(100))
if len(isnull(@limit, '')) > 0
set @sql = @sql + N'and limit=@limit'+ cast(@limit as nvarchar(100))
if len(isnull(@chargerate, '')) > 0
set @sql = @sql + N'and chargerate=@chargerate'+ cast(@chargerate as nvarchar(100))
if len(isnull(@ChgType, '')) > 0
set @sql = @sql + N'and ChgType=@ChgType'+ cast(@ChgType as nvarchar(100))
if len(isnull(@IsActive, '')) > 0
set @sql = @sql + N'and IsActive=@IsActive'+ cast(@IsActive as nvarchar(100))
if len(isnull(@statecode, '')) > 0
set @sql = @sql + N'and statecode=@statecode'+ cast(@statecode as nvarchar(100))
exec (@sql)
end
if (@Flag='ALL')
begin
select * from Tbl_StampDutyException1
end
```
**UPDATE 1**
```
alter procedure Proc_CheckExchange
@Flag varchar(3),
@symbol varchar(13)=null,
@exchange char(3)=null,
@limit money=null,
@chargerate numeric(18,4)=null,
@ChgType char(2)=null,
@IsActive int=null,
@Mkrid varchar(11)=null,
@statecode int =null
as
declare @sql nvarchar(max)
set @sql = N'select * from Tbl_StampDutyException1 where 1 = 1'
if(@Flag='CHK')
begin
if len(isnull(@exchange, '')) > 0
set @sql = @sql + N' and exchange = @exchange'
if len(isnull(@limit, '')) > 0
set @sql = @sql + N' and limit = @limit'
if len(isnull(@chargerate, '')) > 0
set @sql = @sql + N' and chargerate = @chargerate'
if len(isnull(@ChgType, '')) > 0
set @sql = @sql + N' and ChgType = @ChgType'
if len(isnull(@IsActive, '')) > 0
set @sql = @sql + N' and IsActive = @IsActive'
if len(isnull(@statecode, '')) > 0
set @sql = @sql + N' and statecode = @statecode'
if len(isnull(@symbol, '')) > 0
set @sql = @sql + N' and symbol = @symbol'
declare @params as nvarchar(max) = N'@Flag varchar(3),
@symbol varchar(13),
@exchange char(3),
@limit money,
@chargerate numeric(18,4),
@ChgType char(2),
@IsActive int,
@Mkrid varchar(11),
@statecode varchar(4)'
print @sql
--EXECUTE sp_executesql @sql, @params, @Flag, @symbol, @exchange, @limit, @chargerate, @ChgType, @IsActive, @Mkrid, @statecode
end
```
I am trying to create a stored procedure in which there will be as many conditions in the `WHERE` clause as parameters passed to the stored procedure. I hope I am clear about what I am trying to achieve. I am getting the error `Error converting data type varchar to numeric.` | What you could do is rewrite your stored proc using dynamic SQL and include parts of the `where` clause only if the corresponding parameters are defined, e.g. replace the body with
```
declare @sql nvarchar(max)
set @sql = N'select * from Tbl_StampDutyException1 where 1 = 1'
if len(isnull(@exchange, '')) > 0
set @sql = @sql + N' and exchange = ' + cast(@exchange as nvarchar(100))
-- add all parameters; you need to cast them to nvarchar if they have other type
exec (@sql)
```
---
As an improvement, you can use `sp_executesql` to execute dynamic SQL. See [here](http://technet.microsoft.com/en-us/library/ms188001.aspx) on how to use it. In this case, the code will be:
```
declare @sql nvarchar(max)
set @sql = N'select * from Tbl_StampDutyException1 where 1 = 1'
if len(isnull(@exchange, '')) > 0
set @sql = @sql + N' and exchange = @exchange'
-- add all parameters;
declare @params as nvarchar(max) = N'@Flag varchar(3),
@symbol varchar(13),
@exchange char(3),
@limit money,
@chargerate numeric(18,4),
@ChgType char(2),
@IsActive int,
@Mkrid varchar(11),
@statecode varchar(4)'
EXECUTE sp_executesql @sql, @params, @Flag, @symbol, @exchange, @limit, @chargerate, @ChgType, @IsActive, @Mkrid, @statecode
```
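The same build-only-the-needed-filters idea can be sketched outside T-SQL. Here is a minimal Python/SQLite illustration in which the table and column names are invented; values are always bound as parameters rather than concatenated, which is exactly what `sp_executesql` buys you over string building:

```python
import sqlite3

def search(conn, **filters):
    # Append one "AND col = ?" per supplied filter; the column names come
    # from code (not user input), and the values are bound, not concatenated.
    sql = "SELECT id, symbol, exchange FROM stamp_duty WHERE 1 = 1"
    params = []
    for column, value in filters.items():
        if value is not None:
            sql += f" AND {column} = ?"
            params.append(value)
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stamp_duty (id INTEGER, symbol TEXT, exchange TEXT)")
conn.executemany("INSERT INTO stamp_duty VALUES (?, ?, ?)",
                 [(1, "ABC", "NSE"), (2, "XYZ", "BSE")])

print(search(conn, exchange="NSE"))               # [(1, 'ABC', 'NSE')]
print(search(conn, symbol="XYZ", exchange=None))  # [(2, 'XYZ', 'BSE')]
```

Filters that are `None` are simply skipped, mirroring the `len(isnull(@x, '')) > 0` checks in the stored procedure.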
---
By the way, don't use `select *` in stored procedures; it's not good practice. List all the columns you want to return. Otherwise, if the table definition changes, you will get a different result from what you got previously. | I think that a good solution would be this one:
```
SELECT * from dbo.Clients
WHERE
(@param1 IS NULL OR field1 = @param1)
AND (@param2 IS NULL OR field2 = @param2)
```
With this approach the statement will not be re-processed (execution plan) by the server every time you execute the query. | What is the best approach to filter data in SQL (Dynamic Query)? | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
Hi I have a need to store hundreds if not thousands of elements in the database as XML. I will not index anything in the XML field. I will simply select certain elements within the xml. I would like to know if there is any performance penalty for simply selecting fields in the XML. Here is example XML that will be stored in the database.
```
<fields>
<field name="FirstName" type="text" value="Gary" sort="2" />
<field name="LastName" type="text" value="Smith" sort="3" />
<field name="City" type="text" value="Los Angeles" sort="4" />
<field name="Age" type="number" value="12" sort="6" />
<field name="Address" type="text" sort="2">
<streetnumber value="1234" />
<streetname value="sail" />
</field>
</fields>
```
I will probably have more than 3000 field tags in one record. I simply want to get 10 fields in a single query. I will have a primary key on the table and will be selecting records based on the primary key but will be getting fields from the XML column. I am afraid the more field elements I put in the XML will compromise performance. Will there be a performance penalty for simply selecting 10 or more fields from the XML column? Also, I will not be using the xml column in a where clause I will use the primary in the where clause then I will select fields from the XML column. Will there be a performance penalty? | Based on my experience on XML in SQL Server Xml datatype, and on [Indexes on XML Data Type Columns](http://technet.microsoft.com/en-us/library/ms191497%28v=sql.100%29.aspx) (the whole section deserves thorough reading)
> Will there be a performance penalty for simply selecting 10 or more
> fields from the XML column ?
Yes, because your XML document is stored as a blob. Without a primary XML index, this blob will need to be exploded for query processing (filtering and projection)
As to XML, indexes can be seen as a relational representation of your document (pre-exploding the blob)
> *Without an index, these binary large objects are shredded at run time to evaluate a query. This shredding can be time-consuming*
As to your second question
> Also, I will not be using the xml column in a where clause I will use the primary in the where clause then I will select fields from the XML
> column. Will there be a performance penalty?
If you are going to project among 3000 field tags, you *might* benefit from a secondary XML index, though I'm not sure which one. PROPERTY secondary index seems fit for projection, but it seems to apply on `value` calls (the french documentation seems to imply more than just `value` calls but that may be some translation mistake)
For my part, I ended-up setting the three kind of secondary indexes on my XML column (1 million documents on 30 different schemas, 50-100 elements each) But my app requires a lot more filtering than projection. | [BEGIN EDIT]
jbl's direct answers to your questions, and Terror.Blade's answer re' XML being better than NVARCHAR(MAX) both make sense (I upvoted them :).
My experience was without storing an XML schema in SQL Server (Terror.Blade's tip), and without indexing (jbl gave the most, re' that)... but I'm leaving my answer, because I think my links could be very helpful... and it's still an example of worst case ;)
[END EDIT]
From experience, I'll say that the *loading* of an XML data type is quick, but as for using it -- I found that to be slow, but the personal example coming to mind involved updating, and using xQuery, and those may have been factors in my slowdown.
In that example, it took 1hr55mins to process only 127,861 rows.
(Terror.Blade's tip, of storing an XML schema in SQL Server, and jbl's link & share re' XML indexing both sound pretty slick ;) and might address that slowdown.)
**RELATED**:
Here's some tips re' optimizing XML in SQL... though some of them only apply if you have control over the format of the XML:
<http://msdn.microsoft.com/en-us/library/ms345118.aspx>
If you're using xQuery, check out these docs:
<http://download.microsoft.com/download/0/F/B/0FBFAA46-2BFD-478F-8E56-7BF3C672DF9D/XQuery%20Language%20Reference.pdf>
((And if you're using SQLXMLBulkLoad at all, consider using "overflow-field"s, to capture whatever is not defined in your schema. There's some useful tips in this *tangentially related* TechNote:
<http://social.technet.microsoft.com/Forums/sqlserver/en-US/393cf604-bf6e-488b-a1ea-2e984aa14500/how-do-i-redirect-xml-comments-that-sqlxmlbulkload-is-storing-in-the-overflowfield?forum=sqlxml> ))
HTH. | Does SQL server 2008 XML data type have performance issues | [
"",
"sql",
"sql-server",
"xml",
"sql-server-2008",
""
] |
There are 2 tables, Person and Account, with a 1:n dependency (via ID), so a person can have n accounts. Person has an email address. The change is that the email address should become an attribute of the Account table.
So I created a new attribute email for the account.
How can I move or copy all the email address values from the person to the account, so that if a person has 2 accounts the email address is copied or moved to both accounts?
```
UPDATE account a
SET email =
(
SELECT email
FROM person
WHERE person_id = a.person_id
)
WHERE EXISTS
(
SELECT *
FROM person
WHERE person_id = a.person_id
);
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!4/6d90c/1)** demo. | First of all, there is no need for such a copy task.
It's always better to maintain a primary key and foreign key relationship between the tables.
Still, if you want to copy, try something like this.
```
SELECT TMP.EMAIL INTO ACCOUNT AS 'A' FROM
(SELECT P.EMAIL FROM PERSON AS 'P') AS TMP
WHERE A.ID = P.ID
```
Hope, this is what you want. | SQL Move Attribute to another table | [
"",
"sql",
"database",
"oracle11g",
""
] |
How do I get the row count of a Cursor without using `moveToFirst()`?
(Is it possible to call `.getCount()` without first calling `.moveToFirst()`?)
**Logcat**
> 11-29 13:37:40.370: E/SQLiteLog(8459): (11) database corruption at
> line 62665 of [00bb9c9ce4]
>
> 11-29 13:37:40.370: E/SQLiteLog(8459): (11) statement aborts at 44:
> [select pos, definition, sample FROM word INNER JOIN sense ON
> word.wordid = sense.wordid INNER JOIN synset ON sense.synsetid =
> synset.synsetid LEFT JOIN sample ON sample.synsetid = syn
**Code**
```
try {
if (sqldb2 == null || !sqldb2.isOpen()) {
dbobj2 = new SqLiteConection(context,
"/mnt/sdcard/sk2.db");
sqldb2 = dbobj2.openDB2();
}
// if(sqldb2!=null){
cursor_n = sqldb2.rawQuery(NOUN, null);
cursor_n.moveToFirst();
if (cursor_n.getCount() > 0) {
if (cursor_n != null) {
if (cursor_n.moveToFirst()) {
do {
String strin = cursor_n
.getString(cursor_n
.getColumnIndex("definition"));
d_noun_List.add(strin);
} while (cursor_n.moveToNext());
searchData_DTO.setD_nounList(d_noun_List);
}
}
cursor_n.close();
}else {
break;
// record not found
}
} catch (Exception e) {
Log.d("in Exception", "reason" + e);}
``` | I fixed it myself by restoring a backup of the database. | Use the `.getCount()` method to get the row count:
```
Cursor.getCount()
``` | Reading row count from Cursor | [
"",
"android",
"sql",
"database",
"cursor",
""
] |
I have an ASP.NET page which has details in the following manner.
```
Date OfficerID DutyID
25-NOV-13 2 666
26-NOV-13 2 666
27-NOV-13 2 666
28-NOV-13 2 666
29-NOV-13 2 666
30-NOV-13 2 666
01-DEC-13 2 666
02-DEC-13 2 523
```
The above is being populated in gridview through below code snippet
```
DataTable table = new DataTable();
string connectionString = GetConnectionString();
string sqlQuery = "select * from duty_rota where duty_date between sysdate and sysdate+18";
using (OracleConnection conn = new OracleConnection(connectionString))
{
try
{
conn.Open();
using (OracleCommand cmd = new OracleCommand(sqlQuery, conn))
{
using (OracleDataAdapter ODA = new OracleDataAdapter(cmd))
{
ODA.Fill(table);
}
}
}
catch (Exception ex)
{
Response.Write("Not Connected" + ex.ToString());
}
}
//DropDownList1.DataSource = table;
//DropDownList1.DataValueField = "";
GridView1.DataSource = table;
GridView1.DataBind();
```
Now I also have a **previous button** which should output the same page but with the SQL query slightly changed:
```
select * from duty_rota where duty_date between sysdate-18 and sysdate;
```
With every button click the date parameters should decrease by 18; i.e. with the 1st previous-button click the query will use
```
sysdate-18 and sysdate
```
with 2nd click
```
sysdate-36 and sysdate-18
```
with 3rd click
```
sysdate-54 and sysdate-36
```
and so on...
Please help me achieve this. I was trying to implement it with a variable associated with the Previous button's click event that would change with every subsequent click, but I am not really able to accomplish it. Can anybody please guide me on this? | Write the code below to handle the dynamic query in the Previous and Next button click events:
```
protected void PrevioseButton_Click(object sender, EventArgs e)
{
var sqlQuery = this.GenerateQuery(false);
this.BindGrid(sqlQuery);
}
protected void NextButton_Click(object sender, EventArgs e)
{
var sqlQuery = this.GenerateQuery(true);
this.BindGrid(sqlQuery);
}
private string GenerateQuery(bool isNext)
{
if (ViewState["fromDate"] == null && ViewState["toDate"] == null)
{
ViewState["fromDate"] = isNext ? "sysdate+18" : "sysdate-18";
ViewState["toDate"] = isNext ? "sysdate+36" : "sysdate";
}
else
{
var from = ViewState["fromDate"].ToString().Replace("sysdate", string.Empty);
var to = ViewState["toDate"].ToString().Replace("sysdate", string.Empty);
int fromDay = 0;
int toDay = 0;
if (from != string.Empty)
{
fromDay = Convert.ToInt32(from);
}
if (to != string.Empty)
{
toDay = Convert.ToInt32(to);
}
if (!isNext)
{
fromDay = fromDay - 18;
toDay = toDay - 18;
}
else
{
fromDay = fromDay + 18;
toDay = toDay + 18;
}
from = "sysdate";
to = "sysdate";
if (fromDay > 0)
{
from += "+" + fromDay;
}
else if (fromDay < 0)
{
from += fromDay.ToString();
}
if (toDay > 0)
{
to += "+" + toDay;
}
else if (toDay < 0)
{
to += toDay.ToString();
}
ViewState["fromDate"] = from;
ViewState["toDate"] = to;
}
var sqlQuery = "select * from duty_rota where duty_date between " + ViewState["fromDate"] + " and "
+ ViewState["toDate"];
return sqlQuery;
}
private void BindGrid(string sqlQuery)
{
DataTable table = new DataTable();
string connectionString = GetConnectionString();
using (OracleConnection conn = new OracleConnection(connectionString))
{
try
{
conn.Open();
using (OracleCommand cmd = new OracleCommand(sqlQuery, conn))
{
using (OracleDataAdapter ODA = new OracleDataAdapter(cmd))
{
ODA.Fill(table);
}
}
}
catch (Exception ex)
{
Response.Write("Not Connected" + ex.ToString());
}
}
GridView1.DataSource = table;
GridView1.DataBind();
}
``` | On the button click event, try this:
```
DataTable table = new DataTable();
string connectionString = GetConnectionString();
if (Session["sysdate"] == null || string.IsNullOrEmpty(Session["sysdate"].ToString()))
Session["sysdate"] = "-18";
else
Session["sysdate"] = "+ " + (Convert.ToInt32(Session["sysdate"]) - 18).ToString();
string sysdate = Session["sysdate"].ToString();
string sqlQuery = "select * from duty_rota where duty_date between sysdate " + sysdate + " and sysdate+18 " + sysdate;
using (OracleConnection conn = new OracleConnection(connectionString))
{
try
{
conn.Open();
using (OracleCommand cmd = new OracleCommand(sqlQuery, conn))
{
using (OracleDataAdapter ODA = new OracleDataAdapter(cmd))
{
ODA.Fill(table);
}
}
}
catch (Exception ex)
{
Response.Write("Not Connected" + ex.ToString());
}
}
GridView1.DataSource = table;
GridView1.DataBind();
``` | Changing the parameter in sql query of ASP.NET page - with button_click event, sql query in every button click | [
"",
"asp.net",
"sql",
"iterator",
"logic",
"code-behind",
""
] |
In one of my tables I store user details whenever a user logs in.
I want to get the repeated-occurrence count for each user, i.e. how many times users x, y, and z have logged in.
```
SELECT count( id )
FROM user_stat
GROUP BY id
```
This just gives the count values.
```
1  id        varchar(64)   latin1_swedish_ci  Yes  NULL
2  login     varchar(64)   latin1_swedish_ci  Yes  NULL
3  city      varchar(128)  latin1_swedish_ci  Yes  NULL
4  ip        varchar(128)  latin1_swedish_ci  Yes  NULL
5  logout    varchar(64)   latin1_swedish_ci  Yes  NULL
6  diff      varchar(64)   latin1_swedish_ci  Yes  NULL
7  username  varchar(64)   latin1_swedish_ci  Yes  NULL
```
I want to count the repeated occurrences for each `id`, that is, how many times the person with id `x` has logged in.
Is this possible? | > select count(login\_id), login\_id from `table_name` group by login\_id;
Here `login_id` means the user's login id. Try it. | If you want the user info along with a count of how many times each user has logged in, try the query below.
Try this:
```
SELECT count( id ),username,id
FROM user_stat
GROUP BY id
```
You may add more columns if you want. | Getting count of repeated record occurrence in table | [
"",
"mysql",
"sql",
""
] |
I'm learning C# and some SQL Server, and now I am trying to get information from my small database.
I have two tables: `Movie` and `MovieHandler`.
In the Movie table, I have `MovieCodeLable`, which is a unique number, and the `Title` of the movie.
In the MovieHandler table, I have `MovieCodeLable`, which is the same as in my Movie table, and also the column `InStore`, which is either 0 or 1 for not in store or in store.
I'm trying to display the Title for the movies which are not in the store, but I find it hard to figure out how to join tables.
I have tried this SQL query:
```
SELECT Title
FROM Movie
JOIN MovieCodeLable
ON MovieHandler.MovieCodeLable
WHERE InStore = 0
```
Since I only get errors trying this query in Visual Studio 2012, I've probably missed something fundamental about SQL and JOINs, but I hope that someone could make JOINs smooth as butter for me and others struggling to learn them. | Your JOIN is wrong and your ON clause is incomplete. The JOIN should involve the names of the 2 tables that you are joining, which in this case are Movie and MovieHandler. The ON clause should be an expression of the form A = B. So your query should be:
```
SELECT Title
FROM Movie
JOIN MovieHandler
ON Movie.MovieCodeLable = MovieHandler.MovieCodeLable
WHERE InStore = 0
``` | You need to specify both JOIN fields
```
SELECT Title
FROM Movie
JOIN MovieHandler
ON Movie.MovieCodeLable = MovieHandler.MovieCodeLable
WHERE InStore = 0
``` | How should I JOIN my tables? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
""
] |
I am using 3-4 tables with auto-increment on the primary key in all of them. I want the keys in all of the tables to start at 1.
For example, the `student` table id starts from 1, and the `Registration` table id should also start from 1.
Right now the auto-increment starts from the last id of the student table. For example, the last row ID in the student table is 100, so `Registration` will start from 101. I want it to start from 1. How do I do that?
If your student table is empty and you are trying to reset the counters because of testing, or because of a yearly wipe down of the database etc. then you can use the [DBCC CHECKIDENT](http://technet.microsoft.com/en-us/library/ms176057.aspx) command(if you are using MS SQL Server ) or you can use alter table(auto\_increment) for MySql <http://dev.mysql.com/doc/refman/5.0/en/alter-table.html>.
But be careful as any foreign key dependencies will also have to be reset - you can run into all sorts of trouble.
Here is a quick test scenario:
```
--Test variables & environment
DECLARE @TimeParam AS DATETIME
CREATE TABLE #Downtime (ID int identity, initial varchar(1))
INSERT INTO #Downtime VALUES('a')
INSERT INTO #Downtime VALUES('b')
INSERT INTO #Downtime VALUES('c')
INSERT INTO #Downtime VALUES('d')
INSERT INTO #Downtime VALUES('e')
INSERT INTO #Downtime VALUES('f')
INSERT INTO #Downtime VALUES('g')
INSERT INTO #Downtime VALUES('h')
INSERT INTO #Downtime VALUES('i')
INSERT INTO #Downtime VALUES('j')
--Queries
DBCC CHECKIDENT (#Downtime)
SELECT ID, initial FROM #Downtime
DELETE FROM #Downtime
DBCC CHECKIDENT (#Downtime, RESEED, 0)
INSERT INTO #Downtime VALUES('k')
SELECT ID, initial FROM #Downtime
DROP TABLE #Downtime
--results (first select)
ID initial
1 a
2 b
3 c
4 d
5 e
6 f
7 g
8 h
9 i
10 j
--results (second select)
ID initial
2 j
```
For your case you would use:
```
DBCC CHECKIDENT ('Student', RESEED, 0);
```
Once again, handle this command with caution and check all references to this primary key BEFORE running the command. Make sure you have taken backups of the database too!
```
DBCC CHECKIDENT ("YourTableNameHere", RESEED, number);
```
If `number` = 0 then in the next insert the auto increment field will contain value 1
If `number` = 101 then in the next insert the auto increment field will contain value 102
But be aware that this command just resets the value of identity column, **it does not check for conflicts**. I mean, if the value 1 already exists then you will have errors when trying to insert on that table. | SQL Server Auto-increment | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table full of prices, items, and dates. An example of this is:
```
AA, 1/2/3024, 1.22
AA, 1/3/3024, 1.23
BB, 1/2/3024, 4.22
BB, 1/3/3024, 4.23
```
Within the data there are only two rows per price, and they are ordered by date. How would I condense this data set into a single product row showing the difference from the last price to the previous? [Also this applies to a ratio, so AA would produce 1.23/1.22].
The result should look like
```
AA, todays price-yesterdays price
```
Although SQL provides a `sum` aggregate, there is no corresponding subtraction aggregate over a list.
I'm using postgres 9.1. | ```
select product,
sales_date,
current_price - prev_price as diff
from (
select product,
sales_date,
price as current_price,
lag(price) over (partition by product order by sales_date) as prev_price,
row_number() over (partition by product order by sales_date desc) as rn
from the_unknown_table
) t
where rn = 1;
```
SQLFiddle example: <http://sqlfiddle.com/#!15/9f7d6/1> | Since `there are only two rows per price`, this can be *much* simpler and faster:
```
SELECT n.item, n.price - o.price AS diff, n.price / o.price AS ratio
FROM price n -- "new"
JOIN price o USING (item) -- "old"
WHERE n.day > o.day;
```
[**->SQLfiddle**](http://sqlfiddle.com/#!15/c3b12/1)
This form carries the additional benefit that you can use all columns from both rows directly.
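The two-row self-join can be verified end to end with the question's sample data; here is a small Python/SQLite sketch (dates rewritten as ISO strings so the `n.day > o.day` comparison works, and `ON` used in place of `USING` for the same join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE price (item TEXT, day TEXT, price REAL)")
conn.executemany("INSERT INTO price VALUES (?, ?, ?)", [
    ("AA", "2014-01-02", 1.22), ("AA", "2014-01-03", 1.23),
    ("BB", "2014-01-02", 4.22), ("BB", "2014-01-03", 4.23),
])

# Pair each item's newer row (n) with its older row (o); with exactly two
# rows per item, only one pair per item survives the n.day > o.day filter.
rows = conn.execute("""
    SELECT n.item, n.price - o.price AS diff, n.price / o.price AS ratio
    FROM price n
    JOIN price o ON n.item = o.item
    WHERE n.day > o.day
    ORDER BY n.item
""").fetchall()
for item, diff, ratio in rows:
    print(item, round(diff, 2), round(ratio, 4))
# AA 0.01 1.0082
# BB 0.01 1.0024
```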
---
For **more complex scenarios** (not necessary for this), you could use window functions as has been pointed out. Here is a simpler approach than what has been suggested:
```
SELECT DISTINCT ON (item)
item
,price - lead(price) OVER (PARTITION BY item ORDER BY day DESC) AS diff
FROM price
ORDER BY item, day DESC;
```
Only one window function is needed here. And one query level, since [`DISTINCT ON`](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) is applied *after* window functions. The sort order in the window agrees with overall sort order, helping performance. | Aggregation function to get the difference or ratio of two rows in order | [
"",
"sql",
"postgresql",
"aggregate-functions",
"greatest-n-per-group",
"window-functions",
""
] |
I have a table which looks like this -
```
Id AttributeName AttributeValue
A1 Atr1 A1V1
A1 Atr2 A1V2
A1 Atr3 A1V3
A2 Atr1 A2V1
A2 Atr2 A2V2
A2 Atr3 A3V3
```
Each ID in this table has the exact same attributes, ie ATR1, ATR2, ATR3. The values of these attributes is unique.
I want to pivot this table and get the following output -
```
Id Atr1 Atr2 Atr3
A1 A1V1 A1V2 A1V3
A2 A2V1 A2V2 A2V3
```
How do I do this ?
I tried a query and it failed with the error - Msg 156, Level 15, State 1, Line 21
Incorrect syntax near the keyword 'FOR'.
```
-- Create a temporary table
DECLARE @MyTable TABLE
(Id varchar(25),
AttributeName varchar(30),
AttributeValue varchar(30))
-- Load Sample Data
INSERT INTO @MyTable VALUES ('A1', 'Atr1', 'A1V1')
INSERT INTO @MyTable VALUES ('A1', 'Atr2', 'A1V2')
INSERT INTO @MyTable VALUES ('A1', 'Atr3', 'A1V3')
INSERT INTO @MyTable VALUES ('A2', 'Atr1', 'A2V1')
INSERT INTO @MyTable VALUES ('A2', 'Atr2', 'A2V2')
INSERT INTO @MyTable VALUES ('A2', 'Atr3', 'A3V3')
SELECT Id, [Atr1], [Atr2],[Atr3]
FROM
(
SELECT ID, AttributeName, AttributeValue
FROM @MyTable) AS SourceTable
PIVOT
(
AttributeValue
FOR AttributeName IN ([ATR1], [ATR2], [ATR3])
) AS pvt
``` | Just to expand on the other answers, the PIVOT function requires some type of aggregation. Since the value that you want to convert from a row into a column is a string, then you are limited to using either the `max()` or `min()` aggregate function.
While [@Muhammed Ali's](https://stackoverflow.com/a/20253383/426671) answer will work when you have a single `AttributeName`/`AttributeValue` pair, if you have multiple pairs for each `ID`, then you will only return either the `max` or `min` value.
For example if your sample data is:
```
INSERT INTO @MyTable VALUES ('A1', 'Atr1', 'A1V1');
INSERT INTO @MyTable VALUES ('A1', 'Atr1', 'A1V4');
INSERT INTO @MyTable VALUES ('A1', 'Atr2', 'A1V2');
INSERT INTO @MyTable VALUES ('A1', 'Atr3', 'A1V3');
INSERT INTO @MyTable VALUES ('A2', 'Atr1', 'A2V1');
INSERT INTO @MyTable VALUES ('A2', 'Atr2', 'A2V2');
INSERT INTO @MyTable VALUES ('A2', 'Atr3', 'A3V3');
```
Even though you have multiple rows for the combination of `A1` and `Atr1`, the other queries are only returning the `max(attributevalue)`:
```
| ID | ATR1 | ATR2 | ATR3 |
|----|------|------|------|
| A1 | A1V4 | A1V2 | A1V3 |
| A2 | A2V1 | A2V2 | A3V3 |
```
I would guess that you would actually want to return all of the combinations. I suggest expanding your query to include the windowing function, `row_number()` in your query. This query generates a unique value that will then be included in the grouping aspect of the PIVOT and will allow you to return more than one row for each ID.
By adding the `row_number()`, the query will be similar to the following:
```
SELECT Id, [Atr1], [Atr2],[Atr3]
FROM
(
SELECT ID, AttributeName, AttributeValue,
row_number() over(partition by id, attributename
order by attributevalue) seq
FROM @MyTable
) AS SourceTable
PIVOT
(
max(AttributeValue)
FOR AttributeName IN ([ATR1], [ATR2], [ATR3])
) AS pvt
order by id;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/d41d8/25742). You will get a result that returns all rows:
```
| ID | ATR1 | ATR2 | ATR3 |
|----|------|--------|--------|
| A1 | A1V1 | A1V2 | A1V3 |
| A1 | A1V4 | (null) | (null) |
| A2 | A2V1 | A2V2 | A3V3 |
```
If you are having trouble grasping the concept of PIVOT, then I would suggest look at using a combination of an aggregate function with a CASE expression to get the result. You can then see the grouping of the sequence/id:
```
SELECT Id,
max(case when attributename = 'Atr1' then attributevalue end) Atr1,
max(case when attributename = 'Atr2' then attributevalue end) Atr2,
max(case when attributename = 'Atr3' then attributevalue end) Atr3
FROM
(
SELECT ID, AttributeName, AttributeValue,
row_number() over(partition by id, attributename
order by attributevalue) seq
FROM @MyTable
) AS SourceTable
group by id, seq
```
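The conditional-aggregation form is also the portable one, since engines such as SQLite and MySQL have no PIVOT operator. A minimal runnable sketch with the question's sample rows (there are no duplicate attribute pairs here, so the `seq` grouping is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Id TEXT, AttributeName TEXT, AttributeValue TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("A1", "Atr1", "A1V1"), ("A1", "Atr2", "A1V2"), ("A1", "Atr3", "A1V3"),
    ("A2", "Atr1", "A2V1"), ("A2", "Atr2", "A2V2"), ("A2", "Atr3", "A3V3"),
])

# One MAX(CASE ...) per target column; MAX ignores the NULLs the CASE
# produces for non-matching attribute names.
rows = conn.execute("""
    SELECT Id,
           MAX(CASE WHEN AttributeName = 'Atr1' THEN AttributeValue END) AS Atr1,
           MAX(CASE WHEN AttributeName = 'Atr2' THEN AttributeValue END) AS Atr2,
           MAX(CASE WHEN AttributeName = 'Atr3' THEN AttributeValue END) AS Atr3
    FROM t GROUP BY Id ORDER BY Id
""").fetchall()
print(rows)  # [('A1', 'A1V1', 'A1V2', 'A1V3'), ('A2', 'A2V1', 'A2V2', 'A3V3')]
```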
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/d41d8/25745) | ```
DECLARE @MyTable TABLE
(Id varchar(25),
AttributeName varchar(30),
AttributeValue varchar(30))
-- Load Sample Data
INSERT INTO @MyTable VALUES ('A1', 'Atr1', 'A1V1')
INSERT INTO @MyTable VALUES ('A1', 'Atr2', 'A1V2')
INSERT INTO @MyTable VALUES ('A1', 'Atr3', 'A1V3')
INSERT INTO @MyTable VALUES ('A2', 'Atr1', 'A2V1')
INSERT INTO @MyTable VALUES ('A2', 'Atr2', 'A2V2')
INSERT INTO @MyTable VALUES ('A2', 'Atr3', 'A3V3')
SELECT Id, [Atr1], [Atr2],[Atr3]
FROM
(
SELECT ID, AttributeName, AttributeValue
FROM @MyTable) AS SourceTable
PIVOT
(
MAX(AttributeValue)
FOR AttributeName IN ([ATR1], [ATR2], [ATR3])
) AS pvt
```
You are missing the aggregate function in your PIVOT syntax.
**Result Set**
```
Id Atr1 Atr2 Atr3
A1 A1V1 A1V2 A1V3
A2 A2V1 A2V2 A3V3
``` | Cannot pivot table with my query? | [
"",
"sql",
"sql-server",
"sql-server-2005",
"pivot",
""
] |
I have a table and I want to filter results from the same table. Here is a sample.
```
STUDENT_ID | SCHOOL_YEAR |
-----------+-------------+
747 | 20122013 |
747 | 20132014 |
748 | 20122013 |
749 | 20122013 |
749 | 20132014 |
750 | 20122013 |
751 | 20112012 |
```
I want to filter the table so that only those student\_ids show up that have the 20122013 school year but NOT 20132014.
So the result would be
```
STUDENT_ID |
-----------+
748 |
750 |
```
I tried UNION and LEFT/RIGHT JOIN but had no luck.
Please help. Thanks. | MINUS is the easy way:
```
select student_id
from tbl
where school_year = '20122013'
minus
select student_id
from tbl
where school_year = '20132014';
STUDENT_ID
----------
748
750
```
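The same set difference can be tried with the question's sample data; SQLite spells Oracle's `MINUS` as `EXCEPT`, with the same semantics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (student_id INTEGER, school_year TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)", [
    (747, "20122013"), (747, "20132014"),
    (748, "20122013"),
    (749, "20122013"), (749, "20132014"),
    (750, "20122013"), (751, "20112012"),
])

# Students in 20122013 minus students in 20132014.
rows = conn.execute("""
    SELECT student_id FROM tbl WHERE school_year = '20122013'
    EXCEPT
    SELECT student_id FROM tbl WHERE school_year = '20132014'
""").fetchall()
print(sorted(r[0] for r in rows))  # [748, 750]
```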
You could also do this with an "anti-join":
```
select a.student_id
from tbl a
left outer join tbl b
on a.student_id = b.student_id
and b.school_year = '20132014'
where
a.school_year = '20122013'
and b.student_id is null;
STUDENT_ID
----------
750
748
```
With the anti-join, you are outer joining the second copy of the table ("b" in this example) and then filtering where the rows from that set did not match (b.student\_id is null). | For very large data sets you'd probably want to avoid the implicit and unnecessary distinct on a minus, and use a NOT IN or NOT EXISTS:
```
select student_id
from tbl
where school_year = '20122013' and
student_id not in (
select student_id
from tbl
where school_year = '20132014');
```
or
```
select student_id
from tbl t1
where school_year = '20122013' and
not exists (
select null
from tbl t2
where school_year = '20132014' and
t2.student_id = t1.student_id);
```
The latter would be especially handy if there were potentially multiple rows per student\_id in the subquery set. | Filter from same table in Oracle | [
"",
"mysql",
"sql",
"oracle",
""
] |
Is there any way to select records when the TOP clause parameter is null?
```
DECLARE @count FLOAT = 0;
Select @count = count from tblDetails
SELECT TOP(@count) from tblCompany
```
If the @count variable is null then I want to select all records from tblCompany. | ```
DECLARE @count FLOAT = 0;
Select @count = count(1) from tblDetails
IF @count > 0
BEGIN
SELECT TOP(@intRecords) * from tblCompany
END
ELSE
BEGIN
SELECT * FROM tblCompany
END
```
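As a side note, some engines can express the null-means-all-rows behavior in a single query without IF/ELSE. In SQLite, for example, a negative LIMIT means no limit, so COALESCE does the branching (the table and data here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblCompany (name TEXT)")
conn.executemany("INSERT INTO tblCompany VALUES (?)",
                 [("Acme",), ("Globex",), ("Initech",)])

def top_rows(count):
    # SQLite treats a negative LIMIT as "no limit", so NULL maps to -1.
    return conn.execute(
        "SELECT name FROM tblCompany LIMIT COALESCE(?, -1)", (count,)
    ).fetchall()

print(top_rows(2))     # first 2 rows
print(top_rows(None))  # all 3 rows
```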
If you do not want to write the query twice (although only one will get executed), you can try this:
```
DECLARE @count FLOAT = 0;
Select @count = count(1) from tblDetails
IF @count = 0
BEGIN
SET @intRecords = 100000 -- Or some number larger than the possible count
END
SELECT TOP(@intRecords) * from tblCompany
``` | When facing situations like this one, I love using SQL [Rank](http://technet.microsoft.com/en-us/library/ms176102.aspx).
In the query below I assumed you have an `ID` column, but you can replace it with any other column to choose the criteria for what would be considered the top rows:
```
DECLARE @count FLOAT = 0;
Select @count = count(*) from tblDetails
SELECT * FROM
(
SELECT
RANK () OVER
( ORDER BY ID ) 'Rank', -- <-- Assuming you have an ID column, replace with any other criteria for what will be considered as top...
*
FROM tblCompany
) tmp
WHERE (@count IS NULL) OR tmp.Rank <= @count
``` | Is there any way to select records if top clause parameter is null? | [
"",
"sql",
"sql-server",
""
] |
**The UI design** for storing event and event meta data is

**SQL TABLE DESIGN** is
```
CREATE TABLE [dbo].[EVENTS]
([ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](255) NOT NULL)
```
and
```
CREATE TABLE [dbo].[EVENTS_META](
[ID] [int] IDENTITY(1,1) NOT NULL,
[event_id] [int] NOT NULL,
[meta_key] [varchar](255) NOT NULL,
[meta_value] [bigint] NOT NULL)
```
The Events data is

Event Metadata is 
I followed [Repeating calendar events and some final maths](https://stackoverflow.com/questions/10545869/repeating-calendar-events-and-some-final-maths) and wrote the query below.
**LIST ALL THE EVENT DATES BEFORE THE GIVEN END DATE**
```
SELECT EV.*
FROM events AS EV
RIGHT JOIN events_meta AS EM1 ON EM1.event_id = EV.id
RIGHT JOIN events_meta AS EM2 ON EM2.meta_key = 'repeat_interval_'+ CAST(EM1.id as Varchar(100))
WHERE EM1.meta_key = 'repeat_start'
AND ((1391040000 - EM1.meta_value ) % EM2.meta_value) = 0
```
I am not getting anything. I want to display all dates after repeat\_start with the given interval.
Example: the 1st event starts on 3 Jan 2014 at 10 A.M. (unix timestamp = 1388743200) and repeats every Friday (every 7 days); the same event is also scheduled to start on Saturday, 4 Jan 2014 (unix timestamp = 1388858400) and repeat once every 7 days (Saturdays).
It can be once in a month/daily/etc. So we have the `interval` defined as seconds.
If I give some input like 30 Jan 2014, i.e. `1391040000` (30 Jan 2014 00:00:00):
**Expected Result**
Billa Visit, 3 Jan 2014 10 A.M
Billa Visit, 4 Jan 2014 10 A.M
Billa Visit, 10 Jan 2014 10 A.M
Billa Visit, 11 Jan 2014 10 A.M
Billa Visit, 17 Jan 2014 10 A.M
Billa Visit, 18 Jan 2014 10 A.M
Billa Visit, 24 Jan 2014 10 A.M
Billa Visit, 25 Jan 2014 10 A.M
**[SQL FIDDLE LINK](http://www.sqlfiddle.com/#!3/29ced/1)** | Your first step is to get your event start dates with each event, and the repeat interval, to do this you can use:
```
SELECT EventID = e.ID,
e.Name,
StartDateTime = DATEADD(SECOND, rs.Meta_Value, '19700101'),
RepeatInterval = ri.Meta_Value
FROM dbo.Events e
INNER JOIN dbo.Events_Meta rs
ON rs.Event_ID = e.ID
AND rs.Meta_Key = 'repeat_start'
INNER JOIN dbo.Events_Meta ri
ON ri.Event_ID = e.ID
AND ri.Meta_Key = 'repeat_interval_' + CAST(rs.ID AS VARCHAR(10));
```
This gives:
```
EventID | Name | StartDateTime | RepeatInterval
--------+--------------+---------------------+-----------------
1 | Billa Vist | 2014-01-03 10:00:00 | 604800
1 | Billa Vist | 2014-01-04 18:00:00 | 604800
```
To get this to repeat you will need a numbers table to cross join to; if you don't have one, there are a number of ways to generate one on the fly. For simplicity I will use:
```
WITH Numbers AS
( SELECT Number = ROW_NUMBER() OVER(ORDER BY a.object_id) - 1
FROM sys.all_objects a
)
SELECT Number
FROM Numbers;
```
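If your engine supports recursive CTEs, a numbers table can also be generated without touching any system catalog at all; here is a quick illustration of that alternative, run as SQLite through Python purely so it is easy to verify:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A recursive CTE replaces the sys.all_objects trick for generating numbers.
rows = conn.execute(
    """
    WITH RECURSIVE Numbers(Number) AS (
        SELECT 0
        UNION ALL
        SELECT Number + 1 FROM Numbers WHERE Number < 9
    )
    SELECT Number FROM Numbers
    """
).fetchall()

numbers = [n for (n,) in rows]
print(numbers)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The upper bound in the `WHERE` clause controls how many numbers are produced.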
For further reading, [Aaron Bertrand](https://stackoverflow.com/users/61305/aaron-bertrand) has done some in depth comparisons ways of generating sequential lists of numbers:
* [Generate a set or sequence without loops – part
1](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1)
* [Generate a set or sequence without loops – part
2](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2)
* [Generate a set or sequence without loops – part
3](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-3)
If we limit our numbers table to only 0 - 5, and only look at the first event, cross joining the two will give:
```
EventID | Name | StartDateTime | RepeatInterval | Number
--------+--------------+---------------------+----------------+---------
1 | Billa Vist | 2014-01-03 10:00:00 | 604800 | 0
1 | Billa Vist | 2014-01-03 10:00:00 | 604800 | 1
1 | Billa Vist | 2014-01-03 10:00:00 | 604800 | 2
1 | Billa Vist | 2014-01-03 10:00:00 | 604800 | 3
1 | Billa Vist | 2014-01-03 10:00:00 | 604800 | 4
1 | Billa Vist | 2014-01-03 10:00:00 | 604800 | 5
```
Then you can get each occurrence by adding `RepeatInterval * Number` to the event start time:
```
DECLARE @EndDate DATETIME = '20140130';
WITH EventData AS
( SELECT EventID = e.ID,
e.Name,
StartDateTime = DATEADD(SECOND, rs.Meta_Value, '19700101'),
RepeatInterval = ri.Meta_Value
FROM dbo.Events e
INNER JOIN dbo.Events_Meta rs
ON rs.Event_ID = e.ID
AND rs.Meta_Key = 'repeat_start'
INNER JOIN dbo.Events_Meta ri
ON ri.Event_ID = e.ID
AND ri.Meta_Key = 'repeat_interval_' + CAST(rs.ID AS VARCHAR(10))
), Numbers AS
( SELECT Number = ROW_NUMBER() OVER(ORDER BY a.object_id) - 1
FROM sys.all_objects a
)
SELECT e.EventID,
e.Name,
EventDate = DATEADD(SECOND, n.Number * e.RepeatInterval, e.StartDateTime)
FROM EventData e
CROSS JOIN Numbers n
WHERE DATEADD(SECOND, n.Number * e.RepeatInterval, e.StartDateTime) < @EndDate
ORDER BY e.EventID, EventDate;
```
This gives your expected output:
```
EVENTID | NAME | EVENTDATE
--------+---------------+--------------------------------
1 | Billa Vist | January, 03 2014 10:00:00+0000
1 | Billa Vist | January, 04 2014 18:00:00+0000
1 | Billa Vist | January, 10 2014 10:00:00+0000
1 | Billa Vist | January, 11 2014 18:00:00+0000
1 | Billa Vist | January, 17 2014 10:00:00+0000
1 | Billa Vist | January, 18 2014 18:00:00+0000
1 | Billa Vist | January, 24 2014 10:00:00+0000
1 | Billa Vist | January, 25 2014 18:00:00+0000
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!6/44057/1)**
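The whole pattern (start time plus n times the interval, capped at an end date) can be reproduced end to end in a few lines. Here is a sketch using SQLite through Python with the question's two start timestamps; table and column names are illustrative:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events_meta (event_id INT, start_epoch INT, interval_s INT)")
# The two series from the question (both weekly, i.e. 604800 seconds):
# Fri 2014-01-03 10:00 UTC and Sat 2014-01-04 18:00 UTC.
conn.executemany(
    "INSERT INTO events_meta VALUES (?, ?, ?)",
    [(1, 1388743200, 604800), (1, 1388858400, 604800)],
)

end_epoch = 1391040000  # 2014-01-30 00:00:00 UTC

# Cross join the series to a generated numbers table and keep start + n * interval
# occurrences that fall before the end date.
rows = conn.execute(
    """
    WITH RECURSIVE n(i) AS (SELECT 0 UNION ALL SELECT i + 1 FROM n WHERE i < 100)
    SELECT em.start_epoch + n.i * em.interval_s AS occurs_at
    FROM events_meta em CROSS JOIN n
    WHERE em.start_epoch + n.i * em.interval_s < ?
    ORDER BY occurs_at
    """,
    (end_epoch,),
).fetchall()

dates = [
    datetime.fromtimestamp(t, tz=timezone.utc).strftime("%Y-%m-%d %H:%M")
    for (t,) in rows
]
print(len(dates), dates[0], dates[-1])  # 8 2014-01-03 10:00 2014-01-25 18:00
```

This yields the same eight occurrences as the query above: four Fridays at 10:00 and four Saturdays at 18:00 in January 2014.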
---
I think the schema you have is questionable though, the join on:
```
Meta_Key = 'repeat_interval_' + CAST(rs.ID AS VARCHAR(10))
```
is flimsy at best. I think you would be much better off storing the start date and repeat interval associated with it together:
```
CREATE TABLE dbo.Events_Meta
( ID INT IDENTITY(1, 1) NOT NULL,
Event_ID INT NOT NULL,
StartDateTime DATETIME2 NOT NULL,
IntervalRepeat INT NULL, -- NULLABLE FOR SINGLE EVENTS
RepeatEndDate DATETIME2 NULL, -- NULLABLE FOR EVENTS THAT NEVER END
CONSTRAINT PK_Events_Meta__ID PRIMARY KEY (ID),
CONSTRAINT FK_Events_Meta__Event_ID FOREIGN KEY (Event_ID) REFERENCES dbo.Events (ID)
);
```
This would simplify your data to:
```
EventID | StartDateTime | RepeatInterval | RepeatEndDate
--------+---------------------+----------------+---------------
1 | 2014-01-03 10:00:00 | 604800 | NULL
1 | 2014-01-04 18:00:00 | 604800 | NULL
```
It also allows you to add an end date to your repeat, i.e. if you only want it to repeat for one week. Your query then simplifies to:
```
DECLARE @EndDate DATETIME = '20140130';
WITH Numbers AS
( SELECT Number = ROW_NUMBER() OVER(ORDER BY a.object_id) - 1
FROM sys.all_objects a
)
SELECT e.ID,
e.Name,
EventDate = DATEADD(SECOND, n.Number * em.IntervalRepeat, em.StartDateTime)
FROM Events e
INNER JOIN Events_Meta em
ON em.Event_ID = e.ID
CROSS JOIN Numbers n
WHERE DATEADD(SECOND, n.Number * em.IntervalRepeat, em.StartDateTime) <= @EndDate
AND ( DATEADD(SECOND, n.Number * em.IntervalRepeat, em.StartDateTime) <= em.RepeatEndDate
OR em.RepeatEndDate IS NULL
)
ORDER BY EventDate;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!6/2d8c0/4)**
---
I won't give you my full schema for how I have achieved this in the past, but I will give a very cut down example, from which you can hopefully build your own. I will only add an example for an event that occurs weekly on Mon-Fri:

In the above ER diagram, RepeatEvent stores the basic information for the recurring event, then depending on the repeat type (daily, weekly, monthly) one or more of the other tables is populated. In the example of a weekly event, it would store all the days of the week that it repeats on in the table `RepeatDay`. If this needed to be limited to only certain months, you could store these months in `RepeatMonth`, and so on.
Then using a calendar table you can get all the possible dates after the first date, and limit these to only those dates that match the day of the week/month of the year etc:
```
WITH RepeatingEvents AS
( SELECT e.Name,
re.StartDateTime,
re.EndDateTime,
re.TimesToRepeat,
RepeatEventDate = CAST(c.DateKey AS DATETIME) + CAST(re.StartTime AS DATETIME),
RepeatNumber = ROW_NUMBER() OVER(PARTITION BY re.RepeatEventID ORDER BY c.Datekey)
FROM dbo.Event e
INNER JOIN dbo.RepeatEvent re
ON e.EventID = re.EventID
INNER JOIN dbo.RepeatType rt
ON rt.RepeatTypeID = re.RepeatTypeID
INNER JOIN dbo.Calendar c
ON c.DateKey >= re.StartDate
INNER JOIN dbo.RepeatDayOfWeek rdw
ON rdw.RepeatEventID = re.RepeatEventID
AND rdw.DayNumberOfWeek = c.DayNumberOfWeek
WHERE rt.Name = 'Weekly'
)
SELECT Name, StartDateTime, RepeatEventDate, RepeatNumber
FROM RepeatingEvents
WHERE (TimesToRepeat IS NULL OR RepeatNumber <= TimesToRepeat)
AND (EndDateTime IS NULL OR RepeatEventDate <= EndDateTime);
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!3/37618/1)**
This is only a very basic representation of how I implemented it; for instance, I actually ran any query over the repeating data entirely through views, so that any event with no entries in `RepeatDayOfWeek` would be assumed to repeat every day, rather than never. Along with all the other detail in this and other answers, you should hopefully have more than enough to get you started. | The following will generate events based on the StartEvent and Meta description with a CTE.
Change the values of @MaxDate and @MaxEvents according to the parameter values.
```
declare @MaxDate datetime = convert(datetime,'12/2/2014', 101);
declare @MaxEvents integer= 200;
; With
-- number generator by power of 2
n2(n) as ( select 1 as n union all select 1),
n4(n) as ( select 1 from n2 t1 cross join n2 t2 ),
n16(n) as ( select 1 from n4 t1 cross join n4 t2 ),
n256(n) as ( select 1 from n16 t1 cross join n16 t2 ),
n65k(n) as ( select 1 from n256 t1 cross join n256 t2 ),
Numbers (n) as (select row_number() over( order by n) from n65k ),
-- Start of events
StartEvents as
( SELECT 1 as EventNo, EV.Name, EM.ID, EM.Event_Id, EM.Meta_key, dateAdd(second,EM.meta_value,convert(datetime,'01/01/1970', 101)) as EventDate
FROM events AS EV
INNER JOIN events_meta EM
ON EM.event_id = EV.id
AND EM.meta_key = 'repeat_start'),
-- Repeating events N times
NextEvents AS
( SELECT Numbers.N+1 as EventNo, StartEvents.Name, EM.ID, EM.Event_Id, EM.Meta_key, dateAdd(second,EM.meta_value*Numbers.n,StartEvents.EventDate) as EventDate
FROM StartEvents
INNER JOIN events_meta EM
ON EM.event_id = StartEvents.event_id
AND EM.meta_key = 'repeat_interval_'+ ltrim(rtrim(str(StartEvents.ID )))
AND ((1391040000- EM.meta_value ) % EM.meta_value) = 0
cross join Numbers
-- not to overflow (dateadd parameter is int type)
Where Numbers.N < 3000
)
-- startEvents union nextEvents
select EventNo, Name, Meta_key, EventDate
FROM (
Select * from StartEvents
union all select * from NextEvents ) AllEvents
where EventDate < @MaxDate
and EventNo < @MaxEvents
order by ID ;
``` | Display next event date | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am struggling with a database design, this is what I have so far.

Here are the problems.
1. I need to introduce a new user type (conglomeration manager), and they will have visibility of groups of companies (a conglomeration). A conglomeration manager can have multiple companies, and a company can belong to multiple conglomeration managers. It would be advantageous if an independent company could be added, and then at a later date be easily included as part of a conglomeration.
I am finding this difficult to model as all my users so far (manager,driver,recipient) all exist in the users table. This was by design as they all have nearly the same data fields, and I need to have a single login point for all users on my site. If I add a conglomeration manager to the users table, they will have relationships with other tables that my existing user types don't have.
2. I am uneasy about the dependency loop formed through users, ownerships, packages, companies, users. This strikes me as bad form, but I really can't think of a way to avoid it as:
managers, drivers and recipients all work for a single company. That company has an associated set of packages, yet I need to have the ability to associate a subset of those packages to a particular recipient (they own packages) and to a particular driver or manager (responsible for delivering those packages).
3. I am not happy with the "receive\_emails" field in users, as it is only relevant to users of type "recipient".
To add to the the problems, this design is already in use, and data will have to be migrated to any new design.
The most common operations that take place in the system are the viewing of statuses by recipients, followed by the creation of statuses by managers and drivers.
Can my problems be addressed with an elegant new design? | Well here is another try. I still think
* you shouldn't worry too much about that receive\_emails field, as explained in my other answer.
* you don't have to split users into user kinds.
What you are worried about in 2 is the dependencies. Dependencies are usually not a problem, but you are very strict in your id based database design, thus hiding the dependencies from your dbms. If it just knew, it could help you :-)
You could do this:
* Stick to your table "users", but remove the company\_id.
* You don't have to make any changes to "companies", "packages", "participations" and "statuses".
* Add a table to link users to companies. Let's call the table "affiliations" for the moment. (I don't know if this would be an appropriate name, my English fails me here.) Like ownerships this is just a link table, so the only fields are user\_id and company\_id (forming the primary key).
* Now add company\_id to "ownerships". (I know, it is kind of there implicitly because of your link to "packages", but the dbms doesn't know that.) So add the field and now that your dbms sees the field, you can also add a constraint (foreign key) on package\_id plus company\_id to "packages" and a constraint on user\_id plus company\_id to "affilations".
That's it. You haven't changed much, but now a user can be affiliated to many companies (conglomeration managers so far, but maybe you decide one day to allow recipients to work with multiple companies or let drivers work for more than one of the companies at a time). And there is no risk of wrong entries now in "ownerships" or any doubt about its content and use.
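As a sketch of those changes in plain DDL (SQLite syntax via Python so the constraints can actually be exercised; table and column names follow the discussion but are otherwise assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript(
    """
    CREATE TABLE users        (user_id    INTEGER PRIMARY KEY);
    CREATE TABLE companies    (company_id INTEGER PRIMARY KEY);
    CREATE TABLE packages     (package_id INTEGER PRIMARY KEY,
                               company_id INTEGER REFERENCES companies,
                               UNIQUE (package_id, company_id));
    -- link table: which users are affiliated with which companies
    CREATE TABLE affiliations (user_id    INTEGER REFERENCES users,
                               company_id INTEGER REFERENCES companies,
                               PRIMARY KEY (user_id, company_id));
    -- ownerships now carries company_id explicitly, so the dbms can verify
    -- both that the package belongs to the company and that the user is
    -- affiliated with that same company
    CREATE TABLE ownerships   (user_id    INTEGER,
                               package_id INTEGER,
                               company_id INTEGER,
                               PRIMARY KEY (user_id, package_id),
                               FOREIGN KEY (package_id, company_id)
                                   REFERENCES packages (package_id, company_id),
                               FOREIGN KEY (user_id, company_id)
                                   REFERENCES affiliations (user_id, company_id));
    INSERT INTO users        VALUES (1);
    INSERT INTO companies    VALUES (1);
    INSERT INTO packages     VALUES (1, 1);
    INSERT INTO affiliations VALUES (1, 1);
    INSERT INTO ownerships   VALUES (1, 1, 1);  -- consistent row: accepted
    """
)

# A row for a user who is not affiliated with the company is rejected.
try:
    conn.execute("INSERT INTO ownerships VALUES (2, 1, 1)")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)  # True
```

The composite foreign keys are the point: once "ownerships" carries company\_id, the dbms itself rejects ownership rows that contradict either link.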
If you want to play safe with your receive\_emails field, here is a way you might want to go (as I said, it is not really necessary): Have a new table email\_recipients with two fields: user\_id and user\_type. Yes, redundancy again. But doing this, you can have a constraint on user\_type only allowing certain types (only "recipient" so far). Again you would have a foreign key not only to user\_id, but to user\_id plus user\_type. | Extend users!
Like extending a class you can create a new table "Managers" with more columns and a FK to users.
So you can create a relational table between Managers and companies.
If you want better control over the conglomerate entity, create a Conglomerate table and give it a FK to Managers; then create a relational table between Conglomerate and Companies, or, if a company cannot be owned by two conglomerates, just a FK from Company to Conglomerate.
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
In SQL Server 2008, If I have a table like this.
```
Date NoOfCases
--------------------
2013-11-27 1
2013-11-28 2
2013-11-29 1
2013-11-30 3
```
And I want the previous sum to be added to the next row's value. That means that the output should be like this:
```
Date AccumulatedNoOfCases
2013-11-27 1
2013-11-28 3
2013-11-29 4
2013-11-30 7
```
Any suggestions? Thanks in advance. | You could do a join on same table, with the date being smaller or equal.
```
select t1.date, sum(t2.NoOfCases) as AccumulatedNoOfCases
from Table1 t1
join Table1 t2 on t2.Date <= t1.Date
group by t1.date
```
[SqlFiddle](http://sqlfiddle.com/#!6/5a30c/10)
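The triangular self-join translates directly to other engines; here is a quick check against the question's data, run as SQLite through Python (column names shortened for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (d TEXT, n INT)")
conn.executemany(
    "INSERT INTO cases VALUES (?, ?)",
    [("2013-11-27", 1), ("2013-11-28", 2), ("2013-11-29", 1), ("2013-11-30", 3)],
)

# Each row is joined to every row with an earlier-or-equal date, then summed.
rows = conn.execute(
    """
    SELECT t1.d, SUM(t2.n) AS running_total
    FROM cases t1
    JOIN cases t2 ON t2.d <= t1.d
    GROUP BY t1.d
    ORDER BY t1.d
    """
).fetchall()
print(rows)
# [('2013-11-27', 1), ('2013-11-28', 3), ('2013-11-29', 4), ('2013-11-30', 7)]
```

The running totals 1, 3, 4, 7 match the expected output in the question.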
With a count, I would use a CTE for simplicity
```
with cnt as (select date, count(*) as cnt from Table1
group by date)
select t1.date, sum(t2.cnt) as AccumulateNoOfCases
FROM cnt t1
join cnt t2 on t2.date <=t1.date
group by t1.date;
```
see [SqlFiddle](http://sqlfiddle.com/#!6/6174e/2)
With your data, it should be something like this:
```
WITH cte as
(select [Date of hospital visit] as dte, count(*) as cnt
FROM DW_Epidemic_Warning
GROUP BY [Date of hospital visit])
select t1.dte, sum(t2.cnt) as AccumulateNoOfCases
FROM cte t1
join cte t2 on t2.dte <=t1.dte
group by t1.dte;
``` | You didn't say what version of SQL Server you are using. If it is 2012, you can do something like
```
SELECT
t1.date, sum(t1.NoOfCases) OVER (ORDER BY t1.date) AS AccumulatedNoOfCases
FROM Table1 t1
``` | Accumulate rows in SQL: summarizing previous values and new grouped by date | [
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
""
] |
I have two tables:
```
Shop_Products
Shop_Products_Egenskaber_Overruling
```
I want to select all records in Shop\_Products\_Egenskaber\_Overruling which have a related record in
Shop\_Products, meaning a record with an equal ProductNum.
This works for me with the statement below, but I don't think a CROSS JOIN is the best approach for large record sets. When using the statement in web controls, it becomes pretty slow, even with only 1000 records. Is there a better way to accomplish this?
```
SELECT Shop_Products.*, Shop_Products_Egenskaber_Overruling.*
FROM Shop_Products CROSS JOIN
Shop_Products_Egenskaber_Overruling
WHERE Shop_Products.ProductNum = Shop_Products_Egenskaber_Overruling.ProductNum
```
Any optimizing suggestions?
Best regards. | You are actually looking for an INNER JOIN.
```
SELECT
SO.*,
SPEO.*
FROM SHOP_PRODUCTS SP
INNER JOIN Shop_Products_Egenskaber_Overruling SPEO
ON SP.ProductNum = SPEO.ProductNum
```
This will have improved performance over your CROSS-JOIN, because the condition to look for records with equal ProductNum is implicit in the `JOIN` condition and the `WHERE` clause is eliminated.
Logically, `WHERE` clauses are evaluated after the `JOIN`. In your case, all possible combinations are created by the CROSS JOIN and then filtered by the conditions in the WHERE clause (in practice the optimizer will often produce the same plan for both forms, but the intent is clearer with an INNER JOIN).
By using an `INNER JOIN` you are doing the filtering in the first step. | You can do it that way but not sure it will ensure an optimization
```
SELECT Shop_Products.*, Shop_Products_Egenskaber_Overruling.*
FROM Shop_Products
INNER JOIN Shop_Products_Egenskaber_Overruling on Shop_Products.ProductNum = Shop_Products_Egenskaber_Overruling.ProductNum
``` | Optimize SQL select query with join | [
"",
"mysql",
"sql",
"sql-server",
"t-sql",
""
] |
I used the sql code in vb.net
```
SELECT [Table1 Query].[amel_code], [Table1 Query].[kala_code], Sum([Table1 Query].
[SumOfqty]) AS SumOfSumOfqty FROM(
SELECT Table1.amel_code,
Table1.amani_code,
Table1.kala_code,
Sum(Table1.qty) AS SumOfqty
FROM Table1
GROUP BY Table1.amel_code,
Table1.amani_code,
Table1.kala_code HAVING (((Table1.amel_code)=[?]) AND ((Table1.amani_code)<[?]));
)
GROUP BY [Table1 Query].[amel_code], [Table1 Query].[kala_code];
```
This code is working properly, but the SQL worksheet gives the following error:
> Incorrect syntax near the keyword 'SELECT'.
> Incorrect syntax near ')'.
Please help me. | You need to remove the semicolon at the end of the nested query, and add an alias to it:
```
SELECT [Table1 Query].[amel_code], [Table1 Query].[kala_code], Sum([Table1 Query].[SumOfqty]) AS SumOfSumOfqty
FROM (
SELECT Table1.amel_code,
Table1.amani_code,
Table1.kala_code,
Sum(Table1.qty) AS SumOfqty
FROM Table1
GROUP BY Table1.amel_code,
Table1.amani_code,
Table1.kala_code
HAVING (((Table1.amel_code)=[?])
AND ((Table1.amani_code)<[?])) -- ; <<== Remove this semicolon
) [Table1 Query] -- <<== Add this alias
GROUP BY [Table1 Query].[amel_code], [Table1 Query].[kala_code];
```
[Demo on SQLFiddle.](http://sqlfiddle.com/#!3/b19ba/5) | This is what you are missing:
1) Give an alias `Table1 Query` to the nested query. The error says it is not able to identify what `[Table1 Query]` is, so you have to give that alias to the subquery.
```
SELECT [Table1 Query].[amel_code], [Table1 Query].[kala_code], Sum([Table1 Query].[SumOfqty]) AS SumOfSumOfqty
FROM(
SELECT Table1.amel_code,
Table1.amani_code,
Table1.kala_code,
Sum(Table1.qty) AS SumOfqty
FROM Table1
GROUP BY Table1.amel_code,
Table1.amani_code,
Table1.kala_code HAVING (((Table1.amel_code)=[?]) AND ((Table1.amani_code)<[?]))
) [Table1 Query]
GROUP BY [Table1 Query].[amel_code], [Table1 Query].[kala_code];
``` | Incorrect syntax near the keyword 'SELECT'. Incorrect syntax near ')' | [
"",
"sql",
"syntax",
""
] |
I am trying to select data from one table
and insert the data into another table
```
SELECT ticker FROM tickerdb;
```
Using Oracle SQL I am trying to
get the ticker symbol "GOOG" from the tickerdb table,
and insert the t.ticker into the stockdb table.
select from tickerdb table --> insert into quotedb table
```
INSERT INTO quotedb
(t.ticker, q.prevclose, q.opn, q.rnge,
q.volume, q.marketcap, q.dividend, q.scrapedate)
VALUES (?,?,?,?,?,?,?,?,SYSDATE)
tickerdb t inner JOIN quotedb q
ON t.ticker = q.ticker
``` | From the oracle documentation, the below query explains it better
```
INSERT INTO tbl_temp2 (fld_id)
SELECT tbl_temp1.fld_order_id
FROM tbl_temp1 WHERE tbl_temp1.fld_order_id > 100;
```
You can read this [link](http://docs.oracle.com/cd/E17952_01/refman-5.1-en/insert-select.html)
Your query would be as follows
```
-- just the concept
INSERT INTO quotedb
(COLUMN_NAMES) -- separated by commas
SELECT COLUMN_NAMES FROM tickerdb,quotedb WHERE quotedb.ticker = tickerdb.ticker
```
Note: Make sure the columns in the insert and select are in the right positions as per your requirement
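A minimal runnable sketch of the `INSERT ... SELECT` pattern, using the table names from the documentation example above (SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_temp1 (fld_order_id INT)")
conn.execute("CREATE TABLE tbl_temp2 (fld_id INT)")
conn.executemany("INSERT INTO tbl_temp1 VALUES (?)", [(50,), (150,), (200,)])

# Column counts and positions must line up between the INSERT list and the SELECT.
conn.execute(
    """
    INSERT INTO tbl_temp2 (fld_id)
    SELECT fld_order_id FROM tbl_temp1 WHERE fld_order_id > 100
    """
)
copied = conn.execute("SELECT fld_id FROM tbl_temp2 ORDER BY fld_id").fetchall()
print(copied)  # [(150,), (200,)]
```

Only the rows passing the `WHERE` filter are copied into the target table.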
Hope this helps! | You can use
```
insert into <table_name> select <fieldlist> from <tables>
``` | select from one table, insert into another table oracle sql query | [
"",
"sql",
"oracle",
"select",
"insert",
""
] |
```
id refid date1 date2 nextdate
5 10 2008-02-21 2009-02-21 004/2008
6 10 2009-02-09 2010-02-09 002/2009
7 10 2010-02-08 2011-02-08 001/2010
10 11 2007-02-15 2008-02-15 002/2007
11 11 2008-02-21 2009-02-21 001/2008
12 11 2009-02-09 2010-02-09 001/2009
13 11 2010-02-09 2011-02-09 002/2010
14 11 2011-07-19 2012-07-19 054/2011
15 11 2012-07-17 2014-07-17 066/2012
18 14 2007-02-15 2008-02-15 006/2007
25 16 2007-02-15 2008-02-15 004/2007
27 16 2009-02-10 2010-02-10 004/2009
28 16 2010-02-12 2011-02-12 005/2010
29 16 2011-07-26 2012-07-26 055/2011
30 16 2012-07-18 2014-07-18 067/2012
```
I have this data table, and I need a query that will do the following:
Return all rows for any refid for which there exists a date2 value greater than 2014-01-01 (if a refid qualifies, I need all of its rows).
Result should be like this:
```
id refid date1 date2 nextdate
10 11 2007-02-15 2008-02-15 002/2007
11 11 2008-02-21 2009-02-21 001/2008
12 11 2009-02-09 2010-02-09 001/2009
13 11 2010-02-09 2011-02-09 002/2010
14 11 2011-07-19 2012-07-19 054/2011
15 11 2012-07-17 2014-07-17 066/2012
25 16 2007-02-15 2008-02-15 004/2007
27 16 2009-02-10 2010-02-10 004/2009
28 16 2010-02-12 2011-02-12 005/2010
29 16 2011-07-26 2012-07-26 055/2011
30 16 2012-07-18 2014-07-18 067/2012
``` | This might do what you want, if I understand your question correctly:
```
SELECT *
FROM datatable
WHERE refid IN (SELECT DISTINCT refid
FROM datatable
WHERE date2 > '20140101')
```
The subquery gives back all distinct *refid* values for which a date2 exists that is greater than 2014-01-01. You use the list of these *refid*'s to return all rows with those *refid*'s.
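The same IN-subquery pattern can be verified on a small sample (SQLite via Python; only a few of the question's rows are loaded):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datatable (id INT, refid INT, date2 TEXT)")
conn.executemany(
    "INSERT INTO datatable VALUES (?, ?, ?)",
    [(7, 10, "2011-02-08"), (13, 11, "2011-02-09"), (15, 11, "2014-07-17"),
     (18, 14, "2008-02-15"), (30, 16, "2014-07-18")],
)

# All rows for any refid that has at least one date2 after 2014-01-01.
rows = conn.execute(
    """
    SELECT id, refid FROM datatable
    WHERE refid IN (SELECT DISTINCT refid
                    FROM datatable
                    WHERE date2 > '2014-01-01')
    ORDER BY id
    """
).fetchall()
print(rows)  # [(13, 11), (15, 11), (30, 16)]
```

Note that row 13 is returned even though its own date2 is old, because another row for refid 11 qualifies.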
**Edit**
Adding an **[SQL Fiddle](http://www.sqlfiddle.com/#!3/a7c02/1)** to experiment with. (Also, [SQL Fiddle](http://www.sqlfiddle.com/) is a very useful tool to help you formulate your database questions more clearly and make them more likely to be answered in a useful way!)
```
DELETE FROM tableName
WHERE date2 < '2014-01-01' AND NOT (refid IS NULL)
```
**Edit**
As stated in the comment below I possibly misunderstood the following part of the question to mean that the entries needed deleting.
> Remove all rows with refid where date2 is less than 2014-01-01
In which case the following would return all results after the specified date instead:
```
SELECT *
FROM tableName
WHERE date2 > '2014-01-01' AND NOT (refid IS NULL)
```
I have also assumed that the "with refid" means that `refid` should be populated.
To get the results described in the original question, however, I'd simplify the query as follows, since the date part specified wouldn't give those results:
```
SELECT *
FROM tableName
WHERE (refid = 11) OR (refid = 16)
```
**Another Edit**
The following should return all entries assigned to a `refid` with any entries with a `date2` after `2014-01-01`
```
SELECT DISTINCT t1.*
FROM tableName t1
INNER JOIN tableName t2 ON t1.refid = t2.refid AND t2.date2 > '2014-01-01'
```
Alternatively you could use exists:
```
SELECT *
FROM tableName t1
WHERE EXISTS(
SELECT *
FROM tableName t2
WHERE t1.refid = t2.refid AND t2.date2 > '2014-01-01'
)
```
Or link to a separate table of `refid`'s which fit the profile
```
SELECT *
FROM tableName t1
INNER JOIN (
SELECT DISTINCT refid
FROM tableName
WHERE date2 > '2014-01-01'
)t2 ON t1.refid = t2.refid
``` | Filter data with query | [
"",
"sql",
"sql-server-2008",
"filtering",
""
] |
In this table i have four column
```
id state_id state add_time
1 10 1 1385461144
2 12 3 1385461144
3 14 2 1385461144
4 12 5 1385461264
5 10 2 1385461264
```
I want to get the latest records based on the last field (add\_time).
the result must be like this
```
id state_id state add_time
3 14 2 1385461144
4 12 5 1385461264
5 10 2 1385461264
```
I found this question [get the records for the latest timestamp mysql](https://stackoverflow.com/questions/2693408/get-the-records-for-the-latest-timestamp-mysql),
and it uses two select queries (a subquery), but I don't want to use a subquery.
Is there any other way to get the latest records?
Left join solution:
```
SELECT t1.* FROM t t1
LEFT JOIN t t2
ON t1.state_id = t2.state_id AND t1.add_time < t2.add_time
WHERE t2.add_time IS NULL
```
Fiddle [here](http://sqlfiddle.com/#!2/94393/1).
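The left-join version ports directly to other engines; here is a quick check of it against the question's data, run as SQLite through Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, state_id INT, state INT, add_time INT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?, ?)",
    [(1, 10, 1, 1385461144), (2, 12, 3, 1385461144), (3, 14, 2, 1385461144),
     (4, 12, 5, 1385461264), (5, 10, 2, 1385461264)],
)

# Keep a row only if no other row for the same state_id has a later add_time.
rows = conn.execute(
    """
    SELECT t1.id, t1.state_id, t1.state, t1.add_time
    FROM t t1
    LEFT JOIN t t2
      ON t1.state_id = t2.state_id AND t1.add_time < t2.add_time
    WHERE t2.add_time IS NULL
    ORDER BY t1.id
    """
).fetchall()
print(rows)
# [(3, 14, 2, 1385461144), (4, 12, 5, 1385461264), (5, 10, 2, 1385461264)]
```

This returns exactly the three rows the question expects.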
Subquery solution:
```
SELECT t1.* FROM t t1
JOIN (
SELECT state_id, max(add_time) add_time FROM t
GROUP BY state_id
) t2
ON t1.state_id = t2.state_id AND t1.add_time = t2.add_time
```
Fiddle [here](http://sqlfiddle.com/#!2/6a17dd/1).
Both will output:
```
| ID | STATE_ID | STATE | ADD_TIME |
|----|----------|-------|------------|
| 3 | 14 | 2 | 1385461144 |
| 4 | 12 | 5 | 1385461264 |
| 5 | 10 | 2 | 1385461264 |
``` | While you're at it, to test which one is faster, here's another one to try:
<http://sqlfiddle.com/#!2/760c73/13/0>
```
SELECT id,state_id,state,add_time
FROM t t1
WHERE NOT EXISTS
(SELECT 1
FROM t t2
WHERE t2.state_id = t1.state_id
AND t2.add_time > t1.add_time);
``` | mysql get latest records by timestamp without subquery | [
"",
"mysql",
"sql",
"timestamp",
"greatest-n-per-group",
""
] |
How can I extract the folder name of a filepath sitting right before the filename?
Example:
```
declare @a varchar(200)
set @a = '/path/to/category1/filename.txt'
select right(@a, charindex('/', reverse(@a)) - 1)
```
this returns filename.txt - In another column I now want to get the "category1" folder name.
If the path changes to:
`set @b = '/path/to/another/folder/category2/filename.txt'`
it should extract **category2**. | Based on your example code, you can extract the last folder name with the following snippet
```
declare @a varchar(200), @b varchar(200), @c varchar(200)
set @a = '/path/to/category1/filename.txt'
set @b = right(@a, charindex('/', reverse(@a)) - 1)
set @c = replace(@a, '/' + @b, '')
select right(@c, charindex('/', reverse(@c)) - 1), @b
```
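For cross-checking expected values outside of SQL, the same extraction is a one-liner in most languages, e.g. in Python:

```python
def parent_folder(path: str) -> str:
    """Return the folder name immediately before the filename."""
    # rsplit from the right keeps exactly: [everything-before, folder, filename]
    return path.rsplit("/", 2)[-2]

print(parent_folder("/path/to/category1/filename.txt"))                 # category1
print(parent_folder("/path/to/another/folder/category2/filename.txt"))  # category2
```

This makes it easy to build test cases for the T-SQL expression above.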
The same logic as a single select statement (apart from the declaration of @a):
```
declare @a VARCHAR(200)
set @a = '/path/to/category1/filename.txt'
select right(replace(@a, '/' + right(@a, charindex('/', reverse(@a)) - 1), ''), charindex('/', reverse(replace(@a, '/' + right(@a, charindex('/', reverse(@a)) - 1), ''))) - 1) as folder, right(@a, charindex('/', reverse(@a)) - 1) as filename
``` | Alternative method below:
```
declare @a varchar(200)
set @a = '/path/to/category1/filename.txt'
select reverse(
SUBSTRING(reverse(@a),
charindex('/', reverse(@a)) + 1 ,
CHARINDEX('/',reverse(@a),charindex('/', reverse(@a))+ 1) - charindex('/', reverse(@a)) -1)
)
``` | Extract folder name right before filename from filepath | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I'm trying to create a select query that returns the most recent record from the table based on date. Basically whichever row's `[Date]` field is closest to the current date.
Sample Data:
```
ID Content Date
--------------------------------------------
1 1050 Test Values 2013-11-27 10:46:24.900
2 1051 Test Test 2013-11-27 10:47:43.150
3 1057 Test Testx2 2013-11-27 10:48:22.820
```
I would like to only return these values
```
ID Content Date
--------------------------------------------
1 1057 Test Testx2 2013-11-27 10:48:22.820
```
Thanks! | You could try the following query:
```
SELECT TOP 1 *
FROM Table
ORDER BY [Date] DESC
``` | Or, if you want your query to work on any DBMS (not just SQL Server), use [ANSI SQL](http://en.wikipedia.org/wiki/SQL:2011):
```
select * from t order by Date desc limit 1
```
or
```
select * from t where Date = (select max(Date) from t)
``` | Select most recent record based on date | [
"",
"sql",
"sql-server",
""
] |
I have a MySQL table which looks like this:
```
-----
| 1 |
-----
| 3 |
-----
| 7 |
-----
| . |
-----
| . |
-----
| . |
-----
| n |
-----
```
is it possible to select from this table to have something like this:
```
----------
| 1 || 3 |
----------
| 3 || 7 |
----------
| 7 || . |
----------
| . || . |
----------
| . || . |
----------
| n || n-1 |
I guess the query below is the nearest to what you want in MySQL:
```
mysql> select * from num;
+------+
| a |
+------+
| 1 |
| 3 |
| 7 |
| 11 |
| 17 |
+------+
mysql> select @prev:=a FROM num ORDER BY a ASC LIMIT 1,1;
+----------+
| @prev:=a |
+----------+
| 3 |
+----------+
1 row in set (0.00 sec)
mysql> select @prev, t1.a, @prev:=t1.a from num t1;
+-------+------+-------------+
| @prev | a | @prev:=t1.a |
+-------+------+-------------+
| 3 | 1 | 1 | <= 1st row is dummy
| 1 | 3 | 3 |
| 3 | 7 | 7 |
| 7 | 11 | 11 |
| 11 | 17 | 17 |
+-------+------+-------------+
5 rows in set (0.00 sec)
mysql> select MAX(a), NULL FROM num;
+--------+------+
| MAX(a) | NULL |
+--------+------+
| 17 | NULL | <= last row founded here
+--------+------+
1 row in set (0.00 sec)
``` | Relational data has no concept of row-numbers or row-ordering, so "shifting by 1 row" is not possible using (standard) SQL. If your table contains a field "rownumber" you could use a self-join:
```
SELECT t1.field, t2.field
FROM mytable t1 join mytable t2 on(t1.rownumber = t2.rownumber+1)
```
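If such a `rownumber` column exists, the self-join can be checked quickly. This SQLite sketch joins each row to the one after it, which matches the (value, next value) output asked for in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (rownumber INTEGER PRIMARY KEY, field INT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [(1, 1), (2, 3), (3, 7), (4, 11)])

# Join each row to the row numbered one higher to get (value, next value) pairs.
rows = conn.execute(
    """
    SELECT t1.field, t2.field
    FROM mytable t1
    JOIN mytable t2 ON t2.rownumber = t1.rownumber + 1
    ORDER BY t1.rownumber
    """
).fetchall()
print(rows)  # [(1, 3), (3, 7), (7, 11)]
```

Swapping which side gets the `+ 1` would pair each value with the previous one instead.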
But as BenM stated, just shifting by one row doesn't make much sense; if you have any other join criteria, though, a self-join is a useful tool. | Mysql select from table shifting one column | [
"",
"mysql",
"sql",
""
] |
I'm not so good with SQL, so help me please.
In my stored procedure I'm getting a parameter.
Then, based on this parameter I want to get ID from another table and put this ID into my another variable, but the problem is that ID could be multiple based on this parameter.
```
DECLARE @RateID int;
SET @RateID = (
Select [dbo].[Rate].RateID
from [dbo].[Rate]
Where [dbo].[Rate].Rate = 160
)
```
Here I'm getting the error, because `Subquery returned more than 1 value`
How to get the first value from subquery result and set to the @RateID? | You may want to try it like this:
```
DECLARE @RateID int;
SET @RateID = (
Select Top 1 [dbo].[Rate].RateID
from [dbo].[Rate]
Where [dbo].[Rate].Rate = 160
)
```
As in your present query, there may be more than 1 row in your table satisfying the condition `Rate = 160`, so the `select` query will return more than 1 row, which you cannot store in a single variable.
However, in **MySQL** you have to use `LIMIT 1`, as `TOP 1` will not work there. | If you only want the first result you can do
```
Select TOP 1 [dbo].[Rate].RateID
```
in place of
```
Select [dbo].[Rate].RateID
``` | How to set into variable 1st value from subquery result in SQL | [
"",
"mysql",
"sql",
"sql-server",
""
] |
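A minimal, hypothetical demonstration of the `LIMIT 1` variant, using Python's `sqlite3` module (SQLite, like MySQL, spells SQL Server's `TOP 1` as `LIMIT 1`; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Rate (RateID INTEGER, Rate INTEGER)")
conn.executemany("INSERT INTO Rate VALUES (?, ?)",
                 [(10, 160), (11, 160), (12, 150)])

# Two rows satisfy Rate = 160, but LIMIT 1 guarantees a single row back,
# so the value can safely be stored in one variable.
rate_id = conn.execute(
    "SELECT RateID FROM Rate WHERE Rate = 160 ORDER BY RateID LIMIT 1"
).fetchone()[0]
print(rate_id)  # 10
```

An explicit `ORDER BY` makes "first" deterministic; without it, `TOP 1` / `LIMIT 1` returns an arbitrary matching row.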
I have a table with multiple rows which will contain data in the following manner:
```
col1 col2
100 - 1
100 - 0
102 - 1
102 - 0
103 - 1
```
I need to write a query to fetch the row (103-1). That is, the query should return rows that do not have a corresponding pair of records (xxx-1, xxx-0). | ```
select col1, max(col2) as col2
from your_table
group by col1
having count(distinct col2) = 1
``` | Try this:
```
select *
from your_table a
where not exists (select 1
from your_table b
where b.col1 = a.col1
and b.col2 != a.col2)
```
Here is an example [sql fiddle](http://www.sqlfiddle.com/#!4/9f1e6/4) | How to retrieve rows from table based on column value(0,1) | [
"",
"sql",
"oracle",
""
] |
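Here is the accepted `GROUP BY` / `HAVING` approach run against the question's sample data, sketched with Python's `sqlite3` module (standing in for Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 INTEGER, col2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(100, 1), (100, 0), (102, 1), (102, 0), (103, 1)])

# A col1 group with both a 1 and a 0 has two distinct col2 values;
# keeping only groups with one distinct value leaves the unpaired rows.
rows = conn.execute("""
    SELECT col1, MAX(col2)
    FROM t
    GROUP BY col1
    HAVING COUNT(DISTINCT col2) = 1
""").fetchall()
print(rows)  # [(103, 1)]
```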
I have a small database for a retail scenario; within this I have a field called "Dispatched", a bool indicating whether the item has been dispatched yet. This is of course stored as 1 and 0, and I have attempted a simple CASE WHEN to make 1 display as Yes and 0 as No.
My full query is:
```
SELECT
orders.OrdersID,
stock.ItemName,
basket.Quantity,
customer.FirstName,
customer.LastName,
address.AddressLine1,
address.AddressLine2,
address.TownOrCity,
address.Postcode,
address.Country,
CASE WHEN basket.Dispatched = 1 THEN 'Yes' ELSE 'No' END AS basket.Dispatched
FROM orders
JOIN OrdersBasketJoin ON orders.OrdersID = OrdersBasketJoin.OrdersID
LEFT JOIN basket ON OrdersBasketJoin.BasketID = basket.BasketID
JOIN customer ON orders.CustomerID = customer.CustomerID
JOIN address ON orders.DeliveryAddress = address.AddressID
JOIN stock ON basket.StockID = stock.StockID
ORDER BY `customer`.`CustomerID` ASC
LIMIT 0 , 30
```
The query works fine without the CASE WHEN, and will display the 1s and 0s when Dispatched is selected normally, as well as WHERE working fine when referencing Dispatched.
However when I try adding
```
CASE WHEN basket.Dispatched = 1 THEN 'Yes' ELSE 'No' END AS basket.Dispatched
```
I get the error
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '.Dispatched FROM orders JOIN OrdersBasketJoin ON orders.OrdersID = Ord' at line 12
```
From what I've researched this is pretty much as simple of a CASE WHEN you can do, and the syntax is correct I believe.
Not sure if it's just a visual bug, but "END" in the CASE doesn't light up as a recognised keyword, whereas JOIN, ON, LEFT etc. all light up; END doesn't, no matter where I put it.
Any and all help is much appreciated -Tom | You are missing with `,` after `address.Country` therefore you are getting syntax error try this one
```
SELECT
orders.OrdersID,
stock.ItemName,
basket.Quantity,
customer.FirstName,
customer.LastName,
address.AddressLine1,
address.AddressLine2,
address.TownOrCity,
address.Postcode,
address.Country,
(CASE WHEN basket.Dispatched = 1 THEN 'Yes' ELSE 'No' END) AS `basket.Dispatched`
FROM orders
JOIN OrdersBasketJoin ON orders.OrdersID = OrdersBasketJoin.OrdersID
LEFT JOIN basket ON OrdersBasketJoin.BasketID = basket.BasketID
JOIN customer ON orders.CustomerID = customer.CustomerID
JOIN address ON orders.DeliveryAddress = address.AddressID
JOIN stock ON basket.StockID = stock.StockID
ORDER BY `customer`.`CustomerID` ASC
LIMIT 0 , 30
``` | Put backticks:
```
AS `basket.Dispatched`
``` | SQL CASE WHEN issue, Query not running at all | [
"",
"mysql",
"sql",
"case",
"case-when",
""
] |
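To illustrate why quoting the alias fixes the #1064 error, here is a cut-down, hypothetical version of the query in Python's `sqlite3` module (SQLite also accepts MySQL-style backtick quoting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE basket (BasketID INTEGER, Dispatched INTEGER)")
conn.executemany("INSERT INTO basket VALUES (?, ?)", [(1, 1), (2, 0)])

# Unquoted, the dot in basket.Dispatched is parsed as table.column;
# backticks turn the whole thing into a single alias identifier.
rows = conn.execute("""
    SELECT BasketID,
           CASE WHEN Dispatched = 1 THEN 'Yes' ELSE 'No' END AS `basket.Dispatched`
    FROM basket
    ORDER BY BasketID
""").fetchall()
print(rows)  # [(1, 'Yes'), (2, 'No')]
```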
I'm using SQL Server. I find myself doing complex queries in the WHERE clause with the following syntax:
```
SELECT ..
WHERE StudentID IS NULL OR StudentID NOT IN (SELECT StudentID from Students)
```
I was wondering if there's a better approach / cleaner way to replace it, because this is a small example of a bigger query I'm doing which includes multiple conditions like that.
As you can see, for a specific column I'm trying to filter the rows whose value is null or not a valid id.
**EDIT**
**Courses**:
```
|CourseID | StudentID | StudentID2|
|-----------------------------------|
| 1 | 100 | NULL |
| 2 | NULL | 200 |
| 3 | 1 | 1 |
```
**Students**
```
|StudentID | Name |
|--------------------
| 1 | A |
| 2 | B |
| 3 | C |
```
Query:
```
SELECT CourseID
FROM Courses
WHERE
StudentID IS NULL OR StudentID NOT IN (SELECT * FROM Students)
OR StudentID2 IS NULL OR StudentID2 NOT IN (SELECT * FROM Students)
```
Result:
```
| CourseID |
|-----------|
| 1 |
| 2 |
```
As you can see, courses 1 and 2 have invalid students. | Alain was close, except the studentID2 column is associated with the courses table. Additionally, this joins each student ID column to its own instance of the students table, and the final WHERE tests whether EITHER of the student IDs fails, so even if StudentID is valid but StudentID2 fails, it will still capture the course as you intend.
```
SELECT
C.CourseID
FROM
Courses C
LEFT JOIN Students S
ON C.StudentId = S.StudentId
LEFT JOIN Students S2
        ON C.StudentId2 = S2.StudentId
WHERE
S.StudentId IS NULL
OR S2.StudentID IS NULL
``` | This is not a sure shot, but in my experience this performs better than the one in the question:
```
SELECT CourseID FROM Courses
WHERE NOT EXISTS (SELECT 1 FROM Students WHERE Students.StudentID = COALESCE(Courses.StudentID, -1));
```
Also Create an index on StudentId in the Students Table.
And if your data model supports it, create a primary key / foreign key relationship between the 2 tables. That way you definitely avoid invalid values in the courses table.
After your update:
```
SELECT CourseID FROM Courses
WHERE NOT EXISTS (SELECT 1 FROM Students WHERE Students.StudentID = COALESCE(Courses.StudentID, -1) OR Students.StudentID = COALESCE(Courses.StudentID2, -1));
``` | Optimizing WHERE clause SQL query | [
"",
"sql",
"sql-server",
"where-clause",
""
] |
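Running the accepted double LEFT JOIN anti-join against the question's sample data (sketched with Python's `sqlite3` module in place of SQL Server) reproduces the expected result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Courses (CourseID INTEGER, StudentID INTEGER, StudentID2 INTEGER);
CREATE TABLE Students (StudentID INTEGER, Name TEXT);
INSERT INTO Courses VALUES (1, 100, NULL), (2, NULL, 200), (3, 1, 1);
INSERT INTO Students VALUES (1, 'A'), (2, 'B'), (3, 'C');
""")

# One LEFT JOIN per student column; a NULL on a joined side means that
# column is either NULL or points at a student that does not exist.
rows = conn.execute("""
    SELECT C.CourseID
    FROM Courses C
    LEFT JOIN Students S  ON C.StudentID  = S.StudentID
    LEFT JOIN Students S2 ON C.StudentID2 = S2.StudentID
    WHERE S.StudentID IS NULL OR S2.StudentID IS NULL
    ORDER BY C.CourseID
""").fetchall()
print(rows)  # [(1,), (2,)]
```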
I have a MySQL query which I'm running to return users and a count of their corresponding training results from an additional table, typically these are being displayed in the app front end 50 results at a time, however with this many results the query has become very slow.
Things are infinitely better if I append the data using PHP after the MySQL query has run, but this feels horrible and dirty, and it also prevents me from being able to sort on the result column, i.e. I would like to be able to sort users by the number of results they have. :)
Here's my current SQL query in all its glory:
```
SELECT DISTINCT users.email,
(SELECT COUNT(r.result_id)
FROM programmes_results r
WHERE ((r.client_id=users.client_id) AND (r.email=users.email))
) as results
FROM users
INNER JOIN users_groups ON users_groups.user_id=users.id
WHERE users.client_id='130' AND users_groups.group_id IN (5)
ORDER BY email asc
LIMIT 10
```
This takes in the region of 4 seconds to execute which is obviously totally pants. I'm sure there has to be an alternative way to return this extra data without incurring such a performance penalty but I'm at the bounds of my knowledge here.
Any pointers greatly appreciated. :) | Try something like
```
SELECT users.email, COUNT(r.result_id) as results
FROM users
INNER JOIN users_groups ON users_groups.user_id=users.id
INNER JOIN programmes_results r on ((r.client_id=users.client_id) AND (r.email=users.email))
WHERE users.client_id='130' AND users_groups.group_id IN (5)
group by users.email
ORDER BY users.email asc
LIMIT 10
``` | I suspect the problem is you don't have the right index on programmes_results. You need a single index that covers both client_id and email; having two separate indexes won't help very much, because the engine will only be able to use one of them, say client_id, and will then have to scan all matching rows for ones that have the right email. Try:
```
CREATE INDEX id_email ON programmes_results (client_id, email)
```
and then run your query again. | MySQL subquery to count user results in another table is sloooooooow | [
"",
"mysql",
"sql",
"subquery",
""
] |
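A simplified sketch of the join-plus-GROUP BY rewrite (the `users_groups` join is omitted for brevity; tables and data are invented), using Python's `sqlite3` module in place of MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, email TEXT, client_id INTEGER);
CREATE TABLE programmes_results (result_id INTEGER, client_id INTEGER, email TEXT);
INSERT INTO users VALUES (1, 'a@x.com', 130), (2, 'b@x.com', 130);
INSERT INTO programmes_results VALUES
    (1, 130, 'a@x.com'), (2, 130, 'a@x.com'), (3, 130, 'b@x.com');
""")

# One join + GROUP BY aggregates in a single pass, instead of running
# the correlated COUNT subquery once per user row.
rows = conn.execute("""
    SELECT u.email, COUNT(r.result_id)
    FROM users u
    JOIN programmes_results r
      ON r.client_id = u.client_id AND r.email = u.email
    WHERE u.client_id = 130
    GROUP BY u.email
    ORDER BY u.email
""").fetchall()
print(rows)  # [('a@x.com', 2), ('b@x.com', 1)]
```

Note the INNER JOIN drops users with zero results; switch it to a LEFT JOIN if those should appear with a count of 0, which also keeps the results column sortable.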
I have a select result like this:
```
from_loc | to_loc |
-------------------------
A | B
------------------------
B | C
------------------------
B | A
------------------------
```
How can I eliminate duplicates from the table, where an occurrence of both **A to B** and **B to A** counts as a duplicate?
I'm trying to create a result like this; after trying a couple of ways, I couldn't solve this problem...
```
from_loc | to_loc |
-------------------------
A | B
------------------------
B | C
------------------------
```
Can anyone give me a hint or reference on how I can achieve this kind of result? | I haven't tested this solution, but it could perform better (fewer logical reads):
```
DECLARE @MyTable TABLE
(
from_loc VARCHAR(100) NOT NULL,
to_loc VARCHAR(100) NOT NULL
);
INSERT @MyTable (from_loc, to_loc) VALUES ('A', 'B');
INSERT @MyTable (from_loc, to_loc) VALUES ('B', 'C');
INSERT @MyTable (from_loc, to_loc) VALUES ('B', 'A');
SELECT DISTINCT src.from_loc_new, src.to_loc_new
FROM
(
SELECT CASE WHEN x.from_loc <= x.to_loc THEN x.from_loc ELSE x.to_loc END AS from_loc_new,
CASE WHEN x.from_loc <= x.to_loc THEN x.to_loc ELSE x.from_loc END AS to_loc_new
FROM @MyTable x
) src
-- You could also test these query hints to see if there is a better performance
-- OPTION (HASH GROUP)
-- or
-- OPTION (ORDER GROUP);
``` | Perhaps with this `CASE` in `ROW_NUMBER`:
```
WITH CTE AS(
SELECT from_loc, to_loc,
rn = row_Number() Over (Partition By CASE WHEN from_loc > to_loc
Then to_loc + '|' + from_loc
Else from_loc + '|' + to_loc END
Order By from_loc, to_loc)
FROM dbo.TableName
)
SELECT from_loc, to_loc FROM cte WHERE rn = 1
```
`Demo` | Eliminating duplicate combinations from result | [
"",
"sql",
"sql-server",
""
] |
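The pair-normalisation trick from the accepted answer, checked against the question's data with Python's `sqlite3` module (standing in for SQL Server; `routes` is a hypothetical table name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE routes (from_loc TEXT, to_loc TEXT)")
conn.executemany("INSERT INTO routes VALUES (?, ?)",
                 [("A", "B"), ("B", "C"), ("B", "A")])

# Put the smaller value first in every pair, so A->B and B->A become
# the same row, then DISTINCT collapses them.
rows = conn.execute("""
    SELECT DISTINCT
           CASE WHEN from_loc <= to_loc THEN from_loc ELSE to_loc END AS a,
           CASE WHEN from_loc <= to_loc THEN to_loc ELSE from_loc END AS b
    FROM routes
    ORDER BY a, b
""").fetchall()
print(rows)  # [('A', 'B'), ('B', 'C')]
```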