Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
Having the following two tables in a MySQL Database:
```
USER
- ID
- Name
GAME1
- ID
- UserID
- Score
- Rank
```
This is what I need in a view:
```
- User.ID
- User.Name
- Game1.ID as MaxScoreGameID
- Max(Game1.Score) as MaxScore
- Game1.ID as BestRankGameID
- Min(Game1.Rank) as BestRank
```
I got it working without GameIDs, but I also need the IDs of the games where `MaxScore` and `BestRank` occurred.
I searched the web for some information and tried `GROUP_CONCAT`, `HAVING`, ... but I could not get the results I need.
**edit**
As a result of the query I expect one row per user with the `MaxScore` and `BestRank` and the IDs of the games where these were reached.
If a User has the same score (which is best) in more than 1 game, I only want one of these in the user row. | ```
SELECT u.ID
, u.Name
, (SELECT sub.ID from Game1 as sub where sub.UserID = u.ID ORDER BY sub.Score desc LIMIT 1) as MaxScoreGameID
, Max(Game1.Score) as MaxScore
, (SELECT sub2.ID from Game1 as sub2 where sub2.UserID = u.ID ORDER BY Rank asc LIMIT 1) as BestRankGameID
, Min(Game1.Rank) as BestRank
FROM [User] u
JOIN Game1 on Game1.UserID = u.ID GROUP BY u.ID, u.Name
```
The idea is to use subselects to find the max score and use that score to get the Game1.ID. The same idea is used for Game1.Rank. | You could use a filtering join. The example is just for score, but you can add a second filtering join for rank.
```
select u.name
, g1.ID as MaxScoreGameID
, g1.Score as MaxScore
from User u
join (
select UserID
, max(Score) as maxScore
from Game1
group by
UserID
) as MaxScorePerUser
on MaxScorePerUser.UserID = u.ID
join Game1 g1
on u.ID = g1.UserID
and MaxScorePerUser.MaxScore = g1.Score
``` | Query to get identifier and aggregate function value of a row | [
"",
"mysql",
"sql",
"group-by",
"aggregate",
"greatest-n-per-group",
""
] |
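The accepted correlated-subquery pattern above is easy to sanity-check with an in-memory SQLite database (a sketch only: the sample rows and names like `alice`/`bob` are made up, and `Rank` is quoted since it can clash with window-function keywords):

```python
import sqlite3

# Toy schema mirroring the question; data is invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE User  (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Game1 (ID INTEGER PRIMARY KEY, UserID INTEGER, Score INTEGER, "Rank" INTEGER);
INSERT INTO User VALUES (1, 'alice'), (2, 'bob');
INSERT INTO Game1 VALUES
  (10, 1, 50, 3), (11, 1, 90, 1),   -- alice's games
  (12, 2, 70, 2), (13, 2, 40, 5);   -- bob's games
""")

# Correlated subqueries fetch the game ID belonging to the max score / best rank.
rows = con.execute("""
SELECT u.ID, u.Name,
       (SELECT g.ID FROM Game1 g WHERE g.UserID = u.ID
        ORDER BY g.Score DESC LIMIT 1)   AS MaxScoreGameID,
       MAX(Game1.Score)                  AS MaxScore,
       (SELECT g.ID FROM Game1 g WHERE g.UserID = u.ID
        ORDER BY g."Rank" ASC LIMIT 1)   AS BestRankGameID,
       MIN(Game1."Rank")                 AS BestRank
FROM User u
JOIN Game1 ON Game1.UserID = u.ID
GROUP BY u.ID, u.Name
ORDER BY u.ID
""").fetchall()
```

Each user comes back as one row carrying both extreme values and the IDs of the games where they occurred.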
I've got a single table DB with 100K rows. There are about 30 columns and 28 of them are varchars / tiny text and one of them is an int primary key and one of them is a blob.
My question, is in terms of performance, would it be better to separate the blob from the rest of the table and store them in their own table with foreign key constraint to the primary id?
The table will eventually be turned into a sqlite persistent store for iOS core data and a lot of the searching / filtering will be done based on the NSPredicate for the lighter varchar columns.
Sorry if this is too subjective, but I'm thinking there is a recommended way.
Thanks! | This totally depends on data usage.
If you need the data every time you query the table, there is no difference in having a separate table for it (as long as the blob data is unique in each row - that is, as long as the database is normalized).
If you don't need the blob data but only metadata from other columns, there may be a speed bonus when querying if the blob has its own table. Querying the blob data is slower though, as you need to query both tables.
The USUAL way is not to store any blob data inside the database (at least not huge data), but to store the binary data in files and have the file path inside the database instead. This is recommended, as binary data most likely doesn't benefit from being inside a DBMS (not indexable, sortable, groupable, ...), so there is no drawback to storing it in files, while the database isn't optimized for binary data (because, again, it can't do much with it anyway). | If you do `SELECT * FROM table` (which you shouldn't if you don't actually need the BLOB field) then yes, the query will be faster because in that case pages with BLOB won't be touched.
If you do frequent `SELECT f1, f2, f3 FROM table` (all fields are non-BLOBs) then yes, storing BLOBS in a separate table will make the query faster because of the same reason - MySQL will have to read less pages.
If however the BLOB is selected frequently then it makes no sense to keep it separately. | MySQL DB normalization | [
"",
"mysql",
"ios",
"sql",
"sqlite",
"core-data",
""
] |
```
ID ID_A Status
175 473 2
174 473 1
173 455 2
170 412 2
169 397 1
168 393 2
173 391 2
```
That's my example table. As a result I want it to display only entries with Status=1, grouped by ID\_A. It must not contain results with Status=2!
Result should look like this:
```
ID_A Status
397 1
```
My problem is that there can be two similar ID\_A entries. I don't know if that's easily doable with COUNT or DISTINCT? I'm somehow not getting it at the moment. Thanks in advance! | This is sometimes called an *exclusion join*.
You do an outer join to try to find the row that would invalidate your condition. Where there is no such row, the outer join will put NULL into all the columns of the joined table, and then you have a match.
```
SELECT t1.ID_A, t1.Status
FROM exampletable AS t1
LEFT OUTER JOIN exampletable AS t2
ON t1.ID_A = t2.ID_A AND t2.Status = 2
WHERE t1.Status = 1
AND t2.ID IS NULL;
```
---
Re comment from @Strawberry:
A US patent from 2001, ["Optimizing an exclusion join operation using a bitmap index structure"](https://www.google.com/patents/US6957210) defines an exclusion join:
> An exclusion join operation selects rows in a first table having values in specified columns where the values cannot be found in a specified column of a second table. A database system may execute such a query by excluding entries in the first table that match entries in the second table.
That patent also cites a paper from 1993, ["Parallel implementations of exclusion joins"](http://www.computer.org/csdl/proceedings/spdp/1993/4222/00/0395458-abs.html)
I assume the term also predates that paper. | You can do this with an aggregation and a `having` clause:
```
select id_a, group_concat(status) as statuses
from table t
group by id_a
having sum(status = 2) = 0;
``` | SQL: only show entry if there is no other entry with special different value in same table | [
"",
"mysql",
"sql",
"select",
"count",
"distinct",
""
] |
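The exclusion join from the accepted answer can be verified against the question's own sample data with an in-memory SQLite database (an illustrative sketch; MySQL behaves the same for this query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE exampletable (ID INTEGER, ID_A INTEGER, Status INTEGER);
INSERT INTO exampletable VALUES
  (175, 473, 2), (174, 473, 1), (173, 455, 2), (170, 412, 2),
  (169, 397, 1), (168, 393, 2), (173, 391, 2);
""")

# The LEFT JOIN tries to find a Status=2 row for the same ID_A;
# "t2.ID IS NULL" keeps only the ID_A groups where no such row exists.
rows = con.execute("""
SELECT t1.ID_A, t1.Status
FROM exampletable AS t1
LEFT OUTER JOIN exampletable AS t2
  ON t1.ID_A = t2.ID_A AND t2.Status = 2
WHERE t1.Status = 1
  AND t2.ID IS NULL
""").fetchall()
```

Only ID\_A 397 survives, exactly as the question's expected result shows: 473 also has a Status=2 row and is therefore excluded.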
I currently have two tables in my database, one for the values (using Foreign Keys) and one for their translations (same table has translation for more than one attribute type).
I am trying to perform a single inner join from the Values table to the Translations table and translate two or more fields.
I know this is a bad database design, but this database is used only to generate a single report that doesn't change.
---
Translation table
```
**id, attribute, value, name**
1 , office , 2 , office1
2 , office , 3 , office2
3 , office , 4 , office3
4 , office , 5 , office4
5 , segment , 31 , segment1
6 , segment , 32 , segment2
7 , segment , 33 , segment3
8 , segment , 34 , segment4
```
---
Values table
```
**office, segment, sum**
2 , 31 , 1234
3 , 31 , 4321
5 , 34 , 9813
2 , 33 , 8371
```
---
The result should be like this:
Results table
```
**office , segment , sum**
office1 , segment1 , 1234
office2 , segment1 , 4321
office4 , segment4 , 9813
office1 , segment3 , 8371
```
Is it possible? if yes, how?
Thank you. | You can do this by joining twice to the `translation` table:
```
SELECT toff.name as office, tseg.name as segment, v.`sum`
from values v join
translation toff
on v.office = toff.value and toff.attribute = 'office' join
translation tseg
on v.segment = tseg.value and tseg.attribute = 'segment';
```
I assume the attribute names are important for matching purposes. | Looks like you want to join the `Translation` table back to itself through the `Values` table:
```
SELECT o.name office, s.name segment, v.sum
FROM Translation o
INNER JOIN Values v ON o.value = v.office
INNER JOIN Translation s on v.segment = s.value
``` | Select inner join with multiple fields from same table | [
"",
"sql",
""
] |
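The chosen double-join pattern can be reproduced with the question's sample data in SQLite (a sketch; `values` and `sum` are quoted here because they collide with SQL keywords in most engines):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE translation (id INTEGER, attribute TEXT, value INTEGER, name TEXT);
CREATE TABLE "values" (office INTEGER, segment INTEGER, "sum" INTEGER);
INSERT INTO translation VALUES
  (1,'office',2,'office1'), (2,'office',3,'office2'),
  (3,'office',4,'office3'), (4,'office',5,'office4'),
  (5,'segment',31,'segment1'), (6,'segment',32,'segment2'),
  (7,'segment',33,'segment3'), (8,'segment',34,'segment4');
INSERT INTO "values" VALUES (2,31,1234), (3,31,4321), (5,34,9813), (2,33,8371);
""")

# Join the translation table twice, once per attribute being translated.
rows = con.execute("""
SELECT toff.name, tseg.name, v."sum"
FROM "values" v
JOIN translation toff ON v.office  = toff.value AND toff.attribute = 'office'
JOIN translation tseg ON v.segment = tseg.value AND tseg.attribute = 'segment'
ORDER BY v.rowid
""").fetchall()
```

The output matches the question's expected results table row for row.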
We're trying to find data errors in a MySQL database and I can't figure out a way to handle this without thrashing the database.
Here's a mockup of the data:
```
Table: EXAMPLE
+-----+------+-------+
| ID | PID | YEAR |
+-----+------+-------+
| | | |
| 001 | A | 2014 |
| | | |
| 002 | B | 2014 |
| | | |
| 003 | A | 2014 |
| | | |
| 004 | A | 2009 |
| | | |
| 005 | B | 2014 |
| | | |
| 006 | C | 2014 |
| | | |
| 007 | C | 2014 |
+-----+------+-------+
```
The goal is to return all rows where PID and YEAR are out of sync with one another. If there are three rows where the PID matches but the YEAR is wrong in any of them, we need to return all rows.
In the above example data, the following would be returned:
```
+-----+------+-------+
| ID | PID | YEAR |
+-----+------+-------+
| | | |
| 001 | A | 2014 |
| | | |
| 003 | A | 2014 |
| | | |
| 004 | A | 2009 |
+-----+------+-------+
```
The query I've worked up thus far looks like this:
```
SELECT EXAMPLE.ID, EXAMPLE.PID, EXAMPLE.YEAR
FROM EXAMPLE
INNER JOIN
(SELECT PID FROM EXAMPLE GROUP BY PID HAVING COUNT(PID) > 1) tmpTbl
ON EXAMPLE.PID = tmpTbl.PID
```
The problem is that in my real dataset, the subquery returns 26k of the 255k rows in the database. The expected result is less "Here are your results" and more "Watch my processing animation".
Am I on the right track? Is there a more efficient method that I am overlooking?
Many thanks in advance! | You can do this with an `exists` subquery instead:
```
select e.*
from example e
where exists (select 1
from example e2
where e2.pid = e.pid and
e2.year <> e.year
);
```
To have this run faster, create an index on `example(pid, year)`.
If you just want all the years and can take them in one row, then this might suffice:
```
SELECT PID, group_concat(year) as years
FROM EXAMPLE
where PID
GROUP BY PID
HAVING COUNT(DISTINCT year) > 1 ;
``` | ```
SELECT EXAMPLE.*
FROM
EXAMPLE INNER JOIN (SELECT PID
FROM EXAMPLE
GROUP BY PID
HAVING COUNT(DISTINCT Year)>1) Dup
ON Example.PID = Dup.PID
```
Please see fiddle [here](http://sqlfiddle.com/#!2/68e24/1). | Return All MySQL Rows Where colA is Duplicated by colB Isn't | [
"",
"mysql",
"sql",
""
] |
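The accepted `EXISTS` approach can be checked against the question's mockup data with SQLite (an illustrative sketch; the same query runs unchanged on MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE example (id TEXT, pid TEXT, year INTEGER);
INSERT INTO example VALUES
  ('001','A',2014), ('002','B',2014), ('003','A',2014),
  ('004','A',2009), ('005','B',2014), ('006','C',2014), ('007','C',2014);
""")

# A row qualifies if any other row shares its pid but has a different year.
rows = con.execute("""
SELECT e.id, e.pid, e.year
FROM example e
WHERE EXISTS (SELECT 1 FROM example e2
              WHERE e2.pid = e.pid AND e2.year <> e.year)
ORDER BY e.id
""").fetchall()
```

Only the three PID `A` rows come back, matching the expected result in the question.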
I have a table with the columns
FacilityID, servername
with the facilityID being the primary key and the servername being the server that the data for that facility is located on.
When we add a new facility, I want to be able to select the servername used the least number of times, but I am not sure how to do that.
I am thinking it will use the count function, but I am unsure of the syntax involved.
Any help will be appreciated. | It depends on the database; in MSSQL I would do:
```
select top 1 servername, count(servername) over (partition by servername) as Cardinality from tablename order by Cardinality asc
``` | You could use COUNT and then MIN on the outside.
```
Select MIN (x.myCount )
FROM (
select COUNT (servername) as myCount
from ...
GROUP BY servername) as x
``` | How to Select the value duplicated the least number of times in SQL | [
"",
"sql",
""
] |
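Neither answer above is fully portable; a common portable pattern for "least-used value" is to group, order by the count, and take the first row (`LIMIT 1` here, `TOP 1` in SQL Server). A hypothetical sketch with invented server names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE facility (FacilityID INTEGER PRIMARY KEY, servername TEXT);
INSERT INTO facility (servername) VALUES
  ('srv1'), ('srv1'), ('srv2'), ('srv1'), ('srv2'), ('srv3');
""")

# Count facilities per server and keep the server with the smallest count;
# the servername tiebreaker makes the result deterministic on ties.
least_used = con.execute("""
SELECT servername, COUNT(*) AS cnt
FROM facility
GROUP BY servername
ORDER BY cnt ASC, servername
LIMIT 1
""").fetchone()
```

`least_used` holds the candidate server for the next new facility.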
OK, this is a homework exercise. I've tried looking everything up and can't figure out the error.
The schema is as follows:
```
course{
course_id
title
dept
}
section{
course_id
semester
year
}
```
The course table has all the offered courses.
The section table has what courses are offered, past and future.
I'm to list how many times each course has been offered. Even if it has never been offered before. (Including zeros).
So, what I have so far are subqueries that do each part separately, but I'm failing miserably at merging the two. (That's where I need your help.) (Side note: I'm new to SQL, sorry if a query looks painfully inefficient.)
So I know how to get all the courses and sort them.
```
SELECT course_id FROM course ORDER BY course_id;
```
I also know how to count all the number of times a course has been offered.
```
SELECT course_id, COUNT( course_id )FROM section GROUP BY course_id;
```
So I thought, use the two sub-queries to make the result!
```
SELECT * FROM
(
SELECT course_id, COUNT( course_id ) FROM section GROUP BY course_id
) AS T1
NATURAL JOIN
(
SELECT course_id FROM course ORDER BY course_id
) as T2;
```
But that ignores the courses that have a count of 0. I've tried using IFNULL but I may be using it wrong. | You can do *(from most to least preferable)*
```
SELECT c.course_id, COUNT(s.course_id) total
FROM course c LEFT JOIN section s
ON c.course_id = s.course_id
GROUP BY c.course_id;
```
or
```
SELECT c.course_id, COALESCE(total, 0) total
FROM course c LEFT JOIN
(
SELECT course_id, COUNT(*) total
FROM section
GROUP BY course_id
) s
ON c.course_id = s.course_id;
```
or
```
SELECT course_id,
(
SELECT COUNT(*)
FROM section
WHERE course_id = c.course_id
) total
FROM course c;
```
**Note:** in the very first query you have to count `course_id`s from the `section` table rather than from `course` when using `LEFT JOIN`.
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/88379/5)** demo | You just want to base your query off of the sections, and LEFT JOIN which tallies up even sections that have not yet had courses on the right side of the join
```
SELECT section.course_id, COUNT(course.id) FROM section
LEFT JOIN course ON course.course_id = section.course_id
GROUP BY section.course_id
``` | Basic mySQL - Don't know how to show all results including 0 | [
"",
"mysql",
"sql",
"database",
""
] |
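The preferred `LEFT JOIN` form of the accepted answer can be demonstrated in SQLite (a sketch with invented course IDs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE course  (course_id TEXT);
CREATE TABLE section (course_id TEXT, semester TEXT, year INTEGER);
INSERT INTO course VALUES ('CS101'), ('CS102'), ('CS103');  -- CS103 never offered
INSERT INTO section VALUES
  ('CS101','Fall',2013), ('CS101','Spring',2014), ('CS102','Fall',2013);
""")

# COUNT(s.course_id) ignores the NULLs produced by the outer join,
# so never-offered courses show up with a count of 0.
rows = con.execute("""
SELECT c.course_id, COUNT(s.course_id) AS total
FROM course c LEFT JOIN section s ON c.course_id = s.course_id
GROUP BY c.course_id
ORDER BY c.course_id
""").fetchall()
```

Counting `c.course_id` instead would wrongly report 1 for the never-offered course, which is exactly the point of the answer's note.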
I have a table that has labels and some codes next to them:
```
id | label | code
1 | foo | 21
2 | foo | 33
3 | foo | 33
4 | foo | 13
5 | foo | 13
6 | foo | 33
7 | bar | 13
8 | bar | 13
9 | bar | 33
10 | smt | 33
11 | smt | 13
```
I would need a query that selects top frequencies of the 'code' for every 'label'. Here is what I have so far:
```
SELECT count(*) frequency, label, code
FROM myTable
GROUP BY label, code
```
This gives me:
```
frequency | label | code
1 | foo | 21
3 | foo | 33
2 | foo | 13
2 | bar | 13
1 | bar | 33
1 | smt | 33
1 | smt | 13
```
What I would like though is :
```
frequency | label | code
3 | foo | 33
2 | bar | 13
1 | smt | 33
1 | smt | 13
```
As you can see only top frequencies are selected for 'foo' and 'bar'. Since 'smt' does not have a max frequency as such (all are the same), all rows are included.
I do not even have an idea where to start. Can anyone help? Thanks. (I am using MSSQL by the way.) | A similar solution to @TechDo's, but with one subquery
```
SELECT frequency,label,code FROM
(
SELECT
count(*) AS frequency
,MAX(COUNT(*)) OVER (PARTITION BY label) AS Rnk
,label
,code
FROM myTable
GROUP BY label, code
) x
WHERE frequency=Rnk
ORDER BY frequency DESC
```
SQLFiddle [here](http://sqlfiddle.com/#!6/39465/12 "SQLFiddle") | Please try:
```
SELECT * FROM(
SELECT *,
MAX(frequency) OVER(PARTITION BY label) Col1
FROM(
SELECT count(*) frequency, label, code
FROM myTable
GROUP BY label, code
)x
)xx
WHERE frequency=Col1
``` | SQL select top counts for grouped rows | [
"",
"sql",
"sql-server",
"group-by",
""
] |
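Although the question targets MSSQL, the nested-window pattern from the second answer also runs on SQLite 3.25+ (window-function support), which makes a quick reproduction with the question's data possible (a sketch, not MSSQL output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE myTable (id INTEGER, label TEXT, code INTEGER);
INSERT INTO myTable VALUES
  (1,'foo',21), (2,'foo',33), (3,'foo',33), (4,'foo',13), (5,'foo',13),
  (6,'foo',33), (7,'bar',13), (8,'bar',13), (9,'bar',33),
  (10,'smt',33), (11,'smt',13);
""")

# Inner query computes frequencies; the window MAX marks each label's top
# frequency; the outer filter keeps only rows that reach it (ties included).
rows = con.execute("""
SELECT frequency, label, code FROM (
  SELECT frequency,
         MAX(frequency) OVER (PARTITION BY label) AS rnk,
         label, code
  FROM (SELECT COUNT(*) AS frequency, label, code
        FROM myTable GROUP BY label, code)
)
WHERE frequency = rnk
ORDER BY frequency DESC, label, code
""").fetchall()
```

Both `smt` rows survive because they tie for the top frequency, matching the desired output in the question.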
When I run an *SQL query* such as:
```
Select
F1 as name,
F2 as phone
from Table where condition = true
```
If `F2 = 0`
I need that field to be `No Phone Number` instead of `null` or `0` | You can do that with a CASE statement:
```
SELECT
CASE WHEN F2 = '0' THEN '(No Phone Number)' ELSE F2 END AS Phone
...
``` | ```
SELECT
CASE WHEN F2 = '0' OR F2 IS NULL THEN '(No Phone Number)' ELSE F2 END AS Phone
``` | SQL Query put a word in a field instead of 0? | [
"",
"sql",
"database",
""
] |
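The rejected answer's variant (which also handles `NULL`) is arguably the more complete one; a quick illustrative check in SQLite with invented contact data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE contacts (F1 TEXT, F2 TEXT);
INSERT INTO contacts VALUES ('Ann','555-1234'), ('Bob','0'), ('Cid',NULL);
""")

# CASE substitutes a placeholder for both '0' and NULL phone values.
rows = con.execute("""
SELECT F1 AS name,
       CASE WHEN F2 = '0' OR F2 IS NULL
            THEN '(No Phone Number)' ELSE F2 END AS phone
FROM contacts
ORDER BY F1
""").fetchall()
```

Without the `IS NULL` branch, `Cid` would come back with a `NULL` phone rather than the placeholder.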
There are countries in MySQL table:
```
id | title
1 | USA
2 | Spain
3 | Italy
4 | Canada
```
I need to select Italy on the top of list and other countries sorted by title below.
But 'order by' doesn't work.
```
(SELECT * FROM countries WHERE id = 3) UNION (SELECT * FROM countries WHERE id != 3 ORDER BY title)
``` | First sort your data based on whether it is Italy or not, getting Italy first. Then sort based on the title.
```
SELECT * FROM countries
ORDER BY title='Italy' DESC, title
```
(The only trick you have to know -- or experiment with -- is that `FALSE` comes before `TRUE`, and hence the `DESC` in the code. I guess that makes sense if you convert them to 0 < 1.) | you can try this
```
SELECT * FROM countries
ORDER BY case when id !=3 then `title` end asc ,
case when id =3 then `title` end asc ;
``` | How to sort in a different way | [
"",
"mysql",
"sql",
""
] |
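The boolean-sort trick from the accepted answer works because the comparison evaluates to 0 or 1; SQLite behaves like MySQL here, so the result is easy to check (a sketch with the question's data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE countries (id INTEGER, title TEXT);
INSERT INTO countries VALUES (1,'USA'), (2,'Spain'), (3,'Italy'), (4,'Canada');
""")

# (title = 'Italy') is 1 only for Italy; DESC puts that 1 first,
# and the second key sorts the remaining rows alphabetically.
titles = [r[0] for r in con.execute("""
SELECT title FROM countries
ORDER BY (title = 'Italy') DESC, title
""")]
```

Italy lands on top and everything else follows in title order, with no `UNION` needed.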
I am searching for a way to emulate the "rollup" available in other SQL-based DBMSs. To be clear, I know how to get running subtotals and this is **not** what I am after. I also know that I can probably do this programmatically. However, if it is possible to do with one (or a couple of) SQL statements I would rather do that.
As an example, for a table like this: product\_type, order\_amount, date I would seek a report which shows all the data grouped by product type and then by date, with a row showing the subtotal of order\_amount on each change in product type and date.
I am constrained to using SQLite on Android. I know enough SQL to probably hurt a DB, but clearly not enough to emulate rollup, and at this point I'm thinking it is not possible.
Any insights (even if 'no its not possible') appreciated. | Here is a way that works to emulate rollup under sqlite. I stumbled upon an answer from 2007 on a database [forum](http://www.daniweb.com/web-development/databases/ms-sql/threads/255430/results-with-subtotals-and-grandtotal) asking about subtotals and grand totals. I'll summarize here with my simplified case.
I have a table stocks which contains Ticker\_Symbol(Text), Underlying(Text), and Quantity(Integer) among other fields. Those suffice for this example.
Rollup can be emulated by using the following call from sqlite:
```
select Ticker_Symbol,Underlying,Quantity from (
select '1' orderCol, Ticker_Symbol,Underlying,Quantity from stocks
union
select '2' orderCol, Ticker_Symbol, 'Subtotal' as Underlying, Sum(Quantity) as Quantity from stocks
group by Ticker_Symbol
union
select '99' orderCol, '_' as Ticker_Symbol, 'GrandTotal' as Underlying, sum(Quantity) as Quantity from stocks)
as t1 order by case when orderCol=99 then 1 else 0 end, Ticker_Symbol, orderCol;
```
This produces output similar to below:
```
|Ticker_Symbol |Underlying|Quantity|
|-------------------|----------|--------|
AAPL AAPL 500
AAPL AAPL 1000
AAPL AAPL 2000
AAPL Subtotal 3500
AAPL140222P00500000 AAPL 10
AAPL140222P00500000 Subtotal 10
IBM140322C00180000 IBM 25
IBM140322C00180000 Subtotal 25
R140222C00067500 R 10
R140222C00067500 Subtotal 10
VLCCF VLCCF 300
VLCCF VLCCF 2000
VLCCF Subtotal 2300
_ GrandTotal 5845
```
Unfortunately, I could not find a way to avoid using the Ticker\_Symbol. Ideally, it would be nice to just replace the current Ticker\_Symbol with 'Subtotal' (or GrandTotal) but that does not work. Also note the use of the "\_" to assure that GrandTotal does indeed show up on the last row.
I hope this helps others, and if anyone out there has a way of making it better, please add. | Had the same problem myself - emulating `subtotals` in `SQLite3` - and here's what I came to:
```
with counter(numm)as(select 1 union all select numm+1 from counter where numm<34),
str(par,node)as(select 1, numm from counter where numm in(2,5,8,11)union
select 11, numm from counter where numm in(12)union
select 12, numm from counter where numm in(13,17,20)union
select 13, numm from counter where numm in(14,15,16)union
select 17, numm from counter where numm in(18,19)union
select 2, numm from counter where numm in(3,4)union
select 20, numm from counter where numm in(21)union
select 21, numm from counter where numm in(22,23)union
select 5, numm from counter where numm in(6,7)union
select 8, numm from counter where numm in(9,10)union
select null, numm from counter where numm in(1)),
struct(par,node,clevel)as(select par,node,0 from str where par is null union all select c.par,c.node,s.clevel+1 from str c join struct s on s.node=c.par)/*struct*/,
namez(namee,node)as(select 'Grandtotal', numm from counter where numm in(1)union
select 'Subtotal1', numm from counter where numm in(2)union
select 'Subtotal2', numm from counter where numm in(5)union
select 'Subtotal3', numm from counter where numm in(8)union
select 'Subtotal4', numm from counter where numm in(11)union
select 'Subtotal5', numm from counter where numm in(12)union
select 'Subtotal6', numm from counter where numm in(13)union
select 'Subtotal7', numm from counter where numm in(17)union
select 'Subtotal8', numm from counter where numm in(20)union
select 'Subtotal9', numm from counter where numm in(21)union
select 'value10', numm from counter where numm in(18)union
select 'value11', numm from counter where numm in(19)union
select 'value12', numm from counter where numm in(22)union
select 'value2', numm from counter where numm in(4)union
select 'value3', numm from counter where numm in(6)union
select 'value4', numm from counter where numm in(7)union
select 'value5', numm from counter where numm in(9)union
select 'value6', numm from counter where numm in(10)union
select 'value7', numm from counter where numm in(14)union
select 'value8', numm from counter where numm in(15)union
select 'value9', numm from counter where numm in(16)union
select 'valueN', numm from counter where numm in(23)union
select 'vaule1', numm from counter where numm in(3)),
some_random_values(node,val)as(
select node,
case node
when 3 then 10 when 4 then 33 when 6 then 123 when 7 then 2
when 9 then 321 when 10 then 202 when 14 then 2 when 15 then 88
when 16 then 56 when 18 then 17 when 19 then 345 when 22 then 99 when 23 then 9
else 0
end from str),
sval(par,node,val)as(select s.par,s.node,a.val from str s join some_random_values a on a.node=s.node),
recur(par,node,val)as(
select * from sval where par in(select par from str group by par having(node)>1)
union all
select b.pAR,b.node,a.val+b.val
from recur a join sval b on b.node = a.par)
select s.par,s.node,substr(' ',1,s.clevel*5)||n.namee name,v.val
from struct s join namez n on n.node=s.node
join(select par,node,sum(val)val from recur group by 1,2)v on v.node=s.node
order by s.node
```
The example might look a bit complicated. The main part starts with `recur(par,node,val)`.
Runs fine on `SQLite 3.9.1`. | Sqlite: subtotals in own row aka "rollup" | [
"",
"android",
"sql",
"sqlite",
"rollup",
""
] |
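The accepted `UNION`-based rollup emulation can be reduced to a minimal runnable sketch (the ticker data here is invented, and `'~'` plays the same "sort last" role as the `_` in the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stocks (ticker TEXT, quantity INTEGER);
INSERT INTO stocks VALUES
  ('AAPL', 500), ('AAPL', 1000), ('VLCCF', 300), ('VLCCF', 2000);
""")

# Detail rows, per-ticker subtotals, and a grand total are stacked with
# UNION ALL; an ordering column pushes subtotals after their details and
# the grand total to the very end.
rows = con.execute("""
SELECT ticker, kind, quantity FROM (
  SELECT ticker, 'detail' AS kind, quantity, 0 AS ord FROM stocks
  UNION ALL
  SELECT ticker, 'subtotal', SUM(quantity), 1 FROM stocks GROUP BY ticker
  UNION ALL
  SELECT '~', 'grandtotal', SUM(quantity), 2 FROM stocks
) AS t
ORDER BY CASE WHEN ord = 2 THEN 1 ELSE 0 END, ticker, ord, quantity
""").fetchall()
```

Each ticker's details are followed by its subtotal, and the grand total closes the listing, mirroring the layout of the answer's output.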
80-90% of the time I spend in SQL Server Management Studio is spent dealing with the same 5 or 6 tables/views/stored procedures, out of the 100+ in some databases.
It would be wonderfully helpful if there were a quick way to access those few through the GUI, instead of having to scroll all around and toggle the Tables/Views/Programmability folders day in and day out.
In my mind, I'm picturing a favorites or starred tables feature or section, or maybe just a recent tables menu would be a great timesaver.
Is this a feature buried somewhere in SSMS or available in a plug-in? | I can think of two ways that may help get you to the right objects in the SSMS Object Explorer. Both are SSMS addins.
1) Use the free [SQL Search](http://www.red-gate.com/products/sql-development/sql-search/) - when you need to locate an object, press Ctrl-Alt-D, type the name of the object, arrow down until you highlight the object, and press Enter. I will suggest to the SQL Search project team the idea to have a "recently used" list of objects as I think this could be a nice addition to the tool.
2) Use [SQL Treeo](http://www.sqltreeo.com/wp/download/) to create your own custom folders in the Object Explorer tree. You can create a folder to put the objects you use most frequently, which could help avoid scrolling through a large list of irrelevant objects. | One way to do this - and I use it all the time for the same reasons you cited - is to set up **Filters** (one time activity per object type) in SSMS.
For tables, right click the **Tables** node under your database, and select **Filter** -> **Filter Settings**. Then enter your filter criteria and save that filter. Now only the tables that match that filter criteria will be visible in the object browser.
Do the same for other object types like **Stored Procedures**, **Views** etc etc. HTH
 | Is there a way to favorite or star frequently-used tables in SSMS 2012? | [
"",
"sql",
"sql-server",
"ssms",
"ssms-2012",
""
] |
I have a table which has two columns.
```
create table txns(
person varchar(255),
fruit varchar(255)
);
```
This is a log table.
I have sqlfiddle [here](http://sqlfiddle.com/#!4/e4bbd/6).
This is as far as I am able to get with the SQL query.
In essence: for every person, which is the most frequent fruit he has eaten?
I have both Oracle and MySql at my place.
In the future, it would also be deployed on hadoop (via Hive/Impala etc).
Thus a non-db centric answer would be best.
But please also do provide a DB-centric answer if that is the only option. | The following query would run in both Oracle and MySQL.
```
select k.person, k.fruit from
(
select person,fruit,count(fruit) as cnt
from txns
group by person,fruit
) k
join
(
select t.person,max(t.cnt) mxCnt
from
(
select person,fruit,count(fruit) as cnt
from txns
group by person,fruit
)t
group by t.person
) s
on s.person = k.person
and s.mxCnt = k.cnt
order by k.person
``` | [SQL Fiddle](http://sqlfiddle.com/#!4/e4bbd/118)
**Oracle 11g R2 Schema Setup**:
```
create table txns(
person varchar(255),
fruit varchar(255)
);
insert into txns
values ('alpha','apple');
insert into txns
values ('charlie','cherry');
insert into txns
values ('bravo','banana');
insert into txns
values ('alpha','apple');
insert into txns
values ('bravo','banana');
insert into txns
values ('alpha','apricot');
insert into txns
values ('bravo','berry');
```
**Query 1**:
```
with tab as (
select person, fruit,count(1) cnt,
max(count(1)) over (partition by person) m_cnt
from txns
group by person, fruit)
select person, fruit, cnt, m_cnt
from tab
where cnt = m_cnt
```
**[Results](http://sqlfiddle.com/#!4/e4bbd/118/0)**:
```
| PERSON | FRUIT | CNT | M_CNT |
|---------|--------|-----|-------|
| alpha | apple | 2 | 2 |
| bravo | banana | 2 | 2 |
| charlie | cherry | 1 | 1 |
``` | Group and choose max pair SQL | [
"",
"mysql",
"sql",
"oracle",
"hadoop",
""
] |
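The chosen portable query also runs on SQLite, which suits the asker's "non-db-centric" requirement; a quick check with the question's fiddle data (a sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE txns (person TEXT, fruit TEXT);
INSERT INTO txns VALUES
  ('alpha','apple'), ('charlie','cherry'), ('bravo','banana'),
  ('alpha','apple'), ('bravo','banana'), ('alpha','apricot'),
  ('bravo','berry');
""")

# k holds per-(person, fruit) counts; s holds each person's max count;
# joining on both keeps only the most frequent fruit(s) per person.
rows = con.execute("""
SELECT k.person, k.fruit
FROM (SELECT person, fruit, COUNT(*) AS cnt
      FROM txns GROUP BY person, fruit) k
JOIN (SELECT person, MAX(cnt) AS mxcnt
      FROM (SELECT person, fruit, COUNT(*) AS cnt
            FROM txns GROUP BY person, fruit) t
      GROUP BY person) s
  ON s.person = k.person AND s.mxcnt = k.cnt
ORDER BY k.person
""").fetchall()
```

Ties would produce one row per tied fruit, which is the usual behavior of this greatest-n-per-group pattern.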
I need to automate the database upgrade process.
So by getting the column structure of a table I need to create update/insert/create SQL statements.
My problem is that I am getting duplicate column names, as constraints are also coming in the list, which I don't need. So is there a way to exclude constraints from the result set?
I am using the query below:
```
SELECT c.NAME 'Column Name'
,t.NAME 'Data type'
,c.max_length 'Max Length'
,c.precision
,c.scale
,c.is_nullable AS 'Is Nullable'
,c.is_identity AS 'Is Identity'
,ISNULL(i.is_primary_key, 0) 'Primary Key'
,i.type_desc
FROM fblfeb12.sys.columns c
INNER JOIN fblfeb12.sys.types t ON c.system_type_id = t.system_type_id
LEFT OUTER JOIN fblfeb12.sys.index_columns ic ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
LEFT OUTER JOIN fblfeb12.sys.indexes i ON ic.object_id = i.object_id
AND ic.index_id = i.index_id
WHERE c.object_id = OBJECT_ID('table name')
```
**Result set:**
```
Pms_ID          uniqueidentifier 16 0 0 0 0 CLUSTERED
Pms_PRODMODELID uniqueidentifier 16 0 0 1 0 NONCLUSTERED
Pms_PRODMODELID uniqueidentifier 16 0 0 1 0 NONCLUSTERED
Pms_PRODMODELID uniqueidentifier 16 0 0 1 0 NONCLUSTERED
Pms_ATTRIBUTEID uniqueidentifier 16 0 0 1 0 NONCLUSTERED
Pms_ATTRIBUTEID uniqueidentifier 16 0 0 1 0 NONCLUSTERED
Pms_ATTRIBUTEID uniqueidentifier 16 0 0 1 0 NONCLUSTERED
where `PRODMODELID` and `ATTRIBUTEID` come up `3` times.
I need only the foreign key columns, but here I am getting indexes and constraints, which I don't need.
I need `column name, data-type, identity, primary key, nullable, foreign key`.
Can you provide a better solution if I am doing anything wrong? | The problem is that you are including all indexes on the column. Consider this simple table:
```
CREATE TABLE #T (ID INT NOT NULL CONSTRAINT PK_T_ID PRIMARY KEY);
CREATE INDEX IX_T_ID ON #T (ID);
```
When you run an adaption of you query:
```
SELECT c.name, i.name, i.is_primary_key
FROM tempdb.sys.columns c
LEFT OUTER JOIN tempdb.sys.index_columns ic
ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
LEFT OUTER JOIN tempdb.sys.indexes i
ON ic.object_id = i.object_id
AND ic.index_id = i.index_id
WHERE c.object_id = OBJECT_ID('tempdb..#T');
```
The column name will be duplicated for each index that includes that column (either key or non key), so the result will be:
```
name name is_primary_key
ID PK_T_ID 1
ID IX_T_ID 0
```
Since you only care about the primary key, you can apply the a filter in the join to `sys.indexes` to only return the primary keys, to do this effectively though you need to make the join between `index_columns` and `indexes` an INNER JOIN, but maintain the OUTER JOIN from `columns` to `index_columns` which involves slightly rearranging the joins, so the above would become:
```
SELECT c.name, i.name, i.is_primary_key
FROM tempdb.sys.columns c
LEFT OUTER JOIN (tempdb.sys.index_columns ic
INNER JOIN tempdb.sys.indexes i
ON ic.object_id = i.object_id
AND ic.index_id = i.index_id
AND i.is_primary_key = 1) -- ONLY PRIMARY KEYS
ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
WHERE c.object_id = OBJECT_ID('tempdb..#T');
```
This removes the duplicate result. Finally you can query `sys.foreign_key_columns` to find out if the column references another table giving a final query of:
```
SELECT c.NAME AS [Column Name]
,t.NAME AS [Data type]
,c.max_length AS [Max Length]
,c.precision
,c.scale
,c.is_nullable AS [Is Nullable]
,c.is_identity AS [Is Identity]
,ISNULL(i.is_primary_key, 0) [Primary Key]
,i.type_desc
,OBJECT_SCHEMA_NAME(fk.object_id) + '.' + OBJECT_NAME(fk.object_id) + ' (' + fk.Name + ')' AS [Foreign Key]
FROM sys.columns c
INNER JOIN sys.types t
ON c.system_type_id = t.system_type_id
LEFT OUTER JOIN (sys.index_columns ic
INNER JOIN sys.indexes i
ON ic.object_id = i.object_id
AND i.is_primary_key = 1
AND ic.index_id = i.index_id)
ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
LEFT JOIN sys.foreign_key_columns fkc
ON fkc.parent_object_id = c.object_id
AND fkc.parent_column_id = c.column_id
LEFT JOIN sys.columns fk
ON fk.object_id = fkc.referenced_object_id
AND fk.column_id = fkc.referenced_column_id
WHERE c.object_id = OBJECT_ID('table')
ORDER BY c.Column_ID;
```
*N.B. I have changed your column aliases from single quotes to brackets as using single quotes is deprecated (not to mention [easily confused with string literals](https://sqlblog.org/2012/01/23/bad-habits-to-kick-using-as-instead-of-for-column-aliases))* | What happens if you use `DISTINCT`? So:
```
SELECT distinct c.NAME...
``` | Get list of all columns with complete details (Identity, nullabel,primary key) in sql server without duplicate column | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
```
create table tbl (
id int,
comment varchar(255),
primary key (id)
);
insert into tbl (id, comment) values ('1', 'dumb,');
insert into tbl (id, comment) values ('2', 'duuumb,');
insert into tbl (id, comment) values ('3', 'dummb');
insert into tbl (id, comment) values ('4', 'duummb');
insert into tbl (id, comment) values ('5', 'very dumb person');
select comment, soundex(comment)
from tbl;
```
Result:
```
+------------------+------------------+
| comment | soundex(comment) |
+------------------+------------------+
| dumb, | D510 |
| duuumb, | D510 |
| dummb | D510 |
| duummb | D510 |
| very dumb person | V6351625 |
+------------------+------------------+
```
I want to find all rows containing 'dumb', including all typos and variations, anywhere in the field.
```
select comment
from tbl
where soundex(comment) like '%D510%'
```
This fails to get the final row #5, how can I also get that row?
If there is a better solution than soundex() that would be fine. | This will work for your particular example:
```
select comment
from tbl
where soundex(comment) like '%D510%' or comment like '%dumb%';
```
It won't find misspellings in the comment.
EDIT:
You could do something like this:
```
select comment
from tbl
where soundex(comment) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 2), ' ', -1)) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 3), ' ', -1)) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 4), ' ', -1)) = soundex('dumb') or
      soundex(substring_index(substring_index(comment, ' ', 5), ' ', -1)) = soundex('dumb');
```
A bit brute force.
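Outside MySQL, the word-by-word idea can be sanity-checked with a small, simplified Soundex implementation in Python (illustrative only, not MySQL's exact algorithm):

```python
def soundex(word):
    """Simplified Soundex: first letter plus up to three digit codes."""
    codes = {c: str(d) for d, letters in enumerate(
        ["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"], start=1) for c in letters}
    letters = [c for c in word.upper() if c.isalpha()]
    if not letters:
        return ""
    digits, prev = [], codes.get(letters[0])
    for c in letters[1:]:
        d = codes.get(c)
        if d and d != prev:
            digits.append(d)
        if c not in "HW":           # vowels reset the run; H and W do not
            prev = d
    return (letters[0] + "".join(digits) + "000")[:4]

variants = ["dumb,", "duuumb,", "dummb", "duummb"]
print({v: soundex(v) for v in variants})
# every variant collapses to 'D510', so a word-by-word comparison
# also catches 'dumb' inside 'very dumb person':
print(any(soundex(w) == soundex("dumb") for w in "very dumb person".split()))
```

This is why comparing `soundex` per word, rather than over the whole comment, finds row 5.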
The need to do this suggests that you should consider a full text index. | Can you try MySQL [REGEXP](http://dev.mysql.com/doc/refman/5.1/en/regexp.html)? It is a good solution for finding a specific word in text.
You can use **[[:<:]]** and **[[:>:]]** as word boundaries:
```
SELECT comment FROM tbl WHERE comment REGEXP '[[:<:]]dumb[[:>:]]'
``` | In MySQL how to write SQL to search for words in a field? | [
"",
"mysql",
"sql",
"soundex",
""
] |
Table `A` - column `B [DateTime]`
I need to list the users in that table, ordered by nearest upcoming birthday. Users whose birthday has already passed this year should go to the end of the list, considered for the following year.
```
select B from A (order by/where)?
```
Table Example
```
USER DATE
MARCELO 1988-04-11
RICARDO 1965-12-30
WILSON 1977-02-20
PABLO 1985-01-10
JOHN NULL
```
Expected Result
```
WILSON 20/02 (Month/Day)
MARCELO 11/04
RICARDO 30/12
PABLO 10/01
(JOHN NOT IN THE LIST)
``` | **Test Data**
```
DECLARE @TABLE TABLE(Name VARCHAR(100),Dob DATETIME)
INSERT INTO @TABLE VALUES
('Mark', '19961017'),('Josh', '19801119'),('Sam', '19700709'),
('Vicky', '19500210'),('Dom', '19890308'),('Paul', '19840401')
,('Nick', NULL)
```
**Query**
```
SELECT Name, CAST(MONTH(Dob) AS NVARCHAR(2))
+ '/' + CAST(DAY(Dob) AS NVARCHAR(2)) [Dob Month/Day]
FROM @TABLE
WHERE DATEPART(DAYOFYEAR,Dob) - DATEPART(DAYOFYEAR,GETDATE()) > 0
ORDER BY ABS(DATEPART(DAYOFYEAR,Dob) - DATEPART(DAYOFYEAR,GETDATE()))
```
**Result Set**
User Vicky is filtered out, as her birthday has already passed.
```
+------+---------------+
| Name | Dob Month/Day |
+------+---------------+
| Dom  | 3/8           |
| Paul | 4/1           |
| Sam  | 7/9           |
| Mark | 10/17         |
| Josh | 11/19         |
+------+---------------+
```
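As an aside (not part of the answer above), the wrap-around requirement from the question, where past birthdays go to the end of the list, can be prototyped outside the database. A Python sketch with the question's data and an assumed "today" of 2014-02-14 (Feb 29 birthdays would need extra handling, ignored here):

```python
from datetime import date

people = [("MARCELO", date(1988, 4, 11)), ("RICARDO", date(1965, 12, 30)),
          ("WILSON", date(1977, 2, 20)), ("PABLO", date(1985, 1, 10)),
          ("JOHN", None)]

def days_until_birthday(dob, today):
    # next occurrence of the month/day; pushed to next year if already past
    nxt = date(today.year, dob.month, dob.day)
    if nxt < today:
        nxt = date(today.year + 1, dob.month, dob.day)
    return (nxt - today).days

today = date(2014, 2, 14)
ordered = sorted((p for p in people if p[1] is not None),
                 key=lambda p: days_until_birthday(p[1], today))
print([f"{name} {dob.day:02d}/{dob.month:02d}" for name, dob in ordered])
# ['WILSON 20/02', 'MARCELO 11/04', 'RICARDO 30/12', 'PABLO 10/01']
```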
The result set is ordered by whose birthday is next. | ```
select name, birthdate
from yourtable
where birthdate is not null
order by datepart(dy,dateadd(d,- DATEPART(dy, getdate()),birthdate ))
``` | SQL Server : list by Nearest Birthday | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
SELECT * FROM `articles` `t`
LEFT OUTER JOIN `category_type` `category` ON (`t`.`category_id`=`category`.`id`)
WHERE (
t.status = 6 AND
t.publish_on <= '2014-02-14' AND
t.id NOT IN (13112,9490,9386,6045,1581,1034,991,933,879,758) AND
t.category_id IN (14)
)
ORDER BY t.id DESC
LIMIT 7;
```
It takes more than 1.5 seconds to execute this query.
Can you give me some ideas? How can I improve this query and minimize its execution time? | First, use `where` instead of an `inner join`, because `where` is faster than an `inner join` query.
Second, use `indexes` on the frequently searched columns. In your example you search on the basis of `status` and `publish_on`, besides `id` as the primary index. | If you are using MySQL, you can try the 'Propose table structure' option in phpMyAdmin, which can help you decide on suitable data types for your columns. This could help you optimize your query processing.
Query processing time depends on many things: database server load, the amount of data in the table, and the data types used for the columns. | MySQL query execution time is too slow | [
"",
"mysql",
"sql",
""
] |
I have a following table
```
Name Amount Paid
-----------------------
Ali 500 500
Baba 700 100
Cam 300 Null
Dave Null Null
```
I want the following desired result:
```
Name Amount Paid
--------------------
Baba 700 100
Cam 300 Null
```
I wrote the following SQL query but it did not work
```
SELECT Name, Amount, Paid
FROM Table_1
WHERE PAID <> AMOUNT
``` | Will it work if you treat `NULL` as Zero *(assuming that is valid business logic)*:
```
SELECT Name, ISNULL(Amount,0), ISNULL(Paid,0) as Paid
FROM Table_1
WHERE ISNULL(Paid,0) <> ISNULL(Amount,0)
``` | Well, `null <> null` is `null`, and `null = null` is also `null`, resulting in a false condition. Confusing?
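A quick way to see that behaviour concretely (a SQLite sketch via Python with the question's data; SQL Server behaves the same way here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table_1 (Name TEXT, Amount INT, Paid INT)")
con.executemany("INSERT INTO Table_1 VALUES (?, ?, ?)",
                [("Ali", 500, 500), ("Baba", 700, 100),
                 ("Cam", 300, None), ("Dave", None, None)])

# comparing with NULL yields NULL, which WHERE treats as "not true"
print(con.execute("SELECT NULL <> NULL").fetchone()[0])            # None

# so the original query silently drops every row with a NULL side
rows = con.execute("SELECT Name FROM Table_1 WHERE Paid <> Amount").fetchall()
print([r[0] for r in rows])                                        # ['Baba']
```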
You have to check for null separately, using the `is null` operator:
```
select
Name, Amount, Paid
from Table_1
where
(Paid is not null and Amount is not null and Paid <> Amount)
or
not (Paid is null and Amount is null);
```
Or, in some cases, you can easily use
```
coalesce(Paid, 0) <> coalesce(Amount, 0)
``` | SQL query - dealing with Null Value | [
"",
"sql",
"sql-server",
""
] |
Everything about my VS 2013 install appears to be working correctly except when I right click on a table in Server Explorer. I am trying to use the data tools to view the table, which is in a MS SQL database. Here is the message VS 2013 pops when I right click:
```
The 'Microsoft SQL Server Data Tools' package did not load correctly.
The problem may have been caused by a configuration change or by the installation of another extension. You can get more information by examining the file 'C:\Users\...\AppData\Roaming\Microsoft\VisualStudio\12.0\ActivityLog.xml'.
```
Here are the last few lines of the ActivityLog.xml, btw one of the most cryptic log files I have ever read:
```
1222 Begin package load [Microsoft SQL Server Data Tools] {00FEE386-5F9F-4577-99F4-F327FAFC0FB9} VisualStudio 2014/02/14 13:50:50.112
1223 ERROR SetSite failed for package [Microsoft SQL Server Data Tools] {00FEE386-5F9F-4577-99F4-F327FAFC0FB9} 80131534 VisualStudio 2014/02/14 13:50:50.641
1224 Entering function CVsPackageInfo::HrInstantiatePackage {FEF13793-C947-4FB1-B864-C9F0BE9D9CF6} VisualStudio 2014/02/14 13:50:50.650
1225 Warning Unexpected system error mode before loading package [SqlStudio Editor Package] {FEF13793-C947-4FB1-B864-C9F0BE9D9CF6} VisualStudio 2014/02/14 13:50:50.650
1226 Begin package load [SqlStudio Editor Package] {FEF13793-C947-4FB1-B864-C9F0BE9D9CF6} VisualStudio 2014/02/14 13:50:50.650
1227 End package load [SqlStudio Editor Package] {FEF13793-C947-4FB1-B864-C9F0BE9D9CF6} VisualStudio 2014/02/14 13:50:50.691
1228 ERROR End package load [Microsoft SQL Server Data Tools] {00FEE386-5F9F-4577-99F4-F327FAFC0FB9} 80131534 VisualStudio 2014/02/14 13:50:51.044
```
Here is what I have done to try to solve this problem:
* Disabled all plugins/extensions
* Checked for and applied all updates for all extensions and VS 2013 including Update 1
* Did a full VS 2013 install repair operation
VS 2013 now includes the SSDT tools, so I cannot download and install anything separately; it is supposed to already be working, unless someone knows of a different trick.
I am out of ideas; I have searched quite extensively about this, and most of what I found applied to VS 2012. If anyone has any good suggestions, that would be awesome. | Go to 'Add/Remove Programs' in Control Panel, search for "*Microsoft SQL Server Data Tools*" and repair.
Worked for me. | And just in case none of those work, you can go to the **[Download Latest SQL Server Data Tools](https://msdn.microsoft.com/en-us/library/mt204009.aspx)** Microsoft page itself. It lists out the versions of the tool set for each version of Visual Studio | Microsoft SQL Server Data Tools package did not load correctly | [
"",
"sql",
"sql-server",
"visual-studio-2013",
"sql-server-data-tools",
""
] |
Is it possible to use the BETWEEN operator within a CASE statement within a WHERE clause? For example, in the code below, the condition should be `pydate between (sysdate-12) and (sysdate-2)` if it is a Monday, `pydate between (sysdate-11) and (sysdate-1)` if it is a Tuesday, and so on. But the following doesn't work. Maybe there is another way of writing this. Can someone please help?
```
select * from table_name
where pricekey = 'JUF' and
case when to_char(to_date(sysdate,'DD-MON-YY'), 'DY')='MON' then pydate between to_date(sysdate-12,'DD-MON-YY') and to_date(sysdate-2,'DD-MON-YY')
when to_char(to_date(sysdate,'DD-MON-YY'), 'DY')='TUE' then pydate between to_date(sysdate-11,'DD-MON-YY') and to_date(sysdate-1,'DD-MON-YY')
else pydate='sysdate'
end
``` | You can apply the logic you are attempting, but it is done without the `CASE`. Instead, you need to create logical groupings of `OR/AND` to combine the `BETWEEN` with the other matching condition from your case.
This is because `CASE` is designed to return a value, rather than to dynamically construct the SQL inside it.
```
SELECT *
FROM table_name
WHERE
pricekey = 'JUF'
AND (
-- Condition 1
(to_char(to_date(sysdate,'DD-MON-YY'), 'DY') = 'MON' AND pydate BETWEEN to_date(sysdate-12,'DD-MON-YY') AND to_date(sysdate-2,'DD-MON-YY'))
-- Condition 2
OR (to_char(to_date(sysdate,'DD-MON-YY'), 'DY')='TUE' AND pydate BETWEEN to_date(sysdate-11,'DD-MON-YY') AND to_date(sysdate-1,'DD-MON-YY'))
-- ELSE case, matching neither of the previous 2
OR (to_char(to_date(sysdate,'DD-MON-YY'), 'DY') NOT IN ('MON', 'TUE') AND pydate = 'sysdate')
)
``` | This is hard to write using a `case`. Just do:
```
where pricekey = 'JUF' and
((to_char(to_date(sysdate,'DD-MON-YY'), 'DY') = 'MON' and
pydate between to_date(sysdate-12,'DD-MON-YY') and to_date(sysdate-2,'DD-MON-YY')
) or
(to_char(to_date(sysdate,'DD-MON-YY'), 'DY') = 'TUE' and
pydate between to_date(sysdate-11,'DD-MON-YY') and to_date(sysdate-1,'DD-MON-YY')
) or
      (to_char(to_date(sysdate,'DD-MON-YY'), 'DY') not in ('MON', 'TUE') and
pydate = trunc(sysdate)
)
)
```
Note, I also removed the single quotes around "sysdate", so it won't be treated as a string. And, I trunc'ed it to just get the date portion with no time. | Using 'Between' operator after 'THEN' within 'CASE' statement within 'WHERE' Clause | [
"",
"sql",
"oracle",
""
] |
I need some help with SQL query creation. I have data like
```
EN 771-2:2011
EN 197-1:2011
EN 295-1:2013
771-1:2011
EN 54-24:2008
EN 492:2012
EN 54-25: 2008
EN 331:1998
EN 534:2006+A1:2010
EN 588-2:2001
EN 179:2008
EN 598:2007+A1:2009
EN 621:2009
EN 682: 2002
```
Is it possible to create an ORDER BY clause where the result of the ordering will be:
```
EN 54-24:2008
EN 54-25: 2008
EN 179:2008
EN 197-1:2011
EN 295-1:2013
EN 331:1998
EN 492:2012
EN 534:2006+A1:2010
EN 588-2:2001
EN 598:2007+A1:2009
EN 621:2009
EN 682: 2002
771-1:2011
EN 771-2:2011
```
Respectively, I need an ordering which depends on parts of the substring:
EN **54**-*24*:2008,
EN **54**-*25*: 2008,
**771**-*1*:2011 ,
EN **771**-*2*:2011
Bold characters should have the highest priority and italic characters the lower one.
Is it possible to create an "*ORDER BY*" clause for results like this? I know about the substring function, but it gives different results for me.
Thank you for your help. | You'd have to formally define, via regexp, what *exactly* is "bold" or "italic".
If you assume that the first group of digits is the first variable to order on and the second is the second:
```
ORDER BY substring(col,'\d+')::int, substring(col,'\d+[^\d]+(\d+)')::int;
```
Or alternatively, if your definition is that the first number is the digits following the first space and the second is the ones following a dash:
```
ORDER BY substring(col,' (\d+)')::int, substring(col,'-(\d+)')::int;
```
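Not part of the answer, but the two-key idea is easy to sanity-check outside the database; a Python sketch with simplified regex extraction (first number is the primary key, second number the secondary):

```python
import re

rows = ["EN 771-2:2011", "EN 197-1:2011", "EN 54-24:2008",
        "EN 54-25: 2008", "771-1:2011", "EN 179:2008"]

def sort_key(s):
    nums = re.findall(r"\d+", s)
    # primary key: first number; secondary: second number (0 if absent)
    return int(nums[0]), int(nums[1]) if len(nums) > 1 else 0

print(sorted(rows, key=sort_key))
# ['EN 54-24:2008', 'EN 54-25: 2008', 'EN 179:2008',
#  'EN 197-1:2011', '771-1:2011', 'EN 771-2:2011']
```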
Of course you should first debug these by running `select substring(...)`. | Yes, use substring as you say, but then you have to cast it to integer so that a numeric ordering is used (which seems to be what you want).
Untested, try this in the order by:
`cast(substring(YOURFIELDNAME from '\s*\d+-') as integer)`
It uses a regular expression:
- `\s` for a space
- `*` to make it optional
- `\d` for digits
- `+` to pick one or more digits
- `-` for the minus sign | SQL order by part of substring | [
"",
"sql",
"postgresql",
"sql-order-by",
"hql",
""
] |
I have the following mySQL Query which returns a list of ID's.
I want to delete the rows with each ID of the list from the table "serverking\_user"
```
SELECT serverking_user.id FROM (serverking_user
INNER JOIN serverking_logins
ON serverking_user.id=serverking_logins.uid)
WHERE serverking_logins.time < 3600
GROUP BY serverking_user.id
HAVING COUNT(serverking_logins.login) = 1
```
One possibility would be to loop through each ID by using "in". But since "in" is deactivated on my Sql Server I need to solve this with a "join".
Any ideas ? | It has been a while since I have done MySQL development, but if I remember correctly this should work for you:
```
DELETE su
FROM serverking_user su
INNER JOIN serverking_logins sl
ON su.id=sl.uid
WHERE sl.time < 3600
GROUP BY su.id
HAVING COUNT(sl.login) = 1
```
In general, if you want to delete records from a single table in a JOIN, you just specify that table (or its alias) after `DELETE`
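The filtering logic itself is easy to verify on a toy dataset; a SQLite sketch via Python (SQLite has no multi-table DELETE, so this uses the `IN` form the question wanted to avoid; the data is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE serverking_user   (id INTEGER PRIMARY KEY);
CREATE TABLE serverking_logins (uid INTEGER, login TEXT, time INTEGER);
INSERT INTO serverking_user VALUES (1), (2), (3);
INSERT INTO serverking_logins VALUES
  (1, 'a', 100),                 -- exactly one quick login -> delete
  (2, 'a', 100), (2, 'b', 200),  -- two quick logins        -> keep
  (3, 'a', 9999);                -- login after the cutoff  -> keep
""")
con.execute("""
  DELETE FROM serverking_user WHERE id IN (
    SELECT u.id
    FROM serverking_user u
    JOIN serverking_logins l ON u.id = l.uid
    WHERE l.time < 3600
    GROUP BY u.id
    HAVING COUNT(l.login) = 1)
""")
print([r[0] for r in con.execute("SELECT id FROM serverking_user")])  # [2, 3]
```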
**UPDATED QUERY**
It seems that MySQL isn't fond of deleting where a grouped query is involved, so another possibility is to use an anti-join. However, I'm sure that the performance characteristics of this query will be suboptimal. It may be better to use two separate queries instead. Nonetheless, here is a solution using anti-joins:
```
DELETE su
FROM serverking_user su
INNER JOIN serverking_logins sl
ON su.id=sl.uid
LEFT JOIN serverking_logins antisl
ON sl.uid = antisl.uid
AND sl.id <> antisl.id
WHERE sl.time < 3600
AND antisl.id IS NULL
``` | ```
DELETE FROM serverking_user
WHERE EXISTS
(SELECT *
FROM (serverking_user
INNER JOIN serverking_logins
ON serverking_user.id=serverking_logins.uid)
WHERE serverking_logins.time < 3600
GROUP BY serverking_user.id
HAVING COUNT(serverking_logins.login) = 1) a
``` | SQL: Delete by result of Select with Join | [
"",
"mysql",
"sql",
"sql-delete",
""
] |
I tried various alterations to get the right query date format. Below is one of them.
The below date is actually filled in by string substitution from a java program. I need to be able to interpret it and run the query. Currently the query fails with the below error:
```
[Error] Execution (48: 18): ORA-01841: (full) year must be between -4713 and +9999, and not be 0
SELECT
to_date('Mon Jan 01 12:00:00 EST 1990', 'yyyy/mm/dd HH:MI:SS')
FROM
duAL
You can't use `to_date` with a timezone. Try this:
```
SELECT
TO_TIMESTAMP_TZ('MON JAN 01 12:00:00 EST 1990', 'DY MON DD HH24:MI:SS TZR YYYY')
FROM
duAL;
``` | I try, removed the **Mon** and **EST**
```
SELECT to_date('Jan 01 12:00:00 1990', 'MON DD HH:MI:SS RRRR') FROM dual
``` | How can I format an oracle query with this date? | [
"",
"sql",
"oracle",
""
] |
How can I **list all the tables** of a PostgreSQL database and **order them by size**? | ```
select
table_name,
pg_size_pretty(pg_total_relation_size(quote_ident(table_name))),
pg_total_relation_size(quote_ident(table_name))
from information_schema.tables
where table_schema = 'public'
order by 3 desc;
```
This shows you the size of all tables in the schema `public`. If you have multiple schemas, you might want to use:
```
select table_schema, table_name, pg_relation_size('"'||table_schema||'"."'||table_name||'"')
from information_schema.tables
order by 3
```
SQLFiddle example: <http://sqlfiddle.com/#!15/13157/3>
List of all object size functions in the [manual](https://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE). | This will show you the schema name, table name, pretty-printed size, and raw size (needed for sorting).
```
SELECT
schema_name,
relname,
pg_size_pretty(table_size) AS size,
table_size
FROM (
SELECT
pg_catalog.pg_namespace.nspname AS schema_name,
relname,
pg_relation_size(pg_catalog.pg_class.oid) AS table_size
FROM pg_catalog.pg_class
JOIN pg_catalog.pg_namespace ON relnamespace = pg_catalog.pg_namespace.oid
) t
WHERE schema_name NOT LIKE 'pg_%'
ORDER BY table_size DESC;
```
I built this based on the solutions from here: [list of schema with sizes (relative and absolute) in a PostgreSQL database](https://stackoverflow.com/questions/4418403/list-of-schema-with-sizes-relative-and-absolute-in-a-postgresql-database) | postgresql list and order tables by size | [
"",
"sql",
"postgresql",
"postgresql-9.3",
""
] |
How can I enumerate multiple date ranges in SQL Server 2008? I know how to do this if my table contains a single record
```
StartDate EndDate
2014-01-01 2014-01-03
;WITH DateRange
AS (
SELECT @StartDate AS [Date]
UNION ALL
SELECT DATEADD(d, 1, [Date])
FROM DateRange
WHERE [Date] < @EndDate
)
SELECT * FROM DateRange
```
OUTPUT
```
2014-01-01, 2014-01-02, 2014-01-03
```
I am, however, lost as to how to do it if my table contains multiple records. I could possibly use the above logic in a cursor, but I want to know if there is a set-based solution instead.
```
StartDate EndDate
2014-01-01 2014-01-03
2014-01-05 2014-01-06
```
DESIRED OUTPUT:
```
2014-01-01, 2014-01-02, 2014-01-03, 2014-01-05, 2014-01-06
``` | Well, let's see. Define the ranges as a table. Then generate the full range of dates from the first to the last date. Finally, select the dates that are in the range:
```
with dateranges as (
select cast('2014-01-01' as date) as StartDate, cast('2014-01-03' as date) as EndDate union all
select '2014-01-05', '2014-01-06'
),
_dates as (
SELECT min(StartDate) AS [Date], max(EndDate) as enddate
FROM dateranges
UNION ALL
SELECT DATEADD(d, 1, [Date]), enddate
FROM _dates
WHERE [Date] < enddate
),
dates as (
select [date]
from _dates d
where exists (select 1 from dateranges dr where d.[date] >= dr.startdate and d.[date] <= dr.enddate)
)
select *
from dates
. . .
```
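The same min-to-max-then-filter idea, restated procedurally (a Python sketch, not part of the answer):

```python
from datetime import date, timedelta

ranges = [(date(2014, 1, 1), date(2014, 1, 3)),
          (date(2014, 1, 5), date(2014, 1, 6))]

def enumerate_ranges(ranges):
    # walk from the earliest start to the latest end...
    d, end = min(s for s, _ in ranges), max(e for _, e in ranges)
    while d <= end:
        # ...keeping only dates covered by at least one range
        if any(s <= d <= e for s, e in ranges):
            yield d
        d += timedelta(days=1)

print([d.isoformat() for d in enumerate_ranges(ranges)])
# ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-05', '2014-01-06']
```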
You can see this work [here](http://www.sqlfiddle.com/#!6/d41d8/14729). | You could grab the min and max dates first, like so:
```
SELECT @startDate = MIN(StartDate), @endDate = MAX(EndDate)
FROM YourTable
WHERE ...
```
And then pass those variables into your date range enumerator.
*Edit...* Whoops, I missed an important requirement. See the accepted answer. | SQL Server 2008 - Enumerate multiple date ranges | [
"",
"sql",
"sql-server-2008",
""
] |
## Here is my SQL
```
SELECT * FROM
(
SELECT Row_number() OVER (ORDER BY [bbajobs].JID) AS RowNumber
,[BBAJobs].[JID]
,[AccountReference] as [Acc Ref]
,[BBAJobs].[OEReference] as [OERef]
,[JobType],[JobState]
,[JobShippedDate]
,[UPSShippingNumber]
,[CustomerName] [Customer Name]
,[ContactName] [Contact Name]
,[Telephone]
,[JobDescription]
,[CallRem].[rem]
,[CallRem].[callStatus]
,[CallRem].[ShopRemarks]
,CustomerNotes
,ShopNotes
,RecievedDate
,UserName
FROM dbo.BBAJobs LEFT OUTER JOIN dbo.CallRem
ON dbo.BBAJobs.JID = dbo.CallRem.jid) x
WHERE 1<2
AND x.jid IN (SELECT CONVERT(VARCHAR, data) AS [JID]
FROM dbo.Split1('33180,33265,33047', ','))
AND RowNumber BETWEEN 1 AND 20
```
When I execute the above SQL, the row number does not show as `1,2,3`.
When I exclude this line:
```
AND x.jid IN (SELECT CONVERT(VARCHAR, data) AS [JID]
FROM dbo.Split1('33180,33265,33047', ','))
```
then the right row numbers show. Without excluding that line, can the row numbers still be shown correctly, in position? Please help. Thanks. | If you want to select 20 rows from the subquery, then you need to make sure that any conditions that filter the rows are applied before/at the same time as you number the rows. Move the condition inside the subquery:
```
SELECT *
FROM (SELECT Row_number()
OVER ( ORDER BY [bbajobs].jid ) AS RowNumber,
[bbajobs].[jid],
[accountreference] AS [Acc Ref],
[bbajobs].[oereference] AS [OERef],
[jobtype],
[jobstate],
[jobshippeddate],
[upsshippingnumber],
[customername] [Customer Name],
[contactname] [Contact Name],
[telephone],
[jobdescription],
[callrem].[rem],
[callrem].[callstatus],
[callrem].[shopremarks],
customernotes,
shopnotes,
recieveddate,
username
FROM dbo.bbajobs
LEFT OUTER JOIN dbo.callrem
ON dbo.bbajobs.jid = dbo.callrem.jid
WHERE dbo.bbajobs.jid IN (SELECT CONVERT(VARCHAR, data) AS [JID]
FROM dbo.Split1('33180,33265,33047', ',')) ) x
WHERE 1 < 2
AND rownumber BETWEEN 1 AND 20
``` | The way this is written, the subquery 'x' is returning an ordered dataset. Once that dataset is returned, the following line is filtering it to rows that match it:
```
AND x.jid IN (SELECT CONVERT(VARCHAR, data) AS [JID] FROM dbo.Split1('33180,33265,33047', ','))
```
Moving this condition within the 'x' subquery should yield the results you want.
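The underlying point, number after filtering rather than before, can be seen with a toy enumeration (Python, with jid values from the question):

```python
rows = [33047, 33100, 33180, 33200, 33265]      # jid values, in ORDER BY order
wanted = {33180, 33265, 33047}

# numbering first and filtering afterwards leaves gaps:
print([(i, r) for i, r in enumerate(rows, 1) if r in wanted])
# [(1, 33047), (3, 33180), (5, 33265)]

# filtering first and numbering afterwards gives contiguous 1, 2, 3:
print(list(enumerate((r for r in rows if r in wanted), 1)))
# [(1, 33047), (2, 33180), (3, 33265)]
```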
i.e.:
```
SELECT * FROM
(SELECT Row_number() OVER (ORDER BY [bbajobs].JID) AS RowNumber,
[BBAJobs].[JID],[AccountReference] as [Acc Ref],[BBAJobs].[OEReference] as [OERef],[JobType],[JobState],
[JobShippedDate],[UPSShippingNumber],[CustomerName] [Customer Name],[ContactName] [Contact Name],[Telephone],
[JobDescription],[CallRem].[rem],[CallRem].[callStatus],[CallRem].[ShopRemarks],
CustomerNotes,ShopNotes,RecievedDate,UserName FROM dbo.BBAJobs LEFT OUTER JOIN dbo.CallRem
ON dbo.BBAJobs.JID = dbo.CallRem.jid
AND [bbajobs].jid IN (SELECT CONVERT(VARCHAR, data) AS [JID] FROM dbo.Split1('33180,33265,33047', ','))) x
WHERE 1<2
AND RowNumber BETWEEN 1 AND 20
``` | SQL server wrong Row Number is showing | [
"",
"sql",
"sql-server",
""
] |
Say I have this result from a query:
```
customername p8 p12 p750 p1 m8
customer1 5 48 4 4 4
customer2 4 2 5 43
customer3 4 -3
```
Now I want to ask: is there a way I could sum the values across each row to form something like this:
```
customername p8 p12 p750 p1 m8 total
customer1 5 48 4 4 4 65
customer2 4 2 5 43 54
customer3 4 -3 1
```
Can anyone help me with this? | ```
create table tbl1 (m1 int, m2 int);
insert into tbl1 (m1,m2) values
(25,50),(30,43);
```
**Query:**
```
select m1,m2, (m1+m2) as total from tbl1;
```
# [Demo](http://sqlfiddle.com/#!2/348b2/1) | Sum up the fields and use an alias to name the dynamic column in your select
```
select coalesce(p8,0)+
coalesce(p12,0)+
coalesce(p750,0)+
coalesce(p1,0)+
coalesce(m8,0) as total
from your_table
```
If some of your columns allow `null`, then you have to use a function like `coalesce` to replace it with `0`, since `1+null=null`. | sum sql rows and add to a new column | [
"",
"sql",
""
] |
I have 2 MySQL databases:
1. DB1
2. DB2
I want to select all contact values from the **contact_mst** table under **DB1**, and then insert those values into the **contact_mst** table under **DB2**. For certain reasons I do not want to copy the **contact_id** field; I want to keep the IDs incremental, but I also do not want them to be **AUTO_INCREMENT**, because I use two primary key columns: **Company_id**, which is 1, and **Contact_id**, which is auto-generated by PHP code for a specific purpose.
So I made an SQL query for transferring the data, like this:
```
INSERT INTO DB2.contactsmaster (Company_id, Contact_id, Contact_person)
SELECT 1, (SELECT COALESCE(MAX(Contact_id),0)+1 FROM DB2.contactsmaster), Contact_person FROM DB1.contact_mst;
```
I think I have done something wrong here, as this will not generate new IDs each time; instead it will return the same **Contact_id** every time.
Any suggestions?
P.S. I just want to achieve this with an SQL query only. I know I can do this with PHP code, but I actually want to supply a .sql file to my client. | Below is sample code for inserting records into table tab2, where the value of column `a` is a sequential integer and the value of column `b` is the same as column `b` of tab1.
```
create table tab1
(
a int,
b int
);
create table tab2
(
a int,
b int
);
insert into tab1
values
(10,20),(30,40);
```
tab1 contents:
```
| a | b |
---------
|10 |20 |
|30 |40 |
```
```
insert into tab2
select @row := @row + 1, b FROM tab1, (SELECT @row := 0) r
```
tab2 contents:
```
| a | b |
---------
|1 |20 |
|2 |40 |
```
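The `@row` user variable is MySQL-specific; the renumbering it performs is just this (a Python sketch with the same sample rows):

```python
src = [(10, 20), (30, 40)]                       # tab1 rows (a, b)
# copy b, replacing a with a fresh 1-based sequence
dst = [(i, b) for i, (_a, b) in enumerate(src, start=1)]
print(dst)                                       # [(1, 20), (2, 40)]
```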
Check the working of the query at sqlfiddle:
<http://www.sqlfiddle.com/#!2/5f9a39/1>
Let me know if it solved your problem or not. | try this
```
INSERT INTO DB2.contactsmaster (Company_id, Contact_id, Contact_person)
VALUES(
1,
(SELECT COALESCE(MAX(Contact_id),0)+1 FROM DB2.contactsmaster),
(SELECT Contact_person FROM DB1.contact_mst LIMIT 1)
)
``` | How to do Insert-Select with manual incremental value? | [
"",
"mysql",
"sql",
""
] |
I'm quite new to SQL Server, but I still manage to do most of the things I need. However, there is one thing I just can't understand, and that is working with dates. I have a database like this, used for recording sales:
```
**Sales**
[index] int
timestamp datetime
username varchar(10)
type int
amount int
value int
location int
receipt text
**Demo**
index timestamp username type amount value location receipt
1 2013-08-14 11:29:29.367 andrer 1 1 10 2 *long text*
```
I have two queries I'm trying to write. The first one is a table of the last 7 days, showing me the number of customers each day. My main problem was being able to order the output correctly while still displaying dates in the format `15.02.2014`. This is what I ended up with. While it works as it should, is there an easier way of writing it?
```
SELECT CONVERT(varchar, DATEADD(dd, 0, DATEDIFF(dd, 0, [timestamp])), 104) as [timestamp],
COUNT([username]) as a
FROM [sales]
WHERE [timestamp] >= DATEADD(day,-7, GETDATE())
GROUP BY DATEADD(dd, 0, DATEDIFF(dd, 0, [timestamp]))
ORDER BY [timestamp]
```
My other question is about making a table that consists of the name of the month and the number of customers for the current and previous year. This is something I've been struggling with for the last couple of weeks and can't get my head around how to solve.
Using the same table as above, I'm trying to get an output like this, where the first column is the name of the month, the second is the number of customers for the current year, followed by a column for the number of customers for the previous year.
```
January | 1345 | 299 |
February | 231 | 342 |
```
...
Sadly, I have no working code yet for the current-year/previous-year query and hope that someone knows an easy way of writing it. :) | ```
SELECT DATENAME(MONTH,[timestamp]) [Month]
,COUNT(DISTINCT CASE WHEN YEAR([timestamp]) = YEAR(GETDATE())
THEN username ELSE NULL END) CurrentYear
,COUNT(DISTINCT CASE WHEN YEAR([timestamp]) = YEAR(GETDATE())-1
THEN username ELSE NULL END) LastYear
FROM Sales
GROUP BY DATENAME(MONTH,[timestamp]), MONTH([timestamp])
ORDER BY MONTH([timestamp])
```
This will return the data in the format of
```
+-----------+-------------+----------+
| Month     | CurrentYear | LastYear |
+-----------+-------------+----------+
| February  | 1000        | 0        |
| March     | 1235        | 202      |
| September | 1750        | 787      |
+-----------+-------------+----------+
``` | For your first question :
```
SELECT CONVERT(varchar, DATEADD(dd, 0, DATEDIFF(dd, 0, [timestamp])), 104) as [timestamp],
COUNT([username]) as a
FROM [sales]
WHERE [timestamp] >= DATEADD(day,-7, GETDATE())
GROUP BY DATEADD(dd, 0, DATEDIFF(dd, 0, [timestamp]))
ORDER BY DATEADD(dd, 0, DATEDIFF(dd, 0, [timestamp]))
```
For your second request you can try this:
```
SELECT datename(month,[timestamp]) as [monthname], COUNT([username]) as Customers, YEAR([timestamp]) as orderyear
FROM [sales]
GROUP BY datename(month,[timestamp]), YEAR([timestamp])
``` | SQL Server and working with dates, year and previous year | [
"",
"sql",
"sql-server",
"datetime",
""
] |
Running the following query:
```
SELECT exists (
SELECT
schema_name
FROM
information_schema.schemata
WHERE
schema_name = 'public'
) AS schema_exists;
```
I always get `FALSE`, even though the public schema exists.
How should I check whether this schema exists?
**EDIT**
I am using PostgreSQL version 8.4 | The information from `information_schema.schemata` depends on the role you're connected with, so it's not really the right view to query to discover schemas in general.
The [doc on `information_schema.schemata` in 9.3](http://www.postgresql.org/docs/9.3/static/infoschema-schemata.html) says:
> The view schemata contains all schemas in the current database that
> are owned by a currently enabled role.
However, it's not quite clear (at least to me) from just that sentence why you can't see `public`.
In a mailing-list post, Tom Lane has an explanation that goes a bit further:
See <http://www.postgresql.org/message-id/11650.1357782995@sss.pgh.pa.us>
His conclusion:
> As things stand, a non-superuser won't see "public", "pg\_catalog", nor
> even "information\_schema" itself in this view, which seems a tad
> silly.
which looks exactly like the problem in this question.
Bottom line: use `pg_namespace` instead of `information_schema.schemata`
---
This was amended in version 9.4 to conform to what users expect. The [current doc](http://www.postgresql.org/docs/current/static/infoschema-schemata.html) says:
> The view schemata contains all schemas in the current database that
> the current user has access to (by way of being the owner or having
> some privilege).
`USAGE` privilege on a schema is now enough to get it from this view. | I guess you can't see the public schema because of the database role you are using to test schema existence. `information_schema.schemata` is actually a view with the following definition:
```
SELECT
current_database()::information_schema.sql_identifier AS catalog_name,
n.nspname::information_schema.sql_identifier AS schema_name,
u.rolname::information_schema.sql_identifier AS schema_owner,
NULL::character varying::information_schema.sql_identifier AS default_character_set_catalog,
NULL::character varying::information_schema.sql_identifier AS default_character_set_schema,
NULL::character varying::information_schema.sql_identifier AS default_character_set_name,
NULL::character varying::information_schema.character_data AS sql_path
FROM pg_namespace n, pg_authid u
WHERE n.nspowner = u.oid AND pg_has_role(n.nspowner, 'USAGE'::text);
```
This is also described in [documentation](http://www.postgresql.org/docs/current/static/infoschema-schemata.html).
You can get the definition of views in `information_schema` using `\d+` in psql - `\d+ information_schema.schemata` in this case.
You should use `pg_namespace` instead of `information_schema.schemata` | How to check if PostgreSQL public schema exists? | [
"",
"sql",
"postgresql",
"schema",
""
] |
I want to have a counter field that starts with a character followed by a number, as a regular counter, but the number should be padded with leading zeros.
Example:
```
L00001
L00002
L00003
```
Is it possible to construct such counters in sql server?
**Update:**
The counter value should be auto-generated and incremented by 1 when a new row is added. The number of rows in the table is never going to be as large as L99999.
It's SQL Server 2012 Express I'm using. | One possible trick: use a computed column:
```
--DROP TABLE MyTable
CREATE TABLE MyTable
(
MyTableId int not null identity(1,1)
,MyTableKey as 'L' + right('00000' + cast(MyTableId as varchar(5)), 5)
persisted
constraint PK_MyTable
primary key clustered
,SomeData varchar(50)
)
```
Column MyTableKey is calculated based on the identity column and actually stored within the table (persisted, as opposed to calculated on the fly, so it's only calculated when a row is first inserted). This allows it to be indexed, and I set it as the primary key since nothing should ever reference the "Id" column.
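The padding expression itself can be checked outside the database; a rough Python equivalent of the computed column (names illustrative, not part of the answer):

```python
def counter_key(n: int, width: int = 5, prefix: str = "L") -> str:
    # mirrors 'L' + RIGHT('00000' + CAST(n AS varchar(5)), 5)
    return prefix + ("0" * width + str(n))[-width:]

print(counter_key(1), counter_key(42), counter_key(99999))
# L00001 L00042 L99999
```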
Some sample data:
```
INSERT MyTable (SomeData) values
('First')
,('Second')
,('Third')
,('Etc')
SELECT *
from MyTable
``` | Hi David, use this query to create your table:
```
CREATE TABLE [dbo].[Emp](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) ,
[AutoGeneratedColumn] AS ('L'+right(replicate('0',(5))+CONVERT([varchar](5),[ID],0),(5)))
) ON [PRIMARY]
``` | Specialized counter field in Sql Server | [
"",
"sql",
"sql-server",
""
] |
I'm not sure how to accurately summarize the problem I'm having in one sentence, but it is fairly simple to describe (below). Here are three tables, associated by foreign keys on the last table:
tbl\_village:
```
villageID | village_name
1 | Happy Village
2 | Sad Village
```
tbl\_setting:
```
settingID|setting_name
32 | Chill
33 | Hyper
```
tbl\_neighbourhood:
```
neighbourhoodID|settingID_fk|villageID_fk|neighbourhood_name
53 | 32 | 1 | Balls
54 | 32 | 1 | Peacefull
55 | 32 | 1 | SunnyBrook
56 | 33 | 1 | Rainbow Lane
57 | 33 | 1 | High Five
58 | 32 | 2 | Fungus square
59 | 32 | 2 | Mountains
```
What I would like to do is query tbl_neighbourhood with any number of villageIDs and select the settingID_fk values that the specified villages have in common.
So far I have a very basic query:
```
SELECT s.settingID
,s.setting_name
FROM tbl_neighbourhood n JOIN tbl_setting s ON s.settingID = n.settingID_fk
WHERE n.villageID_fk = 1
OR
n.villageID_fk = 2
GROUP BY s.settingID
```
This query returns:
```
settingID|setting_name
32 | Chill
33 | Hyper
```
But I want only the settingIDs that the specified villages have in common, so I want this result:
```
settingID|setting_name
32 | Chill
```
I've tried using different joins, but am not having success. I would rather get the desired result from a query than have to manually process the result with a server-side language after the fact. I feel like this is a simple problem for someone who is more adept at MySQL...?
Thanks for any assistance in advance! | You want to move the conditions to the `having` clause:
```
SELECT s.settingID, s.setting_name
FROM tbl_neighbourhood n JOIN
tbl_setting s
ON s.settingID = n.settingID_fk
GROUP BY s.settingID, s.setting_name
HAVING sum(n.villageID_fk = 1) > 0 and
sum(n.villageID_fk = 2) > 0;
```
Each condition counts the number of rows that match one of the villages. Together, the two conditions guarantee that both villages are present.
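A quick way to sanity-check this conditional-aggregation pattern is an in-memory SQLite database driven from Python (SQLite, like MySQL, evaluates boolean comparisons to 0/1, so `sum(condition)` behaves the same way; the table and data below mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_setting (settingID INTEGER, setting_name TEXT);
CREATE TABLE tbl_neighbourhood (neighbourhoodID INTEGER, settingID_fk INTEGER,
                                villageID_fk INTEGER, neighbourhood_name TEXT);
INSERT INTO tbl_setting VALUES (32, 'Chill'), (33, 'Hyper');
INSERT INTO tbl_neighbourhood VALUES
  (53, 32, 1, 'Balls'), (54, 32, 1, 'Peacefull'), (55, 32, 1, 'SunnyBrook'),
  (56, 33, 1, 'Rainbow Lane'), (57, 33, 1, 'High Five'),
  (58, 32, 2, 'Fungus square'), (59, 32, 2, 'Mountains');
""")

# Only setting 32 appears in both village 1 and village 2.
rows = conn.execute("""
    SELECT s.settingID, s.setting_name
    FROM tbl_neighbourhood n
    JOIN tbl_setting s ON s.settingID = n.settingID_fk
    GROUP BY s.settingID, s.setting_name
    HAVING sum(n.villageID_fk = 1) > 0
       AND sum(n.villageID_fk = 2) > 0
""").fetchall()
print(rows)  # [(32, 'Chill')]
```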
You can also write this as:
```
SELECT s.settingID, s.setting_name
FROM tbl_neighbourhood n JOIN
tbl_setting s
ON s.settingID = n.settingID_fk
WHERE n.villageID_fk in (1, 2)
GROUP BY s.settingID, s.setting_name
HAVING count(distinct n.villageID_fk) = 2;
``` | You want to `JOIN` neighbourhood against itself:
```
SELECT s.settingID, s.setting_name
FROM tbl_neighbourhood n1
JOIN tbl_neighbourhood n2
ON (n1.settingID_fk = n2.settingID_fk
AND n1.villageID_fk < n2.villageID_fk)
JOIN tbl_settings s ON s.settingID = n1.settingID_fk
WHERE n1.villageID_fk IN (1, 2)
GROUP BY s.settingID
```
This will fetch all villages, and for each of them check the settings of all subsequent villages; when two settings are identical, they are reported. Or you can also do, more explicitly:
> But I want only the settingIDs that the specified villages have in common
```
SELECT s.settingID, s.setting_name
FROM tbl_neighbourhood a
JOIN tbl_neighbourhood b
ON (a.settingID_fk = b.settingID_fk
AND a.villageID_fk = 1
AND b.villageID_fk = 2)
JOIN tbl_settings s ON s.settingID = a.settingID_fk
GROUP BY s.settingID
```
This is much more efficient if you have lots of villages, and the neighbourhood is indexed by villageID\_fk, since the query optimizer will first fetch village1 from A, then village2 from B, and from these two small sets of rows build the query proper.
# SQL Fiddle
Here you can find a [fiddle](http://sqlfiddle.com/#!2/3007c/2) to play with in order to test different settings. [This](http://sqlfiddle.com/#!2/c0d3b/1) is the same, with indexing. | MySQL SELECT where common foreign key among many | [
"",
"mysql",
"sql",
""
] |
The following procedure takes forever. Its purpose is to concatenate a list of IDs into one string.
```
declare @validatedIDList nvarchar(max)
SELECT
@validatedIDList = COALESCE(@validatedIDList + ', ', '') + CAST(si.SelectedID AS nvarchar(50))
FROM
dbo.SelectedID si
```
dbo.SelectedID is a table with only one int field.
It contains about 60000 rows. I tried to change COALESCE to ISNULL but with no significant improvement.
Any idea of how to make this request faster? | Running this on a table containing consecutive integers from `1 - 60,000` takes just over 5 minutes on my machine.
```
DECLARE @validatedIDList NVARCHAR(max)
SELECT @validatedIDList = COALESCE(@validatedIDList + ', ', '') +
CAST(si.SelectedID AS NVARCHAR(50))
FROM dbo.SelectedID si
SELECT internal_objects_alloc_page_count,
internal_objects_dealloc_page_count,
internal_objects_alloc_page_count - internal_objects_dealloc_page_count
FROM sys.dm_db_task_space_usage
WHERE session_id = @@SPID
SELECT DATALENGTH(@validatedIDList)
```
Profiling the process shows that it spends a lot of time performing Blob manipulation
[](https://i.stack.imgur.com/0UC1n.png)
To answer the question of why this is so slow: `nvarchar(max)` variables are stored as `LOB` data in `tempdb`, on 8KB pages.
The final length of the string is 817,784 bytes (approx one hundred 8KB pages). The result of the query above is
```
+-----------------------------------+-------------------------------------+-------+
| internal_objects_alloc_page_count | internal_objects_dealloc_page_count | |
+-----------------------------------+-------------------------------------+-------+
| 5571528 | 5571424 | 104 |
+-----------------------------------+-------------------------------------+-------+
```
Showing the final string actually consumes 104 pages in `tempdb`.
Dividing the `alloc_page_count` by `60,000` shows that the average number of pages allocated and deallocated per assignment is 93.
Each concatenation operation does not just append to the existing LOB data but instead makes a copy of it. As the string grows longer the amount of work per concatenation grows accordingly.
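The cost of that copy-on-every-append behaviour can be illustrated outside SQL Server; the Python sketch below (illustrative only, not T-SQL) counts the bytes a loop-style build would copy for 60,000 ids, versus one set-based join:

```python
ids = [str(i) for i in range(1, 60001)]

# Loop-style: each step copies the whole accumulated string first,
# like the per-assignment LOB copy described above.
copied = 0
length = 0
for s in ids:
    piece = len(s) + 2          # ", " separator plus the new id
    copied += length + piece    # old content is copied, then the piece appended
    length += piece

# Set-based: one pass, each byte written essentially once.
joined = ", ".join(ids)

print(copied)        # total bytes "copied" by the loop approach (quadratic)
print(len(joined))   # final string length from the single join
```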
The `XML PATH` method is much more efficiently implemented and has the additional "bonus" that it is actually documented to work ([unlike the method in the question](https://stackoverflow.com/questions/15138593/nvarchar-concatenation-index-nvarcharmax-inexplicable-behavior/15163136#15163136))
The article [Concatenating Row Values in Transact-SQL](https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/) has a good round up of the available methods. | ```
DECLARE @SelectedID TABLE (IDs INT)
INSERT INTO @SelectedID VALUES
(1),(2),(null),(4),(5),(null),(7)
declare @validatedIDList nvarchar(max)
SET @validatedIDList = STUFF((SELECT ', ' + CAST(si.IDs AS nvarchar(50))
FROM @SelectedID si
FOR XML PATH('')),1 ,2 ,'')
SELECT @validatedIDList
RESULT : 1, 2, 4, 5, 7
``` | Why is the combination of coalesce and cast so slow in concatenation? | [
"",
"sql",
"sql-server",
""
] |
I'm trying to retrieve data from a table such as:
```
col1 col2 col3 col4 col5 col6
aaaa bbbb cccc oooo eric date1
aasa bcbb ccfc ooho Samm date2
aaaa bbbb cccc oooo eric date3
aaaa bbbb cccc oooo Samm date4
aaaa bbbb cccc oooo eric date5
```
I would like to pull the row with the latest date, which is eric's, as well as everything else that belongs to eric, not Samm.
```
aaaa bbbb cccc oooo eric date5
aaaa bbbb cccc oooo eric date3
aaaa bbbb cccc oooo eric date1
```
Thanks for any help? | ```
select *
from tbl
where col5 in (select col5 from tbl where col6 = (select max(col6) from tbl))
``` | I'm not sure if I fully understand the question here... You want to first filter the query by "col5" and then sort it descending by the dates in "col6"?
In that case, the query would be
```
SELECT * FROM tbl WHERE col5 = 'eric' ORDER BY col6 DESC
```
Or, do you want to just pull the first row with the latest date (regardless of who it belongs to), then followed by all the rows belonging to eric? In that case, it will be (depending on RDMS, below is T-SQL for SQL Server)
```
SELECT TOP 1 * FROM tbl ORDER BY col6 DESC UNION SELECT * FROM tbl WHERE col5 = 'eric'
```
## Edit
I think I understand. You want to show the latest row, and then show all the other rows in the table which belong to *that user* (whose update is the latest).
Try this (again, meant for T-SQL):
```
SELECT * FROM tbl WHERE col5 = (SELECT TOP 1 col5 FROM tbl ORDER BY col6 DESC) ORDER BY col6 DESC
```
Or for MySQL:
```
SELECT * FROM tbl WHERE col5 = (SELECT col5 FROM tbl ORDER BY col6 DESC LIMIT 1) ORDER BY col6 DESC
``` | SQL Group Data Issue | [
"",
"sql",
"grouping",
""
] |
new to SQL (Searched my problem but couldn't find anything :()
I'm hoping this is something simple...
Basically I'm trying to query a simple table of real estate properties which have Price, Address etc as well as Property Type (Detached, Semi, Apartment).
I'm asked to return the Highest and Lowest priced property for every property type. Easy right? I thought so, and still managed to mess up!
So my current query is this:
```
SELECT Area,
Address,
Property_Code,
Property_Type,
Price,
Market_Date,
Sold
FROM tbl_Sale_Property
WHERE (
Price IN (
SELECT MIN(Price)
FROM tbl_Sale_Property
GROUP BY Property_Type
)
OR Price IN (
SELECT MAX(Price)
FROM tbl_Sale_Property
GROUP BY Property_Type
)
)
ORDER BY Property_Type;
```
Which from my (very limited) experience should return every min/max priced record for each property type. My thinking was that if it just checked whether the price was a MIN or MAX and sorted it after, this would work.
Unfortunately, for some reason, it returns more than two records for some of the property types:
Semi-Detached returns:
* £120,000.00
* £210,000.00
* £210,000.00
* £210,000.00
* £380,000.00
When it should return only the highest and lowest numbers. Any help would be greatly appreciated!
I apologise again if this has been answered previously or is super mind numbingly simple!
I'm using Access 2007-2010. | The problem is that you are comparing against the lowest/highest prices across all property types. So, property type 1 could match the lowest price of property type 2, and it will be in your output.
You want correlated subqueries:
```
SELECT Area, Address, Property_Code, Property_Type, Price, Market_Date, Sold
FROM tbl_Sale_Property sp
WHERE sp.Price = (SELECT MIN(Price) FROM tbl_Sale_Property sp2 where sp.Property_Type = sp2.Property_Type) or
sp.Price = (SELECT MAX(Price) FROM tbl_Sale_Property sp2 where sp.Property_Type = sp2.Property_Type)
ORDER BY Property_Type;
```
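As an illustrative check of the correlated version (run here against an in-memory SQLite database with made-up addresses and prices — the correlation logic carries over to Access):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_Sale_Property (Property_Type TEXT, Address TEXT, Price INTEGER);
INSERT INTO tbl_Sale_Property VALUES
  ('Semi-Detached', 'A', 120000), ('Semi-Detached', 'B', 210000),
  ('Semi-Detached', 'C', 380000),
  ('Apartment', 'D', 90000), ('Apartment', 'E', 150000);
""")

# Correlating each subquery on Property_Type makes min/max per-type, not global.
rows = conn.execute("""
    SELECT Property_Type, Address, Price
    FROM tbl_Sale_Property sp
    WHERE sp.Price = (SELECT MIN(Price) FROM tbl_Sale_Property sp2
                      WHERE sp.Property_Type = sp2.Property_Type)
       OR sp.Price = (SELECT MAX(Price) FROM tbl_Sale_Property sp2
                      WHERE sp.Property_Type = sp2.Property_Type)
    ORDER BY Property_Type, Price
""").fetchall()
print(rows)
# [('Apartment', 'D', 90000), ('Apartment', 'E', 150000),
#  ('Semi-Detached', 'A', 120000), ('Semi-Detached', 'C', 380000)]
```

Note the middle-priced Semi-Detached row (210000) is correctly excluded.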
You also have the simple issue that more than one record might have the minimum value. Is your problem that you need to limit the results to one out of several minimum/maximum prices? | If you just need a list of property types and then the lowest sale price and highest sales price for each then try the following:
```
SELECT Property_Type, MIN(Price) AS LowestPrice, MAX(Price) AS HighestPrice
FROM tbl_Sale_Property AS sp
GROUP BY Property_Type
``` | Nested SQL query using min/max is returning too many records (New to this) | [
"",
"sql",
"max",
"min",
""
] |
I have a MySQL query and was wondering which columns to index.
```
SELECT count(*) FROM foo WHERE c=5 GROUP BY d
```
Maybe (c,d) or just c?
I have another similar.
```
SELECT count(*) FROM foo WHERE d=6 GROUP BY b
```
Maybe (d,b) or just d?
Another one here :
```
SELECT a FROM foo WHERE b=5 GROUP BY c
```
Here (a,b,c)?
My point is just to make an index which will cover all three queries. Which would be the best one here, and on which columns? | It's impossible to have one index cover every use case.
MySQL can benefit from having an index on ORDER BY / GROUP BY columns; it saves it from doing a filesort or building temporary tables. My suggestions would be:
1) (c,d)
2) (d,b)
3) For the last one (a,b,c) would be unusable because it starts with a (column order matters). The proper index would be (b,c,a).
A bit of explanation:
MySQL does the following:
1. filter the results (it needs b)
2. fetch it in order if there is an index on the ORDER BY column, or sort it (it needs c) (GROUP BY works the same)
3. return the columns in the select part (it needs a)
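The effect of column order and covering indexes can be observed directly. Here is a small sketch using SQLite from Python as a stand-in (`EXPLAIN QUERY PLAN` playing the role of MySQL's `EXPLAIN`; the `foo` table and its rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (a INTEGER, b INTEGER, c INTEGER, d INTEGER);
INSERT INTO foo VALUES (1, 2, 5, 1), (2, 3, 5, 2), (3, 4, 6, 1), (4, 5, 5, 1);
CREATE INDEX idx_c_d ON foo (c, d);   -- leftmost column matches the WHERE clause
""")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM foo WHERE c = 5 GROUP BY d"
).fetchall()
print(plan)  # typically: SEARCH foo USING COVERING INDEX idx_c_d (c=?)

rows = conn.execute(
    "SELECT d, count(*) FROM foo WHERE c = 5 GROUP BY d"
).fetchall()
print(rows)  # [(1, 2), (2, 1)]
```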
Another thing which is very important is the selectivity of your index. If, for example, you have a boolean column that is 50% true and 50% false, MySQL won't use the index, because it's easier to scan the whole table than to traverse the index tree. | For these three queries, you need three different indexes, one on `a`, one on `b`, and one on `c`. These can be composite indexes, so they could be `a,b,c`, `b,c,a`, and `c,b,a`. But for each query, you need an index that has the one column in the `where` clause as the first column in the index.
MySQL uses the columns from the left -- the ordering of columns in an index matters.
The [documentation](https://dev.mysql.com/doc/refman/5.7/en/multiple-column-indexes.html) actually explains this pretty well. | Indexing columns in mysql | [
"",
"mysql",
"sql",
"database",
"indexing",
""
] |
I defined a function to always give me the date of the next Sunday. It works fine, here is the code:
```
CREATE FUNCTION nextSunday() RETURNS date AS $$
DECLARE
dia_semana INT := CAST(EXTRACT(DOW FROM CURRENT_DATE)as INT);
dia INT := 7 - dia_semana;
BEGIN
RETURN current_date + dia;
END;
$$ LANGUAGE plpgsql
```
I have another function to dump data into a file and I need to use `nextSunday()` function inside:
```
CREATE OR REPLACE FUNCTION popularTabelaPessoa() RETURNS VOID AS $$
BEGIN
COPY(SELECT pe.id, pe.fk_naturalidade, pe.fk_documentacao_pessoal, pe.nome,
pe.cpf, pe.data_nascimento, pe.sexo, pe.estado_civil, pe.nome_mae,
pe.data_alteracao, pe.usuario_banco_alteracao,
pe.usuario_aplicacao_alteracao
FROM fluxo_lt.banca ba
INNER JOIN corporativo.localidade lo
ON ba.fk_municipio = lo.id
INNER JOIN fluxo_lt.agendamento_candidato ac
ON ac.fk_banca = ba.id
INNER JOIN info_detran.processo as pr
ON ac.fk_processo = pr.id
INNER JOIN info_detran.candidato as ca
ON pr.fk_candidato = ca.id
INNER JOIN corporativo.pessoa as pe
ON ca.fk_pessoa = pe.id
WHERE ba.data = (SELECT nextSunday())
ORDER BY lo.nome, pe.nome)
TO '/tmp/dump.sql';
END;
$$ LANGUAGE plpgsql
```
But it is not working. The field `ba.data` is `date`, the same type as the return value of the `nextSunday()` function. The code executes without any errors, but the file is blank. If I hardcode a date it works just fine. I have already tried everything (casting, putting it into a variable, passing it as an argument to the function) but nothing has worked so far.
I'm using Postgres 9.3. | First of all, your function can be much simpler with [**`date_trunc()`**](https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC):
```
CREATE FUNCTION next_sunday()
RETURNS date
LANGUAGE sql STABLE PARALLEL SAFE AS
$func$
SELECT date_trunc('week', LOCALTIMESTAMP)::date + 6;
$func$
```
`PARALLEL SAFE` only for Postgres 9.6 or later.
If you have to consider time zones, see:
* [Ignoring time zones altogether in Rails and PostgreSQL](https://stackoverflow.com/questions/9571392/ignoring-time-zones-altogether-in-rails-and-postgresql/9576170#9576170)
If "today" is a Sunday, the above returns it as "next Sunday".
To skip ahead one week in this case:
```
CREATE FUNCTION next_sunday()
RETURNS date
LANGUAGE sql STABLE PARALLEL SAFE AS
$func$
SELECT date_trunc('week', LOCALTIMESTAMP + interval '1 day')::date + 6;
$func$;
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=30ea36dc2493cacc6aaad0692fdf0c29)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/d1040/1)
Or just use `date_trunc('week', LOCALTIMESTAMP)::date + 6` directly, instead of the function.
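For intuition only, the same "truncate to Monday, add 6 days" arithmetic can be sketched in plain Python (this is an illustration, not part of the Postgres function):

```python
from datetime import date, timedelta

def next_sunday(d: date, skip_if_sunday: bool = False) -> date:
    # date_trunc('week', ...) truncates to the ISO week's Monday.
    if skip_if_sunday:
        d = d + timedelta(days=1)   # mirrors LOCALTIMESTAMP + interval '1 day'
    monday = d - timedelta(days=d.weekday())
    return monday + timedelta(days=6)

print(next_sunday(date(2014, 2, 14)))                       # 2014-02-16 (Friday -> Sunday)
print(next_sunday(date(2014, 2, 16)))                       # 2014-02-16 (a Sunday returns itself)
print(next_sunday(date(2014, 2, 16), skip_if_sunday=True))  # 2014-02-23 (skips ahead a week)
```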
Next, simplify the call:
```
CREATE OR REPLACE FUNCTION popular_tabela_pessoa()
RETURNS VOID
LANGUAGE plpgsql AS
$func$
BEGIN
COPY (
SELECT pe.id, pe.fk_naturalidade, pe.fk_documentacao_pessoal, pe.nome
, pe.cpf, pe.data_nascimento, pe.sexo, pe.estado_civil, pe.nome_mae
, pe.data_alteracao, pe.usuario_banco_alteracao
, pe.usuario_aplicacao_alteracao
FROM fluxo_lt.banca ba
JOIN corporativo.localidade lo ON ba.fk_municipio = lo.id
JOIN fluxo_lt.agendamento_candidato ac ON ac.fk_banca = ba.id
JOIN info_detran.processo pr ON ac.fk_processo = pr.id
JOIN info_detran.candidato ca ON pr.fk_candidato = ca.id
JOIN corporativo.pessoa pe ON ca.fk_pessoa = pe.id
WHERE ba.data = next_sunday() -- NOT: (SELECT next_sunday())
-- WHERE ba.data = date_trunc('week', LOCALTIMESTAMP)::date + 6 -- direct alternative
ORDER BY lo.nome, pe.nome)
TO '/tmp/dump.sql';
END
$func$;
```
However, this *cannot* explain why your `COPY` fails. Have you made sure the query returns any rows? And have you tried a manual `COPY` without the function wrapper?
You need the [necessary privileges for `COPY TO`](https://www.postgresql.org/docs/current/sql-copy.html).
[`\copy` in psql](https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMANDS-COPY) may be an alternative. | In my case, I was calling directly function fun1 from another function like:
```
CREATE OR REPLACE FUNCTION
fun2() returns trigger
LANGUAGE plpgsql VOLATILE AS
$BODY$
BEGIN
raise notice 'trigger fun2 with id: %', new.id;
fun1(new.id);
return new;
END;
$BODY$;
```
Resolved the issue by calling the function with a `PERFORM` statement like:
> perform fun1(new.id); | Calling a function inside another function in PL/pgSQL | [
"",
"sql",
"postgresql",
"date",
"plpgsql",
""
] |
There are 3 tables:
```
Users table
------------
|uid|username|
------------
Values table
------------------
|vid|values|checked|
------------------
Relations
-----------
|cid|uid|vid|
-----------
```
Relations table contains user ids related to value ids. How to select value id from values table that is not related to given user id in relations table?
**EDIT:**
What I tried so far:
```
SELECT vid FROM relations where uid=user_id -- this gives me an array of value ids
SELECT vid FROM values where vid!=vid1 AND vid!=vid2 .....
```
**EDIT2:**
Basic solution can be found [here](https://stackoverflow.com/questions/12230338/mysql-where-not-in-name-array). But is there a more efficient way? If both the values table and the relations table are very large, the basic solution is not efficient. | I think you can execute a simple query like this (assuming that the data type of the user identifier is `int`):
```
DECLARE @givenUserID int --local variable where you store the given user identifier
SELECT vid
FROM Values
WHERE vid NOT IN (SELECT vid FROM Relations where uid = @givenUserID)
``` | Which dbms are you using? Does it support the minus clause? If yes you can do something like this
```
select vid from values
minus
select vid from relations where uid = @user_id
```
this should give the vid's which are not mapped to a given user id
Another way to do this is through a not-exists clause (handy if your dbms doesn't support the minus clause)
```
select v.vid from values v where not exists (select 1 from relations r where
r.vid = v.vid and r.user_id = @user_id)
```
I would caution against using the not in clause though. Its performance is questionable and fails if the inner query returns a null value, which though is not possible in your case, but you should make it a habit to never use the 'not in' clause with a sub-query. Only use it when you have a list of literal values e.g. '... vid not in (1, 2, 3, 4)'. Whenever you have to 'Minus' something from one table based on values in another table use the 'not exists' and never 'not in' | SQL query from 3 different tables | [
"",
"mysql",
"sql",
""
] |
I have a query, shown below. However when I run this query I get rows whose [date\_effective] is prior to 14th Feb 2014. Why? I have dates of 1/1/1980 showing up.
```
SELECT *
FROM D_CORPACTIONS_MSCI
WHERE [date_effective] >= '14 February 2014'
AND [ca_status] = 'EXPECTED'
OR [ca_status] = 'CONFIRMED'
OR [ca_status] = 'UNDETERMINED'
ORDER BY [date_effective]
``` | Your `and` and `or` are conflicting each other.
Therefore the matched rows aren't correct.
Try to encapsulate the `or` statements or, in this case, use an `in`:
```
select *
from d_corpactions_msci
where date_effective >= '14 february 2014'
and ca_status in ('EXPECTED', 'CONFIRMED', 'UNDETERMINED')
order
by date_effective
```
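To see why the original query matched old dates — `AND` binds tighter than `OR` — the precedence effect can be reproduced with SQLite from Python (made-up rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE d_corpactions_msci (date_effective TEXT, ca_status TEXT);
INSERT INTO d_corpactions_msci VALUES
  ('1980-01-01', 'CONFIRMED'),   -- old row that should be excluded
  ('2014-03-01', 'EXPECTED'),
  ('2014-02-10', 'EXPECTED');    -- too old, should be excluded
""")

# Parsed as: (date >= ... AND status = 'EXPECTED') OR status = 'CONFIRMED'
buggy = conn.execute("""
    SELECT * FROM d_corpactions_msci
    WHERE date_effective >= '2014-02-14'
      AND ca_status = 'EXPECTED' OR ca_status = 'CONFIRMED'
""").fetchall()

fixed = conn.execute("""
    SELECT * FROM d_corpactions_msci
    WHERE date_effective >= '2014-02-14'
      AND ca_status IN ('EXPECTED', 'CONFIRMED')
""").fetchall()

print(buggy)  # the 1980 CONFIRMED row sneaks through
print(fixed)  # only the 2014-03-01 row remains
```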
Also, don't rely on the date format on client or server by converting the date:
```
where date_effective >= convert(datetime, '20140214', 112)
``` | ```
SELECT *
FROM d_corpactions_msci
WHERE ( date_effective >= '14 February 2014' )
AND ( ca_status = 'EXPECTED'
OR ca_status = 'CONFIRMED'
OR ca_status = 'UNDETERMINED' )
ORDER BY date_effective
``` | Sql Server Date Query not working | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have the following query:
```
SELECT SUM (T0.[TotalSumSy]) AS 'Line Sterling',
T0.[WhsCode] AS 'Business Unit'
FROM [dbo].[INV1] T0
INNER JOIN [dbo].[OINV] T1
ON T1.[DocEntry] = T0.[DocEntry]
WHERE T1.[DocDate] > (CONVERT(DATETIME, '20121001', 112) )
GROUP BY T0.[WhsCode]
UNION ALL
SELECT SUM (T0.[TotalSumSy] * -1) AS 'Line Sterling',
T0.[WhsCode] AS 'Business Unit'
FROM [dbo].[RIN1] T0
INNER JOIN [dbo].[ORIN] T1
ON T1.[DocEntry] = T0.[DocEntry]
WHERE T1.[DocDate] > (CONVERT(DATETIME, '20121001', 112) )
GROUP BY T0.[WhsCode]
```
However this is returning 2 sets of groups. I understand why, but I can't figure out how to resolve.
Can anyone help? | There is another option for doing this query, which is to do the aggregation only once.
```
select sum("Line Sterling") as [Line Sterling], [Business Unit]
from ((SELECT T0.[TotalSumSy] AS [Line Sterling], T0.[WhsCode] AS [Business Unit], t1.DocDate
FROM [dbo].[INV1] T0 INNER JOIN
[dbo].[OINV] T1
ON T1.[DocEntry] = T0.[DocEntry]
) union all
(SELECT T0.[TotalSumSy] * -1, T0.[WhsCode], t1.DocDate
FROM [dbo].[RIN1] T0 INNER JOIN
[dbo].[ORIN] T1
ON T1.[DocEntry] = T0.[DocEntry]
)
) t
where [DocDate] > CONVERT(DATETIME, '20121001', 112)
group by [Business Unit];
```
Whether this performs better than pre-aggregating, doing the union, and aggregating again depends on many factors. However, this does put the logic for the aggregation and the filtering in one place. That means that it is easier to alter and maintain the query without making a mistake.
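The shape of the fix — aggregating once over the union instead of unioning two aggregates — can be checked with a toy example (SQLite via Python; the warehouse codes and amounts are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inv (WhsCode TEXT, TotalSumSy REAL);   -- invoices
CREATE TABLE rin (WhsCode TEXT, TotalSumSy REAL);   -- credit notes
INSERT INTO inv VALUES ('W1', 100.0), ('W1', 50.0), ('W2', 80.0);
INSERT INTO rin VALUES ('W1', 30.0);
""")

# Union of aggregates: up to two rows per warehouse, one from each arm.
two_sets = conn.execute("""
    SELECT WhsCode, sum(TotalSumSy) FROM inv GROUP BY WhsCode
    UNION ALL
    SELECT WhsCode, sum(TotalSumSy) * -1 FROM rin GROUP BY WhsCode
""").fetchall()

# Aggregate once over the combined rows: one row per warehouse.
one_set = conn.execute("""
    SELECT WhsCode, sum(amount)
    FROM (SELECT WhsCode, TotalSumSy AS amount FROM inv
          UNION ALL
          SELECT WhsCode, TotalSumSy * -1 FROM rin)
    GROUP BY WhsCode
""").fetchall()

print(two_sets)  # three rows: W1 appears twice
print(one_set)   # [('W1', 120.0), ('W2', 80.0)]
```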
Note I also changed the escape characters for the column names. Single quotes should only be used for string and date constants. Using them for identifiers (column aliases) can lead to difficult-to-detect errors in the query. | Try something like :
```
select sum([TotalSumSy]), WhsCode AS 'Business Unit' from
(
SELECT TotalSumSy, T0.[WhsCode]
FROM [dbo].[RIN1] T0 INNER JOIN [dbo].[ORIN] T1 ON T1.[DocEntry] = T0.[DocEntry]
WHERE T1.[DocDate] > (CONVERT(DATETIME, '20121001', 112) )
UNION ALL
SELECT
T0.[TotalSumSy]*-1,
T0.[WhsCode]
FROM [dbo].[INV1] T0 INNER JOIN [dbo].[OINV] T1 ON T1.[DocEntry] = T0.[DocEntry]
WHERE T1.[DocDate] > (CONVERT(DATETIME, '20121001', 112) )
) x
group by WhsCode
``` | SQL Server 2008r2 Union All Group By | [
"",
"sql",
"sql-server-2008",
""
] |
In MySQL query:
```
SELECT host
FROM information_schema.processlist
WHERE ID = connection_id( )
LIMIT 0 , 30
```
The result of ^ this query is: `localhost`.
```
SELECT @@hostname;
```
The result of ^ this query is: `localhost`.
But I need to get ipaddress like `192.168.1.2`.
**Question:** *How to get this result using mysql query?* | To get the IP address only without the port number.
```
Select SUBSTRING_INDEX(host,':',1) as 'ip'
From information_schema.processlist
WHERE ID=connection_id();
``` | The query
`select host from information_schema.processlist WHERE ID=connection_id();`
will give you the host name. You will get the IP address (like `192.168.1.2`) if name resolution is not enabled, which it usually is not. | Getting ip address using MySQL query | [
"",
"mysql",
"sql",
"ip-address",
""
] |
I am using MySQL.
I am trying to compare a date that is in a datetime field to today's date and return the number of days difference [i.e. today - column = no. of days]
Is that possible? How would I do it? | I think we need more information from your side, like the query you are using and what you tried so far, but maybe this can help you out:
```
SELECT
DATEDIFF('2014-02-20 00:00:00', NOW()); -- returns 7
```
In this case, you should add a `FROM table` clause and replace the literal date in this query with the `datetime` column. Something like this:
```
SELECT
DATEDIFF(datetimefield, NOW())
FROM
tablename
``` | You can try to use DATEDIFF function
```
SELECT DATEDIFF(NOW(), date_column)
```
It should return the number of days between now and your column. | Compare date to today's date and return number of days | [
"",
"mysql",
"sql",
"datetime",
""
] |
In SQL Server 2012, I have a 'cross-reference' table containing four columns. The combination of the four columns must be unique. My initial thought was to simply create a primary key containing all four columns, but some research has suggested that this might not be a good idea.
**Background to question...**
I am trying to implement a tagging service on a legacy web application. Some of the objects that need tagging use a uniqueidentifier as their primary key, whilst others use a simple integer id. I have approached this using a 'two-table' approach. One table contains the tags, whilst the other table provides a reference between the objects to be tagged and the tag table. This table I have name TagList...
```
CREATE TABLE TagList (
TagId nvarchar(40) NOT NULL,
ReferenceGuid uniqueidentifier NOT NULL,
ReferenceId int NOT NULL,
ObjectType nvarchar(40) NOT NULL
)
```
For example, to tag an object with a uniqueidentifier primary key with the word 'example', the TagList record would look like this:
```
TagList (
TagId 'example',
ReferenceGuid '1e93d578-321b-4f86-8b0f-32435d385bd7',
ReferenceId 0,
ObjectType 'Customer'
)
```
To tag an object with an integer primary key with the word 'example', the TagList record would look like this:
```
TagList (
TagId 'example',
ReferenceGuid '00000000-0000-0000-0000-000000000000',
ReferenceId 5639,
ObjectType 'Product'
)
```
In practice, either the TagId and the ReferenceGuid column must be unique or, if an int primary key object is being defined, the TagId, ReferenceId and ObjectType must be unique.
To simplify(?) things, making the combination of all four columns to be unique would also serve the same functional purpose.
Any advice would be appreciated. | Having a multi column primary key should do the trick
```
CREATE TABLE TagList (
TagId nvarchar(40) NOT NULL,
ReferenceGuid uniqueidentifier NOT NULL,
ReferenceId int NOT NULL,
ObjectType nvarchar(40) NOT NULL,
CONSTRAINT pk_TagList PRIMARY KEY (TagId,ReferenceGuid,ReferenceId,ObjectType)
)
``` | If you only require a unique constraint, and not a primary key, this can be used:
```
ALTER TABLE TagList
ADD CONSTRAINT UK_TagList_1 UNIQUE
(
TagId,
ReferenceGuid,
ReferenceId,
ObjectType
)
``` | How do I enforce uniqueness against four columns | [
"",
"sql",
"sql-server",
"tags",
"indexing",
""
] |
I am trying to update a field in a table using an SQL update query where there is a like statement referencing a value in another table. The syntax unfortunately is not working. Below is my code. In short, I am trying to put a '1' in the field 'Query07ParolaChiave' in the table 'tblSearchEngine01' when the value located in table 'tblsearchengine07' is present in the field 'tblMasterListOfEventsNotes' located in the table 'tblSearchEngine01'. I think my code is almost complete, but there is a syntax issue which I can't find.
```
st_sql = "UPDATE tblSearchEngine01, tblSearchEngine07 SET tblSearchEngine01.Query07ParolaChiaveSelect = '1' WHERE ((([tblSearchEngine01].[tblMasterListOfEventsNotes]) Like " * " & [tblsearchengine07].[ParolaChiave] & " * "))"
Application.DoCmd.RunSQL (st_sql)
``` | I suggest you 2 solutions :
This one uses the `EXISTS` function, and will check, for each row in `tblSearchEngine01`, whether there is a matching value in `tblsearchengine07`:
```
UPDATE
tblSearchEngine01
SET
tblSearchEngine01.Query07ParolaChiaveSelect = '1'
WHERE
EXISTS (SELECT 1
FROM tblsearchengine07
WHERE [tblSearchEngine01].[tblMasterListOfEventsNotes] Like '*' & [tblsearchengine07].[ParolaChiave] & '*')
```
This one is more performant because it uses `JOIN`
```
UPDATE
tblSearchEngine01
INNER JOIN tblsearchengine07
ON [tblSearchEngine01].[tblMasterListOfEventsNotes] Like '*' & [tblsearchengine07].[ParolaChiave] & '*'
SET
tblSearchEngine01.Query07ParolaChiaveSelect = '1'
```
I read something like: *in ADO/VBA, you have to use `%` instead of `*` as the wildcard.*
You can have more information on wildcard and `LIKE` comparator [here](http://www.techrepublic.com/article/10-tips-for-using-wildcard-characters-in-microsoft-access-criteria-expressions/)
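The EXISTS pattern is easy to try outside Access; here is a sketch using SQLite from Python (note that SQLite, like ANSI SQL, uses `%` as the LIKE wildcard where Access uses `*`; the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblSearchEngine01 (tblMasterListOfEventsNotes TEXT,
                                Query07ParolaChiaveSelect TEXT);
CREATE TABLE tblSearchEngine07 (ParolaChiave TEXT);
INSERT INTO tblSearchEngine01 VALUES ('meeting about budget', '0'),
                                     ('holiday party notes', '0');
INSERT INTO tblSearchEngine07 VALUES ('budget');
""")

# Flag rows whose notes contain any keyword from tblSearchEngine07.
conn.execute("""
    UPDATE tblSearchEngine01
    SET Query07ParolaChiaveSelect = '1'
    WHERE EXISTS (SELECT 1
                  FROM tblSearchEngine07
                  WHERE tblSearchEngine01.tblMasterListOfEventsNotes
                        LIKE '%' || tblSearchEngine07.ParolaChiave || '%')
""")

rows = conn.execute("SELECT * FROM tblSearchEngine01").fetchall()
print(rows)  # [('meeting about budget', '1'), ('holiday party notes', '0')]
```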
**UPDATE**
**Why the '1' after select in your first solution?**
`EXISTS (SELECT 1 ...)` is better for performance because it returns only the constant 1 instead of actual column values; in any case, EXISTS stops execution as soon as one matching element is found.
**'Performant' means more consuming in regards to space and memory?**
JOIN is more performant in terms of execution time: RDBMSs are far better at joining tables than at evaluating subqueries. In some rare cases, it's more interesting to use the first solution.
**Also, any initial thoughts as to why my original solution (coming straight from an Access Query which works) does not function?**
I cannot really know, but perhaps it's because of `" * "`: you are effectively saying `SPACE + * + SPACE + VALUE + SPACE + * + SPACE`. For example: `'John' LIKE ' John '`.
Maybe using `"*"` instead of `" * "` would solve it...
I have no other leads; I'm not an Access SQL developer, I usually work with SQL Server/Oracle/MySQL. Hope it helped. ;) | The LIKE statement goes into the WHERE clause.
If you want to use LIKE without caring about capital letters, then you can use it like this:
`COLUMN_NAME LIKE '%WhatYouLike%'` | How do I run an SQL update query using a like statement | [
"",
"sql",
"vba",
"ms-access",
"ms-access-2010",
""
] |
I have the following 2 tables:
```
CREATE TABLE `personal_info` (
`p_id` int(11) NOT NULL AUTO_INCREMENT,
`name` text NOT NULL,
`initials` text NOT NULL,
`surname` text NOT NULL,
`home_lang` int(11) NOT NULL,
PRIMARY KEY (`p_id`),
KEY `home_lang` (`home_lang`),
CONSTRAINT `personal_info_ibfk_1` FOREIGN KEY (`home_lang`) REFERENCES `language_list` (`ll_id`)
) ENGINE=InnoDB AUTO_INCREMENT=44 DEFAULT CHARSET=latin1
CREATE TABLE `language_list` (
`ll_id` int(11) NOT NULL AUTO_INCREMENT,
`name` text NOT NULL,
PRIMARY KEY (`ll_id`)
) ENGINE=InnoDB AUTO_INCREMENT=73 DEFAULT CHARSET=latin1
```
I am trying to remove a column from a table with the following:
```
ALTER TABLE `personal_info` DROP `home_lang`
```
But I cannot do it, since I receive this error:
```
#1025 - Error on rename of '.\MyDB\#sql-112c_82' to '.\MyDB\personal_info' (errno: 150)
```
I have tried to first remove the index and then remove the column with this:
```
ALTER TABLE personal_info DROP INDEX home_lang
```
But then I get the following error:
```
#1553 - Cannot drop index 'home_lang': needed in a foreign key constraint
```
So I tried to drop the foreign key:
```
ALTER TABLE personal_info DROP FOREIGN KEY home_lang
```
But received this error:
```
#1025 - Error on rename of '.\MyDB\personal_info' to '.\MyDB\#sql2-112c-8d' (errno: 152)
```
I have also tried to first set all the values to null:
```
update personal_info set home_lang = null
```
But then received this error:
```
#1452 - Cannot add or update a child row: a foreign key constraint fails (`MyDB`.`personal_info`, CONSTRAINT `personal_info_ibfk_1` FOREIGN KEY (`home_lang`) REFERENCES `language_list` (`ll_id`))
```
And now I am stuck. I have tried a few things but just cannot get the column removed. I am not allowed to alter the DB in any way other than removing the column. | Your `DROP FOREIGN KEY` syntax is using the wrong key name. It's trying to drop your "plain" index on the `home_lang` field. It's NOT the foreign key itself.
```
CONSTRAINT `personal_info_ibfk_1` FOREIGN KEY (`home_lang`) REFERENCES `language_list` (`ll_id`)
^^^^^^^^^^^^^^^^^^^^^--- THIS is the name of the foreign key
```
Try:
```
ALTER TABLE personal_info DROP FOREIGN KEY `personal_info_ibfk_1`
``` | Use the query given below to find the name of the foreign key:
```
SHOW CREATE TABLE forms_main;
```
Then, once you have the key name, execute the drop foreign key command:
```
alter TABLE `forms_main`
drop FOREIGN key `forms_main_ibfk_1`;
```
Then execute the drop column command
```
ALTER TABLE `forms_main` DROP `company_id`;
``` | Drop Column with foreign key in MySQL | [
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I'm trying to filter out my (users table) data with conditions derived from data in (users\_status table).
The **users** table is a table containing user ids and usernames
```
CREATE TABLE `users` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`username` varchar(25),
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
```
The **groups** table is a table containing group ids
```
CREATE TABLE `groups` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(25),
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
```
The **user\_status** table is a table containing a log of activity. The way it works is while a user is in a **group** that user can toggle "bookmarked" between "on" or "off".
```
CREATE TABLE `user_status` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`group_id` int(10) unsigned,
`user_id` int(10) unsigned,
`bookmarked` enum('on', 'off'),
`date` datetime,
PRIMARY KEY (`id`),
CONSTRAINT `group_id` FOREIGN KEY (`group_id`) REFERENCES `groups` (`id`) ON DELETE CASCADE,
CONSTRAINT `user_id` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB;
```
Now what I'm trying to do is retrieve all the users that either have no entries in **user\_status** or the last entry in user\_status is **"off"**
I have an SQL fiddle with an incomplete subquery where I tried doing this, but I can't get it working.
<http://sqlfiddle.com/#!2/2d5b4/2>
```
select us.id, us.group_id, g.name as GROUP_NAME, us.user_id, u.username as USER_USERNAME, us.bookmarked, us.date
from user_status us
inner join users u ON u.id = us.user_id
inner join groups g ON g.id = us.group_id
where 'on' != (
select bookmarked
from user_status
group by (group_id, user_id)
where group_id = us.group_id AND user_id = us.user_id
order by ID DESC
limit 1;
);
```
**EDIT 6:28 pm**
So given **user\_status**
```
select * from user_status order by group_id, user_id, date;
+----+----------+---------+------------+--------------------------------+
| ID | GROUP_ID | USER_ID | BOOKMARKED | DATE |
+----+----------+---------+------------+--------------------------------+
| 1 | 1 | 1 | on | January, 16 2014 00:00:00+0000 |
| 2 | 1 | 1 | off | January, 17 2014 00:00:00+0000 |
| 3 | 1 | 1 | on | January, 18 2014 00:00:00+0000 |
| 9 | 1 | 1 | on | January, 18 2014 00:00:00+0000 |
| 7 | 1 | 2 | on | January, 16 2014 00:00:00+0000 |
| 8 | 1 | 2 | off | January, 17 2014 00:00:00+0000 |
| 4 | 2 | 1 | on | January, 16 2013 00:00:00+0000 |
| 5 | 2 | 1 | off | January, 17 2013 00:00:00+0000 |
| 6 | 2 | 1 | on | January, 18 2013 00:00:00+0000 |
+----+----------+---------+------------+--------------------------------+
```
I would expect
group\_id (1) user\_id (1)
**not returned** because the last bookmarked was 'on'
group\_id (1) user\_id (2)
**returned** because the last bookmarked was 'off'
group\_id (2) user\_id (1)
**not returned** because the last bookmarked was 'on'
user\_id (3)
**returned** because not present in user\_status
note: user\_id 3 was not added in the original sql fiddle example | The following assumes that a user can have different bookmark status for different groups at the same time.
It finds the most recent date for each (user\_id, group\_id) combination, then finds the record that corresponds to that.
Either the bookmark needs to be `'off'` or `NULL` as dictated by the `WHERE` clause.
It is possible for a user with no records in `user_group` to be returned due to the use of `LEFT JOIN`.
```
SELECT
u.*,
g.*,
us.*
FROM
users AS u
LEFT JOIN
(
SELECT user_id, group_id, MAX(date) AS date
FROM user_status
GROUP BY user_id, group_id
)
AS us_newest
ON us_newest.user_id = u.id
LEFT JOIN
user_status AS us
ON us.user_id = us_newest.user_id
AND us.group_id = us_newest.group_id
AND us.date = us_newest.date
LEFT JOIN
groups AS g
ON g.id = us.group_id
WHERE
us.bookmarked = 'off'
OR us.bookmarked IS NULL
;
```
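For anyone wanting to poke at this locally, the same query shape runs under SQLite (close enough to MySQL for this pattern); the sketch below rebuilds the question's sample data and checks which users come back:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT);
CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE user_status (
    id INTEGER PRIMARY KEY, group_id INT, user_id INT,
    bookmarked TEXT, date TEXT);
INSERT INTO users VALUES (1, 'u1'), (2, 'u2'), (3, 'u3');
INSERT INTO user_status VALUES
    (1, 1, 1, 'on',  '2014-01-16'), (2, 1, 1, 'off', '2014-01-17'),
    (3, 1, 1, 'on',  '2014-01-18'), (9, 1, 1, 'on',  '2014-01-18'),
    (7, 1, 2, 'on',  '2014-01-16'), (8, 1, 2, 'off', '2014-01-17'),
    (4, 2, 1, 'on',  '2013-01-16'), (5, 2, 1, 'off', '2013-01-17'),
    (6, 2, 1, 'on',  '2013-01-18');
""")

# Same structure as the answer's query: newest row per (user, group),
# then keep rows whose last bookmark is 'off' or missing entirely.
rows = con.execute("""
SELECT u.id, us.group_id, us.bookmarked
FROM users AS u
LEFT JOIN (SELECT user_id, group_id, MAX(date) AS date
           FROM user_status
           GROUP BY user_id, group_id) AS us_newest
       ON us_newest.user_id = u.id
LEFT JOIN user_status AS us
       ON us.user_id = us_newest.user_id
      AND us.group_id = us_newest.group_id
      AND us.date = us_newest.date
WHERE us.bookmarked = 'off' OR us.bookmarked IS NULL
ORDER BY u.id
""").fetchall()
print(rows)   # [(2, 1, 'off'), (3, None, None)]
```

Only user 2 (last entry 'off') and user 3 (no entries at all) survive the filter, matching the expected output from the edit.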
<http://sqlfiddle.com/#!2/2d5b4/26> | 1- You are using `GROUP BY` before the `WHERE` clause.
2- You are grouping by two columns in the subquery when you should group by one column.
Try this working query:
```
select us.id, us.group_id, g.name as GROUP_NAME, us.user_id, u.username as USER_USERNAME, us.bookmarked, us.date
from user_status us
inner join users u ON u.id = us.user_id
inner join groups g ON g.id = us.group_id
where 'on' != (
select bookmarked
from user_status
where group_id = us.group_id AND user_id = us.user_id
group by (group_id) -- you could group by user_id here instead
order by ID DESC
limit 1
);
```
[**fiddle demo**](http://sqlfiddle.com/#!2/2d5b4/11) | group by and count within a subquery | [
"",
"mysql",
"sql",
"subquery",
""
] |
This is my query which I want to insert the value which should be selected from another table:
```
insert into payment_details_kohin(installment_no)
values(
select count(installment_amount)+2
from kohin_plan.payment_details_insert
where customer_id='KBP100058'
)
```
…but it gives me an error:
> Msg 515, Level 16, State 2, Line 1 Cannot insert the value NULL into
> column 'customer\_id', table
> 'kohinoor\_rdfd.kohin\_plan.payment\_details\_kohin'; column does not
> allow nulls. INSERT fails. The statement has been terminated.
When I try the following query:
```
insert into payment_details_kohin(installment_no)
values(
select count(installment_amount)+2
from kohin_plan.payment_details_insert
where customer_id='KBP100058'
)
```
…it gives me the following error
```
Msg 156, Level 15, State 1, Line 2
Incorrect syntax near the keyword 'select'.
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near ')'.
``` | Your problem is that you have a non-NULL customer id. You need to insert it into the table as well:
```
insert into payment_details_kohin(customer_id, installment_no)
select customer_id, count(installment_amount)+2
from kohin_plan.payment_details_insert
where customer_id='KBP100058';
```
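To see the `INSERT ... SELECT` shape in action outside SQL Server, here is a quick SQLite sketch with made-up sample data (column types simplified; table names mirror the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payment_details_insert (customer_id TEXT, installment_amount REAL);
CREATE TABLE payment_details_kohin (customer_id TEXT NOT NULL, installment_no INT);
INSERT INTO payment_details_insert VALUES
    ('KBP100058', 100), ('KBP100058', 100), ('OTHER', 50);
""")

# INSERT ... SELECT takes no VALUES keyword, and supplying customer_id
# keeps the NOT NULL constraint satisfied.
con.execute("""
INSERT INTO payment_details_kohin (customer_id, installment_no)
SELECT customer_id, COUNT(installment_amount) + 2
FROM payment_details_insert
WHERE customer_id = 'KBP100058'
""")
result = con.execute("SELECT * FROM payment_details_kohin").fetchall()
print(result)   # [('KBP100058', 4)]
```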
However, when I see such `inserts`, sometimes what is really wanted is an update:
```
update payment_details_kohin
set installment_no = (select count(installment_amount) + 2
from kohin_plan.payment_details_insert
where payment_details_kohin.customer_id = payment_details_insert.customer_id
)
where customer_id = 'KBP100058';
``` | You don't need the `VALUES` clause here; just do as follows:
```
INSERT INTO payment_details_kohin(installment_no)
SELECT ISNULL(COUNT(installment_amount), 0) + 2
FROM kohin_plan.payment_details_insert
WHERE customer_id = 'KBP100058'
``` | Insert into and select combination is not working | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table which contains different messages, each with a messageType field. I need to fill a ASP.NET list control of the last loaded message for each messageType. My sql statement to return everything is:
```
SELECT MessageTypes.MessageType MessageType
,Messages.MessageDate
,Messages.ValueDate
,Messages.MessageReference
,Messages.Beneficiary
,Messages.StatusId
,MessageStatus.STATUS
,BICProfile.BIC
FROM Messages
INNER JOIN MessageStatus
ON Messages.StatusId = MessageStatus.Id
INNER JOIN MessageTypes
ON Messages.MessageTypeId = MessageTypes.MessageTypeId
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE (BICProfile.BIC = 'someValue')
AND Messages.StatusId IN (4, 5, 6)
```
So I need to pull back the last message for each message type.
***EDIT***
Some clarification on what the statusId is: it is used in conjunction with a user's role to determine which messages they can see. So, I need to return the last loaded message for a given message type; effectively, I'm only looking to return 1 message for each message type. The messageId is unique, whereas the message date could be the same. | You can use the [ROW\_NUMBER()](http://technet.microsoft.com/en-us/library/ms186734.aspx) Function to assign each of your messages a rank by Message date (starting at 1 again for each message type), then just limit the results to the top ranked message:
```
WITH AllMessages AS
( SELECT MessageTypes.MessageType,
Messages.MessageDate,
Messages.ValueDate,
Messages.MessageReference,
Messages.Beneficiary,
Messages.StatusId,
MessageStatus.Status,
BICProfile.BIC,
RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId
ORDER BY Messages.MessageDate DESC)
FROM Messages
INNER JOIN MessageStatus
ON Messages.StatusId = MessageStatus.Id
INNER JOIN MessageTypes
ON Messages.MessageTypeId = MessageTypes.MessageTypeId
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
)
SELECT MessageType,
MessageDate,
ValueDate,
MessageReference,
Beneficiary,
StatusId,
Status,
BIC
FROM AllMessages
WHERE RowNumber = 1;
```
If you can't use `ROW_NUMBER` then you can use a subquery to get the latest message date per type:
```
SELECT Messages.MessageTypeID, MessageDate = MAX(Messages.MessageDate)
FROM Messages
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
GROUP BY Messages.MessageTypeID
```
Then inner join the results of this back to your main query to filter the results:
```
SELECT MessageTypes.MessageType,
Messages.MessageDate,
Messages.ValueDate,
Messages.MessageReference,
Messages.Beneficiary,
Messages.StatusId,
MessageStatus.Status,
BICProfile.BIC
FROM Messages
INNER JOIN MessageStatus
ON Messages.StatusId = MessageStatus.Id
INNER JOIN MessageTypes
ON Messages.MessageTypeId = MessageTypes.MessageTypeId
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
INNER JOIN
( SELECT Messages.MessageTypeID,
MessageDate = MAX(Messages.MessageDate)
FROM Messages
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
GROUP BY Messages.MessageTypeID
) AS MaxMessage
ON MaxMessage.MessageTypeID = Messages.MessageTypeID
AND MaxMessage.MessageDate = Messages.MessageDate
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6);
```
*N.B. This second method will return multiple rows per message type if the latest message date is common among more than one message. This behaviour can be replicated in the first query by replacing `ROW_NUMBER` with `RANK`.*
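To see `ROW_NUMBER` picking one row per group, including a tie-break on the id column when two rows share a date, here is a minimal sketch against SQLite 3.25+ (simplified stand-in tables, not the real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")   # window functions need SQLite >= 3.25
con.executescript("""
CREATE TABLE messages (id INTEGER PRIMARY KEY, type_id INT, msg_date TEXT);
INSERT INTO messages VALUES
    (1, 1, '2014-01-01'), (2, 1, '2014-01-05'),
    (3, 2, '2014-01-03'), (4, 2, '2014-01-03');   -- ids 3 and 4 tie on date
""")

rows = con.execute("""
SELECT id, type_id FROM (
    SELECT id, type_id,
           ROW_NUMBER() OVER (PARTITION BY type_id
                              ORDER BY msg_date DESC, id DESC) AS rn
    FROM messages)
WHERE rn = 1
ORDER BY type_id
""").fetchall()
print(rows)   # [(2, 1), (4, 2)] -- latest per type; highest id wins the tie
```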
---
**EDIT**
If you will have multiple messages with the same date and only want to return one of them you need to expand the ordering within the row\_number function, i.e. if you wanted to pick the message with the maximum id when there were ties you could make it:
```
RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId
ORDER BY Messages.MessageDate DESC,
Messages.MessageID DESC)
```
So the full query would be:
```
WITH AllMessages AS
( SELECT MessageTypes.MessageType,
Messages.MessageDate,
Messages.ValueDate,
Messages.MessageReference,
Messages.Beneficiary,
Messages.StatusId,
MessageStatus.Status,
BICProfile.BIC,
RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId
ORDER BY Messages.MessageDate DESC,
Messages.MessageID DESC)
FROM Messages
INNER JOIN MessageStatus
ON Messages.StatusId = MessageStatus.Id
INNER JOIN MessageTypes
ON Messages.MessageTypeId = MessageTypes.MessageTypeId
INNER JOIN BICProfile
ON Messages.SenderId = dbo.BICProfile.BicId
WHERE BICProfile.BIC = 'someValue'
AND Messages.StatusId IN (4, 5, 6)
)
SELECT MessageType,
MessageDate,
ValueDate,
MessageReference,
Beneficiary,
StatusId,
Status,
BIC
FROM AllMessages
WHERE RowNumber = 1;
``` | Try this:
```
SELECT MT.MessageType
,X.MessageDate
,X.ValueDate
,X.MessageReference
,X.Beneficiary
,X.StatusId
,X.STATUS
,X.BIC
FROM MessageTypes MT
OUTER APPLY -- or CROSS APPLY
(
SELECT TOP 1
M.MessageDate
,M.ValueDate
,M.MessageReference
,M.Beneficiary
,M.StatusId
,MS.STATUS
,B.BIC
FROM Messages M
INNER JOIN MessageStatus MS
ON M.StatusId = MS.Id
INNER JOIN BICProfile B
ON M.SenderId = B.BicId
WHERE (B.BIC = 'someValue')
AND M.StatusId IN (4, 5, 6)
AND M.MessageTypeId = MT.MessageTypeId
ORDER BY M.MessageDate DESC
) X
``` | Get Last message loaded based on message type | [
"",
"sql",
"t-sql",
""
] |
I'm creating a database-backed calendar in RoR, and the first step of this is to return all events that fall within a given year. My events table schema is as follows:
```
id:integer description:string startdate:datetime enddate:datetime
```
In English, I think this is the logic I need to execute to return all events that fall within a given year:
> (when **event start** is less than **year start** AND **event end** is greater than **year start**) OR
>
> (when **event start** is greater than **year start** AND **event end** is less than **year end**) OR
>
> (when **event start** is less than **year end** AND **event end** is greater than **year start**)
The first line gets all the events that end in the given year, the second line gets all the events that are within the year and the last line gets all events that start within the year.
**Could this logic be more efficient?**
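As a sanity check, a brute-force comparison suggests the three conditions collapse to the single standard interval-overlap test (assuming strict comparisons and events whose start precedes their end):

```python
from itertools import product

def original(s, e, ys, ye):
    # the three OR'd conditions from above
    return ((s < ys and e > ys)
            or (s > ys and e < ye)
            or (s < ye and e > ys))

def simplified(s, e, ys, ye):
    # the usual "intervals overlap" test
    return s < ye and e > ys

ys, ye = 10, 20
mismatches = [(s, e) for s, e in product(range(30), repeat=2)
              if s < e and original(s, e, ys, ye) != simplified(s, e, ys, ye)]
print(mismatches)   # [] -- the two formulations agree on every case tried
```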
I've read the ActiveRecord manual and created a fair few dynamic sites, but I've only needed to go `Person.find(34)` or `Person.where(name: 'Jimmy')` and really don't know how to execute the above logic with ActiveRecord's helper methods! | write a scope in your model
```
class Event < ActiveRecord::Base
scope :event_in_year, lambda { |year| where('(events.startdate < ? AND events.enddate > ? ) OR (events.startdate > ? AND events.enddate < ? ) OR (events.startdate < ? AND events.enddate > ? )', year.beginning_of_year, year.beginning_of_year, year.beginning_of_year, year.end_of_year, year.end_of_year, year.beginning_of_year )}
end
```
and then use this scope to find events in any year by passing that year to it. | With `ActiveRecord` `where` method, you can pass a `string` query to suit your needs. So something like:
```
your_string = "events.start_date > ...."
Model.where(your_string)
``` | Execute OR and AND conditionals with ActiveRecord helpers | [
"",
"sql",
"ruby-on-rails",
""
] |
I have inserted records into a SQL Server database table. The table had a primary key defined and the auto increment identity seed is set to 'Yes'. This is done primarily because in SQL Azure, each table has to have a primary key and identity defined.
But since I have to delete some records from the table, the identity seed will be disturbed and the auto-generated column (which increments by 1) will be left with gaps in its sequence.
**How can I reset the identity column after I deleted the records so that the column has sequence in ascending numerical order?**
The identity column is not used as a foreign key anywhere in database. | The [`DBCC CHECKIDENT`](http://technet.microsoft.com/en-us/library/ms176057.aspx) management command is used to reset identity counter. The command syntax is:
```
DBCC CHECKIDENT (table_name [, { NORESEED | { RESEED [, new_reseed_value ]}}])
[ WITH NO_INFOMSGS ]
```
Example:
```
DBCC CHECKIDENT ('[TestTable]', RESEED, 0);
GO
```
It was not supported in previous versions of the Azure SQL Database but is supported now.
---
Thanks to [Solomon Rutzky](https://stackoverflow.com/questions/21824478/reset-identity-seed-after-deleting-records-in-sql-server/21824729?noredirect=1#comment97903325_21824729) the [docs](https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkident-transact-sql) for the command are now fixed. | ```
DBCC CHECKIDENT ('TestTable', RESEED, 0)
GO
```
Where 0 is the `identity` start value | Reset identity seed after deleting records in SQL Server | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"azure-sql-database",
""
] |
I have this table:
```
ID Name 01 02 03 04 05 06 07
0000068 Name1 V VX
0000069 Name2 V VX VX V V
0000070 Name3 V V V V V V
```
This is a table for absence checking, and I want to count the number of filled cells in each row from column 01 to 07; so I expect the first row to give a result of 2, the second row 5, and the third row 6. If possible I want that result to be added as a new column. Is there a way to do this? | If the number of columns is static, you can use a simple `CASE` expression.
```
SELECT *,
CASE WHEN [01] = '' OR [01] IS NULL THEN 0 ELSE 1 END +
CASE WHEN [02] = '' OR [02] IS NULL THEN 0 ELSE 1 END +
CASE WHEN [03] = '' OR [03] IS NULL THEN 0 ELSE 1 END +
CASE WHEN [04] = '' OR [04] IS NULL THEN 0 ELSE 1 END +
CASE WHEN [05] = '' OR [05] IS NULL THEN 0 ELSE 1 END +
CASE WHEN [06] = '' OR [06] IS NULL THEN 0 ELSE 1 END +
CASE WHEN [07] = '' OR [07] IS NULL THEN 0 ELSE 1 END [cnt]
FROM Table1;
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!3/0a3eee/1). | If it's always `V` or `VX` you could replace the `VX` with `V`, concatenate
them together and take the `LEN()`:
```
SELECT *, LEN(REPLACE([01]+[02]+[03]+[04]+[05]+[06]+[07],'X','')) AS New_Ct
FROM YourTable
```
If the blanks are actually `NULL` you'd have to wrap them in `ISNULL()`:
```
SELECT *, LEN(REPLACE(ISNULL([01],'')+ISNULL([02],'')+ISNULL([03],'')+ISNULL([04],'')+ISNULL([05],'')+ISNULL([06],'')+ISNULL([07],''),'X','')) AS New_Ct
FROM Table1
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/b4cb5/2/0) | How to count amount of data for each row | [
"",
"sql",
"sql-server-2008",
""
] |
This question relates to the answer given in [this post](https://stackoverflow.com/a/21804334/1688265).
I want to convert the output from a tree analysis in Weka into a hierarchical table of decision splits and leaf-values (as per the post linked above). I can parse the Weka output to extract the `fac`, `split` and `val` values, but I'm struggling to generate the correct `hierarchyid` values.
First thing I note is that the tree description lines don't map one-to-one with the records in `decisions`. There are 20 lines in the Weka output and 21 records in the `decisions` table. This is because there are 11 leaf-nodes and 10 splits: each record in `decisions` is either a leaf-node or a split.
The Weka output lines correspond to either zero, one or two records in `decisions`. For example Ruleset #8 corresponds to no records; ruleset #1 corresponds to one record; ruleset #4 corresponds to two records.
I have the following example output
```
# Ruleset
1 fac_a < 64
2 | fac_d < 71.5
3 | | fac_a < 49.5
4 | | | fac_d < 23.5 : 19.44 (13/43.71) [13/77.47]
5 | | | fac_d >= 23.5 : 24.25 (32/23.65) [16/49.15]
6 | | fac_a >= 49.5 : 30.8 (10/17.68) [5/22.44]
7 | fac_d >= 71.5 : 33.6 (25/53.05) [15/47.35]
8 fac_a >= 64
9 | fac_d < 83.5
10 | | fac_a < 91
11 | | | fac_e < 93.5
12 | | | | fac_d < 45 : 31.9 (16/23.25) [3/64.14]
13 | | | | fac_d >= 45
14 | | | | | fac_e < 21.5 : 44.1 (5/16.58) [2/21.39]
15 | | | | | fac_e >= 21.5
16 | | | | | | fac_a < 77.5 : 33.45 (4/2.89) [1/0.03]
17 | | | | | | fac_a >= 77.5 : 39.46 (7/10.21) [1/11.69]
18 | | | fac_e >= 93.5 : 45.97 (2/8.03) [1/107.71]
19 | | fac_a >= 91 : 42.26 (9/9.57) [4/69.03]
20 | fac_d >= 83.5 : 47.1 (9/30.24) [6/40.15]
```
I can determine if a Weka output line generates a `split` record in `decisions` by parsing for the substring `<`. I can determine if a line generates a `val` record in `decisions` by parsing for the `:`. However, I'm struggling to generate the appropriate `hierarchyid` for both types of record in the `decisions` table.
The desired code to autogenerate for this example would be:
```
insert decisions values
(cast('/0/' as hierarchyid), 'a', 64,null),
(cast('/0/0/' as hierarchyid), 'd', 71.5,null),
(cast('/0/0/0/' as hierarchyid), 'a', 49.5,null),
(cast('/0/0/0/0/' as hierarchyid), 'd', 23.5,null),
(cast('/0/0/0/0/0/' as hierarchyid), NULL, NULL,19.44),
(cast('/0/0/0/0/1/' as hierarchyid), NULL, NULL, 24.25),
(cast('/0/0/0/1/' as hierarchyid), NULL, NULL, 30.8),
(cast('/0/0/1/' as hierarchyid), NULL, NULL, 33.6),
(cast('/0/1/' as hierarchyid), 'd', 83.5,null),
(cast('/0/1/0/' as hierarchyid), 'a', 91,null),
(cast('/0/1/1/' as hierarchyid), NULL, NULL, 47.1),
(cast('/0/1/0/0/' as hierarchyid), 'e', 93.5,null),
(cast('/0/1/0/0/0/' as hierarchyid), 'd', 45,null),
(cast('/0/1/0/0/0/0/' as hierarchyid), null,null,31.9),
(cast('/0/1/0/0/0/1/' as hierarchyid), 'e', 21.5,null),
(cast('/0/1/0/0/0/1/0/' as hierarchyid), null,null,44.1),
(cast('/0/1/0/0/0/1/1/' as hierarchyid), 'a', 77.5,null),
(cast('/0/1/0/0/0/1/1/0/' as hierarchyid), NULL,NULL,33.45),
(cast('/0/1/0/0/0/1/1/1/' as hierarchyid), NULL,NULL,39.46),
(cast('/0/1/0/0/1/' as hierarchyid), NULL,NULL,45.97),
(cast('/0/1/0/1/' as hierarchyid), NULL,NULL, 42.26);
go
```
What algorithm can I apply to generate the strings such as `/0/1/0/0/0/1/1/0/` that I need to attach to each `split` or `val` record in the `decisions` table? | As you noted, each of your Weka output lines corresponds to 0, 1, or 2 INSERT statements. I'm restating some of what you said in case it helps you or someone else reading.
**Summary**
Output lines with < and without : are pure branch nodes (IFs) and correspond to 1 `INSERT` with null for the column [val].
Output lines with < and : are both branch and assignment nodes, so they correspond to 2 `INSERT`s. One with null [val], and one with the hierarchyid extended by `0/` and with non-null [val].
Output lines with >= and without : are ELSE nodes in your tree. The >= comparison information is redundant in your source and those lines require no INSERT statement.
In this example, no `INSERT` statement is needed for the >= branching (source lines 8, 13, 15), because the >= condition is necessarily true at that point in the decision tree. Those lines of your output are like ELSE statements, where you've redundantly stated what must be true about the factor value at that point. (The decisions could be made correctly even without the ">= ##.#" information from the tree in those lines.)
**Algorithm outline**
Go through your Weka output in order.
1. If the line you're at is indented from the previous one, `INSERT` once (append '0/' to the hierarchyid) for the decision (put NULL in [val]),
2. If the Weka line has also got `:` in it, `INSERT` another row in the table (appending a second `0/`) for the assignment
3. If the line you're at is not indented from the previous one, skip it if it has no `:` in it
4. If it has `:` and is an assignment, find its "sibling" in the decision tree (the most recent row above it at the same indentation level). The sibling's hierarchyid will end in '0/', because it's a < comparison. Change the `0/` to `1/` and `INSERT` with a non-null [val].
Hope that helps and can be done practically from what you have.
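One compact way to realize this outline is to keep a stack of 0/1 branch choices, one entry per nesting depth. Here is a Python sketch of that idea; it assumes the `fac_x`, `<`/`>=` and `: value` line shapes shown in the question:

```python
import re

def weka_rows(lines):
    """Turn Weka tree lines into (path, fac, split, val) rows.

    branch[i] records which side of the depth-i split the current
    line is on: 0 for the '<' branch, 1 for the '>=' branch.
    """
    rows, branch = [], []

    def path(parts):
        return '/' + '/'.join(['0'] + [str(b) for b in parts]) + '/'

    for line in lines:
        depth = line.count('|')
        comp = re.search(r'fac_(\w+)\s*(<|>=)\s*([\d.]+)', line)
        leaf = re.search(r':\s*([\d.]+)', line)
        branch = branch[:depth]                     # unwind back to this depth
        branch.append(0 if comp.group(2) == '<' else 1)
        if comp.group(2) == '<':                    # '<' line: emit a split row
            rows.append((path(branch[:depth]), comp.group(1),
                         float(comp.group(3)), None))
        if leaf:                                    # ': value' suffix: emit a leaf row
            rows.append((path(branch), None, None, float(leaf.group(1))))
    return rows

demo = ["fac_a < 64", "|   fac_b < 2 : 7.5", "|   fac_b >= 2 : 9.0", "fac_a >= 64 : 1.0"]
print(weka_rows(demo))
# [('/0/', 'a', 64.0, None), ('/0/0/', 'b', 2.0, None),
#  ('/0/0/0/', None, None, 7.5), ('/0/0/1/', None, None, 9.0),
#  ('/0/1/', None, None, 1.0)]
```

Run over the 20 ruleset lines from the question, this yields 21 rows whose paths match the desired `insert decisions values` list.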
Here's another set of INSERT statements that reference the line of your Weka output.
```
create table decisions (
did hierarchyid primary key,
fac char,
split decimal(10,4),
val decimal(10,4),
sourceline int
)
insert decisions values
(cast('/0/' as hierarchyid), 'a', 64,null,1),
(cast('/0/0/' as hierarchyid), 'd', 71.5,null,2),
(cast('/0/0/0/' as hierarchyid), 'a', 49.5,null,3),
(cast('/0/0/0/0/' as hierarchyid), 'd', 23.5,null,4),
(cast('/0/0/0/0/0/' as hierarchyid), NULL, NULL,19.44,4),
(cast('/0/0/0/0/1/' as hierarchyid), NULL, NULL, 24.25,5),
(cast('/0/0/0/1/' as hierarchyid), NULL, NULL, 30.8,6),
(cast('/0/0/1/' as hierarchyid), NULL, NULL, 33.6,7),
(cast('/0/1/' as hierarchyid), 'd', 83.5,null,9),
(cast('/0/1/0/' as hierarchyid), 'a', 91,null,10),
(cast('/0/1/1/' as hierarchyid), NULL, NULL, 47.1,20),
(cast('/0/1/0/0/' as hierarchyid), 'e', 93.5,null,11),
(cast('/0/1/0/0/0/' as hierarchyid), 'd', 45,null,12),
(cast('/0/1/0/0/0/0/' as hierarchyid), null,null,31.9,12),
(cast('/0/1/0/0/0/1/' as hierarchyid), 'e', 21.5,null,14),
(cast('/0/1/0/0/0/1/0/' as hierarchyid), null,null,44.1,14),
(cast('/0/1/0/0/0/1/1/' as hierarchyid), 'a', 77.5,null,16),
(cast('/0/1/0/0/0/1/1/0/' as hierarchyid), NULL,NULL,33.45,16),
(cast('/0/1/0/0/0/1/1/1/' as hierarchyid), NULL,NULL,39.46,17),
(cast('/0/1/0/0/1/' as hierarchyid), NULL,NULL,45.97,18),
(cast('/0/1/0/1/' as hierarchyid), NULL,NULL, 42.26,19);
``` | Here's SQL code that may work to turn your Weka output into the rows for the [decisions] table.
Obviously, SQL isn't the natural language to use, but it's what I had open and handy near the rest of the SQL for this question. Ultimately, the key idea is to implement a stack to keep track of the hierarchy. This is terribly kludgy, so I'd examine and test it well before using the idea in whatever language you use for your data-munging script. The overall idea isn't as awful as this looks. The worst of the code is string manipulation; that can be slicked up a great deal if you use a language with regular expression support.
I also junked the hierarchyid type, following Itzik's improvements (noted in the other thread).
Hope this helps.
You'll note that I make no use of the indentation in the Weka output. Instead, I'm making relatively strong assumptions about the nature of the rules and their order. (Every new nested comparison uses the < operator, for example, and a >= with the same value appears later. I also make assumptions about exact numbers of spaces and names like fac\_x, some of which the use of regular expressions will obviate.)
```
create table ruleset (
id int primary key,
therule varchar(200)
);
insert into ruleset values
(1,'fac_a < 64'),
(2,'| fac_d < 71.5'),
(3,'| | fac_a < 49.5'),
(4,'| | | fac_d < 23.5 : 19.44 (13/43.71) [13/77.47]'),
(5,'| | | fac_d >= 23.5 : 24.25 (32/23.65) [16/49.15]'),
(6,'| | fac_a >= 49.5 : 30.8 (10/17.68) [5/22.44]'),
(7,'| fac_d >= 71.5 : 33.6 (25/53.05) [15/47.35]'),
(8,'fac_a >= 64'),
(9,'| fac_d < 83.5'),
(10,'| | fac_a < 91'),
(11,'| | | fac_e < 93.5'),
(12,'| | | | fac_d < 45 : 31.9 (16/23.25) [3/64.14]'),
(13,'| | | | fac_d >= 45'),
(14,'| | | | | fac_e < 21.5 : 44.1 (5/16.58) [2/21.39]'),
(15,'| | | | | fac_e >= 21.5'),
(16,'| | | | | | fac_a < 77.5 : 33.45 (4/2.89) [1/0.03]'),
(17,'| | | | | | fac_a >= 77.5 : 39.46 (7/10.21) [1/11.69]'),
(18,'| | | fac_e >= 93.5 : 45.97 (2/8.03) [1/107.71]'),
(19,'| | fac_a >= 91 : 42.26 (9/9.57) [4/69.03]'),
(20,'| fac_d >= 83.5 : 47.1 (9/30.24) [6/40.15]')
go
declare @ruleid int = 0;
declare @rulevar char;
declare @rulecomp decimal(10,4);
declare @ruleassign varchar(200);
declare @last int = (select max(id) from ruleset);
declare @rule varchar(200);
declare @resultindentlevel int = 0;
declare @stack table (
id int identity(1,1) primary key,
hier varchar(200),
resultindentlevel int
);
insert into @stack values ('',0);
declare @results table (
hier varchar(200),
line varchar(200)
);
while @ruleid < @last begin
set @ruleid += 1;
set @rule = (select therule+space(1) from ruleset where id=@ruleid);
declare @c char = case when @rule like '%[<]%' then '0' else '1' end;
if @rule not like '%[<:]%' continue;
declare @varpos int = charindex('f',@rule)+4;
set @rulevar = substring(@rule,@varpos,1);
set @rulecomp =
substring(@rule,@varpos+4,charindex(space(1),@rule,@varpos+5)-@varpos-4);
declare @peek varchar(200) =
(select top (1) hier from @stack order by id desc)
--select * from @stack;
if @rule not like '%>%' begin -- handle new condition
set @peek += @c;
if exists (select hier from @results where hier=@peek)
set @peek=left(@peek,len(@peek)-1)+'1';
insert into @results
select @peek,@peek+'|'+@rulevar+'|'+ltrim(str(@rulecomp,15,4))+'||';
insert into @stack values (@peek,0);
end
declare @colon int = charindex(':',@rule);
if @colon > 0 begin -- handle assignment value
set @ruleassign = substring(@rule,@colon+2,200);
insert into @results select @peek+@c,@peek+@c + '|'+@rulevar+'||'+@ruleassign;
end
if @rule like '%>%' delete from @stack where id = (select max(id) from @stack)
end;
update @results set line = ''''+replace(rtrim(line),'|',''',''')+'''';
update @results set line = replace(line,'''''','NULL');
select line from @results;
go
``` | Convert Weka tree into hierachyid for SQL hierachical table | [
"",
"sql",
"sql-server",
"parsing",
"weka",
"decision-tree",
""
] |
I have the following SQL query that is returning a result of `92.967013425802` and I need it to be formatted like `93%` and add the percent sign. I have tried changing the sum to round but I received an error
> The function 'round' is not a valid windowing function, and cannot be used with the OVER clause.
My query:
```
select
count(*) * 100.0 / sum(count(*)) over()
from
db_table_MetaData
group by
MetaValue
order by
MetaValue
```
Any help would be appreciated. | ```
select
--Cast(Round(count(*) * 100.0 / sum(count(*)), 0) as nvarchar(5) + '%'
CAST(Round(count(*) * 100.0 / sum(count(*)), 0) as nvarchar(5)) + '%'
from
db_table_MetaData
```
This should do the trick.
In essence you take the 08/15 `ROUND()` function to get your numeric value. After this you cast it into a `nvarchar(x)` and append the percent sign string to it. However I have no method of checking my syntax right now. | Strange that you got 'not a valid function'. Perhaps you didn't provide the correct parameters?
This worked for me.
```
select cast(Round('92.967013425802', 0) as nvarchar(10)) + '%'
``` | SQL Server : round and add percent sign | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
My apologies. I have edited the below into 2 tables; I'm just having a bit of confusion.
---
I have tables very similar to the ones below, and I want to show all the table2 class-1 categories but only 1 random item per table1 category.
Sample Item Table1
```
+---------+---------------+---------------+
| ID | Item Name | Category |
+---------+---------------+---------------+
| 01 | Item A | Cat 1 |
| 02 | Item B | Cat 1 |
| 03 | Item C | Cat 2 |
| 04 | Item D | Cat 2 |
| 05 | Item E | Cat 3 |
| 06 | Item F | Cat 3 |
+---------+---------------+---------------+
```
Sample Item Table2
```
+---------------+---------------+
| Category | Class |
+---------------+---------------+
| Cat 1 | 1 |
| Cat 2 | 1 |
| Cat 3 | 2 |
+---------------+---------------+
```
I want to show all the table2 class-1 categories but only 1 random item per table1 category.
Desired Result
```
+---------+---------------+---------------+
| 02 | Item B | Cat 1 |
| 03 | Item C | Cat 2 |
+---------+---------------+---------------+
```
*(This is within my PHP script)*
Thanks in advance | You can do something like this
```
SELECT t.id, itemname, category
FROM
(
SELECT
(
SELECT id
FROM table1
WHERE category = t.category
ORDER BY RAND()
LIMIT 1
) id
FROM table1 t
GROUP BY category
) q JOIN table1 t
ON q.id = t.id
```
Note: *using `RAND()` is very costly*
Output:
```
| ID | ITEMNAME | CATEGORY |
|----|----------|----------|
| 1 | Item A | Cat 1 |
| 3 | Item C | Cat 2 |
| 6 | Item F | Cat 3 |
```
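As a quick shape-check, the same correlated-subquery idea runs under SQLite with `RANDOM()` standing in for `RAND()`; whichever ids get picked, exactly one row per category comes back:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, itemname TEXT, category TEXT);
INSERT INTO table1 VALUES
    (1, 'Item A', 'Cat 1'), (2, 'Item B', 'Cat 1'),
    (3, 'Item C', 'Cat 2'), (4, 'Item D', 'Cat 2'),
    (5, 'Item E', 'Cat 3'), (6, 'Item F', 'Cat 3');
""")

# One random id per category in the derived table, then join back for the rest.
rows = con.execute("""
SELECT t.id, t.itemname, t.category
FROM (SELECT (SELECT id FROM table1
              WHERE category = t.category
              ORDER BY RANDOM() LIMIT 1) AS id
      FROM table1 t
      GROUP BY category) AS q
JOIN table1 t ON q.id = t.id
ORDER BY t.category
""").fetchall()
print(len(rows))              # 3 -- one (random) item per category
print([r[2] for r in rows])   # ['Cat 1', 'Cat 2', 'Cat 3']
```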
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/f69a23/2)** demo | Try something like this:
```
SELECT id, itemname, category FROM (
SELECT id, itemname, category FROM sample_table
ORDER BY RAND()
) AS tmp
GROUP BY category
```
Note that this query is totally valid in MySQL
<http://dev.mysql.com/doc/refman/5.0/en/group-by-extensions.html> | MySQL - Selecting single entry per category | [
"",
"mysql",
"sql",
""
] |
I'm in the final stages of creating a database for my Android app; however, I cannot seem to get my primary key to increment. Here is the code where I set it up:
```
public class DatabaseHandler extends SQLiteOpenHelper {
// All Static variables
// Database Version
private static final int DATABASE_VERSION = 17;
// Database Name
private static final String DATABASE_NAME = "journeyManager";
// Journey table name
public static final String TABLE_JOURNEY = "journey";
// Journey table column names
private static final String KEY_P = "key";
private static final String KEY_ID = "id";
private static final String KEY_DIST = "distance";
private static final String KEY_MPG = "mpg";
private static final String KEY_COST = "cost";
public DatabaseHandler(Context context) {
super(context, DATABASE_NAME, null, DATABASE_VERSION);
}
// Creating Tables
@Override
public void onCreate(SQLiteDatabase db) {
String CREATE_JOURNEY_TABLE = "CREATE TABLE " + TABLE_JOURNEY + "("
+ KEY_P + " INTEGER PRIMARY KEY," + KEY_ID + " TEXT," + KEY_DIST + " TEXT,"
+ KEY_MPG + " TEXT," + KEY_COST + " TEXT )";
db.execSQL(CREATE_JOURNEY_TABLE);
}
// Upgrading database
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
// Drop older table if existed
db.execSQL("DROP TABLE IF EXISTS " + TABLE_JOURNEY);
// Create tables again
onCreate(db);
}
/**
* All CRUD(Create, Read, Update, Delete) Operations
*/
// Adding a new journey
void addJourneyData(Journey journey) {
SQLiteDatabase db = this.getWritableDatabase();
ContentValues values = new ContentValues();
values.put(KEY_P, journey.getpKey());
values.put(KEY_ID, journey.getId());
values.put(KEY_DIST, journey.getDistance()); // distance
values.put(KEY_MPG, journey.getMpg()); // miles per gallon
values.put(KEY_COST, journey.getCost()); // cost
// Inserting Row
db.insert(TABLE_JOURNEY, null, values);
db.close(); // Closing database connection
}
// Getting a single journey
Journey getJourney(int id) {
SQLiteDatabase db = this.getReadableDatabase();
// Note: list the columns separately; '+' would concatenate them into one bogus column name
Cursor cursor = db.query(TABLE_JOURNEY, new String[] { KEY_P, KEY_ID,
KEY_DIST, KEY_MPG, KEY_COST }, KEY_P + "=?",
new String[] { String.valueOf(id) }, null, null, null, null);
if (cursor != null)
cursor.moveToFirst();
Journey journey = new Journey();
journey.setPkey(Integer.parseInt(cursor.getString(0)));
journey.setId(String.valueOf(cursor.getString(1)));
journey.setMpg(String.valueOf(cursor.getString(2)));
journey.setDistance(String.valueOf(cursor.getString(3)));
journey.setCost(String.valueOf(cursor.getString(4)));
// return journey
return journey;
}
// Getting All Contacts
public List<Journey> getAllJourneys() {
List<Journey> journeyList = new ArrayList<Journey>();
// Select All Query
String selectQuery = "SELECT * FROM " + TABLE_JOURNEY;
SQLiteDatabase db = this.getWritableDatabase();
Cursor cursor = db.rawQuery(selectQuery, null);
// looping through all rows and adding to list
if (cursor.moveToFirst()) {
do {
Journey journey = new Journey();
journey.setPkey(Integer.parseInt(cursor.getString(0)));
journey.setId(String.valueOf(cursor.getString(1)));
journey.setMpg(String.valueOf(cursor.getString(2)));
journey.setDistance(String.valueOf(cursor.getString(3)));
journey.setCost(String.valueOf(cursor.getString(4)));
// Adding contact to list
journeyList.add(journey);
} while (cursor.moveToNext());
}
// return contact list
return journeyList;
}
}
```
Here is where I add the details into a database from a button on another activity,
```
db.addJourneyData(new Journey(1,timeStamp, distanceLabel, mpgAnswer, pplAnswer));
```
I get to the point where it will add the first one in, but from then on it says the primary key is not unique - therefore it doesn't update the database.
Also, I want the data to be in descending order, to do this, I use DESC, but where shall I place this?
Any help would be appreciated,
Many thanks, | To make the database auto-generate the primary key for you, just don't specify it yourself. Remove this line from the insert code:
```
values.put(KEY_P, journey.getpKey());
```
You can capture the generated id from the return value of [`insert()`](http://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.html#insert%28java.lang.String,%20java.lang.String,%20android.content.ContentValues%29).
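Outside of Android, the same behaviour is easy to see with Python's built-in `sqlite3` module (SQLite is the engine underneath `SQLiteDatabase`); the cut-down `journey` table below is a hypothetical stand-in for the one in the question:

```python
import sqlite3

# Plain-SQLite sketch of the same idea: the key column is INTEGER PRIMARY KEY,
# so omitting it from the INSERT lets SQLite assign the next id itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE journey (p INTEGER PRIMARY KEY, id TEXT, distance TEXT)")

cur = conn.cursor()
cur.execute("INSERT INTO journey (id, distance) VALUES (?, ?)", ("ts1", "12.5"))
first = cur.lastrowid   # the generated key, like the return value of insert()
cur.execute("INSERT INTO journey (id, distance) VALUES (?, ?)", ("ts2", "3.1"))
second = cur.lastrowid

print(first, second)    # auto-assigned, unique keys: 1 2
```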
> Also, I want the data to be in descending order, to do this, I use DESC, but where shall I place this?
Assuming this applies to `getAllJourneys()` where you do a `rawQuery()`, just add the `ORDER BY` directly in the SQL:
```
String selectQuery = "SELECT * FROM " + TABLE_JOURNEY + " ORDER BY " + KEY_P + " DESC";
``` | *Go through the following steps*
**Modify your Creating Tables code like this**
```
// Creating Tables
@Override
public void onCreate(SQLiteDatabase db) {
String CREATE_JOURNEY_TABLE = "CREATE TABLE " + TABLE_JOURNEY + "("
+ KEY_P + " INTEGER PRIMARY KEY AUTOINCREMENT DEFAULT 1 ," + KEY_ID + " TEXT," + KEY_DIST + " TEXT,"
+ KEY_MPG + " TEXT," + KEY_COST + " TEXT )";
db.execSQL(CREATE_JOURNEY_TABLE);
}
```
by using
```
INTEGER PRIMARY KEY AUTOINCREMENT DEFAULT 1
```
you can start increment from 1
**Then remove the following code**
```
values.put(KEY_P, journey.getpKey());
``` | Android SQLite - Primary Key - Inserting into table | [
"",
"android",
"sql",
"sqlite",
""
] |
I'm very new to this so I will try to explain this as well as I can; please also explain things as simply as you can, thank you!
I have a list of bills that a company gets when it buys something. I'm trying to select only the customers that have NOT bought anything (therefore their CustomerID is not in the Bill list). How do I do that?
Here is my code so far (renamed some stuff to english so you hopefully understand better):
```
SELECT Name, Postnr+' '+City as Postadress
FROM Bill
RIGHT JOIN Customer
ON Customer.CustomerID = Bill.CustomerID
``` | I prefer `NOT EXISTS` since it's more readable and maintainable, and it has no issues with NULL values:
```
SELECT Name,
Postnr + ' ' + City AS Postadress
FROM Customer c
WHERE NOT EXISTS (SELECT 1 FROM bill b
WHERE b.CustomerID = c.CustomerID)
```
Another (possibly less efficient) way is an `OUTER JOIN`:
```
SELECT Name,
Postnr + ' ' + City AS Postadress
FROM Customer c
LEFT OUTER JOIN Bill b
ON c.CustomerID = b.CustomerID
WHERE b.CustomerID IS NULL
```
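Both forms are easy to sanity-check with Python's built-in `sqlite3` module (the sample data below is made up):

```python
import sqlite3

# Tiny sanity check of both anti-join queries: customer 2 has no bills.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer (CustomerID INTEGER, Name TEXT);
    CREATE TABLE Bill (BillID INTEGER, CustomerID INTEGER);
    INSERT INTO Customer VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Eve');
    INSERT INTO Bill VALUES (10, 1), (11, 1), (12, 3);
""")

not_exists = conn.execute("""
    SELECT Name FROM Customer c
    WHERE NOT EXISTS (SELECT 1 FROM Bill b WHERE b.CustomerID = c.CustomerID)
""").fetchall()

outer_join = conn.execute("""
    SELECT Name FROM Customer c
    LEFT OUTER JOIN Bill b ON c.CustomerID = b.CustomerID
    WHERE b.CustomerID IS NULL
""").fetchall()

print(not_exists, outer_join)  # both return only Bob
```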
Here's a list of all approaches:
[Should I use NOT IN, OUTER APPLY, LEFT OUTER JOIN, EXCEPT, or NOT EXISTS?](http://www.sqlperformance.com/2012/12/t-sql-queries/left-anti-semi-join) | You explained it quite well. You should have made your SQL right from your explanation:
Customer with their CustomerID not in Bill list. Now this would be like:
```
select * from Customer where CustomerID not in (select CustomerID from Bill)
```
Dependent on your database system the syntax could vary slightly e.g. not CustomerID in ... | Selecting those that does not match | [
"",
"sql",
"sql-server",
"join",
""
] |
I have a simple tree that is 4 levels deep. Here is the table DDL:
```
CREATE TABLE HIER_DEMO(
ID NUMBER,
LABEL VARCHAR2 (100),
PARENT_ID NUMBER)
```
The hierarchy starts with `ID = PARENT_ID`. The number of levels is fixed; it is always 4. All branches have leaves at the 4th level, so we can also add 3 more columns representing the LABELs of the ancestors if necessary.
I need to build a query that
1. Searches for particular phrase in LABEL on any level of hierarchy. For example `LABEL LIKE '%MAGIC_WORD%'`.
2. Returns all the nodes till leaf level under the hierarchy node that satisfies condition 1.
3. In addition we need to return all the ancestors of the hierarchy node that satisfies condition 1.
Here is an example
```
INSERT INTO HIER_DEMO VALUES (1, 'Mike', 1);
INSERT INTO HIER_DEMO VALUES (2, 'Arthur', 2);
INSERT INTO HIER_DEMO VALUES (3, 'Alex', 1);
INSERT INTO HIER_DEMO VALUES (4, 'Suzanne', 1);
INSERT INTO HIER_DEMO VALUES (5, 'Brian', 3);
INSERT INTO HIER_DEMO VALUES (6, 'Rick', 3);
INSERT INTO HIER_DEMO VALUES (7, 'Patrick', 4);
INSERT INTO HIER_DEMO VALUES (8, 'Simone', 4);
INSERT INTO HIER_DEMO VALUES (9, 'Tim', 5);
INSERT INTO HIER_DEMO VALUES (10, 'Andrew', 5);
INSERT INTO HIER_DEMO VALUES (11, 'Sandy', 6);
INSERT INTO HIER_DEMO VALUES (12, 'Brian', 6);
INSERT INTO HIER_DEMO VALUES (13, 'Chris', 7);
INSERT INTO HIER_DEMO VALUES (14, 'Laure', 7);
INSERT INTO HIER_DEMO VALUES (15, 'Maureen', 8);
INSERT INTO HIER_DEMO VALUES (16, 'Andy', 8);
INSERT INTO HIER_DEMO VALUES (17, 'Al', 2);
INSERT INTO HIER_DEMO VALUES (18, 'John', 17);
INSERT INTO HIER_DEMO VALUES (19, 'Frank', 18);
INSERT INTO HIER_DEMO VALUES (20, 'Tim', 19);
```
I am looking for the query that searches the tree for word 'Brian' in the `LABEL` column
The query should return these data
```
ID LABEL PARENT_ID
1 Mike 1
3 Alex 1
5 Brian 3
6 Rick 3
9 Tim 5
10 Andrew 5
12 Brian 6
```
Could somebody help with the Oracle query? We are using 11.2 version of Oracle database. | ```
SQL> select * from HIER_DEMO
2 start with label like '%Brian%'
3 connect by prior id = parent_id
4 union
5 select * from HIER_DEMO
6 start with label like '%Brian%'
7 connect by prior parent_id = id and PRIOR parent_id != PRIOR id
8 /
ID LABEL PARENT_ID
---- -------------------- ---------
1 Mike 1
3 Alex 1
5 Brian 3
6 Rick 3
9 Tim 5
10 Andrew 5
12 Brian 6
``` | We can use recursive CTE to accomplish this
```
WITH CTE1(ID, LABEL,PARENT_ID) AS
(
SELECT * FROM Hier_Demo
WHERE LABEL LIKE '%Brian%'
UNION ALL
SELECT h.ID, h.LABEL, h.PARENT_ID FROM Hier_Demo h
INNER JOIN CTE1 c
ON h.ID = c.PARENT_ID
WHERE h.ID <> h.PARENT_ID
),
CTE2(ID, LABEL,PARENT_ID) AS
(
SELECT * FROM Hier_Demo
WHERE LABEL LIKE '%Brian%'
UNION ALL
SELECT h.ID, h.LABEL, h.PARENT_ID FROM Hier_Demo h
INNER JOIN CTE2 c
ON h.PARENT_ID = c.ID
)
SELECT * FROM CTE2
UNION
SELECT * FROM CTE1
UNION
SELECT * FROM HIER_DEMO WHERE ID = 1
```
In the above code CTE1 gets records up in the hierarchy and CTE2 gets records down in the hierarchy of Brian, after that we just `UNION` the records returned by these CTEs
see the code working at SQLFiddle: <http://sqlfiddle.com/#!4/0c99d/39> | SQL query for searching in the tree | [
"",
"sql",
"oracle",
"search",
"tree",
""
] |
I read a **45-tips-database-performance-tips-for-developers** document from a famous commercial vendor for SQL tools today and there was one tip that confused me:
> If possible, avoid NULL values in your database. If not, use the
> appropriate IS NULL and IS NOT NULL code.
I like having NULL values because to me it makes a difference whether a value was never set or whether it is `0` or an empty string. So databases have this for a purpose.
So is this tip nonsense or should I take action to prevent having NULL values at all in my database tables? Does it affect performance a lot to have a `NULL` value instead of a filled `number` or `string` value? | The NULL question is not simple... Every professional has a personal opinion about it.
Relational theory's Two-Valued Logic (2VL: TRUE and FALSE) rejects NULL, and Chris Date is one of the fiercest opponents of NULLs. Ted Codd, on the other hand, accepted Three-Valued Logic (TRUE, FALSE and UNKNOWN).
Just a few things to note for Oracle:
1. Single column B\*Tree Indexes don't contain NULL entries. So the Optimizer can't use an Index if you code "WHERE XXX IS NULL".
2. Oracle considers a NULL the same as an empty string, so:
```
WHERE SOME_FIELD = NULL
```
is the same as:
```
WHERE SOME_FIELD = ''
```
Moreover, with NULLs you must pay attention in your queries, because every comparison with NULL returns NULL.
And, sometimes, NULLs are insidious. Think for a moment about a WHERE condition like the following:
```
WHERE SOME_FIELD NOT IN (SELECT C FROM SOME_TABLE)
```
If the subquery returns one or more NULLs, you get the empty recordset!
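This trap is easy to reproduce with Python's built-in `sqlite3` (the tables here are made up; the same three-valued logic applies):

```python
import sqlite3

# A single NULL in the subquery makes "v NOT IN (...)" unknown for every row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (v INTEGER);
    CREATE TABLE some_table (c INTEGER);
    INSERT INTO t VALUES (1), (2);
    INSERT INTO some_table VALUES (2), (NULL);
""")

rows = conn.execute(
    "SELECT v FROM t WHERE v NOT IN (SELECT c FROM some_table)"
).fetchall()
print(rows)  # [] -- empty recordset because of the NULL

# Remove the NULL and the expected row comes back:
conn.execute("DELETE FROM some_table WHERE c IS NULL")
rows2 = conn.execute(
    "SELECT v FROM t WHERE v NOT IN (SELECT c FROM some_table)"
).fetchall()
print(rows2)  # [(1,)]
```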
These are the very first few cases that I want to talk about. But we can speak about NULLs for a lot of time... | Besides the reasons mentioned in other answers, we can look at NULLs from a different angle.
Regarding duplicate rows, Codd said
> If something is true, saying it twice doesnβt make it any more true.
Similarly, you can say
**If something is not known, saying it is unknown doesn't make it known.**
Databases are used to record facts. The facts (truths) serve as axioms from which we can deduce other facts.
From this perspective, unknown things should not be recorded - they are not useful facts.
Anyway, anything that is not recorded is unknown. So why bother recording them?
Let alone their existence makes the deduction complicated. | Why should I avoid NULL values in a SQL database? | [
"",
"sql",
"database-design",
""
] |
I have following tables:
**Table 1 : `AccessMenu`**
```
userid item
1 a
2 b
2 c
```
and
**Table 2 : `ActionRights`**
```
idAdmin itemCol
2 v
2 m
2 d
```
**DESIRED RESULT:**
```
userid yes/no
1 no
2 yes
```
When a `userid` has a matching `idadmin` in `ActionRights`, it should return `Yes`, otherwise `No`.
I tried below query:
```
select AccessMenu.userid,
case when ActionRights.IdAdmin=AccessMenu.userid then 'Yes' else 'No' end as 'GRP'
from AccessMenu left join ActionRights
on ActionRights.IdAdmin =AccessMenu.userid
```
But through this query I am getting repeated results:
```
userid yes/no
1 no
2 yes
2 yes
2 yes
2 yes
2 yes
```
Please help me. | ```
SELECT DISTINCT userid,
CASE
WHEN idadmin IS NULL THEN 'No'
ELSE 'Yes'
END AS 'Grp'
FROM (SELECT accessmenu.userid AS UserId,
actionrights.idadmin AS IdAdmin
FROM accessmenu
LEFT JOIN actionrights
ON actionrights.idadmin = accessmenu.userid)z
``` | Please check using subquery:
```
select distinct
userid,
case when (SELECT COUNT(*) from ActionRights b where b.idAdmin=a.userid)>0 then 'Yes' else 'No' end [Yes/No]
from AccessMenu a
``` | How do I check if a table have matching record in another table? | [
"",
"sql",
"database",
"sql-server-2008-r2",
""
] |
I have a `select` statement
```
SELECT *
FROM TABLENAME
WHERE WORKERNAME = 'A'
AND DATE = '12/17/2014'
```
The output will be:
```
FREE | USED | DATE | WORKERNAME
------------------------------------
1 | 0 |12/17/2014 | A
1 | 0 |12/17/2014 | A
1 | 0 |12/17/2014 | A
```
I need to have an output where outputs for `DATE` and `WORKERNAME` will be column header that will look like:
```
A
----------
12/17/2014
----------
FREE | USED
----------
1 | 0
1 | 0
1 | 0
```
Can someone suggest how this could be achieved using an oracle SQL or PL/SQL? | It would not be that elegant to produce the output you are after using pure SQL or even PL/SQL. It would be better if you let a client do the work. Depending on how you want to present your final output to an end user, your choices range from simple SQL\*PLUS to more sophisticated reporting tools. Here is a simple example of how you can produce that output using SQL\*PLUS:
```
clear screen;
column workername new_value worker_name;
column date1 new_value d1;
column workername noprint;
column date1 noprint;
set linesize 15;
column free format a7;
column used format a7;
ttitle center worker_name skip 1 -
center '------------' skip 1 -
center d1 skip 1 -
center '------------' skip 1;
set colsep '|'
/* sample of data from your question */
with t1(free, used, date1, workername) as(
select 1, 0, date '2014-12-17', 'A' from dual union all
select 1, 0, date '2014-12-17', 'A' from dual union all
select 1, 0, date '2014-12-17', 'A' from dual
)
select to_char(free) as free
, to_char(used) as used
, to_char(date1, 'mm/dd/yyyy') as date1
, workername
from t1
where workername = 'A'
and date1 = date '2014-12-17';
```
Result:
```
A
------------
12/17/2014
------------
FREE |USED
-------|-------
1 |0
1 |0
1 |0
```
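For comparison, here is what the "let a client do the work" idea looks like as a minimal Python sketch — the rows stand in for a hypothetical `fetchall()` result, and the layout mimics the banner produced above:

```python
from itertools import groupby

# Group fetched rows by (worker, date) and print a banner per group.
rows = [   # (workername, date, free, used) -- made-up sample data
    ("A", "11/17/2014", 1, 0),
    ("A", "12/17/2014", 1, 0),
    ("A", "12/17/2014", 1, 0),
]

lines = []
for (worker, day), grp in groupby(rows, key=lambda r: (r[0], r[1])):
    lines += [worker.center(12), "-" * 12, day.center(12), "-" * 12,
              "FREE | USED"]
    lines += [f"  {free}  |  {used}" for _, _, free, used in grp]

print("\n".join(lines))
```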
If there is a need to produce a report that includes different `workernames` and/or different `date`s, the `break on` SQL\*PLUS command can be used to break the report on a specific column or a combination of columns. For example:
```
column workername new_value worker_name;
column date1 new_value d1;
column workername noprint;
column date1 noprint;
set linesize 15;
column free format a7;
column used format a7;
ttitle center worker_name skip 1 -
center '------------' skip 1 -
center d1 skip 1 -
center '------------' skip 1;
set colsep '|'
break on worker_name skip page on date1 skip page;
/* sample of data */
with t1(free, used, date1, workername) as(
select 1, 0, date '2014-12-17', 'A' from dual union all
select 1, 0, date '2014-11-17', 'A' from dual union all
select 1, 0, date '2014-12-17', 'A' from dual union all
select 1, 0, date '2014-11-17', 'B' from dual
)
select to_char(free) as free
, to_char(used) as used
, to_char(date1, 'mm/dd/yyyy') as date1
, workername
from t1
order by workername, date1;
```
Result:
```
A
------------
11/17/2014
------------
FREE |USED
-------|-------
1 |0
A
------------
12/17/2014
------------
FREE |USED
-------|-------
1 |0
1 |0
B
------------
11/17/2014
------------
FREE |USED
-------|-------
1 |0
```
Here is the [SQL\*PLUS user's guide](http://docs.oracle.com/cd/E11882_01/server.112/e16604/toc.htm) where you can find detailed information on any command that's been used in the above examples. | ```
TTITLE LEFT 'Manager: ' MGRVAR SKIP 2
BREAK ON MANAGER_ID SKIP PAGE
BTITLE OFF
SELECT MANAGER_ID, DEPARTMENT_ID, LAST_NAME, SALARY
FROM EMP_DETAILS_VIEW
WHERE MANAGER_ID IN (101, 201)
ORDER BY MANAGER_ID, DEPARTMENT_ID;
Manager: 101
DEPARTMENT_ID LAST_NAME SALARY
------------- ------------------------- ----------
10 Whalen 4400
40 Mavris 6500
70 Baer 10000
100 Greenberg 12000
110 Higgins 12000
Manager: 201
DEPARTMENT_ID LAST_NAME SALARY
------------- ------------------------- ----------
20 Fay 6000
6 rows selected.
```
See if something like this helps you. | Display row values as column header | [
"",
"sql",
"oracle",
"plsql",
"oracle10g",
""
] |
I'm performing a query on a table which tracks the results of a test taken by students. The test is composed of multiple sections, and there is a column for each section score. Each row is an instance of the test taken by a student. The sections can either be taken all at once, or split into multiple attempts. For example, a student can take one section today, and the rest tomorrow. In addition, a student is allowed to retake any section of the test.
**Sample Student**:
```
StudentID WritingSection ReadingSection MathSection DateTaken
1 65 85 54 4/1/2013 14:53
1 98 NULL NULL 4/8/2013 13:13
1 NULL NULL 38 5/3/2013 12:43
```
A `NULL` means that the section was not administered for the given test instance, and a second section score means the section was retaken.
I want a query that groups by the `StudentID` such that there is only one row per student, and the most recent score for each section is returned. I'm looking for an efficient way to solve this problem as we have many hundreds of thousands of test attempts in the database.
**Expected Result:**
```
StudentID WritingSection ReadingSection MathSection DateTaken
1 98 85 38 5/3/2013 12:43
```
**EDIT:**
There have been a lot of good solutions. I want to experiment with each next week a little more before choosing the answer. Thanks everyone! | Sorry - my previous answer answered a DIFFERENT question than the one posed :) It will return all data from the MOST RECENT row. The question asked is to aggregate over all rows to grab the most recent score for each subject individually.
But I'm leaving it up there because the question I answered is a common one, and maybe someone landing on this question actually had that question instead :)
Now to answer the actual question:
I think the cleanest way to do this is with PIVOT and UNPIVOT:
```
SELECT StudentID, [WritingSection], [ReadingSection], [MathSection], MAX(DateTaken) DateTaken
FROM (
SELECT StudentID, Subject, DateTaken, Score
FROM (
SELECT StudentID, Subject, DateTaken, Score
, row_number() OVER (PARTITION BY StudentID, Subject ORDER BY DateTaken DESC) as rowNum
FROM Students s
UNPIVOT (
Score FOR Subject IN ([WritingSection],[ReadingSection],[MathSection])
) u
) x
WHERE x.rowNum = 1
) y
PIVOT (
MAX(Score) FOR Subject IN ([WritingSection],[ReadingSection],[MathSection])
) p
GROUP BY StudentID, [WritingSection], [ReadingSection], [MathSection]
```
The innermost subquery (x) uses SQL's UNPIVOT function to normalize the data (meaning to turn each student's score on each section of the test into a single row).
The next subquery out (y) is simply there to filter the rows to only the most recent score FOR EACH SUBJECT INDIVIDUALLY (a workaround for the restriction that windowed functions like row_number() cannot be used in a WHERE clause).
Lastly, since you want the data displayed back in the denormalized original format (1 column for each section of the test), we use SQL's PIVOT function. This simply turns rows into columns - one for each section of the test. Finally, you said you wanted the most recent test taken shown (despite the fact that each section could have its own unique "most recent" date). So we simply aggregate over those 3 potentially different DateTakens to find the most recent.
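`UNPIVOT`/`PIVOT` are SQL-Server-specific, but the intended result is easy to cross-check in Python's built-in `sqlite3` with a different formulation — correlated subqueries that pick the latest non-NULL value per column (a sanity check of the expected output, not this answer's method):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Students (StudentID INT, Writing INT, Reading INT,
                           Math INT, DateTaken TEXT);
    INSERT INTO Students VALUES
        (1, 65,   85,   54,   '2013-04-01'),
        (1, 98,   NULL, NULL, '2013-04-08'),
        (1, NULL, NULL, 38,   '2013-05-03');
""")

# For each column, take the value from the newest row where it is non-NULL.
row = conn.execute("""
    SELECT s.StudentID,
           (SELECT Writing FROM Students w WHERE w.StudentID = s.StudentID
             AND Writing IS NOT NULL ORDER BY DateTaken DESC LIMIT 1),
           (SELECT Reading FROM Students r WHERE r.StudentID = s.StudentID
             AND Reading IS NOT NULL ORDER BY DateTaken DESC LIMIT 1),
           (SELECT Math FROM Students m WHERE m.StudentID = s.StudentID
             AND Math IS NOT NULL ORDER BY DateTaken DESC LIMIT 1),
           MAX(DateTaken)
    FROM Students s
    GROUP BY s.StudentID
""").fetchone()
print(row)  # (1, 98, 85, 38, '2013-05-03') -- matches the expected result
```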
This will scale more easily than other solutions if there are more Sections added in the future - just add the column names to the list. | This is tricky. Each section score is coming potentially from a different record. But the normal rules of `max()` and `min()` don't apply.
The following query gets a sequence number for each section, starting with the latest non-NULL value. This is then used for conditional aggregation in the outer query:
```
select s.StudentId,
max(case when ws_seqnum = 1 then WritingSection end) as WritingSection,
max(case when rs_seqnum = 1 then ReadingSection end) as ReadingSection,
max(case when ms_seqnum = 1 then MathSection end) as MathSection,
max(DateTaken) as DateTaken
from (select s.*,
row_number() over (partition by studentid
order by (case when WritingSection is not null then 0 else 1 end), DateTaken desc
) as ws_seqnum,
row_number() over (partition by studentid
order by (case when ReadingSection is not null then 0 else 1 end), DateTaken desc
) as rs_seqnum,
row_number() over (partition by studentid
order by (case when MathSection is not null then 0 else 1 end), DateTaken desc
) as ms_seqnum
from student s
) s
where StudentId = 1
group by StudentId;
```
The `where` clause is optional in this query. You can remove it and it should still work on all students.
This query is more complicated than it needs to be, because the data is not normalized. If you have control over the data structure, consider an association/junction table, with one row per student per test with the score and test date as columns in the table. (Full normality would introduce another table for the test dates, but that probably isn't necessary.) | Group By Column, Select Most Recent Value | [
"",
"sql",
"t-sql",
""
] |
I can not find out what is wrong with my SQL Query:
```
CREATE TABLE Product (
productID int NOT NULL,
name varchar(255) NOT NULL,
price int(255),
PRIMARY KEY (productID)
)
CREATE TABLE User (
userID int NOT NULL,
PRIMARY KEY (userID)
)
CREATE TABLE Purchased (
productID int NOT NULL,
userID varchar(255) NOT NULL,
date date(255), NOT NULL,
FOREIGN KEY (productID) REFERENCES Product(productID) FOREIGN KEY (userID) REFERENCES User(userID)
)
```
Please can someone help | There are some syntax errors in your `create table` statement.
* **Date** is a keyword, so it's not good practice to use it.
* **userID** is **int** in your `USER` table, but in the `purchased` table you are making it **varchar**.
* For **date datatype** no need to specify the **number of characters.**
The correct statement is
```
CREATE TABLE purchased
(
productid INT NOT NULL,
userid INT NOT NULL,
date1 DATE NOT NULL,
FOREIGN KEY (productid) REFERENCES product(productid),
FOREIGN KEY (userid) REFERENCES USER(userid)
)
```
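As a quick sanity check, the corrected DDL (together with the primary keys from the question's `Product` and `User` tables) also runs on SQLite via Python's `sqlite3`, which makes it easy to confirm the foreign keys are enforced (sample data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # FK enforcement is off by default in SQLite
conn.executescript("""
    CREATE TABLE product   (productid INT NOT NULL PRIMARY KEY,
                            name TEXT NOT NULL, price INT);
    CREATE TABLE user      (userid INT NOT NULL PRIMARY KEY);
    CREATE TABLE purchased (productid INT NOT NULL,
                            userid    INT NOT NULL,
                            date1     DATE NOT NULL,
                            FOREIGN KEY (productid) REFERENCES product(productid),
                            FOREIGN KEY (userid) REFERENCES user(userid));
    INSERT INTO product VALUES (1, 'Widget', 10);
    INSERT INTO user VALUES (7);
""")

conn.execute("INSERT INTO purchased VALUES (1, 7, '2014-01-01')")  # ok
try:
    conn.execute("INSERT INTO purchased VALUES (99, 7, '2014-01-01')")
    fk_violation = False
except sqlite3.IntegrityError:   # no product 99 -> insert rejected
    fk_violation = True
print(fk_violation)  # True
```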
[SQL Fiddle](http://www.sqlfiddle.com/#!2/2dccb) | To start with, you have a syntax error in your third `CREATE TABLE` statement, where you have specified a comma before `NOT NULL` constraint and a missing comma before second foreign key definition.
Another thing to note is, you are not supposed to specify any parameter to `DATE` data type, like you have specified.
EDIT: The data type of userID in this table needs to be same as the data type of the user table for the foreign key to work.
The correct statement is
```
CREATE TABLE Purchased (productID int NOT NULL,
userID INT NOT NULL,
date date NOT NULL,
FOREIGN KEY (productID) REFERENCES Product(productID),
FOREIGN KEY (userID) REFERENCES User(userID)
)
```
If you're getting some other error, please update your question | SQL Structure help and query | [
"",
"mysql",
"sql",
"database",
""
] |
I would like to know if the last character of a string is an asterisk `*`
I have got the following which gives me the last character:
```
select RIGHT('hello*',1)
```
But how would I use it as a condition in an if statement by matching it? The following doesn't work, as it only returns the last char.
```
select '*' = RIGHT('hello*',1)
```
Can I use regex? | You're just about there. By using:
```
select '*' = RIGHT('hello*',1)
```
Sql evaluates the `RIGHT` but thinks you want to alias it as a column named `*`
You can use the expression conditionally:
```
if RIGHT('hello*',1) = '*'
print 'Ends in *'
else
print 'Does not end in *'
```
You can filter like so on a table:
```
select *
from MyTable
where RIGHT(MyColumn, 1) = '*';
```
Although the performance won't be stellar. *Edit*: See Karl Kieninger's answer for ideas on how to greatly improve the performance of this query | StuartLC mentioned performance. You can get performance gains at the cost of an indexed persisted computed column. This tactic has been mentioned in several other places, such as:
* [sql-server-index-columns-used-in-like](https://stackoverflow.com/questions/1388059/sql-server-index-columns-used-in-like)
* [sql-server-index-on-a-computed-column](https://stackoverflow.com/questions/1323214/sql-server-index-on-a-computed-column)
I created a sample to see it in action, but the execution plan showed it was never actually picking up the index I expected for index seeks on the computed columns. So now I'm not sure what is wrong with my test. If someone can point out my failure, I'll correct the answer.
```
CREATE TABLE Test(
TestData VARCHAR(10)
,TestData_Reverse AS REVERSE(TestData) PERSISTED
,TestData_Right1 AS RIGHT(TestData,1) PERSISTED
)
INSERT INTO Test(TestData) VALUES ('Bob'),('Joe'),('Ed*')
CREATE INDEX IX_Test_TestData ON Test (TestData)
CREATE INDEX IX_Test_TestData_Reverse ON Test (TestData_Reverse)
CREATE INDEX IX_Test_TestData_Right1 ON Test (TestData_Right1)
GO
SELECT * FROM Test WHERE TestData LIKE '%*' --Index Scan
SELECT * FROM Test WHERE RIGHT(TestData,1) = '*' --Table Scan
SELECT * FROM Test WHERE TestData LIKE '*%' --Index Seek
SELECT * FROM Test WHERE TestData_Reverse LIKE '*%' --Table Scan
SELECT * FROM Test WHERE LEFT(TestData_Reverse,1) = '*' --Index Scan
SELECT * FROM Test WHERE TestData_Right1 = '*' --Table Scan
DROP TABLE Test
```
**Edit - temporary edit**
This really is excellent - You just need more data, so that:
1. the selectivity of a few '\*' rows improves to the point where select returns less than a few % of total table rows to warrant seeks
2. the number of pages used to store the data is non-trivial so that relative costs are more accurate
3. We need to be careful of SELECT \* as this will trigger RID / Bookmark lookups (which are possibly more expensive than a cluster since we could carry at least one of the columns in a Clustered Index)
Here's some more data, tests done on Sql Express 2014:
```
INSERT INTO Test(TestData)
SELECT o1.name + o2.name from sys.objects o1, sys.objects o2;
UPDATE STATISTICS Test;
SELECT TestData FROM Test WHERE TestData LIKE '%*' --Index Scan IX_Test_TestData *1 .038
SELECT TestData FROM Test WHERE RIGHT(TestData,1) = '*' --Index Seek (IX_Test_TestData_Right1) *2 .003
SELECT TestData FROM Test WHERE TestData LIKE '*%' --Index Seek IX_Test_TestData *3 .003
SELECT TestData FROM Test WHERE TestData_Reverse LIKE '*%' --Index Scan IX_Test_TestData *4 .038
SELECT TestData FROM Test WHERE LEFT(TestData_Reverse,1) = '*' --Index Scan IX_Test_TestData *5 .038
SELECT TestData FROM Test WHERE LEFT(TestData_Right1,1) = '*' --Index Scan IX_Test_TestData_Right1 *5 .031
SELECT TestData FROM Test WITH (INDEX = IX_Test_TestData_Reverse) WHERE TestData_Reverse LIKE '*%' --Index Scan IX_Test_TestData *6 .05
SELECT TestData FROM Test WHERE TestData_Right1 = '*' --Index Seek IX_Test_TestData_Right1 *7 .003
SELECT * FROM Test WHERE REVERSE(TestData) = '*dE' --Index Seek (IX_Test_TestData_Reverse) *8 .006
```
The *great news* IMO is that Sql Server was able to "grok" that `RIGHT(TestData,1)` could be substituted for the computed column and used `IX_Test_TestData_Right1` (and the same for `REVERSE` \*8). This has implication (admittedly for very specific queries) that the persisted computed columns can be hidden away from the world, behind the scenes like a plain index, and means that the lack of sargability of functions CAN actually be mitigated in limited cases.
w.r.t. the disappointing scans for e.g. \*4, it is also worth noting that the test table is a heap, and one of the possible reasons why Sql isn't using seeks for the LIKE operator is the perception that the RID lookup into the cluster will outweigh the benefit of using `IX_Test_TestData_Right / IX_Test_TestData_Reverse` over `TestData`.
I believe an optimal strategy in real tables would be to use a covering index to `INCLUDE` the original unreversed column.
```
CREATE INDEX IX_Test_TestData_Reverse ON Test (TestData_Reverse) INCLUDE (TestData)
SELECT TestData FROM Test WHERE TestData_Reverse LIKE '*%' --Index Scan IX_Test_TestData .003
``` | how to match last character of string in TSQL? | [
"",
"sql",
"t-sql",
""
] |
I have this table that has wide range of dates and a corresponding value for each one of those dates, an example shown below.
```
Date Value
6/01/2013 8
6/02/2013 4
6/03/2013 1
6/04/2013 7
6/05/2013 1
6/06/2013 1
6/07/2013 3
6/08/2013 8
6/09/2013 4
6/10/2013 2
6/11/2013 10
6/12/2013 4
6/13/2013 7
6/14/2013 3
6/15/2013 2
6/16/2013 1
6/17/2013 7
6/18/2013 5
6/19/2013 1
6/20/2013 4
```
What I am trying to do is create a query that will create a new column that will display the sum of the Value's column for a specified date range. For example down below, the sum column contains the sum of its corresponding date going back one full week. So the Sum of the date 6/9/2013 would be the sum of the values from 6/03/2013 to 6/09/2013.
```
Date Sum
6/01/2013 8
6/02/2013 12
6/03/2013 13
6/04/2013 20
6/05/2013 21
6/06/2013 22
6/07/2013 25
6/08/2013 25
6/09/2013 25
6/10/2013 26
6/11/2013 29
6/12/2013 32
6/13/2013 38
6/14/2013 38
6/15/2013 32
6/16/2013 29
6/17/2013 34
6/18/2013 29
6/19/2013 26
6/20/2013 23
```
I've tried using the LIMIT clause but I could not get it to work; any help would be greatly appreciated. | Using `data.table`
```
require(data.table)
#Build some sample data
data <- data.table(Date=1:20,Value=rpois(20,10))
#Build reference table
Ref <- data[,list(Compare_Value=list(I(Value)),Compare_Date=list(I(Date)))]
#Use lapply to get last seven days of value by id
data[,Roll.Val := lapply(Date, function(x) {
d <- as.numeric(Ref$Compare_Date[[1]] - x)
sum((d <= 0 & d >= -7)*Ref$Compare_Value[[1]])})]
head(data,10)
Date Value Roll.Val
1: 1 14 14
2: 2 7 21
3: 3 9 30
4: 4 5 35
5: 5 10 45
6: 6 10 55
7: 7 15 70
8: 8 14 84
9: 9 8 78
10: 10 12 83
```
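For a language-agnostic cross-check, the same trailing 7-day window sum can be computed in a few lines of plain Python against the sample data from the question:

```python
from datetime import date, timedelta

# Values for 6/01/2013 through 6/20/2013 from the question.
values = [8, 4, 1, 7, 1, 1, 3, 8, 4, 2, 10, 4, 7, 3, 2, 1, 7, 5, 1, 4]
days = [date(2013, 6, d) for d in range(1, 21)]
series = dict(zip(days, values))

def trailing_sum(d, window=7):
    # Sum over the 7-day window ending on (and including) day d.
    return sum(series.get(d - timedelta(days=k), 0) for k in range(window))

rolling = [trailing_sum(d) for d in days]
print(rolling[:9])   # [8, 12, 13, 20, 21, 22, 25, 25, 25]
print(rolling[-1])   # 23
```

The printed values match the expected `Sum` column in the question (e.g. 25 for 6/09/2013 and 23 for 6/20/2013).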
Here is another solution if anyone is interested:
```
library("devtools")
install_github("boRingTrees","mgahan")
require(boRingTrees)
rollingByCalcs(data,dates="Date",target="Value",stat=sum,lower=0,upper=7)
``` | `zoo` has a function `rollapply` which can do what you need:
```
z <- zoo(x$Value, order.by=x$Date)
rollapply(z, width = 7, FUN = sum, partial = TRUE, align = "right")
## 2013-06-01 8
## 2013-06-02 12
## 2013-06-03 13
## 2013-06-04 20
## 2013-06-05 21
## 2013-06-06 22
## 2013-06-07 25
## 2013-06-08 25
## 2013-06-09 25
## 2013-06-10 26
## 2013-06-11 29
## 2013-06-12 32
## 2013-06-13 38
## 2013-06-14 38
## 2013-06-15 32
## 2013-06-16 29
## 2013-06-17 34
## 2013-06-18 29
## 2013-06-19 26
## 2013-06-20 23
``` | Moving sum over date range | [
"",
"sql",
"r",
"count",
"sum",
"data.table",
""
] |
I was solving one of the puzzles and came across swapping column values using DML queries:
```
SELECT * FROM TEMP_TABLE;
ID1, ID2
--------
20, 15
20, 15
20, 15
```
Solution is mathematical calculation:
```
UPDATE TEMP_TABLE SET ID1=ID1+ID2;
UPDATE TEMP_TABLE SET ID2=ID1-ID2;
UPDATE TEMP_TABLE SET ID1=ID1-ID2;
```
Now, I am trying to figure out whether this can be applied to Strings or not, please suggest.
```
SELECT * FROM TEMP_TABLE_NEW;
ID1, ID2
--------
ABC, XYZ
ABC, XYZ
ABC, XYZ
``` | There's no need to have three update statements, one is sufficient:
```
UPDATE temp_table_new
SET id1 = id2,
id2 = id1;
``` | ```
CREATE TABLE Names
(
F_NAME VARCHAR(22),
L_NAME VARCHAR(22)
);
INSERT INTO Names VALUES('Ashutosh', 'Singh'),('Anshuman','Singh'),('Manu', 'Singh');
UPDATE Names N1 , Names N2 SET N1.F_NAME = N2.L_NAME , N1.L_NAME = N2.F_NAME
WHERE N1.F_NAME = N2.F_NAME;
SELECT * FROM Names;
``` | Swapping column values in Oracle | [
"",
"sql",
"oracle",
"logic",
""
] |
I don't want a temporary join, I want to actually update one column to be the equivalent of its counterpart from another table based on sharing an ID. It would be something like this:
```
UPDATE parts2.PartNumber
SET parts2.PartNumber=parts.PartNumber
WHERE parts2.id=parts.id;
```
Thanks in advance, sorry if this is a repeat question but I can't seem to find the answer, much to my surprise. The above code is giving a syntax error of `unknown column parts2.id in WHERE clause` even though that column certainly exists. | I believe the following will give you the desired results. But it is based on a join:
```
UPDATE parts2,
parts
SET parts2.PartNumber=parts.PartNumber
WHERE parts2.id=parts.id;
```
It will do exactly what you want, and it is the most efficient way. I don't see any sense in trying to avoid a `JOIN` here. | You can do this by joining the **parts** and **parts2** tables.
```
UPDATE parts2 AS p2
INNER JOIN parts AS p1
ON p1.id= p2.id
SET p2.PartNumber=p1.PartNumber
``` | Merge single column of one table into another based on ID | [
"",
"mysql",
"sql",
""
] |
I have a column which has values of format "01-01-2012" and I need to change it to "01.01.2012". I have to do the same for all the entries of that column.
Could you please suggest a way to do it? I need a general SQL statement.
Thanks | Try this using [REPLACE](http://dev.mysql.com/doc/refman/5.5/en/replace.html):
```
update tablename set column_name=REPLACE(column_name,'-','.')
``` | One way to do this would be to use a basic `UPDATE` statement. In this case, you would update your entire table and for each row, set the value of the column to the formatted value. The basic outline would be like so:
```
UPDATE [TableName]
SET [ColumnName] = [FormattingExpression]
```
An example of [FormattingExpression] could be:
```
REPLACE ([ColumnName], '-' , '.' )
```
This would [replace](http://technet.microsoft.com/en-us/library/ms186862.aspx) all instances of '-' with '.' for the values in the `[ColumnName]` column of your `[TableName]` table. | Format values on entire column SQL | [
"",
"mysql",
"sql",
"date-format",
""
] |
I have a MySQL database showing uptime for devices. The uptime is how long the device has been up, in seconds. I have a query showing all devices with uptime less than 86400 (24 hours), which I want to bring into a PHP page.
I want to format this uptime string number into a datetime that's human readable. Here is my query:
```
select hostname,type,uptime from devices where uptime < 86400
And here is sample output:
hostname: serverA
type: server
uptime: 10329
```
How can I turn that 10329 into an actual time? I assume there is some sort of value that can pull the current time in seconds, minus that uptime value, and convert that into MM/DD/YYYY HH:MM:SS readable value?
Any help is greatly appreciated!
EDIT:
Decided to use both answer below, but here is the query that fits my needs:
```
SELECT hostname AS Server,type AS Type,NOW() - INTERVAL uptime SECOND AS "Boot Time",last_polled AS "Last SNMP Poll" FROM devices WHERE uptime < 86400
```
Which returns:
```
Server: ServerA
Type: server
Boot Time: 2014-02-11 13:26:52
Last SNMP Poll: 2014-02-11 18:35:14
``` | ```
SELECT NOW() - INTERVAL 10329 SECOND
``` | `TIME_FORMAT(SEC_TO_TIME(uptime),'%Hh %im')`
Google: [Convert seconds to human readable time duration](https://stackoverflow.com/questions/8193868/convert-seconds-to-human-readable-time-duration)
`select hostname,type,TIME_FORMAT(SEC_TO_TIME(uptime),'%Hh %im') from devices where uptime < 86400`
This will give you Hours and minutes of uptime. You can add seconds in as well if you want by reading up on the `TIME_FORMAT` method.
<http://dev.mysql.com/doc/refman/4.1/en/date-and-time-functions.html#function_time-format> | MySQL: Convert relative seconds count to datetime? | [
"",
"mysql",
"sql",
"datetime",
""
] |
I have two tables with independent ids (they can't be connected via joins), and I want to query and get a GROUP\_CONCAT of both columns.
Example: table "a" has ids: **1, 2, 3**. table "b" has the ids: **10, 11**.
End result should be: **1, 2, 3, 10, 11**
I have tried a few queries:
```
SELECT CONCAT_WS(',', GROUP_CONCAT(a.id), GROUP_CONCAT(b.id)) AS combined FROM a, b
SELECT GROUP_CONCAT(a.id, b.id) AS combined FROM a, b
```
These queries are returning duplicate results, though (as in, all results from a twice and all results from b twice as well) | Try `union all`:
```
select group_concat(ab.id) as ids
from ((select id from a
) union all
(select id from b
)
) ab;
```
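A quick way to verify the `union all` approach is a sketch with Python's `sqlite3`, whose `group_concat` plays the role of MySQL's `GROUP_CONCAT` (ids taken from the question):

```python
import sqlite3

# Tables `a` and `b` with the ids from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (10), (11);
""")

# UNION ALL stacks the two id lists; group_concat joins them into one string.
row = con.execute("""
    SELECT group_concat(id) FROM (
        SELECT id FROM a
        UNION ALL
        SELECT id FROM b
    )
""").fetchone()
print(row[0])  # e.g. 1,2,3,10,11
```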
Your queries are doing cross joins between the tables, so the data after the cross join is:
```
a.id b.id
1 10
1 11
2 10
2 11
3 10
3 11
```
After the `union all`, the data is:
```
ab.id
1
2
3
10
11
``` | `GROUP_CONCAT(DISTINCT [])`
will help
<https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat> | GROUP_CONCAT on two tables | [
"",
"mysql",
"sql",
"concatenation",
""
] |
I need to show some records sorted based on modified column (latest activity on top)
(Post with new edit or comments at the top)
The app UI has a Twitter-like 'more' button for infinite scroll. Each 'more' will add the next 10 records to the UI.
The issue is that the pagination index breaks when any of the to-be-shown records is modified
for example
Suppose I have records `A,B,C,..Z` in the jobs table.
The first time, I'm showing the records `A-J` to the user using
```
SELECT * FROM Jobs WHERE 1 ORDER BY last_modified DESC LIMIT 0, 10
```
second time if none of the records are modified
```
SELECT * FROM Jobs WHERE 1 ORDER BY last_modified DESC LIMIT 10, 10
```
will return `K-T`
But if somebody modifies any records after `J` before the user clicks the 'more' button,
```
SELECT * FROM Jobs WHERE 1 ORDER BY last_modified DESC LIMIT 10, 10
```
will return `J-S`
Here record `J` is duplicated. I can hide it by not inserting `J` into the UI, but the more button will then show only 9 records. But this mechanism fails when a large number of records are updated: if 10 records are modified, the query will return `A-J` again.
What is the best way to handle this pagination issue?
Keeping a second time stamp fails if a record has multiple updates.
Server cache of queries? | I would do a NOT IN() and a LIMIT instead of just a straight LIMIT with a pre-set offset.
```
SELECT * FROM Jobs WHERE name NOT IN('A','B','C','D','E','F','G','H','I','J')
ORDER BY last_modified DESC LIMIT 10
```
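To see the bookkeeping this implies, here is a hypothetical sketch with Python's `sqlite3` — single letters stand in for records, and `seen` is the list your application would maintain between clicks of the 'more' button:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Jobs (name TEXT, last_modified INTEGER)")
con.executemany("INSERT INTO Jobs VALUES (?, ?)",
                [(chr(c), c) for c in range(ord('A'), ord('Z') + 1)])

seen = []
for _ in range(2):  # two clicks of the 'more' button
    placeholders = ",".join("?" * len(seen)) or "''"
    page = con.execute(
        f"SELECT name FROM Jobs WHERE name NOT IN ({placeholders}) "
        "ORDER BY last_modified DESC LIMIT 10", seen).fetchall()
    seen += [r[0] for r in page]

# No duplicates are possible even if rows are modified between pages,
# because already-shown names are excluded explicitly.
print(seen)
```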
This way you still get the most recent 10 every time, but you would need to track which IDs have already been shown and constantly negative-match on those in your SQL query. | Twitter timelines are not paged queries; they are queried by ids
This page will help you a lot in understanding timeline basics: <https://dev.twitter.com/docs/working-with-timelines>
Let's say each row has an id field too
```
id msg
1 A
2 B
....
```
First query will give you 10 post and max post\_id will be 10
Next query should be
```
SELECT * FROM Jobs WHERE id > 10 ORDER BY last_modified DESC LIMIT 0, 10
``` | Pagination issue while sorting based on last modified property | [
"",
"mysql",
"sql",
"sql-server",
"postgresql",
"pagination",
""
] |
This is my table
```
+---------------------+
| access_time |
+---------------------+
| 2014-02-17 12:00:00 |
| 2014-02-15 12:00:00 |
| 2014-02-15 12:00:00 |
| 2014-02-15 12:00:00 |
| 2014-02-15 11:00:00 |
| 2014-02-14 02:00:00 |
| 2014-02-13 18:00:00 |
| 2014-02-13 12:50:05 |
| 2014-02-13 12:48:57 |
| 2014-02-13 11:57:24 |
+---------------------+
```
and I want to have an output like this:
```
+---------------------+
| access_time |
+---------------------+
| 2014-02-17 12:00:00 |
| 2014-02-15 12:00:00 |
| 2014-02-14 02:00:00 |
| 2014-02-13 18:00:00 |
+---------------------+
```
What SQL command gives that kind of output? | If you want one arbitrary time per day, you can do:
```
select access_time
from table t
group by date(access_time);
```
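You can check this kind of per-day grouping quickly with Python's `sqlite3` (here taking the earliest time per day, with sample rows from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (access_time TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [
    ("2014-02-17 12:00:00",),
    ("2014-02-15 12:00:00",),
    ("2014-02-15 11:00:00",),
    ("2014-02-14 02:00:00",),
])

# date() strips the time portion, so GROUP BY collapses each calendar day.
rows = con.execute("""
    SELECT min(access_time) FROM t GROUP BY date(access_time)
    ORDER BY 1 DESC
""").fetchall()
print([r[0] for r in rows])
# -> ['2014-02-17 12:00:00', '2014-02-15 11:00:00', '2014-02-14 02:00:00']
```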
More typically, you would choose which of the times you would want, and do something like:
```
select min(access_time)
from table t
group by date(access_time);
``` | For SQL Server 2008 or more recent versions, I'd recommend:
```
select cast(access_time as date)
from t
group by cast(access_time as date)
``` | How to select distinct days, excluding time of that day | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I have a `Date` column in SQL (of **DateTime** datatype). For a particular scenario, I need to show it as blank in my browser. I tried
```
Dim DOB As Nullable(Of Date) = Date.Now
Dim datestring As String = DOB.Value.ToString("d")
DOB = Nothing
```
But it gives me an error saying `DateTime was not in a proper format`.
What is wrong here? | You can try either of the following:
```
Dim datestring As String = IF(DOB Is Nothing,"", DOB.Value.ToString("d"))
```
or
```
Dim datestring As String = ""
If Not DoB Is Nothing Then
datestring = DOB.Value.ToString("d")
End If
``` | The reason why your code would throw `DateTime was not in a proper format` is because *most likely* you don't do `Dim DOB As Nullable(Of Date) = Date.Now`. You get date from database and then go straight for value and value is null.
Let's say you have a Sql table with nullable date field and you read it using DataReader, you can do
```
Dim strDate as string
if IsDbNull(reader("myDateColumn")) then
strDate = String.empty
else
DirectCast(reader("myDateColumn"), DateTime).ToString("your format here")
End If
```
A shorter version would be
```
Dim strDate as string = If(IsDbNull(reader("myDateColumn")), String.empty,
DirectCast(reader("myDateColumn"), DateTime).ToString("your format here"))
``` | Set SQL DateTime to Empty/Nothing in vb.net | [
"",
"sql",
"vb.net",
"datetime",
"null",
""
] |
I have a schema called `GBO_ARC_SCHEMA`, in which I have one table called `TEST_EMP`,
and I have two users say `USER_A` and `USER_B`.
First I connected to `USER_A` and fired the below query:
```
select count(*)from TEST_EMP;
count
-----
20
```
After that I connected as `USER_B` and fired the below query, but it gives an error saying that the table or view does not exist
```
select count(*)from TEST_EMP;
```
But if I use `schema.objectname`, it allows me to query like below
```
select count(*)from GBO_ARC_SCHEMA.TEST_EMP;
```
But as per my requirement, I don't want to specify the schema name.
Can somebody help me out?
```
create public synonym TEST_EMP for GBO_ARC_SCHEMA.TEST_EMP;
```
If you only want `user_b` to omit the schema name, you want to create a *private synonym* WITHIN user\_b's schema (that is logged on as user\_b)
```
create synonym TEST_EMP for GBO_ARC_SCHEMA.TEST_EMP;
```
**If you insist** on not using synonyms, then, after logging in, do a
```
alter session set current_schema = GBO_ARC_SCHEMA;
``` | On the server roles for the login you are using, simply uncheck sysadmin and serveradmin roles.
That should solve it | Accessing a table without specifying the schema name | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"oracle-sqldeveloper",
""
] |
I have a table which stores IDs in a comma-separated string, as follows:
```
field_id_13
------------
1234,5266,3678,4733,7372,5766,2578
```
and I'm using the following Active Record Construct (CI), to pull out the data:
```
$this->db->select("*")
->from("channel_data d")
->join("channel_titles t","t.entry_id=d.entry_id")
->where("d.field_id_13 LIKE '%".$id."%'")
->where("t.status","open")
->get();
```
The problem is that sometimes, on my search for the ID '266', the `%` wildcards surrounding the clause cause a result matching against '5266' to be returned.
What's the alternative here to make sure it only returns the correct ID/rows?
Hopefully this makes sense. | You have to use FIND\_IN\_SET in where condition
```
$this->db->where("FIND_IN_SET('$id',d.field_id_13 ) !=", 0);
``` | The simplest way, if not the most elegant, would be to add commas to the start and end of the values in the id field, and then search for the value surrounded by commas as well. In other words:
```
field_id_13
------------
,1234,5266,3678,4733,7372,5766,2578,
```
and:
```
$this->db->select("*")
->from("channel_data d")
->join("channel_titles t","t.entry_id=d.entry_id")
->where("d.field_id_13 LIKE '%,".$id.",%'")
->where("t.status","open")
->get();
``` | Using LIKE to pull out an ID from comma separated string | [
"",
"mysql",
"sql",
"codeigniter",
"sql-like",
""
] |
I am creating a database for a school gradebook. I have 2 tables: Class and Teacher. And I have a problem. A teacher needs to be the form-master of a class. How should I model this relationship?
**Variant 1:**
**Class** will have a **teacherId** property.
**Variant 2:**
**Teacher** will have a **classId** property.
What variant is right? | *3 types of relationships are possible, generally speaking:*
**One-to-many**
* If a class can have many teachers, but a teacher can only teach one class, then put `ClassID` on the Teacher table.
**Many-to-many**
* If a class can have many teachers, and a teacher can teach many classes, then make a 3rd table, `TeacherClassLink` and put both `ClassID` and `TeacherID` in it, as well as an identity column like `TeacherClassID` -- as per the comments, this is known as a `cross reference table` although I always call them link tables, unofficially.
**One-to-one**
* If each teacher can only teach one class, AND each class can only have one teacher, then you can make `TeacherID` and `ClassID` be the same value in both tables.
---
In your example,
**Variant1** is a one-to-many relationship where each class can have only one teacher, but each teacher can teach many classes.
**Variant2** is a one-to-many relationship where each class can have multiple teachers, but each teacher can only have one class.
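To illustrate the many-to-many option concretely, here is a small `sqlite3` sketch — apart from the `TeacherClassLink` table the answer proposes, every name and row below is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Teacher (TeacherID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Class   (ClassID   INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE TeacherClassLink (
        TeacherClassID INTEGER PRIMARY KEY,
        TeacherID INTEGER REFERENCES Teacher(TeacherID),
        ClassID   INTEGER REFERENCES Class(ClassID)
    );
    INSERT INTO Teacher VALUES (1, 'Smith'), (2, 'Jones');
    INSERT INTO Class   VALUES (1, 'Math'),  (2, 'History');
    -- Smith teaches both classes; History is taught by both teachers.
    INSERT INTO TeacherClassLink VALUES (1, 1, 1), (2, 1, 2), (3, 2, 2);
""")

rows = con.execute("""
    SELECT t.Name, c.Name
    FROM TeacherClassLink l
    JOIN Teacher t ON t.TeacherID = l.TeacherID
    JOIN Class   c ON c.ClassID   = l.ClassID
    ORDER BY l.TeacherClassID
""").fetchall()
print(rows)  # -> [('Smith', 'Math'), ('Smith', 'History'), ('Jones', 'History')]
```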
*Based on the business needs of your application, you should decide which of the options to choose.* | Making the assumption that a specific class is taught by one teacher and that a teacher can typically teach more than one class you will want to add a TeacherId to the class entity. | About Database development, what variant is better? | [
"",
"sql",
""
] |
I don't know how else to call this.
I have this table right here:

And as you can see `Titon` followed `SLH_444` and `Knoedel_475`.
Now, how can I select all rows for users whom `Titon` followed?
I expect a result of all rows where the `username` is either `SLH_444` or `Knoedel_475`
<http://sqlfiddle.com/#!2/4a986/1>
This is what I have so far, but it errors because the subquery returns more than 1 row
```
SELECT * FROM ACTIVITIES
WHERE targetname =
(select targetname from ACTIVITIES
where activity='followed' and username='Titon');
``` | I think you want:
```
SELECT * FROM ACTIVITIES
WHERE username in
(select targetname from ACTIVITIES
where activity='followed' and username='Titon');
```
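The subquery-as-a-set idea is easy to check with Python's `sqlite3` (sample rows approximating the screenshot):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ACTIVITIES (username TEXT, activity TEXT, targetname TEXT)")
con.executemany("INSERT INTO ACTIVITIES VALUES (?, ?, ?)", [
    ("Titon", "followed", "SLB_444"),
    ("Titon", "followed", "Knoedel_475"),
    ("SLB_444", "posted", None),
    ("Knoedel_475", "posted", None),
    ("Someone", "posted", None),
])

# The inner query yields the set of users Titon followed;
# IN matches every row whose username is in that set.
rows = con.execute("""
    SELECT username, activity FROM ACTIVITIES
    WHERE username IN (SELECT targetname FROM ACTIVITIES
                       WHERE activity = 'followed' AND username = 'Titon')
""").fetchall()
print(rows)  # the rows for SLB_444 and Knoedel_475
```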
Although Titon has not followed `SLH_444`, he has followed `SLB_444`. | ```
SELECT targetname FROM ACTIVITIES WHERE username = 'Titon' AND activity = 'followed';
``` | How to select with multiple subqueries? | [
"",
"sql",
""
] |
I am loading data from an SQLite database into an MS SQL database.
The field type in the Sqlite is Numeric.
What is the best field type to use so that I don't lose any detail? | What is the underlying data representing? `Numeric` in SQLite is technically a column affinity and not a storage type. Other databases do not have the concept of "column affinity", which is explained [here](http://www.sqlite.org/datatype3.html). In SQL Server, the types describe how the data is being stored.
The intention of a numeric column affinity is probably a fixed point numeric value. In that case, `decimal`/`numeric` would be the right type in SQL Server.
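SQLite's affinity behaviour is easy to demonstrate from Python, which can help you inspect what a `NUMERIC` column actually holds before choosing the SQL Server type (values here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v NUMERIC)")  # NUMERIC is an affinity, not a type
con.executemany("INSERT INTO t VALUES (?)", [("123",), ("1.5",), ("2014-01-01",)])

# Text that looks numeric is coerced to a number; anything else stays text.
rows = con.execute("SELECT v, typeof(v) FROM t").fetchall()
print(rows)  # -> [(123, 'integer'), (1.5, 'real'), ('2014-01-01', 'text')]
```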
Note that `numeric` can also apply to dates, datetime, and boolean values. You would want to store these with the corresponding data types in SQL Server (probably `date`, `datetime`, or `bit`). | My understanding of SQLite is limited, but from what I'm reading here: [Datatypes in SQLite](http://www.sqlite.org/datatype3.html), the numeric affinity does not seem to actually tell you much about the data contained therein.
If that's the case, the best thing to do is to query the values to see what you're dealing with, then decide. Odds are it will be a `FLOAT` or `DECIMAL`. | What is the best SQL field type to use for a Sqlite numeric | [
"",
"sql",
"sql-server",
"sqlite",
""
] |
I have a feeling this is a fairly simple one. I need to edit a line in the `SELECT` statement that looks to see if there is a value of `NULL` in a field. If there is a `NULL` value, I need the new column (not named) to display a '0' for that row. Where the row has data I need to display '1' in that row. Is there a way to do this without greatly modifying the logic I have? (Using SQL Server Management Studio)
Here's the code:
```
SELECT DISTINCT t.Name,
CONVERT(VARCHAR(10), t.DischargeDateTime, 120) AS DischargeDate,
t.PatientPortalEnabled,
t.Allergy,
CONVERT(VARCHAR(10), t.AllergyUpdateTime, 120) AS AllergyUpdate,
/*This is where I would like to put the logic if possible*/ <> NULL,
t.ElapseTimeForAllergyUpdateInMinutes,
t.OldValue,
t.NewValue,
t.ElapseTimeForAllergyUpdateInHours,
t.DischargeDateTime
``` | Try this:
```
CASE WHEN MyField IS NULL THEN 0 ELSE 1 END
```
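A quick way to see the NULL-to-0/1 mapping in action (a `sqlite3` sketch with invented rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (MyField TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("abc",), (None,), ("xyz",)])

# NULL rows map to 0, everything else to 1.
rows = con.execute("""
    SELECT MyField, CASE WHEN MyField IS NULL THEN 0 ELSE 1 END FROM t
""").fetchall()
print(rows)  # -> [('abc', 1), (None, 0), ('xyz', 1)]
```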
Here it is in your code:
```
SELECT DISTINCT t.Name,
CONVERT(VARCHAR(10), t.DischargeDateTime, 120) AS DischargeDate,
t.PatientPortalEnabled,
t.Allergy,
CONVERT(VARCHAR(10), t.AllergyUpdateTime, 120) AS AllergyUpdate,
CASE WHEN t.MyField IS NULL THEN 0 ELSE 1 END,
t.ElapseTimeForAllergyUpdateInMinutes,
t.OldValue,
t.NewValue,
t.ElapseTimeForAllergyUpdateInHours,
t.DischargeDateTime
``` | You can do this using a [CASE](http://technet.microsoft.com/en-us/library/ms181765.aspx) statement.
```
SELECT DISTINCT t.Name,
CONVERT(VARCHAR(10), t.DischargeDateTime, 120) AS DischargeDate,
t.PatientPortalEnabled,
t.Allergy,
CONVERT(VARCHAR(10), t.AllergyUpdateTime, 120) AS AllergyUpdate,
/*This is where I would like to put the logic if possible*/
CASE
WHEN t.MyField IS NULL THEN 0
ELSE 1
END AS MyNewField,
t.ElapseTimeForAllergyUpdateInMinutes,
t.OldValue,
t.NewValue,
t.ElapseTimeForAllergyUpdateInHours,
t.DischargeDateTime
``` | If not <> NULL then "xxxField" equals '1' | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table with duplicate records of students; each row represents a course and a status for that student.
I'm using SQL SERVER 2008
Something like that:
```
+--------+-------------+-------------------------+---------------+-----------------+
| ID | STUDENT | DATE | COURSE | STATUS |
+--------+-------------+-------------------------+---------------+-----------------+
| 21245  | ROBERTA ZOR | 2014-01-08 00:00:00.000 | CIÊNCIAS      | FORMADO         |
| 39316  | IGOR BASTOS | 2008-04-07 00:00:00.000 | CIÊNCIAS      | CANCELADO       |
| 39316  | IGOR BASTOS | 2014-01-08 00:00:00.000 | ADMINISTRAÇÃO | FORMADO         |
| 39961  | LUIZ FELIPE | 2014-02-12 00:00:00.000 | ADMINISTRAÇÃO | CURSANDO        |
| 105937 | DANIEL CHO  | 2014-02-14 00:00:00.000 | ADMINISTRAÇÃO | CURSANDO        |
| 105937 | DANIEL CHO  | 2014-02-10 00:00:00.000 | ADMINISTRAÇÃO | RESERVA DE VAGA |
+--------+-------------+-------------------------+---------------+-----------------+
```
I need the most recent STATUS from the combination STUDENT/COURSE for all Students.
**UPDATE**
To get the STATUS I'm using another join:
```
SELECT a.ID, a.STUDENT, a.COURSE, MAX(a.DATE) as DATE
into #TABLE
FROM #STUDENTS a
INNER JOIN #STUDENTS b
on a.ID = b.ID
and a.COURSE = b.COURSE
and a.STATUS <> b.STATUS
GROUP BY a.ID,a.STUDENT, a.COURSE
select c.ID, c.STUDENT, c.COURSE, c.STATUS
into #FINAL_TABLE
from #TABLE t
inner join #STUDENTS C
on C.ID = T.ID and C.STUDENT = T.STUDENT and C.COURSE = T.COURSE
``` | This query will find the most recent row for each Student/Course combination. It uses a [Common Table Expression](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) to find the most recent date for each `STUDENT`/`COURSE` combination, and then uses that CTE to get the matching rows. The end result is the most recent row for each `STUDENT`/`COURSE` combination.
```
WITH
CTE_MostRecent AS (
-- For each student/course combination, retrieve:
-- * student ID
-- * course
-- * date of most recent entry
SELECT ID,
COURSE,
MAX(DATE) AS MaxDate -- Most recent date
FROM StudentCourses
GROUP BY ID,
COURSE
)
SELECT S.*
FROM StudentCourses AS S
-- Only select the the most recent row
-- for this STUDENT/COURSE combination
INNER JOIN CTE_MostRecent AS M
ON S.ID = M.ID
AND S.COURSE = M.COURSE
AND S.DATE = M.MaxDate
```
**Output ([SQLFiddle](http://sqlfiddle.com/#!3/dd480/28/0)):**
```
╔════════╦═════════════╦═════════════════════╦═══════════════╦═══════════╗
║ ID     ║ STUDENT     ║ DATE                ║ COURSE        ║ STATUS    ║
╠════════╬═════════════╬═════════════════════╬═══════════════╬═══════════╣
║ 105937 ║ DANIEL CHO  ║ 2014-02-14 00:00:00 ║ ADMINISTRAÇÃO ║ CURSANDO  ║
║ 39961  ║ LUIZ FELIPE ║ 2014-02-12 00:00:00 ║ ADMINISTRAÇÃO ║ CURSANDO  ║
║ 39316  ║ IGOR BASTOS ║ 2008-04-07 00:00:00 ║ CIÊNCIAS      ║ CANCELADO ║
║ 39316  ║ IGOR BASTOS ║ 2014-01-08 00:00:00 ║ ADMINISTRAÇÃO ║ FORMADO   ║
║ 21245  ║ ROBERTA ZOR ║ 2014-01-08 00:00:00 ║ CIÊNCIAS      ║ FORMADO   ║
╚════════╩═════════════╩═════════════════════╩═══════════════╩═══════════╝
```
**Note:** The output above is taken from an actual SQL-Server instance, not from SQLFiddle. SQLFiddle displays `DATETIME` values as "[MonthName], DD YYYY HH:MM:SS+0000"
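The same most-recent-row-per-group pattern can be checked with Python's `sqlite3` (dates and course names abbreviated, and the `DATE` column renamed `DT` here purely for clarity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE StudentCourses (ID INTEGER, COURSE TEXT, DT TEXT, STATUS TEXT)")
con.executemany("INSERT INTO StudentCourses VALUES (?, ?, ?, ?)", [
    (105937, "ADM", "2014-02-14", "CURSANDO"),
    (105937, "ADM", "2014-02-10", "RESERVA DE VAGA"),
    (39316,  "ADM", "2014-01-08", "FORMADO"),
    (39316,  "CIE", "2008-04-07", "CANCELADO"),
])

# The CTE finds the latest date per ID/COURSE; joining back on it
# keeps only the most recent row of each combination.
rows = con.execute("""
    WITH MostRecent AS (
        SELECT ID, COURSE, MAX(DT) AS MaxDate
        FROM StudentCourses GROUP BY ID, COURSE
    )
    SELECT s.ID, s.COURSE, s.STATUS
    FROM StudentCourses s
    JOIN MostRecent m ON s.ID = m.ID AND s.COURSE = m.COURSE AND s.DT = m.MaxDate
    ORDER BY s.ID, s.COURSE
""").fetchall()
print(rows)
# -> [(39316, 'ADM', 'FORMADO'), (39316, 'CIE', 'CANCELADO'), (105937, 'ADM', 'CURSANDO')]
```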
**Note:** This solution assumes that you have at most one entry per `STUDENT`/`COURSE` combination per day. | ```
select * from
(select *,ROW_NUMBER()over(partition by COURSE,STATUS order by dates)rn
from @student)t4 where rn=1
``` | Compare difference between 2 rows at the same table | [
"",
"sql",
"sql-server",
""
] |
I've created a new database using Microsoft SQL Server Management Studio, and now I want to interact with it through LabVIEW. I already have several VIs to interact with a previous database, using the database connectivity tool kit. This database was created by someone who has since left the project and I can't find it in anything but LabVIEW.
I'm quite experienced with LabVIEW, but completely new to and bewildered by databases.
Thank you in advance. | The first Connectivity Toolkit VI called should be Open Connection.
The existing code (VI) will either use a file or a string as an input.
If the input is a string, then you will need to create a new connection string compatible with your server. You can find common SQL Server strings at <https://www.connectionstrings.com/sql-server-2008/>
If the input is a file name, you can copy the .UDL file that is referenced and then modify the copied file by opening it (double click) and then select the OLE DB Provider for SQL Server and then set the connection options to point to your server, database etc. and then test the connection. | Basically the workflow you have to go through is the following:
* Open connection
* Execute your query
* Fetch data (if needed)
* Close connection
If you search for "Database" in the **NI Example Finder** shipped with Labview you will find a few good starting points.
In particular give a look to Database Connection.vi and Database Fetching.vi.
If you plan to use [transactions](http://msdn.microsoft.com/en-us/library/aa366402%28VS.85%29.aspx) try also Database Transaction.vi. | How do I connect to a database in LabView | [
"",
"sql",
"database",
"ssms",
"labview",
""
] |
I want to get users with total transactions of more than 50000
```
SELECT sum(tractions.amount) as total , user_id FROM `tractions` where `total` > 50000 group by user_id
```
I get this error:
```
#1054 - Unknown column 'total' in 'where clause'
```
Of course there is no total column in the table, but how should I write the condition? | You need a `having` clause with aggregation totals:
```
SELECT sum(tractions.amount) as total , user_id
FROM `tractions`
group by user_id
HAVING total > 50000;
```
EDIT:
If I were writing this query, it would look like this:
```
select t.user_id, sum(t.amount) as total
from tractions t
group by t.user_id
having sum(t.amount) > 50000;
```
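Here is the `HAVING` filter end-to-end, sketched with Python's `sqlite3` and invented amounts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tractions (user_id INTEGER, amount INTEGER)")
con.executemany("INSERT INTO tractions VALUES (?, ?)", [
    (1, 30000), (1, 25000),  # 55000 -> kept
    (2, 10000), (2, 10000),  # 20000 -> filtered out
    (3, 60000),              # 60000 -> kept
])

# HAVING filters on the aggregate, which WHERE cannot do.
rows = con.execute("""
    SELECT user_id, SUM(amount) AS total
    FROM tractions GROUP BY user_id
    HAVING total > 50000
    ORDER BY user_id
""").fetchall()
print(rows)  # -> [(1, 55000), (3, 60000)]
```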
I find capitalized keywords distracting. I (almost) always put the fields I am aggregating by at the beginning of a line. I use table aliases prolifically and they are abbreviations for the table. I don't escape identifiers, because I never use reserved words or exceptional characters in identifiers. I put the expression in the `having` clause because . . . well, I think once upon a time, not all databases supported using column aliases in `having` clauses. | try this
```
SELECT sum(tractions.amount) as total , user_id FROM `tractions` where sum(tractions.amount) > 50000 group by user_id
``` | select sum(amount) as total with condition on total | [
"",
"mysql",
"sql",
""
] |
I have a table instructor, and I want to delete the records that have a salary in a given range.
An intuitive way is like this:
```
delete from instructor where salary between 13000 and 15000;
```
However, under safe mode, I cannot delete a record without providing a primary key(ID).
So I write the following sql:
```
delete from instructor where ID in (select ID from instructor where salary between 13000 and 15000);
```
However, there is an error:
```
You can't specify target table 'instructor' for update in FROM clause
```
I am confused because when I write
```
select * from instructor where ID in (select ID from instructor where salary between 13000 and 15000);
```
it does not produce an error.
My question is:
1. What does this error message really mean, and why is my code wrong?
2. How do I rewrite this code to make it work under safe mode?
Thanks! | Googling around, the popular answer seems to be ["just turn off safe mode"](https://stackoverflow.com/questions/11448068/mysql-error-code-1175-during-update-in-mysql-workbench):
```
SET SQL_SAFE_UPDATES = 0;
DELETE FROM instructor WHERE salary BETWEEN 13000 AND 15000;
SET SQL_SAFE_UPDATES = 1;
```
If I'm honest, I can't say I've ever made a habit of running in safe mode. Still, I'm not entirely comfortable with this answer since it just assumes you should go change your database config every time you run into a problem.
So, your second query is closer to the mark, but hits another problem: MySQL applies a few restrictions to subqueries, and one of them is that you can't modify a table while selecting from it in a subquery.
Quoting from the MySQL manual, [Restrictions on Subqueries](https://dev.mysql.com/doc/refman/4.1/en/subquery-restrictions.html):
> In general, you cannot modify a table and select from the same table
> in a subquery. For example, this limitation applies to statements of
> the following forms:
>
> ```
> DELETE FROM t WHERE ... (SELECT ... FROM t ...);
> UPDATE t ... WHERE col = (SELECT ... FROM t ...);
> {INSERT|REPLACE} INTO t (SELECT ... FROM t ...);
> ```
>
> Exception: The preceding prohibition does not apply if you are using a subquery for the modified table in the FROM clause. Example:
>
> ```
> UPDATE t ... WHERE col = (SELECT * FROM (SELECT ... FROM t...) AS _t ...);
> ```
>
> Here the result from the subquery in the FROM clause is stored as a temporary table, so the relevant rows in t have already been selected by the time the update to t takes place.
That last bit is your answer. Select target IDs in a temporary table, then delete by referencing the IDs in that table:
```
DELETE FROM instructor WHERE id IN (
SELECT temp.id FROM (
SELECT id FROM instructor WHERE salary BETWEEN 13000 AND 15000
) AS temp
);
```
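For what it's worth, SQLite doesn't impose MySQL's same-table restriction, but the wrapped-subquery form runs there unchanged — a quick `sqlite3` check with invented salaries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE instructor (id INTEGER PRIMARY KEY, salary INTEGER)")
con.executemany("INSERT INTO instructor VALUES (?, ?)",
                [(1, 12000), (2, 13500), (3, 14800), (4, 16000)])

# The derived table `temp` materializes the target ids first,
# which is exactly what sidesteps MySQL's restriction.
con.execute("""
    DELETE FROM instructor WHERE id IN (
        SELECT temp.id FROM (
            SELECT id FROM instructor WHERE salary BETWEEN 13000 AND 15000
        ) AS temp
    )
""")
print(con.execute("SELECT id FROM instructor ORDER BY id").fetchall())
# -> [(1,), (4,)]
```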
[SQLFiddle demo](http://sqlfiddle.com/#!2/47921/1). | You can trick MySQL into thinking you are actually specifying a primary key column. This allows you to "override" safe mode.
Assuming you have a table with an auto-incrementing numeric primary key, you could do the following:
```
DELETE FROM tbl WHERE id <> 0
``` | mysql delete under safe mode | [
"",
"mysql",
"sql",
""
] |
I have four tables and I need to find the average of each score for a particular id. I do not need the ENTIRE Column average, but the average of each record with the same id in each table.
I have tried this:
```
SELECT DISTINCT M.system_id, S.name, SUM(A.Score + WAL.score + F.score + WIN.score) / 4
AS avgScore
FROM dbo.T3_MovementSystemJoin AS M
INNER JOIN dbo.T3_systems AS S ON M.system_id = S.id
INNER JOIN T3_ApplicationSystemJoin AS A ON A.Application_id = @application_id
INNER JOIN T3_WallTypeSystemJoin AS WAL ON WAL.wall_id = @wall_id
INNER JOIN T3_FenestrationSystemJoin AS F ON F.fenestration_id = @fen_id
INNER JOIN T3_WindowOrientation_System AS WIN ON WIN.window_id = @window_id
INNER JOIN T3_ConstructionSystemJoin AS C ON C.contruction_id = @construction_id
INNER JOIN T3_JointDepthSystemJoin AS J ON J.JointDepth_id = @JointDepth_id
INNER JOIN T3_JointGapSystemJoin AS JG ON JG.JointGap_id = @JointGap_id
WHERE (M.movement_id = @movement_id)
GROUP BY M.System_id, S.name
```
Thanks for your help! | No `Sum` needed (and no grouping too)
```
SELECT DISTINCT M.system_id, S.name, (IsNull(A.Score, 0) + IsNull(WAL.score, 0) + IsNull(F.score, 0) + IsNull(WIN.score, 0)) /4
as avgscore
FROM dbo.T3_MovementSystemJoin AS M
INNER JOIN dbo.T3_systems AS S ON M.system_id = S.id
INNER JOIN T3_ApplicationSystemJoin AS A ON A.Application_id = @application_id
INNER JOIN T3_WallTypeSystemJoin AS WAL ON WAL.wall_id = @wall_id
INNER JOIN T3_FenestrationSystemJoin AS F ON F.fenestration_id = @fen_id
INNER JOIN T3_WindowOrientation_System AS WIN ON WIN.window_id = @window_id
INNER JOIN T3_ConstructionSystemJoin AS C ON C.contruction_id = @construction_id
INNER JOIN T3_JointDepthSystemJoin AS J ON J.JointDepth_id = @JointDepth_id
INNER JOIN T3_JointGapSystemJoin AS JG ON JG.JointGap_id = @JointGap_id
WHERE (M.movement_id = @movement_id)
``` | If you don't want NULL values to become zeros and included in the average:
```
SELECT DISTINCT M.system_id, S.name, X.avgScore
FROM dbo.T3_MovementSystemJoin AS M
INNER JOIN dbo.T3_systems AS S ON M.system_id = S.id
INNER JOIN T3_ApplicationSystemJoin AS A ON A.Application_id = @application_id
INNER JOIN T3_WallTypeSystemJoin AS WAL ON WAL.wall_id = @wall_id
INNER JOIN T3_FenestrationSystemJoin AS F ON F.fenestration_id = @fen_id
INNER JOIN T3_WindowOrientation_System AS WIN ON WIN.window_id = @window_id
INNER JOIN T3_ConstructionSystemJoin AS C ON C.contruction_id = @construction_id
INNER JOIN T3_JointDepthSystemJoin AS J ON J.JointDepth_id = @JointDepth_id
INNER JOIN T3_JointGapSystemJoin AS JG ON JG.JointGap_id = @JointGap_id
CROSS APPLY ( SELECT AVG(s) FROM (VALUES (A.Score),(WAL.score),(F.score),(WIN.score) ) scores(s) ) X(avgScore)
WHERE (M.movement_id = @movement_id)
GROUP BY M.System_id, S.name
``` | MSSQL How do I get average of four records from different tables? | [
"",
"sql",
"sql-server",
""
] |
How to generate a Fibonacci series in SQL!
I need to generate Fibonacci series 0 1 1 2 3 5 8 13 21 ... N
I did this easily using C code; I need to do this using SQL! | Try this!
```
declare @a int
declare @b int
declare @c int
Declare @i int
set @a=0
set @b=1
set @i=0
set @c=0
Print 'Fibonacci Series'
print @a
print @b
while @i<10
Begin
set @c=@a+@b
print @c
set @i=@i+1
set @a=@b
set @b=@c
end
```
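As a cross-check (not part of the T-SQL answer), the same iteration in Python prints the identical sequence:

```python
a, b = 0, 1
series = [a, b]
for _ in range(10):  # ten loop iterations, like @i < 10 above
    a, b = b, a + b
    series.append(b)
print(*series)  # 0 1 1 2 3 5 8 13 21 34 55 89
```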
# Outputs !
```
Fibonacci Series
0
1
1
2
3
5
8
13
21
34
55
89
``` | **Try This Simple Query:**
1) For Result In Row-by-Row (Single Column, Multiple Rows)
```
WITH Fibonacci (PrevN, N) AS
(
SELECT 0, 1
UNION ALL
SELECT N, PrevN + N
FROM Fibonacci
WHERE N < 1000000000
)
SELECT PrevN as Fibo
FROM Fibonacci
OPTION (MAXRECURSION 0);
```
*Output 1:*

2) For Result in Only One Row (Comma separated, in Single Cell)
```
WITH Fibonacci (PrevN, N) AS
(
SELECT 0, 1
UNION ALL
SELECT N, PrevN + N
FROM Fibonacci
WHERE N < 1000000000
)
SELECT Substring(
(SELECT cast(', ' as varchar(max)) + cast(PrevN as varchar(max))
FROM Fibonacci
FOR XML PATH('')),3,10000000) AS list
```
*Output 2:*
 | How to generate Fibonacci Series | [
"",
"sql",
"sql-server",
""
] |
This is my table
```
create table table1(action1 varchar(10),view1 varchar(10))
insert into table1 values('A1','VIEW'),('A1','EDIT'),('A2','VIEW'),('A3','VIEW'),('A3','EDIT')
```
I need output like this!
```
action1 VIEW EDIT
A1 VIEW EDIT
A2 VIEW NULL
A3 VIEW EDIT
```
I tried using pivot, but I get the error `Msg 102, Level 15, State 1, Line 8
Incorrect syntax near '('.` | ```
select * from table1
pivot
(
max(view1)
for view1 in([VIEW],[EDIT])
)as piv;
```
# `See Demo` | By using MAX() function you can get the result
> ```
> SELECT action1,
> MAX( CASE view1 WHEN 'View' THEN view1 ELSE '' END ) ViewCol,
> MAX( CASE view1 WHEN 'Edit' THEN view1 ELSE null END ) EditCol
> FROM table1
> GROUP BY action1
> ``` | Sql query to convert column to rows | [
"",
"sql",
"sql-server",
""
] |
I have a simple database and I think my answer should be simple too, but I can't find it. The db looks like:
```
-----------------
name | ip
-----------------
nick |192.168.1.10
john |192.168.1.1
john |192.168.1.2
john |192.168.1.3
lucy |192.168.10.1
lucy |192.168.10.2
```
I need a query that returns all the rows, but with the result sorted by count(ip) per name,
and the result of the above list should be something like:
```
------------------
name |ip
------------------
nick |192.168.1.10
lucy |192.168.10.1
lucy |192.168.10.2
john |192.168.1.1
john |192.168.1.2
john |192.168.1.3
``` | try this query:
```
SELECT t1.*
FROM table1 t1
INNER JOIN (SELECT Count(name) counter,
name
FROM table1
GROUP BY name)t2
ON t1.name = t2.name
ORDER BY counter;
```
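You can confirm the ordering with Python's `sqlite3` using the question's sample data (a tie-break on name/ip is added here only to make the output deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (name TEXT, ip TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)", [
    ("nick", "192.168.1.10"),
    ("john", "192.168.1.1"), ("john", "192.168.1.2"), ("john", "192.168.1.3"),
    ("lucy", "192.168.10.1"), ("lucy", "192.168.10.2"),
])

# The derived table counts rows per name; ordering by that count
# sorts every row without needing a GROUP BY on the outer query.
rows = con.execute("""
    SELECT t1.name, t1.ip
    FROM table1 t1
    JOIN (SELECT name, COUNT(name) AS counter FROM table1 GROUP BY name) t2
      ON t1.name = t2.name
    ORDER BY t2.counter, t1.name, t1.ip
""").fetchall()
print([r[0] for r in rows])  # -> ['nick', 'lucy', 'lucy', 'john', 'john', 'john']
```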
*[SQL Fiddle](http://sqlfiddle.com/#!15/199f3/5)* | Here is a [SQL Fiddle](http://sqlfiddle.com/#!2/42219/10)
Try this statement.
```
SELECT * FROM tblSAMPLE ORDER BY name DESC,ip
``` | How to order the query result by count() of field in postgresql without group by | [
"",
"sql",
"postgresql",
""
] |
I'm inserting data programmatically into tables. When I do this from another table, it's swift, only slowed very slightly if there are a lot of records. Even then, it's a matter of seconds.
When I insert from a query to a table, it goes into minutes - roughly a minute for every 1,000 records inserted.
The source query itself, when just run as a select query, takes maybe 1 - 2 seconds. Is the query running for every record that's inserted? I'd hoped that it would run once for the whole data set. Or is there something else that's causing the function to run so slowly when compared to inserting "flat" data from another table.
The VBA I'm using is fairly innocuous:
```
CurrentDb.Execute "SELECT [Extra Value Concatenation].* " & _
    "INTO [" & strTableName & "] FROM [Extra Value Concatenation];"
```
and the source query is below - it uses [Allen Browne's Concatenate function](http://allenbrowne.com/func-concat.html).
```
SELECT [Extra Fields - Fee Protection Insurance Concatenate].ContactID,
ConcatRelated('[Fee Protection Insurance]',
'[Extra Fields - Fee Protection Insurance Concatenate]',
'ContactID = ' & [ContactID])
AS [Fee Protection Insurance]
FROM [Extra Fields - Fee Protection Insurance Concatenate];
```
**EDIT: In answer to Fionnuala's comment, but I couldn't format it properly in the comments.**
Using fictional data, here's roughly what I want.
T1 contains client records.
```
ContactID Name
1 Example Limited
2 Another Company Limited
```
T2 contains extra fields. ContactID is there as a foreign key, and may be duplicated if multiple records are held.
```
ContactID FieldValue
1 Value 1
1 Value 2
2 Value 3
2 Value 4
2 Value 5
```
When I left join the tables, the duplicates from T2 show up, so I get
```
ContactID Name FieldValue
1 Example Limited Value 1
1 Example Limited Value 2
2 Another Company Limited Value 3
2 Another Company Limited Value 4
2 Another Company Limited Value 5
```
when what I want is
```
ContactID Name FieldValue
1 Example Limited Value 1; Value 2
2 Another Company Limited Value 3; Value 4; Value 5
```
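Access has no native GROUP_CONCAT (hence the UDF), but for engines that do (MySQL's GROUP_CONCAT, SQLite's group_concat) this is a single set-based query. A sketch in Python/SQLite, with made-up table names t1/t2 mirroring the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (ContactID INTEGER, Name TEXT)")
conn.execute("CREATE TABLE t2 (ContactID INTEGER, FieldValue TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [(1, "Example Limited"), (2, "Another Company Limited")])
conn.executemany("INSERT INTO t2 VALUES (?, ?)",
                 [(1, "Value 1"), (1, "Value 2"),
                  (2, "Value 3"), (2, "Value 4"), (2, "Value 5")])

# One set-based pass: group the joined rows and concatenate per contact.
rows = conn.execute("""
    SELECT t1.ContactID, t1.Name,
           group_concat(t2.FieldValue, '; ') AS FieldValues
    FROM t1 LEFT JOIN t2 ON t1.ContactID = t2.ContactID
    GROUP BY t1.ContactID, t1.Name
    ORDER BY t1.ContactID
""").fetchall()

print(rows)
# e.g. [(1, 'Example Limited', 'Value 1; Value 2'), ...]
# (concatenation order is not guaranteed by SQLite)
```

This avoids the per-row UDF cost entirely, at the price of engine portability.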
Hence concatenating the data in a temporary table seemed like a good idea, but it is slowing everything down. Is there another way I should be looking at my query? | You are using a user-defined function (UDF), ConcatRelated, so the UDF runs once for each record; otherwise, Access SQL works in the normal, set-based way. | I have written a pretty basic module that should accomplish this for you very quickly compared to your current process. Note you will need to re-name your project to something other than "Database" on the project navigation pane for this to work
I have assumed that table1 and table2 are the same as you have above
table3 is simply a list of all records in table 1 with a blank "FieldValues" field to add
the required "value1, value2" etc. This should result in Table3 being populated with your desired result
IMPORTANT: For anyone using recordset .edit and .update functions make sure you remove record level locking in the access options menu, it can be found under the "client settings" section of Access options, failing to do so will cause extreme bloating of your file as access will not drop record locks until you compact and repair the database. This may cause your database to become un-recoverable once it hits the 2gb limit for windows.
```
Function addValueField()
'Declarations
Dim db As Database
Dim rs1 As DAO.Recordset
Dim rs2 As DAO.Recordset
Dim qry As String
Dim value As String
Dim recordcount as Long
Set db = CurrentDb()
'Open a select query that is a join of table 1 and table 2
'I have made Contact ID a foreign key in the second table
qry = "SELECT Table1.[Contact ID], Table1.Name, Table2.FieldValue FROM Table1 INNER JOIN Table2 ON Table1.[Contact ID] = Table2.[Contact ID(FK)] ORDER BY [Contact ID];"
Set rs1 = db.OpenRecordset(qry, dbOpenDynaset)
'Table 3 was filled with each record from table1, with a 3rd "Field Value" field to
'be filled with your Value 1, Value 2 etc.
qry = "SELECT * FROM Table3 ORDER BY [Contact ID]"
Set rs2 = db.OpenRecordset(qry, dbOpenDynaset)
'Ensure you have enough file locks to process records
recordcount = rs1.recordcount
DAO.DBEngine.SetOption DAO.dbMaxLocksPerFile, recordcount + 1000
rs1.MoveFirst
rs2.MoveFirst
'Here we test to see if "Name" is the same in both recordsets, if it is, add the FieldValue
'to the FieldValue in Table3, otherwise move to the next record in table 3 and compare again
Do While Not rs1.EOF
If IsNull(rs2![FieldValue]) = True Then
If rs2![FieldValue] = "" Then
rs2.Edit
rs2![FieldValue] = rs1![FieldValue]
rs2.Update
rs1.MoveNext
Else
rs2.Edit
rs2![FieldValue] = rs2![FieldValue] & "; " & rs1![FieldValue]
rs2.Update
rs1.MoveNext
End If
Else
rs2.MoveNext
End If
Loop
rs1.close
rs2.close
db.close
set db = nothing
set rs1 = nothing
set rs2 = nothing
End Function
``` | When inserting data from a query to a table, does the query run for each record inserted? | [
"",
"sql",
"ms-access",
"vba",
"ms-access-2010",
""
] |
I have the following table
```
Index BookNumber
2 51
2 52
2 53
1 41
1 42
1 43
```
I am trying to come up with the following output
```
Index BookNumber1 Booknumber2 Booknumber3
----------------------------------------------
1 41 42 43
2 51 52 53
```
I was able to come up with the following query, however the output is unexpected:
```
SELECT DISTINCT
index,
CASE WHEN index = 1 THEN Booknumber END AS BookNumber1,
CASE WHEN index = 2 THEN Booknumber END AS BookNumber2,
CASE WHEN index = 3 THEN Booknumber END AS BookNumber3
FROM Mytable;
```
I get following output
```
Index BN1 BN2 BN3
------------------------------
1 41 null null
1 null 42 null
1 null null 43
2 51 null null
2 null 52 null
2 null null 53
```
Is there a way to compress this to only 2 rows? | I am not quite sure how the `index` in your query matches the `index` column in your data. But the query that you want is:
```
SELECT index,
max(CASE WHEN index = 1 THEN Booknumber END) AS BookNumber1 ,
max(CASE WHEN index = 2 THEN Booknumber END) AS BookNumber2,
max(CASE WHEN index = 3 THEN Booknumber END) AS BookNumber3
FROM Mytable
GROUP BY index;
```
Given your data, the query seems more like:
```
SELECT index,
max(CASE WHEN ind = 1 THEN Booknumber END) AS BookNumber1 ,
max(CASE WHEN ind = 2 THEN Booknumber END) AS BookNumber2,
max(CASE WHEN ind = 3 THEN Booknumber END) AS BookNumber3
FROM (select mt.*, row_number() over (partition by index order by BookNumber) as ind
from Mytable mt
) mt
GROUP BY index;
```
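A portable sanity check of the conditional-aggregation approach (Python/SQLite; the column is renamed `idx` since `index` is reserved, and a correlated COUNT stands in for ROW_NUMBER so it runs on any SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (idx INTEGER, booknumber INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(2, 51), (2, 52), (2, 53), (1, 41), (1, 42), (1, 43)])

# Rank each book number within its idx, then pivot with MAX(CASE ...).
rows = conn.execute("""
    SELECT idx,
           MAX(CASE WHEN rn = 1 THEN booknumber END) AS bn1,
           MAX(CASE WHEN rn = 2 THEN booknumber END) AS bn2,
           MAX(CASE WHEN rn = 3 THEN booknumber END) AS bn3
    FROM (SELECT idx, booknumber,
                 (SELECT COUNT(*) FROM mytable m2
                  WHERE m2.idx = m.idx AND m2.booknumber <= m.booknumber) AS rn
          FROM mytable m)
    GROUP BY idx
    ORDER BY idx
""").fetchall()

print(rows)  # [(1, 41, 42, 43), (2, 51, 52, 53)]
```

Grouping collapses the per-row CASE results into the two expected rows.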
By the way, "index" is a reserved word, so I assume that it is just a placeholder for another column name. Otherwise, you need to escape it with double quotes or square braces. | Assuming there are always 3 or fewer book numbers for each index, you could use:
```
with data as
(select idx,
booknumber as bn1,
lag(booknumber, 1) over(partition by idx order by idx, booknumber) as bn2,
lag(booknumber, 2) over(partition by idx order by idx, booknumber) as bn3
from books)
select *
from data
where data.bn1 = (select max(x.bn1) from data x where x.idx = data.idx)
```
sqlfiddle demo is here: <http://sqlfiddle.com/#!6/8dc82/5/0> | Combing multiple rows into one row | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a field named `birthdate` in my users database.
Some users' birthdate looks like:
> 2014/02/18
and others look like:
> 2014-02-18
I have to use `birthdate varchar(10)`,
so I want to change 2014/02/18 to 2014-02-18.
```
SELECT `birthdate`
FROM `users`
WHERE `birthdate` LIKE '%%%%/%%/%%' REPLACE '%%%%-%%-%%'
```
This code just selects users with 2014/02/18.
How can I replace that with 2014-02-18 in SQL? | You want an UPDATE statement if you want to change the data:
```
update users
set birthdate = replace(birthdate, '/', '-')
where birthdate like '%/%/%';
```
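The same UPDATE can be sketched against SQLite from Python to confirm that rows already using '-' are left untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (birthdate TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("2014/02/18",), ("2014-02-18",), ("1999/12/31",)])

# Normalize slash-formatted dates in place; the WHERE clause skips
# rows that already use the '-' separator.
conn.execute("""
    UPDATE users
    SET birthdate = replace(birthdate, '/', '-')
    WHERE birthdate LIKE '%/%/%'
""")

normalized = sorted(r[0] for r in conn.execute("SELECT birthdate FROM users"))
print(normalized)  # ['1999-12-31', '2014-02-18', '2014-02-18']
```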
You should, however, learn to store dates as dates and not as strings. | If you just want to replace `/` with `-` you can do the comparison like this:
```
SELECT `birthdate`
FROM `users`
WHERE replace( `birthdate`, '/', '-' ) LIKE '2014-02-18'
```
Otherwise, remove both `/` and `-` from date field values and compare for `YYYYMMDD` formatted value.
```
SELECT `birthdate`
FROM `users`
WHERE replace( replace( `birthdate`, '/', '' ), '-', '' ) LIKE '20140218'
``` | update field of table | [
"",
"mysql",
"sql",
"replace",
"phpmyadmin",
""
] |
I'm trying to create an invoice-type report with a header, a main body containing a table (which includes a totals section), and a footer.
The problem I'm getting is that the table height in the main body depends on how many rows are returned from my SPROC; if there is not much data, the table takes up a small portion in the middle of the page, with the "totals" and "disclaimer" ending nowhere near the bottom of the page (ideally, I want to put them just above the footer).
I have seen guides to get around this problem, with the general method to add blank (null) lines in the SPROC, forcing the table to be bigger than expected and thus forcing the totals and disclaimer row to be near the bottom of the page.
I have implemented this solution but there are a few problems with this.
However, the problem with this method is that the logic only works assuming each line in the table only takes up 1 line (i.e. a short description so the line does not overflow to the next line). Once the row has multiple lines, the height of the row changes, and since the height is not a multiple of 1 line (i.e. a single row is size 1, a double row should be size 2 but is instead size 1.5 or something), I can't take into account how many rows I should add in the SPROC.
I have tried another method where the rows i need are separated from the main table (which gives me problems in itself - cant calculate totals in the report but i guess i could go around this by calculating the total in the SPROC itself) and are hidden until the last page.
This method would be good except that with this method, the blank space is always showing up, stopping the table from expanding to that area.
I have included a link to imgur to describe my current problem which should be easier to understand.
*(Image: Design and problem)*
*(Image: Attempted solution)*
There must be an easier way to just force the table to take up the whole space. | In the end I've settled for a solution which is very close to what I need and involves in using hidden elements. (similar to what Dan Andrews suggested - but catered to what I needed)
So first of all, I have included the totals in the footer of the report so that it stays at the bottom all the time.
This is shown below:
[](https://i.stack.imgur.com/glfO3.png)
Within the subtotals footer, I have placed a message "Continued on next page" which is also a hidden field - this is so I can show this message on any reports that have more than 1 page showing (hence the user knows there's more than 1 page for the report and so the blank space doesn't look as bad).
To hide the totals field, I have the following expression in the "hidden" property:
```
=iif(Globals!PageNumber=Globals!TotalPages,false,true)
```
And for the "continued" field:
```
=iif(Globals!PageNumber=Globals!TotalPages,true,false)
```
Now the problem with this is that the footer does not know what the tallied up values are from the table due to the footer not having access to the table in the report body.
To get around this issue, I have created a "totals" section which is part of the table that does all the calculation I need to show on the footer.
I put a name for each of the text boxes that I need access to in the footer like so:
[](https://i.stack.imgur.com/TxQ8R.png)
And on the corresponding footer element, I have the expression like so:
[](https://i.stack.imgur.com/vCXFX.png)
Now that the footer contains the totals, the totals field is always shown at the bottom regardless of how big the table grows (which was my initial problem - the footer being placed wherever it wanted to go) with a small trade off of having a blank space on any pages that's not the last. I have put in a "continued on next page" message there instead which shows that there are more pages to the report and so it looks like the white space is being used.
This is a single page example:
[](https://i.stack.imgur.com/pRm9d.png)
And this is a multi page example:
[](https://i.stack.imgur.com/WfDhY.png) | All of this can be achieved in the report design itself. Here's what you know:
* The height of the page (P)
* The height of your static data (S)
* The height of your header (H)
* The height of your data row (R)
* The count of data rows (C)
* The height of your footer (F)
So you can work out how big the remaining space is on the page: P - ((C\*R) + S + F + H)
When I was working through my problem, I realised that I needed two 'spacers' for when the space remaining on the page was too small to fit the footer; spacer one filled in the remainder of page 1, whilst spacer 2 was the available space on page 2: P - (S + F + H).
So, you'll need hidden sections in the report to do the height calculations and two detail rows for spacers beneath any other detail rows.
There's a demo solution I created on [GitHub](https://github.com/AdrianNichols/ssrs-non-native-functions) which shows this 'pin-to-bottom' feature in action as well as resetting page numbers for groups, data-driven headers/footers, label translation and international formatting (page size, number formats, etc).
For this look at Sales Invoice 5. | SSRS Reports - force table to expand to bottom of page | [
"",
"sql",
"reporting-services",
""
] |
I have the following query:
```
select 'Amount' as Amount,
('£'+ CAST(SUM(rc.[Fee Charge] +rc.[Fee Charge VAT] +rc.ExtraCharges+rc.ExtraChargesVAT+rc.OtherCharges+rc.OtherChargesVAT+rc.WaitingCharge+rc.[WaitingCharge VAT])AS nvarchar(50))) AS [CompletedTurnover],
('£'+ CAST(SUM(rin.[Fee Charge] +rin.[Fee Charge VAT] +rin.ExtraCharges+rin.ExtraChargesVAT+rin.OtherCharges+rin.OtherChargesVAT+rin.WaitingCharge+rin.[WaitingCharge VAT])AS nvarchar(50))) AS [In Progress Turnover],
('£'+ CAST(SUM(run.[Fee Charge] +run.[Fee Charge VAT] +run.ExtraCharges+rc.ExtraChargesVAT+run.OtherCharges+run.OtherChargesVAT+run.WaitingCharge+run.[WaitingCharge VAT])AS nvarchar(50))) AS [Unallocated Turnover],
123 as [Credit Note Value]
from tblreservation R
left join tblreservation rc on R.ReservationsID = rc.reservationsid and rc.Completed = 1
left join tblreservation rin on R.reservationsid = rin.reservationsid and rin.InProgress = 1
left join tblreservation run on Run.ReservationsID = r.ReservationsID and run.completed = 0 and run.inprogress = 0
```
This returns data like so:
```
CompletedTurnover In progress Turnover Unallocated Turnover Credit Note Value
1202039920 23998858945 9384585845 123
```
This is as expected. However, I need the following output and I'm struggling a bit using pivots.
```
Completed Turnover 1202039920
In Progress Turnover 23998858945
Unallocated Turnover 9384585845
Credit Note Value 123
```
Any help would be greatly appreciated. | You can use a union to get the results you need:
```
select 'Completed turnover' Description,
( '£'+ CAST(SUM(rc.[Fee Charge] +
rc.[Fee Charge VAT] +
rc.ExtraCharges+
rc.ExtraChargesVAT+
rc.OtherCharges+
rc.OtherChargesVAT+
rc.WaitingCharge+
rc.[WaitingCharge VAT]
)AS nvarchar(50))) value
from ....
union all
select 'In Progress turnover', .....
from ....
union all
select 'Unallocated Turnover', .....
from ....
```
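A portable sketch of the union-based unpivot (Python/SQLite; the `totals` table here is an assumed stand-in for the one-row aggregate result from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE totals
                (CompletedTurnover TEXT, InProgressTurnover TEXT,
                 UnallocatedTurnover TEXT, CreditNoteValue TEXT)""")
conn.execute("INSERT INTO totals VALUES "
             "('1202039920','23998858945','9384585845','123')")

# Each SELECT in the union picks one column and labels it; UNION ALL
# stacks the columns into rows.
rows = conn.execute("""
    SELECT 'Completed Turnover' AS descr, CompletedTurnover AS value FROM totals
    UNION ALL
    SELECT 'In Progress Turnover', InProgressTurnover FROM totals
    UNION ALL
    SELECT 'Unallocated Turnover', UnallocatedTurnover FROM totals
    UNION ALL
    SELECT 'Credit Note Value', CreditNoteValue FROM totals
""").fetchall()

for descr, value in rows:
    print(descr, value)
```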
You probably want to look at using this in conjunction with a [CTE](http://msdn.microsoft.com/en-us/library/ms175972.aspx) | This process to convert columns into rows is actually called an UNPIVOT. You can do this a few different ways.
**UNPIVOT** function:
```
;with cte as
(
select 'Amount' as Amount,
('£'+ CAST(SUM(rc.[Fee Charge] +rc.[Fee Charge VAT] +rc.ExtraCharges+rc.ExtraChargesVAT+rc.OtherCharges+rc.OtherChargesVAT+rc.WaitingCharge+rc.[WaitingCharge VAT])AS nvarchar(50))) AS [CompletedTurnover],
('£'+ CAST(SUM(rin.[Fee Charge] +rin.[Fee Charge VAT] +rin.ExtraCharges+rin.ExtraChargesVAT+rin.OtherCharges+rin.OtherChargesVAT+rin.WaitingCharge+rin.[WaitingCharge VAT])AS nvarchar(50))) AS [In Progress Turnover],
('£'+ CAST(SUM(run.[Fee Charge] +run.[Fee Charge VAT] +run.ExtraCharges+rc.ExtraChargesVAT+run.OtherCharges+run.OtherChargesVAT+run.WaitingCharge+run.[WaitingCharge VAT])AS nvarchar(50))) AS [Unallocated Turnover],
123 as [Credit Note Value]
from tblreservation R
left join tblreservation rc
on R.ReservationsID = rc.reservationsid
and rc.Completed = 1
left join tblreservation rin
on R.reservationsid = rin.reservationsid
and rin.InProgress = 1
left join tblreservation run
on Run.ReservationsID = r.ReservationsID
and run.completed = 0
and run.inprogress = 0
)
select col, value
from cte
unpivot
(
value
for col in (CompletedTurnover, [In Progress Turnover],
[Unallocated Turnover], [Credit Note Value])
) u;
```
**CROSS APPLY with VALUES:**
```
;with cte as
(
select 'Amount' as Amount,
('£'+ CAST(SUM(rc.[Fee Charge] +rc.[Fee Charge VAT] +rc.ExtraCharges+rc.ExtraChargesVAT+rc.OtherCharges+rc.OtherChargesVAT+rc.WaitingCharge+rc.[WaitingCharge VAT])AS nvarchar(50))) AS [CompletedTurnover],
('£'+ CAST(SUM(rin.[Fee Charge] +rin.[Fee Charge VAT] +rin.ExtraCharges+rin.ExtraChargesVAT+rin.OtherCharges+rin.OtherChargesVAT+rin.WaitingCharge+rin.[WaitingCharge VAT])AS nvarchar(50))) AS [In Progress Turnover],
('£'+ CAST(SUM(run.[Fee Charge] +run.[Fee Charge VAT] +run.ExtraCharges+rc.ExtraChargesVAT+run.OtherCharges+run.OtherChargesVAT+run.WaitingCharge+run.[WaitingCharge VAT])AS nvarchar(50))) AS [Unallocated Turnover],
123 as [Credit Note Value]
from tblreservation R
left join tblreservation rc
on R.ReservationsID = rc.reservationsid
and rc.Completed = 1
left join tblreservation rin
on R.reservationsid = rin.reservationsid
and rin.InProgress = 1
left join tblreservation run
on Run.ReservationsID = r.ReservationsID
and run.completed = 0
and run.inprogress = 0
)
select col, value
from cte
cross apply
(
values
('CompletedTurnover', CompletedTurnover),
('In Progress Turnover', [In Progress Turnover]),
('Unallocated Turnover', [Unallocated Turnover]),
('Credit Note Value', [Credit Note Value])
) c (col, value)
``` | Unpivot from existing query | [
"",
"sql",
"sql-server",
"sql-server-2012",
"unpivot",
""
] |
**BACKGROUND**
I have two tables:
`drugDistributionHistory` - details what/when drugs were distributed to patients.
`drugPrices` - prices of drugs.
For this example, I am looking at drug called "FLUOXETINE", also known as "PROZAC".
If I query for this drug in the `drugPrices` table, I get these results:
```
SELECT
drugName,
drugBrandName,
drugStrength,
drugDosage,
drugPrice
FROM
drugPrices
WHERE
drugName like '%fluoxetine%'
```
Results:
```
FLUOXETINE PROZAC 10MG CAP 0.02
FLUOXETINE PROZAC 20MG CAP 0.05
```
In my `drugDistributionHistory` table, I am looking for this same type of drug, however, the drug may be entered differently, for example:
```
select
drugName,
strength,
measurement
from
drugDistributionHistory ddh
INNER JOIN facilities f ON ddh.facilityId = f.facilityId
where
f.facilityId = 40
AND ddh.resultCode = 'Received'
AND MONTH(dateGiven) = '01'
AND YEAR(dateGiven) = '2014'
and ddh.drugName like 'fluoxetine%'
order by
    ddh.drugName
```
Results:
```
Fluoxetine / Prozac 60.00 mg
Fluoxetine / Prozac 20.00 mg
```
**QUESTION**
My query to try to match drugs with their prices looks like this. I am trying to match on either the `drugName` or the `drugBrandName` (ie: PROZAC is the brand name drug of FLUOXETINE):
```
select
drugName,
strength,
measurement
from
drugDistributionHistory ddh
INNER JOIN facilities f ON ddh.facilityId = f.facilityId
INNER JOIN drugPrices dp ON
('%' + dp.drugName + '%' like '%' + ddh.drugName + '%'
OR '%' + dp.drugBrandName + '%' like '%' + ddh.drugName + '%')
where
f.facilityId = 40
AND ddh.resultCode = 'Received'
AND MONTH(dateGiven) = '01'
AND YEAR(dateGiven) = '2014'
and ddh.drugName like 'fluoxetine%'
order by
ddh.drugName
```
No results are found.
**Am I misinterpreting how to use wildcards when joining tables?** | Since you did put the percent signs on both sides of the `LIKE` operator, I suspect that you incorrectly believe that `LIKE` is symmetric; it is not. You do not get a match because neither `'%Fluoxetine%'` nor `'%Prozac%'` matches `'%Fluoxetine / Prozac%'` from the `drugDistributionHistory` table on the right side of the `LIKE` statement. However, you would get a match if your operands switch sides:
```
INNER JOIN drugPrices dp ON
(ddh.drugName LIKE '%' + dp.drugName + '%'
OR ddh.drugName LIKE '%' + dp.drugBrandName + '%')
```
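The asymmetry is easy to verify in a quick Python/SQLite session (note SQLite's LIKE is case-insensitive for ASCII, much like a default SQL Server collation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
history_name = "Fluoxetine / Prozac"   # as stored in drugDistributionHistory
price_name = "FLUOXETINE"              # as stored in drugPrices

# LIKE is not symmetric: the pattern belongs on the right-hand side.
wrong = conn.execute("SELECT ? LIKE '%' || ? || '%'",
                     (price_name, history_name)).fetchone()[0]
right = conn.execute("SELECT ? LIKE '%' || ? || '%'",
                     (history_name, price_name)).fetchone()[0]

print(wrong)  # 0 -> 'FLUOXETINE' does not contain 'Fluoxetine / Prozac'
print(right)  # 1 -> 'Fluoxetine / Prozac' does contain 'FLUOXETINE'
```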
The left-hand side of `LIKE` does not need `%` signs. | I think
```
INNER JOIN drugPrices dp ON
('%' + dp.drugName + '%' like '%' + ddh.drugName + '%'
OR '%' + dp.drugBrandName + '%' like '%' + ddh.drugName + '%')
```
Should be
```
INNER JOIN drugPrices dp ON
(dp.drugName like '%' + ddh.drugName + '%'
OR dp.drugBrandName like '%' + ddh.drugName + '%')
```
Notice the removal of the % symbols on the source fields. You only use these on the right hand side of the LIKE operator.
Of course, this will only find results where the dp.drugName contains the ddh.drugName or the dp.drugBrandName contains the ddh.DrugName. Is this acceptable? | Joining with wildcard not matching strings | [
"",
"sql",
"sql-server",
"join",
"wildcard",
""
] |
I'm new to SQL, and thinking about my datasets relationally instead of hierarchically is a big shift for me. I'm hoping to get some insight on the performance (both in terms of storage space and processing speed) versus design complexity of using numeric row IDs as a primary key instead of string values which are more meaningful.
Specifically, this is my situation. I have one table ("parent") with a few hundred rows, for which one column is a string identifier (10-20 characters) which would seem to be a natural choice for the table's primary key. I have a second table ("child") with hundreds of thousands (or possibly millions or more) of rows, where each row refers to a row in the parent table (so I could create a foreign key constraint on the child table). (Actually, I have several tables of both types with a complex set of references among them, but I think this gets the point across.)
So I need a column in the child table that gives an identifier to rows in the parent table. Naively, it seems like creating the column as something like VARCHAR(20) to refer to the "natural" identifier in the first table would lead to a huge performance hit, both in terms of storage space and query time, and therefore I should include a numeric (probably auto\_increment) id column in the parent table and use this as the reference in the child. But, as the data that I'm loading into MySQL don't already have such numeric ids, it means increasing the complexity of my code and more opportunities for bugs. To make matters worse, since I'm doing exploratory data analysis, I may want to muck around with the values in the parent table without doing anything to the child table, so I'd have to be careful not to accidentally break the relationship by deleting rows and losing my numeric id (I'd probably solve this by storing the ids in a third table or something silly like that.)
So my question is, are there optimizations I might not be aware of that mean a column with hundreds of thousands or millions of rows that repeats just a few hundred string values over and over is less wasteful than it first appears? I don't mind a modest compromise of efficiency in favor of simplicity, as this is for data analysis rather than production, but I'm worried I'll code myself into a corner where everything I want to do takes a huge amount of time to run.
Thanks in advance. | I wouldn't be concerned about space considerations primarily. An integer key would typically occupy four bytes. The varchar will occupy between 1 and 21 bytes, depending on the length of the string. So, if most are just a few characters, a `varchar(20)` key will occupy more space than an integer key. But not an extraordinary amount more.
Both, by the way, can take advantage of indexes. So speed of access is not particularly different (of course, longer/variable length keys will have marginal effects on index performance).
There are better reasons to use an auto-incremented primary key.
1. You know which values were most recently inserted.
2. If duplicates appear (which shouldn't happen for a primary key of course), it is easy to determine which to remove.
3. If you decide to change the "name" of one of the entries, you don't have to update all the tables that refer to it.
4. You don't have to worry about leading spaces, trailing spaces, and other character oddities.
You do pay for the additional functionality with four more bytes in a record devoted to something that may not seem useful. However, such efficiencies are premature and probably not worth the effort. | Gordon is right (which is no surprise).
Here are the considerations for you not to worry about, in my view.
When you're dealing with dozens of megarows or less, storage space is basically free. Don't worry about the difference between INT and VARCHAR(20), and don't worry about the disk space cost of adding an extra column or two. It just doesn't matter when you can buy decent terabyte drives for about US$100.
INTs and VARCHARS can both be indexed quite efficiently. You won't see much difference in time performance.
Here's what you should worry about.
There is one significant pitfall in index performance, that you might hit with character indexes. You want the columns upon which you create indexes to be declared `NOT NULL`, and you never want to do a query that says
```
WHERE colm IS NULL /* slow! */
```
or
```
WHERE colm IS NOT NULL /* slow! */
```
This kind of thing defeats indexing. In a similar vein, your performance will suffer bigtime if you apply functions to columns in search. For example, don't do this, because it too defeats indexing.
```
WHERE SUBSTR(colm,1,3) = 'abc' /* slow! */
```
One more question to ask yourself. Will you uniquely identify the rows in your subsidiary tables, and if so, how? Do they have some sort of natural compound primary key? For example, you could have these columns in a "child" table.
```
parent varchar(20) pk fk to parent table
birthorder int pk
name varchar(20)
```
Then, you could have rows like...
```
parent birthorder name
homer 1 bart
homer 2 lisa
homer 3 maggie
```
But, if you tried to insert a fourth row here like this
```
homer 1 badbart
```
you'd get a primary key collision because (homer,1) is occupied. It's probably a good idea to work how you'll manage primary keys for your subsidiary tables.
Character strings containing numbers sort funny. For example, '2' comes after '101'. You need to be on the lookout for this. | What are the merits of using numeric row IDs in MySQL? | [
"",
"mysql",
"sql",
"performance",
""
] |
We have a couple of data schemas and we are investigating migration to Liquibase. (One of the schemas has already been migrated to Liquibase.)
Important question for us is if Liquibase supports dry run:
* We need to run database changes on all schemas without commit to ensure we do not have problems.
* In case of success all database changes run once again with commit.
(The question similar to this [SQL Server query dry run](https://stackoverflow.com/questions/19837655/sql-server-query-dry-run) but related to Liquibase)
**Added after the answer**
I read the documentation related to updateSQL and it does not answer the requirements of a “dry run”.
It just generates the SQL (on the command line, in the Ant task and in the Maven plugin).
I will clarify my question:
Does Liquibase support control over transactions?
I want to open a transaction before executing the Liquibase changelog, and roll back the transaction after the changelog execution.
Of course, I need to verify the result of the execution.
Is it possible?
**Added**
Without control on transactions (or dry run) we can not migrate to Liquibase all our schemas.
Please help. | You can try "updateSQL" mode, it will connect db (check you access rights), acquire db lock, generate / print SQL sentences to be applied (based on db state and you current liquibase change sets) also it will print chageset id's missing in current state of db and release db lock. | Unfortunately, no.
By default, Liquibase commits the transaction executing all statements of a changeset. I assume that the migration paths you have in mind usually involve more than a single changeset.
The only way you can modify the transaction behavior is the `runInTransaction` attribute for the `<changeset>` tag, as [documented here](http://www.liquibase.org/documentation/changeset.html). By setting it to `false`, you effectively disable the transaction management, i.e. it enables auto-commit mode as you can see in [ChangeSet.java](http://grepcode.com/file/repo1.maven.org/maven2/org.liquibase/liquibase-core/3.0.7/liquibase/changelog/ChangeSet.java#255).
I think that this feature could be a worthwhile addition to Liquibase, so I opened a feature request: [CORE-1790](https://liquibase.jira.com/browse/CORE-1790). | Does Liquibase support dry run? | [
"",
"sql",
"transactions",
"liquibase",
""
] |
Simplified example: Two tables - people and times. Goal is to keep track of all the times a person walks through a doorway.
A person could have between 0 and 50 entries in the times table daily.
What is the proper and most efficient way to keep track of these records? Is it
```
times table
-----------
person_id
timestamp
```
I'm worried that this table can get well over a million records rather quickly. Insertion and retrieval times are of utmost importance.
ALSO: Obviously non-normalized but would it be a better idea to do
```
times table
-----------
person_id
serialized_timestamps_for_the_day
date
```
We need to access each individual timestamp for the person but ONLY query records on date or the person's id. | The second solution has some problems:
* Since you need to access individual timestamps1, `serialized_timestamps_for_the_day` cannot be considered [atomic](http://en.wikipedia.org/wiki/First_normal_form#Atomicity) and would violate the 1NF, causing a [bunch of problems](https://stackoverflow.com/a/3653574/533120).
* On top of that, you are introducing a redundancy: the `date` can be inferred from the contents of the `serialized_timestamps_for_the_day`, and your application code would need to make sure they never become "desynchronized", which is vulnerable to bugs.2
Therefore go with the first solution. If [properly indexed](http://use-the-index-luke.com/), a modern database on modern hardware can handle much more than mere "well over a million records". In this specific case:
* A composite index on {person\_id, timestamp} will allow you to query for person or combination of person and date by a simple index range scan, which can be very efficient.
* If you need just "by date" query, you'll need an index on {timestamp}. You can easily search for all timestamps within a specific date by searching for a range 00:00 to 24:00 of the given day.
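A minimal sketch of the first design and the day-range query (Python/SQLite; the schema and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE times (person_id INTEGER NOT NULL, ts TEXT NOT NULL)")
# Composite index supporting person, and person + date-range, lookups.
conn.execute("CREATE INDEX idx_person_ts ON times (person_id, ts)")

conn.executemany("INSERT INTO times VALUES (?, ?)", [
    (1, "2014-02-18 08:59:01"),
    (1, "2014-02-18 12:30:45"),
    (1, "2014-02-19 09:10:00"),
    (2, "2014-02-18 10:00:00"),
])

# All of person 1's walk-throughs on 2014-02-18: an index range scan.
rows = conn.execute("""
    SELECT ts FROM times
    WHERE person_id = 1
      AND ts >= '2014-02-18 00:00:00' AND ts < '2014-02-19 00:00:00'
    ORDER BY ts
""").fetchall()
print(len(rows))  # 2
```

The half-open range (`>= day, < next day`) covers a whole day without function calls on the indexed column.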
---
*1 Even if you don't query for individual timestamps, you still need to write them to the database one-by-one. If you have a serialized field, you first need to read the whole field to append just one value, and then write the whole result back to the database, which may become a performance problem rather quickly. And there are other problems, as mentioned in the link above.*
*2 As a general rule, **what can be inferred should not be stored**, unless there is a good performance reason to do so, and I don't see any here.* | Consider what are we talking about here. Accounting for just raw data `(event_time, user_id)` this would be `(4 + 4) * 1M ~ 8MB` per 1M rows. Let's try to roughly estimate this in a DB.
One integer 4 bytes, timestamp 4 bytes; row header, say 18 bytes -- this brings the first estimate of the row size to `4 + 4 + 18 = 26 bytes`. Using page fill factor of about 0.7; ==> `26 / 0.7 ~ 37` bytes per row.
So, for 1 M rows that would be about 37 MB. You will need index on `(user_id, event_time)`, so let's simply double the original to `37 * 2 = 74 MB`.
This brings the very rough, inaccurate estimate to 74 MB per 1M rows.
So, to keep this in memory all the time, you would need 0.074 GB for each 1M rows of this table.
To get a better estimate, simply create a table, add the index, and fill it with a few million rows.
Given the expected data volume, this can all easily be tested with 10M rows even on a laptop -- testing always beats speculating.
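In that spirit, here is a small sqlite3 sketch that creates the table and index, fills it with rows, and measures the resulting database size. The schema and row count are assumptions for illustration; the absolute numbers will differ per storage engine, but the "measure, don't speculate" workflow is the same:

```python
import sqlite3

# Build the hypothetical events table with its composite index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (user_id INTEGER, event_time INTEGER)")
conn.execute("CREATE INDEX idx_event_user_time ON event (user_id, event_time)")

n = 100_000  # scale this up to millions for a realistic test
conn.executemany(
    "INSERT INTO event VALUES (?, ?)",
    ((i % 1000, 1_388_534_400 + i) for i in range(n)),  # 1000 users, epoch seconds
)
conn.commit()

# Database size = page size * page count.
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
size_mb = page_size * page_count / 1024 / 1024
print(f"{n} rows with index: about {size_mb:.1f} MB")
```

Dividing the measured size by the row count gives an empirical bytes-per-row figure to compare against the back-of-the-envelope estimate above.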
P.S. Your option 2 does **not** look like an *"obviously better idea"* to me at all. | Proper way to create dynamic 1:M SQL table | [
"",
"sql",
"performance",
"database-design",
"architecture",
"one-to-many",
""
] |
Hey, I have a column named UID whose current data looks like this:
```
uid
---
0
0
0
0
0
0
0
0
0
```
I want to fill the values from 1 to N, for example:
```
uid
---
1
2
3
4
5
6
7
8
9
.
.
n
```
How can I loop through all rows and increment the values? | You can only define a column as AUTO\_INCREMENT if it is an integer type and indexed as a key (typically the PRIMARY KEY)!
Since you want the UID to be set as AUTO\_INCREMENT, why not make it the PRIMARY KEY:
```
CREATE TABLE TableName
(
UID INT AUTO_INCREMENT PRIMARY KEY,
-- other columns...
)
``` | ```
$n = 50; // number of records to renumber, suppose it is 50
for ($i = 1; $i <= $n; $i++)
{
    $res = mysql_query("SELECT * FROM `table_name` WHERE `UID` = 0 LIMIT 1");
    if ($row = mysql_fetch_assoc($res))
    {
        mysql_query("UPDATE `table_name` SET `UID` = '$i' WHERE `UID` = 0 LIMIT 1") or die(mysql_error());
    }
}
``` | Sql increment particular column from 1 to N | [
"",
"mysql",
"sql",
""
] |
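As a footnote to the question above (filling `uid` with 1 to N), the whole renumbering can be done in a single UPDATE instead of a client-side loop. A sketch using Python's sqlite3, with SQLite's `rowid` standing in for a stable ordering (an assumption; in MySQL a user variable such as `SET @i := 0; UPDATE t SET uid = (@i := @i + 1);` plays a similar role):

```python
import sqlite3

# Nine rows, all starting with uid = 0, as in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (uid INTEGER)")
for _ in range(9):
    conn.execute("INSERT INTO t (uid) VALUES (0)")

# Single UPDATE: each row's uid becomes the count of rows at or before
# it in rowid order, i.e. 1..N. No per-row round trips from the client.
conn.execute(
    "UPDATE t SET uid = (SELECT COUNT(*) FROM t AS s WHERE s.rowid <= t.rowid)"
)
renumbered = [r[0] for r in conn.execute("SELECT uid FROM t ORDER BY rowid")]
print(renumbered)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The correlated-subquery form is O(N²) in the worst case but fine for small tables; for large ones the user-variable (or window-function) approach is preferable.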
I like using stored procedures in ASP.NET and want to make sure I have the exact correct syntax. Are there any significant differences between these?
```
ALTER PROCEDURE dbo.mySP
@param1 INT
AS
BEGIN
SET NOCOUNT ON
SELECT f1 FROM foo WHERE f2 = @param1
END
RETURN
```
What if we don't use `BEGIN`, `END`, or `RETURN`? Any difference performance-wise?
```
ALTER PROCEDURE dbo.mySP
@param1 INT
AS
SET NOCOUNT ON
SELECT f1 FROM foo WHERE f2 = @param1
```
Does it make a difference if NOCOUNT is set on before or after BEGIN?
```
ALTER PROCEDURE dbo.mySP
@param1 INT
AS
SET NOCOUNT ON
BEGIN
SELECT f1 FROM foo WHERE f2 = @param1
END
```
And do we really need `RETURN`? Visual Studio includes it, but everything seems to work fine without it. Finally, I have seen it suggested to set NOCOUNT off before returning results, but I am not sure why. Thanks! | None of the things you have mentioned will affect performance, and in reality it is personal preference, but here are my thoughts:
No you don't need `RETURN` unless you want to change the return value from the default of 0. This can be useful for [returning error codes](http://technet.microsoft.com/en-us/library/ms190778%28v=sql.105%29.aspx) from a procedure.
It doesn't really matter whether or not you include `BEGIN/END`, however I would advise you do, the main reason is just to safeguard against errors, for example if I want to create a procedure to select IDs from 2 tables, foo and bar, and run the following:
```
CREATE PROCEDURE dbo.Test
AS
SELECT ID
FROM Foo;
GO
SELECT ID
FROM Bar;
GO
```
The procedure will be created, and I'll get the IDs from Bar, and the procedure will be left just returning the IDs from Foo. If I had enclosed the procedure in `BEGIN/END` then it would not have compiled:
```
CREATE PROCEDURE dbo.Test
AS
BEGIN
SELECT ID
FROM Foo;
GO
SELECT ID
FROM Bar;
END
GO
```
You can't protect yourself from all errors and typos, and we all make them from time to time, but every little helps!
Finally, it doesn't matter whether you put `SET NOCOUNT` before or after the `BEGIN`, but in keeping with the practice of wrapping the whole procedure in `BEGIN/END`, I think it should go after the `BEGIN`.
A lot of this is sourced from [this article](https://sqlblog.org/2008/10/30/my-stored-procedure-best-practices-checklist) by [Aaron Bertrand](https://stackoverflow.com/users/61305/aaron-bertrand), and summarised for this specific question. I am a pretty big fan of all his best practice guides, and that one is no exception. | I always include `SET NOCOUNT ON` on the very first line after the `BEGIN` statement.
If you are reading the stored procedure's results from an application and `SET NOCOUNT OFF` is in effect, you will get the results plus a row-count message back. If you are not handling that additional message, you could run into problems.
That is why it is best to `SET NOCOUNT ON`; this way you don't get extra messages that are not really important.
Message Shown:
```
ALTER PROC test
AS
BEGIN
SELECT *
FROM sys.tables
SET NOCOUNT ON;
END
```
Message Not Shown:
```
ALTER PROC test
AS
BEGIN
SET NOCOUNT ON;
SELECT *
FROM sys.tables
END
``` | Where exactly to set NOCOUNT ON in stored procedure? | [
"",
"asp.net",
"sql",
"t-sql",
""
] |
My English is not very good, but I will try to explain myself clearly. I need advice.
I am selecting sums of values from one table,
and my query looks like this:
```
SELECT * FROM
(
SELECT SUM(Clicks) AS sales , 'sales' as type
FROM ClicksPerDay
WHERE BannerID = 3456
UNION
SELECT SUM(Clicks) AS rent, 'rents' as type
FROM ClicksPerDay
WHERE BannerID = 3457
) total
```
The output that I have is:
```
sales | type
23 | rents
26 | sales
```
and I need it like this:
```
sales | rents
26 | 23
``` | Try the following:
```
SELECT
SUM( IF(BannerId = 3456, Clicks, 0) ) AS sales,
SUM( IF(BannerId = 3457, Clicks, 0) ) AS rents
FROM ClicksPerDay
WHERE BannerID IN (3456, 3457)
``` | Like this:
```
SELECT SUM(sales) AS sales, SUM(rent) AS rent FROM
(
SELECT SUM(Clicks) AS sales , 0 as rent
FROM ClicksPerDay
WHERE BannerID = 3456
UNION ALL
SELECT 0 as sales, SUM(Clicks) AS rent
FROM ClicksPerDay
WHERE BannerID = 3457
) total
``` | mysql UNION query output | [
"",
"mysql",
"sql",
""
] |
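The accepted answer's conditional aggregation can be verified on a tiny dataset. A sketch with Python's sqlite3, using the standard-SQL `CASE WHEN` form since SQLite has no `IF()` (the sample click values are assumptions chosen to reproduce the question's 26/23 output):

```python
import sqlite3

# Minimal ClicksPerDay table matching the question's schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ClicksPerDay (BannerID INTEGER, Clicks INTEGER)")
conn.executemany(
    "INSERT INTO ClicksPerDay VALUES (?, ?)",
    [(3456, 10), (3456, 16), (3457, 23)],
)

# Conditional aggregation: one pass over the table, one output row,
# one column per banner -- no UNION needed.
row = conn.execute(
    """
    SELECT
        SUM(CASE WHEN BannerID = 3456 THEN Clicks ELSE 0 END) AS sales,
        SUM(CASE WHEN BannerID = 3457 THEN Clicks ELSE 0 END) AS rents
    FROM ClicksPerDay
    WHERE BannerID IN (3456, 3457)
    """
).fetchone()
print(row)  # (26, 23), matching the desired single-row output
```

In MySQL, `IF(cond, a, b)` is just shorthand for this `CASE WHEN` expression, so both spellings produce the same plan.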
I have a table as such:
```
|testNr |date |
|1 | 2014-01-01 |
|2 | 2014-01-03 |
|3 | 2014-01-03 |
```
And another one like:
```
|finalID | testNr | from_date |to_date
|1 | 1 | 2013-12-01 |2013-12-20
|2 | 1 | 2013-12-25 |2014-01-05
|3 | 2 | 2014-01-01 |2014-01-05
```
I want to look up the `finalID` from the second table and join it with the first. It is important that the `date` in the first table falls within the date range in the second table.
I would like to end up with:
```
|testNr |date | finalID
|1 | 2014-01-01 | 2
|2 | 2014-01-03 | 3
|3 | 2014-01-03 | NULL
```
I am using SQL Server. Any ideas on how to approach this? | I think this is what you want:
```
select t1.testNr, t1.date, t2.finalID
from table1 t1 left join table2 t2
on t1.testNr=t2.testNr and t1.date between t2.from_date and t2.to_date
```
[fiddle](http://sqlfiddle.com/#!6/be125/2) | Use the following
```
SELECT t1.*, t2.finalID
FROM table1 t1
LEFT JOIN table2 t2 on t1.testNR=t2.testNR and t1.[date] between t2.from_date and t2.to_date
```
[SQLFiddle](http://sqlfiddle.com/#!3/02348/3) | Joining based on condition per row | [
"",
"sql",
"sql-server",
"join",
""
] |
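Both answers above use the same LEFT JOIN with a `BETWEEN` range condition. A runnable check with Python's sqlite3 (the question targets SQL Server, so this is only an illustrative port; ISO date strings compare correctly as text) reproduces the expected output:

```python
import sqlite3

# The two sample tables from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (testNr INTEGER, date TEXT);
CREATE TABLE table2 (finalID INTEGER, testNr INTEGER, from_date TEXT, to_date TEXT);
INSERT INTO table1 VALUES (1, '2014-01-01'), (2, '2014-01-03'), (3, '2014-01-03');
INSERT INTO table2 VALUES
  (1, 1, '2013-12-01', '2013-12-20'),
  (2, 1, '2013-12-25', '2014-01-05'),
  (3, 2, '2014-01-01', '2014-01-05');
""")

# LEFT JOIN keeps every row of table1; the date-range predicate lives in
# the ON clause so non-matching rows come back with finalID = NULL.
rows = conn.execute("""
    SELECT t1.testNr, t1.date, t2.finalID
    FROM table1 t1
    LEFT JOIN table2 t2
      ON t1.testNr = t2.testNr
     AND t1.date BETWEEN t2.from_date AND t2.to_date
    ORDER BY t1.testNr
""").fetchall()
print(rows)  # [(1, '2014-01-01', 2), (2, '2014-01-03', 3), (3, '2014-01-03', None)]
```

Moving the `BETWEEN` into the WHERE clause instead would silently turn the LEFT JOIN into an inner join and drop the NULL row, which is the most common mistake with this pattern.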
I have this query:
```
$sql = "SELECT
catalogs_values.name as word, catalogs.name as catalog
FROM
catalogs_values
INNER JOIN
catalogs ON catalogs_values.catalog_id = catalogs.id
WHERE
catalogs_values.id NOT IN (SELECT
valueid
FROM
monitor
WHERE
userid = $user_id)
AND catalogs_values.checked = 0
ORDER BY RAND()
LIMIT 1";
```
In my table I have about 1 million records and my query is very slow. Can you suggest some improvements to it? | Try replacing the `not in` with a `left outer join` or `not exists`:
```
SELECT cv.name as word, c.name as catalog
FROM catalogs_values cv INNER JOIN
catalogs c
ON cv.catalog_id = c.id LEFT JOIN
monitor m
on cv.id = m.valueid and userid=$user_id
WHERE m.valueid is null and cv.checked = 0
ORDER BY RAND()
LIMIT 1;
```
That might solve the performance problem.
If it doesn't you might need another way to get a random row. One simplish approach is to select a subset of random rows and then choose just one:
```
select word, catalog
from (SELECT cv.name as word, c.name as catalog
FROM catalogs_values cv INNER JOIN
catalogs c
ON cv.catalog_id = c.id LEFT JOIN
monitor m
on cv.id = m.valueid and userid=$user_id
WHERE m.valueid is null and cv.checked = 0 and rand() < 0.001
) t
ORDER BY RAND()
LIMIT 1;
```
The inner query chooses about one in one thousand rows (the proportion may need to change depending on how many rows match the various other conditions). This reduced set is then passed to the `order by rand()` method for choosing just one. | Ordering by MySQL's RAND() is ALWAYS slow; I use a very fast way to solve this:
1. Select the min and max ID from the table.
2. Run PHP's `rand(min_id, max_id)`.
3. Loop until you find an ID that really exists; believe me, it's really fast if the ID is a unique key, as it should be.
4. Once you are sure the ID exists, exit the loop; there you have your random ID.
```
$SQL = " SELECT MAX( id ) as x FROM table ";
$query = mysql_query($SQL);
$x = mysql_fetch_assoc($query);
$max = $x['x'];
$ok = false;
while($ok == false){
$id = rand(1, $max);
$SQL = "SELECT id FROM table WHERE id = ".$id." LIMIT 1";
$query = mysql_query($SQL);
$record = mysql_fetch_assoc($query);
if((int)$record['id'] > 0){
$ok = true;
}
}
//your ID is: $record['id'];
``` | Improve MySQL random | [
"",
"mysql",
"sql",
""
] |
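As a check on the accepted answer's rewrite above, a small sqlite3 sketch showing that the `NOT IN` form and the LEFT JOIN anti-join return the same rows (the sample data and the hard-coded user id standing in for the PHP `$user_id` are assumptions):

```python
import sqlite3

# Simplified version of the question's three tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE catalogs (id INTEGER, name TEXT);
CREATE TABLE catalogs_values (id INTEGER, catalog_id INTEGER, name TEXT, checked INTEGER);
CREATE TABLE monitor (userid INTEGER, valueid INTEGER);
INSERT INTO catalogs VALUES (1, 'animals'), (2, 'cities');
INSERT INTO catalogs_values VALUES (10, 1, 'cat', 0), (11, 1, 'dog', 0), (12, 2, 'rome', 1);
INSERT INTO monitor VALUES (7, 10);
""")

# Original shape: exclude values the user already monitors via NOT IN.
not_in = conn.execute("""
    SELECT cv.name, c.name FROM catalogs_values cv
    JOIN catalogs c ON cv.catalog_id = c.id
    WHERE cv.id NOT IN (SELECT valueid FROM monitor WHERE userid = 7)
      AND cv.checked = 0
""").fetchall()

# Rewritten shape: LEFT JOIN anti-join, which the optimizer can drive
# from an index on monitor (valueid, userid).
anti_join = conn.execute("""
    SELECT cv.name, c.name FROM catalogs_values cv
    JOIN catalogs c ON cv.catalog_id = c.id
    LEFT JOIN monitor m ON cv.id = m.valueid AND m.userid = 7
    WHERE m.valueid IS NULL AND cv.checked = 0
""").fetchall()

print(not_in)     # [('dog', 'animals')]
print(anti_join)  # same rows as the NOT IN version
```

One caveat worth noting: the two forms only stay equivalent while `monitor.valueid` contains no NULLs; a NULL in the `NOT IN` subquery makes that version return no rows at all.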