| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a sql command
```
SELECT A.*,
K1.AD CONTACTAD,
K1.SYD CONTACTSYD,
KR.AD KURUM,
K2.AD SUPERVISORAD,
K2.SYD SUPERVISORSOYAD,
K3.AD REQUIRERAD,
K3.SYD REQUIRERSOYAD
FROM
TD_ACTIVITY A,
KK_KS K1,
KK_KS K2,
KK_KS K3,
KK_KR KR
WHERE A.INS_CONTACT_PERSON = K1.ID
AND A.SUPERVISOR = K2.ID
AND A.REQUIRERID = K3.ID
AND A.INSTITUTIONID = KR.ID
```
I want to combine `CONTACTAD` and `CONTACTSOYAD` into one variable like `CONTACT`. How can I do that? | Use `||` for concatenation in Oracle:
```
SELECT A.*,
K1.AD || K1.SYD as CONTACT,
KR.AD KURUM,
K2.AD SUPERVISORAD,
K2.SYD SUPERVISORSOYAD,
K3.AD REQUIRERAD,
K3.SYD REQUIRERSOYAD
FROM
TD_ACTIVITY A,
KK_KS K1,
KK_KS K2,
KK_KS K3,
KK_KR KR
WHERE A.INS_CONTACT_PERSON = K1.ID
AND A.SUPERVISOR = K2.ID
AND A.REQUIRERID = K3.ID
AND A.INSTITUTIONID = KR.ID
``` | Use [Oracle concatenation operator](http://docs.oracle.com/cd/B19306_01/server.102/b14200/operators003.htm)
```
SELECT A.*,
K1.AD || ' ' || K1.SYD CONTACT,
KR.AD KURUM,
K2.AD SUPERVISORAD,
K2.SYD SUPERVISORSOYAD,
K3.AD REQUIRERAD,
K3.SYD REQUIRERSOYAD
FROM TD_ACTIVITY A,
KK_KS K1,
KK_KS K2,
KK_KS K3,
KK_KR KR
WHERE A.INS_CONTACT_PERSON = K1.ID
AND A.SUPERVISOR = K2.ID
AND A.REQUIRERID = K3.ID
AND A.INSTITUTIONID = KR.ID
``` | Sql combining two column into one variable | [
"",
"sql",
"oracle",
""
] |
I was using the primary key (`TicketID`) as an `identity` column, which auto-incremented every time a ticket was generated. Everything was going smoothly, and then I started seeing my `TicketID` jump from 5, 6 to 1007, 1008.
Upon googling, I found that this behavior is by design in SQL Server 2012:
<http://connect.microsoft.com/SQLServer/feedback/details/743300/identity-column-jumps-by-seed-value>
Now I want to custom-create a column which will auto-increment just like an identity column but does not jump and leave gaps (I don't want an `identity` column).
How should I do that? I think triggers are what I am looking for, but as I have never used triggers before, I would really appreciate some help here.
OR should I use a computed column? | I have just solved my problem using an AFTER INSERT trigger.
Here it is what I did.
```
ALTER TRIGGER [dbo].[tid2]
ON [dbo].[tblPrac2]
AFTER INSERT
AS
declare @nid int;
set @nid = ( select MAX(TicketID) from [tblPrac2] );
if(@nid is null)
begin
set @nid = 1;
end
else
set @nid = @nid + 1;
update tblPrac2 set TicketID = @nid where ID in (select ID from inserted)
Creating your own autoincrement mechanism is a bad idea.
Look at SEQUENCE object <http://technet.microsoft.com/en-us/library/ff878091.aspx>
But if you still want to do this, **one possible solution is**
```
CREATE TABLE dbo.TicketNumber
(
Number INT NOT NULL
)
INSERT INTO dbo.TicketNumber(Number) VALUES(0)
CREATE PROCEDURE dbo.sp_GenerateTicketNumber
(
@Number INT OUT
)
AS
BEGIN
DECLARE @CurrentNumber INT -- note: @Number is already declared as the OUT parameter
BEGIN TRANSACTION
SELECT
@CurrentNumber = Number
FROM dbo.TicketNumber WITH(UPDLOCK)
SET @Number = @CurrentNumber + 1
UPDATE dbo.TicketNumber
SET Number = @Number
COMMIT TRANSACTION
END
```
**The alternative implementation** of dbo.sp\_GenerateTicketNumber **may look like**
```
DECLARE @number TABLE(number INT);
UPDATE dbo.TicketNumber
SET
[Number] = [Number] + 1
OUTPUT INSERTED.Number INTO @number;
SELECT * FROM @number
```
**And probably the solution you want:**
```
CREATE PROCEDURE dbo.sp_RegisterTicket
(
@PersonName varchar(255),
@TicketNumber INT OUT
)
AS
BEGIN
BEGIN TRAN
SELECT
@TicketNumber = ISNULL(MAX(TicketId), 0) + 1 -- ISNULL handles the empty-table case
FROM dbo.Tickets WITH(UPDLOCK)
INSERT INTO dbo.Tickets VALUES(@TicketNumber, @PersonName)
COMMIT TRAN
END
```
Using example:
```
DECLARE @Number INT
EXEC dbo.sp_RegisterTicket 'Vasya', @Number OUT
SELECT @Number
``` | Auto Generate Unique column in SQL database | [
"",
"sql",
"sql-server",
""
] |
I imported a few XML files into my database. I have multiple survey IDs and multiple varnames per survey, each with a value.
I have been able to get the results I need, but I am not sure if I am doing this correctly.
I am not entirely sure how I would write a query to select out the desired survey IDs.
```
where varname in ('age') and value >18
```
would give me all survey IDs with participants older than 18.
But what if I have multiple variables and some are numbers? I can't just write > 18 if I have other variables that are numbers too.
How can I associate the value with that varname?
```
SURVEY_ID VARNAME VALUE
674078265 PROVID provider name
674078265 SEX Female
674078265 age 55
674078265 SP Internal Med
674078265 ID# 12345
674111111 ADJSAMP Included
674111111 PROVID provider name2
674111111 SEX Male
674111111 age 34
674111111 SP Surgery
674111111 ADJSAMP Included
674111111 ID# 6789
``` | ```
SELECT *
FROM TableName
WHERE SURVEY_ID IN (SELECT SURVEY_ID
FROM TableName
WHERE VARNAME = 'age'
AND VALUE > 18)
```
Or a more efficient way would be:
```
SELECT *
FROM TableName t
WHERE EXISTS (SELECT 1
FROM TableName
WHERE SURVEY_ID = t.SURVEY_ID
AND VARNAME = 'age'
AND VALUE > 18)
```
OR
```
SELECT t1.*
FROM TABLE_Name t1 LEFT JOIN TABLE_Name t2
ON t1.SURVEY_ID = t2.SURVEY_ID
WHERE t2.VARNAME = 'age'
AND t2.VALUE > 18
``` | Extending @M.Ali's answer to actually answer your question: add as many EXISTS clauses as you need for all field conditions:
```
SELECT *
FROM TableName t
WHERE EXISTS (SELECT 1
FROM TableName
WHERE SURVEY_ID = t.SURVEY_ID
AND VARNAME = 'age'
AND VALUE > 18)
and EXISTS (SELECT 1
FROM TableName
WHERE SURVEY_ID = t.SURVEY_ID
AND VARNAME = 'sex'
AND VALUE = 'Male')
``` | sql where clause query | [
"",
"sql",
"sql-server",
""
] |
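The EXISTS pattern in the answers above is easy to verify with a small in-memory sketch (Python's sqlite3 here; table and column names follow the question). Note the CAST, which addresses the questioner's worry about comparing numeric values stored as text:

```python
import sqlite3

# Build the survey table from (a subset of) the question's sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Survey (SURVEY_ID INTEGER, VARNAME TEXT, VALUE TEXT)")
conn.executemany(
    "INSERT INTO Survey VALUES (?, ?, ?)",
    [
        (674078265, "SEX", "Female"), (674078265, "age", "55"),
        (674111111, "SEX", "Male"),   (674111111, "age", "34"),
    ],
)

# One EXISTS per attribute condition, as in the answer:
# male respondents older than 18.
rows = conn.execute("""
    SELECT DISTINCT t.SURVEY_ID
    FROM Survey t
    WHERE EXISTS (SELECT 1 FROM Survey
                  WHERE SURVEY_ID = t.SURVEY_ID
                    AND VARNAME = 'age' AND CAST(VALUE AS INTEGER) > 18)
      AND EXISTS (SELECT 1 FROM Survey
                  WHERE SURVEY_ID = t.SURVEY_ID
                    AND VARNAME = 'SEX' AND VALUE = 'Male')
""").fetchall()
matching_ids = [r[0] for r in rows]
```

Both surveys satisfy the age condition, but only 674111111 also matches `SEX = 'Male'`, so only that ID survives.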
I have a text column in my DB that stores a list of house names and numbers, and when I order by ASC it outputs like this:
```
"Name No 1"
"Name No 12"
"Name No 14"
"Name No 5"
"Name No 7"
```
Is there an easy way of ordering it in the actual order like:
```
"Name No 1"
"Name No 5"
"Name no 7"
"Name No 12"
"Name No 14"
```
If it was my site I would have had two columns, one for the name and another for the number, but I cannot change it as it's a live site. | `ORDER BY LENGTH(colname) ASC, colname ASC`
This is probably as close as you're going to get to a "proper order". | Try this
```
SELECT
col1
FROM
Table1
ORDER BY
LENGTH(col1), col1
``` | Order by text and number | [
"",
"mysql",
"sql",
""
] |
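The LENGTH-then-value trick in the answers above can be checked quickly (sketched in Python's sqlite3; note it relies on the values sharing a common prefix, and SQL Server spells the function `LEN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE houses (name TEXT)")
conn.executemany("INSERT INTO houses VALUES (?)",
                 [("Name No 1",), ("Name No 12",), ("Name No 14",),
                  ("Name No 5",), ("Name No 7",)])

# Shorter strings sort first, then alphabetically within equal lengths,
# which puts "No 5" before "No 12".
ordered = [r[0] for r in conn.execute(
    "SELECT name FROM houses ORDER BY LENGTH(name), name")]
```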
I have three tables, all with the same column headings, and I would like to create one singular table from all three.
I'd also, if it is at all possible, like to create a trigger so that when one of these three source tables is edited, the change is copied into the new combined table.
I would normally do this as a view, however due to constraints on the STSrid, I need to create a table, not a view.
Edit\* Right, this is a bit ridiculous but anyhow.
> I HAVE THREE TABLES
>
> THERE ARE NO DUPLICATES IN ANY OF THE THREE TABLES
>
> I WANT TO COMBINE THE THREE TABLES INTO ONE TABLE
>
> CAN SOMEONE HELP PROVIDE THE SAMPLE SQL CODE TO DO THIS
>
> ALSO IS IT POSSIBLE TO CREATE TRIGGERS SO THAT WHEN ONE OF THE THREE TABLES IS EDITED THE CHANGE IS PASSED TO THE COMBINED TABLE
>
> I CAN NOT CREATE A VIEW DUE TO THE FACT THAT THE COMBINED TABLE NEEDS TO HAVE A DIFFERENT STSrid TO THE SOURCE TABLES, CREATING A VIEW DOES NOT ALLOW ME TO DO THIS, NOR DOES AN INDEXED VIEW.
Edit\* I Have Table A,Table B and Table C all with columns ORN, Geometry and APP\_NUMBER. All the information is different so
```
Table A (I'm not going to give an example geometry column)
ORN ID
123 14/0045/F
124 12/0002/X
Table B (I'm not going to give an example geometry column)
ORN ID
256 05/0005/D
989 12/0012/X
Table C (I'm not going to give an example geometry column)
ORN ID
043 13/0045/D
222 11/0002/A
```
I want one complete table of all info
```
Table D
ORN ID
123 14/0045/F
124 12/0002/X
256 05/0005/D
989 12/0012/X
043 13/0045/D
222 11/0002/A
```
Any help would be greatly appreciated.
Thanks | If the creation of the table is a one-time thing, you can use a `select into` combined with a `union` like this:
```
select * into TableD from
(
select * from TableA
union all
select * from TableB
union all
select * from TableC
) UnionedTables
```
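A quick sanity check of the UNION ALL approach, sketched in Python's sqlite3 (sqlite spells `SELECT INTO` as `CREATE TABLE ... AS SELECT`); the sample rows follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t, rows in {"TableA": [(123, "14/0045/F"), (124, "12/0002/X")],
                "TableB": [(256, "05/0005/D"), (989, "12/0012/X")],
                "TableC": [(43, "13/0045/D"), (222, "11/0002/A")]}.items():
    conn.execute(f"CREATE TABLE {t} (ORN INTEGER, ID TEXT)")
    conn.executemany(f"INSERT INTO {t} VALUES (?, ?)", rows)

# UNION ALL keeps every row (no duplicate elimination), matching the
# questioner's "no duplicates in any of the three tables" assumption.
conn.execute("""
    CREATE TABLE TableD AS
    SELECT * FROM TableA
    UNION ALL SELECT * FROM TableB
    UNION ALL SELECT * FROM TableC
""")
row_count = conn.execute("SELECT COUNT(*) FROM TableD").fetchone()[0]
```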
As for the trigger, it should be easy to set up an `after insert` trigger like this:
```
CREATE TRIGGER insert_trigger
ON TableA
AFTER INSERT AS
insert TableD (columns...) select (columns...) from inserted
```
Obviously you will have to change the `columns...` to match your structure.
I haven't checked the syntax, though, so it might not be perfect and could need some adjustment, but I hope it gives you an idea. | If the IDs are not duplicated it will be easy to achieve; otherwise you must add an OriginatedFrom column. You can also create a set of INSTEAD OF triggers (not only for insert but also for delete and update), but that's a lazy excuse for not refactoring the app.
Also pay attention to any references to the data: since it's a RELATIONAL model, other tables are likely related to the table you are about to drop. | How to combine three tables into a new table | [
"",
"sql",
"sql-server",
""
] |
I'm looking for any help to concatenate an integer.
Example:
In a company an employee has an employee number.
And his number is 140024.
Now, the number 14 is the year (depending on the date), and it needs to be assigned automatically. The other number, 0024, is my problem. I can get the number 24, but how can I add the 00 or 000 when the number is less than 10?
So I need help to concatenate all this. I also want the result as an INT so I can make it the primary key. | ```
DECLARE @Your_Number INT = 24;
SELECT CAST(RIGHT(YEAR(GETDATE()), 2) AS NVARCHAR(2))
+ RIGHT('000000000' + CAST(@Your_Number AS NVARCHAR), 4) --<-- This 4
RESULT: 140024
```
The Number `4` Decides how many Total digits you want after the Year Digits. | you have two choice : working with varchar or int itself.
Example :
```
select cast(14 as char(2)) + right('0000' + cast(24 as varchar(4)),4)
```
or with int
```
select 14 * 10000 + 24
```
Where 10000 of course is the maximum number you can have. It could be 100, 1000 or more. But your number of digits is probably fixed, so it should be fixed too. | SqlServer - Concatenate an integer | [
"",
"sql",
"sql-server",
""
] |
Every query has a final row with an asterisk beside it and null values for every field. Why does this show up? I can't seem to find an explanation for it. Even adding a WHERE clause doesn't seem to get rid of it. Is this happening only in the MySQL implementation? | It is for inserting a new row; you enter the values there. | Are you sure it's not just the data entry row? Usually when viewing the table data in something like Workbench or SQLyog, it will display the rows of data plus a blank row at the end that is used for adding new data. It will show NULL for each column if that is the default value for that column's data type. | Why does SQL workbench always return a row full of null values in every query? | [
"",
"mysql",
"sql",
"null",
"mysql-workbench",
""
] |
I'm a newbie at web development, so here's a simple question. I've been doing a few tutorials in Django, setting up an SQL database, which is all good. I have now come across the JSON format, which I am not fully understanding. The definition on Wikipedia is: *It is used primarily to transmit data between a server and web application, as an alternative to XML*. Does this mean that JSON is a database like SQL? If not, what is the difference between SQL and JSON?
Thank you! | JSON is a data markup format. You use it to define what the data is and what it means. E.g.: this car is blue, it has 4 seats.
```
{
"colour": "blue",
"seats": 4
}
```
SQL is a data manipulation language. You use it to define the operations you want to perform on the data. eg: Find me all the green cars. Change all the red cars to blue cars.
```
select * from cars where colour = 'green'
update cars set colour='blue' where colour='red'
```
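The distinction also shows up clearly in code: JSON describes a piece of data, SQL manipulates a store of it. A small sketch using Python's `json` and `sqlite3` modules (the car data follows the examples above):

```python
import json
import sqlite3

# JSON: a serialized description of one car.
car = json.loads('{"colour": "blue", "seats": 4}')

# SQL: operations against a database holding many cars.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (colour TEXT, seats INTEGER)")
conn.execute("INSERT INTO cars VALUES (?, ?)", (car["colour"], car["seats"]))
conn.execute("INSERT INTO cars VALUES ('green', 2)")

# "Find me all the green cars" from the answer, as a count.
green = conn.execute(
    "SELECT COUNT(*) FROM cars WHERE colour = 'green'").fetchone()[0]
```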
A SQL database is a database that uses SQL to query the data stored within, in whatever format that might be. Other types of databases are available. | They are 2 completely different things.
**SQL** is used to communicate with databases, usually to Create, Update and Delete data entries.
**JSON** provides a *standardized* object notation/structure to talk to web services.
**Why standardized?**
Because JSON is relatively easy to process both on the front end (with JavaScript) and on the backend. With NoSQL databases becoming the norm, JSON/JSON-like documents/objects are being used in the database as well. | Difference between JSON and SQL | [
"",
"sql",
"json",
""
] |
If I have a table as such:
```
CREATE TABLE IF NOT EXISTS users (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
username varchar(255) COLLATE utf8_unicode_ci NOT NULL,
password varchar(255) COLLATE utf8_unicode_ci NOT NULL,
email varchar(255) COLLATE utf8_unicode_ci NOT NULL,
phone varchar(255) COLLATE utf8_unicode_ci NOT NULL,
name varchar(255) COLLATE utf8_unicode_ci NOT NULL,
created_at timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
updated_at timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (id)) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3 ;
```
and I try to insert a row into it using phpmyadmin with:
```
INSERT INTO users ( 'john', 'johndoe', 'johndoe@gmail.com' , '123456', 'John', 2013-06-07 08:13:28, 2013-06-07 08:13:28)
```
Then why do I get the error:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''john', 'johndoe', 'johndoe@gmail.com' , '123456', 'John', 2013-06-07 08:13:28, ' at line 1
``` | The dates need to be quoted as well, and you forgot the `VALUES` keyword.
Also, because you are not inserting values for every column, you need to specify which columns you are inserting in. So your statement would become:
```
INSERT INTO users(
username,
password,
email,
phone,
name,
created_at,
updated_at)
VALUES (
'john',
'johndoe',
'johndoe@gmail.com' ,
'123456',
'John',
'2013-06-07 08:13:28',
'2013-06-07 08:13:28')
```
The proof is in this fiddle: <http://sqlfiddle.com/#!2/c35d9/1> | Date values should have quotes:
```
INSERT INTO users (username,password,email,phone,name,created_at,updated_at)
VALUES ('john',
'johndoe',
'johndoe@gmail.com' ,
'123456',
'John',
'2013-06-07 08:13:28',
'2013-06-07 08:13:28');
```
UPDATED:
You also need to either provide a dummy value for the auto\_increment column or specify the column names. | INSERT INTO php MyAdmin | [
"",
"mysql",
"sql",
""
] |
I am trying to get the count of rows with specific values in a table, and if the count is 0, to then add a value to the table. This count is a local variable in a stored procedure.
I am building the SQL dynamically and storing the SQL statement in an `nvarchar` variable.
Then, using `EXEC`, I run this SQL as follows, hoping to populate the count variable.
But it's not working.
```
DECLARE @qry NVARCHAR(max)
DECLARE @count INT
-- building @qry will result as follows
@qry = SELECT @count = COUNT(*) FROM aTable WHERE (col1 = @col1 AND ...)
@count = EXEC @qry
IF @count = 0
BEGIN
-- carry on with adding
END
``` | If you are building the query dynamically, you need `sp_executesql`. Try something like
```
-- building @qry will result as follows
@qry = 'SELECT @count = COUNT(*) FROM aTable WHERE (col1 = @col1 AND ...)'
EXEC sp_executesql @qry, N'@count INT OUTPUT', @count OUTPUT;
--Do whatever you want with @count...
```
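`sp_executesql` itself is SQL Server-only, but the count-then-insert logic the questioner wants is the same from any client. A sketch in Python with sqlite3 (table and column names are placeholders from the question), using a bound parameter rather than string concatenation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aTable (col1 TEXT)")

def insert_if_absent(conn, value):
    # Parameterized dynamic SQL: the value travels as a bound parameter,
    # not as text pasted into the statement.
    count = conn.execute(
        "SELECT COUNT(*) FROM aTable WHERE col1 = ?", (value,)).fetchone()[0]
    if count == 0:
        conn.execute("INSERT INTO aTable (col1) VALUES (?)", (value,))
    return count

first = insert_if_absent(conn, "x")   # not present yet -> inserts
second = insert_if_absent(conn, "x")  # present -> skips
```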
**Source**: Aaron Bertrand's answer [here](https://stackoverflow.com/questions/15786335/obtain-output-parameter-values-from-dynamic-sql) and the [sp\_executesql](http://technet.microsoft.com/en-us/library/ms188001.aspx) documentation. | In your SQL, why are you executing the query through EXEC? The required output is already in the @count variable, so EXEC is not needed in your case.
Please refer to the syntax below.
```
DECLARE @count INT
-- building @qry will result as follows
SELECT @count = COUNT(*) FROM aTable WHERE (col1 = @col1 AND ...)
IF @count = 0
BEGIN
-- carry on with adding
END
``` | Build select statement dynamically and use it to populate a stored procedure variable | [
"",
"sql",
"stored-procedures",
""
] |
I have this table:
```
+----+---------------------+---------+---------+
| id | date | amounta | amountb |
+----+---------------------+---------+---------+
| 1 | 2014-02-28 05:58:41 | 148 | 220 |
+----+---------------------+---------+---------+
| 2 | 2014-01-20 05:58:41 | 50 | 285 |
+----+---------------------+---------+---------+
| 3 | 2014-03-30 05:58:41 | 501 | 582 |
+----+---------------------+---------+---------+
| 4 | 2014-04-30 05:58:41 | 591 | 53 |
+----+---------------------+---------+---------+
| 5 | 2013-05-30 05:58:41 | 29 | 59 |
+----+---------------------+---------+---------+
| 6 | 2014-05-30 05:58:41 | 203 | 502 |
+----+---------------------+---------+---------+
```
I want to sum all the columns of each row where year = 2014 and **up to month** = 3 (Jan, Feb, Mar).
## EDIT:
**I do not want to select quarterly. I want to be able to select between months. From Jan to Apr, or from Mar to Sep for example.**
How can I perform this? | You can use date functions, [`QUARTER()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_quarter) and [`YEAR()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_year)
```
SELECT
SUM(amounta),
SUM(amountb)
FROM
t
WHERE
YEAR(`date`)=2014
AND
QUARTER(`date`)=1
```
That fits a quarterly query. But for an interval of months, of course, use [`MONTH()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_month):
```
SELECT
SUM(amounta),
SUM(amountb)
FROM
t
WHERE
YEAR(`date`)=2014
AND
MONTH(`date`)>=1
AND
MONTH(`date`)<=8
``` | try this
```
SELECT SUM(amounta) FROM `your_table`
WHERE date LIKE '2014-01-%' OR date LIKE '2014-02-%' OR date LIKE '2014-03-%'
``` | MySQL: Select * from table up to this date and sum the column amounts | [
"",
"mysql",
"sql",
""
] |
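The month-window filter above is easy to verify in miniature; a sketch in Python's sqlite3, where `strftime` stands in for MySQL's `YEAR()` and `MONTH()` (sample rows from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (date TEXT, amounta INTEGER, amountb INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("2014-02-28 05:58:41", 148, 220),
    ("2014-01-20 05:58:41", 50, 285),
    ("2014-03-30 05:58:41", 501, 582),
    ("2014-04-30 05:58:41", 591, 53),
    ("2013-05-30 05:58:41", 29, 59),
])

# Year 2014, months January through March inclusive.
sums = conn.execute("""
    SELECT SUM(amounta), SUM(amountb) FROM t
    WHERE strftime('%Y', date) = '2014'
      AND CAST(strftime('%m', date) AS INTEGER) BETWEEN 1 AND 3
""").fetchone()
```

Only the January, February, and March 2014 rows contribute: 148 + 50 + 501 for `amounta` and 220 + 285 + 582 for `amountb`.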
My table has many columns, among them `attendance` and `register`:
* if the employee is present, update day1 = '1', and
* if the employee is present, update day2 = '1', and
* if the employee is absent, update day3 = '2', and so on;
* if it is a Sunday, update day5 = '0'
Here my table:
```
Employee id | Month | day1 | day2 | day3 | day4| day5| ......| day31
1 | Jan | 1 | 1 | 2 | 1 | 0 | ......| 1
2 | Jan | 2 | 1 | 2 | 1 | 1 | ......| 1
```
For Employee id = 1, I want to count the present days and the absent days. How can I do that? | You can try using the `Decode` function:
```
select Decode(day1, 1, 1, 0) + -- <- treat 1 as 1, all other values as 0
Decode(day2, 1, 1, 0) +
Decode(day3, 1, 1, 0) +
...
Decode(day31, 1, 1, 0) as PresentCount,
Decode(day1, 2, 1, 0) + -- <- treat 2 as 1, all other values as 0
Decode(day2, 2, 1, 0) +
Decode(day3, 2, 1, 0) +
...
Decode(day31, 2, 1, 0) as AbsentCount
from MyTable
where (Employee_Id = 1)
-- and (Month = 'Feb') -- <- uncomment this if you want February only
```
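`Decode` is Oracle-specific; the portable equivalent is `CASE`, and the counting logic is easy to verify on a few day columns. A sketch in Python's sqlite3 (column names follow the question, shortened to five days):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE attendance
                (employee_id INTEGER, day1 INTEGER, day2 INTEGER,
                 day3 INTEGER, day4 INTEGER, day5 INTEGER)""")
conn.execute("INSERT INTO attendance VALUES (1, 1, 1, 2, 1, 0)")

# CASE plays the role of Decode: 1 = present, 2 = absent, 0 = Sunday.
present, absent = conn.execute("""
    SELECT
      (CASE day1 WHEN 1 THEN 1 ELSE 0 END) +
      (CASE day2 WHEN 1 THEN 1 ELSE 0 END) +
      (CASE day3 WHEN 1 THEN 1 ELSE 0 END) +
      (CASE day4 WHEN 1 THEN 1 ELSE 0 END) +
      (CASE day5 WHEN 1 THEN 1 ELSE 0 END),
      (CASE day1 WHEN 2 THEN 1 ELSE 0 END) +
      (CASE day2 WHEN 2 THEN 1 ELSE 0 END) +
      (CASE day3 WHEN 2 THEN 1 ELSE 0 END) +
      (CASE day4 WHEN 2 THEN 1 ELSE 0 END) +
      (CASE day5 WHEN 2 THEN 1 ELSE 0 END)
    FROM attendance WHERE employee_id = 1
""").fetchone()
```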
P.S.: you should probably redesign your table. | Use `UNPIVOT` to transform columns to rows. For more information on this see [the article](http://www.oracle.com/technetwork/articles/sql/11g-pivot-097235.html). Below is an example of usage for your case:
```
with t as (
select 1234 emp_id,
'Feb' mon,
2 day1,
1 day2,
2 day3,
1 day4,
2 day5,
1 day6,
2 day7,
1 day8,
2 day9,
1 day10,
2 day11,
1 day12,
2 day13,
1 day14,
2 day15,
1 day16,
2 day17,
1 day18,
2 day19,
1 day20,
2 day21,
1 day22,
2 day23,
1 day24,
2 day25,
1 day26,
2 day27,
1 day28,
0 day29,
0 day30,
0 day31 from dual)
select emp_id,
mon,
sum(nullif(presence_code, 2)) presence_count,
sum(nullif(presence_code, 1)) / 2 absence_count
from (select *
from t unpivot(presence_code for presence_day in (day1,
day2,
day3,
day4,
day5,
day6,
day7,
day8,
day9,
day10,
day11,
day12,
day13,
day14,
day15,
day16,
day17,
day18,
day19,
day20,
day21,
day22,
day23,
day24,
day25,
day26,
day27,
day28,
day29,
day30,
day31))) q
group by emp_id, mon;
EMP_ID MON PRESENCE_COUNT ABSENCE_COUNT
---------- --- -------------- -------------
1234 Feb 14 14
``` | How to get count repeated data in row in sql? | [
"",
"sql",
"oracle",
"row",
""
] |
I’m a newbie. I’m designing a database schema and I’m stuck at this point.
I have a rooms table and a rooms\_Items table, such that each room has many room\_Items.
The problem is that I need different fields in room\_items according to the room type.
**Room**
```
Id name
1 LG-12FG
2 LG-31FG
```
**Room\_item**
```
Id name room_id required field
1 Window 1 mirror/type/size
2 Door 1 type/size/code
3 Wall 1 layer/color
4 Wall 2 layer/color
```
I’m not sure how to do this. Do I need to make new table for each room\_item type ? If I do like this I have to make condition joins later to fetch all the items of room. Please help | The preferred (normalized) solution would be to create a separate table for each item type (table Windows, table Doors,table Walls). This will make selecting data harder since you will have to do multiple joins.
Alternatively you can create single table containing all attributes of your items:
```
item_id, room_id, type , size, color, code, type2 , layer
1 , 1, Window , 100, NULL, NULL, WindowType1, NULL
2 , 1, Door , 90, NULL, 1, DoorType1 , NULL
3 , 1, Wall , NULL, red, NULL, NULL, 2
3 , 2, Wall , NULL, green, NULL, NULL, 1
``` | I think this would be more comprehensible if you had a table for each of the entities in the room since this is the unit on which you want to make distinctions between data-values.
Since a Door is not a Wall in your case you can make both these separate tables, then you can use room\_items as a table in between.
```
Wall
ID | layer | color
Window
ID | Mirror | Type | Size
```
Then in the room table, since an item\_id can point to several similar IDs (each item table creates its own incremental ID), you add item\_type, which is used to distinguish what type of element you want to look up.
```
Room
room_id | item_id | item_type
``` | database schema : how to make dynamic table according to type? | [
"",
"mysql",
"sql",
"database",
"oracle",
"database-schema",
""
] |
First of all I am really rusty on Oracle PLSQL, and I have seen several folks say this cannot be done and others that say it can, and I am just not able to make it happen. Any help would be greatly appreciated.
I am trying to read the value of a column in a record type dynamically.
I have a message with tokens, and I need to replace the tokens with values from the record set.
So the message looks like: [status] by [agent\_name]
I have another place where I am parsing out the tokens.
In java script I know this can be accomplished with: (Will run in Console)
```
var record = {
status : "Open",
agent_name : "John"
};
var record2 = {
status : "Close",
agent_name : "Joe"
};
var records = [record, record2];
var token1 = "status";
var token2 = "agent_name";
for( var i=0; i<records.length; i++){
console.log(records[i][token1] + " by " + records[i][token2]);
}
Results : Open by John
Close by Joe
```
I want to do the same thing in PLSQL
Here is my test PLSQL:
```
SET SERVEROUTPUT ON;
declare
TYPE my_record is RECORD
(
status VARCHAR2(30),
agent_name varchar2(30)
);
TYPE my_record_array IS VARRAY(6) OF my_record;
v_records my_record_array := my_record_array();
v_current_rec my_record;
v_current_rec2 my_record;
v_token varchar2(50):= 'agent_name';
v_token2 varchar2(50):= 'status';
begin
v_current_rec.status := 'Open';
v_current_rec.agent_name := 'John';
v_records.extend;
v_records(1) := v_current_rec;
v_current_rec2.status := 'Close';
v_current_rec2.agent_name := 'Ron';
v_records.extend;
v_records(2) := v_current_rec2;
FOR i IN 1..v_records.COUNT LOOP
--Hard coded
DBMS_OUTPUT.PUT_LINE(v_records(i).status || ' by ' || v_records(i).agent_name);
--Substitution vars entering v_records(i).status and v_records(i).agent_name for the prompts.
--How to do this without user interaction.
DBMS_OUTPUT.PUT_LINE(&status || ' by ' || &agent_name);
--Dynamically that doesn't work. How would this be accomplished
DBMS_OUTPUT.PUT_LINE(v_records(i).v_token || ' by ' || v_records(i).v_token2);
END LOOP;
END;
```
I tried using substitution variables, and that will work if I use:
DBMS\_OUTPUT.PUT\_LINE(&agent\_name) and entering v\_records(i).agent\_name when prompted. How do I accomplish this on the fly?
ANSWER:
```
set serveroutput on;
DECLARE
type sr_record_map
IS
TABLE OF VARCHAR2(30) INDEX BY VARCHAR2(30);
type record_set
is
TABLE of sr_record_map index by BINARY_INTEGER;
v_current_rec sr_record_map;
v_record_set record_set;
v_token varchar2(30) := 'status';
v_token2 varchar2(30) := 'agent_name';
v_index number :=1;
begin
v_current_rec('status') := 'Open';
v_current_rec('agent_name') := 'John';
v_record_set(1) := v_current_rec;
v_current_rec('status') := 'Close';
v_current_rec('agent_name') := 'Joe';
v_record_set(2) := v_current_rec;
FOR i in 1..v_record_set.COUNT LOOP
v_current_rec := v_record_set(i);
DBMS_OUTPUT.PUT_LINE(v_current_rec(v_token) || ' by ' || v_current_rec(v_token2));
END LOOP;
end;
``` | Using an [ASSOCIATIVE ARRAY](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/composites.htm#CHDEIDIC) just like Maps in Java
```
DECLARE
type my_record_map
IS
TABLE OF VARCHAR2(30) INDEX BY VARCHAR2(30);
type my_record
IS
record
(
my_members my_record_map );
type final_map
IS
TABLE OF my_record INDEX BY VARCHAR2(20);
v_final_map final_map;
v_my_record_map my_record_map;
v_my_record my_record;
index_name VARCHAR2(100);
index_name_record VARCHAR2(100);
BEGIN
/* Individual Records as key value pairs with their Corresponding Columns */
/* You can put any member name inside */
v_my_record_map('status') := 'Open';
v_my_record_map('agent_name') := 'John';
v_my_record_map('added_by') := 'Maheshwaran';
/* Put it as a record */
v_my_record.my_members := v_my_record_map;
/* Put the record inside Another Map with any Key */
v_final_map('Record1') := v_my_record;
v_my_record_map('status') := 'Close';
v_my_record_map('agent_name') := 'Joe';
v_my_record_map('added_by') := 'Ravisankar';
v_my_record.my_members := v_my_record_map;
v_final_map('Record2') := v_my_record;
/* Take the First Key in the Outer most Map */
index_name := v_final_map.FIRST;
LOOP
/* status Here can be dynamic */
DBMS_OUTPUT.PUT_LINE(CHR(10)||'######'||v_final_map(index_name).my_members('status') ||' by '||v_final_map(index_name).my_members('agent_name')||'######'||CHR(10));
index_name_record := v_final_map(index_name).my_members.FIRST;
DBMS_OUTPUT.PUT_LINE('$ Ávailable Other Members + Values.. $'||CHR(10));
LOOP
DBMS_OUTPUT.PUT_LINE(' '||index_name_record ||'='||v_final_map(index_name).my_members(index_name_record));
index_name_record := v_final_map(index_name).my_members.NEXT(index_name_record);
EXIT WHEN index_name_record IS NULL;
END LOOP;
/* Next gives you the next key */
index_name := v_final_map.NEXT(index_name);
EXIT WHEN index_name IS NULL;
END LOOP;
END;
/
```
**OUTPUT:**
```
######Open by John######
$ Ávailable Other Members + Values.. $
added_by=Maheshwaran
agent_name=John
status=Open
######Close by Joe######
$ Ávailable Other Members + Values.. $
added_by=Ravisankar
agent_name=Joe
status=Close
``` | I don't think it can be done with a record type. It may be possible with an object type since you could query the fields from the data dictionary, but there isn't normally anywhere those are available (though there's an 11g facility called [PL/Scope](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/whatsnew.htm#LNPLS125) that might allow it if it's enabled).
Since you're defining the record type in the same place you're using it, and if you have a manageable number of fields, it might be simpler to just try to replace each token, which just (!) wastes a bit of CPU if one doesn't exist in the message:
```
declare
TYPE my_record is RECORD
(
status VARCHAR2(30),
agent_name varchar2(30)
);
TYPE my_record_array IS VARRAY(6) OF my_record;
v_records my_record_array := my_record_array();
v_current_rec my_record;
v_current_rec2 my_record;
v_message varchar2(50):= '[status] by [agent_name]';
v_result varchar2(50);
begin
v_current_rec.status := 'Open';
v_current_rec.agent_name := 'John';
v_records.extend;
v_records(1) := v_current_rec;
v_current_rec2.status := 'Close';
v_current_rec2.agent_name := 'Ron';
v_records.extend;
v_records(2) := v_current_rec2;
FOR i IN 1..v_records.COUNT LOOP
v_result := v_message;
v_result := replace(v_result, '[agent_name]', v_records(i).agent_name);
v_result := replace(v_result, '[status]', v_records(i).status);
DBMS_OUTPUT.PUT_LINE(v_result);
END LOOP;
END;
/
anonymous block completed
Open by John
Close by Ron
```
It needs to be maintained, of course; if a field is added to the record type, then a matching `replace` will need to be added to the body.
I assume in the real world the message text, complete with tokens, will be passed to a procedure. I'm not sure it's worth parsing out the tokens though, unless you need them for something else. | PLSQL getting value of property in record dynamically? | [
"",
"sql",
"oracle",
"dynamic",
"plsql",
"plsqldeveloper",
""
] |
```
119 anand a f 1957-05-08 5678 s yyyy anand_abc@gmail.com 2014-02-17 11:42:39 1
120 gangadhar m 1952-02-04 495 c xxxx gang_v@yahoo.com 2014-02-17 12:02:16 3,4
124 ganesh r m 1991-09-04 9840 s zzzz gan_bab_raj@yahoo.com 2014-02-26 12:45:58 1
125 manesh a m 1991-02-05 9841 s zzzzz manesh.25@gmail.com 2014-02-26 12:45:5
```
I want to fetch the details of the last two rows, which were inserted today. How can I fetch data based on the current date?
Here is my query, which is not giving the desired output:
```
SELECT *
FROM stud_enq
WHERE date_time = GETDATE()
``` | You need to account for the fact that your `date_time` column *has* a time component, as does `GETDATE()`. Assuming SQL Server 2008 or later:
```
SELECT *
FROM stud_enq
WHERE date_time >= CONVERT(date,GETDATE()) and
date_time < CONVERT(date,DATEADD(day,1,GETDATE()))
```
This selects all values which have today's date.
For older SQL Server, you can use:
```
SELECT *
FROM stud_enq
WHERE date_time >= DATEADD(day,DATEDIFF(day,0,GETDATE()),0) and
date_time < DATEADD(day,DATEDIFF(day,0,GETDATE()),1)
```
Where the `DATEADD`/`DATEDIFF` are just a trick for removing the time component. | Try this:
```
SELECT top 2 *
FROM stud_enq
WHERE convert(varchar(20),date_time,101) = convert(varchar(20),GETDATE(),101)
order by date_time desc
``` | How can i fetch data based on current date? | [
"",
"sql",
"sql-server",
""
] |
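The half-open date-range pattern from the accepted answer above can be checked in miniature; a sketch in Python's sqlite3, where `date('now')` plays the role of `CONVERT(date, GETDATE())`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stud_enq (name TEXT, date_time TEXT)")
conn.execute("INSERT INTO stud_enq VALUES ('old row', '2014-02-17 11:42:39')")
conn.execute("INSERT INTO stud_enq VALUES ('new row', datetime('now'))")

# Half-open range [today, tomorrow): keeps the time component intact
# and stays index-friendly, unlike wrapping the column in a function.
todays = conn.execute("""
    SELECT name FROM stud_enq
    WHERE date_time >= date('now')
      AND date_time < date('now', '+1 day')
""").fetchall()
```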
I have this small query
```
SELECT
MAX(myDate) AS DateToUser
FROM
blaTable
```
I'm getting this result "2011-05-23 15:18:01.223"
How can I get the result in "mm/dd/yyyy" format, like 05/23/2011? | ```
SELECT
convert(varchar, MAX(myDate), 101) AS DateToUser
FROM
blaTable
```
For more Date Format Parameters: [**Click Here**](http://anubhavg.wordpress.com/2009/06/11/how-to-format-datetime-date-in-sql-server-2005/) | Try this
**[FIDDLE DEMO](http://sqlfiddle.com/#!6/d41d8/15422)**
```
SELECT
convert(varchar,MAX(myDate),101) AS DateToUser
FROM blaTable
```
**[CONVERT](http://msdn.microsoft.com/en-us/library/ms187928.aspx)**
If you are using SQL Server 2012, then simply use **FORMAT**:
```
SELECT
FORMAT(MAX(myDate),'MM/dd/yyyy') AS DateToUser
FROM blaTable
```
**[FORMAT](http://technet.microsoft.com/en-us/library/hh213505.aspx)** | Change the date format in SQLServer | [
"",
"sql",
"sql-server",
"date-format",
""
] |
```
---------------------------------- Lead MASTER TABLE
Create table leads
(
pk_lead_id int primary key identity,
lead_name varchar(50),
lead_mobile varchar(50)
)
insert into leads
select 'AK','9876543210'
insert into leads
select 'AK1','9876543210'
--....
insert into leads
select 'AK1000','9876543210'
----------------------------------Lead TRANSACTION TABLE
Create table lead_tr
(
pk_leadtr_id int primary key identity,
fk_lead_id int,
tr_date datetime,
tr_remarks varchar(500)
)
--- For Lead ID 1
insert into lead_tr select 1,GETDATE(),'This is First Transaction'
insert into lead_tr select 1,GETDATE(),'This is Second Transaction'
--...
insert into lead_tr select 1,GETDATE(),'This is Hundredth Transaction'
--- For Lead ID 1000
insert into lead_tr select 2,GETDATE(),'This is First Transaction'
insert into lead_tr select 2,GETDATE(),'This is Second Transaction'
---...
insert into lead_tr select 2,GETDATE(),'This is Hundredth Transaction'
------------------- MY PROC. for Lead Details With Latest STATUS
select * from leads
left join
(
select fk_lead_id,max(pk_leadtr_id) as pk_leadtr_id from lead_tr
group by fk_lead_id
)as ltmin on ltmin.fk_lead_id = pk_lead_id
left join lead_tr as lt on lt.pk_leadtr_id = ltmin.pk_leadtr_id
```
I have a leads table containing lakhs of records.
The transaction table will contain 50 transactions for each leads row.
How can I avoid the subquery? How can I optimize this SP?
Do i need to change my table sturcture for this? | I agree with Gary Walker's logic of APPLY being the best way to do this, however I don't agree with the application. I think your query should be:
```
SELECT *
FROM Leads L
OUTER APPLY
( SELECT TOP 1 *
FROM Lead_tr tr
WHERE tr.fk_lead_id = l.pk_lead_id
ORDER BY tr.pk_leadtr_id DESC
) tr;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!3/2c9a2/5)**
This allows you to get all columns from `Lead_tr` without having to join to it again.
You can also do this using the ROW\_NUMBER() function, this has fewer logical reads on your sample, but consistently executes slower than when using APPLY when I have tested it:
```
SELECT *
FROM Leads L
LEFT JOIN
( SELECT *, RowNum = ROW_NUMBER() OVER(PARTITION BY tr.fk_lead_id ORDER BY tr.pk_leadtr_id DESC)
FROM Lead_tr tr
) tr
ON tr.fk_lead_id = l.pk_lead_id
AND tr.RowNum = 1;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!3/2c9a2/6)**
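For readers without SQL Server handy, here is a hedged sketch of the same latest-row-per-group idea using Python's sqlite3. SQLite has no `APPLY`, but the `ROW_NUMBER` variant ports almost verbatim (window functions need SQLite 3.25+; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE leads   (pk_lead_id INTEGER PRIMARY KEY, lead_name TEXT);
    CREATE TABLE lead_tr (pk_leadtr_id INTEGER PRIMARY KEY,
                          fk_lead_id INTEGER, tr_remarks TEXT);
    INSERT INTO leads   VALUES (1, 'AK'), (2, 'AK1');
    INSERT INTO lead_tr VALUES (10, 1, 'first'), (11, 1, 'latest'),
                               (20, 2, 'first'), (21, 2, 'latest');
""")

# rn = 1 marks the newest transaction per lead.
latest = conn.execute("""
    SELECT l.lead_name, tr.tr_remarks
    FROM leads l
    LEFT JOIN (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY fk_lead_id
                                     ORDER BY pk_leadtr_id DESC) AS rn
        FROM lead_tr
    ) tr ON tr.fk_lead_id = l.pk_lead_id AND tr.rn = 1
    ORDER BY l.pk_lead_id
""").fetchall()
# latest -> [('AK', 'latest'), ('AK1', 'latest')]
```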
---
## Comparing Performance
**APPLY**
> (3 row(s) affected)
>
> Table 'lead\_tr'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
>
> Table 'leads'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
>
> SQL Server Execution Times: CPU time = 0 ms, elapsed time = 48 ms.
**ROW\_NUMBER**
> (3 row(s) affected)
>
> Table 'lead\_tr'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
>
> Table 'leads'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
>
> SQL Server Execution Times: CPU time = 0 ms, elapsed time = 146 ms.
*N.B Both queries run on a warm cache so compile time is 0ms for both so not included above.*
On your actual data you may get different results though, so as always, test on *your* data and chose the method that works fastest for you. | First, you should add an index to lead\_tr on fk\_lead\_id
Recommend you try cross apply
E.g., you can probably use something similar to this
```
select L.*, XL.*
from Leads L
cross apply (
select max(pk_leadtr_id) as tr_id
from lead_tr TR
where TR.fk_lead_id = L.pk_lead_id
) XL
```
Cross Apply is pretty much always the fastest method in my experience
There are plenty of cross apply tutorials on the web, there is also outer apply (which corresponds to outer join) | How to avoid using Sub query & optimize below mentioned query? | [
"",
"sql",
"sql-server",
"optimization",
"indexing",
"subquery",
""
] |
I run `EXEC sp_who2 78` and I get the following [results](https://stackoverflow.com/q/2234691/1501497):

How can I find why its status is suspended?
This process is a heavy `INSERT` based on an expensive query. A big `SELECT` that gets data from several tables and write some 3-4 millions rows to a different table.
There are no locks/blocks.
The `waittype` it is linked to is `CXPACKET`, which I can understand because there are 9 spids numbered 78, as you can see in the picture below.
What concerns me, and what I really would like to know, is why thread number 1 of `SPID` 78 is suspended.
I understand that when the status of a `SPID` is suspended it means the process is waiting on a resource and it will resume when it gets its resource.
How can I find more details about this? what resource? why is it not available?
I use a lot the code below, and variations therefrom, but is there anything else I can do to find out why the `SPID` is suspended?
```
select *
from sys.dm_exec_requests r
join sys.dm_os_tasks t on r.session_id = t.session_id
where r.session_id = 78
```
I already used [sp\_whoisactive](http://whoisactive.com/downloads/). The result I get for this particular spid 78 is as follows (broken into 3 pics to fit the screen):
 | SUSPENDED:
It means that the request is currently not active because it is waiting on a resource. The resource can be an I/O to read a page, communication on the network, or a lock or latch it is waiting for. It will become active once the task it is waiting on completes. For example, if the query has posted an I/O request to read an entire table tblStudents, the task will be suspended until that I/O completes. Once the I/O is done (the data for table tblStudents is available in memory), the query moves into the RUNNABLE queue.
So if it is waiting, check the wait\_type column to understand what it is waiting for and troubleshoot based on the wait\_time.
I have developed the following procedure that helps me with this; it includes the WAIT\_TYPE.
```
use master
go
CREATE PROCEDURE [dbo].[sp_radhe]
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT es.session_id AS session_id
,COALESCE(es.original_login_name, '') AS login_name
,COALESCE(es.host_name,'') AS hostname
,COALESCE(es.last_request_end_time,es.last_request_start_time) AS last_batch
,es.status
,COALESCE(er.blocking_session_id,0) AS blocked_by
,COALESCE(er.wait_type,'MISCELLANEOUS') AS waittype
,COALESCE(er.wait_time,0) AS waittime
,COALESCE(er.last_wait_type,'MISCELLANEOUS') AS lastwaittype
,COALESCE(er.wait_resource,'') AS waitresource
,coalesce(db_name(er.database_id),'No Info') as dbid
,COALESCE(er.command,'AWAITING COMMAND') AS cmd
,sql_text=st.text
,transaction_isolation =
CASE es.transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'Read Uncommitted'
WHEN 2 THEN 'Read Committed'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
END
,COALESCE(es.cpu_time,0)
+ COALESCE(er.cpu_time,0) AS cpu
,COALESCE(es.reads,0)
+ COALESCE(es.writes,0)
+ COALESCE(er.reads,0)
+ COALESCE(er.writes,0) AS physical_io
,COALESCE(er.open_transaction_count,-1) AS open_tran
,COALESCE(es.program_name,'') AS program_name
,es.login_time
FROM sys.dm_exec_sessions es
LEFT OUTER JOIN sys.dm_exec_connections ec ON es.session_id = ec.session_id
LEFT OUTER JOIN sys.dm_exec_requests er ON es.session_id = er.session_id
LEFT OUTER JOIN sys.server_principals sp ON es.security_id = sp.sid
LEFT OUTER JOIN sys.dm_os_tasks ota ON es.session_id = ota.session_id
LEFT OUTER JOIN sys.dm_os_threads oth ON ota.worker_address = oth.worker_address
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
where es.is_user_process = 1
and es.session_id <> @@spid
ORDER BY es.session_id
end
```
The query below can also show basic information to assist when the spid is suspended, by showing which resource the spid is waiting for.
```
SELECT wt.session_id,
ot.task_state,
wt.wait_type,
wt.wait_duration_ms,
wt.blocking_session_id,
wt.resource_description,
es.[host_name],
es.[program_name]
FROM sys.dm_os_waiting_tasks wt
INNER JOIN sys.dm_os_tasks ot ON ot.task_address = wt.waiting_task_address
INNER JOIN sys.dm_exec_sessions es ON es.session_id = wt.session_id
WHERE es.is_user_process = 1
```
Please see the picture below as an example:
 | I use sp\_whoIsActive to look at this kind of information as it is a ready made free tool that gives you good information for troubleshooting slow queries:
[How to Use sp\_WhoIsActive to Find Slow SQL Server Queries](http://www.brentozar.com/archive/2010/09/sql-server-dba-scripts-how-to-find-slow-sql-server-queries/)
With this, you can get the query text, the plan it is using, the resource the query is waiting on, what is blocking it, what locks it is taking out and a whole lot more.
Much easier than trying to roll your own. | How to find out why the status of a spid is suspended? What resources the spid is waiting for? | [
"",
"sql",
"sql-server",
"optimization",
"query-optimization",
"sql-tuning",
""
] |
I have a woocommerce installation that uses ajax to add products to cart, however it takes a really long time to add a product to the cart (7-10 seconds).
I started logging and realized that there are hundreds of SQL queries that are run each time a product is added to cart, and that is what takes all the time. It should be possible to add a product to cart without running so many queries I believe.
I noticed that WPML might be the culprit but I am not super good with SQL.
Is there a more optimal way to add products with AJAX? This is the code I use today:
```
// AJAX buy button for variable products
function setAjaxButtons() {
$('.single_add_to_cart_button').click(function(e) {
var target = e.target;
loading(); // loading
e.preventDefault();
var dataset = $(e.target).closest('form');
var product_id = $(e.target).closest('form').find("input[name*='product_id']");
values = dataset.serialize();
$.ajax({
type: 'POST',
url: window.location.hostname+'/?post_typ=product&add-to-cart='+product_id.val(),
data: values,
success: function(response, textStatus, jqXHR){
loadPopup(target); // function show popup
updateCartCounter();
},
});
return false;
});
}
```
Here is some of the SQL that is run each time (there are thousands of lines, but I can't post all of them):
```
FROM wp_icl_translations
WHERE element_id='1354' AND element_type='post_product'
1935 Query SELECT post_type FROM wp_posts WHERE ID=1354
1935 Query SELECT post_type FROM wp_posts WHERE ID=1354
1935 Query SELECT post_type FROM wp_posts WHERE ID=1354
1935 Query SELECT post_type FROM wp_posts WHERE ID=162
1935 Query SELECT * FROM wp_posts WHERE ID = 1639 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1639)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1442)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1443)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1444)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1445)
1935 Query SELECT * FROM wp_posts WHERE ID = 1694 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1694)
1935 Query SELECT * FROM wp_posts WHERE ID = 1726 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1726)
1935 Query SELECT * FROM wp_posts WHERE ID = 1603 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1603)
1935 Query SELECT * FROM wp_posts WHERE ID = 1695 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1695)
1935 Query SELECT * FROM wp_posts WHERE ID = 1442 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1446 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1446)
1935 Query SELECT * FROM wp_posts WHERE ID = 1443 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1444 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1445 LIMIT 1
1935 Query SELECT post_type FROM wp_posts WHERE ID=1441
1935 Query SELECT trid, language_code, source_language_code
FROM wp_icl_translations
WHERE element_id='1441' AND element_type='post_product'
1935 Query SELECT post_type FROM wp_posts WHERE ID=1441
1935 Query SELECT post_type FROM wp_posts WHERE ID=1441
1935 Query SELECT post_type FROM wp_posts WHERE ID=1441
1935 Query SELECT post_type FROM wp_posts WHERE ID=162
1935 Query SELECT * FROM wp_posts WHERE ID = 1652 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1652)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1530)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1531)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1532)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1533)
1935 Query SELECT * FROM wp_posts WHERE ID = 1696 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1696)
1935 Query SELECT * FROM wp_posts WHERE ID = 1716 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1716)
1935 Query SELECT * FROM wp_posts WHERE ID = 1577 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1577)
1935 Query SELECT * FROM wp_posts WHERE ID = 1697 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1697)
1935 Query SELECT * FROM wp_posts WHERE ID = 1530 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1564 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1564)
1935 Query SELECT * FROM wp_posts WHERE ID = 1531 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1532 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1533 LIMIT 1
```
140226 11:34:04 1935 Query
```
SELECT post_type FROM wp_posts WHERE ID=1529
1935 Query SELECT trid, language_code, source_language_code
FROM wp_icl_translations
WHERE element_id='1529' AND element_type='post_product'
1935 Query SELECT post_type FROM wp_posts WHERE ID=1529
1935 Query SELECT post_type FROM wp_posts WHERE ID=1529
1935 Query SELECT post_type FROM wp_posts WHERE ID=1529
1935 Query SELECT post_type FROM wp_posts WHERE ID=162
1935 Query SELECT * FROM wp_posts WHERE ID = 1638 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1638)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1342)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1343)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1344)
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1345)
1935 Query SELECT * FROM wp_posts WHERE ID = 1698 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1698)
1935 Query SELECT * FROM wp_posts WHERE ID = 1727 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1727)
1935 Query SELECT * FROM wp_posts WHERE ID = 1628 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1628)
1935 Query SELECT * FROM wp_posts WHERE ID = 1699 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1699)
1935 Query SELECT * FROM wp_posts WHERE ID = 1342 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1352 LIMIT 1
1935 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (1352)
1935 Query SELECT * FROM wp_posts WHERE ID = 1343 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1344 LIMIT 1
1935 Query SELECT * FROM wp_posts WHERE ID = 1345 LIMIT 1
1935 Query SELECT post_type FROM wp_posts WHERE ID=1340
1935 Query SELECT trid, language_code, source_language_code
``` | Managed to find the solution after many hours, the problem was that woocommerce did a GET for the index page after adding to cart, which was taking all the time to load. I commented out the redirect and it works superfast now.
In `woocommerce-functions.php`:
```
if ( $was_added_to_cart ) {
$url = apply_filters( 'add_to_cart_redirect', $url );
// If has custom URL redirect there
if ( $url ) {
wp_safe_redirect( $url );
exit;
}
// Redirect to cart option
elseif ( get_option('woocommerce_cart_redirect_after_add') == 'yes' && $woocommerce->error_count() == 0 ) {
wp_safe_redirect( $woocommerce->cart->get_cart_url() );
exit;
}
// Redirect to page without querystring args
elseif ( wp_get_referer() ) {
// Commented the line below
//wp_safe_redirect( add_query_arg( 'added-to-cart', implode( ',', $added_to_cart ), remove_query_arg( array( 'add-to-cart', 'quantity', 'product_id' ), wp_get_referer() ) ) );
exit;
}
}
``` | In my case, I am using a modal popup with the variable product details, and needed to get the add-to-cart button to work via AJAX instead of being redirected to the product page. Since I'm fairly new to extending WooCommerce's functionality, I used daklock's code from above to derive my solution:
```
// AJAX Add To Cart button for variable products
$(document).on('click', '.single_add_to_cart_button', function(e) {
// stop default action of Add To Cart button
e.preventDefault();
// find/set the form in the DOM and the product_id
var dataset = $(e.target).closest('form');
var product_id = $(e.target).closest('form').find("input[name*='product_id']");
// do all your data serialization and make POST the AJAX request
values = dataset.serialize();
$.ajax({
type: 'POST',
url: '/?post_type=product&add-to-cart='+product_id.val(),
data: values,
success: function(response, textStatus, jqXHR){
console.log('product was added to cart!');
},
});
return false;
});
``` | Slow ajax add to cart for variable products Woocommerce | [
"",
"jquery",
"sql",
"ajax",
"wordpress",
"woocommerce",
""
] |
I created a trigger for my application that will allow me to keep a history of records that are edited in my grid.
I get the error "Invalid object name 'dbo.trgAfterUpdate'" and am not sure why.
Here is my trigger:
```
USE [TestTable]
GO
/****** Object: Trigger [dbo].[trgAfterUpdate] Script Date: 2/25/2014 6:31:25 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[trgAfterUpdate] ON [dbo].[summary]
FOR UPDATE
AS
declare @ID int;
declare @CR varchar(50);
declare @Product varchar(50);
declare @Description varchar(MAX);
select @ID=i.ID from inserted i;
select @CR= i.CR from inserted i;
select @Product = i.Product from inserted i;
select @Description=i.Description from inserted i;
if update(Product)
Set @Summary = 'Old value was: ' + (Select Product from deleted) + ' New Value is: ' + (Select Product from inserted)
set @Changes_Made='Updated Record -- After Update Trigger.';
insert into History(ID,Product,Description,Changes_Made,Audit_Timestamp, Summary)
values(@ID,@Product,@Description,@Changes_Made,getdate(), @Summary);
```
Any help on how to fix this would be great! | Just confirm the table exists first. Try this:
```
use [TestTable]
GO
select * from [dbo].[summary]
```
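For what it's worth, the "does the object actually exist?" check can be sketched in runnable form with Python's sqlite3, whose `sqlite_master` catalog plays a role loosely analogous to SQL Server's system views (table names borrowed from the question; the check itself is generic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE summary (id INTEGER)")

def table_exists(conn, name):
    # sqlite_master is SQLite's catalog of schema objects.
    row = conn.execute(
        "SELECT COUNT(*) FROM sqlite_master WHERE type = 'table' AND name = ?",
        (name,),
    ).fetchone()
    return row[0] == 1

summary_ok = table_exists(conn, "summary")   # the table is there
history_ok = table_exists(conn, "History")   # never created, so False
```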
If you get a message that the table doesn't exist, then you know what the problem is :) | My issue was that I didn't refresh the local cache.
So I
> Ctrl + Shift + R
and voilà. | Invalid object name 'dbo.trgAfterUpdate' - Trigger Error | [
"",
"sql",
"sql-server",
"triggers",
""
] |
I have a table called `Master`, and I need all records whose dates match the user's input between `@FromDate` and `@ToDate`.
**Master Table:**
```
ID FromDate ToDate
1 18/03/2014 18/03/2014
2 01/03/2014 05/03/2014
3 06/03/2014 08/03/2014
```
**My Query:**
```
select ID
from master M
where M.FromDate between('10/03/2014') and ('17/03/2014')
or M.ToDate between ('10/03/2014') and ('17/03/2014')
```
**User Input:**
```
FromDate ToDate
01/03/2014 06/03/2014
01/03/2014 31/03/2014
09/03/2014 09/03/2014
```
**Expected Output:**
```
2,3
1,2,3
No Records
``` | I Got The Solution:
```
SELECT ID
FROM master M
WHERE ((M.FromDate BETWEEN '10/03/2014' AND '17/03/2014')
OR (M.ToDate BETWEEN '10/03/2014' AND '17/03/2014'))
```
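A runnable sketch of why the parentheses matter, using Python's sqlite3 and ISO dates (the sample rows mirror the question's master table; the second query is an extra, hedged suggestion showing the more general interval-overlap test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER, FromDate TEXT, ToDate TEXT)")
conn.executemany("INSERT INTO master VALUES (?, ?, ?)", [
    (1, "2014-03-18", "2014-03-18"),
    (2, "2014-03-01", "2014-03-05"),
    (3, "2014-03-06", "2014-03-08"),
])

def ids(sql):
    return [r[0] for r in conn.execute(sql)]

# Parenthesised OR, as in the accepted answer (user input 01..06 March):
overlap = ids("""
    SELECT id FROM master
    WHERE (FromDate BETWEEN '2014-03-01' AND '2014-03-06')
       OR (ToDate   BETWEEN '2014-03-01' AND '2014-03-06')
    ORDER BY id
""")
# overlap -> [2, 3], matching the question's expected output

# The textbook interval-overlap test; it also catches bookings that
# fully contain the queried window, which the BETWEEN form can miss.
contained = ids("""
    SELECT id FROM master
    WHERE FromDate <= '2014-03-06' AND ToDate >= '2014-03-01'
    ORDER BY id
""")
```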
Parentheses are very important in SQL Server. | try this
```
select ID from master M where (M.FromDate>='10/03/2014' and M.FromDate<'17/03/2014') or (M.ToDate>='10/03/2014' and M.ToDate<'17/03/2014');
``` | Between Two Date Condition in Sql Server | [
"",
"sql",
"sql-server",
"date",
""
] |
I've a problem with the following sql select query. The columns are not aggregated by the group by command.
```
SELECT
Dept.Name AS DeptName, COUNT (T.Id) AS TotalServiceNumber,
(Case when SS.Status <> 'Resolved' then COUNT (T.Id) end) AS UnresolvedNumber,
(Case when T.FixTime < '120' then COUNT(T.FixTime) end) AS ResolvedLessThanTwoHoursNumber,
(Case when T.FixTime > '120' then COUNT(T.FixTime) end) AS ResolvedMoreThanTwoHoursNumber,
FROM
dbo.Tickets AS T,
dbo.ServiceStatuses AS SS,
dbo.ComputerDesks AS Desk,
dbo.Personnels AS Person,
dbo.Departments AS Dept
WHERE
SS.Id = T.ServiceStatusId
AND T.ComputerDeskId = Desk.Id
AND Desk.PersonnelId = Person.Id
AND Person.DepartmentId = Dept.Id
GROUP BY
Dept.Name, SS.Status, T.FixTime
```
I'm getting the following result:
```
DeptName | TotalServiceNr | UnresolvedNumber | LessThanTwo | MoreThanTwo
DeptA | 8 | NULL | 8 | NULL
DeptB | 1 | 1 | NULL | 1
DeptC | 4 | NULL | NULL | 4
DeptA | 38 | NULL | NULL | 38
DeptB | 55 | NULL | 55 | NULL
DeptC | 7 | NULL | 7 | NULL
...
```
Expected result:
```
DeptName | TotalServiceNr | UnresolvedNumber | LessThanTwo | MoreThanTwo
DeptA | 46 | NULL | 8 | 38
DeptB | 56 | 1 | 55 | NULL
DeptC | 11 | NULL | 7 | 4
```
What do I need to change to get the expected result? | try this query:
```
SELECT
Dept.Name AS DeptName, COUNT (T.Id) AS TotalServiceNumber,
sum(Case when SS.Status <> 'Resolved' then 1 else 0 end) AS UnresolvedNumber,
sum(Case when T.FixTime <= '120' then 1 else 0 end) AS ResolvedLessThanTwoHoursNumber,
sum(Case when T.FixTime > '120' then 1 else 0 end) AS ResolvedMoreThanTwoHoursNumber
FROM
dbo.Tickets AS T,
dbo.ServiceStatuses AS SS,
dbo.ComputerDesks AS Desk,
dbo.Personnels AS Person,
dbo.Departments AS Dept
WHERE
SS.Id = T.ServiceStatusId
AND T.ComputerDeskId = Desk.Id
AND Desk.PersonnelId = Person.Id
AND Person.DepartmentId = Dept.Id
GROUP BY
Dept.Name
``` | Try this
```
SELECT TotalServiceNumber, SUM(UnresolvedNumber), SUM(ResolvedLessThanTwoHoursNumber), SUM(ResolvedMoreThanTwoHoursNumber)
FROM (
SELECT
Dept.Name AS DeptName, COUNT (T.Id) AS TotalServiceNumber,
(Case when SS.Status <> 'Resolved' then COUNT (T.Id) end) AS UnresolvedNumber,
(Case when T.FixTime < '120' then COUNT(T.FixTime) end) AS ResolvedLessThanTwoHoursNumber,
(Case when T.FixTime > '120' then COUNT(T.FixTime) end) AS ResolvedMoreThanTwoHoursNumber,
FROM
dbo.Tickets AS T,
dbo.ServiceStatuses AS SS,
dbo.ComputerDesks AS Desk,
dbo.Personnels AS Person,
dbo.Departments AS Dept
WHERE
SS.Id = T.ServiceStatusId
AND T.ComputerDeskId = Desk.Id
AND Desk.PersonnelId = Person.Id
AND Person.DepartmentId = Dept.Id
GROUP BY Dept.Name, SS.Status, T.FixTime
) GROUPED
GROUP BY
TotalServiceNumber
``` | Columns in sql query are not grouped | [
"",
"sql",
"sql-server",
""
] |
I have a series of records containing some information (product type) with temporal validity.
I would like to meld together adjacent validity intervals, provided that the grouping information (the product type) stays the same. I cannot use a simple `GROUP BY` with `MIN` and `MAX`, because some product types (`A`, in the example) can "go away" and "come back".
Using Oracle 11g.
A similar question for MySQL is: [How can I do a contiguous group by in MySQL?](https://stackoverflow.com/q/1610599/238421)
**[Input data](http://sqlfiddle.com/#!4/6d1e6/2/0)**:
```
| PRODUCT | START_DATE | END_DATE |
|---------|----------------------------------|----------------------------------|
| A | July, 01 2013 00:00:00+0000 | July, 31 2013 00:00:00+0000 |
| A | August, 01 2013 00:00:00+0000 | August, 31 2013 00:00:00+0000 |
| A | September, 01 2013 00:00:00+0000 | September, 30 2013 00:00:00+0000 |
| B | October, 01 2013 00:00:00+0000 | October, 31 2013 00:00:00+0000 |
| B | November, 01 2013 00:00:00+0000 | November, 30 2013 00:00:00+0000 |
| A | December, 01 2013 00:00:00+0000 | December, 31 2013 00:00:00+0000 |
| A | January, 01 2014 00:00:00+0000 | January, 31 2014 00:00:00+0000 |
| A | February, 01 2014 00:00:00+0000 | February, 28 2014 00:00:00+0000 |
| A | March, 01 2014 00:00:00+0000 | March, 31 2014 00:00:00+0000 |
```
**[Expected results](http://sqlfiddle.com/#!4/6d1e6/2/1)**:
```
| PRODUCT | START_DATE | END_DATE |
|---------|---------------------------------|----------------------------------|
| A | July, 01 2013 00:00:00+0000 | September, 30 2013 00:00:00+0000 |
| B | October, 01 2013 00:00:00+0000 | November, 30 2013 00:00:00+0000 |
| A | December, 01 2013 00:00:00+0000 | March, 31 2014 00:00:00+0000 |
```
See the complete [SQL Fiddle](http://sqlfiddle.com/#!4/6d1e6/2). | This is a gaps-and-islands problem. There are various ways to approach it; this uses `lead` and `lag` analytic functions:
```
select distinct product,
case when start_date is null then lag(start_date)
over (partition by product order by rn) else start_date end as start_date,
case when end_date is null then lead(end_date)
over (partition by product order by rn) else end_date end as end_date
from (
select product, start_date, end_date, rn
from (
select t.product,
case when lag(end_date)
over (partition by product order by start_date) is null
or lag(end_date)
over (partition by product order by start_date) != start_date - 1
then start_date end as start_date,
case when lead(start_date)
over (partition by product order by start_date) is null
or lead(start_date)
over (partition by product order by start_date) != end_date + 1
then end_date end as end_date,
row_number() over (partition by product order by start_date) as rn
from t
)
where start_date is not null or end_date is not null
)
order by start_date, product;
PRODUCT START_DATE END_DATE
------- ---------- ---------
A 01-JUL-13 30-SEP-13
B 01-OCT-13 30-NOV-13
A 01-DEC-13 31-MAR-14
```
[SQL Fiddle](http://sqlfiddle.com/#!4/6d1e6/5)
The innermost query looks at the preceding and following records for the product, and only retains the start and/or end time if the records are not contiguous:
```
select t.product,
case when lag(end_date)
over (partition by product order by start_date) is null
or lag(end_date)
over (partition by product order by start_date) != start_date - 1
then start_date end as start_date,
case when lead(start_date)
over (partition by product order by start_date) is null
or lead(start_date)
over (partition by product order by start_date) != end_date + 1
then end_date end as end_date
from t;
PRODUCT START_DATE END_DATE
------- ---------- ---------
A 01-JUL-13
A
A 30-SEP-13
A 01-DEC-13
A
A
A 31-MAR-14
B 01-OCT-13
B 30-NOV-13
```
The next level of select removes those which are mid-period, where both dates were blanked by the inner query, which gives:
```
PRODUCT START_DATE END_DATE
------- ---------- ---------
A 01-JUL-13
A 30-SEP-13
A 01-DEC-13
A 31-MAR-14
B 01-OCT-13
B 30-NOV-13
```
The outer query then collapses those adjacent pairs; I've used the easy route of creating duplicates and then eliminating them with `distinct`, but you can do it other ways, like putting both values into one of the pairs of rows and leaving both values in the other null, and then eliminating those with another layer of select, but I think distinct is OK here.
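The same lag-based islands logic can be sketched end-to-end with Python's sqlite3 (window functions need SQLite 3.25+); the rows below reproduce the question's input, and SQLite's `date(..., '+1 day')` replaces Oracle's `start_date - 1` arithmetic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (product TEXT, start_date TEXT, end_date TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("A", "2013-07-01", "2013-07-31"), ("A", "2013-08-01", "2013-08-31"),
    ("A", "2013-09-01", "2013-09-30"), ("B", "2013-10-01", "2013-10-31"),
    ("B", "2013-11-01", "2013-11-30"), ("A", "2013-12-01", "2013-12-31"),
    ("A", "2014-01-01", "2014-01-31"), ("A", "2014-02-01", "2014-02-28"),
    ("A", "2014-03-01", "2014-03-31"),
])

# A new island starts whenever the previous row for the same product
# does not end exactly one day before this row starts.
islands = conn.execute("""
    WITH flagged AS (
        SELECT *,
               CASE WHEN date(LAG(end_date) OVER w, '+1 day') = start_date
                    THEN 0 ELSE 1 END AS is_start
        FROM t WINDOW w AS (PARTITION BY product ORDER BY start_date)
    ),
    numbered AS (
        SELECT *,
               SUM(is_start) OVER (PARTITION BY product
                                   ORDER BY start_date) AS island
        FROM flagged
    )
    SELECT product, MIN(start_date), MAX(end_date)
    FROM numbered
    GROUP BY product, island
    ORDER BY MIN(start_date)
""").fetchall()
# islands -> [('A', '2013-07-01', '2013-09-30'),
#             ('B', '2013-10-01', '2013-11-30'),
#             ('A', '2013-12-01', '2014-03-31')]
```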
If your real-world use case has times, not just dates, then you'll need to adjust the comparison in the inner query; rather than +/- 1, an interval of 1 second perhaps, or 1/86400 if you prefer, but depends on the precision of your values. | It seems like there should be an easier way, but a combination of an analytical query (to find the different gaps) and a hierarchical query (to connect the rows that are continuous) works:
```
with data as (
select 'A' product, to_date('7/1/2013', 'MM/DD/YYYY') start_date, to_date('7/31/2013', 'MM/DD/YYYY') end_date from dual union all
select 'A' product, to_date('8/1/2013', 'MM/DD/YYYY') start_date, to_date('8/31/2013', 'MM/DD/YYYY') end_date from dual union all
select 'A' product, to_date('9/1/2013', 'MM/DD/YYYY') start_date, to_date('9/30/2013', 'MM/DD/YYYY') end_date from dual union all
select 'B' product, to_date('10/1/2013', 'MM/DD/YYYY') start_date, to_date('10/31/2013', 'MM/DD/YYYY') end_date from dual union all
select 'B' product, to_date('11/1/2013', 'MM/DD/YYYY') start_date, to_date('11/30/2013', 'MM/DD/YYYY') end_date from dual union all
select 'A' product, to_date('12/1/2013', 'MM/DD/YYYY') start_date, to_date('12/31/2013', 'MM/DD/YYYY') end_date from dual union all
select 'A' product, to_date('1/1/2014', 'MM/DD/YYYY') start_date, to_date('1/31/2014', 'MM/DD/YYYY') end_date from dual union all
select 'A' product, to_date('2/1/2014', 'MM/DD/YYYY') start_date, to_date('2/28/2014', 'MM/DD/YYYY') end_date from dual union all
select 'A' product, to_date('3/1/2014', 'MM/DD/YYYY') start_date, to_date('3/31/2014', 'MM/DD/YYYY') end_date from dual
),
start_points as
(
select product, start_date, end_date, prior_end+1, case when prior_end + 1 = start_date then null else 'Y' end start_point
from (
select product, start_date, end_date, lag(end_date,1) over (partition by product order by end_date) prior_end
from data
)
)
select product, min(start_date) start_date, max(end_date) end_date
from (
select product, start_date, end_date, level, connect_by_root(start_date) root_start
from start_points
start with start_point = 'Y'
connect by prior end_date = start_date - 1
and prior product = product
)
group by product, root_start;
PRODUCT START_DATE END_DATE
------- ---------- ---------
A 01-JUL-13 30-SEP-13
A 01-DEC-13 31-MAR-14
B 01-OCT-13 30-NOV-13
``` | Joining together consecutive date validity intervals | [
"",
"sql",
"oracle",
"oracle11g",
"gaps-and-islands",
""
] |
I have a query like the following:
```
SELECT
COUNT(*) AS `members`,
CASE
WHEN age >= 10 AND age <= 20 THEN '10-20'
WHEN age >=21 AND age <=30 THEN '21-30'
WHEN age >=31 AND age <=40 THEN '31-40'
WHEN age >=41 AND age <= 50 THEN '41-50'
WHEN age >=51 AND age <=60 THEN '51-60'
WHEN age >=61 AND age <=70 THEN '61-70'
WHEN age >= 71 THEN '71+'
END AS ageband
FROM `members`
GROUP BY ageband
```
Retrieves

How do I populate the empty age bands with 0?
This is what I am looking to achieve from my query above:
```
members ageband
1 10-20
0 21-30
2 31-40
0 41-50
1 51-60
0 61-70
1 71+
```
There are NO member in range of 41-50, hence I put 0. | The rows you wish for are not displayed because there's no data for them. Outer join (left or right) all possibilities and they will be displayed:
```
SELECT
COUNT(members.whatever_column_best_would_be_primary_key) AS `members`,
ages.age_range
FROM `members`
RIGHT JOIN (
SELECT '10-20' as age_range
UNION ALL
SELECT '21-30'
UNION ALL
SELECT '31-40'
UNION ALL
SELECT '41-50'
UNION ALL
SELECT '51-60'
UNION ALL
SELECT '61-70'
UNION ALL
SELECT '71+'
)ages ON ages.age_range = CASE
WHEN members.age >= 10 AND age <= 20 THEN '10-20'
WHEN members.age >=21 AND age <=30 THEN '21-30'
WHEN members.age >=31 AND age <=40 THEN '31-40'
WHEN members.age >=41 AND age <= 50 THEN '41-50'
WHEN members.age >=51 AND age <=60 THEN '51-60'
WHEN members.age >=61 AND age <=70 THEN '61-70'
WHEN members.age >= 71 THEN '71+'
END
GROUP BY ages.age_range
``` | You may use COALESCE!
try this
```
SELECT
COALESCE(COUNT(*),0) AS `members`,
CASE
WHEN age >= 10 AND age <= 20 THEN '10-20'
WHEN age >=21 AND age <=30 THEN '21-30'
WHEN age >=31 AND age <=40 THEN '31-40'
WHEN age >=41 AND age <= 50 THEN '41-50'
WHEN age >=51 AND age <=60 THEN '51-60'
WHEN age >=61 AND age <=70 THEN '61-70'
WHEN age >= 71 THEN '71+'
END AS ageband
FROM `members`
GROUP BY ageband
``` | Give every "empty" age band 0 | [
"",
"mysql",
"sql",
""
] |
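An editorial footnote on the age-band entry above: driving the query from a bands table and LEFT JOINing the members onto it is the same trick as the accepted RIGHT JOIN, written the other way around. A runnable sketch with Python's sqlite3 (band boundaries follow the question; the sample ages are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (age INTEGER)")
conn.executemany("INSERT INTO members VALUES (?)",
                 [(15,), (33,), (38,), (55,), (72,)])

# Every band appears once on the left side of the join,
# so empty bands survive with a count of 0.
bands = conn.execute("""
    WITH bands(lo, hi, label) AS (VALUES
        (10, 20, '10-20'), (21, 30, '21-30'), (31, 40, '31-40'),
        (41, 50, '41-50'), (51, 60, '51-60'), (61, 70, '61-70'),
        (71, 999, '71+'))
    SELECT COUNT(m.age) AS members, b.label AS ageband
    FROM bands b
    LEFT JOIN members m ON m.age BETWEEN b.lo AND b.hi
    GROUP BY b.label
    ORDER BY MIN(b.lo)
""").fetchall()
# bands -> [(1, '10-20'), (0, '21-30'), (2, '31-40'), (0, '41-50'),
#           (1, '51-60'), (0, '61-70'), (1, '71+')]
```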
Is it possible to truncate or flush out a table variable in SQL Server 2008?
```
declare @tableVariable table
(
id int,
value varchar(20)
)
while @start <= @stop
begin
insert into @tableVariable(id, value)
select id
, value
from xTable
where id = @start
--Use @tableVariable
--@tableVariable should be flushed out of
-- old values before inserting new values
set @start = @start + 1
end
``` | just delete everything
```
DELETE FROM @tableVariable
``` | No, you cannot `TRUNCATE` a table variable since it is not a physical table. Deleting it would be faster. See this answer from [Aaron Bertrand](https://stackoverflow.com/a/11621024/2908279). | Truncate/Clear table variable in SQL Server 2008 | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to solve Exercise 14 on <http://www.sql-ex.ru/>. The query asks for:
> Find out makers who produce only the models of the same type, and the
> number of those models exceeds 1. Deduce: maker, type
The database schema is as follows:
```
Product(maker, model, type)
PC(code, model, speed, ram, hd, cd, price)
Laptop(code, model, speed, ram, hd, screen, price)
Printer(code, model, color, type, price)
```
I wrote the following query;
```
select distinct maker, type
from product
where maker in (
select product.maker
from product, ( select model, code
from printer
union
select model, code
from pc
union
select model, code
from laptop
) as T
where product.model = T.model
group by product.maker
having count(T.code) > 1 and count(distinct product.type) = 1
)
```
This is not the correct answer. What am I missing here? | Try this query:
```
SELECT DISTINCT maker, type
FROM product
WHERE maker IN
(SELECT DISTINCT maker
FROM product
GROUP BY maker HAVING COUNT(distinct type) = 1
AND count(model) > 1)
``` | Claudiu Haidu's answer is correct, though there's an even shorter one:
```
SELECT maker,MIN(type)
FROM product
GROUP BY maker
HAVING COUNT(distinct type) = 1 AND COUNT(model) > 1
``` | SQL-ex.ru Exercise 14 | [
"",
"sql",
""
] |
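The `HAVING COUNT(DISTINCT type) = 1 AND COUNT(model) > 1` idea from both answers above is easy to verify with Python's sqlite3; the sample rows below are invented to cover the three cases (qualifies, too many types, too few models):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (maker TEXT, model TEXT, type TEXT)")
con.executemany("INSERT INTO product VALUES (?, ?, ?)", [
    ("A", "m1", "PC"), ("A", "m2", "PC"),      # one type, two models -> qualifies
    ("B", "m3", "PC"), ("B", "m4", "Laptop"),  # two types -> excluded
    ("C", "m5", "Printer"),                    # one type, one model -> excluded
])

# MIN(type) is safe here: the HAVING clause guarantees each surviving maker
# has exactly one distinct type, so MIN just picks that single value.
rows = con.execute("""
    SELECT maker, MIN(type) AS type
    FROM product
    GROUP BY maker
    HAVING COUNT(DISTINCT type) = 1 AND COUNT(model) > 1
""").fetchall()
print(rows)  # -> [('A', 'PC')]
```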
My table in this issue has three columns: Player, Team and Date_Played.
```
Player Team Date_Played
John Smith New York 2/25/2014
Joe Smith New York 2/25/2014
Steve Johnson New York 2/25/2014
Steph Curry Orlando 2/25/2014
Frank Anthony Orlando 2/26/2014
Brian Smith New York 2/26/2014
Steve Johnson New York 2/27/2014
Steph Curry New York 2/28/2014
```
I know how to get a list of distinct team names or dates played, but what I'm looking for is a query that will group by Team and count the number of distinct Team & Date combinations. So the output should look like this:
```
Team Count
New York 4
Orlando 2
```
So far I have:
```
SELECT DISTINCT Tm, Count(Date_Played) AS Count
FROM NBAGameLog
GROUP BY NBAGameLog.[Tm];
```
But this gives me the total number of records grouped by date. | This query has been tested in Access 2010:
```
SELECT Team, Count(Date_Played) AS Count
FROM
(
SELECT DISTINCT Team, Date_Played
FROM NBAGameLog
) AS whatever
GROUP BY Team
``` | A bit less code:
```
SELECT Team, Count(distinct Date_Played) AS Count
FROM NBAGameLog
GROUP BY Team
```
It shows the same execution plan as the first answer. | SQL Distinct Count with Group By | [
"",
"sql",
"ms-access",
"distinct",
""
] |
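Both approaches above can be exercised side by side in SQLite (which, unlike MS Access, also supports `COUNT(DISTINCT ...)`); the data mirrors the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE NBAGameLog (Player TEXT, Team TEXT, Date_Played TEXT)")
con.executemany("INSERT INTO NBAGameLog VALUES (?, ?, ?)", [
    ("John Smith", "New York", "2014-02-25"),
    ("Joe Smith", "New York", "2014-02-25"),
    ("Steve Johnson", "New York", "2014-02-25"),
    ("Steph Curry", "Orlando", "2014-02-25"),
    ("Frank Anthony", "Orlando", "2014-02-26"),
    ("Brian Smith", "New York", "2014-02-26"),
    ("Steve Johnson", "New York", "2014-02-27"),
    ("Steph Curry", "New York", "2014-02-28"),
])

# Portable form (works in Access too): de-duplicate the (Team, Date) pairs
# in a subquery first, then count per team.
subquery = con.execute("""
    SELECT Team, COUNT(Date_Played) AS cnt
    FROM (SELECT DISTINCT Team, Date_Played FROM NBAGameLog)
    GROUP BY Team ORDER BY Team
""").fetchall()

# Shorter form where COUNT(DISTINCT ...) is available (it is not in Access).
direct = con.execute("""
    SELECT Team, COUNT(DISTINCT Date_Played) AS cnt
    FROM NBAGameLog GROUP BY Team ORDER BY Team
""").fetchall()
print(subquery, direct)  # both -> [('New York', 4), ('Orlando', 2)]
```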
I need to select a row from a table which must satisfy the following conditions:
* It needs to match a quite complex WHERE clause
* It needs to have the minimum value in column X
Thus far, the only working solution I have is this:
```
SELECT <columns>
FROM <table>
WHERE <complex WHERE clause>
AND <columnX> = (SELECT MIN(<columnX>) FROM <table> WHERE <same WHERE clause as before>)
```
This strikes me as quite ugly and cumbersome, having to repeat the same (complex) WHERE clause twice.
Do you know any better way to achieve this? | Turned out the easiest solution was ordering...
```
SELECT TOP 1 <columns>
FROM <table>
WHERE <complex WHERE clause>
ORDER BY <ColumnX> ASC
``` | Correct me if I'm wrong, but I would use a CTE (Common Table Expression) to write the complex query once and have the WHERE clause reference it.
Hope that helps. | How to select the row which matches a complex WHERE clause and has the minimum value in a given column? | [
"",
"sql",
"sql-server",
"select",
"where-clause",
""
] |
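The accepted trick — sort by column X and keep only the first row, so the complex WHERE clause is written once — is `TOP 1` in T-SQL; in SQLite, MySQL, and PostgreSQL the equivalent is `ORDER BY ... LIMIT 1`. A small sketch with invented data, where `flag = 1` stands in for the complex filter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, x INTEGER, flag INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 30, 1), (2, 10, 0), (3, 20, 1), (4, 40, 1)])

# ORDER BY + LIMIT 1 replaces the correlated MIN() subquery: the filter
# appears once, and the cheapest matching row wins.
row = con.execute("""
    SELECT id, x
    FROM t
    WHERE flag = 1          -- stands in for the "complex WHERE clause"
    ORDER BY x ASC
    LIMIT 1
""").fetchone()
print(row)  # -> (3, 20): the matching row with the minimum x
```

Note that row (2, 10) has the globally smallest x but fails the filter, which is exactly why the naive `MIN(x)` subquery had to repeat the WHERE clause.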
I have a table named "a"
```
Id name
1 abc
2 xyz
3 mmm
4 xxx
```
and another table named "b"
```
Id suId
3 2
3 1
```
My requirement is to get details from table "a" for the rows referenced in table "b" where id = 3. Any help? | ```
SELECT a.Id, a.name, b.Id, b.suId FROM b JOIN a ON b.suId = a.Id WHERE b.Id = 3;
``` | I wouldn't recommend a join for this kind of scenario (you want all details from table A whose ids are in table B's suId column, where table B's id is 3).
```
SELECT a.name FROM a
WHERE
a.id IN(SELECT b.suId FROM b WHERE b.id = 3);
```
If you want to use join only then,
```
SELECT a.name FROM a,b
WHERE a.id = b.suId
AND
b.id = 3;
``` | Getting data from one table to another table using join | [
"",
"sql",
"database",
""
] |

This is the output of `select * from table1`. I have a question about the COUNT function: I want to count that NULL row, and the proper way to do that is this:
`select count(*) from table1 where fecha_devolucion is null` --> This gives me the proper answer, counting 1. However, if I do:
```
select count(fecha_devolucion)
from table1
where fecha_devolucion is null --> this returns 0, why? Isn't the same syntax?
```
What's the difference between counting a specific field and `*` from a table? | From the documentation (<http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions032.htm>):
> If you specify expr, then COUNT returns the number of rows where expr
> is not null. ...
>
> If you specify the asterisk (*), then this function returns all rows...
In other words, `COUNT(fecha_devolucion)` counts non-NULL values of that column. `COUNT(*)` counts the total number of rows, regardless of the values. | This is another way you can get the count:
```
SELECT SUM(NVL(fecha_devolucion,1)) FROM table1 WHERE fecha_devolucion IS NULL;
``` | Oracle SQL Developer - Count function | [
"",
"sql",
"oracle",
""
] |
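The accepted explanation is easy to verify with Python's sqlite3 (`COUNT` behaves the same way here as in Oracle); the single-column table below is a stand-in for the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (fecha_devolucion TEXT)")
con.executemany("INSERT INTO table1 VALUES (?)",
                [("2014-01-01",), ("2014-02-01",), (None,)])

# COUNT(*) counts rows; COUNT(col) counts only rows where col IS NOT NULL.
# With the IS NULL filter, every surviving row has a NULL column, so the
# second query can only ever return 0.
count_star = con.execute(
    "SELECT COUNT(*) FROM table1 WHERE fecha_devolucion IS NULL").fetchone()[0]
count_col = con.execute(
    "SELECT COUNT(fecha_devolucion) FROM table1 WHERE fecha_devolucion IS NULL"
).fetchone()[0]
print(count_star, count_col)  # -> 1 0
```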
I need to `delete` all rows in some tables where the value is an empty string (I have multiple tables with similar names).
I tried to execute this SQL statement held in a string:
```
DECLARE @sql AS NVARCHAR(MAX)
DECLARE @emptyValue AS NVARCHAR(1) =''
set @sql = N'DELETE FROM SampleTable WHERE Value='+@emptyValue+''
exec sp_executesql @sql
```
But it throws me the error `Msg 102, Level 15, State 1, Line 1
Incorrect syntax near '='.`
I have been trying to figure this out for about an hour now.
Any help would be appreciated.
**Edit:**
Here's what I get after deleting the last quote. `@tableName` is `nvarchar(MAX)`.
 | Instead of doing string concatenation, [parameterize](http://technet.microsoft.com/en-us/library/ms175170%28v=sql.105%29.aspx) the call to `sp_executesql`, for Sql Injection and other reasons (including caching of query plans, and not having to worry about escaping quotes :-):
```
DECLARE @sql AS NVARCHAR(MAX);
DECLARE @emptyValue AS NVARCHAR(1) ='';
set @sql = N'DELETE FROM SampleTable WHERE Value=@emptyValue';
exec sp_executesql @sql, N'@emptyValue NVARCHAR(1)', @emptyValue = @emptyValue;
```
[Fiddle](http://sqlfiddle.com/#!6/5007c/1) | You have two quotes:
```
set @sql = N'DELETE FROM SampleTable WHERE Value='+@emptyValue+''
```
Change it to:
```
set @sql = N'DELETE FROM SampleTable WHERE Value='+@emptyValue
```
<http://sqlfiddle.com/#!3/ce8e3/4> | Incorrect syntax near '=' sp_executesql | [
"",
"sql",
"sql-server",
"t-sql",
"sp-executesql",
""
] |
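The same principle as the accepted answer — bind the value as a parameter instead of splicing it into the SQL string — applies outside T-SQL too. A sqlite3 sketch (table name invented): concatenating an empty string would have produced the broken statement `... WHERE Value=`, while a bound parameter keeps the SQL valid and injection-safe:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SampleTable (Value TEXT)")
con.executemany("INSERT INTO SampleTable VALUES (?)",
                [("a",), ("",), ("b",), ("",)])

# The empty string travels as data, not as SQL text, so there is nothing
# to quote or escape in the statement itself.
cur = con.execute("DELETE FROM SampleTable WHERE Value = ?", ("",))
deleted = cur.rowcount
remaining = con.execute("SELECT COUNT(*) FROM SampleTable").fetchone()[0]
print(deleted, remaining)  # -> 2 2
```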
I have a table named `Car`
```
Table car ( id int NOT NULL, plate int NOT NULL, sent_to_server bit NOT NULL );
```
I want to select all Cars which are not sent to server yet
```
SELECT *
FROM car
WHERE sent_to_server = 0;
```
Then I should update my DB
```
UPDATE car
SET sent_to_server = 1
WHERE sent_to_server = 0;
```
I have multiple threads, so this won't work (multiple threads are reading and writing to the
database at the same time; I'm using SQL Server).
How can I execute 2 queries in One Query ?
Or is there a better solution !?
Note: I'm using C# with the PetaPoco library (if it matters!). | As long as you are using SQL Server 2005 or later you can make use of `OUTPUT` in a single query.
```
UPDATE Car
SET sent_to_server = 1
OUTPUT Inserted.id, Inserted.plate
WHERE sent_to_server = 0;
```
This will update the rows where `sent_to_server` is zero and return the modified rows. No need for a transaction.
[SQL Fiddle](http://sqlfiddle.com/#!3/35312/1) | I think you need to use transaction.
Place your both queries in below code
```
BEGIN TRANSACTION
select * from car where sent_to_server = 0;
update car set sent_to_server = 1 where sent_to_server = 0;
COMMIT TRANSACTION
```
By default the transaction will lock the table against reads from other connections, so other threads will not be able to read the result until the transaction is committed. | How to select and update selected rows in a single sql query? | [
"",
"sql",
"t-sql",
"transactions",
"petapoco",
""
] |
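`OUTPUT` is SQL Server-specific. Where neither it nor `UPDATE ... RETURNING` is available, one portable variant of the same claim-atomically idea is to stamp the rows with a unique batch id in a single UPDATE and then read them back by that id. This is only a sketch — the `batch` column and names are invented for illustration:

```python
import sqlite3
import uuid

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE car (
    id INTEGER PRIMARY KEY, plate INTEGER,
    sent_to_server INTEGER NOT NULL, batch TEXT)""")
con.executemany("INSERT INTO car (plate, sent_to_server) VALUES (?, ?)",
                [(111, 0), (222, 1), (333, 0)])

batch = str(uuid.uuid4())
# One atomic statement both claims the unsent rows and tags them; a second
# worker running the same UPDATE can no longer match them, because
# sent_to_server flips in the same step.
con.execute(
    "UPDATE car SET sent_to_server = 1, batch = ? WHERE sent_to_server = 0",
    (batch,))
claimed = con.execute(
    "SELECT id, plate FROM car WHERE batch = ? ORDER BY id",
    (batch,)).fetchall()
print(claimed)  # -> [(1, 111), (3, 333)]
```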
I have a table that contains a
user: ID,Name,UserName
and a table that contains an
event:ID,AskerID,DoerID,EventName
I need to get the name of the asker using AskerID, but I already have a left outer join
that gets the doer one.
I'm not very good with SQL... a little help is always welcome. | You can link one table to other tables based on different child keys (AskerID, DoerID).
```
Select
a.Name as Asker
, d.Name as Doer
, e.EventName
from event as e
left outer join user as a
on e.AskerID = a.ID
left outer join user as d
on e.DoerID = d.ID
``` | Assuming the table with the user is called `Users` and the table with the event is called `Events`:
```
select e.ID,
e.AskerID,
a.Name as AskerName,
e.DoerID,
d.Name as DoerName,
e.EventName
from Events e
left join Users a
on e.AskerID = a.ID
left join Users d
on e.DoerID = d.ID
```
You can left join (or inner join) to the same table more than once: you just have to give one or both of them aliases. That's what the `a`, `e`, and `d` are. You may also want to rename some of the fields: that's why I renamed `a.Name` to `AskerName` and `d.Name` to `DoerName`. | SQL Select with 2 foreign key referencing on the same table | [
"",
"sql",
"sql-server",
"select",
""
] |
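Joining the same table twice under two aliases, as both answers above describe, looks like this in a self-contained sqlite3 sketch (table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE events (
    id INTEGER PRIMARY KEY, asker_id INTEGER,
    doer_id INTEGER, event_name TEXT)""")
con.executemany("INSERT INTO users (name) VALUES (?)", [("Alice",), ("Bob",)])
con.execute(
    "INSERT INTO events (asker_id, doer_id, event_name) VALUES (1, 2, 'review')")

# users is joined twice, once per foreign key, under the aliases a and d;
# without distinct aliases the two name columns could not be told apart.
row = con.execute("""
    SELECT e.event_name, a.name AS asker_name, d.name AS doer_name
    FROM events AS e
    LEFT JOIN users AS a ON e.asker_id = a.id
    LEFT JOIN users AS d ON e.doer_id = d.id
""").fetchone()
print(row)  # -> ('review', 'Alice', 'Bob')
```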
Hi, I have this query but it's giving me an error of "Operand should contain 1 column(s)" and I'm not sure why:
```
Select *,
(Select *
FROM InstrumentModel
WHERE InstrumentModel.InstrumentModelID=Instrument.InstrumentModelID)
FROM Instrument
``` | Your query tries to get data from both the Instrument and InstrumentModel tables, but a subquery placed in the SELECT list like this must return exactly one column (and at most one row per outer row), so the `SELECT *` inside it raises the "Operand should contain 1 column(s)" error. To fetch matching results from both tables, use a join instead; you can also select particular fields as tableName.fieldName and put the matching condition in the WHERE clause.
Like this:
```
select Instrument.x,InstrumentModel.y
from instrument,instrumentModel
where instrument.x=instrumentModel.y
``` | You can use a join to select from 2 connected tables
```
select *
from Instrument i
join InstrumentModel m on m.InstrumentModelID = i.InstrumentModelID
``` | Nested SELECT SQL Queries Workbench | [
"",
"mysql",
"sql",
"subquery",
""
] |
I was asked this question the other day in a phone screen and I had trouble with it. Suppose I have the following table. This is a table of employees, and more than one employee can have the same supervisor.
```
+------------+--------------+------+
| Field | Type | Null |
+------------+--------------+------+
| name | varchar(40) | NO |
| department | varchar(100) | NO |
| supervisor | varchar(100) | NO |
+------------+--------------+------+
```
What query would I write to return a list of supervisors and a count of how many employees have that supervisor? | You would use something like this (assuming your `supervisor` column is not a reference to another table)
```
select supervisor, count(name)
from employees
group by supervisor ;
``` | ```
select supervisor, count(*) as cnt
from emp
group by supervisor
``` | Returning a list and a count in SQL | [
"",
"mysql",
"sql",
""
] |
**Background**
My application is backed by a SQL Server (2008 R2) database and has quite a few stored procedures, triggers, etc.
My goal is to make sure upon program start that all of those objects are still valid.
For example, if I have a stored procedure `A` which calls stored procedure `B`, and someone changes the name of `B` to `C`, I would like to get a notification when running my application in the `Debug` environment.
**What have I tried?**
So, I figured using `sp_refreshsqlmodule` which according to the [documentation](http://technet.microsoft.com/en-us/library/bb326754.aspx) returns `0 (success) or a nonzero number (failure)`:
```
DECLARE @RESULT int
exec @RESULT = sp_refreshsqlmodule N'A' --In this case A is the SP name
SELECT @@ERROR
SELECT @RESULT
```
So I changed SP `B` name to `C` and ran the script.
The results were:
* `@@ERROR` was `0`
* `@RESULT` was `0`
* I got a message of:
> The module 'A' depends on the missing object 'B'. The module will
> still be created; however, it cannot run successfully until the object
> exists.
**My question:**
Am I missing something here? Shouldn't I get a non-zero number that indicates that something went wrong? | Assuming that all of your dependencies are at least schema qualified, it seems like you could use [`sys.sql_expression_dependencies`](http://technet.microsoft.com/en-us/library/bb677315.aspx). For instance, running this script:
```
create proc dbo.B
as
go
create proc dbo.A
as
exec dbo.B
go
select OBJECT_SCHEMA_NAME(referencing_id),OBJECT_NAME(referencing_id),
referenced_schema_name,referenced_entity_name,referenced_id
from sys.sql_expression_dependencies
go
sp_rename 'dbo.B','C','OBJECT'
go
select OBJECT_SCHEMA_NAME(referencing_id),OBJECT_NAME(referencing_id),
referenced_schema_name,referenced_entity_name,referenced_id
from sys.sql_expression_dependencies
```
The first query of `sql_expression_dependencies` shows the dependency as:
```
(No Column name) (No Column name) referenced_schema_name referenced_entity_name referenced_id
dbo A dbo B 367340373
```
And after the rename, the second query reveals:
```
(No Column name) (No Column name) referenced_schema_name referenced_entity_name referenced_id
dbo A dbo B NULL
```
That is, the `referenced_id` is `NULL`.
---
So this query may find all of your broken stored procedures (or other objects that can contain references):
```
select OBJECT_SCHEMA_NAME(referencing_id),OBJECT_NAME(referencing_id)
from
sys.sql_expression_dependencies
group by
referencing_id
having SUM(CASE WHEN referenced_id IS NULL THEN 1 ELSE 0 END) > 0
``` | You can try this. It may not handle schemas 100% (it uses the owner name below), as it was based more on my work with SQL Server 2000, but I tested it on 2008. It basically runs an ALTER statement on all the procs, functions, and views. Comment out the `PRINT @objName + ' seems valid.'` line to see only the invalid procs, functions, and views... Feel free to edit any parts you want!
```
DECLARE @objId INT
DECLARE @objName NVARCHAR(max)
DECLARE @owner NVARCHAR(255)
DECLARE @Def nvarchar(max)
DECLARE checker CURSOR FAST_FORWARD FOR
SELECT
id, name, USER_NAME(o.uid) owner
FROM sysobjects o
WHERE o.type IN ('P', 'TR', 'V', 'TF', 'FN', 'IF')
AND o.name <> 'RecompileSQLCode'
OPEN checker
FETCH FROM checker INTO @objId, @objName, @owner
WHILE @@FETCH_STATUS=0
BEGIN
SELECT @Def = definition
FROM sys.sql_modules
WHERE object_id = @objId
--print @objName
--print @def
SET @def = REPLACE(@def, 'create procedure','alter procedure')
SET @def = REPLACE(@def, 'create PROC','alter PROC')
SET @def = REPLACE(@def, 'create trigger','alter trigger')
SET @def = REPLACE(@def, 'create function','alter function')
SET @def = REPLACE(@def, 'create view','alter view')
BEGIN TRANSACTION
BEGIN TRY
EXEC sp_executesql @def
PRINT @objName + ' seems valid.'
END TRY
BEGIN CATCH
PRINT 'Error: ' + @objName + ' : ' + CONVERT(nvarchar(10), ERROR_NUMBER()) + ' ' + ERROR_MESSAGE()
END CATCH
ROLLBACK
FETCH NEXT FROM checker INTO @objId, @objName, @owner
END
CLOSE checker
DEALLOCATE checker
``` | Validating that all Stored procedures are valid | [
"",
"sql",
"sql-server",
"validation",
"stored-procedures",
"sql-server-2008-r2",
""
] |
I'm trying to get records that have the highest value in one particular column ("version"). I'm using the base\_id to get rows, and there may be more than one row with the same base\_id, but they will then have different version numbers. So the point of the statement is to only get the one with the highest version. And the statement below works, but only if there are actually more than one value. If there is only one I get no records at all back (as opposed to the expected one row). So how can I get only the value with the highest version number below, even if for some records only one version exists?:
```
SELECT r.id
, r.title
, u.name created_by
, m.name modified_by
, r.version
, r.version_displayname
, r.informationtype
, r.filetype
, r.base_id
, r.resource_id
, r.created
, r.modified
, GROUP_CONCAT( CONCAT(CAST(c.id as CHAR),',',c.name,',',c.value) separator ';') categories
FROM resource r
JOIN category_resource cr
ON r.id = cr.resource_id
JOIN category c
ON cr.category_id = c.id
JOIN user u
ON r.created_by = u.id
JOIN user m
ON r.modified_by = m.id
WHERE r.base_id = 'uuid_033a7198-a213-11e3-93de-2b47e5a489c2'
AND r.version = (SELECT MAX(r.version) FROM resource r)
GROUP
BY r.id;
```
EDIT:
I realize the other parts of the query itself may complicate things, so I'll try to create a cleaner example, which should show what I'm after, I hope.
If I do this:
```
SELECT id, title, MAX(version) AS 'version' FROM resource GROUP BY title
```
on a table that looks like this:

Then I get the following results:

which is not correct, as you can see from the table. I.e., it has fetched the highest value for each resource, but if you look at Introduction, e.g., the resource with the value 2 for version has the id 6, whereas the one fetched has the id 1. So the query seems to somehow combine the values from different rows...?
I should note that I'm very much a novice at SQL, and the original query that I exemplified the problem with was something I got help with here, so please do explain as clearly as possible, thanks.
Another note is that I found some suggestion of a subquery, but apart from not returning the correct results either, it was really slow. I'm testing on 5000 rows and I really need to expect it to take only a fraction of a second, in order to meet performance requirements.
EDIT 2:
Found a way to incorporate a statement, sort of like one of the suggested ones, as well as the various solutions here: [Retrieving the last record in each group](https://stackoverflow.com/questions/1313120/retrieving-the-last-record-in-each-group)
However, I tried them all, and even though most seem to work, they are incredibly slow…
Take this one:
```
SELECT
r.id, r.title,
u.name AS 'created_by', m.name AS 'modified_by', r.version, r.version_displayname, r.informationtype,
r.filetype, r.base_id, r.resource_id, r.created, r.modified,
GROUP_CONCAT( CONCAT(CAST(c.id as CHAR),',',c.name,',',c.value) separator ';') AS 'Categories'
FROM
resource r
INNER JOIN
(SELECT
DISTINCT r.id AS id
FROM
resource r
INNER JOIN
category_resource cr1 ON (r.id = cr1.resource_id)
WHERE
cr1.category_id IN (9)
) mr
ON r.id = mr.id
INNER JOIN category_resource cr
ON r.id = cr.resource_id
INNER JOIN category c
ON cr.category_id = c.id
INNER JOIN user u
ON r.created_by = u.id
INNER JOIN user m
ON r.modified_by = m.id
INNER JOIN
(
SELECT max(version) MyVersion, base_id
FROM resource
GROUP BY base_id
) r2
ON r.base_id = r2.base_id
AND r.version = r2.MyVersion
group by r.base_id
order by r.version desc;
```
The addition at the end (starting with the INNER JOIN) to get only the rows with the highest version value for each base\_id slows the query down from 20 ms to around 6-8 seconds. That is a no go… But this surprises me. Although I’m obviously no database expert, it seems to me that database queries should be optimized for getting data like this. But if I do the only alternative I can think of, which is to get all the records regardless of version number, and then filter them in PHP, guess what? That is much faster than this…
I initially thought the performance hit caused by filtering in PHP was too much, but that is about a second’s delay, so still much better than this.
But I feel like I’m missing something, shouldn’t it be possible to do this much more efficiently? | Well, I think I found the answer myself. As far as I can understand, a query like this will take a lot of time, and instead the database needs to be modified. I found this:
[How to version control a record in a database](https://stackoverflow.com/questions/323065/how-to-version-control-a-record-in-a-database)
The suggestion to use startdate and enddate columns and set the enddate to null for the latest version made it very easy to do queries for the latest version. And it is again very, very fast. So this is what I needed. It gives me something like this, all put together:
```
SELECT
r.id, r.title,
u.name AS 'created_by', m.name AS 'modified_by', r.version, r.version_displayname, r.informationtype,
r.filetype, r.base_id, r.resource_id, r.created, r.modified,
GROUP_CONCAT( CONCAT(CAST(c.id as CHAR),',',c.name,',',c.value) separator ';') AS 'categories', startdate, enddate
FROM
resource r
INNER JOIN
(SELECT
DISTINCT r.id AS id
FROM
resource r
INNER JOIN
category_resource cr1 ON (r.id = cr1.resource_id)
WHERE
cr1.category_id IN (9)
) mr
ON r.id = mr.id
INNER JOIN category_resource cr
ON r.id = cr.resource_id
INNER JOIN category c
ON cr.category_id = c.id
INNER JOIN user u
ON r.created_by = u.id
INNER JOIN user m
ON r.modified_by = m.id
WHERE r.enddate is null
group by r.id;
```
And this query once again is back to the 20 ms execution time. | Given your own answer, your question was basically the same as in the link you supplied. Since you had some sub-questions I'll try to give you some additional help there.
*If you want to have some kind of version control in your database, then you basically extend your primary key by some version column(s). I'd vote for using startdate/enddate columns, too, for the reason you mentioned. Given your own answer, you could modify your layout accordingly. That's the route you should go if you can!*
In your given example it is not clear what the primary key is, since the 'id' column has changing values, too. In your case the primary key would be the column 'title'. So you could use some query like
```
SELECT title, max(version) as version FROM resource GROUP BY title
```
to get a result in which you see your original primary key and the latest version -- which together form your actual primary key.
To get all other fields in that table, you'd join that result to the resource table and use the primary key fields as join condition.
```
SELECT * FROM (
SELECT title, max(version) as version
FROM resource
GROUP BY title) as s
INNER JOIN resource r on (r.title = s.title AND r.version = s.version)
```
**Why did your query give you wrong results?**
The reason is that you had an error in your query that MySQL somewhat "*fixed*" for you. Normally you would need to supply every column that you did not use in an aggregate function (like MAX()) in your GROUP BY clause. In your example
```
SELECT id, title, MAX(version) AS 'version' FROM resource GROUP BY title
```
you had a column ('id') in the select-part of your query that you didn't supply in your GROUP BY clause.
**In MySQL you can ignore that rule ([see here](http://dev.mysql.com/doc/refman/4.1/en/group-by-hidden-columns.html))**.
> When using this feature, all rows in each group should have the same values for the columns that are omitted from the GROUP BY part. The server is free to return any value from the group, so the results are indeterminate unless all values are the same.
Since the 'id' column had different values for your key (the 'title' column) you just got some result -- in that case MySQL probably just used the first row it found. But the result itself is undefined and might be subject to change e.g. when the database gets updated or the data grows. You should not depend on rules you deduce from results you see while testing!
On other databases, like Oracle and SQL Server, you would have gotten an error trying to execute that last query.
I hope I could clarify the reason for your results a little. | SQL max value in one column | [
"",
"mysql",
"sql",
"max",
"greatest-n-per-group",
""
] |
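The join-back technique from the second answer — group to find `MAX(version)` per key, then join back on the `(title, version)` pair to recover the full rows — runs unchanged in sqlite3. The sample data mirrors the question's "Introduction" case, where naive grouping paired version 2 with the wrong id:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE resource (id INTEGER, title TEXT, version INTEGER)")
con.executemany("INSERT INTO resource VALUES (?, ?, ?)", [
    (1, "Introduction", 1), (6, "Introduction", 2),
    (2, "Chapter 1", 1),
])

# Step 1 finds the latest version per title; step 2 joins back on the
# (title, version) pair, so the id comes from the *same* row as the max.
rows = con.execute("""
    SELECT r.id, r.title, r.version
    FROM (SELECT title, MAX(version) AS version
          FROM resource GROUP BY title) AS s
    JOIN resource AS r ON r.title = s.title AND r.version = s.version
    ORDER BY r.title
""").fetchall()
print(rows)  # Introduction now pairs version 2 with id 6, not id 1
```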
I have a table which records every status change of an entity
```
id recordTime Status
ID1 2014-03-01 11:33:00 Disconnected
ID1 2014-03-01 12:13:00 Connected
ID2 2014-03-01 12:21:00 Connected
ID1 2014-03-01 12:24:00 Disconnected
ID1 2014-03-01 12:29:00 Connected
ID2 2014-03-01 12:40:00 Disconnected
ID2 2014-03-01 13:03:00 Connected
ID2 2014-03-01 13:13:00 Disconnected
ID2 2014-03-01 13:29:00 Connected
ID1 2014-03-01 13:30:00 Disconnected
```
I need to calculate the total inactive time, i.e. the time between 'Connected' and the last 'Disconnected' status, per ID, for a given time window.
For the above table and the time range 2014-03-01 11:00:00 to 2014-03-01 14:00:00, the output should be:
```
ID InactiveTime
ID1 01:15:00
ID2 02:00:00
``` | The special difficulty is not to miss the time spans to the outer time frame.
Assuming that the next row for any given `id` always has the opposite status.
Using the column name `ts` instead of `recordTime`:
```
WITH span AS (
SELECT '2014-03-01 13:00'::timestamp AS s_from -- start of time range
, '2014-03-01 14:00'::timestamp AS s_to -- end of time range
)
, cte AS (
SELECT id, ts, status, s_to
, lead(ts, 1, s_from) OVER w AS span_start
, first_value(ts) OVER w AS last_ts
FROM span s
JOIN tbl t ON t.ts BETWEEN s.s_from AND s.s_to
WINDOW w AS (PARTITION BY id ORDER BY ts DESC)
)
SELECT id, sum(time_disconnected)::text AS total_disconnected
FROM (
SELECT id, ts - span_start AS time_disconnected
FROM cte
WHERE status = 'Connected'
UNION ALL
SELECT id, s_to - ts
FROM cte
WHERE status = 'Disconnected'
AND ts = last_ts
) sub
GROUP BY 1
ORDER BY 1;
```
Returns intervals as requested.
IDs without entries in the selected time range don't show up. You would have to query them additionally.
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/f91d3/5)
Note: I cast the resulting `total_disconnected` to `text` in the fiddle, because the type `interval` is displayed in a terrible format.
### Add IDs without entry in the selected time frame
Per request in comment.
Add to the query above (before the final `ORDER BY 1`):
```
...
UNION ALL
SELECT id, total_disconnected
FROM (
SELECT DISTINCT ON (id)
t.id, t.status, (s.s_to - s.s_from)::text AS total_disconnected
FROM span s
JOIN tbl t ON t.ts < s.s_from -- only from before time range
LEFT JOIN cte c USING (id)
WHERE c.id IS NULL -- not represented in selected time frame
ORDER BY t.id, t.ts DESC -- only the latest entry
) sub
WHERE status = 'Disconnected' -- only if disconnected
ORDER BY 1;
```
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/f91d3/6)
Now, only IDs without entries in **or before** the selected time range don't show up. | This is how I understand your question [SQL Fiddle](http://sqlfiddle.com/#!15/b36ef/1)
```
select id, sum(diff) as inactive
from (
select
recordtime,
recordTime -
lag(recordTime, 1, recordTime)
over(
partition by id
order by recordTime
)
as diff,
status,
id
from t
) s
where status = 'Connected'
group by id
order by id
;
id | inactive
----+----------
1 | 00:45:00
2 | 00:39:00
```
Could you explain your desired output? | Sum of time difference between rows | [
"",
"sql",
"postgresql",
"aggregate-functions",
"window-functions",
""
] |
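The core building block of the accepted answer — pairing each row with its neighbour via a window function and summing the gaps — can be sketched with sqlite3 (this needs SQLite >= 3.25 for window functions; timestamps are stored as Unix seconds to keep the arithmetic trivial). Note this sketch deliberately ignores the outer-time-frame edge cases that the accepted answer handles:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (id TEXT, ts INTEGER, status TEXT)")
con.executemany("INSERT INTO log VALUES (?, ?, ?)", [
    ("ID1", 0,    "Disconnected"),
    ("ID1", 600,  "Connected"),     # 600 s offline just ended
    ("ID1", 900,  "Disconnected"),
    ("ID1", 1200, "Connected"),     # 300 s offline just ended
])

# LAG fetches the previous row's timestamp per id; on 'Connected' rows the
# difference is the length of the disconnected span that just ended.
rows = con.execute("""
    SELECT id, SUM(diff) AS inactive_seconds
    FROM (SELECT id, status,
                 ts - LAG(ts) OVER (PARTITION BY id ORDER BY ts) AS diff
          FROM log)
    WHERE status = 'Connected'
    GROUP BY id
""").fetchall()
print(rows)  # -> [('ID1', 900)]
```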
Table A records are
```
id | email
1 | abc@abc.com
2 | xyx@xy.com
3 | kkdk@kk.com
4 | 333@abc.com
```
Table B records are
```
id | email
1 | abc@abc.com
3 | kkdk@kk.com
```
Now the result should be
```
id | email
2 | xyx@xy.com
4 | 333@abc.com
```
How can I do this using joins? | Try something like:
```
SELECT
t1.*
FROM tableA as t1
LEFT JOIN tableB t2 ON t1.email=t2.email
WHERE t2.id IS NULL
``` | Try this:
```
select distinct(a.id), a.email from table1 a inner join table2 b
on a.id not in (select id from table2)
```
[fiddle](http://sqlfiddle.com/#!2/8f18a/5) | Display all table A records which are not in table B using mysql joins concept | [
"",
"mysql",
"sql",
""
] |
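The accepted `LEFT JOIN ... WHERE ... IS NULL` anti-join pattern is easy to check with sqlite3, using the exact data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER, email TEXT)")
con.execute("CREATE TABLE b (id INTEGER, email TEXT)")
con.executemany("INSERT INTO a VALUES (?, ?)", [
    (1, "abc@abc.com"), (2, "xyx@xy.com"),
    (3, "kkdk@kk.com"), (4, "333@abc.com")])
con.executemany("INSERT INTO b VALUES (?, ?)",
                [(1, "abc@abc.com"), (3, "kkdk@kk.com")])

# Rows of a with no match in b come back NULL-extended on the b side;
# filtering on b.id IS NULL keeps exactly those rows.
rows = con.execute("""
    SELECT t1.id, t1.email
    FROM a AS t1
    LEFT JOIN b AS t2 ON t1.email = t2.email
    WHERE t2.id IS NULL
    ORDER BY t1.id
""").fetchall()
print(rows)  # -> [(2, 'xyx@xy.com'), (4, '333@abc.com')]
```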
How do I get a list of all ids/primary keys for a table? Say I have this table:
```
class Blog(models.Model)
title = models.CharField()
body = models.CharField()
author = models.ForeignKey(Author)
```
assume the field *author* is an Author object. I want to get all the ids of Blog where author=author
I know I can use
```
blogs = Blog.objects.filter(author=author)
```
and get all the Blog objects as a list, but how do I get the list of ids/PKs? Similar to "Select id from Blog where Author=author". | You can do this using the [`values_list`](https://docs.djangoproject.com/en/stable/ref/models/querysets/#values-list) method.
```
blogs = Blog.objects.filter(author=author).values_list('id', flat=True)
```
See more at the [Django queryset documentation](https://docs.djangoproject.com/en/stable/ref/models/querysets/#values-list). | ```
Blog.objects.filter(author=author).values_list('id', flat=True)
```
`values_list()` gives a list of rows, each row a tuple of all of the fields you specify as arguments, in order. If you only pass a single field in as an argument, you can also specify `flat=True` to get a plain list instead of a list of tuples. | django models: get list of id | [
"",
"sql",
"django",
"django-models",
""
] |
This is my table structure
```
create table #t(id int identity(1,1),street varchar(100))
insert into #t values('#100 church street'),('#102 vitalFinaly street'),('#101 teriyakei street')
```
I need output like this (the 6 characters after the integer part):
```
id (No column name)
1 church
2 vitalF
3 teriya
```
I tried this `select id,substring(street,6) from #t` | **[Use Charindex](http://technet.microsoft.com/en-us/library/ms186323.aspx)**
```
select id,substring(street,charindex(' ',street)+1,6) from #t
```
**[SEE DEMO](http://www.sqlfiddle.com/#!6/d41d8/15459)** | Please try:
```
select
id, LEFT(SUBSTRING(Street, PATINDEX('%[A-Z]%', Street), LEN(Street)), 6)
from #t
``` | selecting values after integer using substring | [
"",
"sql",
"sql-server",
""
] |
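The `CHARINDEX`/`SUBSTRING` idea from the accepted answer translates to `INSTR`/`SUBSTR` in SQLite, which makes it easy to test with the data copied from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, street TEXT)")
con.executemany("INSERT INTO t (street) VALUES (?)", [
    ("#100 church street",),
    ("#102 vitalFinaly street",),
    ("#101 teriyakei street",)])

# INSTR finds the first space (the end of the house number);
# SUBSTR then takes the 6 characters that follow it.
rows = con.execute(
    "SELECT id, SUBSTR(street, INSTR(street, ' ') + 1, 6) FROM t ORDER BY id"
).fetchall()
print(rows)  # -> [(1, 'church'), (2, 'vitalF'), (3, 'teriya')]
```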
I have queries from two different tables that group their respective data (count) by yyyy-mm. I would like to take the count results from both queries and display them together. Following are my two queries:
QUERY 1:
```
DECLARE @ThisMonth DATETIME
SET @ThisMonth = DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101')
select cast(cast(TransportAppv6.dbo.T_FatigueNCRs.date as date) as varchar(7)) AS [MONTH], COUNT(TransportAppv6.dbo.T_FatigueNCRDetails.NCRID) AS [NCRs]
FROM TransportAppv6.dbo.T_FatigueNCRs
LEFT OUTER JOIN TransportAppv6.dbo.T_FatigueNCRDetails
ON T_FatigueNCRs.NCRID = TransportAppv6.dbo.T_FatigueNCRDetails.NCRID
LEFT OUTER JOIN TransportAppv6.dbo.T_Driver
ON TransportAppv6.dbo.T_FatigueNCRs.DriverID = TransportAppv6.dbo.T_Driver.ID
WHERE (TransportAppv6.dbo.T_FatigueNCRDetails.CancelYN = 0)
AND (TransportAppv6.dbo.T_Driver.FleetID = 2)
AND (TransportAppv6.dbo.T_Driver.Active = 1)
AND (TransportAppv6.dbo.T_FatigueNCRDetails.NCRTypeID = 1)
AND (TransportAppv6.dbo.T_FatigueNCRDetails.Details <> 'Driving Hours Breach As Per Breach Report')
AND (TransportAppv6.dbo.T_FatigueNCRs.Date >= DATEADD(MONTH, -6, @ThisMonth)
AND TransportAppv6.dbo.T_FatigueNCRs.Date < @ThisMonth)
GROUP BY cast(cast(TransportAppv6.dbo.T_FatigueNCRs.date as date) as varchar(7))
```
RETURNS:
```
MONTH NCRs
2013-08 43
2013-09 34
2013-10 53
```
QUERY 2:
```
DECLARE @ThisMonth DATETIME
SET @ThisMonth = DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101')
select cast(cast(landing.dbo.landing_runs.date as date) as varchar(7)) as [MONTH], COUNT(*) as [RUNS]
from landing.dbo.Landing_Runs
where (landing.dbo.Landing_Runs.date >= DATEADD(MONTH, -6, @ThisMonth)
AND landing.dbo.Landing_Runs.date < @ThisMonth)
GROUP BY cast(cast(landing.dbo.landing_runs.date as date) as varchar(7))
```
RETURNS:
```
MONTH RUNS
2013-08 4626
2013-09 4042
2013-10 4481
```
What I would like to do is somehow join these two separate queries so that the results appear as:
```
MONTH RUNS NCRs
2013-08 4626 43
2013-09 4042 34
2013-10 4481 53
``` | You can just join them together as subqueries. I chose `INNER JOIN` below but feel free to change it to a different type of join.
```
DECLARE @ThisMonth DATETIME
SET @ThisMonth = DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101')
select t1.[MONTH], t1.[NCRs], t2.[RUNS]
from
(
select cast(cast(TransportAppv6.dbo.T_FatigueNCRs.date as date) as varchar(7)) AS [MONTH], COUNT(TransportAppv6.dbo.T_FatigueNCRDetails.NCRID) AS [NCRs]
FROM TransportAppv6.dbo.T_FatigueNCRs
LEFT OUTER JOIN TransportAppv6.dbo.T_FatigueNCRDetails
ON T_FatigueNCRs.NCRID = TransportAppv6.dbo.T_FatigueNCRDetails.NCRID
LEFT OUTER JOIN TransportAppv6.dbo.T_Driver
ON TransportAppv6.dbo.T_FatigueNCRs.DriverID = TransportAppv6.dbo.T_Driver.ID
WHERE (TransportAppv6.dbo.T_FatigueNCRDetails.CancelYN = 0)
AND (TransportAppv6.dbo.T_Driver.FleetID = 2)
AND (TransportAppv6.dbo.T_Driver.Active = 1)
AND (TransportAppv6.dbo.T_FatigueNCRDetails.NCRTypeID = 1)
AND (TransportAppv6.dbo.T_FatigueNCRDetails.Details <> 'Driving Hours Breach As Per Breach Report')
AND (TransportAppv6.dbo.T_FatigueNCRs.Date >= DATEADD(MONTH, -6, @ThisMonth)
AND TransportAppv6.dbo.T_FatigueNCRs.Date < @ThisMonth)
GROUP BY cast(cast(TransportAppv6.dbo.T_FatigueNCRs.date as date) as varchar(7))
) t1
inner join
(
select cast(cast(landing.dbo.landing_runs.date as date) as varchar(7)) as [MONTH], COUNT(*) as [RUNS]
from landing.dbo.Landing_Runs
where (landing.dbo.Landing_Runs.date >= DATEADD(MONTH, -6, @ThisMonth)
AND landing.dbo.Landing_Runs.date < @ThisMonth)
GROUP BY cast(cast(landing.dbo.landing_runs.date as date) as varchar(7))
) t2 on t1.[MONTH] = t2.[MONTH]
``` | If both queries will return the same Month ranges, you can join them together with an Inner Join. Otherwise, I would create a table variable containing all the months you want to display. Let me know if I can help you with that.
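As a sanity check, the subquery-join shape used here (aggregate each source in its own subquery, then join on the month key) can be sketched with SQLite; the tables and rows below are invented stand-ins, not the poster's real schema:

```python
import sqlite3

# Two toy tables standing in for the NCR and runs data (names are made up).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ncrs (date TEXT);
    CREATE TABLE runs (date TEXT);
    INSERT INTO ncrs VALUES ('2013-08-03'), ('2013-08-20'), ('2013-09-11');
    INSERT INTO runs  VALUES ('2013-08-01'), ('2013-08-02'),
                             ('2013-08-15'), ('2013-09-09');
""")

# Aggregate each table by month in its own subquery, then join on the month.
rows = conn.execute("""
    SELECT t1.month, t2.runs, t1.ncrs
    FROM (SELECT substr(date, 1, 7) AS month, COUNT(*) AS ncrs
          FROM ncrs GROUP BY substr(date, 1, 7)) t1
    JOIN (SELECT substr(date, 1, 7) AS month, COUNT(*) AS runs
          FROM runs GROUP BY substr(date, 1, 7)) t2
         ON t1.month = t2.month
    ORDER BY t1.month
""").fetchall()
print(rows)  # [('2013-08', 3, 2), ('2013-09', 1, 1)]
```

With an `INNER JOIN`, a month missing from either side drops out of the result; switch to a `LEFT JOIN` to keep it.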
```
DECLARE @ThisMonth DATETIME
SET @ThisMonth = DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101')
select ncrs.[Month],ncrs.NCRs,runs.Runs
from (
select
cast(cast(ncr.date as date) as varchar(7)) AS [MONTH],
COUNT(ncrd.NCRID) AS [NCRs]
FROM TransportAppv6.dbo.T_FatigueNCRs ncr
LEFT OUTER JOIN TransportAppv6.dbo.T_FatigueNCRDetails
ON T_FatigueNCRs.NCRID = ncrd.NCRID
LEFT OUTER JOIN TransportAppv6.dbo.T_Driver
ON ncr.DriverID = TransportAppv6.dbo.T_Driver.ID
WHERE (ncrd.CancelYN = 0)
AND (TransportAppv6.dbo.T_Driver.FleetID = 2)
AND (TransportAppv6.dbo.T_Driver.Active = 1)
AND (ncrd.NCRTypeID = 1)
AND (ncrd.Details <> 'Driving Hours Breach As Per Breach Report')
AND (ncr.Date >= DATEADD(MONTH, -6, @ThisMonth)
AND ncr.Date < @ThisMonth)
GROUP BY cast(cast(ncr.date as date) as varchar(7))
) ncrs
inner join (
select
cast(cast(landing.dbo.landing_runs.date as date) as varchar(7)) as [MONTH],
COUNT(*) as [RUNS]
from landing.dbo.Landing_Runs
where (landing.dbo.Landing_Runs.date >= DATEADD(MONTH, -6, @ThisMonth)
AND landing.dbo.Landing_Runs.date < @ThisMonth)
GROUP BY cast(cast(landing.dbo.landing_runs.date as date) as varchar(7))
) runs
on runs.[Month] = ncrs.NCRs
``` | SQL joining queries from two databases | [
"",
"sql",
"sql-server",
""
] |
Assuming I have the following table :
```
Table Files
id
size
```
In pseudo-SQL I need this kind of processing:
```
select id
where
size < S1 and (limit this clause to N1 results)
or
size > S2 and (limit this clause to N2 results)
```
I know Oracle defines the `rownum` keyword to limit results,
but in my case that does not work.
Does this require subselects? How? Is it possible to use multiple subselects?
(This SQL would be generated by a Java program with other where clauses, making the use of subselects difficult.)
Edit:
I need to filter more than just different size.
SOLVED
```
SELECT *
FROM Files
WHERE
someField = 'stuff'
AND
someOtherField = 'other stuff'
AND
(
SELECT id FROM Files WHERE size <= S1 AND ROWNUM <= N1
UNION
SELECT id FROM Files WHERE size > S2 AND ROWNUM <= N2
)
``` | Use a `UNION`:
```
SELECT *
FROM (
SELECT 'low' AS "which", "id"
FROM Files
WHERE "size" < S1)
WHERE ROWNUM <= N1
UNION
SELECT *
FROM (
SELECT 'high' AS "which", "id"
FROM Files
WHERE "size" >= S1)
WHERE ROWNUM <= N2
```
[DEMO](http://www.sqlfiddle.com/#!4/fc7e9/7) | Assuming the clauses are *independent* (which they are for your example), you can do this with `union all`:
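As a runnable sketch of the same per-branch-limit idea: SQLite has no `ROWNUM`, so each branch's cap is expressed with `LIMIT` inside its own subquery before the branches are combined (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE files (id INTEGER, size INTEGER);
    INSERT INTO files VALUES (1, 2), (2, 3), (3, 4), (4, 10), (5, 12);
""")

# Each branch is limited inside its own subquery (LIMIT plays the role of
# the ROWNUM filter), then the two capped result sets are combined.
rows = conn.execute("""
    SELECT id FROM (
        SELECT * FROM (SELECT id FROM files WHERE size < 5  ORDER BY id LIMIT 2)
        UNION ALL
        SELECT * FROM (SELECT id FROM files WHERE size >= 5 ORDER BY id LIMIT 1)
    ) ORDER BY id
""").fetchall()
print(rows)  # [(1,), (2,), (4,)]
```

Three ids with size below 5 exist, but only two survive the first branch's cap; the second branch contributes exactly one.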
```
select id
. . .
where size < S1 and rownum <= N1
union all
select id
. . .
where size > S1 and rownum <= N2
```
If you don't want to repeat the `from` clause (because it is complex for instance), you can use a CTE:
```
with t as (<blah blah blah>)
select *
from ((select id from t where size < S1 and rownum <= N1) union all
(select id from t where size > S1 and rownum <= N2)
) x
``` | SQL limit results in where clause | [
"",
"sql",
"oracle",
""
] |
I want to calculate the date-time difference between two dates, but the minutes should be on a base-100 scale, i.e. if the date-time difference is `2.30 (2 Hours 30 Minutes)`, I want it in the format `2.50`.
30 minutes = 0.50
I wrote a query for it, but it does not work well when the minutes are in the range `01 - 09`.
Case 1 : Wrong Output
```
Declare @Start DateTime='02-03-2014 14:25:00'
Declare @End DateTime='02-03-2014 20:29:46'
Select STR(DateDiff(MINUTE,@Start,@End)/60)+'.'+STR(DateDiff(MINUTE,@Start,@End)%60/0.6)
```
DateTime Difference : `6.04`
Expected Output : `6.10`
Actual Output : `6.7`
Case 2 : Correct Output
```
Declare @Start DateTime='02-03-2014 13:55:02'
Declare @End DateTime='02-03-2014 17:33:31'
Select STR(DateDiff(MINUTE,@Start,@End)/60)+'.'+STR(DateDiff(MINUTE,@Start,@End)%60/0.6)
```
DateTime Difference : `3.38`
Expected Output : `3.63`
Actual Output : `3.63`
What am I missing in the case when the minutes are less than 10?
DB : SQL Server 2008 | 60 minutes = 60/60 = 1.0
30 minutes = 30/60 = 0.5
4 minutes = 4/60 = `0.066`, not `0.10`
0.10 = 6 minutes, not 4 minutes difference as in your first example.
If we temporarily remove the seconds to reduce complexity, then you simply divide the seconds by 3600.00:
```
Declare @Start DateTime='02-03-2014 14:25:00'
Declare @End DateTime='02-03-2014 20:29:00'
SELECT DATEDIFF(s,@Start,@End)/3600.00
```
This returns `6.066` which is the correct portion of hours to return 6 hours 4 minutes difference and a far simpler expression. | It seems in `STR(DateDiff(MINUTE,@Start,@End)%60/0.6)` 0.6 should be replaced with 6.0
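The seconds-over-3600 arithmetic can be sanity-checked outside the database; in Python, the same division on the question's times gives the matching fraction (the calendar date itself does not affect the result):

```python
from datetime import datetime

start = datetime(2014, 3, 2, 14, 25, 0)
end = datetime(2014, 3, 2, 20, 29, 0)

# Same arithmetic as DATEDIFF(s, ...) / 3600.00: total seconds over 3600.
hours = (end - start).total_seconds() / 3600
print(round(hours, 3))  # 6.067
```

So 6 hours 4 minutes comes out as roughly 6.067, not 6.10: 4 minutes is 4/60 of an hour.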
The above solution would round off upto 1 place after decimal.
If you want to `ROUND` off to 2 places after decimal you can try the below snippet:
```
SELECT CAST(CAST(ROUND(DateDiff(MINUTE,@Start,@End)%60/0.6) AS NUMERIC(12,2)) AS VARCHAR(50))
``` | convert datediff minutes to 100 base format in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
```
INSERT into LibraryTable(ID, Artist, Name, Rating, PlayCount) VALUES(
Select MIN(a.ID + 1)
From LibraryTable A
Left Join LibraryTable B
On A.ID = B.ID - 1
Where B.ID Is NULL, 'eg', 'eg', 1, 1)
```
im getting a syntax error at "Select" and the "," after null. im a beginner so any helpful tips would be greatly appreciated. | You want `insert . . . select`:
```
INSERT into LibraryTable(ID, Artist, Name, Rating, PlayCount)
Select MIN(a.ID + 1), 'eq', 'eq', 1, 1
From LibraryTable A Left Join
LibraryTable B
On A.ID = B.ID - 1
Where B.ID Is NULL;
```
The `values` keyword is not used for this form of the `insert`.
If you are using SQL Server 2012, you can do this with the `lead()` function instead of a self join:
```
INSERT into LibraryTable(ID, Artist, Name, Rating, PlayCount)
Select MIN(lt.ID + 1), 'eq', 'eq', 1, 1
From (select lt.*, lead(id, 1) over (order by id) as nextid
from LibraryTable lt
      ) lt
where nextid <> id + 1;
``` | TRY this:
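A quick SQLite sketch of the same `INSERT ... SELECT` gap-filling self-join (toy data; this mirrors the first query's form rather than the `lead()` variant):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE LibraryTable (
        ID INTEGER, Artist TEXT, Name TEXT, Rating INTEGER, PlayCount INTEGER);
    INSERT INTO LibraryTable VALUES
        (1, 'a', 'x', 1, 1), (2, 'b', 'y', 1, 1), (4, 'c', 'z', 1, 1);
""")

# INSERT ... SELECT with the self-join: pick the smallest ID whose
# successor is missing, i.e. the first gap (here between 2 and 4).
conn.execute("""
    INSERT INTO LibraryTable (ID, Artist, Name, Rating, PlayCount)
    SELECT MIN(a.ID + 1), 'eg', 'eg', 1, 1
    FROM LibraryTable a
    LEFT JOIN LibraryTable b ON a.ID = b.ID - 1
    WHERE b.ID IS NULL
""")
ids = [r[0] for r in conn.execute("SELECT ID FROM LibraryTable ORDER BY ID")]
print(ids)  # [1, 2, 3, 4]
```

Note there is no `VALUES` keyword at all: the `SELECT` supplies both the computed ID and the constant columns.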
```
INSERT into LibraryTable(ID, Artist, Name, Rating, PlayCount)
Select MIN(a.ID + 1) as ID, 'eg' as Artist, 'eg' as Name, 1 as Rating, 1 as PlayCount
From LibraryTable A
Left Join LibraryTable B
On A.ID = B.ID - 1
Where B.ID Is NULL
``` | Syntax Error but not sure why (SQL Server) | [
"",
"sql",
"sql-server",
"database",
""
] |
Let's say we are to normalize a database into 3rd normal form using the requirement:
> I need a movie ticket registry program that can remember customers and
> the tickets that they've purchased.
We might end up with a database like this:
```
ticket
id
movie_name
price
customer
id
first_name
```
However, when I look at this, for some reason it looks redundant. What if I were to break it up into even smaller pieces, like this:
```
name
id
name
customer
id
fk_name_id
ticket
id
fk_name_id
price
```
Would this be a good approach? Is there a name for this approach? | As Jordan says, the point of breaking data out into a separate table is to avoid redundant data.
As you apparently realize, we do NOT want to lay out our tables like this:
```
WRONG!!!
ticket
customer_name
movie_name
```
That would mean that the customer\_name is repeated for every movie he watches, and the movie name is repeated for every person who watches that movie. Lots and lots of redundant names. If the user has to type them in every time, it's likely that sometimes he mis-spells a name or uses a variation on a name, like we find our table includes "Star Wars", "Star Wars IV", "Star Wars Episode IV", and "Stra Wars", all for the same movie. All sorts of problems.
By breaking the customer and the movie out into separate tables, we eliminate all the redundancy. Great. Celebrate.
But if we take your suggestion of making a "name" table that holds both customer names and movie names, did we eliminate any redundancy?
If a customer has the same name as a movie -- if we happen to have a customer named "Anna Karenina" or "John Carter" or whatever (or maybe someone named their kid "Batman Returns" for that matter) -- are you going to use the same record to store both? If no, then you have not saved any redundancy. You have just forced us to do an extra join every time we read the tables.
If you do use the same record, it's even worse. What if you create a record for customer "Anna Karenina" and you share the id/name record with the movie. Then Anna gets married and now her name is "Anna Smith". If you update the name record, you have not only changed the name of the customer, but also the title of the movie! This would be a very bad thing.
You could, of course, say that if you change the name, that instead of updating in place you create a new record for the new name. But then that defeats half the purpose of breaking the names out to a separate table. Suppose when we originally created the movie record we mistyped the name as "Anna Karina". Now someone points out our mistake and we fix it. But with the "make a new record every time there's a change" logic, we'd have to fix each ticket sale one by one.
I guess you could ask the user if this is a change for just the movie title, just the customer name, or both. But now we've added another level of complexity. And for what? Our program is more complex, our queries are more complex, and our user interface is more complex. In exchange, we get a tiny gain in saving disk space for the rare case where a customer coincidentally has the same name as a movie title.
Not worth it. | Your first approach is not correct. If you think about the problem, there are three entities:
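To make the update anomaly concrete, here is a small SQLite sketch using the question's second layout; the data and the shared-name-row scenario are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE name     (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE customer (id INTEGER PRIMARY KEY, fk_name_id INTEGER);
    CREATE TABLE ticket   (id INTEGER PRIMARY KEY, fk_name_id INTEGER, price INTEGER);
    INSERT INTO name VALUES (1, 'Anna Karenina');   -- shared name record
    INSERT INTO customer VALUES (1, 1);             -- the customer Anna
    INSERT INTO ticket VALUES (1, 1, 10);           -- a ticket for the movie
""")

# The customer marries and renames; the shared row drags the movie title along.
conn.execute("UPDATE name SET name = 'Anna Smith' WHERE id = 1")
movie = conn.execute("""
    SELECT n.name FROM ticket t JOIN name n ON n.id = t.fk_name_id
""").fetchone()[0]
print(movie)  # 'Anna Smith' -- the movie "title" changed too
```

One `UPDATE` aimed at the customer silently retitles the movie, which is exactly the hazard described above.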
* Movie
* Customer
* Ticket
The connection between `Movie` and `Customer` is really the `Ticket` table, so this is an example of an association or junction table that has additional information.
I wouldn't think of the problem as "there is an entity 'name' and customers and movies both have names". The name is an attribute of other entities, it is not its own entity (at least in this case). | A different (less redundant) approach to Normalization | [
"",
"sql",
"database",
"database-normalization",
""
] |
I have a column which I need to set to true or false based on a condition, is this possible to do as part of an existing update?
Example:
```
UPDATE i
SET i.Outstanding = i.Total - @Payments,
i.Paid = @Payments ,
i.Closed = (i.Total <= @Payments) -- THIS DOESNT WORK :(
FROM Invoice i
JOIN [Transaction] t ON t.Invoice_Id = i.Id
WHERE i.Id = @InvoiceId
``` | You can use `CASE` statement
```
UPDATE i
SET i.Outstanding = i.Total - @Payments,
i.Paid = @Payments ,
i.Closed = CASE WHEN i.Total <= @Payments THEN 1 ELSE 0 END
FROM Invoice i
JOIN [Transaction] t ON t.Invoice_Id = i.Id
WHERE i.Id = @InvoiceId
``` | You can use simple case statement like below
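A small runnable sketch of the same CASE-inside-UPDATE trick, using SQLite and an invented two-row invoice table (SQLite has no BIT type, so the flag is plain 1/0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoice (id INTEGER, total INTEGER,
                          outstanding INTEGER, paid INTEGER, closed INTEGER);
    INSERT INTO invoice VALUES (1, 100, NULL, NULL, NULL),
                               (2, 300, NULL, NULL, NULL);
""")

payments = 150
# Same CASE trick: the boolean condition is folded into 1/0 inside the UPDATE.
conn.execute("""
    UPDATE invoice
    SET outstanding = total - ?,
        paid = ?,
        closed = CASE WHEN total <= ? THEN 1 ELSE 0 END
""", (payments, payments, payments))
rows = conn.execute("SELECT id, closed FROM invoice ORDER BY id").fetchall()
print(rows)  # [(1, 1), (2, 0)]
```

The first invoice (total 100) is fully covered by the 150 payment and gets `closed = 1`; the second stays open.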
```
UPDATE i
SET i.Outstanding = i.Total - @Payments,
i.Paid = @Payments ,
i.Closed = CASE WHEN i.Total <= @Payments THEN 'Your value' ELSE 'your value' END
FROM Invoice i
JOIN [Transaction] t ON t.Invoice_Id = i.Id
WHERE i.Id = @InvoiceId
``` | How can I set a column based on a condition in SQL Server? | [
"",
"sql",
"sql-server",
""
] |
I'm studying this SQL statement :
```
SELECT * FROM Customers
ORDER BY Country,CustomerName;
```
[From this](http://www.w3schools.com/sql/sql_orderby.asp)
How does it work that it orders by both country and CustomerName ? In my mind it just doesn't seem intuitive, i.e you order by either Country OR CustomerName but not both | It orders by Country, but if some rows have the same Country, it orders them by CustomerName. | It orders by country first. If the countries match, then it orders by customer name. It sorts the customers alphabetically but groups them by country, so English customers come before French ones. | How do SQL order-by with multiple-columns work? | [
"",
"sql",
""
] |
How to Delete Duplicate records from Table in SQL Server ? | ```
WITH CTE AS(
SELECT [col1], [col2], [col3], [col4], [col5], [col6], [col7],
RN = ROW_NUMBER()OVER(PARTITION BY col1 ORDER BY col1)
FROM dbo.Table1
)
DELETE FROM CTE WHERE RN > 1
```
[Example](http://sqlfiddle.com/#!6/fae73/1/0) | To delete rows where the combination of columns `col_1, col_2, ... col_n` are duplicates, you can use a common table expression;
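For a runnable sketch: SQLite cannot `DELETE` through a CTE, so the equivalent keep-one-row-per-group form below (keep the lowest `rowid` of each group) stands in for the `ROW_NUMBER` approach; the table and data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 TEXT);
    INSERT INTO t VALUES ('a'), ('a'), ('b'), ('b'), ('b'), ('c');
""")

# Keep the first physical row of every col1 group, delete the rest.
# This plays the same role as RN > 1 in the CTE version above.
conn.execute("""
    DELETE FROM t
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY col1)
""")
rows = [r[0] for r in conn.execute("SELECT col1 FROM t ORDER BY col1")]
print(rows)  # ['a', 'b', 'c']
```

Either way, the key idea is the same: rank rows within each duplicate group and delete everything but rank 1.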
```
WITH cte AS (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY col1, ..., col_n ORDER BY col_1) AS rn
FROM table_1
)
DELETE FROM cte WHERE rn<>1;
```
Since the rows are classified by the contents of the listed columns, if the rows are identical in all ways, you'll still need to list all columns in the query.
*As always, test and/or backup before running deletes from random people on the Internet on your production data.* | How to Delete Duplicate records from Table in SQL Server? | [
"",
"sql",
"sql-server",
""
] |
i have two different database schemas
First Structure is :
```
table 1 : country_master
Column name: id(primary key) country_name
table 2: State_master
Column name: id(primary key) State_name Country_id(foreign key of country_master)
table 3: City_Master
Column name: id(primary key) city_name State_id(foreign key of State_master)
```
Second Structure is:
```
table 1 : Master_table
Column name: id(primary key) name
table 2 : Sub_master
Column name: id(primary key) name master_id(Foreign key of master Table) subid(id of Sub_master)
```
Record of first structure
```
table: Country_master
1 india
2 UK
3 USA
table:State_master
1 gujarat 1
2 MP 1
3 Up 1
table:City_master
1 ahmedabad 1
2 surat 1
```
Record of structure 2:
```
table:master_table
1 Country
2 City
3 State
table:master_table
1 india 1 0
2 UK 1 0
3 USA 1 0
4 Gujarat 3 1
5 MP 3 1
6 Up 3 1
7 ahmedabad 2 4
8 surat 2 4
```
now my question is what is best schema of my tables for performance: | The second structure gonna be create more issue regarding performance point of view.
As the child table have all references, it is difficult to maintain. Consider the scenario, where you need to mark for the "capital" of each state having the crores of data, then what would you do to add one more column into existing table and update each of one? No not obviously, instead the better job is to separate all the cities, states and countries in a separate table and use in query having joins, so join performes on primary key. And the most important is, these data not gonna be change often means these remains constant most of the times. These are called lookups, hence you need to separate in a table. The first structure is good in a first appearance because dataset is too small, as data increases, that gonna create problems, hence the better is to use the first structure. | You could probably achieve the same performance by adding the right indices on the tables.
For ex: if one if your queries is search for all states in a country you would need to create an index on the master and sub id of the sub table in the second structure. | best structure of my table in Perfomance way: | [
"",
"mysql",
"sql",
"sql-server",
"sql-server-2008",
"sqlite",
""
] |
I have millions of records in a table with following columns:
```
id, item_id, customer_id, date
```
I want to find `customer_id` for a specific month which are inserted first time in the table in that month.
What is best and simple query for this.
Thanks | ```
select customer_id , min(date) as min_date
from theTable
group by customer_id
having min_date>=<the desired date>
``` | try something like:
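The same first-purchase idea can be sketched in SQLite (table and data invented; the month window is written as explicit date bounds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE purchases (customer_id INTEGER, date TEXT);
    INSERT INTO purchases VALUES
        (1, '2014-01-05'), (1, '2014-02-10'),   -- first seen in January
        (2, '2014-02-03'),                      -- first seen in February
        (3, '2014-02-20'), (3, '2014-02-25');   -- first seen in February
""")

# A customer is "new" in February when their earliest row falls in that month.
rows = conn.execute("""
    SELECT customer_id, MIN(date) AS first_date
    FROM purchases
    GROUP BY customer_id
    HAVING MIN(date) >= '2014-02-01' AND MIN(date) < '2014-03-01'
    ORDER BY customer_id
""").fetchall()
print(rows)  # [(2, '2014-02-03'), (3, '2014-02-20')]
```

Customer 1 bought again in February but is excluded, because the `HAVING` filter looks only at the earliest purchase.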
```
SELECT
date_format(date,"%M") month,group_concat(customer_id ORDER BY customer_id) customer_ids
FROM
<table_name>
GROUP BY month;
``` | MySQL Query: Get new customers in the month | [
"",
"mysql",
"sql",
""
] |
**I have created three tables**
```
create table employee1 (eid int, ename varchar(25), email varchar(15));
create table accounts1 (eid int, accno int, atype varchar(2), bal int);
create table trans1 (cid int, accno int, atype varchar(2), accounts int, bal int);
```
**using alter command I have added primary key and foreign keys**
```
alter table employee1 add primary key(eid);
alter table accounts1 add foreign key(eid) references employee1(eid);
alter table accounts1 add primary key(accno,atype);
alter table trans1 add foreign key(accno,atype) references accounts1(accno,atype);
```
Now my problem is that when I try to delete all the primary and foreign keys of all the tables, I get errors.
Initially I tried deleting the primary key; then the links below showed me that the foreign keys must be deleted first. I tried doing that, but I am still getting errors.
[MYSQL 5.5 Drop Primary Key](https://stackoverflow.com/questions/9243016/mysql-5-5-drop-primary-key)
errors:
```
mysql> alter table employee1 drop constraint eid;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'constraint eid' at line 1

mysql> alter table accounts1 drop primary key;
ERROR 1025 (HY000): Error on rename of '.\jlcindia#sql-b20_1' to '.\jlcindia\accounts1' (errno: 150)

mysql> alter table employee1 drop primary key;
ERROR 1025 (HY000): Error on rename of '.\jlcindia#sql-b20_1' to '.\jlcindia\employee1' (errno: 150)

mysql> alter table trans1 drop foreign key;
ERROR 1005 (HY000): Can't create table 'jlcindia.#sql-b20_1' (errno: 150)

mysql> show engine innodb status
```
Note: I have no auto increment or any such thing in any of my tables.
I checked the following links but none was of any help.
[Remove Primary Key in MySQL](https://stackoverflow.com/questions/2111291/remove-primary-key-in-mysql)
[mysql, alter column remove primary key and auto incremement](https://stackoverflow.com/questions/3090442/mysql-alter-column-remove-primary-key-and-auto-incremement)
[Error altering primary key column in mysql table](https://stackoverflow.com/questions/17266401/error-altering-primary-key-column-in-mysql-table)
[How can I alter a primary key constraint using SQL syntax?](https://stackoverflow.com/questions/8761570/how-can-i-alter-a-primary-key-constraint-using-sql-syntax) | You need to drop the constraints in the following order(Which is the reverse of the order in which you created the keys):
* Foreign keys in the table `trans1`
* Primary keys in the table `accounts1`
* Foreign key in the table `accounts1`
* Primary key in the table `employee1`
[fiddle](http://sqlfiddle.com/#!2/f5195) | I think,You need to **first remove** the foreign key of table **trans1**.Then remove foreign key of table **accounts1**.Then try to remove primary key of table **accounts1**.then remove primary key of table \**employee1 \**. | Usage of Alter command to drop Primary key and Foreign Key | [
"",
"mysql",
"sql",
"database",
"primary-key",
"alter-table",
""
] |
Hi, I am looking for a way to write a SQL statement that produces the following results.
Let's say we have Dept and Emp ID. I would like the result to contain two rows from Dept 3, followed by one row from Dept 2, then Dept 3 again, and so on:
```
DEPT EMPID
----- ------
3 1
3 2
2 3
3 7
3 8
2 9
```
Thank You. | You could use something like this
```
SELECT
DEPT,
EMPID
FROM (
SELECT
*,
ceil((row_number() OVER (PARTITION BY dept ORDER BY EMPID ))/ 2::numeric(5,2)) AS multiple_row_dept,
row_number() OVER (PARTITION BY dept ORDER BY EMPID ) AS single_row_dept
FROM
test_data2
) sub_query
ORDER BY
CASE
WHEN DEPT = 2 THEN single_row_dept
ELSE multiple_row_dept
END,
DEPT DESC,
EMPID
```
`single_row_dept` specifics which dept should appear only once, in this case its `DEPT 2` followed by multiple other departments | First order a table by empid in a subquery,
then calculate a remainder of rowids divided by 3,
then depending on a result of calculation, return 2 or 3, using `case` expression,
like this
```
SELECT
CASE REMAINDER( rownum, 3 )
WHEN 0 THEN 2
ELSE 3
END As DeptId,
empid
FROM (
SELECT empid
FROM table1
ORDER BY empid
)
```
demo: <http://sqlfiddle.com/#!4/bd1bb/3> | Oracle SQL to generate interleaved SQL results | [
"",
"sql",
"oracle",
""
] |
Assume that I have the following table in my database:
```
ID Title MaxVisited
1 hi 6
2 bye 8
3 How? 9
4 News! 8
5 Hey 3
6 Thanks 9
```
Now I want to select the titles of the three rows with the largest `MaxVisited` values. The result I want:
```
Thanks , How? , News!
``` | Just select the top 3 and order by `MaxVisited` and (if necessary) the `Id` column
```
SELECT TOP 3 Title from [TableName]
order by MaxVisited, Id desc
``` | Try this
```
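A runnable sketch of the intended result using SQLite, where SQL Server's `TOP n` is spelled `ORDER BY ... LIMIT n`; note the descending sort, which is what actually surfaces the largest values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER, title TEXT, maxvisited INTEGER);
    INSERT INTO t VALUES (1, 'hi', 6), (2, 'bye', 8), (3, 'How?', 9),
                         (4, 'News!', 8), (5, 'Hey', 3), (6, 'Thanks', 9);
""")

# TOP n is SQL Server syntax; SQLite expresses the same idea with LIMIT.
rows = [r[0] for r in conn.execute("""
    SELECT title FROM t ORDER BY maxvisited DESC, id DESC LIMIT 3
""")]
print(rows)  # ['Thanks', 'How?', 'News!']
```

The two rows with value 9 come first, then the higher-id row of the two 8s, matching the question's expected output.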
SELECT TOP 3 Title
From [TableName]
order by MaxVisited,Title desc
``` | Selecting three of Max() values of a field in sql server | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have 4 tables here, their relationship is like the following:

For 'band' table, I have 4 columns: id, band\_name, first\_name, last\_name. (first\_name and last\_name refer to columns in 'independent artist' table).
For 'independent artist' I have 3 columns: id, first\_name, last\_name.
For the 'act' table, I have 3 columns: id, performer, music. Here, for the performer, I want to refer to both the band and the independent artist table.
For 'music' table, I have 3 columns: id, song\_name, release\_year.
I also predict future changes here, for instance if a team member in a band wants to change his last name, it will affect both the 'band' and 'independent artist' table.
Intuitively I think my solution is not good, can anyone provide better design for the tables? Thanks. | Here's my suggestion:
Band table:
* band\_id PK
* band\_name
Artist table:
* artist\_id PK
* first\_name
* last\_name
Band\_members table:
* band\_id FK
* artist\_id FK
Acts table:
* act\_id PK
* band\_id FK
Music table:
* music\_id PK
* song\_name
* release\_year
Act\_Contents table:
* act\_id FK
* music\_id FK
There's no need for a separate `Independent Artists` table. Just treat them as a band with just one member. And you should always use an ID as the foreign key, so you don't have to worry about updating references if an attribute changes; you just change it in one place, and the references are unaffected. | You probably want to have a one-to-many relation between `band` and `artist` where a band may reference one or more artists (by id or artist). That way the first name, and last name stay in the `artist` table and are not duplicated in the `band` table.
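A minimal SQLite sketch of the suggested layout (data invented) shows the payoff: a rename touches exactly one artist row, and every band that includes the artist picks it up through the join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE band   (band_id INTEGER PRIMARY KEY, band_name TEXT);
    CREATE TABLE artist (artist_id INTEGER PRIMARY KEY,
                         first_name TEXT, last_name TEXT);
    CREATE TABLE band_members (
        band_id   INTEGER REFERENCES band(band_id),
        artist_id INTEGER REFERENCES artist(artist_id));
    INSERT INTO band VALUES (1, 'Duo'), (2, 'Solo Act');
    INSERT INTO artist VALUES (1, 'Ann', 'Lee'), (2, 'Bob', 'Ray');
    INSERT INTO band_members VALUES (1, 1), (1, 2), (2, 1);
""")

# A name change updates one row; both of artist 1's bands see the new name.
conn.execute("UPDATE artist SET last_name = 'Smith' WHERE artist_id = 1")
rows = conn.execute("""
    SELECT b.band_name, a.last_name
    FROM band_members m
    JOIN band b   ON b.band_id = m.band_id
    JOIN artist a ON a.artist_id = m.artist_id
    WHERE a.artist_id = 1
    ORDER BY b.band_id
""").fetchall()
print(rows)  # [('Duo', 'Smith'), ('Solo Act', 'Smith')]
```

An independent artist is simply a band with one member, so no second "performer" reference is needed anywhere.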
As for the other problem that you are having - you can't reference columns from more than one foreign table in a relationship. You'd have to model that by either creating two columns in the `Act` table, one that references `band` and one that references `artist` and add a constraint that only one may be non-null, or something else.
That said, you may want to [take a look at MusicBrainz](https://musicbrainz.org/doc/MusicBrainz_Database/Schema) for inspiration. I've worked with that database in the past and the project is being developed by lots of smart people that have worked on the model for a while. | How to make a single column refers to multiple tables | [
"",
"mysql",
"sql",
"database-design",
""
] |
I have a CLOB type column (called xml) that stores an xml document. The xml document has a tag called `<productionDate>`.
What I want to do is search the table and retrieve any row that contains an XML document whose productionDate is between two dates.
I know how to read the xml tag as a value:
```
Select
xmltype(xml).extract('//product/productionDate/text()').getStringVal() from myTable
```
The above query returns the following date.
```
1999-09-23 00:00:00.0 UTC
2000-01-18 00:00:00.0 UTC
2000-01-18 00:00:00.0 UTC
1999-11-02 00:00:00.0 UTC
1999-11-02 00:00:00.0 UTC
1999-11-02 00:00:00.0 UTC
1999-11-02 00:00:00.0 UTC
1999-11-02 00:00:00.0 UTC
1999-11-02 00:00:00.0 UTC
```
I want to only list the ones that are between 01-NOV-1999 and 01-JAN-2000. To do this I tried to cast the value to a DATE type:
```
Select
XMLCast(xmltype(xml).extract('//product/productionDate/text()').getStringVal() as DATE) from myTable
```
I get the following error:
```
ORA-01830: date format picture ends before converting entire input string
```
Presumably this is because the format of the date cannot be converted to a date format. What kind i do to read the date as a valid date (Ignoring the time and the text UTC) so that i can do a comparison on the date.
Maybe a substring might work but i want to hear what other more efficient options are available out there. | ## Poor-Man's Method for Optimizing Queries for File-Based Data Types
There are a number of ways to optimize storage and retrieval of file-based data types (CLOB/XML). If you're anchored to a relational database solution such as Oracle, here's some simple suggestions that use out-of-the-box functionality without having to buy, license or install additional add-ons and products.
### Initial Thoughts: Thinking About Traffic and Usage
It may help you to consider how often you need to dive into the XML files to extract their properties. It would also be important to consider how often the contents of these XML files change once they have been added to your CLOB-based table.
If the DML traffic involving your CLOB typed table isn't too heavy, consider building an assisting object which has the types of hooks that will help you find the right XML files when you need it. *Once you have it, you can divert query activity away from your source table except for specific Primary Key based inquiries.*
### Inline Function Conversions and Their Optimization
That being said, you could apply conversion functions on the fly and transform the date value every time you query it. There are even add-ons for Oracle that optimizes the database for querying and searching for values within LOB objects....
There is even an optimization convention called a *FUNCTION BASED INDEX*, which prompts Oracle to use an alternate reference (index) that also uses the TO\_DATE/SUBSTR function combinations you would need to convert the value in your XML document.
> *There are alternate methods to INLINE conversions, which are probably a lot better because an INLINE conversion is applied every time the query is called. In general, no conversion functions and data in their native format (i.e., date as date, datetime or timestamp) run much faster and with less cost to existing database resources.*
### Externally Tagging Your XML Documents
Assuming that the CLOB table with your XML files also has a primary key column, two ideas come to mind:
1. Create a second table with the converted values (such as **PRODUCTION\_DATE**), identify each date record with the PK ID of the CLOB that it came from. This table can be extended as you discover new attributes that are frequently accessed by your queries. Manage the second table by placing a trigger on the CLOB table.
Anytime a clob record is added, the data value is extracted and converted exactly once. If anything queries for a clob record that has not changed, there should be no need to extract and convert the value since the last time it was queried.
2. Create a Materialized View containing the Extraction and Conversion function so that the MView column for PRODUCTION\_DATE can be defined as a real DATE type. MViews also work like Idea (1), but a little more elegantly.
If it's possible a FAST refreshable view would work well as it manages your MView automatically and in near real-time response to changes in your CLOB table. A FAST refreshable view only updates the MView with changes and additions, so even if your source table is huge, daily operations are based on incremental effects only.
> Not knowing the volume, usage or stats on your data a COMPLETE refresh type MView may or may not be possible or efficient without additional compromises or assumptions. Sometimes MView query definitions are too complex to qualify for the FAST refreshable format.
In either case, you end up with a second object (MVIEW or TABLE) that contains a DATE formatted value for **PRODUCTION\_DATE**. Query this TAG table instead and use the associated PK value to identify which record in your CLOB table should be accessed once you've narrowed down the set or individual record that meets the query criteria.
### Parting Thoughts (Optional... sort of)
If it's possible to have a NULL PRODUCTION\_DATE value, it would be tempting to simply apply a NVL() inline function when querying the TAG supporting table. Depending on the volume of data in question, this probably shouldn't be a problem. Ideally, you'll want to always have a value in there... and a NOT NULL constraint on that DATE column... those kinds of things help the db optimizer make better assumptions about how to dive into the tables... especially if there are lots and lots of records in there. | You're getting the value as a string, so you should just be able to use use `to_date` or `to_timestamp`:
```
Select
to_timestamp(xmltype(xml).extract('//product/productionDate/text()').getStringVal(),
'YYYY-MM-DD HH24:MI:SS.FF "UTC"')
from myTable
```
[SQL Fiddle demo](http://sqlfiddle.com/#!4/d41d8/26159).
You could move it to a different time zone at the same time if you needed to.
Of course, given the date format you have, you could compare as strings and skip the conversion... | Oracle SQL - Selecting a date from an XML tag and casting it to a DATE type to allow comparison of the dates | [
"",
"sql",
"xml",
"oracle",
"xpath",
"oracle11g",
""
] |
So I have a multiple-select statement that goes like this.
The problem is that when one of the subselects does not return a row, the whole query returns no result.
How can I make it so that if a value does not exist, it comes back as NULL?
```
SELECT agent,
percentage1,
percentage2,
percentage3,
percentage4,
percentage5
FROM
(SELECT 'Agent 01' AS agent) AS agent,
(SELECT percentage AS percentage1
FROM APOS_QA_Scorecard..scorecard
WHERE mon = 'November'
AND dateMonitored = 4
AND agentName = 'Agent 01') AS percentage1,
(SELECT percentage AS percentage2
FROM APOS_QA_Scorecard..scorecard
WHERE mon = 'November'
AND dateMonitored = 5
AND agentName = 'Agent 01') AS percentage2,
(SELECT percentage AS percentage3
FROM APOS_QA_Scorecard..scorecard
WHERE mon = 'November'
AND dateMonitored = 6
AND agentName = 'Agent 01') AS percentage3,
(SELECT percentage AS percentage4
FROM APOS_QA_Scorecard..scorecard
WHERE mon = 'November'
AND dateMonitored = 7
AND agentName = 'Agent 01') AS percentage4,
(SELECT percentage AS percentage5
FROM APOS_QA_Scorecard..scorecard
WHERE mon = 'November'
AND dateMonitored = 8
AND agentName = 'Agent 01') AS percentage5
``` | The following query changes your multiple joins to conditional aggregations. This will fix the problem that if one of the conditions is not met, then you will still get results. Doing a `cross join` in your `from` clause will result in no record if any of the queries have no rows.
I also fixed the logic in a couple of other ways. The subquery to define your `agentName` is now joined to the rest, so the `AgentName` doesn't have to be repeated in the conditions. I also added a `group by`, so you can have more than one `AgentName` at a time, if you like:
```
select a.agentName,
sum(case when mon = 'November' and dateMonitored = 4
then percentage
end) as percentage1,
sum(case when mon = 'November' and dateMonitored = 5
then percentage
end) as percentage2,
sum(case when mon = 'November' and dateMonitored = 6
then percentage
end) as percentage3,
sum(case when mon = 'November' and dateMonitored = 7
then percentage
end) as percentage4,
sum(case when mon = 'November' and dateMonitored = 8
then percentage
end) as percentage5
from (select 'Agent 01' as agentName
) a left outer join
APOS_QA_Scorecard..scorecard sc
on a.agentName = sc.agentName
group by a.agentName;
``` | Try this instead
```
select agent,IsNull(p1.percentage,0.0) as Percentage1,
IsNull(p2.percentage,0.0) as Percentage2,
IsNull(p3.percentage,0.0) as Percentage3,
IsNull(p4.percentage,0.0) as Percentage4,
IsNull(p5.percentage,0.0) as Percentage5
from
(select 'Agent 01' as agent) as Agent
left join (select percentage as percentage
from APOS_QA_Scorecard..scorecard
where mon = 'November' and dateMonitored = 4) p1 on p1.agentName = agent.agent
left join (select percentage as percentage
from APOS_QA_Scorecard..scorecard
where mon = 'November' and dateMonitored = 5) p2 on p2.agentName = agent.agent
left join (select percentage as percentage
from APOS_QA_Scorecard..scorecard
where mon = 'November' and dateMonitored = 6) p3 on p3.agentName = agent.agent
left join (select percentage as percentage
from APOS_QA_Scorecard..scorecard
where mon = 'November' and dateMonitored = 7) p4 on p4.agentName = agent.agent
left join (select percentage as percentage
from APOS_QA_Scorecard..scorecard
where mon = 'November' and dateMonitored = 8) p5 on p5.agentName = agent.agent
``` | multiple select statement return a result even if one does not exist | [
"",
"sql",
"sql-server",
""
] |
I am performing an aggregation on a table; below is a snapshot of the Job table I am querying.

My query is similar to the one below. In reality, the above derived table will have many more columns as a result of multiple joins.
```
select Attribute
from Job
where Description = 'Installation' and Attribute = 'NPL'
group by Attribute
having sum(Cost) >= 500
```
I want to get the `JobID` of the matching records, as that is unique and I can use it for further joins. How can I get the JobIDs (which are the PK) of the matching records?
I am not sure if it's OK to do this:
```
select Attribute, JobID
from Job
where Description = 'Installation' and Attribute = 'NPL'
group by Attribute, JobID
having sum(Cost) >= 500
```
I am expecting a result like this (for the above scenario):
 | Grouping this way won't give you desired result :
```
Group by Attribute, JobID
```
with that, `JobID = 34` and `JobID = 39` will be in separate groups. One of several possible ways to achieve that "*expected result*" picture is using an `INNER JOIN` with a subquery:
```
select j.Attribute, j.JobID
from Job j
inner join
(select Attribute
from Job
        where Description = 'Installation' and Attribute = 'NPL'
group by Attribute
having sum(Cost) >= 500) a on a.Attribute = j.Attribute
where j.Description = 'Installation' and j.Attribute = 'NPL'
``` | You can use aggregates with window functions so that the original rows are preserved but you can reason about the aggregate value. You need to compute the aggregate in a subquery or CTE though:
```
;With Totals as (
select *,SUM(Cost) OVER (PARTITION BY Attribute) as TotalCost
from Job
where Description = 'Installation' and Attribute = 'NPL'
)
select * from Totals where TotalCost > 500
``` | Aggregation result with a unique column in SQL Server | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
Currently I am working with Grails and MySQL. I want to execute my own query, for example "SELECT fullName from User", when clicking an anchor tag.
This is the example that I worked out:
```
class InsertDataController {
def index() { }
def sample() {
InsertData insertData=new InsertData();
insertData.show();
}
}
```
Domain classes
```
import groovy.sql.Sql
class InsertData {
def dataSource
def show(){
System.out.println("demo show");
def sql = new Sql(dataSource)
def rows = sql.rows("SELECT fullName from User")
rows.each { row ->
System.out.println("demo data"+row.fullName);
log.debug row.fullName
}
sql.close()
}
class User {
String userName
String password
String fullName
String toString(){
"${fullName}"
}
static constraints = {
fullName();
userName(unique:true);
password(password:true);
}
}
```
Can anybody suggest how to solve this?
I want to know how to write the controller and model for this.
Thank you. | Instead of using `def sql = new Sql(dataSource)` in the domain class,
use `def sql = Sql.newInstance(dataSource)`
Your Domain class can be modified as
```
import groovy.sql.Sql
class InsertData {
def dataSource
def show(){
def sql = Sql.newInstance(dataSource)
sql.eachRow("SELECT fullName from User") {
println "the fullname is ${it.fullname}"
}
    }
}
```
And your Controller
```
class InsertDataController {
def index() { }
def sample() {
InsertData insertData=new InsertData();
insertData.show();
}
}
``` | The InsertData class would be a service (InsertDataService), which would be injected into the controller InsertDataController. The class User would be a domain class.
Controller class would then look like:
```
class InsertDataController {
def insertDataService
def index() { }
def sample() {
insertDataService.show()
}
}
``` | How to pass sql query in grails | [
"",
"sql",
"grails",
"grails-2.0",
"gsp",
""
] |
I've got two tables, which contain customer ages and heights.
```
Table: Ages
+-----------+------------+
|customerId | age |
+-----------+------------+
| 1 | 15 |
| 2 | 24 |
| 3 | 21 |
| 4 | 62 |
| 6 | 57 |
| 7 | 32 |
+-----------+------------+
Table: Heights
+-----------+------------+
|customerId | height |
+-----------+------------+
| 1 | 175 |
| 2 | 182 |
| 4 | 180 |
| 5 | 171 |
| 6 | 165 |
| 7 | 182 |
+-----------+------------+
```
I need to write a `SELECT` query that reads all the ages and heights. So something like this...
```
SELECT Ages.age, Heights.height
FROM Ages INNER JOIN Heights ON Ages.customerId=Heights.customerId;
```
**However** (and here's the twist) due to sloppy record-keeping, there are missing records from both tables. (e.g. customerId 5 in Ages, and customerId 3 in Heights).
Is there a way to write the query so that it will still work, but return zeros whenever the data is missing?
i.e.
```
+-----------+------------+------------+
|customerId | age | height |
+-----------+------------+------------+
| 1 | 15 | 175 |
| 2 | 24 | 182 |
| 3 | 21 | 0 |
| 4 | 62 | 180 |
| 5 | 0 | 171 |
| 6 | 57 | 165 |
| 7 | 32 | 182 |
+-----------+------------+------------+
``` | One way to go (there are others, as always):
```
select customerId, max(age), max(height)
from
(
select customerId, age, 0 as height from Ages
UNION
select customerId, 0 as age, height from heights
) s
group by customerId;
```
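The same trick ports directly to SQLite, so it is easy to verify locally; here is a runnable sqlite3 sketch using a reduced subset of the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Ages(customerId INTEGER, age INTEGER);
CREATE TABLE Heights(customerId INTEGER, height INTEGER);
INSERT INTO Ages VALUES (1, 15), (2, 24), (3, 21);
INSERT INTO Heights VALUES (1, 175), (2, 182), (4, 180);
""")
rows = cur.execute("""
SELECT customerId, MAX(age), MAX(height)
FROM (
    SELECT customerId, age, 0 AS height FROM Ages
    UNION
    SELECT customerId, 0 AS age, height FROM Heights
) AS s
GROUP BY customerId
ORDER BY customerId
""").fetchall()
print(rows)  # [(1, 15, 175), (2, 24, 182), (3, 21, 0), (4, 0, 180)]
```

Each customer contributes at most one row per source table, and `MAX()` picks the real value over the padded zero.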
see [SqlFiddle](http://sqlfiddle.com/#!2/3ca57/2) | MySql doesn't have full outer joins, but you can [simulate one](https://stackoverflow.com/questions/4796872/full-outer-join-in-mysql) with a `LEFT JOIN`, followed by a `RIGHT JOIN`, combined with a `UNION`, which will combine + eliminate duplicates:
```
SELECT Ages.age, COALESCE(Heights.height, 0)
FROM Ages
LEFT OUTER JOIN Heights ON Ages.customerId=Heights.customerId
UNION
SELECT COALESCE(Ages.age, 0), Heights.height
FROM Ages
RIGHT OUTER JOIN Heights ON Ages.customerId=Heights.customerId;
```
[SqlFiddle Here](http://sqlfiddle.com/#!2/fd4be4/1) | How can I make an SQL query that handles missing records? | [
"",
"mysql",
"sql",
"select",
"records",
""
] |
I need to obtain the rows which have the same column value from the following table. I tried the following way, but it gives me a single row only.
```
select *
from employee e
group by e.empsalary
having count(e.empsalary) > 1
```
**Table employee**

Please suggest a way. | You should be able to accomplish this by joining the table to itself:
```
SELECT
a.*
FROM
employee a
JOIN employee b
ON a.empsalary = b.empsalary
AND a.empid != b.empid
``` | Use an inner join with the query you posted:
```
select A.* from employee A
inner join (
select empsalary
from employee
group by empsalary
having count(*) > 1
) B ON A.empsalary = B.empsalary
``` | MySQL Select the rows having same column value | [
"",
"mysql",
"sql",
""
] |
I have a web form which has several `date` fields in it. I am using an Ajax calendar for the user to enter the date. I want to restrict the user from typing in extra info other than the date, or even prevent typing anything in the textbox apart from the value from the calendar extender. I am using ASP.NET, VB.NET and a SQL Server 2008 database.
My date format is `dd-MMM-yy`
I tried:
```
Enable = false
```
but it is not saving the date to the database.
Below is a sample of my code:
```
<asp:TextBox ID="TextBox5" runat="server" Width="150px" style="margin-top: 0px"
ReadOnly="False" CausesValidation="True"></asp:TextBox>
<asp:CalendarExtender ID="TextBox5_CalendarExtender" runat="server" Format="dd-MMM-yy" PopupButtonID="ImageButton1" TargetControlID="TextBox5">
</asp:CalendarExtender>
```
Appreciate your help
Thank you | First you'll need to set the textbox as 'read only' in code behind, usually in Page\_Load event like this:
```
TxtDate.Attributes.Add("readonly","readonly");
```
Next, modify the design of the textbox to be like this in markup:
```
<asp:TextBox ID="TxtDate" runat="server" Width="150px"></asp:TextBox>
<a href="JavaScript:clearText('<%=TxtDate.ClientID %>')">
<img alt="" src="../clear.png"></img></a>
<asp:CalendarExtender ID="CalendarExtender1" runat="server" Enabled="True"
Format="dd-MMM-yy" PopupPosition="Right" TargetControlID="TxtDate">
</asp:CalendarExtender>
```
And finally add this small script to clear the date:
```
<script type="text/javascript">
function clearText(ctrl) {
document.getElementById(ctrl).value = "";
    };
</script>
```
By doing this, you will have a textbox that pops up the calendar when clicked; it will not allow users to type in it, and when the user clicks the "clear" image associated with the textbox, it will clear the text. | Don't set `ReadOnly = True` or `Enabled = False`, as that will prevent the value from being posted back. You need to prevent user interaction while still allowing the value to be updated. One way is to prevent the field from obtaining focus; try adding this attribute:
```
onfocus = "this.blur();"
```
For more take a look at [this post](http://codecorner.galanter.net/2009/10/09/postback-disabled-textbox/). | Textbox should only accept date | [
"",
"asp.net",
"sql",
"vb.net",
"sql-server-2008",
""
] |
How can I update one record based on some field? I've already created a cursor to loop through records with the same ID, but I need to update the activeFlag for the record with the most current startDate.
```
someID startDate endDate activeFlag
6 2011-07-25 9999-12-31 0
6 2013-09-22 9999-12-31 0
```
In this example, I want to set activeFlag=1 for the record with a startDate of 09.22.2013. | If you order by the `startDate`, you can limit the update to the first record:
```
update your_table
set activeFlag = 1
order by startDate desc
limit 1
``` | If I'm understanding your criteria correctly, you can use an update query with a sub query to grab the latest (or maximum) start date value.
```
UPDATE myTable
SET activeFlag = 1
WHERE startDate = (SELECT MAX(startDate) FROM myTable);
```
However, if there are multiple rows with the same startDate, they'll all get updated. You could apply a `LIMIT 1` if you need only one record updated. | Evaluate Like Records | [
"",
"sql",
""
] |
```
SELECT wavelength,
(LEAD(wavelength) OVER (ORDER BY wavelength) - wavelength)/
(LEAD(reflectance) OVER (ORDER BY wavelength) - reflectance)
AS reflectance
FROM grassland1
```
It shows error like this in Postgresql
```
ERROR: division by zero
********** Error **********
ERROR: division by zero
SQL state: 22012
```
How can I correct this? Please suggest a fix. | Try this:
```
SELECT wavelength,
(LEAD(wavelength) OVER (ORDER BY wavelength) - wavelength)/
       (CASE WHEN (LEAD(reflectance) OVER (ORDER BY wavelength) - reflectance) = 0 THEN 1
ELSE (LEAD(reflectance) OVER (ORDER BY wavelength) - reflectance) END)
AS reflectance
FROM grassland1
``` | Use `case` then
```
SELECT wavelength,
case when (LEAD(reflectance) OVER (ORDER BY wavelength) - reflectance)=0
then 0
else ((LEAD(wavelength) OVER (ORDER BY wavelength) - wavelength)/
(LEAD(reflectance) OVER (ORDER BY wavelength) - reflectance))
end reflectance
FROM grassland1
``` | PostgreSQL division by zero when ordering column | [
"",
"sql",
"postgresql",
""
] |
I found many stackoverflow QnAs about consecutive days.
Still, the answers are too short for me to understand what's going on.
For concreteness, I'll make up a model (or a table)
(I'm using postgresql if it makes a difference.)
```
CREATE TABLE work (
id integer NOT NULL,
user_id integer NOT NULL,
arrived_at timestamp with time zone NOT NULL
);
insert into work(user_id, arrived_at) values(1, '01/03/2011');
insert into work(user_id, arrived_at) values(1, '01/04/2011');
```
1. (In simplest form) For a given user, I want to find the last-consecutive date range.
2. (My ultimate goal) For a given user, I want to find his consecutive working days.
If he came to work yesterday, he still(as of today) has chance of working consecutive days. So I show him consecutive days upto yesterday.
But if he missed yesterday, his consecutive days is either 0 or 1 depending on whether he came today or not.
Say today is 8th day.
```
3 * 5 6 7 * = 3 days (5 to 7)
3 * 5 6 7 8 = 4 days (5 to 8)
3 4 5 * 7 * = 1 day (7 to 7)
3 * * * * * = 0 day
3 * * * * 8 = 1 day (8 to 8)
``` | Here is my solution to this problem using `CTE`
```
WITH RECURSIVE CTE(attendanceDate)
AS
(
SELECT * FROM
(
SELECT attendanceDate FROM attendance WHERE attendanceDate = current_date
OR attendanceDate = current_date - INTERVAL '1 day'
ORDER BY attendanceDate DESC
LIMIT 1
) tab
UNION ALL
SELECT a.attendanceDate FROM attendance a
INNER JOIN CTE c
ON a.attendanceDate = c.attendanceDate - INTERVAL '1 day'
)
SELECT COUNT(*) FROM CTE;
```
Check the code at [SQL Fiddle](http://www.sqlfiddle.com/#!15/6cccf/1)
Here is how the query is working:
1. It selects today's record from `attendance` table. If today's record is not available then it selects yesterday's record
2. It then recursively keeps adding the record from the day before the least date found so far
If you want to select the latest consecutive date range irrespective of when the user's latest attendance was (today, yesterday or x days before), then the initialization part of the CTE must be replaced by the snippet below:
```
SELECT MAX(attendanceDate) FROM attendance
```
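The same recursion also runs under SQLite, which makes it easy to experiment with locally. A self-contained sketch with made-up dates ("today" is assumed to be 2014-01-08, so the expected streak is the three days Jan 5-7):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE attendance(attendanceDate TEXT);
INSERT INTO attendance VALUES ('2014-01-05'), ('2014-01-06'), ('2014-01-07');
""")
# Anchor on today's or yesterday's row, then walk backwards one day at a time.
streak = cur.execute("""
WITH RECURSIVE cte(d) AS (
    SELECT d FROM (SELECT attendanceDate AS d FROM attendance
                   WHERE attendanceDate IN ('2014-01-08', '2014-01-07')
                   ORDER BY attendanceDate DESC LIMIT 1) AS anchor
    UNION ALL
    SELECT a.attendanceDate FROM attendance a
    JOIN cte c ON a.attendanceDate = date(c.d, '-1 day')
)
SELECT COUNT(*) FROM cte
""").fetchone()[0]
print(streak)  # 3
```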
[EDIT]
Here is query at SQL Fiddle which resolves your question#1: [SQL Fiddle](http://www.sqlfiddle.com/#!15/ea98a/1) | You can create an aggregate with the range types:
```
Create function sfunc (tstzrange, timestamptz)
returns tstzrange
language sql strict as $$
select case when $2 - upper($1) <= '1 day'::interval
then tstzrange(lower($1), $2, '[]')
else tstzrange($2, $2, '[]') end
$$;
Create aggregate consecutive (timestamptz) (
sfunc = sfunc,
stype = tstzrange,
initcond = '[,]'
);
```
Use the aggregate with the right order to get the consecutive day range for the last arrived\_at:
```
Select user_id, consecutive(arrived_at order by arrived_at)
from work
group by user_id;
┌─────────┬─────────────────────────────────────────────────────┐
│ user_id │ consecutive │
├─────────┼─────────────────────────────────────────────────────┤
│ 1 │ ["2011-01-03 00:00:00+02","2011-01-05 00:00:00+02"] │
│ 2 │ ["2011-01-06 00:00:00+02","2011-01-06 00:00:00+02"] │
└─────────┴─────────────────────────────────────────────────────┘
```
Use the aggregate in a window function:
```
Select *,
consecutive(arrived_at)
over (partition by user_id order by arrived_at)
from work;
┌────┬─────────┬────────────────────────┬─────────────────────────────────────────────────────┐
│ id │ user_id │ arrived_at │ consecutive │
├────┼─────────┼────────────────────────┼─────────────────────────────────────────────────────┤
│ 1 │ 1 │ 2011-01-03 00:00:00+02 │ ["2011-01-03 00:00:00+02","2011-01-03 00:00:00+02"] │
│ 2 │ 1 │ 2011-01-04 00:00:00+02 │ ["2011-01-03 00:00:00+02","2011-01-04 00:00:00+02"] │
│ 3 │ 1 │ 2011-01-05 00:00:00+02 │ ["2011-01-03 00:00:00+02","2011-01-05 00:00:00+02"] │
│ 4 │ 2 │ 2011-01-06 00:00:00+02 │ ["2011-01-06 00:00:00+02","2011-01-06 00:00:00+02"] │
└────┴─────────┴────────────────────────┴─────────────────────────────────────────────────────┘
```
Query the results to find what you need:
```
With work_detail as (select *,
consecutive(arrived_at)
over (partition by user_id order by arrived_at)
from work)
select arrived_at, upper(consecutive) - lower(consecutive) as days
from work_detail
where user_id = 1 and upper(consecutive) != lower(consecutive)
order by arrived_at desc
limit 1;
┌────────────────────────┬────────┐
│ arrived_at │ days │
├────────────────────────┼────────┤
│ 2011-01-05 00:00:00+02 │ 2 days │
└────────────────────────┴────────┘
``` | consecutive days in sql | [
"",
"sql",
"postgresql",
""
] |
The company I work for has started a new initiative in HL7 where we are trading both v2X and v3 (CDA specifically) messages. I am at the point where I am able to accept, validate and acknowledge the messages we are receiving from our trading partners and have started to create a data model for the backend storage of said messages. After a lot of consideration and research I am at a loss for the best way to approach this in MS SQL Server 2008 R2.
Currently my idea is to essentially load the data into a data warehouse directly from my integration engine (BizTalk) and foregoing a backing, normalized operational database. I have set up the database for v2X messages according to the v2.7 specs as all versions of HL7 v2 are backward compatible (I can store any previous versions in the same database). My initial design has a table for each segment which will tie back to a header table with a guid I am generating and storing at run time. The biggest issue with this approach is the amount of columns in each table and it's something I have no experience with. For instance the PV1 segment has 569 columns in order to accommodate all possible data. In addition to this I need to make all columns varchar and make them big enough to house any possible customization scenario from our vendors. I am planning on using varchar(1024) to achieve this. A lot of these columns (the majority probably) would be NULL so I would use SPARSE columns. This screams bad design to me but fully normalizing these tables would require a ton of work in both BizTalk and SQL server and I'm not sure what I would gain from doing so. I'm trying to be pragmatic since I have a deadline.
If fully normalized, I would essentially have to create stored procs that would have a ton of parameters OR split these messages to the nth degree to do individual loads into the smaller subtables and make sure they all correlate back to the original guid. I would also want to maintain ACID processing which could get tricky and cause a lot of overhead in BizTalk. I suppose a 3rd option would be to use nHapi to create objects out of the messages I could tie into with Entity Framework but nHapi seems like a dead project and I have no experience with Entity Framework as of right now.
I'm basically at a loss and need help from some industry professionals who have experience with HL7 data modeling. Is it worth the extra effort to fully normalize the tables? Will performance on the SQL side be abysmal if I use these denormalized segment tables with hundreds of columns (most of which will be NULL for each row)? I'm not a DBA so I'm trying to understand the pitfalls of each approach. I've also looked at RIMBAA but the HL7 RIM seems like a foreign language to me as an HL7 newbie and translating v2 messages to the RIM would probably take far longer than I have to complete this project. I'm hoping I'm overthinking this and there is a simpler solution staring me in the face. Hopefully this question isn't too open ended. | I would under no circumstances attempt to model anything using the HL7 v3 RIM. The reason is that this schema is very generic, deferring much of the metadata to the message itself. Are you familiar with an EAV table? The RIM is like that.
On the other hand, HL7 v2 should be a fairly simple basis for a DB schema. You can create tables around segment types, and columns around field names.
I think the problem of pulling in everything kills the project and you should not do it. Typically, HL7 v2 messages carry a small subset of the whole, so it would be an utter waste to build out the whole thing, and it would be very confusing.
Further, the version of v2 you model would impact your schemas dramatically, with later versions, more and more fields become repeating fields, and your join relationships would change.
I recommend that you put a stake in the sand and start with v2.4 which is pretty easy yet still more complicated than most interfaces actually in use. Focus on a few segments and a few fields. MSH and PID first.
Add an EAV table to capture what may come in that you don't yet have in your tables. You can then look at what comes into this table over time and use it to decide what to build next. Your EAV could look like this: MSG\_ID, SEGMENT, SET\_ID, FIELD\_NAME, FIELD\_VALUE. Just store the unparsed HL7 contents of the field value. | HL7 is not a "tight" standard; inputs and expected outputs vary depending on the system you are talking to. In this case, adding a broker such as Mirth, Rhapsody or BizTalk is a very good idea.
Whatever solution you employ, make sure you can cope with "non-standard" input and output, as you will soon find things vary. On HL7 versions 2X and 3, be aware that very few hospitals have version 3; most still run 2X.
I have been down the road of working with a database that tried to follow the HL7 structure; it can work, however it will take time and effort. Given that you have a tight deadline, maybe break out the bits of the data you will need to search on into their own fields (e.g. PID segment 3, the patient ID, would be useful to have); the rest can go in your varchar. Also, if you are not indexing on the column, you could use varchar(max).
As for your Guids in the database, this can work fine, but be careful not to cluster any indexes using the Guid as this will fragment your data. Do your research here and if in doubt go for identity columns instead.
I'll recommend the entity framework too, excellent ORM, well worth learning.
So my overall advice: go for a hybrid for now, breaking out what you need. Expect it to evolve over time, breaking out the pieces of HL7 into their own areas as needed. Do write a generic HL7 parser (not too difficult; I've done it a couple of times) and keep it flexible. But most of all, expect the HL7 to vary in structure; don't treat the specification as 100% truth, as you will get variations. | HL7 v2X and v3 data modeling | [
"",
"sql",
"sql-server",
"hl7",
"hl7-cda",
"hl7-v2",
""
] |
I have `SQL` statement as follows:
```
Select TableB.section
from TableA left join TableB on TableA.fk = TableB.pk
group by TableB.section
```
Since this is `left join`, there is a chance that `TableB.Section` can return a *null* because a row of `TableA.fk` could be *null* and so the group by clause will return *null* for that row.
So there is a chance in which I will get something like this:
```
TableB.Section
-------------
| Section 1 |
+-----------+
| Section 2 |
+-----------+
| NULL |
+-----------+
```
How can I replace the word `NULL` with nicer text like "No value" within my SQL statement? | For *Oracle*, use `NVL()` as:
```
Select NVL(TableB.section, 'nice text')
from TableA left join TableB on TableA.fk = TableB.pk
group by NVL(TableB.section, 'nice text')
```
For *SQL Server* you can use `ISNULL(TableB.section, 'nice text')` or `COALESCE(TableB.section, 'nice text')`
I suppose there is also `IFNULL()` in *MySql*.
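`COALESCE()` is portable enough that the idea can be demonstrated with SQLite from Python (the schema below is improvised from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE TableB(pk INTEGER PRIMARY KEY, section TEXT);
CREATE TABLE TableA(fk INTEGER);
INSERT INTO TableB VALUES (1, 'Section 1'), (2, 'Section 2');
INSERT INTO TableA VALUES (1), (2), (NULL);
""")
# The unmatched row from the LEFT JOIN becomes 'No value' instead of NULL.
rows = cur.execute("""
SELECT COALESCE(TableB.section, 'No value')
FROM TableA LEFT JOIN TableB ON TableA.fk = TableB.pk
GROUP BY COALESCE(TableB.section, 'No value')
ORDER BY 1
""").fetchall()
print(rows)  # [('No value',), ('Section 1',), ('Section 2',)]
```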
Edit: As [OracleUser](https://stackoverflow.com/questions/22149900/giving-null-values-a-label-in-sql/22150141#22150141)'s answer states, Oracle requires `NVL()` in `group by` as well. | You need to apply `NVL()`/`COALESCE()` in the `GROUP BY` clause as well!
Otherwise it would be a syntax error.
```
Select NVL(TableB.section, 'No Value')
from TableA left join TableB on TableA.fk = TableB.pk
group by NVL(TableB.section, 'No Value')
``` | Giving null values a label in SQL | [
"",
"sql",
"oracle",
""
] |
Having this kind of SQL Statement, how can I remove milliseconds from the 'date\_created' column?
```
SELECT sales.id,
sales.filename,
account.description customer,
sales.uploaded,
sales.date_created
FROM account
JOIN salesStore sales
ON sales.customer_id = account.id
WHERE sales.date_created::date BETWEEN '2014-02-27'
AND '2014-03-01'
ORDER BY
sales.date_created DESC
``` | Try this
```
DATE_TRUNC('second', sales.date_created);
```
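As an aside (not from the original answer), if you post-process rows in Python instead, the equivalent second-level truncation is:

```python
from datetime import datetime

ts = datetime(2014, 2, 27, 12, 34, 56, 789123)
# Dropping the microseconds has the same effect as date_trunc('second', ...).
truncated = ts.replace(microsecond=0)
print(truncated.isoformat())  # 2014-02-27T12:34:56
```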
See the docs for [DATE\_TRUNC](http://www.postgresql.org/docs/current/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC).
[postgresql doc](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC) | [The manual:](https://www.postgresql.org/docs/current/datatype-datetime.html)
> `time`, `timestamp`, and `interval` accept an optional precision value *`p`*
> which specifies the number of fractional digits retained in the
> seconds field. By default, there is no explicit bound on precision.
> The allowed range of *`p`* is from 0 to 6 for the `timestamp` and `interval`
> types.
You can cast to `timestamp(0)` (or `timestamptz(0)`) to remove fractional seconds:
```
date_created::timestamp(0)
```
This ***rounds*** to full seconds - as opposed to [`date_trunc()`](http://www.postgresql.org/docs/current/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC) which ***truncates***. | Remove milliseconds in SQL commands | [
"",
"sql",
"postgresql",
"timestamp",
""
] |
I have a table called `branch`
It looks something like.
```
+----------------+--------------+
| branch_id | branch_name |
+----------------+--------------+
| 1 | TestBranch1 |
| 2 | TestBranch2 |
+----------------+--------------+
```
I've set the `branch_id` as primary key.
Now my question is related to the next table called `item`
It looks like this.
```
+----------------+-----------+---------------------------+
| branch_id | item_id | item_name |
+----------------+-----------+---------------------------+
| 1 | 1 | Apple |
| 1 | 2 | Ball |
| 2 | 1 | Totally Difference Apple |
| 2 | 2 | Apple Apple 2 |
+----------------+-----------+---------------------------+
```
I'd like to know if I need to create a primary key for my `item` table?
**UPDATE**
They do not share the same items. Sorry for the confusion. A branch can create a product that doesn't exist in the other branch. They are like two stores sharing the same database.
**UPDATE**
Sorry for the incomplete information.
These tables are actually from two local databases...
I'm trying to create a database that can exist on its own but would still have no problem when mixed with another. So the system would just append all the item data from another branch without mixing them up. The branches don't take the `item_id` of the other branches into consideration when generating a `unique_id` for their items. All the databases, however, may share the same `branch` table as reference.
Thank you guys in advance. | > I'd like to know if I need to create a primary key for my item table?
You always1 need a key2, whether the table is involved in a relationship3 or not. The only question is what *kind* of key?
Here are your options in this case:
1. Make `{item_id}` alone a key. This makes the relationship "non-identifying" and `item` a "strong" entity...
* Which produces a slimmer key (compared to the second option), therefore any child tables that may reference it are slimmer.
* Any ON UPDATE CASCADE actions are cut-off at the level of the `item` and not propagated to children.
* May play better with ORMs.
2. Make a composite4 key on `{branch_id, item_no}`. This makes the relationship "identifying" and `item` a "weak" entity...
* Which makes `item` itself slimmer (one less index).
* Which may be very useful for [clustering](http://use-the-index-luke.com/sql/clustering/index-organized-clustered-index).
* May help you avoid a JOIN in some cases (if there are child tables, `branch_id` is propagated to them).
* May be necessary for correctly modelling ["diamond-shaped" dependencies](https://stackoverflow.com/a/18164242/533120).
So pick your poison ;)
Of course, `branch_id` is a foreign key (but not key) in both cases.
And orthogonal to all that, if `item_name` has to be unique per-branch (as opposed to per whole table), you need a composite key on `{branch_id, item_name}` as well.
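To make option 2 concrete, here is a minimal SQLite sketch using the question's sample data (the exact column types are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Composite ("identifying") key on (branch_id, item_id), plus per-branch unique names.
cur.executescript("""
CREATE TABLE branch(branch_id INTEGER PRIMARY KEY, branch_name TEXT);
CREATE TABLE item(
    branch_id INTEGER NOT NULL REFERENCES branch(branch_id),
    item_id   INTEGER NOT NULL,
    item_name TEXT NOT NULL,
    PRIMARY KEY (branch_id, item_id),
    UNIQUE (branch_id, item_name)
);
INSERT INTO branch VALUES (1, 'TestBranch1'), (2, 'TestBranch2');
INSERT INTO item VALUES (1, 1, 'Apple'), (2, 1, 'Totally Difference Apple');
""")
try:
    # Same (branch_id, item_id) pair as an existing row -> rejected by the key.
    cur.execute("INSERT INTO item VALUES (1, 1, 'Clash')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # False
```

Note that the same `item_id` can still appear under a different `branch_id`, which is exactly the per-branch numbering the question describes.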
---
*1 From the **logical** perspective, you always need a key, otherwise your table would be a multiset, therefore not a relation (which is a set), therefore your database would no longer be "relational". From the **physical** perspective, there may be some special cases for breaking this rule, but they are rare.*
*2 Whether its primary or not is [immaterial](https://stackoverflow.com/a/9798682/533120) from the logical standpoint, although it may be important if the DBMS ascribes a special meaning to it, such is the case with InnoDB which [uses primary key as clustering key](http://www.ovaistariq.net/521/understanding-innodb-clustered-indexes/).*
*3 Please make a distinction between "relation" and "relationship".*
*4 Aka. "compound".* | Yes you do. The Primary key is what allows the many to one relationship to exist.
This requirement is already catered for by the branch\_id column.
The item\_id column is not required for the one-to-many relationship in your example. | Do I need a Primary Key If I'm using 1 to Many Relationship? | [
"",
"mysql",
"sql",
"database",
"database-design",
"primary-key",
""
] |
I am running a SBS 2008 with SQL Server 2008 R2
MYOB EXONET SQL Database
I am trying to find out how much our top customers are spending per month, but the below query will not return a zero value if the customer has not spent anything. I imagine that it does not give me a zero result as there is nothing to add up in the database; however, I still need it to produce a zero result, as otherwise I can't find out which customers are not spending.
```
SELECT
DR.ACCNO,
SUM(Analytics_SaleInvoice.SalesValueTaxExclusive)
FROM
DR_ACCS DR
INNER JOIN
Analytics_SaleInvoice Analytics_SaleInvoice ON (Analytics_SaleInvoice.AccountNumberAnalysis = DR.ACCNO)
WHERE
(DR.X_TOPCUSTOMER = 'Y')
AND (Analytics_SaleInvoice.AgePeriod = 5)
GROUP BY
DR.ACCNO, DR.ACCNO,Analytics_SaleInvoice.AccountNumberAnalysis
```
Does anyone have any idea how I can get a zero result? | You need `left outer join` so you get all customers:
```
SELECT DR.ACCNO,
COALESCE(SUM(Analytics_SaleInvoice.SalesValueTaxExclusive), 0) as Total
FROM DR_ACCS DR LEFT OUTER JOIN
Analytics_SaleInvoice Analytics_SaleInvoice
ON Analytics_SaleInvoice.AccountNumberAnalysis = DR.ACCNO AND
Analytics_SaleInvoice.AgePeriod = 5
WHERE DR.X_TOPCUSTOMER = 'Y'
GROUP BY DR.ACCNO, DR.ACCNO,Analytics_SaleInvoice.AccountNumberAnalysis
ORDER BY COALESCE(SUM(Analytics_SaleInvoice.SalesValueTaxExclusive), 0) DESC;
```
Along with the `left outer join`, the condition on the second table needs to be moved to the `on` clause and the `coalesce()` is used to turn the `NULL` value into a `0`.
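The point about moving the filter into the `ON` clause can be sanity-checked with a tiny sqlite3 example (a hypothetical mini-schema, not the MYOB tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers(accno TEXT);
CREATE TABLE sales(accno TEXT, age_period INTEGER, amount REAL);
INSERT INTO customers VALUES ('A'), ('B');
INSERT INTO sales VALUES ('A', 5, 100.0);  -- B has no period-5 sales
""")
# Filter in the ON clause: B survives the outer join with a 0 total.
good = cur.execute("""
SELECT c.accno, COALESCE(SUM(s.amount), 0)
FROM customers c
LEFT JOIN sales s ON s.accno = c.accno AND s.age_period = 5
GROUP BY c.accno ORDER BY c.accno
""").fetchall()
# Filter in the WHERE clause: the NULLs produced by the outer join fail it,
# so B disappears from the result entirely.
bad = cur.execute("""
SELECT c.accno, COALESCE(SUM(s.amount), 0)
FROM customers c
LEFT JOIN sales s ON s.accno = c.accno
WHERE s.age_period = 5
GROUP BY c.accno ORDER BY c.accno
""").fetchall()
print(good)  # [('A', 100.0), ('B', 0)]
print(bad)   # [('A', 100.0)]
```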
I also added an `order by` because you seem to want the best customers first. | Try a `LEFT JOIN` instead:
```
SELECT
DR.ACCNO,
ISNULL(SUM(Analytics_SaleInvoice.SalesValueTaxExclusive), 0)
FROM DR_ACCS DR
LEFT JOIN Analytics_SaleInvoice Analytics_SaleInvoice
ON (Analytics_SaleInvoice.AccountNumberAnalysis = DR.ACCNO)
AND (Analytics_SaleInvoice.AgePeriod = 5)
WHERE (DR.X_TOPCUSTOMER = 'Y')
GROUP BY DR.ACCNO, DR.ACCNO,Analytics_SaleInvoice.AccountNumberAnalysis
``` | Display zero value in place of null in SQL Server 2008 R2 | [
"",
"sql",
"sql-server-2008-r2",
""
] |
Here's the scenario...
Table CLIENT
```
CLIENT NAME
------ ----
123 Smith
456 Jones
```
Table CLIENT\_ADDRESS
```
CLIENT ADDRESS
------ -------
000 100 MAIN ST (this is a default address)
123 999 ELM ST
```
I want my result set to look like this...
```
CLIENT NAME ADDRESS
------ ---- -------
123 Smith 999 ELM ST
456 Jones 100 MAIN ST (pulls the default address since none found for client)
```
Can I do this in a single join?
Obviously this join
```
SELECT A.CLIENT, A.NAME, B.ADDRESS
FROM CLIENT A
LEFT OUTER JOIN CLIENT_ADDRESS B
ON A.CLIENT = B.CLIENT
```
Won't return the default address for client 456.
How would I alter this join to accomplish that? | Not in a single join, but with two joins, and coalesce
```
SELECT A.CLIENT, A.NAME,
coalesce(b.ADDRESS, d.Address) Address
FROM CLIENT A
LEFT JOIN CLIENT_ADDRESS B
ON A.CLIENT = B.CLIENT
LEFT JOIN CLIENT_ADDRESS D
ON d.CLIENT = '000'
``` | You'll need another join:
```
SELECT
A.CLIENT
,A.NAME
,COALESCE(B.ADDRESS, C.ADDRESS)
FROM CLIENT A
LEFT JOIN CLIENT_ADDRESS B
ON A.CLIENT = B.CLIENT
LEFT JOIN CLIENT_ADDRESS C
ON C.CLIENT = '000'
``` | Can I do a "conditional" join in SQL | [
"",
"sql",
"db2",
""
] |
Hi, I am looking to carry out a query where I have 3 different countries specified, and I'd like to write a WHERE statement that finds all rows where the country is not 'United Kingdom'. Any ideas how to do this? | ```
SELECT *
FROM mytable
WHERE country <> 'United Kingdom'
``` | You can use `NOT IN`:
```
SELECT *
FROM country
WHERE name NOT IN('United Kingdom')
``` | SQL Query WHERE is not | [
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
I can calculate future pay dates with the following, knowing that 1/3/2013 was a pay day, and that we are paid every two weeks. This code tells me that 12/20/2013 was a payday. That's accurate.
```
declare @StartDate datetime = '1/4/13'
declare @FromDate datetime = '12/15/' + cast(year(getdate()) - 1 as char(4))
select dateadd(day, 14*cast(datediff(day, @StartDate, @FromDate) / 14 + 1 as int), @StartDate)
```
What I *need* to do, is calculate the last payday of the year. If I set @FromDate to 12/15/2013, this works, but I don't believe I can rely on that in cases where December is a three check month. My preference is to set the FromDate to 12/01/YYYY and get the last payday of that year, but I cannot seem to wrap my mind around how to pull that off.
How do I need to alter this to get the last Payday (always a Friday) of the year?
Thanks in advance for the forthcoming suggestions. | considering that it will always be either the last friday or the penultimate friday of the year, this should work:
```
declare @startdate datetime='20140110'
declare @lastdayofyear datetime='20141231'
select
case datediff(WEEK,@startdate,dateadd(day,6-datepart(w,@lastdayofyear)-7,@lastdayofyear)) % 2
when 0 then
    dateadd(day,6-datepart(w,@lastdayofyear)-7,@lastdayofyear)
else
    dateadd(day,6-datepart(w,@lastdayofyear)-14,@lastdayofyear)
end
``` | This could be done fairly easily with a [date dimension table](http://michaelmorley.name/how-to/create-date-dimension-table-in-sql-server). Using the example in the link, a query would look like this:
```
SELECT TOP 1 [Date] FROM DimDate
WHERE DATEPART(dw, [Date]) = 6 --is Friday
AND DATEPART(wk, [Date]) % 2 = 0--Is an even week
AND YEAR([Date]) = YEAR(getdate())--Current year
ORDER BY [Date] DESC
``` | T-SQL Getting the Last Friday Payday of the year | [
"",
"sql",
"sql-server",
""
] |
I have data in database as follows:
```
+---------------------+-------------+
| CloseTime | Count |
+---------------------+-------------+
| 10.2.2014 13:19 | 1 |
| 5.12.2014 13:19 | 1 |
| 4.2.2014 13:19 | 1 |
| 2.1.2014 13:19 | 1 |
| 4.12.2014 13:19 | 1 |
+---------------------+-------------+
```
Now, I have the query like this
```
SELECT DATENAME(MONTH, CLOSETIME) AS CLOSEMONTH, COUNT(*) as CNT
FROM TABLE
WHERE CLOSETIME >= DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE())-7, '19000101')
AND CLOSETIME < DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101')
GROUP BY DATENAME(MONTH, CLOSETIME);
```
which result in
```
+---------------+-------+
| CloseTime | CNT |
+---------------+-------+
| February | 2 |
| January | 1 |
| December | 2 |
+---------------+-------+
```
The query works in the way that it takes data for last 7 months. I would need the result to be
```
+---------------+-------+
| CloseTime | Count |
+---------------+-------+
| February | 2 |
| January | 1 |
| December | 2 |
| November | 0 |
| October | 0 |
| September | 0 |
| August | 0 |
+---------------+-------+
```
so if there are no records with a CloseTime in some of those months, it should still return those months with a zero count. How can I achieve this? I would prefer a SQL-only solution, meaning no procedures or scripts.
I tried to
```
SELECT DATENAME(MONTH,DATEADD(MONTH,-1,GETDATE())) as HELP_MONTH
UNION ALL
...
```
and then join it, but it was not successful. | OK, this worked for me:
```
WITH HM (HELP_MONTH) AS (
SELECT DATENAME(MONTH,DATEADD(MONTH,-1,GETDATE())) AS HELP_MONTH
UNION ALL
SELECT DATENAME(MONTH,DATEADD(MONTH,-2,GETDATE())) AS HELP_MONTH
UNION ALL
SELECT DATENAME(MONTH,DATEADD(MONTH,-3,GETDATE())) AS HELP_MONTH
UNION ALL
SELECT DATENAME(MONTH,DATEADD(MONTH,-4,GETDATE())) AS HELP_MONTH
UNION ALL
SELECT DATENAME(MONTH,DATEADD(MONTH,-5,GETDATE())) AS HELP_MONTH
UNION ALL
SELECT DATENAME(MONTH,DATEADD(MONTH,-6,GETDATE())) AS HELP_MONTH
UNION ALL
SELECT DATENAME(MONTH,DATEADD(MONTH,-7,GETDATE())) AS HELP_MONTH)
SELECT HELP_MONTH
      ,COUNT(CLOSETIME) AS CNT
FROM HM
LEFT JOIN YOUR_TABLE ON HM.HELP_MONTH = DATENAME(MONTH, CLOSETIME)
AND CLOSETIME >= DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE())-7, '19000101')
AND CLOSETIME < DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101')
GROUP BY DATENAME(MONTH, CLOSETIME), HELP_MONTH
``` | Using the same query with a `HAVING` clause added will also return records having a 0 count.
```
SELECT DATENAME(MONTH, CLOSETIME) AS CLOSEMONTH, COUNT(*) as CNT
FROM TABLE
WHERE CLOSETIME >= DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE())-7, '19000101')
AND CLOSETIME < DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101')
GROUP BY DATENAME(MONTH, CLOSETIME)
HAVING COUNT(*) >= 0
```
Thanks. | Count for months with no records | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have the following situation 3 tables, person, job, category.
Person has a job and Job has a category. How do I get the person records from the same category.
```
public List<Person> findPplByCategory(Category category) {
javax.persistence.criteria.CriteriaBuilder cb = getEntityManager().getCriteriaBuilder();
javax.persistence.criteria.CriteriaQuery cq = getEntityManager().getCriteriaBuilder().createQuery();
javax.persistence.criteria.Root<Person> e = cq.from(Person.class);
javax.persistence.criteria.Root<Job> a = cq.from(Job.class);
//...not sure how to create the query here..
}
``` | ```
javax.persistence.criteria.CriteriaBuilder cb = getEntityManager().getCriteriaBuilder();
javax.persistence.criteria.CriteriaQuery cq = getEntityManager().getCriteriaBuilder().createQuery();
javax.persistence.criteria.Root<Person> e = cq.from(Person.class);
cq.where(cb.equal(e.get("idCompany").get("idCategory"), cat));
cq.orderBy(cb.desc(e.get("idPerson")));
return getEntityManager().createQuery(cq).getResultList();
``` | This is easy to do using a JPQL query
```
String query = "select P from Person P join P.job J join J.category C where C = :cat"
List<Person> result = entityManager.createQuery(query, Person.class).setParameter("cat", myCategory).getResultList();
```
Some assumptions:
* One Person has one Job
* One Job has one Category
* Person's job field is named `job`
* Job's category field is named `category` | JPA How to build a join criteria query | [
"",
"mysql",
"sql",
"jpa",
"eclipselink",
""
] |
Every question I search for about the warning
> Warning: Null value is eliminated by an aggregate or other SET operation.
Typically people want to treat the NULL values as 0. I want the opposite, how do I modify the following stored procedure to make it return NULL instead of 1?
```
CREATE PROCEDURE TestProcedure
AS
BEGIN
select cast(null as int) as foo into #tmp
insert into #tmp values (1)
select sum(foo) from #tmp
END
GO
```
I thought it would be `SET ANSI_NULLS ON` (I tried before the declaration, within the procedure itself, and before executing the procedure in my test query) but that did not appear to change the behavior of `SUM()`. | The `sum()` function automatically ignores `NULL`. To do what you want, you need an explicit check:
```
select (case when count(foo) = count(*) then sum(foo) end)
from #tmp;
```
If you want to be explicit, you could add `else NULL` to the `case` statement.
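The difference is easy to reproduce in a quick SQLite session (portable syntax, same trick):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tmp (foo INTEGER);
    INSERT INTO tmp VALUES (NULL), (1);
""")

# SUM() silently skips the NULL, so the plain aggregate returns 1 ...
plain = conn.execute("SELECT SUM(foo) FROM tmp").fetchone()[0]

# ... while the COUNT comparison yields NULL as soon as any value is NULL.
guarded = conn.execute(
    "SELECT CASE WHEN COUNT(foo) = COUNT(*) THEN SUM(foo) END FROM tmp"
).fetchone()[0]

print(plain, guarded)  # 1 None
```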
The logic behind this is that `count(foo)` counts the number of non-NULL values in `foo`. If this is equal to all the rows, then all the values are non-NULL. You could use the more verbose:
```
select (case when sum(case when foo is null then 1 else 0 end) = 0
then sum(foo)
end)
```
And, I want to point out that the title is quite misleading. `1 + NULL = NULL`. The issue is with the aggregation functions, not the arithmetic operators. | Looking for a null value with `EXISTS` may be the fastest:
```
SELECT
   CASE WHEN EXISTS(SELECT NULL FROM #tmp WHERE foo IS NULL)
THEN NULL
        ELSE (SELECT sum(foo) from #tmp)
END
``` | Change aggregate functions to output NULL when a element is NULL | [
"",
"sql",
"sql-server",
"stored-procedures",
"null",
"sql-server-2012",
""
] |
I have month and year in separate columns, and I want to take records between two points (i.e., a start date and an end date). I only have the month and year. One solution I have found is to convert the month/year combination into a date. Say I have
```
Start month : 3 Start year : 2010
End month : 6 End year : 2013
```
I want to convert them as
```
start date = 01-03-2010
End date = 31-06-2013
```
Any solutions? | As **Marc B** said, End month will be hard since you need to figure out what the last day of the specified year/month is.
Try this
```
DECLARE @StartMonth INT = 3
DECLARE @StartYear INT = 2010
DECLARE @EndYear INT = 2013
DECLARE @EndMonth INT = 6
SELECT Dateadd(year, @StartYear - 2000,Dateadd(month, @StartMonth - 1, '20000101')) AS StartDate,
Dateadd(year, @EndYear - 2000, Dateadd(month, @EndMonth - 1, '20000101')) AS EndDate
``` | SqlServer 2012 onward supports **DATEFROMPARTS**(). It can be done this way :
```
DECLARE @Month as int, @Year as int
SET @Month = 8
SET @Year = 2018
SELECT DATEFROMPARTS(@Year,@Month,'01')
```
To get last date of month you can use **EOMONTH**() as below :
```
SELECT EOMONTH(DATEFROMPARTS(@Year,@Month,'01'))
``` | Convert month and year combination to date in SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I would like to display the modules in a program, along side the program as one row, instead of having multiple rows of modules for each program.
```
SELECT program.program_title as program, course_module.title as unit
FROM program_module
INNER JOIN program
ON program_module.program_id = program.id
INNER JOIN course_module
ON program_module.course_module_id = course_module.id
```
Generates:
<https://dl.dropboxusercontent.com/u/23413810/Stack/current.png>
How can I get it to appear as this: <https://dl.dropboxusercontent.com/u/23413810/Stack/desired.png> | ```
select p.program_title,
listagg(m.module_title,',') within group (order by m.module_id) as module_title
from program_module pm
inner join program p on pm.program_id = p.program_id
inner join module m on pm.module_id = m.module_id
group by p.program_title
```
sqlfiddle here <http://sqlfiddle.com/#!4/d3652/5> | If you want separate columns, rather than a single comma-separated column, you could use a pivot. From 11g there is a built in pivot command; or you can do it the old-fashioned way:
```
SELECT program,
MAX(CASE WHEN pos = 1 THEN unit END) AS unit1,
MAX(CASE WHEN pos = 2 THEN unit END) AS unit2,
MAX(CASE WHEN pos = 3 THEN unit END) AS unit3,
MAX(CASE WHEN pos = 4 THEN unit END) AS unit4
FROM (
SELECT program.id as program_id,
program.program_title as program,
course_module.title as unit,
row_number() over (partition by program.id
order by course_module.title) as pos
FROM program_module
INNER JOIN program
ON program_module.program_id = program.id
INNER JOIN course_module
ON program_module.course_module_id = course_module.id
)
group by program_id, program
order by program;
```
Either way you need to know how many modules are possible, and have handling for them all (adding more case statements with this version).
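For the single-column variant, the LISTAGG query maps almost one-to-one onto GROUP_CONCAT in MySQL or SQLite — a quick runnable check with stand-in data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE program_module (program TEXT, module TEXT);
    INSERT INTO program_module VALUES
        ('Prog A', 'Unit 1'), ('Prog A', 'Unit 2'), ('Prog B', 'Unit 3');
""")

# One row per program, modules collapsed into a comma-separated string
rows = conn.execute("""
    SELECT program, GROUP_CONCAT(module, ',') AS modules
    FROM program_module
    GROUP BY program
    ORDER BY program
""").fetchall()

print(rows)
```

Note that, unlike LISTAGG's `WITHIN GROUP (ORDER BY ...)`, plain GROUP_CONCAT does not guarantee element order.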
[SQL Fiddle](http://sqlfiddle.com/#!4/acfce/4). | Data in column instead of row | [
"",
"sql",
"oracle",
""
] |
I am working with SQL Server, I have successfully converted a table's rows into comma-separated values, now I want to convert that string of comma-separated values back to table's rows.
I have this string (`Varchar`)
```
DECLARE @str AS varchar(Max)
SET @str = '0.00,0.00,1576.95,0.00,4105.88,1017.87,0.00,6700.70'
```
I want these values into rows.
Like
```
0.00
0.00
1576
...
``` | Create a function:
```
CREATE FUNCTION [dbo].[Split](@String nvarchar(4000), @Delimiter char(1))
RETURNS @Results TABLE (Items nvarchar(4000))
AS
BEGIN
DECLARE @Index INT
DECLARE @Slice nvarchar(4000)
-- HAVE TO SET TO 1 SO IT DOESN’T EQUAL ZERO FIRST TIME IN LOOP
SELECT @Index = 1
WHILE @Index !=0
BEGIN
SELECT @Index = CHARINDEX(@Delimiter,@String) --Getting the indexof the first Occurrence of the delimiter
-- Saving everything to the left of the delimiter to the variable SLICE
IF @Index !=0
SELECT @Slice = LEFT(@String,@Index - 1)
ELSE
SELECT @Slice = @String
-- Inserting the value of Slice into the Results SET
INSERT INTO @Results(Items) VALUES(@Slice)
--Remove the Slice value from Main String
SELECT @String = RIGHT(@String,LEN(@String) - @Index)
-- Break if Main String is empty
IF LEN(@String) = 0 BREAK
END
RETURN
END
```
Pass the string `@str` and the delimiter (,) to the function.
```
SELECT Items FROM [dbo].[Split] (@str, ',')
```
It will return the result as a table:
```
Items
0.00
0.00
1576.95
0.00
4105.88
1017.87
0.00
6700.70
```
See [**SQL Fiddle**](http://sqlfiddle.com/#!3/f1801c) | This question is a duplicate of a few others, but some of the accepted answers are still the inefficient WHILE loops or recursive CTEs. There are three ways to accomplish a split that won't kill performance:
* Numbers / Tally Table:
<http://www.sqlservercentral.com/articles/Tally+Table/72993/> (free
registration required)
* XML:
<https://www.simple-talk.com/blogs/2012/01/05/using-xml-to-pass-lists-as-parameters-in-sql-server/>
* SQLCLR: there are numerous examples of this on many sites but your
best (and easiest) bet is to either grab the one provided in the
Numbers / Tally Table article noted above OR just install [SQL#
(SQLsharp)](http://www.SQLsharp.com/) (I am the author of SQL# but String\_Split and String\_Split4k are in the
Free version). | Comma-separated String into Table's Column in SQL Server | [
"",
"sql",
"sql-server-2008",
"csv",
""
] |
I have two columns, Name and Date
```
Name Date
John 2/21/2014
Joe 2/21/2014
Sue 2/22/2014
Joe 2/22/2014
Steve 2/23/2014
John 2/23/2014
```
I need a statement that selects every Date that does NOT have a certain person.
So, for example, I need a list of dates that don't have a matching row for John: I would just get 2/22/2014, because only Sue and Joe have records on that date. | You can use `NOT IN`:
```
SELECT DISTINCT Date
FROM Table
WHERE Date NOT IN
(
SELECT Date FROM Table where Name = 'John'
)
``` | You can use a `GROUP BY` and `HAVING`:
```
SELECT Date
FROM Table1
GROUP BY Date
HAVING MAX(CASE WHEN Name = 'John' THEN 1 END) IS NULL
```
Syntax may need to be adjusted for Access.
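Both forms are easy to check against the question's sample data; here is the `HAVING MAX(CASE ...)` version run in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits (Name TEXT, Date TEXT);
    INSERT INTO visits VALUES
        ('John',  '2/21/2014'), ('Joe',  '2/21/2014'),
        ('Sue',   '2/22/2014'), ('Joe',  '2/22/2014'),
        ('Steve', '2/23/2014'), ('John', '2/23/2014');
""")

# Dates whose group contains no 'John' row: MAX over all-NULLs is NULL.
rows = conn.execute("""
    SELECT Date
    FROM visits
    GROUP BY Date
    HAVING MAX(CASE WHEN Name = 'John' THEN 1 END) IS NULL
""").fetchall()

print(rows)  # [('2/22/2014',)]
```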
SQL Server demo: [SQL Fiddle](http://sqlfiddle.com/#!6/53595/2/0) | SQL Select Rows Where Does not Exist | [
"",
"sql",
"ms-access",
""
] |
I have a table `EMP` with following definition:
```
EMP_ID NOT NULL NUMBER(6)
EMP_NAME NOT NULL VARCHAR2(25)
EMAIL NOT NULL VARCHAR2(25)
PHONE_NUMBER VARCHAR2(20)
HIRE_DATE NOT NULL DATE
JOB_ID NOT NULL VARCHAR2(10)
SALARY NUMBER(8,2)
```
If we want to count employees whose name not in 'King', 'Steve', 'John' we simply use this query :
```
SELECT count(*) FROM emp WHERE emp_name NOT IN('King','Steve','John');
```
---
**Now Here is what I want above this:**
What I want to do is, Create a `PL/SQL Function` which returns number of count according to the dynamic input, Like if we pass:
```
SELECT count_emp('King,Steve,John') FROM dual;
SELECT count_emp('William,Donald') FROM dual;
SELECT count_emp('Daniel') FROM dual;
```
It needs to return the appropriate count. How can I achieve this using a `PL/SQL FUNCTION`?
---
**This is what I have tried and Needs guideline:**
```
CREATE OR REPLACE FUNCTION count_emp(emp_nm IN varchar)
RETURN number
IS
cnt NUMBER;
BEGIN
SELECT count(*) INTO cnt FROM emp WHERE emp_name NOT IN(emp_nm);
RETURN cnt;
END;
```
It gives the result for a single name, but how can I split/format multiple input values (i.e. `emp_nm`) to pass into `NOT IN()`? | Try like this,
```
CREATE OR REPLACE
FUNCTION count_emp(emp_nm IN VARCHAR)
RETURN NUMBER
IS
cnt NUMBER;
BEGIN
SELECT count(*)
INTO cnt
FROM emp
WHERE ename NOT IN(
SELECT regexp_substr (emp_nm, '[^,]+', 1, ROWNUM)
FROM dual
CONNECT BY LEVEL <= LENGTH (regexp_replace (emp_nm, '[^,]+')) + 1);
RETURN cnt;
END;
``` | You can try dynamic sql:
```
CREATE OR REPLACE FUNCTION count_emp(emp_nm IN varchar)
RETURN number
IS
cnt NUMBER;
BEGIN
  Execute immediate 'SELECT count(*) FROM emp WHERE emp_name NOT IN(' || emp_nm || ')' INTO cnt;
RETURN cnt;
END;
``` | pl/sql function- pass multiple varchar2 values dynamically for SELECT ... NOT IN() | [
"",
"sql",
"oracle",
"plsql",
""
] |
I've been having a bit of trouble thinking this problem through. I can't seem to define a SELECT query which is accurate enough to give me the result I want.
I am storing shift patterns in a table. These shift patterns don't have any restrictions on when they can start and finish (except that they cannot overlap each other per machine)
*This is the structure of the table (5 rows of example data)*

The only information I have to select with is:
* The current time (e.g. 01:45)
* The current weekday (e.g. Tuesday)
The issue is when a shift overlaps 00:00. So my question is this:
**How would I select the current shift based on the current time and weekday?**
[Here is an SQL Fiddle of the scenario](http://sqlfiddle.com/#!2/68aba0/2)
Thanks! | You can do this with simple logic. If `StartTime < EndTime`, then you want to test for times between the two values. If `Startime > EndTime`, then you want to test for times *not* between the two values. So this solves the time problem:
```
SELECT *
FROM webreportshiftsetup
WHERE (StartTime < EndTime and time(now()) between StartTime and EndTime or
           StartTime > EndTime and time(now()) not between EndTime and StartTime
) and
dayname(now()) in (StartWeekDay, EndWeekDay)
```
You have a similar problem with the weekdays. But your question is specifically about times and not weekdays. (That should perhaps be another question.) | If your shifts are one-day (i.e. you need to select only current day) you can do something like
```
SELECT * FROM shifts
WHERE startWeekDay = DATE_FORMAT(CURDATE(),'%W') AND CURTIME() BETWEEN startTime AND endTime
```
Otherwise, if your shift starts on Monday and finishes on Wednesday, and today is Tuesday, you will have trouble finding today's shift with a query. For that you should store the days as numbers: 1 - Monday, 2 - Tuesday, ... | SQL SELECT logic | [
"",
"mysql",
"sql",
""
] |
Ok, I have a complicated query to get all article details, each article has many versions. I have a need to get details of article with latest version only (ie take the max version). Here is the table:
```
+------------+-----------+----------+
| ArticleID | Detail | Version |
+------------+-----------+----------+
| 1 | detail1 | 1 |
| 1 | detail2 | 1 |
| 1 | detail3 | 2 |
| 1 | detail4 | 2 |
| 3 | detail3 | 2 |
| 3 | detail6 | 2 |
| 3 | detail4 | 3 |
+------------+-----------+----------+
```
Now user just provides a detail & the query will take all details of all articles with `version=max(version)`
Suppose that if we don't care about max version, then a simple query could be
```
Select * from articleTb where Detail like '%3'
```
It will print out:
```
+------------+-----------+----------+
| ArticleID | Detail | Version |
+------------+-----------+----------+
| 1 | detail3 | 2 |
| 3 | detail3 | 2 |
+------------+-----------+----------+
```
But this doesn't meet the requirement, because the result should not contain the record `3 - detail3 - 2`: it doesn't hold the max version of `articleID=3`.
Let's say the user searches for `Detail like '%4'`; then the correct result should be:
```
ArticleID - Detail - Version
+----+-----------+----+
| 1 | detail4 | 2 |
+----+-----------+----+
| 3 | detail4 | 3 |
+----+-----------+----+
```
The 2 records appear because they belong to articles at their max version. To explain: 2 is the max version of `articleID=1`, so it matches the condition, and 3 is the max version of `articleID=3`, so it also matches the condition.
So here is what I did:
```
select * from (Select * from articleTb where Detail like '%3') tb1
Join (select articleID, max(version) maxversion from articleTb where
Detail like '%3' group by articleID) tb2
on tb1.articleID=tb2.articleID and tb1.version=tb2.maxversion
```
However, for the above query the system has to duplicate the task `where Detail like '%3'`, which is not good. Besides, my real-world query1 is much more complicated than `where Detail like '%3'`, so if I do it as above, the query will perform the same job TWICE, and that is very inefficient.
So how can I deal with this problem? | I would use a CTE to create a table that contains the article id and the version id, then use that in my main query to filter down to the most recent version.
```
with latest as
(
    select articleId, max(version) as version from articleTb group by articleId
)
select ....
from articleTb a
inner join latest l on a.articleid = l.articleid and l.version = a.version
``` | To improve performance, remove the unnecessary inline view, e.g.
```
SELECT tb1.*
FROM articleTb tb1
JOIN ( SELECT b.articleID
, MAX(b.version) AS maxversion
FROM articleTb b
WHERE b.Detail LIKE '%3'
GROUP BY b.articleID
) tb2
ON tb1.articleID = tb2.articleID
AND tb1.version = tb2.maxversion
WHERE tb1.Detail LIKE '%3'
```
and...
make sure you have appropriate indexes. A covering index with a leading column of article may enable MySQL to use the index to optimize the GROUP BY (avoiding a "Using filesort" operation.)
```
... ON articleTb (articleID, version, detail)
```
MySQL may also be able to use that index for the join to tb1; the derived table (inline view) won't have an index.
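Either formulation can be sanity-checked against the question's sample rows; a SQLite sketch of the max-version join (CTE plus GROUP BY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE articleTb (articleID INTEGER, detail TEXT, version INTEGER);
    INSERT INTO articleTb VALUES
        (1,'detail1',1),(1,'detail2',1),(1,'detail3',2),(1,'detail4',2),
        (3,'detail3',2),(3,'detail6',2),(3,'detail4',3);
""")

# Max version per article, joined back to keep only latest-version details
rows = conn.execute("""
    WITH latest AS (
        SELECT articleID, MAX(version) AS version
        FROM articleTb
        GROUP BY articleID
    )
    SELECT a.articleID, a.detail, a.version
    FROM articleTb a
    JOIN latest l ON a.articleID = l.articleID AND a.version = l.version
    WHERE a.detail LIKE '%4'
    ORDER BY a.articleID
""").fetchall()

print(rows)  # [(1, 'detail4', 2), (3, 'detail4', 3)]
```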
You can confirm the execution plan with an `EXPLAIN`. | Query1 Join with Query2 but Query2 use the result of Query1, How to do that with just 1 query? | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I want to update 50 rows in my table to the current date.
I know how to select 50 random results but I don't know how to update them.
This is what I have for the Select code:
```
SELECT TOP 50[ID]
,[Message]
,[Date]
,[Type]
,[Username]
FROM [database].[dbo].[dbTable]
ORDER BY NEWID()
```
If I'm correct, this will get 50 random rows.
At first I just updated the first 50, but I want to update some random rows in my database.
```
update top(50) dbTable
set date=getdate()
```
I don't want to mess up the database because I don't have access to a back up today.
If there is something unclear or anything please ask me! | Try this
```
UPDATE [database].[dbo].[dbTable] Set [Date] = GetDate() where [ID] IN
(
SELECT TOP 50 [ID] FROM [database].[dbo].[dbTable] ORDER BY NEWID()
)
```
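The id-subquery shape carries over to other engines as well; for example, a SQLite sketch using `ORDER BY RANDOM() LIMIT 50` on a hypothetical 100-row table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, d TEXT)")
conn.executemany("INSERT INTO t (d) VALUES (?)", [(None,)] * 100)

# Pick 50 random ids in a subquery, then update exactly those rows.
conn.execute("""
    UPDATE t SET d = date('now')
    WHERE id IN (SELECT id FROM t ORDER BY RANDOM() LIMIT 50)
""")

updated = conn.execute("SELECT COUNT(*) FROM t WHERE d IS NOT NULL").fetchone()[0]
print(updated)  # 50
```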
**Or** else:
```
WITH q AS
(
SELECT TOP 50[ID]
,[Message]
,[Date]
,[Type]
,[Username]
FROM [database].[dbo].[dbTable]
ORDER BY NEWID()
)
UPDATE q
SET [Date] = GetDate()
``` | Try like this:
```
update [dbo].[dbTable] set date=getdate() where Id in
(
select top 50 id from dbo.dbTable order by NEWID()
)
``` | Update random top 50 rows | [
"",
"sql",
"sql-server",
""
] |
I am using the following code to insert data in an access table from SQL using a recordset. This table is further used for other operations. While this is inserting the data into the table perfectly, the time taken by this huge. Should I use any other method to insert data into the table to reduce the time taken?
```
Do Until rs.EOF
DoCmd.RunSQL "INSERT INTO Table (Alpha,Beta,Gamma) VALUES(" & _
rs.Fields(0).Value & ",'" & rs.Fields(1).Value & "'," & rs.Fields(2).Value _
& " );"
rs.MoveNext
Loop
``` | 1. Create a linked table to the SQL table, say it's called `MyLinkedTable`
2. Create an Append query to append to your local Access table from your linked table. It will look something like this: `INSERT INTO MyAccessTable (Field1,Field2...) SELECT Field1,Field2... FROM MyLinkedTable;` | If you could select the data in the SQL `UPDATE` statement instead looping in `VBA` it would take a fraction of the time as all the work would be done by the server side. | Insert into an access table from SQL using recordset | [
"",
"sql",
"ms-access",
"vba",
"ms-access-2010",
""
] |
I have a database which has three tables
Messages - PK = MessageId
Drafts - PK = DraftId
History - FK = RelatedItemId
The History table has a single foreign Key `[RelatedItemId]` which maps to one of the two Primary keys in `Messages` and `Drafts`.
Is there a name for this relationship?
Is it just bad design?
Is there a better way to design this relationship?
Here are the CREATE TABLE statements for this question:
```
CREATE TABLE [dbo].[History](
[HistoryId] [uniqueidentifier] NOT NULL,
[RelatedItemId] [uniqueidentifier] NULL,
CONSTRAINT [PK_History] PRIMARY KEY CLUSTERED ( [HistoryId] ASC )
)
CREATE TABLE [dbo].[Messages](
[MessageId] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_Messages] PRIMARY KEY CLUSTERED ( [MessageId] ASC )
)
CREATE TABLE [dbo].[Drafts](
[DraftId] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_Drafts] PRIMARY KEY CLUSTERED ( [DraftId] ASC )
)
``` | Best practice I have found is to create a Function that returns whether the passed in value exists in either of your Messages and Drafts PK columns. You can then add a constraint on the column on the History that calls this function and will only insert if it passes (i.e. it exists).
Adding example code (T-SQL):
```
CREATE FUNCTION dbo.is_related_there (@value uniqueidentifier)
RETURNS TINYINT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM Drafts WHERE DraftId = @value)
       OR EXISTS (SELECT 1 FROM Messages WHERE MessageId = @value)
        RETURN 1;
    RETURN 0;
END;
GO

ALTER TABLE History ADD CONSTRAINT
    CK_HistoryExists CHECK (dbo.is_related_there(RelatedItemId) = 1)
```
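For what it's worth, the same existence rule can also be expressed in engines without function-based CHECK constraints; in SQLite, for example, a trigger stands in for the function (hypothetical mini schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Messages (MessageId TEXT PRIMARY KEY);
    CREATE TABLE Drafts   (DraftId   TEXT PRIMARY KEY);
    CREATE TABLE History  (HistoryId TEXT PRIMARY KEY, RelatedItemId TEXT);

    -- SQLite CHECKs cannot query other tables, so a trigger enforces the rule
    CREATE TRIGGER history_parent_exists
    BEFORE INSERT ON History
    WHEN NOT EXISTS (SELECT 1 FROM Messages WHERE MessageId = NEW.RelatedItemId)
     AND NOT EXISTS (SELECT 1 FROM Drafts   WHERE DraftId   = NEW.RelatedItemId)
    BEGIN
        SELECT RAISE(ABORT, 'RelatedItemId not found in Messages or Drafts');
    END;

    INSERT INTO Messages VALUES ('m1');
    INSERT INTO History  VALUES ('h1', 'm1');   -- passes the check
""")

try:
    conn.execute("INSERT INTO History VALUES ('h2', 'nope')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True
```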
Hope that runs and helps lol | In a short description the solution you have used is called:
**Polymorphic Association**
*Objective*: Reference Multiple Parents
*Resulting anti-pattern:* Use dual-purpose foreign key, violating first normal form (atomic issue), losing referential integrity
*Solution:* Simplify the Relationship
[More information about the problem](https://rads.stackoverflow.com/amzn/click/com/1934356557).
BTW creating a common super-table will help you:
 | Foreign Key column mapped to multiple primary keys | [
"",
"sql",
"database",
"database-design",
""
] |
I am using CodeIgniter and Active Record. I am selecting my data using a WHERE array, like this. But how can I include the operators '>' and '<'? Is it possible?
```
$whereQuery['service.service_end_date'] = $start;
```
Thank you for replies. | <http://ellislab.com/codeigniter/user-guide/database/active_record.html>
```
$whereQuery['service.service_end_date >'] = $start;
$whereQuery['service.service_end_date <'] = $start;
```
You can pass `>`, `<`, or `<>` in the CI where function:
```
$this->db->where('field_name <', "Condition_value");
``` | This might be what you want:
Associative array method:
`$array = array('name' => $name, 'title' => $title, 'status' => $status);`
`$this->db->where($array);`
// Produces: WHERE name = 'Joe' AND title = 'boss' AND status = 'active'
You can include your own operators using this method as well:
`$array = array('name !=' => $name, 'id <' => $id, 'date >' => $date);`
`$this->db->where($array);`
Source:
<http://ellislab.com/codeigniter/user-guide/database/active_record.html> | Codeigniter active record where array | [
"",
"sql",
"codeigniter",
"activerecord",
""
] |
This is a query that is supposed to get the user's information, their project's information, and a group\_concat of all the image paths that such project is associated to. To add to this, I am only getting the information mentioned from the people the user is following.
This is, however, **only returning one row.**
```
SELECT users.first_name, users.last_name, users.user_id, projects.project_id, projects.project_name, projects.project_time, group_concat(images.image_path)
FROM users, projects, images
WHERE users.user_id = projects.user_id
AND users.user_id IN (SELECT follow_user_2 FROM following WHERE follow_user_1 = 1)
ORDER BY projects.project_id DESC
```
**TO COMPARE:** The following query **WORKS** in the sense that in the loop it gives all of the user's information and the projects information related to such user.
```
SELECT users.first_name, users.last_name, users.user_id, projects.project_id, projects.project_name, projects.project_time
FROM users, projects
WHERE users.user_id = projects.user_id
AND users.user_id IN (SELECT follow_user_2 FROM following WHERE follow_user_1 = 1)
ORDER BY projects.project_id DESC
```
When I try to use group\_concat it just returns me **one** row and I do not understand why.
Can someone help me please? Thank you. If my question was not clear enough, I will elaborate.
If this helps, here's an SQL FIDDLE.
<http://www.sqlfiddle.com/#!2/867f6/2>
I had to shorten my schema a lot. Try both queries above to see the problem. | > When I try to use group_concat it just returns me one row and I do not understand why.
Because you have not used the `GROUP BY` clause in your query. When using aggregate functions like `GROUP_CONCAT` you need to tell the database about the column using which you want your data to be combined.
Currently your query is grouping all records and giving 1 record in the output.
If you add `GROUP BY users.userid` in the query then the records will be grouped by unique userid's. I updated your fiddle and it now gives 2 records: <http://www.sqlfiddle.com/#!2/867f6/18>
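A minimal SQLite reproduction of both behaviours (toy tables; only the grouping differs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE projects (project_id INTEGER, user_id INTEGER);
    CREATE TABLE images   (project_id INTEGER, image_path TEXT);
    INSERT INTO projects VALUES (10, 1), (20, 2);
    INSERT INTO images   VALUES (10, 'a.png'), (10, 'b.png'), (20, 'c.png');
""")

base = """
    SELECT p.project_id, GROUP_CONCAT(i.image_path)
    FROM projects p JOIN images i ON i.project_id = p.project_id
"""

# No GROUP BY: the aggregate collapses everything into a single row.
no_group = conn.execute(base).fetchall()

# With GROUP BY: one row per project, paths concatenated per group.
grouped = conn.execute(base + " GROUP BY p.project_id ORDER BY p.project_id").fetchall()

print(len(no_group), len(grouped))  # 1 2
```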
Please note: In standard SQL queries, columns listed in the GROUP BY clause should match the column in the SELECT clause (except the aggregate functions). | Just use `group by` clause in your\_query
```
SELECT users.first_name, users.last_name,
users.user_id, projects.project_id,
projects.project_name, projects.project_time,
group_concat(images.image_path)
FROM users, projects, images
WHERE users.user_id = projects.user_id
AND users.user_id IN (SELECT follow_user_2 FROM following
WHERE follow_user_1 = 1)
group by users.first_name
ORDER BY projects.project_id DESC;
```
[fiddle](http://www.sqlfiddle.com/#!2/867f6/20) | Query using group_concat is returning only one row | [
"",
"mysql",
"sql",
"database",
""
] |
I already read a lot on this topic, but I'm unable to get it to work for my case.
I have the following situation:
* A list of orderitems (the main datasets I want to get)
* Articles which have a 1:1 relation to an order item
* A n:m Jointable "Articlesupplier" which creates a relation between an article and a
partner
* A Partner table with detailed information about partners.
Target:
One dataset per OrderItem and from the suppliers I only want to get the first one found in the join. No priorization required.
Tables:
Table `IDX_ORDERITEM`
```
id,article_id
```
Table `IDX_ARTICLE`
```
id,name
```
Table `IDX_ARTICLESUPPLIER`
```
article_id,partner_id
```
Table `IDX_PARTNER`
```
id,abbr
```
My actual statement (short version):
```
SELECT IDX_ORDERITEM.id
FROM
dbo.IDX_ORDERITEM AS IDX_ORDERITEM
-- ARTICLE --
INNER JOIN dbo.IDX_ARTICLE AS IDX_ARTICLE
ON IDX_ORDERITEM.article_id=IDX_ARTICLE.id
-- SUPPLIER VIA ARTICLE --
LEFT JOIN
(SELECT TOP(1) IDX_PARTNER.id, IDX_PARTNER.abbr
FROM IDX_PARTNER, IDX_ARTICLESUPPLIER
WHERE IDX_PARTNER.id = IDX_ARTICLESUPPLIER.partner_id
AND IDX_ARTICLESUPPLIER.article_id=IDX_ARTICLE.id) AS IDX_PARTNER_SUPPLIER
ON IDX_PARTNER_SUPPLIER.id=IDX_ARTICLE.supplier_partner_id
WHERE 1>0
ORDER BY orderitem.id DESC
```
But it seems I can't access IDX\_ARTICLE.id in the subquery. I get the following error message:
> The multi-part identifier "IDX\_ARTICLE.id" could not be bound.
Is the problem that the Article alias has the same name as the table name?
Thanks a lot in advance for possible ideas,
Mike | Well, I changed your aliases and the subquery to which you were joining (I also modified that subquery so it no longer uses implicit joins), though those changes are mostly cosmetic. The actually important change is the use of `OUTER APPLY` instead of `LEFT JOIN`:
```
SELECT OI.id
FROM dbo.IDX_ORDERITEM AS OI
INNER JOIN dbo.IDX_ARTICLE AS A
ON OI.article_id = A.id
OUTER APPLY
(SELECT TOP(1) P.id, P.abbr
FROM IDX_PARTNER AS P
INNER JOIN IDX_ARTICLESUPPLIER AS SUP
ON P.id = SUP.partner_id
WHERE SUP.article_id = A.id
AND P.id = A.supplier_partner_id) AS PS
ORDER BY OI.id DESC
``` | The error is thrown because the piece of query below
```
(SELECT TOP(1) IDX_PARTNER.id, IDX_PARTNER.abbr
FROM IDX_PARTNER, IDX_ARTICLESUPPLIER
WHERE IDX_PARTNER.id = IDX_ARTICLESUPPLIER.partner_id
AND IDX_ARTICLESUPPLIER.article_id=IDX_ARTICLE.id) AS IDX_PARTNER_SUPPLIER
```
cannot be treated as a correlated subquery, yet `IDX_ARTICLE.id` is referenced in it the same way an outer-query column would be referenced in a correlated subquery. | T-SQL Left-Join with 1 row (limit, subselect) | [
"",
"sql",
"sql-server",
"t-sql",
"join",
"left-join",
""
] |
How can I select a column from all selected tables? Like this (throws an error):
```
SELECT *.id FROM table1, table2, table3, table4
```
I wouldn't do it like this, because there are a lot of tables and they will change:
```
SELECT table1.id, table2.id, table3.id, table4.id FROM table1, table2, table3, table4
```
I only want the id, so this isn't useful either:
```
SELECT * FROM table1, table2, table3, table4
```
Is there a nice solution?
Greets | Why not use `UNION`, like this:
```
SELECT id FROM table1
UNION
SELECT id FROM table2
UNION
SELECT id FROM table3
UNION
SELECT id FROM table4
``` | Use alias names for the tables and prefix the column names with the alias.
Example: `TableName aliasname`
Then use `select t1.columnname from Table1 t1`:
```
SELECT t1.id, t2.id, t3.id, t4.id FROM table1 t1, table2 t2, table3 t3, table4 t4
``` | MySQL - SELECT column from all tables | [
"",
"mysql",
"sql",
"select",
"join",
""
] |
I have this query
```
SELECT "items".*
FROM
"items"
INNER JOIN
item_mods ON item_mods.item_id = items.id AND item_mods.mod_id = 15
WHERE (items.league_id = 1) AND (items.item_type_id = 11) AND (num_sockets >= 2)
ORDER BY item_mods.total_value DESC
LIMIT 25
```
This is the explain <http://explain.depesz.com/s/dbf>
```
"Limit (cost=55739.84..55739.90 rows=25 width=554) (actual time=18065.470..18065.478 rows=25 loops=1)"
" -> Sort (cost=55739.84..55741.90 rows=824 width=554) (actual time=18065.468..18065.471 rows=25 loops=1)"
" Sort Key: item_mods.total_value"
" Sort Method: top-N heapsort Memory: 37kB"
" -> Nested Loop (cost=5871.95..55716.59 rows=824 width=554) (actual time=285.806..18055.589 rows=610 loops=1)"
" -> Bitmap Heap Scan on items (cost=5871.52..20356.70 rows=4339 width=550) (actual time=201.543..10028.684 rows=9945 loops=1)"
" Recheck Cond: ((item_type_id = 11) AND (num_sockets >= 2))"
" Rows Removed by Index Recheck: 4120"
" Filter: (league_id = 1)"
" Rows Removed by Filter: 1125"
" -> BitmapAnd (cost=5871.52..5871.52 rows=4808 width=0) (actual time=199.322..199.322 rows=0 loops=1)"
" -> Bitmap Index Scan on index_items_on_item_type_id (cost=0.00..289.61 rows=15625 width=0) (actual time=38.699..38.699 rows=16018 loops=1)"
" Index Cond: (item_type_id = 11)"
" -> Bitmap Index Scan on index_items_on_num_sockets (cost=0.00..5579.49 rows=301742 width=0) (actual time=158.441..158.441 rows=301342 loops=1)"
" Index Cond: (num_sockets >= 2)"
" -> Index Scan using index_item_mods_on_item_id on item_mods (cost=0.43..8.14 rows=1 width=8) (actual time=0.803..0.803 rows=0 loops=9945)"
" Index Cond: (item_id = items.id)"
" Filter: (mod_id = 15)"
" Rows Removed by Filter: 9"
"Total runtime: 18065.773 ms"
```
How can I improve the speed of this query? I note the index scan loops 9,945 times. | The reason your query is slow is that there are no indexes matching the way you want to return the data.
Notice the "Bitmap Index Scan" where it is saying, I know you have an index but I have to look at the whole table to find the row I need (hence total row scan of up to 301742!). This is probably because of the combination of other columns you have asked for and the constraints you are applying i.e. item\_mods.mod\_id = 15
Try:
1. "items".\* - Only select out the columns you need rather than everything.
2. Create an index on: item\_mods.item\_id AND item\_mods.mod\_id
3. Create an index on: items.league\_id AND items.item\_type\_id AND num\_sockets (assuming num\_sockets is on the same table)
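A runnable sketch of suggestions 2 and 3 (the index names are my own, and SQLite stands in for Postgres here; the same `CREATE INDEX` statements work on Postgres):

```python
import sqlite3

# Hypothetical index names; SQLite used as a stand-in to show that a
# composite index covers the WHERE clause of the problem query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER, league_id INTEGER,
                        item_type_id INTEGER, num_sockets INTEGER);
    CREATE TABLE item_mods (item_id INTEGER, mod_id INTEGER, total_value INTEGER);
    CREATE INDEX item_mods_item_mod ON item_mods (item_id, mod_id);
    CREATE INDEX items_league_type_sockets
        ON items (league_id, item_type_id, num_sockets);
""")

# Ask the planner how it would run the filtered part of the query.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id FROM items
    WHERE league_id = 1 AND item_type_id = 11 AND num_sockets >= 2
""").fetchall()
print(plan)
```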
Any performance difference? | This only shortens and cleans the syntax, but doesn't change anything substantially:
```
SELECT i.*
FROM items i
JOIN item_mods m ON m.item_id = i.id
WHERE i.league_id = 1
AND i.item_type_id = 11
AND i.num_sockets >= 2
AND m.mod_id = 15
ORDER BY m.total_value DESC
LIMIT 25;
```
A query like this is **very hard** to optimize. Postgres cannot just read from the top of an index: since you are sorting by a column in `item_mods`, but the most selective conditions are on `items`, it is also hard to tailor an index that would help much more.
Of course you can optimize indexes on either table. But without additional information to feed to the query, it won't get cheap. *All* qualifying rows have to be read before Postgres knows the winners.
We have developed solutions under this related question on dba.SE. Sophisticated stuff:
[Can spatial index help a “range - order by - limit” query](https://dba.stackexchange.com/questions/18300/can-spatial-index-help-a-range-order-by-limit-query/22500) | Ways to improve this SQL query | [
"",
"sql",
"postgresql",
"indexing",
"postgresql-performance",
"sql-limit",
""
] |
I want to compare the current row with a value in the next row. SQL has `LEAD` and `LAG` functions to get the next and previous values but I can not use them because I am using SQL Server 2008.
So how do I get this?
I have a table with this output:
```
+----+-------+-----------+-------------------------+
| Id | ActId | StatusId | MinStartTime |
+----+-------+-----------+-------------------------+
| 1 | 42 | 1 | 2014-02-14 11:17:21.203 |
| 2 | 42 | 1 | 2014-02-14 11:50:19.367 |
| 3 | 42 | 1 | 2014-02-14 11:50:19.380 |
| 4 | 42 | 6 | 2014-02-17 05:25:57.280 |
| 5 | 42 | 6 | 2014-02-19 06:09:33.150 |
| 6 | 42 | 1 | 2014-02-19 06:11:24.393 |
| 7 | 42 | 6 | 2014-02-19 06:11:24.410 |
| 8 | 42 | 8 | 2014-02-19 06:44:47.070 |
+----+-------+-----------+-------------------------+
```
What I want to do: if the current row's status is 1, the next row's status is 6, and both times are the same (to the minute), then I want to get the row where the status is 1.
E.g. the row with Id 6 has status 1 and the row with Id 7 has status 6, but both times are the same, i.e. 2014-02-19 06:11.
So I want to get that row (or its id) for status 1, i.e. id 6. | In your case, the `id`s appear to be numeric, so you can just do a self-join:
```
select t.*
from table t join
table tnext
on t.id = tnext.id - 1 and
t.StatusId = 1 and
tnext.StatusId = 6 and
datediff(second, t.MinStartTime, tnext.MinStartTime) < 60;
```
This isn't quite the same minute. It is within 60 seconds. Do you actually need the same calendar time minute? If so, you can do:
```
select t.*
from table t join
table tnext
on t.id = tnext.id - 1 and
t.StatusId = 1 and
tnext.StatusId = 6 and
datediff(second, t.MinStartTime, tnext.MinStartTime) < 60 and
datepart(minute, t.MinStartTime) = datepart(minute, tnext.MinStartTime);
``` | Just posting a more complex join using two different tables created with Gordon's foundation. Excuse the specific object names, but you'll get the gist. Gets the percentage change in samples from one to the next.
```
SELECT
fm0.SAMPLE curFMSample
, fm1.SAMPLE nextFMSample
, fm0.TEMPERATURE curFMTemp
, fm1.TEMPERATURE nextFMTemp
, ABS(CAST((fm0.Temperature - fm1.Temperature) AS DECIMAL(4, 0)) / CAST(fm0.TEMPERATURE AS DECIMAL(4, 0))) AS fmTempChange
, fm0.GAUGE curFMGauge
, fm1.GAUGE nextFMGauge
, ABS(CAST((fm0.GAUGE - fm1.GAUGE) AS DECIMAL(4, 4)) / CAST(fm0.GAUGE AS DECIMAL(4, 4))) AS fmGaugeChange
, fm0.WIDTH curFMWidth
, fm1.WIDTH nextFMWidth
, ABS(CAST((fm0.Width - fm1.Width) AS DECIMAL(4, 2)) / CAST(fm0.Width AS DECIMAL(4, 2))) AS fmWidthChange
, cl0.TEMPERATURE curClrTemp
, cl1.TEMPERATURE nextClrTemp
, ABS(CAST((cl0.Temperature - cl1.Temperature) AS DECIMAL(4, 0)) / CAST(cl0.TEMPERATURE AS DECIMAL(4, 0))) AS clrTempChange
FROM
dbo.COIL_FINISHING_MILL_EXIT_STR02 fm0
INNER JOIN dbo.COIL_FINISHING_MILL_EXIT_STR02 fm1 ON (fm0.SAMPLE = fm1.SAMPLE - 1 AND fm1.coil = fm0.coil)
INNER JOIN dbo.COIL_COILER_STR02 cl0 ON fm0.coil = cl0.coil AND fm0.SAMPLE = cl0.SAMPLE
INNER JOIN dbo.COIL_COILER_STR02 cl1 ON (cl0.SAMPLE = cl1.SAMPLE - 1 AND cl1.coil = cl0.coil)
WHERE
fm0.coil = 2015515872
``` | Alternate of lead lag function in SQL Server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm fairly new to MSSQL and just have experience with some fairly basic update statements. I was wondering if someone might be able to help me with the correct syntax to do the following:
I have two tables, `F4101` and `F4102` that contain the same item IDs in the columns `IMITM (F4101)` and `IBITM (F4102)`.
Only F4102 needs to be updated, but it is updated based on F4101 and it needs to make sure that it is updating the correct item in F4102.
The statement needs to check where F4101 IMSRTX='RESIN' and then set F4102 IBBUYR=1009, while making sure that F4101 IMITM is equal to F4102 IBITM.
I appreciate the help!
Example Data:
```
F4101:
+-------+--------+
| IMITM | IMSRTX |
+-------+--------+
| 7493 | RESIN |
| 7494 | MINOR |
| 7495 | MINOR |
| 7496 | RESIN |
+-------+--------+
F4102:
+-------+--------+
| IBITM | IBBUYR |
+-------+--------+
| 7493 | |
| 7494 | |
| 7495 | |
| 7496 | |
+-------+--------+
```
If F4101 contains RESIN for an IMITM, it needs to update the IBBUYR column of F4102 with a value where the IMITM of F4101 is equal to the IBITM of F4102. | ```
UPDATE F4102
SET IBBUYR = 1009
WHERE EXISTS (
SELECT 1
FROM F4101
WHERE IMITM = F4102.IBITM
AND IMSRTX = 'RESIN'
)
``` | ```
Update F4102
Set IBBUYR=1009
From F4102 F2
Inner Join F4101 F1 on F2.IBITM = F1.IMITM
Where F1.IMSRTX='RESIN'
``` | SQL statement to update one table if value present in another | [
"",
"sql",
"sql-server",
""
] |
I want to display information in sections with a specific interval, for example 5, 10, 15, etc. minutes.
But I want to kind of "round up" to the next value of this interval for a given DateTime
e.g.
```
declare @dateTime datetime = '2014-03-05 18:37';
select getNextDateTimeForInterval(@dateTime, 60); --> 2014-03-05 19:00
select getNextDateTimeForInterval(@dateTime, 30); --> 2014-03-05 19:00
select getNextDateTimeForInterval(@dateTime, 20); --> 2014-03-05 18:40
select getNextDateTimeForInterval(@dateTime, 15); --> 2014-03-05 18:45
select getNextDateTimeForInterval(@dateTime, 10); --> 2014-03-05 18:40
select getNextDateTimeForInterval(@dateTime, 5); --> 2014-03-05 18:40
select getNextDateTimeForInterval(@dateTime, 1); --> 2014-03-05 18:38
```
Is there a clever way to do this? I have only come up with solutions that work for some of these cases. | Seems to work:
```
create function dbo.getNextDateTimeForInterval (@dt datetime, @unit int)
returns datetime
as
begin
return DATEADD(minute,(DATEDIFF(minute,0,@dt)/@unit+1)*@unit,0)
end
go
declare @dateTime datetime = '2014-03-05T18:37:00';
select dbo.getNextDateTimeForInterval(@dateTime, 60); --> 2014-03-05 19:00
select dbo.getNextDateTimeForInterval(@dateTime, 30); --> 2014-03-05 19:00
select dbo.getNextDateTimeForInterval(@dateTime, 20); --> 2014-03-05 18:40
select dbo.getNextDateTimeForInterval(@dateTime, 15); --> 2014-03-05 18:45
select dbo.getNextDateTimeForInterval(@dateTime, 10); --> 2014-03-05 18:40
select dbo.getNextDateTimeForInterval(@dateTime, 5); --> 2014-03-05 18:40
select dbo.getNextDateTimeForInterval(@dateTime, 1); --> 2014-03-05 18:38
```
Nothing especially tricksy needed. Although if you use intervals that don't divide 60 evenly, I won't guarantee that it'll match your expectations, because I have no idea what your expectations might be for such a situation.
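As a quick cross-check of the rounding arithmetic outside SQL Server, here is the same sequence of steps (epoch, integer division, add one, multiply back) sketched in Python:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1900, 1, 1)  # SQL Server's datetime "0"

def next_interval(dt, unit):
    # DATEDIFF(minute, 0, @dt): whole minutes elapsed since the epoch.
    minutes = int((dt - EPOCH).total_seconds()) // 60
    # Integer-divide by the unit, add 1, multiply back up (the round-up),
    # then DATEADD the result back onto the epoch.
    return EPOCH + timedelta(minutes=(minutes // unit + 1) * unit)

dt = datetime(2014, 3, 5, 18, 37)
print(next_interval(dt, 60))  # 2014-03-05 19:00:00
print(next_interval(dt, 15))  # 2014-03-05 18:45:00
print(next_interval(dt, 1))   # 2014-03-05 18:38:00
```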
---
Explanation: `DATEDIFF(minute,0,@dt)` computes the whole number of minutes that have elapsed between midnight at the start of `1900-01-01` (the value you obtain by interpreting `0` as a date) and `@dt`. We then do an integer divide (which rounds towards 0) that value by `@unit`, add 1 and then multiply it back up by `@unit` - which is how we're getting our rounding up effect. And then we use `DATEADD` to add that new number of minutes back onto `1900-01-01`. | Another approach that returns the correct values for your test cases:
```
CREATE FUNCTION getNextDateTimeForInterval(@param datetime, @roundValue int)
RETURNS datetime
AS
BEGIN
DECLARE @totalMinutes int;
SET @totalMinutes = DATEDIFF(
minute
, CONVERT(date, @param)
, @param);
RETURN DATEADD(minute
, @totalMinutes + (@roundValue - (@totalMinutes % @roundValue))
, CONVERT(DateTime, CONVERT(date, @param))
);
END
```
But note that if you want `18:37, 1 -->18:38`, then it'll also be `19:00, 60 -->20:00`. That may be odd. | Roof a DateTime to next value of a specific minute interval | [
"",
"sql",
"sql-server",
"t-sql",
"datetime",
""
] |
How can I assign a selected value to a variable so that I can reuse the value in a nested SQL query? For example, I may have something that looks like this:
```
select count(user_id), substr(name, 0, 5)
from users
group by substr(name, 5);
```
But in this case, the `substr` value in the select clause may be inconsistent with the `substr` value in the group by clause. It will be good if I can assign one of the `substr` value to a variable and reuse it. This will ensure that the values are consistent. How can I achieve this in Oracle? | To avoid the duplication of terms in the select list and `group by` clause, you could use a subquery:
```
select count(user_id), name_alias
from (
select user_id, substr(name, 0, 5) as name_alias
from users
)
group by name_alias;
```
It seems like overkill in this simple example, but it can be useful with complicated case statements etc. where it saves some work keeping the code consistent; though Oracle would complain if you made a mistake.
You need the subquery because Oracle [doesn't allow you to use column aliases in the same level of query, except in the `order by`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/queries002.htm#sthref2214), because of how the query is constructed internally. | If you wish to assign variables, you need to switch to PL/SQL, since you're saying you try to avoid that, then there's no variables in regular querying,unless you wish to involve server side code.
if you wish to deal with the result as a column instead you could do something like:
```
select count(t.user_id),t.name
from(
select user_id,substr(name,0,5) name
from users
) t
group by t.name
``` | Is it possible to assign select values to variables in sql? | [
"",
"sql",
"database",
"oracle",
""
] |
I was looking for the proper SQL queries for retrieving all students enrolled in a certain course, or all courses a certain student has enrolled in, on **Moodle**.
I have found a few solutions from the Internet, and most of them suggest joining these tables:
**context, role\_assignments, course, user, role**
But then when I looked at the database, I found that there is a table named `user_enrolments`, and it seems to me that I could get the results by joining the following tables:
**user\_enrolments, user, course, enrol**
For example,
```
SELECT u.id, c.id
FROM mdl_user u
INNER JOIN mdl_user_enrolments ue ON ue.userid = u.id
INNER JOIN mdl_enrol e ON e.id = ue.enrolid
INNER JOIN mdl_course c ON e.courseid = c.id
```
and
```
SELECT u.id, c.id
FROM mdl_user u
INNER JOIN mdl_role_assignments ra ON ra.userid = u.id
INNER JOIN mdl_context ct ON ct.id = ra.contextid
INNER JOIN mdl_course c ON c.id = ct.instanceid
INNER JOIN mdl_role r ON r.id = ra.roleid
WHERE r.id = 5
```
*(where 5 is the id for the role `student`)*
These 2 queries give me the **SAME** set of results. (tested on a small set of data only)
So I would like to ask what are the differences between the two approaches?
Thank you for any assistance in advance. | The first query gives you a list of users who are enroled on the course, whatever role they have assigned to them (it is possible to be enroled on a course and have no role assigned at all).
The second query shows all the users who have role 5 assigned to them at the course level. It is possible (though unusual) to have a role assigned at the course level, without actually being enroled in the course itself.
However, both of the queries are flawed.
The first query could return duplicate results if the user was enroled in a course via more than one enrolment method (unusual, but possible). It also fails to take into account the following:
* The enrolment plugin may be disabled at site level
* The enrolment plugin may be disabled at the course level (check for 'e.status = 0' to only find active enrolment plugins)
* The enrolment may be time-limited - the user's enrolment may have expired (check for 'ue.timeend = 0 OR ue.timeend > NOW()' to find only unexpired enrolments)
The second query assumes that the student role is id 5 (and also that there are no other roles, based on the student role, that are in use). I would normally either use an extra query to check the id of the 'student' role in the table 'mdl\_role' and then use that value, or change the last couple of lines to the following:
JOIN mdl\_role r ON r.id = ra.roleid AND r.shortname = 'student'.
The second query also fails to check the 'contextlevel' - it is possible to have multiple contexts with the same instance id (as it is possible to have course id 5, course category id 5, user id 5, etc.) - so you need to check that the context found is a 'course' context (contextlevel = 50).
Neither query checks for suspended users or deleted users (although, in the case of deleted users, they should have been automatically unenroled from all courses at the point where they were deleted).
A fully complete solution (possibly overly complex for most situations) would combine both queries together to check the user was enroled and assigned the role of student and not suspended:
```
SELECT DISTINCT u.id AS userid, c.id AS courseid
FROM mdl_user u
JOIN mdl_user_enrolments ue ON ue.userid = u.id
JOIN mdl_enrol e ON e.id = ue.enrolid
JOIN mdl_role_assignments ra ON ra.userid = u.id
JOIN mdl_context ct ON ct.id = ra.contextid AND ct.contextlevel = 50
JOIN mdl_course c ON c.id = ct.instanceid AND e.courseid = c.id
JOIN mdl_role r ON r.id = ra.roleid AND r.shortname = 'student'
WHERE e.status = 0 AND u.suspended = 0 AND u.deleted = 0
AND (ue.timeend = 0 OR ue.timeend > UNIX_TIMESTAMP(NOW())) AND ue.status = 0
```
(Note I haven't double-checked that query extensively - it runs, but you would need to carefully cross-reference against actual enrolments to check I hadn't missed anything). | The following code generates a list of all your courses together with how many students are enrolled in each. Useful to find out if you have any courses with no one enrolled.
My Answer :
```
SELECT cr.SHORTNAME,
cr.FULLNAME,
COUNT(ra.ID) AS enrolled
FROM `MDL_COURSE` cr
JOIN `MDL_CONTEXT` ct
ON ( ct.INSTANCEID = cr.ID )
LEFT JOIN `MDL_ROLE_ASSIGNMENTS` ra
ON ( ra.CONTEXTID = ct.ID )
WHERE ct.CONTEXTLEVEL = 50
AND ra.ROLEID = 5
GROUP BY cr.SHORTNAME,
cr.FULLNAME
ORDER BY `ENROLLED` ASC
``` | SQL query for Courses Enrolment on Moodle | [
"",
"sql",
"database",
"moodle",
""
] |
I've got 2 tables: members and member\_logs.
Members can belong to groups, which are in the members table. Given a date range and a group I'm trying to figure out how to get the 10 days with the highest number of successful logins. What I have so far is a massive nest of subquery terror.
```
SELECT count(member_id) AS `num_users`,
DATE_FORMAT(`login_date`,'%Y-%m-%d') AS `reg_date`
FROM member_logs
WHERE `login_success` = 1
and `reg_date` IN
(SELECT DISTINCT DATE_FORMAT(`login_date`,'%Y-%m-%d') AS `reg_date`
FROM member_logs
WHERE `login_success` = 1
and (DATE_FORMAT(`login_date`,'%Y-%m-%d') BETWEEN '2012-02-25' and '2014-03-04'))
and `member_id` IN
(SELECT `member_id`
FROM members
WHERE `group_id` = 'XXXXXXX'
and `deleted` = 0)
ORDER BY `num_users` desc
LIMIT 0, 10
```
As far as I understand, what is happening is that the WHERE clause is evaluated before the subqueries are generated, and I should also be using joins. If anyone can help me out or point me in the right direction, that would be incredible.
EDIT: Limit was wrong, fixed it | The first subquery is totally unnecessary because you can filter by dates directly in the current table member\_logs. I also prefer a JOIN for the second subquery. Then what you are missing is grouping by date (day).
A query like the following one (not tested) will do the job you want:
```
SELECT COUNT(ml.member_id) AS `num_users`,
DATE_FORMAT(`login_date`,'%Y-%m-%d') AS `reg_date`
FROM member_logs ml
INNER JOIN members m ON ml.member_id = m.member_id
WHERE `login_success` = 1
AND DATE_FORMAT(`login_date`,'%Y-%m-%d') BETWEEN '2012-02-25' AND '2014-03-04'
AND `group_id` = 'XXXXXXX'
AND `deleted` = 0
GROUP BY `reg_date`
ORDER BY `num_users` desc
LIMIT 10
``` | ```
SELECT count(member_id) AS `num_users`,
DATE_FORMAT(`login_date`,'%Y-%m-%d') AS `reg_date`
FROM member_logs
WHERE `login_success` = 1
and `login_date` IN
(SELECT `login_date`
FROM member_logs
WHERE `login_success` = 1
and (DATE_FORMAT(`login_date`,'%Y-%m-%d') BETWEEN '2012-02-25' and '2014-03-04'))
and `member_id` IN
(SELECT `member_id`
FROM members
WHERE `group_id` = 'XXXXXXX'
and `deleted` = 0)
Group by `login_date`
ORDER BY `num_users` desc
LIMIT 0, 10
``` | Unknown Column in 'IN/ALL/ANY' subquery | [
"",
"mysql",
"sql",
"join",
"subquery",
""
] |
SQL 2012
I am trying to restore a full database backup on my local computer. The backup was taken from a server database. I tried both the wizard option and the syntax option.
```
RESTORE DATABASE MyDatabase
FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\DB_Backup.bak'
WITH REPLACE
```
On executing the above syntax, I get the following error message.
The error message says that the original backup files - the .mdf and .ldf - are still mapped to the original server path.
```
Msg 3156, Level 16, State 3, Line 1
File 'DB_Backup_Data' cannot be restored to 'D:\My Path\Data\DB_Backup.mdf'. Use WITH MOVE to identify a valid location for the file.
Msg 3156, Level 16, State 3, Line 1
File 'DB_Backup_Log' cannot be restored to 'D:\My Path\Data\DB_log.ldf'. Use WITH MOVE to identify a valid location for the file.
```
How do I remap the .ldf and .mdf files to my local computer and restore the database? | Try this:
```
RESTORE DATABASE [MyDatabase]
FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\DB_Backup.bak'
WITH MOVE 'MyDatabase_Data' TO 'D:\My Path\Data\DB_Backup.mdf',
MOVE 'MyDatabase_Log' TO 'D:\My Path\Data\DB_log.ldf',
REPLACE
```
Also check [DB Restoration Problem](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/0b58cdcf-b27c-4b7e-a09a-9d4136c98bb0/db-restoration-problem?forum=sqldisasterrecovery) | You need to instruct SQL server to move the files to their correct location on your local machine - something like:
```
RESTORE DATABASE MyDatabase
FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\DB_Backup.bak'
WITH REPLACE,
MOVE 'DB_Data' TO 'C:\MyData\MyDatabase_Data.mdf',
MOVE 'DB_Log' TO 'C:\MyLog\MyDatabase_Log.ldf'
```
You need to substitute the correct logical file names in for DB\_Data and DB\_Log and the correct physical file names for the database on your local machine for the paths. | Database Restore | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a date column in my table and I need to generate a unique row value per date change. I have used `CONVERT(varchar(50),cast(product as DATE),112)` but I am not getting a unique value per date.
My table looks like below.
```
Product
02/22/2014
02/22/2014
02/22/2014
02/22/2014
02/23/2014
02/23/2014
02/23/2014
02/23/2014
02/23/2014
02/24/2014
02/24/2014
02/24/2014
```
The output should look like this:
```
201402221
201402222
201402223
201402224
201402231
201402232
201402233
201402234
201402235
201402241
201402242
201402243
```
Note: I need this because no two values should match. Please help me, guys. With the formula above I got about 80% of the way there, but I have no idea how to add the row numbers per date. | ```
select concat(replace(cast(product as date), '-', ''),
row_number() over(partition by product order by product)) as prod_id
from tbl
```
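A quick runnable cross-check of this approach (SQLite stand-in: `replace()` on an ISO date string plus `row_number()`, since SQL Server's date cast isn't available there):

```python
import sqlite3

# SQLite stand-in for the accepted query: dates stored as ISO strings,
# replace() strips the dashes, row_number() numbers rows within each date.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (product TEXT)")
conn.executemany(
    "INSERT INTO tbl VALUES (?)",
    [("2014-02-22",)] * 4 + [("2014-02-23",)] * 5 + [("2014-02-24",)] * 3,
)

rows = conn.execute("""
    SELECT replace(product, '-', '') ||
           row_number() OVER (PARTITION BY product ORDER BY product) AS prod_id
    FROM tbl
    ORDER BY prod_id
""").fetchall()
print([r[0] for r in rows])
```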
SQL Fiddle demo: <http://sqlfiddle.com/#!6/0a526/7/0> | You can use ROW\_NUMBER()
```
SELECT ROW_NUMBER() OVER( ORDER BY Product) AS rn, *
FROM your_table
```
The `ROW_NUMBER()` function generates a count starting at 1. In case you want to initialize the count from some arbitrary value, say 201402221, you can add a base value to the count column like below:
```
SELECT ROW_NUMBER() OVER( ORDER BY Product) + 201402220 AS rn, *
FROM your_table
``` | How to generate uniqueRowdate number for a datecolumn i? | [
"",
"sql",
"t-sql",
"sql-server-2012",
"row-number",
"date-conversion",
""
] |
I am pretty new to Microsoft **SQL Server** and I am not much into databases.
I have the following doubt:
On my database I have a table named **CPE**
This table has a field (a column) named **SourceId** that is the **FOREIGN KEY** of my table.
So I think that this field has to contain a value defined in some field of another table (because it binds together 2 tables).
What do I have to do to discover the other field and the other table it is linked to?
Tnx
Andrea | You are certainly looking for:
```
sp_help [table_name]
```
or try this query:
```
select t.name as ForeignKeytable, fk.constraint_column_id as ForeignKey_No, c.name as ForeignKeyColumn
from sys.foreign_key_columns as fk
inner join sys.tables as t on fk.parent_object_id = t.object_id
inner join sys.columns as c on fk.parent_object_id = c.object_id and fk.parent_column_id = c.column_id
where fk.referenced_object_id = (select object_id from sys.tables where name = 'name')
order by ForeignKeytable, ForeignKey_No
``` | Try this
**Method 1 :**
```
SELECT
object_name(parent_object_id) ParentTableName,
object_name(referenced_object_id) RefTableName,
name
FROM sys.foreign_keys
WHERE parent_object_id = object_id('Tablename')
```
Or:
**Method 2 :**
```
SELECT
OBJECT_NAME(f.parent_object_id) TableName,
COL_NAME(fc.parent_object_id,fc.parent_column_id) ColName
FROM
sys.foreign_keys AS f
INNER JOIN
sys.foreign_key_columns AS fc
ON f.OBJECT_ID = fc.constraint_object_id
INNER JOIN
sys.tables t
ON t.OBJECT_ID = fc.referenced_object_id
WHERE
OBJECT_NAME (f.referenced_object_id) = 'Tablename'
``` | How to discover to which table\field is linked the foreign key of a table? | [
"",
"sql",
"sql-server",
"rdbms",
"database",
""
] |