| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
```
SELECT
student_class.acad_yr,
(case when((students.gender = 'Male') then count(students.gender) end)) AS Male,
(case when((students.gender = 'Female') then count(students.gender) end)) AS Female
FROM
students
INNER JOIN student_class ON (students.st_id = student_class.st_id)
WHERE
student_class.acad_yr = '2013/2014' AND
left(student_class.class_id, 1) = '1'
GROUP BY
student_class.acad_yr
ORDER BY
students.surname,
students.othername
```
I am getting this error:
> You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'then count(students.gender) end)) AS Male,
> (case when((students.gender = 'Fem' at line 3 | Try this :-
```
SELECT
student_class.acad_yr,
case when students.gender='Male' then count(students.gender) end AS Male,
case when students.gender='Female' then count(students.gender) end AS Female
FROM
students
INNER JOIN student_class ON (students.st_id = student_class.st_id)
WHERE
student_class.acad_yr = '2013/2014' AND
left(student_class.class_id, 1) = '1'
GROUP BY
student_class.acad_yr
ORDER BY
students.surname,
students.othername
```
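As an aside: a bare `CASE` whose result isn't wrapped in an aggregate depends on which row the group happens to expose; the portable pattern is conditional aggregation with `SUM(CASE …)`. A minimal sqlite3 sketch (schema borrowed from the question, sample data invented):

```python
import sqlite3

# Hypothetical in-memory tables mirroring the question's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (st_id INTEGER, gender TEXT);
CREATE TABLE student_class (st_id INTEGER, acad_yr TEXT, class_id TEXT);
INSERT INTO students VALUES (1,'Male'),(2,'Female'),(3,'Male');
INSERT INTO student_class VALUES
  (1,'2013/2014','1A'),(2,'2013/2014','1B'),(3,'2013/2014','1A');
""")

# Conditional aggregation: SUM over a per-row CASE counts each gender
# inside a single grouped result row.
row = conn.execute("""
    SELECT sc.acad_yr,
           SUM(CASE WHEN s.gender = 'Male'   THEN 1 ELSE 0 END) AS Male,
           SUM(CASE WHEN s.gender = 'Female' THEN 1 ELSE 0 END) AS Female
    FROM students s
    JOIN student_class sc ON s.st_id = sc.st_id
    WHERE sc.acad_yr = '2013/2014' AND substr(sc.class_id, 1, 1) = '1'
    GROUP BY sc.acad_yr
""").fetchone()
print(row)  # ('2013/2014', 2, 1)
```

The same `SUM(CASE …)` form runs unchanged on MySQL.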
Hope it will help you. | ```
SELECT
student_class.acad_yr,
(SELECT COUNT(*) FROM STUDENTS WHERE GENDER = 'Male' AND acad_yr = student_class.acad_yr) AS Male,
(SELECT COUNT(*) FROM STUDENTS WHERE GENDER = 'Female' AND acad_yr = student_class.acad_yr) AS Female
FROM
students AS S
INNER JOIN student_class ON (S.st_id = student_class.st_id)
WHERE
student_class.acad_yr = '2013/2014' AND
left(student_class.class_id, 1) = '1'
GROUP BY
student_class.acad_yr
ORDER BY
S.surname,
S.othername
``` | Counting the number of Male & Females from tables | [
"",
"mysql",
"sql",
""
] |
I have four tables and would like to know how many locations and downloads a certain name has.
`names` and `locations` are connected via the `names_locations` table
Here are my tables:
Table "`names`"
```
ID | name
=========
1 | foo
2 | bar
3 | zoo
4 | luu
```
Table "`locations`"
```
ID | location
=============
1 | Hamburg
2 | New York
3 | Singapore
4 | Tokio
```
Table "`names_locations`"
```
ID | location_id | name_id
==========================
1 | 1 | 1
2 | 1 | 2
3 | 2 | 2
4 | 3 | 3
5 | 1 | 2
```
Table "`downloads`"
```
ID | name_id | timestamp
=========================
1 | 1 | 1394041682
2 | 4 | 1394041356
3 | 1 | 1394041573
4 | 3 | 1394041981
5 | 1 | 1394041683
```
Result should be:
```
ID | name | locations | downloads
=================================
1 | foo | 1 | 3
2 | bar | 3 | 0
3 | zoo | 1 | 1
4 | luu | 0 | 1
```
Here's my attempt (without the downloads column):
```
SELECT names.*,
Count(names_locations.location_id) AS location
FROM names
LEFT JOIN names_locations
ON names.ID = names_locations.name_id
GROUP BY names.ID
``` | I think this would work.
```
SELECT n.id,
n.name,
COUNT(DISTINCT l.id) AS locations,
COUNT(DISTINCT d.id) AS downloads
FROM names n LEFT JOIN names_locations nl
ON n.id = nl.name_id
LEFT JOIN downloads d
ON n.id = d.name_id
LEFT JOIN locations l
ON l.id = nl.location_id
GROUP BY n.id, n.name
``` | All of those seem to work. Here's another one.
```
SELECT
a.ID,
a.name,
COUNT(c.location) AS locations,
COUNT(d.timestamp) AS downloads
FROM names AS a
LEFT JOIN names_locations AS b on a.ID=b.name_id
LEFT JOIN locations AS c ON b.location_id=c.ID
LEFT JOIN downloads AS d ON a.ID=d.name_id
GROUP BY a.name
``` | Count values over m/n connected tables in SQL | [
"",
"mysql",
"sql",
""
] |
I am actually trying to make a script (in SQL Server 2008) to restore one database from one backup file. I made the following code and I am getting an error -
```
Msg 3101, Level 16, State 1, Line 3
Exclusive access could not be obtained because
the database is in use.
Msg 3013, Level 16, State 1, Line 3
RESTORE DATABASE is terminating abnormally.
```
How do I fix this problem?
```
IF DB_ID('AdventureWorksDW') IS NOT NULL
BEGIN
RESTORE DATABASE [AdventureWorksDW]
FILE = N'AdventureWorksDW_Data'
FROM
DISK = N'C:\Program Files\Microsoft SQL Server\
MSSQL10_50.SS2008\MSSQL\Backup\AdventureWorksDW.bak'
WITH FILE = 1,
MOVE N'AdventureWorksDW_Data'
TO N'C:\Program Files\Microsoft SQL Server\
MSSQL10_50.SS2008\MSSQL\DATA\AdventureWorksDW.mdf',
MOVE N'AdventureWorksDW_Log'
TO N'C:\Program Files\Microsoft SQL Server\
MSSQL10_50.SS2008\MSSQL\DATA\AdventureWorksDW_0.LDF',
NOUNLOAD, STATS = 10
END
``` | I'll assume that if you're restoring a db, you don't care about any existing transactions on that db. Right? If so, this should work for you:
```
USE master
GO
ALTER DATABASE AdventureWorksDW
SET SINGLE_USER
--This rolls back all uncommitted transactions in the db.
WITH ROLLBACK IMMEDIATE
GO
RESTORE DATABASE AdventureWorksDW
FROM ...
...
GO
```
Now, one additional item to be aware of. After you set the db into single-user mode, someone else may attempt to connect to the db. If they succeed, you won't be able to proceed with your restore. It's a race! My suggestion is to run all three statements at once. | 1. Set the path to restore the file.
2. Click "Options" on the left hand side.
3. Uncheck "Take tail-log backup before restoring"
4. Tick the check box - "Close existing connections to destination database".
[](https://i.stack.imgur.com/AlqhF.png)
5. Click OK. | SQL-Server: Error - Exclusive access could not be obtained because the database is in use | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have table similar to:
```
Create Table #test (Letter varchar(5), Products varchar(30))
Insert into #test (Letter, Products)
Values ('A','B,C,D,E,F'),('B','B,C,D,E,F'),('G','B,C,D,E,F'),('Z','B,C,D,E,F'),('E','B,C,D,E,F')
```
Is it possible to write a CASE Statement which will check if the list in the 'Products' column contain a letter from 'Letter' column?
Thanks | Here is the query
```
Select *, Case When (charindex(Letter, Products, 0)>0) Then 'Yes' Else 'No' End AS [Y/N] from #test
``` | I think I managed to do the trick:
```
CASE WHEN Products LIKE '%'+Letter+'%' THEN 'TRUE' ELSE 'FALSE' END
``` | case statement in t-sql | [
"",
"sql",
"case",
""
] |
I am trying to understand the query plan for a select statement within a PL/pgSQL function, but I keep getting errors. My question: how do I get the query plan?
Following is a simple case that reproduces the problem.
The table in question is named test\_table.
```
CREATE TABLE test_table
(
name character varying,
id integer
);
```
The function is as follows:
```
DROP FUNCTION IF EXISTS test_function_1(INTEGER);
CREATE OR REPLACE FUNCTION test_function_1(inId INTEGER)
RETURNS TABLE(outName varchar)
AS
$$
BEGIN
-- is there a way to get the explain analyze output?
explain analyze select t.name from test_table t where t.id = inId;
-- return query select t.name from test_table t where t.id = inId;
END;
$$ LANGUAGE plpgsql;
```
When I run
```
select * from test_function_1(10);
```
I get the error:
```
ERROR: query has no destination for result data
CONTEXT: PL/pgSQL function test_function_1(integer) line 3 at SQL statement
```
The function works fine if I uncomment the commented portion and comment out explain analyze. | Or you can use this simpler form with [`RETURN QUERY`](http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING):
```
CREATE OR REPLACE FUNCTION f_explain_analyze(int)
RETURNS SETOF text AS
$func$
BEGIN
RETURN QUERY
EXPLAIN ANALYZE SELECT * FROM foo WHERE v = $1;
END
$func$ LANGUAGE plpgsql;
```
Call:
```
SELECT * FROM f_explain_analyze(1);
```
Works for me in Postgres 9.3. | Any query has to have a known target in plpgsql (or you can throw the result away with a `PERFORM` statement). So you can do:
```
CREATE OR REPLACE FUNCTION fx(text)
RETURNS void AS $$
DECLARE t text;
BEGIN
FOR t IN EXPLAIN ANALYZE SELECT * FROM foo WHERE v = $1
LOOP
RAISE NOTICE '%', t;
END LOOP;
END;
$$ LANGUAGE plpgsql;
```
```
postgres=# SELECT fx('1');
NOTICE: Seq Scan on foo (cost=0.00..1.18 rows=1 width=3) (actual time=0.024..0.024 rows=0 loops=1)
NOTICE: Filter: ((v)::text = '1'::text)
NOTICE: Rows Removed by Filter: 14
NOTICE: Planning time: 0.103 ms
NOTICE: Total runtime: 0.065 ms
fx
────
(1 row)
```
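The plan-as-result-rows idea is not Postgres-specific; for comparison, sqlite exposes `EXPLAIN QUERY PLAN` as an ordinary result set that a driver can read like any other query (a sketch, schema invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (v TEXT)")
conn.execute("CREATE INDEX foo_v ON foo (v)")

# EXPLAIN QUERY PLAN returns the plan as ordinary rows, so calling code
# can fetch and inspect it like any other query result.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM foo WHERE v = ?", ("1",)
).fetchall()
for row in plan:
    print(row)
plan_text = " ".join(str(r) for r in plan)
```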
Another possibility to get the plan for embedded SQL is using a prepared statement:
```
postgres=# PREPARE xx(text) AS SELECT * FROM foo WHERE v = $1;
PREPARE
Time: 0.810 ms
postgres=# EXPLAIN ANALYZE EXECUTE xx('1');
QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────────
Seq Scan on foo (cost=0.00..1.18 rows=1 width=3) (actual time=0.030..0.030 rows=0 loops=1)
Filter: ((v)::text = '1'::text)
Rows Removed by Filter: 14
Total runtime: 0.083 ms
(4 rows)
``` | EXPLAIN ANALYZE within PL/pgSQL gives error: "query has no destination for result data" | [
"",
"sql",
"postgresql",
"plpgsql",
"explain",
""
] |
I have the following simple insert query in MySQL
```
insert into eventimages (eventid, imageid) values (x, y)
```
which I want to amend so that the insert only happens if it isn't creating a duplicate row.
I'm guessing that somewhere I'd need to include something like
```
if not exists (select * from eventimages where eventid = x and imageid = y)
```
Can anyone help with the syntax.
Cheers | The "right" way to prevent duplicates is by putting a unique constraint/index on the column pair:
```
create unique index eventimages_eventid_imageid on eventimages(eventid, imageid);
```
Then this condition will always be guaranteed to be true. A regular insert will fail, as will an `update` that creates a duplicate. Here are two ways to ignore such errors:
```
insert ignore into eventimages (eventid, imageid)
values (x, y);
```
This will ignore *all* errors in the insert. That might be overkill. You can also do:
```
insert into eventimages(eventid, imageid)
values (x, y)
on duplicate key update eventid = x;
```
The `update` statement is a no-op. The purpose is just to suppress a duplicate key error. | ```
insert into eventimages (eventid, imageid)
select x,y
from dual
where not exists (select 1
from eventimages
where eventid = x and imageid = y)
```
SQLFiddle example: <http://sqlfiddle.com/#!2/070563/1> | If Not Exists syntax MySQL | [
"",
"mysql",
"sql",
""
] |
This code is called from a timer tick event so that the DataGridView refreshes at frequent intervals.
From other answers I found on SO I would expect this code to reset the selected row to the row that was selected before this code runs, refreshing my dataset.
Only the variable CurrentSelectedRow is a public variable, all others are local.
```
sql = "select top 10 batch, TrussName, PieceName from FitaPieces order by SawTime desc "
myDataset = SelectFromDB(sql)
MyPreviouslyCutPieces.ClearAll()
Me.dgvPreviouslyCut.SelectionMode = DataGridViewSelectionMode.FullRowSelect
If Not IsNothing(Me.dgvPreviouslyCut.CurrentRow) Then
Debug.Print(Now.ToString & "...Current Row = " & Me.dgvPreviouslyCut.CurrentRow.ToString)
CurrentSelectedRow = Me.dgvPreviouslyCut.CurrentRow.Index
Else
Debug.Print(Now.ToString & "...Current Row = -1")
CurrentSelectedRow = -1
End If
If Not myDataset Is Nothing Then
If myDataset.Tables("CurData").Rows.Count > 0 Then
Me.dgvPreviouslyCut.DataSource = myDataset
Me.dgvPreviouslyCut.DataMember = "CurData"
End If
End If
If CurrentSelectedRow <> -1 Then
Me.dgvPreviouslyCut.Rows(0).Selected = False
Me.dgvPreviouslyCut.Rows(CurrentSelectedRow).Selected = True
End If
```
And it does...for the first tick of the timer event. On the second tick event after the user selects a row, it reverts back to the first row being selected. Even though the variable CurrentSelectedRow is a public variable, it's getting reset to zero after the first tick event. Then the selected row switches back to the first row in the grid. The first row is auto-selected when you refresh a grid's data source, but I'm setting its selected status to false after the refresh.
How is the DataGridView's selected row getting reset to the first row? | If you can't seem to get the DataGridView to work, try a different control until you do get the problem solved. I have personally used a listbox to achieve the same thing. | Grab the current index and store it in a variable called "go\_back\_to\_index"
```
Dim go_back_to_index as integer
```
when the user clicks on the row in the grid, just save the value so you can highlight it later:
```
go_back_to_index = current_data_grid.currentRow.value
```
then when the grid is updated, just run this piece of code:
```
If go_back_to_index < current_data_grid.Rows.Count Then
current_data_grid.Rows(go_back_to_index).Selected = True
current_data_grid.CurrentCell = current_data_grid.Item(1, go_back_to_index)
End If
```
Remember to make sure you set up your datagrid so the whole row is highlighted when a cell is clicked on. | How do I keep the same row selected in a datagridview after refreshing the dataset? | [
"",
"sql",
"vb.net",
"datagridview",
"timer",
"dataset",
""
] |
I'm trying to create a query using an "in" clause where I need everything in the IN clause to be true, i.e., only records from myTable that have both nbrs 10 and 1. Currently I'm getting records for either 1 OR 10. I've been scouring SQL sites and just can't seem to figure this out. The list could be longer, which is why I'm using an IN clause. Any ideas?
```
SELECT *
FROM myTable INNER JOIN
myTable2
WHERE myTable.ID = myTable2.ID AND myTable2.nbr in ('10','1')
``` | ```
SELECT myTable.ID
FROM myTable
INNER JOIN myTable2
ON myTable.ID = myTable2.ID
AND myTable2.nbr IN ('10','1')
GROUP BY myTable.ID
HAVING COUNT(DISTINCT myTable2.nbr) = 2 -- replace 2 with the number of elements in `IN` list
``` | I think this might give you what you want. If not, you will need to post some sample data.
```
SELECT *
FROM myTable mt
INNER JOIN myTable2 mt21 on mt.id = mt21.id and mt21.nbr = '1'
INNER JOIN myTable2 mt22 on mt.id = mt22.id and mt22.nbr = '10'
```
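Either way, the group-and-count approach (require N distinct matches per ID) is easy to check in miniature; a hedged sqlite3 sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable  (ID INTEGER);
CREATE TABLE myTable2 (ID INTEGER, nbr TEXT);
INSERT INTO myTable  VALUES (1),(2),(3);
-- Only ID 1 has both '1' and '10'.
INSERT INTO myTable2 VALUES (1,'1'),(1,'10'),(2,'1'),(3,'10'),(3,'7');
""")

# Keep only IDs whose distinct matches cover the whole IN list.
ids = [r[0] for r in conn.execute("""
    SELECT t.ID
    FROM myTable t
    JOIN myTable2 t2 ON t.ID = t2.ID AND t2.nbr IN ('10','1')
    GROUP BY t.ID
    HAVING COUNT(DISTINCT t2.nbr) = 2
""")]
print(ids)  # [1]
```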
EDIT: Although the accepted answer scales more easily, this solution is probably going to return faster results. | SQL query using IN clause to select both conditions, not either or | [
"",
"sql",
""
] |
Is there a mysql command to check if a column value is in a set?
Something along the lines of the python code:
```
column_value in ['a', 'b', 'cde']
```
I know you can simulate that with a whole bunch of ORs, but I thought perhaps if such a statement exists MySQL would be able to optimize the checks more heavily. | Yes. The SQL standard has the `IN (...)` syntax.
```
where column_value in ('a', 'b', 'cde')
```
You were remarkably close: Just change square brackets for round ones.
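To see it end to end, here is a minimal sqlite3 sketch of the same `IN` query, with each member bound as a parameter (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column_value TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("a",), ("b",), ("cde",), ("zzz",)])

# Round brackets, not square ones; each member can be a bound parameter.
rows = conn.execute(
    "SELECT column_value FROM t WHERE column_value IN (?, ?, ?)",
    ("a", "b", "cde"),
).fetchall()
print(sorted(rows))  # [('a',), ('b',), ('cde',)]
```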
Note: This syntax works for constant values, but you can use it with derived values like this:
```
where column_value in (select some_column from some_table where some_condition)
``` | Yes, there is.
```
WHERE column_name IN ('a', 'b', 'cde')
``` | Is there a Mysql "in" for set statement | [
"",
"mysql",
"sql",
""
] |
I'm trying to list products that each customer bought, but if they bought the same item on different occasions, I want it to exclude it. This is what I have so far:
```
Select c.field_id_33 AS email, o.order_id, Group_concat(o.entry_id) AS Products,group_concat(t.title),group_concat(t.url_title) from finn_cartthrob_order_items o
LEFT JOIN finn_channel_data c
ON c.entry_id=o.order_id
LEFT JOIN finn_channel_titles t
ON o.entry_id=t.entry_id
GROUP BY email
```
This is producing:

Basically I only need a product listed one time if they've purchased it, no matter how many times they've purchased it. How would I do this? | You can use `DISTINCT` in group\_concat function,using [Group\_concat](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) baware of that fact it has a default limit of 1024 characters to group them but it can be increased
```
Select c.field_id_33 AS email, o.order_id,
Group_concat(DISTINCT o.entry_id) AS Products,
group_concat(DISTINCT t.title),
group_concat(DISTINCT t.url_title)
from finn_cartthrob_order_items o
LEFT JOIN finn_channel_data c
ON c.entry_id=o.order_id
LEFT JOIN finn_channel_titles t
ON o.entry_id=t.entry_id
GROUP BY email
```
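sqlite implements the same aggregate, so the `DISTINCT`-inside-`group_concat` behavior can be sketched quickly (table and data invented; MySQL's separator and length options differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (email TEXT, product TEXT);
INSERT INTO orders VALUES
  ('a@x.com','widget'),('a@x.com','widget'),('a@x.com','gadget'),
  ('b@x.com','widget');
""")

# DISTINCT inside the aggregate collapses repeat purchases per group.
rows = conn.execute("""
    SELECT email, group_concat(DISTINCT product)
    FROM orders
    GROUP BY email
    ORDER BY email
""").fetchall()
print(rows)
```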
> From the docs The result is truncated to the maximum length that is
> given by the group\_concat\_max\_len system variable, which has a default
> value of 1024. The value can be set higher, although the effective
> maximum length of the return value is constrained by the value of
> max\_allowed\_packet. The syntax to change the value of
> group\_concat\_max\_len at runtime is as follows, where val is an
> unsigned integer:
>
> SET [GLOBAL | SESSION] group\_concat\_max\_len = val; | Just as you can use `distinct` after the `select` keyword, you can also use it inside aggregate functions (including `group_concat`), to aggregate each distinct value only once:
```
Select
c.field_id_33 AS email, o.order_id,
Group_concat(DISTINCT o.entry_id) AS Products,
group_concat(DISTINCT t.title),
group_concat(DISTINCT t.url_title)
from finn_cartthrob_order_items o
LEFT JOIN finn_channel_data c
ON c.entry_id=o.order_id
LEFT JOIN finn_channel_titles t
ON o.entry_id=t.entry_id
GROUP BY email
``` | Select distinct on Left Join for only one column | [
"",
"mysql",
"sql",
""
] |
I would like to insert records in SQL such that if the combination of entries exists then the script should not proceed with the insert statement. Here is what I have so far:
```
insert into TABLE_TESTING(R_COMPONENT_ID,OPRID)
select 1942,'Test'
from TABLE_TESTING
where not exists
(select *
from TABLE_TESTING
where R_COMPONENT_ID='1942'
and oprid ='Test');
```
I have a table name as : TABLE\_TESTING
It has two columns as : R\_COMPONENT\_ID and OPRID
If the combination of records '1942' and 'Test' already exists in the DB then my script should not perform the insert operation, and if it doesn't exist then it should insert the record as a combination of R\_COMPONENT\_ID and OPRID.
Please suggest.
Using the query specified above i am getting multiple insert been added in the DB. Please suggest some solution. | Here is an skelton to use MERGE. I ran it and it works fine. You may tweak it further per your needs. Hope this helps!
```
DECLARE
BEGIN
FOR CURTESTING IN (SELECT R_COMPONENT_ID, OPRID FROM TABLE_TESTING)
LOOP
MERGE INTO TABLE_TESTING
USING DUAL
ON (R_COMPONENT_ID = '1942' AND OPRID = 'Test')
WHEN NOT MATCHED
THEN
INSERT (PK, R_COMPONENT_ID, OPRID)
VALUES (TEST_TABLE.NEXTVAL, '1942', 'Test');
END LOOP;
COMMIT;
END;
``` | As you don't want to update existing rows, your approach is essentially correct. The only change you have to do, is to replace the `from table_testing` in the source of the insert statement:
```
insert into TABLE_TESTING (R_COMPONENT_ID,OPRID)
select 1942,'Test'
from dual -- <<< this is the change
where not exists
(select *
from TABLE_TESTING
where R_COMPONENT_ID = 1942
and oprid = 'Test');
```
When you use `from table_testing` this means that the insert tries to insert one row *for each row* in `TABLE_TESTING`. But you only want to insert a *single* row. Selecting from `DUAL` will achieve exactly that.
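To illustrate the single-row semantics, here is a small sqlite3 sketch of the same pattern; sqlite needs no `DUAL`, so the `FROM` clause is simply omitted (assumption: the target table and values mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TABLE_TESTING (R_COMPONENT_ID INTEGER, OPRID TEXT)")

sql = """
    INSERT INTO TABLE_TESTING (R_COMPONENT_ID, OPRID)
    SELECT 1942, 'Test'
    WHERE NOT EXISTS (SELECT * FROM TABLE_TESTING
                      WHERE R_COMPONENT_ID = 1942 AND OPRID = 'Test')
"""
conn.execute(sql)   # first run inserts the row
conn.execute(sql)   # second run finds the row and inserts nothing
count = conn.execute("SELECT COUNT(*) FROM TABLE_TESTING").fetchone()[0]
print(count)  # 1
```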
As others have pointed out, you can also use the `MERGE` statement for this which might be a bit better if you need to insert more than just a single row.
```
merge into table_testing target
using
(
select 1942 as R_COMPONENT_ID, 'Test' as OPRID from dual
union all
select 1943, 'Test2' from dual
) src
ON (src.r_component_id = target.r_component_id and src.oprid = target.oprid)
when not matched
then insert (r_component_id, oprid)
values (src.r_component_id, src.oprid);
``` | Insert records in SQL if records does not exists | [
"",
"sql",
"oracle",
"insert",
"duplicates",
""
] |
I'm trying to select values from my SQL table (phpMyAdmin) that contain only two digits.
So:
> IDNumber = '12' //should be selected
>
> IDNumber = '34' //should be selected
>
> IDNumber = '123' //should NOT be selected
>
> IDNumber = '456' //should NOT be selected
This is what I have so far, but this returns nothing / zero
```
SELECT * FROM `TableName` WHERE IDNumber LIKE '[0-9][0-9]'
```
Any ideas? | Try this
SQL Server:
```
SELECT * FROM `TableName` WHERE LEN(IDNumber) = 2
```
MySQL:
```
SELECT * FROM `TableName` WHERE LENGTH(IDNumber) = 2
``` | Try this; it will work in any database:
```
SELECT * FROM `TableName` WHERE IDNumber>9 and IDNumber<100
``` | Sql select two digits values only | [
"",
"sql",
"select",
"sql-like",
""
] |
I couldn't find how to use the `IN` operator with `SqlParameter` on a `varchar` column. Please check out the `@Mailbox` parameter below:
```
using (SqlCommand command = new SqlCommand())
{
string sql =
@"select
ei.ID as InteractionID,
eo.Sentdate as MailRepliedDate
from
bla bla
where
Mailbox IN (@Mailbox)";
command.CommandText = sql;
command.Connection = conn;
command.CommandType = CommandType.Text;
command.Parameters.Add(new SqlParameter("@Mailbox", mailbox));
SqlDataReader reader = command.ExecuteReader();
}
```
I tried these strings and the query doesn't work.
```
string mailbox = "'abc@abc.com','def@def.com'"
string mailbox = "abc@abc.com,def@def.com"
```
I have also tried changing the query to `Mailbox IN('@Mailbox')`
and `string mailbox = "abc@abc.com,def@def.com"`
Any Suggestions? Thanks | That doesn't work this way.
You can parameterize each value in the list in an `IN` clause:
```
string sql =
@"select
ei.ID as InteractionID,
eo.Sentdate as MailRepliedDate
from
bla bla
where
Mailbox IN ({0})";
string mailbox = "abc@abc.com,def@def.com";
string[] mails = mailbox.Split(',');
string[] paramNames = mails.Select((s, i) => "@tag" + i.ToString()).ToArray();
string inClause = string.Join(",", paramNames);
using (var conn = new SqlConnection("ConnectionString"))
using (SqlCommand command = new SqlCommand(sql, conn))
{
for (int i = 0; i < paramNames.Length; i++)
{
command.Parameters.AddWithValue(paramNames[i], mails[i]);
}
conn.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
// ...
}
}
```
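The same build-one-placeholder-per-value technique translates directly to other drivers; a minimal Python/sqlite3 sketch (table name and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interactions (Mailbox TEXT)")
conn.executemany("INSERT INTO interactions VALUES (?)",
                 [("abc@abc.com",), ("def@def.com",), ("other@x.com",)])

mailbox = "abc@abc.com,def@def.com"
mails = mailbox.split(",")
# Build one '?' placeholder per value, then bind the values positionally.
placeholders = ",".join("?" * len(mails))
sql = f"SELECT Mailbox FROM interactions WHERE Mailbox IN ({placeholders})"
rows = conn.execute(sql, mails).fetchall()
print(sorted(rows))
```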
Adapted from: <https://stackoverflow.com/a/337792/284240> | Since you are using MS SQL server, you have 4 choices, depending on the version. Listed in order of preference.
**1. Pass a composite value, and call a custom CLR or Table Valued Function to break it into a set. [see here](https://stackoverflow.com/a/697598/659190).**
You need to write the custom function and call it in the query. You also need to load that assembly into your database to make the CLR accessible as TSQL.
If you read through all of [Sommarskog's work](http://www.sommarskog.se/arrays-in-sql.html) linked above, and I suggest you do, you see that if performance and concurrency are really important, you'll probably want to implement a CLR function to do this task. For details of [one possible implementation](http://www.sommarskog.se/arraylist-2008/CLR_adam.cs), see below.
**2. Use a table valued parameter. [see here](https://stackoverflow.com/q/5595353/659190).**
You'll need a recent version of MSSQL server.
**3. Pass multiple parameters.**
You'll have to dynamically generate the right number of parameters in the statement. [Tim Schmelter's answer](https://stackoverflow.com/a/22221027/659190) shows a way to do this.
**4. Generate dynamic SQL on the client.** (*I don't suggest you actually do this.*)
You have to be careful to avoid injection attacks, and there is less chance to benefit from query-plan reuse.
[don't do it like this](https://stackoverflow.com/a/22220964/659190).
---
One possible CLR implementation.
```
using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
public class CLR_adam
{
[Microsoft.SqlServer.Server.SqlFunction(
FillRowMethodName = "FillRow_char")
]
public static IEnumerator CLR_charlist_adam(
[SqlFacet(MaxSize = -1)]
SqlChars Input,
[SqlFacet(MaxSize = 255)]
SqlChars Delimiter
)
{
return (
(Input.IsNull || Delimiter.IsNull) ?
new SplitStringMulti(new char[0], new char[0]) :
new SplitStringMulti(Input.Value, Delimiter.Value));
}
public static void FillRow_char(object obj, out SqlString item)
{
item = new SqlString((string)obj);
}
[Microsoft.SqlServer.Server.SqlFunction(
FillRowMethodName = "FillRow_int")
]
public static IEnumerator CLR_intlist_adam(
[SqlFacet(MaxSize = -1)]
SqlChars Input,
[SqlFacet(MaxSize = 255)]
SqlChars Delimiter
)
{
return (
(Input.IsNull || Delimiter.IsNull) ?
new SplitStringMulti(new char[0], new char[0]) :
new SplitStringMulti(Input.Value, Delimiter.Value));
}
public static void FillRow_int(object obj, out int item)
{
item = System.Convert.ToInt32((string) obj);
}
public class SplitStringMulti : IEnumerator
{
public SplitStringMulti(char[] TheString, char[] Delimiter)
{
theString = TheString;
stringLen = TheString.Length;
delimiter = Delimiter;
delimiterLen = (byte)(Delimiter.Length);
isSingleCharDelim = (delimiterLen == 1);
lastPos = 0;
nextPos = delimiterLen * -1;
}
#region IEnumerator Members
public object Current
{
get
{
return new string(
theString,
lastPos,
nextPos - lastPos).Trim();
}
}
public bool MoveNext()
{
if (nextPos >= stringLen)
return false;
else
{
lastPos = nextPos + delimiterLen;
for (int i = lastPos; i < stringLen; i++)
{
bool matches = true;
//Optimize for single-character delimiters
if (isSingleCharDelim)
{
if (theString[i] != delimiter[0])
matches = false;
}
else
{
for (byte j = 0; j < delimiterLen; j++)
{
if (((i + j) >= stringLen) ||
(theString[i + j] != delimiter[j]))
{
matches = false;
break;
}
}
}
if (matches)
{
nextPos = i;
//Deal with consecutive delimiters
if ((nextPos - lastPos) > 0)
return true;
else
{
i += (delimiterLen-1);
lastPos += delimiterLen;
}
}
}
lastPos = nextPos + delimiterLen;
nextPos = stringLen;
if ((nextPos - lastPos) > 0)
return true;
else
return false;
}
}
public void Reset()
{
lastPos = 0;
nextPos = delimiterLen * -1;
}
#endregion
private int lastPos;
private int nextPos;
private readonly char[] theString;
private readonly char[] delimiter;
private readonly int stringLen;
private readonly byte delimiterLen;
private readonly bool isSingleCharDelim;
}
};
``` | Sql "IN" operator with using SqlParameter on varchar field | [
"",
"sql",
"sql-server",
"sql-server-2008",
"c#-4.0",
"sql-in",
""
] |
I'm having trouble creating a particular SQL query, in one query only (I can't go to the database twice; for design-architecture reasons, trust me on this). Here are the statements:
I have four tables:
**Questions,
Locations,
Countries,
Regions**
These are some of their fields:
```
Questions
id
description
Locations
id
type (could be 'country' or 'region')
question_id
country_or_region_id (an id, that holds either the country or the region id)
Countries
id
name
Regions
id
name
```
What I want to get is this:
**Example:**
```
1 What is your name? Venezuela, Colombia South America
```
**Format:**
```
question id, question description, countries, regions
```
---
*Edit:* For those who ask, I'm using MySQL
*Edit:* For those who say it is a bad design: I didn't create it, and I can't change the design, I just have to do it, as it is now. | If this is MySQL:
```
SELECT q.ID,
q.Description,
GROUP_CONCAT(DISTINCT c.name) AS countries,
GROUP_CONCAT(DISTINCT r.name) AS regions
FROM Questions q
INNER JOIN Locations l
ON l.question_id = q.id
LEFT JOIN Countries c
ON c.id = country_or_region_id
AND l.type = 'country'
LEFT JOIN Regions R
ON R.id = country_or_region_id
AND l.type = 'region'
GROUP BY q.ID, q.Description;
```
If this is SQL Server:
```
SELECT q.ID,
q.Description,
countries = STUFF(( SELECT ', ' + c.name
FROM Locations l
INNER JOIN Countries c
ON c.id = country_or_region_id
AND l.type = 'country'
WHERE l.question_id = q.id
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)'), 1, 2, ''),
regions = STUFF(( SELECT ', ' + r.name
FROM Locations l
INNER JOIN Regions r
ON r.id = country_or_region_id
AND l.type = 'region'
WHERE l.question_id = q.id
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
FROM Questions q;
``` | Remove the Locations table and add question\_id to the Regions and Countries table
```
select Q.id, Q.description, C.name as country, R.name as region
from Questions as Q join Countries as C join Regions as R
where Q.id = L.question_id and Q.id = C.question_id and Q.id = R.question_id;
``` | How to make this SQL query, with only one SELECT | [
"",
"mysql",
"sql",
"union",
""
] |
Hello, I'm having an ambiguous column name (Payment.BidID) in my stored procedure for payment. Can someone help me resolve this issue, please?
```
SET NOCOUNT ON;
SELECT ROW_NUMBER() OVER
(
ORDER BY [PaymentID] ASC
)AS RowNumber
,[PaymentID]
,[Name]
,[WinningPrice]
,[PaymentDate]
,[Payment.BidID]
INTO #Results
FROM Item INNER JOIN
Auction ON Item.ItemID = Auction.ItemID INNER JOIN
BID ON Auction.AuctionID = BID.AuctionID INNER JOIN
Payment ON BID.BidID = Payment.BidID
Where (BID.Status = 'Paid') AND (BID.BuyerID = @buyer)
SELECT @RecordCount = COUNT(*)
FROM #Results
SELECT * FROM #Results
WHERE RowNumber BETWEEN(@PageIndex -1) * @PageSize + 1 AND(((@PageIndex -1) * @PageSize + 1) + @PageSize) - 1
DROP TABLE #Results
End
``` | Good practice is using aliases like:
```
SET NOCOUNT ON;
SELECT ROW_NUMBER() OVER
(
    ORDER BY i.[PaymentID] ASC -- which table does this belong to? use the correct alias
)AS RowNumber
,i.[PaymentID]
,i.[Name]
,i.[WinningPrice]
,i.[PaymentDate]
,p.[BidID]
INTO #Results
FROM Item i
INNER JOIN Auction a
ON i.ItemID = a.ItemID
INNER JOIN BID b
ON a.AuctionID = b.AuctionID
INNER JOIN Payment p
ON b.BidID = p.BidID
Where (b.Status = 'Paid')
AND (b.BuyerID = @buyer)
SELECT @RecordCount = COUNT(*)
FROM #Results
``` | There is a column name you use in the query that is available in multiple tables.
Without the table structure we can't be certain which one it is, but it is probably one of these columns, which lack a table prefix in your query:
```
,[PaymentID]
,[Name]
,[WinningPrice]
,[PaymentDate]
``` | ambigious column name in sql server | [
"",
"sql",
""
] |
I recently had to write a query to filter some specific data that looked like the following:
Let's suppose that I have 3 distinct values that I want to search for in 3 different fields of one of the tables in my database; they must be searched in all possible orders without repetition.
Here is an example (to make it easy to understand, I will use named queries notation to show where the values must be placed):
val1 = "a", val2 = "b", val3 = "c"
This is the query I've generated:
```
SELECT * FROM table WHERE
(fieldA = :val1 AND fieldB = :val2 AND fieldC = :val3) OR
(fieldA = :val1 AND fieldB = :val3 AND fieldC = :val2) OR
(fieldA = :val2 AND fieldB = :val1 AND fieldC = :val3) OR
(fieldA = :val2 AND fieldB = :val3 AND fieldC = :val1) OR
(fieldA = :val3 AND fieldB = :val1 AND fieldC = :val2) OR
(fieldA = :val3 AND fieldB = :val2 AND fieldC = :val1)
```
What I had to do is generate a query that simulates a permutation without repetition. Is there a better way to do this type of query?
This is OK for 3x3 but if I need to do the same with something bigger like 9x9 then generating the query will be a huge mess.
I'm using MariaDB, but I'm okay accepting answers that can run on PostgreSQL.
(I want to learn if there is a smart way of writing this type of queries without "brute force") | There isn't a *much* better way, but you can use `in`:
```
SELECT *
FROM table
WHERE :val1 in (fieldA, fieldB, fieldC) and
:val2 in (fieldA, fieldB, fieldC) and
:val3 in (fieldA, fieldB, fieldC)
```
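Given that the three search values are distinct, the triple-membership rewrite selects exactly the permutation rows; a small sqlite3 sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (fieldA TEXT, fieldB TEXT, fieldC TEXT);
INSERT INTO t VALUES
  ('a','b','c'),   -- a permutation of the search set: should match
  ('c','a','b'),   -- another permutation: should match
  ('a','b','b');   -- not a permutation: should not match
""")

# Each search value must appear in one of the three fields; with three
# distinct values and three fields, that forces an exact permutation.
rows = conn.execute("""
    SELECT * FROM t
    WHERE ? IN (fieldA, fieldB, fieldC)
      AND ? IN (fieldA, fieldB, fieldC)
      AND ? IN (fieldA, fieldB, fieldC)
""", ("a", "b", "c")).fetchall()
print(rows)
```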
It is shorter at least. And, this is standard SQL, so it should work in any database. | > ... I'm okay accepting answers that can run on PostgreSQL. (I want to
> learn if there is a smart way of writing this type of queries without "brute force")
There is a "smart way" in Postgres, with sorted arrays.
### Integer
For `integer` values use [`sort_asc()`](http://www.postgresql.org/docs/current/interactive/intarray.html#INTARRAY-FUNC-TABLE) of the additional module [`intarray`](https://stackoverflow.com/questions/10867577/in-postgresql-how-do-you-create-an-index-by-each-element-of-an-array/10868144#10868144).
```
SELECT * FROM tbl
WHERE sort_asc(ARRAY[id1, id2, id3]) = '{1,2,3}' -- compare sorted arrays
```
Works for *any* number of elements.
### Other types
As clarified in a comment, we are dealing with **strings**.
Create a variant of `sort_asc()` that works for ***any type*** that can be sorted:
```
CREATE OR REPLACE FUNCTION sort_asc(anyarray)
RETURNS anyarray LANGUAGE sql IMMUTABLE AS
'SELECT array_agg(x ORDER BY x COLLATE "C") FROM unnest($1) AS x';
```
Not as fast as the sibling from `intarray`, but fast enough.
* Make it [`IMMUTABLE`](https://stackoverflow.com/questions/11005036/does-postgresql-support-accent-insensitive-collations/11007216#11007216) to allow its use in indexes.
* Use [`COLLATE "C"`](https://stackoverflow.com/questions/7778714/postgresql-utf-8-binary-collation/7778904#7778904) to ignore sorting rules of the current locale: faster, immutable.
* To make the function work for *any* type that can be sorted, use a [**polymorphic**](http://www.postgresql.org/docs/current/interactive/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC) parameter.
Query is the same:
```
SELECT * FROM tbl
WHERE sort_asc(ARRAY[val1, val2, val3]) = '{bar,baz,foo}';
```
Or, if you are not sure about the sort order in "C" locale ...
```
SELECT * FROM tbl
WHERE sort_asc(ARRAY[val1, val2, val3]) = sort_asc('{bar,baz,foo}'::text[]);
```
### Index
For best read performance create a [functional index](http://www.postgresql.org/docs/current/interactive/indexes-expressional.html) (at some cost to write performance):
```
CREATE INDEX tbl_arr_idx ON tbl (sort_asc(ARRAY[val1, val2, val3]));
```
[**SQL Fiddle demonstrating all.**](http://sqlfiddle.com/#!15/c583e/1) | SQL query to match a list of values with a list of fields in any order without repetition | [
"",
"mysql",
"sql",
"postgresql",
"permutation",
"mariadb",
""
] |
I have a table `A` with intervals `(COL1, COL2)`:
```
CREATE TABLE A (
COL1 NUMBER(15) NOT NULL,
COL2 NUMBER(15) NOT NULL,
VAL1 ...,
VAL2 ...
);
ALTER TABLE A ADD CONSTRAINT COL1_BEFORE_COL2 CHECK (COL1 <= COL2);
```
The intervals are guaranteed to be "exclusive", i.e. they will never overlap. In other words, this query yields no rows:
```
SELECT *
FROM (
SELECT
LEAD(COL1, 1) OVER (ORDER BY COL1) NEXT,
COL2
FROM A
)
WHERE COL2 >= NEXT;
```
There is currently an index on `(COL1, COL2)`. Now, my query is the following:
```
SELECT /*+FIRST_ROWS(1)*/ *
FROM A
WHERE :some_value BETWEEN COL1 AND COL2
AND ROWNUM = 1
```
This performs well (less than a ms for millions of records in `A`) for low values of `:some_value`, because they're very selective on the index. But it performs quite badly (almost a second) for high values of `:some_value` because of a lower selectivity of the access predicate.
The execution plan seems good to me. As the existing index already fully covers the predicate, I get the expected `INDEX RANGE SCAN`:
```
------------------------------------------------------
| Id | Operation | Name | E-Rows |
------------------------------------------------------
| 0 | SELECT STATEMENT | | |
|* 1 | COUNT STOPKEY | | |
| 2 | TABLE ACCESS BY INDEX ROWID| A | 1 |
|* 3 | INDEX RANGE SCAN | A_PK | |
------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM=1)
   3 - access("COL2">=:some_value AND "COL1"<=:some_value)
       filter("COL2">=:some_value)
```
In `3`, it becomes obvious that the access predicate is selective only for low values of `:some_value` whereas for higher values, the filter operation "kicks in" on the index.
Is there any way to generally improve this query to be fast regardless of the value of `:some_value`? I can completely redesign the table if further normalisation is needed. | Your attempt is good, but misses a few crucial issues.
Let's start slowly. I'm assuming an index on `COL1` and I actually don't mind if `COL2` is included there as well.
Due to the constraints you have on your data (especially non-overlapping), you actually just want the last row where `COL1` is `<=` some value, if you order by `COL1`.
This is a [classic Top-N query](http://use-the-index-luke.com/sql/partial-results/top-n-queries?dbtype=oracle):
```
select *
FROM ( select *
from A
where col1 <= :some_value
order by col1 desc
)
where rownum <= 1;
```
Please note that you **must** use `ORDER BY` to get a definite sort order. Because the `ROWNUM` filter is applied before the `ORDER BY`, you must also wrap the top-n filter in an outer query.
That's almost done, the only reason why we actually need to filter on `COL2` too is to filter out records that don't fall into the range at all. E.g. if some\_value is 5 and you are having this data:
```
COL1 | COL2
1 | 2
3 | 4 <-- you get this row
6 | 10
```
This row would be correct as result, if `COL2` would be 5, but unfortunately, in this case the correct result of your query is [empty set]. That's the only reason we need to filter for `COL2` like this:
```
select *
FROM ( select *
FROM ( select *
from A
where col1 <= :some_value
order by col1 desc
)
where rownum <= 1
)
WHERE col2 >= :some_value;
```
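Outside Oracle, the same two-step shape works with `LIMIT` in place of the `ROWNUM` filter. A minimal sketch in SQLite, reusing the gap example above (table name and values are the illustrative ones, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (col1 INTEGER, col2 INTEGER)")
conn.executemany("INSERT INTO A VALUES (?, ?)", [(1, 2), (3, 4), (6, 10)])

def lookup(some_value):
    # Top-1 on col1 descending, then verify the candidate interval
    # actually covers some_value.
    return conn.execute(
        "SELECT * FROM "
        " (SELECT * FROM A WHERE col1 <= ? ORDER BY col1 DESC LIMIT 1) "
        "WHERE col2 >= ?",
        (some_value, some_value),
    ).fetchall()

print(lookup(5))  # [] -> 5 falls in the gap between (3,4) and (6,10)
print(lookup(7))  # [(6, 10)]
```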
Your approach had several problems:
* missing `ORDER BY` - dangerous in connection with `rownum` filter!
* applying the Top-N clause (`rownum` filter) too early. What if there is **no** result? Database reads index until the end, the `rownum` (STOPKEY) never kicks in.
* An optimizer glitch. With the `between` predicate, my 11g installation doesn't get the idea to read the index in descending order, so it was actually reading it from the beginning (0) upwards until it found a matching `COL2` value, or until `COL1` ran out of the range.
```
COL1 | COL2
1 | 2 ^
3 | 4 | (2) go up until first match.
+----- your intention was to start here
6 | 10
```
What was actually happening was:
```
COL1 | COL2
1 | 2 +----- start at the beginning of the index
3 | 4 | Go down until first match.
V
6 | 10
```
Look at the execution plan of my query:
```
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 26 | 4 (0)| 00:00:01 |
|* 1 | VIEW | | 1 | 26 | 4 (0)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 2 | 52 | 4 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID | A | 50000 | 585K| 4 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN DESCENDING| SIMPLE | 2 | | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
```
Note the **`INDEX RANGE SCAN DESCENDING`**.
Finally, why didn't I include `COL2` in the index? It's a single-row top-n query. You can save at most a single table access (irrespective of what the Rows estimation above says!). If you expect to find a row in most cases, you'll probably need to go to the table anyway for the other columns, so you would not save ANYTHING, just consume space. Including `COL2` will only improve performance if your query doesn't return anything at all!
Related:
* [How to use index efficienty in mysql query](https://stackoverflow.com/questions/3778319/how-to-use-index-efficienty-in-mysql-query)
I answered a very similar question about this years ago. Same solution.
* [Use The Index, Lukas!](http://use-the-index-luke.com) | I think, because the ranges do not intersect, you can define col1 as primary key and execute the query like this:
```
SELECT *
FROM a
JOIN
(SELECT MAX (col1) AS col1
FROM a
WHERE col1 <= :somevalue) b
ON a.col1 = b.col1;
```
If there are gaps between the ranges you will have to add:
```
Where col2 >= :somevalue
```
as last line.
Execution Plan:
```
SELECT STATEMENT
NESTED LOOPS
VIEW
SORT AGGREGATE
FIRST ROW
INDEX RANGE SCAN (MIN/MAX) PKU1
TABLE ACCESS BY INDEX A
INDEX UNIQUE SCAN PKU1
``` | How to tune a range / interval query in Oracle? | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I've got the request in high-load application:
```
SELECT posts.id as post_id, posts.uid, text, date, like_count,
dislike_count, comments_count, post_likes.liked, image, aspect,
u.name as user_name, u.avatar, u.avatar_date, u.driver, u.number as user_number,
u.city_id as user_city_id, u.birthday as user_birthday, u.show_birthday, u.auto_model, u.auto_color,
u.verified , u.gps_x as user_gps_x, u.gps_y as user_gps_y, u.map_activity, u.show_on_map
FROM posts
LEFT OUTER JOIN post_likes ON post_likes.post_id = posts.id and post_likes.uid = '478831'
LEFT OUTER JOIN users u ON posts.uid = u.id
WHERE posts.info = 0 AND
( posts.uid = 478831 OR EXISTS(SELECT friend_id
FROM friends
WHERE user_id = 478831
AND posts.uid = friend_id
AND confirmed = 2)
)
order by posts.id desc limit 0, 20;
```
Execution time is between 6 and 7 seconds - absolutely bad.
```
EXPLAIN EXTENDED output:
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: posts
type: ref
possible_keys: uid,info
key: info
key_len: 1
ref: const
rows: 471277
filtered: 100.00
Extra: Using where
*************************** 2. row ***************************
id: 1
select_type: PRIMARY
table: post_likes
type: ref
possible_keys: post_id
key: post_id
key_len: 8
ref: anumbers.posts.id,const
rows: 1
filtered: 100.00
Extra:
*************************** 3. row ***************************
id: 1
select_type: PRIMARY
table: u
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: anumbers.posts.uid
rows: 1
filtered: 100.00
Extra:
*************************** 4. row ***************************
id: 2
select_type: DEPENDENT SUBQUERY
table: friends
type: eq_ref
possible_keys: user_id_2,user_id,friend_id,confirmed
key: user_id_2
key_len: 9
ref: const,anumbers.posts.uid,const
rows: 1
filtered: 100.00
Extra: Using index
4 rows in set, 2 warnings (0.00 sec)
```
```
mysql> `show index from posts;`
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| posts | 0 | PRIMARY | 1 | id | A | 1351269 | NULL | NULL | | BTREE | | |
| posts | 1 | uid | 1 | uid | A | 122842 | NULL | NULL | | BTREE | | |
| posts | 1 | gps_x | 1 | gps_y | A | 1351269 | NULL | NULL | | BTREE | | |
| posts | 1 | city_id | 1 | city_id | A | 20 | NULL | NULL | | BTREE | | |
| posts | 1 | info | 1 | info | A | 20 | NULL | NULL | | BTREE | | |
| posts | 1 | group_id | 1 | group_id | A | 20 | NULL | NULL | | BTREE | | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
mysql> `show index from post_likes;`
+------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| post_likes | 0 | PRIMARY | 1 | id | A | 10276317 | NULL | NULL | | BTREE | | |
| post_likes | 1 | post_id | 1 | post_id | A | 3425439 | NULL | NULL | | BTREE | | |
| post_likes | 1 | post_id | 2 | uid | A | 10276317 | NULL | NULL | | BTREE | | |
+------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
mysql> `show index from users;`
+-------+------------+-------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| users | 0 | PRIMARY | 1 | id | A | 497046 | NULL | NULL | | BTREE | | |
| users | 0 | number | 1 | number | A | 497046 | NULL | NULL | | BTREE | | |
| users | 1 | name | 1 | name | A | 99409 | NULL | NULL | | BTREE | | |
| users | 1 | show_phone | 1 | show_phone | A | 8 | NULL | NULL | | BTREE | | |
| users | 1 | show_mail | 1 | show_mail | A | 12 | NULL | NULL | | BTREE | | |
| users | 1 | show_on_map | 1 | show_on_map | A | 18 | NULL | NULL | | BTREE | | |
| users | 1 | show_on_map | 2 | map_activity | A | 497046 | NULL | NULL | | BTREE | | |
+-------+------------+-------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
mysql> `show index from friends;`
+---------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+---------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| friends | 0 | PRIMARY | 1 | id | A | 1999813 | NULL | NULL | | BTREE | | |
| friends | 0 | user_id_2 | 1 | user_id | A | 666604 | NULL | NULL | | BTREE | | |
| friends | 0 | user_id_2 | 2 | friend_id | A | 1999813 | NULL | NULL | | BTREE | | |
| friends | 0 | user_id_2 | 3 | confirmed | A | 1999813 | NULL | NULL | | BTREE | | |
| friends | 1 | user_id | 1 | user_id | A | 499953 | NULL | NULL | | BTREE | | |
| friends | 1 | friend_id | 1 | friend_id | A | 499953 | NULL | NULL | | BTREE | | |
| friends | 1 | confirmed | 1 | confirmed | A | 17 | NULL | NULL | | BTREE | | |
+---------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
```
What I've done (tried to do):
Set `FORCE KEY(uid)` for the posts table - works faster, by approximately 2 secs.
Of course I've tried to rewrite this query and made something like this:
```
SELECT posts.id as post_id, posts.uid, text, date, like_count, dislike_count, comments_count, post_likes.liked, image, aspect,
u.name as user_name, u.avatar, u.avatar_date, u.driver, u.number as user_number,
u.city_id as user_city_id, u.birthday as user_birthday, u.show_birthday, u.auto_model, u.auto_color,
u.verified , u.gps_x as user_gps_x, u.gps_y as user_gps_y, u.map_activity, u.show_on_map
FROM posts
LEFT JOIN post_likes ON post_likes.post_id = posts.id and post_likes.uid = '478831'
LEFT JOIN users u ON posts.uid = u.id LEFT JOIN friends AS f ON f.friend_id=posts.uid
WHERE posts.info = 0 AND confirmed = 2
group by posts.id, posts.uid
order by posts.id desc limit 0, 20;
```
Unfortunately, GROUP BY posts.uid kills this query's performance, because of the large number of rows returned from posts.
Alternative simplified query (if it may help):
```
SELECT posts.id as post_id, posts.uid
FROM posts
WHERE posts.info = 0 AND
( posts.uid = 478831 OR EXISTS(SELECT friend_id
FROM friends
WHERE user_id = 478831
AND posts.uid = friend_id
AND confirmed = 2)
)
order by posts.id desc limit 0, 20;
```
What I need - the query that executes faster less then 1 sec and outputs the same result.
Any help will be appreciated.
Thx in advance. | Similar to the solution by @StanislavL , you can do a join instead of a sub query:-
```
SELECT posts.id as post_id, posts.uid, text, date, like_count,
dislike_count, comments_count, post_likes.liked, image, aspect,
u.name as user_name, u.avatar, u.avatar_date, u.driver, u.number as user_number,
u.city_id as user_city_id, u.birthday as user_birthday, u.show_birthday, u.auto_model, u.auto_color,
u.verified , u.gps_x as user_gps_x, u.gps_y as user_gps_y, u.map_activity, u.show_on_map
FROM posts
LEFT OUTER JOIN post_likes ON post_likes.post_id = posts.id and post_likes.uid = posts.uid
LEFT OUTER JOIN users u ON posts.uid = u.id
LEFT OUTER JOIN friends f ON f.user_id = 478831 AND posts.uid = f.friend_id AND confirmed = 2
WHERE posts.info = 0
AND (posts.uid = 478831
OR f.friend_id IS NOT NULL)
order by posts.id desc limit 0, 20;
```
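To see that the `LEFT JOIN ... IS NOT NULL` predicate picks the same posts as the original `OR EXISTS(...)`, here is a cut-down SQLite sketch (toy schema with only the relevant columns; all ids are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts   (id INTEGER, uid INTEGER, info INTEGER);
CREATE TABLE friends (user_id INTEGER, friend_id INTEGER, confirmed INTEGER);
INSERT INTO posts   VALUES (1, 1, 0), (2, 2, 0), (3, 3, 0), (4, 2, 1);
INSERT INTO friends VALUES (1, 2, 2), (1, 3, 1);  -- user 1: friend 2 confirmed, 3 pending
""")

me = 1
rows = conn.execute("""
    SELECT posts.id
    FROM posts
    LEFT JOIN friends f
           ON f.user_id = ? AND posts.uid = f.friend_id AND f.confirmed = 2
    WHERE posts.info = 0
      AND (posts.uid = ? OR f.friend_id IS NOT NULL)
    ORDER BY posts.id""", (me, me)).fetchall()
print(rows)  # own post 1 and confirmed friend's post 2; not 3 (pending) or 4 (info=1)
```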
However the OR might well prevent it using indexes efficiently. To avoid this you can do 2 queries unioned together, the first ignoring the friends table and the 2nd doing an INNER JOIN against the friends table:-
```
SELECT posts.id as post_id, posts.uid, text, date, like_count,
dislike_count, comments_count, post_likes.liked, image, aspect,
u.name as user_name, u.avatar, u.avatar_date, u.driver, u.number as user_number,
u.city_id as user_city_id, u.birthday as user_birthday, u.show_birthday, u.auto_model, u.auto_color,
u.verified , u.gps_x as user_gps_x, u.gps_y as user_gps_y, u.map_activity, u.show_on_map
FROM posts
LEFT OUTER JOIN post_likes ON post_likes.post_id = posts.id and post_likes.uid = posts.uid
LEFT OUTER JOIN users u ON posts.uid = u.id
WHERE posts.info = 0
AND posts.uid = 478831
UNION
SELECT posts.id as post_id, posts.uid, text, date, like_count,
dislike_count, comments_count, post_likes.liked, image, aspect,
u.name as user_name, u.avatar, u.avatar_date, u.driver, u.number as user_number,
u.city_id as user_city_id, u.birthday as user_birthday, u.show_birthday, u.auto_model, u.auto_color,
u.verified , u.gps_x as user_gps_x, u.gps_y as user_gps_y, u.map_activity, u.show_on_map
FROM posts
LEFT OUTER JOIN post_likes ON post_likes.post_id = posts.id and post_likes.uid = posts.uid
LEFT OUTER JOIN users u ON posts.uid = u.id
INNER JOIN friends f ON f.user_id = 478831 AND posts.uid = f.friend_id AND confirmed = 2
WHERE posts.info = 0
order by posts.id desc limit 0, 20;
```
Neither tested | You can try to rewrite the query moving the additional subquery in FROM section
```
SELECT posts.id as post_id, posts.uid, text, date, like_count,
dislike_count, comments_count, post_likes.liked, image, aspect,
u.name as user_name, u.avatar, u.avatar_date, u.driver, u.number as user_number,
u.city_id as user_city_id, u.birthday as user_birthday, u.show_birthday, u.auto_model, u.auto_color,
u.verified , u.gps_x as user_gps_x, u.gps_y as user_gps_y, u.map_activity, u.show_on_map
FROM posts
LEFT OUTER JOIN post_likes ON post_likes.post_id = posts.id and post_likes.uid = '478831'
LEFT OUTER JOIN users u ON posts.uid = u.id
LEFT OUTER JOIN (SELECT friend_id
FROM friends
WHERE user_id = 478831
AND confirmed = 2) f ON posts.uid = f.friend_id
WHERE posts.info = 0 AND
( posts.uid = 478831 OR f.friend_id is not null)
order by posts.id desc limit 0, 20;
``` | MySQL slow query request fix, overwrite to boost the speed | [
"",
"mysql",
"sql",
""
] |
I want to fill a new table (Items) from another one that is already filled (History) and kind of messed up.
History columns:
PK1 PK2 SERIAL ID\_NUMBER DATE ...
Items table should have SERIAL as PK and a unique ID\_NUMBER (Like a PK too)
So I want to select from History, several columns, with the condition that SERIAL and ID\_NUMBER would be unique.
I have managed to return just SERIAL and ID\_NUMBER without repetition using a GROUP BY clause, but when I ask for other columns it says that they have to be in the GROUP BY clause as well. When I put them in, it returns more records than I want.
So what I want is to retrieve JUST ONE entire row from the repeated values. And of course the unique ones
Let's say:
```
+--------------------------------------------------------------+
| PK1  PK2  ID_NUMBER  SERIAL  DATE       MORE_COLUMNS...      |
+--------------------------------------------------------------+
| 123   1   ABC21      ZXC1    20-1-2011                       |
| 123   2   ABC00      ZXC2    30-1-2011                       |
| 234   1   ABC00      ZXC2    20-4-2011                       |
| 345   1   ABC21      ZXC1    10-5-2011                       |
| 567   1   ASD31      QWE1    23-1-2012                       |
+--------------------------------------------------------------+
```
I want to return:
```
+--------------------------------------------------------+
| PK1  PK2  ID_NUMBER  SERIAL  DATE       MORE_COLUMNS...|
+--------------------------------------------------------+
| 123   1   ABC21      ZXC1    20-1-2011                 |
| 234   1   ABC00      ZXC2    30-1-2011                 |
| 567   1   ASD31      QWE1    23-1-2012                 |
+--------------------------------------------------------+
```
Greetings | First of all, try to merge `PK1` and `PK2`: for example, in Oracle you can use `PK1||PK2`, or `CONCAT(PK1,PK2)` in MySQL. This query works fine in Oracle:
```
SELECT * FROM YOUR_TABLE
WHERE (PK1||PK2) IN (SELECT MIN(PK1||PK2) from YOUR_TABLE
GROUP BY ID_NUMBER,SERIAL);
``` | ```
Select distinct ID_NUMBER, SERIAL, DATE, MORE_COLUMNS
from (the first query) src
``` | Returning a single primary key from a group of repeated values | [
"",
"sql",
""
] |
Basically I am having a problem fixing my "Date" and how it saves in SQL. It works now, it inserts into the table etc but not in the format I actually want it to insert like. It inserts as `MM/dd/yyyy` (the American Way as far as I know) but I need to be placed in the format that we use here in the UK, so yes it needs to display as `dd/MM/yyyy` (03/05/2014).
Is it actually possible to convert it for my different time zone & if so, how is it done? (And by done I mean guide me otherwise I won't learn.)
Here is my code on how it stands as of including my insert into the actual SQL Database.
```
Protected Sub OkBtn_Click(sender As Object, e As System.EventArgs) Handles OkBtn.Click
Try
' Dim DayNo As Integer = 0
' Dim DateNo As Integer = 0
' Dim StartingDay As Integer = 0
Dim ThisDay As Date = Date.Today
' Dim Week As Integer = 1
' StartingDay = ThisDay.AddDays(-(ThisDay.Day - 1)).DayOfWeek
' DayNo = ThisDay.DayOfWeek
' DateNo = ThisDay.Day
' Week = Fix(DateNo / 7)
' If DateNo Mod 7 > 0 Then
'Week += 1
' End If
' If StartingDay > DayNo Then
'Week += 1
' End If
'Dim ThisWeek As String
' ThisWeek = Week
Dim ThisUser As String
ThisUser = Request.QueryString("")
If ThisUser = "" Then
ThisUser = "Chris Heywood"
End If
connection.Open()
command = New SqlCommand("Insert Into FireTest([Date],[Type],[Comments],[Completed By]) Values(@Date,@Type,@Comments,@CompletedBy)", connection)
command.Parameters.AddWithValue("@Date", ThisDay.ToString)
command.Parameters.AddWithValue("@Type", DropDownList1.SelectedValue)
command.Parameters.AddWithValue("@Comments", TextBox2.Text)
command.Parameters.AddWithValue("@CompletedBy", ThisUser)
'command.Parameters.AddWithValue("@Week", ThisWeek)
command.ExecuteNonQuery()
Catch ex As Exception
MessageBox.Show("Please make sure all fields are filled in!")
End Try
connection.Close()
Response.Redirect("~/Production/Navigator.aspx")
End Sub
```
EDIT: I have edited the way it inserts & that works but still, it doesn't appear as dd/MM/yyyy. | Since your database column is a `date` type, remove the `.ToString` call when adding the parameter:
```
command.Parameters.AddWithValue("@Date", ThisDay)
``` | don't save date time as string in your database, if you have date time data type in Date column
```
command.Parameters.AddWithValue("@Date", ThisDay)
```
you don't need to call `tostring`
but in this case i would not even use parameter. i will let db to insert the date
```
Insert Into FireTest([Date],[Type],[Comments],[Completed By]) Values(GETDATE(),@Type,@Comments,@CompletedBy)
``` | How can I format my "Date.Today" to go from MM/dd/yyyy to dd/MM/yyyy in VB.Net? | [
"",
"sql",
"sql-server",
"vb.net",
"date",
"datetime",
""
] |
Running into a problem where I get no results at all if there isn't a tool currently checked out (checkinout table is currently empty). Once I have a single record entered into checkinout table, I start getting the list of tools that aren't currently checked out. It isn't Earth shattering if I have to put in a false checkout history, but I would prefer not having to do that. Any ideas on how I can fix this? Thanks.
```
SELECT DISTINCT tools.id,
tools.ToolNumber,
tools.Description
FROM tools,
checkinout
WHERE tools.id NOT IN(SELECT checkinout.idTool
FROM checkinout
WHERE checkinout.CheckInDT IS NULL)
AND tools.Retired=0
ORDER BY
tools.ToolNumber;
``` | You could also do this with a `LEFT JOIN`, where the column on the RIGHT is null
[Here](http://www.codeproject.com/KB/database/Visual_SQL_Joins/Visual_SQL_JOINS_orig.jpg) is a helpful image showing SQL Joins
```
SELECT DISTINCT tools.id,
tools.ToolNumber,
tools.Description
FROM tools
LEFT JOIN checkinout
ON tools.id = checkinout.idTool AND checkinout.CheckInDT IS NULL
WHERE checkinout.idTool IS NULL
AND tools.retired = 0
ORDER BY tools.ToolNumber
``` | Why do you use 2 tables? You never join on `checkinout`.
Try this
```
SELECT DISTINCT tools.id,
tools.ToolNumber,
tools.Description
FROM tools
WHERE tools.id NOT IN (
SELECT checkinout.idTool
FROM checkinout
WHERE checkinout.CheckInDT IS NULL
)
AND tools.Retired=0
ORDER BY
tools.ToolNumber;
``` | SQL statement returns no results if table in NOT IN segment is empty | [
"",
"mysql",
"sql",
"select",
""
] |
Can someone please help me write this query: I need to collect all the IDs for each letter that has the value 1:
 | This is a two step process. First you need to unpivot your columns to rows:
```
SELECT upvt.ID, Letters
FROM T
UNPIVOT
( Value
FOR Letters IN ([A], [B], [C], [D], [E], [F])
) upvt
WHERE upvt.Value = 1;
```
This gives:
```
ID Letters
10 A
10 C
10 E
10 F
...
```
Then you need to concatenate the IDs from this result:
```
WITH Unpivoted AS
( SELECT upvt.ID, Letters
FROM T
UNPIVOT
( Value
FOR Letters IN ([A], [B], [C], [D], [E], [F])
) upvt
WHERE upvt.Value = 1
)
SELECT u.Letters,
IDs = STUFF(( SELECT ', ' + CAST(u2.ID AS VARCHAR(10))
FROM Unpivoted u2
WHERE u.Letters = u2.Letters
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
FROM Unpivoted u
GROUP BY u.Letters;
```
Which gives:
```
Letters IDs
A 10, 20, 50
B 20, 40
C 10, 20, 30, 40, 50
D 30, 40
E 10, 50
F 10, 20, 40
```
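On engines with a string-aggregation function, the concatenation step collapses into a single aggregate (for example `STRING_AGG` in SQL Server 2017+). A sketch of that second step using SQLite's `GROUP_CONCAT`, fed rows shaped like the unpivoted result above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Unpivoted (ID INTEGER, Letters TEXT)")
conn.executemany("INSERT INTO Unpivoted VALUES (?, ?)",
                 [(10, "A"), (20, "A"), (50, "A"), (20, "B"), (40, "B")])

# GROUP_CONCAT collapses all IDs of a letter into one comma-separated field.
rows = conn.execute(
    "SELECT Letters, GROUP_CONCAT(ID, ', ') AS IDs "
    "FROM Unpivoted GROUP BY Letters ORDER BY Letters").fetchall()
for letters, ids in rows:
    print(letters, ids)
```

Note that `GROUP_CONCAT` does not guarantee the order of the concatenated elements, so sort beforehand (or parse the result order-insensitively) if the order of the IDs matters.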
**[Example on SQL Fiddle](http://sqlfiddle.com/#!3/9eed6/1)** | There are two problems here: First, the table is not normalized, so you really need to first do an extra step to create a temporary table that normalizes the data:
The first step:
```
select id, 'A' as letter
from mytable where a=1
union
select id, 'B'
from mytable where b=1
union
select id, 'C'
from mytable where c=1
union
select id, 'D'
from mytable where d=1
union
select id, 'E'
from mytable where e=1
union
select id, 'F'
from mytable where f=1
```
Then you need to get multiple IDs crammed into one field. You can do this with the (deceptively named) "For XML".
Something like:
```
select letter, id + ', ' as [text()]
from
(
select id, 'A' as letter
from mytable where a=1
union
select id, 'B'
from mytable where b=1
union
select id, 'C'
from mytable where c=1
union
select id, 'D'
from mytable where d=1
union
select id, 'E'
from mytable where e=1
union
select id, 'F'
from mytable where f=1
) q
group by letter
for XML path(''))
```
I think that would work. | SQL Query to Select specific column values and Concatenate them | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"t-sql",
""
] |
I have a table in an Oracle db that has the following fields of interest: Location, Product, Date, Amount. I would like to write a query that would get a running total of amount by Location, Product, and Date. I put an example table below of what I would like the results to be.
I can get a running total but I can't get it to reset when I reach a new Location/Product. This is the code I have thus far, any help would be much appreciated, I have a feeling this is a simple fix.
```
select a.*, sum(Amount) over (order by Location, Product, Date) as Running_Amt
from Example_Table a
```
The result I would like:
```
+----------+---------+-----------+------------+------------+
| Location | Product | Date | Amount |Running_Amt |
+----------+---------+-----------+------------+------------+
| A | aa | 1/1/2013 | 100 | 100 |
| A | aa | 1/5/2013 | -50 | 50 |
| A | aa | 5/1/2013 | 100 | 150 |
| A | aa | 8/1/2013 | 100 | 250 |
| A | bb | 1/1/2013 | 500 | 500 |
| A | bb | 1/5/2013 | -100 | 400 |
| A | bb | 5/1/2013 | -100 | 300 |
| A | bb | 8/1/2013 | 250 | 550 |
| C | aa | 3/1/2013 | 550 | 550 |
| C | aa | 5/5/2013 | -50 | 600 |
| C | dd | 10/3/2013 | 999 | 999 |
| C | dd | 12/2/2013 | 1 | 1000 |
+----------+---------+-----------+------------+------------+
``` | Ah, I think I have figured it out.
```
select a.*, sum(Amount) over (partition by Location, Product order by Date) as Running_Amt
from Example_Table a
``` | from Advanced SQL Functions in Oracle 10g book, it has this example.
```
SELECT dte "Date", location, receipts,
SUM(receipts) OVER(ORDER BY dte
ROWS BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW) "Running total"
FROM store
WHERE dte < '10-Jan-2006'
ORDER BY dte, location
``` | Running Total by Group SQL (Oracle) | [
"",
"sql",
"oracle",
"cumulative-sum",
""
] |
I am creating a multi-tenant application where, for a particular event fired by a user, I save the event start time in the database (SQLite). To determine the peak request time, I am trying to find the [mode](http://en.wikipedia.org/wiki/Mode_%28statistics%29) of the saved timestamps. Not to be confused with the average, which would give me an average of all timestamps - I am looking for a range-like result which reflects the peak. E.g. between 2PM and 4PM, most of the events are fired. Timestamps are stored as string values in YYYY-MM-DDTHH:NN:SS format.
I am having problems writing a query which solves this. | The algorithm should be as follows:
1. Choose the duration of range say 1 Hour or 2 Hour
2. For each timestamp determine which range it belongs. For example, if you have selected 1 hour range and a timestamp is 5:09 pm, then it belongs to 5:00pm - 6:00pm range.
3. Group by range and count the number of timestamps falling in each time range.
4. Select the maximum time range
Here is a sample query with which you can accomplish finding mode of the timestamps when range duration is 1 hour:
```
SELECT Range AS RangeStart, Range+1 AS RangeEnd FROM
(
SELECT STRFTIME('%H',starttime) AS Range, COUNT(*) As cnt FROM Events
GROUP BY STRFTIME('%H',starttime)
)
ORDER BY cnt DESC
LIMIT 1;
```
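Since the question targets SQLite, the query can also be exercised end to end; a minimal sketch (the `Events`/`starttime` names follow the query above, timestamps use the stated `YYYY-MM-DDTHH:NN:SS` format, and the `Range` alias is renamed `hr` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Events (starttime TEXT)")
conn.executemany("INSERT INTO Events VALUES (?)", [
    ("2014-01-01T14:05:00",), ("2014-01-01T14:40:00",),
    ("2014-01-02T14:59:00",), ("2014-01-01T15:10:00",),
    ("2014-01-02T09:00:00",),
])

# Bucket by hour, count per bucket, keep the biggest bucket.
peak = conn.execute("""
    SELECT hr AS RangeStart, hr + 1 AS RangeEnd
    FROM (SELECT STRFTIME('%H', starttime) AS hr, COUNT(*) AS cnt
          FROM Events
          GROUP BY STRFTIME('%H', starttime))
    ORDER BY cnt DESC
    LIMIT 1""").fetchone()
print(peak)  # the 14:00-15:00 bucket has the most events
```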
You can check the code at [SQL Fiddle](http://www.sqlfiddle.com/#!5/260f4/5)
Let me know if it resolved your issue. | Here's what I did to solve the problem:-
1. I created another column which stored just the hour of the timestamp.
2. Ran the following query.
SELECT EventHour, COUNT(\*) AS CountOfHours FROM EventTable
GROUP BY EventHour
ORDER BY CountOfHours DESC
LIMIT 1
3. This gives the range of the peak traffic hour for the event | Calculating mode for time column | [
"",
"sql",
"sqlite",
""
] |
I am working on SQL and I am facing a problem creating a table!
Here is my code:
```
CREATE TABLE Voucher_Types
(
ID Number(3, 0),
Abbreviation VarChar(2),
Voucher_Type VarChar(100),
EntryBy VarChar(25),
EntryDate Date,
CONSTRAINT ID_PK Primary Key(ID)
);
```
And I get the following error:
**ORA-02264: name already used by an existing constraint**
I am using Oracle 10g.
I googled it and found some topics but they didn't help me. Can somebody help me with this problem? Thanks in advance! | You have another table that already has a constraint with the name `ID_PK`.
If you want to find out which table it is, you might try
```
select owner, table_name from dba_constraints where constraint_name = 'ID_PK';
```
Most probably, you copied your `create table` statement but did not change the primary key's constraint name.
It is generally felt to be good practice to include the table name in a constraint's name. One reason is exactly to prevent such "errors".
So, in your case, you might use
```
CREATE TABLE Voucher_Types
(
...
CONSTRAINT Voucher_Types_PK Primary Key(ID)
);
```
**Update** Why can't the same constraint name be used twice? *(As per your question in the comment)*: This is exactly because it is a name that *identifies* the constraint. If you have a violation of a constraint in a running system, you want to know which constraint it was, so you need a name. But if this name could refer to multiple constraints, the name would be of no particular use. | The error message tells you that there's already another constraint named ID\_PK in your schema - just use another name, and you should be fine:
```
CREATE TABLE Voucher_Types
(
ID Number(3, 0),
Abbreviation VarChar(2),
Voucher_Type VarChar(100),
EntryBy VarChar(25),
EntryDate Date,
CONSTRAINT VOUCHER_TYPES_ID_PK Primary Key(ID)
);
```
To find the offending constraint:
```
SELECT * FROM user_constraints WHERE CONSTRAINT_NAME = 'ID_PK'
``` | ORA-02264: name already used by an existing constraint | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
I have a table and I would like to get the id of the item in each group with the max value in a column, but I have a problem.
```
SELECT group_id, MAX(time)
FROM mytable
GROUP BY group_id
```
This way I get the correct rows, but I need the id:
```
SELECT id,group_id,MAX(time)
FROM mytable
GROUP BY id,group_id
```
This way I get all the rows. How can I get the ID of the max-time row in each group?
Sample Data
```
id = 1, group_id = 1, time = 2014.01.03
id = 2, group_id = 1, time = 2014.01.04
id = 3, group_id = 2, time = 2014.01.04
id = 4, group_id = 2, time = 2014.01.02
id = 5, group_id = 3, time = 2014.01.01
```
and from that i should get id: 2,3,5
Thanks! | Use your working query as a sub-query, like this:
```
SELECT `id`
FROM `mytable`
WHERE (`group_id`, `time`) IN (
SELECT `group_id`, MAX(`time`) as `time`
FROM `mytable`
GROUP BY `group_id`
)
``` | Have a look at the demo below
```
DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable(id INT , group_id INT , time_st DATE);
INSERT INTO mytable VALUES(1, 1, '2014-01-03'),(2, 1, '2014-01-04'),(3, 2, '2014-01-04'),(4, 2, '2014-01-02'),(5, 3, '2014-01-01');
/** Check all data **/
SELECT * FROM mytable;
+------+----------+------------+
| id | group_id | time_st |
+------+----------+------------+
| 1 | 1 | 2014-01-03 |
| 2 | 1 | 2014-01-04 |
| 3 | 2 | 2014-01-04 |
| 4 | 2 | 2014-01-02 |
| 5 | 3 | 2014-01-01 |
+------+----------+------------+
/** Query for Actual output**/
SELECT
id
FROM
mytable
JOIN
(
SELECT group_id, MAX(time_st) as max_time
FROM mytable GROUP BY group_id
) max_time_table
ON mytable.group_id = max_time_table.group_id AND mytable.time_st = max_time_table.max_time;
+------+
| id |
+------+
| 2 |
| 3 |
| 5 |
+------+
``` | Get id of max value in group | [
"",
"sql",
""
] |
This example is a video rental store with entities `Customer`, `Plan`, and `Rental`. Each customer has a plan, and each plan has a maximum number of rentals. I am trying to enforce the constraint on the maximum number of video rentals. I am using SQL Server 2012.
Here is my attempt at creating a trigger:
```
CREATE TRIGGER maxMovies
ON Rental
FOR INSERT
AS
BEGIN
IF (0 > (SELECT count(*)
FROM (SELECT count(*) as total
FROM Inserted i, rental r
WHERE i.customerID = r.customerID) as t, Inserted i, Rental r
WHERE t.total > r.max_movies AND i.customerID = r.customerID) )
BEGIN
RAISEERROR("maximum rentals surpassed.")
ROLLBACK TRAN
END
END
-- (rest of query)
DROP table...
```
However, SQL Server gives me the following errors:
```
Msg 102, Level 15, State 1, Procedure maxMovies, Line 10
Incorrect syntax near 'RAISEERROR'.
Msg 156, Level 15, State 1, Procedure maxMovies, Line 15
Incorrect syntax near the keyword 'DROP'.
```
Any suggestions on how to create this trigger? | It is `RAISERROR` not `RAISEERROR` - a simple typo. And of course, as Trinimon spotted correctly, strings need to be quoted in single quotes, not double quotes.
```
RAISERROR('maximum rentals surpassed.')
``` | Use single quotes instead of quotation marks ...
```
RAISERROR('maximum rentals surpassed.');
```
remove one `E` and add a semicolon `;`. | SQL: Setting trigger to limit inserts | [
"",
"sql",
"sql-server",
""
] |
I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the data transfer is to take the old data offline.
After some time it throws an `error Description: "The LOG FILE FOR DATABASE 'tempdb' IS FULL."`.
My `tempdb` and `templog` is placed in a drive (other than C drive) which has around 200 GB free. Also my `tempdb` size in database is set to `25 GB`.
As per my understanding I will have to increase the size of `tempdb` from `25 GB` to `50 GB` and set the log file Auto growth portion to "`unrestricted file growth (MB)`".
Please let me know about other factors. I cannot experiment much as I am working on a production database, so please let me know if these changes will have any other impact.
Thanks in advance. | You know the solution. It seems you are just moving part of the data to make your queries faster.
I agree with your solution:
As per my understanding I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file Auto growth portion to "unrestricted file growth (MB)".
Go ahead. | My guess is that you're trying to move all of the data in a single batch; can you break it up into smaller batches, and commit fewer rows as you insert? Also, as noted in the comments, you may be able to set your destination database to the SIMPLE or BULK_LOGGED recovery model.
"",
"sql",
"sql-server",
"database",
"sql-server-2005",
"data-transfer",
""
] |
I have searched the questions and there are similar questions asked, but no solution that I think I can use. This question is similar to a fuzzy match...sorta. I need help with comparing two tables. One table is a company reference table and the other is a table that gets raw company data imported into it daily. The reference table is clean and has a company id associated with every single company. The daily data that is imported does not have a company id. What I'm trying to do is have the daily data reference the 'Company Reference Table' on company\_name and update the 'Company Table's column company\_state based on the company\_name. Unfortunately, the daily data string for company\_name coming in is not always the same each day. There can be various characters (a-z, 0-9, +, -, .) and spaces before or after the actual company name, with varying lengths daily, so I do not believe I can use charindex to clean it up.
**Company Reference Table**
```
company_id company_name company_state
1 Awesome Inc NY
2 Excel-guru AL
3 Clean All MI
```
**Company Table**
```
company_name company_state
abc123 Awesome Inc NULL
Excel gur xyz-987 NULL
Clean All Cleanall NULL
```
What I want it to do is this. Sorta like a fuzzy match.
**Company Table**
```
company_name company_state
abc123 Awesome Inc NY
Excel gur xyz-987 AL
Clean All Cleanall MI
```
Any help is much appreciated. Thank you. | Try the query below to update the **company** table:
```
update company c INNER JOIN company_ref cr
ON c.company_name LIKE concat('%', cr.company_name, '%')
SET c.company_state = cr.company_state;
```
Another way, just using **SELECT**:
```
SELECT c.*, cr.* FROM company c INNER JOIN company_ref cr
ON c.company_name LIKE concat('%', cr.company_name, '%');
```
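As a rough sketch of the `LIKE`-with-wildcards match above (sample data made up; SQLite stands in for MySQL, so string concatenation is `||` rather than `concat()`, and the join-update is rewritten as a correlated subquery since SQLite has no `UPDATE ... JOIN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company_ref (company_id INTEGER, company_name TEXT, company_state TEXT);
CREATE TABLE company (company_name TEXT, company_state TEXT);
INSERT INTO company_ref VALUES (1, 'Awesome Inc', 'NY'), (3, 'Clean All', 'MI');
INSERT INTO company VALUES ('abc123 Awesome Inc', NULL), ('Clean All Cleanall', NULL);
""")
# Correlated subquery instead of MySQL's UPDATE ... JOIN; || instead of concat()
conn.execute("""
UPDATE company
SET company_state = (SELECT cr.company_state
                     FROM company_ref cr
                     WHERE company.company_name LIKE '%' || cr.company_name || '%')
""")
rows = conn.execute("SELECT company_state FROM company ORDER BY company_state").fetchall()
print(rows)  # [('MI',), ('NY',)]
```

Note this only catches rows where the clean name appears verbatim inside the dirty one, which is exactly the limitation the other answer discusses.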
SQL Fiddle: <http://sqlfiddle.com/#!2/ec76f/1> | Because the incoming data is not in a consistent format, I don't think you will be able to do this with a database. In fact, I would suggest NOT doing it with a database, allowing you to run a matching routine beforehand.
You'll then need to examine as much of the data as possible, and see if you can find any patterns, or things which you can do to the data in bulk to make it easier to match. For example:
* Remove repeated whitespace (e.g. "Awesome  Inc" -> "Awesome Inc")
* Remove non-alphanumeric characters
* If possible, can you remove the obvious codes?
I would then suggest something similar to the following:
* Add a field to your Company Table (the incoming data) to indicate the matched company, allowing you to keep track of matched items (and use for joins further on). If you don't want to modify this table, add a second table to link the two.
* Run repeated attempts to match, starting with the most definite versions (e.g. State in Company Table is present AND States match AND Company Reference Name within Company Table Name) - store these associations. They reduce the possible matches on your next attempts. At any point where your match returns > 1 possibility, it should not be used.
* When you've eliminated the easy matches, you can proceed to more fuzzy methods, such as [Levenshtein Distance](http://www.dotnetperls.com/levenshtein), individual words (tokens) matching.
I would expect that for a while, you should probably flag up low confidence matches, having a human review them, while you tune your process.
You can also store all previous matches for a company, meaning that over time your system might get better. It depends on how much the data varies each day. | SQL - comparing strings from two tables (fuzzy match...sorta) | [
"",
"sql",
"string",
"reference",
"match",
"fuzzy-search",
""
] |
Woocommerce has a reporting tool that will show me the top products sold for the last 7 days. But it only shows the top 12 products.
I am wanting to create a SQL query that will show me all products with their total count sold for the last 7 days instead of just the top 12.
Has anyone done this before? | WooCommerce borrows heavily from the way WordPress itself stores data: a Post serves as the basic data object with a handful of columns common to all custom posts. Any unique data specific to a custom type is stored as key value pairs in post\_meta. This means there aren't clean columns or tables to query for things like order line items, sku, line item price, etc.
It's worth mentioning that for orders, WC does not store *products*; it stores line items. This is because you can add fees, generic line items and possibly other things to an order that are not products. Also, WC needs to store the price at the time of the order, as the customer may have had a discount or the product price may have changed.
WooCommerce uses both the WordPress postmeta table AND its own order\_itemmeta table. Here's how that breaks down:
* The order itself is stored as a custom post type of "shop\_order" in the `wp_posts` table
* Order line items are stored in a relationship table called `wp_woocommerce_order_items`
* Order item details are stored as key value pairs in `wp_woocommerce_order_itemmeta`
* Finally, order details are stored in the `wp_postmeta` table
So, let's say you want to see all line items for a period of time and you want to know things like the item title, price and what order it belonged to. To do this you must JOIN multiple tables and either JOIN or subquery the meta tables for the specific fields you want. In the example below I used subqueries because I believe they are more readable, please note that JOINs are very likely faster.
```
SELECT
-- Choose a few specific columns related to the order
o.ID as order_id,
o.post_date as order_created,
-- These come from table that relates line items to orders
oi.order_item_name as product_name,
oi.order_item_type as item_type,
-- We have to subquery for specific values and alias them. This could also be done as a join
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_product_id") as product_id,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_product_variation_id") as variant_id,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_qty") as qty,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_fee_amount") as fee,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_line_subtotal") as subtotal,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_line_subtotal_tax") as tax,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_line_total") as total,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_tax_class") as tax_class,
(SELECT meta_value FROM wp_woocommerce_order_itemmeta WHERE order_item_id = oi.order_item_id AND meta_key = "_tax_status") as tax_status,
-- This wasn't specifically mentioned in the question but it might be nice to have some order meta data too
(SELECT meta_value FROM wp_postmeta WHERE post_id = o.ID AND meta_key = "_order_total") as order_total,
(SELECT meta_value FROM wp_postmeta WHERE post_id = o.ID AND meta_key = "_customer_user") as user_id
FROM wp_posts o
LEFT JOIN wp_woocommerce_order_items oi ON oi.order_id = o.id
LEFT JOIN wp_posts p ON p.ID = oi.order_item_id
WHERE o.post_type = "shop_order"
```
As you can see, it takes a subquery/join for *every* line item field you want, making these queries pretty expensive. I suspect that WC limits how much is queried for reports for this reason.
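The one-scalar-subquery-per-meta-key pattern described above can be sketched with SQLite standing in for MySQL (the tables and keys below are trimmed-down assumptions, not the full WooCommerce schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE order_items (order_item_id INTEGER, order_id INTEGER, order_item_name TEXT);
CREATE TABLE order_itemmeta (order_item_id INTEGER, meta_key TEXT, meta_value TEXT);
INSERT INTO order_items VALUES (10, 1, 'T-Shirt');
INSERT INTO order_itemmeta VALUES (10, '_qty', '2'), (10, '_line_total', '19.98');
""")
# One scalar subquery per key-value field, as in the big query above
rows = conn.execute("""
SELECT oi.order_item_name,
       (SELECT meta_value FROM order_itemmeta
        WHERE order_item_id = oi.order_item_id AND meta_key = '_qty') AS qty,
       (SELECT meta_value FROM order_itemmeta
        WHERE order_item_id = oi.order_item_id AND meta_key = '_line_total') AS total
FROM order_items oi
""").fetchall()
print(rows)  # [('T-Shirt', '2', '19.98')]
```

Each extra field costs another probe into the key-value table, which is why EAV reporting queries grow expensive so quickly.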
This answer was tested against WC version 3.3.4. | Instead of writing a new query, just modify the existing one with a filter:
```
woocommerce_reports_get_order_report_query
```
It seems that the limit is the same for all parts of the reports page, so changing this will affect all queries with a limit clause. I would not go too far with the number of products, because a new SQL query is executed for each product listed.
```
add_filter( 'woocommerce_reports_get_order_report_query', function( $query )
{
    if ( isset( $query['limit'] ) ) $query['limit'] = 'LIMIT 20'; // set the limit to 20 products
return $query;
});
``` | Woocommerce SQL query to show products sold this week? | [
"",
"mysql",
"sql",
"wordpress",
"woocommerce",
""
] |

Here is a pictorial representation of my partial database. `BrandNo` is the primary key of the `Suppliar` table and is being used as a foreign key in the others.
In the `LotDetails` table I need BrandName as a foreign key. This sounds absurd, as I can make either
1. a Single Primary Key **OR**
2. a Composite Key
that will be used as Foreign Key.
I know that I can use BrandNo as the foreign key and display BrandName, but for the sake of knowledge (and yes, ease of course) I want to know:
Is it possible to use two attributes of a table as Foreign Keys separately in different Tables?
**EDITED**
BrandNo is just a serial number and BrandName can be the name of any brand.
BrandNo is needed in 4 tables as shown, whereas BrandName is needed in only one table.
Thanks! | Yes that's quite possible. Assuming BrandName is a candidate key on its own then in principle you can reference it in just the same way as BrandNo. In that case BrandName and BrandNo would not be a composite key, they would both be separate *candidate keys*.
By convention, and for simplicity and ease of use, it is usual to choose just one key per table to be used for foreign key references. Usually (not always) that is the one designated as the "primary" key, but it doesn't have to be, if you find a good reason to do otherwise. | Yes! An FK doesn't need to reference a PK; you don't even need to reference an indexed column, but for the sake of relational integrity (and sanity) you must reference a uniquely valued column (this is why we "like" to reference a PK or at least a unique non-clustered indexed column).
It may sound a bit weird, but you can build a relational tableAB holding IdA and IdB, with tableA and tableB referencing tableAB's respective columns.
BTW: a table doesn't need to have a PK, but there cannot be two PKs. In general the table is physically ordered by the PK. | Two Primary Keys | [
"",
"mysql",
"sql",
"sqlite",
""
] |
I'd like to get the average of all the records in FieldA where FieldB is equal to 'a'
Any help would be great, thanks.
Table Example:
```
FieldA FieldB
100 a
200 b
233 a
432 a
643 a
234 b
123 a
321 a
``` | Try this:
```
SELECT AVG(FieldA) as Average FROM Table1
WHERE FieldB='a'
```
The result will be:
```
AVERAGE
308
```
See the result in [**SQL Fiddle**](http://sqlfiddle.com/#!3/54b12/1)
`AVG()` returns the average of the values in a group. Null values are ignored.
Syntax:
```
AVG ( [ ALL | DISTINCT ] expression )
```
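For a quick sanity check, the same query can be run against SQLite with the sample data (note SQLite's `AVG` returns a float, whereas SQL Server's `AVG` over an int column truncates to 308):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (FieldA INTEGER, FieldB TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)",
                 [(100, 'a'), (200, 'b'), (233, 'a'), (432, 'a'),
                  (643, 'a'), (234, 'b'), (123, 'a'), (321, 'a')])
avg = conn.execute("SELECT AVG(FieldA) FROM Table1 WHERE FieldB = 'a'").fetchone()[0]
print(round(avg, 2))  # 308.67
```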
Read more about `AVG()` [here](http://technet.microsoft.com/en-us/library/ms177677.aspx). | Try this. You can use [**AVG**](http://technet.microsoft.com/en-us/library/ms177677.aspx):
```
SELECT AVG(FieldA) AS Average
FROM tablename
WHERE FieldB='a'
```
[**SQL FIDDLE**](http://sqlfiddle.com/#!3/6289a/1)
 | SQL - How do you aggregate a field based on another field | [
"",
"sql",
""
] |
As far as I understand, the AVG() function ignores NULL values.
So AVG(4,4,4,4,4,NULL) --> 4
In my case I don't want this to happen.
I need a solution like this: AVG(4,4,4,4,4,NULL) --> 3.33
without replacing the NULL values directly in the table itself.
Is there any way to do this? | Use `coalesce()` to return the real value of zero for null columns:
```
select avg(coalesce(some_column, 0))
from ...
``` | You are correct about the behavior of AVG - use [COALESCE](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm#SQLRF00617) to convert the NULLs to 0 in the aggregate.
See [this answer](https://stackoverflow.com/a/17513637/2864740) in "Why SUM(null) is not 0 in Oracle?"
> If you are looking for a rationale for this behaviour, then it is to be found in the ANSI SQL standards which dictate that aggregate operators ignore NULL values.
The relevant code is then, simply:
```
Avg(Coalesce(col,0))
``` | SQL: AVG with NULL Values | [
"",
"sql",
"oracle",
"null",
"average",
""
] |
I have a select statement that is not really efficient since I am using union all to make it work instead of properly using parentheses or 'or'. Please let me know if you know of a better way to combine these statements:
```
select * from apples_2014
where value is null
and (date_1 <= '2014-01-21' and date_2 <= '2014-01-21')
union all
select * from apples_2014
where value is null
and (date_1 <= '2014-01-21' and date_2 is null)
union all
select * from apples_2014
where value is null
and (date_2 <= '2014-01-21' and date_1 is null)
union all
select * from apples_2014
where value is null
and (date_3 <= '2014-01-21' and date_3 is not null)
``` | The following is quite similar to what you want:
```
select *
from apples_2014
where value is null and
((date_1 <= '2014-01-21' and date_2 <= '2014-01-21') or
(date_1 <= '2014-01-21' and date_2 is null) or
(date_2 <= '2014-01-21' and date_1 is null) or
(date_3 <= '2014-01-21' and date_3 is not null)
);
```
There is a difference, though. Your query will sometimes return a row multiple times, once for each matching condition. This will return a row only once. | ```
select * from apples_2014
where
value is null
and (
(date_1 <= '2014-01-21' and date_2 <= '2014-01-21')
OR (date_1 <= '2014-01-21' and date_2 is null)
OR (date_2 <= '2014-01-21' and date_1 is null)
OR (date_3 <= '2014-01-21' and date_3 is not null)
)
``` | consolidate where clause sql | [
"",
"sql",
"oracle",
""
] |
I have a query like this:
```
SELECT *,
SUM(money_deposit + bonus_deposit) as money_deposit_total,
SUM(money_withdraw + bonus_withdraw) as money_withdraw_total
FROM transactions
where player_id = 1 and created_date between '2013-01-01' and '2014-01-05'
group by game_id;
```
What I want to do is to return the results by date, so instead of returning only 1 row, I want to return 1 row for each date.
Any suggestions? | Simply add the date column to the GROUP BY at the end of your SQL query:
```
group by game_id, created_date;
```
This gives one row per game per date. | Try this
```
SELECT * FROM
(
SELECT game_id,created_date,
SUM(money_deposit + bonus_deposit) as money_deposit_total,
SUM(money_withdraw + bonus_withdraw) as money_withdraw_total
FROM transactions
where player_id = 1 and created_date between '2013-01-01' and '2014-01-05'
group by game_id, created_date
) AS T INNER JOIN transactions S ON S.game_id = T.game_id
``` | SQL Query for returning rows divided by date | [
"",
"mysql",
"sql",
""
] |
I have a Table as below:
```
TABLE_1
{
Name varchar2(30)
Value_a number
Update_time timestamp
}
```
I want to conditionally update the value of update\_time during the INSERT + MERGE depending on the value of "value\_a". If value\_a is less than 0.1, check whether update\_time is null; if it is, update it, otherwise don't. If value\_a is greater than 0.1, check whether update\_time is not null; if it is, set it to null.
I have a table\_1\_stage which I clear, then I insert all the data and then "merge or insert" in table\_1 depending on the key match. I am using oracle 11g.
My Prepared Statement looks like the following: `" MERGE INTO "
+ TABLE_1
+ " TABLE1 "
+ " USING TABLE_1_STAGE TABLE1S "
+ " ON (TABLE1.NAME = TABLE1S.NAME ) "
+ " WHEN MATCHED THEN "
+ " UPDATE set VALUE_A = TABLE1S.VALUE_A "
+ " WHEN NOT MATCHED THEN "
+ " INSERT ( NAME, VALUE_A) "
+ " VALUES (?, ?) "`
The update\_time column is new and i need to set/reset it depending on the value\_a.
I know a stored procedure might be a better call, but I was wondering if something can be done in the insert query to perform this? | ```
Update table1
set Update_time = (case when value_a < 0.1 and Update_time is null then sysdate
when value_a > 0.1 and Update_time is not null then null
else Update_time end);
```
Change sysdate to your desired value.
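The same CASE-driven conditional update can be sketched in SQLite (sample rows are made up; `date('now')` stands in for Oracle's `sysdate`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (name TEXT, value_a REAL, update_time TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                 [('a', 0.05, None),          # low value, no timestamp -> gets stamped
                  ('b', 0.50, '2014-01-01'),  # high value, stamped     -> cleared
                  ('c', 0.50, None)])         # high value, no stamp    -> untouched
conn.execute("""
UPDATE table1
SET update_time = CASE
        WHEN value_a < 0.1 AND update_time IS NULL     THEN date('now')
        WHEN value_a > 0.1 AND update_time IS NOT NULL THEN NULL
        ELSE update_time
    END
""")
rows = conn.execute("SELECT name, update_time IS NULL FROM table1 ORDER BY name").fetchall()
print(rows)  # [('a', 0), ('b', 1), ('c', 1)]
```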
**EDIT:**
Included the edit in the merge statement. See the query below (not tested with real data).
In this way we do not run the update on the entire table.
```
Merge into table1 t1
using table1_staging t1s
on (t1.name = t1s.name)
when matched then
update set t1.value_a = t1s.value_a,
t1.Update_time = (case when t1s.value_a < 0.1 and t1.Update_time is null then sysdate
when t1s.value_a > 0.1 and t1.Update_time is not null then null
else t1.Update_time end)
when not matched then
INSERT (name, value_a)
VALUES (t1s.name, t1s.value_a);
``` | Your logic fits very well into an `update` statement. I think this is what you want:
```
update table1
set update_time = sysdate
where update_time is null and value_a < 0.1 or
update_time is not null and value_a > 0.1;
```
You don't say what *value* you want to set `update_time` to. I assume it is the current time. | Insert based on another column's value (Oracle 11g) | [
"",
"sql",
"oracle",
"oracle11g",
"insert-update",
""
] |
I'm trying to update a column in a table with the max value of that column grouped by another column.
So for example, say we have a table named transactions with two columns: `quantity` and `item_name`. And for whatever reason we want to set `quantity` equal to the maximum `quantity` found for each `item_name`.
I'm pretty stumped and bad at doing things like this in SQL, but here's what I have so far:
```
UPDATE transactions
SET
quantity = subquery.quantity
FROM (select max(quantity), item_name
from transaction group by item_name) AS subquery
WHERE and item_name = subquery.item_name;
``` | In addition to your syntax errors that @Gordon already pointed out, it is regularly a good idea to **exclude empty updates**:
```
UPDATE transaction t
SET quantity = sub.max_quantity
FROM (
SELECT item_name, max(quantity) AS max_quantity
FROM transaction
GROUP BY 1
) sub
WHERE t.item_name = sub.item_name
AND t.quantity IS DISTINCT FROM sub.max_quantity;
```
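A hedged sketch of the same grouped-max update in SQLite (older SQLite versions lack `UPDATE ... FROM`, so a correlated subquery stands in; the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (item_name TEXT, quantity INTEGER)")
conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                 [('apple', 1), ('apple', 5), ('pear', 3)])
# Every row gets the maximum quantity seen for its item_name
conn.execute("""
UPDATE transactions
SET quantity = (SELECT MAX(quantity) FROM transactions t2
                WHERE t2.item_name = transactions.item_name)
""")
rows = conn.execute(
    "SELECT item_name, quantity FROM transactions ORDER BY item_name, quantity"
).fetchall()
print(rows)  # [('apple', 5), ('apple', 5), ('pear', 3)]
```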
No need to write new row versions (at almost full cost) without changing anything. (Except if you want to fire a trigger.) | You are actually quite close:
```
UPDATE transactions
SET quantity = subquery.quantity
FROM (select max(quantity) as quantity, item_name
from transactions
group by item_name
) subquery
WHERE transactions.item_name = subquery.item_name;
```
I just removed the `and` in `where and` and renamed the table in the subquery to `transactions`. | Update using a subquery with aggregates and groupby in Postgres | [
"",
"sql",
"postgresql",
"sql-update",
"aggregate-functions",
""
] |
I have to list items that have not been updated for a multiple of two years after their last update. This is to run as a cron job once a day.
I know I can do this with something ugly like:
```
SELECT art_id, art_update FROM items
WHERE art_update = now()::date - interval '2 years'
OR art_update = now()::date - interval '4 years'
OR art_update = now()::date - interval '6 years'
OR art_update = now()::date - interval '8 years'
OR art_update = now()::date - interval '10 years';
```
Is there any way to avoid this by checking for a modulo interval? Or some other generalised way to express this? | You can generate a series of dates at 2 year intervals going back from today (to 10 years ago in the below) and join this back to your table:
```
SELECT i.art_id, i.art_update
FROM items i
INNER JOIN generate_series(2, 10, 2) s (years)
ON i.art_update = now()::date - interval '1 years' * s.years;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!15/bf9db/3)**
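Outside the database, the same series of anniversary dates can be sketched in plain Python (`two_year_anniversaries` is a hypothetical helper; note `date.replace` would raise for a Feb 29 anniversary in a non-leap year):

```python
from datetime import date

def two_year_anniversaries(today, max_years=10):
    """Dates 2, 4, ..., max_years years before `today` (mirrors generate_series)."""
    return [today.replace(year=today.year - y) for y in range(2, max_years + 1, 2)]

dates = two_year_anniversaries(date(2014, 3, 15))
print(dates)  # 5 dates: 2012-03-15 back to 2004-03-15
```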
*N.B* This appears to be marginally faster if you generate the dates in the series, rather than numbers:
```
SELECT i.art_id, i.art_update
FROM items i
INNER JOIN generate_series(now() - interval '10 years',
now() - interval '2 years',
interval '2 years') d (d)
ON art_update = d.d::date;
``` | You can try this.
```
SELECT art_id, art_update
FROM items
Where int4(date_part('year', art_update)) % 2 = 0;
``` | Can I use a modulo to specify an interval in postgres? | [
"",
"sql",
"postgresql",
""
] |
Here is my query:
```
SELECT COUNT(a.rating_id), COUNT(b.rating_id), COUNT(c.rating_id)
FROM wp_ratings a
LEFT JOIN wp_ratings b
LEFT JOIN wp_ratings c
WHERE a.rating_rating <= '5' AND a.rating_rating >= '4'
AND b.rating_rating <= '4' AND b.rating_rating >= '3'
AND c.rating_rating <= '3' AND c.rating_rating >= '0'
```
I am getting an error. I think my query is very self-explanatory. I just don't want to do this:
```
SELECT COUNT(*) FROM wp_ratings WHERE rating_rating <= '5' AND rating_rating >= '4'
SELECT COUNT(*) FROM wp_ratings WHERE rating_rating <= '4' AND rating_rating >= '3'
SELECT COUNT(*) FROM wp_ratings WHERE rating_rating <= '3' AND rating_rating >= '0'
```
I am trying to get a query that will be as fast as possible.
So is there a way, with MySQL, to merge multiple queries to the same table to get the different results in their own rows?
**UPDATE**
When I do EXPLAIN I see that MySQL scans the table 3 times; the table has 15 000 rows, so multiplying by 15 000 you get 45 000 row scans. I want to bring it down to only 15 000 if possible. | ```
SELECT
SUM(IF(rating_rating <= 5 AND rating_rating >= 4, 1, 0)),
SUM(IF(rating_rating <= 4 AND rating_rating >= 3, 1, 0)),
SUM(IF(rating_rating <= 3 AND rating_rating >= 0, 1, 0))
FROM wp_ratings
```
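The conditional-aggregation idea can be checked in SQLite, which has no `IF()`, using the portable `CASE WHEN` spelling (the sample ratings are made up; half-open ranges are used so a rating of exactly 4 falls into only one bucket, unlike the overlapping boundaries above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wp_ratings (rating_id INTEGER, rating_rating REAL)")
conn.executemany("INSERT INTO wp_ratings VALUES (?, ?)",
                 [(1, 4.5), (2, 3.5), (3, 2.0), (4, 5.0)])
# One pass over the table, three conditional counts
row = conn.execute("""
SELECT SUM(CASE WHEN rating_rating >= 4 AND rating_rating <= 5 THEN 1 ELSE 0 END),
       SUM(CASE WHEN rating_rating >= 3 AND rating_rating < 4  THEN 1 ELSE 0 END),
       SUM(CASE WHEN rating_rating >= 0 AND rating_rating < 3  THEN 1 ELSE 0 END)
FROM wp_ratings
""").fetchone()
print(row)  # (2, 1, 1)
```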
Just use `SUM` instead of `COUNT`, multiple times. You can then "count" only what you want to count. | A naive solution could be to use a UNION:
```
SELECT 'a' i, COUNT(rating_id) x FROM wp_ratings WHERE rating_rating <= '5' AND rating_rating >= '4'
UNION
SELECT 'b' i, COUNT(rating_id) x FROM wp_ratings WHERE rating_rating <= '4' AND rating_rating >= '3'
UNION
SELECT 'c' i, COUNT(rating_id) x FROM wp_ratings WHERE rating_rating <= '3' AND rating_rating >= '0'
```
The result of this would appear like this:
```
| i | x |
---------
| a | 4 |
| b | 3 |
| c | 6 |
``` | Is there a way with MySQL to merge multiple queries to the same table to get the different results in their own rows? | [
"",
"mysql",
"sql",
"performance",
""
] |
I have a Table structure in mysql:
```
ID USER_ID TYPE
1 1 B
2 3 B
3 4 B
4 3 C
5 3 D
6 4 C
7 4 D
8 3 B
```
Fiddle Link: <http://sqlfiddle.com/#!2/7df38f/1>
I have a requirement to get all 'USER\_ID' values having 'Type' as both B and C,
i.e. I need a result as below:
```
USER_ID
3
4
``` | Try this query
```
SELECT group_concat(`type`) AS types,user_id
FROM users
WHERE `type` IN('B','C')
group by user_id
HAVING FIND_IN_SET('B',types)>0 && FIND_IN_SET('C',types)>0
```
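`FIND_IN_SET` is MySQL-specific; as a portable sketch, the same "has both B and C" requirement can be expressed with `COUNT(DISTINCT ...)` and verified in SQLite using the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, user_id INTEGER, type TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, 1, 'B'), (2, 3, 'B'), (3, 4, 'B'), (4, 3, 'C'),
                  (5, 3, 'D'), (6, 4, 'C'), (7, 4, 'D'), (8, 3, 'B')])
# Keep only users that matched both distinct types in the IN list
rows = conn.execute("""
SELECT user_id
FROM users
WHERE type IN ('B', 'C')
GROUP BY user_id
HAVING COUNT(DISTINCT type) = 2
ORDER BY user_id
""").fetchall()
print(rows)  # [(3,), (4,)]
```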
SQL Fiddle <http://sqlfiddle.com/#!2/8ef8e/2> | ```
SELECT user_id, COUNT(*) cnt
FROM tablename
WHERE type IN ('B', 'C')
GROUP BY user_id
HAVING cnt = 2
```
[DEMO](http://sqlfiddle.com/#!2/117d8/15)
This assumes that `user_id` + `type` combinations are unique. If not, you can make a subquery that gets distinct values:
```
SELECT user_id, COUNT(*) cnt
FROM (SELECT distinct user_id, type
FROM tablename
WHERE type IN ('B', 'C')) x
GROUP BY user_id
HAVING cnt = 2
```
[DEMO](http://sqlfiddle.com/#!2/7df38f/12) | Return rows where column matches all values in a set | [
"",
"mysql",
"sql",
""
] |
I am currently using postgres 9.3.3
Following is how my table looks like -
```
Column | Type | Modifiers | Storage | Stats target | Description
--------------------+--------------------------+--------------------------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('playerbase_palyerdata_id_seq'::regclass) | plain | |
date_joined | timestamp with time zone | not null | plain | |
belongs_to_camp_id | integer | not null | plain | |
belongs_to_coach_id | integer | not null | plain | |
json_kvps | character varying(2000) | not null | extended | |
```
One sample data is as follows -
```
id | date_joined | belongs_to_camp_id | belongs_to_coach_id | json_kvps
1 | 2014-03-07 18:10:45.824749+05:30 | 1 | 1 | {"alumnicode": "2003360009", "emailusername": "aaron@hotmail.com", "altemail": "", "salutation": "Mrs", "fname": "Aaron", "mname": "V", "lname": "Schwartz", "fullname": "Aaraon M Scwartz", "programmename": "MEP", "batchyearin": "2003"}
```
Now I want to search the entire table, and find a user with "emailusername":"aaron@hotmail.com"
As mentioned here -
<http://www.postgresql.org/docs/current/static/functions-json.html>
I try to write a query as follows -
```
SELECT * FROM playerbase_playerdata WHERE json_kvps->>'emailusername' = 'aaron@hotmail.com';
```
I was expecting a column as above, instead I got the following error -
```
ERROR: operator does not exist: character varying ->> unknown
LINE 1: ...ELECT * FROM memberbase_memberdata WHERE json_kvps->>'emailu...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
```
Could someone please tell me what I am missing? | A feature like this exists only from PostgreSQL release 9.3 onward.
So, unfortunately, to make queries like this you need to update your PostgreSQL.
If you have 9.3, then you need to use the json column type.
Here you can see some examples; they helped me earlier:
<http://clarkdave.net/2013/06/what-can-you-do-with-postgresql-and-json/>
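If upgrading or changing the column type isn't an option, the varchar can always be decoded client-side; a rough sketch with SQLite standing in for Postgres and Python's `json` module doing the filtering (table contents below are made up):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE playerbase_playerdata (id INTEGER, json_kvps TEXT)")
conn.execute("INSERT INTO playerbase_playerdata VALUES (1, ?)",
             (json.dumps({"emailusername": "aaron@hotmail.com", "fname": "Aaron"}),))
conn.execute("INSERT INTO playerbase_playerdata VALUES (2, ?)",
             (json.dumps({"emailusername": "other@example.com"}),))
# No JSON operators needed: decode the varchar after fetching
matches = [pid for pid, kvps in conn.execute(
               "SELECT id, json_kvps FROM playerbase_playerdata")
           if json.loads(kvps).get("emailusername") == "aaron@hotmail.com"]
print(matches)  # [1]
```

This scans every row, of course, so it is only reasonable for small tables or one-off lookups.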
Have a nice day. | For PostgreSQL 9.3 or higher, one possible solution could be using a cast to json:
> SELECT \* FROM playerbase\_playerdata WHERE json\_kvps->>'emailusername' = 'aaron@hotmail.com';
`SELECT * FROM playerbase_playerdata WHERE CAST(json_kvps AS JSON)->>'emailusername' = 'aaron@hotmail.com';` | search a keyword in postgresql json field | [
"",
"sql",
"json",
"postgresql",
"postgresql-9.1",
""
] |
I have 2 tables (Projects and Documents), both have an ID and Name field.
I'm looking for a query that meets the following requirments:
- it will return rows from either table that have specified words in the 'Name' field
- it will return 3 fields: ID, Name and TableName -- where the tablename will be either be the name of the table or some identifier
I tried:
```
SELECT id, name
FROM projects UNION SELECT id, name FROM documents
WHERE name like '%querystring%'
```
But this query does not add the 3rd (TableName) field, and it returns rows from the Projects table that have associated documents with querystring in the name field.
I'm suspecting I'm tackling this completely wrong. Any ideas? | You mean like this:
```
SELECT id, name, 'projects' AS table_name
FROM projects WHERE name like '%querystring%'
UNION
SELECT id, name, 'documents' AS table_name
FROM documents WHERE name like '%querystring%'
``` | Think of each select in a union as a discrete query whose result set columns must match. This means that each query must have a WHERE clause as well if you want to filter it. Additionally, you can add a literal to each query to say where it came from.
This means your query should be something like:
```
SELECT id, name, 'projects' AS TableName FROM projects WHERE name like '%querystring%'
UNION
SELECT id, name, 'documents' AS TableName FROM documents WHERE name like '%querystring%'
```
Alternatively, you can express it as:
```
SELECT id, name, tablename FROM (
SELECT id, name, 'projects' as tablename FROM projects
UNION
SELECT id, name, 'documents' as tablename FROM documents
) AS combinedtable WHERE name like '%querystring%'
```
The second option may give you more control over your WHERE clause since you don't need to duplicate it then. | SQL Union query - Multiple tables | [
"",
"sql",
"sql-server",
""
] |
I have a database with a table (let's call it `mytable`) which has 59 columns. The first column, called `id`, is set as an `INTEGER_PRIMARY_KEY`. When it's time to insert new values, I'm executing `INSERT INTO mytable VALUES (...)` where `(...)` is a sequence with **58** values. Why **58**? Because I guess that the `Id` value will increase automatically. But I get the error "table mytable has 59 columns but 58 values were supplied".
So how I need to execute my query? | To let the `INTEGER_PRIMARY_KEY` field autoincrement I need to set it to `NULL`, then SQLite will take care about everything else.
Thanks for your answers. | You need to specify the columns you are adding if you are not adding all of them.
```
INSERT INTO table_name
(column1,column2,column3,...)
VALUES
(value1,value2,value3,...);
``` | How INTEGER_PRIMARY_KEY works? | [
"",
"sql",
"sqlite",
""
] |
I'm trying to return a list of fields from a column in a string in SQL. I think I found a way to tackle the problem, but I keep getting a conversion error... It's this that has me stuck, I don't think I'm doing a conversion, and I don't want to anyway. Here's the code:
```
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' , '') + my_column -- Its datatype is int.
FROM my_table
WHERE my_id_column = @id
SELECT @listStr
``` | Use the CONVERT function with the format that you want.
[SQL DateTime Format Function](http://www.sql-server-helper.com/tips/date-formats.aspx)
```
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' , '') + convert(varchar(10),my_column)
FROM my_table
WHERE my_id_column = @id
SELECT @listStr
``` | All you need to do is to convert the int value from "my_column" to the text data type to enable SQL Server to concatenate the value along with the @listStr value.
This example illustrates correct code for your purpose:
```
DECLARE @Text TABLE
(
LineId int
, LineText nvarchar(70)
)
DECLARE @listStr VARCHAR(MAX)
INSERT INTO @Text VALUES ( 1, 'This')
INSERT INTO @Text VALUES ( 1, 'is')
INSERT INTO @Text VALUES ( 1, 'a')
INSERT INTO @Text VALUES ( 1, 'text')
SELECT @listStr = COALESCE(@listStr + ', ' , '') + ltrim(rtrim(convert(varchar(20),LineText)))
FROM @Text
WHERE LineId = 1
SELECT @listStr
```
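As a hedged aside, the cast-before-concatenate idea is easy to verify outside SQL Server, for example with Python's sqlite3 and its built-in `group_concat` (the table and column names below are made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (my_id_column INTEGER, my_column INTEGER)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [(1, 10), (1, 20), (1, 30), (2, 99)])
# Casting the int column to text before concatenating mirrors the
# CONVERT(varchar(10), my_column) fix shown above.
list_str = conn.execute(
    "SELECT group_concat(CAST(my_column AS TEXT), ',') "
    "FROM my_table WHERE my_id_column = 1"
).fetchone()[0]
```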
Hope it helps. | Returning a list of values in SQL, get conversion error? | [
"",
"sql",
"sql-server",
"list",
""
] |
So let's say 'table' has two columns that act as a GUID - the ID column and msrepl\_tran\_version. Our original programmer did not know that our replication created this column and included it in a comparison, which has resulted in almost 20,000 records being put into this table, of which only 1,588 are ACTUALLY unique, and it's causing long load times.
I'm looking for a way to exclude the ID and replication columns from a select distinct, without having to then list every single column in the table, since I'm going to have to select from the record set multiple times to fix this (there are other tables affected and the query is going to be ridiculous). I don't want to have to deal with my code being messy if I can help it.
Is there a way to accomplish this without listing all of the other columns?
```
Select distinct {* except ID, msrepl_tran_version} from table
```
Other than (where COL\_1 is ID and COL\_N is the replication GUID)
```
Select distinct COL_2, ..., COL_N-1, COL_N+1, ... from table
``` | After more searching, I found the answer:
```
SELECT * INTO #temp FROM table
ALTER TABLE #temp DROP COLUMN id
ALTER TABLE #temp DROP COLUMN msrepl_tran_version
SELECT DISTINCT * FROM #temp
```
This works for what I need. Thanks for the answers guys! | Absolutely, 100% not possible, there is no subtract columns instruction. | How can I exclude GUIDs from a select distinct without listing all other columns in a table? | [
"",
"sql",
"sql-server",
"select",
"distinct",
"guid",
""
] |
I have tables `person` and `courses_by_instructors`. I am executing an sql query but I am getting duplicate values for the desired result. I am fetching from `person` table all those with `instructor_roles = 1` that are currently teaching a `course`. What suggestions in regards to the query or table could be made? [SQLFIDDLE](http://sqlfiddle.com/#!2/e18b1/1)
```
SELECT p.person_id, p.person_name, p.instructor_role, IFNULL(lj.company_id, 0) as company_id, lj.person_id, lj.course_name
FROM person as p
LEFT JOIN courses_by_instructors as lj
ON (lj.person_id = p.person_id AND lj.company_id = 7 )
WHERE instructor_role = 1
ORDER BY person_name
```
Desired Result:
```
+-----------+-----------------+-------------+------------+
| person_id | instructor_name | course_name | company_id |
+-----------+-----------------+-------------+------------+
| 154 | James Newton | | |
| 154 | James Newton | | |
| 478 | Tina Turner | | |
| 258 | James Fuller | | |
| 717 | Michael Jordan | Shoe Biz | 7 |
| 717 | Michael Jordan | Basketball | 7 |
| 964 | Steve Johnson | | |
+-----------+-----------------+-------------+------------+
```
Current:
```
+-----------+-----------------+-------------+------------+
| person_id | instructor_name | course_name | company_id |
+-----------+-----------------+-------------+------------+
| 154 | James Newton | | |
| 154 | James Newton | | |
| 478 | Tina Turner | | |
| 258 | James Fuller | | |
| 717 | Michael Jordan | Shoe Biz | 7 |
| 717 | Michael Jordan | Basketball | 7 |
| 717 | Michael Jordan | Shoe Biz | 7 |
| 717 | Michael Jordan | Basketball | 7 |
| 964 | Steve Johnson | | |
+-----------+-----------------+-------------+------------+
``` | Try this:
```
SELECT p.person_id, p.person_name, p.instructor_role, IFNULL(lj.company_id, 0) as company_id, lj.person_id, lj.course_name
FROM person as p
LEFT JOIN courses_by_instructors as lj
ON (lj.person_id = p.person_id AND lj.company_id = 7 )
WHERE instructor_role = 1
GROUP BY p.person_id,lj.course_name
ORDER BY person_name
```
It will give you what you need.
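To see why the added GROUP BY collapses the duplicates, here is a reduced sqlite3 sketch (toy rows, not the real schema; the duplicated course row reproduces the symptom):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_id INT, person_name TEXT, instructor_role INT);
CREATE TABLE courses_by_instructors (company_id INT, person_id INT, course_name TEXT);
INSERT INTO person VALUES (717, 'Michael Jordan', 1);
-- The duplicate 'Shoe Biz' row is what multiplies the join output.
INSERT INTO courses_by_instructors VALUES (7, 717, 'Shoe Biz');
INSERT INTO courses_by_instructors VALUES (7, 717, 'Shoe Biz');
INSERT INTO courses_by_instructors VALUES (7, 717, 'Basketball');
""")
dedup = conn.execute("""
    SELECT p.person_id, lj.course_name
    FROM person AS p
    LEFT JOIN courses_by_instructors AS lj
      ON lj.person_id = p.person_id AND lj.company_id = 7
    WHERE p.instructor_role = 1
    GROUP BY p.person_id, lj.course_name
    ORDER BY lj.course_name
""").fetchall()
```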
See output in [**SQL Fiddle**](http://sqlfiddle.com/#!2/e18b1/60) | The query is ok but the problem is in your model.
I would say the person table should be created like this:
```
CREATE TABLE person
(
company_id int,
person_id int,
person_name varchar(30),
instructor_role tinyint,
staff_role tinyint,
CONSTRAINT PK_PERSON_ID PRIMARY KEY NONCLUSTERED (person_id),
);
```
Having the person ID as the primary key. And the courses\_by\_instructors table should be created like this:
```
CREATE TABLE courses_by_instructors
(
company_id int,
person_id int,
course_id int,
course_name varchar(30),
CONSTRAINT FK_PERSON_ID FOREIGN KEY (person_id)
REFERENCES person(person_id)
ON DELETE CASCADE
ON UPDATE CASCADE
);
```
Having a foreign key referencing the person ID.
Please note that I'm using SQL Server so you may need to tweak the syntax a little bit. | Left Join Query issue of displaying duplicate results | [
"",
"mysql",
"sql",
""
] |
I am working on a table that has data similar to:
```
key1, key2, col1, col2, col3
1, 2, 1, 10, 'bla'
2, 2, 1, 10, 'bla2'
2, 1, 2, 10, 'bla'
```
keys 1 and 2 are a composite key.
I would like a query that returns:
```
key1, key2, col1, col2, col3
2, 2, 1, 10, 'bla2'
2, 1, 2, 10, 'bla'
```
So no two rows have the same value for both col1 and col2. And the col3 value returned is the one with the longest length.
I think this must be simple - but I can't work out the solution.
Many thanks, Paul | **Test Data**
```
DECLARE @TABLE_NAME TABLE(key1 INT, key2 INT, col1 INT, col2 INT, col3 VARCHAR(10))
INSERT INTO @TABLE_NAME
SELECT 1, 2, 1, 10, 'bla'
UNION ALL
SELECT 2, 2, 1, 10, 'bla2'
UNION ALL
SELECT 2, 1, 2, 10, 'bla'
```
**Query**
```
SELECT * FROM
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY col1,col2 ORDER BY LEN(col3) DESC) AS rn
FROM @TABLE_NAME
)Q
WHERE rn = 1
```
**Result Set**
```
╔══════╦══════╦══════╦══════╦══════╦════╗
║ key1 ║ key2 ║ col1 ║ col2 ║ col3 ║ rn ║
╠══════╬══════╬══════╬══════╬══════╬════╣
║    2 ║    2 ║    1 ║   10 ║ bla2 ║  1 ║
║    2 ║    1 ║    2 ║   10 ║ bla  ║  1 ║
╚══════╩══════╩══════╩══════╩══════╩════╝
```
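For what it's worth, the same PARTITION BY query runs essentially unchanged on SQLite 3.25+ (the version that added window functions), so it can be exercised from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # needs SQLite 3.25+ for ROW_NUMBER()
conn.execute("CREATE TABLE t (key1 INT, key2 INT, col1 INT, col2 INT, col3 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?, ?)",
                 [(1, 2, 1, 10, 'bla'), (2, 2, 1, 10, 'bla2'), (2, 1, 2, 10, 'bla')])
rows = conn.execute("""
    SELECT key1, key2, col1, col2, col3 FROM (
        SELECT *, ROW_NUMBER() OVER
            (PARTITION BY col1, col2 ORDER BY LENGTH(col3) DESC) AS rn
        FROM t
    ) AS q
    WHERE rn = 1
    ORDER BY col1
""").fetchall()
```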
## [`Working SQL FIDDLE`](http://sqlfiddle.com/#!3/038e0e/10) | The analytic `row_number()` function fits the bill perfectly:
```
SELECT key1, key2, col1, col2, col3
FROM (SELECT key1, key2, col1, col2, col3,
ROW_NUMBER() OVER
(PARTITION BY key1, key2 ORDER BY LEN(col3) DESC) AS rn
FROM my_table) t
WHERE rn = 1
``` | SQL Server Select query to exclude similar rows based on the length of one of the columns | [
"",
"sql",
"sql-server",
""
] |
I can't figure out why I am receiving a syntax error in Access SQL for the following nested IIF - it works if I remove the top line and closing parenthesis... Thanks
```
IIF(
[Home Phone] IS NULL
AND [H1 Cell Phone] IS NULL
, [Home Phone]
, IIF(
[H1 Cell Phone] IS NOT NULL
, [H1 Cell Phone] & ' (m)'
, [Home Phone] & ' (h)'
) AS Phone
)
``` | Try moving the "AS Phone" part outside of the closing parenthesis. It looks like your IIf can be split up as:
```
IIF(
[Home Phone] IS NULL AND [H1 Cell Phone] IS NULL, -- Conditional
[Home Phone], -- Conditional true If they're both null, why are you displaying null here?
-- Conditional false
IIF(
[H1 Cell Phone] IS NOT NULL, -- Conditional
[H1 Cell Phone]&' (m)', - True
[Home Phone]&' (h)' - False
) AS Phone --I think the AS Phone part needs to be moved outside the IIF or removed entirely.
)
```
See <http://office.microsoft.com/en-us/access-help/iif-function-HA001228853.aspx> for examples on how to use IIF as well. | ```
IIF([Home Phone] IS NULL AND [H1 Cell Phone] IS NULL,[Home Phone],
IIF([H1 Cell Phone] IS NOT NULL, [H1 Cell Phone]&' (m)', [Home Phone]&' (h)')) AS Phone
```
The parenthesis is misplaced. You must troubleshoot this in pieces. Find parts that work, and build up.
"",
"sql",
"ms-access",
"ms-access-2013",
""
] |
I have a table (eg. *TableA*) like this
| Name | Startdate  | Enddate    |
|------|------------|------------|
| a    | 2014-02-26 | 2014-02-28 |
| b    | 2014-03-05 | 2014-03-06 |
I want to get the results below. Please help with how to do this.
| Name | Date       |
|------|------------|
| a    | 2014-02-26 |
| a    | 2014-02-27 |
| a    | 2014-02-28 |
| b    | 2014-03-05 |
| b    | 2014-03-06 |
Thank you. | In SQL Server it can be achieved by using `CTE` as below
```
;WITH cte(name, Startdate,Enddate)
AS
(
SELECT name, Startdate,Enddate FROM your_table
UNION ALL
SELECT name, DATEADD(dd, 1,Startdate),Enddate FROM CTE
WHERE Startdate < Enddate
)
SELECT * FROM cte ORDER BY name
```
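The same recursive-CTE idea ports to other engines; as an illustration, a sqlite3 sketch using WITH RECURSIVE and the date() modifier in place of DATEADD:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (name TEXT, Startdate TEXT, Enddate TEXT)")
conn.executemany("INSERT INTO your_table VALUES (?, ?, ?)",
                 [('a', '2014-02-26', '2014-02-28'),
                  ('b', '2014-03-05', '2014-03-06')])
rows = conn.execute("""
    WITH RECURSIVE cte(name, d, Enddate) AS (
        SELECT name, Startdate, Enddate FROM your_table
        UNION ALL
        SELECT name, date(d, '+1 day'), Enddate FROM cte WHERE d < Enddate
    )
    SELECT name, d FROM cte ORDER BY name, d
""").fetchall()
```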
Here is the code at [SQL Fiddle](http://www.sqlfiddle.com/#!3/14cab/2) | In Oracle it's a little tricky:
```
SELECT distinct b.name, trim(regexp_substr(b.dates, '[^,]+', 1, LEVEL)) day
FROM
(
select a.name name,(select WM_CONCAT(a.startdate + rownum -1) from all_objects where rownum <= a.enddate - a.startdate + 1) dates
from TableA a
) b
CONNECT BY LEVEL <= length(b.dates) - length(REPLACE(b.dates, ',', ''))+1
order by 1,2;
```
The inner query generates days for each name as a comma-separated String. The outer simply splits the String into several rows. | SQL Query Startdate to Enddate | [
"",
"sql",
""
] |
I am new to PL/SQL and I am slightly confused about how to increase or decrease a variable's value.
I have an employee table where I am trying to increase some of the commission rates and decrease others. However, when running the script, the commission rates are not changed on the employees table; e.g. one employee's initial commission rate is 0.2 and I want to increase it by 0.4.

At the moments this is the PLSQL that I wrote:
```
SET ECHO OFF
SET VERIFY OFF
SET DOCUMENT OFF
SET SERVEROUTPUT OFF
/* Ask for the employee ID number */
PROMPT Please enter the employees id number:
ACCEPT p_emp_id
/* Declarative section */
DECLARE
/* Declare local block variables */
/* Set local v_employee_id to that inputted by the user */
v_employee_id employees.employee_id%TYPE := &p_emp_id;
v_job_id employees.job_id%TYPE;
v_salary employees.salary%TYPE;
v_commission_pct employees.commission_pct%TYPE;
BEGIN
SELECT job_id, salary, COMMISSION_PCT
INTO v_job_id, v_salary, v_commission_pct
FROM employees
WHERE employee_id = v_employee_id;
IF v_job_id = 'SA MANG' AND Salary = 10500
THEN v_commission_pct := v_commission_pct + 0.4
ELSE
IF v_job_id = 'SA MANG' AND v_salary = 10500
THEN v_commission_pct := v_commission_pct + 0.4;
ELSIF v_job_id = 'SA_REP' AND v_salary = 9000
THEN
v_commission_pct := v_commission_pct - 0.25;
END IF;
END IF;
END IF;
/* Update the corresponding row for the employee with
the new commission */
UPDATE employees
SET commission_pct = v_commission_pct
WHERE employee_id = v_employee_id;
/* End anonymous PL/SQL block */
END;
/
/* Display the details of the new employee */
SELECT employee_id, job_id, salary, commission_pct
FROM employees
WHERE employee_id = '&p_emp_id';
SET SERVEROUTPUT ON
SET DOCUMENT ON
SET VERIFY ON
SET ECHO ON
``` | # Manipulating Variables through an Anonymous PL/SQL Block
Your anonymous block of PL/SQL is copied here below, with some comments in between. Any of the mistakes pointed out would have interrupted the completion of your intended operations. Were you able to run this successfully to completion?
If you're running this on the SQL\*Plus prompt, it is possible the script output scrolled off the screen without your knowledge of any problems. Use the following command: **SHOW ERRORS** immediately after the prompt returns to you. This should provide some feedback and a warning if things are not right.
## Quick Notes on Incrementing Variables
Rather than hardcode your values throughout the script, consider the possibility of anchoring them to **CONSTANT typed VARIABLES**. They look like regular variables, but since they are not expected to change for the duration of the script, Oracle handles them differently and more efficiently than hard-coded values scattered throughout your PL/SQL block. So, in the case of your code:
```
/* Declare local block variables */
/* Set local v_employee_id to that inputted by the user */
v_employee_id employees.employee_id%TYPE := &p_emp_id;
v_job_id employees.job_id%TYPE;
v_salary employees.salary%TYPE;
v_commission_pct employees.commission_pct%TYPE;
-- We would add...
c_comm_pct_sa_mang_adjustment CONSTANT employees.commission_pct%TYPE:= 0.4;
c_comm_pct_sa_rep_adjustment CONSTANT employees.commission_pct%TYPE:= -0.25;
-- Deeper within your block, you would now reference constant variables instead of
-- raw values...
v_commission_pct:= v_commission_pct + c_comm_pct_sa_mang_adjustment;
```
Through this approach, you would push ALL potentially variable increment amounts to the top of the script, in the declaration block, so you're less likely to need changes within the code that's been figured out... less room for error or getting confused if changes are necessary and too much time has passed since you last ran this...
## Some Comments on the Code Presented
```
BEGIN
SELECT job_id, salary, COMMISSION_PCT
INTO v_job_id, v_salary, v_commission_pct
FROM employees
WHERE employee_id = v_employee_id;
-- This segment appears to repeat the following block, though the expression
-- "Salary = 10500" should have errored out because the typo as it is now makes
-- Salary a undeclared variable.
IF v_job_id = 'SA MANG' AND Salary = 10500
THEN
v_commission_pct := v_commission_pct + 0.4
ELSE
-- If the intent is TWO commission increases of 0.4 for the same employee, it would
-- not get to the second increase because the ELSE clause makes each half of the
-- IF-THEN-ELSE statement mutually exclusive. The procedure would exit out of this
-- clause the first time its criteria for the data is met.
IF v_job_id = 'SA MANG' AND v_salary = 10500
THEN
v_commission_pct := v_commission_pct + 0.4;
ELSIF v_job_id = 'SA_REP' AND v_salary = 9000
THEN
v_commission_pct := v_commission_pct - 0.25;
END IF;
END IF;
END IF;
-- There is one too many "END IF" clauses which would make this piece of PL/SQL not
-- runnable through completion.
/* Update the corresponding row for the employee with the new commission */
UPDATE employees
SET commission_pct = v_commission_pct
WHERE employee_id = v_employee_id;
/* End anonymous PL/SQL block */
END;
/
```
## The DBMS\_OUTPUT Built-in Package: A Programmer's Debugging Friend
Some additional notes to help you in the long run so that you can isolate errors faster if you're new to SQL Plus...
A useful tool to get better feedback on your PL/SQL code is to set up check-points in your code which dump a notice as screen output. The command to enable this in your SQL Plus session is:
```
EXEC DBMS_OUTPUT.ENABLE;
```
Or, alternatively
```
SET SERVEROUTPUT ON
```
The typical usage is to insert a line in your PL/SQL block formatted as:
```
DBMS_OUTPUT.PUT_LINE('Completed this block of instructions.');
```
Some additional info on this utility can be found in the [Oracle Documentation on the DBMS\_OUTPUT Package](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_output.htm). | In your code in the if condition you have mentioned salary but it should be v\_salary
and v_job_id; then the code will work. Try this:
```
DECLARE
    v_employee_id employees.employee_id%TYPE := 149;
    v_job_id employees.job_id%TYPE;
    v_salary employees.salary%TYPE;
    v_commission_pct employees.commission_pct%TYPE;
BEGIN
    SELECT job_id, salary, commission_pct
    INTO v_job_id, v_salary, v_commission_pct
    FROM employees
    WHERE employee_id = v_employee_id;
    IF v_job_id = 'SA_MAN' AND v_salary = 10500 THEN
        v_commission_pct := nvl(v_commission_pct, 0) + 0.4;
        v_salary := v_salary + 100;
    END IF;
    dbms_output.put_line(v_commission_pct || ' ' || v_salary);
END;
``` | PLSQL adding a value to a variable | [
"",
"sql",
"oracle",
"plsql",
""
] |
I have the following batch insert query running on MSSQL 2012:
```
WHILE 1<2 --busy loop until user stops the query
BEGIN
DECLARE @batch int = 200000
BEGIN TRANSACTION
WHILE @batch > 0
BEGIN
DECLARE @hourRand int = CONVERT(int,60*RAND() )
DECLARE @minRand int = CONVERT(int,60*RAND() )
--...more DECLAREs... --
INSERT INTO dbo.details (COLUMN1,COLUMN2) VALUES (@hourRand, @minRand, ...)
set @batch = @batch - 1
END
COMMIT
END
```
When I leave this query on, SQL's memory usage continuously grows. Why would this loop cause memory growth? Are the inserted entries being stored in some kind of cache or buffer that's taking up memory? If so, how can I free the memory that's being used?
I'm aware that SQL grows its memory pool as needed, but my queries begin to hang when the server's memory usage approaches 98%, so I do not think it's simply the memory pool being large. SQL appears to be actually "using" most of the memory it's holding onto.
Restarting the server frees the memory as expected, but I can't have the server run out of memory often.
Thank you for the help! | There is no issue here. All server databases make extensive use of caching and do not release memory unless required, eg because the available free RAM falls below some threshold. The server will not run out of RAM. Trying to "free" the memory will degrade performance as the server will have to read the data back from storage.
As for the hanging queries, it probably has more to do with the fact that you have issued 200K insert statements inside a single transaction, and you are probably experiencing locking or deadlock issues.
This is an artificial example that doesn't display the server's behavior in batch operations. Batches are never 200K statements long. If you want to import 200K rows, use BULK INSERT on the server's side or SqlBulkCopy on the client.
SQL Server is used in Data Warehouse applications where a typical import size is millions of rows. The trick is to use the proper tools for this job, eg BULK INSERT, SSIS and proper use of staging databases. | Because you are doing all this work inside a TRANS.
Either `COMMIT` after your `INSERT` or better yet remove your BEGIN TRANSACTION and COMMIT | MSSQL Batch INSERT operations increase memory usage | [
"",
"sql",
"sql-server",
"memory",
"ram",
""
] |
It doesn't recognize "sum". why?
```
SELECT c.c_id, c.c_name, SUM( p.p_sum ) AS sum
FROM clients c, contracts_rent r, payments p
WHERE c.c_id = r.c_id
AND r.contract_id = p.contract_id and sum>1300
GROUP BY c.c_id
``` | Something like...
```
SELECT c.c_id, c.c_name, SUM( p.p_sum ) AS sum
FROM clients c, contracts_rent r, payments p
WHERE c.c_id = r.c_id
AND r.contract_id = p.contract_id
GROUP BY c.c_id
HAVING SUM( p.p_sum ) > 1300
```
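A stripped-down sqlite3 illustration of the same point, using a single toy table in place of the three-table join (names invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (c_id INT, p_sum INT)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [(1, 1000), (1, 500), (2, 200), (2, 300)])
# WHERE filters individual rows before grouping;
# HAVING filters the aggregated groups afterwards.
rows = conn.execute("""
    SELECT c_id, SUM(p_sum) AS total
    FROM payments
    GROUP BY c_id
    HAVING SUM(p_sum) > 1300
""").fetchall()
```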
When you want to filter on the aggregate function value, you use the HAVING keyword | Aggregated columns can't usually be handled within a `where` clause since that clause happens *before* aggregation.
What you need is the `having` clause, which occurs *after* aggregation.
Compare something like:
```
select id, something, p_sum from tbl
where p_sum > 100
```
into:
```
select id, something, sum(p_sum) from tbl
group by something
having sum(p_sum) > 100
```
The former selects individual rows where the sum in that row is greater than 100. The latter groups on `something` then selects the grouped rows that have an aggregate summ greater than 100. | it doesn't recognize "sum". why? | [
"",
"sql",
"phpmyadmin",
""
] |
I have a bunch of SQL queries stored as files on disk.
They are all pure `SELECT` queries or in other words, they only do read operations.
I am connecting to Oracle 11g database and I want to measure approximate execution time of all these queries. Is there any way to do this programmatically?
The database is on a server which is behind a firewall and as such I can connect to database only through Toad or Oracle SQL developer. So, writing my own java/python code is not an option here. | I can think of a couple of options while using TOAD.
**Option 1:** This should work in TOAD by pressing `F5`
```
TURN SPOOL ON -- Spool the results
SET TERMOUT OFF -- Controls the display of output generated by commands executed from a script.
SELECT * FROM TABLE;
SPOOL OFF
```
**Option 2:** This should work in TOAD by pressing `F5` But this will not print the results of the query.
```
SET AUTOTRACE TRACEONLY
YOUR QUERY
```
**Option 3:** This should work in TOAD by pressing `F5`
```
SET TIMING ON -- Controls the display of timing statistics.
YOUR QUERY
```
**Eg for option 3**
```
SET TIMING ON
SELECT bla FROM bla...
...
Elapsed: 00:00:00:01
SELECT bar FROM foo...
...
Elapsed: 00:00:23:41
SET TIMING OFF
```
To run all scripts and find overall time
```
TIMING START allmyscripts
... run all my scripts ...
TIMING STOP
timing for: allmyscripts
Elapsed: 00:00:08.32
```
A DBA style find using the following columns in V$SQL
```
APPLICATION_WAIT_TIME
CONCURRENCY_WAIT_TIME
CLUSTER_WAIT_TIME
USER_IO_WAIT_TIME
PLSQL_EXEC_TIME
CPU_TIME
ELAPSED_TIME
```
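As an aside, if client-side scripting ever becomes an option, wall-clock timing of query files is straightforward to script. A hedged Python sketch (sqlite3 stands in for the Oracle connection here; the timing logic is the point):

```python
import sqlite3
import time

def time_query(conn, sql):
    """Return (elapsed_seconds, row_count) for one SELECT statement."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return time.perf_counter() - start, len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
elapsed, n = time_query(conn, "SELECT x FROM t WHERE x % 2 = 0")
```

In practice each SQL file would be read from disk and passed to `time_query` in turn, logging the elapsed times.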
**Note:** I suggest Option 3 | I would perform an insert into a logging table before the select and update it after each select.
This will allow you not only to query today's time, but also to query the history, and detect or backtrace increase increase of the execution time.
Table could be related to a small web application to show you data and tends in a graphical manner, to keep your boss happy :D
```
create table batch_log (
batch_name varchar2(100) not null,
run_id varchar2(100) not null,
start_time date not null,
stop_time date not null);
insert into batch_log values ('myFirstSelect', date(sysdate), sysdate, sysdate);
select ....
update batch_log set stop_time=sysdate where batch_name='myFirstSelect' and run_id=date(sysdate);
```
Of course, the run\_id could be something better than just the day (especially if run just before midnight).
The table could be reused for other batch jobs performing inserts, updates, etc., to log additional info (number of rows treated, affected, etc.).
Insert/update could also be centralized in a procedure (inside a package?). | Execution time for sql files - batch operation | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
# schema:
> customers(name, mailid, city)
# What to find:
Find all customer names consisting of three or more words (for example `King George V`).
# What I tried:
```
select name from customers
where name like
'%[A-Za-z0-9][A-Za-z0-9]% %[A-Za-z0-9][A-Za-z0-9]% %[A-Za-z0-9][A-Za-z0-9]%'
```
# what is surprising me:
If I try for two words (removing the last `%[A-Za-z0-9]%` from my query), it's working fine, but it's not working for three words :( | **MySQL Solution**:
If a `name` has words separated by a *space* character, then,
Try the following:
```
select name from customers
where ( length( name )
-
length( replace( name, ' ', '' ) ) + 1
) >= 3
``` | If you are consistent with the spacing between names, you could use this logic
```
SELECT LENGTH(name)-LENGTH(REPLACE(name,' ',''))
FROM customers
``` | Find the all customer names consisting of three or more words (for example King George V) | [
"",
"mysql",
"sql",
""
] |
I need to generate a list of values in an Oracle DB with the following columns of data:
```
ITEM_TYPE VARCHAR2(20)
ITEM_LAST_UPDATED DATE
ITEM_UPDATE_TOLERANCE NUMBER(1)
```
The only data that should be sent out to the console would be items that have the date in `'ITEM_LAST_UPDATED'` less than the sysdate minus the integer value within `'ITEM_UPDATE_TOLERANCE'`.
So, if I wanted to just show the ones that were one hour past due, I can do:
```
select ITEM_TYPE from MY_TABLE
where
to_char(ITEM_LAST_UPDATED, 'DD-MON-YYYY HH24:MI')
<=
to_char(sysdate - interval '1' hour, 'DD-MON-YYYY HH24:MI');
```
However, rather than using the '1' in the above statement, I need to replace it with the numeric value of `ITEM_UPDATE_TOLERANCE`.
I tried several different versions, but all error (such as):
```
select ITEM_TYPE from MY_TABLE
where
to_char(ITEM_LAST_UPDATED, 'DD-MON-YYYY HH24:MI')
<=
to_char(sysdate - interval to_number(ITEM_UPDATE_TOLERANCE) hour, 'DD-MON-YYYY HH24:MI');
``` | Why are you converting a perfect `DATE` column to a character value just to compare it to another `DATE` value converted to a character value?
Simply use:
```
ITEM_LAST_UPDATED <= sysdate - interval '1' hour
```
To achieve what you want, just multiply the value:
```
ITEM_LAST_UPDATED <= sysdate - (interval '1' hour) * ITEM_UPDATE_TOLERANCE
```
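As a rough cross-check of the multiplied-interval logic in another engine, here is a sqlite3 sketch; SQLite has no INTERVAL type, so the per-row hour offset is built as a datetime() modifier string (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE my_table (
    item_type TEXT, item_last_updated TEXT, item_update_tolerance INT)""")
conn.executemany("INSERT INTO my_table VALUES (?, ?, ?)",
                 [('fresh', '2999-01-01 00:00:00', 1),
                  ('stale', '2000-01-01 00:00:00', 2)])
# Each row's tolerance scales the offset, like (interval '1' hour) * tolerance.
overdue = conn.execute("""
    SELECT item_type FROM my_table
    WHERE item_last_updated <=
          datetime('now', '-' || item_update_tolerance || ' hours')
""").fetchall()
```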
There is also absolutely no need to convert a number to a number using the `to_number()` function. | As an alternative to @a\_horse\_with\_no\_name's interval multiplication trick, or San's division method, you can also use the [`numtodsinterval()` function](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions117.htm#SQLRF00682):
```
ITEM_LAST_UPDATED <= sysdate - numtodsinterval(ITEM_UPDATE_TOLERANCE, 'HOUR')
```
As an example:
```
select sysdate, sysdate - numtodsinterval(3, 'HOUR') from dual;
SYSDATE SYSDATE-NUMTODSINTE
------------------- -------------------
2014-03-07 19:08:27 2014-03-07 16:08:27
``` | Using "Interval" in Oracle where "Interval" is a value from a table | [
"",
"sql",
"oracle",
"date",
"intervals",
""
] |
So I am trying to create a report which will give me a count of sales orders and compare them to previous date ranges. Unfortunately, I am not sure how to approach returning the results, as each of these calculations is run against the same table column.
Ideally my output would look something like this, including the NULL values
```
partner Today LastYear TwoYear
------- ------ -------- --------
zzz 10 15 4
yyy 2 4
xxx 3 1 2
```
I have the basic idea down:
```
DECLARE @currentDay DATETIME
SET @currentDay = DATEDIFF(day,0,GETDATE()) -- Gives it 00:00:00.000 for time
-- Todays orders
SELECT count(s.po_id) as 'Orders Today',c.tp_name
FROM [EDI_001].[dbo].[303v850h] as s
join [EDI_001].[dbo].[Trade] as c
on s.TP_PartID = c.TP_PartID
where s.ExportDate < @currentDay AND
s.ExportDate > DATEADD(day,-1,@currentDay)
group by c.tp_name
order by c.tp_name;
-- Last Years Day's orders
SELECT count(s.po_id) as 'Orders Today',c.tp_name
FROM [EDI_001].[dbo].[303v850h] as s
join [EDI_001].[dbo].[Trade] as c
on s.TP_PartID = c.TP_PartID
where s.ExportDate < DATEADD(year,-1,@currentDay) AND
s.ExportDate > DATEADD(year, -1,DATEADD(day,-1,@currentDay))
group by c.tp_name
order by c.tp_name;
```
I'll go ahead and stop there, as you can see the queries are almost identical just changing the date range in the where clause. What I don't know is how to combine the two queries into a single result set. As well, my join does not return the empty sets in either query. I realize that it won't with the current join used, however it hasn't shown in different results with left outer joins either... But realistically one problem at a time and the first step is to get a single result set. Any help would be greatly appreciated. | ```
DECLARE @currentDay DATETIME
SET @currentDay = DATEDIFF(day,0,GETDATE()) -- Gives it 00:00:00.000 for time
SELECT Sum(
CASE
WHEN s.ExportDate Between DATEADD(day,-1,@currentDay) AND @currentDay
THEN 1
ELSE 0
END
) As Today,
Sum(
CASE
WHEN s.ExportDate Between DATEADD(year, -1,DATEADD(day,-1,@currentDay)) AND DATEADD(year,-1,@currentDay)
THEN 1
ELSE 0
END
) As LastYear,
Sum(
CASE
WHEN s.ExportDate Between DATEADD(year, -2,DATEADD(day,-1,@currentDay)) AND DATEADD(year,-2,@currentDay)
THEN 1
ELSE 0
END
) As TwoYear,
c.tp_name
FROM [EDI_001].[dbo].[303v850h] as s
JOIN [EDI_001].[dbo].[Trade] as c
on s.TP_PartID = c.TP_PartID
GROUP BY c.tp_name
ORDER BY c.tp_name;
``` | You are looking for the UNION operator.
It's used to combine the result-set of two or more SELECT statements.
<http://www.w3schools.com/sql/sql_union.asp> | Returning multiple queries as a single result set against the same database columns | [
"",
"sql",
"sql-server",
"datetime",
"aggregate-functions",
""
] |
I have a list of categories and items, and I want to return only the categories that have one type of Item. For instance:
Stuff table:
```
Cat 1 | Item 1
Cat 1 | Item 1
Cat 1 | Item 2
Cat 1 | Item 2
Cat 1 | Item 3
Cat 1 | Item 3
Cat 1 | Item 3
Cat 2 | Item 1
Cat 2 | Item 1
```
I would like to return
```
Cat 2 | Item 1
```
I tried:
```
SELECT category, item
FROM stuff
GROUP BY category, item
HAVING Count(Distinct item) = 1
```
But it's not working. It returns:
```
Cat 1 | Item 1
Cat 1 | Item 2
Cat 1 | Item 3
Cat 2 | Item 1
``` | You should remove `item` from your `GROUP BY` clause and run this instead:
```
SELECT category, MAX(item)
FROM stuff
GROUP BY category
HAVING COUNT(DISTINCT item) = 1
```
Example [SQLFiddle](http://sqlfiddle.com/#!6/e1f4aa/1).
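The fix can also be verified end-to-end with Python's sqlite3 on a cut-down version of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stuff (category TEXT, item TEXT)")
conn.executemany("INSERT INTO stuff VALUES (?, ?)",
                 [('Cat 1', 'Item 1'), ('Cat 1', 'Item 1'), ('Cat 1', 'Item 2'),
                  ('Cat 2', 'Item 1'), ('Cat 2', 'Item 1')])
# Grouping by category alone lets COUNT(DISTINCT item) see all of a
# category's items at once.
rows = conn.execute("""
    SELECT category, MAX(item)
    FROM stuff
    GROUP BY category
    HAVING COUNT(DISTINCT item) = 1
""").fetchall()
```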
Otherwise, each group returned from the `GROUP BY` clause will naturally have exactly one distinct item. | ```
SELECT category, MAX(item)
FROM Table1
GROUP BY category
HAVING Count(Distinct item) = 1
``` | Using Group By, Count, and Having correctly | [
"",
"sql",
""
] |
I have a table called "calls", there are columns:
```
a_imei_number
b_imei_number
a_phone_number
b_phone_number
call_start_time
call_end_time
```
If a specific phone x calls y, then the imei number of x is in the `a_imei_number` column; if y calls x, then the imei of x is in `b_imei_number`. In short, the difference between `a_imei_number` and `b_imei_number` is incoming versus outgoing calls for an imei. The same goes for the phone_number columns.
I am searching for calls for a specific imei that happen at the same time (cloned imei numbers), so I thought that if I find a call whose call_start_time is between another call's call_start_time and call_end_time, then I would find the cloned phones. So the imei numbers must be the same and the phone numbers must be different, logically.
So I wrote:
```
select * from calls c1 , calls c2
where (c1.a_imei = 1234 or c1.b_imei = 1234)
and
c1.call_start_time between c2.call_start_time and c2.call_end_time
```
The table has maybe 500M rows, so this query is not returning; maybe in 1 week it would return a result. Is there any other way to find the result without joining the same table like this? | If I understand correctly, you are looking for calls that occur at the same time as calls to or from a specific number. The following query expresses this idea:
```
select c2.*
from (select c.*
from calls c
where c.a_imei = 1234 or c.b_imei = 1234
) cbase join
calls c2
on cbase.call_start_time between c2.call_start_time and c2.call_end_time;
```
The performance is going to depend greatly on the number of matches of the first query.
Sometimes, the database engine has a hard time optimizing `or` in a condition. I would suggest having indexes on `calls(a_imei, call_start_time)` and `calls(b_imei, call_start_time)` and rewriting the query as:
```
select c2.*
from ((select c.call_start_time
from calls c
where c.a_imei = 1234
) union all
(select c.call_start_time
from calls c
where c.b_imei = 1234
)
) cbase join
calls c2
on cbase.call_start_time between c2.call_start_time and c2.call_end_time;
```
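For reference, the two suggested indexes could be created like this (the index names are my own):
```
CREATE INDEX ix_calls_a_imei_start ON calls(a_imei, call_start_time);
CREATE INDEX ix_calls_b_imei_start ON calls(b_imei, call_start_time);
```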
For the final join, a third index would be useful: `calls(call_start_time, call_end_time)`. | There are several things you can do to improve your query.
**Indexes**
It seems that you should have indexes defined on a\_imei and b\_imei. Perhaps you also would want to include call start and end times in those indexes as well, this depends.
**Specify columns**
Don't use `select *`, instead specify the list of columns you want to return.
```
select
a_imei_number,
b_imei_number,
call_start_time,
call_end_time
```
**Proper Join**
This depends on exactly what you are looking for in results. If you want to report on all possible duplicates, you would structure it one way.
```
select c2.a_imei, c2.b_imei, c2.call_start_time, c2.call_end_time
from (select c.a_imei, c.b_imei, c.call_start_time, c.call_end_time
from calls c
where c.a_imei = c.b_imei
) cbase join
calls c2
on cbase.call_start_time between c2.call_start_time and c2.call_end_time;
```
If you have a known `imei_number` and want to search for it, the query would be structured differently.
```
select c2.a_imei, c2.b_imei, c2.call_start_time, c2.call_end_time
from (select c.a_imei, c.b_imei, c.call_start_time, c.call_end_time
from calls c
where c.a_imei = 1234 or c.b_imei = 1234
) cbase join
calls c2
on cbase.call_start_time between c2.call_start_time and c2.call_end_time;
``` | Joining large tables in SQL | [
"",
"sql",
"oracle",
"select",
"join",
""
] |
I have a table that lists vacation information for different users (username, vacation start, and vacation end dates) -- 4 users are listed below:
```
Username VacationStart DeploymentEnd
rsuarez 2014-03-10 2014-03-26
studd 2014-01-18 2014-01-29
studd 2014-02-11 2014-02-26
studd 2014-03-02 2014-03-04
ssteele 2014-03-11 2014-03-26
ssteele 2014-03-18 2014-03-28
atidball 2014-03-05 2014-03-20
atidball 2014-03-06 2014-03-26
atidball 2014-03-13 2014-03-20
atidball 2014-03-18 2014-03-31
```
For a new query, I want to display only 4 rows, with each user having only one set of vacation dates displayed, either current/in-progress vacation, future/next vacation (if no current exists) or most recent (if two above are false).
The end result should be the following (assuming today is 3/9/2014):
```
Username VacationStart DeploymentEnd
rsuarez 2014-03-10 2014-03-26
studd 2014-03-02 2014-03-04
ssteele 2014-03-11 2014-03-26
atidball 2014-03-05 2014-03-20
```
Vacation dates are actually coming from another table (data\_vacations), which I left join to data\_users. I am trying to perform case selection inside left join statement.
Here is what I tried before, but my logic fails there, since I ended up mixing different vacation end dates with vacation start dates:
```
SELECT Username, VacationStart, VacationEnd
FROM data_users
LEFT JOIN
(
SELECT userGUID,
CASE WHEN MIN(CASE WHEN (VacationEnd < getdate()) THEN NULL ELSE VacationStart END) IS NULL THEN MAX(VacationStart)
ELSE MIN(VacationStart) END AS VacationStart,
CASE WHEN MIN(CASE WHEN (VacationEnd < getdate()) THEN NULL ELSE VacationEnd END) IS NULL THEN MAX(VacationEnd)
ELSE MIN(VacationEnd) END AS VacationEnd
FROM data_vacations
GROUP BY userGUID
) b ON(data_empl_master.userGUID= b.userGUID)
```
What am I doing wrong? How could I fix it?
Also, on a side note: do I perform this filtering in the LEFT JOIN correctly? data\_users is much bigger, with distinct user ids, and I would like to join the available vacation information as in the example above, while still displaying all unique user ids. | Using a common table expression to rank by category (current = 1, future = 2, past = 3) and each category individually by start date/difference from GETDATE(), you can get the result you want by ranking the result using `ROW_NUMBER()`:
```
DECLARE @DATE DATETIME = GETDATE()
;WITH cte AS (
SELECT *, 1 r, VacationStart s FROM data_users
WHERE @DATE BETWEEN VacationStart and DeploymentEnd
UNION ALL
SELECT *,2 r, VacationStart - @DATE s FROM data_users
WHERE VacationStart > @DATE
UNION ALL
SELECT *,3 r, @DATE - DeploymentEnd s FROM data_users
WHERE DeploymentEnd < @DATE
), cte2 AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY username ORDER BY r,s) rn FROM cte
)
SELECT Username, VacationStart, DeploymentEnd FROM cte2 WHERE rn=1;
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!3/e47b0/1).
Getting the date as a variable is necessary to get a consistent `GETDATE()` value over the whole query, otherwise it may not be consistent if called multiple times. | ```
select u.name,s.startdate,s.enddate
from users u
left join
(
select su.name,
max(su.start) as startdate,
max(su.end) as enddate from users su group by su.name
)s on u.name= s.name
group by u.name
``` | Filter LEFT JOINed table with dates to display current event, else future, else past? | [
"",
"sql",
"sql-server-2008",
""
] |
There is a table on which I want several statistics (a summary). The table name is "jobs", and some of the needed statistics are:
The number of jobs that are active, the number of jobs that are in inactive status, the number of jobs which need male force, the number of jobs which pay more than x amount per month, ...
I need a query to pull all these statistics from the "jobs" table and put them into a **one-row** result. The query result should look something like this:
```
+---------------+---------------+---------------+-----+
| stats1_column | stats2_column | stats3_column | ... |
+---------------+---------------+---------------+-----+
| x | y | z | ... |
+---------------+---------------+---------------+-----+
```
By the way, do you think I am **fetching the table summary** properly this way? Or am I wrong, and there is a better way?
I'm working on a PHP project using a MySQL database. | You will probably use the `COUNT()` aggregate function for every category, then `UNION ALL`, and then convert rows to columns with `GROUP_CONCAT()`. Below is a possible query:
```
SELECT
GROUP_CONCAT(if(Category = 'Active Jobs', CategoryCount, NULL)) AS 'Active_Jobs',
GROUP_CONCAT(if(Category = 'Inactive Jobs', CategoryCount, NULL)) AS 'Inactive_Jobs'
FROM
(
SELECT 'Active Jobs' as Category, COUNT(*) as CategoryCount
FROM Jobs
WHERE Status = 'Active'
UNION ALL
SELECT 'Inactive Jobs' as Category, COUNT(*) as CategoryCount
FROM Jobs
WHERE Status = 'Inactive'
) tbl
```
See Sample `Fiddle Demo` | It would be something like this:
```
SELECT SUM(IF(active, 1, 0)) stats_active,
SUM(IF(active, 0, 1)) stats_inactive,
SUM(IF(needs_male, 1, 0)) stats_needs_male,
SUM(IF(pay > 1000, 1, 0)) stats_well_paid,
.... and so on ....
FROM jobs
```
The problem with this approach/query is that it is inefficient because it scans the entire table. If the table has lots of rows, performance will be affected.
If the table has proper indexes, a more efficient way to do it is to run a query for each stat you need. In PHP this would be something like (ignore typos):
```
$stats = array(
'stats_active' => 'SELECT COUNT(*) FROM jobs WHERE active',
'stats_inactive' => 'SELECT COUNT(*) FROM jobs WHERE NOT active',
'stats_needs_male' => 'SELECT COUNT(*) FROM jobs WHERE needs_male',
.... others here ....
);
$result = (object) array();
foreach($stats as $name => $query) {
$result->$name = db_fetch_single_result($query); // database specific code goes here
}
``` | SQL: Fetching a table summary/statistics | [
"",
"mysql",
"sql",
""
] |
I have imported a CSV file that contains string values (e.g. eating) and floating-point values (e.g. 0.87) into a table in my phpMyAdmin database. After I get rid of all the string values and retain only the rows that have the decimal values, I need to convert those values from VARCHAR to DECIMAL/FLOAT so that I can perform a MAX() on this attribute.
How do I do this? Each time I try doing this through the GUI in phpMyAdmin, all my values are automatically rounded off to 0 and 1s.
Please help me! | I think you need to try something like this in MySQL, provided you have admin privileges on your MySQL server.
ALTER TABLE tablename MODIFY columnname DECIMAL(M,D)
for the M,D variables, read this - <http://dev.mysql.com/doc/refman/5.0/en/fixed-point-types.html>
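For example, with hypothetical table and column names (adjust M and D to the precision your data needs):
```
ALTER TABLE mytable MODIFY mycolumn DECIMAL(10,4);
```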
MySQL should then be able to convert the text to a numeric value automatically; it's just that the column's data type is not DECIMAL yet, which is why you can't store any decimals. | Without converting, you can find the maximum using this query:
```
select max(cast(stuff as decimal(5,2))) as mySum from test;
```
## check this [SQLfiddle](http://sqlfiddle.com/#!2/8cbbbc/1/0)
## your demo table:
```
create table test (
name varchar(15),
stuff varchar(10)
);
insert into test (name, stuff) values ('one','32.43');
insert into test (name, stuff) values ('two','43.33');
insert into test (name, stuff) values ('three','23.22');
```
## Your Query:
For SQL Server, you can use:
```
select max(cast(stuff as decimal(5,2))) as mySum from test;
``` | Converting VARCHAR to DECIMAL values in MySql | [
"",
"mysql",
"sql",
"phpmyadmin",
"typeconverter",
""
] |
I need to make the left table's rows distinct on a left join.
Instead of this
```
| name | last name | phone |
|--------|-------------|---------|
| name1 | lastname1 | 1234567 |
|--------|-------------|---------|
| name1 | lastname1 | 2345678 |
|--------|-------------|---------|
| name2 | lastname2 | 3456789 |
```
I need this
```
| name | last name | phone |
|--------|-------------|---------|
| name1 | lastname1 | 1234567 |
| | | 2345678 |
|--------|-------------|---------|
| name2 | lastname2 | 3456789 |
```
I tried with a SELECT DISTINCT but without success...
I also tried a GROUP BY, but it hides the second row. | Mariano, followed you here from your WP post that got closed.
```
<?php
$results = your_sql_query_here();
$data = array();
foreach( $results as $result ) {
// Make a new array node for each name
if ( ! isset( $data[$result['name']] ) )
$data[$result['name']] = array();
$data[$result['name']][] = $result['phone'];
}
```
This will give you something like this
```
Array(
['name1'] => Array(
[0] => 5123451,
[1] => 5123452
),
['name2'] => Array(
[0] => 5123453,
[1] => 5123454
) )
```
You can then just do a for loop of your `$data` array, using the key as your `name1` value.
Or store entire data sets
```
<?php
$results = your_sql_query_here();
$data = array();
foreach( $results as $result ) {
// Make a new array node for each name
if ( ! isset( $data[$result['name']] ) )
$data[$result['name']] = array();
$data[$result['name']][] = $result;
}
```
Now you will have access to all nodes, but grouped by `name`.
```
Array(
['name1'] => Array(
[0] => Array( 'name' => 'name1', 'phone' => 4165123, 'another_field' => 1 ),
[1] => Array( 'name' => 'name1', 'phone' => 4165157, 'another_field' => 0 ),
[2] => Array( 'name' => 'name1', 'phone' => 4225157, 'another_field' => 0 )
),
['name2'] => Array(
[0] => Array( 'name' => 'name2', 'phone' => 4165123, 'another_field' => 1 ),
[1] => Array( 'name' => 'name2', 'phone' => 4572321, 'another_field' => 1 ),
[2] => Array( 'name' => 'name2', 'phone' => 5235157, 'another_field' => 0 )
) )
``` | You need [`GROUP_CONCAT`](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat):
> This function returns a string result with the concatenated non-NULL values from a group.
So, your code will be:
```
SELECT name,
last_name,
GROUP_CONCAT(phone) AS phones
FROM table1
GROUP BY name, last_name;
``` | Distinct left table rows on a left join | [
"",
"mysql",
"sql",
"left-join",
""
] |
I am trying to get this:
1. select all the rows where [result] != 'Success'
2. count each unique [result]
3. tell me what % each [result] takes of all [result]s
It seems like the first two work fine, but the last column is always '0'.
This is my query:
```
SELECT [result] , COUNT( * ) AS Total,(
COUNT( * ) /
(
SELECT COUNT( * )
FROM[my_table]
)
) * 100 AS '%'
FROM [my_table]
WHERE ([result]!='Success')
GROUP BY [result]
```
this is the result:

Thank you all! | You need to convert the numbers to decimals or floating point numbers, or you will be operating on integers.
```
SELECT [result] , COUNT( * ) AS Total,
(1.0 * COUNT( * ) / (
SELECT COUNT( * )
FROM[my_table]
)
) * 100 AS '%'
FROM [my_table]
WHERE ([result]!='Success')
GROUP BY [result]
``` | This should do it, as long as the first number is a float the result will be a float
```
SELECT [result]
,COUNT(*) AS Total
,CAST(COUNT(*) AS FLOAT) / (SELECT COUNT(*) FROM [my_table]) AS '%'
FROM [my_table]
WHERE ([result]!='Success')
GROUP BY [result]
``` | sql Calculating a percentage for a group | [
"",
"sql",
"sql-server",
"subquery",
""
] |
I have a column (of VARCHAR type) with values like these:
```
1
2
3
1.1
1.1.1
1.2.1
5
4
7
8
9
10
10.2
10.1
```
I hope to select this column and order it naturally, like the following:
```
1
1.1
1.1.1
1.2.1
2
3
4
5
...
```
I have tried ordering it with this, for example, and a lot of other queries:
```
SELECT data
FROM sample
ORDER BY LEN(data), data
```
Does someone have an idea how to do this? | You seem to be hoping to order a series of hierarchically named items in a natural order. It looks like these items' names take the form:
```
token [ .token [. token [ .token ]]]
```
where subsequent tokens after the first are optional.
I suppose you want each token, if it's numeric, to be handled as a number. That is, for example, you want `1.123` to come after `1.2` because `123` is numerically greater than `2`.
You didn't say what you want done with alphabetical tokens, e.g. `401.k` and `403.b`. I suppose they should come after the numerical ones, but in lexical order.
This query (<http://sqlfiddle.com/#!2/81756/2/0>) will do the trick out to five hierarchical levels of tokens.
```
SELECT col
FROM T
ORDER BY
FLOOR(SUBSTRING_INDEX(col,'.',1)),
SUBSTRING_INDEX(col,'.',1),
FLOOR(SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',1)))),
SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',1))),
FLOOR(SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',2)))),
SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',2))),
FLOOR(SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',3)))),
SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',3))),
FLOOR(SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',4)))),
SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',4)))
```
Why does this work? `FLOOR()` converts the leftmost part of a string to an integer, so it picks up the leading integer. If it doesn't find any numbers in the string it's trying to convert, it returns zero.
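You can check these building blocks in isolation (the literals are my own examples):
```
SELECT FLOOR('12.7');                      -- 12: the leading number is parsed
SELECT FLOOR('k');                         -- 0: no leading number
SELECT SUBSTRING_INDEX('1.12.3', '.', 2);  -- '1.12': everything left of the 2nd dot
```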
And, `SUBSTRING(col, 2+LENGTH(SUBSTRING_INDEX(col,'.',NNN)))` picks up the part of the `col` item to the right of the `NNN`th dot. | try this
```
ORDER BY data, LEN(data)
```
or this
```
ORDER BY CONVERT(SUBSTRING_INDEX(data, ',', -1), SIGNED), Len(data)
```
I give the demo in MySQL, as T-SQL is not available in SQLFiddle.
[**DEMO**](http://sqlfiddle.com/#!2/8cd6e/7) | Naturally ORDER a column containing hierarchical item names | [
"",
"mysql",
"sql",
"sql-server",
"t-sql",
""
] |
I was lucky enough to find this awesome piece of code on Stack Overflow; however, I wanted to change it so it showed each half hour instead of every hour, but messing around with it only caused me to ruin the query, haha.
This is the SQL:
```
SELECT CONCAT(HOUR(created_at), ':00-', HOUR(created_at)+1, ':00') as hours,
COUNT(*)
FROM urls
GROUP BY HOUR(created_at)
ORDER BY HOUR(created_at) ASC
```
How would I go about getting a result every half an hour? :)
Another thing is that if there is a half hour with no results, I would like it to return 0 instead of just skipping that step. It looks kind of weird when I do statistics over the query and it just skips an hour because there were none :P | If the format isn't too important, you can return two columns for the interval. You might even just need the start of the interval, which can be determined by:
```
date_format(created_at - interval minute(created_at)%30 minute, '%H:%i') as period_start
```
the alias can be used in GROUP BY and ORDER BY clauses. If you also need the end of the interval, you will need a small modification:
```
SELECT
date_format(created_at - interval minute(created_at)%30 minute, '%H:%i') as period_start,
date_format(created_at + interval 30-minute(created_at)%30 minute, '%H:%i') as period_end,
COUNT(*)
FROM urls
GROUP BY period_start
ORDER BY period_start ASC;
```
Of course you can also concatenate the values:
```
SELECT concat_ws('-',
date_format(created_at - interval minute(created_at)%30 minute, '%H:%i'),
date_format(created_at + interval 30-minute(created_at)%30 minute, '%H:%i')
) as period,
COUNT(*)
FROM urls
GROUP BY period
ORDER BY period ASC;
```
Demo: <http://rextester.com/RPN50688>
> Another thing, is that, if it there is half an hour with no results, I
> would like it to return 0
If you use the result in a procedural language, you can initialize all 48 rows with zero in a loop and then "inject" the non-zero rows from the result.
However - if you need it to be done in SQL, you will need a table for a LEFT JOIN with at least 48 rows. That could be done inline with a "huge" UNION ALL statement, but (IMHO) it would be ugly. So I prefer to have a sequence table with one integer column, which can be very useful for reports. To create that table I usually use `information_schema.COLUMNS`, since it is available on any MySQL server and has at least a couple of hundred rows. If you need more rows - just join it with itself.
Now let's create that table:
```
drop table if exists helper_seq;
create table helper_seq (seq smallint auto_increment primary key)
select null
from information_schema.COLUMNS c1
, information_schema.COLUMNS c2
limit 100; -- adjust as needed
```
Now we have a table with integers from 1 to 100 (though right now you only need 48 - but this is for demonstration).
Using that table we can now create all 48 time intervals:
```
select time(0) + interval 30*(seq-1) minute as period_start,
time(0) + interval 30*(seq) minute as period_end
from helper_seq s
where s.seq <= 48;
```
We will get the following result:
```
period_start | period_end
00:00:00 | 00:30:00
00:30:00 | 01:00:00
...
23:30:00 | 24:00:00
```
Demo: <http://rextester.com/ISQSU31450>
Now we can use it as a derived table (subquery in FROM clause) and LEFT JOIN your `urls` table:
```
select p.period_start, p.period_end, count(u.created_at) as cnt
from (
select time(0) + interval 30*(seq-1) minute as period_start,
time(0) + interval 30*(seq) minute as period_end
from helper_seq s
where s.seq <= 48
) p
left join urls u
on time(u.created_at) >= p.period_start
and time(u.created_at) < p.period_end
group by p.period_start, p.period_end
order by p.period_start
```
Demo: <http://rextester.com/IQYQ32927>
Last step (if really needed) is to format the result. We can use `CONCAT` or `CONCAT_WS` and `TIME_FORMAT` in the outer select. The final query would be:
```
select concat_ws('-',
time_format(p.period_start, '%H:%i'),
time_format(p.period_end, '%H:%i')
) as period,
count(u.created_at) as cnt
from (
select time(0) + interval 30*(seq-1) minute as period_start,
time(0) + interval 30*(seq) minute as period_end
from helper_seq s
where s.seq <= 48
) p
left join urls u
on time(u.created_at) >= p.period_start
and time(u.created_at) < p.period_end
group by p.period_start, p.period_end
order by p.period_start
```
The result would look like:
```
period | cnt
00:00-00:30 | 1
00:30-01:00 | 0
...
23:30-24:00 | 3
```
Demo: <http://rextester.com/LLZ41445> | You can add some math to calculate 48 intervals instead of 24 and put it into another field by which you're going to group and sort.
```
SELECT HOUR(created_at)*2+FLOOR(MINUTE(created_at)/30) as interval48,
if(HOUR(created_at)*2+FLOOR(MINUTE(created_at)/30) % 2 =0,
CONCAT(HOUR(created_at), ':00-', HOUR(created_at), ':30'),
CONCAT(HOUR(created_at), ':30-', HOUR(created_at)+1, ':00')
) as hours,
count(*)
FROM urls
GROUP BY HOUR(created_at)*2+FLOOR(MINUTE(created_at)/30)
ORDER BY HOUR(created_at)*2+FLOOR(MINUTE(created_at)/30) ASC
```
Example of result:
```
0 0:00-0:30 2017
1 0:30-1:00 1959
2 1:30-2:00 1830
3 1:30-2:00 1715
4 2:30-3:00 1679
5 2:30-3:00 1688
```
The result of original query posted by Jazerix was:
```
0:00-1:00 3976
1:00-2:00 3545
2:00-3:00 3367
``` | Group by half hour interval | [
"",
"mysql",
"sql",
""
] |
Assume the following
```
col1 col2 col3 col4
------+----------+---------+---------
abc | | |
```
Yes, col1-4 have a space in them!
I want to select the column that is not a space. On row 1 it's col1, but on row 20 it may be col3, on row 55 it may be col2, and so on. I need to return just that column.
There will always be only one column with a stored value within this range of four columns, **I just need the one that actually has information in it**.
This will be part of a greater query for a report, so regardless of what column `abc` is in I need that to look the same in every result case. Meaning I can't have the results be `col1` for one case and `col2` for the other because the report won't recognize. The column needs to always be called the same.
Yes, I know it's better to store NULLs versus spaces, and that with four columns where only one can have data, you might as well use one. I've complained enough about that, so don't rip me a new one about bad db design because I AGREE. | ```
SELECT LTRIM(RTRIM(col1 + col2 + col3 + col4))
```
Here we go... Why not add bad code to bad design. You could technically add all of the columns together and then trim them for leading/trailing spaces. I don't recommend it for performance on large scale deployments. Heck, I don't recommend this for any production script but I've been here before... Gotta do what you can to get it done. | Why not use a CASE statement, like
```
CASE WHEN col1 <> ' ' THEN col1
WHEN col2 <> ' ' THEN col2
WHEN col3 <> ' ' THEN col3
WHEN col4 <> ' ' THEN col4
END
``` | SQL - Select only one value from multiple fields | [
"",
"sql",
""
] |
I have two Microsoft SQL queries, both of which give two columns but a different number of rows. These are the results:
First query:
```
ProductID, Inventory
1, 100
2, 50
3, 200
```
Second query:
```
ProductID, Sales
1, -20
2, -50
```
I want to get the below output:
```
ProductID, BalanceInventory
1, 80
2, 0
3, 200
```
I have tried using plus sign in the query like this:
```
Select t1.ProductID,
t1.Inventory + (Case when t2.Sales is null then 0 else t2.Sales end) as 'BalanceInventory'
from t1 full join t2 on t1.ProductID = t2.ProductID
```
The issue with this is that the DB structure is designed in such a way that Sales and Inventory cannot be run in the same query. **So, I need to run two separate queries and then add the two columns Inventory and Sales for every ProductID**.
The actual DB structure and query is much more complex. I have tried to simplify the problem by creating a hypothetical one.
Please help. This is eating up my head.
Thanks,
Karan | Try this
```
select inv.ProductID, (inv.Inventory + COALESCE(s.Sales, 0)) as BalanceInventory -- COALESCE keeps products with no sales
from
(
select ProductID, Inventory
from [table]
where xxx
) inv
left outer join
(
select ProductID, Sales
from [table]
where xxx
) s on (s.ProductID = inv.ProductID)
``` | The other option is
```
SELECT UnionTable.ProductID, SUM(UnionTable.BalanceInventory) AS BalanceInventory
FROM (
SELECT ProductID, Inventory As BalanceInventory
FROM Table1
UNION ALL
SELECT ProductID, Sales As BalanceInventory
FROM Table2
) As UnionTable
GROUP BY UnionTable.ProductID
```
To subtract instead of add, simply make one part negative:
```
SELECT UnionTable.ProductID, SUM(UnionTable.BalanceInventory) AS BalanceInventory
FROM (
SELECT ProductID, Inventory As BalanceInventory
FROM Table1
UNION ALL
SELECT ProductID, -Sales As BalanceInventory
FROM Table2
) As UnionTable
GROUP BY UnionTable.ProductID
``` | How to add two columns from two separate MS SQL queries with different number of rows | [
"",
"sql",
"sql-server",
""
] |
I have a table employees which has this structure:
```
ID | firstName | lastName | reportsTo | jobTitle |
--------------------------------------------------
```
I want to write a single query which will show me the firstName and lastName of each employee and which employee he/she reports to (this column contains the ID of that employee).
So example of table data:
```
ID | firstName | lastName | reportsTo | jobTitle |
----------------------------------------------------
35 | John | Green | 36 | TeamLeader |
----------------------------------------------------
36 | Annie | Red | null | Supervisor |
----------------------------------------------------
```
So John Green has ID 35 and he reports to Annie Red. How to do this in a query?
So far I have only come up with this basic query:
```
SELECT ID, firstName, lastName from employees;
```
Thanks | ```
SELECT e.ID, e.firstName, e.lastName,ee.firstName, ee.lastName from employees e
JOIN employees ee
ON e.reportsTo=ee.ID
```
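Note that the INNER JOIN above drops employees whose `reportsTo` is NULL (Annie Red in the sample data); if those employees should be listed too, a LEFT JOIN variant (my addition) keeps them with a NULL manager:
```
SELECT e.firstName, e.lastName, ee.firstName AS managerFirstName, ee.lastName AS managerLastName
FROM employees e
LEFT JOIN employees ee ON e.reportsTo = ee.ID
```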
The name of the employee and the name which he reports to. | Try with a subquery
```
SELECT ID, firstName, lastName,
(
SELECT a.firstName
FROM employees a
WHERE a.ID = reportsTo
) as report
from employees;
``` | MySQL, single table multiple columns | [
"",
"mysql",
"sql",
""
] |
I'm trying to figure out the best way to get a query to run multiple times with different parameters. I've tried putting it in a stored procedure and running it with cursors, but I'm pretty much a novice with the cursor concept. Here is the query and my first attempt with a cursor.
```
SELECT
AVG([processingseconds])
FROM [nucor_historical_data].[dbo].[test_Lift_Matrix]
Where ActualGauge between 0 and .21875 and ActualWidth between 0 and 55
and inches between 0 and 120 and MaxLiftWeight between 0 and 10000 and
processingseconds is not null
```
So the parameters I need to loop through are in the where statement. I have combinations for all these groupings you see in another table.
Someone suggested trying this to me earlier in another Stack question, so I tested it with one parameter but couldn't get it working. Is there a better way to attempt this?
```
DECLARE @param varchar(200)
-- getting your parameter from the table
DECLARE curs CURSOR LOCAL FAST_FORWARD FOR
SELECT gauge FROM groupings
OPEN curs
FETCH NEXT FROM curs INTO @param
-- executing your stored procedure once for every value of your parameter
WHILE @@FETCH_STATUS = 0 BEGIN
EXEC group_average @param
FETCH NEXT FROM curs INTO @param
END
CLOSE curs
DEALLOCATE curs
``` | Here is a watered down example of what you are trying to do, that is, run the select statement repeatedly using the values from another table as the inputs. You'll need to adapt to your specifics, I only did a subset of your fields:
```
DECLARE @UniqueId int
DECLARE @AgMin numeric(10,4)
DECLARE @AgMax numeric(10,4)
DECLARE @tmp TABLE (UniqueId INT, AgMin numeric(10,4), AgMax numeric(10,4))
INSERT @tmp SELECT ID, AGMIN, AGMAX FROM [YOUROTHERTABLEWITHTHESENUMBERS]
SELECT TOP 1 @UniqueId=UniqueId, @AGMin=AGMin, @AGMAX=AgMax FROM @tmp
WHILE (@@rowcount > 0)
BEGIN
SELECT AVG([processingseconds]) FROM test_Lift_Matrix Where ActualGauge between @AGMIN and @AGMAX (the rest of your conditions...)
DELETE FROM @tmp WHERE UniqueId=@UniqueId
SELECT TOP 1 @UniqueId=UniqueId, @AGMin=AGMin, @AGMAX=AgMax FROM @tmp
END
``` | A stored procedure is the way to go here - passing the parameters as arguments. | How to run a query multiple times with different parameters? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two tables that share the same ID:
Table Colors:
```
id color
---------
1 blue
1 red
1 green
1 blue
2 green
2 yellow
2 orange
3 red
3 orange
3 yellow
3 orange
```
Table Names:
```
id name
---------
1 John
2 Anna
3 Mike
```
What would be the query to find the names where any color occurs more than once?
For example: John and Mike are the answer, as John has blue twice and Mike has orange twice.
many thanks | ```
SELECT A.name, B.color, count(*) 'color count'
FROM Names A JOIN Colors B on A.id = B.id
GROUP BY A.id, B.color
HAVING Count(*)>1
```
Check it here
<http://sqlfiddle.com/#!2/ee040/3> | ```
SELECT name, color, COUNT(*)
FROM Names
JOIN Colors
ON Names.id = Colors.id
GROUP BY name, color
HAVING COUNT(*) > 1
``` | sql select if they have more than once | [
"",
"mysql",
"sql",
""
] |
I've found some similar questions but haven't been able to get anything to work yet. I'm very much a novice with little SQL experience.
I have a column END\_DATE as Varchar(10) where all the rows follow the mm/dd/yyyy format and I would like to convert it to date. I have an empty column formatted as date if that helps. There are 36 million rows. | ```
SELECT CONVERT(DATETIME,YourColumn,101) FROM YourTable
```
101 is mm/dd/yyyy format.
You zany backwards americans :)
To update your existing column
```
UPDATE YourTable
SET YourNewColumn = CONVERT(DATETIME,YourOldColumn,101)
```
Since it appears you have invalid data, use this method to isolate it:
```
UPDATE YourTable
SET YourNewColumn = CONVERT(DATETIME,YourOldColumn,101)
WHERE SomeTableKey BETWEEN ASmallCode AND ABiggerCode
```
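Another way to isolate the bad rows (my addition; note that `ISDATE` depends on the session's language and DATEFORMAT settings, so test it first):
```
-- rows that will not convert:
SELECT YourOldColumn FROM YourTable WHERE ISDATE(YourOldColumn) = 0

-- convert only the valid ones:
UPDATE YourTable
SET YourNewColumn = CONVERT(DATETIME, YourOldColumn, 101)
WHERE ISDATE(YourOldColumn) = 1
```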
Find a key in your table that you can use to divide up the data and try updating half the table... now halve it again and again until you find the offending data. Post the data here and we will come up with some code to allow for it. | I think you should convert END\_DATE to the DATETIME type: with 36 million rows, it will give a performance boost when you do not have to cast or convert it to datetime in every select statement.
To answer your question, you can do something like
select CAST(END\_DATE AS DATETIME) FROM MyTable | Convert column with data MM/DD/YYYY varchar to date in sql server? | [
"",
"sql",
"sql-server",
""
] |
I have read many posts, but I can't understand the error or how my query should be written.
I want the poster\_id values that have more than 1000 likes:
```
SELECT * FROM `posters`
WHERE title LIKE '%it%'
AND id = (
SELECT poster_id FROM `poster_likes`
WHERE action=1
GROUP BY poster_id
HAVING COUNT(*) < 100
)
```
The sub-query returns more than one row.
Can anyone please write this query for me? Thanks in advance. | You probably mean to be using `IN()` instead of `=`:
```
SELECT * FROM `posters`
WHERE title LIKE '%it%'
AND id IN (
SELECT poster_id FROM `poster_likes`
WHERE action=1
GROUP BY poster_id
HAVING COUNT(*) < 100
)
``` | Here you are using `where id =(subquery)` but your subquery returns more than one value so:
change
```
where id =(subquery)
```
To
```
where id in (subquery)
``` | "Subquery returns more than 1 row" Error | [
"",
"mysql",
"sql",
"subquery",
""
] |
I have been trying for many hours to do this. I need to create 100,000 rows, and the timestamp is the primary key; that's why it must be unique.
I wrote some code that can generate 100 random timestamps, but if I try it for more than 100, such as 1000 or 10000, I get an error like this:
> ERROR: duplicate key value violates unique constraint "position\_pkey"
> SQL state: 23505 Detail: Key ("timestamp")=(2014-03-09
> 04:03:16.843499) already exists.
I don't know how to create 100,000 unique timestamps.
This is my function:
```
CREATE OR REPLACE FUNCTION generate_random(count integer DEFAULT 1)
RETURNS SETOF timestamp AS
$BODY$
BEGIN
RETURN QUERY SELECT distinct NOW()::timestamp + '10ms'::interval *RANDOM()*RANDOM()
FROM generate_series(0,count);
--SELECT (NOW() - '10000000'::INTERVAL * ROUND(RANDOM() * RANDOM()))::timestamp
-- FROM GENERATE_SERIES(1, count);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 1000;
```
And this is the generate\_random function that generates random double numbers:
```
CREATE OR REPLACE FUNCTION generate_random(c integer DEFAULT 1, min double precision DEFAULT 0.0, max double precision DEFAULT 1.0)
RETURNS SETOF double precision AS
$BODY$
BEGIN
RETURN QUERY SELECT min + (max - min) * RANDOM()
FROM GENERATE_SERIES(1, c);
END;
$BODY$
LANGUAGE plpgsql VOLATILE STRICT
COST 100
ROWS 1000;
```
And I call these functions in a query like this:
```
INSERT INTO "mytable"(
"timestamp", x, y, z)
VALUES (generate_random(1000), generate_random(1000, 0, 50), generate_random(1000, 0, 50), generate_random(1000, 0, 50));
```
So, I want to create 100,000 rows with unique timestamps and some random double numbers (which don't need to be unique). | The only way to guarantee a unique timestamp is to add a delay between the inserts. Instead, you can use a serial primary key:
```
create table mytable (
s serial primary key,
ts timestamp,
x int,
y int,
z int
);
insert into mytable (ts, x, y, z)
select
clock_timestamp(),
500 * RANDOM(),
500 * RANDOM(),
500 * RANDOM()
from generate_series(1, 1000)
;
``` | [There's a function on the PostgreSQL wiki, `pseudo_encrypt`, that will produce pseudo-random values distributed across the 32-bit integer space](http://wiki.postgresql.org/wiki/Pseudo_encrypt).
The same approach, of a 1:1 mapping from a sequence using a feistel cypher, could be used for timestamps.
Or you could use a non-repeating iterative pseudo-random number generator.
The main thing is that you must be careful if scaling the result down into a smaller space, as collisions will appear. | unique random timestamp in postgresql | [
"",
"sql",
"postgresql",
"random",
"timestamp",
""
] |
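The accepted idea — let a serial key, not the timestamp, carry the uniqueness — can be sketched in Python with SQLite standing in for PostgreSQL (`AUTOINCREMENT` plays the role of `serial`; the table layout and sample sizes are adapted from the question):

```python
import random
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mytable (
    s  INTEGER PRIMARY KEY AUTOINCREMENT,  -- plays the role of a serial
    ts TEXT, x REAL, y REAL, z REAL)""")

base = datetime(2014, 3, 9)
rows = [((base + timedelta(microseconds=i)).isoformat(),
         random.uniform(0, 50), random.uniform(0, 50), random.uniform(0, 50))
        for i in range(1000)]
conn.executemany("INSERT INTO mytable (ts, x, y, z) VALUES (?, ?, ?, ?)", rows)

# Every row got a distinct key regardless of any timestamp collisions.
count = conn.execute("SELECT COUNT(DISTINCT s) FROM mytable").fetchone()[0]
print(count)  # 1000
```

The timestamp becomes ordinary data, so collisions no longer violate any constraint.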
I was practicing **self join** and here's a thing I do not understand in writing query.
I have a table `'employee'`
The employee table contains three records.
```
+-----+---------------+------------+
| id | employee | manager_id |
+-----+---------------+------------+
| 1 | Ola | NULL |
| 2 | Ahmed | 1 |
| 3 | Tove | 1 |
+----------+----------+------------+
```
Last column manager\_id refers to the first column id making Ola manager of Ahmed and Tove.
If I write the query like
```
SELECT emp.employee as NAME, manager.employee as MANAGER
FROM employee as emp, employee as manager
WHERE emp.id = manager.manager_id
```
This result (incorrectly) makes Ahmed and Tove the managers.
Whereas
```
SELECT emp.employee as NAME, manager.employee as MANAGER
FROM employee as emp, employee as manager
WHERE manager.id = emp.manager_id
```
makes it correct. Could anyone please explain? | A self join is like an inner join in which two or more instances of the same table are joined through a common column/field. Such an (inner) join returns the common rows, based on the join condition.
The employee table contains three records. In this case,
employee as emp:
```
+-----+---------------+------------+
| id | employee | manager_id |
+-----+---------------+------------+
| 1 | Ola | NULL |
| 2 | Ahmed | 1 |
| 3 | Tove | 1 |
+----------+----------+------------+
```
employee as manager:
```
+-----+---------------+------------+
| id | employee | manager_id |
+-----+---------------+------------+
| 1 | Ola | NULL |
| 2 | Ahmed | 1 |
| 3 | Tove | 1 |
+----------+----------+------------+
```
**Now First case: Lets try this to understand the difference:**
SELECT `emp.*`, `manager.*`
FROM employee as emp, employee as manager
WHERE emp.id = manager.manager\_id
```
+-----+---------------+------------+-----+---------------+------------+
| id | employee | manager_id | id | employee | manager_id |
+-----+---------------+------------+-----+---------------+------------+
| 1 | Ola | NULL | 2 | Ahmed | 1 |
| 1 | Ola | NULL | 3 | Tove | 1 |
+----------+----------+------------+----------+----------+------------+
```
See, emp.id = manager.manager\_id . Thus, emp.employee as NAME is giving rows of Ola from first table & manager.employee as MANAGER is giving rows of Ahmed & Tove from the second table.
**Now Second case: Lets try this to understand the difference:**
SELECT `emp.*`, `manager.*`
FROM employee as emp, employee as manager
WHERE manager.id = emp.manager\_id
```
+-----+---------------+------------+-----+---------------+------------+
| id | employee | manager_id | id | employee | manager_id |
+-----+---------------+------------+-----+---------------+------------+
| 2 | Ahmed | 1 | 1 | Ola | NULL |
| 3 | Tove | 1 | 1 | Ola | NULL |
+----------+----------+------------+----------+----------+------------+
```
See, manager.id = emp.manager\_id . Thus, emp.employee as NAME is giving rows of Ahmed & Tove from first table & manager.employee as MANAGER is giving rows of Ola from the second table. | Writing WHERE emp.id = manager.manager\_id does not make much sense, because manager (or the row you want to display as a manager) does NOT have a manager\_id. I.e you must start with emp.manager\_id because you want to list employees and to each manager\_id of that employee you want to list the corresponding manager. | Understanding Self Join | [
"",
"mysql",
"sql",
"self-join",
""
] |
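The correct form from the question can be run directly; here is a self-contained sketch using Python's `sqlite3` with the three rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER, employee TEXT, manager_id INTEGER);
INSERT INTO employee VALUES (1, 'Ola', NULL), (2, 'Ahmed', 1), (3, 'Tove', 1);
""")
# Start from each employee's manager_id and look it up in a second
# instance of the same table to find the manager's name.
rows = conn.execute("""
SELECT emp.employee AS name, manager.employee AS manager
FROM employee AS emp
JOIN employee AS manager ON manager.id = emp.manager_id
ORDER BY emp.id
""").fetchall()
print(rows)  # [('Ahmed', 'Ola'), ('Tove', 'Ola')]
```

Ola drops out of the NAME column because her `manager_id` is NULL, which matches no `id`.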
I am using a text box to update a user record, but it submits only half of the name to the database; e.g. if the name is 'John Mathew' then it only submits 'John'. Even while debugging I checked that the value sent from the textbox to the action is complete, yet only half of it ends up submitted. Why? I noticed my textbox is bound to the EmplName field in the database and picks the name up from there; the value attribute shows half the name, which is why half the name is submitted.
Here is my code:
Controller:
```
public ActionResult InsertEmployeeEditedDetail(String EmpName, String DeptID, String ShiftId, String EntryDate, String Salary, String Email, bool Approval)
{
int? EmplId = Convert.ToInt32(Session["EmpEdit"]);
var UpdateRec = DataContext.UpdateEmployeeDetails_Sp(EmplId, EmpName, DeptID, ShiftId, EntryDate, Salary, Email, Approval);
return View();
}
```
View:
```
@using EmployeeAttendance_app.Models
@model IEnumerable<GetEmployeeEditDetails_SpResult>
@{
var Item = Model.FirstOrDefault();
}
<style type="text/css">
</style>
<div>
@using (Html.BeginForm("InsertEmployeeEditedDetail", "Home", FormMethod.Post))
{
<label id="lblName" class="editEmp_label">Name</label>
<input type="text" value= @Item.EmplName name="EmpName" placeholder="Update Name" />
<br />
<label id="lblDept" class="editEmp_label">Department</label>
@Html.DropDownList("DeptID", @Item.DeptName)
<br />
<label id="lblShift" class="editEmp_label">Shift</label>
@Html.DropDownList("ShiftId", @Item.ShiftName)
<br />
<label id="lblEntryDate" class="TxtBoxFrom editEmp_label">Entry Date</label>
<input type="text" value= @Item.EntryDate class="TxtBoxTo" name="EntryDate" placeholder="Update Date" />
<br />
<label id="lblSalary" class="editEmp_label">Salary</label>
<input type="text" value= @Item.BasicSalary name="Salary" placeholder="Update Salary" />
<br />
<label id="lblEmail" class="editEmp_label">Email</label>
<input type="text" value= @Item.EmailAdd name="Email" placeholder="Update Email" />
<br />
<label id="lblApproved" class="editEmp_label">Overtime Approval</label>
@Html.CheckBox("Approval", @Convert.ToBoolean( @Item.OvertimeApproved))
<br />
<button type="submit" id="btnUpdate" class="button_AdminPanel" style="width:75px" name="btnSubmit">Update</button>
}
```
Stored Procedure:
```
PROCEDURE [dbo].[UpdateEmployeeDetails_Sp]
@Emplid int,
@EmplName varchar(40),
@DeptId char(36),
@ShiftId char(40),
@EntryDate char(10),
@Salary varchar(50),
@EmailAdd varchar(50),
@OvertimeApproval bit
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
Update HrEmployee
Set EmplName = @EmplName, EntryDate=@EntryDate, BasicSalary=@Salary,
EmailAdd=@EmailAdd, OvertimeApproved= @OvertimeApproval,
DeptID=@DeptId, ShiftID=@ShiftId
where EmplID =@Emplid
END
``` | enclose value attribute value in double quots value="@Item.EmplName". If there is space in the name, Last name (after space) becomes attribute to the input control which is not visible and can't be sent back. | it would be better if you use `@Html.TextBoxFor()`,this would handle binding as well instead of explicitely setting the textbox value using 'value' | Value is splited into half before submission | [
"",
"asp.net",
"sql",
"asp.net-mvc",
"asp.net-mvc-3",
""
] |
I merged two databases by union following that:
```
select model,price
from pc
union
select model,price
from laptop
```
It's working
But when i use select statement to pick data. It so not working.
Here is my code
```
select model
from ( select model,price
from pc
union
select model,price
from laptop
)
```
Error happens at the last operator ")".
I don't know why it errors and how to solve | You have to set **Alias** Name for the Select
Try like this
```
Select model
From ( Select model,price
From pc
Union
Select model,price
From laptop
) AS t
```
Or:
```
With tbl as
( Select model,price
From pc
Union
Select model,price
From laptop
)
Select model from tbl;
``` | try this
```
select model
from ( select model,price
from pc
union
select model,price
from laptop
) as t
```
it may be because u are missing table alias. | Pick up data from union of two tables | [
"",
"sql",
"sql-server",
""
] |
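The aliased form from the accepted answer runs as-is; here is a minimal sketch with Python's `sqlite3` and made-up pc/laptop data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pc (model TEXT, price REAL);
CREATE TABLE laptop (model TEXT, price REAL);
INSERT INTO pc VALUES ('1121', 600), ('1232', 400);
INSERT INTO laptop VALUES ('1298', 700), ('1121', 600);
""")
# The derived table gets the alias "t", as the answer requires;
# UNION also de-duplicates the ('1121', 600) row present in both tables.
rows = conn.execute("""
SELECT model
FROM ( SELECT model, price FROM pc
       UNION
       SELECT model, price FROM laptop ) AS t
ORDER BY model
""").fetchall()
print(rows)  # [('1121',), ('1232',), ('1298',)]
```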
How I can check field value updated in this trigger:
```
ALTER TRIGGER dbo.OrderApprovedSms
ON dbo.[Order]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
/*How can check approve date is updated*/
IF (/*ApproveDate is updated*/)
BEGIN
INSERT INTO office.SmsSendBuffer
( Number ,
Body
)
SELECT 'xxxxxx','ORDER APPROVED!'
END
END
This is simple: you can use the UPDATE() function to check whether the column was included in the update.
```
ALTER TRIGGER dbo.OrderApprovedSms
ON dbo.[Order]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
/*How can check approve date is updated*/
IF (UPDATE (ApproveDate))
BEGIN
INSERT INTO office.SmsSendBuffer
( Number ,
Body
)
SELECT 'xxxxxx','ORDER APPROVED!'
END
END
``` | It would be something like:
```
ALTER TRIGGER dbo.OrderApprovedSms
ON dbo.[Order]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO office.SmsSendBuffer
( Number ,
Body
)
SELECT 'xxxxxx','ORDER APPROVED!' --No columns from inserted or deleted?
FROM inserted i INNER JOIN deleted d
ON i.<primary key column 1> = d.<primary key column 1> AND
i.<primary key column 2> = d.<primary key column 2> AND
i.ApprovedDate != d.ApprovedDate --Not sure what actual check you wanted to perform
END
```
Where [`deleted` and `inserted`](http://technet.microsoft.com/en-us/library/ms191300.aspx) are pseudo-tables that contain the row(s) that the `UPDATE` statement has affected (corresponding to their state before and after the statement)
There's a function, called [`UPDATE`](http://technet.microsoft.com/en-us/library/ms187326.aspx) which answers the question "was this column subject to an update during this `UPDATE` statement?" but it a) Only answers for the entire **set** of rows in `inserted` and `deleted` and b) doesn't let you distinguish updates that has no actual effects from those that did (e.g. if you do `SET Column=2` where `Column` is already `2`, it still answers that `Column` was updated)
---
As an example of how pointless I think the `UPDATE` function is, consider the following:
```
create table T (ID int not null,Col1 int not null)
go
create trigger TT on T after update
as
IF UPDATE(Col1)
BEGIN
RAISERROR('Hello',10,1) WITH NOWAIT
END
go
update T set Col1 = Col1
```
Which prints `Hello` in the output - so, we have an `UPDATE` that affected `0` rows (because the table is new) and even if there were any rows, would not have changed any data. | Check if field value updated on sql server trigger | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
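SQL Server's UPDATE() function only reports that the column appeared in the SET list, not that its value actually changed. As a rough cross-database illustration of the "only fire on a real change" idea from the second answer, here is a sketch using SQLite triggers (`AFTER UPDATE OF column` plus a `WHEN` clause; the table names are adapted from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, approvedate TEXT);
CREATE TABLE sms (number TEXT, body TEXT);
CREATE TRIGGER order_approved
AFTER UPDATE OF approvedate ON orders
WHEN OLD.approvedate IS NOT NEW.approvedate   -- only real changes (NULL-safe)
BEGIN
  INSERT INTO sms VALUES ('xxxxxx', 'ORDER APPROVED!');
END;
INSERT INTO orders VALUES (1, NULL);
""")
conn.execute("UPDATE orders SET approvedate = '2014-03-21' WHERE id = 1")
conn.execute("UPDATE orders SET approvedate = '2014-03-21' WHERE id = 1")  # no change
fired = conn.execute("SELECT COUNT(*) FROM sms").fetchone()[0]
print(fired)  # 1
```

Only the first UPDATE, which actually changed the value, queued an SMS row.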
I have the following tables.
The first one is A. A has two columns: A\_ID and A\_VALUE.
The second table is B. B also has two columns: B\_ID and B\_VALUE.
In addition I have table C. Table C has C\_ID and a boolean column C\_BOOL.
If the C\_BOOL value is true, I need to select the value from A with the given ID.
If the C\_BOOL value is false, I need to select the value from B.
How can I write a SELECT for this?
I am using an Oracle DB.
Thanks in advance. | ```
SELECT CASE C.BOOL WHEN 1 THEN A.ID ELSE B.ID END
FROM A
JOIN B
ON B.ID = A.ID
JOIN C
ON C.ID = A.ID
``` | Try this query:
```
SELECT C_ID,CASE WHEN C_BOOL = 1 THEN T3.A_VALUE ELSE T2.B_VALUE END
FROM TABLE_C T1 LEFT OUTER JOIN TABLE_B T2 ON T1.C_ID = T2.B_ID
LEFT OUTER JOIN TABLE_A T3 T2 ON T1.C_ID = T3.A_ID
``` | Select from two different tables by value in third table | [
"",
"sql",
"oracle",
"select",
""
] |
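The CASE-over-a-join pattern from the answers can be checked with a small self-contained sketch; this uses Python's `sqlite3` in place of Oracle, with invented values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (a_id INTEGER, a_value TEXT);
CREATE TABLE b (b_id INTEGER, b_value TEXT);
CREATE TABLE c (c_id INTEGER, c_bool INTEGER);
INSERT INTO a VALUES (1, 'from_a_1'), (2, 'from_a_2');
INSERT INTO b VALUES (1, 'from_b_1'), (2, 'from_b_2');
INSERT INTO c VALUES (1, 1), (2, 0);
""")
# c_bool = 1 picks the value from a; c_bool = 0 picks the value from b.
rows = conn.execute("""
SELECT c.c_id,
       CASE WHEN c.c_bool = 1 THEN a.a_value ELSE b.b_value END AS value
FROM c
JOIN a ON a.a_id = c.c_id
JOIN b ON b.b_id = c.c_id
ORDER BY c.c_id
""").fetchall()
print(rows)  # [(1, 'from_a_1'), (2, 'from_b_2')]
```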
In a table, there 3 columns: `GivenName, FamilyName, MiddleName`. And I have to append all three columns values to output a single column like this
```
Select Upper(GivenName) + FamilyName + Upper(MiddleName) as PersonName.....
```
But if the value of any one of the columns is null, then the whole output is null.
Is there a way to check whether any of the columns is null before appending, so that null values are skipped and the non-null ones are still appended?
I cannot use a 'where GivenName is not null and FamilyName is not null' condition; I just don't want to append the string that is null.
For Ex:
```
If GivenName = 'Mark',
FamilyName = 'Joseph',
MiddleName is null
```
Then output should be : MARK Joseph instead of NULL which has not appended MiddleName as it is Null.
(But in SQL the output is NULL. Try this:
```
declare @FirstName nvarchar(20);
declare @GivenName nvarchar(20);
declare @MiddleName nvarchar(20);
set @FirstName = 'Steve';
set @GivenName = 'Hudson';
set @MiddleName = null;
```
`select Upper(@FirstName) + @GivenName + UPPER(@MiddleName)` => Outputs Null
) | You can use the [`isnull`](http://technet.microsoft.com/en-us/library/ms184325.aspx) function to simply output an empty string if the column is null:
`Select Upper(isnull(GivenName, '')) + isnull(FamilyName,'') + Upper(isnull(MiddleName,'')) as PersonName`
I'm not entirely sure you should be doing this in the database ... it seems like a presentation layer concern to me. | Do this:
```
Select COALESCE(Upper(GivenName), '') + COALESCE(FamilyName, ' ') + COALESCE(Upper
(MiddleName), '') as PersonName
```
COALESCE will do a null check on the first parameter. If it is null it returns the second parameter | SQL Select Query returns NULL when appending string | [
"",
"sql",
"sql-server",
""
] |
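The null-swallowing concatenation and its COALESCE fix can be reproduced with Python's `sqlite3`; note SQLite uses `||` and `COALESCE` where T-SQL uses `+` and `ISNULL`, but the behaviour being demonstrated is the same (sample names taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Middle name is NULL, yet the whole result survives because each
# part is coalesced to '' before concatenation.
full = conn.execute("""
SELECT UPPER(COALESCE(?, '')) || ' ' ||
       COALESCE(?, '') || ' ' ||
       UPPER(COALESCE(?, ''))
""", ("Mark", "Joseph", None)).fetchone()[0].strip()
print(full)  # MARK Joseph
```

Without the COALESCE calls, any NULL operand would make the whole concatenation NULL.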
I have a table of places and their regions, but sometimes there is more than one place with the same name in the same region. Real life examples: There are [five Springfields in Wisconsin](http://en.wikipedia.org/wiki/Springfield,_Wisconsin). Nova Scotia, where I live, has [three Sandy Coves](http://en.wikipedia.org/wiki/List_of_communities_in_Nova_Scotia). I need to find these sorts of records and either disambiguate them (by adding their county or equivalent, for example) or just delete junk/overlapping ones. That will take some work, but first I'd like to see *how much* work.
```
PlaceName: RegionName:
Summerville Big State
Summerville Bigger State (OK, different states, no problem...)
Summerville Little State <-
Summerville Little State <- I need to deal with these
```
This query, based on an [answer](https://stackoverflow.com/a/19326703/1736461) from another [question](https://stackoverflow.com/questions/13146304/how-to-select-every-row-where-column-value-is-not-distinct), gets me all of the places with the same name:
```
SELECT * FROM Places WHERE PlaceName IN
(SELECT PlaceName FROM Places GROUP BY PlaceName HAVING COUNT(*) > 1);
```
This is a good start, but I want to skip over the names that don't occur more than once in the same state and just get straight to the problem cases. To put it generically, I'd like to find non-unique records in one column and from there get the ones that are non-unique in another column.
(FWIW, I'm using MariaDB, which is mostly compatible with MySQL.) | One way to do this is with a `join` to the aggregated list. You need to aggregate by both region and place to get the list you want:
```
SELECT p.*, rp.cnt
FROM Places p join
(SELECT RegionName, PlaceName, COUNT(*) as cnt
FROM Places
GROUP BY RegionName, PlaceName
HAVING COUNT(*) > 1
) rp
on p.RegionName = rp.RegionName and p.PlaceName = rp.PlaceName;
```
You don't mention the actual database you are using. There are other ways to phrase this, some dependent on the database. | Basically you want to
* first count how often each (place/region) tuple exists
* then filter only those, that appear more than once
So let's just do this
```
SELECT
PlaceName, RegionName, Count(*) AS num
FROM Places
GROUP BY CONCAT(PlaceName,':::',RegionName)
HAVING COUNT(*)>1
``` | SQL: Find non-unique records in one column that are also non-unique in another column | [
"",
"sql",
"database",
""
] |
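The join-to-aggregate approach from the accepted answer can be sketched with Python's `sqlite3` (works the same on MariaDB); the rows below reuse the Summerville example from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE places (placename TEXT, regionname TEXT);
INSERT INTO places VALUES
 ('Summerville', 'Big State'),    ('Summerville', 'Bigger State'),
 ('Summerville', 'Little State'), ('Summerville', 'Little State');
""")
# Aggregate on (region, place) pairs, then join back to the base rows:
# only pairs duplicated within the SAME region survive.
rows = conn.execute("""
SELECT p.placename, p.regionname, rp.cnt
FROM places p
JOIN ( SELECT regionname, placename, COUNT(*) AS cnt
       FROM places
       GROUP BY regionname, placename
       HAVING COUNT(*) > 1 ) rp
  ON p.regionname = rp.regionname AND p.placename = rp.placename
""").fetchall()
print(rows)  # only the two 'Little State' duplicates come back
```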
I have some rows in the following format:

I want a SQL query which will group the above rows by EntityId and aggregate the bit columns in the following way:
* Submitted: 0 if any row for a given EntityId is 0, else 1
* Reviewed: 0 if any row for a given EntityId is 0, else 1
* Query: 1 if any row for a given EntityId 1, else 0
I know I can do this by casting the bit column as an int and using Min/Max but it feels like a bit of a hack. I think I am having a slow day and missing the obvious solution...
What is the best way to do this?
I am using SQL Server 2008 R2, although a general SQL method would be best.
Update:
The desired result set for the rows above would be:
 | I think casting to an `int` is probably best overall as there are no bitwise aggregates, anything else will also be "hacky".
For fun this should work without casting the bit fields;
```
select
EntityId,
1 ^ count(distinct nullif(Submitted, 1)),
1 ^ count(distinct nullif(Reviewed, 1)),
count(distinct nullif(Query, 0))
from t
group by EntityId
``` | Relying on implicit data conversion:
```
select entityId
,max(1*submitted) as submitted
,max(1*reviewed) as reviewed
,max(1*query) as query
from sd_test
group by entityId
order by entityId;
select entityId
,max(sign(submitted)) as submitted
,max(sign(reviewed)) as reviewed
,max(sign(query)) as query
from sd_test
group by entityId
order by entityId
```
Health Warning:
> ## SIGN: Arguments
>
> numeric\_expression Is an expression of the exact numeric or
> approximate numeric data type category, except for the bit data type.
It works fine for bit data type though. Since bit can be either 0 or 1, sign will always return 1 or 0 (though in reality, it could return -1,0,1)
```
select sign(cast(1 as bit)); --returns:1
select sign(cast(0 as bit)); --returns:0
``` | SQL Aggregate bit columns to single bit result | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
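The MIN/MAX idea the question calls a "hack" can be demonstrated with Python's `sqlite3`, where bit columns are plain integers so no cast is needed (sample rows invented to match the described rules):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (entityid INTEGER, submitted INTEGER, reviewed INTEGER, query INTEGER);
INSERT INTO t VALUES (1, 1, 1, 0), (1, 0, 1, 1), (2, 1, 1, 0), (2, 1, 0, 0);
""")
# Submitted/Reviewed: 1 only if every row is 1 -> MIN.
# Query: 1 if any row is 1 -> MAX.
rows = conn.execute("""
SELECT entityid, MIN(submitted), MIN(reviewed), MAX(query)
FROM t
GROUP BY entityid
ORDER BY entityid
""").fetchall()
print(rows)  # [(1, 0, 1, 1), (2, 1, 0, 0)]
```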
I am connected to an Oracle database as a read-only user, and I used the service name while setting up the connection in SQL Developer, so I don't know the SID (schema).
How can I find out which schema I am connected to?
I am looking for this because I want to [generate an ER diagram](https://stackoverflow.com/questions/6580529/how-to-generate-e-r-diagram-using-oracle-sql-developer), and at one step in that process it asks me to select a schema. When I tried to select my user name, I didn't get any tables; I guess all the tables are mapped to the schema user.
Edit: I partially got my answer from the SQL below, which Frank provided in a comment; it gave me the owner name, which is the schema in my case. But I am not sure whether it is a generic solution applicable in all cases.
```
select owner, table_name from all_tables
```
**Edit**: I think the SQL above is the correct solution in all cases, because the schema is the owner of all DB objects, so whether I get the schema or the owner, both are the same. My earlier understanding of schemas was not correct; I went through another [question](https://stackoverflow.com/questions/880230/difference-between-a-user-and-a-schema-in-oracle) and found that a schema is also a user.
[Frank](https://stackoverflow.com/users/610979/frank-schmitt)/[a\_horse\_with\_no\_name](https://stackoverflow.com/users/330315/a-horse-with-no-name): please put this in an answer so that I can accept it. | To create a read-only user, you have to set up a different user than the one owning the tables you want to access.
If you just create the user and grant SELECT permission to the read-only user, you'll need to prepend the schema name to each table name. To avoid this, you have basically two options:
1. Set the *current schema* in your session:
```
ALTER SESSION SET CURRENT_SCHEMA=XYZ
```
2. Create synonyms for all tables:
```
CREATE SYNONYM READER_USER.TABLE1 FOR XYZ.TABLE1
```
So if you haven't been told the name of the owner schema, you basically have three options. The last one should always work:
1. Query the current schema setting:
```
SELECT SYS_CONTEXT('USERENV','CURRENT_SCHEMA') FROM DUAL
```
2. List your synonyms:
```
SELECT * FROM ALL_SYNONYMS WHERE OWNER = USER
```
3. Investigate all tables (with the exception of the some well-known standard schemas):
```
SELECT * FROM ALL_TABLES WHERE OWNER NOT IN ('SYS', 'SYSTEM', 'CTXSYS', 'MDSYS');
``` | Call [`SYS_CONTEXT`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions184.htm#SQLRF51825) to get the current schema. From [Ask Tom "How to get current schema](https://asktom.oracle.com/pls/apex/f?p=100:11:0%3a%3a%3a%3aP11_QUESTION_ID:5159260510057):
```
select sys_context( 'userenv', 'current_schema' ) from dual;
``` | How to find schema name in Oracle ? when you are connected in sql session using read only user | [
"",
"sql",
"oracle11g",
""
] |
I have two SQL queries. In the first I am returning the total marks obtained, and in the other I am returning the maximum marks. Now I want to combine both into a single query so I can make it a single procedure. The queries are:
First Query
```
select SUM(MarksObtained) as MarksObtained from tbAnswers where QID IN(Select QID from tbQuestions where ExamID =2)
```
2nd Query
```
Select SUM(Marks) as MaxMarks from tbQuestions where ExamID =2
```
I want something like below to return:
```
Maxmarks | MarksObtained
------------------------
100 50
```
I tried Union, but it returned something like below:
```
MarksObtained
-------------
100
50
``` | Well, since the queries are unrelated, you can just throw them into the select:
```
SELECT
(
select SUM(MarksObtained)
from tbAnswers where QID IN (
Select QID from tbQuestions where ExamID = 2
)
) as MarksObtained,
(
Select SUM(Marks)
from tbQuestions where ExamID = 2
) as MaxMarks
``` | Even with accepted answer it is make sense to mention that probably the right thing to do is to use separate queries, as they are and just use MARS on the client, in case performance was an issue.
UPDATE: Thats how you can combine several queries and read them all together:
```
using(var conn = SqlConnection(...)) {
conn.Open();
var cmd = conn.CreateCommand()
cmd.CommandText =
@"Select SUM(MarksObtained) as MarksObtained
from tbAnswers
where QID IN(Select QID from tbQuestions where ExamID =2);"
+ @"Select SUM(Marks) as MaxMarks
from tbQuestions
where ExamID =2";
using (var dr = cmd.ExecuteReader) {
... // read MarksObtained
dr.NextResult()
... // readMaxMarks
dr.Close()
}
conn.Close()
}
```
MSDN [IDataReader.NextResult](http://msdn.microsoft.com/en-us/library/system.data.idatareader.nextresult%28v=vs.110%29.aspx) | Combine two different unrelated SQL queries( returning single column results) into one query result with two columns | [
"",
"sql",
"sql-server",
""
] |
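The accepted "two scalar subqueries side by side" pattern can be verified with a small sketch using Python's `sqlite3`; the exam/answer rows below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbQuestions (QID INTEGER, ExamID INTEGER, Marks INTEGER);
CREATE TABLE tbAnswers (QID INTEGER, MarksObtained INTEGER);
INSERT INTO tbQuestions VALUES (10, 2, 40), (11, 2, 60), (12, 3, 50);
INSERT INTO tbAnswers VALUES (10, 20), (11, 30), (12, 45);
""")
# Two unrelated scalar subqueries in one SELECT -> one row, two columns.
row = conn.execute("""
SELECT
  (SELECT SUM(MarksObtained) FROM tbAnswers
   WHERE QID IN (SELECT QID FROM tbQuestions WHERE ExamID = 2)) AS MarksObtained,
  (SELECT SUM(Marks) FROM tbQuestions WHERE ExamID = 2) AS MaxMarks
""").fetchone()
print(row)  # (50, 100)
```

Unlike UNION, which stacks the two results vertically, this yields the desired single row with both totals.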
I have a (large) SQL query, but I'm getting stuck on how to word the **WHERE** section.
*I currently have it written like this:*
```
AND (tbl_blocking.user != :who OR tbl_blocking.blocking != :who)
```
Which isn't really working correctly.
I'm trying to say, show the results **UNLESS** ':who' appears in the tbl\_blocking.user **OR** tbl\_blocking.blocking column
Currently, all that's happening is that every result is being returned and repeated - **TWICE**.
**Example:**
[Status one!], [Status one!], [Status two!], [Status two!]
I'm sure it would help to see the entire query too, so here it is:
```
//Get status results
$sql = "SELECT
tbl_status.id as statID,
tbl_status.from_user as statFROM,
tbl_status.status as statSTATUS,
tbl_status.deleted as statDEL,
tbl_status.date as statDATE,
tbl_users.id as usrID,
tbl_users.name as usrNAME,
tbl_users.location as usrLOCATION,
tbl_users.postcode as usrPOSTCODE,
tbl_blocking.id as blockID,
tbl_blocking.user as blockUSER,
tbl_blocking.blocking as blockBLOCKING,
tbl_blocking.date as blockDATE,
tbl_blocking.active as blockACTIVE,
tbl_blocking2.id as blockID2,
tbl_blocking2.user as blockUSER2,
tbl_blocking2.blocking as blockBLOCKING2,
tbl_blocking2.date as blockDATE2,
tbl_blocking2.active as blockACTIVE2
FROM tbl_status
LEFT JOIN tbl_users ON tbl_status.from_user = tbl_users.id
LEFT JOIN tbl_blocking ON tbl_users.id = tbl_blocking.user
LEFT JOIN tbl_blocking2 ON tbl_users.id = tbl_blocking2.user
WHERE
tbl_status.deleted = '0'
AND tbl_blocking.active = '0'
AND (tbl_blocking.user != :who OR tbl_blocking.blocking != :who)
ORDER BY
tbl_status.date desc
LIMIT 200
";
``` | If you want only items that don't have `:who` in *either* `tbl_blocking.user` *or* `tbl_blocking.blocking`, you simply want...
```
AND tbl_blocking.user != :who
AND tbl_blocking.blocking != :who
```
...there.
Because you're essentially saying, "Give me rows where `user` isn't `:who` *and* neither is `blocking`." | I think this is what you want:
```
AND tbl_blocking.user <> :who AND tbl_blocking.blocking <> :who
```
That is, you don't want `:who` in either column.
EDIT:
Your logic statement is:
* show the results UNLESS ':who' appears in the tbl\_blocking.user OR tbl\_blocking.blocking column
Here is a step by step transformation:
* show the results UNLESS (:who appears in the tbl\_blocking.user or :who appears in the tbl\_blocking.blocking column)
* show the results UNLESS (tbl\_blocking.user = :who or tbl\_blocking.blocking = :who)
* show the results WHERE NOT (tbl\_blocking.user = :who or tbl\_blocking.blocking = :who)
* show the results WHERE tbl\_blocking.user <> :who AND tbl\_blocking.blocking <> :who
EDIT II:
If the values can be `NULL`:
```
AND coalesce(tbl_blocking.user, '<null>') <> :who AND coalesce(tbl_blocking.blocking, '<null>') <> :who
``` | I can't figure out how to word this WHERE clause | [
"",
"sql",
""
] |
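The De Morgan flip behind the accepted fix can be checked with a tiny sketch in Python's `sqlite3` (table name from the question, data invented; `5` plays the role of `:who`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_blocking (user INTEGER, blocking INTEGER)")
conn.executemany("INSERT INTO tbl_blocking VALUES (?, ?)",
                 [(1, 2), (1, 5), (5, 2), (3, 4)])
# NOT (user = who OR blocking = who)  ==  user != who AND blocking != who
rows = conn.execute("""
SELECT user, blocking FROM tbl_blocking
WHERE user != ? AND blocking != ?
""", (5, 5)).fetchall()
print(rows)  # [(1, 2), (3, 4)] -- every row involving 5 is excluded
```

With `OR` instead of `AND`, the rows `(1, 5)` and `(5, 2)` would sneak back in, since each satisfies one of the two inequalities.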
I have one table in mysql:
Table 1
```
ID DiscountPrice ActualPrice
1 2299.00 2874.00
2 0 50
3 2999.00 3499.00
4 0.00 4999.00
5 3899.00 5999.00
```
I want the rows to come out sorted by the lower of the two prices, ignoring zero values (lowest first).
Output:
```
ID: 2 , 1, 3, 5, 4
```
Please help me. | Use [`LEAST()`](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_least):
```
SELECT *
FROM Table1
ORDER BY
LEAST(
IF(DiscountPrice=0, 1E18, DiscountPrice),
IF(ActualPrice=0, 1E18, ActualPrice)
)
```
I'm using `IF` since you need to deal only with non-zero values.
A simple
```
SELECT * FROM table1 WHERE 1 ORDER BY DiscountPrice
```
or
```
SELECT * FROM table1 WHERE 1 ORDER BY ActualPrice
```
or
```
SELECT * FROM table1 WHERE 1 ORDER BY DiscountPrice, ActualPrice ASC
```
should work for you | MySQL order by two columns independently | [
"",
"mysql",
"sql",
""
] |
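The accepted `LEAST()`-with-zero-handling trick can be sketched with Python's `sqlite3`, where the multi-argument scalar `MIN()` plays the role of MySQL's `LEAST()` (rows taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, discountprice REAL, actualprice REAL);
INSERT INTO t VALUES (1, 2299, 2874), (2, 0, 50), (3, 2999, 3499),
                     (4, 0, 4999), (5, 3899, 5999);
""")
# Map zeros to a huge sentinel so they never win the "lowest" comparison.
rows = conn.execute("""
SELECT id FROM t
ORDER BY MIN(
  CASE WHEN discountprice = 0 THEN 1e18 ELSE discountprice END,
  CASE WHEN actualprice   = 0 THEN 1e18 ELSE actualprice   END)
""").fetchall()
ids = [r[0] for r in rows]
print(ids)  # [2, 1, 3, 5, 4]
```

The result matches the expected ordering from the question.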
I have the following SQL Server 2008 TABLE:
```
Heading Limit Package
X Charges 200 A
X Charges 300 B
X Charges 400 C
X Charges 500 D
Y Charges 550 A
Y Charges 450 B
Y Charges 350 C
Y Charges 250 D
```
Now I would like to get data in following representation:
```
Heading Package_A Package_B Package_C Package_D
X Charges 200 300 400 500
Y Charges 550 450 350 250
``` | Please try:
```
SELECT
Heading,
A Package_A,
B Package_B,
C Package_C,
D Package_D
FROM (
SELECT * FROM YourTable
) up
PIVOT (sum(Limit) FOR Package IN (A, B, C, D)) AS pvt
```
[SQL Fiddle](http://sqlfiddle.com/#!3/fe3a4/1) | [Fiddle here](http://sqlfiddle.com/#!3/b8d7b)
```
SELECT
Heading,
[A] as Package_A,
[B] as Package_B,
[C] as Package_C,
[D] as Package_D
FROM
TableX
PIVOT
(
SUM(LIMIT)
FOR Package IN ([A], [B], [C], [D])
) x;
```
The dynamic case (where the values of package are not known up front) is addressed [here](https://stackoverflow.com/questions/11985796/sql-server-pivot-dynamic-columns-no-aggregation/11985946#11985946) | SQL query to transpose data in below case | [
"",
"sql",
"sql-server",
"pivot-table",
""
] |
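SQLite (and MySQL) lack the PIVOT operator, but conditional aggregation is the portable equivalent of the answers above; here is a sketch in Python's `sqlite3` using the question's data (`lim` stands in for the reserved word `Limit`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE charges (heading TEXT, lim INTEGER, package TEXT);
INSERT INTO charges VALUES
 ('X Charges', 200, 'A'), ('X Charges', 300, 'B'),
 ('X Charges', 400, 'C'), ('X Charges', 500, 'D'),
 ('Y Charges', 550, 'A'), ('Y Charges', 450, 'B'),
 ('Y Charges', 350, 'C'), ('Y Charges', 250, 'D');
""")
# One SUM(CASE ...) per pivoted column does what PIVOT does in T-SQL.
rows = conn.execute("""
SELECT heading,
       SUM(CASE WHEN package = 'A' THEN lim END) AS package_a,
       SUM(CASE WHEN package = 'B' THEN lim END) AS package_b,
       SUM(CASE WHEN package = 'C' THEN lim END) AS package_c,
       SUM(CASE WHEN package = 'D' THEN lim END) AS package_d
FROM charges
GROUP BY heading
ORDER BY heading
""").fetchall()
print(rows)
```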
How can I get the count of students in different categories using MySQL?
I have 6 categories in the st\_category field and 1 under the st\_handi field in the table:
under st\_category => GEN, SC, ST, OBC, SGC, EWS; and under st\_handi => PH.
I need the number of students in each category, broken down by gender. Example:
```
Male | Female
-------|-------
GEN | 54 | 239
SC | 19 | 9
ST | 5 | 19
SGC | 0 | 13
OBC | 19 | 9
EWS | 0 | 1
PH | 2 | 0
```
I have tried many times to achieve the desired output, but every attempt has ended in disappointment.
Is this possible?
Schema of Student Table
```
ST_No | ST_Gender | ST_Category | ST_PH
------|-----------|-------------|-------
1 | Male | GEN | Yes
2 | Male | GEN | No
3 | Female | SC | No
4 | Male | GEN | No
5 | Male | ST | No
6 | Female | GEN | No
7 | Male | SC | No
```
Above is sample data and schema of table. | Without seeing your table structure I do not know for sure, but from your description it sounds like your categories are in two fields. I am going to assume these fields are in the same table? You only need the WHERE clause if you want to limit results to just these categories.
```
SELECT st_category AS cat, st_gender, COUNT(studentPK)
FROM es_preadmission
WHERE st_category IN ('GEN','SC','ST',....,'SGC')
GROUP BY st_category, st_gender
UNION ALL
SELECT st_handi AS cat, st_gender, COUNT(studentPK)
FROM es_preadmission
WHERE st_handi = 'PH'
GROUP BY st_handi, st_gender
``` | Start with something like this:
```
SELECT Sum(Male) AS SumOfMale
FROM Students
GROUP BY Type;
``` | How to get number of students in MySQL | [
"",
"mysql",
"sql",
""
] |
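The UNION-of-two-aggregates idea from the accepted answer can be sketched with Python's `sqlite3` using the question's sample rows (in SQLite a comparison like `st_gender = 'Male'` evaluates to 0/1, so SUMming it counts matches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (st_no INTEGER, st_gender TEXT, st_category TEXT, st_ph TEXT);
INSERT INTO students VALUES
 (1, 'Male', 'GEN', 'Yes'), (2, 'Male', 'GEN', 'No'), (3, 'Female', 'SC', 'No'),
 (4, 'Male', 'GEN', 'No'),  (5, 'Male', 'ST', 'No'),  (6, 'Female', 'GEN', 'No'),
 (7, 'Male', 'SC', 'No');
""")
rows = conn.execute("""
SELECT st_category, SUM(st_gender = 'Male') AS male, SUM(st_gender = 'Female') AS female
FROM students
GROUP BY st_category
UNION ALL
SELECT 'PH', SUM(st_gender = 'Male'), SUM(st_gender = 'Female')
FROM students
WHERE st_ph = 'Yes'
""").fetchall()
print(sorted(rows))
```

The PH row is appended as a pseudo-category on top of the per-category gender counts.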
I set my DateTimePicker's value to 1:00 PM (or any hh:mm tt value) and insert it into my MS Access database, into a table with a Date/Time field. But when my DataGridView shows the table, a spurious "12/30/1899" appears in front of my input of 1:00 PM. Any idea why this is happening? My code is not included here, though.
If you want to display just the Time part in a DataGridView column you can set its format to "short time" using something like this (C#, but VB.NET would be very similar):
```
dataGridView1.Columns[3].DefaultCellStyle.Format = "t"; // short time
``` | it could have been better if you also included your code......
just visit the following link for Date Time Picker information.
[DateTimePicker MSDN](http://msdn.microsoft.com/en-us/library/system.windows.forms.datetimepicker.customformat%28v=vs.110%29.aspx?cs-save-lang=1&cs-lang=vb#code-snippet-2) | .NET DataGridView displays 1899-12-30 as part of a time-only value | [
"",
"sql",
"vb.net",
"ms-access",
"ms-access-2007",
"datetimepicker",
""
] |
I need to get the date that was 30 days (later I will also use 90 or 180) before the first day of the current (and previous) month. I wrote this, but I think it's unnecessarily complicated, so I came here for help. Is there any way to simplify the following?
```
ib_encodedate(EXTRACT(YEAR from (dateadd(-30-datediff(day from cast(EXTRACT(MONTH from CURRENT_DATE)
|| '-1-'
|| EXTRACT(YEAR from CURRENT_DATE) as date) to date 'now') DAY to CURRENT_DATE))), EXTRACT(MONTH from (dateadd(-30-datediff(day from cast(EXTRACT(MONTH from CURRENT_DATE)
|| '-1-'
|| EXTRACT(YEAR from CURRENT_DATE) as date) to date 'now') DAY to CURRENT_DATE))),EXTRACT(DAY from (dateadd(-30-datediff(day from cast(EXTRACT(MONTH from CURRENT_DATE)
|| '-1-'
|| EXTRACT(YEAR from CURRENT_DATE) as date) to date 'now') DAY to CURRENT_DATE))))
```
In the database I have dates stored as `double`, so I have to use the function `ib_encodedate` to convert a date to `double` and compare it with the dates in the database. The function has the prototype:
```
ib_encodedate(INT year, INT month, INT day)
```
I need to write the same for the last day of the month.
Thanks for any help. | Seems like you need something like this:
```
SELECT
DATEADD (-EXTRACT(DAY FROM CURRENT_DATE)+1 DAY TO CURRENT_DATE) AS FIRST_DAY_OF_MONTH,
DATEADD (-30 DAY TO DATEADD (-EXTRACT(DAY FROM CURRENT_DATE)+1 DAY TO CURRENT_DATE)) AS A_MONTH_AGO,
DATEADD (-90 DAY TO DATEADD (-EXTRACT(DAY FROM CURRENT_DATE)+1 DAY TO CURRENT_DATE)) AS THREE_MONTHS_AGO,
DATEADD (-180 DAY TO DATEADD (-EXTRACT(DAY FROM CURRENT_DATE)+1 DAY TO CURRENT_DATE)) AS SIX_MONTHS_AGO
FROM
RDB$DATABASE
```
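(The above is Firebird syntax; as a cross-check of the arithmetic for readers without a Firebird server, SQLite's `'start of month'` modifier does the same job — this is an illustrative analogue only, run here from Python:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 'start of month' snaps a date to the 1st; a '-30 days' modifier then steps back
cur.execute("SELECT date('2014-03-15', 'start of month')")
print(cur.fetchone()[0])  # 2014-03-01

cur.execute("SELECT date('2014-03-15', 'start of month', '-30 days')")
print(cur.fetchone()[0])  # 2014-01-30
```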
Using the [DATEADD](http://www.firebirdsql.org/refdocs/langrefupd25-intfunc-dateadd.html) function from Firebird you can easily accomplish this. | First Day of The Date:
```
select cast(:DATA as date) - extract(day from cast(:DATA as date)) + 1 from RDB$DATABASE
```
Last Day of The Date:
```
select cast(:DATA as date) - extract(day from cast(:DATA as date)) + 32 - extract(day from (cast(:DATA as date) - extract(day from cast(:DATA as date)) + 32)) from RDB$DATABASE
``` | Get first (and last) day of month - expression simplification | [
"",
"sql",
"date",
"firebird",
""
] |
I have a table called UserProductDetails in which UPID is the primary key, and another table called UserTasks in which UPID is just a nullable column. Both tables have UserID as a common column.
There can be multiple UPID values in UserProductDetails for any particular user.
I want to update UserTasks.UPID from any of the UPID from UserProductDetails.
How can I do this?
I am trying something like the below:
```
update UserTasks
set UPID =
select MIN(UserProductDetails.UPID) AS UPID, GETDATE() from UserProductDetails where UserId in
(select UserID from UserTasks t) group by UserId
``` | I ended up using the below query, based on Milen Pavlov's suggestion:
```
/*Adding random but user specific UPID and then making UPID as non-nullable.*/
begin tran
UPDATE
dbo.UserTasks
SET
UPID = A.UPID
FROM
(SELECT UserId, MIN(CAST(UPID AS BINARY(16))) AS UPID FROM UserProductDetails GROUP BY UserId) A
WHERE userTasks.UserID = A.UserID
COMMIT
GO
``` | Try with:
```
UPDATE UserTasks u
SET u.UPID = (SELECT MIN(up.UPID)
FROM UserProductDetails up
WHERE up.UserID = u.UserID)
``` | how to update one column of one table from column of another table | [
"",
"sql",
""
] |
I have three tables,
student, studentclass, class
student has sid as its primary key; class has cid as its primary key.
studentclass ties student and class together and has two columns, sid and cid (no special keys).
Is it possible to get rid of the studentclass table and use the student and class tables only, without creating duplicate entries in the student or class tables?
(class can have multiple students and student can attend multiple classes)
Thanks. | If the business logic states that, one student can attend multiple classes then it is good to have a linking table, **StudentClass**.
You can think of this as a **Many to Many relationship** where one student can attend multiple classes and one class can have multiple students. | The studentclass table is required assuming that students attend more than one class.
If they do not you can simply add a classId field into the students table and get rid of the studentclass table. | Designing MySQL database | [
"",
"mysql",
"sql",
"database-design",
""
] |
My table looks like this:
```
Column1|Column2
1 |a
1 |b
1 |c
2 |a
3 |b
4 |a
4 |b
```
...
What I need is to write an SQL statement to get all entities from column1 which have the values a AND b in column2.
So in this case the result will be: 1, 4.
Thanks in advance for your help. | Try this:
```
select column1 from your_table
group by column1
having
max(case when column2='a' then 1 else 0 end)+
max(case when column2='b' then 1 else 0 end)=2
``` | Try this:
```
SELECT * FROM Table T
WHERE EXISTS(
SELECT 'A' FROM Table Ta
WHERE T.column1 = Ta.column1
AND Ta.column2 = 'a'
)
AND EXISTS(
SELECT 'B' FROM Table Tb
WHERE T.column1 = Tb.column1
AND Tb.column2 = 'b'
)
``` | SQL statement that gives me entities that contain 1.value AND 2.value in one column | [
"",
"sql",
""
] |
I want to update the `flight_date` field in a table. I want to set the new flight\_date so that it reflects the time after a delay of 4 hours. The date format in MySQL is `YYYY-MM-DD`. Which function has to be used to implement the change?
Please can someone help? | You could use the MySQL function [**`DATE_ADD`**](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-add) like the below example:
```
UPDATE your_table_name
SET flight_date = DATE_ADD(flight_date, INTERVAL 4 HOUR)
WHERE flight_id = 1
```
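(As an aside, for readers who want to try the idea without a MySQL server: this is *not* MySQL syntax, but SQLite's `datetime()` modifier plays the same role as `DATE_ADD(..., INTERVAL 4 HOUR)`. The `flights` table below is made up for illustration.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE flights (flight_id INTEGER, flight_date TEXT)")
cur.execute("INSERT INTO flights VALUES (1, '2014-03-01 10:00:00')")

# SQLite's datetime() modifier stands in for MySQL's DATE_ADD(..., INTERVAL 4 HOUR)
cur.execute("UPDATE flights SET flight_date = datetime(flight_date, '+4 hours') WHERE flight_id = 1")

cur.execute("SELECT flight_date FROM flights WHERE flight_id = 1")
print(cur.fetchone()[0])  # 2014-03-01 14:00:00
```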
**NOTE:** *don't forget to change `your_table_name` and `flight_id` to reflect your table name and field name respectively.* | You need to use `INTERVAL`, like this:
```
UPDATE your_table_name_goes_here
SET flight_date = flight_date + INTERVAL 4 HOUR
WHERE some_id_field = 123
```
Note that there is no `s` on the end of `HOUR`.
You haven't told us how you named your table or will identify the flight in question, so you will need to edit those portions of the query above (`your_table_name_goes_here` and `some_id_field = 123`). | How can I update date using SQL? | [
"",
"mysql",
"sql",
""
] |
I am trying to select all fields and get distinct values in one field where a certain condition is true, but I am getting an error saying "incorrect syntax". This is what I have:
```
var sql = @"SELECT * distinct threadID from Threadposts where profileID = 1";
```
I am new to MSSQL but cannot figure out what is wrong with the query above. I simply want all fields included where profileID = 1, and to select distinct values from threadID. | You need to be clearer about what you want. You are querying a table called `Threadposts` for records where the `profileID` is 1. This will return multiple rows with different values, I expect.
Are you wanting to count how many posts were made by that person? Are you wanting a list of threads by that person?
```
-- count of rows
SELECT COUNT(*)
FROM Threadposts
WHERE profileID = 1
-- list of threads
SELECT *
FROM Threadposts
WHERE profileID = 1
```
If you are wanting something different then you'll need to update your question.
PS: the `DISTINCT` keyword will look at the data returned and give distinct values only, e.g. if you wanted distinct topics posted by this user regardless of if they created multiple threads with the same name:
```
SELECT DISTINCT ThreadTopic
FROM Threadposts
WHERE profileID = 1
```
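Here is a quick runnable illustration of that behaviour (SQLite via Python, with toy data — the column values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Threadposts (profileID INTEGER, ThreadTopic TEXT)")
cur.executemany("INSERT INTO Threadposts VALUES (?, ?)",
                [(1, "sql"), (1, "sql"), (1, "c#"), (2, "vb")])

# without DISTINCT: one row per post, duplicates included
cur.execute("SELECT ThreadTopic FROM Threadposts WHERE profileID = 1")
print(len(cur.fetchall()))  # 3

# with DISTINCT: duplicate topics collapse into one row each
cur.execute("SELECT DISTINCT ThreadTopic FROM Threadposts WHERE profileID = 1")
print(sorted(r[0] for r in cur.fetchall()))  # ['c#', 'sql']
```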
Using `DISTINCT *` almost never makes sense, since each row will have a unique ID (hopefully) so will already be distinct. | DISTINCT is essentially short-hand for GROUP BY on all selected columns when you don't need any aggregates. What you are asking for is a query where you are grouping on a single column, but not aggregating the remaining columns. That doesn't make sense. | how to do a distinct with select all | [
"",
"sql",
"sql-server",
"distinct",
""
] |
I want to convert an SQL query which has a WHERE NOT IN clause to tuple relational calculus. The existential and universal quantifiers implement only the WHERE EXISTS and WHERE NOT EXISTS clauses, so I want to know how to implement WHERE NOT IN.
My tables are `serves(bar,beer)`, `frequents(drinker,bar)`, `likes(drinker,beer)`. The following query selects the drinkers that frequent only bars that serve some beer they like.
```
select distinct f2.drinker from frequents f2 where f2.drinker not in (select f1.drinker from frequents f1 where (f1.bar,f1.drinker) not in (select f.bar,f.drinker from frequents f,serves s,likes l where l.beer=s.beer and f.bar=s.bar and f.drinker=l.drinker))
```
It's enough if someone can explain to me how to implement WHERE NOT IN in TRC; there is no need to convert the entire query. I am using <http://www-rohan.sdsu.edu/~eckberg/relationalcalculusemulator.html>
to check my relational calculus and convert it into an SQL query.
> Note:
>
> > If you use implications in your query.
>
> It does not support implication. For example, implication can be
> implemented as follows: (p==>q) can be written in (not p or q) form, as
> both are logically equivalent. | I rewrote my query with WHERE EXISTS and WHERE NOT EXISTS, and now it's easy to convert it into relational calculus. The answer is:
`{T.drinker | ∃f2∈frequents (∃f1∈frequents (∃f∈frequents (∃s∈serves ∃l∈likes (s.beer=l.beer ∧ l.drinker=f.drinker ∧ s.bar=f.bar ∧ f1.drinker=f.drinker ∧ f.bar=f1.bar ∧ f2.bar=f1.bar ∨ f2.drinker≠f1.drinker))))}`
Anyway, thanks for the inputs. | Well, what you are describing is `WHERE NOT EXISTS(subquery)`:
<http://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html>
Let me know if that's not what you want.
---
Also, why don't you just change the logic of your statement to go from
`WHERE drinker NOT IN (drinker)` to `WHERE IN(drinker=null)`?
INITIAL QUERY, formatted so it's easier to read:
```
SELECT
DISTINCT f2.drinker
FROM frequents f2
WHERE f2.drinker NOT IN
(
SELECT
f1.drinker
FROM frequents f1
WHERE (f1.bar,f1.drinker) NOT IN
(
SELECT
f.bar,
f.drinker
FROM frequents f,
serves s,
likes l
WHERE l.beer=s.beer AND f.bar=s.bar AND f.drinker=l.drinker
)
)
```
what you should be able to do is this
```
SELECT
DISTINCT f2.drinker
FROM frequents f2
WHERE f2.drinker IN
(
SELECT
f1.drinker
FROM frequents f1
WHERE f1.drinker IS NULL AND (f1.bar,f1.drinker) IN
(
SELECT
f.bar,
f.drinker
FROM frequents f,
serves s,
likes l
WHERE l.beer=s.beer AND f.bar=s.bar AND f.drinker=l.drinker AND f.drinker IS NULL
)
)
```
Hopefully this works; I can't really test it. But the idea is, instead of saying `where this is not in this`, to say `where this is in this where id (in subquery) is null`. | How to implement sql "where not in" in tuple relational calculus? | [
"",
"mysql",
"sql",
"tuples",
"tuple-relational-calculus",
""
] |
The following SQL statement fails on SQL Server 2012 (SP1), version 11.0.3401.0:
```
DECLARE @b VARCHAR(10) = '12312.2'
SELECT TRY_CONVERT(DECIMAL(10,2),@b)
```
error message
```
Msg 195, Level 15, State 10, Line 2
'DECIMAL' is not a recognized built-in function name.
```
But it works fine on SQL Server 2012 (SP1) 11.0.3393.0.
Both servers have no problem with `TRY_PARSE()`
```
DECLARE @b VARCHAR(10) = '12312.2'
SELECT TRY_PARSE(@b AS DECIMAL(10,2))
```
**Update:**
After further testing and trying conversion to different types, I got a different error message:
```
DECLARE @b VARCHAR(10) = '12312'
SELECT TRY_CONVERT(INT,@b)
```
Error:
```
error: Msg 195, Level 15, State 10, Line 2
'TRY_CONVERT' is not a recognized built-in function name.
```
So the original error message was misleading, but now I'm even more confused about why it is not there. | Check that the database compatibility level for the particular database you are using the function in is set to `SQL Server 2012 (110)` under `properties/options/compatibility level` for the database. | Make sure that you have specified which database you want to run the query on.
```
use [Database_Name]
go
>>>>query here <<<
go
```
In my case I had a number of different databases on the server with different compatibility levels. | TRY_CONVERT fails on SQL Server 2012 | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
```
SELECT
business_period,
SUM(transaction.transaction_value) AS total_transaction_value,
SUM(transaction.loss_value) AS total_loss_value,
(total_transaction_value - total_loss_value) AS net_value
FROM transaction
GROUP BY business_period
```
The above does not work as `total_transaction_value` and `total_loss_value` are not from the `transaction` table. Is there a way to make this query work?
Note: this query involves 500 million rows, so it needs to be efficient.
Question:
Some answers have suggested that `SUM(transaction.transaction_value) - SUM(transaction.loss_value)` is cached and won't need to be computed again, whereas others are suggesting that I should use a derived table / subquery to avoid repeated computation. Could someone point to something that could settle this difference of opinion?
I am using postgres 9.3.
ANSWER:
I want to quote erwin's comment here:
`I ran a quick test with 40k rows and the winner was the plain version without subquery. CTE was slowest. So I think my first assumption was wrong and the query planner understands not to calculate the sums repeatedly (makes sense, too). I have seen different results with more complex expressions in the past. The planner does get smarter with every new version` | Use:
```
SELECT
business_period,
SUM(transaction.transaction_value) AS total_transaction_value,
SUM(transaction.loss_value) AS total_loss_value,
(SUM(transaction.transaction_value) - SUM(transaction.loss_value)) AS net_value
FROM transaction
GROUP BY business_period
``` | Use `sum` again
```
SELECT
business_period,
SUM(transaction.transaction_value) AS total_transaction_value,
SUM(transaction.loss_value) AS total_loss_value,
(SUM(transaction.transaction_value) - SUM(transaction.loss_value)) AS net_value
FROM transaction
GROUP BY business_period
``` | Use the results of SUM in the same query | [
"",
"sql",
"postgresql",
"aggregate-functions",
"postgresql-performance",
""
] |
I have a `Resource` model that can be voted on using the "Acts As Votable" gem ([Github page](https://github.com/ryanto/acts_as_votable)). The voting system works perfectly but I am trying to display pages ordered by how many `votes` each `Resource` has.
Currently my controller pulls Resources based on tags, and they aren't ordered:
```
@resources = Resource.where(language_id: "ruby")
```
If I take an individual resource and call "@resource.votes.size", it will return how many votes it has. However, votes is another table, so I think some sort of join needs to be done, but I am not sure how to do it. What I need is a nice ordered `ActiveRecord` collection I can display like this:
> Book name - 19 votes
>
> Book name - 15 votes
>
> Book name - 9 votes
>
> Book name - 8 votes | Try the following:
```
@resources = Resource.select("resources.*, COUNT(votes.id) vote_count")
.joins(:votes)
.where(language_id: "ruby")
.group("resources.id")
.order("vote_count DESC")
@resources.each { |r| puts "#{r.whatever} #{r.vote_count}" }
```
To include resources with 0 votes, use an outer join. If the example below doesn't work as is you'll have to alter the joins statement to join across the correct relations.
```
@resources = Resource.select("resources.*, COUNT(votes.id) vote_count")
.joins("LEFT OUTER JOIN votes ON votes.votable_id = resources.id AND votes.votable_type = 'Resource'")
.where(language_id: "ruby")
.group("resources.id")
.order("vote_count DESC")
``` | # Rails 5+
Built-in support for left outer joins was introduced in `Rails 5` so you can use that to do this. This way you'll still keep the records that have 0 relationships:
```
Resource
.where(language_id: 'ruby')
.left_joins(:votes)
.group(:id)
.select('resources.*', 'COUNT(votes.id) vote_count')
.order('vote_count DESC')
``` | Rails ActiveRecord sort by count of join table associations | [
"",
"mysql",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
""
] |
I'm not the strongest SQL developer (kinda new). I thought I could do what I'm trying to do, but I'm receiving an error.
Basically, I'm trying to perform a mass insert based on some selects. The value of the first select in the statement below returns 1 groupid, but the 2nd statement returns a few hundred requestcategory id's. I am trying to perform these few hundred inserts with the statement below:
```
INSERT
INTO M_GROUPREQUESTCATEGORY
(
GROUPID,
REQUESTCATEGORYID
)
VALUES
(
(select groupid from esaws.m_group where name = 'Administration Group'),
(select requestcategoryid from esaws.m_grouprequestcategory where groupid = (select groupid from esaws.m_group where name = 'Customer Service Group'))
);
```
My issue is, SQL is returning the following error:
```
Error report -
SQL Error: ORA-01427: single-row subquery returns more than one row
01427. 00000 - "single-row subquery returns more than one row"
*Cause:
*Action:
```
Can someone please explain to me what I'm doing wrong, and what I need to change to get this working?
EDIT: To be clear, I'm trying to come up with a way to NOT have to write a few hundred individual inserts. | I suspect that this is what you want:
```
INSERT
INTO M_GROUPREQUESTCATEGORY
(
GROUPID,
REQUESTCATEGORYID
)
select DISTINCT G.groupid, R.requestcategoryid
from esaws.m_group G
Cross Join esaws.m_grouprequestcategory R
where G.name = 'Administration Group'
and R.groupid = (
select groupid
from esaws.m_group
where name = 'Customer Service Group')
and NOT EXISTS(Select * From esaws.m_grouprequestcategory R2
Where R2.GroupID = G.GroupID
And R2.requestcategoryid = R.requestcategoryid)
;
```
This basically copies all of the request categories from the 'Customer Service Group' to the 'Administration Group', if I understand it correctly. | Subqueries in the select statement must evaluate to a single value, or else you will get this error. This makes sense since you are looking to fill the value for one field.
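To make that concrete, here is a small runnable sketch (SQLite via Python, with made-up table names): a `VALUES` clause fills exactly one row, so each subquery inside it must be scalar, whereas `INSERT ... SELECT` inserts one row per result of the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE request_categories (requestcategoryid INTEGER);
CREATE TABLE group_request (groupid INTEGER, requestcategoryid INTEGER);
INSERT INTO request_categories VALUES (101), (102), (103);
""")

# INSERT ... SELECT inserts one row per row of the SELECT --
# no single-row restriction, unlike a subquery inside VALUES
cur.execute("""
    INSERT INTO group_request (groupid, requestcategoryid)
    SELECT 1, requestcategoryid FROM request_categories
""")

cur.execute("SELECT COUNT(*) FROM group_request")
print(cur.fetchone()[0])  # 3
```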
So when you say 'but the 2nd statement returns a few hundred requestcategory id's', then that's the error right there. You need to have it be only 1 for each result. | Trouble performing a SQL insert using selects where more than 1 row is to be inserted | [
"",
"sql",
"oracle",
"insert",
""
] |
In SQL, given [this dataset](http://sqlfiddle.com/#!2/587c7b/17)
When I try this query
```
SELECT * FROM table1 WHERE NAME= 'Bob'
```
it provides the results, but when I try this, it doesn't work; it also doesn't give any error:
```
SELECT * FROM table1 WHERE NAME= 'Bob' AND NAME= 'John';
```
Can someone please tell me what I am doing wrong?
Thanks
Edited:
Given the previous dataset:
Find any two words appearing anywhere in the message column of the dataset
table that occur individually in more than 4 messages but
never together in the same message? | You should use `OR` instead of `AND`.
```
SELECT * FROM table1 WHERE NAME= 'Bob' OR NAME= 'John';
```
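To see the difference concretely, here is a runnable check of the two predicates (SQLite via Python, with a few made-up rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (NAME TEXT)")
cur.executemany("INSERT INTO table1 VALUES (?)", [("Bob",), ("John",), ("Bob",)])

# AND compares both conditions against the SAME row, so no row can match
cur.execute("SELECT COUNT(*) FROM table1 WHERE NAME = 'Bob' AND NAME = 'John'")
print(cur.fetchone()[0])  # 0

# OR matches rows satisfying either condition
cur.execute("SELECT COUNT(*) FROM table1 WHERE NAME = 'Bob' OR NAME = 'John'")
print(cur.fetchone()[0])  # 3
```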
A `NAME` can't be `Bob` and `John` at the same time, but it can be `Bob` or `John`. | The second query returns records in which `Name="Bob"` and `Name="John"`, which will always return nothing.
You can try like this:
```
SELECT * FROM table1 WHERE NAME= 'Bob' OR NAME= 'John';
```
It will return records in which Name is Bob **OR** John.
The result will be:
```
DATA NAME MESSAGE TIMEZONE
01/03/2014 10:27 Bob What time do you want to meet? London
18/02/2014 02:43 John What time is it in London? New york
14/02/2014 00:50 Bob Meeting today? London
```
See result in [SQL Fiddle](http://sqlfiddle.com/#!2/587c7b/19)
Read more about [Logical Operators](http://technet.microsoft.com/en-us/library/ms189773.aspx).
**EDIT:**
For the second question, use this query:
```
SELECT name,message
FROM table1
WHERE ROUND ((CHAR_LENGTH(message)
- CHAR_LENGTH( REPLACE ( message, "out", "") )
)/CHAR_LENGTH("out")) =1
```
It selects records in which `message` contains `"out"` only once.
See example in [SQL Fiddle](http://sqlfiddle.com/#!2/9d3f0/13)
See how the query works [here](http://sqlfiddle.com/#!2/9d3f0/24). | SQL query, am I missing something? | [
"",
"sql",
""
] |