Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I have a simple query like the one below.
```
SELECT d.id,d.dia FROM dbo.debtor d WHERE ISNUMERIC(dia + 'e0') = 1
```
When I run the above, it does what I want: it returns all the records from the two columns where dia is numeric. However, I want to filter further by adding a condition that dia is numeric and greater or less than a number. My query below fails.
```
SELECT d.id,d.dia FROM dbo.debtor d
WHERE ISNUMERIC(dia + 'e0') = 1 and cast(dia as float) > 6
```
Any help would be appreciated. How do I cast dia, which is nvarchar, to float? | Consider your WHERE clause:
```
WHERE ISNUMERIC(dia + 'e0') = 1 and cast(dia as float) > 6
```
There is no guaranteed order in which the two conditions are evaluated.
Suppose dia is not numeric and the condition `cast(dia as float) > 6` is evaluated first; then the cast throws an error, because the value cannot be converted to float.
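As a side note: if you are on SQL Server 2012 or later, `TRY_CAST` sidesteps the evaluation-order problem entirely, since it returns NULL instead of raising an error for non-numeric values (sketch against the question's table):

```
SELECT d.id, d.dia
FROM dbo.debtor d
WHERE TRY_CAST(d.dia AS float) > 6  -- NULL (non-numeric) rows never satisfy > 6
```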
To ensure that the cast operates on numeric values only, you can apply a CASE expression in the WHERE clause, as shown below:
```
SELECT d.id,d.dia FROM dbo.debtor d
WHERE ISNUMERIC(dia + 'e0') = 1 and
6 < CASE WHEN ISNUMERIC(dia + 'e0') = 1 THEN cast(dia as float) ELSE 1 END
``` | Try
```
ISNUMERIC('' + dia + 'e0')
``` | Error converting data type nvarchar to float when checking greater or less than a number | [
"",
"sql",
"sql-server",
""
] |
I'm thrown for a loop on this one, and wondering if I don't fully understand `SELECT ... WHERE ... IN` usage, or if I've just made a mistake in this code:
```
DECLARE @driver TABLE (ID INT)
INSERT INTO @driver select eventid from event where event_code_name IS NULL
select eventid from event where eventid in (select eventid from @driver)
```
There are 3137 records total in the event table. There are 458 records whose event\_code\_name field is null. In the select above I'm expecting 458, but I'm getting all event records back instead.
What did I miss? | I think `(select eventid from @driver)` should be `(select id from @driver)`. Since `@driver` has no `eventid` column, the name resolves to the outer `event` table, so the subquery is correlated and matches every row. | Since you are using 2 tables, one virtual and one physical, the solution is to use the `INNER JOIN` clause:
```
DECLARE @driver TABLE (ID INT)
INSERT INTO @driver SELECT eventid FROM event WHERE event_code_name IS NULL
SELECT event.eventid FROM event INNER JOIN @driver d ON d.ID = event.eventid
```
This way you only get the IDs that are present in both the event table and the @driver table.
An `INNER JOIN` can also be more efficient than an `IN (SELECT ...)` subquery, though the optimizer often produces the same plan for both.
Without the @driver table, you can obtain the same result using just the SELECT query from the INSERT statement:
```
SELECT eventid FROM event WHERE event_code_name IS NULL
``` | SqlServer Select Where In | [
"",
"sql",
"sql-server",
""
] |
I'm trying to display a dropdown with values from the table 'Events'. I have created a Model.edmx which has the structure of the database. Now I just have to write LINQ code to display one column's values. I am kind of new to LINQ.
```
Dim Events=" LINQ select statement part???"
ddlEvent.DataSource = Events
ddlEvent.DataBind()
ddlEvent.Items.Insert(0, New ListItem("-Type-", ""))
``` | DropDownLists work with 2 fields:
* the **DataTextField** represents the field holding the text of the options that will be displayed to the user
* the **DataValueField** represents the field holding the value associated with each option
So in your case, you could get it to work with something like:
```
Dim events As List(Of Event) = yourDbContext.Events.ToList()
ddlEvent.DataSource = events
ddlEvent.DataTextField = "NameOfThePropertyYouWantDisplayed"
ddlEvent.DataValueField = "NameOfThePropertyYouWantAsValue"
ddlEvent.DataBind()
```
Note, however, that the query will retrieve every property from the database. This can hurt performance if your `Event` class has a lot of properties.
To avoid this, you could use the LINQ `Select` operator, which performs a projection. This lets you retrieve only the necessary data from the database — for example, `yourDbContext.Events.Select(Function(e) e.SomeProperty).ToList()` (property name hypothetical). You can have a look at the MSDN documentation [here](http://msdn.microsoft.com/en-us/library/bb384504.aspx) | `Context.Events.ToList();`
Where, Context is your EF db context. Update your filter condition in where clause | selecting a column values from a Table using LINQ (Entity Framework) | [
"",
"sql",
".net",
"vb.net",
"linq",
"entity-framework",
""
] |
I'm having trouble implementing what should be a simple query. I'm trying to take a range of values, say numbered sequentially 1-20, in a column named front\_weight, and then randomize the result of that query by that column. So far what I've come up with is:
```
$sql_rand_limit_range = "SELECT * FROM posts
WHERE active='y'
AND front_weight > 0
AND front_weight
IN (SELECT front_weight
FROM posts ORDER BY front_weight DESC LIMIT 0, 20)
ORDER BY RAND()";
```
I get this error:
```
This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'
```
I am running MySQL version 5.1, by the way. I then tried to rework the query a little, trying something like this, which to my eyes should work:
```
$sql_rand_limit_range = "SELECT * FROM posts
WHERE active='y'
AND front_weight > 0
ORDER BY RAND() front_weight DESC
LIMIT 0, 20";
$sql_rand_limit_range = "SELECT * FROM posts
WHERE active='y'
AND front_weight > 0
ORDER BY front_weight DESC
LIMIT 0, 20
RAND()";
```
But you SQL ninjas can see how far that got me. Any ideas, or will I have to do something a bit more drastic in PHP code, like randomizing the mysqli\_fetch\_array results?
EDIT: I'm getting a lot of exotic queries that return something like 15,12,3,17,9,5,11... I just want to return rows 1-20 of a column, randomized. Basically, in SQL parlance, I'm trying to ask how to get this:
```
ORDER BY front_weight DESC LIMIT 0, 20
```
to play nice with this:
```
ORDER BY RAND()
```
I really hope I don't need joins and pseudo-tables, but if I do, so be it. | Thanks, guys, for all your help. After continuing to fiddle with it some more, I found the right query for my needs:
```
$sql_rand_limit_range = "SELECT * FROM posts
WHERE active='y'
AND front_weight BETWEEN 1 AND 20
ORDER BY RAND()";
```
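As an aside: if the front\_weight values are not guaranteed to be the consecutive values 1-20, wrapping the limited query in a derived table is the usual workaround for the 'LIMIT & IN/ALL/ANY/SOME subquery' restriction in older MySQL versions (a sketch, untested):

```
SELECT t.*
FROM (
    SELECT * FROM posts
    WHERE active = 'y' AND front_weight > 0
    ORDER BY front_weight DESC
    LIMIT 20           -- LIMIT is allowed here, inside a derived table
) AS t
ORDER BY RAND();       -- then shuffle those 20 rows
```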
Good old BETWEEN, hardly used but never truly forgotten! | Try this one
```
$sql_rand_limit_range = "SELECT * FROM posts
WHERE active='y'
AND front_weight > 0
AND front_weight
IN (SELECT front_weight
FROM posts ORDER BY front_weight DESC)
ORDER BY RAND() LIMIT 0, 20";
``` | MySQL take a range of values based on a column of descending numbers, and then randomize that range | [
"",
"mysql",
"sql",
""
] |
I was reading over [Instagram's sharding solution](http://instagram-engineering.tumblr.com/post/10853187575/sharding-ids-at-instagram) and I noticed the following line:
```
SELECT nextval('insta5.table_id_seq') %% 1024 INTO seq_id;
```
What does the %% in the SELECT line above do?
I looked it up in the PostgreSQL docs, and the only thing I found was that %% is used when you want a literal percent character.
```
CREATE OR REPLACE FUNCTION insta5.next_id(OUT result bigint) AS $$
DECLARE
our_epoch bigint := 1314220021721;
seq_id bigint;
now_millis bigint;
shard_id int := 5;
BEGIN
SELECT nextval('insta5.table_id_seq') %% 1024 INTO seq_id;
SELECT FLOOR(EXTRACT(EPOCH FROM clock_timestamp()) * 1000) INTO now_millis;
result := (now_millis - our_epoch) << 23;
result := result | (shard_id << 10);
result := result | (seq_id);
END;
$$ LANGUAGE PLPGSQL;
``` | The *only* place I can think of, where a `%` would be doubled up in standard Postgres is inside the [`format()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-FORMAT) function, commonly used for producing a query string for dynamic SQL. [Compare examples here on SO.](https://stackoverflow.com/search?q=format%20execute%20%5Bplpgsql%5D%20%5Bdynamic-sql%5D)
[The manual](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-FORMAT):
> In addition to the format specifiers described above, the special
> sequence `%%` may be used to output a literal `%` character.
Tricky when using the [modulo operator `%`](https://www.postgresql.org/docs/current/functions-math.html#FUNCTIONS-MATH-OP-TABLE) in a dynamic statement!
I suspect they are running dynamic SQL behind the curtains - which they generalized and simplified for the article. (The schema-qualified name of the sequence is `'insta5.table_id_seq'` and the table wouldn't be named "table".) In the process they forgot to "unescape" the modulo operator.
That's what they may actually be running:
```
EXECUTE format($$SELECT nextval('%I') %% 1024$$, seq_name)
INTO seq_id;
``` | With default installation (on 9.2):
```
ERROR: operator does not exist: bigint %% integer
SQL state: 42883
```
So I would say it could be
* a [custom operator](http://www.postgresql.org/docs/9.3/static/sql-createoperator.html)
* or a typo, where they meant the modulo operator: `%`
"",
"sql",
"postgresql",
"plpgsql",
""
] |
So I have a collection of rows that looks something like:
```
Text Sequence
ITEM1 1
ITEM1 2
ITEM1 3
ITEM2 4
ITEM2 5
ITEM3 6
ITEM2 7
ITEM2 8
ITEM1 9
ITEM1 10
```
I want the result to look like:
```
Text Sequence
ITEM1 1
ITEM2 4
ITEM3 6
ITEM2 7
ITEM1 9
```
So I am taking the first instance of a row and retaining only the first sequence number; however, if the item is repeated further down the list, I also retain the sequence number for that later instance.
The SQL I have is:
```
SELECT Text,Seq=Min(Sequence)
FROM Items
GROUP BY Text
ORDER BY Seq
```
Which results in:
```
Text Sequence
ITEM1 1
ITEM2 4
ITEM3 6
```
The GROUP BY Text statement is removing the 4th and 5th rows. How do I avoid that? | Adjusted script to allow for gaps in Sequence. The difference between the global row\_number() and the per-Text row\_number() is constant within each consecutive run of the same Text, so grouping by (Text, grp) isolates each run — a classic gaps-and-islands technique:
```
DECLARE @t TABLE(Text char(5), Sequence int)
INSERT @t VALUES
('ITEM1',1),('ITEM1',2),('ITEM1',3),('ITEM2',4),('ITEM2',5),
('ITEM3',6),('ITEM2',7),('ITEM2',8),('ITEM1',9),('ITEM1',10)
;WITH x as
(
SELECT Text,Sequence,
row_number() OVER (order by Sequence)
- row_number() OVER (partition by text order by Sequence) grp
FROM @t
)
SELECT text, MIN(Sequence) seq
FROM x
GROUP BY text, grp
ORDER BY seq
```
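To see the trick at work, here are the intermediate values the CTE computes (worked out by hand for the sample data):

```
Text   Sequence  rn  rn_per_text  grp
ITEM1  1         1   1            0
ITEM1  2         2   2            0
ITEM1  3         3   3            0
ITEM2  4         4   1            3
ITEM2  5         5   2            3
ITEM3  6         6   1            5
ITEM2  7         7   3            4
ITEM2  8         8   4            4
ITEM1  9         9   4            5
ITEM1  10        10  5            5
```

Grouping by (Text, grp) and taking MIN(Sequence) then yields exactly one row per run.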
Result:
```
text seq
ITEM1 1
ITEM2 4
ITEM3 6
ITEM2 7
ITEM1 9
``` | LAG lets you see the previous record. Compare the text with the previous line. Only show the lines where the text differs from the previous line.
```
select sequence, text
from
(
select sequence, text, lag(text) over (order by sequence) as prev_text
from mytable
) t
where prev_text != text
or prev_text is null -- for the first line
order by sequence;
``` | Preventing removal of rows in a SQL query based on ordinal position | [
"",
"sql",
"sql-server",
""
] |
I have the following string (example) in one cell of a column:
```
1,4,3,8,23,7
```
I need to get pairs of values as below into a new table:
```
(1,4)
(4,3)
(3,8)
(8,23)
(23,7)
```
I hope that I explained what I need properly, so that you can understand me :)
I'd appreciate even a one-sentence answer, because I like to solve programming problems myself :) | ```
DECLARE @data varchar(2000) = '1,4,3,8,23,7'
;WITH x as
(
SELECT t.c.value('.', 'VARCHAR(2000)') v, row_number() over (order by (select 1)) rn
FROM (
SELECT x = CAST('<t>' +
REPLACE(@data, ',', '</t><t>') + '</t>' AS XML)
) a
CROSS APPLY x.nodes('/t') t(c)
)
SELECT t1.v, t2.v
FROM x t1
JOIN x t2
on t1.rn = t2.rn - 1
```
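On SQL Server 2016+ the XML trick can be replaced by `STRING_SPLIT`, and on SQL Server 2022+ its optional third argument exposes an `ordinal` column that makes the self-join straightforward (sketch for 2022+ only):

```
DECLARE @data varchar(2000) = '1,4,3,8,23,7'

SELECT t1.value, t2.value
FROM STRING_SPLIT(@data, ',', 1) t1    -- third argument 1 enables the ordinal column
JOIN STRING_SPLIT(@data, ',', 1) t2
  ON t2.ordinal = t1.ordinal + 1
```

This yields the same pairs as the result shown below.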
Result:
```
1 4
4 3
3 8
8 23
23 7
``` | Sorry to spoil the fun :)
```
declare
@col_list varchar(1000),
@sep char(1)
set @col_list = '1,4,3,8,23,7'
set @sep = ','
;with x as (
select substring(@col_list, n, charindex(@sep, @col_list + @sep, n) - n) as col,
row_number() over(order by n) as r
from numbers where substring(@sep + @col_list, n, 1) = @sep
and n < len(@col_list) + 1
)
select x2.col, x1.col
from x as x1
inner join x as x2 on x1.r = x2.r+1
```
<http://dataeducation.com/you-require-a-numbers-table/> | SQL, get adhering pairs of values from comma separated string | [
"",
"sql",
"sql-server",
"regex",
"string",
""
] |
I want to use an IF condition in a SQL query for the following need:
"if the year field is null then do not calculate the age, and if it is set then calculate it."
Here is my query. Where is the problem? Please consider this scenario:
'if only the month and day are present, for example 0000-03-12'
```
SELECT id, name, birth_date, birth_time,
city_native, country_native, sex,
city_current, country_current, image,
if(YEAR(birth_date)='','',YEAR(CURDATE()) - YEAR(birth_date) -
(RIGHT(CURDATE(),5) < RIGHT(birth_date,5)),'') AS age
FROM birth_user u
WHERE <condition>;
You can use [**IFNULL()**](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull) to check whether birth\_date is NULL:
```
CASE WHEN IFNULL(birth_date,0)=0 THEN '' ELSE YEAR(CURDATE()) - YEAR(birth_date) END as age
``` | If your need is "if it is null", why are you comparing it to zero?
```
if(birth_date is null, 0, YEAR(CURDATE()) - ......) AS age
```
Where that `0` there is a suitable "default age". | apply an if condition in a sql query | [
"",
"mysql",
"sql",
""
] |
I have an association table. The rows look something like: `id, objectID, thingID`.
I need a stored procedure to perform a select statement that will return 3 values:
```
item1ID, item2ID, item3ID
```
So the query will look something like:
```
SELECT TOP 3 objectID WHERE thingID = 7 -- (or something)
```
There may not always be three rows returned, however.
**What would the stored proc look like that returned the rows as values, but zeroes for the remaining rows if 3 are not returned?**
examples:
**data**
* id: 1, objectID: 12, thingID: 2
* id: 2, objectID: 13, thingID: 2
* id: 3, objectID: 14, thingID: 3
* id: 4, objectID: 15, thingID: 3
* id: 5, objectID: 16, thingID: 3
**results where thingID = 2**
`item1ID: 12, item2ID: 13, item3ID: 0`
**results where thingID = 3**
`item1ID: 14, item2ID: 15, item3ID: 16` | Similar to other answers, but using a SQL table variable instead of a temp table.
SQL table variables are cleaned up when the proc completes.
```
create proc ReturnTop3
as
begin
declare @returnTable as table (
objectId int
)
declare @count int
insert into @returnTable
SELECT TOP 3 objectID WHERE thingID = 7
set @count = (select COUNT(*) from @returnTable)
while (@count < 3)
begin
insert into @returnTable select 0
select @count = @count + 1
end
select * from @returnTable
end
``` | You can create a temp table and do it this way
```
Create table #test (Id int);
INSERT INTO #test
SELECT TOP 3 objectID WHERE thingID = 7
WHILE (SELECT COUNT(1) from #test) < 3
BEGIN
INSERT INTO #test
VALUES (0)
END
SELECT * FROM #test
DROP TABLE #test
``` | How can I always return 3 values in this stored procedure? | [
"",
"sql",
"stored-procedures",
""
] |
I construct hundreds of SQL queries in an Excel sheet, each one placed in a cell of one column. What I am looking to do is run each of these SQL statements from Excel.
I'm just wondering if anyone knows a way to convert all my SQL into VBA strings so that I can loop through all the rows and run each query.
I found this, which is what I want to do, but is there a way I can alter the code so it reads from Excel cells rather than a form?
<http://allenbrowne.com/ser-71.html>
Thanks
EDIT: Here is a sample SQL that I am trying to convert
```
SELECT
TT.TEST_TABLE_ID,
TT.TEST_TABLE_NO,
TT.MEMBERSHIP_NUMBER,
TT.TEST_TABLE_TYPE
from TEST_TABLE TT
```
I think that because each SELECT clause is on its own line, it causes problems during the conversion.
EDIT #2: Here is my code that executes SQL
```
Sub GetData()
Dim Conn As New ADODB.Connection
Dim RS As New ADODB.Recordset
Dim cmd As New ADODB.Command
Dim sqlText As String
Dim Row As Long
Dim Findex As Long
Dim Data As Worksheet
Dim X As Long
Set Data = Sheets("Results")
Data.Select
Cells.ClearContents
Conn.Open "PROVIDER=ORAOLEDB.ORACLE;DATA SOURCE=ORCL;USER ID=user;PASSWORD=password"
cmd.ActiveConnection = Conn
cmd.CommandType = adCmdText
'sqlText = How to reference Valid SQL cells
cmd.CommandText = sqlText
Set RS = cmd.Execute
For X = 1 To RS.Fields.Count
Data.Cells(1, X) = RS.Fields(X - 1).Name
Next
If RS.RecordCount < Rows.Count Then
Data.Range("A2").CopyFromRecordset RS
Else
Do While Not RS.EOF
Row = Row + 1
For Findex = 0 To RS.Fields.Count - 1
If Row >= Rows.Count - 50 Then
Exit For
End If
Data.Cells(Row + 1, Findex + 1) = RS.Fields(Findex).Value
Next Findex
RS.MoveNext
Loop
End If
Cells.EntireColumn.AutoFit
End Sub
```
In the sqlText part, I want to be able to reference my column of SQL statements. I thought I needed to convert it, but you guys are right that if I reference it I can just use your code, Brad.
I tried to incorporate your code, Brad, where my `'sqlText = How to reference Valid SQL cells` comment is, but had no success. | Here is a start to the code I think you need.
I have placed the SQL in a sheet named "SQL", in Col A.
The issues with this are:
(1) You are placing field names in a row, then the data that is returned into a row. That will require two rows per SQL statement.
(2) I copied the SQL statement from sheet "SQL" and placed it in Col A of "Results" (you mentioned you wanted to place results to the right of the SQL string). (3) You clear the contents of the "Results" sheet, so you need to be careful not to erase your SQL if you decide to combine sheets.
```
Option Explicit
Sub Process_SQL_Strings()
Dim cmd As New ADODB.Command
Dim sqlText As String
Dim Row As Long
Dim Findex As Long
Dim Data As Worksheet
Dim iFldCt As Long
Dim conn As New ADODB.Connection
Dim rs As ADODB.Recordset
Dim sConn As String
Dim lLastRow As Long
Dim lRow As Long
Set Data = Sheets("Results")
Data.Select
Cells.ClearContents
conn.Open "PROVIDER=ORAOLEDB.ORACLE;DATA SOURCE=ORCL;USER ID=user;PASSWORD=password"
cmd.ActiveConnection = conn
cmd.CommandType = adCmdText
'' Set conn = New ADODB.Connection
'' sConn = "Provider=Microsoft.ACE.OLEDB.12.0;" & _
'' "Data Source=C:\data\access\tek_tips.accdb;" & _
'' "Jet OLEDB:Engine Type=5;" & _
'' "Persist Security Info=False;"
'' conn.Open sConn
'sqlText = How to reference Valid SQL cells
lRow = 1
Do
sqlText = Sheets("SQL").Range("A" & lRow)
If sqlText = "" Then
MsgBox "Finished processing " & lRow & " rows of SQL", vbOKOnly, "Finished"
GoTo Wrap_Up
End If
Set rs = New ADODB.Recordset
rs.Open sqlText, conn, adOpenStatic, adLockBatchOptimistic, adCmdText
Data.Cells(lRow, 1) = sqlText
If not rs.EOF then
For iFldCt = 1 To rs.Fields.Count
Data.Cells(lRow, 1 + iFldCt) = rs.Fields(iFldCt - 1).Name
Next
If rs.RecordCount < Rows.Count Then
Data.Range("B" & lRow).CopyFromRecordset rs
Else
Do While Not rs.EOF
Row = Row + 1
For Findex = 0 To rs.Fields.Count - 1
If Row >= Rows.Count - 50 Then
Exit For
End If
Data.Cells(Row + 1, Findex + 1) = rs.Fields(Findex).value
Next Findex
rs.MoveNext
Loop
End If
Cells.EntireColumn.AutoFit
End If
lRow = lRow + 1
Loop
Wrap_Up:
rs.Close
Set rs = Nothing
conn.Close
Set conn = Nothing
End Sub
``` | I am using something like this:
```
Function SQLQueryRun(ByVal query As String, ByVal returnData As Boolean) As Variant
Dim Conn As New ADODB.Connection
Dim ADODBCmd As New ADODB.Command
Dim ret As New ADODB.Recordset
Conn.ConnectionString = "connection_string_here"
Conn.Open
ADODBCmd.ActiveConnection = Conn
ADODBCmd.CommandText = query
Set ret = ADODBCmd.Execute()
If returnData Then
If Not ret.EOF Then SQLQueryRun = ret.GetRows()
Else
SQLQueryRun = True
End If
Conn.Close
Set Conn = Nothing
Set ret = Nothing
End Function
```
If the second argument is `False`, nothing is returned by the function. Are you expecting results from the query run?
Also, I use a macro to create a query/pivot table from SQL contained in the Windows clipboard; if you are interested, let me know. | Functions to convert SQL into VBA strings | [
"",
"sql",
"vba",
"excel",
""
] |
I have a table in Oracle, let's say with 99 rows. I want to take three different (random) samples from the table, making sure it's without replacement *between* the three different samples. This is to say, sample\_1 may contain rows 1-33, sample\_2 may contain rows 34-66, and sample\_3 may contain rows 67-99. I do **not** want samples where there is overlap between the rows, i.e. sample\_1 contains rows 1-33, sample\_2 contains rows 21-53, etc...
The code I have so far is as follows:
```
CREATE TABLE training_sample_1 AS
SELECT *
FROM pmaster_numeric
SAMPLE BLOCK (33, 100)
```
I am using SAMPLE BLOCK because my table is actually ~ 10 million rows long, and a seed value because I want to be able to reference this sample in the future. Again, I want to make two more samples - training\_sample\_2 and test\_sample\_1 - of the same size, *making sure none of the three sample tables contain rows that are also in other sample tables* (without replacement). | (EDITED)
Here's the SQL I ended up using:
```
-- Create randomly-ordered pmaster_numeric table
CREATE TABLE random_order_pmaster_numeric AS
SELECT pmast.*, dbms_random.value AS row_number
FROM pmaster_numeric pmast
ORDER BY dbms_random.value;
-- training_sample_1
CREATE TABLE training_sample_1 AS
SELECT *
FROM random_order_pmaster_numeric
WHERE row_number > 0 and row_number <= .3;
-- training_sample_2
CREATE TABLE training_sample_2 AS
SELECT *
FROM random_order_pmaster_numeric
WHERE row_number > .3 and row_number <= .6;
-- training_sample_3
CREATE TABLE training_sample_3 AS
SELECT *
FROM random_order_pmaster_numeric
WHERE row_number > .6 and row_number <= .9;
-- test_sample_1
CREATE TABLE test_sample_1 AS
SELECT *
FROM random_order_pmaster_numeric
WHERE row_number > .9;
```
What I am doing here is randomly ordering my entire table, keeping the dbms\_random.value (after noticing it is uniformly distributed over the interval 0-1), then breaking it up into approximately equal-sized (in terms of number of rows) pieces, which I designate as my samples. This avoids replacement because I am choosing the row selections myself instead of using the SAMPLE BLOCK clause, and the WHERE clauses ensure there is no row overlap between sample tables.
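A related approach (my aside, not part of the original answer) avoids materializing a random column entirely: `ORA_HASH` bucketing. It is deterministic, so the same rows always land in the same sample, assuming rows are not physically relocated:

```
-- ORA_HASH(expr, max_bucket) maps each row to a bucket 0..max_bucket.
-- Ten buckets: 0-2 -> training_1, 3-5 -> training_2, 6-8 -> training_3, 9 -> test.
CREATE TABLE training_sample_1 AS
SELECT * FROM pmaster_numeric
WHERE ORA_HASH(ROWID, 9) BETWEEN 0 AND 2;

CREATE TABLE test_sample_1 AS
SELECT * FROM pmaster_numeric
WHERE ORA_HASH(ROWID, 9) = 9;
```

Because the buckets partition the table, the samples are disjoint by construction.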
Here, the training samples are all 3X as large as the test sample. | You could do this in the following way if you have an incremental ID column in the table.
What I am suggesting is to take the `MAX` value and break it into `3 parts` and then extract samples from these sets, as the `SAMPLE` clause works on top of the `WHERE`
```
WITH CTE
AS (
SELECT MAX(ID) MAX_ID
FROM pmaster_numeric
)
SELECT *
FROM pmaster_numeric
SAMPLE BLOCK(33)
WHERE id < (
SELECT MAX_ID
FROM CTE
) / 3
UNION ALL
SELECT *
FROM pmaster_numeric
SAMPLE BLOCK(33)
WHERE id > (
SELECT MAX_ID
FROM CTE
) / 3
AND id < 2 * (
SELECT MAX_ID
FROM CTE
) / 3
UNION ALL
SELECT *
FROM pmaster_numeric
SAMPLE BLOCK(33)
WHERE id > 2 * (
SELECT MAX_ID
FROM CTE
) / 3
AND id < (
SELECT MAX_ID
FROM CTE
)
``` | Sampling without replacement over multiple samples | [
"",
"sql",
"oracle",
""
] |
I want to use a single alias name in MySQL for two columns at a time:
```
first_name, lname as username
```
In username, I want the values of both first\_name & lname.
Thanks in advance. | MySQL has a function `CONCAT()` for this.
In your case you can use it as:
```
CONCAT(first_name, ' ', lname) as username
``` | Try concatenating the fields:
SELECT CONCAT(`first_name`, ' ', `lname`) as username FROM `table` | How to use a single alias name for 2 columns at a time | [
"",
"mysql",
"sql",
""
] |
I am trying to write a query that uses a single SELECT statement to bring in multiple fields. Below is my code, which works for one specific field, [PDN005\_Err]. How can I stretch that from looking only at PDN005 to covering PDN005 - PDN122, without having to hard-code them all?
Any help would be greatly appreciated!
```
SELECT *
FROM OpenQuery(INSQL,
'
SELECT [DateTime], [PDN005_Err], [PDN005_Loc]
FROM Runtime.dbo.WideHistory
WHERE [DateTime] >= ''2014-03-01''
AND [DateTime] <= ''2014-04-01''
AND [PDN005_Err] = 1
ORDER BY [DateTime]
'
)
```
So, as an update, this is what I tried:
```
SELECT *
FROM OpenQuery(INSQL,
'
SELECT [DateTime], [AGC005_ErrCond], [AGC003_ErrCond], [AGC005_Loc], [AGC003_Loc]
FROM Runtime.dbo.WideHistory
WHERE [DateTime] >= ''2014-03-01''
AND [DateTime] <= ''2014-04-01''
AND ([AGC005_ErrCond] = 1
OR [AGC003_ErrCond] = 1)
ORDER BY [DateTime]
'
)
```
However, when I receive the data, the value I'm looking for is 1, but rows keep repeating until one of the PDN Err columns equals 1. Here's the attached picture so you can see. | Try this:
```
DECLARE @data varchar(max)
;WITH x as
(
SELECT number FROM master..spt_values WHERE number between 1 and 122 and type = 'P'
)
SELECT @data = 'SELECT [DateTime]'+ (
SELECT ',[PDN'+right(cast(number+1000 as char(4)), 3) + '_Err]'
FROM x
for xml path(''), type
).value('.', 'varchar(max)') + ',[PDN005_Loc]
FROM Runtime.dbo.WideHistory
WHERE [DateTime] >= ''''2014-03-01''''
AND [DateTime] <= ''''2014-04-01''''
AND [PDN005_Err] = 1
ORDER BY [DateTime]'
exec( 'SELECT *
FROM OpenQuery([INSQL],'''+@data + ''')')
```
Note I would agree with the comment from @ThorstenKettner that hardcoding the columns would be better.
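To see how the zero-padded column numbers in that expression are built, the add-1000-then-RIGHT trick works like this:

```
SELECT right(cast(5 + 1000 as char(4)), 3)    -- '005'
SELECT right(cast(122 + 1000 as char(4)), 3)  -- '122'
```

Adding 1000 forces every number to four digits, and RIGHT(..., 3) keeps the last three, giving names like [PDN001_Err] through [PDN122_Err].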
You should be aware that the maximum length of the query varchar in OpenQuery is 8 KB. | Use information\_schema.colums view to generate the column list
```
select column_name+',' from information_schema.columns
where table_name='WideHistory'
``` | SQL Query Help! Using Selecting multiple | [
"",
"sql",
"sql-server",
""
] |
I'm building a database for a chat application and I'm having a spot of trouble building a `View` to combine the Chats table and the Users table.
The `Chats` table has fields `sender_id` and `receiver_id`, which map to the `user_id` field in the `Users` table. Now, I want to create a `View` which will combine the `Chats` table with the `Users` table, and also provide the sender's and receiver's profile pictures (in the fields `sender_pic` and `receiver_pic`). These can be fetched from the `profile_pic` field of the `Users` table.
What's the SQL syntax for doing this? | Your syntax will be as follows, because you need both sender and receiver you need to join to users table twice.
```
CREATE VIEW SomeFancyName
AS
SELECT s.profile_pic AS sender_pic
,r.profile_pic AS receiver_pic
FROM Chats c
JOIN users s
ON c.sender_id = s.user_id
JOIN users r
ON c.receiver_id = r.user_id
```
Now you can add whatever columns you need from each table. | You've got to use alias names:
```
SELECT
s.profile_pic AS sender_pic,
r.profile_pic AS receiver_pic
FROM
Chats
INNER JOIN
Users AS s -- s will represent the sender
ON
Chats.sender_id = s.user_id
INNER JOIN
Users AS r -- and r the receiver
ON
Chats.receiver_id = r.user_id;
``` | How can I create a View from multiple rows from multiple tables? | [
"",
"android",
"sql",
"sqlite",
"view",
"sql-view",
""
] |
Good morning,
I'm trying to develop a simple inventory system; however, I'm having difficulty selecting only the last entry of each product (Component; Ref).

My current query is :
```
SELECT component, ref, date, qty FROM $usertable WHERE (SELECT qty FROM $usertable HAVING max(date) ORDER BY component ASC, ref ASC LIMIT 0,1)
```
As you can see I'm not familiar with nested queries :/
Can somebody help me figure out the solution, please? | You can join the table against a per-product aggregate to get the last entry for each product:
```
SELECT u1.*
FROM
$usertable u1
JOIN (
SELECT component, ref, MAX(date) date
FROM $usertable
GROUP BY component, ref
) u2
USING(component, ref,date)
``` | Try this:
```
SELECT component, ref, date, qty FROM
( SELECT component, ref, date, qty FROM MyTABLE ORDER BY date DESC ) AS dummyTable
GROUP BY component, ref
``` | SQL Query select the last entry of each product | [
"",
"mysql",
"sql",
"nested",
""
] |
I have a table called `a` with this data:
```
+-----+-----------+-------+
| id | parent_id | price |
+-----+-----------+-------+
| 1 | 1 | 100 |
| 2 | 1 | 200 |
| 3 | 1 | 99 |
| 4 | 2 | 1000 |
| 5 | 2 | 999 |
+-----+-----------+-------+
```
I want to get the `id` of the minimum price for each parent\_id.
Is there any way to get this result without a subquery?
```
+-----+-----------+-------+
| id | parent_id | price |
+-----+-----------+-------+
| 3 | 1 | 99 |
| 5 | 2 | 999 |
+-----+-----------+-------+
``` | Here is a shot at how to do it without subqueries. I haven't tested, let me know if it works!
```
SELECT t.id, t.parent_id, t.price
FROM table t
LEFT JOIN table t2
ON (t.parent_id = t2.parent_id AND t.price > t2.price)
GROUP BY t.id, t.parent_id, t.price
HAVING COUNT(*) = 1 AND max(t2.price) is null
ORDER BY t.parent_id, t.price desc;
``` | ```
SELECT D1.id, D1.parent_id, D1.price
FROM Data D1
LEFT JOIN Data D2 on D2.price < D1.price AND D1.parent_id = D2.parent_id
WHERE D2.id IS NULL
``` | Get min price id without inner select | [
"",
"mysql",
"sql",
""
] |
I have 3 tables:
**Companies, Contacts and Company\_Contacts**
The Company\_Contact table holds the connection between company\_id and contact\_id; each contact can also be related to several companies.
Each company has a different number of contacts, and I need a query that will bring me the data in this form:
```
company_name contact_1 contact_2 contact_3 contact_4 contact_5
---------- --------- ---------- ---------- ---------- -----------
company1 aaa ddd ggg iii kkk
company2 bbb eee hhh jjj lll
company3 ccc fff NULL NULL NULL
```
I don't know for sure if each company has 5 contacts but the request is to bring the top 5 contacts for each company.
I don't know how to do it with PIVOT (if PIVOT is really the answer).
How can I create a query to achieve this? | This is the query that I used:
```
select company_name, Contact_1, Contact_2, Contact_3, Contact_4, Contact_5
from
(
select contact.first_name + ' ' + contact.last_name as contact_name,
company.company_name,
'Contact_'+
cast(row_number() over(partition by relation.company_id
order by relation.contact_id) as varchar(50)) row
from contacts company
left join contact_company_relation_additional_information relation
on company.id = relation.company_id and relation.ContactCompanyRelation_IsActive = 1
left join contacts contact
on relation.contact_id = contact.id and contact.is_company = 0 and contact.is_active = 1
where company.is_company = 1 and company.is_active = 1
) d
pivot
(
max(contact_name)
for row in (Contact_1, Contact_2, Contact_3, Contact_4, Contact_5)
) p;
``` | Assuming this is TSQL as you mention pivot, you can use a combination of a cte to generate a row number and then the pivot.
```
with cte1 as (
select com.company_id as [Company Id],
row_number() over (partition by com.company_id order by con.contact_id) as row,
[Contact Name] from contacts con
inner join company_contacts com on com.contact_id=con.contact_id
),
cte2 as (
select * from cte1 where row < 6
)
select co.[Company Name],
c.[1] as contact_1,
c.[2] as contact_2,
c.[3] as contact_3,
c.[4] as contact_4,
c.[5] as contact_5
from Companies co
left outer join
(select *
from cte2
pivot (min([Contact Name]) for row in ([1],[2],[3],[4],[5])) x
) c on c.[company id] = co.company_id
``` | SQL complex pivot query | [
"",
"sql",
"pivot",
""
] |
I'm trying to implement a recursive query in MS SQL Server 2008 using a CTE.
I know there are a lot of posts about recursion in SQL Server, but this is a bit different, and I'm getting stuck.
I have a table with this structure:
```
CREATE TABLE [dbo].[Account](
[ID] [nvarchar](20) NULL,
[MAIN_EMAIL] [nvarchar](80) NULL,
[SECONDARY_EMAIL] [nvarchar](80) NULL)
```
This table represents a list of accounts. Accounts can be duplicated in the table, and I know two records are duplicates if one account has a MAIN\_EMAIL or SECONDARY\_EMAIL that appears as the MAIN\_EMAIL or SECONDARY\_EMAIL of another record with a different ID.
For example these records are duplicated in my table:

I know those records are duplicated because the ID 21206 has a main email that exists as the main email of the record with ID 21246 and as the secondary email of the record with ID 21268. Furthermore, the record with ID 21246 has a secondary email that exists as the main email of the record with ID 28169. So I consider those 4 records as a single record (this rule comes from the project requirements).
Then, let's suppose i know the ID from which start this recursive query, suppose i know the first, with the ID 21206. I wrote this query but the result is a infinitive loop (and i get the error from MS SQL SERVER since it says i can do at maximum 100 recursions), if i select top 100 the result set contains the correct records, in this example the result is all ids 21206,21246,21268,28169 but these records are repeated to infinity, it seems the recursive part doesn't stop.
The query is:
```
with cte (ID, MAIN_EMAIL, SECONDARY_EMAIL) as (
select ad.ID,ad.MAIN_EMAIL,ad.SECONDARY_EMAIL
from Account ad
where ad.ID = '21206'
union all
select ade.ID,ade.MAIN_EMAIL,ade.SECONDARY_EMAIL
from Account ade
inner join cte c
on (
(ade.MAIN_EMAIL = c.MAIN_EMAIL
or ade.SECONDARY_EMAIL = c.MAIN_EMAIL
or ade.MAIN_EMAIL = c.SECONDARY_EMAIL
or ade.SECONDARY_EMAIL = c.SECONDARY_EMAIL)
and ade.ID <> c.ID
)
)
select top 100 * from cte
```
**I extracted those 4 records that are related and changed the emails for privacy. So the result should be the 4 records above. The result I get is a recordset with those 4 records (so it is correct but the recursive query doesn't stop so I get those 4 records to infinity).**
Could you help me?
Thank you in advance | Like others have already said, there is no need for recursion:
```
SELECT DISTINCT account.*
FROM account
INNER Join (SELECT mail
FROM (SELECT main_email mail
FROM account
UNION ALL
SELECT secondary_email mail
FROM account) a
GROUP BY mail
HAVING count(1) > 1) mails
ON main_email = mails.mail or secondary_email = mails.mail
```
It's probably possible to use UNPIVOT to get the list of all the mail addresses, but I'm not sure which will be best, performance-wise.
I leave the [fiddle link](http://www.sqlfiddle.com/#!3/10543b/13 "link")
If you want to check, the UNPIVOT version (with CTE) is:
```
WITH mails as
(
SELECT mail
FROM (SELECT ID, main_email, secondary_email
FROM account) p
UNPIVOT (mail FOR col IN (main_email, secondary_email)) as a
GROUP BY mail
HAVING count(mail) > 1
)
SELECT DISTINCT account.*
FROM account
INNER JOIN mails on main_email = mails.mail or secondary_email = mails.mail
``` | If you just want to match MAIN\_EMAIL to SECONDARY\_EMAIL then a UNION should work:
```
SELECT DISTINCT R.MainId
FROM
(
SELECT A1.ID MainId
FROM dbo.Account A1 INNER JOIN dbo.Account A2 ON A2.MAIN_EMAIL = A1.SECONDARY_EMAIL
UNION
SELECT A2.ID
FROM dbo.Account A2 INNER JOIN dbo.Account A1 ON A2.MAIN_EMAIL = A1.SECONDARY_EMAIL
) R
``` | Recursive Query in SQL SERVER using Recursive member with multiple fields in the join | [
"",
"sql",
"sql-server",
"recursion",
""
] |
I have this table:
```
|NAME(Key) | LOGIN_DATE |
|----------------|---------------------|
|mark | 2012-10-10 10:35:00 |
|mark | 2012-10-10 10:40:00 |
|mark | 2012-10-10 10:45:00 |
```
I want to know how to find the difference between last and first dates:
```
|NAME | TIME_DIFF |
|----------------|---------------------|
|mark | 10 minutes |
```
How can I do this? | ```
SELECT name
, SEC_TO_TIME( MAX(TIME_TO_SEC(login_date))
- MIN(TIME_TO_SEC(login_date))
) x
FROM my_table
GROUP
BY name;
``` | ```
select
name,
concat( timestampdiff( minute,
min( login_date ),
max( login_date ) ),
' minutes'
) as 'login duration'
from my_table
group by name
``` | How to compare the first date and the last date? | [
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
I'm currently working on the very last aspect of my project and am looking to build a dynamic matching function. Once I get this query to work I will have it solved; the problem is that it's returning the wrong values.
```
SELECT `Advert_ID`, `User_ID`, `Username` FROM adverts
WHERE ( `Skill_Proffession_Needed` LIKE 'Plumbing'
AND `Skill_Area_Needed` LIKE 'Ceptic'
AND `Skill_Level_Needed` LIKE 'Proffessional')
OR (`Skill_Proffession_Needed` LIKE 'Plumbing'
AND `Skill_Area_Needed` LIKE 'Heating'
AND `Skill_Level_Needed` LIKE 'Proffessional')
OR (`Skill_Proffession_Needed` LIKE 'Plumbing'
AND `Skill_Area_Needed` LIKE 'Sink'
AND `Skill_Level_Needed` LIKE 'Proffessional')
AND `User_ID` = '16'
AND `Location_Country` LIKE 'Ireland'
AND `Location_City` LIKE 'Dublin' AND Active = 1
```
The problem is that it seems to be ignoring the last 4 AND conditions, returning inactive adverts and adverts outside that location. I have tried rearranging the last 4 AND conditions and placing them at the top, but got the same error. I have also tried adding extra brackets to the OR clauses to set precedence, with the same issue.
Any help at all would be very helpful | You need to organize your query to have one group all OR's like `WHERE (all or operations) AND operations`
```
SELECT `Advert_ID`, `User_ID`, `Username` FROM adverts
WHERE (
(`Skill_Proffession_Needed` LIKE 'Plumbing' AND `Skill_Area_Needed` LIKE 'Ceptic' AND `Skill_Level_Needed` LIKE 'Proffessional')
OR (`Skill_Proffession_Needed` LIKE 'Plumbing' AND `Skill_Area_Needed` LIKE 'Heating' AND `Skill_Level_Needed` LIKE 'Proffessional')
OR (`Skill_Proffession_Needed` LIKE 'Plumbing' AND `Skill_Area_Needed` LIKE 'Sink' AND `Skill_Level_Needed` LIKE 'Proffessional')
)
AND `User_ID` = '16'
AND `Location_Country` LIKE 'Ireland'
AND `Location_City` LIKE 'Dublin'
AND Active = 1
``` | I'm guessing the order of operations for 'and' and 'or' is the problem: [SQL Logic Operator Precedence: And and Or](https://stackoverflow.com/questions/1241142/sql-logic-operator-precedence-and-and-or)
try it like this:
```
SELECT `Advert_ID`
,`User_ID`
,`Username`
FROM adverts
WHERE ((
`Skill_Proffession_Needed` LIKE 'Plumbing'
AND `Skill_Area_Needed` LIKE 'Ceptic'
AND `Skill_Level_Needed` LIKE 'Proffessional'
)
OR (
`Skill_Proffession_Needed` LIKE 'Plumbing'
AND `Skill_Area_Needed` LIKE 'Heating'
AND `Skill_Level_Needed` LIKE 'Proffessional'
)
OR (
`Skill_Proffession_Needed` LIKE 'Plumbing'
AND `Skill_Area_Needed` LIKE 'Sink'
AND `Skill_Level_Needed` LIKE 'Proffessional'
))
AND `User_ID` = '16'
AND `Location_Country` LIKE 'Ireland'
AND `Location_City` LIKE 'Dublin'
AND Active = 1
``` | SQL long query issue | [
"",
"mysql",
"sql",
""
] |
```
SELECT crime_name || 'Was Committed On' ||, crime_date,
       victim.firstname AS "VICTIM" || 'Is The Victim',
       ||witness.firstname AS "WITNESS" || 'Witnessed The Crime' ||,
       Suspect.firstname AS "SUSPECT" || 'Is Suspected Of Committing The Crime' ||
FROM Crime, Victim, Witness, Suspect
WHERE victim.crime_no = crime.crime_no AND witness.crime_no = crime.crime_no AND suspect.crime_no = crime.crime_no;
```
Can anyone help me? I keep getting the missing expression error and I just can't seem to figure it out. Any help would be appreciated.
I'm using Oracle APEX, by the way.
Thanks! | Try this:
```
SELECT crime_name || ' Was Committed On ' || crime_date,
victim.firstname || ' Is The Victim' AS "VICTIM" ,
witness.firstname || ' Witnessed The Crime' AS "WITNESS",
Suspect.firstname || ' Is Suspected Of Committing The Crime' AS "SUSPECT"
FROM Crime, Victim, Witness, Suspect
WHERE victim.crime_no = crime.crime_no AND
witness.crime_no = crime.crime_no AND
suspect.crime_no = crime.crime_no;
```
The aliases need to go at the end. | You have several fields with both `||` and `,` separating them. You should remove the redundant operators:
```
SELECT crime_name || ' Was Committed On ' || crime_date,
victim.firstname AS "VICTIM" || ' Is The Victim',
witness.firstname AS "WITNESS" || ' Witnessed The Crime',
Suspect.firstname AS "SUSPECT" || ' Is Suspected Of Committing The Crime'
FROM Crime, Victim, Witness, Suspect
WHERE victim.crime_no = crime.crime_no AND
witness.crime_no = crime.crime_no AND
suspect.crime_no = crime.crime_no;
``` | ORA-00936: missing expression Oracle Apex | [
"",
"sql",
"oracle-apex",
""
] |
**The situation**
I’m developing a web application that use SQL server to store thousands of records. We are currently using a source control software to save each version of the application. There are 2 main versions of the application:
* Test version (the one that we are actually developing)
* Live version (the one that is currently on the website)
We publish a new “major” Live version every month or so, but in between we might find many little bugs in the current live version. To fix these bugs, we bring the version Live is currently running back onto our test machine (we use a shared database for testing…), reproduce the bug, find the cause, and fix it. I then apply a patch to the buggy version, which we push live right away, and we merge the bug fix into the test version. Between each “major” version, we can publish a lot of little bug-fix patches.
**The problem**
The current problem is that only the source code is under version control; the database is not under version control. The reason why this is a problem is that while developing the next “major” version, many things might change in the database that are not compatible with the previous versions. So, even if we can go back to the code of a previous version, the database is not able to do the same and hence we can’t test the previous versions unless we had a database backup from that time.
The obvious solution seems to put the database under version control. While putting the database schema and the static data under version control is not really a problem (I think I will use [Visual Studio Database Project](http://msdn.microsoft.com/en-us/library/vstudio/xee70aty%28v=vs.100%29.aspx)), I struggle to see what to do with the user entered data.
**The question**
What do I do with user entered data since I need to have such data entered in my tables in order to test the application?
1. How do I go to a previous version of the database with the data that
was in the database at this time?
2. Should I put user entered data under version control? For example using a complete backup of the database for each version we publish…
3. Is auditing a good solution for keeping a history of the changes to the user entered data?
4. Is it worth it to even bother with the user entered data since it’s just a test version? We could always recreate manually all the data we need to test the application but that might take a lot of time just to test a little bug. | Based on my experience, I would suggest some combination of database backups and database upgrade scripts. You should keep backups of the production or production-like data (you might be contractually obligated to purge or alter data with names, addresses, bank account numbers of your customers, etc.) for major releases. Starting from that you should be able to get to any intermediate version of your database because you are writing database upgrade scripts and keep them in your source control system (you are currently doing it, right?)
For the practical reasons, you should have at least two separate QA environments: one with the database schema and the application matching your production environment, and another one - matching the version under development.
While database backups are large, you would need to keep only a few latest ones, unless you anticipate a need to do some post-mortem bug analysis on a version that was defunct for several years. | If you're trying to avoid keeping a backup around for each historical version you might need to restore, why not instead try to downgrade a restored copy of your current production backup to the required version.
If you're using VS database projects, you'll have a version history you can go back to. You can use VS Schema Compare to compare the historical version of your database project to the restored production database. Provided there haven't been data motion changes (eg, table/column splits/merges) then it should successfully downgrade your data as well as your schema (otherwise you might need to correct the auto-generated script manually). Some people will maintain downgrade scripts alongside their upgrade scripts to simplify this process, but this takes discipline.
When you're done, and you want to get back to the latest version again, you can either run your existing upgrade process on your database to get it back to the latest version. Or maybe it's just simpler to restore a backup of your test database.
This is also the approach I'd recommend if you choose to use SQL Source Control and SQL Compare. | Database versioning – How to go back to a previous version and keep data? | [
"",
"sql",
"sql-server",
"database",
"version-control",
""
] |
I have a table with this data:
```
Job Type | Job Name | Task No | Yes_NO
A        | X        | 1       | N
A        | X        | 2       | N
A        | X        | 3       | Y
A        | X        | 4       | Y
A        | X        | 5       | N
B        | Z        | 1       | N
B        | Z        | 2       | N
B        | Z        | 3       | N
```
The desired result should be:
```
Job Type | Job Name | Task No | Yes_NO
A        | X        | 4       | Y
B        | Z        | 3       | N
```
But I can't successfully get these rows.
The rule is: for each Job Type and Job Name, get the maximum Task No among the rows where Yes\_NO is 'Y'; if no row has Yes\_NO = 'Y', get the maximum Task No overall for that Job Type and Job Name.
I tried something like this:
```
Select Job_Type,Job_Name,Yes_NO,max(Task No)
From Table
where Yes_NO='Y'
Group By Job_Type,Job_Name,Yes_NO
UNION
Select Job_Type,Job_Name,Yes_NO,max(Task No)
From Table
where not exists(Select 1
From Table
where Yes_NO='Y')
Group By Job_Type,Job_Name,Yes_NO
```
Where is my mistake, or is there an easier way?
Thanks a lot. | Try this:
```
Select Job_Type,Job_Name,Yes_NO,max(Task No)
From Table
where Yes_NO='Y'
Group By Job_Type,Job_Name,Yes_NO
UNION
Select t1.Job_Type,t1.Job_Name,t1.Yes_NO,max(t1.Task No)
From Table t1
Left Join (select distinct Job_Type,Job_Name
from Table where Yes_NO = 'Y'
) t2
on t2.Job_Type = t1.Job_Type
and t2.Job_Name = t1.Job_Name
Where t2.Job_Type is null
Group By t1.Job_Type,t1.Job_Name,t1.Yes_NO
``` | You can simplify this using FIRST/LAST aggregate functions or ROW\_NUMBER analytic function.
```
select job_type, max(job_name),
max(task_no) keep (
dense_rank first order by
case when yes_no = 'Y' then 1 else 2 end,
task_no desc
),
max(yes_no) keep (
dense_rank first order by
case when yes_no = 'Y' then 1 else 2 end
)
from t_table
group by job_type;
select job_type, job_name, task_no, yes_no
from (
select job_type, job_name, task_no, yes_no,
row_number() over (partition by job_type
order by case when yes_no = 'Y' then 1 else 2 end,
task_no desc
) r
from t_table
)
where r = 1;
```
Result:
```
| JOB_TYPE | JOB_NAME | TASK_NO | YES_NO |
|----------|----------|---------|--------|
| A | X | 4 | Y |
| B | Z | 3 | N |
```
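To double-check the intended selection rule (prefer 'Y' rows, otherwise take the overall maximum per group), here is a plain-Python sketch using the sample data from the question — just the logic, outside any database:

```python
from itertools import groupby

rows = [("A", "X", 1, "N"), ("A", "X", 2, "N"), ("A", "X", 3, "Y"), ("A", "X", 4, "Y"),
        ("A", "X", 5, "N"), ("B", "Z", 1, "N"), ("B", "Z", 2, "N"), ("B", "Z", 3, "N")]

out = []
for key, grp in groupby(rows, key=lambda r: (r[0], r[1])):  # group by (Job Type, Job Name)
    grp = list(grp)
    # Prefer rows with Yes_NO = 'Y'; fall back to all rows, then take the max Task No.
    pool = [r for r in grp if r[3] == "Y"] or grp
    out.append(max(pool, key=lambda r: r[2]))

print(out)  # [('A', 'X', 4, 'Y'), ('B', 'Z', 3, 'N')]
```

Group A/X has 'Y' rows, so only those compete for the maximum; group B/Z has none, so the overall maximum Task No wins.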
[Fiddle](http://www.sqlfiddle.com/#!4/b6b67/6). | How to get max value with some case in PL/SQL | [
"",
"sql",
"oracle",
""
] |
I am using this query to sum two columns
```
REPLACE( CONVERT(VARCHAR(32),cast(round(isnull(sum(R1.[Dollars] + R.[Current Cost]),0),0)as MONEY),1), '.00', '') as [Total Cost],
```
Dollars column has null values.
But I get this error
> Operand data type nvarchar is invalid for sum operator
Can anyone correct me? | If you reformat the statement you see how many operations are being done:
```
REPLACE(
CONVERT(
VARCHAR(32),
cast(
round(
isnull(
sum(R1.[Dollars] + R.[Current Cost])
,0) -- use 0 for NULL
,0) -- round to 0 decimals
as MONEY) -- convert to money
,1) -- convert to string with commas and two digits
, '.00', '') -- remove decimals
as [Total Cost],
```
The root of your problem is that `[Current Cost]` is not a numeric type!
The *simplest* solution is to just return a numeric type and leave the *formatting* to the client:
```
ISNULL(SUM(R1.[Dollars] + CAST(R.[Current Cost] AS MONEY)), 0)
``` | "@dean - current cost is nvarchar " - Vicky
If [Current Cost] is NVARCHAR then that is your problem. You need to CONVERT it to a numeric datatype before you can include it in your sum(R1.[Dollars] + R.[Current Cost]). | Datatype error when summing column | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a `select` statement that uses `QUOTENAME` to add single quotes and a comma to each of the results.
```
SELECT QUOTENAME(field1,'''')+',' AS [1]
```
Which changes the results from this.
```
1
11111
22222
33333
44444
```
To this
```
1
'11111',
'22222',
'33333',
'44444',
```
However, I would like to know if it is possible to remove the comma from the very last row, to look like this:
```
1
'11111',
'22222',
'33333',
'44444'
```
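For what it's worth, the usual way to avoid a trailing separator is to join the values rather than append a comma to every row; here is a language-agnostic illustration of that idea in Python, with made-up values:

```python
values = ["11111", "22222", "33333", "44444"]

# Joining puts the comma *between* items, so no trailing comma appears.
result = ",\n".join("'{}'".format(v) for v in values)
print(result)
```

On newer SQL Server versions (2017+), `STRING_AGG` performs this joining server-side; on 2012 you need the `ROW_NUMBER`/`COUNT(*) OVER` trick shown in the answers.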
Edit: I should have mentioned this is a view | ```
SELECT QUOTENAME(field1,'''')+
case when row_number() over(order by (select 1))=
count(*) over () then '' else ',' end AS [1]
FROM <table>
``` | Try something like this
```
DECLARE @count int
SELECT @count = COUNT(*) FROM my_table
SELECT QUOTENAME(field1, '''') + CASE WHEN ROW_NUMBER() OVER (ORDER BY field1) < @count THEN ',' ELSE '' END AS [1]
FROM my_table
``` | SQL select statement with a quotename removing the last character on the last row | [
"",
"sql",
"sql-server",
""
] |
I have two tables, runtimes where I store runtimes and temps where I store temperatures.
I want to fetch the temperature for all items on all runtimes.
If an item is missing on a runtime, I want it to be shown as below.
**table runtimes**
```
runtimeId | time
1 | 2014-04-02 12:00:00
2 | 2014-04-03 12:00:00
```
**table temps**
```
temps | itemId | runtimeId
10 | 1 | 1
20 | 2 | 1
11 | 1 | 2
```
**Wanted result**
```
runtimeId | time | temps | itemId
1 | 2014-04-02 12:00:00 | 10 | 1
1 | 2014-04-02 12:00:00 | 20 | 2
2 | 2014-04-03 12:00:00 | 11 | 1
2 | 2014-04-02 12:00:00 | NULL | 2 <--
```
The part I can't manage is getting the last row.
Cheers | This is a solution to your request:
```
select
r.runtimeId,r.time,
(select temps from temps where runtimeId=r.runtimeId and itemId=data.itemId) as temps,
data.itemId
from
runtimes r,
(select distinct itemId from temps) data
order by r.runtimeid, data.itemid
```
It gives you the wanted result.
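The same correlated-subquery approach can be reproduced end-to-end in SQLite from Python (a sketch using the sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE runtimes (runtimeId INTEGER, time TEXT);
INSERT INTO runtimes VALUES (1, '2014-04-02 12:00:00'), (2, '2014-04-03 12:00:00');
CREATE TABLE temps (temps INTEGER, itemId INTEGER, runtimeId INTEGER);
INSERT INTO temps VALUES (10, 1, 1), (20, 2, 1), (11, 1, 2);
""")

# Cross join runtimes with the distinct item list, then look up each
# temperature with a correlated subquery; missing readings come back NULL.
rows = conn.execute("""
SELECT r.runtimeId, r.time,
       (SELECT temps FROM temps
         WHERE runtimeId = r.runtimeId AND itemId = data.itemId) AS temps,
       data.itemId
FROM runtimes r,
     (SELECT DISTINCT itemId FROM temps) data
ORDER BY r.runtimeId, data.itemId
""").fetchall()
for row in rows:
    print(row)  # last row: (2, '2014-04-03 12:00:00', None, 2)
```

The final row shows item 2 on runtime 2 with a NULL temperature, which is exactly the "missing" row the question asks for.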
<http://sqlfiddle.com/#!2/bc9e1/5>
```
runtimeId | time | temps | itemId
1 | 2014-04-02 12:00:00 | 10 | 1
1 | 2014-04-02 12:00:00 | 20 | 2
2 | 2014-04-03 12:00:00 | 11 | 1
2 | 2014-04-02 12:00:00 | NULL | 2 <<<<< preserves itemId 2
```
Especially the **runtimeId=2 and itemId=2** result row. It selects all `runtimes` with all `itemIds`.
I inserted an `order by` to get the sorting you suggested in your request. | A left join will show you all the records from the temps table as well as the records for the runtimes table.
```
select
*
from
runtimes rt
left join temps t on rt.runtimeId = t.runtimeId
```
You've indicated that you want to see `itemId` 2 that matches `runtimeId` 2.
Nothing joins in your data this way however, so the 2 indicated with the arrow would be null.
Your result would be:
```
runtimeId | time | temps | itemId
1 | 2014-04-02 12:00:00 | 10 | 1
1 | 2014-04-02 12:00:00 | 20 | 2
2 | 2014-04-03 12:00:00 | 11 | 1
2 | 2014-04-02 12:00:00 | NULL | NULL
```
For a cartesian product that will show itemId regardless of its relation to the other table, see @wumpz answer | sql join - include non existent rows | [
"",
"mysql",
"sql",
""
] |
I'm not sure if I worded the title properly so I apologize. I feel this is best explained by showing my data.
```
Address 1 Address 2 City State AddressInfo#
-------------------------------- ------------------ ------------ ----- --------------
1 Main St #100 Burbville, CA, 99999 1 Main St #100 Burbville CA 1001
1 Main St #100 Burbville, CA, 99999 1 Main St Burbville CA 1001
1 Main St #100 Burbville, CA, 99999 1 Main st Burbville CA 1001
...
4 Old Ave Ste 401 Southtown, OH, 44444 4 Old Ave Ste 401 Southtown OH 1004
4 Old Ave Ste 401 Southtown, OH, 44444 4 Old Ave Ste 401 Southtown OH 1004
...
8 New Blvd #800 NewCity, MT, 88888 8 New Blvd #800 NewCity MT 1008
8 New Blvd #800 NewCity, MT, 88888 8 New Blvd NewCity MT 1008
8 New Blvd #800 NewCity, MT, 88888 8 New Blvd NewCity MT 1008
```
I would like to find a way to remove all records where Address 2 is missing the full street address or simply contains an exact duplicate like AddressInfo# 1004.
Expected Output:
```
Address 1 Address 2 City State AddressInfo#
-------------------------------- ------------------ ------------ ----- --------------
1 Main St #100 Burbville, CA, 99999 1 Main St #100 Burbville CA 1001
...
4 Old Ave Ste 401 Southtown, OH, 44444 4 Old Ave Ste 401 Southtown OH 1004
...
8 New Blvd #800 NewCity, MT, 88888 8 New Blvd #800 NewCity MT 1008
``` | You could rebuild your data into a new table using
```
select
address_1,max(address_2) as address_2, addressinfo
from
table1
group by address_1,addressinfo
```
<http://sqlfiddle.com/#!6/3d22c/2>
**Edit 1:**
To select `city` and `state` as well, you need to include them as `group by` expressions:
```
select
address_1,max(address_2) as address_2, addressinfo,
city, state
from
table1
group by address_1,addressinfo, city, state
```
<http://sqlfiddle.com/#!6/4527c/1>
**Edit 2:**
The `max` function delivers the longest value here, as needed. This works as long as the shorter values are prefixes of the longer values.
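That prefix behaviour is easy to sanity-check outside SQL. A plain-Python sketch (using `str.lower` as a stand-in for a case-insensitive collation; the addresses are the sample values from the question):

```python
rows = ["1 Main St #100", "1 Main St", "1 Main st"]

# When one string is a prefix of another, the longer string compares greater,
# so max() picks the most complete address.
best = max(rows, key=str.lower)
print(best)  # 1 Main St #100
```

If the variants were not prefixes of each other (e.g. "1 Main Street" vs "1 Main St"), this would no longer reliably pick the fullest value.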
Here is an example of this: <http://sqlfiddle.com/#!6/3fba8/1> | Some form of:
```
UPDATE A
SET Address2 = CASE WHEN Address1 = Address2 THEN NULL ELSE
CASE WHEN CHARINDEX(',',Address2,CHARINDEX(',',Address2)) = 0 THEN NULL ELSE Address2 END
END
FROM Address AS A
``` | Remove duplicate address values where length of second column is less than the length of the greatest matching address | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm using SQL Server 2012. For a project I want to flatten a table, but I need some help. My table:
```
| ApplicationName | Name | Value | CreatedOn
| Contoso | Description | An example website | 04-04-2014
| Contoso | Description | Nothing | 02-04-2014
| Contoso | Keywords | Contoso, About, Company | 04-04-2014
| Contoso | Keywords | Contoso, Company | 02-04-2014
```
I want to get the last modification record for each Name, by ApplicationName. The result I want:
```
| ApplicationName | Description | Keywords
| Contoso | An example website | Contoso, About, Company
```
I don't like temp tables. Who knows how to do this?
Thanks a lot.
Jordy | Here's the more complete solution:
```
declare @collist nvarchar(max)
SET @collist = stuff((select distinct ',' + QUOTENAME(name)
FROM table -- your table here
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
declare @q nvarchar(max)
set @q = '
select *
from (
select ApplicationName, name, Value
from (
select *, row_number() over (partition by ApplicationName, name order by CreatedOn desc) as rn
from table -- your table here
where appname = ''contoso''
) as x
where rn = 1
) as source
pivot (
max(Value)
for name in (' + @collist + ')
) as pvt
'
exec (@q)
``` | You are going to have to use a pivot table to do this. If the number of values for Name are unknown, you will also need to use some dynamic sql. 3 Articles you will probably find useful for this are:
[Pivot Tables](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx)
[SQL Server Cursors](http://www.mssqltips.com/sqlservertip/1599/sql-server-cursor-example/)
[Dynamic Sql](http://msdn.microsoft.com/en-us/library/ms175170%28v=sql.105%29.aspx)
EDIT:
Here is a possible example
```
--Get the list of names
DECLARE @values varchar(MAX);
SELECT @values = COALESCE(@values, '') + '[' + Name + '],' FROM table;
SET @values = SUBSTRING(@values, 1, LEN(@values) - 1);
--build the sql string using these names
DECLARE @sql varchar(MAX) = 'SELECT applicationName, ' + @values + 'FROM (';
SET @sql = @sql + 'SELECT applicationName, Name, Value FROM tableName WHERE CreatedOn = (SELECT MAX(CreatedOn) FROM tableName WHERE ApplicationName = @appName) AND applicationName = @appName) AS toPivot';
SET @sql = @sql + 'PIVOT (';
SET @sql = @sql + 'MAX(Value) FOR Name IN (' + @values + ')) As p';
--run sql
DECLARE @ParamDefinition varchar(MAX) = '@appName varchar(MAX)';
DECLARE @selectedApp varchar(MAX) = 'put the app you want here';
EXECUTE sp_executesql @sql, @ParamDefinition, @appName = @selectedApp;
```
Note that I am assuming that the CreatedOn value will be the same for every Name you are interested in for an application.
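The keep-the-latest-then-pivot idea itself can be sketched outside SQL. A plain-Python version, using the sample rows from the question (treating the dates as dd-mm-yyyy, which is an assumption):

```python
rows = [
    ("Contoso", "Description", "An example website", "04-04-2014"),
    ("Contoso", "Description", "Nothing", "02-04-2014"),
    ("Contoso", "Keywords", "Contoso, About, Company", "04-04-2014"),
    ("Contoso", "Keywords", "Contoso, Company", "02-04-2014"),
]

latest = {}
for app, name, value, created in rows:
    key = (app, name)
    # dd-mm-yyyy -> (yyyy, mm, dd) tuples compare correctly when zero-padded
    stamp = tuple(reversed(created.split("-")))
    if key not in latest or stamp > latest[key][0]:
        latest[key] = (stamp, value)

# "Pivot": one row per application, one column per Name.
flat = {"ApplicationName": "Contoso"}
for (app, name), (_, value) in latest.items():
    flat[name] = value
print(flat)
```

This is the same two-step shape as the SQL answers: first keep the newest row per (ApplicationName, Name), then turn the Name values into columns.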
Edit again:
The solution below mine has a really nice trick for working out the most recent date for each entry if they are different.
DISCLAIMER, since I did this in the answer window off the top of my head, it may not be 100% right, but it should get you pretty close. | How to flatten a table using SQL Server? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
"ssms",
""
] |
I've been searching the web and asked around, but can't seem to find an answer to my problem...
I have a working ODBC connection to my FrontBase database in OpenOffice Base. I manage to select everything I want, but when I try to show only the records between certain dates, or even a single date, I keep getting a Semantic Error.
Here's my query:
```
SELECT * FROM "SALES" WHERE "DATE" = '2014-04-01'
```
Just in case you suggest using `#` for the date, it doesn't work either
`DATE '2014-04-01'` is also unsuccessful :(
Anyone have any ideas? | After a lot of trial and error I took a closer look at the OpenOffice Base interface. I was sure nothing was wrong with my statement, so there had to be something in Base that I was missing.
In the toolbar there's a button with SQL and a checkmark which says "Execute SQL-command immediatly". I thought it wouldn't do any harm to enable it and give it a try. And there it was, my query with the date filter! I have no idea why that button is there or what its exact function is, but enabling it made my query work.

Thanks anyway for the suggestions, but apparently we've got to look further than just the language ;-) | A shot in the dark here. You're using a FrontBase database; googling "Frontbase select date range" gave me this page: <http://www.frontbase.com/documentation/LookSeeIntro-2.3.html>
There they have this example query:
```
SELECT
receiver FROM memo_mails
WHERE
sender = 'QA' AND
CAST(dateSent AS DATE) < DATE '2001-06-01' AND
SATISFIES(quickx, 'Periscope & invoice');
```
So based on that example, I would try
```
SELECT
* FROM "Sales"
WHERE
(CAST("Sales"."SalesDate" AS DATE) >= DATE '2014-06-01') AND
(CAST("Sales"."SalesDate" AS DATE) <= DATE '2014-06-30');
``` | OpenOffice SQL query date selection | [
"",
"sql",
"hsqldb",
"openoffice-base",
"frontbase",
""
] |
I would like to force-limit the number of records that can be queried from SQL Server, in a way that lets me report back to the user that they need to optimize their query.
I intend to use it in a reporting tool where, let's say, the user is not supposed to extract more than 10,000 records.
```
SET ROWCOUNT 10000 [THROW EXCEPTION IF MORE IS SELECTED]
//user's query
SELECT ....
```
Is this possible to do? | I think you are asking if you can set something global that would cause an error if the result set contained more than a certain number of records.
I also think the answer to that is, "No."
However, depending on how you are constructing and executing the queries, you could possibly prepend SET ROWCOUNT 10001 -- note the trailing 1 -- to prevent the runaway queries you want to avoid.
Then append something like @Felipe suggested:
If @@ROWCOUNT = 10001 RAISERROR ('Too many results. Please, optimize your query', 1, 1);
If you are running all queries through some sort of central processor that would be fairly easy.
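The same guard can also live in the application layer: ask for one row more than the limit and complain if that extra row arrives. A sketch in Python with sqlite3 (stand-in data, not T-SQL):

```python
import sqlite3

LIMIT = 3
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = conn.execute("SELECT n FROM t")
rows = cur.fetchmany(LIMIT + 1)   # fetch limit + 1, like ROWCOUNT 10001
if len(rows) > LIMIT:
    print("Too many results. Please, optimize your query")
else:
    print(rows)
```

Fetching `LIMIT + 1` rows costs almost nothing extra but distinguishes "exactly at the limit" from "over the limit".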
EDIT:
This should demo the idea in SSMS:
```
SET ROWCOUNT 2
SELECT 1 UNION SELECT 2
If @@ROWCOUNT = 2 RAISERROR ('Too many results. Please, optimize your query', 1, 1)
```
If you are not getting an error then it is being masked by something in your code. | What you could do is rewrite the SQL to add `TOP 10000` to any SQL query generated by your tool. | SQL: Limit the number of rows that can be queried in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a PowerShell script that calls a SQL script. This is currently working, but inside my SQL script I have some hard-coded parameters that I would like to pass to the SQL script via PowerShell.
This is the snippet from the PowerShell script:
```
function ExecSqlScript([string] $scriptName)
{
$scriptFile = $script:currentDir + $scriptName
$sqlLog = $script:logFileDir + $scriptName + "_{0:yyyyMMdd_HHmmss}.log" -f (Get-Date)
$result = sqlcmd -S uk-ldn-dt270 -U sa -P passwordhere3! -i $scriptFile -b | Tee-Object -FilePath $sqlLog
if ($result -like "*Msg *, Level *, State *" -Or $result -like "*Sqlcmd: Error:*")
{
throw "SQL script " + $scriptFile + " failed: " + $result
}
}
try
{
ExecSqlScript "restoreDatabase.sql"
}
catch
{
# Some error handling here
}
```
And this is from the SQL
```
USE MASTER
GO
DECLARE @dbName varchar(255)
SET @dbName = 'HardCodedDatabaseName'
```
So I want to pass the value for dbName, any ideas? | Just in case someone else needs to do this... here is a working example.
Power Shell Script:
```
sqlcmd -S uk-ldn-dt270 -U sa -P 1NetNasdf£! -v db = "'DatabaseNameHere'" -i $scriptFile -b | Tee-Object -filepath $sqlLog
```
Note the -v switch to assign the variables
And here is the MS SQL:
```
USE MASTER
GO
if db_id($(db)) is null
BEGIN
EXEC('
RESTORE DATABASE ' + $(db) + '
FROM DISK = ''D:\DB Backup\EmptyLiveV5.bak''
WITH MOVE ''LiveV5_Data'' TO ''C:\Program Files (x86)\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\LiveV5_' + $(db) + '.MDF'',
MOVE ''LiveV5_Log'' To ''C:\Program Files (x86)\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\LiveV5_' + $(db) + '_log.LDF'', REPLACE,
STATS =10')
END
```
Note: you do not have to assign the scripting variable to a normal SQL variable like this:
```
SET @dbName = $(db)
```
you can just use it in your sql code. - Happy coding. | You could take advantage of sqlcmd's [scripting variables](http://technet.microsoft.com/en-us/library/ms188714.aspx). Those can be used in script file and are marked with `$()`. Like so,
```
-- Sql script file
use $(db);
select something from somewhere;
```
When calling `sqlcmd`, use the `-v` parameter to assign variables. Like so,
```
sqlcmd -S server\instance -E -v db ="MyDatabase" -i s.sql
```
*Edit*
Mind the Sql syntax when setting variables. Consider the following script:
```
DECLARE @dbName varchar(255)
SET @dbName = $(db)
select 'val' = @dbName
```
As passed to the Sql Server, it looks like so (Profiler helps here):
```
use master;
DECLARE @dbName varchar(255)
SET @dbName = foo
select 'val' = @dbName
```
This is obviously invalid syntax, as `SET @dbName = foo` won't make much sense. The value ought to be within single quotes, like so,
```
sqlcmd -S server\instance -E -v db ="'foo'" -i s.sql
``` | How to pass parameters to SQL script via Powershell | [
"",
"sql",
"shell",
"powershell",
""
] |
I am trying to select the student with the most hours of Kindness in each grade (9-12). I wrote this code (in SQL Server 2012) to get the student with the most hours:
```
(select top 1 stu_first, stu_last, sum(KIND_hours) as total from STUDENT inner join KIND
on student.stu_id=KIND.stu_id
where (12-(STU_CLASS_OF-2014))=9
group by stu_first, stu_last
order by total desc)
```
This code works, but when I try to union the code together for just two grades I get this error: Incorrect syntax near the keyword 'order'.
My code is here
```
(select top 1 stu_first, stu_last, sum(KIND_hours) as total from STUDENT inner join KIND
on student.stu_id=KIND.stu_id
where (12-(STU_CLASS_OF-2014))=9
group by stu_first, stu_last
order by total desc)
union
(select top 1 stu_first, stu_last, sum(KIND_hours) as total from STUDENT inner join KIND
on student.stu_id=KIND.stu_id
where (12-(STU_CLASS_OF-2014))=10
group by stu_first, stu_last
order by total desc)
```
stu\_class\_of is the year that the student would graduate | Try this:
Make your two queries "derived" tables. Here is a working Northwind example:
```
Use Northwind
GO
Select OrderID , CustomerID , EmployeeID from
( Select TOP 1 OrderID , CustomerID , EmployeeID from dbo.Orders where ShipCountry='France' Order by ShippedDate )
as derived1
UNION ALL
Select OrderID , CustomerID , EmployeeID from
( Select TOP 1 OrderID , CustomerID , EmployeeID from dbo.Orders where ShipCountry='Germany' Order by ShippedDate )
as derived2
```
Here it is with your queries plugged in, but I cannot test them since I don't have your DDL.
```
Select * from
( select top 1 stu_first, stu_last, sum(KIND_hours) as total from STUDENT inner join KIND
on student.stu_id=KIND.stu_id
where (12-(STU_CLASS_OF-2014))=9
group by stu_first, stu_last
order by total desc )
as derived1
UNION ALL
Select * from
( select top 1 stu_first, stu_last, sum(KIND_hours) as total from STUDENT inner join KIND
on student.stu_id=KIND.stu_id
where (12-(STU_CLASS_OF-2014))=10
group by stu_first, stu_last
order by total desc )
as derived2
```
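As a side note, the reason the wrapping works can be reproduced outside SQL Server. Here is a minimal SQLite sketch (invented table and data, with `LIMIT 1` standing in for `TOP 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kind_hours (grade INTEGER, student TEXT, total INTEGER)")
conn.executemany("INSERT INTO kind_hours VALUES (?, ?, ?)",
                 [(9, "Ann", 12), (9, "Bob", 7), (10, "Cid", 3), (10, "Dee", 9)])

# Each ORDER BY belongs to its own derived table, so the outer UNION ALL
# never sees a bare ORDER BY and the syntax error disappears.
rows = conn.execute("""
    SELECT student, total FROM
        (SELECT student, total FROM kind_hours WHERE grade = 9
         ORDER BY total DESC LIMIT 1) AS derived1
    UNION ALL
    SELECT student, total FROM
        (SELECT student, total FROM kind_hours WHERE grade = 10
         ORDER BY total DESC LIMIT 1) AS derived2
""").fetchall()
print(rows)
```

The same shape carries over to the T-SQL above: the sort lives inside each derived table, and the UNION only glues the finished one-row results together.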
OLD ANSWER FOR POSSIBLE WORK AROUND
If you need "Order by" distinction between the two sets of data, you can use this trick:
```
IF OBJECT_ID('tempdb..#TableOne') IS NOT NULL
begin
drop table #TableOne
end
IF OBJECT_ID('tempdb..#TableTwo') IS NOT NULL
begin
drop table #TableTwo
end
CREATE TABLE #TableOne
(
SurrogateKeyIDENTITY int not null IDENTITY (1,1) ,
NameOfOne varchar(12)
)
CREATE TABLE #TableTwo
(
SurrogateKeyIDENTITY int not null IDENTITY (1,1) ,
NameOfTwo varchar(12)
)
Insert into #TableOne (NameOfOne)
Select 'C' as Alpha UNION ALL Select 'B' as Alpha UNION ALL Select 'D' as Alpha UNION ALL Select 'Z' as Alpha
Insert into #TableTwo (NameOfTwo)
Select 'T' as Alpha UNION ALL Select 'W' as Alpha UNION ALL Select 'X' as Alpha UNION ALL Select 'A' as Alpha
select 1 , NameOfOne from #TableOne
UNION
select 2 , NameOfTwo from #TableTwo
Order by 1 , 2 /* These are the "Ordinal Positions of the Column*/
IF OBJECT_ID('tempdb..#TableOne') IS NOT NULL
begin
drop table #TableOne
end
IF OBJECT_ID('tempdb..#TableTwo') IS NOT NULL
begin
drop table #TableTwo
end
``` | Every time I want to order a sub-query I tend to wrap it in an outer query like this:
```
select * from
(select * from xx order by x) x
```
This way you can embed this query in a UNION or any other situation and it will always work since you apply the sort in the inner query. | "Order by" in a sub query | [
"",
"sql",
"sql-server",
"sql-order-by",
"union",
""
] |
I have an SQL table that stores people's details, i.e. id, name, DoB, registration\_date and address. I would like to calculate the age of each individual and then group them into these ranges: 20-30, 31-50, 51 & over.
I know i can get the age by doing: (<https://stackoverflow.com/a/1572257/3045800>)
```
SELECT FLOOR((CAST (GetDate() AS INTEGER) - CAST(Date_of_birth AS INTEGER)) / 365.25) AS Age
```
I just need to figure out how to group all people into their respective ranges.
Thanks for the help | Use a `case` to produce the age group description:
```
select *,
case
when datediff(now(), date_of_birth) / 365.25 > 50 then '51 & over'
when datediff(now(), date_of_birth) / 365.25 > 30 then '31 - 50'
when datediff(now(), date_of_birth) / 365.25 > 19 then '20 - 30'
else 'under 20'
end as age_group
from person
```
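The same CASE ladder can be checked quickly in SQLite (a hypothetical `person` table with precomputed ages, since `datediff`/`now` are MySQL functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age REAL)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [("A", 18.0), ("B", 25.0), ("C", 45.0), ("D", 60.0)])

# The first WHEN that matches wins, so testing the highest boundary
# first keeps the ranges non-overlapping.
rows = conn.execute("""
    SELECT name,
           CASE
               WHEN age > 50 THEN '51 & over'
               WHEN age > 30 THEN '31 - 50'
               WHEN age > 19 THEN '20 - 30'
               ELSE 'under 20'
           END AS age_group
    FROM person
""").fetchall()
print(rows)
```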
Note the simpler way to calculate age. | You can use ***with*** construction:
```
with Query as (
select FLOOR((CAST (GetDate() AS INTEGER) - CAST(Date_of_birth AS INTEGER)) / 365.25) AS Age
... -- Other fields
from MyTable
)
select case
-- whatever ranges you want
when (Age < 20) then
1
when (Age >= 20) and (Age <= 30) then
2
when (Age > 30) and (Age <= 50) then
3
else
4
end AgeRange,
...
from Query
group by AgeRange
``` | How to calculate age from date of birth and group each member into age range in sql | [
"",
"mysql",
"sql",
"sql-server",
""
] |
So I have three tables I need to join and then select the average number of stars for a given month, we will say June and then for each category select all the businesses. So far I have:
```
SELECT B1.Name, B2.Category, AVG(R1.Stars) as Average
FROM Business B1
INNER JOIN Reviews R1
ON B1.ID=R1.BusinessID
INNER JOIN BusinessCategories B2
ON B2.BusinessID=R1.BusinessID
WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
GROUP BY Name, Category
ORDER BY Category, AVG(R1.Stars) DESC
```
This gets me each business's average, but I don't know how to select the top one for each of these categories.
**Update:** Adding sample output what I have currently and what I'm looking for:
Here is what I get now:
```
Victoria Secrets Accessories 5
Francesca's Collections Accessories 5
Saint 22 Accessories 4
Loehmann's Inc Accessories 3
Arcadia Ice Arena Active Life 5
Arizona Sunrays Gymnastics & Dance Center Active Life 5
Blissful Yoga Studio Active Life 5
Corner Archery Active Life 5
Imagination Avenue Active Life 5
Jump Street Active Life 5
Life Time Fitness Active Life 5
```
But what I want is:
```
Victoria Secrets Accessories 5
Arcadia Ice Arena Active Life 5
Video Paradise Adult 5
```
Or I guess even better would be that if there is a tie, like a bunch of the active life category we get back all the top businesses for that category. | Try this:
```
SELECT T1.Name, T1.Category, T1.Average
FROM
(SELECT B1.Name, B2.Category, AVG(R1.Stars) as Average
FROM Business B1
INNER JOIN Reviews R1
ON B1.ID=R1.BusinessID
INNER JOIN BusinessCategories B2
ON B2.BusinessID=R1.BusinessID
WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
GROUP BY Name, Category
ORDER BY Category, AVG(R1.Stars) DESC) T1
LEFT JOIN (
SELECT B1.Name, B2.Category, AVG(R1.Stars) as Average
FROM Business B1
INNER JOIN Reviews R1
ON B1.ID=R1.BusinessID
INNER JOIN BusinessCategories B2
ON B2.BusinessID=R1.BusinessID
WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
GROUP BY Name, Category
ORDER BY Category, AVG(R1.Stars) DESC) T2 on T2.Average> T1.Average AND T1.Category= T2.Category
WHERE T2.Name IS NULL
```
**OR**
```
SELECT Name,Category,Average FROM
(
SELECT ROW_NUMBER() OVER(Partition By Category ORDER BY AVG(R1.Stars) DESC) as RN, B1.Name, B2.Category, AVG(R1.Stars) as Average
FROM Business B1
INNER JOIN Reviews R1
ON B1.ID=R1.BusinessID
INNER JOIN BusinessCategories B2
ON B2.BusinessID=R1.BusinessID
WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
GROUP BY Name, Category
ORDER BY Category, AVG(R1.Stars) DESC
) T
WHERE RN=1
``` | Use max(R1.Stars) instead of avg(R1.Stars) | Join 3 tables and select only the top average for each category | [
"",
"sql",
"sql-server",
"t-sql",
"join",
""
] |
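Both variants of the accepted answer above are greatest-per-group queries. The LEFT JOIN / IS NULL form (the first one) can be sketched on SQLite with toy data; note that ties within a category all survive, which matches the "return all top businesses" wish in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE avg_stars (name TEXT, category TEXT, average REAL)")
conn.executemany("INSERT INTO avg_stars VALUES (?, ?, ?)",
                 [("Victoria Secrets", "Accessories", 5.0),
                  ("Saint 22", "Accessories", 4.0),
                  ("Arcadia Ice Arena", "Active Life", 5.0),
                  ("Jump Street", "Active Life", 5.0)])

# A row survives only if no other row in the same category beats it,
# so tied leaders are all returned.
rows = conn.execute("""
    SELECT a.name, a.category, a.average
    FROM avg_stars a
    LEFT JOIN avg_stars b
           ON b.category = a.category AND b.average > a.average
    WHERE b.name IS NULL
    ORDER BY a.category, a.name
""").fetchall()
print(rows)
```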
I have a table like this (primary key omitted for clarity):
```
events:
itemId eventType
-----------------
100 1
101 1
101 2
102 2
102 2
```
There are other event types but I only care about 1's and 2's. I want to find the count of these `eventType`s per `itemId`, but I also need a way to conveniently perform math on the results. For example, I want an output like this:
```
itemId ones twos onesPct twosPct
-------------------------------------
100 1 0 1.0 0.0
101 1 1 0.5 0.5
102 0 2 0.0 1.0
```
In my actual application, the math I'm performing is much more complex than percentages. The dialect is T-SQL. So right now I have a query like this; I'm not that great with SQL and so the best I came up with was:
```
SELECT
COALESCE(onest.itemId,twost.itemId) itemId,
COALESCE(onest.n,0) ones,
COALESCE(twost.n,0) twos,
1.0*COALESCE(onest.n,0) / (COALESCE(onest.n,0) + COALESCE(twost.n,0)) onesPct,
1.0*COALESCE(twost.n,0) / (COALESCE(onest.n,0) + COALESCE(twost.n,0)) twosPct
FROM
(SELECT itemId, COUNT(*) n
FROM events
WHERE eventType = 1
GROUP BY itemId) onest
FULL OUTER JOIN
(SELECT itemId, COUNT(*) n
FROM events
WHERE eventType = 2
GROUP BY itemId) twost
ON onest.itemId = twost.itemId
```
This is working except all of the `COALESCE`s in the equations are becoming extremely unwieldy. So I have two questions:
1. Most importantly: Is there some way of converting NULLs from the outer join to 0 at a more global level, so that I don't have to keep writing e.g. `COALESCE(onest.n,0)` everywhere? From what I've seen so far, I'm not allowed to use column aliases in other column specifications, so I can't simply do e.g. `(1.0 * ones / (ones + twos)) onestPct` in the above query (unless there is a way to do this)?
2. Is there a shorter query that can accomplish this whole thing? This seems like a mess of subqueries.
I don't care too much about performance, the main task is data mining and analysis.
[Here it is on SQLFiddle](http://sqlfiddle.com/#!6/d6e85/2).
Also, apologies for the poor title, I couldn't come up with a good short summary. | I know you said what you are doing is more complicated than percentages, but I think regardless the key is simply a subquery.
Do your grouping work first. When that's done, use that as the source of a query in which you do the actual math. And of course, if it's too complex you could consider putting that subquery into a temporary table.
So here's what I would do if it were just the data you provided.
```
select itemid,
ones,
twos,
ones + twos as total,
100 * (round(1.0 * ones / (ones + twos), 4)) as ones_perc,
100 * (round(1.0 * twos / (ones + twos), 4)) as twos_perc
from
(select itemid,
sum(case when eventtype = 1 then 1 else 0 end) as ones,
sum(case when eventtype = 2 then 1 else 0 end) as twos
from a
group by itemid)b
``` | You can use aggregate functions (`SUM()`) on `CASE` which simplifies the whole query:
```
select itemId,
sum(case eventType when 1 then 1 else 0 end) ones,
sum(case eventType when 2 then 1 else 0 end) twos,
convert(float, sum(case eventType when 1 then 1 else 0 end)) / count(eventType) onesPct,
convert(float, sum(case eventType when 2 then 1 else 0 end)) / count(eventType) twosPct
from events
group by itemId
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!3/904b5/5)
If there are items which don't have any events, you should either exclude them or handle percentage calculation separately as the query above will cause division by zero. | Counting occurrence while changing NULL to 0 | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
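The conditional-aggregation idea behind both answers above fits in a few lines of SQLite; `1.0 *` forces float division the same way `convert(float, ...)` does in T-SQL (table and data taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (itemId INTEGER, eventType INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(100, 1), (101, 1), (101, 2), (102, 2), (102, 2)])

# One pass, no joins: each SUM(CASE ...) builds one pivoted column, and
# the COALESCE clutter disappears because every column is computed from
# the same aggregates instead of a FULL OUTER JOIN.
rows = conn.execute("""
    SELECT itemId,
           SUM(CASE eventType WHEN 1 THEN 1 ELSE 0 END) AS ones,
           SUM(CASE eventType WHEN 2 THEN 1 ELSE 0 END) AS twos,
           1.0 * SUM(CASE eventType WHEN 1 THEN 1 ELSE 0 END) / COUNT(*) AS onesPct
    FROM events
    GROUP BY itemId
    ORDER BY itemId
""").fetchall()
print(rows)
```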
I need to select users from the database whose score (a column named time) is higher than 10. How do I do that? For now it selects all users.
For now my code:
```
$query = "SELECT userName,min(time) time FROM game group by userName order by time ASC LIMIT 15";
if (!$mysqli->set_charset("utf8")) {
printf("Error loading character set utf8: %s\n", $mysqli->error);
``` | You need to do:
```
SELECT userName,min(time) time
FROM game WHERE time > 10
group by userName
order by time ASC LIMIT 15;
``` | You can use the `WHERE` clause to filter your result:
```
SELECT userName,min(time) time
FROM game
WHERE `time` >10
group by userName
order by time ASC LIMIT 15
```
or, if you want to filter on the result of an aggregate function, you can use the `HAVING` clause:
```
SELECT userName,min(time) time
FROM game
group by userName
HAVING min(time) >10
order by time ASC LIMIT 15
``` | How to select data from database higher than exact number | [
"",
"mysql",
"sql",
"database",
"select",
""
] |
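The difference between the two answers is worth seeing side by side: `WHERE time > 10` drops the slow rows before taking the minimum, while `HAVING MIN(time) > 10` drops whole users after aggregating. A SQLite sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE game (userName TEXT, time INTEGER)")
conn.executemany("INSERT INTO game VALUES (?, ?)",
                 [("ann", 8), ("ann", 15), ("bob", 12), ("bob", 20)])

# WHERE filters rows first: ann's 8 is ignored, so her "best" becomes 15.
where_rows = conn.execute("""
    SELECT userName, MIN(time) FROM game
    WHERE time > 10 GROUP BY userName ORDER BY userName
""").fetchall()

# HAVING filters groups after MIN(): ann (true best 8) is dropped entirely.
having_rows = conn.execute("""
    SELECT userName, MIN(time) FROM game
    GROUP BY userName HAVING MIN(time) > 10 ORDER BY userName
""").fetchall()
print(where_rows, having_rows)
```

Which one is right depends on whether a user's sub-10 scores should be ignored or should disqualify the user.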
I am trying to build an Oracle SQL query that returns the grouped row along with the rows that make up the group when the count is greater than 1. Please see below for an example and the SQL query that does the grouping. Any help or suggestions would be greatly appreciated.
For example using the below dataset -
```
======================
ID | NAME | AUTHOR
======================
2 | Abc | John
6 | Abc | John
3 | Xyz | Mike
4 | Abc | Mike
5 | Xyz | John
1 | Abc | Mike
7 | PQR | Raj
Expected Result -
===========================
ID | NAME | AUTHOR | COUNT
===========================
| Abc | | 4
2 | Abc | John |
6 | Abc | John |
4 | Abc | Mike |
1 | Abc | Mike |
| PQR | | 1
| Xyz | | 2
3 | Xyz | Mike |
5 | Xyz | John |
SELECT NAME, COUNT(NAME) from (
SELECT 2 as ID, ' Abc ' as NAME, ' John ' as AUTHOR FROM DUAL
UNION
SELECT 6 as ID, ' Abc ' as NAME, ' John ' as AUTHOR FROM DUAL
UNION
SELECT 3 as ID, ' Xyz ' as NAME, ' Mike ' as AUTHOR FROM DUAL
UNION
SELECT 4 as ID, ' Abc ' as NAME, ' Mike ' as AUTHOR FROM DUAL
UNION
SELECT 5 as ID, ' Xyz ' as NAME, ' John ' as AUTHOR FROM DUAL
UNION
SELECT 1 as ID, ' Abc ' as NAME, ' Mike ' as AUTHOR FROM DUAL
UNION
SELECT 7 as ID, ' PQR ' as NAME, ' Raj ' as AUTHOR FROM DUAL)
GROUP BY NAME
ORDER by NAME;
``` | ```
SQL> with t as (
2 SELECT 2 as ID, ' Abc ' as NAME, ' John ' as AUTHOR FROM DUAL
3 UNION
4 SELECT 6 as ID, ' Abc ' as NAME, ' John ' as AUTHOR FROM DUAL
5 UNION
6 SELECT 3 as ID, ' Xyz ' as NAME, ' Mike ' as AUTHOR FROM DUAL
7 UNION
8 SELECT 4 as ID, ' Abc ' as NAME, ' Mike ' as AUTHOR FROM DUAL
9 UNION
10 SELECT 5 as ID, ' Xyz ' as NAME, ' John ' as AUTHOR FROM DUAL
11 UNION
12 SELECT 1 as ID, ' Abc ' as NAME, ' Mike ' as AUTHOR FROM DUAL
13 UNION
14 SELECT 7 as ID, ' PQR ' as NAME, ' Raj ' as AUTHOR FROM DUAL)
15 select id, name, author, count#
16 from (
17 select t.id, t.name, t.author, decode(grouping(id),1,count(*),null) count#,
18 count(*) over (partition by name) cn, grouping(id) gid
19 from t
20 group by grouping sets((id,name,author),(name))
21 )
22 where (cn != 2 or count# is not null)
23 order by name, gid desc, author
24 /
ID NAME AUTHOR COUNT#
---------- ------ ------ ----------
Abc 4
2 Abc John
6 Abc John
4 Abc Mike
1 Abc Mike
PQR 1
Xyz 2
5 Xyz John
3 Xyz Mike
``` | ```
select
id, name, author, decode(grouping_id(id, name), 2, count(*)) count
from
books
group by
rollup(name, (author, id))
having
grouping_id(id, name) != 3
order
by name, id nulls first
``` | Oracle SQL - Group and Detail records | [
"",
"sql",
"oracle",
""
] |
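SQLite has no GROUPING SETS, but the shape of the result can be emulated with a UNION ALL of the summary query and the detail rows of multi-row groups. This is only a sketch of the output, not of the Oracle plan; the `is_detail` sort key is an invented helper column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER, name TEXT, author TEXT)")
conn.executemany("INSERT INTO books VALUES (?, ?, ?)",
                 [(2, "Abc", "John"), (6, "Abc", "John"), (7, "PQR", "Raj")])

# Summary rows carry a NULL id and the group count; detail rows are
# added only for names whose group has more than one member. The
# is_detail flag makes each summary row sort before its details.
rows = conn.execute("""
    SELECT NULL AS id, name, NULL AS author, COUNT(*) AS cnt, 0 AS is_detail
    FROM books GROUP BY name
    UNION ALL
    SELECT id, name, author, NULL, 1
    FROM books
    WHERE name IN (SELECT name FROM books GROUP BY name HAVING COUNT(*) > 1)
    ORDER BY name, is_detail, id
""").fetchall()
print(rows)
```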
I am curious if / how a query could be written that returns the maximum value of each row in a table and the index of the column containing that value.
```
CREATE TABLE my_table (
id INT unsigned NOT NULL AUTO_INCREMENT,
field1 INT NOT NULL,
field2 INT NOT NULL,
PRIMARY KEY (id)
);
INSERT INTO my_table (field1, field2) VALUES
(5, 3),
(65, 89),
(4, 4)
```
The desired result set of the query would be
```
id max_val col_idx
-- ------- -------
1 5 2
2 89 3
3 4 2
```
(I'd prefer ties in value to return the smallest column index) | You can do so
```
SELECT id , GREATEST(field1, field2) max_val ,
CASE WHEN field1 >= field2 THEN 2 ELSE 3 END col_idx
FROM my_table
```
## [Fiddle Demo](http://sqlfiddle.com/#!2/9752a/2) | this returns the max value for each row; see the working [FIDDLE](http://sqlfiddle.com/#!2/9752a/9)
```
SELECT
IF (field1 >= field2, field1, field2) AS max_val,
IF (field1 >= field2, 2, 3) AS my_index
FROM my_table
``` | Select maximum value for each row and its column index | [
"",
"mysql",
"sql",
""
] |
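SQLite happens to have a two-argument `MAX()` that behaves like MySQL's `GREATEST`, so the accepted pattern can be tried directly with the schema from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE my_table (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    field1 INTEGER NOT NULL,
    field2 INTEGER NOT NULL)""")
conn.executemany("INSERT INTO my_table (field1, field2) VALUES (?, ?)",
                 [(5, 3), (65, 89), (4, 4)])

# Using >= (not >) makes ties resolve to the smaller column index,
# exactly as the question asked.
rows = conn.execute("""
    SELECT id,
           MAX(field1, field2) AS max_val,
           CASE WHEN field1 >= field2 THEN 2 ELSE 3 END AS col_idx
    FROM my_table
""").fetchall()
print(rows)
```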
I have two tables `Backup` and `Requests`.
Below is the script for both the tables
**Backup**
```
CREATE TABLE UserBackup(
FileName varchar(70) NOT NULL,
)
```
The file name is represented by a guid. Sometimes there is some additional information related to the file, hence we have entries like guid\_ADD in the table.
**Requests**
```
CREATE TABLE Requests(
RequestId UNIQUEIDENTIFIER NOT NULL,
Status int Not null
)
```
Here are some sample rows :
**UserBackup** table:
```
FileName
15b993cc-e8be-405d-bb9f-0c58b66dcdfe
4cffe724-3f68-4710-b785-30afde5d52f8
4cffe724-3f68-4710-b785-30afde5d52f8_Add
7ad22838-ddee-4043-8d1f-6656d2953545
```
**Requests** table:
```
RequestId Status
15b993cc-e8be-405d-bb9f-0c58b66dcdfe 1
4cffe724-3f68-4710-b785-30afde5d52f8 1
7ad22838-ddee-4043-8d1f-6656d2953545 2
```
What I need is to return all the rows from the UserBackup table whose name (the guid) matches a `RequestId` in the `Requests` table and the status is 1. So here is the query I wrote:
```
Select *
from UserBackup
inner join Requests on UserBackup.FileName = Requests.RequestId
where Requests.Status = 1
```
And this works fine. It returns me the following result
```
FileName RequestId Status
15b993cc-e8be-405d-bb9f-0c58b66dcdfe 15b993cc-e8be-405d-bb9f-0c58b66dcdfe 1
4cffe724-3f68-4710-b785-30afde5d52f8 4cffe724-3f68-4710-b785-30afde5d52f8 1
4cffe724-3f68-4710-b785-30afde5d52f8_Add 4cffe724-3f68-4710-b785-30afde5d52f8 1
```
This is exactly what I want. But what I don't understand is how it is working. If you notice, the result returns the `4cffe724-3f68-4710-b785-30afde5d52f8_Add` row as well. The inner join is on `varchar` and `uniqueidentifier`, and this join, instead of working like an "equals" comparison, works like a "contains" comparison. I want to know how this works so that I can be sure to use this code without any unexpected scenarios. | When you compare two columns of different data types, SQL Server will attempt an implicit conversion on the side with the lower data type precedence.
The following comes from MSDN docs on `uniqueidentifier`
> The following example demonstrates the truncation of data when the
> value is too long for the data type being converted to. Because the
> uniqueidentifier type is limited to 36 characters, the characters that
> exceed that length are truncated.
```
DECLARE @ID nvarchar(max) = N'0E984725-C51C-4BF4-9960-E1C80E27ABA0wrong';
SELECT @ID, CONVERT(uniqueidentifier, @ID) AS TruncatedValue;
```
<http://msdn.microsoft.com/en-us/library/ms187942.aspx>
Documentation is clear that data is **truncated**
Whenever you are unsure about your join operation, you can check the `Actual Execution Plan`.
Here is a test sample that you can run inside SSMS or SQL Sentry Plan Explorer:
```
DECLARE @userbackup TABLE ( _FILENAME VARCHAR(70) )
INSERT INTO @userbackup
VALUES ( '15b993cc-e8be-405d-bb9f-0c58b66dcdfe' ),
( '4cffe724-3f68-4710-b785-30afde5d52f8' ),
( '4cffe724-3f68-4710-b785-30afde5d52f8_Add' )
, ( '7ad22838-ddee-4043-8d1f-6656d2953545' )
DECLARE @Requests TABLE
(
requestID UNIQUEIDENTIFIER
,_Status INT
)
INSERT INTO @Requests
VALUES ( '15b993cc-e8be-405d-bb9f-0c58b66dcdfe', 1 )
, ( '4cffe724-3f68-4710-b785-30afde5d52f8', 1 )
, ( '7ad22838-ddee-4043-8d1f-6656d2953545', 2 )
SELECT *
FROM @userbackup u
JOIN @Requests r
ON u.[_FILENAME] = r.requestID
WHERE r.[_Status] = 1
```
Instead of a regular `join` operation, SQL Server is doing a `HASH MATCH` with `Expr1006`. In SSMS it is hard to see what it is doing, but if you open the XML plan you will find this:
```
<ColumnReference Column="Expr1006" />
<ScalarOperator ScalarString="CONVERT_IMPLICIT(uniqueidentifier,@userbackup.[_FILENAME] as [u].[_FILENAME],0)">
```
Whenever in doubt, check the execution plan, and always make sure to match data types when comparing.
This is a great blog post, [Data Mismatch on WHERE Clause might Cause Serious Performance Problems](http://blogs.msdn.com/b/turgays/archive/2013/09/16/data-mismatch-on-where-clause-might-cause-serious-performance-problems.aspx), from a Microsoft engineer on this exact problem. | The values on both sides of a comparison have to be of the *same* data type. There's no such thing as, say, comparing a `uniqueidentifier` and a `varchar`.
[`uniqueidentifier`](http://technet.microsoft.com/en-us/library/ms187942.aspx) has a [higher precedence](http://technet.microsoft.com/en-us/library/ms190309.aspx) than [`varchar`](http://technet.microsoft.com/en-us/library/ms176089.aspx) so the `varchar`s will be converted to `uniqueidentifier`s before the comparison occurs.
Unfortunately, you get no error or warning if the string contains more characters than are needed:
```
select CONVERT(uniqueidentifier,'4cffe724-3f68-4710-b785-30afde5d52f8_Add')
```
Result:
```
4CFFE724-3F68-4710-B785-30AFDE5D52F8
```
If you want to force the comparison to occur between strings, you'll have to perform an explicit conversion:
```
Select *
from UserBackup
inner join Requests
on UserBackup.FileName = CONVERT(varchar(70),Requests.RequestId)
where Requests.Status = 1
``` | SQL Server : join on uniqueidentifier | [
"",
"sql",
"sql-server",
"inner-join",
""
] |
```
| product | tests | runs | results |
|---------|-------|------|---------|
| A | AD | 1 | 12 |
| A | AD | 2 | 13 |
| A | AD | 3 | 14 |
| A | SS | 1 | 12 |
| A | TD | 1 | 12 |
| A | TD | 2 | 12 |
| B | AD | 1 | 11 |
| B | SS | 1 | 12 |
| c | AD | 1 | 12 |
| c | AD | 2 | 10 |
| D | AD | 1 | 16 |
| D | SS | 1 | 12 |
```
I used this query:
```
select DISTINCT Product,
SUM (case param_name when 'AD' then results ELSE 0 END) AS AD,
SUM (case param_name when 'SS' then results ELSE 0 END) AS SS,
SUM (case param_name when 'TD' then results ELSE 0 END) AS TD
FROM [product]
GROUP BY product
ORDER BY product
```
To get it in this format:
```
| PRODUCT | AD | SS | TD |
|---------|----|----|----|
| A | 39 | 12 | 24 |
| B | 11 | 12 | 0 |
| C | 22 | 0 | 0 |
| D | 16 | 12 | 0 |
```
I need it in this format, but the problem is that it's adding up all the test runs on AD, SS, and TD.
What I'm looking for is this:
```
| PRODUCT | AD | SS | TD |
|---------|----|----|----|
| A | 14 | 12 | 12 |
| B | 11 | 12 | 0 |
| C | 10 | 0 | 0 |
| D | 16 | 12 | 0 |
```
That is, pulling only the results from the greatest run of each test.
Can anyone help? | Try this:
```
select product, max(ad) ad, max(ss) ss, max(td) td
from (
select Product,
MAX(case tests when 'AD' then results ELSE 0 END) AS AD,
MAX(case tests when 'SS' then results ELSE 0 END) AS SS,
MAX(case tests when 'TD' then results ELSE 0 END) AS TD
FROM product
GROUP BY product, tests
) test_reports
group by product
ORDER BY product;
```
**Demo** @ [***SQL Fiddle***](http://sqlfiddle.com/#!2/7e915/12) | I think, judging from your expected result, you want to `SUM` those columns (`AD,SS,TD`) based on your highest `RUN` number. You could use `ROW_NUMBER` to assign the order based on your requirement and choose the right value set.
Try this...
```
WITH CTE AS
( SELECT *, ROW_NUMBER() OVER(partition BY product,test order by runs desc) rownum FROM product
)
select Product,
SUM (case test when 'AD' then results ELSE 0 END) AS AD,
SUM (case test when 'SS' then results ELSE 0 END) AS SS,
SUM (case test when 'TD' then results ELSE 0 END) AS TD
FROM CTE
WHERE rownum=1
GROUP BY product
ORDER BY product
``` | HOW do i filter out the latests runs in and crosstab the rowns into columns? | [
"",
"mysql",
"sql",
"formatting",
""
] |
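A window-function-free way to get the same "latest run only, then pivot" result is a correlated subquery. Here it is on SQLite with a subset of the question's data (only AD and SS shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (product TEXT, tests TEXT, run INTEGER, results INTEGER)")
conn.executemany("INSERT INTO runs VALUES (?, ?, ?, ?)",
                 [("A", "AD", 1, 12), ("A", "AD", 2, 13), ("A", "AD", 3, 14),
                  ("A", "SS", 1, 12), ("B", "AD", 1, 11), ("B", "SS", 1, 12)])

# Keep only the highest run per (product, test), then pivot with
# SUM(CASE ...) exactly as in the original query.
rows = conn.execute("""
    SELECT product,
           SUM(CASE tests WHEN 'AD' THEN results ELSE 0 END) AS ad,
           SUM(CASE tests WHEN 'SS' THEN results ELSE 0 END) AS ss
    FROM runs r
    WHERE run = (SELECT MAX(run) FROM runs
                 WHERE product = r.product AND tests = r.tests)
    GROUP BY product
    ORDER BY product
""").fetchall()
print(rows)
```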
I have been working on a recipe from an Android recipe book to utilize a database for storing events. The current code allows me to add new entries, but I am unable to modify any of the added entries. What I need is a database with a predefined number of rows (48), with the functionality of updating these rows through corresponding EditText fields. Can anyone help me to modify the following code to achieve this, please? I am new to Android coding and I need to start with this database.
Here is my MyDB file:
```
package com.cookbook.data;
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteException;
import android.util.Log;
public class MyDB {
private SQLiteDatabase db;
private final Context context;
private final MyDBhelper dbhelper;
// Initializes MyDBHelper instance
public MyDB(Context c){
context = c;
dbhelper = new MyDBhelper(context, Constants.DATABASE_NAME, null,
Constants.DATABASE_VERSION);
}
// Closes the database connection
public void close()
{
db.close();
}
// Initializes a SQLiteDatabase instance using MyDBhelper
public void open() throws SQLiteException
{
try {
db = dbhelper.getWritableDatabase();
} catch(SQLiteException ex) {
Log.v("Open database exception caught", ex.getMessage());
db = dbhelper.getReadableDatabase();
}
}
// Saves a diary entry to the database as name-value pairs in ContentValues instance
// then passes the data to the SQLitedatabase instance to do an insert
public long insertdiary(String title, String content)
{
try{
ContentValues newTaskValue = new ContentValues();
newTaskValue.put(Constants.TITLE_NAME, title);
newTaskValue.put(Constants.CONTENT_NAME, content);
newTaskValue.put(Constants.DATE_NAME, java.lang.System.currentTimeMillis());
return db.insert(Constants.TABLE_NAME, null, newTaskValue);
} catch(SQLiteException ex) {
Log.v("Insert into database exception caught",
ex.getMessage());
return -1;
}
}
// Reads the diary entries from database, saves them in a Cursor class and returns it from the method
public Cursor getdiaries()
{
Cursor c = db.query(Constants.TABLE_NAME, null, null,
null, null, null, null);
return c;
}
}
```
Here is my MyDBhelper file:
```
package com.cookbook.data;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteDatabase.CursorFactory;
import android.database.sqlite.SQLiteException;
import android.database.sqlite.SQLiteOpenHelper;
import android.util.Log;
public class MyDBhelper extends SQLiteOpenHelper{
private static final String CREATE_TABLE="create table "+
Constants.TABLE_NAME+" ("+
Constants.KEY_ID+" integer primary key autoincrement, "+
Constants.TITLE_NAME+" text not null, "+
Constants.CONTENT_NAME+" text not null, "+
Constants.DATE_NAME+" long);";
// database initialization
public MyDBhelper(Context context, String name, CursorFactory factory,
int version) {
super(context, name, factory, version);
}
@Override
public void onCreate(SQLiteDatabase db) {
Log.v("MyDBhelper onCreate","Creating all the tables");
try {
db.execSQL(CREATE_TABLE);
} catch(SQLiteException ex) {
Log.v("Create table exception", ex.getMessage());
}
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion,
int newVersion) {
Log.w("TaskDBAdapter", "Upgrading from version "+oldVersion
+" to "+newVersion
+", which will destroy all old data");
db.execSQL("drop table if exists "+Constants.TABLE_NAME);
onCreate(db);
}
}
```
Here is my Constants file:
```
package com.cookbook.data;
public class Constants {
public static final String DATABASE_NAME="datastorage";
public static final int DATABASE_VERSION=1;
public static final String TABLE_NAME="diaries";
public static final String TITLE_NAME="title";
public static final String CONTENT_NAME="content";
public static final String DATE_NAME="recorddate";
public static final String KEY_ID="_id";
public static final String TABLE_ROW="row_id";
}
```
Here is my Diary file that creates new entries into a database:
```
package com.example.classorganizer;
import android.app.Activity;
import android.content.Intent;
import android.database.sqlite.SQLiteDatabase;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import com.cookbook.data.MyDB;
import com.cookbook.data.MyDBhelper;
public class Diary extends Activity {
EditText titleET1,contentET1;
EditText titleET2,contentET2;
Button submitBT;
MyDB dba;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.diary);
dba = new MyDB(this);
dba.open();
titleET1 = (EditText)findViewById(R.id.diary1);
contentET1 = (EditText)findViewById(R.id.diarycontentText1);
titleET2 = (EditText)findViewById(R.id.diary2);
contentET2 = (EditText)findViewById(R.id.diarycontentText2);
submitBT = (Button)findViewById(R.id.submitButton);
submitBT.setOnClickListener(new OnClickListener() {
public void onClick(View v) {
try {
saveItToDB();
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
public void saveItToDB() {
dba.insertdiary(titleET1.getText().toString(), contentET1.getText().toString());
dba.insertdiary(titleET2.getText().toString(), contentET2.getText().toString());
dba.close();
titleET1.setText("");
contentET1.setText("");
titleET2.setText("");
contentET2.setText("");
Intent i = new Intent(Diary.this, DisplayDiaries.class);
startActivity(i);
}
/** Called when the user clicks the Back button */
public void visitMonday(View view) {
Intent intent = new Intent(this, Monday.class);
startActivity(intent);
}
}
```
And finally here is my DisplayDiaries file which returns created diaries in a listview:
```
package com.example.classorganizer;
import java.util.Date;
import java.text.DateFormat;
import java.util.ArrayList;
import android.app.ListActivity;
import android.content.Context;
import android.content.Intent;
import android.database.Cursor;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.TextView;
import com.cookbook.data.Constants;
import com.cookbook.data.MyDB;
public class DisplayDiaries extends ListActivity {
MyDB dba;
DiaryAdapter myAdapter;
private class MyDiary{
public MyDiary(String t, String c, String r){
title=t;
content=c;
recorddate=r;
}
public String title;
public String content;
public String recorddate;
}
@Override
protected void onCreate(Bundle savedInstanceState) {
dba = new MyDB(this);
dba.open();
setContentView(R.layout.diaries);
super.onCreate(savedInstanceState);
myAdapter = new DiaryAdapter(this);
this.setListAdapter(myAdapter);
}
private class DiaryAdapter extends BaseAdapter {
private LayoutInflater mInflater;
private ArrayList<MyDiary> diaries;
public DiaryAdapter(Context context) {
mInflater = LayoutInflater.from(context);
diaries = new ArrayList<MyDiary>();
getdata();
}
public void getdata(){
Cursor c = dba.getdiaries();
startManagingCursor(c);
if(c.moveToFirst()){
do{
String title =
c.getString(c.getColumnIndex(Constants.TITLE_NAME));
String content =
c.getString(c.getColumnIndex(Constants.CONTENT_NAME));
DateFormat dateFormat =
DateFormat.getDateTimeInstance();
String datedata = dateFormat.format(new
Date(c.getLong(c.getColumnIndex(
Constants.DATE_NAME))).getTime());
MyDiary temp = new MyDiary(title,content,datedata);
diaries.add(temp);
} while(c.moveToNext());
}
}
@Override
public int getCount() {return diaries.size();}
public MyDiary getItem(int i) {return diaries.get(i);}
public long getItemId(int i) {return i;}
public View getView(int arg0, View arg1, ViewGroup arg2) {
final ViewHolder holder;
View v = arg1;
if ((v == null) || (v.getTag() == null)) {
v = mInflater.inflate(R.layout.diaryrow, null);
holder = new ViewHolder();
holder.mTitle = (TextView)v.findViewById(R.id.name);
holder.mDate = (TextView)v.findViewById(R.id.datetext);
v.setTag(holder);
} else {
holder = (ViewHolder) v.getTag();
}
holder.mdiary = getItem(arg0);
holder.mTitle.setText(holder.mdiary.title);
holder.mDate.setText(holder.mdiary.recorddate);
v.setTag(holder);
return v;
}
public class ViewHolder {
MyDiary mdiary;
TextView mTitle;
TextView mDate;
}
}
/** Called when the user clicks the Back button */
public void visitDiary(View view) {
Intent intent = new Intent(this, Diary.class);
startActivity(intent);
}
}
```
As I mentioned before, this code, when run, allows creating new diaries and puts them in a ListView. What I need is to modify this code so the database has the predefined 48 rows (with default empty content) and the Diary file allows modifying those rows through 48 corresponding EditText fields. Any help with the above will be very much appreciated. I look forward to learning from you. Cheers, Patrick
edit--------------------------------------------------------------------------------------
Since I am an absolute beginner, I am still having problems creating the default 48 rows in my table and then writing code for updating each row with the corresponding EditText. Maybe there is some helpful soul that could figure this out for me?
edit 2 -----------------------------------------------------------------------------------
I have updated my MyDBhelper onCreate method with your code like this:
```
@Override
public void onCreate(SQLiteDatabase db) {
Log.v("MyDBhelper onCreate","Creating all the tables");
ContentValues cv=new ContentValues();
cv.put(Constants.KEY_ID, 1);
cv.put(Constants.TITLE_NAME, "My App");
db.insert( Constants.TABLE_NAME, null, cv);
String Updatetable= "update" + Constants.TABLE_NAME +
"Set" + Constants.CONTENT_NAME + " = " + 1 +
"Where" +Constants.KEY_ID +" = " + R.id.diary1;
try {
db.execSQL(CREATE_TABLE);
} catch(SQLiteException ex) {
Log.v("Create table exception", ex.getMessage());
}
}
```
but upon Diary's onCreate a new row is created instead of updating the existing rows... What am I doing wrong here? I believe that I put the code in the wrong place or I missed something else... | I put this loop statement:
```
for(int i=1; i <= 48; i++) { insertdiary("", ""); }
```
in my MyDBhelper file in onCreate() method as this:
```
@Override
public void onCreate(SQLiteDatabase db) {
Log.v("MyDBhelper onCreate","Creating all the tables");
try {
db.execSQL(CREATE_TABLE);
for(int i=1; i <= 48; i++) { insertdiary("", ""); }
}
catch(SQLiteException ex) {
Log.v("Create table exception", ex.getMessage());
}
}
```
To no avail: new rows are not being created. Is it because the database has already been created and this code does not run again upon app restart? If so, what can I do to delete the database and run this code again?
EDIT--------------------------------------------------------------------------------------
I found the solution:
Changing the method signature to take a SQLiteDatabase as a parameter worked.
```
public long insertdiary(SQLiteDatabase db, String title, String content)
```
together with:
```
for(int i=1; i <= 48; i++) { insertdiary(db, "free",""); }
```
placed within onCreate() method in MyDBhelper file
Problem of creating rows upon database creation solved. Now I need to find a way to populate edittext's with data from newly created rows to give a user a chance to save or change data.
Any help would be appreciated. | This could be an example on how to update a row in your db:
```
public boolean updateDiaryEntry(String title,String content, long date, long rowId){
ContentValues newValue = new ContentValues();
newValue.put(Constants.TITLE_NAME, title);
newValue.put(Constants.CONTENT_NAME, content);
newValue.put(Constants.DATE_NAME, date);
return db.update(DATABASE_TABLE, newValue, Constants.KEY_ID + "=" + rowId, null)>0;
}
```
You should put this code in your MyDB file and use it whenever you need to update the values of a specific row in your db. | updating rows instead of creating new entries database android | [
"",
"android",
"mysql",
"sql",
"database",
"row",
""
] |
I need to check whether table variable is **Nothing** in SQL Server.
I have used the following query
```
IF NOT EXISTS (SELECT 1 FROM @Request)
BEGIN
--Code here
END
```
Is there any other way to do it? | I think the way you did is correct but there's probably some clarification needed.
Table variables cannot be "nothing" or NULL. Once they are declared, they always exist. They may not have any records (in the same way as tables) but they exist.
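The `IF NOT EXISTS (SELECT 1 FROM @Request)` pattern from the question is exactly the idiomatic emptiness test, and the same idea is portable across engines. For what it's worth, here is a tiny illustration using an ordinary table in SQLite through Python's sqlite3 (the table name is invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE request (id INTEGER)")  # stands in for @Request

def is_empty(cur):
    # EXISTS stops scanning at the first row, so this stays cheap on big tables.
    return cur.execute("SELECT NOT EXISTS (SELECT 1 FROM request)").fetchone()[0] == 1

empty_before = is_empty(cur)                        # no rows inserted yet
cur.execute("INSERT INTO request (id) VALUES (1)")
empty_after = is_empty(cur)
```

Counting all rows just to test for emptiness is the thing to avoid; the `EXISTS` form can stop at the first match.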
Scalar variables can have a value of NULL. | T SQL is not an OO language, so Nothing is an unfamiliar concept here.
Table variables can be declared or not, and can have zero or more rows, that's it.
It's somewhat different from scalar variables, that can be NULL. | How To Check Table variable is empty in SQL Server | [
"",
"sql",
"sql-server",
""
] |
I have a following sql query:
```
SELECT * FROM `products_groups` `productGroup`
LEFT OUTER JOIN `products` `products` ON (`products`.`product_group_id`=`productGroup`.`id`)
GROUP BY productGroup.name
HAVING (COUNT(products.id) > 0)
```
Could you please tell how to change the query in order to return the total number of rows in all the result?
P.S.
SELECT count(\*)... doesn't work in this case | ```
SELECT COUNT(rowTotal) AS total
FROM (
SELECT SUM(1) as rowTotal
FROM `products_groups` `productGroup`
LEFT OUTER JOIN `products` `products`
ON (`products`.`product_group_id`=`productGroup`.`id`)
GROUP BY productGroup.name HAVING (COUNT(products.id) > 0)
)as groupRows
``` | A COUNT(\*) in place of the SELECT \* will not work because the count(\*) will be applied to each group. So your results would come back as:
GROUP 1 - 11 rows
GROUP 2 - 2 rows
and so on.
You need the COUNT(\*) to apply to the entire query instead of the group as follows:
```
SELECT COUNT(*) FROM
(SELECT 1 FROM `products_groups` `productGroup`
LEFT OUTER JOIN `products` `products` ON (`products`.`product_group_id`=`productGroup`.`id`)
GROUP BY productGroup.name
HAVING (COUNT(products.id) > 0)) AS grouped
``` | Group By, Having and Count | [
"",
"mysql",
"sql",
""
] |
I have a somewhat complicated query, as follows:
```
SELECT w,e2.EntityID
FROM (SELECT EntityID,SUM(frequency) w
FROM omid.entity_epoch_data where EpochID in
( select id from epoch where startDateTime>='2013-11-01 00:00:00' and
startDateTime <= '2013-11-30 00:00:00')
GROUP BY EntityID) e1
RIGHT JOIN
entity e2 ON e1.EntityID = e2.EntityID order by w desc
```
It works properly, but as soon as I add another inner join:
```
SELECT w,e2.EntityID
FROM (SELECT EntityID,SUM(frequency) w
FROM omid.entity_epoch_data inner join omid.entity_dataitem_relation as e3 on(e1.EntityID = e3.EntityID)
where e3.dataitemtype=3 and EpochID in
( select id from epoch where startDateTime>='2013-11-01 00:00:00' and
startDateTime <= '2013-11-30 00:00:00')
GROUP BY EntityID) e1
RIGHT JOIN
entity e2 ON e1.EntityID = e2.EntityID order by w desc
```
I get the following error:
**column 'EntityID' in the field set is ambiguous**
Does anyone have an idea where my mistake is?
**Update :**
I have the right version as follow (it gives me exactly what I want)
```
SELECT ID,e2.EntityID,e2.Name,AVG(intensity),AVG(quality),SUM(frequency)
FROM omid.entity_epoch_data as e1
right JOIN entity AS e2 ON (e1.EntityID = e2.EntityID)
where EpochID in
( select id from epoch where startDateTime>='2013-11-01 00:00:00' and
startDateTime <= '2013-11-30 00:00:00') and e1.EntityID in
(SELECT entityid FROM omid.entity_dataitem_relation where dataitemtype=3)
group by e2.EntityID order by sum(frequency) desc;
```
But it takes time and I need to change the
```
(SELECT entityid FROM omid.entity_dataitem_relation where dataitemtype=3)
group by e2.EntityID order by sum(frequency) desc;
```
to an inner join. Can anyone help me do that?
| You have two problems.
First off, the syntax for that first INNER JOIN is not correct. You are conflating the INNER JOIN and ON parts of the statement. (At least, I think this is the case, unless you actually have "." characters in your table names and really want a cartesian result from this JOIN.)
Secondly, the second usage of EntityID (in that new INNER JOIN you added) could refer to the EntityID value from either the left or right side of the INNER JOIN -- that's what the "ambiguous column" in the error means.
Here is some SQL to get you started. It follows the selection logic you give in the comment, below. It names each table with an alias and ensures that each column specifies the table from which it should be drawn. I don't have an instance of MySQL handy to try it, but it should get you started.
```
SELECT E.EntityID, SUM(EED.Frequency)
FROM entity_epoch_data EED INNER JOIN entity_dataitem_relation EDR
ON EED.EntityID = EDR.EntityID AND EDR.dataitemtype = 3
RIGHT OUTER JOIN entity E ON E.EntityID = EED.EntityID
GROUP BY E.EntityID
``` | Maybe there are fields named `EntityID` in both tables - `omid.entity_epoch_data` and `omid.entity_dataitem_relation` ? | inner join does not work | [
"",
"mysql",
"sql",
"left-join",
"inner-join",
""
] |
I have table act\_detail:
```
+------+------+--------+
| id | name | action |
+------+------+--------+
| 1 | Tom | eat |
| 2 | Jack | eat |
| 3 | Tom | play |
| 4 | Tom | sleep |
| 5 | Tom | eat |
| 6 | Jack | sleep |
| 7 | Tom | sleep |
| 8 | Tom | eat |
+------+------+--------+
```
I want to get the summary of each "eat" and the next nearest "sleep" row with the same name:
```
+------+--------+----------+
| name | eat_id | sleep_id |
+------+--------+----------+
| Tom | 1 | 4 |
| Jack | 2 | 6 |
| Tom | 5 | 7 |
| Tom | 8 | NULL |
+------+--------+----------+
```
I find I can get the result with the SQL below:
```
SELECT
a.name,
a.id AS eat_id,
(SELECT MIN(id) FROM act_detail b WHERE a.name = b.name AND b.id > a.id AND b.action = 'sleep') AS sleep_id
FROM act_detail a
WHERE a.action = 'eat'
ORDER BY a.id;
```
But this SQL needs a sub-query, and needs more sub-queries when more columns from table b are required. It will be very slow with lots of records.
Suppose we can add any index.
Is there any effective method to solve this problem with standard SQL(Maybe one left join,one temporary table and one group by statement)? | Without a sub query:-
```
SELECT a.name, a.id AS eat_id, MIN(b.id) AS sleep_id
FROM act_detail a
LEFT OUTER JOIN act_detail b
ON a.name = b.name
AND b.action = 'sleep'
AND b.id > a.id
WHERE a.action = 'eat'
GROUP BY a.name, eat_id
ORDER BY a.id;
```
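The join above reproduces the expected table on the question's sample data; here is a quick runnable check using SQLite from Python (`GROUP BY a.name, a.id` is used since not every engine accepts the `eat_id` alias there):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE act_detail (id INTEGER, name TEXT, action TEXT);
INSERT INTO act_detail VALUES
    (1, 'Tom', 'eat'),   (2, 'Jack', 'eat'),  (3, 'Tom', 'play'),
    (4, 'Tom', 'sleep'), (5, 'Tom', 'eat'),   (6, 'Jack', 'sleep'),
    (7, 'Tom', 'sleep'), (8, 'Tom', 'eat');
""")

rows = cur.execute("""
    SELECT a.name, a.id AS eat_id, MIN(b.id) AS sleep_id
    FROM act_detail a
    LEFT OUTER JOIN act_detail b
        ON a.name = b.name AND b.action = 'sleep' AND b.id > a.id
    WHERE a.action = 'eat'
    GROUP BY a.name, a.id
    ORDER BY a.id
""").fetchall()
# The final 'eat' with no later 'sleep' comes back with sleep_id = NULL (None).
```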
SQL fiddle for it here:-
<http://www.sqlfiddle.com/#!2/11834/2> | First get all eat actions and all sleep actions. Join both so that names match and sleep occurs after eating. Then find the minimum distance and add that distance.
```
select eat.name, eat.id as eat_id, eat.id + min(sleep.id - eat.id) as sleep_id
from
(
select id
from act_detail
where action = 'eat'
) eat
left join
(
select id
from act_detail
where action = 'sleep'
) sleep on sleep.name = eat.name and sleep.id > eat.id
group by eat.name, eat.id;
``` | SQL the next nearest record | [
"",
"sql",
"mysql",
""
] |
I have a table `psttodo-uit` with some fields like `Hostess Code, Datum Bezoek 1, PA, PB, PG, GoedkeuringDoorNew, Blanco, ..` .
Now I would like to select all the fields where Hostess Code is equal to ... . I want an overview like this:
```
1 march | info info info
2 march | info info info
```
But in my table I have :
```
2014-04-03 11:32:18
2014-04-03 11:22:16
2014-04-02 16:05:22
2014-04-02 15:40:43
2014-04-02 15:17:41
```
So I would like to select for each day and make a count of the other fields like `count(PA = 1)`. Can I do this in one SQL Query? | Your select has to be something like:
```
select `Hostess Code`, DATE_FORMAT(`Datum Bezoek 1`, '%e %M'), count(PA), count(PB), count(PG), GoedkeuringDoorNew, Blanco,... where ... group by DATE_FORMAT(`Datum Bezoek 1`, '%e %M') order by DATE_FORMAT(`Datum Bezoek 1`, '%e %M')
```
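The same grouping can be sanity-checked with SQLite from Python; SQLite has no `DATE_FORMAT`, so `strftime` plays its role here, and `SUM(pa = 1)` gives the per-day count of rows with `PA = 1` (table and column names are invented to match the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE psttodo (dt TEXT, pa INTEGER);
INSERT INTO psttodo VALUES
    ('2014-04-03 11:32:18', 1),
    ('2014-04-03 11:22:16', 0),
    ('2014-04-02 16:05:22', 1),
    ('2014-04-02 15:40:43', 1),
    ('2014-04-02 15:17:41', 0);
""")

rows = cur.execute("""
    SELECT strftime('%d-%m', dt) AS day,
           COUNT(*)              AS visits,
           SUM(pa = 1)           AS pa_count
    FROM psttodo
    GROUP BY day
    ORDER BY day
""").fetchall()
```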
I hope it helps. | Something like:
```
SELECT DATE_FORMAT(DT, '%e %M')
, COUNT(PA) PA_COUNT
, COUNT(PB) PB_CCOUNT
...
FROM
YOUR_TABLE
GROUP BY DATE_FORMAT(DT, '%e %M')
``` | SQL Query Multiple dates to one date | [
"",
"mysql",
"sql",
"date",
"datetime",
""
] |
```
if (txtBizDev.Text == null)
{
cmdPhaseOne.Parameters.AddWithValue("@newBizDev", "N/A");
}
else
{
cmdPhaseOne.Parameters.AddWithValue("@newBizDev", txtBizDev.Text);
}
```
This is my code; what I want is to submit `N/A` if there is an empty field, but all it does is make the `NULL` word in my database disappear. It's just an empty box. | Maybe you should be checking if the text box's text is an empty string.
```
if (txtBizDev.Text == null || txtBizDev.Text == "")
``` | I think we need to see the rest of your query and code to run that SQL command, because nothing in what you posted would cause that issue.
That being said, the Text property of a textbox will never return a null, you should instead check:
```
if (txtBizDev.Text == "")
{
//Do stuff here.
}
``` | Why does my string "N/A" fail to insert into the database? | [
"",
"asp.net",
"sql",
""
] |
I'm using the top query for my table but facing the error
> You have an error in your SQL syntax; check the manual that
> corresponds to your MySQL server version for the right syntax to use
> near '4 \* from sitemain order by siteid desc limit 0,30' at line 1
Here is the code which I used:
```
SELECT top 4 *
FROM sitemain
ORDER BY siteid DESC
``` | You are mixing MySQL and TSQL syntax together. The query obviously is MySQL (from the error message). What you want is
```
SELECT * FROM sitemain ORDER BY siteid DESC LIMIT 0,4
``` | What you are looking for is actually the `LIMIT` clause,
> The **LIMIT** clause can be used to constrain the number of rows returned
> by the SELECT statement. LIMIT takes one or two numeric arguments,
> which must both be nonnegative integer constants (except when using
> prepared statements).
>
> With two arguments, the first argument specifies the offset of the
> first row to return, and the second specifies the maximum number of
> rows to return. The offset of the initial row is 0 (not 1):
Documentation:
<https://dev.mysql.com/doc/refman/5.0/en/select.html>
```
SELECT *
FROM sitemain
ORDER BY siteid DESC
LIMIT 4
``` | sql syntax error while using top query | [
"",
"mysql",
"sql",
""
] |
This question is to settle an argument between me and a coworker.
Let's say we have the following query, executed on a standard LAMP server.
```
SELECT field1, field2, field3
FROM some_table
WHERE some_table.field1 = 123
ORDER BY field2 DESC
LIMIT 0, 15
```
Now let's assume the limit clause is vulnerable to SQL injection.
```
LIMIT [insert anything here], [also insert anything here]
```
My coworker's point is that there is no way to exploit this injection, so there's no need to escape it (since escaping takes more processing power and stuff).
I think her reasoning is stupid, but I can't figure out how to prove her wrong by finding an example.
I can't use `UNION` since the query is using an `ORDER BY` clause, and the MySQL user running the query doesn't have the `FILE` privilege, so using `INTO OUTFILE` is also out of the question.
So, can anyone tell us who is right on this case?
**Edit**: the query is executed using PHP, so adding a second query using a semicolon won't work. | The `LIMIT` clause *is* vulnerable to SQL injection, even when it follows an `ORDER BY`, as [Maurycy Prodeus demonstrated](https://rateip.com/blog/sql-injections-in-mysql-limit-clause/) earlier this year:
> ```
> mysql> SELECT field FROM user WHERE id >0 ORDER BY id LIMIT 1,1
> procedure analyse(extractvalue(rand(),concat(0x3a,version())),1);
> ERROR 1105 (HY000): XPATH syntax error: ':5.5.41-0ubuntu0.14.04.1'
> ```
>
> Voilà! The above solution is based on handy known technique of so-called error based injection. If, therefore, our vulnerable web application discloses the errors of the database engine (this is a real chance, such bad practices are common), we solve the problem. What if our target doesn’t display errors? Are we still able to exploit it successfully?
>
> It turns out that we can combine the above method with another well-known technique – time based injection. In this case, our solution will be as follows:
>
> ```
> SELECT field FROM table WHERE id > 0 ORDER BY id LIMIT 1,1
> PROCEDURE analyse((select extractvalue(rand(),
> concat(0x3a,(IF(MID(version(),1,1) LIKE 5, BENCHMARK(5000000,SHA1(1)),1))))),1)
> ```
>
> It works. What is interesting that using SLEEP is not possible in this case. That’s why there must be a BENCHMARK instead. | I would insert this:
```
1; DELETE FROM some_table WHERE 1; --
```
Just after the limit: that will select 1 row from some_table, then DELETE all of some_table's rows. The rest will be treated as a comment. | SQL Injection and the LIMIT clause | [
"",
"mysql",
"sql",
"sql-injection",
""
] |
I've got a big MySQL database. I need to delete the duplicate items quickly. Here's how it looks:
```
id | text1 | text2|
1 | 23 | 43 |
2 | 23 | 44 |
3 | 23 | 44 |
```
After the deletion, the remaining part of the table should be:
```
id | text1 | text2|
1 | 23 | 43 |
3 | 23 | 44 |
```
I don't care about the id; the most important thing is that only the duplicate items disappear. | You may try this:
```
ALTER IGNORE TABLE my_tablename ADD UNIQUE INDEX idx_name (text1 , text2);
```
i.e., try to add a `UNIQUE INDEX` on the two columns by altering the table.
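SQLite has no `ALTER IGNORE`, so there the equivalent is a two-step version of the same idea: remove the existing duplicates first (a `NOT IN (SELECT MIN(id) ...)` delete, which in MySQL needs the subquery wrapped in an extra derived table), then add the unique index so new duplicates are rejected. A runnable sketch with Python's sqlite3 on the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE t (id INTEGER, text1 TEXT, text2 TEXT);
INSERT INTO t VALUES (1, '23', '43'), (2, '23', '44'), (3, '23', '44');
""")

# Keep the lowest id of each (text1, text2) group, delete the rest.
cur.execute("""
    DELETE FROM t WHERE id NOT IN
        (SELECT MIN(id) FROM t GROUP BY text1, text2)
""")
cur.execute("CREATE UNIQUE INDEX idx_name ON t (text1, text2)")

remaining = cur.execute("SELECT id, text1, text2 FROM t ORDER BY id").fetchall()

try:
    cur.execute("INSERT INTO t VALUES (4, '23', '44')")  # duplicate pair again
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the unique index now blocks duplicates
```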
This has the **advantage** that no duplicate rows can be inserted into the table in the future either. | ```
DELETE FROM t WHERE id NOT IN
(SELECT MIN(id) FROM t GROUP BY text1, text2)
``` | How to remove duplicate items in MySQL with a dataset of 20 million rows? | [
"",
"mysql",
"sql",
""
] |
I need to return only results that have the highest percentage of change. Table looks like
```
item price1 price2
a 1 2
b 3 5
c 2 3
```
I would only want it to return a, like this:
```
item price1 price2 percent_change
a 1 2 200%
```
I tried this
```
select item, price1, price2, max(a.mPrice) as my_output from (select item,
((price2-price1)/price1) * 100 as mPrice from table) a
```
but it says the identifier is invalid (assuming that's `a.mPrice`). I've tried using `having` to only check for the max, but that didn't work; it still returns all the results.
I feel like I'm missing something really obvious but I can't think of what it is. I'm guessing that using max(a.mPrice) wouldn't actually get the max like I need, but I'm not sure what else to do. | I figured it out.
```
select item, price1, price2, ((price2 - price1)/price1) * 100 as my_output from table
group by item, price1, price2
having ((price2 - price1)/price1) * 100 = (select max(((price2 - price1)/price1) * 100) from table)
```
It's messy, but it does what I need it to. | Your table `a` does not have anything but `item` and `mPrice`, so fetching `price1` and `price2` in the outer query does not work.
But you can do it even without such subquery
```
SELECT item, price1, price2, ((price2-price1)/price1)*100 as growth
FROM products
ORDER BY growth DESC
LIMIT 1;
```
First, choose all three attributes and calculate growth, then order by growth descending and only take the top value.
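On the question's data this can be checked directly (SQLite via Python's sqlite3); note that with this formula the top item `a` comes out at 100, i.e. the price doubled:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE products (item TEXT, price1 REAL, price2 REAL);
INSERT INTO products VALUES ('a', 1, 2), ('b', 3, 5), ('c', 2, 3);
""")

top = cur.execute("""
    SELECT item, price1, price2,
           (price2 - price1) / price1 * 100 AS growth
    FROM products
    ORDER BY growth DESC
    LIMIT 1
""").fetchone()
# 'a' has the biggest relative change (100%), ahead of b (~66.7%) and c (50%).
```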
Fiddle: <http://sqlfiddle.com/#!2/16b7c/7> | Get results only with max change? | [
"",
"sql",
"oracle-sqldeveloper",
""
] |
I previously asked a question on how to do a `PRINT` that gives output immediately while the rest of the script is still running (See: [How to see progress of running SQL stored procedures?](https://stackoverflow.com/questions/10132847/how-to-see-progress-of-running-sql-stored-procedures)). The simple answer is to use:
```
RAISERROR ('My message', 0, 1) WITH NOWAIT
```
However, I noticed that the returned output is not always immediate, especially when it returns a lot of results. As a simple experiment, consider the following script:
```
DECLARE @count INT
SET @count = 1
WHILE @count <= 5000
BEGIN
RAISERROR ('Message %d', 0, 1, @count) WITH NOWAIT
WAITFOR DELAY '00:00:00.01'
SET @count = @count + 1
END
```
The above script will spit out 5000 lines of text. If you run the script, you will notice:
* For the first 500 lines (1 - 500 lines), it returns each output line immediately.
* For the next 500 lines (501 - 1000 lines), it returns the output once every 50 lines. (All 50 lines will be batched together and returned only at the end of each 50th line.)
* For every line after that (1001 - \* lines), it returns the output once every 100 lines. (All 100 lines will be batched together and returned only at the end of each 100th line.)
So that means after the first 500 lines, `RAISERROR WITH NOWAIT` no longer works as expected, which causes problems for me because I want to see the progress of my very long-running script.
So my question: Is there any way to disable this 'batched' behaviour and make it always return immediately?
---
**EDIT:** I'm using SSMS (SQL Server Management Studio) to run the above script . It seems to affect all versions (both SSMS and SQL Server), and whether the output is set to "Results to Text" or "Results to Grid" makes no difference.
**EDIT:** Apparently, this batched behaviour happens after 500 lines, *regardless* of the number of bytes. So, I've updated the question above accordingly.
**EDIT:** Thanks to **Fredou** for pointing out that this is an issue with SSMS; third-party tools like *LinqPad* will not have this issue.
However, I found out that *LinqPad* doesn't output immediately either when you have table results in the output. For example, consider the following code:
```
RAISERROR ('You should see this immediately', 0, 1) WITH NOWAIT
SELECT * FROM master.sys.databases
RAISERROR ('You should see this immediately too, along with a table above.', 0, 1) WITH NOWAIT
WAITFOR DELAY '00:00:05'
RAISERROR ('You should see this 5 seconds later...', 0, 1) WITH NOWAIT
```
When you run the above script in LinqPad, only the first line is output immediately. The rest will only output after 5 seconds...
So, anyone know of a good light-weight alternative to SSMS that is free, does not require installation and will work with immediate outputs of RAISERROR WITH NOWAIT mixed with table results? | [Query ExPlus](http://sourceforge.net/projects/queryexplus/?source=pdlp) seems to do the trick, as long as you **set the output option to "Results In Text"**.
Another crude alternative is `sqlcmd`, which is installed as part of SQL Server. That works as well, but table results tend to be jumbled up and hard to read on the command console screen... | It would appear this works in 2008R2 if you set results to text.
[](https://i.stack.imgur.com/DioJy.png) | RAISERROR WITH NOWAIT not so immediate? | [
"",
"sql",
"t-sql",
""
] |
Good day.
When this piece of code hits "While Not objMyRecordset.EOF", I receive run-time error 3704. In addition, when I hover over the "objMyRecordset" portion of "strPSTPath = CStr(objMyRecordset("PSTPath"))", I see an error beginning "objMyRecordSet(PS... =
My SQL query works fine when used in SQL server management studio. The error occurs instantaneously upon hitting the line in question. I have stepped through the code line by line. Any thoughts would be appreciated. Thank you.
```
Sub Button3_Click()
'******
'Variables
Set objMyConn = New ADODB.Connection
Set objMyCmd = New ADODB.Command
Set objMyRecordset = New ADODB.Recordset
'Open connection
objMyConn.ConnectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=XXXX;Data Source=XXXX"
objMyConn.Open
'Set and execute SQL command
Set objMyCmd.ActiveConnection = objMyConn
objMyCmd.CommandText = "<Valid SQL Command removed for public code display>"
objMyCmd.CommandType = adCmdText
'Open recordset
Set objMyRecordset.Source = objMyCmd
objMyRecordset.Open objMyCmd
While Not objMyRecordset.EOF
strPSTPath = CStr(objMyRecordset("PSTPath"))
MsgBox strPSTPath
objMyRecordset.MoveNext
Wend
End Sub
``` | Tracked down the problem. The connection string was incorrect for the version of SQL on this particular server. Changed the connection string to:
```
Driver={SQL Server Native Client 11.0};Server=XXXX;Database=XXXX;Trusted_Connection=yes;
```
Now everything works. | Try this:
```
Sub Button3_Click()
Dim SQL As String, strPSTPath As String
Dim objMyConn, objMyRecordset
Set objMyConn = New ADODB.Connection
Set objMyRecordset = New ADODB.Recordset
objMyConn.Open "Provider=SQLOLEDB.1;Integrated Security=SSPI;" & _
"Persist Security Info=False;Initial Catalog=XXXX;Data Source=XXXX"
SQL = "<Valid SQL Command removed for public code display>"
objMyRecordset.Open SQL, objMyConn
While Not objMyRecordset.EOF
strPSTPath = CStr(objMyRecordset("PSTPath"))
MsgBox strPSTPath
objMyRecordset.MoveNext
Wend
End Sub
``` | VBA run time error 3704 with recordset in Excel | [
"",
"sql",
"sql-server",
"excel",
"vba",
""
] |
I am using the following procedure to add a count per item which works fine so far.
How do I have to change this if I also want the total count in addition, so that it counts all items in that select?
**My procedure:**
```
SELECT RANK() OVER(ORDER BY COUNT(*) desc, policy) [Rank],
policy,
COUNT(*) AS groupCount,
'currentMonth' AS groupName
FROM Log_PE
WHERE CONVERT(DATE, dateEsc, 120) >= CONVERT(DATE, CONVERT(VARCHAR(6), GETDATE(), 112) + '01', 112)
GROUP BY policy
ORDER BY groupCount desc, policy
``` | Already answered in another post :
[SUM of grouped COUNT in SQL Query](https://stackoverflow.com/questions/12927268/sum-of-grouped-count-in-sql-query)
```
select name, COUNT(name) as count from Table
group by name
Union all
select 'SUM', COUNT(name)
from Table
``` | You can use WITH ROLLUP (as suggested by blue) as well as GROUPING SETS;
grouping sets give you more flexibility:
```
SELECT RANK() OVER(ORDER BY COUNT(*) desc, policy) [Rank],
policy,
COUNT(*) AS groupCount,
'currentMonth' AS groupName
FROM Log_PE
WHERE CONVERT(DATE, dateEsc, 120) >= CONVERT(DATE, CONVERT(VARCHAR(6), GETDATE(), 112) + '01', 112)
GROUP BY Grouping Sets (policy,())
ORDER BY groupCount desc, policy
``` | Select with count by item and total count in SQL Server | [
"",
"sql",
"sql-server",
"count",
""
] |
I use a subquery that returns the monetary values of all orders that have discounts greater than 15%.
List the orderid and the order value, with the highest value at the top.
Here is what I entered:
```
SELECT SUM(od.orderid) As OrderID,
AS [Order Values]
FROM [Order Details] od
WHERE od.Discount =
(SELECT od.Discount
FROM [Order Details] od
GROUP BY od.discount
HAVING od.discount >.15)
GROUP BY od.quantity, od.discount, od.UnitPrice
ORDER BY [Order Values] ASC;
```
Here is what I got:
> Msg 512, Level 16, State 1, Line 1
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression.
What am I missing? | You really don't need a subquery here anyway:
```
SELECT OrderID, SUM(UnitPrice * Quantity) OrderTotal
FROM dbo.[Order Details]
WHERE Discount > 0.15
GROUP BY OrderID
ORDER BY OrderTotal DESC
```
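A quick check of that first form on invented data (SQLite through Python's sqlite3, with the bracketed identifiers replaced by plain names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE order_details (order_id INTEGER, unit_price REAL,
                            quantity INTEGER, discount REAL);
INSERT INTO order_details VALUES
    (1, 10.0, 2, 0.20),   -- qualifies: discount > 0.15
    (1,  5.0, 1, 0.20),
    (2,  8.0, 3, 0.10),   -- filtered out
    (3,  4.0, 5, 0.25);   -- qualifies
""")

rows = cur.execute("""
    SELECT order_id, SUM(unit_price * quantity) AS order_total
    FROM order_details
    WHERE discount > 0.15
    GROUP BY order_id
    ORDER BY order_total DESC
""").fetchall()
# Order 2 never had a qualifying discount, so it drops out entirely.
```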
**With Sub query**
```
SELECT OrderID, SUM(UnitPrice * Quantity) OrderTotal
FROM dbo.[Order Details]
WHERE OrderID IN (SELECT OrderID
FROM dbo.[Order Details]
WHERE Discount > 0.15)
GROUP BY OrderID
ORDER BY OrderTotal DESC
```
**Using EXISTS Operator**
```
SELECT OD.OrderID, SUM(OD.UnitPrice * OD.Quantity) OrderTotal
FROM dbo.[Order Details] OD
WHERE EXISTS (SELECT 1
FROM dbo.[Order Details]
WHERE OrderID = OD.OrderID
AND Discount > 0.15)
GROUP BY OD.OrderID
ORDER BY OrderTotal DESC
``` | The error is in the `WHERE` clause. You're using `=` for multiple values, but that operator only allows for 1 value. So change this:
```
WHERE od.Discount =
(SELECT od.Discount
FROM [Order Details] od
GROUP BY od.discount
HAVING od.discount >.15)
```
To this:
```
WHERE od.Discount IN
(SELECT od.Discount
FROM [Order Details] od
GROUP BY od.discount
HAVING od.discount >.15)
```
or this:
```
WHERE od.Discount =
(SELECT TOP 1 od.Discount
FROM [Order Details] od
GROUP BY od.discount
HAVING od.discount >.15
ORDER BY od.Discount DESC)
``` | Can't figure out what I am missing? | [
"",
"mysql",
"sql",
""
] |
I have a table of items:
```
╔════════╦═══════╦═══════╦════════╗
║ ItemID ║ Color ║ Size ║ Smell ║
╠════════╬═══════╬═══════╬════════╣
║ Z300 ║ black ║ big ║ stinky ║
║ Z200 ║ white ║ big ║ stinky ║
║ Z100 ║ black ║ small ║ stinky ║
║ Z050 ║ black ║ small ║ yummy ║
╚════════╩═══════╩═══════╩════════╝
```
Let's say I want to find items that are similar to the Z300. They can only be considered "similar" if 2/3 (color, size, smell) match it. So the Z200 and Z100 would match but the Z050 wouldn't because it only matches on 1/3. I need help writing a SQL query to produce this.
Thanks for your help. | Quickie, locally tested (using Postgres, but should work on MySQL too when you remove the `public.` prefix):
```
select
foo2.*
from
public.foo as foo1
left join
public.foo as foo2 on (
foo1.Color = foo2.Color and foo1.Size = foo2.Size or
foo1.Size = foo2.Size and foo1.Smell = foo2.Smell or
foo1.Smell = foo2.Smell and foo1.Color = foo2.Color
)
where
foo1.id = 'Z300';
``` | This should be close to what you need.
I added an additional row of data that is not similar to any of the other items to show what happens when there is no match. Add a where clause to the query to limit to a single base item if desired.
```
DECLARE @Items TABLE (
ItemId VARCHAR(16),
Color VARCHAR(16),
Size VARCHAR(16),
Smell VARCHAR(16)
);
INSERT @Items
SELECT 'Z300', 'black', 'big', 'stinky'
UNION SELECT 'Z200', 'white', 'big', 'stinky'
UNION SELECT 'Z100', 'black', 'small', 'stinky'
UNION SELECT 'Z050', 'black', 'small', 'yummy'
UNION SELECT 'Z025', 'yellow', 'medium', 'tasty'
SELECT
Base.ItemId AS BaseItemId,
Base.Color AS BaseItemColor,
Base.Size AS BaseItemSize,
Base.Smell AS BaseItemSmell,
Sim.ItemId AS SimilarItemId,
Sim.Color AS SimilarItemColor,
Sim.Size AS SimilarItemSize,
Sim.Smell AS SimilarItemSmell
FROM @Items AS Base
LEFT JOIN @Items AS Sim
ON (
(Base.Color = Sim.Color AND Base.Size = Sim.Size ) OR
(Base.Color = Sim.Color AND Base.Smell = Sim.Smell ) OR
(Base.Size = Sim.Size AND Base.Smell = Sim.Smell )
) AND Base.ItemId != Sim.ItemId;
``` | Finding similar items in SQL | [
"",
"sql",
"t-sql",
""
] |
I have this query which works fine with SQL Server 2008:
```
SELECT COALESCE(LastName + ', ' + FirstName, LastName, FirstName) [CalculatedEmployeeName]
FROM Emp_General
ORDER BY [CalculatedEmployeeName] ASC
```
(Notice that it is ordering by the CalculatedEmployeeName field and not complaining)
When I add a Row\_Number field as follows:
```
SELECT COALESCE(LastName + ', ' + FirstName, LastName, FirstName) [CalculatedEmployeeName]
, Row_Number() Over (Order BY [CalculatedEmployeeName] ASC) RecordNumber
FROM Emp_General
ORDER BY [CalculatedEmployeeName] ASC
```
I get an error of:
```
Invalid column name 'CalculatedEmployeeName'.
```
Any ideas on why it is complaining?
Note that I have already tried using
```
row_number() OVER (ORDER BY (SELECT 0))
```
and
```
row_number() over (order by @@rowcount)
```
as discussed in:
[SQL Row\_Number() function in Where Clause without ORDER BY?](https://stackoverflow.com/questions/6390224/sql-row-number-function-in-where-clause-without-order-by?lq=1)
But they do not return the row\_number in the correct order. | try like this
```
;with cte as
(
SELECT COALESCE(LastName + ', ' + FirstName, LastName, FirstName) [CalculatedEmployeeName]
FROM Emp_General
)
select *,Row_Number() Over (Order BY [CalculatedEmployeeName] ASC) as RecordNumber from cte ORDER BY [CalculatedEmployeeName] ASC
``` | Change to this:
```
SELECT COALESCE(LastName + ', ' + FirstName, LastName, FirstName) [CalculatedEmployeeName]
, Row_Number() Over (Order BY COALESCE(LastName + ', ' + FirstName, LastName, FirstName) ASC) RecordNumber
FROM Emp_General
ORDER BY [CalculatedEmployeeName] ASC
```
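Repeating the expression works because a window function cannot see an alias being defined in the same SELECT list; the alias only becomes usable one query level up. A runnable illustration of the repeated-expression form (SQLite 3.25 or newer via Python's sqlite3, with invented rows):

```python
import sqlite3  # window functions need SQLite >= 3.25 (bundled with Python 3.8+)

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE emp_general (first_name TEXT, last_name TEXT);
INSERT INTO emp_general VALUES ('Ann', 'Young'), ('Bob', NULL), (NULL, 'Adams');
""")

rows = cur.execute("""
    SELECT COALESCE(last_name || ', ' || first_name,
                    last_name, first_name)               AS calculated_name,
           ROW_NUMBER() OVER (
               ORDER BY COALESCE(last_name || ', ' || first_name,
                                 last_name, first_name)) AS record_number
    FROM emp_general
    ORDER BY calculated_name
""").fetchall()
# NULL parts fall through COALESCE, so partial names still sort and number cleanly.
```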
Cannot reference derived columns in the `SELECT` | Invalid column name when using a COALESCE field with Row_Number() | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table1 with only two columns, `id` and `password`, and I need to insert values for `id` and `password` where `id` is not like 'ADMIN%'...
I tried my query as below:
1. `insert into table1 values('','') where id not like 'ADMIN%'` (doesn't work)
2. `insert into table values('','') select id,password from table1 where id not like 'ADMIN%'` (doesn't work)
Both queries didn't work. Please help!! | `insert into table2 (id, password) select id, password from table1 where id not like 'ADMIN%';` (note: there is no semicolon and no VALUES clause between the column list and the select).
Try this... | Something like
```
insert into table (id, password) select id,password from table1 where id not like 'ADMIN%'
```
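A behavior check with SQLite from Python (invented data); `INSERT ... SELECT` has no `VALUES` clause, and the filtering lives entirely in the SELECT's `WHERE`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE table1 (id TEXT, password TEXT);
CREATE TABLE table2 (id TEXT, password TEXT);
INSERT INTO table1 VALUES ('ADMIN1', 'a'), ('user1', 'b'), ('user2', 'c');
""")

cur.execute("""
    INSERT INTO table2 (id, password)
    SELECT id, password FROM table1 WHERE id NOT LIKE 'ADMIN%'
""")

copied = cur.execute("SELECT id FROM table2 ORDER BY id").fetchall()
# Only the non-admin rows make it into table2.
```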
As per <http://dev.mysql.com/doc/refman/5.1/en/insert-select.html> | How to write insert with where clause? | [
"",
"mysql",
"sql",
""
] |
The SQL query works in SQL Server Management Studio, but in Visual Studio it gives an error:
> Incorrect syntax near D1
Code:
```
private void GetDataByID(string _id)
{
string connectionString = ConfigurationManager.ConnectionStrings["DatabaseConnectionString"].ConnectionString;
using (SqlConnection connection = new SqlConnection(connectionString))
{
string sqlCommand = "SELECT d1.*, d2.* FROM KFM.dbo.ToolBoxDocContent as d1, KFM.dbo.ToolBoxDocument as d2" +
"where d1.DocumentId = d2.DocumentId and = d2.DocumentId =" + _id;
SqlCommand cmd = new SqlCommand(sqlCommand, connection);
SqlDataReader MyReader;
try
{
connection.Open();
MyReader = cmd.ExecuteReader();
while (MyReader.Read())
{
string sDueWeek = MyReader["DueWeek"].ToString();
string sTitle = MyReader["DocumentTitle"].ToString();
//string sEnglishBodyContent = MyReader["DocumentBody"].ToString();
//string sFrenchBodyContent = MyReader["DocumentBody"].ToString();
txb_Week.Text = sDueWeek;
txb_Title.Text = sTitle;
}
}
catch (Exception ex)
{
Response.Write(ex.Message);
}
}
}
``` | One thing I have noticed in your query is that
1. a space is missing where the two string parts are concatenated with '+'; add a space before `where`
2. incorrect syntax in the `where` clause: remove the extra **'='** after **`and`** in `and = d2.DocumentId = " + _id`
Your final query will look like as mentioned below:
```
string sqlCommand = "SELECT d1.*, d2.* FROM KFM.dbo.ToolBoxDocContent as d1, KFM.dbo.ToolBoxDocument as d2" +
" where d1.DocumentId = d2.DocumentId and d2.DocumentId =" + _id;
```
**Update:**
```
string sqlCommand = "SELECT d1.*, d2.* FROM KFM.dbo.ToolBoxDocContent as d1, KFM.dbo.ToolBoxDocument as d2" +
" where d1.DocumentId = d2.DocumentId and d2.DocumentId = '" + _id + "'";
``` | Change the query as shown below
```
"SELECT d1.*, d2.* FROM KFM.dbo.ToolBoxDocContent as d1, KFM.dbo.ToolBoxDocument as d2
where d1.DocumentId = d2.DocumentId and d2.DocumentId ='" + _id + "'";
``` | Incorrect syntax near keyword 'd1' | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
I have 3 tables:
```
Advancements
id
Advancement_Requirements
advancement_id,
requirement_id
Requirements
id
description
```
I want to get all of the `requirements.description` values for a certain `advancement.id`. I am able to do this, but I have to use `DISTINCT`, which I do not want to do. If I don't use `DISTINCT`, I get pages and pages of results. Here is the query I am running that works. Please tell me how to get my results without `DISTINCT`.
```
select distinct(requirements.description)
from requirements
inner join advancement_requirements on requirements.requirement_id = advancement_requirements.requirement_id
inner join advancements on advancement_requirements.advancement_id = 1;
```
What am I doing wrong? | I'm not sure what you are trying to accomplish, but my guess is that you want to join the tables and filter on a search criterion. So you should do something like this:
```
select
requirements.description
from requirements
inner join advancement_requirements on requirements.requirement_id = advancement_requirements.requirement_id
inner join advancements on advancement_requirements.advancement_id = advancements.id
WHERE advancements.id = 1;
``` | You are not associating the Advancements table in your join so it just return a result for each requirement times a Advancements register.
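For reference, the corrected join can be checked end-to-end with Python's sqlite3 (abbreviated schema, hypothetical sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE advancements (id INTEGER PRIMARY KEY);
CREATE TABLE requirements (requirement_id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE advancement_requirements (advancement_id INTEGER, requirement_id INTEGER);
INSERT INTO advancements VALUES (1), (2);
INSERT INTO requirements VALUES (10, 'swim 100m'), (11, 'tie a knot');
INSERT INTO advancement_requirements VALUES (1, 10), (1, 11), (2, 10);
""")

# Joining advancements on its id (instead of only on a constant) keeps the
# result to one row per matching requirement, so DISTINCT is unnecessary.
rows = conn.execute("""
    SELECT r.description
    FROM requirements r
    JOIN advancement_requirements ar ON r.requirement_id = ar.requirement_id
    JOIN advancements a ON ar.advancement_id = a.id
    WHERE a.id = 1
""").fetchall()
print(rows)
```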
```
select requirements.description from requirements inner join advancement_requirements on requirements.requirement_id = advancement_requirements.requirement_id inner join advancements on advancement_requirements.advancement_id = advancements.id
where advancements.id = 1;
``` | PostgreSQL - Joining 3 tables returning too many results | [
"",
"sql",
"postgresql",
"join",
""
] |
I have three tables: Products, Requests, Bookings.
```
Table Products (with all product categories):
name, ..
A1, ..
A1, ..
A2, ..
A2, ..
A2, ..
A3, ..
B1, ..
..
Table Requests (with a state for every request):
name, failed
A1, 0
A2, 1
A3, 0
A2, 0
..
Table Bookings:
name, ..
A1, ..
A2, ..
..
```
What I need is a joined output with summary data like this:
```
prod_category, count_requests, count_failed, count_bookings
A1, 1, 0, 1
A2, 2, 1, 1
A3, 6, 0, null
B1, null, null, null
```
I already have three separate queries - they work fine, but I can't get it to work with only one query.
Here is an example of one of my sql queries:
```
SELECT
T2.n AS category_name, T1.c AS count_requests
FROM
(SELECT
object_name as n,
count(object_name) as c
FROM
requests
GROUP BY n) T1
RIGHT JOIN
(SELECT distinct
object_name as n
FROM
products) T2 ON T1.n = T2.n
ORDER BY T2.n ASC;
```
Query's output:
```
# category_name, count_requests
'A1', '8'
'A2', '3'
'C1', NULL
'E1', '9'
'E2', '16'
'E3', '3'
'F1', '1'
``` | Assuming that all requests that doesn't fail also result in a row in bookings there is no need to join in bookings at all.
```
select p.name,
count(r.name) as count_requests,
sum(r.failed) as count_failed,
count(r.name) - sum(r.failed) as count_bookings
from products p
left join requests r on (p.name = r.name)
group by p.name;
```
<http://sqlfiddle.com/#!2/da461/6> | ```
SELECT
Name,
(SELECT COUNT(name) FROM Requests WHERE name = Products.Name) AS count_requests,
(SELECT COUNT(name) FROM Requests WHERE name = Products.Name AND failed = 1) AS count_failed,
(SELECT COUNT(name) FROM Requests WHERE name = Products.Name AND failed = 0) AS count_bookings
FROM Products
GROUP BY Name
ORDER BY Name
``` | MySQL join over three tables | [
"",
"mysql",
"sql",
""
] |
I am using a query to get some application Received Date from Oracle DB which is stored as GMT. Now I have to convert this to Eastern standard/daylight savings time while retrieving.
I am using the below query for this:
```
select to_char (new_time(application_recv_date,'gmt','est'), 'MON dd, YYYY') from application
```
It works fine for Standard time. But for daylight savings time we need to convert it to 'edt' based on timezone info. I am not very sure on how to do this. Please help me out | You can use this query, without having to worry about timezone changes.
```
select to_char(cast(application_recv_date as timestamp) at time zone 'US/Eastern',
'MON dd, YYYY'
)
from application;
```
Ex:
EDT:
```
select cast(date'2014-04-08' as timestamp) d1,
cast(date'2014-04-08' as timestamp) at time zone 'US/Eastern' d2
from dual;
D1 D2
---------------------------------- -------------------------------------------
08-APR-14 12.00.00.000000 AM 07-APR-14 08.00.00.000000 PM US/EASTERN
```
EST:
```
select cast(date'2014-12-08' as timestamp) d1,
cast(date'2014-12-08' as timestamp) at time zone 'US/Eastern' d2
from dual;
D1 D2
---------------------------------- -------------------------------------------
08-DEC-14 12.00.00.000000 AM 07-DEC-14 07.00.00.000000 PM US/EASTERN
```
UPDATE:
Thanks to Alex Poole for reminding us that, when a timezone is not specified, the local timezone is used for the conversion.
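The automatic EDT/EST switching can be seen outside the database as well; here is a small Python `zoneinfo` sketch (Python 3.9+, requires the system tz database) mirroring the two examples above:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def gmt_to_eastern(naive_gmt):
    # Pin the naive timestamp to UTC/GMT first, then convert; zoneinfo
    # applies EDT or EST automatically depending on the date.
    return naive_gmt.replace(tzinfo=ZoneInfo("UTC")).astimezone(ZoneInfo("US/Eastern"))

summer = gmt_to_eastern(datetime(2014, 4, 8))   # DST in effect -> UTC-4 (EDT)
winter = gmt_to_eastern(datetime(2014, 12, 8))  # standard time -> UTC-5 (EST)
print(summer.strftime("%b %d, %Y %H:%M %Z"))
print(winter.strftime("%b %d, %Y %H:%M %Z"))
```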
To force the date to be recognized as GMT, use from\_tz.
```
from_tz(cast(date'2014-12-08' as timestamp), 'GMT') at time zone 'US/Eastern'
``` | In Oracle, you can achieve this with below query:
```
Select current_timestamp, current_timestamp at time zone 'Australia/Sydney' from dual;
```
Where `Australia/Sydney` is the name of your timezone in which you want to convert your time. | Time zone conversion in SQL query | [
"",
"sql",
"oracle",
"gmt",
"timezone-offset",
""
] |
I have a table representing a car traveling a route, broken up into segments:
```
CREATE TABLE travels (
id int auto_increment primary key,
segmentID int,
month int,
year int,
avgSpeed int
);
-- sample data
INSERT INTO travels (segmentID, month, year, avgSpeed)
VALUES
(15,1,2014,80),
(15,1,2014,84),
(15,1,2014,82),
(15,2,2014,70),
(15,2,2014,68),
(15,2,2014,66);
```
The above schema and sample data is also available as a [fiddle](http://sqlfiddle.com/#!2/183c1/1).
What query will identify segment IDs where average driving speed decreased by more than 10% compared to the previous month? | here is my solution
[SqlFiddle demo](http://sqlfiddle.com/#!2/183c1/40)
The key is to keep track of the link between the previous month and the next, so I compute year\*100+month and, after grouping by year and month, check for a difference of 1 (or 89 across a year boundary) in the year\*100+month field.
It is also a pity that MySQL does not support CTEs, which makes the query ugly with derived tables.
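As a miniature sanity check, here is the same `year*100+month` trick run against the sample data with Python's sqlite3, which does support CTEs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE travels (segmentID INT, month INT, year INT, avgSpeed INT)")
conn.executemany("INSERT INTO travels VALUES (?, ?, ?, ?)", [
    (15, 1, 2014, 80), (15, 1, 2014, 84), (15, 1, 2014, 82),
    (15, 2, 2014, 70), (15, 2, 2014, 68), (15, 2, 2014, 66),
])

rows = conn.execute("""
    WITH t AS (                      -- monthly averages, keyed by year*100+month
        SELECT segmentID, year*100 + month AS mark, AVG(avgSpeed) AS speed
        FROM travels
        GROUP BY segmentID, year*100 + month
    )
    SELECT s.segmentID, s.mark, 100 - s.speed/m.speed*100 AS drop_pct
    FROM t s
    JOIN t m ON s.segmentID = m.segmentID
            AND (s.mark = m.mark + 1 OR s.mark = m.mark + 89)  -- 89 = Dec -> Jan
    WHERE s.speed < m.speed * 0.9    -- more than 10% slower than previous month
""").fetchall()
print(rows)
```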
Code:
```
select s.month,s.speed,m.month as prevmonth,m.speed as sp, 100-s.speed/m.speed*100 as speeddiff from
(SELECT segmentid,month,year*100+month as mark,avg(avgSpeed) as speed from travels
group by segmentid,month,year*100+month
) as s
,
(SELECT segmentid,month,year*100+month as mark,avg(avgSpeed) as speed from travels
group by segmentid,month,year*100+month
) as m
where s.segmentid=m.segmentid and (s.mark=m.mark+1 or s.mark=m.mark+89) and (m.speed-(m.speed/10))>s.speed;
```
CTE code working on every DB except MySQL
```
with t as(SELECT segmentid,month,year*100+month as mark,avg(avgSpeed) as speed from travels
group by segmentid,month,year*100+month
)
select s.month,s.speed,m.month as prevmonth,m.speed as sp, 100-s.speed/m.speed*100 as speeddiff from t s
inner join t m on s.segmentid=m.segmentid and (s.mark=m.mark+1 or s.mark=m.mark+89)
where (m.speed-(m.speed/10))>s.speed;
``` | You need to select each month, join the next month (which is a little convoluted due to your table structure) and find the decrease(/increase). Try the following complex query
```
SELECT
t1.segmentID, t1.month, t1.year, AVG(t1.avgSpeed) as avgSpeed1,
AVG(t2.avgSpeed) as avgSpeed2,
1-(AVG(t1.avgSpeed)/AVG(t2.avgSpeed)) as decrease
FROM
travels t1
LEFT JOIN
travels t2
ON
CONCAT(t2.year,'-',LPAD(t2.month,2,'00'),'-',LPAD(1,2,'00')) = DATE_ADD(CONCAT(t1.year,'-',LPAD(t1.month,2,'00'),'-',LPAD(1,2,'00')), INTERVAL -1 MONTH)
GROUP BY
segmentID, month, year
HAVING
avgSpeed1/avgSpeed2 < .9
```
Here is the updated SQLFiddle - <http://sqlfiddle.com/#!2/183c1/25> | What query will select rows for months that are more than a certain percent different from the previous month? | [
"",
"mysql",
"sql",
""
] |
I have a relation table with this data. What I want is a query which returns an `a_id` to which every connected `status` equals 1.
So in this case, 6 would be the value returned.
```
a_id b_id status
4 757 0
4 758 0
4 761 0
5 757 1
5 758 0
5 761 1
6 757 1
6 761 1
6 758 1
```
MySQL 5.5 | ```
SELECT DISTINCT a_id
FROM relation_table
WHERE a_id NOT IN (SELECT a_id FROM relation_table WHERE status != 1);
``` | Try this
```
SELECT DISTINCT a_id
FROM table1
WHERE a_id NOT IN
(SELECT a_id
FROM table1
WHERE status != 1);
```
[**SQL Fiddle**](http://sqlfiddle.com/#!2/4dde17/2) | Query on a relation table | [
"",
"mysql",
"sql",
""
] |
I am writing a query to delete some rows in my table based on the count (TOTALCOUNT). I would like to delete all rows that count <=5. Below is my query:
```
DELETE FROM NSFABSTRACTS WHERE TOTALCOUNT<=5
SELECT WORD, COUNT(WORD) AS TOTALCOUNT FROM NSFABSTRACTS
GROUP BY WORD;
```
I also tried the one below, didn't work either:
```
DELETE FROM NSFABSTRACTS
SELECT WORD, COUNT(WORD) AS TOTALCOUNT FROM NSFABSTRACTS
GROUP BY WORD
WHERE TOTALCOUNT<=5;
```
It is giving me this error:
`Error report: SQL Error: ORA-00933: SQL command not properly ended. 00933. 00000 - "SQL command not properly ended" *Cause: *Action:`.
Appreciate your help. | ```
DELETE FROM NSFABSTRACTS
WHERE WORD IN (
SELECT WORD FROM NSFABSTRACTS
GROUP BY WORD
HAVING COUNT(*)<=5
)
```
This removes all rows containing words that occur five or fewer times in the table.
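The pattern is easy to verify with a quick Python sqlite3 session (illustrative data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE NSFABSTRACTS (WORD TEXT)")
conn.executemany("INSERT INTO NSFABSTRACTS VALUES (?)",
                 [("grant",)] * 7 + [("rare",)] * 3)

# Delete every row whose word occurs 5 times or fewer.
conn.execute("""
    DELETE FROM NSFABSTRACTS
    WHERE WORD IN (
        SELECT WORD FROM NSFABSTRACTS
        GROUP BY WORD
        HAVING COUNT(*) <= 5
    )
""")
remaining = conn.execute(
    "SELECT WORD, COUNT(*) FROM NSFABSTRACTS GROUP BY WORD"
).fetchall()
print(remaining)
```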
```
DELETE FROM NSFABSTRACTS
WHERE FileID IN
(
SELECT FILEID
FROM NSFABSTRACTS
GROUP BY WORD, FILEID
HAVING COUNT(WORD) <=5;
)
```
Assuming file ID is your primary key. | Cannot delete rows from a table SQL | [
"",
"sql",
"oracle",
"delete-row",
""
] |
I have created a movie database that contain following tables :
1. Film
2. People
3. Genres
4. Role
5. Users
Here is my sql
```
create table Film ( id INT(30),title varchar(30),images varchar(30),
primary key (id));
create table people(id INT(30),Fname varchar(30),Lname varchar(30),
primary key (id));
create table Role(id INT(30), name varchar(30),
primary key(id));
```
i want create relation between Film,People and Role table.SO my Question is do i need to create a table to make relation between those table and is it necessary to use auto\_increment for id column? | You'd want to create some tables like:
```
FilmRole( FilmId INT, RoleId INT) these 2 columns would make your PK and they are also FK's to their
FilmPeople (FilmId INT, PeopleId INT) respective source tables.
FilmUsers( FilmId INT, UserId INT)
```
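A sketch of such junction tables in Python's sqlite3, foreign keys included (illustrative only - column types are simplified, and SQLite stands in for MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
CREATE TABLE Film (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE People (id INTEGER PRIMARY KEY, Fname TEXT);
CREATE TABLE FilmPeople (
    FilmId INTEGER REFERENCES Film(id),
    PeopleId INTEGER REFERENCES People(id),
    PRIMARY KEY (FilmId, PeopleId)  -- two-column key; no surrogate id needed
);
INSERT INTO Film VALUES (1, 'Alien');
INSERT INTO People VALUES (7, 'Sigourney');
INSERT INTO FilmPeople VALUES (1, 7);
""")

try:
    conn.execute("INSERT INTO FilmPeople VALUES (1, 99)")  # person 99 does not exist
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)
```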
You could add a single `IDENTITY` (for SQL Server for example) column to each table if you wanted but in this particular case a 2 column PK is adequate as these tables simply point to other records. | You need to alter your table and add in a foreign key (Primary key in one table and attribute in another). Examples how to do it here! <http://www.w3schools.com/sql/sql_foreignkey.asp> | Creating relation between tables in Movie Database | [
"",
"sql",
""
] |
I have a table named 'users', a table named 'modules' and a table named 'users\_modules'.
'users' contains 'user\_id', 'first\_name', 'last\_name' and 'role'.
'modules' contains 'module\_code' and 'module\_title'.
'users\_modules' contains 'user\_id' and 'module\_code'
The 'role' column in the 'users' table specifies whether that user is a 'student', 'lecturer' or 'admin'.
I want to display to the user, which modules they are studying including the module code, module title and also the names of the lecturers of this module.
I have managed to write the following statement, which gives me the 'module\_code' and 'module\_title' for all the modules a certain user is studying.
```
SELECT users_modules.user_id,
users_modules.module_code,
modules.module_title
FROM users_modules
INNER JOIN modules
ON users_modules.module_code = modules.module_code
WHERE user_id = 2;
```
I now need to display which lecturers are enrolled for this module. Needs to be part of the above statement as I am using it within a PHP while loop to produce a HTML table. I understand I need to do another JOIN for the users table so I can get the names of the lecturers but not sure how to finish writing the code.
The code will have to check which users are enrolled in the module (using the users\_modules table) and then see if any of those users have a 'role' (a column in the users table) equal to either 'Lecturer' or 'Admin'.
---
Overview of what I want:
A result which shows a student (in this example a student with user\_id=2) what modules they are enrolled in and what lecturers teach this module.
For everyone that has an involvement within this module (either teacher or student) will have a record in 'users\_modules'.
I need a way to pull out of the 'users\_modules' table the lecturers for each module, this is done by viewing the 'role' column in the users table. | Below should get you what you need, assuming your user table has a 'user\_id' field. Also, note that it is always a good practice to alias your tables in your queries.
```
SELECT u1.user_id AS 'Student ID', um1.module_code, m1.module_title,
GROUP_CONCAT(u2.first_name) AS 'Lecturer First Name',
GROUP_CONCAT(u2.last_name) AS 'Lecturer Last Name'
FROM users u1 INNER JOIN users_modules um1 ON u1.user_id = um1.user_id
INNER JOIN modules m1 ON um1.module_code = m1.module_code
INNER JOIN users_modules um2 ON m1.module_code = um2.module_code
INNER JOIN users u2 ON um2.user_id = u2.user_id
WHERE u1.user_id = 2
AND (u2.role = 'Lecturer' or u2.role = 'Admin')
GROUP BY u1.user_id, um1.module_code, m1.module_title
;
``` | Just add another join and necessary fields:
```
SELECT users_modules.user_id,
users_modules.module_code,
modules.module_title,
users.first_name,
users.last_name
FROM users_modules
INNER JOIN modules
ON users_modules.module_code = modules.module_code
INNER JOIN users
ON user.user_id = users_modules.user_id
WHERE users_modules.user_id = 2 AND users.role IN ('Lecturer', 'Admin');
```
Or with condition on join:
```
SELECT um.user_id,
um.module_code,
m.module_title,
u.first_name,
u.last_name
FROM users_modules um
INNER JOIN modules m
ON um.module_code = m.module_code
INNER JOIN users u
ON u.user_id = um.user_id
AND u.role IN ('Lecturer', 'Admin')
WHERE um.user_id = 2;
``` | SQL: Multiple Joins and WHERE statements | [
"",
"mysql",
"sql",
""
] |
Due to version incompatibilities of my postgres database on heroku (9.1) and my local installation (8.4) I need a plain text sql database dump file so I can put a copy of my production data on my local testing environment.
It seems on heroku I can't make a dump using pg\_dump but can instead only do this:
```
$ heroku pgbackups:capture
$ curl -o my_dump_file.dump `heroku pgbackups:url`
```
...and this gives me the "custom database dump format" and not "plain text format" so I am not able to do this:
```
$ psql -d my_local_database -f my_dump_file.sql
``` | You could just make your own pg\_dump directly from your Heroku database.
First, get your postgres string using `heroku config:get DATABASE_URL`.
Look for the Heroku Postgres url (example: `HEROKU_POSTGRESQL_RED_URL: postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398`), which format is `postgres://<username>:<password>@<host_name>:<port>/<dbname>`.
Next, run this on your command line:
```
pg_dump --host=<host_name> --port=<port> --username=<username> --password --dbname=<dbname> > output.sql
```
The terminal will ask for your password then run it and dump it into output.sql.
Then import it:
```
psql -d my_local_database -f output.sql
``` | Assuming you have a `DATABASE_URL` configured in your environment, there is a far simpler method:
```
heroku run 'pg_dump $DATABASE_URL' > my_database.sql
```
This will run `pg_dump` in your container and pipe the contents to a local file, `my_database.sql`. **The single quotes are important.** If you use double quotes (or no quotes at all), `DATABASE_URL` will be evaluated locally rather than in your container.
If your whole purpose is to load the contents into a local database anyways, you might as well pipe it straight there:
```
createdb myapp_devel # start with an empty database
heroku run 'pg_dump -xO $DATABASE_URL' | psql myapp_devel
```
The addition of `-xO` avoids dumping `GRANT`, `REVOKE`, and `ALTER OWNER` statements, which probably don't apply to your local database server. If any of your `COPY` commands fail with the error `ERROR: literal carriage return found in data` (mine did), see [this answer](https://stackoverflow.com/a/16712868/166053).
It's quite possible this didn't work two and a half years ago when this question was originally asked, but for those looking for a way to easily get a dump of your Heroku Postgres database, this appears to be the simplest possible way to do this today. | How can I get a plain text postgres database dump on heroku? | [
"",
"sql",
"postgresql",
"heroku",
""
] |
I have a database like this. I want to overlay the emp\_name field based on the value found in the gender column.
```
emp_id | emp_name | gender
-------+----------+--------
501 | aaa | M
502 | bbb | F
503 | ccc | F
504 | ddd | M
505 | eee | F
```
I want output like this
```
emp_id | emp_name | gender
-------+----------+--------
501 | Mr.aaa | M
502 | Ms.bbb | F
503 | Ms.ccc | F
504 | Mr.ddd | M
505 | Ms.eee | F
```
I tried a select query like this:
```
$select overlay(emp_name placing 'Mr.' from 1 for 0 ) from emp where gender = 'M' ;
```
This is the output I got:
```
overlay
---------
Mr. aaa
Mr. ddd
(2 rows)
```
But I didn't get the expected result. Is there any way to get the expected result?
Thanks in Advance | ```
$select overlay(emp_name placing CASE gender WHEN 'M' THEN 'Mr. ' ELSE 'Ms. ' END from 1 for 0 ) from emp;
```
Not a postgresql guy, but this should work. | The query below should give the output you expect:
```
SELECT emp_id
, CASE WHEN gender = 'M' THEN 'Mr. '
WHEN gender = 'F' THEN 'Ms. '
ELSE '' END || emp_name as emp_name
, gender
FROM emp;
``` | How to use overlay() in different manner in PostgreSQL? | [
"",
"sql",
"postgresql",
""
] |
I have a table like this structure:
```
lineID lineItemPrice
1 20
2 25
3 27
4 30
4 30
4 30
```
I want to get the sum of lineItemPrice where lineId is distinct.
I am not sure what the SQL query should be. Please help.
The output should be 102. | I cant quite tell if you are looking for this:
```
select
sum(lineItemPrice), lineID
from
table
group by lineID
```
Or this:
```
select
sum(lineItemPrice)
from
(select distinct lineID, lineItemPrice from table)
```
If you want the sum of the whole table:
```
select
sum(lineItemPrice)
from
table
```
The first would give results that would sum up all the lineItemPrice's for their respective lineID's
```
lineID lineItemPrice
1 20
2 25
3 27
4 90
```
The second would sum all these distinct records giving **102** as the answer
```
lineID lineItemPrice
1 20
2 25
3 27
4 30
```
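That second form can be confirmed quickly with Python's sqlite3 (illustrative, same sample rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lines (lineID INTEGER, lineItemPrice INTEGER)")
conn.executemany("INSERT INTO lines VALUES (?, ?)",
                 [(1, 20), (2, 25), (3, 27), (4, 30), (4, 30), (4, 30)])

# Deduplicate (lineID, lineItemPrice) pairs first, then sum.
total = conn.execute("""
    SELECT SUM(lineItemPrice)
    FROM (SELECT DISTINCT lineID, lineItemPrice FROM lines)
""").fetchone()[0]
print(total)
```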
The third:
```
lineItemPrice
162
``` | Try this:
```
SELECT SUM(lineItemPrice) as TotalSum FROM
(SELECT lineItemPrice
FROM TableName
GROUP BY lineID,lineITemPrice) T
```
Result:
```
TOTALSUM
102
```
See result in [**SQL Fiddle**](http://sqlfiddle.com/#!3/f7461/12). | How to find distinct sum of a column in sql server? | [
"",
"sql",
"sql-server-2008",
""
] |
Say, I have the following two tables:
Table customer:
```
id, salutation, forename, lastname, companyID
```
and a table company:
```
Company_id, Company_name, Company_address
```
and I want to have an evaluation over all users and their company (if they belong to one)
```
salutation, forename, lastname, companyName
```
that would amount basically to a very easy script:
```
select salutation, forename, lastname, company_name
from customer, company
where companyID=Company_id;
```
The trouble now is that companyID can be null (a customer doesn't need to be part of a company). And since there is no null companyID entry in the company table, any customer who has no company ID listed is omitted by the join.
Of course I could split it into two queries, one for companyID = null and one for not null, and combine them with a UNION command, but is there perhaps something like an if statement?
something like:
```
select salutation, forename, lastname, placeholder
from customer, company
where
if companyID=null then placeholder=null
else (companyID=Company_id and placeholder=company_name);
```
?
I know there is a case statement, that can check on the field's value and return something else instead, but is there a way to combine that with a joint to another table? | You are looking for an outer join:
```
select cu.salutation, cu.forename, cu.lastname, co.company_name
from customer cu
left join company co on cu.companyID = co.Company_id;
```
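A quick Python sqlite3 check of the outer-join behavior (hypothetical rows; note the NULL company for the customer without one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company (Company_id INTEGER, Company_name TEXT);
CREATE TABLE customer (id INTEGER, lastname TEXT, companyID INTEGER);
INSERT INTO company VALUES (1, 'Acme');
INSERT INTO customer VALUES (1, 'Smith', 1), (2, 'Jones', NULL);
""")

# LEFT JOIN keeps every customer; the company columns come back NULL
# when there is no match, instead of the row being dropped.
rows = conn.execute("""
    SELECT cu.lastname, co.Company_name
    FROM customer cu
    LEFT JOIN company co ON cu.companyID = co.Company_id
    ORDER BY cu.id
""").fetchall()
print(rows)
```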
In general you should stop using the ancient implicit join syntax in the `where` clause and use an explicit `JOIN` operator. That is also the only cross-DBMS way to actually *do* an outer join (all DBMS that supported some proprietary outer join syntax have deprecated that) | Try this
```
select salutation, forename, lastname, placeholder
from customer, company
where
(companyID=null and placeholder=null )
OR
(companyID=Company_id and placeholder=company_name);
``` | SQL join beween tables with if statement | [
"",
"sql",
""
] |
```
SELECT
AssyLotNo,
AltLotName,
WfrNoDisplay,
Param.Display,
Param.Value
FROM
eMap_Data
UNPIVOT
(Value for Display in
(AltLotNumber, WfrNumber, NeedSpeedInfo, MapType, BuildList, Type, ProductGrade, AssyLotNo)
) as Param
```
The objective of the query above is to have the AssyLotNo field also included in the Display and Value columns. All fields exist in the table eMap\_Data; however, I keep getting an "invalid column name" error for the field AssyLotNo, which clearly exists in eMap\_Data.
```
eMap_Data table fields
AssyLotNo | AltLotName | AltLotNumber | WfrNoDisplay | WfrNumber | NeedSpeedInfo | MapType | BuildList | Type | ProductGrade
Desired output is as follow:
AssyLotNo AltLotName WfrNoDisplay Display Value
------------------------------------------------------------------
ABC1231 ABC123 01 AltLotName ABC123
ABC1231 ABC123 01 WfrNumber 01
ABC1231 ABC123 01 NeedSpeedInfo Y
ABC1231 ABC123 01 MapType 12
ABC1231 ABC123 01 BuildList 1,2,3,4,5,6
ABC1231 ABC123 01 Type S
ABC1231 ABC123 01 ProductGrade C
ABC1231 ABC123 01 AssyLotNo ABC1231
``` | [SQL Fiddle](http://sqlfiddle.com/#!6/e98b2/1)
**MS SQL Server 2012 Schema Setup**:
```
create table eMap_Data
(
WfrNoDisplay varchar(20),
AltLotName varchar(20),
WfrNumber varchar(20),
NeedSpeedInfo varchar(20),
MapType varchar(20),
BuildList varchar(20),
Type varchar(20),
ProductGrade varchar(20),
AssyLotNo varchar(20)
)
insert into eMap_Data values
('01', 'ABC123', '01','Y','12','1,2,3,4,5,6','S','C','ABC1231')
```
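For engines without UNPIVOT or CROSS APPLY (SQLite, older MySQL), the same reshaping can be expressed as a UNION ALL. A reduced sketch with Python's sqlite3, using only a few of the columns for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eMap_Data (AssyLotNo TEXT, AltLotName TEXT, WfrNumber TEXT, MapType TEXT)")
conn.execute("INSERT INTO eMap_Data VALUES ('ABC1231', 'ABC123', '01', '12')")

# One SELECT per column to unpivot; AssyLotNo appears both as a key
# column and as one of the unpivoted Display/Value pairs.
rows = conn.execute("""
    SELECT AssyLotNo, 'AltLotName' AS Display, AltLotName AS Value FROM eMap_Data
    UNION ALL
    SELECT AssyLotNo, 'WfrNumber', WfrNumber FROM eMap_Data
    UNION ALL
    SELECT AssyLotNo, 'MapType', MapType FROM eMap_Data
    UNION ALL
    SELECT AssyLotNo, 'AssyLotNo', AssyLotNo FROM eMap_Data
""").fetchall()
print(rows)
```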
**Query 1**:
```
select E.AssyLotNo,
E.AltLotName,
E.WfrNoDisplay,
T.Display,
T.Value
from eMap_Data as E
cross apply (values(AltLotNAme, 'AltLotNAme'),
(WfrNumber, 'WfrNumber'),
(NeedSpeedInfo, 'NeedSpeedInfo'),
(MapType, 'MapType'),
(BuildList, 'BuildList'),
(Type, 'Type'),
(ProductGrade, 'ProductGrade'),
(AssyLotNo, 'AssyLotNo')
) as T(Value, Display)
```
**[Results](http://sqlfiddle.com/#!6/e98b2/1/0)**:
```
| ASSYLOTNO | ALTLOTNAME | WFRNODISPLAY | DISPLAY | VALUE |
|-----------|------------|--------------|---------------|-------------|
| ABC1231 | ABC123 | 01 | AltLotNAme | ABC123 |
| ABC1231 | ABC123 | 01 | WfrNumber | 01 |
| ABC1231 | ABC123 | 01 | NeedSpeedInfo | Y |
| ABC1231 | ABC123 | 01 | MapType | 12 |
| ABC1231 | ABC123 | 01 | BuildList | 1,2,3,4,5,6 |
| ABC1231 | ABC123 | 01 | Type | S |
| ABC1231 | ABC123 | 01 | ProductGrade | C |
| ABC1231 | ABC123 | 01 | AssyLotNo | ABC1231 |
``` | I am always confused by the UNPIVOT syntax so I prefer CROSS APPLY/VALUES instead
```
SELECT AssyLotNo
,AltLotName
,WfrNoDisplay
,CA1.Display
,CA1.[Value]
FROM eMap_Data
CROSS APPLY (
SELECT *
FROM (VALUES ('ALotNumber', ALotNumber)
,('WfrNumber', WfrNumber)
,('NeedSpeedInfo', NeedSpeedInfo)
,...
,('AssyLotNo', AssyLotNo)
) AS X(Display, [Value])
) AS CA1
``` | Unpivot SQL statement | [
"",
"sql",
"sql-server",
"unpivot",
""
] |
Suppose I have a table with 2 fields: `Number` and `Description`, I want to get the description from a number I request.
For example: I request the number 22. But in the database, there is no row matching the value 22.
I need to write a query that considers values **in the range from -5 to +5** around that number and **returns the nearest one** present in the database. That is, I want to get `"rty"` as the description when I request 22.
```
Table
Num |Description
--------+-----------
25 |ASD
18 |wert
21 |rty
``` | This should ideally work. Will return a cursor with all the results from the table.
```
db.rawQuery("SELECT Description FROM Table_Name WHERE Num BETWEEN "+(inputNumber-Range)+" AND "+(inputNumber+Range)+" ORDER BY (Num - "+inputNumber+")", null);
```
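The BETWEEN-plus-ORDER BY idea is easy to try outside Android with plain Python sqlite3, ordering by the absolute distance so the nearest match comes first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Num INTEGER, Description TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(25, "ASD"), (18, "wert"), (21, "rty")])

target, rng = 22, 5
# Restrict to the +/- range, then order by |Num - target| and take the closest.
row = conn.execute(
    "SELECT Description FROM t WHERE Num BETWEEN ? AND ? "
    "ORDER BY ABS(Num - ?) LIMIT 1",
    (target - rng, target + rng, target),
).fetchone()
print(row[0])
```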
**Edit:** I am glad the above query works for you. But I think this query is the correct answer to the question you specifically asked:
```
db.rawQuery("SELECT Description FROM Table_Name WHERE Num BETWEEN "+(inputNumber-Range)+" AND "+(inputNumber+Range)+" ORDER BY ABS(Num - "+inputNumber+")", null);
```
Taking the absolute value to avoid negative differences | Use like this method
```
public Cursor get_result(){
Cursor c;
String sql = "SELECT * FROM " + tableName + " WHERE _id BETWEEN " + value1 + " AND " + value2;
c = productDatabase.rawQuery(sql, null);
}
``` | How to write A Range Query in Sqlite Android? | [
"",
"android",
"sql",
"sqlite",
""
] |
I've got a table in my database that looks something like this:
ID | datetime | value1 | value2 | value3
Now I want to select the datetime, value1, value2 and value3 where value1 is the lowest (MIN(value1)) per day. An example below (with arrows indicating the rows that need to be selected):
```
ID | datetime | value1 | value2 | value3
1 | 2014-01-01 00:06:00 | 10 | 15 | 20
2 | 2014-01-01 00:12:00 | 5 | 15 | 22 <--
3 | 2014-01-01 00:18:00 | 12 | 45 | 25
4 | 2014-01-02 00:06:00 | 1 | 25 | 10 <--
5 | 2014-01-02 00:12:00 | 12 | 45 | 25
6 | 2014-01-02 00:18:00 | 13 | 25 | 10
7 | 2014-01-03 00:06:00 | 17 | 35 | 95
8 | 2014-01-03 00:12:00 | 15 | 75 | 10
9 | 2014-01-03 00:18:00 | 12 | 75 | 35 <--
```
I have the following query:
```
SELECT
datetime,
MIN(value1) as value1,
value2,
value3
FROM
table
GROUP BY
DAY(datetime)
ORDER BY
datetime ASC
LIMIT 50
```
This gives me a result, however not what I was expecting. The datetime and value2, value3 do not come from the row where the daily minimum is found.
Can someone explain me what I've done wrong?
Furthermore: would I be able to combine this (with a selfjoin or something) where I would get the minimum value of the day as above, and from the resulting result set I would then only select the rows where for example value2 > 20 && value2 < 60 && value3 > 10?
Many thanks in advance | ```
select
*
from
Table1 t
inner join (select date(datetime) as dt, min(value1) v from Table1 group by dt) st
on date(t.datetime) = st.dt and t.value1 = st.v
```
* see it working live in an `sqlfiddle`
* see plenty more examples how to do it in this manual entry: [The Rows Holding the Group-wise Maximum of a Certain Column](http://dev.mysql.com/doc/refman/5.5/en//example-maximum-column-group-row.html) | Try this,
```
SELECT
datetime,
MIN(value1) as value1,
value2,
value3
FROM
table
GROUP BY
DAY(datetime)
ORDER BY
value1 ASC
LIMIT 50
``` | Select data from daily lowest value | [
"",
"mysql",
"sql",
""
] |
I'm trying to drop all the logins from SQL server except the default built-in SQL server logins but I'm unable to drop the **<domain>\administrator** account. It gives me following error:
> Server principal '<domain>\administrator' has granted one or more
> permission(s). Revoke the permission(s) before dropping the server
> principal.
I tried checking the permission assigned to this user using this query :
```
Select *
from sys.server_permissions
where grantor_principal_id =
(Select principal_id
from sys.server_principals
where name = N'<domain>\administrator')
```
This query returns only one record corresponding to an end-point as below:
```
class class_desc major_id minor_id grantee_principal_id grantor_principal_id type permission_name state state_desc
105 ENDPOINT 65536 0 269 259 CO CONNECT G GRANT
```
But when I try to check the rights assigned to this user on all of the existing end-points, I find none have any kind of permissions for the user I'm trying to delete.
I'm not sure what is happening and where to look for to drop this user. | I was able to solve this issue. There were following issues which were not allowing me to drop the **<Domain>\administrator** login from SQL server:
1. Owner of **ReportServer** and **ReportServerDB** databases was **<Domain>\administrator** user
2. Owner of **ConfigMgrEndPoint** end-point was also **<Domain>\administrator** user.
I changed the ownership of all the SQL objects mentioned above, making the **sa** user their new owner. Then I was successfully able to drop the **<Domain>\administrator** user. I also got the following expert comment from one of my colleagues who was helping me with this issue:
> Keeping [sa] as a default owner for most sql objects is a standard
> practice. Making a domain user as owner of SQL objects can affect the
> working later on if that user no longer exists or is disabled in the Active Directory at any point of time | to find out what are the permissions that are preventing the dropping of the login
I am using [this](https://www.sqlservercentral.com/blogs/revoke-the-permissions-before-dropping-the-server-principal) script:
```
SELECT @@SERVERNAME,@@SERVICENAME
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
DECLARE @GrantorName nvarchar(4000)
SET @GrantorName = 'xxx\the_login' /* Login in Question */
SELECT b.name as Grantor
, c.name as Grantee
, a.state_desc as PermissionState
, a.class_desc as PermissionClass
, a.type as PermissionType
, a.permission_name as PermissionName
, a.major_id as SecurableID
FROM sys.server_permissions a
JOIN sys.server_principals b
ON a.grantor_principal_id = b.principal_id
JOIN sys.server_principals c
ON a.grantee_principal_id = c.principal_id
WHERE grantor_principal_id =
(
SELECT principal_id
FROM sys.server_principals
WHERE name = @GrantorName
)
```
and sometimes [this](https://dba.stackexchange.com/q/166140/22336) one:
```
--Check to see if they own the endpoint itself:
SELECT SUSER_NAME(principal_id) AS endpoint_owner ,name AS endpoint_name
FROM sys.database_mirroring_endpoints;
--If so, you'll need to change the endpoint owner. Say the endpoint is called Mirroring, and you want to change the owner to SA:
--ALTER AUTHORIZATION ON ENDPOINT::Mirroring TO sa;
```
or by following [these](https://www.sqlserver-dba.com/2013/08/msg-15173-login-has-granted-one-or-more-permissions-revoke-the-permissions-before-dropping-the-login.html) instructions:
```
--1) Check to see if this logon only has server level permissions and check to see
--if this login has granted permissions to another server principal.
--Use this query to identify the permissions granted.
Select perm.* from sys.server_permissions perm
INNER JOIN sys.server_principals prin ON perm.grantor_principal_id = prin.principal_id
where prin.name = 'xxx\the_login' /* Login in Question */
--2) The permissions granted will need to be revoked , to allow the DROP LOGIN to complete.
--The permissions can be granted again by a suitable LOGIN.
```
there is also a very nice article related to this:
[Drop Login issues for logins tied to SQL Server Availability Groups](https://www.mssqltips.com/sqlservertip/5201/drop-login-issues-for-logins-tied-to-sql-server-availability-groups/) | Error dropping or deleting a user from SQL Server 2012 | [
"",
"sql",
"sql-server",
"security",
"permissions",
"endpoint",
""
] |
I'd need some help here. I'm stuck on this SQL query.
I have 2 tables, team and function. Team contains `ID`,`Teamname`. Function contains `ID`,`Role`.
Role decides whether a person is a player or a coach (0 for player, 1 for coach).
I am supposed to list all teams that exist, but only players should be counted, and even teams with no players should be listed too. There are teams that only have coaches, for example.
I can't find any way to count everyone within a team.
This is the closest I've come:
```
select teamname,
count(*)
from function left outer join team on team.id = function.id
WHERE role=0
group by teamname;
``` | Here's one way to get the result:
```
SELECT t.teamname
, COUNT(1) AS player_count
FROM team t
LEFT
JOIN function f
ON f.id = t.id
AND f.role = 0
GROUP BY t.teamname;
```
-- or --
```
SELECT t.teamname
, IFNULL(SUM(f.role=0),0) AS player_count
FROM team t
LEFT
JOIN function f
ON f.id = t.id
GROUP BY t.teamname;
```
---
These queries use the `team` table as the driver, so we get rows from `team` even when there isn't a matching row in function. In the first query, we include the role=0 condition as a join predicate, but it's an outer join. In the second query, it's still an outer join, but we only "count" rows that have role=0.
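As a quick illustration (an aside, using Python's sqlite3 with toy data shaped like these tables), moving the role filter into the join predicate is exactly what keeps the zero-player teams in the result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE team (id INTEGER PRIMARY KEY, teamname TEXT);
    CREATE TABLE "function" (id INTEGER, role INTEGER);
    INSERT INTO team VALUES (1, 'Reds'), (2, 'Blues');
    -- Reds has two players; Blues has only a coach
    INSERT INTO "function" VALUES (1, 0), (1, 0), (2, 1);
""")

# role=0 lives in the ON clause, so Blues still produces a row with count 0
rows = con.execute("""
    SELECT t.teamname, COUNT(f.id) AS player_count
    FROM team t
    LEFT JOIN "function" f ON f.id = t.id AND f.role = 0
    GROUP BY t.teamname
    ORDER BY t.teamname
""").fetchall()
print(rows)  # [('Blues', 0), ('Reds', 2)]
```

If the same `role = 0` condition were moved into a WHERE clause instead, the Blues row would disappear entirely, which is the problem the asker ran into.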
**NOTE** It looks as if something is missing from your model; the `id=id` join predicate looks odd; it's odd that we'd have a column named `id` in the function table which is a foreign key referencing `team.id`.
Normally, `id` is the surrogate primary key in a table, and it's unique in that table. If it were unique in the `function` table, then that implies a one-to-one relationship between `team` and `function`, and the query will never return a count greater than 1.
Again, I strongly suspect that the "member" entity is missing from the model.
What we'd expect is something like this:
```
team
id PK
teamname
member
id PK
team_id FK references team.id
role 0=player, 1=coach
```
such that the query would look something like this:
```
SELECT t.teamname
, IFNULL(SUM(m.role=0),0) AS player_count
, IFNULL(SUM(m.role=1),0) AS coach_count
FROM team t
LEFT
JOIN member m
ON m.team_id = t.id
``` | You can't select a single column when an aggregate function exists in the `SELECT` clause.
To get a count of all rows in the table *and get the name of the team*, you must use only aggregated columns or functions in the `SELECT` clause. For example:
```
SELECT COUNT(*), team.teamname
FROM function LEFT OUTER JOIN team ON team.id = function.id
WHERE function.role = 0
GROUP BY team.teamname
```
Gives you a total of all rows where role is equal to 0 *plus the name of the team*. | SQL - Teams and Their Player Count | [
"",
"sql",
"sqlite",
""
] |
We are using php/postgresql for process control at our company.
The company is repairing units, and the repair process is followed by the process control system.

The unit table stores the information about the individual units.
* unit.id : the id of the unit
* unit.sn : the serial number of the unit
The unit\_process table stores every process step the unit went through in its lifetime.
* unit\_process.id : the id of the process step
* unit\_process.unit\_id : the id of the unit
* unit\_process.process\_id : the id of the process.
(the real tables have more columns)
An SQL query to find a unit's process steps in chronological order:
```
SELECT process_id
FROM unit
INNER JOIN unit_process on unit.id=unit_process.unit_id
ORDER BY unit_process.id
```
**I would like to create a query which finds all process steps which happened before process steps that have a specific process\_id**
So I have a process\_id, I need to find all unit\_process rows that happened just before a process which has that process\_id
Currently I am using about the worst method I can imagine.
For example let's say, I need all previous process steps which happened before processes that has process\_id=17
First, I list the units which have processes with process\_id=17
```
SELECT unit.id
FROM unit
INNER JOIN unit_process on unit.id=unit_process.unit_id
WHERE process_id=17
```
Then I store them in a php array.
Secondly, I use a php foreach which contains the following query for every unit id I got from the previous query.
```
SELECT id,process_id
FROM unit_process
WHERE unit_id=$unit_id
ORDER BY id
```
After this, I could easily find out using php what process was before the process\_id=17 process step, since I just have to find the biggest id value which is still lower than the (process\_id=17)'s.
It could use up to thousands of queries in that foreach, and I want to change that, but I don't have enough SQL knowledge to do it on my own.
Is it possible to do this in a single query?
Sample input
greens are needed, yellows are the process\_id=17 units. Notice that there is a process\_id=17 which is also needed:

Sample output:

I hope I didn't mess up the samples; I just came up with these as an example. | I am not familiar with PostgreSQL, so my answer uses SQL Server syntax:
```
Declare @fkProcessID int = 17
select ab.unit_id,ab.id from unit_process as a
outer apply
(
select top 1 * from unit_process as b
where a.id > b.id
order by b.id desc
)as ab
where a.process_id = @fkProcessID
order by ab.unit_id
```
For **[DEMO](http://sqlfiddle.com/#!3/c1842/1)**
For PostgreSQL: just without the variable declaration, but all else is working.
```
select ab.unit_id,ab.id from unit_process as a
cross join LATERAL
(
select * from unit_process as b
where a.id > b.id
order by b.id desc
LIMIT 1
)as ab
where a.process_id = 17
order by ab.unit_id;
```
[**new DEMO**](http://sqlfiddle.com/#!15/b6b0d/8)
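As an illustrative aside (not part of the original answer): the same previous-row lookup can also be sketched with the `LAG` window function, shown here via Python's sqlite3 on made-up rows, partitioning per unit (this assumes SQLite 3.25+ for window-function support):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE unit_process (id INTEGER PRIMARY KEY, unit_id INTEGER, process_id INTEGER);
    INSERT INTO unit_process VALUES
        (1, 10, 5), (2, 10, 17),               -- unit 10: step 5, then step 17
        (3, 11, 17), (4, 11, 9), (5, 11, 17);  -- unit 11: 17, 9, 17
""")

# For every step with process_id = 17, find the step just before it (per unit)
rows = con.execute("""
    SELECT unit_id, prev_process
    FROM (SELECT id, unit_id, process_id,
                 LAG(process_id) OVER (PARTITION BY unit_id ORDER BY id) AS prev_process
          FROM unit_process)
    WHERE process_id = 17
    ORDER BY id
""").fetchall()
print(rows)  # [(10, 5), (11, None), (11, 9)]
```

The `None` shows a process_id=17 step that had no predecessor at all, which the report code would need to handle.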
Hope it helps you! | I think you are looking for lead/lag window functions. See an experts explanation here: [How to compare the current row with next and previous row in PostgreSQL?](https://stackoverflow.com/questions/7974866/how-to-compare-the-current-row-with-next-and-previous-row-in-postgresql) | find the process of a unit that happened before a specific process step | [
"",
"sql",
"postgresql",
""
] |
How can I do something like the opposite of a join? For example, from these two tables select values from table **alice** that are not in table **bob**:
```
alice:
id|name
--+----
1 |one
2 |two
3 |three
6 |six
7 |seven
bob:
id|a_id
--+----
15|1
16|2
17|3
```
to get this:
```
result:
name
----
six
seven
``` | The first instinct is to use subquery, combined with `NOT IN`:
```
SELECT name
FROM alice
WHERE id NOT IN (SELECT a_id
FROM bob);
```
However, [a little more efficient](http://explainextended.com/2009/09/16/not-in-vs-not-exists-vs-left-join-is-null-postgresql/) way is to use `LEFT JOIN`:
```
SELECT a.name
FROM alice a
LEFT JOIN bob b
ON a.id = b.a_id
WHERE b.a_id IS NULL;
``` | This is called an [anti-join](http://en.wikipedia.org/wiki/Relational_algebra#Antijoin_.28.E2.96.B7.29).
The general idea is to do a left, right, or full outer join, and filter to find only rows where the outer side is null.
For your example case, that'd be a left anti semi join:
```
select a.id, a.name
from alice a
left outer join bob b on (a.id = b.a_id)
where b.id is null;
```
but it's also possible to find mismatches on both sides with a full outer join:
```
select a.id, a.name
from alice a
full outer join bob b on (a.id = b.a_id)
where b.id is null
or a.id is null;
```
For the left anti join approach, you can instead use `not exists`:
```
select a.id, a.name
from alice a
where not exists (select 1 from bob b where b.a_id = a.id);
```
though in practice PostgreSQL will transform this into join form anyway.
It's possible to use `not in` instead:
```
select a.id, a.name
from alice a
where a.id not in (select b.a_id from bob b);
```
but:
* You have to make sure there can be no nulls in the subquery result, because `1 not in (2, null)` is `null` not `true`;
* It can be much less efficient
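The first bullet is easy to trip over; a quick illustration (an aside, via Python's sqlite3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE alice (id INTEGER, name TEXT);
    CREATE TABLE bob (id INTEGER, a_id INTEGER);
    INSERT INTO alice VALUES (1, 'one'), (6, 'six');
    INSERT INTO bob VALUES (15, 1), (16, NULL);  -- note the NULL a_id
""")

# NOT IN silently returns nothing once the subquery yields a NULL...
not_in = con.execute(
    "SELECT name FROM alice WHERE id NOT IN (SELECT a_id FROM bob)").fetchall()

# ...while the anti-join still finds the unmatched row
anti = con.execute("""
    SELECT a.name
    FROM alice a
    LEFT JOIN bob b ON a.id = b.a_id
    WHERE b.id IS NULL""").fetchall()

print(not_in, anti)  # [] [('six',)]
```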
so in general using an anti-join or exists subquery is strongly preferred. | SQL select rows that dont have a join pair | [
"",
"sql",
"postgresql",
"join",
""
] |
I have some messy data in a MySQL table that needs cleaning up. Everything should be either 3 or 4 characters, and those that are two characters need to have a leading zero.
Example Messy Data:
```
23
5351
142
91
4911
1514
53
931
984
```
Which should be:
```
023
5351
142
091
4911
1514
053
931
984
```
I've read that I should use either ZEROFILL or LPAD, but as mine vary in length (3-4) I'm not sure if this is the best method; I worry that 4-character data will be cut short or similar. I would appreciate some direction - thank you.
EDIT: I should add, the Type for this column is mediumint(2). I am free to change this if required. | Use ZEROFILL as it would meet your requirement of not truncating your data when the length is more than 3. It will only fill 0 if it's shorter than 3.
[What is the benefit of zerofill in MySQL?](https://stackoverflow.com/questions/5256469/what-is-the-benefit-of-zerofill-in-mysql) | > *I worry that 4 character data will be cut short or similar.*
No, data will not be cut, unless you specify a padding length less than the current length of the data.
Using `LPAD`, the given length is ensured.
**Example**:
```
mysql> SELECT LPAD('hi',4,'??');
-> '??hi' -- result is a string of length 4
-- result data will be cut if padding is less than data length
mysql> SELECT LPAD('hi',1,'??');
-> 'h' -- result is a string of length 1
```
On your data you can specify a length equal to the maximum length of the field data.
If your data supports a max length of 4 digits, then
Try this:
```
SELECT field_name fv, LPAD( field_name, 4, '0' ) lfv from my_table;
```
It would result as below:
```
+------+------+
| fv | lfv |
+------+------+
| 23 | 0023 |
| 5351 | 5351 |
| 142 | 0142 |
| 91 | 0091 |
| 4911 | 4911 |
| 1514 | 1514 |
| 53 | 0053 |
| 931 | 0931 |
| 984 | 0984 |
+------+------+
```
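(An illustrative aside, not from the original answer: the desired output in the question pads to a *minimum* of 3 digits and never truncates, which is exactly the behavior of Python's `str.zfill`; that makes it a handy way to sanity-check the expected result.)

```python
data = ["23", "5351", "142", "91", "4911", "1514", "53", "931", "984"]
padded = [v.zfill(3) for v in data]  # pad to at least 3 chars, never truncate
print(padded)
# ['023', '5351', '142', '091', '4911', '1514', '053', '931', '984']
```

With a fixed LPAD length of 4, the 3-digit values would come out as `0142` etc., so pick the padding length to match the smallest width you actually want.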
**Refer to**: [***Documentation on LPAD***](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_lpad) | Adding leading zero to MySQL data | [
"",
"mysql",
"sql",
""
] |
I'm trying to implement a check constraint on a key field. The key field is composed of a 3 character prefix, and then appended with numeric characters (which can be provided manually, but the default is to get an integer value from a sequence, which is then cast as nvarchar). The key field is defined as nvarchar(9).
I'm doing this for multiple tables, but here is a specific example below to demonstrate:
Table name: Company
Key field: IDCompany
Key field prefix: CMP
Examples of valid keys -
```
CMP1
CMP01
CMP10000
CMP999999
```
Examples of invalid keys -
```
CMPdog1
steve
1CMP1
1
999999999
```
The check constraint I came up with was:
```
IDCompany LIKE 'CMP%[0-9]'
```
However, this is beaten by CMPdog1 etc.
What should I be using as a check constraint to enforce an unknown number of numeric characters?
I could do the following:
```
IDCompany LIKE 'CMP[0-9]' OR IDCompany LIKE 'CMP[0-9][0-9]' OR .... through to 6 characters
```
But, this seems like a clunky way of doing it, is there something smarter?
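(Aside, for illustration: outside of T-SQL's limited LIKE patterns, the rule being enforced here is just the regular expression `^CMP[0-9]{1,6}$`, assuming up to 6 digits given the nvarchar(9) length minus the 3-character prefix; a quick check of the question's examples in Python:)

```python
import re

pattern = re.compile(r"^CMP[0-9]{1,6}$")

valid = ["CMP1", "CMP01", "CMP10000", "CMP999999"]
invalid = ["CMPdog1", "steve", "1CMP1", "1", "999999999"]

ok = [bool(pattern.match(v)) for v in valid]
bad = [bool(pattern.match(v)) for v in invalid]
print(ok, bad)  # [True, True, True, True] [False, False, False, False, False]
```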
EDIT 1: This solution ended up working for me:
```
IDCompany nvarchar(9) NOT NULL CONSTRAINT DEF_Company_IDCompany DEFAULT 'CMP' + CAST((NEXT VALUE FOR dbo.sq_Company) AS nvarchar) CONSTRAINT CHK_Company_IDCompany CHECK (IDCompany LIKE 'CMP%[0-9]' AND ISNUMERIC(SUBSTRING(IDCompany,4,LEN(IDCompany)-3))=1)
```
EDIT 2: This actually doesn't work; it does not exclude negative numbers.
EDIT 3: Solution -
As proposed in Szymon's post below.
Thanks all! | You could do something like that:
```
where LEFT(IDCompany, 3) = 'CMP'
and isnumeric(RIGHT(IDCompany, len(IDCompany) - 3)) = 1
and IDCompany not like '%[.,-]%'
```
The first part checks that it starts with CMP
The next part is to make sure that the rest is numeric but excluding negative and decimal numbers. | Well, I would reconsider the design of your table and create 3 columns:
* **prefix**, CHAR(3), with a default as 'CMP' and a constraint to allow only 'CMP' combination
* **id**, INTEGER
* **companyid**, NVARCHAR(9), a computed, persisted column as sum of the first 2 columns. Most probably with an index on. | SQL Server - Like/Pattern Matching | [
"",
"sql",
"sql-server",
"regex",
""
] |
I need a query to select the full list from 2 remote databases which are structurally identical, but on different server machines.
How to use `sp_addlinkedserver` to query the 2 databases in the same query?
To get something like this, each database requires the same user name and password:
```
SELECT
[Pays]
FROM
/// [db1].[dbo].[liste_pays] Union [db1].[dbo].[liste_pays]///
```
They even have the same name but different data inside | Suppose:
You have a Server SR1, Database DB1 on Schema SC1 Table tbl1 and column Pays.
You have a Server SR2, Database DB1 on Schema SC1 Table tbl1 and column Pays.
First you will need to add a linked server relationship between SR1 and SR2.
Once that is established, you can query something like this:
```
SELECT ABC.Pays FROM SR1.DB1.SC1.Tbl1 ABC
UNION ALL
SELECT DEF.Pays FROM SR2.DB1.SC1.Tbl1 DEF
``` | [Click Here](http://www.kodyaz.com/sql-server-2012/how-to-add-sql-server-linked-server.aspx) for a simple tutorial about how to create a linked server.
After creating the linked server, we can query it as follows:
```
select * from LinkedServerName.DatabaseName.dbo.TableName
``` | How to query 2 identical SQL Server databases on different server machines | [
"",
"sql",
"sql-server",
""
] |
This is the first time I have come across the problem of long query execution times. The problem is actually pretty big because the query takes more than 20 seconds, which is highly visible to the end user.
I have a quite large database of `topics` (~8k); topics have parameters, which are dictionary-coded (I have 113 different parameters for the 8k topics).
I would like to show a report about the number of repetitions of those topics.
```
topic table:
----------------+---------+-----------------------------------------------------
id | integer | nextval('topic_id_seq'::regclass)
topicengine_id | integer |
description | text |
topicparam_id | integer |
date | date |
topicparam table:
----------------+---------+----------------------------------------------------------
id | integer | nextval('topicparam_id_seq'::regclass)
name | text |
```
and my query:
```
select distinct tp.id as tpid, tp.name as desc, (select count(*) from topic where topic.topicparam_id = tp.id) as count, t.date
from topicparam tp, topic t where t.topicparam_id =tp.id
Total runtime: 22372.699 ms
```
fragment of result :
```
tpid | topicname | count | date
------+---------------------------------------------+-------+---------
3823 | Topic1 | 6 | 2014-03-01
3756 | Topic2 | 14 | 2014-03-01
3803 | Topic3 | 28 | 2014-04-01
3780 | Topic4 | 1373 | 2014-02-01
```
Is there any way to optimize the execution time of this query? | A simple GROUP BY should do the same thing (if I understood your query correctly):
```
select tp.id as tpid,
max(tp.name) as desc,
count(*) as count,
max(t.date) as date
from topicparam tp
join topic t on t.topicparam_id = tp.id
group by tp.id;
```
---
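As an illustrative aside (Python's sqlite3 on toy rows), this is the correlated-subquery-to-GROUP-BY rewrite in miniature; one aggregating pass replaces the per-row count subquery (the date column is called `d` here, since `date` is a reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE topicparam (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE topic (id INTEGER PRIMARY KEY, topicparam_id INTEGER, d TEXT);
    INSERT INTO topicparam VALUES (1, 'Topic1'), (2, 'Topic2');
    INSERT INTO topic VALUES
        (1, 1, '2014-03-01'), (2, 1, '2014-03-01'), (3, 2, '2014-02-01');
""")

rows = con.execute("""
    SELECT tp.id, MAX(tp.name), COUNT(*), MAX(t.d)
    FROM topicparam tp
    JOIN topic t ON t.topicparam_id = tp.id
    GROUP BY tp.id
    ORDER BY tp.id
""").fetchall()
print(rows)  # [(1, 'Topic1', 2, '2014-03-01'), (2, 'Topic2', 1, '2014-02-01')]
```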
Btw: `date` is a horrible name for a column. For one reason because it's also a reserved word, but more importantly because it does not document what the column contains. A "start date", an "end date", a "due date", a "recording date", a "publish date", ...? | For me `DISTINCT` + `SUBQUERY` are killing your performance.
You should use `GROUP BY` both ways: to "distinct" your data and to "count" it.
```
SELECT
tp.id as tpid
, tp.name as description
, count(*) as numberOfTopics
, t.date
FROM
topicparam tp
INNER JOIN topic t
ON t.topicparam_id = tp.id
GROUP BY
tp.id
, tp.name
, t.date
```
Considering the bulk of data, you have to pay attention to indexes:
In this case, use indexes on `topicparam.id` and `topic.id`.
Remove indexes on columns that are never used in join clauses.
Try not to use SQL reserved words like "date, desc, count" for aliases or table fields. | Optimize time of execution of PSQL query | [
"",
"sql",
"postgresql",
"postgresql-performance",
""
] |
I am trying to build a small rails app to teach SQL to beginners in a game format. Part of it is a command line where the user tries to solve problems with SQL. On the backend I’d like to run their SQL command against a test database, and check the result to see if their command was correct.
My current idea is to create SQLite databases for each stage of the lesson, and then create a copy of that database for each user when they reach that stage.
I think that will work but I am concerned about efficiency.
My question is whether there is an efficient way to have users run SQL commands (including `drop`, `alter`, etc.), get the result, but not actually make any changes to the db itself, in essence a dry run of arbitrary SQL commands. I am familiar with SQLite and Postgres but am open to other databases. | In Postgres, you can do much with **transactions** that are **rolled back** at the end:
```
BEGIN;
UPDATE foo ...:
INSERT bar ...;
SELECT baz FROM ...;
CREATE TABLE abc...; -- even works for DDL statements
DROP TABLE def...;
ALTER TABLE ghi ...:
ROLLBACK; -- !
```
Just don't `COMMIT`!
Read the manual about [`BEGIN`](https://www.postgresql.org/docs/current/sql-begin.html) and [`ROLLBACK`](https://www.postgresql.org/docs/current/sql-rollback.html).
Be aware that some things cannot be rolled back, though. For instance, sequences don't roll back. Or some special commands like [dblink](https://www.postgresql.org/docs/current/dblink.html) calls.
And some commands cannot be run in a transaction with others. Like `CREATE DATABASE` or `VACUUM`.
Also, there may be side effects with concurrent load, like deadlocks. Unlikely, though. You can set the [transaction isolation level](https://www.postgresql.org/docs/current/mvcc.html) to your requirements to rule out any side effects (at some cost to performance).
I would not do this with sensitive data. The risk of committing by accident is too great. And letting users execute arbitrary code is a risk that is hardly containable. But for a training environment, that should be good enough.
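(An illustrative aside, not from the original answer: the same begin/check/rollback pattern in miniature via Python's sqlite3; like Postgres, SQLite rolls back DML inside an uncommitted transaction.)

```python
import sqlite3

# autocommit mode so we can issue BEGIN/ROLLBACK ourselves
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE foo (baz TEXT)")

con.execute("BEGIN")
con.execute("INSERT INTO foo VALUES ('student answer')")
inside = con.execute("SELECT COUNT(*) FROM foo").fetchone()[0]  # change visible here
con.execute("ROLLBACK")
after = con.execute("SELECT COUNT(*) FROM foo").fetchone()[0]   # ...and gone again
print(inside, after)  # 1 0
```

Inside the transaction the student's change can be inspected and graded; after ROLLBACK the database is untouched.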
Back it up with a **template database**. If something should go wrong, that's the fastest way to restore a basic state. Example (look at the last chapter):
* [Truncating all tables in a Postgres database](https://stackoverflow.com/questions/2829158/truncating-all-tables-in-a-postgres-database/12082038#12082038)
This can also be used as brute force **alternative**: to provide a pristine new database for each trainee. | While you can run their code within a transaction and then attempt to check the results within the same transaction before rolling it back, that'll have some challenges.
First, if there's any error in their code, the transaction will abort. You won't be able to tell anything more than "It failed, here's the error message". In particular, you won't be able to check partial results before the error.
Second, if you're doing parallel runs in a single database, locking will bite you. Each session will obtain locks, often exclusive locks on tuples or whole relations. Other sessions will block waiting on the lock. This means you'll have to do your test runs serially. That might not be an issue in your environment, but bears thinking about.
Third, as Erwin notes, you'll have to make your check code insensitive to things like sequence values because they're not reset on rollback.
For all these reasons, I strongly recommend just:
```
CREATE DATABASE course_username_lesson TEMPLATE course_lesson OWNER username;
```
... or omit the use of template databases and just let your Rails app set up a blank database using migrations for each user. This may be simpler, as it means you don't have to maintain the template DBs.
A PostgreSQL server is quite happy with many hundreds of databases, at least if it's not expected to perform at peak production loads.
You may want to check out <http://sqlfiddle.com/> and a number of other existing web-based SQL editor/training tools, many of which are open source. | Can I dry run/sandbox SQL commands? | [
"",
"sql",
"ruby-on-rails",
"database",
"sqlite",
"postgresql",
""
] |
I have a table like below:

And the delete condition is:
delete from Employee only if the date is smaller than a specific date and no record is larger than that date
e.g.
* if the date is 3/8/2014, then only record with EmployeeID 3 will be removed as the record with EmployeeID 4 has date larger than 3/8/2014, and EmployeeID 5 won't be removed as the date is 3/9/2014
* if the date is 3/9/2014, then record with EmployeeID 3 and 5 will be removed, as the record with EmployeeID 4 has date larger than 3/9/2014
At first, I tried
```
delete from Employee where Date > @Date
```
But the above SQL would delete all records wherever the date is smaller than the @Date
What amendments should be made to the above SQL? | Try this:
```
DELETE FROM TableName
WHERE EmployeeID IN
(SELECT EmployeeID FROM TableName
GROUP BY EmployeeID
HAVING MAX(DATE)<=@Date)
```
Tested and verified.
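As an illustrative aside (Python's sqlite3 on toy rows, with ISO date strings standing in for the Date column), this is the HAVING MAX(...) pattern in action:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employee (EmployeeID INTEGER, D TEXT);
    INSERT INTO Employee VALUES
        (3, '2014-03-07'),
        (4, '2014-03-07'), (4, '2014-03-10'),  -- employee 4 has a record after the cutoff
        (5, '2014-03-09');
""")

cutoff = '2014-03-08'
con.execute("""
    DELETE FROM Employee
    WHERE EmployeeID IN (SELECT EmployeeID
                         FROM Employee
                         GROUP BY EmployeeID
                         HAVING MAX(D) <= ?)""", (cutoff,))

remaining = sorted({r[0] for r in con.execute("SELECT EmployeeID FROM Employee")})
print(remaining)  # [4, 5] -- only employee 3 was removed
```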
See an example in [**SQL Fiddle**](http://sqlfiddle.com/#!3/24dc09/3). | Try this:
```
delete from Employee
where EmployeeID in
(select EmployeeID
from Employee
group by Employeeid
having max(Date) < @Date)
``` | SQL Delete specific records with condition | [
"",
"sql",
"sql-server",
""
] |
I have been working on this issue for 2 days now.
I have two tables created by using SQL Select statements
```
SELECT (
) Target
INNER JOIN
SELECT (
) Source
ON Join condition 1
AND Join condition 2
AND Join condition 3
AND Join condition 4
AND Join condition 5
```
The target table has count value of 10,000 records.
The source table has count value of 10,000 records.
but when I do an inner join between the two tables on the 5 join conditions
I get 9573 records.
I am basically trying to find a one to one match between source and target table. I feel every field from target matches every field in source.
Questions:
1. Why does my inner join give fewer records even if there are the same number of records in both tables?
2. If it is expected, how can I make sure I get the exact 10,000 records after the join condition? | 1) An INNER JOIN only outputs the rows from the JOINING of two tables where their joining columns match. So in your case, Join Condition 1 may not exist in rows in both tables, and therefore some rows are filtered out.
2) As the other poster mentioned, a left join is one way. You need to decide which table (source or target) you want to use as your master, i.e. start from and return all of its rows. You then left join the remaining table based on your conditions to add all the columns where your join conditions match.
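(An illustrative aside for point 1, via Python's sqlite3: two tables with the same row count still lose rows in an inner join whenever a key has no partner.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target (k INTEGER);
    CREATE TABLE source (k INTEGER);
    INSERT INTO target VALUES (1), (2), (3);
    INSERT INTO source VALUES (1), (2), (4);  -- same count, one key differs
""")

inner = con.execute(
    "SELECT COUNT(*) FROM target t JOIN source s ON s.k = t.k").fetchone()[0]
left = con.execute(
    "SELECT COUNT(*) FROM target t LEFT JOIN source s ON s.k = t.k").fetchone()[0]
print(inner, left)  # 2 3 -- the left join keeps every target row
```

NULLs in any of the join columns have the same effect: a NULL never equals anything, so those rows drop out of an inner join too.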
It's probably better if you give us the tables you are working on and the query/results you are trying to achieve. | There are some really good articles about the different joins out there. But it looks like you'd be interested in left joins. So if it exists in Target, but not in Source, it will not drop the record.
So, it would be:
```
SELECT(...) Target
LEFT OUTER JOIN
SELECT(...) Source
ON cond1 and cond2 and cond3 and cond4 and cond5
```
Give that a shot and let me know how it goes! | Inner join between two tables with same count values | [
"",
"sql",
"join",
""
] |
I am trying hard on this, but I am unable to solve it. I have data coming comma-separated from a column, and I am splitting it with a function, and I need to return all comma-separated values.
I am doing something like **outer join ApplicationDocument Table**.
Here is the query:
```
SELECT PA.Id,
Ad.Id,
Ad.DocumentTitle,
Ad.Path
,
S.value
FROM dbo.ApplicationDocument AS Ad
right JOIN dbo.PostApplication AS PA
ON Ad.ApplicationId = PA.Id
JOIN dbo.Post P
ON PA.JobId = P.Id
outer APPLY dbo.fnSplit(P.RequiredDocuments,',') as S
WHERE S.Value = Ad.DocumentTitle
``` | I came up with the following query with the help of **@Humpty Dumpty** and **@t-clausen-dk**.
Here is what I needed. This query now fetches all the records, and I can now filter using a `where` clause on it.
```
SELECT PA.Id AS ApplicationId,
Ad.Id AS ApplicationDocumentId,
Ad.DocumentTitle,
Ad.Path,
S.value
FROM dbo.ApplicationDocument AS Ad
right JOIN dbo.PostApplication AS PA
ON Ad.ApplicationId = PA.Id
JOIN dbo.Post P
ON PA.JobId = P.Id
cross APPLY dbo.fnSplit(P.RequiredDocuments,',') as S
where PA.Id = SomeId here
``` | ```
CROSS APPlY (select value
from dbo.fnSplit(P.RequiredDocuments,',')
WHERE Value =Ad.DocumentTitle) as S
``` | Join with comma-separated data returned from function using SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
""
] |
I am testing some SQL code that can take in multiple values, which are then compared using an 'IN' statement. How do I simulate this for testing? Here's what I'm using, but it returns nothing, which is incorrect, because when I send in multiple values in my SSRS report, it works. I'd like @item to simulate what it's getting from SSRS when the report is run.
```
Declare
@item VarChar(100)
Set @item = ('1003932,1003933,1003934')
SELECT DISTINCT
CUSTNAME,
ISNULL(NAME, LOGCREATEDBY) AS 'Modified By'
,b.ITEMID as 'Item Id'
,[PRICE_NEW] as 'New Price'
,[PRICE_OLD] as 'Old Price'
,[PRICEUNIT_NEW] as 'New Unit Price'
,[PRICEUNIT_OLD] as 'Old Unit Price'
,LOGCREATEDDATE as 'Created Date'
,(Select TOP 1 INVENTTRANS.DATEFINANCIAL From INVENTTRANS Where INVENTTRANS.ITEMID = @item order by INVENTTRANS.DATEFINANCIAL desc) As 'LastInvoice'
FROM PMF_INVENTTABLEMODULELOG AS b
LEFT JOIN USERINFO ON ID = LOGCREATEDBY
LEFT JOIN INVENTTABLE AS a on a.ITEMID in (@item)
WHERE b.ITEMID in (@item)
order by LOGCREATEDDATE desc
``` | try this...
```
Declare @item table
( item varchar(100)
)
INSERT INTO @item
SELECT '1003932' union all
SELECT '1003933' union all
SELECT '1003934'
SELECT DISTINCT
CUSTNAME,
ISNULL(NAME, LOGCREATEDBY) AS 'Modified By'
,b.ITEMID as 'Item Id'
,[PRICE_NEW] as 'New Price'
,[PRICE_OLD] as 'Old Price'
,[PRICEUNIT_NEW] as 'New Unit Price'
,[PRICEUNIT_OLD] as 'Old Unit Price'
,LOGCREATEDDATE as 'Created Date'
,(Select TOP 1 INVENTTRANS.DATEFINANCIAL From INVENTTRANS Where INVENTTRANS.ITEMID in (SELECT item from @item) order by INVENTTRANS.DATEFINANCIAL desc) As 'LastInvoice'
FROM PMF_INVENTTABLEMODULELOG AS b
LEFT JOIN USERINFO ON ID = LOGCREATEDBY
LEFT JOIN INVENTTABLE AS a on a.ITEMID in (SELECT item from @item)
WHERE b.ITEMID in (SELECT item from @item)
order by LOGCREATEDDATE desc
``` | It can't be done like this.
I'd suggest that you insert the values into a table variable,
```
declare @items_table table (id int);
insert @items_table values ('1003932'),('1003933'),('1003934');
```
and then change the where clause into
```
WHERE b.ITEMID in (SELECT id FROM @items_table)
```
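(An illustrative aside on *why* the single-variable form fails: `IN (@item)` compares each row against one value, the whole comma-separated string, while one placeholder per value behaves like the table-variable approach. Sketched via Python's sqlite3:)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (itemid TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("1003932",), ("1003933",), ("1003934",)])

item = "1003932,1003933,1003934"

# One variable is one value: nothing equals the whole comma string
single = con.execute("SELECT itemid FROM t WHERE itemid IN (?)", (item,)).fetchall()

# One placeholder per value behaves like IN ('1003932','1003933','1003934')
ids = item.split(",")
sql = "SELECT itemid FROM t WHERE itemid IN ({}) ORDER BY itemid".format(
    ",".join("?" * len(ids)))
multi = con.execute(sql, ids).fetchall()

print(single)  # []
print(multi)   # [('1003932',), ('1003933',), ('1003934',)]
```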
Or you can just simply modify the where clause:
```
WHERE b.ITEMID in ('1003932','1003933','1003934')
``` | While testing i want to insert multiple values in a variable | [
"",
"sql",
"sql-server",
"reporting-services",
""
] |
I am trying to simply connect to a database. But the problem arises in the concatenation part of the Sql query.
When I do:
```
cmd = New OleDbCommand("Select max(ID) from Table1", con)
```
There is no error but when I do
```
cmd = New OleDbCommand("Select max(ID) from'" & tablename & "'", con)
```
The VB.NET error is: **Syntax error in query. Incomplete query clause.**
Here is the full code
```
Function Get_Max(ByVal Get_Table_Name As String)
Dim tablename As String = Get_Table_Name
Dim found_max As Integer
Call connect()
con.Open()
cmd = New OleDbCommand("Select max(ID) from'" & tablename & "'", con)
dr = cmd.ExecuteReader
While dr.Read
found_max = dr(0)
End While
con.Close()
MsgBox(found_max)
Return found_max
End Function
``` | Do not put single quotes around the variable tablename
```
cmd = New OleDbCommand("Select max(ID) from " & tablename, con)
```
Otherwise the tablename variable becomes a literal string and, from the database's point of view, you are requesting the MAX(ID) of the string `'Table1'`.
Apart from this, you should be absolutely sure about the value of the variable `tablename`.
Do not allow the end user to directly type this value (at least let him choose between a list of predefined names).
This kind of code is very weak and open to SQL injection.
And, as other answers have already outlined, square brackets around the table name are required if one of your table names contains a space or matches a reserved keyword:
```
cmd = New OleDbCommand("Select max(ID) from [" & tablename & "]", con)
``` | change this line
```
cmd = New OleDbCommand("Select max(ID) from " & tablename & "", con)
```
to this
```
cmd = New OleDbCommand("Select max(ID) from " & tablename, con)
```
You just missed a space; also remove the "'". | Syntax error in query. Incomplete query clause in Max statement inside a function | [
"",
"sql",
"vb.net",
"oledb",
"string-concatenation",
"oledbcommand",
""
] |
I have a table 'user' already in db with fields
```
create Table user (
id INT(6) NOT NULL AUTO_INCREMENT PRIMARY KEY,
username varchar(20) NOT NULL,
password varchar(20) NOT NULL,
profilename varchar(20) NOT NULL,
email varchar(40) NOT NULL,
socialemail varchar(40) NOT NULL)engine=InnoDB;
```
The stated columns also contain values
I altered the table and added some more columns
```
ALTER TABLE user
ADD COLUMN enabled varchar(1),
ADD COLUMN accountnonexpired varchar(1),
ADD COLUMN credentialsnonexpired varchar(1),
ADD COLUMN accountnonlocked varchar(1);
```
Now I am inserting values into the new columns with the below command in MySQL:
```
insert into user
(id,enabled,accountnonexpired,credentialsnonexpired,accountnonlocked) values ('1','Y','Y','Y','Y'),('2','Y','Y','Y','Y');
```
I am getting an error
```
Error Code: 1364. Field 'username' doesn't have a default value
```
Can anyone tell me why?
What should be the correct way to insert values in new columns?
 | An `INSERT` is creating NEW records. You have a username field that is marked as `NOT NULL` But in your sql you are not including username and other `NOT NULL` fields in your statement.
Your insert would need to include all the `NOT NULL` fields.
```
insert into user(id,username,password,profilename,email,socialemail,enabled,accountnonexpired,credentialsnonexpired,accountnonlocked)
values ('1',<username>,<password>,<profilename>,<email>,<socialemail>'Y','Y','Y','Y'),('2',<username>,<password>,<profilename>,<email>,<socialemail>'Y','Y','Y','Y');
```
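(An illustrative aside via Python's sqlite3: the same failure mode in miniature. A NOT NULL column with no default rejects an INSERT that omits it, while an UPDATE of an existing row is fine because the row already has its username.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT NOT NULL, enabled TEXT)")
con.execute("INSERT INTO user (id, username) VALUES (1, 'alice')")

try:
    con.execute("INSERT INTO user (id, enabled) VALUES (2, 'Y')")  # omits username
    failed = False
except sqlite3.IntegrityError:
    failed = True  # NOT NULL constraint rejected the new row

con.execute("UPDATE user SET enabled = 'Y' WHERE id = 1")  # existing row: no problem
rows = con.execute("SELECT id, username, enabled FROM user").fetchall()
print(failed, rows)  # True [(1, 'alice', 'Y')]
```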
I suspect you actually want to UPDATE here instead of insert.
An update would look like this:
```
UPDATE user set enabled = 'Y', accountnonexpired='Y', credentialsnonexpired='Y', accountnonlocked='Y'
WHERE id = 1
``` | `INSERT` adds new rows to your table, and those rows would have to have a non-null username for the `INSERT` to succeed. It's not 100% clear, but I think you are saying that you want to set the values of these new columns *for all your existing rows*. To do that you need `UPDATE`, not `INSERT`:
```
UPDATE user SET id='1', enabled = 'Y', accountnonexpired = 'Y' WHERE 1
```
I omitted a few of your columns for brevity but you get the idea. You may also want to alter the table to make these values the `DEFAULT` for new rows inserted in the future. | inserting into new added columns in an existing table | [
"",
"mysql",
"sql",
"database",
""
] |
What is the correct syntax for using CASE WHEN and MAX() in an SQL Query in SPSS?
I've tried using the following query in the SPSS syntax window but it does not recognize either:
```
SELECT * FROM
(
SELECT id, MAX(CASE WHEN val='A' THEN 'A' END) as Val_1,
MAX(CASE WHEN val='B' THEN 'B' END) as Val_2
FROM table1 GROUP BY id
)a
WHERE Val_1 IS NULL OR Val_2 IS NULL;
```
I'm using SPSS 20.
**Update:** I tried using both my query and the one presented by AK47.
My query's error:
> Warning. Command name: GET DATA
> SQLExecDirect failed :[Microsoft][ODBC Excel Driver] Syntax error (missing operator) in query expression 'MAX(CASE WHEN val='A' THEN 'A' END)'
AK47's Error:
> Warning. Command name: GET DATA
> SQLExecDirect failed :[Microsoft][ODBC Excel Driver] Syntax error in FROM clause.
Is this all a matter of finding the correct syntax in SPSS or do you think SPSS just doesn't support some SQL functions? | As it turns out, the ODBC Driver (the same driver used in MS Access) does not support CASE...WHEN. Instead, use SWITCH:
```
SELECT * FROM
(
SELECT id, MAX(SWITCH( val='A', 'A')) as Val_1,
MAX(SWITCH( val='B', 'B')) as Val_2
FROM table1 GROUP BY id
)a
WHERE Val_1 IS NULL OR Val_2 IS NULL;
```
This will produce the same results. | I am not familiar with SPSS, but based on my SQL experience I have given the following answer; I hope it helps:
```
SELECT * FROM
(
SELECT id, CASE WHEN MAX(val)='A' THEN 'A' END as Val_1,
CASE WHEN MAX(val)='B' THEN 'B' END as Val_2
FROM table1 GROUP BY id
)a
WHERE Val_1 IS NULL OR Val_2 IS NULL;
``` | Using CASE WHEN and MAX() in SPSS | [
"",
"sql",
"spss",
""
] |
I started working for a company a few weeks ago, and have inherited a crazy mess of databases. I'm currently working on designing the new systems to replace their older ones. The previous developer created a ton of views that are entirely identical, with the only differences being the conditions within the `WHERE` clause. In my attempt to cleanup the source, I was attempting to create a stored procedure that returns the table based on the given conditions, so I can compact 250 views into a single stored procedure.
Honestly, the only reason I'm doing this is so when I start transitioning over to the newer databases and front-ends, I have some pre-made stored procedures to use rather than a jumbled mess of views.
Is there any way that I can execute the stored procedures I created inside of the currently existing views, so I don't have to modify their front end (A series of access databases connected to SQL Server through ODBC)?
My stored procedure looks something like this:
```
CREATE PROCEDURE BusDuctQueries.CMSF_Current_Mod @ModNumber VARCHAR(255)
AS
SELECT [Date], [Name], [Mod], [Cell], [Row], [Section],
[Phase A Breaker Amps], [Phase B Breaker Amps], [Phase C Breaker Amps],
[Phase A Actual Amps], [Phase B Actual Amps], [Phase C Actual Amps],
[PDU # 1], [Bus Duct # 1],
[Bus Duct # 1 Phase A Current Load], [Bus Duct # 1 Phase B Current Load], [Bus Duct # 1 Phase C Current Load],
[PDU # 2], [Bus Duct # 2],
[Bus Duct # 2 Phase A Current Load], [Bus Duct # 2 Phase B Current Load], [Bus Duct # 2 Phase C Current Load],
[Sort 1], [Sort 2], [Average Load], [Percent Used], [Percent Remaining], [KW Used],
CONVERT(NUMERIC(18, 2), [Derated KW Left]) AS [Derated KW Left],
CONVERT(NUMERIC(18, 2), [True KW Left]) AS [True KW Left],
CASE WHEN [PDU # 1] LIKE '%A%' THEN 1 ELSE 2 END AS UPSSort
FROM BusDuctQueries.[CMSF Current]
WHERE ([Mod] = @ModNumber) AND (Cell NOT LIKE '%WP%')
``` | You cannot call a stored proc from inside a view. It is not supported. However you can make views call other views, or [table-valued user-defined functions](http://technet.microsoft.com/en-us/library/ms191165%28v=sql.105%29.aspx).
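The view-on-view approach works on any engine; here is a quick SQLite sketch of the idea (table and filters invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (modnum TEXT, cell TEXT, amps REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [("M1", "C1", 10.0), ("M1", "WP2", 99.0), ("M2", "C3", 20.0)])

# One base view holds the shared filtering logic...
conn.execute("CREATE VIEW current_readings AS "
             "SELECT * FROM readings WHERE cell NOT LIKE '%WP%'")
# ...and thin views layer on top of it instead of 250 near-identical copies.
conn.execute("CREATE VIEW mod1_readings AS "
             "SELECT * FROM current_readings WHERE modnum = 'M1'")

rows = conn.execute("SELECT modnum, cell, amps FROM mod1_readings").fetchall()
print(rows)  # [('M1', 'C1', 10.0)]
```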
For the latter you must make sure that you're using [inline functions](http://technet.microsoft.com/en-us/library/ms189294%28v=sql.105%29.aspx). Otherwise, any subsequent clauses like `WHERE`, `GROUP BY` and `ORDER BY` will be performed on the dynamically produced resultset instead. Thus, you won't benefit from index searches and the like. This may have a huge impact on performance. | There are two ways to do this, both with their pros and cons:
1. Use [OPENROWSET](http://technet.microsoft.com/en-us/library/ms190312.aspx) / [OPENQUERY](http://technet.microsoft.com/en-us/library/ms188427.aspx). These allow you do do a SELECT on anything you want, but
it might have security implications that you don't like. That might not be an issue if
this is a short-term solution and afterwards you can undo the allowing of "Ad Hoc
Distributed Queries". But this is the easiest to set up, especially if the result sets
vary between the procs (assuming that there are multiple procs).
2. Write a SQLCLR TVF that executes the procedure. This can be done in SAFE mode if the
Stored Procedures are read-only (i.e. no INSERT / UPDATE / DELETE statements and most
likely no CREATE #Tmp statements). I wrote an article showing an example of this:
[Stairway to SQLCLR Level 2: Sample Stored Procedure and Function](http://www.sqlservercentral.com/articles/Stairway+Series/105841/) | Execute a Stored Procedure Inside a View? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
TABLE A:
> dvdID.......dvdTitle
>
> d01..........Avenger
>
> d02..........Avenger
>
> d03..........Spiderman
TABLE B:
> rentID.......dvdID
>
> r01...........d01
>
> r02...........d02
>
> r03...........d03
TABLE C:
> returnID.......rentID
>
> t01...............r01
I want to select DVDs that are not in Table B (rented), unless they are in Table C (returned).
so the output should be like this:
OUTPUT:
> dvdID.......dvdTitle
>
> d01..........Avenger
could you help me? | Try this,
```
SELECT *
FROM A
WHERE (NOT EXISTS (SELECT * FROM B WHERE B.dvdID=A.dvdID))
OR (EXISTS (SELECT * FROM C,B WHERE C.rentID=B.rentID and B.dvdID=A.dvdID))
```
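You can verify it against the sample data quickly, e.g. with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (dvdID TEXT, dvdTitle TEXT);
CREATE TABLE B (rentID TEXT, dvdID TEXT);
CREATE TABLE C (returnID TEXT, rentID TEXT);
INSERT INTO A VALUES ('d01','Avenger'),('d02','Avenger'),('d03','Spiderman');
INSERT INTO B VALUES ('r01','d01'),('r02','d02'),('r03','d03');
INSERT INTO C VALUES ('t01','r01');
""")

# A DVD qualifies if it was never rented, or its rental has a matching return.
rows = conn.execute("""
    SELECT * FROM A
    WHERE (NOT EXISTS (SELECT * FROM B WHERE B.dvdID = A.dvdID))
       OR (EXISTS (SELECT * FROM C, B WHERE C.rentID = B.rentID AND B.dvdID = A.dvdID))
""").fetchall()
print(rows)  # [('d01', 'Avenger')]
```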
here is [SQLFiddle](http://sqlfiddle.com/#!2/50272/1) | You can use existential quantifiers in your SQL conditions, represented by the `EXISTS` keyword:
```
SELECT *
FROM TableA a
WHERE NOT EXISTS ( -- No rental record
SELECT *
FROM TableB b
WHERE b.dvdID=a.dvdID -- The record is for this DVD, and...
AND NOT EXISTS ( -- ...there is no return record
SELECT * FROM TableC c WHERE c.rentID=b.rentID
)
)
```
The query reads almost like a poorly constructed English sentence, but it explains what is going on. | MYSQL - select TABLE A rows that are not in Table B except the rows are in TABLE C | [
"",
"mysql",
"sql",
"mysql-select-db",
""
] |
I am using Oracle 11g and the SQL Developer tool. When I tried to retrieve rows from the DB I got the following error message:
```
ORA-01219: database not open: queries allowed on fixed tables/views only)
```
I looked for the sgadef.dbf file and it was missing in the home directory. Can someone help me get it working? | First of all check the status of the instance you work with (this may require connecting under an administrator account):
```
select status from v$instance;
```
Probably you will get
```
STATUS
------------
MOUNTED
```
Under sys account try to complete
```
ALTER DATABASE OPEN;
```
and check that it's completed successfuly.
If not, please share the output and the alert log in your question. | Just in case someone comes here with this question:
```
RECOVER DATABASE;
```
and then
```
ALTER DATABASE OPEN;
```
fixed my issue.
"",
"database",
"sql",
"oracle",
"database-administration",
""
] |
I have 2 tables, let's say they contain:
**Table1**
```
id | username
1 | Bla
4 | Bla2
```
**Table2**
```
FROM_ID | FK_TOID | Action
1 | 2 | -1
4 | 2 | -1
```
Now all I have is something like:
```
SELECT FK_FROMID FROM table2 WHERE FK_TOID = 2 AND FP_ACTION = -1
```
What I want to do is fetch the id and username from table1, where the id of table1 matches FROM\_ID from table2, where FK\_TOID is '2' in table2.
So basically the results returned should be something like
`[{ id: 1, FK_FROMID: 1, username: Bla }, { id: 4, FK_FROMID: 4, username: Bla2 }]` | ***you need this:***
```
SELECT A.id,B.from_id as FK_FROMID,A.username
FROM t1 A
LEFT JOIN t2 B
ON A.id=B.from_id
```
***click this link to see the result:***
***<http://sqlfiddle.com/#!2/868c1/4>***
***update:1***
```
SELECT A.id,B.from_id as FK_FROMID,A.username
FROM t1 A
LEFT JOIN t2 B
ON A.id=B.from_id
WHERE B.fk_toid=2 AND B.action=-1;
```
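A quick sanity check of the updated query against the sample rows, here with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, username TEXT);
CREATE TABLE t2 (from_id INTEGER, fk_toid INTEGER, action INTEGER);
INSERT INTO t1 VALUES (1,'Bla'),(4,'Bla2');
INSERT INTO t2 VALUES (1,2,-1),(4,2,-1);
""")

rows = conn.execute("""
    SELECT A.id, B.from_id AS FK_FROMID, A.username
    FROM t1 A
    LEFT JOIN t2 B ON A.id = B.from_id
    WHERE B.fk_toid = 2 AND B.action = -1
    ORDER BY A.id
""").fetchall()
print(rows)  # [(1, 1, 'Bla'), (4, 4, 'Bla2')]
```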
***check this link:***
***<http://sqlfiddle.com/#!2/868c1/8>*** | You should use the inner join or left outer join (does not matter for your example) as follows:
```
select t1.id, t2.FK_FROMID, t1.username
from Table1 t1
join Table2 t2
on (t2.FROM_ID = t1.id)
where t2.FK_TOID = 2
and t2.FP_ACTION = -1
``` | MySQL fetch data from another table where the column row matches a row result from first table | [
"",
"mysql",
"sql",
"sql-server",
""
] |
Say I have the table below:
```
Id Grade 1 Grade 2 Grade 3
1 1 1 1
2 5 0 0
3 3 1 5
```
I want the result as exactly as follows:
```
Id Grade 1 Grade 2 Grade 3 Total
1 1 1 1 3
2 5 0 0 8
3 3 1 5 17
```
Total should be the sum of the current row's grade columns plus the previous row's total.
Is that possible in MYSQL? | ```
select
t.*,
@rolling_sum := @rolling_sum + `Grade 1` + `Grade 2` + `Grade 3` AS Total
from
Table1 t
, (select @rolling_sum := 0) var_init
order by id
```
* see it working live in an `sqlfiddle`
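Side note (my addition, not part of the original answer): on engines with window functions — MySQL 8+, SQL Server 2012+, SQLite 3.25+ — the variable trick is unnecessary, since `SUM() OVER` computes the running total directly:

```python
import sqlite3  # needs SQLite 3.25+ for window functions

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grades (id INTEGER, g1 INTEGER, g2 INTEGER, g3 INTEGER);
INSERT INTO grades VALUES (1,1,1,1),(2,5,0,0),(3,3,1,5);
""")

# Running total of each row's grade columns, accumulated in id order.
rows = conn.execute("""
    SELECT id, g1, g2, g3,
           SUM(g1 + g2 + g3) OVER (ORDER BY id) AS total
    FROM grades
    ORDER BY id
""").fetchall()
print([r[-1] for r in rows])  # [3, 8, 17]
```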
Another version:
```
select t.*,
(select sum([Grade 1] + [Grade 2] + [Grade 3]) from Table1 sub_t where sub_t.id <= t.id)
from Table1 t
order by id
``` | Try this
```
SELECT A.*, (@runtot := @runtot + `Grade 1` + `Grade 2` + `Grade 3`) AS Total
FROM Table1 A
,(SELECT @runtot:=0) c
```
**[Fiddle Demo](http://sqlfiddle.com/#!2/2e82a/4)** | total sum from previous column plus current column in SQL | [
"",
"sql",
"sql-server",
""
] |
I have two tables: `Jobs` and `JobItems`:
```
CREATE TABLE Jobs (
[JobId] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY
);
Create TABLE JobItems(
[ItemId] UNIQUEIDENTIFIER NOT NULL,
[JobId] UNIQUEIDENTIFIER NOT NULL
);
```
and for every row in `Jobs` there can be zero, one, or more rows in `JobItems` whose `JobId` matches that of the row in `Jobs`, which means that a "job" can contain zero, one, or more than one "item".
I need to select rows from `Jobs` such that there's exactly one row in `JobItems` with `JobId` matching that of the row from `Jobs` and tried code from [this answer](https://stackoverflow.com/a/6667437/57428) (altered a bit):
```
SELECT JobId, (SELECT Count(ItemId) from JobItems WHERE
JobItems.JobId=Jobs.JobId) from Jobs
```
and the problem is I now need to somehow say that I need only rows with exactly one match. So I tried to assign a shortcut to `Count(ItemId)`
```
SELECT JobId, (SELECT Count(ItemId) AS CountItemId from JobItems WHERE
JobItems.JobId=Jobs.JobId) from Jobs WHERE CountItemId=1
```
but this makes SQL server unhappy - it says
> Invalid column name `CountItemId`
How do I have only items with exactly one match selected? | You can do this:
```
select *
from jobs j
where (select count(*) from jobitems ji where ji.jobid = j.jobid) = 1;
```
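A quick reproduction (SQLite sketch; text ids stand in for the UNIQUEIDENTIFIERs — j1 has exactly one item, j2 has two, j3 has none):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (jobid TEXT PRIMARY KEY);
CREATE TABLE jobitems (itemid TEXT, jobid TEXT);
INSERT INTO jobs VALUES ('j1'),('j2'),('j3');
INSERT INTO jobitems VALUES ('i1','j1'),('i2','j2'),('i3','j2');
""")

# Keep only jobs whose correlated item count is exactly one.
rows = conn.execute("""
    SELECT * FROM jobs j
    WHERE (SELECT COUNT(*) FROM jobitems ji WHERE ji.jobid = j.jobid) = 1
""").fetchall()
print(rows)  # [('j1',)]
```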
If however, table Jobs only contains all job IDs and nothing else, then you don't have to select from table Jobs at all:
```
select jobid
from jobitems
group by jobid
having count(*) = 1;
``` | A basic [**`GROUP BY HAVING`**](http://www.dotnet-tricks.com/Tutorial/sqlserver/HQV8310312-Definition,-Use-of-Group-by-and-Having-Clause.html) clause will do it for you..
```
SELECT Jobs.jobId
FROM Jobs
INNER JOIN JobItems
ON Jobs.JobId = JobItems.JobId
GROUP BY Jobs.jobId
HAVING COUNT(*)=1
``` | How do I select items from one table such that a query in another table yields exactly one row? | [
"",
"sql",
"sql-server",
""
] |
Pardon the title, I've no clue what to call this.
So imagine I have this
```
table_ref
id_x|id_y
---------
6|70
6|71
6|72
6|73
8|70
8|73
9|72
9|73
```
How can I select `id_y` only if it matches `id_x = 6 & 8 & 9`?
in this case it should return me `73`
the returned result of `id_y` will then be used in an inner join in another SQL query. | Of course, it is hard to parametrize, but if it is important then you can pass the values as a table-valued parameter.
```
SELECT T.id_y
FROM table_ref T
JOIN (VALUES (6), (8), (9)) A(id_x)
ON T.id_x = A.id_x
GROUP BY T.id_y
HAVING COUNT(*) = 3
``` | ```
SELECT distinct [idy]
FROM Table_1
WHERE idy in (SELECT idy FROM Table_1 WHERE idx=6)
AND idy in (SELECT idy FROM Table_1 WHERE idx=8)
AND idy in (SELECT idy FROM Table_1 WHERE idx=9)
``` | SQL query where 'IN' act as 'AND' not 'OR'? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"relational-division",
""
] |
Let's say I have a simple 3-column table.
```
CREATE TABLE `test1` (
id INT,
name VARCHAR(40),
age INT,
PRIMARY KEY (id)
);
```
If I then insert lets say 5 values into the table there will be 5 records. Each record will have an `id` ranging from 1 - 5. If I were to delete 4 of the records, leaving me with the one that has the `id` of 1 and THEN insert another 4 records again, how come it doesn't start from `id=2` but instead carries on so it'd start from 6? | Here's a scenario that explains why it can't decrement the AI on DELETE:
1. Ashley starts a transaction and DELETEs rows for `id=2` through `id=5`. But Ashley doesn't commit immediately.
2. Bill starts his own transaction and INSERTs a new row. Since the `id` values 2 through 5 have been deleted, they'd be up for grabs. Bill's row gets `id=2`.
3. Ashley rolls back the first transaction. Thus the rows for `id=2` through `id=5` come back.
But Bill's transaction already inserted a new row with `id=2`!
An alternative would be to increment based on what each transaction can see, and ignore uncommitted changes:
1. Ashley DELETEs rows for `id=2` through `id=5`. Do not commit yet.
2. Bill starts his own transaction and INSERTs a new row. Since the `id` values 2 through 5 still appear to exist in Bill's view of the data, don't use those. Assign `id=6`.
3. Ashley commits the first transaction. Thus the rows for `id=2` through `id=5` are gone, and Bill's row has value `id=6`, resulting in a mysterious gap!
The only other way to avoid this would be to make all transactions serialized. In other words, Bill's INSERT would have to wait until Ashley's transaction resolved. It's desirable to avoid this, and allow concurrent inserts.
For this reason, AI mechanisms *must* function outside of transaction scope. Concurrent transactions can "see" the latest AI value for a table, even if there are uncommitted transactions in progress. And the only way to prevent re-using values is to always increment, and never decrement.
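The never-decrement policy is easy to observe. SQLite's `AUTOINCREMENT` keyword makes the same guarantee, so it serves as a stand-in demo here (a sketch — this is SQLite, not MySQL, but the always-forward id policy illustrated is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test1 (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.executemany("INSERT INTO test1 (name) VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",), ("e",)])  # ids 1..5

conn.execute("DELETE FROM test1 WHERE id > 1")  # delete ids 2..5

# The counter only ever moves forward, so the next row skips the freed ids.
cur = conn.execute("INSERT INTO test1 (name) VALUES ('f')")
print(cur.lastrowid)  # 6
```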
This does result in "gaps" from time to time. This is normal and expected. AI keys don't need to be consecutive, they only need to be unique. | Why? Because it's the easiest thing to implement. It can be implemented by having a single "next value" counter and incrementing it every time a new row is inserted.
Automatically resetting the next value would be extremely expensive. Since rows can be deleted in any order, the server would have to look through the entire table on every insert or delete to determine the next value. | Why do MySQL AI id's behave this way? | [
"",
"mysql",
"sql",
""
] |
I want to check if a list of stored procedures exist. I want this all to be done in 1 script, one by one. So far I have this format:
```
USE [myDatabase]
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'sp_1')
BEGIN
CREATE PROCEDURE sp_1
AS
.................
END
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'sp_2')
BEGIN
CREATE PROCEDURE sp_2
AS
.................
END
GO
```
and so on. However, I'm getting the following error:
> Incorrect syntax near the keyword 'Procedure'.
Why isn't what I'm doing working correctly? | `CREATE PROCEDURE` must be the first statement in the batch. I usually do something like this:
```
IF EXISTS (
SELECT type_desc, type
FROM sys.procedures WITH(NOLOCK)
WHERE NAME = 'myProc'
AND type = 'P'
)
DROP PROCEDURE dbo.myProc
GO
CREATE PROC dbo.myProc
AS
....
GO
GRANT EXECUTE ON dbo.myProc TO MyUser
```
(don't forget grant statements since they'll be lost if you recreate your proc)
One other thing to consider when you are deploying stored procedures is that a drop can succeed and a create fail. I always write my SQL scripts with a rollback in the event of a problem. Just make sure you don't accidentally delete the commit/rollback code at the end, otherwise your DBA might crane-kick you in the trachea :)
```
BEGIN TRAN
IF EXISTS (
SELECT type_desc, type
FROM sys.procedures WITH(NOLOCK)
WHERE NAME = 'myProc'
AND type = 'P'
)
DROP PROCEDURE myProc
GO
CREATE PROCEDURE myProc
AS
--proc logic here
GO
-- BEGIN DO NOT REMOVE THIS CODE (it commits or rolls back the stored procedure drop)
IF EXISTS(
SELECT 1
FROM sys.procedures WITH(NOLOCK)
WHERE NAME = 'myProc'
AND type = 'P'
)
COMMIT TRAN
ELSE
ROLLBACK TRAN
-- END DO NOT REMOVE THIS CODE
``` | One idiom that I've been using lately that I like quite a lot is:
```
if exists (select 1 from sys.objects where object_id = object_id('dbo.yourProc'))
set noexec on
go
create procedure dbo.yourProc as
begin
select 1 as [not yet implemented]
end
go
set noexec off
alter procedure dbo.yourProc as
begin
/*body of procedure here*/
end
```
Essentially, you're creating a stub if the procedure doesn't exist and then altering either the stub (if it was just created) or the pre-existing procedure. The nice thing about this is that you don't drop a pre-existing procedure which drops all the permissions as well. You can also cause issues with any application that happens to want it in that brief instant where it doesn't exist.
[Edit 2018-02-09] - In SQL 2016 SP1, `create procedure` and `drop procedure` got some syntactic sugar that helps with this kind of thing. Specifically, you can now do this:
```
create or alter procedure dbo.yourProc as
go
drop procedure if exists dbo.yourProc;
```
Both provide idempotency in the intended statement (i.e. you can run it multiple times and the desired state is achieved). This is how I'd do it now (assuming you're on a version of SQL Server that supports it). | Creating a stored procedure if it does not already exist | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have two tables, Product and Benchmark
A benchmark is linked to only one product. There can only be one benchmark per year per product.
I would like to retrieve every products' name for a set of years, and count how many benchmark there are for each product.
```
SELECT p.name,
p.id,
COUNT(p.id) AS nb_benchmark
FROM product p
INNER JOIN benchmark b0 ON b0.product_id = p.id
INNER JOIN benchmark b1 ON b1.product_id = p.id
WHERE p.owner = "MyCompany"
AND b0.year = 2011
AND b1.year = 2012
GROUP BY p.id
ORDER BY nb_trials DESC
```
But the count is wrong, it's way too high; it even gives me more results than there actually are in the database. I guess it's because of the JOINs, but I don't know how to build the query. | I have found a way to achieve what I wanted:
```
SELECT p.name, p.id, COUNT(DISTINCT(b0.id)) + COUNT(DISTINCT(b1.id)) as nb_benchmark
FROM product p
INNER JOIN benchmark b0 ON b0.product_id = p.id AND b0.year = 2011
INNER JOIN benchmark b1 ON b1.product_id = p.id AND b1.year = 2012
WHERE
p.owner = "myCompany"
GROUP BY p.id
ORDER BY nb_benchmark DESC
``` | Remember that the basis of SQL joining is the cartesian product of rows in the referenced tables, which are then eliminated by filters and join conditions. Because you are joining TWICE to table `benchmark`, which from the nature of your query, we can assume has many `benchmark` rows per `product` per benchmark year.
e.g. 1 Product with 3 Benchmark rows each for 2011 and 2012
```
FROM product p -- 1 Product Row
INNER JOIN benchmark b0 ON b0.product_id = p.id -- 1 x 3 = 3
INNER JOIN benchmark b1 ON b1.product_id = p.id -- 1 x 3 x 3 = 9
```
So the multiple joins to `benchmark` introduce duplicate rows for `product`, which are then counted.
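Following the arithmetic above (one product, three benchmark rows), the blow-up reproduces readily, e.g. in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id INTEGER);
CREATE TABLE benchmark (product_id INTEGER, year INTEGER);
INSERT INTO product VALUES (1);
INSERT INTO benchmark VALUES (1,2011),(1,2011),(1,2012);
""")

# Each extra join to benchmark multiplies the row count again: 1 x 3 x 3 = 9.
n = conn.execute("""
    SELECT COUNT(*) FROM product p
    JOIN benchmark b0 ON b0.product_id = p.id
    JOIN benchmark b1 ON b1.product_id = p.id
""").fetchone()[0]
print(n)  # 9
```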
You can use [`COUNT(DISTINCT xx)`](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_count) to count distinct values, so your query should be of the form:
```
SELECT p.name,
p.id,
COUNT(DISTINCT p.id) AS distinct_products,
COUNT(DISTINCT b.name) AS distinct_benchmark_names
-- etc
FROM ...
```
*Other Notes*
* for correctness sake you should `GROUP BY` both `p.id` and `p.name`. Although MySql allows this, other RDBMS are more strict. | Too many rows when Joining to same table twice | [
"",
"mysql",
"sql",
"join",
"count",
""
] |