| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a table named Articles, which surprisingly has a column named Article, which is the Primary Key.
In this column all articles used to be designated with an unnecessary prefix which I am tasked to remove.
Examples:
1-184-W21TK00032
1-154-MXA0074
In these examples both 184 and 154 designate distributors, they all have a few thousand Articles in this table.
I encountered zero problems when running this query:
```
UPDATE Articles
SET Article = SUBSTRING(Article,7,LEN(Article)-6)
WHERE SUBSTRING(Article,3,3) = '184'
```
I was unable to run this query without a WHERE designation. I altered the final line to switch to the next distributor: 154
```
WHERE SUBSTRING(Article,3,3) = '154'
```
And ran into this error:
```
Msg 536, Level 16, State 5, Line 1
Invalid length parameter passed to the SUBSTRING function.
```
I have also tried using LIKE in the WHERE line, to designate everything using '-154-' like so
```
WHERE Article LIKE '_-154-%'
```
but this would inexplicably lead to this, as would running the query without a WHERE as I started out trying:
```
Msg 2627, Level 14, State 1, Line 1
Violation of PRIMARY KEY constraint 'aaaaaArticles_PK'. Cannot insert duplicate key in object 'dbo.Articles'.
```
Any ideas or suggestions? I'm at a loss. Running it without a WHERE line is now impossible, for the thousands of products using 184 have successfully been altered. | The problem occurs because `LEN(Article)` is less than 6 for at least 1 row, which means you are passing a negative value into the `SUBSTRING`. Filter them out using `WHERE LEN(Article) > 6` | It's probably not the WHERE you're changing that is the issue.
There are probably rows for "154" where the other SUBSTRING fails.
That is, there are probably (Len(Article) - 6) < 0.
Search for that to confirm and then add a CASE or such to manage it. | Issues when using SUBSTRING to trim the first 6 characters in SQL Server | [
"",
"sql",
"sql-server",
""
] |
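A quick sketch of the failure mode described in the chosen answer: `SUBSTRING(Article, 7, LEN(Article) - 6)` computes a negative length for any article shorter than 6 characters, which is exactly what raises Msg 536. The Python below mirrors that logic and the suggested `WHERE LEN(Article) > 6` guard (the sample values are invented):

```python
# Strip the 6-character prefix ("1-154-") only when the value is long
# enough, mirroring WHERE LEN(Article) > 6 from the answer above.
articles = ["1-154-MXA0074", "1-184-W21TK00032", "1-154"]  # invented rows

def strip_prefix(article: str) -> str:
    length = len(article) - 6          # LEN(Article) - 6
    if length < 0:                     # SQL Server raises Msg 536 here
        raise ValueError("Invalid length parameter passed to SUBSTRING")
    return article[6:6 + length]       # SUBSTRING(Article, 7, length)

# Filtering first, as the answer suggests, avoids the error entirely.
cleaned = [strip_prefix(a) for a in articles if len(a) > 6]
print(cleaned)
```

The short row `'1-154'` is what would blow up the unfiltered UPDATE; the length filter simply skips it.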
I am using TRY CATCH block to capture error and do rollback
```
ALTER PROCEDURE sp_first
/*
parameters
*/
BEGIN TRY
BEGIN TRANSACTION
/*
statements
*/
COMMIT
END TRY
BEGIN CATCH
IF(@@TRANCOUNT>0)
ROLLBACK
END CATCH
```
Will the above approach work if there is another stored procedure `sp_inner` being called inside `sp_first` which also performs DML statements (INSERT, DELETE, UPDATE, etc.)?
```
ALTER PROCEDURE sp_first
/*
parameters
*/
BEGIN TRY
BEGIN TRANSACTION
/*
statements of sp_first
*/
-- stored procedure sp_inner also requires rollback if error occurs.
EXEC sp_inner @parameterList
COMMIT
END TRY
BEGIN CATCH
IF(@@TRANCOUNT>0)
ROLLBACK
END CATCH
```
How to implement roll back if nested stored procedure is used? | `rollback` rolls back to the outermost transaction, not just the current transaction within a transaction. If that is what you are trying to do, then it will work. If not, then it won't.
See General Remarks at <http://msdn.microsoft.com/en-us/library/ms181299.aspx> | I would recommend checking [this link](http://technet.microsoft.com/en-us/library/ms178157(v=sql.100).aspx) for an example. Basically, as podiluska said, a standard rollback will rollback the entire transaction (meaning, you could have a trancount of 5 and it'll revert all those changes).
You could check the trancount and only roll back that amount, but as per the link, I'd recommend creating a save point prior to calling the nested procedure, and then rolling back to that savepoint in case of failure. | Implement rollback in Nested stored procedure | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
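The point of the chosen answer, that a single rollback undoes everything back to the outermost transaction, including DML performed inside the nested procedure, can be sketched with Python's `sqlite3`; the connection's context manager plays the role of the TRY/CATCH-plus-ROLLBACK wrapper, and the two procedures are simulated as functions (schema and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT)")
conn.commit()

def sp_inner(conn):
    # Nested "procedure": does DML inside the caller's open transaction,
    # then fails partway through.
    conn.execute("INSERT INTO log VALUES ('inner work')")
    raise RuntimeError("error inside sp_inner")

try:
    with conn:  # acts like BEGIN TRANSACTION ... COMMIT / ROLLBACK
        conn.execute("INSERT INTO log VALUES ('outer work')")
        sp_inner(conn)
        # commit happens here only if no exception was raised
except RuntimeError:
    pass  # CATCH: the context manager already rolled everything back

rows = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(rows)
```

Both the outer and the inner insert disappear; nothing from the nested call survives the rollback.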
Similar to this question : [SQL Statement with multiple SETs and WHEREs](https://stackoverflow.com/questions/6446250/sql-statement-with-multiple-sets-and-wheres)
Is there a way to perform multiple sets, as shown below.
```
UPDATE table
SET ID = 111111259
SET Name = "Bob"
SET Phone = 1111111261
WHERE ID = 2555
```
Thanks. | This is possible,
```
UPDATE table SET Name = 'aaa', Col2 = 'bbb' WHERE ID IN (2555, 2666)
```
What you are asking for is setting multiple values to the same column in the same row. | No need of `SET` for each column. do it like
```
UPDATE table
SET ID = 111111259,
Name = 'Bob',
Phone = 1111111261
WHERE ID = 2555
``` | SQL multi SET with one WHERE | [
"",
"sql",
"database",
"set",
"where-clause",
""
] |
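The comma-separated `SET` list can be verified with SQLite via Python's `sqlite3`, using the values from the question (table name shortened to avoid the reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, phone INTEGER)")
conn.execute("INSERT INTO people VALUES (2555, 'Old', 0)")

# One SET keyword, columns separated by commas -- not one SET per column.
conn.execute(
    "UPDATE people SET id = 111111259, name = 'Bob', phone = 1111111261 "
    "WHERE id = 2555"
)

row = conn.execute("SELECT id, name, phone FROM people").fetchone()
print(row)
```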
I have following query to select the view to be used in the query, however I get the error:
> FROM keyword not found where expected
```
select *, (CASE WHEN 'employee' = 'employee' THEN employee ELSE vw END) FROM type1
``` | ```
select type1.*,
(CASE WHEN 'employee' = 'employee'
THEN employee
ELSE vw
END)
FROM type1
```
I always *prefix* the `*` with the table name/table alias, and it works!
We just need to specify: fetch `all` the columns from `this` table, whenever we combine a wildcard with other select expressions! | You cannot use `*` and individual columns together in select statement.
```
SELECT (CASE WHEN 'employee' = 'employee' THEN 'employee' ELSE 'vw' END)
FROM dual
``` | CASE SQL Oracle Query - What is wrong? | [
"",
"sql",
"oracle",
"case",
""
] |
I have a table with the following columns:
```
ID | NAME | ID_PREVIOUS | ID_FOLLOWING | RANDOM_USELESS_DATA
----+--------+--------------+--------------+---------------------
1 | AB2-F | NULL | 2 | bla bla bla
2 | PP-01 | 1 | 3 | ffwfwqafrwf
3 | 10J6U | 2 | 7 | ihjfoiwhfo
7 | F-1-R | 3 | NULL | fdojshflos
```
This table represents a kind of graph. Every line is a node with its id, its name, the id of the previous and following node, and some other data that is not useful for the question. Notice that the adjacent nodes could be null, meaning that there is only one connection for the specific node.
What I cannot do is to make a query that substitutes all the IDs with the corresponding node name. I need to represent the "graph" with the names but I don't know how. My ideal result would be:
```
NAME | NAME_PREVIOUS | NAME_FOLLOWING
------+----------------+------------------
AB2-F | NULL | PP-01
PP-01 | AB2-F | 10J6U
10J6U | PP-01 | F-1-R
F-1-R | 10J6U | NULL
```
Thanks for your help. | ```
SELECT T.NAME,
TP.NAME AS NAME_PREVIOUS,
TF.NAME AS NAME_FOLLOWING
FROM YOUR_TABLE AS T
LEFT JOIN YOUR_TABLE AS TP ON TP.ID = T.ID_PREVIOUS
LEFT JOIN YOUR_TABLE AS TF ON TF.ID = T.ID_FOLLOWING
```
That's general SQL that should work for the most DBMS I suppose, however as I know Access query builder for some reasons also adds bracket in FROM clause like this:
```
SELECT T.NAME,
TP.NAME AS NAME_PREVIOUS,
TF.NAME AS NAME_FOLLOWING
FROM (YOUR_TABLE AS T
LEFT JOIN YOUR_TABLE AS TP ON TP.ID = T.ID_PREVIOUS)
LEFT JOIN YOUR_TABLE AS TF ON TF.ID = T.ID_FOLLOWING
``` | If you only have a single level of "previous/following" you can get away with a self-join
```
select nd.name,
pr.name as name_previous,
nx.name as name_following
from nodes nd
left join nodes pr on nd.id_previous = pr.id
left join nodes nx on nd.id_following = nx.id;
```
SQLFiddle: <http://sqlfiddle.com/#!15/21b46/2>
If you have multiple levels you need a recursive query (which is not supported by MS-Access or MySQL) | SQL - query and replace column value with another value extracted from the same table | [
"",
"sql",
""
] |
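The double self-join runs unchanged on SQLite, so it can be checked with Python's `sqlite3` against the sample rows from the question (the table name `nodes` follows the second answer and is otherwise an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (id INTEGER, name TEXT, id_previous INTEGER, id_following INTEGER);
INSERT INTO nodes VALUES
  (1, 'AB2-F', NULL, 2),
  (2, 'PP-01', 1, 3),
  (3, '10J6U', 2, 7),
  (7, 'F-1-R', 3, NULL);
""")

# Join the table to itself twice: once to resolve the previous id,
# once to resolve the following id. LEFT JOIN keeps the NULL endpoints.
rows = conn.execute("""
    SELECT nd.name, pr.name AS name_previous, nx.name AS name_following
    FROM nodes nd
    LEFT JOIN nodes pr ON nd.id_previous = pr.id
    LEFT JOIN nodes nx ON nd.id_following = nx.id
    ORDER BY nd.id
""").fetchall()
for r in rows:
    print(r)
```

The output matches the ideal result table from the question, NULL endpoints included.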
I want to retrieve all the values from a table in which email IDs are not repeated. But when I run the query below, the execution runs indefinitely in MySQL.
```
SELECT * FROM USERS
WHERE EMAIL IN (
SELECT EMAIL
FROM USERS
GROUP BY EMAIL
HAVING COUNT(1) = 1)
```
I don't understand why this query runs indefinitely, and is there any other way to get the same data without an inner query?
I also referred this link in which duplicate values are found [link](https://stackoverflow.com/questions/688549/finding-duplicate-values-in-mysql)
Thanks. | This should work as-is, but you might try to rewrite it to a Join instead, MySQL's optimizer is not that clever :-)
```
SELECT u.*
FROM USERS AS u
JOIN ( SELECT EMAIL FROM USERS GROUP BY EMAIL HAVING COUNT(1) = 1) AS dt
ON u.EMAIL = dt.email
``` | This is one case where the MySQL extension on `group by` comes in handy. You can do this query as:
```
select u.*
from users u
group by email
having count(*) = 1;
```
The MySQL extension allows you to include columns in the `select` that are not in the `group by`. The values come from arbitrary (the documentation says **indeterminate**) matching rows. Because of the `having` clause, only one row matches, so the values will come from that row. | How find values not duplicate in mysql | [
"",
"mysql",
"sql",
""
] |
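The join rewrite suggested in the chosen answer is easy to verify in SQLite with invented data; only the rows whose email occurs exactly once survive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, email TEXT);
INSERT INTO users VALUES
  (1, 'a@x.com'), (2, 'b@x.com'), (3, 'a@x.com'), (4, 'c@x.com');
""")

# Derived table keeps only emails with exactly one occurrence,
# then a plain inner join pulls the full rows for them.
rows = conn.execute("""
    SELECT u.id, u.email
    FROM users AS u
    JOIN (SELECT email FROM users GROUP BY email HAVING COUNT(1) = 1) AS dt
      ON u.email = dt.email
    ORDER BY u.id
""").fetchall()
print(rows)
```

`a@x.com` appears twice, so both of its rows are filtered out.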
```
------------------------------------
X Y
------------------------------------
10 20
10 5
10 9
50 40
50 30
100 70
100 100
```
Consider 1 ,2 , 3 rows as Group A
and 4,5 rows as Group B
and 6,7 rows as Group C
I want to get only one row per group, and that row should be chosen depending on the X and Y column values:
the Y value that is nearest to X without exceeding it.
Expected Result
```
------------------------------------
X Y
------------------------------------
10 9
50 40
100 100
``` | You can do this with conditional aggregation:
```
select x, max(case when y <= x then y end) as y
from table t
group by x;
``` | Alternative solution
```
DECLARE @tb TABLE (
X float , Y float
)
INSERT INTO @tb (X,Y) values
(10, 20),
(10, 5),
(10, 9),
(50, 40),
(50, 30),
(100, 70),
(100, 100);
select a.* from (
select row_number() over( partition by X order by abs(X-Y) asc ) as rn, X, Y
from @tb
) a where a.rn = 1;
``` | Get only rows that nearest lower in Y column to X column | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
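The conditional-aggregation query from the chosen answer can be checked against the question's exact data in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (x INTEGER, y INTEGER);
INSERT INTO t VALUES (10,20),(10,5),(10,9),(50,40),(50,30),(100,70),(100,100);
""")

# The CASE yields NULL for y values above x; MAX ignores NULLs,
# so each group keeps the largest y that does not exceed x.
rows = conn.execute("""
    SELECT x, MAX(CASE WHEN y <= x THEN y END) AS y
    FROM t
    GROUP BY x
    ORDER BY x
""").fetchall()
print(rows)
```

This reproduces the expected result rows (10, 9), (50, 40), (100, 100).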
I get this Error
---
```
07-16 20:58:27.299: E/AndroidRuntime(14005): Caused by: android.database.sqlite.SQLiteException: no such table: strings: , while compiling: SELECT id, string FROM strings WHERE id = ?
```
---
I have a StringDB class with an ID and a String, used to store objects of this class in an SQLite database. Please debug it and give a solution. Here is the code:
```
public class MySQLiteHelper extends SQLiteOpenHelper {
// Database Version
private static final int DATABASE_VERSION = 2;
// Database Name
private static final String DATABASE_NAME = "StringDB";
public MySQLiteHelper(Context context) {
super(context, DATABASE_NAME, null, DATABASE_VERSION);
}
@Override
public void onCreate(SQLiteDatabase db) {
// SQL statement to create StringDB table
String CREATE_StringDB_TABLE = "CREATE TABLE StringDBs ( " +
"id INTEGER PRIMARY KEY AUTOINCREMENT, " +"string"+ "TEXT )";
// create StringDBs table
db.execSQL(CREATE_StringDB_TABLE);
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
// Drop older StringDBs table if existed
db.execSQL("DROP TABLE IF EXISTS StringDBs");
// create fresh StringDBs table
this.onCreate(db);
}
//---------------------------------------------------------------------
/**
* CRUD operations (create "add", read "get", update, delete) StringDB + get all StringDBs + delete all StringDBs
*/
// StringDBs table name
private static final String TABLE_STRINGS = "strings";
// StringDBs Table Columns names
private static final String KEY_ID = "id";
private static final String KEY_STRING= "string";
private static final String[] COLUMNS = {KEY_ID,KEY_STRING};
public void addStringDB(StringDB string){
// 1. get reference to writable DB
SQLiteDatabase db = this.getWritableDatabase();
// 2. create ContentValues to add key "column"/value
ContentValues values = new ContentValues();
values.put(KEY_STRING, string.getString()); // get author
// 3. insert
db.insert(TABLE_STRINGS, // table
null, //nullColumnHack
values); // key/value -> keys = column names/ values = column values
// 4. close
db.close();
}
public StringDB getStringDB(int id){
// 1. get reference to readable DB
SQLiteDatabase db = this.getReadableDatabase();
// 2. build query
Cursor cursor =
db.query(TABLE_STRINGS, // a. table
COLUMNS, // b. column names
" id = ?", // c. selections
new String[] { String.valueOf(id) }, // d. selections args
null, // e. group by
null, // f. having
null, // g. order by
null); // h. limit
// 3. if we got results get the first one
if (cursor != null)
cursor.moveToFirst();
// 4. build StringDB object
StringDB res=new StringDB();
res.setId(Integer.parseInt(cursor.getString(0)));
res.setString(cursor.getString(1));
return res;
}
// Get All StringDBs
public List<StringDB> getAllStringDBs() {
List<StringDB> StringDBs = new LinkedList<StringDB>();
// 1. build the query
String query = "SELECT * FROM " + TABLE_STRINGS;
// 2. get reference to writable DB
SQLiteDatabase db = this.getWritableDatabase();
Cursor cursor = db.rawQuery(query, null);
// 3. go over each row, build StringDB and add it to list
StringDB StringDB = null;
if (cursor.moveToFirst()) {
do {
StringDB = new StringDB();
StringDB.setId(Integer.parseInt(cursor.getString(0)));
StringDB.setString(cursor.getString(1));
// Add StringDB to StringDBs
StringDBs.add(StringDB);
} while (cursor.moveToNext());
}
Log.d("getAllStringDBs()", StringDBs.toString());
// return StringDBs
return StringDBs;
}
// Updating single StringDB
public int updateStringDB(StringDB StringDB) {
// 1. get reference to writable DB
SQLiteDatabase db = this.getWritableDatabase();
// 2. create ContentValues to add key "column"/value
ContentValues values = new ContentValues();
values.put("strings", StringDB.getString()); // get author
// 3. updating row
int i = db.update(TABLE_STRINGS, //table
values, // column/value
KEY_ID+" = ?", // selections
new String[] { String.valueOf(StringDB.getId()) }); //selection args
// 4. close
db.close();
return i;
}
// Deleting single StringDB
public void deleteStringDB(StringDB StringDB) {
// 1. get reference to writable DB
SQLiteDatabase db = this.getWritableDatabase();
// 2. delete
db.delete(TABLE_STRINGS,
KEY_ID+" = ?",
new String[] { String.valueOf(StringDB.getId()) });
// 3. close
db.close();
Log.d("deleteStringDB", StringDB.toString());
}
}
``` | 1. Under `private static final String DATABASE_NAME = "StringDB";` add
```
private static final String TABLE_NAME = "strings";
```
2. Change `String CREATE_StringDB_TABLE = "CREATE TABLE StringDBs ( " +
"id INTEGER PRIMARY KEY AUTOINCREMENT, " +"string"+ "TEXT )";`
to
```
String CREATE_StringDB_TABLE = "CREATE TABLE " + TABLE_NAME + " (id " +
"INTEGER PRIMARY KEY AUTOINCREMENT, string TEXT)";
```
3. Change under `onUpgrade` @ `db.execSQL`
to
```
db.execSQL("DROP TABLE IF EXISTS " + TABLE_NAME);
```
4. Change all references of `TABLE_STRINGS` to `TABLE_NAME` | You are passing the wrong table name. You are creating `StringDB` and asking for `strings` | Caused by: android.database.sqlite.SQLiteException: no such table: strings: , while compiling: SELECT id, string FROM strings WHERE id =? | [
"",
"android",
"sql",
"sqlite",
""
] |
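The table-name mismatch both answers point at (the helper creates `StringDBs` while `TABLE_STRINGS` is `"strings"`) reproduces the exact exception in a couple of lines of Python, since the engine involved really is SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# onCreate builds a table named 'StringDBs'...
conn.execute(
    "CREATE TABLE StringDBs (id INTEGER PRIMARY KEY AUTOINCREMENT, string TEXT)"
)

# ...but the CRUD methods query 'strings', reproducing the crash.
try:
    conn.execute("SELECT id, string FROM strings WHERE id = ?", (1,))
    error = None
except sqlite3.OperationalError as exc:
    error = str(exc)
print(error)
```

Using one constant for the table name in both the CREATE statement and the queries, as the chosen answer suggests, removes the mismatch.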
I have a stored procedure that will accept 2 different parameters. The first parameter will determine which column I want to sort on, the second parameter will determine whether it is `ASC` or `DESC`
```
Create Procedure Some_SP
@sortcolumn varchar(10)
@sortorder varchar(10)
AS
Select * from empTable
Order by
CASE @sortcolumn WHEN 'First_Name' THEN fname END,
CASE @sortcolumn WHEN 'Last_Name' THEN lname END,
CASE @sortcolumn WHEN 'ID' THEN empID END,
CASE @sortorder WHEN 'ascending' THEN ASC END,
CASE @sortorder WHEN 'descending' THEN DESC END
```
It is giving me a syntax error. How do I fix it so that I can have 2 conditions in my CASE statement? | The following will work:
```
Select * from empTable
Order by
CASE WHEN @sortcolumn = 'First_Name' AND @SortOrder = 'ascending' THEN fname END ASC,
CASE WHEN @sortcolumn = 'First_Name' AND @SortOrder = 'descending' THEN fname END DESC
```
etc...
In order to avoid typing each of these case statements by hand, you could write a "generator" script that you use to create this (especially good if the table definition would change):
```
SELECT
'CASE WHEN @SortColumn = ''' + C.name + ''' AND @SortOrder = ''ascending'' THEN ' + C.name + ' END ASC,' + CHAR(13) + CHAR(10) +
'CASE WHEN @SortColumn = ''' + C.name + ''' AND @SortOrder = ''descending'' THEN ' + C.name + ' END DESC,'
FROM sys.columns C
WHERE C.object_id = object_id('[Schema].[Table]')
``` | If you want to avoid dynamic SQL and using 2x your conditions, you can use `row_number`
eg:
```
declare @t table (string varchar(50), number int)
insert @t values ('a',9),('f',2),('c',1)
declare
@sc varchar(10) = 'number', -- or 'string', etc
@so varchar(10) = 'desc' -- or 'asc'
select *
from
(
select
*,
case @sc when 'string' then ROW_NUMBER() over (order by string)
when 'number' then ROW_NUMBER() over (order by number)
end rn
from @t
) v
order by
case @so when 'desc' then -rn else rn end
``` | SQL Case with 2 conditions | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
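The per-column ASC/DESC `CASE` pattern from the chosen answer can be exercised in SQLite through Python's `sqlite3` (column names shortened, data invented; each unmatched `CASE` yields NULL and is inert as a sort key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (fname TEXT, lname TEXT, empid INTEGER);
INSERT INTO emp VALUES ('Ann','Zed',3),('Bob','May',1),('Cal','Ash',2);
""")

def sorted_rows(conn, sortcolumn, sortorder):
    # Two CASE expressions per sortable column, one fixed ASC and one
    # fixed DESC, as in the answer above.
    return conn.execute("""
        SELECT fname FROM emp
        ORDER BY
          CASE WHEN :c = 'First_Name' AND :o = 'ascending'  THEN fname END ASC,
          CASE WHEN :c = 'First_Name' AND :o = 'descending' THEN fname END DESC,
          CASE WHEN :c = 'ID'         AND :o = 'ascending'  THEN empid END ASC,
          CASE WHEN :c = 'ID'         AND :o = 'descending' THEN empid END DESC
    """, {"c": sortcolumn, "o": sortorder}).fetchall()

print(sorted_rows(conn, "First_Name", "descending"))
print(sorted_rows(conn, "ID", "ascending"))
```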
how can I select Julian day of year in Oracle database?
I tried:
```
select to_char(sysdate, 'J') from dual;
```
This gives me the number of days since January 1, 4712 BC, but I need the number of days since January 1 of the current year. | If you check the [TO\_CHAR (datetime)](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions180.htm) documentation you get a link to ["Format Models"](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements004.htm#i34510) with a comprehensive list of available formats. I guess you want this:
> `DDD` Day of year (1-366) | ```
SELECT TO_CHAR(SYSDATE, 'DDD') from DUAL;
``` | Oracle Julian day of year | [
"",
"sql",
"oracle",
"julian-date",
""
] |
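For comparison, SQLite spells the same day-of-year format as `strftime('%j', ...)`, and plain Python agrees via `tm_yday`; a quick check with a fixed date:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
# SQLite's counterpart of Oracle's TO_CHAR(d, 'DDD'): zero-padded day of year.
ddd = conn.execute("SELECT strftime('%j', '2014-02-01')").fetchone()[0]

# Plain Python agrees (as an integer rather than a padded string).
yday = date(2014, 2, 1).timetuple().tm_yday

print(ddd, yday)  # → 032 32
```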
I'm making a table in SSRS that contains customer names in column 1 and their corresponding number of orders in column 2. This query works for what I'm trying to accomplish, but I don't know exactly how the Count function knows what the heck I want it to count and what table I'm wanting it to count from. Could someone please explain this to me so I can better understand in the future? Thanks a ton.
```
SELECT Customers.name
,Count(1) AS OrderCount
FROM Customers
INNER JOIN Orders
ON Customers.id = Orders.customer_id
GROUP BY Customers.name
``` | > I don't know exactly how the Count function knows what the heck I want it to count
There is only one thing that `COUNT` is able to count - it can count *rows* in which an expression evaluates to a non-null value. If you use `COUNT(1)` in a regular query, you would get `1` on each row. With `GROUP BY`, however, the `COUNT` will return the number of rows *in the specific group*. In your case, that would be the number of rows with the same `Customers.name`, because that is what you use for `GROUP BY`.
As far as passing `1` to `COUNT` goes, a more common practice these days is to pass an asterisk, i.e. to write `COUNT(*)`, because in most RDBMS engines there is no performance penalty for that. | Count is counting `true` for each record found. Therefore, if there are 3 records, it is counting `true` 3 times and returns `3`. It doesn't really matter what it is counting in there as long as it exists or is a constant. If it exists, it is counted. It's the number of rows that you are being grouped when you group by that matters. | Could someone explain how Count works in this SQL query? | [
"",
"sql",
"sql-server",
"reporting-services",
""
] |
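A runnable illustration of "COUNT counts rows per group", using SQLite with a made-up two-customer data set shaped like the question's tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, name TEXT);
CREATE TABLE orders (id INTEGER, customer_id INTEGER);
INSERT INTO customers VALUES (1,'John'),(2,'Max');
INSERT INTO orders VALUES (10,1),(11,1),(12,1),(13,2);
""")

# COUNT(1) counts joined rows per group; with GROUP BY name, "the group"
# is all joined rows sharing the same customer name.
rows = conn.execute("""
    SELECT c.name, COUNT(1) AS order_count
    FROM customers c
    JOIN orders o ON c.id = o.customer_id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)
```

John joins to three order rows and Max to one, so the counts are 3 and 1.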
I am trying to execute a stored procedure that requires two variables to be passed into it. One is static, the other is a dynamic variable.
```
DECLARE @Filt DATETIME
SET @Filt = (SELECT DISTINCT MAX(Date) FROM Data.db.Staging)
SELECT * INTO #tempData FROM OPENROWSET('SQLNCLI', 'Server=ISR14 \MSSQL2012;Trusted_Connection=yes;', 'EXEC GetData.db.Staging @Mode = ''Date'' @Filt ')
```
but that doesn't work, got the error back
"Msg 8180, Level 16, State 1, Line 1
Statement(s) could not be prepared.
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near '@Filt'."
I'm guessing it is because @Filt is a dynamic statement. So I tried this
```
DECLARE @FilterData DATETIME
DECLARE @sql VARCHAR(200)
SET @Filt = (SELECT DISTINCT MAX(AsOfDate) FROM Data.db.Staging)
SET @sql = 'EXEC GetData.db.Staging @Mode = ''Date'' @Filt = ' + @Filt
SELECT * INTO #tempData FROM OPENROWSET('SQLNCLI', 'Server=ISR14\MSSQL2012;Trusted_Connection=yes;',
@sql)
```
But I get the message back
"Msg 102, Level 15, State 1, Line 24
Incorrect syntax near '@sql'."
It seems that OPENROWSET can only accept strings. But I want to pass a variable that is dynamic. | You have to put the whole statement into a variable and run it, and convert @FilterData to a varchar to concatenate it.
You can't use variables with openquery/openrowset.
Try this and check the print output... if it works and looks ok, then EXEC(@sql2)
```
DECLARE @FilterData DATETIME
DECLARE @sql VARCHAR(200), @sql2 VARCHAR(500)
SET @FilterData = '2014-07-01'--(SELECT DISTINCT MAX(AsOfDate) FROM Data.db.Staging)
SET @sql = 'EXEC GetData.db.Staging @Mode = ''''Date'''', @Filt = ''''' + CONVERT(VARCHAR(20),@FilterData ,120) + ''''''
SET @sql2 = 'SELECT * INTO #tempData FROM OPENROWSET(''SQLNCLI'', ''Server=ISR14\MSSQL2012;Trusted_Connection=yes;'',
'''+@sql+''')'
print @sql2
--exec(@sql2)
``` | You need to make the whole query dynamic, not sure if I got it nailed down, but something like:
```
DECLARE @Filt DATETIME
,@sql VARCHAR(MAX)
SET @Filt = (SELECT MAX(Date) FROM Data.db.Staging)
SET @sql = 'SELECT * INTO #tempData FROM OPENROWSET(''SQLNCLI'', ''Server=ISR14 \MSSQL2012;Trusted_Connection=yes;'', ''EXEC GetData.db.Staging @Mode = ''''Date''' +@Filt+ ')'
EXEC (@sql)
``` | Syntax issue in SQL Server, using OPENROWSET | [
"",
"sql",
"sql-server",
"t-sql",
"concatenation",
"openrowset",
""
] |
I have a question about separating variables in a data set in a specific way. When we did field work, we had to collect data in a method that looked like this:
```
Range Row HGT V HGT2 V2 HGT3 V3 HGT4 V4
1 2 151 15 127 22 114 16 97 12
```
In reality, the variables there are not different types of measurements, but different distances from a start point. Because of this, I want to get the data into a form like this:
```
Range Row HGT V HGT2 V2 HGT3 V3 HGT4 V4
1 2 151 15 . . . . . .
1 2 . . 127 22 . . . .
1 2 . . . . 114 16 . .
1 2 . . . . . . 97 12
```
This way, I can use a bunch of if-then statements to put in the true rows for each line of data, since range/row is how we identify everything, and compress the data back into the 2 variables with a coalesce statement in sql. I know that this can easily be done in excel by hand, but our lab head is strongly against that due to the risk of us making mistakes. | If the the number of variables are certain then you can follow this method.
Assumption: Source data is "Range\_data"
```
data Range_data_1(keep=Range Row HGT V);
set Range_data;
run;
data Range_data_2(keep=Range Row HGT2 V2);
set Range_data;
run;
data Range_data_3(keep=Range Row HGT3 V3);
set Range_data;
run;
data Range_data_4(keep=Range Row HGT4 V4);
set Range_data;
run;
data All_Range;
set Range_data_1 Range_data_2 Range_data_3 Range_data_4;
run;
```
Suggestions:
1. Try writing a macro if the number of HGT and V variables is indefinite.
2. Also double-check your table design. | Does this help?
```
SELECT RANGE, ROW, HGT, V, NULL AS HGT2, NULL AS V2, NULL AS HGT3, NULL AS V3, NULL AS HGT4, NULL AS V4
UNION
SELECT RANGE, ROW, NULL AS HGT, NULL AS V, HGT2, V2, NULL AS HGT3, NULL AS V3, NULL AS HGT4, NULL AS V4
UNION
SELECT RANGE, ROW, NULL AS HGT, NULL AS V, NULL AS HGT2, NULL AS V2, HGT3, V3, NULL AS HGT4, NULL AS V4
UNION
SELECT RANGE, ROW, NULL AS HGT, NULL AS V, NULL AS HGT2, NULL AS V2, NULL AS HGT3, NULL AS V3, HGT4, V4
``` | Separating variables in a SAS data set | [
"",
"sql",
"sas",
""
] |
I have a table like
```
create table temp_table (col1 int)
```
I have some data in this table like
```
insert into temp_table
values(1), (2), (3), (4)
```
Now I want the data to be output as follows:
```
1,2,3,4
```
I have used the query:
```
select cast(col1 as nvarchar)+','
from temp_table
for xml path('')
```
The problem is that the output comes in XML format and I need it in simple text/string format. I tried searching for transpose but `FOR XML()` was mentioned everywhere and it didn't help me.
Any suggestions? | ```
declare @temp nvarchar(max)
select @temp = COALESCE(@temp + ', ', '') + CAST(col1 as nvarchar) from temp_table
select @temp
``` | And if you really don't get it with the given link :
```
select STUFF((
SELECT ',' + cast(col1 as nvarchar)
FROM temp_table
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
``` | Merge data many rows of one column into a single row in sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
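SQL Server needs the variable-concatenation or `FOR XML PATH` tricks shown above (or `STRING_AGG` on SQL Server 2017+), but the intended result is easy to sanity-check in SQLite, whose built-in `group_concat` aggregate does the same job:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp_table (col1 INTEGER);
INSERT INTO temp_table VALUES (1),(2),(3),(4);
""")

# group_concat collapses all rows into one comma-separated string.
joined = conn.execute(
    "SELECT group_concat(col1, ',') FROM temp_table"
).fetchone()[0]
print(joined)
```

Note that, like the COALESCE trick, `group_concat` does not guarantee ordering unless you force one, so treat the element order as unspecified.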
I have two tables in a MS Sql Server database, imported from different sources, that I want to merge.
### Table1:
```
Id: int identity primary key
Name: varchar(50)
Code: varchar(50)
```
### Table1 Data:
```
Id, Name, Code
1, 'Knife', '1'
2, 'Spoon', '1'
3, 'Fork', '1'
...
```
### Table2:
```
Code: varchar(50)
```
### Table2 Data:
```
Code
'ASF203RNSD2ONF'
'FD042TOLFB0W30'
'0FBW2REO90DFRK'
...
```
I want to update Table1's Code field with the values from Table2's Code field. Both tables have the same number of records and it doesn't matter which code from Table2 goes into which record from Table1, but each record in Table1 has to have a unique code coming from Table2 (each code value in Table2 is unique).
Normally there would be an *Id* in both tables that I could join on, but that isn't the case here, and the Table1 Id isn't sequential (some records have been deleted).
Is the only way to do this is looping thru the records row by agonizing row? | Added row numbers to the base tables and joined on that. The below code works.
```
update t1
set t1.code = t2.code
from
(select *, row_number() over(order by id) as rNum1 from table1) t1
join
(select *, row_number() over(order by code) as rNum2 from table2) t2
on t1.rnum1 = t2.rnum2
``` | If there are the same number of rows in each table and it doesn't matter which code goes against which record then why not create an arbitrary key field using row number within a sub query and then join on that?
```
SELECT
TABLE1.ID,
TABLE1.Name,
TABLE2.Code
FROM
(SELECT ROW_NUMBER() OVER (ORDER BY ID) AS 'Key_Field', ID, Name, Code FROM Table1) TABLE1
LEFT OUTER JOIN
(SELECT ROW_NUMBER() OVER (ORDER BY Code) AS 'Key_Field', Code FROM Table2) TABLE2
ON
TABLE1.Key_Field = TABLE2.Key_Field
``` | Merging table data without keys to join on | [
"",
"sql",
"sql-server-2008",
"join",
"merge",
"updates",
""
] |
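SQL Server's `UPDATE ... FROM` join syntax is not portable, but the positional-pairing idea from the chosen answer, numbering both tables with `ROW_NUMBER()` and joining on that, can be checked as a SELECT in SQLite (3.25+ for window functions); the sample data is adapted from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, name TEXT, code TEXT);
CREATE TABLE table2 (code TEXT);
INSERT INTO table1 VALUES (1,'Knife','1'),(2,'Spoon','1'),(4,'Fork','1');
INSERT INTO table2 VALUES ('ASF203RNSD2ONF'),('FD042TOLFB0W30'),('0FBW2REO90DFRK');
""")

# Number each table's rows, then join row N of table1 to row N of table2.
# Which code lands on which product is arbitrary but one-to-one.
rows = conn.execute("""
    SELECT t1.id, t1.name, t2.code
    FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY id)   AS rn FROM table1) t1
    JOIN (SELECT *, ROW_NUMBER() OVER (ORDER BY code) AS rn FROM table2) t2
      ON t1.rn = t2.rn
    ORDER BY t1.id
""").fetchall()
for r in rows:
    print(r)
```

Note the gap in table1's ids (no id 3) does not matter, since `ROW_NUMBER()` renumbers the rows densely.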
I am having a bit of trouble when modelling a relational database to an inventory managament system. For now, it only has 3 simple tables:
1. Product
```
ID | Name | Price
```
2. Receivings
```
ID | Date | Quantity | Product_ID (FK)
```
3. Sales
```
ID | Date | Quantity | Product_ID (FK)
```
As Receivings and Sales are identical, I was considering a different approach:
1. Product
```
ID | Name | Price
```
2. Receivings\_Sales (the name doesn't matter)
```
ID | Date | Quantity | Type | Product_ID (FK)
```
The column type would identify if it was receiving or sale.
Can anyone help me choose the best option, pointing out the advantages and disadvantages of either approach?
The first one seems reasonable because I am thinking in a ORM way.
Thanks! | Personally I prefer the first option, that is, separate tables for `Sales` and `Receiving`.
The two biggest disadvantages of option number 2, merging the two tables into one, are:
```
1) Inflexibility
2) Unnecessary filtering when use
```
First, on inflexibility. If your requirements expand (or you simply overlooked something) then you will have to break up your schema, or you will end up with unnormalized tables. For example, let's say your sales now need to include the sales clerk/person who did the sales transaction, which obviously has nothing to do with `Receiving`. And what if you do retail and wholesale sales; how would you accommodate that in your merged table? How about discounts or promos? I am identifying the obvious here. Now, let's go to `Receiving`. What if we want to tie our receiving to a `Purchase Order`? Purchase order details like P.O. Number, P.O. Date, Supplier Name, etc. would not belong under `Sales` but clearly relate to `Receiving`.
Secondly, on unnecessary filtering. If you have a merged table and you want to use only the `Sales` (or `Receiving`) portion, then you have to filter out the other portion in either your back-end or your front-end program. Whereas if they are separate tables, you just deal with one table at a time.
Additionally, you mentioned `ORM`; the first option would best fit that endeavour, because an object or entity should be distinct from other entities/objects. | ***If*** the tables really are and always will be identical (and I have my doubts), then name the unified table something more generic, like "InventoryTransaction", and then use negative numbers for one of the transaction types: probably sales, since that would correctly mark your inventory in terms of keeping track of stock on hand. | Turn two database tables into one? | [
"",
"sql",
"database",
"database-design",
"orm",
""
] |
I have 2 tables called Orders and Salesperson shown below:

And **I want to retrieve the names of all salespeople that have more than 1 order from the tables above**.
Then firing following query shows an error:
```
SELECT Name
FROM Orders, Salesperson
WHERE Orders.salesperson_id = Salesperson.ID
GROUP BY salesperson_id
HAVING COUNT( salesperson_id ) >1
```
The error is:
**Column 'Name' is invalid in the select list because it is
not contained in either an aggregate function or
the GROUP BY clause.**
From the error and searching on Google, I could understand that the error is because the Name column must be either part of the GROUP BY statement or inside an aggregate function.
I also tried to understand why the selected column has to be in the GROUP BY clause or part of an aggregate function, but didn't understand it clearly.
So, how to fix this error? | ```
SELECT max(Name) as Name
FROM Orders, Salesperson
WHERE Orders.salesperson_id = Salesperson.ID
GROUP BY salesperson_id
HAVING COUNT( salesperson_id ) >1
```
The basic idea is that columns that are not in the GROUP BY clause need to be in an aggregate function. Here, because the name is presumably the same for every salesperson\_id, MIN or MAX makes no real difference (the result is the same).
**Example**
Looking at your data, you have 3 entries for Dan (7). When the join is created, the row Dan (Name) gets multiplied by 3 (one per order), and then the server does not know which "Dan" to pick, because to the server those are 3 rows, even though they are semantically the same.
Also try this so that you can see what I am talking about:
```
SELECT Orders.Number, Salesperson.Name
FROM Orders, Salesperson
WHERE Orders.salesperson_id = Salesperson.ID
```
As far as the query goes, INNER JOIN is the better style since it's the standard for this kind of query. For this simple query it should not matter, though in some cases an explicit INNER JOIN can produce better results; as far as I know that is more of a legacy concern, since these days the server should produce pretty much the same execution plan either way.
For code clarity I would stick with INNER JOIN. | Assuming the name is unique to the salesperson.id then simply add it to your group by clause
```
GROUP BY salesperson_id, salesperson.Name
```
Otherwise use any Agg function
```
Select Min(Name)
```
The reason for this is that SQL doesn't know whether there are multiple name per salesperson.id | wrapping inside aggregate function in SQL query | [
"",
"mysql",
"sql",
"group-by",
"aggregate-functions",
""
] |
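A runnable check of the `MAX(Name)` workaround, using SQLite via Python's `sqlite3` (SQLite tolerates bare columns like MySQL does, but the aggregated form works on strict engines too; data adapted from the question's Dan example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE salesperson (id INTEGER, name TEXT);
CREATE TABLE orders (id INTEGER, salesperson_id INTEGER);
INSERT INTO salesperson VALUES (5,'Abe'),(7,'Dan');
INSERT INTO orders VALUES (1,7),(2,7),(3,7),(4,5);
""")

# Wrapping Name in an aggregate satisfies engines that enforce the rule
# "every selected column is either grouped or aggregated".
rows = conn.execute("""
    SELECT MAX(s.name) AS name
    FROM orders o JOIN salesperson s ON o.salesperson_id = s.id
    GROUP BY o.salesperson_id
    HAVING COUNT(o.salesperson_id) > 1
""").fetchall()
print(rows)
```

Only Dan has more than one order, so only his name comes back.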
Why does this give me 1 which is what I was expecting:
```
IF (SELECT 123) = 123
PRINT 1
ELSE
PRINT 2
```
But this gives me 2 which I was not expecting:
```
IF (SELECT NULL) = NULL
PRINT 1
ELSE
PRINT 2
``` | `NULL` values are checked by `IS NULL`
you have to use:
```
IF (SELECT NULL) IS NULL
PRINT 1
ELSE
PRINT 2
```
from the [manual](http://dev.mysql.com/doc/refman/5.7/en/problems-with-null.html):
> To search for column values that are NULL, you cannot use an expr =
> NULL test. The following statement returns no rows, because expr =
> NULL is never true for any expression | If you set `ANSI_NULLS` OFF
```
SET ANSI_NULLS OFF
IF (SELECT NULL) = NULL
PRINT 1
ELSE
PRINT 2
```
then you will get PRINT 1 | SQL IF NULL From SELECT statement | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I've got the following MySQL query:
```
SELECT user_id, score, time
FROM tests_1 T
WHERE T.score = (
SELECT MAX(T2.score)
FROM tests_1 T2
WHERE T2.user_id = T.user_id
)
ORDER BY score DESC, time ASC;
```
How do I add the 'username' column from the 'users' table ON users.user\_id = tests\_1.user\_id? | Try this:
```
SELECT T.user_id, U.username, T.score, T.time
FROM tests_1 T
JOIN users U on U.user_id = T.user_id
WHERE T.score = (
SELECT MAX(T2.score)
FROM tests_1 T2
WHERE T2.user_id = T.user_id
)
ORDER BY T.score DESC, T.time ASC;
``` | Just join it.
```
SELECT T.user_id, T.score, T.time, u.username
FROM tests_1 T
JOIN users u ON u.user_id = T.user_id
WHERE T.score = (
SELECT MAX(T2.score)
FROM tests_1 T2
WHERE T2.user_id = T.user_id
)
ORDER BY T.score DESC, T.time ASC;
``` | MySQL - WHERE clause + JOIN | [
"",
"mysql",
"sql",
"subquery",
""
] |
I can select and pull out a list of records by using a select statement like so with t-sql:
```
select * from [dbo].[testTable];
```
But how can I add in a "custom" row to the top of the result set?
For example, if the result set was:
```
John john@email.com
Max max@domain.com
```
I want to add a row, which is not from the table, to the result set so that it looks like so:
```
Name Email
John john@email.com
Max max@domain.com
```
The reason I want to do this is that I'm going to export this into a CSV file through sqlcmd, and I want to add that "custom" row as a header row. | This is the safe way to do this:
```
select name, email
from ((select 'name' as name, 'email' as email, 1 as which
) union all
(select name, email, 2 as which from [dbo].[testTable]
)
) t
order by which;
```
In practice, `union all` will work:
```
select 'name' as name, 'email' as email
union all
select name, email from [dbo].[testTable]
```
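The safe variant can be exercised end to end — here with SQLite through Python's `sqlite3` as a stand-in for SQL Server, with sample rows taken from the question. The explicit `which` column is what guarantees the header sorts to the top:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE testTable (name TEXT, email TEXT);
INSERT INTO testTable VALUES ('John', 'john@email.com'), ('Max', 'max@domain.com');
""")

# ORDER BY which pins the header row first, regardless of how the
# engine chooses to execute the two UNION ALL branches.
rows = con.execute("""
    SELECT name, email
    FROM (SELECT 'Name' AS name, 'Email' AS email, 1 AS which
          UNION ALL
          SELECT name, email, 2 AS which FROM testTable)
    ORDER BY which
""").fetchall()

print(rows)
```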
However, I cannot find documentation that guarantees that the first subquery is completed before the second. The underlying operator in SQL Server *does* have this behavior (or at least it did in SQL Server 2008 when I last investigated it). | ```
SELECT name, email FROM (
SELECT 'Name' AS Name, 'Email' AS Email, 1 AS o
UNION ALL
SELECT name, email, 2 AS o FROM testTable
) t
ORDER BY o, name
```
The o column is added to order the result sets of the UNION so that you ensure the first result set appears on top. | How can I add a "custom" row to the top of a select result set? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
For some reason the developer created the date column in my DB as a string, and it is stored as `YYYY_MM_DD`.
Does anyone know how I can convert the `YYYY_MM_DD` string to a date field via SQL? e.g.
```
2014_06_30 to 30/6/2014.
```
Or any other solutions
Thank you in advance | Please try:
```
DECLARE @str NVARCHAR(100)='2014_06_30'
select CONVERT (DATETIME, REPLACE(@str, '_', '-'))
```
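For anyone doing this cleanup outside the database, the same replace-then-parse idea can be sketched in Python (format strings inferred from the sample value; note that `strftime` zero-pads, giving `30/06/2014` rather than `30/6/2014`):

```python
from datetime import datetime

raw = "2014_06_30"

# Replace the underscores, then parse the result as an ISO-style date.
dt = datetime.strptime(raw.replace("_", "-"), "%Y-%m-%d")

# Format as day/month/year (zero-padded).
formatted = dt.strftime("%d/%m/%Y")
print(formatted)
```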
To convert it to format 30/6/2014, try:
```
select CONVERT(NVARCHAR(20), CONVERT(DATETIME, REPLACE(@str, '_', '-')), 103)
``` | ```
var myDate = '2014_06_30';
var myNewDate = select CONVERT (datetime, Replace(myDate,'_','/'))
``` | convert String YYYY_MM_DD to date | [
"",
"sql",
"sql-server",
"datetime",
""
] |
I am attempting to use multiple aggregate functions across multiple tables in a single SQL query (using Postgres).
My table is structured similar to the following:
```
CREATE TABLE user (user_id INT PRIMARY KEY, user_date_created TIMESTAMP NOT NULL);
CREATE TABLE item_sold (item_sold_id INT PRIMARY KEY, sold_user_id INT NOT NULL);
CREATE TABLE item_bought (item_bought_id INT PRIMARY KEY, bought_user_id INT NOT NULL);
```
I want to count the number of items bought and sold for each user. The solution I thought up does not work:
```
SELECT user_id, COUNT(item_sold_id), COUNT(item_bought_id)
FROM user
LEFT JOIN item_sold ON sold_user_id=user_id
LEFT JOIN item_bought ON bought_user_id=user_id
WHERE user_date_created > '2014-01-01'
GROUP BY user_id;
```
That seems to perform all the combinations of (item\_sold\_id, item\_bought\_id), e.g. if there are 4 sold and 2 bought, both COUNT()s are 8.
How can I properly query the table to obtain both counts? | The easy fix to your query is to use `distinct`:
```
SELECT user_id, COUNT(distinct item_sold_id), COUNT(distinct item_bought_id)
FROM user
LEFT JOIN item_sold ON sold_user_id=user_id
LEFT JOIN item_bought ON bought_user_id=user_id
WHERE user_date_created > '2014-01-01'
GROUP BY user_id;
```
However, the query is doing unnecessary work. If someone has 100 items bought and 200 items sold, then the join produces 20,000 intermediate rows. That is a lot.
The solution is to pre-aggregate the results or use a correlated subquery in the `select`. In this case, I prefer the correlated subquery solution (assuming the right indexes are available):
```
SELECT u.user_id,
(select count(*) from item_sold s where u.user_id = s.sold_user_id),
(select count(*) from item_bought b where u.user_id = b.bought_user_id)
FROM user u
WHERE u.user_date_created > '2014-01-01';
```
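Here is a runnable sketch of the correlated-subquery version, using SQLite through Python's `sqlite3` instead of Postgres (the table name is quoted because `user` is reserved in Postgres; the rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript('''
CREATE TABLE "user" (user_id INTEGER PRIMARY KEY, user_date_created TEXT);
CREATE TABLE item_sold (item_sold_id INTEGER PRIMARY KEY, sold_user_id INTEGER);
CREATE TABLE item_bought (item_bought_id INTEGER PRIMARY KEY, bought_user_id INTEGER);
INSERT INTO "user" VALUES (1, '2014-03-01'), (2, '2014-04-01');
INSERT INTO item_sold VALUES (10, 1), (11, 1), (12, 1), (13, 1);
INSERT INTO item_bought VALUES (20, 1), (21, 1);
''')

# Each count is computed per user, so there is no cross-product blowup
# and no need for DISTINCT.
rows = con.execute('''
    SELECT u.user_id,
           (SELECT COUNT(*) FROM item_sold s WHERE u.user_id = s.sold_user_id),
           (SELECT COUNT(*) FROM item_bought b WHERE u.user_id = b.bought_user_id)
    FROM "user" u
    WHERE u.user_date_created > '2014-01-01'
    ORDER BY u.user_id
''').fetchall()

print(rows)
```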
The right indexes are `item_sold(sold_user_id)` and `item_bought(bought_user_id)`. I prefer this over pre-aggregation because of the filtering on the `user` table. This only does the calculations for users created this year -- that is harder to do with pre-aggregation. | [SQL Fiddle](http://sqlfiddle.com/#!15/adc79/1)
With a lateral join it is possible to pre aggregate only the filtered users
```
select user_id, total_item_sold, total_item_bought
from
"user" u
left join lateral (
select sold_user_id, count(*) as total_item_sold
from item_sold
where sold_user_id = u.user_id
group by sold_user_id
) item_sold on user_id = sold_user_id
left join lateral (
select bought_user_id, count(*) as total_item_bought
from item_bought
where bought_user_id = u.user_id
group by bought_user_id
) item_bought on user_id = bought_user_id
where u.user_date_created >= '2014-01-01'
```
Notice that you need `>=` in the filter otherwise it is possible to miss the exact first moment of the year. Although that timestamp is unlikely with naturally entered data, it is common with an automated job. | Using SQL Aggregate Functions With Multiple Joins | [
"",
"sql",
"postgresql",
"join",
"left-join",
"aggregate-functions",
""
] |
1. If a row in a table (without primary key) is locked when some modification(update query) is taking place for that row, I assume that the intent locks are first acquired on table, then page before the exclusive lock is acquired on the row.
Now let's say that some other thread wishes to make modification(update query) *for some **other** row in the same table at the very same time*, then SQL Server throws the following error:
> Msg 1205, Level 13, State 45, Line 1
> Transaction (Process ID 65) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Now, might this error be due to the fact that the **other** row of the same table was amongst the data locked in the same data page by the first query?
2. I know data page also selects additional data that we do not request for. So if we have a primary key in a table, then will Data Page still select additional data or only that row with the primary key? | > 1. might be the reason that the other row of the same table was amongst the data locked in the same data page of the first query?
No. If both queries use the same granularity (eg. row) then they will obtain compatible intent locks at the high levels (table(partition), page) and different locks at the low (row) level. Even if they use incompatible granularities, no deadlock will occur. Blocking may occur, but no deadlock.
> I know data page also selects additional data that we do not request for. So if we have a primary key in a table, then will Data Page still select additional data or only that row with the primary key?
This just doesn't make any sense. You are mixing logical (primary key) with physical (data page) and the `data page also selects additional data that we do not request for` is quite literally impossible to parse for me. What I can only speculate you're trying to say is the following:
> In a table organized as a heap (no clustered index) all scans have to inspect every row to test a predicate. This will result in lock conflicts.
When concurrent updates occur on a heap (a table without a clustered index) *if there are no nonclustered indexes to consider* then no updates can deadlock. All updates will scan the table in the same order (the heap physical allocation order) and all locks will be acquired in the same order. Updates can block, but not deadlock.
When concurrent updates occur on a heap or a table organized as a clustered index, *but there are non-clustered indexes* then each update can use one (or more) non-clustered index to locate the candidate rows for the update. Updates that do not use the same predicate (ie. different WHERE clauses) can use different NC indexes, in different order. These can deadlock, as the order of acquiring the locks will differ.
When concurrent updates occur on a table organized as a clustered index then deadlocks *can* occur because the application can requests updates explicitly in an order that results in deadlock (Tran 1 updates keys A then B while Tran 2 updates keys B then A).
There are many more ways deadlocks can occur, but these are the basic ways UPDATE vs UPDATE deadlocks can occur.
If you want an answer about a deadlock, any deadlock, in SQL Server [**always start by capturing the deadlock graph**](http://msdn.microsoft.com/en-us/library/ms190465.aspx). Without the deadlock info everything is speculation.
PS. Please don't refer to clustered organized tables as 'primary key'. Primary key is a logical concept. | There are multiple kinds of locks in SQL Server, for different "granularities" of data.
In a perfect world, if you only update one row, indeed it could only hold a row lock and any other row would not be locked.
The engine reserves the right to "escalate" a lock to a larger set, such as a page or even a whole table.
This can be necessary if you're updating a range.
If you have a primary key that is also, perhaps, your clustered index key, AND it is the criterion for finding the row to update... sure, you might always avoid locking too many rows.
UNLESS you have any other indexes defined on the table.
Then you still might have impact across storage units that *might* lead to a deadlock.
---
To understand why you saw a change, remember that in SQL Server, tables are just logical entities, and indexes are the "real" tables.
If you have no indexes at all, the db engine may indeed have no choice but to lock the whole table all the time because it has no basis for locking only a range of rows.
It needs a relevant index in order to create the lock - and spare the other rows.
Once you can say you want to update keys, for example, "1 through 5", then the db engine can define locks on those key values.
And then the rest of the table is not locked! | SQL Server deadlock and Data Pages | [
"",
"sql",
"sql-server",
""
] |
Using the excellent [tSQLt](http://tsqlt.org/) testing framework (v1.0.5137.39257) for MS SQL Server (2012), I'm trying to build a test to check that a UNIQUE INDEX works and generates an exception when a duplicate value is inserted.
After extensive searching, I cannot find a way to apply an index to a table that has been faked (built-in tSQLt proc or extra code). Something like `tSQLt.ApplyIndex` would be required. Has anyone managed to do this?
Another possibility would be to fork the tSQLt code and add a proc based on the code at <http://gallery.technet.microsoft.com/scriptcenter/SQL-Server-Generate-Index-fa790441> to re-create the index on the faked table. However this would be quite a bit of work...
Test conditions (assuming [tSQLt](http://tsqlt.org/) has been installed in the database):
```
-- Create a sample table with a UNIQUE INDEX
SET ANSI_NULLS, QUOTED_IDENTIFIER, ANSI_PADDING ON
GO
CREATE TABLE dbo.tblTestUniqueIndex (
id INT NOT NULL IDENTITY (1, 1), TheField varchar(50) NOT NULL,
CONSTRAINT PK_tblTestUniqueIndex PRIMARY KEY CLUSTERED (id ASC)
) ON [PRIMARY];
GO
CREATE UNIQUE NONCLUSTERED INDEX UX_TestUniqueIndex ON dbo.tblTestUniqueIndex
(TheField ASC)
ON [PRIMARY];
GO
```
Creating the test class, test and running it (of course it fails because the procedure call ApplyIndex does not exist):
```
EXEC tSQLt.NewTestClass 'tests';
GO
CREATE PROCEDURE tests.[test that inserting a duplicate value in tblTestUniqueIndex raises an error]
AS
BEGIN
EXEC tSQLt.FakeTable @TableName='dbo.tblTestUniqueIndex';
-- WE NEED SOMETHING LIKE THIS
--EXEC tSQLt.ApplyIndex @TableName='dbo.tblTestUniqueIndex', @ConstraintName='UX_TestUniqueIndex'
EXEC tSQLt.ExpectException;
INSERT dbo.tblTestUniqueIndex (TheField) VALUES ('Cape Town');
INSERT dbo.tblTestUniqueIndex (TheField) VALUES ('Cape Town');
END;
GO
EXEC tSQLt.Run 'tests.[test that inserting a duplicate value in tblTestUniqueIndex raises an error]'
GO
```
Of course the above test fails without the index working.
Clean-up:
```
DROP PROCEDURE tests.[test that inserting a duplicate value in tblTestUniqueIndex raises an error]
GO
EXEC tSQLt.DropClass 'tests'
GO
DROP TABLE dbo.tblTestUniqueIndex
GO
```
Thanks | Could you not create a unique constraint instead? You'll still get the desired index under the hood. tSQLt.ApplyConstraint works for unique keys the same as it does for primary keys - but only in the very latest version IIRC. For example:
Start with a similar version of the table you have created above, with enough columns for one of each type of unique constraint
```
-- Create a sample table with a UNIQUE INDEX
set ansi_nulls, quoted_identifier, ansi_padding on
go
if object_id('dbo.StackTable') is not null
drop table dbo.StackTable;
create table dbo.StackTable
(
Id int not null identity(1, 1)
, UniqueKeyColumn varchar(50) null
, UniqueIndexColumn int null
);
go
if object_id('PK_StackTable') is null
alter table dbo.StackTable add constraint [PK_StackTable]
primary key clustered (Id);
go
if object_id('AK_StackTable_UniqueKeyColumn') is null
alter table dbo.StackTable add constraint [AK_StackTable_UniqueKeyColumn]
unique nonclustered (UniqueKeyColumn);
go
if object_id('NCI_StackTable_UniqueIndexColumn') is null
create unique nonclustered index [NCI_StackTable_UniqueIndexColumn]
on dbo.StackTable (UniqueIndexColumn);
go
```
Create a new test class (assumes that the latest version 1.0.5325.27056 has already been installed)
```
if schema_id('StackTableTests') is null
exec tSQLt.NewTestClass @ClassName = 'StackTableTests';
go
```
This first test confirms that the Id is constrained to be unique and that constraint is a primary key
```
if object_id('[StackTableTests].[test id is unique]') is not null
drop procedure [StackTableTests].[test id is unique];
go
create procedure [StackTableTests].[test id is unique]
as
begin
exec tSQLt.FakeTable @TableName = 'dbo.StackTable';
exec tSQLt.ApplyConstraint @TableName = 'dbo.StackTable', @ConstraintName = 'PK_StackTable';
--! Add the row we're going to duplicate
insert dbo.StackTable (Id) values (-999);
--! If we insert the same value again, we should expect to see an exception
exec tSQLt.ExpectException @ExpectedErrorNumber = 2627
, @ExpectedMessagePattern = 'Violation of PRIMARY KEY constraint%';
insert dbo.StackTable (Id) values (-999);
end
go
```
This next test also uses `ApplyConstraint` to confirm that the `UniqueKeyColumn` is also constrained to be unique using a unique constraint
```
if object_id('[StackTableTests].[test UniqueKeyColumn is unique]') is not null
drop procedure [StackTableTests].[test UniqueKeyColumn is unique];
go
create procedure [StackTableTests].[test UniqueKeyColumn is unique]
as
begin
exec tSQLt.FakeTable @TableName = 'dbo.StackTable';
exec tSQLt.ApplyConstraint
@TableName = 'dbo.StackTable', @ConstraintName = 'AK_StackTable_UniqueKeyColumn';
--! Add the row we're going to duplicate
insert dbo.StackTable (UniqueKeyColumn) values ('Oops!');
--! If we insert the same value again, we should expect to see an exception
exec tSQLt.ExpectException @ExpectedErrorNumber = 2627
, @ExpectedMessagePattern = 'Violation of UNIQUE KEY constraint%';
insert dbo.StackTable (UniqueKeyColumn) values ('Oops!');
end
go
```
Currently, the only way to test a unique index would be against the real, un-faked table. In this example, the `UniqueKeyColumn` is a `varchar(50)` and as this is the real table, it may already contain data. So we need to be able to specify two unique values so that our test doesn't break the wrong constraint. The simplest way to do this is with a couple of GUIDs.
```
if object_id('[StackTableTests].[test UniqueIndexColumn is unique]') is not null
drop procedure [StackTableTests].[test UniqueIndexColumn is unique];
go
create procedure [StackTableTests].[test UniqueIndexColumn is unique]
as
begin
--! Have to use the real table here as we can't use ApplyConstraint on a unique index
declare @FirstUniqueString varchar(50) = cast(newid() as varchar(50));
declare @NextUniqueString varchar(50) = cast(newid() as varchar(50));
--! Add the row we're going to duplicate
insert dbo.StackTable (UniqueKeyColumn, UniqueIndexColumn) values (@FirstUniqueString, -999);
--! If we insert the same value again, we should expect to see an exception
exec tSQLt.ExpectException @ExpectedErrorNumber = 2601
, @ExpectedMessagePattern = 'Cannot insert duplicate key row in object ''dbo.StackTable'' with unique index%';
insert dbo.StackTable (UniqueKeyColumn, UniqueIndexColumn) values (@NextUniqueString, -999);
end
go
exec tSQLt.Run '[StackTableTests]';
go
```
The potential problem with testing against the real table is that it may have foreign key references to other tables which in turn may have other FK references to other tables and so on. There is no easy way to deal with this but I have implemented a version of the Test Data Builder pattern. The idea behind this is a stored procedure for each entity which automagically takes care of those dependencies (i.e. adds any necessary rows to parent & grand parent tables). Typically, having created all the dependencies, the TDB sproc would make the resulting IDs available as output parameters so the values could be reused.
So if I was using the Test Data Builder pattern, my test might look like this:
```
create procedure [StackTableTests].[test UniqueIndexColumn is unique]
as
begin
--! Have to use the real table here as we can't use ApplyConstraint on a unique index
declare @FirstUniqueString varchar(50) = cast(newid() as varchar(50));
declare @NextUniqueString varchar(50) = cast(newid() as varchar(50));
declare @NewId int;
--! Add the row we're going to duplicate
-- The TDB should output the ID's of any FK references so they
--! can be reused on the next insert
exec TestDataBuilders.StackTableBuilder
@UniqueKeyColumn = @FirstUniqueString
, @UniqueIndexColumn = -999
--! If we insert the same value again, we should expect to see an exception
exec tSQLt.ExpectException @ExpectedErrorNumber = 2601
, @ExpectedMessagePattern = 'Cannot insert duplicate key row in object ''dbo.StackTable'' with unique index%';
insert dbo.StackTable (UniqueKeyColumn, UniqueIndexColumn) values (@NextUniqueString, -999);
end
go
exec tSQLt.Run '[StackTableTests]';
go
```
I wrote a blog post on using the [Test Data Builder pattern](http://datacentricity.net/2011/11/unit-testing-databases-adapting-the-test-data-builder-pattern-for-t-sql/) for SQL a couple of years ago. I hope this helps explain my thinking. | Firstly, I have to admit I like the city you chose in your example. I don't see why you are using FakeTable in this example. Why not just write the test without FakeTable and test against the real table? There are cases where this may be a bit painful, i.e. if you have a table with many required fields, but in truth there are probably better ways to write the test you want. If you are checking a unique constraint or primary key by inserting duplicate rows, I would suggest that you are testing that the functionality of SQL Server itself is working as expected. If I were to write the test, I would test for the existence of the constraint by querying the information\_schema or sys tables. | How to apply an index to a faked table in tSQLt | [
"",
"sql",
"sql-server",
"unit-testing",
"tsqlt",
""
] |
I have the following line in a .sql file from a mysql db:
```
ALTER TABLE lcr_gw ALTER COLUMN ip_addr TYPE VARCHAR(50) DEFAULT NULL;
```
I would like to convert it into syntax that postgresql would understand. In my personal tests, I was only able to get it to work by breaking it down into two separate statements, like so:
```
ALTER TABLE lcr_gw ALTER COLUMN ip_addr TYPE VARCHAR(50);
ALTER TABLE lcr_gw ALTER COLUMN ip_addr SET DEFAULT NULL;
```
Just wondering if there's a way to consolidate the two statements back into one, but one that postgresql will be happy with?
Thanks! | The statement you posted is **not valid syntax at all**:
[SQL Fiddle](http://sqlfiddle.com/#!9/d610d)
To change the type in MySQL, you would use `CHANGE` or `MODIFY`.
To change the default you would use `DROP DEFAULT` or `SET DEFAULT NULL`.
**If** the intention was to change the type and reset the column default:
[Like in MySQL](http://dev.mysql.com/doc/refman/5.6/en/alter-table.html), you can pack multiple actions into a single [`ALTER TABLE`statement in Postgres](http://www.postgresql.org/docs/current/interactive/sql-altertable.html) .
```
ALTER TABLE lcr_gw ALTER COLUMN ip_addr SET DEFAULT NULL
,ALTER COLUMN ip_addr TYPE VARCHAR(50);
```
[Per documentation:](http://www.postgresql.org/docs/current/interactive/sql-altertable.html#AEN67144)
> The main reason for providing the option to specify multiple changes
> in a single `ALTER TABLE` is that multiple table scans or rewrites can
> thereby be combined into a single pass over the table.
But if there was a `DEFAULT` on the column that is incompatible with the new type, you have to run two separate statements:
```
ALTER TABLE lcr_gw ALTER COLUMN ip_addr SET DEFAULT NULL;
ALTER TABLE lcr_gw ALTER COLUMN ip_addr TYPE VARCHAR(50);
```
Doesn't matter in this case anyway. | [The PostgreSQL ALTER TABLE syntax diagram](http://www.postgresql.org/docs/9.3/static/sql-altertable.html) doesn't show any way to combine changing a data type and changing a default value in a single SQL statement. You can't simply omit `set default null` in the general case. For example,
```
create table test (
column_1 char(10) not null default 'a'
);
alter table test alter column column_1 type varchar(50);
insert into test values (default);
select * from test;
```
```
column_1
--
a
```
Instead, either rewrite as two independent statements (which you already know how to do), or as two statements in a single transaction. | converting mysql scripts to postgresql script | [
"",
"mysql",
"sql",
"postgresql",
""
] |
Can I use a **WHERE** clause inside a **CASE** statement as given below?
**CASE WHEN A=1 THEN B WHERE C=0 ELSE A END** | In your case try this
```
CASE WHEN (A=1 AND C=0) THEN B ELSE A END AS Field_Name
``` | You can simply write
```
CASE WHEN A=1 AND C=0 THEN B ELSE A END;
``` | WHERE clause inside CASE statement | [
"",
"mysql",
"sql",
""
] |
Hello, how would I be able to get the AVG of the minimum time across records?
Meaning, I have these records:
```
Id Date Time Amount
1 7/1/14 9:00am 5.00
2 7/1/14 8:45am 6.00
3 7/1/14 9:30am 7.00
4 7/2/14 8:30am 4.50
5 7/2/14 9:15am 5.50
6 7/2/14 7:45am 4.75
```
Now what I need is to get the MIN of each day... so in this case it would be records 2 and 6.
But I want to somehow average those times, meaning 8:45am and 7:45am...
I want to know the average time at which the first sale occurs over a time period. I am not sure how to average out the time. So to speak: over a given period, the first sale generally occurs around, say, 8:20am (estimating).
Thanks very much for any assistance. | To get the average, convert the time to seconds, compute the average and convert it back to a time.
```
SELECT SEC_TO_TIME(AVG(sec)) AS avg_first_sale_time
FROM (SELECT Date, MIN(TIME_TO_SEC(Time)) AS sec
FROM YOUR_TABLE
GROUP BY Date) AS TABLE1;
``` | You can try something like this:
```
SELECT Id, Date, AVG(Time)
FROM (SELECT Id, Date, MIN(Time) AS Time
FROM YOUR_TABLE
GROUP BY Id, Date) AS TABLE1
GROUP BY Id, Date;
```
This might be helpful to you. | Average Time over a period of time in a SQL Query | [
"",
"sql",
"time",
"average",
"min",
""
] |
I have two tables `item` and `status`.
`item` is a table like this:
```
id date status_id
1 10-02-2013 1
2 11-02-2013 1
3 11-03-2013 2
... ... ...
```
`status` is a table like this:
```
id status_name
1 first
2 second
3 three
... ...
```
I'd like as a result something like this:
```
month status total
2 first 2
2 second 0
2 three 0
3 first 0
3 second 1
3 three 0
```
I am using SQLite through web2py:
```
select
month, status.status_name, ifnull(total, 0)
from
status
left join
(select
status.id,
strftime('%m', date) as month,
status_name as status,
count(status_name) as total
from
status
inner join
item
on status.id = item.status_id
group by nombre, web2py_extract('month', date)
) sub
on sub.id = status.id
```
I am using the function `web2py_extract` to group `date` by month.
My result is the next:
```
month status total
2 first 2
3 second 1
```
However, I am not getting the rows with zero totals. What am I doing wrong?
```
select m.month, s.status, coalesce(i.cnt, 0) as cnt
from (select distinct strftime('%m', date) as month from item) m cross join
(select id, status from status) s left outer join
(select strftime('%m', date) as month, status_id, count(*) as cnt
from item i
group by strftime('%m', date), status_id
) i
on i.month = m.month and i.status_id = s.id;
```
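Since the question is about SQLite, the whole pattern can be verified directly with Python's `sqlite3` (dates rewritten as ISO `YYYY-MM-DD` so `strftime` works; rows taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE status (id INTEGER PRIMARY KEY, status_name TEXT);
CREATE TABLE item (id INTEGER PRIMARY KEY, date TEXT, status_id INTEGER);
INSERT INTO status VALUES (1, 'first'), (2, 'second'), (3, 'three');
INSERT INTO item VALUES (1, '2013-02-10', 1), (2, '2013-02-11', 1), (3, '2013-03-11', 2);
""")

# CROSS JOIN builds every (month, status) pair; the LEFT JOIN attaches
# the counts, and COALESCE turns the missing combinations into 0.
rows = con.execute("""
    SELECT m.month, s.status_name, COALESCE(i.cnt, 0) AS total
    FROM (SELECT DISTINCT strftime('%m', date) AS month FROM item) m
    CROSS JOIN status s
    LEFT OUTER JOIN (SELECT strftime('%m', date) AS month, status_id, COUNT(*) AS cnt
                     FROM item
                     GROUP BY 1, 2) i
        ON i.month = m.month AND i.status_id = s.id
    ORDER BY m.month, s.id
""").fetchall()

print(rows)
```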
The `cross join` creates all combinations of month and status. The `left join` brings in the count. The `coalesce()` converts `NULL` to `0` when there is no match. | The table with the non-matching values you want anyway must be on the left side of the LEFT JOIN.
This means you need `item LEFT JOIN status` in the inner query.
(The outer query does not need an outer join.) | How to get 0 in COUNT from different months and status? | [
"",
"sql",
"sqlite",
"count",
"left-join",
"web2py",
""
] |
I have a problem with a query that takes too much time to execute. The query is:
```
SELECT id, name
FROM user
WHERE id IN ( SELECT DISTINCT ( user_id )
FROM `webchat`
WHERE closed = 0 )
```
**webchat.closed** and **user.id** are indexed. The query takes 6 seconds to complete.
But if I do this query:
```
SELECT DISTINCT ( user_id )
FROM `webchat`
WHERE closed = 0
```
It only takes 0.00002 seconds to complete. It returns two results, 16023 and 14020. And if I do this query:
```
SELECT id, name FROM user WHERE id IN (16023, 14020)
```
only takes 0.00004 seconds to complete.
So why does the first query take 6 seconds to finish?
```
SELECT DISTINCT user.id, user.name
FROM user INNER JOIN `webchat` on user.id = `webchat`.user_id
WHERE `webchat`.closed = 0
``` | This is your query:
```
SELECT u.id, u.name
FROM user u
WHERE u.id IN ( SELECT DISTINCT ( user_id ) FROM webchat WHERE closed = 0 )
```
First, the `distinct` is redundant in the subquery. Second, many databases handle `exists` better than `in`. Try this:
```
SELECT u.id, u.name
FROM user u
WHERE EXISTS (select 1 from webchat wc where wc.closed = 0 and wc.user_id = u.id);
```
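A minimal reproduction of the `EXISTS` rewrite, sketched with SQLite through Python's `sqlite3` (the ids come from the question; the names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE webchat (user_id INTEGER, closed INTEGER);
CREATE INDEX webchat_user_closed ON webchat (user_id, closed);
INSERT INTO user VALUES (14020, 'alice'), (16023, 'bob'), (99, 'carol');
INSERT INTO webchat VALUES (16023, 0), (16023, 0), (14020, 0), (99, 1);
""")

# EXISTS can stop probing webchat as soon as one open chat is found
# for a given user, instead of materializing a distinct list first.
rows = con.execute("""
    SELECT u.id, u.name
    FROM user u
    WHERE EXISTS (SELECT 1 FROM webchat wc
                  WHERE wc.closed = 0 AND wc.user_id = u.id)
    ORDER BY u.id
""").fetchall()

print(rows)
```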
An index will help this query. Try the composite index `webchat(user_id, closed)`. | SQL query takes too much time to execute | [
"",
"sql",
""
] |
I have a table that contains names with their classes, like this:
```
Name Class
Jack A
Nick B
Simon C
David B
Linda B
Alice C
```
Right now I want to get a table with classes A, B, and C as columns, containing the names in their respective classes:
```
A----------B----------C
Jack------Nick-------Simon
----------David------Alice
----------Linda-----------
```
How do I get such a table with a SQL query? Sorry for the bad formatting; I don't know how to create a table on SO.
```
select max(case when class = 'a' then name end) as a,
max(case when class = 'b' then name end) as b,
max(case when class = 'c' then name end) as c
from (select name, class, row_number() over (partition by class order by (select NULL)) as seqnum
from nameclasses
) nc
group by seqnum
order by seqnum;
```
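The aggregation method can be exercised end to end with SQLite (which supports window functions from version 3.25) through Python's `sqlite3`; the rows come from the question, and I order by `name` within each class here purely to make the result deterministic:

```python
import sqlite3  # window functions require SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE nameclasses (name TEXT, class TEXT);
INSERT INTO nameclasses VALUES
    ('Jack', 'A'), ('Nick', 'B'), ('Simon', 'C'),
    ('David', 'B'), ('Linda', 'B'), ('Alice', 'C');
""")

# seqnum numbers the rows within each class; grouping by it lines the
# classes up side by side, with NULLs padding the shorter columns.
rows = con.execute("""
    SELECT MAX(CASE WHEN class = 'A' THEN name END) AS a,
           MAX(CASE WHEN class = 'B' THEN name END) AS b,
           MAX(CASE WHEN class = 'C' THEN name END) AS c
    FROM (SELECT name, class,
                 ROW_NUMBER() OVER (PARTITION BY class ORDER BY name) AS seqnum
          FROM nameclasses)
    GROUP BY seqnum
    ORDER BY seqnum
""").fetchall()

print(rows)
```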
This is the original method that I posted. It doesn't use aggregation, but it does a lot of joins:
```
select a.name as a, b.name as b, c.name as c
from (select name, row_number() over (order by (select NULL)) as seqnum
from nameclasses nc
where class = 'A'
) a full outer join
(select name, row_number() over (order by (select NULL)) as seqnum
from nameclasses nc
where class = 'B'
) b
on a.seqnum = b.seqnum full outer join
(select name, row_number() over (order by (select NULL)) as seqnum
from nameclasses nc
where class = 'C'
) c
on c.seqnum = coalesce(a.seqnum, b.seqnum)
order by coalesce(a.seqnum, b.seqnum, c.seqnum);
``` | The data you're asking for isn't really very relational, and in those cases you'll usually get better results from doing the work in your client application. But as that's often not possible:
```
SELECT A.Name as A, B.Name as B, C.Name AS C
FROM
(select Name, row_number() over (order by name) as ordinal from table where class = 'A') A
FULL JOIN
(select Name, row_number() over (order by name) as ordinal from table where class = 'B') B
ON B.ordinal = A.ordinal
FULL JOIN
(select Name, row_number() over (order by name) as ordinal from table where class = 'C') C
ON C.ordinal = coalesce(A.ordinal, B.ordinal)
``` | Select Columns each with specific where condition in SQL | [
"",
"sql",
"sql-server",
""
] |
I'm trying to do a sorting using the following query:
```
SELECT * FROM client
ORDER BY
CASE $desired_colum_to_order
WHEN 'id' then `id`
WHEN 'name' then `name`
ELSE `id`
END
ASC
```
The problem is that this query is ordering ID as a string column, but it is an **Integer Primary Key**.
The results come back in this order:
*10, 11, 12, 13, 5, 6, 7, 8, 9*
If I do the following query, MySQL orders correctly:
```
SELECT * FROM client
ORDER BY
id
ASC
```
Now the results come back in the correct order:
5, 6, 7, 8, 9, 10, 11, 12, 13
Could someone explain me why is MySQL ordering like this, if is the same column in both queries?
Thanks in advance! | **Q: Could someone explain me why is MySQL ordering like this?**
MySQL is "ordering" the rows based on the *result* of the *expression* in the ORDER BY clause.
You've supplied a `CASE` expression in the ORDER BY clause. And that expression returns a single datatype. In this case, it appears that the expression is returning a **`VARCHAR`**, likely based on the datatype of the `name` column.
The optimizer isn't "smart enough" to figure out that the string literal `'id'` following the `CASE` keyword will *always* match the string literal `'id'` following the first `WHEN`. You see that, but the optimizer doesn't.
You read that expression, and you see that the first `WHEN` will be matched for every row, and that the expression always return `id`, and that the expression will never return `name`.
You see that as equivalent to specifying `ORDER BY id`.
But the optimizer doesn't look at it that way. It sees `name` as a possible return value, and it settles on a datatype that "works" for all of the possible return values. In this case, it looks like it's determining that `VARCHAR` is the appropriate datatype to return.
If the return datatype from the CASE expression was integer, then rows would be sorted in ascending numeric value.
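The effect of the datatype choice is the same as sorting in any language; here is a tiny Python sketch of numeric versus string ordering using the ids from the question:

```python
ids = [10, 11, 12, 13, 5, 6, 7, 8, 9]

# Sorting the values as numbers gives the order the asker expected...
as_numbers = sorted(ids)

# ...while sorting their string forms reproduces the "wrong" order that
# the VARCHAR-typed CASE expression produced.
as_strings = sorted(ids, key=str)

print(as_numbers)
print(as_strings)
```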
But MySQL still wouldn't be able to use an index to return the rows in order, it's still going to perform a "Using filesort" operation, because the rows are still being ordered by the result of an expression, rather than a column. | You can split the `order by` into different conditions:
```
SELECT *
FROM client
ORDER BY (CASE WHEN $desired_colum_to_order = 'id' then id END) ASC,
(CASE WHEN $desired_colum_to_order = 'name' then name END) ASC,
id ASC
```
This works because the `case` statements that do not match all return `NULL`, which has no order. If none match, then the final `id` will be the one used. The advantage of different `case` statements is that each column is "type" safe. Also, you can fiddle with the logic to make some `DESC` instead of `ASC`. | Sorting results using "ORDER BY CASE" not using primary index | [
"",
"mysql",
"sql",
"innodb",
""
] |
I am trying to get a count of results found. Individually those two queries work fine, no trouble there.
But when I try to get the combined result (I've used `UNION`), I get the right result, but the result from the second table is appended to the result set.
## For Example
**The result from table one:**
When I use simple queries.
```
array(28) {
[0]=> object(stdClass)#139 (7) {
["date"]=> string(10) "2014-06-16"
["total"]=> string(1) "2"
["emails"]=> string(1) "0"
["calls"]=> string(1) "2"
["qualified"]=> string(1) "2"
}
}
```
**The result from table Two:**
```
array(28) {
[0]=> object(stdClass)#139 (7) {
["date"]=> string(10) "2014-06-16"
["no_answer"]=> string(1) "1"
["voicemail"]=> string(1) "1"
}
}
```
*Only one object per result is shown for simplification.* **Notice** that those two results are from the same date, but they are added twice in the result set of the combined query.
**The combined result (with current function):**
```
array(28) {
[0]=> object(stdClass)#139 (7) {
["date"]=> string(10) "2014-06-16"
["total"]=> string(1) "2"
["emails"]=> string(1) "0"
["calls"]=> string(1) "2"
["qualified"]=> string(1) "2"
["no_answer"]=> string(1) "0"
["voicemail"]=> string(1) "0"
},
[1]=> object(stdClass)#139 (7) {
["date"]=> string(10) "2014-06-16"
["total"]=> string(1) "0"
["emails"]=> string(1) "0"
["calls"]=> string(1) "0"
["qualified"]=> string(1) "0"
["no_answer"]=> string(1) "1"
["voicemail"]=> string(1) "1"
}
```
**The desired result is:**
In the result I want each row to represent a single date (no duplicate dates).
```
[0]=> object(stdClass)#139 (7) {
["date"]=> string(10) "2014-06-16"
["total"]=> string(1) "2"
["emails"]=> string(1) "0"
["calls"]=> string(1) "2"
["qualified"]=> string(1) "2"
["no_answer"]=> string(1) "1"
["voicemail"]=> string(1) "1"
```
## The Full Function
```
//Note: function is simplified
function get_lead_numbers_by_distinct_days($num_days, $user_id){
global $wpdb;
$leads_table = 'leads';
$calls_table = 'calls';
$sql1 = "SELECT Date(date) date,
COUNT(type) total,
SUM(type='email') emails,
SUM(type='call') calls,
SUM(qualified=1) qualified,
SUM('') no_answer,
SUM('') voicemail
FROM $leads_table WHERE user_id = $user_id
AND DATE_SUB(CURDATE(), INTERVAL $num_days DAY) <= date
GROUP BY DATE(date)";
$sql2 = "SELECT Date(date) date,
SUM('') total,
SUM('') emails,
SUM('') calls,
SUM('') qualified,
SUM(status='no-answer') no_answer,
SUM(status='machine') voicemail
FROM $calls_table WHERE user_id = $user_id
AND DATE_SUB(CURDATE(), INTERVAL $num_days DAY) <= date
GROUP BY DATE(date)";
$sql = "$sql1
UNION
$sql2
";
$data = $wpdb->get_results($sql);
return $data;
}
```
Empty `SUM()` is used to match the column counts; otherwise MySQL shows an error. | Finally I had to drop the `UNION` and use `JOIN` instead. Here is the final query.
```
SELECT $leads_table.date date,
COUNT($leads_table.type) total,
SUM($leads_table.type='email') emails,
SUM($leads_table.type='call') calls,
SUM($leads_table.qualified=1) qualified,
SUM($calls_table.status='no-answer') no_answer,
SUM($calls_table.status='machine') voicemail
FROM $leads_table
LEFT JOIN $calls_table
ON ($leads_table.object_id = $calls_table.ID)
WHERE user_id = $user_id
AND DATE_SUB(CURDATE(), INTERVAL $num_days DAY) <= date
GROUP BY $group_by($leads_table.date)
``` | First of all, you don't need to compute a sum if you already know the result. Just replace:
```
SUM('') no_answer,
SUM('') voicemail
```
with
```
0 no_answer,
0 voicemail
```
Secondly, you perform a GROUP BY in two separate queries; in order to get the full result you should perform it on the combined rows.
Therefore your combined query should look like this:
```
SELECT Date(date) date,
COUNT(type) total,
SUM(type='email') emails,
SUM(type='call') calls,
SUM(qualified=1) qualified,
SUM(status='no-answer') no_answer,
SUM(status='machine') voicemail
FROM (
SELECT date, type, qualified, 0 status
FROM leads_table
WHERE user_id = $user_id
AND DATE_SUB(CURDATE(), INTERVAL $num_days DAY) <= date
UNION ALL
SELECT date, 0, 0 , status
FROM calls_table
WHERE user_id = $user_id
AND DATE_SUB(CURDATE(), INTERVAL $num_days DAY) <= date
) a
GROUP BY Date(date);
``` | Combine Result From Two Tables and Distinct By Date/Hour | [
"",
"mysql",
"sql",
""
] |
I have to send a report to management every morning. For that I have created a schedule in SQL Server Agent with the following code.
```
EXEC msdb.dbo.sp_send_dbmail
@recipients='bishnu.bhandari@gmail.com',
@body='Dear sir, <Br>Please find the attachment. <P>Regards<Br> <Br>IT Department',
@subject ='TOURISM-GL( Auto By System) ',
@body_format = 'html',
@profile_name = 'emailserver',
@file_attachments='C:\PUMORI_NEW\001_TOURISMGL_(14072014)_(SOD).TXT'
```
Now the problem is, the file which I need to send as an attachment is generated every day with a new name.
The file name will be in the format:
> 001\_TOURISMGL\_(14072014)\_(SOD).TXT
In the above file name only the date value changes. The date is in ddmmyyyy format.
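To illustrate the naming scheme (a hypothetical Python sketch, not part of the SQL Server job itself; `attachment_name` is a made-up helper), only the ddmmyyyy part changes each day:

```python
from datetime import date

# Made-up helper: only the ddmmyyyy part of the file name changes each day.
def attachment_name(d):
    return "001_TOURISMGL_(%s)_(SOD).TXT" % d.strftime("%d%m%Y")

print(attachment_name(date(2014, 7, 14)))  # 001_TOURISMGL_(14072014)_(SOD).TXT
```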
Now kindly suggest how I can achieve this: how to send the mail automatically with the attachment. | Could you try,
```
declare @pathname varchar(200) = 'C:\PUMORI_NEW\001_TOURISMGL_(' + REPLACE(CONVERT(VARCHAR(10), GETDATE(), 103), '/', '') + ')_(SOD).TXT'; -- style 103 is dd/mm/yyyy, which yields the required ddmmyyyy
EXEC msdb.dbo.sp_send_dbmail
@recipients='bishnu.bhandari@gmail.com',
@body='Dear sir, <Br>Please find the attachment. <P>Regards<Br> <Br>IT Department',
@subject ='TOURISM-GL( Auto By System) ',
@body_format = 'html',
@profile_name = 'emailserver',
@file_attachments=@pathname
``` | Does it have to be inside SQL Server? I do it with a batch file using BLAT (or Postie) with Windows Task Scheduler and SQLCMD (OSQL works too), and it works just fine.
```
@ECHO OFF
@Rem -----------------------------------------------
@ECHO Procedure to e-mail daily Row Counts
C:
cd\reports
@ECHO Running database query, please wait
if not exist COUNTS.log osql -Sserver\instance -Uyouruser -Pyourpass -n -iCOUNTS.sql -oCOUNTS.log -w250
@rem ------------- Mail out report
if exist COUNTS.log Postie.exe -host:smtp.yourcompany.com -to:managers@company.com -from:you@company.com -s:"Quick Counts for Session/Transaction/User" -file:COUNTS.log -msg:"Please reference below for the row counts for Session/Transaction/Users for DB1/DB2/DB3 Databases. This message was sent from %COMPUTERNAME%"
@for /f "tokens=2-4 delims=/ " %%a in ('date /T') do set dirdate=%%a.%%b.%%c
@if exist COUNTS.log copy COUNTS.log LogBkup\COUNTS-%dirdate%.log
@if exist COUNTS.log del COUNTS.log
rem @exit
``` | Send a Mail from SQL server with attachment ( Attachment to be formed dynamically) | [
"",
"sql",
"sql-server",
""
] |
I have a `DateTime` column named `EXP_Date` which contains date like this :
```
2014-07-13 00:00:00.000
```
I want to compare them, like this query :
```
SELECT COUNT(*)
FROM DB
WHERE ('2014-07-15' - EXP_DATE) > 1
```
I expect to see the number of customers who have their services expired for over a month.
I know this query wouldn't give me the correct answer, the best way was if I separate the Year / Month / Day into three columns, but isn't any other way to compare them as they are? | You can use [DATEADD](http://msdn.microsoft.com/en-us/library/ms186819.aspx)
```
SELECT COUNT(*)
FROM DB
where EXP_DATE < DATEADD(month, -1, GETDATE())
``` | Another way using **DATEDIFF**
---
```
SET DATEFORMAT DMY --I like to use "dateformat"
SELECT COUNT(*)
FROM DB
WHERE (DATEDIFF(DAY,@EXP_DATE,GETDATE())) >= 30 -- Remember: instead of DAY you can use week, month, year, etc.
```
**Syntax**: DATEDIFF ( datepart , startdate , enddate )
*Datepart*: year, quarter, month, day, week...
For more information you can visit [MSDN](http://msdn.microsoft.com/en-us/library/ms189794.aspx)
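As a hedged cross-check (SQLite via Python rather than T-SQL, with a pinned "today" of 2014-07-15 and made-up rows), the same 30-day filter can be exercised like this:

```python
import sqlite3

# Illustrative only: SQLite date arithmetic, not T-SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db (exp_date TEXT)")
conn.executemany("INSERT INTO db VALUES (?)",
                 [("2014-07-13",), ("2014-05-01",), ("2014-01-20",)])
# Pin "today" to 2014-07-15 so the example is deterministic.
(count,) = conn.execute("""
    SELECT COUNT(*) FROM db
    WHERE exp_date < date('2014-07-15', '-30 days')
""").fetchone()
print(count)  # 2
```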
--- | Comparing dates in SQL Server | [
"",
"sql",
"sql-server",
"date",
"sql-server-2008-r2",
""
] |
How can I return `datetime` constant with `datetime` type in SQL Server?
The following code returns string type:
```
select '20010528 08:47:00.000' as DateField
``` | ```
select convert(datetime, '20010528 08:47:00.000') as DateField
-- or
select cast('20010528 08:47:00.000' as datetime) as DateField
``` | You may try like this:
```
select cast('20010528 08:47:00.000' as DateTime) as DateField
``` | How can I return datetime constant with datetime type in SQL Server? | [
"",
"sql",
"sql-server",
"datetime",
"constants",
""
] |
I am using MS SQL.
I have the following tables:
```
table A:
id1 data1
8234 ko
2 po
333 koo
40 woo
table B:
id2 data2
123 meow
654 frrr
table C:
id3 data3
10 a
20 b
30 c
40 d
50 e
60 f
```
I want to get this:
```
id1 data1 id2 data2 id3 data3
8234 ko 123 meow 10 a
2 po 654 frrr 20 b
333 koo NULL NULL 30 c
40 woo NULL NULL 40 d
NULL NULL NULL NULL 50 e
NULL NULL NULL NULL 60 f
```
It seems like a full sum of the tables without any conditions. I just want to get all columns and all data from all tables as-is.
How can I do this?
UPD: the tables are not related.
If the tables were related, I would use a LEFT or RIGHT JOIN when it was known in advance which table is larger. But it is unknown. | Try something like this:
```
SELECT *
FROM A
FULL OUTER JOIN B ON 1 = 1
FULL OUTER JOIN C ON 1 = 1
``` | Create an index using `row_number` to use for your full join
```
select * from (
select
row_number() over (order by id1 asc) rn,
id1, data1
from ta
) t1
full join (
select
row_number() over (order by id2 asc) rn,
id2, data2
from tb
) t2 on t1.rn = t2.rn
full join (
select
row_number() over (order by id3 asc) rn,
id3, data3
from tc
) t3 on coalesce(t1.rn, t2.rn) = t3.rn
``` | SQL full join without any conditions | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I wrote this sql query:
```
select first_name, salary
from employees
where salary in( select distinct top(10) salary from employees order by salary disc );
```
When I ran it, I got this error:
SQL Error: ORA-00907: missing right parenthesis
00907. 00000 - "missing right parenthesis"
What could have caused the error? | Top-N query is typically performed this way in Oracle:
```
select * from (
select first_name, salary
from employees order by salary desc
) where rownum <= 10
```
This one gets you top 10 salaries. | I think the problem lies in the use of `top` which is SQL Server and not Oracle.
Use `rank` instead to get the salaries in descending order and take the first 10 of them:
```
select v.first_name, v.salary
from ( select first_name, salary, rank() over (order by salary desc) r from employees) v
where v.r <= 10
``` | Select top 10 salary from employees table | [
"",
"sql",
"oracle",
""
] |
There's a question on an exercise which asks me to write an SQL statement to show the SKU, SKU\_Description, and WarehouseID for all items stored in a warehouse managed by ‘Lucille Smith’. I have to use a join, but not use the JOIN ON syntax.
I came up with:
```
select sku, sku_description, warehouseid
from inventory join warehouse
inventory.warehouseid = warehouse.warehouseid
where manager = 'lucille smith';
```
with the following errors
Error starting at line : 3 in command -
```
select Sku,SKU_DESCRIPTION,WAREHOUSEID
from inventory join warehouse on
inventory.WAREHOUSEID = warehouse.warehouseid
where manager = 'lucille smith'
```
Error at Command Line : 3 Column : 28
Error report -
```
SQL Error: ORA-00918: column ambiguously defined
00918. 00000 - "column ambiguously defined"
*Cause:
*Action:
```
If it helps, I added pictures of the tables. Warehouse: <https://i.stack.imgur.com/lSsyd.png>
inventory: <https://i.stack.imgur.com/gTa7w.png> | You can use the JOIN..USING syntax, as below:
**EDIT**:
Changed the manager name to Title Case
```
select sku, sku_description, warehouseid
from inventory i join warehouse w
using (warehouseid)
where manager = 'Lucille Smith';
```
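A hedged sketch of the same idea (SQLite via Python, with made-up data): `USING` merges the shared column, so the unqualified `warehouseid` is no longer ambiguous:

```python
import sqlite3

# Illustrative only: SQLite, made-up warehouse/inventory rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE warehouse (warehouseid INTEGER, manager TEXT);
    CREATE TABLE inventory (sku TEXT, sku_description TEXT, warehouseid INTEGER);
    INSERT INTO warehouse VALUES (100, 'Lucille Smith'), (200, 'Someone Else');
    INSERT INTO inventory VALUES ('A1', 'widget', 100), ('B2', 'gadget', 200);
""")
rows = conn.execute("""
    SELECT sku, sku_description, warehouseid
    FROM inventory JOIN warehouse USING (warehouseid)
    WHERE manager = 'Lucille Smith'
""").fetchall()
print(rows)  # [('A1', 'widget', 100)]
```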
**Reference**:
[JOINS IN ORACLE on DATAWAREHOUSE CONCEPTS blog](http://dwhlaureate.blogspot.in/2012/08/joins-in-oracle.html) | ORA-00918 tells you that ORACLE can't decide what column to select because both tables have columns with the same name.
To address that you need to use aliases like this:
```
select alias1.common_column, alias2.common_column
from table1 alias1, table2 alias2
```
If you are not allowed to use JOIN ON, you may be able to get away with the old-style syntax where all joining is done in the WHERE clause. | Trouble quering a join in oracle sql | [
"",
"sql",
"oracle",
"join",
""
] |
Using SQL Server 2008 R2, I have a view, vwBASECustomerTransactions, with a list of transactions, which sums the aggregate data for each transaction type for each day:
```
Create View vwBASECustomerTransactions AS
Select CustomerID, 0 as TransType, Sum(Amount) as TransAmount, Cast(ChargeDate as Date) as TransDate
from Charge
Group by CustomerID, Cast(ChargeDate as Date)
UNION ALL
Select CustomerID, 1, Sum(Amount), Cast(AdjustDate as Date)
from Adjustment
Group by CustomerID, Cast(AdjustDate as Date)
UNION ALL
Select CustomerID, 2, Sum(Amount), Cast(PaymentDate as Date)
from Payment
Group by CustomerID, Cast(PaymentDate as Date)
```
The result is:
```
CustomerID | TransType | TransAmount | TransDate
-----------------------------------------------
120 | 0 | 100 | 1/1/2014
120 | 2 | -100 | 1/1/2014
120 | 0 | 50 | 17/2/2014
```
There are over 200,000 CustomerIDs and around 2 million Transactions.
I need to find the date where the Balance was the greatest for each CustomerID.
I have created a set of views with a balance calculated at each transaction date, which simply sums the transactions to that point in time, but it is very slow:
```
--Get the Net Daily Change
Create View vwBASECustomerNetBalChange as
Select CustomerID, TransDate, Sum(TransAmount) as Amount
from vwBASECustomerTransactions
Group by CustomerID, TransDate
--Get the Running Balance for any date
Create View vwRPTCustomerDailyBalance as
Select *, (Select Sum(Amount) from vwBASECustomerNetBalChange Where TransDate <= a.TransDate and CustomerID = a.CustomerID ) as Balance
from vwBASECustomerNetBalChange
--Get the Max Balance for any Customer (join back to get date)
--Takes > 10 minutes to run
Select CustomerID, Max(Balance)
from vwRPTCustomerDailyBalance
group by CustomerID
```
Is there a better, more efficient way? | You can try this one to see if it runs any better than what you are seeing. After I finally figured out what exactly you were trying to do, it is pretty much similar to your version :( The UNION in the first subquery is basically going to select distinct values, which will slow things down.
```
SELECT DistinctDays.CustomerID,DistinctDays.TransDate,TotalAmount=SUM(TotalAmount.TransAmount)
FROM
(
Select CustomerID, Cast(ChargeDate as Date) as TransDate from Charge
UNION
Select CustomerID, Cast(AdjustDate as Date) from Adjustment
UNION
Select CustomerID, Cast(PaymentDate as Date) from Payment
)AS DistinctDays
INNER JOIN
(
Select CustomerID, Amount as TransAmount, ChargeDate as TransDate from Charge
UNION ALL
Select CustomerID, Amount, AdjustDate from Adjustment
UNION ALL
Select CustomerID,Amount, PaymentDate from Payment
)
AS TotalAmount ON TotalAmount.CustomerID=DistinctDays.CustomerID AND TotalAmount.TransDate<=DistinctDays.TransDate
GROUP BY DistinctDays.CustomerID,DistinctDays.TransDate
ORDER BY DistinctDays.CustomerID,DistinctDays.TransDate
``` | This is too long for a comment.
Unfortunately, SQL Server 2008 doesn't directly support cumulative sum. I suspect, though, that there might be more efficient ways to get what you want. SQL Server 2012 *does* support cumulative sum.
I think the best approach would be to use a temporary table rather than a view. On the temporary table, you can add indexes for `CustomerId, date`, which should be a big help for the correlated subquery.
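The running-balance idea itself can be sketched outside SQL (plain Python, made-up figures mirroring the question's sample): accumulate the date-ordered net daily changes per customer and keep the date at which that balance peaks:

```python
from itertools import accumulate

# Conceptual sketch (plain Python, made-up figures) of what the views
# compute: a per-customer running balance over date-ordered net changes,
# then the date at which that balance peaks.
changes = {  # CustomerID -> [(date, net daily change)], already date-ordered
    120: [("2014-01-01", 0), ("2014-02-17", 50)],
}
best = {}
for cust, rows in changes.items():
    balances = list(accumulate(amt for _, amt in rows))
    peak = max(range(len(balances)), key=balances.__getitem__)
    best[cust] = (rows[peak][0], balances[peak])
print(best)  # {120: ('2014-02-17', 50)}
```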
Another approach would be to do the cumulative sum at the table level, inside the view. This can take advantage of indexes at the table level. However, I think the temporary table approach is probably the best solution. | SQL Server Find highest balance from transactions | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
As part of a stress test I'm doing, I'm trying to figure out if there is an SQL query (specifically a SQL Server query) that will max out all CPUs to 100% usage, or close enough.
Suggestions anyone? | ```
SELECT SUM(CONVERT(BIGINT, o1.object_id) + CONVERT(BIGINT, o2.object_id) + CONVERT(BIGINT, o3.object_id) + CONVERT(BIGINT, o4.object_id))
FROM sys.objects o1
CROSS JOIN sys.objects o2
CROSS JOIN sys.objects o3
CROSS JOIN sys.objects o4
```
Here's a parallel version:
```
USE master
SELECT MyInt = CONVERT(BIGINT, o1.object_id) + CONVERT(BIGINT, o2.object_id) + CONVERT(BIGINT, o3.object_id)
INTO #temp
FROM sys.objects o1
JOIN sys.objects o2 ON o1.object_id < o2.object_id
JOIN sys.objects o3 ON o1.object_id < o3.object_id
SELECT SUM(CONVERT(BIGINT, o1.MyInt) + CONVERT(BIGINT, o2.MyInt))
FROM #temp o1
JOIN #temp o2 ON o1.MyInt < o2.MyInt
```
For some reason I cannot get the optimizer to parallelize the first query. So I just materialize some huge tables (~400k rows) and loop-join them. | I've talked at length in [How to analyse SQL Server performance](http://rusanu.com/2014/02/24/how-to-analyse-sql-server-performance/) about why, in practice, your query never 'executes': it is always *waiting* on something (IO, locks).
To create a workload that drives 100% CPU, even on one core, is no small feat. You need to make sure your query always executes and never waits: never blocks for IO (all data must be in memory), never blocks for locks (no contention), never blocks for memory (no grant). You should look at scans of hot in-memory data. An artificial, totally bogus workload that achieves this would probably self-join a medium-size table many times.
Now if you want to do this with a realistic workload, including various operations, then good luck. Achieving 100% CPU is basically the gold standard of benchmarks. You need a super-performant IO subsystem to eliminate all waits, and you need a very fancy test driver to be able to feed the workload fast enough without creating contention. | Trying to create an SQL Query that will max all CPUs to 100% | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"cpu-usage",
""
] |
I have a column of times and want to create a second column to show which 5-minute interval on a 24-hour clock they fall into. For example:
```
15:19:52 becomes 15:15:00
15:20:11 becomes 15:20:00
``` | You can do the following. It builds the time from its hour and minute parts, flooring the minutes to a multiple of 5:
```
SELECT TIMEFROMPARTS(
DATEPART(HOUR, yourTimeField),
DATEPART(MINUTE, yourTimeField) / 5 * 5, 0,
0,
0)
FROM yourTable
```
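A hedged illustration of the flooring step (SQLite via Python rather than T-SQL's TIMEFROMPARTS, using the question's sample times): integer-dividing the minutes by 5 and multiplying back gives the interval start:

```python
import sqlite3

# Illustrative only: SQLite, not SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE times (t TEXT)")
conn.executemany("INSERT INTO times VALUES (?)", [("15:19:52",), ("15:20:11",)])
rows = conn.execute("""
    SELECT t,
           printf('%02d:%02d:00',
                  CAST(strftime('%H', t) AS INTEGER),
                  CAST(strftime('%M', t) AS INTEGER) / 5 * 5)
    FROM times
    ORDER BY t
""").fetchall()
print(rows)  # [('15:19:52', '15:15:00'), ('15:20:11', '15:20:00')]
```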
[Link to SQL Fiddle Example](http://sqlfiddle.com/#!6/34f73/4) | There is a thread I found that may help you begin getting started with this.
[T-SQL: Round to nearest 15 minute interval](https://stackoverflow.com/questions/830792/t-sql-round-to-nearest-15-minute-interval)
However, all the examples here only seem to round up, so you'd need to subtract 5 from the result to get what appears to be what you are seeking. | Grouping a time field into 5 minute periods in SQL server | [
"",
"sql",
"sql-server-2012",
""
] |
First of all, this is kind of a duplicate of:
[GROUP BY having MAX date](https://stackoverflow.com/questions/18221999/group-by-having-max-date/18222124?noredirect=1#comment38513130_18222124)
I am posting the question because the accepted answer doesn't work for me and I have no idea why. My problem:
I want to select the latest (`max(timestamp)`) checksum of all functions (`func_ids`).
The code from @Bill Karwin (accepted answer)
```
SELECT func_id,checksum
FROM Content cnt
INNER JOIN (
SELECT func_id, MAX(timestamp) AS maxdate
FROM Content GROUP BY func_id
) AS max USING (func_id,maxdate);
```
Mysql error:
`#1054 - Unknown column 'maxdate' in 'from clause'`
My table:
```
CREATE TABLE `Content` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`func_id` int(6) NOT NULL,
`description` text CHARACTER SET utf8 NOT NULL,
`returns` varchar(255) CHARACTER SET utf8 NOT NULL,
`var` varchar(255) CHARACTER SET utf8 NOT NULL,
`content` text CHARACTER SET utf8 NOT NULL,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`checksum` varchar(40) CHARACTER SET utf8 DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `func_id` (`func_id`),
KEY `var` (`var`),
KEY `checksum` (`checksum`),
FULLTEXT KEY `description` (`description`)
) ENGINE=MyISAM AUTO_INCREMENT=885 DEFAULT CHARSET=latin1
``` | As I understand the syntax in MySQL, when you use USING for an inner join the columns need to be named the same in both tables. There is no column named maxdate in the Content table, so the error is raised. You may be able to try (if I understand things correctly):
```
SELECT func_id,checksum
FROM Content cnt
INNER JOIN (
SELECT func_id, MAX(timestamp) AS maxdate
FROM Content GROUP BY func_id
) AS max ON (cnt.func_id=max.func_id AND max.maxdate=cnt.timestamp);
``` | Use an `on` clause instead of `using`:
```
SELECT func_id,checksum
FROM Content cnt INNER JOIN
(SELECT func_id, MAX(timestamp) AS maxtimestamp
FROM Content
GROUP BY func_id
) m
on m.func_id = cnt.func_id and m.maxtimestamp = cnt.timestamp;
``` | Group by max(time) mysql | [
"",
"mysql",
"sql",
""
] |
I have 2 tables which look like:
The `money` table looks like the following:
```
id type cash
1 54 3.23
2 293 1.12
3 181 4.00
```
The `plus` table looks like the following:
```
id money_id bonus
1 3 0.50
2 2 0.10
```
What I basically want is to select ALL data from the `money` table, including `money.cash` with the added bonus from the `plus` table, if they have any, and I want to select only those records, which have more than 1.50 cash (including the bonus).
So the result I'd like to achieve is:
```
id type cash_full
1 54 3.23
2 181 4.50
```
I have tried to write the query, but it always raises an error when I include `cash_full` in the WHERE clause. Other than that, it works flawlessly; I just can't filter the query with the `cash_full` column.
```
SELECT mo.*,
IFNULL
(
mo.cash +
(
SELECT bonus
FROM plus as pu
WHERE mo.id = pu.money_id
), mo.cash
) as cash_full
FROM `money` as mo
WHERE cash_full >= 1.5
```
So how is it possible that this query is not working? Is there any solution for my problem? | *1. Assuming you can have more than one bonus per money id :*
You can use an alias in a HAVING clause (with MySQL), after having grouped the values from the money table:
```
select mo.id, mo.type, mo.cash + sum(coalesce(p.bonus, 0)) as cash_full
from money mo
left join plus p on p.money_id = mo.id
group by mo.id, mo.type, mo.cash
having cash_full > 1.5
```
With another DB, you would have to use a subquery, or repeat the "aliased" operation in the HAVING clause: something like
```
having mo.cash + sum(coalesce(p.bonus, 0)) > 1.5
```
see [SqlFiddle](http://sqlfiddle.com/#!2/ae73c/5)
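A hedged end-to-end check of variant 1 (SQLite via Python instead of MySQL, using the question's sample rows); the alias defined in the SELECT list is referenced in HAVING:

```python
import sqlite3

# Illustrative only: SQLite instead of MySQL, using the question's rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE money (id INTEGER, type INTEGER, cash REAL);
    CREATE TABLE plus (id INTEGER, money_id INTEGER, bonus REAL);
    INSERT INTO money VALUES (1, 54, 3.23), (2, 293, 1.12), (3, 181, 4.00);
    INSERT INTO plus VALUES (1, 3, 0.50), (2, 2, 0.10);
""")
rows = conn.execute("""
    SELECT mo.id, mo.type, mo.cash + SUM(COALESCE(p.bonus, 0)) AS cash_full
    FROM money mo
    LEFT JOIN plus p ON p.money_id = mo.id
    GROUP BY mo.id, mo.type, mo.cash
    HAVING cash_full > 1.5
    ORDER BY mo.id
""").fetchall()
print(rows)  # [(1, 54, 3.23), (3, 181, 4.5)]
```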
*2. Assuming you can't have more than one bonus per money id*
You can use a subquery to avoid repeating the addition:
```
select id, type, cash_full
from (
select mo.id, mo.type, mo.cash + coalesce(p.bonus, 0) as cash_full
from money mo
left join plus p on p.money_id = mo.id) s
where cash_full > 1.5;
``` | A simple LEFT JOIN will do it:-
```
SELECT a.id, a.type, a.cash + IFNULL(SUM(b.bonus), 0) AS cash_bonus
FROM money a
LEFT OUTER JOIN plus b
ON a.id = b.money_id
GROUP BY a.id, a.type
HAVING cash_bonus > 1.50
```
Note that this is a bit strange as it has a non aggregate field that isn't in the GROUP BY clause. This should be fine in MySQL but might cause issues in other flavours of SQL. In which case a LEFT JOIN to a sub query might be necessary:-
```
SELECT a.id, a.type, a.cash + IFNULL(b.tot_bonus, 0) AS cash_bonus
FROM money a
LEFT OUTER JOIN
(
SELECT money_id, SUM(bonus) AS tot_bonus
FROM plus
GROUP BY money_id
) b
ON a.id = b.money_id
WHERE a.cash + IFNULL(b.tot_bonus, 0) > 1.50
``` | How to select a sub-query with MySQL using the WHERE part? | [
"",
"mysql",
"sql",
""
] |
Say I have a table with `created`, `uploaded`, `processed` and `archived` columns, all of which accept a date. However, I want the fields to be updated in chronological order so you can't skip a step, but the trailing steps don't have to be set until they're ready (i.e., created <= uploaded <= processed <= archived). I implemented this like so:
```
create table batch (
id number primary key,
created date default sysdate not null,
uploaded date,
processed date,
archived date,
check (uploaded is null or uploaded >= created),
check (processed is null or processed >= uploaded),
check (archived is null or archived >= processed)
);
```
This works, but it allows you to skip steps. For example, I can insert an arbitrary value for `processed` if `uploaded` is still `null`. I presume this to be because the `null` comparison is always vacuous. Do I therefore have to do something like:
```
create table batch (
id number primary key,
created date default sysdate not null,
uploaded date,
processed date,
archived date,
check (uploaded is null or uploaded >= created),
check (processed is null or (uploaded is not null and processed >= uploaded)),
check (archived is null or (processed is not null and archived >= processed))
);
```
Is there a neater way? | In the end, I went for my own solution:
```
create table batch (
id number primary key,
created date default sysdate not null,
uploaded date,
processed date,
archived date,
check (uploaded is null or uploaded >= created),
check (processed is null or (uploaded is not null and processed >= uploaded)),
check (archived is null or (processed is not null and archived >= processed))
);
```
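A hedged demonstration of why the `is not null` guards matter (SQLite via Python, with a trimmed two-step version of the schema): a bare comparison against NULL is neither true nor false, so the guarded check is what actually rejects a skipped step:

```python
import sqlite3

# Illustrative only: SQLite, trimmed two-step version of the schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE batch (
        id INTEGER PRIMARY KEY,
        created TEXT NOT NULL,
        uploaded TEXT,
        processed TEXT,
        CHECK (uploaded IS NULL OR uploaded >= created),
        CHECK (processed IS NULL OR (uploaded IS NOT NULL AND processed >= uploaded))
    )
""")
conn.execute("INSERT INTO batch (id, created) VALUES (1, '2014-01-01')")
try:
    # Setting 'processed' while 'uploaded' is still NULL is now rejected.
    conn.execute("UPDATE batch SET processed = '2014-01-02' WHERE id = 1")
    skipped_step_allowed = True
except sqlite3.IntegrityError:
    skipped_step_allowed = False
print(skipped_step_allowed)  # False
```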
Granted that this constrains the workflow to this static schema, as mentioned by Jorge, but that's a tradeoff I'm willing to accept! | I think your design is sound. I would split the checks for chronological order and the NULL checks, but this is only a question of style, not logic.
```
create table batch (
id number primary key,
created date default sysdate not null,
uploaded date,
processed date,
archived date,
CHECK (created <= uploaded),
CHECK (uploaded <= processed),
CHECK (processed <= archived),
CHECK (uploaded IS NOT NULL OR processed IS NULL),
CHECK (processed IS NOT NULL OR archived IS NULL)
);
``` | Sequential constraints on table columns | [
"",
"sql",
"oracle",
"constraints",
""
] |
I have a model that defines mutually recursive tables:
```
Answer
questionId QuestionId
text
Question
text
correct AnswerId
```
What do I need to do to actually insert a question? I need to know what the correct answer is first. But to insert an answer, I need to know what question it answers.
I'm running Postgres, if it matters.
The DDL is:
```
CREATE TABLE answer (
id integer NOT NULL, -- answer id
text character varying NOT NULL, -- answer text
question_id bigint NOT NULL -- question id
);
CREATE TABLE question (
id integer NOT NULL, -- question id
question character varying NOT NULL, -- question text
correct bigint NOT NULL, -- correct answer
solution character varying NOT NULL -- solution text
);
ALTER TABLE ONLY answer ALTER COLUMN id SET DEFAULT nextval('answer_id_seq'::regclass);
ALTER TABLE ONLY answer
ADD CONSTRAINT answer_question_id_fkey FOREIGN KEY (question_id) REFERENCES question(id);
ALTER TABLE ONLY question ALTER COLUMN id SET DEFAULT nextval('question_id_seq'::regclass);
ALTER TABLE ONLY question
ADD CONSTRAINT question_correct_fkey FOREIGN KEY (correct) REFERENCES answer(id);
``` | If you enter question and answer in a **single statement** with a [data-modifying CTE](https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING), you do not even need a `DEFERRABLE` FK constraints. Not to speak of actually making (or `SET`ting) them `DEFERRED` - which would be a lot more expensive.
### Data model
First I cleaned up your data model:
```
CREATE TABLE question (
question_id serial PRIMARY KEY
, correct_answer_id int NOT NULL
, question text NOT NULL
, solution text NOT NULL
);
CREATE TABLE answer (
answer_id serial PRIMARY KEY
, question_id int NOT NULL REFERENCES question
, answer text NOT NULL
);
ALTER TABLE question ADD CONSTRAINT question_correct_answer_id_fkey
FOREIGN KEY (correct_answer_id) REFERENCES answer(answer_id);
```
* Don't use the non-descriptive "id" or "text" (also a basic type name) as column names.
* Put integer columns first for space efficiency. See:
+ [Calculating and saving space in PostgreSQL](https://stackoverflow.com/questions/2966524/calculating-and-saving-space-in-postgresql/7431468#7431468)
* `bigint` was uncalled for, `integer` should suffice.
* Simplify your schema definition with [`serial` columns](https://stackoverflow.com/a/9875517/939860).
* Define primary keys. PK columns are `NOT NULL` automatically.
### Solution
After delegating primary key generation to sequences (`serial` columns), we can get auto-generated IDs with the `RETURNING` clause of the `INSERT` statement. But in this special case we need *both* IDs for each `INSERT`, so I fetch one of them with `nextval()` to get it going.
```
WITH q AS (
INSERT INTO question
(correct_answer_id , question, solution)
VALUES (nextval('answer_answer_id_seq'), 'How?' , 'DEFERRABLE FK & CTE')
RETURNING correct_answer_id, question_id
)
INSERT INTO answer
(answer_id , question_id, answer)
SELECT correct_answer_id, question_id, 'Use DEFERRABLE FK & CTE'
FROM q;
```
I *know* the name of the sequence (`'answer_answer_id_seq'`) because I looked it up. It's the default name. If you don't *know* it use the safe form [@IMSoP provided in a comment](https://stackoverflow.com/questions/24813000/sql-how-to-deal-with-mutually-recursive-inserts/24816197?noredirect=1#comment38533095_24816197):
```
nextval(pg_get_serial_sequence('answer', 'answer_id'))
```
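The deferred-checking alternative can also be sketched outside Postgres (SQLite via Python, simplified schema, made-up rows): with a `DEFERRABLE INITIALLY DEFERRED` FK, both mutually referencing rows go in within one transaction and the check only runs at commit:

```python
import sqlite3

# Illustrative only: SQLite, not Postgres; simplified two-table schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE question (
        question_id INTEGER PRIMARY KEY,
        correct_answer_id INTEGER NOT NULL
            REFERENCES answer (answer_id) DEFERRABLE INITIALLY DEFERRED,
        question TEXT NOT NULL
    );
    CREATE TABLE answer (
        answer_id INTEGER PRIMARY KEY,
        question_id INTEGER NOT NULL REFERENCES question (question_id),
        answer TEXT NOT NULL
    );
""")
with conn:  # one transaction; the deferred FK is only checked at commit
    conn.execute("INSERT INTO question VALUES (1, 10, 'How?')")
    conn.execute("INSERT INTO answer VALUES (10, 1, 'Like this')")
n = conn.execute("SELECT COUNT(*) FROM question").fetchone()[0]
print(n)  # 1
```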
### `DEFERRABLE` or `DEFERRED` constraints?
[The manual on `SET CONSTRAINTS`:](https://www.postgresql.org/docs/current/sql-createtable.html#id-1.9.3.85.9.4)
> `IMMEDIATE` constraints are checked at the end of each statement.
My solution is a *single* statement. That's why it works where two separate statements would fail - wrapped in a single transaction or not. And you'd need `SET CONSTRAINTS ... DEFERRED;` like [IMSoP first commented](https://stackoverflow.com/questions/24813000/sql-how-to-deal-with-mutually-recursive-inserts/24816197#comment38519912_24813000) and [@Jaaz implemented in his answer](https://stackoverflow.com/a/24814737/939860).
However, note the disclaimer some paragraphs down:
> Uniqueness and exclusion constraints that have not been declared
> `DEFERRABLE` are also checked immediately.
So `UNIQUE` and `EXCLUDE` need to be `DEFERRABLE` to make CTEs work for them. This includes `PRIMARY KEY` constraints. [The documentation on `CREATE TABLE` has more details](https://www.postgresql.org/docs/current/sql-createtable.html#id-1.9.3.85.9.4):
> Non-deferred Uniqueness Constraints
>
> When a `UNIQUE` or `PRIMARY KEY` constraint is not deferrable, PostgreSQL
> checks for uniqueness immediately whenever a row is inserted or
> modified. The SQL standard says that uniqueness should be enforced
> only at the end of the statement; this makes a difference when, for
> example, a single command updates multiple key values. To obtain
> standard-compliant behavior, declare the constraint as `DEFERRABLE` but
> not deferred (i.e., `INITIALLY IMMEDIATE`). Be aware that this can be
> significantly slower than immediate uniqueness checking.
We discussed this in great detail under this related question:
* [Constraint defined DEFERRABLE INITIALLY IMMEDIATE is still DEFERRED?](https://stackoverflow.com/questions/10032272/constraint-defined-deferrable-initially-immediate-is-still-deferred) | I went looking around after seeing the DDL. Consider one function to insert a question with its correct answer, and another to add (false) answers to a given question. The structure of the first function lets the application pick up the anonymously returned record for the questionID and use it in subsequent calls to the second function to add false answers.
```
CREATE FUNCTION newQuestion (questionText varchar, questionSolutionText varchar, answerText varchar, OUT questionID integer) AS $$
BEGIN
START TRANSACTION;
SET CONSTRAINTS question_correct_fkey DEFERRED;
questionID := nextval('question_id_seq');
answerID := nextval('answer_id_seq');
INSERT INTO question (id, question, correct, solution) values (questionID, questionText, answerID, questionSolutionText);
INSERT INTO answer (id, text, question_id) values (answerID, answerText, questionID);
SET CONSTRAINTS question_correct_fkey IMMEDIATE;
COMMIT TRANSACTION;
END;
$$
CREATE FUNCTION addFalseAnswer (questionID integer, answerText varchar) AS $$
BEGIN
INSERT INTO answer (text, question_id) VALUES (answerText, questionID);
END;
$$
```
I've not written SQL for PostgreSQL in a long while, so I hope all is in order here. Please let me know if there are any issues. | How to deal with mutually dependent inserts | [
"",
"sql",
"postgresql",
"database-design",
"foreign-keys",
"referential-integrity",
""
] |
I'm using **Amazon Redshift**. I need to get the **MAX** date in a column, month-wise. An example is below.
There are 5 tables:
```
vendor
vendor_pkg
vendor_pkg_category
vendor_load
vendor_load_status
vendor V
vendor_id vendor_name
-----------------------
1 L&T
2 Reuters
3 IBM
4 INfosys
vendor_pkg VP
vendor_pkg_id vendor_pkg_category_id vendor_pkg_name vendor_id
------------------------------------------------------------------
1 1 Futures 1
2 1 Fairvalue 1
3 3 Equities 1
4 2 MBS 1
5 2 INTL Price 2
6 4 Muni 2
vendor_pkg_category VPC
vendor_pkg_category_id category_name
-------------------------------------
1 Price
2 Security
3 Rating
4 value
Vendor_load VL
vendor_load_id eval_date load_status_id vendor_pkg_id
---------------------------------------------------------
1 2014-06-05 1 1
2 2014-06-20 1 1
3 2014-07-05 2 2
4 2014-07-20 1 2
5 2014-06-05 2 3
6 2014-06-20 2 3
7 2014-07-05 1 4
8 2014-07-20 2 4
vendor_load_status VLS
load_status_id load_status_name
--------------------------------
1 Success
2 Failed
```
Result table should be like this:
```
v.vendor vpc.category_name vp.ven_pkg_name vl.eval_date vls.status_name
---------------------------------------------------------------------------
L&T Price futures 2014-06-20 Success
L&T Price fairvalue 2014-07-20 Success
L&T Security MBS 2014-07-20 Failed
L&T Rating Equities 2014-06-20 Failed
```
I use the following query, but it displays the data for one month only:
```
SELECT DISTINCT v.vendor_name AS vendor,
vpc.category_name AS V_Type,
vp.vendor_pkg_name AS Package_name,
vl.eval_date AS C_Date,
vls.load_status_name AS Status
FROM ces_idw.vendor v,
ces_idw.vendor_pkg_category vpc,
ces_idw.vendor_load vl,
ces_idw.vendor_pkg vp,
ces_idw.vendor_load_status vls
WHERE (vl.eval_date) IN (SELECT DISTINCT MAX(vl.eval_date)
FROM ces_idw.vendor_load vl
WHERE v.vendor_id = vp.vendor_id
and v.vendor_name = 'IDC'
AND vp.vendor_pkg_id = vl.vendor_pkg_id
AND TO_CHAR(vl.eval_date,'yyyy-mm') = '2014-06'
GROUP BY vl.vendor_pkg_id,
v.vendor_name)
AND vp.vendor_pkg_category_id = vpc.vendor_pkg_category_id
AND vp.vendor_pkg_id = vl.vendor_pkg_id
AND vl.load_status_id = vls.load_status_id
ORDER BY vp.vendor_pkg_name
```
When I use `TO_CHAR(vl.eval_date,'yyyy-mm') between '2014-06' and '2014-07'`, it shows the result for `'2014-07'` only. | I found the answer to my question:
```
SELECT DISTINCT v.vendor_name AS vendor,
vpc.category_name AS V_Type,
vp.vendor_pkg_name AS Package_name,
vl.eval_date AS C_Date,
vls.load_status_name AS Status
FROM ces_idw.vendor v,
ces_idw.vendor_pkg_category vpc,
ces_idw.vendor_load vl,
ces_idw.vendor_pkg vp,
ces_idw.vendor_load_status vls
WHERE (vl.eval_date) IN (
SELECT DISTINCT MAX(vl.eval_date)
FROM ces_idw.vendor_load vl
WHERE v.vendor_id = vp.vendor_id
AND v.vendor_name = 'L&T'
AND vp.vendor_pkg_id = vl.vendor_pkg_id
AND (TO_CHAR(vl.eval_date,'yyyy-mm') between '2013-01' and '2015-12')
GROUP BY extract(month from vl.eval_date),vl.vendor_pkg_id, v.vendor_name
)
AND vp.vendor_pkg_category_id = vpc.vendor_pkg_category_id
AND vp.vendor_pkg_id = vl.vendor_pkg_id
AND vl.load_status_id = vls.load_status_id
ORDER BY vp.vendor_pkg_name
```
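As a side note, on Redshift a window function can produce the latest row per package per month directly, without the correlated `IN` subquery - a sketch against the same tables (the `'L&T'` filter just mirrors the sample output):
```
SELECT vendor, v_type, package_name, c_date, status
FROM (
    SELECT v.vendor_name        AS vendor,
           vpc.category_name    AS v_type,
           vp.vendor_pkg_name   AS package_name,
           vl.eval_date         AS c_date,
           vls.load_status_name AS status,
           ROW_NUMBER() OVER (PARTITION BY vl.vendor_pkg_id,
                                           DATE_TRUNC('month', vl.eval_date)
                              ORDER BY vl.eval_date DESC) AS rn
    FROM ces_idw.vendor v
    JOIN ces_idw.vendor_pkg vp
      ON vp.vendor_id = v.vendor_id
    JOIN ces_idw.vendor_load vl
      ON vl.vendor_pkg_id = vp.vendor_pkg_id
    JOIN ces_idw.vendor_pkg_category vpc
      ON vpc.vendor_pkg_category_id = vp.vendor_pkg_category_id
    JOIN ces_idw.vendor_load_status vls
      ON vls.load_status_id = vl.load_status_id
    WHERE v.vendor_name = 'L&T'
) t
WHERE rn = 1
ORDER BY package_name;
```
`rn = 1` keeps only the latest `eval_date` within each package/month partition.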
Thanks to everyone! | Based on your sample data, I wrote a query that gives the result set you mentioned:
```
DECLARE @exp table (ID INT,Name VARCHAR(10))
INSERT INTO @exp (ID,Name) VALUES (1,'PRICE')
INSERT INTO @exp (ID,Name) VALUES (2,'STOCK')
INSERT INTO @exp (ID,Name) VALUES (3,'INCOME')
INSERT INTO @exp (ID,Name) VALUES (4,'LOAD')
INSERT INTO @exp (ID,Name) VALUES (5,'INITIAL')
DECLARE @exp1 table (ID INT,PID INT,Name VARCHAR(10),Dated Date)
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (1,1,'PRICE','2014-08-05')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (2,1,'PRICE','2014-08-09')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (3,2,'STOCK','2014-08-05')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (4,2,'STOCK','2014-08-05')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (5,3,'INCOME','2014-08-10')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (6,3,'INCOME','2014-08-20')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (7,4,'LOAD','2014-08-10')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (8,4,'LOAD','2014-08-19')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (9,5,'INITIAL','2014-08-05')
INSERT INTO @exp1 (ID,PID,Name,Dated) VALUES (10,5,'INITIAL','2014-08-05')
SELECT DISTINCT groupedtt.ID,groupedtt.PID,tt.Name,groupedtt.MaxDateTime
FROM @exp tt
INNER JOIN
(SELECT ID,PId, MAX(dated) AS MaxDateTime,DENSE_RANK()OVER (PARTITION BY PID ORDER BY ID )RN
FROM @exp1
GROUP BY PId,ID) groupedtt
ON tt.id = groupedtt.PId AND
RN = 2
``` | Get max values in PostgreSQL 8.0 | [
"",
"sql",
"postgresql",
"join",
"date-range",
"amazon-redshift",
""
] |
I need your support: today I got a scenario that I need to implement.
The scenario: an email address consists of a username and an ID, which need to be split and inserted into a table as two different columns, i.e. user id and username.
`ex: ABC123@xyz.com, AAACC2356@mnc.com`
`ABC is to be inserted in the table under the username column and 123 in the user id column`
Thanks
Narendra | You can try this:
```
DECLARE @Email VARCHAR(100)= 'sample12312@test.com'
SET @Email = STUFF(@Email,CHARINDEX('@',@Email),LEN(@Email), '')
SELECT SUBSTRING(@Email, 1, PATINDEX('%[0-9]%',@Email)-1) AS Username,
SUBSTRING(@Email,PATINDEX('%[0-9]%',@Email), LEN(@Email)) AS UserID
```
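The same expressions can be applied set-based to a whole table; the table and column names below (`Emails`, `Users`) are hypothetical:
```
INSERT INTO Users (Username, UserID)
SELECT SUBSTRING(x.Acct, 1, PATINDEX('%[0-9]%', x.Acct) - 1),
       SUBSTRING(x.Acct, PATINDEX('%[0-9]%', x.Acct), LEN(x.Acct))
FROM Emails e
CROSS APPLY (SELECT STUFF(e.Email, CHARINDEX('@', e.Email), LEN(e.Email), '')) AS x(Acct);
-- x.Acct is the address with everything from '@' onwards removed
```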
OR
```
SELECT SUBSTRING(@Email, 1, PATINDEX('%[0-9]%',@Email)-1) AS Username,
SUBSTRING(STUFF(@Email,CHARINDEX('@',@Email),LEN(@Email), ''), PATINDEX('%[0-9]%',@Email), LEN(@Email)) AS UserID
``` | ```
Declare @X varchar(100)
Select @X= 'AAACC2356@mnc.com'
Select @X= SubString(@X,PATINDEX('%[0-9]%',@X),Len(@X))
Select @X= SubString(@X,0,PATINDEX('%[^0-9]%',@X))
Select @X
Declare @Y varchar(100)
Select @Y = 'AAACC2356@mnc.com'
Select @Y= SubString(@Y,PATINDEX('%[A-Z]%',@Y),Len(@Y))
Select @Y= SubString(@Y,0,PATINDEX('%[^A-Z]%',@Y))
Select @Y
``` | Extracting usernames and userid from E-mail IDs using Ms-SQL | [
"",
"sql",
"sql-server",
""
] |
I have a problem with storing multi-level categorical data. A category can be nested to any depth, and I don't think it is a good idea to create more tables with relationships for each level. What is the best way of storing this kind of categorical data?
```
ex categories:
-MainCategory1
-subcategory1
-subcategory11
-subcategory12
-subcategory13
--subcategory131
-subcategory2
-subcategory21
-subcategory22
-subcategory221
-subcategory23
-subcategory231
-subcategory2311
-MainCategory2
-subcategory21
-subcategory211
-subcategory2131
-subcategory2131
-subcategory212
-subcategory213
-subcategory2131
``` | You need to define parent child structure
```
CREATE TABLE CATEGORIES (ID INT, PARENT_ID INT, NAME VARCHAR(100))
```
Then you select the categories that have no `PARENT_ID`:
```
SELECT * FROM CATEGORIES WHERE PARENT_ID IS NULL
```
These are the masters (roots); then, on each layer, you select
```
SELECT C.* FROM CATEGORIES C
INNER JOIN CATEGORIES C1 ON C1.PARENT_ID = C.ID
```
to get the children of the current record.
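Alternatively, a recursive CTE can walk the whole tree in one statement (SQL Server 2005+), carrying a computed depth per row:
```
WITH TREE AS (
    SELECT ID, PARENT_ID, NAME, 0 AS DEPTH
    FROM CATEGORIES
    WHERE PARENT_ID IS NULL
    UNION ALL
    SELECT C.ID, C.PARENT_ID, C.NAME, T.DEPTH + 1
    FROM CATEGORIES C
    INNER JOIN TREE T ON C.PARENT_ID = T.ID
)
SELECT * FROM TREE
```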
And then insert into categories
```
INSERT INTO CATEGORIES
SELECT 1, NULL, 'MainCategory1'
UNION ALL SELECT 10, 1, 'subcategory1'
UNION ALL SELECT 11, 10, 'subcategory11'
UNION ALL SELECT 12, 10, 'subcategory12'
UNION ALL SELECT 13, 10, 'subcategory13'
UNION ALL SELECT 131, 13, 'subcategory131'
UNION ALL SELECT 2, 1, 'subcategory2'
-- ...AND SO ON
``` | One common practice would be to create a single table where each category has an id, a name and a parent id (with top categories having parent id of `null`):
```
CREATE TABLE categories (
id NUMERIC PRIMARY KEY,
name VARCHAR(100),
parent_id NUMERIC FOREIGN KEY REFERENCES categories(Id)
)
```
Some of your data, e.g., would look like this:
```
INSERT INTO categories VALUES (1, 'MainCategory1', null);
INSERT INTO categories VALUES (2, 'subcategory1', 1);
``` | How to store cascade categories in DB | [
"",
"sql",
"sql-server",
"database",
"stored-procedures",
"categories",
""
] |
I am working on a file loader program.
The purpose of this program is to take an input file, do some conversions on its data, and then upload the data into an Oracle database.
The problem that I am facing is that I need to optimize the insertion of very large input data on Oracle.
I am uploading data into the table, lets say ABC.
I am using the OCI library provided by Oracle in my C++ Program.
Specifically, I am using the OCI Connection Pool for multi-threading and loading into Oracle (<http://docs.oracle.com/cd/B28359_01/appdev.111/b28395/oci09adv.htm>).
The following are the DDL statements that have been used to create the table ABC –
```
CREATE TABLE ABC(
seq_no NUMBER NOT NULL,
ssm_id VARCHAR2(9) NOT NULL,
invocation_id VARCHAR2(100) NOT NULL,
analytic_id VARCHAR2(100) NOT NULL,
analytic_value NUMBER NOT NULL,
override VARCHAR2(1) DEFAULT 'N' NOT NULL,
update_source VARCHAR2(255) NOT NULL,
last_chg_user CHAR(10) DEFAULT USER NOT NULL,
last_chg_date TIMESTAMP(3) DEFAULT SYSTIMESTAMP NOT NULL
);
CREATE UNIQUE INDEX ABC_indx ON ABC(seq_no, ssm_id, invocation_id, analytic_id);
/
CREATE SEQUENCE ABC_seq;
/
CREATE OR REPLACE TRIGGER ABC_insert
BEFORE INSERT ON ABC
FOR EACH ROW
BEGIN
SELECT ABC_seq.nextval INTO :new.seq_no FROM DUAL;
END;
```
I am currently using the following Query pattern to upload the data into the database. I am sending data in batches of 500 queries via various threads of OCI connection pool.
Sample of SQL insert query used -
```
insert into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source)
select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual
```
EXECUTION PLAN by Oracle for the above query -
```
-----------------------------------------------------------------------------
| Id | Operation | Name|Rows| Cost (%CPU) | Time |
-----------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 4 | 8 (0) | 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | ABC | | | |
| 2 | UNION-ALL | | | | |
| 3 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
| 4 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
| 5 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
| 6 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
```
The Run times of the program loading 1 million lines -
```
Batch Size = 500
Number of threads - Execution Time -
10 4:19
20 1:58
30 1:17
40 1:34
45 2:06
50 1:21
60 1:24
70 1:41
80 1:43
90 2:17
100 2:06
Average Run Time = 1:57 (Roughly 2 minutes)
```
I need to optimize and reduce this time further. The problem I am facing arises when I upload 10 million rows.
The average run time for **10 million** came out to be = **21 minutes**
**(My target is to reduce this time to below 10 minutes)**
So I tried the following steps as well -
[1]
Did the partitioning of the table ABC on the basis of **seq\_no**.
Used **30 partitions**.
Tested with **1 million rows** - the performance was very poor, almost **4 times slower than the unpartitioned table.**
[2]
Another partitioning of the table ABC on the basis of **last\_chg\_date**.
Used **30 partitions**.
2.a) Tested with 1 million rows - **The performance was almost equal to the unpartitioned table.** Very little difference was there so it was not considered.
2.b) Again **tested the same with 10 million rows. The performance was almost equal** to the unpartitioned table. No noticeable difference.
The following DDL commands were used to achieve partitioning -
```
CREATE TABLESPACE ts1 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts2 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts3 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts4 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts5 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts6 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts7 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts8 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts9 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts10 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts11 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts12 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts13 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts14 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts15 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts16 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts17 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts18 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts19 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts20 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts21 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts22 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts23 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts24 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts25 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts26 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts27 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts28 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts29 DATAFILE AUTOEXTEND ON;
CREATE TABLESPACE ts30 DATAFILE AUTOEXTEND ON;
CREATE TABLE ABC(
seq_no NUMBER NOT NULL,
ssm_id VARCHAR2(9) NOT NULL,
invocation_id VARCHAR2(100) NOT NULL,
calc_id VARCHAR2(100) NULL,
analytic_id VARCHAR2(100) NOT NULL,
ANALYTIC_VALUE NUMBER NOT NULL,
override VARCHAR2(1) DEFAULT 'N' NOT NULL,
update_source VARCHAR2(255) NOT NULL,
last_chg_user CHAR(10) DEFAULT USER NOT NULL,
last_chg_date TIMESTAMP(3) DEFAULT SYSTIMESTAMP NOT NULL
)
PARTITION BY HASH(last_chg_date)
PARTITIONS 30
STORE IN (ts1, ts2, ts3, ts4, ts5, ts6, ts7, ts8, ts9, ts10, ts11, ts12, ts13,
ts14, ts15, ts16, ts17, ts18, ts19, ts20, ts21, ts22, ts23, ts24, ts25, ts26,
ts27, ts28, ts29, ts30);
```
CODE that I am using in the thread function (written in C++), using OCI -
```
void OracleLoader::bulkInsertThread(std::vector<std::string> const & statements)
{
try
{
INFO("ORACLE_LOADER_THREAD","Entered Thread = %1%", m_env);
string useOraUsr = "some_user";
string useOraPwd = "some_password";
int user_name_len = useOraUsr.length();
int passwd_name_len = useOraPwd.length();
text* username((text*)useOraUsr.c_str());
text* password((text*)useOraPwd.c_str());
if(! m_env)
{
CreateOraEnvAndConnect();
}
OCISvcCtx *m_svc = (OCISvcCtx *) 0;
OCIStmt *m_stm = (OCIStmt *)0;
checkerr(m_err,OCILogon2(m_env,
m_err,
&m_svc,
(CONST OraText *)username,
user_name_len,
(CONST OraText *)password,
passwd_name_len,
(CONST OraText *)poolName,
poolNameLen,
OCI_CPOOL));
OCIHandleAlloc(m_env, (dvoid **)&m_stm, OCI_HTYPE_STMT, (size_t)0, (dvoid **)0);
////////// Execution Queries in the format of - /////////////////
// insert into pm_own.sec_analytics (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value, override, update_source)
// select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
// union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
// union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
// union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual
//////////////////////////////////////////////////////////////////
size_t startOffset = 0;
const int batch_size = PCSecAnalyticsContext::instance().getBatchCount();
while (startOffset < statements.size())
{
int remaining = (startOffset + batch_size < statements.size() ) ? batch_size : (statements.size() - startOffset );
// Break the query vector to meet the batch size
std::vector<std::string> items(statements.begin() + startOffset,
statements.begin() + startOffset + remaining);
//! Preparing the Query
std::string insert_query = "insert into ";
insert_query += Context::instance().getUpdateTable();
insert_query += " (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value, override, update_source)\n";
std::vector<std::string>::const_iterator i3 = items.begin();
insert_query += *i3 ;
for( i3 = items.begin() + 1; i3 != items.end(); ++i3)
insert_query += "union " + *i3 ;
// Preparing the Statement and Then Executing it in the next step
text *txtQuery((text *)(insert_query).c_str());
checkerr(m_err, OCIStmtPrepare (m_stm, m_err, txtQuery, strlen((char *)txtQuery), OCI_NTV_SYNTAX, OCI_DEFAULT));
checkerr(m_err, OCIStmtExecute (m_svc, m_stm, m_err, (ub4)1, (ub4)0, (OCISnapshot *)0, (OCISnapshot *)0, OCI_DEFAULT ));
startOffset += batch_size;
}
// Here is the commit statement. I am committing at the end of each thread.
checkerr(m_err, OCITransCommit(m_svc,m_err,(ub4)0));
checkerr(m_err, OCIHandleFree((dvoid *) m_stm, OCI_HTYPE_STMT));
checkerr(m_err, OCILogoff(m_svc, m_err));
INFO("ORACLE_LOADER_THREAD","Thread Complete. Leaving Thread.");
}
catch(AnException &ex)
{
ERROR("ORACLE_LOADER_THREAD", "Oracle query failed with : %1%", std::string(ex.what()));
throw AnException(string("Oracle query failed with : ") + ex.what());
}
}
```
While the post was being answered, several methods to optimize my **INSERT QUERY** were suggested to me.
I have chosen and used **QUERY I** in my program for the following reasons that I discovered while testing the various INSERT Queries.
On running the SQL Queries that were suggested to me -
**QUERY I -**
```
insert into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source)
select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual
```
**EXECUTION PLAN by Oracle for Query I -**
```
--------------------------------------------------------------------------
| Id | Operation | Name| Rows | Cost (%CPU) | Time |
--------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 4 | 8 (0) | 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | ABC | | | |
| 2 | UNION-ALL | | | | |
| 3 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
| 4 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
| 5 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
| 6 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
```
**QUERY II -**
```
insert all
into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source) values ('c','b',NULL, 'test', 123 , 'N', 'asdf')
into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source) values ('c','e',NULL, 'test', 123 , 'N', 'asdf')
into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source) values ('c','r',NULL, 'test', 123 , 'N', 'asdf')
into ABC (SSM_ID, invocation_id , calc_id, analytic_id, analytic_value,
override, update_source) values ('c','t',NULL, 'test', 123 , 'N', 'asdf')
select 1 from dual
```
**EXECUTION PLAN by Oracle for Query II -**
```
-----------------------------------------------------------------------------
| Id | Operation | Name| Rows | Cost (%CPU) | Time |
-----------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 2 (0) | 00:00:01 |
| 1 | MULTI-TABLE INSERT | | | | |
| 2 | FAST DUAL | | 1 | 2 (0) | 00:00:01 |
| 3 | INTO | ABC | | | |
| 4 | INTO | ABC | | | |
| 5 | INTO | ABC | | | |
| 6 | INTO | ABC | | | |
```
As per the experiments, **Query I is faster**.
I tested this both in Oracle SQL Developer and by sending the insert queries from my C++ program (FILELOADER).
On reading further about it, I found out that the cost shown by the Execution Plan is an estimate of the CPU work the query will need to process itself.
**That tells that Oracle will use more CPU to process the first query, and that is why its cost comes out as 8.**
Even when using the same insert pattern via my application, I found that its performance is almost 1.5 times better.
I need some insight into how I can improve the performance even further.
I have summarized everything that I have tried in my question.
If I find or discover anything relevant, I will add to this question.
My target is to bring the **upload time of 10 million queries under 10 minutes**. | I know others have mentioned this and you don't want to hear it but use [SQL\*Loader](http://docs.oracle.com/cd/E11882_01/server.112/e40540/cncptdba.htm#CNCPT1282) or [external tables](http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables.htm#ADMIN12896). My average load time for tables of approximately the same width is 12.57 *seconds* for just over 10m rows. These utilities have been explicitly designed to load data into the database quickly and are pretty good at it. This may incur some additional time penalties depending on the format of your input file, but there are quite a few options and I've rarely had to change files prior to loading.
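For illustration, a SQL\*Loader run needs little more than a small control file; this sketch assumes a comma-separated input file and a staging table `abc_load` (the file name and field order are assumptions - adjust to your actual layout):
```
-- abc.ctl
LOAD DATA
INFILE 'abc.dat'
APPEND
INTO TABLE abc_load
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(ssm_id, invocation_id, calc_id, analytic_id,
 analytic_value, override, update_source)
```
Invoked with something like `sqlldr user/password control=abc.ctl direct=true`.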
If you're unwilling to do this then you don't have to upgrade your hardware yet; you need to remove every possible impediment to loading this quickly. To enumerate them, remove:
1. The index
2. The trigger
3. The sequence
4. The partition
With all of these you're obliging the database to perform more work and because you're doing this transactionally, you're not using the database to its full potential.
Load the data into a separate table, say `ABC_LOAD`. After the data has been completely loaded perform a *single* INSERT statement into ABC.
```
insert into abc
select abc_seq.nextval, a.*
from abc_load a
```
When you do this (and even if you don't) ensure that the sequence cache size is correct; [to quote](http://docs.oracle.com/cd/E11882_01/server.112/e25494/views.htm#ADMIN11802):
> When an application accesses a sequence in the sequence cache, the
> sequence numbers are read quickly. However, if an application accesses
> a sequence that is not in the cache, then the sequence must be read
> from disk to the cache before the sequence numbers are used.
>
> If your applications use many sequences concurrently, then your
> sequence cache might not be large enough to hold all the sequences. In
> this case, access to sequence numbers might often require disk reads.
> For fast access to all sequences, be sure your cache has enough
> entries to hold all the sequences used concurrently by your
> applications.
This means that if you have 10 threads concurrently writing 500 records each using this sequence then you need a cache size of 5,000. The [ALTER SEQUENCE](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_2012.htm#SQLRF00817) document states how to change this:
```
alter sequence abc_seq cache 5000
```
If you follow my suggestion I'd up the cache size to something around 10.5m.
Look into using the [APPEND hint](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements006.htm#SQLRF50901) [(see also Oracle Base)](http://www.oracle-base.com/articles/misc/append-hint.php); this instructs Oracle to use a direct-path insert, which appends data directly to the end of the table rather than looking for space to put it. You won't be able to use this if your table has indexes but you could use it in `ABC_LOAD`
```
insert /*+ append */ into ABC (SSM_ID, invocation_id , calc_id, ... )
select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual
```
If you use the APPEND hint, I'd add a [TRUNCATE](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_10007.htm#SQLRF01707) of `ABC_LOAD` after you've inserted into `ABC`; otherwise this table will grow indefinitely. This should be safe as you will have finished using the table by then.
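Put together, one load cycle against the staging table would then look roughly like this (add the hints discussed above where they apply):
```
INSERT INTO abc
SELECT abc_seq.nextval, a.*
FROM   abc_load a;

COMMIT;

TRUNCATE TABLE abc_load;
```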
You don't mention what version or edition of Oracle you're using. There are a number of extra little tricks you can use:
* **Oracle 12c**
This version supports [identity columns](http://docs.oracle.com/cd/E16655_01/server.121/e17209/statements_7002.htm#SQLRF55657); you could get rid of the sequence completely.
```
CREATE TABLE ABC(
seq_no NUMBER GENERATED AS IDENTITY (increment by 5000)
```
* **Oracle 11g r2**
If you keep the trigger; you can assign the sequence value directly.
```
:new.seq_no := ABC_seq.nextval;
```
* **Oracle Enterprise Edition**
If you're using Oracle Enterprise you can speed up the INSERT from `ABC_LOAD` by using the [PARALLEL hint](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements006.htm#SQLRF50801d):
```
insert /*+ parallel */ into abc
select abc_seq.nextval, a.*
from abc_load a
```
This can cause its own problems (too many parallel processes etc.), so test. It *might* help for the smaller batch inserts, but it's less likely as you'll lose time computing which thread should process what.
---
## tl;dr
Use the utilities that come with the database.
If you can't use them then get rid of everything that might slow the insert down and do it in bulk, 'cause that's what the database is good at. | If you have a text file you should try [SQL LOADER](http://docs.oracle.com/cd/B19306_01/server.102/b14215/ldr_concepts.htm) with direct path. It is really fast and it is designed for this kind of massive data load. Have a look at these [options](http://www.dba-oracle.com/t_optimize_sql_loader_sqlldr_performance.htm) that can improve the performance.
As a secondary advantage for ETL, your file in clear text will be smaller and easier to audit than 10^7 inserts.
If you need to make some transformation you can do it afterwards with oracle. | INSERT of 10 million queries under 10 minutes in Oracle? | [
"",
"sql",
"database",
"oracle",
"oracle-call-interface",
"bulk-load",
""
] |
I have a query that is pulling out submissions from one table and then the count of votes from another table. I then want to order the records based on the total votes which is gathered from a sub query.
How can I order this table by totalVotes which is gathered in the subquery?
```
SELECT A.[submissionID],
A.[entryID],
E.[subEmpID],
E.[nomineeEmpID],
CONVERT (VARCHAR (10), E.[submissionDate], 101) AS submissionDate,
E.[situation],
E.[task],
E.[action],
E.[result],
E.[timestamp],
E.[statusID],
E.[approver],
E.[approvalDate],
B.[FirstName] + ' ' + B.[LastName] AS nomineeName,
B.[ntid] AS nomineeNTID,
B.[qid] AS nomineeQID,
C.[FirstName] + ' ' + C.[LastName] AS submitName,
C.[ntid] AS submitNTID,
D.[categoryName],
(
SELECT count(G.[empID]) as totalVotes FROM empowermentVotes as G WHERE entryID = A.[entryID]
ORDER BY totalVotes ASC
FOR XML PATH (''), TYPE, ELEMENTS
)
FROM empowermentEntries AS A
INNER JOIN empowermentSubmissions as E
ON A.[submissionID] = E.[submissionID]
INNER JOIN
empTable AS B
ON E.[nomineeEmpID] = B.[empID]
INNER JOIN
empTable AS C
ON E.[subEmpID] = C.[empID]
INNER JOIN
empowermentCategories AS D
ON E.[categoryID] = D.[catID]
WHERE A.[sessionID] = @sessionID
FOR XML PATH ('data'), TYPE, ELEMENTS, ROOT ('root');
``` | you may need to order it before the FOR XML:
```
SELECT(
SELECT A.[submissionID],
A.[entryID],
E.[subEmpID],
E.[nomineeEmpID],
CONVERT (VARCHAR (10), E.[submissionDate], 101) AS submissionDate,
E.[situation],
E.[task],
E.[action],
E.[result],
E.[timestamp],
E.[statusID],
E.[approver],
E.[approvalDate],
B.[FirstName] + ' ' + B.[LastName] AS nomineeName,
B.[ntid] AS nomineeNTID,
B.[qid] AS nomineeQID,
C.[FirstName] + ' ' + C.[LastName] AS submitName,
C.[ntid] AS submitNTID,
D.[categoryName],
(
SELECT count(G.[empID]) as totalVotes FROM empowermentVotes as G WHERE entryID = A.[entryID]
ORDER BY totalVotes ASC
FOR XML PATH (''), TYPE, ELEMENTS
) votes
FROM empowermentEntries AS A
INNER JOIN empowermentSubmissions as E
ON A.[submissionID] = E.[submissionID]
INNER JOIN
empTable AS B
ON E.[nomineeEmpID] = B.[empID]
INNER JOIN
empTable AS C
ON E.[subEmpID] = C.[empID]
INNER JOIN
empowermentCategories AS D
ON E.[categoryID] = D.[catID]
WHERE A.[sessionID] = @sessionID
ORDER BY votes
FOR XML PATH ('data'), TYPE, ELEMENTS, ROOT ('root')
)
```
please note the outer select and alias added | ```
SELECT * FROM
(
SELECT A.[submissionID],
A.[entryID],
E.[subEmpID],
E.[nomineeEmpID],
CONVERT (VARCHAR (10), E.[submissionDate], 101) AS submissionDate,
E.[situation],
E.[task],
E.[action],
E.[result],
E.[timestamp],
E.[statusID],
E.[approver],
E.[approvalDate],
B.[FirstName] + ' ' + B.[LastName] AS nomineeName,
B.[ntid] AS nomineeNTID,
B.[qid] AS nomineeQID,
C.[FirstName] + ' ' + C.[LastName] AS submitName,
C.[ntid] AS submitNTID,
D.[categoryName],
(
SELECT count(G.[empID]) as totalVotes FROM empowermentVotes as G WHERE entryID = A.[entryID]
ORDER BY totalVotes ASC
FOR XML PATH (''), TYPE, ELEMENTS
) As TotalVotes
FROM empowermentEntries AS A
INNER JOIN empowermentSubmissions as E
ON A.[submissionID] = E.[submissionID]
INNER JOIN
empTable AS B
ON E.[nomineeEmpID] = B.[empID]
INNER JOIN
empTable AS C
ON E.[subEmpID] = C.[empID]
INNER JOIN
empowermentCategories AS D
ON E.[categoryID] = D.[catID]
WHERE A.[sessionID] = @sessionID
FOR XML PATH ('data'), TYPE, ELEMENTS, ROOT ('root')
) tmp
ORDER BY tmp.TotalVotes
``` | TSQL Order results by value of subselect | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I'd like to get a value with the following SQL using Eloquent ORM.
**- SQL**
```
SELECT COUNT(*) FROM
(SELECT * FROM abc GROUP BY col1) AS a;
```
Then I considered the following.
**- Code**
```
$sql = Abc::from('abc AS a')->groupBy('col1')->toSql();
$num = Abc::from(\DB::raw($sql))->count();
print $num;
```
I'm looking for a better solution.
Please tell me the simplest solution. | In addition to @delmadord's answer and your comments:
Currently there is no method to create subquery in `FROM` clause, so you need to manually use raw statement, then, if necessary, you will merge all the bindings:
```
$sub = Abc::where(..)->groupBy(..); // Eloquent Builder instance
$count = DB::table( DB::raw("({$sub->toSql()}) as sub") )
->mergeBindings($sub->getQuery()) // you need to get underlying Query Builder
->count();
```
Mind that you need to **merge bindings in correct order**. If you have other bound clauses, you must put them after `mergeBindings`:
```
$count = DB::table( DB::raw("({$sub->toSql()}) as sub") )
// ->where(..) wrong
->mergeBindings($sub->getQuery()) // you need to get underlying Query Builder
// ->where(..) correct
->count();
``` | Laravel v5.6.12 (2018-03-14) added `fromSub()` and `fromRaw()` methods to query builder [(#23476)](https://github.com/laravel/framework/pull/23476).
The accepted answer is correct but can be simplified into:
```
DB::query()->fromSub(function ($query) {
$query->from('abc')->groupBy('col1');
}, 'a')->count();
```
The above snippet produces the following SQL:
```
select count(*) as aggregate from (select * from `abc` group by `col1`) as `a`
``` | How to select from subquery using Laravel Query Builder? | [
"",
"sql",
"laravel",
"laravel-4",
"eloquent",
"query-builder",
""
] |
I have this SQL Query:
```
select prefix, code, stat, complete, COUNT(*) as Count
from table
group by prefix, code, stat, complete
order by prefix, code, stat, complete
```
The column 'prefix' is an alphanumeric value (0-9a-zA-Z).
What I want is: if the value of prefix is a number, make it equal to 0; if it is a letter, it will keep its value. I have tried to add the following line beneath the group by clause:
```
(case when prefix like '%[0-9]%' then 0 else prefix end)
```
But I get an error "Conversion failed when converting the varchar value 'J' to data type int.".
What is causing this error? How can I get the 'prefix' column to display either 0 or a letter? | ```
case when prefix like '%[0-9]%' then '0' else prefix end
```
You obviously also need this as the expression in the `GROUP BY`:
```
select
NewPrefix = case when prefix like '%[0-9]%' then '0' else prefix end,
code,
stat,
complete,
COUNT(*) as Count
from table
group by
case when prefix like '%[0-9]%' then '0' else prefix end,
code, stat, complete
order by
case when prefix like '%[0-9]%' then '0' else prefix end,
code, stat, complete
``` | Try this:
```
select case when prefix not like '%[^0-9]%' then prefix else '0' end as prefix, code, stat, complete, COUNT(*) as Count
from table
group by case when prefix not like '%[^0-9]%' then prefix else '0' end, code, stat, complete
order by prefix, code, stat, complete
```
[Check This. Looks similar "ISNUMERIC()"](https://stackoverflow.com/questions/4603292/check-if-a-varchar-is-a-number-tsql) | GROUP BY multiple values in same column | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table 1 (`MID, SSN, ...`) where `MID` is the primary key, and a table 2 (`ID, SSN, StateCode..`) where `ID` and `SSN` make up the primary key. I'm trying to display all columns from table 1 along with `StateCode` from table 2, matching them on `SSN`. Table 1 has 50 rows, and some have the same `SSN` values.
If no `SSN` match is found from table 2, displaying a NULL in `StateCode` is acceptable, so I chose left join. Here is my query
```
Select
tbl1.*, tbl2.StateCode
from
tbl1
left outer join
tbl2 on tbl1.SSN = tbl2.SSN
```
I'm looking to retrieve 50 records, but I get 70: rows that contain the same SSN value in tbl1 end up duplicated in the final output. What is going wrong? | I'd suggest reading about the [cartesian product](http://en.wikipedia.org/wiki/Cartesian_product).
If you have 50 rows in the first table and 70 in the second, that makes 3,500 rows. The join condition `tbl1.SSN = tbl2.SSN` will filter out rows, but you may well end up with more than 50 rows.
Back to your problem: you can see what is happening by trying the following:
```
SELECT
tbl1.*,
(SELECT COUNT(*) FROM tbl2 WHERE tbl1.SSN = tbl2.SSN) AS NbResultTbl2
FROM
tbl1
```
This will tell you which rows of `tbl1` have multiple matches in `tbl2`. If you have a number higher than 1 in the `NbResultTbl2` column, then you are going to end up with duplicates.
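As an illustration of the multiplication effect, here is a minimal sketch you can run (SQLite via Python; the tiny data sets are hypothetical):

```python
import sqlite3

# Tiny hypothetical versions of tbl1/tbl2 -- data is illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl1 (MID INTEGER PRIMARY KEY, SSN TEXT);
CREATE TABLE tbl2 (ID INTEGER, SSN TEXT, StateCode TEXT, PRIMARY KEY (ID, SSN));
INSERT INTO tbl1 VALUES (1, '111'), (2, '222'), (3, '333');
-- SSN '222' matches two rows in tbl2, so the join duplicates MID 2
INSERT INTO tbl2 VALUES (10, '111', 'CA'), (20, '222', 'NY'), (21, '222', 'TX');
""")
rows = con.execute("""
    SELECT tbl1.MID, tbl2.StateCode
    FROM tbl1 LEFT OUTER JOIN tbl2 ON tbl1.SSN = tbl2.SSN
""").fetchall()
print(len(rows))  # 4 rows come back for only 3 rows in tbl1
```

MID 2 appears twice (NY and TX) and MID 3 still appears once with a NULL `StateCode`, which is exactly the behavior described in the question.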
To eliminate those duplicates you can try this :
```
SELECT
tbl1.*,
(SELECT TOP 1 StateCode FROM tbl2 WHERE tbl1.SSN = tbl2.SSN)
FROM
tbl1
```
This will get the first `StateCode` found for a matching SSN in tbl2. | Try,
Both of your tables have another primary-key column, so just try matching on the ID column:
```
Select tbl1.MID,tbl1.SSN, tbl2.StateCode
from tbl1
left outer join tbl2
on tbl1.MID= tbl2.ID
Group by tbl1.MID,tbl1.SSN, tbl2.StateCode
``` | Left outer Join query returns duplicates in SQL Server | [
"",
"sql",
"duplicates",
"left-join",
""
] |
I want to combine these two statements in to one:
```
SELECT o.SITE AS "Site",
COUNT(o.ORDER_NO) AS "No. Orders",
FROM ORDERS o
WHERE o.DATE_CREATED BETWEEN '01-JUL-13' AND '1-JUL-14'
GROUP BY o.SITE;
```
and
```
SELECT o.SITE AS "Site",
COUNT(oi.LINE_CODE) AS "No. Order Lines",
COUNT(DISTINCT oi.LINE_CODE) AS "No. Order Lines (UNIQUE)"
FROM ORDER_ITEMS oi,
ORDERS o
WHERE oi.ORDER_NO = o.ORDER_NO
AND oi.ORDER_TYPE = o.ORDER_TYPE
AND oi.SITE = o.SITE
AND o.DESPATCHED_ON_DATE BETWEEN '01-JUL-13' AND '01-JUL-14'
GROUP BY o.SITE;
```
What happens is that "No. Orders", when added to the bottom query, also makes use of the additional 3 WHERE conditions. Is there any way I can specify that I only want "No. Orders" to use the date-range WHERE clause, as it does in the top query? | You could use a sub-select so that it functions the same as in your first query.
```
SELECT o.SITE AS "Site",
COUNT(oi.LINE_CODE) AS "No. Order Lines",
COUNT(DISTINCT oi.LINE_CODE) AS "No. Order Lines (UNIQUE)",
(SELECT COUNT(o2.ORDER_NO)
FROM ORDERS o2
WHERE o2.site = o.site and o2.DATE_CREATED BETWEEN '01-JUL-13' AND '1-JUL-14'
GROUP BY o2.SITE) AS "No. Orders"
FROM ORDER_ITEMS oi,
ORDERS o
WHERE oi.ORDER_NO = o.ORDER_NO
AND oi.ORDER_TYPE = o.ORDER_TYPE
AND oi.SITE = o.SITE
AND o.DESPATCHED_ON_DATE BETWEEN '01-JUL-13' AND '01-JUL-14'
GROUP BY o.SITE;
``` | Do you want something like this?
```
select o.site as "Site",
count(oi.line_code) as "No. Order Lines",
count(distinct oi.line_code) as "No. Order Lines (UNIQUE)",
count(o.order_no) as "No. Orders"
from order_items oi, orders o
where oi.order_no = o.order_no
and oi.order_type = o.order_type
and oi.site = o.site
and o.date_created between '01-JUL-13'
and '01-JUL-14'
group by o.site;
```
If not, please specify your question.
This is the same as Jchao's, but it's a little clearer and more maintainable:
```
SELECT above.site as "Site",
line_c1 as "No. Order Lines",
line_c2 as "No. Order Lines (UNIQUE)",
order_no as "No. Orders"
FROM
(SELECT o.SITE AS site,
COUNT(o.ORDER_NO) AS order_no
FROM ORDERS o
WHERE o.DATE_CREATED BETWEEN '01-JUL-13' AND '1-JUL-14'
GROUP BY o.SITE) above,
(SELECT o.SITE AS site,
COUNT(oi.LINE_CODE) AS line_c1,
COUNT(DISTINCT oi.LINE_CODE) AS line_c2
FROM ORDER_ITEMS oi,
ORDERS o
WHERE oi.ORDER_NO = o.ORDER_NO
AND oi.ORDER_TYPE = o.ORDER_TYPE
AND oi.SITE = o.SITE
AND o.DESPATCHED_ON_DATE BETWEEN '01-JUL-13' AND '01-JUL-14'
AND o.DATE_CREATED BETWEEN '01-JUL-13' AND '1-JUL-14'
GROUP BY o.SITE) bottom
WHERE above.site = bottom.site
``` | Selecting items from multiple tables with different WHERE statement | [
"",
"sql",
""
] |
I have three tables in my database.
Table one effectively contains two fields:
```
|Datetime| |Set|
```
Table two is a lookup table which matches a number of parts of a set to a set number:
```
|Set| |Part1| |Part2| |Part3| |Part4|
```
I want table 3 to have a record for each part in a set for a particular datetime:
```
|Datetime| |Part|
```
where the populated table would look something like:
```
|12:00:00| |Set1_Part1|
|12:00:00| |Set1_Part2|
|12:00:00| |Set1_Part3|
|12:00:00| |Set1_Part4|
|12:02:30| |Set2_Part1|
|12:02:30| |Set2_Part2|
|12:02:30| |Set2_Part3|
|12:02:30| |Set2_Part4|
```
So I get some information in table 1 about a set and a datetime, then table 3 needs to effectively extrapolate that out into a datetime/part pair for each part in the set.
Any ideas? (This is for SQL Server) | Perhaps something like...
```
Insert Into table3
Select T1.[Datetime], T1.[Set] + '_' + T2.Part1
FROM table1 AS T1
INNER JOIN table2 AS T2
on T1.[Set] = T2.[Set]
and T2.Part1 is not null
UNION
Select T1.[Datetime], T1.[Set] + '_' + T2.Part2
FROM table1 AS T1
INNER JOIN table2 AS T2
on T1.[Set] = T2.[Set]
and T2.Part2 is not null
UNION
Select T1.[Datetime], T1.[Set] + '_' + T2.Part3
FROM table1 AS T1
INNER JOIN table2 AS T2
on T1.[Set] = T2.[Set]
and T2.Part3 is not null
UNION
Select T1.[Datetime], T1.[Set] + '_' + T2.Part4
FROM table1 AS T1
INNER JOIN table2 AS T2
on T1.[Set] = T2.[Set]
and T2.Part4 is not null
```
 | The "canonical" way to do this in SQL Server is to use `union all`:
```
select t1.datetime, t2.part
from ((select set, Part1 as part from table2) union all
(select set, Part2 from table2) union all
(select set, Part3 from table2) union all
(select set, Part4 from table2)
) t join
table1 t1
on t1.set = t.set;
```
You can use the `into table3` clause if you want to store this in another table. Note that `set` and `datetime` are reserved words, so if these are the real names of the columns, they should be wrapped in square brackets.
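The `union all` unpivot can be sanity-checked on a toy data set (SQLite via Python; the data is illustrative, and SQLite happens to accept the same `[bracket]` quoting for reserved words):

```python
import sqlite3

# Illustrative mini versions of table1/table2.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 ([Datetime] TEXT, [Set] TEXT);
CREATE TABLE table2 ([Set] TEXT, Part1 TEXT, Part2 TEXT, Part3 TEXT, Part4 TEXT);
INSERT INTO table1 VALUES ('12:00:00', 'Set1');
INSERT INTO table2 VALUES ('Set1', 'Part1', 'Part2', 'Part3', 'Part4');
""")
rows = con.execute("""
    SELECT t1.[Datetime], t1.[Set] || '_' || t.part AS part
    FROM (SELECT [Set], Part1 AS part FROM table2
          UNION ALL SELECT [Set], Part2 FROM table2
          UNION ALL SELECT [Set], Part3 FROM table2
          UNION ALL SELECT [Set], Part4 FROM table2) AS t
    JOIN table1 t1 ON t1.[Set] = t.[Set]
""").fetchall()
print(sorted(r[1] for r in rows))
```

One wide row becomes four narrow datetime/part rows, as in the question's expected output (SQLite uses `||` for string concatenation where SQL Server uses `+`).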
There are other ways to do this, including `unpivot` (rather specific to SQL Server) and doing a cross join with a conditional. However, it would be better to normalize `table2`, if you can. The columns would be:
* Set
* Part
* PartNumber
So one row would become four. | SQL to look up table and make one record per field | [
"",
"sql",
"sql-server",
"database",
""
] |
I have a table with columns First Name and Last Name and an ID which is the PK. My WHERE condition will be a search operation on the full name as a whole, not on each part separately.
SELECT:
```
select ID, FirstName + ' ' + LastName as FullName
from tbl_Employee
```
What I want:
```
select ID, FirstName + ' ' + LastName as FullName
from tbl_Employee
where FullName like ''%' + @searchText + '%''
```
But when I try to use the alias FullName, it doesn't allow me to do so. How do I get around this problem? | You can structure the query a couple of different ways. One, include the concatenation in the `where` clause:
```
select
e.ID,
e.FirstName + ' ' + e.LastName as FullName
from tbl_Employee as e
where e.FirstName + ' ' + e.LastName like '%' + @searchText + '%';
```
Two, use a sub-select as @lajja has shown.
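For instance, a sub-select version can be sanity-checked quickly (SQLite via Python; hypothetical two-row table, and note SQLite concatenates with `||` instead of `+`):

```python
import sqlite3

# Hypothetical two-row table; data is illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl_Employee (ID INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT);
INSERT INTO tbl_Employee VALUES (1, 'Ada', 'Lovelace'), (2, 'Alan', 'Turing');
""")
rows = con.execute("""
    SELECT ID, FullName
    FROM (SELECT ID, FirstName || ' ' || LastName AS FullName
          FROM tbl_Employee)
    WHERE FullName LIKE '%' || ? || '%'
""", ("Turing",)).fetchall()
print(rows)  # [(2, 'Alan Turing')]
```

Because the alias is defined in the derived table, the outer `WHERE` can refer to it freely.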
Three, set it out as a non-recursive CTE. I find this form easiest to read and understand:
```
;with EmployeeFullName as
(
select
e.ID,
e.FirstName + ' ' + e.LastName as FullName
from tbl_Employee as e
)
select
f.ID,
f.FullName
from EmployeeFullName as f
where f.FullName like '%' + @searchText + '%';
```
Note that by a) using a leading wildcard and b) concatenating columns you will stop the optimizer from using any indexes on `FirstName` or `LastName`. | You can do this
```
select ID, FirstName + ' ' + LastName as FullName
from tbl_Employee
where (FirstName + ' ' + LastName) like '''%' + @searchText + '%'''
``` | How to refer a custom column in the same SQL query | [
"",
"sql",
"sql-server-2008",
""
] |
This very simple query I am trying to execute in SQL Server Management Studio. But it gives error:
```
CREATE TABLE `contact` (
`contact_id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) NOT NULL,
`email` varchar(45) NOT NULL,
`address` varchar(45) NOT NULL,
`telephone` varchar(45) NOT NULL,
PRIMARY KEY (`contact_id`)
) ENGINE=InnoDB AUTO_INCREMENT=25 DEFAULT CHARSET=utf8
```
Error is:
> Msg 102, Level 15, State 1, Line 3
> Incorrect syntax near '`'.
I also tried replacing `` ` `` with `'`, but still no help. | The backtick (`` ` ``) character is only used for MySql. It's not standard. Sql Server is a completely different animal. You need to use square-brackets (`[` and `]` -- also not standard) to enclose table and column names with Sql Server. Sql Server also supports double quotes (`"`), which are part of the SQL Standard, but for some reason less common.
While I'm at it, the ENGINE, AUTO_INCREMENT, and CHARSET clauses are also MySql-specific.
Try this:
```
CREATE TABLE contact (
contact_id int IDENTITY(25,1) NOT NULL PRIMARY KEY,
[name] varchar(45) NOT NULL,
email varchar(45) NOT NULL,
"address" varchar(45) NOT NULL,
telephone varchar(45) NOT NULL
)
``` | in tsql
```
CREATE TABLE contact (
contact_id int identity(0,25) NOT NULL,
name nvarchar(45) NOT NULL,
email nvarchar(45) NOT NULL,
address nvarchar(45) NOT NULL,
telephone nvarchar(45) NOT NULL
CONSTRAINT contact_PK PRIMARY KEY (contact_id)
)
```
You can't specify engine.
identity(0,25) means initial value = 0, increment = 25
You don't specify a character set for the table; you can declare individual columns as varchar or nvarchar -- the n stands for unicode. You can also specify a collation sequence for each column.
If you want a column name that is irregular (keyword, embedded space, etc.) you quote the column name as [column name] -- don't use irregular column names if you have a choice, it just makes things a pain to use. | Execute query in SQL Server Management Studio | [
"",
"sql",
"sql-server",
"ssms",
""
] |
I want to "group by" ver two unicode fields (keyword\_text and keyword\_match\_type) and extract all columns and all rows for the groups which have more than two elements.
For example one row is:
```
keyword_text | keyword_norm | keyword_GAD_id| keyword_account | keyword_MCC_id | keyword_campaign | keyword_campaign_GAD_id | keyword_ad_group | keyword_ad_group_GAD_id| keyword_destination_url | keyword_max_cpc | keyword_status | keyword_match_type | keyword_campaign_status | keyword_ad_group_status | db_id | created_at |
________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
"lebanese home delivery jai", "lebanese home delivery jai", 61557127036, "IN [S_02] Cuisine", 7795189055, "IN-JAI[S[Cui_30_EN]: Lebanese", 301573516, "IN-JAI[S[Cui_30_EN|del_02|geo_01]_ex: (Lebanese) Lebanese home delivery Jaipur", 11043049036, http://www.bla.in/restaurants/index/cuisines/lebanese/city/jaipur, 480000, ENABLED, EXACT, PAUSED, PAUSED, 1, "2014-07-18 18:42:43"
```
while the table was created with:
```
CREATE TABLE adword_keywords
(
keyword_text character varying(1000) NOT NULL,
keyword_norm character varying(1000) NOT NULL,
"keyword_GAD_id" bigint NOT NULL,
keyword_account character varying NOT NULL,
"keyword_MCC_id" bigint NOT NULL,
keyword_campaign character varying NOT NULL,
"keyword_campaign_GAD_id" bigint NOT NULL,
keyword_ad_group character varying NOT NULL,
"keyword_ad_group_GAD_id" bigint NOT NULL,
keyword_destination_url character varying NOT NULL,
keyword_max_cpc double precision,
keyword_status keyword_status,
keyword_match_type match_type,
keyword_campaign_status keyword_c_status,
keyword_ad_group_status keyword_ag_status,
db_id bigserial NOT NULL,
created_at timestamp without time zone,
CONSTRAINT adword_keywords_pkey PRIMARY KEY (db_id)
)
WITH (
OIDS=FALSE
);
CREATE INDEX ix_adword_keywords_keyword_norm
ON adword_keywords
USING btree
(keyword_norm COLLATE pg_catalog."default");
```
I tried the following query:
```
SELECT adword_keywords.*
FROM adword_keywords
JOIN (
SELECT adword_keywords.keyword_text AS keyword_text,adword_keywords.keyword_match_type AS keyword_match_type
FROM adword_keywords GROUP BY adword_keywords.keyword_text, adword_keywords.keyword_match_type
HAVING count(adword_keywords.db_id) > 1) AS anon_1
ON adword_keywords.keyword_text = anon_1.keyword_text AND adword_keywords.keyword_match_type = anon_1.keyword_match_type
WHERE adword_keywords.keyword_campaign_status = 'ENABLED' AND adword_keywords.keyword_ad_group_status = 'ENABLED' AND adword_keywords.keyword_status = 'ENABLED'
```
Unfortunately this returns the wrong result, meaning it also returns groups composed of one element (when grouping over ['keyword_text','match_type'])!
Does anybody have an idea of what is going wrong with this query?
Note that if I extract all data from the database and put it in a pandas data structure with the following query:
```
SELECT * FROM adword_keywords
WHERE adword_keywords.keyword_campaign_status = 'ENABLED'
AND adword_keywords.keyword_ad_group_status = 'ENABLED'
AND adword_keywords.keyword_status = 'ENABLED'
```
I can filter the group which I would like to have as such:
```
df.groupby(['keyword_text','match_type']).filter(lambda x: x.shape[0]>1)
```
This latter procedure returns the correct results.
However, I would like to do the same with an SQL query for performance and memory reasons (the dataset is huge and cannot be fully loaded into RAM).
# EDIT
Based on the answer of ypercube, there are three alternative queries which return the correct result. I have collected them here for reference with their running times; the first version is the fastest.
Using `EXISTS`, `1 loops, best of 3: 2.22 s per loop`:
```
WITH cte AS
( SELECT *
FROM adword_keywords
WHERE keyword_campaign_status = 'ENABLED'
AND keyword_ad_group_status = 'ENABLED'
AND keyword_status = 'ENABLED'
)
SELECT a.*
FROM cte AS a
WHERE EXISTS
( SELECT *
FROM cte AS b
WHERE (b.keyword_text, b.keyword_match_type)
= (a.keyword_text, a.keyword_match_type)
AND b.db_id <> a.db_id
) ;
```
Using `PARTITION`, `1 loops, best of 3: 5.7 s per loop`
```
WITH cte AS
( SELECT *,
COUNT(*) OVER (PARTITION BY keyword_text, keyword_match_type) AS cnt
FROM adword_keywords
WHERE (keyword_campaign_status, keyword_ad_group_status, keyword_status)
= ('ENABLED', 'ENABLED', 'ENABLED')
)
SELECT *
FROM cte
WHERE cnt >= 2 ;
```
Using `GROUP BY` , `1 loops, best of 3: 5.11 s per loop` :
```
select ak.*
from
adword_keywords ak
inner join (
select keyword_text, keyword_match_type
from adword_keywords
where
keyword_campaign_status = 'ENABLED' AND
keyword_ad_group_status = 'ENABLED' AND
keyword_status = 'ENABLED'
group by keyword_text, keyword_match_type
having count(db_id) > 1
) an1 using (keyword_text, keyword_match_type)
where
keyword_campaign_status = 'ENABLED' AND
keyword_ad_group_status = 'ENABLED' AND
keyword_status = 'ENABLED'
``` | You can use `EXISTS` for this type of query - so no `COUNT` at all (!), just a check that at least one other row exists with the same `keyword_text` and `keyword_match_type`. The check on the primary keys is there to ensure just that: that it is another row:
```
WITH cte AS
( SELECT *
FROM adword_keywords
WHERE (keyword_campaign_status, keyword_ad_group_status, keyword_status)
= ('ENABLED', 'ENABLED', 'ENABLED')
)
SELECT a.*
FROM cte AS a
WHERE EXISTS
( SELECT *
FROM cte AS b
WHERE (b.keyword_text, b.keyword_match_type)
= (a.keyword_text, a.keyword_match_type)
AND b.db_id <> a.db_id
) ;
```
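That `EXISTS` shape can be sanity-checked on a toy table (SQLite via Python; the data is illustrative):

```python
import sqlite3

# Toy table; db_id 1 and 2 share the same (text, match_type) pair.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE kw (db_id INTEGER PRIMARY KEY, keyword_text TEXT, keyword_match_type TEXT);
INSERT INTO kw VALUES (1, 'a', 'EXACT'), (2, 'a', 'EXACT'), (3, 'b', 'EXACT');
""")
dup_ids = sorted(r[0] for r in con.execute("""
    SELECT a.db_id
    FROM kw AS a
    WHERE EXISTS (SELECT * FROM kw AS b
                  WHERE b.keyword_text = a.keyword_text
                    AND b.keyword_match_type = a.keyword_match_type
                    AND b.db_id <> a.db_id)
"""))
print(dup_ids)  # [1, 2] -- row 3 has no duplicate and is filtered out
```

Every row of a duplicated group survives, and singleton groups are dropped, which is the behavior the question asks for.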
or window functions:
```
WITH cte AS
( SELECT *,
COUNT(*) OVER (PARTITION BY keyword_text, keyword_match_type) AS cnt
FROM adword_keywords
WHERE (keyword_campaign_status, keyword_ad_group_status, keyword_status)
= ('ENABLED', 'ENABLED', 'ENABLED')
)
SELECT *
FROM cte
WHERE cnt > 1 ;
```
---
Your query did not work because you had the ENABLED conditions only at the outer level. Adding them into the internal (derived table) query should give the same results:
```
SELECT ak.*
FROM
adword_keywords ak
JOIN
( SELECT keyword_text, keyword_match_type
FROM adword_keywords
WHERE (keyword_campaign_status, keyword_ad_group_status, keyword_status)
= ('ENABLED', 'ENABLED', 'ENABLED')
GROUP BY keyword_text, keyword_match_type
HAVING COUNT(*) > 1
) AS d
USING (keyword_text, keyword_match_type)
WHERE (ak.keyword_campaign_status, ak.keyword_ad_group_status, ak.keyword_status)
= ('ENABLED', 'ENABLED', 'ENABLED');
``` | When you `GROUP BY` some fields, you're doing two important things:
1. You're saying that you want rows where combinations of those fields are distinct.
2. You're saying that all the *other* fields you use, you're going to aggregate.
That "other" might be a problem here. If you're grouping by a field, you can't decently aggregate that field in the way you want to.
What you could do instead is count some other field, like your primary key. (You could also say `COUNT(*)`, I guess -- and in fact, you'd have to if you don't have any unique fields.)
For example, your query could look like
```
SELECT *
FROM adword_keywords
JOIN (
SELECT keyword_text, keyword_match_type
FROM adword_keywords
GROUP BY keyword_text, keyword_match_type
HAVING count(keyword_id) > 1
) AS duplicated USING (keyword_text, keyword_match_type)
WHERE keyword_campaign_status = 'ENABLED'
AND keyword_ad_group_status = 'ENABLED'
AND keyword_status = 'ENABLED'
```
This assumes that you want to find all the records with an "ENABLED" status, where there's at least one duplicate -- *whether the duplicate is enabled or not*. If you want only the records that have enabled duplicates, you'd need to add those conditions to the subquery. (You wouldn't really need them in the outer query at that point, since the inner join would eliminate non-enabled rows.)
For future reference, too: If you'd rather prevent the duplicates from being in there at all (which is often an error), you might want to consider adding a unique key on `(keyword_text, keyword_match_type)`. | Postgres group by and extract groups with more than one element | [
"",
"sql",
"postgresql",
""
] |
My table name is `emails`.
My table structures looks like:

I want to remove all of the duplicated emails. I tried this query:
```
DELETE FROM emails WHERE email NOT IN (SELECT MIN(email)
FROM emails GROUP BY email)
```
But with no result. Can someone help me with this? | The query that you are looking for would use `id`, not `email`:
```
DELETE FROM emails
WHERE id NOT IN (SELECT MIN(id) FROM emails GROUP BY email) ;
```
EDIT:
You are using MySQL. You can get around this with the subquery hack:
```
DELETE FROM emails
WHERE id NOT IN (select minid from (SELECT MIN(id) as minid FROM emails GROUP BY email) e) ;
```
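The keep-the-minimum-id logic can be sanity-checked quickly (SQLite via Python; SQLite happens not to need the subquery hack, and the data is illustrative):

```python
import sqlite3

# Illustrative data: two rows share the same email.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emails (id INTEGER PRIMARY KEY, email TEXT)")
con.executemany("INSERT INTO emails VALUES (?, ?)",
                [(1, 'a@x.com'), (2, 'a@x.com'), (3, 'b@x.com')])
con.execute("""
    DELETE FROM emails
    WHERE id NOT IN (SELECT MIN(id) FROM emails GROUP BY email)
""")
rows = con.execute("SELECT id, email FROM emails ORDER BY id").fetchall()
print(rows)  # [(1, 'a@x.com'), (3, 'b@x.com')]
```

Only the lowest `id` per email survives; the duplicate row 2 is deleted.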
Or, you can use a `join`:
```
delete e
from emails e left outer join
(select min(id) as minid
from emails
group by email
) em
on e.id = em.minid
where em.minid is null;
``` | Try this instead
--CREATE a Temporary table
```
create table IDsToRemove (ID Int)
INSERT INTO IDStoRemove
SELECT MIN(id)
FROM emails GROUP BY email
DELETE FROM emails WHERE id NOT IN (SELECT id from IDStoRemove)
```
I don't know the exact mySQL syntax, but it should give you the idea | How do I remove duplicates row in SQL? | [
"",
"mysql",
"sql",
"duplicates",
""
] |
I'm trying the following query:
```
SELECT (json_data->'position'->'lat') + 1.0 AS lat FROM updates LIMIT 5;
```
(The +1.0 is just there to force conversion to float. My actual queries are far more complex, this query is just a test case for the problem.)
I get the error:
```
ERROR: operator does not exist: jsonb + numeric
```
If I add in explicit casting:
```
SELECT (json_data->'position'->'lat')::float + 1.0 AS lat FROM updates LIMIT 5;
```
the error becomes:
```
ERROR: operator does not exist: jsonb + double precision
```
I understand that most jsonb values cannot be cast into floats, but in this case I know that the lats are all JSON numbers.
Is there a function which casts jsonb values to floats (or return NULLs for the uncastable)? | There are two operations to get value from `JSON`. The first one `->` will return `JSON`. The second one `->>` will return text.
Details: [JSON Functions and Operators](http://www.postgresql.org/docs/current/static/functions-json.html)
Try
```
SELECT (json_data->'position'->>'lat')::float + 1.0 AS lat
FROM updates
LIMIT 5
``` | AFAIK there's no json->float casting in Postgres, so you could try an explicit `(json_data->'position'->'lat')::text::float` cast | How to convert PostgreSQL 9.4's jsonb type to float | [
"",
"sql",
"postgresql",
"casting",
"jsonb",
"postgresql-9.4",
""
] |
I'm training with `SQLite` and I'm trying to achieve the same result as the search implemented by the USDA.
In the [USDA food list search](http://ndb.nal.usda.gov/ndb/foods), if I search for "chicken breast cooked roasted", the search returns 56 results.
I can't figure out the pattern. Using the same database as them, if I use the following query:
```
SELECT *
FROM FOODDATA
WHERE FOODDES LIKE '%chicken%' OR FOODDES LIKE '%breast%' OR FOODDES LIKE '%cooked%' OR FOODDES LIKE '%roasted%';
```
It gives me more than 2000 results. For me that makes sense. Using `AND` replacing the `OR` only gives me 2 results. Once again, it makes sense.
What kind of query is the USDA performing? | You can investigate the number of matches to the keywords using this query:
```
SELECT ((FOODDES LIKE '%chicken%') +
(FOODDES LIKE '%breast%') +
        (FOODDES LIKE '%cooked%') +
        (FOODDES LIKE '%roasted%')
) as NumMatches, count(*), min(fooddes), max(fooddes)
FROM FOODDATA
GROUP BY (FOODDES LIKE '%chicken%') +
(FOODDES LIKE '%breast%') +
         (FOODDES LIKE '%cooked%') +
         (FOODDES LIKE '%roasted%')
ORDER BY NumMatches desc;
```
This query just counts the number of keywords that match and gives the number of rows in `FOODDATA` that have 4 matches, 3 matches, and so on. | Try something like
```
select
(FOODDES LIKE '%chicken%') + (FOODDES LIKE '%breast%') + (FOODDES LIKE '%cooked%') + (FOODDES LIKE '%roasted%') As matches ,
FOODDES
from
(
SELECT FOODDES
FROM FOODDATA
WHERE FOODDES LIKE '%chicken%' or FOODDES LIKE '%breast%' or FOODDES LIKE '%cooked%' or FOODDES LIKE '%roasted%'
) table1
where matches >=3
order by matches desc
``` | Understanding SQLite query | [
"",
"sql",
"sqlite",
""
] |
I have 2 separate tables in MySQL (I'm using Sequel Pro on a Mac) with the exact same column names. Is there a function to join these tables and create one table with the same headers?
Example of what I'm trying to do:
```
Table 1
a b c
1 2 4
3 3 1
Table 2
a b c
3 2 1
8 4 2
Output
a b c
1 2 4
3 3 1
3 2 1
8 4 2
```
Thank you! | ```
create table table_3 as
select * from table_1
union
select * from table_2
```
Use `UNION` if you don't want to keep duplicates.
Use `UNION ALL` if you do want to keep duplicates. | ```
CREATE TABLE new_table_name AS
select *
from table1
cross join table2
```
or
```
CREATE TABLE new_table_name AS
select * from table1
union all
select * from table2
``` | MySQL - combine 2 tables | [
"",
"mysql",
"sql",
""
] |
I am trying to write a dynamic T-SQL query and running into an error.
I have removed everything except a single var and I still get the error when trying to run the query.
```
ALTER PROCEDURE [dbo].[empowermentFetchSubmissions2]
@category INT=NULL
AS
DECLARE @sSQL AS NVARCHAR (3000),
@Where AS NVARCHAR (1000) = '';
BEGIN
SET NOCOUNT ON;
BEGIN
SET @sSQL = 'SELECT A.[submissionID],
A.[subEmpID],
A.[nomineeEmpID],
CONVERT (VARCHAR (10), A.[submissionDate], 101) AS submissionDate,
A.[situation],
A.[task],
A.[action],
A.[result],
A.[timestamp],
A.[statusID],
A.[approver],
A.[approvalDate],
B.[FirstName] + " " + B.[LastName] AS nomineeName,
B.[ntid] AS nomineeNTID,
B.[qid] AS nomineeQID,
C.[FirstName] + " " + C.[LastName] AS submitName,
C.[ntid] AS submitNTID,
D.[categoryName]
FROM empowermentSubmissions AS A
INNER JOIN
empTable AS B
ON A.[nomineeEmpID] = B.[empID]
INNER JOIN
empTable AS C
ON A.[subEmpID] = C.[empID]
INNER JOIN
empowermentCategories AS D
ON A.[categoryID] = D.[catID]';
IF (@category) IS NOT NULL
SET @Where = @Where + ' AND A.[categoryID] = @_category';
IF LEN(@Where) > 0
SET @sSQL = @sSQL + 'WHERE ' + RIGHT(@Where, LEN(@Where) - 3);
EXECUTE sp_executesql @sSQL, N'@_category', @_category = @category;
END
END
```
The error I get when running this is :
> Incorrect syntax near ')'. Msg 4145, Level 15, State 1, Line 28 An
> expression of non-boolean type specified in a context where a
> condition is expected, near 'A'.
Is there something obvious that I am doing wrong here? | I have modified your code to the point where I think it will work. There were 3 problems, which I found by commenting out your execute line, putting a `print @sSQL` before it, and then executing the stored procedure. The first problem was the quotes; I changed them to single quotes. Next, the `WHERE` needed a new line, since it was on the same line as the `ON`. The last issue was that you weren't trimming enough off of the `WHERE`, which left the letter D.
```
ALTER PROCEDURE [dbo].[empowermentFetchSubmissions2]
@category INT=NULL
AS
DECLARE @sSQL AS NVARCHAR (3000),
@Where AS NVARCHAR (1000) = '';
BEGIN
SET NOCOUNT ON;
BEGIN
SET @sSQL = 'SELECT A.[submissionID],
A.[subEmpID],
A.[nomineeEmpID],
CONVERT (VARCHAR (10), A.[submissionDate], 101) AS submissionDate,
A.[situation],
A.[task],
A.[action],
A.[result],
A.[timestamp],
A.[statusID],
A.[approver],
A.[approvalDate],
B.[FirstName] + '' '' + B.[LastName] AS nomineeName,
B.[ntid] AS nomineeNTID,
B.[qid] AS nomineeQID,
C.[FirstName] + '' '' + C.[LastName] AS submitName,
C.[ntid] AS submitNTID,
D.[categoryName]
FROM empowermentSubmissions AS A
INNER JOIN
empTable AS B
ON A.[nomineeEmpID] = B.[empID]
INNER JOIN
empTable AS C
ON A.[subEmpID] = C.[empID]
INNER JOIN
empowermentCategories AS D
ON A.[categoryID] = D.[catID]';
IF (@category) IS NOT NULL
SET @Where = @Where + ' AND A.[categoryID] = @_category';
IF LEN(@Where) > 0
SET @sSQL = @sSQL + '
WHERE ' + RIGHT(@Where, LEN(@Where) - 4);
EXECUTE sp_executesql @sSQL, N'@_category INT', @_category = @category;
END
END
```
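As an aside, the same pattern — appending the `WHERE` clause conditionally while keeping the value a bound parameter — can be sketched outside T-SQL (a minimal Python + SQLite sketch; the table and data are illustrative):

```python
import sqlite3

# Hypothetical mini table; shows the conditional-WHERE + bound-parameter idea.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE empowermentSubmissions (submissionID INTEGER PRIMARY KEY, categoryID INTEGER);
INSERT INTO empowermentSubmissions VALUES (1, 10), (2, 20);
""")

def fetch_submissions(category=None):
    sql = "SELECT submissionID FROM empowermentSubmissions"
    params = []
    if category is not None:            # append the filter only when given...
        sql += " WHERE categoryID = ?"  # ...but keep the value a bound parameter
        params.append(category)
    return [r[0] for r in con.execute(sql, params)]

print(fetch_submissions())    # [1, 2]
print(fetch_submissions(10))  # [1]
```

Only the clause text is assembled dynamically; the filter value is never spliced into the string, which is the same safety property `sp_executesql` gives you.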
Here is what your code was printing originally if you want to see the problems.
```
SELECT A.[submissionID],
A.[subEmpID],
A.[nomineeEmpID],
CONVERT (VARCHAR (10), A.[submissionDate], 101) AS submissionDate,
A.[situation],
A.[task],
A.[action],
A.[result],
A.[timestamp],
A.[statusID],
A.[approver],
A.[approvalDate],
B.[FirstName] + " " + B.[LastName] AS nomineeName,
B.[ntid] AS nomineeNTID,
B.[qid] AS nomineeQID,
C.[FirstName] + " " + C.[LastName] AS submitName,
C.[ntid] AS submitNTID,
D.[categoryName]
FROM empowermentSubmissions AS A
INNER JOIN
empTable AS B
ON A.[nomineeEmpID] = B.[empID]
INNER JOIN
empTable AS C
ON A.[subEmpID] = C.[empID]
INNER JOIN
empowermentCategories AS D
ON A.[categoryID] = D.[catID]WHERE D A.[categoryID] = @_category
``` | You need to change the double quotes to two single quotes:
```
B.[FirstName] + '' '' + B.[LastName] AS nomineeName,
C.[FirstName] + '' '' + C.[LastName] AS submitName,
```
EDIT:
You also need a space in front of WHERE on this line:
```
SET @sSQL = @sSQL + ' WHERE ' + RIGHT(@Where, LEN(@Where) - 3);
``` | Dynamic TSQL Query | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have a table like this:
**ACTUAL TABLE**
```
ID | Type | Number
----------+----------
1 | x | 2000
2 | y | 4500
3 | y | 4500
4 | x | 3000
5 | y | 5000
6 | x | 4000
7 | y | 7500
8 | y | 7500
```
And I want to update the `y` to be exact as the `x` before it:
**EXPECTED TABLE**
```
ID | Type | Number
----------+----------
1 | x | 2000
2 | y | 2000
3 | y | 2000
4 | x | 3000
5 | y | 3000
6 | x | 4000
7 | y | 4000
8 | y | 4000
```
How should I do that in MySQL? | Just update based on your ID:
`UPDATE <tablename> SET <fieldname> = (SELECT <fieldname> FROM <tablename> WHERE id = 1) WHERE id = 2`
It might work. | Try Like This
`t` is tablename
`a` is type
`va` is number
```
update t join (
select curr.id, max(prior.id) prior_id
from t curr
join t prior
on prior.a = 'x'
and prior.id < curr.id
where curr.a = 'y'
group by curr.id) g
on g.id = t.id
join t v on v.id = g.prior_id
set t.va = v.va;
```
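The same idea — pulling the value from the nearest preceding 'x' row — can be sanity-checked in SQLite, where (unlike MySQL) the target table may appear in a correlated subquery of the `UPDATE` (illustrative data):

```python
import sqlite3

# Illustrative data matching the question's shape.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, va INTEGER);
INSERT INTO t VALUES (1,'x',2000),(2,'y',4500),(3,'y',4500),(4,'x',3000),(5,'y',5000);
""")
con.execute("""
    UPDATE t
    SET va = (SELECT v.va FROM t AS v
              WHERE v.a = 'x' AND v.id < t.id
              ORDER BY v.id DESC LIMIT 1)
    WHERE a = 'y'
""")
rows = con.execute("SELECT id, va FROM t ORDER BY id").fetchall()
print(rows)  # [(1, 2000), (2, 2000), (3, 2000), (4, 3000), (5, 3000)]
```

Each 'y' row picks up the value of the closest earlier 'x' row, as in the expected table.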
[Sql Fiddle](http://sqlfiddle.com/#!2/f0126/1) | How to update a MySQL row based on previous row? | [
"",
"mysql",
"sql",
"database",
""
] |
One of my tables has values like this:
```
Year 1
Year 9
Year 8
Year 4
Kindy [can be any word without numbers]
Pre-School [can be any word without numbers]
Year 8
Year 22
Year 15....
```
How can I sort them alphabetically first and then numerically in ascending order, like this:
```
Kindy
Pre-School
Year 1
Year 4
Year 8
Year 9
Year 15
Year 22
```
I have tried the following..
```
SELECT YearLevel FROM Student
ORDER BY
CASE WHEN YearLevel NOT LIKE '%[0-9]%' THEN 0
ELSE CAST(RIGHT(YearLevel, LEN(YearLevel) - 5) AS int)
END
```
But the problem is I need distinct records only. | If the column with digits always has the `Year [digits]` format, you can try this
```
SELECT YearLevel FROM student
GROUP BY YearLevel
ORDER BY
(CASE
WHEN YearLevel LIKE 'Year%'
THEN 'Year' + CONVERT(varchar,LEN(YearLevel)) + YearLevel
ELSE YearLevel
END)
```
<http://sqlfiddle.com/#!3/1919c/3>
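The length-prefix trick can also be checked quickly outside SQL Server (SQLite via Python; `||` and `length()` stand in for the T-SQL concatenation, data is illustrative):

```python
import sqlite3

# Illustrative data, including one duplicate to show the GROUP BY dedup.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (YearLevel TEXT)")
con.executemany("INSERT INTO student VALUES (?)",
                [('Year 1',), ('Year 9',), ('Kindy',), ('Year 15',), ('Year 9',)])
rows = [r[0] for r in con.execute("""
    SELECT YearLevel FROM student
    GROUP BY YearLevel
    ORDER BY CASE WHEN YearLevel LIKE 'Year%'
                  THEN 'Year' || length(YearLevel) || YearLevel
                  ELSE YearLevel END
""")]
print(rows)  # ['Kindy', 'Year 1', 'Year 9', 'Year 15']
```

Prefixing the string length makes shorter year strings (smaller numbers) sort before longer ones, so 'Year 9' lands before 'Year 15' without any numeric cast.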
While this may work, I recommend adding an integer column with the sorting order. | ```
select
astring
from table1
order by
case when left(astring,5) = 'Year ' then 2
else 1 end
, case when left(astring,5) = 'Year ' then right(replace(astring,'Year ','00000000'),3)
else astring end
, astring
```
see [this sqlfiddle](http://sqlfiddle.com/#!3/70cae/1) | Query in sql server to sort data which has both numbers and alphabets | [
"",
"sql",
"sql-server",
""
] |
Is there a simple PostgreSQL or even SQL way of listing empty/not empty tables?
P.S.:
I'm analyzing a database containing hundreds of tables and would like to detect "dead code".
I assume that when a table is still empty after some months, then it's not used.
**EDIT:Solved**
Thank you all!
Finally this statement seems to output the statistics I can use:
```
select schemaname, relname, n_tup_ins from pg_stat_all_tables WHERE schemaname = 'public' ORDER BY n_tup_ins
``` | Checking for the number of rows *could* give you wrong results. Assume that a table is used as a staging table: rows get inserted (e.g. from a flat file), processed and deleted. If you check the number of rows in that table you could very well believe it's never used if you don't happen to run your query while the processing takes place.
Another way to detect "unused" tables would be to monitor the IO and changes that are done to the tables.
The statistic view [pg\_stat\_user\_tables](http://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW) records changes (deletes, inserts, updates) to each table in the system. The statistic view [pg\_statio\_user\_tables](http://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STATIO-ALL-TABLES-VIEW) records IO done against the tables.
If you take snapshots of those tables in regular intervals you can calculate the difference in the values and see if a tables is used at all.
You can use `pg_stat_reset()` to reset all values to zero and then start from that. | You could use PostgreSQL's system catalogs with e.g.
```
SELECT n.nspname, c.relname
FROM pg_class c
INNER JOIN pg_namespace n ON (n.oid = c.relnamespace)
WHERE c.reltuples = 0 AND c.relkind = 'r';
```
According to the documentation, the number of rows is an estimate, though.
If your tables have columns that take their default values from sequences, you could list them and check their values with `nextval`. (Unfortunately, `currval` returns a session-dependent value, so you'd have to ensure that no one else is using the database and use both `nextval` and `setval`.)
```
SELECT n.nspname, c.relname
FROM pg_class c
INNER JOIN pg_namespace n ON (n.oid = c.relnamespace)
WHERE c.relkind = 'S';
```
(Unfortunately I couldn't yet find any way to determine, which sequence belongs to which table. Obviously it would be very helpful. Anyway, you can use `pg_class.relnamespace` to narrow down the results.)
See <http://www.postgresql.org/docs/9.3/interactive/catalogs-overview.html> for details. | show all not empty tables in postgres | [
"",
"sql",
"postgresql",
""
] |
If I have a column in which strings vary in length but ALL contain a backslash `\` within,
how can I SELECT to have one column display everything BEFORE the `\` and another column display everything AFTER it?
```
name column1 column2
DB5697\DEV DB5697 DEV
```
I have seen `CHARINDEX` and `REVERSE` on MSDN but haven't been able to put together a solution.
How can I best split a varchar/string column value into 2 columns in a result set in T-SQL?
```
SELECT m.name,
LEFT(m.name, CHARINDEX('\', m.name) - 1) AS column1,
RIGHT(m.name, LEN(m.name) - CHARINDEX('\', m.name)) AS column2
FROM MyTable m
```
How to handle strings with no `\` in them ([**SQL Fiddle**](http://sqlfiddle.com/#!6/f9ebe/6/0)):
```
SELECT m.name,
CASE WHEN CHARINDEX('\', m.name) = 0 THEN ''
ELSE LEFT(m.name, CHARINDEX('\', m.name) - 1) END AS column1,
CASE WHEN CHARINDEX('\', m.name) = 0 THEN ''
ELSE RIGHT(m.name, LEN(m.name) - CHARINDEX('\', m.name)) END AS column2
FROM MyTable m;
``` | what about using [PARSENAME](http://msdn.microsoft.com/library/ms188006(v=sql.110).aspx) function in a tricky way?
```
USE tempdb;
GO
CREATE TABLE #names
(
id int NOT NULL PRIMARY KEY CLUSTERED
, name varchar(30) NOT NULL
);
GO
INSERT INTO #names (id, name)
VALUES
(1, 'DB5697\DEV'),
(2, 'DB5800\STG'),
(3, 'DB5900\PRD');
GO
SELECT
name
, PARSENAME(REPLACE(name, '\', '.'), 2) AS [Server]
, PARSENAME(REPLACE(name, '\', '.'), 1) AS [Instance]
FROM
#names;
GO
DROP TABLE #names;
GO
```
The [PARSENAME](http://msdn.microsoft.com/library/ms188006(v=sql.110).aspx) function accepts 2 parameters and gets the name part of a fully qualified name. The second parameter is the part name enumerator.
Value 2 is for SCHEMA and 1 is for OBJECT.
So, with the [REPLACE](http://msdn.microsoft.com/en-us/library/ms186862(v=sql.110).aspx) function the "\" char is replaced by "." in order to have a SCHEMA.OBJECT format of your SERVERNAME\INSTANCE values. Then, PARSENAME behave like having a simple object name in the string. | Splitting value of a varchar column into two columns | [
"",
"sql",
"sql-server",
"t-sql",
"case",
""
] |
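The CHARINDEX/LEFT/RIGHT split from the accepted answer above has a near-literal SQLite equivalent, which makes it easy to try from Python. In the sketch below, `instr()` plays the role of `CHARINDEX`, and `substr()` covers both the `LEFT` and `RIGHT` parts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (name TEXT)")
con.execute("INSERT INTO MyTable (name) VALUES (?)", ("DB5697\\DEV",))

# substr(name, 1, pos-1) is everything before the backslash,
# substr(name, pos+1) is everything after it
row = con.execute(r"""
    SELECT name,
           substr(name, 1, instr(name, '\') - 1) AS column1,
           substr(name, instr(name, '\') + 1)    AS column2
    FROM MyTable
""").fetchone()
print(row)
```

Like the T-SQL version, this assumes every value contains exactly one backslash; values without one would need a CASE guard, as shown in the answer's second query.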
```
SELECT *
FROM Service
WHERE cusId='$cusId'
AND serviceType IN (1,2)
ORDER BY date DESC LIMIT 2
```
I want the last record of each service type (1 and 2). When I use this script with the IN query, I get two rows that have serviceType 1, but I want two records: one with serviceType 1 and one with serviceType 2. Sorry for my English. | ```
(select * from Service
where cusId='$cusId'
and serviceType = 1
order by date desc limit 1)
UNION
(select * from Service
where cusId='$cusId'
and serviceType = 2
order by date desc limit 1)
``` | can you try union ?
```
SELECT top 1 * FROM Service where cusId='$cusId' and serviceType = 1 order by date DESC
UNION
SELECT top 1 * FROM Service where cusId='$cusId' and serviceType = 2 order by date DESC
``` | How to get all the rows that have multiple values | [
"",
"mysql",
"sql",
""
] |
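The UNION-of-two-limited-selects approach from the accepted answer above can be tried out with Python's `sqlite3`. One caveat: SQLite does not allow `ORDER BY`/`LIMIT` directly on UNION operands, so each branch is wrapped in a subselect (the sample rows below are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Service (id INTEGER, cusId TEXT, serviceType INTEGER, date TEXT)")
con.executemany("INSERT INTO Service VALUES (?, ?, ?, ?)", [
    (1, "c1", 1, "2014-01-01"),
    (2, "c1", 1, "2014-02-01"),   # latest type-1 service
    (3, "c1", 2, "2014-01-15"),
    (4, "c1", 2, "2014-03-01"),   # latest type-2 service
])

# One row per serviceType: each branch picks its own latest record
rows = con.execute("""
    SELECT * FROM (SELECT * FROM Service
                   WHERE cusId = ? AND serviceType = 1
                   ORDER BY date DESC LIMIT 1)
    UNION
    SELECT * FROM (SELECT * FROM Service
                   WHERE cusId = ? AND serviceType = 2
                   ORDER BY date DESC LIMIT 1)
""", ("c1", "c1")).fetchall()
print(sorted(rows))
```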
I have one table to register all my images, for example client logos, user photos, and promotion images. The problem is that one promotion might have more than one image. As I use this table for other images besides promotions, I can't put a foreign key inside it. So my question is what to do: create an intermediate table between promotions and images, or create a table only for promotion images.
Thank you | If one image can be in many promotions and one promotion can have many images, then create an **N:N** relation with 3 tables:
```
image:
imageID INT,
image YOUR_TYPE_OF_IMAGE,
...
promotion:
promotionID INT,
description VARCHAR(255),
name VARCHAR(255),
...
image_and_promotion:
imageID INT,
promotionID INT
```
You can add a uniqueness constraint on the third table:
```
UNIQUE(imageID, promotionID)
``` | You should create an intermediate table between promotions and images | How to create a relation N:N and 1:N in the same table? | [
"",
"sql",
"database",
"relation",
""
] |
I have a SQL Server 2012 stored procedure that runs automatically on the fifth of every month, but the problem I have is supplying the previous month's date range. E.g. I need a clause like **...where datein between '2014-02-01' and '2014-02-28'**, and the next month it would change to **...where datein between '2014-03-01' and '2014-03-31'**, and so on.
thanks | This should work
```
SELECT DATEADD(DAY,1,EOMONTH(GETDATE(),-2)) AS StartDate, EOMONTH(GETDATE(),-1) AS EndDate
```
To be more specific in your WHERE clause
```
WHERE datein BETWEEN DATEADD(DAY,1,EOMONTH(GETDATE(),-2)) AND EOMONTH(GETDATE(),-1)
``` | You can use `getdate()` and some date arithmetic. Here is a relatively easy way:
```
where datein >= cast(dateadd(month, -1, getdate() - day(getdate()) + 1) as date) and
datein < cast(getdate() - day(getdate()) + 1)
```
The key idea here is to subtract the day of the month and add 1 to get the first day of the current month. | how to get automatically previous month date range in SQL? | [
"",
"sql",
"sql-server",
""
] |
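For reference, the previous-month window that the accepted answer builds with `EOMONTH` can be computed in plain Python with only the standard library, which is handy for checking what the query should return on any given run date:

```python
import datetime

def previous_month_range(today):
    """Return (first_day, last_day) of the month before `today`."""
    first_of_this_month = today.replace(day=1)
    last_of_prev = first_of_this_month - datetime.timedelta(days=1)
    return last_of_prev.replace(day=1), last_of_prev

# Run on the 5th of March: the window is all of February
start, end = previous_month_range(datetime.date(2014, 3, 5))
print(start, end)  # 2014-02-01 2014-02-28
```

Stepping back one day from the first of the current month lands on the last day of the previous month, which sidesteps all leap-year and month-length arithmetic.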
I have downloaded Microsoft SQL Server Management Studio 2014 Express. I have set it up and I now need to connect to the server. I think my server name has something to do with my computer name but its not working. How do I get in? | Assuming you installed SQL Server, and not just SSMS please read this:
<http://msdn.microsoft.com/en-us/library/ms143531(v=sql.120).aspx>
In all likelihood it was set up with the default instance name of \SQLExpress. So to connect, you would enter machinename\SQLExpress.
Ex: `Office-PC1\SQLExpress`
> If you specify MSSQLServer for the instance name, a default instance
> will be created. For SQL Server Express, if you specify SQLExpress for
> the instance name, a default instance will be created.
NB: If you open the "Server Name" drop down in the SSMS "Connect To Server" window, there is a "Browse for more..." option which should help you find your installed server. | Assuming you actually installed the db server, and NOT just the management tools, you can always use "localhost" (without the quotes). Have you made sure you are using port 1433? I am not sure if that was a question during the setup. | SQL Server 2014 Express Set Up | [
"",
"sql",
"sql-server",
"installation",
"sql-server-express",
"sql-server-2014-express",
""
] |
I have tried <http://sqlzoo.net/wiki/Self_join>
> Self Join: Find the routes involving two buses that can go from Craiglockhart to Sighthill. Show the bus no. and company for the first bus, the name of the stop for the transfer, and the bus no. and company for the second bus.
My code:
```
SELECT a.num, a.company,
trans1.name, c.num, c.company
FROM route a JOIN route b
ON (a.company = b.company AND a.num = b.num)
JOIN (route c JOIN route d ON (c.company = d.company AND c.num= d.num))
JOIN stops start ON (a.stop = start.id)
JOIN stops trans1 ON (b.stop = trans1.id)
JOIN stops trans2 ON (c.stop = trans2.id)
JOIN stops end ON (d.stop = end.id)
WHERE start.name = 'Craiglockhart' AND end.name = 'Sighthill'
AND trans1.name = trans2.name
ORDER BY a.num ASC, trans1.name
```
The output gives multiple rows:
```
4 LRT London Road 35 LRT
4 LRT London Road 34 LRT
4 LRT London Road 35 LRT
4 LRT London Road 34 LRT
4 LRT London Road C5 SMT
```
Where I want:
```
4 LRT London Road 34 LRT
4 LRT London Road 35 LRT
4 LRT London Road 65 LRT
4 LRT London Road C5 SMT
```
There is also a bug: ordering by `a.num` with `ASC` doesn't work.
Also, when I put `DISTINCT` before `c.num` it shows an error.
I can't use `GROUP BY` since it gives too few rows.
Why? | RE: the sorting 'bug', this is due to the way the application sorts. It sorts alphabetically; so 10 comes before 2, etc. [This article](https://stackoverflow.com/questions/153633/natural-sort-in-mysql) shows a way to do "natural sorting" using LENGTH().
For this particular problem, I was able to get the correct answer using:
```
ORDER BY LENGTH(a.num), b.num, trans1.id, LENGTH(c.num), d.num;
``` | My solution to this problem:
I divided the problem into two.
> First subquery will be the table S(Start), which will get all the
> routes that start from 'Craiglockhart' Second subquery will be the
> table E(End), which will get all the routes that start from
> 'Sighthill'
Now both table S and E will have common routes, and i get all this common routes by joining the subqueries, using the ids of each table.
As there are duplicates routes(same: S.num, S.company, stops.name, E.num, E.company) i used DISTINCT.
```
SELECT DISTINCT S.num, S.company, stops.name, E.num, E.company
FROM
(SELECT a.company, a.num, b.stop
FROM route a JOIN route b ON (a.company=b.company AND a.num=b.num)
WHERE a.stop=(SELECT id FROM stops WHERE name= 'Craiglockhart')
)S
JOIN
(SELECT a.company, a.num, b.stop
FROM route a JOIN route b ON (a.company=b.company AND a.num=b.num)
WHERE a.stop=(SELECT id FROM stops WHERE name= 'Sighthill')
)E
ON (S.stop = E.stop)
JOIN stops ON(stops.id = S.stop)
``` | Self join tutorial on SQLZoo | [
"",
"mysql",
"sql",
"join",
"self-join",
""
] |
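The LENGTH()-based "natural sort" referenced in the accepted answer above has a direct Python analogue: sort by (length, value), so that shorter numbers come first and plain text comparison applies within each length. A small illustration with hypothetical bus numbers:

```python
bus_numbers = ["10", "2", "C5", "34", "4", "35"]

# (len, value): '2' (1 char) sorts before '10' (2 chars),
# then '10' < '34' < '35' < 'C5' by ordinary string comparison
natural = sorted(bus_numbers, key=lambda s: (len(s), s))
print(natural)
```

This is exactly what `ORDER BY LENGTH(a.num), a.num` achieves in SQL: the length acts as the primary key of the sort, and the raw string breaks ties.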
I was facing a problem on how to create a calculated column based on a column which contains dates. For example, I have a column which contains dates starting from July.
The DAY needs to be calculated as described in the picture using SQL Server.
So basically can a column be created based on an existing column containing a lot of dates? I need this to be dynamic.
```
WeekDay Date Day
---------------------------
Friday 3-Jan-14 1
Monday 6-Jan-14 2
Tuesday 7-Jan-14 3
Wednesday 8-Jan-14 4
Thursday 9-Jan-14 5
Friday 10-Jan-14 6
Monday 13-Jan-14 7
Tuesday 14-Jan-14 8
Wednesday 15-Jan-14 9
Thursday 16-Jan-14 10
Friday 17-Jan-14 11
Monday 20-Jan-14 12
Tuesday 21-Jan-14 13
Wednesday 22-Jan-14 14
Thursday 23-Jan-14 15
Friday 24-Jan-14 16
Monday 27-Jan-14 17
Tuesday 28-Jan-14 18
Wednesday 29-Jan-14 19
Thursday 30-Jan-14 20
Friday 31-Jan-14 21
``` | With table, DateTable with a column Date of type Date, the following query will do what you ask.
```
SELECT
DATENAME(dw, Date) AS WeekDay
,Date
,ROW_NUMBER() OVER (ORDER BY Date) AS Day
FROM DateTable
WHERE DATEPART(dw, Date) NOT IN (1, 7)
ORDER BY Date
``` | Based on the Title and subject, I am guessing that you want to try and identify the working day number of each month.
> example:
>
> 1-Jan-14 is Wednesday so its Working day number 1
>
> 6-Jan-14 is Monday so its working day number 4 as (4-Jan-14 and 5-Jan-14 are Weekends)
Also the result should be a derived column in a table that already has data.
If the above is true then this is what I suggest
* Create a function to calculate the working days between two dates.
(I have taken the last day of each month as my initial date, but you could also do +1 if you decide to go with the first-day-of-month approach. (Personal preference, really.))
* Alter table and add calculated column that calls the function
Sample Query:
```
IF OBJECT_ID('test') > 0
BEGIN
DROP TABLE test
END;
GO
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ufn_WordkindDayNumber]') AND type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
DROP FUNCTION [dbo].ufn_WordkindDayNumber
GO
CREATE FUNCTION ufn_WordkindDayNumber(@Date DATETIME)
RETURNS INT
AS
BEGIN
DECLARE @LastDayofPreviousMonth DATETIME,
@Return int
SET @LastDayofPreviousMonth = CAST(YEAR(@Date) as VARCHAR(4))+RIGHT('00'+CAST(MONTH(@Date) as VARCHAR(2)),2)+'01'
SET @LastDayofPreviousMonth = DATEADD (day, -1, CAST(@LastDayofPreviousMonth AS DATE))
-- Logic adapted from http://stackoverflow.com/questions/252519/count-work-days-between-two-dates. Credit CMS
SELECT @Return =
CASE
WHEN DATENAME(dw, @Date) = 'Sunday' OR DATENAME(dw, @Date) = 'Saturday' THEN 0
ELSE
(DATEDIFF(dd, @LastDayofPreviousMonth, @Date))
-(DATEDIFF(wk, @LastDayofPreviousMonth, @Date) * 2)
-(CASE WHEN DATENAME(dw, @Date) IN ('Saturday', 'Sunday') THEN 1 ELSE 0 END)
END
-- Return the result of the function
RETURN @Return
END
GO
CREATE TABLE test ([date] DATE NULL);
INSERT INTO test
SELECT '27-Dec-13' UNION ALL
SELECT '28-Dec-13' UNION ALL
SELECT '29-Dec-13'
ALTER TABLE test
ADD WordkindDayNumber AS dbo.ufn_WordkindDayNumber(Date);
GO
INSERT INTO test
SELECT '30-Dec-13' UNION ALL
SELECT '31-Dec-13'
SELECT *
FROM test;
DROP TABLE test;
```
Validation Script:
```
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ufn_WordkindDayNumber]') AND type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
DROP FUNCTION [dbo].ufn_WordkindDayNumber
GO
CREATE FUNCTION ufn_WordkindDayNumber(@Date DATETIME)
RETURNS INT
AS
BEGIN
DECLARE @LastDayofPreviousMonth DATETIME,
@Return int
SET @LastDayofPreviousMonth = CAST(YEAR(@Date) as VARCHAR(4))+RIGHT('00'+CAST(MONTH(@Date) as VARCHAR(2)),2)+'01'
SET @LastDayofPreviousMonth = DATEADD (day, -1, CAST(@LastDayofPreviousMonth AS DATE))
-- Logic adapted from http://stackoverflow.com/questions/252519/count-work-days-between-two-dates. Credit CMS
SELECT @Return =
CASE
WHEN DATENAME(dw, @Date) = 'Sunday' OR DATENAME(dw, @Date) = 'Saturday' THEN 0
ELSE
(DATEDIFF(dd, @LastDayofPreviousMonth, @Date))
-(DATEDIFF(wk, @LastDayofPreviousMonth, @Date) * 2)
-(CASE WHEN DATENAME(dw, @Date) IN ('Saturday', 'Sunday') THEN 1 ELSE 0 END)
END
-- Return the result of the function
RETURN @Return
END
GO
;WITH cte_TestData(Date) AS
(SELECT '30-Dec-13' UNION ALL
SELECT '31-Dec-13' UNION ALL
SELECT '1-Jan-14' UNION ALL
SELECT '2-Jan-14' UNION ALL
SELECT '3-Jan-14' UNION ALL
SELECT '4-Jan-14' UNION ALL
SELECT '5-Jan-14' UNION ALL
SELECT '6-Jan-14' UNION ALL
SELECT '7-Jan-14' UNION ALL
SELECT '8-Jan-14' UNION ALL
SELECT '9-Jan-14' UNION ALL
SELECT '10-Jan-14' UNION ALL
SELECT '11-Jan-14' UNION ALL
SELECT '12-Jan-14' UNION ALL
SELECT '13-Jan-14' UNION ALL
SELECT '14-Jan-14' UNION ALL
SELECT '15-Jan-14' UNION ALL
SELECT '16-Jan-14' UNION ALL
SELECT '17-Jan-14' UNION ALL
SELECT '18-Jan-14' UNION ALL
SELECT '19-Jan-14' UNION ALL
SELECT '20-Jan-14' UNION ALL
SELECT '21-Jan-14' UNION ALL
SELECT '22-Jan-14' UNION ALL
SELECT '23-Jan-14' UNION ALL
SELECT '24-Jan-14' UNION ALL
SELECT '25-Jan-14' UNION ALL
SELECT '26-Jan-14' UNION ALL
SELECT '27-Jan-14' UNION ALL
SELECT '28-Jan-14' UNION ALL
SELECT '29-Jan-14' UNION ALL
SELECT '30-Jan-14' UNION ALL
SELECT '31-Jan-14' )
,cte_FirstDateofMonth as
(
SELECT Date,
CAST(YEAR(DATE) as VARCHAR(4))+RIGHT('00'+CAST(MONTH(DATE) as VARCHAR(2)),2)+'01' AS FDoM
FROM cte_TestData
)
,cte_LastDayofPreviousMonth as
(
SELECT Date,
DATEADD (day, -1, CAST(FDoM AS DATE)) as LDoPM
FROM cte_FirstDateofMonth
)
SELECT [Date],
DATENAME(dw, Date) AS DatofWeek,
CASE
WHEN DATENAME(dw, Date) = 'Sunday' OR DATENAME(dw, Date) = 'Saturday' THEN 0
ELSE
DATEDIFF(dd, LDoPM, Date)
- (DATEDIFF(wk, LDoPM, Date) * 2)
- (CASE
WHEN DATENAME(dw, Date) = 'Sunday' OR DATENAME(dw, Date) = 'Saturday' THEN 1
ELSE 0
END)
END AS WordkindDayNumber,
dbo.ufn_WordkindDayNumber(Date) as FN_WordkindDayNumber
FROM cte_LastDayofPreviousMonth;
``` | Count of weekdays in a given month | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
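The accepted answer above filters out weekends and numbers the remaining dates with ROW_NUMBER(). The same idea in plain Python (standard library only), using the question's January 2014 dates:

```python
import datetime

# Candidate dates starting 3-Jan-2014, as in the question's table
dates = [datetime.date(2014, 1, 3) + datetime.timedelta(days=i) for i in range(12)]

# weekday() is 0..4 for Mon..Fri, mirroring DATEPART(dw, Date) NOT IN (1, 7)
weekdays = [d for d in dates if d.weekday() < 5]
day_number = {d: i + 1 for i, d in enumerate(sorted(weekdays))}

print(day_number[datetime.date(2014, 1, 3)])   # Friday 3-Jan  -> 1
print(day_number[datetime.date(2014, 1, 13)])  # Monday 13-Jan -> 7
```

Saturdays and Sundays simply never appear in the mapping, just as the WHERE clause removes them before ROW_NUMBER() assigns the running day count.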
```
CREATE TABLE #TempTBL
(
ID INT IDENTITY(1,1),
KC1 varchar(10),
KC2 varchar(10),
KC3 varchar(10),
NC1 int,
NC2 money,
IsON bit
)
INSERT INTO #TempTBL
SELECT 'ABC','MNO','XYZ',1,1.00,1
UNION ALL
SELECT 'ABC','MNO','XYZ',1,1.00,0
UNION ALL
SELECT 'ABD','MNO','XYZ',1,1.10,1
UNION ALL
SELECT 'ABD','MNO','XYZ',1,1.10,0
UNION ALL
SELECT 'ABD','MNO','XYZ',2,1.00,0
UNION ALL
SELECT 'ABE','MNO','XYZ',1,1.10,1
SELECT * FROM #TempTBL
DROP TABLE #TempTBL
```
<http://ideone.com/HSLynu>
I am trying to find unique row number based on KC1, KC2, and KC3 (key columns). Then, I am trying to derive for each unique record, how many records have IsOn = 1 and IsOn = 0. To better understand, below is the output I am expecting. (I am trying to derive RowNum, OnCnt, and offCnt fields).
```
ID KC1 KC2 KC3 NC1 NC2 IsON RowNum OnCnt OffCnt
1 ABC MNO XYZ 1 1 1 1 1 1
2 ABC MNO XYZ 1 1 0 1 1 1
3 ABD MNO XYZ 1 1.1 1 2 1 2
4 ABD MNO XYZ 1 1.1 0 2 1 2
5 ABD MNO XYZ 2 1 0 2 1 2
6 ABE MNO XYZ 1 1.1 1 3 1 0
```
Now, before you start saying "show your work", let me just say that the part of the table I have listed is a portion of a big query I am building. I am just unable to come up with the logic for these three things (two, if we consider OnCnt and OffCnt the "same").
Thanks! | Your `RowNum` is actually a `DENSE_RANK()`, and you can use a conditional aggregate for the counts, (either `SUM()` like below, or `COUNT()`):
```
SELECT *
,RowNum = DENSE_RANK() OVER(ORDER BY KC1,KC2,KC3)
,OnCNT = SUM(CASE WHEN IsON = 1 THEN 1 END) OVER(PARTITION BY KC1,KC2,KC3)
,OffCNT = COALESCE(SUM(CASE WHEN IsON = 0 THEN 1 END) OVER(PARTITION BY KC1,KC2,KC3),0)
FROM #TempTBL
``` | ```
-- Use windowing functions
SELECT *
,DENSE_RANK() OVER(ORDER BY KC1,KC2,KC3) AS Row_Num
,COUNT(CASE IsON WHEN 1 THEN 1 END) OVER(PARTITION BY KC1,KC2,KC3) AS OnCnt
,COUNT(CASE IsON WHEN 0 THEN 1 END) OVER(PARTITION BY KC1,KC2,KC3) AS OffCnt
FROM MyTable
``` | T-SQL | Counts and ranks | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Database:
* id subid
* 1 1
* 2 1
* 3 2
* 4 3
The id column is auto-incremented and is the primary key; subid is any number.
I want to remove rows with duplicate subid values (keeping one row per subid) and arrange the result by subid in decreasing order.
Result I want:
* id subid
* 4 3
* 3 2
* 2 1 | ```
SELECT MAX(id), subid FROM table GROUP BY subid ORDER BY subid DESC
``` | ```
select max(id), subid from table group by subid order by subid desc
``` | Deselect duplicates in sql | [
"",
"mysql",
"sql",
"duplicates",
""
] |
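The accepted query above can be verified as-is with Python's `sqlite3`, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, subid INTEGER)")
con.executemany("INSERT INTO t (subid) VALUES (?)", [(1,), (1,), (2,), (3,)])

# MAX(id) keeps one row per subid; ORDER BY subid DESC gives the requested order
rows = con.execute(
    "SELECT MAX(id), subid FROM t GROUP BY subid ORDER BY subid DESC"
).fetchall()
print(rows)
```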
I have this SQL command that creates a table with a `GEOGRAPHY`-type column for spatial data that is meant to store a latitude/longitude pair (a point). How can I set a default value for that column? For example, a point at (0,0)?
```
CREATE TABLE [dbo].[StationLocations] (
[Id] [int] NOT NULL,
[StationId] [int] NOT NULL,
[Position] [Geography],
CONSTRAINT [PKStationLocations] PRIMARY KEY CLUSTERED ([Id]))
``` | try this:
```
CREATE TABLE #TEMP(ID INT,COL1 GEOGRAPHY DEFAULT(CONVERT(GEOGRAPHY,'POINT (0 0)')))
INSERT INTO #TEMP (ID) VALUES(1)
SELECT * FROM #TEMP
ID COL1
1 0xE6100000010C00000000000000000000000000000000
SELECT ID,CONVERT(VARCHAR,COL1) AS DEF_VALUE FROM #TEMP
ID DEF_VALUE
1 POINT (0 0)
```
In your case
```
CREATE TABLE [dbo].[StationLocations] (
[Id] [int] NOT NULL,
[StationId] [int] NOT NULL,
[Position] [Geography] DEFAULT(CONVERT(GEOGRAPHY,'POINT (0 0)')),
CONSTRAINT [PKStationLocations] PRIMARY KEY CLUSTERED ([Id]))
``` | Not sure if this is the best method but, using [this answer](https://stackoverflow.com/a/8267910 "Update statement- Geography column - sql server") as a reference, you could define it [like this](http://sqlfiddle.com/#!3/a5b3b5/1 "Demo at SQL Fiddle"):
```
ALTER TABLE dbo.StationLocations
ADD CONSTRAINT DF_StationLocations_Position
DEFAULT geography::STPointFromText('POINT (0 0)', 4326) FOR Position
;
``` | How to set a point as default value for a geography column? | [
"",
"sql",
"sql-server",
"spatial",
"geography",
""
] |
In the code below I pass multiple comma-separated values to @i_CustomerGroupID and one value to @i_LocationID, and I face this issue: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
The statement has been terminated." Please help me solve the issue.
```
ALTER PROCEDURE [dbo].[spInsertCustomerGroupLocationMap]
-- Add the parameters for the stored procedure here
@i_LocationID int,
@i_CustomerGroupID varchar(100)
--WITH ENCRYPTION
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
IF NOT EXISTS (SELECT 1 FROM CustomerGroupLocationMap WHERE LocationID = @i_LocationID AND CustomerGroupID = @i_CustomerGroupID)
BEGIN
INSERT INTO CustomerGroupLocationMap (LocationID, CustomerGroupID) VALUES (@i_LocationID, (SELECT * FROM dbo.CSVToTable(@i_CustomerGroupID)));
END
END
``` | You want to use `insert . . . select`:
```
INSERT INTO CustomerGroupLocationMap(LocationID, CustomerGroupID)
SELECT @i_LocationID, t.*
FROM dbo.CSVToTable(@i_CustomerGroupID) t;
```
Your function `dbo.CSVToTable()` returns more than one value (I assume there is only one column). The correct syntax is the `insert . . . select`.
As a note, `insert . . . values` is really not needed. You can use `insert . . . select` even when you only have constants. | ```
IF NOT EXISTS (SELECT TOP 1 FROM CustomerGroupLocationMap WHERE LocationID = @i_LocationID AND CustomerGroupID = @i_CustomerGroupID)
BEGIN
INSERT INTO CustomerGroupLocationMap (LocationID, CustomerGroupID) VALUES (@i_LocationID, (SELECT * FROM dbo.CSVToTable(@i_CustomerGroupID)));
END
``` | Sql Insert Error -"Subquery returned more than 1 value" | [
"",
"sql",
"sql-server",
""
] |
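The `INSERT ... SELECT` shape recommended in the accepted answer above — a constant paired with every row of a multi-row source — can be illustrated with `sqlite3`. A plain table stands in for the `dbo.CSVToTable()` function here, since SQLite has no table-valued functions (the table and values are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CustomerGroupLocationMap (LocationID INTEGER, CustomerGroupID TEXT)")
con.execute("CREATE TABLE SplitGroups (CustomerGroupID TEXT)")  # stands in for dbo.CSVToTable(...)
con.executemany("INSERT INTO SplitGroups VALUES (?)", [("10",), ("20",), ("30",)])

# One constant LocationID is paired with every row of the source table
con.execute("""
    INSERT INTO CustomerGroupLocationMap (LocationID, CustomerGroupID)
    SELECT 7, CustomerGroupID FROM SplitGroups
""")

rows = con.execute(
    "SELECT * FROM CustomerGroupLocationMap ORDER BY CustomerGroupID"
).fetchall()
print(rows)
```

This is why `INSERT ... VALUES (...)` fails with a multi-row subquery: VALUES expects one scalar per column, while `INSERT ... SELECT` naturally inserts one row per source row.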
MS Access has generated the following query:
```
SELECT Port.portName, Auth.portID,
Region.RegionName, Region.RegionID,
Auth.authID
FROM Region
INNER JOIN (Auth INNER JOIN Port ON Auth.portID = Port.portID)
ON Region.RegionID = Port.RegionID;
```
and I want to
```
GROUP BY Region.RegionID
```
When I add it to the end, it tells me that PortName is not part of an aggregate function. Could someone please advise how to do this? I've tried inserting it at various points in the SQL but it doesn't seem to work. | This selects the port with the greatest id per region
```
SELECT
Port.portName,
Auth.portID,
Region.RegionName,
Region.RegionID,
Auth.authID
FROM Region
INNER JOIN Port ON Region.RegionID = Port.RegionID
INNER JOIN Auth ON Auth.portID = Port.portID
INNER JOIN (
SELECT max(Port.portID) max_portID, Port.RegionID
FROM Port
GROUP BY Port.RegionID
) t1 ON t1.RegionID = Port.RegionID AND t1.max_portID = Port.portID
```
The above query assumes there is only 1 `Auth` per `Port` but if there can be multiple `Auth` per `Port` you'll need to similarly select only the max `Auth` to avoid duplicate Regions in the result.
```
SELECT
Port.portName,
Auth.portID,
Region.RegionName,
Region.RegionID,
Auth.authID
FROM Region
INNER JOIN Port ON Region.RegionID = Port.RegionID
INNER JOIN Auth ON Auth.portID = Port.portID
INNER JOIN (
SELECT max(Port.portID) max_portID, Port.RegionID
FROM Port
GROUP BY Port.RegionID
) t1 ON t1.RegionID = Port.RegionID AND t1.max_portID = Port.portID
INNER JOIN (
SELECT max(Auth.authID) max_authID, Auth.portID
FROM Auth
GROUP BY Auth.portID
) t2 ON t2.portID = Port.portID AND t2.max_authID = Auth.authID
``` | Are you trying to do this?:
```
SELECT Region.RegionID, Region.RegionName, Port.portName, Auth.authID
FROM Auth
INNER JOIN Port ON Port.portID = Auth.portID
INNER JOIN Region ON Region.RegionID = Port.RegionID
ORDER BY Region.RegionID
``` | SQL Group By with Inner Joins | [
"",
"sql",
""
] |
Check my table below:
```
ID Number01 Number02 Number03
-----------------------------------
1 10 20 4510
2 5 2 545
3 4 4 664
4 10 1 NULL
5 1 4 NULL
```
The "Number03" field is a calculated field, which is Number01 + Number02. I am using a stored procedure to calculate it. Why am I using a stored procedure? Because I have an interface built with ASP.NET.
This is the stored procedure:
```
ALTER PROCEDURE _mySP
(@Number01 decimal(18,0), @Number02 decimal(18,0))
AS
BEGIN
DECLARE @myCOM1 float
DECLARE @myCOM2 float
    SET @myCOM1 = @Number01 + 500
    SET @myCOM2 = POWER(@Number02, 2) * 10
INSERT INTO _myTable(Number01, Number02, Number03)
VALUES (@Number01, @Number02, @myCOM1 + @myCOM2)
END
```
The question is: how can I execute the stored procedure without entering the values one by one? The values are already in the table. I want to update all the NULL values in the "Number03" field. Also, any idea how to do this using a CURSOR?
EDIT: It seems that my previous question is too simple. So I make it complex a little bit. | Since you're using SQL Server 2008, you can use a [**computed field**](http://msdn.microsoft.com/en-us/library/ms188300(v=sql.105).aspx) to generate the value from the other two fields.
If you just want to update the table without doing this, you could use the following SQL:
```
UPDATE _myTable SET Number03 = ISNULL(Number01,0) + ISNULL(Number02,0)
```
This will update all rows.
You could update the SP to take a 3rd param:
```
ALTER PROCEDURE _mySP
(
@Number01 DECIMAL(18, 0) ,
@Number02 DECIMAL(18, 0) ,
@IsUpdate BIT = 0
)
AS
BEGIN
IF ( @IsUpdate = 0 )
BEGIN
INSERT INTO _myTable
( Number01 ,
Number02 ,
Number03
)
VALUES ( @Number01 ,
@Number02 ,
ISNULL(@Number01,0) + ISNULL(@Number02,0)
)
END
ELSE
BEGIN
UPDATE _myTable
SET Number03 = ISNULL(Number01,0) + ISNULL(Number02,0)
WHERE Number03 IS NULL
END
END
```
**EDIT**: I have added `ISNULL` to the calculation for any numbers that are `null`, it will use **0** instead. | If you *really* want to use the SP to do the work you have to do it one at a time. Write a cursor to select all the rows with a NULL value and call the SP for each row passing Number01 and Number02.
I suspect your actual example is rather more complicated than the code you've shown. If you are able to go into more detail we may be able to suggest better approaches. If it is indeed this simple a computed column or simple update statement would be the way to go. | Execute stored procedure SQL using value from table | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
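The backfilling UPDATE from the accepted answer above translates almost verbatim to `sqlite3` (`IFNULL` is SQLite's spelling of `ISNULL`). A sketch with a few of the question's sample rows, restricted to rows where the calculated column is still missing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE _myTable (Number01 REAL, Number02 REAL, Number03 REAL)")
con.executemany("INSERT INTO _myTable VALUES (?, ?, ?)", [
    (10, 20, 4510), (10, 1, None), (1, 4, None),
])

# Only fill rows where Number03 is still NULL; NULL inputs count as 0
con.execute("""
    UPDATE _myTable
    SET Number03 = IFNULL(Number01, 0) + IFNULL(Number02, 0)
    WHERE Number03 IS NULL
""")

rows = con.execute("SELECT Number03 FROM _myTable").fetchall()
print(rows)
```

A single set-based UPDATE like this fills every NULL row at once, which is the point of the answer: no cursor and no per-row procedure calls are needed.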
I have a query like this:
```
select
o.Name,
o.Family,
o.phone,
o.OBJECTID,
o.owner_id,
l.Licence_Contex,
l.Licence_title,
l.Id
from
[dbo].[owner_licence] as l,[dbo].[OWNER] as o
where o.[owner_id] =l.[Owner_ID]
And l.Id
NOT IN
(select l.id from [dbo].[owner_licence] as l,[dbo].[OWNER] as o
where o.[owner_id] =l.[Owner_ID]
And (l.Registration_date1 > DATEADD(year, -1, GetDate())
or l.[Registration_date2]> DATEADD(year, -1, GetDate())
or l.[Registration_date3]> DATEADD(year, -1, GetDate())
or l.[Registration_date4] > DATEADD(year, -1, GetDate())
or l.[Registration_date5]> DATEADD(year, -1, GetDate())))
```
the result is some how like this
```
john smith 09305689220 1080199884 1 licencetitle_1 licencecontex_1 10
John Smith 09305689220 1080199884 1 licencetitle_2 licencecontex3 13
```
As you can see, both of these rows are for the same person, and I want to aggregate these duplicates into one row. Is there any way to do it? I used GROUP BY o.owner_id and also DISTINCT, but they do not work. | The subquery restricts the license to the one with the maximum (qualified) id for each owner, so you should only get 1 license (and therefore 1 row) per owner.
```
select
o.Name,
o.Family,
o.phone,
o.OBJECTID,
o.owner_id,
l.Licence_Contex,
l.Licence_title,
l.Id
from
[dbo].[owner_licence] as l
join [dbo].[OWNER] as o on o.[owner_id] = l.[Owner_ID]
join (select
max(ol.id) max_lid,
ol.owner_id
from
owner_licence ol
where ol.id not in (select l.id from [dbo].[owner_licence] as l,[dbo].[OWNER] as o
where o.[owner_id] =l.[Owner_ID]
And (l.Registration_date1 > DATEADD(year, -1, GetDate())
or l.[Registration_date2]> DATEADD(year, -1, GetDate())
or l.[Registration_date3]> DATEADD(year, -1, GetDate())
or l.[Registration_date4] > DATEADD(year, -1, GetDate())
or l.[Registration_date5]> DATEADD(year, -1, GetDate())))
group by
ol.owner_id) t1 on t1.owner_id = o.owner_id AND t1.max_lid = l.id
If you want this as one row, then you will need to remove the last column from the SELECT clause.
Use a GROUP BY clause. | Using group by in a sql query for removing duplicates | [
"",
"sql",
"group-by",
""
] |
I have a table, which looks like the following:
Table `users`:
```
id points date
1 100 2014-07-01
2 500 2014-07-02
3 200 2014-07-01
4 100 2014-07-03
5 100 2014-07-01
6 400 2014-07-02
7 800 2014-07-02
8 200 2014-07-02
```
Now, how is it possible to select each `unique` date and count the sum of the points on those days?
So I'd need a result something like this:
```
points date
400 2014-07-01
1900 2014-07-02
100 2014-07-03
``` | ```
SELECT SUM(`points`) AS points, `date`
FROM users
GROUP BY `date`
``` | This group-and-aggregate operation is a standard query pattern and can be solved following these steps1.
1. Use a [`GROUP BY date`](http://dev.mysql.com/doc/refman/5.0/en/group-by-modifiers.html) to aggregate the dates into groups, by date.
2. Select [`SUM(points)`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_sum) to tally the points in each aggregated [date] group.
3. Select the `date` column, which represents each group, to include it in the results.
4. Finally, apply `ORDER BY date` to ensure the results are ordered.
---
1 It is good etiquette, and it results in better answers/discussion, to include *attempted* solutions (i.e. queries) in questions, and to explain why they didn't work [correctly]. | How to get a daily SUM of numbers with MySQL? | [
"",
"mysql",
"sql",
""
] |
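The accepted GROUP BY query above reproduces the question's expected output exactly; here it is run end-to-end with Python's `sqlite3` and the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, points INTEGER, date TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    (1, 100, "2014-07-01"), (2, 500, "2014-07-02"), (3, 200, "2014-07-01"),
    (4, 100, "2014-07-03"), (5, 100, "2014-07-01"), (6, 400, "2014-07-02"),
    (7, 800, "2014-07-02"), (8, 200, "2014-07-02"),
])

# One output row per unique date, with that day's points summed
rows = con.execute(
    "SELECT SUM(points) AS points, date FROM users GROUP BY date"
).fetchall()
print(sorted(rows, key=lambda r: r[1]))
```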
I think I've seen answers for similar questions for MySQL, but I'm struggling to find an answer applicable to SQL Server 2005.
So I have a table like this:
```
| ID | RelationalID | Year
----------------------------
| 1 | A | 2014
| 2 | A | 2014
| 3 | B | 2014
| 4 | A | 2015
| 5 | B | 2015
```
And I'd like a result like this when I join the same table where RelationalID matches but the year is different:
```
| 2014_ID | 2015_ID | RelationalID |
------------------------------------
| 1 | 4 | A |
| 2 | NULL | A |
| 3 | 5 | B |
```
But a standard JOIN ends up getting duplicate matches:
```
| 2014_ID | 2015_ID | RelationalID |
------------------------------------
| 1 | 4 | A |
| 2 | 4 | A |
| 3 | 5 | B |
```
Is there a way to join two tables where the matches from the right table are joined only once in SQL Server 2005?
I tried this query with no success:
```
SELECT * FROM myTable
LEFT JOIN (SELECT * FROM myTable) AS t ON t.RelationalID = myTable.RelationalID
WHERE myTable.Year = 2014 and t.Year = 2015
``` | You can get the result based on ROW\_NUMBERs, but you need a rule for how to assign them; I assumed it's based on the Id.
```
;WITH cte AS
(SELECT Id,
RelationalId,
year,
row_number()
over (partition by RelationalId, year
order by Id) as rn
FROM [YourTable]
)
select t1.id as Id_2014,t2.id as Id_2015, t1.RelationalId
from cte as t1 left join cte as t2
on t1.RelationalId = t2.RelationalId
and t1.rn = t2.rn
and t2.year = 2015
where t1.Year = 2014
```
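As a quick sanity check (illustration only, not SQL Server code), the same ROW\_NUMBER pairing can be reproduced on the question's sample rows with SQLite, which needs version 3.25+ for window functions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (Id INTEGER, RelationalId TEXT, Year INTEGER)")
conn.executemany(
    "INSERT INTO YourTable VALUES (?, ?, ?)",
    [(1, "A", 2014), (2, "A", 2014), (3, "B", 2014), (4, "A", 2015), (5, "B", 2015)],
)

# Number the rows per (RelationalId, Year), then pair 2014 row n with 2015 row n.
rows = conn.execute("""
    WITH cte AS (
        SELECT Id, RelationalId, Year,
               ROW_NUMBER() OVER (PARTITION BY RelationalId, Year ORDER BY Id) AS rn
        FROM YourTable
    )
    SELECT t1.Id, t2.Id, t1.RelationalId
    FROM cte AS t1
    LEFT JOIN cte AS t2
      ON t1.RelationalId = t2.RelationalId AND t1.rn = t2.rn AND t2.Year = 2015
    WHERE t1.Year = 2014
    ORDER BY t1.Id
""").fetchall()
print(rows)  # [(1, 4, 'A'), (2, None, 'A'), (3, 5, 'B')]
```

The unmatched 2014 row (Id 2) correctly pairs with NULL, as in the desired output.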
This is based on TMNT2014's [fiddle](http://sqlfiddle.com/#!3/d6300/37) | The SQL below would give you the result you are looking for, but as I said before, the complexity would depend on the original set of data you have in your table. Here is the SQL Fiddle - <http://sqlfiddle.com/#!3/d6300/24> - Good Luck!
```
;WITH CTE_Union AS
(SELECT
a.Id AS Id2014,
NULL AS Id2015,
a.RelationalId
FROM [YourTable] a
WHERE a.Year = 2014
UNION
SELECT
NULL AS Id2014,
b.Id AS Id2015,
b.RelationalId
FROM [YourTable] b
WHERE b.Year = 2015)
SELECT Distinct CASE WHEN Id2014 IS NULL THEN (SELECT MIN(Id2014) FROM CTE_Union C WHERE C.RelationalId =M.RelationalId) ELSE Id2014 END AS ID2014 ,
CASE WHEN Id2015 IS NULL AND Id2014 = (SELECT MIN(Id2014) FROM CTE_Union C2 WHERE C2.RelationalId =M.RelationalId) THEN (SELECT MIN(Id2015) FROM CTE_Union C WHERE C.RelationalId =M.RelationalId) ELSE Id2015 END
,RelationalID
FROM CTE_Union M
``` | SQL Server 2005: How to join table rows only once | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
My question is kind of easy, but I'm still doubting after I created this transaction. If I execute the following code:
```
BEGIN TRANSACTION
DROP TABLE Table_Name
```
Can I perform a ROLLBACK TRANSACTION that recovers the dropped table? I'm asking because I don't know what happens in the 'Object Explorer' and I didn't find any question on this topic, so I think that it could be a useful issue. | `DROP TABLE` can be rolled back and it does not auto-commit. | This is incredibly easy to test.
```
create table TransactionTest
(
ID int identity primary key clustered,
SomeValue varchar(20)
)
insert TransactionTest
select 'Here is a row'
begin transaction
drop table TransactionTest
rollback transaction
select * from TransactionTest
``` | SQL Server: Does 'DROP TABLE' inside transaction causes an implicit commit? | [
"",
"sql",
"transactions",
"drop-table",
""
] |
I have three function calls in which I need all records in one column. I have been unable to achieve this and would appreciate a push in the right direction. I'm using a db2 database.
Function calls:
```
schema1.Usage ('element', 20140101, 20140714)
schema1.TOTAL_Usage ('element', 20140101, 20140714)
schema1.Usage ('element', 20140101, 20140714)/schema1.TOTAL_Usage ('element', 20140101, 20140714)
```
Here is what I have tried with no success:
```
select * from
(VALUES schema1.Usage ('element', 20140101, 20140714)
,schema1.TOTAL_Usage ('element', 20140101, 20140714)
,schema1.Usage ('element', 20140101, 20140714)/schema1.TOTAL_Usage ('element', 20140101, 20140714) AS X(a);
``` | If these are scalar functions, just put them in the `select`:
```
select schema1.Usage('element', 20140101, 20140714),
       schema1.TOTAL_Usage('element', 20140101, 20140714),
       schema1.Usage('element', 20140101, 20140714)/schema1.TOTAL_Usage('element', 20140101, 20140714)
from sysibm.sysdummy1;
``` | Your query should be fine with just a few small adjustments, I assume X is 1 row with 3 columns and not the other way around
```
select * from (
VALUES ( schema1.Usage ('element', 20140101, 20140714)
, schema1.TOTAL_Usage ('element', 20140101, 20140714)
, schema1.Usage ('element', 20140101, 20140714)
/ schema1.TOTAL_Usage ('element', 20140101, 20140714)
)
) AS X(a,b,c);
```
Perhaps it would be better to move the division to outer select:
```
select a, b, a/b from (
VALUES ( schema1.Usage ('element', 20140101, 20140714)
, schema1.TOTAL_Usage ('element', 20140101, 20140714) )
) AS X(a,b);
``` | Select from VALUES | [
"",
"sql",
"db2",
""
] |
I am working in SQL Server 2012 Management Studio.
In a SQL query window, an insert into a `#table` is happening. It is expected to insert somewhere around 80 million rows with 3 `INT` columns each.
The query execution is going on.
Is there a way that I can track the number of rows in the `#table`? | Since you cannot run two queries in the same window simultaneously, and temp tables are not accessible in other sessions if they are declared with a single #, you should try defining it with a double # in your insert query.
Then you could try querying it using WITH(NOLOCK).
Open a new query window on the same db and try
```
SELECT COUNT(*)
FROM ##YourTableName WITH(NOLOCK)
```
This will get dirty reads, but I do not think it would be a problem in your case as you would like a rough measure on where your INSERT is. | One method is to query the DMVs using the temp table object id. You can get the local temp table object id from the session that created it using this query:
```
SELECT OBJECT_ID(N'tempdb..#table', 'U');
```
Then run the script below in another window, supplying the object\_id value from the above query (-1180342868 in this example):
```
DECLARE @object_id int = -1180342868;
SELECT SUM(rows)
FROM tempdb.sys.partitions
WHERE
object_id = @object_id
AND index_id IN(0,1);
```
Of course, this method assumes you had the foresight to get the temp table object id before running the insert. If the query is currently running, you could run the script below and make an educated guess as to which object might be the temp table being loaded.
```
USE tempdb;
SELECT OBJECT_NAME(object_id), SUM(rows)
FROM tempdb.sys.partitions
WHERE
index_id IN(0,1)
AND OBJECTPROPERTYEX(object_id, 'IsUserTable') = 1
GROUP BY
OBJECT_NAME(object_id);
```
Be aware that this might not be a reliable way to track the load progress. Much depends on the query plan particulars. It could be that the costly operators are earlier in the plan and the actual insert won't occur until the last minute. | Track number of rows in a #table which the population is in progress | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I try to add the following code to a spring data jpa repository:
```
@Query("insert into commit_activity_link (commit_id, activity_id) VALUES (?1, ?2)")
void insertLinkToActivity(long commitId, long activityId);
```
But an app can't start with the exception:
> Caused by: org.hibernate.hql.internal.ast.QuerySyntaxException: unexpected token: VALUES near line 1, column 59 [insert into commit\_activity\_link (commit\_id, activity\_id) VALUES (?1, ?2)]
Where am I wrong? | I had to add `nativeQuery = true` to `@Query`
```
@Query(value = "insert into commit_activity_link (commit_id, activity_id) VALUES (?1, ?2)", nativeQuery = true)
``` | Use a Java object rather than passing all parameters:
```
@Modifying(clearAutomatically = true)
@Transactional
@Query(value = "insert into [xx_schema].[shipment_p] (gpn,qty,hscode,country_of_origin,created_date_time,shipment_id) "
+ "VALUES (:#{#sp.gpn},:#{#sp.qty}, :#{#sp.hscode} ,:#{#sp.countryOfOrigin}, :#{#sp.createdDateTime}, :#{#sp.id} )", nativeQuery = true)
public void saveShipmentPRoducts(@Param("sp") ShipmentProducts sp);
``` | native insert query in hibernate + spring data | [
"",
"sql",
"hibernate",
"spring-data",
""
] |
Since I am not so good with more complex SQL SELECT statements, I thought of just asking here, since it's hard to find something right on topic.
I got two tables that have exactly the same structure:
```
TABLE A (id (INT(11)), time (VARCHAR(10));)
TABLE B (id (INT(11)), time (VARCHAR(10));)
```
Now I want a single SELECT to count the entries of a specific id in both tables.
```
SELECT COUNT(*) FROM TABLE A WHERE id = '1';
SELECT COUNT(*) FROM TABLE B WHERE id = '1';
```
So I thought it would be much better for the database performance if I use one SELECT instead of two.
Thanks for helping out | ```
SELECT COUNT(*) as count, 'tableA' as table_name FROM TABLEA WHERE id = '1'
union all
SELECT COUNT(*), 'tableB' FROM TABLEB WHERE id = '1'
``` | If you want the separate counts in a single row, you can use subqueries
```
SELECT
(SELECT COUNT(*) FROM TABLE A WHERE id = '1') a_count,
(SELECT COUNT(*) FROM TABLE B WHERE id = '1') b_count;
``` | Count uid from two tables who look the same sort by tablename | [
"",
"sql",
"select",
"count",
""
] |
My problem is, I want to store a person with multiple phone numbers in the database.
A single number column will store only one number per record.
Now if I want to add another phone number, it creates another new record with the same details but a different number.
I want to display all these numbers together. Can anyone help? | You could use a second table to store the numbers, and link back with a Foreign Key:
```
PersonTable: PersonId, Name, etc..
```
The second table will hold the numbers...
```
NumbersTable: NumberId, PersonId(fk), Number
```
You could then get the numbers like this...
```
SELECT p.Name, n.Number from PersonTable p Left Join NumbersTable n
on p.PersonId = n.PersonId
```
This is a simple example. I have used a `LEFT JOIN` here in case a person doesn't supply their number. Also, this is just **pseudo code**, so don't use *Table* in the name. | You should create separate tables for Person and PhoneNumber.
```
CREATE TABLE Person(PersonId int IDENTITY(1,1) PRIMARY KEY)
CREATE TABLE Phone(
PersonId int,
PhoneNumber varchar(20),
CONSTRAINT PK_Phone PRIMARY KEY(PersonId,PhoneNumber),
CONSTRAINT FK_PersonId FOREIGN KEY(PersonId) REFERENCES Person(PersonId)
)
``` | How to store multiple values in single field in SQL database? | [
"",
"sql",
""
] |
In my SQL Server table, there is a column in which I have comma separated values.
For example: the column is named `AllSegmentsList` and the values in it look like `'a, b, c,d, e'`.
Now, in my query I want to say
```
select *
from table
if one of the entries out of all the comma separated values in that column is 'b'
```
but I want to make sure that the query can handle spaces on either side of the commas, if present (not always). | It kind of depends on your data. If no value contains a space, the following should work:
```
select
*
from
table
where
replace(concat(',', slist, ','), ' ', '') like '%,b,%';
``` | this should work in all cases:
```
select * from tablename where replace(replace(','+allsegmentslist+',',', ',','),' ,',',') like '%,b,%'
``` | How to write a SQL query that selects one of the comma separated values in the column | [
"",
"sql",
"sql-server",
""
] |
I have records like this:
```
Column1 Column2
A Blue
A Blue
B Red
B Green
C Blue
C Red
```
Using `SELECT DISTINCT` I get this:
```
Column1 Column2
A Blue
B Red
B Green
C Blue
C Red
```
What I'd like to get:
```
Column1 Column2
B Red
B Green
C Blue
C Red
```
So I need to get only multiple records of column1 that have different values on column2.
(I'm joining two tables)
With SELECT DISTINCT, I got closer to what I need, but I can't find a way to exclude records like "A" on column1 that have always the same value on column2... | Try this:
```
SELECT * FROM yourtable
WHERE Column1 IN
(SELECT Column1
FROM yourtable
GROUP BY Column1
HAVING COUNT(DISTINCT Column2) > 1
)
```
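Run against the sample data from the question (SQLite used here purely for illustration), the query keeps only the Column1 values with more than one distinct Column2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (Column1 TEXT, Column2 TEXT)")
conn.executemany(
    "INSERT INTO yourtable VALUES (?, ?)",
    [("A", "Blue"), ("A", "Blue"), ("B", "Red"),
     ("B", "Green"), ("C", "Blue"), ("C", "Red")],
)

# Only Column1 values with more than one distinct Column2 survive the IN filter,
# so the "A" rows are excluded.
rows = conn.execute("""
    SELECT Column1, Column2 FROM yourtable
    WHERE Column1 IN (
        SELECT Column1 FROM yourtable
        GROUP BY Column1
        HAVING COUNT(DISTINCT Column2) > 1
    )
    ORDER BY Column1, Column2
""").fetchall()
print(rows)  # [('B', 'Green'), ('B', 'Red'), ('C', 'Blue'), ('C', 'Red')]
```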
The `DISTINCT` in `COUNT` ensures that you only get those records where `Column2` has multiple distinct values. | I think this code will work in most systems:
```
SELECT Col1,Col2
FROM tbl
GROUP BY Col1,Col2
HAVING COUNT(*)<=1
```
See the results: <http://sqlfiddle.com/#!6/47285/8/0> | Get multiple records only using SELECT DISTINCT or similar | [
"",
"sql",
"select",
"distinct",
"multiple-records",
""
] |
I want to find a CONSTRAINT\_NAME in Oracle by SEARCH\_CONDITION.
```
SELECT * FROM ALL_CONSTRAINTS
WHERE TABLE_NAME = 'myTableName';
AND SEARCH_CONDITION = '"myColumn" IS NOT NULL';
```
ORA-00997: illegal use of LONG datatype.
How to query by SEARCH\_CONDITION? | SEARCH\_CONDITION is a LONG, so you can't do very much useful with it directly.
For this particular use, I suggest a PL/SQL routine to write the LONG column into a VARCHAR2(32767) and then apply the check on the VARCHAR2 variable.
LONGs are an absolute pain.
Also, in your case you can restrict the dataset further by querying ALL\_CONS\_COLUMNS WHERE column\_name = 'colname' and joining the ALL\_CONSTRAINTS to get the SEARCH\_CONDITION. | You can write a simple stored procedure:
```
CREATE OR REPLACE FUNCTION get_search_condition(
p_owner ALL_CONSTRAINTS.OWNER%TYPE,
p_constraint_name ALL_CONSTRAINTS.CONSTRAINT_NAME%TYPE
) RETURN VARCHAR2
IS
v_long LONG;
BEGIN
SELECT SEARCH_CONDITION
INTO v_long
FROM ALL_CONSTRAINTS
WHERE CONSTRAINT_NAME = p_constraint_name
AND OWNER = p_owner
AND CONSTRAINT_TYPE = 'C';
RETURN SUBSTR( v_long, 1, 32760 );
END;
/
```
Then you can use it in the query:
```
SELECT CONSTRAINT_NAME,
get_search_condition( OWNER, CONSTRAINT_NAME ) AS SEARCH_CONDITION
FROM ALL_CONSTRAINTS
WHERE OWNER = 'MYSCHEMA'
AND TABLE_NAME = 'MYTABLENAME'
AND CONSTRAINT_TYPE = 'C'
AND get_search_condition( OWNER, CONSTRAINT_NAME ) = '"MYCOLUMN" IS NOT NULL';
``` | Query to find Constraint by SEARCH_CONDITION | [
"",
"sql",
"oracle",
"constraints",
""
] |
I have these records below :
```
CustomerID | Name | Store | Quantity
1 | Elie | HO | 16
1 | Elie | S1 | 4
```
How can I filter customers by taking only their max quantity?
I tried it with MAX, but the problem is that I cannot return all the fields with it. If I add main.Store in the first line, the second row still shows.
Is there any solution?
```
Select main.CUSTOMER_ID, main.Name
from
(
Select Name = cus.FIRST_NAME + ' ' + cus.LAST_NAME,
Store = cs.NAME
,Transaction_Number = count(ts.TRANSACTION_SUMMARY_ID)
,cus.CUSTOMER_ID
from TRANSACTION_SUMMARY ts
inner join dbo.CUSTOMER cus
on ts.CUSTOMER_ID = cus.CUSTOMER_ID
inner join dbo.CORPORATE_STORE cs
on ts.CORPORATE_STORE_ID = cs.CORPORATE_STORE_ID
Group by cus.CUSTOMER_ID
,cus.FIRST_NAME
,cus.LAST_NAME
,cs.Name
) as main
Group by CUSTOMER_ID
,main.Name
order by main.CUSTOMER_ID
``` | Please try:
```
select * From tbl a
where a.Quantity=
(select MAX(b.Quantity) from tbl b where a.CustomerID=b.CustomerID)
``` | This is a good use of window functions:
```
with t as (
Select Name = cus.FIRST_NAME + ' ' + cus.LAST_NAME,
Store = cs.NAME,
Transaction_Number = count(ts.TRANSACTION_SUMMARY_ID) , cus.CUSTOMER_ID
from TRANSACTION_SUMMARY ts
inner join dbo.CUSTOMER cus on ts.CUSTOMER_ID = cus.CUSTOMER_ID
inner join dbo.CORPORATE_STORE cs on ts.CORPORATE_STORE_ID = cs.CORPORATE_STORE_ID
Group by cus.CUSTOMER_ID, cus.FIRST_NAME, cus.LAST_NAME, cs.Name
)
select name, store, Transaction_Number, CUSTOMER_ID
from (select t.*,
row_number() over (partition by customer_id order by transaction_number desc) as seqnum
from t
) t
where seqnum = 1;
```
You can actually dispense with the subquery. However, using window functions with aggregations looks funny at first:
```
with t as (
Select Name = cus.FIRST_NAME + ' ' + cus.LAST_NAME,
Store = cs.NAME,
Transaction_Number = count(ts.TRANSACTION_SUMMARY_ID) , cus.CUSTOMER_ID,
row_number() over (partition by cus.CUSTOMER_ID
order by count(ts.TRANSACTION_SUMMARY_ID) desc
) as seqnum
from TRANSACTION_SUMMARY ts
inner join dbo.CUSTOMER cus on ts.CUSTOMER_ID = cus.CUSTOMER_ID
inner join dbo.CORPORATE_STORE cs on ts.CORPORATE_STORE_ID = cs.CORPORATE_STORE_ID
Group by cus.CUSTOMER_ID, cus.FIRST_NAME, cus.LAST_NAME, cs.Name
)
select name, store, Transaction_Number, CUSTOMER_ID
from t
where seqnum = 1;
``` | SQL query select record with Max | [
"",
"sql",
"sql-server",
""
] |
I have a table called `Platforms` with the following columns:
* id, name, img, template\_id
And another table called `Templates` with the following columns:
* id, tpl\_name, body
How can I select all platforms so all the rows have id, name, img, template\_id, tpl\_name.
`template_id` in `Platforms` = `id` in `Templates`.
Can I do it with one query? If yes, what is the method called? | With a simple WHERE into a SELECT query :
```
SELECT p.id, p.name, p.img, p.template_id, t.tpl_name
FROM Platforms as p, Templates as t
WHERE p.template_id = t.id;
``` | Yes you can do it with one query and the method is called "a join between table Platforms and Templates":
```
SELECT * FROM Platforms
INNER JOIN Templates on Platforms.template_id = Templates.id
``` | Joining multiple sql queries together into one row | [
"",
"mysql",
"sql",
"select",
""
] |
I am working on an SQL query which returns several rows of data from SQL Server tables using joins. But I just want the query to return the count of the rows returned by the following SQL query:
```
SELECT C.ContactID,
C.FirstName,
C.LastName,
SP.SalesPersonID,
SP.CommissionPct,
SP.SalesYTD,
SP.SalesLastYear,
SP.Bonus,
ST.TerritoryID,
ST.Name,
ST.[Group],
ST.SalesYTD
FROM Person.Contact C
INNER JOIN Sales.SalesPerson SP
ON C.ContactID = SP.SalesPersonID
FULL OUTER JOIN Sales.SalesTerritory ST
ON ST.TerritoryID = SP.TerritoryID
ORDER BY ST.TerritoryID, C.LastName
```
How do I get the number of rows the above query returns? I could do it easily with the help of a SQL view, but I don't want to create a view on the server as I just have read permissions to the database.
Is there a better way to solve this, considering the restrictions I have on the database? | I would just do this as two queries: one like you listed, and the other with a `COUNT(*)`:
```
SELECT COUNT(*)
FROM Person.Contact C
INNER JOIN Sales.SalesPerson SP
ON C.ContactID = SP.SalesPersonID
FULL OUTER JOIN Sales.SalesTerritory ST
ON ST.TerritoryID = SP.TerritoryID
```
This will return a scalar result, so you don't have to waste unnecessary bandwidth at any point. But it really depends on how you expect to use it. Dave's answer is appropriate if you need to pull all the records back no matter what, but if that's the case I would just check your `List<>.Count` or `[].Length` properties.
You could also add in the column `COUNT(*) OVER() AS [ResultCount]`, but remember that that will return the same value for every row. Again, it just depends how you want to do this. | Use `@@ROWCOUNT` ([MSDN](http://msdn.microsoft.com/en-us/library/ms187316.aspx)). It returns the number of rows affected (selected, updated, deleted, etc) of the previous query.
```
SELECT C.ContactID,
C.FirstName,
C.LastName,
SP.SalesPersonID,
SP.CommissionPct,
SP.SalesYTD,
SP.SalesLastYear,
SP.Bonus,
ST.TerritoryID,
ST.Name,
ST.[Group],
ST.SalesYTD
FROM Person.Contact C
INNER JOIN Sales.SalesPerson SP
ON C.ContactID = SP.SalesPersonID
FULL OUTER JOIN Sales.SalesTerritory ST
ON ST.TerritoryID = SP.TerritoryID
ORDER BY ST.TerritoryID, C.LastName
SELECT @@ROWCOUNT
``` | Getting count of number of rows of data retrieved from several SQL Tables on an SQL Server | [
"",
"sql",
"sql-server-2012",
""
] |
I have a text input; on keyup I want to update the options in an adjacent select field.
I have these 2 tables in my database:
**Models:**
```
ID modelName brandID
1 NA140 3
1 SRL 1
1 SRS 1
1 SRF 1
1 SMS 2
1 SMU 2
```
**Brands:**
```
ID brandName
1 Samsung
2 Bosch
3 Panasonic
```
In the select field I want to list all the `brandNames` from the brands table but list them in relevance to the text input.
So if 'SR' is typed the order of `modelNames` would be SRF, SRL, SRS, SMU, SMS, NA140 and then the corresponding `brandName` grabbed as the result but only list each brand once.
How would I write this query?
I have this basic idea which I think is what I need...
```
JOIN models & brands ON m.brandID = b.ID
MATCH modelName to string%
SELECT UNIQUE brandName
``` | ```
SELECT DISTINCT b.brandName
FROM brands b
JOIN models m
ON m.brandID = b.id
WHERE m.modelName LIKE '%$variable%'
UNION
SELECT b.brandName
FROM brands b
JOIN models m
ON m.brandID = b.id
WHERE m.modelName NOT LIKE '%$variable%'
``` | I came up with a similar query (simplified):
```
select brandName from (
select b.brandName as brandName, m.modelName
from brands b join models m
on m.brandid = b.id
where m.modelName like '%sr%'
union
select b.brandName as brandName, m.modelName
from brands b join models m
on m.brandid = b.id
where m.modelName not like '%sr%') as curb
group by brandName
```
SQLfiddle: <http://sqlfiddle.com/#!2/593b36/30>
(Not exact naming of brands and models) | Tricky MySQL JOIN query | [
"",
"mysql",
"sql",
"join",
""
] |
We've got 7 guys travelling in Poland ( ;) ). The problem is to find up to three next cities they visited from the time they visited Warsaw. If one guy visits Warsaw twice, it is also counted as a starting point for the next journey.
For example, guy 1 has had not only the journey Warsaw, Cracow, Warsaw, Gdansk, but also Warsaw, Gdansk.
Table A
```
+------+-----------+-----+
| date | city | guy |
+------+-----------+-----+
| 2 | Warsaw | 1 |
| 4 | Cracow | 1 |
| 5 | Cracow | 2 |
| 6 | Bialystok | 3 |
| 7 | Warsaw | 1 |
| 8 | Gdansk | 1 |
| 10 | Warsaw | 5 |
| 12 | Cracow | 5 |
| 14 | Bialystok | 6 |
| 15 | Warsaw | 7 |
| 20 | Warsaw | 7 |
+------+-----------+-----+
```
So the final table for this would look like this:
```
+-----------+-----------+-----------+-----------+
| Starting | 2nd dest. | 3th dest. | 4th dest. |
+-----------+-----------+-----------+-----------+
| Warsaw | Cracow | Warsaw | Gdansk |
| Warsaw | Gdansk | | |
| Warsaw | Cracow | | |
| Warsaw | Warsaw | | |
| Warsaw | | | |
+-----------+-----------+-----------+-----------+
```
The problem is to create a query that will automatically create the final table from the table A.
There is no problem with finding every starting point, but I have no idea how to find every second destination. It seems to me that there has to be a loop of some kind - the guy has to be the same as in the starting point and the date of the second destination has to be greater than the date of THIS EXACT starting point.
Any help with solving this would be appreciated. ;)
SQLFiddle with some more example entry data - <http://sqlfiddle.com/#!2/de0f1>
The data above is just a sample; the solution needs to deal with a much bigger set. | If you are using SQL Server 2012 or a later version, you can fairly easily [solve the problem](http://sqlfiddle.com/#!6/1e595/1 "Demo at SQL Fiddle") with the help of the [LEAD() analytic function](http://technet.microsoft.com/en-us/library/hh213125.aspx "LEAD (Transact-SQL)"):
```
WITH ThreeDestinations AS (
SELECT
*,
Destination2 = LEAD(city, 1) OVER (PARTITION BY guy ORDER BY date),
Destination3 = LEAD(city, 2) OVER (PARTITION BY guy ORDER BY date),
Destination4 = LEAD(city, 3) OVER (PARTITION BY guy ORDER BY date)
FROM
dbo.voyage
)
SELECT
StartingPoint = city,
Destination2,
Destination3,
Destination4
FROM
ThreeDestinations
WHERE
city = 'Warsaw'
ORDER BY
date
;
```
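As an illustration, the equivalent query can be run on the question's data in SQLite (version 3.25+ for window functions); this is not SQL Server, but LEAD behaves the same way here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voyage (date INTEGER, city TEXT, guy INTEGER)")
conn.executemany(
    "INSERT INTO voyage VALUES (?, ?, ?)",
    [(2, "Warsaw", 1), (4, "Cracow", 1), (5, "Cracow", 2), (6, "Bialystok", 3),
     (7, "Warsaw", 1), (8, "Gdansk", 1), (10, "Warsaw", 5), (12, "Cracow", 5),
     (14, "Bialystok", 6), (15, "Warsaw", 7), (20, "Warsaw", 7)],
)

# Each LEAD call looks 1, 2, or 3 rows ahead within the same guy's trip,
# then the outer query keeps only rows starting in Warsaw.
rows = conn.execute("""
    WITH ThreeDestinations AS (
        SELECT *,
               LEAD(city, 1) OVER (PARTITION BY guy ORDER BY date) AS d2,
               LEAD(city, 2) OVER (PARTITION BY guy ORDER BY date) AS d3,
               LEAD(city, 3) OVER (PARTITION BY guy ORDER BY date) AS d4
        FROM voyage
    )
    SELECT city, d2, d3, d4 FROM ThreeDestinations
    WHERE city = 'Warsaw' ORDER BY date
""").fetchall()
for row in rows:
    print(row)
```

This reproduces the five rows of the expected final table, with NULL (None) where a journey has fewer than three further stops.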
The three calls to LEAD give you the first three (or fewer) destinations after every city in the original set. The next, and final, step is just to filter out the rows where the starting point is not Warsaw. | Solution using `Row_Number()` and `Pivot`:
```
select guy,[1] as First, IsNULL([2], '') as Second, IsNUll([3], '') as Third, IsNull([4], '') as Forth
from (
select row_number() over (partition by guy order by guy, date) as number, city, guy
from voyage
) x
pivot
(
max(city)
for number in ([1], [2], [3], [4])
) as p
```
[Demo here](http://sqlfiddle.com/#!3/acd32/7)
My example assumes that the dates are of type int but this could be easily reworked to use proper dates....
EDIT:
This will give the desired output (not sure that this will be a generic approach...):
```
select *
from
(
select guy,[1] as First, IsNULL([2], '') as Second, IsNUll([3], '') as Third, IsNull([4], '') as Forth
from (
select row_number() over (partition by guy order by guy, date) as number, city, guy
from voyage ) x
pivot
(
max(city)
for number in ([1], [2], [3], [4])
) as p
where [1] = 'Warsaw'
union All
select guy,[3] as First, case when [4] = 'Warsaw' then '' else IsNULL([4], '') end as Second, '' as Third, '' as Forth
from (
select row_number() over (partition by guy order by guy, date) as number, city, guy
from voyage ) x
pivot
(
max(city)
for number in ([1], [2], [3], [4])
) as p
where [3] = 'Warsaw'
Union all
select guy,[2] as First, '' as Second, '' as Third, '' as Forth
from (
select row_number() over (partition by guy order by guy, date) as number, city, guy
from voyage ) x
pivot
(
max(city)
for number in ([1], [2], [3], [4])
) as p
where [2] = 'Warsaw'
) bigdata
order by guy
```
[Second Demo Here](http://sqlfiddle.com/#!3/acd32/29) | Selecting three rows after the one that meets the condition | [
"",
"sql",
"sql-server",
"database",
""
] |
I need to create a copy of a database table.
I need to set the TEXTIMAGE\_ON value, which can only be set at creation time and not with ALTER TABLE afterwards.
I am also not allowed to alter any database properties beforehand. So basically NOT:
```
ALTER DATABASE backoffice MODIFY FILEGROUP filegroup DEFAULT
```
That's the reason why I can't just use:
```
SELECT TOP (0) * INTO [DestinationSchema].[DestinationTable] FROM [SourceSchema].[SourceTable]
```
to clone my table.
I am basically looking for a script or any other way which creates the table and then clones the columns step by step.
I am grateful for any help. | This should get you there mostly. You may still have to refine it per requirement. Hope this helps
```
DECLARE @TableName SYSNAME = 'TableName';
DECLARE @SQL VARCHAR(MAX),
@ColumnList VARCHAR(MAX),
@Schema VARCHAR(250);
SELECT @Schema = QUOTENAME(TABLE_SCHEMA)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = @TableName;
SET @SQL = 'CREATE TABLE ' + @Schema + '.' + QUOTENAME(@TableName) + '(' + CHAR(13) + CHAR(10);
WITH cte_columns
AS (
SELECT
CHAR(9) + QUOTENAME(COLUMN_NAME) + ' ' + CASE
WHEN DOMAIN_SCHEMA IS NULL THEN ''
ELSE QUOTENAME(DOMAIN_SCHEMA)
END + '.' + CASE
WHEN DOMAIN_SCHEMA IS NULL THEN DATA_TYPE + QUOTENAME(CHARACTER_MAXIMUM_LENGTH, '(')
ELSE QUOTENAME(DOMAIN_NAME)
END + CASE
WHEN IS_NULLABLE = 'NO' THEN ' NOT NULL'
ELSE ' NULL'
END AS ColumnList
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
)
SELECT
@ColumnList = COALESCE(@ColumnList + ',' + CHAR(13) + CHAR(10), '') + ColumnList
FROM cte_columns;
SET @SQL = @SQL + @ColumnList + ')';
PRINT @SQL;
``` | This will at least get you started. You have some work still to do here but this should give you the information you need to build some dynamic sql for this.
```
declare @TableName sysname = 'YourTable'
select *
from sys.columns c
join sys.types t on c.user_type_id = t.user_type_id
where c.object_id = object_id(@TableName)
```
You will need a lot of case expressions to accommodate for everything. | Cloning a Table without using SELECT TOP (0) * INTO | [
"",
"sql",
"sql-server",
"sql-scripts",
""
] |
I have a query on a 500 000 row table.
Basically
```
WHERE s3_.id = 287
ORDER BY m0_.id DESC
LIMIT 25
```
=> Query runtime = 20ms
```
WHERE s3_.id = 287
ORDER BY m0_.created_at DESC
LIMIT 25
```
=> Query runtime = 15000ms or more
There is an index on created\_at.
The Query plans are completely different.
Unfortunately I am not a query plan guru. I would like to reproduce the fast query plan while ordering by created\_at.
Is that possible and how would I go about that ?
**Query Plan - Slow Query** (order by m0\_.created\_at) : <http://explain.depesz.com/s/KBl>
**Query Plan - Fast Query** ( order by m0\_.id ): <http://explain.depesz.com/s/2pYZ>
### Full Query
```
SELECT m0_.id AS id0, m0_.content AS content1, m0_.created_at AS created_at2,
c1_.id AS id3, l2_.id AS id4, l2_.reference AS reference5,
s3_.id AS id6, s3_.name AS name7, s3_.code AS code8,
u4_.email AS email9, u4_.id AS id10, u4_.firstname AS firstname11, u4_.lastname AS lastname12,
u5_.email AS email13, u5_.id AS id14, u5_.firstname AS firstname15, u5_.lastname AS lastname16,
g6_.id AS id17, g6_.firstname AS firstname18, g6_.lastname AS lastname19, g6_.email AS email20,
m0_.conversation_id AS conversation_id21, m0_.author_user_id AS author_user_id22, m0_.author_guest_id AS author_guest_id23,
c1_.author_user_id AS author_user_id24, c1_.author_guest_id AS author_guest_id25, c1_.listing_id AS listing_id26,
l2_.poster_id AS poster_id27, l2_.site_id AS site_id28, l2_.building_id AS building_id29, l2_.type_id AS type_id30, l2_.neighborhood_id AS neighborhood_id31, l2_.facility_bathroom_id AS facility_bathroom_id32, l2_.facility_kitchen_id AS facility_kitchen_id33, l2_.facility_heating_id AS facility_heating_id34, l2_.facility_internet_id AS facility_internet_id35, l2_.facility_condition_id AS facility_condition_id36, l2_.original_translation_id AS original_translation_id37,
u4_.site_id AS site_id38, u4_.address_id AS address_id39, u4_.billing_address_id AS billing_address_id40,
u5_.site_id AS site_id41, u5_.address_id AS address_id42, u5_.billing_address_id AS billing_address_id43,
g6_.site_id AS site_id44
FROM message m0_
INNER JOIN conversation c1_ ON m0_.conversation_id = c1_.id
INNER JOIN listing l2_ ON c1_.listing_id = l2_.id
INNER JOIN Site s3_ ON l2_.site_id = s3_.id
INNER JOIN user_ u4_ ON l2_.poster_id = u4_.id
LEFT JOIN user_ u5_ ON m0_.author_user_id = u5_.id
LEFT JOIN guest_data g6_ ON m0_.author_guest_id = g6_.id
WHERE s3_.id = 287
ORDER BY m0_.created_at DESC
LIMIT 25 OFFSET 0
``` | It turned out to be an index issue. The NULLS behaviour of the query was not coherent with the index.
```
CREATE INDEX message_created_at_idx on message (created_at DESC NULLS LAST);
... ORDER BY message.created_at DESC; -- defaults to NULLS FIRST when DESC
```
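The placement difference itself is easy to see with any engine that supports the NULLS FIRST/NULLS LAST syntax; the sketch below uses SQLite (3.30+) via Python purely as an illustration, with a minimal assumed message table:

```python
import sqlite3

# Minimal assumed table: one nullable created_at column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (created_at TEXT)")
conn.executemany("INSERT INTO message VALUES (?)",
                 [("2014-01-02",), (None,), ("2014-01-01",)])

first = [r[0] for r in conn.execute(
    "SELECT created_at FROM message ORDER BY created_at DESC NULLS FIRST")]
last = [r[0] for r in conn.execute(
    "SELECT created_at FROM message ORDER BY created_at DESC NULLS LAST")]
print(first)  # [None, '2014-01-02', '2014-01-01']
print(last)   # ['2014-01-02', '2014-01-01', None]
```

An index declared with one NULL placement cannot directly serve a query that asks for the other, which is why the two must agree.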
## solutions
If you specify NULLS in your index or query, make sure they are coherent with each other.
i.e. `ASC NULLS LAST` is coherent with `ASC NULLS LAST` or `DESC NULLS FIRST`.
### NULLS LAST
```
CREATE INDEX message_created_at_idx on message (created_at DESC NULLS LAST);
... ORDER BY message.created_at DESC NULLS LAST;
```
### NULLS FIRST
```
CREATE INDEX message_created_at_idx on message (created_at DESC); -- defaults to NULLS FIRST when DESC
... ORDER BY messsage.created_at DESC -- defaults to NULLS FIRST when DESC;
```
### NOT NULL
If your column is NOT NULL, don't bother with NULLS.
```
CREATE INDEX message_created_at_idx on message (created_at DESC);
... ORDER BY message.created_at DESC;
``` | ### Fix your query
Your `WHERE` condition is on a table that's joined via `LEFT JOIN` nodes. The `WHERE` condition forces the joins to behave like `[INNER] JOIN`. That's pointless and may confuse the query planner, especially with a query that has a lot of tables and therefore *many* possible query plans. By setting that right, you reduce the number of possible query plans drastically, making it easier for Postgres to find a good one.
[More details in the answer to the additionally spawned question.](https://stackoverflow.com/questions/24876673/explain-join-vs-left-join-and-where-condition-performance-suggestion-in-more-de/24876797#24876797)
```
SELECT m0_.id AS id0, ...
FROM site s3_
JOIN listing l2_ ON l2_.site_id = s3_.id
JOIN conversation c1_ ON c1_.listing_id = l2_.id
JOIN message m0_ ON m0_.conversation_id = c1_.id
LEFT JOIN user_ u4_ ON u4_.id = l2_.poster_id
LEFT JOIN user_ u5_ ON u5_.id = m0_.author_user_id
LEFT JOIN guest_data g6_ ON g6_.id = m0_.author_guest_id
WHERE s3_.id = '287' -- ??
ORDER BY m0_.created_at DESC
LIMIT 25
```
### Why `s3_.id = '287'`?
Looks like 287 should be an `integer` type, that you would typically enter as numeric constant without quotes: `287`. What's the actual data type (and why)? Only a *minor* problem either way.
### Reading the query plan
@FuzzyTree already hinted (quite accurately) that sorting on a different column than what's used in your `WHERE` clause complicates things. But that's not the elephant in the room here.
The combination with `LIMIT 25` makes the difference *huge*. Both query plans show a reduction from `rows=124616` to `rows=25` in their last step, which is *huge*.
Both query plans also show: `Seq Scan on site s3_ ... rows=1`. So if you `ORDER BY _s3.id` in your fast variant, you are not actually ordering *anything*. While the other query has to find the top 25 rows out of 124616 candidates ... Hardly a fair comparison.
### Solution
After clarification, the problem seems clearer. You are selecting a huge number of rows by one criterion, but ordering by another. No conventional index design can cover this, not even if both columns resided in the same table (which they don't).
I think we found a (non-trivial) solution for this class of problems under this related question on dba.SE:
* [Can spatial index help a “range - order by - limit” query](https://dba.stackexchange.com/questions/18300/can-spatial-index-help-a-range-order-by-limit-query/22500#22500)
Of course, all the usual advice for [query optimization](https://wiki.postgresql.org/wiki/Slow_Query_Questions) and general [performance optimization](https://wiki.postgresql.org/wiki/Performance_Optimization) applies. | Changing ORDER BY from id to another indexed column (with low LIMIT) has a huge cost | [
"",
"sql",
"postgresql",
"postgresql-9.1",
""
] |
I have a table which holds customer transactions. These transactions can have a `StartDate` and an `EndDate`; if the `EndDate` hasn't arrived yet, the field is NULL. In my query, if I select, for example, the transactions that started today and have no end date (i.e. it is `NULL`), I don't get the right results. However, if I set `@EndDate` to tomorrow, then I do get them. Any ideas?
```
SELECT ISNULL(COUNT(Buyin), 0) AS CountBuyIns
FROM [Transaction] AS t
WHERE (t.StartDate >= @StartDate) AND (ISNULL(t.EndTime,GETDATE()) <= @EndDate)
``` | ```
AND (t.EndTime <= @EndDate OR t.EndTime IS NULL)
```
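For what it's worth, the effect of the `OR ... IS NULL` variant is easy to verify. A small Python/SQLite sketch (`ISNULL`/`GETDATE()` are T-SQL, so this uses invented data and plain column comparisons, but the NULL handling is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE txn (buyin TEXT, start_date TEXT, end_time TEXT)")
con.executemany("INSERT INTO txn VALUES (?, ?, ?)", [
    ("a", "2024-01-05", "2024-01-06"),  # closed inside the range
    ("b", "2024-01-05", None),          # still open: should be counted
    ("c", "2024-01-05", "2024-02-01"),  # closed after the range
])
params = {"start": "2024-01-01", "end": "2024-01-10"}

# Strict comparison silently drops the open row: NULL <= :end is not true.
strict = con.execute(
    "SELECT COUNT(buyin) FROM txn "
    "WHERE start_date >= :start AND end_time <= :end", params).fetchone()[0]

# Treat a missing end_time as "still open".
fixed = con.execute(
    "SELECT COUNT(buyin) FROM txn "
    "WHERE start_date >= :start AND (end_time <= :end OR end_time IS NULL)",
    params).fetchone()[0]

print(strict, fixed)  # 1 2
```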
or:
```
AND COALESCE(t.EndTime, @EndDate) <= @EndDate
``` | Change the last part of the WHERE clause like so
```
AND t.EndTime IS NULL
``` | SQL Query - Range two dates - including nulls | [
"",
"sql",
"sql-server",
""
] |
This is admittedly a "Friday" question..
When I look up [the definition of](https://www.google.com/search?q=define%20primary%20key&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a#q=define%20primary%20key&rls=org.mozilla:en-US:official&safe=off) "primary key" it says :
> In Oracle, a primary key is a single field or combination of fields
> that uniquely defines a record. None of the fields that are part of
> the primary key can contain a null value. A table can have only one
> primary key. In Oracle, a primary key can not contain more than 32
> columns.
Why would one ever need 32 columns in a primary key? Is that ever a situation that comes up? | A primary key is backed by an index, and [an index key can only have 32 columns](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_5012.htm#SQLRF53989). So it's an inherited restriction from the index. There probably wasn't much point adding additional restrictions on the primary key itself, as the index may exist before the primary key constraint, and can persist if the constraint is removed.
There are [other constraints on the primary key](http://docs.oracle.com/cd/E11882_01/server.112/e41084/clauses002.htm#SQLRF52198) which don't necessarily apply to the index itself.
The 32-column limit for an index is presumably related to how indexes are implemented (as Guffa says), and there isn't much point asking why they picked 32 instead of 16 or 64 or whatever. It's *slightly* more reasonable to have a 32-column index than a 32-column primary key, but I'd imagine it's still pretty unusual. That a primary key could theoretically have 32 columns doesn't really imply you'd ever come across one in the wild.
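For the curious, a wide composite key is easy to try out. This sketch uses SQLite rather than Oracle (its limits differ), purely to show such a key being enforced through its underlying unique index:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Build a (deliberately silly) 32-column composite primary key.
cols = ", ".join(f"k{i} INTEGER" for i in range(1, 33))
key = ", ".join(f"k{i}" for i in range(1, 33))
con.execute(f"CREATE TABLE fact ({cols}, PRIMARY KEY ({key}))")

row = tuple(range(32))
placeholders = ", ".join("?" * 32)
con.execute(f"INSERT INTO fact VALUES ({placeholders})", row)

# Re-inserting the same 32-part key violates the constraint.
try:
    con.execute(f"INSERT INTO fact VALUES ({placeholders})", row)
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

print(duplicate_rejected)  # True
```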
To answer your questions: you wouldn't; and no. (Someone will now say they used to work on a system that did do this...) | This can be the case for a fact table in a star schema with that many dimensions: the fact record would be identified by its dimension keys. Having said that, I don't really think it would make sense to physically create such a constraint. You can usually control uniqueness in your ETL processes, and such a fact table would never be a parent in a parent-child relationship | Is there ever a situation where you ever need a primary key with 32 fields? | [
"",
"sql",
"oracle",
"oracle11g",
""
] |