Columns: Prompt (string), Chosen (string), Rejected (string), Title (string), Tags (list)
I am trying to create a table in my database and it gives me the following error. > ``` > ERROR: type "tbl_last_alert" already exists > HINT: A relation has an associated type of the same name, so you must use a name that doesn't conflict with any existing type. > ``` Then I thought that the table must already exist, so I ran the following query: ``` select * from pg_tables; ``` But I could not find anything. Then I tried: > ``` > select * from tbl_last_alert; > ERROR: relation "tbl_last_alert" does not exist > ``` Any idea how to sort this out? I am trying to rename the type with ``` ALTER TYPE v4report.tbl_last_alert RENAME TO tbl_last_alert_old; ERROR: v4report.tbl_last_alert is a table's row type HINT: Use ALTER TABLE instead. ``` but I keep getting this error.
Postgres creates a composite (row) type of the same name for every table. That's why the error message mentions "type", not "table". Effectively, a table name cannot conflict with this list from [the manual on `pg_class`](https://www.postgresql.org/docs/current/catalog-pg-class.html): > `r` = ordinary table, `i` = index, `S` = sequence, `t` = TOAST table, > `v` = view, `m` = materialized view, **`c` = composite type**, `f` = > foreign table, `p` = partitioned table, `I` = partitioned index Bold emphasis mine. Accordingly, you can find any conflicting entry with this query: ``` SELECT n.nspname AS schemaname, c.relname, c.relkind FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace WHERE relname = 'tbl_last_alert'; ``` This covers *all* possible competitors, not just types. Note that the same name *can* exist multiple times in multiple schemas - but not in the same schema. ### Cure If you find a conflicting composite type, you can rename or drop it to make way - if you don't need it! ``` DROP TYPE tbl_last_alert; ``` Be sure that the schema of the type is the first match in your search path, or schema-qualify the name. I added the schema to the query above. Like: ``` DROP TYPE public.tbl_last_alert; ```
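In the asker's specific case, the `ALTER TYPE` error ("is a table's row type") indicates the table does exist - in schema `v4report` - but is apparently not on the `search_path`, which is why the unqualified queries found nothing. A hedged sketch of the checks (the schema name is taken from the question):

```sql
SHOW search_path;   -- is v4report listed here?

-- pg_tables can be filtered per schema:
SELECT * FROM pg_tables WHERE schemaname = 'v4report';

-- Schema-qualified access works regardless of search_path:
SELECT * FROM v4report.tbl_last_alert;
```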
If you can't drop the type, you can delete it from `pg_type` directly: ``` DELETE FROM pg_type WHERE typname ~ 'tbl_last_alert'; ```
Cannot create a table due to naming conflict
[ "sql", "postgresql", "database-administration" ]
I am trying to create a .bacpac file of my SQL 2012 database. In SSMS 2012 I right click my database, go to Tasks, and select Export Data-tier Application. Then I click Next, and it gives me this error: ``` Error SQL71564: Element Login: [myusername] has an unsupported property IsMappedToWindowsLogin set and is not supported when used as part of a data package. (Microsoft.SqlServer.Dac) ``` I am trying to follow this tutorial so that I can put my database on Azure's cloud: <http://blogs.msdn.com/b/brunoterkaly/archive/2013/09/26/how-to-export-an-on-premises-sql-server-database-to-windows-azure-storage.aspx> How can I export a .bacpac file of my database?
I found this post referenced below which seems to answer my question. I wonder if there is a way to do this without having to delete my user from my local database... > "... there are some features in on premise SQL Server which are not > supported in SQL Azure. You will need to modify your database before > extracting. [This article](http://social.technet.microsoft.com/wiki/contents/articles/995.windows-azure-sql-database-faq.aspx) and several others list some of the > unsupported features. > > [This blog](http://blogs.msdn.com/b/ssdt/archive/2012/04/19/migrating-a-database-to-sql-azure-using-ssdt.aspx) post explains how you can use SQL Server Data Tools to > modify your database to make it Azure compliant. > > It sounds like you added clustered indices. Based on the message > above, it appears you still need to address TextInRowSize and > IsMappedToWindowsLogin." Ref. <http://social.msdn.microsoft.com/Forums/fr-FR/e82ac8ab-3386-4694-9577-b99956217780/aspnetdb-migration-error?forum=ssdsgetstarted> **Edit (2018-08-23):** Since the existing answer is from 2014, I figured I'd serve it a fresh update... Microsoft now offers the DMA (Data Migration Assistant) to migrate SQL Server databases to Azure SQL. You can learn more and download the free tool here: <https://learn.microsoft.com/en-us/azure/sql-database/sql-database-migrate-your-sql-server-database>
[SQL Azure doesn't support windows authentication](http://msdn.microsoft.com/en-us/library/windowsazure/ff394108.aspx) so I guess you'll need to make sure your database users are mapped to SQL Server Authentication logins instead.
Can't Export Data-tier Application for Azure
[ "sql", "azure", "sql-server-2012", "ssms-2012", "bacpac" ]
I'm not very good with sql so I'm not sure if this is possible. Or maybe even in excel?? I'm trying to select the very first value and ignore duplicates from Product\_ID and then add that first value for that row to the Title column. Also note that my product list is over 25,000+ items. So take this: ``` +---------------+------------+-------+-------------+-------+ | Product_Count | Product_ID | Title | _Color_Name | _Size | +---------------+------------+-------+-------------+-------+ | 2 | 14589 | | Black | 00 | | 3 | 14589 | | Black | 0 | | 4 | 14589 | | Black | 2 | | 5 | 14589 | | Black | 4 | | 6 | 14589 | | Black | 6 | | 11 | 14589 | | Dark Coral | 00 | | 12 | 14589 | | Dark Coral | 0 | | 13 | 14589 | | Dark Coral | 2 | | 14 | 14589 | | Dark Coral | 4 | | 15 | 14589 | | Dark Coral | 6 | | 129 | 15027 | | Aqua | 00 | | 130 | 15027 | | Aqua | 0 | | 131 | 15027 | | Aqua | 2 | | 132 | 15027 | | Aqua | 4 | | 133 | 15027 | | Aqua | 6 | | 138 | 15027 | | Black | 00 | | 139 | 15027 | | Black | 0 | | 140 | 15027 | | Black | 2 | | 141 | 15027 | | Black | 4 | | 142 | 15027 | | Black | 6 | +---------------+------------+-------+-------------+-------+ ``` And turn it into this: ``` +---------------+------------+-------+-------------+-------+ | Product_Count | Product_ID | Title | _Color_Name | _Size | +---------------+------------+-------+-------------+-------+ | 2 | 14589 | 14589 | Black | 00 | | 3 | 14589 | | Black | 0 | | 4 | 14589 | | Black | 2 | | 5 | 14589 | | Black | 4 | | 6 | 14589 | | Black | 6 | | 11 | 14589 | | Dark Coral | 00 | | 12 | 14589 | | Dark Coral | 0 | | 13 | 14589 | | Dark Coral | 2 | | 14 | 14589 | | Dark Coral | 4 | | 15 | 14589 | | Dark Coral | 6 | | 129 | 15027 | 15027 | Aqua | 00 | | 130 | 15027 | | Aqua | 0 | | 131 | 15027 | | Aqua | 2 | | 132 | 15027 | | Aqua | 4 | | 133 | 15027 | | Aqua | 6 | | 138 | 15027 | | Black | 00 | | 139 | 15027 | | Black | 0 | | 140 | 15027 | | Black | 2 | | 141 | 15027 | | Black | 4 | | 142 | 15027 | | Black | 6 | 
+---------------+------------+-------+-------------+-------+ ```
You can use `PARTITION` to window the `ProductIds` and then identify the first row in each partition with [`ROW_NUMBER()`](http://technet.microsoft.com/en-us/library/ms186734.aspx): ``` SELECT ProductID, Product_Count, CASE WHEN rn = 1 THEN ProductID else null END AS Title, Color_Name, Size FROM ( SELECT ProductID, Product_Count, Color_Name, Size, ROW_NUMBER() OVER (PARTITION BY ProductID ORDER BY Product_Count) AS rn FROM product_stock ) AS X; ``` [SqlFiddle here](http://sqlfiddle.com/#!3/4d9e2/1)
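If the goal is to persist the value rather than just select it, the same `ROW_NUMBER()` pattern can drive an `UPDATE` through a CTE - a hedged sketch for SQL Server, with the table and column names assumed from the question's sample data:

```sql
-- Number the rows of each product; rn = 1 marks the first row per Product_ID
WITH numbered AS (
    SELECT Title,
           Product_ID,
           ROW_NUMBER() OVER (PARTITION BY Product_ID
                              ORDER BY Product_Count) AS rn
    FROM   product_stock   -- hypothetical table name
)
UPDATE numbered
SET    Title = Product_ID  -- relies on implicit int-to-varchar conversion
WHERE  rn = 1;
```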
try this, ``` Declare @t table (col1 int,col2 int,col3 int) insert into @t values(2,3,null),(2,4,null),(5,3,null),(5,4,null) ;with CTE as (select *,ROW_NUMBER()over(partition by col1 order by col1)rn from @t ) select a.col1,a.col2,case when a.rn=1 then b.col1 end col3 from CTE a left join (select col1 from cte where rn>1) b on a.col1=b.col1 ```
SQL select first value in a row then insert that value into another column
[ "sql", "sql-server", "excel", "sql-server-2008" ]
I have the following data : ``` Party Balance Month --------------------- ABC 500 Aug 2013 ABC 300 Sep 2013 ABC 200 Nov 2013 ABC 400 Dec 2013 ABC 100 Jan 2013 ``` I tried this query: ``` select ahead as Party, [Nov 2013], [Dec 2013], [Jan 2014] from (select ahead, month, balance from aging) as a PIVOT (SUM (balance) FOR [month] in ([Nov 2013], [Dec 2013], [Jan 2014]) ) as Pivot_Table group by ahead, [Nov 2013], [Dec 2013], [Jan 2014] ``` Result is returned like this: ``` Party Nov 2013 Dec 2013 Jan 2013 ----------------------------------------- ABC 200 400 100 ``` The required results would be like this: ``` Party Nov 2013 Dec 2013 Jan 2013 Old balance total Balance ----------------------------------------------------------------------- ABC 200 400 100 800 1500 ``` Can anyone help me solve this?
``` DECLARE @MyTable TABLE ([Party] varchar(max), [Balance] money, [Date] date) INSERT @MyTable VALUES ('ABC',500,'Aug 2013'), ('ABC',300,'Sep 2013'), ('ABC',200,'Nov 2013'), ('ABC',400,'Dec 2013'), ('ABC',100,'Jan 2014'); WITH t AS ( SELECT [Party],[Balance],[Date], SUM(CASE WHEN [Date] < 'Nov 2013' THEN [Balance] END) OVER(PARTITION BY [Party]) [OldBalance], SUM([Balance]) OVER(PARTITION BY [Party]) [TotalBalance] FROM @MyTable ) SELECT [Party],[Nov 2013],[Dec 2013],[Jan 2014],[OldBalance],[TotalBalance] FROM t PIVOT(SUM([Balance]) FOR [Date] IN ([Nov 2013],[Dec 2013],[Jan 2014])) p ```
## [SQL Fiddle](http://sqlfiddle.com/#!3/69c9e/1) **MS SQL Server 2008 Schema Setup**: ``` CREATE TABLE Test_Table(Party VARCHAR(10),Balance INT,[Month] VARCHAR(20)) INSERT INTO Test_Table VALUES ('ABC',500,'Aug 2013'), ('ABC',300,'Sep 2013'), ('ABC',200,'Nov 2013'), ('ABC',400,'Dec 2013'), ('ABC',100,'Jan 2013') ``` **Query 1**: ``` ;WITH Totals AS ( SELECT Party, SUM(Balance) TotalBalance FROM Test_Table GROUP BY Party ), Pvt AS ( select Party ,[Nov 2013] ,[Dec 2013] ,[Jan 2013] FROM Test_Table as t PIVOT (SUM (balance) FOR [month] in ([Nov 2013],[Dec 2013],[Jan 2013]) ) as Pivot_Table ) SELECT p.Party ,p.[Nov 2013] ,p.[Dec 2013] ,p.[Jan 2013] ,(t.TotalBalance) -(p.[Nov 2013]+ p.[Dec 2013]+p.[Jan 2013]) AS OldBalance FROM pvt p INNER JOIN Totals t ON p.Party = t.Party ``` **[Results](http://sqlfiddle.com/#!3/69c9e/1/0)**: ``` | PARTY | NOV 2013 | DEC 2013 | JAN 2013 | OLDBALANCE | |-------|----------|----------|----------|------------| | ABC | 200 | 400 | 100 | 800 | ```
Month Wise Data using Pivot with previous months sum of Balance Required
[ "sql", "sql-server", "t-sql" ]
I googled for a solution online and most of the results come back with BULK INSERT. I don't have access to this role at this point, so this is not an option. Is there any other generic way to import a .csv file into MS SQL Server?
What I ended up doing was going to Tasks ---> Import Data; this is the original DTS functionality and it does give me the option to import an Excel file.
The solution I've used the last few times I've needed to do this was to save the CSVs as .XLS in Excel and then pull them in from SSMS. Edit: [Here's a SO answer that shows how to do it](https://stackoverflow.com/questions/3474137/how-to-export-data-from-excel-spreadsheet-to-sql-server-2008-table)
how to import csv file into table of sql server other than BULK INSERT
[ "sql", "csv", "import" ]
I am trying to perform a SQL query that removes all the rows where people have fewer than 10 people or more than 1000 people in their network. ``` SELECT * FROM table1 Where inNetwork > 10 AND < 1000, ``` Basically, what I mean by this query is to only show people with more than 10 and fewer than 1000 people in their network, but it isn't working. When I try only one number, e.g. ``` SELECT * FROM table1 Where inNetwork > 10 ``` then it works, but I want to filter on both conditions.
You can use `BETWEEN` to achieve this: ``` SELECT * FROM table1 WHERE inNetwork BETWEEN 10 AND 1000; ``` Note that `BETWEEN` is inclusive of both endpoints, so this also returns rows where inNetwork is exactly 10 or 1000. If you need strictly greater than and less than, express both conditions on the column explicitly: ``` SELECT * FROM table1 WHERE inNetwork > 10 AND inNetwork < 1000; ```
``` SELECT * FROM table1 Where inNetwork > 10 AND inNetwork < 1000; ```
SQL query that removes all rows with certain value?
[ "sql" ]
I have a lookup table where one of the columns contains each date between 2000 and 2030. Problem is that the generated dates here all have milliseconds at the end, eg: ``` 2000-01-01 00:00:00.000 2000-01-02 00:00:00.000 2000-01-03 00:00:00.000 2000-01-04 00:00:00.000 ``` My other datetime columns in my data don't have this, e.g.: ``` 2011-05-17 13:11:11 2011-05-18 09:39:17 2011-06-15 10:54:09 2011-06-20 10:16:43 ``` ![http://s15.postimg.org/5jw1q6y2z/2014_03_14_09_47_48_Microsoft_SQL_Server_Managem.png](https://i.stack.imgur.com/9Nln1.png) I think this may be causing an issue when aggregating up to Month using a BI tool, so I wanted to update all rows in the Calendar\_Date column (in the lookup table), to truncate milliseconds off all rows. Could someone provide guidance on how I can do this? Structures of both columns: ![enter image description here](https://i.stack.imgur.com/eugRA.png) ![enter image description here](https://i.stack.imgur.com/gXuCO.png) Thanks in advance!
``` UPDATE Calendar SET Calendar_Date = CONVERT(datetime, CONVERT(date, Calendar_Date)) ```
I believe that you have misdiagnosed the problem: it's not the data, it appears to be the SQL. Let me show you some issues with your query. ``` SELECT a12.Calendar_Year AS Calendar_Year, a11.dt AS datetime, (sum(a11.mc_gross) - sum(a11.mc_fee)) AS WJXBFS1 FROM shine_orders AS a11 JOIN Calendar AS a12 ON ( CONVERT(DATETIME, CONVERT(VARCHAR(10), a11.dt, 101)) = CONVERT(DATETIME, CONVERT(VARCHAR(10), a12.Calendar_Date, 101)) AND a11.dt = a12.Calendar_Date ) GROUP BY a12.Calendar_Year, a11.dt ``` That's your query slightly differently laid out so that I can identify individual pieces. Let's look at the JOIN first... ``` ON ( CONVERT(DATETIME, CONVERT(VARCHAR(10), a11.dt, 101)) = CONVERT(DATETIME, CONVERT(VARCHAR(10), a12.Calendar_Date, 101)) ``` This does indeed compare date parts only. It converts both values to strings of the format `'mm/dd/yyyy'` and then compares them. It's not considered the most efficient way of doing it, but it *does* work. ``` AND a11.dt = a12.Calendar_Date ``` This seems to be a rogue condition. This compares values that include a time, to values that don't. This will be preventing your join from working. Now let's look at the SELECT and the GROUP BY ``` SELECT a12.Calendar_Year AS Calendar_Year, a11.dt AS datetime, ``` and ``` GROUP BY a12.Calendar_Year, a11.dt ``` `a11.dt` is actually the value from the data, not the calendar table. This means that you're not grouping by day, you're grouping by `the exact day and time` that exists in the data. I would recommend the following query instead. ``` SELECT a12.Calendar_Year AS Calendar_Year, a12.Calendar_Date AS Calendar_Date, (sum(a11.mc_gross) - sum(a11.mc_fee)) AS WJXBFS1 FROM Calendar AS a12 LEFT JOIN shine_orders AS a11 ON a11.dt >= a12.Calendar_Date AND a11.dt < a12.Calendar_Date + 1 WHERE a12.Calendar_Date >= '2013-01-01' AND a12.Calendar_Date < '2014-01-01' GROUP BY a12.Calendar_Year, a12.Calendar_Date ``` ***EDIT:*** I originally missed out a `+ 1` in the final query.
How to update all rows to truncate milliseconds
[ "sql", "sql-server", "datetime" ]
I followed this post [How do I perform an accent insensitive compare (e with è, é, ê and ë) in SQL Server?](https://stackoverflow.com/questions/2461522/how-do-i-perform-an-accent-insensitive-compare-e-with-e-e-e-and-e-in-sql-ser) but it doesn't help me with " ş ", " ţ " characters. This doesn't return anything if the city name is " iaşi " : ``` SELECT * FROM City WHERE Name COLLATE Latin1_general_CI_AI LIKE '%iasi%' COLLATE Latin1_general_CI_AI ``` This also doesn't return anything if the city name is " iaşi " (notice the foreign `ş` in the LIKE pattern): ``` SELECT * FROM City WHERE Name COLLATE Latin1_general_CI_AI LIKE '%iaşi%' COLLATE Latin1_general_CI_AI ``` I'm using SQL Server Management Studio 2012. My database and column collation is "Latin1\_General\_CI\_AI", column type is nvarchar. How can I make it work?
The characters you've specified aren't part of the Latin1 codepage, so they can't ever be compared in any other way than ordinal in `Latin1_General_CI_AI`. In fact, I assume that they don't really work at all in the given collation. If you're only using one collation, simply use the correct collation (for example, if your data is turkish, use `Turkish_CI_AI`). If your data is from many different languages, you have to use unicode, and the proper collation. However, there's an additional issue. In languages like Romanian or Turkish, `ş` is *not* an accented `s`, but rather a completely separate character - see <http://collation-charts.org/mssql/mssql.0418.1250.Romanian_CI_AI.html>. Contrast with eg. `š` which is an accented form of `s`. If you really need `ş` to equal `s`, you have to replace the original character manually. Also, when you're using unicode columns (nvarchar and the bunch), make sure you're also using unicode *literals*, ie. use `N'%iasi%'` rather than `'%iasi%'`.
In SQL Server 2008 collations versioned 100 [were introduced](https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms143503(v=sql.105)). Collation `Latin1_General_100_CI_AI` seems to do what you want. The following should work: ``` SELECT * FROM City WHERE Name LIKE '%iasi%' COLLATE Latin1_General_100_CI_AI ```
compare s, t with ş, ţ in SQL Server
[ "sql", "sql-server", "t-sql", "sql-server-2012" ]
I have a table called `project_errors` which has the columns `project_id`, `total_errors` and `date`. Every day a batch job runs which inserts a row with the number of errors for a particular project on a given day. Now I want to know how many errors were reduced and how many errors were introduced in a given month for a project. I thought of a solution: creating an after-insert trigger which will record whether the errors have increased or decreased and write that to another table. But this will not work for previously inserted data. Is there any other way I can do this? I researched the lag function but am not sure how to apply it to my problem. The table structure is given below. ``` Project_Id Total_Errors Row_Insert_Date 1 56 08-MAR-14 2 14 08-MAR-14 3 89 08-MAR-14 1 54 07-MAR-14 2 7 07-MAR-14 3 80 07-MAR-14 ``` And so on...
It's always helpful if you can show the output that you want. My guess is that you want to subtract 54 from 56 and show that 2 errors were added on project 1, subtract 7 from 14 to show that 7 errors were added on project 2, and subtract 80 from 89 to show that 9 errors were added on project 3. Assuming that is the case ``` SELECT project_id, total_errors, lag( total_errors ) over( partition by project_id order by row_insert_date ) prior_num_errors, total_errors - lag( total_errors ) over( partition by project_id order by row_insert_date ) difference FROM table_name ``` You may need to throw an `NVL` around the `LAG` if you want the `prior_num_errors` to be 0 on the first day.
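The `NVL` variant mentioned above, sketched out (Oracle; the table and column names are taken from the question's structure):

```sql
SELECT project_id,
       row_insert_date,
       total_errors,
       total_errors
         - NVL(LAG(total_errors)
                 OVER (PARTITION BY project_id
                       ORDER BY row_insert_date),
               total_errors) AS difference   -- first day per project shows 0
FROM   project_errors;
```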
In addition to Justin's answer, you may want to consider changing your table structure. Instead of recording only totals, you can record the actual errors, then count them. So suppose you had a table structure like: ``` CREATE TABLE PROJECT_ERRORS( project_id INTEGER, error_id INTEGER, stamp DATE ) ``` Each record would be a separate error (or separate error type), and this would give you more granularity and allow more complex queries. You could still get your totals by day with: ``` SELECT project_id, COUNT(error_id), TO_CHAR(stamp, 'DD-MON-YY') AS EACH_DAY FROM PROJECT_ERRORS GROUP BY project_id, TO_CHAR(stamp, 'DD-MON-YY') ``` And if we combine this with JUSTIN'S AWESOME ANSWER: ``` SELECT project_id AS PROJECT_ID, COUNT(error_id) AS TOTAL_ERRORS, LAG(COUNT(error_id)) OVER(PARTITION BY project_id ORDER BY TO_CHAR(stamp, 'DD-MON-YY')) AS prior_num_errors, COUNT(error_id) - LAG(COUNT(error_id)) OVER(PARTITION BY project_id ORDER BY TO_CHAR(stamp, 'DD-MON-YY') ) AS diff FROM project_errors GROUP BY project_id, TO_CHAR(stamp, 'DD-MON-YY') ``` But now you can also get fancy and look for specific types of errors or look during certain times of the day.
Difference between previous field in the same column in Oracle table
[ "sql", "oracle", "function" ]
I have my defined table type created with ``` CREATE TYPE dbo.MyTableType AS TABLE ( Name varchar(10) NOT NULL, ValueDate date NOT NULL, TenorSize smallint NOT NULL, TenorUnit char(1) NOT NULL, Rate float NOT NULL PRIMARY KEY (Name, ValueDate, TenorSize, TenorUnit) ); ``` and I would like to create a table of this type. From [this answer](https://stackoverflow.com/a/2432347/399573) the suggestion was to try ``` CREATE TABLE dbo.MyNewTable AS dbo.MyTableType ``` which produced the following error message in my SQL Server Express 2012: > Incorrect syntax near the keyword 'OF'. Is this not supported by SQL Server Express? If so, could I create it some other way, for example using `DECLARE`?
``` --Create table variable from type. DECLARE @Table AS dbo.MyTableType --Create new permanent/physical table by selecting into from the temp table. SELECT * INTO dbo.NewTable FROM @Table WHERE 1 = 2 --Verify table exists and review structure. SELECT * FROM dbo.NewTable ```
It is just like any other data type in your SQL Server. There is no such thing in SQL Server as creating a table of a user-defined table type. What you can do is declare a variable of this type and populate it, but you can't create a table of this type. Something like this... ``` /* Declare a variable of this type */ DECLARE @My_Table_Var AS dbo.MyTableType; /* Populate the table with data */ INSERT INTO @My_Table_Var SELECT Col1, Col2, Col3 ,..... FROM Source_Table ```
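For completeness: the main purpose of a table type is to be passed as a table-valued parameter, which must be marked `READONLY`. A hedged sketch (the procedure name is hypothetical; the columns come from the question's type definition):

```sql
CREATE PROCEDURE dbo.usp_ShowRates   -- hypothetical procedure name
    @rates dbo.MyTableType READONLY  -- table-valued parameters must be READONLY
AS
BEGIN
    SELECT Name, ValueDate, TenorSize, TenorUnit, Rate FROM @rates;
END
GO

DECLARE @t dbo.MyTableType;
INSERT INTO @t VALUES ('EUR', '2014-03-14', 1, 'M', 0.25);
EXEC dbo.usp_ShowRates @rates = @t;
```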
create SQL Server table based on a user defined type
[ "sql", "sql-server", "create-table" ]
I'm having a rough time getting this syntax correct and cannot figure out how to correctly write this. I have a stored procedure with some joins and the where clause is like this: ``` WHERE [Column1] = (SELECT Source FROM @CurrentTransition) AND [Column2] = (SELECT Target FROM @CurrentTransition) AND [IsDeprecated] = 0 AND sbl.StratId is null AND std.StratId is null AND CASE WHEN s.StratTimeBiasId <> NULL THEN s.StratTimeBiasId IN (SELECT * FROM dbo.fnGetValidTimeBiases(CAST(@datetime AS TIME))) ``` The error is simply `Incorrect syntax near the keyword 'IN'.` The `fnGetValidTimeBiases` function just returns a list of the `Id` values from the table that the `StratTimeBiasId` is the foreign key to. I only want that particular join to be used when there is actually a value in the `StratTimeBiasId` column.
That's not what a CASE statement is for. Use simple boolean logic instead: ``` AND (s.StratTimeBiasId IS NULL OR s.StratTimeBiasId IS NOT NULL AND s.StratTimeBiasId IN (SELECT * FROM dbo.fnGetValidTimeBiases(CAST(@datetime AS TIME)))) ``` The above will match anything where `s.StratTimeBiasId` has a NULL value OR where its value is in the results of that function (I assume the function only returns a single column? If not, that's going to throw an error). Related note: `=` and `<>` aren't the correct operators when dealing with NULL. If you want to test if a column's value is (not) null you need to use `column IS (NOT) NULL`.
Try this: ``` WHERE [Column1] = (SELECT Source FROM @CurrentTransition) AND [Column2] = (SELECT Target FROM @CurrentTransition) AND [IsDeprecated] = 0 AND sbl.StratId is null AND std.StratId is null AND s.StratTimeBiasId IS NULL OR s.StratTimeBiasId IN (dbo.fnGetValidTimeBiases(CAST(@datetime AS TIME))) ```
SQL Conditional WHERE Using IN
[ "sql", "sql-server" ]
I want to use the same value in different queries against different DBs, like ``` DECLARE @GLOBAL_VAR_1 INT = Value_1 DECLARE @GLOBAL_VAR_2 INT = Value_2 USE "DB_1" GO SELECT * FROM "TABLE" WHERE "COL_!" = @GLOBAL_VAR_1 AND "COL_2" = @GLOBAL_VAR_2 USE "DB_2" GO SELECT * FROM "TABLE" WHERE "COL_!" = @GLOBAL_VAR_2 ``` but it's giving an error. > Must declare the scalar variable "@GLOBAL\_VAR\_2". Can anyone suggest a way to do it?
There is no way to declare a global variable in Transact-SQL. However, if all you want your variables for is to be accessible across batches of a single script, you can use the **SQLCMD** tool or the **SQLCMD mode** of SSMS and define that tool/mode-specific variables like this: ``` :setvar myvar 10 ``` and then use them like this: ``` $(myvar) ``` To use SSMS's SQLCMD mode: ![enter image description here](https://i.stack.imgur.com/e31sy.png)
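A hedged sketch of the asker's script rewritten for SQLCMD mode - the `:setvar` macros survive `GO` batch separators, which T-SQL variables do not (`DB_1`, `DB_2`, and the column names are the question's placeholders, with `COL_!` normalized to `COL_1`):

```sql
:setvar GLOBAL_VAR_1 10
:setvar GLOBAL_VAR_2 20

USE DB_1
GO
SELECT * FROM [TABLE] WHERE [COL_1] = $(GLOBAL_VAR_1) AND [COL_2] = $(GLOBAL_VAR_2)
GO

USE DB_2
GO
SELECT * FROM [TABLE] WHERE [COL_1] = $(GLOBAL_VAR_2)
GO
```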
You **cannot** declare global variables in SQL Server. If you're using Management Studio you can use SQLCMD mode like @Lanorkin pointed out. Otherwise you can use [`CONTEXT_INFO`](http://msdn.microsoft.com/en-us/library/ms187768.aspx) to store a single value that is visible during a session and connection, but it'll disappear after that. The only truly global option would be to create a global temp table (named ##yourTableName) and store your variables there, but that will also disappear when all connections are closed.
how to declare global variable in SQL Server..?
[ "sql", "sql-server" ]
I have a table in MySQL like this: ``` ID | Team | DPV | DPT | Difference | 1 | Team1 | 25 | 27 | | 2 | Team2 | 24 | 14 | | ``` I want to do some math on DPV and DPT, so: DPV minus DPT = Difference **I have now:** e2teams is my table name. ``` SELECT FROM e2teams GROUP by Difference HAVING COUNT DPV (-) DPT; ``` What am I doing wrong?
How about this one: ``` SELECT ID, Team, DPV, DPT, DPV-DPT AS Difference FROM e2teams ```
Something like this? ``` SELECT ID, Team, DPV, DPT, (DPV - DPT) as Difference FROM e2teams ``` You can find more information **[here](https://dev.mysql.com/doc/refman/5.0/en/arithmetic-functions.html)**
How to table minus table = answer PHP MYSQL
[ "mysql", "sql", "select", "count" ]
I have one table named Demodata which has two columns, col1 and col2. The data of the table is ``` col1 col2 1 5 1 6 2 7 3 8 3 9 4 10 ``` and after a **SELECT** command we need this data ``` col1 Col2 1 5 6 2 7 3 8 9 4 10 ``` Is this possible? If so, what is the query? Please guide me.
Try this ``` SELECT CASE WHEN RN > 1 THEN NULL ELSE Col1 END,Col2 FROM ( SELECT *,Row_Number() Over(Partition by col1 order by col1) AS RN From yourTable ) AS T ```
No, it is not possible. SQL Server result sets are row-based, not tree-based. You must have a value for each column (or alternatively a NULL value). What you can do is group by `col1` and run an aggregate function on the values of `col2` (possibly the `STUFF` function).
DISTINCT for only one Column and other column random?
[ "sql", "sql-server-2008" ]
I have two tables named ServiceIOD and ServiceSubs, and between them only one column named ShortCode is common. So, when I try to join these two tables to get the combined result for some given short code, it is not giving me the correct output. Maybe it is because the ShortCode column is holding redundant data in both tables. For example, we can find shortcode=36788 multiple times in both tables, across multiple rows. The query I tried so far is as below: ``` SELECT distinct serviceIOD.keyword, serviceIOD.shortcode FROM serviceIOD INNER JOIN serviceSubs ON serviceIOD.shortcode = serviceSubs.shortcode AND serviceIOD.shortcode = 36788 ``` I would appreciate any sort of help from you all. Thank you. UPDATE: Explanation of incorrect output: I am getting a total of 24 rows when I join with this query for `shortcode` 36788, but when I query the two tables separately for the `shortcode` 36788 I get 24 rows from the `ServicesIOD` table and 3 rows from the `ServicesSubs` table - 27 rows altogether. But when I join with the query above I get only 24 rows.
I have solved this problem with this following query: ``` SELECT serviceIOD.keyword, serviceIOD.shortcode FROM serviceIOD where shortcode = 36788 UNION select serviceSubs.keyword, serviceSubs.shortcode from serviceSubs where shortcode = 36788 ``` Thank you everyone for taking your valuable time to help me on my problem. And special thanks to @Optimuskck as I got my answer idea from his suggested answer.
Try this code. Usually UNION avoids duplicate entries. ``` (SELECT serviceIOD.keyword, serviceIOD.shortcode FROM serviceIOD) UNION (select * from serviceSubs) WHERE serviceIOD.shortcode = 36788 ```
How to join two tables with redundant column value?
[ "mysql", "sql" ]
I have a function called generate\_table, that takes 2 input parameters (`rundate::date` and `branch::varchar`) Now I am trying to work on a second function, using PLPGSQL, that will get a list of all branches and the newest date for each branch and pass this as a parameter to the generate\_table function. The query that I have is this: ``` select max(rundate) as rundate, branch from t_index_of_imported_files group by branch ``` and it results on this: ``` rundate;branch 2014-03-13;branch1 2014-03-12;branch2 2014-03-10;branch3 2014-03-13;branch4 ``` and what I need is that the function run something like this ``` select generate_table('2014-03-13';'branch1'); select generate_table('2014-03-12';'branch2'); select generate_table('2014-03-10';'branch3'); select generate_table('2014-03-13';'branch4'); ``` I've been reading a lot about PLPGSQL but so far I can only say that I barely know the basics. I read that one could use a concatenation in order to get all the values together and then use a EXECUTE within the function, but I couldn't make it work properly. Any suggestions on how to do this?
You can do this with a plain SELECT query using the new [**`LATERAL JOIN`**](http://www.postgresql.org/docs/devel/static/sql-select.html) in Postgres 9.3+ ``` SELECT * FROM ( SELECT max(rundate) AS rundate, branch FROM t_index_of_imported_files GROUP BY branch ) t , generate_table(t.rundate, t.branch) g; -- LATERAL is implicit here ``` [Per documentation:](http://www.postgresql.org/docs/devel/static/sql-select.html) > `LATERAL` can also precede a function-call `FROM` item, but in this > case it is a noise word, because the function expression can refer to > earlier `FROM` items in any case. The same is possible in older versions by expanding rows for set-returning functions in the `SELECT` list, but the new syntax with `LATERAL` is much cleaner. Anyway, for Postgres 9.2 or older: ``` SELECT generate_table(max(rundate), branch) FROM t_index_of_imported_files GROUP BY branch; ```
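Since the asker set out to write PL/pgSQL, the same call pattern as an explicit loop - a hedged sketch using an anonymous `DO` block (function and table names taken from the question):

```sql
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT max(rundate) AS rundate, branch
        FROM   t_index_of_imported_files
        GROUP  BY branch
    LOOP
        PERFORM generate_table(r.rundate, r.branch);  -- PERFORM discards the result
    END LOOP;
END
$$;
```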
``` select generate_table(max(rundate) as rundate, branch) from t_index_of_imported_files group by branch ```
Dynamically execute query using the output of another query
[ "sql", "postgresql", "dynamic", "plpgsql", "lateral" ]
I have a database column which gives a string like `,Recovery, Pump Exchange,`. I want to remove the first and last comma from the string. Expected result: `Recovery, Pump Exchange`.
You can use [`SUBSTRING`](http://msdn.microsoft.com/en-us/library/ms187748.aspx) for that: ``` SELECT SUBSTRING(col, 2, LEN(col)-2) FROM ... ``` Obviously, an even better approach would be not to put leading and trailing commas there in the first place, if this is an option. > I want to remove the last and first comma only if they exist, otherwise not. The expression becomes a little more complex, but the idea remains the same: ``` SELECT SUBSTRING( col , CASE LEFT(col,1) WHEN ',' THEN 2 ELSE 1 END , LEN(col) -- Start with the full length -- Subtract 1 for comma on the left - CASE LEFT(col,1) WHEN ',' THEN 1 ELSE 0 END -- Subtract 1 for comma on the right - CASE RIGHT(col,1) WHEN ',' THEN 1 ELSE 0 END ) FROM ... ```
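For reference, on SQL Server 2017 and later (not the 2008 version this question is tagged with), `TRIM` accepts an explicit character list, so the whole CASE expression collapses to one call - a hedged sketch:

```sql
DECLARE @words varchar(50) = ',Recovery, Pump Exchange,';
-- Strips commas from both ends only; interior commas are untouched
SELECT TRIM(',' FROM @words);  -- Recovery, Pump Exchange
```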
Alternatively to dasblinkenlight's method you could use replace: ``` DECLARE @words VARCHAR(50) = ',Recovery, Pump Exchange,' SELECT REPLACE(','+ @words + ',',',,','') ```
How to replace first and last character of column in sql server?
[ "sql", "sql-server-2008" ]
I have this query: ``` select "ID" = CASE WHEN LEN(LEFT(ID, (charindex('.', ID)-1))) > 1 THEN LEFT(ID, (charindex('.', ID)-1)) ELSE ID END From table where tableID = '111' ``` The ID is something like AA11.1 or BB22 some with a period and some without. I'm wanting to truncate all characters after the period, but in the case where there is no period it errors. I want to keep what is there for an ID without a period. So for AA11.1 I want to return AA11 and for BB22 I want to return BB22. Any suggestions?
Try: ``` CASE WHEN charindex('.', ID) > 1 THEN LEFT(ID, (charindex('.', ID)-1)) ELSE ID END ```
Try this: ``` select ID = CASE WHEN charindex('.', ID) > 1 THEN LEFT(ID, (charindex('.', ID)-1)) ELSE ID END ```
SQL query conditioned by error
[ "", "sql", "sql-server", "" ]
So this seems like something that should be easy. But say I had an insert: ``` insert into TABLE VALUES ('OP','OP_DETAIL','OP_X') ``` and I wanted X to go from 1-100. (Knowing there are some of those numbers that already exist, so if the insert fails I want it to keep going) how would I do such a thing?
Here's a slightly faster way ``` -- techniques from Jeff Moden and Itzik Ben-Gan: ;WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1), E02(N) AS (SELECT 1 FROM E00 a, E00 b), E04(N) AS (SELECT 1 FROM E02 a, E02 b), E08(N) AS (SELECT 1 FROM E04 a, E04 b), cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E08) INSERT INTO yourTable SELECT 'OP','OP_DETAIL','OP_' + CAST(N AS varchar) FROM cteTally WHERE N <= 100 ```
No need for loops. Set-based methods FTW! This is a prime example where you should use a numbers table. Other answerers have created the equivalent on the fly but you can't beat a good, old-fashioned table if you ask me! Use your best Google-Fu to find a script or alternatively [here's one I made earlier](http://gvee.co.uk/files/sql/dbo.numbers%20&%20dbo.calendar.sql) ``` INSERT INTO your_table (you_should, always_list, your_columns) SELECT 'OP' , 'OP_DETAIL' , 'OP_' + Cast(number As varchar(11)) FROM dbo.numbers WHERE number BETWEEN 1 AND 100 AND NOT EXISTS ( SELECT your_columns FROM your_table WHERE your_columns = 'OP_' + Cast(numbers.number As varchar(11)) ) ; ```
SQL SERVER Insert Addition?
[ "", "sql", "sql-server", "t-sql", "" ]
I need a query output like the below table; This is a primary entry to a table and these records will be modified by a third party program which I have no control. Can anyone suggest a good sample? ``` ID | DATEIN | DATEOUT | STATUS 1 02.02.2014 00:00:00 02.02.2014 23:59:59 1 2 03.02.2014 00:00:00 03.02.2014 23:59:59 0 ``` I tried ``` SELECT To_Char(To_Date(SYSDATE), 'dd-MM-yyyy hh:mm:ss PM'), To_Char(date_add(To_Date(SYSDATE +1), INTERVAL -1 SECOND), 'dd-MM-yyyy hh:mm:ss PM') FROM dual ``` but this query throws an error `ORA-00907: missing right parenthesis`.
There is no need for `PM` if you want it to be in 24-hour format. And pay attention to the mask for minutes, it is `mi`, not `mm` as in your query. Also as already mentioned no need to convert `SYSDATE` to date as it is already of that datatype: ``` SELECT to_char(to_date(SYSDATE), 'dd-mm-yyyy HH24:mi:ss') date_in, to_char(to_date(SYSDATE + 1) - INTERVAL '1' SECOND, 'dd-mm-yyyy HH24:mi:ss') date_out FROM dual; DATE_IN DATE_OUT ------------------- ------------------- 11-03-2014 00:00:00 11-03-2014 23:59:59 ```
You can do away with DATE\_ADD and TO\_DATE functions (SYSDATE is already a DATE, no need of conversion ) , and also use `mi` to show minute instead of `mm` which is format specifier for month as in: ``` SELECT To_Char(SYSDATE, 'dd-MM-yyyy hh:mi:ss PM'), To_Char((SYSDATE + 1) + INTERVAL '-1' SECOND, 'dd-MM-yyyy hh:mi:ss PM') FROM dual ```
start/end date and time in oracle sql query
[ "", "sql", "oracle", "date", "" ]
Here I am trying to auto-increment my identity column in a MySQL database like the following, with a prefix added before the number. **Id------------------Category** Cat10001---------Men Cat10002--------Women Cat10003---------Kids Here is the [link](https://stackoverflow.com/questions/2177584/sqlserver-identity-column-with-text) which I referred to, but I got hung up with an error. **Error:** `#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AS 'ABCD-' + CAST(Id as varchar(16)) )' at line 4` **MySql Query:** ``` CREATE TABLE MyTable ( Id int NOT NULL PRIMARY KEY, categoryid AS 'cat' + CAST(Id as varchar(16)) ) ``` Note: I am using phpMyAdmin with MySQL.
You can use BEFORE INSERT trigger. You can do something like this. Check below to see if this is what you needed. ``` CREATE TABLE MyTable ( Id int NOT NULL PRIMARY KEY AUTO_INCREMENT, categoryid varchar(16), categoryname varchar(16) ); CREATE TRIGGER mytrigger BEFORE INSERT ON MyTable FOR EACH ROW SET NEW.categoryid = CONCAT("CAT",COALESCE((SELECT MAX(Id)+1 from MyTable),1)); insert into MyTable(categoryid,categoryname) Values(1,"test"); insert into MyTable(categoryid,categoryname) Values(2,"test1"); ```
Create a view or a separate table to have this type of data. Like ``` create view calculateddata as select Id, concat('cat',CAST(Id as varchar(16))) AS categoryid from MyTable ```
how to add prefix to identity column in mysql.?
[ "", "mysql", "sql", "phpmyadmin", "" ]
I would like to create a table which will automatically compute the sum of four other columns into a column called totalAmount. My SQL script won't work. Kindly guide me. Thanks. This is my SQL: ``` CREATE TABLE dbo.phicTF( guid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY, phno NVARCHAR(14) NOT NULL, memLname NVARCHAR(MAX) NOT NULL, memFname NVARCHAR(MAX) NOT NULL, memMname NVARCHAR(MAX) NOT NULL, pLname NVARCHAR(MAX) NOT NULL, pFname NVARCHAR(MAX) NOT NULL, pMname NVARCHAR(MAX) NOT NULL, preltomem NVARCHAR(MAX) NOT NULL, membership NVARCHAR(MAX) NOT NULL, pAge NVARCHAR(MAX) NOT NULL, pSex NVARCHAR(MAX) NOT NULL, conDateFrom DATE NOT NULL, conDateTo DATE NOT NULL, icdCode NVARCHAR(MAX) NOT NULL, fcRateFac NVARCHAR(MAX) NOT NULL, fcRatePF NVARCHAR(MAX) NOT NULL, scRateFac NVARCHAR(MAX) NULL, scRatePF NVARCHAR(MAX) NULL, totalAmount AS SUM(fcRateFac+fcRatePF+scRateFac+scRatePF) ) GO ```
``` CREATE TABLE dbo.phicTF( guid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY, phno NVARCHAR(14) NOT NULL, memLname NVARCHAR(MAX) NOT NULL, memFname NVARCHAR(MAX) NOT NULL, memMname NVARCHAR(MAX) NOT NULL, pLname NVARCHAR(MAX) NOT NULL, pFname NVARCHAR(MAX) NOT NULL, pMname NVARCHAR(MAX) NOT NULL, preltomem NVARCHAR(MAX) NOT NULL, membership NVARCHAR(MAX) NOT NULL, pAge NVARCHAR(MAX) NOT NULL, pSex NVARCHAR(MAX) NOT NULL, conDateFrom DATE NOT NULL, conDateTo DATE NOT NULL, icdCode NVARCHAR(MAX) NOT NULL, fcRateFac NVARCHAR(MAX) NOT NULL, fcRatePF NVARCHAR(MAX) NOT NULL, scRateFac NVARCHAR(MAX) NULL, scRatePF NVARCHAR(MAX) NULL, totalAmount AS cast(fcRateFac as int)+cast(fcRatePF as int)+cast(scRateFac as int)+cast(scRatePF as int) ) ``` use this
Define the `total` column as `NVARCHAR(MAX)` and add a trigger on `INSERT` and `UPDATE`. In the trigger get values for the 4 columns sum them and assign to the `total` column.
How to create a table with a column that will automatically compute the sum of the other column using SQL Server 2008 R2?
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I want to ask something about counting in a WHERE clause. I have researched it, but not found any solution. I have 2 tables: Table1 contains: ``` id, level, instansi ``` Table2 contains: ``` id, level, table1ID ``` Question: How do I display `table1.instansi` only if the number of matching rows in `table2` is less than 2? **Condition** ``` table2.table1ID = table1.id ```
``` SELECT t1.instansi FROM table1 t1 INNER JOIN table2 t2 ON t2.table1ID = t1.id GROUP BY t1.id HAVING COUNT(*) < 2 ``` If rows of `table1` with no match in `table2` at all should also count as "less than 2", use a `LEFT JOIN` and `COUNT(t2.table1ID)` instead.
``` select t1.* from table1 t1 join table2 t2 on t2.table1ID=t1.id group by t1.id having count(*)<2 ```
How to count fields on WHERE Clause
[ "", "mysql", "sql", "where-clause", "" ]
I have a database table like this: ``` --------------------------------------------------- | MemberID | IntrCode | InstruReply | CreatedDate | ...plus 2 more columns --------------------------------------------------- | 6 | 1 | Activated | 26 FEB 2014 | | 7 | 2 | Cancelled | 25 FEB 2014 | | 6 | 2 | Cancelled | 15 FEB 2014 | | 7 | 1 | Activated | 03 FEB 2014 | --------------------------------------------------- ``` Now, based on the `CreatedDate` and the `IntrCode`, I need a query that returns the results as follows, with `IntrCode` as a parameter. When `@IntrCode = 1`, I need only active `MemberID`s on the latest `CreatedDate`. PS: please note member 7 is cancelled when checking the latest `CreatedDate`. Output ``` --------------------------------------------------- | MemberID | IntrCode | InstruReply | CreatedDate | --------------------------------------------------- | 6 | 1 | Activated | 26 FEB 2014 | --------------------------------------------------- ``` I wrote the query below, but I can't show the other columns. (I appreciate all your help.) ``` SELECT MemberID, MAX(CreatedDate) AS LatestDate FROM MyTable GROUP BY MemberID ```
Try this **Method 1:** ``` SELECT * FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY MemberID ORDER BY CreatedDate DESC) RN FROM MyTable WHERE InstruReply = 'Activated' AND IntrCode = @IntrCode ) AS T WHERE RN = 1 ``` **Method 2:** ``` SELECT * FROM ( SELECT MemberID, MAX(CreatedDate) AS LatestDate FROM MyTable GROUP BY MemberID ) AS s INNER JOIN MyTable T ON T.MemberID = s.MemberID AND T.CreatedDate = s.LatestDate WHERE T.InstruReply = 'Activated' AND T.IntrCode = @IntrCode ``` [**Fiddle Demo**](http://sqlfiddle.com/#!6/7413b6/17) **Output** ``` --------------------------------------------------- | MemberID | IntrCode | InstruReply | CreatedDate | --------------------------------------------------- | 6 | 1 | Activated | 26 FEB 2014 | --------------------------------------------------- ```
You can use a [`CTE`](http://msdn.microsoft.com/en-us/library/ms175972.aspx) and the [`ROW_NUMBER` function](http://technet.microsoft.com/en-us/library/ms186734.aspx): ``` With CTE As ( SELECT t.*, RN = ROW_NUMBER()OVER(PARTITION BY MemberID Order By CreatedDate DESC) FROM MyTable t WHERE IntrCode = @IntrCode ) SELECT MemberID, IntrCode, InstruReply, CreatedDate FROM CTE WHERE RN = 1 ``` `DEMO`
SQL Query to get 1 latest record per member based on a latest date
[ "", "sql", "sql-server", "" ]
I'm trying to build a search feature on my website. I have search working for usernames and emails, but I'd also like to be able to search based on the users full name. My problem is that first\_name and last\_name are stored separately, and I'm not sure how to build the query for this. Something like ``` SELECT * FROM users WHERE first_name AND last_name LIKE '%$query%' ``` Obviously that's very wrong - any help?
``` SELECT * FROM users WHERE first_name LIKE '%$query%' AND last_name LIKE '%$query%' ```
Yeah, try using below query: For Oracle: ``` SELECT * FROM users WHERE first_name || ' ' || last_name LIKE '%' || $query || '%'; ``` For SQL Server: ``` SELECT * FROM users WHERE first_name + ' ' + last_name LIKE '%' + $query + '%'; ```
sql - use LIKE to search two columns at once (first and last name)
[ "", "sql", "" ]
I realize "an [`INSERT`](http://www.postgresql.org/docs/current/static/sql-insert.html) command returns a command tag," but what's the return type of a command tag? I.e., what should be the return type of a [query language (SQL) function](http://www.postgresql.org/docs/current/static/xfunc-sql.html) that ends with an `INSERT`? For example: ``` CREATE FUNCTION messages_new(integer, integer, text) RETURNS ??? AS $$ INSERT INTO messages (from, to, body) VALUES ($1, $2, $3); $$ LANGUAGE SQL; ``` Sure, I can just specify the function's return type as `integer` and either add `RETURNING 1` to the `INSERT` or `SELECT 1;` after the `INSERT`. But, I'd prefer to keep things as simple as possible.
Most APIs specify that DML statements return the number of records affected if there are no errors, and negative numbers as error codes if there are errors that prevent the code from executing successfully. This is a good practice to incorporate into your designs.
If the inserted values are of any interest, as when they are processed before inserting, you can return a row of type `messages`: ``` CREATE FUNCTION messages_new(integer, integer, text) RETURNS messages AS $$ INSERT INTO messages (from, to, body) VALUES ($1, $2, $3) returning *; $$ LANGUAGE SQL; ``` And get it like this ``` select * from messages_new(1,1,'text'); ```
INSERT return type?
[ "", "sql", "postgresql", "sql-insert", "return-type", "sql-function", "" ]
I need to replace HTML codes with the special characters. I am affected by the HTML code as said [here](http://webdesign.about.com/od/localization/l/blhtmlcodes-it.htm) ``` +------------------------+ + Html code + Display + +-----------+------------+ + &Agrave; + À + + &agrave; + à + + &Aacute; + Á + + &aacute; + á + + &Egrave; + È + + &egrave; + è + + &Eacute; + É + + &eacute; + é + + &Igrave; + Ì + + &igrave; + ì + + &Iacute; + Í + + &iacute; + í + + &Ograve; + Ò + + &ograve; + ò + + &Oacute; + Ó + + &oacute; + ó + + &Ugrave; + Ù + + &ugrave; + ù + + &Uacute; + Ú + + &uacute; + ú + + &laquo; + « + + &raquo; + » + + &euro; + € + + &deg; + ° + +------------------------+ ``` I found these entries in the database which make no sense. So need to change them into the original symbols (characters) **Data Setup:** Also found in this [SQL fiddle](http://www.sqlfiddle.com/#!4/f0308/1/0) The following values must be updated as per the below table ``` CREATE TABLE TEMP ( COL1 VARCHAR2(50 CHAR), COL2 VARCHAR2(50 CHAR), COL3 VARCHAR2(50 CHAR), COL4 VARCHAR2(10 CHAR) ); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIA I MAGGIO', 'GI&Ugrave; PER LA STRADA', 'TOR LUPARA', '83'); Insert into TEMP (COL1, COL3, COL4) Values ('VIA D''AZEGLIO', 'MUGGI&OGRAVE;', '12'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIA PONTE NUOVO', 'TOSCA CAF&Egrave;', 'VERONA', '8a'); Insert into TEMP (COL1, COL3, COL4) Values ('LOCALIT&OACUTE; AGELLO', 'SAN SEVERINO MARCHE', '60'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIA PAPA GIOVANNI XXIII', 'LOCALIT&Agrave; PREDONDO', 'BOVEGNO', '24'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIA CATANIA', 'CASA DI OSPITAIT&Agrave; COLLEREALE', 'MESSINA', '26/B'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('PIAZZA DI SANTA CROCE IN GERUSALEMME', 'MINISTERO BENI E ATTIVIT&Agrave; CULTURALI', 'ROMA', '9/a'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIA RONCIGLIO', 'LOCALIT&Agrave; MONTECUCCO', 'GARDONE RIVIERA', 
'55'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('BORGO TRINITA''', 'Borgo Trinit&agrave;, 58', 'BELLANTE', '58'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('10 PIAZZA S. LORENZO', 'ROVAR&Egrave;', 'S. BIAGIO DI GALLALTA', '10'); Insert into TEMP (COL1, COL3, COL4) Values ('LOCALIT&AGRAVE; MALCHINA', 'SISTIANA', '3'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIA DEI CROCIFERI', 'PRESSO AUTORIT&Agrave; ENERGIA', 'ROMA', '19'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIALE STAZIONE', 'FRAZIONE SAN NICOL&Ograve; A TREBBIA', 'ROTTOFRENO', '10/B'); Insert into TEMP (COL1, COL2, COL3, COL4) Values ('VIA ADOLFO CONSOLINI', 'ALBAR&Egrave; DI COSTERMANO', 'COSTERMANO', '45 B'); COMMIT; ``` What we see after this setup is ``` SELECT * FROM TEMP; COL1 COL2 COL3 COL4 ---------------------------------------------------------------------------------- --------------------- -------- VIA I MAGGIO GI&Ugrave; PER LA STRADA TOR LUPARA 83 VIA D'AZEGLIO MUGGI&OGRAVE; 12 VIA PONTE NUOVO TOSCA CAF&Egrave; VERONA 8a LOCALIT&OACUTE; AGELLO SAN SEVERINO MARCHE 60 VIA PAPA GIOVANNI XXIII LOCALIT&Agrave; PREDONDO BOVEGNO 24 VIA CATANIA CASA DI OSPITAIT&Agrave; COLLEREALE MESSINA 26/B PIAZZA DI SANTA CROCE IN GERUSALEMME MINISTERO BENI E ATTIVIT&Agrave; CULTURALI ROMA 9/a VIA RONCIGLIO LOCALIT&Agrave; MONTECUCCO GARDONE RIVIERA 55 BORGO TRINITA' Borgo Trinit&agrave;, 58 BELLANTE 58 10 PIAZZA S. LORENZO ROVAR&Egrave; S. BIAGIO DI GALLALTA 10 LOCALIT&AGRAVE; MALCHINA SISTIANA 3 VIA DEI CROCIFERI PRESSO AUTORIT&Agrave; ENERGIA ROMA 19 VIALE STAZIONE FRAZIONE SAN NICOL&Ograve; A TREBBIA ROTTOFRENO 10/B VIA ADOLFO CONSOLINI ALBAR&Egrave; DI COSTERMANO COSTERMANO 45 B 14 rows selected. 
``` What I want to see is that ``` COL1 COL2 COL3 COL4 ------------------------------------- ----------------------------------- ------------------------- ---------- VIA I MAGGIO GIÙ PER LA STRADA LUPARA 83 VIA D'AZEGLIO MUGGIÒ 12 VIA PONTE NUOVO TOSCA CAFÈ VERONA 8a LOCALITÓ AGELLO SAN SEVERINO MARCHE 60 VIA PAPA GIOVANNI XXIII LOCALITÀ PREDONDO BOVEGNO 24 VIA CATANIA CASA DI OSPITAITÀ COLLEREALE MESSINA 26/B PIAZZA DI SANTA CROCE IN GERUSALEMME MINISTERO BENI E ATTIVITÀ CULTURALI ROMA 9/a VIA RONCIGLIO LOCALITÀ MONTECUCCO GARDONE RIVIERA 55 BORGO TRINITA' Borgo TrinitÀ, 58 BELLANTE 58 10 PIAZZA S. LORENZO ROVARÈ S. BIAGIO DI GALLALTA 10 LOCALITÀ MALCHINA SISTIANA 3 VIA DEI CROCIFERI PRESSO AUTORITÀ ENERGIA ROMA 19 VIALE STAZIONE FRAZIONE SAN NICOLÒ A TREBBIA ROTTOFRENO 10/B VIA ADOLFO CONSOLINI ALBARÈ DI COSTERMANO COSTERMANO 45 B 14 rows selected. ``` **Pitfalls**: 1. `&Agrave;` may also be written as `&AGRAVE;` so all the HTML codes must be case insensitive. 2. More than one HTML code can affect a column. So I need to search all the HTML codes in the table above for any column. What I tried so far is a simple Update with replace function ``` UPDATE TEMP SET COL1 = REPLACE (COL1, '&Agrave;' , 'À'); ``` Going by this way, I will spend days to write the scripts. Because I need to carry on this fix in 20+ tables each with 40+ columns. So expecting a simpler way to do this. Can some one help me out of this writers cramp? Also which is the best way to replace, is it using the character or ASCII code conversion? **UPDATE:** What exactly I need 1. How to write update set; either 'À' or CHR(192) 2. All updates in one statement for one table (May be CASE statement, REGEXP\_LIKE and REGEXP\_REPLACE combination will do)
You want to use [UTL\_I18N.unescape\_reference](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/u_i18n.htm#i998992). For not writing long scripts, let Oracle do the job for you. Then run its generated script: ``` select 'UPDATE ' || table_name || ' SET ' || col_name || ' = UTL_I18N.unescape_reference(' || col_name || ');' from all_tab_cols where owner = <MY_NAME> and table_name in ('....') -- you can use this clause too: table_name like '%my_table%' ```
You can create a procedure that will spool the UPDATE statements into a file, which you can eventually execute to perform the actual updates. The steps will involve the following: 1. Create a temp table with two columns storing the HTML Code and the Display value mapping. 2. Create a procedure that will perform this logic using a cursor: * Loop through all the tables to be updated. * For each table, identify the columns from either user\_tab\_columns or all\_tab\_columns. * For each column, loop through the HTML Code in the temp table created in #1 then create an UPDATE statement that will replace the column value HTML code to its corresponding Display value. * Output each UPDATE statement to the console. 3. Execute the procedure and spool the results into a file 4. Execute the spooled file as a script to run the UPDATE statements. The actual steps may not be the same as above. But the idea is to speed up the task by creating a procedure that will automate the creation of the necessary UPDATE statements.
Efficient way to replace HTML codes with special characters in oracle sql
[ "", "sql", "regex", "oracle", "replace", "oracle11g", "" ]
I have the following MySQL table to log the registration status changes of pupils: ``` CREATE TABLE `pupil_registration_statuses` ( `status_id` INT(11) NOT NULL AUTO_INCREMENT, `status_pupil_id` INT(10) UNSIGNED NOT NULL, `status_status_id` INT(10) UNSIGNED NOT NULL, `status_effectivedate` DATE NOT NULL, PRIMARY KEY (`status_id`), INDEX `status_pupil_id` (`status_pupil_id`) ) COLLATE='utf8_general_ci' ENGINE=MyISAM; ``` Example data: ``` INSERT INTO `pupil_registration_statuses` (`status_id`, `status_pupil_id`, `status_status_id`, `status_effectivedate`) VALUES (1, 123, 1, '2013-05-06'), (2, 123, 2, '2014-03-15'), (3, 123, 5, '2013-03-15'), (4, 123, 6, '2013-05-06'), (5, 234, 2, '2013-02-02'), (6, 234, 4, '2013-04-17'), (7, 345, 2, '2014-02-01'), (8, 345, 3, '2013-06-01'); ``` It is possible that statuses can be inserted, thus the sequence of dates does not necessarily follow the same sequence of IDs. For example: `status_id` 1 might has a date of 2013-05-06, but `status_id` 3 might have a date of 2013-03-15. `status_id` values are, however, sequential within any particular date. Thus if a pupil's registration status changes multiple times on one day then the last row will will reflect their status for that date. It is necessary to find out a particular student's registration status on a particular date. The following query works for an individual pupil: ``` SELECT * FROM pupil_registration_statuses WHERE status_pupil_id = 123 AND status_effectivedate <= '2013-05-06' ORDER BY status_effectivedate DESC, status_id DESC LIMIT 1; ``` This returns the expected row of `status_id = 4` However, I now need to issue a (single) query to return the status for all pupils on a particular date. 
The following query is proposed, but doesn't obey the "last `status_id` in a day" requirement: ``` SELECT * FROM pupil_registration_statuses prs INNER JOIN (SELECT status_pupil_id, MAX(status_effectivedate) last_date FROM pupil_registration_statuses WHERE status_effectivedate <= '2013-05-06' GROUP BY status_pupil_id) qprs ON prs.status_pupil_id = qprs.status_pupil_id AND prs.status_effectivedate = qprs.last_date; ``` This query, however, returns 2 rows for pupil 123. **EDIT** To clarify, if the input is the date `'2013-05-06'`, I expect to get the rows 4 and 6 from the query. <http://sqlfiddle.com/#!2/68ee6/2>
Is this what you're after? ``` SELECT a.* FROM pupil_registration_statuses a JOIN ( SELECT prs.status_pupil_id , MIN(prs.status_id) min_status_id FROM pupil_registration_statuses prs JOIN ( SELECT status_pupil_id , MAX(status_effectivedate) last_date FROM pupil_registration_statuses WHERE status_effectivedate <= '2013-05-06' GROUP BY status_pupil_id ) qprs ON prs.status_pupil_id = qprs.status_pupil_id AND prs.status_effectivedate = qprs.last_date GROUP BY prs.status_pupil_id ) b ON b.min_status_id = a.status_id; ``` <http://sqlfiddle.com/#!2/68ee6/7> (Incidentally, there's an ugly and undocumented hack for this kind of problem which goes something like this: ``` SELECT x.* FROM (SELECT * FROM prs WHERE status_effectivedate <= '2013-05-06' ORDER BY status_pupil_id, status_effectivedate DESC, status_id)x GROUP BY status_pupil_id; ``` ...but I didn't tell you that! ;) )
I have changed where clause, please try it. ``` SELECT * FROM pupil_registration_statuses prs INNER JOIN (SELECT status_pupil_id, MAX(status_effectivedate) last_date FROM pupil_registration_statuses WHERE Datediff(status_effectivedate, '2013-05-06') <= 0 GROUP BY status_pupil_id) qprs ON prs.status_pupil_id = qprs.status_pupil_id AND prs.status_effectivedate = qprs.last_date; ``` EDIT Try this ``` SELECT * FROM ( select status_pupil_id,max(status_id) as status_id from pupil_registration_statuses innr --where Datediff(dd,status_effectivedate, '2013-05-06') >= 0 group by status_pupil_id )as ca inner join pupil_registration_statuses prs on prs.status_id = ca.status_id where Datediff(dd,prs.status_effectivedate, '2013-05-06') >= 0 ```
MySQL Query for finding a "LAST" row, based on two fields
[ "", "mysql", "sql", "" ]
I am trying to use this select statement, but my issue is that the ID I am querying has a NULL value. Even though ID 542 has a NULL value now, I know for a fact that in the future it is going to have a 'COMPLETE' value in it. The 3 possible values for the FLAG field are COMPLETE, NOT COMPLETE and NULL. What I want to achieve with this select statement is to see all records where FLAG is not 'COMPLETE'. If I run my query now, it will not return anything, but if I remove `FLAG <> 'COMPLETE'` then it will return the record with ID 542, whose flag value is NULL. Here is my code: ``` SELECT ID, DT, STAT FROM myTable WHERE ID = 542 and FLAG <> 'COMPLETE' ```
Convert the NULL to text: ``` SELECT ID, DT, STAT FROM myTable WHERE ID = 542 and ISNULL(FLAG,'NULL') <> 'COMPLETE' ```
``` SELECT ID, DT, STAT FROM myTable WHERE ID = 542 and ISNULL(FLAG, 'NOT COMPLETE') <> 'COMPLETE' ``` Since the `FLAG` is null it cannot be compared against 'COMPLETE' and you're missing the entry... or: ``` SELECT ID, DT, STAT FROM myTable WHERE ID = 542 AND (FLAG <> 'COMPLETE' OR FLAG IS NULL) ```
Issue with null value using select statement
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I am working on DW project where I need to query live CRM system. The standard isolation level negatively influences performance. I am tempted to use no lock/transaction isolation level read uncommitted. I want to know how many of selected rows are identified by dirty read.
Maybe you can do this: ``` SELECT * FROM T WITH (SNAPSHOT) EXCEPT SELECT * FROM T WITH (READCOMMITTED, READPAST) ``` But this is inherently racy.
Why do you need to know that? You use `TRANSACTION ISOLATION LEVEL READ UNCOMMITTED` just to indicate that a `SELECT` statement won't wait till any update/insert/delete transactions are finished on the table/page/rows - and will grab even *dirty* records. And you do it to increase performance. Trying to get information about *which* records were dirty is like punching a blender with your face. It hurts and gives you nothing but pain. Because they were dirty at some point, and now they aren't. Or are they still dirty? Who knows... **upd** Now about data quality. Imagine you read a dirty record with a query like: ``` SELECT * FROM dbo.MyTable WITH (NOLOCK) ``` and for example got a record with `id = 1` and `name = 'someValue'`. Then you want to update the name, set it to `'anotherValue'` - so you run the following query: ``` UPDATE dbo.MyTable SET Name = 'anotherValue' WHERE id = 1 ``` So if this record exists you'll get the actual value there; if it was deleted (even on a dirty read - deleted and not committed yet) - nothing terrible happened, the query won't affect any rows. Is it a problem? Of course not. Because in the time between your read and update things could change a zillion times. Just check `@@ROWCOUNT` to make sure the query did what it had to, and warn the user about the results. Anyway, it depends on the situation and the importance of the data. If data *MUST* be actual - don't use dirty reads.
How to Select UNCOMMITTED rows only in SQL Server?
[ "", "sql", "sql-server", "t-sql", "isolation-level", "transaction-isolation", "" ]
I am stuck with a query where one table has many records of the same id and the columns have different values like this: ``` ID Name Location Daysdue date 001 MINE NBI 120 13-FEB-2013 001 TEST MSA 111 14-FEB-2013 002 MINE NBI 13 13-FEB-2013 002 MINE MSA 104 15-FEB-2013 ``` I want to return the one record with the highest days due, so I have written a query: ``` select id,max(daysdue),name,location,date group by id,name,location,date; ``` This query is not returning one record but several for each id, because I have grouped by every column and the column values differ. What is the best way to select the row with the largest value of Daysdue based on id, irrespective of the values in the other columns? For example, I want to return this: ``` 001 MINE NBI 120 13-FEB-2013 002 MINE MSA 104 15-FEB-2013 ```
It appears that you want something like ``` SELECT * FROM (SELECT t.*, rank() over (partition by id order by daysDue desc) rnk FROM table_name t) WHERE rnk = 1 ``` Depending on how you want to handle ties (two rows with the same `id` and the same `daysDue`), you may want the `dense_rank` or `row_number` function rather than `rank`.
First you need to find the `MAX(daysdue)` for each `ID`: ``` SELECT ID, MAX(daysdue) AS max_daysdue FROM table GROUP BY ID; ``` Then you can join your original table to this. ``` SELECT t.* FROM table t JOIN ( SELECT ID, MAX(daysdue) AS max_daysdue FROM table GROUP BY ID ) m ON t.ID = m.ID AND t.daysdue = m.max_daysdue; ``` Note that you might have duplicate id's in case of a tie -- makes sense semantically I guess.
Select the record with the greatest value in Oracle
[ "", "sql", "oracle", "" ]
1. How do I reuse the value returned by `pair` called in the function below? ``` CREATE FUNCTION messages_add(bigint, bigint, text) RETURNS void AS $$ INSERT INTO chats SELECT pair($1, $2), $1, $2 WHERE NOT EXISTS (SELECT 1 FROM chats WHERE id = pair($1, $2)); INSERT INTO messages VALUES (pair($1, $2), $1, $3); $$ LANGUAGE SQL; ``` I know that the [SQL query language doesn't support storing simple values in variables](https://stackoverflow.com/questions/36959/how-do-you-use-script-variables-in-postgresql) as does a typical programming language. So, I looked at [`WITH` Queries (Common Table Expressions)](http://www.postgresql.org/docs/9.1/static/queries-with.html), but I'm not sure if I should use `WITH`, and anyway, I couldn't figure out the correct syntax for what I'm doing. 2. Here's my [SQLFiddle](http://sqlfiddle.com/#!15/d6454/3) and my original question about [storing chats & messages in PostgreSQL](https://stackoverflow.com/questions/22367688/messages-query-a-users-chats). This function [inserts-if-not-exists](https://stackoverflow.com/a/18654976/242933) then inserts. I'm not using a transaction because I want to keep things fast, and storing a chat without messages is not so bad but worse the other way around. So, query order matters. If there's a better way to do things, please advise. 3. I want to reuse the value mainly to speed up the code. But, does the SQL interpreter automatically optimize the function above anyway? Still, I want to write good, [DRY](http://en.wikipedia.org/wiki/Don%27t_repeat_yourself) code.
Since the function body is procedural, use the [`plpgsql`](http://www.postgresql.org/docs/current/static/plpgsql.html) language as opposed to SQL: ``` CREATE FUNCTION messages_add(bigint, bigint, text) RETURNS void AS $$ BEGIN INSERT INTO chats SELECT pair($1, $2), $1, $2 WHERE NOT EXISTS (SELECT 1 FROM chats WHERE id = pair($1, $2)); INSERT INTO messages VALUES (pair($1, $2), $1, $3); END $$ LANGUAGE plpgsql; ``` Also, if the result to reuse is `pair($1,$2)` you may store it into a variable: ``` CREATE FUNCTION messages_add(bigint, bigint, text) RETURNS void AS $$ DECLARE pair bigint := pair($1, $2); BEGIN INSERT INTO chats SELECT pair, $1, $2 WHERE NOT EXISTS (SELECT 1 FROM chats WHERE id = pair); INSERT INTO messages VALUES (pair, $1, $3); END $$ LANGUAGE plpgsql; ```
``` create function messages_add(bigint, bigint, text) returns void as $$ with p as ( select pair($1, $2) as p ), i as ( insert into chats select (select p from p), $1, $2 where not exists ( select 1 from chats where id = (select p from p) ) ) insert into messages select (select p from p), $1, $3 where exists ( select 1 from chats where id = (select p from p) ) ; $$ language sql; ``` It will only insert into messages if it exists in chats.
SQL: Reuse value returned by function?
[ "", "sql", "postgresql", "variables", "sql-insert", "sql-function", "" ]
I'm having trouble using the `filter by form` in a SQL query via VBA in Access 2013. I didn't create the Access forms but was commissioned to correct an issue. Also the client told me that it worked in the previous Office versions and that the Access-Database hasn't been changed in the past few years. So it seems, that Access 2013 does something differently. But I couldn't figure out what. As you may understand below, the **red highlighted button** should un-/check all the **yellow highlighted checkboxes**. This works perfectly fine until I add a filter by form (**red box** at the bottom of image). Ironically I encountered this issue only by filtering the **red underlined field** `Programm_IDFS`. Filtering by other fields works fine. ![Form whose filter acts strange](https://i.stack.imgur.com/n1fku.png) This query should uncheck the checkboxes but it fails because the value of `strFilter` is: > ((Lookup\_Programm\_\_IDFS.Name="ad1incl")) This may work for filtering, but it doesn't work as SQL restriction. ``` UPDATE dbo_tbl_ThisForm SET dbo_tbl_ThisForm.Checkbox = 0, dbo_tbl_ThisForm.Statusoffen = '0' WHERE dbo_tbl_ThisForm.Testfall_ID NOT IN (SELECT dbo_tbl_Restrictions1.Testfall_ID FROM dbo_tbl_Restrictions1 WHERE dbo_tbl_Restrictions1.Auftrags_ID = " & gsVariable5 & ") AND dbo_tbl_ThisForm.Testfall_ID NOT IN (SELECT dbo_tbl_Restrictions1.Testfall_ID FROM dbo_tbl_Restrictions1 WHERE dbo_tbl_Restrictions1.Auftrags_ID IN (SELECT Auftrag_ID FROM dbo_tbl_Restrictions2 WHERE Auftragstyp = " & Me.kfAuftragstyp & ")) AND " & strFilter & " ``` This query should check all the checkboxes. It works because I hard-coded all the values (actually this is only necessary for `strFilter`). 
``` UPDATE dbo_tbl_ThisForm SET dbo_tbl_ThisForm.Checkbox = -1, dbo_tbl_ThisForm.Statusoffen = '-1' WHERE dbo_tbl_ThisForm.Testfall_ID NOT IN (SELECT dbo_tbl_Restrictions1.Testfall_ID FROM dbo_tbl_Restrictions1 WHERE dbo_tbl_Restrictions1.Auftrags_ID = 544) AND dbo_tbl_ThisForm.Testfall_ID NOT IN (SELECT dbo_tbl_Restrictions1.Testfall_ID FROM dbo_tbl_Restrictions1 WHERE dbo_tbl_Restrictions1.Auftrags_ID IN (SELECT Auftrag_ID FROM dbo_tbl_Restrictions2 WHERE Auftragstyp = 9)) AND dbo_tbl_ThisForm.Programm_IDFS = 35 ``` If you need more information feel free to ask. Any help/suggestion is appreciated. Thanks in advance. **EDIT:** When running the query with `strFilter=((Lookup_Programm__IDFS.Name="ad1incl"))` I get the following error: > "Run-time error '3061'. Too few parameters. Expected 1." I just figured out that the un-/checking also doesn't work for the field `Funktion`. This and the `Programm_IDFS` field are both foreign keys of data type **int** in the table `dbo_tbl_ThisForm`. When filtering by the field `Fachbereich` it works to un-/check the checkboxes, since that field is of data type **varchar**, so `strFilter` is set to a valid value: > ((dbo\_tbl\_ThisForm.Fachbereich="Steuern")) Those foreign keys both link to separate tables. Now how can I solve this? Do I need to include those tables in my query? Can I change something on the form? Thank you
Since I couldn't get rid of the preceding `Lookup_` prefix, I decided to deal with it in VBA. I used the following VBA code to change the part with **Lookup\_[field]** to a corresponding SQL query: ``` If InStr(strFilter, "Lookup_") <> 0 Then If InStr(strFilter, "Programm__IDFS") <> 0 Then strFilter = Replace(strFilter, "(Lookup_Programm__IDFS.Name", "dbo_tbl_ThisForm.Programm_IDFS = (SELECT ID_Programm FROM dbo_Programm WHERE Name") ElseIf InStr(strFilter, "Funktion") <> 0 Then strFilter = Replace(strFilter, "(Lookup_Funktion.Beschreibung", "dbo_tbl_ThisForm.TF_Funktion_IDFS = (SELECT TF_Funktion_ID FROM dbo_TF_Funktionen WHERE Beschreibung") End If End If ``` So for example, if I use a filter on the **Programm\_IDFS** field it will change ``` ((Lookup_Programm__IDFS.Name="ad1incl")) ``` to ``` (dbo_tbl_ThisForm.Programm_IDFS = (SELECT ID_Programm FROM dbo_Programm WHERE Name="ad1incl")) ``` This way it can update all the checkboxes. I didn't find a way to remove the `Lookup_` prefix, nor did I figure out why it only affects certain combo boxes. The main thing is that it works now. :)
Recently I ran into a similar problem with an application that was upgraded to Access 2013 after running fine under Access 2003. I noticed that the form's combo box controls shared the exact same names as their source fields. Suspecting that name ambiguity might be confusing Access when it generates the filter, I renamed the controls (gave them a "cbo" prefix). That seemed to fix the problem for most cases. Some users still see it happen at times, but then I haven't removed all the ambiguous names yet: I fixed only the ones that are being used for filtering. I plan to do the others in the next release. It shouldn't hurt to give the controls different names from their data source fields, and I've always found it makes the applications easier to understand.
Use the filter of a form in a SQL query via VBA
[ "", "sql", "ms-access", "filter", "vba", "" ]
I can't find a solution for one of my SQL statements and I want to ask for your help ;) Let's imagine that we have a table of athletes and they have to run 100 meters or whatever distance. After each run, a row with Run, Name and Time is inserted into the table: ``` Run Name Time[s] 1 Joe 10 2 Joe 11 1 Jim 14 2 Jim 12 ``` What I want to do is to select Run, Name and the fastest time, so in this case it would be min(Time). But when I use the Min aggregate function I have to group by all other parameters, so it will return all records... I have tried this, but it doesn't work: ``` SELECT MIN(Time) AS MinTime, Name AS TimeName From Athletes group by TimeName LEFT JOIN (Select Run, Name, Time From Athletes a) On a.Name = TimeName, a.Time = MinTime; ``` Yes, it has a syntax error, but that's not the main problem; as I understand it, it still won't work. Thanks in advance, SQL newbie
Your SQL query is all over the place. You have an unnecessary join, your `group by` is in the wrong place, and your `left join` syntax is also incorrect. I scrapped it and rewrote it to use `min()` properly. ``` SELECT run, MIN(Time) AS MinTime, Name AS Timename From Athletes group by TimeName ``` There is no need to run a `join` on the same table, and if you know which columns you need to `group by`, there is also no need for a where clause. Tested and working on sqlfiddle. <http://sqlfiddle.com/#!2/13b1a/2> I suggest you read up on what `group by` does and how to `join` tables properly. As your table grows, if you aren't using any kind of id or key, you can use a more advanced query that does actually involve a subquery: ``` select a.run, a1.name, a1.Time from athletes as a ,(select name, min(time) as Time from athletes group by name) as a1 where a.time = a1.Time and a.name = a1.name ``` There still isn't a reason to use a `join`, and the subquery route is the appropriate one to take with your current data set and schema. <http://sqlfiddle.com/#!2/23cb4/23>
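The fastest-time-per-athlete problem can also be sketched end to end with SQLite through Python's `sqlite3`; this sketch uses the correlated-subquery form, which keeps the matching `run` value without relying on MySQL's lenient `GROUP BY` handling:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE athletes (run INTEGER, name TEXT, time INTEGER);
    INSERT INTO athletes VALUES (1,'Joe',10), (2,'Joe',11), (1,'Jim',14), (2,'Jim',12);
""")

# Keep a row only if its time equals the minimum time for that athlete.
rows = conn.execute("""
    SELECT run, name, time
    FROM athletes a
    WHERE time = (SELECT MIN(time) FROM athletes s WHERE s.name = a.name)
    ORDER BY name
""").fetchall()
```

Each athlete's fastest row survives, together with the run it came from.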
``` select Run , Name, Time from Athletes a where Time=(select min(time) from Athletes s where s.Name=a.Name) ```
SQL multi select with join
[ "", "sql", "select", "join", "left-join", "" ]
I have a SQL query like this: ``` SELECT temp.inst_id, ' ', gv.instance_name, gv.host_name, CASE WHEN component IS NULL THEN 'others' ELSE component END component, ROUND (SUM (used_mb), 1) used_mb FROM (SELECT inst_id, CASE WHEN name = 'buffer_cache' THEN 'db_buffer_cache' WHEN name = 'log_buffer' THEN 'log_buffer' ELSE pool END component, CASE WHEN name = 'buffer_cache' THEN ( ( ( bytes - (SELECT COUNT (*) FROM gv$bh WHERE inst_id = x.inst_id AND status = 'free') * (SELECT t.VALUE FROM gv$parameter t WHERE t.inst_id = x.inst_id AND t.name = 'db_block_size')) / 1024) / 1024) WHEN name != 'free memory' THEN ( (bytes / 1024) / 1024) END used_mb FROM gv$sgastat x) temp, gv$instance gv WHERE gv.inst_id = temp.inst_id GROUP BY component, temp.inst_id, gv.instance_name, gv.host_name ``` Here is the output of this query: ``` 1 irisdb1 db_buffer_cache 27,6 27,5 1 irisdb1 java pool 0,2 1 irisdb1 large pool 0,2 0,1 1 irisdb1 log_buffer 0,2 0,2 1 irisdb1 shared pool 3,7 3 1 irisdb1 others 0 0 2 irisdb2 db_buffer_cache 27,6 27,4 2 irisdb2 java pool 0,1 2 irisdb2 large pool 0,4 0,2 2 irisdb2 log_buffer 0,2 0,2 2 irisdb2 shared pool 4 3,4 2 irisdb2 others 0 0 ``` As you can see in the output, some rows (the 2nd and 8th lines) have an empty value in the last column. What I want is to assign `0` (a double value) to those empty values.
If the value is `NULL`, then you can use `coalesce()` (or several other functions) to put in a value: ``` SELECT temp.inst_id, ' ', gv.instance_name, gv.host_name, CASE WHEN component IS NULL THEN 'others' ELSE component END component, COALESCE( ROUND (SUM (used_mb), 1), 0.0) used_mb ```
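A quick runnable check of the `COALESCE(..., 0.0)` idea, using SQLite via Python's `sqlite3` (the tiny table stands in for the `gv$sgastat` data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mem (component TEXT, used_mb REAL)")
conn.executemany("INSERT INTO mem VALUES (?, ?)",
                 [("db_buffer_cache", 27.6), ("java pool", None)])

# SUM() over only-NULL input yields NULL; COALESCE substitutes 0.0 for it.
rows = conn.execute("""
    SELECT component, COALESCE(ROUND(SUM(used_mb), 1), 0.0) AS used_mb
    FROM mem
    GROUP BY component
    ORDER BY component
""").fetchall()
```

The group whose values were all `NULL` comes back as `0.0` instead of empty.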
You just need to give it a default value if it is `NULL`: ``` COALESCE(ROUND (SUM (used_mb), 1),0.0) used_mb ```
Assigning a specific value in sql query
[ "", "sql", "oracle", "" ]
I now have my table results ordered by points. But it needs to be ordered by points, and if the points are the same, then by goal difference. What I have now is this: ``` ... ORDER BY Points DESC"; ``` But I want to have something like this: ``` ... ORDER BY Points (if two have the same amount) then ORDER BY Goal Difference ``` Hope you can help
Thanks! The correct answer is: ``` ORDER BY Points DESC, Goal Difference DESC ``` This works for me
It would help if you included the full query in your question. The `order by` clause can accept multiple arguments. As you phrase the question, the answer would be something like: ``` order by Points desc, GoalDifference ``` You might want an expression: ``` order by Points, t1.Goals - t2.Goals ``` These are also allowed in the `order by`.
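A runnable check of the multi-key `ORDER BY`, using SQLite via Python's `sqlite3` (the standings table is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE standings (team TEXT, points INTEGER, goal_diff INTEGER)")
conn.executemany("INSERT INTO standings VALUES (?, ?, ?)",
                 [("A", 10, 5), ("B", 12, -1), ("C", 10, 8)])

# The second ORDER BY key only matters where the first one ties (A vs C).
rows = conn.execute(
    "SELECT team FROM standings ORDER BY points DESC, goal_diff DESC"
).fetchall()
teams = [r[0] for r in rows]
```

B leads on points; the A/C tie is broken by goal difference.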
Order by Points but if the same order by in MySQL
[ "", "mysql", "sql", "sql-order-by", "" ]
I am learning SQL, and I am trying to solve a problem in SQL Server -- what we use at school. I am however playing around with Postgres because it's the only DB I can easily install at home. I know this may cause difficulties, but I imagine my query is quite easy. Having said that, in Postgres I create the following sample table: ``` WITH Test(id1, id2, rank, value) AS ( VALUES (57462, 7800, 2, 0.789), (66353, 15384, 1, 0.123), (66353, 44861, 2, 0.456) ) select * from Test ``` I'd like the SQL Server query that lets me pick out the 1st and 2nd rows -- I want them because for every id1, I want the row with the minimal rank. I've played around with subqueries and/or aggregators (which my friend says are available in Postgres but not SQL Server), but am still generally stuck. Help would be appreciated!
"Partitioning" or "windowing" methods are available in MSSQL. Great tutorial over at [simple talk](https://www.simple-talk.com/sql/learn-sql-server/window-functions-in-sql-server/) Windowing methods, like `OVER`, would create data like your example. [`RANK`](http://technet.microsoft.com/en-us/library/ms176102.aspx) is one of the methods that you would use to build your rank. Now, as for querying out of your current result set, you might just do: ``` select id1, min(rank) from test group by id1 ``` With a bit more information about your original data, I think creating a windowing method would be very simple and would accomplish your objective with one query instead of two. As a side note, having worked with both PostgreSQL and MSSQL for years, I think it's safe to say that you will not find anything in Postgres you cannot do in MSSQL. It might just take thinking about your problem in a different way. EDIT: I notice you're using the `WITH` syntax to create your postgres table. MSSQL also has [Common Table Expressions](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) that work very nicely w/ windowing methods. If you want to limit to just those rows as you note in the comments, you might want something that identifies each row as a unique record, like its own id field. ``` WITH Test(id, id1, id2, rank, value) AS ( VALUES (1, 57462, 7800, 2, 0.789), (2, 66353, 15384, 1, 0.123), (3, 66353, 44861, 2, 0.456) ) ``` For that matter, you could join on the items directly and forget adding an id column: ``` select t.* from Test t where t.rank = (select min(tt.rank) from Test tt where tt.id1 = t.id1) ```
As you already have a rank column, the easiest should be to do the following: ``` Select id1 , Min(rank) as Min_Rank From Test Group by id1 ```
SQL Server aggregator
[ "", "sql", "sql-server", "postgresql", "t-sql", "" ]
I have the following query, which is not working. ``` select * from table where id in ( 1,2, (select id from another_table) ) ``` How can I rewrite it?
How about ``` select * from table where id in (1,2) or id in (select id from another_table) ``` Take care and use parentheses when adding additional `WHERE`-conditions using `and`!!!
``` select * from table where id in (1,2) OR id in( select id from another_table ) ```
Oracle SQL Query IN
[ "", "sql", "oracle", "" ]
I want to execute this statement ``` CREATE TABLE Tab2 AS SELECT 1, "abc", 123456789.12 UNION SELECT 2, "def", 25090003; ``` on an SQL database. However, the column names of the resulting table are 1, "abc", and 123456789.12. Is there a way to give the columns explicit names, e.g. col1, col2, and col3? I know I could make it two statements, first a CREATE TABLE with explicit column names and then an INSERT INTO ... SELECT, but I wonder if there's a way to make it a single statement.
Simply give the selected columns alias names as desired: ``` CREATE TABLE Tab2 AS SELECT 1 AS COLUMN1, "abc" AS COLUMN2, 123456789.12 AS COLUMN3 UNION SELECT 2, "def", 25090003; ```
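A runnable check that the aliases really become the column names, using SQLite via Python's `sqlite3` (the string literals use single quotes here, since SQLite treats double-quoted tokens as identifiers by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Aliases in the first SELECT become the column names of the created table.
conn.execute("""
    CREATE TABLE tab2 AS
    SELECT 1 AS col1, 'abc' AS col2, 123456789.12 AS col3
    UNION SELECT 2, 'def', 25090003
""")
cols = [row[1] for row in conn.execute("PRAGMA table_info(tab2)")]
n_rows = conn.execute("SELECT COUNT(*) FROM tab2").fetchone()[0]
```

Only the first branch of the `UNION` needs aliases; the second branch's values land in the same columns.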
You can use the `as` clause for columns: ``` CREATE TABLE Tab2 AS SELECT 1 as col1, "abc" as col2, ... ```
CREATE TABLE ... AS SELECT with discrete values with explicit column names
[ "", "sql", "" ]
I have a table userinfo as shown below. There may be 1000 records. I want to get each record from userinfo and insert it into tblRating. For RatingQue1, RatingQue2 and RatingQue3 I want to insert '4'. I want the final result of tblRating to be as shown below. userinfo ``` Id empname managername 1 E1 M1 2 E2 M1 . . . . . . 100 E100 M1 ``` tblRating ``` Id ratingby ratedto RatingQue1 RatingQue2 RatingQue3 Avg 1 M1 E1 4 4 4 4 2 M1 E2 4 4 4 4 . . . . . . . . . . . . . . 100 M1 E100 4 4 4 4 ```
This worked for me ``` DECLARE c CURSOR READ_ONLY FAST_FORWARD FOR SELECT Ename FROM Tegra DECLARE @id nvarchar(50) OPEN c FETCH NEXT FROM c INTO @id WHILE (@@FETCH_STATUS = 0) BEGIN INSERT INTO Rating2 SELECT 'Manager','E1', @id, 4, 4, 4, 4, 4, 4 FETCH NEXT FROM c INTO @id END CLOSE c DEALLOCATE c ```
Never use cursors in your query; they will slow down your query's performance. Use the following instead; it gives the output you want. Create the two tables (rating, userinfo) before executing this query: ``` Insert into rating select *, 4, 4, 4, 4 from userinfo ```
Get data from one table and for each record from this table insert data into another table
[ "", "mysql", "asp.net", "sql", "sql-server-2008", "" ]
I have a dataset of about 32 million rows that I'm trying to export to provide some data for an analytics project. Since my final data query will be large, I'm trying to limit the number of rows I have to work with initially. I'm doing this by running a CREATE TABLE on the main table (32 million records) with a join on another table that's about 5k records. I made indexes on the columns where the JOIN is taking place, but not on the other WHERE conditions. This query has been running for over 4 hours now. What could I have done to speed this up, and if there is something, would it be worth it to stop this query, do it, and start over? The data set is static and I'm not worried about preserving anything or proper database design long-term. I just need to get the data out and will discard the schema. A simplified version of the query is below: ``` CREATE TABLE RELEVANT_ALERTS SELECT a.time, s.name,s.class, ... FROM alerts a, sig s WHERE a.IP <> 0 AND a.IP not between x and y AND s.class in ('c1','c2','c3') ```
**Try explain select to see what is going on first of all. Are your indexes properly set up?** **Also, you are not joining the two tables on their primary keys; is that on purpose?** Where are your primary key and foreign key? **Can you also provide us with a table schema?** Also, could your hardware be the problem? How much RAM and processing power does it have? I hope you are not running this on a single-core processor, as that is bound to take a long time. I have a table with 2,000,000,000 (2 billion) rows (219 GB) and it doesn't take more than 0.3 seconds to execute a query similar to yours with properly set up indexes. This is on an 8-core (2 GHz) processor with 64 GB RAM. So not the beefiest setup for the size of the database, but the indexes are held in memory, so the queries can be fast. It should not take that long. Can you please make sure you have indexes on a.IP and s.class? Also, can't you put the a.IP <> 0 comparison after a.IP not between x and y, so you already have a filtered set for the 0 comparison (as that will compare every single record, I believe)? Depending on how many rows the s table has, you can also move s.class to be the first comparison to really speed things up.
Your join is a full cross-join, it seems. That will take really, really long in any case. Is there no common field in both tables? Why do you need this join? If you really want to do this, you should first create two tables from `alerts` and `sig` that fulfill your `WHERE` conditions and then join the resulting tables if you must.
MySQL: SELECT millions of rows
[ "", "mysql", "sql", "" ]
I need to build a MySQL 5.6.15 query that will allow me to do a SELECT statement that returns a column of answers and then use that column to build the next SELECT statement. I'm new to MySQL and have tried a bunch of ideas, but so far have failed. I'm sure that the answer must be simple, but I've given up on figuring it out myself. Any help you could offer would be greatly appreciated. Here's the simplest set of data that I could use to demonstrate my dilemma: ``` order_date quantity '2014-01-01 00:00:00', '27' '2014-01-02 00:00:00', '13' '2014-01-03 00:00:00', '42' '2014-01-04 00:00:00', '30' '2014-01-05 00:00:00', '5' '2014-01-06 00:00:00', '50' '2014-01-07 00:00:00', '2' '2014-01-08 00:00:00', '10' '2014-01-09 00:00:00', '40' '2014-01-10 00:00:00', '4' '2014-01-11 00:00:00', '32' '2014-01-13 00:00:00', '17' '2014-01-15 00:00:00', '54' '2014-01-17 00:00:00', '14' '2014-01-20 00:00:00', '37' '2014-01-22 00:00:00', '12' ``` My first query would be something like this: ``` SELECT order_date, quantity FROM orders WHERE quantity >35 ``` So, I'd get this back: ``` order_date quantity '2014-01-03 00:00:00', '42' '2014-01-06 00:00:00', '50' '2014-01-09 00:00:00', '40' '2014-01-15 00:00:00', '54' '2014-01-20 00:00:00', '37' ``` Now, the next query I'd want would be to select the `order_date` and `quantity` for the NEXT ENTRY of each of the above from the `order_date` column. **NOTE:** The dates aren't always going to be consecutive, so an INTERVAL +1 won't always be relevant. So, the second query would return the following rows: ``` order_date quantity '2014-01-04 00:00:00', '30' '2014-01-07 00:00:00', '2' '2014-01-10 00:00:00', '4' '2014-01-17 00:00:00', '14' '2014-01-22 00:00:00', '12' ``` Ideally, I could put them all together with the first query, sorted by date. I know it's gotta be some kind of `join` or `IN` statement but so far this has eluded me. 
So, the ultimate solution would return this: ``` order_date quantity '2014-01-03 00:00:00', '42' '2014-01-04 00:00:00', '30' '2014-01-06 00:00:00', '50' '2014-01-07 00:00:00', '2' '2014-01-09 00:00:00', '40' '2014-01-10 00:00:00', '4' '2014-01-15 00:00:00', '54' '2014-01-17 00:00:00', '14' '2014-01-20 00:00:00', '37' '2014-01-22 00:00:00', '12' ```
This should do the trick. It's not the best way to build a query, as it can run slowly with a really large table, but it does what you want... see fiddle <http://sqlfiddle.com/#!2/b7a53/10> ``` SELECT order_date as o_date, quantity, (SELECT order_date FROM orders WHERE order_date > o_date ORDER BY order_date LIMIT 1) as next_date, (SELECT quantity FROM orders WHERE order_date > o_date ORDER BY order_date LIMIT 1) as next_quantity FROM orders where quantity > 35 ``` (The `ORDER BY` inside the subqueries makes `LIMIT 1` deterministically pick the next row by date.)
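A runnable version of this next-row lookup, using SQLite via Python's `sqlite3` with a trimmed-down copy of the question's data (alias style differs slightly so the outer row is explicitly named):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, quantity INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("2014-01-03", 42), ("2014-01-04", 30), ("2014-01-05", 5),
    ("2014-01-06", 50), ("2014-01-07", 2),
])

# For each over-threshold row, the correlated subqueries fetch the next
# row by date; ORDER BY makes LIMIT 1 deterministic.
rows = conn.execute("""
    SELECT o.order_date, o.quantity,
           (SELECT i.order_date FROM orders i
            WHERE i.order_date > o.order_date
            ORDER BY i.order_date LIMIT 1) AS next_date,
           (SELECT i.quantity FROM orders i
            WHERE i.order_date > o.order_date
            ORDER BY i.order_date LIMIT 1) AS next_quantity
    FROM orders o
    WHERE o.quantity > 35
    ORDER BY o.order_date
""").fetchall()
```

Each qualifying row comes back paired with the row that follows it by date, even across gaps.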
``` WITH OrderedOrders AS ( SELECT order_date, quantity, ROW_NUMBER() OVER (ORDER BY order_date) AS RowNumber FROM Orders ), FilteredOrders AS ( SELECT * FROM OrderedOrders WHERE quantity>35 ) SELECT OrderedOrders.order_date, OrderedOrders.quantity FROM OrderedOrders INNER JOIN FilteredOrders ON OrderedOrders.RowNumber=FilteredOrders.RowNumber OR OrderedOrders.RowNumber=FilteredOrders.RowNumber+1 ```
How do I SELECT a column of dates and use that to drive a second SELECT statement?
[ "", "mysql", "sql", "" ]
I have two columns in SQL Server in two different tables. One column has `9.011`, and the other table's column has `9011`. I need to remove the `.` and compare these two columns to see whether they are equal. Can anybody help me out with how to do this? Thanks in advance.
Try this: ``` SELECT CASE WHEN REPLACE (Table1.ColName1,'.','') = Table2.ColName2 THEN 'Equal' ELSE 'Not Equal' END AS IsEqual FROM Table1 INNER JOIN Table2 ON Table1.PrimaryKey = Table2.ForeignKey ``` This query will return `Equal` if they are equal and `Not Equal` if they are not. `REPLACE()` will remove `.` from `ColName1`.
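A runnable sketch of the `REPLACE`-then-compare idea, using SQLite via Python's `sqlite3` (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, val TEXT);
    CREATE TABLE t2 (id INTEGER, val TEXT);
    INSERT INTO t1 VALUES (1, '9.011'), (2, '7.5');
    INSERT INTO t2 VALUES (1, '9011'),  (2, '99');
""")

# Strip the dot from t1.val before comparing it with t2.val.
rows = conn.execute("""
    SELECT t1.id,
           CASE WHEN REPLACE(t1.val, '.', '') = t2.val
                THEN 'Equal' ELSE 'Not Equal' END AS is_equal
    FROM t1 JOIN t2 ON t1.id = t2.id
    ORDER BY t1.id
""").fetchall()
```

`9.011` matches `9011` once the dot is removed; `7.5` against `99` does not.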
I am assuming that your columns are decimal type, so I have converted them to varchar first ``` where replace(Convert(varchar(50), column1 ),'.','') = Convert(varchar(50), column2) ```
How to compare two columns in SQL server
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have a table which looks as follows (simplified): **name | status** app-1 | start app-1 | run app-1 | run app-1 | run app-1 | finish app-2 | start app-2 | run app-2 | run Now, I would like to filter all apps that have "start" as a status AND no "finish". For the example above, the result is supposed to be "app-2". I have no clue how to do the comparison while additionally using a condition... it's really giving me a hard time. I hope someone can help me with it!
``` select name from _table t1 where t1.status = 'start' and not exists ( select name from _table t2 where t1.name = t2.name and t2.status = 'finish' ) ```
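The `NOT EXISTS` approach can be checked end to end with SQLite via Python's `sqlite3`, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_status (name TEXT, status TEXT)")
conn.executemany("INSERT INTO app_status VALUES (?, ?)", [
    ("app-1", "start"), ("app-1", "run"), ("app-1", "run"),
    ("app-1", "run"), ("app-1", "finish"),
    ("app-2", "start"), ("app-2", "run"), ("app-2", "run"),
])

# Apps that have a 'start' row but no 'finish' row.
names = [r[0] for r in conn.execute("""
    SELECT name FROM app_status t1
    WHERE t1.status = 'start'
      AND NOT EXISTS (SELECT 1 FROM app_status t2
                      WHERE t1.name = t2.name AND t2.status = 'finish')
""")]
```

Only the app that started but never finished survives the filter.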
Try something like this: ``` SELECT * FROM table_name WHERE status = 'run' and name NOT IN (SELECT name FROM table_name WHERE status = 'finish') ```
SQL Query: comparison with condition
[ "", "sql", "filter", "comparison", "conditional-statements", "" ]
Assume I have the following table: ``` +------------+-------------+ | Product_id | customer_id | +------------+-------------+ | a | c1 | | a | c2 | | a | c3 | | a | c4 | | b | c1 | | c | c1 | | b | c2 | | d | c2 | +------------+-------------+ ``` I want to find the number of purchases of the (a, b, c) product set per customer and the number of purchases of the (a, b, d) product set per customer. I tried to use `COUNT` with `GROUP BY`, but I only managed to find the number of purchases of each customer [FIDDLE](http://sqlfiddle.com/#!4/e26c5/26/0). Do I need to use `CASE WHEN` or `DECODE`? How can I achieve that? The expected output is something like: ``` +-------------+-------------+-------------+ | CUSTOMER_ID | ABC_PRODUCT | ABD_PRODUCT | +-------------+-------------+-------------+ | c1 | 1 | 0 | | c2 | 0 | 1 | | c3 | 0 | 0 | | c4 | 0 | 0 | +-------------+-------------+-------------+ ```
You can do this with a single aggregation and no subqueries. The key is using a nested case statement with aggregation to count each product for each customer. The following determines whether a customer has each "bundle": ``` SELECT CUSTOMER_ID, (case when max(case when product_id = 'a' then 1 else 0 end) + max(case when product_id = 'b' then 1 else 0 end) + max(case when product_id = 'c' then 1 else 0 end) = 3 then 1 else 0 end) as ABC, (case when max(case when product_id = 'a' then 1 else 0 end) + max(case when product_id = 'b' then 1 else 0 end) + max(case when product_id = 'd' then 1 else 0 end) = 3 then 1 else 0 end) as ABD FROM CUSTOMERS_SALES GROUP BY CUSTOMER_ID; ``` Now, your question is actually about the number of such purchases. So, I suppose a customer could purchase each item twice, and you would want them counted twice. If so, then the number is the *least* value of any counts. You can get this as well: ``` SELECT CUSTOMER_ID, least(sum(case when product_id = 'a' then 1 else 0 end), sum(case when product_id = 'b' then 1 else 0 end), sum(case when product_id = 'c' then 1 else 0 end) ) as ABC, least(sum(case when product_id = 'a' then 1 else 0 end), sum(case when product_id = 'b' then 1 else 0 end), sum(case when product_id = 'd' then 1 else 0 end) ) as ABD FROM CUSTOMERS_SALES GROUP BY CUSTOMER_ID; ```
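The first query (the nested CASE-with-MAX bundle check) is portable enough to verify with SQLite via Python's `sqlite3` and the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers_sales (product_id TEXT, customer_id TEXT)")
conn.executemany("INSERT INTO customers_sales VALUES (?, ?)", [
    ("a", "c1"), ("a", "c2"), ("a", "c3"), ("a", "c4"),
    ("b", "c1"), ("c", "c1"), ("b", "c2"), ("d", "c2"),
])

# MAX over a per-product CASE flags whether the customer ever bought that
# product; the three flags summing to 3 means the full bundle was bought.
rows = conn.execute("""
    SELECT customer_id,
           CASE WHEN MAX(CASE WHEN product_id = 'a' THEN 1 ELSE 0 END)
                   + MAX(CASE WHEN product_id = 'b' THEN 1 ELSE 0 END)
                   + MAX(CASE WHEN product_id = 'c' THEN 1 ELSE 0 END) = 3
                THEN 1 ELSE 0 END AS abc,
           CASE WHEN MAX(CASE WHEN product_id = 'a' THEN 1 ELSE 0 END)
                   + MAX(CASE WHEN product_id = 'b' THEN 1 ELSE 0 END)
                   + MAX(CASE WHEN product_id = 'd' THEN 1 ELSE 0 END) = 3
                THEN 1 ELSE 0 END AS abd
    FROM customers_sales
    GROUP BY customer_id
    ORDER BY customer_id
""").fetchall()
```

The output matches the expected table from the question: one row per customer with 0/1 bundle flags.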
Please try the query below to find customers having the products `a, b and c`: ``` SELECT CUSTOMER_ID FROM CUSTOMERS_SALES WHERE PRODUCT_ID IN ('a', 'b', 'c') GROUP BY CUSTOMER_ID HAVING COUNT(DISTINCT PRODUCT_ID)=3 ``` **To get the count, try**: ``` SELECT COUNT(*) FROM( SELECT CUSTOMER_ID FROM CUSTOMERS_SALES WHERE PRODUCT_ID IN ('a', 'b', 'd') GROUP BY CUSTOMER_ID HAVING COUNT(DISTINCT PRODUCT_ID)=3 )x ```
Aggregation based on another column values
[ "", "sql", "oracle", "" ]
Is there a difference between these two queries? Like performance issues, etc? Query 1: ``` select i.invoice_id, i.total_price from ( select invoice_id, sum(price) as total_price from orders group by invoice_id ) as i inner join invoice ON i.invoice_id = invoice.invoice_id ``` Query 2: ``` select invoice.invoice_id, orders.total_price from invoice inner join ( select invoice_id, sum(price) as total_price from orders group by invoice_id ) orders ON orders.invoice_id = invoice.invoice_id ``` Thanks!
Let me rewrite your queries without any significant changes: Query 1: ``` SELECT i.invoice_id, i.total_price FROM invoice INNER JOIN ( SELECT invoice_id, sum(price) AS total_price FROM orders GROUP BY invoice_id ) AS i ON i.invoice_id = invoice.invoice_id; ``` Query 2: ``` SELECT invoice.invoice_id, i.total_price FROM invoice INNER JOIN ( SELECT invoice_id, sum(price) AS total_price FROM orders GROUP BY invoice_id ) AS i ON i.invoice_id = invoice.invoice_id; ``` Things I changed: * the order of the `JOIN` (which doesn't matter, since it is `INNER`) * the table alias (`orders` to `i`; I really don't understand why you wanted to name it differently) Now it is obvious that the only difference between them is the first argument in the main `SELECT`. Your question could have made sense (if there was an index on one column and not on the other, and, dependent on the query, you would not always have used both `orders.invoice_id` and `invoice.invoice_id`), but since you are already retrieving both columns for the `INNER JOIN`, it doesn't. Furthermore, these queries are redundant. As already mentioned by @valex, your query (actually both of them) could (and must) be simplified to this: ``` SELECT invoice_id, sum(price) AS total_price FROM orders GROUP BY invoice_id; ``` So, no, there is no difference in performance. And, surely, there is no difference in the result set. Also, I'd like you to know that you can always use [`EXPLAIN`](https://dev.mysql.com/doc/refman/5.0/en/explain.html) for performance questions.
Your first query ``` select i.invoice_id, i.total_price from ( select invoice_id, sum(price) as total_price from orders group by invoice_id ) as i inner join invoice ON i.invoice_id = invoice.invoice_id ``` is equivalent by its result to: ``` select invoice_id, sum(price) as total_price from orders group by invoice_id ```
MySQL nested select query performance
[ "", "mysql", "sql", "performance", "nested", "profiling", "" ]
Suppose there are two tables: a (department) table: (id, name) and an (employee) table: (id, dept_id, name). How do I show every department (id, name), and then all employees (id, name) in that department under its department? I'd like it as a SQL statement.
You need to use JOIN. I believe it's something like this: ``` SELECT department.id, department.name, employee.id, employee.name FROM department LEFT JOIN employee ON department.id=employee.dept_id ORDER BY department.id ```
Since all employees must be present under a particular department at any time, you can do an `inner join` on both tables using `dept_id`, like ``` SELECT dept.id, dept.name, emp.id, emp.name FROM department dept JOIN employee emp ON dept.id=emp.dept_id ```
display all departments and all employees
[ "", "sql", "" ]
I want to delete rows based on the combination of two columns. My table looks like this (simplified): ``` [ID], [Sub_ID], [Value] ``` Values could for example be: ``` 1234 - 1 - 100 1234 - 2 - 50 5678 - 1 - 90 4321 - 1 - 75 4321 - 2 - 75 ``` I want to delete all records except for some specific combinations of [ID] and [Sub\_ID]. Example: delete all combinations except for combinations 1234-2 and 4321-2. **EDIT:** The 2 values are an example; in reality I need to maintain well over 10,000 combinations of ID-Sub\_ID. To do this I combine the two ID columns with a cast and delete everything that does not match this combination. ``` Delete from table where CAST(ID as varchar(4))+'-'+Cast(Sub_ID as varchar(1)) not in ('1234-2', '4321-2') ``` This works, but it is slow and probably very inefficient. It already takes several minutes to execute this query, and I will expand the selection each month, probably making it even worse each time. Does anyone know how I can make this more efficient? Many thanks, Steven
You could use a `CTE` that selects all the records that should not be deleted; then you can join it with the original table: ``` WITH Keep AS ( SELECT ID=1234, Sub_ID=2 UNION ALL SELECT ID=4321, Sub_ID=2 ) SELECT t.* FROM Table1 t INNER JOIN Keep k ON t.ID = k.ID AND t.Sub_ID = k.Sub_ID ``` This shows what you'll keep: [**demo**](http://sqlfiddle.com/#!6/3b5fa/1/0) If you want to delete the others you can use `NOT EXISTS`: ``` WITH Keep AS ( SELECT ID=1234, Sub_ID=2 UNION ALL SELECT ID=4321, Sub_ID=2 ) DELETE t FROM Table1 t WHERE NOT EXISTS ( SELECT 1 FROM Keep k WHERE k.ID = t.ID AND k.Sub_ID = t.Sub_ID ) ``` `Demo` This approach should be efficient and readable.
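The keep-list idea can be sketched with SQLite via Python's `sqlite3`: a real `keep` table plus a `NOT EXISTS` delete, with an index on the pair so the anti-join stays cheap (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (id INTEGER, sub_id INTEGER, value INTEGER);
    INSERT INTO data VALUES (1234,1,100), (1234,2,50), (5678,1,90),
                            (4321,1,75), (4321,2,75);
    CREATE TABLE keep (id INTEGER, sub_id INTEGER);
    CREATE INDEX keep_pair ON keep (id, sub_id);
    INSERT INTO keep VALUES (1234,2), (4321,2);
""")

# Delete every row whose (id, sub_id) pair is absent from the keep-list;
# comparing the columns directly avoids the slow string concatenation.
conn.execute("""
    DELETE FROM data
    WHERE NOT EXISTS (SELECT 1 FROM keep k
                      WHERE k.id = data.id AND k.sub_id = data.sub_id)
""")
remaining = conn.execute("SELECT id, sub_id FROM data ORDER BY id").fetchall()
```

Only the two whitelisted combinations survive, and growing the list next month means inserting into `keep`, not editing the query.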
Performing a transform on the left side of the where clause will usually result in poor performance. I would suggest staging the rows you want to delete in a temp table and then simply join to it in your delete. When you need to expand the delete criteria just add an insert into this staging table (@delete): ``` declare @t table (ID int, Sub_ID int, Value int) insert into @t select 1234, 1, 100 union all select 1234, 2, 50 union all select 5678, 1, 90 union all select 4321, 1, 75 union all select 4321, 2, 75; --stage the combination IDs you want to delete: declare @delete table (ID int, Sub_ID int); insert into @delete select 1234, 2 union all select 4321, 2; delete t from @t t join @delete d on t.ID = d.ID and t.Sub_ID = d.Sub_ID; select * from @t; ```
Efficient way to delete values based on multiple columns in SQL
[ "", "sql", "sql-server", "" ]
I want to design a DB for a school, where marks for students will be recorded. But I got stuck at a point with my current design where I want to insert student marks. My current DB design at a broad level is something like: ``` ------------------- ------------------- ------------------ STUDENTINFO | | EXAMS | | taught_in | ------------------- ------------------- ------------------ reg_id | | examID | | t_cisId | cis_Id | | examname | | subjectcode | $otherAttributes | | | | t_id(auto-inc)| __________________| |_________________| |________________| ------------------- ----------------- ------------------ |ClassInSchool | | Subjects | |Result | ------------------- ----------------- ------------------ | classes | | subjectcode | | regId | | section | | subjectname | | examID | | cis_id | |_______________| | t_id | |_________________| |__scoredmarks___| ``` Now the issue is: in the Result table I want only those records to be inserted for which the following conditions are satisfied: 1. The student should be a registered student (satisfied by regId). 2. Marks should be inserted for a valid (existing) exam (satisfied by examId). 3. If a student studies a particular subject in a class, then marks should be inserted for those subjects only. This is where I am facing the issue. E.g., in sec A of class 7, English is taught. Then for a student studying in 7A, only marks for English can be inserted. **In my design, if in sec B of class 7 Maths is taught, then I can insert marks for `Maths` for a student who is studying in `7 A`.** I want to handle this behaviour at the database level only; otherwise I have to handle it on the Java side. **Note: regId, examID, t\_id are F.K.s. I have used similar names here for the F.K.s to make it easy to understand pictorially.**
I hope I have understood the problem correctly. It seems your design may need some improvements. In the current design nothing prevents a student from participating in the same exam twice. An exam has no subject, so one exam could produce results in multiple subjects... etc. (correct me if I am wrong). BTW, the source of the problem in the current design is that you have used a surrogate key identifier as the primary key of the association tables. It is a [side effect of using surrogate keys](https://stackoverflow.com/questions/63090/surrogate-vs-natural-business-keys), [more info](http://www.agiledata.org/essays/keys.html). BTW, the draft design below may come in handy: ![enter image description here](https://i.stack.imgur.com/OFGVl.png) Note that: ``` taught_in PK = subjectPK + ClassInScholePK STUDENTINFO Pk = subjectPK + ClassInScholePK + studentpk Exam pk = sequence + subjectPK result pk = (STUDENTINFO Pk) + (Exam pk) = (subjectPK_1 + ClassInScholePK + studentpk_1) + (sequence + subjectPK_2) ``` Having a unique constraint on `student_PK + sequence + subjectPK` will result in non-duplicate student results per exam. Having a check constraint on `subjectPK_1 = subjectPK_2` will result in subject-specific results.
All the tables that describe the static objects are there. These are lookup tables. To get to your answer, a transaction table that pulls items from the lookup tables is needed. The transaction will describe a particular student in a particular class. Maybe the transaction will include the exam they are signed up for, if there is always just one exam per class per student. Once the transaction describing the student in a class exists, you can ask: I have an exam for a class - is this student in the class? There are lots of ways to put this together. I'm assuming your student info table just holds students, although the key id hints that maybe it needs to be split. One way could be: ``` Registration Transaction Exam Assignment Transaction Exam Result Transaction registration id unique exam assignmt id result id student id exam id student id class id class id exam assignmt id registration date exam date scores, etc ``` If a student is in the registration table and the exam is assigned to the class, create a row in the result table.
Insertion anomaly in database design
[ "", "sql", "database", "database-design", "" ]
I have a table with 100 rows, indexed 1-100. I have another table with 50 rows, indexed 1-50. I want to take the rows in the 50 row table and update the values at the corresponding index in the 100 row table: ``` UPDATE t100 SET text = (SELECT text FROM t50 WHERE t50.id = t100.id) ``` It works, but it turns rows 51-100 to `NULL` in the 100 row table. I just wanted to keep that data there. What am I doing wrong?
You are updating all entries of `t100` regardless of whether there is a corresponding entry in `t50`. What you should be doing is: ``` UPDATE t100 SET text = (SELECT text FROM t50 WHERE t50.id = t100.id) WHERE t100.id IN (SELECT id FROM t50) ```
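The difference the extra `WHERE` clause makes is easy to demonstrate end to end. A minimal sketch using Python's built-in sqlite3 (the table names mirror the question; the sample data is invented, with 4 and 2 rows standing in for 100 and 50):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t100 (id INTEGER PRIMARY KEY, text TEXT)")
con.execute("CREATE TABLE t50  (id INTEGER PRIMARY KEY, text TEXT)")
con.executemany("INSERT INTO t100 VALUES (?, ?)",
                [(i, f"old{i}") for i in range(1, 5)])   # rows 1-4 stand in for 1-100
con.executemany("INSERT INTO t50 VALUES (?, ?)",
                [(i, f"new{i}") for i in range(1, 3)])   # rows 1-2 stand in for 1-50

# The extra WHERE clause limits the update to ids that exist in t50,
# so the remaining rows keep their original text instead of becoming NULL.
con.execute("""
    UPDATE t100
    SET text = (SELECT text FROM t50 WHERE t50.id = t100.id)
    WHERE t100.id IN (SELECT id FROM t50)
""")
rows = con.execute("SELECT id, text FROM t100 ORDER BY id").fetchall()
print(rows)  # [(1, 'new1'), (2, 'new2'), (3, 'old3'), (4, 'old4')]
```

Dropping the final `WHERE` reproduces the problem in the question: rows 3 and 4 would get `NULL`.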
Because you update every row in t100. Add the following to the end of your query ``` Where t100.id <=50 ```
Why does updating only some rows delete the rest of the rows in the column?
[ "", "sql", "merge", "sql-update", "" ]
I am stuck with something. I am trying to take a long column of dates which are formatted to also show hours and minutes, and run a group query to paste values at the date level, ignoring the differences in hours and minutes. Unfortunately I have no clue how to start. The code I put together so far, which returns each grouped date with the hour and minutes, is as follows: ``` st_sql = "INSERT INTO [tblSearchEngine03] ([Date]) " & _ "SELECT [tblSearchEngine02].[Date] " & _ "FROM [tblSearchEngine02] " & _ "GROUP BY [tblSearchEngine02].[Date] " & _ "ORDER BY [tblSearchEngine02].[Date]" Application.DoCmd.RunSQL (st_sql) ``` I'm not sure of the best way to truncate the date on table "tblSearchEngine02".
One way of doing this is to format the date/time as a date string. If you use `YYYY/MM/DD` it will sort properly. Otherwise you can convert the date/time to an int to trim off the time and then convert back to a date/time type. Here is an example of formatting as a string: ``` Format([tblSearchEngine02].[Date], "yyyy/mm/dd") ``` Here is an example of converting to get to a date (the end result will be a date/time data type, so it might render as `03/16/2014 00:00` depending on your locale info): ``` CDate(CInt([tblSearchEngine02].[Date])) ```
Focus on the `SELECT` piece first. You can use `DateValue()` for your `Date` field values. Start with this as a new query in the Access query designer: ``` SELECT DateValue(se02.Date) FROM tblSearchEngine02 AS se02 GROUP BY se02.Date ORDER BY se02.Date ``` Or you could use `DISTINCT` instead of `GROUP BY`: ``` SELECT DISTINCT DateValue(se02.Date) FROM tblSearchEngine02 AS se02 ORDER BY se02.Date ``` After you have the `SELECT` working correctly, you can convert it to an `INSERT` query (the Access query designer calls it an "append" query). And when you later build the same statement in your VBA code, include `Debug.Print st_sql` so that you can view the completed statement text in the Immediate window and make sure it is what you expected. (You can use `Ctrl`+`g` to go to the Immediate window.)
Rounding dates to the day in an SQL query
[ "", "sql", "ms-access", "vba", "ms-access-2010", "" ]
I have a MySQL database with a 1 to N relation in it. ``` Sample Data: Table User: ID,Name Table Entry: ID,Name Table User_Entry: UserID,EntryID ``` I would like to get all Entries (their ID and NAME) and a flag that states whether a User with ID x has this Entry (via the table USER\_ENTRY).
You would need to select from your entry table to get all your entries, and then left join to your user\_entry table to find out if a particular user has that entry: ``` SELECT e.ID, e.Name, CASE WHEN ue.EntryID IS NULL THEN 0 ELSE 1 END AS UserFlag FROM Entry AS e LEFT JOIN user_Entry AS ue ON ue.EntryID = e.ID AND ue.UserID = x ```
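The pattern above is easy to sanity-check with Python's built-in sqlite3. This is a hedged sketch: the schema mirrors the question, and the user/entry names and IDs are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE User (ID INTEGER, Name TEXT);
    CREATE TABLE Entry (ID INTEGER, Name TEXT);
    CREATE TABLE User_Entry (UserID INTEGER, EntryID INTEGER);
    INSERT INTO User VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO Entry VALUES (10, 'first'), (11, 'second');
    INSERT INTO User_Entry VALUES (1, 10);
""")
user_id = 1
# The user filter lives in the ON clause, so entries the user does not
# have still come back (with NULLs on the right side of the join).
rows = con.execute("""
    SELECT e.ID, e.Name,
           CASE WHEN ue.EntryID IS NULL THEN 0 ELSE 1 END AS UserFlag
    FROM Entry AS e
    LEFT JOIN User_Entry AS ue
        ON ue.EntryID = e.ID AND ue.UserID = ?
    ORDER BY e.ID
""", (user_id,)).fetchall()
print(rows)  # [(10, 'first', 1), (11, 'second', 0)]
```

Moving `ue.UserID = ?` into a `WHERE` clause instead would silently drop the unflagged entries.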
@GarethD's answer is fine, but I would go one step further and verify that the `UserID` in `User_Entry` is an actual valid user in the `User` table. If you do not have foreign keys on the table, it is possible to have a `UserID` that does not exist in the `User` table. To enforce that check you can join both tables and verify the `Id` from the `User` table in the following manner. ``` SELECT e.ID AS EntryID ,e.NAME ,CASE WHEN u.ID IS NULL THEN 0 ELSE 1 END AS UserFlag FROM Entry AS e LEFT OUTER JOIN User_Entry AS ue ON e.id = ue.EntryID LEFT OUTER JOIN User AS u ON ue.UserID = u.ID ``` To go even further, you can write a case expression that finds bad records by checking both the `UserID` in the `User_Entry` table and the `ID` in the `User` table. ``` SELECT e.ID AS EntryID ,e.NAME ,CASE WHEN u.ID IS NULL THEN 0 ELSE 1 END AS UserFlag ,CASE WHEN ue.UserId IS NOT NULL AND u.ID IS NULL THEN 1 ELSE 0 END AS InvalidUserID FROM Entry AS e LEFT OUTER JOIN User_Entry AS ue ON e.id = ue.EntryID LEFT OUTER JOIN User AS u ON ue.UserID = u.ID ```
SQL Query Help 1 to N
[ "", "mysql", "sql", "" ]
I have to do a SQL query and I don't know how to do it. I'd like to get the latest rows for each unique duo A/B. I'm working with PostgreSQL. For instance: Table: Person ``` id A B modification_date 1 5 6 2014-04-12 2 6 7 2014-04-13 3 5 6 2014-04-14 4 9 1 2014-04-15 5 6 7 2014-04-16 ``` And I'd like to get: ``` id A B modification_date 3 5 6 2014-04-14 4 9 1 2014-04-15 5 6 7 2014-04-16 ``` Appreciate your help.
`SELECT` the `MAX`imum of `modification_date` for each `GROUP` of (`A`, `B`), then `JOIN` back to the original row to get the values (necessary to get the `id` column): ``` SELECT t1.* FROM Person t1 JOIN ( SELECT MAX(modification_date) max_date, A, B FROM Person GROUP BY A, B ) t2 ON t1.A = t2.A AND t1.B = t2.B AND t1.modification_date = t2.max_date ``` --- More simply, if you don't care which `id` you get back, and you only want one row even if `modification_date` is duplicated, you can just select the `MIN`imum value of `id` and be done with it: ``` SELECT MIN(id) id, A, B, MAX(modification_date) modification_date FROM Person GROUP BY A, B ```
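The join-back approach above can be run directly against the question's sample data. A sketch with Python's built-in sqlite3 (which accepts the same join, and where the ISO date strings compare correctly as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Person (id INTEGER, A INTEGER, B INTEGER, modification_date TEXT);
    INSERT INTO Person VALUES
        (1, 5, 6, '2014-04-12'),
        (2, 6, 7, '2014-04-13'),
        (3, 5, 6, '2014-04-14'),
        (4, 9, 1, '2014-04-15'),
        (5, 6, 7, '2014-04-16');
""")
# Find the max date per (A, B) group, then join back to recover the full row.
rows = con.execute("""
    SELECT t1.*
    FROM Person t1
    JOIN (SELECT MAX(modification_date) max_date, A, B
          FROM Person
          GROUP BY A, B) t2
      ON t1.A = t2.A AND t1.B = t2.B AND t1.modification_date = t2.max_date
    ORDER BY t1.id
""").fetchall()
print([r[0] for r in rows])  # [3, 4, 5]
```

The returned ids match the expected output in the question.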
``` SELECT DISTINCT ON (a, b) * FROM person ORDER BY a, b, modification_date DESC; ``` Detailed explanation: [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
SQL Query, latest rows for each unique duo
[ "", "sql", "postgresql", "greatest-n-per-group", "" ]
SELECT Count the rows that are in this query --> (SELECT Family FROM BIRD GROUP BY Family) FROM BIRD Every time I try to count that subquery I get an error saying that there is more than one resulting value. I'm not sure how to count up the rows resulting from a subquery, any ideas?
You could put this sub-query in the `from` clause: ``` SELECT COUNT(*) FROM (SELECT family FROM bird GROUP BY family) t ``` But if you're just trying to get the number of different bird families you don't really need a subquery: ``` SELECT COUNT (DISTINCT family) FROM bird ```
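Both forms are easy to compare side by side with Python's built-in sqlite3. A hedged sketch; the bird names and families below are invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE bird (name TEXT, family TEXT);
    INSERT INTO bird VALUES
        ('robin', 'Turdidae'), ('blackbird', 'Turdidae'),
        ('house sparrow', 'Passeridae'), ('barn owl', 'Tytonidae');
""")
# Counting rows of the grouped subquery (note the required alias t)...
n_subquery = con.execute(
    "SELECT COUNT(*) FROM (SELECT family FROM bird GROUP BY family) t"
).fetchone()[0]
# ...gives the same result as the simpler COUNT(DISTINCT ...).
n_distinct = con.execute(
    "SELECT COUNT(DISTINCT family) FROM bird"
).fetchone()[0]
print(n_subquery, n_distinct)  # 3 3
```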
Try this: ``` SELECT Count(*) as FamilyCount FROM (SELECT Family FROM BIRD GROUP BY Family) Families ``` `Count()` returns the number of items in a group. Read more [here](http://technet.microsoft.com/en-us/library/ms175997.aspx).
SQL Counting up rows resulting from a sub query
[ "", "sql", "" ]
I'm trying to create a SQL query that tells me how many users there are that have one or more products. I created this query which tells me all of the users that have products: ``` SELECT DISTINCT `users`.* FROM `users` INNER JOIN `products` ON `users`.`id` = `products`.`creator_id` ``` However, I want the count of the users, not the users themselves. So I created this query which uses the `COUNT()` function ``` SELECT DISTINCT COUNT(*) FROM `users` INNER JOIN `products` ON `users`.`id` = `products`.`creator_id` ``` But I believe I'm using the `COUNT` incorrectly because this query returns `164` which is a lot more than the previous query returned.
``` SELECT COUNT(DISTINCT `users`.`id`) FROM `users` INNER JOIN `products` ON `users`.`id` = `products`.`creator_id` ```
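The overcounting the question ran into is easy to reproduce. In this sketch (Python's built-in sqlite3, invented data), user 3 has no products, so a plain `COUNT(*)` counts joined rows while `COUNT(DISTINCT users.id)` counts users:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER);
    CREATE TABLE products (id INTEGER, creator_id INTEGER);
    INSERT INTO users VALUES (1), (2), (3);
    INSERT INTO products VALUES (10, 1), (11, 1), (12, 2);  -- user 3 has none
""")
# COUNT(*) counts one row per (user, product) pair after the join...
joined = con.execute("""
    SELECT COUNT(*) FROM users
    JOIN products ON users.id = products.creator_id
""").fetchone()[0]
# ...while COUNT(DISTINCT users.id) counts each user once.
creators = con.execute("""
    SELECT COUNT(DISTINCT users.id) FROM users
    JOIN products ON users.id = products.creator_id
""").fetchone()[0]
print(joined, creators)  # 3 2
```

This mirrors the question: `DISTINCT COUNT(*)` deduplicates the single count row, not the users, which is why it returned 164.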
`DISTINCT` filters your result rows to be unique. The aggregation (i.e. the `COUNT()`) is done prior, so is following your `JOIN` and counting users twice. So, to get the number of users that have products, you could do: ``` SELECT COUNT(DISTINCT u.id) FROM users AS u INNER JOIN products AS p ON u.id = p.creator_id ``` Similarly, you could do something a bit useful, like the count of products per user: ``` SELECT u.id, COUNT(*) AS ProductCount FROM users AS u INNER JOIN products AS p ON u.id = p.creator_id GROUP BY u.id ```
How do I use the SQL COUNT() function properly in this context?
[ "", "mysql", "sql", "" ]
I'd like to get the number of rows where a certain field is either 1 or 0. My table looks like this: ``` ID | name | my_bool =================== 1 | foo | 1 2 | bar | 1 3 | loo | 0 4 | zoo | 1 ``` As a result I expect ``` YES | NO | percentage ====================== 3 | 1 | 0.3333 ``` `YES` is the number of rows where `my_bool` is **true** (1), while `NO` is the number of rows where it is **false** (0); `percentage` gives the percentage of `YES` to `NO`.
In MySQL, you can do this easily with conditional aggregation: ``` select sum(my_bool = 1) as yes, sum(my_bool = 0) as no from table t; ``` EDIT: The percentage is very easy: ``` select sum(my_bool = 1) as yes, sum(my_bool = 0) as no, avg(my_bool = 0) from table t; ``` However, your value suggests you are looking for a ratio, not a percentage. For that, you need to be careful about divide by zero: ``` select sum(my_bool = 1) as yes, sum(my_bool = 0) as no, (case when sum(my_bool = 1) > 0 then sum(my_bool = 0) / sum(my_bool = 1) end) from table t; ```
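SQLite evaluates boolean comparisons to 0/1 the same way MySQL does, so the conditional-aggregation trick can be sanity-checked with Python's built-in sqlite3 and the question's data (the `* 1.0` below is added to force a float division):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (ID INTEGER, name TEXT, my_bool INTEGER);
    INSERT INTO t VALUES (1,'foo',1), (2,'bar',1), (3,'loo',0), (4,'zoo',1);
""")
# my_bool = 1 evaluates to 0 or 1, so SUM() counts matching rows;
# the CASE guards against division by zero when there are no 1s.
yes, no, ratio = con.execute("""
    SELECT SUM(my_bool = 1), SUM(my_bool = 0),
           CASE WHEN SUM(my_bool = 1) > 0
                THEN SUM(my_bool = 0) * 1.0 / SUM(my_bool = 1) END
    FROM t
""").fetchone()
print(yes, no, round(ratio, 4))  # 3 1 0.3333
```

The 0.3333 matches the expected output in the question, confirming it is the NO/YES ratio rather than a percentage.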
``` SELECT SUM(IF(my_bool=1, 1, 0)) AS YES, SUM(IF(my_bool=0, 1, 0)) AS NO FROM mytable ```
Count boolean field values within a single query
[ "", "mysql", "sql", "" ]
I have 2 tables in my database (like this): ``` tblCustomers: id CustomerName 1 aaa 2 bbb 3 ccc 4 ddd 5 eee 6 fff ``` --- ``` tblPurchases: id CustomerID Price 1 1 300 2 2 100 3 3 500 4 1 150 5 4 50 6 3 250 7 6 700 8 2 30 9 1 310 10 4 25 ``` Now, using a stored procedure, I want to get a result that gives me the sum of prices for each customer, exactly like below. How can I do that? ``` Procedure Result: id CustomerName SumPrice 1 aaa 760 2 bbb 130 3 ccc 750 4 ddd 75 5 eee 0 6 fff 700 ```
``` select c.id, c.customername, sum(isnull(p.price, 0)) as sumprice from tblcustomers c left join tblpurchases p on c.id = p.customerid group by c.id, c.customername ``` SQL Fiddle test: <http://sqlfiddle.com/#!3/9b573/1/0> Note the need for an outer join because your desired result includes customers with no purchases.
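A runnable check of the accepted approach, using Python's built-in sqlite3 and the question's data. Note that `ISNULL` is SQL Server-specific, so the portable `COALESCE` stands in for it here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tblCustomers (id INTEGER, CustomerName TEXT);
    CREATE TABLE tblPurchases (id INTEGER, CustomerID INTEGER, Price INTEGER);
    INSERT INTO tblCustomers VALUES
        (1,'aaa'), (2,'bbb'), (3,'ccc'), (4,'ddd'), (5,'eee'), (6,'fff');
    INSERT INTO tblPurchases VALUES
        (1,1,300), (2,2,100), (3,3,500), (4,1,150), (5,4,50),
        (6,3,250), (7,6,700), (8,2,30), (9,1,310), (10,4,25);
""")
# The LEFT JOIN keeps customer 'eee' (no purchases); COALESCE turns the
# NULL price of that unmatched row into 0 so the sum comes out as 0.
rows = con.execute("""
    SELECT c.id, c.CustomerName, SUM(COALESCE(p.Price, 0)) AS SumPrice
    FROM tblCustomers c
    LEFT JOIN tblPurchases p ON c.id = p.CustomerID
    GROUP BY c.id, c.CustomerName
    ORDER BY c.id
""").fetchall()
print(rows)
# [(1, 'aaa', 760), (2, 'bbb', 130), (3, 'ccc', 750),
#  (4, 'ddd', 75), (5, 'eee', 0), (6, 'fff', 700)]
```

The output matches the desired result in the question row for row.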
You can use the below query to get the result ``` select id,CustomerName,sum(price) as TotalPrice from ( select tc.id,tc.CustomerName,tp.price from tblCustomers tc join tblPurchases tp on tc.id = tp.CustomerID ) tab group by id,CustomerName ```
How to Get Sum of One Column Based On Other Table in Sql Server
[ "", "sql", "sql-server", "stored-procedures", "sum", "" ]
*Problem is solved - see end of this post.* When I call to\_date in the select clause everything works fine - I get a result set of 12 records: ``` select value1,to_date(value1,'DD.MM.YYYY') from variableindex where value1 is not null and value1 <> '0' and creation_time_ > to_timestamp('20140307','YYYYMMDD') order by 2 ``` returns ``` 'VALUE1' 'TO_DATE(VALUE1,'DD.MM.YYYY')' '25.11.2013' 25.11.13 '12.03.2014' 12.03.14 '12.03.2014' 12.03.14 '12.03.2014' 12.03.14 '12.03.2014' 12.03.14 '12.03.2014' 12.03.14 '14.03.2014' 14.03.14 '14.03.2014' 14.03.14 '14.03.2014' 14.03.14 '14.03.2014' 14.03.14 '20.03.2014' 20.03.14 '20.03.2014' 20.03.14 ``` Every date string has been converted as expected. If I add the following line to the where clause ``` and to_date(value1,'DD.MM.YYYY') < to_date('20140301','YYYYMMDD') ``` I'll receive: ``` ORA-01847: Tag des Monats muss zwischen 1 und letztem Tag des Monats liegen 01847. 00000 - "day of month must be between 1 and last day of month" *Cause: *Action: ``` Now it really gets nasty... I changed my query to ``` where id_ in (...) ``` and used the same 12 record IDs as in the original query. No error... Many thanks to @GordonLinoff - this is how I use the query now: ``` select value1,to_date(value1,'DD.MM.YYYY') from variableindex where (case when value1 <> '0' then to_date(value1,'DD.MM.YYYY') end) > to_timestamp('20131114','YYYYMMDD') and creation_time_ > to_timestamp('20140307','YYYYMMDD') order by 2; ```
This is your query with the `where` clause: ``` select value1, to_date(value1, 'DD.MM.YYYY') from variableindex where value1 is not null and value1 <> '0' and creation_time_ > to_timestamp('20140307', 'YYYYMMDD') and to_date(value1, 'DD.MM.YYYY') < to_date('20140301', 'YYYYMMDD') order by 2; ``` Oracle does not guarantee the order in which the conditions in the `where` clause are evaluated. So `value1 <> '0'` is not guaranteed to run before the last condition. This happens to be a big problem on SQL Server as well. One solution is to use a `case` expression: ``` select value1, to_date(value1, 'DD.MM.YYYY') from variableindex where value1 is not null and value1 <> '0' and creation_time_ > to_timestamp('20140307', 'YYYYMMDD') and (case when value1 <> '0' then to_date(value1, 'DD.MM.YYYY') end) < to_date('20140301', 'YYYYMMDD') order by 2; ``` Rather ugly, but it just might solve your problem.
If you're using `OracleParameter` in SQL with parameter name and value binding, check that you set `oracleCommand.BindByName = true`; then it will bind by name, and not by the order in which the parameters were added.
ORA-01847 day of month must be between 1 and last day of month - but data is OK
[ "", "sql", "oracle", "date", "" ]
I wrote two queries which work fine, but they are really, really slow: ``` SELECT director FROM movies WHERE id NOT IN (SELECT movie_id FROM stars_in_movies WHERE star_id = %s); SELECT first_name, last_name FROM stars WHERE id NOT IN (SELECT star_id FROM stars_in_movies WHERE movie_id IN(SELECT movie_id FROM stars_in_movies WHERE star_id = %s)) ``` I tried to replace `NOT IN` with `INNER JOIN` and `LEFT JOIN`, but nothing I have tried has worked so far. Following is the schema for the tables: **movies**: ``` - id (primary key), - title (title of the movie), - year (year of release) - director (director of the movie) ``` **stars**: ``` - id (primary key) - first_name - last_name ``` **stars\_in\_movies**: ``` - movie_id, - star_id (movie_id and star_id both are foreign keys here) ``` Thank you in advance.
Try this: ``` SELECT m.director FROM movies m LEFT JOIN stars_in_movies sm ON m.id = sm.movie_id AND sm.star_id = %s WHERE sm.movie_id IS NULL ``` Note that the `star_id` filter has to go into the `ON` clause; putting it in the `WHERE` alongside the `IS NULL` test would match nothing. The second query: ``` SELECT s.first_name, s.last_name FROM stars s LEFT JOIN (SELECT DISTINCT star_id FROM stars_in_movies WHERE movie_id IN (SELECT movie_id FROM stars_in_movies WHERE star_id = %s)) sm ON sm.star_id = s.id WHERE sm.star_id IS NULL ```
Try this: ``` select t1.director from movies t1 left outer join stars_in_movies t2 on t1.id = t2.movie_id where t2.movie_id is null; select t1.first_name, t1.last_name from stars t1 left outer join stars_in_movies t2 on t1.id = t2.star_id where t2.star_id is null; ``` here is an **[example](http://sqlfiddle.com/#!2/ac0e3/7)**
Replace NOT IN with LEFT JOIN in SQL statement
[ "", "mysql", "sql", "database", "" ]
I have a table in SQL Server that has 34 columns. I need to sum column values across a row in SQL Server. Table: `[CALEN]` Columns: ``` YEAR_ | MONTH_ |D1 | D2 | D3 | D4 | D5 .... | D31 | Days ------------------------------------------------------------ 1392 | 12 | 1 | 1 | 2 | 1 | 4 ... | 0 | 29 ``` I want to count the columns that have the value 1, using something like this query: ``` select [All_column value is 1 and Start With D] FROM [CALEN] WHERE YEAR_ = 1392 and MONTH_ = 12 ```
You need to unpivot your table: ``` SELECT YEAR_, MONTH_, DAY_, COUNT_ FROM (SELECT * FROM [CALEN] WHERE YEAR_ = 1392 and MONTH_ = 12) CALEN_FILTER UNPIVOT (COUNT_ FOR DAY_ IN (D1, D2, D3, D4, D5, D6, D7, D8, D9, D10, D11, D12, D13, D14, D15, D16, D17, D18, D19, D20, D21, D22, D23, D24, D25, D26, D27, D28, D29, D30, D31)) AS CALEN_UNPIVOTED ``` Then you can easily make a common aggregate query, maybe easier puting the query above in a CTE, e.g, ``` WITH CALEN_U (YEAR_, MONTH_, DAY_, COUNT_) AS ( SELECT YEAR_, MONTH_, DAY_, COUNT_ FROM (SELECT * FROM [CALEN] WHERE YEAR_ = 1392 and MONTH_ = 12) CALEN_FILTER UNPIVOT (COUNT_ FOR DAY_ IN (D1, D2, D3, D4, D5, D6, D7, D8, D9, D10, D11, D12, D13, D14, D15, D16, D17, D18, D19, D20, D21, D22, D23, D24, D25, D26, D27, D28, D29, D30, D31)) AS CALEN_UNPIVOTED ) SELECT YEAR_, MONTH_, COUNT(DAY_) FROM CALEN_U WHERE COUNT_ = 1 GROUP BY YEAR_, MONTH_ ``` Edit: see [SQLFiddle](http://sqlfiddle.com/#!3/ea504/2), using 10 days.
You can do something like this: ``` SELECT CASE WHEN D1 = 1 THEN 1 ELSE 0 END + CASE WHEN D2 = 1 THEN 1 ELSE 0 END + CASE WHEN D3 = 1 THEN 1 ELSE 0 END + CASE WHEN D4 = 1 THEN 1 ELSE 0 END + . . . CASE WHEN D31 = 1 THEN 1 ELSE 0 END AS TOTAL_D FROM [CALEN] WHERE YEAR_ = 1392 AND MONTH_ = 12 ``` ## [SQL Fiddle](http://sqlfiddle.com/#!3/579c6/1)
Sum all column value in a row in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have two tables as below: ``` Highschooler ID name grade 1510 Jordan 9 1689 Gabriel 9 1381 Tiffany 9 1709 Cassandra 9 1101 Haley 10 1782 Andrew 10 1468 Kris 10 1641 Brittany 10 1247 Alexis 11 1316 Austin 11 1911 Gabriel 11 1501 Jessica 11 1304 Jordan 12 1025 John 12 1934 Kyle 12 1661 Logan 12 Friends ID1 ID2 1689 1709 1709 1689 1782 1709 1911 1247 1247 1468 1641 1468 1316 1304 1501 1934 1934 1501 1025 1101 ``` I need to create a table that has the name and grade of people who are friends with each other, as given in the "Friends" table. I basically need to look up the name and grade of each ID given in the Friends table. I have two select statements that look up the name and grade for ID1 and ID2 separately. ``` select name,grade from highschooler where id IN (select id1 from friends) ``` and ``` select name,grade from highschooler where id IN (select id2 from friends) ``` However, I cannot figure out how to get a table that looks like this: ``` name1 grade1 name2 grade2 Gabriel 9 Cassandra 9 ``` I have tried concatenating the two select statements as well as a union, but neither works for this situation. I am new to SQL. Any help is very much appreciated!
You need to join the 'highschooler' table twice: ``` select h1.name, h1.grade, h2.name, h2.grade from highschooler h1 inner join friends on friends.id1 = h1.id inner join highschooler h2 on h2.id = friends.id2 ``` The problem with the 'friends' table and the above query is that even if A is a friend of B, B is not necessarily a friend of A!
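Here is the double self-join run against a small subset of the question's data, using Python's built-in sqlite3. Note that the (1689, 1709) friendship appears in both directions in the data, so it comes back twice:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE highschooler (id INTEGER, name TEXT, grade INTEGER);
    CREATE TABLE friends (id1 INTEGER, id2 INTEGER);
    INSERT INTO highschooler VALUES
        (1689, 'Gabriel', 9), (1709, 'Cassandra', 9), (1782, 'Andrew', 10);
    INSERT INTO friends VALUES (1689, 1709), (1709, 1689), (1782, 1709);
""")
# Join highschooler twice: h1 resolves id1, h2 resolves id2.
rows = con.execute("""
    SELECT h1.name, h1.grade, h2.name, h2.grade
    FROM highschooler h1
    JOIN friends ON friends.id1 = h1.id
    JOIN highschooler h2 ON h2.id = friends.id2
    ORDER BY h1.id, h2.id
""").fetchall()
print(rows)
# [('Gabriel', 9, 'Cassandra', 9),
#  ('Cassandra', 9, 'Gabriel', 9),
#  ('Andrew', 10, 'Cassandra', 9)]
```

If you want each bidirectional friendship listed only once, an extra filter such as `WHERE h1.id < h2.id` would do it.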
As previous poster mentioned, you need to list the highschooler table twice, since you need to reference it twice to compare both the ID's from the friends table. His syntax is correct; another way to do it with straight joins and a where clause is: ``` SELECT side1.name, side1.grade, side2.name, side2.grade FROM highschooler AS side1, highschooler AS side2, friends WHERE (side1.id = friends.id1 AND side2.id = friends.id2); ``` As pointed out, though, this only treats friends as one-way relationships. In your "friends" table, I do see that some "bidirectional friends" are listed using two rows with the IDs in the opposite order. If in fact you want to treat *all* friend relations as bidirectional, you have to compare twice with the columns in the "friends" table listed in both orders. ``` SELECT DISTINCT side1.name, side1.grade, side2.name, side2.grade FROM highschooler AS side1, highschooler AS side2, friends WHERE (side1.id = friends.id1 AND side2.id = friends.id2) OR (side1.id = friends.id2 AND side2.id = friends.id1); ``` Notice I had to use DISTINCT here, because otherwise, if a friendship is listed twice in the opposite order (as in 1689,1709 and 1709,1689), it gets listed twice in the result; DISTINCT removes the duplicate. I guess which query you use depends on if you consider friends to be "Facebook-style" friends or "Twitter-style" friends. :)
SQL- Looking up value across tables
[ "", "sql", "" ]
I'm trying to learn how to import data from a csv file to my database and then reorganise it into my tables. I have imported a very simple set of data into a table called ' CSVTest\_Match' that looks like this: ``` HEADER DATA --(Column Names) --------------- -------------- Home Team Barnsley Away Team Wigan Kick Off Time 14:02 Kick Off Date 03/08/2013 Home Goals 0 Away Goals 4 ``` The values in both columns are VARCHAR's at this point. I would like to transform this data to look like this: ``` HOMETEAM AWAYTEAM KICKOFFTIME KICKOFFDATE HOMEGOALS AWAYGOALS -- (Column Names) ---------- ---------- ------------- ------------- ----------- ---------- Barnsley Wigan 14:02 03/08/2013 0 4 ``` At this point it would be useful if the data was converted to VARCHAR, DATETIME, TINYINT values as appropriate. I've been getting rather confused trying to work out how to use PIVOT to do this so I would really appreciate some help. EDIT: I finally figured it out. The code required was: ``` SELECT * FROM (SELECT Header, Data FROM CSVTest_Match) AS T PIVOT (Min(Data) FOR Header IN ([Home Team], [Away Team], [Kick Off Time], [Kick Off Date], [Home Goals], [Away Goals])) AS T2 ```
``` SELECT * FROM (SELECT Header, Data FROM CSVTest_Match) AS T PIVOT (Min(Data) FOR Header IN ([Home Team], [Away Team], [Kick Off Time], [Kick Off Date], [Home Goals], [Away Goals])) AS T2 ```
I don't remember where I swiped this from, probably from around here, but here you go, I've been using this for a while.. This assumes the column 'names' are Indicator1-x and the table is yourtable. Search and replace accordingly. If you don't know your header names ahead of time, do a select distinct on them, and then do C.column\_name in ``` DECLARE @colsUnpivot AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @colsUnpivot = stuff((select ','+quotename(C.column_name) from information_schema.columns as C where C.table_name = 'CSVTest_Match' and C.column_name in ('Home Team','Away Team','Kick Off Time','Kick Off Date','Home Goals','Away Goals') for xml path('')), 1, 1, '') set @query = 'select id, entityId, indicatorname, indicatorvalue from CSVTest_Match unpivot ( indicatorvalue for indicatorname in ('+ @colsunpivot +') ) u' exec sp_executesql @query; ```
Convert Columns of Data to Rows of data in SQL Server
[ "", "sql", "sql-server", "pivot", "" ]
How do I use a SELECT statement with a LIKE condition where the pattern comes from a column of another table? My query: ``` SELECT count(1) FROM COMPANYNAME_RESTRICT_LIST a,itaukei_data_store_key b WHERE b.surname LIKE a.company_name% OR b.surname LIKE %a.company_name% OR b.surname LIKE %a.company_name% OR b.surname LIKE a.company_name%; ```
This Works Fine. ``` SELECT * FROM COMPANYNAME_RESTRICT_LIST a,itaukei_data_store_key b WHERE b.surname LIKE '%'||a.company_name||'%' or b.surname LIKE '%'||a.company_name or b.surname LIKE a.company_name||'%' or b.surname LIKE a.company_name; ```
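The `'%' || col || '%'` concatenation also works in SQLite, so the pattern can be checked with Python's built-in sqlite3. A hedged sketch with invented rows (one surname containing a restricted company name, one not):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE companyname_restrict_list (company_name TEXT);
    CREATE TABLE itaukei_data_store_key (surname TEXT);
    INSERT INTO companyname_restrict_list VALUES ('ACME');
    INSERT INTO itaukei_data_store_key VALUES ('ACME HOLDINGS'), ('SMITH');
""")
# The pattern is built from the other table's column with || concatenation.
n = con.execute("""
    SELECT COUNT(*)
    FROM companyname_restrict_list a
    JOIN itaukei_data_store_key b
      ON b.surname LIKE '%' || a.company_name || '%'
""").fetchone()[0]
print(n)  # 1
```

Only `'ACME HOLDINGS'` matches, so the count is 1.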
The standard SQL function is `concat()`: ``` SELECT count(1) FROM COMPANYNAME_RESTRICT_LIST a join itaukei_data_store_key b on b.surname like concat('%', a.company_name, '%') ``` Not all databases support `concat()`. So, one of the following will also generally work: ``` on b.surname like '%' || a.company_name || '%' on b.surname like '%' + a.company_name + '%' on b.surname like '%' & a.company_name & '%' ```
Select command using like condition
[ "", "sql", "select", "sql-like", "" ]
I'm trying to combine these 4 MySQL statements. I've been using the UNION and JOIN keywords, however it doesn't come up with the output I desire. Any suggestions? These are my 4 statements: ``` SELECT * FROM entrancequestion WHERE Subject='Abstract Reasoning' ORDER BY RAND() LIMIT 10 SELECT * FROM entrancequestion WHERE Subject='English' ORDER BY RAND() LIMIT 30 SELECT * FROM entrancequestion WHERE Subject='Mathematics' ORDER BY RAND() LIMIT 30 SELECT * FROM entrancequestion WHERE Subject='Science' ORDER BY RAND() LIMIT 30 ``` I tried combining the first 2 statements like so: ``` SELECT * FROM entrancequestion WHERE Subject='Abstract Reasoning' LIMIT 10 UNION SELECT * FROM entrancequestion WHERE Subject='English' ORDER BY RAND() LIMIT 30; ``` however it only honours the second LIMIT, and outputs 30 rows only. I would like to create a query where a total of 100 rows is displayed, randomized according to index. Your help would be much appreciated.
Quoting the [docs](http://dev.mysql.com/doc/refman/5.1/en/union.html), To apply ORDER BY or LIMIT to an individual SELECT, place the clause inside the parentheses that enclose the SELECT: ``` (SELECT * FROM entrancequestion WHERE Subject='Abstract Reasoning' ORDER BY RAND() LIMIT 10) UNION (SELECT * FROM entrancequestion WHERE Subject='English' ORDER BY RAND() LIMIT 30) UNION (SELECT * FROM entrancequestion WHERE Subject='Mathematics' ORDER BY RAND() LIMIT 30) UNION (SELECT * FROM entrancequestion WHERE Subject='Science' ORDER BY RAND() LIMIT 30 ) ```
You need to use parentheses in the query when using UNION and LIMIT, like: ``` (SELECT * FROM entrancequestion WHERE Subject='Abstract Reasoning' LIMIT 10) UNION (SELECT * FROM entrancequestion WHERE Subject='English' ORDER BY RAND() LIMIT 30); ```
Combine 4 SELECT queries on mysql
[ "", "mysql", "sql", "vb.net", "" ]
I am trying to set up a foreign key for my MySQL database using phpMyAdmin. But phpMyAdmin only displays the internal relation in 'relational view'. Note I have already made sure of the following steps. 1. Set up the phpMyAdmin database (create\_table.sql and config stuff) ![phpmyadmin status](https://i.stack.imgur.com/PX9Jr.png) 2. Made sure all tables in my database use InnoDB ![Inno db ](https://i.stack.imgur.com/PhURA.png) 3. Made sure my intended foreign keys/reference keys are indexed. ![enter image description here](https://i.stack.imgur.com/zIixX.png) After all these steps I can only see internal relations in my relational view. ![Problem](https://i.stack.imgur.com/4xlnY.png) Am I missing something?
This is not actually a solution, but since I cannot comment on posts yet, I'll add a few comments that might help. I've faced this same issue and the **critical parameters** seem to be the following: 1. Case insensitive operating system (in my case windows) 2. Mixed-case table names (e.g. "TableName") I've been using Mixed-case table names successfully in Linux, but in Windows something seems to go really wrong with phpMyAdmin. I bet your table names are mixed-case. Try creating a test table yourself, with a fully lowercase name, and check if you see the "Foreign Key Relations" options of phpMyAdmin. If you do, then we've found the source of the problem and we're just left with finding a solution or workaround for it :) ps: More on case-sensitivity and mysql, without any hints about phpMyAdmins view on all of this, can be found here: <http://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html> This IS the solution. Renaming table name 'myTable' to 'my\_table' solves the problem. ![Renaming table](https://i.stack.imgur.com/hioCN.png)
If you have different storage engines on the tables in your database, this would be the case. The only storage engine I know to support foreign keys is InnoDB. MyISAM and the rest don't seem to work.
phpmyadmin only displays internal relation in relational view
[ "", "mysql", "sql", "database", "phpmyadmin", "" ]
In MongoDB you can retrieve the date from an ObjectId using the `getTimestamp()` function. How can I retrieve the date from a MongoDB ObjectId using SQL (e.g., in the case where such an ObjectId is stored in a MySQL database)? Example input: ``` 507c7f79bcf86cd7994f6c0e ``` Wanted output: ``` 2012-10-15T21:26:17Z ```
This can be achieved as follows (assuming `objectId` is a string) in MySQL: ``` SELECT FROM_UNIXTIME( CAST(CONV(SUBSTR(objectId, 1, 8), 16, 10) AS UNSIGNED) ) FROM table ``` It works as follows: * `SUBSTR(objectId, 1, 8)` takes the first 8 characters from the hexadecimal `objectId` string * `CONV(..., 16, 10)` converts the hexadecimal number into a decimal number and returns it as a string (which represents the UNIX timestamp) * `CAST (...) AS UNSIGNED` converts the timestamp string to an unsigned integer * `FROM_UNIXTIME(...)` converts the timestamp integer into the date Note that by default the displayed date will be based on your system's timezone settings.
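The same first-8-hex-characters idea can be verified outside SQL. A small Python sketch using only the standard library:

```python
from datetime import datetime, timezone

def objectid_to_date(object_id: str) -> str:
    """The first 8 hex chars of an ObjectId encode its creation UNIX timestamp."""
    ts = int(object_id[:8], 16)          # hex -> seconds since the epoch
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(objectid_to_date("507c7f79bcf86cd7994f6c0e"))  # 2012-10-15T21:26:17Z
```

This reproduces the expected output from the question, and confirms the SQL expression is decoding the right bytes.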
For those using SQL Server, similar results would be generated with: ``` SELECT DATEADD( SECOND, CAST( CONVERT( BINARY(4), '0x'+SUBSTRING(@MongoObjectId, 1, 8), 1 ) AS BIGINT ), CAST('1970-01-01 00:00' AS DATETIME) ) ```
How to retrieve the date from a MongoDB ObjectId using SQL
[ "", "sql", "mongodb", "" ]
I have a table iminvbin\_sql. This table has columns item\_no, loc, bin\_no. Each item number should have 4 bins in location 2. How do I find all the items that do not have these four bins in loc 2? I tried ``` select item_no from iminvbin_sql where bin_no not in('910SHIP','910STAGE','910PROD','1') AND loc = 2 ``` but that didn't work. ``` itemno loc bin 0 2 1 0 2 910PROD 0 2 910SHIP 0 2 910STAGE ```
I think this is what you want: ``` select item_no from iminvbin_sql where bin_no in ('910SHIP', '910STAGE', '910PROD', '1') and loc = 2 group by item_no having count(distinct bin_no) <> 4; ``` This will check that all four of those values are in the bins. If you want to verify that these four values are in the bins *and* no other values are, you can test for this as well: ``` select item_no from iminvbin_sql where loc = 2 group by item_no having count(distinct bin_no) <> 4 or count(distinct case when bin_no in ('910SHIP', '910STAGE', '910PROD', '1') then bin_no end) <> 4 or count(*) <> 4; ``` EDIT: In response to Bohemian's comment, the following should get all items that are not fully populated: ``` select item_no from iminvbin_sql group by item_no having count(distinct case when loc = 2 then bin_no end) <> 4 or count(distinct case when loc = 2 and bin_no in ('910SHIP', '910STAGE', '910PROD', '1') then bin_no end) <> 4 or sum(case when loc = 2 then 1 else 0 end) <> 4; ```
Use a `group by` with a `having` clause to determine whether *all* bins are there: ``` select item_no from iminvbin_sql group by item_no having count(case when loc = 2 and bin_no in ('910SHIP','910STAGE','910PROD','1') then 1 end) < 4 ``` The important point with this query is that by moving the conditions into the `case`, it will find items that have none of the listed bins, or even items that have no data for `loc = 2` at all (using `count` rather than `sum` means a group with no matching rows yields 0 instead of `null`, so it still satisfies `< 4`).
How to find all items in table that don't have a certain text in column
[ "", "sql", "sql-server-2008-r2", "" ]
I am working in SQL Server Management Studio 2008. I need to write a query that searches several fields in a table against a large list of words (about 50 words). My current idea is to simply write an `IN` clause to put all of my words in. However, creating this `IN` clause could become tedious due to the amount of words I have to search against. Is there a better way of doing this? My current query looks something like the following: ``` SELECT x, y FROM Table1 WHERE x IN ('word1', 'word2', ... , 'wordx') OR y IN ('word1', 'word2', ... , 'wordx') ```
You can create a table (let's say `word_table`) and store all your words inside it. Then you can simply do ``` SELECT x, y FROM Table1 WHERE x IN (SELECT word FROM word_table) OR y IN (SELECT word FROM word_table) ```
``` select t.* from your_table t left join word_table wx on wx.word = t.x left join word_table wy on wy.word = t.y where wx.word is not null or wy.word is not null ```
How to query against a large list of words with an IN clause
[ "", "sql", "sql-server", "" ]
I want to use `WHERE` in a MySQL join ``` SELECT user.name , course.name FROM `user` LEFT JOIN `course` on user.course = course.id; ``` I want to filter on `user.name`, adding `WHERE name='Alice'`. I tried: ``` SELECT user.name WHERE name='Alice' , course.name FROM `user` LEFT JOIN `course` on user.course = course.id; ``` and ``` SELECT user.name , course.name FROM `user` WHERE user.name='Alice' LEFT JOIN `course` on user.course = course.id; ``` but it seems they are both wrong. How can I use `WHERE` in a MySQL join (inner join, left join, ...)?
The syntax is wrong; the correct order is ``` SELECT FROM JOIN ON WHERE ``` So in your case it will be ``` SELECT a.name, b.name as bname FROM `user` a LEFT JOIN `course` b ON a.course = b.id WHERE a.name='Alice' ``` Note that I used an alias since two columns have the same name
Almost right. This should work fine: ``` SELECT u.name, c.name FROM `user` u LEFT JOIN `course` c on u.course = c.id WHERE u.name='Alice'; ``` I know, it's weird to think about, but that's actually how you do it. And I changed to `u` and `c` because it's neater to work with.
use WHERE in mysql join
[ "", "mysql", "sql", "join", "" ]
Hello guys, I am fairly new to databases. I have 2 tables ``` 1 Tbl_name ID - Varchar Name - Varchar 2 Tbl_ref ID - Varchar Fid - Varchar ``` I want to get names from `Tbl_name` where the matching row in `Tbl_ref` has `Fid = '111-11'`. Any help welcome :)
``` SELECT N.NAME FROM Tbl_name N INNER JOIN Tbl_ref R ON N.ID = R.ID WHERE FID = '111-11'; ``` try this
With whatever I understand from your question, this is what you are looking for: ``` SELECT N.NAME FROM Tbl_name N INNER JOIN Tbl_ref R ON N.ID = R.ID AND FID = '111-11' ``` I would also suggest you refer to this link: <http://www.w3schools.com/sql/>
SQL join 2 tables with conditons on other tables
[ "", "sql", "sql-server", "database", "" ]
I'm trying to do an inner join but when I run this in SQL it says: Unknown column 'evenement.id' in 'on clause'. How can I make this work? SQL join: ``` SELECT * FROM `evenement` INNER JOIN `evenementontvanger` ON `evenement.id` = `evenementontvanger.idEvent` ORDER BY `id` ``` My tables are -> Evenement: ``` id title start startdate starttime end endtime enddate url allDay description color ``` evenementontvanger: ``` id idEvent idWerknemer idProject idKlant idTaak ```
You need to quote each identifier individually; ``` ON `evenement`.`id` = `evenementontvanger`.`idEvent` ```
The problem is that you have `evenement.id` between backticks. That makes it a single identifier, rather than two connected by a `.`. The following fixes the query, adds table aliases, and fixes the next problem, which is the ambiguous column in the `order by`: ``` SELECT * FROM evenement e INNER JOIN evenementontvanger ev ON e.id = ev.idEvent ORDER BY e.id; ```
My inner join doesn't work Unknown column
[ "", "mysql", "sql", "join", "phpmyadmin", "" ]
I'm attempting to write a function for Access. I have some very basic coding skills, but almost no knowledge of VBA. When I attempt to call the function from an update query, I get the "JOIN expression not supported" error, and I can't figure out what the problem is. Here's the function: ``` Function PrimaryCampus(stuPIDM As String, termCode As String) As String Dim seqNumb As Integer Dim seqNumbStore As Integer Dim campus As String Dim classCount1 As Integer Dim classCount2 As Integer Dim classCount3 As Integer Dim db As Database Dim rec As DAO.Recordset seqNumb = 0 classCount1 = 0 classCount2 = 0 classCount3 = 1 Set db = CurrentDb Do While (seqNumb < 7) Set rec = db.OpenRecordset("SELECT COUNT(SSBSECT_SEQ_NUMB) " & _ "FROM SATURN_SFRSTCR " & _ "INNER JOIN SATURN_SSBSECT " & _ "ON SFRSTCR_TERM_CODE = SSBSECT_TERM_CODE " & _ "AND SFRSTCR_CRN = SSBSECT_CRN " & _ "WHERE SFRSTCR_PIDM = '" & stuPIDM & "' " & _ "AND SFRSTCR_TERM_CODE = '" & termCode & "' " & _ "AND SSBSECT_SEQ_NUMB LIKE '" & seqNumb & "*';") classCount1 = rec.Fields(0) If (classCount2 < classCount1) Then classCount2 = classCount1 seqNumbStore = seqNumb ElseIf (classCount2 = classCount1) Then classCount3 = classCount3 + 1 End If seqNumb = seqNumb + 1 Loop If (classCount3 > 1) Then campus = CStr(classCount3) ElseIf (seqNumbStore = 0) Then campus = "Distance Learning" ElseIf (seqNumbStore = 1) Then campus = "Clarkston" ElseIf (seqNumbStore = 2) Then campus = "Dunwoody" ElseIf (seqNumbStore = 3) Then campus = "Decatur" ElseIf (seqNumbStore = 5) Then campus = "Newton" ElseIf (seqNumbStore = 6) Then campus = "Alpharetta" End If PrimaryCampus = campus rec.Close Set rec = Nothing Set db = Nothing End Function ``` I'm sure there are other errors because I'm using code I don't fully understand, but I need to fix the select statement before I can continue with the debugging. Any help would be appreciated. Edit: Thanks to everyone who helped me out. 
Here's the working version of the code for anyone who, like me, is a complete novice to VBA functions: ``` Function PrimaryCampus(stuPIDM As Long, termCode As String) As String Dim seqNumb As Integer Dim seqNumbStore As Integer Dim campus As String Dim classCount1 As Integer Dim classCount2 As Integer Dim classCount3 As Integer Dim db As Database Dim rec As DAO.Recordset seqNumb = 0 classCount1 = 0 classCount2 = 0 classCount3 = 1 Set db = CurrentDb Do While (seqNumb < 7) Set rec = db.OpenRecordset("SELECT COUNT(SSBSECT_SEQ_NUMB) AS CAMP_COUNT " & _ "FROM SATURN_SFRSTCR " & _ "INNER JOIN SATURN_SSBSECT " & _ "ON SATURN_SFRSTCR.SFRSTCR_TERM_CODE = SATURN_SSBSECT.SSBSECT_TERM_CODE " & _ "AND SATURN_SFRSTCR.SFRSTCR_CRN = SATURN_SSBSECT.SSBSECT_CRN " & _ "WHERE SFRSTCR_PIDM = " & stuPIDM & " " & _ "AND SFRSTCR_TERM_CODE = '" & termCode & "' " & _ "AND SSBSECT_SEQ_NUMB LIKE '" & seqNumb & "*' " & _ "AND (SFRSTCR_RSTS_CODE LIKE 'R*' OR SFRSTCR_RSTS_CODE LIKE 'W*') " & _ "AND SFRSTCR_CREDIT_HR >= 1") classCount1 = rec.Fields(0) If (classCount1 = 0) Then classCount1 = 0 ElseIf (classCount2 < classCount1) Then classCount2 = classCount1 seqNumbStore = seqNumb ElseIf (classCount2 = classCount1) Then classCount3 = classCount3 + 1 End If seqNumb = seqNumb + 1 Loop If (classCount3 > 1) Then campus = CStr(classCount3) ElseIf (seqNumbStore = 0) Then campus = "Distance Learning" ElseIf (seqNumbStore = 1) Then campus = "Clarkston" ElseIf (seqNumbStore = 2) Then campus = "Dunwoody" ElseIf (seqNumbStore = 3) Then campus = "Decatur" ElseIf (seqNumbStore = 5) Then campus = "Newton" ElseIf (seqNumbStore = 6) Then campus = "Alpharetta" End If PrimaryCampus = campus rec.Close Set rec = Nothing Set db = Nothing End Function ``` Bugs I had to fix: 1. Explicit table names in the join criteria. 2. stuPidm is a NUMBER(8) field. Originally, stuPIDM was an Integer, but this was causing a type conversion failure when I ran my update query. 
I didn't realize Integer in Access VBA was equivalent to a short integer in other languages, so the type needed to be Long. 3. I also needed to fix a logic error in the loop's If statement where I wasn't dealing with counts of 0.
Try explicitly referencing the table and field in the join like so: ``` "SELECT COUNT(SSBSECT_SEQ_NUMB) " & _ "FROM SATURN_SFRSTCR " & _ "INNER JOIN SATURN_SSBSECT " & _ "ON SATURN_SFRSTCR.SFRSTCR_TERM_CODE = SATURN_SSBSECT.SSBSECT_TERM_CODE " & _ "AND SATURN_SFRSTCR.SFRSTCR_CRN = SATURN_SSBSECT.SSBSECT_CRN " & _ "WHERE SFRSTCR_PIDM = '" & stuPIDM & "' " & _ "AND SFRSTCR_TERM_CODE = '" & termCode & "' " & _ "AND SSBSECT_SEQ_NUMB LIKE '" & seqNumb & "*';" ```
Looking [here](http://msdn.microsoft.com/en-us/library/office/ff820966%28v=office.15%29.aspx), it seems like you will have to specify a recordset type or it will assume you're trying to call a table rather than a query. Try: ``` Set rec = db.OpenRecordset("SELECT COUNT(SSBSECT_SEQ_NUMB) " & _ "FROM SATURN_SFRSTCR " & _ "INNER JOIN SATURN_SSBSECT " & _ "ON SFRSTCR_TERM_CODE = SSBSECT_TERM_CODE " & _ "AND SFRSTCR_CRN = SSBSECT_CRN " & _ "WHERE SFRSTCR_PIDM = '" & stuPIDM & "' " & _ "AND SFRSTCR_TERM_CODE = '" & termCode & "' " & _ "AND SSBSECT_SEQ_NUMB LIKE '" & seqNumb & "*';", _ Type:=dbOpenDynamic) ```
Access 2010 VBA JOIN expression not supported
[ "", "sql", "vba", "ms-access", "inner-join", "" ]
``` select * from table where substring(username,1) >= 'A' and substring(username,1) <= 'C' and height_in_meters * 100 > 140 and height_in_meters * 100 < 170; ``` what are the possible methods to speed up the query? It runs quite slowly
`where username like '[A-C]%'` would allow index use (should one be present) on `username` as opposed to passing `username` through a function which would always prevent it.
There is not much you can optimize but ``` WHERE username >= 'A' AND username < 'D' AND height_in_meters > 1.4 AND height_in_meters < 1.7; ``` so you save the computation
Optimise substring sql query
[ "", "sql", "sql-server", "" ]
I'd like to get the percentage of a certain version. I've stored the info in `mytable`: ``` ID | name | version =================== 1 | foo | 1.0 2 | bar | 1.0 3 | loo | 1.1 4 | zoo | 1.2 ``` I'd like to get this result: ``` version | percentage ==================== 1.0 | 0.500 1.1 | 0.250 1.2 | 0.250 ``` My statement is currently (without percentage) ``` SELECT a.version FROM mytable AS a GROUP BY version ORDER BY version ASC ```
This query should work: ``` select version, count(*) / const.cnt from mytable cross join (select count(*) as cnt from mytable) const group by version order by version; ```
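Here is the cross-join approach end to end, using SQLite via Python with the question's sample rows. One assumption worth flagging: in MySQL `/` already produces a decimal, but some engines (SQLite included) do integer division on integers, so the sketch multiplies by `1.0` to be safe:

```python
import sqlite3

def version_shares():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mytable (id INT, name TEXT, version TEXT)")
    con.executemany("INSERT INTO mytable VALUES (?,?,?)",
                    [(1, "foo", "1.0"), (2, "bar", "1.0"),
                     (3, "loo", "1.1"), (4, "zoo", "1.2")])
    # The single-row subquery supplies the grand total; 1.0 * forces real division.
    cur = con.execute("""
        SELECT version, 1.0 * COUNT(*) / const.cnt
        FROM mytable
        CROSS JOIN (SELECT COUNT(*) AS cnt FROM mytable) const
        GROUP BY version
        ORDER BY version
    """)
    return cur.fetchall()
```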
Use `COUNT` ``` SELECT version, COUNT(version)/(SELECT COUNT(*) FROM table) FROM table GROUP BY version ```
Get percentage of appearance of a certain value in mysql
[ "", "mysql", "sql", "" ]
I'm trying to get the average price of all products, several of which are `NULL`. I want to know if when I use the following query: ``` SELECT AVG(Price) AS PriceAverage FROM Products; ``` if it is including the products that have `NULL` as a price or is it skipping them?
The only aggregate function that doesn't ignore NULL values is COUNT(\*). Even COUNT() ignores NULL values, if a column name is given. Read more about it here: <http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html>
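A quick demonstration of both behaviours — `AVG` skipping NULLs versus `SUM/COUNT(*)` counting them in the denominator — as a sketch in SQLite via Python with invented prices (the same NULL semantics apply in MySQL and SQL Server):

```python
import sqlite3

def avg_behaviour():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE Products (Price REAL)")
    con.executemany("INSERT INTO Products VALUES (?)", [(10.0,), (20.0,), (None,)])
    # First column ignores the NULL row: (10+20)/2.
    # Second column counts it in the denominator: 30/3.
    row = con.execute("""
        SELECT AVG(Price),
               SUM(Price) * 1.0 / COUNT(*)
        FROM Products
    """).fetchone()
    return row
```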
`Null` values are ignored. If you want to include `Null` values in your average: ``` SUM(Price) / Count(*) ```
When using the AVG() SQL function, is NULL part of the average or is it ignored?
[ "", "sql", "" ]
Suppose I have the following tables: T1, T2 and T3. How could I rephrase the following query using only left joins? ``` Select * From T1 Right join T2 On T1.FK2=T2.PK Right join T3 On T1.FK3=T3.PK ``` The following attempt is not correct: ``` Select * From T2 Left join T1 On T1.FK2=T2.PK Left join T3 On T1.FK3=T3.PK ``` T3 is on the wrong side of the join. Is the following possible: ``` Select * From T2 Left join T3 On T1.FK3=T3.PK Left join T1 On T1.FK2=T2.PK ``` I can't find a way to put both tables 2 and 3 on the left side of 1 and use the corresponding fields to join all tables. The last query uses fields of table 1 before this table is mentioned in the query. Or something like this? ``` Select * From T2 Left join ( T3 left join T1 On T1.FK3=T3.PK) On T1.FK2=T2.PK ```
Apparently brackets can help to order your joins. I wonder if this is really documented; I've found nothing at first glance in the MySQL docs. The following query is correct and does not have any subqueries: ``` Select T1.Id Ida, t2.id idb, T3.id idc FROM T3 LEFT JOIN (T2 LEFT JOIN T1 ON (T1.ID = T2.ID)) ON (T1.ID= T3.ID); ```
The first way is just to reverse the order that the tables are mentioned: ``` Select * from t3 left outer join t2 on T1.FK3 = T3.PK left outer join t1 on T1.FK2 = T2.PK ``` But this won't work, because the first condition is on `t1` and not `t2`. And `t2` hasn't yet been defined. When working with chains of tables in left or right outer joins, only the first (or last) tables are important, because they "drive" the query. "Drive" in the sense that they provide all the values even when there are no matches. So, the following should do what you want: ``` Select * from t3 left outer join t1 on T1.FK3 = T3.PK left outer join t2 on T1.FK2 = T2.PK; ```
Transforming queries: right joins to left joins
[ "", "mysql", "sql", "database", "relational-database", "" ]
In our development environment we have long been using a particular backup and restore script for each of our products through various SQL Server versions and different environment configurations with no issues. Recently we have upgraded to SQL Server 2012 as our standard development server with SQL Compatibility Level 2005 (90) to maintain support with legacy systems. Now we find that on one particular dev's machine we get the following error when attempting to backup the database: > Cannot use the backup file 'D:\MyDB.bak' because it was > originally formatted with sector size 512 and is now on a device with > sector size 4096. BACKUP DATABASE is terminating abnormally. With the command being: ``` BACKUP DATABASE MyDB TO DISK = N'D:\MyDB.bak' WITH INIT , NOUNLOAD , NAME = N'MyDB backup', NOSKIP , STATS = 10, NOFORMAT ``` The curious thing is that neither the hardware nor partitions on that dev's machine have changed, even though their sector size is different this has not previously been an issue. From my research (i.e. googling) there is not a lot on this issue apart from the advice to use the `WITH BLOCKSIZE` option, but that then gives me the same error message. With my query being: ``` BACKUP DATABASE MyDB TO DISK = N'D:\MyDB.bak' WITH INIT , NOUNLOAD , NAME = N'MyDB backup', NOSKIP , STATS = 10, NOFORMAT, BLOCKSIZE = 4096 ``` Can anyone shed some light on how I can backup and restore a database to HDDs with different sector sizes?
This issue is caused by different sector sizes used by different drives. You can fix this issue by changing your original backup command to: ``` BACKUP DATABASE MyDB TO DISK = N'D:\MyDB.bak' WITH INIT , NOUNLOAD , NAME = N'MyDB backup', STATS = 10, FORMAT ``` Note that I've changed NOFORMAT to FORMAT and removed NOSKIP. Found a hint to resolving this issue in the comment section of the following blog post on MSDN: [SQL Server–Storage Spaces/VHDx and 4K Sector Size](http://blogs.msdn.com/b/psssql/archive/2013/05/15/sql-server-storage-spaces-vhdx-and-4k-sector-size.aspx) And more information regarding 4k sector drives: <http://blogs.msdn.com/b/psssql/archive/2011/01/13/sql-server-new-drives-use-4k-sector-size.aspx>
All you have to do is back it up with a different name.
Backup a database on a HDD with a different sector size
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "backup", "" ]
I have a table where I want to combine two separate rows into one row. This is a product catalog that is storing information on separate rows. Here is the sample data and the expected results. Table name: `ProductCatalog` ``` Product_ID | Action | Date ----------------------------------------- 0001 | Added | 12/11/1983 0001 | Removed | 01/01/2003 0002 | Added | 12/11/1983 ``` Expected result: ``` Product_ID | Added | Removed ======================================== 0001 | 12/11/1983 | 01/01/2003 0002 | 12/11/1983 | null ``` I have tried joining on `Product_ID` to get `Added` and `Removed` dates to be side by side in a new table or view but I don't get the desire results. I am not using `MAX(column)` since I don't get the desire results or maybe I am grouping wrong.
I think the easiest way is conditional aggregation: ``` select pc.product_id, max(case when pc.action = 'Added' then pc.[date] end) as Added, max(case when pc.action = 'Removed' then pc.[date] end) as Removed from ProductCatalog pc group by pc.product_id; ``` You can also do this using `pivot`.
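The conditional-aggregation query can be checked against the question's sample data with SQLite via Python (a sketch; `MAX` over an all-NULL group yields NULL, which is why the missing `Removed` date comes back empty):

```python
import sqlite3

def added_removed():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE ProductCatalog (Product_ID TEXT, Action TEXT, Date TEXT)")
    con.executemany("INSERT INTO ProductCatalog VALUES (?,?,?)", [
        ("0001", "Added",   "12/11/1983"),
        ("0001", "Removed", "01/01/2003"),
        ("0002", "Added",   "12/11/1983"),
    ])
    # Each CASE picks out one action's date; MAX collapses the group to one row.
    cur = con.execute("""
        SELECT Product_ID,
               MAX(CASE WHEN Action = 'Added'   THEN Date END) AS Added,
               MAX(CASE WHEN Action = 'Removed' THEN Date END) AS Removed
        FROM ProductCatalog
        GROUP BY Product_ID
        ORDER BY Product_ID
    """)
    return cur.fetchall()
```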
First what you need to do is to get the data into two separate columns; after that you can wrap it in a sub-select and group by `Product_id`. Because only one of `AddedDate` or `RemovedDate` will have a value, we can use the `MAX` function to display that data, producing only 1 row per `Product_ID` ``` SELECT Product_id ,MAX(AddedDate) ,MAX(RemovedDate) FROM ( SELECT Product_ID ,CASE WHEN [ACTION] = 'Added' THEN [date] ELSE NULL END AS AddedDate ,CASE WHEN [ACTION] = 'Removed' THEN [date] ELSE NULL END AS RemovedDate FROM ProductCatalog ) a GROUP BY Product_id ```
Combine two rows in table using one column data
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm selecting all entries from first table which have certain category in second one. Those 2 are connected through ID, which is the same in both tables. I'm new at UNION and JOIN, so I'm wondering if I can do this without using those? Example: ``` SELECT * FROM entries, categories WHERE entries.id = categories.id AND categories.category = 'default' ```
This would work. You might as well type : ``` SELECT your fields FROM entries AS E INNER JOIN categories as C USING (id) WHERE C.category = 'default' ```
Try like this ``` SELECT * FROM entries INNER Join categories ON entries.id = categories.id WHERE categories.category = 'default' ```
Easiest way to select from 2 tables, with where condition in second
[ "", "mysql", "sql", "select", "" ]
I have tables that look like this:- **tblConsuptionsFromA** ``` id meter date total 1 1 03/01/2014 100.1 2 1 04/01/2014 184.1 3 1 05/01/2014 134.1 4 1 06/01/2014 132.4 5 1 07/01/2014 126.1 6 1 08/01/2014 190.1 ``` and... **tblConsuptionsFromB** ``` id meter date total 1 1 01/01/2014 164.1 2 1 02/01/2014 133.1 3 1 03/01/2014 136.1 4 1 04/01/2014 125.1 5 1 05/01/2014 190.1 6 1 06/01/2014 103.1 7 1 07/01/2014 164.1 8 1 08/01/2014 133.1 9 1 09/01/2014 136.1 10 1 10/01/2014 125.1 11 1 11/01/2014 190.1 ``` I need to join these two tables, but if there is an entry for the same day in both tables... only take the result from tblConsumptionsFromA. So the result would be:- ``` id source_id meter from date total 1 1 1 B 01/01/2014 164.1 2 2 1 B 02/01/2014 133.1 3 1 1 A 03/01/2014 100.1 4 2 1 A 04/01/2014 184.1 5 3 1 A 05/01/2014 134.1 6 4 1 A 06/01/2014 132.4 7 5 1 A 07/01/2014 126.1 8 6 1 A 08/01/2014 190.1 9 9 1 B 09/01/2014 136.1 10 10 1 B 10/01/2014 125.1 11 11 1 B 11/01/2014 190.1 ``` This is beyond me, so if someone can solve it... I will be very impressed.
The `UNION` operator is used to combine the result-set of two or more SELECT statements. ``` SELECT column_name(s) FROM table1 UNION SELECT column_name(s) FROM table2; ``` The document of UNION is here: <http://www.w3schools.com/sql/sql_union.asp> And `ROW_NUMBER()` returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition. ``` ROW_NUMBER ( ) OVER ( [ PARTITION BY value_expression , ... [ n ] ] order_by_clause ) ``` The document of ROW\_NUMBER() is here: <http://technet.microsoft.com/en-us/library/ms186734.aspx> The following SQL statement uses UNION to select all records from the "tblConsuptionsFromA" and part of records from "tblConsuptionsFromB" tables. ``` SELECT ROW_NUMBER() OVER(ORDER BY DATE ASC) AS 'id', id AS 'source_id',meter, date,t AS 'from',total FROM( SELECT id,meter, date, 'A' AS t, total FROM tblConsuptionsFromA UNION SELECT id,meter, date, 'B' AS t,total FROM tblConsuptionsFromB WHERE NOT date IN (SELECT date FROM tblConsuptionsFromA) ) AS C; ``` Hope this helps.
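The core idea — take everything from A, and from B only the dates A doesn't have — can be sketched with SQLite via Python. This is a simplified single-meter version with invented ISO-format dates, not the full query from the answer:

```python
import sqlite3

def merged():
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE a (meter INT, date TEXT, total REAL);
        CREATE TABLE b (meter INT, date TEXT, total REAL);
    """)
    con.executemany("INSERT INTO a VALUES (?,?,?)",
                    [(1, "2014-01-03", 100.1), (1, "2014-01-04", 184.1)])
    con.executemany("INSERT INTO b VALUES (?,?,?)",
                    [(1, "2014-01-01", 164.1), (1, "2014-01-03", 136.1)])
    # B contributes only dates that A lacks, so A wins on 2014-01-03.
    cur = con.execute("""
        SELECT date, 'A' AS src, total FROM a
        UNION
        SELECT date, 'B', total FROM b
        WHERE date NOT IN (SELECT date FROM a)
        ORDER BY date
    """)
    return cur.fetchall()
```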
Here's one way to do it: ``` SELECT COALESCE(a.source_id,b.source_id) as source_id, COALESCE(a.meter,b.meter) as meter, COALESCE(a.[from],b.[from]) as [from], COALESCE(a.[date],b.[date]) as [date], COALESCE(a.total,b.total) FROM (select source_id,meter,'b' as [from],[date],total from tblConsuptionsFromB) b left join (select source_id,meter,'a' as [from],[date],total from tblConsuptionsFromA) a on a.meter = b.meter and a.[date] = b.[date] ``` Unfortunately, there's no shorthand like `COALESCE(a.*,b.*)` to apply the [`COALESCE`](http://msdn.microsoft.com/en-us/library/ms190349.aspx) to all columns
SQL Server: Join 2 tables, preferring results from one table where there is a conflict
[ "", "sql", "sql-server", "join", "" ]
I am trying to retrieve records from Oracle 10g Express. I want to execute: ``` select name|| '=' || id from literals where name='vge_1' ``` only when count(vge_1) is equal to 1. Else I want to display an error. I tried the following query, but it's giving ``` ORA-00905: missing keyword ``` The query I tried is as follows: ``` select case(name) when count('vge_1') then (select name|| '=' || id from literals where name='vge_1';) else Errror end from Literals where name='vge_1'; ``` Thanks for your help in advance.
Try this [SQL Fiddle](http://sqlfiddle.com/#!4/fe716/33): ``` select distinct case when l2.c = 1 then l1.name || '=' || l1.id else 'Error' end as name_id from literals l1, ( select name, count(name) as c from literals where name = 'vge_1' group by name having count(name) = 1 ) l2 where l1.name = l2.name(+) and l1.name = 'vge_1' ; ``` The inner query is roughly same as the other answers. The outer query uses a left outer join `(+)` to determine if the inner query contains a match for your `count()` restriction. Note that you must update the name in two places when running this query.
Instead of the `case`, add `HAVING count(name)=1` at the end of the query
Sql query to select records only when count(column)=1
[ "", "sql", "oracle10g", "case", "" ]
When I run my query as `INNER JOIN` between two tables, I get the correct result - 182 in all. However, when I run the query as `LEFT JOIN`, I get only 8 records back. Am I performing the join incorrectly? First the code: ``` select e.username, e.password, coalesce(r.access_level, 0) as orgid from employees e left join retired r on e.employeeid = r.employeeid where access_level=3 ``` The `Retired` table has only 182 records. Both tables are related by `EmployeeId`. The 182 records in the `Retired` table also exist in `Employees`, but `Employees` has over 7 thousand records. One of the fields in `Retired` is called `Access_Level`, with a value of 3. Any ideas why `LEFT JOIN` isn't giving me an accurate result?
The trick with outer joins and conditions is to move the condition from the where clause into the join condtions clause: ``` select e.username, e.password, coalesce(r.access_level, 0) as orgid from employees e left join retired r on e.employeeid = r.employeeid and access_level=3 ``` The reason you must do this is that the where clause is a *filter* on the rowset,which executes *after* the joins are made. By having a condition on the outer joined table in the where clause you effectively make the outer join into an *inner* join, because missed outer joins have `null` values in the columns of the joined table, but a condition in the where clause will insist there *is* a value there. Conditions in the join clause are executed *as the join is made*, so by moving the condition out of the where clause, you allow the join to miss *while still imposing the condition*, but return an all-null joined row if no suitable row is found in the joined table. --- After some more info from comments, it seems this is what you wanted: ``` select distinct e.username, e.password, coalesce(r.access_level, 0) as orgid from employees e join retired r on e.employeeid = r.employeeid and access_level=3 ```
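The difference between filtering in the `where` clause and filtering in the join condition can be demonstrated directly (a sketch using SQLite via Python; the table layouts and rows are invented to mirror the question):

```python
import sqlite3

def join_filter_demo():
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE employees (employeeid INT, username TEXT);
        CREATE TABLE retired (employeeid INT, access_level INT);
    """)
    con.executemany("INSERT INTO employees VALUES (?,?)",
                    [(1, "ann"), (2, "bob"), (3, "cho")])
    con.executemany("INSERT INTO retired VALUES (?,?)", [(1, 3), (2, 5)])
    # Condition in WHERE: missed joins have NULL access_level, so they are filtered out.
    in_where = con.execute("""
        SELECT e.username FROM employees e
        LEFT JOIN retired r ON e.employeeid = r.employeeid
        WHERE r.access_level = 3
    """).fetchall()
    # Condition in ON: unmatched employees survive with an all-NULL joined row.
    in_on = con.execute("""
        SELECT e.username, COALESCE(r.access_level, 0)
        FROM employees e
        LEFT JOIN retired r ON e.employeeid = r.employeeid
                           AND r.access_level = 3
        ORDER BY e.employeeid
    """).fetchall()
    return in_where, in_on
```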
If `Access_Level` is a column in the `Retired` table then its value will be `NULL` for any rows in Employee that don't have a corresponding row in Retired. Those rows will then be filtered out by your WHERE clause as they *don't* have a `Access_Level` of 3. Moving that condition to the `ON` clause should be enough: ``` select e.username, e.password, coalesce(r.access_level, 0) as orgid from employees e left join retired r on e.employeeid = r.employeeid and access_level=3 ```
My LEFT JOIN is not returning correct results. What am I doing wrong?
[ "", "sql", "sql-server", "join", "" ]
I know how to do this but I don't know how to do it without a loop. Basically, for each `CustomerID + PaymentID`, I want to populate the column `"SumOfAmount"` with the total. The total will repeat for each row of that `CustomerID + PaymentID`. ![](https://i.stack.imgur.com/WHa8I.png) How would I go about doing this without a loop?
Try using a [windowing function](http://technet.microsoft.com/en-us/library/ms189461.aspx): ``` SELECT ID, CustomerID, PaymentID, ItemID, DeductionBucketID, Qty, Amount, Deductions , SUM(Amount) OVER (PARTITION BY CustomerID, PaymentID) AS SumofAmount FROM ... ```
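A small runnable sketch of the windowed `SUM` using SQLite via Python (window functions need SQLite 3.25 or newer; the rows are a cut-down, invented version of the question's data):

```python
import sqlite3

def with_group_total():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE pay (CustomerID INT, PaymentID INT, Amount REAL)")
    con.executemany("INSERT INTO pay VALUES (?,?,?)", [
        (143, 62228, 780.0), (143, 62228, 95.0),
        (143, 62238, 130.0),
    ])
    # The window SUM repeats the per-(customer, payment) total on every detail row.
    cur = con.execute("""
        SELECT CustomerID, PaymentID, Amount,
               SUM(Amount) OVER (PARTITION BY CustomerID, PaymentID) AS SumOfAmount
        FROM pay
        ORDER BY PaymentID, Amount
    """)
    return cur.fetchall()
```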
Assuming you have SQL Server 2005 or later, you can use the OVER keyword with the SUM aggregate to partition your data by the customer ID and payment ID. This code provides an example: ``` DECLARE @CustPaymentDetail AS TABLE ( CustomerID int, PaymentID int, ItemID int, Amount numeric(19,2) ); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62228, 1, 780); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62228, 1, 95.04); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62228, 1, 1443); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62228, 1, 136.43); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62228, 1, 12.99); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62238, 1, 130); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62238, 1, 702); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62238, 1, 19.49); INSERT INTO @CustPaymentDetail (CustomerID, PaymentID, ItemID, Amount) VALUES (143, 62238, 1, 12.99); SELECT *, SUM(Amount) OVER (PARTITION BY CustomerID, PaymentID) AS SumOfAmount FROM @CustPaymentDetail; ```
how to get the SUM of certain rows based on an ID and populate column with the value without using a loop
[ "", "sql", "sql-server", "" ]
I would like to give a user all the permissions on a database without making it an admin. The reason why I want to do that is that at the moment DEV and PROD are different DBs on the same cluster so I don't want a user to be able to change production objects but it must be able to change objects on DEV. I tried: ``` grant ALL on database MY_DB to group MY_GROUP; ``` but it doesn't seem to give any permission. Then I tried: ``` grant all privileges on schema MY_SCHEMA to group MY_GROUP; ``` and it seems to give me permission to create objects but not to query\delete objects on that schema that belong to other users I could go on by giving USAGE permission to the user on MY\_SCHEMA but then it would complain about not having permissions on the table ... So I guess my question is: is there any easy way of giving all the permissions to a user on a DB? I'm working on PostgreSQL 8.1.23.
*All commands must be executed while connected to the right database cluster. Make sure of it.* Roles are objects of the database *cluster*. All databases of the same cluster share the set of defined roles. Privileges are granted / revoked per database / schema / table etc. A role needs access to the *database*, obviously. That's granted to `PUBLIC` by default. Else: ``` GRANT CONNECT ON DATABASE my_db TO my_user; ``` ### Basic privileges for Postgres 14 or later Postgres 14 adds the predefined, non-login roles [**`pg_read_all_data`** / **`pg_write_all_data`**](https://www.postgresql.org/docs/current/predefined-roles.html). They have `SELECT` / `INSERT`, `UPDATE`, `DELETE` privileges for *all* tables, views, and sequences. Plus `USAGE` on schemas. We can `GRANT` membership in these roles: ``` GRANT pg_read_all_data TO my_user; GRANT pg_write_all_data TO my_user; ``` This covers all basic DML commands (but not DDL, and not some special commands like `TRUNCATE` or the `EXECUTE` privilege for functions!). [The manual:](https://www.postgresql.org/docs/current/predefined-roles.html) > `pg_read_all_data` > > Read all data (tables, views, sequences), as if having `SELECT` rights > on those objects, and `USAGE` rights on all schemas, even without > having it explicitly. This role does not have the role attribute > `BYPASSRLS` set. If RLS is being used, an administrator may wish to > set `BYPASSRLS` on roles which this role is `GRANT`ed to. > > `pg_write_all_data` > > Write all data (tables, views, sequences), as if having `INSERT`, > `UPDATE`, and `DELETE` rights on those objects, and `USAGE` rights on > all schemas, even without having it explicitly. This role does not > have the role attribute `BYPASSRLS` set. If RLS is being used, an > administrator may wish to set `BYPASSRLS` on roles which this role is > `GRANT`ed to. ### All privileges without using predefined roles (any Postgres version) *Commands must be executed while connected to the right database. 
Make sure of it.* The role needs (at least) the `USAGE` privilege on the *schema*. Again, if that's granted to `PUBLIC`, you are covered. Else: ``` GRANT USAGE ON SCHEMA public TO my_user; ``` To also allow the creation of objects, the role needs the `CREATE` privilege. [With Postgres 15, security has been tightened](https://www.postgresql.org/docs/15/release-15.html#id-1.11.6.10.4) and that privilege on the default schema `public` is not granted to `PUBLIC` any more. You might want that, too. Or just grant `ALL` to your role: ``` GRANT ALL ON SCHEMA public TO my_user; ``` Or grant `USAGE` / `CREATE` / `ALL` on *all* custom schemas: ``` DO $$ BEGIN -- RAISE NOTICE '%', ( -- use instead of EXECUTE to see generated commands EXECUTE ( SELECT string_agg(format('GRANT USAGE ON SCHEMA %I TO my_user', nspname), '; ') FROM pg_namespace -- SELECT string_agg(format('GRANT ALL ON SCHEMA %I TO my_user', nspname), '; ') WHERE nspname <> 'information_schema' -- exclude information schema and ... AND nspname NOT LIKE 'pg\_%' -- ... system schemas ); END $$; ``` Then all permissions for all *tables*. And don't forget *sequences* (if any), which are used for legacy `serial` columns. ``` GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO my_user; GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO my_user; ``` Since Postgres 10, `IDENTITY` columns can replace `serial` columns, and those don't need separate privileges for the involved sequence. See: * [Auto increment table column](https://stackoverflow.com/questions/9875223/auto-increment-table-column/9875517#9875517) Alternatively, you could use the ["Grant Wizard" of pgAdmin 4](https://www.pgadmin.org/docs/pgadmin4/latest/grant_wizard.html) to work with a GUI. This covers privileges for **existing** objects. To also cover future objects, set [`DEFAULT PRIVILEGES`](https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html). 
See: * [Grant privileges for a particular database in PostgreSQL](https://stackoverflow.com/questions/24918367/grant-privileges-for-a-particular-database-in-postgresql/24923877#24923877) * [**How to manage DEFAULT PRIVILEGES for USERs on a DATABASE vs SCHEMA?**](https://dba.stackexchange.com/a/117661/3684) There are some other objects, [the manual for `GRANT`](https://www.postgresql.org/docs/current/sql-grant.html) has the complete list. As of Postgres 14: > privileges on a database object (table, column, view, foreign table, sequence, database, foreign-data wrapper, foreign server, function, procedure, procedural language, schema, or tablespace) But the rest is rarely needed. More details: * [Grant privileges for a particular database in PostgreSQL](https://stackoverflow.com/questions/24918367/grant-privileges-for-a-particular-database-in-postgresql/24923877#24923877) * [How to grant all privileges on views to arbitrary user](https://stackoverflow.com/questions/10491208/how-to-grant-all-privileges-on-views-to-arbitrary-user/10491275#10491275) Consider [upgrading to a current version](https://www.postgresql.org/support/versioning/).
``` GRANT ALL PRIVILEGES ON DATABASE "my_db" to my_user; ```
PostgreSQL: Give all permissions to a user on a PostgreSQL database
[ "", "sql", "postgresql", "ddl", "privileges", "sql-grant", "" ]
I need to display records from a MySQL database where a particular column contains only alphabetic characters. E.g., Table Name: data ``` column abcde 12345 xyz 123 ``` so the output should be abcde and xyz only. So far I have tried pattern matching but with no luck. Here is what I use: `SELECT * FROM listing WHERE Zip LIKE '[^a-zA-Z]'`
Try this code ``` SELECT * FROM listing WHERE Zip REGEXP '^[a-zA-Z]+$' ``` Note the class is written as `[a-zA-Z]` rather than `[A-z]`: the `A-z` range also matches the six punctuation characters that sit between `Z` and `a` in ASCII.
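Since MySQL's `REGEXP` can't easily be tried without a server, here is a small self-contained sketch of the same filter using Python's `sqlite3` (SQLite has no built-in `REGEXP`, so one is registered first); the table and data mirror the question and are otherwise hypothetical:

```python
import re
import sqlite3

# SQLite has no built-in REGEXP operator, so we register one.
# In "X REGEXP Y", SQLite calls regexp(Y, X): pattern first, then value.
conn = sqlite3.connect(":memory:")
conn.create_function(
    "REGEXP", 2,
    lambda pattern, value: re.search(pattern, str(value)) is not None,
)

conn.execute("CREATE TABLE listing (Zip TEXT)")
conn.executemany("INSERT INTO listing VALUES (?)",
                 [("abcde",), ("12345",), ("xyz",), ("123",)])

# '^[a-zA-Z]+$' keeps only rows made up entirely of letters
rows = conn.execute(
    "SELECT Zip FROM listing WHERE Zip REGEXP '^[a-zA-Z]+$'"
).fetchall()
print([r[0] for r in rows])  # -> ['abcde', 'xyz']
```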
Try regexp: ``` SELECT * FROM listing WHERE Zip REGEXP '^[a-zA-Z.]+$' ```
Mysql query to select a column with only alphabetical characters
[ "", "mysql", "sql", "" ]
I require some help with my very shaky sql skills. Say I have the following select statement: ``` SELECT DISTINCT p.ProjectId, p.Title, i.Name, p.StartDate, p.EndDate, ped.ProjectEthicsDocumentId, st.Description AS StatusText FROM dbo.Project p inner join dbo.WorkflowHistory w ON p.ProjectId = w.ProjectId left join dbo.ProjectInstitution pi ON pi.ProjectId = p.ProjectId left join dbo.Institution i ON i.InstitutionId = pi.InstitutionId left join dbo.ProjectEthicsDocument ped on p.ProjectId = ped.ProjectId left join dbo.Status st ON p.StatusId = st.StatusId ``` This will return all the projects and other relevant details from the relevant tables. Now, say I have 2 institutions for 'Project A'. This statement will return 2 rows for 'Project A', one for each institution. How do I set it so that it only returns the first row of each project it finds? I want one instance of every project with say the first institution found.
You can move selecting institution name to a subquery. This way it doesn't affect how other tables are joined. ``` SELECT DISTINCT p.ProjectId, p.Title, (SELECT TOP 1 i.Name FROM dbo.Institution i INNER JOIN dbo.ProjectInstitution pi ON i.InstitutionId = pi.InstitutionId WHERE pi.ProjectId = p.ProjectId) AS Name, p.StartDate, p.EndDate, ped.ProjectEthicsDocumentId, st.Description AS StatusText FROM dbo.Project p inner join dbo.WorkflowHistory w ON p.ProjectId = w.ProjectId left join dbo.ProjectEthicsDocument ped on p.ProjectId = ped.ProjectId left join dbo.Status st ON p.StatusId = st.StatusId ```
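A runnable sketch of the same correlated-subquery idea, using SQLite via Python (SQLite spells `TOP 1` as `LIMIT 1`); the cut-down schema and sample data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Project (ProjectId INTEGER, Title TEXT);
CREATE TABLE ProjectInstitution (ProjectId INTEGER, InstitutionId INTEGER);
CREATE TABLE Institution (InstitutionId INTEGER, Name TEXT);
INSERT INTO Project VALUES (1, 'Project A'), (2, 'Project B');
INSERT INTO ProjectInstitution VALUES (1, 10), (1, 20), (2, 30);
INSERT INTO Institution VALUES (10, 'Uni X'), (20, 'Uni Y'), (30, 'Uni Z');
""")

# Correlated subquery: at most one institution name per project row,
# even though Project A is linked to two institutions.
rows = conn.execute("""
SELECT p.ProjectId, p.Title,
       (SELECT i.Name
          FROM Institution i
          JOIN ProjectInstitution pi ON i.InstitutionId = pi.InstitutionId
         WHERE pi.ProjectId = p.ProjectId
         ORDER BY i.InstitutionId
         LIMIT 1) AS Name
FROM Project p
ORDER BY p.ProjectId
""").fetchall()
print(rows)  # -> [(1, 'Project A', 'Uni X'), (2, 'Project B', 'Uni Z')]
```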
The easiest way is probably with the `row_number()` function: ``` select * from (SELECT DISTINCT p.ProjectId, p.Title, i.Name, p.StartDate,p.EndDate, ped.ProjectEthicsDocumentId, st.Description AS StatusText, row_number() over (partition by p.ProjectId order by i.InstitutionId) as seqnum FROM dbo.Project p inner join dbo.WorkflowHistory w ON p.ProjectId = w.ProjectId left join dbo.ProjectInstitution pi ON pi.ProjectId = p.ProjectId left join dbo.Institution i ON i.InstitutionId = pi.InstitutionId left join dbo.ProjectEthicsDocument ped on p.ProjectId = ped.ProjectId left join dbo.Status st ON p.StatusId = st.StatusId ) p where seqnum = 1; ```
Sql join 1 instance
[ "", "sql", "sql-server", "select", "join", "distinct", "" ]
How can I get a dynamic ORDER BY based on a parameter in Oracle SQL? Based on the condition, one of the dynamic ORDER BY columns should be bound. What is wrong with my query? Please help me solve this. The parameter value will be passed in dynamically by the user. ``` SELECT actionstatus, id FROM rdt_orderdetail WHERE id IN (45565,44033,45761,45543,45495,43472,42462,43477) CASE WHEN ':user' = 'supplier THEN ORDER BY id ELSE ORDER BY actionstatus END ```
Try something like this: ``` SELECT actionstatus, id FROM rdt_orderdetail WHERE id IN (45565,44033,45761,45543,45495,43472,42462,43477) order by (CASE WHEN ':user' = 'supplier' THEN id ELSE actionstatus END) ``` [Example fiddle](http://www.sqlfiddle.com/#!2/16441/2)
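An alternative worth noting: when the application builds the query, the usual approach is to whitelist the column name in code rather than use a typed `CASE` in SQL (the `CASE` above mixes a numeric and a character column, which some databases reject). A hedged sketch with Python and SQLite, with made-up sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rdt_orderdetail (id INTEGER, actionstatus TEXT)")
conn.executemany("INSERT INTO rdt_orderdetail VALUES (?, ?)",
                 [(2, 'OPEN'), (1, 'CLOSED'), (3, 'ACK')])

def fetch_ordered(user):
    # Whitelist the ORDER BY column: column names cannot be bound as
    # parameters, so never interpolate raw user input into the SQL text.
    column = 'id' if user == 'supplier' else 'actionstatus'
    return conn.execute(
        f"SELECT actionstatus, id FROM rdt_orderdetail ORDER BY {column}"
    ).fetchall()

print(fetch_ordered('supplier'))  # ordered by id
print(fetch_ordered('customer'))  # ordered by actionstatus
```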
You want to use dynamic order ? Then you'll have to use dynamic SQL :-) ``` create or replace function GET_DETAILS(iField in varchar2) return sys_refcursor is aQuery varchar2(1000); aCursor SYS_REFCURSOR; begin aQuery := 'select actionstatus,id from rdt_orderdetail where id in (45565,44033,45761,45543,45495,43472,42462,43477) order by ' || case iField when 'sup' then 'id' else 'actionstatus' end; open aCursor for aQuery; return aCursor; end; / ```
Dynamic Order by using Case in SQL
[ "", "sql", "oracle", "" ]
In a MySQL calendar, items are placed using start+duration. Both integers, BUT: Start looks like this: `YYYYMMDDhhmm` while duration is just a number of minutes. Before inserting a new item I have to ensure it doesn't overlap existing items. **The hard part**: Is any existing item's start+duration > my new item's start? In SQL: Calculate `(YYYYMMDDhhmm + minutes)` and return a valid `YYYYMMDDhhmm` **This helps**: `start+duration` will **never** cause the `YYYYMMDD-part` to change : ) *Thanks to everybody - my (PHP) implementation:* ``` $exists = runSQL("SELECT id FROM cal ". "WHERE STR_TO_DATE(start,'%Y%m%d%H%i') + INTERVAL duration MINUTE ". " > ". " STR_TO_DATE( ? ,'%Y%m%d%H%i') ". " AND STR_TO_DATE(start,'%Y%m%d%H%i') ". " < ". " STR_TO_DATE( ? ,'%Y%m%d%H%i') + INTERVAL ? MINUTE" ,array("iii", $start, $start, $minutes )); if (count($exists) > 0) bitch("Too late - Update your view"); ``` Works perfectly BUT invokes `STR_TO_DATE()` 4 times. Thanks to Spencer7593: A much faster (and shorter) version: ``` $exists = runSQL("SELECT id FROM cal ". "WHERE start*100 + INTERVAL duration MINUTE > ?*100 + INTERVAL 0 MINUTE ". " AND start*100 + INTERVAL 0 MINUTE < ?*100 + INTERVAL ? MINUTE" ,array("iii", $start, $start, $minutes )); if (count($exists) > 0) bitch("Too late, slot taken - Update your view"); ```
One way to convert to a DATETIME, from integer "start" representation YYYYMMDDHHMM and adding an integer "duration" in minutes, is to just add a seconds portion (change from 12 digits to 14 digits) by multiplying by 100, and then add an interval, and MySQL will implicitly convert the integer value to DATETIME. For example: ``` t.start*100 + INTERVAL t.duration MINUTE ``` To convert just the "start" without adding a "duration", do it exactly the same way, except specify a literal `0` in place of `t.duration`. For example: ``` t.start*100 + INTERVAL 0 MINUTE ``` --- **FOLLOWUP** **Q:** Does this implicitly convert to string and call STR\_TO\_DATE, requiring "more computing" than using STR\_TO\_DATE? **A:** I don't believe it does. Performance test results demonstrate significantly faster without the STR\_TO\_DATE function; which likely means it's actually a shorter code path. Query 1: implicit conversion of integer to DATETIME ``` SELECT /*!40001 SQL_NO_CACHE */ t.start FROM demo0317 t WHERE ( t.start*100 + INTERVAL t.duration MINUTE ) = STR_TO_DATE('2014-03-17 09:39','%Y-%m-%d %h:%i') ``` Query 2: conversion of integer to DATETIME using STR\_TO\_DATE ``` SELECT /*!40001 SQL_NO_CACHE */ t.start FROM demo0317 t WHERE ( STR_TO_DATE(t.start,'%Y%m%d%h%i') + INTERVAL t.duration MINUTE ) = STR_TO_DATE('2014-03-17 09:39','%Y-%m-%d %h:%i') ``` Query elapsed times: ``` run Query 1 Query 2 --- --------- ---------- 2: 8.129 sec 15.520 sec 3: 8.124 sec 15.625 sec 4: 8.125 sec 15.622 sec 5: 8.123 sec 15.626 sec --- --------- ---------- avg 8.125 sec 15.598 sec ```
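The `YYYYMMDDhhmm + minutes` arithmetic and the overlap test from the question can also be sketched in plain Python, which makes the interval logic easy to verify; `add_minutes` and `overlaps` are illustrative helper names, not part of the original code:

```python
from datetime import datetime, timedelta

def add_minutes(start, duration):
    """start is an integer like YYYYMMDDhhmm; duration is in minutes."""
    dt = datetime.strptime(str(start), "%Y%m%d%H%M") + timedelta(minutes=duration)
    return int(dt.strftime("%Y%m%d%H%M"))

print(add_minutes(201403151200, 60))  # -> 201403151300

def overlaps(start_a, dur_a, start_b, dur_b):
    # Two slots overlap when each one starts before the other ends.
    return (add_minutes(start_a, dur_a) > start_b
            and start_a < add_minutes(start_b, dur_b))

print(overlaps(201403151200, 60, 201403151230, 30))  # -> True
print(overlaps(201403151200, 60, 201403151300, 30))  # -> False
```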
> *How to (YYYYMMDDhhmm + minutes) and return a valid YYYYMMDDhhmm (in SQL)?* One way to do it ``` SELECT t.*, 1 * DATE_FORMAT(STR_TO_DATE(start, '%Y%m%d%H%i') + INTERVAL duration MINUTE, '%Y%m%d%H%i') end FROM table_name t ``` or without string manipulation ``` SELECT t.*, FLOOR(start / 10000) * 10000 + FLOOR(start MOD 10000 / 100) * 100 + FLOOR((start MOD 100 + duration) / 60) * 100 + (start MOD 100 + duration) MOD 60 end FROM calendar t ``` Sample output for both cases: ``` | ID | START | DURATION | END | |----|--------------|----------|--------------| | 1 | 201403151200 | 60 | 201403151300 | | 1 | 201403161545 | 35 | 201403161620 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/dbcaea/1)** demo
Time calculation with SQL
[ "", "mysql", "sql", "" ]
I am new to SQL and I cannot figure out why my SQL result returns all data from the wrong table. I want to receive all data from the product table but I want to filter the results so it does not show those results that have some value in another table. My SQL statement is: ``` SELECT * FROM orders JOIN products ON orders.product_id=products.product_id AND NOT order_date=somedate; ``` This query returns everything from the orders table where somedate is not the given value, but I want to get everything from the products table if the orders table does not have the given somedate as a value (when the product is not reserved). Edit: Thanks for the help everyone! Each answer worked perfectly for me with some slight modifications :)
You can get products which don't have orders of a certain criteria (`order_date != somedate`) like this: ``` SELECT p.* FROM products p LEFT JOIN orders o ON o.product_id = p.product_id AND order_date != somedate WHERE o.id IS NULL ```
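The anti-join pattern above can be demonstrated end to end with SQLite from Python; this sketch uses `= somedate` to mean "reserved on that date" and invents minimal sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (product_id INTEGER, name TEXT);
CREATE TABLE orders (product_id INTEGER, order_date TEXT);
INSERT INTO products VALUES (1, 'desk'), (2, 'chair'), (3, 'lamp');
INSERT INTO orders VALUES (1, '2014-03-15'), (3, '2014-03-16');
""")

# Anti-join: keep products with NO matching order on the given date.
# The filter goes in the ON clause; the WHERE keeps only non-matches.
rows = conn.execute("""
SELECT p.name
FROM products p
LEFT JOIN orders o
       ON o.product_id = p.product_id
      AND o.order_date = ?
WHERE o.product_id IS NULL
""", ('2014-03-15',)).fetchall()
print(sorted(r[0] for r in rows))  # -> ['chair', 'lamp']
```

The desk is excluded because it has an order on the given date; the lamp survives because its order is on a different date, so the join condition never matches.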
Not sure I got your question right, but you can try this: ``` SELECT p.* FROM orders JOIN products ON orders.product_id=products.product_id WHERE order_date<>somedate; ```
SQL join returns all values from the wrong table
[ "", "mysql", "sql", "" ]
I would like to join 3 tables together and have it return the data where the results from one would be null. tblCustomer is the main table that will always have the required result. It would then be tied to 2 other tables, tblEvents and tblPending, and should return the data for either. Example ``` tblCustomer.CustomerID tblEvents.EventID tblPending.PendingID 1 50 NULL 1 51 NULL 2 NULL 30 2 70 NULL 2 NULL 90 ``` I've tried outer joining the tables like so: ``` dbo.tblEvents LEFT OUTER JOIN dbo.tblCustomer ON dbo.tblEvents.CustomerID = dbo.tblCustomer.CustomerID LEFT OUTER JOIN dbo.tblPending ON dbo.tblCustomer.CustomerID = dbo.tblPending.CustomerID ``` But it gives me results like this: ``` CustomerID EventID PendingID 2 70 30 2 70 90 ``` Is there a simple way to make it join to the Customer table but not try to join to the additional table? I.e., there is no relationship between Pending and Event except that they are both related to Customer.
If you only want one of event or pending you may want the union of two joins, e.g. ``` SELECT tblCustomer.CustomerID, tblEvents.EventID, null AS PendingID FROM dbo.tblCustomer LEFT OUTER JOIN dbo.tblEvents ON dbo.tblEvents.CustomerID = dbo.tblCustomer.CustomerID UNION SELECT tblCustomer.CustomerID, null AS EventID, tblPending.PendingID FROM dbo.tblCustomer LEFT OUTER JOIN dbo.tblPending ON dbo.tblCustomer.CustomerID = dbo.tblPending.CustomerID ```
what you want is a FULL OUTER JOIN, which doesnt exist in mysql, so you have to do your left outer join and union with a right outer join ``` LEFT OUTER JOIN dbo.tblCustomer ON dbo.tblEvents.CustomerID = dbo.tblCustomer.CustomerID LEFT OUTER JOIN dbo.tblPending ON dbo.tblCustomer.CustomerID = dbo.tblPending.CustomerID UNION RIGHT OUTER JOIN dbo.tblCustomer ON dbo.tblEvents.CustomerID = dbo.tblCustomer.CustomerID RIGHT OUTER JOIN dbo.tblPending ON dbo.tblCustomer.CustomerID = dbo.tblPending.CustomerID ``` after this, the tables should be combined in the correct order and you can filter out by a field not being null or a specific id
SQL outer Join 3 tables merging data
[ "", "sql", "sql-server", "join", "" ]
I know how to get the last record of a group but I also need to get the second-to-last record. How can I do this? Here is my code for getting the last record. ``` select job qjob, max(id) qid from sqbclog group by job ```
``` SELECT * FROM ( select job AS qjob , id AS qid ,ROW_NUMBER() OVER (PARTITION BY JOB ORDER BY ID DESC) AS RN from sqbclog )Sub WHERE rn <= 2 ``` This query will return the last two records per group; if you only need the 2nd-to-last record, change the WHERE clause to `WHERE RN = 2`, else leave it as it is.
**This should do it !** ``` select * from sqbclog where id not in ( select top ( (select count(*) from sqbclog ) - 2 ) id from sqbclog ) ```
getting second to last record per group in SQL
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I am pretty new to Delphi and I would like to ask how I can create a correct SQL statement / SQL string in Delphi. I have tried something like this: ``` sql:='use [TestovaciaDb] INSERT INTO [dbo].[client]([Meno],[Priezvisko]) VALUES('+name+','+surname+')'; ``` I am using MS SQL Server 2012 but I am getting an exception there. Thank you EDIT: `meno` and `priez` are variables with values from `TEdit1` and `TEdit2`: ``` meno:= Edit1.Text; priez:= Edit2.Text; ```
Use parameterized queries. You set the database in your ConnectionString, so you don't need to `use` it in your query. ``` ADOQuery1.SQL.Text := 'INSERT INTO [dbo].[client] ([Meno],[Priezvisko]) ' + 'VALUES(:Meno, :Priezvisko)'; ADOQuery1.Parameters.ParamByName('Meno').Value := Edit1.Text; ADOQuery1.Parameters.ParamByName('Priezvisko').Value := Edit2.Text; ADOQuery1.ExecSQL; ```
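The same parameterized-query idea, shown runnable with Python's `sqlite3` (the mechanism is the driver-agnostic part; the ADO calls above are Delphi-specific). Table and column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (Meno TEXT, Priezvisko TEXT)")

# Parameter values travel separately from the SQL text, so a surname
# containing a quote (O'Brien) cannot break or hijack the statement.
meno, priez = "Jan", "O'Brien"
conn.execute("INSERT INTO client (Meno, Priezvisko) VALUES (?, ?)",
             (meno, priez))

rows = conn.execute("SELECT Meno, Priezvisko FROM client").fetchall()
print(rows)  # -> [('Jan', "O'Brien")]
```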
Remove the `use [xxx]` at the begining of the statement. The connection you use must be already configured to point to the correct database. Just like many others said, avoid creating your sentences by using constants, instead, use paramenters.
How to create correct SQL statement in delphi
[ "", "sql", "sql-server", "delphi", "" ]
I have the following query which returns the correct results but I am aware of the overheads associated with subqueries in a WHERE IN clause. ``` SELECT event_start_date FROM events JOIN joins ON event_id = join_event_id JOIN groups ON join_type_id IN (SELECT group_id FROM view_groups WHERE user_id = 3) WHERE user_id = 3 AND event_start_date >= now() GROUP BY event_start_date; ``` Can anyone help out with rewriting this query to avoid the subquery and/or the WHERE IN clause.
MySQL does [optimize uncorrelated subqueries](http://dev.mysql.com/doc/refman/5.0/en/optimizing-subqueries.html) (which you have in your example): > MySQL executes uncorrelated subqueries only once. Use EXPLAIN to make > sure that a given subquery really is uncorrelated. If your subquery returns only one result, be sure to use equal instead: > For uncorrelated subqueries that always return one row, IN is always > slower than =. Regardless, this is how you rewrite your query as a JOIN: ``` SELECT events.event_start_date FROM events JOIN joins ON events.event_id = joins.join_event_id JOIN view_groups ON view_groups.user_id = 3 JOIN groups ON groups.join_type_id = view_groups.group_id WHERE events.user_id = 3 AND events.event_start_date >= NOW() GROUP BY events.event_start_date; ```
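A small SQLite sketch showing the `IN (subquery)` form and a rewritten `JOIN` form returning the same rows; the tables are heavily simplified stand-ins for the question's schema (note the `DISTINCT` guarding against duplicates a join can introduce):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE view_groups (user_id INTEGER, group_id INTEGER);
CREATE TABLE groups_t (join_type_id INTEGER, name TEXT);
INSERT INTO view_groups VALUES (3, 1), (3, 2), (4, 9);
INSERT INTO groups_t VALUES (1, 'a'), (2, 'b'), (9, 'c');
""")

in_rows = conn.execute("""
SELECT name FROM groups_t
WHERE join_type_id IN (SELECT group_id FROM view_groups WHERE user_id = 3)
ORDER BY name
""").fetchall()

join_rows = conn.execute("""
SELECT DISTINCT g.name
FROM groups_t g
JOIN view_groups v ON v.group_id = g.join_type_id AND v.user_id = 3
ORDER BY g.name
""").fetchall()

print(in_rows == join_rows, in_rows)  # -> True [('a',), ('b',)]
```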
try this: ``` SELECT event_start_date FROM events JOIN joins ON event_id = join_event_id join view_groups on ............condition here JOIN groups ON join_type_id = view_groups.group_id WHERE user_id = 3 AND event_start_date >= now() GROUP BY event_start_date; ```
Re-writing query to avoid WHERE IN with a subquery
[ "", "mysql", "sql", "subquery", "where-in", "" ]
I'm trying to create a temp table and process two data flows using the temp table. It is in a sequence container, and if I just execute the container it runs perfectly, but when the entire package is run it returns this error: > Information: 0x4004300A at V-AccidentCodesBase, SSIS.Pipeline: > Validation phase is beginning. > > Error: 0xC0202009 at V-AccidentCodesBase, Insert into Temp Table [69]: > SSIS Error Code DTS\_E\_OLEDBERROR. An OLE DB error has occurred. Error > code: 0x80040E14. > > An OLE DB record is available. Source: "Microsoft SQL Server Native > Client 11.0" Hresult: 0x80040E14 Description: "Statement(s) could > not be prepared.". > > An OLE DB record is available. Source: "Microsoft SQL Server Native > Client 11.0" Hresult: 0x80040E14 Description: "Invalid object name > '##TmpAccidentCode'.". > > Error: 0xC004706B at V-AccidentCodesBase, SSIS.Pipeline: "Insert into > Temp Table" failed validation and returned validation status > "VS\_ISBROKEN". > > Error: 0xC004700C at V-AccidentCodesBase, SSIS.Pipeline: One or more > component failed validation. > > Error: 0xC0024107 at V-AccidentCodesBase: There were errors during > task validation.
I ended up solving the problem, which was overloading tempdb. When I slowed the process down to one command against tempdb at a time, it all ran through smoothly.
I would set the DelayValidation property to True. You may get away with just setting this on the Sequence Container, or you may need to repeat that setting on child objects, e.g. your Data Flow Task.
SSIS failed validation and returned validation status "VS_ISBROKEN"
[ "", "sql", "sql-server", "validation", "ssis", "" ]
I was wondering if it is possible to write a query that returns a field only if that field contains a certain data type. So for example, I could want to return all fields with data type VARCHAR from a search, or data type BIGINT. Not just search for fields that contain numbers or letters. Here is an example of my pseudocode (I know it's not very impressive): ``` select * from dbo.REP_ALIAS ra where ra.TRADE_REP [= data type integer] --integer is just an example ``` Is there some way to do this for all data types, not just integer? Also: I am using T-SQL with MS SQL Server 2008 R2.
The Data Type of any field in SQL is set when the field (aka column) is created. So I think you might be intending to ask whether or not a field's value can be converted (or cast) to another type. This will depend on who is doing the conversion, and what they are capable of: If you store the string "3/1/2014" who determines if that's a legitimate date and if it is in January or March? SQL has some rules for this, but they are sometimes different than .NET conversions. And if the field was set up as a string (`NVARCHAR` or `VARCHAR` typically) it can only contain a string and not a true datetime value. This is the source of many hacks in SQL but the best answer is to get the field set to the right datatype. Sometimes this involves multiple fields, almost redundant, for different types of data. About half way down this page is a chart with the SQL native types and which can be converted to others: <http://msdn.microsoft.com/en-us/library/ms187928.aspx>
You can exclude non-digits ``` WHERE ra.TRADE_REP NOT LIKE '%[^0-9]%' ```
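For comparison, the same "digits only" test in application code is a one-liner; note one edge case where it differs from the T-SQL pattern (an empty string passes `NOT LIKE '%[^0-9]%'` but fails `isdigit()`):

```python
# T-SQL's "NOT LIKE '%[^0-9]%'" means: no character outside 0-9.
# In application code the equivalent check is str.isdigit() (or a regex),
# except that isdigit() also rejects the empty string.
values = ["12345", "12a45", "", "007"]
digits_only = [v for v in values if v.isdigit()]
print(digits_only)  # -> ['12345', '007']
```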
Is it possible to query for a specific data type?
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I have data like this `1,2,3,4-8,10,11` I want split the data into rows with these 2 rules : 1. The `,` will only split the data into rows. Ex 1,2,3 become : ``` 1 2 3 ``` 2. The `-` will split into series number. Ex 4-8 become : ``` 4 5 6 7 8 ``` How can a SQL query do that? Please answer and keep it simple.
This will work as long as your intervals are less than 2048 (let me know if those numbers can go higher) and your @data follows your current syntax: ``` declare @data varchar(50) = '1,2,3,4-8,10,11' ;with x as ( SELECT t.c.value('.', 'VARCHAR(2000)') subrow FROM ( SELECT x = CAST('<t>' + REPLACE(@data, ',', '</t><t>') + '</t>' AS XML) ) a CROSS APPLY x.nodes('/t') t(c) ), y as ( SELECT CAST(coalesce(PARSENAME(REPLACE(subrow, '-', '.'), 2), PARSENAME(REPLACE(subrow, '-', '.'), 1)) as int) f, CAST(PARSENAME(REPLACE(subrow, '-', '.'), 1) as int) t from x ) select z.number from y cross apply (select y.f + number number from master..spt_values where number <= y.t - y.f and type = 'p' ) z ``` Result: ``` 1 2 3 4 5 6 7 8 10 11 ```
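Outside the database the same split-and-expand logic is short; this Python sketch follows the two rules from the question (`,` splits into rows, `-` expands into a series):

```python
def expand(data):
    """Expand '1,2,3,4-8,10,11' into the full list of numbers."""
    out = []
    for part in data.split(','):
        if '-' in part:
            lo, hi = (int(x) for x in part.split('-'))
            out.extend(range(lo, hi + 1))  # inclusive range
        else:
            out.append(int(part))
    return out

print(expand('1,2,3,4-8,10,11'))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 10, 11]
```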
``` CREATE FUNCTION dbo.MultipleDelemiterSplit ( @List NVARCHAR(MAX), @Delemiter1 Varchar(100), @Delemiter2 Varchar(100) ) RETURNS TABLE AS RETURN ( SELECT Item = FirstSet.cnt.value('(./text())[1]', 'nvarchar(4000)') FROM ( SELECT x = CONVERT(XML, '<cnt>' + REPLACE(REPLACE(@List, ISNULL(@Delemiter1,''), '</cnt><cnt>') , ISNULL(@Delemiter2,''), '</cnt><cnt>') + '</cnt>').query('.') ) AS a CROSS APPLY x.nodes('cnt') AS FirstSet(cnt) ); GO Select * From dbo.MultipleDelemiterSplit ('10:0,11:1,12:3,13:4,15:5,16:6',',',':') ```
Split data in sql with multiple delimiters ( , and - ) with owns rule
[ "", "sql", "sql-server", "t-sql", "split", "numbers", "" ]
I have a replace statement that does something like this: ``` SELECT Distinct Forenames, Surname, dbUSNs.DateOfBirth, Datasetname, dbUSNs.MoPIGrade, SourceAddress, VRM, URNs FROM Person WHERE ( Replace(Replace(Replace(Replace(Replace(Replace(Replace (Replace(Replace(Replace(Replace(Replace(Replace(Replace (Replace(Replace(Replace(Replace(Replace(Replace(Replace (Replace(Replace(Replace(Replace (Surname,'/',''''),'?',''''),'',''''),'^',''''),'{',''''),'}',''''), '[',''''),']',''''),';',''''),'$',''''),'=',''''),'*',''''), '#',''''),'|',''''),'&',''''),'@',''''),'\',''''),'<',''''), '>',''''),'(',''''),')',''''),'+',''''),',',''''),'.',''''), ' ','''') LIKE 'OREILLY%') ``` Therefore even though OReilly is passed, O'Reilly will be found. However, this is too slow. Is there a better way of approaching it?
The problem isn't that `REPLACE` is "too slow", but that using it at all makes that part of the query *unsargable*, meaning that it can't use an index. [Wikipedia: Sargable](http://en.wikipedia.org/wiki/Sargable) Basically you've forced a tablescan / indexscan, from top to bottom. On top of that you have the overhead of `REPLACE`. If you want this query to run fast, I would instead do one of the following: * Create an additional column containing a searchable text version of the Surname * Create an indexed, materialized view with those REPLACE functions
If you want to simply remove all special characters it's easier to specify the valid characters and use a function to perform the cleansing. This shows you how to clean the string to alphanumeric characters and spaces `'%[^a-z0-9 ]%'` ``` DECLARE @Temp nvarchar(max) ='O''Rielly la/.das.d,as/.d,a/.da.sdo23eu89038 !£$$' SELECT @Temp DECLARE @KeepValues AS VARCHAR(50) = '%[^a-z0-9 ]%' WHILE PatIndex(@KeepValues, @Temp) > 0 SET @Temp = Stuff(@Temp, PatIndex(@KeepValues, @Temp), 1, '') SELECT @Temp ``` Which would return: `ORielly ladasdasdadasdo23eu89038` So you can write a function: ``` CREATE FUNCTION [dbo].[RemoveNonAlphaCharacters](@Temp VARCHAR(1000)) RETURNS VARCHAR(1000) AS BEGIN DECLARE @KeepValues AS VARCHAR(50) = '%[^a-z0-9 ]%' WHILE PatIndex(@KeepValues, @Temp) > 0 SET @Temp = Stuff(@Temp, PatIndex(@KeepValues, @Temp), 1, '') RETURN @Temp END ``` Then simply call it like so: ``` SELECT * FROM Person WHERE [dbo].[RemoveNonAlphaCharacters](Surname) LIKE 'OREILLY%' ``` If you don't want spaces, just change it to: `'%[^a-z0-9]%'`
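The same character-class cleanup is easy to mirror in application code, which also lets you precompute and store the cleaned value in its own (indexed) column, avoiding the per-row function call that makes the query unsargable; `clean` is an illustrative name:

```python
import re

def clean(s):
    # Keep letters, digits and spaces -- the same character class
    # as the '%[^a-z0-9 ]%' pattern used in the T-SQL function.
    return re.sub(r'[^A-Za-z0-9 ]', '', s)

surname = "O'Reilly"
print(clean(surname))                                 # -> 'OReilly'
print(clean(surname).upper().startswith('OREILLY'))   # -> True
```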
SQL replace statement too slow
[ "", "sql", "sql-server", "" ]
I have a problem where I want to replace characters. I am using the `replace` function but it is not giving the desired output. Values of the column table_value need to be replaced with their full names, like E - Email P - Phone M - Meeting ![enter image description here](https://i.stack.imgur.com/0hjtN.jpg) I am using this query ``` select table_value, replace(replace(replace(table_value, 'M', 'MEETING'), 'E', 'EMAIL'), 'P', 'PHONE') required_value from foobar ``` so the second `required_value` row should be `EMAIL,PHONE,MEETING` and so on. What should I do so that the required value is correct?
The below will work (*even if it's not a smart solution*). ``` select table_value, replace(replace(replace(replace(table_value, 'M', 'MXXTING'), 'E', 'XMAIL'), 'P', 'PHONX'), 'X', 'E') required_value from foobar ```
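Why the placeholder trick is needed at all is easy to demonstrate: a naive replace chain corrupts its own earlier output, because `MEETING` introduces new `E` characters that later replacements then mangle. A Python sketch of the failure and of a lookup-table fix:

```python
value = "E,P,M"

# Naive chained replace goes wrong: 'M' -> 'MEETING' adds new 'E'
# characters that the following replacement then rewrites too.
naive = (value.replace('M', 'MEETING')
              .replace('E', 'EMAIL')
              .replace('P', 'PHONE'))
print(naive)  # not 'EMAIL,PHONE,MEETING'

# A lookup table applied per token avoids the ordering problem.
mapping = {'E': 'EMAIL', 'P': 'PHONE', 'M': 'MEETING'}
fixed = ','.join(mapping[c] for c in value.split(','))
print(fixed)  # -> 'EMAIL,PHONE,MEETING'
```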
You can do it using CTE to split the table values into E, P and M, then replace and put back together. I assumed each record has a unique identifer `Id` but please replace that with whatever you have. ``` ;WITH cte AS ( SELECT Id, SUBSTRING(table_value, 1, 1) AS SingleValue, 1 AS ValueIndex FROM replacetable UNION ALL SELECT replacetable.Id, SUBSTRING(replacetable.table_value, cte.ValueIndex + 1, 1) AS SingleValue, cte.ValueIndex + 1 AS ValueIndex FROM cte INNER JOIN replacetable ON cte.ValueIndex < LEN(replacetable.table_value) ) SELECT DISTINCT Id, STUFF((SELECT DISTINCT ','+ CASE SingleValue WHEN 'E' THEN 'EMAIL' WHEN 'P' THEN 'PHONE' WHEN 'M' THEN 'MEETING' END FROM cte c WHERE c.Id = cte.Id AND SingleValue <> ',' FOR XML PATH ('')),1,1,'') FROM cte ```
Replace multiple characters in SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a SELECT SQL command like the one below: ``` SELECT Title,G ,K ,M, CASE WHEN (tblMain.G <> 0 AND tblMain.K <> 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G AND tblCustomer.K = tblMain.K) WHEN (tblMain.G <> 0 AND tblMain.K = 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G) ELSE 0 END AS B1, CASE WHEN (tblMain.G <> 0 AND tblMain.K <> 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G AND tblCustomer.K = tblMain.K) WHEN (tblMain.G <> 0 AND tblMain.K = 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G) else 0 END AS S1, (S1 - B1) AS Result FROM tblMain ``` but my problem is that the `(S1 - B1) AS Result` expression does not work and gives me the following error: ``` Invalid column name 'S1'. Invalid column name 'B1'. ``` How can I solve this problem?
``` with SubQuery as ( SELECT Title,G ,K ,M, CASE WHEN (tblMain.G <> 0 AND tblMain.K <> 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G AND tblCustomer.K = tblMain.K) WHEN (tblMain.G <> 0 AND tblMain.K = 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G) ELSE 0 END AS B1, CASE WHEN (tblMain.G <> 0 AND tblMain.K <> 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G AND tblCustomer.K = tblMain.K) WHEN (tblMain.G <> 0 AND tblMain.K = 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G) else 0 END AS S1 FROM tblMain ) select e.*, (e.S1 - e.B1) AS Result from SubQuery e ```
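The rule being worked around here: column aliases defined in a SELECT list are not visible to sibling expressions in the same list, so the arithmetic has to move one level up (a CTE or derived table). A minimal runnable illustration with SQLite and an invented two-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (s1 INTEGER, b1 INTEGER)")
conn.execute("INSERT INTO t VALUES (10, 3), (7, 7)")

# The computed columns live in the inner query; the outer level can
# then reference them by name for the subtraction.
rows = conn.execute("""
SELECT S1, B1, S1 - B1 AS Result
FROM (SELECT s1 AS S1, b1 AS B1 FROM t) sub
""").fetchall()
print(rows)  # -> [(10, 3, 7), (7, 7, 0)]
```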
You cannot reuse the aliases this way. You can use a subquery tho: ``` SELECT Title,G ,K ,M, B1, S1, (S1 - B1) AS Result FROM (SELECT Title,G ,K ,M, CASE WHEN (tblMain.G <> 0 AND tblMain.K <> 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G AND tblCustomer.K = tblMain.K) WHEN (tblMain.G <> 0 AND tblMain.K = 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G) ELSE 0 END AS B1, CASE WHEN (tblMain.G <> 0 AND tblMain.K <> 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G AND tblCustomer.K = tblMain.K) WHEN (tblMain.G <> 0 AND tblMain.K = 0 AND tblMain.M = 0) then (select sum(price) from tblCustomer where tblCustomer.G = tblMain.G) else 0 END AS S1 FROM tblMain) TMP ```
How Use Custom Column in Select Sql Statement
[ "", "sql", "sql-server", "" ]
I've got a table in SQL holding Alarm data from an iFix SCADA system that is designed as follows: ``` CREATE TABLE [dbo].[FIXALARMS]( [ALM_ID] [int] IDENTITY(1,1) NOT NULL, [ALM_NATIVETIMEIN] [datetime] NOT NULL, [ALM_PHYSLNODE] [char](8) NOT NULL, [ALM_TAGNAME] [varchar](32) NOT NULL, [ALM_VALUE] [varchar](16) NOT NULL, [ALM_MSGTYPE] [varchar](8) NOT NULL, [ALM_DESCR] [varchar](128) NOT NULL, [ALM_ALMSTATUS] [varchar](4) NOT NULL, [ALM_ALMPRIORITY] [varchar](8) NOT NULL, [ALM_ALMAREA] [varchar](32) NOT NULL, ) ``` The SCADA designates what the columns must be named and the datatypes and gives no other options to split the data into multiple tables. Pretty much, I'll be forced to take in information in this format and it's become a pain because a lot of my queries are using multiple like's and string comparisons on data that really should be id'd. Now, I would like to normalize this table and split it into multiple tables with keyed relationships so that I can save database space, query performance, and add a bit of flexibility. Would this be best achieved by using database triggers? The database is growing at about 40Mb/Day (~300k rows) and my lack of SQL experience makes me fear adding extra load to the server when an entry is being added. Instead should I use an agent to just clean the table up every now and then? What would be some other options that I might not know about?
If you don't need the data realtime, use SSIS to set up an ETL process to transform the data to your normalized form. If you must have the data realtime, then use triggers, but use them very carefully and make sure they handle sets of data. SQL Server triggers should never be expected to handle only one row of data. You would want to make sure your trigger code is as performant as it can be, since you have high data entry on the table you are grabbing the data from. That means you need to read a good book on performance tuning techniques, so you know what query forms to avoid using, such as correlated subqueries, non-sargable where clauses, and cursors. You would want to extensively test under load for blocking issues as well.
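For the realtime/trigger route, the set-based shape of such a trigger can be sketched with SQLite (the syntax differs from SQL Server, and the schema here is a two-column toy version of FIXALARMS): on each insert into the flat table, the tag name is upserted into a lookup table and only its key is stored in the normalized table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE FIXALARMS (ALM_TAGNAME TEXT, ALM_VALUE TEXT);
CREATE TABLE tags (tag_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE alarms (tag_id INTEGER, value TEXT);

-- On every insert into the flat table, upsert the tag name and
-- store only its key in the normalized alarms table.
CREATE TRIGGER normalize_alarm AFTER INSERT ON FIXALARMS
BEGIN
    INSERT OR IGNORE INTO tags(name) VALUES (NEW.ALM_TAGNAME);
    INSERT INTO alarms(tag_id, value)
    SELECT tag_id, NEW.ALM_VALUE FROM tags WHERE name = NEW.ALM_TAGNAME;
END;
""")

conn.executemany("INSERT INTO FIXALARMS VALUES (?, ?)",
                 [("PUMP1", "HI"), ("PUMP1", "OK"), ("TANK2", "LO")])

tag_count = conn.execute("SELECT COUNT(*) FROM tags").fetchone()[0]
alarm_count = conn.execute("SELECT COUNT(*) FROM alarms").fetchone()[0]
print(tag_count, alarm_count)  # -> 2 3  (two distinct tags, three alarms)
```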
Assuming you go with a trigger, most likely it will be breaking incoming data into multiple tables, right? This is a viable approach. If the incoming data matches the schema of that table, you do a "regular" trigger. consider INSTEAD OF triggers, if incoming data schema doesn't perfectly match your schema. In any case, in your trigger you will have to generated keys during some INSERTs and re-use them in others. Some good practices are to keep your trigger business-logic free - only do RDBMS things in it. Also carefully consider how you generate and read back keys - beware of scope contamination. Lastly, consider your transaction isolation levels and keep things as quick as possible to keep your throughput in check.
SQL Trigger vs Other Options Performance
[ "", "sql", "sql-server", "t-sql", "triggers", "" ]